\section{Introduction}
Eigenvalues of various graph-theoretical matrices often reflect structural properties of graphs meaningfully \cite{cvetkovic1}. In fact, studying eigenvalues of graphs and, subsequently, characterizing graphs based on certain properties of their eigenvalues has a long-standing history, see \cite{cvetkovic1}. Also, eigenvalues have been used for characterizing graphs quantitatively, in terms of defining graph complexity measures as well as similarity measures \cite{randic_2001_2,DeEm14}. An analysis revealed that eigenvalue-based graph measures tend to be quite unique, i.e., they are able to discriminate graphs uniquely \cite{dehmer_grabner_2012_2}. Some of the studied measures even outperformed measures from the family of the so-called Molecular ID Numbers, see \cite{dehmer_grabner_2012_2}. In this short paper, we further investigate an approach from \cite{DeEm14}, where the authors explored inequalities for graph distance measures. These are based on topological indices using eigenvalues of the adjacency, Laplacian and signless Laplacian matrices. The graph distance measure is defined as
$$ d_I(G, H) = d(I(G), I(H)) = 1 - e^{- \left( \frac{I(G) - I(H)}{\sigma} \right) ^2}, $$
where $G$ and $H$ are two graphs and $I(G)$ and $I(H)$ are topological indices applied to $G$ and $H$, respectively. In this short note, we disprove three conjectures proposed in Dehmer et al. \cite{DePi19} by constructing families of counterexamples and using computer search.

\section{Main result}
Let $G$ be a simple connected graph on $n$ vertices. Let $\lambda_1$ be the largest eigenvalue of the adjacency matrix of $G$, and $q_1$ be the largest eigenvalue of the Laplacian matrix of $G$. The authors of \cite{DePi19} proposed the following conjectures and stated that it is likely that deeper results from matrix theory and from the theory of graph spectra are needed to prove them.
\begin{conjecture} Let $T$ and $T'$ be two trees on $n$ vertices. Then $$ d_{q_1} (T, T') \geq d_{\lambda_1} (T, T'). $$ \end{conjecture}
We disprove the above conjecture by providing a family of counterexamples for which
$$ 0 = d_{q_1} (T, T') < d_{\lambda_1} (T, T'), $$
or, in other words, $q_1(T) = q_1(T')$ and $\lambda_1 (T) \neq \lambda_1 (T')$. In \cite{Os13}, the author proved the following result: almost all trees have a cospectral mate with respect to the Laplacian matrix.
\begin{theorem} Given fixed rooted graphs $(G, u)$ and $(H, v)$ and an arbitrary rooted graph $(K, w)$, if $(G, u)$ and $(H, v)$ are Laplacian (signless Laplacian, normalized Laplacian, adjacency) cospectrally rooted, then $G\cdot K$ and $H \cdot K$ are cospectral with respect to the Laplacian (signless Laplacian, normalized Laplacian, adjacency) matrix. \end{theorem}
Starting from the Laplacian cospectrally rooted trees shown in Figure 1, one can construct many such pairs by choosing arbitrary rooted trees $(K, w)$.
\begin{figure}[h] \centering \includegraphics[height=3cm]{laplacian_trees.png} \caption{Rooted Laplacian cospectral trees.} \end{figure}
By direct calculation, we get that these trees are not adjacency cospectral; in particular, their adjacency spectral radii differ (2.0684 vs. 2.0743). We reran the same simulation for trees using \texttt{Nauty} \cite{Mc81} on $n = 10$ vertices as discussed in \cite{DePi19}. Based on the computer search, the smallest counterexample is on $n = 8$ vertices.
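The search itself is straightforward to reproduce. The following minimal sketch is our own illustration (the original search used \texttt{Nauty}; here we enumerate non-isomorphic trees with \texttt{networkx}, and the numerical tolerance \texttt{tol} is an implementation choice): it counts pairs of trees with equal Laplacian spectral radius but different adjacency spectral radius.
\begin{verbatim}
import itertools
import networkx as nx
import numpy as np

def spectral_radii(T):
    """Largest adjacency and Laplacian eigenvalues of a tree T."""
    lam1 = max(np.linalg.eigvalsh(nx.to_numpy_array(T)))
    q1 = max(np.linalg.eigvalsh(nx.laplacian_matrix(T).toarray()))
    return lam1, q1

def counterexamples(n, tol=1e-9):
    """Pairs with q1(T) == q1(T') but lambda1(T) != lambda1(T')."""
    spectra = [spectral_radii(T) for T in nx.nonisomorphic_trees(n)]
    for (l1, q1), (l2, q2) in itertools.combinations(spectra, 2):
        if abs(q1 - q2) < tol and abs(l1 - l2) > tol:
            yield (l1, q1), (l2, q2)

print(sum(1 for _ in counterexamples(8)))  # expected: 2, per the table below
\end{verbatim}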
\begin{center} \begin{tabular}{ |r|r|r| } \hline
$n$ & tree pairs & Conjecture 5.1 counterexamples \\ \hline
4 & 3 & 0 \\
5 & 6 & 0 \\
6 & 21 & 0 \\
7 & 66 & 0 \\
8 & 276 & 2 \\
9 & 1128 & 11 \\
10 & 5671 & 89 \\
11 & 27730 & 568 \\
12 & 152076 & 3532 \\
13 & 846951 & 21726 \\
14 & 4991220 & 138080 \\
15 & 29965411 & 877546 \\
16 & 186640860 & 5725833 \\ \hline
\end{tabular} \end{center}
Degree powers, also known as the zeroth-order Randi\'{c} index, are defined as
$$ F_k = \sum_{v \in V} \deg^k (v). $$
\begin{conjecture} Let $T$ and $T'$ be two trees on $n$ vertices. Then $$ d_{F_2} (T, T') \geq d_{q_1} (T, T'). $$ \end{conjecture}
We reran the same computer simulation and found many pairs of trees for which
$$ |F_2 (T) - F_2(T')| < |q_1(T) - q_1(T')|, $$
and consequently $d_{F_2} (T, T') < d_{q_1} (T, T')$. In particular, the smallest counterexample is on $n=6$ vertices and is shown in Figure 2: clearly $F_2(T) = F_2(T') = 20$ and
$$4.214320 = q_1(T) < q_1(T') = 4.302776.$$
This disproves the above conjecture and corrects the results from \cite{DePi19}.
\begin{center} \begin{tabular}{ |r|r|r| } \hline
$n$ & tree pairs & Conjecture 5.2 counterexamples \\ \hline
4 & 3 & 0 \\
5 & 6 & 0 \\
6 & 21 & 1 \\
7 & 66 & 5 \\
8 & 276 & 28 \\
9 & 1128 & 117 \\
10 & 5671 & 577 \\
11 & 27730 & 2672 \\
12 & 152076 & 13805 \\
13 & 846951 & 72801 \\
14 & 4991220 & 405454 \\
15 & 29965411 & 2312368 \\
16 & 186640860 & 13713949 \\ \hline
\end{tabular} \end{center}
\begin{figure}[h] \centering \includegraphics[height=3cm]{counterexamples.png} \caption{Two trees with $F_2(T) = F_2(T')$ and $q_1(T) \neq q_1 (T')$.} \end{figure}
To conclude, these results disprove Conjectures 5.1 and 5.2 from \cite{DePi19}, while Conjecture 5.3, on the relationship between $\lambda_1$ and $F_2$ \cite{DePi19}, directly follows from these.
\section{Acknowledgement}
Matthias Dehmer thanks the Austrian Science Fund (FWF) for financial support (P 30031).
\section{Introduction}
Free-boundary problems are known for their high mathematical complexity, both in terms of analysis and numerical modelling. The understanding and tracking of the free boundary adds extra difficulties to the already challenging distributed parameter systems, and demands the use of a wide range of sophisticated mathematical tools \cite{friedman2010variational,baiocchivariational}. A particular class of free-boundary problems are those where a threshold behavior takes place. This means that there is a certain physical behavior if a quantity of interest remains below a given threshold value, and a different behavior if this quantity exceeds that limit. There exists a significant variety of phenomena that fit into this category. Among others, we mention viscoplastic fluids, plate deformation, frictional contact mechanics and elastoplasticity. In many circumstances, the previously described behavior can be modeled as a non-differentiable energy minimization problem of the form:
\begin{equation} \label{eq: intro: energy minimization} \min_{u \in V} ~E(u)=\frac{1}{2} \dual{Au}{u}+ j(u) - \dual{f}{u}, \end{equation}
where $V$ is a reflexive Banach space such that $V \hookrightarrow L^2(\Omega) \hookrightarrow V'$ with compact and continuous embedding, $A: V \to V'$ is a linear elliptic operator, $\dual{\cdot}{\cdot}$ stands for the duality product between $V'$ and $V$, $f \in V'$, and $j(\cdot)$ is a convex functional of the form
$$j(v)= \beta \int_{\S} |K v| ~ds, \qquad \text{with } \beta >0,$$
where $K$ stands for a linear operator, $|\cdot|$ for the Euclidean norm and $\S \subset \Omega \subset \mathbb R^d$ is the subdomain or boundary part where the nonsmooth behaviour occurs. The positive constant $\beta$ stands for the threshold coefficient, the key value for the determination of the free boundary of the problem. Thanks to the convexity of the energy cost functional, a necessary and sufficient optimality condition for problem \eqref{eq: intro: energy minimization} is given by what is known as a \emph{variational inequality of the second kind}: Find $u \in V$ such that
\begin{equation} \label{eq:VI} \dual{Au}{v-u} + \beta \int_{\S} |K v| ~ds - \beta \int_{\S} |K u| ~ds \geq \dual{f}{v-u}, \text{ for all } v \in V. \end{equation}
Available analytical results for \eqref{eq:VI} comprise existence and uniqueness of solutions \cite{DuvautLions1976}, extra regularity of solutions \cite{Brezis1971} and, in some cases, geometric studies of the free boundary \cite{MoMia}. Moreover, analytical results for the dynamical counterpart of \eqref{eq:VI} have also been obtained, including well-posedness and long-time behaviour of solutions \cite{DuvautLions1976}. In practice, the application of models such as \eqref{eq:VI} requires the knowledge of the different parameters involved, especially the yield coefficient $\beta$. For estimating these quantities, a classical least-squares fitting functional with the variational inequality \eqref{eq:VI} as constraint is usually proposed. Moreover, in some situations it is of interest not only to know the physical behaviour of the system, but also to be able to act on it to achieve some predetermined objective. These types of control problems generally involve the minimization of a functional subject to the variational inequality.
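To fix ideas, the following minimal sketch (our own illustration, not taken from the cited references) solves a one-dimensional instance of \eqref{eq:VI} with $A = -\mathrm{d}^2/\mathrm{d}x^2$ and homogeneous Dirichlet conditions on $\Omega = (0,1)$, $K = \mathrm{Id}$ and $\S = \Omega$, discretized by finite differences. The nonsmooth term is replaced by the smooth approximation $\phi_\gamma(x) = \beta\sqrt{x^2+\gamma^{-2}}$, of the type discussed in Section \ref{sec: optimal}, and the regularized equation is solved by Newton's method; all numerical values are placeholders, and a damped Newton step may be needed for much larger $\gamma$.
\begin{verbatim}
import numpy as np

# Regularized optimality condition A u + phi_gamma'(u) = f, with
# phi_gamma(x) = beta * sqrt(x**2 + gamma**-2), solved by Newton.
n, beta, gamma = 199, 1.0, 1e3
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = 10.0 * np.sin(2.0 * np.pi * x)        # placeholder right-hand side

u = np.zeros(n)
for it in range(100):
    r = np.sqrt(u**2 + gamma**-2)
    F = A @ u + beta * u / r - f          # residual; phi_gamma'(u) = beta*u/r
    J = A + beta * np.diag(gamma**-2 / r**3)   # phi_gamma''(u) on the diagonal
    du = np.linalg.solve(J, -F)
    u += du
    if np.linalg.norm(du) < 1e-12:
        break

print(it, np.abs(u).max())                # iterations used and solution size
\end{verbatim}
Note that the discrete multiplier $q = f - Au = \phi_\gamma'(u)$ satisfies $|q| \leq \beta$ by construction, mirroring the dual bound that appears in the primal-dual systems of Section \ref{sec: optimal}.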
Both the inverse parameter estimation problem and the optimal control problem present serious analytical and numerical difficulties, which will be discussed in this paper by means of a model optimal control problem. The outline of the manuscript is as follows. In Section 2 some relevant application examples are reviewed and some special properties of the problems are highlighted. In Section 3 the optimization results available for these types of systems are thoroughly reviewed. Finally, in Section 4 some challenges and future perspectives are discussed.

\section{Applications}
\subsection{Viscoplastic fluid flow}
Viscoplastic fluids are characterized by the existence of a stress threshold that determines a dual behavior of the material. If the stress is below this threshold the material behaves as a rigid solid, while above this limit the behavior is that of a viscous fluid \cite{Bonn,huilgol2015fluid}. These types of materials were investigated in the late nineteenth and early twentieth centuries by several prominent fluid mechanicians (see \cite{huilgol2015fluid} for more details). A first mathematical model was proposed by Eugene Bingham in 1922 to describe the behavior of certain suspensions. Such materials are now known precisely as Bingham fluids, in honor of the founder of rheology. The classical steady-state Bingham model considers the fluid dynamics equations with a non-differentiable constitutive relation for the Cauchy stress tensor:
\begin{subequations} \label{eq: Bingham model} \begin{align} &-\textrm{Div}\,\sigma+(u\cdot \nabla) u +\nabla p= f &&\textrm{ in }\Omega, \label{eq: Bingham model1}\\ & \textrm{div}\,u =0 &&\textrm{ in }\Omega,\\ & \sigma =2 \mu \mathcal{E} (u) + \beta \frac{\mathcal{E}(u)}{|\mathcal{E} (u) |}, && \textrm{ if } \mathcal{E}(u)\neq 0, \\ &|\sigma| \leq \beta,\hspace{0.2cm}&& \textrm{ if }\mathcal{E}(u)=0, \end{align} \end{subequations}
where $\mu >0$ stands for the viscosity coefficient, $\beta >0$ for the plasticity threshold (yield stress), $f$ for the body force and Div is the row-wise divergence operator. The deviatoric part of the Cauchy stress tensor is denoted by $\sigma$ and $\mathcal{E}$ stands for the rate of strain tensor, defined by $\mathcal{E}(u):=\frac{1}{2}\left( \nabla u +\nabla u^T \right).$ The model has to be endowed with suitable boundary conditions. The first equation in \eqref{eq: Bingham model} corresponds to the conservation of momentum, while the second one corresponds to the incompressibility condition for the fluid. As can be observed from the third equation, the stress tensor is fully characterized at spatial points where the rate of strain tensor is different from zero. Where that is not the case, the material cannot be described as a fluid, and the areas in which this happens are precisely known as the rigid zones (see Figure \ref{fig:driven cavity}).
\begin{figure}[H] \begin{center} \includegraphics[height=6.2cm,width=7cm]{inactivos_cav.png} \caption{Steady state of a viscoplastic Bingham fluid in a wall driven cavity: Rigid zones (light gray), fluid region (black).} \label{fig:driven cavity} \end{center} \end{figure}
In the 1960s the development of functional analytic tools made it possible to study in depth a variational model for these types of materials.
The resulting inequality problem consists in finding $u \in V$ such that
\begin{equation} \label{eq:probvarinvec} \dual{Au}{v-u} + c(u,u,{v}- u)+ j(v)- j (u)\geq \langle f,v-u\rangle,\,\textrm{for all $v\in V$}, \end{equation}
where
\begin{align*} & \dual{Au}{v}:= 2\mu\int_{\Omega}(\mathcal{E}(u):\mathcal{E}{(v)})~dx, && j({v}) :=\sqrt{2} \beta \int_{\Omega}{| \mathcal{E}{(v)} |} ~dx,\\ & c(u,{v},{w}) := \int_{\Omega}{w^T \left( (u\cdot \nabla)\,{v}\right)} ~dx, && \end{align*}
$(C : D)=\textrm{tr}(CD^T)$, with $C,D \in \mathbb R^{d \times d}$, stands for the Frobenius scalar product and $V$ is a suitable solenoidal function space. This reformulation enabled the study of existence, uniqueness and regularity of solutions \cite{MoMia,DuvautLions1972,FuchsSeregin}. In particular, if the convective term $(u \cdot \nabla)u$ is neglected in \eqref{eq: Bingham model1}, the resulting variational model corresponds to the necessary and sufficient optimality condition of a convex energy minimization problem as in \eqref{eq:VI}. The numerical solution of the Bingham model has also been widely investigated. On the one hand, several discretization schemes (finite differences, finite elements, etc.) with the corresponding approximation results have been considered \cite{Glowinski,MuravlevaOlshanskii2009}. On the other hand, numerical algorithms for coping with the nonsmoothness of the underlying model have been devised. In this context we mention augmented Lagrangian methods \cite{Glowinski,AposporidisEtAl2010}, dual based algorithms (Uzawa, ISTA, FISTA, etc.) \cite{Glowinski,TRESKATIS2016115} and semismooth Newton methods \cite{dlRG2,ItoKunischBook}. In addition to Bingham viscoplastic fluids, depending on the constitutive relation between the shear rate and the shear stress, the fluid at hand can be classified as shear-thickening or shear-thinning (see Figure \ref{fig: constitutive}). Important applications of these constitutive laws occur, for instance, in the food industry and in geophysical flows \cite{CiarletGlowinski2010}.
\begin{figure}[H] \begin{center} \begin{tikzpicture} \begin{axis}[ height=6cm, width=8cm, xmin=0, xmax=2., xlabel={shear rate}, ylabel={shear stress}, legend style={at={(0.5,0.95)}}, ] \addplot[gray,domain=0:2]{x +1}; \addplot[red,domain=0:2]{1+(abs(x)^(2.5-2))*x}; \addplot[blue,domain=0:2]{1+(abs(x)^(1.75-2))*x}; \legend{{\scriptsize Bingham fluid}, {\scriptsize Shear thickening}, {\scriptsize Shear thinning}} \end{axis} \end{tikzpicture} \caption{Constitutive laws for different kinds of viscoplastic fluids.} \label{fig: constitutive} \end{center} \end{figure}
\subsection{Geophysical fluids}
A phenomenon of importance in the context of geophysics is the flow of volcanic lava. In this case the goal is to be able to numerically simulate the flow in the greatest detail, in order to predict and mitigate possible disasters (see, e.g., \cite{Fink}). As for optimization, rather than controlling the flow, the goal is to estimate the constitutive rheological parameter as accurately as possible and, from that, to determine the explosiveness of a certain volcano. The lava model, in addition to the complicated rheology, must be coupled with a heat transfer model that is responsible for changes in both the viscous and rheological behavior of the material.
The complete model to be solved in three dimensions, in a moving domain $\Omega_t$, is given by:
\begin{subequations} \label{eq: lava full model} \begin{align} &\rho \frac{\partial u}{\partial t} -\textrm{Div}\,\sigma+ \rho (u\cdot \nabla) u +\nabla p= \rho f &&\textrm{ in }\Omega_t \times (0,T),\\ & \rho \frac{\partial e}{\partial t}+ \rho (u\cdot \nabla)e= k \Delta w+ \sum_{i,j} \sigma_{ij} \frac{\partial u_i}{\partial x_j} &&\textrm{ in }\Omega_t \times (0,T),\\ & \textrm{div}\,u =0 &&\textrm{ in }\Omega_t \times (0,T),\\ & \sigma =2 \mu \mathcal{E} (u) +\beta \frac{\mathcal{E}(u)}{|\mathcal{E} (u) |}, && \textrm{ if } \mathcal{E}(u)\neq 0, \\ &|\sigma| \leq \beta,\hspace{0.2cm}&& \textrm{ if }\mathcal{E}(u)=0, \end{align} \end{subequations}
complemented with suitable initial and boundary conditions. In addition to the previous notation, $\rho$ stands for the fluid density, $e$ for the enthalpy function, $w$ is the temperature, $k$ is the thermal conductivity and $f$ is the gravity force. In the simplest case, the enthalpy is given by the product of a constant specific heat and the temperature (see, e.g., \cite{costa2005computational} for further details). In general, little analytical and numerical investigation has been carried out for model \eqref{eq: lava full model}. Due to its high complexity, questions about existence and uniqueness, solution regularity and properties of the solution operator are still open. Moreover, its numerical treatment poses several challenges concerning discretization, solution algorithms, tracking of the interface and computational efficiency. A common approach among computational geophysicists to deal with \eqref{eq: lava full model} consists in using approximations that reduce the dimensionality of the problem and allow one to solve it computationally in a more straightforward way. A very popular technique is the shallow water approximation \cite{bernabeu2016modelling}, which enables a faster numerical solution of the problem without dismissing the topography of the terrain, a crucial variable in this type of phenomena. The price to pay for this simplification is a system of hyperbolic conservation laws, which demands sophisticated techniques for its analysis and numerical solution.
\subsection{Elastoplasticity}
The transition from an elastic to a plastic regime in a solid body subject to different loads is a natural candidate phenomenon to be modeled with variational inequalities, since this transition occurs precisely when the stress exceeds a certain threshold. One of the interesting mathematical properties of these types of phenomena is that they can be variationally formulated in terms of their primal or their dual variables \cite{temamplast,han2012plasticity}. In the first case, a variational inequality of the second kind is obtained, while in the other, an obstacle-like inequality is derived. This fact has been widely exploited in the analysis and numerical simulation of these types of problems. Under the assumption of small strains, a quasi-static linear kinematic hardening model with a von Mises yield condition in its \emph{primal formulation} is given in the following way. Consider a solid body $\Omega \subset \mathbb R^3$ that is clamped on a nonvanishing Dirichlet part $\Gamma_D$ of its boundary $\Gamma$ and is subject to boundary loads on the remaining Neumann part $\Gamma_F$.
The variables of the problem are the \emph{displacement} $u \in V:=H_{\Gamma_D}^1(\Omega;\mathbb R^3)$ and the \emph{plastic strain} $q \in Q :=\{ p \in L^2(\Omega; \mathbb R^{3 \times 3}_{sym}): \text{trace}(p)=0 \}$, with $\mathbb{R}^{3 \times 3}_{sym}$ the space of symmetric matrices. The problem consists in finding $W=(u,p)$ which satisfies
\begin{equation*} \dual{A W}{Y-W}_{Z',Z} + j(q)-j(p) \geq \dual{f}{v-u}, \text{ for all } Y=(v,q) \in Z=V \times Q, \end{equation*}
where
\begin{align*} &\dual{A W}{Y}_{Z',Z}= \int_\Omega \left( (\mathcal{E}(u)-p): \mathbb C\,(\mathcal{E}(v)-q) \right) ~dx + \int_\Omega (p: \mathbb H q) ~dx,\\ &j(p)= \beta \int_\Omega |p| ~dx,\\ &\dual{f}{v}= \int_\Omega l \cdot v ~dx + \int_{\Gamma_F} g \cdot v ~ds, \end{align*}
$\mathbb C$ represents the material's fourth-order elasticity tensor and $\mathbb H$ is the hardening modulus. The constant $\beta > 0$ denotes the material's yield stress and the data $l$ and $g$ are the volume and boundary loads, respectively. If the temperature is also taken into account, the phenomenon gains in complexity and its modeling is possible only through the use of primal variables, that is, through variational inequalities of the second kind \cite{ottosen2005mechanics}. At present, this phenomenon is intensively investigated in terms of the analysis of solutions \cite{bartels2008thermoviscoplasticity,chelminski2006mathematical} and their numerical approximation \cite{bartels2013numerical}. Optimal control problems in elastoplasticity have also been studied in recent years, yielding optimality conditions for their primal and dual variants \cite{herzog2012c,HerzogMeyerWachsmuth,dlRHM13}, as well as numerical algorithms for solving the problem \cite{herzog2014optimal}. The thermoviscoplastic case is also currently being addressed, with very challenging and promising perspectives \cite{herzog2015existence}.
\subsection{Contact mechanics}
One of the most emblematic problems modeled by variational inequalities occurs in contact mechanics and is known as Signorini's problem. It consists in determining the deformation of an elastic body subject to external forces and in contact with an obstacle. This contact induces forces in the normal direction to the contact surface, modeled by variational inequalities of the first kind, and tangential forces (friction) along the contact region, usually modeled by inequalities of the second kind. This combination of phenomena, however, occurs in a non-linear fashion, leading to a more complicated category of inequalities known as quasi-variational. Contact problems have been widely addressed in the literature. The first works of Heinrich Hertz in the nineteenth century were followed by several contributions on the modelling of the problems, the analysis of existence and uniqueness of solutions \cite{EkeTemam,Stampacchia/Kinderlehrer,SofoneaMatei2009}, the regularity of solutions and of the free boundary \cite{Stampacchia/Kinderlehrer,caffarelli2005geometric}, and the numerical approximation and solution of the models \cite{Glowinski,KikuchiOden,wriggers2006computational}. A frequently used version of Signorini's contact problem is the one with the so-called Coulomb friction law. For its formulation, let us consider $\Omega \subset \mathbb R^d$, $d=2,3,$ a bounded domain with regular boundary $\Gamma$. The boundary can be further divided into three non-intersecting components $\Gamma=\Gamma_D \uplus \Gamma_F \uplus \Gamma_C$, corresponding to the Dirichlet, Neumann and contact boundary sectors, respectively.
The friction forces intervene only on the boundary sector where contact with the rigid foundation takes place. The problem consists in finding a displacement vector $u$ that solves the following system:
\begin{subequations} \label{eq: Signorini with Coulomb} \begin{align} &- \textrm{Div} \, \sigma =f_1 &&\text{ in }\Omega,\\ & u =0 &&\text{ on }\Gamma_D,\\ & \sigma_N (u) =t &&\text{ on }\Gamma_F,\\ & u_N \leq g, \quad \sigma_N (u) \leq 0, \quad (u_N-g) \, \sigma_N (u)=0 &&\textrm{ on }\Gamma_C,\\ & \sigma_T (u)= - \beta(u) \frac{u_T}{|u_T |}, && \textrm{ on } \{ x \in \Gamma_C: u_T \neq 0 \}, \label{eq:general friction law 1}\\ &|\sigma_T (u)| \leq \beta(u), && \textrm{ on } \{ x \in \Gamma_C: u_T = 0 \},\label{eq:general friction law 2} \end{align} \end{subequations}
where $g$ denotes the gap between the bodies and $N$ and $T$ stand for the unit outward normal and unit tangential vector, respectively. The notation $u_N$ and $u_T$ stands for the product $u \cdot N$ and $u \cdot T$, respectively. The stress-strain relation for a linear elastic material is given by Hooke's law:
\begin{equation} \sigma= 2 \mu \mathcal E(u) + \lambda ~\text{tr}(\mathcal E(u))\, I, \end{equation}
where $\mathcal E(u):=\frac{1}{2}\left( \nabla u +\nabla u^T \right)$ stands for the linearized strain tensor, $I$ is the identity tensor and $\lambda>0$ and $\mu >0$ are the Lam\'e parameters. If the friction effect is neglected, the resulting model may be formulated as a variational inequality of the first kind: Find $u \in \mathcal K := \{ v \in H^1_{\Gamma_D}(\Omega): v_N \leq g \text{ a.e. on } \Gamma_C \}$ such that
\begin{equation} \label{eq: contact VI first} \dual{Au}{v-u} \geq \dual{f}{v-u}, \text{ for all }v \in \mathcal K, \end{equation}
where
\begin{equation*} \dual{Au}{v}:= \int_{\Omega}(\sigma :\mathcal{E}(v))~dx, \quad \quad \dual{f}{v} :=\int_{\Omega}{f_1 \cdot v}~dx+ \int_{\Gamma_F} t \cdot v ~ds \end{equation*}
and $H^1_{\Gamma_D}(\Omega):= \{ v \in H^1(\Omega): v=0 \text{ on }\Gamma_D \}$. If, on the contrary, the contact surface is assumed to be known and only the nonsmoothness due to friction is taken into account, the phenomenon is modeled by a variational inequality of the second kind:
\begin{equation} \label{eq: contact VI second} \dual{Au}{v-u} +\beta \int_{\Gamma_C}|v| ~ds- \beta \int_{\Gamma_C}{|u|}~ds \geq \dual{f}{v-u}, \text{ for all }v \in V:= H^1_{\Gamma_D}(\Omega). \end{equation}
Optimal control and inverse problems in contact mechanics have been addressed in, e.g., \cite{BergouniouxMignot2000,BeremlijskiEtAl2002,BermudezSaguez1987,jarusekoutrata2007,betz2015optimal}, generally using the simplified versions \eqref{eq: contact VI first} or \eqref{eq: contact VI second} of Signorini's problem. Optimality conditions of varying sharpness are currently available for these types of problems, derived using techniques similar to those that will be explained in the forthcoming Section \ref{sec: optimal}. In the case of the complete model the problem is still fairly open. The complexity of the combined nonlinearities and nonsmoothness makes the analysis extremely intricate. Some initial attempts to deal with such structures have been carried out in \cite{dietrich2001optimal}.
\section{Optimal control} \label{sec: optimal}
The development of optimal control theory is closely linked to the space race in the twentieth century.
The moon landing problem is a classical application example of this theory, where the trajectory, velocity and acceleration of a space vehicle had to be optimally determined in each instant of time to achieve a desired goal. The celebrated Pontryagin maximum principle \cite{pontryagin1987mathematical} was, within this framework, a milestone that made it possible to actually solve the resulting control problems. The extension of the theory to models with partial differential equations started to take place in the 1960s. The main goal in this case was to extend and develop techniques to cope with cases where the spatial variable also plays a crucial role, which occurs, for instance, in diffusion processes. At present there are important established techniques for the mathematical analysis and numerical solution of such PDE control problems \cite{Li1,troltzsch2010optimal,de2015numerical}. Further, optimal control problems governed by variational inequalities are related to the design of mechanisms to act on the dynamics of a nonsmooth distributed parameter system in order to guide it towards some desired target. Such problems have been considered since the 1970s, with renewed interest in the field in recent years due to the wide applicability of the results. In addition to control, some relevant and related problems take place when trying to estimate different coefficients involved in the distributed parameter system. In the case of the variational inequalities under consideration, estimating the threshold coefficient by solving the resulting inverse problem appears to be of high relevance for the understanding of the material at hand. In this section we present the main up-to-date results related to the optimal control of variational inequalities of the second kind by means of the following model problem: Find an optimal control $f \in L^2(\Omega)$ and a corresponding state $u \in V$ solving
\begin{equation} \label{eq: optimal control problem} \tag{$\mathcal P$} \left\{ \begin{array}{ll} \min \limits_{(u,f) \in V \times L^2(\Omega)} ~J(u,f)=\frac{1}{2} \int_\Omega |u-z_d|^2 ~dx + \frac{\alpha}{2} \int_\Omega |f|^2 ~dx \vspace{0.3cm}\\ \text{subject to: }\\[3pt] \quad \dual{Au}{v-u}+\beta \int_\Omega |v|~dx -\beta \int_\Omega |u|~dx \geq \dual{f}{v -u}, &\text{ for all } v \in V, \end{array} \right. \end{equation}
where $\alpha >0$, $\Omega \subset \mathbb R^d$ is a bounded domain and $z_d \in L^2(\Omega)$. We recall that $V$ is assumed to be a reflexive Banach space such that $V \hookrightarrow L^2(\Omega) \hookrightarrow V'$ with compact and continuous embedding, $A: V \to V'$ is a linear elliptic operator and $\dual{\cdot}{\cdot}$ stands for the duality product between $V'$ and $V$. Although more general cost functionals may be considered as well, we restrict our attention here to the tracking-type functional, which is the most intuitive first choice. Using the direct method of the calculus of variations \cite{de2015numerical}, it can be proved that there exists a unique solution to the lower-level problem in \eqref{eq: optimal control problem}. Moreover, by duality arguments \cite{EkeTemam}, there exists a dual multiplier $q \in L^{\infty}(\Omega)$ such that the following primal-dual system holds:
\begin{equation} \begin{array}{ll} \dual{Au}{v}+\int_\Omega q \, v~dx = \dual{f}{v}, &\text{ for all } v \in V, \\ [3pt] q(x) u(x) = \beta |u(x)|, &\text{ a.e. in } \Omega,\\ [3pt] |q(x)| \leq \beta, &\text{ a.e. in } \Omega.
\end{array} \end{equation}
The presence of the dual variable is not only important theoretically, but also numerically, since it gives rise to important dual and primal-dual solution algorithms \cite{Glowinski}. With the help of the primal and dual variables we may define the active, inactive and biactive sets for the problem as follows:
\begin{align*} \mathcal A &= \{ x\in \Omega: u(x) = 0\} && \text{(active set)},\\ \mathcal I &= \{ x\in \Omega: u(x) \neq 0\} && \text{(inactive set)},\\ \mathcal B &= \{ x\in \Omega: u(x) = 0 \, \land \, |q(x)|=\beta \} && \text{(biactive set)} . \end{align*}
Next we will show that there exists a global optimal solution to problem \eqref{eq: optimal control problem}, which is, however, not necessarily unique. Although in practice it may only be possible to compute local rather than global minima, the next global existence result constitutes the first step towards the successful analysis of the optimal control problem at hand.
\begin{theorem} Problem \eqref{eq: optimal control problem} has at least one optimal solution. \end{theorem}
\begin{proof} Since the cost functional is bounded from below, there exists a minimizing sequence $\{ (u_n,f_n) \}$, i.e., $J(u_n,f_n) \to \inf_{f} J(u(f),f),$ where $u_n$ stands for the unique solution to
\begin{equation} \label{eq:NS-VI-3} \dual{Au_n}{v-u_n}+\beta \int_\Omega |v|~dx -\beta \int_\Omega |u_n|~dx \geq \dual{f_n}{v-u_n}, \text{ for all } v \in V. \end{equation}
From the structure of the cost functional it also follows that $\{ f_n \}$ is bounded in $L^2(\Omega)$ and, thanks to \eqref{eq:NS-VI-3}, also $\{u_n\}$ is bounded in $V$. Consequently, there exists a subsequence (denoted in the same way) such that
\begin{equation*} f_n \rightharpoonup \hat f \text{ weakly in } L^2(\Omega) \hspace{0.5cm}\text{ and } \hspace{0.5cm} u_n \rightharpoonup \hat u \text{ weakly in }V. \end{equation*}
Due to the compact embedding $L^2(\Omega) \hookrightarrow V'$ it then follows that
\begin{equation*} u_n \to \hat u \text{ strongly in } V'. \end{equation*}
From \eqref{eq:NS-VI-3} we directly obtain that
\begin{equation*} \dual{Au_n}{u_n}-\dual{Au_n}{v} +j(u_n)-j(v) - \langle f_n ,u_n- v \rangle \leq 0, ~\forall v \in V. \end{equation*}
Thanks to the convexity and continuity of $\dual{A \cdot}{\cdot}$ and $j(\cdot)$ we may take the limit inferior in the previous inequality and obtain that
\begin{equation} \dual{A \hat u}{\hat u}-\dual{A \hat u}{v}+ j(\hat u)-j(v) - \langle \hat f ,\hat u- v \rangle \leq 0, ~\forall v \in V, \end{equation}
which implies that $\hat u$ solves the lower-level problem with $\hat f$ on the right-hand side. Thanks to the weak lower semicontinuity of the cost functional we finally obtain that
\begin{equation*} J(\hat u,\hat f) \leq \liminf_{n \to \infty} J(u(f_n),f_n) = \inf_{f} J(u(f),f), \end{equation*}
which implies the result. \end{proof}
Once the existence of optimal solutions is guaranteed, the next step consists in characterizing local optima by means of first order optimality conditions, also called optimality systems.
\subsection{Optimality systems}
As in finite dimensions, it is in general not possible to verify standard constraint qualification conditions for infinite-dimensional nonsmooth optimization problems like \eqref{eq: optimal control problem}. Consequently, in order to get a Karush-Kuhn-Tucker optimality system, alternative techniques have to be devised.
One of the possibilities to derive optimality conditions for problem \eqref{eq: optimal control problem} consists in regularizing the non-differentiable term, getting rid of the nonsmoothness. In \cite{Barbu1984}, for instance, a general regularization procedure is presented, where the functional $j(\cdot)$ is replaced by a smooth approximation of it. Specifically, global regularizations like
$$\phi_{\gamma}(x):= \beta \sqrt{|x|^2+ \gamma^{-2}} \quad \text{ or } \quad \phi_{\gamma}(x):=\frac{\beta}{\gamma + 1} \left(\gamma |x| \right)^{\frac{\gamma+1}{\gamma}},$$
with $\gamma >0$, are frequently used. The resulting regularized control problems can be analyzed using PDE-constrained optimization techniques, yielding the following first order optimality system of Karush-Kuhn-Tucker type:
\begin{subequations} \label{eq: reg. OS} \begin{align} & \dual{Au_\gamma}{v}+\int_\Omega q_\gamma \, v~dx = \dual{f_\gamma}{v},&& \text{ for all } v \in V, \label{eq: reg. OS1}\\ & q_\gamma(x) = \phi_\gamma'(u_\gamma(x)) && \text{ a.e. in }\Omega, \label{eq: reg. OS2}\\ & \dual{A^* p_\gamma}{v}+\int_\Omega \phi_\gamma''(u_\gamma)^*p_\gamma \, v~dx = \int_\Omega (u_\gamma-z_d) \,v ~dx,&& \text{ for all } v \in V,\label{eq: reg. OS3}\\ & \alpha f_\gamma+p_\gamma = 0&& \text{ a.e. in }\Omega. \label{eq: reg. OS4} \end{align} \end{subequations}
Concerning the consistency of the regularization, two types of results are usually proved. The first one guarantees that the family of regularized controls $\{ f_\gamma \}_{\gamma >0}$ contains a weakly convergent subsequence whose limit solves problem \eqref{eq: optimal control problem}. The other type of consistency result ensures that, given a local optimal control which satisfies a quadratic growth condition, there is a family of regularized controls that approximates it. Once the consistency has been analyzed, an optimality system for the original problem can be obtained by passing to the limit in system \eqref{eq: reg. OS} (see, e.g., \cite{Barbu1984,BonnansTiba1991}).
\begin{theorem} Let $f \in L^2(\Omega)$ be a local optimal solution of \eqref{eq: optimal control problem} and $u \in V$ its corresponding state. Let $\{ f_\gamma \}_{\gamma >0}$ be a sequence of regularized optimal controls such that $f_\gamma \rightharpoonup f$ weakly in $L^2(\Omega)$, as $\gamma \to \infty$. Then there exist multipliers $p \in V$ and $\xi \in V'$ such that the following system holds:
\begin{subequations} \label{eq: OSweak} \begin{align} & \dual{Au}{v}+\int_\Omega q \, v~dx = \dual{f}{v},&& \text{ for all } v \in V, \label{eq: OSweak1}\\ & q(x) u(x) =\beta | u(x)| && \text{ a.e. in }\Omega, \label{eq: OSweak2}\\ & |q(x)| \leq \beta && \text{ a.e. in }\Omega, \label{eq: OSweak3}\\ & \dual{A^* p}{v}+\langle \xi, v \rangle = \int_\Omega (u-z_d) \,v ~dx,&& \text{ for all } v \in V, \label{eq: OSweak4}\\ & \alpha f+p=0&& \text{ a.e. in }\Omega. \label{eq: OSweak5} \end{align} \end{subequations}
\end{theorem}
\begin{proof} Equations \eqref{eq: OSweak1}-\eqref{eq: OSweak2} are obtained directly from the continuity of the regularized solution operator.
Testing \eqref{eq: reg. OS3} with $v=p_\gamma$ yields
\begin{equation} \label{eq:adjoint eq multiplied by p} \dual{A^* p_\gamma}{p_\gamma}+\int_\Omega p_\gamma \, \phi_\gamma''(u_\gamma)^* p_\gamma~dx = \int_\Omega (u_\gamma-z_d) \,p_\gamma ~dx. \end{equation}
Thanks to the convexity of the regularizing function $\phi_\gamma$, it follows that
$$\int_\Omega p_\gamma \phi_\gamma''(u_\gamma)^* p_\gamma~dx \geq 0,$$
which, together with the ellipticity of the operator $A$ and the boundedness of the sequence $\{u_\gamma\}_{\gamma >0}$, implies that
\begin{equation} \label{eq:bound of reg. seq. of adjoints} \|p_\gamma \|_V \leq C_p, \quad \text{ for all }\gamma>0, \end{equation}
i.e., the sequence $\{ p_\gamma \}_{\gamma >0}$ is bounded in $V$ and there exists a subsequence (denoted in the same way) and a limit $p \in V$ such that
\begin{align*} p_\gamma \rightharpoonup p \text{ weakly in }V \qquad \text{and} \qquad A p_\gamma \rightharpoonup A p \text{ weakly in }V'. \end{align*}
From the latter and the boundedness of $\{u_\gamma\}_{\gamma >0}$, we obtain that $\{ \phi_\gamma''(u_\gamma)^*p_\gamma \}_{\gamma >0}$ is bounded in $V'$. Consequently there exists a subsequence (denoted in the same way) and a limit $\xi \in V'$ such that
$$\phi_\gamma''(u_\gamma)^*p_\gamma \rightharpoonup \xi \text{ weakly in } V'.$$
Passing to the limit in \eqref{eq: reg. OS3} and \eqref{eq: reg. OS4} then yields the result. \end{proof}
Although system \eqref{eq: OSweak} includes equation \eqref{eq: OSweak4} for the adjoint state $p$, it does not characterize its behavior in relation to the state $u$, the dual multiplier $q$ or the additional multiplier $\xi$. This is a main drawback which makes the characterization incomplete, i.e., there may exist several solutions of system \eqref{eq: OSweak} that do not correspond to stationary points of the optimal control problem. Through the use of tailored local regularizations, more detailed optimality systems can be obtained \cite{Delosreyes2009}. In particular, relations between the quantities $u,~q, ~p$ and $\xi$ are obtained within the optimality condition, resembling what is known as Clarke stationarity in finite-dimensional nonsmooth optimization \cite{sun2006optimization}. Such tailored regularizations seek to locally approximate the generalized derivative of the Euclidean norm; that is, the regularized function coincides exactly with the generalized derivative, except in a neighborhood of the non-differentiable elements. In particular, the following smoothing function
\begin{equation} \phi_{\gamma}'(x)= \begin{cases} \beta \frac{x}{|x|} &\text{ if }~\gamma |x| \geq \beta + \frac{1}{2\gamma},\\ \frac{x}{|x|} (\beta- \frac{\gamma}{2} (\beta- \gamma |x|+\frac{1}{2\gamma})^2) &\text{ if }~\beta-\frac{1}{2\gamma}\leq \gamma |x| \leq \beta+\frac{1}{2\gamma},\\ \gamma x &\text{ if }~\gamma |x| \leq \beta-\frac{1}{2\gamma}, \end{cases} \end{equation}
for $\gamma$ sufficiently large, has been proposed, yielding the following result \cite{Delosreyes2009}.
\begin{theorem} Let $f \in L^2(\Omega)$ be a local optimal solution of \eqref{eq: optimal control problem} and $u \in V$ its corresponding state. Let $\{ f_\gamma \}_{\gamma >0}$ be a sequence of (locally) regularized optimal controls such that $f_\gamma \rightharpoonup f$ weakly in $L^2(\Omega)$, as $\gamma \to \infty$.
Then there exist multipliers $p \in V$ and $\xi \in V'$ such that the following system holds:
\begin{subequations} \label{eq: OS with local regularization} \begin{align} & \dual{Au}{v}+ \int_\Omega q \, v~dx = \dual{f}{v},&& \text{ for all } v \in V,\\ & q(x) u(x) =\beta | u(x)| && \text{ a.e. in }\Omega,\\ & |q(x)| \leq \beta && \text{ a.e. in }\Omega,\\ & \dual{A^* p}{v}+\langle \xi, v \rangle = \int_\Omega (u-z_d) \,v ~dx,&& \text{ for all } v \in V,\\ & \alpha f+p=0&& \text{ a.e. in }\Omega, \end{align} and, additionally, \begin{align} \label{eq: compl in OS with local reg} & p(x)=0 && \text{ a.e. in }\mathcal I:=\{ x: |q(x)| < \beta \},\\ & \dual{\xi}{p} \geq 0,\\ & \dual{\xi}{u}=0. \end{align} \end{subequations}
\end{theorem}
In addition to the complementarity relations, system \eqref{eq: OS with local regularization} has the main advantage that it can be derived for different types of controls (distributed, boundary, coefficients) and in the presence of additional control, mixed or state constraints. Alternatively, in order to derive a stronger optimality system, the nonsmooth properties of the control-to-state operator, including some sort of differentiability, have to be carefully analyzed (see, e.g., \cite{Mignot1976,MignotPuel1984,DelosReyesMeyer2015}). In the next result we show that this solution operator satisfies a Lipschitz property.
\begin{lemma}\label{lem:lipschitz} For every $f\in V'$ there exists a unique solution $u\in V$ of
\begin{equation} \label{eq: VI Lipschitz} \dual{Au}{v-u}+\beta \int_\Omega |v|~dx -\beta \int_\Omega |u|~dx \geq \dual{f}{v -u}, \quad \text{ for all } v \in V, \end{equation}
which we denote by $u = S(f)$. The associated solution operator $S: V' \to V$ is globally Lipschitz continuous, i.e., there exists a constant $L > 0$ such that
\begin{equation} \|S(f_1) - S(f_2)\|_V \leq L \, \|f_1 - f_2\|_{V'} \quad \forall \, f_1, f_2 \in V'. \end{equation}
\end{lemma}
\begin{proof} Existence and uniqueness follow by standard arguments from the maximal monotonicity of $A + \partial \| \cdot \|_{L^1(\Omega)}$, see for instance \cite{Barbu1993}. To prove the Lipschitz continuity we test the variational inequality \eqref{eq: VI Lipschitz} for $u_1 = S(f_1)$ with $u_2 = S(f_2)$ and vice versa, and add the arising inequalities to obtain
\begin{equation*} \dual{A(u_1 - u_2)}{u_1 - u_2} \leq \dual{f_1 - f_2}{u_1 - u_2}. \end{equation*}
The ellipticity of $A$ then yields the result. \end{proof}
In addition to the Lipschitz continuity, the directional differentiability of the solution operator is indispensable in order to derive stronger optimality conditions. Obtaining such a result, however, requires some additional assumptions on the structure of the biactive set and the regularity of the solution.
\begin{assumption} \label{assu: structural} The active set $\mathcal A = \{ x\in \Omega: u(x) = 0\}$ satisfies the following conditions:
\begin{enumerate} \item\label{assu:active1} $\mathcal A = \mathcal A_1 \cup \mathcal A_0$, where $\mathcal A_1$ has positive measure and $\mathcal A_0$ has zero capacity \cite{attouch2014variational}. \item\label{assu:active2} $\mathcal A_1$ is closed with non-empty interior. Moreover, it holds $\mathcal A_1 = \overline{\text{int}(\mathcal A_1)}$.
\item\label{assu:active3} For the set $\mathcal J:= \Omega\setminus \mathcal A_1$ it holds
\begin{equation}\label{eq:innererrand} \partial\mathcal J \setminus (\partial\mathcal J\cap\partial\Omega) = \partial\mathcal A_1\setminus(\partial\mathcal A_1\cap\partial\Omega), \end{equation}
and both $\mathcal A_1$ and $\mathcal J$ are supposed to have regular boundaries. That is, the connected components of $\mathcal J$ and $\mathcal A_1$ have positive distance from each other and the boundary of each of them satisfies the cone condition \cite{gri85}. \end{enumerate} \end{assumption}
\begin{figure} \begin{center} \includegraphics[height=4.5cm]{biactiveset} \includegraphics[height=4.5cm]{biactivenot} \caption{Allowed (left) and not allowed (right) active sets according to Assumption \ref{assu: structural}.} \label{fig: active} \end{center} \end{figure}
Assumption \ref{assu: structural} has been relaxed in \cite{christofmeyer2017}, allowing for the presence of $(d-1)$-dimensional active subsets. An alternative polyhedricity hypothesis has been considered recently in \cite{hintermuller2017directional}. Although the latter apparently avoids structural assumptions on the active set, the recent work \cite{christof2017non} shows that, in order to get polyhedricity, structural assumptions on the active set are actually unavoidable.
\begin{theorem}\label{thm:ablvi} Let $f,h \in L^r(\Omega)$ with $r > \max\{d/2,1\}$ be given. Suppose further that Assumption \ref{assu: structural} is fulfilled by $u = S(f)$ and the associated slack variable $q$, and that both functions are continuous. Then there holds
\begin{equation}\label{eq:weaklim} \frac{S(f + t\,h) - S(f)}{t} \rightharpoonup \eta \quad \text{weakly in } V, \quad \text{as } t \searrow 0, \end{equation}
where $\eta \in V$ solves the following VI of the first kind:
\begin{equation}\label{eq:ablvi} \begin{aligned} \eta \in \mathcal K(u),\quad \dual{A\eta}{v-\eta} \geq \dual{h}{v-\eta}, \quad \forall\, v\in \mathcal K(u), \end{aligned} \end{equation}
with
$$\mathcal K(u)=\begin{aligned}[t] \{v\in V: \;\, & v(x) = 0 \text{ a.e., where } |q(x)| < \beta,\\ & v(x)q(x) \geq 0 \text{ a.e., where } |q(x)| = \beta \text{ and } u(x) = 0\}. \end{aligned}$$
\end{theorem}
The last theorem establishes a directional differentiability result for the solution operator in a weak sense. Combined with the quadratic structure of the tracking-type cost functional, this yields the directional differentiability of the reduced cost. In case $\mathcal B = \emptyset,$ the result can be further improved and G\^ateaux differentiability is obtained. Theorem \ref{thm:ablvi} was recently generalized in \cite{christofmeyer2017}, where, in addition to relaxing Assumption \ref{assu: structural}, semilinear terms in the inequality were considered. Similarly to the optimal control of obstacle problems (see \cite{MignotPuel1984}), a stronger stationarity condition can only be obtained if the control is of distributed type and no control constraints are imposed (see \cite{wachsmuth2014strong} for further details on the presence of control constraints). In the following theorem a strongly stationary optimality system is established for the model optimal control problem \eqref{eq: optimal control problem}.
\begin{theorem} Let $f \in L^2(\Omega)$ be a local optimal solution of \eqref{eq: optimal control problem} and $u \in V$ its corresponding state. Suppose that Assumption \ref{assu: structural} holds and assume further that $u,q \in C(\bar{\Omega})$.
Then there exist multipliers $p \in V$ and $\xi \in V'$ such that the following system holds:
\begin{subequations} \label{eq: strong stationary OS} \begin{align} & \dual{Au}{v}+ \int_\Omega q \, v~dx = \dual{f}{v},&& \text{ for all } v \in V,\\ & q(x) u(x) =\beta | u(x)| && \text{ a.e. in }\Omega,\\ & |q(x)| \leq \beta && \text{ a.e. in }\Omega,\\ & \dual{A^* p}{v} +\langle \xi, v \rangle = \int_\Omega (u-z_d) \,v ~dx,&& \text{ for all } v \in V,\\ & \alpha f+p=0&& \text{ a.e. in }\Omega, \end{align} and, additionally, \begin{align} & p(x)=0 &&\text{a.e. in }\mathcal I:=\{ x: |q(x)| < \beta \},\\ &p(x) q(x)=0 && \text{a.e. in } \mathcal B,\\ &\dual{\xi}{v} \geq 0, && \forall v \in V: v(x)=0 \text{ if } |q(x)|< \beta \land v(x) q(x) \geq 0 \text{ a.e. in } \mathcal B, \end{align} \end{subequations}
where $\mathcal B := \{x \in \Omega: u(x)=0 \land |q(x)|=\beta \}$. \end{theorem}
Optimality system \eqref{eq: strong stationary OS} is sharper than \eqref{eq: OS with local regularization}, since it includes a pointwise relation between the adjoint state and the dual multiplier on the biactive set, as well as a sign condition on the additional multiplier $\xi$ for a specific set of test functions. However, it is worth noting that the required assumptions for getting this result are quite strong and not extendable to several cases of interest, like boundary control or control-constrained problems. Finally, let us recall that in case $\mathcal B = \emptyset,$ the problem becomes smooth and the optimality systems \eqref{eq: OS with local regularization} and \eqref{eq: strong stationary OS} coincide.
\section{Challenges and perspectives}
Although several efforts have been made in the study of \emph{variational inequalities of the second kind} and their \emph{optimal control}, the topic is still very active and full of challenges. We comment next on some extensions, open problems and future perspectives within this field.
\subsection{Different operators $K$}
The model optimal control problem previously considered deals with the operator $K$ equal to the identity. For more complex cases, such as the gradient or the trace of a function, analytical complications arise that prevent an immediate extension of the previous results. The case of the operator $K = \nabla$ is of particular interest due to its applicability in viscoplastic fluid mechanics (see Section 2). However, the loss of regularity that occurs when applying the gradient leads to major difficulties in the analysis of the differentiability of the solution operator. Optimal control problems within this context have been considered in \cite{Delosreyes2009,dlRe2010,dlReSchoen2013}, with applications to viscoplastic fluids, contact mechanics and imaging. Existing results concern the characterization of Clarke stationary points and their numerical approximation. The characterization of strongly stationary points is still an open problem.
\subsection{Time dependent problems}
Another important extension, which has not been deeply explored yet, is the optimal control of time-dependent variational inequalities of the second kind. Here, the inequality that governs the phenomenon is of parabolic type and given by: Find $u \in L^2(0,T;V)$ such that
\begin{multline*} \label{eq:parabolic VI} \dual{\frac{\partial u}{\partial t}}{v-u} + \dual{Au}{v-u} + \beta \int_{\S} |K v| ~ds\\ - \beta \int_{\S} |K u| ~ds \geq \dual{f}{v-u}, \text{ for all } v \in V, \text{ a.e. } t \in\, ]0,T[,
\end{multline*}
complemented with an initial condition $u(0)=u_0 \in V.$ Existence, uniqueness and regularity of solutions for these types of inequalities have been investigated in the past \cite{DuvautLions1976}. Moreover, there are several numerical approaches for the solution of these problems (see \cite{Glowinski} and the references therein). The corresponding optimal control problems have been approached only from a general perspective, with regularizations of global type \cite{Barbu1993}. The study of different aspects such as existence, optimality conditions, Pontryagin's maximum principle, solution regularity, sufficient conditions, etc., is still open and deserves to be investigated in the near future. Even more so, since obtaining results for the optimal control of time-dependent variational inequalities of the first kind has already proved to be very challenging.
\bibliographystyle{plain}
\section{INTRODUCTION}
Physical properties of anomalous X-ray pulsars (AXPs), soft gamma repeaters (SGRs), and dim isolated neutron stars (XDINs) indicated by the observations significantly depend on the actual torque mechanism slowing down these sources. There are two basic torque models that try to explain the evolution of the rotational properties of these young neutron star systems consistently with their X-ray luminosities: (1) the magnetic dipole torque acting on a neutron star rotating in vacuum, and (2) the torque of a fallback disc acting on the star through interaction with the magnetic dipole field of the star. The dipole field strength of the star could be overestimated by one or two orders of magnitude with the dipole torque assumption if the star is actually evolving with a fallback disc. Using the basic principles of the fallback disc model (\citealt{Chatterjee_etal_00, Alpar_01}), we have developed a long-term evolution model considering the inactivation of the disc below a critical temperature, the contribution of the cooling luminosity of the star to the X-ray irradiation, and the effect of the X-ray irradiation on the evolution of the disc. We applied the same evolution model earlier to individual AXP/SGRs, including the so-called low-B magnetars (\citealt{Alpar_etal_11, Benli_etal_13, Benli_Ertan_16}), to the six XDINs with confirmed periods and period derivatives \citep{Ertan_etal_14}, and to a high-B radio pulsar (HBRP) with an anomalous braking index ($n \simeq 1$), namely PSR J1734--3333 \citep{Caliskan_etal_13}. These model applications are self-consistent in that the results are obtained with basic disc parameters that are expected to be similar for the fallback discs of different systems. In this work, we use the same model to investigate the properties of the three HBRPs with measured braking indices. Since the model is described in the earlier works (see e.g. \citealt{Caliskan_etal_13, Ertan_etal_14, Benli_Ertan_16}), we do not discuss the model details here. In Section 2, we briefly explain the model parameters and the evolution of a neutron star with a fallback disc. In Section 3, we give the observational properties of the three HBRPs. Our results are given in Section 4. We summarize our conclusions in Section 5.
\section{EVOLUTION OF A NEUTRON STAR WITH A FALLBACK DISC}
Evolutionary phases of a neutron star evolving with a fallback disc depend on the initial conditions: the magnetic dipole field strength at the pole of the star, $B_0$, the initial period, $P_0$, and the disc mass, $M_{\mathrm{d}}$. We assume that the disc initially extends to a radius $r_{\mathrm{out}}$ at which the effective temperature decreases to a critical value, $T_{\mathrm{p}}$, the minimum temperature that can keep the disc matter in a viscously active state. During the long-term evolution, $r_{\mathrm{out}}$ gradually propagates inwards with the decreasing X-ray irradiation flux, which can be written as $F_{\mathrm{irr}} \simeq 1.2~C~L_{\mathrm{x}}/(\pi r^2)$, where $C$ is the irradiation efficiency parameter, which depends on the disc geometry and the albedo of the disc surface, and $L_{\mathrm{x}}$ is the X-ray luminosity of the neutron star \citep{Fukue_92}. Values of $C$ in the $(1-7) \times 10^{-4}$ range can produce the optical and infrared emission spectra of AXP/SGRs \citep{Ertan_Caliskan_06, Ertan_etal_07b}. Keeping $C$ in this range, the general X-ray luminosity and the rotational properties of AXP/SGRs can be obtained with $T_{\mathrm{p}} \sim 100$ K \citep{Ertan_etal_09}.
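Setting the irradiation temperature equal to $T_{\mathrm{p}}$ gives a closed-form estimate of the active outer radius. The following minimal sketch (our own illustration, assuming that the effective temperature follows $\sigma T^4 \simeq F_{\mathrm{irr}}$; the numerical values are placeholders) evaluates it in cgs units:
\begin{verbatim}
import numpy as np

SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def r_out(L_x, C=1e-4, T_p=100.0):
    """Radius [cm] where the irradiation temperature drops to T_p [K],
    from sigma*T_p**4 = 1.2 * C * L_x / (pi * r**2) (assumed relation)."""
    return np.sqrt(1.2 * C * L_x / (np.pi * SIGMA_SB * T_p**4))

print(f"r_out = {r_out(1e33):.2e} cm")
\end{verbatim}
For $L_{\mathrm{x}} = 10^{33}$~erg~s$^{-1}$ and $C = 10^{-4}$ this gives $r_{\mathrm{out}}$ of the order of a few $\times 10^{12}$~cm, illustrating how the active disc shrinks as $L_{\mathrm{x}}$ decays.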
We solve the disc diffusion equation using the kinematic viscosity $\nu = \alpha c_{\mathrm{s}} h$, where $c_{\mathrm{s}}$ is the sound speed and $h$ is the pressure scale-height of the disc. We use the same $\alpha$ parameter ($\alpha = 0.045$) that was used earlier for AXP/SGRs and XDINs. In the accretion phase, to find the total torque acting on the star, we integrate the magnetic torque between the conventional Alfv\'{e}n radius, $r_{\mathrm{A}} \cong (G M)^{-1/7}~\mu^{4/7} \dot{M}_{\mathrm{in}}^{-2/7}$ \citep{Davidson_Ostriker_73, Lamb_etal_73}, and the co-rotation radius, $r_{\mathrm{co}} = (G M / \Omega_\ast^2)^{1/3}$, where $M$, $\Omega_\ast$ and $\mu$ are the mass, the angular frequency and the magnetic dipole moment of the neutron star, $G$ is the gravitational constant and $\dot{M}_{\mathrm{in}}$ is the rate of mass flow to the inner disc. The integrated spin-down torque can be written as $N = I ~\dot{\Omega}_{\ast} \simeq \frac{1}{2} \dot{M}_{\mathrm{in}} ~(G M r_{\mathrm{in}})^{1/2} ~(1 - (r_{\mathrm{in}}/r_{\mathrm{co}})^3)$ (\citealt{Ertan_Erkut_08}). The radius $r_{\mathrm{in}}$ can be considered the inner radius of the thin disc, while the boundary region of interaction between the field lines and the inner disc extends from $r_{\mathrm{in}}$ to $r_{\mathrm{co}}$. For the accretion phase, we set $r_{\mathrm{in}} = r_{\mathrm{A}}$ in the torque expression. Since the critical condition for the transition between the accretion and the propeller phase is not well known, we use the simplified condition $r_{\mathrm{A}} = r_{\mathrm{LC}}$ for the accretion-propeller transition, where $r_{\mathrm{LC}} = c/\Omega_\ast$ is the light-cylinder radius and $c$ is the speed of light. That is, when the calculated $r_{\mathrm{A}}$ is greater than $r_{\mathrm{LC}}$, the system is in the propeller phase. In this phase, we calculate the total torque by substituting $r_{\mathrm{in}} = r_{\mathrm{LC}}$ in the torque equation. In the propeller phase, we assume that all the mass flowing to the inner disc is expelled from the system, allowing pulsed radio emission (see \citealt{Ertan_Erkut_08} for the details of the torque model). Previously, \cite{Chen_Li_16} also studied the long-term evolution of PSR J1734--3333 using a different torque model and could reproduce the anomalous braking index of the source. Nevertheless, in their model, the rotational properties of the source are produced with disc luminosities that are well above the observed luminosities. This discrepancy between our results and those found by \cite{Chen_Li_16} is mainly due to the differences in the torque calculations. In this work, we use the same relatively efficient disc torque model that can also reproduce the X-ray luminosities and the rotational properties of different sources from different populations in a self-consistent way. In line with the long-term evolution models of AXP/SGRs and XDINs with fallback discs, in this work we also take $\alpha =0.045$, $T_{\mathrm{p}} = 100$~K and $C$ in the $(1-7) \times 10^{-4}$ range for all the sources. To test the model, we repeat the simulations tracing the initial conditions, $P_0$, $B_0$ and $M_{\mathrm{d}}$.
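The phase criterion and the torque expression above translate directly into code. The following minimal sketch (our own, in cgs units; the numerical values and the dipole-moment convention $\mu \sim B_0 R^3/2$ with $R = 10^6$~cm are assumptions for illustration) evaluates the relevant radii and the integrated torque:
\begin{verbatim}
import numpy as np

G, c = 6.674e-8, 2.998e10          # gravitational constant, speed of light
M = 1.4 * 1.989e33                 # neutron-star mass [g]

def torque(mu, P, Mdot_in):
    """Spin-down torque N [g cm^2 s^-2] and phase, for dipole moment
    mu [G cm^3], period P [s] and inner mass-flow rate Mdot_in [g/s]."""
    omega = 2.0 * np.pi / P
    r_A = (G * M)**(-1.0/7) * mu**(4.0/7) * Mdot_in**(-2.0/7)  # Alfven radius
    r_co = (G * M / omega**2)**(1.0/3)                         # corotation
    r_LC = c / omega                                           # light cylinder
    # accretion phase if r_A < r_LC, otherwise propeller with r_in = r_LC
    r_in, phase = (r_A, "accretion") if r_A < r_LC else (r_LC, "propeller")
    N = 0.5 * Mdot_in * np.sqrt(G * M * r_in) * (1.0 - (r_in / r_co)**3)
    return N, phase

print(torque(mu=1e30, P=1.0, Mdot_in=1e15))
\end{verbatim}
With these placeholder values the system is in the accretion phase with $r_{\mathrm{in}} > r_{\mathrm{co}}$, so $N < 0$ and the star spins down.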
We count a model as a reasonable alternative representation of the evolution of a given HBRP if it reproduces the X-ray luminosity, $L_{\mathrm{x}}$, the period, $P$, the period derivative, $\dot{P}$, and the second period derivative, $\ddot{P}$ (or the braking index, $n = 2 - P \ddot{P} / \dot{P}^2$) simultaneously (for the details of the model see e.g. \citealt{Benli_Ertan_16}).

\section{SOURCE PROPERTIES} {\bf PSR J1119--6127} was discovered in the Parkes Multibeam Pulsar Survey with a rotational period $P = 0.408$~s and a period derivative $\dot{P} = 4.02 \times 10^{-12}$~s~s$^{-1}$. The braking index $n = 2.91 \pm 0.05$ was measured using 1.2 years of data \citep{Camilo_etal_00}. Using more than 12 years of timing data, \cite{Weltevrede_etal_11} obtained $n = 2.684 \pm 0.002$. A more recent analysis covering 16 years of data and excluding the imprint of glitch activity yielded $n \simeq 2.7$ \citep{Antonopoulou_etal_15}. The source has a rotational power $\dot{E} = 2.3 \times 10^{36}$ erg s$^{-1}$. \cite{Caswell_etal_04} estimated a distance of 8.4~kpc using neutral hydrogen absorption measurements towards the supernova remnant (SNR) G292.2--0.5, and estimated an upper limit to the SNR age of around 10 kyr. Through spectral fits to the X-ray data, \cite{Ng_etal_12} estimated the bolometric X-ray luminosity as $L_{\mathrm{x}} = (1.1-3.8) \times 10^{33}$~erg~s$^{-1}$ in the 0.5--7~keV band, assuming $d = 8.4$~kpc.

{\bf PSR J1734--3333} has a period $P = 1.17$ s, a period derivative $\dot{P} = 2.28 \times 10^{-12}$ s s$^{-1}$ and a second period derivative $\ddot{P} = 5.3 \times 10^{-24}$ s s$^{-2}$ ($n = 0.9 \pm 0.2$, \citealt{Espinoza_etal_11}). This braking index is the lowest among the young radio pulsars. The bolometric X-ray luminosity range of the source, corresponding to a 25\% uncertainty in the distance estimate, is $7.3 \times 10^{31} - 6.6 \times 10^{32}$ erg s$^{-1}$ \citep{Olausen_etal_13}. The age of the SNR associated with the source is estimated to be greater than 1300 yr \citep{Ho_Anderson_12}.

{\bf PSR B1509--58} was discovered in the soft X-ray band with the \textit{Einstein Observatory} \citep{Seward_Harnden_82} and subsequently observed also in the radio band \citep{Manchester_etal_82}. It has a period $P \simeq 0.15$ s and a period derivative $\dot{P} \simeq 1.5 \times 10^{-12}$~s s$^{-1}$, which give $\dot{E} \simeq 1.7 \times 10^{37}$~erg~s$^{-1}$ and a characteristic age of 1570 yr. The age of SNR G320.4--01.2 associated with the source is estimated to be less than about $1700$~yr \citep{Gaensler_etal_99}. A recent analysis of the off-pulse X-ray spectrum gives the $0.5-7$~keV X-ray luminosity between $10^{33}$~erg~s$^{-1}$ and $2 \times 10^{34}$~erg~s$^{-1}$ for the distance $d = 5.2$~kpc \citep{Hu_etal_17}. The persistent braking index of the source is measured to be $n \simeq 2.84$ \citep{Kaspi_etal_94}.

\section{RESULTS AND DISCUSSION} Tracing the initial conditions $B_0$, $P_0$ and $M_{\mathrm{d}}$, we have tested whether our long-term evolution model can produce the observed rotational properties and X-ray luminosities of the three HBRPs. Taking the viscosity parameter $\alpha = 0.045$, the critical temperature $T_{\mathrm{p}} = 100$~K, and keeping the irradiation parameter $C$ in the $(1-7) \times 10^{-4}$ range, as in the models applied to AXP/SGRs and XDINs, we have found that the model can reproduce the properties of the 3 HBRPs for certain ranges of the initial conditions. The evolution of PSR J1734--3333 was studied earlier by \cite{Caliskan_etal_13} with $\alpha = 0.03$.
For a complete analysis of the HBRPs with measured braking indices, we have re-analysed the source with $\alpha = 0.045$, and found evolutionary curves similar to those obtained by \cite{Caliskan_etal_13}. It is seen in Figs 1--3 that the model can reproduce the individual source properties ($L_{\mathrm{x}}$, $P$, $\dot{P}$ and the braking index $n$) simultaneously for PSR J1119--6127, PSR J1734--3333 and PSR B1509--58. The model parameters are given in the figures. The three HBRPs are associated with SNRs. The ages of the sources indicated by the evolutionary curves are consistent with the estimated ages of their SNRs (see Section 3). The estimated ages and the observed properties of the sources are listed in Table \ref{tab:properties}. \begin{figure} \centering \includegraphics[width=\columnwidth,angle=0]{fig/j1119.pdf} \caption{Illustrative model curve for PSR J1119--6127. The properties of the source are produced at age $\sim 5 \times 10^{3}$ yr (the vertical dashed line). From the top to the bottom panel, the model curves represent the $L_{\mathrm{x}}$, $P$, $\dot{P}$ and $n$ evolutions. For this model, $P_0 = 50$ ms, $C = 1 \times 10^{-4}$ and $T_{\mathrm{p}} = 100$ K. The $M_{\mathrm{d}}$ and $B_0$ values are given in the top panel. The horizontal dashed lines show the observed properties. The double dashed lines in the top panel show the uncertainty range of the X-ray luminosity. } \label{fig:1119} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth,angle=0]{fig/1734/j1734.pdf} \caption{Illustrative model curves for PSR J1734--3333. For these models, $B_0 = 2 \times 10^{12}$~G, $T_{\mathrm{p}} = 100$ K and $C = 7 \times 10^{-4}$. In these two models, we use the $P_0$ and $M_{\mathrm{d}}$ values given in the top panel. The cross signs indicate the times at which the sources intersect the pulsar death-line. } \label{fig:1734} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth,angle=0]{fig/B1509.pdf} \caption{Model curves for PSR B1509--58. For these three illustrative models, we take $M_{\mathrm{d}} \sim 2 \times 10^{-5} \mathrm{M}_{\sun}$, $P_0 = 100$ ms, $C = 1 \times 10^{-4}$ and $T_{\mathrm{p}} = 100$ K. The dotted, dashed and solid model curves are obtained with the $B_0$ values given in the top panel. } \label{fig:B1509} \end{figure} Unlike for AXP/SGRs and XDINs, our model does not significantly constrain the dipole field strength, $B_0$, on the pole of the star. Similar evolutionary curves can be obtained with $B_0$ values from $\sim 10^{12}$~G to $\sim 10^{13}$~G. The results of earlier work indicate that XDINs have relatively weak fields with $10^{11}$~G $\lesssim B_0 \lesssim 10^{12}$~G, while for AXP/SGRs $B_0 \gtrsim 10^{12}$~G \citep{Ertan_etal_14, Benli_Ertan_16}. This implies that HBRPs could have field strengths similar to or possibly greater than those of AXP/SGRs in the fallback disc model. For these three sources, the radio pulsar death line does not impose a lower limit on $B_0$ either, due to their short periods ($\lesssim 1$ s). For PSR J1119--6127 and PSR B1509--58, we obtain the best results with the $P_0$ and $M_{\mathrm{d}}$ values given in Figs. 1 and 3. Nevertheless, reasonable model curves could be obtained with a large range of these initial parameters. For PSR J1734--3333, the evolution is not very sensitive to the initial period either.
The properties of this source could be produced with $P_0$ and $M_{\mathrm{d}}$ values in the ranges $\sim 50-300$ ms and $\sim 10^{-6}-10^{-5}~\mathrm{M}_{\sun}$ respectively (see Fig. \ref{fig:1734}). The disc masses of the three HBRPs, like their field strengths, seem to be similar to or greater than those of AXP/SGRs. Our results imply that these three sources are not likely to be identified as HBRPs after ages of a few $10^4$ yr (see Figs. 1--3), depending on their actual initial disc masses. One of the sources (PSR J1734--3333) is evolving into the AXP phase, while the others are likely to continue their evolution as radio pulsars, but with $\dot{P}$ rapidly decreasing towards normal radio pulsar values. A more detailed analysis is required to determine the entire domain of the initial conditions leading a neutron star to HBRP properties and the duration of these phases with relatively high $\dot{P}$ values.

In the accretion phase, the sources cannot emit radio pulses. For the self-consistency of the model, the observed properties of HBRPs should be acquired in a phase that allows radio pulses, that is, in the propeller phase. Our results for the three sources satisfy this requirement as well. A radio pulsar with a fallback disc can evolve into the accretion phase depending on the initial conditions (see e.g. Fig. \ref{fig:1734}). Once a source enters this phase, it is not likely to evolve back into the radio phase, because, in most cases, the accretion phase terminates with a rotational rate that is not sufficient to power pulsed radio emission. For instance, in this model, XDINs slowed down in the accretion phase in the past, and are currently in the propeller phase. During the accretion-propeller transition, all these sources have periods that place them below the pulsar death-line in the $B_0$-$P$ plane (see Fig. 4 in \citealt{Ertan_etal_14}).

The differences of our model from the earlier fallback disc models \citep{Chatterjee_etal_00, Alpar_01, Menou_etal_01} were discussed in \cite{Caliskan_etal_13}. The basic differences are: (1) the inactivation temperature of the disc in our model ($\sim 100$~K), which is much lower than in the other models, (2) the torque calculation, and, more importantly, (3) the condition for the transition between the propeller and the accretion phases. In our model, the sources are accreting matter from the disc over a large range of accretion rates in the spin-down phase. Observations of the transitional millisecond pulsars that show transitions between the X-ray pulsar and the radio pulsar phases at very low X-ray luminosities \citep{Archibald_etal_15b, Papitto_etal_15} seem to be consistent with our simplified condition for the onset of the propeller phase. Recently, \cite{Ertan_17} estimated the critical accretion rate for this transition depending on the period and the dipole field strength of the star. These critical accretion rates, estimated by simple analytical calculations, are in agreement with the rates obtained with our simplified condition ($r_{\mathrm{A}} = r_{\mathrm{LC}}$) for the accretion-propeller transition.
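As a quick numerical cross-check of the timing quantities quoted in Section 3 (a sketch using the rounded values of Table~\ref{tab:properties}; small offsets from the published numbers reflect that rounding):
\begin{verbatim}
# braking index of PSR J1734-3333 from n = 2 - P*Pddot/Pdot^2
P, Pdot, Pddot = 1.17, 2.28e-12, 5.3e-24
print(f"n = {2 - P*Pddot/Pdot**2:.2f}")   # ~0.81, within the quoted 0.9 +/- 0.2

# characteristic age of PSR B1509-58, tau_c = P/(2*Pdot)
P, Pdot = 0.15, 1.53e-12
tau_c = P / (2*Pdot) / 3.156e7            # seconds -> years
print(f"tau_c = {tau_c:.0f} yr")          # ~1550 yr, close to the quoted 1570 yr
\end{verbatim}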
\begin{table*} \centering \caption{The observed properties of the high-magnetic-field radio pulsars and their ages found from the model.} \label{tab:properties} \begin{tabular}{lccccr} \hline Name & $P$ (s) & $\dot{P}$ (s s$^{-1}$) & $n$ & $L_{\mathrm{x}}$ (erg s$^{-1}$) & Age (yr) \\ \hline PSR J1734--3333 & 1.17 & $2.28 \times 10^{-12}$ & $0.9 \pm 0.2$ & $(7.3-66) \times 10^{31}$ & $(2.5-3) \times 10^4$ \\ PSR J1119--6127 & 0.41 & $4.02 \times 10^{-12}$ & 2.7 & $(1.1-3.8) \times 10^{33}$ & $5 \times 10^{3}$ \\ PSR B1509--58 & 0.15 & $1.53 \times 10^{-12}$ & 2.84 & $(1-20) \times 10^{33}$ & $1 \times 10^{3}$ \\ \hline \end{tabular} \end{table*}

\section{CONCLUSION} We have investigated the long-term evolution of the three high--B radio pulsars (HBRPs) with measured braking indices, namely PSR J1734--3333, PSR J1119--6127 and PSR B1509--58, using the same model that was applied earlier to AXP/SGRs and XDINs. We have shown that neutron stars starting their evolution from a certain domain of initial conditions ($B_0$, $P_0$, $M_{\mathrm{d}}$) can reach the observed X-ray luminosity, period, period derivative and braking index of each of these HBRPs simultaneously through evolution with fallback discs and with conventional magnetic dipole fields. For all these sources, the model reproduces the observed properties at ages that are in good agreement with the estimated ages of the supernova remnants associated with these systems. In the model, the three HBRPs are currently in the propeller phase, in which accretion on to the star is not allowed, which is consistent with the radio pulsar property of these sources.

We obtain the properties of the 3 HBRPs with relatively high disc masses in comparison with those of AXP/SGRs and XDINs. In the fallback disc model, our results indicate that the dipole fields of HBRPs could be in the $\sim 10^{12}$~G to $\sim 10^{13}$~G range, which is similar to the AXP/SGR field range. This means that a fraction of the HBRPs could have evolutionary connections with AXP/SGRs (see e.g. the model curves for PSR J1734--3333 with $B_0 \simeq 2 \times 10^{12}$~G), while the remaining fraction could evolve as radio pulsars over their observable life-times (see the illustrative model curves in Figs \ref{fig:1119} and \ref{fig:B1509}). Our results, compared with the results of earlier work, also show that the individual source properties of AXPs, SGRs, XDINs and HBRPs can be reproduced in the same model as a natural outcome of the evolution of neutron stars with fallback discs. It is the differences in the three initial conditions ($B_0$, $P_0$, $M_{\mathrm{d}}$) that lead to the emergence of the observed diversity of these neutron star populations. A detailed analysis of the evolutionary connections between the different isolated neutron star populations in the fallback disc model will be presented in an independent paper. In particular, for the 3 sources studied in this work, our results imply that they will lose their HBRP property, inferred from the observed $P$ and $\dot{P}$ with the purely magnetic dipole torque assumption, before ages of a few $10^4$ yr. In the subsequent evolutionary phases, they could be identified either as normal radio pulsars because of their rapidly decreasing $\dot{P}$ (see Figs. \ref{fig:1119} and \ref{fig:B1509}), or they could enter the accretion phase, switching off their radio pulses (see Fig. \ref{fig:1734}) and possibly switching on AXP/SGR properties.
\section*{Acknowledgements} We acknowledge research support from Sabanc{\i} University, and from T\"{U}B\.{I}TAK (The Scientific and Technological Research Council of Turkey) through grant 116F336. \bibliographystyle{mn2e}
\section*{Supplementary Material} See the supplementary online material for the structural and magnetic characterization of the bi- and trilayer samples, the carrier concentration of the Ga:ZnO layer, and a more elaborate discussion of the SMR trilayer simulation. \begin{acknowledgments} M.S.R. and J.M. would like to thank the Department of Science and Technology, New Delhi, for funding that facilitated the establishment of the Nano Functional Materials Technology Centre (Grant: SR NM/NAT/02-2005). J.M. would like to thank the UGC for an SRF fellowship. We acknowledge financial support by the German Academic Exchange Service (DAAD) via project no.~57085749. \end{acknowledgments}
\chapter{Feynman rules}\label{app:feyn_rules} In this Appendix we present the Feynman rules involving the different types of anomalous couplings generated by the effective Lagrangians defined in various sections of chapters~\ref{chap:neutral_currents} and~\ref{chap:CC}. \section{Neutral currents}\label{app:feyn_neutral} In Tab.~\ref{tab:feyns_fcnc} we present the Feynman rules for FCNC top quark transitions governed by the Lagrangian given in Eq.~(\ref{eq:Lagr}). \begin{table}[h!] \begin{tabular}{m{1.6cm}l} \includegraphics[scale= 0.5]{feyn_fcnc1.pdf} &$\mathrm{i} g_Z \frac{v^2}{\Lambda^2}\Big[\gamma^{\mu}a^{Z}_{R,L} -\frac{2\mathrm{i} \sigma^{\mu\nu}q_{\nu}}{v}b^Z_{LR,RL}\Big]P_{R,L}$ \\ \includegraphics[scale= 0.5]{feyn_fcnc2.pdf} &$-\mathrm{i} e \frac{v^2}{\Lambda^2}\frac{2\mathrm{i} \sigma^{\mu\nu}q_{\nu}}{v}b^\gamma_{LR,RL}\,P_{R,L}$ \\ \includegraphics[scale= 0.5]{feyn_fcnc3.pdf} &$-\mathrm{i} g_s \frac{v^2}{\Lambda^2}\frac{2\mathrm{i} \sigma^{\mu\nu}q_{\nu}}{v}b^g_{LR,RL}T^a\,P_{R,L}$ \\ \end{tabular} \caption{Feynman rules for the $tVq$ FCNC vertices; $q_{\mu}$ is the momentum of the outgoing gauge boson and $P_{R,L}=(1\pm \gamma^5)/2$ are the chirality projectors.} \label{tab:feyns_fcnc} \end{table} \vfill \section{Charged currents}\label{app:feyn_charged} The Feynman rules for the vertices generated by the operators given in Eq.~(\ref{eq:ops1}) that turn out to be relevant for our analysis are shown in Tab.~\ref{tab:feyns_cc}. We use the following abbreviations, labeling flavor with $i,j$: \small \begin{eqnarray} v_R &=& \kappa_{RR} \delta_{3i}\delta_{3j}\,,\hspace{0.5cm} \tilde{v}_R=\frac{c_W}{s_W}v_R\,,\\ \nonumber v_L &=& \kappa_{LL}\delta_{3i}+\kappa_{LL}^{\prime}\delta_{3j}+\kappa_{LL}^{\prime\pr}\delta_{3i}\delta_{3j}\,,\\ \nonumber \tilde{v}_L&=&\frac{c_W^2-s_W^2}{2 c_W s_W} v_L-\frac{1}{2c_Ws_W}\Big(\kappa_{LL}^*\delta_{3i}+\kappa_{LL}^{\prime*}\delta_{3j}+\kappa_{LL}^{\prime\pr*}\frac{V_{ib}V_{tj}}{V_{ij}}\Big)\,,\\ \nonumber g_R &=& -\kappa_{LRb}\,,\\ \nonumber g_L &=& -\kappa_{LRt}^* \delta_{3i}-\kappa_{LRt}^{\prime*}\delta_{3i}\delta_{3j}\,, \end{eqnarray} \normalsize \vspace{-0.4cm} \begin{table}[h!]
\begin{tabular}{m{1.6cm}l|m{1.6cm}l} \includegraphics[scale= 0.5]{diag1.pdf} &$-\frac{\mathrm{i} g}{\sqrt{2}}V_{ij}\Big[\gamma^{\mu}v_{R,L} +\frac{\mathrm{i} \sigma^{\mu\nu}q_{\nu}}{m_W}g_{R,L}\Big]P_{R,L}$ & \includegraphics[scale= 0.5]{diag2.pdf} &$-\frac{\mathrm{i} g}{\sqrt{2}}V_{ij}\frac{\gs{q}}{m_W}(-v_{R,L}) P_{R,L} $\\ \includegraphics[scale= 0.5]{diag1new.pdf} &$\begin{array}{l}-\frac{\mathrm{i} g}{\sqrt{2}} V_{ij}\frac{e}{m_W}\Big[\{v_{R,L};\tilde{v}_{R,L} \}\gamma^{\mu}\\ +\{1;\frac{c_W}{s_W}\}(-g_{R,L})\frac{\mathrm{i} \sigma^{\alpha\mu}k_{\mu}}{2 m_W}\Big]P_{R,L}\end{array}$& \includegraphics[scale= 0.5]{diag2new.pdf} &$-\frac{\mathrm{i} g}{\sqrt{2}}V_{ij} e\Big[\{1;\frac{c_W}{s_W}\}(-g_{R,L})\frac{\mathrm{i} \sigma^{\mu\alpha}}{m_W}\Big]P_{R,L}$\\\hline \includegraphics[scale= 0.5]{diag4new.pdf} &$\begin{array}{l}\mathrm{i}\big(\frac{g}{\sqrt{2}}\big)^2 \frac{e}{m_W^2}V_{tm}^*V_{tn}\{1;\frac{c_W^2-s_W^2}{2c_W s_W}\}\gamma^{\alpha}P_L\\ \times(\kappa_{LL}+\delta_{3n}\kappa_{LL}^{\prime\pr})\end{array}$& \includegraphics[scale= 0.5]{diag3.pdf} &$\begin{array}{l}-\mathrm{i}\big(\frac{g}{\sqrt{2}}\big)^2V_{tm}^* V_{tn}\frac{ \gs{q}}{m_W^2}{P}_L\\\times(\kappa_{LL}+\delta_{3n}\kappa_{LL}^{\prime\prime})\end{array}$\\ \includegraphics[scale= 0.5]{diag4.pdf} &$\begin{array}{l}\mathrm{i} \big(\frac{g}{\sqrt{2}}\big)^2V_{tm}^* V_{tn}\frac{1}{m_W}\gamma^{\mu}{P}_L\\ \times(\kappa_{LL}+\delta_{3n}\kappa_{LL}^{\prime\prime})\end{array}$& \includegraphics[scale= 0.5]{diag7new.pdf} &$\mathrm{i} e\,\kappa_{LRb}\delta_{3m}\delta_{3n}\frac{\mathrm{i} \sigma^{\alpha\mu}k_{\mu}}{2m_W}P_R$\\\hline \includegraphics[scale= 0.5]{diag6new.pdf} &$\begin{array}{l}-\mathrm{i} e \frac{1}{2s_Wc_W}\gamma^{\alpha}P_L\\ \times\big(\delta_{3m}\delta_{3n}\kappa_{LL}+V_{mb}V_{nb}^*(\kappa_{LL}^{\prime}+\delta_{3m}\kappa_{LL}^{\prime\pr})\big)\end{array}$& \includegraphics[scale= 0.5]{diag5new.pdf} &$\begin{array}{l}-\mathrm{i} e\{1;\frac{c_W}{s_W}\} \frac{\mathrm{i} \sigma^{\alpha\nu}k_{\nu}}{2 m_W}P_R \\ \times\big(\delta_{3m}\delta_{3n} \kappa_{LRt}+V_{mb}\delta_{3n}\kappa_{LRt}^{\prime}\big)\end{array}$ \\ \end{tabular} \caption{Feynman rules for the anomalous vertices. Indices $i,j$ and $m,n$ label quark flavor.} \label{tab:feyns_cc} \end{table} The Feynman rule for the $tWb$ vertex obtained from Eq.~(\ref{eq:effsimple}) is given in Tab.~\ref{tab:feyn_main1}. \begin{table}[h!] \begin{tabular}{m{1.6cm}l} \includegraphics[scale= 0.5]{feyn_main1.pdf} &$-\mathrm{i} \frac{g}{\sqrt{2}}\Big[\gamma^{\mu}a_{R,L} -\frac{2\mathrm{i} \sigma^{\mu\nu}q_{\nu}}{m_t}b_{LR,RL}\Big]P_{R,L}$ \\ \end{tabular} \caption{Feynman rule for the general parametrization of the $tWb$ vertex; $q_\mu$ is the momentum of the outgoing gauge boson.} \label{tab:feyn_main1} \end{table} \chapter{Loop functions} In this Appendix we present analytic expressions for the various loop functions obtained in the calculation of one-loop amplitudes in FCNC processes. \section{FCNC top decay form factors}\label{app:form_factors_qcd} Here we present the form factors defined in Eqs.~(\ref{eq:fcnc_amp1}, \ref{eq:fcnc_amp2}). Expressions are given in $d=4+\epsilon$ dimensions, regularizing both UV and IR divergences, denoted $\epsilon_{\mathrm{UV}}$ and $\epsilon_{\mathrm{IR}}$ respectively. Further we define $$C_{\epsilon}=(m_t/\mu)^{\epsilon}\Gamma(1-\epsilon/2)/(4\pi)^{\epsilon/2}\,,$$ and for the UV divergent form factors we add the counterterms, denoted by $\delta$.
The form factors are \small \begin{eqnarray} F_{b}^{\gamma}&=&C_{\epsilon} \Bigg[-\frac{4}{\epsilon_{\mathrm{IR}}^2}+\frac{5}{\epsilon_{\mathrm{IR}}}+\frac{2}{\epsilon_{\mathrm{UV}}} -6 \Bigg] + \delta_{b}^{\gamma}\,, \label{eq:Fb_gamma}\\ F_{bg}^{\gamma}&=&Q C_{\epsilon} \Bigg[\frac{8}{\epsilon_{\mathrm{UV}}}-11+\frac{2}{3}\pi^2-2\pi\mathrm{i} \Bigg]+ \delta_{bg}^{\gamma}\,,\label{eq:Fbg_gamma}\\ F_{a}^Z&=&C_{\epsilon}\left[ -\frac{4}{\epsilon_{\mathrm{IR}}^2}+\frac{5-4\log(1-r_Z)}{\epsilon_{\mathrm{IR}}}-2\log^2(1-r_Z)+3\log(1-r_Z)-2\mathrm{Li}_2(r_Z)-6\right]\,,\label{eq:Fa}\\ F_{b}^Z&=&C_{\epsilon} \Big[ -\frac{4}{\epsilon_{\mathrm{IR}}^2}+\frac{5-4\log(1-r_Z)}{\epsilon_{\mathrm{IR}}}+\frac{2}{\epsilon_{\mathrm{UV}}} -2 \log^2(1-r_Z)+4\log(1-r_Z)\label{eq:Fb}\\ &-&2\mathrm{Li}_2(r_Z)-6 \Big]+\delta_{b}^Z\,,\nonumber\\ F_{ab}^Z&=&-4 m_t \log(1-r_Z)\,,\label{eq:Fab}\\ F_{ba}^Z&=&-\frac{1}{m_t}\frac{1}{2r_Z}\log(1-r_Z)\,,\label{eq:Fba} \end{eqnarray} \begin{eqnarray} F_{ag}^Z&=&m_t\bigg[\hat v+\hat a \label{eq:Fag}\\ \nonumber &+& (\hat v-\hat a)\Big\llbracket\frac{r_Z(4-r_Z)(1+r_Z)}{(1-r_Z)^3}f_1 -\frac{2r_Z(4-r_Z)}{(1-r_Z)^4}f_2-\frac{1-7r_Z+3r_Z^2}{(1-r_Z)^2}+\frac{2 r_Z}{1-r_Z}\log r_Z\Big\rrbracket\bigg]\,,\\ F_{bg}^Z&=&C_{\epsilon}\bigg[ 2 \hat v \frac{2}{\epsilon_{\mathrm{UV}}}+(\hat v+\hat a)(f_1-2)\label{eq:Fbg}\\ &+&(\hat v-\hat a)\Big\llbracket-\frac{r_Z}{1-r_Z}\log(r_Z) -\mathrm{i} \pi- \frac{7/2-4r_Z+2r_Z^2}{(1-r_Z)^2} -\frac{(1+r_Z)(2+r_Z)}{2(1-r_Z)^3}f_1+\frac{2+r_Z}{(1-r_Z)^4}f_2\Big\rrbracket\bigg]+ \delta_{bg}^Z \,.\nonumber \end{eqnarray} \normalsize where we have defined \begin{eqnarray}\label{eq:def_fcnc} r_Z = m_Z^2/m_t^2\,,\hspace{0.5cm} \hat{v} = T_3 - 2 \sin^2 \theta_W\, Q\,,\hspace{0.5cm} \hat{a} = T_3\,. \label{eq:some_def} \end{eqnarray} For the up-type quarks $Q = 2/3$ and $T_3 = 1/2$. Further, we have introduced the auxiliary functions $f_1$ and $f_2$ for shorter notation \small \begin{eqnarray} f_1&=&2\sqrt{\frac{4-r_Z}{r_Z}}\arctan\Big(\sqrt{\frac{r_Z}{4-r_Z}}\Big)\,,\\ f_2&=&-2\mathrm{Li}_2(r_Z-1)+2\arctan\Big(\frac{1-r_Z}{3-r_Z}\sqrt{\frac{4-r_Z}{r_Z}}\Big) \arctan\Big(\frac{r_Z}{2-r_Z}\sqrt{\frac{4-r_Z}{r_Z}}\Big) \nonumber\\ &+&2\mathrm{Re}\Big\{\mathrm{Li}_2\Big((1-r_Z)^2 \big(1-\frac{r_Z}{2}\frac{2-r_Z}{1-r_Z}(1+\mathrm{i}\sqrt{\frac{4-r_Z}{r_Z}})\big)\Big) -\mathrm{Li}_2\Big(\frac{1-r_Z}{2}\big(2-r_Z-\mathrm{i} \sqrt{(4-r_Z)r_Z}\big)\Big)\Big\}\,.\nonumber \end{eqnarray} \normalsize \section{$|\Delta B|=2$ loop functions}\label{app:NP_D_B_2} Here we present the SM as well as the NP loop functions for $|\Delta B|=2$ processes defined in Eqs.~(\ref{LOWils}). The functions $S_{0}^{LL (\prime)}$ are UV divergent and the forms presented here are $\overline{\mathrm{MS}}$ renormalized. \begin{subequations}\label{eq:S0s} \begin{eqnarray} S_0^{\mathrm{SM}}(x_t)\hspace{-0.2cm}&=&\hspace{-0.2cm}\frac{1}{2}S_0^{LL\prime}(x_t)=\frac{x_t(x_t^2-11 x_t+4)}{4(x_t-1)^2}+\frac{3x_t^3\log x_t}{2(x_t-1)^3}\,,\\ S_{0\overline{\mathrm{MS}}}^{LL}(x_t)\hspace{-0.2cm}&=&\hspace{-0.2cm}2S_{0\overline{\mathrm{MS}}}^{LL\prime\pr}(x_t)=-\frac{x_t \left(x_t^2+10 x_t+1\right)}{2 \left(x_t-1\right)^2} + x_t \log \frac{m_W^2}{\mu^2}\\ &&\hspace{1.7cm}+\frac{x_t \left(x_t^3-3 x_t^2+12 x_t-4\right) \log x_t}{\left(x_t-1\right)^3}\,,\nonumber\\ S_0^{LRt}(x_t)\hspace{-0.2cm}&=&\hspace{-0.2cm}2S_0^{LRt\prime}(x_t)=3\sqrt{x_t}\bigg[-\frac{x_t(x_t+1)}{(x_t-1)^2}+\frac{2x_t^2\log x_t}{(x_t-1)^3}\bigg]\,.
\end{eqnarray} \end{subequations} \section{$|\Delta B|=1$ loop functions}\label{app:SM_D_B_1} Below we present the loop functions obtained in the calculation of $|\Delta B|=1$ processes within the SM in the general $R_{\xi}$ gauge, defined in Eqs.~(\ref{eq:delta_B1_SM}). \begin{eqnarray} B_0(x)&=& \frac{x}{2(x-1)}-\frac{x \log x}{2(x-1)^2} - \frac{1}{2}\,\phi(x,\xi)\,,\\ \tilde{B}_0(x)&=& -\frac{2 x}{x-1}+\frac{2 x \log x}{(x-1)^2}+ \frac{1}{2}\, \phi(x,\xi)\,,\\ C_0(x)&=&-\frac{x(x^2-7x+6)}{2(x-1)^2}-\frac{x(3x+2)}{2(x-1)^2}\log x -\phi(x,\xi)\,,\\ D_0(x)&=&\frac{4}{9} \log x + \frac{x^2(19x-25)}{36(x-1)^3} +\frac{x^2(-5x^2+2x+6)}{18(x-1)^4}\log x + \phi(x,\xi)\,,\\ D_0^\prime(x)&=& \frac{8x^3+5x^2-7x}{12(x-1)^3}-\frac{x^2(3x-2)}{2(x-1)^4}\log x\,,\\ E_0(x)&=&\frac{2}{3}\log x -\frac{x(x^2+11x-18)}{12(x-1)^3}-\frac{x^2(4x^2-16x+15)\log x}{6(x-1)^4}\,,\\ E_0^\prime(x)&=&\frac{x(x^2-5x-2)}{4(x-1)^3} + \frac{3x^2\log x}{2(x-1)^4}\,. \end{eqnarray} The function \begin{eqnarray} \phi(x,\xi)= \frac{x^2(\xi-1)(x\xi+7x-8\xi)\log x}{4(x-1)^2(x-\xi)^2} -\frac{x(x\xi-7x+6\xi)}{4(x-1)(x-\xi)} -\frac{x\xi(6x+\xi^2-7\xi)\log\xi}{4(\xi-1)(x-\xi)^2}\,, \end{eqnarray} captures all the $\xi$ dependence and has the property $\lim_{x\to0} \phi(x,\xi) = \lim_{\xi \to 1} \phi(x,\xi) = 0$. It is obvious that the following linear combinations are gauge independent \begin{eqnarray} 2B_0(x)-C_0 (x)\,,\hspace{0.5cm}2\tilde{B}_0(x)+C_0 (x)\,,\hspace{0.5cm} C_0(x) + D_0(x) \,. \end{eqnarray} Next we present analytical expressions for the functions $f_i^{(j)}$ and $\tilde{f}_i^{(j)}$ defined in Eq.~(\ref{eq:fs}). For shorter notation we further decompose \begin{eqnarray*} f_{9}^{(j)}=g^{(j)}-\frac{1}{4s_W^2} h^{(j)}\,,\hspace{0.5cm}f_{10}^{(j)}=\frac{1}{4 s_W^2}h^{(j)}\,,\hspace{0.5cm} f^{(j)}_{\nu\bar{\nu}}=\frac{1}{4s_W^2}k^{(j)}\,. \end{eqnarray*} Functions containing explicit $\mu$ dependence have been renormalized using the $\overline{\mathrm{MS}}$ scheme.
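As a lightweight numerical sanity check of the expressions above (a sketch; the input $x_t \approx 4.2 \approx \overline{m}_t^2/m_W^2$ is illustrative), one can confirm both the familiar size of the SM box function $S_0^{\mathrm{SM}}$ and the cancellation of the gauge parameter in the combination $2B_0(x)-C_0(x)$:
\begin{verbatim}
import numpy as np

def S0_SM(x):
    return x*(x**2 - 11*x + 4)/(4*(x-1)**2) + 3*x**3*np.log(x)/(2*(x-1)**3)

def phi(x, xi):   # the xi-dependent piece defined above
    return (x**2*(xi-1)*(x*xi + 7*x - 8*xi)*np.log(x)/(4*(x-1)**2*(x-xi)**2)
            - x*(x*xi - 7*x + 6*xi)/(4*(x-1)*(x-xi))
            - x*xi*(6*x + xi**2 - 7*xi)*np.log(xi)/(4*(xi-1)*(x-xi)**2))

def B0(x, xi):
    return x/(2*(x-1)) - x*np.log(x)/(2*(x-1)**2) - phi(x, xi)/2

def C0(x, xi):
    return (-x*(x**2 - 7*x + 6)/(2*(x-1)**2)
            - x*(3*x + 2)*np.log(x)/(2*(x-1)**2) - phi(x, xi))

x = 4.2
print(S0_SM(x))                       # ~2.35, the familiar SM value
for xi in (0.5, 2.0, 10.0):           # different R_xi gauge parameters
    print(2*B0(x, xi) - C0(x, xi))    # identical for every xi
\end{verbatim}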
Below we give all nonzero contributions {\allowdisplaybreaks \small \begin{eqnarray} f_7^{(LL)} &=&\tilde{f}_7^{(LL)}=f_7^{(LL\prime\pr)}= \frac{22 x^3-153x^2+159x-46}{72 (x-1)^3}+\frac{3x^3-2x^2}{4(x-1)^4}\log x\,,\\ f_7^{(LL\prime)}&=&-\frac{8x^3+5x^2-7x}{24(x-1)^3}+\frac{3x^3-2x^2}{4(x-1)^4}\log x \,,\\ f_7^{(RR)}&=&\frac{m_t}{m_b}\Big[\frac{-5x^2+31x-20}{12 (x-1)^2}+\frac{2x-3x^2}{2(x-1)^3}\log x\Big]\label{eq:rr}\,,\\ f_7^{(LRb)}&=&\frac{m_W}{m_b}\Big[-\frac{x}{2}\log \frac{m_W^2}{\mu^2}+\frac{6x^3-31x^2+19x}{12(x-1)^2}+\frac{-3x^4+16x^3-12x^2+2x}{6(x-1)^3}\log x\Big] \label{eq:lrb}\,,\\ f_7^{(LRt)}&=&\frac{m_t}{m_W}\Big[\frac{1}{8}\log\frac{m_W^2}{\mu^2}+\frac{-9x^3+63x^2-61x+19}{48(x-1)^3}+\frac{3x^4-12x^3-9x^2+20x-8}{24(x-1)^4}\log x\Big]\,,\\ \tilde{f}_7^{(LRt)}&=&\tilde{f}_7^{(LRt\prime)}=\frac{m_t}{m_W}\Big[\frac{-3x^3+17x^2-4x-4}{24(x-1)^3}+\frac{2x-3x^2}{4(x-1)^4}\log x\Big]\,,\\ f_7^{(LRt\prime)}&=&\frac{m_t}{m_W}|V_{tb}|^2\Big[\frac{-x^2-x}{8(x-1)^2}+\frac{x^2\log x}{4(x-1)^3}\Big]\,,\\ f_8^{(LL)}&=&\tilde{f}_8^{(LL)}=f_8^{(LL\prime\pr)}=\frac{5 x^3-9 x^2+30 x-8}{24 (x-1)^3}-\frac{3 x^2 \log x}{4 (x-1)^4}\,,\\ f_8^{(LL\prime)}&=&\frac{-x^3+5x^2+2x}{8 (x-1)^3}-\frac{3 x^2 \log x}{4 (x-1)^4}\,,\\ f_8^{(RR)}&=&\frac{m_t}{m_b}\Big[\frac{-x^2-x-4}{4 (x-1)^2}+\frac{3 x \log x}{2 (x-1)^3}\Big]\,,\\ f_8^{(LRb)}&=&\frac{m_W}{m_b}\Big[\frac{x^2+5 x}{4 (x-1)^2}+\frac{2 x^3-6 x^2+x}{2 (x-1)^3}\log x\Big]\,,\\ f_8^{(LRt)}&=&\frac{m_t}{m_W}\Big[\frac{3 x^2-13 x+4}{8 (x-1)^3}+\frac{5 x-2 }{4 (x-1)^4}\log x\Big]\,,\\ \tilde{f}_8^{(LRt)}&=&\tilde{f}_8^{(LRt\prime)}=\frac{m_t}{m_W}\Big[\frac{x^2-5 x-2}{8 (x-1)^3}+\frac{3 x \log (x)}{4 (x-1)^4}\Big]\,,\\ g^{(LL)}&=&\tilde{g}^{(LL)}=(-x-\frac{4}{3})\log\frac{m_W^2}{\mu^2}+\frac{250x^3-384x^2+39x+77}{108 (x-1)^3}\\ &+&\frac{-18x^5+48x^4-102x^3+135x^2-68x+8}{18(x-1)^4}\log x \,,\\ g^{(LL\prime)}&=&(\frac{4}{9}-\frac{x}{2})\log\frac{m_W^2}{\mu^2}+\frac{125x^3-253x^2+138x -16}{36(x-1)^3}\\ &+&\frac{-9x^5+12x^4-48x^3+99x^2-59x+8}{18(x-1)^4}\log x - |V_{tb}|^2\frac{x}{2}\,,\nonumber\\ \tilde{g}^{(LL\prime)}&=&\tilde{h}^{(LL\prime)}=\tilde{f}^{(LL\prime)}_{\nu\bar{\nu}}=-\frac{x}{2}\log\frac{m_W^2}{\mu^2}+\frac{x}{2}(1-\log x -|V_{tb}|^2)\,,\\ g^{(LL\prime\pr)}&=&-\big(\frac{4}{3}+\frac{x}{2}+|V_{tb}|^2\frac{x}{2} \big)\log\frac{m_W^2}{\mu^2} + \frac{250x^3-384x^2+39x + 77}{108(x-1)^3}\\ &+&\frac{-9x^5+12x^4-48x^3+99x^2-59x+8}{18(x-1)^4}\log x - |V_{tb}|^2\frac{x}{2}\log x \,,\nonumber \\ \tilde{g}^{(LL\prime\pr)}&=&\tilde{h}^{(LL\prime\pr)}=\tilde{f}^{(LL\prime\pr)}_{\nu\bar{\nu}}=|V_{tb}|^2\Big(-\frac{x}{2}\log\frac{m_W^2}{\mu^2}-\frac{x}{2}\log x \Big)\,,\\ g^{(LRt)}&=&\tilde{g}^{(LRt)}=\frac{m_t}{m_W}\Big[\frac{-99x^3+136x^2+25x-50}{72(x-1)^3}+\frac{24x^3-45x^2+17x+2}{12(x-1)^4}\log x\Big]\,,\\ g^{(LRt\prime)}&=&\frac{m_t}{m_W}|V_{tb}|^2\Big[\frac{x^2+3x-2}{8(x-1)^2}+\frac{x-2x^2}{4(x-1)^3}\log x\Big]\,,\\ \tilde{g}^{(LRt\prime)}&=&\frac{m_t}{m_W}\bigg[\frac{-54x^3+59x^2+35x-34}{36(x-1)^3}+\frac{15x^3-27x^2+10x+1}{6(x-1)^4}\log x\\ &+& |V_{tb}|^2\Big[\frac{x^2+3x-2}{8(x-1)^2}+\frac{x-2x^2}{4(x-1)^3}\log x\Big]\bigg]\,,\\ h^{(LL)}&=&\tilde{h}^{(LL)}=-(x+\frac{3}{2})\log\frac{m_W^2}{\mu^2}+\frac{11x-5}{4(x-1)}+\frac{-2x^3+x^2-2x}{2(x-1)^2}\log x\,,\\ h^{(LL\prime)}&=&-\frac{x}{2}\log\frac{m_W^2}{\mu^2}+\frac{3x}{2(x-1)}-\frac{x^3+x^2+x}{2(x-1)^2}\log x-|V_{tb}|^2\frac{x}{2}\,, \\ h^{(LL\prime\pr)}&=&-\big(\frac{3}{2}+\frac{x}{2}+|V_{tb}|^2\frac{x}{2}\big)\log\frac{m_W^2}{\mu^2}+\frac{11x-5}{4(x-1)}-\frac{x^3+x^2+x}{2(x-1)^2}\log x-|V_{tb}|^2 
\frac{x}{2}\log x\,,\\ h^{(LRt)}&=&\tilde{h}^{(LRt)}=\tilde{h}^{(LRt\prime)}=\frac{m_t}{m_W}\Big[-\frac{3x}{2(x-1)}+\frac{3x\log x}{2(x-1)^2}\Big]\,,\\ k^{(LL)}&=&\tilde{k}^{(LL)}=h^{(LL)}-\frac{3}{(x-1)}+\frac{3x\log x}{(x-1)^2}\,,\\ k^{(LL\prime)}&=&h^{(LL\prime)}-\frac{3x}{(x-1)}+\frac{3x\log x}{(x-1)^2}\,,\\ k^{(LL\prime\pr)}&=&h^{(LL\prime\pr)}-\frac{3}{(x-1)}+\frac{3x\log x}{(x-1)^2}\,,\\ k^{(LRt)}&=&\tilde{k}^{(LRt)}=\tilde{k}^{(LRt\prime)}=h^{(LRt)}+\frac{3}{x-1}-\frac{3x\log x}{(x-1)^2}\,. \end{eqnarray} \normalsize } \chapter{Decay Widths}\label{app:allwidths} In this Appendix we present various decay widths obtained in calculating rates at NLO in QCD. For the FCNC processes covered in chapter~\ref{chap:neutral_currents} we separately present the decay widths for $t\to q Z$, $t\to q \gamma$ including the virtual NLO QCD corrections and the decay widths for bremsstrahlung processes of $t\to q g Z$, $t\to q g \gamma$. For the main decay channel analysis given in chapter~\ref{chap:CC} we present only the combined virtual and bremsstrahlung decay rates. \section{FCNC decays} \subsection{Virtual corrections}\label{app:dw1} Here we present the decay widths defined in Eq.~(\ref{eq:FCNC_virt}) with $C_F = 4/3$ {\allowdisplaybreaks \small \begin{eqnarray} \Gamma_{b}^{\gamma,\mathrm{virt.}}&=&\Gamma_{b}^{\gamma(0)} \bigg[1+\frac{\alpha_s}{4\pi}C_F\Big[-\frac{8}{\epsilon_{\mathrm{IR}}^2}+\frac{6}{\epsilon_{\mathrm{IR}}}-7-\frac{\pi^2}{3}+2\log\left(\frac{m_t^2}{\mu^2}\right)\Big]\bigg]\,,\label{eq:Gamma_virt_first}\\ \Gamma_{bg}^{\gamma,\mathrm{virt.}}&=&\Gamma_{b}^{\gamma(0)} \frac{\alpha_s}{4\pi}C_F Q\Big[-11+\frac{2\pi^2}{3}+4\log\left(\frac{m_t^2}{\mu^2}\right)\Big]\,,\\ \tilde{\Gamma}_{bg}^{\gamma,\mathrm{virt.}}&=&\Gamma_{b}^{\gamma(0)} \frac{\alpha_s}{4\pi}C_F Q\Big[-2\pi\Big]\,,\\ \Gamma_{a}^{Z,\mathrm{virt.}}&=&\Gamma_{a}^{Z(0)}\bigg[1+\frac{\alpha_s}{4\pi} C_F\Big\llbracket-\frac{8}{\epsilon_{\mathrm{IR}}^2}+\frac{-16 \log(1-r_Z)+\frac{4}{1+2 r_Z}+6}{\epsilon_{\mathrm{IR}}}\label{virtIR1} \\* &-& 16 \log^2(1-r_Z)+\frac{2(5+8r_Z)}{1+2 r_Z}\log(1-r_Z)-\frac{\pi^2}{3}-\frac{2(6+7r_Z)}{1+2r_Z} - 4\mathrm{Li}_2(r_Z)\Big\rrbracket\bigg]\,,\nonumber\\ \Gamma_{b}^{Z,\mathrm{virt.}}&=&\Gamma_{b}^{Z(0)}\bigg[1+\frac{\alpha_s}{4\pi}C_F\Big\llbracket-\frac{8}{\epsilon_{\mathrm{IR}}^2}+\frac{-16 \log(1-r_Z)-\frac{8}{2+ r_Z}+10}{\epsilon_{\mathrm{IR}}} \label{virtIR2}\\* &-& 16 \log^2(1-r_Z) +\frac{2(4+9r_Z)}{2+r_Z}\log(1-r_Z)-\frac{\pi^2}{3}+2\log\left(\frac{m_t^2}{\mu^2}\right) - \frac{2(7+6r_Z)}{2+r_Z}-4\mathrm{Li}_2(r_Z)\Big\rrbracket\bigg]\,,\nonumber \\ \Gamma_{ab}^{Z,\mathrm{virt.}}&=&\Gamma_{ab}^{Z(0)}\bigg[1+\frac{\alpha_s}{4\pi}C_F\Big\llbracket-\frac{8}{\epsilon_{\mathrm{IR}}^2}+\frac{-16 \log(1-r_Z)+\frac{22}{3}}{\epsilon_{\mathrm{IR}}} \label{virtIR3}\\* &-&16 \log^2(1-r_Z)-\frac{2(2-15 r_Z)}{3 r_Z}\log(1-r_Z)-\frac{\pi^2}{3}-\frac{26}{3}-4\mathrm{Li}_2(r_Z)\Big\rrbracket\bigg]\,,\nonumber \\ \Gamma_{ag}^{Z,\mathrm{virt.}}&=&\Gamma_{ab}^{Z(0)}\frac{\alpha_s}{4\pi}C_F \bigg[ 2 \hat v \log\left(\frac{m_t^2}{\mu^2}\right)+(\hat v-\hat a)\Big[\frac{1}{3}\log(r_Z)+\frac{2f_2}{3(1-r_Z)^2}\Big]\\* &+&\frac{2}{3}f_1\frac{\hat a(2-r_Z)+\hat v(1-2 r_Z)}{1-r_Z}+\frac{\hat a}{3} (4+\frac{1}{r_Z})-\frac{14\hat v}{3}\bigg]\,,\nonumber\\ \Gamma_{bg}^{Z,\mathrm{virt.}}&=&\Gamma_{b}^{Z(0)}\frac{\alpha_s}{4\pi}C_F\bigg[ 2 \hat v \log\left(\frac{m_t^2}{\mu^2}\right) + (\hat v-\hat a)\Big[\frac{r_Z}{2+r_Z}\log(r_Z)+ \frac{4 f_2}{(1-r_Z)^2(2+r_Z)}\Big]\\* &+& f_1\frac{\hat a (4+r_Z-r_Z^2) - \hat 
v(3+r_Z)r_Z}{(1-r_Z)(2+r_Z)} - \hat v\frac{11+4r_Z}{2+r_Z}+ \hat a \frac{6}{2+r_Z}\bigg]\,,\nonumber\\ \tilde{\Gamma}_{ag}^{Z,\mathrm{virt.}}&=&\Gamma_{ab}^{(0)} \frac{\alpha_s}{4\pi}C_F (\hat v-\hat a)(-\pi)\,,\\ \tilde{\Gamma}_{bg}^{Z,\mathrm{virt.}}&=&\Gamma_{b}^{(0)}\frac{\alpha_s}{4\pi}C_F (\hat v-\hat a)(-\pi)\,.\label{eq:gamma_virt_last} \end{eqnarray} \normalsize } \subsection{Bremsstrahlung}\label{app:dw2} Below we give the $t\to qgZ,\gamma$ bremsstrahlung decay rates, where for the photon channel Eqs.~(\ref{eq:GbbF1}, \ref{eq:GbbF2}, \ref{eq:GbbF3}) we include the kinematical cuts. For the purpose of shorter notation we define $\hat{x} \equiv \delta r_c$ and $\hat{y} \equiv 2 E_{\gamma}^{\mathrm{cut}}/m_t$. {\allowdisplaybreaks \small \begin{eqnarray} \Gamma_b^{\gamma,\mathrm{brems.}}&=&\Gamma_b^{\gamma(0)}\frac{\alpha_s}{4\pi}C_F\Big[ \frac{8}{\epsilon_{\mathrm{IR}}^2}-\frac{6}{\epsilon_{\mathrm{IR}}} + \frac{37}{3} - \pi^2\Big]\,, \label{eq:bremsIR0}\\ \Gamma_{bg}^{\gamma,\mathrm{brems.}}&=&\Gamma_b^{\gamma(0)}\frac{\alpha_s}{4\pi}C_F\Big[ -\frac{25}{3} + \frac{2}{3}\pi^2\Big]\,,\\ \Gamma_{a}^{Z,\mathrm{brems.}}&=& \Gamma_a^{Z(0)}\frac{\alpha_s}{4\pi}C_F\bigg[ \frac{8}{\epsilon_{\mathrm{IR}}^2}+\frac{16\log(1-r_Z)-\frac{4}{1+2r_Z}-6}{\epsilon_{\mathrm{IR}}} + 16 \log^2(1-r_Z)- 4\log (r_Z)\log(1-r_Z)\label{eq:bremsIR1}\\* &-& 4\frac{5 + 6 r_Z}{1+2r_Z}\log(1-r_Z)-\frac{4(1-r_Z-2r_Z^2)r_Z}{(1-r_Z)^2(1+2r_Z)} \log(r_Z)- \pi^2 -4\mathrm{Li}_2(r_Z) +\frac{7+r_Z}{(1-r_Z)(1+2r_Z)}+10\bigg]\,,\nonumber\\ \Gamma_{b}^{Z,\mathrm{brems.}}&=& \Gamma_b^{Z(0)}\frac{\alpha_s}{4\pi}C_F\bigg[ \frac{8}{\epsilon_{\mathrm{IR}}^2}+\frac{16\log(1-r_Z)+\frac{8}{2+r_Z}-10}{\epsilon_{\mathrm{IR}}} + 16\log^2(1-r_Z)-4\log(r_Z)\log(1-r_Z)\label{eq:bremsIR2}\\* &-&4\frac{6+5r_Z}{2+r_Z}\log(1-r_Z)-\frac{4(2-2r_Z-r_Z^2)r_Z}{(1-r_Z)^2(2+r_Z)}\log(r_Z) -\pi^2-4\mathrm{Li}_2(r_Z)-\frac{4-8r_Z}{(1-r_Z)(2+r_Z)}+\frac{43}{3}\bigg]\,,\nonumber\\ \Gamma_{ab}^{Z,\mathrm{brems.}}&=& \Gamma_{ab}^{Z(0)}\frac{\alpha_s}{4\pi}C_F\bigg[ \frac{8}{\epsilon_{\mathrm{IR}}^2}+\frac{16\log(1-r_Z)-\frac{22}{3}}{\epsilon_{\mathrm{IR}}} + 16\log^2(1-r_Z)-4\log(r_Z)\log(1-r_Z)\label{eq:bremsIR3}\\* &-&\frac{44}{3}\log(1-r_Z)-\frac{4(3-2r_Z)r_Z}{3(1-r_Z)^2}\log(r_Z) -\pi^2-4\mathrm{Li}_2(r_Z)-\frac{4}{3(1-r_Z)}+\frac{47}{3}\bigg]\,,\nonumber\\ \Gamma_{ag}^{Z,\mathrm{brems.}}&=&\frac{\Gamma^{Z(0)}_{ab}}{3(1-r_Z)^2} \frac{\alpha_s}{4\pi}C_F\Bigg[ 2\hat v\bigg\llbracket \frac{1}{4}(3 -4 r_Z+r_Z^2)+\log(r_Z)(1-r_Z-r_Z^2)-\mathrm{Li}_2(1-r_Z)\label{Bremsag}\\* &+&r_Z\sqrt{(4-r_Z)r_Z}\Big(\arctan\Big(\sqrt{\frac{r_Z}{4-r_Z}}\Big)+\arctan\Big(\frac{r_Z-2}{\sqrt{(4-r_Z) r_Z}}\Big)\Big)\nonumber \\* &+&2\mathrm{Re}\Big\{\mathrm{Li}_2\Big(\frac{1}{2}(1-r_Z)(2-r_Z-\mathrm{i}\sqrt{(4-r_Z)r_Z}\Big)\Big\} \bigg\rrbracket \nonumber\\* &+&2\hat a\bigg\llbracket \frac{1}{4}(3 -8 r_Z+5r_Z^2)+\frac{1}{2}\log(r_Z)(-2-7r_Z+2r_Z^2)+\mathrm{Li}_2(1-r_Z)\nonumber\\* &+&(3-r_Z)\sqrt{(4-r_Z)r_Z}\Big(\arctan\Big(\sqrt{\frac{r_Z}{4-r_Z}}\Big)+\arctan\Big(\frac{r_Z-2}{\sqrt{(4-r_Z) r_Z}}\Big)\Big)\nonumber \\* &-&2\mathrm{Re}\Big\{\mathrm{Li}_2\Big(\frac{1}{2}(1-r_Z)(2-r_Z-\mathrm{i}\sqrt{(4-r_Z)r_Z}\Big)\Big\} \bigg\rrbracket\Bigg]\,,\nonumber\\ \Gamma_{bg}^{Z,\mathrm{brems.}}&=& \frac{\Gamma_b^{Z(0)}}{(1-r_Z)^2 2(2+r_Z)} \frac{\alpha_s}{4\pi}C_F\Bigg[ 2 \hat v\bigg\llbracket \frac{1}{3}(1-r_Z)(-25+2r_Z-r_Z^2)-4r_Z\log(r_Z)(1+r_Z)\label{Bremsbg}\\* &-&4(1-r_Z)\sqrt{(4-r_Z)r_Z}\Big(\arctan\Big(\sqrt{\frac{r_Z}{4-r_Z}}\Big)+\arctan\Big(\frac{r_Z-2}{\sqrt{(4-r_Z) 
r_Z}}\Big)\Big)-4\mathrm{Li}_2(1-r_Z)\nonumber \\* &+&8 \mathrm{Re}\Big\{\mathrm{Li}_2\Big(\frac{1}{2}(1-r_Z)(2-r_Z-\mathrm{i}\sqrt{(4-r_Z)r_Z}\Big)\Big\} \bigg\rrbracket \nonumber\\* &+&2\hat a\bigg\llbracket 9-r_Z(2+7r_Z)+r_Z\log(r_Z)(8+5r_Z)+4\mathrm{Li}_2(1-r_Z) \nonumber\\* &+&2(2-r_Z)\sqrt{(4-r_Z)r_Z}\Big(\arctan\Big(\sqrt{\frac{r_Z}{4-r_Z}}\Big)+\arctan\Big(\frac{r_Z-2}{\sqrt{(4-r_Z) r_Z}}\Big)\Big) \nonumber \\* &-&8 \mathrm{Re}\Big\{\mathrm{Li}_2\Big(\frac{1}{2}(1-r_Z)(2-r_Z-\mathrm{i}\sqrt{(4-r_Z)r_Z}\Big)\Big\} \bigg\rrbracket\Bigg]\,,\nonumber\\ \Gamma_{g}^{Z} &=& \frac{\Gamma_b^{Z(0)}}{(1-r_Z)^2 2(2+r_Z)}\frac{\alpha_s}{4\pi}C_F\Bigg[ \frac{\hat v^2}{6} \bigg\llbracket(1-r_Z)(77-r_Z-4 r_Z^2)+ 3\log(r_Z)(10-4r_Z-9r_Z^2) \label{Bremsg}\\* &+&6 \sqrt{\frac{r_Z}{4-r_Z}}(20+10r_Z-3r_Z^2) \Big(\arctan\Big(\sqrt{\frac{r_Z}{4-r_Z}}\Big)+\arctan\Big(\frac{r_Z-2}{\sqrt{(4-r_Z) r_Z}}\Big)\Big) +12\log^2(r_Z)\nonumber \\* &+&48 \mathrm{Re}\Big\{\mathrm{Li}_2\Big(\frac{1}{2}+\frac{\mathrm{i}}{2} \sqrt{\frac{4-r_Z}{r_Z}}\Big)-\mathrm{Li}_2\Big(\frac{r_Z}{2}+\frac{\mathrm{i}}{2} \sqrt{(4-r_Z) r_Z}\Big)\Big\}\bigg\rrbracket \nonumber\\* &+&\frac{\hat a^2}{6}\bigg\llbracket\frac{(1-r_Z)}{r_Z}(1-70r_Z+38r_Z^2-5r_Z^3)+3\log(r_Z)(2+46r_Z-9r_Z^2+4\log(r_Z)) \nonumber\\* &-&6(20-3r_Z)\sqrt{(4-r_Z)r_Z}\Big(\arctan\Big(\sqrt{\frac{r_Z}{4-r_Z}}\Big)+\arctan\Big(\frac{r_Z-2}{\sqrt{(4-r_Z) r_Z}}\Big)\Big)\nonumber \\* &+&48 \mathrm{Re}\Big\{\mathrm{Li}_2\Big(\frac{1}{2}+\frac{\mathrm{i}}{2} \sqrt{\frac{4-r_Z}{r_Z}}\Big)-\mathrm{Li}_2\Big(\frac{r_Z}{2}+\frac{\mathrm{i}}{2} \sqrt{(4-r_Z) r_Z}\Big)\Big\}\bigg\rrbracket\nonumber \\* &+&\hat a \hat v \bigg\llbracket - 7 + 22 r_Z -15 r_Z^2 -\log(r_Z)(6-5r_Z^2+4\log(r_Z))\nonumber\\* &+&2(2+r_Z)\sqrt{(4-r_Z)r_Z}\Big(\arctan\Big(\sqrt{\frac{r_Z}{4-r_Z}}\Big)+\arctan\Big(\frac{r_Z-2}{\sqrt{(4-r_Z) r_Z}}\Big)\Big)\nonumber \\* &-&16 \mathrm{Re}\Big\{\mathrm{Li}_2\Big(\frac{1}{2}+\frac{\mathrm{i}}{2} \sqrt{\frac{4-r_Z}{r_Z}}\Big)-\mathrm{Li}_2\Big(\frac{r_Z}{2}+\frac{\mathrm{i}}{2} \sqrt{(4-r_Z) r_Z}\Big)\Big\}\bigg\rrbracket \Bigg]\,,\nonumber \\ \Gamma_{b}^{\gamma,\mathrm{brems.}}&=&\Gamma_{b}^{\gamma(0)} \frac{\alpha_s}{4\pi}C_F\Bigg[ \frac{8}{\epsilon_{\mathrm{IR}}^2}-\frac{6}{\epsilon_{\mathrm{IR}}}+1-\pi^2-2\frac{\hat{y}(1-\hat{y})(2\hat{y}-1)}{2-\hat{y} \hat{x}}+\hat{y} +\frac{4}{\hat{x}}(2-\hat{y})(1-\hat{y})\label{eq:GbbF1}\\* &-&16\frac{1-\hat{y}}{\hat{x}^2} -2\log^2(1-\hat{y})+(\hat{y}^2+2\hat{y}-10)\log(1-\hat{y})-\frac{2\hat{x}^2-24 \hat{x}+32}{\hat{x}^3}\log\Big(\frac{2-\hat{x}}{2-\hat{y}\hat{x}}\Big)\nonumber\\* &-&6\log\Big(\frac{2-\hat{x}}{2-\hat{x}\hat{y}(2-\hat{y})}\Big)-\Big(\frac{2}{\hat{x}}+\hat{y}^2+2\hat{y}\Big) \log\Big(\frac{2-\hat{x}\hat{y}(2-\hat{y})}{2-\hat{y}\hat{x}}\Big)\nonumber\\* &+&12\sqrt{2/\hat{x}-1}\arctan\Big(\frac{1-\hat{y}}{\sqrt{2/\hat{x}-1}}\Big) +4\mathrm{Li}_2\Big(\hat{x} \frac{1-\hat{y}}{\hat{x} -2}\Big)-2\mathrm{Li}_2\Big(\hat{x}\frac{(1-\hat{y})^2}{\hat{x}-2}\Big) \Bigg]\,,\nonumber\\ \Gamma_{bg}^{\gamma,\mathrm{brems.}}&=& \Gamma_{b}^{\gamma(0)} \frac{\alpha_s}{4\pi}C_F Q\Bigg[ -\frac{(1-\hat{y})(2-\hat{x})(\hat{y}\hat{x}^2-2\hat{y}\hat{x}-2\hat{x}+8)}{\hat{x}^2(2-\hat{y}\hat{x})}+\frac{2\pi^2}{3}\label{eq:GbbF2}\\* &-&4(1-\hat{y})\log\Big(\frac{2-\hat{x}\hat{y}(2-\hat{y})}{(1-\hat{y})(2-\hat{y}\hat{x})}\Big) -4\log(\hat{y})\log\Big(\frac{2-\hat{x}\hat{y}(2-\hat{y})}{2}\Big)+2\log\Big(\frac{\hat{x}}{2}\Big)\log\Big(\frac{2-\hat{x}}{2-\hat{x}\hat{y}(2-\hat{y})}\Big)\nonumber \\* 
&-&\frac{4}{\hat{x}^3}(\hat{x}^2-4\hat{x}+4)\log\Big(\frac{2-\hat{x}}{2-\hat{x}\hat{y}}\Big) +4\Big(\mathrm{Li}_2\Big(\frac{\hat{x}}{2}\Big)-\mathrm{Li}_2(\hat{y})-\mathrm{Li}_2\Big(\frac{\hat{x}\hat{y}}{2}\Big)\Big)\nonumber\\* &-&8\arctan\Big(\frac{1-\hat{y}}{\sqrt{2/\hat{x}-1}}\Big)\Big(\sqrt{2/\hat{x}-1}- \arctan\big(\sqrt{2/\hat{x}-1}\big)\Big) \nonumber\\* &+&8\mathrm{Re}\Big\{\mathrm{Li}_2\Big(\frac{1}{2}\big(2-\hat{x}-\mathrm{i}\sqrt{(2-\hat{x})\hat{x}}\big)\Big)- \mathrm{Li}_2\Big(\frac{1}{2}\big(2-\hat{x}\hat{y}-\mathrm{i} \hat{y}\sqrt{(2-\hat{x})\hat{x}}\big)\Big)\Big\}\Bigg] \nonumber\,,\\ \Gamma_{g}^{\gamma}&=& \Gamma_{b}^{\gamma(0)} \frac{\alpha_s}{4\pi}C_F Q^2\Bigg[-\frac{(1-\hat{y})(2-\hat{x})(3\hat{y}\hat{x}^2-4\hat{x}\hat{y}-8\hat{x}+16)}{\hat{x}^2(2-\hat{x}\hat{y})}+\frac{2\pi^2}{3}\label{eq:GbbF3}\\* &+&\big(4-2\hat{x}+4\log\Big(\frac{\hat{x}}{2}\Big)\big)\log(\hat{y}) +(3-\hat{y})(1-\hat{y})\log\Big(\hat{x}\frac{1-\hat{y}}{2-\hat{x} \hat{y}}\Big)\nonumber\\* &+&\frac{2}{\hat{x}^3}(2-\hat{x})(\hat{x}^3-\hat{x}^2+6\hat{x}-8)\log\Big(\frac{2-\hat{x}}{2-\hat{x}\hat{y}}\Big) +4\Big(\mathrm{Li}_2\Big(\frac{\hat{x}\hat{y}}{2}\Big)-\mathrm{Li}_2\Big(\frac{\hat{x}}{2}\Big)-\mathrm{Li}_2(\hat{y})\Big) \Bigg]\,.\nonumber \end{eqnarray} \normalsize } \section{Main decay channel}\label{app:gamma_main} Here we present analytical formulae for all nine $\Gamma^{L,+,-}_{a,b,ab}$ appearing in Eq.~(\ref{e3}) at ${\cal O}(\alpha_s)$ and in the $m_b=0$ limit. Note that in this section $x = m_W/m_t$. {\allowdisplaybreaks \small \begin{eqnarray} \Gamma_{a}^L &=&\frac{(1-x^2)^2}{2 x^2}+\frac{\alpha_s}{4\pi}C_F\Bigg[ \frac{(1-x^2)(5+47 x^2-4x^4)}{2 x^2}-\frac{2\pi^2}{3}\frac{1+5x^2+2x^4}{x^2}-\frac{3(1-x^2)^2}{x^2}\log(1-x^2)\\* &-&\frac{2(1-x)^2(2-x+6x^2+x^3)}{x^2}\log(x)\log(1-x)- \frac{2(1+x)^2(2+x+6x^2-x^3)}{x^2}\log(x)\log(1+x)\nonumber\\* &-&\frac{2(1-x)^2(4+3x+8x^2+x^3)}{x^2}\mathrm{Li}_2(x)-\frac{2(1+x)^2(4-3x+8x^2-x^3)}{x^2}\mathrm{Li}_2(-x)+16(1+2x^2)\log(x)\Bigg]\,,\nonumber\\ \Gamma_b^L&=&2x^2(1-x^2)^2+\frac{\alpha_s}{4\pi}C_F\Bigg[-2x^2(1-x^2)(21-x^2)+\frac{2\pi^2}{3}4x^2(1+x^2)(3-x^2)+4x^2(1-x^2)^2\log\Big(\frac{m_t^2}{\mu^2}\Big)\nonumber\\* &-&16x^2(3+3x^2-x^4)\log(x)-4(1-x^2)^2(2+x^2)\log(1-x^2)-8x(1-x)^2(3+3x^2+2x^3)\log(x)\log(1-x)\nonumber\\* &+&8x(1+x)^2(3+3x^2-2x^3)\log(x)\log(1+x)-8x(1-x)^2(3+2x+7x^2+4x^3)\mathrm{Li}_2(x)\nonumber\\* &+&8x(1+x)^2(3-2x+7x^2-4x^3)\mathrm{Li}_2(-x)\Bigg]\,,\\ \Gamma_{ab}^L&=&(1-x^2)^2+\frac{\alpha_s}{4\pi}C_F\Bigg[-(1-x^2)(1+11x^2)-\frac{2\pi^2}{3}(1-7x^2+2x^4)+(1-x^2)^2 \log\Big(\frac{m_t^2}{\mu^2}\Big)\nonumber\\* &-&\frac{2(1-x^2)^2(1+2x^2)}{x^2}\log(1-x^2)-4x^2(7-x^2)\log(x)-4(1-x)^2(1+5x+2x^2)\log(x)\log(1-x)\nonumber\\* &-&4(1+x)^2(1-5x+2x^2)\log(x)\log(1+x)-4(1-x)^2(3+9x+4x^2)\mathrm{Li}_2(x)\nonumber\\* &-&4(1+x)^2(3-9x+4x^2)\mathrm{Li}_2(-x)\Bigg]\,,\nonumber\\ \Gamma_{a}^+ &=&\frac{\alpha_s}{4\pi}C_F\Bigg[-\frac{1}{2}(1-x)(25+5x+9x^2+x^3)+\frac{\pi^2}{3}(7+6x^2-2x^4)-2(5-7x^2+2x^4)\log(1+x)\\* &-&2(5+7x^2-2x^4)\log(x)-\frac{(1-x)^2(5+7x^2+4x^3)}{x}\log(x)\log(1-x)-\frac{(1-x)^2(5+7x^2+4x^3)}{x}\mathrm{Li}_2(x)\nonumber\\* &+&\frac{(1+x)^2(5+7x^2-4x^3)}{x}\log(x)\log(1+x)+\frac{5+10x+12x^2+30x^3-x^4-12x^5}{x}\mathrm{Li}_2(-x)\Bigg]\,,\nonumber\\ \Gamma_{b}^+ &=&\frac{\alpha_s}{4\pi}C_F\Bigg[ \frac{4}{3}x(1-x)(30+3x+7x^2-2x^3-2x^4)-4\pi^2 x^4-8(5-9x^2+4x^4)\log(1+x)\\* &+&8x^2(1+5x^2)\log(x)-4(1-x)^2(4+5x+6x^2+x^3)\log(x)\log(1-x)\nonumber\\* &-&4(1+x)^2(4-5x+6x^2-x^3)\log(x)\log(1+x)-4(1-x)^2(4+5x+6x^2+x^3)\mathrm{Li}_2(x)\nonumber\\*
&-&4(4+3x-16x^2+6x^3+16x^4-x^5)\mathrm{Li}_2(-x)\Bigg]\,,\nonumber\\ \Gamma_{ab}^+ &=&\frac{\alpha_s}{4\pi}C_F\Bigg[ 2x(1-x)(15-11x)+\frac{2\pi^2}{3}x^2(5-2x^2)-2(13-16x^2+3x^4)\log(1+x)\\* &-&2(1-x)^2(5+7x+4x^2)\log(x)\log(1-x)-2(1+x)^2(5-7x+4x^2)\log(x)\log(1+x)\nonumber\\* &+&2x^2(1+3x^2)\log(x)-2(1-x)^2(5+7x+4x^2)\mathrm{Li}_2(x)-2(3+3x-31x^2+x^3+12x^4)\mathrm{Li}_2(-x)\Bigg]\,,\nonumber\\ \Gamma_{a}^- &=&(1-x^2)^2+\frac{\alpha_s}{4\pi}C_F\Bigg[-\frac{1}{2}(1-x)(13+33x-7x^2+x^3)+\frac{\pi^2}{3}(3+4x^2-2x^4)\\* \nonumber&-&2(5+7x^2-2x^4)\log(x)-\frac{2(1-x^2)^2(1+2x^2)}{x^2}\log(1-x)-\frac{2(1-x^2)(1-4x^2)}{x^2}\log(1+x)\\* \nonumber&-&\frac{(1-x)^2(5+7x^2+4x^3)}{x}\log(x)\log(1-x)+\frac{(1+x)^2(5+7x^2-4x^3)}{x}\log(x)\log(1+x)\\* \nonumber&-&\frac{(1-x)^2(5+3x)(1+x+4x^2)}{x}\mathrm{Li}_2(x)+\frac{5+2x+12x^2+6x^3-x^4-4x^5}{x}\mathrm{Li}_2(-x)\Bigg]\,,\\ \Gamma_{b}^- &=&4(1-x^2)^2+\frac{\alpha_s}{4\pi}C_F\Bigg[\frac{4}{3}(1-x)(16-14x+22x^2+18x^3-3x^4-3x^5)\\* \nonumber&-&\frac{\pi^2}{3}4(4+x^4) +8x^2(1+5x^2)\log(x)-24(1-x^2)^2\log(1-x)+8(1-x^2)(2-x^2)\log(1+x)\\* \nonumber&-&4(1-x)^2(4+5x+6x^2+x^3)\log(x)\log(1-x)-4(1+x)^2(4-5x+6x^2-x^3)\log(x)\log(1+x)\\* &-&4(1-x)^2(12+21x+14x^2+x^3)\mathrm{Li}_2(x)-4(12+3x+6 x^3-x^5)\mathrm{Li}_2(-x)+8(1-x^2)^2\log\Big(\frac{m_t^2}{\mu^2}\Big)\Bigg]\,,\nonumber\\ \Gamma_{ab}^- &=&2(1-x^2)^2+\frac{\alpha_s}{4\pi}C_F\Bigg[2(1-x)(9-6x+6x^2-5x^3)-\frac{2\pi^2}{3}(5+2x^4)+2x^2(1+3x^2)\log(x)\\* \nonumber&-&\frac{2(1-x^2)^2(1+5x^2)}{x^2}\log(1-x)-\frac{2(1-x^2)(1-9x^2-2x^4)}{x^2}\log(1+x)\\* \nonumber&-&2(1-x)^2(5+7x+4x^2)\log(x)\log(1-x)-2(1+x)^2(5-7x+4x^2)\log(x)\log(1+x)\\* &-&2(1-x)^2(13+23x+12x^2)\mathrm{Li}_2(x)-2(15+3x+5x^2+x^3+4x^4)\mathrm{Li}_2(-x)+2(1-x^2)^2\log\Big(\frac{m_t^2}{\mu^2}\Big)\Bigg]\,.\nonumber \end{eqnarray} \normalsize } \chapter{Three-body FCNC top decays}\label{app:allTB} In this Appendix we present some details of the $t\to q\ell^+ \ell^-$ analysis presented in section~\ref{sec:three_body} of chapter~{\ref{chap:neutral_currents}}. \section{Analytical formulae}\label{app:tqll} Below we give the complete analytic formulae for the partial differential decay rate distributions in terms of our chosen kinematical variables and the expression for functions appearing in FBA and LRA. Mostly they are given in unevaluated integral form, as analytic integration, though possible in most cases, yields very long expressions. \subsection{Photon mediation} The double and single differential decay widths are given as \begin{eqnarray} \frac{\mathrm{d}\Gamma^\gamma}{\mathrm{d} \hat{u}\mathrm{d}\hat{s}}&=&\frac{m_t}{16\pi^3}\frac{g_Z^4v^4}{\Lambda^4}B_{\gamma}\times\frac{1}{\hat{s}}\Big[\hat{s}(2\hat{u}-1) + 2\hat{u}^2-2\hat{u}+1\Big]\,,\\ \frac{\mathrm{d}\Gamma^\gamma}{\mathrm{d}\hat s} &=& \frac{m_t}{16\pi^3}\frac{g_Z^4v^4}{\Lambda^4} B_{\gamma} \frac{(1-\hat{s})^2(\hat{s}+2)}{3\hat{s}}\,.\nonumber \end{eqnarray} Functions $f_{\gamma}$ and $g_{\gamma}$ defined in Eqs.~(\ref{eq:gamma_gamma}, \ref{eq:def_g}) are \begin{eqnarray} f_{\gamma}(x) &=& \frac{1}{9} \Big[-x^3 + 9x-6\log(x)-8\Big]\,,\label{fg}\\ g_{\gamma}(x) &=&-\frac{13}{18}+3x-2x^2+x^3-\frac{2}{3}\log(4x)\,.\label{gg} \end{eqnarray} \subsection{$Z$ mediation}\label{app:Zmediation} We define $$ \hat{m}_Z = \frac{m_Z^2}{m_t^2}\,,\hspace{0.5cm}\gamma_Z = \frac{\Gamma_Z}{m_Z}\,. 
$$ The double and single differential decay widths are given as \begin{eqnarray} \frac{\mathrm{d} \Gamma^Z}{\mathrm{d} \hat{s}\mathrm{d} \hat{u}}&=&\frac{m_t}{16\pi^3}\frac{g_Z^4v^4}{\Lambda^4}\frac{1}{(\hat{s}-\hat{m}_Z)^2+\hat{s}^2\gamma_Z^2} \Big[\frac{ A + \alpha}{4}(1- \hat{s} - \hat{u} )( \hat{s} + \hat{u}) + \frac{ A - \alpha}{4} (1- \hat{u})\hat{u} \\ \nonumber &+&(B+\beta)\hat{u}\hat{s}(\hat{u}+\hat{s})+(B-\beta)\hat{s} (1-\hat{s}-\hat{u}) (1-\hat{u})+(C+\gamma)\hat{s} (1-\hat{u}-\hat{s})+(C-\gamma)\hat{u}\hat{s}\Big]\,,\\ \frac{\mathrm{d}\Gamma^Z}{\mathrm{d}\hat s} &=& \frac{m_t}{16\pi^3}\frac{g_Z^4v^4}{\Lambda^4}\frac{(\hat{s}-1)^2}{(\hat{s}-\hat{m}_Z)^2 + \hat{s}^2\gamma_Z^2} \Big[\frac{A}{12}(2\hat{s}+1) + \frac{B}{3}\hat{s}(\hat{s}+2) + C \hat{s}\Big]\,. \end{eqnarray} For the sake of shorter notation we first define \begin{eqnarray} \nonumber r_1 &=& \frac{(1-\hat{s})^2}{(\hat{s} - \hat{m}_Z)^2 + \hat{s}^2\gamma_Z^2}\,,\\ \nonumber r_2 &=& \frac{\frac{1}{8}(1-\hat{u})^2}{[(1-z)(1-\hat{u})-2\hat{m}_Z]^2 + \gamma_Z^2(1-z)^2(1-\hat{u})^2 }\,, \end{eqnarray} then present the $f_i$ functions defined in Eqs.~(\ref{eq:fcnc_Z_1}) in the form of the following integrals \begin{align} f_A&=\int_{0}^{1} \mathrm{d} \hat{s} \,r_1\, \frac{1}{12}(1+2\hat{s})\,,&\label{eq:fA} f_B&=\int_{0}^{1} \mathrm{d} \hat{s} \,r_1\, \frac{1}{3}(2\hat{s}+\hat{s}^2)\,,&\\ \nonumber f_C&=\int_{0}^{1} \mathrm{d} \hat{s} \,r_1\, \hat{s}\,,& f_{\alpha\beta\gamma} &= -\frac{1}{8} f_C\,. \end{align} The $g_i$ functions appearing in the LRA expressions defined in Eqs.~(\ref{eq:fcnc_Z_1}) are more complicated, because the angular variable appears in the resonant factor of the matrix element. For the sake of brevity we therefore define additional functions $G_i$ in which the $\hat{u}$ integration is performed \begin{eqnarray} g_i&=&\int_0^1 \mathrm{d} z \,G_i - \int_{-1}^{0}\mathrm{d} z \,G_i\,,\hspace{0.5cm} i = A,B,C,\alpha\beta\gamma\,,\label{eq:gZ}\\ \nonumber G_A &=&\int_{0}^{1} \mathrm{d} \hat{u}\, r_2\, (1+5\hat{u}+2\hat{u} z-z^2+\hat{u} z^2)\,,\\ \nonumber G_B &=&\int_{0}^{1} \mathrm{d} \hat{u}\, r_2\, 4(1-\hat{u}+2\hat{u}^2-2\hat{u} z-z^2+3\hat{u} z^2-2\hat{u}^2z^2)\,,\\ \nonumber G_C&=&\int_{0}^{1} \mathrm{d} \hat{u}\, r_2\, 4(1 + \hat{u} - 2\hat{u} z - z^2 +\hat{u} z^2)\,,\\ \nonumber G_{\alpha\beta\gamma} &=&\int_{0}^{1} \mathrm{d} \hat{u}\, r_2\, (1 - 3 \hat{u} + 2\hat{u} z - z^2 + \hat{u} z^2)\,. \end{eqnarray} \subsection{Interference between $Z$ and photon mediation}\label{app:fcnc_gammaZ} The interference contribution between the $Z$ and the photon to the double differential decay rate is \begin{eqnarray} \nonumber\frac{\mathrm{d} \Gamma^{\mathrm{int}}}{\mathrm{d} \hat{s}\mathrm{d}\hat{u}}&=&\frac{m_t}{16\pi^3}\frac{v^4g_Z^4}{\Lambda^4}\mathrm{Re}\Bigg\{\frac{\hat{s}-\hat{m}_Z-\mathrm{i}\hat{s}\gamma_Z}{(\hat{s}-\hat{m}_Z)^2+\hat{s}^2\gamma_Z^2}\times\\ \nonumber && \Big[2W_1(1-\hat{s}-\hat{u})(1-\hat{u})+2W_2\hat{u}(\hat{u}+\hat{s})+ W_3(1-\hat{s}-\hat{u}) + W_4\hat{u}\Big]\Bigg\}\,. \end{eqnarray} In all further computations we neglect the imaginary part in the propagator's numerator, since $\gamma_Z \sim 0.02$. This means that $\mathrm{Re}$ acts only on the model-dependent constants $W_1,\dots,W_4$. $f_i^{\epsilon}$ and $g_i^{\epsilon}$ are the same as $f_i$ and $g_i$, except that the integration limits are altered due to the di-lepton invariant mass cutoff $\epsilon$.
In $f_X$ the $\hat{s}$ integration now runs over the $[\epsilon/m_t^2,1]$ region, in $g_X$ the intervals for $z$ are $[0,1-2\epsilon/m_t^2]$ and $[-1,0]$, and in the $G_X$ functions the $\hat{u}$ integration runs over $\hat{u}\in[0,1-\frac{2\epsilon/m_t^2}{1-z}]$. We further define \begin{eqnarray} \nonumber r_3 &=& \frac{[(1-\hat{u})(1-z)-2\hat{m}_Z](1-\hat{u})}{[(1-z)(1-\hat{u})-2\hat{m}_Z]^2 + \gamma_Z^2(1-z)^2(1-\hat{u})^2}\,. \end{eqnarray} The new $f_i$ and $g_i$ functions defined in Eqs.~(\ref{eq:new_fs}) are \begin{subequations}\label{eq:fcnc_int_1} \begin{eqnarray} f_{W_{12}} &=&\int_{\epsilon/m_t^2}^1 \mathrm{d} \hat{s}\, r_1\,(\hat{s}-\hat{m}_Z) \frac{1}{3} (\hat{s} +2)\,,\label{fW12}\\ f_{W_{34}} &=&\int_{\epsilon/m_t^2}^1 \mathrm{d} \hat{s}\, r_1\,(\hat{s}-\hat{m}_Z) \frac{1}{2}\,,\label{fW34}\\ f_{W} &=&\frac{1}{2} f_{W_{34}}\,,\label{fW}\\ g_i&=&\int_0^{1-2\epsilon/m_t^2} \mathrm{d} z \,G_i - \int_{-1}^{0}\mathrm{d} z \,G_i\,,\label{gZg} \end{eqnarray} \end{subequations} \begin{align} G_{W_1}&=\int_{0}^{1-\frac{2\epsilon/m_t^2}{1-z}} \mathrm{d} \hat{u}\, r_3\, (1-\hat{u})^2(1+z)\,,& G_{W_2}&=\int_{0}^{1-\frac{2\epsilon/m_t^2}{1-z}} \mathrm{d} \hat{u}\, r_3\, \hat{u} (1+\hat{u}-z+z\hat{u})\,,\\ \nonumber G_{W_3}&=\int_{0}^{1-\frac{2\epsilon/m_t^2}{1-z}} \mathrm{d} \hat{u}\, r_3\, \frac{1}{2} (1+\hat{u}+z-z\hat{u})\,,& G_{W_4}&=\int_{0}^{1-\frac{2\epsilon/m_t^2}{1-z}} \mathrm{d} \hat{u}\, r_3\, \hat{u} \,. \end{align} \vfill \hfill \section{Matching to the parametrization of Fox et al.}\label{app:tofox} Here we present the conversion of the ${\cal L}_{\mathrm{eff}}$ presented in Ref.~\cite{Fox:2007in} to the form given in Eq.~(\ref{eq:Lagr}). Fox et al. give a complete set of dimension-six operators that generate a $tcZ$ or $tc\gamma$ vertex \begin{align*} O^u_{LL} &=\mathrm{i} \Big[\bar{Q}_3\tilde{H}\Big] \Big[(\slashed{D}\tilde{H})^{\dagger}Q_2\Big] -\mathrm{i}\Big[\bar{Q}_3(\slashed{D}\tilde{H})\Big] \Big[\tilde{H}^{\dagger}Q_2\Big] \,,& O_{LL}^h &= \mathrm{i} \Big[\bar{Q}_3\gamma^{\mu}Q_2\Big]\Big[H^{\dagger} (D_{\mu}H) - (D_{\mu}H)^{\dagger} H \Big] \,,\\ O_{RL}^w &=g_2\Big[\bar{Q}_2\sigma^{\mu\nu}\sigma^a \tilde{H}\Big]t_R W^a_{\mu\nu} \,,& O_{RL}^b &= g_1\Big[\bar{Q}_2\sigma^{\mu\nu}\tilde{H}\Big]t_R B_{\mu\nu}\,,\\ O_{LR}^w & = g_2\Big[\bar{Q}_3\sigma^{\mu\nu}\sigma^a \tilde{H}\Big]c_R W_{\mu\nu}^a \,,& O_{LR}^b &= g_1\Big[\bar{Q}_3\sigma^{\mu\nu} \tilde{H} \Big]c_R B_{\mu\nu} \,,\\ O^u_{RR} &=\mathrm{i} \bar{t}_R\gamma^{\mu}c_R \Big[H^{\dagger} (D_{\mu}H) - (D_{\mu}H)^{\dagger} H \Big] \,,& \end{align*} For notational details see Ref.~\cite{Fox:2007in}; we have suppressed the addition of the h.c. for every operator.
Keeping only the FCNC parts and the VEV of the Higgs field, we obtain \begin{align*} O^{u}_{LL} &= \frac{v^2}{2}[g A_{\mu}^3 - g' B_{\mu}]\,\,\Big[ \bar{ t}_L \gamma^{\mu}c_L\Big] \,,& O_{LL}^h &= \frac{v^2}{2}[g A_{\mu}^3 - g' B_{\mu}]\,\,\Big[\bar{t}_L \gamma^{\mu} c_L + \bar{b}_L \gamma^{\mu} s_L\Big] \,, \\ O_{RL}^w &=g\frac{v}{\sqrt{2}}W_{\mu\nu}^3\,\,\Big[\bar{c}_L \sigma^{\mu\nu} t_R\Big] \,,& O_{RL}^b &= g'\frac{v}{\sqrt{2}} B_{\mu\nu}\,\,\Big[\bar{c}_L \sigma^{\mu\nu} t_R\Big] \,, \\ O_{LR}^w &= g\frac{v}{\sqrt{2}}W_{\mu\nu}^3\,\,\Big[\bar{t}_L \sigma^{\mu\nu} c_R\Big] \,, & O_{LR}^b &=g'\frac{v}{\sqrt{2}} B_{\mu\nu}\,\,\Big[\bar{t}_L \sigma^{\mu\nu} c_R\Big] \,, \\ O_{RR}^u &= \frac{v^2}{2}[g A_{\mu}^3 - g' B_{\mu}]\,\,\Big[\bar{t}_R\gamma^{\mu}c_R \Big] \,.& \end{align*} Finally, our coupling constants can be expressed as \begin{subequations}\label{eq:fcnc_trans} \begin{eqnarray} a_L^Z &=& \frac{1}{2}\Big[C^u_{LL}+ C_{LL}^h\Big]\,,\label{eq:fox1}\\ a_R^Z &=& \frac{C^u_{RR}}{2} \,, \label{eq:fox2}\\ b_{LR}^Z &=& \frac{C_{RL}^w \cos^2\theta_W - C_{RL}^b \sin^2\theta_W}{\sqrt{2} } \,,\label{eq:fox3}\\ b_{RL}^Z &=& \frac{C_{LR}^w \cos^2\theta_W -C_{LR}^b \sin^2\theta_W}{\sqrt{2}}\,,\label{eq:fox4}\\ b_{LR}^{\gamma} &=& \frac{C_{RL}^w + C_{RL}^b }{\sqrt{2}}\,,\label{eq:fox5}\\ b_{RL}^{\gamma} &=& \frac{C_{LR}^w + C_{LR}^b }{\sqrt{2}}\,.\label{eq:fox6} \end{eqnarray} \end{subequations} \chapter{NP in Top Decays: Charged currents} \renewcommand{{\mathcal O}}{\mathcal Q} \label{chap:CC} \section{Introduction} In this chapter we analyze the possible deviations from the SM coupling strength and structure of the {\sl charged quark currents}. Following the strategy outlined in section~\ref{sec:strategy} of the introductory chapter, we explore on the one hand the implications of such deviations for low energy $B$ physics, and on the other hand delve into the possible consequences to be observed in top quark decays, focusing in particular on the main decay channel and the $tWb$ interaction. Results presented in this chapter are based on our published work~\cite{Drobnak:2011aa,Drobnak:2011wj,Drobnak:2010ej}. Unlike in the case of FCNC top quark decays, a complete analysis of the low energy constraints on model-independent anomalous charged quark currents has not been performed yet. In particular, an analysis of the effects of anomalous $tWd,s$ couplings on $B$ meson mixing has been attempted in Refs.~\cite{Lee:2008xs,Lee:2010hv}, using, however, only a subset of all possible effective $tWd,s$ operators. Furthermore, effects in the $b\to s \gamma$ transition have been analyzed in Ref.~\cite{Grzadkowski:2008mf}. Again, only a subset of the operators that we shall be considering has been used, and no other $\Delta B=1$ transitions have been considered. We therefore first set out to upgrade the analysis of indirect implications available in the literature, focusing on observables in $\Delta B=2$ and $\Delta B=1$ processes, presented in sections \ref{sec:dB2} and \ref{sec:dB1} respectively. The experimentally well-measured low energy observables, which mostly agree with the SM predictions, provide constraints on the anomalous charged quark current structures. Since the top quark, with its possible anomalous interactions, enters these processes as a virtual particle, we label such constraints on top quark physics as indirect. Only then do we turn to direct top quark physics, where the structure of the $tWb$ vertex can be probed by analyzing the main decay channel of on-shell produced top quarks at the Tevatron and the LHC.
The derived indirect constraints give us an idea of how much room is left for deviations away from the SM predictions, given that the NP can be parametrized in the form of anomalous $tWb$ couplings. We briefly comment that there is another class of observables in which virtual top quarks play an important role, called {\sl electroweak precision observables} (EWPO). Namely, anomalous $tWb$ vertices can affect the $S,T,U$ oblique parameters and the $Z\to b\bar{b}$ decays (see for example \cite{Peskin:1991sw,ALEPH:2005ab}). Following our publication of the results that are presented in this work, an analysis of the effects of anomalous top quark couplings on EWPO was given in Refs.~\cite{Zhang:2012cd,Greiner:2011tt}. Although the operator basis parametrizing NP used by the authors of Refs.~\cite{Zhang:2012cd,Greiner:2011tt} does not coincide with the one used in this work, especially as regards the flavor structure, there are some common operators for which we can confront the obtained indirect bounds, finding them comparable and compatible.

This chapter is structured as follows. The first section is devoted to the formulation of the effective theory. Specifying our framework in terms of an effective operator basis, we proceed to analyze the effects in $B$ physics, giving a detailed study of $\Delta B = 2$ transitions, namely $B$ meson mixing, and of $\Delta B = 1$ transitions, comprising FCNC $B$ meson decays. In the last section we focus on top quark physics and the effects of anomalous $tWb$ couplings on the helicity fractions in its main decay channel. Here we perform our analysis at NLO in QCD, which seems sensible since, as pointed out in section~\ref{sec:hfSM}, NLO QCD corrections prove crucial in the consideration of helicity fractions within the SM. We show that the direct bounds are in a sense complementary to the indirect bounds and reveal a nice interplay between top and bottom physics.

\section{Effective Lagrangian} Following the strategy outlined in section~\ref{sec:strategy}, our first objective is to specify the operator basis $\mathcal Q_i$ of interest, namely to determine the gauge and flavor structure of the dimension-six operators appearing in the effective Lagrangian (\ref{eq:lagr}). To analyze the effects in $B$ physics, we will have to further perform the second step illustrated in Fig.~\ref{fig:intout}. Much like in the SM case presented in sections \ref{sec:dB2} and \ref{sec:dB1}, we will have to integrate out the SM degrees of freedom with masses greater than that of the $b$ quark, matching $\mathcal L_{\mathrm{eff}}$ to the low energy effective Lagrangians (\ref{eq:LSMmix}, \ref{eq:loweff1}). By doing so, we shall gain access to the different $B$ physics observables of interest and the possibility of seeing how the predictions are affected by NP.

\subsection{Gauge structure} Our operator basis consists of all dimension-six operators invariant under the SM gauge group that generate charged current quark interactions with the $W$. The possible gauge structures we can use are \cite{Buchmuller:1985jz} \begin{subequations}\label{eq:GaugeStr} \begin{eqnarray} &&\big[\bar{u}\gamma^{\mu}d\big](\phi_u^{\dagger}\mathrm{i} D_{\mu}\phi_d)\,,\\ &&\big[\bar{Q}\gamma^{\mu} Q\big](\phi_d^{\dagger}\mathrm{i} D_{\mu}\phi_d)\,,\hspace{0.5cm}\big[\bar{Q}\gamma^{\mu}\tau^a Q\big](\phi^{\dagger}\tau^a\mathrm{i} D_{\mu}\phi)\,,\label{eq:GaugeStr_LL}\\ &&\big[\bar{Q}\sigma^{\mu\nu}\tau^a u\big]\phi_u W_{\mu\nu}^a\,,\\ &&\big[\bar{Q}\sigma^{\mu\nu}\tau^a d\big]\phi_d W_{\mu\nu}^a\,.
\end{eqnarray} \end{subequations} Here $Q$ stands for the quark $SU(2)_L$ doublet, $u$ and $d$ are the up- and down-type quark $SU(2)_L$ singlets, respectively, and $\tau^a$ are the $SU(2)_L$ Pauli matrices. In addition we have the covariant derivative and field strength definitions \begin{eqnarray} D_{\mu}&=&\partial_{\mu}+\mathrm{i} \frac{g}{2}W_{\mu}^a\tau^a +\mathrm{i} \frac{g'}{2}B_{\mu} Y\,, \\ W^a_{\mu\nu}&=&\partial_{\mu}W_{\nu}^a-\partial_{\nu}W_{\mu}^a - g\epsilon_{abc}W_{\mu}^b W_{\nu}^c\,,\nonumber \end{eqnarray} and finally $\phi_{u,d}$ are the up- and down-type Higgs fields (in the SM $\phi_u\equiv \tilde{\phi} =\mathrm{i} \tau^2 \phi_d^*$). The operators in Eq.~(\ref{eq:GaugeStr}) are written in terms of quark fields in the interaction basis and are flavor universal. \subsection{Flavor structure} On the flavor side, we restrict the structure of the operators to be consistent with the {\sl Minimal Flavor Violation} (MFV) hypothesis \cite{Buras:2003jf,D'Ambrosio:2002ex,Grossman:2007bd}, which postulates that even in the presence of NP operators, the Yukawa matrices present in the SM remain the sole source of flavor violation. The way to implement this concept is to make the operators formally invariant under the SM flavor group~(\ref{eq:SM_G_flav}) and insist that the only $\mathcal G^{\rm SM}$ symmetry breaking spurionic fields in the theory are the up and down quark Yukawa matrices $Y_{u,d}$, introduced in Eq.~(\ref{eq:intro_yuk}), formally transforming as $(3,\bar 3,1)$ and $(3,1,\bar 3)$ respectively. The four distinct quark bilinears appearing in Eq.~(\ref{eq:GaugeStr}) have different transformation properties under $\mathcal G^{\rm SM}$: $\bar u d$, $\bar Q Q$, $\bar Q u$ and $\bar Q d$ transform as $(1,\bar 3, 3)$, $({1\oplus8},1,1)$, $(\bar 3,3,1)$ and $(\bar 3, 1, 3)$, respectively. From them we can construct the most general $\mathcal G^{\rm SM}$ invariant structures as \begin{equation} \bar u Y_u^\dagger \mathcal A_{ud} Y_d d\,, ~~~ \bar Q \mathcal A_{QQ} Q\,, ~~~ \bar Q \mathcal A_{Qu} Y_u u\,,~~~ \bar Q \mathcal A_{Qd} Y_d d\,, \label{eq:flav} \end{equation} where $\mathcal A_{xy}$ are arbitrary polynomials of $Y_{u} Y_{u}^\dagger$ and/or $Y_{d}Y_d^\dagger$, transforming as $({1\oplus8},1,1)$. In order to identify the relevant flavor structures in terms of physical parameters, we can, without loss of generality, consider the $Y_{u,d}$ condensate values in the down basis in which $\langle Y_d \rangle$ is diagonal~(\ref{eq:down_basis}) \begin{eqnarray} \langle Y_d \rangle \simeq \mathrm{diag}(0,0,m_b)/v_d\,,\hspace{0.5cm} \langle Y_u \rangle \simeq V_{}^\dagger \mathrm{diag}(0,0,m_t)/v_u\,. \end{eqnarray} Here $V$ is the SM CKM matrix~(\ref{eq:CKMmat}) and we have introduced separate up- and down-type Higgs condensates $v_{u,d}$. We have also neglected the masses of the first two generations of quarks, which is the approximation we will be using throughout this chapter. Further, once we assume electroweak symmetry breaking (see Eq.~(\ref{eq:breaking}) and the text around it) we rewrite the quark fields in the mass eigenbasis. Making the flavor indices and chirality explicit, these fields are \begin{eqnarray} Q_i=(V^*_{ki} u_{Lk},d_{Li})^{\mathrm{T}}\,, \hspace{0.5cm} u_{iR}\,,\hspace{0.5cm} d_{iR}\,. \end{eqnarray} We consider first the simplest case of linear MFV, where $\mathcal A_{xy}$ is such that powers of $Y_d$ in (\ref{eq:flav}) do not exceed 1, and powers of $Y_u$ do not exceed 2. The obtained flavor structures are given in the first two columns of Tab.~\ref{tab:FlavStr}. 
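To make the spurion bookkeeping concrete, the following minimal numerical sketch (our illustration, not part of the published analyses; the rough real CKM pattern and the $y_t$, $y_b$ stand-ins for $m_t/v_u$ and $m_b/v_d$ are assumptions) verifies that $\langle Y_u Y_u^\dagger\rangle_{ij}$ reduces to $\tfrac{m_t^2}{v_u^2}V^*_{ti}V_{tj}$, and that $\langle Y_u Y_u^\dagger Y_d Y_d^\dagger\rangle$ has support only in its third column, anticipating the explicit index forms given below:
\begin{verbatim}
# Minimal numerical check of the MFV spurion structures (illustrative only).
import numpy as np

# Assumed inputs: a rough (real) CKM pattern and Yukawa stand-ins.
V = np.array([[ 0.974, 0.225, 0.004],
              [-0.225, 0.973, 0.041],
              [ 0.009,-0.040, 0.999]], dtype=complex)
yt, yb = 1.0, 0.024   # stand-ins for m_t/v_u and m_b/v_d

# Condensates in the down basis: <Y_d> diagonal, <Y_u> = V^dagger diag(0,0,y_t).
Yd = np.diag([0.0, 0.0, yb]).astype(complex)
Yu = V.conj().T @ np.diag([0.0, 0.0, yt])

# <Y_u Y_u^dagger>_ij should equal y_t^2 V*_{ti} V_{tj} (t = third row of V).
YuYu = Yu @ Yu.conj().T
assert np.allclose(YuYu, yt**2 * np.outer(V[2].conj(), V[2]))

# <Y_u Y_u^dagger Y_d Y_d^dagger> is nonzero only in its third column,
# reproducing the single additional structure quoted in the text.
mixed = YuYu @ Yd @ Yd.conj().T
print(np.nonzero(np.round(mixed, 12))[1])   # -> all column indices equal 2
\end{verbatim}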
Following~\cite{Kagan:2009bn}, the generalization of the above discussion to MFV scenarios where large bottom Yukawa effects can be important is straightforward. We implement it by raising the highest allowed power of $Y_d$ appearing in (\ref{eq:flav}) to 2. This gives us the additional flavor structures presented in the last two columns of Tab.~\ref{tab:FlavStr}. We can see from the form of $Y_u Y_u^\dagger$ and $Y_d Y_d^\dagger$ with flavor indices explicitly written out \begin{eqnarray} (Y_u Y_u^\dagger)_{ij} = \frac{m_t^2}{v_u^2}V^*_{ti}V_{tj}\,,\hspace{0.5cm}(Y_d Y_d^\dagger)_{ij} = \frac{m_b^2}{v_d^2}\delta_{3i}\delta_{3j}\,, \end{eqnarray} that apart from these two forms, the only additional flavor structure that can be obtained is \begin{eqnarray} (Y_u Y_u^\dagger Y_d Y_d^{\dagger})_{ij} &=& \frac{m_t^2}{v_u^2}\frac{m_b^2}{v_d^2}V^*_{ti}\delta_{3j}\,. \end{eqnarray} This means that our list given in Tab.~\ref{tab:FlavStr} is exhaustive, as forms of $\mathcal A$ more complex than presented therein generate no new flavor structures but just give higher-power overall factors. \begin{table}[h] \setlength{\doublerulesep}{0.05cm} \begin{center} \begin{tabular}{c|cc|c c}\hline\hline & $\mathcal A=\mathds{1}$ & $\mathcal A= Y_u Y_u^{\dagger}$ & $\mathcal A = Y_d Y_d^{\dagger} $& $\mathcal A = Y_u Y_u^\dagger Y_d Y_d^{\dagger} $ \\\hline $\bar u Y_u^\dagger \mathcal A_{ud} Y_d d$&$\frac{m_t}{v_u}\frac{m_b}{v_d} \bar t_R V_{tb} b_R$& & &\\ $\bar Q \mathcal A_{QQ} Q$ &$\bar Q_i Q_i$& $\frac{m_t^2}{v_u^2} \bar Q_i V^*_{ti} V_{tj} Q_j$ & $\frac{m_b^2}{v_d^2}\bar Q_3 Q_3$&$\frac{m_b^2}{v_d^2}\frac{m_t^2}{v_u^2}\bar Q_i V_{ti}^* V_{tb}Q_3$\\ $\bar Q \mathcal A_{Qu} Y_u u$ &$\frac{m_t}{v_u}\bar Q_i V^{*}_{ti}t_R $& &$\frac{m_b^2}{v_d^2}\frac{m_t}{v_u}\bar Q_3 V_{tb}^* t_R$ &\\ $\bar Q \mathcal A_{Qd} Y_d d$ &$\frac{m_b}{v_d}\bar Q_3 b_R$& $\frac{m_b}{v_d}\frac{m_t^2}{v_u^2}\bar Q_i V_{ti}^* b_R$&&\\\hline\hline \end{tabular} \end{center} \caption{MFV consistent flavor structures of the quark bilinear parts of dimension-six operators that generate charged current quark interactions with the $W$. The first two columns represent the simplest case of linear MFV with the highest allowed powers of $Y_u$ and $Y_d$ set to 2 and 1 respectively, while the last two columns represent MFV scenarios where large bottom Yukawa effects can be important, so the highest allowed $Y_d$ power is raised to 2.} \label{tab:FlavStr} \end{table} \subsection{Final operator basis} \label{sec:FinalBasis} Before writing down the final set of operators that we will be analyzing, we first put the obtained structures under further scrutiny and make some modifications. \begin{itemize} \item Since $\bar Q_i Q_i$ is completely flavor universal, when coupled to the $W$ it would modify the effective Fermi constant as extracted from charged quark currents compared to the muon lifetime. Existing tight constraints on such deviations~\cite{Antonelli:2009ws} do not allow for significant effects in $B$ meson or top quark phenomenology and we do not consider this structure in our analysis. \item On the other hand, $\bar Q_i V^*_{ti} V_{tj} Q_j$ potentially leads to large tree-level FCNCs in the down quark sector when coupled to the $Z$. 
Explicitly, both the singlet and triplet operators (\ref{eq:GaugeStr_LL}) include the following term \begin{eqnarray} \frac{\mathrm{i} g}{2c_W}V_{ti}^*V_{tj}\big[\bar{d}_{Li}\gamma^{\mu} d_{Lj}\big]Z_{\mu}\varphi_0^2\,, \end{eqnarray} where $\varphi_0$ is the lower $SU(2)_L$ component of $\phi_d$, having zero electric charge. After it acquires a VEV, such terms generate the aforementioned FCNC couplings. We therefore consider as our operator the linear combination of the singlet and the triplet, choosing the relative negative sign between them, thereby getting rid of the FCNCs in the bottom sector. \item Similarly, $\bar Q_i V^*_{ti} V_{tb} b_R$ includes the following term \begin{eqnarray} -V_{ti}^*\big[\bar{d}_{Li}\sigma^{\mu\nu}b_R\big] \varphi_0(s_W F_{\mu\nu}+c_W Z_{\mu\nu})\,, \end{eqnarray} which, once $\varphi_0$ acquires a VEV, generates tree-level FCNC $b\to s \gamma,Z$ transitions. These are already tightly constrained by $B\to X_s\gamma$ and $B\to X_s \ell^+ \ell^-$~\cite{Hurth:2008jc}. Since all the charged current mediating $SU(2)_L$ invariant operators of dimension six or less containing such a flavor structure necessarily involve either the $Z$ or the photon, we drop this structure from our subsequent analysis. \end{itemize} Taking these considerations into account, we obtain the set of seven effective dimension six operators invariant under the SM gauge group and consistent with the MFV hypothesis that involve charged quark currents \begin{subequations} \label{eq:ops1} \begin{eqnarray} \mathcal Q_{RR}&=& V_{tb} [\bar{t}_R\gamma^{\mu}b_R] \big(\phi_u^\dagger\mathrm{i} D_{\mu}\phi_d\big) \,, \\ \mathcal Q_{LL}&=&[\bar Q^{\prime}_3\tau^a\gamma^{\mu}Q'_3] \big(\phi_d^\dagger\tau^a\mathrm{i} D_{\mu}\phi_d\big)-[\bar Q'_3\gamma^{\mu}Q'_3]\big(\phi_d^\dagger\mathrm{i} D_{\mu}\phi_d\big),\\ \mathcal Q'_{LL}&=&[\bar Q_3\tau^a\gamma^{\mu}Q_3] \big(\phi_d^\dagger\tau^a\mathrm{i} D_{\mu}\phi_d\big) -[\bar Q_3\gamma^{\mu}Q_3]\big(\phi_d^\dagger\mathrm{i} D_{\mu}\phi_d\big),\\ \mathcal Q^{\prime\prime}_{LL}&=&[\bar Q'_3\tau^a\gamma^{\mu}Q_3] \big(\phi_d^\dagger\tau^a\mathrm{i} D_{\mu}\phi_d\big)-[\bar Q'_3\gamma^{\mu}Q_3]\big(\phi_d^\dagger\mathrm{i} D_{\mu}\phi_d\big),\\ \mathcal Q_{LRt} &=& [\bar Q'_3 \tau^a\sigma^{\mu\nu} t_R]{\phi_u}W_{\mu\nu}^a \,,\\ \mathcal Q'_{LRt} &=& [\bar Q_3 \tau^a\sigma^{\mu\nu} t_R]{\phi_u}W_{\mu\nu}^a \,,\\ \mathcal Q_{LRb} &=& [\bar Q_3 \tau^a\sigma^{\mu\nu} b_R]\phi_d W_{\mu\nu}^a \,, \end{eqnarray} \end{subequations} where we have introduced \begin{eqnarray} \bar Q'_3 =\bar Q_i V^*_{ti}= (\bar{t}_L,V_{ti}^*\bar{d}_{iL})^T\,. \end{eqnarray} We note that the final set of operators coincides with those considered in the $B\to X_s \gamma$ analysis of anomalous $tWb$ couplings~\cite{Grzadkowski:2008mf}, expanded by the three primed operators which originate from the structures given in the last two columns of Tab.~\ref{tab:FlavStr}, corresponding to higher order down-Yukawa MFV\footnote{In the final form of the operators we do not explicitly write out the $m_t/v_u$ and $m_b/v_d$ factors present in Tab.~\ref{tab:FlavStr}, technically shifting them to the Wilson coefficients.}. Furthermore, we do not make the operators hermitian, hence the effects of the operators $\mathcal Q_i^{\dagger}$ are accompanied by $C_i^{*}$ and will be kept track of separately. 
Notice that starting with the most general MFV construction we are led to a set of operators where the largest deviations in charged quark currents are expected to involve the third generation (a notable exception being the flavor universal $\bar Q_i Q_i$ structure present already in the SM, which we have dropped from our analysis). As a consequence of our effective theory approach, the operators ${\mathcal O}_i$ given in Eq.~(\ref{eq:ops1}) modify not only the $tWb$ vertex but exhibit a much richer flavor and gauge structure. Since our aim is to analyze the effects of these operators in $B$ physics, we make two notes regarding additional effects that ${\mathcal O}_i$ might cause and that we shall not be pursuing. \begin{itemize} \item $\mathcal Q_{LL}$ and $\mathcal Q_{LRt}$ also modify $tWs$ and $tWd$ vertices. Consequently, they also contribute to $K^0-\bar K^0$ mixing at one loop. However, their contributions to neutral kaon as well as $B$ meson oscillations turn out to be universal and purely real (see the discussion below Eq.~(\ref{eq:kappas})), so they cannot increase the $\epsilon$ and $\epsilon^{\prime}$ predictions. \item $\mathcal Q_{LRb}$ and $\mathcal Q'_{LL}$ also modify $uWb$ and $cWb$ vertices. This could interfere with the $V_{cb}$ and $V_{ub}$ extraction from semileptonic $B$ decays. Since these quantities are crucial for the reconstruction of the CKM matrix in MFV models, a consistent analysis of these operators would require a modified CKM unitarity fit, which is beyond the scope of this work. \end{itemize} The Feynman rules for all the vertices generated by ${\mathcal O}_i$ that are relevant for our analysis are presented in the Appendix~\ref{app:feyn_charged}. Since we shall be working in the general $R_{\xi}$ gauge, which enables a check of the $\xi$ dependence cancelation in the final results, we will also have to consider the would-be Goldstone bosons in our calculations. \section{$|\Delta B|=2$ transitions} \label{sec:mixing} Recently, possible NP effects in the $B_{q}-\bar B_{q}$ mixing amplitudes ($q=d,s$) have received considerable attention (c.f.~\cite{Lenz:2010gu} and references therein). In particular, within the SM the $B_d-\bar B_d$ mass difference and the time-dependent CP asymmetry in $B_d\to J/\psi K_s$ are strongly correlated with the branching ratio $\mathrm{Br}(B^+\to \tau^+ \nu)$. The most recent global analysis points to a disagreement of this correlation with direct measurements at the level of 2.9 standard deviations~\cite{Lenz:2010gu}. Similarly, in the $B_s$ sector the CP asymmetries measured by the Tevatron experiments, namely in $B_s \to J/\psi\phi$ and in di-muonic inclusive decays, when combined deviate from the SM prediction for the CP violating phase $\phi_s$ in $B_s-\bar B_s$ mixing by $3.3$ standard deviations~\cite{Lenz:2010gu}. This indication of a NP effect is however weakened by the latest LHCb result for the $\phi_s$ phase, inferred from a combined analysis of the $B_s\to J/\psi \phi$ and $B_s\to J/\psi f(980)$ channels \cite{lhcb:phi_s}, which shows agreement with the SM prediction, therefore eliminating the possibility of large NP contributions. In this section we analyze the effects of our operators (\ref{eq:ops1}) on the $B_{q}-\bar B_{q}$ mixing amplitudes. We first perform the matching to the low energy Lagrangian, where we consider only diagrams with a single ${\mathcal O}_i$ operator insertion, resulting in first order corrections in the $1/\Lambda^2$ expansion. 
Consideration of only a single NP operator insertion is a good approximation, given the small size of the observed deviations of the CP-conserving $B_{q}$ mixing observables from the SM predictions. However, we have also computed higher order insertions and checked explicitly that they do not change the conclusions of the numerical analysis presented in section \ref{sec:BBnumerics}. To obtain the constraints on NP contributions we rely on the recent global CKM and $B_{q}$ mixing fits given in Refs.~\cite{Lenz:2010gu} and \cite{Lenz:2012az}. \subsection{Matching} In order to study the effects of our operators (\ref{eq:ops1}) on the matrix elements relevant in $B_{q}-\bar B_{q}$ mixing, we normalize them, following~\cite{Lenz:2010gu}, to the SM values given in Eq.~(\ref{eq:mixnorm}) by writing \begin{equation} M_{12}^{q}=\frac{1}{2m_{B_q}}\langle B_{q}^0|{\cal H}_{\mathrm{eff}}|\bar{B}_{q}^0\rangle_{\mathrm{disp}}= M_{12}^{q,\mathrm{SM}}\Delta_{q}\,,\label{mat} \end{equation} where the deviation of the parameter $\Delta_{q}$ from $1$ quantifies NP contributions. Proceeding in a similar fashion as in section \ref{sec:SMmix}, where we have analyzed the mixing amplitudes in the SM, we now match our effective theory to the low energy effective theory relevant for $|\Delta B|=2$ transitions, which is governed by the Lagrangian \begin{eqnarray} {\cal L}_{\mathrm{eff}} =-\frac{G_F^2 m_W^2}{4 \pi^2}\big(V_{tb}V^*_{tq}\big)^2 \sum_{i=1}^5 C_i(\mu) {\cal O}_i^{q}\,. \label{eq:lagrBB} \end{eqnarray} Compared to the low energy effective Lagrangian (\ref{eq:LSMmix}), to which we were matching the pure SM contributions, the effective Lagrangian (\ref{eq:lagrBB}) contains four additional operators~\cite{Becirevic:2001xt} \begin{eqnarray} \mathcal O_2^d = \big[\bar{d}_R^{\alpha}b_L^{\alpha}\big]\big[\bar{d}_R^{\beta}b_L^{\beta}\big]\,,\hspace{0.5cm} \mathcal O_3^d =\big[\bar{d}_R^{\alpha}b_L^{\beta}\big]\big[\bar{d}_R^{\beta}b_L^{\alpha}\big]\,,\\ \nonumber\mathcal O_4^d= \big[\bar{d}_R^{\alpha}b_L^{\alpha}\big]\big[\bar{d}_L^{\beta}b_R^{\beta}\big]\,,\hspace{0.5cm} \mathcal O_5^d =\big[\bar{d}_R^{\alpha}b_L^{\beta}\big]\big[\bar{d}_L^{\beta}b_R^{\alpha}\big]\,, \end{eqnarray} which need to be included since non-SM chirality structures are present in our operator basis. We have explicitly written out the $\alpha,\beta$ color indices. In the matching procedure the $W$ boson and the top quark are integrated out by computing the box diagrams, such as the one depicted in Fig.~\ref{fig:NPmix}, which now contain anomalous couplings from insertions of the operators ${\mathcal O}_i$. The box diagrams with anomalous couplings appearing in the bottom-right corner instead of the top-left, and the crossed diagrams with internal quark and boson lines exchanged, are completely symmetric and need not be computed separately. We note that working in the general $R_{\xi}$ gauge for weak interactions brings about new anomalous interactions of would-be Goldstone bosons, generated by the $\mathcal Q^{({\prime},{\prime}{\prime})}_{LL}$ and $\mathcal Q_{RR}$ operators. What is more, in the general $R_{\xi}$ gauge the operators ${\mathcal O}_{LL}$ and ${\mathcal O}_{LL}^{\prime\pr}$ contribute to the mixing amplitudes also through the triangle diagrams shown on the right-hand side of Fig.~\ref{fig:NPmix}. \begin{figure}[h] \begin{center} \includegraphics[scale= 0.6]{anomalmix.pdf} \caption{Feynman diagrams for $\bar{B}_q\to B_q$ transitions with one insertion of $\mathcal Q_i$ operators, labeled with a square. 
The zigzag lines represent $W$ gauge bosons or would-be Goldstone scalars $\phi$. Quarks running in the loop are up-type quarks. {\bf Left}: Box diagram to which all $\mathcal Q_i$ contribute. {\bf Right}: Triangle diagrams generated only by $\mathcal Q_{LL}^{(\prime\prime)}$ operators. } \label{fig:NPmix} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{chirality.pdf} \end{center} \caption{Schematic consideration of ${\mathcal O}_{RR}$ and ${\mathcal O}_{LRb}$ insertions in the mixing diagrams. Crosses and $q$s on the top quark propagators mark which part of the propagator is needed due to the specific chirality demands of the vertices. A $q$ at an anomalous vertex marks that the Feynman rule for that vertex includes the loop momentum $q$. } \label{fig:chirality} \end{figure} By simple consideration of the chirality structure of the diagrams we find that a single insertion of the operators $\mathcal Q_{RR}$ and $\mathcal Q_{LRb}$ gives contributions suppressed by the down quark masses. This is illustrated in Fig.~\ref{fig:chirality}, which shows explicitly the chiralities of the quark fields and the number of times the loop integration momentum $q$ appears in the mixing diagrams. Because we neglect the down quark masses, no chirality flip of these fields is possible, and therefore their chirality is determined by the interaction vertices. The up-type (top) quarks in the loop, however, can experience a chirality flip or instead pick up the loop integration momentum $q$ from the propagator. What is more, some anomalous couplings also include the momentum $q$ in the Feynman rule. If the total number of times that $q$ appears in the diagram is odd, such a diagram gives a zero contribution due to the symmetry of the integration over $\mathrm{d}^4 q$ and the fact that the only momentum appearing in the diagram is $q$ (see section~\ref{sec:dB2}) \begin{eqnarray} \int \frac{\mathrm{d}^4 q}{(2\pi)^4} \frac{q^{2n} q^\mu}{\prod_{i}[q^2-m^2_i]} = 0 \,,\hspace{0.5cm} n=0,1,2,\dots \,. \end{eqnarray} This is what happens in all one-insertion diagrams for ${\mathcal O}_{RR}$ and ${\mathcal O}_{LRb}$, which are presented in Fig.~\ref{fig:chirality}, meaning that these two operators contribute only upon two insertions and will therefore not be considered further in this section. In turn, the only four-quark operator of the effective Lagrangian (\ref{eq:lagrBB}) relevant for the matching is $\mathcal O_1^q$, and we can write \begin{eqnarray}\label{eq:C1_and_NP} C_1 = C_1^{\mathrm{SM}} + \delta C_1\,, \end{eqnarray} where $C_1^{\mathrm{SM}}$ is given in Eq.~(\ref{eq:C1SM}) and $\delta C_1$, which is identical for the $q=d$ and $q=s$ cases, is \begin{eqnarray} \delta C_1 = \mathrm{Re}[\kappa_{LL}] S_0^{LL}(x_t) +\mathrm{Re}[\kappa_{LRt}]S_0^{LRt}(x_t) +\kappa_{LL}^{\prime(\prime\prime)}S_{0}^{LL\prime(\prime\prime)}(x_t) +\kappa_{LRt}^{\prime}S_{0}^{LRt\prime}(x_t)\,, \label{LOWils} \end{eqnarray} where $x_t=m_t^2/m_W^2$ and we have defined \begin{eqnarray} \kappa_{LL}^{(\prime,\prime\prime)}=\frac{C_{LL}^{(\prime,\prime\prime)}}{\Lambda^2\sqrt{2}G_F}\,,\hspace{0.5cm} \kappa_{LRt}^{(\prime)}=\frac{C_{LRt}^{(\prime)}}{\Lambda^2 G_F}\,. \label{eq:kappas} \end{eqnarray} The $S_0^i(x_t)$ loop functions are given in Eqs.~(\ref{eq:S0s}) of the Appendix. Their gauge independence has been checked by the cancelation of all $\xi$-dependent terms. There are two features of the obtained results that we want to point out. 
\begin{itemize} \item The first one is that the $S_0^{LL}$ and $S_0^{LL\prime\prime}$ contributions turn out to be UV-divergent. The divergences originate from the triangle diagrams with two would-be Goldstone bosons running in the loop\footnote{Had we been working in the unitary gauge, these diagrams would not exist. The UV divergences would however still emerge from the box diagrams, since a single ${\mathcal O}_{LL}$ or ${\mathcal O}_{LL}^{\prime\prime}$ insertion demands one of the up-type quarks to be the top quark. This means that rather than having two summations in Eq.~(\ref{eq:SMmixAMP1}) as we did in the SM case, we have just one, which leaves the results UV divergent.}. We renormalize them using the $\overline{\mathrm{MS}}$ prescription, leading to remnant renormalization scale dependent terms of the form $x_t\log m_W^2/\mu^2$. Because of this ultraviolet renormalization, it would be inconsistent to assume that no other operators but those in Eq.~(\ref{eq:ops1}) comprise the dimension-six part of the Lagrangian (\ref{eq:lagr}). In particular, on dimensional grounds it is easy to verify that the appropriate MFV consistent counter-terms are generated by the four-quark operators of the form \begin{eqnarray} {\mathcal O}_{4Q}=\big[\bar Q \mathcal A_{QQ} \gamma^{\mu}Q\big]\big[\bar Q \mathcal A^\prime_{QQ} \gamma_{\mu}Q\big]\,, \hspace{0.3cm} \mathcal A_{QQ} = Y_u Y_u^{\dagger}\,,\hspace{0.3cm} \mathcal A_{QQ}^\prime = \bigg\{\hspace{-0.2cm} \begin{array}{c l } Y_u Y_u^\dagger\,,& \text{for } {\mathcal O}_{LL}\,\\ Y_u Y_u^\dagger Y_d Y_d^\dagger\,,&\text{for } {\mathcal O}_{LL}^{\prime\pr}\, \end{array}\,, \end{eqnarray} giving the change in the Wilson coefficient \begin{eqnarray} \delta C_1 = \frac{C_{4Q}}{\Lambda^2\sqrt{2}G_F}8\pi^2\,x_t\,. \end{eqnarray} The appearance of $x_t$ in the expression above is crucial to match the $x_t$ factor accompanying the UV divergences in the ${\mathcal O}_{LL}$ and ${\mathcal O}_{LL}^{\prime\pr}$ contributions to $\delta C_1$. Generic tree-level contributions of this kind to $\delta C_1$ have been analyzed in detail in~\cite{Ligeti:2010ia}, although not in the context of radiative corrections but as standalone dimension-six $\Delta F=2$ effective operators adhering to MFV; we will not consider them further. It is however important to keep in mind that our derived bounds on $\kappa_i$ presented in the next section assume that the dominant NP effects at the $\mu\simeq m_t$ scale are represented by a single $\mathcal Q_i$ insertion. \item The second feature to be pointed out is that only the real parts of $\kappa_{LL}$ and $\kappa_{LRt}$ enter Eq.~(\ref{LOWils}), so these operators cannot introduce a new CP violating phase. On a computational level, this is due to the fact that these operators always contribute to the mixing amplitudes in hermitian conjugate pairs. In particular, a box diagram with an insertion of the operator ${\mathcal O}_{LL}$ or ${\mathcal O}_{LRt}$ in the upper-left corner is accompanied by a diagram with an insertion of ${\mathcal O}_{LL}^\dagger$ or ${\mathcal O}_{LRt}^\dagger$ in the upper-right corner, since, as was pointed out in section \ref{sec:FinalBasis}, these operators also affect the $tWd$ and $tWs$ vertices. Both diagrams give the same result, one with a $\kappa_{LL,LRt}$ and the other with a $\kappa_{LL,LRt}^*$ pre-factor, resulting in the appearance of $\mathrm{Re}[\kappa_{LL,LRt}]$ in the sum of both contributions. Similarly, the triangular diagrams generated by ${\mathcal O}_{LL}$ are also generated by ${\mathcal O}_{LL}^{\dagger}$. 
This inability to introduce new phases can also be understood more generally already at the operator level. Namely, as shown in Ref.~\cite{Blum:2009sk}, a necessary condition for new flavor violating structures $\mathcal Y_x$ to introduce new sources of CP violation in quark transitions is that \begin{eqnarray} {\rm Tr}(\mathcal Y_x[\langle Y_u Y_u^\dagger \rangle , \langle Y_d Y_d^\dagger \rangle])\neq 0\,. \label{eq:Perez1} \end{eqnarray} In MFV models (where $\mathcal Y_x$ is built out of $Y_u$ and $Y_d$) this condition can only be met if $\mathcal Y_x$ contains products of both $Y_u$ and $Y_d$. In our analysis this is true for all operators except $\mathcal Q_{LL}$ and $\mathcal Q_{LRt}$. \end{itemize} \subsection{Semi-numerical formula} \label{sec:BBnumerics} In order to evaluate the hadronic matrix elements of the operators, we have to evolve the Wilson coefficient~(\ref{eq:C1_and_NP}) from the matching scale at the top quark mass to the low energy scale at the bottom quark mass. The next-to-leading log (NLL) running for the SM Wilson coefficient $C_1^{\mathrm{SM}}$ in the $\overline{\mathrm{MS}}$ (NDR) scheme is \cite{Buras:2001ra} \begin{eqnarray} C_1^{\mathrm{SM}}(m_b)&=& 0.840\, C_1^{\mathrm{SM}}(m_t)\,. \end{eqnarray} Because we are relying on results with consistent $\overline{\mathrm{MS}}$ renormalization procedures, we need to use the $\overline{\mathrm{MS}}$ quark masses $m_t\equiv\overline{m}_t(\overline{m}_t)$, $m_b\equiv\overline{m}_b(\overline{m}_b)$. Following the reasoning outlined in the last paragraph of section~\ref{sec:strategy} we assume the same running also for the $\delta C_1$ part. Under this assumption the effects of running cancel in the ratio between $\delta C_1$ and the SM contribution, and the parameter for the quantification of NP in the mixing amplitudes can then be written as \begin{eqnarray} \nonumber \Delta &=& 1+\frac{\delta C_1}{C_1^{\mathrm{SM}}} = 1 + \sum_i \kappa_i\,\frac{S_0^{i}(x_t)}{S_0^{\mathrm{SM}}(x_t)}\\ &=&1- 2.57\, \mathrm{Re}[\kappa_{LL}]-1.54\,\mathrm{Re}[\kappa_{LRt}] +2.00\, \kappa_{LL}^{\prime}-1.29\, \kappa_{LL}^{\prime\prime} - 0.77\,\kappa_{LRt}^{\prime}\,, \label{eq:seminum1} \end{eqnarray} where the parameters $\kappa_i$ are understood to be evaluated at the high matching scale $m_t$. In order to be consistent with the global analysis of Ref.~\cite{Lenz:2010gu}, on which we shall rely in the next section, we have used the numerical values for masses and other parameters as specified therein. \subsection{Bounds on NP contributions} Let us first assume that the anomalous couplings $\kappa_i$ are real. Using Eq.~(\ref{eq:seminum1}), we consider one $\kappa_i(\mu=m_t)$ at a time to be non-zero. The assumption of real $\kappa_i$ makes our NP fall under the ``scenario II'' of \cite{Lenz:2010gu}, for which the global analysis gives \begin{eqnarray} \Delta = 0.90 \Big[\hspace{-0.2cm}\begin{array}{c} {\scriptstyle +0.07} \vspace{-0.1cm}\\ {\scriptstyle -0.07} \end{array}\hspace{-0.2cm}\Big] \Big[\hspace{-0.2cm}\begin{array}{c} {\scriptstyle +0.31} \vspace{-0.1cm}\\ {\scriptstyle -0.10} \end{array}\hspace{-0.2cm}\Big]\,, \end{eqnarray} where the bracketed intervals represent the $1\sigma$ and $2\sigma$ C.L. intervals around the central value, which we can use to obtain $95\%$ C.L. bounds on $\kappa_i$. We present our results in Tab.~\ref{tab:MixingBounds}. \begin{table}[h!] \begin{center} \begin{minipage}{0.2\textwidth} \begin{tabular}{c|c}\hline\hline & $95 \%$ C.L. 
\\\hline $\kappa_{LL}$ &$\begin{array}{r} 0.08\\-0.09\end{array}$\\\hline $\kappa_{LL}^{\prime}$ &$\begin{array}{r} 0.11\\-0.09\end{array}$\\\hline $\kappa_{LL}^{\prime\pr}$ &$\begin{array}{r} 0.18\\-0.18\end{array}$\\\hline $\kappa_{LRt}$ &$\begin{array}{r} 0.13\\-0.14\end{array}$\\\hline $\kappa_{LRt}^{\prime}$ &$\begin{array}{r} 0.29\\-0.29\end{array}$\\ \hline\hline \end{tabular} \end{minipage} \begin{minipage}{0.5\textwidth}\begin{center} \includegraphics[scale=0.88]{BBbarbounds.pdf} \end{center} \end{minipage} \caption{$95\%$ C.L. allowed intervals for $\kappa_i$, which are considered to be real and analyzed with only one being different from zero at a time. The accompanying graph shows $\Delta$ as a function of a single $\kappa_i$. The horizontal line represents the central fitted value and the orange bands the $1\sigma$ and $2\sigma$ regions for $\Delta$ as obtained in the global analysis of \cite{Lenz:2010gu}. The $95\%$ C.L. limits on the parameters $\kappa_i$ are obtained by looking at where the functions cross the outer orange regions. } \label{tab:MixingBounds} \end{center} \end{table} Compared to the existing $B\to X_s \gamma$ constraints for the couplings $\kappa_{LL}$ and $\kappa_{LRt}$ given in Ref.~\cite{Grzadkowski:2008mf}, we find our bounds on $\kappa_{LL}$ to be comparable, while the bounds on $\kappa_{LRt}$ are improved. Relaxing the assumption of real $\kappa_i$, contributions of the primed operators can introduce new CP violating phases if the anomalous couplings have nonzero imaginary components. The analysis of such general complex contributions to $\Delta$ falls under the ``scenario III'' of \cite{Lenz:2010gu}. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{newcomplex.pdf} \caption{68\% (dashed) and 95\% (solid) C.L. allowed regions for $\kappa_{LL}^{\prime(\prime\pr)}$ and $\kappa_{LRt}^{\prime}$ in the complex plane obtained from Eq.~(\ref{eq:seminum1}) and the results of Ref.~\cite{Lenz:2012az}. } \label{fig:phis} \end{center} \end{figure} As we have already mentioned at the beginning of this chapter, the best fitted values for NP when new phases are considered, given in \cite{Lenz:2010gu}, are highly influenced by the extraction of $\phi_s$, the mixing angle in the $B_s$ system, from Tevatron data. One can expect the fits to change quite significantly for the imaginary component of $\Delta$, if the newest measurement of $\phi_s^{\mathrm{exp.}}$ from LHCb~\cite{lhcb:phi_s} is taken into consideration, since it is in agreement with the SM prediction. This is confirmed by comparing the graphical results from Ref.~\cite{Lenz:2010gu} and the recent update given in Ref.~\cite{Lenz:2012az}, which we use to present the $68\%$ and $95\%$ C.L. regions in the complex plane for $\kappa_{LL}^{\prime(\prime\pr)}$ and $\kappa_{LRt}^\prime$ shown in Fig.~\ref{fig:phis}. We can still observe the preference of non-zero imaginary components by the global fits, the significance of which is however reduced compared to the situation prior to the LHCb measurement~\cite{Drobnak:2011wj}. \subsection{Summary} To summarize, we have matched our effective Lagrangian (\ref{eq:lagr}) to the low energy effective Lagrangian (\ref{eq:lagrBB}) and analyzed the impact of the effective operators ${\mathcal O}_i$ on $|\Delta B| = 2$ mixing to first order in $1/\Lambda^2$, allowing one ${\mathcal O}_i$ insertion in mixing diagrams. 
We have shown that the operators ${\mathcal O}_{RR}$ and ${\mathcal O}_{LRb}$ do not contribute upon one insertion, and while the operators ${\mathcal O}_{LL}$ and ${\mathcal O}_{LRt}$ cannot contribute any new phases to the mixing, the primed operators ${\mathcal O}_{LL}^{\prime (\prime\pr)}$ and ${\mathcal O}_{LRt}^{\prime}$ can. The effects of ${\mathcal O}_i$ turn out to be the same for the $B_d$ and $B_s$ systems, so that they can be parametrized in terms of a single parameter $\Delta$. Following the global analysis of \cite{Lenz:2010gu,Lenz:2012az} we were able to put constraints on $\kappa_i$. In particular, first assuming the Wilson coefficients $\kappa_i$ to be real, we were able to obtain for them the $95\%$ C.L. allowed intervals given in Tab.~\ref{tab:MixingBounds}, which, compared to the $b\to s \gamma$ constraints given in Ref.~\cite{Grzadkowski:2008mf}, prove to be competitive for $\kappa_{LL}$ and improved for $\kappa_{LRt}$. For the three primed operators, which can contribute new phases, we have obtained the 95\% C.L. allowed regions in the corresponding complex planes, presented in Fig.~{\ref{fig:phis}}. \section{$|\Delta B| = 1$ transitions} In this section we turn to the analysis of the rare $|\Delta B|=1$ processes introduced in section~\ref{sec:dB1}. After performing the one-loop matching of our operator basis~(\ref{eq:ops1}) onto the low energy effective Lagrangian~(\ref{eq:loweff1}), we obtain corrections to the relevant Wilson coefficients. We proceed by calculating the effects in the inclusive $B\to X_s\gamma$ and $B \to X_s \ell^+ \ell^-$ decays. In order to derive bounds on both the real and imaginary parts of the appropriate Wilson coefficients we include the experimental results not only for the decay rates but also for the CP asymmetry in $B \to X_s \gamma$. After performing a global fit of the Wilson coefficients, we derive predictions for several rare $B$ meson processes: $B_s \to \mu^+ \mu^-$, the forward-backward asymmetry in $B \to K^* \ell^+ \ell^-$ and the branching ratios for $B \to K^{(*)} \nu \bar \nu$. \subsection{Matching} The matching procedure closely resembles the one described for the SM case in section~\ref{sec:dB1}. We are again interested in NP contributions to the observables at order $1/\Lambda^2$ and thus only consider single operator insertions. Generic penguin and box diagrams with anomalous couplings are shown in Fig.~\ref{fig:feyns11} and Fig.~\ref{fig:feyns3}, where, again due to our commitment to the $R_{\xi}$ gauge, we are faced with diagrams containing would-be Goldstone bosons. Exact diagrams for a specific ${\cal Q}_i$ can be reconstructed using the Feynman rules given in the Appendix~\ref{app:feyn_charged}. \begin{figure}[h] \begin{center} \includegraphics[scale= 0.6]{feyns12.pdf} \caption{Types of Feynman diagrams encountered when computing $b\to s V$ transitions, where $V$ stands for $\gamma, Z, g$. Dotted lines represent would-be Goldstone bosons, crosses mark additional points where $V$ can be emitted in one-particle-reducible diagrams and a square represents an anomalous coupling. Gluon emission is only possible from quark lines and with the SM coupling. Quarks running in the loops are up-type.} \label{fig:feyns11} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale= 0.6]{leptonbox.pdf} \caption{Box Feynman diagrams contributing to $b\to s \ell^+ \ell^-$ and $b\to s \nu\bar{\nu}$ transitions. Diagrams with would-be Goldstone bosons are absent, since the leptons are treated as massless. 
The quark running in the loop is up-type.} \label{fig:feyns3} \end{center} \end{figure} As a result of the matching procedure we obtain deviations from the SM values for the Wilson coefficients, $C_i = C_i^{\rm SM} + \delta C_i$, which we parametrize as \begin{eqnarray} \delta C_i(\mu) &=&\sum_{j}\kappa_j(\mu) f_i^{(j)}(x_t,\mu) + \kappa_j^{*}(\mu) \tilde{f}^{(j)}_i (x_t,\mu)\,,\label{eq:fs} \end{eqnarray} where the index $j$ runs over the operator basis~(\ref{eq:ops1}), $\mu$ is the matching scale and $\kappa_j$ are rescaled Wilson coefficients defined as\footnote{For easier reading we repeat the definitions of $\kappa_{LL}^{(\prime,\prime\pr)}$ and $\kappa_{LRt}^{(\prime)}$ previously given in Eq.~(\ref{eq:kappas}).} \begin{eqnarray} \kappa_{LL}^{(\prime,\prime\prime)}=\frac{C_{LL}^{(\prime,\prime\prime)}}{\Lambda^2\sqrt{2}G_F}\,,\hspace{0.3cm} \kappa_{RR}=\frac{C_{RR}}{\Lambda^2 2\sqrt{2} G_F}\,,\hspace{0.3cm} \kappa_{LRb}=\frac{C_{LRb}}{\Lambda^2 G_F}\,,\hspace{0.3cm} \kappa_{LRt}^{(\prime)}=\frac{C_{LRt}^{(\prime)}}{\Lambda^2 G_F}\,. \label{eq:kappas2} \end{eqnarray} Separate track is kept of the ${\mathcal O}_i$ and ${\mathcal O}_i^\dagger$ contributions, which are quantified by the functions $f_i^{(j)}$ and $\tilde{f}_i^{(j)}$, respectively, whose analytic expressions are given in the Appendix~\ref{app:SM_D_B_1}. We note that the matching procedure for the operator ${\cal Q}_{LL}^{\prime}$ stands out compared to the other operators. The charged current structure of this operator resembles that of the SM operator $\bar{Q}_i\gamma^{\mu}\tau^a Q_i W^a_\mu$. Consequently, when considering ${\mathcal O}_{LL}^{\prime}$, the Wilson coefficients $C_{1,\dots,8}$ are changed in a trivial way, $C_i(\mu) = (1+\kappa_{LL}^{\prime}(\mu))C_i^{\mathrm{SM}}(\mu)$. The change of the remaining Wilson coefficients $C_{9,10,\nu\bar{\nu}}$, the matching of which involves the $Z$ boson, is however not of this form. As in the case of $|\Delta B|=2$ processes, some of the diagrams in Fig.~\ref{fig:feyns11} are UV divergent. We remove these divergences using the $\overline{\mathrm{MS}}$ prescription, leading to remnant $\log (m_W^2/\mu^2)$ terms. We shall quantify the matching scale dependence of our results, and consequently their sensitivity to the UV completion of the effective theory, by varying the scale between $\mu=2m_W$ and $\mu=m_W$. Since UV renormalization is necessary, our operator basis again needs to be extended to include operators that can serve as the appropriate counter-terms. Within the employed MFV framework, examples of these operators read \begin{eqnarray} {\cal Q}_1^{\mathrm{c.t.}}=\big[\bar{Q}\sigma^{\mu\nu}\mathcal A_{Qd} Y_d\tau^a d\big]\phi_d W^a_{\mu\nu}\,,\hspace{0.5cm} {\cal Q}_2^{\mathrm{c.t.}}=\big[\bar{Q}\gamma^{\mu}\mathcal A_{QQ}Q\big]\big[\bar{\ell}\gamma_{\mu}\ell\big]\,, \label{eq:counterterms} \\ {\cal Q}_3^{\mathrm{c.t.}}=\big[\bar{Q}\gamma^{\mu}\tau^a \mathcal A^{\prime}_{QQ}Q\big]\big[\phi_d^{\dagger}\tau^a\mathrm{i} D_{\mu}\phi_d\big] + \big[\bar{Q}\gamma^{\mu}\mathcal A^{\prime}_{QQ}Q\big]\big[\phi_d^{\dagger}\mathrm{i} D_{\mu}\phi_d\big]\,.\nonumber \end{eqnarray} The operator ${\cal Q}_1^{\mathrm{c.t.}}$ produces a counter-term for divergences in $\delta C_7$, while ${\cal Q}_{2,3}^{\mathrm{c.t.}}$ provide counter-terms for the divergent parts of $\delta C_{9,10,\nu\bar{\nu}}$. The operator ${\cal Q}_3^{\mathrm{c.t.}}$ generates a tree-level $bZs$ vertex. 
The sets of flavor matrices needed to match the structures of divergences generated by the various operators in \eqref{eq:ops1} are \begin{eqnarray} \mathcal A_{Qd}&=&Y_u Y_u^{\dagger}\,, \\ \nonumber\mathcal A_{QQ}&=&Y_u Y_u^{\dagger}\,,\, Y_u Y_u^{\dagger} Y_d Y_d^{\dagger}\,,\nonumber \\ \nonumber\mathcal A_{QQ}^{\prime}&=&Y_u Y_u^{\dagger}\,,\, (Y_u Y_u^{\dagger})^2\,,\,Y_u Y_u^{\dagger}Y_d Y_d^{\dagger}\,,\, (Y_u Y_u^{\dagger})^2Y_d Y_d^{\dagger}\,,\, Y_u Y_u^{\dagger} Y_d Y_d^{\dagger}Y_u Y_u^{\dagger}\,. \end{eqnarray} Just as in the analysis of NP in $|\Delta B|=2$ processes, we will selectively set contributions of certain operators to be nonzero in our numerical analysis. Consequently, in the following we will drop the implicit (tree-level) contributions of the operators in~\eqref{eq:counterterms} to $\delta C_i$, as these have already been investigated and constrained in the existing literature~\cite{Hurth:2008jc}. All $f_i^{(j)}, \tilde{f}_i^{(j)}$ are found to be $\xi$ independent, and a crosscheck with results from Ref.~\cite{Grzadkowski:2008mf} is possible for some of them. We confirm their original results for all the operators except $\mathcal Q_{LRb}$, while an updated version of~\cite{Grzadkowski:2008mf} confirms our result also for this operator. \begin{table}[h] \begin{tabular}{c|c|cc|cc|cc|cc|cc|cc}\hline\hline &SM&$\kappa_{LL}$&$\kappa_{LL}^*$&$\kappa_{LL}^{\prime}$&$\kappa_{LL}^{\prime*}$&$\kappa_{LL}^{\prime\pr}$&$\kappa_{LL}^{\prime\pr*}$&$\kappa_{RR}$&$\kappa_{LRb}$&$\kappa_{LRt}$&$\kappa_{LRt}^*$&$\kappa_{LRt}^{\prime}$&$\kappa_{LRt}^{\prime*}$\\\hline $f_7$&-0.19 &0.45& 0.45& -0.19& 0& 0.45& 0& -45.3& 85.5& -0.13& -0.17& -0.15& -0.17\\ $f_8$&-0.095&0.24& 0.24& -0.095& 0& 0.48& 0& -20.2& 54.5& 0.15& 0.05& 0& 0.05\\ $f_9$&1.34&-1.11& -1.11& 1.35& 0.09& -1.11& 0.009& 0& 0& 0.64& 0.64& 0.009& 0.64\\ $f_{10}$&-4.16&1.48& 1.48& -4.28& -0.12& 1.48& -0.12& 0& 0& -2.41& -2.41& 0& -2.41 \\ $f_{\nu\bar{\nu}}$&-6.52&2.38& 2.38& -6.63& -0.12& 2.38& -0.12& 0& 0& -4.25& -4.25& 0& -4.25\\ \hline\hline \end{tabular} \caption{Numerical values of the functions $f_i^{(j)}$ and $\tilde{f}_i^{(j)}$ at $\mu=2 m_W$. Numerical values used for the input parameters are $\overline{m}_t(2m_W)=165.0$~GeV, $s_W^2=0.231$, $m_W= 80.4$~GeV, $\overline{m}_b(2m_W)= 2.9$~GeV, $|V_{tb}|^2=1$. All $f_i$ values correspond to matching at LO in QCD.} \label{tab:cs} \end{table} To quantify the effects of our seven operators on the Wilson coefficients (\ref{eq:fs}) we present the numerical values of $f_i^{(j)}$ evaluated at $\mu=2m_W$ in Tab.~\ref{tab:cs}. We see that the contributions of the operators ${\cal Q}_{LL}$ and ${\cal Q}_{LL}^{\dagger}$ are identical in all cases, which means that $\kappa_{LL}$ cannot induce new CP violating phases in the Wilson coefficients. Likewise, $\mathcal Q_{LRt}$ contributions to $C_{9,10,\nu\bar\nu}$ are Hermitian, but this operator can induce a new CP violating phase in $C_{7,8}$. Finally, at order $1/\Lambda^2$, the operators ${\cal Q}_{RR}$ and ${\cal Q}_{LRb}$, which contain right-handed down quarks, only contribute to $C_{7,8}$. These contributions are however very significant, since they appear enhanced as $m_t/m_b$ (\ref{eq:rr}, \ref{eq:lrb}) due to the lifting of the chiral suppression, as already pointed out in Ref.~\cite{Grzadkowski:2008mf}. \subsection{Bounds on anomalous couplings}\label{sec:bounds} Having computed $\delta C_i$ in terms of $\kappa_{j}$, we turn our attention to observables affected by such shifts. 
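As a quick numerical illustration of Eq.~(\ref{eq:fs}) and Tab.~\ref{tab:cs}, the following sketch (ours, purely illustrative and not part of the published analysis; the conjugate entries not displayed in the table for $\kappa_{RR}$ and $\kappa_{LRb}$ are set to zero here as an assumption) evaluates the shifts $\delta C_i(2m_W)$ for a chosen set of anomalous couplings:
\begin{verbatim}
# delta C_i = sum_j [ kappa_j * f_i^(j) + conj(kappa_j) * ftilde_i^(j) ],
# with (f, ftilde) pairs transcribed from the table for a subset of couplings.
F = {
    "LL":  {7: (0.45, 0.45),   9: (-1.11, -1.11), 10: (1.48, 1.48)},
    "LRt": {7: (-0.13, -0.17), 9: (0.64, 0.64),   10: (-2.41, -2.41)},
    "RR":  {7: (-45.3, 0.0),   9: (0.0, 0.0),     10: (0.0, 0.0)},
    "LRb": {7: (85.5, 0.0),    9: (0.0, 0.0),     10: (0.0, 0.0)},
}

def delta_C(i, kappas):
    """Shift of the Wilson coefficient C_i at mu = 2 m_W."""
    return sum(k * F[name][i][0] + k.conjugate() * F[name][i][1]
               for name, k in kappas.items())

# A purely imaginary kappa_LL drops out of delta C_7 (f = ftilde), while
# kappa_LRb feeds C_7 with a large, chirally enhanced coefficient.
print(delta_C(7, {"LL": 0.05j}))    # -> 0
print(delta_C(7, {"LRb": 0.001}))   # -> ~0.0855
\end{verbatim}
The first example makes explicit, at the numerical level, why $\kappa_{LL}$ cannot induce new CP violating phases in the Wilson coefficients.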
In particular, at order $1/\Lambda^2$ the presently most constraining observables -- the decay rates for $B\to X_s\gamma$ and $B\to X_s \ell^+\ell^-$ -- are mostly sensitive to the real parts of $\kappa_i$~\cite{Huber:2005ig}. While in general both $B\to X_{d,s} \gamma$ channels are complementary in their sensitivity to flavor violating NP contributions~\cite{Crivellin:2011ba}, within MFV such effects are to a very good approximation universal, and the smaller theoretical and experimental uncertainties in the latter mode make it favorable for our analysis. In order to bound the imaginary parts of $\kappa_i$, we consider the CP asymmetry in $B\to X_s \gamma$. Finally, we compare and combine these bounds with the ones obtained from $B_{q} - \bar B_{q}$ oscillation observables given in Tab.~\ref{tab:MixingBounds}. \subsubsection{Real parts} We consider the inclusive $B\to X_s\gamma$ and $B\to X_s \ell^+\ell^-$ branching ratios, for which the presently most precise experimental values have been compiled in~\cite{Asner:2010qj, Huber:2007vv} \begin{align} &\mathrm{Br}[\bar{B}\to X_s \gamma]_{E_{\gamma}>1.6~\mathrm{GeV}}=(3.55 \pm 0.26)\times 10^{-4}\,, \\ &\mathrm{Br}[\bar{B}\to X_s \mu^+ \mu^-]_{\mathrm{low}\, q^2}\hspace{0.2cm}=(1.60\pm 0.50)\times 10^{-6}\,.\nonumber \end{align} Because the SM contributions to $C_{i}(\mu_b)$ and the corresponding operator matrix elements are mostly real~\cite{Huber:2005ig}, the linear terms in $\delta C_i$, which stem from SM--NP interference, contribute mostly as $\mathrm{Re}[\delta C_i]$. These are the only terms contributing at order $1/\Lambda^2$. Therefore, the bounds derived from these two observables are mostly sensitive to the real parts of $\kappa_j$. Using the results of~\cite{Huber:2005ig}, we have explicitly verified that the small $\mathrm{Im}[\delta C_i]$ contributions to ${\mathrm{Br}} [\bar B\to X_s \ell^+\ell^-]$ have a negligible effect for all operators except $\mathcal Q_{RR,LRb}$. However, even for these operators ${\rm Im}[\kappa_i]$ are much more severely constrained by $\mathcal A_{X_s \gamma}$, discussed in the next section. Also, using known NLO $B\to X_s\gamma$ formulae~\cite{Chetyrkin:1996vx}, we have verified that ${\rm Im}[\delta C_i]$ contributions to this decay rate are negligible. To analyze the effects of $\delta C_i$ on the two branching ratios, we therefore neglect the small $\mathrm{Im}[\delta C_i]$ contributions and employ the semi-numerical formulae given in Ref.~\cite{DescotesGenon:2011yn} with a few modifications that we specify below. \begin{itemize} \item In~\cite{DescotesGenon:2011yn} all predictions are given in terms of $\delta C_i$ at the scale $\mu_b=4.8$~GeV. Since we wish to check how our results depend on the matching scale $\mu$, we express $\delta C_i(\mu_b)$ using NNLO QCD running~\cite{Bobeth:1999mk,Gracey:2000am,Gambino:2003zm,Gorbahn:2005sa} as \begin{eqnarray} \delta C_7(\mu_b) &=& 0.627\,\delta C_7(m_W)\,,\\ \delta C_7(\mu_b) &=& 0.579\,\delta C_7(2m_W)\,.\nonumber \end{eqnarray} On the other hand, $C_{9,10}$ are only affected by EW running and their change with scale from $2m_W$ to $m_W$ is negligible. \item The authors of Ref.~\cite{DescotesGenon:2011yn} assumed $\delta C_8 =0$, which is not the case in our analysis. However, LO $C_7$ and $C_8$ (thus also $\delta C_7$ and $\delta C_8$) enter both observables in approximately the same combination (conventionally denoted as $C_7^{eff}$, c.f.~\cite{Buras:1993xp}). 
Employing the known SM NNLO matching and RGE running formulae~\cite{Bobeth:1999mk,Gracey:2000am,Gambino:2003zm,Gorbahn:2005sa} we can correct for this with a simple substitution, $\delta C_7 \to \delta C_7 + 0.24\, \delta C_8$, in the expressions of Ref.~\cite{DescotesGenon:2011yn} for the branching ratios, where we have neglected the small difference between the matching conditions at $\mu = 2m_W$ and $\mu=m_W$. We have verified that in this way we reproduce approximately the known $\delta C_8$ dependencies in $B\to X_s\gamma$~\cite{Freitas:2008vh} and $B\to X_s \ell^+\ell^-$~\cite{Huber:2005ig}. \item As pointed out in the previous section, ${\cal Q}_{LL}^{\prime}$ is to be treated differently from the other operators. Its effects in $\mathcal O_{1,\dots,8}$ can be seen as a shift in the appearing CKM factor, $V_{tb}V_{ts}^*\to(1+\kappa_{LL}^{\prime})V_{tb}V_{ts}^*$. Consequently, the SM predictions for these contributions simply get multiplied by a factor of $|1+\kappa_{LL}^{\prime}|^2$, and only $\delta C_{9,10,\nu\bar{\nu}}$ need to be considered separately. \end{itemize} Taking all this into account and considering only one operator ${\cal Q}_i$ to contribute at a time, we obtain the 95\% C.L. bounds on ${\rm Re} [\kappa_i]$ shown in Tab.~\ref{tab:bounds}. \begin{table}[h!] \hspace{-1cm} \begin{center} \begin{tabular}{c|ccc|c|c}\hline\hline &$B-\bar{B}$&$B\to X_s\gamma$&$B\to X_s \mu^{+}\mu^-$ & combined & $C_i(2m_W)\sim 1$ \\ \hline \LINE{$\kappa_{LL}$}{$\bs{0.08}{-0.09}$} {$\bs{0.03}{-0.12}$} {$\bs{0.48}{-0.49}$} {$\bs{0.04}{-0.09}\Big(\bs{0.03}{-0.10}\Big)$}& $\Lambda> 0.82\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{LL}^{\prime}$}{$\bs{0.11}{-0.11}$} {$\bs{0.17}{-0.04}$} {$\bs{0.31}{-0.30}$} {$\bs{0.11}{-0.06}\Big(\bs{0.10}{-0.06}\Big)$}& $\Lambda> 0.74\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{LL}^{\prime\pr}$}{$\bs{0.18}{-0.18}$} {$\bs{0.06}{-0.22}$} {$\bs{1.02}{-1.04}$} {$\bs{0.08}{-0.17}\Big(\bs{0.05}{-0.15}\Big)$}& $\Lambda> 0.60\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{RR}$}{} {$\bs{0.003}{-0.0006}$} {$\bs{0.68}{-0.66}$} {$\bs{0.003}{-0.0006}\Big(\bs{0.002}{-0.0006}\Big)$}& $\Lambda> 3.18\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{LRb}$}{} {$\bs{0.0003}{-0.001}$} {$\bs{0.34}{-0.35}$} {$\bs{0.0003}{-0.001}\Big(\bs{0.003}{-0.01}\Big)$}& $\Lambda> 9.26\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{LRt}$}{$\bs{0.13}{-0.14}$} {$\bs{0.51}{-0.13}$} {$\bs{0.38}{-0.37}$} {$\bs{0.13}{-0.07}\Big(\bs{0.12}{-0.14}\Big)$}& $\Lambda> 0.81\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{LRt}^{\prime}$}{$\bs{0.29}{-0.29}$} {$\bs{0.41}{-0.11}$} {$\bs{0.75}{-0.73}$} {$\bs{0.27}{-0.07}\Big(\bs{0.25}{-0.06}\Big)$}& $\Lambda> 0.56\,\, \mathrm{TeV}$\\\hline\hline \end{tabular} \end{center} \hspace{1.4cm} \begin{center} \includegraphics[scale= 1.0]{Bars2.pdf} \end{center} \caption{Lower and upper $95\%$ C.L. bounds on the real parts of individual anomalous couplings $\kappa_j$ for $\mu = 2 m_W$ and bracketed for $\mu=m_W$, where $\overline{m}_t(m_W)=173.8$~GeV and $\overline{m}_b(m_W)=3.06$~GeV have been used. *\,The $B\to X_s \ell^+\ell^-$ bounds on $\mathrm{Re}[\kappa_{RR,LRb}]$ are valid in the $\mathrm{Im}[\kappa_{RR,LRb}]=0$ limit; see text for details. The last column shows estimated lower bounds on the NP scale $\Lambda$, assuming the combined bounds and Wilson coefficients $C_i$ of order one. 
The accompanying plot serves for a visual comparison of the presently allowed intervals.} \label{tab:bounds} \end{table} The first column shows bounds obtained from $B_q-\bar{B}_q$ mixing as analyzed in section~\ref{sec:mixing}, while the ``combined'' column corresponds to combined bounds from all three observables. For the latter we also present the results when the matching scale is set to $\mu=m_W$ to check the scale dependence of our results. We can see that the bounds obtained change significantly only in the case of $\kappa_{LRb}$, where lowering the scale to $\mu=m_W$ loosens the bounds by almost an order of magnitude. We have also checked that the $B\to X_s \gamma$ bounds agree nicely with those obtained in Ref.~\cite{Grzadkowski:2008mf}. In the last column we present the lower bound on the NP scale $\Lambda$ obtained from the combined bounds and under the assumption that the Wilson coefficients are of order $1$ at the scale $2m_W$. The bounds on $\kappa_{RR}$ and $\kappa_{LRb}$ indeed turn out to be an order of magnitude more stringent than for the rest of the coefficients. This was anticipated by the numerical values given in Tab.~{\ref{tab:cs}}, where very large effects of the operators ${\mathcal O}_{RR}$ and ${\mathcal O}_{LRb}$ on $C_7$ and $C_8$ were observed. \subsubsection{Imaginary parts} We have shown in section \ref{sec:mixing} that the imaginary parts of the primed Wilson coefficients~(\ref{eq:kappas}) can affect the CP violating phase in $B_{q}-\bar B_{q}$ mixing, and nonzero values were found to be favored by the global fit of~\cite{Lenz:2012az}. To constrain the imaginary parts of the remaining four operators, which do not contribute with new phases in $B_{q}-\bar B_{q}$ mixing, we consider the direct CP asymmetry in $B\to X_s \gamma$, for which the current world average experimental value reads \cite{Asner:2010qj} \begin{eqnarray} A_{X_s \gamma}=\frac{\Gamma(\bar{B}\to X_s\gamma)-\Gamma(B\to X_{\bar{s}}\gamma)}{\Gamma(\bar{B}\to X_s\gamma)+\Gamma(B\to X_{\bar{s}}\gamma)} = -0.012 \pm 0.028\,. \label{eq:cpnow} \end{eqnarray} Based on the recent analysis of this observable in Ref.~\cite{Benzke:2010tq} we obtain the following semi-numerical formula \begin{eqnarray} A_{X_s \gamma}&=&0.006+0.039(\tilde{\Lambda}_{17}^u-\tilde{\Lambda}_{17}^c) \\ \nonumber&+&\Big[0.008+0.051 (\tilde{\Lambda}_{17}^u-\tilde{\Lambda}_{17}^c)\Big]\mathrm{Re}[\delta C_7(2m_W)]\\ \nonumber &+&\Big[0.012(\tilde{\Lambda}_{17}^u- \tilde{\Lambda}_{17}^c)+0.002\Big]\mathrm{Re}[\delta C_8(2m_W)]\\ \nonumber&+&\Big[-0.256+0.264 \tilde{\Lambda}_{78}-0.023 \tilde{\Lambda}_{17}^u-2.799 \tilde{\Lambda}_{17}^c\Big]\mathrm{Im}[\delta C_7(2m_W)]\\ \nonumber&+&\Big[-0.668 \tilde{\Lambda}_{17}^c-0.005 \tilde{\Lambda}_{17}^u-0.563 \tilde{\Lambda}_{78}+0.135\Big]\mathrm{Im}[\delta C_8(2m_W)]\,. \end{eqnarray} The estimated intervals for the hadronic parameters $\tilde{\Lambda}_{17}^u$, $\tilde{\Lambda}_{17}^c$ and $\tilde{\Lambda}_{78}$ as specified in Ref.~\cite{Benzke:2010tq} dominate the theoretical uncertainty, making it sufficient to use a LO QCD analysis in the perturbative regime. Thus, in addition to the numerical parameters specified in~\cite{Benzke:2010tq}, we have used the LO QCD running for $\delta C_{7,8}$ in this observable. 
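For orientation, the semi-numerical formula above is easy to evaluate directly; the following sketch (ours, for illustration only) takes the hadronic parameters $\tilde{\Lambda}_{17}^u$, $\tilde{\Lambda}_{17}^c$ and $\tilde{\Lambda}_{78}$ as inputs to be scanned over their intervals from Ref.~\cite{Benzke:2010tq}:
\begin{verbatim}
# Semi-numerical A_{X_s gamma} as a function of delta C_{7,8}(2 m_W)
# and the hadronic parameters L17u, L17c, L78 (to be varied over their
# allowed intervals when marginalizing).
def A_Xs_gamma(dC7, dC8, L17u, L17c, L78):
    d17 = L17u - L17c
    return (0.006 + 0.039 * d17
            + (0.008 + 0.051 * d17) * dC7.real
            + (0.002 + 0.012 * d17) * dC8.real
            + (-0.256 + 0.264 * L78 - 0.023 * L17u - 2.799 * L17c) * dC7.imag
            + (0.135 - 0.668 * L17c - 0.005 * L17u - 0.563 * L78) * dC8.imag)

# Example: no NP shift and vanishing hadronic parameters (illustrative
# inputs) reproduce the leading 0.006 term.
print(A_Xs_gamma(0j, 0j, L17u=0.0, L17c=0.0, L78=0.0))
\end{verbatim}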
Performing a combined analysis of all considered bounds on the real and imaginary parts of individual $\kappa_i$, in which we marginalize over the hadronic parameters entering $A_{X_s\gamma}$ within their allowed intervals, we obtain the allowed regions in the complex plane of $({\rm Re}[\kappa_i],{\rm Im}[\kappa_i])$ shown in Fig.~\ref{fig:CP}. \begin{figure} \begin{center}\vspace{-0.55cm} \includegraphics[scale=0.6]{complex.pdf} \caption{$95\%$ C.L. allowed regions in the $\kappa_{RR}(2m_W)$ and $\kappa_{LRb}(2m_W)$ complex plane (solid). The constraints are dominated by ${\rm Br}[{B}\to X_s \gamma]$ and $A_{X_s\gamma}$. Dashed and dotted lines represent bounds obtained under the projected sensitivity of the Super-Belle measurement \cite{Browder:2008em,Aushev:2010bq} with 5 and 50 $\mathrm{ab}^{-1}$ of integrated luminosity. } \label{fig:CP} \end{center} \end{figure} As already argued, the imaginary part of $\kappa_{LL}$ does not contribute to the $\delta C_i$ and thus remains unconstrained. It also turns out that due to the large hadronic uncertainties, the imaginary parts of $\kappa^{(\prime)}_{LRt}$, $\kappa^{\prime(\prime\pr)}_{LL}$ remain largely unconstrained by $A_{X_s \gamma}$, and $B_{q}-\bar B_{q}$ mixing observables still provide the strongest constraints (except for ${\rm Im}[\kappa_{LRt}]$, which again remains unconstrained). On the contrary, constraints on the imaginary parts of $\kappa_{RR}$ and $\kappa_{LRb}$ reach the per-cent level, as can be seen in Fig.~\ref{fig:CP}, where we also illustrate the projected bounds for Super-Belle, assuming the measured central value to be the same as given in Eq.~(\ref{eq:cpnow}) and using the estimated Super-Belle accuracy given in Refs.~\cite{Browder:2008em,Aushev:2010bq}. Finally, we note that in the absence of the long-distance effects on NP contributions considered in~\cite{Benzke:2010tq}, $A_{X_s \gamma}$ would exhibit an even greater sensitivity to the imaginary parts of $\delta C_{7,8}$~\cite{Barbieri:2011fc}, thus we consider our derived bounds as conservative. \subsection{Predictions} Having derived bounds on the anomalous $\kappa_j$ couplings, it is interesting to study to what extent these can still affect other rare $B$ decay observables. Analyzing one operator at a time, we set the matching scale to $\mu=2m_W$ and consider $\kappa_j$ to be real. We turn once more to the semi-numerical formulae given in Ref.~\cite{DescotesGenon:2011yn}, and first consider the branching ratio $\mathrm{Br}[\bar B_s\to\mu^+\mu^-]$, for which CDF's latest analysis yields~\cite{Aaltonen:2011fi} \begin{eqnarray} 4.6\times 10^{-9}<\mathrm{Br}[\bar B_s\to\mu^+\mu^-]<3.9\times 10^{-8}\,,\hspace{0.5cm} \text{at $90\%$ C.L.}\,, \end{eqnarray} while the LHCb collaboration reports the upper limit \cite{Aaij:2012ac} \begin{eqnarray} \mathrm{Br}[\bar B_s\to\mu^+\mu^-] < 4.5\times 10^{-9}\,,\hspace{0.5cm} \text{at $95\%$ C.L.}\,, \end{eqnarray} and a combined CMS, ATLAS and LHCb limit has recently been made public \cite{CMS:Bsmumu} \begin{eqnarray} \mathrm{Br}[\bar B_s\to\mu^+\mu^-] < 4.2\times 10^{-9}\,,\hspace{0.5cm} \text{at $95\%$ C.L.}\,. \end{eqnarray} In addition we explore the differential forward-backward asymmetry $A_{\mathrm{FB}}(q^2)$ in the $\bar{B}_d\to \bar{K}^*\ell^+\ell^-$ decay, for which the latest measurement of LHCb has recently been published in Ref.~\cite{Aaij:2011aa}. 
Finally, following Ref.~\cite{Altmannshofer:2009ma} we analyze the allowed effects of $\kappa_j$ on the branching ratios $\mathrm{Br}[B\to K^{(*)}\nu\bar{\nu}]$, which are expected to become experimentally accessible at the super-B factories~\cite{O'Leary:2010af}. \begin{figure}[h] \begin{center} \includegraphics[scale= 0.513]{mumu.pdf}\hspace{0.8cm} \includegraphics[scale=0.48]{BothNUS.pdf} \caption{Ranges of values for branching ratios obtained as anomalous couplings are varied within the $95\%$ C.L. intervals given in Tab.~\ref{tab:bounds}. We also show the SM predictions (black) with the $1\sigma$ theoretical uncertainty band (dotted) and, for the muonic decay channel, the lower end of the experimental $90\%$ C.L. interval from \cite{Aaltonen:2011fi}, the 95\% C.L. upper bound from LHCb \cite{Aaij:2012ac} and the latest combined LHC upper bound \cite{CMS:Bsmumu}.} \label{fig:predict1} \end{center} \end{figure} We present our findings in Figs.~\ref{fig:predict1} and~\ref{fig:predict2}. The effects of the anomalous couplings $\kappa_j$ on all branching ratios are similar. There is a slight tendency of the anomalous $\kappa_j$ couplings to increase the predictions compared to the SM values at the level of the present theoretical uncertainties, with the exception of $\kappa_{RR}$ and $\kappa_{LRb}$, whose effects are negligible. In particular, none of the contributions can accommodate the recent CDF measurement of ${\rm Br}[\bar B_s \to \mu^+\mu^-]$ at the $90\%$ C.L.\,, while the latest measurements from the LHC are already starting to constrain the $\kappa'_{LL}$ and $\kappa^{(\prime)}_{LRt}$ contributions and could in the future become an important factor to be included in the combined bound analysis. Furthermore, we find that the forward-backward asymmetry $A_{\mathrm{FB}}(q^2)$ can still be somewhat affected by $\kappa_{LL}^{\prime\pr}$ and $\kappa_{RR}$, for which we present the bands obtained when varied within the $95\%$ C.L. intervals in Fig.~\ref{fig:predict2}. While not sensitive at the moment, improved measurements by the LHCb experiment could in the near future probe such effects. On the other hand, the contributions of the other anomalous couplings all fall within the theoretical uncertainty bands around the SM predicted curve. \begin{figure}[h!] \begin{center} \includegraphics[scale= 0.6]{AFBbandLLpp.pdf}\hspace{0.8cm} \includegraphics[scale= 0.6]{AFBbandRR.pdf} \caption{$A_{\mathrm{FB}}(q^2)$ band obtained when varying the real parts of $\kappa_{LL}^{\prime\pr}$ (left) and $\kappa_{RR}$ (right) within the 95\% C.L. intervals given in Tab.~\ref{tab:bounds}. Also presented are the SM predicted central value (black) with the $1\sigma$ theoretical uncertainty band (dashed) and the latest measured points with experimental errors given in Ref.~\cite{Aaij:2011aa}. } \label{fig:predict2} \end{center} \end{figure} \subsection{Summary} We have investigated contributions of anomalous charged quark currents in flavor changing neutral current mediated $|\Delta B|=1$ processes within an effective field theory framework assuming minimal flavor violation. We have determined the indirect bounds on the real and imaginary parts of the anomalous couplings. In particular, we are able for the first time to constrain the imaginary parts of $\kappa_{RR}$ and $\kappa_{LRb}$ already at order $1/\Lambda^2$. 
Taking into account the obtained bounds on the real parts of $\kappa_i$, we have predicted the magnitude of the effects that the considered operators might have on the branching ratio of the $B_s \to \mu^+ \mu^-$ decay, the forward-backward asymmetry in $B \to K^* \ell^+ \ell^-$, as well as the branching ratios of the $B \to K^{(*)} \nu \bar \nu$ decays. Better knowledge of these (especially a potential further lowering of the $\mathrm{Br}[B_s\to \mu^+\mu^-]$ upper limit) and of other recently proposed~\cite{Bobeth:2007dw} observables could in the future further constrain some of the anomalous couplings. \section{Helicity fractions at NLO in QCD} \label{sec:hel_nlo} Having exhausted the implications of the charged quark current operators (\ref{eq:ops1}) for $B$ meson mixing, radiative and rare semileptonic decays, we turn, in this section, to the study of how the non-SM $tWb$ interactions induced by these operators influence the $W$ gauge boson helicity fractions in unpolarized top quark decays at NLO in QCD. We aim to confront these effects with the indirect bounds obtained for the couplings from $B$ physics, to see how much deviation from the SM predictions in the helicity fractions may still be compatible with the low energy observations, and to further examine whether LHC and Tevatron measurements might turn out to provide more stringent constraints on such NP. \subsection{Framework} Since in this section we shall be dealing with $t\to W b$ decays exclusively, the $tWb$ vertex and its deviation from the SM value and structure is the only charged quark interaction of interest. Following \cite{AguilarSaavedra:2008zc}, we can obtain the most general parametrization thereof by considering the following effective Lagrangian \begin{eqnarray} \mathcal L_{\mathrm{eff}} = -\frac{g}{\sqrt{2}}\bar{b}\Big[\gamma^{\mu} \big(a_L P_L +a_R P_R\big) -(b_{RL} P_L + b_{LR} P_R)\frac{2\mathrm{i} \sigma^{\mu\nu}}{m_t}q_{\nu} \Big]t W_{\mu}\,,\label{eq:effsimple} \end{eqnarray} where $q$ is the momentum of the $W$ boson and $P_{R,L}=1/2(1\pm\gamma^5)$ are the chirality projectors. Note that $a_L$ also includes the SM contribution, $a_L = V_{tb} + \delta a_L$. Extraction of the modified Feynman rule for the $tWb$ vertex is straightforward and is given in Appendix~\ref{app:feyn_charged}. All of the operators (\ref{eq:ops1}) considered in the previous two sections generate these couplings with the following correspondence \begin{eqnarray} \delta a_L = V_{tb}^* \kappa_{LL}^{(\prime,\prime\pr)*}\,,\hspace{0.3cm} a_R = V_{tb}^{*}\kappa_{RR}^*\,,\hspace{0.3cm} b_{LR} = -\frac{m_t}{2 m_W}V_{tb}^{*}\kappa_{LRt}^{(\prime)}\,,\hspace{0.3cm} b_{RL} = -\frac{m_t}{2 m_W} V_{tb}^* \kappa_{LRb}^*\,,\label{eq:translation} \end{eqnarray} where the $\kappa_j$ have been defined in Eq.~(\ref{eq:kappas}). We shall parametrize the main decay channel of the top quark in the following way \begin{eqnarray} \Gamma_{t\to W b}&=&\frac{m_t}{16\pi}\frac{g^2}{2}\sum_{i}\Gamma^i\,, \end{eqnarray} where $i= L,+,-$ stands for longitudinal, transverse-plus and transverse-minus, as introduced in section~\ref{sec:hfSM}. The $\Gamma^i$ decay rates have already been studied to quite some extent in the existing literature. The tree-level analysis of the effective interactions~(\ref{eq:effsimple}) has been conducted in Ref.~\cite{AguilarSaavedra:2006fy}. QCD corrections, however, have been studied only for the chirality-conserving (SM) operators.
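Since the correspondence in Eq.~(\ref{eq:translation}) is used repeatedly in what follows, it can be convenient to have it in executable form. The following minimal Python sketch (ours, for illustration only; the function name and the input value are hypothetical) maps a set of $\kappa_j$ values onto the vertex parameters of Eq.~(\ref{eq:effsimple}):
\begin{verbatim}
# Tree-level matching of Eq. (eq:translation): kappa couplings -> vertex
# parameters of Eq. (eq:effsimple). Illustrative sketch, not the analysis code.
m_t, m_W = 173.0, 80.4   # GeV, as used in the numerical analysis below
V_tb = 1.0               # CKM element, taken real and equal to 1 here

def translate(k_LL=0j, k_RR=0j, k_LRt=0j, k_LRb=0j):
    """Return (delta a_L, a_R, b_LR, b_RL); note kappa_LRt enters unconjugated."""
    pref = -m_t / (2.0 * m_W)
    delta_a_L = V_tb * k_LL.conjugate()
    a_R = V_tb * k_RR.conjugate()
    b_LR = pref * V_tb * k_LRt
    b_RL = pref * V_tb * k_LRb.conjugate()
    return delta_a_L, a_R, b_LR, b_RL

# a hypothetical input: kappa_LRt = 0.13 maps to b_LR ~ -0.14, the lower
# endpoint of the translated interval quoted below in Eq. (eq:ind_translated)
print(translate(k_LRt=0.13 + 0j))
\end{verbatim}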
As we have argued in section~\ref{sec:hfSM}, QCD corrections are especially important for the observable $\mathcal F_+$, since they lift the helicity suppression present at LO in the SM. Helicity suppression in this observable is also exhibited in the presence of the non-SM dipole structure $b_{LR}$ of the $tWb$ vertex, which is especially interesting since $b_{LR}$ is the least constrained by the indirect bounds coming from $B$ physics presented in Tab.~\ref{tab:bounds}. It might therefore have the potential to modify the $t\to W b$ decay properties in an observable way. \subsection{Computation} We compute the ${\cal O}(\alpha_s)$ corrections to the polarized rates $\Gamma^i$ in the $m_b=0$ limit using the most general $tWb$ interaction vertex extracted from Eq.~(\ref{eq:effsimple}). The appropriate one-loop and bremsstrahlung Feynman diagrams to be considered are presented in Fig.~\ref{fig:feyndiags}. We regulate the UV and IR divergences by working in $d=4+\epsilon$ dimensions. The renormalization procedure and the combining of virtual and bremsstrahlung contributions to attain the cancellation of IR divergences closely resemble the procedure (for the $t\to qZ$ case) described in chapter~\ref{sec:rge}, where we have analyzed the NLO QCD corrections for FCNC top quark decays. There are, however, some differences in the computation worth pointing out. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.6]{anomal2.pdf} \end{center} \caption{Feynman diagrams for next-to-leading order QCD contributions. The square marks the insertion of the generally parametrized $tWb$ interaction specified in Eq.~(\ref{eq:effsimple}), and the cross marks the additional point from which a gluon can be emitted. } \label{fig:feyndiags} \end{figure} First, since there are no contributions of gluonic NP operators, there are no QCD mixing effects to be considered. Furthermore, we need to make use of the covariant projector technique, introduced in section~\ref{sec:hfSM} of the introductory chapter and summarized in Tab.~\ref{tab:projectors}, to project out the desired helicities of the $W$ boson. This makes the analysis more involved. What is more, since one of the projectors includes an explicit $\epsilon_{\alpha\beta\gamma\delta}$ tensor, one can envision encountering problems with $\gamma^5$ in $d$ dimensions if naive dimensional regularization is used, whereby $\gamma^5$ is assumed to anti-commute with the other $\gamma^{\mu}$ matrices in $d=4+\epsilon$ dimensions as well. To avoid conceivable problems we use the prescriptions based on 't Hooft--Veltman $\gamma^5$ regularization that have been derived by S.A. Larin and are given in Refs.~\cite{Larin:1993tq,Ball:2004rg} \begin{eqnarray} \gamma_{\mu}\gamma_5 &\to& (1-4a_s)\frac{\mathrm{i}}{3!}\epsilon_{\mu\nu_1\nu_2\nu_3}\gamma^{\nu_1}\gamma^{\nu_2}\gamma^{\nu_3}\,,\\ \sigma_{\mu\nu}\gamma_5&\to&-\frac{\mathrm{i}}{2}\epsilon_{\mu\nu\alpha\beta}\sigma^{\alpha\beta}\,, \end{eqnarray} where the $a_s = C_F \alpha_s/(4\pi)$ term is needed because the violated anticommutativity of $\gamma^5$ also violates the standard properties of the axial current and the Ward identities, which need to be restored by an additional renormalization (see Ref.~\cite{Larin:1993tq} for details).
\subsection{The decay rates} In the $m_b=0$ limit, which we are employing, there is no mixing between chirality-flipped operators, and the decay rates can be written as \begin{eqnarray} \Gamma^{(L,+,-)}&=& |a_L|^2 \Gamma^{(L,+,-)}_a + |b_{LR}|^2 \Gamma^{(L,+,-)}_{b} \label{e3} + 2\mathrm{Re}\{a_L b_{LR}^*\} \Gamma^{(L,+,-)}_{ab} + \langle L \leftrightarrow R,+\leftrightarrow -\rangle\,,\label{eq:form} \end{eqnarray} where the $a_R$, $b_{RL}$ pair is accommodated by exchanging the roles of the transverse-plus and transverse-minus decay widths, $\Gamma^+ \leftrightarrow \Gamma^-$, as indicated by the bracketed term in Eq.~(\ref{eq:form}). Analytical formulae for the $\Gamma^{i}_{a,b,ab}$ functions are given in Appendix~\ref{app:gamma_main}. We have cross-checked $\Gamma^i_{a}$ against the corresponding expressions given in \cite{Fischer:2000kx} and found agreement. \begin{table}[h!] \begin{center} \begin{tabular}{l|c|c|c|c||c|c|c|c}\hline\hline & $L$ & $+$ & $-$& unpolarized & &$L$ & $+$ &$-$ \\\hline $\Gamma_a^{i,\mathrm{LO}}$&$\frac{(1-x^2)^2}{2x^2}$& $0$ & $(1-x^2)^2$&$\frac{(1-x^2)^2(1+2x^2)}{2x^2}$ &$\Gamma^{i,\mathrm{NLO}}_a/\Gamma_a^{i,\mathrm{LO}}$ &$0.90$&$3.50$&$0.93$\\ $\Gamma_b^{i,\mathrm{LO}}$& $2x^2(1-x^2)^2$ &$0$&$4(1-x^2)^2$&$2(1-x^2)^2(2+x^2)$ &$\Gamma^{i,\mathrm{NLO}}_b/\Gamma_b^{i,\mathrm{LO}}$ &$0.96$&$4.71$&$0.91$\\ $\Gamma_{ab}^{i,\mathrm{LO}}$&$(1-x^2)^2$&$0$&$2(1-x^2)^2$&$3(1-x^2)^2$ &$\Gamma^{i,\mathrm{NLO}}_{ab}/\Gamma_{ab}^{i,\mathrm{LO}}$ &$0.93$&$3.75$&$0.92$\\\hline\hline \end{tabular} \caption{{\bf Left}: Tree-level decay widths for different $W$ helicities and their sum, which gives the unpolarized width. All results are in the $m_b=0$ limit and we have defined $x=m_W/m_t$. {\bf Right}: Numerical values of $\Gamma^\mathrm{NLO}/\Gamma^{\mathrm{LO}}$ with the following input parameters: $m_t=173.0$ GeV, $m_W = 80.4$ GeV, $\alpha_s(m_t) = 0.108$. The scale $\mu$ appearing in the NLO expressions is set to $\mu=m_t$. In addition, $m_b=4.8$ GeV. These values are used throughout the section for all numerical analyses. } \label{tab:NP_hel} \end{center} \end{table} The LO (${\cal O}(\alpha_s^0)$) contributions to the decay rates $\Gamma_{a,b,ab}^{i,\mathrm{LO}}$ are obtained with a tree-level calculation and are given on the left side of Tab.~\ref{tab:NP_hel}. Our results coincide with those given in \cite{AguilarSaavedra:2006fy} once the mass $m_b$ is set to zero. The change of $\Gamma_{a,b,ab}^{i}$ going from LO to NLO in QCD is presented on the right side of Tab.~\ref{tab:NP_hel}. Since in the $m_b=0$ limit $\Gamma_{a,b,ab}^{+,\mathrm{LO}}$ vanish, we use the full $m_b$ dependence of the LO rate when dealing with the $W$ transverse-plus helicity. Effectively we neglect the ${\cal O}(\alpha_s m_b)$ contributions. In Ref.~\cite{Fischer:2001gp} it has been shown that these sub-leading contributions can scale as $\alpha_s (m_b/m_W)^2 \log (m_b/m_t)^2 $, leading to a relative effect of a couple of percent compared to the size of the $\mathcal O (\alpha_s)$ corrections in the $m_b=0$ limit. \subsection{Effects on ${\cal F}_+$} In Fig.~\ref{fig:F+} we present how each separate anomalous coupling, assumed to be real, affects $\mathcal F_+$. The deviation of the left-handed current coupling from its SM value, $\delta a_L$, cannot be probed with helicity fractions as long as no interference effects with other NP contributions are considered, since its effects simply factor out of the decay widths.
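The LO entries of Tab.~\ref{tab:NP_hel} are simple enough to evaluate directly. As a minimal numerical sketch (ours, in Python, using the input values from the table caption), the following code cross-checks the tabulated unpolarized sums and evaluates the LO helicity fractions for real $a_L$ and $b_{LR}$:
\begin{verbatim}
# LO polarized rates from Tab. (tab:NP_hel), m_b = 0 limit; illustrative only.
import numpy as np

m_t, m_W = 173.0, 80.4                 # GeV, as in the table caption
x = m_W / m_t

# columns (L, +, -) of the left side of the table
gamma_a  = np.array([(1 - x**2)**2 / (2 * x**2), 0.0, (1 - x**2)**2])
gamma_b  = np.array([2 * x**2 * (1 - x**2)**2,   0.0, 4 * (1 - x**2)**2])
gamma_ab = np.array([(1 - x**2)**2,              0.0, 2 * (1 - x**2)**2])

# cross-check against the quoted unpolarized widths
assert np.isclose(gamma_a.sum(),  (1 - x**2)**2 * (1 + 2 * x**2) / (2 * x**2))
assert np.isclose(gamma_b.sum(),  2 * (1 - x**2)**2 * (2 + x**2))
assert np.isclose(gamma_ab.sum(), 3 * (1 - x**2)**2)

def fractions(a_L=1.0, b_LR=0.0):
    """LO helicity fractions (F_L, F_+, F_-) for real a_L, b_LR, Eq. (eq:form)."""
    rates = a_L**2 * gamma_a + b_LR**2 * gamma_b + 2 * a_L * b_LR * gamma_ab
    return rates / rates.sum()

print(fractions())            # SM point: F_L ~ 0.70, F_+ = 0 at LO
print(fractions(b_LR=-0.1))   # a hypothetical NP point, for illustration
\end{verbatim}
At the SM point this reproduces the familiar $\mathcal F_L = 1/(1+2x^2)$ of Tab.~\ref{tab:projectors}.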
The impact of going from LO to NLO in QCD is presented in terms of bands, where the lower line corresponds to the LO result while the upper line corresponds to the enhanced NLO result. A relatively narrow range of anomalous coupling values is shown, since translating the indirect constraints of Tab.~\ref{tab:bounds} through Eq.~(\ref{eq:translation}) yields quite narrow $95\%$ C.L. allowed intervals \begin{eqnarray} -0.0006 \le a_R \le 0.003\,,\hspace{0.5cm} -0.0004 \le b_{RL} \le 0.0016\,,\hspace{0.5cm} -0.14 (-0.29) \le b_{LR} \le 0.08\,.\label{eq:ind_translated} \end{eqnarray} The two separate lower bounds on $b_{LR}$, which is substantially less constrained than the other two anomalous couplings, correspond to whether we assume $b_{LR}$ to be generated by the operator $\mathcal Q_{LRt}$ or $\mathcal Q_{LRt}^\prime$. The graph on the right-hand side of Fig.~\ref{fig:F+} shows the $\mathcal F_+$ dependence on $b_{LR}$ in more detail, along with the intervals given in Eq.~(\ref{eq:ind_translated}). We see that the increase is substantial when going to NLO in QCD, but still leaves ${\cal F}_+$ at the $1-2$ per-mille level. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.6]{LOtoNLOFplusxx.pdf}\hspace{1cm} \includegraphics[scale=0.6]{LOtoNLOFplusx2.pdf} \end{center} \caption{Dependence of $\mathcal F_+$ on anomalous couplings which are considered to be real and non-zero one at a time. {\bf Left}: Value of ${\cal F}_+$ for $a_R$ (blue, dotted), $b_{RL}$ (orange, dashed) and $b_{LR}$ (black, solid). Lower and upper lines correspond to LO and NLO results, respectively, and the cross marks the SM NNLO prediction. {\bf Right}: Value of ${\cal F}_+$ for $b_{LR}$. The dashed line corresponds to the LO results, while the solid line represents the NLO results. We also present the SM NNLO value along with its error bars given in Eq.~(\ref{eq:e22b}) and the $95\%$ C.L. allowed intervals for $b_{LR}$ given in Eq.~(\ref{eq:ind_translated}).} \label{fig:F+} \end{figure} Since the indirect constraints on non-zero values of $a_R$ and $b_{RL}$ are very stringent, they cannot produce a large $\mathcal F_+$, with both giving maximal values of $\mathcal F_{+} =0.00133$, which is within $1\%$ of the SM prediction $\mathcal F_+^{\mathrm{SM,NLO}}=0.00132$. \subsection{Effects on ${\cal F}_L$} Analyzing a single real NP contribution at a time, leading QCD corrections decrease ${\cal F}_L$ by approximately $1\%$ in all cases. In Fig.~\ref{fig:FL} we show the NLO dependence of $\mathcal F_L$ on the anomalous couplings. Possible effects of $a_R$ and $b_{RL}$ are again severely constrained by indirect $B$ physics considerations. On the other hand, we find that the most recent combined Tevatron measurement of $\mathcal F_L$ given in Eq.~(\ref{eq:hel_exp}) allows us to put bounds on $b_{LR}$ competitive with the indirect constraints given in Eq.~(\ref{eq:ind_translated}). A detailed plot of the ${\cal F}_L$ dependence on $b_{LR}$ is given in the right graph of Fig.~\ref{fig:FL}. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.6]{LOtoNLOFLx.pdf}\hspace{1cm} \includegraphics[scale=0.6]{LOtoNLOFL2.pdf} \end{center} \caption{Dependence of $\mathcal F_L$ on anomalous couplings which are considered to be real and non-zero one at a time. Also shown are the central measured Tevatron value (dashed) and the $95\%$ C.L. interval, as well as the expected ATLAS $95\%$ C.L. interval centered on the Tevatron central value.
{\bf Left}: Value of ${\cal F}_L$ for $a_R$ (blue, dotted), $b_{RL}$ (orange, dashed) and $b_{LR}$ (black, solid) at NLO in QCD. {\bf Right}: Dependence on $b_{LR}$ and the $95\%$ C.L. allowed intervals for $b_{LR}$ given in Eq.~(\ref{eq:ind_translated}). } \label{fig:FL} \end{figure} We see that at present the indirect constraints are a bit better; however, if the projected sensitivity is reached, the direct bounds from $\mathcal F_L$ could turn out to be more stringent. \subsection{Comparison with direct constraints} As we have shown, there is an interesting interplay between direct and indirect constraints when considering the anomalous charged quark currents of the top quark. In this subsection we would like to stress this point further, going a bit beyond the scope of this work, by including another important top quark process that is influenced by anomalous $tWb$ vertices, namely single top quark production, which proceeds through weak interactions. Not going into details of the subject, we only comment that the measurement of the single top quark production cross section~\cite{Group:2009qk} and its agreement with the SM predicted value serve to constrain NP contributions affecting the cross section~\cite{Tait:2000sh}. Having an additional sensitive observable at our disposal, one can consider pairs of NP operators altering the $tWb$ vertex and contributing simultaneously, and obtain $95\%$ C.L. allowed regions in the corresponding NP parameter planes. This was performed in Ref.~\cite{AguilarSaavedra:2011ct} using the single top production cross section and the helicity fractions as the constraining observables. \begin{figure}[h!] \begin{center} \includegraphics[scale= 0.6]{CompareVec.pdf}\hspace{0.8cm} \includegraphics[scale=0.6]{CompareDip.pdf} \caption{95\% C.L. allowed regions in different $(\kappa_i,\kappa_j)$ planes. The gray bands represent the allowed regions from direct Tevatron constraints given in Ref.~\cite{AguilarSaavedra:2011ct}, and the $\kappa_i$ are assumed to be real. {\bf Left}: $\kappa_{RR}$ - $\kappa_{LL}$ (solid), $\kappa_{LL}^{\prime}$ (dashed), $\kappa_{LL}^{\prime\pr}$ (dotted) plane. The matching scale is set to $\mu=2 m_W$. {\bf Right}: $\kappa_{LRb}$ - $\kappa_{LRt}$ (solid), $\kappa_{LRt}^{\prime}$ (dashed) plane. The matching scale is set to $\mu=2 m_W$ (narrow regions) and $\mu=m_W$ (wider regions).} \label{fig:2d1} \end{center} \end{figure} We compare the regions presented there with those that we can obtain using our indirect constraints from $B$ physics in Fig.~\ref{fig:2d1}. The comparison nicely summarizes the interplay of direct and indirect constraints on anomalous $tWb$ interactions and shows that they are in a way complementary. On both plots the gray area represents the $95\%$ C.L. allowed regions obtained in Ref.~\cite{AguilarSaavedra:2011ct}. They appear as bands because the direct constraints in the $\kappa_{RR}$ and $\kappa_{LRb}$ directions are much weaker than the indirect ones. On the other hand, we can see that the Tevatron constraints on $\kappa_{LL}^{(\prime,\prime\pr)}$ and $\kappa_{LRt}^{(\prime)}$ are comparable to, and in some cases more stringent than, the indirect ones. Having analyzed the helicity fraction constraints in detail, we can deduce that single top production contributes significantly to the direct constraints presented in Fig.~\ref{fig:2d1}, improving the direct bounds considerably.
\subsection{Summary} We have analyzed the decay of an unpolarized top quark to a bottom quark and a polarized $W$ boson as mediated by the most general effective $tWb$ vertex at ${\cal O}(\alpha_s)$. We have shown that within this approach the helicity fraction ${\cal F}_+$ can reach maximal values of the order of $2$ per-mille in the presence of non-SM $b_{LR}$ contributions. Leading QCD effects increase the $b_{LR}$ contributions substantially, owing to the helicity suppression of the LO result. Indirect constraints coming from $B$ physics already severely restrict the contributions of anomalous $tWb$ couplings. In particular, considering only real contributions of a single anomalous coupling at a time, all considered anomalous couplings except $b_{LR}$ are constrained to yield $\mathcal F_+$ within $2\%$ of the SM prediction. Even in the presence of the much less constrained $b_{LR}$ contributions, a potential determination of ${\cal F}_+ $ significantly deviating from the SM prediction, at the projected sensitivity of the LHC experiments~\cite{AguilarSaavedra:2007rs}, could not be explained within such a framework. Based on the existing SM calculations of higher order QCD and electroweak corrections \cite{Czarnecki:2010gb, Do:2002ky}, we do not expect such corrections to significantly affect our conclusions. Finally, with increased precision of the ${\cal F}_L $ and single top quark production cross-section measurements at the Tevatron and the LHC, the direct bounds on $b_{LR}$ and $\delta a_L$ are expected to prevail over the indirect ones. \chapter*{Summary}\thispagestyle{empty} \vspace{-1.cm} In this work we investigate how various theoretical deviations from the {\sl Standard Model} (SM) framework can affect the decay properties of the top quark. We parametrize the manifestation of {\sl new physics} (NP) beyond the SM, whose energy scale significantly exceeds the electroweak scale, in terms of a series of higher dimensional operators, using effective field theory methods. In the first part we consider NP that could affect the flavor changing neutral currents of the top quark. Within the SM these currents are very rare, and their future experimental observation would signal the discovery of NP. More concretely, we are interested in the two-body decays $t\to q Z,\gamma$ and the three-body decays $t\to q \ell^+ \ell^-$. In the analysis of the two-body decays we include first-order quantum chromodynamics corrections. We study both the mixing of operators under renormalization and the finite matrix element corrections, including the so-called bremsstrahlung processes. It turns out that quantum chromodynamics corrections, especially in the $t\to q \gamma$ decays, can be important and affect the interpretation of experimental measurements. The main motivation for considering the three-body decays is the enlarged phase space of the final state, which gives us access to a larger number of kinematical observables that can help discriminate between different NP structures in the flavor changing neutral currents of the top quark. In the second part we are interested in NP that could affect the main decay channel of the top quark, $t\to W b$, which is closely related to charged quark currents. By introducing into the theory new operators that modify the properties of the $tWb$ interaction, we affect more than just the main decay channel of the top quark.
The indirect consequences appearing in theoretical predictions of meson physics observables can be considerable, since virtual top quarks play a leading role in rare processes. In this work we therefore study in detail the indirect constraints on NP in the charged currents of the top quark that originate from $B$ meson physics. Only after determining the intervals of the NP parameters that are consistent with the meson physics measurements do we focus on the consequences in higher energy processes, the decays of on-shell top quarks. The central observables of our analysis are the {\sl helicity fractions} of the $W$ boson produced in the top quark decay. Since quantum chromodynamics corrections prove to be important for these observables already in the SM analysis, we include such corrections also in the presence of NP. From the experimental measurements, which for now show agreement with the SM, we can set direct constraints on NP, which we compare with the indirect ones. This reveals an interesting connection between top quark physics and $B$ meson physics. \vspace{0.3cm} \small {\noindent \sf Keywords}: Top quark decays, neutral currents, top quark and new physics, effective theories and new physics, quantum chromodynamics corrections, helicity fractions of the $W$ boson, indirect constraints on new physics {\noindent \sf PACS}: 12.15.Mm, 12.38.Bx, 12.60.Cn, 12.60.Fr, 13.88.+e, 14.65.Ha \normalsize \chapter*{Abstract}\thispagestyle{empty} \vspace{-1.0cm} We study possible theoretical deviations from the {\sl Standard Model} (SM) in top quark physics which alter the decay properties of the top quark. Using effective field theory techniques we parametrize the effects of potential {\sl new physics} (NP) at scales well above the electroweak scale in terms of effective operators. On one side we investigate NP manifestation in the form of {\sl flavor changing neutral current} (FCNC) decays of the top quark, which are highly suppressed in the SM and whose potential observation would undoubtedly signal the presence of NP. We examine the two-body $t\to q Z,\gamma$ and three-body $t \to q \ell^+ \ell^-$ decays. Our analysis of the two-body FCNC decays is performed at {\sl next-to-leading order} (NLO) in {\sl quantum chromodynamics} (QCD). We examine the effects of operator mixing under QCD renormalization as well as the finite matrix element corrections along with the appropriate bremsstrahlung processes. We find that the effects of FCNC operators mixing under renormalization can be substantial, especially in the case of $t\to q \gamma$ decays, and that QCD corrections affect the way the experimental measurements are to be interpreted. In the three-body decays we aim to exploit the increased phase space of the final state by defining different types of observables which could help to discriminate between structures of the vertices governing the FCNC transition of the top quark. On the other side we examine possible deviations from SM predictions in the top quark's main decay channel, which is governed by the charged quark current interactions. The introduction of higher dimensional operators that modify $tWb$ interactions, however, has additional consequences that have not yet been thoroughly analyzed in the literature. In particular, low energy observables of rare processes in $B$ physics, where virtual top quarks and their charged current interactions play a dominant role, are expected to be affected.
We therefore perform a detailed study of indirect constraints on the NP operator basis and only then turn to the study of the effects in the decays of on-shell produced top quarks. The observables of interest are the helicity fractions of the $W$ boson produced in the main decay channel. Since higher order QCD corrections prove to be crucial for the SM predictions of the helicity fractions, we conduct the analysis of NP effects at NLO in QCD. We confront our predictions with the experimental measurements to obtain the direct constraints on NP, comparing them further with the indirect constraints from low energy processes, which reveals an interesting interplay of top and bottom physics. \vspace{0.3cm} \small {\noindent \sf Key Words}: Top quark decays, flavor changing neutral currents, new physics in top quark physics, effective theory approach to new physics, QCD corrections, helicity fractions of the $W$ boson, indirect constraints on new physics {\noindent \sf PACS}: 12.15.Mm, 12.38.Bx, 12.60.Cn, 12.60.Fr, 13.88.+e, 14.65.Ha \normalsize \begingroup \hypersetup{linkcolor=black} \tableofcontents \thispagestyle{empty} \endgroup \addtocontents{toc}{\protect\thispagestyle{empty}} \newpage \thispagestyle{empty} \input{introduction.tex} \input{neutral_currents.tex} \input{charged_currents_v1.tex} \chapter{Concluding remarks} \vspace{-0.2cm} As we approach the end of the Tevatron era and are well into the exciting era of the LHC, the hunt for BSM physics is in full swing. Searching for new particles is by no means the only way that the LHC can produce new answers and questions. One of the areas that we hope it will shed some light on is the flavor problem of the SM. There is no question that this is where the top quark, with its large mass, plays an outstanding role. Since the LHC can be considered a true top quark factory, top quark physics is for the first time being probed with high precision. Precise determination of the top quark's parameters and interactions could serve as a window to observing physics beyond the SM. In this work we have considered different aspects of top quark decays and how NP, which we have parametrized using the effective theory approach, could affect them. On one hand, we have investigated the effects of perturbative NLO QCD corrections on different decay rates of the top quark, something that must be considered when dealing with quarks and when confronted with measurements of ever increasing precision. In particular, we have investigated the branching ratios of $t\to q \gamma,Z$ decays and different kinematical asymmetries in the subsequent three-body decays $t\to q \ell^+ \ell^-$, as well as the main decay channel of the top quark, $t\to W b$, paying special attention to helicity fractions as observables sensitive to the structure of the $tWb$ vertex. On the other hand, we have tried to stress the importance of considering the effects on well measured observables of meson physics whenever deviations from the SM in top quark physics are present. The dominant role that the top quark plays in rare processes of meson physics, where it appears as a virtual particle, should always be kept in mind. While the analysis of indirect constraints for operators generating FCNC top quark decays has already been performed and can be found in the literature, a comprehensive analysis of indirect constraints on operators generating anomalous charged currents with the top quark had not been performed before, and it constitutes an essential result of our work.
As we have shown, the precise measurements of different ``top quark sensitive'' observables in $|\Delta B| = 2$ and $|\Delta B|=1$ processes put constraints on NP. The significance of some indirect constraints is not expected to be met by the direct constraints from the LHC data. Whether any deviation from SM predictions in top quark decays is to be observed or not, future measurements are expected to play an important role in the flavor aspects of constructing and constraining BSM models. \cleardoublepage \phantomsection \addcontentsline{toc}{chapter}{Acknowledgments} \chapter*{Acknowledgments} I would like to sincerely thank my advisors Svjetlana Fajfer and Jernej F. Kamenik for all their help and guidance throughout my graduate study years. Their receptiveness and willingness to help are something I have come to greatly appreciate. Special thanks go to Jure Zupan for his hospitality at the University of Cincinnati and his valuable insights about the world of physics. \newpage \thispagestyle{plain} \cleardoublepage \phantomsection \thispagestyle{plain} \addcontentsline{toc}{chapter}{List of publications} \chapter*{List of publications} \input{reference_moje2} \newpage \thispagestyle{plain} \bibliographystyle{h-physrev} \cleardoublepage \phantomsection \thispagestyle{plain} \addcontentsline{toc}{chapter}{Bibliography} \chapter{Introduction} \setcounter{page}{1} \section{The Standard Model} The goal of theoretical high energy physics is to mathematically describe phenomena occurring at the lowest experimentally accessible length scales, using quantum field theory as the main tool. Theoretical predictions are put to the test by some of the most sophisticated experiments in the world, ranging from different kinds of particle colliders to satellites orbiting our planet. Continuous advances in theoretical insight and experimental techniques in the past century have led to the formulation of the {\sl Standard Model} (SM), a theoretical framework describing the content of elementary particles and their interactions, with its formulation dating back to the 1960s \cite{Weinberg:1967tq}. The SM is remarkable both in its simplicity and in its great predictive power, which has been put under immense scrutiny over the past decades, most recently by the {\sl Large Hadron Collider} (LHC). In this introductory section we aim to give a very brief overview of the main theoretical features of the SM. Our discussion is kept on a purely informative level, providing, however, appropriate references to be consulted for the underlying details. A bit more time is spent on the discussion of the flavor aspect of the SM, since the concepts encountered there prove to be crucial for our top quark studies. At the end of the section we try to motivate the need for theoretical explorations beyond the SM, since this is the frontier that we shall be crossing in our work. \subsection{Particle content} The cornerstone of the SM as a quantum field theory is the gauge group under which the theory is to be invariant \begin{eqnarray} SU(3)_c \times SU(2)_L \times U(1)_Y\,. \label{eq:sm_gauge_group} \end{eqnarray} Here $SU(3)_c$ is the gauge group of {\sl quantum chromodynamics} (QCD), $SU(2)_L$ is the weak isospin group and, finally, the Abelian $U(1)_Y$ is the weak hypercharge group. The corresponding coupling constants of the three groups are denoted by $g_s$, $g$ and $g^{\prime}$, respectively. Specifying the gauge group entirely fixes the content of the gauge boson sector.
On the other hand, there is more freedom in specifying the scalar and fermionic sector of the theory. In particular, we have to specify which representations of the gauge group the fields are to be in. The only scalar field in the SM is the Higgs boson. Its representations under the gauge groups can be written as $\phi(1,2)_{+1/2}$, meaning that it is a singlet under $SU(3)_c$, a doublet under $SU(2)_L$ and that it carries hypercharge $+1/2$. At present it remains the only quantum of the SM that has not been experimentally confirmed; however, the new particle recently discovered at the LHC \cite{ATLAS:higgs, CMS:higgs} is a very strong candidate. The Higgs boson plays a very specific role in the SM, since by acquiring a {\sl vacuum expectation value} (VEV) it spontaneously breaks the SM gauge group \begin{eqnarray} SU(3)_c \times SU(2)_L \times U(1)_{Y} \xrightarrow{\langle \phi \rangle} SU(3)_c \times U(1)_Q\,, \hspace{0.5cm} Q = Y + T_3\,, \end{eqnarray} where $U(1)_Q$ is the gauge group of {\sl quantum electrodynamics} (QED), with $Q$ the electromagnetic charge and $T_3$ the eigenvalue of the diagonal $SU(2)_L$ generator. The QED gauge coupling is $e = g \sin \theta_W = g^{\prime} \cos \theta_W$, where $\theta_W$ is the Weinberg mixing angle. The described pattern of symmetry breaking gives rise to mass terms for the three weak gauge bosons through the covariant derivative $(D_{\mu}\phi)^{\dagger}(D^{\mu}\phi)$ part of the Higgs Lagrangian, and to mass terms for the fermions through the so-called Yukawa interactions, which we shall return to shortly. Turning to the fermionic sector of the SM, we first note the subscript $L$ of the weak gauge group, which describes an important postulate of the SM, namely that only left-handed fermions carry weak isospin charge. Further, we classify fermions depending on which representation of $SU(3)_c$ they are in: singlets are called leptons, while quarks are postulated to be in the fundamental representation. \begin{table}[h] \begin{center} \begin{tabular}{c|c|c}\hline \hline &Quarks & Leptons\\ \hline left handed& $Q_L(3,2)_{+1/6} =\Big(\hspace{-0.15cm} \begin{array}{c} u_L\\ d_L\end{array}\hspace{-0.15cm} \Big)$& $L_L(1,2)_{-1/2} =\Big(\hspace{-0.15cm}\begin{array}{c}\nu_{L} \\ \ell_L\end{array}\hspace{-0.15cm} \Big)$ \\ right handed &$u_R(3,1)_{+2/3}$, $d_R(3,1)_{-1/3}$ &$\ell_R(1,1)_{-1}$ \\ \hline \hline \end{tabular} \caption{SM fermions and their representations under the SM gauge group (\ref{eq:sm_gauge_group}). Subscripts $L$ and $R$ denote the chirality of the fields.} \label{tab:sm_fermions} \end{center} \end{table} The fermionic sector of the SM is further enriched by making three repetitions (generations) of the gauge representations described above and given in Tab.~\ref{tab:sm_fermions}. Each of the repetitions is assigned a flavor, allowing us to distinguish among them. We define $\{u, c, t\}$ as the up-type quarks, $\{d,s,b\}$ as the down-type quarks, $\{e, \mu, \tau\}$ as the charged leptons, along with the accompanying neutrinos $\{\nu_e, \nu_\mu, \nu_\tau\}$. Since it is not crucial for our studies, we shall not deal with the leptonic part of flavor physics and shall concentrate only on the quarks. \subsection{Flavor} \label{sec:intro_fcnc} The only part of the SM Lagrangian that is not flavor universal, meaning that it distinguishes between different flavors, is the Yukawa interaction term \begin{eqnarray} {\cal L}_Y = - \bar{Q}_L^i [Y_d]_{ij} \phi d_R^j-\bar{Q}_L^i [Y_u]_{ij} \tilde{\phi} u_R^j + \mathrm{h.c.}\,.
\label{eq:intro_yuk} \end{eqnarray} Indices $i$ and $j$ denote the flavor, and we have introduced the ``up'' and ``down'' $3\times 3$ complex Yukawa matrices and $\tilde{\phi} = \mathrm{i} \sigma^2 \phi^*$. While there are two Yukawa matrices for the quarks, there is only one for the leptons, because, as evident from Tab.~\ref{tab:sm_fermions}, there are no right-handed neutrinos in the SM. In the SM the Yukawa sector is the only source of flavor physics. This statement can be put in a more formal group theoretical form by saying that the Yukawa interactions break the large global flavor symmetry \begin{eqnarray} \mathcal G^{\mathrm{SM}} = U(3)_Q \times U(3)_u \times U(3)_d\,, \label{eq:SM_G_flav} \end{eqnarray} obtained when three generations of fermions are introduced to the theory. $U(3)_{Q,u,d}$ are groups of rotations in flavor space $V_{Q,u,d}$ that can be applied to the $Q$, $u$ and $d$ quark fields, respectively \begin{eqnarray} Q_L \xrightarrow{U(3)_Q} Q_L^\prime= V_Q Q_L\,,\hspace{0.5cm} u_R \xrightarrow{U(3)_u} u_R^{\prime}=V_u u_R\,,\hspace{0.5cm} d_R \xrightarrow{U(3)_d} d_R^{\prime}=V_d d_R\,, \end{eqnarray} where we have suppressed the flavor indices. Omitting the scalar field, a general $\mathcal G^{\mathrm{SM}}$ rotation affects the Yukawa terms in the following way \begin{eqnarray} \bar{Q}_LY_d d_R = \bar{Q}_L^\prime\,V_Q Y_d V_d^{\dagger}\,d_R^{\prime}\,,\hspace{0.5cm} \bar{Q}_LY_u u_R = \bar{Q}_L^\prime\,V_Q Y_u V_u^{\dagger}\,u_R^{\prime}\,, \end{eqnarray} which can be seen as a change of basis in flavor space. We can use the rotations of the broken symmetries to rotate away all the unphysical parameters of the Yukawa sector, since we know that, out of all the parameters, there are as many unphysical parameters as there are broken symmetry generators $$ N_{\mathrm{phys.}}= N_{\mathrm{all}} - N_{\text{broken gen.}}\,. $$ We note that there is a remaining $U(1)_B$ global flavor symmetry even after the inclusion of the Yukawa terms, associated with baryon number conservation. This means that we start off with $36$ ($18$ real and $18$ imaginary) free parameters of the Yukawa matrices and break $26$ out of the $27$ generators of the global symmetry, $17$ of which are rotations containing phases and $9$ of which are rotations with no phases\footnote{$U(3)$ can be written as $U(1)\times SU(3)$. The $U(1)$ transformation is a rephasing $\mathrm{e}^{i\beta}$. An $SU(3)$ transformation can be written as $\mathrm{e}^{\mathrm{i} \alpha_a T^a}$, where $T^{2,5,7}$ are imaginary and contribute real rotations, while $T^{1,3,4,6,8}$ are real and contribute rotations with rephasing.}. This leaves us with $10$ physical flavor parameters, $9$ of which are real while $1$ is a complex phase. Using $V_{Q,u,d}$ we can rotate the quark fields to the basis where one of the Yukawa matrices is diagonal \begin{eqnarray} \bar{Q}_L^{\prime}\underbrace{V_Q Y_d V_d^\dagger}_{Y_d^{\mathrm{diag.}}} d_R^\prime + \bar{Q}_L^{\prime} \underbrace{V_Q V_x^\dagger}_{V^\dagger} \underbrace{V_x Y_u V_u^\dagger}_{Y_u^{\mathrm{diag.}}} u_R^\prime\,. \label{eq:down_basis} \end{eqnarray} In the second term we had to insert a unitary matrix $V_x$ to obtain an expression with a diagonal $Y_u$, which is however multiplied by a unitary matrix $V$, called the {\sl Cabibbo-Kobayashi-Maskawa} (CKM) matrix, which was formulated in \cite{Cabibbo:1963yz,Kobayashi:1973fv} and will be the subject of more discussion later.
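The parameter counting above is simple enough to verify mechanically. The following minimal sketch (ours, purely illustrative) repeats it in Python:
\begin{verbatim}
# Back-of-the-envelope check of the flavor parameter counting in the text.
n_all = 2 * (9 + 9)        # two complex 3x3 Yukawas: 18 real + 18 imaginary
n_broken = 3 * 9 - 1       # three U(3) factors (9 generators each), minus
                           # the unbroken U(1)_B of baryon number
n_phys = n_all - n_broken  # 36 - 26 = 10

# consistent with 6 quark masses + 3 CKM mixing angles + 1 CP phase
assert n_phys == 6 + 3 + 1
print(n_all, n_broken, n_phys)   # 36 26 10
\end{verbatim}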
This basis is usually referred to as the ``down basis'', while we could have performed an analogous diagonalization of the $Y_u$ matrix, ending up in the ``up basis''. When the $SU(2)_L\times U(1)_Y$ gauge symmetry is broken by the VEV of the Higgs field $\langle \phi \rangle =(0,v/\sqrt{2})$, $\mathcal L_Y$ can be written as \begin{eqnarray} \mathcal L_Y = - \frac{v}{\sqrt{2}}\bar{d}^\prime_L Y_d^{\mathrm{diag.}} d_R^\prime - \frac{v}{\sqrt{2}} \bar{u}_L^\prime V Y_u^{\mathrm{diag.}}u_R^\prime + \cdots \,. \label{eq:breaking} \end{eqnarray} Note that we have obtained a mass term for every down-type quark. Since $SU(2)_L$ has been broken, we can rotate the $u_L$ fields with a separate $V_{u_L}$ rotation. By choosing the rotation matrix to be the CKM matrix, $\bar{u}_L^{\prime\prime} = \bar{u}_L^\prime V$, we obtain the mass terms for the up quarks as well. This rotation has important consequences in the charged current sector, where it generates flavor changing currents. Removing all the primes from the quark fields and reintroducing the flavor indices, the charged current Lagrangian is \begin{eqnarray} \mathcal L_{\mathrm{cc}} =- \frac{g}{\sqrt{2}} \big[\bar{u}_{iL}\gamma^{\mu}d_{jL}\big]V_{ij}W_{\mu}^+ - \frac{g}{\sqrt{2}} \big[\bar{d}_{jL}\gamma^{\mu}u_{iL}\big]V^*_{ij}W_{\mu}^-\,. \label{eq:SMcc} \end{eqnarray} On the other hand, due to the unitarity of the CKM matrix $V$, there are no tree-level {\sl flavor changing neutral currents} (FCNCs) in the SM\footnote{There are no dimension-4 operators in the SM that would generate FCNC transitions.}. The elements of the CKM matrix are usually denoted as \begin{eqnarray} V = \left(\begin{array}{ccc} V_{ud}& V_{us}& V_{ub}\\ V_{cd}& V_{cs}& V_{cb}\\ V_{td}& V_{ts}& V_{tb}\\ \end{array}\right)\,, \label{eq:CKMmat} \end{eqnarray} where not all of the parameters are independent. We have accounted for $6$ of the flavor parameters as quark masses, meaning that the CKM matrix must contain the remaining $4$, of which $3$ are real and $1$ is a complex phase. The CKM mechanism has proven to be very successful in describing flavor physics, and it has been tested in various high precision experiments. The so-called CKM unitarity triangle has been over-constrained by numerous measurements and is showing remarkable consistency. Furthermore, a highly diagonally dominant form of the matrix has been experimentally established and, perhaps most importantly, the complex phase has been measured to be non-zero, proving that the discrete symmetry of simultaneous {\sl charge conjugation and parity} (CP) is indeed violated in nature. The CKM phase is the only source of this violation within the SM. For in-depth coverage of the CKM mechanism and CP violation we refer the reader to the following references \cite{Lavura:CP, Bigi:CP,Charles:2004jd}. \subsection{Need to go beyond SM} \label{seq:strategy} Despite the unprecedented success of the SM, it is evident that it cannot be the final theory of elementary particles and their interactions. The discovery of dark matter (see for example \cite{Olive:2003iq,Trimble:1987ee}) and neutrino oscillations \cite{Fukuda:1998mi} shows that the particle content of the SM is not adequate, since it contains no dark matter candidate and neutrinos are massless in the SM.
While the CP violation at low energies is very well described by the CKM mechanism, there are strong indications that far more violation is needed to explain the large dominance of matter over anti-matter that we observe in the universe today \cite{Shaposhnikov:1987tw,Canetti:2012zc,Kajantie:1996mn}. What is more, the SM does not attempt to describe gravitational interactions, which become relevant at very high energies, of the order of the Planck scale $\Lambda_{\mathrm{P}}=\sqrt{\hbar c/G_{\mathrm{N}}} \sim 10^{19} $ GeV. Assuming that the SM could be a valid theory all the way up to the Planck scale gives rise to a puzzling situation referred to as the ``hierarchy problem'' \cite{Martin:1997ns,Wells:2009kq}. The word hierarchy refers to the large separation between the electroweak and the Planck scale. Due to the fact that the dimensionality of the Higgs mass operator is $2$, the radiative corrections to the mass turn out to be quadratically proportional to the cutoff scale $\Lambda$ \begin{eqnarray} m_H^2 = m_H^{(0)2} - \frac{\lambda^2}{(4\pi)^2} \Lambda^2 + \cdots \,. \end{eqnarray} Under the assumption that $\Lambda \sim \Lambda_{\mathrm{P}}$, a great deal of fine-tuning ($\sim 30$ orders of magnitude) between the parameters entering the $m_H^2$ expression is necessary in order to obtain the appropriately low Higgs mass (of the electroweak order). Differing views about its significance as a problem of the theory have earned the hierarchy problem the label of being somewhat controversial. Nevertheless, it is at least an aesthetic issue and one that {\sl beyond Standard Model} (BSM) theories (often referred to in the literature as UV completions) try to address, and they do so in many different manners. Following the aesthetic drive, the idea to unify interactions by embedding the SM product gauge group~(\ref{eq:sm_gauge_group}) into a single larger group comes naturally. The so-called grand unified theories aim to do just that~\cite{Mohapatra:1999vv}. Because the flavor parameters, the masses as well as the mixing parameters described by the CKM elements, exhibit a strong hierarchy, one cannot help but wonder if there is an underlying symmetry, manifested at energies above the electroweak scale, that could explain it. The SM provides no answer to this question, which has become known as the flavor problem of the SM. There are clearly more than enough reasons for theorists to explore the possibilities of {\sl new physics} (NP) BSM, and further to look for observables and processes which could help to discover NP and to discriminate between different NP scenarios. In many of them, the top quark provides a preferred search window due to its large coupling to the physics responsible for the electroweak symmetry breaking. A fascinating possibility, which we shall be exploring in our work, is that the top quark properties exhibit deviations from their predictions within the SM. In the next section we argue that, among other places, top quark decays are good ``hunting grounds'' for NP. In particular, decay modes and observables which are predicted within the SM to be highly suppressed are promising for the detection of NP effects, since a non-zero measured signal in such quantities would typically right away present a signal of physics beyond the SM. An important thing to keep in mind is that, in addition to directly probing top quark physics at the LHC and the Tevatron, top quark properties can also be explored in lower energy phenomena of meson physics, where the top quark appears as a virtual particle, often having the leading role in rare processes.
In our exploration of BSM top quark physics, we shall not be committing to particular NP models or frameworks. Rather, we will take the effective field theory approach, adding to the SM effective Lagrangians with which we will be able to parametrize our ignorance about the BSM theory. We will then study different observables which might be sensitive to our additions. In principle our results may be applicable to a variety of BSM models. \section{Top quark decays} The top quark is the heaviest experimentally confirmed elementary particle. It was discovered at the Tevatron in 1995 \cite{Abe:1995hr,Abachi:1995iq}. The two main features of the top quark are its large mass, experimentally measured to be \cite{Lancaster:2011wr} \begin{eqnarray} m_t = 173.2 \pm 0.9 \,\,\mathrm{GeV}\,, \label{eq:mt} \end{eqnarray} and its decay width. In the SM the top quark is predicted to decay almost exclusively through the charged weak current (\ref{eq:SMcc}). The $t\to W b$ channel, which we shall refer to as the main decay channel, is highly dominant due to the extreme hierarchy between the CKM matrix (\ref{eq:CKMmat}) elements of the third row, $|V_{td}|,|V_{ts}| \ll |V_{tb}|$. Branching ratios of top quark decays are always normalized to the main decay channel. The tree-level decay width computed at {\sl leading order} (LO) in QCD can be written as \begin{eqnarray} \Gamma (t\to W b ) =|V_{tb}|^2\frac{m_t}{16 \pi}\frac{g^2}{2}\frac{(1-x^2)^2(1+2x^2)}{2x^2} \sim 1.5 \,\,\mathrm{GeV}\,,\label{eq:SM_MDC} \end{eqnarray} where $x= m_W/m_t$, $m_W = 80.4$ GeV, the mass of the $b$ quark has been neglected and $|V_{tb}|=1$. The numerical value of $g$ is related to the Fermi constant through $G_F/\sqrt{2} = g^2/(8m_W^2)$, with $G_F = 1.166\times 10^{-5} \,\,\mathrm{GeV}^{-2}$ \cite{PDG}. Due to its large mass, the average lifetime of the top quark is an order of magnitude shorter than the typical hadronization time scale, causing it to decay before forming bound states \cite{Beneke:2000hk} and making its theoretical treatment free of non-perturbative QCD effects. While the production of top quarks is a very interesting area of research as well, especially in the case of $t\bar{t}$ production, where we are witnessing a persistent anomaly in the forward-backward asymmetry at the Tevatron~\cite{Aaltonen:2011kc,Abazov:2011rq} that has stimulated various theoretical attempts to reconcile it~\cite{Kamenik:2011wt,Drobnak:2012cz}, we shall concentrate in this work on two aspects of top quark decays, which we describe below within the framework of the SM, showing that they might be interesting for NP observations and constraints. On one hand we will study the main decay channel, exploring the helicities of the $W$ bosons produced through the top quark decay. The information on what fraction of the $W$s produced in the decays have certain helicities allows us to directly probe the structure of the $tWb$ interaction and its potential deviations from the SM form~(\ref{eq:SMcc}). On the other hand we shall consider the possibility of observing FCNC decays of the top quark. The branching ratios for these decays are highly suppressed within the SM, and any observation of such a process would signal the presence of NP. Both analyses, which are given in sections~\ref{sec:fcnc_twobody} and~\ref{sec:hel_nlo}, will be conducted at {\sl next-to-leading order} (NLO) in QCD. We should note that in the last five years the precision of the experimental top quark mass determination has been gradually improving and the central value (\ref{eq:mt}) of the top quark mass has been continuously changing.
As a consequence we will, in this work, encounter a few different values of the top quark mass being used in the numerical analyses, since the work presented here spans four years. The deviations are, however, small, and variations within these values have no significant effect on the results presented. \subsection{Helicity fractions in the main decay channel} \label{sec:hfSM} Since the $W$ boson is a spin-$1$ particle, we can split the decay width of the top quark's main decay channel into three parts depending on which of the three helicity states the produced $W$ is in \begin{eqnarray} \Gamma(t\to W b) = \Gamma_L + \Gamma_+ + \Gamma_- \,, \end{eqnarray} where $L$ stands for longitudinal, while $+$ and $-$ denote positive and negative transverse helicities, respectively. We further define the helicity fractions as \begin{eqnarray} \mathcal F_{L,+,-} = \frac{\Gamma_{L,+,-}}{\Gamma}\,, \end{eqnarray} telling us what fraction of $W$ bosons produced in top quark decays have a certain helicity. The main reason why helicity fractions are interesting for NP searches is that they are sensitive to the structure of the $tWb$ vertex governing the decay. On the computational side, there is more than one way to extract a certain helicity of the final state vector boson. We shall be making use of the covariant helicity projectors \cite{Kadeer:2009iw}, which are particularly useful when computing loop diagrams for QCD corrections. To define them we write down the squared matrix element for the $t\to W b$ decay as \begin{eqnarray} |\mathcal M|^2 = H_{\mu\nu} \epsilon^{\mu}(q,\lambda)\epsilon^{*\nu}(q,\lambda)\,. \end{eqnarray} $\epsilon^{\mu}(q,\lambda)$ are the polarization vectors of the $W$ fields, with $\lambda=1,2,3$ labeling their basis, and $q$ denotes the momentum of the $W$. We have put everything else into $H_{\mu\nu}$. When going from the squared matrix element to the decay width we can, even when considering a particular helicity final state, perform the summation over the polarizations of the $W$ boson, $\sum_{\lambda}\epsilon_{\mu}(q,\lambda)\epsilon^{*}_{\nu}(q,\lambda)$, replacing it with the appropriate helicity projector given in Tab.~\ref{tab:projectors}. \begin{table}[h] \begin{center} \begin{tabular}{r|l|c}\hline\hline Helicity & Projector: $\sum_{\lambda}\epsilon^{\mu}(q,\lambda)\epsilon^{*\nu}(q,\lambda)\to\mathbb{P}^{\mu\nu} $ & SM LO $\mathcal F_i$ with $m_b=0$ \\\hline Unpolarized: $\Gamma$&$ \mathbb{P}_{\mathrm{U}}^{\mu\nu}=-g^{\mu\nu}+\frac{q^{\mu}q^{\nu}}{m_W^2}$\\ Asymmetric: $\Gamma_F$ &$\mathbb{P}_{\mathrm{F}}^{\mu\nu}=\frac{1}{m_t}\frac{1}{|{\bf q}|}\mathrm{i} \epsilon^{\mu\nu\alpha\beta}p_{t\alpha}q_{\beta}$ \\ Longitudinal: $\Gamma_L$& $\mathbb{P}_{\mathrm{L}}^{\mu\nu}=\frac{m_W^2}{m_t^2}\frac{1}{|{\bf q}|^2}\big(p_t^{\mu}-\frac{p_t\cdot q}{m_W^2}q^{\mu}\big)\big(p_t^{\nu}-\frac{p_t\cdot q}{m_W^2}q^{\nu}\big)$ & $\frac{1}{1+2 x^2}$\\ Positive transversal: $\Gamma_+$& $\mathbb{P}_{\mathrm{+}}^{\mu\nu}=\frac{1}{2}\big(\mathbb{P}_{\mathrm{U}}^{\mu\nu}-\mathbb{P}_{\mathrm{L}}^{\mu\nu}+\mathbb{P}_{\mathrm{F}}^{\mu\nu}\big)$&$0$\\ Negative transversal: $\Gamma_-$& $\mathbb{P}_{\mathrm{-}}^{\mu\nu}=\frac{1}{2}\big(\mathbb{P}_{\mathrm{U}}^{\mu\nu}-\mathbb{P}_{\mathrm{L}}^{\mu\nu}-\mathbb{P}_{\mathrm{F}}^{\mu\nu}\big)$&$\frac{2x^2}{1+2x^2}$\\ \hline\hline \end{tabular} \caption{Covariant projectors extracting different helicities of a final state massive vector boson in a three-body decay.
The presented projectors are for the $t\to W b$ decay, where $p_t$ is the momentum of the decayed top quark and $q$ is the momentum of the $W$. The three-vector length $|{\bf q}|$ is evaluated in the top quark rest frame, and $\epsilon_{0123} = 1$. The appropriate projector $\mathbb{P}_i^{\mu\nu}$ replaces the sum over the polarization vector basis, depending on which helicity state we wish to project out. The last column shows the tree-level SM helicity fractions at LO in the limit of a massless $b$ quark.} \label{tab:projectors} \end{center} \end{table} In the last column we show the LO helicity fractions in the limit of a massless $b$ quark. We can see that within the SM the helicity fraction $\mathcal F_+$ vanishes in this approximation. The suppression is not hard to understand and is illustrated in Fig.~\ref{fig:illust}. If we consider the $b$ quark to be massless and produced in the weak interaction, which strictly involves left-handed components of the fermionic fields, its helicity has to be negative, since for massless fermions helicity and chirality coincide. From a simple consideration of spin conservation, the situation where the $W$ boson would have a positive helicity is not possible. \begin{SCfigure}[3.5][h!] \includegraphics[scale=0.8]{skica_hel.pdf} \vspace{0.2cm} \caption{An illustration of the $t\to W b$ decay in the rest frame of the top quark, where the limit $m_b =0$ is taken. Wide arrows represent the third component of the spin with respect to the horizontal axis, while narrow arrows represent the direction of momentum. Because the helicity and chirality of the massless $b$ quark coincide, its helicity is always negative. The third picture represents a situation that is not possible, since the top quark would have to have spin greater than $1/2$ to accommodate spin conservation, indicating the helicity suppression of $\mathcal F_+$. } \label{fig:illust} \end{SCfigure} This simple picture is altered if we consider the mass of the $b$ quark, or if the process involves more than just three particles, which is the case once higher order quantum corrections to the decay are considered. All of these effects have been analyzed within the SM, including $m_b$ and finite top quark width effects, NLO QCD and electroweak corrections, as well as {\sl next-to-next-to-leading order} (NNLO) QCD corrections \cite{Fischer:2001gp,Czarnecki:2010gb,Do:2002ky,Fischer:2000kx}. We summarize the theoretical predictions as \begin{eqnarray} \mathcal F_L^{\rm SM} = 0.687(5)\,,\hspace{0.5cm} \mathcal F_+^{\rm SM} = 0.0017(1) \label{eq:e22b}\,. \end{eqnarray} Even with the inclusion of these corrections $\mathcal F_+$ remains highly suppressed, and a measurement of the positive helicity fraction at the per-cent level would undoubtedly signal the presence of NP. How these predictions get altered by the presence of NP governing the $t\to W b$ decay is the subject of section \ref{sec:hel_nlo}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{main1.pdf} \caption{Leptonic final state of the top quark's main decay channel used for the extraction of helicity fractions. The angle $\theta^*$ is defined as the angle between the direction of the lepton in the $W$ rest frame and the direction of the $W$ in the rest frame of the decayed top quark. } \label{fig:hel_exp} \end{center} \end{figure} Helicity fractions are experimentally accessible through the measurement of angular distributions of the leptonic final state to which the $W$ boson decays, $t\to W b \to b \ell \nu$, as indicated in Fig.~\ref{fig:hel_exp}.
The extraction is based on the distribution of decays over $\cos\theta^*$, where $\theta^*$ is the angle between the momentum of the charged lepton in the $W$ boson rest frame and the momentum of the $W$ boson in the top quark rest frame \cite{Aaltonen:2010ha} \begin{eqnarray} \frac{\mathrm{d} \Gamma(t\to b \ell \nu)}{\mathrm{d} \cos\theta^*} \sim \frac{3}{4} (1-\cos^2\theta^*)\mathcal F_L + \frac{3}{8}(1+\cos\theta^*)^2\mathcal F_+ + \frac{3}{8}(1-\cos \theta^*)^2 \mathcal F_- \,. \end{eqnarray} At the moment the most precise measurements of the helicity fractions still come from the combined D0 and CDF analysis \cite{Aaltonen:2012rz} \begin{eqnarray} \mathcal F_L^{\mathrm{CDF+D0}} & = 0.722 \pm 0.081 \label{e1} \,,\hspace{0.5cm} \mathcal F_+^{\mathrm{CDF+D0}} = -0.033 \pm 0.046 \,, \label{eq:hel_exp} \end{eqnarray} which for now show agreement with the SM predictions (\ref{eq:e22b}). Helicity fractions are also being measured at the LHC \cite{ATLAS:hel}, and the sensitivity expected to be reached is \cite{AguilarSaavedra:2007rs} \begin{eqnarray} \sigma(\mathcal F_+) = \pm 0.002\,,\hspace{0.5cm} \sigma(\mathcal F_L)=\pm 0.02\,, \end{eqnarray} where $\sigma(\mathcal F_i)$ denotes the predicted absolute error on the measurement of the helicity fraction with $10 \,\,\mathrm{fb}^{-1}$ of accumulated data. \subsection{FCNC top quark decays} \label{sec:top_fcnc} As mentioned in section \ref{sec:intro_fcnc}, there are no FCNC decays possible at tree level in the SM. They can, however, occur at the one-loop level, where through two insertions of charged flavor changing currents we can obtain a neutral flavor change. \begin{figure}[h] \begin{center} \includegraphics[scale=0.65]{top_pingos.pdf} \caption{Feynman diagrams contributing to the FCNC decays $t\to q \gamma,Z,g$ in the unitary gauge for the weak interactions. The quarks running in the loops are down-type, and their flavor is denoted by $i$. } \label{fig:top_pingos} \end{center} \end{figure} The top quark FCNC decays that we will be interested in, $t\to q V$ with $V=\gamma,Z,g$ and $q=u,c$, proceed through the so-called penguin diagrams shown in Fig.~\ref{fig:top_pingos}. If we choose to work in the unitary gauge for the electroweak interactions, there are indeed only two diagrams to consider. By computing these diagrams we obtain decay widths of the following form \begin{eqnarray} \Gamma (t\to q V)\propto \Big|\sum_{i}\tilde{\lambda}_i^{(q)} f^{(V)}(x_i)\Big|^2\,, \label{eq:fcnc2} \end{eqnarray} where $i$ denotes the flavor of the down-type quark with mass $m_i$ running in the loops and $x_i = m_i^2/m_W^2$. The loop functions $f^{(V)}$ are specific to the $V$ gauge boson in the final state. Further, we have defined \begin{eqnarray} \tilde{\lambda}_i^{(q)} = V_{ti}^{*}V_{qi}\,. \label{eq:skalprod} \end{eqnarray} Note that due to the unitarity of the CKM matrix, the following equation holds \begin{eqnarray} \tilde{\lambda}_{d}^{(q)} + \tilde{\lambda}_s^{(q)} + \tilde{\lambda}_b^{(q)} = 0 \label{eq:CKMuni}\,, \end{eqnarray} since different rows of the CKM matrix~(\ref{eq:CKMmat}) are orthogonal. Consequently, when computing the decay widths (\ref{eq:fcnc2}), the sum over the flavors enables us to drop any term in $f^{(V)}$ that is not dependent on $x_i$. This is a property of loop functions often encountered when dealing with one-loop induced FCNC processes.
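This cancellation is easy to illustrate numerically. The sketch below (ours; the loop function is a toy stand-in for the actual $f^{(V)}$, and the CKM inputs are approximate) builds an exactly unitary CKM matrix in the standard parametrization and shows that a flavor-independent piece of the loop function drops out of the sum in Eq.~(\ref{eq:fcnc2}):
\begin{verbatim}
# Toy illustration of the GIM-type cancellation in Eq. (eq:fcnc2).
import numpy as np

s12, s23, s13, delta = 0.225, 0.042, 0.0036, 1.2   # approximate CKM inputs
c12, c23, c13 = (np.sqrt(1 - s**2) for s in (s12, s23, s13))
e = np.exp(1j * delta)

# standard parametrization: exactly unitary for any angles
V = np.array([
    [c12*c13,                   s12*c13,                   s13/e  ],
    [-s12*c23 - c12*s23*s13*e,  c12*c23 - s12*s23*s13*e,   s23*c13],
    [s12*s23 - c12*c23*s13*e,   -c12*s23 - s12*c23*s13*e,  c23*c13],
])

lam = V[2].conj() * V[1]       # lambda_i = V_ti^* V_ci, Eq. (eq:skalprod), q = c
print(abs(lam.sum()))          # ~1e-17: the unitarity relation, Eq. (eq:CKMuni)

x_i = (np.array([0.005, 0.10, 4.8]) / 80.4)**2    # x_i = m_i^2/m_W^2, i = d,s,b
f = lambda x: 7.0 + x                             # toy loop function
print(np.sum(lam * f(x_i)), np.sum(lam * x_i))    # equal: the constant cancels
\end{verbatim}
The surviving contribution is dominated by the $x_b$ term, in line with the discussion that follows.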
Neglecting $m_{d,s}$ and making use of the unitarity relation (\ref{eq:CKMuni}), the branching ratios for these decays are proportional to \begin{eqnarray} \mathrm{Br}[t\to q V] \propto |V_{qb}|^2|f^{(V)}(x_b)|^2\,. \end{eqnarray} The suppression of the branching ratios is twofold: firstly, $x_b \ll 1$, so the loop functions give small contributions, and secondly, $|V_{qb}|\ll 1$. The resulting values for $t\to c V$ are \cite{Eilam:1990zc, AguilarSaavedra:2004wm} \begin{eqnarray} \mathrm{Br}[t\to c \gamma]\sim 10^{-14}\,,\hspace{0.5cm} \mathrm{Br}[t\to c Z]\sim 10^{-14}\,,\hspace{0.5cm} \mathrm{Br}[t\to c g]\sim 10^{-12}\,, \end{eqnarray} while the results for $t\to u V$ are an additional order of magnitude smaller. Within many BSM models, such as Two Higgs Doublet Models, the Minimal Supersymmetric Standard Model, models with up-type quark singlets, etc., the suppression of FCNC top quark decays can be lifted~\cite{AguilarSaavedra:2004wm,Yang:2008sb,deDivitiis:1997sh,delAguila:1998tp}. It has been pointed out recently that top quark FCNC phenomenology is crucial in constraining a wide class of NP scenarios where new flavor structures are present but can be aligned with the SM Yukawas in the down sector~\cite{Fox:2007in, Gedalia:2010zs, Gedalia:2010mf, Datta:2009zb}. Top quark FCNCs can be directly probed both in production and in decays of the top quark, and all three FCNC decays presented here are being experimentally searched for. No observation has been made so far, thus upper limits at 95\% {\sl confidence level} (C.L.) have been set, the most stringent of which are \begin{eqnarray} \mathrm{Br}[t\to \{u,c\} \gamma]&<& \{5.9,32\} \times 10^{-3} \cite{Chekanov:2003yt,Abe:1997fz}\,,\label{eq:brs}\\ \nonumber\mathrm{Br}[t\to q Z]& < &3.4\times 10^{-3} \hspace{0.2cm}\cite{CMS:tcZ}\,,\\ \nonumber\mathrm{Br}[t\to \{u,c\} g]&<& \{5.7, 27\} \times 10^{-5}\hspace{0.2cm} \cite{Collaboration:2012gd}\,. \end{eqnarray} With the increasing accumulation of data the LHC will be able to probe lower branching ratios \cite{Carvalho:2007yi}; in particular, in the case of no signal, ATLAS projects to improve the upper bounds (\ref{eq:brs}) to \begin{eqnarray} \mathrm{Br}[t\to c\gamma] \lesssim 10^{-5} \,,\hspace{0.5cm} \mathrm{Br}[t\to cZ]\lesssim 10^{-4}\,. \label{eq:ATC1} \end{eqnarray} \section{Effective field theories} \label{sec:effec} The concept of effective field theories is highly applicable in high energy physics, where we often encounter problems involving widely separated scales. On the one hand, effective theories are very useful when the underlying theory is unknown, allowing us to parametrize its effects on the physics at lower energies in a systematic fashion. On the other hand, they are also useful when the underlying theory is known, since in general the full theory can be quite complicated and passing to an effective theory simplifies matters greatly. In particular, going to an effective theory can make manifest approximate symmetries that are not visible in the full theory, and increased symmetry means increased predictive power. Furthermore, when the full theory contains several disparate scales $m\ll M$, perturbation theory can be poorly behaved, since higher order quantum corrections typically generate logarithmic terms of the form $\log(m^2/M^2)$. When these logs are large, they need to be resummed in a systematic fashion in order to keep perturbation theory under control. Working within an effective theory simplifies the resummation of these logs.
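To get a feeling for the numbers, take $m=m_b$ and $M=m_W$. The following minimal sketch, with illustrative mass and coupling values, shows that the logarithm compensates much of the coupling suppression of a nominally higher order term:
\begin{verbatim}
import math

# Size of the large logarithm for m = m_b, M = m_W (illustrative values)
m_b, m_W = 4.2, 80.4              # GeV
alpha_s = 0.22                    # rough alpha_s(m_b)

L = math.log(m_W**2 / m_b**2)     # ~ 5.9
print(f"log(M^2/m^2)        = {L:.2f}")
print(f"alpha_s/(4 pi)      = {alpha_s/(4*math.pi):.4f}")
print(f"alpha_s/(4 pi) * L  = {alpha_s/(4*math.pi)*L:.3f}")
# the nominally NLO term is log-enhanced by a factor of ~6,
# which is why such logs need to be resummed via the RGE
\end{verbatim}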
In this section we aim to briefly introduce the effective theory techniques that we shall be employing in our analysis. It is worth mentioning that the strength and applicability of effective theories in particle physics goes far beyond what we shall be presenting. For detailed explanations we refer the reader to the following pedagogical works \cite{Collins:Ren,Collins:1995hda,Buras:1998raa,Rothstein:2003mp}. \subsection{Operator product expansion} The {\sl operator product expansion} (OPE) \cite{Wilson:1969zs} translates a time-ordered product of two operators into a series of local operators \begin{eqnarray} \hat{T}\mathcal O_1(x)\mathcal O_2(y)\xrightarrow{x\to y} \sum_{i} C_i^{(12)}(x-y) \mathcal O_i(x)\,, \end{eqnarray} where the spatial separation $x-y$ is assumed to be small. The Wilson coefficients $C_i^{(12)}$, which are c-numbers, capture all the short-distance dependence on $x-y$. We can apply this expansion when computing amplitudes for different processes, which is typically done in momentum space. In particular, when we encounter a Feynman diagram with a virtual heavy particle of mass $M$ and are interested in external momenta $p\ll M$, we can perform a Taylor expansion of the amplitude in the parameter $p/M$. We can then ask what kind of {\sl effective Lagrangian}, not containing the heavy field, we would need to write down in order to reproduce the full theory result. This leads us to an expansion of the form \cite{Rothstein:2003mp} \begin{eqnarray} \mathcal L_{\mathrm{eff.}} = \sum_{i,d} C_i^{(d)} \frac{1}{M^{d-4}} \mathcal O_i^{(d)}\,,\label{eq:ope2} \end{eqnarray} where the sum runs over $d$, representing the dimensionality of the local operators, and over $i$, the basis of operators of a given dimensionality. Typically the basis consists of more than just one operator, and each operator comes accompanied by its own Wilson coefficient, which in this equation is assumed to be dimensionless. The OPE reveals a very important point: contributions of higher dimensional operators come suppressed by higher powers of the high scale $M$, which represents the scale of the physics that has been ``integrated out''. The procedure of integrating out the heavy fields reduces the number of dynamical fields in the Lagrangian. If the underlying theory is known, we are able to match the effective and full theories, thus obtaining the Wilson coefficients. If, on the other hand, the full theory is not known, the matching procedure cannot be performed, but we can still rely on the OPE to write down an appropriate basis of operators, analyze how they affect certain observables and, most importantly, truncate the series at a certain dimensionality of the operators, knowing that higher dimensional operators come with stronger suppression, as indicated in (\ref{eq:ope2}). \subsection{Running of the Wilson coefficients}\label{sec:running} As already mentioned, the computation of quantum corrections within a theory containing two scales $m,M$ (which we will refer to as the full theory) will typically introduce logarithmic terms of the following form in the amplitude \begin{eqnarray} A_{\mathrm{full}} = \cdots + \Big(a + b \log \frac{M^2}{m^2}+\cdots\Big)\langle \mathcal O_i\rangle + \cdots\,,\label{eq:fulll} \end{eqnarray} where $a$ and $b$ in general denote products of different couplings. We assume the $a$ term to come from LO diagrams, while the $b$ term comes from NLO corrections.
For the NLO part we have written out only the logarithmic term, making our analysis a leading-log approximation. Finally, $\langle \mathcal O_i\rangle$ is the matrix element of a certain operator. If the two scales are widely separated, $m\ll M$, the logarithm of their ratio can be large and we might encounter a problem: even if $b$ is small, being composed of parameters suitable for a perturbative expansion, the logarithm provides a potentially large enhancement, rendering the perturbative expansion at least questionable. The best way to resum the large logs is to employ the effective theory approach. We compute the amplitude for the same process and to the same order in perturbation theory in the effective theory $\nolinebreak{\mathcal L_{\mathrm{eff}}= C_i \mathcal O_i}$, from which the heavy degree of freedom (having mass $M$) has been integrated out. The amplitude will be UV divergent and the factor in front of the divergence will exactly match the factor in front of the large log in the full theory \begin{eqnarray} A_{\mathrm{eff}}= \cdots + C_i \Big(1+ \frac{b}{a}\big(\frac{2}{\bar{\epsilon}} - \log \frac{m^2}{\mu^2} \big) \Big)\langle \mathcal O_i\rangle + \cdots \,, \hspace{0.5cm} \frac{2}{\bar{\epsilon}} = \frac{2}{\epsilon} - \gamma + \log 4\pi\,.\label{eq:efff} \end{eqnarray} Here we have chosen dimensional regularization \cite{'tHooft:1973mm,'tHooft:1973us,Leibbrandt:1975dj} of the UV divergence, working in $d=4-\epsilon$ dimensions, which necessitates the introduction of an arbitrary scale $\mu$; $\gamma$ is the Euler constant. Performing the matching procedure between (\ref{eq:fulll}) and (\ref{eq:efff}) order by order in the perturbative expansion thus gives us the Wilson coefficient $$ C_i= a + b\Big( \frac{2}{\bar{\epsilon}} + \log \frac{M^2}{\mu^2}\Big) + \cdots\,, $$ which is UV divergent. We can renormalize it using the {\sl modified minimal subtraction} ($\overline{\mathrm{MS}}$) renormalization scheme, very appropriate for renormalization group applications (for a detailed discussion see \cite{Buras:1998raa}). Notice that had we performed the matching only at LO, the extracted Wilson coefficient would have been the same as the leading-log coefficient with the matching scale set to $\mu = M$. The renormalization of the effective theory is said to involve operator renormalization: either the operators or the Wilson coefficients in the Lagrangian are bare objects, denoted by $(0)$, and need to be renormalized by the appropriate renormalization matrices denoted by $\hat{Z}$ \begin{eqnarray} \text{Bare operators: } & C_i \mathcal O_i^{(0)}=C_i Z_{ij} \mathcal O_j \,,\\ \text{Bare W. coefficients: }&C_i^{(0)} \mathcal O_{i} = C_i Z^{c T}_{ij} \mathcal O_j=C_i Z_{ij}^{cT} Z^{-1}_{j k}\mathcal O_k^{(0)}\,. \end{eqnarray} Comparing the last expression of the second line with the first expression of the first line, we find the relation between the two renormalization matrices \begin{eqnarray} \hat{Z}^{cT} = \hat{Z}^{-1}\,. \end{eqnarray} The extraction of $\hat{Z}$ in the minimal subtraction scheme is achieved by finding the UV divergent parts of the effective theory amplitude. Operators are composed of fields and other parameters which themselves need to be renormalized. Usually we consider these objects appearing in the operators to be renormalized, so when looking for the operator renormalization matrix this needs to be taken into consideration (for details see Ref.~\cite{Buras:1998raa}).
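As a sanity check of the matching logic, the $\mu$ dependence of the renormalized Wilson coefficient must cancel against that of the effective theory matrix element at the order considered: the explicit $\log(m^2/\mu^2)$ of (\ref{eq:efff}) combines with the $\log(M^2/\mu^2)$ of the renormalized coefficient into the $\mu$-independent $\log(M^2/m^2)$ of (\ref{eq:fulll}). A minimal numerical sketch, with all inputs purely illustrative:
\begin{verbatim}
import math

# mu-independence of C_i(mu) * <O_i>(mu) at leading log:
# C_i(mu) = a + b*log(M^2/mu^2), <O_i>(mu) ~ 1 - (b/a)*log(m^2/mu^2)
a, b = 1.0, 0.05          # illustrative LO and NLO coefficients
m, M = 5.0, 500.0         # widely separated scales (GeV)

def amplitude(mu):
    C = a + b * math.log(M**2 / mu**2)               # renormalized coeff.
    ME = 1.0 - (b / a) * math.log(m**2 / mu**2)      # finite part of <O_i>
    return C * ME

for mu in (5.0, 50.0, 500.0):
    print(f"mu = {mu:6.1f}  ->  amplitude = {amplitude(mu):.6f}")
# the residual mu dependence is of order b^2, beyond leading-log accuracy
\end{verbatim}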
To obtain the {\sl renormalization group equations} (RGE) for the Wilson coefficients we use the fact that the bare quantities cannot depend on $\mu$ \begin{eqnarray} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} \mathcal O_i^{(0)} = 0& \Longrightarrow& \mu\frac{\mathrm{d} }{\mathrm{d} \mu} \boldsymbol{\mathcal O} = - \Big[\hat{Z}^{-1}\mu\frac{\mathrm{d}}{\mathrm{d} \mu}\hat{Z}\Big]\boldsymbol{\mathcal O}\equiv -\hat{\gamma} \boldsymbol{\mathcal O}\,,\\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu} C_i^{(0)}=0&\Longrightarrow& \mu \frac{\mathrm{d}}{\mathrm{d} \mu} \boldsymbol{C} = \Big[\hat{Z}^{-1} \mu\frac{\mathrm{d}}{\mathrm{d} \mu} \hat{Z}\Big]^T \boldsymbol{C} \equiv \hat{\gamma}^T \boldsymbol{C}\,.\label{eq:runC} \end{eqnarray} We have defined the anomalous dimension matrix $\hat{\gamma}$, which governs the running of the Wilson coefficients. It depends on some perturbative coupling, which we generically denote $g$, and which in the cases we will be considering is always the QCD coupling constant $g_s$. An important thing to note is that the coupling constant itself depends on $\mu$ and its running is determined by its beta function \begin{eqnarray} \mu \frac{\mathrm{d} g}{\mathrm{d} \mu} = \beta(g)\,, \end{eqnarray} where the QCD beta function is known to four loops \cite{vanRitbergen:1997va}. Taking that into account, the solution of the differential equation (\ref{eq:runC}) can be expressed in the following way \begin{eqnarray} \boldsymbol{C}(\mu_2) = \hat{U}(\mu_2,\mu_1) \boldsymbol{C}(\mu_1)\,,\hspace{0.5cm} \hat{U}(\mu_2,\mu_1) =\exp\Big[\int_{g(\mu_1)}^{g(\mu_2)}\frac{\mathrm{d} g'}{\beta(g')}\hat{\gamma}^T(g')\Big]\,. \end{eqnarray} Depending on the scale at which we want to evaluate the matrix element of the operator, we can now run the Wilson coefficient to that scale and thereby resum the large logarithms, promoting our result to a renormalization group improved perturbation theory prediction. \section{Top quark in meson physics} \label{sec:SMmix} The top quark also plays an important role in physics at energies below its mass. In such processes there is not enough energy to produce an on-shell top quark; rather, it appears as a virtual particle in loop diagrams. Just as the existence of the charm quark was predicted by the Glashow-Iliopoulos-Maiani mechanism~\cite{Glashow:1970gm} before its experimental discovery, there was a strong belief in the existence of the top quark well before its discovery at the Tevatron. What is more, due to the good theoretical and experimental understanding of rare processes in kaon and $B$ meson physics, its mass was well estimated \cite{Ginsparg:1983zc, Buras:1983ap, Albrecht:1987dr, Buras:1993wr}. As pointed out in Ref.~\cite{Fox:2007in}, when searching for BSM physics in the top quark sector, which we have set out to do, one should also consider meson physics observables and the indirect effects that NP in top quark physics might cause there. Employing the OPE and effective theory techniques presented in section \ref{sec:effec}, our study of such indirect effects is reduced to finding the Wilson coefficients of the lower energy theory from which the top quark and the heavy vector bosons have been integrated out. Our modification of the SM Lagrangian will affect only physics at scales where QCD is perturbative, and will therefore be contained entirely in the Wilson coefficients. Once we compute the Wilson coefficients, the procedure of obtaining low energy meson observables is exactly the same as in the SM, since no modification of low energy physics is made.
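Since the running of Wilson coefficients described in section \ref{sec:running} will be used repeatedly below, it may help to see the leading-log solution of Eq.~(\ref{eq:runC}) in the simplest one-operator case, where the evolution matrix reduces to $\hat U(\mu_2,\mu_1) = [\alpha_s(\mu_1)/\alpha_s(\mu_2)]^{\gamma_0/2\beta_0}$ for $\hat\gamma = \gamma_0\,\alpha_s/4\pi$. The following is a minimal sketch with a one-loop beta function; the anomalous dimension and reference coupling values are illustrative assumptions:
\begin{verbatim}
import math

# Leading-log running of a single Wilson coefficient:
# C(mu2) = [alpha_s(mu1)/alpha_s(mu2)]^(gamma0/(2*beta0)) * C(mu1)
def alpha_s(mu, alpha_ref=0.108, mu_ref=173.0, beta0=7.0):
    """One-loop running coupling; beta0 = 7 above the top mass scale."""
    return alpha_ref / (1.0 + alpha_ref * beta0 / (2 * math.pi)
                        * math.log(mu / mu_ref))

def run_coefficient(C_at_Lambda, Lambda, mu_t, gamma0, beta0=7.0):
    eta = alpha_s(Lambda) / alpha_s(mu_t)
    return eta ** (gamma0 / (2 * beta0)) * C_at_Lambda

# illustrative dipole-type anomalous dimension gamma0 = 8/3
print(run_coefficient(1.0, Lambda=2000.0, mu_t=200.0, gamma0=8.0/3.0))
\end{verbatim}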
In this section we explain in some detail the matching procedure at LO in QCD for $|\Delta B|=2$ and $|\Delta B|=1$ processes, in which the top quark turns out to play an important role. The same computational approaches that we introduce here for the SM case shall be employed when we consider manifestations of NP in charged quark currents in chapter \ref{chap:CC}. \subsection{$|\Delta B| = 2$ transitions} \label{sec:dB2} Mixing between $B_q$ and $\bar{B}_q$ mesons, where $q$ stands for either of the down-type quarks $d$ or $s$, is a $|\Delta B| =2$ process, since a $b \leftrightarrow \bar{b}$ transition occurs. It is a FCNC process highly suppressed in the SM and sensitive to NP effects. For a pedagogical description of the theoretical treatment as well as insight into the experimental aspects of meson mixing we refer the reader to the following references \cite{Lavura:CP, Bigi:CP, Lenz:2010gu}. The $B_q$ and $\bar B_q$ states are flavor eigenstates and they oscillate into each other. Within the Wigner-Weisskopf approximation, the oscillation is governed by the Schr\"odinger equation \begin{eqnarray} \mathrm{i} \frac{\mathrm{d} }{\mathrm{d} t} \left(\hspace{-0.2cm}\begin{array}{c} |B_q(t)\rangle\\ |\bar{B}_q(t)\rangle \end{array} \hspace{-0.2cm}\right)= [M^q - \frac{\mathrm{i}}{2}\Gamma^q] \left(\hspace{-0.2cm}\begin{array}{c} |B_q(t)\rangle\\ |\bar{B}_q(t)\rangle \end{array} \hspace{-0.2cm}\right)\,,\label{eq:osc} \end{eqnarray} where $M^q$ and $\Gamma^q$ are Hermitian mass and decay matrices. The physical eigenstates $|B_L\rangle\,, |B_H\rangle$, having masses $M_{L,H}^q$ and decay widths $\Gamma_{L,H}^q$, are obtained by diagonalizing the matrix $M^q-\mathrm{i}\Gamma^q/2$. The oscillation (\ref{eq:osc}) between the flavor eigenstates involves three physical quantities \begin{eqnarray} |M_{12}^q|\,,\hspace{0.5cm} |\Gamma_{12}^q|\,,\hspace{0.5cm} \phi_q = \arg\Big(-\frac{M_{12}^q}{\Gamma_{12}^q}\Big)\,,\label{eq:mixnorm} \end{eqnarray} which are the off-diagonal mass and width matrix elements and the CP violating phase, respectively. On the computational side, $M_{12}^q$ is obtained from the dispersive part of the transition amplitude between the meson and anti-meson, \begin{eqnarray} M_{12}^q = \frac{1}{2m_{B_q}}\langle B_q|\mathcal H_{\mathrm{eff},q}^{|\Delta B|=2}|\bar B_q\rangle_{\mathrm{disp}}\,, \end{eqnarray} where $m_{B_q}$ is the mass of the $B$ meson. On the other hand, $\Gamma_{12}^q$ is obtained from the absorptive part of the same matrix element, but we shall not consider it further. $\mathcal H_{\mathrm{eff},q}^{|\Delta B|=2}= -\mathcal L_{\mathrm{eff},q}^{|\Delta B|=2}$ is the effective Hamiltonian governing the $|\Delta B|=2$ transitions, and in the SM we have \begin{eqnarray} \mathcal L_q^{|\Delta B|=2}=- \frac{G_F^2 m_W^2}{4\pi^2}(V_{tq}^*V_{tb})^2 C_1(\mu)\mathcal O_1^q \,,\hspace{0.5cm} \mathcal O_1^q = \big[\bar q_L \gamma^{\mu} b_L\big] \big[\bar q_L\gamma_{\mu}b_L\big]\,. \label{eq:LSMmix} \end{eqnarray} In order to find the Wilson coefficient at the high scale of the $W$ boson and top quark masses, which we denote $\mu_W$, we need to perform the matching at LO in QCD. Since the quark content of the $B$ and $\bar B$ mesons can be written as \begin{eqnarray} B_q\sim \bar b q\,,\hspace{0.5cm} \bar B_q \sim b \bar q\,, \end{eqnarray} we need two $\bar{q}$ and two $b$ field operators in order to contract the final and initial states and obtain a non-zero matrix element.
On the full theory side, this necessitates a fourth-order ($g^4$) perturbative insertion of the charged current interactions given in Eq.~(\ref{eq:SMcc}). For simplicity we shall make use of the unitary gauge for the weak interactions, eliminating the would-be Goldstone fields from the theory. At the lowest order the mixing of $B$ mesons proceeds through the so-called box Feynman diagrams, which are presented on the left side of Fig.~\ref{fig:SMmix}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{SM_mixing.pdf} \includegraphics[scale=0.6]{EFF_mix.pdf} \end{center} \caption{Feynman diagrams for the $\bar{B}_{q}\to B_{q}$ transition at LO in QCD in the full theory (left) and in the effective theory (right). Quarks running in the loops are up-type and $i,j$ denote their flavor.} \label{fig:SMmix} \end{figure} In the computation of the box diagrams the momenta of the external quarks are set to zero, since the operator we are matching to contains no derivatives or masses \cite{Rothstein:2003mp}. This leaves all the propagators in the loop with the same momentum and greatly simplifies the integration. Under these assumptions both diagrams contribute equally and the obtained amplitude reads \small\begin{eqnarray} A_{\mathrm{full}}=\frac{\mathrm{i} g^4}{64 \pi^2 m_W^2} \langle \bar{b} q | \big[\bar{q}_{1L}\gamma^{\mu}b_{3L}][\bar{q}_{2L}\gamma_{\mu}b_{4L}\big] +\big[\bar{q}_{1L}\gamma^{\mu}b_{4L}][\bar{q}_{2L}\gamma_{\mu}b_{3L}\big] |b \bar q\rangle \sum_{i,j}^{u,c,t} V_{iq}^*V_{ib}V_{jb}V_{jq}^* F(x_i,x_j)\,, \label{eq:SMmixAMP1} \end{eqnarray} \normalsize where the indices $1,\dots,4$ on the quark fields label the contractions of the external fields. $F(x_i,x_j)$ is an Inami-Lim function, first presented in Ref.~\cite{Inami:1980fz}, with $x_{i,j} = m_{i,j}^2/m_W^2$. When computed in the unitary gauge it is UV divergent \begin{eqnarray} F(x_i,x_j)=\Big(\frac{2}{\bar \epsilon}-\log\frac{m_W^2}{\mu^2}\Big)\frac{x_i+x_j-6}{4} + \cdots\,, \label{eq:Fxixj} \end{eqnarray} where we have written out only the manifestation of the UV divergence within dimensional regularization, denoting the rest with dots. What renders the amplitude (\ref{eq:SMmixAMP1}) finite are the two summations over the flavors of the up-type quarks running in the loops. In a similar fashion as in Eq.~(\ref{eq:skalprod}), we define \begin{eqnarray} \lambda_i^{(q)} = V_{iq}^* V_{ib}\,, \label{eq:lambda_b} \end{eqnarray} where now the roles of up-type and down-type quarks are interchanged, and the unitarity of the CKM matrix yields, in analogy to Eq.~(\ref{eq:CKMuni}), the relation \begin{eqnarray} \lambda_t^{(q)}+\lambda_c^{(q)}+\lambda_u^{(q)}=0\,, \label{eq:CKM1} \end{eqnarray} which tells us that we can safely drop any term in $F(x_i,x_j)$ that does not depend on both $x_i$ and $x_j$. Consequently, the terms presented in Eq.~(\ref{eq:Fxixj}) get cancelled. Neglecting the masses of the light up-type quarks, $x_u=x_c=0$, making use of (\ref{eq:CKM1}), and for brevity suppressing the $(q)$ superscript of $\lambda^{(q)}_i$, we arrive at the simplified result \begin{eqnarray} A_{\mathrm{full}}=\frac{\mathrm{i} G_F^2 m_W^2}{2 \pi^2} \langle \bar b q| \big[\bar{q}_{1L}\gamma^{\mu}b_{3L}][\bar{q}_{2L}\gamma_{\mu}b_{4L}\big] +\big[\bar{q}_{1L}\gamma^{\mu}b_{4L}][\bar{q}_{2L}\gamma_{\mu}b_{3L}\big]|b \bar q\rangle \lambda_t^2 S_0^{\mathrm{SM}}(x_t)\,, \label{eq:Afull} \end{eqnarray} where \begin{eqnarray} S_0^{\mathrm{SM}}(x_t)= F(x_t,x_t)-2F(x_t,0)+F(0,0)=\frac{x_t(x_t^2-11 x_t+4)}{4(x_t-1)^2}+\frac{3x_t^3\log x_t}{2(x_t-1)^3}\,.
\label{eq:s0} \end{eqnarray} This result explicitly confirms what we have been stating about the dominant role of the top quark in the mixing process. To complete the matching procedure we have to calculate the amplitude for the same process using the effective theory~(\ref{eq:LSMmix}), where we have just one local operator to be inserted at first order in perturbation theory. We can choose either of the two $b$ fields in the operator to contract the $b$ quark final state, giving us a factor of $2$. We then have a further choice of contracting the $q$ fields of the operator with the external $q$ quark fields, corresponding to the two Feynman diagrams given on the right side of Fig.~\ref{fig:SMmix}, and obtaining the result \begin{eqnarray} A_{\mathrm{eff}}=\mathrm{i}\frac{G_F^2m_W^2}{2 \pi^2}\lambda_t^{2} C_1 \langle \bar b q| \big[\bar{q}_{1L}\gamma^{\mu}b_{3L}][\bar{q}_{2L}\gamma_{\mu}b_{4L}\big] +\big[\bar{q}_{1L}\gamma^{\mu}b_{4L}][\bar{q}_{2L}\gamma_{\mu}b_{3L}\big]|b \bar q\rangle \,. \label{eq:Aeff} \end{eqnarray} Comparing (\ref{eq:Afull}) and (\ref{eq:Aeff}) we find for the Wilson coefficient \begin{eqnarray} C_1^{\mathrm{SM}}(\mu_W)=S_0^{\mathrm{SM}}(x_t)\,. \label{eq:C1SM} \end{eqnarray} We have demonstrated the matching at LO in QCD; however, higher order perturbative QCD corrections are not negligible in such processes and can be captured in a rescaling factor $\widehat{\eta}_B$~\cite{Lenz:2010gu,Buras:1990fn} \begin{eqnarray} C_1^{\mathrm{SM, NLO}} = C_1^{\mathrm{SM,LO}}\widehat{\eta}_B \,,\hspace{0.5cm} \widehat{\eta}_B = 0.8393 \pm 0.0034\,. \end{eqnarray} Furthermore, considering NLO QCD corrections and employing RG methods, the coefficient can be run down to lower energy scales $\mu_b$, at which different non-perturbative methods can be used to evaluate the matrix element of the $\mathcal O_1^q$ operator. Just for completeness we show the typical parametrization of the matrix element, following Ref.~\cite{Lenz:2010gu} \begin{eqnarray} \langle B_q | \mathcal O^q_1 (\mu_b)|\bar{B}_q\rangle = \frac{2}{3} M^2_{B_q} f_{B_q}^2 \mathcal B_{B_q}(\mu_b)\,, \end{eqnarray} where $f_{B_q}$ and $\mathcal B_{B_q}$ are nonperturbative parameters, the decay constant and the bag parameter, respectively. It is apparent that NP affecting the charged quark currents, especially the top quark interactions, will affect the mixing amplitude. On the computational level it will lead to new contributions on the full theory side, resulting in a change of the Wilson coefficient at the weak scale. \subsection{$|\Delta B| = 1$ transitions} \label{sec:dB1} In this section we consider another type of rare FCNC processes, namely the rare decays of $B$ mesons. On the quark level, the transitions that we will be interested in are \begin{eqnarray} b \to s \gamma \,,\hspace{0.2cm} b\to s g \,, \hspace{0.2cm} b\to s \ell^+ \ell^- \,, \hspace{0.2cm} b\to s \nu\bar{\nu}\,,\label{eq:fcnc_transitions} \end{eqnarray} giving rise to different decays on the hadronic level. Note that while we could again consider the light down-type quark to be either $d$ or $s$, we shall commit to the case of $s$, because the experimental sensitivities for the $|\Delta B|=1$ processes that we will be studying are better when the final state quark is an $s$ quark. The results of the matching can, however, be applied to the $d$ case by a simple $s\leftrightarrow d$ exchange.
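Before performing the analogous matching for these transitions, it is worth noting the numerical size of the $|\Delta B|=2$ matching result above. The following minimal sketch evaluates Eq.~(\ref{eq:s0}) directly; the pole-mass inputs are illustrative:
\begin{verbatim}
import math

# Inami-Lim function S0 of Eq. (eq:s0) and the SM Wilson coefficient
def S0(x):
    return (x * (x**2 - 11*x + 4) / (4 * (x - 1)**2)
            + 3 * x**3 * math.log(x) / (2 * (x - 1)**3))

m_t, m_W = 173.0, 80.4          # GeV, illustrative mass inputs
x_t = (m_t / m_W)**2

C1_LO = S0(x_t)                  # = C_1^SM(mu_W) at LO
C1_NLO = 0.8393 * C1_LO          # rescaled by eta_B quoted in the text
print(f"x_t = {x_t:.2f}, S0 = {C1_LO:.2f}, C1(NLO) = {C1_NLO:.2f}")
# S0 ~ 2.5: the top quark dominates the mixing amplitude
\end{verbatim}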
Very similarly to the FCNC decays of the top quark described in section \ref{sec:top_fcnc}, within the SM the transitions given in Eq.~(\ref{eq:fcnc_transitions}) cannot proceed through tree-level diagrams, but only through two insertions of charged current interactions at the one-loop level. The Lagrangian of the low energy effective theory, from which the heavy vector bosons and the top quark have been integrated out and which is adequate for the description of the processes (\ref{eq:fcnc_transitions}), can be written as \begin{eqnarray} {\cal L}_{\mathrm{eff}}&=& \frac{4 G_F}{\sqrt{2}}\Big[ \sum_{i=1}^2 C_{i}( \lambda_u \mathcal O^{(u)}_i + \lambda_c \mathcal O^{(c)}_i) \Big] + \frac{4 G_F}{\sqrt{2}}\lambda_t\Big[\sum_{i=3}^{10} C_{i}{\cal O}_i + C_{\nu\bar{\nu}}{\cal O}_{\nu\bar{\nu}}\Big]\,, \label{eq:loweff1} \end{eqnarray} where $\lambda_i$ stands for $\lambda^{(s)}_i$ defined in Eq.~(\ref{eq:lambda_b}). The relevant operators read \begin{align} {\cal O}_2^c &= \big(\bar{c}_L\gamma^{\mu}b_L\big)\big(\bar{s}_L\gamma_{\mu}c_L\big)\,,& {\cal O}_9&= \frac{e^2}{(4\pi)^2}\big(\bar{s}_L\gamma^{\mu}b_L\big)\big(\bar{\ell}\gamma_{\mu}\ell\big)\,,\label{eq:ops2}\\ {\cal O}_7&= \frac{e m_b}{(4\pi)^2}\big(\bar{s}_L\sigma^{\mu\nu}b_R\big)F_{\mu\nu}\,,& {\cal O}_{10}&= \frac{e^2}{(4\pi)^2}\big(\bar{s}_L\gamma^{\mu}b_L\big)\big(\bar{\ell}\gamma_{\mu}\gamma_5\ell\big)\,,\nonumber \\ \nonumber{\cal O}_8&= \frac{g_s m_b}{(4\pi)^2}\big(\bar{s}_L\sigma^{\mu\nu}T^ab_R\big)G^a_{\mu\nu}\,,& {\cal O}_{\nu\bar{\nu}}&=\frac{e^2}{(4\pi)^2}\big(\bar{s}_L\gamma^{\mu}b_L\big)\big(\bar{\nu}\gamma_{\mu}(1-\gamma^5)\nu\big) \,. \end{align} Here $T^a$ are the $SU(3)_c$ generators in the fundamental representation and $\sigma_{\mu\nu} = \mathrm{i}/2 [\gamma_{\mu},\gamma_{\nu}]$. The electromagnetic and gluonic field strength tensors are defined as \begin{eqnarray} F_{\mu\nu} &=& \partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\,, \label{eq:FSdef}\\ \nonumber G_{\mu\nu}^a&=&\partial_{\mu}G^a_{\nu}-\partial_{\nu}G^a_{\mu}-g_s f_{abc}G_{\mu}^b G_{\nu}^c\,, \end{eqnarray} where $f_{abc}$ are the $SU(3)_c$ structure constants. Since they are not crucial for our analysis, we omit the definitions of the remaining four-quark operators $\mathcal O_{3,\dots,6}$, which can be found for example in Ref.~\cite{Buchalla:1995vs}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{SM_pingos.pdf} \caption{Representative penguin and box Feynman diagrams to be calculated in the matching of the SM at LO in QCD to the effective Lagrangian given in Eq.~(\ref{eq:loweff1}). Quarks running in the loops are up-type and $i$ denotes their flavor. In addition to the presented penguin diagrams, one also needs to consider diagrams where the gauge boson is emitted from the $W$ boson (except for the gluon) or from the external quark legs. Diagrams with would-be Goldstone bosons, which are present when working in a general $R_{\xi}$ gauge for the weak interactions, are not presented.} \label{fig:sm_pingos} \end{center} \end{figure} In the procedure of matching the SM to the effective theory (\ref{eq:loweff1}), virtual top quarks again turn out to play the dominant role. Representative diagrams to be computed are given in Fig.~\ref{fig:sm_pingos}. In the first diagram the photon and/or gluon are considered to be on-shell, contributing to $C_7$ and $C_8$, respectively. The remaining two penguin diagrams contribute to $C_9$, $C_{10}$ and $C_{\nu\bar\nu}$. Here the gauge bosons are off-shell, coupling further to the lepton pair.
We note that the $C_{10}$ coefficient can only receive contributions from the $Z$ mediated process, due to the purely vectorial structure of the photon-lepton coupling. In addition to the penguin diagrams, $C_{9}$, $C_{10}$ and $C_{\nu\bar\nu}$ also receive contributions from box diagrams. The calculation of the box diagrams proceeds in a similar fashion as described in section \ref{sec:dB2} for the mixing case. To compute all the Wilson coefficients coming from the penguin diagrams, we have to consider the photon and gluon to be in general off-shell and perform an expansion of the amplitudes to second order in the external momenta, neglecting throughout $m_s$ and $m_b^2$ terms. For the off-shell massive $Z$ boson the computation is somewhat simplified, since its momentum can be neglected compared to its mass. The amplitudes for the processes can be written as \small \begin{subequations}\label{eq:delta_B1_SM} \begin{eqnarray} A_{\mathrm{full}}(b\to s \ell^+\ell^-)_{\mathrm{box}}=\mathrm{i}\lambda_i\frac{4G_F}{\sqrt{2}}\frac{e^2}{(4\pi)^2}\frac{1}{s_W^2}B_0(x_i)[\bar{s}_L\gamma^{\mu}b_L][ \bar\ell_L\gamma_{\mu}\ell_L]\,,\\ A_{\mathrm{full}}(b\to s \nu\bar\nu)_{\mathrm{box}}=\mathrm{i}\lambda_i\frac{4G_F}{\sqrt{2}}\frac{e^2}{(4\pi)^2}\frac{1}{s_W^2}\tilde{B}_0(x_i)[\bar{s}_L\gamma^{\mu}b_L][\bar{\nu}_L\gamma_{\mu}\nu_L]\,,\\ A_{\mathrm{full}}(b\to sZ)=\mathrm{i} \lambda_i\frac{4G_F}{\sqrt{2}}\frac{e}{(4\pi)^2} m_Z^2 \frac{c_W}{s_W} C_0(x_i)[\bar{s}_L\gamma^{\alpha}b_L]Z_{\alpha}(k)\,,\label{eq:tsZ}\\ A_{\mathrm{full}}(b\to s \gamma)= \mathrm{i} \lambda_i\frac{4G_F}{\sqrt{2}}\frac{e}{(4\pi)^2} \Big[D_0(x_i) [\bar{s}_L(k^2\gamma^{\alpha} - k^\alpha \gs{k})b_L] + m_b D_0^\prime(x_i) [\bar{s}_L\mathrm{i}\sigma^{\alpha\beta}k_\beta b_R]\Big]A_{\alpha}(k)\,, \label{eq:pingo_foton}\\ A_{\mathrm{full}}(b\to s g)=\mathrm{i} \lambda_i\frac{4G_F}{\sqrt{2}}\frac{g_s}{(4\pi)^2} \Big[E_0(x_i) [\bar{s}_L(k^2\gamma^{\alpha} - k^\alpha \gs{k})T^ab_L] + m_b E_0^\prime(x_i) [\bar{s}_L\mathrm{i}\sigma^{\alpha\beta}k_\beta T^a b_R]\Big]G_{\alpha}^a(k)\,. \end{eqnarray} \end{subequations} \normalsize The first two amplitudes are related to the box diagrams, while the rest are obtained from the penguin diagrams. We have introduced the abbreviations $c_W$ and $s_W$, which denote the cosine and sine of the Weinberg mixing angle. Further, $k$ denotes the momentum of the gauge boson, which can be neglected in Eq.~(\ref{eq:tsZ}) as argued above. The loop functions $B_0,\dots,E_0^{\prime}$ depend on $x_i=m_i^2/m_W^2$, where $m_i$ is the mass of the up-type quark running in the loops of the diagrams. The subscript $0$ serves as a reminder that the process was calculated at LO in QCD. The analytical expressions for the loop functions are given in Appendix~\ref{app:SM_D_B_1}. We note that, again due to the CKM unitarity relation (\ref{eq:CKM1}), all the $x_i$ independent terms encountered in the computation are dropped. The functions $B_0$, $C_0$ and $D_0$ remain dependent on the choice of gauge for the weak interactions. Gauge independence is recovered once all the contributions to a particular physical final state are summed up. In particular, diagrams with off-shell photons and $Z$ bosons contribute to the leptonic final states, as the bosonic field gets contracted into a propagator and further coupled to the leptons.
Combining these contributions with those coming from the box diagrams gives the following gauge independent combinations \small \begin{subequations}\label{eq:gindp} \begin{eqnarray} A_{\mathrm{full}}(b\to s l^+l^-) = \mathrm{i} \lambda_i \frac{4G_F}{\sqrt{2}}\frac{e^2}{(4\pi)^2} &\hspace{-0.3cm}\bigg[&\hspace{-0.3cm} \Big(\frac{2 B_0(x_i)- C_0(x_i)}{4s_W^2} + C_0(x_i) + D_0(x_i)\Big) \big[\bar s_L\gamma^{\mu}b_L\big]\big[\bar{\ell}\gamma_{\mu}\ell\big]\label{eq:bsllfull}\\ \nonumber&\hspace{-0.3cm}+&\hspace{-0.3cm}\frac{-2B_0(x_i)+C_0(x_i)}{4s_W^2} \big[\bar s_L\gamma^{\mu}b_L\big]\big[\bar{\ell}\gamma_{\mu}\gamma_5 \ell\big] \bigg]\,,\\ A_{\mathrm{full}}(b\to s \nu\bar\nu) = \mathrm{i} \lambda_i \frac{4G_F}{\sqrt{2}}\frac{e^2}{(4\pi)^2} &\hspace{-0.3cm}&\hspace{-0.3cm}\frac{2\tilde{B}_0(x_i) + C_0(x_i)}{4s_W^2}\,\, \big[\bar s_L\gamma^{\mu}b_L\big]\big[\bar \nu \gamma_{\mu}(1-\gamma_5)\nu\big]\,. \end{eqnarray} \end{subequations} \normalsize To complete the matching procedure we have to compare the full theory amplitudes with the amplitudes computed within the effective theory (\ref{eq:loweff1}). For the on-shell photon and gluon the matching is straightforward, since the only contributions on the effective theory side are tree-level insertions of the operators. The same holds for the neutrino final state and the axially coupled charged leptons \begin{eqnarray} C_7(\mu_W) &=& -D_0^{\prime}(x_t) /2 = -\frac{8 x_t^3+5x_t^2-7x_t}{24(x_t-1)^3} + \frac{x_t^2(3x_t-2)}{4(x_t-1)^4} \log x_t\,,\\ C_8(\mu_W) &=& -E_0^{\prime}(x_t)/2 = -\frac{x_t^3-5x_t^2-2x_t}{8(x_t-1)^3} - \frac{3x_t^2\log x_t}{4(x_t-1)^4}\,,\\ C_{10}(\mu_W)&=& \frac{-2B_0(x_t)+C_0(x_t)}{4s_W^2} = \frac{1}{4s_W^2} \Big(\frac{4x_t-x_t^2}{2(x_t-1)}-\frac{3x_t^2\log x_t}{2(x_t-1)^2}\Big)\,,\\ C_{\nu\bar\nu}(\mu_W)&=& \frac{2 \tilde{B}_0(x_t)+C_0(x_t)}{4 s_W^2}= \frac{1}{4s_W^2}\Big(-\frac{x_t(x_t+2)}{2(x_t-1)}-\frac{3x_t^2-6x_t}{2(x_t-1)^2}\log x_t\Big)\,. \end{eqnarray} Since the matching was performed at LO in QCD, the expressions again apply to the high scale $\mu_W$. For the extraction of the $C_9$ coefficient we have to take into account, in addition to the trivial tree-level contribution of ${\cal O}_9$, the one-loop contribution of the ${\cal O}_2$ operator presented in Fig.~\ref{fig:O2c}. The amplitude on the effective theory side is \begin{eqnarray} A_{\mathrm{eff.}}(b\to s \ell^+ \ell^-)&=&\mathrm{i}\frac{4 G_F}{\sqrt{2}}\frac{e^2}{(4\pi)^2}\big[\bar s_L\gamma^{\mu}b_L\big]\big[\bar{\ell}\gamma_{\mu}\ell\big]\bigg[\lambda_t C_9(\mu_W)\label{eq:bslleff}\\ &+& \sum_{i=u,c} \lambda_i C_2(\mu_W)\frac{4}{9}\Big(-\frac{2}{\epsilon} + \log\frac{m_W^2}{\mu^2} + \log x_i +1 \Big)\bigg]\,.\nonumber \end{eqnarray} The $2/\epsilon$ UV divergence is removed using $\overline{\mathrm{MS}}$ renormalization\footnote{This means that $\mathcal O_2$ and $\mathcal O_9$ mix under QED renormalization.}. The constant term is characteristic of naive dimensional regularization, which has been employed in the computation \cite{Buras:1998raa}.
Because $C_2(\mu_W)=1$, comparing (\ref{eq:bsllfull}) and (\ref{eq:bslleff}) we can write down the final expression for $C_9$ \begin{wrapfigure}{r}{0.35\textwidth} \begin{center} \includegraphics[scale=0.6]{c2diag.pdf} \caption{The one-loop contribution of the $\mathcal O_2$ operator to $b\to s \ell^+ \ell^-$ on the effective theory side.} \vspace{-2.8cm} \label{fig:O2c} \end{center} \end{wrapfigure} \begin{eqnarray} C_9(\mu_W) &=& \frac{-18x_t^4+163 x_t^3 -259 x_t^2+108x_t}{36(x_t-1)^3}\\ \nonumber&+&\frac{-64x_t^4 + 76 x_t^3+30 x_t^2 -36 x_t}{36(x_t-1)^4}\log x_t\\ &-& \frac{4}{9}+\frac{4}{9}\log x_t + \frac{4}{9}\log\frac{m_W^2}{\mu_W^2}\nonumber \,. \end{eqnarray} As in the case of the $|\Delta B| = 2$ process, all of the Wilson coefficients receive higher order perturbative QCD corrections~\cite{Misiak:2006zz, Misiak:2008ss, Misiak:2009nr}. Consideration thereof introduces renormalization mixing among many of the operators involved in $|\Delta B|=1$ processes. Application of RG methods allows us to establish the QCD running of the Wilson coefficients~\cite{Bobeth:1999mk,Gracey:2000am,Gambino:2003zm,Gorbahn:2005sa}, which can then, resumming the large logarithms, be run down to appropriately low scales where the operator matrix elements can be evaluated, making it possible to compare theoretical predictions with experimental measurements of decay rates. \section{The main strategy}\label{sec:strategy} Having introduced the two phenomena of top quark physics that are interesting for NP searches, and having presented the importance of top quark contributions in $B$ physics, we devote the last section of the introductory chapter to our main strategy for specifying and analyzing deviations from SM physics in the top quark sector. We closely follow the reasoning presented in Ref.~\cite{Fox:2007in}, where the case of NP generating FCNC top quark decays was considered. We apply the same approach to the case of charged currents as well\footnote{We note that this concept is not limited to NP in top quark physics, which we are considering here. Indirect constraints on physics including new heavy degrees of freedom are often important and need to be considered, see for example~\cite{Dorsner:2011ai}.}. The main idea is illustrated in Fig.~\ref{fig:intout} and it starts with the assumption that there exists some NP at the energy scale $\Lambda$, which is much higher than the electroweak energy scale. As this NP is integrated out, it generates operators at the electroweak scale (denoted $\mu_t$), which consist of SM fields only and are invariant under the SM gauge group~(\ref{eq:sm_gauge_group}). Making use of the repeatedly stressed property of the OPE (\ref{eq:ope2}), namely that the effects of higher dimensional operators generated at the high scale $\Lambda$ come suppressed by higher powers of $1/\Lambda$, we avoid committing to a particular UV completion of the SM and instead work in the framework of an effective theory, described by the Lagrangian \begin{eqnarray} {\cal L}_{\mathrm{eff}}={\cal L}_{\mathrm{SM}}+\frac{1}{\Lambda^2}\sum_i C_i \mathcal Q_i +\mathrm{h.c.}+ {\cal O}(1/\Lambda^3)\,, \label{eq:lagr} \end{eqnarray} where ${\cal L}_{\mathrm{SM}}$ is the SM part, and $\mathcal Q_i$ are dimension-six operators with the aforementioned properties. \begin{figure}[h] \begin{center} \includegraphics[width=0.5 \textwidth]{Integrating_out.pdf} \caption{Illustration of the effective theory approach employed in our analysis.
The first step consists of integrating out NP particles with masses well above the electroweak scale. The second step, to be taken when analyzing effects in $B$ physics, consists of integrating out SM degrees of freedom above the scale of the $b$ quark mass.} \label{fig:intout} \end{center} \end{figure} The set of operators forming $\mathcal L_{\mathrm{eff}}$ is chosen such that they generate $tqZ$, $tq\gamma$, $tqg$ vertices, where $q=u,c$, or modify the SM $tWb$ vertex, depending on whether we are interested in NP in FCNC top quark decays (chapter~\ref{chap:neutral_currents}) or in the main decay channel (chapter~\ref{chap:CC}). By parametrizing the appropriate vertices in the most general way and analyzing the consequences of such modifications on top quark observables, we can quantify the effects of the effective operators in $\mathcal L_{\mathrm{eff}}$ on the top quark physics side. If, however, we want to establish the effects of the same operators on the $B$ physics side as well, we need to further match our effective theory~(\ref{eq:lagr}) to the low energy effective Lagrangians responsible for $|\Delta B| = 2$ and $|\Delta B| = 1 $ processes given in Eqs.~(\ref{eq:LSMmix}) and (\ref{eq:loweff1}), respectively, as illustrated by the second arrow in Fig.~\ref{fig:intout}. By doing that, we gain access to a variety of observables in $B$ physics and come across an interesting interplay of top and bottom physics. We should note that, since the weak scale matching of NP contributions will be done at LO in QCD, there is an ambiguity of order $\alpha_s(m_t)/4\pi$ and a residual scheme dependence when performing the RGE evolution at next-to-leading log, which we shall be employing. However, $\alpha_s$ corrections to the matching are in general model dependent and thus beyond the scope of our effective theory approach (cf.~\cite{Becirevic:2001jj} for a more extensive discussion of this point). \chapter{NP in Top Decays: Neutral currents}\label{chap:neutral_currents} \section{Introduction} As we have pointed out in section \ref{sec:top_fcnc} of the introductory chapter, the SM predicts highly suppressed flavor changing neutral current processes of the top quark \begin{eqnarray} \nonumber t\to q V\,,\hspace{0.3cm} V=Z,\gamma,g\,,\hspace{0.3cm} q=c,u\,, \end{eqnarray} while NP in many cases lifts this suppression. For the case of $t\to q Z,\gamma$ FCNC top quark decays, the effective theory approach that we have described in section~\ref{sec:strategy} has been used in Ref.~\cite{Fox:2007in}, where the authors considered the constraints from $B$ physics observables on operators that generate FCNC top quark decays. They found that contributions of some dimension six $SU(2)_L$ gauge invariant operators are not yet constrained by $B$ physics data to such an extent that the corresponding top quark FCNC decays could not be observed at the LHC, if the projected sensitivities summarized in Eq.~(\ref{eq:ATC1}) are reached. On the other hand, gluonic operators governing the $t\to q g$ decays are not constrained by such indirect considerations and can, at NLO in QCD, contribute to $t\to q Z,\gamma$. In this chapter we therefore focus our attention on the top quark physics side only, analyzing the decay rates of FCNC top quark decays mediated by effective operators generating the most general FCNC effective vertices. The correspondence between the Wilson coefficients of our operators and those of the $SU(2)_L$ invariant operators used in Ref.~\cite{Fox:2007in} is given in Appendix~\ref{app:tofox}.
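For later numerical estimates it is useful to keep in mind the normalization of FCNC branching ratios: the total top quark width is saturated by the $t\to Wb$ channel, whose LO value follows from the standard tree-level formula. The following minimal sketch assumes a massless $b$ quark and illustrative inputs:
\begin{verbatim}
import math

# LO SM width of t -> W b (massless b), used to normalize FCNC
# branching ratios: Br[t -> q V] ~ Gamma(t -> q V) / Gamma(t -> W b)
G_F = 1.16638e-5          # GeV^-2
m_t, m_W = 173.0, 80.4    # GeV, illustrative inputs
V_tb = 1.0                # |V_tb| ~ 1

x = (m_W / m_t)**2
Gamma = (G_F * m_t**3 / (8 * math.sqrt(2) * math.pi)
         * V_tb**2 * (1 - x)**2 * (1 + 2*x))
print(f"Gamma(t -> W b) = {Gamma:.2f} GeV")   # ~ 1.5 GeV at LO
\end{verbatim}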
In the first part of the chapter, which is based on our published work~\cite{Drobnak:2010wh,Drobnak:2010by}, we analyze the two-body decays $t\to q Z,\gamma$ at NLO in QCD. In Ref.~\cite{Zhang:2008yn} it was found that the $t\to q g$ decay receives an almost $20\%$ enhancement from NLO QCD contributions, while the corrections to the $t\to q Z,\gamma$ branching ratios are much smaller. However, the authors of~\cite{Zhang:2008yn} only considered a subset of all possible FCNC operators mediating $t\to q V$ decays at leading order and furthermore neglected the mixing of the operators induced by QCD corrections. In the case of the $t\to q\gamma$ decay in particular, the QCD corrections generate a nontrivial photon spectrum, and the process actually under study is $t\to q g \gamma$. Experimental signal selection for this mode is usually based on kinematical cuts, significantly affecting the spectrum. The validity of theoretical estimates based on the completely inclusive total rate should thus be reexamined. Finally, renormalization effects induced by the running of the operators from the NP scale $\Lambda$ to the top quark scale are potentially much larger than the finite matrix element corrections. Although these effects are not needed when bounding individual effective FCNC couplings from individual null measurements, they become instrumental for interpreting a possible positive signal and relating the effective description to concrete NP models. The second part of the chapter is devoted to the study of $t \to q \ell^+ \ell^-$ decays and is based on our published work~\cite{Drobnak:2008br}. The FCNC vertex is governed by the same effective Lagrangian as in the first part, but the neutral gauge boson is further coupled through its SM interactions to a pair of charged leptons. The basic goal of this analysis is to identify possible discriminating effects of different NP models in top FCNCs by considering different types of observables, which become accessible due to the larger phase space of the final state. \section{Framework} In writing the effective Lagrangian that will generate the $tZq$, $t\gamma q$ and $tgq$ vertices of the most general form, we rely on the notation of Refs.~\cite{AguilarSaavedra:2004wm, AguilarSaavedra:2008zc}. Hermitian conjugate and chirality flipped operators are implicitly contained in the Lagrangian and contribute to the relevant decay modes \begin{eqnarray} {\mathcal L}_{\mathrm{eff}} = \frac{v^2}{\Lambda^2}a_L^{Z}{\mathcal O}_{L}^Z +\frac{v}{\Lambda^2}\Big[b^{Z}_{LR}{\mathcal O}_{LR}^{Z}+b^{\gamma}_{LR}{\mathcal O}_{LR}^{\gamma}+b^{g}_{LR}{\mathcal O}_{LR}^{g} \Big] + (L \leftrightarrow R) + \mathrm{h.c.}\,. \label{eq:Lagr} \end{eqnarray} To explain the notation, the operators considered are \begin{align} {\mathcal O}^{Z}_{L,R} &= g_Z Z_{\mu}\Big[\bar{q}_{L,R}\gamma^{\mu}t_{L,R}\Big]\,, & {\mathcal O}^{Z}_{LR,RL} &= g_Z Z_{\mu\nu}\Big[\bar{q}_{L,R}\sigma^{\mu\nu}t_{R,L}\Big]\,, \label{eq:ops}\\ \nonumber{\mathcal O}^{\gamma}_{LR,RL} &= e F_{\mu\nu}\Big[\bar{q}_{L,R}\sigma^{\mu\nu}t_{R,L}\Big]\,, & {\mathcal O}^{g}_{LR,RL} &= g_s G^a_{\mu\nu}\Big[\bar{q}_{L,R}\sigma^{\mu\nu}T_a t_{R,L}\Big]\,, \end{align} and $g_Z = 2 e/\sin 2 \theta_W$.
In addition to $F_{\mu\nu}$ and $G_{\mu\nu}^a$, which we have defined in Eq.~(\ref{eq:FSdef}), we have introduced the derivative part of the $Z$ boson field strength tensor \begin{eqnarray} Z_{\mu\nu} = \partial_{\mu} Z_\nu - \partial_\nu Z_\mu + \cdots \,, \end{eqnarray} neglecting the terms with more than one vector field, since they are not relevant for our analysis. Finally, $v=246$~GeV is the electroweak condensate and $\Lambda$ is the effective scale of NP. In the remainder of the chapter, since there is no mixing between chirality flipped operators, we shorten the notation, setting $a$ and $b$ to stand for either $a_L$, $b_{LR}$ or $a_R$, $b_{RL}$. The Feynman rules for the $t\to q V$ vertices generated by the operators~(\ref{eq:ops}) are given in Appendix~\ref{app:feyn_neutral}. Note that in principle additional four-fermion operators might be induced at the high scale, which would also give contributions to $t\to q V$ processes; however, these are necessarily $\alpha_s$ suppressed. On the other hand, such contributions can be more directly constrained via e.g. single top production measurements, and we neglect their effects in the present study. Throughout this chapter we will be neglecting the mass of the final state $(c,u)$ quark. Furthermore, when considering NLO QCD corrections we will regulate UV as well as IR divergences by working in $d=4+\epsilon$ dimensions. This kind of approach necessitates performing the phase space integration in $d$ dimensions. We suggest that the reader interested in the dimensional regularization of IR divergences consult the following references~\cite{Marciano:1975de,Marciano:1974tv,Muta}. \section{Two-body $t\to q V$ decays}\label{sec:fcnc_twobody} In this section we present the results for the NLO QCD corrections to the complete set of FCNC operators which mediate $t \to q Z,\gamma$ decays already at the leading order~(\ref{eq:ops}). We start off by considering the virtual one-loop corrections and the renormalization of UV divergences for both decay channels. We present the RGE effects linking the values of the Wilson coefficients at the top quark scale to those at the higher NP scale. Contributions from gluonic dipole operators are also taken into account. Next we turn our attention to the finite part of the virtual corrections -- the matrix element corrections, as well as the corresponding bremsstrahlung rates. For the $t\to q \gamma$ channel we also study the relevance of kinematical cuts on the photon energy and the angle between the photon and the jet stemming from the final state quark. We present our results in analytical form and also give numerical values to estimate the significance of the NLO contributions. \subsection{Operator renormalization and RGE}\label{sec:rge} We assume the effective $a,b$ couplings are defined near the top quark mass scale, at which we evaluate the virtual matrix element corrections and $\alpha_s$. The translation to a matching at a higher scale is governed by the anomalous dimensions of the effective operators and can be performed consistently using RGE methods. To employ this machinery we need to examine the UV divergences generated by the NLO QCD corrections. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{FCNCvirt.pdf} \caption{Feynman diagrams for one-loop virtual corrections to $t\to q Z,\gamma$ decays. Squares mark the insertion of effective operators given in Eq.~(\ref{eq:ops}) and crosses the additional points from which $Z$ or $\gamma$ can be emitted.
} \label{fig:fcnc_virt} \end{center} \end{figure} In addition to the diagrams presented in Fig.~\ref{fig:fcnc_virt}, the first two diagrams should be accompanied by one-particle-reducible diagrams with SM gluon corrections attached to the external legs. These diagrams are taken into account by setting the quark fields forming the operators ${\mathcal O}^Z_{L,R}$ and ${\mathcal O}_{LR,RL}^{\gamma,Z}$ to be renormalized, $q\to \sqrt{Z_q}q$, $t\to \sqrt{Z_t} t$. Since the final state light quark $q$ is considered to be massless, the corresponding field renormalization differs from that of the initial top quark. Using the on-shell renormalization conditions~\cite{Grozin:2005yg} we obtain \begin{subequations} \begin{eqnarray} Z_t &=& 1+ \frac{\alpha_s}{4\pi}C_F \frac{\Gamma(1-\frac{\epsilon}{2})}{(4\pi)^{\epsilon/2}}\left(\frac{m_t}{\mu}\right)^{\epsilon} \Big[\frac{2}{\epsilon_{\mathrm{UV}}}+\frac{4}{\epsilon_{\mathrm{IR}}}-4\Big]\,,\label{field_ren}\\ Z_q &=& 1+ \frac{\alpha_s}{4\pi}C_F \frac{\Gamma(1-\frac{\epsilon}{2})}{(4\pi)^{\epsilon/2}}\left(\frac{m_t}{\mu}\right)^{\epsilon} \Big[\frac{2}{\epsilon_{\mathrm{UV}}}-\frac{2}{\epsilon_{\mathrm{IR}}}\Big]\,, \end{eqnarray} \end{subequations} where $\mu$ is the renormalization scale parameter, $C_F=4/3$ is the color factor, and we keep separate track of UV and IR divergences. Finally, $\Gamma$ denotes the Euler gamma function. Including the tree-level LO diagrams, the quark field renormalization and the diagrams presented in Fig.~\ref{fig:fcnc_virt} yields the following amplitudes \begin{eqnarray} A_{t\to q\gamma} &=& \frac{v}{\Lambda^2}\Big[ b^{\gamma} \big(1+\frac{\alpha_s}{4\pi}C_F F^{\gamma}_b\big) + b^g \frac{\alpha_s}{4\pi}C_F F_{bg}^{\gamma}\Big]\langle {\mathcal O}_{LR,RL}^{\gamma}\rangle\,,\label{eq:fcnc_amp1}\\ A_{t\to qZ} &=& \Big[\frac{v^2}{\Lambda^2} a^Z \big(1+\frac{\alpha_s}{4\pi}C_F F^Z_a\big) + \frac{v}{\Lambda^2}b^Z \frac{\alpha_s}{4\pi}C_F F_{ab}^Z + \frac{v}{\Lambda^2}b^g \frac{\alpha_s}{4\pi}C_F F_{ag}^Z\Big]\langle {\mathcal O}_{L,R}^Z\rangle \label{eq:fcnc_amp2}\\ &+& \Big[\frac{v}{\Lambda^2} b^Z \big(1+\frac{\alpha_s}{4\pi}C_F F^Z_b\big) + \frac{v^2}{\Lambda^2}a^Z \frac{\alpha_s}{4\pi}C_F F_{ba}^Z + \frac{v}{\Lambda^2}b^g \frac{\alpha_s}{4\pi}C_F F_{bg}^Z\Big]\langle {\mathcal O}_{LR,RL}^Z\rangle\,,\nonumber \end{eqnarray} where the complete expressions for the form factors $F^x_y$ are given in Appendix~\ref{app:form_factors_qcd}. We were able to crosscheck our expressions with those found in the literature. Namely, Eqs.~(\ref{eq:Fa}--\ref{eq:Fba}) agree with the corresponding expressions given in Ref.~\cite{Ghinculov:2002pe} for the $B\to X_s \ell^+ \ell^-$ decay mediated by a virtual photon, after taking into account that the dipole operator in \cite{Ghinculov:2002pe} includes a mass parameter which necessitates additional mass renormalization. On the other hand, the two form factors for the photon case (\ref{eq:Fb_gamma}, \ref{eq:Fbg_gamma}) are obtained from the corresponding $Z$ form factors in the limit where the mass of the $Z$ boson is sent to zero. To some extent we were also able to crosscheck the gluon operator induced form factors $F_{ag}^Z$ and $F_{bg}^Z$ given in Eqs.~(\ref{eq:Fag}, \ref{eq:Fbg}). Namely, we find numerical agreement of the form factor's vector component with the corresponding expressions given in Ref.~\cite{Ghinculov:2003qd}. The crosscheck is only possible in the vector part, since the SM photon coupling appearing in \cite{Ghinculov:2003qd} has no axial component.
We note that $F_b^{\gamma}$, $F_{bg}^{\gamma}$, $F_b^{Z}$ and $F_{bg}^{Z}$ contain UV divergences that necessitate additional operator renormalization, which we carry out in the $\overline{\mathrm{MS}}$ scheme, obtaining the following renormalization factors \begin{eqnarray} Z_{b}^{\gamma} &=& 1+\frac{\alpha_s}{4\pi}C_F\delta_{b}^{\gamma}\,,\hspace{0.5cm} \delta_{b}^{\gamma}= - \Big(\frac{2}{\epsilon_{\mathrm{UV}}} +\gamma-\log(4\pi)\Big)\,,\label{renF}\\ Z_{bg}^{\gamma} &=& 1+\frac{\alpha_s}{4\pi}C_F\delta_{bg}^{\gamma}\,,\hspace{0.5cm} \delta_{bg}^{\gamma}= -4 Q \Big(\frac{2}{\epsilon_{\mathrm{UV}}} +\gamma-\log(4\pi)\Big)\,,\\ Z_{b}^Z &=& 1+\frac{\alpha_s}{4\pi}C_F\delta_{b}^Z\,,\hspace{0.5cm} \delta_{b}^Z= - \Big(\frac{2}{\epsilon_{\mathrm{UV}}} +\gamma-\log(4\pi)\Big)\,,\label{renZ}\\ Z_{bg}^Z &=& 1+\frac{\alpha_s}{4\pi}C_F\delta_{bg}^Z\,,\hspace{0.5cm} \delta_{bg}^Z= -2 \hat v \Big(\frac{2}{\epsilon_{\mathrm{UV}}} +\gamma-\log(4\pi)\Big)\,, \end{eqnarray} where $Q=2/3$ is the electric charge of the up-type quarks and $\hat v$ is defined in Eq.~(\ref{eq:some_def}) of the Appendix. The RG running is governed by the anomalous dimensions of the operators. Since the operators $\mathcal O^Z_{L,R}$ have no anomalous dimensions, we assemble the remaining six operators into two vectors \begin{eqnarray} \boldsymbol{\mathcal O}_{i} = (\mathcal O^\gamma_i , \mathcal O^Z_i, \mathcal O^g_i)^T\,,\hspace{0.5cm} i = RL, LR\,, \end{eqnarray} which do not mix with each other under QCD renormalization. The corresponding one-loop anomalous dimension matrix is the same for both chiralities and reads \begin{equation} \gamma_i = \frac{\alpha_s}{2\pi} \left[ \begin{array}{ccc} C_F & 0 & 0 \\ 0 & C_F & 0 \\ 8 C_F / 3 & C_F (3 - 8 s^2_W) / 3 & 5C_F - 2 C_A \end{array} \right]\,, \label{eq:anomal} \end{equation} where $C_A= 3$. We note that to compute the last entry on the diagonal of the matrix (\ref{eq:anomal}) we need to consider virtual corrections to the $t\to q g$ process mediated by the ${\mathcal O}^g_i$ operator. We have performed this calculation and crosschecked it with the well known result found in the literature (see for example Ref.~\cite{Buras:1998raa}); we refrain, however, from explicitly showing the details of the calculation. Depending on the nature of the new physics which generates the dipole operators at the scale $\Lambda$, the relevant $LR$ operators might explicitly include a factor of the top mass. By the redefinition of operators $$ \widetilde{\boldsymbol{\mathcal O}}_{LR} = (m_t/v )\boldsymbol{\mathcal O}_{LR}\,, $$ their running is altered by the additional mass renormalization $Z_m$ (found for example in Ref.~\cite{Buras:1998raa}), which can be taken into account by adding $6C_F$ to the diagonal entries of $\gamma_{LR}$ given in Eq.~(\ref{eq:anomal}). As we shall demonstrate, this effect is numerically unimportant for the interesting range of couplings and scales, which can be probed at the Tevatron and the LHC. We are interested in particular in the mixing of the gluonic dipole contribution into the photonic and $Z$ dipole operators.
For the case with no explicit top mass effect, $LR$ and $RL$ operators receive identical corrections and the effective couplings at the top mass scale read \begin{subequations} \begin{eqnarray} \hspace{-0.6cm}b^\gamma_{i} (\mu_t) \hspace{-0.15cm}&=& \hspace{-0.15cm} \eta ^{\kappa_1} b_i^\gamma (\Lambda )+\frac{16}{3}\left( \eta ^{\kappa_1}- \eta ^{\kappa_2}\right) b^g_i (\Lambda )\,,\\ \hspace{-0.6cm}b^Z_{i} (\mu_t) \hspace{-0.15cm}&=& \hspace{-0.15cm} \eta ^{\kappa_1} b_i^Z (\Lambda ) \hspace{-0.05cm}+\left[2-\frac{16}{3} s^2_W\right]\left( \eta ^{\kappa_1}- \eta ^{\kappa_2}\right) b^g_i (\Lambda )\,, \end{eqnarray} \end{subequations} where $\mu_t$ is the top mass scale, $\eta = \alpha_s(\Lambda)/\alpha_s(\mu_t)$, $\kappa_1=4/(3\beta_0)$, $\kappa_2=2/(3\beta_0)$ and $\beta_0$ is the one-loop QCD beta function coefficient (found for example in Ref.~\cite{Buras:1998raa}). Assuming that no new colored degrees of freedom, which would modify the QCD beta function, appear below the UV matching scale, it evaluates to $\beta_0=7$ above the top mass scale. If we include the top mass running in the RGE of the $LR$ operators, then $\kappa_{1,2}$ are modified to $\kappa_1=16/(3\beta_0)$, $\kappa_2=14/(3\beta_0)$. We illustrate the effect of the RGE running in Fig.~\ref{fig:RGE}, where we plot \begin{eqnarray} \bigg|\frac{b_i^{\gamma,Z} (\mu_t)}{b_i^{g} (\Lambda)}\bigg|\,,\hspace{0.5cm}\text{when $b^{\gamma,Z}_i(\Lambda) = 0$}\,. \end{eqnarray} This shows how much $b^{\gamma,Z}_i(\mu_t)$ can be generated at the top mass scale $\mu_t\simeq 200$ GeV, due to the QCD mixing of the operators and the presence of the gluonic dipole operator at the UV scale $\Lambda$. \begin{figure}[t] \begin{center} \includegraphics[scale=0.7]{RGE1.pdf} \end{center} \caption{The ratio $|b_{i}^{\gamma,Z}(\mu_t)/b_i^{g}(\Lambda)|$ as a function of $\Lambda$, when $b_{i}^{\gamma,Z}(\Lambda) =0$ and the top mass scale is $\mu_t\approx 200$ GeV. Solid lines represent the case with no explicit top mass effect, while the dashed line corresponds to the Wilson coefficient of the $\widetilde{{\mathcal O}}_{LR}^{\gamma}$ operator. The $\widetilde{b}_{LR}^{Z}$ is not shown, as its deviation from $b_{LR}^{Z}$ is unnoticeable on the plot.} \label{fig:RGE} \end{figure} We see that for NP matching scales above $2$ TeV the induced contributions to $b^\gamma_{i}$ are around 10\% of $b^g_{i}$ at the UV scale. On the other hand, due to cancellations in the RGEs for the $b^Z_{i}$, these receive much smaller corrections (below 1\% in the interesting range). Including the top mass renormalization reduces the induced corrections to the $\widetilde b^{\gamma,Z}_{LR}$ coupling; however, for UV scales of a couple of TeV or below this effect is negligible. \subsection{Matrix element corrections}\label{sec:fcnc_matrix_elements} To consistently describe rare top decays at NLO in $\alpha_s$ one has to take into account finite QCD loop corrections to the matrix elements $\bra{q \gamma} \mathcal O_i \ket{t}$ and $\bra{q Z} \mathcal O_i \ket{t}$ evaluated at the top mass scale, as well as single gluon bremsstrahlung corrections, which cancel the associated infrared and collinear divergences in the decay rates. The total FCNC top quark decay width to a $Z$ boson or a photon, governed by the effective Lagrangian given in Eq.~(\ref{eq:Lagr}), is therefore, at NLO in QCD, the sum of the $\Gamma(t\to q Z,\gamma)$ and $\Gamma(t\to q g Z,\gamma)$ decay rates, where the two-body final state decay width includes the virtual QCD corrections.
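As an aside, the size of the RGE-induced mixing quoted in the previous subsection is easy to reproduce; the following minimal sketch evaluates the $b^\gamma_i(\mu_t)$ formula above with a one-loop running coupling (reference coupling values are illustrative assumptions):
\begin{verbatim}
import math

# Reproduces the ~10% gluonic-dipole mixing into b^gamma quoted above:
# |b^gamma(mu_t)/b^g(Lambda)| = (16/3)|eta^k1 - eta^k2| for
# b^gamma(Lambda) = 0, eta = alpha_s(Lambda)/alpha_s(mu_t), beta0 = 7
def alpha_s(mu, alpha_ref=0.105, mu_ref=200.0, beta0=7.0):
    # one-loop running; the reference value is illustrative
    return alpha_ref / (1.0 + alpha_ref * beta0 / (2 * math.pi)
                        * math.log(mu / mu_ref))

beta0 = 7.0
k1, k2 = 4.0 / (3 * beta0), 2.0 / (3 * beta0)

for Lam in (1000.0, 2000.0, 5000.0):
    eta = alpha_s(Lam) / alpha_s(200.0)
    ratio = abs(16.0 / 3.0 * (eta**k1 - eta**k2))
    print(f"Lambda = {Lam:6.0f} GeV -> ratio = {ratio:.3f}")
\end{verbatim}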
Contributions due to the $\mathcal O^{\gamma,Z}_{LR,RL}$ operators have already been computed in Ref.~\cite{Zhang:2008yn}. Here we extend the analysis to the operator basis given in Eq.~(\ref{eq:ops}), including results for the $\mathcal O^{Z}_{L,R}$ current operators as well as for the admixture of the gluonic dipole operators $\mathcal O^g_{LR,RL}$. The final results that we are after can therefore be parametrized in the following way
\begin{eqnarray}
\Gamma^V &=& |a^V|^2\frac{v^4}{\Lambda^4} \Gamma_{a}^V + \frac{v^2 m_t^2}{\Lambda^4}|b^V|^2 \Gamma^V_{b}+\frac{v^3m_t}{\Lambda^4}2\mathrm{Re}\{b^{V*}a^V\} \Gamma^V_{ab} \label{eq:oso}\\
&+& \frac{v^3m_t}{\Lambda^4} \left[2\mathrm{Re}\{a^{V*}b^g\} \Gamma^V_{ag}- 2\mathrm{Im}\{a^{V*}b^g\}\tilde{\Gamma}^V_{ag}\right]\nonumber \\&+& \frac{v^2 m_t^2}{\Lambda^4}\left[ |b^g|^2 \Gamma^V_{g}+2\mathrm{Re}\{b^{V*}b^g\} \Gamma^V_{bg} -2\mathrm{Im}\{b^{V*}b^g\}\tilde{\Gamma}^V_{bg} \right]\,,\nonumber
\end{eqnarray}
where $V= Z,\gamma$ and $a^{\gamma}=0$. Note that the $\Gamma^V_{ag,bg,g}$ appearing in the second and third rows of Eq.~(\ref{eq:oso}) correspond to contributions from the gluonic operator and are therefore absent in the LO result, emerging only at order $\alpha_s$.
\subsubsection{Tree level expressions}
At the tree level we only have the $\Gamma_{a}^Z$, $\Gamma_b^{Z,\gamma}$ and $\Gamma_{ab}^Z$ contributions, which we write in $4+\epsilon$ dimensions as
\begin{eqnarray}
\Gamma^{\gamma(0)}_b &=& \lim_{\epsilon \to 0}m_t \alpha (1+\frac{\epsilon}{2})\Gamma(1+\frac{\epsilon}{2})\,,\\
\nonumber\Gamma_{a}^{Z(0)}&=&\lim_{\epsilon\to 0}\frac{m_t}{16\pi}g_Z^2(1-r_Z)^2 \Gamma(1+\frac{\epsilon}{2})(1-r_Z)^{\epsilon}\frac{1}{2r_Z}\big(1+(2+\epsilon)r_Z\big)\,,\\
\Gamma_{b}^{Z(0)}&=&\lim_{\epsilon\to 0}\frac{m_t}{16\pi}g_Z^2(1-r_Z)^2 \Gamma(1+\frac{\epsilon}{2})(1-r_Z)^{\epsilon} 2(2+\epsilon+r_Z)\,,\nonumber\\
\Gamma_{ab}^{Z(0)}&=&\lim_{\epsilon\to 0}\frac{m_t}{16\pi}g_Z^2(1-r_Z)^2 \Gamma(1+\frac{\epsilon}{2})(1-r_Z)^{\epsilon}(3+\epsilon)\,,\nonumber
\end{eqnarray}
where $r_Z=m_Z^2/m_t^2$.
\subsubsection{Virtual corrections}
The one-loop virtual QCD corrections to the decay amplitudes have already been presented in section~\ref{sec:rge}, where the UV divergences were renormalized. This leaves us with UV finite form factors appearing in Eqs.~(\ref{eq:fcnc_amp1}, \ref{eq:fcnc_amp2}), which, however, remain IR divergent; the divergences are carried over to the expressions for the $t\to q V$ NLO decay widths, for which the complete expressions are given in Appendix~\ref{app:dw1}. Here we only outline their form
\begin{eqnarray}
\Gamma^{V,\mathrm{virt}}_{a,b,ab} &=& \Gamma^{V(0)}_{a,b,ab}\Big[1 + \frac{\alpha_s}{4\pi}C_F \Gamma^{V(1)}_{a,b,ab}\Big]\,,\label{eq:FCNC_virt}\\
\nonumber \Gamma^{V,\mathrm{virt}}_{ag,bg} &=& \frac{\alpha_s}{4\pi}C_F \Gamma^{V(1)}_{ag,bg}\,,\\
\nonumber \tilde{\Gamma}^{V,\mathrm{virt}}_{ag,bg} &=& \frac{\alpha_s}{4\pi}C_F \tilde{\Gamma}^{V(1)}_{ag,bg}\,,
\end{eqnarray}
and stress that the $\Gamma_{a,b,ab}^{V(1)}$ all possess IR divergences, while $\Gamma^{V(1)}_{ag,bg}$ and $\tilde{\Gamma}^{V(1)}_{ag,bg}$ are finite.
\subsubsection{Bremsstrahlung contributions}
The relevant Feynman diagrams contributing to the $t\to q g Z,\gamma$ bremsstrahlung processes are given in Fig.~\ref{fig:fcnc_brems}. At the level of the decay width these diagrams give contributions of the same order in $\alpha_s$ as the one-loop virtual corrections presented above.
Soft and collinear IR divergences emerge in the phase space integration and have to cancel the divergences present in $\Gamma^{V,\mathrm{virt}}$ once we sum the two contributions.
\begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{FCNCbrems.pdf} \end{center} \caption{Feynman diagrams for the $t\to qgZ,\gamma$ bremsstrahlung process. Squares mark the insertion of effective operators given in Eq.~(\ref{eq:ops}) and crosses the additional points from which the gluon (in the first two diagrams) or $Z,\gamma$ (in the last diagram) can be emitted.} \label{fig:fcnc_brems} \end{figure}
Computation of $\Gamma(t\to q g Z,\gamma)$ from the first two diagrams of Fig.~\ref{fig:fcnc_brems} gives the contributions presented in Eqs.~(\ref{eq:bremsIR0}--\ref{eq:bremsIR3}) that indeed render the sum of the three-body and two-body final state decay widths IR finite
{\allowdisplaybreaks \small
\begin{subequations}\label{GjustZ}
\begin{eqnarray}
\Gamma_{b}^{\gamma}&=&\Gamma_{b}^{\gamma(0)}\Bigg[1+\frac{\alpha_s}{4\pi}C_F\bigg[ 2\log\Big(\frac{m_t^2}{\mu^2}\Big) + \frac{16}{3} - \frac{4\pi^2}{3} \bigg]\Bigg]\,,\\
\Gamma_a^{Z}&=&\Gamma_{a}^{Z(0)}\Bigg[1+\frac{\alpha_s}{4\pi}C_F\bigg[ -4\log(1-r_Z)\log(r_Z) -2\frac{5+4r_Z}{1+2r_Z}\log(1-r_Z)\label{Ga}\\
&-&\frac{4r_Z(1+r_Z)(1-2r_Z)}{(1-r_Z)^2(1+2r_Z)}\log(r_Z) +\frac{5+9r_Z-6r_Z^2}{(1-r_Z)(1+2 r_Z)}-8\mathrm{Li}_2(r_Z) -\frac{4\pi^2}{3} \bigg]\Bigg]\,,\nonumber\\
\Gamma_{b}^{Z} &=& \Gamma_{b}^{Z(0)}\Bigg[1+\frac{\alpha_s}{4\pi}C_F \bigg[2\log\left(\frac{m_t^2}{\mu^2}\right) - 4 \log(1-r_Z)\log(r_Z)-\frac{2(8+r_Z)}{2+r_Z}\log(1-r_Z)\label{Gb}\\
&-&\frac{4r_Z(2-2r_Z-r_Z^2)}{(1-r_Z)^2(2+r_Z)}\log(r_Z) -8\mathrm{Li}_2(r_Z)-\frac{16-11r_Z-17r_Z^2}{3(1-r_Z)(2+r_Z)} + 8 -\frac{4\pi^2}{3}\bigg]\Bigg]\,,\nonumber\\
\Gamma_{ab}^{Z} &=& \Gamma_{ab}^{Z(0)}\Bigg[1+\frac{\alpha_s}{4\pi}C_F \bigg[ \log\left(\frac{m_t^2}{\mu^2}\right)-4\log(1-r_Z)\log(r_Z) -\frac{2(2+7r_Z)}{3r_Z}\log(1-r_Z)\label{Gab}\\
&-&\frac{4r_Z(3-2r_Z)}{3(1-r_Z)^2}\log(r_Z)+ \frac{5-9r_Z}{3(1-r_Z)}+4-8\mathrm{Li}_2(r_Z)-\frac{4\pi^2}{3}\bigg]\Bigg]\,.\nonumber
\end{eqnarray}
\end{subequations}}
\normalsize
We were able to crosscheck our results given in Eqs.~(\ref{GjustZ}) with the corresponding calculation done for a virtual photon contributing to the $B\to X_s \ell^+ \ell^-$ spectrum~\cite{Asatryan:2002iy}. After taking into account the different dipole operator renormalization condition in~\cite{Asatryan:2002iy} (including mass renormalization) we find complete agreement with their results. $\Gamma_a^{Z}$ was also cross-checked against the corresponding calculation of the $t\to W b$ decay width at NLO in QCD~\cite{Li:1990qf}. Finally, we have compared our $\Gamma_b^Z$ expression with the results given by Zhang et al.\ in Ref.~\cite{Zhang:2008yn}. In the limit $r_Z \to 0$ our results agree with those given in \cite{Zhang:2008yn}, but we find disagreement in the $r_Z$ dependence. After our first publication of these results in~\cite{Drobnak:2010wh}, we were made aware of a new paper in preparation by the same authors, which has since been published~\cite{Zhang:2010bm}; therein a corrected result for $\Gamma_b^Z$ is given that coincides with ours. The remaining bremsstrahlung contributions are induced by the gluonic dipole operator. What needs to be pointed out here is that while the final result~(\ref{eq:oso}) for $\Gamma^Z$ is finite, $\Gamma^{\gamma}$ remains IR divergent.
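As an independent sanity check of Eq.~(\ref{Ga}), the following minimal Python sketch (illustrative only, not part of the analysis itself; the dilogarithm is taken from \texttt{mpmath}) evaluates the square bracket at $\mu = m_t$ with the inputs of the numerical analysis below, reproducing the $x_a$ and $y_a$ entries of Tab.~\ref{table:num} and the $a^Z$-only NLO/LO ratio of Tab.~\ref{brsZ}:
\begin{verbatim}
# Sanity check of Eq. (Ga): tree-level coefficient and O(alpha_s) bracket
# at mu = m_t, with the inputs of the numerical analysis below.
import math
from mpmath import polylog            # Li2(r) = polylog(2, r)

m_t, m_Z = 172.3, 91.2                # GeV
alpha_s, C_F = 0.107, 4.0 / 3.0
r = (m_Z / m_t) ** 2
L1, Lr = math.log(1.0 - r), math.log(r)

bracket = (-4 * L1 * Lr
           - 2 * (5 + 4 * r) / (1 + 2 * r) * L1
           - 4 * r * (1 + r) * (1 - 2 * r) / ((1 - r)**2 * (1 + 2 * r)) * Lr
           + (5 + 9 * r - 6 * r**2) / ((1 - r) * (1 + 2 * r))
           - 8 * float(polylog(2, r)) - 4 * math.pi**2 / 3)

x_a = (1 - r)**2 * (1 + 2 * r) / (2 * r)   # epsilon -> 0 limit of Gamma_a^{Z(0)}
print(x_a, x_a * bracket)                  # ~1.44 and ~-10.7, cf. x_a, y_a below
print(1 + alpha_s / (4 * math.pi) * C_F * bracket)   # ~0.92, a^Z-only NLO/LO
\end{verbatim}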
The remaining IR divergences of $\Gamma^{\gamma}$ appear in $\Gamma_g^\gamma$ (the squared contribution of the third diagram of Fig.~\ref{fig:fcnc_brems}) and are not canceled by any of the virtual corrections we have considered. To cancel them we would have to consider the decay width for $t\to q g$ governed by the gluonic operator and include the one-loop virtual QED corrections. The corresponding Feynman diagram is shown in Fig.~\ref{fig:fcnc_virt_qed}.
\begin{wrapfigure}{r}{0.35\textwidth} \begin{center} \vspace{-0.5cm} \includegraphics[scale=0.6]{FCNCvirt_qed.pdf} \caption{Feynman diagram for the one-loop virtual QED correction to the $t\to q g$ decay governed by ${\mathcal O}_{LR,RL}^{g}$.} \vspace{-0.5cm} \label{fig:fcnc_virt_qed} \end{center} \end{wrapfigure}
The $t\to q \gamma g$ decay process involves three (one almost) massless particles in the final state. Virtual matrix element corrections contribute only at the soft gluon endpoint ($E_g = 0$) and result in non-vanishing $b^\gamma b^g$ interference contributions. They involve IR divergences which are in turn canceled by the real gluon emission contributions. These also produce non-vanishing $|b^g|^2$ contributions, and create a non-trivial photon spectrum involving both soft and collinear divergences. The latter appear whenever a photon or a gluon is emitted collinear to the light quark jet. An analogous situation is encountered in the $B\to X_s\gamma$ decay measured at the $B$-factories. However, there the photon energy in the $B$ meson frame can be reconstructed and a hard cut ($E_\gamma^{\mathrm{cut}}$) on it removes the soft photon divergence. The cut also ensures that the $B\to X_s g$ process contributing at the end-point $E_\gamma = 0$ is suppressed. On the other hand, in the existing calculations the collinear divergences are simply regulated by a non-zero strange quark mass, resulting in moderate $\log(m_s/m_b)$ contributions to the rate. The situation at the Tevatron and the LHC is considerably different. The initial top quark boost is not known and the reconstruction of the decay is based on triggering on isolated hard photons with a very loose cut on the photon energy (a typical value being $E_\gamma>10 $\,GeV in the lab frame \cite{Aad:2009wy}). Isolation criteria are usually specified in terms of a jet veto cone $\Delta R = \sqrt {\Delta \eta^2 + \Delta \phi^2}$, where $\Delta\eta$ is the difference in pseudorapidity and $\Delta\phi$ the difference in azimuthal angle between the photon and the nearest charged track. Typical values are $\Delta R > (0.2-0.4)$ \cite{Carvalho:2007yi}. Rather than including the QED-corrected $t\to qg$ rate, we render $\Gamma_{g}^{\gamma}$ finite by modeling the non-trivial cuts in the top quark frame with a cut on the projection of the photon direction onto the direction of any of the two jets ($\delta r_j= 1- {\bf p}_\gamma \cdot {\bf p}_j / (E_\gamma E_j)$), where $j=g,q$ labels the gluon and light quark jet, respectively. The effects of the different cuts on the decay Dalitz plot are shown in the left graph of Fig.~\ref{fig:foton_cuts_1}. Since at this order there are no photon collinear divergences associated with the gluon jet, the $\delta r_g$ cut around the gluon jet has a numerically negligible effect on the rate. On the other hand, the corresponding cut on the charm jet--photon separation does not completely remove the divergences in the spectrum; however, they become integrable. The combined effect is that the contribution due to the gluonic dipole operator can be enhanced compared to the case of $B\to X_s \gamma$.
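For concreteness, a minimal sketch of the two separation variables just introduced (an illustration under our own conventions: momenta are three-vectors, all final state particles are treated as massless so that $E=|{\bf p}|$, and the function names are ours):
\begin{verbatim}
import numpy as np

def delta_r(p_gamma, p_j):
    """delta_r_j = 1 - p_gamma . p_j / (E_gamma E_j); tends to 0 as the
    photon becomes collinear with jet j (j = g, q)."""
    return 1.0 - np.dot(p_gamma, p_j) / (np.linalg.norm(p_gamma)
                                         * np.linalg.norm(p_j))

def delta_R(eta1, phi1, eta2, phi2):
    """Jet veto cone Delta R = sqrt(Delta eta^2 + Delta phi^2); the
    azimuthal difference is wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + np.pi) % (2.0 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, dphi)
\end{verbatim}
A photon collinear with a jet thus gives $\delta r_j \to 0$, which is why a lower cut on $\delta r_j$ regulates the collinear region of the spectrum.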
We present the full analytical formulae for the $t\to q \gamma g$ and $t\to q Z g$ decay rates, including the effects of kinematical cuts for the former channel, in Appendix~\ref{app:dw2}.
\subsubsection{Numerical analysis}
\begin{figure}[h] \begin{center} \includegraphics[height=6.5cm]{dalitz.pdf}\hspace{0.8cm} \includegraphics[height=6.5cm]{drEcut.pdf} \end{center} \caption{{\bf Left}: The $t\to q \gamma g$ Dalitz plot. Contours of constant photon and gluon infrared and collinear divergent contributions are drawn in red (dot-dashed) and blue (dashed) lines respectively. The collinear divergences appear at the horizontal and vertical boundaries of the phase-space, while the IR divergences sit in the top and right corners. The cuts on the photon energy correspond to vertical lines, the cuts on the gluonic jet energy to horizontal lines. Full green lines correspond to cuts on the jet veto cone around the photon. {\bf Right}: Relative size of $\alpha_s$ corrections to the $\mathrm{Br}(t\to q \gamma)$ at representative ranges of $\delta r_c\equiv\delta r$ and $E^{\mathrm{cut}}_\gamma$. Contours of constant correction values are plotted for $b^g=0$ (gray, dotted), $b^g=b^\gamma$ (red) and $b^g = - b^\gamma$ (blue, dashed).} \label{fig:foton_cuts_1} \end{figure}
Throughout the numerical analysis of this section we use the following parameter values
\begin{align} m_W &= 80.4\,\, \mathrm{GeV}\,, & m_t &=172.3\,\, \mathrm{GeV}\,, &m_Z &= 91.2 \,\, \mathrm{GeV}\,,\label{eq:numV}\\ \mu &= m_t\,, & \alpha_s(m_t)&=0.107\,, & \sin^2 \theta_W &= 0.231\,.\nonumber \end{align}
Turning first to the $t\to q \gamma$ decay, we show in the right graph of Fig.~\ref{fig:foton_cuts_1} the $b^g$-induced correction to the tree-level $\mathrm{Br}(t\to q \gamma)$ for representative ranges of $\delta r$ and $E^{\mathrm{cut}}_\gamma$. We observe that the contribution of $b^g$ can be of the order of $10-15\%$ of the total measured rate, depending on the relative sizes and phases of $\mathcal O_{LR,RL}^{g,\gamma}$ and on the particular experimental cuts employed. Consequently, a bound on $\mathrm{Br}(t\to q \gamma)$ can, depending on the experimental cuts, probe both $b^{g,\gamma}$ couplings. In order to illustrate our point, we plot the ratio of radiative rates $\Gamma(t\to q \gamma) / \Gamma(t\to q g )$, both computed at NLO in QCD, versus the ratio of the relevant effective FCNC dipole couplings $|b^\gamma/b^g|$ in Fig.~\ref{fig:tcg-tcg}. The NLO $\Gamma(t\to q g)$ result is taken from Ref.~\cite{Zhang:2008yn}.
\begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{c7c81.pdf} \end{center} \caption{\label{fig:tcg-tcg} The ratio of radiative rates $\Gamma(t\to q \gamma) / \Gamma(t\to q g )$ versus the absolute ratio of the relevant effective FCNC couplings $|b^\gamma/b^g|$. Two representative choices of experimental kinematic cuts are shown. The two regions represent the possible spread due to the unknown relative phase between the $b^\gamma$ and $b^g$ couplings, while the lines correspond to maximally positive (full) and negative (dashed) $b^\gamma b^g$ interference. See text for details.} \end{figure}
We show the correlation for two representative choices of experimental kinematic cuts for the $t\to q \gamma$ decay. The vertical spread of the bands is due to the variation of the relative phase between the $b^\gamma$ and $b^g$ couplings. We also display the two interesting limits where the $b^\gamma b^g$ interference is maximally positive (zero relative phase) and maximally negative (relative phase $\pi$).
We see that apart from the narrow region around $|b^\gamma/b^g| \sim 0.2$, where the two contributions may be fine-tuned and conspire to diminish the total $t\to q \gamma$ rate, the two radiative rates are well correlated. In particular, depending on the kinematical cuts employed, there is a natural lower bound on the ratio of decay rates, valid outside of the fine-tuned region. Finally, for $|b^\gamma/b^g|>0.6$ the correlation becomes practically insensitive to the particular experimental cuts employed and also to the unknown relative phase between the $b^\gamma$ and $b^g$ couplings. For the $t\to qZ$ decay channel we present some numerical values to estimate the significance of the QCD corrections. In particular, we parametrize the decay width given in Eq.~(\ref{eq:oso}) as
\small
\begin{eqnarray}
\Gamma^Z\hspace{-0.2cm}&=&\hspace{-0.2cm}\frac{m_t}{16\pi}g_Z^2\Bigg\{ \frac{v^4}{\Lambda^4}|a^Z|^2 \Big[x_{a} +\frac{\alpha_s}{4\pi}C_F y_{a} \Big]+ \frac{v^2m_t^2}{\Lambda^4}|b^Z|^2 \Big[ x_{b}+\frac{\alpha_s}{4\pi}C_F y_{b}\Big] +\frac{v^3 m_t}{\Lambda^4}2\mathrm{Re}\{b^{Z*} a^Z\}\Big[x_{ab} +\frac{\alpha_s}{4\pi}C_F y_{ab}\Big]\nonumber\\
&+&|b^g|^2 \frac{v^2m_t^2}{\Lambda^4}\frac{\alpha_s}{4\pi}C_F y_{g} +\frac{v^3 m_t}{\Lambda^4}\Big[ 2\mathrm{Re}\{a^{Z*} b^{g}\}\frac{\alpha_s}{4\pi}C_F y_{ag} -2\mathrm{Im}\{a^{Z*} b^{g}\}\frac{\alpha_s}{4\pi}C_F \tilde{y}_{ag}\Big] \nonumber\\
&+&\frac{v^2m_t^2}{\Lambda^4}\Big[2\mathrm{Re}\{b^{Z*}b^g\}\frac{\alpha_s}{4\pi}C_F y_{bg}-2\mathrm{Im}\{b^{Z*}b^g\}\frac{\alpha_s}{4\pi}C_F \tilde{y}_{bg} \Big] \Bigg\}\,.\label{eq:grdi}
\end{eqnarray}
\normalsize
Here the $x_i$ stand for the tree-level contributions, while $y_i\,, \tilde{y}_i$ denote the corresponding QCD corrections. Numerical values of the coefficients are given in Tab.~\ref{table:num}. We see that the corrections due to the gluon dipole operator are an order of magnitude smaller (except $y_g$, which is even more suppressed) than the corrections to the $Z$ operators themselves and have the opposite sign.
\begin{table}[h] \begin{center} \begin{tabular}{llll} \hline\hline $x_{b}=2.36$ & $x_{a}=1.44$ & $x_{ab}=1.55$ \\ $y_{b}=-17.90$ & $y_{a}=-10.68$ & $y_{ab}=-10.52$ &$y_{g}=0.0103$\\ $y_{bg}=3.41$ & $y_{ag}=2.80$ & $\tilde{y}_{bg}=2.29$ & $\tilde{y}_{ag} = 1.50$\\ \hline\hline \end{tabular} \caption{\label{table:num}Numerical values of the coefficient functions appearing in Eq.~(\ref{eq:grdi}).} \end{center} \end{table}
Next we investigate the relative change of the decay rates and branching ratios when going from LO to NLO in QCD.
\begin{table}[!h] \begin{center} \begin{tabular}{l|l|l|l||l|l}\hline\hline &$b^Z=b^g=0$&$a^Z=b^g=0$&$a^Z=b^Z, b^g=0$& $b^Z=0, a^Z =b^g$ &$a^Z=0, b^Z =b^g$\\ \hline $\Gamma^{\mathrm{NLO}}/\Gamma^{\mathrm{LO}}$&$0.92$& $0.91$& $0.92$ & $0.95$&$0.94$\\ $\mathrm{Br}^{\mathrm{NLO}}/\mathrm{Br}^{\mathrm{LO}}$&$1.001$&$0.999$&$1.003$ &$1.032$&$1.022$\\ \hline\hline \end{tabular} \caption{Numerical values of $\Gamma^{\mathrm{NLO}}/\Gamma^{\mathrm{LO}}$ and $\mathrm{Br}^{\mathrm{NLO}}(t\to q Z)/\mathrm{Br}^{\mathrm{LO}}(t\to q Z)$ for selected values of, and relations between, the Wilson coefficients.} \label{brsZ} \end{center} \end{table}
The results are presented in Tab.~\ref{brsZ}. We see that the change in the decay width is of the order of 10\%. There is a severe cancellation between the QCD corrections to $\Gamma(t\to c Z)$ and to the main decay channel $\Gamma(t\to b W)$. This cancellation causes the change of the branching ratio to be only at the per-mille level when $b^g$ is set to zero.
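The pattern of Tab.~\ref{brsZ} can be reproduced directly from Eq.~(\ref{eq:grdi}) and Tab.~\ref{table:num}; below is a minimal sketch for the $b^g=0$ columns (assuming real couplings and taking $v\simeq 246$ GeV for the ratio $m_t/v$, which Eq.~(\ref{eq:grdi}) requires but the table does not fix):
\begin{verbatim}
# Reproduce the Gamma^NLO/Gamma^LO entries of the table (b^g = 0 columns)
# from Eq. (grdi) and the x_i, y_i coefficients; v ~ 246 GeV is assumed.
import math

m_t, v, alpha_s, C_F = 172.3, 246.0, 0.107, 4.0 / 3.0
k = alpha_s / (4.0 * math.pi) * C_F             # prefactor of the y_i terms
x = {'a': 1.44, 'b': 2.36, 'ab': 1.55}
y = {'a': -10.68, 'b': -17.90, 'ab': -10.52}

def ratio(aZ, bZ):
    """Gamma^NLO / Gamma^LO for real couplings a^Z, b^Z and b^g = 0."""
    B = (m_t / v) * bZ                          # dipole term carries m_t/v
    lo  = aZ**2 * x['a'] + B**2 * x['b'] + 2 * aZ * B * x['ab']
    nlo = lo + k * (aZ**2 * y['a'] + B**2 * y['b'] + 2 * aZ * B * y['ab'])
    return nlo / lo

print(ratio(1, 0), ratio(0, 1), ratio(1, 1))    # ~0.92, ~0.91, ~0.92
\end{verbatim}
The near-constancy of the branching ratios then follows once the analogous $\alpha_s$ correction to the normalizing $\Gamma(t\to bW)$ width is included.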
In the case when only the operators $\mathcal O_{L,R}^Z$ are considered, this cancellation is anticipated, since the NLO correction to $\Gamma^Z_{a}$ is of the same form as the correction to the rate of the main decay channel. If we treat the $b$ quark as massless, exact cancellation is avoided only due to the difference between the $Z$ and $W$ boson masses. It is more surprising that a similar cancellation is obtained also when only the dipole $Z$ operator is considered. However, setting $b^g=a^Z$ or $b^g=b^Z$, the impact of the QCD corrections is increased by an order of magnitude and reaches a few percent.
\subsection{Summary}
To summarize, we have presented a study of the $t\to q Z,\gamma$ decays mediated by the effective operators given in Eq.~(\ref{eq:ops}) at NLO in QCD. We found that QCD corrections can induce sizable mixing of the relevant operators, both through their renormalization scale running as well as in the form of finite matrix element corrections. These effects are found to be relatively small for the $t\to q Z$ decay, but can be of the order of 10\% in the $t\to q \gamma$ channel, depending on the kinematical cuts employed. The accurate interpretation of experimental bounds on radiative top processes in terms of effective FCNC operators therefore requires knowledge of the experimental cuts involved and can be used to probe $\mathcal O^g_{LR,RL}$ contributions indirectly.
\newpage
\section{Three-body $t\to q \ell^+ \ell^-$ decays }\label{sec:three_body}
This section is devoted to the study of $t \to q \ell^+ \ell^-$ decays, with the basic goal of identifying effects that can discriminate between different NP models in top FCNCs and are accessible to experimental study. Exploring the three-body decay channel brings two main advantages. The first is the larger phase space, which offers more observables to be considered -- in particular the angular asymmetries among the final state lepton and jet directions. The second advantage is that the three-body final state that we consider is common to both $Z$- and $\gamma$-mediated FCNC decays. Since some BSM models predict observable FCNC top quark decays in both the $Z$ and $\gamma$ channels, the interference effects in the common three-body channel are worth exploring. Since the standard forward-backward asymmetry for the leptons vanishes in photon-mediated decays, we consider another asymmetry, which we call the left-right asymmetry, associated with the lepton angular distribution in the lepton-quark rest frame (see section \ref{sec:3body_obs}). This asymmetry is nontrivial also in photon-mediated decays. We explore the ranges of values for these two asymmetries in $t\rightarrow q\ell^+\ell^-$ decays mediated by both the $Z$ boson and the photon. We also consider the interference effects, as we expect them to significantly affect the ranges of the asymmetries. Our results can serve as a starting point for more elaborate investigations of the experimental sensitivity to the proposed observables, including QCD corrections, proper jet fragmentation and showering, and the impact of experimental cuts and detector effects; a toy illustration of the asymmetry extraction is sketched below.
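As a simple illustration of the counting asymmetries defined in Eq.~(\ref{eq:As}) of the next subsection, a toy Monte Carlo sketch (the angular shape $1 + a z + b z^2$ and its parameters are hypothetical, chosen only to exercise the definition):
\begin{verbatim}
# Toy illustration of a counting asymmetry A = (N_{z>0} - N_{z<0}) / N_tot;
# the angular shape 1 + a z + b z^2 and its parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.4, 0.2                        # hypothetical shape parameters

# sample z in [-1, 1] from p(z) ~ 1 + a z + b z^2 by accept-reject
z = rng.uniform(-1.0, 1.0, 2_000_000)
keep = rng.uniform(0.0, 1.0 + abs(a) + abs(b), z.size) < 1.0 + a * z + b * z**2
z = z[keep]

A_mc = (np.sum(z > 0) - np.sum(z < 0)) / z.size
A_exact = a / (2.0 + 2.0 * b / 3.0)    # analytic integral of the same shape
print(A_mc, A_exact)                   # both ~0.1875
\end{verbatim}
Any asymmetry of the form used below can be estimated in this way from generated or measured events.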
\subsection{Effective Lagrangian}
\begin{wrapfigure}{r}{0.3 \textwidth} \begin{center} \vspace{-0.8cm} \includegraphics[scale=0.6]{FCNCthree.pdf} \caption{$t\to q \ell^+ \ell^-$ Feynman diagram, where the FCNC operators ${\mathcal O}_{L,R}^Z$, ${\mathcal O}_{LR,RL}^{Z,\gamma}$, are given in Eq.~(\ref{eq:ops}).}\label{fig:fcnc_three} \vspace{-0.3cm} \end{center} \end{wrapfigure}
The effective Lagrangian governing the FCNC top quark decay is taken to be the same as given in Eq.~(\ref{eq:Lagr}). We shall assume the FCNC-mediating gauge boson ($Z$ boson or the photon) to further couple to a pair of charged leptons. Note that the gluonic operator will not play a role in the analysis of this section, since the gluon does not couple to leptons. For completeness we present here the part of the SM Lagrangian that couples the charged leptons to the photon and the $Z$ boson
\begin{eqnarray} \mathcal L_{\ell} = g_Z Z_{\mu} \left[ c_R\, \bar \ell_R \gamma^{\mu} \ell_R + c_L \,\bar \ell_L \gamma^{\mu} \ell_L\right] + e A_{\mu} \bar \ell \gamma^{\mu} \ell\,, \label{eq:Lell} \end{eqnarray}
where the $Z$ couplings are $c_R = \sin^2\theta_W$ and $c_L =- (\cos 2\theta_W /2)$. The Feynman diagram of the process that we shall be studying is given in Fig.~\ref{fig:fcnc_three}. Since the top quark is massive enough for the produced $Z$ boson to be on shell, we use the standard Breit-Wigner formula to account for its finite decay width by replacing $(s-m_Z^2) \to (s - m_Z^2 +\mathrm{i} m_Z \Gamma_Z)$ in the denominator of the propagator, where $s$ denotes the square of the intermediate $Z$ boson momentum and $\Gamma_Z$ is its total decay width.
\subsection{Observables}\label{sec:3body_obs}
We consider scenarios where the detection of an NP signal in the FCNC decay channel $t\to q \ell^+\ell^-$ could be most easily complemented by other observables in the same decay mode. This would allow distinguishing between different possible effective amplitude contributions and thus different underlying NP models. We neglect kinematical effects of the lepton masses and the light quark jet invariant mass, as these are expected to yield immeasurably small effects in the kinematical phase space set by the large top quark mass. We start with the double-differential decay rate ${\mathrm{d} \Gamma}/{(\mathrm{d} u\, \mathrm{d} s)}$, where $s=m_{\ell^+\ell^-}^2$ is the invariant mass of the lepton pair and $u = m_{j \ell^+}^2$ is the invariant mass of the final state quark (jet) and the positively charged lepton $\ell^+$. Integrating this decay rate over one of the kinematical variables, we obtain the partially integrated decay rate distributions ($\mathrm{d}\Gamma / \mathrm{d} u$, $\mathrm{d}\Gamma / \mathrm{d} s$), while the full decay rate ($\Gamma$) is obtained after completely integrating these distributions. The branching ratio is obtained by normalizing the decay width to the width of the main decay channel~(\ref{eq:SM_MDC}). The differential decay rate distribution can also be decomposed in terms of two independent angles, as defined in Fig.~\ref{fig:angles}.
\begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{dve_asimetriji.pdf} \caption{\label{fig:angles} Definition of the two angles relevant to our analysis. The arrows represent the three-momenta of the particles. The left diagram corresponds to the lepton pair rest-frame and the right diagram to the rest-frame of the positively charged lepton and the light quark.
} \end{center} \end{figure}
In the $\ell^+ \ell^-$ rest-frame $z_j=\cos\theta_{j}$ measures the relative direction between the negatively charged lepton and the light quark jet. Conversely, in the rest-frame of the positively charged lepton and the quark jet, we can define $z_{\ell}=\cos\theta_{\ell}$ to measure the relative directions of the two leptons. In terms of these variables, we can define two asymmetries ($i=j,\ell$) as
\begin{equation} A_i = \frac{\Gamma_{z_i>0}-\Gamma_{z_i<0}}{\Gamma_{z_i>0} + \Gamma_{z_i<0}}\,, \label{eq:As} \end{equation}
where $\Gamma_{z_i\lessgtr 0}$ denote the integrated decay rates with an upper or lower cut on one of the $z_i$ variables. We can then identify $A_{j}\equiv A_{\mathrm{FB}}$ as the commonly known {\sl forward-backward asymmetry} (FBA) and in addition define $A_{\ell}\equiv A_{\mathrm{LR}}$ as the {\sl left-right asymmetry} (LRA). The two angles and the asymmetries they define are related via a simple permutation of the final state momentum labels between the quark jet and the positively charged lepton, and consequently via a $u\leftrightarrow s$ interchange. Since the asymmetries as defined in Eq.~(\ref{eq:As}) are normalized to the decay rate, they represent independent observables with no spurious correlations to the branching ratio. On the other hand, correlations among the two asymmetries are of course present and indicative of the particular NP operator structures contributing to the decay.
\subsection{Signatures}
Next we study the signatures of various possible contributions to the $t\to q \ell^+\ell^-$ decay using the integrated FBA and LRA observables defined in the previous section. Before exploring individual mediation cases, a general remark is in order. Since all the effective operators of our basis~(\ref{eq:ops}) come suppressed by an undetermined NP cut-off scale, the actual values of the effective couplings ($a$, $b$) are unphysical (they can always be rescaled by a different choice of the cut-off scale). The total decay rate determines the overall magnitude of the physical product of the couplings with the cut-off scale. On the other hand, relative sizes or ratios of couplings (independent of the cut-off) determine the magnitudes of the asymmetries. The extremal cases are then naturally represented when certain (combinations of) couplings are set to zero -- often the case in concrete NP model implementations.
\subsubsection{Photon mediation}
As pointed out in section~\ref{sec:fcnc_matrix_elements}, the direct detection of energetic photons is considered to be the prime strategy in the search for photon-mediated FCNC top quark decays. However, the $t\to q\ell^+\ell^-$ channel, where the photon couples to the charged lepton pair, can serve as an additional handle. Due to the infrared pole in the di-lepton invariant mass distribution we introduce a low $\hat{s}=m_{\ell^+\ell^-}^2/m_t^2$ cut, denoted $\hat{s}_{\mathrm{min}}\equiv\epsilon/m_t^2$, and present the total decay width as a function of it
\begin{eqnarray} \Gamma^{\gamma} = \frac{m_t}{16\pi^3}\frac{g_Z^4 v^4 }{\Lambda^4}B_{\gamma}f_{\gamma}(\hat{s}_{\mathrm{min}})\,, \hspace{0.5cm} B_{\gamma} = \frac{m_t^2}{v^2}\frac{e^4}{g_Z^4}\frac{|b_{LR}^{\gamma}|^2+|b_{RL}^{\gamma}|^2}{2}\,.\label{eq:gamma_gamma} \end{eqnarray}
\begin{wrapfigure}{r}{0.5\textwidth} \begin{center} \vspace{-0.7cm} \includegraphics[scale=0.6]{ALR_F1.pdf} \vspace{-0.3cm} \caption{\label{fig:lra_photon} The dependence of the photon-mediated LRA on the low di-lepton invariant mass cut $\epsilon$.
} \end{center} \vspace{-1.0cm} \end{wrapfigure}
The physical cut is of course at $\epsilon=4m_{\ell}^2$. We have also defined an auxiliary variable $B_{\gamma}$ summarizing the relevant NP parameter dependencies. The function $f_{\gamma}$ depends only on the di-lepton invariant mass cut $\hat{s}_{\mathrm{min}}$ and is presented in Eq.~(\ref{fg}) of the Appendix. While the FBA vanishes identically, due to the purely vectorial coupling of the photon to the leptons, the LRA can be written in the following way
\begin{eqnarray} A_{\mathrm{LR}}^\gamma&=&\frac{g_{\gamma}(\hat{s}_{\mathrm{min}})}{f_{\gamma}(\hat{s}_{\mathrm{min}})}\,,\label{eq:def_g} \end{eqnarray}
where the function $g_{\gamma}$ is presented in Eq.~(\ref{gg}) of the Appendix and also depends only on the cut on the di-lepton invariant mass. Consequently, the LRA does not depend on the effective dipole couplings in any way; it does, however, show a non-trivial dependence on the low $\hat{s}$ cut, which we plot in Fig.~\ref{fig:lra_photon}. We see that the value of the integrated LRA is highly sensitive to the cut: it takes the value $1$ in the limiting case $\sqrt{\epsilon}\to 0$, decreases as the cut is raised, and even changes sign between $60$ and $80$ GeV.
\subsubsection{$Z$ mediation}
Current search strategies for $t\to q Z$ decays actually consider the $t\to q \ell^+\ell^-$ decay channel, where the charged lepton pair is to be identified as the decay product of the $Z$ boson. This is achieved by imposing a cut on the di-lepton invariant mass around the $Z$ mass to reduce backgrounds. As long as such cuts are loose compared to the width of the $Z$, we do not expect them to affect our observables. The decay width and the two asymmetries can be written as
\begin{subequations}\label{eq:fcnc_Z_1}
\begin{eqnarray}
\Gamma^Z &=& \frac{m_t}{16\pi^3}\frac{g_Z^4v^4}{\Lambda^4}\Big[f_A A + f_B B + f_C C\Big]\,, \label{eq:xx}\\
A_{\mathrm{FB}}^Z &=& f_{\alpha\beta\gamma}\frac{\alpha -4\beta+4\gamma}{f_A A + f_B B + f_C C}\,, \label{eq:xx1}\\
A_{\mathrm{LR}}^Z &=&\frac{ g_A A + g_B B + g_C C + g_{\alpha\beta\gamma}[\alpha-4\beta+4\gamma]}{f_A A + f_B B + f_C C}\,,\label{eq:xx2}
\end{eqnarray}
\end{subequations}
where the parts depending on the NP-generated FCNC top quark couplings have been separated from the parts that depend only on the SM parameters and the phase space integration. The NP-dependent parameters introduced are
\begin{align} A&= \frac{|a_R^Z|^2+|a_L^Z|^2}{2} L_+\,,& \alpha &= \frac{|a_R^Z|^2-|a_L^Z|^2}{2}L_-\,,\\ \nonumber B&= \frac{m_t^2}{v^2}\frac{|b_{LR}^Z|^2+|b_{RL}^Z|^2}{2} L_+\,,& \nonumber \beta &= \frac{m_t^2}{v^2}\frac{|b_{LR}^Z|^2-|b_{RL}^Z|^2}{2}L_-\,,\\ \nonumber C&=-\frac{m_t}{v}\frac{\mathrm{Re}\{b^Z_{LR}a_L^{Z*}+b_{RL}^Za_{R}^{Z*}\}}{2} L_+\,,& \nonumber \gamma&=\frac{m_t}{v}\frac{ \mathrm{Re}\{b_{LR}^Za_L^{Z*}-b_{RL}^{Z}a_{R}^{Z*}\}}{2}L_-\,, \end{align}
where $L_{\pm} = \frac{1}{2}\sin^4\theta_W\pm\frac{1}{8}\cos^22\theta_W$ are the factors coming from the charged lepton couplings to the $Z$ boson governed by the Lagrangian given in Eq.~(\ref{eq:Lell}). On the other hand, the remaining parameters $f_i$ and $g_i$ do not depend on the effective FCNC couplings and are presented in the form of integrals in Eqs.~(\ref{eq:fA}, \ref{eq:gZ}) of the Appendix.
\begin{figure}[h] \begin{center} \includegraphics[scale=0.8]{CorrZ1.pdf} \caption{\label{fig:assym_Z} The correlation of the FBA and LRA in $Z$-mediated decays. The gray area (solid border) represents decays with all possible current and dipole $Z$ FCNC couplings.
The orange area (dotted border) corresponds to decays with $a_L^Z$ set to zero, while the solid and dashed lines represent decays with only current and only dipole couplings, respectively. The following numerical values have been used: $m_{Z} = 91.2$ GeV, $m_t = 171.2$ GeV, $\Gamma_Z = 2.5$ GeV and $\sin^2\theta_W = 0.231$.} \end{center} \end{figure}
Performing a random sweep over the values of the FCNC couplings that give the same value of the total FCNC decay width (\ref{eq:xx}), we explore the possible ranges and correlations between the two asymmetries (\ref{eq:xx1}, \ref{eq:xx2}) in Fig.~\ref{fig:assym_Z}. On the same plot we also project the limits where only dipole or only current interactions of the $Z$ contribute. In Ref.~\cite{Fox:2007in} strong indirect limits were reported on the left-handed FCNC couplings of the $Z$ coming from low-energy observables. Therefore we also superimpose the possible predictions for the two asymmetries when these couplings are set to zero. We observe that the LRA can be used to distinguish between dipole and current FCNC couplings of the $Z$, while the FBA can distinguish the chiralities of the couplings.
\subsubsection{Interference of photon and $Z$ mediation}
Several NP models predict comparable decay rates for $t\to q Z,\gamma$. This may in turn lead to a situation where an experimental search using a common final state may be more promising than dedicated searches in each channel separately. In addition, the asymmetries in $t\to q\ell^+\ell^-$ may shed additional light on the specific couplings involved. The decay rate in this case depends again on the di-lepton invariant mass cut
\begin{eqnarray}\label{eq:FCNC_width2} \Gamma^{\gamma+Z}(\hat{s}_{\mathrm{min}}) = \Gamma^{\gamma}(\hat{s}_{\mathrm{min}}) + \Gamma^{Z}(\hat{s}_{\mathrm{min}}) + \Gamma^{\mathrm{int}}(\hat{s}_{\mathrm{min}})\,, \end{eqnarray}
where the pure photon contribution $\Gamma^{\gamma}$ is given in Eq.~(\ref{eq:gamma_gamma}), while the pure $Z$ and interference contributions can be written as
\begin{subequations}\label{eq:new_fs}
\begin{eqnarray}
\Gamma^{Z}(\hat{s}_{\mathrm{min}}) &=&\frac{m_t}{16\pi^3}\frac{v^4g_Z^4}{\Lambda^4}\Big[f_A^{\epsilon}A+f_B^{\epsilon}B+f_C^{\epsilon}C\Big]\,,\\
\Gamma^{\mathrm{int}}(\hat{s}_{\mathrm{min}}) &=&\frac{m_t}{16\pi^3}\frac{v^4g_Z^4}{\Lambda^4}\Big[f_{W_{12}} (W_1+W_2) +f_{W_{34}}( W_3+ W_4)\Big]\,.
\end{eqnarray}
\end{subequations}
Here $W_1,\dots,W_4$ are newly introduced NP-dependent constants containing both the $Z$ and $\gamma$ effective FCNC couplings
\begin{align} W_1&= \frac{m_t^2}{v^2}\frac{e^2}{g_Z^2}\frac{1}{2}\mathrm{Re}\{b_{LR}^{\gamma*}b_{LR}^{Z}c_L+b_{RL}^{\gamma*}b_{RL}^{Z}c_R \} \,, & W_2&= \frac{m_t^2}{v^2}\frac{e^2}{g_Z^2}\frac{1}{2}\mathrm{Re}\{b_{LR}^{\gamma*}b_{LR}^{Z}c_R+b_{RL}^{\gamma*}b_{RL}^{Z}c_L \}\,,& \\ \nonumber W_3&= \frac{m_t}{v}\frac{e^2}{g_Z^2}\frac{1}{2}\mathrm{Re}\{-b_{LR}^{\gamma*}a_L^{Z}c_L - b_{RL}^{\gamma*}a_{R}^{Z}c_R \}\,, & \nonumber W_4&= \frac{m_t}{v}\frac{e^2}{g_Z^2}\frac{1}{2}\mathrm{Re}\{-b_{LR}^{\gamma*}a_L^{Z}c_R - b_{RL}^{\gamma*}a_{R}^{Z}c_L \}\,.
\end{align}
The two asymmetries can be expressed as the following fractions
\begin{eqnarray}
A_{\mathrm{FB}}^{\gamma+Z} &=& \frac {f_{\alpha\beta\gamma}^{\epsilon} (\alpha-4\beta+4\gamma)+ f_W \big(2(W_2-W_1)+ W_4-W_3\big)} {f_{\gamma}B_{\gamma} + f_A^{\epsilon}A+f_B^{\epsilon}B+f_C^{\epsilon}C +f_{W_{12}} (W_1+W_2) +f_{W_{34}}( W_3+ W_4)} \,,\\
A_{\mathrm{LR}}^{\gamma+Z} &=& \frac {g_{\gamma}B_{\gamma}+g_A^{\epsilon} A + g_B^{\epsilon} B+g_C^{\epsilon} C+g_{\alpha\beta\gamma}(\alpha-4\beta+4\gamma)+\sum_{i=1}^4 g_{W_i} W_i } {f_{\gamma}B_{\gamma} + f_A^{\epsilon}A+f_B^{\epsilon}B+f_C^{\epsilon}C +f_{W_{12}} (W_1+W_2) +f_{W_{34}}( W_3+ W_4)}\,.
\end{eqnarray}
The newly introduced functions $f_i$ and $g_i$ now depend on the $Z$ boson parameters as well as on the di-lepton invariant mass cut $\epsilon$. They are presented in Eqs.~(\ref{eq:fcnc_int_1}) of the Appendix.
\begin{figure}[h] \begin{center} \includegraphics[scale=0.7 ]{CorrZF1.pdf} \caption{\label{fig:assym_Zg} The correlation of the FBA and LRA in $Z$- and $\gamma$-mediated decays. The gray area (solid border) represents decays with all possible $Z$ and $\gamma$ FCNC couplings. The blue (dotted border) area corresponds to decays with only current $Z$ FCNC couplings. For comparison, the orange (dashed border) area represents $Z$-mediated decays. Numerical values used are the same as in Fig.~\ref{fig:assym_Z}, with the addition of $\sqrt{\epsilon} = 40 $ GeV.} \end{center} \end{figure}
Performing a random sweep across both the $\gamma$ and $Z$ NP FCNC couplings that give the same decay width~(\ref{eq:FCNC_width2}), we explore the possible correlation between the FBA and the LRA in this scenario. We present the results with a fixed cut on $\sqrt{\epsilon}$ set to $40$~GeV in Fig.~\ref{fig:assym_Zg}. We also present possible points for the case when only the current FCNC $Z$ couplings contribute. We observe that, in principle, interference effects can produce a larger LRA than pure $Z$ mediation.
\begin{wraptable}{r}{0.3\textwidth} \begin{tabular}{c|c|c} & only $Z$ & $Z$ and $\gamma$ \\\hline\hline $A_{\mathrm{FB}}$ &0.045 & 0.035\\ $A_{\mathrm{LR}}$ &0.206 & 0.226\\\hline\hline \end{tabular} \caption{Values of the FBA and LRA for the highest allowed coefficients given by Fox et al. in Ref.~\cite{Fox:2007in}, with $\sqrt{\epsilon}= 40$ GeV.} \label{tab:asym_tabela} \end{wraptable}
In Ref.~\cite{Fox:2007in} upper bounds on the coefficients accompanying the operators responsible for the FCNC decays $t\to c Z$ and $t\to c \gamma$ are presented. Using the transcription formulae presented in Eqs.~(\ref{eq:fcnc_trans}) of the Appendix, we can evaluate the FBA and LRA associated with these upper bounds. The numerical values are presented in Tab.~\ref{tab:asym_tabela}. These values serve only to illustrate that nonzero asymmetries can indeed be obtained; they do not represent upper bounds on the asymmetries. There is no reason to expect that the highest allowed values of the coefficients~(\ref{eq:fcnc_trans}) yield the largest possible asymmetries, which are complicated functions of these coefficients.
\subsection{Summary}
We have considered the top quark decay $t\to q\ell^+\ell^-$ as a probe of BSM physics manifested in FCNC top quark transitions. In addition to the branching ratio, we have defined two angular asymmetries which can serve to further discriminate between different NP scenarios.
Comparing possible contributions to the decay mode via SM $\gamma$ and $Z$ mediation we can draw the following conclusions: large values of the FBA ($|A_{\mathrm{FB}}|\gg 0.1$) cannot be accounted for in decay modes mediated by the $Z,\gamma$ bosons as long as we assume these bosons to have SM couplings to the charged leptons. We have shown in Fig.~\ref{fig:assym_Z} that $A_{\mathrm{FB}}\in[-0.12,0.12]$. A measured point in the $(A_{\mathrm{FB}},A_{\mathrm{LR}})$ plane could exclude models with only current or only dipole FCNC couplings of the $Z$ if it were located off the solid and dashed black lines in Fig.~\ref{fig:assym_Z}. Treating the $Z$ and photon as indistinguishable mediators expands the allowed LRA region to larger positive values. Current experimental sensitivity studies look at the two-body decay modes $t\to c Z,\gamma$~\cite{Carvalho:2007yi}. Our analysis may be applicable to the potential measurement of $t\to c Z$ at ATLAS, since they will be identifying the $Z$ boson through its decay to a lepton pair. Angular asymmetries of this pair and the remaining hard jet could provide additional information on the $tcZ$ FCNC vertex. The $t\to c\gamma$ decay is generically characterized by a single high $p_T$ photon. Current search strategies for this FCNC include the detection of this photon, and not its eventual decay to a lepton pair. In order to fully explore our decay mode, one would need to relax or modify certain criteria used by current search strategies to reduce SM backgrounds. In addition, the reconstruction of the LRA might require top quark charge tagging. In principle our results are also applicable to the purely hadronic decay modes, where the two leptons are replaced by $b$-tagged jets for example; however, in this case the asymmetries are compromised by the lack of knowledge of the sign of the $b$ quark charges.
\chapter{Extended summary}
\section{Introduction}
The main goal of theoretical elementary particle physics is to understand, describe and predict the phenomena taking place at the smallest experimentally accessible distances. The principal mathematical tool for this purpose is quantum field theory. Continuous advances on the theoretical and experimental fronts have led to the formulation of the theory known as the Standard Model (SM), whose main concepts were laid down in the 1960s~\cite{Weinberg:1967tq}. The main characteristics of the SM are its simplicity and its remarkable predictive power. The past decades have been marked by tests of the most diverse predictions of this theory with a rich palette of sophisticated experiments, the most prominent of which is the {\sl Large Hadron Collider} (LHC). As a renormalizable theory of elementary particles and interactions, the SM is characterized by the gauge group under which it is invariant and by the mechanism of electroweak symmetry breaking via the Higgs boson
\begin{eqnarray} SU(3)_c \times SU(2)_L \times U(1)_{Y} \xrightarrow{\langle \phi \rangle} SU(3)_c \times U(1)_Q\,, \hspace{0.5cm} Q = Y + T_3\,, \end{eqnarray}
which completely fixes the vector and scalar field content. The fermion fields, divided into quarks and leptons, appear in the SM in three replicas (families) of the same representations of the gauge group, which are said to carry different flavors. The basis in which the gauge interactions are diagonal differs from the basis of the mass eigenstates, which originate from the Yukawa-type interaction terms.
The Yukawa sector of the SM is said to be the only source of flavor physics; in the quark sector it gives rise to flavor-changing charged currents
\begin{eqnarray} \mathcal L_{\mathrm{cc}} =- \frac{g}{\sqrt{2}} \big[\bar{u}_{iL}\gamma^{\mu}d_{jL}\big]V_{ij}W_{\mu}^+ - \frac{g}{\sqrt{2}} \big[\bar{d}_{jL}\gamma^{\mu}u_{iL}\big]V^*_{ij}W_{\mu}^-\,. \label{eq:SMcc!} \end{eqnarray}
The mixing between the families in the charged currents is described by the unitary {\sl Cabibbo-Kobayashi-Maskawa} (CKM) matrix
\begin{eqnarray} V = \left(\begin{array}{ccc} V_{ud}& V_{us}& V_{ub}\\ V_{cd}& V_{cs}& V_{cb}\\ V_{td}& V_{ts}& V_{tb}\\ \end{array}\right)\,. \label{eq:CKMmat!} \end{eqnarray}
Despite the great success of the SM as a theory, it is generally believed that the SM is not the final theory of elementary particles and their interactions. Not least, it does not describe quantum gravity, which becomes important at energies of the order of the Planck scale $\Lambda_{\mathrm{P}}\sim 10^{19}$ GeV. The particle content of the SM is also insufficient to describe everything observed in nature so far, since the SM offers no explanation for the existence of dark matter and dark energy~\cite{Olive:2003iq,Trimble:1987ee}. The discovery of neutrino oscillations unambiguously confirms the existence of neutrino masses, which are absent in the SM. In addition, there are two major conceptual puzzles suggesting that the final theory must be more refined still. The hierarchy problem~\cite{Martin:1997ns,Wells:2009kq} highlights the puzzlingly large gap between the electroweak and Planck scales. The flavor puzzle, in turn, expresses the fact that the SM cannot explain the flavor parameters and their apparently hierarchical pattern. In the LHC era top quark physics can for the first time be studied with great precision, since the LHC can be regarded as a true top quark factory. The top quark, which stands out with its large mass $m_t = 173.2 \pm 0.9$ GeV~\cite{Lancaster:2011wr} and the large decay width of its main decay channel,
\begin{eqnarray} \Gamma (t\to W b ) =|V_{tb}|^2\frac{m_t}{16 \pi}\frac{g^2}{2}\frac{(1-x^2)^2(1+2x^2)}{2x^2} \sim 1.5 \,\,\mathrm{GeV}\,,\label{eq:SM_MDC!} \end{eqnarray}
which allows us to treat the top quark in decays as a free particle, is becoming increasingly interesting and accessible for searches of {\sl new physics} (NP) beyond the SM. The central question we attempt to explore in this work is how NP can manifest itself and be observed in top quark decays. The main guideline is provided by rare decays, for which the SM predicts very small rates, so that deviations from the SM predictions can be clearly observable. Particularly interesting for NP searches are top quark decays proceeding through {\sl flavor-changing neutral currents} (FCNC). Within the SM these decays are not possible at tree level, and their branching ratios are therefore unobservably small~\cite{Eilam:1990zc, AguilarSaavedra:2004wm}
\begin{eqnarray} \mathrm{Br}[t\to c \gamma]\sim 10^{-14}\,,\hspace{0.5cm} \mathrm{Br}[t\to c Z]\sim 10^{-14}\,,\hspace{0.5cm} \mathrm{Br}[t\to c g]\sim 10^{-12}\,. \end{eqnarray}
A potential detection of such decays would unambiguously signal the presence of NP. On the other hand, deviations from the SM predictions can also be sought in the main top quark decay channel. This channel proceeds through the charged weak interaction at tree level~(\ref{eq:SMcc!}) and serves as the experimental signature of the top quark. If the structure of the charged quark currents deviated from the one known in the SM, this would show up in the {\sl helicity fractions} of the $W$ boson produced in the decay.
The helicity fractions are defined by splitting the main decay channel width into three parts
\begin{eqnarray} \Gamma(t\to W b) = \Gamma_L + \Gamma_+ + \Gamma_- \,, \end{eqnarray}
where $L$ denotes the longitudinal, and $+$ and $-$ the positive and negative transverse helicity states of the $W$ boson. The helicity fractions are then introduced as $\mathcal F_{L,+,-} = \Gamma_{L,+,-}/\Gamma$. Since the SM predicts a very small value of $\mathcal F_+$~\cite{Fischer:2001gp,Czarnecki:2010gb,Do:2002ky,Fischer:2000kx},
\begin{eqnarray} \mathcal F_L^{\rm SM} = 0.687(5)\,,\hspace{0.5cm} \mathcal F_+^{\rm SM} = 0.0017(1) \label{eq:e22b!}\,, \end{eqnarray}
these observables are interesting for NP searches. The values quoted above include higher-order quantum corrections, the most important of which are those of {\sl quantum chromodynamics} (QCD).
\begin{SCfigure}[3.5][h!] \includegraphics[width=0.2\textwidth]{skica_hel.pdf} \caption{Illustration of the main-channel top quark decay in the top rest frame in the $m_b=0$ limit. Wide arrows represent the third component of the spin, thin arrows the direction of motion. In the massless limit the helicity and chirality of the fields coincide. Since the weak interaction in the SM is purely left-handed, the helicity of the $b$ quark is in this limit always negative. The third drawing shows a situation forbidden by spin conservation, since the top quark has spin $1/2$. This explains why the SM prediction for $\mathcal F_+$ is small. This simple picture breaks down once the massless limit is abandoned or higher-order quantum corrections are included in the process.} \label{fig:illust!} \end{SCfigure}
An intuitive explanation for the smallness of $\mathcal F_+$ is given in Fig.~\ref{fig:illust!}. If the measured values of the helicity fractions agree with the SM predictions, the measurements serve to constrain NP; if, on the other hand, the measured values differed substantially from the predicted ones (especially in the case of a significantly nonzero $\mathcal F_+$), this could signal a discovery of NP in charged currents involving the top quark. When considering NP in charged and neutral currents involving the top quark, one cannot ignore the fact that the top quark also plays a very important role in lower-energy physics, where it appears as a virtual particle. In particular, in the presence of NP in top quark physics one can expect modifications of the theoretical predictions for rare $B$ and $K$ meson processes. If the measurements agree with the SM predictions, they can be used to set indirect constraints on NP contributions; on the other hand, if they are measured precisely in the future, NP can make interesting predictions for them. With this in mind, in the case of NP in charged currents we analyze in detail its implications for observables in neutral $B$ meson mixing ($|\Delta B|=2$ processes) and decays ($|\Delta B|=1$ processes), which are described by the following effective Lagrangians, containing no fields with masses above the $b$ quark mass
\begin{subequations}\label{eq:x!}
\begin{eqnarray}
\mathcal L_q^{|\Delta B|=2}&=&- \frac{G_F^2 m_W^2}{4\pi^2}(V_{tq}^*V_{tb})^2 C_1(\mu)\mathcal O_1^q \,,\\
{\cal L}_{\mathrm{eff}}^{|\Delta B|=1}&=& \frac{4 G_F}{\sqrt{2}}\Big[ \sum_{i=1}^2 C_{i}( \lambda_u \mathcal O^{(u)}_i + \lambda_c \mathcal O^{(c)}_i) \Big] + \frac{4 G_F}{\sqrt{2}}\lambda_t\Big[\sum_{i=3}^{10} C_{i}{\cal O}_i + C_{\nu\bar{\nu}}{\cal O}_{\nu\bar{\nu}}\Big]\,.
\label{eq:loweff1!}
\end{eqnarray}
\end{subequations}
The expressions for the Wilson coefficients $C_i$ at the electroweak scale are computed by matching the full theory, which contains all the degrees of freedom, onto the effective theories described by the two Lagrangians above. Using the renormalization group equations, which originate from the anomalous dimensions of the effective operators, the scale is then lowered to the order of the $b$ quark mass, where the matrix elements of the effective operators can be computed with various non-perturbative methods, thereby predicting the amplitudes for various processes. If NP enters the matching procedure at the high scales, its entire influence on $B$ physics processes manifests itself as a modification of the Wilson coefficients at the high matching scale. Effective theory methods are employed also in the treatment of NP in neutral and charged quark currents. Despite our ignorance of the physics beyond the SM, higher-dimensional effective operators allow us to systematically parametrize the effects of NP on these currents
\begin{eqnarray} {\cal L}_{\mathrm{eff}}={\cal L}_{\mathrm{SM}}+\frac{1}{\Lambda^2}\sum_i C_i \mathcal Q_i +\mathrm{h.c.}+ {\cal O}(1/\Lambda^3)\,, \label{eq:lagr!} \end{eqnarray}
where ${\cal L}_{\mathrm{SM}}$ represents the SM part and $\mathcal Q_i$ are dimension-6 operators, invariant under the SM gauge group and built solely out of SM fields. Here we rely on a very powerful and important property of effective theories, which guarantees that higher-dimensional operators are accompanied by higher inverse powers of the new physics scale, so that their contributions are progressively smaller.
\begin{figure}[h] \begin{center} \includegraphics[width=0.5 \textwidth]{Integrating_out.pdf} \caption{Schematic representation of our approach to NP. The first step is the choice of the operator basis in Eq.~(\ref{eq:lagr!}), with which we parametrize the impact on top quark physics of NP whose scale $\Lambda$ lies far above the electroweak scale $\mu_t$. The second step is the subsequent matching of the effective theory~(\ref{eq:lagr!}) onto the effective theories~(\ref{eq:x!}), which allows us to study the effects of NP on $B$ meson physics observables.} \label{fig:intout!} \end{center} \end{figure}
Fig.~\ref{fig:intout!} schematically depicts our NP analysis strategy. The first step reflects our ignorance of the NP manifested at the scale $\Lambda$, far above the electroweak scale; its effects are parametrized by an effective Lagrangian of the form~(\ref{eq:lagr!}) with a suitably chosen operator basis, depending on which modifications of top quark physics we wish to address. If we further wish to study the impact of NP on $B$ meson physics, we must also perform the second step, the matching of the Lagrangian~(\ref{eq:lagr!}) onto the Lagrangians~(\ref{eq:x!}).
\section{NP in top quark decays: Neutral currents}
\subsection{Introduction}
The branching ratios for the processes
\begin{eqnarray} \nonumber t\to q V\,,\hspace{0.3cm} V=Z,\gamma,g\,,\hspace{0.3cm} q=c,u\,, \end{eqnarray}
which are unobservably small within the SM, can become considerably larger in extended theories~\cite{AguilarSaavedra:2004wm,Yang:2008sb,deDivitiis:1997sh,delAguila:1998tp} and potentially observable at the LHC, since ATLAS estimates a discovery potential for these decays provided their branching ratios are at least of the order of $\sim 10^{-5}$~\cite{Carvalho:2007yi}.
In this chapter we use effective operators to analyze the two-body decays $t \to q Z,\gamma$ and the three-body decays $t\to q \ell^+ \ell^-$. For the two-body decays we include the first-order QCD corrections and analyze both the consequences of effective operator mixing under renormalization and the effects of the finite corrections, including the bremsstrahlung processes. In the treatment of the three-body decays we focus on finding observables that could help discriminate between different forms of NP governing the FCNC decays, which is made possible by the richer phase space of the three-body final state. In parametrizing the NP that generates the $tZq$, $t\gamma q$ and $tgq$ vertices we follow Refs.~\cite{AguilarSaavedra:2004wm, AguilarSaavedra:2008zc}
\begin{eqnarray} {\mathcal L}_{\mathrm{eff}} = \frac{v^2}{\Lambda^2}a_L^{Z}{\mathcal O}_{L}^Z +\frac{v}{\Lambda^2}\Big[b^{Z}_{LR}{\mathcal O}_{LR}^{Z}+b^{\gamma}_{LR}{\mathcal O}_{LR}^{\gamma}+b^{g}_{LR}{\mathcal O}_{LR}^{g} \Big] + (L \leftrightarrow R) + \mathrm{h.c.}\,, \label{eq:Lagr!} \end{eqnarray}
where the operators are defined as
\begin{align} {\mathcal O}^{Z}_{L,R} &= g_Z Z_{\mu}\Big[\bar{q}_{L,R}\gamma^{\mu}t_{L,R}\Big]\,, & {\mathcal O}^{Z}_{LR,RL} &= g_Z Z_{\mu\nu}\Big[\bar{q}_{L,R}\sigma^{\mu\nu}t_{R,L}\Big]\,, \label{eq:ops!}\\ \nonumber{\mathcal O}^{\gamma}_{LR,RL} &= e F_{\mu\nu}\Big[\bar{q}_{L,R}\sigma^{\mu\nu}t_{R,L}\Big]\,, & {\mathcal O}^{g}_{LR,RL} &= g_s G^a_{\mu\nu}\Big[\bar{q}_{L,R}\sigma^{\mu\nu}T_a t_{R,L}\Big]\,. \end{align}
In computing the QCD corrections we employ dimensional regularization, which regulates both the UV and the IR divergences. The analysis of the effects of these operators in meson physics was performed in Ref.~\cite{Fox:2007in}; we neither repeat nor extend it in this work, thereby skipping both steps schematically shown in Fig.~\ref{fig:intout!}, and focus exclusively on phenomena with on-shell top quarks. From the analysis of Ref.~\cite{Fox:2007in} we conclude that there exist operators generating effective top quark FCNC vertices for which the indirect constraints on their contributions do not preclude potentially observable FCNC top quark decays.
\subsection{Two-body decays}
We first focus on the virtual QCD corrections. The Feynman diagrams for the one-loop corrections are shown in Fig.~\ref{fig:fcnc_virt!}.
\begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{FCNCvirt.pdf} \caption{Feynman diagrams of the virtual QCD corrections to the $t\to q Z,\gamma$ decays. Squares mark the insertion of the effective operators given in Eq.~(\ref{eq:ops!}), and crosses the additional points from which the $Z$ and $\gamma$ bosons can be emitted.} \label{fig:fcnc_virt!} \end{center} \end{figure}
First and foremost, these corrections lead to operator mixing under renormalization. Using effective theory methods we can derive the renormalization group equations relating the values of the Wilson coefficients evaluated at different scales. Since the operators $\mathcal O^Z_{L,R}$ have no anomalous dimensions, the remaining six operators can be assembled into two vectors,
\begin{eqnarray} \boldsymbol{\mathcal O}_{i} = (\mathcal O^\gamma_i , \mathcal O^Z_i, \mathcal O^g_i)^T\,,\hspace{0.5cm} i = RL, LR\,, \end{eqnarray}
which do not mix with each other.
The one-loop anomalous dimension matrix reads
\begin{equation} \gamma_i = \frac{\alpha_s}{2\pi} \left[ \begin{array}{ccc} C_F & 0 & 0 \\ 0 & C_F & 0 \\ 8 C_F / 3 & C_F (3 - 8 s^2_W) / 3 & 5C_F - 2 C_A \end{array} \right]\,. \label{eq:anomal!} \end{equation}
Due to the structure of the NP that generates it, the $LR$ operator often contains an explicit factor of the top quark mass. To analyze whether this leads to noticeable changes in the renormalization group analysis, we define a new operator
$$ \widetilde{\boldsymbol{\mathcal O}}_{LR} = (m_t/v )\boldsymbol{\mathcal O}_{LR}\,. $$
The mixing of the gluonic dipole operator into the photonic and the $Z$ dipole operator is summarized by the equations below
\begin{subequations}\label{eq:rge!} \begin{eqnarray} b^\gamma_{i} (\mu_t) &=& \eta ^{\kappa_1} b_i^\gamma (\Lambda )+\frac{16}{3}\left( \eta ^{\kappa_1}- \eta ^{\kappa_2}\right) b^g_i (\Lambda )\,,\\ b^Z_{i} (\mu_t) &=& \eta ^{\kappa_1} b_i^Z (\Lambda ) +\left[2-\frac{16}{3} s^2_W\right]\left( \eta ^{\kappa_1}- \eta ^{\kappa_2}\right) b^g_i (\Lambda )\,, \end{eqnarray} \end{subequations}
where $\mu_t$ denotes the top quark mass scale, $\eta = \alpha_s(\Lambda)/\alpha_s(\mu_t)$, $\kappa_1=4/(3\beta_0)$, $\kappa_2=2/(3\beta_0)$, and $\beta_0$ is the one-loop QCD beta function coefficient~\cite{Buras:1998raa}.
\begin{figure} \begin{center} \includegraphics[scale=0.7]{RGE1.pdf} \end{center} \caption{The ratio $|b_{i}^{\gamma,Z}(\mu_t)/b_i^{g}(\Lambda)|$ as a function of $\Lambda$, assuming $b_{i}^{\gamma,Z}(\Lambda) =0$ and $\mu_t\approx 200$ GeV. Solid lines refer to the operators without the explicit top quark mass, while the dashed line refers to the Wilson coefficient of the $\widetilde{{\mathcal O}}_{LR}^{\gamma}$ operator. $\widetilde{b}_{LR}^{Z}$ is not shown, since its deviation from $b_{LR}^{Z}$ is not visible on the plot.} \label{fig:RGE!} \end{figure}
Assuming that there are no additional colored degrees of freedom below the UV scale that would modify the beta function, we have $\beta_0=7$ for scales above $\mu_t$. If the redefined form $\widetilde{\boldsymbol{\mathcal O}}_{LR}$ is taken for the $LR$ operators, $\kappa_{1,2}$ change to $\kappa_1=16/(3\beta_0)$, $\kappa_2=14/(3\beta_0)$. The consequences of the renormalization scale running are illustrated in Fig.~\ref{fig:RGE!}, where we plot
\begin{eqnarray} \bigg|\frac{b_i^{\gamma,Z} (\mu_t)}{b_i^{g} (\Lambda)}\bigg|\,,\hspace{0.5cm}\text{when $b^{\gamma,Z}_i(\Lambda) = 0$}\,, \end{eqnarray}
which tells us how large a $b^{\gamma,Z}_i(\mu_t)$ can be generated at the top quark mass scale $\mu_t\simeq 200$ GeV solely through QCD operator mixing and the presence of the gluonic operator at the high scale $\Lambda$. We observe that for NP scales around $\Lambda \sim 2$ TeV the induced contributions to $b^\gamma_{i}$ amount to about 10\% of the $b^g_{i}$ coefficient generated at the scale $\Lambda$. On the other hand, due to the cancellation of similar contributions in Eqs.~(\ref{eq:rge!}), the induced contributions to $b^Z_{i}$ are much smaller (below $1\%$ over the displayed range of $\Lambda$). Including the explicit top quark mass factor in the operators does not change these conclusions. Having analyzed the renormalization properties of the NP operators, we are left with the $\alpha_s$ corrections to the matrix elements $\bra{q \gamma} \mathcal O_i \ket{t}$ and $\bra{q Z} \mathcal O_i \ket{t}$, evaluated at the top quark mass scale, and with the bremsstrahlung corrections, whose Feynman diagrams are shown in Fig.~\ref{fig:fcnc_brems!}.
After analyzing the renormalization properties of the NP operators, what remains are the $\alpha_s$ corrections to the matrix elements $\bra{q \gamma} \mathcal O_i \ket{t}$ and $\bra{q Z} \mathcal O_i \ket{t}$, which we evaluate at the top quark mass scale, and the bremsstrahlung corrections, whose Feynman diagrams are shown in Fig.~\ref{fig:fcnc_brems!}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{FCNCbrems.pdf} \end{center} \caption{Feynman diagrams of the bremsstrahlung processes $t\to qgZ,\gamma$. Squares mark insertions of the NP operator, while crosses mark the additional points from which a gluon (in the first two diagrams) or a $Z,\gamma$ (in the last diagram) can be emitted.} \label{fig:fcnc_brems!} \end{figure} The decay widths of these diagrams are of the same order in $\alpha_s$ as the decay widths into the two-body final states, and the sum of the two contributions ensures that the result is IR finite. We parametrize the decay width including the corrections of order $\alpha_s$ as \begin{eqnarray} \Gamma^V &=& |a^V|^2\frac{v^4}{\Lambda^4} \Gamma_{a}^V + \frac{v^2 m_t^2}{\Lambda^4}|b^V|^2 \Gamma^V_{b}+\frac{v^3m_t}{\Lambda^4}2\mathrm{Re}\{b^{V*}a^V\} \Gamma^V_{ab} \label{eq:oso!}\\ &+& \frac{v^3m_t}{\Lambda^4} \left[2\mathrm{Re}\{a^{V*}b^g\} \Gamma^V_{ag}- 2\mathrm{Im}\{a^{V*}b^g\}\tilde{\Gamma}^V_{ag}\right]\nonumber \\&+& \frac{v^2 m_t^2}{\Lambda^4}\left[ |b^g|^2 \Gamma^V_{g}+2\mathrm{Re}\{b^{V*}b^g\} \Gamma^V_{bg} -2\mathrm{Im}\{b^{V*}b^g\}\tilde{\Gamma}^V_{bg} \right]\,,\nonumber \end{eqnarray} where $V= Z,\gamma$ and $a^{\gamma}=0$. The widths $\Gamma^V_{ag,bg,g}$ in the second and third lines of Eq.~(\ref{eq:oso!}) encode the contributions of the gluonic operator; they are therefore absent at leading order ($\alpha_s^0$) and first appear at order $\alpha_s$. Analytic expressions for all the decay widths are given in Appendix~\ref{app:allwidths}. The QCD corrections to the process with a massless photon in the final state are somewhat more involved than in the case of the $Z$ boson. The result obtained from the virtual and bremsstrahlung diagrams presented above is IR divergent. This divergence could be removed by including an additional diagram for the process $t\to q g$ with a one-loop photonic correction. However, since the experimental searches for $t\to q \gamma$ always involve the detection of an isolated photon, to which that diagram does not contribute, we instead regularize the decay width by introducing a cut ensuring that the directions of the photon and of the light quark or gluon are sufficiently separated, $\nolinebreak{\delta r_j= 1- {\bf p}_\gamma \cdot {\bf p}_j / E_\gamma E_j}$, where $j=g,q$. It turns out that the dependence of the result on the cut $\delta_q$ is sizable and can lead to an enhancement of the gluonic operator contribution. In addition, we introduce an experimentally motivated cut on the photon energy, $E_{\gamma}^{\mathrm{cut}}$. We present the decay widths for the FCNC top quark decays to a photon as functions of the cuts so defined. \begin{figure}[h!] \begin{center} \includegraphics[height=5.5cm]{drEcut.pdf} \end{center} \caption{Relative size of the $\alpha_s$ corrections to $\mathrm{Br}(t\to q\gamma)$. Representative ranges of $\delta r_c\equiv\delta r$ and $E^{\mathrm{cut}}_\gamma$ are shown. Contours of fixed relative size of the corrections are drawn for $b^g=0$ (gray, dotted), $b^g=b^\gamma$ (red) and $b^g = - b^\gamma$ (blue, dashed).} \label{fig:foton_cuts!} \end{figure} The numerical analysis is presented in Fig.~\ref{fig:foton_cuts!}, which shows contours of constant relative size of the $\alpha_s$ corrections to $\mathrm{Br}(t\to q \gamma)$ in the plane of the two cuts. We observe that the contributions of the gluonic operator can amount to $10-15\%$ of the total decay width, depending on the relative phase and the sizes of the Wilson coefficients of the operators $\mathcal O_{LR,RL}^{g,\gamma}$. This means that an experimental bound on $\mathrm{Br}(t\to q \gamma)$ in fact constrains both $b^{\gamma}$ and $b^{g}$.
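For concreteness, the isolation requirements defined above can be written as a short routine acting on massless four-momenta $(E,\vec p\,)$; the numerical values of the cuts below are placeholders and not the ones used in Fig.~\ref{fig:foton_cuts!}.
\begin{verbatim}
import numpy as np

def delta_r(p_gamma, p_j):
    # delta r_j = 1 - p_gamma . p_j / (E_gamma E_j) for massless momenta
    return 1.0 - np.dot(p_gamma[1:], p_j[1:]) / (p_gamma[0] * p_j[0])

def passes_cuts(p_gamma, p_q, p_g, dr_cut=0.2, e_cut=20.0):
    """Isolated-photon requirement: an energetic photon, well separated
    from both the light quark and the bremsstrahlung gluon."""
    return (p_gamma[0] > e_cut and
            delta_r(p_gamma, p_q) > dr_cut and
            delta_r(p_gamma, p_g) > dr_cut)
\end{verbatim}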
This observation can be further reinforced by analyzing the ratio $\Gamma(t\to q \gamma) / \Gamma(t\to q g )$, with both decay widths computed to order $\alpha_s$\footnote{$\Gamma(t\to q g)$ including the QCD corrections is taken from Ref.~\cite{Zhang:2008yn}.}, as a function of the ratio of the relevant FCNC Wilson coefficients $|b^\gamma/b^g|$; this is shown in Fig.~\ref{fig:tcg-tcg!} for two representative choices of kinematic cuts. The vertical spread of the regions is generated by varying the relative phase between the coefficients $b^\gamma$ and $b^g$. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{c7c81.pdf} \end{center} \caption{\label{fig:tcg-tcg!} Ratio of the decay widths $\Gamma(t\to q \gamma) / \Gamma(t\to q g )$ as a function of the absolute value of the ratio of the relevant FCNC Wilson coefficients $|b^\gamma/b^g|$. Results are shown for two representative choices of kinematic cuts. The regions have a finite extent in the vertical direction because of the unknown relative phase between $b^\gamma$ and $b^g$. The lines shown correspond to maximal positive (solid) and maximal negative (dashed) $b^\gamma b^g$ interference.} \end{figure} In the case of the decay $t \to q Z$ the QCD corrections are considerably less dramatic. We summarize their significance in Table~\ref{tab:brsZ!}, where we list the relative change of the decay widths and of the branching ratios when going from order $\alpha_s^0$ to order $\alpha_s$. \begin{table}[!h] \begin{center} \begin{tabular}{l|l|l|l||l|l}\hline\hline &$b^Z=b^g=0$&$a^Z=b^g=0$&$a^Z=b^Z, b^g=0$& $b^Z=0, a^Z =b^g$ &$a^Z=0, b^Z =b^g$\\ \hline $\Gamma^{\mathrm{NLO}}/\Gamma^{\mathrm{LO}}$&$0.92$& $0.91$& $0.92$ & $0.95$&$0.94$\\ $\mathrm{Br}^{\mathrm{NLO}}/\mathrm{Br}^{\mathrm{LO}}$&$1.001$&$0.999$&$1.003$ &$1.032$&$1.022$\\ \hline\hline \end{tabular} \caption{Numerical values of the ratios $\Gamma^{\mathrm{NLO}}/\Gamma^{\mathrm{LO}}$ and $\mathrm{Br}^{\mathrm{NLO}}(t\to q Z)/\mathrm{Br}^{\mathrm{LO}}(t\to q Z)$ for particular values of the FCNC Wilson coefficients.} \label{tab:brsZ!} \end{center} \end{table} We observe that the relative change of the decay widths can reach $10\%$, whereas the change of the branching ratios is much smaller. The reason is an almost complete cancellation between the $\alpha_s$ contributions to the $t\to q Z$ decay width and those to the width of the dominant decay channel $t\to Wb$, to which the branching ratios are normalized. While such a result is to be expected for the $a^Z$ contributions, the analogous result for $b^Z$ is nontrivial. We also see that for certain phase relations between $b^Z$ and $b^g$ the change can grow to a few percent. \subsection{Three-body decays} In this section we focus on the decay $t \to q \ell^+ \ell^-$, in which the FCNC transition proceeds through the same photonic and $Z$ operators as in the previous section, with the boson subsequently coupling to a pair of charged leptons detected in the final state. The central goal of this analysis is the potential discrimination between different FCNC operators on the basis of various kinematic observables that can be defined thanks to the larger three-body phase space. Of particular interest to us are asymmetries defined through the directions of the final-state particles, which can be sensitive to the form of the FCNC vertex. In the rest frame of the lepton pair we define $z_j=\cos\theta_{j}$, referring to the directions of the negatively charged lepton and of the light quark.
In the rest frame of the positively charged lepton and the light quark we define $z_{\ell}=\cos\theta_{\ell}$, referring to the directions of the two charged leptons. Both definitions are illustrated in Fig.~\ref{fig:angles!}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{dve_asimetriji.pdf} \caption{\label{fig:angles!} Definition of the two angles used to introduce the two different kinematic asymmetries. The arrows indicate the directions of the particles' momenta.} \end{center} \end{figure} With the angles defined in this way we introduce two asymmetries \begin{equation} A_i = \frac{\Gamma_{z_i>0}-\Gamma_{z_i<0}}{\Gamma_{z_i>0} + \Gamma_{z_i<0}}\,. \label{eq:As!} \end{equation} We call $A_{j}\equiv A_{\mathrm{FB}}$ the {\sl forward--backward asymmetry} (FBA) and $A_{\ell}\equiv A_{\mathrm{LR}}$ the {\sl left--right asymmetry} (LRA). We approach the analysis of the asymmetries in three steps. First we consider decays proceeding through photon exchange, then decays proceeding through the $Z$ boson, and finally decays proceeding through both channels at the same time; the last case seems worth exploring, since many NP models can generate sizable FCNC decays both to photons and to $Z$ bosons, and the common leptonic final state allows us to probe the interference of the two contributions. The analytic formulas are given in chapter~\ref{chap:neutral_currents} and in appendix~\ref{app:allTB}. Because of the purely vector coupling of the photon to the charged leptons, the FBA cannot acquire a nonzero value when the decay is mediated by the photon. The LRA, on the other hand, turns out to be nonzero and completely independent of the new physics parameters. It does, however, exhibit a strong dependence on the cut on the invariant mass of the lepton pair $\sqrt{\epsilon}$, shown in the left panel of Fig.~\ref{fig:assym_Z!}. We see that for different kinematic cuts the LRA can even take different signs. \begin{figure}[h] \begin{center} \includegraphics[scale=0.45]{ALR_F1.pdf}\hspace{0.5cm} \includegraphics[scale=0.48]{CorrZ1.pdf}\hspace{0.5cm} \includegraphics[scale=0.48]{CorrZF1.pdf} \caption{\label{fig:assym_Z!}{\bf Left}: Dependence of the LRA on the lower cut $\epsilon$ on the invariant mass of the lepton pair, for decays proceeding through the photon. {\bf Middle}: Correlation between the FBA and the LRA for decays proceeding through the $Z$ boson. The gray region is obtained when all couplings are allowed to be nonzero, the other regions when certain couplings are set to zero. {\bf Right}: Correlation between the FBA and the LRA for decays proceeding through the photon and the $Z$ boson, including the interference contributions. The gray region is obtained when all FCNC couplings to the photon and the $Z$ boson are allowed to be nonzero, the other regions when certain couplings are set to zero.} \end{center} \end{figure} If the FCNC process proceeds through the $Z$ boson, the FBA also acquires nontrivial values, and both asymmetries become dependent on the NP parameters. By randomly sampling the values of the NP parameters we can explore the range of the asymmetries and the correlation between the FBA and the LRA. This is shown in the middle and right panels of Fig.~\ref{fig:assym_Z!}. The middle panel refers to decays proceeding exclusively through the $Z$ boson, while the right panel includes the decays through the $Z$ boson as well as the decays through the photon and the interference contributions of the two processes. We can see that large values of the FBA ($|A_{\mathrm{FB}}|\gg 0.1$) are unattainable in decays proceeding through the $Z$ boson, where the FBA is confined to the interval $A_{\mathrm{FB}}\in[-0.12,0.12]$.
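The counting definition in Eq.~(\ref{eq:As!}) is straightforward to evaluate on simulated events. Below is a toy Monte Carlo sketch with an assumed angular distribution $d\Gamma/dz \propto 1 + a z + b z^2$, whose asymmetry is analytically $(a/2)/(1+b/3)$; the values of $a$ and $b$ are illustrative and do not correspond to any particular operator.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def asymmetry(z):
    # Eq. (As): counting asymmetry of a sample of z = cos(theta)
    return (np.sum(z > 0) - np.sum(z < 0)) / len(z)

def sample(a, b, n=200000):
    # accept-reject sampling of dGamma/dz ~ 1 + a z + b z^2 on [-1, 1]
    z = rng.uniform(-1, 1, size=4 * n)
    u = rng.uniform(0, 1 + abs(a) + abs(b), size=4 * n)
    return z[u < 1 + a * z + b * z**2][:n]

z = sample(a=0.3, b=0.5)
print(asymmetry(z))               # Monte Carlo estimate
print((0.3 / 2) / (1 + 0.5 / 3))  # analytic value, ~0.129
\end{verbatim}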
An experimentally measured point in the $(A_{\mathrm{FB}},A_{\mathrm{LR}})$ plane could thus serve to exclude models that generate only certain types of effective FCNC couplings. If the decays through the $Z$ boson and through the photon are treated as indistinguishable, the range of attainable LRA values broadens substantially toward positive values. The presented analysis of the $t\to q \ell^+ \ell^-$ decays can in the future be confronted with the experimental searches for $t\to q Z$, in which the $Z$ boson is identified through the detection of the lepton pair into which it decays. On the other hand, as already mentioned, the experimental searches for $t\to q \gamma$ rely on the detection of an isolated photon. If one wished to depart from these constraints and focus on the leptonic final state, a new detailed analysis of the backgrounds of such a final state would be required. \section{NP in top quark decays: Charged currents} \subsection{Introduction} In this chapter we turn our attention to the charged currents involving top quarks and analyze the consequences of deviations from the SM therein. In the analysis we follow the concept outlined in the introductory chapter and first analyze the implications in $B$ meson physics; only then do we turn to the main decay channel of the top quark and to the helicity fractions, which are sensitive to the aforementioned modifications of the charged currents. In constructing the basis of dimension-6 operators with which we extend the SM, we first identify all structures that are invariant under the SM gauge group and contain charged currents with a top quark. We then specify the flavor structure of these operators, restricting ourselves to the framework of {\sl minimal flavor violation} (MFV)~\cite{Buras:2003jf,D'Ambrosio:2002ex,Grossman:2007bd}, in which the only source of flavor violation are the Yukawa couplings, as in the SM. This leads us to the following basis of seven effective operators \begin{subequations} \label{eq:ops1!} \begin{eqnarray} \mathcal Q_{RR}&=& V_{tb} [\bar{t}_R\gamma^{\mu}b_R] \big(\phi_u^\dagger\mathrm{i} D_{\mu}\phi_d\big) \,, \\ \mathcal Q_{LL}&=&[\bar Q^{\prime}_3\tau^a\gamma^{\mu}Q'_3] \big(\phi_d^\dagger\tau^a\mathrm{i} D_{\mu}\phi_d\big)-[\bar Q'_3\gamma^{\mu}Q'_3]\big(\phi_d^\dagger\mathrm{i} D_{\mu}\phi_d\big),\\ \mathcal Q'_{LL}&=&[\bar Q_3\tau^a\gamma^{\mu}Q_3] \big(\phi_d^\dagger\tau^a\mathrm{i} D_{\mu}\phi_d\big) -[\bar Q_3\gamma^{\mu}Q_3]\big(\phi_d^\dagger\mathrm{i} D_{\mu}\phi_d\big),\\ \mathcal Q^{\prime\prime}_{LL}&=&[\bar Q'_3\tau^a\gamma^{\mu}Q_3] \big(\phi_d^\dagger\tau^a\mathrm{i} D_{\mu}\phi_d\big)-[\bar Q'_3\gamma^{\mu}Q_3]\big(\phi_d^\dagger\mathrm{i} D_{\mu}\phi_d\big),\\ \mathcal Q_{LRt} &=& [\bar Q'_3 \tau^a\sigma^{\mu\nu} t_R]{\phi_u}W_{\mu\nu}^a \,,\\ \mathcal Q'_{LRt} &=& [\bar Q_3 \tau^a\sigma^{\mu\nu} t_R]{\phi_u}W_{\mu\nu}^a \,,\\ \mathcal Q_{LRb} &=& [\bar Q_3 \tau^a\sigma^{\mu\nu} b_R]\phi_d W_{\mu\nu}^a \,, \end{eqnarray} \end{subequations} where we have defined the $SU(2)_L$ doublets \begin{eqnarray} Q_3=(V^*_{kb} u_{Lk},b_{L})^{\mathrm{T}}\,, \hspace{0.5cm} \bar Q'_3 =\bar Q_i V^*_{ti}= (\bar{t}_L,V_{ti}^*\bar{d}_{iL})^T\,, \end{eqnarray} the covariant derivative and the electroweak field-strength tensor \begin{eqnarray} D_{\mu}&=&\partial_{\mu}+\mathrm{i} \frac{g}{2}W_{\mu}^a\tau^a +\mathrm{i} \frac{g'}{2}B_{\mu} Y\,, \\ W^a_{\mu\nu}&=&\partial_{\mu}W_{\nu}^a-\partial_{\nu}W_{\mu}^a - g\epsilon_{abc}W_{\mu}^b W_{\nu}^c\,,\nonumber \end{eqnarray} and finally the scalar fields $\phi_{u,d}$ (in the SM $\phi_u\equiv \tilde{\phi} =\mathrm{i} \tau^2 \phi_d^*$).
\subsection{Indirect implications in $B$ meson physics} The presence of the operators~(\ref{eq:ops1!}) modifies the expressions for the Wilson coefficients of the $|\Delta B|=2$ and $|\Delta B|=1$ processes since, as illustrated in Fig.~\ref{fig:dp!}, the operators enter the mixing diagrams as well as the diagrams for rare $B$ meson decays. We always consider the insertion of a single operator per diagram, thereby consistently restricting ourselves to NP contributions weighted by $1/\Lambda^2$ and neglecting higher powers. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{diags_povzetek.pdf} \caption{An example of two diagrams through which the contributions of the effective operators~(\ref{eq:ops1!}), marked by an orange square, affect $B$ meson mixing via a box diagram and $B$ meson decays via a penguin diagram. $V$ denotes a photon or a gluon, and the quarks in the loops are $u,c,t$.} \label{fig:dp!} \end{center} \end{figure} Once we compute all the diagrams and thus perform the matching of our extended theory onto the effective Lagrangians, we can parametrize the NP effects through shifts of the Wilson coefficients \begin{eqnarray} C_i(\mu) &=& C_i^{\mathrm{SM}}(\mu) + \delta C_i(\mu)\,, \label{eq:aaa!}\\ \delta C_i(\mu) &=&\sum_{j}\kappa_j(\mu) F_i^{(j)}(x_t,\mu) + \kappa_j^{*}(\mu) \tilde{F}^{(j)}_i (x_t,\mu)\,, \end{eqnarray} where $j=1,...,6$ runs over the operators~(\ref{eq:ops1!}), and the functions $F$, which depend on $x_t = m_t^2/m_W^2$ and on the matching scale $\mu$, are given in Appendix \ref{app:NP_D_B_2} for the $|\Delta B| = 2$ processes (denoted by $S^j$) and in Appendix \ref{app:SM_D_B_1} for the $|\Delta B|=1$ processes (denoted by $f^j$). We also define the rescaled NP Wilson coefficients used in the remainder of the analysis \begin{eqnarray} \kappa_{LL}^{(\prime,\prime\prime)}&=&\frac{C_{LL}^{(\prime,\prime\prime)}}{\Lambda^2\sqrt{2}G_F}\,,\hspace{0.3cm} \kappa_{RR}=\frac{C_{RR}}{\Lambda^2 2\sqrt{2} G_F}\,,\hspace{0.3cm} \kappa_{LRb}=\frac{C_{LRb}}{\Lambda^2 G_F}\,,\hspace{0.3cm} \kappa_{LRt}^{(\prime)}=\frac{C_{LRt}^{(\prime)}}{\Lambda^2 G_F}\,. \end{eqnarray} Drawing on the detailed studies of how the shifts of the Wilson coefficients~(\ref{eq:aaa!}) affect the observables of $B$ meson physics, performed in Refs.~\cite{Lenz:2010gu,Lenz:2012az,DescotesGenon:2011yn,Benzke:2010tq,Asner:2010qj, Huber:2007vv}, we can derive constraints on the parameters $\kappa_j$. Restricting ourselves to real $\kappa_j$, we obtain the intervals within which the parameters $\kappa_j$ lie at the 95\% {\sl confidence level} (C.L.), given in Table~\ref{tab:bounds!}.
\begin{table}[h] \hspace{-1cm} \begin{center} \begin{tabular}{c|ccc|c|c}\hline\hline &$B-\bar{B}$&$B\to X_s\gamma$&$B\to X_s \mu^{+}\mu^-$ & combined & $C_i(2m_W)\sim 1$ \\ \hline \LINE{$\kappa_{LL}$}{$\bs{0.08}{-0.09}$} {$\bs{0.03}{-0.12}$} {$\bs{0.48}{-0.49}$} {$\bs{0.04}{-0.09}\Big(\bs{0.03}{-0.10}\Big)$}& $\Lambda> 0.82\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{LL}^{\prime}$}{$\bs{0.11}{-0.11}$} {$\bs{0.17}{-0.04}$} {$\bs{0.31}{-0.30}$} {$\bs{0.11}{-0.06}\Big(\bs{0.10}{-0.06}\Big)$}& $\Lambda> 0.74\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{LL}^{\prime\pr}$}{$\bs{0.18}{-0.18}$} {$\bs{0.06}{-0.22}$} {$\bs{1.02}{-1.04}$} {$\bs{0.08}{-0.17}\Big(\bs{0.05}{-0.15}\Big)$}& $\Lambda> 0.60\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{RR}$}{} {$\bs{0.003}{-0.0006}$} {$\bs{0.68}{-0.66}$} {$\bs{0.003}{-0.0006}\Big(\bs{0.002}{-0.0006}\Big)$}& $\Lambda> 3.18\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{LRb}$}{} {$\bs{0.0003}{-0.001}$} {$\bs{0.34}{-0.35}$} {$\bs{0.0003}{-0.001}\Big(\bs{0.003}{-0.01}\Big)$}& $\Lambda> 9.26\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{LRt}$}{$\bs{0.13}{-0.14}$} {$\bs{0.51}{-0.13}$} {$\bs{0.38}{-0.37}$} {$\bs{0.13}{-0.07}\Big(\bs{0.12}{-0.14}\Big)$}& $\Lambda> 0.81\,\, \mathrm{TeV}$\\\hline \LINE{$\kappa_{LRt}^{\prime}$}{$\bs{0.29}{-0.29}$} {$\bs{0.41}{-0.11}$} {$\bs{0.75}{-0.73}$} {$\bs{0.27}{-0.07}\Big(\bs{0.25}{-0.06}\Big)$}& $\Lambda> 0.56\,\, \mathrm{TeV}$\\\hline\hline \end{tabular} \end{center} \caption{Upper and lower $95\%$ C.L. bounds on the real parts of $\kappa_j$, for $\mu = 2 m_W$ and for $\mu=m_W$ (in parentheses). The last column shows an estimate of the lower bound on the NP scale $\Lambda$ obtained for $C_j\sim 1$.} \label{tab:bounds!} \end{table} It is worth noting that the bounds on the parameters $\kappa_{RR}$ and $\kappa_{LRb}$ are an order of magnitude more stringent than those on the other parameters. These two bounds originate from the analysis of the $b\to s \gamma$ decay, in which the contributions of the operators ${\mathcal O}_{RR}$ and ${\mathcal O}_{LRb}$, which contain right-handed $b$ quarks, are effectively enhanced by a factor of $m_{W,t}/m_b$ relative to the contributions of the other operators.
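The last column of Table~\ref{tab:bounds!} follows from simple arithmetic: setting $C_j\sim1$ in the normalizations defined above and saturating the largest allowed $|\kappa_j|$ gives a lower bound on $\Lambda$. A short sketch (with $G_F=1.166\times10^{-5}\,\mathrm{GeV}^{-2}$) reproduces the quoted numbers:
\begin{verbatim}
import math

GF = 1.16637e-5                # GeV^-2
SQ2 = math.sqrt(2)

# kappa_j = C_j / (Lambda^2 * norm_j); largest allowed |kappa_j| at
# mu = 2 m_W, read off from the "combined" column of the table.
data = {                       # name: (norm, |kappa|_max)
    "kappa_LL":   (SQ2 * GF, 0.09), "kappa_LL'":  (SQ2 * GF, 0.11),
    "kappa_LL''": (SQ2 * GF, 0.17), "kappa_RR":   (2 * SQ2 * GF, 0.003),
    "kappa_LRb":  (GF, 0.001),      "kappa_LRt":  (GF, 0.13),
    "kappa_LRt'": (GF, 0.27),
}
for name, (norm, kmax) in data.items():
    print(name, math.sqrt(1 / (kmax * norm)) / 1e3, "TeV")
# gives 0.82, 0.74, 0.60, 3.18, 9.26, 0.81, 0.56 TeV, as in the table
\end{verbatim}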
If we relax the requirement that the $\kappa_j$ be real and allow them to have an imaginary component as well, the analysis of the effects in mixing and of the violation of the {\sl charge-conjugation and parity} (CP) symmetry in $b\to s \gamma$ decays allows us to derive the allowed regions in the complex planes of the parameters $\kappa_j$. This is shown in Fig.~\ref{fig:complex!} for all coefficients except $\kappa_{LRt}$ and $\kappa_{LL}$, whose imaginary components can be constrained neither through the effects in mixing, to which they contribute only with their real parts, nor through CP violation in $b\to s\gamma$ decays, where their contributions are so small that no constraints are possible. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{newcomplex.pdf}\hspace{1.5cm} \includegraphics[scale=0.5]{complex.pdf} \caption{{\bf Left}: 95\% C.L. (solid line) and 68\% C.L. allowed regions for the parameters $\kappa_{LL}^{\prime(\prime\prime)},\kappa_{LRt}^{\prime}$, obtained from the analysis of $B$ meson mixing. {\bf Right}: 95\% C.L. allowed regions for the parameters $\kappa_{LRb}$ and $\kappa_{RR}$, obtained from the analysis of CP violation in $b\to s\gamma$ decays. The open curves indicate the potential narrowing of the regions upon the improvement of the experimental errors at Super-Belle.} \label{fig:complex!} \end{center} \end{figure} When deriving the bounds from CP violation in $b\to s\gamma$ decays, we also show a projection of how much the allowed regions can be expected to shrink in the future, once the measurements are improved by Super-Belle~\cite{Browder:2008em,Aushev:2010bq}. Furthermore, we can use the results for $\delta C_i$ to analyze some observables that have not yet been measured with a precision sufficient to contribute significantly to constraining the parameters $\kappa_j$. For these observables we can examine the predicted deviations from the SM that are still compatible with the derived bounds. This is shown in Fig.~\ref{fig:predict1!} for the branching ratios $\mathrm{Br}[\bar B_s\to\mu^+\mu^-]$, $\mathrm{Br}[B\to K^{(*)}\nu\bar{\nu}]$ and for the asymmetry $A_{\mathrm{FB}}(q^2)$ in the decays $\bar{B}_d\to \bar{K}^*\ell^+\ell^-$. \begin{figure}[h!] \begin{center} \includegraphics[scale= 0.45]{mumu.pdf}\hspace{0.8cm} \includegraphics[scale=0.42]{BothNUS.pdf}\hspace{0.5cm} \includegraphics[scale= 0.4]{AFBbandLLpp.pdf}\hspace{0.8cm} \caption{{\bf Left, Middle}: Range of the branching ratios as the anomalous parameters $\kappa_j$ are varied within the 95\% C.L. allowed intervals given in Table~\ref{tab:bounds!}. The dotted lines represent the $1\sigma$ theoretical uncertainties of the SM predictions. For the decay to muons we also show the lower experimental bound of the $90\%$ C.L. interval~\cite{Aaltonen:2011fi}, the 95\% C.L. upper bound from LHCb~\cite{Aaij:2012ac} and the latest combined LHC measurement \cite{CMS:Bsmumu}. {\bf Right}: The $A_{\mathrm{FB}}(q^2)$ band obtained when $\kappa_{LL}^{\prime\pr}$ is varied within the 95\% C.L. allowed intervals given in Table~\ref{tab:bounds!}. Also shown are the central value of the SM prediction (black), the band of $1\sigma$ theoretical uncertainty (dashed), and the experimentally measured points with the corresponding errors from Ref.~\cite{Aaij:2011aa}.} \label{fig:predict1!} \end{center} \end{figure} First of all, we can see that the coefficients $\kappa_{LRb}$ and $\kappa_{RR}$ are so strongly constrained that no deviations are to be expected in the decays considered. On the other hand, the latest measurements of the $B_s \to \mu^+ \mu^-$ branching ratio are becoming useful for constraining the remaining parameters. Should the experimental upper bound decrease further, this observable could become an important ingredient in constraining the parameters $\kappa_j$. The analysis of $A_{\mathrm{FB}}(q^2)$ reveals that no substantial deviations from the SM are to be expected in this observable. The plot shown is for $\kappa_{LL}^{\prime\prime}$, whose effects are among the largest; a similar conclusion holds for the other parameters. \subsection{Helicity fractions in top quark decays} Lastly, we present our analysis of the impact of the operators~(\ref{eq:ops1!}) on the helicity fractions in the main decay channel of the top quark.
To this end we introduce the most general form of the effective $tWb$ vertex through the following Lagrangian \begin{eqnarray} \mathcal L_{\mathrm{eff}} = -\frac{g}{\sqrt{2}}\bar{b}\Big[\gamma^{\mu} \big(a_L P_L +a_R P_R\big) -(b_{RL} P_L + b_{LR} P_R)\frac{2\mathrm{i} \sigma^{\mu\nu}}{m_t}q_{\nu} \Big]t W_{\mu}\,,\label{eq:effsimple!} \end{eqnarray} where the anomalous couplings can be related to the parameters $\kappa_j$ \begin{eqnarray} \delta a_L = V_{tb}^* \kappa_{LL}^{(\prime,\prime\pr)*}\,,\hspace{0.3cm} a_R = V_{tb}^{*}\kappa_{RR}^*\,,\hspace{0.3cm} b_{LR} = -\frac{m_t}{2 m_W}V_{tb}^{*}\kappa_{LRt}^{(\prime)}\,,\hspace{0.3cm} b_{RL} = -\frac{m_t}{2 m_W} V_{tb}^* \kappa_{LRb}^*\,.\label{eq:translation!} \end{eqnarray} We include the first-order QCD corrections in the analysis, shown in Fig.~\ref{fig:feyndiags!}. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.6]{anomal2.pdf} \end{center} \caption{Feynman diagrams of the first-order QCD corrections to the main top quark decay channel $t \to W b$. Squares mark the anomalous vertex generated by NP. The cross marks the additional point from which a gluon can be emitted.} \label{fig:feyndiags!} \end{figure} As in the analysis of the indirect effects in $B$ meson physics, here too we (initially) analyze the contribution of only one operator at a time. In addition, we restrict ourselves to real values of the anomalous couplings. Under these conditions $\delta a_L$ has no effect on the helicity fractions, since it accompanies a vertex with the same structure as in the SM and its contributions cancel. For the remaining three anomalous couplings we first analyze the changes in $\mathcal F_+$, shown in Fig.~\ref{fig:F+!}. The results are displayed as bands illustrating the increase of the contributions to $\mathcal F_+$ upon including the first-order QCD corrections. The indirect constraints on the parameters $\kappa_j$ derived in the previous section translate into \begin{eqnarray} -0.0006 \le a_R \le 0.003\,,\hspace{0.5cm} -0.0004 \le b_{RL} \le 0.0016\,,\hspace{0.5cm} -0.14 (-0.29) \le b_{LR} \le 0.08\,.\label{eq:ind_translated!} \end{eqnarray} From Fig.~\ref{fig:F+!} we can conclude that, even taking into account the first-order QCD corrections, values of $\mathcal F_+$ of the order of a percent or more are by no means to be expected. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{LOtoNLOFplusxx.pdf}\hspace{1cm} \includegraphics[scale=0.5]{LOtoNLOFplusx2.pdf} \end{center} \caption{Dependence of $\mathcal F_+$ on the anomalous couplings, which are assumed to be real with only one coupling different from zero at a time. {\bf Left}: Dependence of ${\cal F}_+$ on $a_R$ (blue, dotted), $b_{RL}$ (orange, dashed) and $b_{LR}$ (black, solid). The upper and lower curve of each band correspond to the analysis without and with the first-order QCD corrections. The cross marks the SM prediction. {\bf Right}: Dependence of ${\cal F}_+$ on $b_{LR}$. The dashed curve represents the result without QCD corrections, while the solid curve includes our first-order corrections. Also shown are the SM value given in Eq.~(\ref{eq:e22b!}) and the $95\%$ C.L. allowed intervals for $b_{LR}$ given in Eq.~(\ref{eq:ind_translated!}).} \label{fig:F+!} \end{figure}
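The numbers in Eq.~(\ref{eq:ind_translated!}) follow directly from Eq.~(\ref{eq:translation!}) and Table~\ref{tab:bounds!}; the following sketch shows the arithmetic for $b_{LR}$, taking $m_t=173.2$ GeV, $m_W=80.4$ GeV and $V_{tb}\simeq1$ as assumed inputs.
\begin{verbatim}
# b_LR = -(m_t / 2 m_W) kappa_LRt^(') with V_tb ~ 1
mt, mW = 173.2, 80.4
f = -mt / (2 * mW)

for name, (lo, hi) in {"kappa_LRt": (-0.07, 0.13),
                       "kappa_LRt'": (-0.07, 0.27)}.items():
    b_lo, b_hi = sorted((f * lo, f * hi))
    print(name, round(b_lo, 2), "<= b_LR <=", round(b_hi, 2))
# kappa_LRt  gives -0.14 <= b_LR <= 0.08, and
# kappa_LRt' gives -0.29 <= b_LR <= 0.08, as in Eq. (ind_translated)
\end{verbatim}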
Since $\mathcal F_L$ has been measured with considerably better precision, it is worth examining whether the measured value can serve to set bounds on the size of the anomalous couplings. The dependence of $\mathcal F_L$ on the anomalous couplings is shown in Fig.~\ref{fig:FL!}. The influence of the QCD corrections on $\mathcal F_L$ is negligible. Because of the strong indirect constraints, the anomalous couplings $a_R$ and $b_{RL}$ again cannot significantly affect the value of this fraction. On the other hand, the less constrained $b_{LR}$ can change $\mathcal F_L$ appreciably, as shown in more detail in the right panel of Fig.~\ref{fig:FL!}. We see that the bound derived from top quark decays is in this case comparable to the indirect bounds, and that in the future, if the projected sensitivity of ATLAS is realized, it may even become dominant. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{LOtoNLOFLx.pdf}\hspace{1cm} \includegraphics[scale=0.5]{LOtoNLOFL2.pdf} \end{center} \caption{Dependence of $\mathcal F_L$ on the anomalous couplings, which are assumed to be real with only one coupling different from zero at a time. Also shown are the central value of the Tevatron measurement (dashed) and the corresponding 95\% C.L. interval, as well as the projected future size of the 95\% C.L. interval from ATLAS, based on the Tevatron central value. {\bf Left}: Dependence of ${\cal F}_L$ on $a_R$ (blue, dotted), $b_{RL}$ (orange, dashed) and $b_{LR}$ (black, solid). {\bf Right}: Dependence of $\mathcal F_L$ on $b_{LR}$, together with the $95\%$ C.L. allowed intervals for $b_{LR}$ given in Eq.~(\ref{eq:ind_translated!}).} \label{fig:FL!} \end{figure} If, in addition to the helicity fractions, some other observable of top quark physics were analyzed, the restriction to a single nonzero anomalous coupling could be relaxed. This is precisely what was done in Ref.~\cite{AguilarSaavedra:2011ct}, where, besides the helicity fractions (without QCD corrections), the analysis also included single top quark production, which likewise proceeds through the weak interaction and is sensitive to the anomalous $tWb$ couplings. The authors of Ref.~\cite{AguilarSaavedra:2011ct} analyzed the allowed regions in the planes of pairs of anomalous couplings. It is interesting to set their findings, based exclusively on direct processes, against our findings from the indirect constraints. Since we used more than one observable in deriving the allowed intervals, we can present an analogous analysis in the planes of pairs of anomalous couplings, shown in Fig.~\ref{fig:2d1!}. The gray regions obtained from the direct constraints have the shape of a band because in the $\kappa_{RR}$ and $\kappa_{LRb}$ directions they are much wider than the regions obtained from the indirect constraints, which justifies our contention that for NP in top quark physics the analyses of direct observables must always be set side by side with the indirect analyses in meson physics. \begin{figure}[h!] \begin{center} \includegraphics[scale= 0.6]{CompareVec.pdf}\hspace{0.8cm} \includegraphics[scale=0.6]{CompareDip.pdf} \caption{95\% C.L. allowed regions in various $(\kappa_i,\kappa_j)$ planes. The gray band represents the allowed region obtained from the direct constraints in Ref.~\cite{AguilarSaavedra:2011ct}. The $\kappa_i$ are assumed to be real. {\bf Left}: the $\kappa_{RR}$ -- $\kappa_{LL}$ (solid), $\kappa_{LL}^{\prime}$ (dashed), $\kappa_{LL}^{\prime\pr}$ (dotted) planes. The matching scale is $\mu=2 m_W$. {\bf Right}: the $\kappa_{LRb}$ -- $\kappa_{LRt}$ (solid), $\kappa_{LRt}^{\prime}$ (dashed) planes. The matching scale is varied from $\mu=2 m_W$ (narrower regions) to $\mu=m_W$ (wider regions).} \label{fig:2d1!} \end{center} \end{figure} \section{Conclusions} At the end of the Tevatron era we have already stepped deep into the LHC era, and the hunt for physics beyond the standard model is in full swing. The search for new particles is by no means the only way in which the LHC brings new questions and answers to theoretical particle physics. We hope for a clarification of the flavor puzzle, in which the top quark, with its large mass, plays a leading role. Since the LHC can be regarded as a true top quark factory, the study of top quark physics with high precision is available to us for the first time. The determination of the top quark's parameters and interaction structures can serve as a window into the world of new physics. In this work we have studied various aspects of top quark decays and investigated how NP, parametrized by means of effective theories, can manifest itself in them. On the one hand, we studied the decay widths including the first-order QCD corrections, which is sensible when dealing with quarks and when confronted with the ever-increasing precision of the experimental measurements. We examined the branching ratios of the decays $t\to q \gamma,Z$ and various kinematic observables in the three-body decay $t\to q \ell^+ \ell^-$, as well as the main top quark decay channel $t\to W b$, where we directed our attention to the helicity fractions of the $W$ boson, which are sensitive to the structure of the $tWb$ vertex. On the other hand, we highlighted the role that the top quark plays in meson physics as a virtual particle; certain modifications of top quark physics can consequently affect the theoretical predictions of observables in meson physics. Since no comprehensive analysis of the indirect implications of modified charged currents involving top quarks could be found in the literature, we studied in detail the consequences in $|\Delta B|=2$ and $|\Delta B|=1$ processes. As expected, we were able to place nontrivial indirect constraints on the NP. Regardless of whether or not the LHC reveals new physics to us in top quark decays, future measurements in top quark physics will play an important role in exploring flavor physics and in building or constraining models of physics beyond the standard model.
\section{Introduction} 1E\,1048.1$-$5937{}, one of the original ``anomalous X-ray pulsars'' \citep[AXPs;][]{1995ApJ...442L..17M}, is now classified as part of a small class of pulsars known as magnetars -- neutron stars which display behavior thought to be powered by their immense magnetic fields. For a recent review of magnetars see e.g.\,\cite{2017ARA&A..55..261K} or \cite{2018MNRAS.474..961C}. A list of known magnetars is available at the {\it McGill Online Magnetar Catalog} \citep{2014ApJS..212....6O}\footnote{\url{www.physics.mcgill.ca/\~pulsar/magnetar/main.html}}. 1E\,1048.1$-$5937{} was discovered as a persistent X-ray source, with a pulse period of 6.4\,s, using the {\it Einstein X-ray Observatory} \citep{1986ApJ...305..814S}. In the following decade, 1E\,1048.1$-$5937{} was occasionally observed with various X-ray missions and, by the mid-1990s, it was noticed that the spin-down rate was variable by order unity \citep{1995ApJ...455..598M}. {X-ray flux variability in 1E\,1048.1$-$5937{} was first noted by \citet{2004ApJ...608..427M}.} Starting in 1997, 1E\,1048.1$-$5937{} was monitored regularly with the {\it Rossi X-ray Timing Explorer (RXTE)}, until the decommissioning of {\it RXTE} in 2012 \citep{2001ApJ...558..253K, 2004ApJ...609L..67G, 2014ApJ...784...37D}, and {was monitored on a regular basis \citep{2015ApJ...800...33A} with the {\it Neil Gehrels Swift X-ray Telescope} (XRT) until 2018}. During this long-term monitoring, 1E\,1048.1$-$5937{} has been one of the most active known magnetars. It has exhibited four long-term flux flares, as well as several magnetar-like bursts, and pulse profile changes. Perhaps the most striking behavior in 1E\,1048.1$-$5937{} is the dramatically changing spin-down rate, which seems to occur regularly following its radiative outbursts \citep{2004ApJ...609L..67G, 2014ApJ...784...37D, 2015ApJ...800...33A}. While many magnetars have been shown to have sudden timing changes associated with flux increases \citep[e.g.][]{2012ApJ...750L...6P, 2014ApJ...784...37D}, the repeated observation of an increased and variable torque following each observed flux flare is as yet unexplained \citep{2015ApJ...800...33A}. Counting the 2016 July outburst reported here, 1E\,1048.1$-$5937{} has now repeated this unusual behavior -- an X-ray outburst followed by delayed torque oscillations -- four times, each separated by $\sim$1700 days. Here we report on two X-ray outbursts and subsequent torque variations in 1E\,1048.1$-$5937. The first of these in 2016 July occurred with a delay from the previous outburst consistent with that predicted by \cite{2015ApJ...800...33A}. The second outburst, in 2017 December, does not follow this timescale, however, as we show, it is less energetic than the major outbursts, and decays with a shorter timescale. We also report the results of a new {\it NuSTAR} hard X-ray observation during the 2016 July outburst, wherein 1E\,1048.1$-$5937{} is {detected above 20\,keV,} displaying the hard X-ray tail that is ubiquitous among the magnetar class. \section{Observations} \subsection{{\it Swift} XRT Monitoring} \label{sec:xrt} 1E\,1048.1$-$5937{}{} {was} monitored regularly with the {\it Swift}-XRT since 2011 July as part of a campaign to study several magnetars \citep[see e.g.][]{2014ApJ...783...99S,2015ApJ...800...33A, 2017ApJ...834..163A}. The XRT was operated in Windowed-Timing (WT) mode for all observations, having a time resolution of $1.76\;$ms, and only one dimension of spatial resolution. 
Data were downloaded from the HEASARC \emph{Swift} archive, reduced using the {\tt xrtpipeline} standard reduction script, and time-corrected to the Solar System Barycenter using {\tt HEASOFT v6.22}. Following this, we processed the data in the same manner described by \cite{2017ApJ...834..163A}. Observations, typically 1--1.5~ks long, were taken in groups of three, with the first two observations within approximately 8 hours of each other and the third approximately a day later. This observation strategy was adopted due to the source's prior unstable timing behavior, in which maintaining phase coherence using a longer cadence was only possible for several-month intervals \citep{2001ApJ...558..253K, 2009ApJ...702..614D}. In total, 655 XRT observations totaling 1.0\,Ms of observing time spanning 2011 July through 2018 April were analyzed in this work. \subsection{{\it NuSTAR} Observation} Following the detection of the first new outburst reported in \S\ref{sec:longflux}, we received {\it NuSTAR} Director's Discretionary Time (DDT) to observe 1E\,1048.1$-$5937{} in outburst. The {\it NuSTAR} observation (obsid 90202032002) was taken on 2016 August 5 (MJD 57605) with an exposure time of 55\,ks. {\it NuSTAR} data were reduced using the {\tt nupipeline} scripts, using {\tt HEASOFT v6.20} and time-corrected to the Solar System Barycenter. Source events were extracted within a 1$'$ radius around the centroid. Background regions were selected from the same detector as the source location, and spectra were extracted using the {\tt nuproducts} script. Using {\tt grppha}, channels 0--35 ($<3$ keV) and 1935--4095 ($> 79$ keV) were ignored, and all good channels were binned to have a minimum of one count per energy bin. As shown in Figure~\ref{fig:images}, 1E\,1048.1$-$5937{} is clearly detected across the {\it NuSTAR} band, including at energies above 20\,keV allowing the spectral analysis described in \S\ref{sec:hard}. \begin{figure} \center \includegraphics[width=\columnwidth]{images} \caption{{{\it NuSTAR} X-ray images in various energy bands of 1E\,1048.1$-$5937{}{} in outburst combining data from both focal plane modules.} The images have been smoothed with a Gaussian with a width of 4 pixels {(10\arcsec)}. The position of 1E\,1048.1$-$5937{} is indicated by the dashed purple circle.} \label{fig:images} \end{figure} \section{Flux \& Spectral Evolution} \subsection{Long-term Evolution} \label{sec:longflux} Following the data reduction described in \S\ref{sec:xrt}, we fit the XRT observations using an absorbed blackbody model. $N_\mathrm{H}$ was held constant at $5.8\times 10^{21}$\,cm$^{-2}$, the best-fit value for the source before the 2012 outburst. Observations within one day of each other were grouped for this analysis. Several individual observations, most notably in 2012 November, are significantly elevated from the long-term trend. These are most likely due to catching 1E\,1048.1$-$5937{} during a period of post-burst tail emission lasting several kiloseconds, as reported by \cite{2014ApJ...790...60A}. The long-term light-curve over the XRT campaign is dominated by three outbursts. We fit phenomenological models to the flux decay following each outburst, fixing the baseline flux to that measured before the 2011 December outburst, and fixing the outburst start time to that of the first observation {with} elevated flux. We first fit a single exponential decay, as well as power-law decays, to the flux following each outburst. 
For the first two long outbursts, such single-component models did not adequately describe the data. When fitting two-component models, those consisting of exponentials were statistically preferred over power-law models, using the $\chi^2$ goodness of fit as a metric. The optimal parameters of a two-exponential model for each outburst are shown in Table~\ref{tab:outbursts}. For the 2017 December outburst, as the 2016 July outburst had not yet fully decayed, we subtracted the best-fit model of the latter outburst before fitting. As is evident from Figure~\ref{fig:swift_timing}, by the last observation reported here (2018 April), the effects of the 2017 December outburst have waned, with the last reported fluxes consistent with the extrapolation of the 2016 July outburst. Note that for the two longest outbursts, both the short ($\sim$50-day) and long ($\sim$500-day) exponential timescales are consistent with each other at the 1$\sigma$ level. In addition, the third outburst has a decay timescale consistent with the shorter $\sim$50-day timescale. Also, within the limited available precision, the spectral variations are similar in the three outbursts (see Figure~\ref{fig:swift_timing}). \begin{table} \begin{center} \caption{Characterization of the flux decay during the 2016 and 2017 outbursts of 1E\,1048.1$-$5937{}.} \label{tab:outbursts} \begin{tabular}{c|c|c}\hline $t_b$ & {0.5--10\,keV} Flux Decay Fit$\star$ & $\chi^2_\nu$ \\ \hline MJD & $10^{-11}$ erg\,s$^{-1}$\,cm$^{-2}$ & \\ \hline 55926 &$(1.1\pm0.15)$\,e$^{\frac{-(t-t_b)}{550\pm50}}$+$(1.9\pm0.13)$\,e$^{\frac{-(t-t_b)}{50\pm10}}$ & 1.1 \\ 57592 &$(1.0\pm0.13)$\,e$^{\frac{-(t-t_b)}{440\pm70}}$+$(1.5\pm0.16)$\,e$^{\frac{-(t-t_b)}{51\pm9}}$ & 0.75 \\ 58120 &$\dagger$ $(1.2\pm0.2)$\,e$^{\frac{-(t-t_b)}{62\pm12}}$ & 1.5 \\\hline \end{tabular} $\star$ $t$ and $t_b$ are in units of days. $\dagger$ After subtraction of the flux decay fit from the 2016 July outburst; see \S\ref{sec:longflux}. \end{center} \end{table}
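A minimal sketch of the fitting procedure behind Table~\ref{tab:outbursts} is given below; {\tt t}, {\tt flux} and {\tt err} stand for the observation epochs (MJD), the 0.5--10\,keV fluxes and their uncertainties, and the baseline flux value shown is illustrative rather than the measured one.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

t_b, f_q = 57592.0, 0.35   # outburst start (MJD) and quiescent baseline,
                           # both held fixed; f_q is illustrative

def decay(t, a1, tau1, a2, tau2):
    # two-exponential flux decay on top of the quiescent baseline
    dt = t - t_b
    return f_q + a1 * np.exp(-dt / tau1) + a2 * np.exp(-dt / tau2)

def fit(t, flux, err):
    p0 = (1.0, 500.0, 1.5, 50.0)   # start near the Table 1 values
    popt, pcov = curve_fit(decay, t, flux, sigma=err, p0=p0,
                           absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))
\end{verbatim}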
\begin{figure} \centering \includegraphics[width=0.9\columnwidth]{timing_spec_torque.pdf} \caption{Flux and timing evolution of 1E\,1048.1$-$5937{} over the {\it Swift} campaign. The top panel shows the absorbed 0.5--10\,keV X-ray flux. The purple triangle indicates the time of the {\it NuSTAR} observation. The best-fit models of the flux decays given in Table~\ref{tab:outbursts} are shown as solid colored lines. The second and third panels show the evolution of the blackbody spectral parameters, $kT$ and the radius, assuming a distance of 9\,kpc \citep{2006ApJ...650.1070D}. The light grey points show fits to observations grouped into days, and the black points combine observations into groups of one month. The bottom two panels show the evolution of the spin frequency, $\nu$, after subtraction of the 2011 timing ephemeris, and that of the spin-down rate, $\dot{\nu}$. The dashed vertical lines indicate the start of each outburst.} \label{fig:swift_timing} \end{figure} \subsection{Hard X-rays in Outburst} \label{sec:hard} \begin{table} \begin{center} \caption{{\it NuSTAR} \& {\it Swift}-XRT spectrum of 1E\,1048.1$-$5937{} in outburst.} \label{tab:spec} \begin{tabular}{lc} \multicolumn{2}{c}{Absorbed Blackbody \& Broken Power-law} \\ \hline Parameter & Value \\\hline $N_\mathrm{H}$ ($10^{22}\,\mathrm{cm^{-2}}$) & $3.7\pm0.3$\\ $C_{NuSTAR}\tablenotemark{a} $ & $0.85\pm0.05$\\ $kT_\mathrm{BB}$ (keV) & $0.88\pm0.02$ \\ $\Gamma_S$ & $4.4\pm0.1$ \\ $\Gamma_H$ & $0.5_{-0.2}^{+0.3}$ \\ Break Energy (keV) & $13.4^{+0.6}_{-0.6}$ \\ C-Stat/$\mathrm{dof}$ & 1809.7/1965 \\ Goodness\tablenotemark{b} & $49.7\%$ \\ Flux (0.5--10\,keV)\tablenotemark{c} & $31.2^{+0.7}_{-1.5}$ \\ Flux (3--79\,keV)\tablenotemark{c} & $20^{+1}_{-2}$ \\ Flux (20--79\,keV)\tablenotemark{c} & $4.8^{+1.4}_{-1.2}$ \\\hline \end{tabular} \tablenotetext{\rm a}{Fitted relative normalization for {\it NuSTAR}.} \tablenotetext{\rm b}{Percentage of C-Stat statistic simulation trials from model parameters that are less than the fit statistic.} \tablenotetext{\rm c}{Absorbed flux in units of $10^{-12}\,\mathrm{erg\,cm^{-2}\,s^{-1}}$.} \end{center} \end{table} A {\it NuSTAR} observation was taken approximately 13 days after the {\it Swift}-XRT-detected flux increase, as indicated in Figure~\ref{fig:swift_timing}. We first verified that no short, magnetar-like bursts contaminate the data by conducting a burst search following the method described by \cite{2011ApJ...739...94S}. To constrain the soft X-ray spectrum, we co-fit the {\it NuSTAR} observation with the {\it Swift} observations (observation IDs 00032923252 \& 254) taken on 2016 August 5--8, coincident to within days of the epoch of the {\it NuSTAR} observation. We used Cash statistics \citep{1979ApJ...228..939C} for the fitting and parameter estimation of the unbinned data. $N_\mathrm{H}$ was fit using the {\tt tbabs} model with \texttt{wilm} abundances \citep{2000ApJ...542..914W} and \texttt{vern} photoelectric cross-sections \citep{1996ApJ...465..487V}. 1E\,1048.1$-$5937{} is detected above 20\,keV with a background-subtracted 20--79-keV count rate of $(5.3\pm0.6)\times10^{-3}$ photons per second. The spectrum is well fit by an absorbed blackbody and broken power law; the best-fit parameters are shown in Table~\ref{tab:spec}. Here, $\Gamma_S$ and $\Gamma_H$ refer to the power-law index below and above the break energy, respectively. The X-ray spectrum and residuals are shown in Figure~\ref{fig:spec}. All uncertainties in the spectral parameters are quoted at 90\% confidence. In a 2013 {\it NuSTAR} observation of 1E\,1048.1$-$5937{} in relative quiescence, neither \cite{2015ApJ...815...15W} nor \cite{2016ApJ...831...80Y} found any evidence of X-ray flux from 1E\,1048.1$-$5937{} above 20\,keV, setting a 3\,$\sigma$ upper limit on the total, phase-averaged flux in the 20--79\,keV band of $\sim$3--4$\times 10^{-12}\,\mathrm{erg\,cm^{-2}\,s^{-1}}$, just below our detection of a flux of $4.8^{+1.4}_{-1.2} \times 10^{-12}\,\mathrm{erg\,cm^{-2}\,s^{-1}}$.
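For reference, the phenomenological model of Table~\ref{tab:spec} can be sketched (unabsorbed, with illustrative normalizations) as follows; the blackbody follows the XSPEC-style {\tt bbody} shape, and interstellar absorption is omitted.
\begin{verbatim}
import numpy as np

def blackbody(E, kT=0.88, K=1.0):
    # XSPEC-style bbody: photons / (cm^2 s keV)
    return K * 8.0525 * E**2 / (kT**4 * np.expm1(E / kT))

def broken_powerlaw(E, Gs=4.4, Gh=0.5, Eb=13.4, K=1.0):
    lo = K * E**(-Gs)
    hi = K * Eb**(Gh - Gs) * E**(-Gh)   # continuous at the break
    return np.where(E < Eb, lo, hi)

E = np.geomspace(0.5, 79.0, 400)        # keV
model = blackbody(E) + broken_powerlaw(E)
\end{verbatim}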
\begin{figure} \center \includegraphics[width=\columnwidth]{spec} \caption{X-ray spectra of 1E\,1048.1$-$5937{}{} in outburst. In all panels, the blue data points are from {\it Swift}-XRT, and the purple data points from {\it NuSTAR}. The top panel shows the spectral energy distribution, the middle panel shows the observed spectrum, and the bottom panel displays the residuals of the data relative to the model presented in Table~\ref{tab:spec}.} \label{fig:spec} \end{figure} \subsection{{\it NuSTAR} Pulsed Flux} In Figure~\ref{fig:pulse}, we show the pulse profiles of 1E\,1048.1$-$5937{} during the {\it NuSTAR} observation in units of photons per kilosecond per Focal Plane Module (FPM), folded using the timing solution from the {\it Swift} campaign (see \S\ref{sec:timing}). We calculated the RMS pulsed fraction of 1E\,1048.1$-$5937{} in several energy bands, using the method described in the appendix of \cite{2015ApJ...807...93A}. To determine the significance of the pulsed signal, we used the H-test \citep{1989A&A...221..180D}. Motivated by the spectral break in the power law at $13.4^{+0.6}_{-0.6}$\,keV, we used this value as a fiducial cut to search for a pulsed signal in the hard X-ray band. A pulsed signal is detected up to 20\,keV, and no significant pulsations are seen above this energy. In Table~\ref{tab:pulse} we report the H-test false-alarm probabilities (P$_{FA}$) and pulsed fractions, where upper limits are given at the 99\% confidence level. Due to a paucity of pulsed counts in the hard X-ray band, we can neither comment on the energy dependence of the pulsed fraction nor perform meaningful phase-resolved spectroscopy of 1E\,1048.1$-$5937{}. \begin{figure} \center \includegraphics[width=0.9\columnwidth]{NuSTAR_pulse} \caption{Pulse profiles of 1E\,1048.1$-$5937{}{} in the {\it NuSTAR} observation in various energy bands. In all panels, the black dashed line represents the background count rate, and the red dashed line shows the H-test preferred pulse profile.} \label{fig:pulse} \end{figure} \begin{table} \begin{center} \caption{{\it NuSTAR} Pulsed Flux from 1E\,1048.1$-$5937{}.} \label{tab:pulse} \begin{tabular}{c|c|c}\hline Energy Range &H-test P$_{FA}$ &RMS Pulsed Fraction$\dagger$ \\ \hline keV & & \% \\ \hline 3--7 &0& $51\pm1$ \\ 7--10 &$1\times10^{-47}$& $48\pm3$ \\ 10--13.4 &$1\times10^{-3}$& $28\pm8$ \\ 13.4--20 &0.03& $30\pm10$ \\ 20--79 &0.9& $<80$ \\ \hline \end{tabular} $\dagger$ After background subtraction. \end{center} \end{table} \section{Timing Analysis} \label{sec:timing} The processed individual XRT photons were used to derive a pulse time of arrival (TOA) for each observation. The rotational phase ($\phi_i$) of every photon in an observation was calculated assuming the best prior timing model. The TOAs were created using a Maximum Likelihood (ML) method as described by \cite{2009LivingstoneTiming} and \cite{2012ApJ...761...66S}. These TOAs were fitted to a timing model in which the phase $\phi$ as a function of time $t$ is described by a Taylor expansion: \begin{equation} \phi(t) = \phi_0+\nu_0(t-t_0)+\frac{1}{2}\dot{\nu}_0(t-t_0)^2+\frac{1}{6}\ddot{\nu}_0(t-t_0)^3+\cdots \end{equation} where $\nu$ is the rotational frequency of the pulsar. This was done using the {\tt tempo2} pulsar timing software package \citep{2006MNRAS.369..655H}. As the frequency derivative of 1E\,1048.1$-$5937{} changes by up to an order of magnitude on $\sim$month timescales, we first created overlapping timing solutions with {\tt tempo2} to determine a relative pulse number for each TOA. Then, using the overlapping regions to ensure the same number of rotations in each solution, these solutions were merged, allowing the establishment of absolute pulse numbers throughout the entire {\it Swift} campaign.
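A sketch of the phase calculation underlying the TOA extraction is given below; the spin parameters shown are illustrative values near the 6.4\,s period of 1E\,1048.1$-$5937{}, not a timing solution from this work.
\begin{verbatim}
import numpy as np

nu0, nudot0, nuddot0, t0 = 0.155, -1.0e-12, 0.0, 0.0  # Hz, Hz/s, Hz/s^2, s

def phase(t):
    # Taylor expansion of the pulse phase about the reference epoch t0
    dt = t - t0
    return nu0 * dt + nudot0 * dt**2 / 2 + nuddot0 * dt**3 / 6

# fold barycentered photon arrival times into a pulse profile; the ML
# TOA is then obtained by comparing such a profile to a template
times = np.sort(np.random.default_rng(0).uniform(0, 1.0e4, 5000))
profile, _ = np.histogram(phase(times) % 1.0, bins=16, range=(0, 1))
\end{verbatim}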
In order to determine the local timing behavior of 1E\,1048.1$-$5937{}, we fit splines to these absolute pulse numbers \citep[see][]{splineref}, using a method similar to that described by \cite{2014ApJ...784...37D}, with piecewise polynomials of degree $n=3$ weighted by the inverse square error on the pulse phase. To determine uncertainties, we refit these splines 1000 times after adding Gaussian noise to the pulse numbers according to their measured pulse-phase uncertainties. The resulting spin frequencies and frequency derivatives are shown in Figure~\ref{fig:swift_timing}. The plotted error bars, typically comparable to the size of the points, indicate the 68\% confidence regions. We detected a spin-up glitch coincident with the 2016 July flux increase. As is evident in Figure~\ref{fig:swift_timing}, the timing parameters of 1E\,1048.1$-$5937{} are not stable. To measure the size of the glitch, we fit a simple timing solution in the interval MJD 57400--57668, consisting of $\nu$ and $\dot{\nu}$ as well as a glitch in $\nu$ with the epoch fixed to that of the flux increase. This yields a glitch with $\Delta\nu= 4.47(6)\times10^{-7}$\,Hz ($\Delta\nu/\nu= 2.89(4)\times10^{-6}$). The above epoch bounds were chosen to give a reduced $\chi^2\sim1$ and no visible trends in the residuals. We note that the actual timing evolution is more complicated, as is evident in Figure~\ref{fig:swift_timing}. In the same manner, we also find a glitch coincident with the 2017 December flux increase. Fitting a simple timing solution in the interval MJD 58000--58200 with the epoch fixed to that of the flux increase gives a glitch with $\Delta\nu= 4.32(5)\times10^{-7}$\,Hz ($\Delta\nu/\nu= 2.79(3)\times10^{-6}$). Again, the actual timing evolution is more complicated (Fig.~\ref{fig:swift_timing}). The influence of these glitches on the long-term spin-down of the pulsar is far smaller than the integrated effect of the varying torque: collectively, the two glitches change $\nu$ by $\sim8.8\times10^{-7}$\,Hz, while the spin-down variations have contributed $\sim-2\times10^{-5}$\,Hz.
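A minimal sketch of the spline-based measurement of $\nu(t)$ and $\dot\nu(t)$ described above is shown here, with {\tt t} (in seconds), {\tt N} and {\tt sigma} standing for the TOA epochs, absolute pulse numbers and pulse-phase uncertainties; the smoothing choice is an assumption for illustration.
\begin{verbatim}
import numpy as np
from scipy.interpolate import splrep, splev

def spin_history(t, N, sigma):
    # cubic smoothing spline through the pulse numbers, weighted by the
    # inverse phase errors; derivatives give nu and nudot
    tck = splrep(t, N, w=1.0 / sigma, k=3, s=len(t))
    nu = splev(t, tck, der=1)      # dN/dt = spin frequency
    nudot = splev(t, tck, der=2)   # second derivative = nudot
    return nu, nudot

# uncertainties follow by refitting many times after perturbing N with
# Gaussian noise scaled by sigma, as described in the text
\end{verbatim}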
\section{Discussion} \subsection{Hard X-ray Component} Here we have presented the detection of 1E\,1048.1$-$5937{} at energies above 20\,keV. This, however, is not the first high-energy detection of the source. \citet{2008A&A...477L..29L} detected 1E\,1048.1$-$5937{} at 22--100\,keV with {\it INTEGRAL} during observations of $\eta$ Carinae. Their observation totals 1.1\,Ms and is drawn from several observing epochs, but one of those epochs (MJD 52787--52827) corresponds to the peak of the 2001--2002 outburst of 1E\,1048.1$-$5937{}. This is therefore consistent with the picture of 1E\,1048.1$-$5937{} being bright in hard X-rays during outburst. Hard X-ray emission is ubiquitous among persistently bright magnetars \citep[e.g.][]{2006ApJ...645..556K,2014ApJ...789...75V, 2017ApJ...851...17Y, 2017ApJS..231....8E}. Additionally, in transient magnetars, similar hard X-ray components are observed near epochs of enhanced flux. For example, in the first four days of an outburst of SGR\,0501$+$4516, {\it Suzaku} detected a hard power law with $\Gamma=0.79_{-0.18}^{+0.20}$ \citep{2010ApJ...715..665E} -- similar to the spectrum we have observed in 1E\,1048.1$-$5937{}. Likewise, in SGR\,1935+2154 a hard X-ray component was observed at the peak flux of an outburst \citep{2017ApJ...847...85Y}. Thus the phenomenon of a transient hard X-ray component appearing in outburst seems common for the magnetar class. This hard X-ray emission is thought to be due to a decelerating electron/positron flow in large twisted magnetic loops of the pulsar magnetosphere \citep{2013ApJ...762...13B}. In this picture, the flux evolution of magnetars following outbursts involves the untwisting of the magnetosphere \citep[e.g.][]{2009ApJ...703.1044B, 2013ApJ...774...92P, 2017ApJ...844..133C}. The transient hard X-ray emission we observed in 1E\,1048.1$-$5937{}, and in other magnetars in outburst, is then consistent with this picture, in which hard X-ray emission is only detectable near the peak of the outburst, when the magnetosphere is maximally twisted. We would then generally expect the evolution of the hard X-ray flux to proceed on a timescale similar to that of the soft X-ray flux \citep{2017ApJ...844..133C}. Future systematic hard X-ray observations of magnetars in outburst are needed to put this to the test, although the hard X-ray relaxation of the high-magnetic-field radio pulsar PSR~J1119$-$6127 has recently been shown to proceed on a timescale similar to that of the soft X-ray relaxation post-outburst \citep{2018ApJ...869..180A}. A correlation has been observed between the surface magnetic field (or alternatively the spin-down rate) of a magnetar and its hard X-ray power-law index \citep{2010ApJ...710L.115K, 2017ApJS..231....8E}. Indeed, \cite{2010ApJ...710L.115K} predicted that $\Gamma_H$ for 1E\,1048.1$-$5937{} should fall between 0 and 1, albeit in quiescence. Interestingly, this is in agreement with our measurement of $\Gamma_H = 0.5^{+0.3}_{-0.2}$ in outburst. In \cite{2017ApJS..231....8E}, the ratio of the fluxes in the 15--60\,keV and 1--10\,keV bands is shown to be correlated with the spin-down rate of the magnetar. If we take the quiescent spin-down rate of 1E\,1048.1$-$5937{} ($\sim9\times10^{-12}$\,s\,s$^{-1}$), the predicted hardness ratio for 1E\,1048.1$-$5937{} is $\sim0.4$. We measure $F_{15-60\,\mathrm{keV}} = (3.2\pm0.6)\times10^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$ and $F_{1-10\,\mathrm{keV}} = (31\pm 1)\times10^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$, for a hardness ratio of $0.10\pm 0.02$, which is broadly consistent with the trend, especially given the large fluctuations in $\dot{P}$ observed in 1E\,1048.1$-$5937{} and the scatter in the observed distribution \citep{2017ApJS..231....8E}.
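The quoted hardness ratio is simple error propagation on the two measured fluxes:
\begin{verbatim}
import numpy as np

f_h, df_h = 3.2, 0.6    # F(15-60 keV), in 1e-12 erg/cm^2/s
f_s, df_s = 31.0, 1.0   # F(1-10 keV)
hr = f_h / f_s
dhr = hr * np.hypot(df_h / f_h, df_s / f_s)
print(round(hr, 2), round(dhr, 2))   # 0.10 +/- 0.02
\end{verbatim}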
\begin{figure*} \centering \includegraphics[width=1.9\columnwidth]{long_timing_torque.pdf} \caption{Flux and timing evolution of 1E\,1048.1$-$5937{} over the combined {\it RXTE} and {\it Swift} campaigns. The top panel shows the absorbed 0.5--10\,keV X-ray flux in black, and the {\it RXTE} 2--20\,keV pulsed flux measured with the proportional counter array \citep{1996SPIE.2808...59J} in grey, scaled to match the {\it Swift} total flux. The bottom panel shows the evolution of the spin-down rate, $\dot{\nu}$. In both panels, the dashed vertical lines indicate the start of a flux increase. The {\it RXTE} data are from \cite{2014ApJ...784...37D}, with the timing solutions pre-2000 from \cite{2001ApJ...558..253K}.} \label{fig:long_timing} \end{figure*} \subsection{Repeated outbursts \& torque changes in 1E\,1048.1$-$5937{}} In Figure~\ref{fig:long_timing} we show the last 20 yr of evolution in the X-ray flux and spin-down rate of 1E\,1048.1$-$5937{}, as monitored by {\it RXTE}\footnote{The {\it RXTE} data presented here are reproduced from \cite{2014ApJ...784...37D}, with the timing solutions pre-2000 from \cite{2001ApJ...558..253K}.} and {\it Swift}. Note that the fluxes are in different energy bands (0.5--10\,keV vs. 2--20\,keV), and that the {\it RXTE} fluxes are pulsed only and have been scaled to match the {\it Swift} flux during the period of overlap. The time delay between the 2011 December and the 2016 July outbursts was $1670\pm10$ days. This can be compared to separations of $1800\pm10$ and $1740\pm10$ days between the prior flares, as discussed by \citet{2014ApJ...784...37D} and \citet{2015ApJ...800...33A}. While this outburst timing is consistent with the quasi-periodicity suggested by \cite{2015ApJ...800...33A}, the occurrence of the 2017 December outburst suggests that this repeated timescale is spurious. However, this last outburst decayed on a faster timescale than the major outbursts on which the claimed quasi-periodicity is based -- similar to the precursor flare noted in 2001 \citep[e.g.][]{2008ApJ...677..503T, 2014ApJ...784...37D}. It will be interesting to continue monitoring 1E\,1048.1$-$5937{} to see if there is another outburst on the timescale the quasi-periodicity predicts, i.e. in $\sim$2021. Additionally, the torque variations following the 2016 July outburst follow the trend of decreasing amplitude noted by \cite{2015ApJ...800...33A}. Following the four major outbursts observed thus far, the peak torque reached values of 12.3(1), 7.32(5), 4.4(1), and finally 1.73(1) times the quiescent rate. The monotonic decrease in the amplitude of these unexplained torque variations is curious, as it implies that our monitoring of 1E\,1048.1$-$5937{} started at a special time, perhaps after a major but unobserved event. If the decline continues, by the next outburst the torque variations should be smaller than order unity times the quiescent value. However, the monotonic decrease may also be purely coincidental; further monitoring will be illuminating. While the repetition, and monotonic decline in amplitude, of the torque variations of 1E\,1048.1$-$5937{} are striking and unique, rapid, extreme variability in the torque ($\dot{\nu}$) evolution appears to be a common feature following magnetar outbursts. In addition to that observed now repeatedly in 1E\,1048.1$-$5937{}, similar variations have been observed in 1E\,1547$-$5408 \citep{2012ApJ...748....3D}, PSR\,J1622$-$4950 \citep{2017ApJ...841..126S, 2018ApJ...856..180C}, and XTE\,J1810$-$197 \citep{2016ApJ...820..110C}. Thus, in a large fraction of the magnetar outbursts for which the spin-down rate has been tracked for over a decade, these extreme torque variations are observed, and they can dominate the long-term spin evolution of these sources. In the magnetar model, the increased torque associated with outbursts, just like the enhanced hard X-ray emission, is due to a twist in the magnetosphere \citep[e.g.][]{2002ApJ...574..332T, 2009ApJ...703.1044B}.
As the spin-down rate of the star is dominated by the relatively small number of open field lines, there is no reason for a strict correlation between the hard X-ray emission and spin-down rate, since the relation between the two depends on the geometry of the magnetosphere \citep{2009ApJ...703.1044B, 2017ARA&A..55..261K}. In the untwisting model, the spin-down rate of the star is only affected once the twist reaches an amplitude of $\sim 1$\,radian. The delay between the peak X-ray flux and peak torque of $\sim100$\,days observed in 1E\,1048.1$-$5937{} would then be due to the initial twist not exceeding this threshold value \citep{2009ApJ...703.1044B}. \section{Conclusions} We have presented long-term X-ray observations of 1E\,1048.1$-$5937{}, during which we observed two new outbursts of this source in 2016 July and 2017 December. Associated with these outbursts, we find spin-up glitches having $\Delta\nu/\nu$ of order $10^{-6}$, although the long-term spin evolution is dominated by a strongly fluctuating spin-down rate. We also report a transient hard X-ray component of 1E\,1048.1$-$5937{} observed with {\it NuSTAR} near the peak of the 2016 July outburst, with emission up to $\sim70$\,keV, and pulsed emission observed up to 20~keV. The spectrum and pulse properties of this hard emission are qualitatively consistent with emission models involving cooling of electron/positron pairs in large, twisted magnetic loops in the outer regions of the stellar magnetosphere \citep{2013ApJ...762...13B}. The repeating outbursts and associated large, delayed torque variations, and their possible monotonic decline in amplitude in 1E\,1048.1$-$5937{}, remain, however, puzzling. \acknowledgements R.F.A. acknowledges support from an NSERC Postdoctoral Fellowship. P.S. is a Dunlap Fellow and an NSERC Postdoctoral Fellow. The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto. V.M.K. receives support from an NSERC Discovery Grant and Herzberg Award, the Centre de Recherche en Astrophysique du Qu\'ebec, an R. Howard Webster Foundation Fellowship from the Canadian Institute for Advanced Research, the Canada Research Chairs Program and the Lorne Trottier Chair in Astrophysics and Cosmology. A.P.B. acknowledges funding from the UK Space Agency. The authors thank the operations team of {\it NuSTAR} for approving a rapid turn-around DDT. We thank the {\it Swift} team for approving our ToO requests to monitor 1E\,1048.1$-$5937{}, and other magnetars over the years. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. This work made use of data from the {\it NuSTAR} mission, a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory, and funded by the National Aeronautics and Space Administration. \facilities{{\it NuSTAR, Swift}} \software{\newline {\tt numpy} \citep{2011CSE....13b..22V}, \newline {\tt astropy} \citep{2013A&A...558A..33A,2018AJ....156..123A}, \newline {\tt xspec} \citep{1996ASPC..101...17A}, \newline {\tt heasoft} \citep{2014ascl.soft08004N}}
\section{Introduction} In time-frequency analysis, one studies a signal $\psi \in L^2(\mathbb{R}^d)$ by considering various time-frequency representations of $\psi$. An important class of time-frequency representations is obtained by fixing $\varphi\in L^2(\mathbb{R}^d)$ and considering the \textit{short-time Fourier transform} $V_\varphi \psi$ of $\psi$ with window $\varphi$, which is the function on the time-frequency plane $\mathbb{R}^{2d}$ given by \begin{equation*} V_{\varphi}\psi(z)=\inner{\psi}{\pi(z)\varphi}_{L^2} \quad \text{ for } z\in \mathbb{R}^{2d}, \end{equation*} where $\pi(z):L^2(\mathbb{R}^d)\to L^2(\mathbb{R}^d)$ is the \textit{time-frequency shift} given by $\pi(z)\varphi(t)=e^{2\pi i \omega \cdot t}\varphi(t-x)$ for $z=(x,\omega).$ The intuition is that $V_\varphi \psi(z)$ carries information about the components of the signal $\psi$ with frequency $\omega$ at time $x$. A question going back to von Neumann \cite{vonNeumann:1955} and Gabor \cite{Gabor:1946} is the validity of reconstruction formulas of the form \begin{equation} \label{eq:intro1} \psi = \sum_{\lambda \in \Lambda} V_\varphi \psi(\lambda) \pi(\lambda) \xi \text{ for any } \psi\in L^2(\mathbb{R}^d), \end{equation} where $\Lambda=A\mathbb{Z}^{2d}$ for $A\in GL(2d,\mathbb{R})$ is a lattice in $\mathbb{R}^{2d}$ and $\varphi,\xi \in L^2(\mathbb{R}^d)$. It is known that \eqref{eq:intro1} is indeed true for certain windows $\varphi,\xi$ and lattices $\Lambda$, and such formulas naturally lead to the concept of \textit{Gabor multipliers}. If $\varphi,\xi \in L^2(\mathbb{R}^d)$ and $m=\{m(\lambda)\}_{\lambda \in \Lambda}$ is a sequence of complex numbers, we define the Gabor multiplier $\mathcal{G}_{m}^{\varphi,\xi}:L^2(\mathbb{R}^d) \to L^2(\mathbb{R}^d)$ by \begin{equation*} \mathcal{G}_{m}^{\varphi,\xi}(\psi)=\sum_{\lambda \in \Lambda} m(\lambda)V_\varphi \psi(\lambda) \pi(\lambda) \xi. \end{equation*} Compared to \eqref{eq:intro1} we see that $\mathcal{G}_{m}^{\varphi,\xi}$ modifies the time-frequency content of $\psi$ in a simple way, namely by multiplying the samples of its time-frequency representation with a mask $m$. Gabor multipliers have been studied in the mathematics literature by \cite{Grochenig:2013,Feichtinger:2002,Feichtinger:2003,Grochenig:2011,Dorfler:2010,Benedetto:2006,Feichtinger:1998a,Cordero:2003} among others, and also in more application-oriented contributions \cite{Balazs:2010,Taubock:2019,Rajbamshi:2019}. Gabor multipliers are the discrete analogues of the much-studied localization operators \cite{Daubechies:1990,Cordero:2003,Bayer:2014,Grochenig:2011toft}. In \cite{Luef:2018c} we showed that the quantum harmonic analysis developed by Werner and coauthors \cite{Werner:1984,Kiukas:2012} provides a conceptual framework for localization operators, leading to new results and interesting reinterpretations of older results on localization operators. The goal of this paper is therefore to develop a version of quantum harmonic analysis for lattices to provide a similar conceptual framework for Gabor multipliers. Hence we continue the line of research into applications of quantum harmonic analysis from \cite{Luef:2018c,Luef:2018b,Luef:2018}. With this aim we introduce two convolutions of operators and sequences in Section \ref{sec:convolutions}. 
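Before describing the contents in more detail, we pause to make the central object concrete. The following minimal Python sketch (not part of the formal development) implements the analogue of a Gabor multiplier in the finite setting $\mathbb{C}^N$, with $\mathbb{Z}_N\times \mathbb{Z}_N$ in place of the time-frequency plane and the separable lattice $a\mathbb{Z}_N\times b\mathbb{Z}_N$ in place of $\Lambda$; the parameters $N=144$, $a=12$, $b=6$, the Gaussian window and all function names are illustrative choices of ours, and the finite-dimensional translation of the definitions is an assumption rather than the setting of this paper.

\begin{verbatim}
import numpy as np

N, a, b = 144, 12, 6   # signal length and lattice parameters; a and b divide N

def tf_shift(f, m, n):
    """Finite time-frequency shift pi(m, n) = M_n T_m on C^N."""
    t = np.arange(N)
    return np.exp(2j * np.pi * n * t / N) * np.roll(f, m)

def tf_shift_inv(f, m, n):
    """Inverse shift pi(m, n)^{-1} = T_{-m} M_{-n}."""
    t = np.arange(N)
    return np.roll(np.exp(-2j * np.pi * n * t / N) * f, -m)

def gabor_multiplier(mask, psi, phi, xi):
    """Apply sum_lambda mask(lambda) V_phi psi(lambda) pi(lambda) xi over the lattice."""
    out = np.zeros(N, dtype=complex)
    for i, m in enumerate(range(0, N, a)):
        for j, n in enumerate(range(0, N, b)):
            w = tf_shift(phi, m, n)                       # pi(lambda) phi
            out += mask[i, j] * np.vdot(w, psi) * tf_shift(xi, m, n)
    return out

rng = np.random.default_rng(0)
phi = np.exp(-np.pi * (np.arange(N) - N / 2) ** 2 / N)    # discretized Gaussian window
phi /= np.linalg.norm(phi)
psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
mask = np.ones((N // a, N // b))                          # constant mask

# For a constant mask, conjugation by a lattice shift leaves the operator fixed,
# since it permutes the lattice points and all phase factors cancel:
mu = (a, 2 * b)
lhs = tf_shift(gabor_multiplier(mask, tf_shift_inv(psi, *mu), phi, phi), *mu)
rhs = gabor_multiplier(mask, psi, phi, phi)
print(np.max(np.abs(lhs - rhs)))   # zero up to floating-point rounding
\end{verbatim}

The final check illustrates, in this toy model, the kind of structural statement that the conjugation $S\mapsto \pi(\lambda)S\pi(\lambda)^*$ introduced next makes precise.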
Following \cite{Werner:1984,Kozek:1992a,Feichtinger:1998} we first define the translation of an operator $S$ on $L^2(\mathbb{R}^d)$ by $\lambda\in \Lambda$ to be the operator $$\alpha_\lambda(S)=\pi(\lambda)S\pi(\lambda)^*.$$ If $c\in \ell^1(\Lambda)$ and $S$ is a trace class operator on $L^2(\mathbb{R}^d)$, the convolution $c \star_\Lambda S$ is defined to be the \textit{operator} \begin{equation*} c \star_\Lambda S=\sum_{\lambda \in \Lambda} c(\lambda)\alpha_\lambda(S). \end{equation*} Gabor multipliers are then given by convolutions \begin{equation*} \mathcal{G}_{m}^{\varphi,\xi}=m\star_\Lambda (\xi \otimes \varphi), \end{equation*} where $\xi \otimes \varphi$ is the rank-one operator $\xi \otimes \varphi(\psi)=\inner{\psi}{\varphi}_{L^2} \xi$. Furthermore, we define the convolution $S\star_\Lambda T$ of two trace class operators $S$ and $T$ to be the \textit{sequence} over $\Lambda$ given by \begin{equation*} S\star_\Lambda T(\lambda)=\mathrm{tr}(S\alpha_\lambda (\check{T})), \end{equation*} where $\check{T}=PTP$ with $P$ the parity operator $P\psi(t)=\psi(-t)$ for $\psi \in L^2(\mathbb{R}^d)$. In Section \ref{sec:convolutions} we investigate the commutativity and associativity of these convolutions, extend their domains and in Proposition \ref{prop:youngschatten} we establish a version of Young's inequality for convolutions of operators and sequences. An important tool throughout the paper is a Banach space $\mathcal{B}$ of trace class operators, consisting of operators with Weyl symbol in the so-called Feichtinger algebra \cite{Feichtinger:1981}. The use of $\mathcal{B}$ allows us to obtain continuity results for the convolutions with respect to $\ell^p(\Lambda)$ and Schatten-$p$ classes -- an important example is Proposition \ref{prop:convwelldefined} which states that \begin{equation*} \|S\star_\Lambda T\|_{\ell^1(\Lambda)}\lesssim \|S\|_{\mathcal{B}} \|T\|_{\mathcal{T}} \end{equation*} for $S\in \mathcal{B}$ and trace class $T$, where $\|\cdot\|_\mathcal{T}$ is the trace class norm. While there are other classes of operators that would ensure that $S\star_\Lambda T\in \ell^1(\Lambda)$, see for instance the Schwartz operators \cite{Keyl:2015}, $\mathcal{B}$ has the advantage of being a Banach space, hence allowing the use of tools such as Banach space adjoints. The space $\mathcal{B}$ has previously been studied by \cite{Feichtinger:1998,Dorfler:2010,Feichtinger:2018} among others. To complement the convolutions, we introduce Fourier transforms of sequences and operators in Section \ref{sec:fouriertransforms}. For a sequence $c\in \ell^1(\Lambda)$ we use its symplectic Fourier series \begin{equation*} \mathcal{F}_\sigma^\Lambda(c)(z)=\sum_{\lambda \in \Lambda} c(\lambda)e^{2\pi i \sigma(\lambda,z)} \quad \text{ for } z\in \mathbb{R}^{2d}, \end{equation*} where $\sigma(z,z')=\omega\cdot x'-x\cdot \omega'$ for $z=(x,\omega),z'=(x',\omega').$ As a Fourier transform for trace class operators $S$ we use the Fourier-Wigner transform \begin{equation*} \mathcal{F}_W(S)(z)= e^{-\pi i x\cdot \omega} \mathrm{tr}(\pi(-z)S) \quad \text{ for } z=(x,\omega)\in \mathbb{R}^{2d}. \end{equation*} Equipped with both convolutions and Fourier transforms, we naturally ask whether the Fourier transforms turn convolutions into products. 
We show in Theorem \ref{thm:orthogonality} for $z\in \mathbb{R}^{2d}$ that \begin{equation} \label{eq:introFs} \mathcal{F}_{\sigma}^{\Lambda}(S\star_\Lambda T)(z)=\frac{1}{|\Lambda|} \sum_{\lambda^\circ\in \Lambda^\circ}\mathcal{F}_W(S)(z+\lambda^\circ)\mathcal{F}_W(T)(z+\lambda^\circ), \end{equation} where $\Lambda^\circ$ is the adjoint lattice of $\Lambda$ defined in Section \ref{sec:fouriertransforms}, and in Propositions \ref{prop:spreadinggm} and \ref{prop:spreadinggm2} we show that \begin{equation} \label{eq:introFW} \mathcal{F}_W(c\star_\Lambda S)(z)=\mathcal{F}_\sigma^\Lambda(c)(z)\mathcal{F}_W(S)(z). \end{equation} These results include as special cases the so-called fundamental identity of Gabor analysis \cite{Rieffel:1988,Feichtinger:2006,Tolimieri:1994,Janssen:1995} and results on the spreading function of Gabor multipliers due to \cite{Dorfler:2010}. Equations \eqref{eq:introFs} and \eqref{eq:introFW} hold for general classes of operators and sequences, and we take care to give a precise interpretation of the objects and equalities in all cases. A fruitful approach to Gabor multipliers due to Feichtinger \cite{Feichtinger:2002} is to consider the so-called Kohn-Nirenberg symbol of operators. The Kohn-Nirenberg symbol of an operator $S$ on $L^2(\mathbb{R}^d)$ is a function on $\mathbb{R}^{2d}$, and Feichtinger used this to reduce questions about Gabor multipliers in the Hilbert-Schmidt operators to questions about functions in $L^2(\mathbb{R}^{2d})$. This approach has later been used in other papers on Gabor multipliers \cite{Dorfler:2010,Benedetto:2006,Feichtinger:2003}. As Gabor multipliers are examples of convolutions, we show in Section \ref{sec:riesz} that this approach can be generalized and phrased in terms of our quantum harmonic analysis, and that one of the main results of \cite{Feichtinger:2002} finds a natural interpretation as a Wiener's lemma in our setting -- see Theorem \ref{thm:biorthogonal}, Corollary \ref{cor:banachisomorphism} and the remarks following the corollary. In Section \ref{sec:tauberian} we extend some deeper results of harmonic analysis on $\mathbb{R}^d$ to our setting. We obtain an analogue of Wiener's classical Tauberian theorem in Theorem \ref{thm:bigtauberian}, similar to the results of Werner and coauthors \cite{Werner:1984,Kiukas:2012} in the continuous setting. As an example we have the following equivalent statements for $S\in \mathcal{B}:$ \begin{enumerate}[(i)] \item The set of zeros of $\mathcal{F}_\sigma^\Lambda(S\star_\Lambda \check{S}^*)$ contains no open subsets of $\mathbb{R}^{2d}/\Lambda^\circ$. \item If $c\star_\Lambda S=0$ for $c\in \ell^1(\Lambda)$, then $c=0$. \item $\mathcal{B}' \star_\Lambda S$ is weak*-dense in $\ell^\infty(\Lambda)$. \end{enumerate} These results are related to earlier investigations of Gabor multipliers by Feichtinger \cite{Feichtinger:2002}. In particular, he showed that if $S=\xi\otimes \varphi$ is a rank-one operator and $\mathcal{F}_\sigma^\Lambda(S\star_\Lambda \check{S}^*)$ has \textit{no} zeros, then any $m\in \ell^\infty(\Lambda)$ can be recovered from the Gabor multiplier $\mathcal{G}_{m}^{\varphi,\xi}$. Since Gabor multipliers are given by convolutions, the equivalence (i) $\iff$ (ii) shows that we can recover $m\in \ell^1(\Lambda)$ from $\mathcal{G}_{m}^{\varphi,\xi}$ under the weaker condition (i) -- this holds in particular for finite sequences $m$. Finally, we apply our techniques to prove a version of Wiener's division lemma in Theorem \ref{thm:underspread}.
At the level of Weyl symbols this turns out to reproduce a result by Gr\"ochenig and Pauwels \cite{Grochenig:2014}, but in our context it has the following interpretation: \begin{quote} If $\mathcal{F}_W(S)$ has compact support for some operator $S$, and the support is sufficiently small compared to the density of $\Lambda$, then there exists a sequence $m\in \ell^\infty(\Lambda)$ such that $S=m\star_\Lambda A$ for some $A\in \mathcal{B}$. If $S$ belongs to the Schatten-$p$ class of compact operators, then $m\in \ell^p(\Lambda)$. \end{quote} The above result fits well into the common intuition that operators $S$ with compactly supported $\mathcal{F}_W(S)$ (so-called underspread operators) can be approximated by Gabor multipliers \cite{Dorfler:2010} -- i.e. by operators $c\star_\Lambda T$ where $T$ is a rank-one operator. The result shows that if we allow $T$ to be \textit{any} operator in $\mathcal{B}$, then any underspread operator $S$ is precisely of the form $S=c\star_\Lambda T$ for a sufficiently dense lattice $\Lambda$. We end this introduction by emphasizing the hybrid nature of our setting. In \cite{Werner:1984}, Werner introduced quantum harmonic analysis of functions on $\mathbb{R}^{2d}$ and operators on the Hilbert space $L^2(\mathbb{R}^d)$. We are considering the discrete setting of sequences on a lattice instead of functions on $\mathbb{R}^{2d}$. If we had modified the Hilbert space $L^2(\mathbb{R}^d)$ accordingly, many of our results would follow by the arguments of \cite{Werner:1984}, as already outlined in \cite{Kiukas:2012}. However, we keep the same Hilbert space $L^2(\mathbb{R}^d)$ as in the continuous setting. We are therefore mixing the discrete (lattices) and the continuous ($L^2(\mathbb{R}^d)$), which leads to some extra intricacies. \section{Conventions} By a lattice $\Lambda$ we mean a full-rank lattice in $\mathbb{R}^{2d}$, i.e. $\Lambda=A\mathbb{Z}^{2d}$ for $A\in GL(2d,\mathbb{R})$. The volume of $\Lambda=A\mathbb{Z}^{2d}$ is $|\Lambda|:=|\det(A)|$. For a lattice $\Lambda$, the Haar measure on $\mathbb{R}^{2d}/\Lambda$ will always be normalized so that $\mathbb{R}^{2d}/\Lambda$ has total measure $1$. If $X$ is a Banach space and $X'$ its dual space, the action of $y\in X'$ on $x\in X$ is denoted by the bracket $\inner{y}{x}_{X',X}$, where the bracket is antilinear in the second coordinate to be compatible with the notation for inner products in Hilbert spaces. This means that we are identifying the dual space $X'$ with \textit{anti}linear functionals on $X$. For two Banach spaces $X,Y$ we use $\mathcal{L}(X,Y)$ to denote the Banach space of continuous linear operators from $X$ to $Y$, and if $X=Y$ we simply write $\mathcal{L}(X)$. The notation $P \lesssim Q$ means that there is some $C>0$ such that $P\leq C\cdot Q$. \section{Spaces of operators and functions} \subsection{Time-frequency shifts and the short-time Fourier transform} For $z= (x,\omega)\in \mathbb{R}^{2d}$ we define the \textit{time-frequency shift} operator $\pi(z)$ by \begin{equation*} (\pi(z)\psi)(t)=e^{2\pi i \omega \cdot t}\psi(t-x) \quad \text{ for } \psi \in L^2(\mathbb{R}^d). \end{equation*} Hence $\pi(z)$ can be written as the composition $M_\omega T_x$ of a translation operator $(T_x\psi)(t)=\psi(t-x)$ and a modulation operator $(M_\omega \psi)(t)=e^{2\pi i \omega \cdot t}\psi(t)$. The time-frequency shifts $\pi(z)$ are unitary operators on $L^2(\mathbb{R}^d)$.
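For the reader's convenience we record the standard composition rule for time-frequency shifts, which follows from the easily verified commutation relation $T_x M_\omega = e^{-2\pi i x\cdot \omega}M_\omega T_x$ (see e.g. \cite{Grochenig:2001}):
\begin{equation*}
\pi(z_1)\pi(z_2)=e^{-2\pi i x_1\cdot \omega_2}\,\pi(z_1+z_2) \quad \text{ for } z_i=(x_i,\omega_i)\in \mathbb{R}^{2d},
\end{equation*}
and consequently
\begin{equation*}
\pi(z_1)\pi(z_2)=e^{2\pi i (\omega_1\cdot x_2-x_1\cdot \omega_2)}\,\pi(z_2)\pi(z_1).
\end{equation*}
In particular, time-frequency shifts commute up to a phase factor determined by the standard symplectic form, which reappears in the symplectic Fourier transform below; note also that such phase factors cancel under the conjugation $S\mapsto \pi(z)S\pi(z)^*$ used later to translate operators.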
For $\psi,\varphi\in L^2(\mathbb{R}^d)$ we can use the time-frequency shifts to define the \textit{short-time Fourier transform} $V_\varphi \psi$ of $\psi$ with window $\varphi$ by \begin{equation*} V_\varphi \psi (z)=\inner{\psi}{\pi(z)\varphi}_{L^2} \quad \text{ for } z\in \mathbb{R}^{2d}. \end{equation*} The short-time Fourier transform satisfies an orthogonality condition, sometimes called Moyal's identity \cite{Grochenig:2001,Folland:1989}. \begin{lem}[Moyal's identity] If $\psi_1, \psi_2, \varphi_1, \varphi_2 \in L^2(\mathbb{R}^d)$, then $V_{\varphi_i}\psi_j \in L^2(\mathbb{R}^{2d})$ for $i,j\in \{1,2\}$, and the relation \begin{equation*} \inner{V_{\varphi_1}\psi_1}{V_{\varphi_2}\psi_2}_{L^2}=\inner{\psi_1}{\psi_2}_{L^2}\overline{\inner{\varphi_1}{\varphi_2}}_{L^2} \end{equation*} holds, where the leftmost inner product is in $L^2(\mathbb{R}^{2d})$ and those on the right are in $L^2(\mathbb{R}^d)$. \end{lem} By replacing the inner product in the definition of $V_\varphi \psi$ by a duality bracket, one can define the short-time Fourier transform for other classes of $\psi,\varphi$. The most general case we need is that of a Schwartz function $\varphi \in \mathcal{S}(\mathbb{R}^d)$ and a tempered distribution $\psi \in \mathcal{S}'(\mathbb{R}^d)$; we define \begin{equation*} V_\varphi \psi (z)=\inner{\psi}{\pi(z)\varphi}_{\mathcal{S}',\mathcal{S}} \quad \text{ for } z\in \mathbb{R}^{2d}. \end{equation*} \subsection{Feichtinger's algebra} An appropriate space of functions for our purposes will be Feichtinger's algebra $S_0(\mathbb{R}^d)$, first introduced by Feichtinger in \cite{Feichtinger:1981}. To define $S_0(\mathbb{R}^d)$, let $\varphi_0$ denote the $L^2$-normalized Gaussian $\varphi_0(x)=2^{d/4}e^{-\pi x\cdot x}$ for $x\in \mathbb{R}^d$. Then $S_0(\mathbb{R}^d)$ is the space of all $\psi\in \mathcal{S}'(\mathbb{R}^d)$ such that \begin{equation*} \|\psi\|_{S_0}:=\int_{\mathbb{R}^{2d}} |V_{\varphi_0}\psi(z)| \ dz <\infty. \end{equation*} With the norm above, $S_0(\mathbb{R}^d)$ is a Banach space of continuous functions and an algebra under multiplication and convolution \cite{Feichtinger:1981}. By \cite[Thm. 11.3.6]{Grochenig:2001}, the dual space of $S_0(\mathbb{R}^d)$ is the space $S_0'(\mathbb{R}^d)$ consisting of all $\psi \in \mathcal{S'}(\mathbb{R}^d)$ such that \begin{equation*} \|\psi\|_{S_0'}:=\sup_{z\in \mathbb{R}^{2d}} |V_{\varphi_0}\psi(z)| <\infty, \end{equation*} where an element $\psi\in S_0'(\mathbb{R}^d)$ acts on $\phi\in S_0(\mathbb{R}^d)$ by \begin{equation*} \inner{\phi}{\psi}_{S_0',S_0}=\int_{\mathbb{R}^{2d}} V_{\varphi_0}\phi(z) \overline{V_{\varphi_0}\psi(z)} \ dz. \end{equation*} We get the following chain of continuous inclusions: \begin{equation*} \mathcal{S}(\mathbb{R}^d)\hookrightarrow S_0(\mathbb{R}^d) \hookrightarrow L^2(\mathbb{R}^d) \hookrightarrow S_0'(\mathbb{R}^d) \hookrightarrow \mathcal{S}'(\mathbb{R}^d). \end{equation*} One important reason for using Feichtinger's algebra is that it consists of continuous functions, and that sampling them over a lattice produces a summable sequence \cite[Thm. 7C)]{Feichtinger:1981}. \begin{lem}[Sampling Feichtinger's algebra] \label{lem:s0sampling} Let $\Lambda$ be a lattice in $\mathbb{R}^{2d}$ and $f\in S_0(\mathbb{R}^{2d})$.
Then $f\vert_\Lambda = \{f(\lambda)\}_{\lambda \in \Lambda}\in \ell^1(\Lambda)$ with \begin{equation*} \|f \vert_{\Lambda}\|_{\ell^1} \lesssim \|f\|_{S_0}, \end{equation*} where the implicit constant depends only on the lattice $\Lambda.$ \end{lem} \subsection{The symplectic Fourier transform} We will use the \textit{symplectic Fourier transform} $\mathcal{F}_\sigma f$ of functions $f\in L^1(\mathbb{R}^{2d})$, defined by \begin{equation*} \mathcal{F}_{\sigma}f(z)=\int_{\mathbb{R}^{2d}} f(z') e^{-2 \pi i\sigma(z,z')} \ dz', \end{equation*} where $\sigma$ is the standard symplectic form $\sigma(z,z')=\omega\cdot x'-x\cdot \omega'$ for $z=(x,\omega),z'=(x',\omega').$ $\mathcal{F}_\sigma$ is a Banach space isomorphism $S_0(\mathbb{R}^{2d})\to S_0(\mathbb{R}^{2d})$, and extends to a unitary operator $L^2(\mathbb{R}^{2d})\to L^2(\mathbb{R}^{2d})$ and to a Banach space isomorphism $S_0'(\mathbb{R}^{2d})\to S_0'(\mathbb{R}^{2d})$ \cite[Lem. 7.6.2]{Feichtinger:1998}. In fact, $\mathcal{F}_\sigma$ is its own inverse, so that $\mathcal{F}_\sigma(\mathcal{F}_{\sigma}(f))=f$ for $f\in S_0'(\mathbb{R}^{2d})$ \cite[Prop. 144]{deGosson:2011}. \subsection{Banach spaces of operators on $L^2(\mathbb{R}^d)$} The results of this paper concern operators on various function spaces, and we will pick operators from two kinds of spaces: the Schatten-$p$ classes $\mathcal{T}^p$ for $1\leq p \leq \infty$ and a space $\mathcal{B}$ of operators defined using the Feichtinger algebra. \subsubsection{The Schatten classes} Starting with the Schatten classes, we recall that any compact operator $S$ on $L^2(\mathbb{R}^d)$ has a singular value decomposition \cite[Remark 3.1]{Busch:2016}, i.e. there exist two orthonormal sets $\{\psi_n\}_{n\in \mathbb{N}}$ and $\{\phi_n\}_{n\in \mathbb{N}}$ in $L^2(\mathbb{R}^d)$ and a bounded sequence of positive numbers $\{s_n(S)\}_{n\in \mathbb{N}}$ such that $S$ may be expressed as \begin{equation*} S = \sum\limits_{n \in \mathbb{N}} s_n(S) \psi_n\otimes \phi_n, \end{equation*} with convergence of the sum in the operator norm. Here $\psi \otimes \phi$ for $\psi,\phi\in L^2(\mathbb{R}^d)$ denotes the rank-one operator $\psi\otimes \phi (\xi)=\inner{\xi}{\phi}_{L^2} \psi$. For $1\leq p<\infty$ we define the \textit{Schatten-$p$ class} $\mathcal{T}^p$ of operators on $L^2(\mathbb{R}^d)$ by $$\mathcal{T}^p=\lbrace T\text{ compact}: \{s_n(T)\}_{n\in \mathbb{N}} \in \ell^p\rbrace.$$ To simplify the statement of some results, we also define $\mathcal{T}^{\infty}=\mathcal{L}(L^2)$ with $\|\cdot\|_{\mathcal{T}^\infty}$ given by the operator norm. The Schatten-$p$ class $\mathcal{T}^p$ is a Banach space with the norm $\|S\|_{\mathcal{T}^p}=\left(\sum\limits_{n\in \mathbb{N}} s_n(S)^p\right)^{1/p}$. Of particular interest is the space $\mathcal{T}:=\mathcal{T}^1$ of so-called trace class operators. Given an orthonormal basis $\{e_n\}_{n\in \mathbb{N}}$ of $L^2(\mathbb{R}^d)$, the trace defined by $$\mathrm{tr}(S)=\sum_{n\in \mathbb{N}} \inner{Se_n}{e_n}_{L^2}$$ is a well-defined and bounded linear functional on $\mathcal{T}$, and independent of the orthonormal basis $\{e_n\}_{n\in \mathbb{N}}$ used. The dual space of $\mathcal{T}$ is $\mathcal{L}(L^2)$ \cite[Thm. 3.13]{Busch:2016}, and $T\in \mathcal{L}(L^2)$ defines a bounded antilinear functional on $\mathcal{T}$ by \begin{equation*} \inner{T}{S}_{\mathcal{L}(L^2),\mathcal{T}}=\mathrm{tr}(TS^*) \quad \text{ for } S\in \mathcal{T}.
\end{equation*} Another special case is the space of Hilbert-Schmidt operators $\mathcal{HS}:=\mathcal{T}^2$, which is a Hilbert space with inner product $$\inner{S}{T}_{\mathcal{HS}}=\mathrm{tr}(ST^*).$$ \subsubsection{The Weyl transform and operators with symbol in $S_0(\mathbb{R}^{2d})$} The other class of operators we will use is defined in terms of the \textit{Weyl transform}. We first need the \textit{cross-Wigner distribution} $W(\xi,\eta)$ of two functions $\xi,\eta \in L^2(\mathbb{R}^d)$, defined by {\small \begin{equation*} W(\xi,\eta)(x,\omega)=\int_{\mathbb{R}^d} \xi\left(x+\frac{t}{2}\right)\overline{\eta\left(x-\frac{t}{2}\right)} e^{-2 \pi i \omega \cdot t} \ dt \quad \text{ for } (x,\omega)\in \mathbb{R}^{2d}. \end{equation*} } For $f \in S_0'(\mathbb{R}^{2d})$, we define the \textit{Weyl transform} $L_{f}$ of $f$ to be the operator $L_f:S_0(\mathbb{R}^d)\to S_0'(\mathbb{R}^d)$ given by \begin{equation*} \inner{L_{f}\eta}{\xi}_{S_0',S_0}:=\inner{f}{W(\xi,\eta)}_{S_0',S_0} \quad \text{ for any } \xi,\eta \in S_0(\mathbb{R}^d). \end{equation*} $f$ is called the \textit{Weyl symbol} of the operator $L_{f}$. By the kernel theorem for modulation spaces \cite[Thm. 14.4.1]{Grochenig:2001}, the Weyl transform is a bijection from $S_0'(\mathbb{R}^{2d})$ to $\mathcal{L}(S_0(\mathbb{R}^d),S_0'(\mathbb{R}^d))$. \begin{notation} In particular, any $S\in \mathcal{L}(S_0(\mathbb{R}^d),S_0'(\mathbb{R}^d))$ has a Weyl symbol, and we will denote the Weyl symbol of $S$ by $a_S$. By definition, this means that $L_{a_S}=S$. \end{notation} It is also well-known that the Weyl transform is a unitary mapping from $L^2(\mathbb{R}^{2d})$ to $\mathcal{HS}$ \cite{Pool:1966}. This means in particular that \begin{equation*} \inner{S}{T}_{\mathcal{HS}} = \inner{a_S}{a_T}_{L^2} \quad \text{ for } S,T \in \mathcal{HS}, \end{equation*} which often allows us to reduce statements about Hilbert-Schmidt operators to statements about $L^2(\mathbb{R}^{2d})$. We then define $\mathcal{B}$ to be the Banach space of continuous operators $S:S_0(\mathbb{R}^d)\to S_0'(\mathbb{R}^d)$ such that $a_S\in S_0(\mathbb{R}^{2d})$, with norm $$\|S\|_{\mathcal{B}}:=\|a_S\|_{S_0}.$$ $\mathcal{B}$ consists of trace class operators on $L^2(\mathbb{R}^d)$, and we have a norm-continuous inclusion $\iota:\mathcal{B} \hookrightarrow \mathcal{T}$ \cite{Grochenig:1996,Grochenig:1999}. \begin{exmp} If $\phi,\psi \in L^2(\mathbb{R}^d)$, consider the rank-one operator $\phi\otimes \psi.$ Its Weyl symbol is the cross-Wigner distribution $W(\phi,\psi)$ \cite[Cor. 207]{deGosson:2011}, and $W(\phi,\psi)\in S_0(\mathbb{R}^{2d})$ if and only if $\phi,\psi \in S_0(\mathbb{R}^d)$ \cite[Prop. 365]{deGosson:2011}. The simplest examples of operators in $\mathcal{B}$ are therefore $\phi\otimes \psi$ for $\phi,\psi \in S_0(\mathbb{R}^d)$. \end{exmp} The dual space $\mathcal{B}'$ can also be identified with a Banach space of operators. By definition, $\tau:\mathcal{B} \to S_0(\mathbb{R}^{2d})$ given by $\tau(S)= a_S$ is an isometric isomorphism. Hence the Banach space adjoint $\tau^*: S_0'(\mathbb{R}^{2d})\to \mathcal{B}'$ is also an isomorphism. Since the Weyl transform is a bijection from $S_0'(\mathbb{R}^{2d})$ to $\mathcal{L}(S_0(\mathbb{R}^d),S_0'(\mathbb{R}^d))$, we can identify $\mathcal{B}'$ with operators $S_0(\mathbb{R}^d)\to S_0'(\mathbb{R}^d)$: \begin{equation*} \mathcal{B}'\xleftrightarrow{\ \ \tau^*\ \ } S_0'(\mathbb{R}^{2d}) \xleftrightarrow{\text{Weyl calculus}} \mathcal{L}(S_0(\mathbb{R}^d), S_0'(\mathbb{R}^d)).
\end{equation*} In this paper we will always consider elements of $\mathcal{B}'$ as operators $S_0(\mathbb{R}^d)\to S_0'(\mathbb{R}^d)$ using these identifications. Since $\mathcal{L}(L^2)$ is the dual space of $\mathcal{T}$, the Banach space adjoint $\iota^*:\mathcal{L}(L^2)\to \mathcal{B}'$ is a weak*-to-weak*-continuous inclusion of $\mathcal{L}(L^2)$ into $\mathcal{B}'$. \begin{rem} For more results on $\mathcal{B}$ and $\mathcal{B}'$ we refer to \cite{Feichtinger:1998,Feichtinger:2018}. In particular we mention that we could have defined $\mathcal{B}$ using other pseudodifferential calculi, such as the Kohn-Nirenberg calculus, and still get the same space $\mathcal{B}$ with an equivalent norm. We would also like to point out that the statements of this section may naturally be rephrased using the notion of Gelfand triples, see \cite{Feichtinger:1998}. \end{rem} \subsection{Translation of operators} The idea of translating an operator $S\in \mathcal{L}(L^2)$ by $z\in \mathbb{R}^{2d}$ using conjugation with $\pi(z)$ has been utilized both in physics \cite{Werner:1984} and in time-frequency analysis \cite{Feichtinger:1998,Kozek:1992a}. More precisely, we define for $z\in \mathbb{R}^{2d}$ and $S\in \mathcal{B}'$ the translation of $S$ by $z$ to be the operator $$\alpha_z(S)=\pi(z)S\pi(z)^*.$$ We will also need the operation $S\mapsto \check{S}=PSP$, where $P$ is the parity operator $(P\psi)(t)=\psi(-t)$ for $\psi \in L^2(\mathbb{R}^d)$. The main properties of these operations are listed below; note in particular that part $(i)$ supports the intuition that $\alpha_z$ is a translation of operators. See Lemmas 3.1 and 3.2 in \cite{Luef:2018c} for the proofs. \begin{lem}\label{lem:translation} Let $S\in \mathcal{B}'$. \begin{enumerate}[(i)] \item If $a_S$ is the Weyl symbol of $S$, then the Weyl symbol of $\alpha_z(S)$ is $T_z (a_S).$ \item $\alpha_z(\alpha_{z'}(S))=\alpha_{z+z'}(S).$ \item The operations $\alpha_z$, $^*$ and $\check{\ }$ are isometries on $\mathcal{B}, \mathcal{B}'$ and $\mathcal{T}^p$ for $1\leq p \leq \infty$. \item $(S^*)\widecheck{\ }= (\check{S})^*$. \end{enumerate} \end{lem} By the last part we can unambiguously write $\check{S}^*$. \section{Convolutions of sequences and operators} \label{sec:convolutions} In \cite{Werner:1984}, the convolution of a function $f\in L^1(\mathbb{R}^{2d})$ and an operator $S\in \mathcal{T}$ was defined by the operator-valued integral \begin{equation*} f\star S = \int_{\mathbb{R}^{2d}} f(z) \alpha_z(S) \ dz \end{equation*} and the convolution of two operators $S,T \in \mathcal{T}$ was defined to be the \textit{function} \begin{equation*} S\star T(z)=\mathrm{tr}(S \alpha_z(\check{T})) \quad \text{ for } z\in \mathbb{R}^{2d}. \end{equation*} These definitions, along with a Fourier transform defined for operators, have been shown to produce a theory of quantum harmonic analysis with non-trivial consequences for topics such as quantum measurement theory \cite{Kiukas:2012} and time-frequency analysis \cite{Luef:2018c}. The setting where $\mathbb{R}^{2d}$ is replaced by some lattice $\Lambda \subset \mathbb{R}^{2d}$ is frequently studied in time-frequency analysis, and our goal is therefore to develop a theory of convolutions and Fourier transforms of operators in that setting.
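Before passing to lattices, it is worth spelling out one instance of these continuous convolutions; the following routine computation is included only for orientation. For a rank-one operator $\varphi\otimes \varphi$ with $\varphi \in L^2(\mathbb{R}^d)$ one has $\alpha_z(\varphi\otimes \varphi)=(\pi(z)\varphi)\otimes (\pi(z)\varphi)$, and hence for $f\in L^1(\mathbb{R}^{2d})$
\begin{equation*}
\left(f\star (\varphi\otimes \varphi)\right)\psi=\int_{\mathbb{R}^{2d}} f(z)\inner{\psi}{\pi(z)\varphi}_{L^2}\, \pi(z)\varphi \ dz = \int_{\mathbb{R}^{2d}} f(z) V_\varphi \psi(z)\, \pi(z)\varphi \ dz \quad \text{ for } \psi \in L^2(\mathbb{R}^d),
\end{equation*}
which is precisely the localization operator with symbol $f$ and window $\varphi$ mentioned in the introduction. The convolutions over $\Lambda$ defined next replace this integral by a sum over the lattice.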
For a sequence $c\in \ell^1(\Lambda)$ and $S \in \mathcal{T}$, we define the operator \begin{equation} \label{eq:convseqop} c \star_\Lambda S:= S\star_{\Lambda} c:= \sum_{\lambda \in \Lambda} c(\lambda) \alpha_{\lambda}(S), \end{equation} and for operators $S\in \mathcal{B}$ and $T\in \mathcal{T}$ we define the sequence \begin{equation} \label{eq:convopop} S\star_\Lambda T(\lambda)= S\star T(\lambda) \quad \text{ for } \lambda \in \Lambda. \end{equation} Hence $S\star_\Lambda T$ is the \textit{sequence} obtained by restricting the \textit{function} $S\star T$ to $\Lambda$. \begin{rem} We use the same notation $\star_\Lambda$ for the convolution of an operator and a sequence and for the convolution of two operators. The correct interpretation of $\star_\Lambda$ will always be clear from the context. \end{rem} Since $\alpha_\lambda$ is an isometry on $\mathcal{T}$ and $\mathcal{B}$, $c\star_\Lambda S$ is well-defined with $\|c\star_\Lambda S\|_{\mathcal{T}}\leq \|c\|_{\ell^1} \|S\|_{\mathcal{T}}$ for $S\in \mathcal{T}$ and similarly $\|c\star_\Lambda S\|_{\mathcal{B}}\leq \|c\|_{\ell^1} \|S\|_{\mathcal{B}}$ for $S\in \mathcal{B}$. The fact that $S\star_\Lambda T$ is a well-defined and summable sequence on $\Lambda$ is less straightforward. \begin{prop} \label{prop:convwelldefined} If $S\in \mathcal{B}$ and $T\in \mathcal{T}$, then $S\star_\Lambda T\in \ell^1(\Lambda)$ with $\|S\star_\Lambda T\|_{\ell^1}\lesssim \|S\|_{\mathcal{B}}\|T\|_{\mathcal{T}}$. \end{prop} \begin{proof} By \cite[Thm. 8.1]{Luef:2018c} we know that $S\star T\in S_0(\mathbb{R}^{2d})$ with $\|S\star T\|_{S_0}\lesssim \|S\|_{\mathcal{B}} \|T\|_\mathcal{T}.$ Hence the result follows from Lemma \ref{lem:s0sampling} and $S\star_\Lambda T(\lambda)= S\star T(\lambda)$. \end{proof} \subsection{Gabor multipliers and sampled spectrograms} If we consider rank-one operators, these convolutions reproduce well-known objects from time-frequency analysis. First consider the rank-one operator $\xi_1 \otimes \xi_2$ for $\xi_1,\xi_2\in L^2(\mathbb{R}^d)$. The operators $c\star_\Lambda (\xi_1 \otimes \xi_2)$ are well-known in time-frequency analysis as \textit{Gabor multipliers} \cite{Feichtinger:2002,Feichtinger:2003,Benedetto:2006,Dorfler:2010}: it is simple to show that \begin{equation*} \alpha_{\lambda} (\xi_1 \otimes \xi_2)=(\pi(\lambda)\xi_1) \otimes (\pi(\lambda)\xi_2), \end{equation*} so if $c\in \ell^1(\Lambda)$ it follows from the definition \eqref{eq:convseqop} that $c\star_\Lambda (\xi_1\otimes \xi_2)$ acts on $\psi \in L^2(\mathbb{R}^d)$ by \begin{equation}\label{eq:gabormultiplier} c\star_\Lambda (\xi_1\otimes \xi_2)\psi =\sum_{\lambda \in \Lambda} c(\lambda)V_{\xi_2}\psi(\lambda)\pi(\lambda)\xi_1, \end{equation} which is the definition of the Gabor multiplier $\mathcal{G}_c^{\xi_2,\xi_1}$ used in time-frequency analysis \cite{Feichtinger:2003}, i.e. $\mathcal{G}_c^{\xi_2,\xi_1}=c\star_\Lambda (\xi_1\otimes \xi_2)$. \begin{rem} In this sense, operators of the form $c\star_\Lambda S$ are a generalization of Gabor multipliers. We mention that this is a different generalization from the \textit{multiple Gabor multipliers} introduced in \cite{Dorfler:2010}. 
\end{rem} If we pick another rank-one operator $\check{\varphi_1}\otimes \check{\varphi_2}$ for $\varphi_1,\varphi_2\in L^2(\mathbb{R}^d)$ (here $\check{\varphi}(t)=\varphi(-t)$), one can calculate using the definition \eqref{eq:convopop} that \begin{equation} \label{eq:tworankone0} (\xi_1 \otimes \xi_2)\star_\Lambda (\check{\varphi_1}\otimes \check{\varphi_2})(\lambda)=V_{\varphi_2} \xi_1(\lambda)\overline{V_{\varphi_1}\xi_2(\lambda)}. \end{equation} In particular, if $\varphi_1=\varphi_2=\varphi$ and $\xi_1=\xi_2=\xi$, then \begin{equation} \label{eq:tworankone} (\xi\otimes \xi) \star_\Lambda (\check{\varphi}\otimes \check{\varphi})(\lambda)=|V_\varphi \xi(\lambda)|^2. \end{equation} The function $|V_\varphi \xi (z)|^2$ is the so-called spectrogram of $\xi$ with window $\varphi$, hence $(\xi\otimes \xi) \star_\Lambda (\check{\varphi}\otimes \check{\varphi})$ consists of samples of the spectrogram over $\Lambda$. Finally, if $S\in \mathcal{T}$ is any operator, then one may calculate that \begin{equation} \label{eq:generalwithrankone} S\star_\Lambda (\check{\varphi_1}\otimes \check{\varphi_2})(\lambda)=\inner{S\pi(\lambda)\varphi_1}{\pi(\lambda)\varphi_2}_{L^2}, \end{equation} often called the lower symbol of $S$ with respect to $\varphi_1,\varphi_2$ and $\Lambda$ \cite{Feichtinger:2002}. \begin{rem} In particular, Proposition \ref{prop:convwelldefined} does not hold for all $S\in \mathcal{T}$. By Remark 4.6 in \cite{Benedetto:2006}, there exists a function $\psi \in L^2(\mathbb{R})$ such that $$\sum_{(m,n)\in \mathbb{Z}^2} (\psi \otimes \psi)\star_{\mathbb{Z}^2} (\check{\psi} \otimes \check{\psi})(m,n)=\sum_{(m,n)\in \mathbb{Z}^2} |V_\psi \psi (m,n)|^2 =\infty.$$ Since $\psi \otimes \psi,\check{\psi}\otimes \check{\psi} \in \mathcal{T}$, this shows that the assumption $S\in \mathcal{B}$ in Proposition \ref{prop:convwelldefined} is necessary. \end{rem} \subsection{Associativity and commutativity of convolutions} Since the convolution $S\star T$ of two operators $S,T\in \mathcal{T}$ is commutative in the continuous setting \cite[Prop. 3.2]{Werner:1984}, it follows from the definitions that the convolutions \eqref{eq:convseqop} and \eqref{eq:convopop} are commutative. It is also a straightforward consequence of the definitions that the convolutions are bilinear. In the original theory of Werner \cite{Werner:1984}, the associativity of the convolution operations is of fundamental importance. Associativity still holds in some cases when moving from $\mathbb{R}^{2d}$ to $\Lambda$, but we will later see in Corollary \ref{cor:noassociativity} that the convolution of three operators over a lattice is not associative in general. In what follows, $c\ast_\Lambda d$ denotes the usual convolution of sequences \begin{equation*} c\ast_\Lambda d(\lambda)=\sum_{\lambda'\in \Lambda} c(\lambda')d(\lambda-\lambda'). \end{equation*} \begin{prop}[Associativity] Let $c,d\in \ell^1(\Lambda)$, $S \in \mathcal{B}$ and $T\in \mathcal{T}$. Then \begin{enumerate}[(i)] \item $c \ast_\Lambda (S\star_\Lambda T)=(c\star_\Lambda S)\star_\Lambda T$, \item $(c\ast_\Lambda d)\star_\Lambda T=c\star_\Lambda(d\star_\Lambda T)$.
\end{enumerate} \end{prop} \begin{proof} For the proof of $(i)$, we write out the definitions of the convolutions and use the commutativity $S\star_\Lambda T=T\star_\Lambda S$ to get \begin{align*} c\ast_\Lambda (S\star_\Lambda T)(\lambda)&= c\ast_\Lambda (T\star_\Lambda S)(\lambda) \\ &= \sum_{\lambda'\in \Lambda} c(\lambda') \mathrm{tr}(T\alpha_{\lambda-\lambda'}(\check{S})) \\ &= \mathrm{tr}\left(T \sum_{\lambda'\in \Lambda} c(\lambda') \alpha_{\lambda-\lambda'}(\check{S}) \right) \\ &= \mathrm{tr}\left(T \alpha_{\lambda} \left(\sum_{\lambda'\in \Lambda} c(\lambda') \alpha_{-\lambda'}(\check{S})\right) \right) \quad \text{ by Lemma \ref{lem:translation}} \\ &= \mathrm{tr}\left(T \alpha_{\lambda} \left(P \sum_{\lambda'\in \Lambda} c(\lambda') \alpha_{\lambda'}(S) P \right) \right) \\ &= T\star_\Lambda (c\star_\Lambda S) \quad \text{ by \eqref{eq:convseqop} and \eqref{eq:convopop}}\\ &=(c\star_\Lambda S) \star_\Lambda T \quad \text{ by commutativity}. \end{align*} We have used the easily checked relation $\alpha_{-\lambda'} (\check{S})=P\alpha_{\lambda'} (S) P$. For the second part, we find that \begin{align*} (c\ast_\Lambda d)\star_\Lambda T&= \sum_{\lambda\in \Lambda} (c\ast_\Lambda d)(\lambda) \alpha_\lambda(T) \\ &= \sum_{\lambda\in \Lambda} \sum_{\lambda'\in \Lambda} c(\lambda') d(\lambda-\lambda')\alpha_\lambda(T) \\ &= \sum_{\lambda'\in \Lambda} c(\lambda') \sum_{\lambda \in \Lambda} d(\lambda-\lambda')\alpha_\lambda(T) \\ &= \sum_{\lambda'\in \Lambda} c(\lambda') \alpha_{\lambda'} (d\star_\Lambda T) =c \star_\Lambda (d\star_\Lambda T). \end{align*} To pass to the last line we have used the relation $\alpha_{\lambda '} (d\star_\Lambda T)=\sum_{\lambda} d(\lambda-\lambda ')\alpha_\lambda (T)$, which is easily verified. \end{proof} \begin{rem} Part $(ii)$ of this result along with the trivial estimate $\|c\star_\Lambda T\|_{\mathcal{T}}\leq \|c\|_{\ell^1} \|T\|_\mathcal{T}$ shows that $\mathcal{T}$ is a \textit{Banach module} (see \cite{Graven:1974}) over $\ell^1(\Lambda)$ if we define the action of $c\in \ell^{1}(\Lambda)$ on $T\in \mathcal{T}$ by $c\star_\Lambda T$. The same proofs also show that this is true when $\mathcal{T}$ is replaced by $\mathcal{B}$ or any Schatten class $\mathcal{T}^p$ for $1\leq p \leq \infty$. \end{rem} \begin{exmp} Let $\varphi,\xi \in L^2(\mathbb{R}^d)$ and $c\in \ell^1(\Lambda)$, and define $S=\xi\otimes \xi$ and $T=\check{\varphi}\otimes \check{\varphi}$. If we use \eqref{eq:tworankone} to simplify $S \star_\Lambda T$ and \eqref{eq:generalwithrankone} to simplify $(c\star_\Lambda S)\star_\Lambda T$, the first part of the result above becomes \begin{equation} \label{eq:cnn} c\ast_\Lambda |V_\varphi \xi|^2(\lambda)=\inner{(c\star_\Lambda \xi\otimes \xi)\pi(\lambda)\varphi}{\pi(\lambda)\varphi}_{L^2}. \end{equation} In words, the convolution of a sequence $c$ with samples of a spectrogram $|V_\varphi \xi|^2$ can be described using the action of a Gabor multiplier $c\star (\xi\otimes \xi)$. In applications of convolutional neural networks to audio processing, one often considers the spectrogram of an audio signal as the input to the network. Convolutions of sequences with samples of spectrograms therefore appear naturally in such networks, and the connection \eqref{eq:cnn} has been exploited in this context -- see the proof of \cite[Thm. 1]{Dorfler:2018}. 
\end{exmp} \subsection{Young's inequality} The convolutions in \eqref{eq:convseqop} and \eqref{eq:convopop} can be defined for more general sequences and operators by establishing a version of Young's inequality \cite[Thm. 1.2.1]{Grochenig:2001}. In the continuous case such an inequality was established by Werner \cite{Werner:1984} using the $L^p$-norms of functions and Schatten-$p$-norms of operators. In the discrete case, it is not always possible to use the Schatten-$p$-norms, since Proposition \ref{prop:convwelldefined} requires $S\in \mathcal{B}$. We will therefore always require that one of the operators belongs to $\mathcal{B}$. A Young's inequality for Schatten classes can then be established by first extending the domains of the convolutions by duality. If $S\in \mathcal{B}$ and $c\in \ell^\infty(\Lambda)$, we define $c\star_\Lambda S\in \mathcal{L}(L^2)$ by \begin{equation} \label{eq:dualconvolutions} \inner{c\star_{\Lambda} S}{R}_{\mathcal{L}(L^2),\mathcal{T}}:=\inner{c}{ R \star_{\Lambda} \check{S}^* }_{\ell^\infty,\ell^1} \text{ for any } R\in \mathcal{T}, \end{equation} and if $S\in \mathcal{B}$ and $T\in \mathcal{L}(L^2)=\mathcal{T}^\infty$ we define $T\star_\Lambda S\in \ell^\infty(\Lambda)$ by \begin{equation} \label{eq:dualconvolutions2} \inner{T\star_\Lambda S}{c}_{\ell^\infty(\Lambda),\ell^1(\Lambda)}:=\inner{T}{c\star_{\Lambda} \check{S}^*}_{\mathcal{L}(L^2),\mathcal{T}} \text{ for any } c\in \ell^1(\Lambda). \end{equation} It is a simple exercise to show that these formulas define elements of $\mathcal{L}(L^2)$ and $\ell^\infty(\Lambda)$ satisfying $\|c\star_{\Lambda} S\|_{\mathcal{L}(L^2)}\lesssim \|c\|_{\ell^\infty}\|S\|_{\mathcal{B}}$ and $\|T\star_\Lambda S\|_{\ell^\infty}\leq \|T\|_{\mathcal{L}(L^2)}\|S\|_\mathcal{B}$, and that they agree with \eqref{eq:convseqop} and \eqref{eq:convopop} when $c\in \ell^1 (\Lambda)$ or $T\in \mathcal{T}$. A standard (complex) interpolation argument then gives the following result, since $(\ell^1(\Lambda),\ell^{\infty}(\Lambda))_{\theta}=\ell^p(\Lambda)$ and $(\mathcal{T}^1,\mathcal{T}^{\infty})_{\theta}=\mathcal{T}^{p}$ with $\frac{1}{p}=1-\theta$ \cite{Bergh:1976}. For Gabor multipliers the second part of this result is well-known \cite[Thm. 5.4.1]{Feichtinger:2003}, and a weaker version of the first part is known for $p=1,2,\infty$ \cite[Thm. 5.8.3]{Feichtinger:2003}. \begin{prop}[Young's inequality] \label{prop:youngschatten} Let $S\in \mathcal{B}$ and $1\leq p \leq \infty$. \begin{enumerate}[(i)] \item If $T\in \mathcal{T}^p$, then $\|T\star_\Lambda S\|_{\ell^p}\lesssim \|T\|_{\mathcal{T}^p}\|S\|_{\mathcal{B}}$. \item If $c\in \ell^p(\Lambda)$, then $\|c\star_\Lambda S\|_{\mathcal{T}^p}\lesssim \|c\|_{\ell^p}\|S\|_{\mathcal{B}}$. \end{enumerate} \end{prop} \begin{rem} If $1\in \ell^\infty(\Lambda)$ is given by $1(\lambda)=1$ for any $\lambda$, then Feichtinger observed in \cite[Thm. 5.15]{Feichtinger:2002} that $\phi\in S_0(\mathbb{R}^d)$ generates a so-called tight Gabor frame if and only if the Gabor multiplier $1\star_\Lambda (\phi\otimes \phi)$ is the identity operator $I$ in $\mathcal{L}(L^2)$. A similar result holds in the more general case: if $S\in \mathcal{B}$, then $1\star_\Lambda S^*S=I$ if and only if $S$ generates a tight \textit{Gabor g-frame}, recently introduced in \cite{Skrettingland:2019}.
\end{rem} We may also use duality to define the convolution $T\star_\Lambda S\in \ell^\infty(\Lambda)$ of $S\in \mathcal{B}$ with $T\in \mathcal{B}'$ by \begin{equation} \label{eq:dualconvolutions3} \inner{T\star_\Lambda S}{c}_{\ell^\infty,\ell^1}:=\inner{T}{c\star_{\Lambda} \check{S}^*}_{\mathcal{B}',\mathcal{B}} \text{ for any } c\in \ell^1(\Lambda), \end{equation} which agrees with \eqref{eq:dualconvolutions2} when $T\in \mathcal{L}(L^2) \subset \mathcal{B}'$ and satisfies $\|S\star_\Lambda T\|_{\ell^\infty}\leq \|S\|_\mathcal{B} \|T\|_{\mathcal{B}'}$. We end this section by showing that the space $c_0(\Lambda)$ of sequences vanishing at infinity corresponds to compact operators under convolutions with $S\in \mathcal{B}$. The second part of this statement is due to Feichtinger \cite[Thm. 5.15]{Feichtinger:2002} for the special case of Gabor multipliers. \begin{prop} Let $S\in \mathcal{B}.$ If $T$ is a compact operator, then $T\star_\Lambda S\in c_0(\Lambda)$. If $c\in c_0(\Lambda),$ then $c\star_\Lambda S$ is a compact operator on $L^2(\mathbb{R}^d)$. \end{prop} \begin{proof} By \cite[Prop. 4.6]{Luef:2018c}, the \textit{function} $T\star S$ belongs to the space $C_0(\mathbb{R}^{2d})$ of continuous functions vanishing at infinity. Since $T\star_\Lambda S$ is simply the restriction of $T\star S$ to $\Lambda$, it follows that $T\star_\Lambda S\in c_0(\Lambda)$. For the second part, let $c_N$ be the sequence \begin{equation*} c_N(\lambda)=\begin{cases} c(\lambda) \text{ if } |\lambda|<N \\ 0 \text{ otherwise.} \end{cases} \end{equation*} Then $c_N\star_\Lambda S=\sum_{|\lambda|<N} c(\lambda) \alpha_\lambda(S)$ is a compact operator for each $N\in \mathbb{N}$, and by Proposition \ref{prop:youngschatten} and the bilinearity of convolutions \begin{equation*} \|c\star_\Lambda S - c_N \star_\Lambda S\|_{\mathcal{L}(L^2)}\leq \|c-c_N\|_{\ell^\infty} \|S\|_{\mathcal{B}} \to 0 \text{ as } N\to \infty. \end{equation*} Hence $c\star_\Lambda S$ is the limit in the operator norm of compact operators, and is therefore itself compact. \end{proof} \section{Fourier transforms} \label{sec:fouriertransforms} In \cite{Werner:1984}, Werner observed that if one defines a Fourier transform of an operator $S\in \mathcal{T}$ to be the function \begin{equation*} \mathcal{F}_W(S)(z):= e^{-\pi i x\cdot \omega} \mathrm{tr}(\pi(-z)S) \quad \text{ for } z=(x,\omega)\in \mathbb{R}^{2d}, \end{equation*} then the formulas \begin{align} \label{eq:FTofconvolutionscontinuous} &\mathcal{F}_W(f\star S)=\mathcal{F}_\sigma(f) \mathcal{F}_W(S), && \mathcal{F}_\sigma(S\star T)=\mathcal{F}_W(S)\mathcal{F}_W(T) \end{align} hold for $f\in L^1(\mathbb{R}^{2d})$ and $S,T\in \mathcal{T}$. The transform $\mathcal{F}_W$, called the \textit{Fourier-Wigner transform} (or the Fourier-Weyl transform \cite{Werner:1984}), is an isomorphism $\mathcal{F}_W:\mathcal{B} \to S_0(\mathbb{R}^{2d})$, can be extended to a unitary map $\mathcal{F}_W:\mathcal{HS}\to L^2(\mathbb{R}^{2d})$, and to an isomorphism $\mathcal{F}_W:\mathcal{B}' \to S_0'(\mathbb{R}^{2d})$ by defining $\mathcal{F}_W(S)$ for $S\in \mathcal{B}'$ by duality \cite[Cor. 7.6.3]{Feichtinger:1998}: \begin{equation} \label{eq:fwdual} \inner{\mathcal{F}_W(S)}{f}_{S_0',S_0}:= \inner{S}{\rho(f)}_{\mathcal{B}',\mathcal{B}}\quad \text{ for any }f\in S_0(\mathbb{R}^{2d}). \end{equation} Here $\rho:S_0(\mathbb{R}^{2d})\to \mathcal{B}$ is the inverse of $\mathcal{F}_W$.
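Restricted to rank-one operators, the unitarity of $\mathcal{F}_W:\mathcal{HS}\to L^2(\mathbb{R}^{2d})$ amounts to Moyal's identity, by the formula for $\mathcal{F}_W$ of a rank-one operator recorded below. As a sanity check on these conventions, the following minimal Python sketch verifies the finite-dimensional analogue of Moyal's identity on $\mathbb{C}^N$; the discretization (the DFT sign convention and the factor $N$ standing in for the normalization of the measure) is an illustrative assumption of ours and not part of the formal setup.

\begin{verbatim}
import numpy as np

# Finite analogue of Moyal's identity on C^N:
#   sum_{m,n in Z_N} V_{phi1}psi1(m,n) * conj(V_{phi2}psi2(m,n))
#       = N * <psi1, psi2> * conj(<phi1, phi2>)
N = 64
rng = np.random.default_rng(1)

def stft(psi, phi):
    # Row m contains n -> V_phi psi(m,n) = sum_t psi(t) conj(phi(t-m)) e^{-2 pi i n t/N};
    # np.fft.fft uses exactly this sign convention.
    return np.array([np.fft.fft(psi * np.conj(np.roll(phi, m))) for m in range(N)])

psi1, psi2, phi1, phi2 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)
                          for _ in range(4))

lhs = np.sum(stft(psi1, phi1) * np.conj(stft(psi2, phi2)))
rhs = N * np.vdot(psi2, psi1) * np.conj(np.vdot(phi2, phi1))
print(abs(lhs - rhs))   # zero up to floating-point rounding
\end{verbatim}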
In fact, $\mathcal{F}_W$ and the Weyl transform are related by a symplectic Fourier transform: for any $S\in \mathcal{B}'$ we have \begin{equation*} \mathcal{F}_W(S)=\mathcal{F}_\sigma (a_S), \end{equation*} where $a_S$ is the Weyl symbol of $S$. As an important special case, the Fourier-Wigner transform of a rank-one operator $\phi\otimes \psi$ is \begin{equation} \label{eq:fwrankone} \mathcal{F}_W(\phi\otimes \psi)(x,\omega)=e^{\pi i x\cdot\omega}V_\psi \phi(x,\omega). \end{equation} Since we have defined convolutions of operators and sequences, it is natural to ask whether a version of \eqref{eq:FTofconvolutionscontinuous} holds in our setting. We start by defining a suitable Fourier transform of sequences. \subsubsection*{Symplectic Fourier series} For the purposes of this paper, we identify the dual group $\widehat{\mathbb{R}^{2d}}$ with $\mathbb{R}^{2d}$ by the bijection $\mathbb{R}^{2d} \ni z\mapsto \chi_z\in \widehat{\mathbb{R}^{2d}}$, where $\chi_z$ is the \textit{symplectic} character\footnote{Phase space, which in this paper is $\mathbb{R}^{2d}$, is more properly described by (the isomorphic) space $\mathbb{R}^d\times \widehat{\mathbb{R}^d}$. The symplectic characters appear because they are the natural way of identifying the group $\mathbb{R}^d\times \widehat{\mathbb{R}^d}$ with its dual group.} $\chi_{z}(z')=e^{2\pi i \sigma(z,z')}$. Given a lattice $\Lambda \subset \mathbb{R}^{2d}$, it follows that the dual group of $\Lambda$ is identified with $\mathbb{R}^{2d}/\Lambda^\circ$ (see \cite[Prop. 3.6.1]{Deitmar:2014}), where $\Lambda^\circ$ is the annihilator group \begin{align*} \Lambda^\circ&=\{\lambda^\circ \in \mathbb{R}^{2d} : \chi_{\lambda^\circ}(\lambda)=1 \text{ for any } \lambda \in \Lambda\} \\ &= \{\lambda^\circ \in \mathbb{R}^{2d} : e^{2\pi i \sigma(\lambda^\circ,\lambda)}=1 \text{ for any } \lambda \in \Lambda\}. \end{align*} The group $\Lambda^\circ$ is itself a lattice, namely the so-called \textit{adjoint lattice} of $\Lambda$ from \cite{Feichtinger:1998,Rieffel:1988}. Given this identification of the dual group of $\Lambda$, the Fourier transform of $c\in \ell^1(\Lambda)$ is the symplectic Fourier series \begin{equation*} \mathcal{F}_\sigma^\Lambda(c)(\dot{z}):=\sum_{\lambda \in \Lambda} c(\lambda)e^{2\pi i \sigma(\lambda,z)}. \end{equation*} Here $\dot{z}$ denotes the image of $z\in \mathbb{R}^{2d}$ under the natural quotient map $\mathbb{R}^{2d}\to \mathbb{R}^{2d}/\Lambda^\circ$, so $\mathcal{F}_\sigma^\Lambda(c)$ is a function on $\mathbb{R}^{2d}/\Lambda^\circ$. If we denote by $A(\mathbb{R}^{2d}/\Lambda^\circ)$ the Banach space of functions on $\mathbb{R}^{2d}/\Lambda^\circ$ with symplectic Fourier coefficients in $\ell^1(\Lambda)$, the Feichtinger algebra has the following property \cite[Thm. 7 B)]{Feichtinger:1981}. \begin{lem} \label{lem:s0periodization} If $\Lambda$ is a lattice, the \textit{periodization operator} $P_\Lambda:S_0(\mathbb{R}^{2d})\to A(\mathbb{R}^{2d}/\Lambda)$ defined by $$P_\Lambda(f)(\dot{z})=|\Lambda|\sum_{\lambda \in \Lambda} f(z+\lambda)\quad \text{ for } z\in \mathbb{R}^{2d}$$ is continuous and surjective. \end{lem} \begin{rem} \begin{enumerate}[(i)] \item Since $|\Lambda^\circ|=\frac{1}{|\Lambda|}$ \cite[Lem. 7.7.4]{Feichtinger:1998}, we have $$P_{\Lambda^\circ}(f)(\dot{z})=\frac{1}{|\Lambda|}\sum_{\lambda^\circ \in \Lambda^\circ} f(z+\lambda^\circ).$$ \item One may define Feichtinger's algebra $S_0(G)$ for any locally compact abelian group $G$ \cite{Feichtinger:1981}.
In fact, all our function spaces besides $L^2(\mathbb{R}^d)$ are examples of Feichtinger's algebra, since $S_0(\Lambda)=\ell^1(\Lambda)$ and $S_0(\mathbb{R}^{2d}/\Lambda^\circ)=A(\mathbb{R}^{2d}/\Lambda^\circ).$ \end{enumerate} \end{rem} When we identify the dual group of $\Lambda$ with $\mathbb{R}^{2d}/\Lambda^\circ$, the Poisson summation formula for functions in $S_0(\mathbb{R}^{2d})$ takes the following form. \begin{thm}[Poisson summation] \label{thm:poisson} Let $\Lambda$ be a lattice in $\mathbb{R}^{2d}$ and assume that $f\in S_0(\mathbb{R}^{2d})$. Then \begin{equation*} \frac{1}{|\Lambda|} \sum_{\lambda^\circ \in \Lambda^\circ} f(z+\lambda^\circ) = \sum_{\lambda \in \Lambda} \mathcal{F}_\sigma(f)(\lambda)e^{2\pi i \sigma(\lambda,z)}\text{ for } z\in \mathbb{R}^{2d}. \end{equation*} \end{thm} \begin{proof} This is \cite[Thm. 3.6.3]{Deitmar:2014} with $A=\mathbb{R}^{2d}$, $B=\Lambda^\circ$ and using $(\Lambda^\circ)^\circ=\Lambda.$ To get equality for any $z\in \mathbb{R}^{2d}$, we use that $\sum_{\lambda^\circ \in \Lambda^\circ} f(z+\lambda^\circ)$ defines a continuous function on $\mathbb{R}^{2d}/\Lambda^\circ$ by Lemma \ref{lem:s0sampling}. \end{proof} Since $\mathcal{F}_\sigma^\Lambda$ is a Fourier transform, it extends to a unitary mapping $\mathcal{F}_\sigma^\Lambda:\ell^2(\Lambda)\to L^2(\mathbb{R}^{2d}/\Lambda^\circ)$ satisfying \begin{equation} \label{eq:fourierseriesofconvolution} \mathcal{F}_\sigma^\Lambda(c\ast_\Lambda d)=\mathcal{F}_\sigma^\Lambda(c)\mathcal{F}_\sigma^\Lambda(d) \end{equation} for $c\in \ell^1(\Lambda)$ and $d\in \ell^2(\Lambda)$. \subsection{The Fourier transform of $S\star_\Lambda T$} We now consider a version of \eqref{eq:FTofconvolutionscontinuous} for sequences. The formula for $\mathcal{F}_\sigma^\Lambda(S\star_\Lambda T)$ is a simple consequence of the Poisson summation formula. \begin{thm} \label{thm:orthogonality} Let $S\in \mathcal{B}$ and $T\in \mathcal{T}$. Then \begin{align*} \mathcal{F}_{\sigma}^{\Lambda}(S\star_\Lambda T)(\dot{z})&=\frac{1}{|\Lambda|} \sum_{\lambda^\circ\in \Lambda^\circ}\mathcal{F}_W(S)(z+\lambda^\circ)\mathcal{F}_W(T)(z+\lambda^\circ)\\ &=P_{\Lambda^\circ}(\mathcal{F}_W(S)\mathcal{F}_W(T))(\dot{z}) \end{align*} for any $z\in \mathbb{R}^{2d}.$ \end{thm} \begin{proof} From \cite[Thm. 8.2]{Luef:2018c}, we know that $S\star T\in S_0(\mathbb{R}^{2d})$. Hence $\mathcal{F}_\sigma(S\star T)=\mathcal{F}_W(S)\mathcal{F}_W(T)\in S_0(\mathbb{R}^{2d})$ since $\mathcal{F}_\sigma:S_0(\mathbb{R}^{2d})\to S_0(\mathbb{R}^{2d})$ is an isomorphism. By applying Poisson's summation formula from Theorem \ref{thm:poisson} to $f=\mathcal{F}_W(S)\mathcal{F}_W(T)$, we find that {\footnotesize \begin{align*} \frac{1}{|\Lambda|} \sum_{\lambda^\circ \in \Lambda^\circ} \mathcal{F}_W(S)(z+\lambda^\circ)\mathcal{F}_W(T)(z+\lambda^\circ) &= \sum_{\lambda \in \Lambda} \mathcal{F}_\sigma (\mathcal{F}_W(S)\mathcal{F}_W(T))(\lambda)e^{2\pi i \sigma(\lambda,z)}\\ &= \sum_{\lambda \in \Lambda} S\star_\Lambda T(\lambda)e^{2\pi i \sigma(\lambda,z)}, \end{align*} } where we used that $\mathcal{F}_\sigma$ is its own inverse to conclude that $$\mathcal{F}_\sigma(\mathcal{F}_W(S)\mathcal{F}_W(T))(\lambda)=\mathcal{F}_\sigma(\mathcal{F}_\sigma(S\star T))(\lambda)=S\star T(\lambda)=S\star_\Lambda T(\lambda).$$ Since $\mathcal{F}_W(S)\mathcal{F}_W(T)\in S_0(\mathbb{R}^{2d})$, Theorem \ref{thm:poisson} says that the equation holds for any $z\in \mathbb{R}^{2d}$. \end{proof} \begin{rem} Theorem \ref{thm:orthogonality} has also been proved and used in \cite[Cor.
A.3]{Lesch:2016} in noncommutative geometry, with stronger assumptions on $S,T$. \end{rem} Theorem \ref{thm:orthogonality} has many interesting special cases. We will frequently refer to the following version, which follows since a short calculation using the definition of the Fourier-Wigner transform shows that \begin{equation} \label{eq:fwcheckadjoint} \mathcal{F}_W(\check{S^*})(z)=\overline{\mathcal{F}_W(S)(z)}. \end{equation} \begin{cor} \label{cor:orthogonalabsolute} Let $S\in \mathcal{B}.$ Then \begin{equation*} \mathcal{F}_\sigma^\Lambda (S\star_\Lambda \check{S^*})(\dot{z})=\frac{1}{|\Lambda|}\sum_{\lambda^\circ \in \Lambda^\circ} |\mathcal{F}_W(S)(z+\lambda^\circ)|^2 \quad \text{ for any } z\in \mathbb{R}^{2d}. \end{equation*} \end{cor} \begin{cor} \label{cor:sumofconvolution} Let $S\in \mathcal{B}$ and $T\in \mathcal{T}$. Then \begin{equation*} \sum_{\lambda\in \Lambda} S\star_\Lambda T(\lambda)=\frac{1}{|\Lambda|} \sum_{\lambda^\circ \in \Lambda^\circ} \mathcal{F}_W(S)(\lambda^\circ)\mathcal{F}_W(T)(\lambda^\circ). \end{equation*} \end{cor} \begin{proof} This follows from Theorem \ref{thm:orthogonality} with $z=0$. \end{proof} Now assume that $S$ and $T$ are rank-one operators: $S=\xi_1 \otimes \xi_2$ for $\xi_1, \xi_2 \in S_0(\mathbb{R}^d)$ and $T=\check{\varphi_1}\otimes \check{\varphi_2}$ for $\varphi_1, \varphi_2 \in L^2(\mathbb{R}^d)$. By \eqref{eq:tworankone0}, $$S\star_\Lambda T(\lambda)=V_{\varphi_2} \xi_1(\lambda)\overline{V_{\varphi_1}\xi_2(\lambda)},$$ and noting that $T=\check{T_0}^*$ for $T_0=\varphi_2 \otimes \varphi_1$, we can use \eqref{eq:fwrankone} and \eqref{eq:fwcheckadjoint} to find \begin{align*} \mathcal{F}_W(S)(z)&=e^{\pi i x\cdot\omega}V_{\xi_2} \xi_1(z) \\ \mathcal{F}_W(T)(z)&=e^{-\pi i x\cdot\omega}\overline{V_{\varphi_1}\varphi_2(z)}. \end{align*} Hence Theorem \ref{thm:orthogonality} says that \begin{equation*} \mathcal{F}_{\sigma}^{\Lambda}(V_{\varphi_2} \xi_1\overline{V_{\varphi_1}\xi_2}\vert_\Lambda )(\dot{z})=\frac{1}{|\Lambda|} \sum_{\lambda^\circ\in \Lambda^\circ}V_{\xi_2} \xi_1(z+\lambda^\circ)\overline{V_{\varphi_1}\varphi_2(z+\lambda^\circ)}. \end{equation*} Furthermore, Corollary \ref{cor:sumofconvolution} gives \begin{equation*} \sum_{\lambda\in \Lambda}V_{\varphi_2} \xi_1(\lambda)\overline{V_{\varphi_1}\xi_2(\lambda)}=\frac{1}{|\Lambda|} \sum_{\lambda^\circ\in \Lambda^\circ}V_{\xi_2} \xi_1(\lambda^\circ)\overline{V_{\varphi_1}\varphi_2(\lambda^\circ)}, \end{equation*} which is the \textit{fundamental identity of Gabor analysis} \cite{Feichtinger:2006,Tolimieri:1994,Janssen:1995,Rieffel:1988}. \subsection{The Fourier transform of $c\star_\Lambda S$} When $c\in \ell^1(\Lambda)$, we obtain the expected formula for $\mathcal{F}_W(c\star_\Lambda S)$. \begin{prop} \label{prop:spreadinggm} If $c\in \ell^1(\Lambda)$ and $S\in \mathcal{T}$, then \begin{equation*} \mathcal{F}_W(c\star_\Lambda S)(z)=\mathcal{F}_\sigma^\Lambda (c)(\dot{z})\mathcal{F}_W(S)(z) \quad \text{ for } z\in \mathbb{R}^{2d}. \end{equation*} \end{prop} \begin{proof} One easily verifies the formula $$\mathcal{F}_W(\alpha_\lambda (S))(z)=e^{2\pi i \sigma(\lambda,z)}\mathcal{F}_W(S)(z),$$ showing that the Fourier transform of a translation is a modulation. Hence \begin{align*} \mathcal{F}_W(c\star_\Lambda S)(z)&= \sum_{\lambda\in \Lambda} c(\lambda) \mathcal{F}_W(\alpha_\lambda (S))(z) \\ &= \sum_{\lambda\in \Lambda} c(\lambda) e^{2\pi i \sigma(\lambda,z)}\mathcal{F}_W(S)(z) \\ &= \mathcal{F}_W(S)(z) \sum_{\lambda\in \Lambda} c(\lambda) e^{2\pi i \sigma(\lambda,z)}.
\subsection{The Fourier transform of $c\star_\Lambda S$} When $c\in \ell^1(\Lambda)$, we obtain the expected formula for $\mathcal{F}_W(c\star_\Lambda S)$. \begin{prop} \label{prop:spreadinggm} If $c\in \ell^1(\Lambda)$ and $S\in \mathcal{T}$, then \begin{equation*} \mathcal{F}_W(c\star_\Lambda S)(z)=\mathcal{F}_\sigma^\Lambda (c)(\dot{z})\mathcal{F}_W(S)(z) \quad \text{ for } z\in \mathbb{R}^{2d}. \end{equation*} \end{prop} \begin{proof} One easily verifies the formula $$\mathcal{F}_W(\alpha_\lambda (S))(z)=e^{2\pi i \sigma(\lambda,z)}\mathcal{F}_W(S)(z),$$ showing that the Fourier transform of a translation is a modulation. Hence \begin{align*} \mathcal{F}_W(c\star_\Lambda S)(z)&= \sum_{\lambda\in \Lambda} c(\lambda) \mathcal{F}_W(\alpha_\lambda (S))(z) \\ &= \sum_{\lambda\in \Lambda} c(\lambda) e^{2\pi i \sigma(\lambda,z)}\mathcal{F}_W(S)(z) \\ &= \mathcal{F}_W(S)(z) \sum_{\lambda\in \Lambda} c(\lambda) e^{2\pi i \sigma(\lambda,z)}. \end{align*} To move $\mathcal{F}_W$ inside the sum, we use that the sum $\sum_{\lambda \in \Lambda} c(\lambda)\alpha_\lambda(S)$ converges absolutely in $\mathcal{T}$, and $\mathcal{F}_W$ is continuous from $\mathcal{T}$ to $L^\infty(\mathbb{R}^{2d})$ by the Riemann-Lebesgue lemma for $\mathcal{F}_W$ \cite[Prop. 6.6]{Luef:2018c}. \end{proof} \subsubsection{Technical intermezzo} Let $A'(\mathbb{R}^{2d}/\Lambda^\circ)$ denote the dual space of $A(\mathbb{R}^{2d}/\Lambda^\circ)$, consisting of distributions on $\mathbb{R}^{2d}/\Lambda^\circ$ with symplectic Fourier coefficients in $\ell^\infty(\Lambda).$ To understand the statement in Proposition \ref{prop:spreadinggm} when $c\in \ell^\infty(\Lambda)$, we need to `extend' distributions in $A'(\mathbb{R}^{2d}/\Lambda^\circ)$ to distributions in $S_0'(\mathbb{R}^{2d})$. When $f\in A(\mathbb{R}^{2d}/\Lambda^\circ)$ this is achieved by \begin{equation*} A(\mathbb{R}^{2d}/\Lambda^\circ)\ni f\mapsto f\circ q \in S_0'(\mathbb{R}^{2d}), \end{equation*} where $q:\mathbb{R}^{2d}\to \mathbb{R}^{2d}/\Lambda^\circ$ is the natural quotient map. To extend this map to distributions $f\in A'(\mathbb{R}^{2d}/\Lambda^\circ)$, one can use Weil's formula \cite[(6.2.11)]{Grochenig:1998} to show that for $f\in A(\mathbb{R}^{2d}/\Lambda^\circ)$ and $g\in S_0(\mathbb{R}^{2d})$ one has \begin{equation*} \inner{f\circ q}{g}_{S_0',S_0}=\inner{f}{P_{\Lambda^\circ}g}_{A'(\mathbb{R}^{2d}/\Lambda^\circ),A(\mathbb{R}^{2d}/\Lambda^\circ)}. \end{equation*} This shows that the map $f\mapsto f\circ q$ agrees with the Banach space adjoint $P_{\Lambda^\circ}^*:A'(\mathbb{R}^{2d}/\Lambda^\circ)\to S_0'(\mathbb{R}^{2d})$ for $f\in A(\mathbb{R}^{2d}/\Lambda^\circ)$. The natural way to extend $f\in A'(\mathbb{R}^{2d}/\Lambda^\circ)$ is therefore to consider $P_{\Lambda^\circ}^*f\in S_0'(\mathbb{R}^{2d})$, and by an abuse of notation we will use $f$ to also denote the extension $P_{\Lambda^\circ}^*f$ -- by definition this means that when $f\in A'(\mathbb{R}^{2d}/\Lambda^\circ)$ is considered an element of $S_0'(\mathbb{R}^{2d})$, it satisfies for $g\in S_0(\mathbb{R}^{2d})$ \begin{equation} \label{eq:periodicextension} \inner{f}{g}_{S_0',S_0}=\inner{f}{P_{\Lambda^\circ}g}_{A'(\mathbb{R}^{2d}/\Lambda^\circ),A(\mathbb{R}^{2d}/\Lambda^\circ)}. \end{equation} We also remind the reader that for $c\in \ell^\infty(\Lambda)$ one defines $\mathcal{F}_\sigma^\Lambda(c)$ as an element of $A'(\mathbb{R}^{2d}/\Lambda^\circ)$ by \begin{equation} \label{eq:fsigmadual} \inner{\mathcal{F}_\sigma^\Lambda(c)}{g}_{A'(\mathbb{R}^{2d}/\Lambda^\circ),A(\mathbb{R}^{2d}/\Lambda^\circ)}:=\inner{c}{(\mathcal{F}_\sigma^\Lambda)^{-1}(g)}_{\ell^\infty(\Lambda),\ell^1(\Lambda)}, \end{equation} where $(\mathcal{F}_\sigma^\Lambda)^{-1}(g)$ are the symplectic Fourier coefficients of $g$. This is \cite[Example 6.8]{Jakobsen:2018} for the group $G=\mathbb{R}^{2d}/\Lambda^\circ$. Finally, recall that we can multiply $f\in S_0'(\mathbb{R}^{2d})$ with $g\in S_0(\mathbb{R}^{2d})$ to obtain an element $fg\in S_0'(\mathbb{R}^{2d})$ given by \begin{equation} \label{eq:productdual} \inner{fg}{h}_{S_0',S_0}:= \inner{f}{\overline{g}h}_{S_0',S_0} \quad \text{ for } h\in S_0(\mathbb{R}^{2d}). \end{equation}
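As a simple sanity check of these definitions, consider the Kronecker delta $c=\delta_{\lambda,\lambda_0}\in \ell^1(\Lambda)\subset \ell^\infty(\Lambda)$ for fixed $\lambda_0\in \Lambda$. Both the symplectic Fourier series and \eqref{eq:fsigmadual} give the character \begin{equation*} \mathcal{F}_\sigma^\Lambda(c)(\dot{z})=e^{2\pi i \sigma(\lambda_0,z)}, \end{equation*} whose extension to $S_0'(\mathbb{R}^{2d})$ is just the corresponding bounded continuous function on $\mathbb{R}^{2d}$. Since $c\star_\Lambda S=\alpha_{\lambda_0}(S)$, Proposition \ref{prop:spreadinggm} then reduces to the translation-modulation formula used in its proof.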
\subsubsection{The case $c\in \ell^\infty(\Lambda)$} The technical intermezzo allows us to make sense of the following generalization of Proposition \ref{prop:spreadinggm}. Recall in particular that $\mathcal{F}_\sigma^\Lambda (c)$ is shorthand for the distribution $P_{\Lambda^\circ}^* (\mathcal{F}_\sigma^\Lambda (c))\in S_0'(\mathbb{R}^{2d})$. \begin{prop} \label{prop:spreadinggm2} If $c\in \ell^\infty(\Lambda)$ and $S\in \mathcal{B}$, then \begin{equation*} \mathcal{F}_W(c\star_\Lambda S)=\mathcal{F}_\sigma^\Lambda (c)\mathcal{F}_W(S) \quad \text{ in } S_0'(\mathbb{R}^{2d}). \end{equation*} \end{prop} \begin{proof} For $h\in S_0(\mathbb{R}^{2d})$, we get from \eqref{eq:fwdual}, \eqref{eq:dualconvolutions} and \eqref{eq:fsigmadual} (in that order) \begin{align*} \inner{\mathcal{F}_W(c\star_\Lambda S)}{h}_{S_0',S_0}&=\inner{c\star_\Lambda S}{\rho(h)}_{\mathcal{B}',\mathcal{B}} \\ &= \inner{c}{\rho(h) \star_\Lambda \check{S}^*}_{\ell^\infty(\Lambda),\ell^1(\Lambda)} \\ &=\inner{\mathcal{F}_\sigma^\Lambda(c)}{\mathcal{F}_\sigma^\Lambda(\rho(h) \star_\Lambda \check{S}^* )}_{A'(\mathbb{R}^{2d}/\Lambda^\circ),A(\mathbb{R}^{2d}/\Lambda^\circ)}. \end{align*} By Theorem \ref{thm:orthogonality} we find using \eqref{eq:fwcheckadjoint} that \begin{equation*} \mathcal{F}_\sigma^\Lambda(\rho(h) \star_\Lambda \check{S}^*)= P_{\Lambda^\circ} (\overline{\mathcal{F}_W(S)} h), \end{equation*} where we also used that $\rho$ is the inverse of $\mathcal{F}_W$. On the other hand we find using \eqref{eq:productdual} and \eqref{eq:periodicextension} that \begin{align*} \inner{\mathcal{F}_\sigma^\Lambda (c)\mathcal{F}_W(S)}{h}_{S_0',S_0}&= \inner{\mathcal{F}_\sigma^\Lambda(c)}{\overline{\mathcal{F}_W(S)}h}_{S_0',S_0} \\ &= \inner{\mathcal{F}_\sigma^\Lambda(c)}{P_{\Lambda^\circ} (\overline{\mathcal{F}_W(S)}h)}_{A'(\mathbb{R}^{2d}/\Lambda^\circ),A(\mathbb{R}^{2d}/\Lambda^\circ)}. \end{align*} Hence $\inner{\mathcal{F}_\sigma^\Lambda (c)\mathcal{F}_W(S)}{h}_{S_0',S_0}=\inner{\mathcal{F}_W(c\star_\Lambda S)}{h}_{S_0',S_0}$, which implies the statement. \end{proof} \begin{rem} For Gabor multipliers $c\star_\Lambda (\psi\otimes \psi)$, Propositions \ref{prop:spreadinggm} and \ref{prop:spreadinggm2} were proved in \cite[Lem. 14]{Dorfler:2010}, and have been used in the theory of convolutional neural networks \cite{Dorfler:2018}. \end{rem} \section{Riesz sequences of translated operators in $\mathcal{HS}$} \label{sec:riesz} Two of the useful properties of the Weyl transform $f\mapsto L_f$ are that it is a unitary transformation from $L^2(\mathbb{R}^{2d})$ to the Hilbert-Schmidt operators $\mathcal{HS}$, and that it respects translations in the sense that \begin{equation*} L_{T_z f}=\alpha_z(L_f) \quad \text{ for } f\in L^2(\mathbb{R}^{2d}), z\in \mathbb{R}^{2d}. \end{equation*} As a consequence, statements concerning translates of functions in $L^2(\mathbb{R}^{2d})$ can be lifted to statements about translates of operators and convolutions $\star_\Lambda$ in $\mathcal{HS}$. This approach was first used for Gabor multipliers in \cite{Feichtinger:2002,Feichtinger:2003}, and has since been explored in other works \cite{Benedetto:2006,Dorfler:2010} -- we include these results for completeness, and because the proofs and results find natural formulations and generalizations in the framework of this paper. For fixed $S\in \mathcal{HS}$ and lattice $\Lambda$, we will be interested in whether $\{\alpha_\lambda(S)\}_{\lambda \in \Lambda}$ is a \textit{Riesz sequence in $\mathcal{HS}$}, i.e. 
whether there exist $A,B>0$ such that for all finite sequences $c\in \ell^2(\Lambda)$ \begin{equation} \label{eq:rieszoperator} A \|c\|^2_{\ell^2(\Lambda)}\leq \left\|\sum_{\lambda \in \Lambda} c(\lambda) \alpha_\lambda(S) \right\|_\mathcal{HS}^2 \leq B \|c\|_{\ell^2(\Lambda)}^2. \end{equation} Since the Weyl transform is unitary and preserves translations, if we let $a_S$ be the Weyl symbol of $S$, then \eqref{eq:rieszoperator} is clearly equivalent to the fact that $\{T_\lambda(a_S)\}_{\lambda \in \Lambda}$ is a Riesz sequence in $L^2(\mathbb{R}^{2d})$, meaning that \begin{equation*} A \|c\|_{\ell^2(\Lambda)}^2\leq \left\|\sum_{\lambda \in \Lambda} c(\lambda) T_\lambda(a_S) \right\|^2_{L^2(\mathbb{R}^{2d})} \leq B \|c\|_{\ell^2(\Lambda)}^2, \end{equation*} for finite $c\in \ell^2(\Lambda)$. Following \cite{Feichtinger:2002,Feichtinger:2003,Benedetto:2006,Dorfler:2010} we can use a result from \cite{Benedetto:1998} to give a characterization of when \eqref{eq:rieszoperator} holds in terms of an expression familiar from Corollary \ref{cor:orthogonalabsolute}. \begin{thm} \label{thm:rieszsequence} Let $\Lambda$ be a lattice and $S\in \mathcal{B}$. Then the following are equivalent. \begin{enumerate}[(i)] \item The function $$\mathcal{F}_\sigma^\Lambda (S\star_\Lambda \check{S}^*) =P_{\Lambda^\circ}(|\mathcal{F}_W(S)|^2)$$ has no zeros in $\mathbb{R}^{2d}/\Lambda^\circ$. \item $\{\alpha_\lambda (S)\}_{\lambda \in \Lambda}$ is a Riesz sequence in $\mathcal{HS}$. \end{enumerate} \end{thm} \begin{proof} The equality in $(i)$ is Corollary \ref{cor:orthogonalabsolute}. By the preceding discussion, $\{\alpha_\lambda(S)\}_{\lambda \in \Lambda}$ is a Riesz sequence in $\mathcal{HS}$ if and only if $\{T_\lambda(a_S)\}_{\lambda \in \Lambda}$ is a Riesz sequence in $L^2(\mathbb{R}^{2d})$. The result from \cite{Benedetto:1998} (see \cite{Benedetto:2006} for a statement for general lattices and symplectic Fourier transform) says that $\{T_\lambda(a_S)\}_{\lambda \in \Lambda}$ is a Riesz sequence if and only if there exist $A,B>0$ such that \begin{equation*} A\leq \frac{1}{|\Lambda|} \sum_{\lambda^\circ\in \Lambda^\circ} |\mathcal{F}_\sigma(a_S)(z+\lambda^\circ)|^2 \leq B \text{ for any } z\in \mathbb{R}^{2d}. \end{equation*} Since the Weyl transform and Fourier-Wigner transform are related by $\mathcal{F}_\sigma(a_S)=\mathcal{F}_W(S)$, we may restate this condition as \begin{equation} \label{eq:rieszinproof} A\leq \frac{1}{|\Lambda|} \sum_{\lambda^\circ\in \Lambda^\circ} |\mathcal{F}_W(S)(z+\lambda^\circ)|^2 \leq B \text{ for any } z\in \mathbb{R}^{2d}. \end{equation} Note that the middle term is $ P_{\Lambda^\circ}(|\mathcal{F}_W(S)|^2)(\dot{z})$, and since $S\in \mathcal{B}$ we know that $|\mathcal{F}_W(S)|^2\in S_0(\mathbb{R}^{2d})$. Therefore $P_{\Lambda^\circ}(|\mathcal{F}_W(S)|^2)\in A(\mathbb{R}^{2d}/\Lambda^\circ)$ by Lemma \ref{lem:s0periodization}, which in particular means that $P_{\Lambda^\circ}(|\mathcal{F}_W(S)|^2)$ is a continuous function on the compact space $\mathbb{R}^{2d}/\Lambda^\circ$. For a continuous function on a compact space, condition \eqref{eq:rieszinproof} is equivalent to having no zeros. This completes the proof. \end{proof}
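As an illustration, let $S=\varphi_0\otimes \varphi_0$ for the normalized Gaussian $\varphi_0(t)=2^{d/4}e^{-\pi t\cdot t}$. A standard computation (recorded here only as an aside) gives $|V_{\varphi_0}\varphi_0(z)|=e^{-\pi |z|^2/2}$, so by \eqref{eq:fwrankone} we have $|\mathcal{F}_W(S)(z)|^2=e^{-\pi|z|^2}$ and \begin{equation*} \mathcal{F}_\sigma^\Lambda (S\star_\Lambda \check{S}^*)(\dot{z})=\frac{1}{|\Lambda|}\sum_{\lambda^\circ \in \Lambda^\circ} e^{-\pi|z+\lambda^\circ|^2}>0 \quad \text{ for all } z\in \mathbb{R}^{2d}. \end{equation*} Hence $\{\alpha_\lambda(\varphi_0\otimes \varphi_0)\}_{\lambda \in \Lambda}$ is a Riesz sequence in $\mathcal{HS}$ for every lattice $\Lambda$.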
\begin{rem} \begin{enumerate}[(i)] \item Since we assume $S\in \mathcal{B}$, the first condition above is in fact equivalent to $\left\{ \alpha_\lambda(S) \right\}_{\lambda \in \Lambda}$ generating a \textit{frame sequence} in $\mathcal{HS}$, which is a weaker statement than (ii) above. The proof of this in \cite{Benedetto:2006} for Gabor multipliers works in our more general setting. \item As mentioned in the introduction, Feichtinger \cite{Feichtinger:2002} used the Kohn-Nirenberg symbol rather than the Weyl symbol. This makes no difference for our purposes -- we have opted for the Weyl symbol as it is related to $\mathcal{F}_W$ by a symplectic Fourier transform. \end{enumerate} \end{rem} If $\{\alpha_\lambda(S)\}_{\lambda \in \Lambda}$ is a Riesz sequence in $\mathcal{HS}$, the \textit{synthesis operator} is the map $D_S:\ell^2(\Lambda)\to \mathcal{HS}$ given by \begin{equation*} D_S(c)=c\star_\Lambda S=\sum_{\lambda\in \Lambda} c(\lambda) \alpha_\lambda(S), \end{equation*} and the sum $\sum_{\lambda\in \Lambda} c(\lambda) \alpha_\lambda(S)$ converges unconditionally in $\mathcal{HS}$ for each $c\in \ell^2(\Lambda)$ \cite[Cor. 3.2.5]{Christensen:2016}. We also get by \cite[Thm. 5.5.1]{Christensen:2016} that \begin{equation} \label{eq:closedspanconvolution} \overline{\text{span}\{\alpha_\lambda(S):\lambda \in \Lambda\}}=\ell^2(\Lambda)\star_\Lambda S, \end{equation} where the closure is taken with respect to the norm in $\mathcal{HS}$. \subsection{The biorthogonal system and best approximation} \label{sec:biorthogonal} Any Riesz sequence has a so-called biorthogonal sequence and, by the theory of frames of translates \cite[Prop. 9.4.2]{Christensen:2016}, if the Riesz sequence is of the form $\{\alpha_\lambda(S)\}_{\lambda \in \Lambda}$ for some $S\in \mathcal{B}$, then the biorthogonal system has the same form. This means that there exists $S^\prime \in \mathcal{HS}$ such that the biorthogonal system is \begin{equation*} \{\alpha_\lambda(S^\prime)\}_{\lambda \in \Lambda}, \end{equation*} and biorthogonality means that \begin{equation*} \inner{\alpha_\lambda(S)}{\alpha_{\lambda'}(S')}_{\mathcal{HS}}=\delta_{\lambda,\lambda'}, \end{equation*} where $\delta_{\lambda,\lambda'}$ is the Kronecker delta. Now note that for $T\in \mathcal{HS}$ the definition \eqref{eq:convopop} of $T\star_\Lambda S'$ implies that \begin{equation*} \inner{T}{\alpha_\lambda(S')}_\mathcal{HS}=T\star_\Lambda \check{S'}^*(\lambda), \end{equation*} so if we define $R:=\check{S^\prime}^*$ we have \begin{equation} \label{eq:convolutionasinnerproduct} \inner{T}{\alpha_\lambda(S')}_\mathcal{HS}=T\star_\Lambda R(\lambda). \end{equation} With this observation we can formulate the standard properties of the biorthogonal sequence using convolutions with $R$. \begin{lem} \label{lem:biorthogonal} Assume that $\{\alpha_\lambda(S)\}_{\lambda\in \Lambda}$ with $S\in \mathcal{B}$ is a Riesz sequence in $\mathcal{HS}$. Let $$V^2:=\overline{\text{span}\{\alpha_\lambda(S):\lambda \in \Lambda\}}=\ell^2(\Lambda)\star_\Lambda S.$$ With $R$ defined as above, we have that \begin{enumerate}[(i)] \item $S\star_\Lambda R(\lambda)=\delta_{\lambda,0}.$ \item For any $T\in V^2$, $T\star_\Lambda R\in \ell^2(\Lambda)$. \item For any $ T\in V^2$, \begin{equation*} T=(T\star_\Lambda R)\star_\Lambda S. \end{equation*} \end{enumerate} \end{lem} \begin{proof} This is simply a restatement of the properties of the biorthogonal sequence of a Riesz sequence using the relation $\inner{T}{\alpha_\lambda(S')}_{\mathcal{HS}}=T\star_\Lambda R(\lambda)$ -- with this observation, parts $(i),(ii)$ and $(iii)$ follow from \cite[Thm. 3.6.2]{Christensen:2016}. 
\end{proof} \begin{rem} \begin{enumerate}[(i)] \item If the convolution of three operators were associative, we could find for any $T\in \mathcal{HS}$ (not just $T\in V^2$ as above) that $T=(T\star_\Lambda R)\star_\Lambda S$, since $T\star_\Lambda (R\star_\Lambda S)=T\star_\Lambda \delta_{\lambda,0}=T$. However, we will soon see that the convolution of three operators is \textit{not} associative. \item For $T,R\in \mathcal{HS}$, we have strictly speaking not defined $T\star_\Lambda R$ (since \eqref{eq:convopop} has stronger assumptions than simply $\mathcal{HS}$). However, it is clear by the Cauchy-Schwarz inequality for $\mathcal{HS}$ that $$|T\star_\Lambda R(\lambda)|=|\inner{T}{\alpha_\lambda(S')}_{\mathcal{HS}}|\leq \|T\|_{\mathcal{HS}} \|S'\|_\mathcal{HS},$$ so we can define $T\star_\Lambda R\in \ell^\infty(\Lambda)$ by \eqref{eq:convopop} also in this case. \end{enumerate} \end{rem} We will now answer two natural questions. First, to what extent does $R$ inherit the nice properties of $S$ -- is it true that $R\in \mathcal{B}$? Then, how is $R$ related to $S$? The answer is provided by the following theorem, first proved by Feichtinger \cite[Thm. 5.17]{Feichtinger:2002} for Gabor multipliers, and the proof finds a natural formulation using our tools. \begin{thm} \label{thm:biorthogonal} Assume that $S\in \mathcal{B}$ and that $\left\{ \alpha_{\lambda}(S) \right\}_{\lambda\in \Lambda}$ is a Riesz sequence in $\mathcal{HS}$. If $R$ is defined as above, then $R\in \mathcal{B}$ and $R=b \star_\Lambda \check{S}^*$ where $b\in \ell^1(\Lambda)$ are the symplectic Fourier coefficients of \begin{equation*} \frac{1}{\mathcal{F}_\sigma^\Lambda (S\star_\Lambda \check{S}^*)}=\frac{1}{P_{\Lambda^\circ} \left(|\mathcal{F}_W(S)|^2\right)}. \end{equation*} \end{thm} \begin{proof} By \cite[Thm. 3.6.2]{Christensen:2016}, the generator $S^\prime$ of the biorthogonal system belongs to $V^2$, hence there exists some $b^\prime\in \ell^2(\Lambda)$ such that $S^\prime=b^\prime \star_\Lambda S$. Since $R=\check{S^\prime}^*$, one easily checks by the definitions of $\check{\ }$ and $^*$ that \begin{equation*} R=(b'\star_\Lambda S)\check{\ }^*=b\star_\Lambda \check{S}^* \end{equation*} if we define $b(\lambda)=\overline{b^\prime(-\lambda)}$. By part $(i)$ of Lemma \ref{lem:biorthogonal} and the associativity of convolutions, we have \begin{equation*} b\ast_\Lambda (\check{S}^* \star_\Lambda S)=(b\star_\Lambda \check{S}^*) \star_\Lambda S=R\star_\Lambda S = \delta_{\lambda,0}. \end{equation*} Taking the symplectic Fourier series of this equation using \eqref{eq:fourierseriesofconvolution} and Corollary \ref{cor:orthogonalabsolute}, we find for a.e. $\dot{z}\in \mathbb{R}^{2d}/\Lambda^\circ$ \begin{equation*} \mathcal{F}_\sigma^\Lambda(b)(\dot{z})\mathcal{F}_\sigma^\Lambda(\check{S}^* \star_\Lambda S)(\dot{z})= \mathcal{F}_\sigma^\Lambda(b)(\dot{z})P_{\Lambda^\circ}\left( |\mathcal{F}_W(S)|^2 \right)(\dot{z})=1, \end{equation*} hence \begin{equation*} \mathcal{F}_\sigma^\Lambda(b)(\dot{z})=\frac{1}{P_{\Lambda^\circ} \left(|\mathcal{F}_W(S)|^2\right)(\dot{z})}, \end{equation*} and by assumption on $S$ (see Theorem \ref{thm:rieszsequence} and its proof) the denominator is bounded from below by a positive constant. Since $S\in \mathcal{B}$, we know that $|\mathcal{F}_W(S)|^2\in S_0(\mathbb{R}^{2d})$, and therefore Lemma \ref{lem:s0periodization} implies that $P_{\Lambda^\circ} \left(|\mathcal{F}_W(S)|^2\right)\in A(\mathbb{R}^{2d}/\Lambda^\circ)$. By Wiener's lemma \cite[Thm. 6.1.1]{Reiter:2000}, we get $\frac{1}{P_{\Lambda^\circ} \left(|\mathcal{F}_W(S)|^2\right)}\in A(\mathbb{R}^{2d}/\Lambda^\circ)$. In other words, $b\in \ell^1(\Lambda)$. Since $b\in \ell^1(\Lambda)$ and $\check{S}^*\in \mathcal{B}$, it follows that $R=b\star_\Lambda \check{S}^*\in \mathcal{B}$. \end{proof}
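For the Gaussian example above this can be made completely explicit: for $S=\varphi_0\otimes \varphi_0$ the theorem yields $b\in \ell^1(\Lambda)$ as the symplectic Fourier coefficients of \begin{equation*} \mathcal{F}_\sigma^\Lambda(b)(\dot{z})=\Big(\frac{1}{|\Lambda|}\sum_{\lambda^\circ \in \Lambda^\circ} e^{-\pi|z+\lambda^\circ|^2}\Big)^{-1}, \end{equation*} a smooth, strictly positive function on $\mathbb{R}^{2d}/\Lambda^\circ$, so the biorthogonal generator exists and lies in $\mathcal{B}$ for every lattice $\Lambda$ in this case.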
To prepare for the next result, fix $S\in \mathcal{B}$ and let $$V^\infty=\ell^\infty(\Lambda)\star_\Lambda S,$$ hence $V^\infty$ is the set of operators given as a convolution $c\star_\Lambda S$ for $c\in \ell^\infty(\Lambda)$. The first part of the next result says that when $\{\alpha_\lambda (S)\}_{\lambda \in \Lambda}$ is a Riesz sequence, the Schatten-$p$ class properties of $c\star_\Lambda S$ are precisely captured by the $\ell^p$ properties of $c$. This appears to be new even for Gabor multipliers. We also determine for any $T\in \mathcal{HS}$ the best approximation (in the norm $\|\cdot\|_{\mathcal{HS}}$) of $T$ by an operator of the form $c\star_\Lambda S$. See \cite[Thm. 5.17]{Feichtinger:2002} and \cite[Thm. 19]{Dorfler:2010} for the statement for Gabor multipliers. \begin{cor} \label{cor:banachisomorphism} Assume that $S\in \mathcal{B}$ and that $\{\alpha_\lambda (S)\}_{\lambda \in \Lambda}$ is a Riesz sequence in $\mathcal{HS}$, and let $R$ be as above. \begin{enumerate}[(i)] \item For any $1\leq p \leq \infty$ the map $D_S:\ell^p(\Lambda)\to \mathcal{T}^p\cap V^\infty$ given by $$D_S(c)= c\star_\Lambda S$$ is a Banach space isomorphism, with inverse $C_R:\mathcal{T}^p\cap V^\infty\to \ell^p(\Lambda)$ given by $$C_R(T)= T\star_\Lambda R.$$ Hence $V^\infty\cap \mathcal{T}^p=\ell^{p}(\Lambda)\star_\Lambda S$ and $\|c\|_{\ell^p}\lesssim\|c\star_\Lambda S\|_{\mathcal{T}^p}\lesssim \|c\|_{\ell^p}$. \item For any $T\in \mathcal{HS}$, the best approximation in $\|\cdot\|_{\mathcal{HS}}$ of $T$ by an operator $c\star_\Lambda S$ with $c\in \ell^2(\Lambda)$ is given by $$c=T\star_\Lambda R.$$ Equivalently, the symplectic Fourier series of $c$ is given by $$\mathcal{F}_\sigma^\Lambda (c)=\frac{P_{\Lambda^\circ}\left[ \overline{\mathcal{F}_W(S)}\mathcal{F}_W(T)\right]}{P_{\Lambda^\circ} |\mathcal{F}_W(S)|^2 }.$$ \end{enumerate} \end{cor} \begin{proof} \begin{enumerate}[(i)] \item By Proposition \ref{prop:youngschatten} part $(i)$ we get $\|C_R(T)\|_{\ell^p}\leq \|T\|_{\mathcal{T}^p} \|R\|_{\mathcal{B}}$, and by part $(ii)$ of the same proposition we get $\|D_S(c)\|_{\mathcal{T}^p}\lesssim \|c\|_{\ell^p}\|S\|_{\mathcal{B}}$. Hence both maps in the statement are continuous. It remains to show that the two maps are inverses of each other, which will follow from the associativity of convolutions. First assume that $c\in \ell^p(\Lambda).$ Then \begin{equation*} C_RD_S(c)=(c\star_\Lambda S)\star_\Lambda R=c\ast_\Lambda (S\star_\Lambda R)=c, \end{equation*} where we have used associativity and part $(i)$ of Lemma \ref{lem:biorthogonal}. Then assume $T\in V^\infty\cap \mathcal{T}^p$, so that $T=c\star_\Lambda S$ for $c\in \ell^\infty(\Lambda)$. We find \begin{equation*} D_SC_R(c\star_\Lambda S)=((c\star_\Lambda S )\star_\Lambda R)\star_\Lambda S= (c\ast_\Lambda (S \star_\Lambda R))\star_\Lambda S=c\star_\Lambda S. \end{equation*} Hence $D_S$ and $C_R$ are inverses. 
In particular $V^\infty \cap \mathcal{T}^p=\ell^p(\Lambda)\star_\Lambda S$ as $D_S$ is onto $V^\infty \cap \mathcal{T}^p$, and $V^\infty \cap \mathcal{T}^p$ is closed in $\mathcal{T}^p$ (hence a Banach space) since $D_S:\ell^p(\Lambda)\to \mathcal{T}^p$ has a left inverse $C_R$ and therefore has a closed range in $\mathcal{T}^p$. \item We claim that the map $T\mapsto (T\star_\Lambda R)\star_\Lambda S$ is the orthogonal projection from $\mathcal{HS}$ onto $\ell^2(\Lambda)\star_\Lambda S$, which is a closed subspace of $\mathcal{HS}=\mathcal{T}^2$ by part $(i)$ (or \eqref{eq:closedspanconvolution}). If $T=c\star_\Lambda S$ for some $c\in \ell^2(\Lambda)$, then $c=T\star_\Lambda R$ by part $(i)$ -- therefore $T=(T\star_\Lambda R)\star_\Lambda S$. Then assume that $T\in (\ell^2(\Lambda)\star_\Lambda S)^\perp$. As we saw in \eqref{eq:convolutionasinnerproduct}, we can write \begin{equation} \label{eq:proof:bestapprox} T\star_\Lambda R(\lambda)=\inner{T}{\alpha_{\lambda}(S')}_{\mathcal{HS}}. \end{equation} From the proof of Theorem \ref{thm:biorthogonal}, $S'=b'\star_\Lambda S$ for some $b'\in \ell^2(\Lambda)$. One easily checks that $$\alpha_\lambda(S')=\alpha_\lambda(b'\star_\Lambda S)=T_{\lambda}b'\star_\Lambda S,$$ where $T_\lambda b'(\lambda')=b'(\lambda'-\lambda)$. It follows that $\alpha_{\lambda} (S')\in \ell^2(\Lambda)\star_\Lambda S$ for any $\lambda \in \Lambda.$ Hence if $T\in (\ell^2(\Lambda)\star_\Lambda S)^\perp,$ \eqref{eq:proof:bestapprox} shows that $(T\star_\Lambda R)\star_\Lambda S=0$. Finally, to obtain the equivalent expression recall from Theorem \ref{thm:biorthogonal} that $R=b\star_\Lambda \check{S}^*$ for $b\in \ell^1(\Lambda).$ Hence by associativity and commutativity of convolutions, $$c=T\star_\Lambda R=b\star_\Lambda (T\star_\Lambda \check{S}^*).$$ It follows from \eqref{eq:fourierseriesofconvolution} that we get $$\mathcal{F}_\sigma^\Lambda(c)=\mathcal{F}_\sigma^\Lambda(b)\mathcal{F}_\sigma^\Lambda(T\star_\Lambda \check{S}^*).$$ We have a known expression for $\mathcal{F}_\sigma^\Lambda(b)$ from Theorem \ref{thm:biorthogonal}, and a known expression for $\mathcal{F}_\sigma^\Lambda(T\star_\Lambda \check{S}^*)$ from Theorem \ref{thm:orthogonality} -- inserting these expressions into the equation above yields the desired result. \end{enumerate} \end{proof} The key to the results of this section is Wiener's lemma, used in the proof of Theorem \ref{thm:biorthogonal}. In fact, we may interpret these results as a variation of Wiener's lemma. To see this, recall that $V^2=\overline{\text{span}\{\alpha_\lambda(S):\lambda \in \Lambda\}}=\ell^2(\Lambda)\star_\Lambda S\subset \mathcal{HS}$. Then $\{\alpha_\lambda(S)\}_{\lambda \in \Lambda}$ is a Riesz sequence if and only if the convolution map $D_S:\ell^2(\Lambda)\to V^2$ given by $$D_S(c)= c\star_\Lambda S$$ has a bounded inverse \cite[Thm. 3.6.6]{Christensen:2016}. Corollary \ref{cor:banachisomorphism} therefore says the following: if $S\in \mathcal{B}$ and the convolution map $D_S: \ell^2(\Lambda)\to V^2$ has a bounded inverse, then the inverse is given by the convolution $$C_R(T)= T\star_\Lambda R$$ for some $R\in \mathcal{B}$. The similarities with Wiener's lemma are evident when we compare this to the following formulation of Wiener's lemma \cite[Thm. 5.18]{Grochenig:2010}: \begin{quote} If $b\in \ell^1(\mathbb{Z})$ and the convolution map $\ell^2(\mathbb{Z})\to \ell^2(\mathbb{Z})$ defined by $$c\mapsto c\ast_\mathbb{Z} b$$ has a bounded inverse on $\ell^2(\mathbb{Z})$, then the inverse is given by the convolution map $$c\mapsto c\ast_{\mathbb{Z}} b'$$ for some $b'\in \ell^1(\mathbb{Z})$. \end{quote}
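On the Fourier side the quoted statement reads as follows (a classical reformulation, included only for comparison): with $\mathcal{F}(b)(\theta)=\sum_{n\in\mathbb{Z}}b(n)e^{2\pi i n\theta}$, the map $c\mapsto c\ast_{\mathbb{Z}}b$ is invertible on $\ell^2(\mathbb{Z})$ precisely when $\mathcal{F}(b)$ has no zeros on the torus, and Wiener's lemma then guarantees \begin{equation*} \mathcal{F}(b')=\frac{1}{\mathcal{F}(b)}\in A(\mathbb{T}), \end{equation*} i.e. $b'\in \ell^1(\mathbb{Z})$ -- exactly the structure of Theorem \ref{thm:biorthogonal}, with $P_{\Lambda^\circ}(|\mathcal{F}_W(S)|^2)$ in the role of $\mathcal{F}(b)$.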
\section{Tauberian theorems} \label{sec:tauberian} In the continuous setting, where one considers functions on $\mathbb{R}^{2d}$ and the convolutions briefly introduced at the beginning of Section \ref{sec:convolutions}, a version of Wiener's Tauberian theorem for operators was obtained by Kiukas et al. \cite{Kiukas:2012}, building on earlier work by Werner \cite{Werner:1984}. This theorem consists of a long list of equivalent statements for $\mathcal{T}^p$ and $L^p(\mathbb{R}^{2d})$ for $p=1,2,\infty$, and as a starting point for our discussion we state a shortened version for $p=2$ below. \begin{thm} \label{thm:kiukastauberian} Let $S\in \mathcal{T}$. The following are equivalent. \begin{enumerate} \item The span of $\{\alpha_z(S)\}_{z\in \mathbb{R}^{2d}}$ is dense in $\mathcal{HS}$. \item The set of zeros of $\mathcal{F}_W(S)$ has Lebesgue measure $0$ in $\mathbb{R}^{2d}$. \item The set of zeros of $\mathcal{F}_\sigma(S\star \check{S}^*)$ has Lebesgue measure $0$ in $\mathbb{R}^{2d}$. \item If $f\star S=0$ for $f\in L^2(\mathbb{R}^{2d})$, then $f=0$. \item If $T\star S=0$ for $T\in \mathcal{HS}$, then $T=0$. \end{enumerate} \end{thm} We wish to obtain versions of this theorem when $\mathbb{R}^{2d}$ is replaced by a lattice $\Lambda,$ functions on $\mathbb{R}^{2d}$ are replaced by sequences on $\Lambda$ and we still consider operators on $L^2(\mathbb{R}^d)$. In this discrete setting, statements (3) and (4) in Theorem \ref{thm:kiukastauberian} are still equivalent, mutatis mutandis, while the analogues of (1) and (5) can never be true. First we show that the discrete version of statement (1) can never hold. \begin{prop} \label{prop:nodensity} Let $\Lambda$ be any lattice in $\mathbb{R}^{2d}$ and let $S\in \mathcal{HS}$. Then the linear span of $\{\alpha_\lambda(S)\}_{\lambda \in \Lambda}$ is not dense in $\mathcal{HS}$. \end{prop} \begin{proof} As we have exploited on several occasions, the Weyl transform is unitary from $L^2(\mathbb{R}^{2d})$ to $\mathcal{HS}$ and sends translations of operators using $\alpha$ to translations of functions. It is therefore sufficient to show that $\{T_\lambda(a_S)\}_{\lambda \in \Lambda}$ is not dense in $L^2(\mathbb{R}^{2d})$, where $a_S$ is the Weyl symbol of $S$. Let $c:=\left(\frac{2}{|\Lambda|}\right)^{1/(2d)}$, and define $\Lambda'=c\mathbb{Z}^{2d}$. Consider the lattice $\Lambda \times \Lambda'$ in $\mathbb{R}^{4d}$. Then we have that $|\Lambda \times \Lambda'|=|\Lambda|\cdot c^{2d}=2>1$. By the density theorem for Gabor systems \cite{Grochenig:2001,Heil:2007,Bekka:2004}, this implies that the system $\{\pi(\lambda,\lambda') a_S\}_{(\lambda,\lambda')\in \Lambda \times \Lambda'}$ cannot span a dense subset of $L^2(\mathbb{R}^{2d})$, so in particular the subsystem $\{\pi(\lambda,0) a_S\}_{(\lambda,0)\in \Lambda \times \Lambda'}=\{T_{\lambda} a_S\}_{\lambda\in \Lambda }$ cannot be complete. \end{proof} This implies that we cannot hope to generalize part (5) of Theorem \ref{thm:kiukastauberian} to the discrete setting. \begin{cor} Let $S\in \mathcal{B}$. 
There exists $0\neq T\in \mathcal{HS}$ such that $T\star_\Lambda S=0.$ \end{cor} \begin{proof} To obtain a contradiction, we assume that $T\star_\Lambda S=0\implies T=0$ for $T\in \mathcal{HS}$. As we have seen in \eqref{eq:convolutionasinnerproduct}, $$T\star_\Lambda S(\lambda)=\inner{T}{\alpha_\lambda(\check{S}^*)}_{\mathcal{HS}}.$$ Our assumption is therefore equivalent to $$\inner{T}{\alpha_\lambda(\check{S}^*)}_{\mathcal{HS}}=0 \text{ for all } \lambda \in \Lambda \implies T=0,$$ which implies that the linear span of $\{\alpha_\lambda(\check{S}^*)\}_{\lambda\in \Lambda}$ is dense in $\mathcal{HS}$ -- a contradiction to Proposition \ref{prop:nodensity} applied to $\check{S}^*\in \mathcal{B}$. \end{proof} Proposition \ref{prop:nodensity} also allows us to construct counterexamples to the associativity of convolutions of three operators. \begin{cor} \label{cor:noassociativity} Assume that $\{\alpha_\lambda(S)\}_{\lambda \in \Lambda}$ is a Riesz sequence in $\mathcal{HS}$ for $S\in \mathcal{B}$. Then there exist $R\in \mathcal{B}$ and $T\in \mathcal{HS}$ such that $$(T\star_\Lambda R) \star_\Lambda S\neq T\star_\Lambda (R\star_\Lambda S).$$ \end{cor} \begin{proof} Choose $R\in \mathcal{B}$ as in Section \ref{sec:biorthogonal}, i.e. such that $S\star_\Lambda R=\delta_{\lambda,0}$. Then use Proposition \ref{prop:nodensity} to pick $T\in \mathcal{HS}$ that does not belong to the closed linear span of $\{\alpha_\lambda(S)\}_{\lambda \in \Lambda}$ in $\mathcal{HS}$. We get that \begin{equation*} T\star_\Lambda (R\star_\Lambda S)=T\star_\Lambda \delta_{\lambda,0}=T. \end{equation*} If we assumed associativity, we would get \begin{equation*} T=(T\star_\Lambda R) \star_\Lambda S, \end{equation*} where $T\star_\Lambda R\in \ell^2(\Lambda)$ by Proposition \ref{prop:youngschatten}. Hence we could express $T=c\star_\Lambda S$ for $c\in \ell^2(\Lambda)$, which would imply that $T$ belongs to the closed linear span of $\{\alpha_\lambda(S)\}_{\lambda \in \Lambda}$ by \eqref{eq:closedspanconvolution} -- a contradiction. \end{proof} On the positive side, we can use the techniques developed in Section \ref{sec:fouriertransforms} to prove the following theorem, which shows that parts (3) and (4) of Theorem \ref{thm:kiukastauberian} have natural analogues for sequences. For Gabor multipliers, Feichtinger was interested in the question of recovering $c$ from $c\star_\Lambda (\varphi\otimes \varphi)$, and the continuity of the mapping $c\star_\Lambda (\varphi\otimes \varphi)\mapsto c$. In this case he proved the equivalence $(1)(i) \iff (1)(iv)$ below \cite[Thm. 5.17]{Feichtinger:2002}, and that this implies the final statement in part $(1)$ \cite[Prop. 5.22 and Prop. 5.23]{Feichtinger:2002}. In part $(3)$ we show that any $c\in \ell^1(\Lambda)$ (in particular any finite sequence) can be recovered from $c\star_\Lambda S$ under significantly weaker assumptions on $S$ for a fixed lattice $\Lambda$, but obtain no continuity statement. \begin{thm} \label{thm:bigtauberian} Let $S\in \mathcal{B}$. \begin{enumerate} \item The following are equivalent: \begin{enumerate}[(i)] \item $\mathcal{F}_\sigma^\Lambda(S\star_\Lambda \check{S}^*)$ has no zeros in $\mathbb{R}^{2d}/\Lambda^\circ$. \item If $c\star_\Lambda S=0$ for $c\in \ell^\infty(\Lambda)$, then $c=0$. \item $\mathcal{B}\star_\Lambda S$ is dense in $\ell^1(\Lambda).$ \item $\{\alpha_\lambda (S)\}_{\lambda \in \Lambda}$ is a Riesz sequence in $\mathcal{HS}$. 
\end{enumerate} If any of the statements above holds, $c\in \ell^\infty(\Lambda)$ is recovered from $c\star_\Lambda S$ by $c=(c\star_\Lambda S)\star_\Lambda R$ for some $R\in \mathcal{B}.$ In particular, the map $c\star_\Lambda S\mapsto c$ is continuous $\mathcal{L}(L^2) \to \ell^\infty(\Lambda)$. \item The following are equivalent: \begin{enumerate}[(i)] \item $\mathcal{F}_\sigma^\Lambda (S\star_\Lambda \check{S}^*)$ is non-zero a.e. in $\mathbb{R}^{2d}/\Lambda^\circ$. \item If $c\star_\Lambda S=0$ for $c\in \ell^2(\Lambda)$, then $c=0$. \item $\mathcal{HS}\star_\Lambda S$ is dense in $\ell^2(\Lambda).$ \end{enumerate} \item The following are equivalent: \begin{enumerate}[(i)] \item The set of zeros of $\mathcal{F}_\sigma^\Lambda(S\star_\Lambda \check{S}^*)$ contains no open subsets in $\mathbb{R}^{2d}/\Lambda^\circ$. \item If $c\star_\Lambda S=0$ for $c\in \ell^1(\Lambda)$, then $c=0$. \item $\mathcal{B}' \star_\Lambda S$ is weak*-dense in $\ell^\infty(\Lambda)$. \end{enumerate} \end{enumerate} \end{thm} \begin{proof} \begin{enumerate} \item The equivalence of $(i)$ and $(iv)$ was the content of Theorem \ref{thm:rieszsequence}. By Corollary \ref{cor:banachisomorphism}, $(iv)$ implies that $c\mapsto c\star_\Lambda S$ is injective, hence $(i)\iff (iv)\implies (ii)$ holds. Then assume that $(ii)$ holds, and let $\dot{z}\in \mathbb{R}^{2d}/\Lambda^\circ$ -- to show $(i)$, we need to show that $\mathcal{F}_\sigma^\Lambda(S\star_\Lambda \check{S}^*)(\dot{z})\neq 0$, which by Corollary \ref{cor:orthogonalabsolute} is equivalent to showing that there exists some $\lambda^\circ\in \Lambda^\circ$ such that $\mathcal{F}_W(S)(z+\lambda^\circ)\neq 0$. Consider the distribution $\delta_{\dot{z}}\in A'(\mathbb{R}^{2d}/\Lambda^\circ)$ defined by $$\inner{\delta_{\dot{z}}}{f}_{A'(\mathbb{R}^{2d}/\Lambda^\circ),A(\mathbb{R}^{2d}/\Lambda^\circ)} =\overline{f(\dot{z})}$$ (recall that our duality brackets are antilinear in the second coordinate), and let $c^{\dot{z}}=\{c^{\dot{z}}(\lambda)\}_{\lambda \in \Lambda}\in \ell^\infty(\Lambda)$ be its symplectic Fourier coefficients, i.e. $\mathcal{F}_\sigma^\Lambda(c^{\dot{z}})=\delta_{\dot{z}}$. We know that $c^{\dot{z}}\star_\Lambda S\in \mathcal{B}'$ is non-zero by $(ii)$, and Proposition \ref{prop:spreadinggm2} gives for any $f\in S_0(\mathbb{R}^{2d})$ that {\small \begin{align*} \inner{\mathcal{F}_W(c^{\dot{z}}\star_\Lambda S)}{f}_{S_0',S_0}&=\inner{\delta_{\dot{z}} \mathcal{F}_W(S)}{f}_{S_0',S_0} \\ &= \inner{\delta_{\dot{z}} }{\overline{\mathcal{F}_W(S)}f}_{S_0',S_0} \\ &= \inner{\delta_{\dot{z}} }{P_{\Lambda^\circ}\left[\overline{\mathcal{F}_W(S)}f \right]}_{A'(\mathbb{R}^{2d}/\Lambda^\circ),A(\mathbb{R}^{2d}/\Lambda^\circ)} \quad \text{by } \eqref{eq:periodicextension} \\ &= P_{\Lambda^\circ}\left[\mathcal{F}_W(S)\overline{f} \right](\dot{z}) \\ &= \sum_{\lambda^\circ\in \Lambda^\circ}\mathcal{F}_W(S)(z+\lambda^\circ)\overline{f(z+\lambda^\circ)}. \end{align*} } From this it is clear that if $\mathcal{F}_W(S)(z+\lambda^\circ)=0$ for all $\lambda^\circ \in \Lambda^\circ$, then $\mathcal{F}_W(c^{\dot{z}}\star_\Lambda S)=0$ and hence $c^{\dot{z}}\star_\Lambda S=0$ since $\mathcal{F}_W:\mathcal{B}'\to S_0'(\mathbb{R}^{2d})$ is an isomorphism, which cannot hold by $(ii)$. Before we prove $(ii)\iff (iii)$, note that $(i)$ is unchanged when $S\mapsto \check{S}^*$ by commutativity of the convolutions. 
Since $(i)\iff (ii)$, this means that $(ii)$ is equivalent to \begin{enumerate}[(ii')] \item If $c\star_\Lambda \check{S}^*=0$ for $c\in \ell^\infty(\Lambda)$, then $c=0$. \end{enumerate} To prove the equivalence of $(ii')$ and $(iii)$, we will prove that the map $D_{\check{S}^*}:\ell^\infty(\Lambda)\to \mathcal{B}'$ given by $D_{\check{S}^*}(c)= c\star_\Lambda \check{S}^*$ is the Banach space adjoint of $C_S:\mathcal{B}\to \ell^1(\Lambda)$ given by $C_S(T)=T\star_\Lambda S.$ This amounts to proving that \begin{equation*} \inner{D_{\check{S}^*}(c)}{T}_{\mathcal{B}',\mathcal{B}}=\inner{c}{C_S(T)}_{\ell^\infty(\Lambda),\ell^1(\Lambda)} \quad \text{ for } T\in \mathcal{B}, c\in \ell^\infty(\Lambda). \end{equation*} By writing out the definitions of $D_{\check{S}^*}$ and $C_S$, we see that we need to show that \begin{equation*} \inner{c\star_\Lambda \check{S}^*}{T}_{\mathcal{B}',\mathcal{B}}=\inner{c}{T\star_\Lambda S}_{\ell^\infty,\ell^1} \quad \text{ for } T\in \mathcal{B}, c\in \ell^\infty(\Lambda), \end{equation*} which is simply the definition of $c\star_\Lambda \check{S}^*$ when $c\in \ell^\infty(\Lambda)$ from \eqref{eq:dualconvolutions}, hence true. Since a bounded linear operator between Banach spaces has dense range if and only if its Banach space adjoint is injective (see \cite[Corollary to Thm. 4.12]{Rudin:2006}, part (b)), this implies that $(ii')$ is equivalent to $(iii)$. Finally, Corollary \ref{cor:banachisomorphism} implies the final statement that $c=(c\star_\Lambda S)\star_\Lambda R$. \item The equivalence $(ii)\iff(iii)$ is proved as above. Assume that $(i)$ holds, and that $c\star_\Lambda S=0$ for some $c\in \ell^2(\Lambda)$. By associativity of convolutions, $$c\ast_\Lambda (S\star_\Lambda \check{S}^*)=0.$$ Applying $\mathcal{F}_\sigma^\Lambda$ to this, we find using \eqref{eq:fourierseriesofconvolution} that $$\mathcal{F}_\sigma^\Lambda(c)\mathcal{F}_\sigma^\Lambda (S\star_\Lambda \check{S}^*)=0.$$ By $(i)$ this implies that $\mathcal{F}_\sigma^\Lambda(c)=0$ in $L^2(\mathbb{R}^{2d}/\Lambda^\circ)$, hence $c=0$. Then assume that $(i)$ does not hold, i.e. there is a subset $U\subset \mathbb{R}^{2d}/\Lambda^\circ$ of positive measure where $\mathcal{F}_\sigma^\Lambda(S\star_\Lambda \check{S}^*)$ vanishes. Pick $c\in \ell^2(\Lambda)$ such that $\mathcal{F}_\sigma^\Lambda(c)=\chi_U,$ where $\chi_U$ is the characteristic function of $U$, which is possible since $\mathcal{F}_\sigma^\Lambda:\ell^2(\Lambda)\to L^2(\mathbb{R}^{2d}/\Lambda^\circ)$ is unitary and so in particular onto. Then by Proposition \ref{prop:spreadinggm2}, for $f\in S_0(\mathbb{R}^{2d})$, {\small \begin{align*} \inner{\mathcal{F}_W(c\star_\Lambda S)}{f}_{S_0',S_0}&=\inner{\chi_U \mathcal{F}_W(S)}{f}_{S_0',S_0} \\ &= \inner{\chi_U }{\overline{\mathcal{F}_W(S)}f}_{S_0',S_0} \\ &= \inner{\chi_U }{P_{\Lambda^\circ}\left[\overline{\mathcal{F}_W(S)}f \right]}_{A'(\mathbb{R}^{2d}/\Lambda^\circ),A(\mathbb{R}^{2d}/\Lambda^\circ)} \quad \text{by } \eqref{eq:periodicextension} \\ &= \int_{\mathbb{R}^{2d}/\Lambda^\circ} \chi_U(\dot{z}) \sum_{\lambda^\circ\in \Lambda^\circ}\mathcal{F}_W(S)(z+\lambda^\circ)\overline{f(z+\lambda^\circ)}d\dot{z} \\ &=0. 
\end{align*} } To see why the last integral is zero, note first that if $\dot{z}\notin U$, then $\chi_U(\dot{z})=0.$ If $\dot{z}\in U$, then we use that by Corollary \ref{cor:orthogonalabsolute}, \begin{equation*} \mathcal{F}_\sigma^\Lambda (S\star_\Lambda \check{S^*})(\dot{z})=\frac{1}{|\Lambda|}\sum_{\lambda^\circ \in \Lambda^\circ} |\mathcal{F}_W(S)(z+\lambda^\circ)|^2 \text{ for any } z\in \mathbb{R}^{2d}. \end{equation*} Hence the assumption $\mathcal{F}_\sigma^\Lambda (S\star_\Lambda \check{S^*})(\dot{z})=0$ for $\dot{z}\in U$ implies that $\mathcal{F}_W(S)(z+\lambda^\circ)=0$ for any $\lambda^\circ\in \Lambda^\circ$ when $\dot{z}\in U$. In conclusion we have shown that the integrand above is zero, hence the integral is zero. This means that $\mathcal{F}_W(c\star_\Lambda S)=0$, so $c\star_\Lambda S=0$, contradicting $(ii)$ since $c\neq 0$. \item Assume that $(i)$ holds, and that $c\star_\Lambda S=0$ for some $c\in \ell^1(\Lambda)$. By associativity, we also have that $c\star_\Lambda (S\star_\Lambda \check{S}^*)=0$, and by applying $\mathcal{F}_\sigma^\Lambda$ we get from \eqref{eq:fourierseriesofconvolution} \begin{equation*} \mathcal{F}_\sigma^\Lambda(c)(\dot{z})\mathcal{F}_\sigma^\Lambda(S\star_\Lambda \check{S}^*)(\dot{z})=0 \quad \text{ for any } \dot{z}\in \mathbb{R}^{2d}/\Lambda^\circ. \end{equation*} Since $c\in \ell^1(\Lambda)$, $\mathcal{F}_\sigma^\Lambda(c)$ is a continuous function. So if $c\neq 0$, there must exist an open subset $U\subset \mathbb{R}^{2d}/\Lambda^\circ$ such that $\mathcal{F}_\sigma^\Lambda(c)(\dot{z})\neq 0$ for $\dot{z}\in U$. But the equation above then gives that $\mathcal{F}_\sigma^\Lambda(S\star_\Lambda \check{S}^*)(\dot{z})=0$ for $\dot{z}\in U$; a contradiction to $(i)$. Hence $c=0,$ and $(ii)$ holds. Then assume that $(ii)$ holds, and assume that there is an open set $U\subset \mathbb{R}^{2d}/\Lambda^\circ$ such that $\mathcal{F}_\sigma^\Lambda(S\star_\Lambda \check{S}^*)(\dot{z})=0$ for any $\dot{z}\in U.$ By Theorem \ref{thm:orthogonality}, this means that \begin{equation*} \sum_{\lambda^\circ\in \Lambda^\circ} |\mathcal{F}_W(S)(z+\lambda^\circ)|^2=0 \quad \text{ when } \dot{z}\in U, \end{equation*} which is clearly equivalent to \begin{equation*} \mathcal{F}_W(S)(z)=0 \quad \text{ whenever } \dot{z}\in U. \end{equation*} Then find some non-zero $c\in \ell^1(\Lambda)$ such that $\mathcal{F}_\sigma^\Lambda(c)$ vanishes outside $U$, which is possible by \cite[Remark 5.1.4]{Reiter:2000}. Using Proposition \ref{prop:spreadinggm}, we have \begin{equation*} \mathcal{F}_W(c\star_\Lambda S)(z)=\mathcal{F}_\sigma^\Lambda(c)(\dot{z})\mathcal{F}_W(S)(z)\quad \text{ for } z\in \mathbb{R}^{2d}. \end{equation*} If $\dot{z}\notin U$, then $\mathcal{F}_\sigma^\Lambda(c)(\dot{z})=0$ by construction of $c$. Similarly, if $\dot{z}\in U$, then we saw that $\mathcal{F}_W(S)(z)=0$. Hence $\mathcal{F}_W(c\star_\Lambda S)(z)=0$ for any $z\in \mathbb{R}^{2d}$, which implies that $c\star_\Lambda S=0$. But $c\neq 0$, so this is impossible when we assume $(ii)$; hence there cannot exist an open subset $U\subset \mathbb{R}^{2d}/\Lambda^\circ$ on which $\mathcal{F}_\sigma^\Lambda(S\star_\Lambda \check{S}^*)$ vanishes, i.e. $(i)$ holds. The equivalence $(ii)\iff (iii)$ is proved as in part (1), with some minor modifications. We note that $(i)$ is unchanged when $S\mapsto \check{S}^*$, so as $(i)\iff (ii)$ we have that $(ii)$ is equivalent to \begin{enumerate}[(ii')] \item If $c\star_\Lambda \check{S}^*=0$ for $c\in \ell^1(\Lambda)$, then $c=0$. 
\end{enumerate} By simply writing out the definitions, one sees using \eqref{eq:dualconvolutions3} that the map $C_S:\mathcal{B}'\to \ell^\infty(\Lambda)$ given by $C_S(T)=T\star_\Lambda S$ is the Banach space adjoint of $D_{\check{S}^*}:\ell^1(\Lambda)\to \mathcal{B}$ given by $D_{\check{S}^*}(c)= c\star_\Lambda \check{S}^*$. The equivalence $(ii')\iff (iii)$ therefore follows from part (c) of \cite[Corollary of Thm. 4.12]{Rudin:2006}: a bounded linear operator between Banach spaces is injective if and only if the range of its adjoint is weak*-dense. \end{enumerate} \end{proof} Let us rewrite the statements of the theorem in the case that $S$ is a rank-one operator $S=\varphi\otimes \varphi$ for $\varphi\in S_0(\mathbb{R}^d)$. By \eqref{eq:tworankone} we find that \begin{equation*} S\star_\Lambda \check{S}^*(\lambda)=|V_{\varphi}\varphi (\lambda)|^2, \end{equation*} and by \eqref{eq:gabormultiplier} $c\star_\Lambda S$ is the Gabor multiplier \begin{equation*} c\star_\Lambda (\varphi\otimes \varphi)\psi =\sum_{\lambda \in \Lambda} c(\lambda)V_{\varphi}\psi(\lambda)\pi(\lambda)\varphi. \end{equation*} Hence the equivalence $(i)\iff (ii)$ provides a characterization, in terms of the symplectic Fourier series of $V_\varphi \varphi\vert_{\Lambda}$, of when the symbol $c$ of a Gabor multiplier is uniquely determined. \subsection{Underspread operators and a Wiener division lemma} For motivation, recall Wiener's division lemma \cite[Lem. 1.4.2]{Reiter:2000}: if $f,g\in L^1(\mathbb{R}^{2d})$ satisfy that $\hat{f}$ has compact support ($\hat{f}$ is the usual Fourier transform on $\mathbb{R}^{2d}$) and $\hat{g}$ does not vanish on $\text{supp}(\hat{f}),$ then $$f=f\ast h\ast g$$ for some $h\in L^1(\mathbb{R}^{2d})$ satisfying $\hat{h}(z)=\frac{1}{\hat{g}(z)}$ for $z\in \text{supp}(\hat{f})$. The next result is a version of this statement for the convolutions and Fourier transforms of operators and sequences. At the level of Weyl symbols, this result is due to Gr\"ochenig and Pauwels \cite{Grochenig:2014} (see also the thesis of Pauwels \cite{Pauwels:2011}) using different techniques. We choose to include a proof using the techniques of this paper to show how the statement fits our formalism. Note that apart from the function $g$ -- introduced to ensure $A\in \mathcal{B}$ -- Theorem \ref{thm:underspread} is obtained by replacing the convolutions and Fourier transforms in Wiener's division lemma by the convolutions and Fourier transforms of sequences and operators. \begin{rem} If $\Lambda^\circ=A\mathbb{Z}^{2d}$, we will pick the fundamental domain $\square_{\Lambda^\circ}=A[-\frac{1}{2},\frac{1}{2})^{2d}$ which means that any $z\in \mathbb{R}^{2d}$ can be written as $z=z_0+\lambda^\circ$ for $z_0\in \square_{\Lambda^\circ}, \lambda^\circ \in \Lambda^\circ$ in a unique way. This choice of fundamental domain implies that $(1-\epsilon)\square_{\Lambda^\circ}=A[-\frac{1}{2}+\frac{\epsilon}{2},\frac{1}{2}-\frac{\epsilon}{2})^{2d}$, so we may find $g$ in the statement below by \cite[Prop. 2.26]{Lee:2003}. \end{rem}
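To make the geometry concrete in the separable case (an illustration, under the same identification of $\Lambda^\circ$ as before): for $\Lambda=\alpha\mathbb{Z}^d\times\beta\mathbb{Z}^d$ one has $\Lambda^\circ=\beta^{-1}\mathbb{Z}^d\times\alpha^{-1}\mathbb{Z}^d$, and the recipe above gives \begin{equation*} \square_{\Lambda^\circ}=\left[-\tfrac{1}{2\beta},\tfrac{1}{2\beta}\right)^d\times \left[-\tfrac{1}{2\alpha},\tfrac{1}{2\alpha}\right)^d. \end{equation*} Making $\Lambda$ denser (decreasing $|\Lambda|=(\alpha\beta)^d$) enlarges $\square_{\Lambda^\circ}$, so the support assumption on $\mathcal{F}_W(S)$ in the next theorem becomes easier to satisfy -- this is the reason for the density requirement in the corollary below.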
\begin{thm} \label{thm:underspread} Assume that $S\in \mathcal{B}$ satisfies $\text{supp}(\mathcal{F}_W(S))\subset (1-\epsilon)\square_{\Lambda^\circ}$ for some $0<\epsilon < 1/2$. Pick $g\in C^\infty_c(\mathbb{R}^{2d})$ such that $g\vert_{(1-\epsilon)\square_{\Lambda^\circ}}\equiv 1$ and $\text{supp}(g)\subset \square_{\Lambda^\circ}$. If $T\in \mathcal{B}$ satisfies $\mathcal{F}_W(T)(z)\neq 0$ for $z\in \text{supp}(g)$, then \begin{equation*} S=(S\star_\Lambda T)\star_\Lambda A, \end{equation*} where $A\in \mathcal{B}$ is given by $\mathcal{F}_W(A)=\frac{g}{\mathcal{F}_W(T)}$. \end{thm} \begin{proof} We first show that $A\in \mathcal{B}$ by showing $\mathcal{F}_W(A)\in S_0(\mathbb{R}^{2d})$. The Wiener-L\'{e}vy theorem \cite[Thm. 1.3.1]{Reiter:2000} gives $h\in L^1(\mathbb{R}^{2d})$ such that $\hat{h}(z)=1/\mathcal{F}_W(T)(z)$ for $z\in \text{supp}(g),$ where $\hat{}$ denotes the usual Fourier transform. Therefore $\mathcal{F}_W(A)=g\cdot \hat{h}$, which belongs to $S_0(\mathbb{R}^{2d})$ by \cite[Prop. 12.1.7]{Grochenig:2001}. To show that $S=(S\star_\Lambda T)\star_\Lambda A$, we will show that their Fourier-Wigner transforms are equal. Using Proposition \ref{prop:spreadinggm} and Theorem \ref{thm:orthogonality} we find that {\footnotesize \begin{align*} \mathcal{F}_W((S\star_\Lambda T)\star_\Lambda A)(z)&= \mathcal{F}_\sigma^\Lambda (S\star_\Lambda T)(\dot{z})\mathcal{F}_W(A)(z) \\ &= \mathcal{F}_W(A)(z) \sum_{\lambda^\circ \in \Lambda^\circ} \mathcal{F}_W(S)(z+\lambda^\circ)\mathcal{F}_W(T)(z+\lambda^\circ). \end{align*}} To show that this equals $\mathcal{F}_W(S)$, we consider three cases. \begin{itemize} \item If $z\in (1-\epsilon)\square_{\Lambda^\circ}$, then $g(z)=1$ by construction and {\footnotesize \begin{align*} \mathcal{F}_W(A)(z) \sum_{\lambda^\circ \in \Lambda^\circ} \mathcal{F}_W(S)(z+\lambda^\circ)\mathcal{F}_W(T)(z+\lambda^\circ)&=\mathcal{F}_W(A)(z)\mathcal{F}_W(S)(z)\mathcal{F}_W(T)(z) \\ &= \frac{g(z)}{\mathcal{F}_W(T)(z)}\mathcal{F}_W(S)(z)\mathcal{F}_W(T)(z)\\ &=\mathcal{F}_W(S)(z), \end{align*}} where we used that the only summand contributing to the sum is $\lambda^\circ=0$ since $\text{supp}(\mathcal{F}_W(S))\subset \square_{\Lambda^\circ}$, $z\in \square_{\Lambda^\circ}$ and $\square_{\Lambda^\circ}$ is a fundamental domain. \item If $z\in \square_{\Lambda^\circ}\setminus (1-\epsilon)\square_{\Lambda^\circ}$, then $\mathcal{F}_W(S)(z)=0$ and the same argument as above gives {\footnotesize \begin{align*} \mathcal{F}_W(A)(z) \sum_{\lambda^\circ \in \Lambda^\circ} \mathcal{F}_W(S)(z+\lambda^\circ)\mathcal{F}_W(T)(z+\lambda^\circ)&=\mathcal{F}_W(A)(z)\overbrace{\mathcal{F}_W(S)(z)}^{0}\mathcal{F}_W(T)(z) \\ &= 0. \end{align*} } \item If $z\notin \square_{\Lambda^\circ}$, then $\mathcal{F}_W(S)(z)=0$ since $\text{supp}(\mathcal{F}_W(S))\subset (1-\epsilon)\square_{\Lambda^\circ}$, and $\mathcal{F}_W((S\star_\Lambda T)\star_\Lambda A)(z)=0$ since $\mathcal{F}_W(A)(z)=\frac{g(z)}{\mathcal{F}_W(T)(z)}=0$ as $\text{supp}(g)\subset \square_{\Lambda^\circ}$. \end{itemize} \end{proof} A similar argument using duality brackets shows that essentially the same result even holds for $S\in \mathcal{B}'$. \begin{thm} \label{thm:dualunderspread} Assume that $S\in \mathcal{B}'$ satisfies $\text{supp}(\mathcal{F}_W(S))\subset (1-2\epsilon)\square_{\Lambda^\circ}$ for some $0<\epsilon < 1/2$. Pick $g\in C^\infty_c(\mathbb{R}^{2d})$ such that $g\vert_{(1-\epsilon)\square_{\Lambda^\circ}}\equiv 1$ and $\text{supp}(g)\subset \square_{\Lambda^\circ}$. If $T\in \mathcal{B}$ satisfies $\mathcal{F}_W(T)(z)\neq 0$ for $z\in \text{supp}(g)$, then \begin{equation*} S=(S\star_\Lambda T)\star_\Lambda A, \end{equation*} where $A\in \mathcal{B}$ is given by $\mathcal{F}_W(A)=\frac{g}{\mathcal{F}_W(T)}$. \end{thm} \begin{proof} We have already seen that $A\in \mathcal{B}$. Let $f\in S_0(\mathbb{R}^{2d})$. 
Then {\footnotesize \begin{align*} \inner{\mathcal{F}_W\left[(S\star_\Lambda T)\star_\Lambda A\right]}{f}_{S_0',S_0}&= \inner{(S\star_\Lambda T)\star_\Lambda A}{\rho(f)}_{\mathcal{B}',\mathcal{B}} \quad \text{ by \eqref{eq:fwdual}} \\ &= \inner{S\star_\Lambda T}{\rho(f)\star_\Lambda \check{A}^*}_{\ell^\infty,\ell^1} \quad \text{ by \eqref{eq:dualconvolutions}} \\ &= \inner{S}{(\rho(f)\star_\Lambda \check{A}^*)\star_\Lambda \check{T}^*}_{\mathcal{B}',\mathcal{B}} \quad \text{ by \eqref{eq:dualconvolutions3}} \\ &= \inner{\mathcal{F}_W(S)}{\mathcal{F}_W\left[(\rho(f)\star_\Lambda \check{A}^*)\star_\Lambda \check{T}^* \right]}_{S_0',S_0} \quad \text{ by \eqref{eq:fwdual}} \\ &= \inner{\mathcal{F}_W(S)}{b\cdot \mathcal{F}_W\left[(\rho(f)\star_\Lambda \check{A}^*)\star_\Lambda \check{T}^* \right]}_{S_0',S_0}. \end{align*} } In the last line we multiplied the right hand side by a bump function $b\in C_c^\infty(\mathbb{R}^{2d})$ such that $b\vert_{(1-2\epsilon)\square_{\Lambda^\circ}}\equiv 1$ and $\text{supp}(b)\subset (1-\epsilon)\square_{\Lambda^\circ}$ -- this does not change anything by the assumptions on the supports of $\mathcal{F}_W(S)$ and $b$. We find using Theorem \ref{thm:orthogonality} and Proposition \ref{prop:spreadinggm} that \begin{align*} b\cdot \mathcal{F}_W\left[(\rho(f)\star_\Lambda \check{A}^*)\star_\Lambda \check{T}^* \right]&=b\cdot \mathcal{F}_\sigma^\Lambda (\rho(f)\star_\Lambda \check{A}^*)\cdot \overline{\mathcal{F}_W(T)} \\ &=b \cdot \overline{\mathcal{F}_W(T)} P_{\Lambda^\circ}(f\overline{\mathcal{F}_W(A)}) \quad \text{ by \eqref{eq:fwcheckadjoint}}. \end{align*} We claim that this last function equals $b\cdot f$: if $z\notin (1-\epsilon)\square_{\Lambda^\circ}$, then $b(z)=0$, so $b(z)f(z)=0$ and $$b(z) \cdot \overline{\mathcal{F}_W(T)(z)} P_{\Lambda^\circ}(f\overline{\mathcal{F}_W(A)})(\dot{z})=0.$$ If $z\in (1-\epsilon)\square_{\Lambda^\circ}$, then $g(z)=1$ and {\footnotesize \begin{align*} b(z)\overline{\mathcal{F}_W(T)(z)}P_{\Lambda^\circ}(f\overline{\mathcal{F}_W(A)})(\dot{z})&= b(z)\overline{\mathcal{F}_W(T)(z)}\sum_{\lambda^\circ \in \Lambda^\circ} f(z+\lambda^\circ) \overline{\mathcal{F}_W(A)}(z+\lambda^\circ) \\ &= b(z)\overline{\mathcal{F}_W(T)(z)}f(z)\overline{\mathcal{F}_W(A)(z)} \\ &= b(z)\overline{\mathcal{F}_W(T)(z)}f(z)\frac{\overline{g(z)}}{\overline{\mathcal{F}_W(T)(z)}} \\ &= b(z)f(z), \end{align*} } where only the summand $\lambda^\circ=0$ contributes to the second line since $\mathcal{F}_W(A)$ vanishes outside of $\square_{\Lambda^\circ}$ by construction. Hence we have shown that \begin{align*} \inner{\mathcal{F}_W\left[(S\star_\Lambda T)\star_\Lambda A\right]}{f}_{S_0',S_0}&=\inner{\mathcal{F}_W(S)}{b\cdot f}_{S_0',S_0} \\ &= \inner{\mathcal{F}_W(S)}{f}_{S_0',S_0} \end{align*} for any $f\in S_0(\mathbb{R}^{2d})$, which implies the result. \end{proof} Operators $S$ such that $\text{supp}(\mathcal{F}_W(S))\subset [-\frac{a}{2},\frac{a}{2}]^d \times [-\frac{b}{2},\frac{b}{2}]^d$ where $ab\leq 1$ are called \textit{underspread}, and provide realistic models of communication channels \cite{Kozek:2006,Kozek:1997,Strohmer:2006,Grochenig:2014,Dorfler:2010}. We immediately obtain the following consequence. \begin{cor} Any underspread operator $S\in \mathcal{B}'$ can be expressed as a convolution $S=c\star_\Lambda A$ with $c\in \ell^\infty(\Lambda)$ and $A\in \mathcal{B}$ for a sufficiently dense lattice $\Lambda$. In particular, $S$ is bounded on $L^2(\mathbb{R}^d)$. \end{cor} It is known (see \cite{Dorfler:2010}) that for an operator $S$ to be well-approximated by Gabor multipliers -- i.e. 
operators $c\star_\Lambda (\psi\otimes \psi)$ for $\psi\in L^2(\mathbb{R}^d)$ -- $S$ should be underspread. The result above shows that any underspread operator $S$ is given precisely by a convolution $S=c\star_\Lambda A$ if we allow $A$ to be any operator in $\mathcal{B}$, not just a rank-one operator. In fact, $A$ as constructed in the theorem will never be a rank-one operator, since $\mathcal{F}_W(A)$ has compact support -- this is not possible for rank-one operators \cite{Janssen:1998a}. If $S$ satisfies $S\in \mathcal{T}^p$ in addition to the assumptions of Theorem \ref{thm:dualunderspread}, then $c=S\star_\Lambda T\in \ell^p(\Lambda)$ by Proposition \ref{prop:youngschatten}. Hence the $p$-summability of $c$ in $S=c\star_\Lambda A$ reflects the fact that $S\in \mathcal{T}^p$. Theorem \ref{thm:dualunderspread} also implies that underspread operators $S$ are determined by the sequence $S\star_\Lambda T$ when $T\in \mathcal{B}$ is chosen appropriately. This was a major motivation for \cite{Grochenig:2014}, since when $T$ is a rank-one operator $T=\varphi\otimes \varphi$, the sequence $S\star_\Lambda \check{T}$ is the diagonal of the so-called channel matrix of $S$ with respect to $\varphi$ -- see \cite{Grochenig:2014,Pauwels:2011} for a thorough discussion and motivation of these concepts.
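In this rank-one case the diagonal sequence can be written out explicitly. Applying \eqref{eq:convolutionasinnerproduct} with $S'=\varphi\otimes \varphi$, so that $\check{S'}^*=\check{T}$ (formally, for $S\in \mathcal{HS}$; the general case follows by duality), we obtain \begin{equation*} S\star_\Lambda \check{T}(\lambda)=\inner{S}{\alpha_\lambda(\varphi\otimes\varphi)}_{\mathcal{HS}}=\inner{S\,\pi(\lambda)\varphi}{\pi(\lambda)\varphi} \quad \text{ for } \lambda\in \Lambda, \end{equation*} i.e. the samples are exactly the diagonal matrix entries of $S$ with respect to the Gabor system $\{\pi(\lambda)\varphi\}_{\lambda\in \Lambda}$.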
Finally, note that Theorem \ref{thm:dualunderspread} gives a (partial) discrete analogue of part (5) of Theorem \ref{thm:kiukastauberian}. \section*{Acknowledgements} We thank Franz Luef for insightful feedback on various drafts of this paper. We also thank Markus Faulhuber for helpful discussions and suggestions, particularly concerning Proposition \ref{prop:nodensity}. \bibliographystyle{plain}
\section{Introduction \label{In}} Within the standard model the calculation of the $K\rightarrow \pi\pi$ decay amplitudes is based on the effective low-energy hamiltonian for $\Delta S=\nolinebreak 1$ transitions \cite{delS}, \begin{equation} {\cal H}_{ef\hspace{-0.5mm}f}^{\scriptscriptstyle \Delta S=1}=\frac{G_F}{\sqrt{2}} \;\xi_u\sum_{i=1}^8 c_i(\mu)Q_i(\mu)\hspace{1cm} (\,\mu<m_c\,)\;, \end{equation} \begin{equation} c_i(\mu)=z_i(\mu)+\tau y_i(\mu)\;,\hspace*{1cm}\tau=-\xi_t/\xi_u\;, \hspace*{1cm}\xi_q=V_{qs}^*V_{qd}^{}\;, \end{equation} where the Wilson coefficient functions $c_i(\mu)$ of the local four-fermion operators $Q_i(\mu)$ are obtained by means of the renormalization group equation. They were computed in an extensive next-to-leading logarithm analysis by two groups \cite{BJL,CFMR}. Long-distance contributions to the isospin amplitudes $A_I$ are contained in the hadronic matrix elements of the bosonized operators. Among the various four-fermion operators, the gluon penguin and the electroweak penguin, \begin{equation} Q_6 =-2\sum_{q=u,d,s}\bar{s}(1+\gamma_5) q\,\bar{q}(1-\gamma_5) d \;, \hspace{1cm} Q_8=-3\sum_{q=u,d,s}e_q\,\bar{s}(1+\gamma_5) q\,\bar{q}(1-\gamma_5) d\;, \end{equation} respectively [with $e_q=(2/3,\,-1/3,\,-1/3)$], are particularly interesting for two reasons. First, the two operators dominate the direct $C\hspace{-0.7mm}P$ violation in $K\rightarrow \pi\pi$ decays ($\varepsilon'/\varepsilon$). Second, they have a density-density structure different from the structure of the current-current four-fermion operators widely investigated previously. In this talk we focus on the method to calculate the loop (i.e., the $1/N_c$) corrections to the hadronic matrix elements (with $N_c$ the number of colors). It is of special importance to examine whether they significantly affect the large cancellation between the gluon and the electroweak penguin contributions in the ratio $\varepsilon'/\varepsilon$ obtained at the tree level in Ref.~\cite{Buch}. The approach we will follow is the $1/N_c$ expansion as it has been introduced in Ref.~\cite{BBG} to investigate the $\Delta I = 1/2$ selection rule. To compute the hadronic matrix elements we will start from the low-energy chiral effective lagrangian for pseudoscalar mesons. In calculating the loops we have to choose, in particular, a regularization scheme. One possibility is to use dimensional regularization, in which case, strictly speaking, one applies the effective lagrangian beyond its low-energy domain of validity. This problem can be avoided by using an energy cut-off. The price to pay is the loss of translational invariance (which in particular implies a dependence of the loop integrals on the precise definition of the momentum integration variable inside the loops). In the following analysis we will use a cut-off regularization for the divergent contributions because we believe that for these contributions this procedure is more appropriate (see e.g.~Ref.~\cite{BB}). In particular we will argue that the problem of translational non-invariance can be treated in a satisfactory way by separating the factorizable and non-factorizable contributions explicitly: {\it a priori} the non-factorizable diagrams are momentum prescription dependent, but only one prescription yields a consistent matching with the short-distance QCD contribution. The factorizable diagrams on the other hand refer to the purely strong sector of the theory. Consequently, as we will show explicitly, their sum does not contain any divergent term. 
Therefore they can and will be calculated within dimensional regularization (in contrast to the non-factorizable diagrams), which yields an unambiguous result. In Section 2 we specify the low-energy effective lagrangian. In Sections 3 and 4 we analyze the factorizable and non-factorizable diagrams, respectively. Finally, in Section 5 we discuss our results and summarize. We will focus here on the general method to calculate the loop corrections to $Q_6$ and $Q_8$ in a systematic way, and we will present only the divergent terms explicitly. These will be calculated at the operator level, giving the evolution of the operators $Q_6$ and $Q_8$ (from our results the $K \rightarrow \pi \pi$ matrix elements can be obtained in a straightforward way). Numerical results including the non-negligible finite terms are presented by G.~K\"ohler in these proceedings, and some additional details can be found in Ref.~\cite{HKPSB}. \section{Low-energy Effective Lagrangian} Within our study we will use the low-energy effective chiral lagrangian for pseudoscalar mesons which involves an expansion in momenta where terms up to ${\cal O}(p^4)$ are included \cite{GaL}, \begin{eqnarray} {\cal L}_{ef\hspace{-0.5mm}f}&=&\frac{f^2}{4}\Big( \langle \partial_\mu U^\dagger \partial^{\mu}U\rangle +\frac{\alpha}{4N_c}\langle \ln U^\dagger -\ln U\rangle^2 +r\langle {\cal M} U^\dagger+U{\cal M}^\dagger\rangle\Big) +r^2 H_2 \langle {\cal M}^\dagger{\cal M}\rangle \nonumber\\[1mm] && +rL_5\langle \partial_\mu U^\dagger\partial^\mu U({\cal M}^\dagger U +U^\dagger{\cal M})\rangle+rL_8\langle {\cal M}^\dagger U{\cal M}^\dagger U +{\cal M} U^\dagger{\cal M} U^\dagger \rangle\;,\label{Leff} \end{eqnarray} with $\langle A\rangle$ denoting the trace of $A$ and ${\cal M}= \mbox{diag}(m_u,\,m_d,\,m_s)$. $f$ and $r$ are free parameters related to the pion decay constant $F_\pi$ and to the quark condensate, respectively, with $r=-2\langle \bar{q}q\rangle/f^2$. In obtaining Eq.~(\ref{Leff}) we used the general form of the lagrangian \cite{GaL} and omitted terms of ${\cal O}(p^4)$ which do not contribute to the $K\rightarrow\pi\pi$ matrix elements of $Q_6$ and $Q_8$ or are subleading in the $1/N_c$ expansion.\footnote{In addition, one might note that the contribution of the contact term $\propto\langle {\cal M}^\dagger {\cal M}\rangle$ vanishes in the isospin limit ($m_u=m_d$).}~The fields of the complex matrix $U$ are identified with the pseudoscalar meson nonet defined in a non-linear representation: \begin{equation} U=\exp\frac{i}{f}\Pi\,,\hspace{1cm} \Pi=\pi^a\lambda_a\,,\hspace{1cm} \langle\lambda_a\lambda_b\rangle=2\delta_{ab}\,, \end{equation} where, in terms of the physical states, \begin{equation} \Pi=\left( \begin{array}{ccc} \textstyle\pi^0+\frac{1}{\sqrt{3}}a\eta+\sqrt{\frac{2}{3}}b\eta' & \sqrt2\pi^+ & \sqrt2 K^+ \\[2mm] \sqrt2 \pi^- & \textstyle -\pi^0+\frac{1}{\sqrt{3}}a\eta+\sqrt{\frac{2}{3}}b\eta' & \sqrt2 K^0 \\[2mm] \sqrt2 K^- & \sqrt2 \bar{K}^0 & \textstyle -\frac{2}{\sqrt{3}}b\eta+\sqrt{\frac{2}{3}}a\eta' \end{array} \right)\,, \end{equation} and \begin{equation} a= \cos \theta-\sqrt{2}\sin\theta\,, \hspace{1cm} \sqrt{2}b=\sin\theta+\sqrt{2}\cos\theta\,. \label{isopar} \end{equation} Note that we treat the singlet as a dynamical degree of freedom and include in Eq.~(\ref{Leff}) a term for the strong anomaly proportional to the instanton parameter $\alpha$. This term gives a non-vanishing mass of the $\eta_0$ in the chiral limit ($m_q=0$), reflecting the explicit breaking of the axial $U(1)$ symmetry. $\theta$ is the $\eta-\eta'$ mixing angle for which we take the value $\theta=-19^\circ$ \cite{eta}.
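For orientation (a small numerical aside), the value $\theta=-19^\circ$ in Eq.~(\ref{isopar}) corresponds to \begin{equation*} a=\cos\theta-\sqrt{2}\sin\theta\simeq 1.41\,, \hspace{1cm} b=\frac{1}{\sqrt{2}}\left(\sin\theta+\sqrt{2}\cos\theta\right)\simeq 0.72\,, \end{equation*} and one easily checks that $a^2+2b^2=3$ for any $\theta$, a useful consistency check on the parametrization of the $\eta$ and $\eta'$ entries in $\Pi$.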
$\theta$ is the $\eta-\eta'$ mixing angle, for which we take the value $\theta=-19^\circ$ \cite{eta}. The bosonic representation of the quark densities is defined in terms of (functional) derivatives: \begin{eqnarray} (D_L)_{ij}&=&\bar{q}_i\frac{1}{2}(1-\gamma_5) q_j \nonumber\\ &\equiv&-\frac{\delta{\cal L}_{ef\hspace{-0.5mm}f}}{\delta{\cal M}_{ij}} =-r\Big(\frac{f^2}{4}U^\dagger+L_5\partial_\mu U^\dagger \partial^\mu U U^\dagger +2rL_8U^\dagger{\cal M} U^\dagger +rH_2{\cal M}^\dagger\Big)_{ji}\;,\hspace*{4mm} \label{CD} \end{eqnarray} and the right-handed density $(D_R)_{ij}$ is obtained by hermitian conjugation. Eq.~(\ref{CD}) allows us to express the operators $Q_6$ and $Q_8$ in terms of the meson fields: \begin{eqnarray} Q_6&=&-2f^2r^2\sum_q \Bigg[ \frac{1}{4}f^2(U^\dagger)_{dq}(U)_{qs} +(U^\dagger)_{dq} \big(L_5U\partial_\mu U^\dagger\partial^\mu U +2rL_8U{\cal M}^\dagger U \nonumber \\[-2.2mm] &&+rH_2{\cal M}\big)_{qs}+\big(L_5U^\dagger\partial_\mu U\partial^\mu U^\dagger+2rL_8U^\dagger{\cal M} U^\dagger+rH_2{\cal M}^\dagger\big)_{dq} (U)_{qs}\Bigg]+{\cal O}(p^4)\,,\hspace*{6.5mm}\label{q6u}\\[2.2mm] Q_8&=&-3f^2r^2\sum_q e_q\Bigg[ \frac{1}{4}f^2(U^\dagger)_{dq}(U)_{qs} +(U^\dagger)_{dq} \big(L_5U\partial_\mu U^\dagger\partial^\mu U +2rL_8U{\cal M}^\dagger U \nonumber \\[-2.2mm] &&+rH_2{\cal M}\big)_{qs}+\big(L_5U^\dagger\partial_\mu U\partial^\mu U^\dagger+2rL_8U^\dagger{\cal M} U^\dagger+rH_2{\cal M}^\dagger\big)_{dq} (U)_{qs}\Bigg]+{\cal O}(p^4).\label{q8u} \end{eqnarray} For the operator $Q_6$ the $(U^\dagger)_{dq}(U)_{qs}$ term, which is of ${\cal O}(p^0)$, vanishes at the tree level. This property follows from the unitarity of $U$. However, as we will see, when investigating off-shell corrections it must be included. In the following we will consider the one-loop effects over the ${\cal O}(p^2$) lagrangian, that is to say, the ${\cal O}(p^0/N_c$) corrections to $Q_6$ and $Q_8$. Through the renormalization procedure, this also requires taking into account the tree level ${\cal O}(p^4$) lagrangian [i.e., the ${\cal O}(p^2$) terms for $Q_6$ and $Q_8$] proportional to $L_5$, $L_8$ and $H_2$ in Eq.~(\ref{Leff}). \section{Factorizable \boldmath $1/N_c$ \unboldmath Corrections \label{FAC}} Since factorizable and non-factorizable corrections refer to disconnected sectors of the theory (strong and weak sectors), we introduce two different scales: $\lambda_c$ is the cut-off for the factorizable diagrams and $\Lambda_c$ for the non-factorizable ones. We will refer to them as the factorizable and the non-factorizable scales, respectively. A similar distinction of the scales was also performed in Ref.~\cite{Bkpar} in the calculation of the $B_K$ parameter. As the factorizable loop corrections refer to the purely strong sector of the theory, for these corrections there is no matching between the long- and short-distance contributions except for the scale dependence of the overall factor $r^2\sim 1/m_s^2$ in $Q_6$ and $Q_8$ [see Eq.~(\ref{mk}) below]. This property follows from the fact that the evolution of $m_s$ which already appears at leading $N_c$ is the inverse of the evolution of a quark density. Therefore, except for the scale of $1/m_s^2$ which exactly cancels the factorizable evolution of the density-density operators at short distances, the only scale remaining in the matrix elements is the non-factorizable scale $\Lambda_c$. It represents the non-trivial part of the factorization scale in the operator product expansion.
The only matching between long- and short-distance contributions is obtained by identifying the cut-off scale $\Lambda_c$ of the non-factorizable diagrams with the QCD renormalization scale $\mu$. In this section we shall show explicitly, at the level of a single density operator, that the quadratic and logarithmic dependence on $\lambda_c$ which arises from the factorizable loop diagrams is absorbed in the renormalization of the low-energy lagrangian. Consequently, in the factorizable sector the chiral loop corrections do not induce ultraviolet divergent terms in addition to the $1/m_s^2$ factor. The proof of the absorption of the factorizable scale $\lambda_c$ will be carried out in the isospin limit. This explicit demonstration is instructive for several reasons. First, we verify the validity of the general concept in the case of bosonized densities which, contrary to the currents, do not obey conservation laws (i.e., only PCAC can be used for the densities). Second, we check, within the cut-off formalism, whether there is a dependence on a given momentum shift ($q\rightarrow q\pm p$). Third, including the $\eta_0$ as a dynamical degree of freedom we examine the corresponding modifications in the renormalization procedure. Finally, there remain finite terms from the factorizable $1/N_c$ corrections which explicitly enter the numerical analysis of the matrix elements. This point will be discussed at the end of this Section. To calculate the evolution of the operators we apply the background field method as used in Refs.~\cite{WB} and \cite{FG} for current-current operators. This approach is powerful as it keeps track of the chiral structure in the loop corrections. It is particularly useful to study the ultraviolet behaviour of the theory. In order to calculate the evolution of the density operator we decompose the matrix $U$ into the classical field $\bar{U}$ and the quantum fluctuation $\xi$, \begin{equation} U=\exp (i\xi/f)\,\bar{U}\;, \hspace{0.5cm}\xi=\xi^a\lambda_a\,, \end{equation} with $\bar{U}$ satisfying the equation of motion \begin{equation} \bar{U}\partial^2\bar{U}^\dagger-\partial^2 \bar{U}\, \bar{U}^\dagger +r{\cal M}\bar{U}^\dagger-r\bar{U}{\cal M}^\dagger -\frac{\alpha}{N_c}\langle\ln\bar{U}-\ln\bar{U}^\dagger\rangle\cdot {\bf 1}=0\;, \hspace{0.5cm} \bar{U}=\exp(i\pi^a\lambda_a/f)\;. \end{equation} The lagrangian of Eq.~(\ref{Leff}) thus reads \begin{equation} {\cal L}=\bar{\cal L} +\frac{1}{2}(\partial_\mu\xi^a\partial^\mu\xi_a) +\frac{1}{4}\langle [\partial_\mu\xi,\,\xi]\partial^\mu \bar{U}\bar{U}^\dagger\rangle -\frac{r}{8}\langle \xi^2\bar{U}{\cal M}^\dagger+\bar{U}^\dagger\xi^2{\cal M}\rangle-\frac{1}{2}\alpha\xi^0\xi^0+{\cal O}(\xi^3)\;.\label{la2} \end{equation} The corresponding expansion of the meson density around the classical field yields \begin{equation} (D_L)_{ij}=(\bar{D}_L)_{ij}+if\frac{r}{4}(\bar{U}^\dagger\xi)_{ji} +\frac{r}{8}(\bar{U}^\dagger\xi^2)_{ji}+{\cal O}(\xi^3)\;.\label{Dexp} \vspace{6mm} \end{equation} \noindent \centerline{\epsfig{file=fig4.eps,width=9.46cm}} \protect{\vspace{-2mm}} \\ \footnotesize Fig.~1. Evolution of the density operator; the black circle, square and triangle denote the kinetic, mass and $U_A(1)$ breaking terms in Eq.~(\ref{la2}), the crossed circle the density of Eq.~(\ref{Dexp}). The lines represent the $\xi$ propagators. \\[6pt] \normalsize The evolution of $(D_L)_{ij}$ is determined by the diagrams of Fig.~1.
Integrating out the fluctuation $\xi$ we obtain \begin{eqnarray} (D_L)_{ij}(\lambda_c)&=&-\frac{f^2}{4}r(\bar{U}^\dagger)_{ji}(0) +\frac{3}{4}r(\bar{U}^\dagger)_{ji}(0)\frac{\lambda_c^2}{(4\pi)^2} -\frac{r}{12}(\bar{U}^\dagger)_{ji}(0)\alpha\frac{\log \lambda_c^2}{(4\pi)^2} \nonumber \\[2mm] &&-r^2({\cal M}^\dagger)_{ji}(0)\left[H_2+\frac{3}{16}\frac{\log\lambda_c^2} {(4\pi)^2}\right]-2r^2(\bar{U}^\dagger{\cal M} \bar{U}^\dagger)_{ji}(0) \left[L_8+\frac{3}{32}\frac{\log\lambda_c^2}{(4\pi)^2}\right] \nonumber \\[2mm] &&-r(\partial_\mu \bar{U}^\dagger\partial^\mu \bar{U} \bar{U}^\dagger)_{ji}(0) \left[L_5+\frac{3}{16}\frac{\log\lambda_c^2}{(4\pi)^2}\right] +\ldots \label{opd}\;, \end{eqnarray} where the ellipses denote finite terms (non-divergent in $\lambda_c$) coming from the loop corrections. The quadratic and logarithmic terms for the wave function and mass renormalizations can be calculated from the diagrams of Figs.~2 and 3, i.e., from the off-shell corrections to the kinetic and the mass operator [the second and third terms of Eq.~(\ref{la2})], respectively. We get \begin{eqnarray} m_\pi^2&=&r\hat{m}\left[1-\frac{8m_\pi^2}{f^2}(L_5-2L_8)+\frac{1}{3}\alpha \frac{\log\lambda_c^2}{(4\pi)^2 f^2}\right]+\ldots \label{mp}\,,\\[2mm] m_K^2&=&r\frac{\hat{m}+m_s}{2}\left[1-\frac{8m_K^2}{f^2}(L_5-2L_8) +\frac{1}{3} \alpha\frac{\log\lambda_c^2}{(4\pi)^2 f^2}\right]+\ldots\,,\label{mk} \\[4mm] Z_\pi&=&1+\frac{8L_5}{f^2}m_\pi^2-3\frac{\lambda_c^2}{(4\pi)^2 f^2} +\frac{3}{2}m_\pi^2\frac{\log \lambda_c^2}{(4\pi)^2 f^2} +\ldots\,,\label{zpop}\\[2mm] Z_K&=&1+\frac{8L_5}{f^2}m_K^2-3\frac{\lambda_c^2}{(4\pi)^2 f^2} +\frac{3}{2}m_K^2\frac{\log \lambda_c^2}{(4\pi)^2 f^2}\label{zkop} +\ldots\,, \end{eqnarray} with $\hat{m}= (m_u+m_d)/2$. \protect{\vspace{8mm}} \noindent \centerline{\epsfig{file=fig6.eps,width=11.59cm}}\\[14pt] \centerline{ \footnotesize Fig.~2. Evolution of the kinetic operator (wave function renormalization). } \\[30pt] \noindent \centerline{\epsfig{file=fig7.eps,width=5.15cm}}\\[14pt] \centerline{ \footnotesize Fig.~3. Evolution of the mass operator (mass renormalization). } \\[30pt] \noindent \centerline{\epsfig{file=fig5.eps,width=11.65cm}}\\[14pt] \centerline{ \footnotesize Fig.~4. Evolution of the current operator. The crossed circle here denotes the bosonized current. } \\[6pt] \normalsize Along the same lines $F_\pi$ and $F_K$ can be calculated, to one-loop order, from the diagrams of Fig.~4, and we obtain\footnote{The representation of the bosonized current in terms of the background field can be found in Ref.~\cite{FG}.} \begin{eqnarray} F_\pi&=&f\left[1+\frac{4L_5}{f^2}m_\pi^2-\frac{3}{2}\frac{\lambda_c^2} {(4\pi)^2 f^2}+\frac{3}{4}m_\pi^2\frac{\log \lambda_c^2}{(4\pi)^2f^2} +\ldots\right] \,,\label{fpop}\\ [2mm] F_K&=&f\left[1+\frac{4L_5}{f^2}m_K^2-\frac{3}{2}\frac{\lambda_c^2} {(4\pi)^2 f^2}+\frac{3}{4}m_K^2\frac{\log \lambda_c^2}{(4\pi)^2 f^2} +\ldots\right] \label{fkop}\,.\end{eqnarray} Both the quadratic and the logarithmic terms of Eqs.~(\ref{opd})-(\ref{fkop}) prove to be independent of the way we define the integration variable inside the loops. This is due to the fact that the quadratically divergent integrals resulting from the diagrams of Figs.~1-4 [$\,$i.e., those of the form $d^4q/(q\pm p)^2\,$] do not induce subleading logarithms, that is to say, all quadratic and logarithmic dependence on the scale $\lambda_c$ originates from the leading divergence of a given integral.
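To get a feeling for the relative size of the cut-off dependent terms, Eq.~(\ref{fpop}) can be evaluated numerically. The following short script is meant only as an illustration: the input $f\simeq 92$~MeV and the reference scale of the logarithm (set to $1$~GeV) are assumptions of ours and are not fixed at this stage of the analysis.
\begin{verbatim}
# Illustrative numerics for the cut-off terms in Eq. (fpop).
# Assumptions (not fixed by the text): f = 92 MeV; the logarithm is
# normalized to a 1 GeV reference scale.
import numpy as np

f, m_pi = 0.092, 0.1396                    # GeV
for lam in (0.5, 0.6, 0.7):                # cut-off lambda_c in GeV
    quad = -1.5 * lam**2 / ((4*np.pi)**2 * f**2)
    log  = 0.75 * m_pi**2 * np.log(lam**2) / ((4*np.pi)**2 * f**2)
    print(f"lambda_c = {lam:.1f} GeV: quadratic {quad:+.3f}, log {log:+.4f}")
\end{verbatim}
The quadratic term clearly dominates for cut-offs in this range, which is why its absorption into the renormalization of the low-energy parameters is the crucial step of this section.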
Now looking at Eqs.~(\ref{zpop})-(\ref{fkop}) we observe that the ratio $\Pi/f$ and, consequently, the matrix field $U$ are not renormalized (i.e., $\pi_0/f\,=\,\pi_r/F_\pi$ and $K_0/f\,=\,K_r/F_K$). Defining the renormalized (scale independent) couplings $\hat{L}_i$ through the relations \begin{eqnarray} \frac{F_K}{F_\pi}&=& 1 +\frac{4}{f^2}(m_K^2-m_\pi^2) \left[L_5+\frac{3}{16}\frac{\log\lambda_c^2}{(4\pi)^2}\right] +\ldots\,,\label{kp0}\\[2mm] &\equiv& 1+\frac{4\hat{L}_5^r}{F_\pi^2}(m_K^2-m_\pi^2)\,, \label{kp1} \\[4mm] \frac{m_K^2}{m_\pi^2}&=&\frac{\hat{m}+m_s}{2\hat{m}}\left[ 1-\frac{8(m_K^2-m_\pi^2)}{f^2}(L_5-2L_8)\right]+\ldots\,,\label{kp2}\\[2mm] &\equiv&\frac{\hat{m}+m_s}{2\hat{m}}\left[1-\frac{8(m_K^2-m_\pi^2)}{F_\pi^2} (\hat{L}_5^r-2\hat{L}_8^r)\right]\,\,,\label{kp3} \end{eqnarray} from Eqs.~(\ref{kp0}) and (\ref{kp1}) we find, to one-loop order, \begin{equation} L_5=\hat{L}_5^r-\frac{3}{16}\frac{\log\lambda_c^2}{(4\pi)^2}+\ldots\,, \label{L5r} \end{equation} in accordance with the result from chiral perturbation theory \cite{GaL}. Note that Eq.~(\ref{kp2}) exhibits no explicit dependence on the scale $\lambda_c$; i.e., the chiral loop corrections of Eqs.~(\ref{mp}) and (\ref{mk}) do not contribute to the $SU(3)$ breaking in the masses and, consequently, can be absorbed in $r$. This implies \begin{equation} L_5-2L_8=\hat{L}_5^r-2\hat{L}_8^r +\ldots \label{l58}\,\,. \end{equation} Then, from Eqs.~(\ref{L5r}) and (\ref{l58}) we get \begin{equation} L_8=\hat{L}_8^r-\frac{3}{32}\frac{\log\lambda_c^2}{(4\pi)^2}+\ldots\,\,. \label{L8r} \end{equation} One might note that the coefficient in front of the logarithm in Eq.~(\ref{L8r}) differs from the one given in Ref.~\cite{GaL}. This property follows from the presence of the singlet $\eta_0$ in the calculation. Eqs.~(\ref{kp2}) and (\ref{kp3}) define the renormalization conditions because the term $\hat{L}_5^r-2\hat{L}_8^r$ plus the constant terms which appear in the ratio of the masses in Eq.~(\ref{kp2}) determine the bare constant $L_5-2L_8$. Similarly Eqs.~(\ref{kp0}) and (\ref{kp1}) with the associated finite terms determine the coupling constant $L_5$. Then, by means of Eqs.~(\ref{mk}) and (\ref{fpop}), we can rewrite the density of Eq.~(\ref{opd}) as \begin{eqnarray} (D_L)_{ij}(\lambda_c)&=&-\frac{2m_K^2}{(\hat{m}+m_s)} \Bigg[\frac{F_\pi^2}{4}\Bigg(1+\frac{8\hat{L}_5^r}{F_\pi^2}\left(m_K^2 -m_\pi^2\right) -\frac{16\hat{L}_8^r}{F_\pi^2}m_K^2\Bigg)(\bar{U}^\dagger)_{ji} \nonumber\\ && +(\partial_\mu \bar{U}^\dagger\partial^\mu \bar{U}\bar{U}^\dagger)_{ji} \hat{L}_5^r+2(\bar{U}^\dagger\chi\bar{U}^\dagger)_{ji}\hat{L}_8^r +(\chi^\dagger)_{ji}\hat{H}_2^r\Bigg]\,,\label{opd2} \end{eqnarray} with $\chi=\mbox{diag}(m_\pi^2,\,m_\pi^2,\,2m_K^2-m_\pi^2)$. In obtaining Eq.~(\ref{opd2}) we used the renormalized couplings of Eqs.~(\ref{L5r}) and (\ref{L8r}). In addition, we introduced \begin{equation} \hat{H}_2^r=H_2+\frac{3}{16}\frac{\log\lambda_c^2}{(4\pi)^2}+\ldots\,\,. \label{H2r} \end{equation} Note that the renormalized density exhibits no dependence on the scale $\lambda_c$, except for the scale of $1/(\hat{m}+m_s)$. Note also that in Eqs.~(\ref{opd}) and (\ref{opd2}) we did not specify logarithmic terms induced at the one-loop order which correspond to the $L_4$, $L_6$ and $L_7$ operators in the chiral effective lagrangian of Ref.~\cite{GaL}. An explicit calculation of these terms shows that they give no contribution to the $K\rightarrow \pi\pi$ matrix elements of $Q_6$ and $Q_8$. 
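As a simple numerical illustration of these renormalization conditions, Eq.~(\ref{kp1}) can be inverted to obtain $\hat{L}_5^r$ from the measured ratio $F_K/F_\pi$. A minimal sketch, with the physical values $F_\pi=92.4$~MeV, $m_\pi=139.6$~MeV, $m_K=493.7$~MeV and $F_K/F_\pi\simeq 1.22$ quoted here purely for illustration:
\begin{verbatim}
# Sketch: invert Eq. (kp1),
#   F_K/F_pi = 1 + (4 Lhat5r / F_pi^2) (m_K^2 - m_pi^2),
# for the renormalized coupling Lhat5r.  Inputs are assumed physical values.
F_pi, m_pi, m_K = 0.0924, 0.1396, 0.4937   # GeV
ratio = 1.22                               # F_K / F_pi
Lhat5r = (ratio - 1.0) * F_pi**2 / (4.0 * (m_K**2 - m_pi**2))
print(f"Lhat5r = {Lhat5r:.2e}")            # ~ 2.1e-03
\end{verbatim}
The result, $\hat{L}_5^r\simeq 2\times 10^{-3}$, is of the size expected for this coupling.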
The factorizable contributions to the $Q_6$ and $Q_8$ operators can be obtained in a straightforward way from Eq.~(\ref{opd2}). As the tree level expansion of $Q_6$, due to the unitarity of the matrix field $U$, starts at ${\cal O}(p^2)$, no terms arise from the renormalization of the wave functions and masses, nor from the renormalization of the bare decay constant $f$. These corrections will be of higher order. Only the renormalization of the ${\cal O}(p^2$) parameters enters the calculation. This statement does not hold for the electroweak operator $Q_8$ which, for $K^0\rightarrow\pi^+\pi^-$, induces a non-vanishing tree matrix element at ${\cal O}(p^0)$. In conclusion, using a cut-off regularization the evolution of the density operator up to the orders $p^2$ and $p^0/N_c$ is given, modulo finite loop corrections, by Eq.~(\ref{opd2}). Our result exhibits no explicit scale dependence. Moreover, it does not depend on the momentum prescription inside the loops. The finite terms, on the other hand, will not be absorbed completely in the renormalization of the various parameters. This can be seen, e.g., from the fact that the diagrams of Fig.~1 contain rescattering processes which induce a non-vanishing imaginary part. As the renormalized parameters are defined to be real, the latter will remain. In addition, the real part of the finite corrections carries a dependence on the momentum prescription used to define the cut-off. However, we proved that the chiral loop diagrams do not induce ultraviolet divergent terms. Therefore we are allowed to calculate the remaining finite corrections in dimensional regularization, which is momentum translation invariant (i.e., we are allowed to take the limit $\lambda_c\rightarrow\infty$). This procedure implies an extrapolation of the low-energy effective theory for terms of ${\cal O}(m_{\pi,K}^2/ \lambda_c^2;\,\,m_{\pi,K}^4/\lambda_c^4;\,\,\ldots)$ up to scales where these terms are negligible. This is the usual assumption made in chiral perturbation theory for three flavors. \section{Non-factorizable \boldmath $1/N_c$ \unboldmath Corrections \label{NFAC}} The non-factorizable $1/N_c$ corrections to the hadronic matrix elements constitute the part to be matched to the short-distance Wilson coefficient functions; i.e., the corresponding scale $\Lambda_c$ has to be identified with the renormalization scale $\mu$ of QCD. As the non-factorizable terms are ultraviolet divergent, we calculate their contribution with a Euclidean cut-off, following the discussion in the introduction. The integrals will generally depend on the momentum prescription used inside the loop. In the existing studies of the hadronic matrix elements the color singlet boson connecting the two densities (or currents) was integrated out from the beginning \cite{Buch,BBG,JMS1,EAP2}. Thus the integration variable was taken to be the momentum of the meson in the loop, and the cut-off was the upper limit of its momentum. As there is no corresponding quantity in the short-distance part, in this treatment of the integrals there is no clear matching with QCD. This ambiguity is removed, for non-factorizable diagrams, by considering the two densities to be connected to each other through the exchange of the color singlet boson, as was already discussed in Refs.~\cite{BB,FG,BGK,PS,TH}. A consistent matching is then obtained by assigning the same momentum to the color singlet boson at long and short distances and by identifying this momentum with the loop integration variable.
Consequently, the matching fixes the frame and no other translated frame is appropriate. \protect{\vspace{9mm}} \noindent \centerline{\epsfig{file=fig8.eps,width=9.71cm}}\\[8pt] \centerline{ \footnotesize Fig.~5. Non-factorizable loop diagrams for the evolution of a density-density operator. } \\[9pt] Then, associating the cut-off to the effective color singlet boson, at the ${\cal O}(p^0)$ in the chiral expansion of the $Q_6$ and $Q_8$ operators, from the diagrams of Fig.~5 we obtain (in the isospin limit) the following evolution of $Q_6$ and $Q_8$ in the background field approach: \begin{eqnarray} Q_6^{N\hspace{-0.35mm}F}(\Lambda_c^2)&=&F_\pi^2\left(\frac{2m_K^2}{\hat{m}+m_s}\right)^2 \frac{\log \Lambda_c^2}{(4\pi)^2}\Bigg[\frac{3}{4}(\partial_\mu \bar{U}^\dagger \partial^\mu \bar{U})_{ds} \nonumber \\ &&+\frac{1}{2}(\partial_\mu \bar{U}^\dagger \bar{U})_{ds}\sum_q(\bar{U}\partial^\mu \bar{U}^\dagger)_{qq} +\frac{3}{4}(\bar{U}^\dagger\chi+\chi^\dagger\bar{U})_{ds}\Bigg] \,,\label{q6op}\\[4mm] Q_8^{N\hspace{-0.35mm}F}(\Lambda_c^2)&=&\frac{3}{2}F_\pi^2\left(\frac{2m_K^2}{\hat{m}+m_s} \right)^2\frac{\log \Lambda_c^2}{(4\pi)^2}\sum_q e_q\Bigg[\frac{1}{4} (\partial_\mu \bar{U}^\dagger\partial^\mu \bar{U})_{ds}\delta_{qq} \nonumber \\ && +\frac{1}{2}(\partial_\mu \bar{U}^\dagger\bar{U})_{ds}(\bar{U}\partial^\mu \bar{U}^\dagger)_{qq} +\frac{1}{4}(\bar{U}^\dagger\chi+\chi^\dagger \bar{U})_{ds}\delta_{qq} +\frac{1}{3}\alpha(\bar{U}^\dagger)_{dq} (\bar{U})_{qs}\Bigg].\hspace*{7mm}\label{q8op} \end{eqnarray} Only the diagonal evolution of $Q_6$, i.e., the first term on the right-hand side of Eq.~(\ref{q6op}), gives a non-zero contribution to the $K \rightarrow \pi \pi$ matrix elements. In particular, the mass term which is of the $L_8$ and $H_2$ form vanishes for $K\rightarrow\pi\pi$ decays, as do the $L_8$ and $H_2$ contributions at the tree level (due to a cancellation between the tadpole and non-tadpole diagrams). In Eq.~(\ref{q8op}) for completeness we kept the terms proportional to $\delta_{qq}$ which, however, cancel through the summation over the flavor index. Note that Eqs.~(\ref{q6op}) and (\ref{q8op}) are given in terms of operators and, consequently, can be applied to $K\rightarrow 3\pi$ decays, too. Note also that our results, Eqs.~(\ref{q6op}) and (\ref{q8op}), exhibit no quadratic dependence on the scale $\Lambda_c$; i.e., up to the first order corrections in the twofold expansion in $p^2$ and $1/N_c$ the matching involves only logarithmic terms from both the short- {\it and} the long-distance evolution of the four-quark operators. This is due to the fact that there is no quadratically divergent diagram in Fig.~5 apart from the first one which vanishes for the $Q_6$ and $Q_8$ operators. Moreover, for a general density-density operator there are no logarithms which are the subleading logs of quadratically divergent terms. Therefore, all the logarithms appearing in Eqs.~(\ref{q6op}) and (\ref{q8op}) are leading divergences, which are independent of the momentum prescription. The finite terms calculated along with these logarithms depend on the momentum prescription. They are, however, uniquely determined through the matching condition with QCD which fixes the momenta in the loop as explained above. One might note that the statements we made above do not hold for current-current operators: the $1/N_c$ corrections to these operators, performed in the first non-vanishing order of their chiral expansion, exhibit terms which are quadratic in $\Lambda_c$. 
Furthermore, these terms were already shown to depend on the momentum prescription \cite{FG}. We close this section by giving the long-distance evolution, at ${\cal O}(p^0)$, of a general density-density operator $Q_D^{abcd}\equiv-8(D_R)_{ab}(D_L)_{cd}$. As we showed in Section 3, the factorizable $1/N_c$ corrections do not affect its ultraviolet behaviour. Then, from the non-factorizable diagrams of Fig.~5 we find: \begin{eqnarray} Q_D^{abcd}(\Lambda_c^2)&=&Q_D^{abcd}(0)\left[1-\frac{2}{3}\frac{\alpha} {F_\pi^2}\frac{\log \Lambda_c^2}{(4\pi)^2}\right]-F_\pi^2\left(\frac{2m_K^2} {\hat{m}+m_s}\right)^2\frac{\Lambda_c^2}{(4\pi)^2} \delta^{da}\delta^{bc} \nonumber \\[1mm] &&+\frac{F_\pi^2}{4}\left(\frac{2m_K^2}{\hat{m}+m_s}\right)^2 \frac{\log \Lambda_c^2}{(4\pi)^2}\Big[(\bar{U}^\dagger\chi+\chi^\dagger \bar{U})^{da}\delta^{bc}+\delta^{da}(\chi\bar{U}^\dagger+\bar{U} \chi^\dagger)^{bc}\hspace*{4mm}\nonumber \\[1mm] &&+(\partial_\mu \bar{U}^\dagger\partial^\mu\bar{U})^{da}\delta^{bc} +\delta^{da}(\partial_\mu \bar{U}\partial^\mu\bar{U}^\dagger)^{bc} +2(\partial_\mu \bar{U}^\dagger\bar{U})^{da}(\bar{U}\partial^\mu \bar{U}^\dagger)^{bc}\Big]\,. \label{qgop} \end{eqnarray} The corresponding expressions for the non-factorizable loop corrections to the operators $Q_6$ and $Q_8$, Eqs.~(\ref{q6op}) and (\ref{q8op}), can be obtained directly from Eq.~(\ref{qgop}). \section{Discussion} In summary, since the non-factorizable contributions contain (logarithmically) divergent terms, we consider that these contributions have to be calculated within a cut-off regularization. Therefore, at the level of the finite terms [but, as we have shown, to ${\cal O}(p^0/N_c)$ not at the level of the divergent terms] the translational non-invariance could {\it a priori} render the calculation of the loops arbitrary. However, for the non-factorizable diagrams a consistent matching (in which we can identify the same quantity in the short- and long-distance pictures) fixes the momentum prescription and renders the result unambiguous. On the other hand, there is no way to establish a unique momentum prescription for the factorizable diagrams. Nevertheless, as the complete sum of the factorizable diagrams is finite, for this sum we are allowed to take the limit $\lambda_c \rightarrow \infty$ and to use dimensional regularization which yields an unambiguous result, too. Consequently, in the factorizable sector at the level of the finite terms only the sum of all (factorizable) diagrams is meaningful. To be explicit, we have no access to the renormalization of the couplings separately as their divergences induce an arbitrariness at the level of the finite terms. The case of the operator $Q_6$ is particularly illustrative. At the tree level this operator vanishes to ${\cal O}(p^0$) due to the unitarity of the matrix $U$. Nevertheless, the one-loop corrections to the ${\cal O}(p^0$) $(U^\dagger)_{dq} (U)_{qs}$ term must be computed. Indeed, as long as we keep track of the density-density structure of the operator $Q_6$ (to separate the factorizable and the non-factorizable diagrams) these corrections are non-vanishing. In particular, we have shown that the non-factorizable diagrams over the $(U^\dagger)_{dq} (U)_{qs}$ operator yield a non-trivial dependence on the scale $\Lambda_c$ which has to be matched to the short-distance contribution. In addition, the logarithms of Eq.~(\ref{opd}) are needed in order to cancel the scale dependence of the various bare parameters in the tree level expressions as shown in Section~3.
We note that in the twofold expansion in $p^2$ and $1/N_c$ the contribution of the loops over the ${\cal O}(p^0$) matrix element must be treated at the same level as the leading non-vanishing ${\cal O}(p^2$) tree level contribution proportional to $L_5$. This statement does not hold for $Q_8$ whose ${\cal O}(p^0/N_c$) corrections are subleading with respect to the leading ${\cal O}(p^0$) tree level. We close with a note on the comparison of the evolution of the operators $Q_6$ and $Q_8$ at long and short distances. As argued above, to ${\cal O}(p^0/N_c)$ the long-distance evolution of $Q_6$ and $Q_8$ is only logarithmic as in the short-distance (QCD) picture. Except for the case where the coefficients of the logs are strictly equal in both domains, this property prevents us from determining any value of $\Lambda_c$ for which the matching is completely flat. It turns out that, even if the coefficients of the logarithms are found to be relatively moderate at long distance, they are still larger than the corresponding short-distance ones. This is to be expected as the short-distance coefficients are close to zero, and as we have calculated only the lowest order (long-distance) evolution in a theory which is truncated to the pseudoscalar mesons. The corrections we have calculated are the first order corrections over the well established ${\cal O}(p^2$) lagrangian, and the slope obtained for the scale dependence of the matrix elements is unambiguous. The fact that the long- and short-distance coefficients are different does not necessarily mean that the effects of higher order corrections and higher resonances are large for the absolute values of the matrix elements. However, it is desirable to investigate these effects explicitly. \newpage \noindent \begin{center}{\large Acknowledgements} \end{center} This work has been done in collaboration with G.O.~K\"ohler, E.A.~Paschos, and W.A.~Bardeen. We wish to thank J. Bijnens, J. Fatelo, and J.-M. G\'erard for helpful comments. Financial support from the Bundesministerium f\"ur Bildung, Wissenschaft, Forschung und Technologie (BMBF), 057D093P(7), Bonn, FRG, and DFG Antrag PA-10-1 is gratefully acknowledged.
\section{Introduction} Models of dynamical electroweak symmetry breaking (DEWSB) like Technicolor are possible extensions of the Standard Model, and many proposals have been put forward for a strongly--interacting sector beyond the Standard Model since the original ones in Refs.~\cite{Weinberg:1979bn,Susskind:1978ms}. Viable models of DEWSB must satisfy the constraints that follow from electroweak precision data~\cite{Peskin:1990zt,Altarelli:1990zd}. These constraints put severe limitations on Technicolor candidates, and QCD--like theories naively rescaled to the electroweak scale are already ruled out, see e.g. Refs.~\cite{Hill:2002ap,Sannino:2008ha} for recent reviews. The constraints are nicely encoded in bounds for the value of the $S$ parameter introduced in Refs.~\cite{Peskin:1990zt,Peskin:1991sw}. Theories near an infrared fixed point (IRFP) have been advocated as promising candidates for DEWSB. Preliminary attempts at estimating analytically the $S$ parameter for these theories suggest that it is much reduced compared to QCD--like theories~\cite{Appelquist:1998xf,Harada:2005ru}. Unfortunately these computations are not based on first principles, and have to rely on assumptions that are difficult to control. Early studies focused on finding evidence for an IRFP in theories with a large number of flavors in the fundamental representation of the gauge group following the seminal example in Ref.~\cite{Banks:1981nn}. More recently, novel models have been proposed based on smaller numbers of fermions in higher--dimensional representations~\cite{Dietrich:2006cm}. Several phenomenological scenarios have been proposed which build upon these ideas~\cite{Luty:2004ye,Foadi:2007ue}. Lattice simulations are now in a position to address these questions from first principles, so that the non-perturbative dynamics can be treated in a systematic way. The existence of IRFPs has been investigated in theories with fundamental fermions~\cite{Appelquist:2007hu,Deuzeman:2008sc, Deuzeman:2009mh,Fodor:2009nh,Hasenfratz:2009ea,Fodor:2009wk,Appelquist:2009ty} and higher representations~\cite{Catterall:2007yx,DelDebbio:2008wb,Shamir:2008pb, DelDebbio:2008zf,Catterall:2008qk,DeGrand:2008kx,Hietanen:2008mr, Hietanen:2009az,DeGrand:2009mt,DeGrand:2009et,DelDebbio:2009fd}. These preliminary studies have mapped out the space of bare lattice parameters and are starting to study the spectrum of the candidate theories, and the flow of renormalized couplings. First results hint towards an interesting landscape of theories that could exhibit scale invariance at large distances. Computing the $S$ parameter in these theories from first principles is an important ingredient in trying to build successful phenomenological models. The $S$ parameter is obtained on the lattice from the form factors that appear in the momentum--space VV-AA correlator, as defined below in Section~\ref{sec:vac}. Chiral symmetry plays an important role in guaranteeing the cancellation of power--divergent singularities when computing the above correlator. Hence lattice formulations that preserve chiral symmetry at finite lattice spacing are particularly well--suited for these studies. QCD is the ideal testing ground to develop and test the necessary lattice technology. In this work, we compute the $S$ parameter in QCD with $n_f=2+1$ flavors of Domain Wall fermions (DWF).
Our study closely follows the procedure described in Ref.~\cite{Shintani:2008qe} where the $S$ parameter (or equivalently the $SU(3)$ Low Energy Constant $L_{10}$) was first computed from vacuum polarisation functions (VPFs) using overlap fermions. We adopt the method, apply it to Domain Wall Fermions, and widen its scope by using conserved currents and larger physical volumes. Section~\ref{sec:vac} contains a short account of computational methods for the VPF on the lattice. Section~\ref{sec:contact} addresses the topics of power divergences and residual chiral symmetry breaking. In Section~\ref{sec:res} we present the numerical results obtained on the gauge configurations produced by the RBC and UKQCD collaborations using Domain Wall Fermions. Section~\ref{sec:pion} contains our results for the pion mass splitting. A discussion of the numerical results and a short conclusion can be found in Section~\ref{sec:conc}. \section{Vacuum Polarisation Functions} \label{sec:vac} Domain Wall Fermions are a five-dimensional formulation of lattice QCD with an approximate chiral symmetry \cite{Kaplan:1992bt,Shamir:1993zy, Furman:1994ky}. The residual explicit breaking of chiral symmetry appears in the Ward identities of the theory as terms proportional to the so--called residual mass $m_\mathrm{res}$. The conserved vector and axial currents form a multiplet under this approximate lattice chiral symmetry. The basic observables in this work are vacuum polarisation functions of the vector and the axial vector current. They are defined as current-current two-point functions in momentum space, \begin{align} \Pi_{\mu\nu}^V(q)&\equiv\sum_{x}\e^{\ii q\cdot x} \vev{0|\mathcal{V}_\mu(x)V_\nu(0)|0},\label{eq:piv}\\ \Pi_{\mu\nu}^A(q)&\equiv\sum_{x}\e^{\ii q\cdot x} \vev{0|\mathcal{A}_\mu(x)A_\nu(0)|0},\label{eq:pia} \end{align} where $\mathcal{V}_\mu$ and $\mathcal{A}_\mu$ are the conserved vector and axial currents and $V_\mu$ and $A_\mu$ are the corresponding local currents. We consider local-conserved correlators from a new set of point source propagators with up to two units of spatial momentum. The definition of the conserved currents can be found in Ref.~\cite{Furman:1994ky}. Since the Fourier transform in Eqs.~(\ref{eq:piv},\ref{eq:pia}) includes $x=0$, power--divergent contributions can arise. Due to lattice chiral symmetry these divergences cancel in the difference of the vector and axial vector correlators, if conserved currents are used. Note that similar observables are used to compute hadronic contributions to the anomalous magnetic moment of the muon in Refs.~\cite{Blum:2002ii, Gockeler:2003cw, Aubin:2006xv}, where one can find a more detailed discussion of the renormalization of the correlators. A similar cancellation was pointed out in Ref.~\cite{Shintani:2008qe}, where overlap fermions have been used. Following Ref.~\cite{Shintani:2008qe} we decompose the difference $\Pi_{\mu\nu}^{V-A}\equiv\Pi_{\mu\nu}^{V}-\Pi_{\mu\nu}^A$ into a longitudinal and a transverse part, \begin{align} \label{eq:pi1} \Pi_{\mu\nu}^{V-A}=\left(q^2\delta_{\mu\nu}-q_\mu q_\nu\right) \Pi^{(1)}(q^2) - q_\mu q_\nu\Pi^{(0)}(q^2). \end{align} For each momentum we average all components of $\Pi_{\mu\nu}$ which contribute to only one of the ``polarisations''. For example, for momenta with one non-zero direction $q_\kappa$ we have $q^2\Pi^{(0)}=-\Pi_{\kappa\kappa}^{V-A}$ and $q^2\Pi^{(1)}=\frac{1}{3}\sum_{\mu\neq\kappa}\Pi_{\mu\mu}^{V-A}$.
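For momenta with a single non-zero component this projection is straightforward to implement; a minimal sketch follows (the function and array names are ours and do not refer to the actual analysis code):
\begin{verbatim}
# Sketch: extract q^2 Pi^(1) and q^2 Pi^(0) from the 4x4 correlator
# Pi^{V-A}_{mu nu}(q) using the decomposition above, for a momentum q
# with exactly one non-zero component.
import numpy as np

def project(pi_vma, q):
    kappa = int(np.flatnonzero(q)[0])   # the single non-zero direction
    q2_pi0 = -pi_vma[kappa, kappa]      # Pi_{kk} = -q^2 Pi^(0)
    q2_pi1 = np.mean([pi_vma[mu, mu]    # (1/3) sum over the three mu != kappa
                      for mu in range(4) if mu != kappa])
    return q2_pi1, q2_pi0
\end{verbatim}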
We use $SU(2)$ Chiral Perturbation Theory (ChPT) to fit our data and extract the Low Energy Constant $l_5^r$. Since we are using $2+1$ flavor lattices the low-energy constants will implicitly depend on the strange quark mass. The more common notation in the literature is to use the $SU(3)$ LEC $L_{10}^r$ which is related to $l_5^r$ by \begin{align} \begin{split} L_{10}^r&=l_5^r-\frac{1}{384\pi^2}\left(\log\frac{m_K^2}{\mu^2}+1\right),\\ &=l_5^r-3\cdot10^{-5}\;\text{for}\;\mu=m_\rho. \end{split} \end{align} The ChPT result for $\Pi^{(1)}$ can be found in Ref.~\cite{Gasser:1984gg}: \begin{align} q^2\Pi^{(1)}\left(m_\pi,q^2\right)&= -f_\pi^2-\left[\frac{1}{48\pi^2}\left(\bar{l}_5-\frac{ 1}{3}\right) -\frac{1}{3}\sigma^2\bar{J}(\sigma)\right]q^2 +O(q^4)\label{eq:chpt},\\ \text{with}\quad\bar{J}(\sigma)&=\frac{1}{16\pi^2}\left (\sigma\log\frac{\sigma-1}{\sigma+1}+2\right)\quad\text{and}\quad \sigma=\sqrt{1-\frac{4m_\pi^2}{q^2}}. \end{align} With $m_\pi$, $f_\pi$ known from fits to pseudoscalar and $A_0$ correlators (Table \ref{tab_meson}) there is only one free parameter in (\ref{eq:chpt}) for each choice of the chiral scale $\mu$. The scale invariant LEC $\bar{l}_5$ is defined by \begin{align} \bar{l}_5=-192\pi^2\,l_5^r(\mu)-\log\frac{m_\pi^2}{\mu^2}, \end{align} and the corresponding convention for the $S$ parameter \cite{Peskin:1990zt} is \begin{align} S=\frac{1}{12\pi}\left[-192\pi^2l_5^r(\mu)+\log(\mu^2/m_H^2)-\frac{1}{6}\right]. \label{eq:sl5} \end{align} \section{Contact Terms in Ward Identities} \label{sec:contact} \begin{figure}[t] \includegraphics[width=.45\textwidth]{graphs/wardid.pdf} \includegraphics[width=.45\textwidth]{graphs/wardid_VmA.pdf} \caption{Ward identity violations in the chiral limit for the vector and axial vector currents (left) and their difference (right). The same scale is chosen in both plots to visualise the cancellation.\label{fig:ward_id}} \end{figure} Contact terms in the Ward identities can yield finite contributions to the Fourier transform, so that $\Pi^{V/A}_{\mu\nu}$ is no longer transverse. However, the contact terms cancel exactly in the difference between the vector--vector and the axial--axial correlator for DWF in the $L_s\to\infty$ and massless limit, provided the conserved/local correlators are used. Any power--divergent contribution also cancels in this difference as a result of the chiral symmetry of DWF. Following the notation introduced in the previous section we test the vector and axial Ward identities by extrapolating $q_\nu\Pi^{V/A}_{\mu\nu}$ to the chiral limit. We find that Ward identity violations are very similar for both currents and contact term contributions are greatly suppressed in the difference $\Pi_{\mu\nu}^{V-A}$, as shown in Figure~\ref{fig:ward_id}. The cancellation in the chiral limit also hints at only small effects from the non-conservation of the axial current due to the residual mass of Domain Wall Fermions. \begin{figure}[b] \includegraphics[width=.45\textwidth]{graphs/pi1_zaerror.pdf} \includegraphics[width=.45\textwidth]{graphs/pi1_zaerror2.pdf} \caption{Left: cancellation between $\Pi^{(V)}$ and $\Pi^{(A)}$ as a function of $q^2$, Right: relative error for given $\Delta$ for the lowest momenta.\label{fig:zaerror}} \end{figure} The DWF axial current acquires a small multiplicative renormalisation which vanishes in the $L_s\to\infty$ limit~\cite{Shamir:1993zy}. Since our simulation is done at a fixed length in the fifth dimension ($L_s=16$) we have to consider this effect as part of our error budget.
In Refs.~\cite{Christ:2005xh,Sharpe:2007yd,Allton:2008pn} it has been argued that $\Delta=|Z_{\mathcal{A}}-1|$ receives contributions from both delocalized modes above the mobility edge $\lambda_c$, and from localized near zero modes of $H_W$. The former is proportional to $\e^{-\lambda_cL_s}$ while the latter is proportional to $\frac{1}{L_s^2}$. These correspond to two different contributions to the residual mass: the exponential piece is linear in the corresponding component of $m_\text{res}$, while the $\frac{1}{L_s^2}$ piece is quadratic in the localised contribution to $m_\text{res}$, which is the larger one in our case. As a pragmatic approach we vary $\Delta$ between 0 and $3am_\text{res}$ and estimate the relative error on the difference $\Pi^{(1)}$ as $\Delta \frac{\Pi^{(1),A}}{\Pi^{(1)}}$. Our conclusion is that there is no large cancellation for the local-conserved correlators since $q^2\Pi^{(1),V}$ approaches zero, as shown in Figure \ref{fig:zaerror}. We assume a conservative three percent systematic error in $\Pi^{(1)}$ for the non-conservation of the axial current. \section{Results} \label{sec:res} The data presented are from the ensembles generated by the RBC and UKQCD collaborations with the Iwasaki gauge action at $\beta=2.25$, which corresponds to a lattice spacing of $a^{-1}=2.33(4)\,\text{GeV}$. The details of the ensembles will be published in \cite{Aoki:2009xx}. We simulate with three values of the light quark mass which correspond to pion masses in the range $290\,\text{MeV}\le m_\pi\le 400\,\text{MeV}$. Our chiral fits for $l_{5}$ rely on previous measurements of the pseudoscalar mass and decay constant at the unitary quark masses. For convenience we summarize the results obtained from correlators with gauge-fixed wall sources in Table \ref{tab_meson}, along with the vector and axial vector ground state masses, which are used for a consistency check based on Weinberg's sum rules and our data for $l_5$. \begin{table}[hbt] \begin{center} \begin{tabular}{c|c|c|c|c} $am_l$ & $am_\pi$ & $af_\pi$ & $am_V$ & $am_A$ \\ \hline $0.004$ & $0.1269(4)$ & $0.0619(3)$ & $0.356(6)$ & $0.522(13)$ \\ $0.006$ & $0.1512(3)$ & $0.0645(3)$ & $0.366(5)$ & $0.543(18)$ \\ $0.008$ & $0.1727(4)$ & $0.0671(3)$ & $0.388(6)$ & $0.551(9)$ \\ \end{tabular} \caption{Meson masses and pseudoscalar decay constant in lattice units. \label{tab_meson}} \end{center} \end{table} In our analysis of $\Pi^{(1)}$ we include spatial momenta up to $(1,1,0)$ (or equivalent). We find excellent agreement between results with and without spatial momentum for the local-conserved $\Pi^{(1)}$, Figure \ref{fig:pi1}. This is in contrast to the local-local data where there are two distinct branches of points. \begin{figure}[hbt] \includegraphics[width=.7\textwidth]{graphs/Pi1mom1_con.pdf} \caption{$q^2\Pi^{(1)}$ at low momentum.
The different symbols correspond to momenta of type $(n+1,0,0,0)$: squares, $(n,1,0,0)$: circles, $(n,1,1,0)$: triangles, with $n=0,1,2$.\label{fig:pi1}} \end{figure} \begin{table}[ht] \begin{center} \begin{tabular}{c|c|c|c|c} momentum & $(ap)^2$ & $am$ & $q^2\Pi^{(1)}\cdot10^3$ & $L_{10}\cdot10^3$ \\ \hline $(1,0,0,0)$ & $ 0.010 $ & $ 0.004 $ & $ -3.37 ( 12 )$ & $ -5.6 ( 1.6 )$ \\ & & $ 0.006 $ & $ -3.79 ( 16 )$ & $ -4.5 ( 2.1 )$ \\ & & $ 0.008 $ & $ -3.95 ( 13 )$ & $ -7.0 ( 1.7 )$ \\ \hline $(2,0,0,0)$ & $ 0.039 $ & $ 0.004 $ & $ -2.49 ( 10 )$ & $ -4.02 ( 32 )$ \\ & & $ 0.006 $ & $ -2.88 ( 14 )$ & $ -3.89 ( 45 )$ \\ & & $ 0.008 $ & $ -3.07 ( 11 )$ & $ -4.55 ( 37 )$ \\ \hline $(0,1,0,0)$ & $ 0.039 $ & $ 0.004 $ & $ -2.50 ( 10 )$ & $ -3.99 ( 32 )$ \\ & & $ 0.006 $ & $ -2.87 ( 15 )$ & $ -3.94 ( 50 )$ \\ & & $ 0.008 $ & $ -3.22 ( 10 )$ & $ -4.06 ( 34 )$ \\ \hline $(1,1,0,0)$ & $ 0.048 $ & $ 0.004 $ & $ -2.25 ( 9 )$ & $ -3.77 ( 23 )$ \\ & & $ 0.006 $ & $ -2.63 ( 14 )$ & $ -3.74 ( 37 )$ \\ & & $ 0.008 $ & $ -2.86 ( 10 )$ & $ -4.17 ( 26 )$ \\ \hline $(0,1,1,0)$ & $ 0.077 $ & $ 0.004 $ & $ -1.77 ( 8 )$ & $ -3.07 ( 13 )$ \\ & & $ 0.006 $ & $ -2.11 ( 12 )$ & $ -3.16 ( 20 )$ \\ & & $ 0.008 $ & $ -2.31 ( 8 )$ & $ -3.51 ( 14 )$ \\ \end{tabular} \caption{Results for $q^2\Pi^{(1)}$ and the resulting effective $L_{10}$ for the lowest momenta and all three masses. \label{tab_res}} \end{center} \end{table} As a first step we obtain effective values for $L_{10}$ for each mass and momentum from our data for $\Pi^{(1)}$, Eq.~(\ref{eq:pi1}). Our results are summarised in Table \ref{tab_res}. Since ChPT is known to have only a small radius of convergence in $p^2$, we restrict ourselves to only the lowest momenta. We then perform a one-parameter fit to obtain our central value for $L_{10}$. In Figure \ref{fig:l10_fitrange} the dependence of $L_{10}$ on the momentum fit-range is shown for a fit including all three masses. In the lower panel we observe that the $\chi^2/_\text{d.o.f.}$ becomes larger than one when more than the lowest two momenta are included. Fits with fewer mass values exhibit the same behaviour. We conclude, however, that including only the lowest momentum gives a more reliable result, since the higher momenta yield values for $q^2\Pi^{(1)}$ which lie consistently below the curve of a fit to the smallest momentum alone (Figure \ref{fig:l10_bestfit}). Our central value therefore is obtained from a fit to all three masses and the lowest momentum only. \begin{figure}[hbt] \begin{center} \includegraphics[width=.7\textwidth]{graphs/L10_equiv_2D_mmax3pmaxdep.pdf} \caption{Upper panel: dependence of $L_{10}$ on the fit-range in $(ap)^2$. Lower panel: reduced $\chi^2$ of the corresponding fits described in the text. The dashed vertical line denotes the chiral scale in lattice units.\label{fig:l10_fitrange}} \end{center} \end{figure} In Figure \ref{fig:l10_bestfit} the data points included in the determination of $L_{10}$ and the error band of the resulting fit are shown plotted against mass (left) and momentum (right). \begin{figure}[hbt] \begin{center} \includegraphics[width=.45\textwidth]{graphs/L10_equiv_2D_mmax3_pmax1.pdf} \includegraphics[width=.45\textwidth]{graphs/L10_equiv_2D_psq_mmax3_pmax1.pdf} \caption{Data set and fit band used for the final result for $L_{10}$. \label{fig:l10_bestfit}} \end{center} \end{figure} Our value for $L_{10}^r$ is \begin{align} L_{10}^r(\mu=0.77\,\text{GeV})=-0.0057(11)_\text{stat}(7)_\text{sys}.
\label{eq:result} \end{align} The systematic error has several contributions: fit-range, lattice artefacts, scale setting, strange quark mass and finite volume, which are described in more detail below. We estimate the error on the chiral fit by varying the fit-range in mass and $q^2$ and obtain an error of $0.0006$, or $11\%$. The leading lattice artefacts with Domain Wall Fermions are parametrically of order $a^2\Lambda_\text{QCD}^2$. We assume $\Lambda_\text{QCD}=300\,\text{MeV}$, use $a^{-1}=2.33(4)\,\text{GeV}$ and double the result to obtain a three percent error. The uncertainty in the lattice scale is relevant for the definition of the chiral scale $\mu=m_\rho$. We vary $a^{-1}$ within its error and obtain an additional two percent error. The dynamical strange quark mass in this computation is fixed to $am=0.03$ which differs substantially from the physical value. Using a reweighting procedure for the strange quark determinant \cite{jung:2009xx} we vary the strange quark mass down to the physical point in lattice units and find variations in $L_{10}^r$ of less than three percent. Possible finite volume effects (FVE) are well under control, since at the lightest mass we have $m_\pi L=4$ and the estimate for FVE for $f_\pi$ from \cite{Colangelo:2005gd} is $0.5\%$. Adding the errors in quadrature we find a total systematic error of $0.0007$ which is dominated by the error on the chiral fit. \begin{figure}[hbt] \begin{center} \includegraphics[width=.7\textwidth]{graphs/globalnew.pdf} \caption{Comparison of $L_{10}^r$ with other determinations. \label{fig:global}} \end{center} \end{figure} Comparing our result with the previous lattice determination \cite{Shintani:2008qe} and with phenomenological estimates \cite{Bijnens:1994qh,Ecker:2007dj}, we find it to be fully consistent with them (Figure \ref{fig:global}). Lattice data for the lowest states in the vector and axial vector channels allow us to cross-check our result with the following sum rules for the spectral densities $\rho_{V/A}$ \cite{Weinberg:1967kj,Das:1967ek}, \begin{subequations} \begin{align} \int \dd s s\left(\rho_V(s)-\rho_A(s)\right) &= 0, \\ \int \dd s \left(\rho_V(s)-\rho_A(s)\right) &= f_\pi^2, \\ \int \dd s \frac{1}{s} \left(\rho_V(s)-\rho_A(s)\right) &= -8\overline{L}_{10}. \end{align} \end{subequations} We assume that the spectral densities are saturated by the lightest resonances, \begin{align} \rho_{V/A}\approx f_{V/A}^2\delta(s-m_{V/A}^2). \end{align} The resulting simplified sum rules are solved for $\overline{L}_{10}$ using our values for $m_V,m_A$ and $f_\pi$ from Table \ref{tab_meson}, \begin{subequations}\label{eq:sr} \begin{align} (f_Vm_V)^2-(f_Am_A)^2&=0, \label{eq:sr1}\\ f_V^2-f_A^2 &= f_\pi^2, \label{eq:sr2} \\ \frac{f_V^2}{m_V^2}-\frac{f_A^2}{m_A^2} &= -8\overline{L}_{10}.\label{eq:sr3} \end{align} \end{subequations} \begin{figure}[hbt] \begin{center} \includegraphics[width=.7\textwidth]{graphs/sumrules.pdf} \caption{Results for $L_{10}^r$ from the sum rules in Eq. (\ref{eq:sr}). These are compared with the result from Eq. (\ref{eq:result}), which is indicated by the blue band.\label{fig:sumrules}} \end{center} \end{figure} Thus we obtain an independent estimate for $\overline{L}_{10}$ for each quark mass, which we convert to $L_{10}^r(m_\rho)$ using the pion masses from Table \ref{tab_meson}. In Figure \ref{fig:sumrules}, these results are shown by the black circles and are compared with our best estimate (\ref{eq:result}), indicated in the figure by the blue band. We find them to be in good agreement.
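The simplified sum rules can be solved in closed form: Eqs.~(\ref{eq:sr1}) and (\ref{eq:sr2}) give $f_V^2=f_\pi^2\,m_A^2/(m_A^2-m_V^2)$ and $f_A^2=f_V^2-f_\pi^2$, which inserted into Eq.~(\ref{eq:sr3}) determine $\overline{L}_{10}$. A minimal numerical sketch of this step (our own, in lattice units, using the $am_l=0.004$ entries of Table~\ref{tab_meson}):
\begin{verbatim}
# Solve the saturated sum rules (sr1)-(sr3) for L10bar; inputs are the
# am_l = 0.004 entries of Table 1 (lattice units).
f_pi, m_V, m_A = 0.0619, 0.356, 0.522
fV2 = f_pi**2 * m_A**2 / (m_A**2 - m_V**2)     # from (sr1) and (sr2)
fA2 = fV2 - f_pi**2                            # (sr2)
L10bar = -(fV2 / m_V**2 - fA2 / m_A**2) / 8.0  # (sr3)
print(f"L10bar = {L10bar:.2e}")                # ~ -5.5e-03
\end{verbatim}
The conversion to $L_{10}^r(m_\rho)$ then proceeds through the pion masses of Table~\ref{tab_meson}, as described above.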
This agreement is non-trivial, since it is obtained at finite lattice spacing and without a chiral extrapolation of the values obtained from the sum rules (\ref{eq:sr1}-\ref{eq:sr3}). \section{Pion Mass Splitting}\label{sec:pion} The computation of the vacuum polarisation functions over the complete $q^2$ range allows the computation of the electromagnetic contribution to the mass splitting between the charged and neutral pions \cite{Das:1967it}, \begin{align} m_{\pi^\pm}^2-m_{\pi^0}^2=-\frac{3\alpha}{4\pi}\int_0^\infty\dd q^2\frac{q^2\Pi^{(1)}}{f_\pi^2}=1261\,\text{MeV}^2.\label{eq:dgmly} \end{align} The simplest possible ansatz to describe the mass and momentum dependence of $\Pi^{(1)}$ which respects the sum rules (\ref{eq:sr1},\ref{eq:sr2}) is \begin{align} \begin{split} q^2\Pi^{(1)}(q^2,m)&=-f_\pi^2+\frac{f_V^2q^2}{m_V^2+q^2}-\frac{f_A^2q^2}{m_A^2+q^2},\\ f_V&=x_1+x_2m_\pi^2,\quad m_V=x_3+x_4m_\pi^2,\\ f_A^2&=f_V^2-f_\pi^2,\quad m_A=m_V\frac{f_V}{f_A}, \end{split}\label{eq:ansatz} \end{align} where $x_1,\ldots,x_4$ are fit parameters. We include all data points up to $(aq)^2<1.0$ in the simultaneous fit to all three masses and obtain a stable fit with $\chi^2/_\text{d.o.f.}=1.04$. Extrapolating to the chiral limit we find \begin{align} -\frac{3\alpha}{4\pi}\int_0^1\dd q^2 \frac{q^2\Pi^{(1)}}{f^2}=1040(220) \,\text{MeV}^2.\label{eq:intlow} \end{align} We have also tested the two different fit forms used in \cite{Shintani:2008qe}, which include an additional term relevant at high $q^2$; however, we do not find a significant change in the fit parameters $x_1,\ldots,x_4$ or a smaller $\chi^2/_\text{d.o.f.}$ when these additional fit parameters are included. \begin{figure}[hbt] \begin{center} \includegraphics[width=.45\textwidth]{graphs/Pi1_qsq_2d.pdf} \includegraphics[width=.45\textwidth]{graphs/mpisplitting_qmax.pdf} \caption{Left: fit with ansatz (\ref{eq:ansatz}), where the grey band denotes the result in the chiral limit. Right: mass splitting $m_{\pi^\pm}^2-m_{\pi^0}^2$ as a function of the cut-off momentum $q_\text{max}^2$, the horizontal line denotes the physical value, the dashed line marks our chosen cut-off. \label{fig:mpi}} \end{center} \end{figure} An illustration of the dependence on the cut-off is shown in Figure \ref{fig:mpi}. The remaining part of the integral is estimated under the assumption that the functional dependence follows the asymptotic behaviour for large $q^2$, \begin{align} \Pi^{(1)} \to \frac{c}{q^6}. \label{eq:highqsq} \end{align} Matching (\ref{eq:highqsq}) to our fit result in the chiral limit we obtain $c=-0.0062(58)\,\text{GeV}^6$. Here we have doubled the error to allow for possible deviations from the asymptotic form. The remaining integral is then \begin{align} -\frac{3\alpha}{4\pi}\int_1^\infty\dd q^2 \frac{q^2\Pi^{(1)}}{f^2}=140(130) \,\text{MeV}^2, \end{align} yielding a pion mass splitting of \begin{align} m_{\pi^\pm}^2-m_{\pi^0}^2=1180(260)\,\text{MeV}^2. \end{align} This error does not include a rigorous investigation of the systematic uncertainties as was done in Sec. \ref{sec:res} for $L_{10}$; however, we expect these uncertainties to be well within the large statistical error of twenty percent obtained from the integral (\ref{eq:intlow}) with $0<(aq)^2\le1$. The value we obtain is in excellent agreement with the experimental value $m_{\pi^\pm}^2-m_{\pi^0}^2=1261\,\text{MeV}^2$~\cite{Amsler:2008zzb}. Finally we can use the result of the fit defined in Eq. (\ref{eq:ansatz}) to obtain another independent determination for $L_{10}$.
From the slope of the ansatz (\ref{eq:ansatz}) in the chiral limit, we find $L_{10}^r(m_\rho)=-5.19(14)\cdot10^{-3}$, which is in agreement with our ChPT analysis. This is a nice check of the saturation of the Weinberg sum rules. Unfortunately the large error on our determination of $l_5$ does not allow a more quantitative comparison. \section{Conclusions} \label{sec:conc} We have computed the $S$ parameter in QCD using the gauge configurations generated by the RBC-UKQCD collaboration for 2+1 flavors of dynamical DWF. According to the procedure outlined in Ref.~\cite{Shintani:2008qe}, the $S$ parameter is extracted from the form factor that appears in the parametrization of the VV-AA correlator. Our final result is \begin{equation} \label{eq:finalres} L_{10}^r(\mu=0.77\,\text{GeV})=-0.0057(11)_\text{stat}(7)_\text{sys}, \end{equation} where the error is still dominated by the statistical precision of our simulation. On the other hand, the systematic error is dominated by the choice of the fit-range used in the chiral extrapolation. A further improvement to these simulations would be to include partially twisted boundary conditions, enabling access to smaller momenta where ChPT can be employed reliably. Using results from meson spectroscopy, we were able to check the saturation of the Weinberg sum rule by the lowest--lying resonances in QCD. At the current level of accuracy, we find that the contribution from the lowest--lying resonances accounts for the total value of the $S$ parameter, which confirms a widely--used assumption in phenomenological studies. Our best estimate for the $S$ parameter (\ref{eq:sl5}) with a Higgs boson mass of $m_H=120\,\text{GeV}$ is \begin{equation} \label{eq:spar} S=0.42(7), \end{equation} where we have rescaled the renormalization scale $\mu$ with the ratio of the pion decay constant to the Higgs vacuum expectation value $v=246\,\text{GeV}$. From the same correlators we have extracted the electromagnetic pion mass splitting, and the result $\Delta m_\pi^2=1180\pm 260\,\text{MeV}^2$ is in excellent agreement with the experimental result. Despite the large error, the present computation shows that it is possible to extract this quantity from the currently available DWF ensembles. The results presented here are only made possible through the chiral properties of DWF which ensure the cancellation of the power--divergent contributions in the current correlators. The QCD analysis we have performed lays the groundwork for the computation of the $S$ parameter in potential Technicolor model candidates such as those proposed in Refs.~\cite{Foadi:2007ue,Ryttov:2008xe}. Computing the $S$ parameter in Technicolor theories is a crucial step in identifying the models that survive the constraints from electroweak precision measurements. We plan to extend this computation to the models recently simulated in Ref.~\cite{DelDebbio:2008zf}. \section*{Acknowledgments} We would like to thank Tom Blum, Norman Christ and Roger Horsley for stimulating discussions. LDD and JZ are supported by Advanced STFC fellowships under the grants PP/C504927/1 and ST/F009658/1. PAB is supported by an RCUK fellowship.
\section{Introduction} Gamma-ray bursts (GRBs) are the most luminous explosive events in the Universe. It is well known that the observed gamma-ray emission is produced by relativistic outflows. The fireball of a GRB is required to have a relativistic speed toward the Earth in order to avoid the ``compactness problem''. The Lorentz factor of the fireball increases with radius in the radiation-dominated acceleration phase. At the end of this phase, the fireball enters the matter-dominated coasting phase, which maintains a constant Lorentz factor (for reviews see Piran 1999; Zhang \& M\'{e}sz\'{a}ros 2004; M\'{e}sz\'{a}ros 2006; Kumar \& Zhang 2015). The Lorentz factor in the matter-dominated coasting phase, called the initial Lorentz factor, is a crucial parameter in understanding the physics of GRBs. There are several methods to constrain this initial Lorentz factor $\Gamma_0$; the most effective one takes the peak time of the early afterglow onset as the deceleration time of the outflow. An estimate of this initial value is possible by measuring the peak time of the afterglow light curve and the prompt isotropic energy $E_{\gamma ,iso}$ (e.g., Molinari et al. 2007). This method was successfully applied by Liang et al. (2010) to a large sample of early afterglow light curves. Liang et al. (2010) discovered a tight correlation between $\Gamma_0$ and $E_{\gamma ,iso}$. In this paper, we re-estimate the initial Lorentz factor of GRBs in the ISM and wind cases, and also obtain a tight correlation between $\Gamma_0$ and $E_{\gamma ,iso}$. The $\Gamma_0$ and $E_{\gamma ,iso}$ correlation in the wind case is even tighter than that in the ISM case. X-ray flares are common phenomena in GRB X-ray afterglows in the {\em Swift} era. According to the temporal behavior and spectral properties of X-ray flares, it is widely argued that X-ray flares are produced by long-lasting central engine activities (Burrows et al. 2005; Ioka et al. 2005; Fan \& Wei 2005; Falcone et al. 2006; Zhang et al. 2006; Wang \& Dai 2013). It is generally believed that X-ray flares have the same physical origin as the prompt emission of GRBs. X-ray flares usually occur at $\sim 10^2-10^5$ s after the prompt emission, transferring a large percentage of the outflow energy to the radiation. Most of the methods proposed to constrain the initial Lorentz factor of GRBs are not suitable for X-ray flares. Since the fluences of most X-ray flares are smaller than those of the prompt emission, their energies and Lorentz factors are expected to be smaller than those of GRBs. In this paper, we use two different methods to place limits on the Lorentz factors of X-ray flares. In Section 2, we introduce the method to estimate the initial Lorentz factor $\Gamma_0$ of GRBs. The two methods for constraining the Lorentz factors of X-ray flares are presented in Section 3. The sample study is presented in Section 4, and our results are summarized and discussed in Section 5. A concordance cosmology with parameters $H_0 = 71$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.30$, and $\Omega_{\Lambda}=0.70$ is adopted. The notation $Q_n$ denotes $Q/10^n$ in cgs units throughout the paper. \section{Estimating the Initial Lorentz Factor $\Gamma_0$} When a relativistic fireball shell sweeps up the circumburst medium, two shocks appear: a reverse shock propagating into the fireball shell, and a forward shock propagating into the ambient medium. We assume that the fireball shell and the two shocks are spherical.
Physical quantities denoted by ``${\prime}$'' are defined in the co-moving frame. We divide this two-shock system into 4 regions (Sari \& Piran 1995; Yi et al. 2013): (1) the unshocked ambient medium ($n_1$, $e_1$, $p_1$, $\Gamma_1$), (2) the shocked ambient medium ($n'_2$, $e'_2$, $p'_2$, $\Gamma_2$), (3) the shocked fireball shell ($n'_3$, $e'_3$, $p'_3$, $\Gamma_3$), and (4) the unshocked fireball shell ($n'_4$, $\Gamma_4=\Gamma_0$), where $n$ is the number density, $e$ is the internal energy density, $p$ is the pressure, and $\Gamma$ is the bulk Lorentz factor. Hereafter we calculate the properties of the reverse shock emission at the radius $R_{\times}$, where the reverse shock finishes crossing the ejecta shell and the Lorentz factor of the shell is $\Gamma_\times$. The assumptions of the equilibrium of pressures and the equality of velocities along the contact discontinuity lead to $p'_2=p'_3$ and $\Gamma_2=\Gamma_3$, respectively. With the jump conditions for the shocks and the equilibrium of pressures, we obtain \begin{equation} (4{\Gamma _{34}} + 3)({\Gamma _{34}} - 1){n'_4} = (4{\Gamma _2} + 3)({\Gamma _2} - 1){n_1} . \end{equation} The Lorentz factor of the reverse shock $\Gamma_{34}$ can be approximated as \begin{equation} {\Gamma _{34}} = \frac{1}{2}\left( {\frac{{{\Gamma _3}}}{{{\Gamma _4}}} + \frac{{{\Gamma _4}}}{{{\Gamma _3}}}} \right) = \frac{1}{2}\left( {\frac{{\Gamma _4^2 + \Gamma _3^2}}{{{\Gamma _4}{\Gamma _3}}}} \right) , \end{equation} as long as $\Gamma_{3} \gg 1$ and $\Gamma_{4} \gg 1$. Substituting Equation (2) into Equation (1), we obtain \begin{equation} \frac{{{{({\Gamma _4} - {\Gamma _3})}^2}}}{{{\Gamma _4}{\Gamma _3}}}\left[ {\frac{{{{({\Gamma _4} + {\Gamma _3})}^2}}}{{{\Gamma _4}{\Gamma _3}}} - \frac{1}{2}} \right]{n'_4} = 4\Gamma _3^2{n_1} . \end{equation} Because $\Gamma_{3} \gg 1$, $\Gamma_{4} \gg 1$ and $\Gamma_{4} \geq \Gamma_{3}$, we ignore the constant $1/2$ term in Equation (3) and thus obtain the solution of this equation (ignoring the negative solution, also see Panaitescu \& Kumar 2004) \begin{equation} {\Gamma _3} = \frac{\Gamma _4}{\left[ 1 + 2{\Gamma _4}\left( {n_1}/{n'_4} \right)^{1/2} \right]^{1/2}} . \end{equation} Here we obtain the relation between the Lorentz factor of the shocked fireball shell $\Gamma_3$ and the initial Lorentz factor $\Gamma_0$ ($\Gamma_4$), which depends on the ratio of the two comoving densities. The number density of the ambient medium is assumed to be $n_1 = AR^{-k}$ (Dai \& Lu 1998; M\'esz\'aros et al. 1998; Chevalier \& Li 2000; Wu et al. 2003, 2005; Yi et al. 2013); such a circumburst medium is a homogeneous interstellar medium (ISM) for $k=0$, and a typical stellar wind environment for $k=2$. The fireball shell is characterized by an initial kinetic energy $E_k$, an initial Lorentz factor $\Gamma_{4}$, and a width $\Delta$ in the lab frame attached to the explosion center, so the number density of the shell in the comoving frame is ${n'_4} = E_{k}/(4\pi {m_p}{c^2}{R^2}\Delta{\Gamma_{4} ^2})$. 
The ratio of the comoving number density of the relativistic shell ${n'_4}$ to the number density of the ambient medium ${n_1}$, defined in Sari \& Piran (1995), is \begin{equation} f = \frac{{n'_4}}{{{n_1}}} = \frac{E_k}{{4\pi A{m_p}{c^2}\Delta \Gamma _4^2{R^{2 - k}}}} = \frac{X}{{\Delta \Gamma _4^2{R^{2 - k}}}}, \end{equation} where $X = E_{k}/(4\pi A {m_p}{c^2})$. The difference between the lab frame speed of the unshocked fireball shell and that of the reverse shock is (Kumar \& Panaitescu 2003) \begin{equation} {\beta _4} - {\beta _{RS}} = \frac{{1.4}}{{\Gamma _4^2}}{\left( {\frac{{\Gamma _4^2{n_1}}}{{n'_4}}} \right)^{\frac{1}{2}}} = \frac{{1.4}}{{{\Gamma _4}}}{\left( {\frac{1}{f}} \right)^{\frac{1}{2}}} . \end{equation} Considering the thin shell case $\Delta \simeq R/(2{\Gamma _4^2})$, we can calculate the radius $R_{\times}$ where the reverse shock finishes crossing the fireball shell, \begin{equation} \Delta ({R_ {\times}}) = \int_0^{{R_ {\times} }} {\left( {{\beta _4} - {\beta _{RS}}} \right)} dR . \end{equation} The substitution of Equations (5) and (6) into Equation (7) leads to \begin{equation} {R_ \times } = {\left[ {\frac{{2{{(5 - k)}^2}X}} {{{{5.6}^2}\Gamma _4^2}}} \right]^{\frac{1} {{3 - k}}}} = {\left[ {\frac{{{{(5 - k)}^2}{E_k}}} {{2 \times {{5.6}^2}\pi A{m_p}{c^2}\Gamma _4^2}}} \right]^{\frac{1} {{3 - k}}}}. \end{equation} So the comoving density ratio at $R_\times$ is \begin{equation} {f_ \times } = \frac{{n_4^{'}}} {{{n_1}}} = \frac{{2X}} {{R_ {\times} ^{3 - k}}} = \frac{{{{5.6}^2}}} {{{{(5 - k)}^2}}}\Gamma _4^2. \end{equation} Substituting Equation (9) into Equation (4), we obtain the Lorentz factor of the reverse shock as it finishes crossing the shell, \begin{equation} {\Gamma _ \times } = \frac{\Gamma _4}{\left[ 1 + 2{\Gamma _4}\left( R_\times^{3 - k}/(2X) \right)^{1/2} \right]^{1/2}} = \frac{\Gamma _4}{\left[ 1 + 0.357(5 - k) \right]^{1/2}} . \end{equation} Therefore, the relation between $\Gamma _\times$ and the initial Lorentz factor $\Gamma _0$ is \begin{equation} {\Gamma _ \times }=0.60\,{\Gamma _ 0 },\;\; {\rm for} \;\;k = 0 \;({\rm ISM}), \end{equation} and \begin{equation} {\Gamma _ \times }=0.70\,{\Gamma _ 0 },\;\;{\rm for} \;\;k = 2 \;({\rm Wind}). \end{equation} For the thin shell case, the reverse shock crossing time $T_\times$ approximately corresponds to the deceleration time $T_{dec}$, i.e., $T_{\times} \sim T_{dec}$. Therefore, we can derive the initial Lorentz factor in the ISM and wind cases (also see Panaitescu \& Kumar 2004). For $k=0$ (ISM), \begin{equation} {\Gamma _0} = \frac{1}{{0.60}}{\Gamma _ \times } = 1.67{\left[ {\frac{{3{E_{\gamma ,iso}}}}{{32\pi {n_1}{m_p}{c^5}\eta t_{p,z}^3}}} \right]^{\frac{1}{8}}}, \end{equation} and for $k=2$ (wind), \begin{equation} {\Gamma _0} = \frac{1}{{0.70}}{\Gamma _ \times } = 1.44{\left[ {\frac{{{E_{\gamma ,iso}}}}{{8\pi A{m_p}{c^3}\eta {t_{p,z}}}}} \right]^{\frac{1}{4}}}. \end{equation} With the isotropic-equivalent energy $E_{\gamma ,iso}$ and the peak time of the afterglow onset $t_{p,z}$, we can estimate the initial Lorentz factor of GRBs, where $t_{p,z} = t_{p}/(1+z)$ and $\eta$ is the radiative efficiency ($E_k = E_{\gamma,iso}/\eta$). Liang et al. (2010) discovered a tight correlation between $\Gamma_0$ and $E_{\gamma ,iso}$ using 20 GRBs which show a deceleration feature in their early afterglow light curves. 
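For concreteness, the following short Python sketch evaluates Equations (13) and (14) in cgs units. The fiducial values of $E_{\gamma,iso}$, $n_1$, $\eta$, and $t_{p,z}$ below are purely illustrative and are not taken from our sample; for the wind case we assume the number-density normalization $n_1 = A R^{-2}$ with $A\,m_p = 5\times10^{11}A_*\;{\rm g\;cm^{-1}}$, consistent with the wind parameter adopted in Section 4. \begin{verbatim}
import numpy as np

C_LIGHT = 2.998e10   # speed of light [cm/s]
M_P = 1.673e-24      # proton mass [g]

def gamma0_ism(E_iso, n1=1.0, eta=0.2, t_pz=100.0):
    # Initial Lorentz factor for a homogeneous ISM (k = 0), Eq. (13);
    # E_iso [erg], n1 [cm^-3], t_pz = t_p/(1+z) [s]
    return 1.67 * (3.0 * E_iso / (32.0 * np.pi * n1 * M_P
                   * C_LIGHT**5 * eta * t_pz**3))**0.125

def gamma0_wind(E_iso, A_star=1.0, eta=0.2, t_pz=100.0):
    # Initial Lorentz factor for a stellar wind (k = 2), Eq. (14);
    # assumes A * m_p = 5e11 A_* g/cm for n_1 = A R^{-2}
    A_mp = 5.0e11 * A_star
    return 1.44 * (E_iso / (8.0 * np.pi * A_mp
                   * C_LIGHT**3 * eta * t_pz))**0.25

print(gamma0_ism(1e53))    # ~ 260 for these fiducial values
print(gamma0_wind(1e53))   # ~ 90
\end{verbatim}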
Other works have also confirmed this correlation, but with different methods and power-law indices (Ghirlanda et al. 2012; L{\"u} et al. 2012). Using the data of $t_{p,z}$ and $E_{\gamma ,iso}$ from Liang et al. (2010, 2013) and L{\"u} et al. (2012), we re-constrain the initial Lorentz factor, and also discover a tight $\Gamma_0$--$E_{\gamma ,iso}$ correlation for the ISM and wind cases. The $\Gamma_0$--$E_{\gamma ,iso}$ correlation in the wind case is even tighter than that in the ISM case, as shown in Figs. 4 and 5. \section{Methods of Constraining the Bulk Lorentz Factor of X-ray Flares} Because most of the models estimating the bulk Lorentz factor during the GRB prompt emission phase are inapplicable to X-ray flares, we introduce two methods to constrain the upper and lower limits on the Lorentz factor of X-ray flares in this section. In principle, GRB outflows could be structured, but we suppose that the outflows are conical (also called jet-like) and that the half-opening angles of the outflows have the same value in a single GRB, i.e., the half-opening angle of the X-ray flare jet is the same as that of the prompt jet in one GRB. Therefore, we suppose that each X-ray flare is produced in a conical uniform jet with half-opening angle $\theta_{j}$. If there are several flares in one X-ray afterglow, we also assume these X-ray flares have the same jet half-opening angle $\theta_{j}$. {\bf Method I}: Lower limit on the Lorentz factor of X-ray flares. We use the late internal shock emission model to constrain the lower limits on the Lorentz factor of X-ray flares (Wu et al. 2006, 2007). The quick decline of X-ray flares after the peak time is widely interpreted as the high-latitude component of X-ray pulses (Burrows et al. 2005; Tagliaferri et al. 2005; Zhang et al. 2006; Nousek et al. 2006; Liang et al. 2006). Wu et al. (2007) supposed that emission from the same internal shock radius $R_{int}$ but at different angles would have different arrival times to the observer, due to the light propagation effect. The delay time of photons emitted at different angles $\theta$ $(\theta < \theta_{j})$ is \begin{equation} \Delta T = \frac{{{R_{{\mathop{\rm int}} }}(1 - \cos \theta )}}{c}. \end{equation} If photons are emitted at an angle $\theta = \theta_{j}$, then the delay time $\Delta T$ is about the duration of the X-ray flare. We denote the timescale of the decay part of an X-ray flare by $T_{decay}$. We can thus place a constraint linking the decaying timescale of the flare and the jet opening angle, i.e., \begin{equation} {T_{decay}} < \frac{{{R_{{\mathop{\rm int}} }}(1 - \cos \theta_{j} )}}{c}. \end{equation} Another timescale is the angular spreading variability time $T_{rise}$, which is the rise time of an X-ray flare, \begin{equation} {T_{rise}} = \frac{{{R_{{\mathop{\rm int}} }}(1 - \cos (1/{\Gamma _x}))}}{c} \approx \frac{{{R_{{\mathop{\rm int}} }}}}{{2\Gamma _x^2c}}. \end{equation} Combining the decaying and rising timescales, we get a lower limit on the Lorentz factor of each X-ray flare, \begin{equation} {\Gamma _x} > {\left( {\frac{{{T_{decay}}}}{{{T_{rise}}}}} \right)^{\frac{1}{2}}}{\left[ {\frac{1}{{2(1 - \cos {\theta _{j}})}}} \right]^{\frac{1}{2}}} \approx \theta _{j}^{ - 1}{\left( {\frac{{{T_{decay}}}}{{{T_{rise}}}}} \right)^{\frac{1}{2}}}. \end{equation} With the jet opening angle $\theta_{j}$ and the decaying and rising timescales, we can constrain the lower limit on the Lorentz factor of each X-ray flare. 
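As a minimal illustration of Eq. (18), the following sketch evaluates both the exact and the small-angle forms of the lower limit; the flare timescales and opening angle in the example are hypothetical. \begin{verbatim}
import numpy as np

def gamma_x_lower(t_decay, t_rise, theta_j):
    # Lower limit on the flare Lorentz factor, Eq. (18);
    # times in seconds, theta_j in radians
    exact = np.sqrt(t_decay / t_rise) \
            * np.sqrt(1.0 / (2.0 * (1.0 - np.cos(theta_j))))
    approx = np.sqrt(t_decay / t_rise) / theta_j  # small-angle form
    return exact, approx

# a hypothetical flare decaying 5 times longer than it rises
print(gamma_x_lower(500.0, 100.0, 0.1))  # both ~ 22
\end{verbatim}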
The jet half-opening angle $\theta_{j}$ can be estimated from the late afterglow of a GRB if there is a jet break in the afterglow light curve. We therefore selected GRBs which have both jet breaks and flares in their X-ray afterglow light curves; the sample is listed in Table 1. {\bf Method II}: Upper limit on the Lorentz factor of X-ray flares. The physical mechanism of X-ray flares is still unclear. X-ray flares are known to be similar to prompt emission pulses in their temporal behavior and energy spectra (Chincarini et al. 2010). We generally consider X-ray flares as having the same physical origin as the prompt emission of GRBs, both being due to the long-lasting activity of the central engine (Fan \& Wei 2005; Zhang et al. 2006). But there is still controversy about the origin of X-ray flares of GRBs. X-ray flares of short GRBs can be produced by differentially rotating, millisecond pulsars formed in the mergers of binary neutron stars (Dai et al. 2006); in this picture, magnetic reconnection-driven explosions lead to multiple X-ray flares minutes after the prompt GRB. Wang \& Dai (2013) performed a statistical study of X-ray flares for long and short GRBs, and found energy, duration, and wait-time distributions similar to those of solar flares, which indicates that X-ray flares of GRBs may be powered by magnetic reconnection. According to the standard GRB fireball model, after the initial radiation-dominated acceleration phase, the fireball enters the matter-dominated ``coasting'' phase (Piran 1999; M\'{e}sz\'{a}ros 2002; Zhang \& M\'{e}sz\'{a}ros 2004). Whether the fireball is baryon-rich or not determines how long the initial radiation-dominated acceleration phase lasts. If the fireball is baryon-poor, the initial energy of the fireball will be quickly converted into radiation energy and produce bright and brief thermal emission, which is inconsistent with most of the observations. The spectra of X-ray flares are typically non-thermal, with a photon index of about $\sim-2.0$ (Falcone et al. 2007). This suggests that X-ray flares happen when the jet is optically thin. Meanwhile, thermal emission with comparable flux just before any X-ray flare has never been detected, indicating that the jet responsible for the flare attains saturated acceleration. This requires the baryon loading in the jet to be large enough, or equivalently that the Lorentz factor has an upper limit. Accordingly, the upper limit on the Lorentz factor of an X-ray flare can be estimated as (Jin et al. 2010) \begin{equation} {\Gamma _x} \le {\left( {\frac{{L\,{\sigma _T}}}{{8\pi {m_p}{c^3}{R_0}}}} \right)^{\frac{1}{4}}}, \end{equation} which depends on the total luminosity $L$ and the initial radius $R_0$ of the flare outflow. Jin et al. (2010) assumed that the observed X-ray flare luminosity is just a fraction ($\epsilon_x=0.1$) of the total luminosity of the outflow, that is, $L_{x} = 0.1 L$ (also see Fan \& Piran 2006). $R_{0}$ is taken to be $10^{7}$ cm, which is comparable to the radius of a neutron star. $R_{0} = 10^{7}$ cm is a conservative value; in some cases, $R_{0}$ is taken to be $\sim 10^{8}$ cm or even larger (Pe'er et al. 2007). With these values, we are able to get an upper limit on $\Gamma_{x}$. In our sample, $L_{x}$ is taken as $E_{x,iso}/T_{90,x}$, where $E_{x,iso}$ and $T_{90,x}$ are the isotropic energy and duration of each flare, respectively. 
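A short numerical sketch of Eq. (19), adopting the assumptions stated above ($L = L_x/\epsilon_x$ with $\epsilon_x = 0.1$, and $R_0 = 10^7$ cm); the flare energy and duration in the example are illustrative. \begin{verbatim}
import numpy as np

SIGMA_T = 6.652e-25  # Thomson cross section [cm^2]
M_P = 1.673e-24      # proton mass [g]
C_LIGHT = 2.998e10   # speed of light [cm/s]

def gamma_x_upper(E_x_iso, T90_x, eps_x=0.1, R0=1e7):
    # Upper limit on the flare Lorentz factor, Eq. (19);
    # L_x = E_x_iso / T90_x is a fraction eps_x of the total L
    L_total = (E_x_iso / T90_x) / eps_x
    return (L_total * SIGMA_T
            / (8.0 * np.pi * M_P * C_LIGHT**3 * R0))**0.25

print(gamma_x_upper(1e51, 100.0))  # ~ 280
\end{verbatim}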
The redshifts of the GRBs in our sample are all measured, so the isotropic 0.3--10 keV energy of each X-ray flare can be estimated from its fluence as \begin{equation} {E_{x,iso}} = \frac{{4\pi D_L^2}}{{1 + z}}\,S_x, \end{equation} where $S_x$ is the fluence of the X-ray flare and $D_L$ is the luminosity distance. \section{Case Studies} Our method for placing lower limits on the Lorentz factor is feasible if the X-ray light curve presents both a flare and a jet break. Our sample consists of 20 GRBs with X-ray flares, redshifts and jet break times (Falcone et al. 2007; Chincarini et al. 2010; Bernardini et al. 2011; Lu et al. 2012). Some of them have several flares in one GRB; the total number of X-ray flares is 43. We assume that the opening angle is the same for the jets responsible for prompt emission and late X-ray flares in a single GRB. According to the appearance time of the X-ray flares, we mark their numerical order, as can be seen in Table 1. The $T_{rise}$, $T_{decay}$, $S_x$, and $T_{90,x}$ of each flare are reported in Falcone et al. (2007), Chincarini et al. (2010) and Bernardini et al. (2011). $\theta_{j}^{Wind}$ and $\theta_{j}^{ISM}$ are calculated with the data taken from Lu et al. (2012) when Eqs. (22) and (23) are applied. The Lorentz factor in a wind-type circumburst medium is (Chevalier \& Li 2000) \begin{equation} \Gamma = 5.9\,{\left( {\frac{{1 + z}}{2}} \right)^{1/4}}E_{k,52}^{1/4}\;A_*^{-1/4}\,t_{days}^{-1/4} , \end{equation} where $E_{k,52}$ is the initial kinetic energy of the fireball shell in units of $10^{52}$ erg, $t_{days}$ is the observer's time in units of days, $A = \dot{M}_w/(4\pi {V_w}) = 5 \times {10^{11}}{A_*}\; \rm g\;c{m^{ - 1}}$ is the wind parameter, $\dot{M}_w$ is the wind mass-loss rate, and $V_w$ is the wind velocity. Because the jet break occurs when $\Gamma \approx \theta_j^{-1}$, Equation (21) can be used to estimate the jet half-opening angle for the wind case, \begin{equation} \theta_{\rm j}^{wind}=0.12\,\,{\rm rad}\,\,\left(\frac{T_{j}}{1\ \rm day}\right)^{1/4}\left(\frac{1+z}{2}\right)^{-1/4} E_{\gamma,iso,52}^{-1/4}\left(\frac{\eta}{0.2}\right)^{1/4}A_{*}^{1/4}, \end{equation} where $\eta$ is the efficiency of the prompt GRB and $A_* = 1$ is adopted in this paper. The jet half-opening angle in the interstellar medium case can be described by (Sari et al. 1999; Rhoads 1999; Frail et al. 2001; Yi et al. 2015) \begin{equation} \theta_{\rm j}^{ISM}=0.076 \,\,{\rm rad}\,\,\left(\frac{T_{j}}{1\ \rm day}\right)^{3/8}\left(\frac{1+z}{2}\right)^{-3/8} E_{\gamma,iso,53}^{-1/8}\left(\frac{\eta}{0.2}\right)^{1/8}\left(\frac{n}{1\ \rm cm^{-3}}\right)^{1/8}. \end{equation} The distribution of jet half-opening angles for the ISM and wind cases is shown in Fig. 1. We assume that X-ray flares come from relativistic jets. The tail of an X-ray flare is interpreted as emission from high-latitude areas of the jet. The duration of the flare is determined by the half-opening angle of the jet through the curvature effect. 
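The two jet half-opening angle estimators, Eqs. (22) and (23), can be evaluated directly; the sketch below assumes the fiducial normalizations $A_*=1$, $\eta=0.2$ and $n=1\;{\rm cm^{-3}}$ used in this paper, and the burst parameters in the example are illustrative. \begin{verbatim}
def theta_j_wind(T_j_day, z, E_iso_52, eta=0.2, A_star=1.0):
    # Jet half-opening angle [rad] for a wind medium, Eq. (22)
    return (0.12 * T_j_day**0.25 * ((1.0 + z) / 2.0)**-0.25
            * E_iso_52**-0.25 * (eta / 0.2)**0.25 * A_star**0.25)

def theta_j_ism(T_j_day, z, E_iso_53, eta=0.2, n=1.0):
    # Jet half-opening angle [rad] for a homogeneous ISM, Eq. (23)
    return (0.076 * T_j_day**0.375 * ((1.0 + z) / 2.0)**-0.375
            * E_iso_53**-0.125 * (eta / 0.2)**0.125 * n**0.125)

# illustrative burst: jet break at 1 day, z = 1, E_iso = 1e53 erg
print(theta_j_wind(1.0, 1.0, 10.0))  # ~ 0.067 rad
print(theta_j_ism(1.0, 1.0, 1.0))    # ~ 0.076 rad
\end{verbatim}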
With the jet half-opening angle $\theta_j$ estimated from the jet break time, and the decaying and rising timescales of the flare, we can obtain the lower limit on the Lorentz factor of the flare via Equation (18). The upper limit on the Lorentz factor is determined by the total luminosity and initial radius of the outflow. The observed average luminosity is obtained from the isotropic 0.3--10 keV energy of the X-ray flare divided by the duration of the flare. The total luminosity of the flare is assumed to be 10 times the observed X-ray luminosity. As already mentioned, the initial radius of the outflow $R_0$ is taken as $10^7$ cm. The obtained limits on the bulk Lorentz factor of X-ray flares range from tens to hundreds, as can be seen in Figs. 2 and 3. We find that in the ISM case the correlation between the Lorentz factor and the isotropic radiation energy of X-ray flares is almost consistent with that of the prompt emission of GRBs (Fig. 4). However, in the wind case the lower limit on the Lorentz factor is statistically larger than the extrapolation from prompt bursts (Fig. 5). \section{Discussion} X-ray flares are common features in GRB X-ray afterglows, and most of them occur at early times. All the flares in our sample occurred before the jet break, as can be seen from Table 1. Here, we define $f=\theta_{j,\,\gamma}/\theta_{j,\,x}$, where $\theta_{j,\,x}$ is the half-opening angle of the X-ray flare jet while $\theta_{j,\,\gamma}$ is the half-opening angle of the jet responsible for the prompt emission. If $f=1$, the jet may be conical and the flare jet and the prompt emission jet have the same half-opening angle, as we discussed above. The jet opening angle might instead be larger during the prompt emission and smaller for the X-ray flares, i.e., $f>1$, which has been predicted in some models with magnetically dominated jets (Levinson \& Begelman 2013; Bromberg et al. 2014). In this case, the lower limit on the flare Lorentz factor would be larger than that estimated with Eq. (18) assuming $\theta_{j,\,x}=\theta_{j,\,\gamma}$. The corresponding lower limits on the X-ray flare Lorentz factor in Figs. 4 and 5 would increase by a factor of $f$, making the X-ray flares possibly more inconsistent (especially in the wind case) with the extrapolation of the correlation between isotropic radiation energy and Lorentz factor of the prompt emission of GRBs. On the other hand, although several observations suggest that in some GRBs the ejecta may carry large scale magnetic fields and therefore could be magnetized, the degree of magnetization is usually estimated as $\sigma<$ a few in the afterglow phase. So in this paper, for simplicity, we assume that GRB jets have negligible magnetization and that the outflows have the same half-opening angle ($f=1$) in one GRB. In addition, since the fluence of most X-ray flares is smaller than that of the prompt emission, their energies and Lorentz factors are expected to be smaller than those of GRBs. The initial Lorentz factor of GRBs in this paper is generally larger than a few hundred, and it is always larger than the lower limits of the X-ray flare Lorentz factors in the same GRB. In Fig. 6, we plot 5 GRBs having both prompt and flare Lorentz factors: GRBs 050820A, 060418, 060906, 070318, and 071031. The initial Lorentz factors of these 5 GRBs are generally much larger than the lower limits on the Lorentz factors of their X-ray flares, and usually smaller than the upper limits on the flare Lorentz factors. 
\section{Conclusion} The initial Lorentz factor is a key parameter for understanding GRB physics. In this paper, we have re-estimated the initial Lorentz factor in a more accurate way. From Equation (13), we obtain a coefficient of 1.67 for the ISM case, instead of the value 2 adopted in the previous literature. We also constrain the initial Lorentz factor in the wind case, as shown in Equation (14). With the initial Lorentz factors estimated in this paper, we confirm the tight correlation between the initial Lorentz factor and the isotropic energy of GRBs for the ISM case. There is an even tighter correlation between the initial Lorentz factor and the isotropic energy of GRBs for the wind case. Our sample consists of 20 GRBs with X-ray flares, whose redshifts and jet break times are known. Some of them have several flares; the total number of X-ray flares in our sample is 43. We assume that the half-opening angle is the same for the jets responsible for prompt emission and late X-ray flares in one GRB. Our results are shown in Fig. 4 (ISM) and Fig. 5 (Wind), which also show the correlation between the isotropic radiation energy and the Lorentz factor of the prompt emission of GRBs. The obtained limits on the bulk Lorentz factor of X-ray flares, which range from a few tens to hundreds, together with the isotropic radiation energies, are generally consistent with the correlation for prompt GRBs in the ISM case. Our results indicate that X-ray flares and prompt bursts may be caused by the same mechanism, as both are produced by the long-lasting activity of the central engine. However, in the wind case the lower limit on the Lorentz factor is statistically larger than the extrapolation from prompt bursts. \acknowledgements We thank the anonymous referee for constructive suggestions. We also thank Bing Zhang, En-Wei Liang, Yun-Wei Yu, Liang-Duan Liu, Di Xiao and A-Ming Chen for useful comments and help. This work is supported by the National Basic Research Program (``973'' Program) of China (grant Nos. 2014CB845800 and 2013CB834900), the Program A for Outstanding PhD candidates of Nanjing University, and the National Natural Science Foundation of China (grant Nos. 11033002, 11422325, 11373022 and 11322328). X.F.W was also partially supported by the One-Hundred-Talent Program, the Youth Innovation Promotion Association, and the Strategic Priority Research Program ``The Emergence of Cosmological Structure'' (grant No. XDB09000000) of the Chinese Academy of Sciences, and the Natural Science Foundation of Jiangsu Province (No. BK2012890). F.Y.W was also partially supported by the Excellent Youth Foundation of Jiangsu Province (BK20140016). \clearpage
\section{Introduction} The geometry of a static and stationary spherically symmetric wormhole consists of a two-mouthed tunnel (referred to as a tube, throat, or handle in the literature). This tube-like structure is a multiply connected spacetime which can join two asymptotically flat regions of the same spacetime or two different spacetimes. Two-way travel in time becomes theoretically possible when the two regions belong to the same spacetime; such a construction is a time machine. Wormhole spacetimes have been investigated in the literature in various settings, such as non-static axially symmetric wormholes \cite{teo}, wormholes with cylindrical symmetry \cite{ger}, wormholes supported by a cosmological constant \cite{lemos1}, thin-shell wormholes \cite{lemos}, and electrically charged static wormholes \cite{kim} (for a review see \cite{lobo}). Morris and Thorne \cite{morris} suggested the idea of a traversable wormhole suitable for travel by humans on interstellar journeys and beyond. Later on, wormholes were investigated in the cosmological context and theorized to be enlarged through a mechanism similar to cosmological inflation \cite{roman}. Wormholes present esoteric properties such as a violation of the Hawking chronology protection conjecture, faster-than-light scenarios, and a breakdown of causality \cite{hawking}. The matter energy supporting the exotic geometry violates the standard energy conditions and is hence termed `exotic'. However, this exotic behavior can be avoided by studying wormholes in extended theories of gravity, such as $f(R)$ gravity, and threading the wormholes with normal matter; the violation of the energy conditions can then be avoided on account of curvature effects arising near the wormhole's throat. In an earlier work, Lobo et al. \cite{lobo1} investigated the geometry and stability of stationary and static wormholes in $f(R)$ gravity. They obtained wormhole solutions by assuming various forms of equations of state and viable shape functions, and showed that in their model the energy conditions are satisfied in the desired range of the radial coordinate. We here perform an analysis similar to \cite{lobo1}, however taking into account a Lorentzian density distribution of a point gravitational source \cite{hamid}. We derive the wormhole solutions in two possible schemes for a given Lorentzian distribution: in the first, we assume an astrophysically viable $F(R)$ function, namely the power-law form $F(R) = a R^m$, and discuss several solutions corresponding to different values of the exponent; in the second, for specific choices of the shape function, we reconstruct $f(R)$. We also check the energy conditions in both schemes. Our plan of work is as follows: In Sec. II, we write down the field equations of a Morris-Thorne wormhole in $f(R)$ gravity supported by anisotropic matter. In Sec. III, we solve the field equations by assuming the power-law form $F(R) = a R^m$ and discuss several solutions corresponding to different values of $m$. In Sec. IV, we solve the same field equations by inserting specific forms of the shape function and reconstruct $f(R)$ in each case, and we conclude in Sec. V. \section{Field equations in $F(R)$ gravity} The metric describing a static spherically symmetric wormhole spacetime is given by \begin{equation} ds^2= - e^{2\Phi(r)} dt^2+ \frac{ dr^2}{1-\frac{b(r)}{r}}+r^2 (d\theta^2+\sin^2\theta d\phi^2). 
\label{Eq3} \end{equation} Here, $\Phi(r)$ is the gravitational redshift function and $b(r)$ is the shape function. The radial coordinate $r$ decreases from infinity to a minimum value $r_0$, and then increases from $r_0$ to infinity. The minimum value $r_0$ represents the location of the wormhole throat, where $b(r_0) =r_0$, satisfying the flaring-out condition $(b-b^{\prime}r)/b^2 > 0$ and $b^{\prime}(r_0)< 1$, which are imposed to have a wormhole solution. For our wormhole in $F(R)$ gravity, we assume that the matter content is an anisotropic fluid whose energy-momentum tensor is given by \cite{lobo} \begin{equation} T_\nu^\mu= ( \rho + p_r)u^{\mu}u_{\nu} - p_r g^{\mu}_{\nu}+ (p_t -p_r )\eta^{\mu}\eta_{\nu}, \label{eq:emten} \end{equation} with $u^{\mu}u_{\mu} = - \eta^{\mu}\eta_{\mu} = 1, $ and $u^\mu \eta_\mu=0.$ Here the vector $u^\mu$ is the fluid 4-velocity and $\eta^\mu$ is a space-like vector orthogonal to $u^\mu$. Following Lobo et al. \cite{lobo1}, we have the following gravitational field equations in $f(R)$ gravity: \begin{eqnarray} \rho(r) &=& \frac{F b'}{r^2},\label{r}\\ p_r(r)&=& -\frac{F b}{r^3} +\frac{F'}{2r^2}(b'r-b)-F'' \Big(1-\frac{b}{r}\Big),\label{r1}\\ p_t(r)&=& -\frac{F'}{r}\Big(1-\frac{b}{r}\Big) -\frac{F}{2r^3}(b'r-b).\label{r2} \end{eqnarray} Here, primes indicate derivatives with respect to $r$, and $\rho$, $p_r$ and $p_t$ are the energy density, radial pressure and tangential pressure, respectively. These equations are the generic expressions for the matter threading the wormhole with a constant redshift function, which simplifies the field equations and provides interesting exact wormhole solutions, where $F =\frac{df}{dR}$ and the curvature scalar, $R$, is given by \begin{equation} R(r) = 2\frac{b'(r)}{r^2}. \end{equation} \section{Wormholes for a given $F(R)$ function} We take the power-law form \begin{equation} F(R) =a R^m, \end{equation} where $a$ is a constant and $m$ is an integer. We solve the field equations given above in noncommutative geometry with a Lorentzian distribution, taking the energy density of the static, spherically symmetric, smeared, particle-like gravitational source as \cite{hamid} \begin{equation} \rho = \frac{M \sqrt{\phi}}{\pi^2(r^2 +\phi)^2}. \end{equation} Here, the mass $M$ could be a diffused centralized object such as a wormhole and $\phi$ is the noncommutativity parameter. Using Eq. (6) in Eq. (7) gives \begin{equation} F(R) = a\Big(\frac{2b^{\prime}}{r^2}\Big)^m. \end{equation} Substituting Eqs. (8) and (9) in Eq. (3), we obtain the shape function \begin{equation} b(r)=\int{r^2\Big[\frac{M \sqrt{\phi}}{2^m \pi^2 a(r^2 +\phi)^2}\Big]^\frac{1}{1+m}}\,dr+C, \end{equation} where $C$ is a constant of integration. To explore the physical characteristics exactly, we discuss several models resulting from different choices of $m$. \subsection{$m=0$} From Eq. (10), the assumption $m=0$ gives a shape function of the form \begin{equation} b(r) = \frac{M\sqrt{\phi}}{2 a \pi^2}\Big[\frac{\arctan\Big(\frac{r}{\sqrt{\phi}}\Big)}{\sqrt{\phi}} -\frac{r}{r^2+\phi} \Big]+C. \end{equation} Putting (11) in (6), we get \begin{equation} R(r) = \frac{2M\sqrt{\phi}}{\pi^{2}a(r^2 + \phi)^2}. \end{equation} From Eqs. 
(4) and (5), using the shape function in Eq. (11), the radial and tangential pressures become \begin{equation} P_{r}(r) = \frac{a}{r^3}\Big[\frac{M\sqrt{\phi}}{2a\pi^{2}}\Big(\frac{r}{r^2 +\phi}-\frac{1}{\sqrt{\phi}} ~ \arctan\Big(\frac{r}{\sqrt{\phi}}\Big)\Big)-C\Big], \end{equation} \begin{equation} P_{t}(r) = \frac{M\sqrt{\phi}}{4\pi^{2}r^{3}}\Big[\frac{1}{\sqrt{\phi}}~\arctan\Big(\frac{r}{\sqrt{\phi}}\Big) - \frac{r}{(r^2 + \phi)}-\frac{2r^3}{(r^2 + \phi)^2} + \frac{2aC\pi^2}{M\sqrt{\phi}}\Big]. \end{equation} We consider the Lorentzian distribution of the particle-like gravitational source given in Eq. (8), which is positive for the noncommutativity parameter $\phi > 0$. The shape function given in Eq. (11) is asymptotically flat because $b(r)/r \rightarrow 0$ as $r \rightarrow \infty$, and the redshift function is constant everywhere. In Fig. 1 (upper right), corresponding to $m=0$, the throat of the wormhole is located at $r=1.5$, where $\mathcal{G}(r) = b(r)-r$ cuts the $r$-axis. Moreover, $\mathcal{G}(r) < 0$, i.e., $b(r)-r < 0$, which implies that $b(r) < r$ for $r > r_0$, satisfying the fundamental property of a shape function, as indicated in Fig. 1 (upper left). From Fig. 1 it is also clear that, for $r > r_0$, $\mathcal{G}(r)$ is a decreasing function; therefore $\mathcal{G^{\prime}}(r) < 0$ and correspondingly $b^{\prime}(r_0) < 1$, which satisfies the flare-out condition. According to Fig. 1 (left and right, middle row), the radial pressure $p_r$ is negative and the transverse pressure $p_t$ is positive outside the throat of the wormhole for the parameter choices $M=10$, $C=1$, $a=1$ and $\phi= 1$, which shows that the wormhole violates the null energy condition (NEC) as well as the weak energy condition (WEC). \begin{figure*}[thbp] \begin{tabular}{rl} \includegraphics[width=7.5cm]{graph1.eps}& \includegraphics[width=7.5cm]{graph2.eps} \\ \includegraphics[width=7cm]{graph3.eps}& \includegraphics[width=7cm]{graph4.eps} \\ \includegraphics[width=7cm]{graph5.eps} & \includegraphics[width=7cm]{graph6.eps} \\ \end{tabular} \caption{ Graphs for the case $m=0$. } \end{figure*} \subsection{$m=1$} Similarly, from Eq. (10), we get the shape function for $m=1$, \begin{equation} b(r) = \sqrt{\frac{M\sqrt{\phi}}{2 a \pi^2}}\Big[r-\sqrt{\phi} ~~ \arctan\Big(\frac{r}{\sqrt{\phi}}\Big)\Big]+C. \end{equation} The corresponding Ricci scalar becomes \begin{equation} R(r) = \sqrt{\frac{2M\sqrt{\phi}}{a\pi^2(r^2 + \phi)^2}}. \end{equation} Moreover, the radial and transverse pressures turn out to be \begin{eqnarray} P_{r}(r) &=& \sqrt{\frac{2Ma\sqrt{\phi}}{\pi^2}}\Big[\frac{1}{r^3(r^2 + \phi)}\Big[\sqrt{\frac{M\sqrt{\phi}}{2a\pi^2}} \Big(\sqrt{\phi}~ \arctan\Big(\frac{r}{\sqrt{\phi}}\Big)-r\Big)-C\Big]\nonumber\\&&- \frac{1}{r(r^2+\phi)^2}\Big[\sqrt{\frac{M\sqrt{\phi}}{2a\pi^2}}\frac{r^3}{(r^2+\phi)}- \sqrt{\frac{M\sqrt{\phi}}{2a\pi^2}}\Big(r - \sqrt{\phi}~\arctan\Big(\frac{r}{\sqrt{\phi}}\Big)\Big)-C\Big]\nonumber\\&& -\frac{(6r^4+4r^2\phi-2\phi^2)}{(r^2+\phi)^4}\Big[1- \sqrt{\frac{M\sqrt{\phi}}{2a\pi^2}}\Big(1-\frac{\sqrt{\phi}}{r}~\arctan\Big(\frac{r}{\sqrt{\phi}}\Big)\Big) -\frac{C}{r}\Big]\Big]. 
\end{eqnarray} \begin{eqnarray} P_{t} (r)&=& \sqrt{\frac{2Ma\sqrt{\phi}}{\pi^2}}\Big[\frac{2}{(r^2+\phi)^2}\Big[1 - \sqrt{\frac{M\sqrt{\phi}}{2a\pi^2}} \Big(1-\frac{\sqrt{\phi}}{r}~\arctan\Big(\frac{r}{\sqrt{\phi}}\Big)\Big)-\frac{C}{r}\Big]\nonumber\\&&- \frac{1}{2r^3(r^2+\phi)}\Big[\sqrt{\frac{M\sqrt{\phi}}{2a\pi^2}} \frac{r^3}{(r^2+\phi)}-\sqrt{\frac{M\sqrt{\phi}}{2a\pi^2}}\Big(r-\sqrt{\phi}~\arctan\Big(\frac{r}{\sqrt{\phi}} \Big)\Big)-C\Big]\Big]. \end{eqnarray} In Fig. 2, the graphs are drawn for the same choice of parameters as in Fig. 1, corresponding to $m=1$. The throat of the wormhole is located at $r\sim1.5$, where $\mathcal{G}$ cuts the $r$-axis, and the shape function satisfies all the necessary conditions, as shown in Fig. 2. Here also, from Fig. 2 (lower left), the Null Energy Condition (NEC) and the Weak Energy Condition (WEC) are violated, whose violation is necessary to hold a wormhole open. It is interesting to note that the Strong Energy Condition (SEC) is satisfied, as shown in Fig. 2 (lower right). \begin{figure*}[thbp] \begin{tabular}{rl} \includegraphics[width=7.5cm]{graph7.eps}& \includegraphics[width=7.5cm]{graph8.eps} \\ \includegraphics[width=7cm]{graph9.eps}& \includegraphics[width=7cm]{graph10.eps} \\ \includegraphics[width=7cm]{graph11.eps} & \includegraphics[width=7cm]{graph12.eps} \\ \end{tabular} \caption{ Plots for case $m=1$.} \end{figure*} \section{Wormhole solution for a given shape function} We extend the literature by considering several interesting shape functions. \subsection{$b(r)=r_0\Big(\frac{r}{r_0}\Big)^\alpha$} Here, we consider a shape function of the form \begin{equation}\label{br} b(r)=r_0\Big(\frac{r}{r_0}\Big)^\alpha, \end{equation} which satisfies the flare-out condition for $b^{\prime}(r_0)= \alpha < 1$, and for which, as $r \rightarrow \infty$, we have $b(r)/r = (r_0/r)^{1-\alpha} \rightarrow 0$. Putting Eq. (19) in (6), we have \begin{equation} R(r)=\frac{2\alpha}{r^2}\Big(\frac{r}{r_0}\Big)^{\alpha-1}, \end{equation} and substituting Eqs. (8) and (19) in (3), we get \begin{equation} F(r)=\frac{M\sqrt{\phi}}{\alpha \pi^2}\Big(\frac{r_0}{r}\Big)^{\alpha-1}\Big[\frac{r^{2}}{(r^2+\phi)^2}\Big]. \end{equation} Using the gravitational field equations (3)-(5), the radial and transverse pressures are given by \begin{eqnarray} p_r(r)&=&\frac{M\sqrt{\phi}}{\alpha \pi^2\Big(r^2+\phi\Big)^2} \Big[-1+\frac{(\alpha-1)}{2}\Big\{(3-\alpha)-\frac{4r^2}{r^2+\phi}\Big\}+ \Big(1-\Big(\frac{r}{r_0}\Big)^{1-\alpha}\Big)\Big\{(3-\alpha)(2-\alpha)\nonumber\\&& +\frac{4(2\alpha-5)r^2}{\Big(r^2+\phi\Big)}+\frac{8r(2r^3-\phi)}{\Big(r^2+\phi\Big)^2}\Big\}\Big],\\ p_t(r)&=&\frac{M\sqrt{\phi}}{\alpha \pi^2\Big(r^2+\phi\Big)^2} \Big[\frac{(1-\alpha)}{2}+\Big(1-\Big(\frac{r}{r_0}\Big)^{1-\alpha}\Big)\Big\{(3-\alpha)-\frac{4r^2}{r^2+\phi}\Big\} \Big]. \end{eqnarray} In Fig. 3, the graphs are drawn for the parameter choices $M=10$, $r_0 = 1$, $\alpha = -1$ and $\phi =1$. Here also the throat of the wormhole is located at $r\sim1.5$, where $\mathcal{G}$ cuts the $r$-axis, as shown in Fig. 3 (upper right), and the energy conditions behave the same as in the previous cases, as shown in the lower left and right panels of Fig. 3. 
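The NEC violation reported above can also be verified directly; the following short Python sketch evaluates $\rho + p_r$ from Eqs. (8) and (13) outside the throat for the $m=0$ parameter choices of Fig. 1 (a numerical check only; the sample radii are illustrative). \begin{verbatim}
import numpy as np

M, C, a, phi = 10.0, 1.0, 1.0, 1.0  # parameters used in Fig. 1

def rho(r):
    # Lorentzian energy density, Eq. (8)
    return M * np.sqrt(phi) / (np.pi**2 * (r**2 + phi)**2)

def p_r(r):
    # radial pressure of the m = 0 solution, Eq. (13)
    coef = M * np.sqrt(phi) / (2.0 * a * np.pi**2)
    bracket = coef * (r / (r**2 + phi)
              - np.arctan(r / np.sqrt(phi)) / np.sqrt(phi)) - C
    return a * bracket / r**3

for r in (2.0, 3.0, 5.0):        # radii outside the throat r0 ~ 1.5
    print(r, rho(r) + p_r(r))    # negative, so the NEC is violated
\end{verbatim}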
\begin{figure*}[thbp] \begin{tabular}{rl} \includegraphics[width=7.5cm]{graph13.eps}& \includegraphics[width=7.5cm]{graph14.eps} \\ \includegraphics[width=7cm]{graph15.eps}& \includegraphics[width=7cm]{graph16.eps} \\ \includegraphics[width=7cm]{graph17.eps} & \includegraphics[width=7cm]{graph18.eps} \\ \end{tabular} \caption{ Plots for the case A of Section IV. } \end{figure*} \subsection{A particular shape function: $b(r)=A\tan^{-1}(Cr)$} Let us consider a shape function of the form \begin{equation} b(r)=A \tan^{-1}(Cr), \end{equation} so that $b(r)/r = A \tan^{-1}(Cr)/r \rightarrow 0$ as $r \rightarrow \infty$ (since $\tan^{-1}(Cr)\rightarrow \pi/2$), which meets the asymptotic flatness condition. One may also verify that $b^\prime(r_0) = AC/ (1+C^2r^2_0) < 1$, i.e., the fundamental flaring-out condition at the throat is met for suitable choices of the parameters. Using Eq. (24) in (6), we have \begin{equation} R(r)=\frac{2}{r^2}\Big(\frac{AC}{1+C^2r^2}\Big). \end{equation} The $F(r)$ function becomes \begin{equation} F(r)=\frac{M\sqrt{\phi}}{\pi^2 AC}\frac{r^2(1+C^2r^2)}{(r^2+\phi)^2}. \end{equation} The radial and transverse pressures become \begin{eqnarray} P_r(r)&=&-\frac{M\sqrt{\phi}}{\pi^2 AC}\Big[ \frac{A}{r} \tan^{-1}(Cr)\frac{(1+C^2r^2)}{(r^2+\phi)^2}-\Big(\frac{ACr}{1+C^2r^2}-A\tan^{-1}(Cr)\Big) \Big(\frac{1+2C^2r^2}{r(r^2+\phi)^2}-\frac{2(r+C^2r^3)}{(r^2+\phi)^3}\Big) \nonumber\\&& +\Big(1-\frac{A}{r}\tan^{-1}(Cr)\Big)\Big(\frac{2+12C^2r^2}{(r^2+\phi)^2}-\frac{4(5r^2+9C^2r^4)}{(r^2+\phi)^3}+\frac{24r(r^3+C^2r^5)}{(r^2+\phi)^4}\Big) \Big],\\ P_t(r)&=&-\frac{M\sqrt{\phi}}{\pi^2 AC} \Big[\Big(1-\frac{A}{r}\tan^{-1}(Cr)\Big)\Big(\frac{2+4C^2r^2}{(r^2+\phi)^2}-\frac{4(r^2+C^2r^4)}{(r^2+\phi)^3}\Big) \nonumber\\&& +\Big(\frac{AC}{1+C^2r^2}-\frac{A}{r}\tan^{-1}(Cr)\Big)\Big(\frac{1+C^2r^2}{2(r^2+\phi)^2}\Big)\Big] . \end{eqnarray} In Fig. 4, the graphs are drawn for the parameter choices $M=10$, $a=1$, $A=2$, $C=1$ and $\phi =1$. Here also the throat of the wormhole is located at $r\sim 2$, where $\mathcal{G}$ cuts the $r$-axis, as shown in Fig. 4 (upper right), and the energy conditions behave the same as in the previous cases, as shown in the lower left and right panels of Fig. 4. \begin{figure*}[thbp] \begin{tabular}{rl} \includegraphics[width=7.5cm]{graph19.eps}& \includegraphics[width=7.5cm]{graph20.eps} \\ \includegraphics[width=7cm]{graph21.eps}& \includegraphics[width=7cm]{graph22.eps} \\ \includegraphics[width=7cm]{graph23.eps} & \includegraphics[width=7cm]{graph24.eps} \\ \end{tabular} \caption{ Plots of case B of Section IV. } \end{figure*} \pagebreak \section{Concluding remarks} In this paper, we derived some new exact solutions for static wormholes in $f(R)$ gravity supported by matter possessing a Lorentzian density distribution of a particle-like gravitational source. We derived the wormhole solutions in two possible schemes for a given Lorentzian distribution. The first model assumes the power-law form of $F(R)$, whereas the second assumes a particular shape function, which allows the reconstruction of $f(R)$. The power-law form of $F(R)$ with $m=1$ is interesting, as in this case the null energy condition is violated but the strong energy condition is met. For the second model, we have considered two particular shape functions and have reconstructed $f(R)$ in both cases. In these two cases, the null energy condition is once again violated, but the strong energy condition is met. 
All the solutions assume zero tidal forces, which automatically implies that the wormholes are traversable \cite{lobo1}. \pagebreak \section*{Acknowledgments} FR is thankful to the authority of the Inter-University Centre for Astronomy and Astrophysics, Pune, India, for providing the Visiting Associateship under which a part of this work was carried out. AB is also thankful to IUCAA for giving him an opportunity to visit IUCAA, where a part of this work was carried out. FR is also thankful to UGC, Govt. of India, for providing financial support under its research award scheme.
\section{Introduction} \par Modern statistical applications often encounter situations where it is critical to account for inherent heterogeneity and substructures in data \cite{luo2010, verbeke1996, Dietterich2000, Bouwmeester2013}. Under the presence of heterogeneity in the distribution of covariates, it is natural to seek prediction strategies which are robust to such variability in order to accurately generalize to new data. In this regard, previous research has hinted at the possibility of improving prediction accuracy in such setups by pre-processing data through suitable clustering algorithms \cite{Ramchandran2020, Trivedi2015, deodhar2007}. Subsequently applying ensembling frameworks to pre-clustered data has also been shown to produce critical advantages \cite{Patil2018, ramchandran2021}. In this paradigm, data are first separated into their component clusters, a learning algorithm is then trained on each cluster, and finally the predictions made by each single-cluster learner are combined using weighting approaches that reward cross-cluster prediction ability within the training set. Learning algorithms that have been explored within this framework include Neural Networks, Random Forest, and (regularized) least squares linear regression \cite{Patil2018, ramchandran2021, Sharkey1996, randomForest}. These research activities have imparted pivotal insights into identifying prediction algorithms which might benefit most from such ensembling techniques compared to ``merging'' methods (referring to predictions made by the chosen learning algorithm trained on the entire data) that ignore any underlying clusters in the covariate distribution. In this paper, we provide a crucial theoretical lens to further our understanding of ensembling methods on clustered data under a high dimensional linear regression setup with covariate shift defining natural clusters in the predictors. \par To focus our discussion, we consider two prediction algorithms, namely high dimensional linear least squares and random forest regressions, to be compared through their differential behavior under merging versus ensembling. The choice of these two particular methods is carefully guided by previous research, which we briefly discuss next. In particular, \cite{ramchandran2021} methodologically highlighted the efficacy of ensembling over merging for Random Forest learners trained on data containing clusters. Across a wide variety of situations, including a variety of data distributions, numbers and overlaps of clusters, and outcome models, they found the ensembling strategy to produce remarkable improvements upon merging -- at times even producing an over 60\% reduction in prediction error. The ensembling method similarly produced powerful results on cancer genomic data containing clusters, highlighting its potential applications for real datasets with feature distribution heterogeneity. Conversely, they observed that linear regression learners with a small number of features produced no significant differences between ensembling and merging when the model was correctly specified, even in the presence of fully separated clusters within the training set. These interesting observations motivate an analytical investigation of where the benefits of ensembling for Random Forest arise, and why the same results are not achieved with linear regression. \par In this paper, we explore the role of the bias-variance interplay in conferring the benefits of ensembling. 
We show that for unbiased high-dimensional linear regression, even optimally weighted ensembles do not asymptotically improve over a single regression trained on the entire dataset -- and in fact perform significantly worse when the dimension of the problem grows proportionally to the sample size. Conversely, we show that for ensembles built from Random Forest learners (which are biased for a linear outcome model), ensembling is strictly more accurate than merging, regardless of the number of clusters within the training set. We additionally verify our theoretical findings through numerical explorations. \par We shall also utilize the following language conventions for the rest of the paper. By Single Cluster Learners (SCLs), we will refer to any supervised learning algorithm that can produce a prediction model using a single cluster. In this paper, we consider linear regression and random forest as two representative SCLs for the reasons described above. The term \textit{Ensemble}, to be formally introduced in Section \ref{sec:math_formalization}, will indicate training an SCL on each cluster within the training set and then combining all cluster-level predictions using a specific weighting strategy, creating a single predictor that can be applied to external studies. The \textit{Merged} method, also to be formally introduced in Section \ref{sec:math_formalization}, will refer to the strategy of training the same learning algorithm chosen to create the SCLs on the entire training data. With this language in mind, the rest of the paper is organized in two main subsections comparing the results of merging versus ensembling from both theoretical and numerical perspectives, for linear least squares regression (Section \ref{sec:least_square}) and random forest regression (Section \ref{sec:random_forest}) respectively. \section{Main results}\label{sec:main_results} We divide our discussion of the main results into the following subsections. In Section \ref{sec:math_formalization} we introduce the mathematical framework under which our analytic explorations will be carried out. Subsequently, Sections \ref{sec:least_square} and \ref{sec:random_forest} provide our main results on the comparison between ensembling and merging through the lens of linear least squares and random forest predictors respectively. These subsections also contain the numerical experiments used to verify and extend some of the intuitions gathered from the theoretical findings. \subsection{Mathematical Formalization}\label{sec:math_formalization} We consider independent observations $(Y_i,\mathbf{x}_i), i=1,\ldots,n$ on $n$ individuals, with $Y_i\in \mathbb{R}$ and $\mathbf{x}_i\in \mathbb{R}^p$ denoting the outcome of interest and the $p$-dimensional covariates respectively. Our theoretical analyses will be carried out under a linear regression assumption on the conditional distribution of $Y$ given $\mathbf{x}$ as follows: $Y_i=\mathbf{x}_i^T\boldsymbol{\beta}+\varepsilon_i, i=1,\ldots,n,\label{eqn:model}$ where the $\varepsilon_i$ are i.i.d. random variables independent of $\mathbf{x}_i$ with mean $0$ and variance $\sigma^2$. Although the conditional outcome distribution remains identical across subjects, we will naturally introduce sub-structures in the covariates $\mathbf{x}_i$, as found through subject matter knowledge and/or pre-processing through clustering. 
To this end, given any $K\geq 2$ we consider a partition of $\{1,\ldots,n\}$ into disjoint subsets $ \mathbb{S}_1,\ldots,\mathbb{S}_K$ with $|\mathbb{S}_t|=n_t, t\geq 1$ and $\sum_t n_t=n$. Subsequently, we define the $t^{\rm th}$ data sub-matrix as $(\mathbf{Y}_t:\mathbb{X}_t)_{n_t\times (p+1)}$ with $\mathbf{Y}_t=(Y_i)_{i\in \mathbb{S}_t}$ and $\mathbb{X}_t$ collecting the corresponding $\mathbf{x}_i$'s in its rows for $i\in \mathbb{S}_t$. We also denote the merged data matrix as $(\mathbf{Y},\mathbb{X})$, where $\mathbf{Y}=(\mathbf{Y}_1^T \ldots \mathbf{Y}_K^T)^T$ and $\mathbb{X}=(\mathbb{X}_1^T \ldots \mathbb{X}_K^T)^T$. This setup allows us to define the ensembling and merged prediction strategies as follows. \begin{algorithm}[ht] \caption{: \textbf{Ensembling}} \begin{algorithmic}[1] \State for $t = 1,\ldots ,K$: \begin{itemize} \item Compute $\hat{Y}_t(\mathbf{x}_{\star})$, the prediction of the SCL trained on $(\mathbf{Y}_t:\mathbb{X}_t)$ at a new point $\mathbf{x}_{\star}$ \item Determine $\hat{w}_t$, the weight given to $\hat{Y}_t(\mathbf{x}_{\star})$ within the ensemble, using the chosen weighting scheme \end{itemize} \State The prediction made by the \textit{Ensemble} as a function of the weight vector $\mathbf{w}=({w}_1,\ldots,{w}_K)$: $\hat{Y}_{{\mathbf{w}},E}(\mathbf{x}_{\star}) = \sum_{t = 1}^K \hat{w}_t \hat{Y}_t(\mathbf{x}_{\star})$; \Algphase{Algorithm 2 : Merging} \begin{enumerate} \item Train the same learning algorithm used for the SCLs on $(\mathbf{Y},\mathbb{X})$ \item The prediction of this \textit{Merged} learner on $\mathbf{x}_{\star}$ is $\hat{Y}_M(\mathbf{x}_{\star})$ \end{enumerate} \end{algorithmic} \end{algorithm} The rest of the paper will focus on analyzing linear least squares and random forest regressions under the above notational framework. As we shall see, the main difference in the behavior of these two SCLs arises from their respective underlying bias-variance interplays. This in turn necessitates different strategies for understanding ensembling methods based on these two learners. In particular, since it has been shown in [\cite{ramchandran2021}, Figure 5] that the bias comprises nearly the entirety of the MSE for both \textit{Merged} and \textit{Ensemble} learners constructed with random forest SCLs and trained on datasets containing clusters, we focus on the behavior of the squared bias for the random forest regression analysis. In contrast, for the analysis of least squares under model \eqref{eqn:model}, the lack of any bias of the least squares method implies that one needs to pay special attention to the weighting schemes used in practice. To tackle this issue, we first discuss the asymptotic behavior of the commonly used stacking weighting method \citep{breiman1996stacked} (see e.g. \citep{Ramchandran2020,ramchandran2021} and references therein for further details) through the lens of numerical simulations. Building on this characterization, we subsequently appeal to tools from random matrix theory to provide a precise asymptotic comparison between the ensembling and merging techniques. \subsection{Linear Least Squares Regression}\label{sec:least_square} We now present our results on linear least squares regression under both ensembling and merging strategies. To this end, we shall always assume that each of $\mathbb{X}_t$ and $\mathbb{X}$ is full column rank almost surely. In fact, this condition holds whenever $p\leq \min_t n_t$ and the $\mathbf{x}_i$'s have non-atomic distributions \citep{eaton1973non}. 
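To make the two strategies concrete, the following Python sketch simulates model \eqref{eqn:model} with $K=2$ Gaussian clusters and compares the \textit{Merged} least squares fit with a simple-averaging \textit{Ensemble} of cluster-level fits; simple averaging stands in here for the stacking weights, which our simulations in Section \ref{sec:simulation_lin_reg} suggest concentrate near $1/K$. All specific choices of $n_t$, $p$ and the cluster means are illustrative. \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, n_t, p = 2, 400, 100              # two clusters, p/n_t = 0.25

beta = rng.normal(size=p) / np.sqrt(p)
mus = [rng.normal(size=p) / np.sqrt(p) for _ in range(K)]

# clustered training data and the cluster-level (SCL) OLS fits
Xs = [mu + rng.normal(size=(n_t, p)) for mu in mus]
Ys = [X @ beta + rng.normal(size=n_t) for X in Xs]
b_scl = [np.linalg.lstsq(X, Y, rcond=None)[0]
         for X, Y in zip(Xs, Ys)]

# Merged OLS fit on the pooled data
beta_m = np.linalg.lstsq(np.vstack(Xs), np.concatenate(Ys),
                         rcond=None)[0]

# excess prediction risk at new points x_star ~ N(0, I)
X_new = rng.normal(size=(2000, p))
y_new = X_new @ beta
print(np.mean((X_new @ beta_m - y_new)**2))             # Merged
print(np.mean((X_new @ np.mean(b_scl, 0) - y_new)**2))  # Ensemble
\end{verbatim}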
Other ``nice'' distributions additionally adhere to such a condition with high probability in large sample sizes, and thus the main ideas of our arguments can be extended to other applicable distributional paradigms. If we consider a prediction for a new data point $\mathbf{x}_{\star}$, the \textit{Merged} and \textit{Ensemble} predictors based on a linear least squares SCL can be written as \begin{align*} \hat{Y}_{M}&=\mathbf{x}_{\star}^T(\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbf{Y},\\ \hat{Y}_{\hat{\mathbf{w}},E}&=\sum_{t=1}^K{\hat{w}_t\mathbf{x}_{\star}^T\left(\mathbb{X}_t^T\mathbb{X}_t\right)^{-1} \mathbb{X}_t^T \mathbf{Y}_t}, \end{align*} where the $\hat{w}_t$ are the stacking weights described in section 2 of \cite{ramchandran2021}. Stacked regression weights, originally proposed by Leo Breiman in 1996 \citep{breiman1996stacked}, have been shown in practice to produce optimally performing ensembles \cite{Stacking1996, Ramchandran2020}. Therefore, a first challenge in analyzing the performance of the ensemble predictor $\hat{Y}_E$ is handling the data-dependent nature of the stacking weights $\hat{w}_t$. To circumvent this issue we refer to our experimental results (see e.g. Table \ref{table:invstack} in Section \ref{sec:simulation_lin_reg}) to verify that asymptotically, the stacking weights described in \cite{Patil2018} represent a convex weighting scheme. In particular, we remark that across a variety of simulations, the stacking weights corresponding to $K$ clusters asymptotically lie on the $K$-simplex. A closer look reveals that the weights individually converge to ``universal'' deterministic limits which sum to $1$. Consequently, in the rest of this section we shall work with stacking weights upholding such requirements. We will not aim to derive a formula for these limits (although this can be done with some work), but rather will simply demonstrate that even the optimal weighting for predicting the outcome at a new point $\mathbf{x}_{\star}$ is inferior to the merged predictor $\hat{Y}_M$. This immediately implies the benefits of using $\hat{Y}_M$ instead of $\hat{Y}_E$ in the case of linear least squares regression under a linear outcome model. We next characterize the optimal convex combination of weights for predicting an outcome corresponding to the new point $\mathbf{x}_{\star}$ as inverse variance weighting (IVW) in the following lemma. \begin{lemma} \label{lemma:inv_variance_weighting} Denote by $\mathbf{w}$ the $K$-dimensional vector of weights, ${\mathbf{w}}=[{w}_1, \ldots, {w}_K]^T$. Then, the inverse variance weighting (IVW) scheme is the solution to the following optimization problem \begin{align*} \min_{\mathbf{w}} \mathrm{Var}(\hat{Y}_E({\mathbf{w}},\mathbf{x}_{\star})) \quad \text{ s.t.} \quad \sum_{t = 1}^K {w}_t = 1, \end{align*} yielding ${w}_t^{\rm opt} = \left(\sum_{s = 1}^K \frac{1}{\sigma_s^2} \right)^{-1} \frac{1}{\sigma_t^2}$ for $t = 1, \ldots, K$, with $\sigma^2_t=\mathrm{Var}\left(\mathbf{x}_{\star}^T\left(\mathbb{X}_t^T\mathbb{X}_t\right)^{-1}\mathbb{X}_t^T\mathbf{Y}_t\right)$. 
\end{lemma} Using the results from Lemma \ref{lemma:inv_variance_weighting}, we can now compare the mean squared prediction error of the \textit{Merged} predictor $\hat{Y}_M$ with that of the optimally weighted \textit{Ensemble} predictor $\hat{Y}_{w,\mathrm{opt}}(\mathbf{x}_{\star}):=\sum_{t=1}^Kw_{t}^{\mathrm{opt}}(\mathbf{x}_{\star})\mathbf{x}^T_{\star}\left(\mathbb{X}_t^T\mathbb{X}_t\right)^{-1} \mathbb{X}_t^T\mathbf{Y}_t$ at input $\mathbf{x}_{\star}$ under high dimensional asymptotics where $\frac{p}{n}\rightarrow \gamma \in [0,1)$. It is simple to see that both learners are unbiased, and therefore the MSEs are driven by the following variances: \begin{align} \text{Var}\left[ \hat{Y}_{w,\rm opt}(\mathbf{x}_{\star}) |\mathbf{x}_{\star}\right] &= \sigma^2\bigg[\sum_{t = 1}^K \left[\mathbf{x}_{\star}^T(\X_t^T\X_t)^{-1}\mathbf{x}_{\star}\right]^{-1}\bigg]^{-1}\\ \text{Var}\left[ \hat{Y}_M(\mathbf{x}_{\star})|\mathbf{x}_{\star} \right] &= \sigma^2\,\mathbf{x}_{\star}^T\bigg[\sum_{t = 1}^{K}\X_t^T\X_t\bigg]^{-1}\mathbf{x}_{\star} \end{align} \begin{theorem}\label{theorem:highdim} Suppose $K=2$, $\lambda_t=n/n_t$, $\mathbf{x}_i\sim N(\bmu_1,I)$ for $i\in \mathbb{S}_1$ and $\mathbf{x}_i\sim N(\bmu_2,I)$ for $i\in \mathbb{S}_2$, and $\mathbf{x}_{\star}\sim N(\mathbf{0},I)$. Assume that $p, n \to \infty$ such that $p/n\rightarrow \gamma$ and $p/n_t\rightarrow \lambda_t\gamma<1$ for $t\in \{1,2\}$. Also assume that $\bmu_1,\bmu_2$ have uniformly bounded (in $n,p$) norms. Then, the following holds: \begin{align*} \frac{\mathrm{Var}\left[\hat{Y}_M(\mathbf{x}_{\star})|\mathbf{x}_{\star}\right]}{\mathrm{Var}\left[ \hat{Y}_{w,\rm opt}(\mathbf{x}_{\star})|\mathbf{x}_{\star}\right]} &\rightarrow \frac{\gamma}{1-\gamma}\times {\sum_{t=1}^2\frac{1-\lambda_t\gamma}{\lambda_t\gamma}}=\frac{1-2\gamma}{1-\gamma}\quad \text{in probability.} \end{align*} \end{theorem} \par A few comments are in order regarding the assumptions made and the implications of the derived results. First, we note that the assumptions are not designed to be optimal but rather to present a typical instance where such a comparison holds. For example, the assumption of normality and a spherical variance-covariance matrix is not necessary and can easily be replaced by $\mathbf{x}_i\sim \bmu_t+\Sigma^{1/2}\mathbb{Z}_i$ for $\mathbb{Z}_i$ having i.i.d. mean-zero, variance $1$ coordinates with bounded $8^{\rm th}$ moments, and $\Sigma$ having empirical spectral distribution converging to a compact subset of $\mathbb{R}_+$ (see e.g. \cite{bai2008large}). Moreover, the mean $\mathbf{0}$ nature of $\mathbf{x}_{\star}$ also allows some simplification of the formulae that arise in the analyses and can be extended to incorporate more general means by appealing to results on quadratic functionals of the Green's function of $\mathbb{X}_t^T\mathbb{X}_t$ \citep{mestre2006asymptotic}. Similarly, the normality of $\mathbf{x}_{\star}$ allows ease of computation for higher order moments of quadratic forms of $\mathbf{x}_{\star}$ and can be dispensed with modulo suitable moment assumptions. We do not pursue these generalizations for the sake of space and rather focus on a set-up which appropriately conveys the main message of the comparison between the \textit{Ensemble} and \textit{Merged} predictors. Moreover, we have taken $p<\min_t n_t$ to produce exactly unbiased predictions through least squares. However, our analysis tools can easily be extended to explore ridge regression in $p>n$ scenarios (i.e. 
$\gamma>1$ in the context of Theorem \ref{theorem:highdim}). Finally, we note that the $K=2$ assumption is also made to simplify the proof ideas, and one can easily conjecture a general-$K$ result from our proof architecture (i.e. a ratio of $(1-K\gamma)/(1-\gamma)$ asymptotically). As for the main message, we can conclude that Theorem \ref{theorem:highdim} provides evidence that the asymptotic prediction mean squared error of the \textit{Merged} is strictly less than that of the \textit{Ensemble} for positive values of $\gamma$. Furthermore, for $\gamma = 0$, corresponding to the dimension $p$ growing more slowly than $n$, the ratio in Theorem \ref{theorem:highdim} converges to 1, indicating that the two approaches are asymptotically equivalent in this case and providing further theoretical evidence to support the empirical findings. We end this subsection by presenting results for the fixed-dimensional case, in order to demonstrate that the above results are not merely a high-dimensional artifact. The next theorem, the proof of which is standard and provided in the Supplementary Materials, provides further support to this theory. \begin{theorem}\label{theorem: linreg_merged_vs_ensemble} Suppose $K=2$ with $\mathbf{x}_i\sim N(\bmu_1,I)$ for $i\in \mathbb{S}_1$ and $\mathbf{x}_i\sim N(\bmu_2,I)$ for $i\in \mathbb{S}_2$ with $\|\bmu_{\mathbf{1}}\| = \|\bmu_{\mathbf{2}}\| = 1$, and that $\mathbf{x}_{\star}$ is randomly drawn from a distribution with mean $\mathbf{0}$ and variance-covariance matrix $I$. If $p=O(1)$ as $n\rightarrow \infty$, then there exists $\kappa<1$ such that \begin{align*} \frac{\mathrm{Var}\left[\hat{Y}_M(\mathbf{x}_{\star})|\mathbf{x}_{\star}\right]}{\mathrm{Var}\left[ \hat{Y}_{w,\rm opt}(\mathbf{x}_{\star})|\mathbf{x}_{\star}\right]} &\rightarrow \kappa<1, \quad \text{in probability} \end{align*} \end{theorem} Once again, the assumptions made are not designed to be optimal but rather to present a typical instance where such a comparison holds. In terms of the result, Theorem \ref{theorem: linreg_merged_vs_ensemble} provides analytic evidence that for two clusters under linear least squares regression with a fixed-dimensional linear model, the \textit{Merged} asymptotically produces predictions that are strictly more accurate than the \textit{Ensemble}. This is indeed congruent with the empirical results. Although we do not provide exact analytic arguments for more clusters (since they result in more complicated expressions in the analyses without changing the main theme of the results), the same relationship persists in simulations as the number of clusters increases (e.g., Table \ref{table:invstack} displays results for $K = 5$). Overall, we once again conclude that when both the \textit{Merged} and \textit{Ensemble} are unbiased by using linear least squares predictions, there is no benefit to cluster-specific ensembling -- even when the dimension is fixed. \subsection{Simulations}\label{sec:simulation_lin_reg} \begin{table}[ht] \centering \vspace*{0mm}\hspace*{0cm}\includegraphics[scale = .54]{inv_table_labeled.pdf} \caption{Performance and ensemble coefficient values for linear regression SCLs trained on 5 simulated Gaussian clusters per iteration, over 100 reps. (A) Average RMSE of the four different ensembling methods on test sets. Standard deviations are shown in parentheses. (B) Ensemble coefficient values for the 5 SCLs in training, ranked by Inverse Variance (IV) weighting. 95\% confidence intervals are shown in parentheses.
} \label{table:invstack} \end{table} We first present our numerical experiments to demonstrate the asymptotic behavior of the stacking weights in Table \ref{table:invstack}. In the simulation used to create Table \ref{table:invstack}, ensembles of least squares regression learners combined with IVW or stacking were compared when trained and tested on datasets generated by Gaussian mixture models. For each of 100 iterations, a training set with 5 clusters was generated using the {\tt clusterGeneration} package in R, with median values of between-cluster separation \cite{clusterGeneration}. Test sets were drawn using the same general approach with 2 clusters. All outcomes were generated through a linear model, following an analogous framework to \cite{ramchandran2021}. To form the ensembles, least squares regression learners were first trained on each cluster; then, all SCLs were combined through either stacking or inverse variance weights; for the latter, each SCL was weighted proportionally to the inverse sample variance of its prediction accuracy on all clusters not used in training. \par From Table \ref{table:invstack}A, we observe that there is no significant difference in prediction accuracy between the \textit{Merged} learner and ensembles weighted through simple averaging, IVW, or stacking. Table \ref{table:invstack}B displays the weights given by either IVW or stacking to each cluster within training, with clusters ordered from the lowest to the highest weight ranges. The distributions of weights for both approaches are centered around the same value, as evidenced by the median cluster weight for Cluster 3. Stacking weights essentially mimic simple averaging, while IVW in general results in a slightly larger range of weights. However, the equal prediction performance of ensembles constructed using either weighting scheme demonstrates that these slight differences in the tails of the weighting distribution produce negligible effects. Simple averaging, IVW, and stacking weights are all centered around the same value, indicating that each SCL on average is able to learn the true covariate-outcome relationship to a similar degree. Furthermore, the \textit{Merged} learner is able to achieve the same level of accuracy, illustrating that, at least empirically, there is no benefit to ensembling over merging for least squares regression SCLs. We next present our numerical experiments to demonstrate the accuracy of Theorem \ref{theorem:highdim} in the $p,n$ asymptotic regime. \begin{figure}[ht] \centering \includegraphics[width=.7\linewidth]{highdim_2clust.pdf} \caption{Percent change in average MSE of ensembling approaches compared to the \textit{Merged} for linear regression \textit{Ensemble} learners trained and tested on datasets with two equal-sized Gaussian clusters, as a function of $\gamma_t = \frac{p}{n_t}$. The theoretical limit is shown in red. } \label{fig:F1} \end{figure} In this regard, Figure \ref{fig:F1} presents the percent change in the average MSE of the \textit{Ensemble} compared to the \textit{Merged}, using both the limiting expressions presented in Theorem \ref{theorem:highdim} and results from a simulation study conforming to all assumptions. In the simulation, we set the number of samples $n_t$ per cluster at 400 for $t = 1, 2$, and varied the dimension $p$ incrementally from 0 to 400 to evaluate the performance of these approaches for different values of $\gamma_t = \frac{p}{n_t}$.
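Before detailing the data generation, we note that the limiting ratio in Theorem \ref{theorem:highdim} can itself be checked with a minimal sketch. The version below (Python; standard normal designs with zero cluster means and equal cluster sizes, which are simplifying assumptions relative to the actual simulation) evaluates the exact conditional variances displayed before Theorem \ref{theorem:highdim} and compares their ratio with $(1-2\gamma)/(1-\gamma)$.
\begin{verbatim}
# Sketch: Var[Y_M]/Var[Y_{w,opt}] versus the limit (1 - 2g)/(1 - g).
import numpy as np

rng = np.random.default_rng(1)
n_t, K = 400, 2
for p in (40, 120, 200, 280):
    gamma = p / (K * n_t)                          # p/n with n = K*n_t
    X = [rng.standard_normal((n_t, p)) for _ in range(K)]
    x = rng.standard_normal(p)
    q = [x @ np.linalg.inv(Xt.T @ Xt) @ x for Xt in X]
    var_opt = 1.0 / sum(1.0 / qt for qt in q)      # IVW ensemble
    var_merged = x @ np.linalg.inv(sum(Xt.T @ Xt for Xt in X)) @ x
    print(p, var_merged / var_opt, (1 - 2 * gamma) / (1 - gamma))
\end{verbatim}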
Clusters were again simulated using the {\tt clusterGeneration} package as described above, and for the sake of simplicity, the outcome coefficients for the second cluster were set to be the negatives of those for the first. From Figure \ref{fig:F1}, we observe that the theoretical and empirical results closely mirror one another, and that the performance of the \textit{Ensemble} and the \textit{Merged} are almost identical for values of $\gamma_t < .8$, after which the \textit{Merged} produces a sharp comparative increase in performance. These results, together with the fixed-dimension asymptotics presented in Theorem \ref{theorem: linreg_merged_vs_ensemble}, show that for unbiased linear regression it is overall more advantageous to train a single learner on the entire dataset than to ensemble learners trained on clusters. \subsection{Random Forest Regression}\label{sec:random_forest} Next, we examine the asymptotic risk of the \textit{Merged} and \textit{Ensemble} approaches built with random forest SCLs. It has previously been shown that for regression tasks on clustered data, the bias dominates the MSEs of random forest-based learners \cite{ramchandran2021}. Additionally, these results indicate that reduction of the bias empirically constitutes the entirety of the improvement in prediction performance of the \textit{Ensemble} over the \textit{Merged} (see Figure 5 of \cite{ramchandran2021}). As we have just discussed in the above section, there is no advantage to ensembling over merging for unbiased least squares regression, and in fact the \textit{Merged} learner is asymptotically strictly more accurate than the \textit{Ensemble}. Therefore, we can explore random forests as SCLs in order to determine the effect of ensembling over merging for learning approaches that are biased for the true outcome model, and furthermore pinpoint whether bias reduction is the primary mechanism through which the general cluster-based ensembling framework produces improvements for forest-based learners. \par One of the greatest challenges in developing results for random forests is the difficulty of analyzing Breiman's original algorithm; therefore, we will build our theory upon the centered forest model initially proposed by Breiman in a technical report \cite{Breiman2004}. The primary modification in this model is that the individual trees are grown independently of the training sample, but it still has attractive features, such as variance reduction through randomization and adaptive variable selection, that characterize the classic random forest algorithm. Previous work has shown that if the regression function is sparse, this model retains the ability of the original to concentrate the splits only on the informative features; thus, in this section, we introduce possible sparsity into our considered outcome model. Throughout, we will use notation consistent with \cite{klusowski2020} and \cite{biau2012}, and will build on the work of the former on upper bounding the bias of centered forests on standard uniform data to present upper bounds for the bias of the \textit{Merged} and \textit{Ensemble} for centered forest SCLs trained on datasets containing clusters. \subsubsection{Training sets with two uniformly distributed clusters} We begin by considering a simple yet representative data-generating model corresponding closely to that on which several previous analytical works in this area have been based \citep{biau2012,klusowski2020}.
In particular, we shall assume throughout this section that $p$ is fixed, but we will still keep track of the effect of the number of non-zero coefficients $S$ on the leading term of the bias of the ensembled and merged predictors. In our analyses, we consider a training set consisting of two non-overlapping, uniform clusters. In particular, the distributions of the two training clusters are given by $\X_1 \stackrel{\rm i.i.d.}{\sim} [\mathrm{U}\left(0, \frac{1}{2}\right)]^p$ and $\X_2 \stackrel{\rm i.i.d.}{\sim} [\mathrm{U}(1, \frac{3}{2})]^p$, with $n_1 = n_2$; that is, each row consists of $p$ independent uniform variables, with the specific distributional parameters depending on cluster membership. We shall often express each training observation compactly as (with an abuse of notation) \begin{align*} \mathbf{x}_{i} &\stackrel{\rm i.i.d.}{\sim} \left[\mathrm{U}\left(0, \frac{1}{2}\right)\right]^p\mathbbm{1}\{i \in \mathbb{S}_1\} + \left[\mathrm{U}\left(1, \frac{3}{2}\right)\right]^p\mathbbm{1}\{i \in \mathbb{S}_2\} \text{ for } i = 1,...,n. \end{align*} As before, we denote by $\mathbb{S}_1$ and $\mathbb{S}_2$ the respective sets of indices in the first and second clusters. Now, we shall consider the distribution of the new point $\mathbf{x}_{\star}$ to be a balanced mixture of the first and second cluster distributions; that is, $\mathbf{x}_{\star} \sim \mathrm{A} \times [\mathrm{U}\left(0, \frac{1}{2}\right)]^p + (1- \mathrm{A}) \times [\mathrm{U}(1, \frac{3}{2})]^p$, where $\mathrm{A} \sim \text{Bernoulli}\left(\frac{1}{2}\right)$ represents a random indicator of the cluster membership of $\mathbf{x}_{\star}$. We note here that the choice of parameters for each cluster-level distribution (i.e. the end points of the uniform clusters) is arbitrary; our calculations depend only on the width of each uniform interval and on the two ranges being non-overlapping. \par In this section our analysis will also be able to incorporate certain sparsity structures of $\boldsymbol{\beta}$ in \eqref{eqn:model}. In particular, given any subset $\mathbf{S}\subset \{1,\ldots,p\}$, we suppose that $\boldsymbol{\beta}$ in our outcome model has non-zero coordinates restricted to $\mathbf{S}$. The outcome model can therefore be described by the regression function $f(\mathbf{x}) = \mathbf{x}_{\mathbf{S}}^T\boldsymbol{\beta}_{\mathbf{S}}$ -- the vector $\mathbf{x}_{\mathbf{S}}$ denoting the covariates corresponding to only the strong features captured in the vector $\boldsymbol{\beta}_{\mathbf{S}}$ out of $p$ total features. Let $S=|\mathbf{S}|$ denote the number of ``strong'' features, i.e., the covariates that have non-zero contributions to the outcome in the linear relationship. Thus, $f(\mathbf{x})$ is a sparse regression function, with the degree of sparsity mediated by $S$. This underlying model implies a convenient asymptotic representation of the random forest development scheme as described below. \par Now, using the language of \cite{klusowski2020}, we define $p_{nj}$ as the probability that the $j^{th}$ variable is selected for a given split of the tree construction, for $j = 1,..,p$. In an idealized scenario, we are aware of which variables are strong -- in this case, the ensuing random procedure produces splits that asymptotically choose only the strong variables, each with probability $1/S$, and all other variables with zero probability; that is, $p_{nj} \to 1/S$ if $j \in \mathbf{S}$ and $p_{nj} \to 0$ otherwise.
The splitting criterion, as described in Section 3 of \cite{biau2012}, is as follows: at each node of the tree, first select $M$ variables with replacement; if all chosen variables are weak, then choose one at random to split on. Thereafter, one can argue that under suitable assumptions, centered forests asymptotically adaptively select only the strong features, and therefore we can intuitively restrict all further analysis to those $S$ variables. In this regard, we will assume hereafter that $p_{nj}=\frac{1}{S}(1+\xi_{nj})$ for $j\in \mathbf{S}$ and $p_{nj}=\xi_{nj}$ otherwise -- where for each $j$, $\xi_{nj}$ is a sequence that tends to $0$ as $n\rightarrow \infty$. Finally, we shall let $p_n=\frac{1}{S}(1+\min_{j}\xi_{nj})$. Also, we let $\log_2 k_n$ denote the number of times the process of splitting is repeated, for a parameter $k_n > 2$ as defined in \cite{klusowski2020}. \par The above randomized splitting scheme allows us to formalize the definition of a centered random forest based on given training data $(Y_i,\mathbf{x}_i)_{i\in \mathcal{D}_n}$ as follows. Given a new test point $\mathbf{x}_{\star}$ and a randomizing variable $\theta$ that defines the probabilistic mechanism building each tree, we define $A_n(\mathbf{x}_{\star}, \theta)$ as the box of the random partition containing the test point $\mathbf{x}_{\star}$, and the individual tree predictor as \begin{align*} f_n(\mathbf{x}_{\star};\theta, \mathcal{D}_n) &= \frac{\sum_{i \in \mathcal{D}_n } Y_i \mathbbm{1}\{\mathbf{x}_{i} \in A_n(\mathbf{x}_{\star}, \theta)\}}{\sum_{i \in \mathcal{D}_n} \mathbbm{1}\{ \mathbf{x}_{i} \in A_n(\mathbf{x}_{\star}, \theta )\}} \mathbbm{1}\{\epsilon_n(\mathbf{x}_{\star}, \theta)\}, \end{align*} where $\epsilon_n(\mathbf{x}_{\star}, \theta)$ is the event that $\sum_{i \in \mathcal{D}_n} \mathbbm{1}\{\mathbf{x}_{i} \in A_n(\mathbf{x}_{\star}, \theta)\} > 0$; that is, there is at least one training point that falls within the same partition as $\mathbf{x}_{\star}$. The forest represents an average over all such trees; we can then obtain the prediction made by the forest by taking the expectation of the individual tree predictors with respect to the randomizing variable $\theta$: \begin{align*} \bar{f}_n(\mathbf{x}_{\star}; \mathcal{D}_n) &= \sum_{i \in \mathcal{D}_n} Y_i \mathbb{E}_{\theta}\left[\frac{\mathbbm{1}\{\mathbf{x}_{i} \in A_n(\mathbf{x}_{\star}, \theta)\}}{\sum_{l \in \mathcal{D}_n} \mathbbm{1}\{\mathbf{x}_{l} \in A_n(\mathbf{x}_{\star}, \theta)\}} \mathbbm{1}\{\epsilon_n(\mathbf{x}_{\star}, \theta)\} \right]. \end{align*} \par Now, we note that the \textit{Ensemble} approach performs a weighted average of the predictions of two forests, one trained on each cluster. As before, let $\hat{Y}_1(\mathbf{x}_{\star})$, $\hat{Y}_2(\mathbf{x}_{\star})$, and $\hat{Y}_E(\mathbf{x}_{\star})$ designate the respective predictions of the forests trained on clusters 1 and 2 and of the overall ensemble at the test point $\mathbf{x}_{\star}$. For the centered forest algorithm, coordinates of the test point that lie outside the range of the training data for that forest are given predictive values of 0. Therefore, only the forest trained on the first cluster gives non-zero predictions for coordinates in the interval $[0, \frac{1}{2}]$, and vice versa for the forest trained on the second cluster and coordinates in the interval $[1, \frac{3}{2}]$.
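To make the construction concrete, the following sketch (Python) implements a single centered tree and a small centered forest on one $[\mathrm{U}(0,1)]^p$ cluster -- an illustrative simplification of the two-cluster setting, with the idealized split probabilities ($p_{nj}=1/S$ on strong coordinates, $0$ otherwise) hard-coded rather than estimated.
\begin{verbatim}
# Sketch of a centered tree/forest (Breiman 2004 / Biau 2012 variant):
# cells are split at their midpoints along randomly chosen strong
# coordinates, independently of the responses.  Illustrative only.
import numpy as np

def centered_tree_predict(X, Y, x_star, strong, log2_kn, rng):
    lo, hi = np.zeros(X.shape[1]), np.ones(X.shape[1])
    for _ in range(log2_kn):                 # log2(k_n) split rounds
        j = rng.choice(strong)               # idealized: strong coords only
        mid = 0.5 * (lo[j] + hi[j])
        if x_star[j] <= mid:
            hi[j] = mid
        else:
            lo[j] = mid
    in_cell = np.all((X >= lo) & (X <= hi), axis=1)
    return Y[in_cell].mean() if in_cell.any() else 0.0

rng = np.random.default_rng(2)
p, S, n = 6, 2, 4000
X = rng.uniform(0.0, 1.0, size=(n, p))
beta = np.zeros(p)
beta[:S] = rng.standard_normal(S)            # sparse linear outcome
Y = X @ beta + 0.1 * rng.standard_normal(n)
x_star = rng.uniform(0.0, 1.0, size=p)
forest = np.mean([centered_tree_predict(X, Y, x_star, np.arange(S), 6, rng)
                  for _ in range(200)])      # average over theta draws
print(forest, x_star @ beta)
\end{verbatim}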
For convenience of notation and ease of proof architecture, we internally weight each learner in the overall ensemble by the cluster membership of $\mathbf{x}_{\star}$ within the functional forms of $\hat{Y}_1(\mathbf{x}_{\star})$ and $\hat{Y}_2(\mathbf{x}_{\star})$. Using this representation, we provide precise expressions for the predictions made by the Merged and Ensemble learners at $\mathbf{x}_{\star}$ during the proof of our main theorem in the supplementary material -- which essentially considers a weighting scheme based on $w_1=\mathbbm{1}(\mathbf{x}_{\star}\in [0,1/2]^p )$ and $w_2=\mathbbm{1}(\mathbf{x}_{\star}\in [1,3/2]^p )$. We show that both predictions can be represented as weighted averages of the training outcome values, with the choice of weights as the differentiating factor between the two methods. \par With this, we are now ready to present our main result for the mean squared prediction errors of the \textit{Ensemble} and \textit{Merged} approaches under the centered random forest model described above. At this point we appeal to numerical evidence demonstrating that the squared bias dominates the variance in asymptotic order [\cite{ramchandran2021}, Figure 5] and thereby focus on the asymptotics of the bias in this article. The variance component has been empirically and theoretically shown to converge to zero faster than the squared bias, and thus analysis of the squared bias term asymptotically approximates the MSE \cite{klusowski2020}. Our next result reports the leading term of the squared bias, where we utilize the assumptions on $p_{nj}$, $p_n$, and $k_n$ discussed above. \begin{theorem} \label{theorem: rf_uniform} Assume that $\boldsymbol{\beta}_{\mathbf{S}} \sim N(\bzero, I_{S\times S})$, $K = 2$, $\mathbf{x}_i \stackrel{\rm i.i.d.}{\sim} [\mathrm{U}\left(0, \frac{1}{2}\right)]^p$ for $i\in \mathbb{S}_1$, and $\mathbf{x}_i \stackrel{\rm i.i.d.}{\sim} [\mathrm{U}(1, \frac{3}{2})]^p$ for $i\in \mathbb{S}_2$. Additionally suppose that $p_{nj}\log k_n \to \infty$ for all $j = 1, \ldots, p$ and $k_n/n \to 0$ as $n \to \infty$. Then, \begin{enumerate} \item [(i)] The squared bias of the \textit{Ensemble} is upper bounded by \begin{align*} \frac{S}{8} k_n^{\log_2(1 - 3p_n/4)}\left[1 + o(1) \right] \end{align*} \item [(ii)] The squared bias of the \textit{Merged} is upper bounded by \begin{align*} \frac{S}{4} k_n^{\log_2(1 - 3p_n/4)}\left[1 + o(1) \right] \end{align*} \end{enumerate} \end{theorem} We first note the similarity of these bounds to the results in \cite{klusowski2020} (and therefore the bounds converge to $0$ owing to the assumption $p_{nj}\log{k_n}\rightarrow \infty$), and thereafter note that the upper bound for the \textit{Merged} is exactly twice that for the \textit{Ensemble}. The bounds for both learners depend only on the number of non-sparse variables $S$, the parameter $k_n$, and the split probability $p_n$, with the squared bias terms converging at the same rate $O(k_n^{\log_2(1 - 3p_n/4)})$. Intuition as to why the \textit{Merged} does not perform as well as the \textit{Ensemble} in this case may be gleaned from the following observation: the \textit{Merged} has the same bound as would be achieved by a single forest trained on observations arising from a $[U(a, a + 1)]^p$ distribution for some constant $a$.
That is, the \textit{Merged} ignores the cluster structure of the data and instead treats the data as arising from the average of its component distributions, whereas the \textit{Ensemble} produces improvements by explicitly taking the cluster structure into account. \subsubsection{Simulations}\label{sec:simulations} In this subsection we verify through numerical experiments that the intuitions gathered above from the theory, which assumes uniformly distributed covariates within each cluster, also extend to other distributions. \begin{figure}[ht] \centering \includegraphics[width=5in]{rf_theory_UGL.pdf} \caption{Average RMSE of the \textit{Merged} and the \textit{Ensemble} as a function of the number of clusters in the training set. \textbf{(A)} Uniform clusters \textbf{(B)} Multivariate Gaussian clusters \textbf{(C)} Multivariate Laplace-distributed clusters} \label{fig:F2} \end{figure} In this regard, Figure \ref{fig:F2} displays the results of simulation studies to experimentally validate the frameworks presented in Theorem \ref{theorem: rf_uniform} and Theorem 5 in Section A.8 of the Appendix for clustered data arising from three different distributional paradigms. We observe that regardless of distribution, the \textit{Ensemble} produces a significantly lower average RMSE than the \textit{Merged}, and that the difference between the two methods levels out as $K$ increases. Interestingly, across all distributions considered, the \textit{Ensemble} empirically converges to produce about a 33\% improvement over the \textit{Merged} for high $K$, whereas the theoretical upper bounds suggest a continually increasing magnitude of improvement. This indicates that while the relationship between the upper bounds derived in Theorem 5 is confirmed through the simulations, the theoretical bounds become significantly less tight for higher $K$. \section{Discussion}\label{sec:discussions} In this paper, we have provided first steps towards a theoretical understanding of the possible benefits and pitfalls of ensembling learners based on clusters within training data, compared to ignoring the grouping structure of the covariates. Our results suggest that the benefits vary depending on the nature of the underlying algorithm and often play out differently based on whether the bias or the variance dominates. We verify some of the numerical observations made in practice and on synthetic data using the cases of linear regression and random forest regression -- each chosen to present a contrasting perspective. This represents a first effort at providing a theoretical lens on this phenomenon. Several questions remain -- in particular, an exploration of the exact asymptotics of the stacking mechanism while ensembling, as well as incorporating actual data-driven clustering algorithms (such as k-means) as a pre-processing step, are both interesting open directions worth future study. \clearpage \bibliographystyle{plainnat}
\section{Introduction} The recent experimental data from the Double Chooz [1], Daya Bay [2], RENO [3], T2K [4] and MINOS [5] collaborations indicate not only a nonzero reactor angle ($\theta_{13}$) but also that its magnitude is of the order of the Cabibbo angle ($\theta_{c}$). Tri-Bimaximal (TBM) mixing [6] and Bimaximal (BM) mixing [7, 8, 9] are two popular mixing patterns which predict $\sin\theta_{13}=0$. TBM mixing has strong theoretical support because of its relation with $A_{4}$ [10 - 14], one of the candidate discrete flavour symmetry groups. From a theoretical point of view, a small deviation of the order of $\lambda_{c}^{2}$ (where $\lambda_{c} = \sin\theta_{c} \approx 0.22$) is expected. But a large correction of the order of $\lambda_{c}$ to $\sin\theta_{13}=0$ clearly calls into question TBM mixing as a first approximation. This was pointed out in the literature [15]. The same argument holds for the BM mixing scheme also. In addition, at the Neutrino 2012 conference the MINOS collaboration hinted at a non-maximal $\theta_{23}$, which also goes against the TBM and BM predictions. From the analyses given in Refs. [16, 17], $\theta_{23}$ tilts towards a preference for $\theta_{23}< 45^{\circ}$. A new mixing scheme called Bi-Large (BL) mixing [15] has recently been proposed by Boucenna \textit{et al.}, apart from the existing TBM and BM mixing schemes. They considered $\sin\theta_{13}$ as the fundamental parameter ($\lambda$); the idea behind this ansatz lies in the smallness of $\theta_{13}$ among the three mixing parameters. They expressed $\sin\theta_{12}$ and $\sin\theta_{23}$ as linear functions of $\lambda$. Thus, \begin{equation} \sin\theta_{13}=\lambda,\quad \sin\theta_{12}=a\lambda, \quad \sin\theta_{23}=s\lambda . \end{equation} Here $a$, $s$ are free parameters and $a \simeq s$. The resulting parametrization reduces to neither the TBM nor the BM pattern as a limiting case, though a maximal atmospheric angle can be obtained. When $\lambda \rightarrow 0$, the neutrinos are unmixed. From a simple numerical analysis they have shown that strict BL mixing occurs when $\lambda \simeq \lambda_{c} \approx 0.22$, and under that condition we get $a = s = 3$. We start with this strict BL ansatz [Eq.(1)], where the Cabibbo angle ($\lambda_{c}$), the most important parameter of the CKM matrix, generates the whole parametrization in the neutrino sector. We take \begin{eqnarray} \sin\theta_{13}=\lambda_{c},\quad \sin\theta_{12}=3\lambda_{c}, \quad \sin\theta_{23}=3\lambda_{c}. \end{eqnarray} Pending a formal derivation of BL mixing from a discrete symmetry, we wish to explore its matrix form on phenomenological grounds. Following the standard PDG scheme of parametrization, we arrive at the following strict BL mixing matrix ($U_{BL}$), \begin{align} U_{BL} &=\begin{pmatrix} \frac{3}{4}(1-\frac{\lambda_{c}^{2}}{2})& \frac{\sqrt{7}}{4}(1-\frac{\lambda_{c}^{2}}{2}) & \lambda_{c} \\ -\frac{3\sqrt{7}}{16}(1+\lambda_{c})& \frac{9}{16}(1-\frac{7}{9}\lambda_{c}) & \frac{\sqrt{7}}{4}(1-\frac{\lambda_{c}^{2}}{2}) \\ \frac{7}{16}(1-\frac{9}{7}\lambda_{c})& -\frac{3\sqrt{7}}{16}(1+\lambda_{c}) & \frac{3}{4}(1-\frac{\lambda_{c}^{2}}{2}) \end{pmatrix}, \\ \nonumber\\ \quad \quad &=\begin{pmatrix} 0.7309& 0.6446 & 0.2257 \\ -0.608& 0.4637 & 0.6446 \\ 0.3105& -0.608 & 0.7309 \end{pmatrix}. \nonumber \end{align} $U_{BL}$ satisfies the unitarity condition.
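As a numerical cross-check (illustrative only), the sketch below rebuilds $U_{BL}$ from the PDG parametrization with $\delta = 0$, $\sin\theta_{13}=\lambda_{c}$ and $\sin\theta_{12}=\sin\theta_{23}=\sqrt{7}/4 \approx 3\lambda_{c}$ -- the choice that makes $\cos\theta_{12}=\cos\theta_{23}=3/4$, as in the analytic entries of Eq.(3) -- and verifies unitarity; the entries agree with Eq.(3) up to the $O(\lambda_{c}^{4})$ approximation used there.
\begin{verbatim}
# Sketch: strict BL mixing matrix from the PDG parametrization.
import numpy as np

lam = 0.2257                                # lambda_c
s13, s12, s23 = lam, np.sqrt(7) / 4, np.sqrt(7) / 4
c13 = np.sqrt(1 - s13**2)
c12 = c23 = 0.75                            # sqrt(1 - 7/16) = 3/4

U = np.array([
    [c12*c13,                  s12*c13,                  s13],
    [-s12*c23 - c12*s23*s13,   c12*c23 - s12*s23*s13,    s23*c13],
    [s12*s23 - c12*c23*s13,    -c12*s23 - s12*c23*s13,   c23*c13]])
print(np.round(U, 4))                       # compare with Eq. (3)
print(np.allclose(U @ U.T, np.eye(3)))      # unitarity (real matrix)
\end{verbatim}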
If we first approximate the neutrino mixing matrix $U_{\nu}$ by $U_{BL}$, then, in order to account for the required deviations, we consider the correction from the charged lepton sector [18]. We try to find a possible texture of the charged lepton matrix $U_{l}$ (which must satisfy the unitarity condition) that may serve our purpose. \section{The problems in Bi-maximal (BM) mixing} The strict BL mixing [15] and BM mixing patterns have certain similarities. $\theta_{12}$ and $\theta_{23}$ are equal in both cases: the former predicts them to be $41^{\circ}$ and the latter takes them to be maximal, i.e., $45^{\circ}$. The significant difference lies in the fact that the former starts with $\theta_{13}=\theta_{c}$, and the latter with $\theta_{13}=0^{\circ}$, \begin{eqnarray} U_{BM}=\begin{pmatrix} \frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}} & 0 \\ -\frac{1}{2}& \frac{1}{2} & \frac{1}{\sqrt{2}} \\ \frac{1}{2} & -\frac{1}{2} &\frac{1}{\sqrt{2}} \end{pmatrix}. \end{eqnarray} In Ref. [18], the authors put forward a viable technique to comply with the experimental data, summarised as follows. They considered $U_{\nu}= U_{BM}$ and then performed a charged lepton correction by choosing the charged lepton matrix $U_{l}$ to be of CKM type, \begin{eqnarray} U_{l}=\begin{pmatrix} 1-\frac{\lambda_{c}^{2}}{2} & \lambda_{c} e^{i\delta_{cp}} & 0 \\ -\lambda_{c} e^{-i\delta_{cp}} & 1-\frac{\lambda_{c}^{2}}{2} & 0 \\ 0 & 0 & 1 \end{pmatrix}. \end{eqnarray} The possible inclusion of the Dirac phase $\delta_{cp}$ in the $1-2$ and $2-1$ positions of $U_{l}$ was first introduced by Fritzsch and Xing [19]. In Eq.(5), $U_{l}$ satisfies the unitarity condition. Then $U_{PMNS}=U_{l}^{\dagger} U_{\nu}$ becomes \begin{eqnarray} U_{PMNS}=\begin{pmatrix} \frac{1}{\sqrt{2}}(1-\frac{\lambda_{c}^{2}}{2})+\frac{\lambda_{c}}{2}e^{i\delta_{cp}} & \frac{1}{\sqrt{2}}(1-\frac{\lambda_{c}^{2}}{2})-\frac{\lambda_{c}}{2}e^{i\delta_{cp}} & -\frac{\lambda_{c}}{\sqrt{2}}e^{i\delta_{cp}} \\ \frac{1}{2}(\frac{\lambda_{c}^{2}}{2}-1)+\frac{\lambda_{c}}{\sqrt{2}}e^{-i\delta_{cp}} &\frac{1}{2}(1-\frac{\lambda_{c}^{2}}{2})+\frac{\lambda_{c}}{\sqrt{2}}e^{-i\delta_{cp}} & \frac{1}{\sqrt{2}}(1-\frac{\lambda_{c}^{2}}{2}) \\ \frac{1}{2} & -\frac{1}{2} &\frac{1}{\sqrt{2}} \end{pmatrix} . \end{eqnarray} From Eq.(6), using the following relations, \begin{equation} \sin^{2}\theta_{13} = \vert U_{e3}\vert ^{2}, \quad \sin^{2}\theta_{12} = \frac{\vert U_{e2}\vert ^{2}}{1- \vert U_{e3}\vert ^{2}}, \quad \sin^{2}\theta_{23} = \frac{\vert U_{\mu 3}\vert ^{2}}{1- \vert U_{e3}\vert ^{2}}, \end{equation} we obtain \begin{eqnarray} \sin^{2}\theta_{13}&=& \frac{\lambda_{c}^{2}}{2}\approx 0.0254, \\ \sin^{2}\theta_{12} &=&\frac{4-2 \lambda_{c}^{2}+2 \sqrt{2}\lambda_{c}(\lambda_{c}^{2}-2)\cos\delta_{cp}}{8(1-\frac{\lambda_{c}^{2}}{2})} ,\\ \sin^{2}\theta_{23} &=&\frac{1}{2}(1-\frac{\lambda_{c}^{2}}{2})\approx 0.488,\\ J^{BM}_{CP}& \approx &\frac{1}{4\sqrt{2}} \lambda_{c} \sin\delta_{cp}. \end{eqnarray} The prediction for $\theta_{13}$ matches the best-fit value [20], while that for $\theta_{23}$ lies within $2\sigma$ [20]. The prediction for $\theta_{12}$ depends on $\delta_{cp}$. Now, if we want $\sin^{2}\theta_{12}$ to equal 0.32 (best-fit) [20], then from Eq.(9) we must have $\cos\delta_{cp}=1.13$, which is absurd. The relation between $\sin^{2}\theta_{12}$ and $\delta_{cp}$ is illustrated in Fig.1.
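The offending value of $\cos\delta_{cp}$ is easy to reproduce by inverting Eq.(9) at the best-fit $\sin^{2}\theta_{12}=0.32$ (a short numerical check, for illustration only):
\begin{verbatim}
# Sketch: solve Eq. (9) for cos(delta_cp) at sin^2(theta_12) = 0.32.
import numpy as np

lam = 0.2257                                  # lambda_c
s2_12 = 0.32
num = 8 * (1 - lam**2 / 2) * s2_12 - 4 + 2 * lam**2
den = 2 * np.sqrt(2) * lam * (lam**2 - 2)
print(num / den)                              # ~1.13 > 1, hence absurd
\end{verbatim}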
The minimum value of 0.3407 for $\sin^{2}\theta_{12}$ (i.e., $\tan^{2}\theta_{12}\approx 0.52$) is obtained at the cost of $\cos\delta_{cp}=1$, which in turn gives a vanishing CP violation parameter (Jarlskog invariant), $J^{BM}_{CP}=0$. This is the discrepancy of the BM model: $\sin^{2}\theta_{12}$ cannot be suppressed even though $J_{CP}$ is sacrificed. \begin{figure} \begin{center} \includegraphics[scale=1]{1.ps} \caption{\footnotesize The dependence of $\cos\delta_{cp}$ on $\sin^{2}\theta_{12}$ for the BM case with charged lepton correction. The prediction of the solar angle cannot be lowered to the present experimental best-fit in any possible way. The lowering of $\theta_{12}$ up to a certain level is possible at the cost of $\delta_{cp}\rightarrow 0, 2\pi $.} \end{center} \end{figure} \section{Strict Bi-Large mixing and Charged lepton contribution} We now assume that the neutrino mixing matrix $U_{\nu}$ follows strict BL mixing [Eq.(2), Eq.(3)] and take $U_{\nu} = U_{BL}$. We assume the charged lepton mixing matrix to be of CKM type. Motivated by the similarities between the two mixing schemes and the partial success above, we try the same CKM-type $U_{l}$ employed for the BM case (Eq.(5)) [18] and generate $U_{PMNS} = U_{l}^{\dagger} U_{BL}$, \begin{eqnarray} U_{PMNS}=\begin{pmatrix} U_{e1} & U_{e2} & U_{e3} \\ U_{\mu 1} & U_{\mu 2} & U_{\mu 3} \\ U_{\tau 1} & U_{\tau 2} & U_{\tau 3} \end{pmatrix}, \end{eqnarray} where \begin{eqnarray} U_{e1}&=& \frac{3}{16}\lbrace(\lambda_{c}^{2}-2)^{2}+\sqrt{7}\lambda_{c}(1+\lambda_{c})e^{i \delta_{cp}}\rbrace, \nonumber\\ U_{e2}&=& \frac{1}{16}\lbrace \sqrt{7}(\lambda_{c}^{2}-2)^{2}+\lambda_{c}(7\lambda_{c}-9)e^{i \delta_{cp}}\rbrace , \nonumber\\ U_{e3}&=& \frac{1}{8} \lambda_{c}(\lambda_{c}^{2}-2)(\sqrt{7}e^{i\delta_{cp}}-4), \nonumber\\ U_{\mu 1}&=& \frac{3}{32}(\lambda_{c}^{2}-2)\lbrace \sqrt{7}(1+\lambda_{c})-4\lambda_{c} e ^{-i\delta_{cp}}\rbrace, \nonumber \end{eqnarray} \begin{eqnarray} U_{\mu 2}&=& \frac{1}{32}(2-\lambda_{c}^{2})( 9- 7\lambda_{c}+4\sqrt{7} e^{-i\delta_{cp}}), \nonumber\\ U_{\mu 3}&=& \frac{\sqrt{7}}{16}(\lambda_{c}^{2}-2)^{2} + \lambda_{c}^{2} e^{-i \delta_{cp}}, \nonumber\\ U_{\tau 1}&=&\frac{1}{16}(7-9\lambda_{c}),\nonumber \\ U_{\tau 2}&=& -\frac{3\sqrt{7}}{16}(1+\lambda_{c}),\nonumber\\ U_{\tau 3}&=& \frac{3}{8}(2-\lambda_{c}^{2}).\nonumber \end{eqnarray} Following the relations of Eq.(7), from Eq.(12) we get \begin{small} \begin{eqnarray} \sin^{2}\theta_{13} &=& \frac{1}{64}\lambda_{c}^{2}(\lambda_{c}^{2}-2)^{2}(23-8\sqrt{7}\cos\delta_{cp}),\\ \sin^{2}\theta_{12} &=& \frac{112+\lambda_{c}^{2}\lbrace7 \lambda_{c}(31\lambda_{c}-18)-143\rbrace+2\sqrt{7}\lambda_{c}(7\lambda_{c}-9)(\lambda_{c}^{2}-2)^{2}\cos\delta_{cp}}{256\lbrace1+\frac{1}{64}\lambda_{c}^{2}(\lambda_{c}^{2}-2)^{2}(8\sqrt{7}\cos\delta_{cp}-23)\rbrace} ,\\ \sin^{2}\theta_{23} &=& \frac{112-\lambda_{c}^{2}\lbrace224-424\lambda_{c}^{2}-32\sqrt{7}(\lambda_{c}^{2}-2)^{2}\cos\delta_{cp}\rbrace}{256\lbrace1+\frac{1}{64}\lambda_{c}^{2}(\lambda_{c}^{2}-2)^{2}(8\sqrt{7}\cos\delta_{cp}-23)\rbrace}. \end{eqnarray} \end{small} In Ref. [20], three $1\sigma$ ranges are quoted for $\sin^{2}\theta_{23}$: 0.400-0.461 and 0.573-0.635 (N.H.), and 0.569-0.626 (I.H.). From Eq.(15), with the limit $0\leq\vert \cos\delta_{cp}\vert\leq 1$, we get the bound on $\sin^{2}\theta_{23}$ as $0.427-0.463$; hence, of the three possible $1\sigma$ bounds on $\sin^{2}\theta_{23}$, two are strongly ruled out, and our analysis fits very well with the first one [fig.4].
This supports $\theta_{23}$ lying within the first octant. It is to be noted that the best fit [20] of $\sin^{2}\theta_{23}$, i.e., $0.427$, coincides with our analysis when $\delta_{cp}=0$. From Eqs.(13)-(15) it is clear that the Dirac phase $\delta_{cp}$ affects the predictions of all three mixing angles, in contrast to the BM case where only $\theta_{12}$ is affected by $\delta_{cp}$ (Eq.(9)). It seems that the situation is now much more complicated than in the BM case. If our initial choices of $U_{\nu}$ as strict BL and $U_{l}$ as CKM type were appropriate, then on placing the best-fit [20] results for at least two of the three parameters in any two of the three Eqs.(13)-(15), the predictions of $\delta_{cp}$ from the respective equations must coincide. The situation is as if, for the one unknown parameter $\delta_{cp}$, there is more than one equation. We first solve Eq.(13) with the best-fit value of $\sin^{2}\theta_{13}$ [20] to find $\cos\delta_{cp}$, and do the same for Eq.(14) with $\sin^{2}\theta_{12}$. Remarkably, we find that the prediction $\cos\delta_{cp}\approx 0.70$ (i.e., $\delta_{cp} \approx 0.25 \pi$) is the same from both equations. In the next step, we put $\cos\delta_{cp}\approx 0.70$ into Eq.(15) and get $\sin^{2}\theta_{23}\approx 0.44$, which is close to the best-fit result $\sin^{2}\theta_{23} =0.427$ [20]. These analyses are illustrated graphically in Figs. 2-4. With strict BL mixing as the first approximation ($U_{\nu}= U_{BL}(\lambda_{c})$) and a unitary charged lepton mixing matrix ($U_{l}(\lambda_{c},\delta_{cp})$) of CKM type, the predictions are summarised as follows: \begin{equation} \sin^{2}\theta_{13}=0.0245,\quad \sin^{2}\theta_{12}=0.3209, \quad \sin^{2}\theta_{23}=0.4533, \quad \delta_{cp}= 0.2515\pi. \end{equation} From Eq.(12), we work out the CP-violating Jarlskog invariant parameter as $J_{CP}^{BL}=\mathrm{Im}[U^{*}_{e1} U^{*}_{\mu 3} U_{e3} U_{\mu 1}]$, \begin{align} \vert J_{CP}^{BL} \vert = \frac{9\sqrt{7}}{4096}\lambda_{c}\lbrace 28-8\lambda_{c} (1+8\lambda_{c}-\lambda_{c}^{2})+ 57 \lambda_{c}^{4}\rbrace \sin\delta_{cp} \approx 0.0304 \sin\delta_{cp}. \end{align} If we choose $\delta_{cp}\approx 0.2515 \pi$, as per the prediction, then we get $J_{CP}^{BL} \approx 0.0216$.
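These predictions can be checked directly from the numerical $U_{BL}$ of Eq.(3) (a sketch, for illustration; it reproduces the angles and $J_{CP}^{BL}$ quoted above, up to rounding of the $U_{BL}$ entries):
\begin{verbatim}
# Sketch: U_PMNS = U_l^dagger U_BL at delta_cp = 0.2515*pi.
import numpy as np

lam, delta = 0.2257, 0.2515 * np.pi
U_BL = np.array([[0.7309, 0.6446, 0.2257],
                 [-0.608, 0.4637, 0.6446],
                 [0.3105, -0.608, 0.7309]])
e = np.exp(1j * delta)
U_l = np.array([[1 - lam**2 / 2, lam * e,        0],
                [-lam / e,       1 - lam**2 / 2, 0],
                [0,              0,              1]])
U = U_l.conj().T @ U_BL

s2_13 = abs(U[0, 2])**2
s2_12 = abs(U[0, 1])**2 / (1 - s2_13)
s2_23 = abs(U[1, 2])**2 / (1 - s2_13)
J = np.imag(U[0, 0].conj() * U[1, 2].conj() * U[0, 2] * U[1, 0])
print(s2_13, s2_12, s2_23, J)   # ~0.0245, ~0.321, ~0.453, ~0.0216
\end{verbatim}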
\begin{figure}[t] \begin{center} \includegraphics[scale=1]{2.ps} \caption{\footnotesize The dependence of $\cos\delta_{cp}$ on $\sin^{2}\theta_{13}$ for the BL case with charged lepton correction.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=1]{3.ps} \caption{\footnotesize The dependence of $\cos\delta_{cp}$ on $\sin^{2}\theta_{12}$ for the BL case with charged lepton correction.} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=1]{4.ps} \caption{\footnotesize The dependence of $\cos\delta_{cp}$ on $\sin^{2}\theta_{23}$ for the BL case with charged lepton correction.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=1]{5.ps} \caption{\footnotesize The variation of $J_{CP}$ with $\sin\delta_{cp}$ for the BL case with charged lepton correction.} \end{center} \end{figure} \section{Prediction of the effective electron neutrino mass $m_{ee}$ in $0\nu\beta\beta$ decay} The effective electron neutrino mass $m_{ee}$ appearing in neutrinoless double beta decay ($0\nu\beta\beta$) is given as \begin{eqnarray} m_{ee} = \vert m_{1} U_{e1}^{2} + m_{2} U_{e2}^{2} + m_{3} U_{e3}^{2} \vert, \end{eqnarray} where the $m_{i}$ are the masses of the three neutrino mass eigenstates. Using Eq.(12), with $\lambda_{c}=0.2257$ and $\delta_{cp}\approx 0.2515 \pi$, we get \begin{align} m_{ee}=\vert 0.5262 m_{1} + 0.4056 m_{2} + 0.06954 m_{3} + (0.1953 m_{1} - 0.1314 m_{2} -0.0640 m_{3})\cos\delta_{cp} \vert. \end{align} For the N.H. case, with $m_{1}$ as the smallest mass, we have \begin{eqnarray} m_{2}=\sqrt{m_{1}^{2}+\Delta m_{21}^{2}},\quad m_{3}=\sqrt{m_{1}^{2}+\Delta m_{31}^{2}} . \end{eqnarray} We impose the cosmological upper bound $\Sigma m_{i} \leq 0.28$ eV [21] in our analysis. We fix $\Delta m_{21}^{2} \sim 7.62 \times 10^{-5}$ eV$^{2}$ (best-fit) [20] and $\Delta m_{31}^{2} \sim 2.55 \times 10^{-3}$ eV$^{2}$ (best-fit) [20], plot $\Sigma m_{i}$ taking the lowest mass $m_{1}$ as a free parameter, and get the quasidegenerate upper limit for $m_{1}$ as 0.088 eV (fig. 6). \begin{figure}[h] \begin{center} \includegraphics[scale=1]{6.ps} \caption{\footnotesize The variation of $\Sigma m_{i}$ with $m_{1}$. The cosmological upper bound: $\Sigma m_{i}\leq 0.28$ eV. The Q.D. limit of $m_{1}$ is $0.088$ eV.} \end{center} \end{figure} We then plot $m_{ee}$ with respect to $m_{1}$ for three different cases concerning the Majorana phases: ($ + m_{2}, + m_{3}$), ($ - m_{2}, + m_{3}$) and ($ + m_{2}, - m_{3}$) (fig.7). For these three cases, the predictions for $m_{ee}$ under the quasidegenerate limit $m_{1} \sim 0.088$ eV are as follows: \begin{eqnarray} &( +m_{2}, +m _{3} )& : \quad \quad 0.0045\ eV \leq m_{ee}\leq 0.0891\ eV, \nonumber\\ &( -m_{2}, +m _{3} )&: \quad \quad 0\leq m_{ee}\leq 0.0335\ eV,\nonumber\\ &( +m_{2}, -m _{3} )& : \quad \quad 0.0023\ eV \leq m_{ee}\leq 0.0839\ eV, \end{eqnarray} where the $\pm$ signs before $m_{2,3}$ indicate the Majorana CP phases. \begin{figure} \begin{center} \includegraphics[scale=1]{7.ps} \caption{\footnotesize The variation of $ m_{ee}$ with $m_{1}$ for the ($+m_{2},+m_{3}$), $(-m_{2}, +m_{3})$ and $(+m_{2}, -m_{3})$ $CP$ cases.} \end{center} \end{figure} Pascoli and Petcov [22] showed that if the neutrino mass ordering were of normal type, then $\vert m_{ee}\vert$ would satisfy $0.001\ eV\leq\vert m_{ee}\vert$, which is consistent with the cases discussed above except ($ -m_{2}, + m_{3}$).
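The ranges quoted above can be reproduced with a few lines (a sketch, for illustration; the coefficients are those of the $m_{ee}$ expression above, with the Majorana signs applied to the $m_{2}$ and $m_{3}$ terms):
\begin{verbatim}
# Sketch: m_ee along the normal hierarchy for the three sign cases.
import numpy as np

cosd = np.cos(0.2515 * np.pi)
dm21, dm31 = 7.62e-5, 2.55e-3                # best-fit splittings, eV^2
m1 = np.linspace(1e-4, 0.088, 500)           # up to the Q.D. limit
m2 = np.sqrt(m1**2 + dm21)
m3 = np.sqrt(m1**2 + dm31)

def mee(s2, s3):                             # s2, s3 = Majorana signs
    return np.abs(0.5262*m1 + 0.4056*s2*m2 + 0.06954*s3*m3
                  + (0.1953*m1 - 0.1314*s2*m2 - 0.0640*s3*m3)*cosd)

for s2, s3 in [(+1, +1), (-1, +1), (+1, -1)]:
    vals = mee(s2, s3)
    print(s2, s3, vals.min(), vals.max())    # compare quoted ranges
\end{verbatim}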
There is an upper bound on the neutrino mass parameter, $m_{ee}\leq 0.27$ eV [23], which comes from neutrinoless double beta decay experiments. The upper bounds of $m_{ee}$ for the three cases under the quasidegenerate limit of $m_{1}$ satisfy this condition. \section{Summary} We have discussed the shortcomings of the BM model where, even after considering the charged lepton correction, we are unable to lower the solar angle below $\sin^{2}\theta_{12}=0.3407$ (i.e., $\tan^{2}\theta_{12}=0.52$), although the predictions of $\theta_{13}$ and $\theta_{23}$ comply with the experimental results. Boucenna \textit{et al.} introduced a new mixing pattern called Bi-Large mixing, where the Cabibbo angle ($\lambda_{c}$) seeds the whole parametrization [Eqs.(1)-(3)]. We assume $U_{l}$ to be of CKM type [Eq.(5)] and construct $U_{PMNS}$. This new model, although phenomenological, is characterized by the following significant features: \textbf{(a)} any possibility other than $\theta_{23}$ lying within the first octant is sharply ruled out; \textbf{(b)} the predictions $\sin^{2}\theta_{13} \sim 0.0245$ and $\sin^{2}\theta_{12} \sim 0.3209$ are in precise agreement with the experimental best-fit values, and we obtain $\sin^{2}\theta_{23}\sim 0.453$, which is close to the best-fit value (within the $1\sigma$ range); and \textbf{(c)} $\delta_{cp}\sim 0.2515 \pi$ and $\vert J_{CP}^{BL} \vert = 0.0304 \sin\delta_{cp}\sim 0.0216$. The same $U_{l}$ (CKM type), when incorporated with $U_{BM}$, was only partly successful in complying with the experimental results, because there lowering the solar angle imposes the condition $\delta_{cp}\rightarrow 0$ (i.e., no CP violation). This shortcoming is removed very easily when we associate the same CKM-type $U_{l}$ with the strict BL scheme. Hence the BL mixing scheme is very significant in the light of present experimental results. The model is further strengthened by the fact that the predictions of $\theta_{13}$, $\theta_{12}$ and $\theta_{23}$ individually depend upon $\delta_{cp}$ without any contradiction: all three angles agree with the desired results for a single choice of $\delta_{cp}\sim 0.2515 \pi$. Finally, the model is employed to study the upper bounds of $m_{ee}$ in the quasidegenerate limit for three different Majorana $CP$ phase choices in the normal hierarchy. A formal derivation of the BL mixing matrix from a discrete symmetry is an important aspect of our future investigation. \section*{Acknowledgement} One of the authors (SR) wishes to convey his heartiest gratitude to Chandan Duarah of the Department of Physics, Gauhati University, for useful discussions.
\section{Introduction} \input{intro.tex} \label{intro} \section{Problem Formulation and Theory} \input{theory.tex} \label{theory} \section{Results and Discussion} \input{results.tex} \label{results} \section{Summary} \input{summary.tex} \label{summary} \begin{acknowledgments} This work was sponsored by NSF grant CBET-0853379. \end{acknowledgments} \subsection{Dimensionless Parameters} To gain an understanding of the relative dominance of different transport mechanisms in the system, we calculate the values of the Damk\"{o}hler and Rayleigh numbers and the parameter $\beta$ from their respective formulas using appropriate values of the system variables. In addition, we estimate the parameters using values we extract from the simulations. Specifically, we calculate approximations to these three dimensionless parameters by numerically integrating the relevant fluxes throughout the system geometry and computing the relevant ratios. Table \ref{salttab2} shows the results of these calculations. \input{salttab2.tex} The total reaction-driven flux of protons is given by $J$, defined in (\ref{currcons}). The total diffusive, convective and electromigration fluxes of protons through the fluid are estimated by integrating the $z$-components of the appropriate local fluxes over the two-dimensional annular disk surrounding the middle of the rod and extending from the rod surface to the boundary of the simulation domain. The parameters are then numerically estimated by computing the relevant flux ratios: convective to diffusive flux ($Ra_e$), electromigration to diffusive flux ($\beta$), and total reaction flux to diffusive flux ($Da$). Although the numerical values of the dimensionless parameters differ significantly between the analytical and numerical versions, the general trends are the same in both cases. The Rayleigh number is $O(10^{-4})$ in the analytical case and $O(10^{-1})$ in the numerical case, indicating that electroconvection is dominated by diffusion in transporting mass. The values of $\beta$ are no larger than 0.01 in either case, showing that electromigration is relatively unimportant compared to diffusion. The Damk\"{o}hler numbers are of similar orders of magnitude in both cases, suggesting that the majority of the charged species injected into the solution by the reactions are transported by diffusion. In all cases, the dimensionless parameters become smaller as salt concentration is increased. In the case of the Rayleigh number, this decrease reflects the decreased swimming speed of the rod, reducing the convective flux of all species. The reduction in $\beta$ is due to the reduction in the tangential electric field magnitude, which drives the electromigration flux. Finally, the Damk\"{o}hler numbers decrease with salt concentration, indicating that the reaction rate decreases slightly as salt is added. Overall, we conclude from this analysis that the bulk of transport in the system is due to diffusion (to a greater extent than in the case without salt\cite{moran_electrokinetic_2011}), and electroconvection is relatively unimportant in transporting species. \subsection{Physical Reasons for Speed Decrease} Figure \ref{fig2} shows that the rod velocity decreases roughly by a factor of 20 when the conductivity is increased from 8.8 to 100 $\mu$S/cm. This velocity decrease is due to several factors, each of which causes a partial reduction in the swimming speed.
In descending order of importance, these factors are (i) the decrease in magnitude of the propulsive electric field, (ii) the decrease in magnitude of the area-averaged zeta potential, and (iii) the decrease in overall reaction rate. We calculated the characteristic electric field $E^*$ of the system as the electric field that would need to be externally applied to drive conventional electrophoresis of the rod having the same surface charge at a speed equal to the measured swimming speed. As conductivity is increased from 8.8 to 100 $\mu$S/cm, $E^*$ decreases from 415 to 38 V/m, a decrease of roughly 90 \%, i.e., $E^*(\sigma = 8.8) / E^*(\sigma = 100) = 10.92$. Since swimming speed is directly proportional to $E^*$, the reduction in $E^*$ with conductivity reduces the swimming speed of the rod by roughly an order of magnitude. In all cases, the electrolyte added to the solution is at a significantly higher bulk concentration than protons and bicarbonate ions. While the bulk concentration of protons and bicarbonate ions was kept fixed at 0.9 $\mu$mol/L, the electrolyte concentration was varied from 56.4 to 820 $\mu$mol/L. The counterions in the electrolyte are attracted to the rod to help screen the surface charge and result in a net decrease in proton concentration in the diffuse layer, as shown in Fig. \ref{fig9} (a). The reduced proton concentration causes a reduction in the reaction rate because the reaction rate on the gold depends on the square of the proton concentration. When conductivity is increased from its minimum to its maximum value, the total reaction rates on the anode and cathode decrease by roughly 20 \%, such that $J(\sigma = 8.8) / J(\sigma = 100) = 1.27$. Due to the proportionality between the speed and surface activity identified by Golestanian, Liverpool, and Ajdari,\cite{golestanian_designing_2007} and also proposed and validated in the scaling analysis of our previous work,\cite{moran_locomotion_2010,moran_electrokinetic_2011} we conclude that the reduction in reaction rate causes a decrease in swimming speed of roughly 20 \% from the minimum to the maximum conductivity. Figure \ref{fig4} shows that the average zeta potential decreases with the addition of salt. This change in $\zeta$, which is closely linked to the rod potential, is driven by the decrease in reaction rates and the conservation of current requirement. The reaction rate on the cathode is reduced as salt is added, due to the limited availability of protons (see Fig. \ref{fig9}) to participate in the peroxide reduction reaction. Since current must be conserved, the rod potential self-adjusts such that the rate of peroxide oxidation on the anode decreases. The average zeta potential decreases by roughly 45 \%, from $-$67.9 to $-$37.1 mV, i.e., $\bar{\zeta}(\sigma = 8.8) / \bar{\zeta}(\sigma = 100) = 1.83$. The change in zeta potential should result in a reduction in the swimming speed by roughly a factor of 2. Since speed is linearly related to $E^*$, zeta potential, and reaction rate, we can multiply the reduction factors together to obtain an estimate of the total velocity reduction factor due to all three effects. The result is $10.92 \times 1.27 \times 1.83 = 25.38$, which is in reasonably good agreement with the observed speed reduction factor of $U(\sigma = 8.8) / U(\sigma = 100) = 19.86$.
In summary, the conductivity-induced speed decrease originates from the decrease in the characteristic electric field (due to Ohm's law) and, to a lesser degree, from the reduction of the cathode reaction rate due to the exclusion of protons from the diffuse layer by the added nonreactive salt. \subsection{Governing Equations and Scaling Analysis} Following our previous work,\cite{moran_locomotion_2010,moran_electrokinetic_2011} we apply the Poisson-Nernst-Planck-Stokes system of equations to this problem. In the dilute solution limit, the concentration distributions of all species obey the dimensionless advection-diffusion equation, \begin{equation} Ra_e \left( \tilde{\mathbf{u}} \cdot \tilde{\nabla} \tilde{c}_k \right) = \tilde{\nabla}^2 \tilde{c}_k - \beta_k \tilde{\nabla } \cdot \left( \tilde{c}_k \tilde{\mathbf{E}} \right), \label{AD} \end{equation} where $\tilde{\mathbf{u}}$ is the fluid velocity normalized by the electroviscous velocity $U_{ev}$, $\tilde{c}_k$ is the concentration of species $k$ normalized by the background proton and bicarbonate ion concentration, $c_{\pm,\infty}$, $\tilde{\mathbf{E}} = -\tilde{\nabla} \tilde{\phi}$ is the electric field normalized by a characteristic electric field $E_0$, $Ra_e$ is the electric Rayleigh number, \begin{equation} Ra_e = \frac{U_{ev} a }{D_+}, \label{Raedef} \end{equation} $D_+$ is the diffusivity of protons, $a$ is a length scale over which the tangential electric field is significant, and $\beta_k$ is a dimensionless parameter that quantifies the relative importance of electromigration and diffusion of ion $k$, given as \begin{equation} \beta_k = \frac{z_k F E_0 a}{RT}. \label{betadef} \end{equation} In this case, all ions are monovalent, and the parameter $\beta$ therefore has the same value for every ion. Throughout the paper, dimensionless variables are indicated with a tilde, while dimensional variables and constants have no tilde. For oxygen and hydrogen peroxide, which are uncharged, the electromigration term is omitted. The concentration distributions are coupled to the electrostatic potential distribution through Poisson's equation, \begin{equation} - \frac{\varepsilon E_0}{F a c_{\pm,\infty}} \tilde{\nabla}^2 \tilde{\phi} = \tilde{\rho}_e = \sum_k z_k \tilde{c}_k. \label{poisson} \end{equation} Here $\varepsilon$ is the permittivity of the solution, $c_{\pm,\infty}$ is the bulk ion concentration, $\tilde{\phi}$ is the dimensionless electric potential normalized by $E_0 a$, and the summation is carried out over all ionic species. The fluid flow is described by the incompressible continuity and Stokes equations: \begin{equation} \tilde{\nabla} \cdot \tilde{\mathbf{u}} = 0, \label{COM} \end{equation} \begin{equation} 0 = \frac{1}{Re} \left( - \tilde{\nabla} \tilde{p} + \tilde{\nabla}^2 \tilde{\mathbf{u}} + \tilde{\rho}_e \tilde{\mathbf{E}} \right). \label{stokes} \end{equation} Here $Re = \rho U_{ev} d / \eta$ is the Reynolds number (which is never larger than $10^{-4}$ in the simulations considered here), $\tilde{p}$ is the pressure normalized by $\eta U_{ev}/d$, $d$ is a viscous length scale, and $\tilde{\rho}_e \tilde{\mathbf{E}}$ is the electrical body force resulting from the coupling of free charge in the solution and the self-generated electric field. Equations (\ref{AD}) and (\ref{poisson})-(\ref{stokes}) constitute a coupled, nonlinear system that is difficult to solve in general.
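For orientation, definitions (\ref{Raedef}) and (\ref{betadef}) can be evaluated directly. The sketch below uses purely illustrative magnitudes for $U_{ev}$, $a$, and $E_0$ (assumed values, not those behind Table \ref{salttab2}) and is meant only to convey orders of magnitude.
\begin{verbatim}
# Sketch: order-of-magnitude Ra_e and beta; all values illustrative.
F, R, T = 96485.0, 8.314, 298.0      # Faraday, gas constant, temperature
D_plus = 9.3e-9                      # proton diffusivity, m^2/s
a = 2.0e-7                           # length scale (assumed), m
E0 = 4.0e2                           # characteristic field (assumed), V/m
U_ev = 5.0e-6                        # electroviscous velocity (assumed), m/s

Ra_e = U_ev * a / D_plus             # convection vs. diffusion
beta = F * E0 * a / (R * T)          # electromigration vs. diffusion, z=1
print(f"Ra_e ~ {Ra_e:.1e}, beta ~ {beta:.1e}")
\end{verbatim}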
Approximate versions of these equations have been solved analytically to study the self-propulsion of a spherical cell,\cite{lammert_ion_1996} the autonomous fluid circulation near metallic disk electrodes in a peroxide solution,\cite{kline_catalytically_2006} and recently electrokinetic self-propulsion of synthetic particles similar to the case considered here.\cite{yariv_electrokinetic_2011,sabass_nonlinear_2012} Generally, obtaining analytical solutions to these equations requires one to make simplifying assumptions, e.g. that the reactions cause small perturbations of the field variables from the equilibrium state,\cite{kline_catalytically_2006,yariv_electrokinetic_2011} or that the electrical double layer (EDL) surrounding the particle is negligibly thin compared to the size of the particle.\cite{yariv_electrokinetic_2011,sabass_nonlinear_2012} We wish to describe in rigorous detail the physical phenomena occurring at the rod/solution interface, and we accordingly make no simplifying assumptions about the size of the EDL or the concentration perturbations. This allows us to study the system for a wide range of parameter values, and obtain numerical solutions to the full nonlinear equations to account for the cylindrical geometry and incorporate all of the important physical phenomena leading to self-propulsion. The natural velocity scaling in this system is the electroviscous velocity, $U_{ev} = \rho_{e,0} E_0 / (\eta / d^2)$, which was originally introduced by \citet{hoburg_internal_1976} and reflects the balance of viscous and electrical body forces in the system. We expect that the speed of the rod scales with $U_{ev}$. In our previous work we derived the general scaling relation for a charged rod with zeta potential $\zeta$ and area-averaged proton flux $j_+$ given as \begin{equation} U_{ev} \propto \frac{F L a d^2}{\lambda_D^2 \eta D_+} j_+ \zeta, \label{UevJFM} \end{equation} where $L$ is a length scale for the charge density distribution. Previously, we assumed that $L$ and $d$ both scale with the Debye length, while $a$ was proportional to the length of the rod, $h$.\cite{moran_locomotion_2010,moran_electrokinetic_2011} Although we have confirmed that the tangential electric field does increase in magnitude with increasing $h$ (assuming the particle is suspended in an infinite medium and neglecting the role of the particle mass), as shown in the supplementary section, here we are specifically interested in changes in the electric field due to changes in solution conductivity. According to Ohm's law, $i = \sigma E$, the electric field should scale inversely with solution conductivity. Solution conductivity scales approximately with ionic strength, $I$, and therefore we would expect $E_0 \propto 1 / I$. Ionic strength is related to the Debye length according to \begin{equation} \lambda_D^2 \equiv \frac{\varepsilon RT}{2 F^2 I}. \end{equation} From this definition and the inverse relationship between electric field and ionic strength, we see that the electric field can scale with the square of the Debye length, $E_0 \propto \lambda_D^2$.\cite{MoranPhD13} Note that we are not claiming that the electric field scales with the physical thickness of the EDL. Instead, we concur with previous work\cite{paxton_catalytically_2006} that the electric field scales inversely with the conductivity of the solution, through Ohm's law, which we express in terms of the definition of the Debye thickness.
Making this substitution into equation (\ref{UevJFM}), the scaling result for the swimming speed becomes \begin{equation} U_{ev} \propto \frac{F \lambda_D^2}{\eta D_+} j_+ \zeta \propto \frac{\varepsilon \zeta}{\eta} \frac{RT}{FD_+ I} j_+. \label{scaling} \end{equation} Again, this relation resembles the Helmholtz-Smoluchowski-like expression, Eq. (\ref{HSPax}), except with the effective electric field given by $E_0 \propto RT j_+ / FD_+ I$, and the equation is stated in terms of reaction rate $j_+$ instead of current density $i$. This equation predicts that the swimming speed should scale quadratically with Debye length (or inversely with ionic strength). A quadratic relationship between speed and Debye length was also asymptotically derived by Sabass and Seifert. \cite{sabass_nonlinear_2012} The prediction of an inverse dependence on conductivity was also made by \citet{paxton_catalytically_2006} and by Golestanian, Liverpool and Ajdari.\cite{golestanian_designing_2007} In all three cases, the predicted form for the swimming speed is proportional to the Helmholtz-Smoluchowski expression, except with different forms for the electric field. In addition, Golestanian's formula and our simulations account for the cylindrical geometry of the particle. \subsection{Boundary Conditions} The simulations are conducted in the reference frame of a stationary rod. We apply the no-slip condition at the rod surface, \begin{equation} \tilde{\mathbf{u}} = \mathbf{0}. \end{equation} At the domain boundary far from the rod, we prescribe vanishing viscous stress: \begin{equation} \left[ \tilde{\nabla} \tilde{\mathbf{u}} + \left( \tilde{\nabla} \tilde{\mathbf{u}} \right)^T \right] = 0. \end{equation} Here $\tilde{\nabla} \tilde{\mathbf{u}}$ is the velocity gradient tensor and the superscript $T$ denotes the transpose. This boundary condition effectively enforces a slip condition at the outer boundary, approximating an infinite medium. We evaluate the average fluid speed in the axial direction along this boundary to determine the swimming speed of the rod. Since anions do not react, their boundary condition is zero flux at the rod surface, \begin{equation} \mathbf{n} \cdot \tilde{\mathbf{j}}_- = 0, \end{equation} where $\mathbf{n}$ is the outward normal vector pointing into the fluid and the dimensionless flux of ion $k$ is defined as $\tilde{\mathbf{j}}_k = -\tilde{\nabla} \tilde{c}_k + \beta_k \tilde{c}_k \tilde{\mathbf{E}} + Ra_e \tilde{c}_k \tilde{\mathbf{u}}.$ The electrochemical reactions generate fluxes of protons leaving the anode surface and entering the cathode surface. These reactions are represented (dimensionally) in the model by \begin{equation} \mathbf{n} \cdot \mathbf{j}_+ = \left\{ \begin{array}{ll} j_{+,a} = K_{ox} c_{\mathrm{H}_2 \mathrm{O}_2} \exp \left[ \frac{(1-\alpha) m F \Delta \phi_S}{RT} \right], \textrm{ }0 < z < 1 \textrm{ }\mu\textrm{m},\\ j_{+,c} = K_{red} c_{\mathrm{H}_2 \mathrm{O}_2} c_+^2 \exp \left[ - \frac{\alpha m F \Delta \phi_S}{RT} \right], \textrm{ } - 1 \textrm{ }\mu\textrm{m} < z < 0 \end{array} \right. \label{fluxbc} \end{equation} where the subscript + denotes protons, the subscript $a$ indicates the anodic flux due to the peroxide oxidation reaction, and the subscript $c$ indicates cathodic flux for the peroxide reduction reaction. Positive values of the axial coordinate $z$ indicate the anode side of the rod (typically Pt), and negative values indicate the cathode (typically Au) side. 
Here, $K_{ox}$ and $K_{red}$ are the rate constants for peroxide oxidation and reduction, $\alpha$ is a dimensionless parameter between 0 and 1 (set here to 0.5) that quantifies the asymmetry of the energy barrier for the reaction, $m$ is the number of electrons transferred in the electrochemical reaction ($m=2$ for both reactions considered here), and $\Delta \phi_S$ is the voltage across the compact Stern layer of adsorbed species on the surface of the rod.\cite{bazant_current-voltage_2005} The expressions for the proton fluxes $j_+$ on the rod segments are given by Butler-Volmer equations with Frumkin's correction,\cite{frumkin_hydrogen_1933} and reflect the dependence of the kinetics of the electrochemical reactions on the local reactant concentrations and on the voltage across the compact Stern layer.\cite{moran_electrokinetic_2011,frumkin_hydrogen_1933,delahay_double_1965,bard_electrochemical_2000,bazant_current-voltage_2005} In equation (\ref{fluxbc}), we have implemented the Tafel approximation, meaning that the reactions are assumed to proceed in one direction only and the backward components of each reaction are considered negligible. For a derivation of Eq. (\ref{fluxbc}) starting from the full Butler-Volmer equation, the reader is referred to our previous paper.\cite{moran_electrokinetic_2011} The reaction rates for species other than protons are related to the proton fluxes according to the stoichiometry of the reactions. On the anode, one peroxide molecule is consumed for every two protons released into the solution: \begin{equation} \mathbf{n} \cdot \mathbf{j}_{\textrm{H}_2\textrm{O}_2,a} = - \frac{j_{+,a}}{2}, \end{equation} and one oxygen molecule is generated for every two protons generated: \begin{equation} \mathbf{n} \cdot \mathbf{j}_{\textrm{O}_2,a} = \frac{{j}_{+,a}}{2}. \end{equation} On the cathode, one peroxide molecule is consumed for every two protons consumed: \begin{equation} \mathbf{n} \cdot \mathbf{j}_{\textrm{H}_2\textrm{O}_2,c} = \frac{j_{+,c}}{2}. \end{equation} It has been suggested by Wang \textit{et al.},\cite{wang_bipolar_2006} among others, that the four-electron reduction of O$_2$ may also occur on the cathode end, perhaps even as the dominant reaction. While this is possible, O$_2$ reduction is not likely the dominant reduction reaction on the cathode side, since our previous work shows that the rods move faster in solutions purged of O$_2$ and slower in O$_2$-rich solutions.\cite{calvo-marzal_electrochemically-triggered_2009} We therefore assume that peroxide reduction is the only reaction occurring on the cathode, and that oxygen is nonreactive on the cathode end: \begin{equation} \mathbf{n} \cdot \mathbf{j}_{\textrm{O}_2,c} = 0. \end{equation} Far from the rod, the concentrations of all chemical species approach their bulk values, \begin{equation} \tilde{c}_k \rightarrow 1 \textrm{ as } \left| \tilde{\mathbf{r}} \right| \rightarrow \infty. \end{equation} The boundary conditions for the proton flux can also be stated in dimensionless form. Since the proton flux takes a different form on the anode and cathode, the Damk\"{o}hler number is defined differently for each metal. The Damk\"{o}hler numbers can be defined in terms of the reaction kinetic expressions, equation (\ref{fluxbc}).
On the anode, the dimensionless boundary condition takes the form \begin{equation} \mathbf{n} \cdot \tilde{\mathbf{j}}_+ = Da_{anode} \tilde{c}_{\textrm{H}_2 \textrm{O}_2}, \end{equation} where \begin{equation} Da_{anode} = \frac{K_{ox} a c_{\textrm{H}_2 \textrm{O}_2,\infty}}{D_+ c_{+,\infty}}. \label{Da_anode_def} \end{equation} On the cathode, the dimensionless boundary condition reads \begin{equation} \mathbf{n} \cdot \tilde{\mathbf{j}}_+ = Da_{cathode} \tilde{c}_{\textrm{H}_2 \textrm{O}_2} \tilde{c}_+^2, \end{equation} where \begin{equation} Da_{cathode} = \frac{K_{red} a c_{\textrm{H}_2 \textrm{O}_2,\infty} c_{+,\infty}}{D_+}. \label{Da_cathode_def} \end{equation} The Damk\"{o}hler numbers are defined differently on the anode and cathode, and the rate constants in the two definitions carry different units, such that each Damk\"{o}hler number is dimensionless. In stating the definitions of the Damk\"{o}hler numbers we have ignored the exponential terms in the kinetic expressions (i.e., we have assumed these terms to be equal to unity). Although these terms could be included, they would not significantly alter the magnitude of the Damk\"{o}hler numbers: in this work, the exponential terms range in magnitude from 0.97 to 1.03 in all cases studied. In general, the flux expressions for the anode and cathode are not equal at the junction between them, $z = 0$. To avoid unphysical discontinuities in the reaction flux and flux gradient, we multiply the flux profile along the length of the rod by a dimensionless sigmoidal weighting function, $\xi (z)$, defined as \begin{equation} \xi (z) = \left| \frac{2}{1 + e^{- \gamma z}} - 1 \right|, \end{equation} where $\gamma = 10^7$~m$^{-1}$ and $z$ is evaluated in meters. The function $\xi$ is roughly equal to 1 at the ends of the anode and cathode segments and zero at the anode/cathode boundary. The use of this weighting function reflects a diffuse interface, which would result in a reduced density of available reaction sites, and hence a reduced reaction rate, near the junction between the anode and cathode. The surface of the rod is theorized to contain an immobile layer of charged and uncharged adsorbed species, often referred to as the \textit{Stern layer}. Together with the diffuse layer of ions in the solution adjacent to the rod, these two layers constitute the electrical double layer (EDL). According to the Stern model of the EDL, the immobile (Stern) layer acts as a linear capacitor in series with the diffuse layer.\cite{bard_electrochemical_2000,bazant_current-voltage_2005} The electric potential gradient is extrapolated across the Stern layer, from the outer Helmholtz plane to the metal. Thus, the Stern voltage is linearly related to the normal electric field at the rod surface.
Following the Stern model, we treat this layer as a linear capacitor, which leads to the (dimensional) mixed boundary condition\cite{bard_electrochemical_2000,bazant_current-voltage_2005,moran_electrokinetic_2011} \begin{equation} \Phi_{rod} + \lambda_S \left( \mathbf{n} \cdot \nabla \phi \right)_{\textrm{OHP}} = \phi_{\textrm{OHP}} \equiv \zeta, \label{potentialBC} \end{equation} where $\Phi_{rod}$ is the electrical potential of the interior of the rod with respect to the bulk solution, $\lambda_S$ is an effective thickness of the Stern layer (set here to 2 {\AA} for all cases), and the subscript OHP indicates that the quantity is evaluated at the outer edge of the Stern layer, often termed the outer Helmholtz plane (OHP).\cite{bard_electrochemical_2000} Since the rod is conducting, the potential $\Phi_{rod}$ is assumed uniform everywhere in its interior. The zeta potential $\zeta$ is defined here as the potential at the OHP relative to the bulk solution, and in general it varies with position on the rod surface. The voltage across the Stern layer is defined as the internal rod potential minus the potential at the OHP, i.e. $\Delta \phi_S \equiv \Phi_{rod} - \zeta (z)$, and therefore depends on the normal electric field at the OHP through the above boundary condition on the potential, (\ref{potentialBC}). Far from the rod, the electric potential approaches zero, \begin{equation} \tilde{\phi} \rightarrow 0\textrm{ as }\left| \tilde{\mathbf{r}} \right| \rightarrow \infty. \end{equation} \subsection{Current Conservation} At steady state, the total charge in the rod must be conserved, implying that the net current into or out of the rod must be zero. We require that \begin{equation} \int_{anode} j_{+,a} dA = - \int_{cathode} j_{+,c} dA \equiv J \label{currcons} \end{equation} at steady state, where the reaction fluxes $j_{+,a}$ and $j_{+,c}$ are given by (\ref{fluxbc}). The system of equations (\ref{AD})-(\ref{stokes}) is solved concurrently and is closed by iterative determination of the rod potential $\Phi_{rod}$ that produces reaction fluxes satisfying (\ref{currcons}). The value of $\Phi_{rod}$ directly affects the reaction rates on both the anode and the cathode. On the cathode, a more negative rod potential results in a more negative zeta potential, which means that the surface electrostatically attracts more protons to screen the surface charge. The elevated proton concentration results in faster reaction rates on the cathode, according to (\ref{fluxbc}). On the anode, a more negative potential decreases the reaction rate, since it alters the overpotential bias to favor reduction more and oxidation less. The value of $\Phi_{rod}$ that satisfies (\ref{currcons}) is observed to vary with the salt concentration. Table \ref{salttab} shows the values and units of the constants used in the simulations.\cite{lide_crc_2004} The ion mobilities $\nu_k$ are determined from the Nernst-Einstein relation, $D_k = \nu_k RT$. \input{salttab.tex}
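To make this closure concrete, the following simplified Python sketch (ours; the rate constants, concentrations, fixed zeta potential, and the decoupling from the transport equations are placeholder assumptions, not the values or the procedure of the full simulations) finds the rod potential by root finding so that the $\xi$-weighted anodic and cathodic proton currents of equation (\ref{fluxbc}) balance, in the spirit of (\ref{currcons}).

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Schematic constants; all kinetic parameters below are placeholders
F, R, T  = 96485.0, 8.314, 298.0
alpha, m = 0.5, 2
K_ox     = 1e-7      # hypothetical oxidation rate constant
K_red    = 1e-3      # hypothetical reduction rate constant
c_h2o2   = 1.0       # schematic surface peroxide concentration
c_plus   = 1e-4      # schematic surface proton concentration
zeta     = -0.02     # crude fixed zeta potential, V (assumption)
gamma    = 1e7       # sigmoid steepness, 1/m

def xi(z):
    """Sigmoidal weighting: ~1 far from the junction, 0 at z = 0."""
    return abs(2.0 / (1.0 + np.exp(-gamma * z)) - 1.0)

def seg_current(j, z0, z1, npts=400):
    """xi-weighted current of a uniform flux j over the segment [z0, z1]."""
    z = np.linspace(z0, z1, npts)
    return np.mean(j * xi(z)) * (z1 - z0)

def net_current(phi_rod):
    """Anodic minus cathodic proton current for a trial rod potential."""
    dphi_s = phi_rod - zeta   # simplified Stern voltage
    j_a = K_ox * c_h2o2 * np.exp((1 - alpha) * m * F * dphi_s / (R * T))
    j_c = K_red * c_h2o2 * c_plus**2 * np.exp(-alpha * m * F * dphi_s / (R * T))
    return seg_current(j_a, 0.0, 1e-6) - seg_current(j_c, -1e-6, 0.0)

# Root finding for the rod potential that balances the two currents
phi = brentq(net_current, -0.5, 0.5)
print(f"self-consistent rod potential: {phi * 1e3:.2f} mV")
\end{verbatim}

Since the anodic flux increases and the cathodic flux decreases monotonically with the trial potential, the root is unique, mirroring the behaviour of the iterative determination of $\Phi_{rod}$ described above.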
\section{Introduction} Ultracold atoms trapped in optical lattices are ideally suited for investigations of the phase structure and phase transitions of strongly interacting quantum many-body systems. A recently highlighted example is the superfluid to Mott insulator transition studied in Ref.~\cite{Greiner2002} using Rubidium Bose-Einstein condensates. In the Mott insulating phase the spinor gases show a multitude of magnetic phases, thus providing ample possibilities for studies of quantum magnetism in different dimensionalities and insight into conventional as well as topological phases. In certain limits such systems may be modelled by spin lattice Hamiltonians $ H=\sum_{i=1}^N \, h_{i,i+1} $ with nearest-neighbor interactions only~\cite{PhysRevLett.93.250405}. A prominent example is the one-dimensional bilinear-biquadratic spin-1 Heisenberg model with quadratic Zeeman term \begin{equation}\label{ham} h_{i,i+1}=\cos\theta \, \vec S_i \otimes \vec S_{i+1} + \sin\theta \, (\vec S_i \otimes \vec S_{i+1})^2+D \, (S_i^z)^2 \end{equation} and $S^\nu_i$ the spin-1 SU(2) matrix representations ($\nu=x,y,z$, and $ i=1,\ldots, N$ with $N+1 \rightarrow 1$). This model shows a rich phase structure, and a rather complete overview was recently given by Rodriguez~{\it et al.}~\cite{PhysRevLett.106.105302} and De Chiara, Lewenstein, and Sanpera~\cite{PhysRevB.84.054451} as a function of $\theta$ and the Zeeman strength $D$. Despite qualitative agreement, the results of the two groups disagree significantly concerning the extension of the dimerized phase. In Ref.~\cite{PhysRevB.84.054451} the dimerized phase extends from some undefined $D < -2$ up to about $D\simeq 0.03$ at $\theta=-\pi/2$. On the contrary, the authors of Ref.~\cite{PhysRevLett.106.105302} find a dimerized phase in the parameter range $-0.3 \lesssim D \lesssim 0.6$ at this $\theta$. The methods employed in both papers are rather different: In Ref.~\cite{PhysRevLett.106.105302} the boundaries of the dimerized phase are obtained using level spectroscopy~\cite{Okamoto1992} in relatively small spin rings ($N\leq16$), while in Ref.~\cite{PhysRevB.84.054451} the dimerization order parameter is calculated from numerically obtained ground states of spin chains up to $N=204$. It is the purpose of the present paper to address this discrepancy using variants of both methods in parallel. To this end we determine both the spectrum and the order parameter at $\theta=-\frac{\pi}{2}$ as a function of the Zeeman coupling $D$. Calculations will be performed for systems with periodic boundary conditions (spin rings) using our own matrix product state (MPS) algorithm for systems up to 100 sites~\cite{Weyrauch2013, PhysRevB.93.054417, Rakov2017}. At $\theta=-\frac{\pi}{2}$ and $D=0$ only the biquadratic term remains in (\ref{ham}), which is SU(3) symmetric~\cite{Affleck1986,PhysRevB.65.180402}, i.e. it may be rewritten as a bilinear model in terms of the three-dimensional Gell-Mann SU(3) `quark' ($\lambda$) and `antiquark' ($\bar{\lambda}$) triplet representations, \begin{equation}\label{ham1} h_{i,i+1}=-8 \vec{\lambda}_i\otimes\vec{\bar{\lambda}}_{i+1}-\frac{4}{3} \, \mathds{1}. \end{equation} The quadratic Zeeman term, which in terms of Gell-Mann matrices reads $2D \left( \tfrac{1}{3} \, \mathds{1} + \lambda_3 - \lambda_8 \right)$, reduces the symmetry from SU(3) to SU(2), i.e. the Gell-Mann triplet splits into an SU(2) spin-$\frac{1}{2}$ doublet and one singlet at each site. We shall call this SU(2) subgroup $v$-spin.
This SU(2) symmetry holds only at $\theta=-\pi/2, \pi/2, -3\pi/4$, and $\pi/4$, and is different from the $D=0$ SU(2) symmetry of the Hamiltonian~(\ref{ham}), which we shall call $s$-spin. The latter reduces to U(1) at all $\theta$ due to the Zeeman term. Since a continuous symmetry cannot be broken in one dimension~\cite{PhysRevLett.17.1133,Coleman1973}, we developed a matrix-product algorithm which incorporates $v$-spin symmetry explicitly in the ansatz for the MPS, similar to our treatment of SU(2) symmetric MPS presented in Ref.~\cite{Rakov2017}. As a consequence, the obtained states may be labeled by SU(2) $v$-spin quantum numbers, as will be done in this paper. The U(1) subgroups of $v$-spin and $s$-spin are related by $S_z=2 v_z$. In the following section we study the low-lying spectrum at $\theta=-\pi/2$ for various system sizes $N$ and anisotropy parameters $D$ and extrapolate these results to the thermodynamic limit. The extension of the dimerized phase is then determined from the parameter region where the extrapolated ground state energy is doubly degenerate. In addition, we also calculate the dimerization order parameter in this parameter region and compare both results for consistency. We also determine the nematic order parameter in this phase. It is well known that for $D=0$ the bilinear-biquadratic spin-1 system is dimerized for all $\theta$ between the two critical points $\theta=-\frac{3\pi}{4}$ and $\theta= -\frac{\pi}{4}$. Using the results at $\theta=-\pi/2$ as guidance, we phenomenologically extrapolate our results to this parameter region. This extrapolation is summarized in Fig.~\ref{phased} of the present paper, and it will be discussed in detail in the summary section. \section{Boundaries of the dimerized phase of the biquadratic Heisenberg model with quadratic Zeeman term}\label{determination} The boundaries of the dimerized phase have been studied by Rodriguez {\it et al.}~\cite{PhysRevLett.106.105302} using level spectroscopy and by De Chiara {\it et al.}~\cite{PhysRevB.84.054451} from a direct calculation of the dimerization order parameter. The results are surprisingly different. The spectra of large enough systems indicate phase boundaries by the closing or opening of spectral gaps. Since we only determine spectra for finite systems, we find level crossings which may or may not indicate the closing or opening of spectral gaps in the thermodynamic limit. The spectrum for $N=30$ sites and $\theta=-\pi/2$ is shown in Fig.~\ref{spectrum30} as a function of $D$ in the interval $-0.5 < D < 0.6$. Characteristic level crossings are indicated in Fig.~\ref{spectrum30} by the dashed black lines at $D=D^-$ and $D=D^+$. These lines agree rather well with the dimerized phase boundaries $D^-$ and $D^+$ obtained in Ref.~\cite{PhysRevLett.106.105302} for this $\theta$. (Note that, due to a different sign convention for the Zeeman term, ``$+$'' and ``$-$'' must be interchanged when comparing to our results.) The parameter region $D<D^-$ is characterized as the boundary of an XY nematic phase~\cite{PhysRevLett.106.105302}, and its lowest excitation is a $v$-spin triplet (see Fig.~\ref{spectrum30}). The region $D>D^+$ is characterized as an Ising nematic phase, and its lowest excited states are two degenerate $v$-spin doublets. For $D^-<D<D^+$ the lowest excited state is a singlet, and in the thermodynamic limit one expects dimerization if the gap between the two lowest singlets closes.
In the following we investigate in detail whether this scenario, suggested in Ref.~\cite{PhysRevLett.106.105302}, withstands detailed scrutiny. \begin{figure} \unitlength 1cm \includegraphics[width=0.4\textwidth]{spectrum30.eps} \caption{\footnotesize Low lying spectrum of the biquadratic ($\theta=-\pi/2$) Heisenberg ring with $N=30$ spins and quadratic Zeeman term in the parameter range $-0.5 < D < 0.6$. The two lowest $v=0$ excitations and the lowest $v=1/2$ and $v=1$ multiplets are shown (the ground state is a singlet shifted to $E=0$). At $D=0$, one observes one SU(3) octet above the two low lying SU(3) singlets, which splits into SU(2) $v$-spin multiplets at $D\neq0$. There are two characteristic level crossings at $D^- \simeq -0.30$ and $D^+ \simeq 0.54$ indicated by dashed black vertical lines. The long thick tick marks along the horizontal axis indicate those values of $D$ at which we calculate spectra for larger systems. The essential structure of the spectrum remains very similar for larger systems due to $v$-spin symmetry. \label{spectrum30}} \end{figure} \subsection{Low lying spectrum} We first study the low lying spectrum as a function of system size $N$ at several characteristic $D$ indicated by the large tick marks along the horizontal axis in Fig.~\ref{spectrum30}. Our results are collected in Figs.~\ref{gap}-\ref{gap-02-04}. We consider system sizes from $N=20$ up to $N=100$. The finite size dependence of the spectrum at $D=0$ was studied extensively by S{\o}rensen and Young~\cite{PhysRevB.42.754} using the Bethe Ansatz. In the thermodynamic limit the gap $\Delta_{00}$ between the lowest two SU(2)/SU(3) singlets closes while the gap to the SU(3) octet ($v$-spin triplet) remains finite. We include these $D=0$ results in Fig.~\ref{gap} for comparison (dashed black line). \begin{figure} \unitlength 1cm \includegraphics[width=0.4\textwidth]{gap.eps} \caption{\footnotesize Energy gap between the two lowest $v=0$ states of the biquadratic Heisenberg ring with quadratic Zeeman term for various positive $D$. The extrapolated gaps $\Delta_{00}^{\infty}$ are finite for $D \ge 0.03$ (e.g., $\Delta_{00}^{\infty} (D=0.05) \simeq 0.07$). \label{gap}} \end{figure} \begin{figure*} \unitlength 1cm \begin{picture}(15,6)(0,0) \put(0,0) {\includegraphics[width=0.4\textwidth]{gapm02.eps}} \put(7.5,0) {\includegraphics[width=0.4\textwidth]{gapm04.eps}} \end{picture} \caption{\footnotesize Energy gap between the two lowest $v=0$ states and between the lowest $v=0$ state and the $v=1$ multiplet of the biquadratic Heisenberg ring with quadratic Zeeman effect at $D = -0.2$ (left) and $D=-0.4$ (right). The extrapolated gap $\Delta_{00}^{\infty} \simeq 0$ for both values of $D$, which indicates the presence of dimerization. The extrapolated gap $\Delta_{01}^{\infty} (D=-0.2) \simeq 0.025$ is finite, while both gaps are closing in the thermodynamic limit at $D=-0.4$. The small value of $\Delta_{01}^{\infty} (D=-0.2)$ is in line with the suggestion that the phase transition at $D^-$ is of Kosterlitz-Thouless type. \label{gap-02-04}} \end{figure*} From the results shown in Fig.~\ref{gap} we conclude that for positive $D \ge 0.03$ the gap does not close in the thermodynamic limit. As a consequence, the system does not dimerize, i.e. translational symmetry remains unbroken. This result agrees with the findings of De Chiara {\it et al.}~\cite{PhysRevB.84.054451} from the calculated dimerization in finite chains, as will be discussed in more detail in the next subsection.
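The extrapolations shown in Figs.~\ref{gap} and \ref{gap-02-04} amount to extrapolating the finite-size gaps in $1/N$; a minimal sketch of such a fit (assuming a leading $1/N$ correction and using synthetic stand-in numbers, not our MPS data) is:

\begin{verbatim}
import numpy as np

# Synthetic stand-in gaps Delta(N); a real analysis would use MPS data
N = np.array([20, 40, 60, 80, 100], dtype=float)
gaps = 0.07 + 1.5 / N      # mimics a gap that stays open as N -> infinity

# Linear fit in x = 1/N; the intercept is the thermodynamic-limit gap
slope, intercept = np.polyfit(1.0 / N, gaps, 1)
print(f"extrapolated gap: {intercept:.3f}")   # ~0.07 by construction
\end{verbatim}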
The question of whether the gap $\Delta_{00}$ closes for $0<D \le 0.03$ cannot be decided by our numerics. However, from the results presented in the following subsection we expect that the gap closes in the region $0<D \lesssim 0.025$. We now consider the parameter range $D<0$. It is worth mentioning that much larger computational resources are required here than for positive $D$, since the correlation length is larger. In practice we gradually increase the degeneracy set until the result is converged. In the region $D \lesssim -0.3$ the correlation length increases monotonically with the system size, and the results are numerically rather hard to obtain. In Fig.~\ref{gap-02-04}~(left) we show the energy gaps $\Delta_{00}$ and $\Delta_{01}$ for $D=-0.2$ as a function of $1/N$. The extrapolated results indicate that $\Delta_{00}$ closes and $\Delta_{01}$ remains open in the thermodynamic limit. This is similar to the behavior found in~\cite{PhysRevB.42.754} at $D=0$, and it indicates that the point $D=-0.2$ lies inside the dimerized phase. The results for $D=-0.4$ shown in Fig.~\ref{gap-02-04}~(right) suggest that {\it both} gaps $\Delta_{00}$ and $\Delta_{01}$ close in the thermodynamic limit. Consequently, the system is still dimerized at $D=-0.4$, with additional gapless nematic excitations. In fact, our results suggest that a Kosterlitz-Thouless transition to a critical nematic phase happens exactly at $D^-$. However, the phases on both sides of this transition are dimerized. The dimerization does not signal this transition. \subsection{Dimerization and nematics} De Chiara {\it et al.}~\cite{PhysRevB.84.054451} determined the extension of the dimerized phase from the expectation value of the dimerization operator $ \hat{\mathcal{D}}=\frac{1}{N} \, \sum_i \, (-1)^i \, h_{i,i+1} $ calculated for finite chains up to $N=204$ and extrapolated to the thermodynamic limit. For finite rings the ground and the excited states {\it cannot} be dimerized due to translational invariance. In order to calculate the dimerization, a symmetric/antisymmetric superposition $\frac{1}{\sqrt{2}}| 0^{(0)} \pm 0^{(\pi)}\rangle$ of the two lowest $v=0$ states with different momenta ($p=0,\pi$) is taken. These two states are separated by a small gap for finite systems, but they develop into a degenerate doublet in the thermodynamic limit within the dimerized phase. It is important to make sure that the lowest two $v=0$ states are {\it indeed} degenerate in the thermodynamic limit before using this procedure. Our results for the dimerization correlator are presented in Fig.~\ref{dimernematic}. For $D=0$ the dimerization is well known from the literature~\cite{Xian1993, Baxter1973}, $\mathcal{D}_{\infty}=\frac{\sqrt{5}}{2} \prod_{n=1}^\infty \tanh^2(n \, {\rm arccosh} \frac{3}{2}) \simeq 0.562$. Our results for 30, 40 and 50 sites at $D=0$ can be fitted very well by the function~\cite{PhysRevB.42.754, PhysRevB.72.054433} $ \mathcal{D}(N)=\mathcal{D}_{\infty}+c\,N^{-\alpha}\,\exp[-N/(2\xi)] $ with $\alpha=1$. From the fit we obtain $\mathcal{D}_{\infty} \simeq 0.568$ and a large correlation length $\xi \simeq 20.2$, in good agreement with the Bethe Ansatz. A very similar result for the correlation length was obtained in Ref.~\cite{PhysRevB.42.754} from the lowest energy gap. In order to confirm that the system dimerizes for small positive $D$, we made detailed calculations for $D=0.01$ and $D=0.02$, where level spectroscopy was inconclusive, and clearly find nonvanishing dimerization.
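The finite-size fit quoted above can be reproduced schematically as follows (a minimal sketch using synthetic stand-in data generated from the fit function itself with the quoted parameter values, not our actual correlator data):

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# D(N) = D_inf + c * N**(-alpha) * exp(-N/(2*xi)), with alpha fixed to 1
def dimer_fit(N, D_inf, c, xi):
    return D_inf + c * np.exp(-N / (2.0 * xi)) / N

# Synthetic stand-in data mimicking the quoted values (not MPS results)
N = np.array([20.0, 30.0, 40.0, 50.0])
data = dimer_fit(N, 0.568, 3.0, 20.2)

popt, _ = curve_fit(dimer_fit, N, data, p0=(0.5, 1.0, 10.0))
print("D_inf = %.3f, c = %.2f, xi = %.1f" % tuple(popt))
# recovers D_inf ~ 0.568 and xi ~ 20.2 by construction
\end{verbatim}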
The green line in Fig.~\ref{dimernematic} shows results by De Chiara {\it et al.} at $\theta=-0.6\pi$ (somewhat extrapolated by us). In general, we confirm the result of De Chiara {\it et al.} that the dimerization extends over the range $-\infty<D \lesssim 0.025$. \begin{figure*} \unitlength 1cm \begin{picture}(15,6)(0,0) \put(0,0) {\includegraphics[width=0.4\textwidth]{dimer.eps}} \put(7.5,0) {\includegraphics[width=0.4\textwidth]{nematic.eps}} \end{picture} \caption{\footnotesize (Left) Dimerization correlator calculated for the biquadratic Heisenberg model with quadratic Zeeman effect from the two lowest $v=0$ eigenstates with different momenta. Results for 30, 40, and 50 sites are presented. The red line shows the extrapolation to the thermodynamic limit. The black cross indicates the Bethe ansatz value at $D=0$ and $N \rightarrow \infty$. The dimerization is strictly zero for $D \gtrsim 0.025$. The green line shows the result obtained by De Chiara {\it et al.}~\cite{PhysRevB.84.054451} at $\theta=-0.6 \pi$ (the dashed part is extrapolated from their data). Our results confirm the prediction of De Chiara {\it et al.} that the dimerization persists up to large negative values of $D$. (Right) Nematic correlator extrapolated to the thermodynamic limit (blue line). It is calculated from the two lowest eigenstates for $-0.3 \lesssim D \lesssim 0.025$ and from the five lowest eigenstates for $D \lesssim -0.3$. The nematic correlator is exactly zero at $D=0$. The inflection point $D \simeq 0.02$ coincides rather well with the dimer-to-non-dimer phase transition point $D^0 \simeq 0.03$. The nematic correlator is featureless at $D^-$ and $D^+$. The results obtained by Rodriguez {\it et al.}~\cite{PhysRevLett.106.105302} at $\theta=-0.54\pi$ for spin chains of 36 sites are included for comparison (black dashed line). \label{dimernematic}} \end{figure*} In addition to the dimerization, we also present results for the nematic correlator (`chirality') $Q=\frac{1}{N} \, \sum_i (S_i^z)^2 -\frac{2}{3}$ of the ground state. At $\theta=-\pi/2$ and $D=0$ the nematic correlator is zero for any system size~\cite{PhysRevLett.106.105302}. In order to extrapolate the results to the thermodynamic limit, one must take into account that the ground state in the thermodynamic limit is 2-fold degenerate for $-0.3 \lesssim D \lesssim 0.025$ and 5-fold degenerate for $D \lesssim -0.3$. Unlike the dimerization correlator, the nematic correlator of any single eigenstate is nonzero, while the matrix element between two different eigenstates is zero. It is observed that the nematic correlators of each of the 2 (or 5) states are equal to high precision for large systems, and finite-size effects are small. Therefore, a precise extrapolation to $N \rightarrow \infty$ is possible from calculations for systems of $N \le 50$ sites. The nematic correlator shows a characteristic inflection point at $D \simeq 0.02$ (a similar inflection was obtained for $\theta=-0.54\pi$ in~\cite{PhysRevLett.106.105302}). This inflection appears to signal the transition from a dimerized to a non-dimerized phase. On the other hand, the nematic correlator is featureless at $D^-$ and $D^+$. Finally, we confirmed that the staggered magnetization of the ground state is zero throughout the line $\theta=-\pi/2$. This is not in contradiction with our suggestion that at $D<D^-$ the line $\theta=-\pi/2$ is a critical-to-Neel phase boundary.
\section{Conclusions}\label{conclusions} In this work we numerically obtained the boundaries of the dimerized phase of the biquadratic ($\theta=-\frac{\pi}{2}$) spin-1 Heisenberg model with quadratic Zeeman anisotropy. We find that a {\it gapped} dimerized phase exists in the parameter range $D^- < D < D^0$ with $D^-\simeq-0.30$ and $D^0\simeq 0.025$. Moreover, we identify a {\it gapless} dimerized phase which extends from $D^-$ to large negative $D$. While this confirms the results of De Chiara {\it et al.}~\cite{PhysRevB.84.054451}, who predicted a small dimerization even for $D<-2.0$, the existence of both a gapped and a gapless dimerized phase is reported here for the first time. The transition between these two dimerized regions occurs at $D^-$, which was erroneously identified as the boundary of the dimerized phase in Ref.~\cite{PhysRevLett.106.105302}. At the upper end of the dimerized region close to $D^0$, the dimerization sharply drops to zero and a gap opens between the two lowest singlet states, marking the transition to a non-dimerized phase. We do not see a phase transition at $D^+$, which was identified as the upper dimerized phase boundary in Ref.~\cite{PhysRevLett.106.105302}. These findings for $\theta=-\frac{\pi}{2}$ are graphically represented on the vertical axis of the phase diagram shown in Fig.~\ref{phased}, where the various transition points are marked by black dots. Let us now qualitatively extrapolate these results for $\theta$ in the parameter interval $I=[ -\frac{3\pi}{4} , -\frac{\pi}{4}]$, separately for positive $D$ and negative $D$, guided by general considerations and the calculations presented in Refs.~\cite{PhysRevLett.106.105302} and \cite{PhysRevB.84.054451}: By now it is rather well established that for $D=0$ the bilinear-biquadratic spin-1 model has a dimerized ground state in the whole parameter interval $I$. In particular, a nematic non-dimerized phase close to the ferromagnetic transition has been ruled out~\cite{PhysRevLett.98.247202,PhysRevLett.113.027202}. At large negative or positive $D$ the system is not dimerized. This follows from simple analytical arguments~\cite{PhysRevB.84.054451}. At large positive $D\gg 1$ the system is in the gapped large-$D$ phase~\cite{PhysRevB.84.054451}, and the transition from the dimerized phase to a non-dimerized phase occurs at small positive $D$ for all $\theta \in I$~\cite{PhysRevB.84.054451}. This we confirmed in the present paper for $\theta=-\pi/2$. In fact, one expects that the system is in an Ising nematic phase for all $D>D^0$, as indicated by the white region above the blue shaded region in Fig.~\ref{phased}, since no gaps close in the spectrum. However, it is expected from the results of Ref.~\cite{PhysRevLett.106.105302} that the leading excitation changes from $S_z=0$ for $D<D^+$ to $S_z=\pm 1$ for $D>D^+$, as indicated by the blue dashed lines in the phase diagram. According to Ref.~\cite{PhysRevLett.113.027202} the dimerization is related to the density of disclinations created in the spin system. Consequently, such topological defects should be absent for $D>D^0$. For large negative $D$ the system is in a gapless critical (XY nematic) phase for $-\frac{3\pi}{4}<\theta<-\frac{\pi}{2}$ and in a gapped Neel phase for $-\frac{\pi}{2}<\theta<\frac{\pi}{2}$~\cite{PhysRevB.84.054451}. For small and intermediate negative $D$, the XY nematic and the Neel phases are separated by dimerized phases, as indicated by the blue, red, and green shaded regions in Fig.~\ref{phased}.
It is expected that the gap between the lowest two singlets closes in all these colored regions, making them dimerized. In addition, in the red region the gap to the next triplet also closes, i.e. one expects a 5-fold degenerate ground state and vanishing staggered magnetization. This corresponds to our findings at $\theta=-\frac{\pi}{2}$. In the dimerized green shaded region, we expect an open gap to the triplet state and a non-zero staggered magnetization. The line between the red and green dimerized sectors separates magnetically staggered and non-staggered phases. This must be confirmed in detail by further calculations. These considerations and extrapolations are summarized in the qualitative phase diagram shown in Fig.~\ref{phased}. \begin{figure}[t!] \unitlength 1cm \includegraphics[width=0.45\textwidth]{phased.eps} \caption{\footnotesize Schematic phase diagram of the bilinear-biquadratic Heisenberg model with quadratic Zeeman anisotropy $D$ in the range $-\frac{3\pi}{4} < \theta < -\frac{\pi}{4}$. The three coloured phases (red, green, and blue) are dimerized; full blue lines indicate phase transitions. Details are discussed in the main text.}\label{phased} \end{figure} \begin{acknowledgments} We thank Alexei K. Kolezhuk for discussions. Mykhailo V. Rakov thanks the Physikalisch-Technische Bundesanstalt for financial support during short visits to Braunschweig. \end{acknowledgments}
\section{Introduction} \label{sec-introduction} The present world craves optimization in nearly every aspect of nature and human activity, driven by the rapid depletion of easily accessible energy sources and the pursuit of profit maximization. Hence much attention has been devoted to the optimization of various engineering and management problems, which are mainly multi-dimensional and mathematically formulated. However, another class of problems also deserves considerable attention, as it represents many real-life and real-time applications ranging from path planning, social media interaction, diffusion and ranking, recommendation systems, and constrained process scheduling to multi-objective optimization. This field is not new, but it has changed the way data are represented and interact. It is better known as the field of discrete problems (since the elements are discrete events whose conditional sequence holds immense importance) and comprises graph-based problems and combinatorial optimization problems. This work introduces and applies such a discrete multi-agent algorithm, called the Green Heron Swarm Optimization Algorithm (GHSOA). It suits discrete problems because of its combination-generation and optimization capability. The newly introduced GHSOA is inspired by the food-acquisition habits of the Green Heron bird, which rely on its artistic skills, senses, and intelligence. The algorithm comprises three probabilistic multi-agent combination-generation steps and an adaptive mathematical variation intended exclusively for continuous numerical problems and various system models. The algorithm is applied to the Travelling Salesman Problem, the 0/1 Knapsack Problem, and the Quadratic Assignment Problem, and the results are compared with the optimum values. The extended algorithm (inclusive of LBNIV) is applied to various multi-dimensional numerical benchmark equations, and the achieved optimization is compared with the real-coded Genetic Algorithm and Particle Swarm Optimization. Green Heron Optimization Algorithm (GHOA) denotes the algorithm without the LBNIV operator, while Green Heron Swarm Optimization Algorithm (GHSOA) denotes the complete algorithm; GHSOA is thus meant for continuous-domain problems, while GHOA can be used for both discrete and limited continuous problems. The rest of the article is arranged as follows: Section 2 reviews related work in bio-inspired computation, Section 3 describes the bird, Section 4 illustrates the GHSOA, Section 5 details the application of the algorithm to the various problems together with results, Section 6 provides a benchmark performance evaluation and the scope for hybridization, and Section 7 concludes with future work. \section{Related work} \label{sec-related} There are several bio-inspired algorithms that are highly successful in achieving near-optimal solutions for various problems in both the continuous and discrete domains. Discrete Particle Swarm Optimization \citep{p1} and the discrete Genetic Algorithm \citep{p2} are specialized in generating discrete values but have limited applicability to combinatorial optimization, which is handled well by optimization algorithms such as the Ant Colony Optimization (ACO) Algorithm \citep{p3}, \citep{acome}, the Intelligent Water Drop (IWD) Algorithm \citep{p4}, and the Egyptian Vulture Optimization Algorithm \citep{evo1}, \citep{evo2}.
Other continuous-domain algorithms are numerous. The Particle Swarm Optimization (PSO) Algorithm \citep{p5} performs socially influenced multi-agent swarm search, while the Genetic Algorithm (GA) is specialized in forming combinations out of existing solutions and prevents local stagnancy of agents. The Bat Algorithm \citep{p6} is an enhanced multi-hierarchy swarm intelligence, and the Bacteria Foraging Optimization Algorithm \citep{p8} operates on swarming-like group movement and increases the level of local search. Krill Herd \citep{p9} imitates the movement of krill organisms in the search for the optimization peak. Artificial Bee Colony (ABC) \citep{p10} divides the agents for cooperative exploration and exploitation, with the agents switching character, whereas Honey Bee Swarm \citep{p11} works on reproduction and crossover of the breed for better agent generation. The Firefly Algorithm \citep{p12} works on the cooperative influence of the better agents through a light-attraction-like feature; Glowworm Swarm Optimization \citep{p13} also works on a light-attraction-like feature, but the attracting agents are limited in number and act within a range. Cuckoo Search \citep{p14} works on random placement and removal of solution variables and thus promotes mixing and opportunity. Artificial Immune Systems \citep{p15} develop detection and prevention factors for optimization. Differential Evolution \citep{p16} works on the principles of the Genetic Algorithm but employs functions for combination, crossover, and mutation, while Differential Search \citep{p17} performs a Brownian random-walk movement search over the workspace. Harmony Search \citep{p18} combines solution variable values from a pool of values to generate optimized solution sets. Biogeography-Based Optimization \citep{p19} depends on a mathematical-relation-based migration of variable values from one solution set to another to promote enhancement in solution quality. Invasive Weed Optimization \citep{p20} depends on random and deterministic variation resembling the spreading of agents. Simulated Annealing \citep{p21} is an extension of the hill-climbing algorithm that introduces probabilistic acceptance and an adaptive step-size phenomenon. Honey Bee Mating Optimization \citep{p22} utilizes a crossover feature, similar to that of GA, between bees to enhance solution quality. The League Championship Algorithm \citep{p23} employs tournament-like situations to generate combinations as solutions, and Teaching-Learning-Based Optimization (TLBO) \citep{p24} works on the teaching and learning principle, where knowledge flows from the better agents towards the others. Each of these algorithms has been applied to mathematical representations of various kinds of applications and has been successful in achieving optimized solutions. \section{Nature of Green Heron Birds} \label{sec-nature} The Green Heron \citep{wiki} (Butorides virescens) resides in freshwater or brackish swampy marshes or wetlands with clumps of trees, mainly in low-lying areas with an abundant supply of fish as prey. The birds are nocturnal in habit and prefer to stay in sheltered areas during the daytime, although they also feed in the daytime when hungry. Their main food consists of small fish, spiders, frogs, grasshoppers, snakes, rodents, reptiles, aquatic arthropods, mollusks, crustaceans, insects, and amphibians, as well as vertebrate or invertebrate animals such as leeches and mice, provided they can catch them. \begin{figure*}[h!]
\label{f1} \centering \includegraphics[width=0.5\textwidth]{ghbird} \caption{Preying Habit with Bait of Green Heron Bird} \end{figure*} Usually the Green Heron forages from a perch, where it stands with its body stretched out horizontally and lowered, ready to insert its bill into the water after any unsuspecting prey. The Green Heron is among the few birds that use tools for their daily tasks: it attracts prey, mainly swarms of fish, with bait (feathers, earthworms, bread crusts, tiny pieces of stick, insects, or even berries) dropped onto the water surface. The bait is dropped onto the water surface in order to attract fish and other aquatic organisms, which hover over the bait to sense its kind and food value. When a fish takes, or tries to take, the bait, the Green Heron grabs hold of the fish and eats its prey. This prey-catching behaviour is exploited here as a meta-heuristic for complex problem solving and, most importantly, for achieving optimization. The next section describes in detail the steps of the algorithm and their resemblance to the natural behaviour of the Green Heron. \section{Green Heron Optimization Algorithm Details} \label{sec-GHSOA} Overall, the Green Heron Optimization Algorithm can be divided into the following basic operational steps, which perform different search-based variations for heuristic sequence or path generation and thus establish solutions for problems such as graph-based and combinatorial optimization problems. The algorithm can also be extended to discrete equations, though with limited functionality, because in a combinatorial optimization problem each individual element is an event, whereas in an equation each individual element is a parameter value. The constraints unique to each kind of problem must be built into the computation through the implementation; the operations are only guidelines for what should happen with the solution set, individually or as a whole. \begin{figure*} \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{ghoa} \captionof{figure}{Flow Diagram for Green Heron Swarm Optimization Algorithm} \label{f2} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{obait} \captionof{figure}{Baiting Operation} \label{fobait} \end{minipage} \end{figure*} \subsection{Baiting} The baiting process is analogous to the bird holding bait in its beak and dropping it at an appropriate place where there is a chance of a catch, i.e., where aquatic animals or fish are nearby. In the computation, the bait is a solution subset that is generated arbitrarily (from a pool of such events; under constraints, one tries to avoid generating already generated or frequently generated events) and is held by the bird until it finds a good position through a local search over the whole solution set. The bait and the prey are assumed to be two individual solution subsets that take part in the operation, and there are three alternative outcomes through which the bait-prey pair alters the solution set, thus heuristically creating a new solution set or improving the existing one.
Now the three alternatives (whose occurrence depends on the problem, its constraints, the implementation, and partly on probability and the local search) are: \begin{itemize} \item\textbf{MISS CATCH} - In this case the bait settles at one of its preferred places, where it finds continuity, but the bird fails to catch any prey; hence the number of elements in the solution set tends to increase. Depending on the problem, the situation must be handled: in the Travelling Salesman Problem, scheduling problems, etc., any missing node must be restored (possibly from the end, or at random) to sustain the validity of the solutions. \item\textbf{CATCH} - In this case the bait helps the Green Heron catch a prey; the size of the solution set remains constant, as one appropriate element is added and one inappropriate element is eliminated. \item\textbf{FALSE CATCH} - Here the bird gets hold of a prey without using bait, as fish sometimes come near the surface of the water on their own. In this step an inappropriate element of the solution set is eliminated. The depletion of a node from the solution set in the form of a catch must be compensated to maintain validity under the constraints of the problem or the limitations of the variables. \end{itemize} In Figure \ref{fobait}, the baiting operation of the Green Heron Optimization Algorithm is demonstrated for one particular case, though many other cases can occur. Here we have a solution string consisting of A, B, C, D, E, G, H, I (where each entry is an event, node, or individual solution) and F is the bait, which can operate on any position of the string. The position is derived probabilistically, and the three cases corresponding to miss catch, catch, and false catch (which can also be decided randomly or, if possible, through the Change of Position operational step) are shown. For problems such as the Travelling Salesman Problem and the Quadratic Assignment Problem (QAP) and 0/1 Knapsack Problem (KSP) discussed below, the number of nodes is fixed and no node may be repeated in the solution string; hence replaced nodes must be restored and the excess created by this phenomenon deleted, to compensate for the variation and maintain the acceptability and validity of the solution set. \subsection{Change of Position} The local search operation checks all (for small solution sets) or part (for sufficiently long solution sets) of the solution set for positions before a suitable one is found. This step is analogous to the bird finding a suitable place very near the surface of the water, from which it can at any point of time insert its beak into the water and take hold of a fish or aquatic animal whenever one comes near the surface, naturally or under the influence or temptation of the bait(s). This local search operation should be budgeted so that not too much time is spent on a single solution set when a number of solution sets must be taken care of in each iteration. In the case of a huge number of elements, an intensive local search strategy can be applied to a selected zone, or the elements with low secondary fitness values can be checked, or any constraint of the problem can be utilized for such searches and decision making. In an intensive local search, the selected node is placed directly beside the node with which continuity can be established.
\begin{figure*} \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{oposition} \captionof{figure}{Change of Position Operation} \label{f3} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{oattracting} \captionof{figure}{Attracting Prey Swarms Operation} \label{fig:4} \end{minipage} \end{figure*} In Figure \ref{f3} the Change of Position operation of the Green Heron Optimization Algorithm is shown. Here we again consider the same solution string consisting of A, B, C, D, E, G, H, I, with F as the bait that can operate on any position of the string; the position is now decided by a local search through a fractional portion of the solution string around a particular location. The best position is taken as the point of application, where the best position depends on the local heuristic value, e.g., the least distance in the Travelling Salesman Problem. \subsection{Attracting Prey Swarms} Attracting Prey Swarms is also an equivalent of the local search operations, and it makes the algorithm converge quickly for constrained discrete problems with precedence criteria. The step is, however, somewhat different from the Change of Position operation described previously. In this step the position of the bird sitting with the bait remains the same, but the swarm of fish actually moves towards the bait; that is, the bait is released and the fish are attracted towards it. For the solution set, the point of release of the bait remains fixed, but the whole set shifts to create a position for the best agent to receive the bait. This provides an evolution-like step in which a shift can change the solution, especially when the positions within the solution hold immense meaning and the correct sequence is of utmost importance. The example shown in Figure \ref{fig:4} makes the operation clearer. This operation should be rare and occur selectively over the iterations, only when there is no attachment in the initial positions of the solution set. It can be useful for problems such as the Travelling Salesman Problem, the Vehicle Routing Problem, and scheduling problems, where few constraints are present; in problems such as sequence ordering, routing, and path planning it can be used selectively. The operation can also act on a selected portion of the solution set that has not yet been arranged or has not yet engaged in any kind of attachment. These local search processes are followed by the Baiting operation. The number of elements of the solution subset participating in the shift is determined randomly and must be smaller than the number of elements in the subset. In Figure \ref{fig:4}, the Attracting Prey Swarms operation of the Green Heron Optimization Algorithm is demonstrated: the position of the bait is kept constant, and the whole solution string, or a selected portion of it (depending on requirements and implementation), revolves to place an arbitrary node under the bait. Considering the same solution string consisting of A, B, C, D, E, G, H, I with F as the bait, E was initially under F, but revolving three times brings B under F. For the Quadratic Assignment Problem and the 0/1 Knapsack Problem we let the whole string revolve.
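To make the three operators concrete, the following Python sketch (our own illustrative rendering; the placement score in the local search is a random placeholder for the problem-specific heuristic, and the duplicate/length repair described above is omitted) applies Baiting, Change of Position, and Attracting Prey Swarms to the example string used in the figures:

\begin{verbatim}
import random

def baiting(solution, bait, pos, mode):
    """Apply one baiting outcome to a copy of the solution string."""
    s = solution[:]
    if mode == "miss":      # MISS CATCH: bait settles, string grows
        s.insert(pos, bait)
    elif mode == "catch":   # CATCH: bait replaces the prey element
        s[pos] = bait
    elif mode == "false":   # FALSE CATCH: an element is removed
        del s[pos]
    return s                # repair of duplicates/length omitted here

def change_of_position(solution, bait, window):
    """Drop the bait at the best-scoring position within 'window';
    the random score is a stand-in (for the TSP it would be the
    least added distance)."""
    best = max(window, key=lambda p: random.random())
    return baiting(solution, bait, best, "catch")

def attract_prey_swarms(solution, shifts):
    """Revolve the string so a different element lands under the bait."""
    k = shifts % len(solution)
    return solution[-k:] + solution[:-k] if k else solution[:]

s = list("ABCDEGHI")
print(baiting(s, "F", 3, "catch"))    # ['A','B','C','F','E','G','H','I']
print(attract_prey_swarms(s, 3))      # ['G','H','I','A','B','C','D','E']
\end{verbatim}

In the last line, the element at index 4 changes from E to B, reproducing the `thrice revolve' example above.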
It should be mentioned that, although the operations are described separately for the convenience of understanding the implementation with respect to any problem, in reality the operational steps are interwoven and cannot be operated separately (unless a particular problem happens to suit that); the key lies in properly mapping the problem onto the algorithm. \subsection{Brief Description of the Fitness Function} The fitness function used for the determination and estimation of partial solutions is quite important for the decision making of the system and for optimization selection. It helps to decide, temporarily, how well the nodes are placed and connected; this is, however, valid only for one-dimensional problems. It is noticed that in the majority of graph-based problems, which tend to involve multi-objective optimization, the complete path is reached only after a huge number of iterations, and in the meantime it is very difficult to clearly demarcate the better incomplete results from the others; it is in this case that the probabilistic steps can worsen the solution. Also, the operations need to be applied in proper places, mainly on the node gaps where no linkage has yet been made. Hence a brief description of the secondary fitness function is needed. This secondary fitness value can be of several types: \\ 1) The technique used in the simulation finds the linked consecutive nodes and numbers each node according to how many of its links are established at that portion: \begin{equation} \text{Node Value} = \left\{ \begin{array}{l l} 0 & ,\text{ for no linkage on either side} \\ 1 & ,\text{ for linkage on one side} \\ 2 & ,\text{ for linkage on both sides} \end{array} \right. \end{equation} Then the secondary fitness is calculated as the sum of the node values divided by the number of nodes, i.e., with $N$ nodes, \begin{equation} \text{Secondary Fitness} = \frac{\sum\limits_{i=1}^{N} (\text{Node Value})_i}{N} \end{equation} A high secondary fitness denotes that more nodes are linked together as a unit than in another solution string. For the TSP, however, since a link exists between every pair of consecutive nodes, the secondary fitness is always constant and of no use. The solution with the maximum secondary fitness is the better solution. This procedure is used in the simulation. \\ 2) Another partial-solution fitness evaluation can use the number of partial solutions, i.e., linked portions, present in the string, since larger linked portions have a higher probability of being processed into a complete path than isolated ones. Here the count of the linked sections present in the solution string is kept as the secondary fitness value; contrary to the previous method, this method takes the minimum as the best result. For a solution string represented as $S = \{a_1,a_2,\ldots,a_N\}$ with linked portions represented as $\{b_1,b_2,\ldots,b_m\}$, where each $b_i \subseteq S$ and $i \in \{1,2,\ldots,m\}$, \begin{equation} \text{Secondary Fitness} = m \end{equation} and with the formation of linkages $m$ constantly decreases. In this case each of the subsets can be used as a node for link formation. \subsection{Adaptiveness of GHSOA} The Green Heron Swarm Optimization Algorithm provides an adaptive scheme for variable handling for each data set and can be used where the dimension is not constant, as in path planning, adaptive clustering, unsupervised-learning-based clustering, etc.
It is well suited to problems that are multi-variable in the initial positions and vary gradually with the iterations. For constant-length combinatorial optimization it can be modified accordingly and regarded as a special case of the algorithm. There are situations in which the linkage between two path segments can be made through a single node or through multiple nodes, and this requires adaptive flexibility in the operators. For adaptive clustering, this algorithm can be helpful for the generation and maintenance of the cluster centroids. \subsection{Location Based Neighbour Influenced Variation (LBNIV)} Location Based Neighbour Influenced Variation (LBNIV) is the adaptive variation scheme for continuous-domain problems such as the numerical benchmark equations and mathematical models of applications. It follows the habit of the bird of being influenced by the different elements of its environment, mainly aquatic organisms, and then acting accordingly for better catching opportunities and better attraction of swarms of fish. Hence this scheme is described as being influenced by the neighbours that have better positions or from which better opportunities may arise. For each continuous variable of the fitness function of the problem, $x_t \in \{x1_t,x2_t,\ldots,xD_t \}$, where $D$ is the dimension of the problem and $xD_t$ is the $D^{th}$ element in the $t^{th}$ iteration, the following equations govern the variation of the variable; the scheme is self-adaptive, based on the previous iteration, and error-sensitive through the fitness values of iterations $t$ and $t-1$: \begin{equation} \label{e4} x_t = x_{t-1} + |(x_{best} - x_r)|d_{(t,r)} \epsilon_{(t,r)} + |(x_{best} - x_f)|d_{(t,f)} \epsilon_{(t,f)} + bias \end{equation} where $[\{d_{(t,r)}$, $d_{(t,f)}\} \in d_{t}]$ and $[\{\epsilon_{(t,r)},\epsilon_{(t,f)}\} \in \epsilon_{t}]$ are given by the $d_{(t+1)}$ and $\epsilon_{(t+1)}$ of Equation \ref{e1} and Equation \ref{e2}, respectively, evaluated in the previous iteration: \begin{equation} \label{e1} d_{(t+1)} = \left\{ \begin{array}{ll} \frac{J_{t-1} - J_{t}}{|J_{t-1}|} &\mbox{ for $x_t \geq x_{t-1}$} \\ \frac{J_{t} - J_{t-1}}{|J_{t-1}|} &\mbox{ for $x_t < x_{t-1}$} \end{array} \right. \end{equation} \begin{equation} \label{e2} \epsilon_{(t+1)} = \left\{ \begin{array}{ll} \epsilon_{t}/k &\mbox{ for $x_t > x_{max}$} \\ \epsilon_{t}*k &\mbox{ for $x_t < x_{min}$} \\ \epsilon_{t} &\mbox{ Otherwise} \end{array} \right. \end{equation} for a minimization problem with $J_t < J_{t-1}$ or $J_t > J_{t-1}$, where $xi_t$ is the $i^{th}$ variable at iteration $t$, $J_t$ is the fitness value at iteration $t$, and $x_f$ and $x_r$ are the front and rear neighbours, respectively, with $x_f \in \{xD_t \text{ of the next fellow agent}\}$ and $x_r \in \{xD_t \text{ of the previous fellow agent}\}$. Here $d_t$ is the variation contributor for the $t^{th}$ iteration, generated from the previous iteration $t-1$, and $\epsilon$ is the constantly changing adaptive contributor, which denotes what percentage of $d$ is incorporated into the variable and depends on the bound restrictions of the variable. For $\epsilon = 1$ the value of $d$ is the same for all the increasing or decreasing variables, but if the bounds of the $xi_t$ differ, then a separate $\epsilon$ is kept for each variable $xi_t$. It is better to keep a $d$ set and an $\epsilon$ set separately for each variable, and thus $D$ of each in total for every solution set.
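A direct transcription of these update rules into Python reads as follows (a minimal sketch; the rescaling constant $k$ and the bias are free choices that the text does not fix, so the values below are assumptions):

\begin{verbatim}
def lbniv_step(x_prev, x_best, x_rear, x_front,
               d_r, d_f, eps_r, eps_f, bias=0.0):
    """One LBNIV update of a single variable (the main update rule)."""
    return (x_prev
            + abs(x_best - x_rear) * d_r * eps_r
            + abs(x_best - x_front) * d_f * eps_f
            + bias)

def update_d(x_new, x_prev, J_prev, J_new):
    """Variation contributor for the next iteration (the d recursion)."""
    if x_new >= x_prev:
        return (J_prev - J_new) / abs(J_prev)
    return (J_new - J_prev) / abs(J_prev)

def update_eps(eps, x_new, x_min, x_max, k=2.0):
    """Adaptive contributor (the epsilon recursion): rescale by an
    assumed factor k when the variable violates its bounds."""
    if x_new > x_max:
        return eps / k
    if x_new < x_min:
        return eps * k
    return eps
\end{verbatim}

Note that $d$ is positive after an improving step and negative after a worsening one, so repeated application drags the variable towards configurations with better fitness.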
It is to be noted here that Equation \ref{e1} and Equation \ref{e2} always try to drag the value of $xi_t$ towards the best, i.e., towards the minimum of $J_t$ for minimal optimization. \section{Performance Evaluation} \label{sec-performance} The Green Heron Swarm Optimization Algorithm was itself developed for discrete problems but is also tested on continuous numerical benchmarks. The details of the algorithmic steps of GHSOA for the various problems are provided below. \subsection{GHSOA for the Travelling Salesman Problem (TSP) / Quadratic Assignment Problem (QAP)} Table 1 and Table 2 provide the results on the datasets \citep{s1},\citep{s2} for the Travelling Salesman Problem and the Quadratic Assignment Problem, respectively, compared with the optimum values. GHSOA was run on each of them. The algorithm of GHSOA for the Travelling Salesman Problem / Quadratic Assignment Problem is as follows.\\ \textbf{Step 1:} Initialize the solution set and all its $n$ ($n$ = dimension of the problem) events $\{x_1,x_2,\ldots,x_n \}$.\\ \textbf{Step 2:} Generate $N$ solution sets, each consisting of $\{x_1,x_2,\ldots,x_n \}$, where each element denotes an event and the corresponding position in the array carries a fixed positional significance. \\ \textbf{Step 3:} Evaluate the fitness of each string; store the value of the objective if the constraints are satisfied, else set it to zero. Update the global best if a better solution is found.\\ \textbf{Step 4:} (For each string) Perform ``Baiting'' (with Miss Catch, Catch, False Catch), where the position is selected randomly (duplicates need to be taken care of) or at selected points, depending on whether the implementation follows a deterministic approach or probability. \\ \textbf{Step 5:} Perform the ``Change of Position'' operation depending on the requirement and the initial search results. (Random positioning is done to see which combination yields the best result.) This acts on a selected portion or on the whole string, depending on the pseudo-random generation of the two operation parameters.\\ \textbf{Step 6:} Perform ``Attracting Prey Swarms'' and complete the ``Baiting'' operation. \\ (End of for each string) \\ \textbf{Step 7:} Evaluate the fitness of each string; store the value of the objective if the constraints are satisfied, else set it to zero. Update the global best if required. If the new string (derived out of the combination of operation(s)) is better, replace the old one; otherwise keep the old. The global best consists of the fitness value along with the corresponding solution string. \\ \textbf{Step 8:} After each iteration replace the worst $X$\% of the solutions with random initializations ($X$ depends on $N$ and on the exploration requirement). \\ \textbf{Step 9:} If the number of iterations is complete, stop; else continue from Step 4.
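These steps can be arranged into a compact driver loop. The following Python skeleton (our own schematic rendering; a simple validity-preserving random swap stands in for the chained GHSOA operators of Section 4, and all parameter values are illustrative) shows the control flow for the TSP:

\begin{verbatim}
import random

def apply_operators(s):
    """Stand-in for Steps 4-6 (Baiting, Change of Position, Attracting
    Prey Swarms): here just a validity-preserving random swap."""
    s = s[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def ghsoa_tsp(dist, n_agents=50, iters=1000, replace_frac=0.1):
    """Schematic GHSOA driver for the TSP (tour length minimization)."""
    n = len(dist)
    tour = lambda s: sum(dist[s[i]][s[(i + 1) % n]] for i in range(n))
    pop = [random.sample(range(n), n) for _ in range(n_agents)]  # Steps 1-2
    best = min(pop, key=tour)                                    # Step 3
    for _ in range(iters):
        for i, s in enumerate(pop):
            cand = apply_operators(s)                            # Steps 4-6
            if tour(cand) < tour(s):                             # Step 7
                pop[i] = cand
        best = min(pop + [best], key=tour)
        pop.sort(key=tour)                                       # Step 8
        k = int(replace_frac * n_agents)
        if k:
            pop[-k:] = [random.sample(range(n), n) for _ in range(k)]
    return best, tour(best)                                      # Step 9
\end{verbatim}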
\scriptsize \begin{longtable}{|c|c|r||c|c|c|c|} \caption{Evaluation of GHSOA on TSP Datasets}\\ \hline \textbf{Name} & \textbf{Dim} & \textbf{Optimum} & \textbf{Mean} & \textbf{SD} & \textbf{Best} & \textbf{Worst}\\ \hline \endfirsthead \multicolumn{7}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \textbf{Name} & \textbf{Dim} & \textbf{Optimum} & \textbf{Mean} & \textbf{SD} & \textbf{Best} & \textbf{Worst}\\ \hline \endhead \hline \multicolumn{7}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot Ulysses16.tsp &16 & 74.11 & 75.18 & 0.047 & 74.11 & 78.27 \\\hline att48.tsp & 48 & 3.3524e+004 & 3.5436e+004 & 12.1 & 3.3613e+004 & 4.2996e+004 \\\hline st70.tsp & 70 & 678.5975 & 711.676 & 115.9 & 694 & 746 \\\hline pr76.tsp & 76 & 1.0816e+005 & 1.3319e+005 & 125.7 & 1.0816e+005 & 1.3757e+005 \\\hline gr96.tsp & 96 & 512.3094 & 643.97 & 69.4 & 573.16 & 806.4 \\\hline gr120.tsp & 120 & 1.6665e+003 & 1.7963e+003 & 46.8 & 1.7112e+003 & 1.8753e+003 \\\hline gr202.tsp & 202 & 549.9981 & 839.19 & 178.2 & 610.8 & 1005.9 \\\hline tsp225.tsp & 225 & 3919 & 4151.8 & 213.7 & 4058.9 & 5034.8 \\\hline a280.tsp & 280 & 2.5868e+003 & 2.77913e+003 & 986.1 & 2.6772e+003 & 3.1463e+003 \\\hline \end{longtable} \begin{longtable}{|c|c|r||c|c|c|c|c|} \caption{Evaluation of GHSOA on QAP Datasets}\\ \hline \textbf{Name} & \textbf{Dim} & \textbf{Optimum} & \textbf{Mean} & \textbf{SD} & \textbf{Best} & \textbf{Worst} & \textbf{Error}\\ \hline \endfirsthead \multicolumn{8}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \textbf{Name} & \textbf{Dim} & \textbf{Optimum} & \textbf{Mean} & \textbf{SD} & \textbf{Best} & \textbf{Worst} & \textbf{Error}\\ \hline \endhead \hline \multicolumn{8}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot chr15a & 15 & 9552 & 9552 & 0 & 9552 & 9552 & 0 \\\hline bur26a & 26 & 5426670 & 5514119 & 8990.6 & 5426670 & 5512133 & 1.6115 \\\hline chr12a & 12 & 9552 & 9552 & 0 & 9552 & 9552 & 0 \\\hline chr18a & 18 & 11098 & 11098 & 0 & 11098 & 11098 & 0 \\\hline chr20a & 20 & 2192 & 2372.5 & 72.1 & 2192 & 2561 & 8.2345 \\\hline chr22a & 22 & 6156 & 6525.4 & 90.92 & 6156 & 6502 & 6.0006 \\\hline chr25a & 25 & 3796 & 4231.9 & 300.2 & 3796 & 7927 & 11.4831 \\\hline els19 & 19 & 17212548 & 17212548 & 0 & 17212548 & 17212548 & 0 \\\hline esc16a & 16 & 68 & 68 & 0 & 68 & 68 & 0 \\\hline esc32e & 32 & 2 & 2 & 0 & 2 & 2 & 0 \\\hline had14 & 14 & 2724 & 2724 & 0 & 2724 & 2724 & 0 \\\hline nug24 & 24 & 3488 & 3795 & 92.98 & 3488 & 3729 & 8.8016 \\\hline nug27 & 27 & 5234 & 5416.4 & 121.75 & 5234 & 5518 & 3.4849 \\\hline esc16b & 16 & 292 & 292 & 0 & 292 & 292 & 0 \\\hline nug16a & 16 & 1610 & 1610 & 0 & 1610 & 1610 & 0 \\\hline nug20 & 20 & 2570 & 2570 & 0 & 2570 & 2570 & 0 \\\hline \hline \end{longtable} \normalsize \subsection{GHOSA for 0/1 Knapsack Problem (KSP)} In this part the 0/1 Knapsack Problem datasets \citep{p3} are optimized with GHSOA and the results are compared with the optimum values.
The algorithm of GHSOA for the 0/1 Knapsack Problem is as follows.\\ \textbf{Step 1:} Consider a dataset of $n$ items, where $x_i$ is the parameter denoting the inclusion or exclusion of the $i^{th}$ item for any bag, for $m$ bags where each bag has maximum capacity $W_m$.\\ \textbf{Step 2:} Generate $N$ solution strings for each type of bag \textbf{m}, where each string has $x_i$ for $i = \{1,2,\ldots,n\}$ consisting of n items, each represented by a number from 1 to n without repetition of any of the numbers in the string.\\ \textbf{Step 3:} Now generate an integer threshold between 1 and n, so that the string of integer values can be converted into a string of 0s and 1s; the random value of the threshold decides how many of them are 0s and 1s, the positional values create combinations, and through the GHOA operators the integer values at each position change over the iterations. \\ \textbf{Step 4:} For ($x_i >$ threshold) make it 1, else make it 0, so that the string of 0s \& 1s represents the $x_i$ set.\\ \textbf{Step 5:} Evaluate the fitness of each string; store the value of profit if the constraints are satisfied, else make it zero. Update the global best if required. Convert the \textbf{x} vector from 0/1 values back to its previous form of numbers from 1 to n.\\ (For each string) \\ \textbf{Step 6:} Perform ``Baiting'' (with Miss Catch, Catch, False Catch) where the position is selected randomly (taking care of duplicates) or at selected points, depending upon whether the implementation follows a deterministic or a probabilistic approach. \\ \textbf{Step 7:} Perform the ``Change of Position'' operation depending upon the requirement and the initial search results. (Random positioning is done to see which combination yields the best result.) This is applied on a selected part or the whole string, depending on the pseudo-random generation of the two operation parameters.\\ \textbf{Step 8:} Perform ``Attracting Prey Swarms''. Complete the ``Baiting'' operation. \\ (End of For each string) \\ \textbf{Step 9:} Repeat Steps 3 and 4 to make the solution strings eligible for fitness evaluation. Now evaluate the fitness of each string; store the value of profit if the constraints are satisfied, else make it zero. Update the global best if required. If the new string (derived out of the combination of operation(s)) is better, then replace the old one, else don't. The global best consists of the fitness value along with the string consisting of 1s and 0s. \\ \textbf{Step 10:} After each iteration replace $X$\% of the worst solutions with random initialization. ($X$ depends on $N$ and on the exploration requirement) \\ \textbf{Step 11:} If the number of iterations is complete then stop, else continue from Step 3.
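The integer-to-binary decoding of Steps 3--4 and the constrained fitness of Step 5 can be sketched in a few lines of Python. The multi-constraint form below (one weight row per bag) is an assumption matching the multidimensional knapsack datasets used here, and all names are illustrative.
\begin{verbatim}
import random

def decode(perm, threshold):
    # Steps 3-4: items whose integer label exceeds the threshold
    # are included (1), the rest excluded (0).
    return [1 if v > threshold else 0 for v in perm]

def fitness(bits, profits, weights, capacities):
    # Step 5: total profit if every bag's capacity constraint
    # holds, else zero.
    for w_row, cap in zip(weights, capacities):
        if sum(b * w for b, w in zip(bits, w_row)) > cap:
            return 0
    return sum(b * p for b, p in zip(bits, profits))

n = 10
perm = random.sample(range(1, n + 1), n)    # Step 2: solution string
bits = decode(perm, random.randint(1, n))   # Steps 3-4: 0/1 vector
\end{verbatim}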
\scriptsize \begin{longtable}{|c|c|r||c|c|c|c|c|} \caption{Evaluation of GHSOA on 0/1 Knapsack Datasets}\\ \hline \textbf{Name} & \textbf{Dim} & \textbf{Optimum} & \textbf{Mean} & \textbf{SD} & \textbf{Best} & \textbf{Worst} & \textbf{Error}\\ \hline \endfirsthead \multicolumn{8}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \textbf{Name} & \textbf{Dim} & \textbf{Optimum} & \textbf{Mean} & \textbf{SD} & \textbf{Best} & \textbf{Worst} & \textbf{Error}\\ \hline \endhead \hline \multicolumn{8}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot WEISH01 & 5,30 & 4554 & 4463.2 & 23.4789 & 4549 & 4423 & 1.9939 \\\hline WEISH07 & 5,40 & 5567 & 5325.1 & 92.723 & 5493 & 5013 & 4.3452 \\\hline WEISH10 & 5,50 & 6339 & 5865.842 & 184.8216 & 6309 & 5592 & 7.4642 \\\hline WEISH15 & 5,60 & 7486 & 6918.92 & 213.202 & 7332 & 6819 & 7.5752 \\\hline WEING8 & 2,105 & 624319 & 552021 & 4000.2 & 610322 & 529391 & 11.5803\\\hline WEING1 & 2,28 & 141278 & 141156.1 & 578.4875 & 141278 & 141010 & 0.0863 \\\hline FLEI & 10,20 & 2139 & 2139 & 0 & 2139 & 2139 & 0 \\\hline HP1 & 4,28 & 3418 & 3406 & 45.02 & 3418 & 3293 & 0.3511 \\\hline PB6 & 30,40 & 776 & 728.1 & 11.2038 & 756 & 701 & 6.1727 \\\hline PET2 & 10,10 & 87061 & 87061 & 0 & 87061 & 87061 & 0 \\\hline PET3 & 10,15 & 4015 & 4015 & 0 & 4015 & 4015 & 0 \\\hline PET4 & 10,20 & 6120 & 6120 & 0 & 6120 & 6120 & 0 \\\hline PET5 & 10,28 & 12400 & 12326.74 & 52.954 & 12400 & 12129 & 0.5908 \\\hline PET6 & 5,39 & 10618 & 10499.87 & 32.982 & 10570 & 10446 & 1.1125 \\\hline PET7 & 5,50 & 16537 & 16336 & 22.956 & 16393 & 16078 & 1.2155 \\\hline \end{longtable} \normalsize \subsection{GHOSA for Multi-Objective Road Network / Resource Constrained Shortest Path Problem (RCSP)} The road network considered here is shown in Figure \ref{f5}; it is used to study approximately how optimized vehicle routes flow through the network. Each of the edges is provided with a distance and an average waiting time, assumed to be calculated on the basis of data collected independently by the sensor network present at that crossing. The distance and waiting time for the road network, which are to be minimized, are given by two summation-based equations which depend on the path traversed by the agents on the way from the source A to the destination Y. \begin{equation} \label{e5} f_1 = \sum\limits_{k=1}^{n} \frac{D_k}{V} \end{equation} \begin{equation} \label{e6} f_2 = \sum\limits_{k=1}^{n} (AWT)_k \end{equation} where $f_1$ and $f_2$ are the two objectives, $D_k$ is the distance and $(AWT)_k$ is the average waiting time for the path segment $\{k \subseteq \{i,j\} \in G\}$, and $G$ is the graph network. The fitness function $f$ considered is $f = (f_1+f_2)$. $V$ is a constant velocity used to normalize the distance so that $f_1$ is expressed in the same unit of time as $f_2$. A few assumptions are made in the road model to keep it simple, avoid unnecessary details, and at the same time keep it acceptable for the simulation and the algorithm. The vehicles considered here are uniform in size and speed and are non-accelerating. The times taken for movement and waiting are crisp, and other details like the size of the queue in front of a vehicle, the width of the road, etc. are discarded; an average of all such events is considered. The model emphasizes the movement of the vehicles from the source to the destination, and only these vehicles are accounted for in the simulation and in drawing conclusions.
However, other vehicles which are also present and are destined to move from other parts of the network graph to some other places are not considered in detail; their presence is established through some random variation of parameters like waiting time, queue length, etc. The algorithm of GHSOA for the road network and similar problems like the Resource Constrained Shortest Path Problem (RCSP) is given below.\\ \textbf{Step 1:} Initialize the equation and all its n (n = dimension) variables as $\{x_1,x_2,\ldots ,x_n\}$ \\ \textbf{Step 2:} Initialize $N$ strings each consisting of $\{x_1,x_2,\ldots ,x_n\}$ as coefficients having random numerical values. (Evaluate the constraints and bounds of the variables and reinitialize the strings which violate them.) [Positions are related to specific variables and are fixed.] \\ \textbf{Step 3:} Initialize the fitness matrix, evaluate the fitness of each string and set the global best result. \\ \textbf{Step 4:} (For each string) Perform ``Baiting'' (with Miss Catch, Catch, False Catch) where the position is selected with some intensive local search strategy in which the nodes with the least secondary fitness are searched (probability, random partial string, etc. strategies can also be used). \\ \textbf{Step 5:} Perform ``Change of Position'' depending upon the requirement and the initial search results. \\ \textbf{Step 6:} Perform ``Attracting Prey Swarms''. \\ \textbf{Step 7:} Complete the ``Baiting'' operation. \\ \textbf{Step 8:} Perform ``Location Based Neighbour Influenced Variation'' for each of $\{x_1,x_2,\ldots ,x_n\}$ (with $\epsilon = 1$ initially, gradually changing). (End of For each string) \\ \textbf{Step 9:} Evaluate the fitness of each string considering the validity (bounds and constraints) of each $x_i$ where $x_i \in \{x_1,x_2,\ldots ,x_n\}$. \\ \textbf{Step 10:} If the new string is better, then replace the old one, else don't. \\ \textbf{Step 11:} Select the best result and compare it with the global best. If better, then set it as the global best. \\ \textbf{Step 12:} After each iteration replace X\% of the worst solutions with random initialization. \\ \textbf{Step 13:} If the number of iterations is complete then stop, else continue from Step 4.
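The two objectives and the combined fitness translate directly into a few lines of Python; the edge dictionaries \texttt{dist} and \texttt{awt} below are hypothetical stand-ins for the per-edge distance and average waiting time data collected by the sensor network.
\begin{verbatim}
def path_fitness(path, dist, awt, v=1.0):
    # f1: distance normalised by the constant velocity v (a travel time).
    f1 = sum(dist[(a, b)] / v for a, b in zip(path, path[1:]))
    # f2: accumulated average waiting time along the traversed edges.
    f2 = sum(awt[(a, b)] for a, b in zip(path, path[1:]))
    return f1 + f2  # combined fitness f = f1 + f2

# Toy usage on a three-node path A -> B -> Y:
dist = {("A", "B"): 4.0, ("B", "Y"): 6.0}
awt = {("A", "B"): 1.5, ("B", "Y"): 0.5}
print(path_fitness(["A", "B", "Y"], dist, awt))  # 12.0
\end{verbatim}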
\begin{figure*} \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{graph} \captionof{figure}{Road Graph Network} \label{f5} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{1} \captionof{figure}{Variation of Global Best of Total Time for all Iterations} \label{f6} \end{minipage} \end{figure*} \begin{figure*} \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{2} \captionof{figure}{Variation of Global Best of Travelling Time for all Iterations} \label{f7} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{3} \captionof{figure}{Variation of Global Best of Waiting Time for all Iterations} \label{f8} \end{minipage} \end{figure*} \begin{figure*} \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{4} \captionof{figure}{Plot for Cumulative Global Best of Total Time for all Iterations} \label{f9} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{5} \captionof{figure}{Plot for Cumulative Global Best of Travelling Time for all Iterations} \label{f10} \end{minipage} \end{figure*} \begin{figure*} \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{6} \captionof{figure}{Plot for Cumulative Global Best of Waiting Time for all Iterations} \label{f11} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{7} \captionof{figure}{Plot for Average Cumulative Global Best of Total Time for all Iterations} \label{f12} \end{minipage} \end{figure*} \begin{figure*} \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{8} \captionof{figure}{Plot for Average Cumulative Global Best of Travelling Time for all Iterations} \label{f13} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{9} \captionof{figure}{Plot for Average Cumulative Global Best of Waiting Time for all Iterations} \label{f14} \end{minipage} \end{figure*} \subsection{GHOSA for Benchmark Equations} Numerical benchmark equations (along with dimension and optimized value) given in Table \ref{Teq} are used for the simulation of GHSOA, and the corresponding results (in the form of mean, standard deviation, best, worst, error) are provided in Table \ref{Tsol}. In this section we describe the details of the implementation of the algorithm, with the location based neighbour influenced variation scheme added in Step 8, and the problem oriented variations of the GHSOA for the benchmark equations: \\ \textbf{Step 1:} Initialize the equation and all its n (n = dimension) variables as $\{x_1,x_2,\ldots ,x_n\}$ \\ \textbf{Step 2:} Initialize $N$ strings each consisting of $\{x_1,x_2,\ldots ,x_n\}$ as coefficients having random numerical values. (Evaluate the constraints and bounds of the variables and reinitialize the strings which violate them.) [Positions are related to specific variables and are fixed.] \\ \textbf{Step 3:} Initialize the fitness matrix, evaluate the fitness of each string and set the global best result. \\ \textbf{Step 4:} (For each string) Perform ``Baiting'' (with Miss Catch, Catch, False Catch) where the position is selected with some intensive local search strategy in which the nodes with the least secondary fitness are searched (probability, random partial string, etc. strategies can also be used). \\ \textbf{Step 5:} Perform ``Change of Position'' depending upon the requirement and the initial search results.
\\ \textbf{Step 6:} Perform ``Attracting Prey Swarms''. \\ \textbf{Step 7:} Complete the ``Baiting'' operation. \\ \textbf{Step 8:} Perform ``Location Based Neighbour Influenced Variation'' for each of $\{x_1,x_2,\ldots ,x_n\}$ (with $\epsilon = 1$ initially, gradually changing). (End of For each string) \\ \textbf{Step 9:} Evaluate the fitness of each string considering the validity (bounds and constraints) of each $x_i$ where $x_i \in \{x_1,x_2,\ldots ,x_n\}$. \\ \textbf{Step 10:} If the new string is better, then replace the old one, else don't. \\ \textbf{Step 11:} Select the best result and compare it with the global best. If better, then set it as the global best. \\ \textbf{Step 12:} After each iteration replace X\% of the worst solutions with random initialization. \\ \textbf{Step 13:} If the number of iterations is complete then stop, else continue from Step 4. \subsection{Details of Benchmarks} Several well-known benchmark equations, with their descriptions, constraints (in the form of range values for each variable) and optimized values, are provided in Table \ref{Teq}; they are used for the simulation of the Green Heron Swarm Optimization Algorithm to see how it performs under various dimensions and range constraints of the benchmark equations. These equations are standardized mathematical representations that require optimization and are used as test-beds for the analysis of algorithms. The benchmark equations listed in Table \ref{Teq} include Sphere Function $(f_1)$, Rosenbrock Function $(f_3)$, Hump Functions $(f_6)$, Branin Function $(f_7)$, Goldstein \& Price Function $(f_8)$, Power Sum Function $(f_9)$, Beale Function $(f_{10})$, Colville Function $(f_{11})$, Sum Squares Function $(f_{12})$, Dekkers and Aarts $(f_{13})$, McCormick $(f_{14})$, Two Peak Trap $(f_{15})$, Central Two Peak Trap $(f_{16})$, Five Uneven Peak Trap $(f_{17})$, Equal Maxima $(f_{18})$, Decreasing Maxima $(f_{19})$, Uneven Maxima $(f_{20})$, Uneven Decreasing Maxima $(f_{21})$, Himmelblau's Function $(f_{22})$, Six-Hump Camel Back Function $(f_{23})$, Michalewicz Function $(f_{24})$ and Matyas Function $(f_{25})$. The simulation of the Green Heron Swarm Optimization Algorithm on the benchmark problems is performed in Matlab R2011a (on a system with a Core i3-2330M 2.20 GHz Intel 2nd Generation Processor and 4 GB RAM), and the results are provided in Table \ref{Tsol}, giving the equation ID (corresponding to Table \ref{Teq}), the dimension of the equation or the considered dimension, mean, standard deviation (SD), best solution, worst solution and mean error. The computational results, compared with the traditional Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), clearly show how the algorithm performs up to the maximum mark of 25000 iterations and reveal good convergence of the solutions. Solution quality and convergence do not depend on the initial value of $\epsilon_t$, which is kept quite low as usual at 0.2, with $k=2$ and $bias=0.001$.
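As a concrete illustration, two of the listed benchmarks and the bounded random initialization of Step 2 can be written as follows; this is a minimal sketch and the names are illustrative.
\begin{verbatim}
import random

def sphere(x):       # f_1 of the benchmark table: sum of squares
    return sum(v * v for v in x)

def rosenbrock(x):   # f_3: optimum 0 at (1, ..., 1)
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def random_string(dim, lo, hi):
    # Step 2: one candidate solution string within the range constraints.
    return [random.uniform(lo, hi) for _ in range(dim)]

pop = [random_string(10, -20.0, 20.0) for _ in range(30)]  # N = 30 strings
print(min(sphere(s) for s in pop))                         # Step 3 fitness
\end{verbatim}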
\scriptsize \begin{longtable}{|p{0.3cm}|p{5cm}|p{0.3cm}|p{2cm}|p{1.3cm}|} \caption{Details of Benchmark Equations} \label{Teq}\\ \hline \textbf{\#} & \textbf{Equation} & \textbf{D} & \textbf{Range} & \textbf{Optimum}\\ \hline \endfirsthead \multicolumn{5}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \textbf{\#} & \textbf{Equation} & \textbf{D} & \textbf{Range} & \textbf{Optimum}\\ \hline \endhead \hline \multicolumn{5}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot $f_1$ & $\sum\limits_{i=1}^{D}x_i^2$ & $10$ & $[-20,20]_D$ & 0 \\ \hline $f_2$ & $\sum\limits_{i=1}^{D}|x_i|+\prod\limits_{i=1}^{D}|x_i| $ & $10$ & $[-20,20]_D$ & 0 \\ \hline $f_3$ & $\sum\limits_{i=1}^{D-1}[100(x_{i+1}-x_i^2)^2 + (x_i-1)^2]$ & $10$ & $[-20,20]_D$ & 0 \\ \hline $f_4$ & $\sum\limits_{i=1}^{D}(x_i - 0.5)^2$ & $10$ & $[-20,20]_D$ & 0 \\ \hline $f_5$ & $\sum\limits_{i=1}^{D}[x_i^2 -10\cos(2\pi x_i) + 10]$ & $10$ & $[-5.12,5.12]_D$ & 0 \\ \hline $f_6$ & $4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1x_2 - 4x^2_2 + 4x^4_2 $ & $2$ & $[-5,5]_D$ & -1.03163 \\ \hline $f_7$ & $(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1- 6)^2 + 10(1 - \frac{1}{8\pi})\cos x_1 + 10$ & $2$ & $[-5,10]\times[0,15]$ & 0.398 \\ \hline $f_8$ & $ [1+(x_1+x_2+1)^2(19-14x_1+3x_1^2 -14x_2+6x_1x_2+3x_2^2)]\times[30+(2x_1 -3x_2)^2\times(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)] $ & $2$ & $[-2,2]_D$ & 3 \\ \hline $f_9$ & $\sum\limits_{i=1}^D (\sum\limits_{j=1}^i x_j)^2$ & $10$ & $[-20,20]_D$ & 0 \\ \hline $f_{10}$ & $[1.5-x_1(1-x_2)]^2+[2.25-x_1(1-x_2^2)]^2+[2.625-x_1(1-x_2^3)]^2 $ & $2$ & $[-4.5,4.5]_D$ & 0 \\ \hline $f_{11}$ & $ 100[x_2-x_1^2]^2 + (1-x_1)^2 + 90(x_4 - x_3^2)^2 + (1-x_3)^2 + 10.1[(x_2-1)^2+(x_4-1)^2] + 19.8(x_2-1)(x_4-1) $ & $4$ & $[-10,10]_D$ & 0 \\ \hline $f_{12}$ & $\sum\limits_{i=1}^{D}ix_i^4 + random[0,1)$ & $10$ & $[-1.28,1.28]_D$ & 0 \\ \hline $f_{13}$ & $10^5x_1^2 + x_2^2 - (x_1^2 + x_2^2)^2 + 10^{-5}(x_1^2 + x_2^2)^4$ & $2$ & $[-20,20]_D$ & -24777 \\ \hline $f_{14}$ & $\sin(x_1 + x_2) + (x_1 - x_2)^2 -\frac{3}{2}x_1+\frac{5}{2}x_2 + 1$ & $2$ & $ \begin{matrix}[-1.5,4]\times \\ [-3,3]\end{matrix}$ & -1.9133 \\ \hline $f_{15}$ & $ \left\{ \begin{array}{ll} \frac{160}{15}(15-x) &\mbox{ for $0\leq x <15$} \\ \frac{200}{5}(x-15) &\mbox{ for $15\leq x \leq 20$} \end{array} \right. $ & $1$ & $[0,20]$ & 0 \\ \hline $f_{16}$ & $ \left\{ \begin{array}{ll} \frac{160}{10}x &\mbox{ for $0\leq x <10$} \\ \frac{160}{5}(15-x) &\mbox{ for $10\leq x <15$} \\ \frac{200}{5}(x-15) &\mbox{ for $15\leq x \leq 20$} \end{array} \right. $ & $1$ & $[0,20]$ & 0 \\ \hline $f_{17}$ & $ \left\{ \begin{array}{ll} 80(2.5-x) &\mbox{ for $0\leq x <2.5$} \\ 64(x-2.5) &\mbox{ for $2.5\leq x <5$} \\ 64(7.5 - x) &\mbox{ for $5\leq x \leq 7.5$} \\ 28(x-7.5) &\mbox{ for $7.5\leq x <12.5$} \\ 28(17.5-x) &\mbox{ for $12.5\leq x <17.5$} \\ 32(x-17.5) &\mbox{ for $17.5\leq x <22.5$} \\ 32(27.5-x) &\mbox{ for $22.5\leq x <27.5$} \\ 80(x-27.5) &\mbox{ for $27.5\leq x <30$} \end{array} \right. $ & $1$ & $[0,30]$ & 0 \\ \hline $f_{18}$ & $\sin^6(5\pi x)$ & $1$ & $[0,1]_D$ & 0 \\ \hline $f_{19}$ & $\exp[-2\log(2)\cdot(\frac{x-0.1}{0.8})^2]\cdot \sin^6(5\pi x)$ & $1$ & $[0,1]_D$ & 0 \\ \hline $f_{20}$ & $\sin^6(5\pi (x^{3/4}-0.05))$ & $1$ & $[0,1]_D$ & 0 \\ \hline $f_{21}$ & $\begin{matrix}\exp[-2\log(2)\cdot(\frac{x-0.08}{0.854})^2]\cdot\\ \sin^6(5\pi(x^{3/4}-0.05))\end{matrix}$ & $1$ & $[0,1]_D$ & 0 \\ \hline $f_{22}$ & $(x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2 $ & $2$ & $[-10,10]_D$ & 0 \\ \hline $f_{23}$ & $ [(4 - 2.1x_1^2 + \frac{x_1^4}{3})x_1^2 + x_1x_2 + (-4+4x_2^2)x_2^2]$ & $2$ & $\begin{matrix} [-1.9,1.9]\times \\ [-1.1,1.1] \end{matrix}$ & -1.03163 \\ \hline $f_{24}$ & $\sin(x_1)\sin^2(\frac{x_1^2}{\pi })+\sin(x_2)\sin^2(\frac{2x_2^2}{\pi}) $ & $2$ & $[0,\pi]_D$ & 0 \\ \hline $f_{25}$ & $0.26(x_1^2+x_2^2)-0.48x_1x_2$ & $2$ & $[-10,10]_D$ & 0 \\ \hline \end{longtable} \begin{longtable}{|c|c|c|c|c|c|c|} \caption{Evaluation of GHSOA on Benchmark Equations} \label{Tsol}\\ \hline \textbf{Name} & \textbf{Alg.} & \textbf{Mean} & \textbf{SD} & \textbf{Best Solution} & \textbf{Worst Solution} & \textbf{Error}\\ \hline \endfirsthead \multicolumn{7}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \textbf{Name} & \textbf{Alg.} & \textbf{Mean} & \textbf{SD} & \textbf{Best Solution} & \textbf{Worst Solution} & \textbf{Error}\\ \hline \endhead \hline \multicolumn{7}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot $f_1$ & GHSOA & 0.49523 & 0.12854 & 0.2103 & 0.8719 & 0.49523 \\ \cline{2-7} & PSO & 0.9394 & 0.2273 & 0.01728 & 1.23 & 0.9394 \\ \cline{2-7} & GA & 1.93 & 1.121 & 1.15787 & 1.675 & 1.93 \\ \hline $f_2$ & GHSOA & 0.45852 & 0.014751 & 0.0141 & 0.7996 & 0.45852 \\ \cline{2-7} & PSO & 0.3482 & 0.5245 & 0.124 & 0.6412 & 0.3482 \\ \cline{2-7} & GA & 1.246 & 1.4223 & 0.97542 & 1.9465 & 1.246 \\ \hline $f_3$ & GHSOA & 0.39028 & 0.09812 & 0.0451 & 0.7992 & 0.39028 \\ \cline{2-7} & PSO & 0.6273 & 0.2897 & 0.02773 & 0.8892 & 0.6273 \\ \cline{2-7} & GA & 1.8283 & 0.9376 & 0.36532 & 2.1334 & 1.8283 \\ \hline $f_4$ & GHSOA & 0.23191 & 0.00456 & 0.0921 & 0.5012 & 0.23191 \\ \cline{2-7} & PSO & 0.0876 & 0.087 & 0.0069 & 0.1834 & 0.0876 \\ \cline{2-7} & GA & 0.6555 & 0.7964 & 0.0454 & 1.3793 & 0.6555 \\ \hline $f_5$ & GHSOA & 0.10234 & 0.0123 & 0.0019 & 0.3298 & 0.10234 \\ \cline{2-7} & PSO & 0.1368 & 0.179 & 0.007 & 0.599 & 0.1368 \\ \cline{2-7} & GA & 0.186 & 0.938 & 0.002 & 0.4380 & 0.186 \\ \hline $f_6$ & GHSOA & -1 & 0 & -1 & -1 & 0 \\ \cline{2-7} & PSO & -1 & 0 & -1 & -1 & 0 \\ \cline{2-7} & GA & -1 & 0 & -1 & -1 & 0 \\ \hline $f_7$ & GHSOA & 0.414 & 0.002 & 0.4 & 0.478 & 0.0250 \\ \cline{2-7} & PSO & 0.419 & 0.003 & 0.4 & 0.443 & 0.419 \\ \cline{2-7} & GA & 0.451 & 0.0012 & 0.4 & 0.518 & 0.451 \\ \hline $f_8$ & GHSOA & 3.01777 & 0.06985 & 3.0008 & 3.0193 & 0.0059 \\ \cline{2-7} & PSO & 3.1345 & 0.0865 & 3.0000 & 3.1202 & 0.045 \\ \cline{2-7} & GA & 3.2976 & 0.6757 & 3.0000 & 3.3201 & 0.0992 \\ \hline $f_9$ & GHSOA & 0.4021 & 0.0092 & 0.101 & 0.8278 & 0.4021 \\ \cline{2-7} & PSO & 1.2039 & 0.0203 & 0.202 & 1.02 & 1.2039 \\ \cline{2-7} & GA & 0.9281 & 0.2823 & 0.23 & 1.0384 & 0.9281 \\ \hline $f_{10}$ & GHSOA & 0.005687 & 0.08952 & 0.00023 & 0.0362 & 0.005687 \\ \cline{2-7} & PSO & 0.0175 & 0.012 & 0.00065 & 0.056 & 0.0175 \\ \cline{2-7} & GA & 0.011 & 0.97 & 0.0081 & 0.1875 & 0.011 \\ \hline $f_{11}$ & GHSOA & 0.007894 & 0.001974 & 2.45E-05 & 0.014 & 0.007894 \\ \cline{2-7} & PSO & 0.00419 & 0.0017 & 0.0024 & 0.093 & 0.00419 \\ \cline{2-7} & GA & 0.00751 & 0.0340 & 0.0001 & 0.221 & 0.00751 \\ \hline $f_{12}$ & GHSOA & 0.2345 & 0.1291 & 0.01089 & 0.9212 & 0.2345 \\ \cline{2-7} & PSO & 1.024 & 0.0512 & 0.000411 & 1.6354 & 1.024 \\ \cline{2-7}
& GA & 0.9756 & 0.9842 & 0.18753 & 1.3544 & 0.9756 \\ \hline $f_{13}$ & GHSOA & -24771.1 & 3.98730 & -24776 & -24765 & 2.4e-4 \\ \cline{2-7} & PSO & -24776 & 0 & -24776 & -24776 & 0 \\ \cline{2-7} & GA & -24776 & 0 & -24776 & -24776 & 0 \\ \hline $f_{14}$ & GHSOA & -1.9075 & 0.003255 & -1.9133 & -1.9021 & 0.0030 \\ \cline{2-7} & PSO & -1.9058 & 0.0012 & -1.9133 & -1.8292 & 0.0039 \\ \cline{2-7} & GA & -1.8974 & 0.0028 & -1.9133 & -1.7096 & 0.0083 \\ \hline $f_{15}$ & GHSOA & 0.0002024 & 0.000887 & 0 & 0.0029 & 0.0002024 \\ \cline{2-7} & PSO & 0.00057 & 0.0003 & 0 & 0.0128 & 0.00057 \\ \cline{2-7} & GA & 0.0021 & 0.00098 & 0 & 0.011 & 0.0021 \\ \hline $f_{16}$ & GHSOA & 0.000523 & 0.000122 & 0 & 0.0082 & 0.000523 \\ \cline{2-7} & PSO & 0.00399 & 0.02 & 0 & 0.0142 & 0.00399 \\ \cline{2-7} & GA & 0.00022 & 0.001 & 0 & 0.0039 & 0.00022 \\ \hline $f_{17}$ & GHSOA & 0.001132 & 0.000787 & 0 & 0.0039 & 0.001132 \\ \cline{2-7} & PSO & 0.07412 & 0.0025 & 0.0042 & 0.4722 & 0.07412 \\ \cline{2-7} & GA & 0.01423 & 0.0412 & 0.00096 & 0.1452 & 0.01423 \\ \hline $f_{18}$ & GHSOA & 0 & 0 & 0 & 0 & 0 \\ \cline{2-7} & PSO & 0 & 0 & 0 & 0 & 0 \\ \cline{2-7} & GA & 0 & 0 & 0 & 0 & 0 \\ \hline $f_{19}$ & GHSOA & 0 & 0 & 0 & 0 & 0 \\ \cline{2-7} & PSO & 0 & 0 & 0 & 0 & 0 \\ \cline{2-7} & GA & 0 & 0 & 0 & 0 & 0 \\ \hline $f_{20}$ & GHSOA & 0 & 0 & 0 & 0 & 0 \\ \cline{2-7} & PSO & 0 & 0 & 0 & 0 & 0 \\ \cline{2-7} & GA & 0 & 0 & 0 & 0 & 0 \\ \hline $f_{21}$ & GHSOA & 0 & 0 & 0 & 0 & 0 \\ \cline{2-7} & PSO & 0 & 0 & 0 & 0 & 0 \\ \cline{2-7} & GA & 0 & 0 & 0 & 0 & 0 \\ \hline $f_{22}$ & GHSOA & 0.0895 & 0.0187 & 0 & 0.714 & 0.0895 \\ \cline{2-7} & PSO & 0.0438 & 0.00156 & 0 & 0.112 & 0.0438 \\ \cline{2-7} & GA & 0.0841 & 0.4753 & 0 & 0.9742 & 0.0841 \\ \hline $f_{23}$ & GHSOA & -1.0274 & 0.01456 & -1.03158 & -1.02497 & 0.0042 \\ \cline{2-7} & PSO & -1.02942 & 0.0389 & -1.03163 & -1.0192 & 0.0021 \\ \cline{2-7} & GA & -1.0212 & 0.1736 & -1.03158 & -1.0142 & 0.0198 \\ \hline $f_{24}$ & GHSOA & 0 & 0 & 0 & 0 & 0 \\ \cline{2-7} & PSO & 0 & 0 & 0 & 0 & 0 \\ \cline{2-7} & GA & 0 & 0 & 0 & 0 & 0 \\ \hline $f_{25}$ & GHSOA & 0.01582 & 0.011987 & 0 & 0.0812 & 0.01582 \\ \cline{2-7} & PSO & 0.0554 & 0.12551 & 0 & 0.18455 & 0.0554 \\ \cline{2-7} & GA & 0.018794 & 0.05424 & 0 & 0.1541 & 0.018794 \\ \hline \end{longtable} \normalsize \section{Conclusions \& Future Works} \label{sec-conclusion} This work has extended and elaborated the GHOSA and its various operations for both discrete and continuous optimization problems, with the introduction of an enhanced and adaptive parameter variation scheme (LBNIV) for the continuous domain problems. The unique operations of the GHOSA naturally favour discrete problems, owing to the algorithm's natural behaviour of replacing discrete events in several schemes and varying the solutions, and hence are very successful for most combinatorial optimization problems and other graph based challenges and optimization schemes. The results of applying the GHOSA have been compared with PSO and GA and demonstrate the potential of the algorithm for optimization across the problem types considered. Yet a lot of performance analysis and convergence testing is still required for other domains and other problem types like real-time problems, dynamic and adaptive problems, etc. We expect that the algorithm can be applied to many problems and analyses which require optimization.
The main advantage of GHOSA is its adaptability for dimensionless problems (where no fixed dimension exists) like path planning and adaptive clustering (with an unknown number of clusters). Also, the GHOSA algorithm is compatible with both addition and removal of dimension(s) from the solution, which is why the algorithm should have a wide range of applicability across problem domains. Tables 1--5 and the graphs of the results of the various applications reveal how the new meta-heuristic performs on benchmark datasets of various dimensions and demonstrate its capability as a discrete solution seeker; with the addition of the location based neighbour influenced variation scheme, the algorithm has also been able to optimize continuous problems like the numerical benchmark equations and can compete with traditional swarm intelligence algorithms. A lot of work remains on applying the new algorithm to real-world problems and comparing it with other algorithms.
\section{Introduction} The perovskite ($AB$O$_3$) family of materials has received considerable attention in both experimental and theoretical studies due to their flexible and coupled compositional, structural, electrical and magnetic properties~\cite{Bellaiche00p5427, Saito04p84, Bilc06p147602}. Such flexibility arises from the structural building blocks --- the corner-connected $B$O$_6$ octahedra, where $B$ is usually a transition metal. Typical structural variations from the cubic structure include rotation and tilting of the octahedra~\cite{Glazer72p3384}, off-centering of the $A$ and/or $B$ cations (pseudo Jahn-Teller effect)~\cite{Qi10p134113}, and expansion/contraction of the $B$O$_6$ octahedral cages~\cite{Thonhauser06p2121061}. While the first two distortions are ubiquitous, the cooperative octahedral breathing distortion is rather rare in perovskites with a single $B$ cation composition. Such a breathing distortion, resulting from the alternation of elongation and contraction of the $B$-O bonds between neighboring $B$O$_6$ cages, is usually concomitant with charge ordering of the $B$ cations and a corresponding metal-insulator transition. CaFeO$_3$ is a typical perovskite material exhibiting such a charge ordering transition~\cite{Woodward00p844}. At room temperature, the strong covalency of the Fe $e_g$ - O 2$p$ interaction leads to a $\sigma^*$ band and electron delocalization which gives rise to metallic conductivity in CaFeO$_3$. Near 290~K, a second-order metal-insulator transition (MIT) occurs which reduces the conductivity dramatically~\cite{Kawasaki98p1529}. The M\"{o}ssbauer spectrum of low temperature CaFeO$_3$ has revealed the presence of two chemically distinct Fe sites (with different hyperfine fields) present in equal proportion~\cite{Takano77p923}. This indicates that the Fe cations undergo the charge disproportionation 2Fe$^{4+}$$\rightarrow$Fe$^{5+}$+Fe$^{3+}$ below the transition temperature. The origin of the charge ordering transition is usually attributed to Mott insulator physics, where the carriers are localized by strong electron-lattice interactions~\cite{Millis98p147, Takano77p923, Ghosh05p245110, Woodward00p844}. More recently, it has been debated whether the difference in charge state resides on the $B$ cations or as holes in the oxygen $2p$ orbitals~\cite{Yang05p10A312, Akao03p156405, Mizokawa00p11263}, and several computational studies showed that the magnetic configuration, in addition to structural changes, plays a vital role in stabilizing the charge ordered state in CaFeO$_3$~\cite{Mizokawa98p1320, Ma11p224115, Cammarata12p195144}. Nevertheless, the amplitude of the cooperative breathing mode is a key indicator of the magnitude of electron trapping and band gap opening in the MIT. Conversely, because the MIT is sensitive to lattice distortion, structural manipulation such as cation doping and epitaxial strain can be exploited to control the electrical properties of this family of oxide systems. In this study, we examine the structural and electrical properties of $B$-cation doped CaFeO$_3$ with density functional theory (DFT). Dopant cations of various sizes, concentrations, and arrangements are tested, including alignments of pairs of dopants along different crystallographic planes.
To confirm the presence of charge ordering in (111) doped CaFeO$_3$, we also carried out rigorous oxidation state calculations for Fe cations in different octahedral cages based on their wave function topologies~\cite{Jiang12p166403}. Through examination of these model systems, we assess the extent to which the structure-coupled electronic transition in doped oxide materials like CaFeO$_3$ can be manipulated to enhance band gap tunability, which in turn controls the MIT temperature. \section{Methodology} Our DFT calculations are performed using the norm-conserving nonlocal pseudopotential plane-wave method~\cite{Payne92p1045}. The pseudopotentials~\cite{Rappe90p1227} are generated by the \textsc{Opium} package~\cite{OPIUM} with a 50~Ry plane-wave energy cutoff. Calculations are performed with the \textsc{Quantum-Espresso} package~\cite{Giannozzi09p395502} using the local density approximation~\cite{Perdew81p5048} with the rotationally invariant effective Hubbard $U$ correction~\cite{Johnson98p15548} of 4~eV on the Fe $d$ orbitals~\cite{Fang01p180407, Cammarata12p195144} for the exchange-correlation functional. In the case of Ni and Ce doping, we applied $U$ = 4.6~eV~\cite{Cococcioni05p035105} and $U$ = 5~eV~\cite{Loschen07p035115} for Ni $d$ and Ce $f$, respectively. Calculations are performed on a $4\times4\times4$ Monkhorst-Pack $k$-point grid~\cite{Monkhorst76p5188} with electronic energy convergence of $1\times10^{-8}$~Ry, force convergence threshold of $2\times10^{-4}$~Ry/\AA, and pressure convergence threshold of 0.5 kbar. For polarization calculations a $4\times6\times12$ $k$-point grid is used, where the densely sampled direction is permuted in order to obtain all three polarization components. Different spin orderings for pure CaFeO$_3$ are tested to find the magnetic ground state, and subsequent solid solution calculations all start with that magnetic ground state. \section{Results and discussion} \subsection{Ground state of CaFeO$_3$ and CaZrO$_3$} To identify the correct spin ordering in pure CaFeO$_3$, we performed relaxations on both the high temperature metallic orthorhombic $Pbnm$~\cite{Takano77p923, Ghosh05p245110, Woodward00p844, Kanamaru70p257} and the low temperature semiconducting monoclinic $P2_1/n$~\cite{Saha-Dasgupta05p045143} structures with common magnetic orderings commensurate with the $2\times2\times2$ supercells, as shown in Fig.~\ref{fig:CFO} (note that diamagnetic (DM) ordering is not included in the figure). \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{CFO.pdf} \caption{(a) Low temperature $P2_1/n$ structure of CaFeO$_3$ with the two symmetry-distinct Fe cation sites color coded.
The spin orderings of the Fe cations are (b) ferromagnetic (FM), (c) A-type anti-ferromagnetic (AFM), (d) C-type anti-ferromagnetic and (e) G-type anti-ferromagnetic.} \label{fig:CFO} \end{figure} \begin{table}[htp] \caption{Calculated total energy $E$, atomic magnetization $m_1$ and $m_2$ for the two Fe sites, total magnetization per five-atom formula unit $M$ and FeO$_6$ octahedral volume $V_1$ and $V_2$ for the two Fe sites of relaxed CaFeO$_3$ with different starting structures and magnetic orderings.} \begin{tabularx}{\textwidth}{ X X X X X X X X} \hline\hline & & $E$ (eV) & $m_1$ ($\mu_\mathrm{B}$) & $m_2$ ($\mu_\mathrm{B}$) & $M$ ($\mu_\mathrm{B}$) & $V_1$ (\AA$^3$) & $V_2$ (\AA$^3$) \\ \hline \multirow{5}{*}{\begin{sideways}$Pbnm$\end{sideways}} & DM & 6.64 & N/A & N/A & N/A & 8.40 & 8.40 \\ & FM & 0.07 & 3.38 & 3.38 & 4.00 & 8.94 & 8.94 \\ & A-AFM & 0.43 & 3.29 & -3.29 & 0.00 & 9.00 & 9.00 \\ & C-AFM & 0.58 & 3.24 & -3.24 & 0.00 & 8.92 & 8.92 \\ & G-AFM & 0.96 & 3.44 & -3.51 & 0.06 & 9.52 & 8.47\\ \hline \multirow{5}{*}{\begin{sideways}$P2_1/n$\end{sideways}} & DM & 6.64 & N/A & N/A & N/A & 8.40 & 8.40 \\ & FM & 0 & 3.13 & 3.60 & 4.00 & 9.16 & 8.75 \\ & A-AFM & 0.32 & 3.69 & -3.69 & 0.00 & 9.57 & 8.40 \\ & C-AFM & 0.58 & 3.24 & -3.24 & 0.00 & 8.92 & 8.92 \\ & G-AFM & 0.81 & 3.85 & -2.39 & 1.00 & 10.05 & 8.13 \\ \hline\hline \end{tabularx} \label{tab:CFO} \end{table} From the results in Table~\ref{tab:CFO}, we can see that both the high temperature and the low temperature structures of CaFeO$_3$ relax to ferromagnetic ground states. Note that an additional magnetic phase transition is experimentally observed for CaFeO$_3$ at 15~K, where it adopts an incommensurate magnetic structure with a modulation vector [$\delta$, 0, $\delta$] ($\delta\approx0.32$, with reciprocal lattice vectors as basis)~\cite{Woodward00p844}. Since DFT calculates the 0~K internal energy, the ferromagnetic ground state represents a reasonable approximation of the spin-spin interactions within a unit cell, given the relatively long spin wavelength and the low experimental crossover temperature to FM. The volumes of the two FeO$_6$ cages are equivalent in the high temperature metallic phase, as expected. The low temperature ground state has a cage size difference $\Delta V = 0.41$~\AA$^3$, indicating some degree of charge ordering. However, the projected density of states (PDOS) of the $P2_1/n$ ground state CaFeO$_3$ in Fig.~\ref{fig:PDOS}a shows that although there are separate gaps in each spin channel, the valence band edge in the majority spin touches the conduction band edge in the minority spin, resulting in zero total gap. The absence of a band gap and the weak charge ordering are a result of the underestimation of band gaps in DFT~\cite{Kohn65pA1133} due to its unphysical electron delocalization. This result is common in Mott insulators with partially filled $d$ orbitals and is in agreement with another DFT study of CaFeO$_3$~\cite{Yang05p10A312}. Even though DFT does not predict the correct electronic ground state of CaFeO$_3$, it is nevertheless indicative of the sensitive nature of the CaFeO$_3$ band gap, as it can be easily influenced when the Fe $d$ orbital filling is varied by structural or other perturbations. Moreover, the different FeO$_6$ $\Delta V$ between the high temperature and low temperature ground states suggests that the structural aspect of the MIT can be modeled reasonably well by DFT.
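As a quick consistency check on the cage volumes quoted above, the volume of an ideal BO$_6$ octahedron follows directly from its three pairs of $B$--O bond lengths. The short Python sketch below assumes an undistorted octahedron, which is only an approximation for the tilted and distorted cages in these structures.
\begin{verbatim}
def octahedron_volume(lx, ly, lz):
    # An ideal octahedron with vertices at distances lx, ly, lz along
    # the three axes (the B-O bond lengths) has volume (4/3)*lx*ly*lz.
    return 4.0 / 3.0 * lx * ly * lz

# The 8.40 A^3 DM cages of Table 1 correspond to B-O bonds of ~1.85 A:
print(octahedron_volume(1.847, 1.847, 1.847))  # ~8.40 A^3
\end{verbatim}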
\begin{figure}[htp] \centering \includegraphics[width=\textwidth]{CFO_CZO_pdos.pdf} \caption{Projected density of states of (a) CaFeO$_3$ and (b) CaZrO$_3$.} \label{fig:PDOS} \end{figure} For comparison, we calculated the PDOS of CaZrO$_3$ relaxed from the experimental $Pbnm$ structure~\cite{Levin03p170}, shown also in Fig.~\ref{fig:PDOS}b. Because Zr$^{4+}$ has empty $4d$ orbitals, the fraction of Zr $d$ states in the valence band is negligible compared to O $p$ states, and a wide charge-transfer gap of 3.82~eV occurs. Since we aim to exploit the size effect of dopants like Zr to influence the electronic properties of CaFeO$_3$, we expect that the nature of the Fe spin-spin interaction is not greatly affected by doping. Therefore in the following study we continue to use FM as the starting magnetic configuration for relaxations of the doped materials. \subsection{CaFeO$_3$-CaZrO$_3$ solid solutions with $2\times2\times2$ supercell} To test how Zr doping influences the structural and electrical properties of CaFeO$_3$, we performed relaxations and subsequent band gap calculations of CaFeO$_3$-CaZrO$_3$ solid solutions. We employ a $2\times2\times2$ supercell and explore all possible $B$-site cation combinations. All the solid solutions tested turn out to be metallic except one, which has a gap of 0.93~eV. The insulating solid solution has a cation arrangement with four Zr cations on the (001) plane, making it a layered structure along [001]. Interestingly, instead of a breathing-mode charge disproportionation, this structure has a 2D Jahn-Teller type distortion, and all four Fe cations are in the same chemical environment. As shown in Fig.~\ref{fig:JT}, the Fe-O bond lengths in the $xy$ plane are 2.16~\AA\ and 1.83~\AA\ in each FeO$_6$, with the orientation alternating between neighbors. The shorter Fe-O bond length is essentially the same as that in high temperature CaFeO$_3$. The consequence of the addition of larger Zr$^{4+}$ cations ($r=0.72$~\AA) compared to Fe$^{4+}$ ($r=0.59$~\AA) is that when Zr cations occupy an entire (001) plane the in-plane lattice expands from 3.70~\AA\ to 3.88~\AA, elongating the Fe-O bonds and enabling the 2D Jahn-Teller type distortion. \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{CFZO_JT.pdf} \caption{(a) Crystal structure of Ca(Fe$_{1/2}$Zr$_{1/2}$)O$_3$ and (b) top view of the ZrO$_2$ layer showing the 2D Jahn-Teller type distortion.} \label{fig:JT} \end{figure} From the PDOS of the solid solution in Fig.~\ref{fig:CFZO_PDOS}a we can see that, like pure CaFeO$_3$, both the valence and the conduction edges are of Fe $3d$ and O $2p$ character, with virtually no Zr contribution. In a charge ordering MIT, the delocalized electrons on Fe$^{4+}$ transfer to neighboring Fe$^{4+}$, making Fe$^{3+}$/Fe$^{5+}$ pairs with the valence and conduction bands located on different cations, concomitant with FeO$_6$ cage size changes. The band gap in the case of charge ordering therefore depends on the energy difference between the $e_g$ orbitals in Fe$^{3+}$ and Fe$^{5+}$, which in turn is affected by the crystal field splitting energy caused by the oxygen ligands. On the other hand, as illustrated in Fig.~\ref{fig:CFZO_PDOS}b, the solid solution band gap is caused by the removal of degeneracy of the $e_g$ orbitals and is controlled by the difference in energy between the two $e_g$ orbitals on the same Fe$^{4+}$ cation.
Since the $e_g$ gap splitting is a result of the Fe-O bond length difference, it is easier to tune by applying either chemical pressure or biaxial strain to change the in-plane lattice constant, whereas the charge ordering mechanism requires the control of individual FeO$_6$ octahedral sizes to change the relative energy of the $e_g$ orbitals between the two Fe sites. Nevertheless, this CaFeO$_3$-CaZrO$_3$ solid solution demonstrates that when arranged in a particular way, in this case on the (001) plane, the size effect of the large Zr cation can cause a cooperative steric effect on the structure and affect the electrical properties of CaFeO$_3$, opening up the band gap via a completely different mechanism. \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{CFZO_PDOS.pdf} \caption{(a) PDOS of Ca(Fe$_{1/2}$Zr$_{1/2}$)O$_3$ and (b) illustration of band gap formation in CaFeO$_3$ (left) and Ca(Fe$_{1/2}$Zr$_{1/2}$)O$_3$ (right). } \label{fig:CFZO_PDOS} \end{figure} \subsection{CaFeO$_3$ with dopants on the (111) plane} As discussed in the previous section, simply doping Zr into $2\times2\times2$ CaFeO$_3$ does not increase the band gap except in one case, where a 2D Jahn-Teller distortion instead of the breathing mode serves to make the system insulating. In a perovskite system with rock-salt ordered alternating $B$ cations, such as charge ordered CaFeO$_3$, one $B$ cation type occupies entire (111) planes and the other type occupies its neighbors in all directions. Since the Zr cation is larger than the Fe cation, to fully utilize its steric effect to distinguish Fe$^{3+}$ from Fe$^{5+}$, it follows that Zr should replace a full Fe$^{3+}$ plane to increase the $B$O$_6$ size on that plane, maximizing its utility by enhancing the cage size difference. A schematic of the (111) doping strategy and the influence of the dopants on their neighboring planes is shown in Fig.~\ref{fig:CFZO_111}b. \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{CFZO_111.pdf} \caption{(a) Crystal structure of the $\sqrt{2}\times\sqrt{6}\times2\sqrt{3}$ CaFeO$_3$ supercell doped with one layer of Zr on the (111) plane. The average $B$O$_6$ octahedron size is listed on the side. (b) Schematic of how a layer of dopants with larger ionic radius exerts a cooperative size effect on the neighboring layers and enhances the existing charge ordering.} \label{fig:CFZO_111} \end{figure} Following this logic, we perform calculations with a $\sqrt{2}\times\sqrt{6}\times2\sqrt{3}$ CaFeO$_3$ supercell, which has six (111) FeO$_2$ layers stacked perpendicularly, as the parent material. One layer of Fe is replaced with Zr and the structure is relaxed. The final structure is shown in Fig.~\ref{fig:CFZO_111}a, along with the average $B$O$_6$ cage size of each layer. Clearly the introduction of a (111) Zr layer drives the charge disproportionation of Fe$^{4+}$ by exerting chemical pressure on both sides of the layer, favoring smaller FeO$_6$ cages on the two adjacent planes, which become Fe$^{5+}$. The second neighboring layers in turn have more room to expand and favor the larger Fe$^{3+}$. The size difference between the largest and the smallest FeO$_6$ cages in this structure ($\Delta V = 0.58$~\AA$^3$) is an enhancement compared to pure CaFeO$_3$ ($\Delta V = 0.41$~\AA$^3$), which suggests the presence of a stronger charge ordering and a wider band gap. However, the electronic structure calculation shows that this solid solution is metallic as well.
The reason that the seemingly more charge ordered system still does not possess a gap can be attributed to the supercell employed. By using a unit cell with six (111) layers and replacing only one layer of Fe with Zr, structurally the remaining five Fe layers are disturbed by the large Zr layer as expected. However, the charge disproportionation reaction 2Fe$^{4+}$$\rightarrow$Fe$^{5+}$+Fe$^{3+}$ cannot proceed to completion, because it requires an even number of Fe layers. Therefore with one layer of dopants there will always be Fe$^{4+}$ ``leftovers'' that render the whole system metallic. To resolve the issue of the odd number of Fe layers, we introduce another layer of +4 dopants with smaller ionic radius than Fe. For simplicity we denote a solid solution in this case by listing the $B$ cations in each of its six (111) layers, with dopant elements in bold. For example, the previously discussed one layer Zr-doped solid solution is denoted as \textbf{Zr}FeFeFeFeFe. The presence of two dopant layers provides both positive and negative chemical pressure to expand Fe$^{3+}$ and contract Fe$^{5+}$. These two dopant layers are separated by an even number of Fe layers so that the FeO$_6$ size alternation is enhanced. An odd number of Fe layers in between the dopant layers would disrupt and impede the size modulation period. Relaxations are performed on \textbf{ZrNi}FeFeFeFe and \textbf{Zr}FeFe\textbf{Ni}FeFe, as well as \textbf{CeNi}FeFeFeFe and \textbf{Ce}FeFe\textbf{Ni}FeFe. The average FeO$_6$ cage size per layer is listed in Table~\ref{tab:size_111}, along with the maximum cage size difference $\Delta V$ and the corresponding band gap of each solid solution. It can be seen that with only Zr as dopant, the $\Delta V$ is significantly smaller than with two layers of dopants, and that $\Delta V$ correlates with the band gap. The Ce-containing solid solutions have a larger $\Delta V$ compared to the Zr-containing ones, due to the larger size of Ce. It is also seen that when the larger dopant and the smaller dopant layers are adjacent, the resulting $\Delta V$ is larger than when they are two Fe layers apart. This is due to the lack of symmetry of the former configuration, where the absence of a mirror plane perpendicular to the $z$ axis allows the FeO$_6$ octahedra close to the dopant layers to further expand or contract compared to the ones that are not neighbors of the dopant layers. In the latter configuration, symmetry guarantees that octahedra on either side of the dopant layer are deformed equally. \begin{table}[htp] \caption{Properties of CaFeO$_3$ doped on the (111) plane. $V_1$ through $V_6$ are average FeO$_6$ volumes in \AA$^3$, where the largest and the smallest cages in each solid solution are in bold.
$\Delta V$ is the size difference between the largest and smallest volumes and $E_g$ is the band gap of the corresponding material in eV.} \begin{tabularx}{\textwidth}{X X X X X X X X X X} \hline\hline \multicolumn{2}{l}{(111) Layers} & $V_1$ & $V_2$ & $V_3$ & $V_4$ & $V_5$ & $V_6$ & $\Delta V$ & $E_g$ \\ \hline \textbf{Zr}FeFeFeFeFe & & Zr & 8.81 & \textbf{9.33} & \textbf{8.75} & 9.29 & 8.83 & 0.58 & 0 \\ \textbf{ZrNi}FeFeFeFe & & Zr & Ni & 9.52 & 8.53 & \textbf{9.59} & \textbf{8.27} & 1.32 & 0.49 \\ \textbf{Zr}FeFe\textbf{Ni}FeFe & & Zr & 8.32 & 9.43 & Ni & \textbf{9.45} & \textbf{8.28} & 1.17 & 0.11 \\ \textbf{CeNi}FeFeFeFe & & Ce & Ni & \textbf{9.77} & 8.61 & 9.69 & \textbf{8.29} & 1.48 & 0.83 \\ \textbf{Ce}FeFe\textbf{Ni}FeFe & & Ce & \textbf{8.39} & \textbf{9.75} & Ni & 9.73 & 8.41 & 1.36 & 0.53 \\ \hline\hline \end{tabularx} \label{tab:size_111} \end{table} From Fig.~\ref{fig:gap} we can see that with increasing difference in FeO$_6$ size, the band gap of the corresponding solid solutions increases accordingly. This relationship demonstrates the coupling between structural and electrical properties, as a larger FeO$_6$ size discrepancy indicates stronger and more complete charge disproportionation. As illustrated in Fig.~\ref{fig:CFZO_PDOS}b, when charge ordering is the band gap opening mechanism, the gap size depends on the crystal field splitting energy difference between Fe$^{3+}$ and Fe$^{5+}$. A larger FeO$_6$ cage size difference means that the O $2p$ - Fe $3d$ repulsion difference is also larger between the two Fe sites. This causes the energy difference of the $e_g$ orbitals in the two sites to increase and the band gap to increase as well. Using linear regression we estimate that the effective chemical pressure exerted on the band gap by the volume difference in this type of solid solution is quite large, at about 370~GPa, in accordance with the effective band gap tuning. Since the transition to metal occurs when thermally activated electrons have enough energy to cross the band gap and flow between the two Fe sites to make them indistinguishable, we believe that by (111) doping the MIT temperature of CaFeO$_3$ can be increased, making devices based on it more operable at room temperature. \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{gap.pdf} \caption{Band gap $E_g$ of the (111) doped CaFeO$_3$ solid solutions increases with the corresponding maximum FeO$_6$ size difference $\Delta V$. This gives an effective chemical pressure on the band gap of 2.30~eV/\AA$^3$ or 370~GPa.} \label{fig:gap} \end{figure} To investigate the layered nature of the solid solutions, we use \textbf{ZrNi}FeFeFeFe as an example and plot its projected density of states in Fig.~\ref{fig:layered_PDOS} in a layer resolved fashion. Each of the six panels in Fig.~\ref{fig:layered_PDOS} represents a layer of Ca$B$O$_3$, and the relative position of the panels corresponds to that of the six layers in the crystal. It can be seen clearly that of the four layers containing Fe ions, the first and the third layers have more majority spin Fe $d$ in the valence band, while the second and fourth layers have more majority spin Fe $d$ in the conduction band. This difference is consistent with the fact that Fe$^{3+}$ has more filled $d$ orbitals than Fe$^{5+}$ and supports our prediction that the doubly doped (111) layered CaFeO$_3$ has an enhanced charge ordering due to the strong modulation of the FeO$_6$ cage volume.
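The equivalence quoted above between the fitted slope (2.30~eV/\AA$^3$) and an effective pressure scale ($\sim$370~GPa) is just a unit conversion, 1~eV/\AA$^3 \approx 160.2$~GPa, which can be verified in a couple of lines; the snippet below is purely illustrative.
\begin{verbatim}
EV = 1.602176634e-19   # J per eV
A3 = 1e-30             # m^3 per cubic angstrom
# 2.30 eV/A^3 expressed in GPa: ~368, i.e. ~370 GPa as quoted.
print(2.30 * EV / A3 / 1e9)
\end{verbatim}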
\begin{figure}[htp] \centering \includegraphics[width=\textwidth]{layered_DOS.pdf} \caption{Layer resolved projected density of states of \textbf{ZrNi}FeFeFeFe. Each of the six panels represents a layer of Ca$B$O$_3$, and the relative position of the panels corresponds to that of the six layers in the crystal.} \label{fig:layered_PDOS} \end{figure} To further verify the charge disproportionation mechanism, we performed oxidation state calculations of the Fe cations in \textbf{ZrNi}FeFeFeFe. We employed an unambiguous oxidation state definition~\cite{Jiang12p166403} based on wave function topology, whereby, by moving a target ion to its image site in an adjacent cell through an insulating path and calculating the polarization change during the process, the number of electrons that accompany the moving nucleus can be determined. The oxidation state obtained this way is guaranteed to be an integer and is unique for an atom in a given chemical environment, not dependent on other factors such as charge partitioning or the choice of orbital basis. In Fig.~\ref{fig:ox} we show how the quantity $N = \Delta\vec{P}\cdot\vec{R}/\vec{R}^2$ changes as each Fe cation is moved along an insulating path to the next cell; the total change in $N$ is equivalent to the oxidation state of the cation. The two Fe ions with larger cages are confirmed to be Fe$^{3+}$ and the ones with smaller cages are Fe$^{5+}$. This proves that charge ordering occurs in this material and causes the band gap opening. Note that the oxidation states calculated are not directly related to the charges localized around the Fe sites, which have been shown to change insignificantly upon oxidation reactions in some cases~\cite{Sit11p12136}. In fact the Bader charges~\cite{Bader90} of the Fe cations are 1.76 and 1.73 for the smaller and larger FeO$_6$ cages, respectively, which shows minuscule differences between Fe sites that are in significantly different chemical environments in terms of oxygen ligand attraction. The (111) doping strategy shows that the size difference between Fe$^{3+}$ and Fe$^{5+}$ can be exploited and reinforced by selectively replacing layers of Fe$^{3+}$ or Fe$^{5+}$ with atoms of even larger or smaller size, respectively, to enhance charge ordering and the insulating character of the CaFeO$_3$ system. \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{ox.pdf} \caption{Oxidation state $N$ of the four Fe cations. $\lambda$ denotes the reaction coordinate of moving the Fe ion sublattice to the neighboring cell, and the change in $N=\Delta\vec{P}\cdot\vec{R}/\vec{R}^2$ from $\lambda=0$ to $\lambda=1$ corresponds to the oxidation state of that Fe ion.} \label{fig:ox} \end{figure} \section{Conclusions} We have demonstrated that for the prototypical charge ordering perovskite CaFeO$_3$, the band gap of the insulating state can be engineered by $B$-site cation doping and structural manipulation. For the dopant atoms to exert significant influence on the parent material, it is favorable to arrange them in a way that their size effects are cooperative and synergistic, producing a collective steric effect and greatly altering the structural and electrical properties. When doped on the (001) plane with larger Zr cations, the in-plane lattice constant expands and supports a 2D Jahn-Teller type distortion, where each FeO$_6$ has two distinct Fe-O bond lengths in the $xy$ plane. Such distortion removes the degeneracy of the two $e_g$ orbitals on each Fe$^{4+}$ and opens up a band gap (not caused by charge ordering) of 0.93~eV.
On the other hand, to enhance the weak charge ordering in pure CaFeO$_3$, we discovered that including two types of dopants on the (111) plane can increase the FeO$_6$ cage size difference and enhance the charge ordering. Using Zr or Ce to replace the larger Fe$^{3+}$ and Ni to replace the smaller Fe$^{5+}$ increases the band gap up to 0.83~eV. The degree of charge ordering is closely related to the magnitude of the FeO$_6$ cage size difference. We used the rigorous definition of oxidation state to verify that in the latter case the band gap opening mechanism is indeed charge disproportionation, as the oxidation states of the larger and smaller Fe cations are calculated to be +3 and +5, respectively. Our results show that the structural and electrical properties of CaFeO$_3$ are coupled, and simple steric effects can enhance the charge ordering transition and greatly alter the band gap of the material when the dopant atoms are placed to act cooperatively. Lastly, by enhancing the charge ordering via doping, we predict that the MIT temperature of CaFeO$_3$ can also be increased to a value more suitable for practical device operation. \begin{acknowledgments} L. J. was supported by the Air Force Office of Scientific Research under Grant No. FA9550-10-1-0248. D. S. G. was supported by the Department of Energy Office of Basic Energy Sciences under Grant No.~DE-FG02-07ER15920. J. T. S. was supported by a sabbatical granted by Villanova University. A. M. R. was supported by the Office of Naval Research under Grant No.~N00014-12-1-1033. Computational support was provided by the High Performance Computing Modernization Office of the Department of Defense, and the National Energy Research Scientific Computing Center of the Department of Energy. \end{acknowledgments}
\section{Introduction} \label{sec:intro} \noindent The processes that cause star formation activity in galaxies to diminish or ``quench'' are of great interest in understanding galaxy evolution. A number of observational and (semi-)numerical studies \citep{vdb+08,rb09,ww10,vdw+10,hopkins+10,more+11,prescott+11,wtc12,wtcv13,cen14,hirschmann+14} have attempted to distinguish between the relative importance of galaxy mergers \citep{tt72} and wider halo-level physical processes such as ram-pressure stripping of gas \citep{gg72}, ``strangulation'' \citep{ltc80,bm00}, ``harassment'' \citep{moore+96}, etc. that may lead to quenching in group environments \citep[for a recent review, see][]{sd15}. Even where halo-level effects are indicated, there is ongoing debate over whether the location of a galaxy within its group (i.e. whether it is a ``central'' galaxy or a ``satellite'') is paramount \citep{baugh+06,peng+12,kovac+14}, or whether the nature of the group as a whole is more important in quenching its member galaxies \citep{osmond+04,vdb+08,carollo+14,hartley+15,klwk14}. In this context, a particularly interesting phenomenon that has received attention recently is that of ``galactic conformity'' \citep{weinmann+06a}. This is the observation that satellite galaxies in groups whose central galaxy is quenched (or red) are preferentially quenched, even when the groups are restricted to reside in dark matter halos of the same mass \citep[see][for an in-depth study extending this statement to a variety of measures of environment]{klwk14}. \cite{weinmann+06a} based these conclusions on a group catalog \citep{yang+05} constructed using galaxies in the Sloan Digital Sky Survey (SDSS). More recently, a similar effect has been observed by \citet{kauffmann+13} when studying the star formation rates (SFR) of SDSS galaxies at large spatial separations ($\lesssim4$Mpc) from isolated objects. The observation of galactic conformity contradicts an assumption typically made in halo occupation models of galaxy distributions, namely, that galaxy properties such as luminosity and colour distributions are determined solely by the mass of the parent dark halo of the group to which the galaxy belongs. Modelling conformity then becomes important in order to better understand the galaxy-dark matter connection \citep{zhv14}, with consequences for both semi-analytical models of galaxy evolution \citep{wwdy13,kauffmann+13} as well as statistical studies aimed at constraining cosmological parameters \citep{vdb+13,more+15,coupon+15}. It is tempting to ascribe a physical origin to galactic conformity, as \cite{hartley+15} do. Since the seminal work of \cite{fabian+03}, evidence for regulation of the hot gas temperature in clusters of galaxies, by the super-massive black hole (SMBH) at the centre of the brightest cluster galaxy, has been growing \citep{croton+06,zhuralavleva+14}. Although it is not clear whether a similar regulation exists for group-mass halos, a coupling of the star-formation properties of different group members via the hot gas content of their halo provides a natural explanation for conformity; a coupling that could even be established many Gyrs previously \citep{rawlings+04}. A coupling such as this may also exist due to re-processing of the gas content of individual galaxies in a group \citep{oppenheimer+10,birrer+14}. However, this is far from a unique solution. 
We might equally speculate that the virial shock heating of infalling gas depends on the halo's location within the cosmic web, perhaps being more effective in filaments than in voids at low halo masses, for instance. In each of these hypothetical cases the conformity signal found by \cite{weinmann+06a} and others is driven by a hidden variable (past/present SMBH activity or large scale environment), i.e. a quantity that was not accounted for in those studies. Clearly identifying the correct hidden variable(s), those that ``explain'' conformity, will lead us to a greater understanding of the physical nature of quenching in galaxy groups. An obvious suspect for the hidden variable, and therefore the possible origin of galactic conformity, is the assembly history of the parent halo. The phenomenon of halo assembly bias, in which low mass older halos tend to cluster more strongly at scales $\gtrsim10$Mpc than younger, more recently assembled halos of the same mass (while this trend reverses for more massive halos), is well established \citep{st04,gsw05,wechsler+06,jsm07,desjacques08,dwbs08,hahn+09,fm10}. Halo age also correlates well with other halo properties -- we will focus on halo concentration which correlates positively with age \citep{nfw97,wechsler+02} -- and halo assembly bias has been observed to extend to these properties as well; e.g., at fixed low mass, more concentrated halos typically cluster more strongly than less concentrated ones. Viewed from the present day, a halo that has a low late-time mass accretion rate (with respect to other similar mass haloes) would have a higher formation redshift and, statistically, a greater concentration. In relatively low-mass haloes the build up of dark matter mass and the accretion of baryons by the central galaxy are quite tightly coupled, because the time scale for cooling to offset gravitational heating is shorter than the free-fall time \citep{wr78}. The galaxies hosted by early-forming haloes would therefore experience a reduced supply of star-forming gas with respect to the larger galaxy population, and in the most extreme cases this could result in a higher probability of being quenched. A plausible hypothesis, then, is that halo assembly bias affects galaxy formation and leads to a \emph{galaxy} assembly bias \citep[][]{tinker+12}, which leaves an imprint in the form of galactic conformity \citep{hwv15}. More specifically, according to this hypothesis galactic conformity both within individual groups \emph{as well as} at large scales may be explained if, at fixed halo mass and galaxy luminosity/stellar mass, quenched or red galaxies preferentially reside in older (sub)halos. One point of interest in this regard is that the conformity signal observed by \citet{kauffmann+13} at scales $\lesssim4$Mpc is quite strong (roughly an order of magnitude difference in the median specific SFR of galaxies surrounding isolated objects that belong to the upper and lower 25th percentiles of specific SFR). Halo assembly bias, on the other hand, has been argued to be relatively unimportant in modelling, e.g., the dependence of galaxy 2-point correlation functions on properties such as luminosity, colour and SFR at similar scales \citep[e.g.,][]{ss09,deason+13}. It is therefore important to ask how assembly bias can be simultaneously consistent with these observations, or whether some other mechanism could generate the conformity signal.
In principle, such a test can be performed by embedding an assumed conformity-inducing correlation between galaxy and halo properties in an analytical Halo Model framework \citep[for a review, see][]{cs02}, since it is feasible to accurately model property-dependent clustering in this language \citep{wechsler+06}. However, comparing analytical calculations directly with observations can become quite involved in the presence of possibly complex selection criteria used in defining the observed sample and the signal itself. A more efficient method, then, is to use $N$-body simulations of dark matter and construct mock galaxy catalogs in which one has full control over switching the conformity signal on and off; these mocks can then be analysed in the same way as the observed sample for a fair comparison. Such mock catalogs have recently been presented by \cite{hw13,mly13,hearin+14a}, who model galactic conformity by introducing a rank ordering of galaxy colours or SFR by suitably defined measures of (sub)halo age. The ``age matching'' mocks of \citet{hw13} and \citet{hearin+14a} have been shown to successfully reproduce a number of observed trends such as the 2-point correlation function and galaxy-galaxy lensing signal of SDSS galaxies \citep{hearin+14b}, the radial profile of the satellite quenched fraction in groups \citep{watson+15}, etc., in addition to containing a conformity signal. The rank ordering in these mocks leads to a fixed strength of galactic conformity, however \citep[see, e.g., Figure 3 in][]{hwv15}. In the absence of conclusive evidence as to the nature of the hidden variable that explains conformity, it is then interesting to explore alternative algorithms, particularly those that implement a \emph{tunable} effect which can then be compared with observations. A related issue is the importance of being able to distinguish cleanly between conformity within individual groups and conformity effects due to spatial correlations between distinct halos \citep[these were respectively dubbed ``1-halo'' and ``2-halo conformity'' by][]{hwv15}. This is because the distinction between what is inside and outside a dark halo becomes fuzzy in any analysis that averages over a large range of halo masses. E.g., we would like to ask the following: out to what spatial separations might one see the effects of conformity in a model that has \emph{only} 1-halo conformity built in and \emph{does not} know about the large scale environmental correlations due to halo assembly bias? This might plausibly be the case in a halo-specific gas regulation mechanism that couples to star formation activity as mentioned above. This question is particularly interesting in the context of the apparent conflict between the expected strength of galaxy assembly bias and the observed strength of galactic conformity mentioned above. In this paper we introduce an algorithm that gives us the level of control needed to address the above issues. Our algorithm is a modification and extension of the one described in \citet[][hereafter, S09]{ss09} to model galaxy positions, luminosities and colours; we model galactic conformity by introducing a positive correlation between galaxy colour and the concentration of the parent dark halo \emph{of the group} to which the galaxy belongs (i.e., we work at the level of groups and not subhalos).
The strength of this correlation is determined by adjusting the value of a parameter $\rho$ which we interpret as a ``group quenching efficiency'', the terminology being motivated by similar quantities studied by earlier authors \citep[see, e.g.,][]{vdb+08,peng+10,klwk14}. In order to answer the question regarding large scale effects raised above, we also explore a model where halo concentrations are \emph{randomized} among halos of fixed mass before correlating them with the galaxy colours, thereby erasing any large scale clustering of the colours due to halo assembly bias while keeping average group-specific properties (including 1-halo conformity) intact. The paper is organised as follows. In section~\ref{sec:algorithm} we describe the S09 algorithm, followed by our modifications for correlating galaxy colours with halo concentrations and calculating stellar masses. In section~\ref{sec:simulations} we describe the $N$-body simulations on which our mock galaxy catalogs are based. Section~\ref{sec:data} describes the SDSS-based group catalog that we use for comparison with our mocks, together with various fitting functions derived from this sample which inform our mocks. Our results are described in section~\ref{sec:results}, followed by a discussion in section~\ref{sec:discuss}. We conclude in section~\ref{sec:conclude}. We will use SDSS galaxy properties K-corrected to redshift $z=0.1$, usually denoted by a superscript, e.g., ${}^{0.1}r$ for the $r$ band. Throughout, we will denote $M_{{}^{0.1}r}-5\log_{10}(h)$ as $M_r$, ${}^{0.1}(g-r)$ as $g-r$ and quote stellar masses $m_\star$ in units of \ensuremath{h^{-2}M_{\odot}}\ and halo masses $m$ in units of \ensuremath{h^{-1}M_{\odot}}, where $H_0=100\,h\,{\rm km\,s}^{-1}{\rm Mpc}^{-1}$ is the Hubble constant. We use a flat $\Lambda$CDM cosmology with parameters $\Omega_{\rm m}=0.25$, $\Omega_{\rm b}=0.045$, $h=0.7$, $\sigma_8=0.8$ and $n_{\rm s}=0.96$, which are consistent with the 5-year results of the WMAP experiment \citep{hinshaw+09}. \section{Algorithm} \label{sec:algorithm} \noindent Our basic algorithm is borrowed from S09 and is itself an extension of the algorithm described by \citet{sscs06} to include galaxy colours. The algorithm uses a Halo Occupation Distribution function \citep[HOD;][]{bw02} calibrated on SDSS luminosity-dependent projected clustering measurements to create a luminosity-complete mock galaxy catalog. We describe this algorithm and our modifications below. \subsection{The S09 algorithm} \label{sec:algo:subsec:S09} \subsubsection{Luminosities, positions and velocities of centrals and satellites} \noindent The S09 algorithm explicitly implements the so-called central-satellite split. A fraction $f_{\rm cen}(<M_{r,{\rm max}}|m)$ of $m$-halos (i.e., halos with masses in the range $(m,m+\ensuremath{{\rm d}} m)$) is chosen to have a central galaxy brighter than the luminosity threshold $M_{r,{\rm max}}$. Each $m$-halo with a central is then assigned a number of satellites drawn from a Poisson distribution with mean $\bar N_{\rm sat}(<M_{r,{\rm max}}|m)$. The luminosities of centrals and satellites are then assigned using the distributions $f_{\rm cen}(<M_{r}|m)/f_{\rm cen}(<M_{r,{\rm max}}|m)$ and $\bar N_{\rm sat}(<M_{r}|m)/\bar N_{\rm sat}(<M_{r,{\rm max}}|m)$, respectively. 
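As an illustration, this population step can be sketched in a few lines of Python. The occupation forms below are simple stand-ins chosen for readability (an error-function central occupation and a power-law satellite occupation), \emph{not} the calibrated \citet{zehavi+11} functions used in our actual pipeline; luminosities would subsequently be assigned by numerically inverting the conditional distributions quoted above.

\begin{verbatim}
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(42)

def f_cen(logm, logm_min=11.7, sigma=0.2):
    # illustrative stand-in: probability that an m-halo hosts a
    # central brighter than the threshold M_{r,max}
    return 0.5 * (1.0 + erf((logm - logm_min) / sigma))

def nbar_sat(logm, logm1=13.0, alpha=1.0):
    # illustrative stand-in: mean satellite count brighter than M_{r,max}
    return 10.0 ** (alpha * (logm - logm1))

logm = rng.normal(12.5, 0.7, size=100_000)   # toy halo masses, log10(m)

# Bernoulli draw for the central; Poisson draw for the satellites.
# Halos without a central host no satellites in this scheme.
has_cen = rng.uniform(size=logm.size) < f_cen(logm)
n_sat = np.zeros(logm.size, dtype=int)
n_sat[has_cen] = rng.poisson(nbar_sat(logm[has_cen]))
\end{verbatim}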
The functions $f_{\rm cen}(<M_{r}|m)$ and $\bar N_{\rm sat}(<M_{r}|m)$ define the HOD, with the mean number of galaxies brighter than $M_r$ residing in $m$-halos given by $\avg{N_{\rm gal}|m} = f_{\rm cen}(<M_{r}|m)\left[1+\bar N_{\rm sat}(<M_{r}|m)\right]$. In this work we will use the forms calibrated by \cite{zehavi+11}; specifically, we use an interpolation kindly provided by Ramin Skibba. Our fiducial cosmology has the same parameter values as used by \citet{zehavi+11} for their HOD analysis. Satellite luminosities are assigned after the central ones, and the algorithm ensures that the central is the brightest galaxy of the group. We will return to this point later. Each central is placed at the center-of-mass of its parent dark matter halo and is assigned the velocity of the halo. The satellites are distributed around the centrals according to a truncated Navarro-Frenk-White (NFW) profile \citep{nfw96}, using a halo concentration as described below, and are assigned random velocities relative to the central that are drawn from a Maxwell-Boltzmann distribution that scales with halo mass\footnote{We note that our analysis in this paper does not use information regarding galaxy velocities; this will however be essential in future work when comparing, e.g., to observations of clustering in redshift space.}. This procedure for assigning satellite positions and velocities is more efficient, but perhaps less accurate, than identifying satellites with subhalo positions (e.g., there is no information regarding infall times and number of pericenter passages, but we need not run a very high resolution simulation and track low mass subhalos). \subsubsection{Galaxy colours and the central-satellite split} \noindent Having assigned galaxy positions, velocities and luminosities, we next assign galaxy colours by drawing from double-Gaussian fits to the observed distribution of $g-r$ colours at fixed luminosity. The two components of the double-Gaussian -- `red' and `blue' -- have means, variances and relative fraction as functions of luminosity that are fit to data. We give the results of such fits to SDSS data in section~\ref{sec:data:subsec:p(g-r|Mr)}. Since these fits work at the level of the full galaxy sample, there is some freedom in deciding what fraction of centrals and satellites at fixed luminosity must be labelled `red' with the above definition. This is a somewhat subtle issue which merits some discussion. In general, one might assume that the central and satellite red fractions depend on both the luminosity of the object and the mass of the parent halo, and we would denote these quantities as $p({\rm red}|{\rm cen},M_r,m)$ and $p({\rm red}|{\rm sat},M_r,m)$, respectively. S09 showed that a model in which these quantities do not depend on halo mass (but have a non-trivial luminosity dependence) is consistent with measurements of colour-marked clustering of SDSS galaxies. However, colour-dependent clustering measurements will also depend on the level of conformity, which is what we are trying to model here (and which S09 did not do). We therefore adopt a simpler approach: we continue to assume that these red fractions are independent of halo mass, but fix the form of $p({\rm red}|{\rm sat},M_r)$ by demanding agreement with a direct measurement of this quantity in an SDSS-based group catalog \citep[][henceforth, Y07]{yang+07}.
The form of $p({\rm red}|{\rm cen},M_r)$ is then fixed by the assumed HOD, while the \emph{all}-galaxy red fraction $p({\rm red}|M_r,m)$ inherits a mass dependence from the HOD (see Appendix~\ref{app:massdepredfrac})\footnote{See \cite{skibba09} for a comparison of central and satellite colours using mocks based on the S09 algorithm and in an earlier version of the Y07 catalog.}. In Appendix~\ref{app:massdepredfrac} we also explore the consequences of a halo mass dependence in the red fraction of centrals. The S09 algorithm has been used in several studies of galaxy environments \citep[e.g.,][]{muldrew+12,skibba+13} and the luminosity and color dependence of galaxy clustering \citep[e.g.,][]{skibba+14,carretero+15}. We modify and extend this algorithm in two ways: (i) we correlate galaxy colours with the parent halo concentration and (ii) we calculate a stellar mass for each galaxy using a colour-dependent mass-to-light ratio. \subsection{Correlating colour and concentration} \label{sec:algo:subsec:ccc} \noindent As indicated above, in the basic S09 algorithm the colour of any given galaxy is assigned in two steps: first the galaxy is labelled `red' (a `red flag' is set to $1$) with probability $p({\rm red}|{\rm gal},M_r)$ where `gal' is either `cen' or `sat', and then its colour is drawn as a Gaussian random number with the appropriate mean and variance. We correlate galaxy colours with parent halo concentrations by modifying the first step to adjust the red flag according to the concentration. The trend we wish to introduce is that more concentrated, older halos should host older, or redder, galaxies. At fixed halo mass, halo concentration $c$ is approximately Lognormally distributed, i.e. $\ln c$ is approximately Gaussian distributed with constant scatter $\sigma_{\ln c}\simeq 0.14\ln(10)$ and a mean value $\avg{\ln c} \equiv \ln\bar c(m,z)$ that depends on halo mass and redshift \citep[see, e.g.][]{bullock+01,wechsler+02}. In the range of interest the mean value is well described by \citep{ludlow+14} \begin{equation} \bar c(m,z) = \alpha\, \nu(m,z)^{-\beta}\,, \label{lognormal} \end{equation} where $\nu(m,z)=\delta_{\rm c}(z)/\sigma(m)$ is the dimensionless ``peak height'' of the halo\footnote{$\delta_{\rm c}(z)$ and $\sigma(m)$ are, respectively, the critical density for spherical collapse at redshift $z$ and the r.m.s. of initial fluctuations smoothed on mass scale $m$, each extrapolated to $z=0$ using linear theory.} and the coefficients $\alpha$ and $\beta$ depend on the choice of halo mass definition. We will construct catalogs using the $m_{\rm 200b}$ definition for which the \citet{zehavi+11} HOD was calibrated ($m_{\rm 200b}$ is the mass contained in the radius $R_{\rm 200b}$ where the spherically averaged density is $200$ times the background matter density $\bar\rho(z)$) and the corresponding $\bar c(m,z)$ relation. For this choice we have $\alpha=7.7$ and $\beta=0.4$ \citep[see, e.g., Appendix A of][]{paranjape14}. Let $p({\rm red})$ be the value of the red fraction in the \emph{absence} of any correlation with concentration. (For clarity we drop the explicit dependence on galaxy luminosity and type.) It is useful to define the quantity \begin{equation} s\equiv\frac{\ln(c/\bar c)}{\sigma_{\ln c}}\,, \label{s-def} \end{equation} whose distribution $p(s)$ is Gaussian with zero mean and unit variance, with $\bar c(m)$ and $\sigma_{\ln c}$ defined above. 
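A minimal numerical transcription of this concentration model reads as follows, with $\alpha=7.7$ and $\beta=0.4$ hard-coded for the $m_{\rm 200b}$ definition; the peak heights below are placeholders rather than values computed from the linear power spectrum, and all names are hypothetical.

\begin{verbatim}
import numpy as np

SIGMA_LNC = 0.14 * np.log(10.0)     # constant Lognormal scatter in ln(c)

def mean_conc(nu, alpha=7.7, beta=0.4):
    # mean relation c_bar(m, z) = alpha * nu^(-beta) for m_200b halos
    return alpha * nu ** (-beta)

rng = np.random.default_rng(7)
nu = np.full(10_000, 1.2)           # placeholder peak heights nu(m, z)
s = rng.standard_normal(nu.size)    # standard Gaussian variate
c = mean_conc(nu) * np.exp(SIGMA_LNC * s)   # Lognormal concentration draw

# inverting s = ln(c / c_bar) / sigma_lnc recovers the Gaussian variate
assert np.allclose(s, np.log(c / mean_conc(nu)) / SIGMA_LNC)
\end{verbatim}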
We compute a conditional red fraction $p({\rm red}|s)$ that depends on the parent halo concentration by setting \begin{equation} p({\rm red}|s) = (1-\rho)p({\rm red}) + \rho\,\Theta(s-s_{\rm red})\,, \label{newredfrac} \end{equation} where the Heaviside function $\Theta(x)$ is unity for $x>0$ and zero otherwise. This says that galaxies in halos with concentrations $s > s_{\rm red}$ have an enhanced probability to be red as compared to the S09 model, while this probability is lowered by a factor $(1-\rho)$ compared to $p(\rm red)$ for galaxies in low-concentration halos. The dividing line $s_{\rm red}$ between high and low concentrations is defined in such a way that the \emph{average} red fraction across all halos satisfies \begin{equation} \avg{p({\rm red}|s)} = \int_{-\infty}^\infty\ensuremath{{\rm d}} s\,p(s)\,p({\rm red}|s) = p({\rm red})\,, \label{pred-avg} \end{equation} which, using \eqn{newredfrac}, implies \begin{equation} p({\rm red}) = p(s > s_{\rm red}) = \frac{\textrm{erfc}(s_{\rm red}/\sqrt{2})}{2}\,. \label{sred-defn} \end{equation} Equation~\eqref{newredfrac} gives a step-like dependence of the red fraction on concentration; in principle one could also imagine schemes where the red fraction increases with concentration in a continuous manner, but in this case it becomes more complicated to simultaneously ensure $\avg{p({\rm red}|s)} = p({\rm red})$ and $0 < p({\rm red}|s) < 1$, due to the Gaussian integral in \eqn{pred-avg}. The parameter $\rho$ lies between zero and unity and controls the strength of the correlation between the red fraction and concentration; we discuss its physical meaning below. Setting $\rho = 0$ gives us the uncorrelated case when the red fraction does not depend on parent halo concentration. Setting $\rho = 1$ on the other hand corresponds to `complete correlation' where \emph{all} galaxies in high(low)-concentration halos are labelled red (blue). The intermediate case $0<\rho<1$ clearly interpolates between these two extremes. Having set up our model, we proceed by assigning colours to the galaxies. The red flag of the galaxy is chosen by drawing a uniform random number $u\in[0,1)$ and setting the red flag to unity if $u < p({\rm red}|s)$ and zero otherwise. Next, we draw a Gaussian random number $g-r$ from the appropriate red or blue distribution. In the basic S09 algorithm, the latter step also does not depend on parent halo concentration. In principle, we could once again use the values of $s$ to introduce a further correlation between the actual $g-r$ values and halo concentration, but in practice it turns out to be difficult to do this while preserving the global colour distributions at fixed luminosity (which we must do since these are constrained by data). We therefore stick to the S09 procedure at this stage and draw from the appropriate `red'/`blue' Gaussian without further correlation with concentration. Ideally, we would correlate galaxy colours with the concentrations actually measured in the simulation. However, these concentrations are only approximately Lognormal, and deviations from the Lognormal shape render the second equality in \eqn{sred-defn} above invalid. Continuing to use $s_{\rm red} = \sqrt{2}\textrm{erfc}^{-1}(2p({\rm red}))$ then leads to the overall central and satellite red fractions not being preserved, with $\avg{p({\rm red}|s)} \neq p({\rm red})$. We could correct for this by calibrating the exact shape of $p(s)$ and numerically inverting the relation $p(s > s_{\rm red}) = p(\rm red)$ to obtain $s_{\rm red}$. 
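Under the Gaussian assumption for $s$, the whole scheme reduces to a few lines. The sketch below (with hypothetical names, and $p({\rm red})=0.4$, $\rho=0.65$ chosen purely for illustration) draws red flags and verifies numerically that the mean red fraction is preserved.

\begin{verbatim}
import numpy as np
from scipy.special import erfcinv

rng = np.random.default_rng(7)

def draw_red_flags(s, p_red, rho):
    # s_red from inverting p(red) = erfc(s_red / sqrt(2)) / 2
    s_red = np.sqrt(2.0) * erfcinv(2.0 * p_red)
    # p(red|s) = (1 - rho) * p(red) + rho * Theta(s - s_red)
    p_red_s = (1.0 - rho) * p_red + rho * (s > s_red)
    return rng.uniform(size=s.size) < p_red_s

s = rng.standard_normal(1_000_000)  # Gaussianized log-concentrations
flags = draw_red_flags(s, p_red=0.4, rho=0.65)
print(flags.mean())                 # ~0.4: <p(red|s)> = p(red) holds
\end{verbatim}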
Note, however, that what we are really interested in is not the actual shape of the distribution of concentrations, but only their \emph{ranking} in halos of fixed mass. It is therefore easier to simply ``Gaussianize'' the measured log-concentrations, proceeding as follows. \begin{enumerate} \item We first randomly draw a Gaussian variate $s$ for each halo in the catalog and derive a Lognormal concentration $c$ using \eqns{lognormal} and~\eqref{s-def}. \item Next, we bin the halos in $16$ equi-log-spaced mass bins\footnote{The number of bins was chosen after testing for convergence.} and consider the lists of measured and Lognormal concentrations of halos in each bin. We rank order and reassign the Lognormal concentrations in a given bin according to the measured concentrations in that bin. In detail, if there are $N$ halos in a bin, then the halo with the $j^{\rm th}$ largest measured concentration is assigned the $j^{\rm th}$ largest Lognormal concentration in this bin, with $j=1,2,\ldots,N$. \item These reassigned Lognormal values of $c$ are then used to distribute satellites in their respective parent halos, and the corresponding Gaussian values of $s$ are used to induce conformity in the colours as described above, with $s_{\rm red}$ given by \eqn{sred-defn}. \end{enumerate} This ensures that galaxy colours in this model are assigned consistently and inherit the large scale environment dependence of assembly bias. We will show that this procedure, which defines our default model, leads to differences in the satellite red fraction in groups with blue and red centrals (1-halo conformity) as well as specific spatial trends of galaxy red fractions around blue and red isolated objects well outside $R_{\rm 200b}$ (2-halo conformity). As mentioned in the Introduction, it is also useful to ask how these trends would change if conformity arises not because of assembly bias, but due to some other halo property that \emph{does not} show environmental trends. To this end, we will also consider a model in which galaxy colours are chosen and satellites are distributed exactly as in the default procedure above, except that we skip the rank ordering step (ii). This is equivalent to randomizing the halo concentrations at fixed halo mass before correlating them with the galaxy colours. In this model -- which we will denote ``\emph{no-2h}'' -- we expect our mocks to exhibit 1-halo conformity without a corresponding 2-halo signal. The parameter $\rho$ has an interesting interpretation. It is easy to show that $\rho$ satisfies the relations \begin{align} \rho &=\, \frac{p({\rm red}|s>s_{\rm red}) - p({\rm red})}{1 - p({\rm red})}\notag\\ &\sim\, \frac{p({\rm red}|{\rm old}) - p({\rm red})}{1 - p({\rm red})}\,. \label{understanding-rho} \end{align} where the second approximation holds if we assume that high concentration halos are old. Although simply a restatement of the definition of $\rho$ in \eqn{newredfrac}, \eqn{understanding-rho} allows us to interpret $\rho$ as a ``quenching efficiency'' \citep{vdb+08,peng+10,peng+12,kovac+14,phillips+14,klwk14}. Written like this, in our default model $\rho$ is the fraction of blue (or star forming) galaxies that became red (or quenched) because their respective parent halos grew old\footnote{This can only be approximately correct, since, e.g., the present day population of blue satellites is not in general representative of the progenitors of all present day satellites. 
Our results do not depend on this interpretation, however, and one could simply treat $\rho$ as a free parameter in the model.}. In the \emph{no-2h} model, however, the connection between halo ages and galaxy colours is lost (since the former exhibit halo assembly bias at large scales while the latter will not), and the second relation in \eqn{understanding-rho} will not hold. In either model of conformity, $\rho$ can be thought of as a ``group quenching efficiency'', driven by the age of the parent halo of the group in the default model and by some local but otherwise unspecified property of the group in the \emph{no-2h} model. Our scheme of generating concentration-dependent red flags smoothly interpolates between the uncorrelated case ($\rho=0$) and the nearly `completely correlated' case ($\rho=1$). In practice, we perform this operation separately for satellites and centrals, which ensures that the overall central and satellite red fractions are left unaltered by construction. The model is flexible enough that, if needed, the group quenching efficiency $\rho$ can be set separately for centrals and satellites, with the interpretation that different quenching mechanisms might play a role for centrals and satellites in the same halo. E.g., this might be the case if \emph{sub}halo age is more relevant for satellite colour than the age of the parent halo \citep[][]{hw13}. In this work, however, we do not explore this possibility and only use a one-parameter setup with $\rho$ taken to be the same for centrals and satellites, independent of galaxy luminosity and halo mass. Galactic conformity arises from the fact that the colours of centrals and satellites in a group are affected by their common group concentration. The fact that the concentrations have a scatter at fixed halo mass means that the conformity will persist even when binning objects by the parent halo mass. \subsection{Calculating stellar masses} \label{sec:algo:subsec:sm} \noindent As a separate step, having assigned galaxy colours, we compute stellar masses using a colour-dependent mass-to-light ratio that we have also fit to SDSS galaxies. In particular, we use \begin{equation} (M/L)_r = 4.66(g-r)-1.36(g-r)^2-1.108\,, \label{masstolight-fit} \end{equation} to compute the ${}^{0.1}r$-band Petrosian mass-to-light ratio $(M/L)_r$ in units of $\ensuremath{M_{\odot}}/L_{\odot}$, and add a Gaussian $1$-sigma scatter of $0.2$ (we describe the procedure and data set used to obtain this fit in section~\ref{sec:data}). Taking the base-10 logarithm and adding $(M_r-4.76)/(-2.5)$ gives us the log stellar mass in units of \ensuremath{h^{-2}M_{\odot}}. This does not guarantee that the central of a group, which is the brightest by construction, is also the most massive. In practice this affects about $8$-$10\%$ of the groups in the total sample, with the actual number depending on the value of $\rho$. This may be related to the findings of \citet{skibba+11} regarding a non-zero fraction of groups in which the central is not the brightest; we will explore this further in future work. \section{Simulations} \label{sec:simulations} \noindent Our mocks are built on a suite of $N$-body simulations of cold dark matter (CDM), each of which resolves a periodic cubic box of comoving volume $(200\ensuremath{h^{-1}{\rm Mpc}})^3$ with $512^3$ particles using our fiducial cosmology. This gives us a particle mass of $m_{\rm part}=4.1\times10^{9}\ensuremath{h^{-1}M_{\odot}}$.
The simulations were run using the tree-PM code\footnote{http://www.mpa-garching.mpg.de/gadget/} \textsc{Gadget-2} \citep{springel:2005} with a force resolution $\epsilon = 12.5h^{-1}$kpc comoving ($\sim1/30$ of the mean particle separation) and a $1024^3$ PM grid. Initial conditions were generated at $z=99$ employing $2^{\rm nd}$-order Lagrangian Perturbation Theory \citep{scoccimarro98}, using the code\footnote{http://www.phys.ethz.ch/$\sim$hahn/MUSIC/} \textsc{Music} \citep{hahn11-music} with a transfer function calculated using the prescription of \citet{eh98}. We have run $10$ realisations of this simulation by changing the random number seed used for generating the initial conditions and have used the resulting $z=0$ snapshots for our mocks. The simulations were run on the Brutus cluster\footnote{http://www.cluster.ethz.ch/index\_EN} at ETH Z\"urich. To identify halos, we have used the code\footnote{http://code.google.com/p/rockstar/} \textsc{Rockstar} \citep{behroozi13-rockstar}, which assigns particles to halos based on an adaptive hierarchical Friends-of-Friends algorithm in $6$-dimensional phase space. \textsc{Rockstar} has been shown to be robust for a variety of diagnostics such as density profiles, velocity dispersions, merger histories and the halo mass function. As mentioned earlier, we use the $m_{\rm 200b}$ value reported by \textsc{Rockstar} as the mass of the parent halo, and $R_{\rm 200b}$ as its radius. We use the value of the halo scale radius $r_{\rm s}$ reported by \textsc{Rockstar} to compute halo concentration $c_{\rm 200b} = R_{\rm 200b}/r_{\rm s}$. The smallest halo mass we resolve sets the faintest luminosity we can reliably sample. We discard objects having $m_{\rm 200b} < 20\,m_{\rm part}$, which allows us to set the luminosity threshold to $M_{r,{\rm max}}=-19.0$. Having $10$ independent simulations means that we can reliably characterise the sample variance on our observables for each mock configuration. \section{Data} \label{sec:data} \noindent In this section we describe the data sets we use for comparison with the mocks, and give details of various fits used in defining the mocks. \subsection{The Y07 catalog} \label{sec:data:subsec:Y07cat} \noindent We base some of the fits required for generating our mocks (the colour distribution and satellite red fraction at fixed luminosity, and the mass-to-light ratio at fixed colour) on the galaxies contained in the Y07 group catalog\footnote{http://gax.shao.ac.cn/data/Group.html}. These fits do not depend on the level of conformity, and we therefore additionally use the Y07 catalog as a baseline for galactic conformity as well. The Y07 catalog is built using the halo-based group finder described in \cite{yang+05} to identify groups in the New York University Value Added Galaxy Catalog \citep[NYU-VAGC;][]{blanton+05}, based on the SDSS\footnote{http://www.sdss.org} \citep{york+00} data release 7 \citep[DR7;][]{abazajian+09}. Throughout, we will only consider galaxies with spectroscopic redshifts (`sample II' of Y07) restricted to the range $0.01 < z < 0.07$, and also restrict the Petrosian absolute magnitudes to the range $-23.7 < M_r < -16$. In part of the analysis below, we will use a luminosity-complete subsample -- denoted ``Y07-Lum'' -- containing galaxies with Petrosian absolute magnitudes $M_r < M_{r,{\rm max}} = -19.0$ and Model $g-r$ colours from the NYU-VAGC as provided by Y07. 
Additionally, when comparing properties as a function of stellar mass, we will use another subsample -- denoted ``Y07-Mass'' -- containing values of Petrosian stellar mass $m_\star$ and Model $g-r$ colours obtained by running the \textsc{kcorrect\_v4.2} code\footnote{http://howdy.physics.nyu.edu/index.php/Kcorrect} of \cite{br07} on the corresponding Petrosian and Model properties, respectively, of the Y07 galaxies obtained from the MPA-JHU catalog\footnote{http://www.mpa-garching.mpg.de/SDSS/DR7/} \citep{kauffmann+03} by matching galaxy IDs across the MPA-JHU and NYU catalogs. We use magnitudes that are K-corrected to their values at $z=0.1$. We have checked that there is reasonable agreement between comparable galaxy properties in the NYU-VAGC and the MPA-JHU catalog. When showing mass functions and fractions, we apply inverse $V_{\rm max}$-weighting to the galaxies in the Y07-Mass catalog. \begin{figure} \centering \includegraphics[width=0.475\textwidth]{gr-Mrbins-YangVsFits} \caption{Comparison of observed Model $g-r$ distributions in bins of Petrosian $M_r$ in the Y07-Lum catalog (solid green histograms) with the corresponding double-Gaussian fits reported in \eqns{dbl-Gauss-fits}. We show the `red' and `blue' components in the fits as the respectively red and blue dashed lines, and the total of these as the dashed green lines. For comparison we show the S09 fits as dotted lines; the discrepancy between S09 and the Y07 catalog is most likely due to differences between DR7 and DR4 (which was used by S09) and our choice of using Model colours.} \label{fig:gr-comparemock} \end{figure} Our choice of Petrosian quantities for defining both the stellar masses as well as the mass-to-light fit (see below) in the Y07-Mass catalog is motivated by the fact that the HOD we use has been calibrated on Petrosian absolute magnitudes by \citet{zehavi+11}. We have checked that there is reasonable agreement between Model and Petrosian stellar masses in the Y07-Mass catalog and, moreover, that the trends in the red fractions and associated observables which we discuss below differ by only $\sim10\%$ when using Model or Petrosian quantities. Consequently, we do not expect any of our qualitative conclusions to be affected by our choice. \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{redfractions-global} \caption{Conformity-independent variables as a function of luminosity \emph{(left panel)} and stellar mass \emph{(right panel)}. Filled symbols joined by solid lines show the results of averaging over $10$ realisations of our mocks, while open symbols joined by dashed lines show measurements in the Y07 catalog. The error bars on the mock results show the r.m.s. fluctuations (standard deviation) around the mean value, while the errors on the Y07 data are estimated from 150 bootstrap resamplings. The cyan diamonds show the satellite fraction $f_{\rm sat}$. The dark red crosses, green circles and yellow triangles respectively show the red fractions for all galaxies ($f_{\rm red}$), only centrals ($f_{\rm red|cen}$) and only satellites ($f_{\rm red|sat}$). The magenta squares show the satellite quenching efficiency $\varepsilon_{\rm sat} = (f_{\rm red|sat}-f_{\rm red|cen})/(1-f_{\rm red|cen})$. 
The `red'/`blue' classification for both data and mocks uses \eqn{colorcut-Mr} for the left panel and \eqn{colorcut-sm} for the right panel.} \label{fig:redfrac-avg} \end{figure*} \subsection{Mass-to-light ratio fit} \label{sec:data:subsec:M/L} \noindent We fit a quadratic relation to the ${}^{0.1}r$-band Petrosian mass-to-light ratio $(M/L)_r$ at fixed value of Model $g-r$ colour in the Y07-Mass catalog. In particular, we compute the median value of $(M/L)_r$ in bins of $g-r$, using the median $g-r$ as the bin center. The resulting least-squares best fit is given in \eqn{masstolight-fit} and is close to but slightly lower than the one in equation (1) from \cite{ww12}. The measured r.m.s. scatter of $(M/L)_r$ in the bins of $g-r$ has an average value of $0.2$ which we also use as described below \eqn{masstolight-fit}. \subsection{Color-luminosity fits} \label{sec:data:subsec:p(g-r|Mr)} \noindent We fit double-Gaussian shapes to the distributions of Model $g-r$ colours in bins of Petrosian absolute magnitude $M_r$ in the Y07-Lum catalog. The resulting fits can be summarised using $5$ quantities: the means and variances of the `red' and `blue' distributions and the probability $p({\rm red}|M_r)$ that the colour is drawn from the `red' distribution. The full $g-r$ distribution can then be written as \begin{align} p(g-r|M_r) &= p({\rm red}|M_r)\,p_{\rm red}(g-r|M_r)\notag\\ &\ph{p({\rm red})} + \left(1-p({\rm red}|M_r)\right)\,p_{\rm blue}(g-r|M_r)\,, \label{p-gr-full} \end{align} where, e.g., $p_{\rm red}(g-r|M_r)$ is Gaussian with mean $\avg{g-r|M_r}_{\rm red}$ and standard deviation $\sigma_{\rm red}(M_r)$, and similarly for the blue distribution. We find the following best fit values: \begin{align} p({\rm red}|M_r) &= 0.423 - 0.175\left(M_r+19.5\right) \notag\\ \avg{g-r|M_r}_{\rm red} &= 0.9050 - 0.0257\left(M_r+19.5\right) \notag\\ \avg{g-r|M_r}_{\rm blue} &= 0.575 - 0.126\left(M_r+19.5\right) \notag\\ \sigma_{\rm red}(M_r) &= 0.0519 + 0.0085\left(M_r+19.5\right) \notag\\ \sigma_{\rm blue}(M_r) &= 0.150 + 0.015\left(M_r+19.5\right) \label{dbl-Gauss-fits} \end{align} Figure~\ref{fig:gr-comparemock} compares these analytic functions (dashed lines) with the measured $g-r$ distribution in the Y07-Lum catalog (solid histograms). For comparison, the dotted lines show the fits reported by S09 which were based on a similar catalog using DR4 of SDSS; these are rather different from the Y07-Lum data, which could be due to differences between the DR7 colours and those in DR4. (The discrepancy reduces but does not disappear if we use Petrosian colours.) Overall, we see a good agreement between the data and the fits, except for a ``green valley'' which is not captured by the sum of two Gaussians. In principle, one can obtain a better fit by introducing a third `green' component \citep{krause+13,carretero+15}, but we choose not to do this here. \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{massfuncs-compare} \caption{All-galaxy luminosity \emph{(left panel)} and stellar mass functions \emph{(right panel)}. Red squares show the average over $10$ independent mock catalogs, with error bars showing the r.m.s. scatter. Cyan circles show measurements in the Y07-Lum (left) and Y07-Mass catalogs (right), with errors estimated from 150 bootstrap resamplings. 
For comparison, the short-dashed black curve in the left panel shows the Schechter fit to the ${}^{0.1}r$-band luminosity function in SDSS data reported by \citet{blanton+03}, while the corresponding curve in the right panel shows the stellar mass function fit for the Y07 catalog as reported by \citet{peng+12}. The vertical dotted line in the right panel shows the approximate mass-completeness limit of our mocks at $\log_{10}(m_\star)=9.9$.} \label{fig:massfuncs} \end{figure*} Although the algorithm classifies objects as red or blue depending on which Gaussian distribution their colours are drawn from, for ease of comparison with observational results, we will use sharp thresholds in $g-r$. When showing results as a function of luminosity we classify objects as red when their $g-r$ colour exceeds \begin{equation} (g-r)_{\rm cut} = 0.8 - 0.03\left(M_r+20\right)\,, \label{colorcut-Mr} \end{equation} and as blue otherwise \citep{zehavi+05}. When showing results as a function of stellar mass we instead use the threshold \begin{equation} (g-r)_{\rm cut} = 0.76 + 0.10\left[\,\log_{10}(m_{\star})-10\,\right]\,, \label{colorcut-sm} \end{equation} which is somewhat shallower than the threshold quoted by \citet{vdb+08} who use a slope of $0.15$ instead of $0.1$. We have found that \eqn{colorcut-sm} gives a slightly better separation of the bi-modal colour-mass distribution in the Y07-Mass catalog than the \citet{vdb+08} relation does, although our qualitative conclusions do not depend on the choice of threshold. We apply the same threshold to colours in our mocks as well as in the Y07 data. Using sharp thresholds means that the measured red fractions $f_{\rm red}$ will, in general, differ from the probabilities $p({\rm red})$ discussed earlier. \section{Results} \label{sec:results} \noindent We now present the results of our mock algorithm and compare them with corresponding measurements in the Y07 catalog. We start by discussing observables that \emph{do not} depend on the presence or absence of conformity. \subsection{Conformity-independent observables} \label{sec:results:subsec:noconf} \noindent The following observables are independent of the level of conformity (i.e., the value of $\rho$) because their definition does not involve a simultaneous determination of central and satellite colour: the fraction of galaxies that are satellites ($f_{\rm sat}$), the average red fractions of all galaxies, centrals and satellites (respectively, $f_{\rm red}$, $f_{\rm red|cen}$ and $f_{\rm red|sat}$), and the satellite quenching efficiency $\varepsilon_{\rm sat} = (f_{\rm red|sat}-f_{\rm red|cen})/(1-f_{\rm red|cen})$. Among these functions, the luminosity dependence of $f_{\rm sat}$ is determined solely by the HOD (which itself is fit using the luminosity dependence of clustering), that of $f_{\rm red}$ is fixed by the double-Gaussian fits\footnote{Note that $f_{\rm red}$ depends on the actual shape of the double-Gaussian, not only on $p({\rm red}|M_r)$.}, and the remaining quantities additionally depend on the choice of satellite red fraction which we discuss next. As mentioned previously, we restrict ourselves to a simple prescription in which $p({\rm red}|{\rm sat},M_r)$ (the probability that $g-r$ for a satellite of luminosity $M_r$ is drawn from the `red' Gaussian) \emph{as well as} the corresponding quantity $p({\rm red}|{\rm cen},M_r)$ for centrals are independent of halo mass, in which case only one of these can be chosen independently. 
By comparing to the measured luminosity dependence of $f_{\rm red|sat}$, we find reasonable agreement using \begin{equation} p({\rm red}|{\rm sat},M_r) = 1.0 - 0.33\left[1+\tanh\left(\frac{M_r+20.25}{2.1}\right)\right]\,, \label{p-redsat} \end{equation} although this is by no means the only function that can do the job. This in turn fixes the luminosity dependence of $f_{\rm red|cen}$ and $\varepsilon_{\rm sat}$. The stellar mass dependence of all the quantities mentioned above is then also completely fixed. The results of the comparison are shown in Figure~\ref{fig:redfrac-avg}, as a function of luminosity in the left panel and stellar mass in the right panel. The open symbols joined by dashed lines show the measurements in the Y07-Lum (left panel) and Y07-Mass catalogs (right panel), while the filled symbols joined by solid lines show the mean over 10 realisations of our mocks. The error bars on the mock data show the r.m.s. scatter (sample variance) of the respective quantities over all realisations, while the error bars on the Y07 data are estimated from 150 bootstrap resamplings. We see that, apart from systematic differences of $\lesssim10\%$, there is good agreement between the mocks and the data. In particular, the mocks correctly reproduce the observed near-independence of $\varepsilon_{\rm sat}$ on stellar mass and luminosity, as well as matching its amplitude. The systematic differences could be due to one or more of the following: residual systematics in the HOD parameters and the mass-to-light fit, the fact that we do not exactly reproduce the observed colour distribution, and our choice of halo mass-independent central and satellite red fractions. In particular, the difference in $f_{\rm red}$ is very likely due to the mismatch in the colour distributions around the ``green valley'' (see Section~\ref{sec:data:subsec:p(g-r|Mr)} and Figure~\ref{fig:gr-comparemock}). Since we match the satellite red fraction $f_{\rm red|sat}$ quite well by construction, the difference in $f_{\rm red}$ then also causes a difference in $f_{\rm red|cen}$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{colmass-cens} \caption{Number counts of centrals as a function of $g-r$ and luminosity \emph{(top panel)} or stellar mass \emph{(bottom panel)} in one of our mocks. We clearly see how a luminosity threshold propagates into a mass incompleteness for red objects. A visual inspection indicates that our mocks should be mass complete for $\log_{10}(m_\star) \gtrsim 9.9$ (vertical dotted line in the bottom panel). The thick yellow lines show \eqn{colorcut-Mr} in the top panel and \eqn{colorcut-sm} in the bottom panel. For comparison, the dashed yellow line in the bottom panel shows the threshold used by \citet{vdb+08}.} \label{fig:hist2d-cen} \end{figure} Finally, an important set of observables that are independent of conformity are the luminosity and stellar mass functions of the mocks. Figure~\ref{fig:massfuncs} compares these with corresponding measurements in, respectively, the Y07-Lum and Y07-Mass catalogs. We also show the fits to SDSS data reported by \cite{blanton+03} for the luminosity function and \cite{peng+12} for the mass function. Overall there is good agreement between the measurements in our mocks, in the data and the corresponding fits from previous work; this serves as a sanity check on our algorithm.
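For reference, the mock-side measurements entering these checks are straightforward. The following sketch combines the stellar-mass step of section~\ref{sec:algo:subsec:sm} with a simple counts-in-a-box abundance estimator; the scattered mass-to-light ratio is clipped to stay positive in this sketch, and all names are hypothetical.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def log_mstar(gr, Mr):
    # (M/L)_r fit with 1-sigma scatter of 0.2 applied linearly and
    # clipped positive here; log m* in h^-2 Msun then follows from
    # log10(M/L) + (Mr - 4.76) / (-2.5)
    ml = 4.66 * gr - 1.36 * gr ** 2 - 1.108
    ml = np.clip(ml + rng.normal(0.0, 0.2, size=np.size(gr)), 1e-3, None)
    return np.log10(ml) + (Mr - 4.76) / (-2.5)

def abundance_function(x, bins, box_size=200.0):
    # counts per unit comoving volume per unit bin width, for a
    # periodic box of side box_size (h^-1 Mpc); x is Mr or log10(m*)
    counts, edges = np.histogram(x, bins=bins)
    phi = counts / (box_size ** 3 * np.diff(edges))
    return 0.5 * (edges[1:] + edges[:-1]), phi
\end{verbatim}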
The fact that our measurements in the Y07-Mass catalog do not quite agree with the fits reported by \citet{peng+12} highlights the fact that the shape and amplitude of the mass function is quite sensitive to the exact definition of stellar mass. From a visual inspection, the mocks appear to be mass complete for $\log_{10}(m_\star) \gtrsim 9.9$, which is the threshold we will use below. For our choice of $h=0.7$ in the $N$-body simulations, this corresponds to $\log_{10}(m_\star/\ensuremath{M_{\odot}}) > 10.2$. The incompleteness at lower masses is because our mock is luminosity complete with $M_r < -19.0$; the colour-dependence of the mass-to-light ratio \eqref{masstolight-fit} then means that we are missing most of the faint red galaxies (and many of the faint blue ones) that would populate the low mass end. This becomes clearer in Figure~\ref{fig:hist2d-cen} which shows number counts of centrals as a function of $g-r$ and luminosity (top panel) or stellar mass (bottom panel) in one of our mocks. The deficit of low-mass red objects is clear in the bottom panel. Similar results hold for the satellites as well. Our choice of mass-completeness threshold is further justified by the fact that only $\sim1\%$ of the galaxies with $\log_{10}(m_\star)>9.9$ belong to the faintest bin $-19.2<M_r<-19.0$, implying that fainter galaxies would contribute negligibly above this mass threshold. \subsection{Effects of correlating galaxy colour and halo concentration} \label{sec:results:subsec:conf} \noindent We now investigate the effects of a non-zero value of $\rho$ in observables that \emph{do} respond to a correlation between central and satellite colours. To start with we simply explore the effects of changing the value of $\rho$, and later motivate a specific value by comparing to the Y07 catalog. \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{redfractions-compare2hconf} \caption{\emph{(Left panel): }Satellite red fraction as a function of satellite stellar mass, split according to whether the associated centrals are red (red points and lines) or blue (blue points and lines). The open red triangles and blue inverted triangles with dotted lines show the result of the algorithm setting $\rho=0.01$, i.e. essentially \emph{no correlation} between galaxy colour and host halo concentration, as is routinely assumed by standard HOD based algorithms. The filled symbols with solid/dashed lines show the result when $\rho=0.99$, i.e. \emph{strong correlation}, in the presence (default model; red triangles and blue inverted triangles with solid lines) or absence (\emph{no-2h} model; red crosses and blue stars with dashed lines) of 2-halo conformity as described in the text. The points show the mean over 10 independent mocks and the error bars the standard error on the mean. \emph{(Right panel): }The red fraction of all galaxies surrounding red or blue ``isolated primaries'' as defined in the text, as a function of spherical distance, colour-coded and formatted as in the left panel, for the same set of 10 mocks used for the respective measurements in the left panel. All objects are required to have $\log_{10}(m_\star) > 9.9$ and we do not restrict the masses of their parent halos. The horizontal cyan line and band respectively show the average all-galaxy red fraction above this stellar mass threshold and its r.m.s. scatter over 10 independent mocks (these values are independent of $\rho$). Note that the scale on the vertical axis differs from that in the left panel. 
For both centrals and satellites, the distinction between red and blue was made using \eqn{colorcut-sm}.} \label{fig:redfrac-2hconf} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{redfractions-comparecc} \caption{Same as Figure~\ref{fig:redfrac-2hconf}, except that the filled red crosses and blue stars joined by dashed lines now show the result for mocks with the default model, but where we set $\rho=0.5$ (``medium'' correlation between colour and concentration). The open triangles with dotted lines and filled triangles with solid lines are the same as in Figure~\ref{fig:redfrac-2hconf}. This plot demonstrates the tunability of our model, which is one of our key results.} \label{fig:redfrac-cc} \end{figure*} The left panel of Figure~\ref{fig:redfrac-2hconf} shows $f_{\rm red|sat}$ as a function of satellite stellar mass, split by whether the corresponding central is red or blue, for two choices of $\rho=0.01,0.99$. For each of these choices, the points show the mean over 10 independent mocks. Since we are interested in understanding these mean trends in this plot, the error bars in both panels show the standard error on the mean (rather than the r.m.s. scatter). The plot clearly shows that a positive correlation goes in the direction of explaining conformity, which has been noted earlier by others as well. The open symbols (red triangles and blue inverted triangles) joined by dotted lines show the result of the algorithm setting $\rho=0.01$, i.e. essentially no correlation, in keeping with what is usually assumed by standard HOD based algorithms. In this case there is no distinction between the two red fractions at the bright end, but fainter satellites with blue centrals are preferentially slightly redder than those with red centrals (a weak `negative conformity'). We have checked that the latter effect is due to averaging over luminosities and completely disappears if we plot results at fixed luminosity instead\footnote{This traces back to our choice of satellite red fraction which is independent of halo mass. Had this not been the case, we would have seen a similar effect in the luminosity-based plot as well, arising in this case from an averaging over halo mass. The effect in the stellar mass plot would now be even more pronounced.} (not shown). The filled symbols with solid/dashed lines show the result when $\rho=0.99$, i.e. strong correlation. In this case satellites with red centrals are clearly significantly redder than satellites with blue centrals. The filled red triangles and blue inverted triangles joined by solid lines show the result for mocks that used our default model of conformity, rank ordering the Lognormal halo concentrations according to the measured concentrations from the simulation in $16$ equi-log-spaced mass bins (see Section~\ref{sec:algo:subsec:ccc}). The red crosses and blue stars with dashed lines show the result for mocks that used the \emph{no-2h} model, with randomly assigned Lognormal concentrations. As expected, the red fractions for these two cases are nearly identical. The right panel of the Figure is formatted identically to the left panel and uses the same set of mocks. 
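Before describing the right panel, we note that the left-panel measurement is easily reproduced from any mock catalog. The sketch below transcribes the stellar-mass colour cut \eqn{colorcut-sm} and splits the satellite red fraction by the colour of each satellite's own central; the array names are hypothetical, with the central's properties repeated per satellite.

\begin{verbatim}
import numpy as np

def is_red(gr, log_mstar):
    # sharp stellar-mass cut: red if g-r > 0.76 + 0.10 (log10 m* - 10)
    return gr > 0.76 + 0.10 * (log_mstar - 10.0)

def sat_red_fraction_split(log_ms_sat, gr_sat, gr_cen, log_ms_cen, bins):
    # gr_cen, log_ms_cen: properties of each satellite's own central,
    # repeated per satellite; empty stellar-mass bins give NaN
    sat_red = is_red(gr_sat, log_ms_sat)
    cen_red = is_red(gr_cen, log_ms_cen)
    out = {}
    for label, mask in (("red cen", cen_red), ("blue cen", ~cen_red)):
        idx = np.digitize(log_ms_sat[mask], bins)
        out[label] = np.array([sat_red[mask][idx == i].mean()
                               for i in range(1, len(bins))])
    return out
\end{verbatim}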
In this case, we show the red fraction of all galaxies with $\log_{10}(m_\star)>9.9$ as a function of their distance $r$ from red or blue ``isolated primaries''; we define the latter as galaxies of mass $m_{\star}$ that do not have any galaxy more massive than $m_{\star}/2$ within a spherical radius of $500$kpc\footnote{This definition is somewhat different from the observationally-motivated one employed by \cite{kauffmann+13}, who used a cylinder in redshift space and projected distance \citep[see also][]{hwv15}. In this paper, however, we do not compare our 2-halo results with observations, and the simpler 3-dimensional definition above suffices to understand various trends in the signal. The signal itself is expected to diminish in strength due to projection effects.}. We find that the galaxies picked by this definition are predominantly centrals, with $\sim10\%$ of isolated primaries being satellites \citep[see also][]{hwv15}. We consider isolated primaries in order to be close to the selection criteria used in recent observational studies \citep[see, e.g.,][]{kauffmann+13}. We investigate the impact of changing the selection criterion in Appendix~\ref{app:conftrends}. The horizontal cyan line and associated band respectively show the mean and r.m.s. scatter over 10 realisations of the average all-galaxy red fraction which is independent of conformity strength. The mocks with $\rho = 0.01$ show nearly identical trends at $r\gtrsim2$Mpc regardless of the colour of the isolated primary, although the red fractions remain above the global value even as far as $10$Mpc from the center. There is a weak, conformity-like signal at small separations, which we have checked is entirely due to averaging over halo mass and disappears when using galaxies in fixed bins of halo mass. Together with the apparent `negative conformity' seen for this set of mocks in the left panel, this reiterates the need to be cautious when interpreting trends in analyses that average over luminosity and halo mass. Unlike the left panel, the two sets of mocks with $\rho = 0.99$ now show distinct trends. In the \emph{no-2h} mocks, the red fractions around blue and red isolated primaries are different until a distance of $\sim4$Mpc from the center (the one around red isolated primaries being higher), beyond which they are identical and also coincide with the ``no conformity'' red fractions associated with $\rho=0.01$. The default mocks, on the other hand, show a similar but substantially larger difference between the red fractions out to $\sim6$Mpc, beyond which they are also nearly identical but substantially larger than the red fractions in the other data sets (see section~\ref{sec:discuss} and Appendix~\ref{app:conftrends} for a discussion of these trends). \begin{figure} \centering \includegraphics[width=0.45\textwidth]{redfractions-conformity} \caption{Comparison of ``conformity red fractions'' $f_{\rm red|sat,rc}$ and $f_{\rm red|sat,bc}$ of satellites having red and blue centrals, respectively, with the red fraction of centrals $f_{\rm red|cen}$, for the Y07-Mass catalog (open symbols joined by dashed lines) and in mocks that used $\rho=0.65$ (filled symbols with solid lines). The red triangles, blue inverted triangles and green circles respectively show $f_{\rm red|sat,rc}$, $f_{\rm red|sat,bc}$ and $f_{\rm red|cen}$ (the last are the same as in Figure~\ref{fig:redfrac-avg}). The error bars on the mock results show the r.m.s. 
fluctuations around the mean value over $10$ realisations, while the error bars on the Y07 data are estimated from 150 bootstrap resamplings. Notice the remarkable similarity between $f_{\rm red|sat,bc}$ and $f_{\rm red|cen}$, especially at small masses in the Y07-Mass catalog.} \label{fig:redfrac-conformity-1h} \end{figure} Figure~\ref{fig:redfrac-cc} shows the same quantities as Figure~\ref{fig:redfrac-2hconf}, except that the filled red crosses and blue stars joined by dashed lines now show the results for default mocks where we set $\rho=0.5$ (medium correlation). This plot shows how the difference between the red fractions around blue and red centrals/isolated primaries changes with group quenching efficiency $\rho$ and demonstrates the tunability of our model, which is one of our key results. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{redfractions-pos-conformity} \caption{Large scale conformity predicted in three configurations of mocks that used $\rho=0.65$ and whose respective 1-halo conformity signals are each consistent with measurements in the Y07 catalog. Filled symbols (red triangles and blue inverted triangles) with solid lines used our default mocks with halo mass-independent satellite and central red fractions at fixed luminosity, in which 2-halo conformity is switched on. Filled symbols (red crosses and blue stars) with dashed lines used the \emph{no-2h} mocks in which 2-halo conformity is switched off. Finally, open symbols (red triangles and blue inverted triangles) with dotted lines used mocks in which 2-halo conformity was switched on and the red fraction of centrals at fixed luminosity had an additional dependence on halo mass as described in Appendix~\ref{app:massdepredfrac}. The error bars on the mock results show the r.m.s. fluctuations around the mean value over $10$ realisations for each configuration. The inset zooms in on the behaviour of the signal in the default mocks at very large separations. Unlike Figures~\ref{fig:redfrac-2hconf} and~\ref{fig:redfrac-cc}, we restrict the isolated primaries to lie in the stellar mass range $10.0 < \log_{10}(m_\star) < 10.5$, while their neighbours are required to have $\log_{10}(m_\star) > 9.9$. The horizontal cyan line and band respectively show the average all-galaxy red fraction above the latter mass threshold and its r.m.s. scatter over 10 independent mocks (these values are independent of $\rho$).} \label{fig:redfrac-conformity-2h} \end{figure} \subsection{Fixing the level of conformity} \label{sec:results:subsec:fixconf} \noindent We notice in the left panel of Figure~\ref{fig:redfrac-cc} that the red fraction $f_{\rm red|sat,bc}$ of satellites with blue centrals is particularly sensitive to the value of $\rho$. This happens, at least in part, due to the inherent asymmetry of satellite colours, most of which are red. Consequently, we can use measurements of 1-halo conformity in the Y07 catalog to fix the value of $\rho$, which can then be used to \emph{predict} the 2-halo conformity signal. By trial and error we have found that setting $\rho$ within $\sim10\%$ of $\rho=0.65$ leads to a behaviour of $f_{\rm red|sat,bc}$ in the mocks that closely resembles the one in the data over the mass range allowed by the mocks. Figure~\ref{fig:redfrac-conformity-1h} shows red fractions averaged over $10$ mocks that used $\rho=0.65$, and we see that both $f_{\rm red|sat,bc}$ and $f_{\rm red|sat,rc}$ agree very well with the Y07 measurements. 
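For completeness, the selection and measurement behind the right-hand panels can be sketched as follows. Positions are taken to be in comoving $h^{-1}{\rm Mpc}$, with $r_{\rm iso}=0.5$ standing in for the $500$kpc isolation aperture (up to factors of $h$); box periodicity is ignored for brevity, and all names are hypothetical.

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def isolated_primaries(pos, log_mstar, r_iso=0.5):
    # keep galaxies with no neighbour more massive than m*/2 within
    # r_iso (3D spheres, not redshift-space cylinders)
    tree = cKDTree(pos)
    iso = np.ones(len(pos), dtype=bool)
    for i, nbrs in enumerate(tree.query_ball_point(pos, r_iso)):
        nbrs = [j for j in nbrs if j != i]
        if nbrs and log_mstar[nbrs].max() > log_mstar[i] - np.log10(2.0):
            iso[i] = False
    return iso

def red_fraction_profile(pos_prim, pos_all, is_red_all, r_edges):
    # red fraction of all galaxies in spherical shells around primaries
    tree = cKDTree(pos_all)
    frac = []
    for r_lo, r_hi in zip(r_edges[:-1], r_edges[1:]):
        shell = []
        for p in pos_prim:
            inner = set(tree.query_ball_point(p, r_lo))
            shell += [j for j in tree.query_ball_point(p, r_hi)
                      if j not in inner]
        shell = np.array(shell, dtype=int)
        frac.append(is_red_all[shell].mean() if shell.size else np.nan)
    return np.array(frac)
\end{verbatim}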
The results of the previous section show that this agreement is independent of whether we use the default or the \emph{no-2h} model. Notice that the red fraction of satellites with blue centrals $f_{\rm red|sat,bc}$ in both data and mocks is remarkably similar to the corresponding $f_{\rm red|cen}$. This is particularly true for the data at low masses, which our current mocks do not reach. Figure~\ref{fig:redfrac-conformity-2h} shows the large scale conformity signal predicted in the mocks having $\rho=0.65$, using both the default and the \emph{no-2h} models. To assess the effect of our assumption of mass-independent satellite and central red fractions, we also show the corresponding signal in mocks having 2-halo conformity, but in which the red fraction of centrals is mass-\emph{dependent} as described in Appendix~\ref{app:massdepredfrac}. These were constructed to match the conformity-independent observables discussed earlier, as well as the 1-halo conformity signal, for which it suffices to use $\rho=0.65$ again (see Figure~\ref{fig:redfrac-conformity-mdep-1h}). In each case, we restrict the stellar mass of the isolated primaries to lie in the range $10.0 < \log_{10}(m_\star) < 10.5$. We see that the overall signal in all three mocks is very similar at scales $\lesssim4$Mpc. At larger scales, the mocks with a halo mass dependent central red fraction show a slightly higher signal than in our default mocks. The large scale signal in the \emph{no-2h} mocks, on the other hand, is noticeably smaller than in the default. The inset zooms in on the behaviour of the signal in the default mocks at very large separations. The error bars depict the r.m.s. fluctuations around the mean over $10$ realisations for each configuration. Figure~\ref{fig:redfrac-conformity-2h-mhbins} shows a break-down of these large scale trends as a function of halo mass. Each panel shows the results for our default and \emph{no-2h} mocks (formatted identically to Figure~\ref{fig:redfrac-conformity-2h} and with the same restrictions on stellar masses of the isolated primaries and their neighbours), with the top left panel showing the same configuration as Figure~\ref{fig:redfrac-conformity-2h} and the top right, bottom left and bottom right panels showing bins of successively larger halo mass. Each panel shows the average all-galaxy red fraction in the respective halo mass bin, and its r.m.s. scatter, as the horizontal cyan line and band, respectively. We clearly see that the large difference between red fractions at separations $\lesssim4$Mpc arises primarily from large halo masses, while the distinction between the default and \emph{no-2h} signal at $\gtrsim8$Mpc only emerges at the smallest halo masses. We discuss these results in section~\ref{sec:discuss} below. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{redfractions-compare2hconf-halomassbins} \caption{Large scale conformity signal for $\rho=0.65$, broken down by halo mass as indicated. Note that the mass restrictions apply to the parent halos of both the isolated primary and its neighbours. Each panel is formatted identically to Figure~\ref{fig:redfrac-conformity-2h} and shows the results for the default and \emph{no-2h} mocks. 
See text for a discussion.} \label{fig:redfrac-conformity-2h-mhbins} \end{figure*} \section{Discussion} \label{sec:discuss} \noindent The 1-halo conformity trends seen in the left panels of Figures~\ref{fig:redfrac-2hconf} and~\ref{fig:redfrac-cc}, and in the mock results in Figure~\ref{fig:redfrac-conformity-1h} are a straightforward consequence of our choice of implementation and the definition of the group quenching efficiency $\rho$ as discussed in section~\ref{sec:algo:subsec:ccc}. As $\rho$ increases and approaches unity in our model, red (blue) satellites preferentially live in halos hosting red (blue) centrals. An interesting aspect of Figure~\ref{fig:redfrac-conformity-1h} is the remarkable similarity of the red fraction of satellites with blue centrals and the red fraction of centrals at low stellar mass. A similar result was obtained by \citet{phillips+14}, who found that the SFRs of satellites with star forming centrals are essentially the same as those of isolated galaxies with similar stellar mass. This would imply that the quenching efficiencies $\varepsilon_{\rm sat|SFc}$ of the satellites with star forming centrals are consistent with being zero. On the other hand, \citet{klwk14} reported values of $\varepsilon_{\rm sat|SFc}$ different from zero. This apparent discrepancy is possibly due to differences in the sample definitions in these two studies; in particular, unlike \citet{phillips+14}, \citet{klwk14} limited their analysis to satellites residing in groups with at least $3$ members with $m_\star > 10^{10}\ensuremath{M_{\odot}}$, thereby removing many low mass groups from the sample. As \citet{klwk14} also report a strong positive dependence of $\varepsilon_{\rm sat}$ on halo mass, it is natural to expect that the quenching efficiency in a sample biased toward more massive haloes would be nonzero. It will be very interesting to extend our mocks to lower stellar masses using a higher resolution simulation and compare the resulting 1-halo conformity trends -- particularly those at fixed group richness and halo mass -- with the data, but this is beyond the scope of the present work. Turning to the large scale trends of red fractions surrounding isolated primaries, we encounter an even richer picture. It has been argued in the recent literature that large scale differences between red fractions or mean star formation rates surrounding blue and red isolated primaries must arise from environmental correlations across different halos \citep{kauffmann+13} and are therefore a signal of 2-halo conformity \citep{hwv15}. Our results in Figure~\ref{fig:redfrac-2hconf} and especially Figure~\ref{fig:redfrac-conformity-2h}, however, show that an alternative explanation is also possible. Figure~\ref{fig:redfrac-conformity-2h} used three different sets of mocks, \emph{all} of which are constructed to give a 1-halo conformity signal consistent with measurements in the Y07 catalog. Only two of these mocks have a genuine 2-halo conformity built in, however; the third contains randomly assigned halo concentrations which are not correlated with the halo environment. This third set of \emph{no-2h} mocks serves as a useful toy example of the possibility that conformity might arise due to some property of halos that \emph{does not} exhibit large scale environmental effects (see the discussion in the Introduction). The large scale conformity signal for all three mocks is \emph{very similar at scales} $\lesssim4$Mpc. 
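The \emph{no-2h} construction lends itself to an equally compact description: within narrow bins of halo mass, concentrations are randomly permuted among halos, which preserves the distribution $p(c|m)$ exactly while erasing any correlation between concentration and environment. A minimal sketch (our variable names; the number of mass bins is an arbitrary choice) is:
\begin{verbatim}
import numpy as np

def shuffle_concentrations(log_mhalo, conc, nbins=100, seed=42):
    # Permute halo concentrations among halos of (nearly) fixed
    # mass. This keeps p(c|m) intact but destroys the
    # concentration-environment correlation that sources
    # 2-halo conformity.
    rng = np.random.default_rng(seed)
    edges = np.quantile(log_mhalo, np.linspace(0.0, 1.0, nbins + 1))
    idx = np.clip(np.searchsorted(edges, log_mhalo) - 1, 0, nbins - 1)
    out = conc.copy()
    for b in range(nbins):
        sel = np.flatnonzero(idx == b)
        out[sel] = rng.permutation(conc[sel])
    return out
\end{verbatim}
Galaxy colours are then assigned using the shuffled concentrations, so that the 1-halo signal, which depends only on $p(c|m)$, is unchanged while the genuinely large scale part of the signal is suppressed.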
Figure~\ref{fig:redfrac-conformity-2h-mhbins} shows that the difference of red fractions at these scales is dominated by halo masses $\log_{10}(m)\gtrsim13.25$ (bottom right panel) -- for which it is consistent with being a 1-halo effect\footnote{For comparison, a cluster-sized halo of mass $10^{14}\ensuremath{h^{-1}M_{\odot}}$ has a radius $R_{\rm 200b}\simeq1.7$Mpc for our cosmology, and the largest halos in our simulations reach masses of $\sim10^{14.8}$-$10^{15.2}\ensuremath{h^{-1}M_{\odot}}$.} -- and has a smaller and noisier contribution from small halo masses (top right panel, see also Appendix~\ref{app:conftrends}). This means that, at these scales, we could easily mistake a purely 1-halo effect for genuine 2-halo conformity.\footnote{Figure~\ref{fig:redfrac-conformity-2h} also shows that the large scale conformity signal appears to be robust against a mass dependence of the central red fraction, showing a slight increase on average as compared to the default mocks.} Similar conclusions can be drawn by comparing the $\rho=0.99$ measurements in the right panel of Figure~\ref{fig:redfrac-2hconf} with those in the mocks with $\rho=0.01$ or no conformity. The $\rho=0.99$ \emph{no-2h} measurements are identical to the $\rho=0.01$ measurements at large scales, which is expected since neither of these mocks has any 2-halo conformity. The measurements in the two $\rho=0.99$ mocks, on the other hand, are very similar at scales $\lesssim4$Mpc. Taken together, this shows that the conformity signal at $\lesssim4$Mpc in the $\rho=0.99$ mocks is almost entirely a 1-halo effect (see Appendix~\ref{app:conftrends} for further discussion). From the observational point of view, halo mass-dependencies can in principle be controlled by performing tests at, e.g., fixed group richness \citep{koester+07a,ssm07,skibba09}, stellar mass of the central \citep{more+11} or any other observable property that correlates well with halo mass \citep{lys15}. Notice, however, that all the measurements in Figures~\ref{fig:redfrac-conformity-2h} and~\ref{fig:redfrac-conformity-2h-mhbins} use isolated primaries with stellar masses $10.0 < \log_{10}(m_\star) < 10.5$ and still cannot distinguish between the default and \emph{no-2h} models at scales $\lesssim4$Mpc, unless the halo mass is explicitly restricted to small values. We have also checked that the relative behaviour of our models at these scales remains unchanged when using primaries of different stellar mass\footnote{Figure~\ref{fig:redfrac-conformity-2h-smbins} in the Appendix shows that the signal, particularly at large scales, is essentially independent of the stellar mass of the primary; we attribute this to a large scatter in the distribution of halo masses at fixed stellar mass (Figure~\ref{fig:SHMR}).}. This is unfortunate, since it very likely means that one cannot conclusively argue that the conformity measured by \citet{kauffmann+13} at projected scales $\lesssim4$Mpc in similar mass bins is evidence of galaxy assembly bias. Projection effects are unlikely to alter this conclusion. It will be useful to repeat such tests at fixed group richness, which might be a better indicator of halo mass. For now, it is very interesting to note that there \emph{is}, in fact, a signal which diminishes in the absence of genuine 2-halo conformity and is also robust against the inclusion of halo mass dependence in the red fractions. 
This is the fact that, in the presence of 2-halo conformity, the red fractions at \emph{even larger} scales ($\gtrsim8$Mpc) surrounding both red and blue isolated objects \emph{remain above the global average value} by a small but statistically significant amount, showing nearly identical values and a trend that is relatively insensitive to the halo mass dependence of the central red fraction. When genuine 2-halo conformity is \emph{absent}, the red fractions at these scales are discernibly closer to the global average; as noted earlier, they are also identical to the ``no conformity'' red fractions at these scales. These trends can be seen in Figures~\ref{fig:redfrac-2hconf},~\ref{fig:redfrac-conformity-2h} and~\ref{fig:redfrac-trugrps}, with the elevation compared to the global value seen clearly in the zoom-in inset panel of Figure~\ref{fig:redfrac-conformity-2h}. Figure~\ref{fig:redfrac-conformity-2h-mhbins} shows that this signal is absent at large halo masses and only emerges in the range $11.25 < \log_{10}(m) < 12.25$. This dependence on halo mass is exactly what one expects from a signal driven by halo assembly bias; in fact, one can also derive a more detailed understanding of the \emph{nature} of the signal using the Halo Model. In Appendix~\ref{app:conftrends}, e.g., we argue that the red fractions at very large separations from central galaxies are determined by ratios of cross-correlation functions of red/blue galaxies with red/blue centrals, in which the concentration-dependence of halo assembly bias couples with that of galaxy colour introduced in our default model and leads to an elevation that is qualitatively similar to what we see in Figure~\ref{fig:redfrac-conformity-2h}. The fact that the signal is only of order a few percent is consistent with halo assembly bias being a weak effect. Our model assumes that central galaxies are always the brightest members of their respective groups, and we have also imposed this condition when defining centrals in the Y07 catalog. A number of studies indicate that this is not a good assumption for a substantial fraction of groups \citep[see, e.g.,][]{skibba+11,masaki+13,hmts13}. Since galactic conformity manifests as a \emph{similarity} of galaxy colours in groups, any errors in classifying galaxies as centrals and satellites, provided they have a minimal impact on overall group membership, will tend to make these populations similar to each other and are therefore likely to \emph{induce} conformity-like effects \citep{campbell+15}. Consequently, it is possible that our analysis in Section~\ref{sec:results:subsec:fixconf} somewhat overestimates the strength of conformity (i.e., the value of $\rho$). However, the results of Section~\ref{sec:results:subsec:conf} suggest that our conclusions regarding the \emph{relative} strengths of the conformity signal at various scales are qualitatively robust to such errors. It will nevertheless be interesting to revisit this issue in future work using more refined criteria for identifying centrals. A proper comparison with data will require accounting for projection effects both in the red fraction measurements and in the definition of isolated primaries themselves. Although these will dilute the signal, its strength should also increase upon including galaxies with smaller stellar masses (and stronger assembly bias) than we could access in our mocks, and overall we expect that the difference between large scale 1-halo and genuine 2-halo effects will remain measurable. 
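The cross-correlation argument referred to above can be made explicit with a back-of-the-envelope calculation (the notation here is ours; the careful discussion is in Appendix~\ref{app:conftrends}). The red fraction at separation $r$ from centrals of colour $c$ is a ratio of conditional pair densities,
\begin{equation}
f_{\rm red}(r|c) = \bar f_{\rm red}\,\frac{1+\xi_{{\rm red}\times c}(r)}{1+\xi_{{\rm all}\times c}(r)} \simeq \bar f_{\rm red}\left[1+\left(b_{\rm red}-b_{\rm all}\right)b_{c}\,\xi_{\rm m}(r)\right],
\end{equation}
where the approximation assumes linear bias and $\xi\ll1$ at large $r$. Since red galaxies are more strongly clustered than the population as a whole ($b_{\rm red}>b_{\rm all}$) and $b_c>0$ for centrals of either colour, the red fraction stays \emph{above} the global average around both red and blue primaries, with halo assembly bias entering through the concentration dependence of $b_c$; this is qualitatively the behaviour seen in the inset of Figure~\ref{fig:redfrac-conformity-2h}.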
Additionally, we expect that simultaneous measurements of 1-halo and large scale conformity together with colour-dependent clustering will be required to break the weak degeneracy between halo mass dependence of the central red fraction and the level of conformity. Such a joint analysis will also be important from the point of view of obtaining unbiased HOD fits that account for galactic conformity \citep{zhv14}. \section{Summary and Conclusions} \label{sec:conclude} \noindent We have introduced a flexible model of galactic conformity within the Halo Occupation Distribution (HOD) framework by modifying and extending the algorithm described by \citet[][]{ss09}. By construction, our mock galaxy catalogs show good agreement with measurements of conformity-\emph{independent} variables in the \citet[][Y07]{yang+07} group catalog based on DR7 of the SDSS. These variables include the satellite fraction, the red fractions of all galaxies, centrals and satellites, and the satellite quenching efficiency $\varepsilon_{\rm sat}$ (Figure~\ref{fig:redfrac-avg}), as well as the all-galaxy luminosity and stellar mass functions (Figure~\ref{fig:massfuncs}). Galaxy luminosities are assigned using the HOD \citep[we use the calibration by][]{zehavi+11}, colours are assigned using colour-luminosity fits to SDSS data (Figure~\ref{fig:gr-comparemock} and equations~\ref{dbl-Gauss-fits}) and stellar masses are assigned using a colour-dependent mass-to-light ratio (equation~\ref{masstolight-fit}), also fit to SDSS data. Our mock catalogs are luminosity-complete for $M_r<-19.0$ and mass-complete for $\log_{10}(m_\star)\gtrsim9.9$ (Figures~\ref{fig:massfuncs} and~\ref{fig:hist2d-cen}). Our algorithm introduces conformity between the colours of the central and satellites of a group (a 1-halo effect) by using a tunable group quenching efficiency $\rho$ to correlate these colours with the concentration of the parent dark halo of the group. Halo concentration (which has a scatter at fixed halo mass) is therefore identified as the ``hidden variable'' in our model which causes galactic conformity even in halos of fixed mass. Halo assembly bias then leads to a 2-halo effect at very large scales (this is our default model), which we can also switch off by randomizing halo concentrations among halos of fixed mass (we call this the \emph{no-2h} model). The latter is a useful toy example in which conformity arises due to some unspecified property of halos (e.g., this might be a coupling between star formation activity and the hot gas content in a halo) that \emph{does not} exhibit large scale environmental effects. We have performed various tests to study the nature of the signal, including changing $\rho$ (Figure~\ref{fig:redfrac-cc}) in the presence or absence of 2-halo conformity (Figure~\ref{fig:redfrac-2hconf}) for different choices of isolation criteria -- we used isolated primaries (Figures~\ref{fig:redfrac-2hconf} and~\ref{fig:redfrac-cc}) as well as group centrals (Figure~\ref{fig:redfrac-trugrps}). Our main results can be summarized as follows. \begin{itemize} \item We find that setting $\rho=0.65$ gives a 1-halo conformity signal in the mocks which agrees well with the corresponding signal in the Y07 catalog (Figure~\ref{fig:redfrac-conformity-1h}). The signal manifests as a difference between the red fractions of satellites in groups with red and blue centrals. 
Additionally, in the Y07 catalog, the red fraction of satellites with blue centrals is remarkably similar to the red fraction of centrals at all masses; our mocks correctly reproduce this trend down to their completeness limit. \item The above value of $\rho$ also leads to a signal at large scales in the mocks (Figure~\ref{fig:redfrac-conformity-2h}); specifically, we see a significant difference between the galaxy red fractions surrounding red and blue isolated primaries out to separations $\lesssim4$Mpc. Interestingly, we find that this signal is dominated by the 1-halo contribution of large halos with $\log_{10}(m) > 13.25$ (Figure~\ref{fig:redfrac-conformity-2h-mhbins}), and \emph{persists even when we switch off 2-halo conformity} in our \emph{no-2h} model. This implies that the observation of such a difference \citep[e.g.][]{kauffmann+13} is not conclusive evidence that galactic conformity arises from halo assembly bias, since it could also arise from 1-halo effects ``leaking'' to large scales due to averaging over halo mass (section~\ref{sec:discuss}). \item At even larger scales ($\gtrsim8$Mpc in Figure~\ref{fig:redfrac-conformity-2h}), the signals with and without 2-halo conformity \emph{do} become distinct, with the genuine 2-halo signal remaining \emph{significantly elevated} compared to the global average red fraction out to separations in excess of $15$Mpc (at least in the 3-d case that we consider). This 2-halo signal is absent at large halo masses and only emerges at smaller masses $11.25 < \log_{10}(m) < 12.25$ (Figure~\ref{fig:redfrac-conformity-2h-mhbins}), being qualitatively consistent with expectations from halo assembly bias (section~\ref{sec:discuss} and Appendix~\ref{app:conftrends}). We therefore suggest that this elevation compared to the global average could be a more robust indicator of large scale galaxy assembly bias than is the difference between red fractions at scales $\lesssim4$Mpc. \end{itemize} We end with a brief discussion of future extensions of our work. In a forthcoming paper (Kova\v c et al., in preparation), we will compare the large scale signal in our mocks, after accounting for projection effects, with a corresponding measurement in the Y07 catalog to determine whether the observed large scale conformity is due to galaxy assembly bias or is a residual of some other 1-halo process. Tests at fixed group richness will also be useful in answering this question. It will also be interesting to explore the use of galaxy lensing and/or traditional correlation function analyses to validate the connection between galaxy colours and host halo concentrations assumed in this work, particularly to check for consistency with the value of $\rho$ presented above. In another publication (Pahwa et al., in preparation), we will present the analytical formalism for including conformity in the HOD framework; we will show that this requires straightforward modifications in existing HOD pipelines. Our algorithm can also be extended to include radial profiles for satellite velocity dispersions and colours \citep{prescott+11,hartley+15}, as well as concentration-dependent satellite abundances \citep{wechsler+06,mww15}. The signal at high redshift is interesting too; this could be modelled using analytical prescriptions tuned to match high-redshift luminosity function and clustering data \citep{ttc13,bwc13,jss14}. 
Finally, it will be extremely interesting to extend our algorithm to lower stellar masses using accurate faint-end ($M_{r,{\rm max}}\sim-16.0$) HOD fits and higher resolution $N$-body simulations. We estimate that a simulation with $1024^3$ particles in a $(100\ensuremath{h^{-1}{\rm Mpc}})^3$ box will allow us to create catalogs that are mass-complete for $\log_{10}(m_\star)\gtrsim8.7$; comparing these to data will provide us with stringent tests on the nature of the conformity signal. \section*{Acknowledgements} We thank J. Woo and M. Shirazi for help with the Kcorrect tools, and an anonymous referee for an insightful report that has helped improve the presentation. We are grateful to the SDSS collaboration for publicly releasing their data set, and Yang et al. for releasing their updated group catalog. We thank O. Hahn, V. Springel and P. Behroozi for making their codes publicly available. We gratefully acknowledge computing facilities at ETH, Z\"urich and IUCAA, Pune. Our mock catalogs are available upon request.
\section{Introduction} In a joint paper with Albert Schwarz \cite{ps2}, we gave definitions of Hochschild cohomology and cyclic cohomology of an \hbox{$A_\infty$}\ algebra, and showed that these cohomology theories classified the infinitesimal deformations of the \hbox{$A_\infty$}\ structure and those deformations preserving an invariant inner product. Then we showed that the Hochschild cohomology of an associative algebra classifies the deformations of the algebra into an \hbox{$A_\infty$}\ algebra, and the cyclic cohomology of an algebra with an invariant inner product classifies the deformations of the algebra into an \hbox{$A_\infty$}\ algebra preserving the inner product. We then applied these results to show that cyclic cocycles of an associative algebra determine homology cycles in the complex of metric ribbon graphs. The original purpose of this paper was to apply the same constructions to \mbox{$L_\infty$}\ algebras, and use the results to obtain homology cycles in another graph complex, that of ordinary metric graphs. However, in the preparation of the paper, I found that many of the notions which are needed to explain the results are not easy to find in the literature. Thus, to make the article more self-contained, I decided to include definitions of cyclic cohomology of Lie algebras, cohomology of \hbox{$\Z_2$}-graded algebras, and coderivations of the tensor, exterior and symmetric coalgebras. It also became apparent that treatment of the cohomology of \hbox{$A_\infty$}\ algebras from the perspective of codifferentials on the tensor coalgebra was useful; we avoided this description in the joint article for simplicity, but we missed some important ideas because of this fact. {}From the coalgebra point of view, the corresponding theory of \mbox{$L_\infty$}\ algebras is seen to be closely analogous to the theory of \hbox{$A_\infty$}\ algebras, so the formulation and proofs of the results about the \mbox{$L_\infty$}\ case are given near the end of the text. The ideas in this text lead immediately to a simple formulation of the cycle in the complex of metric ribbon graphs associated to an \hbox{$A_\infty$}\ algebra with an invariant inner product. This same method can be applied to show that \mbox{$L_\infty$}\ algebras with an invariant inner product give rise to a cycle in the homology of the complex of metric ordinary graphs. These results will be presented in a paper to follow this. The notion of an \hbox{$A_\infty$}\ algebra, also called a strongly homotopy associative algebra, was introduced by J. Stasheff in \cite{sta1,sta2}, and is a generalization of an associative algebra. From a certain point of view, an associative algebra is simply a special case of a codifferential on the tensor coalgebra of a vector space. An \hbox{$A_\infty$}\ algebra is given by taking an arbitrary codifferential; in particular, associative algebras and differential graded associative algebras are examples of \hbox{$A_\infty$}\ algebras. \mbox{$L_\infty$}\ algebras, also called strongly homotopy Lie algebras, first appeared in \cite{ss}, and are generalizations of Lie algebras. A Lie algebra can be viewed as simply a special case of a codifferential on the exterior coalgebra of a vector space, and \mbox{$L_\infty$}\ algebras are simply arbitrary codifferentials on this coalgebra. A bracket structure was introduced on the space of cochains of an associative algebra by M. Gerstenhaber in \cite{gers}. 
{}From the coalgebra point of view, the Gerstenhaber bracket turns out to be simply the bracket of coderivations. The space of cochains of a Lie algebra with coefficients in the adjoint representation also has a natural bracket, which is the bracket of coderivations. In this paper, we generalize these brackets to \hbox{$A_\infty$}\ and \mbox{$L_\infty$}\ algebras, and define a bracket for cyclic cohomology when there is an invariant inner product. In our considerations, we shall be interested in \hbox{$\Z_2$}-graded spaces, but we should point out that all the results hold in the \mbox{$\Bbb Z$}-graded case as well. \hbox{$A_\infty$}\ algebras were first defined as \mbox{$\Bbb Z$}-graded objects, but for the applications we have in mind, the \hbox{$\Z_2$}-grading is more appropriate, and the generalization of the results here to the \mbox{$\Bbb Z$}-graded case is straightforward. We shall find it necessary to consider the parity reversion of a \hbox{$\Z_2$}-graded space. This is the same space with the parity of elements reversed. (In the \mbox{$\Bbb Z$}-graded case, the corresponding notion is that of suspension.) There is a natural isomorphism between the tensor coalgebra of a \hbox{$\Z_2$}-graded space and the tensor coalgebra of its parity reversion. But in the case of the exterior coalgebra, the isomorphism is to the {\em symmetric} coalgebra of the parity reversion, a subtle point that is not clarified very well in the literature. A notion that will play a crucial role in what follows is that of a grading group. An abelian group $G$ is said to be a grading group if it possesses a symmetric \hbox{$\Z_2$}-valued bilinear form $\left<\cdot,\cdot\right>$. Any abelian group with a subgroup of index 2 possesses a natural grading form, but this is not always the form which we shall need to consider. An element $g$ of $G$ is called odd if $\ip gg=1$. A grading group with a nontrivial inner product is called good if $\ip gh=1$ whenever $g$ and $h$ are both odd. Groups equipped with the natural inner product induced by a subgroup of index 2 are good, and these include both \hbox{$\Z_2$}\ and \mbox{$\Bbb Z$}. If $G$ and $H$ are grading groups, then $G\times H$ has an induced inner product, given by $\ip{(g,g')}{(h,h')}=\ip gh+\ip{g'}{h'}$. But $G\times H$ is never good when $G$ and $H$ are good. Now, if $V$ is a $G$-graded vector space, then one can define the symmetric and exterior algebras as quotients of the tensor algebra $T(V)$. The symmetric algebra is $G$-graded commutative, but the exterior algebra is not. On the other hand, the tensor, symmetric and exterior algebras are also graded by $G\times\mbox{$\Bbb Z$}$, and with respect to the induced inner product on $G\times\mbox{$\Bbb Z$}$, the exterior algebra is graded commutative. Thus the grading group associated to the exterior algebra is not good. The consequences of this fact play an important role in the theory of \hbox{$A_\infty$}\ and \mbox{$L_\infty$}\ algebras. The organization of the paper is as follows. In section \ref{sect 1} we recall the definition of the coboundary operator for a Lie algebra, give a definition of cyclic cochain, define cyclic cohomology for a Lie algebra, and relate it to deformations preserving an invariant inner product. Section \ref{sect 2} extends these notions to the case of a \hbox{$\Z_2$}-graded Lie algebra. The main purpose of the first two sections is to present the formulas for comparison to the later generalized constructions. 
In section \ref{sect 3} we give definitions of the exterior and symmetric algebras, and fix conventions we use for parity and bidegree, and the inner products on the grading groups. Section \ref{sect 4} explains the tensor, exterior and symmetric coalgebra structures, and their natural grading groups. Section \ref{sect 5} discusses coderivations and codifferentials of these coalgebras, and the dependence on whether the grading group is \hbox{$\Z_2$}\ or \hbox{$\Z_2\times\Z$}. Section \ref{sect 6} examines the duality between the tensor coalgebra of a space with \hbox{$\Z_2\times\Z$}-grading, and the tensor coalgebra of the parity reversion of the space with the \hbox{$\Z_2$}-grading. The notion of \hbox{$A_\infty$}\ algebra is introduced, the Gerstenhaber bracket is defined, and it is used to define cohomology of an \hbox{$A_\infty$}\ algebra. Invariant inner products are defined, and the bracket of cyclic cochains is used to define the coboundary of cyclic cohomology. The bracket structure on cyclic cochains depends on the inner product, but as usual for cyclic cohomology, the coboundary does not depend on it. Finally, in section \ref{sect 7} the duality between the exterior coalgebra of a space with the \hbox{$\Z_2\times\Z$}-grading, and the symmetric coalgebra of the parity reversion of the space with the \hbox{$\Z_2$}-grading is used to give a definition of an \mbox{$L_\infty$}\ algebra. The results of the previous section are extended to the \mbox{$L_\infty$}\ algebra case. \section{Cohomology of Lie Algebras}\label{sect 1} In this section we recall the definition of the cohomology of a Lie algebra with coefficients in a module, and relate the cohomology of the Lie algebra with coefficients in the adjoint representation to the theory of deformations of the Lie algebra structure. Then we discuss the theory of deformations of a Lie algebra preserving an invariant inner product. Let us suppose that $V$ is a Lie algebra over a field \mbox{\bf k}, with bracket $\left[\cdot,\cdot\right]$. The antisymmetry of the bracket means that the bracket is a linear map $\bigwedge^2\mbox{V}\rightarrow \mbox{V}$. Let $M$ be a $\mbox{V}$ module, and let $C^n(\mbox{V},M)=\mbox{\rm Hom}(\bigwedge^n \mbox{V},M)$ be the space of antisymmetric $n$-multilinear functions on $\mbox{V}$ with values in $M$, which we will call the module of cochains of degree $n$ on $\mbox{V}$ with values in $M$. The Lie algebra coboundary operator $d:C^n(\mbox{V},M)\rightarrow C^{n+1}(\mbox{V},M)$ is defined by \begin{multline} d\varphi(v_1,\cdots, v_{n+1})=\\ \sum_{1\le i<j\le n+1}\s{i+j-1}\varphi([v_i,v_j],v_1,\cdots, \hat v_i,\cdots,\hat v_j,\cdots, v_{n+1})\\ + \sum_{1\le i\le n+1}\s iv_i\cdot\varphi(v_1,\cdots, \hat v_i,\cdots, v_{n+1}) \end{multline} Then \begin{equation} H^n(\mbox{V},M)=\ker(d:C^n(\mbox{V},M)\rightarrow C^{n+1}(\mbox{V},M))/\hbox{im}(d:C^{n-1}(\mbox{V},M)\rightarrow C^n(\mbox{V},M)) \end{equation} is the Lie algebra cohomology of $\mbox{V}$ with coefficients in $M$. When $M=\mbox{V}$, and the action of $\mbox{V}$ on itself is the adjoint action, then we denote $C^n(\mbox{V},\mbox{V})$ as simply $C^n(\mbox{V})$, and similarly $H^n(\mbox{V},\mbox{V})$ is denoted by $H^n(\mbox{V})$. The connection between the cohomology of the Lie algebra and (infinitesimal) deformations of $\mbox{V}$ is given by $H^2(\mbox{V})$. 
If we denote the bracket in $\mbox{V}$ by $l$, and an infinitesimally deformed bracket by $l_t=l+t\varphi$, with $t^2=0$, then the map $\varphi:\bigwedge^2 \mbox{V}\rightarrow \mbox{V}$ is a cocycle, and the trivial deformations are coboundaries. For, by the Jacobi identity, we have \begin{equation} l_t(v_1,l_t(v_2,v_3))=l_t(l_t(v_1,v_2),v_3)+l_t(v_2,l_t(v_1,v_3)) \end{equation} Expanding the expression above, we determine that \begin{multline} [v_1,\varphi(v_2,v_3)]+\varphi(v_1,[v_2,v_3])= [\varphi(v_1,v_2),v_3]+\varphi([v_1,v_2],v_3)\\ +[v_2,\varphi(v_1,v_3)]+\varphi(v_2,[v_1,v_3]), \end{multline} which can be expressed as the condition $d\varphi=0$. On the other hand, the notion of a trivial deformation is that $\mbox{V}$ with the new bracket is isomorphic to the original bracket structure. This means that there is a linear bijection $\rho_t:\mbox{V}\rightarrow \mbox{V}$ such that $l_t(\rho_t(v_1),\rho_t(v_2))=\rho_t([v_1,v_2])$. We can express $\rho_t=I+t\lambda$, where $\lambda:\mbox{V}\rightarrow \mbox{V}$ is a linear map. Then \begin{equation} l_t(v_1,v_2)=l(v_1,v_2)+t(d\lambda)(v_1,v_2). \end{equation} Thus the trivial deformations are precisely those which are given by coboundaries. Next, consider an invariant inner product on $\mbox{V}$, by which we mean a non-degenerate symmetric bilinear form $\ip..:\mbox{V}\bigotimes \mbox{V}\rightarrow\mbox{\bf k}$ which satisfies \begin{equation} \ip{[v_1,v_2]}{v_3}=\ip{v_1}{[v_2,v_3]}. \end{equation} Note that for an invariant inner product, the tensor $\hbox{$\tilde l$}$ given by \begin{equation} \hbox{$\tilde l$}(v_1,v_2,v_3)=\ip{[v_1,v_2]}{v_3} \end{equation} is also antisymmetric, so that $\hbox{$\tilde l$}\in\mbox{\rm Hom}(\bigwedge^3 \mbox{V},\mbox{\bf k})$. We also note that $$\hbox{$\tilde l$}(v_1,v_2,v_3)=\hbox{$\tilde l$}(v_3,v_1,v_2),$$ so that $\hbox{$\tilde l$}$ is invariant under cyclic permutations of $v_1$, $v_2$, $v_3$. We are interested in deformations of $\mbox{V}$ which preserve this inner product, and these deformations are given by $H^3(\mbox{V},\mbox{\bf k})$, the cohomology of $\mbox{V}$ with trivial coefficients. To see this connection, we first define an element $\varphi\in C^n(\mbox{V})$ to be cyclic with respect to the inner product if \begin{equation} \ip{\varphi(v_1,\cdots, v_n)}{v_{n+1}}=\s{n}\ip{v_1}{\varphi(v_2,\cdots, v_{n+1})} \end{equation} Then it is easy to see that $\varphi$ is cyclic iff the map $\hbox{$\tilde \ph$}:\mbox{V}^{n+1}\rightarrow \mbox{\bf k}$ given by \begin{equation} \hbox{$\tilde \ph$}(v_1,\cdots, v_{n+1})= \ip{\varphi(v_1,\cdots, v_n)}{v_{n+1}} \end{equation} is antisymmetric, in other words, $\hbox{$\tilde \ph$}\in\mbox{\rm Hom}(\bigwedge^{n+1}\mbox{V},\mbox{\bf k})$. The term cyclic here is used to express the fact that $\hbox{$\tilde \ph$}$ is cyclic in the sense that \begin{equation} \hbox{$\tilde \ph$}(v_1,\cdots, v_{n+1})=\s{n}\hbox{$\tilde \ph$}(v_{n+1},v_1,\cdots, v_n), \end{equation} which holds for any antisymmetric form. Since the inner product is non-degenerate, the map $\varphi\mapsto\tilde\varphi$ is an isomorphism between the subspace $CC^n(\mbox{V})$ consisting of cyclic cochains, and $C^{n+1}(\mbox{V},\mbox{\bf k})$. 
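(A familiar example to keep in mind: for a semisimple Lie algebra over a field of characteristic zero, the Killing form
\begin{equation}
\ip{x}{y}=\operatorname{tr}(\operatorname{ad} x\circ\operatorname{ad} y)
\end{equation}
is a non-degenerate symmetric bilinear form, and its invariance follows from $\operatorname{ad}[x,y]=[\operatorname{ad} x,\operatorname{ad} y]$ together with the symmetry of the trace. With respect to this inner product the bracket $l$ itself is a cyclic element of $C^2(\mbox{V})$, whose associated tensor is the $\hbox{$\tilde l$}$ considered above.)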
We shall see that $d\varphi$ is cyclic when $\varphi$ is, so that we can define the cyclic cohomology of the Lie algebra to be \begin{equation} HC^n(\mbox{V})=\ker(d:CC^n(\mbox{V})\rightarrow CC^{n+1}(\mbox{V}))/\hbox{im}(d:CC^{n-1}(\mbox{V})\rightarrow CC^n(\mbox{V})), \end{equation} We shall also show that the isomorphism between $CC^n(\mbox{V})$ and $C^{n+1}(\mbox{V},\mbox{\bf k})$ commutes with the coboundary operator, so that the cohomology of the complex of cyclic cochains coincides with the cohomology of the Lie algebra with trivial coefficients, with degree shifted by 1, \hbox{\it i.e.}, $HC^n(\mbox{V})\cong H^{n+1}(\mbox{V},\mbox{\bf k})$. Thus, unlike the case of associative algebras, cyclic cohomology does not lead to anything new. To see these facts, note that if $\varphi$ is cyclic, then \begin{multline} \ip{[v_i,\varphi(v_1,\cdots, \hat v_i,\cdots, v_{n+1})]}{v_{n+2}}= -\ip{[\varphi(v_1,\cdots, \hat v_i,\cdots, v_{n+1}),v_i]}{v_{n+2}}\\ =-\ip{\varphi(v_1,\cdots, \hat v_i,\cdots, v_{n+1})}{[v_i,v_{n+2}]}= -\hbox{$\tilde \ph$}(v_1,\cdots, \hat v_i,\cdots, v_{n+1},[v_i,v_{n+2}])\\ =\s{n+1}\hbox{$\tilde \ph$}([v_i,v_{n+2}],v_1,\cdots, \hat v_i,\cdots, v_{n+1}). \end{multline} Thus \begin{multline} \widetilde{d\varphi}(v_1,\cdots, v_{n+2})= \ip{d\varphi(v_1,\cdots, v_{n+1})}{v_{n+2}}=\\ \sum_{1\le i<j\le n+1}\s{i+j-1}\ip{\varphi([v_i,v_j],v_1,\cdots, \hat v_i,\cdots,\hat v_j,\cdots, v_{n+1})}{v_{n+2}}\\ + \sum_{1\le i\le n+1}\s i \ip{[v_i,\varphi(v_1,\cdots, \hat v_i,\cdots, v_{n+1})]}{v_{n+2}}\\ = \sum_{1\le i<j\le n+1}\s{i+j-1}\hbox{$\tilde \ph$}([v_i,v_j],v_1,\cdots, \hat v_i,\cdots,\hat v_j,\cdots, v_{n+1},v_{n+2})\\ + \sum_{1\le i\le n+1}\s{i+(n+2)-1} \hbox{$\tilde \ph$}([v_i,v_{n+2}],v_1,\cdots, \hat v_i,\cdots, v_{n+1}) \\= \sum_{1\le i<j\le n+2}\s{i+j-1}\hbox{$\tilde \ph$}([v_i,v_j],v_1,\cdots, \hat v_i,\cdots,\hat v_j,\cdots, v_{n+1},v_{n+2})\\ =d\hbox{$\tilde \ph$}(v_1,\cdots, v_{n+2}). \end{multline} The last equality follows from the triviality of the action of $\mbox{V}$ on $\mbox{\bf k}$, so that the second term in the definition of the coboundary operator drops out. Since $d\hbox{$\tilde \ph$}$ is antisymmetric, it follows that $d\varphi$ is cyclic. A deformation $l_t=l+t\varphi$ of the Lie algebra is said to preserve the inner product if the inner product remains invariant for $l_t$. This occurs precisely when $\varphi$ is cyclic with respect to the inner product. Similarly, a trivial deformation which preserves the inner product is given by a linear map $\rho_t=I+t\lambda$ which satisfies \begin{equation} \ip{\rho_t(v_1)}{\rho_t(v_2)}=\ip{v_1}{v_2}, \end{equation} which is equivalent to the condition $\ip{\lambda(v_1)}{v_2}=-\ip{v_1}{\lambda(v_2)}$; in other words, $\lambda$ is cyclic. Thus the cyclic cohomology $HC^2(\mbox{V})$ characterizes the deformations of $\mbox{V}$ which preserve the inner product. Since $HC^2(\mbox{V})\cong H^3(\mbox{V},\mbox{\bf k})$, we see that the cyclic cohomology is independent of the inner product. Of course the isomorphism does depend on the inner product. Thus we should define cyclic cohomology to be the cohomology on $C(V,\mbox{\bf k})$ induced by the isomorphism between $CC(V)$ and $C(V,\mbox{\bf k})$. It is this cohomology which we have shown coincides with $H(V,\mbox{\bf k})$, so is independent of the inner product. Finally, we introduce some notation that will make the definition of the coboundary operator generalize more easily to the \hbox{$\Z_2$}-graded case. 
Let $\sh pq$ be the {\bf unshuffles} of type $p,q$; that is, the subset of permutations $\sigma$ of $p+q$ elements such that $\sigma(i)<\sigma(i+1)$ when $i\ne p$. Let $\s{\sigma}$ be the sign of the permutation. Then \begin{multline} d\varphi(v_1,\cdots, v_{n+1})= \sum_{\sigma\in\sh2{n-1}} \s\sigma \varphi([v_{\sigma(1)},v_{\sigma(2)}],v_{\sigma(3)},\cdots, v_{\sigma(n+1)})\\ -\s{n-1} \sum_{\sigma\in\sh n1} \s\sigma [\varphi(v_{\sigma(1)},\cdots, v_{\sigma(n)}),v_{\sigma(n+1)}] \end{multline} Let us introduce a \mbox{$\Bbb Z$}-grading on the \mbox{\bf k}-module $C^*(\mbox{V})= \bigoplus_{k=1}^\infty C^k(\mbox{V})$ by defining $\operatorname{deg}(\varphi)=k-1$ if $\varphi\in C^k(\mbox{V})$. With this grading, $C^*(\mbox{V})$ becomes a \mbox{$\Bbb Z$}-graded Lie algebra, with the bracket defined by \begin{multline} [\varphi,\psi](v_1,\cdots, v_{k+l-1})=\\ \sum_{\sigma\in\sh k{l-1}} \s\sigma \varphi(\psi(v_{\sigma(1)},\cdots, v_{\sigma(k)}),v_{\sigma(k+1)},\cdots, v_{\sigma(k+l-1)})\\ -\s{(k-1)(l-1)} \sum_{\sigma\in\sh l{k-1}} \s\sigma \psi(\varphi(v_{\sigma(1)},\cdots, v_{\sigma(l)}), v_{\sigma(l+1)},\cdots, v_{\sigma(k+l-1)}), \end{multline} for $\varphi\in C^l(\mbox{V})$ and $\psi\in C^k(\mbox{V})$. Later we shall establish in a more general context that this bracket satisfies the \mbox{$\Bbb Z$}-graded Jacobi Identity. The definition of the bracket does not depend on the fact that $\mbox{V}$ is a Lie algebra, but in the case of a Lie algebra, we easily see that $d\varphi=[\varphi,l]$. Since $[l,l]=0$ is an immediate consequence of the Jacobi identity, and since $\operatorname{deg}(l)=1$, so that $l$ is an odd mapping, it follows immediately that $d^2=0$. A generalization of this result is that given a not necessarily invariant inner product on $\mbox{V}$, the bracket of cyclic elements is again a cyclic element. \begin{thm}\label{th1} Suppose that $\mbox{V}$ is a \mbox{\bf k}-module, and $\left<\cdot,\cdot\right>$ is an inner product on $\mbox{V}$. Suppose that $\varphi\in C^l(\mbox{V})$ and $\psi\in C^k(\mbox{V})$ are cyclic. Then $[\varphi,\psi]$ is also cyclic. Moreover, the formula below holds. \begin{equation} \widetilde{[\varphi,\psi]}(v_1,\cdots, v_{k+l})= \sum_{\sigma\in\sh kl} \s{\sigma} \tilde\varphi(\psi(v_{\sigma(1)},\cdots, v_{\sigma(k)}),v_{\sigma(k+1)},\cdots, v_{\sigma(k+l)}). \end{equation} As a consequence, the inner product induces the structure of a graded Lie algebra in $C^*(\mbox{V},\mbox{\bf k})$, given by $[\tilde\varphi,\tilde\psi]=\widetilde{[\varphi,\psi]}$. If further, $l:\bigwedge^2 \mbox{V}\rightarrow \mbox{V}$ is a Lie algebra bracket, and the inner product is invariant with respect to this bracket, then the differential $d$ in $C^n(\mbox{V},\mbox{\bf k})$ is given by $d(\tilde\varphi)=[\tilde \varphi,\tilde l]$, so that the isomorphism between $CC^*(\mbox{V})$ and $C^{*+1}(\mbox{V},\mbox{\bf k})$ is an isomorphism of differential $\mbox{$\Bbb Z$}$-graded Lie algebras. \end{thm} \section{\hbox{$\Z_2$}-Graded Lie Algebras}\label{sect 2} Recall that a \hbox{$\Z_2$}-graded Lie algebra is a \hbox{$\Z_2$}-graded \mbox{\bf k}-module equipped with a degree zero bracket $l:\mbox{V}\bigotimes \mbox{V}\rightarrow \mbox{V}$, abbreviated by $l(a,b)=[a,b]$, which is graded anticommutative, so that $[v_1,v_2]=-\s{v_1v_2}[v_2,v_1]$, and satisfies the graded Jacobi identity \begin{equation} [v_1,[v_2,v_3]]=[[v_1,v_2],v_3]+\s{v_1v_2}[v_2,[v_1,v_3]]. 
\end{equation} Odd brackets can also be considered, but here we require that the bracket has degree zero, so that $\e{[x,y]}=\e{x}+\e{y}$. One can restrict to the case where \mbox{\bf k}\ is a field of characteristic zero, but it is also interesting to allow \mbox{\bf k}\ to be a \hbox{$\Z_2$}-graded commutative ring, requiring that $\mbox{\bf k}_0$ be a field of characteristic zero. The graded antisymmetry of the bracket means that the bracket is a linear map $\left[\cdot,\cdot\right]: \bigwedge^2\mbox{V}\rightarrow \mbox{V}$, where $\bigwedge^2 \mbox{V}$ is the graded wedge product. Recall that the graded exterior algebra $\bigwedge \mbox{V}$ is defined as the quotient of $\bigotimes \mbox{V}$ by the graded ideal generated by elements of the form $x\otimes y+\s{xy}y\otimes x$, for homogeneous elements $x$, $y$ in $\mbox{V}$. An element $v_1\wedge\dots \wedge v_n$ is said to have (external) degree $n$ and parity (internal degree) $\e{v_1}+\cdots+ \e{v_n}$. If $\omega$ has degree $\operatorname{deg}(\omega)$ and parity $\e{\omega}$, and similarly for $\eta$, then \begin{equation} \omega \wedge \eta=\s{\e\omega\e\eta+\operatorname{deg}(\omega)\operatorname{deg}(\eta)} \eta\wedge\omega. \end{equation} More generally (see section \ref{sect 3}), if $\sigma$ is any permutation, we define $\epsilon(\sigma;v_1,\cdots, v_n)$ by requiring that \begin{equation} v_1\wedge\cdots\wedge v_n=\s{\sigma} \epsilon(\sigma;v_1,\cdots, v_n) v_{\sigma(1)}\wedge\cdots\wedge v_{\sigma(n)}, \end{equation} where $\s{\sigma}$ is the sign of the permutation $\sigma$. In order to see how the coboundary operator should be modified in the case of \hbox{$\Z_2$}-graded algebras, we consider infinitesimal deformations of the Lie algebra. If we denote the deformed bracket by $l_t=l+t\varphi$ as before, then we wish the deformed bracket to remain even, so that if $t$ is taken to be an even parameter, then $\varphi$ must be even. However, if we let $t$ be an odd parameter, then $\varphi$ must be odd. We also must take into account that parameters should graded commute with elements of \mbox{V}, so that $vt=\s{tv}tv$. The graded Jacobi identity takes the form \begin{equation} l_t(a,l_t(b,c))=l_t(l_t(a,b),c)+\s{ab}l_t(b,l_t(a,c)) \end{equation} Expanding this formula, we obtain the condition \begin{multline} \varphi(l(a,b),c)+\s{bc+1}\varphi(l(a,c),b)+\s{a(b+c)}\varphi(l(b,c),a)+\\ l(\varphi(a,b),c)+\s{bc+1}l(\varphi(a,c),b)+\s{a(b+c)}l(\varphi(b,c),a)=0, \end{multline} which can be expressed as $d\varphi=0$, if we define for $\varphi:\bigwedge^n \mbox{V}\rightarrow \mbox{V}$, \begin{multline}\label{ztcb} d\varphi(v_1,\cdots, v_{n+1})= \sum_{\sigma\in\sh2{n-1}} \s\sigma\epsilon(\sigma) \varphi([v_{\sigma(1)},v_{\sigma(2)}],v_{\sigma(3)},\cdots, v_{\sigma(n+1)})\\ -\s{n-1} \sum_{\sigma\in\sh n1} \s\sigma\epsilon(\sigma) [\varphi(v_{\sigma(1)},\cdots, v_{\sigma(n)}),v_{\sigma(n+1)}]. \end{multline} More generally, when $M$ is a \hbox{$\Z_2$}-graded $\mbox{V}$ module, we can define a right multiplication $M\bigotimes \mbox{V}\rightarrow M$ by $m\cdot a=-\s{ma}a\cdot m$. 
Then if we let $C^n(\mbox{V},M)=\mbox{\rm Hom}(\bigwedge^n\mbox{V},M)$ be the module of cochains of degree $n$ on \mbox{V}\ with values in $M$, we can define a coboundary operator $d:C^n(\mbox{V},M)\rightarrow C^{n+1}(\mbox{V},M)$ by \begin{multline} d\varphi(v_1,\cdots, v_{n+1})= \sum_{\sigma\in\sh2{n-1}} \s\sigma\epsilon(\sigma) \varphi([v_{\sigma(1)},v_{\sigma(2)}],v_{\sigma(3)},\cdots, v_{\sigma(n+1)})\\ -\s{n-1} \sum_{\sigma\in\sh n1} \s\sigma\epsilon(\sigma) \varphi(v_{\sigma(1)},\cdots, v_{\sigma(n)})\cdot v_{\sigma(n+1)}. \end{multline} Then we define $H^n(\mbox{V},M)$ in the same manner as for ordinary Lie algebras, and as before, denote $C^n(\mbox{V},\mbox{V})=C^n(\mbox{V})$ and $H^n(\mbox{V})=H^n(\mbox{V},\mbox{V})$ for the adjoint action of \mbox{V}. As in the case of ordinary Lie algebras, we obtain that trivial (infinitesimal) deformations are given by means of coboundaries of linear maps $\lambda:\mbox{V}\rightarrow \mbox{V}$. Thus the infinitesimal deformations of a \hbox{$\Z_2$}-graded Lie algebra are classified by $H^2(\mbox{V})$, where the cohomology of the Lie algebra is given by means of the coboundary operator in equation (\ref{ztcb}) above. The fact that $d^2=0$ is verified in the usual manner. $C^*(\mbox{V})=\bigoplus_{k=1}^\infty C^k(\mbox{V})$ has a natural $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$ grading, with the bidegree of a homogeneous element $\varphi\in C^n(\mbox{V})$ given by $\operatorname{bid}(\varphi)=(\e\varphi,n-1)$. (Actually, it is more common to define the exterior degree of $\varphi$ to be $1-n$, but as the degree enters into our calculations mainly by sign, this makes no difference.) There is a natural bracket operation in $C^*(\mbox{V})$, given by \begin{multline}\label{nbra} [\varphi,\psi](v_1,\cdots, v_{k+l-1})=\\ \sum_{\sigma\in\sh k{l-1}}\!\! \s\sigma\epsilon(\sigma) \varphi(\psi(v_{\sigma(1)},\cdots, v_{\sigma(k)}),v_{\sigma(k+1)},\cdots, v_{\sigma(k+l-1)})\\ -\s{\varphi\psi+ (k-1)(l-1)}\!\!\!\!\!\!\!\!\!\! \sum_{\sigma\in\sh l{k-1}}\!\! \s\sigma\epsilon(\sigma) \psi(\varphi(v_{\sigma(1)},\cdots, v_{\sigma(l)}), v_{\sigma(l+1)},\cdots, v_{\sigma(k+l-1)}). \end{multline} This bracket makes $C^*(\mbox{V})$ into a $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-graded Lie algebra. The differential takes the form $d(\varphi)=[\varphi,l]$, and the condition $[l,l]=0$ is precisely equivalent to the \hbox{$\Z_2$}-graded Jacobi identity for $l$. Since $l$ has even parity, we note that as a $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-graded map, it is odd, since $\ip{(0,1)}{(0,1)}=1$. For a \hbox{$\Z_2$}-graded \mbox{\bf k}-module \mbox{V}, an inner product is a (right) \mbox{\bf k}-module homomorphism $h:\mbox{V}\bigotimes\mbox{V}\rightarrow\mbox{\bf k}$, which is (graded) symmetric, and non-degenerate. Denote the inner product by $\ip vw=h(v\otimes w)$. Graded symmetry means that $\ip vw=\s{vw}\ip wv$. Non-degeneracy means that the map $\lambda:\mbox{V}\rightarrow \mbox{V}^*=\mbox{\rm Hom}(\mbox{V},\mbox{\bf k})$, given by $\lambda(v)(w)=\ip vw$ is an isomorphism. (When \mbox{\bf k}\ is a field, this is equivalent to the usual definition of a non-degenerate bilinear form.) If $h$ is an even map, then we say that the inner product is even. We shall only consider even inner products on \mbox{V}. If $\mbox{V}$ is a \hbox{$\Z_2$}-graded Lie algebra, then we define the notion of an invariant inner product as in the ungraded case, by $\ip{[v_1,v_2]}{v_3}=\ip{v_1}{[v_2,v_3]}$. 
Then the tensor $\tilde l$, given by \begin{equation} \tilde l(v_1,v_2,v_3)=\ip{\sb{v_1,v_2}}{v_3}, \end{equation} is (graded) antisymmetric, so that $\tilde l\in\mbox{\rm Hom}(\bigwedge^3\mbox{V},\mbox{\bf k})$. In particular, we have \begin{equation} \tilde l(v_1,v_2,v_3)= \s{v_3(v_1+v_2)} \tilde l(v_3,v_1,v_2), \end{equation} so that $\tilde l$ satisfies a property of graded invariance under cyclic permutations. In general, we say that an element $\varphi\in C^n(\mbox{V})=\mbox{\rm Hom}(\bigwedge^n\mbox{V},\mbox{V})$ is cyclic with respect to the inner product, or preserves the inner product, if \begin{equation} \ip{\varphi(v_1,\cdots, v_n)}{v_{n+1}}=\s{n+v_1\varphi} \ip{v_1}{\varphi(v_2,\cdots, v_{n+1})}. \end{equation} Then $\varphi$ is cyclic if and only if $\tilde\varphi:\bigwedge^{n+1}\mbox{V} \rightarrow \mbox{\bf k}$, given by \begin{equation} \tilde\varphi(v_1,\cdots, v_{n+1})= \ip{\varphi(v_1,\cdots, v_n)}{v_{n+1}} \end{equation} is antisymmetric. Since the inner product is non-degenerate, the map $\varphi\mapsto\tilde\varphi$ is an even isomorphism between the submodule $CC^n(\mbox{V})$ of $C^n(\mbox{V})$ consisting of cyclic elements, and $C^{n+1}(\mbox{V},\mbox{\bf k})$. We obtain a straightforward generalization of theorem \ref{th1} in the \hbox{$\Z_2$}-graded case. \begin{thm}\label{th2} Suppose that $\mbox{V}$ is a \hbox{$\Z_2$}-graded \mbox{\bf k}-module, and $\left<\cdot,\cdot\right>$ is an inner product on $\mbox{V}$. Suppose that $\varphi\in C^l(\mbox{V})$ and $\psi\in C^k(\mbox{V})$ are cyclic. Then $[\varphi,\psi]$ is also cyclic. Moreover, the formula below holds. \begin{equation} \widetilde{[\varphi,\psi]}(v_1,\cdots, v_{k+l})=\!\!\!\!\! \sum_{\sigma\in\sh kl}\!\!\!\!\! \s{\sigma}\epsilon(\sigma) \tilde\varphi(\psi(v_{\sigma(1)},\cdots, v_{\sigma(k)}),v_{\sigma(k+1)},\cdots, v_{\sigma(k+l)}). \end{equation} As a consequence, the inner product induces the structure of a \hbox{$\Z_2\times\Z$}-graded Lie algebra in $C^*(\mbox{V},\mbox{\bf k})$, given by $[\tilde\varphi,\tilde\psi]=\widetilde{[\varphi,\psi]}$. If further, $l:\bigwedge^2 \mbox{V}\rightarrow \mbox{V}$ is a \hbox{$\Z_2$}-graded Lie algebra bracket, and the inner product is invariant with respect to this bracket, then the differential $d$ in $C^n(\mbox{V},\mbox{\bf k})$ is given by $d(\tilde\varphi)=[\tilde \varphi,\tilde l]$, so that the isomorphism between $CC^*(\mbox{V})$ and $C^{*+1}(\mbox{V},\mbox{\bf k})$ is an isomorphism of differential $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-graded Lie algebras. \end{thm} In particular, we can define cyclic cohomology of a \hbox{$\Z_2$}-graded Lie algebra in the same manner as for an ordinary Lie algebra. The isomorphism $HC^n(\mbox{V})\cong H^{n+1}(\mbox{V},\mbox{\bf k})$ holds for the \hbox{$\Z_2$}-graded case as well. In the following sections, we shall generalize the notion of cohomology of a graded Lie algebra, and cyclic cohomology of the same, to the case of \mbox{$L_\infty$}\ algebras. The important observation is that cohomology of a Lie algebra is determined by a bracket operation on the cochains. We shall see that this leads to the result that cohomology of a Lie algebra classifies the deformations of the Lie algebra into an \mbox{$L_\infty$}\ algebra. \section{The Exterior and Symmetric Algebras}\label{sect 3} Suppose that $V$ is a \hbox{$\Z_2$}-graded \mbox{\bf k}-module. For the purposes of this paper, we shall define the tensor algebra $T(V)$ by $T(V)=\bigoplus_{n=1}^\infty V^n$, where $V^n$ is the $n$-th tensor power of $V$. 
Often, the tensor algebra is defined to include the term $V^0=\mbox{\bf k}$ as well, but we shall omit it here. For a treatment of \hbox{$A_\infty$}\ algebras which includes this term see \cite{getz}. For an element $v=v_1\tns\cdots\tns v_n$ in $T(V)$, define its parity $\e{v}=\e{v_1}+\cdots+ \e{v_n}$, and its degree by $\operatorname{deg}(v)=n$. We define the bidegree of $v$ by $\bd v=(\e{v},\operatorname{deg}(v))$. If $u, v \in T(V)$, then $\bd{u\otimes v}= \bd u +\bd v$, so that $T(V)$ is naturally $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-graded by the bidegree, and \hbox{$\Z_2$}-graded if we consider only the parity. If $\sigma\in\Sigma_n$, then there is a natural right \mbox{\bf k}-module homomorphism $S_\sigma:V^n\rightarrow V^n$, which satisfies \begin{equation}\label{a1} S_\sigma(v_1\tns\cdots\tns v_n)=\epsilon(\sigma;v_1,\cdots, v_n) v_{\sigma(1)}\tns\cdots\tns v_{\sigma(n)}, \end{equation} where $\epsilon(\sigma;v_1,\cdots, v_n)$ is a sign which can be determined by the following. If $\sigma$ interchanges $k$ and $k+1$, then $\epsilon(\sigma;v_1,\cdots, v_n)=\s{v_kv_{k+1}}$. In addition, if $\tau$ is another permutation, then \begin{equation} \epsilon(\tau\sigma;v_1,\cdots, v_n)= \epsilon(\tau;v_{\sigma(1)},\cdots, v_{\sigma(n)}) \epsilon(\sigma;v_1,\cdots, v_n). \end{equation} It is conventional to abbreviate $\epsilon(\sigma;v_1,\cdots, v_n)$ as $\epsilon(\sigma)$. The symmetric algebra $\bigodot V$ is defined as the quotient of the tensor algebra of $V$ by the bigraded ideal generated by all elements of the form $u\otimes v-\s{uv}v\otimes u$. The resulting algebra has a decomposition $\bigodot V=\bigoplus_{n=1}^\infty \bigodot^n V$, and the induced product is denoted by $\odot$. The symmetric algebra is both \hbox{$\Z_2$}\ and $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-graded, and is graded commutative with respect to the \hbox{$\Z_2$}-grading. For simplicity, let us denote $\s{uv}=\s{\e u\e v}$. In other words, if $u, v\in\bigodot V$, then \begin{equation} u\odot v=\s{uv}v\odot u. \end{equation} Furthermore, it is easy to see that \begin{equation} v_1\odot\cdots\odot v_n=\epsilon(\sigma)v_{\sigma(1)}\odot\cdots\odot v_{\sigma(n)}. \end{equation} The exterior algebra $\bigwedge V$ is defined as the quotient of $T(V)$ by the bigraded ideal generated by all elements of the form $u\otimes v+\s{uv}v\otimes u$. The resulting algebra has a decomposition $\bigwedge V=\bigoplus_{n=1}^\infty \bigwedge^n V$, and the induced product is denoted by $\wedge$. The exterior algebra is both \hbox{$\Z_2$}\ and $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-graded, and is graded commutative with respect to the $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-grading. We introduce a \hbox{$\Z_2$}-valued inner product on $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$ by \begin{equation} \ip{(\bar m,n)}{(\bar r,s)}= {\bar m\bar r+\bar n\bar s}. \end{equation} Let $u, v\in \bigwedge V$. For simplicity, let us denote $\s{\ip uv}=\s{\ip{\operatorname{bid}(u)}{\operatorname{bid}(v)}}$. Then $$u\wedge v=\s{\ip uv} v\wedge u,$$ which is precisely the formula for $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-graded commutativity. Furthermore it is easy to see that \begin{equation} v_1\wedge\cdots\wedge v_n=\s{\sigma}\epsilon(\sigma)v_{\sigma(1)}\wedge\cdots\wedge v_{\sigma(n)}. \end{equation} \section{The Tensor, Exterior and Symmetric Coalgebras}\label{sect 4} A proper treatment of the symmetric and exterior coalgebras would introduce the coalgebra structure on the tensor algebra, and then describe these coalgebras in terms of coideals. 
Instead, we will describe these coalgebra structures directly. Recall that a coalgebra structure on a \mbox{\bf k}-module $C$ is given by a diagonal mapping $\Delta:C\rightarrow C\bigotimes C$. We consider only coassociative coalgebras, but we do not consider counits. The axiom of coassociativity is that $(1\otimes \Delta)\circ \Delta=(\Delta\otimes 1)\circ\Delta$. A grading on $C$ is compatible with the coalgebra structure if for homogeneous $c\in C$, $\Delta(c)=\sum_i u_i\bigotimes v_i$, where $\e{u_i}+\e{v_i}=\e c$ for all $i$. We also mention that a coalgebra is graded cocommutative if $S\circ\Delta=\Delta$, where $S:C\bigotimes C\rightarrow C\bigotimes C$ is the symmetric mapping given by $S(m\otimes n)=\s{\ip mn}n\otimes m$. The tensor coalgebra structure is given by defining the (reduced) diagonal $\Delta:T(V)\rightarrow T(V)$ by \begin{equation} \Delta(v_1\tns\cdots\tns v_n)=\sum_{k=1}^{n-1}(v_1\tns\cdots\tns v_k)\otimes(v_{k+1}\tns\cdots\tns v_n). \end{equation} (We use here the reduced diagonal, because we are not including the zero degree term in the tensor coalgebra.) The tensor coalgebra is not graded cocommutative under either the \hbox{$\Z_2$}\ or the $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-grading, but both gradings are compatible with the coalgebra structure. The symmetric coalgebra structure on $\bigodot V$ is given by defining \begin{equation} \Delta(v_1\odot\cdots\odot v_n)= \sum_{k=1}^{n-1} \sum_{\sigma\in\sh k{n-k}} \epsilon(\sigma) v_{\sigma(1)}\odot\cdots\odot v_{\sigma(k)} \otimes v_{\sigma(k+1)}\odot\cdots\odot v_{\sigma(n)}. \end{equation} With this coalgebra structure, and the \hbox{$\Z_2$}-grading, $\bigodot V$ is a cocommutative, coassociative coalgebra without a counit. Similarly, we define the exterior coalgebra structure on $\bigwedge V$ by \begin{equation} \Delta(v_1\wedge\cdots\wedge v_n)= \sum_{k=1}^{n-1} \sum_{\sigma\in\sh k{n-k}}\!\!\!\!\!\! \s\sigma \epsilon(\sigma) v_{\sigma(1)}\wedge\cdots\wedge v_{\sigma(k)} \otimes v_{\sigma(k+1)}\wedge\cdots\wedge v_{\sigma(n)}. \end{equation} Then the coalgebra structure is coassociative, and is cocommutative with respect to the $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-grading. \section{Coderivations}\label{sect 5} A coderivation on a graded coalgebra $C$ is a map $d:C\rightarrow C$ such that \begin{equation} \Delta\circ d=(d\otimes 1+1\otimes d)\circ\Delta. \end{equation} Note that the definition depends on the grading group, because $(1\otimes d)(\alpha\otimes\beta)=\s{\alpha d}\alpha\otimes d(\beta)$. The \mbox{\bf k}-module $\operatorname{Coder}(C)$ of all graded coderivations has a natural structure of a graded Lie algebra, with the bracket given by \begin{equation} \sb{m,n}=m\circ n-\s{\ip mn}n\circ m, \end{equation} where $\s{\ip mn}=\s{\ip{\e m}{\e n}}$ is given by the inner product in the grading group, so that the definition of the bracket also depends on the grading group. A codifferential on a coalgebra $C$ is a coderivation $d$ such that $d\circ d=0$. We examine the coderivation structure of the tensor, symmetric and exterior coalgebras. \subsection{Coderivations of the Tensor Coalgebra} Suppose that we wish to extend $d_k:V^k\rightarrow V$ to a coderivation $\hbox{{$\hat d$}}_k$ of $T(V)$. We are interested in extensions satisfying the property that $\hbox{{$\hat d$}}_k(v_1,\cdots, v_n)=0$ for $n<k$. How this extension is made depends on whether we consider the \hbox{$\Z_2$}\ or the $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-grading. 
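Before treating the two gradings separately, it may help to record the simplest case $k=1$, where the distinction disappears: both extensions reduce to the familiar co-Leibniz rule
\begin{equation}
\hbox{{$\hat d$}}_1(v_1\tns\cdots\tns v_n)=\sum_{i=1}^{n} \s{(v_1+\cdots+ v_{i-1})d_1} v_1\tns\cdots\tns v_{i-1}\otimes d_1(v_i)\otimes v_{i+1}\tns\cdots\tns v_n,
\end{equation}
since, as will be apparent from the general formulas below, the extra sign $i(k-1)$ appearing in the $\hbox{$\Z_2\times\Z$}$-graded extension vanishes when $k=1$.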
First we consider the \hbox{$\Z_2$}-grading, so that only the parity of $d$ is relevant. Then if we define
\begin{multline}
\hbox{{$\hat d$}}_k(v_1\tns\cdots\tns v_n)=\\ \sum_{i=0}^{n-k} \s{(v_1+\cdots+ v_i)d_k} v_1\tns\cdots\tns v_i\otimes d_k(v_{i+1},\cdots, v_{i+k})\otimes v_{i+k+1}\tns\cdots\tns v_n,
\end{multline}
one can show that $\hbox{{$\hat d$}}_k$ is a coderivation on $T(V)$ with respect to the \hbox{$\Z_2$}-grading. More generally, one can show that any coderivation $\hbox{{$\hat d$}}$ on $T(V)$ is completely determined by the induced mappings $d_k:V^k\rightarrow V$, and in fact, one obtains that
\begin{multline}\label{cder1}
\hbox{{$\hat d$}}(v_1\tns\cdots\tns v_n)=\\ \sum \begin{Sb} 1\le k\le n\\ \\ 0\le i\le n-k \end{Sb} \s{(v_1+\cdots+ v_i)d_k} v_1\tns\cdots\tns v_i\otimes d_k(v_{i+1},\cdots, v_{i+k})\otimes v_{i+k+1}\tns\cdots\tns v_n.
\end{multline}
Also, one can show that $\hbox{{$\hat d$}}$ is a codifferential with respect to the \hbox{$\Z_2$}-grading precisely when
\begin{equation}
\sum \begin{Sb} k+l=n+1\\ \\ 0\le i\le n-k \end{Sb} \s{(v_1+\cdots+ v_i)d_k} d_l(v_1,\cdots, v_i,d_k(v_{i+1},\cdots, v_{i+k}),v_{i+k+1},\cdots, v_n)=0.
\end{equation}
The module $\operatorname{Coder}(T(V))$ of coderivations of $T(V)$ with respect to the \hbox{$\Z_2$}-grading is naturally isomorphic to $\mbox{\rm Hom}(T(V),V)$, so $\mbox{\rm Hom}(T(V),V)$ inherits a natural structure of a \hbox{$\Z_2$}-graded Lie algebra. Let us examine the bracket structure on $\mbox{\rm Hom}(T(V),V)$ more closely. Suppose that for an arbitrary element $d\in\mbox{\rm Hom}(T(V),V)$, we denote by $d_k$ the restriction of $d$ to $V^k$, and by $\hbox{{$\hat d$}}$, $\hbox{{$\hat d$}}_k$ the extensions of $d$ and $d_k$ as coderivations of $T(V)$. Also denote by $d_{kl}$ the restriction of $\hbox{{$\hat d$}}_k$ to $V^{k+l-1}$, so that $d_{kl}\in\mbox{\rm Hom}(V^{k+l-1},V^l)$. The precise expression for $d_{kl}$ is given by equation (\ref{cder1}) with $n=k+l-1$. It is easy to see that the bracket of $d_k$ and $\delta_l$ is given by
\begin{equation}
[d_k,\delta_l]= d_k\circ\delta_{lk}-\s{d_k\delta_l}\delta_l\circ d_{kl}.
\end{equation}
Furthermore, we have $[d,\delta]_n=\sum_{k+l=n+1}[d_k,\delta_l].$ The point here is that $d_{kl}$ and $\delta_{kl}$ are determined in a simple manner by $d_k$ and $\delta_k$, so we have given a description of the bracket on $\mbox{\rm Hom}(T(V),V)$ in a direct fashion. The fact that the bracket so defined has the appropriate properties follows from the fact that if $\rho=[d,\delta]$, then $\hat\rho=[\hbox{{$\hat d$}},\hbox{$\hat \delta$}]$. Now we consider how to extend a mapping $m_k:V^k\rightarrow V$ to a coderivation $\hbox{$\hat m$}_k$ with respect to the \hbox{$\Z_2\times\Z$}-grading. In this case, note that the bidegree of $m_k$ is given by $\bd{m_k}=(\e{m_k},k-1)$. The formula for the extension is the same as before, but with the bidegree in place of the parity. In other words,
\begin{multline}
\hbox{$\hat m$}_k(v_1\tns\cdots\tns v_n)=\\ \sum_{i=0}^{n-k} \s{(v_1+\cdots+ v_i)m_k+i(k-1)} v_1\tns\cdots\tns v_i\otimes m_k(v_{i+1},\cdots, v_{i+k})\otimes v_{i+k+1}\tns\cdots\tns v_n.
\end{multline}
Similarly, if we consider an arbitrary coderivation $\hbox{$\hat m$}$ on $T(V)$, then it is again determined by the induced mappings $m_k:V^k\rightarrow V$, and we see that
\begin{multline}
\hbox{$\hat m$}(v_1\tns\cdots\tns v_n)= \sum \begin{Sb} 1\le k\le n\\ \\ 0\le i\le n-k \end{Sb} \s{(v_1+\cdots+ v_i)m_k +i(k-1)}\\\times v_1\tns\cdots\tns v_i\otimes m_k(v_{i+1},\cdots, v_{i+k})\otimes v_{i+k+1}\tns\cdots\tns v_n.
\end{multline}
Also, one obtains that $\hbox{$\hat m$}$ being a codifferential with respect to the $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-grading is equivalent to the condition that for all $n$,
\begin{multline}\label{ztzcodiff}
\sum \begin{Sb} k+l=n+1\\ \\ 0\le i\le n-k \end{Sb} \s{(v_1+\cdots+ v_i)m_k +i(k-1)}\\\times m_l(v_1,\cdots, v_i,m_k(v_{i+1},\cdots, v_{i+k}),v_{i+k+1},\cdots, v_n)=0.
\end{multline}
The module $\operatorname{Coder}(T(V))$ of coderivations of $T(V)$ with respect to the \hbox{$\Z_2\times\Z$}-grading is naturally isomorphic to $\bigoplus_{k=1}^\infty\mbox{\rm Hom}(V^k,V)$, rather than $\mbox{\rm Hom}(T(V),V)$, because the latter module is the direct product of the modules $\mbox{\rm Hom}(V^k,V)$. However, we would like to consider elements of the form $\hbox{$\hat m$}=\sum_{k=1}^\infty\hbox{$\hat m$}_k$, where $\hbox{$\hat m$}_k$ has bidegree $(\e{m_k},k-1)$. Such an infinite sum is a well defined element of $\mbox{\rm Hom}(T(V),T(V))$, so by abuse of notation, we will define $\operatorname{Coder}(T(V))$ to be the module of such infinite sums of coderivations. With this convention, we now have a natural isomorphism between $\operatorname{Coder}(T(V))$ and $\mbox{\rm Hom}(T(V),V)$. Furthermore, the bracket of coderivations is still well defined, and we consider $\operatorname{Coder}(T(V))$ to be a \hbox{$\Z_2\times\Z$}-graded Lie algebra. The reason that the bracket is well defined is that any homogeneous coderivation has bidegree $(m,n)$ for some $n\ge 0$, so the grading is given by $\hbox{$\Z_2$}\times\mbox{$\Bbb N$}$ rather than the full group \hbox{$\Z_2\times\Z$}. In structures where a $\mbox{$\Bbb Z$}$-grading reduces to an \mbox{$\Bbb N$}-grading, it is often advantageous to replace direct sums with direct products. Using the same notation convention as in the \hbox{$\Z_2$}-graded case, we note that if $m,\mu\in\mbox{\rm Hom}(T(V),V)$, then we have
\begin{equation}
[m_k,\mu_l]=m_k\circ\mu_{lk}-\s{\ip{m_k}{\mu_l}}\mu_l\circ m_{kl},
\end{equation}
and $[m,\mu]_n=\sum_{k+l=n+1}[m_k,\mu_l]$.
\subsection{Coderivations of the Symmetric Coalgebra}
Suppose that we want to extend $m_k:\bigodot^k V\rightarrow V$ to a coderivation $\hbox{$\hat m$}_k$ of $\bigodot V$ such that $\hbox{$\hat m$}_k(v_1\odot\cdots\odot v_n)=0$ for $n<k$. Define
\begin{equation}
\hbox{$\hat m$}_k(v_1\odot\cdots\odot v_n)= \sum_{\sigma\in\sh k{n-k}} \epsilon(\sigma) m_k(v_{\sigma(1)},\cdots, v_{\sigma(k)})\odot v_{\sigma(k+1)} \odot\cdots\odot v_{\sigma(n)}.
\end{equation}
Then $\hbox{$\hat m$}_k$ is a coderivation with respect to the \hbox{$\Z_2$}-grading. In general, suppose that $\hbox{$\hat m$}$ is a coderivation on the symmetric coalgebra. It is not difficult to see that if $m_k:\bigodot^k V\rightarrow V$ is the induced map, then $\hbox{$\hat m$}$ can be recovered from these maps by the relations
\begin{equation}
\hbox{$\hat m$}(v_1\odot\cdots\odot v_n)= \sum \begin{Sb} 1\le k\le n\\ \\ \sigma\in\sh k{n-k} \end{Sb} \epsilon(\sigma) m_k(v_{\sigma(1)},\cdots, v_{\sigma(k)})\odot v_{\sigma(k+1)} \odot\cdots\odot v_{\sigma(n)}.
\end{equation}
{}From this, we determine that there is a natural isomorphism between $\operatorname{Coder}(\bigodot V)$, the module of coderivations of $\bigodot V$, and $\mbox{\rm Hom}(\bigodot V,V)$. Thus $\mbox{\rm Hom}(\bigodot V,V)$ inherits the structure of a graded Lie algebra. Also, $\hbox{$\hat m$}$ is a codifferential when for all $n$,
\begin{equation}
\sum \begin{Sb} k+l=n+1 \\ \sigma\in\sh k{n-k} \end{Sb} \epsilon(\sigma) m_l(m_k(v_{\sigma(1)},\cdots, v_{\sigma(k)}),v_{\sigma(k+1)} ,\cdots, v_{\sigma(n)})=0.
\end{equation}
It is reasonable to ask whether a map $m_k:\bigodot^k V\rightarrow V$ can be extended as a coderivation with respect to the \hbox{$\Z_2\times\Z$}-grading. It turns out that in general, it is not possible to do this. For example, suppose that we are given $m_2:\bigodot^2 V\rightarrow V$. If $m_2$ is extendible as a coderivation $\hbox{$\hat m$}$, then we must have
\begin{multline}
\Delta \hbox{$\hat m$}(v_1\odot v_2\odot v_3)=(\hbox{$\hat m$}\otimes 1+1\otimes \hbox{$\hat m$}) \Delta (v_1\odot v_2\odot v_3)=\\ =(\hbox{$\hat m$}\otimes 1+1\otimes \hbox{$\hat m$})[v_1\otimes v_2\odot v_3 +\s{v_1v_2}v_2\otimes v_1\odot v_3+\\ \s{v_3(v_1+v_2)}v_3\otimes v_1\odot v_2+ v_1\odot v_2\otimes v_3+\\ \s{v_2v_3}v_1\odot v_3\otimes v_2+\s{v_1(v_2+v_3)}v_2\odot v_3\otimes v_1]\\ =\s{\ip{v_1}{m_2}}v_1\otimes m_2(v_2,v_3)+\s{\ip{v_2}{m_2}+v_1v_2} v_2\otimes m_2(v_1,v_3)+\\ \s{\ip{v_3}{m_2}+v_3(v_1+v_2)}v_3\otimes m_2(v_1,v_2)+ m_2(v_1,v_2)\otimes v_3 +\\ \s{v_2v_3}m_2(v_1,v_3)\otimes v_2+\s{v_1(v_2+v_3)}m_2(v_2,v_3)\otimes v_1
\end{multline}
In the above we are using the fact that $(1\otimes m_2)(\alpha\otimes \beta)=\s{\ip \alpha{m_2}}\alpha\otimes m_2(\beta)$, where $\ip\alpha {m_2}$ depends on which grading group we are using. We need to recognize the expression as $\Delta$ of something. In order for this to be possible, the terms $\s{\ip{v_1}{m_2}}v_1\otimes m_2(v_2,v_3)$ and $\s{v_1(v_2+v_3)}m_2(v_2,v_3)\otimes v_1$ need to match up. We have $\Delta(v_1\odot m_2(v_2,v_3))=v_1\otimes m_2(v_2,v_3)+\s{v_1(v_2+v_3+m_2)}m_2(v_2,v_3)\otimes v_1$. Thus for the expressions to match up, it is necessary that $\ip{v_1}{m_2}=\e{v_1}\e{m_2}$, which is the inner product given by the \hbox{$\Z_2$}-grading.
\subsection{Coderivations of the Exterior Coalgebra}
Suppose that we want to extend $l_k:\bigwedge^k V\rightarrow V$ to a coderivation $\hbox{$\hat l$}_k$ of $\bigwedge V$. Define
\begin{equation}
\hbox{$\hat l$}_k(v_1\wedge\cdots\wedge v_n)= \sum_{\sigma\in\sh k{n-k}} \s{\sigma} \epsilon(\sigma) l_k(v_{\sigma(1)},\cdots, v_{\sigma(k)})\wedge v_{\sigma(k+1)} \wedge\cdots\wedge v_{\sigma(n)}.
\end{equation}
Then $\hbox{$\hat l$}_k$ is a coderivation with respect to the $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-grading. As in the graded symmetric case, an arbitrary coderivation on $\bigwedge V$ is completely determined by the induced maps $l_k:\bigwedge^k V\rightarrow V$ by the formula
\begin{equation}
\hbox{$\hat l$}(v_1\wedge\cdots\wedge v_n)= \sum \begin{Sb} 1\le k\le n\\ \\ \sigma\in\sh k{n-k} \end{Sb} \s{\sigma} \epsilon(\sigma) l_k(v_{\sigma(1)},\cdots, v_{\sigma(k)})\wedge v_{\sigma(k+1)} \wedge\cdots\wedge v_{\sigma(n)}.
\end{equation}
Similarly to the previous cases considered, we see that $\hbox{$\hat l$}$ is a codifferential when for all $n$,
\begin{equation}
\sum \begin{Sb} k+l=n+1\\ \\ \sigma\in\sh k{n-k} \end{Sb} \s{\sigma} \epsilon(\sigma) l_l(l_k(v_{\sigma(1)},\cdots, v_{\sigma(k)}), v_{\sigma(k+1)}, \cdots, v_{\sigma(n)})=0.
\end{equation}
As in the case of the tensor coalgebra, the \hbox{$\Z_2\times\Z$}-grading requires us to extend the notion of a coderivation in order to obtain an isomorphism between $\operatorname{Coder}(\bigwedge V)$ and $\mbox{\rm Hom}(\bigwedge V,V)$. In our extended sense, $\mbox{\rm Hom}(\bigwedge V,V)$ inherits a natural structure of a \hbox{$\Z_2\times\Z$}-graded Lie algebra. One can ask whether a map $l_k:\bigwedge^k V\rightarrow V$ can be extended as a coderivation with respect to the \hbox{$\Z_2$}-grading. For example, suppose that $l:\bigwedge^2 V\rightarrow V$ is given. Then if $l$ is to be extended as a coderivation with respect to the $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-grading, we must have
\begin{multline}
\Delta l(v_1\wedge v_2\wedge v_3)= (l\tns1+1\otimes l) \!\!\!\! \sum \begin{Sb} 1\le k\le 2\\ \\ \sigma\in\sh k{3-k} \end{Sb} \!\!\!\! \s{\sigma} \epsilon(\sigma) v_{\sigma(1)}\wedge\cdots\wedge v_{\sigma(k)} \otimes v_{\sigma(k+1)}\wedge\cdots\wedge v_{\sigma(3)} \\ =(l\tns1+1\otimes l)[ v_1\otimes v_2\wedge v_3+v_1\wedge v_2\otimes v_3 +\s{v_1v_2+1}v_2\otimes v_1\wedge v_3 +\\ \s{v_2v_3+1}v_1\wedge v_3\otimes v_2+ \s{v_1(v_2+v_3)}v_2\wedge v_3\otimes v_1+ \s{v_3(v_1+v_2)}v_3\otimes v_1\wedge v_2]\\ =\s{v_1l+1}v_1\otimes l(v_2,v_3)+ l(v_1,v_2)\otimes v_3+ \s{v_1v_2+v_2l}v_2\otimes l(v_1,v_3)+\\ \s{v_2v_3+1}l(v_1,v_3)\otimes v_2+ \s{v_1(v_2+v_3)}l(v_2,v_3)\otimes v_1+ \s{v_3(v_1+v_2)+v_3l+1}v_3\otimes l(v_1,v_2)\\ =\Delta[l(v_1,v_2)\wedge v_3+\s{v_2v_3+1}l(v_1,v_3)\wedge v_2 +\s{v_1(v_2+v_3)}l(v_2,v_3)\wedge v_1]
\end{multline}
The map $l$ above cannot be extended as a coderivation with respect to the \hbox{$\Z_2$}-grading. The signs introduced by the exchange rule when applying $(1\otimes l)$ would make it impossible to express the result as $\Delta$ of something. Thus we need the $\hbox{$\Z_2$}\times\mbox{$\Bbb Z$}$-grading to obtain a good theory of coderivations of the exterior coalgebra. Similarly, the \hbox{$\Z_2$}-grading is necessary to have a good theory of coderivations of the symmetric coalgebra.
\section{Cohomology of \hbox{$A_\infty$}\ algebras}\label{sect 6}
In \cite{ps2}, a generalization of an associative algebra, called a strongly homotopy associative algebra, or \hbox{$A_\infty$}\ algebra, was discussed, and cohomology and cyclic cohomology of this structure were defined. \hbox{$A_\infty$}\ algebras were introduced by J. Stasheff in \cite{sta1,sta2}. An \hbox{$A_\infty$}\ algebra structure is simply a codifferential on the tensor coalgebra; an associative algebra structure is simply a codifferential determined by a single map $m_2:V^2\rightarrow V$. We present a description of the basic theory of \hbox{$A_\infty$}\ algebras, from the coalgebra point of view. Hopefully, this will make the presentation of \mbox{$L_\infty$}\ algebras, which are given by codifferentials on the exterior coalgebra, seem more natural. If $V$ is a \hbox{$\Z_2$}-graded \mbox{\bf k}-module, then the parity reversion $\Pi V$ is the same module, but with the parity of elements reversed. In other words, $(\Pi V)_0=V_1$ and $(\Pi V)_1=V_0$, where $V_0$ and $V_1$ are the submodules of even and odd elements of $V$, resp. The map $\pi:V\rightarrow \Pi V$, which is the identity as a map of sets, is odd. There is a natural isomorphism $\eta:T(V)\rightarrow T(\Pi V)$ given by
\begin{equation}
\eta(v_1\tns\cdots\tns v_n)= \s{(n-1)v_1+\cdots+ v_{n-1}}\pi v_1\tns\cdots\tns \pi v_n
\end{equation}
Denote the restriction of $\eta$ to $V^k$ by $\eta_k$.
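As an aside, these sign conventions are mechanical enough to be checked by computer. The following minimal sketch (in Python; all names are hypothetical and chosen only for this illustration) computes the sign $\epsilon(\sigma;v_1,\cdots, v_n)$ of equation (\ref{a1}) by factoring $\sigma$ into adjacent transpositions, together with the exponent of the sign appearing in the definition of $\eta$:
\begin{verbatim}
# Minimal sketch, assuming parities are given as a list of 0/1 integers.
# koszul_sign factors sigma into adjacent transpositions; each swap of
# factors with parities a, b contributes (-1)**(a*b), as in equation (a1).

def koszul_sign(sigma, parities):
    """epsilon(sigma; v_1,...,v_n) for a 0-indexed permutation sigma."""
    perm, sign = list(sigma), 1
    for _ in range(len(perm)):
        for j in range(len(perm) - 1):
            if perm[j] > perm[j + 1]:
                sign *= (-1) ** (parities[perm[j]] * parities[perm[j + 1]])
                perm[j], perm[j + 1] = perm[j + 1], perm[j]
    return sign

def eta_sign_exponent(parities):
    """Exponent (n-1)|v_1| + (n-2)|v_2| + ... + |v_{n-1}| mod 2 of eta."""
    n = len(parities)
    return sum((n - 1 - j) * p for j, p in enumerate(parities)) % 2

# Example: interchanging two odd neighboring factors gives a minus sign.
assert koszul_sign([1, 0], [1, 1]) == -1
\end{verbatim}
The multiplicativity rule for $\epsilon(\tau\sigma)$ stated earlier can be verified numerically with the same function.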
Note that $\eta_k$ is odd when $k$ is odd and even when $k$ is even, so that $\eta$ is neither an odd nor an even mapping. Let $W=\Pi V$. Define a bijection between $C(W)=\mbox{\rm Hom}(T(W),W)$ and $C(V)=\mbox{\rm Hom}(T(V),V)$ by setting $\mu=\eta^{-1}\circ\delta\circ\eta$, for $\delta\in C(W)$. Then $\mu_k=\eta_1^{-1}\circ\delta_k\circ\eta_k$ and $\e{\mu_k}=\e{\delta_k}+(k-1)$. In particular, note that if $\delta_k$ is odd in the \hbox{$\Z_2$}-grading, then $\operatorname{bid}(\mu_k)=(k,k-1)$, so that $\mu_k$ is odd in the \hbox{$\Z_2\times\Z$}-grading. Now extend $\delta_k:W^k\rightarrow W$ to a coderivation $\hbox{$\hat \delta$}_k$ on $T(W)$ with respect to the \hbox{$\Z_2$}\ grading, so that
\begin{multline}
\hbox{$\hat \delta$}_k(w_1\tns\cdots\tns w_n)= \sum_{i=0}^{n-k} \s{(w_1+\cdots+ w_i)\delta_k}\\\times w_1\tns\cdots\tns w_i \otimes \delta_k(w_{i+1},\cdots, w_{i+k}) \otimes w_{i+k+1}\tns\cdots\tns w_n.
\end{multline}
Let $\hbox{$\bar\mu$}_k:T(V)\rightarrow T(V)$ be given by $\hbox{$\bar\mu$}_k=\eta^{-1}\circ \hbox{$\hat \delta$}_k\circ \eta$. Let $\hbox{$\hat \mu$}$ be the extension of $\mu$ as a \hbox{$\Z_2\times\Z$}-graded coderivation of $T(V)$. We wish to investigate the relationship between $\hbox{$\bar\mu$}_k$ and $\hbox{$\hat \mu$}_k$. For simplicity, write $w_i=\pi v_i$. Note that $\eta_1=\pi$ is the parity reversion operator. So we have $\delta_k=\pi\circ\mu_k\circ\eta_k^{-1}$. Thus
\begin{multline}
\hbox{$\bar\mu$}_k(v_1,\cdots, v_n)= \s{r} \eta^{-1}\hbox{$\hat \delta$}_k(w_1,\cdots, w_n)=\\ \eta^{-1}( \sum_{i=0}^{n-k} \s{r+s_i} w_1\tns\cdots\tns w_i\otimes \delta_k(w_{i+1},\cdots, w_{i+k})\otimes w_{i+k+1}\tns\cdots\tns w_{n} ) =\\ \eta^{-1}( \sum_{i=0}^{n-k} \s{r+s_i +t_i} w_1\tns\cdots\tns w_i\otimes \pi \mu_k(v_{i+1},\cdots, v_{i+k})\otimes w_{i+k+1}\tns\cdots\tns w_{n} ) =\\ \sum_{i=0}^{n-k} \s{r+s_i +t_i+u_i} v_1\tns\cdots\tns v_i\otimes \mu_k(v_{i+1},\cdots, v_{i+k})\otimes v_{i+k+1}\tns\cdots\tns v_{n},
\end{multline}
where
\begin{eqnarray*}
r&=&(n-1)v_1+\cdots+ v_{n-1}\\ s_i&=&(w_1+\cdots+ w_i)\delta_k\\ &=&(v_1+\cdots+ v_i)\mu_k+i(\mu_k+1-k) +(1-k)(v_1+\cdots+ v_i)\\ t_i&=&(k-1)v_{i+1}+\cdots+ v_{i+k-1}\\ u_i&=&(n-k)v_1+\cdots+ (n-k-i+1)v_i\\ &&+(n-k-i)(\mu_k+v_{i+1}+\cdots+ v_{i+k}) + (n-k-i-1)v_{i+k+1}+\cdots+ v_{n-1}
\end{eqnarray*}
Combining these coefficients we find that
\begin{equation}
r+s_i+t_i+u_i=(v_1+\cdots+ v_i)\mu_k+i(k-1) +(n-k)\mu_k,
\end{equation}
so that
\begin{multline}\label{hass2}
\hbox{$\bar\mu$}_k(v_1,\cdots, v_n)= \sum_{i=0}^{n-k} \s{ (v_1+\cdots+ v_i)\mu_k+i(k-1) +(n-k)\mu_k}\\\times v_1\tns\cdots\tns v_i\otimes \mu_k(v_{i+1},\cdots, v_{i+k})\otimes v_{i+k+1}\tns\cdots\tns v_{n}.
\end{multline}
Thus we see that
\begin{equation}\label{munot}
\hbox{$\bar\mu$}_k(v_1,\cdots, v_n)=\s{(n-k)\mu_k}\hbox{$\hat \mu$}_k(v_1,\cdots, v_n).
\end{equation}
Using the notation of section \ref{sect 5}, denote the restriction of $\hbox{$\hat \mu$}_k$ to $V^{k+l-1}$ by $\mu_{kl}$. Set $n=k+l-1$. Denote $\hbox{$\bar\mu$}_{kl}=\eta_l^{-1}\circ\delta_{kl}\circ\eta_{n}$, so that $\hbox{$\bar\mu$}_{kl}$ is the restriction of $\hbox{$\bar\mu$}_k$ to $V^{n}$. Then we can express equation (\ref{munot}) in the form $\hbox{$\bar\mu$}_{kl}=\s{(l-1)\mu_k}\mu_{kl}$.
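The extension formula for $\hbox{{$\hat d$}}_k$ from section \ref{sect 5} also transcribes directly into code, which is convenient for experimenting with such sign computations. A minimal sketch (Python; a formal tensor word is modelled as a tuple of (parity, label) pairs, and all names are hypothetical):
\begin{verbatim}
# Minimal sketch: the Z_2-graded extension of d_k : V^k -> V to a
# coderivation on T(V).  A word is a tuple of (parity, label) pairs;
# the result is a formal sum, i.e. a list of (sign, word) terms.

def extend(d_k, k, d_parity, word):
    n, terms = len(word), []
    for i in range(n - k + 1):
        left, mid, right = word[:i], word[i:i + k], word[i + k:]
        # Koszul sign for moving d_k past v_1, ..., v_i
        sign = (-1) ** (sum(p for p, _ in left) * d_parity)
        terms.append((sign, left + (d_k(mid),) + right))
    return terms

# Example: a binary "product" concatenating labels and adding parities.
m2 = lambda w: ((w[0][0] + w[1][0]) % 2, w[0][1] + w[1][1])
for sign, w in extend(m2, 2, 0, ((1, "a"), (0, "b"), (1, "c"))):
    print(sign, w)
\end{verbatim}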
More generally, if $\hbox{$\hat \delta$}$ is an arbitrary coderivation on $T(W)$, induced by the maps $\delta_k:W^k\rightarrow W$, then it determines maps $\mu_k:V^k\rightarrow V$, and $\hbox{$\bar\mu$}:T(V)\rightarrow T(V)$, in a similar manner, and we have
\begin{multline}
\hbox{$\bar\mu$}(v_1,\cdots, v_n)= \sum \begin{Sb} 1\le k\le n\\ \\ 0\le i\le n-k \end{Sb} \s{ (v_1+\cdots+ v_i)\mu_k+i(k-1) +(n-k)\mu_k}\\\times v_1\tns\cdots\tns v_i\otimes \mu_k(v_{i+1},\cdots, v_{i+k})\otimes v_{i+k+1}\tns\cdots\tns v_{n}.
\end{multline}
The condition that $\hbox{$\hat \delta$}$ is a codifferential on $T(W)$ can be expressed in the form
\begin{equation}
\sum_{k+l=n+1}\delta_l\circ\delta_{kl}=0
\end{equation}
for all $n\ge1$. This condition is equivalent to the condition
\begin{equation}
\sum_{k+l=n+1}\s{(l-1)\mu_k}\mu_l\circ\mu_{kl}= \sum_{k+l=n+1}\mu_l\circ\hbox{$\bar\mu$}_{kl} =0.
\end{equation}
We can express this condition in the form
\begin{multline}
\sum \begin{Sb} k+l=n+1 \\ 0\le i\le n-k \end{Sb} \s{ (v_1+\cdots+ v_i)\mu_k+i(k-1) +(n-k)\mu_k}\\\times \mu_l(v_1,\cdots, v_i,\mu_k(v_{i+1},\cdots, v_{i+k}),v_{i+k+1},\cdots, v_{n})=0.
\end{multline}
When $\hbox{$\hat \delta$}$ is an odd codifferential, $\e{\mu_k}=k$, so the sign in the expression above is simply $(v_1+\cdots+ v_i)k+i(k-1)+nk-k$. In \cite{ls,lm}, an element $\mu\in C(V)$ satisfying the equation
\begin{multline}
\sum \begin{Sb} k+l=n+1 \\ 0\le i\le n-k \end{Sb} \s{ (v_1+\cdots+ v_i)k+i(k-1) +nk-k}\\\times \mu_l(v_1,\cdots, v_i,\mu_k(v_{i+1},\cdots, v_{i+k}),v_{i+k+1},\cdots, v_{n})=0.
\end{multline}
is called a strongly homotopy associative algebra, or \hbox{$A_\infty$}\ algebra. We see that an \hbox{$A_\infty$}\ algebra structure on $V$ is nothing more than an odd codifferential on $T(W).$ This can also be expressed in terms of the bracket on $C(W)$. An odd element $\delta\in C(W)$ satisfying $[\delta,\delta]=0$ determines a codifferential $\hbox{$\hat \delta$}$ on $T(W)$. More precisely, the condition $[\delta,\delta]=0$ is equivalent to the condition $\hbox{$\hat \delta$}^2=0$. Thus $\mu\in C(V)$ determines an \hbox{$A_\infty$}\ algebra structure on $V$ when $\delta=\eta\circ\mu\circ\eta^{-1}$ satisfies $[\delta,\delta]=0$. This is not the same condition as $[\mu,\mu]=0$, nor even the condition $\hbox{$\hat \mu$}^2=0$, although we shall have more to say about this later. Next set $V=\Pi U$, and $v_i=\pi u_i$. Define $d_k:U^k\rightarrow U$ by $d_k=\eta^{-1}\circ \mu_k\circ\eta$, and $\hbox{$\bar d$}:T(U)\rightarrow T(U)$ by $\hbox{$\bar d$}=\eta^{-1}\circ\hbox{$\bar\mu$}\circ\eta$. Then $\e{d_k}=\e{\mu_k}+(k-1)=\e{\delta_k}$. Then by the same reasoning as above, we see that
\begin{multline}
\hbox{$\bar d$}_k(u_1,\cdots, u_n)= \sum_{i=0}^{n-k} \s{(u_1+\cdots+ u_i)d_k+n-nk}\\\times u_1\tns\cdots\tns u_i\otimes d_k(u_{i+1},\cdots, u_{i+k})\otimes u_{i+k+1}\tns\cdots\tns u_{n}.
\end{multline}
Arbitrary coderivations are treated in the same manner. Continuing this process, let us suppose that $U=\Pi X$, and $u_i=\pi x_i$. Define $m_k:X^k\rightarrow X$ by $m_k=\eta^{-1}\circ d_k\circ\eta$, and $\hbox{$\bar m$}:T(X)\rightarrow T(X)$ by $\hbox{$\bar m$}=\eta^{-1}\circ \hbox{$\bar d$} \circ\eta$. Then $\e{m_k}=\e{d_k}+(k-1)=\e{\mu_k}$. Then we obtain that
\begin{multline}
\hbox{$\bar m$}_k(x_1,\cdots, x_n)= \sum_{i=0}^{n-k} \s{(x_1+\cdots+ x_i)m_k+i(k-1)+n-nk+(n-k)m_k}\\\times x_1\tns\cdots\tns x_i\otimes m_k(x_{i+1},\cdots, x_{i+k})\otimes x_{i+k+1}\tns\cdots\tns x_{n}.
\end{multline}
If we extend this process to an arbitrary coderivation, and consider the signs which would result from assuming that $\delta$ is an odd codifferential, then we would obtain that
\begin{multline}\label{mysigns}
\sum \begin{Sb} 0\le i\le n-k\\ \\ k+l=n+1 \end{Sb} \s{(x_1+\cdots+ x_i)k+i(k-1)+n-k}\\\times m_l(x_1,\cdots, x_i,m_k(x_{i+1},\cdots, x_{i+k}),x_{i+k+1},\cdots, x_{n})=0.
\end{multline}
The signs in the expression above agree with the signs in the definition of an \hbox{$A_\infty$}\ algebra as given in \cite{ps2,kon}. Finally, suppose that $X=\Pi Y$ and $y_i=\pi x_i$, and that we define $\delta'_k= \eta^{-1}\circ m_k\circ\eta$, and $\hbox{$\bar\delta$}'=\eta^{-1}\circ \hbox{$\bar m$} \circ\eta$. Then $\e{\delta'_k}=\e{m_k}+(k-1)=\e{\delta_k}$. Then we obtain that
\begin{multline}
\hbox{$\bar\delta$}'_k(y_1,\cdots, y_n)= \sum_{i=0}^{n-k} \s{(y_1+\cdots+ y_i)\delta_k}\\\times y_1\tns\cdots\tns y_i\otimes \delta'_k(y_{i+1},\cdots, y_{i+k})\otimes y_{i+k+1}\tns\cdots\tns y_{n}.
\end{multline}
Note that the signs occurring in this last expression are precisely the same as for the original $\delta$. Thus it takes four rounds of parity reversion to obtain the original signs. From this construction, it is clear that the signs which arise in equation (\ref{mysigns}) are those which would be obtained if we take $V=\Pi W$, and map $C(W)$ into $C(V)$ by setting $m=\eta\circ d\circ\eta^{-1}$ for $d\in C(W)$. Then a codifferential $d$ determines an \hbox{$A_\infty$}\ structure on $V$ in the sense of \cite{ps2,kon}. {}From these observations, we see that the two sign conventions for a homotopy associative algebra originate because there are two natural choices for the relationship between the space $W$, which carries the structure of an odd codifferential with respect to the usual \hbox{$\Z_2$}-grading, and $V$, which is its dual. One may choose either $W=\Pi V$, to get the signs in \cite{ls,lm}, or $V=\Pi W$, to get the signs in \cite{ps2,kon}. (Actually, one can vary the definition of $\eta$ to obtain both sets of signs from either one of these models.) A natural question is why do we consider codifferentials on the tensor coalgebra $T(W)$ of the parity reversion $W$ of $V$, rather than codifferentials on $T(V)$ in the definition of an \hbox{$A_\infty$}\ algebra? In fact, \hbox{$A_\infty$}\ algebras are generalizations of associative algebras, and an associative algebra structure on $V$ is determined by a \hbox{$\Z_2\times\Z$}-graded odd codifferential on $T(V)$. As a matter of fact, a map $m_2:V^2\rightarrow V$ is an associative multiplication exactly when $\hbox{$\hat m$}_2$ is an odd codifferential. The answer is that odd codifferentials with respect to the \hbox{$\Z_2$}-grading have better properties with respect to the Lie bracket structure. Let us examine the bracket structure on the space $\operatorname{Coder}(T(V))$. In \cite{gers}, M. Gerstenhaber defined a bracket on the space of cochains of an associative algebra, which we shall call the Gerstenhaber bracket. When $V$ is concentrated in degree zero, in other words, in the non-\hbox{$\Z_2$}-graded case, the Gerstenhaber bracket is just the bracket of coderivations, with the \mbox{$\Bbb Z$}-grading. Thus the Gerstenhaber bracket is given by
\begin{equation}
[\varphi_k,\psi_l]=\varphi_k\psi_l-\s{(k-1)(l-1)}\psi_l\varphi_k,
\end{equation}
for $\varphi\in C^k(V)$ and $\psi\in C^l(V)$.
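In the ungraded case this bracket can be realized concretely on multilinear maps, which may help fix the conventions. A minimal sketch (Python; cochains are modelled as functions on tuples, and all names are hypothetical):
\begin{verbatim}
# Minimal sketch of the (ungraded) Gerstenhaber bracket.  A k-cochain is
# a Python function of k arguments; circle is Gerstenhaber's composition
# product, and bracket implements the displayed formula.

def circle(phi, k, psi, l):
    def composite(*args):  # args has length k + l - 1
        total = 0
        for i in range(k):
            inner = psi(*args[i:i + l])
            total += (-1) ** (i * (l - 1)) * phi(*(args[:i] + (inner,) + args[i + l:]))
        return total
    return composite

def bracket(phi, k, psi, l):
    pp, qq = circle(phi, k, psi, l), circle(psi, l, phi, k)
    return lambda *a: pp(*a) - (-1) ** ((k - 1) * (l - 1)) * qq(*a)

# For an associative multiplication m, [m, m] = 0:
m = lambda a, b: a * b
assert bracket(m, 2, m, 2)(2.0, 3.0, 5.0) == 0.0
\end{verbatim}
With these conventions $[m,m](a,b,c)=2\big((ab)c-a(bc)\big)$ for a binary map $m$, so the bracket vanishes precisely on associative products.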
One of the main results of \cite{gers} is that the differential $D$ of cochains in the cohomology of an associative algebra can be expressed in terms of the bracket. It was shown that $D(\varphi)=[\varphi,m]$ where $m$ is the cochain representing the associative multiplication. This formulation leads to a simple proof that $D^2=0$, following from the properties of a \mbox{$\Bbb Z$}-graded Lie algebra. The associativity of $m$ is simply the condition $[m,m]=0$. Let us recall the proof that an odd homogeneous element $m$ of a graded Lie algebra satisfying $[m,m]=0$ gives rise to a differential $D$ on the algebra by defining $D(\varphi)=[\varphi,m]$. In other words, we need to show that $[[\varphi,m],m]=0$. Recall that $m$ is odd when $\ip{\e m}{\e m}=1$. The graded Jacobi identity gives
\begin{equation}
[[\varphi,m],m] =[\varphi,[m,m]]+\s{\ip mm}[[\varphi,m],m]=-[[\varphi,m],m],
\end{equation}
which shows the desired result, since we are in characteristic zero. Moreover, we point out that the Jacobi identity also shows that $D([\varphi,\psi])=[\varphi,D(\psi)]+\s{\psi D}[D(\varphi),\psi]$, so the differential in the cohomology of an associative algebra acts as a graded derivation of the Lie algebra, equipping $C(V)$ with the structure of a differential graded Lie algebra. We wish to generalize the Gerstenhaber bracket to the \hbox{$A_\infty$}\ algebra case, where we are considering a more general codifferential $m$ on $T(V)$, in such a manner that the bracket with $m$ yields a differential graded Lie algebra structure on $C(V)$. If we consider the bracket of coderivations, then a problem arises when the codifferential is not homogeneous. First of all, $\hbox{$\hat m$}^2=0$ does imply that $[m,m]=0$, but the converse is not true in general. Secondly, if we define $D(\varphi)=[\varphi,m]$, then we do not obtain in general that $D^2=0$. To see this, note that $[m,m]=0$ is equivalent to $\sum_{k+l=n+1}[m_k,m_l]=0$ for all $n\ge 1$. Let $\varphi_p\in\mbox{\rm Hom}(V^p,V)$. Then
\begin{multline}
[[\varphi_p,m],m]_{n+p-1}= \sum_{k+l=n+1}[[\varphi_p,m_k],m_l]=\\ \sum_{k+l=n+1}[\varphi_p,[m_k,m_l]]+ \s{\ip{m_k}{m_l}} [[\varphi_p,m_l],m_k]=\\ \sum_{k+l=n+1}\s{k+l+1} [[\varphi_p,m_l],m_k]= \s{n}[[\varphi_p,m],m]_{n+p-1}
\end{multline}
Thus we only obtain cancellation of terms when $n$ is odd. However, this is sufficient to show that in the particular case where $m_k=0$ for all even or all odd $k$, then $D^2=0$. In this case we also can show that $\hbox{$\hat m$}^2=0$ is equivalent to $[m,m]=0$ as well. These problems occur because \hbox{$\Z_2\times\Z$}\ is not a good grading group. Since these problems do not arise if we are considering \hbox{$\Z_2$}-graded codifferentials (or \mbox{$\Bbb Z$}-graded codifferentials, in the \mbox{$\Bbb Z$}-graded case), it is natural to consider a codifferential on the parity reversion (or suspension) of $V$, and to use its properties to get a better behaved structure. For the remainder of this section, let us assume for definiteness that $W=\Pi V$ and that $d\in C(W)$ satisfies $\hbox{{$\hat d$}}^2=0$. In terms of the associative product induced on $C(W)$, this is the same condition as $d^2=0$. Because $W$ is \hbox{$\Z_2$}-graded, $d^2=0$ is equivalent to $[d,d]=0$ for $d$ odd, and moreover, $C(W)$ is a differential graded Lie algebra with differential $D(\varphi)=[\varphi,d]$. Thus we have no problems with the differential on the $W$ side. Suppose that $m_k=\eta_1^{-1}\circ d_k\circ\eta_k$ and $\mu_l=\eta_1^{-1}\circ\delta_l\circ\eta_l$.
Define a new bracket $\left\{\cdot,\cdot\right\}$ on $C(V)$ by $\br{m_k}{\mu_l}=\eta_1^{-1}\circ[d_k,\delta_l]\circ\eta_{k+l-1}$. Then it follows easily that
\begin{equation}
\br{m_k}{\mu_l}= \s{(k-1)\mu_l}[m_k,\mu_l].
\end{equation}
Of course this new bracket no longer satisfies the graded antisymmetry or graded Jacobi identity with respect to the inner product we have been using on \hbox{$\Z_2\times\Z$}. However, the bracket does satisfy these properties with respect to a different inner product on \hbox{$\Z_2\times\Z$}. We state this result in the form of a lemma.
\begin{lma}\label{lma1}
Let $V$ be equipped with a \hbox{$\Z_2\times\Z$}-graded Lie bracket $[\cdot,\cdot]$ with respect to the inner product $\ip{(\bar m,n)}{(\bar m',n')}=\bar m\bar m'+nn'$ on \hbox{$\Z_2\times\Z$}. Then the bracket $\left\{\cdot,\cdot\right\}$ on $V$ given by $\br uv=\s{\operatorname{deg}(u)|v|}[u,v]$ defines the structure of a \hbox{$\Z_2\times\Z$}-graded Lie algebra on $V$ with respect to the inner product $\ip{(\bar m,n)}{(\bar m',n')}=(\bar m+n)(\bar m'+n')$ on \hbox{$\Z_2\times\Z$}. The bracket $\left\{\cdot,\cdot\right\}$ given by $\brt uv=\s{\operatorname{deg}(u)(|v|+\operatorname{deg}(v))}[u,v]$ also defines the structure of a \hbox{$\Z_2\times\Z$}-graded Lie algebra on $V$ with respect to the second inner product.
\end{lma}
The proof of the above lemma is straightforward, and will be omitted. Because the bracket $\left\{\cdot,\cdot\right\}$ on $C(V)$ really coincides with the bracket of coderivations on $C(W)$, we know that if $[d,d]=0$, then $\br mm=0$, and we can define a differential on $C(V)$ by $D(\varphi)=\br\varphi m$, so that we can define a cohomology theory for an \hbox{$A_\infty$}\ algebra. We have been considering the picture $W=\Pi V$. This means we have been considering \hbox{$A_\infty$}\ algebras as defined in \cite{ls,lm}. If we consider instead the picture $V=\Pi W$, then the bracket induced on $C(V)$ by that on $C(W)$ will be given by $\br{m_k}{\mu_l}=\s{(k-1)(\mu_l+l-1)}[m_k,\mu_l]$, which is the second modified bracket described in the lemma. Thus we have similar results. We shall call either one of these two brackets the modified Gerstenhaber bracket. The Hochschild cohomology of \hbox{$A_\infty$}\ algebras was defined in \cite{ps2} as the cohomology given by $D(\varphi)=\br \varphi m$, and it was shown that this cohomology classifies the infinitesimal deformations of an \hbox{$A_\infty$}\ algebra. We shall not go into the details here. It is important to note, however, that unlike the cohomology theory for an associative algebra, there is only one cohomology group $H(V)$. The reason is that the image of $\varphi\in C^n(V)$ under the coboundary operator has a part in all $C^k(V)$ with $k\ge n$. Only in the case of an associative or differential graded associative algebra do we get a stratification of the cohomology. Now let us suppose that $V$ is equipped with an inner product $\left<\cdot,\cdot\right>$. The inner product induces an isomorphism between $\mbox{\rm Hom}(V^k,V)$ and $\mbox{\rm Hom}(V^{k+1},\mbox{\bf k})$, given by $\varphi\mapsto\tilde\varphi$, where
\begin{equation}
\tilde\varphi(v_1,\cdots, v_{k+1})=\ip{\varphi(v_1,\cdots, v_k)}{v_{k+1}}.
\end{equation}
An element $\varphi\in\mbox{\rm Hom}(V^k,V)$ is said to be cyclic with respect to the inner product if
\begin{equation}
\ip{\varphi(v_1,\cdots, v_k)}{v_{k+1}}= \s{k+v_1\varphi} \ip{v_1}{\varphi(v_2,\cdots, v_{k+1})}.
\end{equation}
Then $\varphi$ is cyclic if and only if $\tilde\varphi$ is cyclic in the sense that
\begin{equation}
\tilde\varphi(v_1,\cdots, v_{k+1})= \s{k + v_{k+1}(v_1+\cdots+ v_k)} \tilde\varphi(v_{k+1},v_1,\cdots, v_k).
\end{equation}
The only differences between the definitions here and the definitions given in section \ref{sect 2} are that $\tilde\varphi$ is not completely antisymmetric, and that the isomorphism identifies $\mbox{\rm Hom}(V^{k+1},\mbox{\bf k})$ with the full module $C^k(V)$, not just the submodule consisting of cyclic elements. If $m\in C(V)$, then we say that $m$ is cyclic if $m_k$ is cyclic for all $k$. If $m$ determines an \hbox{$A_\infty$}\ algebra structure on $V$, then we say that the inner product is invariant if $m$ is cyclic with respect to the inner product. The following lemma shows how to construct cyclic elements from arbitrary elements of $\mbox{\rm Hom}(\mbox{V}^{n+1},\mbox{\bf k})$.
\begin{lma}\label{lma2}
Suppose that $\hbox{$\tilde f$}\in\mbox{\rm Hom}(V^{n+1},\mbox{\bf k})$. Define $C(\hbox{$\tilde f$}):V^{n+1}\rightarrow\mbox{\bf k}$ by
\begin{multline}
C(\hbox{$\tilde f$})(v_1,\cdots, v_{n+1})=\\ \sum_{0\le i\le n} \s{(v_1+\cdots+ v_i)(v_{i+1}+\cdots+ v_{n+1})+ni} \hbox{$\tilde f$}(v_{i+1},\cdots, v_{i}),
\end{multline}
where the indices are interpreted $\mod n+1$. Then $C(\hbox{$\tilde f$})$ is cyclic. Furthermore, $C(\hbox{$\tilde f$})=(n+1)\hbox{$\tilde f$}$ if $\hbox{$\tilde f$}$ is cyclic.
\end{lma}
We shall also need another lemma, which simplifies computations with cyclic elements.
\begin{lma}\label{lma3}
Suppose that $\tilde f\in\mbox{\rm Hom}(V^{n+1},\mbox{\bf k})$ is cyclic, $\alpha=v_1\tns\cdots\tns v_i$ and $\beta=v_{i+1}\tns\cdots\tns v_{n+1}$. Then
\begin{equation}
\tilde f(\alpha\otimes\beta)=\s{\alpha\beta+in}\tilde f(\beta\otimes\alpha)
\end{equation}
\end{lma}
Denote the submodule of cyclic elements in $C(V)$ by $CC(V)$. The following lemma records the fact that $CC(V)$ is a graded Lie subalgebra of $C(V)$, with respect to the bracket of coderivations.
\begin{lma}\label{lma4}
Let $\varphi_k\in C^k(V)$ and $\psi_l\in C^l(V)$. If $\varphi$ and $\psi$ are cyclic, then so is $[\varphi,\psi]$. Moreover, if $n=k+l-1$, then
\begin{multline}\label{cycrel}
\widetilde{[\varphi,\psi]}(v_1,\cdots, v_{n+1})=\\ \sum \begin{Sb} 0\le i\le n \end{Sb} \s{(v_1+\cdots+ v_i)(v_{i+1}+\cdots+ v_{n+1})+in} \tilde\varphi_k(\psi_l(v_{i+1},\cdots, v_{i+l}),v_{i+l+1},\cdots, v_i),
\end{multline}
where in the expression above, indices should be interpreted $\mod n+1$.
\end{lma}
\begin{pf}
The proof is straightforward, but involves a couple of tricks from mod 2 addition in the signs, so we present it. Denote $\rho=[\varphi,\psi]$. Then
\begin{multline}
\tilde\rho(v_1,\cdots, v_{n+1})=\\ \sum_{0\le i\le k-1} \s{(v_1+\cdots+ v_i)\psi+i(l-1)} \tilde\varphi(v_1,\cdots, v_i,\psi(v_{i+1},\cdots, v_{i+l}),v_{i+l+1},\cdots, v_{n+1})\\ -\s{\varphi\psi+(k-1)(l-1)}\times\\ \sum_{0\le i\le l-1} \s{(v_1+\cdots+ v_i)\varphi+i(k-1)} \tilde\psi(v_1,\cdots, v_i,\varphi(v_{i+1},\cdots, v_{i+k}),v_{i+k+1},\cdots, v_{n+1})
\end{multline}
Let us express the first term above, using the cyclicity of $\varphi$.
\begin{multline}
\sum_{0\le i\le k-1}\s{r_i} \tilde\varphi(v_1,\cdots, v_i,\psi(v_{i+1},\cdots, v_{i+l}),v_{i+l+1},\cdots, v_{n+1})=\\ \sum_{0\le i\le k-1}\s{r_i+s_i} \tilde\varphi(\psi(v_{i+1},\cdots, v_{i+l}), v_{i+l+1},\cdots, v_i),
\end{multline}
where
\begin{eqnarray}
r_i&=&(v_1+\cdots+ v_i)\psi+i(l-1)\\ s_i&=&(v_1+\cdots+ v_i)(\psi + v_{i+1}+\cdots+ v_{n+1}) +ki,
\end{eqnarray}
so that the coefficient is $r_i+s_i=(v_1+\cdots+ v_i)(v_{i+1}+\cdots+ v_{n+1}) +ni$. The second term requires some additional manipulations.
\begin{multline}
\sum_{0\le i\le l-1} \s{r_i} \tilde\psi(v_1,\cdots, v_i,\varphi(v_{i+1},\cdots, v_{i+k}),v_{i+k+1},\cdots, v_{n+1}) =\\ \sum_{0\le i\le l-1} \s{r_i+s_i} \tilde\psi(v_{i+k+1},\cdots, v_i,\varphi(v_{i+1},\cdots, v_{i+k})) =\\ \sum_{0\le i\le l-1} \s{r_i+s_i} \ip{\psi(v_{i+k+1},\cdots, v_i)}{\varphi(v_{i+1},\cdots, v_{i+k})} =\\ \sum_{0\le i\le l-1} \s{r_i+s_i+t_i} \ip{\varphi(v_{i+1},\cdots, v_{i+k})}{\psi(v_{i+k+1},\cdots, v_i)} =\\ \sum_{0\le i\le l-1} \s{r_i+s_i+t_i} \tilde\varphi(v_{i+1},\cdots, v_{i+k},\psi(v_{i+k+1},\cdots, v_i)) =\\ \sum_{0\le i\le l-1} \s{r_i+s_i+t_i+u_i} \tilde\varphi(\psi(v_{i+k+1},\cdots, v_i),v_{i+1},\cdots, v_{i+k}),
\end{multline}
where
\begin{eqnarray}
r_i&=&\varphi\psi+(k-1)(l-1)+1+(v_1+\cdots+ v_i)\varphi +i(k-1)\\ s_i&=&(v_1+\cdots+ v_{i+k}+\varphi)(v_{i+k+1}+\cdots+ v_{n+1})+(i+1)l\\ t_i&=&(\psi+v_{i+k+1}+\cdots+ v_i)(\varphi+v_{i+1}+\cdots+ v_{i+k})\\ u_i&=&(v_{i+1}+\cdots+ v_{i+k})(\psi+v_{i+k+1}+\cdots+ v_i)+k
\end{eqnarray}
so that the sum of these coefficients is $(v_1+\cdots+ v_{i+k})(v_{i+k+1}+\cdots+ v_{n+1})+in+kl$. But now we use the fact that $kn=k(l+k-1)=kl+k(k-1)=kl\mod 2$, so that $in+kl=(i+k)n$, and re-indexing with $i\mapsto i+k$ allows us to express the second term as
\begin{equation}
\sum_{k\le i\le n} \s{(v_1+\cdots+ v_i)(v_{i+1}+\cdots+ v_{n+1})+ni} \tilde\varphi(\psi(v_{i+1},\cdots, v_{i+l}),v_{i+l+1},\cdots, v_i).
\end{equation}
Thus we have shown that equation (\ref{cycrel}) holds, and this is enough to show that $\tilde\rho$ is cyclic, by lemma \ref{lma2}.
\end{pf}
Since $\br{\varphi}{\psi}=\s{(k-1)\psi_l}[\varphi,\psi]$, it follows that the modified Gerstenhaber bracket of cyclic elements is also cyclic. Thus finally, we can state the main theorem, which allows us to define cyclic cohomology of an \hbox{$A_\infty$}\ algebra.
\begin{thm}\label{thm3}
i) Suppose that $V$ is a \hbox{$\Z_2$}-graded \mbox{\bf k}-module with an inner product $\left<\cdot,\cdot\right>$. Suppose that $\varphi, \psi\in C(V)$ are cyclic. Then $\br{\varphi}{\psi}$ is cyclic. Furthermore, the formula below holds.
\begin{multline}
\widetilde{\br{\varphi}{\psi}}(v_1,\cdots, v_{n+1})= \sum \begin{Sb} k+l=n+1\\ \\ 0\le i\le n \end{Sb} \s{(v_1+\cdots+ v_i)(v_{i+1}+\cdots+ v_{n+1})+in+(k-1)\psi_l}\times\\ \tilde\varphi_k(\psi_l(v_{i+1},\cdots, v_{i+l}),v_{i+l+1},\cdots, v_i),
\end{multline}
where in the expression above, indices should be interpreted $\mod n+1$. Thus the inner product induces a structure of a \hbox{$\Z_2\times\Z$}-graded Lie algebra in the module $CC(V)$ consisting of all cyclic elements in $C(V)$, by defining $\br{\tilde\varphi}{\tilde\psi}=\widetilde{\br{\varphi}{\psi}}$.
ii) If $m$ is an \hbox{$A_\infty$}\ structure on $V$, then there is a differential in $CC(V)$, given by
\begin{multline}
D(\tilde\varphi)(v_1,\cdots, v_{n+1})= \sum \begin{Sb} k+l=n+1\\ \\ 0\le i\le n \end{Sb} \s{(v_1+\cdots+ v_i)(v_{i+1}+\cdots+ v_{n+1})+in +(k-1)l}\times\\ \tilde\varphi_k(m_l(v_{i+1},\cdots, v_{i+l}),v_{i+l+1},\cdots, v_i),
\end{multline}
where in the expression above, $i$ should be interpreted $\mod n+1$. iii) If the inner product is invariant, then $D(\tilde\varphi)=\br{\tilde\varphi}{\tilde m}$. Thus $CC(V)$ inherits the structure of a differential graded Lie algebra.
\end{thm}
\begin{pf}
The first statement follows from lemma \ref{lma4}. The second assertion follows immediately from the first when the inner product is invariant; the general case is a routine verification, which we omit. The third statement follows from the first two.
\end{pf}
We define $HC(V)$ to be the cohomology associated to the coboundary operator on $CC(V)$. As in the case of Hochschild cohomology, there is no stratification of the cohomology as $HC^n(V)$. Of course, $HC(V)$ is \hbox{$\Z_2$}-graded.
\section{Cohomology of \mbox{$L_\infty$}\ algebras}\label{sect 7}
There is a natural isomorphism $\eta$ between $\bigwedge V$ and $\bigodot(\Pi V)$ which is given by
\begin{equation}
\eta(v_1\wedge\cdots\wedge v_n)= \s{(n-1)v_1+\cdots+ v_{n-1}} \pi v_1\odot\cdots\odot \pi v_n,
\end{equation}
Note that $\eta$ is neither even nor odd. The restriction $\eta_k$ of $\eta$ to $\bigwedge^k V$ has parity $k$. Of course, $\eta$ does preserve the exterior degree. For simplicity in the following, let $W=\Pi V$ and let $w_i=\pi v_i$, and denote $C(W)=\mbox{\rm Hom}(\bigodot W,W)$, and $C(V)=\mbox{\rm Hom}(\bigwedge V,V)$. We will use notational conventions as in section \ref{sect 6}, so that for $d\in C(W)$, $d_k$ will denote the restriction of this map to $\bigodot^k W$, $d_{lk}$ will denote the restriction of the associated coderivation $\hbox{{$\hat d$}}_l$ to $\bigodot^{k+l-1} W$, etc. The following lemma will be useful later on.
\begin{lma}\label{lma5}
Suppose that $\sigma$ is a permutation of $n$ elements. Then
\begin{multline}
\s{(n-1)v_1+\cdots+ v_{n-1}}\s{\sigma}\epsilon(\sigma;v_1,\cdots, v_n)=\\ \s{(n-1)v_{\sigma(1)}+\cdots+ v_{\sigma(n-1)}} \epsilon(\sigma;w_1,\cdots, w_n).
\end{multline}
\end{lma}
\begin{pf}
{}From the properties of the graded exterior algebra, we have
\begin{multline}
\eta(v_{\sigma(1)}\wedge\cdots\wedge v_{\sigma(n)})=\\ \s{\sigma}\epsilon(\sigma;v_1,\cdots, v_n) \eta(v_1\wedge\cdots\wedge v_n)=\\ \s{\sigma}\epsilon(\sigma;v_1,\cdots, v_n) \s{(n-1)v_1+\cdots+ v_{n-1}} w_1\odot\cdots\odot w_n.
\end{multline}
On the other hand, by direct substitution, we have
\begin{multline}
\eta(v_{\sigma(1)}\wedge\cdots\wedge v_{\sigma(n)})=\\ \s{(n-1)v_{\sigma(1)}+\cdots+ v_{\sigma(n-1)}} w_{\sigma(1)}\odot\cdots\odot w_{\sigma(n)}=\\ \s{(n-1)v_{\sigma(1)}+\cdots+ v_{\sigma(n-1)}} \epsilon(\sigma;w_1,\cdots, w_n)w_1\odot\cdots\odot w_n
\end{multline}
Comparing the coefficients of the two expressions yields the desired result.
\end{pf}
Let $d\in C(W)$ and define $l_k=\eta_1^{-1}\circ d_k\circ\eta_k$, so that
\begin{equation}
d_k(w_{\sigma(1)},\cdots, w_{\sigma(k)})= \s{(k-1)v_{\sigma(1)}+\cdots+ v_{\sigma(k-1)}} \pi l_k(v_{\sigma(1)},\cdots, v_{\sigma(k)}).
\end{equation}
Let us abbreviate $\epsilon(\sigma;v_1,\cdots, v_n)$ by $\epsilon(\sigma;v)$ and $\epsilon(\sigma;w_1,\cdots, w_n)$ by $\epsilon(\sigma;w)$. Let $n=k+l-1$.
Define $\bar l_{lk}=\eta_k^{-1}\circ d_{lk}\circ\eta_{n}$, where $d_{lk}$ is given by
\begin{equation}
d_{lk}(w_1,\cdots, w_n)= \sum_{\sigma\in\sh l{n-l}} \epsilon(\sigma,w) d_l(w_{\sigma(1)},\cdots, w_{\sigma(l)})\odot w_{\sigma(l+1)}\odot\cdots\odot w_{\sigma(n)}.
\end{equation}
We wish to compute $\bar l_{lk}$ in terms of $l_l$. Now
\begin{multline}
\bar l_{lk}(v_1,\cdots, v_n)=\\ \sum_{\sigma\in\sh l{n-l}} \!\!\! \epsilon(\sigma,w) \s{(n-1)v_1+\cdots+ v_{n-1}} \eta_k^{-1}( d_l(w_{\sigma(1)},\cdots, w_{\sigma(l)})\odot w_{\sigma(l+1)}\odot\cdots\odot w_{\sigma(n)}) \\= \sum_{\sigma\in\sh l{n-l}} \s{\sigma}\epsilon(\sigma,v) \s{r} \eta_k^{-1}( d_l(w_{\sigma(1)},\cdots, w_{\sigma(l)})\odot w_{\sigma(l+1)}\odot\cdots\odot w_{\sigma(n)}) \\= \sum_{\sigma\in\sh l{n-l}} \s{\sigma}\epsilon(\sigma,v) \s{r+s} \eta_k^{-1}( \pi l_l(v_{\sigma(1)},\cdots, v_{\sigma(l)})\odot w_{\sigma(l+1)}\odot\cdots\odot w_{\sigma(n)}) \\= \sum_{\sigma\in\sh l{n-l}} \s{\sigma}\epsilon(\sigma,v) \s{r+s+t} l_l(v_{\sigma(1)},\cdots, v_{\sigma(l)})\wedge v_{\sigma(l+1)}\wedge\cdots\wedge v_{\sigma(n)} \\= \sum_{\sigma\in\sh l{n-l}} \s{\sigma}\epsilon(\sigma,v) \s{(k-1)l_l} l_l(v_{\sigma(1)},\cdots, v_{\sigma(l)})\wedge v_{\sigma(l+1)}\wedge\cdots\wedge v_{\sigma(n)}.
\end{multline}
where
\begin{eqnarray}
r&=&(n-1)v_{\sigma(1)}+\cdots+ v_{\sigma(n-1)}\\ s&=&(l-1)v_{\sigma(1)}+\cdots+ v_{\sigma(l-1)}\\ t&=&(k-1)(l_l+v_{\sigma(1)}+\cdots+ v_{\sigma(l)})+(k-2)v_{\sigma(l+1)}+\cdots+ v_{\sigma(n-1)}
\end{eqnarray}
Thus we deduce immediately that $\bar l_{lk}=\s{(k-1)l_l}l_{lk}$, where $l_{lk}$ is the restriction of the coderivation $\hbox{$\hat l$}_l$ to $\bigwedge^{k+l-1}V$. This formula is identical to the formula we deduced in section \ref{sect 6} connecting $\hbox{$\bar\mu$}_{kl}$ to $\mu_{kl}$. Now suppose that $d$ is an odd codifferential, so that $d^2=0$. This is equivalent to $\sum_{k+l=n+1} d_k\circ d_{lk}=0$, which is equivalent to the relations $\sum_{k+l=n+1} \s{(k-1)l}l_k\circ l_{lk}=0$, since $\e{l_l}=l$. This last relation can be put in the form
\begin{multline}
\sum \begin{Sb} k+l=n+1\\ \sigma\in\sh l{n-l} \end{Sb} \s{\sigma}\epsilon(\sigma,v) \s{(k-1)l} l_k(l_l(v_{\sigma(1)},\cdots, v_{\sigma(l)}), v_{\sigma(l+1)},\cdots, v_{\sigma(n)})=0.
\end{multline}
We say that the maps $l_k$ induce the structure of an \mbox{$L_\infty$}\ algebra, or strongly homotopy Lie algebra, on $V$. In \cite{ls}, the sign $k(l-1)$ instead of $(k-1)l$ appears in the definition, but since $k(l-1)-(k-1)l\equiv n+1 \pmod 2$, this makes no difference in the relations. If we define $[\varphi,\psi]$ to be the bracket of coderivations, then we can define the modified Gerstenhaber bracket $\br{\varphi}{\psi}=\s{\operatorname{deg}\varphi\e\psi}[\varphi,\psi]$. The definition of an \mbox{$L_\infty$}\ algebra can be recast in terms of the bracket. In this language, $l\in C(V)$ determines an \mbox{$L_\infty$}\ structure on $V$ when $\br ll=0$. The cohomology of an \mbox{$L_\infty$}\ algebra is defined to be the cohomology on $C(V)$ induced by $l$, in other words, $D(\varphi)=\br{\varphi}{l}$. This definition makes $C(V)$ a differential graded Lie algebra, with respect to the second inner product on \hbox{$\Z_2\times\Z$}. These results are completely parallel to the \hbox{$A_\infty$}\ case. In \cite{ps2}, the relationship between infinitesimal deformations of an \hbox{$A_\infty$}\ algebra and the cohomology of the \hbox{$A_\infty$}\ algebra was explored. The basic result is that the cohomology classifies the infinitesimal deformations.
Since we did not explore this matter here, we shall discuss the parallel result for \mbox{$L_\infty$}\ algebras. An infinitesimal deformation $l_t$ of an \mbox{$L_\infty$}\ algebra is given by taking $l_t=l+t\lambda$, where the parity of $t$ is chosen so that $(l_t)_k$ has parity $k$, so that we must have $\e{\lambda_k}=\e{t}+k$. Since $t$ must have fixed parity, this determines the parity of $\lambda_k$. The situation is more transparent if we switch to the $W$ picture, so suppose that $l=\eta^{-1}\circ d\circ \eta$, and $\lambda=\eta^{-1}\circ\delta\circ\eta$. Let $d_t=d +t\delta$, and let us suppose that $d^2=0$, which is equivalent to $l$ giving an \mbox{$L_\infty$}\ structure on $V$. Then $d_t$ is an infinitesimal deformation of $d$ if $d_t^2=0$. Since $t$ is an infinitesimal parameter, $t^2=0$, but we also want the parity of $d_t$ to be odd, so that $\e t=1- \e\delta$. (We assume here that $\delta$ is homogeneous.) Now $d_t^2=0$ is equivalent to $d^2+t\delta d+ dt\delta=0$. Since $dt=\s{td}td$, the condition reduces to $\delta d+\s{td}d\delta=0$. Now using the fact that $\s{td}=\s{1-\delta}$, we see that $d_t$ is an infinitesimal deformation precisely when $[\delta, d]=0$; in other words, infinitesimal deformations are given by cocycles in the cohomology of the \mbox{$L_\infty$}\ algebra. Trivial deformations are more complicated, because we need to consider $d$ as a codifferential of $\bigodot W$, and we define two codifferentials to be equivalent if there is an automorphism of $\bigodot W$ which takes one of them to the other. So trivial deformations depend on the structure of $\bigodot W$, and are not simply given by taking an isomorphism of $W$ to itself. However, one can show that $d_t$ is a trivial infinitesimal deformation precisely when $\delta$ is a coboundary. Thus the cohomology of $C(W)$ classifies the infinitesimal deformations of the \mbox{$L_\infty$}\ algebra. When we transfer this back to the $V$ picture, note that $l+\s{t}t\lambda$ is the deformed product associated to $d_t$. Nevertheless, the condition for $l_t$ to be an \mbox{$L_\infty$}\ algebra still is that $\br{\lambda}{l}=0$. We state these results in the theorem below.
\begin{thm}
Let $l$ be an \mbox{$L_\infty$}\ algebra structure on $V$. Then the cohomology $H(V)$ of $C(V)$ classifies the infinitesimal deformations of the \mbox{$L_\infty$}\ algebra.
\end{thm}
Moreover, let us suppose that $l$ is a Lie algebra structure on $V$. Then the Lie algebra coboundary operator on $V$ coincides up to a sign with the \mbox{$L_\infty$}\ algebra coboundary operator on $V$. This gives a nice interpretation of the cohomology of a Lie algebra.
\begin{thm}
Let $l$ be a Lie algebra structure on $V$. Then the Lie algebra cohomology $H(V)$ of $V$ classifies the infinitesimal deformations of the Lie algebra into an \mbox{$L_\infty$}\ algebra.
\end{thm}
Now let us address the case when $V$ is equipped with an inner product. The bracket of coderivations is the same bracket that was introduced in equation (\ref{nbra}) of section \ref{sect 2}, so that theorem \ref{th2} applies, and we know that the bracket of two cyclic elements in $C(V)$ is again cyclic. Thus the modified Gerstenhaber bracket is cyclic as well. Thus we have an analog of theorem \ref{thm3} for \mbox{$L_\infty$}\ algebras.
\begin{thm}
i) Suppose that $V$ is a \hbox{$\Z_2$}-graded \mbox{\bf k}-module with an inner product $\left<\cdot,\cdot\right>$. Suppose that $\varphi, \psi\in C(V)$ are cyclic. Then $\br{\varphi}{\psi}$ is cyclic.
Furthermore, the formula below holds.
\begin{multline}
\widetilde{\br{\varphi}{\psi}}(v_1,\cdots, v_{n+1})=\\ \sum \begin{Sb} k+l=n+1\\ \\ \sigma\in\sh l{k} \end{Sb} \s{\sigma}\epsilon(\sigma)\s{(k-1)\psi_l} \tilde\varphi_k(\psi_l(v_{\sigma(1)},\cdots, v_{\sigma(l)}), v_{\sigma(l+1)},\cdots, v_{\sigma(n+1)}),
\end{multline}
Thus the inner product induces a structure of a \hbox{$\Z_2\times\Z$}-graded Lie algebra in the module $CC(V)$ consisting of all cyclic elements in $C(V)$, by defining $\br{\tilde\varphi}{\tilde\psi}=\widetilde{\br{\varphi}{\psi}}$. ii) If $l$ is an \mbox{$L_\infty$}\ structure on $V$, then there is a differential in $CC(V)$, given by
\begin{multline}
D(\tilde\varphi)(v_1,\cdots, v_{n+1})=\\ \sum \begin{Sb} k+l=n+1\\ \\ \sigma\in\sh l{k} \end{Sb} \s{\sigma}\epsilon(\sigma)\s{(k-1)l} \tilde\varphi_k(l_l(v_{\sigma(1)},\cdots, v_{\sigma(l)}), v_{\sigma(l+1)},\cdots, v_{\sigma(n+1)}),
\end{multline}
iii) If the inner product is invariant, then $D(\tilde\varphi)=\br{\tilde \varphi}{\tilde l}$. Thus $CC(V)$ inherits the structure of a differential graded Lie algebra.
\end{thm}
We denote the cohomology given by the cyclic coboundary operator as $HC(V)$. Suppose that $V$ is an \mbox{$L_\infty$}\ algebra with an invariant inner product. Then an infinitesimal deformation $l_t=l+t\varphi$ preserves the inner product, that is, the inner product remains invariant under $l_t$, precisely when $\varphi$ is cyclic. Thus we see that cyclic cocycles correspond to infinitesimal deformations of the \mbox{$L_\infty$}\ structure which preserve the inner product. In a similar manner as before, cyclic coboundaries correspond to trivial deformations preserving the inner product. Thus we have the following classification theorem.
\begin{thm}
Let $l$ be an \mbox{$L_\infty$}\ algebra structure on $V$, with an invariant inner product. Then the cyclic cohomology $HC(V)$ classifies the infinitesimal deformations of the \mbox{$L_\infty$}\ algebra preserving the inner product.
\end{thm}
Finally, we have an interpretation of the cyclic cohomology of a Lie algebra.
\begin{thm}
Let $l$ be a Lie algebra structure on $V$, with an invariant inner product. Then the Lie algebra cyclic cohomology $HC(V)$ of $V$ classifies the infinitesimal deformations of the Lie algebra into an \mbox{$L_\infty$}\ algebra preserving the inner product.
\end{thm}
The author would like to thank Albert Schwarz, Dmitry Fuchs and James Stasheff for reading this article and providing useful suggestions.
\bibliographystyle{amsplain}
\section{Angular-averaged $A_{\rm PV}$} \label{sec:angaver}
The parity-violating asymmetry $A_{\rm PV}$ is a function of the transferred momentum $q$, or equivalently of the four-momentum transfer $Q$. These are functions of the incoming electron beam energy $E_\mathrm{el}$ and the scattering angle $\theta$. To be precise, one should view $A_{\rm PV}$ in general as a function of $E_\mathrm{el}$ and $\theta$. As a first step, one discusses $A_{\rm PV}$ at the experimental conditions of mean beam energy and average angle, as was done in the formal presentation in the paper. The beam energy in the PREX-2 experiment is well defined, while the scattering angle $\theta$ has a non-negligible width, and the data analysis took that explicitly into account \cite{PREX-2}. Our theoretical analysis follows the same procedure. What we discuss in this Letter is, in fact, the angular-averaged asymmetry:
\begin{equation}
A_{\rm PV} = \frac{\int d\theta\sin(\theta)\epsilon(\theta)\frac{d\sigma}{d\Omega}(\theta)A_{\rm PV}(\theta)} {\int d\theta\sin(\theta)\epsilon(\theta)\frac{d\sigma}{d\Omega}(\theta)},
\label{eq:average}
\end{equation}
where $\epsilon(\theta)$ is the angular acceptance function as published in the supplemental material of \cite{PREX-2} and $d\sigma/d\Omega$ is the differential cross section. All quantities here are taken in the laboratory frame because $\epsilon(\theta)$ is given in that frame. We replace the integral by a summation over the grid points of the experimental angle distribution; at each grid point we transform angle and beam energy to the center-of-mass (cm) frame, feed them to the DWBA code, and insert the resulting $d\sigma/d\Omega$ and $A_{\rm PV}(\theta)$ into Eq. (\ref{eq:average}). Summing over all grid points finally yields the angular-averaged $A_{\rm PV}$. The simpler alternative is to take the average scattering angle $\theta=4.69^\circ$ as given in \cite{PREX-2} and to calculate $A_{\rm PV}$ at that one point. The difference between these two procedures amounts to about 12 ppb. This is small compared to the typical values for $^{208}$Pb, namely $A_{\rm PV}\approx 550-590$ ppb, but non-negligible at the present level of discussion. Similar relations are found for $^{48}$Ca, discussed below, where the shift is about 150 ppb out of 2400 ppb. Thus we use the angle-averaged $A_{\rm PV}$ everywhere.
\section{Weak charge of {$^{208}$Pb}}\label{qw}
Within the Standard Model, to lowest order, $Q_{N,Z}^{(W)}=-N+Z[1-4\sin^2(\theta_W)]$, where $\theta_W$ is the weak-mixing angle. The scaling with neutron and proton numbers has been recently confirmed in atomic parity violation experiments in Ytterbium isotopes \cite{Antypas2019}. However, radiative corrections to $Q_{N,Z}^{(W)}$ need to be included for precise experiments. Within a 0.1\% accuracy, these corrections modify the previous expression as follows: $Q_{N,Z}^{(W)}=NQ_n^{(W)}+ZQ_p^{(W)}$ \cite{PDG2018}, which implies $Q_{126,82}^{(W)}=-118.8$. The latest theoretical estimate, which also includes many-body effects in the radiative corrections, is $-117.9\pm 0.3$ \cite{Gorchtein2020}, and this value is used in our work. This value, properly scaled, is in agreement with the most accurate atomic parity violation measurement to date, that of the weak charge in ${}^{133}$Cs, which slightly deviates ($1.5\sigma$) from the Standard Model prediction \cite{Dzuba2012}. For the case of interest here, the reduction of the weak charge from $-118.8$ to $-117.9$, about 0.7\%, implies a reduction of $A_{\rm PV}$ of about 4 ppb.
The same reduction of $A_{\rm PV}$ would be produced by a change in the neutron rms radius of only 0.05 fm.
\section{Computation of charge and weak form factors}\label{ff}
The parity-violating asymmetry $A_{\rm PV}$ given in Eq. (2) depends on the nuclear charge form factor $F_C(q)$ and weak form factor $F_W(q)$, which in turn depend on the local proton and neutron density distributions, $\rho_p$ and $\rho_n$, respectively. Accounting for magnetic contributions requires also the spin-orbit current ($\nabla\mathbf{J}$ for SHF) \cite{Reinhard2021so} or the tensor current ($\rho_{T,p/n}$ for RMF) \cite{Horowitz2012}. The proton and neutron densities are normalized in the usual way: $\int d^3r\rho_p=Z$ and $\int d^3r\rho_n=N$. We assume spherically symmetric systems, i.e., $\rho(\mathbf{r})=\rho(r)$ where $r=|\mathbf{r}|$. In general, $F(q)$ and $\rho(r)$ are connected through the Fourier transformation \cite{Fri82a}
\begin{subequations}
\begin{eqnarray}
F(q) &=& \int d^3r\,e^{\mathrm{i}\mathbf{q}\cdot\mathbf{r}}\rho(r) = 4\pi\int_0^\infty dr\,r^2\,j_0(qr)\rho(r), \\ \rho(r) &=& \int\!\!\frac{d^3q}{8\pi^3}e^{-\mathrm{i}\mathbf{q}\cdot\mathbf{r}}F(q) = \frac{1}{2\pi^2}\!\!\int_0^\infty\!\!\!\!\! dq\,q^2\,j_0(qr)F(q).
\end{eqnarray}
\end{subequations}
The transformation applies to any local density, for protons $\rho_p\longleftrightarrow F_p$, neutrons $\rho_n\longleftrightarrow F_n$, and the weak density $\rho_W\longleftrightarrow F_W$. We prefer to formulate the weak distributions in terms of the form factor because the necessary folding operations become much simpler in Fourier space. Charge and weak form factors, both normalized to one, can be written as:
\begin{subequations}
\label{eq:formfweak}
\begin{eqnarray}
F_C^{\mbox{}}(q) &=& \frac{e^{a_\mathrm{cm}q^2}}{Z} \sum_{t=p,n}\!\!\big(G_{E,t}(q)F_t(q)\!+\!G_{M,t}(q)F_{t}^{(ls)}(q)\big) , \\ F_W^{\mbox{}}(q) &=& \frac{e^{a_\mathrm{cm}q^2}}{ZQ^{(W)}_p+NQ^{(W)}_n} \sum_{t=p,n}\!\!\big(G_{E,t}^{(W)}(q)F_t(q)\!+\!G_{M,t}^{(W)}(q)F_{t}^{(ls)}(q)\big) , \label{eq:FW}
\end{eqnarray}
\end{subequations}
where $a_\mathrm{cm}$ is a parameter for the center-of-mass (c.m.) correction, see Eq. (\ref{eq:cmcor}). The charge form factor is expressed in terms of $G_{E/M,p}$ and $G_{E/M,n}$, the intrinsic proton and neutron electromagnetic form factors. The weak form factor involves the weak intrinsic nucleon form factors. They are expressed in terms of the electromagnetic intrinsic form factors weighted with the nucleonic weak charges as:
\begin{subequations}
\begin{eqnarray}
G_{E,p}^{(W)} &=& Q^{(W)}_pG_{E,p} \!+\! Q^{(W)}_nG_{E,n} \!+\! Q^{(W)}_nG_{E,s} , \\ G_{E,n}^{(W)} &=& Q^{(W)}_nG_{E,p} \!+\! Q^{(W)}_pG_{E,n} \!+\! Q^{(W)}_nG_{E,s} , \\ G_{M,p}^{(W)} &=& Q^{(W)}_pG_{M,p} \!+\! Q^{(W)}_nG_{M,n} \!+\! Q^{(W)}_nG_{M,s} , \\ G_{M,n}^{(W)} &=& Q^{(W)}_nG_{M,p} \!+\! Q^{(W)}_pG_{M,n} \!+\! Q^{(W)}_nG_{M,s} , \\ G_{E,s}(q) &=& \rho_s\frac{\hbar^2q^2/(4c^2m_N^2)}{1+4.97\,\hbar^2q^2/(4c^2m_N^2)}, \label{eq:Gs} \\ G_{M,s}(q) &=& \kappa_s\frac{\hbar^2}{(4c^2m_N^2)} ,
\end{eqnarray}
where $m_N$ is the average nucleon mass. Note that the weak form factor employs one more ingredient compared to the electromagnetic form factor, namely the strange-quark electromagnetic form factor $G_{E/M,s}$. Its parameters, together with the nucleonic weak charges and nucleon radii, are given in Table I. There is a great variety of publications on the parametrization of the intrinsic electromagnetic nucleon form factors $G_{E/M,t}$, see, e.g., \cite{Friar1975,Walther1986,Kel04a,Hoferichter2020}.
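For orientation, the radial Fourier transform above is straightforward to evaluate numerically. The following minimal sketch (Python with NumPy/SciPy) uses an illustrative two-parameter Fermi profile as a stand-in for the self-consistent densities of this survey; the intrinsic nucleon form factors and the c.m. factor of Eq. (\ref{eq:formfweak}) are omitted:
\begin{verbatim}
# Minimal sketch: F(q) = 4*pi * int dr r^2 j_0(qr) rho(r) for an
# illustrative (hypothetical) Fermi-shaped density normalized to Z.
# Not the actual densities used in this work.
import numpy as np
from scipy.integrate import simpson  # "simps" in older SciPy versions

Z, R, a = 82, 6.7, 0.55                # illustrative 208Pb-like values (fm)
r = np.linspace(1e-6, 15.0, 3000)      # radial grid (fm)
rho = 1.0 / (1.0 + np.exp((r - R) / a))
rho *= Z / (4.0 * np.pi * simpson(rho * r**2, x=r))   # int d^3r rho = Z

def F(q):
    # np.sinc(x/pi) = sin(x)/x = j_0(x)
    return 4.0 * np.pi * simpson(rho * r**2 * np.sinc(q * r / np.pi), x=r)

print(F(0.3978) / Z)   # normalized point-proton form factor near the PREX q
\end{verbatim}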
The data point of interest here corresponds to the low momentum transfer $q=0.3978\,\mathrm{fm}^{-1}$, where the result is not very sensitive to the subtleties of the full form factors. The leading parameters at low $q$ are the nucleonic radii and magnetic moments, and we parametrize the nucleonic form factors in terms of these parameters in a way which follows the fully fledged forms as closely as possible (tested in comparison to the full Mainz form factors \cite{Simon1980,Walther1986} as reviewed, e.g., in \cite{Bender2003}). This reads \begin{eqnarray} G_{E,p}(q) &=& \frac{1} {1+\frac{1}{6}\langle r_{E,p}^2\rangle q^2} \, \sqrt{\frac{1} {1+\frac{\hbar^2q^2}{(2m_Nc)^2}}} , \\ G_{E,n}(q) &=& \frac{\langle r_{E,n}^2\rangle}{\langle r_{E,n}^2\rangle^\mathrm{(Mainz)}} \, G_{E,n}^\mathrm{(Mainz)}(q) , \\ G_{M,p}(q) &=& -(1+2\mu_p)\frac{\hbar^2}{(2m_Nc)^2}G_M^\mathrm{(S)}(q) , \end{eqnarray} \end{subequations} where $G_{E,n}^\mathrm{(Mainz)}(q)$ stands for the Mainz parametrization having $\langle{r}_{E,n}^2\rangle^\mathrm{(Mainz)}=-0.117$ fm$^2$. This is a way to maintain some information on the $q$-dependence of $G_{E,n}(q)$ while having full control over the neutron radius. Finally, a word about the c.m. correction. There are different ways to take the c.m. correction into account in nuclear EDFs. Most functionals considered in the paper subtract the c.m. energy $\langle\hat{P}_{c.m.}^2\rangle/(2m_NA)$ a posteriori. In that case, the corresponding correction on radii uses the factor \begin{equation} a_\mathrm{cm} = \frac{\hbar^2}{8\langle\hat{P}_{c.m.}^2\rangle} . \label{eq:cmcor} \end{equation} Some EDFs use only the diagonal elements of $\hat{P}_{c.m.}^2$, which allows one to implement the c.m. correction by a simple renormalization of the nucleon mass, $1/m_N\longrightarrow(1-1/A)/m_N$. In this case, the effect on the form factor is already accounted for by the modified kinetic energy and $a_\mathrm{cm}$ is set to zero. \begin{figure}[!htb] \includegraphics[width=0.6\columnwidth]{fig4.pdf} \caption{The residuals of the charge radius (a) and binding energy (b) of {$^{208}$Pb} for the theoretical models used in this study. The grey bands around the perfect match indicate the typical performance of well-adapted modern EDFs, i.e., the r.m.s. deviation taken over all nuclei where correlation effects are small \cite{Kluepfel2008,Klupfel2009}.}\label{Modelquality} \end{figure} \section{Ability of theoretical models to describe $^{208}$\text{Pb}}\label{models} In this Letter, we compare the results for a variety of EDFs. For a fair comparison, these EDFs should also perform with approximately similar quality for basic nuclear observables. As a minimal requirement, we ask for comparable performance for the nucleus under consideration, $^{208}$Pb. Figure~\ref{Modelquality} shows the deviations from the experimental binding energy (lower panel) and charge radius (upper panel) for the sets of parametrizations used in the paper. The grey error bands indicate the typical r.m.s. error of up-to-date parametrizations averaged over a broad selection of nuclei. The actual uncertainties indicated in Fig.~\ref{Modelquality} are $\sim$1\,MeV for the binding energy and $\sim$0.02\,fm for the charge radius. The FSU family \cite{Piekarewicz2011}, while still acceptable, falls outside this narrower range. This indicates the limitations of the traditional non-linear RMF, even with the FSU extensions. The RMF models with density-dependent couplings (PC and DD in the figure) were developed exactly for the purpose of allowing better performance \cite{Niksic2002}.
Models which fall outside the plot ranges were discarded for the present survey. There are also other published EDFs which reproduce the {$^{208}$Pb} data very well. We do not show them in order to keep the figures manageable. \begin{figure}[!htb] \includegraphics[width=0.6\columnwidth]{fig5.pdf} \caption{Similar to Fig.~3, but for the symmetry energy parameter $J$. The values of $J$ (in MeV) obtained in our models are: $31\pm 2$ for {SV-min}; $33\pm 1$ for {SV-min$^*$}; $35\pm 2$ for {RMF-PC}; and $39\pm 1$ for {RMF-PC$^*$}. }\label{Jsymm} \end{figure} \section{Symmetry energy parameter $J$}\label{symmJ} Figure~\ref{Jsymm} shows the predictions for $J$ by the models employed. Fig.~\ref{Jsymm} complements Fig.~3 of the paper by showing the trends of $A_{\rm PV}$ and $\alpha_{\rm D}$ with the symmetry energy $J$. These trends look very similar to those for $L$; this is not surprising, as $J$ and $L$ are very strongly correlated \cite{Reinhard2010,Reinhard2016R}. \section{Parity-violating asymmetry in {$^{48}$Ca}}\label{CRX} \begin{figure}[!htb] \includegraphics[width=0.6\columnwidth]{fig6.pdf} \caption{$A_{\rm PV}$ versus $\alpha_{\rm D}$ in {$^{48}$Ca} for {SV-min}, {SV-min$^*$}, {RMF-PC}, and {RMF-PC$^*$}. The experimental value of $\alpha_{\rm D}$ \cite{Birkhan2017} is indicated.}\label{aDCa} \end{figure} With measurements of $A_{\rm PV}$ in $^{48}$Ca to be accomplished in the near future \cite{CREX,HorowitzC,Lin2015}, it is interesting to have a look at this quantity. For computing $A_{\rm PV}$, we use the same parameters as in Table I, except for the total weak charge $Q^{(W)}=-26.08$, which is deduced from $ZQ_p^{(W)}+NQ_n^{(W)}$ reduced by the factor 0.993 as in $^{208}$Pb. For averaging over scattering angles, we assume the same acceptance distribution as for the $^{208}$Pb experiment. These conditions may change in the final experiment, which burdens our prediction with an inherent uncertainty of several dozen ppb. Figure~\ref{aDCa} shows $A_{\rm PV}$ versus $\alpha_{\rm D}$ in a similar fashion as for $^{208}$Pb in Fig.~2, however restricted to four parametrizations. Note that the range of $A_{\rm PV}$ shown here is narrower than in Fig.~2 for $^{208}$Pb. Based on the prediction of {SV-min}, with a slight bias toward {SV-min$^*$}, we predict $A_{\rm PV}(^{48}\mathrm{Ca})=2400\pm 60$\,ppb. This, again, may come into conflict with the current experimental data on $\alpha_{\rm D}$ \cite{Birkhan2017}.
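For concreteness, the acceptance-weighted angular average of Eq.~(\ref{eq:average}), used here for both nuclei, can be sketched as a discrete sum over the experimental grid. All array values below are hypothetical placeholders, not the published acceptance table or the actual DWBA output:

\begin{verbatim}
import numpy as np

# Placeholder grid: lab angle, acceptance eps(theta), differential cross
# section, and A_PV(theta) at each grid point (all values hypothetical).
theta = np.radians(np.linspace(3.5, 6.5, 7))
eps   = np.array([0.1, 0.5, 1.0, 0.9, 0.6, 0.3, 0.1])
dsdo  = np.array([9.0, 6.5, 4.8, 3.6, 2.7, 2.1, 1.6])
apv   = np.array([520., 540., 560., 575., 590., 600., 610.])   # ppb

w = np.sin(theta) * eps * dsdo          # integrand weight of Eq. (average)
apv_avg = np.sum(w * apv) / np.sum(w)
print(f"angle-averaged A_PV ~ {apv_avg:.0f} ppb")
\end{verbatim}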
\section{Introduction} The cuprate YBa$_{\text{2}}$Cu$_{\text{3}}$O$_{\text{7}-\delta}$ (YBCO) is one of the most prominent representatives of oxides exhibiting High-Temperature Superconductivity (HTS).\cite{PhysRevLett.58.908,1347-4065-26-4A-L314,bednorz} The structure of YBCO is characterized by two different lattice sites specific for the Cu atoms, which are the chain site Cu(1) and the plane site Cu(2). Oxygen deficiency in YBCO, i.e. the presence of oxygen vacancies at the chain site for $\delta >0$, strongly affects the hole doping within the CuO$_2$ planes. This hole doping, however, drives exceptional phenomena highly relevant for technical applications and fundamental solid state physics, such as HTS with a critical temperature $T_\mathrm{c}$ above 90\,K and, presumably intertwined, Charge-Density-Wave (CDW) order.\cite{11827562,Ghiringhelli17082012,changnature} For this reason, in the literature the oxygen deficiency $\delta$ is widely used for the characterization of the superconducting properties of YBCO crystals. For technical applications such as superconducting wires, fault current limiters or magnets, as well as for the fundamental investigation of the interplay between CDW and HTS phases, high-quality crystals of YBCO are required. Since oxygen diffusion in YBCO is strongly affected by temperature, heat treatment in vacuum or oxygen atmospheres has become a standard procedure for adjusting the value of $\delta$. Hence, the applied preparation process finally settles the distribution of oxygen in the YBCO sample. However, even in single-crystalline thin films a clear impact of the tempering procedure on the lateral homogeneity of $\delta$ was observed.\cite{ybcoapl} In this paper we report on the depth-dependent investigation of the oxygen distribution in YBCO films and the evolution of $\delta$ during tempering. We applied (Coincident) Doppler Broadening Spectroscopy ((C)DBS) of the electron-positron annihilation line using a slow positron beam\cite{RevModPhys.60.701, beam} in order to determine the depth profile of the oxygen deficiency. After implantation and thermalization, positrons diffuse through the crystal until they annihilate with electrons at a typical rate of around $5\cdot10^{11}\,$s$^{-1}$ in single-crystalline YBCO.\cite{RevModPhys.66.841,0953-8984-1-23-020} In this system positrons show a particularly high affinity to the oxygen deficient plane of the CuO chains.\cite{PhysRevLett.60.2198,PhysRevB.39.9667,0953-8984-2-6-021,PhysRevB.43.10422,Nieminen19911577,fermischool1,RevModPhys.66.841,ybcoapl} Due to the high positron specificity to oxygen vacancies around the Cu(1) sites, the momentum of the annihilating pair measured with (C)DBS is highly sensitive to the oxygen deficiency $\delta$ and in turn to the local transition temperature $T_\mathrm{c}$ in YBa$_{\text{2}}$Cu$_{\text{3}}$O$_{\text{7}-\delta}$. \section{Experimental Methods} \subsection{Single-Crystalline YBCO Thin Films} A single-crystalline YBCO film with a thickness of 230(10)\,nm was grown epitaxially on a routinely cleaned, single-crystalline (001) oriented SrTiO$_3$ (STO) substrate (5$\times$5\,mm$^2$) by Pulsed Laser Deposition (PLD).\cite{RevModPhys.72.315,hammerlnature} A KrF laser with a fluence of 2\,J/cm$^2$ was used, with the pulse energy restricted to 750\,mJ and the pulse frequency to 5\,Hz. Deposition took place at around 760\,$^\circ$C at a defined oxygen pressure of 0.25\,mbar. After deposition, the film was annealed at 400\,$^\circ$C in a 400\,mbar O$_{\text{2}}$ atmosphere for oxygen loading.
Afterwards, a critical temperature of $T_\mathrm{c}=90$\,K was determined by electron transport measurements. The single-crystallinity of the grown film was confirmed by X-Ray Diffraction (XRD). A linear equation reported in ref.~\cite{Benzi2004625} was used to determine the overall oxygen deficiency $\delta$ from $\Theta$-$2\Theta$-scans. Values of $\delta=0.191$ in the as prepared state and $\delta=0.619$ after the heat treatment described below were identified. \subsection{(Coincident) Doppler Broadening Spectroscopy} In (C)DBS, high-purity Ge detectors are used to measure the Doppler shifted energy $E$ = 511\,keV$\pm\Delta E$ of $\gamma$-quanta emitted from positrons annihilating with electrons. The Doppler shift $\Delta E=pc/2$ predominantly results from the (longitudinal) momentum $p$ of the electron ($c$ is the velocity of light). The energy of a single annihilation quantum is analyzed in DBS, whereas in CDBS\cite{PhysRevLett.77.2097} both annihilation quanta are detected in coincidence using a collinear detector set-up. We evaluated the DBS spectra by calculating the line shape parameter $S$, which is defined as the fraction of annihilation quanta with $|\Delta E| < 0.85$\,keV of the Doppler broadened annihilation line. Applying CDBS enhances the peak-to-background ratio, and the respective spectra $I(\Delta E)$ were extracted by an algorithm described elsewhere \cite{Pikart201461}. It was shown that (C)DBS is highly sensitive to the oxygen deficiency $\delta$ in YBa$_{\text{2}}$Cu$_{\text{3}}$O$_{\text{7}-\delta}$: a higher $\delta$ leads to a less broadened annihilation line and hence to an increase of $S$.\cite{Smedskjaer198856,PhysRevB.36.8854,PhysRevLett.60.2198,1402-4896-1989-T29-019} This correlation was found to be linear in our YBCO films.\cite{ybcoapl} The present experiments were performed at the CDB-spectrometer\cite{1742-6596-443-1-012071, Gig17} at the high-intensity positron beam NEPOMUC\cite{beam}. A variable positron implantation energy $E_+$ in the range between 0.3 and 30\,keV enables in situ depth-dependent investigations at temperatures up to 900\,$^\circ$C. \section{Experiments} \subsection{In-situ DBS During Tempering} \label{sec:exp1} For studying the oxygen diffusion we performed DBS in situ while elevating the temperature, alternately switching the positron implantation energy $E_+$ between 4 and 7\,keV. The respective Makhovian implantation profiles $P(z,E_+)$, as plotted in fig.~\ref{figure1}a), were calculated with the material-dependent parameters obtained from an interpolation over the mass density\cite{param} and by accounting for the boundary condition of a continuous transmission at the YBCO/STO interface. At 4\,keV, positrons exclusively probe the bulk of the YBCO film, whereas at 7\,keV the probed region is closer to the interface, and about 9.0\,\% of the positrons actually annihilate in the STO substrate according to the evaluation of the depth-dependent measurements discussed in Section\,\ref{sec:exp3}. As shown in fig.\,\ref{figure1}b), the temperature $T$ was increased stepwise up to 400\,$^\circ$C within a total measurement time of around 3\,h. For both probed depth regions, i.e. at the incident energies $E_+=$\,4 and 7\,keV, the heat treatment led to an increase of the S-parameter (see $S(t)$ in fig.\,\ref{figure1}b)). The last S-value reached at each temperature step, normalized to the initial S-parameter, is plotted as a function of $T$ in fig.\,\ref{figure1}c).
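The Makhovian profile entering this analysis has the standard form $P(z,E_+)=\frac{m\,z^{m-1}}{z_0^m}\exp[-(z/z_0)^m]$, with $z_0$ fixed by the energy-dependent mean implantation depth $\bar{z}=(A/\rho)E_+^n$. A minimal numerical sketch follows; the parameters $A$, $m$ and $n$ are generic positron-beam default values and the density is the bulk YBCO value, not the material-interpolated parameters used in our analysis:

\begin{verbatim}
import numpy as np
from math import gamma

# Makhovian implantation profile P(z,E) = m z^(m-1)/z0^m exp(-(z/z0)^m).
# A_mak, m, n are generic textbook defaults (an assumption, see above).
A_mak, m, n = 4.0, 2.0, 1.6        # mug/cm^2/keV^n, shape, energy exponent
rho = 6.38                          # g/cm^3, bulk density of YBCO

def makhov(z_nm, E_keV):
    zbar = A_mak / rho * E_keV**n * 10.0     # mean depth in nm
    z0 = zbar / gamma(1.0 + 1.0 / m)
    return m * z_nm**(m - 1) / z0**m * np.exp(-(z_nm / z0)**m)

z = np.linspace(0.0, 600.0, 1201)            # depth grid in nm
for E in (4.0, 7.0):
    P = makhov(z, E)
    frac_film = np.trapz(P[z <= 230.0], z[z <= 230.0])
    print(f"E+ = {E} keV: fraction stopped in 230 nm film = {frac_film:.2f}")
\end{verbatim}

Even with these generic parameters the sketch reproduces the qualitative picture: at 4\,keV practically all positrons stop within the film, while at 7\,keV a fraction of order 10\,\% reaches the substrate, in line with the 9.0\,\% obtained from the full analysis.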
We find at both probed energies a linear $S(T)$ dependence above the respective onset temperatures, which are slightly above 240\,$^\circ$C at $E_+ = 4$\,keV and around 280\,$^\circ$C at $E_+ = 7$\,keV. As observed in our previous study\cite{ybcoapl}, this change of the S-parameter is attributed to the increase of $\delta$. The S-parameter $S_\mathrm{STO}$ of the STO substrate remains unchanged during tempering, as is obvious from the depth-dependent DBS presented in Section\,\ref{sec:exp3}. The mean, i.e. not depth resolved, change of the oxygen deficiency was determined by complementary XRD studies on the film before and after the heat treatment, yielding an overall increase of $\delta$ from 0.191 to 0.619. However, the quantitative analysis of in-situ DBS at elevated temperatures shows that the variation of $S$ significantly depends on the probed depth region in the YBCO film, suggesting a non-constant $\delta$ changing with $z$. It is noteworthy that $S$(4\,keV) increases by 2.4\,\% whereas $S$(7\,keV) rises only by 1.3\,\% (see fig.\,\ref{figure1}b)). Since all positrons annihilate in the YBCO film at $E_+=4$\,keV but only a fraction of 91.0\,\% does so at $E_+=7$\,keV, an assumed constant $S_\mathrm{YBCO}(z)$ would imply an increase of $S$(7\,keV) by 0.91$\cdot$2.4\,\%=2.2\,\%. Hence, a deeper analysis of the non-constant S-parameter in the YBCO film $S_\mathrm{YBCO}(z)$ is expected to provide detailed depth-dependent information on the oxygen diffusion process. \begin{figure}[t!] \includegraphics[width=0.495\textwidth]{figure1.pdf} \caption{In-situ DBS during tempering: a) Positron implantation profiles $P(z,E_+)$ as a function of the depth $z$ for the used incident positron energies $E_+$, b) S-parameter and temperature $T$ as a function of process time $t$, and c) normalized $S$ as a function of temperature $T$ with guides to the eye.} \label{figure1} \end{figure} \subsection{CDBS -- Structural Changes} \label{sec:exp2} In order to observe the depth-dependent change in $\delta$, we recorded CDB spectra in the as prepared state and after tempering at two different depth regions probed with positron implantation energies of $E_+=4$\,keV and 7\,keV. For this purpose we compared the tempered with the as prepared state by evaluating the CDB ratio curves $R_1(\Delta E) = I_{\mathrm{temp}}(\Delta E, 4$\,keV$)/I_{\mathrm{a.p.}}(\Delta E, 4$\,keV$)$ and $R_2 = I_{\mathrm{temp}}(7$\,keV$)/I_{\mathrm{a.p.}}(7$\,keV$)$ as shown in fig.~\ref{figure2}a). (For reasons of clarity, the argument $\Delta E$ is omitted in the following.) Alternatively, the same spectra are analyzed using the ratio curves obtained at different energies for the as prepared and tempered state, respectively, $R_3 = I_{\mathrm{a.p.}}(7$\,keV$)/I_{\mathrm{a.p.}}(4$\,keV$)$ and $R_4 = I_{\mathrm{temp}}(7$\,keV$)/I_{\mathrm{temp}}(4$\,keV$)$ (see fig.~\ref{figure2}b)). In addition, theoretical ratio curves of defect-free YBa$_2$Cu$_3$O$_{7}$ to YBa$_2$Cu$_3$O$_{6}$, as well as those for various metallic vacancies V$_{\mathrm{x}}$ as potential annihilation sites in YBCO, which were calculated for earlier studies\cite{ybcoapl}, are plotted in fig.~\ref{figure2}c). The ratio curve at 4\,keV, $R_1$, exhibits a signature characteristic of oxygen-rich crystals, as the theoretical ratio curve of YBa$_2$Cu$_3$O$_{7}$ to YBa$_2$Cu$_3$O$_{6}$ in fig.\,\ref{figure2}c) displays. In addition, $R_1$ takes values below unity for 3\,keV $< \Delta E <$ 7\,keV, which is attributed to the presence of vacancies V$_{\mathrm{Cu(2)}}$ in the CuO$_2$ planes.
Other remaining positron states, which might contribute to the signature of $R_1$, cannot be further identified due to the relatively small differences seen between the calculated ratio curves for the various other annihilation sites. At higher implantation energy, $R_2$ only slightly differs from unity, which demonstrates that changes are clearly smaller in the depth region probed at $E_+=7$\,keV. In the as prepared sample the similarity of $R_3$ to unity shows no evidence for a depth-dependent variation of annihilation sites, and positrons annihilating in the STO substrate barely, if at all, affect the CDB signatures. After tempering, however, $R_4$ behaves similarly to $R_1$, showing a CDB signature characteristic of oxygen-rich crystals and emerging V$_{\mathrm{Cu(2)}}$ vacancies. Hence, we conclude that after tempering $\delta$ decreases towards the interface. \begin{figure}[t!] \includegraphics[width=0.485\textwidth]{figure2.pdf} \caption{CDB ratio curves: a) Tempered to as prepared state for positron implantation energies $E_+=4$\,keV and 7\,keV (see fig.\,\ref{figure1}a) for the probed depth regions), b) $E_+=7$\,keV to $E_+=4$\,keV for the as prepared and the tempered state, and c) calculated ratio curves of YBa$_2$Cu$_3$O$_{7}$ to YBa$_2$Cu$_3$O$_{6}$ for various positron states.} \label{figure2} \end{figure} \subsection{DBS -- Depth Dependent Investigations} \label{sec:exp3} The depth-dependent change of the oxygen deficiency is studied in more detail by analyzing the S-parameter as a function of positron implantation energy, $S(E_+)$, recorded before and after tempering. In general, the measured S-parameter as shown in fig.\,\ref{figure3} can be described by a superposition of different positron states with characteristic values at the surface $S_{\mathrm{surf}}$, in the YBCO film $S_\mathrm{YBCO}$ and in the STO substrate $S_\mathrm{STO}$. For higher implantation energies $E_+$, a significant fraction of positrons annihilates in the substrate with $S_\mathrm{STO}$, whereas for $E_+ < 4$\,keV positrons also annihilate at the surface with $S_{\mathrm{surf}} > 0.52$. In the as prepared state, a plateau between 4 and 8\,keV indicates the predominant annihilation in the YBCO film. After tempering, $S$ increases in this region as expected from the results obtained by the in-situ measurements at higher temperature. Both $S_\mathrm{STO}$ and $S_{\mathrm{surf}}$ hardly change during tempering. However, detailed information on the depth profile $S_\mathrm{YBCO}(z)$ in the YBCO film can be extracted from $S(E_+)$. \begin{figure}[b!] \includegraphics[width=0.495\textwidth]{figure3.pdf} \caption{S-parameter as a function of positron implantation energy $E_+$ before and after tempering. The solid lines represent fits yielding $S(z)$ as plotted in inset (a), with the respective goodness of fit $\chi_{red}^{2}(\Delta S)$ with $\Delta S = S(0\mathrm{\,nm})-S(230\mathrm{\,nm})$ in the tempered sample (inset (b)). For $E_+ < 4$\,keV positron states at the surface affect the data.} \label{figure3} \end{figure} We performed least-squares fits of the $S(E_+)$ curves by considering the depth distribution of annihilating positrons at each energy $E_+$ using the positron implantation profiles $P(z,E_+)$. Based on previous experimental studies\cite{ybcoapl}, positron diffusion plays no significant role along the normal of the YBCO film since positrons either stick at the oxygen deficient plane of the CuO chains or are trapped in metallic vacancies.
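The weight of each positron state follows directly from the implantation profile: neglecting diffusion, the fraction annihilating in a given layer is the integral of $P(z,E_+)$ over that layer. A minimal sketch of the resulting superposition model for $E_+\geq 4$\,keV, where the surface state is of minor importance; the Makhov parameters are the generic defaults assumed in the sketch above:

\begin{verbatim}
import numpy as np
from math import gamma

def makhov(z, E, A=4.0, m=2.0, n=1.6, rho=6.38):
    """Makhovian profile with generic default parameters (assumed)."""
    z0 = (A / rho * E**n * 10.0) / gamma(1.0 + 1.0 / m)   # nm
    return m * z**(m - 1) / z0**m * np.exp(-(z / z0)**m)

def layer_fractions(E, d_film=230.0):
    """Fractions of positrons stopped in film and substrate (no diffusion)."""
    z = np.linspace(0.0, 2000.0, 4001)                    # nm
    P = makhov(z, E)
    f_film = np.trapz(P[z <= d_film], z[z <= d_film])
    return f_film, 1.0 - f_film

def S_model(E, S_film, S_sto):
    """Measured S(E) as profile-weighted superposition of layer S values."""
    f_film, f_sto = layer_fractions(E)
    return f_film * S_film + f_sto * S_sto
\end{verbatim}

Replacing the constant film value by a depth-dependent $S(z)$ under the integral turns this superposition into the folding analysis described next.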
The sharp decrease of $S(E_+)$ at the surface as observed for $E_+ < 4$\,keV (see fig.\,\ref{figure3}) also indicates a very short positron diffusion length. The positron diffusion length $L_{+,\mathrm{STO}}$ in STO was treated as a free parameter and was found to be 175\,nm. Subsequently, we folded $P(z,E_+)$ with hypothetical $S(z)$ profiles in order to obtain the best agreement with the measured $S(E_+)$ curve. It was not possible to get a reasonable fit by assuming $S_\mathrm{YBCO}(z)$ to be constant in the tempered YBCO film. Instead, a linear dependence of $S$ on $z$ was found to be needed to obtain excellent agreement with the measured data of both the as prepared and the tempered state. The fit results for $S(E_+)$ and $S(z)$, obtained by minimizing $\chi_{red}^{2}(\Delta S, S_{\mathrm{STO}},L_{+,\mathrm{STO}})$ as shown in fig.\,\ref{figure3}b, are depicted as solid lines in fig.\,\ref{figure3}. In the as prepared state $S_\mathrm{YBCO}(z)$ slightly increases towards the YBCO/STO interface. After tempering, however, $S_\mathrm{YBCO}(z)$ is on average higher and decreases towards the interface within the found range $\Delta S$. These results allow us to determine $\delta (z)$ profiles for both YBCO films. \section{Discussion} The non-constant $S(z)$ found in the YBCO film displays a depth-dependent $\delta (z)$. We calculated the respective $\delta (z)$ depth profiles using the linear correlation between $S$ and $\delta$ derived in a previous study \cite{ybcoapl}. The $S$-$\delta$ calibration was done by extrapolation from the reference values determined by XRD, $\delta = 0.191$ and 0.619 for the as prepared and the tempered state, and the respective values of $S$ obtained by averaging $S(z)$ in the YBCO film (see fig.\,\ref{figure3}). As shown in fig.~\ref{figure4}, in the as prepared state we observed an increase of $\delta$ from 0.0 at the surface to 0.4 at the interface, and after tempering a decrease from 1.1 to 0.2. The linear dependence reflects a steady-state solution of the diffusion equation for oxygen and hence displays a state of thermodynamic equilibrium. The oxygen deficiency $\delta (0{\mathrm{\,nm}})$ represents the value where the oxygen exchange between the film and the atmosphere is in equilibrium. The as prepared state was achieved in an oxygen atmosphere of 400\,mbar at 400\,$^\circ$C, which leads to oxygen loading of the YBCO film. Hence the maximum oxygen content is reached at the surface, $\delta (0{\mathrm{\,nm}})\approx 0$, and the oxygen content decreases towards the interface (higher $\delta$). The heat treatment in vacuum led to oxygen unloading of the tempered sample, yielding $\delta (0{\mathrm{\,nm}})\gtrsim1.1$. According to the observed $\delta (z)$ behaviour, the oxygen concentration increases towards the interface, i.e.\,the oxygen deficiency near the surface is significantly higher than deep in the film close to the interface. \begin{figure}[b!] \includegraphics[width=0.494\textwidth]{figure4.pdf} \caption{Oxygen deficiency $\delta$ and inferred critical temperature $T_\mathrm{c}$ as a function of depth $z$ in the as prepared and tempered YBCO film.} \label{figure4} \end{figure} As is obvious from the measured $S(T)$ dependence (cf. fig.~\ref{figure1}c)), applying a higher temperature leads to an increase of the oxygen loss and hence to an increase of $\delta$. The step-like behaviour of $S(t)$ (see fig.~\ref{figure1}) shows that temperature-induced changes end within minutes above 350\,$^\circ$C. Hence thermodynamic equilibrium is estimated to be reached within the same time scale.
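The two-point $S$--$\delta$ calibration described above amounts to a linear map fixed by the XRD reference values. A minimal sketch; the film-averaged $S$ values are hypothetical placeholders, only the $\delta$ references are taken from the text:

\begin{verbatim}
import numpy as np

# Two-point linear calibration S -> delta, anchored to the XRD reference
# values. The film-averaged S values below are hypothetical placeholders.
S_ref = np.array([0.500, 0.512])   # placeholder mean S: as prepared, tempered
d_ref = np.array([0.191, 0.619])   # XRD oxygen deficiencies (from the text)
slope = (d_ref[1] - d_ref[0]) / (S_ref[1] - S_ref[0])

def delta_of_S(S):
    """Extrapolate delta(z) from a measured S(z) profile."""
    return d_ref[0] + slope * (S - S_ref[0])
\end{verbatim}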
Our results suggest that in this equilibrium regime the maximum temperature reached is decisive for the finally attained mean value of $\delta$. The strong increase of $\delta$ towards the surface leads to a lower overall oxygen content inside the YBCO film. At the interface, however, $\delta (230{\mathrm{\,nm}})$ slightly decreases, by around 0.15, during the heat treatment. This (small) effect might indicate that oxygen diffused from the STO substrate into the YBCO film during tempering. The depth dependence found for $\delta (z)$ implies a depth variation of the critical temperature $T_{\mathrm{c}}$, since $T_{\mathrm{c}}$ is directly correlated with the oxygen deficiency $\delta$. \cite{PhysRevB.73.180505,PhysRevB.74.014504,Matic2013189} The $T_{\mathrm{c}}(z)$ behaviour can hence be derived by interpolating the values determined by measurements on reference samples. As expected for the as prepared state, $T_{\mathrm{c}}(z)$ changes only slightly, since $\delta$ covers mostly a range where $T_{\mathrm{c}}(\delta)\approx 90$\,K, and becomes lower near the YBCO/STO interface (see fig.\,\ref{figure4}). After tempering, however, $\delta$ varies over a large range, which leads to a more complex $T_{\mathrm{c}}(z)$. In the near-surface region we expect insulating behaviour, and for 100\,nm $ < z < $ 200\,nm HTS with $T_{\mathrm{c}}\approx 60$\,K. Closer to the interface, $T_{\mathrm{c}}$ increases further. Thus, in applications of such HTS films the operating temperature can be used as a driving factor for the width of the superconducting layer. It is noteworthy that the obtained $T_{\mathrm{c}}$-characteristics follow from the measured $\delta (z)$ depth profile only, whereas in transport experiments only a mean value of $T_{\mathrm{c}}$ is accessible, which neglects lateral inhomogeneities in the YBCO film and may be affected by the contacting of the samples\cite{ybcoapl}. The present results demonstrate that the mobility of oxygen atoms in YBCO plays an important role for the preparation of high-quality films using PLD. Typical diffusion lengths for oxygen in YBCO bulk samples have been determined in several studies. A heat treatment at 430\,$^\circ$C for 20 minutes yielded diffusion lengths of 10\,nm along the $c$-axis and 6\,$\mu$m along the $a$- and $b$-directions.\cite{PhysRevB.51.8498} However, in the thin film samples of the present study the oxygen diffusion along $c$ is expected to be faster by roughly one order of magnitude, since thermodynamic equilibrium was already reached after several minutes of tempering above 350\,$^\circ$C. An extraordinarily high oxygen mobility along the $c$-axis has been reported in other studies on YBCO films produced by magnetron sputtering\cite{xixx89} and laser ablation\cite{PhysRevB.51.8498}. Moreover, we observed an onset temperature of around 240\,$^\circ$C for oxygen diffusion, which is lower than the values published for bulk YBCO, e.\,g., 350\,$^\circ$C observed by electric resistance measurements\cite{PhysRevB.47.3380} or 400\,$^\circ$C by positron annihilation spectroscopy\cite{PhysRevB.43.10399}. In other studies on YBCO films, similarly low onset temperatures around 250\,$^\circ$C have been determined by a combination of oxygen tracer diffusion and secondary ion mass spectroscopy.\cite{PhysRevB.51.8498} Our CDBS results suggest that the observed high mobility of oxygen atoms and the low onset temperature for oxygen diffusion might be connected to V$_{\mathrm{Cu(2)}}$ vacancies in the CuO$_2$ planes.
Finally, we discuss the homogeneity of $\delta$ in thin single-crystalline YBCO films with a thickness of several hundred nm, as is characteristic of laser-ablated samples. Along the $c$-axis we have observed a thermal equilibrium state with a linear $\delta (z)$ depth profile determined by the surface value $\delta (0{\mathrm{\,nm}})$ and the oxygen content at the YBCO/STO interface $\delta (230{\mathrm{\,nm}})$. The value of $\delta (0{\mathrm{\,nm}})$ can be adjusted by the oxygen partial pressure in the atmosphere and by temperature. \cite{PhysRevB.43.10399} Assuming that the STO substrate provides an ideally homogeneous reservoir of oxygen, only the maximum temperature should affect the exchange rate of oxygen at the YBCO/STO interface and hence the value of $\delta (230\mathrm{\,nm})$. However, more detailed insight into this process and the properties of the interface is beyond the scope of the present study. In practically identical thin film YBCO samples we found lateral inhomogeneities of $\delta$ along the $a$- and $b$-directions.\cite{ybcoapl} According to the kinetics discussed above, we estimate that a tempering time on the order of 200\,h is required to reach a laterally homogeneous distribution of oxygen. Since local structural variations of the interface between the STO substrate and the YBCO film would affect the oxygen exchange, future studies focusing on the surface homogeneity of the STO substrate could provide further useful information. Possibly, the homoepitaxial deposition of a single-crystalline STO layer by PLD prior to the growth of the YBCO film, with adjustable defect densities in the intermediate STO layer \cite{PhysRevB.79.014102,PhysRevB.81.064102,PhysRevLett.105.226102}, might improve the homogeneity of the interface. Incorporating an oxygen diffusion barrier at the interface would lead to a constant equilibrium depth profile $\delta (z) = \delta (0{\mathrm{\,nm}})$. Therefore, such a layer is expected to significantly simplify the preparation of YBCO films with a homogeneous and precisely defined oxygen deficiency $\delta$.\\ \section{Conclusion and Outlook} In this study we investigated the diffusion-related properties of the oxygen deficiency $\delta$ in epitaxial single-crystalline YBa$_{\text{2}}$Cu$_{\text{3}}$O$_{\text{7}-\delta}$ thin films. The averaged $\delta$ could be determined by XRD, and the mean $T_{\mathrm{c}}$ was obtained from transport measurements. By applying (C)DBS with a variable-energy positron beam, we succeeded in revealing the depth distribution of the oxygen content, which in turn unveils the depth-dependent critical temperature $T_{\mathrm{c}}$. Thus, in future applications it has to be considered that changes in the ambient temperature affect the width and hence the current density of the superconducting layer. In situ measurements at elevated temperature allowed us to gain insight into the kinetics of oxygen atoms. An onset temperature for oxygen diffusion was found slightly above 240\,$^\circ$C, reflecting the high mobility of oxygen along the $c$-axis. The depth distribution of oxygen in thermodynamic equilibrium, both after preparation and after the heat treatment, was shown to be driven by the oxygen exchange at the surface and, to a lesser extent, at the YBCO/STO interface. The oxygen content near the surface can be manipulated relatively easily by the ambient pressure and temperature, whereas the control of the interface processes is more demanding.
Solving this issue is key for further improving the quality of single-crystalline YBCO films in terms of a precisely defined, homogeneous oxygen deficiency. The availability of such samples of unprecedented quality is expected to be highly relevant for a better understanding of fundamental phenomena and for technical applications of the HTS YBCO. \section*{Acknowledgments} Financial support from the DFG within project TRR\,80 and from the BMBF projects nos. 05K13WO1 and 05K16WO7 is gratefully acknowledged. The authors thank M. Leitner for helpful discussions.
\section{Introduction}\label{sec:intro} The shape of the stellar body of a galaxy reflects its formation process. Reconstructing the intrinsic, three-dimensional shapes of spiral galaxies from their shapes projected on the sky has a long tradition, and proved to be an exquisitely accurate and precise approach, especially once sample sizes increased \citep[e.g.,][]{sandage70, lambas92, ryden04, vincent05, padilla08}. These results provided us with the general notion that the stellar bodies of present-day star-forming galaxies over a wide range in luminosity can be described as thin, nearly oblate (therefore, disk-like) systems with an intrinsic short-to-long axis ratio of $\sim0.25$. Such global shapes encompass all galactic components, including bars and bulges. The disk component is generally thinner \citep[$0.1-0.2$, e.g.,][]{kregel02}. Analogous information about the progenitors of today's galaxies is scarcer. Among faint, blue galaxies in deep Hubble Space Telescope imaging, \citet{cowie95} found a substantial population of elongated `chain' galaxies, but several authors argued that chain galaxies are edge-on disk galaxies \citep[e.g.,][]{dalcanton96, elmegreen04a, elmegreen04b}. However, \citet{ravindranath06} demonstrated that the ellipticity distribution of a large sample of $z=2-4$ Lyman Break Galaxies is inconsistent with randomly oriented disk galaxies, lending credence to the interpretation that a class of intrinsically elongated (or prolate) objects in fact exists at high redshift. By modeling ellipticity distributions, \citet{yuma11} and \citet{law12} concluded that the intrinsic shapes of $z>1.5$ star-forming galaxies are strongly triaxial. On the other hand, regular rotation is commonly seen amongst $z\sim 1-2$ samples \citep{forster06, kassin07, law09, forster09, wisnioski11, gnerucci11, newman13}, and the evidence for the existence of gaseous disks is ample among massive systems \citep{genzel06, wright07, lottie08, stark08, epinat09}. One possible explanation for the seeming discrepancy between the geometric and kinematic shape inferences is a dependence of structure on galaxy mass. Indeed, for lower-mass galaxies ($\lesssim 10^{10}~M_{\odot}$) the evidence for rotation is less convincing \citep[e.g.,][]{forster06, law07}, and in rare cases rotation is convincingly ruled out \citep[e.g.,][]{lowenthal09}. The prevailing view is that the gas -- and hence presumably the stars that form from it -- in those galaxies is supported by random motions rather than ordered rotation. However, the kinematic measurements for low-mass galaxies probe only a small number of spatial resolution elements -- signs of rotation may be smeared out \citep{jones10} -- and the observed motions may have a non-gravitational origin such as feedback. Here we aim to provide the first description of the geometric shape distribution of $z>1$ star-forming galaxies and its dependence on galaxy mass. We examine the projected axis ratio distributions ($p(q)$) of large samples of star-forming galaxies out to $z=2.5$ drawn from the CANDELS \citep{grogin11, koekemoer11} and 3D-HST \citep{brammer12, skelton14} surveys. A low-redshift comparison sample is drawn from the Sloan Digital Sky Survey (SDSS). The methodology developed by \citet{holden12} and \citet{chang13b} will be used to convert $p(q)$ into 3-dimensional shape distributions of star-forming galaxies and to trace their evolution from $z=2.5$ to the present day.
\begin{figure*}[t] \epsscale{1.2} \plotone{f1.ps} \caption{Projected axis ratio distributions $p(q)$ of star-forming galaxies in four mass bins and two redshift bins ($z<0.1$ at the top; $1.5<z<2.0$ at the bottom). Histograms represent the observed distributions (each panel contains 500 or more galaxies); continuous lines are best-fitting models: these are the probability distributions of triaxial populations of objects seen at random viewing angles, where the triaxiality and ellipticity are tuned to best reproduce the observed distributions. The colored bars illustrate how the model populations are distributed over three different 3D shapes defined in Figure \ref{class}: \emph{disky} in red; \emph{spheroidal} in green; \emph{elongated} in blue. The pronounced variation among the projected axis ratio distributions illustrates that the changes in the geometric fractions are highly significant.} \label{hist} \end{figure*} \begin{figure}[t] \epsscale{1.2} \plotone{f2.ps} \caption{To facilitate a better intuitive understanding of the model shape parameters (triaxiality and ellipticity) we distinguish three crudely defined 3-dimensional shapes of objects. Objects with three similarly long axes are defined as \emph{spheroidal}; objects with two similarly long and one short axis are defined as \emph{disky}; objects with one long axis and two similarly short axes are defined as \emph{elongated}. A model population -- generated to reproduce an observed axis-ratio distribution -- should be thought of as a cloud of points in the parameter space shown in this figure, distributed as prescribed by the best-fitting values of $T$, $\sigma_T$, $E$, and $\sigma_E$ (see text for details). Each of the three regions will contain a given fraction of those points, that is, a fraction of the population. } \label{class} \end{figure} \section{Data}\label{sec:data} We construct volume-limited samples of star-forming galaxies over a large range in stellar mass ($10^9 - 10^{11}~M_{\odot}$) and redshift ($0<z<2.5$) with $q$ measured at an approximately fixed rest-frame wavelength of $4600\rm{\AA}$. \subsection{CANDELS and 3D-HST} \citet{skelton14} provide WFC3/F125W+F140W+F160W-selected, multi-wavelength catalogs for the CANDELS fields, as well as redshifts, stellar masses and rest-frame colors using the 3D-HST WFC3 grism spectroscopy in addition to the photometry. 36,653 star-forming galaxies with stellar masses $M_*>10^{9}~M_{\odot}$ and up to redshift $z=2.5$ are selected based on their rest-frame $U-V$ and $V-J$ colors as described by \citet{vanderwel14}, 35,832 of which have $q$ measurements. The typical accuracy and precision are better than 10\% \citep{vanderwel12}. For the $2<z<2.5$ galaxies we use the F160W-based values, while for the $z<2$ galaxies we use the F125W-based values, such that all $z>1$ galaxies have their shapes measured at a rest-frame wavelength as close as possible to $4600\rm{\AA}$ (and always in the range $4300<\lambda/\rm{\AA} <6200$). This avoids the effects due to the shape variations with wavelength seen in local galaxies \citep{dalcanton02}. Below $z=1$ our F125W shape measurements probe longer wavelengths. We compared the F125W-based shapes with HST/ACS F814W-based shapes for 1,365 galaxies \citep[see][]{vanderwel14}. The median F125W-based axis ratio is 0.014 larger than the median F814W-based shape, with a scatter of 0.06. This is consistent with the measurement errors. We conclude that using F125W axis ratios at $z<1$ does not affect our results.
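The rest-frame color selection described above can be sketched as a simple color-color cut. The numerical values below follow the commonly used literature form of the quiescent box and are an illustrative assumption, not the exact cuts of \citet{vanderwel14}:

\begin{verbatim}
import numpy as np

def is_star_forming(UV, VJ):
    """Illustrative UVJ selection: galaxies OUTSIDE the quiescent box are
    taken as star forming. The cut values are assumed, literature-style
    numbers, not the exact selection of the catalog papers."""
    quiescent = (UV > 1.3) & (VJ < 1.6) & (UV > 0.88 * VJ + 0.59)
    return ~quiescent

UV = np.array([1.9, 0.8, 1.5])
VJ = np.array([1.0, 0.7, 1.9])
print(is_star_forming(UV, VJ))   # -> [False  True  True]
\end{verbatim}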
\subsection{SDSS: $0.04<z<0.08$} SDSS photometry-based stellar masses from \citet{brinchmann04} are used to select 36,369 star-forming galaxies with stellar masses $M_*>10^{9}~M_{\odot}$ and in the (spectroscopic) redshift range $0.04<z<0.08$. The distinction between star-forming and passive galaxies is described by \citet{holden12} and is based on the rest-frame $u-r$ and $r-z$ colors, analogous to the use of $U-V$ and $V-J$ colors at higher redshifts. For the SDSS sample we use the $q$ estimates from fitting the exponential surface brightness model to the $g$-band imaging as part of the DR7 photometric pipeline \citep{abazajian09}. These measurements have been verified by \citet{holden12}, who showed that systematic offsets and scatter with respect to our {\tt GALFIT}-based measurements are negligible. \section{Reconstruction: from Projected to Intrinsic Shapes}\label{sec:model} The very pronounced change of the projected shape distribution with redshift (Figure \ref{hist}) immediately reveals that galaxy structure evolves with cosmic time. Especially at low stellar masses we see that a larger fraction of galaxies have flat projected shapes than at the present day. This observation underpins the analysis presented in the remainder of the Letter. Here we provide a brief description of the methodology to infer the intrinsic, 3-dimensional shapes of galaxies, outlined in detail by \citet{chang13b}. We adopt the ellipsoid as the general geometric form to describe the shapes of galaxies. It has three, generally different, axis lengths ($A \ge B \ge C$, normalized such that $A=1$), commonly used to define ellipticity ($1-C$) and triaxiality ($(1-B^2)/(1-C^2)$). In order to facilitate an intuitive understanding of our results we define three broad geometric types, shown in Figure \ref{class}: \emph{disky} ($A \sim B > C$), \emph{elongated} ($A > B \sim C$), and \emph{spheroidal} ($A \sim B \sim C$). The goal is to find a model population of triaxial ellipsoids that, when seen under random viewing angles, has the same $p(q)$ as an observed galaxy sample. Our model population has Gaussian distributions of the ellipticity (with mean $E$ and standard deviation $\sigma_E$) and triaxiality (with mean $T$ and standard deviation $\sigma_T$). Such a model population has a known $p(q)$, which we adjust to include the effect of random uncertainties in the axis ratio measurements -- these are asymmetric for nearly round objects. Then, given that each observed value of $q$ corresponds to a known probability, we calculate the total likelihood of the model by multiplying the probabilities of each of the observed values. We search a grid of the four model parameters to find the maximal total likelihood. In Figure \ref{hist} we show observed axis ratio distributions (histograms), and the probability distributions of the corresponding best-fitting model populations (smooth lines). The models generally match the data very well. Even in the worst case (bottom-right panel) the model and data distributions are only marginally inconsistent, at the $2\sigma$ level. A triaxial model population with parameters $(E,\sigma_E,T,\sigma_T)$ corresponds to a cloud of points in Figure \ref{class} and, hence, to certain fractions of the three geometric types. The colored bars in Figure \ref{hist} represent these fractions for the best-fitting triaxial models.
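The forward direction of this modeling -- from assumed $(E,\sigma_E,T,\sigma_T)$ to a predicted $p(q)$ -- can be sketched with a small Monte Carlo that draws isotropic viewing angles and applies the standard formula for the apparent axis ratio of a projected triaxial ellipsoid. The Gaussian parameters below are arbitrary illustrations, and measurement errors are omitted:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def projected_q(B, C, theta, phi):
    """Apparent axis ratio of an ellipsoid with axes 1 >= B >= C viewed
    from polar/azimuthal angles (theta, phi); standard projection result."""
    A_ = (np.cos(theta)**2 * (np.sin(phi)**2 + np.cos(phi)**2 / B**2) / C**2
          + np.sin(theta)**2 / B**2)
    B_ = np.cos(theta) * np.sin(2 * phi) * (1 - 1 / B**2) / C**2
    C_ = (np.sin(phi)**2 / B**2 + np.cos(phi)**2) / C**2
    root = np.sqrt((A_ - C_)**2 + B_**2)
    return np.sqrt((A_ + C_ - root) / (A_ + C_ + root))

# Model population: Gaussian ellipticity and triaxiality (values illustrative)
n = 100_000
E = np.clip(rng.normal(0.7, 0.1, n), 0.01, 0.99)   # mean E, sigma_E
T = np.clip(rng.normal(0.2, 0.1, n), 0.01, 0.99)   # mean T, sigma_T
C = 1.0 - E                                        # short axis (A = 1)
B = np.sqrt(1.0 - T * (1.0 - C**2))                # from T = (1-B^2)/(1-C^2)
theta = np.arccos(rng.uniform(0.0, 1.0, n))        # isotropic viewing angles
phi = rng.uniform(0.0, 2.0 * np.pi, n)
q = projected_q(B, C, theta, phi)
p_q, edges = np.histogram(q, bins=20, range=(0, 1), density=True)
\end{verbatim}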
These fractions illustrate the connection between projected shapes and intrinsic shapes: a broad $p(q)$ reflects a large fraction of \emph{disky} objects, whereas a narrow distribution with a peak at small $q$ is indicative of a large fraction of \emph{elongated} objects. A narrow distribution with a peak at large $q$ would indicate a large fraction of \emph{spheroidal} objects. In Figure \ref{res} we provide the modeling results for the full redshift and mass range probed here: for each stellar mass bin we show the redshift evolution of the four model parameters, including the uncertainties obtained by bootstrapping the samples. Finally, in Figure \ref{frac} we show the full set of results in the form of the color coding defined in Figure \ref{class}. \begin{figure*}[t] \epsscale{1.2} \plotone{f3.ps} \caption{Reconstructed intrinsic shape distributions of star-forming galaxies in our 3D-HST/CANDELS sample in four stellar mass bins and five redshift bins. The model ellipticity and triaxiality distributions are assumed to be Gaussian, with the mean indicated by the filled squares, and the standard deviation indicated by the open vertical bars. The $1\sigma$ uncertainties on the mean and scatter are indicated by the error bars. Essentially all present-day galaxies have large ellipticities, and small triaxialities -- they are almost all fairly thin disks. Toward higher redshifts low-mass galaxies become progressively more triaxial. High-mass galaxies always have rather low triaxialities, but they become thicker at $z\sim 2$.} \label{res} \end{figure*} \begin{figure*}[t] \epsscale{1.2} \plotone{f4.ps} \caption{Color bars indicate the fraction of the different types of shape defined in Figure \ref{class} as a function of redshift and stellar mass. The negative redshift bins represent the SDSS results for $z<0.1$; the other bins are from 3D-HST/CANDELS.} \label{frac} \end{figure*} \section{Evolution of Intrinsic Shape Distributions} The small values of $T$ and the large values of $E$ for present-day star-forming galaxies (Figure \ref{res}) imply that the vast majority are thin and nearly oblate. Indeed, according to our classification shown in Figure \ref{hist}, between 80\% and 100\% are \textit{disky}, as is generally known and was demonstrated before on the basis of similar axis-ratio distribution analyses by \citet{vincent05} and \citet{padilla08}. Importantly, the intrinsic shape distribution of star-forming galaxies does not change over a large range in stellar mass ($10^9 - 10^{11}~M_{\odot}$). Toward higher redshifts star-forming galaxies become gradually less disk-like (Figures \ref{hist}, \ref{res} and \ref{frac}). This effect is most pronounced for low-mass galaxies. Already in the $0.5<z<1.0$ redshift bin in Figure \ref{res} we see evolution, mostly in the scatter in triaxiality ($\sigma_T$). That is, there is substantial variety in intrinsic galaxy shape. Beyond $z=1$, galaxies with stellar mass $10^{9}~M_{\odot}$ typically do not have a \textit{disky} geometry, but are most often \textit{elongated} (Figure \ref{res}). Galaxies with mass $10^{10}~M_{\odot}$ show similar behavior, but with evolution only apparent at $z>1.5$. This geometric evidence for mass-dependent redshift evolution of galaxy structure is corroborated by the analysis of kinematic properties of $z=0-1$ galaxies by \citet{kassin12}. \textit{Disky} objects are the most common type ($\ge75\%$) among galaxies with mass $>10^{10}~M_{\odot}$ at all redshifts $z\lesssim 2$.
A population of \textit{spheroidal} galaxies is increasingly prominent among massive galaxies at $z>2$. A visual inspection of such objects reveals that at least a subset are mergers, but we defer an in-depth interpretation of this aspect to another occasion. It is interesting to note that the ellipticity hardly depends on mass and redshift (Figure \ref{res}). That is, despite strong evolution in geometry, the short-to-long axis ratio remains remarkably constant with redshift, and changes little with galaxy mass. A joint analysis of galaxy size and shape is required to explore the possible implications. Note that our definition of geometric shape is unrelated to the common distinction between disks and spheroids on the basis of their concentration parameter or S\'ersic index. As a result we distinguish between the observation that most low-mass star-forming galaxies at $z\sim 2$ have exponential surface brightness profiles \citep[e.g.,][]{wuyts11} and our inference that these galaxies are not, generally, shaped like disks in a geometric sense. This illustrates that an approximately exponential light profile can correlate with the presence of a disk-like structure but cannot be used as a definition of a disk. \section{Discussion} Star formation in the present-day universe mostly takes place in $>10^{9}~M_{\odot}$ galaxies and in non-starburst galaxies. Since essentially all such star-forming galaxies are \textit{disky} and star formation in disk galaxies occurs mostly over the full extent of the stellar disk, it follows immediately that essentially all current star formation takes place in disks. The analysis presented in this \textit{Letter} allows us to generalize this conclusion to include earlier epochs. At least since $z\sim 2$ most star formation is accounted for by $\gtrsim 10^{10}~M_{\odot}$ galaxies \citep[e.g.,][]{karim11}. Figures \ref{res} and \ref{frac} show that such galaxies have disk-like geometries over the same redshift range. Given that 90\% of stars in the universe formed over that time span, it follows that the majority of all stars in the universe formed in disk galaxies. Combined with the evidence that star formation is spatially extended, and not, for example, concentrated in galaxy centers \citep[e.g.,][]{nelson12, wuyts12}, this implies that the vast majority of stars formed in disks. Despite this universal dominance of disks, the elongatedness of many low-mass galaxies at $z\gtrsim 1$ implies that the shape of a galaxy generally differs from that of a disk at early stages in its evolution. According to our results, an elongated, low-mass galaxy at $z\sim 1.5$ will evolve into a disk at later times, or, reversing the argument, disk galaxies in the present-day universe did not initially start out as disks.\footnote{This evolutionary path is potentially interrupted by the removal of gas and cessation of star formation.} As can be seen in Figure \ref{res}, the transition from \textit{elongated} to \textit{disky} is gradual for the population. This is not necessarily the case for individual galaxies. Hydrodynamical simulations indicate that sustained disks form quite suddenly, on a dynamical time scale, after an initial period characterized by rapidly changing dynamical configurations \citep[e.g.,][]{martig14}. This turbulent formation phase may include the subsequent formation and destruction of short-lived disks \citep[e.g.,][]{ceverino13}, associated with rapid changes in orientation and resulting in a hot stellar system of rather arbitrary shape.
Our observation that at $z>1$ the low-mass galaxy population consists of a mix of \textit{disky} and \textit{elongated} objects -- in this picture, the latter represent the irregular phase without a sustained disk -- can be interpreted as some fraction of the galaxies having already transformed into a sustained disk. The probability for this transition is, then, a function of mass which may or may not depend on redshift. Given the various estimates of the stellar mass evolution of Milky Way-mass galaxies as a function of redshift \citep[e.g.,][]{vandokkum13, patel13}, we suggest that the Milky Way may have first attained a sustained stellar disk at redshift $z=1.5-2$. \section{Caveats} Our analysis rests on the assumption that stellar light traces the mass distribution of a galaxy. Potential spoilers include obscuration by dust, dispersion in age among stars, and large gas fractions. Dust has a viewing angle-dependent effect on the measured $q$. Massive galaxies at all redshifts are dusty, and a large variety of dust geometries could disturb axis ratio measurements, hiding the disk-like structure of the population when traced by the axis ratio distribution. Perhaps this plays a role at $z>2$, where we see an increased fraction of round objects. However, the reverse -- to create a disk-like axis ratio distribution for a population of dusty non-disks -- requires unlikely fine tuning. We prefer the more straightforward interpretation that massive, star-forming galaxies truly are disks, at least up to $z=2$. This is supported by the observed correlation between axis ratio and color \citep[e.g.,][]{patel12}, also seen in our sample: galaxies with smaller $q$ are redder than those with larger $q$, as expected from a population of inclined, dusty disks. Dust is also unlikely to affect $p(q)$ of low-mass galaxies. At $z>1$ galaxies with stellar masses $\lesssim 10^{10}~M_{\odot}$ are generally very blue. For these young, presumably metal-poor galaxies dust is of limited relevance to the shape measurements. This also implies that the completeness of our sample is not affected by strong dust obscuration. Age variations in the stellar population and large gas fractions both potentially present challenges to our assumption that the rest-frame optical light traces the underlying mass distribution. Perhaps the luminous regions are young, bright complexes embedded in disks consisting of cold gas or fainter, older stellar populations. We cannot immediately discard this possibility, as dynamical masses exceed stellar masses by an average factor of $\sim 3$ in the stellar mass range $10^8~M_{\odot} \lesssim M_* \lesssim 10^{10}~M_{\odot}$ at $z>1$ \citep[e.g.,][]{forster09, maseda13}. It is implausible that this difference between stellar mass and dynamical mass is entirely made up of undetected, older stars in a disk-like configuration. The different spatial distributions of the young and old stars would lead to wavelength-dependent shapes, which is not observed. If such a population of older stars is present, it must be spatially coincident with the young population, and not, generally, in a disk. We cannot exclude the existence of cold gas disks that are $\sim$3$\times$ more massive than the (young) stellar population. Hydrodynamical simulations show that low-mass, high-redshift systems can produce elongated stellar bodies embedded in more extended, turbulent gaseous bodies with ordered rotation \citep[e.g.,][]{ceverino13}.
At the moment there is little observational evidence for such extended gaseous disks. For the mass range $10^{9.5}~M_{\odot} \lesssim M_* \lesssim 10^{10}~M_{\odot}$ gas masses in excess of the stellar mass have been inferred based on the star-formation rate and the inverse Kennicutt-Schmidt relation \citep[e.g.,][]{forster09}, but this inversion relies on the assumption of a disk-like geometry, weakening the argument. Furthermore, even if these cold gas mass estimates are correct it is not clear that the gas should be organized in a disk. Generally, gas ionized by star formation and cold gas share global kinematic traits, and in these cases the ionized gas does not show rotation. Deep ALMA observations will settle this issue, and for now we will leave this as the main caveat in our analysis. \section{Summary and Conclusions} We have analyzed the projected axis ratio distributions, $p(q)$, measured at rest-frame optical wavelengths, of stellar mass-selected samples of star-forming galaxies in the redshift range $0<z<2.5$ drawn from SDSS and 3D-HST+CANDELS. The intrinsic, 3-dimensional geometric shape distribution is reconstructed under the assumption that the population consists of triaxial objects viewed under random viewing angles. In the present-day universe star-forming galaxies of all masses are predominantly oblate and flat, that is, they are disks. Massive galaxies ($M_*>10^{10}~M_{\odot}$) typically have this shape at all redshifts $0<z\lesssim 2$. Given the dominance of $10^{10}-10^{11}~M_{\odot}$ galaxies in terms of their contribution to the cosmic stellar mass budget and the star formation rate density, it follows that, averaged over all cosmic epochs, the majority of all stars formed in disks. Lower-mass galaxies have shapes at $z>1$ that differ significantly from those of thin, oblate disks. For galaxies with stellar mass $10^{9}~M_{\odot}$ ($10^{10}~M_{\odot}$) there exists a mix of roughly equal numbers of elongated and disk galaxies at $z\sim 1$ ($z\sim 2$). At $z>1$ the $10^{9}~M_{\odot}$ galaxies are predominantly elongated. Our findings imply that low-mass galaxies at high redshift had not yet formed a regularly rotating, sustained disk. Given a range of plausible mass growth rates of Milky Way-mass galaxies, we infer the disk formation phase for such galaxies to be at $z=1.5-2$. \bibliographystyle{apj}
\section{Introduction} In \cite{Incurvati2016}, Luca Incurvati defines the scheme $\textsf{RP}_{m,n}$ as follows, for all integers $m, n$ such that $0<m\leq n$. The schema $\textsf{RP}_{m,n}$ is defined to be a schema of sentences in the $n$th-order language of set theory, where for each well-formed formula $\phi$ in the $n$th-order language of set theory with quantified variables of order at most $m$ and free variables $A_{1}, \ldots A_{k}$, the universal closure of $\phi(A_{1}, \ldots A_{k})\implies \exists\alpha \phi^{V_{\alpha}}(A_{1}^{\alpha}, \ldots A_{k}^{\alpha})$ is defined to be one of the sentences in the schema, where $A^{\alpha}$ is defined to be $A \cap V_{\alpha}$ for second-order variables $A$, and $A^{\alpha}$ is defined to be $\{B^{\alpha}\mid B \in A\}$ where $A$ is a variable of order greater than the second order. This completes the definition of the schema $\textsf{RP}_{m,n}$. \bigskip For example, $\textsf{RP}_{2,2}+\mathrm{Extensionality}+\mathrm{Foundation}+\mathrm{Separation}$ implies \bigskip $\textsf{ZF}+\{$proper class of $\Pi^{1}_{n}$-indescribables $\mid n \in \omega\}$. \bigskip But, as was first observed by Reinhardt in \cite{Reinhardt74}, and first explicitly proved by Tait in \cite{Tait2005a}, $\textsf{RP}_{1,3}$ is inconsistent. Tait tried to resolve this by seeking to motivate restrictions on the formula $\phi$, but Koellner showed in \cite{Koellner2009} that even with these restrictions $\textsf{RP}_{3,4}$ is inconsistent. In \cite{Koellner2009} Peter Koellner extensively examined the question of which reflection principles might be intrinsically justified and formulated a family of reflection principles which were special cases of the ones proposed by Tait and which Koellner showed to be provably consistent relative to an $\omega$-Erd\H{o}s cardinal. Koellner also made the conjecture that all reflection principles which could be formulated and plausibly argued to be intrinsically justified would either prove to be inconsistent or else be provably consistent relative to $\kappa(\omega)$, the first $\omega$-Erd\H{o}s cardinal. I then attempted to take this line of investigation further. \bigskip Formulating a notion of an $\alpha$-reflective cardinal for ordinals $\alpha>0$, and seeking to motivate this along lines inspired by remarks in the work of Tait, I showed in \cite{McCallum2013} that if $\kappa$ is $\omega$-reflective then $V_{\kappa}$ satisfies $\textsf{RP}_{m,n}$ for all $m, n$ with some restrictions on $\phi$ slightly more restrictive than Tait's. And I showed that it is consistent relative to an $\omega$-Erd\H{o}s cardinal that there is a proper class of $\alpha$-reflective cardinals for each $\alpha>0$. In \cite{McCallum2017} I used similar ideas to motivate the idea of an extremely reflective cardinal, also provably consistent relative to an $\omega$-Erd\H{o}s cardinal, and in fact equivalent to the property of being a remarkable cardinal. \bigskip Then the work of Sam Roberts \cite{Roberts2017} appeared, seeking to answer Peter Koellner's challenge to formulate an intrinsically justified reflection principle of greater consistency strength than $\kappa(\omega)$. A similar attempt had already been made by Philip Welch in \cite{Welch2014}, where a reflection principle in the second-order language of set theory was described which implies the existence of a proper class of Shelah cardinals (and therefore in particular a proper class of measurable Woodin cardinals) and is consistent relative to a superstrong cardinal.
Let us describe the reflection principle discussed by Sam Roberts in \cite{Roberts2017}. \bigskip To explain the reflection principle which Roberts formulates in \cite{Roberts2017}, let us begin with the reflection principle that he calls $\textsf{R}_{2}$. This is an axiom schema in the second-order language of set theory. For each formula $\phi(x_{1}, x_{2}, \ldots x_{m}, X_{1}, X_{2}, \ldots X_{n})$ in the second-order language of set theory, there is an axiom asserting that if $\phi$ holds, then there exist an ordinal $\alpha$ such that $x_{1}, x_{2}, \ldots x_{m} \in V_{\alpha}$, and a ``set-sized'' family of classes which contains the classes $X_{1}, X_{2}, \ldots X_{n}$, which is itself coded for by a single class, and which is standard for $V_{\alpha}$ in the sense that every subset $X\subseteq V_{\alpha}$ is such that some class in the family has intersection with $V_{\alpha}$ equal to $X$, such that the formula $\phi$ still holds when the first-order variables are relativised to $V_{\alpha}$ and the second-order variables are relativised to the set-sized family of classes. This completes the description of the axiom schema $\textsf{R}_{2}$.
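In schematic form (our own informal rendering, suppressing the details of how a family of classes is coded by a single class; the precise formulation is in \cite{Roberts2017}), an instance of $\textsf{R}_{2}$ for a formula $\phi$ reads \[ \phi(x_{1}, \ldots, x_{m}, X_{1}, \ldots, X_{n}) \rightarrow \exists\alpha\,\exists\mathcal{C}\, \bigl( x_{1}, \ldots, x_{m} \in V_{\alpha} \wedge X_{1}, \ldots, X_{n} \in \mathcal{C} \wedge \mathcal{C} \mbox{ is standard for } V_{\alpha} \wedge \phi^{V_{\alpha},\,\mathcal{C}} \bigr), \] where $\mathcal{C}$ ranges over ``set-sized'' families of classes, ``standard for $V_{\alpha}$'' means that every $X \subseteq V_{\alpha}$ equals $C \cap V_{\alpha}$ for some $C \in \mathcal{C}$, and $\phi^{V_{\alpha},\,\mathcal{C}}$ denotes the result of relativising the first-order variables of $\phi$ to $V_{\alpha}$ and the second-order variables to $\mathcal{C}$.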
Then Roberts extends the axiom schema as follows. He extends the underlying language so as to include a satisfaction predicate for the second-order language of set theory, and then he extends the axiom schema so as to also include an axiom of the kind described for every formula in this extended language, calling this new axiom schema $\textsf{R}_{S}$. Then he denotes by $\textsf{ZFC2}_{S}$ the result of extending $\textsf{ZFC2}$ -- being the same as $\textsf{ZFC}$ except for having Separation and Replacement as single second-order axioms and also having an axiom schema of class comprehension for every formula in the second-order language of set theory -- by adding the usual Tarskian axioms for the satisfaction predicate and extending the class comprehension axiom schema to include axioms involving formulas in the extended language. Then he proceeds to investigate the theory $\textsf{ZFC2}_{S}+\textsf{R}_{S}$. This completes the description of the reflection principle which Roberts considers. He shows that the theory $\textsf{ZFC2}_{S}+\textsf{R}_{S}$ proves the existence of a proper class of 1-extendible cardinals and is consistent relative to a 2-extendible cardinal. \bigskip I have explored elsewhere the question of whether this reflection principle is intrinsically justified, and have also described how this line of thought could plausibly be taken further to motivate a reflection principle equivalent to the existence of a supercompact cardinal. Let me briefly explain how that can be done. Given a level $V_{\kappa}$, we can consider structures of the form $(V_{\kappa}, V_{\lambda})$ with $\lambda>\kappa$ and consider some formula $\phi$ in a two-sorted language holding in such a structure relative to a certain finite collection of parameters. It is natural to posit that there should exist a ``set-sized'' reflecting structure, containing all the parameters, whose first component is $V_{\alpha}$ for some $\alpha<\kappa$ and whose second component is ``set-sized'' in the sense of having cardinality less than $\beth_{\kappa}$, and furthermore such that the transitive collapse of the second component is of the form $V_{\beta}$ for some $\beta>\alpha$. (Here the collapsing map may not be injective.) A level $V_{\kappa}$ satisfies this form of reflection if and only if $\kappa$ is supercompact, as can be seen from Magidor's characterisation of supercompactness. Let us assume for the sake of argument, for the rest of this paper, that these kinds of considerations can be taken as a good motivation for the view that supercompact cardinals are intrinsically justified. Can one use further ideas to motivate large cardinals of still greater strength as intrinsically justified? \bigskip In Section 2 I shall tell the story of how I followed a line of thought seeking an intrinsic justification for extendible cardinals, building on these ideas, and arrived at a motivation for the existence of Vop\v{e}nka scheme cardinals. This line of argument can be used to obtain a proof that the theory $B_0(V_0)$, defined in \cite{Marshall89}, implies the existence of a Vop\v{e}nka scheme cardinal $\kappa$ such that $V_{\kappa} \prec V$. In particular, the theory $B_0(V_0)$ implies the existence of a proper class of extendible cardinals. Marshall raised in \cite{Marshall89} the question of whether $B_0(V_0)$ implies the existence of supercompact and extendible cardinals, and here both questions are resolved positively. In Section 3 we use ideas based on Marshall's paper to motivate all large cardinals not known to be inconsistent with choice, while stopping short of those known to be inconsistent with choice. \section{Vop\v{e}nka scheme cardinals} Suppose that $\kappa$ is a supercompact cardinal. We define a normal proper filter $F$ on $\kappa$ consisting of those $X \subseteq \kappa$ such that, for some $\delta\geq\kappa$, $\kappa \in j(X)$ for every embedding $j:V \prec M$ witnessing the $\gamma$-supercompactness of $\kappa$ for some $\gamma\geq\delta$. This filter contains the set $X_{\alpha}$, consisting of all $\alpha$-extendible cardinals less than $\kappa$, for each $\alpha<\kappa$. We want to find a sufficient condition for being able to conclude that $\bigcap_{\alpha<\kappa} X_{\alpha} \in F$, so that $V_{\kappa}$ will be a model for the existence of a proper class of extendible cardinals. Let us describe the theory $B_0(V_0)$ of Marshall's paper \cite{Marshall89}. \bigskip It is a theory in the first-order language of set theory with the additional constant symbol $V_0$. First, any axiom of $\textsf{ZFC}$, or its relativisation to $V_0$, is taken as an axiom. Also Extensionality and Foundation are taken as axioms. And if $\phi$ is a formula with at least one free variable $x$, which does not contain $u$ or $\kappa$ free, then $\phi(A) \implies \exists \kappa \in \mathrm{On}\, \exists u\, (u \cap V_0 = R_\kappa \wedge \forall x \forall y\, (x, y \in u \implies [x,y] \in u) \wedge \phi^{V_0}(A^u))$ is taken as an axiom, where we define On to be the set of ordinals in $V_0$, $A^u=A\cap u$ if $A\in\mathcal{P}(V_0)$, $A^u=\{x^u:x\in A\cap u\}$ if $A \notin\mathcal{P}(V_0)$, and $[x,y]=x\times\{0\}\cup y\times\{1\}$. Clearly if $V_{\rho}$ is a model of $B_0(V_0)$ with the constant symbol $V_0$ interpreted by $V_{\kappa}$, then $\kappa$ is a supercompact cardinal in $V_{\rho}$ with $V_{\kappa}\prec V_{\rho}$. \begin{theorem} Suppose that $\kappa$ is a supercompact cardinal in $V_{\rho}$ with $V_{\kappa} \prec V_{\rho}$ and such that $(V_{\kappa}, V_{\rho})$ is a model of $B_0(V_0)$. Then $V_{\kappa}$ is a model for the assertion that there is a proper class of extendible cardinals.
\end{theorem} \begin{proof} Suppose that $X \in L_{1}(V_{\kappa})$ or $X \in L_{2}(V_{\kappa})$, where we require $X$ to be parameter-free definable in the latter case. In either case, define $j(X)$ to be the element of $L_{1}(V_{\rho})$ or $L_{2}(V_{\rho})$ defined by the same formula as the one defining $X$ (the choice of formula does not matter). \bigskip For all $X \in L_{1}(V_{\kappa})$, we have $X \in F$ iff $\kappa \in j(X)$, so that $F \cap L_{1}(V_{\kappa})$ is ordinal-definable in $V_{\rho}$. And there is a normal proper filter $F''$ on $\kappa$ which is a parameter-free definable element of $L_2(V_{\kappa})$, such that $\langle X_\alpha \mid \alpha<\kappa \rangle \subseteq F'' \subseteq F$. Define a filter $F'$ on $\rho$ by $F':=j(F'')$. $F'$ is also a normal proper filter, and $j(X_{\alpha}) \in F'$ for each $\alpha<\kappa$, recalling that when $\alpha<\kappa$, the notation $X_\alpha$ denotes the set of $\alpha$-extendible cardinals less than $\kappa$. The assertion that for every $X$ which is in $F''$ and ordinal-definable in $V_{\kappa}$ we have $\kappa \in j(X)$ is true relative to $V_{\rho}$, and we are assuming that $(V_{\kappa}, V_{\rho})$ is a model of $B_0(V_0)$. For a sufficiently large $n>0$ we can find a $Y\in F$ such that for all $\delta_1<\delta_2<\kappa$ with $\delta_1, \delta_2 \in Y$, we have $V_{\delta_1} \prec_{n} V_{\delta_2} \prec_{n} V_{\kappa}$. We can assume $Y \subseteq \Delta_{\alpha<\kappa} X_{\alpha}$ and that $Y \in L_1(V_\kappa)$. So if $\beta \in Y$ then $\beta \in X_\alpha$ for all $\alpha<\beta$. We claim that $Y$ can be chosen to satisfy all the previously stated hypotheses together with the statement that if $\beta'<\beta$ with $\beta', \beta \in Y$, we still have $\beta' \in X_\alpha$ for all $\alpha<\beta$. This is because we can choose $Y$ in such a way that whenever $\delta_1<\delta_2<\kappa$ and $\delta_1, \delta_2 \in Y$ there is a $\Sigma_{n}$-elementary embedding $j:V_{\delta_2} \prec_{n} V_{\kappa}$ with $j(\delta_1)=\delta_2$. Choosing $n$ to be sufficiently large will yield the desired statement. From this we get $Y\subseteq\bigcap_{\alpha<\kappa} X_\alpha$, and so $\bigcap_{\alpha<\kappa} X_\alpha$ is non-empty as claimed. Thus we get the desired conclusion that $\kappa$ is a limit of cardinals that are extendible in $V_{\kappa}$; in fact, for every $n$, we can even say $C^{(n)}$-extendible in $V_{\kappa}$ rather than just extendible in $V_{\kappa}$, and so by results of \cite{Bagaria2012} we conclude that $\kappa$ is a Vop\v{e}nka scheme cardinal. We have shown that if $(V_{\kappa}, V_{\rho})$ is a model for $B_0(V_0)$ then $\kappa$ is a Vop\v{e}nka scheme cardinal with $V_{\kappa} \prec V_{\rho}$. \end{proof} This completes our description of our initial line of thought leading to the conclusion that Vop\v{e}nka scheme cardinals are justified. \section{Justification for all large cardinals not known to be inconsistent with ZFC} For the purposes of the argument discussed in this section, we will need to present the definition of Marshall's theory $B_0(V_0^0, V_0^1, \ldots, V_0^{n-1})$ discussed in \cite{Marshall89}. It is a theory in the first-order language of set theory with constant symbols $V_0^0, V_0^1, \ldots, V_0^{n-1}$. The axioms are the same as for $B_0(V_0)$ except that now the relativisation of $\phi$ to $V_0^k$, for each axiom $\phi$ of ZFC, is taken as an axiom for all $k$ such that $0\leq k<n$.
And the reflection principle is now $\phi(A) \implies \exists \kappa \in \mathrm{On}\, \exists u\, (V_0^0 \cap u = R_\kappa \wedge (V_0^1)^u=V_0^0 \wedge (V_0^2)^u=V_0^1 \wedge \ldots \wedge (V_0^{n-1})^u=V_0^{n-2} \wedge \forall x \forall y\, (x, y \in u \equiv [x,y] \in u) \wedge \phi^{V_0^{n-1}}(A^u))$, where On is the set of ordinals in $V_0^0$. We can assume that the axiom of extensionality occurs as a conjunct of $\phi(A)$, and therefore speak of the embedding witnessing each instance of reflection, as we shall do in what follows. \bigskip We introduce the following definitions. \begin{Definition} A cardinal $\kappa$ is said to be an $n$-Marshall cardinal, for an ordinal $n\in\omega$ with $n>0$, if there exist $\kappa_0<\kappa_1<\ldots<\kappa_{n-1}<\kappa$ such that to this finite sequence of ordinals corresponds a natural model of $B_0(V_0^0,V_0^1,\ldots V_0^{n-1})$. A cardinal $\kappa$ is said to be a 0-enormous cardinal if it is an $n$-Marshall cardinal for every $n\in\omega\setminus\{0\}$. (Note that a 0-enormous cardinal is bounded above in consistency strength by a totally huge cardinal.) A cardinal $\kappa$ is said to be an $\alpha$-enormous cardinal, for an ordinal $\alpha>0$, if there exists a sequence $\langle \kappa_\beta : \beta<\alpha \rangle$ with $\kappa_0=\kappa$ such that for all $n>0$, either $\alpha>n$ and every sequence of cardinals $\kappa_{\beta_0}<\kappa_{\beta_1}<\ldots<\kappa_{\beta_{n-1}}<\kappa_{\beta_n}$ corresponds to a natural model of $B_0(V_0^0,V_0^1,\ldots V_0^{n-1})$, or $\alpha\leq n$ and for every sequence of cardinals $\kappa_{\beta_0}<\kappa_{\beta_1}<\ldots<\kappa_{\beta_m}$ with $m\leq n$, there exist cardinals $\rho_0<\rho_1<\ldots<\rho_{n-m-1}<\kappa_0$ such that the sequence $\rho_0<\rho_1<\ldots<\rho_{n-m-1}<\kappa_{\beta_0}<\kappa_{\beta_1}<\ldots<\kappa_{\beta_m}$ corresponds to such a model. \end{Definition} Hugh Woodin has elsewhere defined an enormous cardinal to be a cardinal $\kappa$ such that there exist ordinals $\lambda, \gamma$ with $\kappa<\lambda<\gamma$ and $V_{\kappa} \prec V_{\lambda} \prec V_{\gamma}$, and there is an elementary embedding $j:V_{\lambda+1} \prec V_{\lambda+1}$ with critical point $\kappa$. We shall eventually show that a cardinal is enormous if and only if it is $(\omega+1)$-enormous. Our goal shall be to show that $\textsf{ZFC}$+``$\kappa$ is an $(\omega+1)$-enormous cardinal'' implies that $V_{\kappa}$ is a model for the existence of a proper class of cardinals with the large-cardinal property $\phi$, for every large-cardinal property $\phi$ that has been previously considered such that the existence of a cardinal with this property is not known to be inconsistent with $\textsf{ZFC}$, with the exception of the property of being an enormous cardinal. \bigskip So suppose that the sequence $\langle \kappa_\alpha : \alpha \leq \omega \rangle$ witnesses that $\kappa:=\kappa_0$ is an $(\omega+1)$-enormous cardinal. For each finite ordinal $n>0$, the sequence $\langle V_{\kappa_{i+1}} : i<n \rangle$ can serve as the sequence $\langle V_0^{i}: i<n \rangle$ in a model for the theory $B_0(V_0^0,V_0^1, \ldots, V_0^{n-1})$, and the critical point of the embedding witnessing each reflection axiom can always be chosen to be $\kappa_0$; further, for each theory in the sequence one may choose an embedding which witnesses reflection for all formulas simultaneously, and these embeddings may be chosen so as to cohere with one another.
\bigskip Then all of these embeddings may be glued together to yield an embedding $j:V_{\lambda} \prec V_{\lambda}$, where $\lambda$ is the supremum of the $\kappa_{i}$'s. This embedding (together with its image under iterates of its own extension to $V_{\lambda+1}$) witnesses that each $\kappa_{i}$ is an $I_3$ cardinal. \begin{theorem} Assume $\textsf{ZFC}$+``$\kappa_0$ is an $(\omega+1)$-enormous cardinal'' with the notation $\langle \kappa_{i}:i \leq\omega\rangle$ as before. Let $\lambda:=\mathrm{sup}\{\kappa_{i}:i \in\omega\}$, $\gamma:=\kappa_\omega$, and $\kappa:=\kappa_{0}$. Then $V_{\kappa}\prec V_{\lambda}\prec V_{\gamma}$, and $V_{\kappa}$ is a model for the existence of a proper class of $I_0$ cardinals and also a proper class of each of the large cardinals considered by Hugh Woodin in \cite{Woodin2011}. \end{theorem} \begin{proof} In fact, using the hypothesis $V_{\kappa_{i}} \prec V_{\gamma}$, we get reflection for any formula with parameters from anywhere in $V_{\gamma}$, even with rank greater than or equal to $\lambda$, and we have embeddings witnessing the reflection of the kind described in each theory $B_0(V_0^0, V_0^1, \ldots, V_0^{n-1})$ for each $n$, with the constant symbols $V_0^k$ interpreted by $V_{\kappa_{k+1}}$ for all integers $k$ such that $0\leq k<n$. Consider first the case of parameters from $V_{\lambda+1}$: in this case, for each $n$, the same choice of embedding will work for all formulas, and the family of restrictions of these embeddings to $V_{\kappa_{n-1}}$, where $n$ is the positive finite ordinal corresponding to the embedding, can all be glued together to obtain an embedding with domain $V_{\lambda}$, and this determines an embedding with domain $V_{\lambda+1}$. This embedding witnesses that $\kappa$ is an $I_1$ cardinal, and in particular it induces a unique $\omega$-huge embedding $j:V_{\gamma} \prec M_{\gamma}$ with critical point $\kappa$. For a model $K$ of $\textsf{ZF}$ such that $V_{\lambda+1} \cup \gamma \subseteq K \subseteq V_{\gamma}$, definable in $V_{\gamma}$ from ordinals fixed by $j$, in which every element is ordinal definable in $V_{\gamma}$ from elements of $V_{\lambda+1}$, and such that the same definition with ordinal parameters works relative to $K$ as well as $V_{\gamma}$, we can make use of the same argument using parameters from $K$ to obtain an embedding $j:K \prec K$. So we see that we obtain an embedding witnessing that $\kappa$ is an $I_0$ cardinal, and an embedding of the kind described in Laver's axiom and all the other axioms stronger than $I_0$ considered by Hugh Woodin in \cite{Woodin2011}. Moreover, in each case we obtain that in $V_{\kappa}$ there is a proper class of cardinals $\delta$ which are the critical point of such an embedding. This completes our argument that from our stated assumption we obtain a model for every large-cardinal axiom not known to be inconsistent with choice. \end{proof} In this way, using ideas built on those in Marshall's paper, one can provide motivations for all large cardinals not known to be inconsistent with choice, while still having principled reasons to stop short of the point of inconsistency with choice. \pagebreak[4]
\section{Introduction} \label{sect:intro} The ability to fabricate ultra-clean graphene sheets by encapsulation in hexagonal boron nitride crystals~\cite{mayorov_nanolett_2011,wang_science_2013,taychatanapat_naturephys_2013,bandurin_science_2016} allows the investigation of ballistic transport in a large range of temperatures up to the hydrodynamic temperature scale~\cite{bandurin_science_2016,torre_prb_2015} $T_{\rm hydro}$. At $T\gtrsim T_{\rm hydro}$, the mean-free path for electron-electron collisions $\ell_{\rm ee}$ becomes shorter than the mean free path $\ell$ for momentum non-conserving scattering and inelastic electron-electron collisions need to be taken into account in any theoretical description of transport. \begin{figure}[h!] \begin{overpic}[width=0.49\linewidth]{fig1a}\put(2,62){(a)}\end{overpic} \begin{overpic}[width=0.49\linewidth]{fig1b}\put(2,62){(b)}\end{overpic} \begin{overpic}[width=0.49\linewidth]{fig1c}\put(2,62){(c)}\end{overpic} \begin{overpic}[width=0.49\linewidth]{fig1d}\put(2,62){(d)}\end{overpic} \begin{overpic}[width=0.49\linewidth]{fig1e}\put(2,62){(e)}\end{overpic} \begin{overpic}[width=0.49\linewidth]{fig1f}\put(2,62){(f)}\end{overpic} \caption{(Color online) Pictorial representation of the five and four-terminal graphene Hall bar setups considered in this work. Leads are labeled by numbers. Here, $W$ is the width of the horizontal zig-zag leads, $w$ is the width of the vertical armchair leads, $d$ is the center-to-center distance between them, $L_1$ is the distance between lead 1 and the center of lead 2, and $L_2$ is the distance between the center of lead 3 and lead 4, for panel (b) and (d), or the distance between the center of lead 4 and lead 3, for panel (f). The red dotted line (red cross) in panel (d) (panel (f)) indicates the axis (center) of symmetry of the setup. In this Article, all leads have been taken to be semi-infinite. \label{fig:one}} \end{figure} At temperatures $T\ll T_{\rm hydro}$, however, electrons in encapsulated graphene sheets propagate over distances of the order of several microns without experiencing elastic or inelastic scattering events. In this situation, transport properties can be determined by utilizing exact single-particle quantum approaches, combining e.g.~tight-binding Hamiltonians with Kubo formulas~\cite{yuan_prb_2010,roche_ssc_2012} or Landauer-B\"uttiker scattering theory~\cite{kwant,liu2015}. Graphene Hall bars fabricated by van der Waals assembly techniques and used in quantum transport experiments have characteristic linear dimensions of tens of microns, rendering brute-force numerical calculations time consuming or, simply, unfeasible. In Ref.~\onlinecite{liu2015} a convenient scaling scheme for two-terminal numerical transport simulations within Landauer-B\"uttiker scattering theory has been proposed. In such a scheme, the tight-binding parameters for real graphene~\cite{neto_rmp_2009}, namely the hopping energy $t_0$ and the lattice spacing $a_{0}$, are replaced with rescaled ones, $\tilde{t}_{0}$ and $\tilde{a}_{0}$, such that the bulk band structure $E(k)$ remains invariant, i.e.~ $E(k)=(3/2)t_{0}a_0k=(3/2)\tilde{t}_{0} \tilde{a}_{0} k$, with $k$ the magnitude of the momentum. This yields the scaling condition $\tilde{a}_{0}=a_{0}s_{\rm f}$ and $\tilde{t}_{0}=t_{0}/s_{\rm f}$, which applies only when the massless Dirac (linear) approximation is valid, where $s_{\rm f}$ is the scaling factor. 
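As a minimal numerical illustration of this invariance (our own sketch; the value of $s_{\rm f}$ below is an arbitrary choice made only for the check), one can verify that the linear bulk dispersion is left unchanged by the rescaling:
\begin{verbatim}
# Minimal check (ours): the linear bulk dispersion E(k) = (3/2) t0 a0 k
# is invariant under a0 -> a0*sf, t0 -> t0/sf.
import numpy as np

t0, a0 = 2.8, 0.142e-9        # hopping energy (eV) and lattice spacing (m)
sf = 50.0                     # scaling factor (arbitrary, for illustration)
t0_s, a0_s = t0 / sf, a0 * sf

k = np.linspace(0.0, 1e8, 5)  # momenta (1/m), well inside the linear regime
E_orig = 1.5 * t0 * a0 * k    # original dispersion (eV)
E_resc = 1.5 * t0_s * a0_s * k
assert np.allclose(E_orig, E_resc)
\end{verbatim}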
Restrictions on the validity of the scaling procedure, in terms of a maximum scaling factor, are derived on the basis of the {\it bulk} band structure. As an example, this scaling procedure has been used~\cite{liu2015} to simulate the two-terminal conductance, measured on a large crystal, using a scaling factor up to $100$. In this Article we develop a scaling procedure suitable for graphene micron-sized ribbons, which is valid also in the presence of many electrodes (see Fig.~\ref{fig:one}) and is therefore useful to describe {\it non-local} ballistic transport experiments. This procedure is based on the exact band structure of graphene ribbons (rather than on the bulk massless Dirac fermion band structure), and uses the Fermi energy as the key scaling parameter. In brief, a geometrical downward scaling of the size of the structure, from the realistic laboratory scale to the computationally feasible scale, is accompanied by an upward scaling of the Fermi energy in such a way that the number of electronic modes responsible for transport is left unchanged. As an application of the proposed scaling procedure, we study in detail the case of transverse magnetic focusing (TMF), which has been extensively explored in the past, in metals~\cite{tsoi_jetp_1974} and in ultra-clean semiconductor heterostructures~\cite{vanhouten_epl_1988,vanhouten_prb_1989,heremans_prb_1995,heindrichs1998,rokhinson_prl_2004,tsoi_rmp_1999} fabricated by molecular beam epitaxy. Here, we focus on TMF in single-layer graphene~\cite{taychatanapat_naturephys_2013,bhandari_nanolett_2016}, comparing our quantum mechanical numerical calculations with experimental results in ultra-clean encapsulated monolayer samples. Our paper is organized as follows. In Sect.~\ref{sect:TB} we present our scaling approach. In Sect.~\ref{sect:numerical_examples} we summarize our main numerical results on TMF in single-layer graphene, while Sect.~\ref{analy} is devoted to a detailed analysis of the numerical results. In particular, Sect.~\ref{analy} includes a study of the dependence of TMF on the carrier density, temperature, and presence of non-ideal edges, as well as a comparison with experimental data. A brief summary and our main conclusions are reported in Sect.~\ref{sect:conc}. \section{Theoretical framework and scaling procedure} \label{sect:TB} The systems under investigation are multi-terminal graphene Hall bars similar to the ones sketched in Fig.~\ref{fig:one}. A rectangular graphene zig-zag strip, of width $W$, is attached either to $5$~[Figs.~\ref{fig:one}(a)-(b)] or $4$~[Figs.~\ref{fig:one}(c)-(d) and~\ref{fig:one}(e)-(f)] electrodes and exposed to a perpendicular magnetic field ${\bm B}$. The horizontal leads [labeled 1 and 4 in Figs.~\ref{fig:one}(a)-(d), and 1 and 3 in Figs.~\ref{fig:one}(e)-(f)] have the same width $W$ as the ribbon, while the vertical ones have width $w \ll W$. In our calculations below, all leads have been taken to be semi-infinite. Moreover, the vertical terminals are separated by a center-to-center distance $d$, while the distance between the left (right) horizontal electrode and the leftmost (rightmost) vertical electrode is $L_1$ ($L_2$) [see Figs.~\ref{fig:one}(b), (d) and~(f)]. The total length of the ribbon is therefore $L=L_1+d+L_2$. In all setups a non-local resistance $R_{21,34}$ is measured by applying a current bias between leads 1 and 2 and measuring the voltage that develops between leads 3 and 4.
We therefore define \begin{equation}\label{eq:non-local} R_{21,34}=\frac{V_3-V_4}{I_2}~, \end{equation} where $I_i$ is the current flowing in lead $i$ and $V_i$ is the voltage relative to lead $i$. As we mentioned in the Introduction, in high-quality encapsulated graphene we can safely assume that low-temperature transport is coherent and neglect inelastic scattering sources. For the sake of simplicity, we also neglect elastic scattering sources: our work does not therefore deal with carrier density inhomogeneities near the charge neutrality point. The single-particle tight-binding Hamiltonian reads \begin{equation}\label{tbHam} {\cal H} = \varepsilon_{\rm F} \sum_i c^{\dagger}_i c_i -t_0 \sum_{\langle i,j \rangle} c^{\dagger}_i c_j~, \end{equation} where $\varepsilon_{\rm F}$ is the Fermi energy, measured with respect to the Dirac point ($\varepsilon_{\rm F}=0$), and $t_0\simeq 2.8~{\rm eV}$ is the nearest-neighbor hopping energy (the symbol ${\langle i,j \rangle}$ denotes, as usual, nearest-neighbor sites $i$ and $j$). We remind the reader that electron-electron interactions, which are not included in our model Hamiltonian (\ref{tbHam}), enhance the value of the Fermi velocity~\cite{kotov_rmp_2012} $v_{\rm F}$ with respect to the bare non-interacting tight-binding value $v_{{\rm F}, 0} = (3/2)t_{0}a_{0}/\hbar \simeq 0.9 \times 10^{6}~{\rm m}/{\rm s}$. The non-local resistance $R_{21,34}$ can be calculated starting from the linear-response current-voltage relation obtained within the Landauer-B\"uttiker scattering approach and given by\cite{buttiker1986,buttiker1988} \begin{equation}\label{eq:currents} I_i = \frac{2e^2}{h} \left[ (N_i-T_{ii})V_i - \sum_{j \ne i} T_{ij} V_j \right]~, \end{equation} at zero temperature. $R_{21,34}$ is obtained by imposing that $I_1=-I_2$ and $I_3=I_4=I_5=0$, and solving Eq.~(\ref{eq:currents}) for $V_3$ and $V_4$. In Eq.~(\ref{eq:currents}) $T_{ij}$ is the transmission coefficient at the Fermi energy for electrons injected from lead $j$ to be transmitted into lead $i$, satisfying the identity \begin{equation}\label{eq:sumrules} N_i = \sum_j T_{ij} = \sum_j T_{ji}~, \end{equation} $N_i$ being the number of open channels in lead $i$. The transmission coefficients $T_{ij}$ will be numerically calculated using KWANT\cite{kwant}, a toolkit which implements a wave-function matching technique. We assume that no magnetic field is present in the leads.
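As a concrete illustration of this procedure (our own sketch: the transmission matrix below is a randomly generated placeholder, not output of our scattering calculations), the non-local resistance of Eq.~(\ref{eq:non-local}) can be extracted from Eq.~(\ref{eq:currents}) by grounding lead 1 and solving the resulting linear system:
\begin{verbatim}
# Sketch (ours): R_{21,34} from a transmission matrix via the
# Landauer-Buettiker relations I_i = (2e^2/h)[(N_i - T_ii)V_i - sum T_ij V_j].
import numpy as np

def nonlocal_resistance(T):
    """T[i, j] = transmission from lead j into lead i (here 5 x 5)."""
    e2h = 1.0 / 12906.4          # 2e^2/h in siemens (1/12.9064 kOhm)
    N = T.sum(axis=0)            # open channels, sum rule N_j = sum_i T_ij
    G = -e2h * T                 # off-diagonal part of the conductance matrix
    np.fill_diagonal(G, e2h * (N - np.diag(T)))
    # Ground lead 1 (index 0), impose I_2 = 1, I_3 = I_4 = I_5 = 0;
    # I_1 = -I_2 then follows from current conservation.
    I = np.zeros(T.shape[0] - 1)
    I[0] = 1.0
    V = np.linalg.solve(G[1:, 1:], I)    # voltages of leads 2..5 (V_1 = 0)
    return (V[1] - V[2]) / I[0]          # R_{21,34} = (V_3 - V_4)/I_2

rng = np.random.default_rng(0)
T = rng.uniform(0.0, 5.0, size=(5, 5))
T = 0.5 * (T + T.T)   # symmetrize so that row and column sums agree
print(nonlocal_resistance(T))
\end{verbatim}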
\begin{figure}[t] \begin{overpic}[width=1.0\linewidth]{fig2a}\put(0,70){(a)}\end{overpic} \begin{overpic}[width=1.0\linewidth]{fig2b}\put(0,70){(b)}\end{overpic} \caption{\label{fig:two} (Color online) (a) Two examples of band structures of {\it armchair} leads. Energies are measured in ${\rm eV}$, while $k_y$ is measured in units of $a = a_{0} \sqrt{3}$. On the left, $w = 24.4~{\rm nm}$ and $\varepsilon_{\rm F}=1.01~{\rm eV}$; on the right, $\tilde{w} = 10.8~{\rm nm}$ and $\tilde{\varepsilon}_{\rm F}=2.05~{\rm eV}$. (b) Schematic representation of the scaling procedure. In the input box we have the parameters characterizing the real sample: $W$, $L$, $w$, $d$, $B$, $\varepsilon_{\rm F}$, and $T$ (where $T$ is temperature). ${\cal N}_{\rm oc}$ is the number of open channels in a reference lead of the real sample. Quantities denoted by a tilde refer to the rescaled system. The parameters $s$ and $s'$ are the geometric and energy scaling factors, respectively. The rescaled parameters are used to calculate non-local resistances. In this work we focus on the quantity $R_{21,34}$ defined in Eq.~(\ref{eq:non-local}).} \end{figure} Since the computation time scales roughly with the third power of the linear size of the system\cite{kwant}, a one-to-one simulation of a large sample, of the order of a few micrometers, is prohibitively time consuming. For this reason, the development of scaling procedures, which allow one to calculate accurately the transmission coefficients on a much smaller system, is of great interest. Here we develop a procedure which is based on the observation that the band structure of a graphene nanoribbon varies little if its width is decreased by a scaling factor $s$ and, at the same time, the Fermi energy is increased by a suitable and, in principle, different factor $s'$. This is graphically exemplified in Fig.~\ref{fig:two}(a), where the band structures~\cite{akhmerov_prb_2008,katsnelson_book} of two armchair nanoribbons of different width (scaled by a factor $s \simeq 2.26$) are plotted side by side. The two plots resemble each other as long as the Fermi energy of the narrower nanoribbon is increased by a suitable factor $s'$. The notion of ``suitability'' will be clarified below. Note that this works as long as the Fermi energy in the right panel of Fig.~\ref{fig:two}(a) satisfies the inequality $\tilde{\varepsilon}_{\rm F} < t_{0}$. Given a certain Fermi energy $\varepsilon_{\rm F}$ relative to the actual sample, this sets a limitation on the maximum scaling factor $s$ applicable. The scaling procedure is schematized in Fig.~\ref{fig:two}(b). The ``input'' block contains all the parameters characterizing the actual sample one is interested in simulating. The scaling algorithm proceeds as follows. i) One starts by choosing the size $\tilde{w}$ of the vertical leads of the rescaled system used in the calculations; ii) one then defines the {\it geometric} scaling factor $s$ (blue arrow), i.e.~the original width $w$ of the vertical leads in units of $\tilde{w}$; the procedure of geometric scaling, although applied to the whole sample, is based on the vertical leads in Fig.~\ref{fig:one} since those are the narrowest ones; iii) knowing $w$ and $\varepsilon_{\rm F}$, one proceeds by calculating the number of open channels in the actual sample (red arrow), which we denote by ${\cal N}_{\rm oc}$; iv) one then determines the {\it energy} scaling factor $s'$ by imposing that the number of open channels $\tilde{\cal N}_{\rm oc}$ in the rescaled system equals ${\cal N}_{\rm oc}$ (white arrows); v) the rescaled parameters (denoted by a tilde) are used to determine the transmission coefficients of the rescaled system and therefore the non-local resistance $R_{21,34}$. Note that the rescaled magnetic field $\tilde{B}$ is given by $\tilde{B} = s^2B$ to make sure that the flux is invariant under geometric scaling. A minimal sketch of this bookkeeping in code is given below.
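In the sketch below (ours), the exact counting of lead modes in step iii) is replaced, as a stand-in, by the approximate analytical formula of Eq.~(\ref{eq:Noc}) discussed in the next Section; in the actual calculations the open channels of the tight-binding lead are counted exactly (e.g.~with KWANT). The parameter values in the usage line are those of Fig.~\ref{fig:three}.
\begin{verbatim}
# Sketch (ours) of the scaling bookkeeping of Fig. 2(b).  Open channels are
# counted here with the approximate formula N_oc = int[(2w + a0) 2 eF/(h vF)]
# as a stand-in for the exact lead band structure.
H_VF = 4.1357e-15 * 0.9e6     # h*v_F in eV m, bare value v_F ~ 0.9e6 m/s
A0 = 0.142e-9                 # lattice spacing (m)

def n_open(w, eF):
    """Approximate number of open channels; lead width w (m), eF (eV)."""
    return int((2.0 * w + A0) * 2.0 * eF / H_VF)

def rescale(w, eF, B, T, w_tilde):
    s = w / w_tilde                      # step ii): geometric scaling factor
    target = n_open(w, eF)               # step iii): N_oc of the real sample
    lo, hi = eF, 2.8                     # step iv): bisect below t0 = 2.8 eV
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if n_open(w_tilde, mid) < target else (lo, mid)
    eF_tilde = hi                        # smallest energy opening N_oc channels
    return dict(s=s, s_prime=eF_tilde / eF, eF_tilde=eF_tilde,
                B_tilde=s**2 * B,        # flux-preserving magnetic field
                T_tilde=(eF_tilde / eF) * T)  # temperature scales with s'

print(rescale(w=0.37e-6, eF=66.86e-3, B=0.1, T=25.0,
              w_tilde=0.37e-6 / 20.05))  # yields eF_tilde close to 1.31 eV
\end{verbatim}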
\section{Numerical results for TMF} \label{sect:numerical_examples} In this Section we present numerical results based on the scaling procedure described above. We have decided to focus our attention on TMF. We consider the 5-terminal setup in Figs.~\ref{fig:one}(a) and (b). For an armchair lead of width $w=0.37~{\rm \mu m}$ and a Fermi energy $\varepsilon_{\rm F} = 66.86~{\rm meV}$, we find a number of open channels given by ${\cal N}_{\rm oc} =27$. This should be compared with the approximate formula \begin{equation}\label{eq:Noc} {\cal N}_{\rm oc} \simeq {\rm int} \left[ (2w+a_{0})\frac{2\varepsilon_{\rm F}}{hv_{\rm F}} \right]~, \end{equation} which was derived by using the Dirac equation with appropriate boundary conditions~\cite{brey2006}. For the parameters reported above, Eq.~(\ref{eq:Noc}) yields ${\cal N}_{\rm oc} = 26$. The difference is due to residual finite-size effects that are not captured by Eq.~(\ref{eq:Noc}). To prove the effectiveness of the scaling procedure, we have compared in Fig.~\ref{fig:three} the transmission $T_{32}$ and the non-local resistance $R_{21,34}$ at zero temperature for increasing values of the scaling parameter $s$. The transmission $T_{32}$, plotted in Fig.~\ref{fig:three}(a) as a function of the magnetic field $B$, relative to electrons injected from lead 2 and arriving in lead 3, is the most relevant since it determines the main peak in the non-local resistance (see Sect.~\ref{analy}). First, we notice that the curves in Fig.~\ref{fig:three}(a) show no important quantitative differences up to $s\lesssim 34$, at least for fields as large as $0.3~{\rm Tesla}$. Similarly, Fig.~\ref{fig:three}(b) shows that the non-local resistance $R_{21,34}$ is only weakly sensitive to the scaling factor $s$. We also checked that the scaling procedure works well when the graphene Hall bar is an armchair ribbon, so that the vertical leads have zig-zag edges. In Fig.~\ref{fig:three}(c) we compare the non-local resistances $R_{21,34}$ of armchair (solid line) and zig-zag (dashed line) ribbons using approximately the same geometric scaling factor $s \simeq 29.5$. Note that the two curves have the same behavior, the main focusing peak being virtually identical. \begin{figure}[h!] \begin{overpic}[width=0.8\linewidth]{fig3a}\put(2,70){(a)}\end{overpic}\\ \begin{overpic}[width=0.8\linewidth]{fig3b}\put(2,70){(b)}\end{overpic} \begin{overpic}[width=0.8\linewidth]{fig3c}\put(2,70){(c)}\end{overpic} \caption{\label{fig:three} (Color online) (a) and (b) Numerical results for the transmission $T_{32}$---panel (a)---and the non-local resistance $R_{21,34}$---panel (b)---are plotted versus the applied magnetic field $B$ (in Tesla). Different curves refer to different values of the geometric scaling factor $s$ in the 5-terminal setup sketched in Figs.~\ref{fig:one}(a) and (b). The scaling procedure works well for $s\lesssim 34$. Numerical data presented in this figure were obtained for the following choice of parameters: $W=2~{\rm \mu m}$, $L_1=L_2=1.5~{\rm \mu m}$, $w=0.37~{\rm \mu m}$, $d=1~{\rm \mu m}$, $\varepsilon_{\rm F} = 66.86~{\rm meV}$ and ${\cal N}_{\rm oc} = 27$. (c) Non-local resistance $R_{21,34}$ versus $B$ for an armchair (solid line, scaling factor $s=29.50$, number of open channels in lead $2$: ${\cal N}_{\rm oc} = 25$) and a zig-zag (dashed line, scaling factor $s=29.49$, number of open channels in lead $2$: ${\cal N}_{\rm oc} =27$) ribbon. The energy scaling factor is $s' = 21.06$ in both cases.} \end{figure} In Fig.~\ref{fig:four} we also present how the energy scaling factor $s'$ depends on $s$ for different values of the Fermi energy $\varepsilon_\text{F}$. As expected, the plot shows that $s'$ tends to deviate from $s$ for large values of $s$, and more rapidly so for large values of $\varepsilon_\text{F}$. We note that the functional dependence of $s'$ on $s$ is crucial.
It makes sure that the position of the focusing peaks in the non-local resistance $R_{21,34}$---see Section~\ref{analy}---is insensitive to the geometric scaling factor $s$. Imposing that the rescaled system and the original one have the same number of open channels in the injection lead---$\tilde{\cal N}_{\rm oc} = {\cal N}_{\rm oc}$ through the parameter $s'$---guarantees that the rescaled-system band structure faithfully reflects the original one. \begin{figure}[t] \includegraphics[width=1.0\linewidth]{fig4} \caption{\label{fig:four} (Color online) Trend of the ratio $s'/s$ versus the geometric scale factor $s$ for three different values of the unscaled Fermi energy $\varepsilon_{\rm F}$. For the largest Fermi energy ($\varepsilon_{\rm F}=100$ meV), $s=36.68$ is the maximum value for which the scaling procedure can be applied (i.e. the number of channels in the vertical leads can be kept fixed).} \end{figure} \section{Detailed analysis of the numerical results} \label{analy} In this Section we analyze the origin of the different peaks exhibited by the non-local resistance as a function of $B$ and the origin of the sign of $R_{21,34}$ at {\it zero} magnetic field. For the sake of simplicity, we consider the 4-terminal setup in Figs.~\ref{fig:one}(c) and~(d), where the zero-temperature non-local resistance is given by the following analytical expression~\cite{buttiker1986,buttiker1988}: \begin{equation}\label{r2134} R_{21,34} = \frac{h}{2e^2}\frac{T_{32}T_{41}-T_{42}T_{31}}{D}~, \end{equation} where \begin{equation} D \equiv (\alpha_{11}\alpha_{22}-\alpha_{12}\alpha_{21})S~, \label{di} \end{equation} \begin{equation} S \equiv T_{13}+T_{14}+T_{23}+T_{24} = T_{31}+T_{41}+T_{32}+T_{42}~, \end{equation} \begin{align} \alpha_{11} &= \frac{2e^2}{h}\left[ T_1-\frac{(T_{13}+T_{14})(T_{41}+T_{31})}{S} \right]\\ \alpha_{12} &= -\frac{2e^2}{h} \frac{T_{14}T_{23}-T_{13}T_{24}}{S} \\ \alpha_{21} &= -\frac{2e^2}{h} \frac{T_{32}T_{41}-T_{42}T_{31}}{S} \\ \alpha_{22} &= \frac{2e^2}{h}\left[ T_4-\frac{(T_{14}+T_{24})(T_{41}+T_{42})}{S}\right]~, \label{last} \end{align} and \begin{equation} T_{i}=\sum_{j\ne i} T_{ij}~. \end{equation} We start by discussing the behavior of the different transmission coefficients $T_{ij}$ as functions of the applied magnetic field in terms of a semiclassical picture and then see how they combine to give rise to the non-local resistance $R_{21,34}$ with the aid of Eq.~(\ref{r2134}). Within a simple classical picture~\cite{heindrichs1998,milovanovic_jap_2014}, which will be corroborated below in Sect.~\ref{sect:classical}, electrons entering the Hall bar from a given electrode undergo a cyclotron motion with radius $r_{\rm c}=m^*v_{\rm F}/(eB)$ and specular reflections at the boundaries of the Hall bar. In graphene, the cyclotron radius for weak magnetic fields can be written as \begin{equation}\label{eq:cyclotronradius} r_{\rm c} = \frac{\varepsilon_{\rm F}}{eBv_{\rm F}}~, \end{equation} where $m^*= \hbar k_{\rm F}/v_{\rm F}$ is the effective electron mass in doped graphene~\cite{geim_naturemater_2007,castroneto_rmp_2009}. We denote by $B^{(2N)}_{32}$ the field values for which the center-to-center distance $d$ between contacts 2 and 3 is an integer multiple of $2r_{\rm c}$, i.e. \begin{equation} d = 2N\frac{\varepsilon_{\rm F}}{eB^{(2N)}_{32}v_{\rm F}} \end{equation} with $N=1,2$ corresponding to the trajectories shown in Fig.~\ref{fig:five}(a). 
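For the parameters of Fig.~\ref{fig:three}, a quick evaluation (ours; we use the bare Fermi velocity $v_{{\rm F},0} \simeq 0.9\times 10^{6}~{\rm m/s}$, so the interaction-enhanced velocity would shift these numbers somewhat) locates the first two focusing fields:
\begin{verbatim}
# Quick evaluation (ours) of the focusing fields B^(2N)_32 = 2N eF/(e vF d),
# with parameters as in Fig. 3 and the bare Fermi velocity.
e = 1.602176634e-19           # elementary charge (C)
eF = 66.86e-3 * e             # Fermi energy (J)
vF = 0.9e6                    # bare Fermi velocity (m/s)
d = 1.0e-6                    # center-to-center lead distance (m)

for N in (1, 2):
    B = 2 * N * eF / (e * vF * d)
    print(f"B^({2 * N})_32 = {B:.3f} T")   # ~0.149 T and ~0.297 T
\end{verbatim}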
The transmission coefficients $T_{41}$ (solid curve), $T_{32}$ (dotted curve) and $T_{42}$ (dash-dotted curve) are plotted, as functions of $B$, in Fig.~\ref{fig:five}(b). \begin{figure}[h!] \begin{overpic}[width=0.8\linewidth]{fig5a}\put(2,70){(a)}\end{overpic}\\ \begin{overpic}[width=0.8\linewidth]{fig5b}\put(2,70){(b)}\end{overpic} \begin{overpic}[width=0.8\linewidth]{fig5c}\put(2,70){(c)}\end{overpic} \caption{\label{fig:five} (Color online) (a) Classical electron trajectories for two values, $B_{32}^{(2)}$ and $B_{32}^{(4)}$, of the perpendicular magnetic field. (b) Numerical results for the transmission coefficients $T_{41}$ (solid line), $T_{32}$ (dotted line), and $T_{42}$ (dashed-dotted line) are plotted versus the applied magnetic field $B$ (in Tesla) for the $4$-terminal setup in Figs.~\ref{fig:one}(c) and (d), with $s=10.23$. We have plotted only three transmission coefficients since $T_{31}(B)= T_{42}(B)$. This is because the system depicted in Fig.~\ref{fig:one}(d) is symmetric under reflection about the dotted red line in Fig.~\ref{fig:one}(d). (c) Numerical results for the transmission coefficient $T_{32}$ are plotted versus $B$ for the $4$-terminal setup in Figs.~\ref{fig:one}(c) and (d), with $s=20.05$. In the inset, $T_{32}$ is plotted for $B\geq 0.4~{\rm Tesla}$, clearly showing the transition to the integer quantum Hall regime (for the largest values of $B$ considered, $T_{32} =1$). Parameters as in Fig.~\ref{fig:three}.} \end{figure} \begin{figure}[t] \includegraphics[width=1.0\linewidth]{fig6} \caption{\label{fig:six} (Color online) Numerically calculated non-local resistance $R_{21,34}$ (solid line) versus magnetic field $B$ (in Tesla) for the $4$-terminal setup in Figs.~\ref{fig:one}(c) and (d). The dotted and dash-dotted lines represent, respectively, the two terms $T_{32}T_{41}/D$ and $T_{42}T_{31}/D$ entering the mathematical expression of the non-local resistance $R_{21,34}$---see Eq.~(\ref{r2134}). Parameters as in Fig.~\ref{fig:three}.} \end{figure} Regarding the transmission probability $T_{32}$, Fig.~\ref{fig:five}(b) shows that for $B \simeq B^{(2)}_{32}$ (marked by a blue vertical line) $T_{32}$ exhibits a relative maximum, which stems from the ``direct'' trajectory with no bounces between leads $2$ and $3$ [blue line in Fig.~\ref{fig:five}(a)]. On the other hand, for magnetic fields larger than $B=B^{(4)}_{32}$ (marked by a green vertical line), $T_{32}$ exhibits a series of downward jumps---see Fig.~\ref{fig:five}(c)---preceding the eventual onset of the integer quantum Hall effect. Within the semiclassical interpretation, one expects that electrons exiting lead $2$ can either reach lead $3$ or be reflected back to lead $2$. Therefore, $T_{32}$ slowly decreases from its maximum value according to $T_{32} \simeq {\cal N}_{\rm oc} - T_{22}$, where ${\cal N}_{\rm oc}$ is the number of open channels in lead $2$. Indeed, for a fixed value of $\varepsilon_{\rm F}$, the reflection coefficient $T_{22}$ for lead $2$ increases with increasing $B$ (not shown). The slow power-law decay of $T_{32}$ for $B>B^{(4)}_{32}$ is due to trajectories with one or more bounces (skipping orbits) between leads $2$ and $3$. The trajectory with one bounce is depicted by a green line in Fig.~\ref{fig:five}(a). $T_{32}$ decreases in steps until its lowest value, $T_{32}=1$, which is reached at fields above $4~{\rm Tesla}$. In this case, leads $2$ and $3$ are connected by a single quantum Hall edge state.
Note, finally, that $T_{32}$ goes to zero for large enough negative $B$, since all electrons injected from lead $2$ are in this case diverted by the Lorentz force towards lead $1$. The transmission coefficient $T_{42}$ is characterized by a dip occurring at $B=B^{(2)}_{32}$, stemming from the fact that electrons injected from lead 2 tend to be collected mostly by lead 3 at the value of $B$ that corresponds to the cyclotron orbit connecting leads 2 and 3. We note that $T_{42}$ goes to zero at negative values of $B$---the corresponding trajectories being deflected towards lead 1---and at large positive values of $B$---such that the small radius of the skipping orbits forces electrons injected from lead 2 to end up in the same lead. This fact, together with the dip occurring at $B=B^{(2)}_{32}$, gives rise to two broad peaks in $T_{42}$, whose positions we denote by $B^{(1)}_{42}$ and $B^{(3)}_{42}$. Notice furthermore that $T_{31}(B)=T_{42}(B)$, since the system sketched in Fig.~\ref{fig:one}(d) is symmetric under reflection about the dotted red line in Fig.~\ref{fig:one}(d), and that $T_{41}$ is mainly characterized by a single large peak around $B=0$, since electrons injected from lead 1 have a higher probability of reaching lead 4 at small fields. The slight deviation of the maximum of $T_{41}$ from $B=0$ towards a negative value can be attributed to the fact that leads 2 and 3, positioned on the bottom of the Hall bar, take away electrons at small positive values of $B$ at the expense of $T_{41}$. As a result of Eq.~(\ref{r2134}), which expresses the non-local resistance $R_{21,34}$ in terms of the transmission probabilities, the two peaks in Fig.~\ref{fig:six} (where $R_{21,34}$ is plotted as a function of the magnetic field) at $B\simeq B^{(2)}_{32}$ and $B\simeq B^{(4)}_{32}$ stem from the two features in $T_{32}$ discussed above and are therefore genuine focusing peaks of the non-local resistance $R_{21,34}$. On the contrary, the origin of the two deep negative minima in $R_{21,34}$ is related to the two broad peaks in $T_{42}$. We finally stress that the positivity of $R_{21,34}$ at $B\approx 0$ is due to the large value---see Fig.~\ref{fig:five}(b)---of $T_{41}$ for small (positive and negative) values of $B$, which originates from the fact that leads $1$ and $4$ are much wider than leads $2$ and $3$. Note, however, that an additional contact---such as terminal $5$ in Figs.~\ref{fig:one}(a) and~(b)---present on the upper side of the Hall bar can serve as an electron drain. This may significantly affect the picture discussed above at $B\approx 0$, assuming that $W$ is much smaller than the mean free path $\ell$, so that even negative values of $R_{21,34}$ can be found, depending on the relative size and position of the extra contact. However, if $W$ is larger than $\ell$, negative values of the non-local resistance $R_{21,34}$ (termed ``vicinity'' resistance in Ref.~\onlinecite{bandurin_science_2016}) in zero magnetic field cannot be explained within a single-particle ballistic approach~\cite{bandurin_science_2016}. As we will see below in Sect.~\ref{sect:disorder}, elastic disorder at the edges is not able to change the clean-limit picture. Negative values of $R_{21,34}$ (which occur only at sufficiently large temperatures) have been attributed to hydrodynamic viscous flow~\cite{bandurin_science_2016,torre_prb_2015}.
To further emphasize the relation between transmission coefficients and non-local resistance, in Fig.~\ref{fig:six} we plot separately the two terms appearing in Eq.~(\ref{r2134}) along with $R_{21,34}$. The plot makes clear that the term $T_{32}T_{41}/D$ determines the occurrence of the positive peaks, while the term $T_{42}T_{31}/D$ is responsible for the appearance of the two negative dips. This interpretation of the negative dips is in agreement with earlier theoretical work~\cite{milovanovic_jap_2014}, based on a semiclassical billiard model, in which TMF in a geometry identical to that of Figs.~\ref{fig:one}(c) and (d) was discussed. We now turn to an analysis of non-local ballistic magneto-transport in the 4-terminal setup sketched in Figs.~\ref{fig:one}(e) and (f), with the two vertical leads placed on {\it opposite} sides of the Hall bar. The relevant transmission coefficients are plotted in Fig.~\ref{fig:seven}(a) as functions of the magnetic field. Note that $T_{31}$ (dash-dotted line) and $T_{42}$ (dashed line) obey the following symmetry: $T_{31}(B) = T_{31}(-B)$ and $T_{42}(B)= T_{42}(-B)$. Also, we note that $T_{32}(B)= T_{41}(-B)$. This is because the system depicted in Fig.~\ref{fig:one}(f) has an inversion symmetry center, marked by a red cross in Fig.~\ref{fig:one}(f). The transmission coefficient between the two widest electrodes of the system, $T_{31}$, is however much larger than $T_{42}$ and shows a smooth bell-like shape, which decreases slowly with increasing magnitude of $B$. On the contrary, $T_{42}$ shows spiky features (not visible on the scale of the plot), possibly arising from quantum interference effects, and goes rapidly to zero at the value of the field ($|B|\simeq B^{(0)}$) that yields a cyclotron radius equal to $W/2$, i.e. \begin{equation} B^{(0)}= \frac{2\varepsilon_{\rm F}}{e Wv_{\rm F}}~. \end{equation} This is due to the fact that electrons injected from lead 2 cannot reach lead 4 for $B\simeq \pm B^{(0)}$, being deflected towards lead 3 (lead 1). This is confirmed by the behavior of $T_{32}$ (dotted line), which increases with increasing $B$, reaching a constant value for $B\geq B^{(0)}$, close to the number of open channels in lead 2 (${\cal N}_{\rm oc}=27$). Notice also that $T_{32}$ is negligible only when $B\leq -B^{(0)}$. The non-local resistance $R_{21,34}$ can be calculated using Eq.~(\ref{r2134}). Numerical results are reported in Fig.~\ref{fig:seven}(b) as a function of $B$. It turns out that $R_{21,34}$ very closely resembles the shape of $T_{42}$, but with a negative sign, since $R_{21,34}$ is dominated by the term $T_{42}T_{31}/D$ at all values of $B$. It is worth noticing that in our simulations $R_{21,34}$ is never positive, since for $B\leq -B^{(0)}$, when $T_{42}$ becomes negligible, $T_{32}$ becomes negligible too. \begin{figure}[t] \begin{overpic}[width=\linewidth]{fig7a}\put(2,70){(a)}\end{overpic}\\ \begin{overpic}[width=\linewidth]{fig7b}\put(2,70){(b)}\end{overpic} \caption{\label{fig:seven} (a) Numerically calculated transmission coefficients $T_{41}$ (solid line), $T_{32}$ (dotted line), $T_{42}$ (dashed line), and $T_{31}$ (dashed-dotted line) are plotted versus the applied magnetic field $B$ (in Tesla) for the $4$-terminal setup in Figs.~\ref{fig:one}(e) and (f), with $s=10.23$. Note that $T_{41}(-B)=T_{32}(B)$ because the system depicted in Fig.~\ref{fig:one}(f) has an inversion symmetry center, marked by a red cross in Fig.~\ref{fig:one}(f). (b) Non-local resistance $R_{21,34}$ relative to the transmissions in panel (a).
Parameters as in Fig.~\ref{fig:three}.} \end{figure} \subsection{Classical trajectory model} \label{sect:classical} To further investigate the classical nature of the main features in the non-local resistance $R_{21,34}$, we have developed a model based on fully classical trajectories, which allows us to calculate the transmission probabilities between electrodes. The model is detailed as follows (a condensed sketch in code is given after this list): \begin{itemize} \item We assume that electrons move in the Hall bar according to the classical equations of motion of a charged particle in a transverse magnetic field; \item In a given electrode, electrons are emitted from $M_{\text{p}}$ equidistant points; \item From each such point, $M_{\text{e}}$ electrons are emitted with an {\it isotropic} distribution of angles and with a fixed magnitude of velocity (equal to the Fermi velocity $v_{\rm F}$); \item The number of electrons $M_{ij}$ (with $i,j=1,2,3,4$) arriving in electrode $i$ when emitted from electrode $j$ is determined by the classical equations (notice that $M_{ij}\leq M_{\text{p}} M_{\text{e}}$); \item The transmission probabilities $\overline{T}_{ij}$ are defined by normalizing the coefficients $M_{ij}$ as follows: \begin{equation} \overline{T}_{ij} = \alpha_i \beta_j M_{ij}~, \end{equation} where $\alpha_i$ and $\beta_j$ are numerical coefficients determined by imposing the following conditions: \begin{equation} \sum_{i} \overline{T}_{ij}= N_j \label{c1} \end{equation} and \begin{equation} \sum_{j} \overline{T}_{ij}= N_i~. \label{c2} \end{equation} Here, $N_i$ is the number of open channels in lead $i$ as defined in the tight-binding quantum model, see Sect.~\ref{sect:TB}; \item The non-local resistance $R_{21,34}$ is finally calculated by substituting the transmission probabilities $\overline{T}_{ij}$ in Eq.~(\ref{r2134}). \end{itemize} Notice that the conditions (\ref{c1}) and (\ref{c2}) express particle current conservation within the scattering approach used in the quantum model of Sect.~\ref{sect:TB}.
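The following condensed sketch (ours) illustrates the trajectory-counting step for electrons emitted from lead 2 in the geometry of Figs.~\ref{fig:one}(c) and (d); the geometry handling is simplified, $M_{\rm p}$ and $M_{\rm e}$ are kept small, and the normalization step of Eqs.~(\ref{c1}) and~(\ref{c2}) is omitted:
\begin{verbatim}
# Condensed sketch (ours): classical trajectories from lead 2 in the
# 4-terminal bar; leads 1 and 4 are the left/right ends, leads 2 and 3
# sit on the bottom edge.  Normalization (alpha_i, beta_j) omitted.
import numpy as np

E = 1.602e-19
eF, vF = 66.86e-3 * E, 0.9e6                 # Fermi energy (J), velocity (m/s)
W, L, w, d = 2e-6, 4e-6, 0.37e-6, 1e-6       # geometry (m), as in Fig. 3
x2, x3 = 1.5e-6, 1.5e-6 + d                  # lead 2 and 3 centers, bottom edge

def lead_hit(x, y):
    """Lead index absorbing a particle at (x, y), or 0 if none."""
    if x <= 0: return 1
    if x >= L: return 4
    if y <= 0:
        if abs(x - x2) < w / 2: return 2
        if abs(x - x3) < w / 2: return 3
    return 0

def arrivals_from_lead2(B, Mp=40, Me=40, dt=2e-14, nsteps=10000):
    omega = E * B / (eF / vF**2)             # cyclotron frequency, m* = eF/vF^2
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    M = np.zeros(5, int)                     # M[i]: arrivals in lead i
    for x0 in x2 - w/2 + w * (np.arange(Mp) + 0.5) / Mp:
        for th in np.pi * (np.arange(Me) + 0.5) / Me:
            x, y = x0, 1e-12
            vx, vy = vF * np.cos(th), vF * np.sin(th)
            for _ in range(nsteps):
                x, y = x + vx * dt, y + vy * dt
                vx, vy = c * vx + s * vy, -s * vx + c * vy  # exact rotation
                i = lead_hit(x, y)
                if i:
                    M[i] += 1
                    break
                if y <= 0: y, vy = 0.0, abs(vy)   # specular, bottom edge
                if y >= W: y, vy = W, -abs(vy)    # specular, top edge
    return M

print(arrivals_from_lead2(B=0.15))  # near B^(2)_32, focusing favors lead 3
\end{verbatim}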
In Fig.~\ref{fig:eight} we plot the non-local resistance obtained with this method as a function of $B$ (red dashed line), along with the result obtained with the quantum model of Sect.~\ref{sect:TB} (black solid line). Fig.~\ref{fig:eight} shows that the main features of $R_{21,34}$, in particular the two peaks for $B >0$ and the two negative minima, are well reproduced by the classical trajectory model. This result confirms the classical nature of the main features of $R_{21,34}$, phase coherence playing little role. \begin{figure}[t] \includegraphics[width=1.0\linewidth]{fig8} \caption{\label{fig:eight} (Color online) Numerically calculated non-local resistance $R_{21,34}$ for the $4$-terminal setup in Figs.~\ref{fig:one}(c) and (d) as a function of $B$ calculated using the classical trajectory model (CTM, red dashed line). The black line is the result obtained with the quantum tight-binding model (same curve plotted in Fig.~\ref{fig:six}). We take the following parameters: $M_{\rm e}=100$ and $M_{\rm p}=500$. Sample parameters are the same as in Fig.~\ref{fig:three}.} \end{figure} \subsection{Carrier density dependence} \label{sect:density} So far we have seen that the main features of the non-local resistance can be explained on a classical level. This is due to the fact that the value of the Fermi energy used for the plot in Fig.~\ref{fig:six}, $\varepsilon_{\rm F} = 66.86~{\rm meV}$, corresponds to the relatively highly doped graphene sheet used in the measurements (see below). One expects, however, that quantum effects become more important as the carrier density (i.e.~the Fermi energy) is decreased, thus moving to a regime where only a few electronic modes are involved in transport. Upon decreasing the density, however, disorder also becomes important; here we have decided to neglect it. Fig.~\ref{fig:nine} shows the evolution of the non-local resistance versus magnetic field, at zero temperature, as the Fermi energy is decreased. Starting from Fig.~\ref{fig:nine}(a), relative to $\varepsilon_{\rm F} = 66.86~{\rm meV}$, one observes that the value of the resistance increases while the focusing peaks ``degrade'', but still persist for $\varepsilon_{\rm F} = 27.74~{\rm meV}$ [Fig.~\ref{fig:nine}(b)] and for $\varepsilon_{\rm F} = 13.37~{\rm meV}$ [Fig.~\ref{fig:nine}(c)], with peak positions shifting in agreement with Eq.~(\ref{eq:cyclotronradius}). By further lowering $\varepsilon_{\rm F}$, Fig.~\ref{fig:nine}(d) shows that for $\varepsilon_{\rm F} = 2.97~{\rm meV}$ the non-local resistance presents a completely different structure which cannot be understood in classical terms. Notice, in particular, that in this latter case the number of open channels in leads 2 and 3 ($N_2$ and $N_3$), the narrowest in the system, is equal to $1$. Since focusing peaks are still distinguishable when $N_2=N_3=5$---$\varepsilon_{\rm F} = 13.37~{\rm meV}$, as in Fig.~\ref{fig:nine}(c)---we can conclude that the quantum regime sets in when the number of open channels is close to $1$. A complete analysis of the quantum regime and of the interplay between electron-hole puddles and quantum interference, though, is beyond the scope of the present Article. \begin{figure}[t] \begin{overpic}[width=0.49\linewidth]{fig9a}\put(0,72){(a)}\end{overpic} \begin{overpic}[width=0.49\linewidth]{fig9b}\put(0,72){(b)}\end{overpic} \begin{overpic}[width=0.49\linewidth]{fig9c}\put(0,72){(c)}\end{overpic} \begin{overpic}[width=0.49\linewidth]{fig9d}\put(0,72){(d)}\end{overpic} \caption{\label{fig:nine} Numerically calculated non-local resistance $R_{21,34}$ versus applied magnetic field $B$ at different values of the Fermi energy [(a) $\varepsilon_{\rm F}= 66.86$ meV, (b) $\varepsilon_{\rm F}= 27.74$ meV, (c) $\varepsilon_{\rm F}= 13.37$ meV, and (d) $\varepsilon_{\rm F}= 2.97$ meV] for the $4$-terminal setup in Figs.~\ref{fig:one}(c) and (d). The scaling factor used for the calculations is $s=10.23$, while the sample parameters are the same as in Fig.~\ref{fig:three}. The number of open channels in the leads depends on $\varepsilon_{\rm F}$. In leads $2$ and $3$ the number of open channels is: (a) ${\cal N}_{\rm oc}=27$, (b) ${\cal N}_{\rm oc}=11$, (c) ${\cal N}_{\rm oc}=5$, and (d) ${\cal N}_{\rm oc}=1$. Notice that the scales (in both the resistance and field axes) are different in the various panels. The focusing peaks remain located where predicted by the classical analysis in panels (a), (b) and (c).} \end{figure} \subsection{Thermal smearing of the Fermi surface} \label{sect:temperature} In this Section we analyze the impact on TMF of the smearing of the Fermi surface due to finite-temperature effects.
Within the scattering approach in the linear-response regime, the effect of a finite temperature $T$ is taken into account by replacing in Eqs.~(\ref{r2134})-(\ref{last}) the transmission probabilities $T_{ij}$, evaluated at the Fermi energy, with the following energy integrals: \begin{equation} \langle T\rangle_{ij}=\int_{-\infty}^{\infty}T_{ij}(E)\left( -\frac{\partial f(E)}{\partial E} \right)dE~, \label{finitet} \end{equation} where $f(E)=[\exp(E/(k_{\rm B} T))+1]^{-1}$ is the Fermi distribution function at temperature $T$. Plots of the non-local resistance $R_{21,34}$ as a function of the magnetic field $B$ and for different values of $T$ are reported in Fig.~\ref{fig:ten}. As expected, the non-local resistance becomes smoother for increasing values of $T$, and the height of the focusing peaks decreases as $T$ increases. Notice, however, that the first peak at positive values of $B$, which is not related to focusing, is hardly affected by temperature. This behavior can be understood in classical terms from the fact that, at finite temperatures, electrons contributing to $\langle T\rangle_{ij}$ are emitted at different energies, according to Eq.~(\ref{finitet}), and thus move with different cyclotron radii. More precisely, the values of the cyclotron radii will be distributed around the zero-temperature value [Eq.~(\ref{eq:cyclotronradius})] with a width proportional to temperature and given by \begin{equation} \delta r_c = \frac{k_{\rm B}T}{eBv_{\rm F}}~. \end{equation} In other words, with increasing temperature a larger range of values of the cyclotron radius contributes to all transmissions $\langle T\rangle_{ij}$, so that they remain non-vanishing over a larger interval of values of $B$. As a result, the focusing effect is blurred. The non-local resistance diminishes in magnitude at all fields with increasing temperature and remains finite for larger values of $B$. Note that in Fig.~\ref{fig:ten} the peaks occurring at $B=B^{(1)}_{42}$ and $B=B^{(2)}_{32}$ remain distinguishable at all temperatures, although the decrease of their height is nearly exponential in $T$. The peaks occurring at $B=B^{(3)}_{42}$ and $B=B^{(4)}_{32}$, however, are more strongly affected, eventually disappearing at the largest temperatures considered. In this Section we have analyzed only Fermi-surface smearing effects induced by a finite temperature. In reality, inelastic collisions between electrons and agents external to the 2D electron system (e.g.~acoustic phonons) also play a role in determining the magnitude of the non-local signal in a TMF experiment. Our results in Fig.~\ref{fig:ten} clearly show that Fermi-surface smearing effects play a non-negligible role and must be taken into account in any serious comparison between microscopic theoretical predictions and experiments.
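As a concrete illustration of the thermal average of Eq.~(\ref{finitet}) (our own sketch; the step-like transmission used below is a toy placeholder for the output of the scattering calculation), the integral can be evaluated on a finite energy grid around $\varepsilon_{\rm F}$:
\begin{verbatim}
# Sketch (ours): thermal average <T>_{ij} of a transmission curve, obtained
# by convolving T(E) with -df/dE = 1/(4 kB T cosh^2[(E - eF)/(2 kB T)]).
import numpy as np

KB = 8.617333e-5                       # Boltzmann constant (eV/K)

def thermal_average(T_of_E, eF, T, n=201, half_width=8.0):
    """<T>(eF) at temperature T (K); T_of_E maps energy (eV) -> transmission."""
    E = eF + np.linspace(-half_width, half_width, n) * KB * T
    kernel = 1.0 / (4.0 * KB * T * np.cosh((E - eF) / (2.0 * KB * T))**2)
    vals = np.array([T_of_E(En) for En in E])
    return np.trapz(kernel * vals, E)

toy = lambda E: np.floor(E / 0.01)     # toy: one extra channel every 10 meV
print(thermal_average(toy, eF=0.0669, T=50.0))
\end{verbatim}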
As shown in Ref.~\onlinecite{bandurin_science_2016}, non-local electrical signals at temperatures $T\gtrsim T_{\rm hydro}$, defined in Sect.~\ref{sect:intro}, are sensitive to electron-electron interactions, which are not included in the numerical calculations presented in this work.} \end{figure} \subsection{Experimental data versus numerical calculations} \label{sect:exp} We have carried out transport experiments on Hall bar devices with two current and four potential probes (two potential probes on each side). To achieve mean-free paths $\ell$ larger than the sample size, graphene was encapsulated in hexagonal boron nitride~\cite{mayorov_nanolett_2011}. Fabrication details can be found in the Supplementary Material of Ref.~\onlinecite{bandurin_science_2016}. The characteristic geometrical details of our devices (Hall bar width $W$, distance $d$ between current and potential probes, and width $w$ of the probes) are the same as in the numerical calculations discussed in Sect.~\ref{sect:numerical_examples}. A standard low-frequency AC technique was employed for measurements of the $B$-field dependence of the 4-probe resistance in a commercial cryostat with a superconducting magnet. Typical TMF experimental traces are shown in Fig.~\ref{fig:eleven}, which compares the measurements with our Landauer-B\"{u}ttiker calculations. For the latter, we use the parameters reported in the caption of Fig.~\ref{fig:eleven}. As for the scaling factor, we use $s=20.05$, while the value of the rescaled Fermi energy has been slightly adjusted, with respect to the value $\tilde{\varepsilon}_{\rm F}$ dictated by the scaling procedure, in order to fit the position of the main peak in the non-local resistance. The value used for the numerical calculations is $\tilde{\varepsilon}'_{\rm F} = 1.37~{\rm eV}$, whereas the value obtained from the scaling procedure is $\tilde{\varepsilon}_{\rm F}=1.31~{\rm eV}$, thus differing by less than $5\%$. This adjustment is justified by the fact that the value of the Fermi energy $\varepsilon_{\rm F}$---see input box in Fig.~\ref{fig:two}---is inferred from the experimental value of the carrier density $n$, assuming the usual massless Dirac fermion relation $\varepsilon_{\rm F}= \hbar v_{\rm F}\sqrt{\pi |n|}$, which is only approximately valid for the vertical lead of width $w=0.37~{\rm \mu m}$ that is used in our algorithm to calculate the number of open channels ${\cal N}_{\rm oc}$. Such small discrepancies, $\lesssim 5~\%$, may stem from a variety of reasons including the nature of edges (zig-zag, armchair, or a combination), electron-hole asymmetry~\cite{kretinin_prb_2009}, quantum confinement, etc. Note, moreover, that we allow only $\tilde{\varepsilon}_{\rm F}$ as a ``fit'' parameter, while fixing the hopping energy (see the discussion in Sect.~\ref{sect:TB}) to its bare non-interacting tight-binding value $t_{0}$. In Fig.~\ref{fig:eleven} the measured non-local resistance $R_{21,34}$ (empty circles) as a function of $B$ is plotted along with the numerical result (solid line) for the 4-terminal setups in Fig.~\ref{fig:one}(c) and (d), panel (a), and in Fig.~\ref{fig:one}(e) and (f), panel (b). The comparison reveals good agreement: both the main features of $R_{21,34}$ and its absolute value are reproduced. In particular, the main peak in Fig.~\ref{fig:eleven}(a) is nearly perfectly reproduced, while the right peak is in the correct position, although exhibiting a smaller height.
The position and shape of the left dip are also well captured, but not its amplitude. Regarding Fig.~\ref{fig:eleven}(b), our calculations reproduce the presence of a single minimum at zero field, but with a larger amplitude and with no additional oscillations. These discrepancies may be attributed to the actual detailed structure of the sample, disorder, and other non-idealities. \begin{figure}[t] \begin{overpic}[width=1.0\linewidth]{fig11a}\put(0,72){(a)}\end{overpic} \begin{overpic}[width=1.0\linewidth]{fig11b}\put(0,72){(b)}\end{overpic} \caption{Comparison between experimental results (empty circles) and results from numerical calculations (solid line) for the non-local resistance $R_{21,34}$ at $T = 25~{\rm K}$. Panel (a) refers to the setup in Fig.~\ref{fig:one}(c) while panel (b) refers to the setup in Fig.~\ref{fig:one}(e). Sample parameters are $W=2$ $\mu$m, $L_1=1.5$ $\mu$m, $L_2=3.5$ $\mu$m, $w=0.37$ $\mu$m, $d=1~{\rm \mu m}$, and a carrier density $n = 0.4\times 10^{12}~{\rm cm}^{-2}$.\label{fig:eleven} } \end{figure} \subsection{The role of non-ideal edges} \label{sect:disorder} In this Section we discuss the consequences of possible imperfections present at the edges of the Hall bar. We focus on their impact on the non-local resistance $R_{21,34}$ at $B=0$. Our aim here is to show that the conclusions drawn above in Sect.~\ref{analy} on the positivity of $R_{21,34}$ at $B=0$ are robust against structural disorder at the edges. Edge imperfections are implemented by carving independently the two horizontal edges using an algorithm which randomly adds or removes two rows of atoms from each sublattice (taking care to avoid dangling bonds), over a length corresponding to a number of sites $M_{\text{R}}$, itself randomly chosen in the range $[M_{\text{R,min}},M_{\text{R,max}}]$. An example of the resulting nanoribbon is presented in Fig.~\ref{fig:twelve}(a). A constraint is imposed on the maximum nanoribbon width, which is set by $W$. The histogram in Fig.~\ref{fig:twelve}(b) shows the values obtained for the non-local resistance of 100 different random configurations for $M_{\text{R,min}}=2$ and $M_{\text{R,max}}=6$. The mean value turns out to be $\overline{R}_{21,34}=18.50$ $\Omega$ with standard deviation equal to $\Delta R_{21,34}=4.65$ $\Omega$ (for comparison, recall that for the corresponding ideal nanoribbon one finds $R_{21,34}=20.69$ $\Omega$, well within a standard deviation). Fig.~\ref{fig:twelve}(c), on the other hand, shows the evolution of the mean value and standard deviation of the non-local resistance with an increasing number of random configurations, showing that convergence is already reached with about 60 configurations. We additionally mention that the mean value and standard deviation of $R_{21,34}$ do not significantly change if $M_{\text{R}}$ varies in a larger range of values. Namely, for $M_{\text{R,min}}=4$ and $M_{\text{R,max}}=10$ we find $\overline{R}_{21,34}=19.72$ $\Omega$ and $\Delta R_{21,34}=6.49$ $\Omega$. \begin{figure}[t] \begin{overpic}[width=0.8\linewidth]{fig12a}\put(0,72){(a)}\end{overpic} \begin{overpic}[width=0.8\linewidth]{fig12b}\put(0,72){(b)}\end{overpic} \begin{overpic}[width=0.8\linewidth]{fig12c}\put(0,72){(c)}\end{overpic} \caption{(Color online) Numerical results for the non-local resistance in zero magnetic field and for non-ideal edges. (a) Example of a graphene ribbon with non-ideal edges, with $M_{\text{R,min}}=2$ and $M_{\text{R,max}}=6$.
(b) Histogram of the non-local resistances at zero magnetic field ($B=0$) obtained for $100$ different random configurations. The corresponding mean value is $\overline{R}_{21,34}=18.50~\Omega$ with standard deviation equal to $\Delta R_{21,34}=4.65~\Omega$. (c) Mean value and standard deviation as a function of the number of random configurations. The scaling factor used for the simulations is $s=20.05$, $M_{\text{R,min}}=2$ and $M_{\text{R,max}}=6$, and all other parameters are the same as in Fig.~\ref{fig:three}. \label{fig:twelve} } \end{figure} \section{Conclusions} \label{sect:conc} In this Article we have proposed a scaling procedure, based on the tight-binding approach and Landauer-B\"{u}ttiker theory, for transport calculations in ultra-clean graphene devices of realistic size. The procedure is based on the exact band structure of graphene {\it ribbons}, and uses the Fermi energy as the key scaling parameter. We have demonstrated the effectiveness of the procedure by calculating the non-local resistance of a realistic $5$-terminal setup in the presence of a magnetic field. In such a transverse magnetic focusing setup, we have compared the non-local resistance as a function of magnetic field for increasing values of the scaling factor, proving that this approach is particularly suitable for micron-sized ribbons and in the presence of many electrodes. The case of transverse magnetic focusing has been further analyzed in realistic $4$-terminal setups, where the structure of the non-local resistance as a function of magnetic field has been explained in terms of classical cyclotron orbits. Moreover, we have addressed the dependence of the non-local resistance on the carrier density and temperature and studied the impact of disorder at the edges of the ribbon. Finally, we have compared the results of our scaling approach with experimental data in high-quality encapsulated samples, finding good agreement. The main features, as well as the absolute value of the non-local resistance, are well reproduced using the actual experimental parameters. \acknowledgements This work was supported by the EU Horizon 2020 research and innovation programme under grant agreement no.~696656 ``GrapheneCore1'', the EU project ``ThermiQ'', the EU project COST Action MP1209 ``Thermodynamics in the quantum regime'', the EU project COST Action MP1201 ``NanoSC'', and the SNS internal project ``Thermoelectricity in nanodevices''. Free software (www.gnu.org, www.python.org) was used.
\section*{Acknowledgments} The authors acknowledge the support of \textit{Agence Nationale de la Recherche} through the \textit{Ethna}, \textit{ThermaEscape} and \textit{Monaco} projects. P.O.C. thanks B. Palpant for discussions.
\section{Executive Summary} \subsection{Findings} \begin{enumerate} \item NASA, NSF, ESA, and CERN fund data archive centers dedicated to preserving and sharing data beyond the lifetime of individual projects. DOE does not have an equivalent data archive center for its Cosmic Frontier experiments and simulations. \item Current archive centers are focused on making data available for download, with limited support for providing computational resources for in-situ analysis on those data. \end{enumerate} \subsection{Comments} \begin{enumerate} \item Currently the preservation of datasets after the operations phase is largely implemented on a best-effort basis by the lead labs. This ad-hoc arrangement puts long-term data preservation at risk and hinders their joint analysis. \item As datasets and simulations grow, a ``take out'' model of datasets available for manual download from multiple uncoordinated data centers becomes unwieldy. Future work needs to focus on co-locating data with computing, and automating the coordination between multiple data/compute centers. \end{enumerate} \subsection{Recommendations} \begin{enumerate} \item DOE should fund a multi-site cosmology data archive center to preserve Cosmic Frontier datasets and simulations, and facilitate their joint analysis across different computing centers. \end{enumerate} \section{Introduction} It is common within cosmology and astronomy to publicly release datasets and simulations to promote their use beyond the lifetime and goals of the original projects that generated the data. It is also common for analyses to combine data and simulations across multiple experiments to achieve results that would not have been possible with single experiments alone, e.g.~\cite{Planck_plus_LRG, LegacySurveys, eBOSS_plus_LS, Amon_LensingClustering}. Publicly released data should follow the Findable, Accessible, Interoperable, and Reusable (FAIR) principles for scientific data management~\cite{FAIR-paper}. This requires more work for the original project than simply putting the data on a website for download. Additionally, curating and maintaining these data beyond the operations phase of a project requires dedicated personnel and hardware resources. Making these datasets public has significant long-term scientific value, but realizing that value requires dedicated resources. DOE has both large cosmology datasets and large computing centers, but so far there has been relatively little coordination to promote the joint use of these data while leveraging these computing resources across projects and across centers. Chapter 4 of the National Academies Decadal Survey ``Pathways to Discovery in Astronomy and Astrophysics for the 2020s'' \cite{decadal2020} includes the recommendation ``NASA and the National Science Foundation should explore mechanisms to improve coordination among U.S.~archive centers and to create a centralized nexus for interacting with the international archive communities.'' DOE archive centers with cosmology datasets should be included as well. There is an unrealized opportunity to better coordinate this work in the future, both within DOE resources, and between DOE and other agencies, e.g.~the efforts towards jointly processing Euclid+Rubin+Roman datasets\cite{EuclidRubinRomanJSP}. This white paper lays out some of the broad issues and opportunities, while purposefully not suggesting a specific technical solution.
\section{Existing Cosmology/Astronomy Data Archive Centers} \subsection{Non-DOE cosmology-related data archive centers} NASA funds multiple data archive centers, broadly organized by the wavelengths of data that they serve: High Energy Astrophysics Science Archive Research Center (HEASARC)\footnote{\url{https://heasarc.gsfc.nasa.gov}} for extreme ultraviolet, X-ray, and gamma-ray wavelengths; Mikulski Archive for Space Telescopes (MAST)\footnote{\url{https://archive.stsci.edu}} for optical, ultraviolet, and near-infrared; NASA/IPAC Infrared Science Archive (IRSA)\footnote{\url{https://www.ipac.caltech.edu/project/irsa}} for infrared and submillimeter; and the NASA Exoplanet Science Institute (NExScI)\footnote{\url{https://nexsci.caltech.edu}} for Exoplanet Exploration Program missions. Additionally, the NASA/IPAC Extragalactic Database (NED)\footnote{\url{http://ned.ipac.caltech.edu}} curates catalog-level data about astronomical objects. NSF archives its optical and infrared astronomy data through the NOIRLab Astro Data Archive\footnote{\url{https://astroarchive.noirlab.edu}} and operates the Astro Data Lab\footnote{\url{https://datalab.noirlab.edu}} science platform for database queries and analysis tools. This center has also ingested subsets of other surveys such as SDSS and Gaia to facilitate cross matching these data to core NSF datasets. The Sloan Digital Sky Survey (SDSS)\footnote{\url{https://www.sdss.org}} self-hosts its yearly data releases, providing access at a variety of levels ranging from file downloads to database access to web queries and data visualization. The European Space Agency (ESA) hosts data archives for each of its missions at the European Space Astronomy Center (ESAC) Science Data Center (ESDC)\footnote{\url{https://www.cosmos.esa.int/web/esdc}}. The Centre de Données astronomiques de Strasbourg (CDS)\footnote{\url{https://cds.u-strasbg.fr}} also curates and distributes multiple astronomy datasets. With the exception of SDSS, these data archive centers are funded to preserve, curate, and share data beyond the lifetime and budget of individual experiments. Although they are hosted at multiple sites, the NASA data centers in particular inter-operate to promote the discovery and use of these data across the different centers. \subsection{How DOE cosmology projects currently share data} In contrast to NASA and NSF, DOE-funded cosmology projects do not have a centrally coordinated (and funded!) method of sharing their data, especially beyond the operations phase of each project. The Dark Energy Survey (DES) public data releases are hosted by multiple non-DOE sites\footnote{\url{https://des.ncsa.illinois.edu/releases/dr2/dr2-access}}; BOSS and eBOSS are hosted through SDSS\footnote{\url{https://www.sdss.org/dr17/}}; and the DESI Legacy Imaging Surveys\footnote{\url{https://legacysurvey.org}} are hosted at NERSC using resources of the Cosmology Data Repository allocation\footnote{\url{https://portal.nersc.gov/cfs/cosmo/data/legacysurvey/dr9/}}, and are also available through the NOIRLab Astro Data Archive (images) and Astro Data Lab (catalogs). DESI also intends to share its future spectroscopic data releases via the Cosmology Data Repository.
For cosmology simulations, Hardware/Hybrid Accelerated Cosmology Code (HACC)\footnote{\url{https://cosmology.alcf.anl.gov/}} and AbacusSummit\footnote{\url{https://abacusnbody.org}} use the ``Modern Research Data Portal'' design pattern\cite{MRDP} to facilitate efficient bulk downloads of subsets of their simulations, but these are independently implemented and hosted. The Rubin Observatory Legacy Survey of Space and Time (LSST) Dark Energy Science Collaboration (DESC) data portal\footnote{\url{https://data.lsstdesc.org}} also uses this design pattern to host its data from NERSC, but is completely independent of both the Cosmology Data Repository and the AbacusSummit portal, both of which are also at NERSC. \section{Co-locating Data with Computing Resources} In most cases, existing cosmology/astronomy data archive centers focus on enabling searching and downloading subsets of data, rather than providing computing resources to the community to analyze the data in-situ. Notable exceptions are the Astro Data Lab, which provides a Jupyter Notebook server for exploring the data\cite{DataLabJupyter}, and SDSS SciServer\footnote{\url{https://www.sciserver.org}}, which supports server-side analysis of SDSS data\cite{SDSS_SciServer}. Although the AbacusSummit files are available at NERSC to anyone with an account (e.g.~LSST DESC members), this is not emphasized in the portal documentation. The same may be true of DESC simulations (also at NERSC) and HACC (at ANL) --- the data are publicly available for download, and already available at a major computing facility, but the current access methods still emphasize download rather than in-situ use, even for those who already have accounts and allocations at the centers where the original data are hosted. The DESI Legacy Imaging Surveys may be unique in documenting both how to download the data for those who do not have a NERSC account, and how to access the files directly on disk for those who do (e.g.~DESC or CMB-S4 members, whether or not they are DESI collaborators). At the same time, the datasets and simulations are large enough, and the computing needs of analyses are diverse enough, that it is not realistic to expect that any one center could host all of the data and meet the needs of all of the cosmology users. A future cosmology data archive center should promote both intra-site work (e.g.~DESC members at NERSC accessing public DESI data at NERSC) and inter-site work (e.g.~combining simulations from the Argonne Leadership Class Facility with DESI and CMB-S4 data at NERSC, LSST data at SLAC, and external datasets such as NASA archives). \section{The Role of the Virtual Observatory} The Virtual Observatory (VO) is a ``vision that astronomical datasets and other resources should work as a seamless whole''\footnote{\url{https://www.ivoa.net}}, with the International Virtual Observatory Alliance (IVOA) defining standards to enable this interoperability. Practical end-user uptake has been somewhat slow, but now NASA, ESA, and the NOIRLab Astro Data Lab archive centers provide VO-compliant interfaces for a subset of the most commonly used APIs (e.g.~VO Table Access, Simple Image Access, Simple Spectral Access). These are accessible through the astropy-affiliated pyVO\footnote{\url{https://pyvo.readthedocs.io}} and astroquery\footnote{\url{https://astroquery.readthedocs.io}} packages, which greatly simplify their usage. The Rubin Science Platform also plans to provide VO-compliant API interfaces as one of the data access methods.
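To illustrate what VO access looks like in practice, the snippet below sketches a synchronous ADQL query through pyVO; the endpoint URL and the table and column names are placeholders for illustration, not any specific archive's actual schema:
\begin{verbatim}
# Minimal sketch of a Virtual Observatory TAP query via pyVO.
# The endpoint URL and table/column names are placeholders.
import pyvo

service = pyvo.dal.TAPService("https://archive.example.org/tap")
result = service.search(
    "SELECT TOP 10 ra, dec, redshift FROM catalog.objects "
    "WHERE CONTAINS(POINT('ICRS', ra, dec), "
    "CIRCLE('ICRS', 150.0, 2.2, 0.05)) = 1"
)
table = result.to_table()  # an astropy Table
print(table)
\end{verbatim}
Queries like this work well for small catalog cutouts, which is precisely the classic-astronomy regime discussed next.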
The VO interfaces are primarily useful for classic astronomy use cases, e.g.~discovering and accessing distributed heterogeneous datasets covering a small number of individual objects. It is not well suited for accessing the entirety of multi-terabyte cosmology survey data, and thus is not well matched for many DOE cosmology projects which need more direct access to larger volumes of homogeneous survey data. For example, VO interfaces could be useful to discover whether any prior survey has a host galaxy redshift for a new supernova candidate, but they would not be practical for doing a multi-band pixel-level joint fit of all DECam plus WISE data. Although the pyVO and astroquery packages have significantly simplified end-user access to Virtual Observatory data, implementing these APIs to share new data remains non-trivial. A DOE cosmology data archive center could assist in implementing these APIs to share subsets of the data with the broader scientific community, but it should not be the primary method by which these data are shared within the DOE Cosmic Frontier community. \section{Brief Case Studies} \subsection{DESI Legacy Imaging Surveys} As a recent example, the DESI Legacy Imaging Surveys\footnote{\url{https://legacysurvey.org}} were originally motivated by DESI target selection, though their public data release has led to a large variety of diverse publications\footnote{\url{https://www.legacysurvey.org/pubs/}}. This project combined pixel-level imaging data from the Dark Energy Survey and DECaLS (DECam on the CTIO Blanco 4-m telescope), MzLS (Kitt Peak Mayall 4-m), BASS (Kitt Peak Bok), WISE (NASA satellite), and catalog-level data from 2MASS, GAIA, and Pan-STARRS. These data were jointly fit at NERSC using pixel-level data across multiple imaging bands, originally downloaded with custom scripts from 4 different data archive centers. Much of the data processing was I/O bound and would not have been viable if trying to download on-the-fly from the original data archive centers, some of which only provided HTTP download of individual files as a data transfer method. Co-locating the data with the computing resources necessary to jointly analyze them was critical to the success of the project. \subsection{Access to Rubin LSST Data from DOE Computing Facilities} As a future example, the Rubin Legacy Survey of Space and Time (LSST) will generate many petabytes of data, served from their primary computing center at the SLAC National Laboratory. DOE cosmology researchers have access to large computing facilities at other sites, e.g.~NERSC or the Argonne and Oak Ridge Leadership Class Facilities, and those researchers may need to use those computing resources to analyze LSST data. Effectively analyzing these data from one site using computing resources at another site will require either aggressive proactive caching, or non-trivial real-time data streaming on demand. The scale of the data volume makes it unviable to perform bespoke downloads like those used by the Legacy Imaging Surveys. Additionally, some use cases such as DESC transient studies (e.g.~rapid followup of Type Ia supernovae) will likely need nearly real-time access to other datasets such as DESI spectra of host galaxies or photometry from imaging surveys preceding LSST, and those datasets may be originally hosted at yet another data archive center.
Combining data at one site with computing at another site could be achieved through the efforts of individual collaborations (e.g.~DESC, CMB-S4, or future spectroscopic redshift surveys), though it would be more effective to have a coordinated approach to solve the underlying challenges for everyone. \section{Data distribution within industry} Companies such as Google, Amazon, and Facebook automatically replicate their data across multiple data centers. This is partially for robustness and data integrity, so that all data are continuously available even if there is an outage of an individual center, but it is also for performance, so that commonly accessed results (e.g.~images from trending topics) can be moved to fast storage at many centers, while infrequently accessed data are automatically migrated to slower, cheaper storage at fewer centers. Within personal computing devices, it is common for photo and music libraries to be automatically synced across multiple devices, including devices such as phones with limited storage that can't simultaneously hold the entire dataset. To the end user, this is transparent --- commonly accessed items are always there (even when offline), and less frequently accessed items are automatically synced on demand as if they had always been available. Unlike current cosmology data management, the end user does not need to start by freeing up storage space, identifying which data are at which locations, and initiating custom transfers to get the data to a desired location before beginning analysis. By analogy, one could imagine a multi-site cosmology data implementation where commonly used datasets are at every computing center; less frequently accessed data are served by a primary host institution, automatically replicated to another site for preservation robustness, and synced elsewhere as needed upon demand; and genuinely rarely accessed data are kept on tape archives. Although all of this is currently possible ``by hand'', it lacks the automation and end-user transparency that would maximize the potential of using large datasets across multiple large computing centers. \section{Beyond cosmology datasets} Although this white paper has focused on the specific needs of cosmology datasets and analyses, the technical challenges are not unique to cosmology. As such, this work could be done in the context of a broader ``Experimental Data Archive Center'' to solve the more general problem of data preservation and access, and thus leverage the resources of other communities such as Advanced Scientific Computing Research (ASCR) and other areas within the DOE scientific portfolio beyond just High Energy Physics (HEP). \section{Conclusions} DOE should fund a multi-site cosmology data archive center to preserve Cosmic Frontier datasets and simulations, and facilitate their joint analysis across different computing centers. This requires support not only for hardware, but also for personnel to develop and maintain the technologies to simplify cross-site data sharing and personnel to curate the relevant datasets.
\section{Information Theory \& Emergence} \subsubsection*{A Note on Notation} The information theoretic formalisms around emergence unfortunately require notation to represent a number of overlapping, and potentially confusing, concepts. In general, one-dimensional random variables will be represented with italicized uppercase letters (e.g. $X$). Specific realizations of that variable will be denoted with lower-case, italicized letters (e.g. $X=x$). The support set of $X$ (i.e. the set of all states $X$ can adopt) will be denoted with Fraktur font: $\mathfrak{X}$. We would read: $\sum_{x\in\mathfrak{X}}\mathcal{P}(x)$ as ``the sum of the probabilities of every state $x$ that our random variable $X$ can adopt.'' Multidimensional variables will be denoted with boldface uppercase (e.g. \textbf{X}), their specific realizations as \textbf{x}, and their support sets in Fraktur font ($\boldsymbol{\mathfrak{X}}$). In addition to general and specific variables, there is also a need to differentiate between micro-scales and macro-scales. We will say that the \textit{macro-scale} (after coarse-graining) of \textbf{X} is $\tilde{\textbf{X}}$, and the specific realizations are $\textbf{x}$ and $\tilde{\textbf{x}}$ respectively. Only multivariate systems can be coarse-grained, and for our purposes, coarse-grained systems will always have at least two elements making them up. The expected value of a function will be denoted with calligraphic font (e.g. the expected mutual information between $X$ and $Y$ is $\mathcal{I}(X;Y)$), while specific, local functions will be in italicized lowercase (e.g. the local mutual information between $X=x$ and $Y=y$ is $i(x;y)$). Finally, following \cite{varley_decomposing_2022}, we will denote time with subscript indexing (i.e. $X_t$ is the random variable $X$ at time $t$), and set membership with superscript indexing (i.e. $X^k$ is the $k^\textnormal{th}$ element of \textbf{X}). Both kinds of indexing may be used simultaneously. \subsection{Expected and Local Mutual Information} The core of almost all information-theoretic approaches to emergence has been to start with the \textit{mutual information} between the past state of a system and its own future (sometimes called the \textit{excess entropy} \cite{james_anatomy_2011}). The mutual information is a fundamental measure of the (statistical) interaction between two variables \cite{cover_elements_2012}. For two variables $X$ and $Y$ with states drawn from support sets $\mathfrak{X}$ and $\mathfrak{Y}$ according to $\mathcal{P}(X)$ and $\mathcal{P}(Y)$: \begin{align} \mathcal{I}(X;Y) &= \sum_{\substack{x\in\mathfrak{X}\\y\in\mathfrak{Y}}}\mathcal{P}(x,y)\log\bigg(\frac{\mathcal{P}(x,y)}{\mathcal{P}(x)\mathcal{P}(y)}\bigg) \\ &= \mathcal{H}(X) - \mathcal{H}(X|Y) \end{align} Where $\mathcal{H}$ is the Shannon entropy function. $\mathcal{I}(X;Y)$ quantifies how much knowing the state of $X$ reduces our uncertainty about the state of $Y$ \textit{on average} (and vice versa, as it is a symmetric measure). Consequently, the mutual information is fundamentally about our ability as observers to make \textit{inferences} about objects of study under conditions of uncertainty. Unlike more standard correlation measures, mutual information is non-parametric, sensitive to non-linear relationships, and strictly non-negative (i.e. knowing the state of $X$ can never make us \textit{more} uncertain about the state of $Y$).
Being an average measure, $\mathcal{I}$ can also be understood as the \textit{expected value} over a distribution of particular configurations: \begin{equation} \mathcal{I}(X;Y) = \mathbb{E}_{X,Y}\bigg[\log\bigg(\frac{\mathcal{P}(x,y)}{\mathcal{P}(x)\mathcal{P}(y)}\bigg)\bigg] \end{equation} We can ``unroll'' this expected value to get a \textit{local} mutual information for every combination of realizations $X=x,Y=y$ (sometimes referred to as the pointwise mutual information): \begin{align} i(x;y) &= \log\bigg(\frac{\mathcal{P}(x,y)}{\mathcal{P}(x)\mathcal{P}(y)}\bigg) \label{eq:lmi_1} \\ &= h(x) - h(x|y) \label{eq:lmi_2} \end{align} Unlike the average mutual information, the local mutual information \textit{can} be negative: if $\mathcal{P}(x,y) < \mathcal{P}(x)\mathcal{P}(y)$ then $i(x;y)<0$. To build intuition, consider the case where $\mathcal{I}(X;Y) > 0$. On average, if we know that $X=x$, we will be \textit{better} at correctly inferring the state of $Y$ than if we were basing our prediction on $Y$'s statistics alone (and vice versa). Now, suppose we observe the particular configuration $(x,y)$. What does it mean to say that $i(x;y) < 0$ when $\mathcal{I}(X;Y)>0$? It means that the particular configuration $(x,y)$ that we are observing would be \textit{more} likely if $X \bot Y$ than if they are actually coupled (which we know, \textit{a priori}, that they are). Said differently, we would be more surprised to see $X=x$ knowing that $Y=y$ than we would be without that knowledge (see Eq. \ref{eq:lmi_2}). \textit{Local mutual information is negative when a system is transiently breaking from its own average, long-term behaviour.} \subsection{Temporal Mutual Information \& Emergence} \begin{figure*} \centering \includegraphics[scale=0.5]{coarse.pdf} \caption{\textbf{Transition probability matrices for a micro- and macro-scale system. Top Left:} The macro-scale transition probability matrix for a two-element, Boolean Markov chain. Every cell gives $\mathcal{P}(Future | Past)$. In general, we refer to the system associated with this matrix as $\tilde{\textbf{X}}$ \textbf{Top Right:} The transition probability matrix associated with the four-element, Boolean Markov chain \textbf{X}. The micro-scale was constructed from the macro-scale following the equivalence-class expansion developed by Varley \& Hoel \cite{varley_emergence_2022}. \textbf{Bottom:} A visual schematic illustrating coarse-graining in a Boolean network. The lower four series are the four elements of \textbf{X}, collectively evolving according to the transition probabilities given above. The upper two series are the coarse-grained, macro-scale sequences, created by aggregating two micro-scale elements each (red to red and blue to blue).} \label{fig:tpms} \end{figure*} As mentioned above, a common starting point to assess formal theories of emergence is the \textit{time-delayed} mutual information: information about the \textit{future} that is disclosed by knowledge of the \textit{past}. The \textit{total} amount of information the entire past discloses about the entire future is the \textit{excess entropy} \cite{james_anatomy_2011}: \begin{equation} \mathcal{E}(X) = \mathcal{I}(X_{0:t-1} ; X_{t:\infty}) \end{equation} Where $X_{0:t-1}$ refers to the joint state of the entire past (from time $t=0$ to the immediate past) and $X_{t:\infty}$ refers to the entire future, from the present on.
$\mathcal{E}(X)$ then provides a measure of the total dynamical ``structure'' of $X$ (although it doesn't reveal how that structure is apportioned out over elements, see \cite{varley_decomposing_2022} for further discussion). Given the practical difficulties associated with infinitely long time series, it is common to assume that the system under study is Markovian and only ``remembers'' information from one time-step to the next. In this case, we would say that the constrained excess entropy is just: \begin{equation} \mathcal{E}(X) = \mathcal{I}(X_{t-1};X_t) \end{equation} For a discrete system that can only adopt a finite number of states from the support set $\mathfrak{X}$, the temporal structure of the whole system can be represented in a \textit{transition probability matrix} (Fig. \ref{fig:tpms}), which gives the conditional probability $P(x_{t} | x_{t-1})$ for every $x\in\mathfrak{X}$. Being a special case of the bivariate mutual information, the excess entropy can also be localized in the same way (and can also be either positive or negative): \begin{equation} e(x) = i(x_{t-1};x_t) \end{equation} When $e(x) < 0$, the particular transition $(x_{t-1};x_t)$ would be more likely to occur if subsequent moments were being drawn at random from some distribution $\mathcal{P}(X)$, rather than showing a temporal structure. If $\mathcal{P}(x_{t} | x_{t-1}) < \mathcal{P}(x_{t})$, then you would be \textit{less} likely to guess the correct $x_{t}$ if you knew the state $x_{t-1}$ than you would be if $x_{t}$ were being randomly selected from a 0-memory process. You would be \textit{more surprised} to see $x_{t} | x_{t-1}$ than you otherwise would be. Your prediction of the future has been \textit{misinformed} by the statistics of the evolving system. For formal theories of emergence that rely on the excess entropy, this kind of breakage from the system's long-term expected statistics may represent a kind of failure mode whereby whatever ``higher-order'' dependency we are tracking is ``interrupted.'' The past transiently ceases to inform of the future and instead misinforms. \subsection{Two Formal Approaches to Emergence} Here we focus on two formal approaches to emergence: the coarse-graining approach first proposed by Hoel et al, \cite{hoel_quantifying_2013}, and an integrated information approach, from Mediano, Rosas, et al., \cite{mediano_beyond_2019}. We chose these two since, despite using much of the same mathematical machinery to answer a common question, they lead to very different interpretations of what emergence is. We should briefly note, however, that these are \textit{not} the only information-theoretic formal approaches to emergence, for example Barnett and Seth have a proposal based on dynamical independence between micro- and macro-scales \cite{barnett_dynamical_2021}, and Chang et al., proposed a theory based on scale-specific information closure \cite{chang_information_2020,bertschinger_information_2006}. Both of these frameworks are based on excess entropy, or slight modifications thereof (Barnett and Seth consider the temporal conditional mutual information, for example). Based on the use of temporal mutual information, we anticipate that incongruous dynamics and flickering emergence are likely to appear in local formulations of both approaches (barring an unexpected mathematical constraint), although showing this is beyond the scope of this paper.
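To make the local excess entropy concrete before turning to the two frameworks, the following minimal Python sketch computes $e(x_{t-1} \to x_t)$ for every transition of a two-state Markov chain; the transition values here are toy numbers, not the system of Fig.~\ref{fig:tpms}:
\begin{verbatim}
import numpy as np

# Minimal sketch: local excess entropy e(x_{t-1} -> x_t) of a Markov
# chain. tpm[i, j] = P(X_t = j | X_{t-1} = i); toy values only.
tpm = np.array([[0.9, 0.1],
                [0.4, 0.6]])

# Stationary distribution pi (left eigenvector of tpm for eigenvalue 1).
evals, evecs = np.linalg.eig(tpm.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

p_next = pi @ tpm                # marginal P(X_t); equals pi at stationarity
local_e = np.log2(tpm / p_next)  # e(i -> j) = log2[P(j | i) / P(j)]
print(local_e)                   # negative entries are misinformative
\end{verbatim}
For these toy values the self-transitions are informative while the cross-transitions are misinformative; the negative entries are precisely the transitions for which the past misleads an observer about the future.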
\subsubsection{Coarse-Graining Approaches to Emergence} This framework was one of the first explicit, formal information-theoretic approaches to emergence, and remains one of the most well-developed. Originally termed ``causal emergence'' by Hoel et al., we refer to it here as just a coarse-graining approach for the purely pragmatic reason of avoiding the perennial debate around ``causality'', which is largely inimical to the point under discussion here. Recently Comolatti and Hoel showed that the phenomenon they term ``causal emergence'' is widespread across measures of causation (beyond the particular case discussed here) \cite{comolatti_causal_2022}; however, it is unclear whether statistical measures of causality are sufficient to fully capture causal relationships \cite{pearl_causal_2010,dewhurst_causal_nodate}. We sidestep this issue altogether by focusing on the most salient feature, which is the comparison of micro-scales and coarse-grained macro-scales. The coarse-graining approach compares the informational properties of a system \textbf{X} with the properties of a \textit{dimensionally reduced} model $\tilde{\textbf{X}}$. The core measure is the \textit{effective information}, which is the excess entropy with a maximum-entropy distribution forced on the distribution of past states: \begin{align} \mathcal{F}(\textbf{X}) &= \mathcal{E}(\textbf{X})_{|\mathcal{H}(\textbf{X}_{t-1}) = \mathcal{H}^{\max}}\\ &= \mathcal{I}(\textbf{X}_{t-1}^{\mathcal{H}_{\max}} ; \textbf{X}_t) \end{align} The effective information quantifies how much knowing the past reduces your uncertainty about the future \textit{if all past states are equally likely}. Consequently, it is a statistical approximation of experimental intervention: by forcing the prior distribution to be flat, $\mathcal{F}(\textbf{X})$ is not confounded by the potential for biases introduced by an inhomogeneous distribution $\mathcal{P}(\textbf{X}_{t-1})$. $\mathcal{F}$ is bounded from above by $\log(N)$ (where $N=|\mathfrak{X}|$): if $\mathcal{F}(\textbf{X}) = \log(N)$, then knowing $\textbf{X}_{t-1}$ completely resolves all uncertainty about the future (every $x_{t-1}$ deterministically leads to a unique $x_t$). In contrast, if $\mathcal{F}(\textbf{X}) = 0$, then knowing the past reduces no uncertainty about the future. This bound allows us to normalize $\mathcal{F}$ to the interval $[0,1]$, which we refer to as the \textit{effectiveness} ($\bar{\eff}(\textbf{X})$): \begin{equation} \bar{\eff}(\textbf{X}) = \frac{\mathcal{F}(\textbf{X})}{\log(N)} \label{eq:effness} \end{equation} Hoel et al., claim that ``emergence'' occurs when, for system \textbf{X}, there exists some coarse-graining $\tilde{\textbf{X}}$ such that: \begin{equation} \log\bigg(\frac{\bar{\eff}(\tilde{\textbf{X}})}{\bar{\eff}(\textbf{X})}\bigg) > 0 \end{equation} In this case, the macro-scale is more effective than the micro-scale: knowing the past of the macro-scale resolves a greater proportion of the uncertainty about the future of the macro-scale than it would when using the ``full'', micro-scale model. Consider the example system displayed in Figure \ref{fig:tpms}: here, a 4-element, Boolean network evolves according to the micro-scale transition probability matrix. The system is then bipartitioned and each pair of elements (indicated by colour) is aggregated into a macro-scale with a lossy logical function (logical AND), resulting in the 2-element system.
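Numerically, the effectiveness is simple to evaluate from a transition probability matrix. A minimal Python sketch of $\mathcal{F}$ and Eq.~\ref{eq:effness} (the four-state matrix below is a toy example with three degenerate states and one deterministic state, not the micro-scale of Fig.~\ref{fig:tpms}):
\begin{verbatim}
import numpy as np

# Minimal sketch: effective information F(X) and effectiveness, i.e.
# the excess entropy computed with a uniform (maximum-entropy)
# distribution over past states. Toy matrix, illustrative only.
def effectiveness(tpm):
    n = tpm.shape[0]
    p_future = tpm.mean(axis=0)        # P(X_t) under a uniform past
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = tpm * np.log2(tpm / p_future)
    ei = np.nansum(terms) / n          # F(X), in bits
    return ei, ei / np.log2(n)         # (F, effectiveness)

tpm = np.array([[1/3, 1/3, 1/3, 0.0],
                [1/3, 1/3, 1/3, 0.0],
                [1/3, 1/3, 1/3, 0.0],
                [0.0, 0.0, 0.0, 1.0]])
print(effectiveness(tpm))
\end{verbatim}
Applying the same function to a candidate macro-scale matrix and comparing the two effectiveness values implements the emergence criterion above directly.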
Crucially, $\textbf{X}$ is a system that displays non-trivial emergence upon coarse-graining into $\tilde{\textbf{X}}$: $\log(\bar{\eff}(\tilde{\textbf{X}})/\bar{\eff}(\textbf{X})) \approx 0.202$. \subsubsection{Synergy-Based Approaches to Emergence} \begin{figure*} \begin{center} \begin{tikzpicture}[scale=0.65, transform shape] \filldraw[black] (0,1) circle (3pt) node[anchor=south] {$\{12\}\rightarrow\{12\}$}; \filldraw[black] (-4.25,-1.5) circle (3pt) node[anchor=east] {$\{1\}\rightarrow\{12\}$}; \filldraw[black] (-1.5,-1.5) circle (3pt) node[anchor=east] {$\{2\}\rightarrow\{12\}$}; \filldraw[black] (1.5,-1.5) circle (3pt) node[anchor=west] {$\{12\}\rightarrow\{1\}$}; \filldraw[black] (4.25,-1.5) circle (3pt) node[anchor=west] {$\{12\}\rightarrow\{2\}$}; \filldraw[black] (0,-4) circle (3pt) node[anchor=east] {$\{2\}\rightarrow\{1\}$}; \filldraw[black] (-6,-5) circle (3pt) node[anchor=west] {$\{1\}\{2\}\rightarrow\{12\}$}; \filldraw[black] (-3.25,-5) circle (3pt) node[anchor=west] {$\{1\}\rightarrow\{1\}$}; \filldraw[black] (3.25,-5) circle (3pt) node[anchor=east] {$\{2\}\rightarrow\{2\}$}; \filldraw[black] (6,-5) circle (3pt) node[anchor=east] {$\{12\}\rightarrow\{1\}\{2\}$}; \filldraw[black] (0,-6) circle (3pt) node[anchor=west] {$\{1\}\rightarrow\{2\}$}; \filldraw[black] (-4.25,-8.5) circle (3pt) node[anchor=east] {$\{1\}\{2\}\rightarrow\{1\}$}; \filldraw[black] (-1.5,-8.5) circle (3pt) node[anchor=east] {$\{1\}\{2\}\rightarrow\{2\}$}; \filldraw[black] (1.5,-8.5) circle (3pt) node[anchor=west] {$\{1\}\rightarrow\{1\}\{2\}$}; \filldraw[black] (4.25,-8.5) circle (3pt) node[anchor=west] {$\{2\}\rightarrow\{1\}\{2\}$}; \filldraw[black] (0,-11) circle (3pt) node[anchor=north] {$\{1\}\{2\}\rightarrow\{1\}\{2\}$}; \draw[] (0,1) -- (-4.25,-1.5); \draw[] (0,1) -- (-1.5,-1.5); \draw[] (0,1) -- (4.25,-1.5); \draw[] (0,1) -- (1.5,-1.5); \draw[] (-4.25,-1.5) -- (-3.25, -5); \draw[] (4.25,-1.5) -- (3.25, -5); \draw[] (-4.25,-1.5) -- (0,-6); \draw[] (4.25,-1.5) -- (0,-6); \draw[] (0,-4) -- (-4.25,-8.5); \draw[] (0,-4) -- (4.25,-8.5); \draw[] (-1.5,-1.5) -- (-6,-5); \draw[] (1.5,-1.5) -- (6,-5); \draw[] (-4.25,-1.5) -- (-6,-5); \draw[] (4.25,-1.5) -- (6,-5); \draw[] (-1.5,-1.5) -- (3.25,-5); \draw[] (1.5,-1.5) -- (-3.25,-5); \draw[] (1.5,-1.5) -- (0,-4); \draw[] (-1.5,-1.5) -- (0,-4); \draw[] (-3.25,-5) -- (1.5,-8.5); \draw[] (3.25,-5) -- (-1.5,-8.5); \draw[] (0,-6) -- (1.5,-8.5); \draw[] (0,-6) -- (-1.5,-8.5); \draw[] (-6,-5) -- (-4.25,-8.5); \draw[] (6,-5) -- (4.25,-8.5); \draw[] (-6,-5) -- (-1.5,-8.5); \draw[] (6,-5) -- (1.5,-8.5); \draw[] (-3.25,-5) -- (-4.25,-8.5); \draw[] (3.25,-5) -- (4.25,-8.5); \draw[] (-4.25,-8.5) -- (0,-11); \draw[] (-1.5,-8.5) -- (0,-11); \draw[] (4.25,-8.5) -- (0,-11); \draw[] (1.5,-8.5) -- (0,-11); \end{tikzpicture} \end{center} \caption{The double-redundancy lattice for a system $\textbf{X} = \{X^1, X^2\}$. Every vertex of the lattice corresponds to a specific dependency between information in one element (or ensemble of elements) $t-1$ and another at time $t$. We use index-only notation following Williams and Beer \cite{williams_nonnegative_2010} and \cite{mediano_beyond_2019} for notational compactness.} \label{fig:lattice} \end{figure*} The synergy-based approach to emergence takes a different tack when looking for emergent phenomena in multi-element, dynamical systems. Where the coarse-graining approach considers the relationship between scales, the synergy-based approach looks at the relationships between ``wholes'' and ``parts'' at a single scale.
Once again, the primary measure is the excess entropy (a maximum entropy prior is not explicitly assumed, although it is an option \cite{mediano_beyond_2019}). We begin with the insight that, for a multivariate system \textbf{X}, $\mathcal{E}(\textbf{X})$ gives a measure of the \textit{total} temporal information structure of the whole, but it says nothing about how that information flow is apportioned out over various elements of $\textbf{X}$. For example, if every element of $\textbf{X}$ is independent of every other, the excess entropy can still be greater than zero if the excess entropy of each part is greater than zero: \begin{equation} \mathcal{E}(\textbf{X}) = \sum_{i=1}^{|\textbf{X}|}\mathcal{E}(X^i) \iff X^i \bot X^j \;\;\forall\; X^i, X^j \in \textbf{X} \label{eq:indep_exx} \end{equation} In this case of a totally \textit{disintegrated} system, $\mathcal{E}(\textbf{X}) - \sum_{i=1}^{|\textbf{X}|}\mathcal{E}(X^i) = 0$: the problem of predicting the future of the ``whole'' trivially reduces to predicting the futures of each of the ``parts'' considered individually. This insight prompted Balduzzi and Tononi to propose a heuristic measure of ``integration'' (the degree to which the whole is greater than the sum of its parts), which we refer to as $\Phi(\textbf{X})$: \begin{equation} \Phi(\textbf{X}) = \mathcal{E}(\textbf{X}) - \sum_{i=1}^{|\textbf{X}|}\mathcal{E}(X^i) \end{equation} If $\Phi(\textbf{X}) > 0$, then there is some predictive information in the joint state of the whole \textbf{X} that is not accessible when considering all of the parts individually. This is a rough definition of ``integrated emergence'': when the future of the whole can only be predicted by considering the whole qua itself and not any of its constituent parts (contrast this with a centralized system where the future of the whole could be predicted from a central controller). $\Phi(\textbf{X})$ is only a rough heuristic though, and it can be negative if a large amount of redundant information swamps the synergistic part \cite{mediano_towards_2021}. Recently, Mediano, Rosas, et al., introduced a complete decomposition of the excess entropy for a multivariate system. Based on the partial information decomposition framework \cite{williams_nonnegative_2010,gutknecht_bits_2021}, the \textit{integrated information decomposition} ($\Phi$ID) \cite{mediano_beyond_2019,mediano_towards_2021} reveals how all of the elements of \textbf{X} (and ensembles of elements) collectively disclose temporal information about each other. A rigorous derivation of the framework is beyond the scope of this paper, but intuitively the $\Phi$ID works by decomposing $\mathcal{E}(\textbf{X})$ into an additive set of non-overlapping values called ``integrated information atoms'', each of which describes a particular dependency between elements (or ensembles of elements) at time $t-1$ and time $t$. For example, the atom $\{X^1\} \to \{X^1\}$ refers to the information that $X^1$ \textit{and only $X^1$} communicates to itself \textit{and only itself} through time (``information storage''). Similarly, $\{X^1\} \to \{X^2\}$ is the information that $X^1$ uniquely transfers to $X^2$ alone. More exotic combinations also exist, for example $\{X^1\}\{X^2\}\to\{X^2\}$, which is the information redundantly present in $X^1$ and $X^2$ simultaneously that is ``pruned'' from $X^1$ and left in $X^2$.
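The heuristic $\Phi$ is easy to compute directly. A minimal Python sketch for a two-bit Markov chain (the transition matrix is a toy example, not $\tilde{\textbf{X}}$ from the figures):
\begin{verbatim}
import numpy as np
from itertools import product

# Minimal sketch of the Phi heuristic: excess entropy of the whole
# minus the summed excess entropies of the parts, for a two-bit
# Markov chain. States 0..3 encode the bit pairs (00, 01, 10, 11).
def excess_entropy(joint):
    """I(X_{t-1}; X_t) from a joint distribution over (past, future)."""
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / np.outer(px, py))
    return np.nansum(terms)

tpm = np.array([[0.7, 0.1, 0.1, 0.1],
                [0.1, 0.7, 0.1, 0.1],
                [0.1, 0.1, 0.7, 0.1],
                [0.1, 0.1, 0.1, 0.7]])
evals, evecs = np.linalg.eig(tpm.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
joint_whole = pi[:, None] * tpm        # P(x_{t-1}, x_t)

def bit_joint(joint, k):
    """Marginalize the joint onto the k-th bit of past and future."""
    out = np.zeros((2, 2))
    for i, j in product(range(4), range(4)):
        out[(i >> k) & 1, (j >> k) & 1] += joint[i, j]
    return out

phi = excess_entropy(joint_whole) - sum(
    excess_entropy(bit_joint(joint_whole, k)) for k in (0, 1))
print(phi)
\end{verbatim}
For this toy matrix $\Phi > 0$, although, as noted above, the sign of the heuristic is not by itself conclusive.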
The $\Phi$ID framework allows us to refine the original heuristic $\Phi(\textbf{X})$ and construct a more rigorous metric for emergence: for a two-element system the atom $\{X^1,X^2\} \to \{X^1, X^2\}$ is the synergistic information that the \textit{whole} (and only the whole) discloses about its own future (Mediano et al., refer to this as ``causal decoupling'', although we will refer to it simply as temporal synergy). It can be thought of as measuring the persistence of the whole qua itself without making reference to any simpler combinations of constituent elements. The $\Phi$ID framework also provides another emergence-related metric ``for free'': Mediano et al., claim that ``downward causation'' \cite{galaaen_disturbing_2006,davies_physics_2008} is revealed by atoms such as $\{X^1,X^2\}\to\{X^1\}$, where the synergistic past of the whole informs on the future of a single constituent element. Given the aforementioned issues around ``causal'' interpretations of information theory, we will not dwell on this. We do note, however, that all of the same technical concerns (localization, incongruous dynamics, etc), also apply to these atoms as well. For a two-variable system evolving through time (such as our example macro-scale $\tilde{\textbf{X}}$), it turns out that there are only sixteen unique integrated information atoms, and they are conveniently structured into an elegant lattice (see Fig. \ref{fig:lattice}). This means that given some ``double-redundancy'' function that can solve for the bottom of the lattice ($\{X^1\}\{X^2\}\to\{X^1\}\{X^2\}$), it is possible to bootstrap all the other atoms via M\"{o}bius inversion. For the given lattice $\mathfrak{A}$, the value of every atom ($\Phi_\partial(\boldsymbol{A})$) can be calculated recursively by: \begin{equation} \Phi_\partial(\boldsymbol{A}\to\boldsymbol{B}) = I_{\tau sx}(\boldsymbol{A}\to\boldsymbol{B}) - \sum_{\substack{\boldsymbol{A}'\to\boldsymbol{B}'\\\prec\\\boldsymbol{A}\to\boldsymbol{B}}}\Phi_\partial(\boldsymbol{A}'\to\boldsymbol{B}') \end{equation} Where $I_{\tau sx}$ is the double redundancy function proposed by Varley in \cite{varley_decomposing_2022}, which quantifies the shared information across time (for a more detailed discussion, see the above citation). The M\"{o}bius inversion framework provides an intuitive understanding of the double synergy atom $\{X^1,X^2\}\to\{X^1,X^2\}$: it is that portion of $\mathcal{E}(\textbf{X})$ that is \textit{not} disclosed by any simpler combination of elements in $\textbf{X}$ than $\textbf{X}$ itself. The system $\tilde{\textbf{X}}$ shown in Figure \ref{fig:eff_emergence_i} (upper right) shows non-trivial integrated emergence, with a value of $\Phi_\partial(\{X^1,X^2\}\to\{X^1,X^2\}) \approx 0.031$ bit. \section{Incongruous Emergence Appears in Multiple Frameworks} Having described how both formal approaches to emergence choose to define it, we are equipped to discuss local \textit{incongruous} emergence. As mentioned above, incongruous dynamics occurs when, locally, the information structure of a system is the opposite of the average expected tendency. Even if the system \textit{on average} displays some emergent tendency, it doesn't necessarily follow that it displays emergent properties \textit{at every moment}. Below, we will show how the localizability of excess entropy implies incongruous dynamics can occur in both coarse-graining and synergy-based emergence frameworks.
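Before doing so, we note that the M\"{o}bius inversion step itself is mechanically simple. For the (simpler) single-target lattice of Williams and Beer \cite{williams_nonnegative_2010} it reduces to a few lines, and the $\Phi$ID lattice of Fig.~\ref{fig:lattice} works the same way, just over sixteen atoms. A minimal Python sketch with toy, hand-supplied information values:
\begin{verbatim}
# Minimal sketch of Mobius inversion on the single-target partial
# information lattice of Williams & Beer: redundancy sits below the
# two unique atoms, which sit below synergy. All values are toy
# numbers supplied by hand, in bits.
I_x1y = 0.30   # I(X1; Y)
I_x2y = 0.25   # I(X2; Y)
I_xxy = 0.80   # I(X1, X2; Y)
red   = 0.15   # Red(X1, X2 -> Y), from some chosen redundancy function

unq1 = I_x1y - red                 # unique information in X1
unq2 = I_x2y - red                 # unique information in X2
syn  = I_xxy - red - unq1 - unq2   # synergy: what only the whole carries
print(unq1, unq2, syn)
\end{verbatim}
Each atom is obtained by subtracting from a measured information value everything already accounted for lower in the lattice, exactly as in the recursion above.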
\subsection{Flickering Emergence in Coarse-Graining Approaches to Emergence} \begin{figure*} \includegraphics[scale=0.5]{eff_emergence_i.pdf} \caption{\textbf{Local information of transitions at macro- and micro-scales. Top Left:} The local mutual informations associated with every transition at the macro scale. We can see a mixture of informative and misinformative transitions distributed over the matrix. \textbf{Top Right:} The local mutual informations associated with every transition at the micro-scale. We can see that a very large number of weakly informative transitions at the micro-scale (upper left-hand square) get mapped to misinformative transitions at the macro-scale (upper left-hand square of the macro-scale matrix). \textbf{Bottom:} A time series generated by a weighted random walk on the micro-scale transition matrix. We plot the ratio of the local mutual information of each transition at the micro-scale to the associated transition at the macro-scale. If the microscale is informative and the macroscale is misinformative, we say that that transition displays ``incongruous dynamics.'' This ``incongruous dynamics'' shows that, while on average the macro-scale may be more informative than the micro-scale, there are a large number of micro-scale transitions that become not only less informative, but actively misinformative!} \label{fig:eff_emergence_i} \end{figure*} We say that a system \textbf{X} admits an emergent macro-scale $\tilde{\textbf{X}}$ if $\log(\bar{\eff}(\tilde{\textbf{X}})/\bar{\eff}(\textbf{X})) > 0$, where $\bar{\eff}(\textbf{X})$ is the effectiveness of \textbf{X} (Eq. \ref{eq:effness}). In the context of the local excess entropy, the normalization by $\log(N)$ does not necessarily make sense (the inequality that $\mathcal{E}(\textbf{X}) \leq \log(N)$ only holds in the average case: the local mutual information is unbounded); however, we can instead consider the \textit{signs} of the relevant local excess entropies. When a change in sign occurs, information that may have been informative at one scale (i.e. helps us make better predictions about the future) may be misinformative at another (i.e. pushes us towards the wrong prediction). We say that incongruous dynamics occurs when $e(\tilde{\textbf{x}}) < 0$ and $e(\textbf{x}) > 0$. That is, when a transition that is informative at the micro-scale gets mapped to a transition that is misinformative at the macro-scale. From the perspective of a scientist attempting to understand a complex system, this would occur if, \textit{on average}, more predictive power (or controllability) is accessible at the macro-scale, but there exists a subset of transitions at the micro-scale that actually does \textit{worse} when coarse-grained. When considering the system in Figure \ref{fig:tpms} and its associated macro-scale, we can construct the local excess entropy for every transition (visualized in Figure \ref{fig:eff_emergence_i}). It is visually apparent that the large number of weakly informative transitions at the micro-scale are getting mapped to weakly misinformative transitions at the macro-scale. In fact, $\approx 52.73\%$ of informative micro-scale edges map to misinformative macro-scale edges! This means that, even though our toy system \textbf{X} displays non-zero emergence on average, over half of all informative micro-scale transitions are mapped to misinformative macro-scale transitions during coarse-graining.
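The bookkeeping behind such percentages is easy to reproduce. A minimal Python sketch that flags incongruous transitions for a toy four-state chain and a toy $4 \to 2$ coarse-graining (not the actual system of Fig.~\ref{fig:tpms}):
\begin{verbatim}
import numpy as np

# Minimal sketch: flag "incongruous" transitions, i.e. transitions
# informative at the micro-scale but misinformative at the macro-scale.
# The 4-state matrix and the 4 -> 2 mapping are toy values.
def stationary(tpm):
    evals, evecs = np.linalg.eig(tpm.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return pi / pi.sum()

def local_excess(tpm):
    pi = stationary(tpm)
    with np.errstate(divide="ignore"):
        return np.log2(tpm / (pi @ tpm))   # e(i -> j)

def coarse_grain(tpm, mapping, n_macro):
    """Lump micro-states into macro-states, weighting rows by pi."""
    pi = stationary(tpm)
    macro = np.zeros((n_macro, n_macro))
    weight = np.zeros(n_macro)
    for i, mi in enumerate(mapping):
        weight[mi] += pi[i]
        for j, mj in enumerate(mapping):
            macro[mi, mj] += pi[i] * tpm[i, j]
    return macro / weight[:, None]

micro = np.array([[0.4, 0.0, 0.3, 0.3],
                  [0.0, 0.4, 0.3, 0.3],
                  [0.3, 0.3, 0.4, 0.0],
                  [0.3, 0.3, 0.0, 0.4]])
mapping = [0, 0, 1, 1]
e_micro = local_excess(micro)
e_macro = local_excess(coarse_grain(micro, mapping, 2))

incongruous = [(i, j) for i in range(4) for j in range(4)
               if e_micro[i, j] > 0 and e_macro[mapping[i], mapping[j]] < 0]
print(incongruous)
\end{verbatim}
For this toy matrix every informative micro-scale self-transition maps onto a misinformative macro-scale one, a miniature version of the pattern visible in Fig.~\ref{fig:eff_emergence_i}.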
Such incongruous transitions are, however, generally lower-probability ones, which accounts for the overall display of emergence. By running a random walk on $\textbf{X}$ and computing the ratio $e(\tilde{\textbf{x}}_t)/e(\textbf{x}_t)$, we can see a number of instances where the sign is negative because incongruous dynamics have occurred (indicated with grey arrows). This is an example of what we call ``flickering emergence'', where the emergent quality transiently falls apart, like a candle sputtering. \subsubsection{Application to Networks} The phenomenon of incongruous dynamics is also relevant to extensions to other domains, such as network science. Klein, Griebenow, and Hoel showed that applying the same framework to the dynamics of random walkers on complex networks could reveal informative higher scales in the network structure, which in turn could be linked to many aspects of graph theory and network science \cite{klein_emergence_2020,klein_evolution_2021}. In this approach, nodes in a network are aggregated into ``macro-nodes'' (analogous to communities), and those macro-nodes are collapsed into a simpler network (similar to the Louvain community detection algorithm \cite{blondel_fast_2008}). From there, the effectiveness of the micro-scale network can be compared to the effectiveness of the macro-scale network in the usual way. This topological emergence has been found in a variety of biological networks and associated with evolutionary drives towards robust, fault-tolerant structures \cite{klein_evolution_2021}. In keeping with prior work, Klein et al., focused on the average information structure over the entire network; however, we can do the same kind of localized analysis on the network that we do on the Markov chains. In Figure \ref{fig:network} (left panel), we can see a structural connectivity matrix taken from a random subject in the Human Connectome Project data set \cite{van_essen_wu-minn_2013} (previously used in \cite{pope_modular_2021}) from which we have computed the local excess entropy associated with each edge. We then used the Infomap algorithm \cite{rosvall_maps_2008,rosvall_map_2009} on the raw structural connectivity network and coarse-grained the communities into macro-nodes. When considering the macro-scale transition probability matrix, we can see that the entries on the main diagonal (corresponding to a walker staying put) are overwhelmingly informative, while off-diagonal transitions are generally misinformative (Fig. \ref{fig:network}, centre panel). This is consistent with the intuition that, in state-transition networks, Infomap finds shallow, metastable transient attractors in the landscape \cite{varley_topological_2021}. While the effectiveness of the macro-scale was barely larger than the effectiveness of the micro-scale ($\approx 0.37$ bit at the micro-scale vs. $\approx 0.38$ bit at the macro-scale), we find that incongruous dynamics is quite common in this particular network: $\approx 22.47\%$ of informative micro-scale edges are mapped to macro-scale edges that are misinformative. To assess whether the distribution of informative and misinformative micro-scale edges related to the overall macro-scale structure of the network, we examined the distribution of informative and misinformative edges within and between communities/macro-nodes.
We found that informative edges were roughly just as likely to link within-community nodes ($\approx 52.3\%$) as between-community nodes ($\approx47.7\%$); however, misinformative edges were overwhelmingly more likely to link disparate communities ($\approx80.4\%$) than to fall within a single community ($\approx19.6\%$). To ensure that these results held generally and were not an artifact of the Infomap algorithm, we replicated these results using a spinglass \cite{reichardt_statistical_2006,traag_community_2009} and a label-propagation \cite{raghavan_near_2007} community detection algorithm and found that the results were consistent (see Supplementary Material). All analyses were done using the Python-iGraph package \cite{csardi_igraph_2006}. These results collectively show that, in much the same way that expected effective information and emergence relate in fundamental ways to network topology, so do their local counterparts. \begin{figure*} \centering \includegraphics[scale=0.5]{network_emergence.pdf} \caption{\textbf{Incongruous dynamics in complex networks.} The coarse-graining framework has also been applied to characterize informative higher scales. } \label{fig:network} \end{figure*} \subsection{Flickering Emergence in $\Phi$ID-based Approaches to Emergence} \begin{figure*} \centering \includegraphics[scale=0.4]{flickering_phi.pdf} \caption{\textbf{The structure of temporal mutual information varies across time. Top:} The two double-redundancy lattices show the local integrated information decompositions for two distinct transitions ($(0,1)\to(0,0)$ and $(0,0)\to(0,0)$). Despite having the same end-state, the information structure of these two transitions is completely different: one transition has an informative causal decoupling, while the other has a misinformative one, as well as opposite-signed double-redundancy ($\{1\}\{2\} \to \{1\}\{2\}$) and a number of other discrepancies. \textbf{Bottom:} A visualization of the phenomenon of ``flickering emergence" in the context of $\Phi$ID. As the system $\tilde{\textbf{X}}$ evolves over time, it cycles through states (lower plot), and for each of those transitions, we can calculate the instantaneous causal decoupling (upper plot). We can see that incongruous dynamics can occur at different times, interspersed between congruent emergence.} \label{fig:local_phi} \end{figure*} In the $\Phi$ID-based framework, emergence is associated with information about the future of the ``whole" that can only be learned by observing the whole itself and none of its simpler constituent components. This value can be computed using a decomposition of the excess entropy \cite{mediano_beyond_2019,varley_decomposing_2022}. Like the coarse-graining framework, the decomposition of the excess entropy can be localized to particular moments in time. For a given transition ($(x^1_{t-1},x^2_{t-1}) \to (x^1_t,x^2_t)$) with local excess entropy $e(\textbf{x})$, we can construct a local redundancy lattice $\boldsymbol{\mathfrak{a}}$ and solve it with the same M\"{o}bius inversion. All that is required is that the redundancy function is localizable in time. The function used above, $I_{\tau sx}$, is localizable ($i_{\tau sx}$) and can be used to construct a local integrated information lattice for any configuration of $\textbf{X}$ \cite{varley_decomposing_2022}. Once again, we take advantage of the distinction between informative and misinformative local mutual information for our indicator of incongruity.
Here we say that incongruous dynamics occurs when the \textit{expected} synergistic flow ($\{X^1,X^2\}\to\{X^1,X^2\}$) is informative (positive), but the \textit{local} value of the same atom is negative. Intuitively, this is the case when, on average, the future of the whole can be meaningfully predicted from its own past, but there are some configurations where the current global state would lead an observer to make the wrong prediction about the future global state. When considering the example macro-scale system (Fig. \ref{fig:eff_emergence_i}), we found that 31.25\% of the sixteen possible transitions had a negative value for the double-synergy atom. In Figure \ref{fig:local_phi}, we can see the global integrated information lattice for two different transitions, both landing in the universal-off state ($(0,0)$). It is clear that these two transitions have radically different local information structures, including opposite-signed double-synergy, double-redundancy, and ``downward causation" atoms: the expected value of any given atom elides a considerable amount of variability in the instantaneous dependencies. We can see this manifesting as flickering emergence when we run a random walk on the system and perform the $\Phi$ID for every time-step (Fig \ref{fig:local_phi}, Bottom): the system transiently moves through periods of informative and misinformative emergence depending on the specific transitions occurring. \section{Discussion} In this work, we have introduced the notion of ``flickering emergence," which describes how a system can, on average, admit a meaningful emergent dynamic that falls apart locally when considering particular transitions. We argue, with worked examples, that this is likely a feature of any formal approach that is built on the Shannon mutual information between the past and the future (the excess entropy \cite{james_anatomy_2011}). To demonstrate this phenomenon, we assessed two approaches which share the same core feature of excess entropy, but define ``emergence" in very different ways. The first approach, based on coarse-graining micro-scales into macro-scales \cite{hoel_quantifying_2013,hoel_can_2016,hoel_when_2017,klein_emergence_2020}, says that a system admits an emergent scale when it is possible to find a coarse-graining that maximizes the determinism and minimizes the degeneracy of state transitions, optimizing the (relative) ability to predict the future given the past. In contrast, the synergy-based framework \cite{rosas_reconciling_2020,mediano_greater_2022} associates emergence with temporal information that is present in the whole and not in any of the parts. We constructed a simple system with two scales (a micro- and macro-scale) that, on average, displays emergent properties under both coarse-graining and synergy-based approaches. However, this system also displays incongruous dynamics: local configurations where the macro-scale does worse than the micro-scale and where the synergy in the whole is transiently misinformative. The purpose of this paper is not to ``poke holes" in, or critique, either approach to emergence, nor is it attempting to adjudicate between them as the ``one true measure of emergence" (more on that below). Instead, our aim has been to highlight how the particular mathematical formalism one commits to can produce intriguing new properties that may not have been obvious from the outset.
In this case, the commitment to a temporal mutual information-based approach necessarily raises the question of how to think about local instances and what it might mean for a measure of emergence to be locally negative. How we interpret incongruous dynamics and flickering emergence depends largely on how we interpret ``emergence" as a concept. In some cases (such as discussed in \cite{varley_emergence_2022}), emergence is described in terms of a scientist's ability to model a complex system. In this case, incongruous dynamics may be largely inconsequential: every complexity scientist is familiar with the aphorism ``all models are wrong, some are useful" and if, on average, one can do better work modelling a system at the macro-scale, then transient breaks may not be a deal-breaker. Science is full of very successful models that do not work perfectly in every context, but work well enough that they can be used productively (Newtonian physics is famously an approximation that breaks down on quantum scales or at relativistic extremes, but was good enough to get humans to the Moon and back). Depending on the particular dynamic incongruities, however, the situation may not be so benign: for example, one can use a coarse-grained model of the heart as an oscillating pump very successfully and generally ignore the particular myocardial cells \textit{when the heart is healthy}. Unfortunately, arrhythmias such as ventricular fibrillation can occur when the global synchrony is disrupted by local onset of chaotic turbulence \cite{weiss_chaos_1999}, leading to an altogether-different dynamical regime. In this case, the predictions of the coarse-grained, heart-as-a-pump model diverge dangerously from the micro-scale model, potentially with fatal results. While the technical details of this example are surely open to debate, it serves as a starting point for a larger discussion about how even reliable coarse-grainings can be thrown off by unexpected micro-scale developments. So, depending on exactly what is being modelled, incongruous effective dynamics may still be of meaningful concern. On the other hand, if ``emergence" is treated as an ontologically ``real" process, and mapped to other observable phenomena, the situation may get even more complicated. Consider the recent proposal from Luppi et al.~\cite{luppi_what_2021}, who suggest a link between integrated emergence and the persistent sense of one's conscious self as an integrated agent (also discussed in \cite{krakauer_information_2020}). This is a fascinating hypothesis, with intriguing evidence in its favour \cite{luppi_synergistic_2022}; however, in light of flickering emergence, we are forced to ask: does the sense of self ``flicker?" The sense of self can clearly be experienced as more or less intensely real (and, in some cases, appears to vanish entirely) \cite{timmermann_dmt_2018,lawrence_phenomenology_2022}, although it generally seems stable during normal consciousness. If there is a link between temporal mutual information and the sense of self (an observable phenomenon), then we are forced to wrangle with the question of temporal locality. Perhaps the brain never visits those misinformative configurations? Or perhaps it does and we do not notice? We remain agnostic about the proposed link between the self and synergistic temporal information, but highlight it as an example of how the localizability of mutual information can raise questions about the links between information-theoretic measures and real-world phenomena.
In addition to the main question about local information dynamics, a second aim of this paper was to bring the two frameworks into more explicit dialogue. Both approaches have developed largely in parallel (despite having a common intellectual heritage in integrated information theory) and propose different notions of what it means to be emergent. As we have seen, however, they share significant commonalities in spite of their differences. While ``emergence" is typically discussed as a single phenomenon in complex systems, there is a strong argument to be made for the ``pragmatic pluralist" approach \cite{bedau_weak_2010,bedau_weak_2011}, in which many different ``kinds" of emergence are recognized in parallel. Under such a framework, both frameworks discussed here would be considered valid kinds of emergent dynamics: similar enough to belong to a common category of phenomena, but distinct enough to be considered separate. This is very similar to how the question of defining ``complexity" has practically been resolved: historically, there has been considerable debate on how we might define ``complexity" in ``complex systems," and various measures have been proposed over the years, including algorithmic compressibility \cite{ziv_compression_1978}, entropy rate \cite{richman_physiological_2000}, and integration/segregation balance \cite{tononi_measure_1994}. Despite the considerable ink spilled on the question, Feldman and Crutchfield argued that there likely is not a single measure of what it means to be ``complex" and that mathematical attempts at a universally intuitive measure were misguided \cite{feldman_measures_1998}. Instead, the field has largely moved towards an understanding that different notions of ``complexity" are appropriate in different contexts and can illuminate different aspects of system dynamics, all of which may be considered ``complex" in their own way (for example, see \cite{varley_topological_2021}, which proposes the notion of a ``dynamical morphospace" to characterize systems along different axes of complex dynamics). A similar resolution may end the perennial conflict over what is and is not a valid measure or kind of emergence. The multi-scale framework may turn out to be useful when attempting to think about how cells can be coarse-grained into tissues (where there is a natural distinction of scales), while the integrated information framework may turn out to be useful when considering the computational properties of ensembles of neurons \cite{varley_decomposing_2022} or flocking objects \cite{rosas_reconciling_2020}, but not vice-versa. Both of these may be instances where reductionism fails to provide the crucial insight into a collective process, but importantly, reductionism may fail \textit{for different reasons in different contexts}. In summary, we feel that the problem of information-based approaches to emergence remains a rich area of research, with considerable territory remaining to be explored. Unexpected phenomena, such as flickering emergence, may force us to challenge how we think about emergent properties in complex systems and potentially inform future research directions. The richness and variability of different measures is, to our mind, a feature rather than a bug, with intriguing commonalities and differences.
\section*{Conclusion} Regardless of whether one prefers coarse-graining-based or synergy-based approaches to emergence, and whether one chooses to think of emergence purely as a question of modelling, or of truly novel physical properties, any approach based on excess entropy is likely to display both incongruous dynamics and flickering emergence. The localizability of mutual information and related measures (such as the transfer entropy) shows us that, even when emergence (however defined) occurs on average, there can still be complex and unexpected moment-to-moment deviations from that average. Some of those deviations can run in direct opposition to the expected behaviour (signified by negative local mutual information). These deviations from the long-term norm may have profound implications for how we think about emergent properties in nature, and suggest new avenues of research for scientists interested in the role emergence plays in the natural world. \section*{Acknowledgements} TFV is supported by NSF-NRT grant 1735095, Interdisciplinary Training in Complex Networks and Systems. I would like to thank Dr. Erik Hoel for helpful feedback on this manuscript, and Dr. Olaf Sporns for giving me the space to pursue this project when I should be focusing on finishing up my PhD. \bibliographystyle{unsrt}
\section{Introduction} The aim of this work is computing periods of words in the approximate pattern matching model (see e.g.\ \cite{Jewels,DBLP:books/cu/Gusfield1997}). This task can be stated as the \emph{approximate period recovery (APR) problem} that was defined by Amir et al.~\cite{DBLP:journals/talg/AmirELPS12}. In this problem, we are given a word; we suspect that it was initially periodic, but then errors might have been introduced in it. Our goal is to attempt to recover the periodicity of the original word. If too many errors have been introduced, it might be impossible to recover the period. Hence, a requirement is imposed that the distance between the original periodic word and the word with errors is upper bounded, with the bound being related to the period length. Here, edit distance is used as a metric. The fastest known solution to the APR problem is due to Amir et al.~\cite{DBLP:journals/tcs/AmirALS18}. A different version of the APR problem was considered by Sim et al.~\cite{DBLP:journals/tcs/SimIPS01}, who bound the number of errors per occurrence of the period. The general problem of computing approximate periods over weighted edit distance is known to be NP-complete; see~\cite{DBLP:journals/tcs/Popov03,DBLP:journals/tcs/SimIPS01}. Other variants of approximate periods have also been introduced. One direction is the study of approximate repetitions, that is, subwords of the given word that are approximately periodic in some sense (and, possibly, maximal); see~\cite{DBLP:journals/tcs/AmitCLS17,DBLP:journals/tcs/KolpakovK03,DBLP:journals/bioinformatics/SokolBT07,DBLP:journals/tcs/SokolT14}. Another is the study of quasiperiods, occurrences of which may overlap in the text; see, e.g.,~\cite{DBLP:journals/tcs/ApostolicoE93,DBLP:journals/ipl/ApostolicoFI91,DBLP:journals/ipl/Breslauer92,DBLP:conf/soda/KociumakaKRRW12,DBLP:journals/algorithmica/LiS02}. Let $\mbox{\sf ed-dist}(S,W)$ be the edit distance (or Levenshtein distance) between the words $S$ and $W$, that is, the minimum number of edit operations (insertions, deletions, or substitutions) necessary to transform $S$ to $W$. A word $P$ is called \emph{primitive} if it cannot be expressed as $P=Q^k$ for a word $Q$ and an integer $k \ge 2$. The APR problem can now formally be defined as follows. \defproblem{Approximate Period Recovery (APR) Problem}{ A word $S$ of length $n$ }{ All primitive words $P$ (called \emph{approximate word-periods}) for which the infinite word $P^\infty$ has a prefix $W$ such that $\mbox{\sf ed-dist}(S,W) < \tau_p$, where $p=|P|$ and $\tau_p = \lfloor \frac{n}{(3.75+\epsilon)\cdot p} \rfloor$ with $\epsilon > 0$ } \begin{remark} Amir et al.~\cite{DBLP:journals/tcs/AmirALS18} show that each approximate word-period is a subword of $S$ and thus can be represented in constant space. Moreover, they show that the number of approximate word-periods is $\mathcal{O}(n)$. Hence, the output to the APR problem uses $\mathcal{O}(n)$ space. \end{remark} The solution of Amir et al.~\cite{DBLP:journals/tcs/AmirALS18} works in $\mathcal{O}(n^{4/3})$ time\footnote{Also the APR problem under the Hamming distance was considered~\cite{DBLP:journals/talg/AmirELPS12} for which an $\mathcal{O}(n \log n)$-time algorithm was presented~\cite{DBLP:journals/tcs/AmirALS18} for the threshold $\lfloor \frac{n}{(2+\epsilon)\cdot p} \rfloor$ with $\epsilon>0$.}. Our result is an $\mathcal{O}(n \log n)$-time algorithm for the APR problem. 
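For intuition only, the definitions above can be turned into a naive reference implementation. The following Python sketch (ours, far slower than the algorithms developed below) computes $\mbox{\sf ed-dist}$ with the textbook dynamic program and checks whether a given $P$ is an approximate word-period of $S$ by matching $S$ against all sufficiently short prefixes of $P^\infty$; primitivity of $P$ is not checked here:

\begin{verbatim}
def ed_dist(s, w):
    """Levenshtein distance via the textbook O(|s| |w|) dynamic program."""
    m, n = len(s), len(w)
    d = list(range(n + 1))  # distances for the empty prefix of s
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(d[j] + 1,      # delete s[i-1]
                                   d[j - 1] + 1,  # insert w[j-1]
                                   prev + (s[i - 1] != w[j - 1]))
    return d[n]

def is_approx_word_period(S, P, eps=0.25):
    """Check whether some prefix W of P^infinity satisfies
    ed-dist(S, W) < floor(n / ((3.75 + eps) * |P|)). Prefixes longer
    than 2|S| cannot help, since each extra character costs a deletion."""
    n, p = len(S), len(P)
    tau = int(n / ((3.75 + eps) * p))
    W = P * (2 * n // p + 2)  # long enough to cover all useful prefixes
    best = min(ed_dist(S, W[:L]) for L in range(2 * n + 1))
    return best < tau
\end{verbatim}

This brute force takes cubic time per candidate period and serves only to make the problem statement concrete.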
Let us recall that two words $U$ and $V$ are \emph{cyclic shifts} (denoted as $U \approx V$) if there exist words $X$ and $Y$ such that $U=XY$ and $V=YX$. The algorithm of Amir et al.~\cite{DBLP:journals/tcs/AmirALS18} consists of two steps. First, a small number of candidates are identified, as stated in the following fact. \begin{fact}[Amir et al.~{\cite[Section 4.3]{DBLP:journals/tcs/AmirALS18}}]\label{fct:red} In $\mathcal{O}(n)$ time, one can find $\mathcal{O}(\log n)$ subwords of~$S$ (of exponentially increasing lengths) such that every approximate word-period of~$S$ is a cyclic shift of one of the candidates. \end{fact} For a pattern $S$ and an infinite word $W$, by $\mbox{\sf ED}(S,W)$ let us denote the minimum edit distance between $S$ and a prefix of $W$. By \cref{fct:red}, the APR problem reduces to $\mathcal{O}(\log n)$ instances of the following problem. \defproblem{Approximate Pattern Matching in Periodic Text (APM Problem)}{ A word $S$ of length $n$, a word $P$ of length $p$, and a threshold $k$ }{ For every cyclic shift $U$ of $P$, compute $\mbox{\sf ED}(S,U^\infty)$ or report that this value is greater than $k$ } Amir et al.~\cite{DBLP:journals/tcs/AmirALS18} use two solutions to the APM problem that work in $\mathcal{O}(np)$ time and $\mathcal{O}(n+k(k+p))$ time, respectively. The main tool of the first algorithm is \emph{wrap-around dynamic programming}~\cite{DBLP:journals/ipl/FischettiLSS93} that solves the APM problem without the threshold constraint $k$ in $\mathcal{O}(np)$ time. The other solution is based on the Landau--Vishkin algorithm~\cite{DBLP:journals/jal/LandauV89}. For each $p$ and $k < \tau_p$, either algorithm works in $\mathcal{O}(n^{4/3})$ time. \paragraph{Our results.} We show that: \begin{itemize} \item The APM problem can be solved in $\mathcal{O}(n+kp)$ time. \item The APR problem can be solved in $\mathcal{O}(n \log n)$ time. \end{itemize} Our solution to the APM problem involves a more efficient combination of wrap-around dynamic programming with the Landau--Vishkin algorithm. \section{Approximate Pattern Matching in Periodic Texts} We assume that the length of a word $U$ is denoted by $|U|$ and the letters of $U$ are numbered $0$ through $|U|-1$, with $U[i]$ representing the $i$th letter. By $U[i \mathinner{.\,.} j]$ we denote the subword $U[i] \cdots U[j]$; if $i>j$, it denotes the empty word. A prefix of $U$ is a subword $U[0 \mathinner{.\,.} i]$ and a suffix of $U$ is a subword $U[i \mathinner{.\,.} |U|-1]$, denoted also as $U[i \mathinner{.\,.}]$. The length of the longest common prefix of words $U$ and $V$ is denoted by $\mathsf{lcp}(U,V)$. The following fact specifies a well-known efficient data structure answering such queries over suffixes of a given text; see, e.g., \cite{AlgorithmsOnStrings}. \begin{fact}\label{fct:ver} Let $S$ be a word of length $n$ over an integer alphabet of size $\sigma = n^{\mathcal{O}(1)}$. After $\mathcal{O}(n)$-time preprocessing, given indices $i$ and $j$ ($0 \le i,j < n$) one can compute $\mathsf{lcp}(S[i \mathinner{.\,.}],S[j \mathinner{.\,.}])$ in $\mathcal{O}(1)$ time. \end{fact} \subsection{Wrap-Around Dynamic Programming}\label{sec:wrap} Following~\cite{DBLP:journals/ipl/FischettiLSS93}, we introduce a table $T[0\mathinner{.\,.} n, 0\mathinner{.\,.} p-1]$ whose cell $T[i,j]$ denotes the minimum edit distance between $S[0\mathinner{.\,.} i-1]$ and some subword of the periodic word $P^\infty$ ending on the $(j-1)$th character of the period. 
More formally, for $i \in \{0,\ldots,n\}$ and $j \in \mathbb{Z}_p$, we define \[T[i,j]=\min \{ \mbox{\sf ed-dist}(S[0\mathinner{.\,.} i-1],P^\infty[i'\mathinner{.\,.} j'])\,:\,i' \in \mathbb{N},\ j'\equiv j-1\ (\bmod\,p)\};\] see \cref{fig:sroda}. The following fact characterizes $T$ in terms of $\mbox{\sf ED}$. \begin{fact}\label{fct:obs} We have $\min\{\mbox{\sf ED}(S,U^\infty) : U \approx P\} = \min\{T[n,j] : j \in \mathbb{Z}_p\}$. \end{fact} \begin{proof} First, let us observe that the definition of $T$ immediately yields \[\min\{T[n,j] : j \in \mathbb{Z}_p\}= \min\{\mbox{\sf ed-dist}(S, P^\infty[i'\mathinner{.\,.} j'])\,:\, i',j'\in \mathbb{N}\}.\] In other words, $\min\{T[n,j] : j \in \mathbb{Z}_p\}$ is the minimum edit distance between $S$ and any subword of $P^\infty$. On the other hand, $\min\{\mbox{\sf ED}(S,U^\infty) : U \approx P\}$ by definition of $\mbox{\sf ED}$ is the minimum edit distance between $S$ and a prefix of $U^\infty$ for a cyclic shift $U$ of $P$. Finally, it suffices to note that the sets of subwords of $P^\infty$ and of prefixes of $U^\infty$ taken over all $U \approx P$ are the same. \end{proof} Below, we use $\oplus$ and $\ominus$ to denote operations in $\mathbb{Z}_p$. \begin{lemma}[\cite{DBLP:journals/ipl/FischettiLSS93}]\label{lem:Fis} The table $T$ is the unique table satisfying the following formula: \begin{align*} T[0,j]& =0,\\ T[i+1,j\oplus 1]&=\min\left\{ \begin{matrix} T[i,j\oplus 1]&+&1 \\ T[i,j]&+& [S[i] \neq P[j]] \\ T[i+1,j]&+&1 \end{matrix}\right\}. \end{align*} \end{lemma} \begin{figure}[t] \centering \includegraphics[width=.45\textwidth]{image_T.pdf} \caption{The first four columns show the table $T$ for $S=\mathtt{CBAACAABCA}$ and $P=\mathtt{ABCA}$. The asterisks represent values that are greater than $k=3$; these values need not be computed in our algorithm. The next columns contain copies of $T$; the highlighted diagonals show the computation of the array $D$ (see below). Note that $T[3,1]=1$ because $T[2,0] = 1$ and $S[2]=\texttt{A}=P[0]$. }\label{fig:sroda} \end{figure} Let us mention that the above formula contains cyclic dependencies that emerge due to wrapping (the third value in the minimum). Nevertheless, the table can be computed using a graph-theoretic interpretation. With each $T[i,j]$ we associate a vertex $(i,j)$. The arcs are implied by the formula in \cref{lem:Fis}: the arcs pointing to $(i+1,j\oplus 1)$ are from $(i,j\oplus 1)$ with weight 1 (deletion), from $(i,j)$ with weight 0 or 1 (match or substitution), and from $(i+1,j)$ with weight 1 (insertion). Then $T[i,j]$ is the minimum weight of a path from any vertex $(0,j')$ to the vertex $(i,j)$. With this interpretation, the table $T$ can be computed using Breadth-First Search, with the 0-arcs processed before the 1-arcs. \subsection{Wrap-Around DP with Kangaroo Jumps} Our next goal is to compute all the values $T[n,j]$ not exceeding $k$. In the algorithm, we exploit two properties of our dynamic programming array. First, let us consider a diagonal modulo the period length, that is, the cells of the form $T[i, j \oplus i]$ for a fixed $j\in \mathbb{Z}_p$. Observe that the sequence of values on every diagonal is non-decreasing. This stems from the fact that on each diagonal the alignment of the pattern is the same, and extending a prefix of $S$ and a subword of $P^\infty$ by one letter does not decrease their edit distance.
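Although asymptotically slower than the technique developed next, the cyclic recurrence of \cref{lem:Fis} can also be evaluated by straightforward value iteration, repeatedly relaxing all cells until a fixpoint is reached. The following Python sketch (ours) illustrates this; termination is guaranteed because the values are non-negative integers that can only decrease:

\begin{verbatim}
def wraparound_T(S, P):
    """Solve the cyclic recurrence of the wrap-around DP by repeated
    relaxation; O(n * p) work per sweep."""
    n, p = len(S), len(P)
    INF = n + p + 1
    T = [[0] * p] + [[INF] * p for _ in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n + 1):
            for j in range(p):
                jm = (j - 1) % p
                best = T[i][jm] + 1  # insertion arc within the row
                if i > 0:
                    best = min(best,
                               T[i - 1][j] + 1,                     # deletion
                               T[i - 1][jm] + (S[i - 1] != P[jm]))  # (mis)match
                if best < T[i][j]:
                    T[i][j] = best
                    changed = True
    return T
\end{verbatim}

By \cref{fct:obs}, the minimum of the last row of \texttt{wraparound\_T(S, P)} equals $\min\{\mbox{\sf ED}(S,U^\infty) : U \approx P\}$.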
This monotonicity leads to the conclusion that if we want to iteratively compute the set of cells reachable within a fixed distance, we can represent this information by just the indices of the furthest reachable cells on each of the diagonals. Our task is to check whether we can reach some cell in the last row within the distance $k$. To achieve this, we iteratively find the sets of cells reachable within the subsequent distances $0,1,\ldots$. More formally, for $d \in \{0,\ldots,k\}$ and $j\in \mathbb{Z}_p$, we define \[D[d,j]=\max\{i : T[i,j\oplus i]\le d\};\] see \cref{fig:sroda}. Secondly, we observe that it is cheap to check how many consecutive cost-0 transitions can be made from a given cell. Recall that the only free transition checks whether the next letters of the pattern and the periodic word are equal. Determining how far this transition can be chained amounts to finding the longest common prefix of the appropriate suffixes of $S$ and $P^\infty$. We obtain the following recursive formulae for $D[d,j]$; see Fig.~\ref{fig:d}. \begin{figure}[b!] \centering \input{_fig_d} \caption{Illustration of the definition and computation of the array $D$.}\label{fig:d} \end{figure} \begin{fact} The table $D$ can be computed using the following formula: \begin{align*} D[0,j] &=\mathsf{lcp}(S, P^\infty[j\mathinner{.\,.}]),\\ D[d+1,j] & = i + \mathsf{lcp}(S[i\mathinner{.\,.} ],P^\infty[i\oplus j\mathinner{.\,.}]), \end{align*} where $i = \min(n,\,\max\{D[d,j]+1,\,D[d,j \ominus 1],\,D[d,j \oplus 1]+1\})$. \end{fact} \begin{proof} We prove the fact by considering the interpretation of $T[i,j]$ in terms of distances in a weighted graph (see~\cref{sec:wrap}). By \cref{lem:Fis}, from every vertex $(i,j)$ we have the following outgoing arcs: \begin{itemize} \item $(i,j) \xrightarrow{1} (i + 1, j)$, \item $(i,j) \xrightarrow{[S[i] \neq P[j]]} (i + 1, j \oplus 1)$, \item $(i,j) \xrightarrow{1} (i, j \oplus 1)$. \end{itemize} Moreover, the value $T[i,j]$ is equal to the minimum distance to $(i,j)$ from some vertex $(0,j')$. The only arc of cost 0 is $(i,j) \xrightarrow{0} (i+1, j \oplus 1)$ when $S[i] = P[j]$. Therefore, when we have reached a vertex $(i,j)$, the only vertices we can reach from it by using only 0-arcs are $(i, j), {(i + 1, j \oplus 1)}, \ldots, {(i + \ell, j \oplus \ell)}$, where $\ell$ is the maximum number such that $S[i] = P[j]$, $S[i + 1] = P[j \oplus 1]$, \ldots, ${S[i + (\ell-1)] = P[j \oplus (\ell-1)]}$. Therefore, $\ell = \mathsf{lcp}(S[i\mathinner{.\,.}], P^\infty[j\mathinner{.\,.}])$. Hence, $D[0,j] = \mathsf{lcp}(S, P^\infty[j\mathinner{.\,.}])$ holds for distance $0$. Taking advantage of the monotonicity of distances on each diagonal, we know that the full information about the vertices reachable within distance $d$ can be stored as a list of the furthest points on each diagonal. Moreover, to reach a vertex at distance $d + 1$, we need to pick a vertex at distance $d$, follow a single 1-arc, and then zero or more 0-arcs. Combining this with the fact that arcs changing the diagonal can be used at any vertex, it suffices to consider only the bottom-most point of each diagonal at distance $d$ as the starting point of the 1-arc, as we can greedily postpone following an arc that switches diagonals. \end{proof} To conclude, assuming we know the indices of the furthest reachable cells on each of the diagonals for an edit distance $d$, we can easily compute the indices for the next distance.
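A direct Python transcription of this update (a sketch: here $\mathsf{lcp}$ is computed naively by character comparison, whereas the actual algorithm uses the $\mathcal{O}(1)$-time queries of \cref{fct:ver}; the pseudocode version is \cref{alg:main} below) reads:

\begin{verbatim}
def apm_last_row(S, P, k):
    """All values T[n, j] <= k; entries left as None exceed k."""
    n, p = len(S), len(P)
    def lcp(i, j):  # lcp(S[i..], P^inf[j..]), naive version
        l = 0
        while i + l < n and S[i + l] == P[(j + l) % p]:
            l += 1
        return l
    T_last = [None] * p
    D_prev = None
    for d in range(k + 1):
        D = [0] * p
        for j in range(p):
            if d == 0:
                i = 0
            else:
                i = min(n, max(D_prev[j] + 1,
                               D_prev[(j - 1) % p],
                               D_prev[(j + 1) % p] + 1))
            D[j] = i + lcp(i, (i + j) % p)
            if D[j] == n and T_last[(j + n) % p] is None:
                T_last[(j + n) % p] = d
        D_prev = D
    return T_last
\end{verbatim}

For the running example of \cref{fig:sroda} ($S=\mathtt{CBAACAABCA}$, $P=\mathtt{ABCA}$, $k=3$), calling \texttt{apm\_last\_row(S[::-1], P[::-1], 3)} should compute the last row of the table $T^R$ introduced below; in particular, by \cref{fct:tr}, its entry at index $2$ should equal $\mbox{\sf ED}(S,(\mathtt{CAAB})^\infty)=2$.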
At each step, we first update the indices by applying the available 1-arcs; afterwards, we increase the indices by the results of the appropriate $\mathsf{lcp}$-queries. In the end, we have computed the furthest reachable cells on each of the diagonals within distance $d + 1$, and we have achieved that in time linear in the number of diagonals, i.e., in $\mathcal{O}(p)$ time. This approach is shown as \cref{alg:main}. \begin{algorithm}[h] \caption{Compute all values $T[n,j]$ not exceeding $k$}\label{alg:main} $T[n, 0 \mathinner{.\,.} p - 1] := (\bot,\ldots,\bot)$\; \For{$d:=0$ \KwSty{to} $k$}{ \ForEach{$j\in \mathbb{Z}_p$}{ \If{$d = 0$}{$i := 0$ } \Else{$i := \min(n,\max(D[d-1, j]+1, D[d-1, j\ominus 1], D[d-1, j\oplus 1]+1))$ } $D[d, j] := i+\mathsf{lcp}(S[i\mathinner{.\,.}], P^\infty[i\oplus j \mathinner{.\,.}])$\; \If{$D[d,j] = n$ \KwSty{and} $T[n, j \oplus n] = \bot $}{ $T[n, j \oplus n] := d$\; } } } \end{algorithm} \begin{lemma} \cref{alg:main} for each $j\in \mathbb{Z}_p$ computes $T[n,j]$ or reports that $T[n,j]>k$. It can be implemented in $\mathcal{O}(n+pk)$ time. \end{lemma} \begin{proof} We use \cref{fct:ver} to answer each $\mathsf{lcp}$-query in constant time, by creating a data structure for $\mathsf{lcp}$ queries for the word $S\#P^r$, where $\#$ is a sentinel character and $r$ is an exponent large enough so that $|P^r|\ge n+p$. \end{proof} \subsection{Main Results} The table $T$ specifies the last position of an approximate match within the period of the periodic word. However, in our problem we need to know the starting position, which determines the sought cyclic shift of the period. Thus, let $T^R$ be the counterpart of $T$ defined for the reverse words $S^R$ and $P^R$. Its last row satisfies the following property: \begin{fact}\label{fct:tr} For every $j\in \mathbb{Z}_p$, we have \[T^R[n,p\ominus j]=\mbox{\sf ED}(S,U^\infty) \quad\text{where}\quad U=P[j\mathinner{.\,.} p-1]\cdot P[0\mathinner{.\,.} j-1].\] Here, $U$ is the cyclic shift of $P$ with the leading $j$ characters moved to the back. \end{fact} \begin{proof} By definition of $T^R$ and $T$, for $0\le i \le n$ and $j\in \mathbb{Z}_p$, we have \begin{align*} T^R[n,j] &= \min\{\mbox{\sf ed-dist}(S^R, (P^R)^\infty[i'\mathinner{.\,.} j'])\,:\,i' \in \mathbb{N},\, j'\equiv j-1\ (\bmod\,p)\}\\ &=\min \{\mbox{\sf ed-dist}(S, P^\infty[j'\mathinner{.\,.} i'])\,:\,i' \in \mathbb{N},\, j'\equiv -j\ (\bmod\,p)\}\\ & =\min \{\mbox{\sf ed-dist}(S, P^\infty[p\ominus j\mathinner{.\,.} i'])\,:\,i' \in \mathbb{N}\}\\ & =\mbox{\sf ED}(S, P^\infty[p\ominus j\mathinner{.\,.}]). \end{align*} Consequently, \[T^R[n,p\ominus j]=\mbox{\sf ED}(S,P^\infty[j\mathinner{.\,.}])=\mbox{\sf ED}(S, (P[j\mathinner{.\,.} p-1]\cdot P[0\mathinner{.\,.} j-1])^\infty)\] holds as claimed. \end{proof} \begin{example} If $P=\mathtt{ABCA}$ and $S=\mathtt{CBAACAABCA}$ (see~\cref{fig:sroda}), then $T^R[10,2] = \mbox{\sf ED}(\mathtt{CBAACAABCA}, (\mathtt{CAAB})^\infty)=\mbox{\sf ed-dist}(\mathtt{CBAACAABCA},\mathtt{CAABCAABCA})=2$. \end{example} Running \cref{alg:main} on the reverse input, we obtain the solution to the APM problem. \begin{theorem}\label{thm:main} The Approximate Pattern Matching in Periodic Text problem can be solved in $\mathcal{O}(n+kp)$ time. \end{theorem} By combining \cref{fct:red} and \cref{thm:main} with $k < \tau_p$, we arrive at an improved solution to the APR problem. \begin{theorem}\label{thm:main2} The Approximate Period Recovery problem can be solved in $\mathcal{O}(n \log n)$ time. \end{theorem} \bibliographystyle{plainurl}
\section*{Introduction} \label{sec:intro} In the normal heart, each heartbeat is associated with an action potential (AP). The cardiac AP consists of a depolarized phase in which the voltage is elevated; this is associated with transient increased permeability of the cell membrane to Na$^+$ and Ca$^{2+}$. The depolarized phase is followed by a repolarization to the resting membrane potential, associated with increased permeability to K$^+$ ions. These changes in the membrane potential lead to a sequence of events that result in contraction of the heart muscle, thus allowing for the pumping of blood through the body. Early afterdepolarizations (EADs) are pathological voltage oscillations that have been observed in heart muscle cells (cardiomyocytes) during the repolarizing phase of the cardiac AP under conditions in which the AP is elongated. EADs can be induced by hypokalemia \cite{Madhvani2011,Sato2010}, as well as oxidative stress \cite{Xie_LH2008}. They have also often been observed following the administration of drugs that act on K$^+$, Na$^+$, or Ca$^{2+}$\ ion channels such as dofetilide \cite{Guo2007}, {\em dl}-sotalol \cite{Yan2001}, azimilide \cite{Yan2001}, bepridil \cite{Nobe1993,Winslow1986}, isoproterenol \cite{Priori1990,Shimizu1991}, quinidine \cite{Davidenko1989}, and BayK8644 \cite{January1989,Sato2010}. These drug-induced EADs can then lead to ventricular tachyarrhythmias \cite{Asano1997,ElSherif2003,Yan2001}. Genetic defects in Na$^+$\ and K$^+$\ channels that prolong the action potential duration can also lead to an increased rate of EADs and risk of sudden death \cite{Napolitano2005}. EADs have been associated with long QT syndrome \cite{Shimizu1991}, and have long been recognized as a mechanism for the generation of premature ventricular complexes (PVCs) in the electrocardiogram \cite{Shimizu1994}. Different ventricular arrhythmias, including torsade de pointes, are thought to be initiated by PVCs stemming from EADs \cite{Cranefield1991,Lerma2007stochastic,Shimizu1997,Shimizu1991}. That is, EADs at the myocyte level have been implicated as the primary mechanism promoting arrhythmias at the tissue level in acquired and congenital long-QT syndromes, including polymorphic ventricular tachycardia and ventricular fibrillation \cite{Pogwizd2004,Sanguinetti2006,Yan2001}. Numerous mathematical models have been constructed at the cellular level to study the genesis of EADs \cite{Kurata2017,Luo1994b,Sato2010,Tran2009,Zeng1995}. These have confirmed the importance of increased inward Ca$^{2+}$\ current and decreased outward K$^+$\ current in the production of EADs. They have also confirmed that reactivation of Ca$^{2+}$\ current is a key element of EAD production \cite{Zeng1995}. The importance of this ``Ca$^{2+}$\ window current" in EAD production was later demonstrated through the use of the Dynamic Clamp technique \cite{Madhvani2011}, which is a hybrid between mathematical modeling and experimentation. Modeling at the tissue level has also been done, in this case to understand EAD propagation, synchronization, and the genesis of arrhythmia \cite{DeLange2012,Huffaker2004,Sato2009,Vandersickel2014}. These studies demonstrate that EADs at the cellular level can lead to arrhythmias at the tissue level, as has been suggested in experimental studies.
A useful analysis technique for understanding the behavior of models of excitable systems such as cardiomyocyte models separates system variables into those that change on a fast time scale and those that change on a slow time scale, and then analyzes the two subsystems and their interaction \cite{Bertram2017}. This slow/fast analysis has been used to understand the genesis of EADs, using a 3-variable model in which two variables were treated as ``fast variables" and one variable treated as a ``slow variable". It was shown that EADs can arise via a delayed subcritical Hopf bifurcation of the fast subsystem of variables \cite{Tran2009,Kugler2016}. This explanation, while providing insights, is limited in its descriptive capabilities. For example, it provides limited information on parameter sets for which EADs may occur, and it does not allow one to predict the number of EADs that are produced when they do occur. Recently, it was demonstrated that EADs can be attributed to the existence of a folded-node singularity and the accompanying canard orbits \cite{kugler2018}. This was done with the same three-dimensional model for cardiac action potentials, but now treating one variable as a fast variable and the other two as slow variables. Such a splitting provides the potential for insights that are not available with the 1-slow/2-fast splitting, as is demonstrated in \cite{kugler2018} and in an earlier publication that focused on electrical bursting in pituitary cells \cite{vo2010}. For example, once it is established that the EADs are organized by a folded-node singularity, it is possible to determine regions of parameter space in which EADs can occur \cite{kugler2018}. Ventricular cardiomyocytes are, in a physiological setting, subject to periodic stimulation from upstream cardiac cells, originating at the sinoatrial node. Prior experimental and modeling studies have demonstrated that EADs occur more readily at low pacing frequencies than at high frequencies \cite{Sato2010,Zeng1995,Damiano1984}. At intermediate forcing frequencies the dynamics are very complex, consisting of alternans with varying numbers of EADs at each stimulus, a behavior described as ``dynamical chaos" \cite{Sato2010,Tran2009}. The primary goal of this article is to provide an understanding for these phenomena. To achieve this, we use the same minimal cardiac action potential model that was developed in \cite{Sato2010} and used recently in \cite{kugler2018}, and apply a 2-slow/1-fast splitting of the model. We demonstrate that the effects of periodic stimulation of the model cell can be understood precisely using the theory of folded-node singularities. In particular, we show that the number of EADs produced by a stimulus depends on where it injects the trajectory into the so-called ``singular funnel", and with this knowledge we demonstrate why low-frequency pacing is expected to yield more EADs than is high-frequency pacing. We also demonstrate the origin of the ``dynamical chaos" that occurs at intermediate-frequency pacing. Finally, we demonstrate why drugs that inhibit the opening of K$^+$\ channels facilitate EADs, and why EADs can be induced by hypokalemia \cite{Madhvani2011,Sato2010,Yan2001}. 
\section*{Action Potentials and EADs with the Minimal Model} \label{sec:model} We study a low-dimensional model for the electrical activity in a cardiomyocyte \cite{Sato2010}, \begin{equation} \label{eq:model} \begin{split} C_m \frac{dV}{dt} &= -\left( I_{\rm K} + I_{\rm Ca} \right) + I_{\rm sti}, \\ \frac{dn}{dt} &= \frac{n_{\infty}(V)-n}{\tau_n}, \\ \frac{dh}{dt} &= \frac{h_{\infty}(V)-h}{\tau_h}, \end{split} \end{equation} where $I_{\rm K}$ is a repolarizing K$^+$ current, $I_{\rm Ca}$ is a depolarizing Ca$^{2+}$ current, and $I_{\rm sti}$ is an external pacemaking stimulus current. We note that \eqref{eq:model} excludes the depolarizing Na$^+$\ current since prior studies have found that it has almost no effect on EADs (it is inactivated during the plateau of the AP). Here, $V$ is the membrane potential across the cell, $n$ is the activation variable for the K$^+$ channels, and $h$ is the inactivation variable for the L-type Ca$^{2+}$ channels. The ionic currents are described by \[ I_{\rm K} = g_K n \left( V - V_{\rm K} \right) \quad \text{ and } \quad I_{\rm Ca} = g_{Ca} m_{\infty}(V) h \left( V - V_{\rm Ca} \right), \] and we set \[ I_{\rm sti} = 40 \sum_{k \in \mathbb{N}} \left[ H\left( t - k \cdot {\rm PCL} \right) - H\left( t - (k \cdot {\rm PCL}+1)\right) \right] \] where $H(\cdot)$ is the Heaviside function. That is, the stimulus current provides the system with square wave pulses of $1$~ms duration and $40$~$\mu$A/cm$^2$ amplitude at a frequency set by the pacing cycle length (PCL). The steady state activation functions are \[ x_{\infty}(V) = \frac{1}{1+\exp \left( \frac{V_x-V}{s_x} \right)}, \] where $x \in \{ m, n \}$, and the steady state inactivation function is \[ h_{\infty}(V) = \frac{1}{1+\exp \left( \frac{V-V_h}{s_h} \right)}. \] Standard parameter values are listed in Table \ref{tab:params}; these have been tuned so that the model \eqref{eq:model} periodically produces APs with EADs even in the absence of any stimulus current, as in \cite{Sato2010}. \begin{table}[ht] \centering \topcaption{Parameter definitions and typical values used in the minimal model \eqref{eq:model}.} \begin{tabular}{|c|c|l|} \hline Parameter & Value & Definition \\ \hline $C_m$ & 0.5 $\mu$F/cm$^2$ & Membrane capacitance \\ $g_{Ca}$ & 0.025~mS/cm$^2$ & Maximal conductance of L-type Ca$^{2+}$ channels \\ $g_K$ & 0.04~mS/cm$^2$ & Maximal conductance of K$^{+}$ channels \\ $V_{Ca}$ & 100~mV & Reversal potential for Ca$^{2+}$ \\ $V_{K}$ & -80~mV & Reversal potential for K$^{+}$ \\ $\tau_n$ & 300~ms & Time constant for activation of K$^{+}$ channels \\ $\tau_h$ & 80~ms & Time constant for inactivation of L-type Ca$^{2+}$ channels \\ $V_m$ & -35~mV & Voltage value at midpoint of $m_{\infty}(V)$ \\ $s_m$ & 6.24~mV & Slope parameter of $m_{\infty}(V)$ \\ $V_n$ & -40~mV & Voltage value at midpoint of $n_{\infty}(V)$ \\ $s_n$ & 5~mV & Slope parameter of $n_{\infty}(V)$ \\ $V_h$ & -20~mV & Voltage value at midpoint of $h_{\infty}(V)$ \\ $s_h$ & 8.6~mV & Slope parameter of $h_{\infty}(V)$ \\ \hline \end{tabular} \label{tab:params} \end{table} The model cell \eqref{eq:model} exhibits two distinct AP morphologies: regular APs and APs with EADs. For the remainder of the article, we use the Farey sequence notation, $1^s$, to denote a single large-amplitude AP with $s$ small-amplitude EADs during the repolarizing phase. Thus, a regular AP is denoted $1^0$ and an AP with 2 EADs is denoted $1^2$. More complicated rhythms are described using concatenations of these Farey sequences.
For instance, a rhythm that periodically exhibits three regular APs followed by a single AP with 2 EADs is denoted $(1^0)^3 (1^2)$. \subsection*{Action Potential Duration and Number of EADs Increase with PCL} \label{subsec:bifurcation} The model cell \eqref{eq:model} is entrained to the periodic stimulus; for the parameter set in Table \ref{tab:params}, the cell exhibits $1^s$ impulses with period set by the PCL. For small PCLs (i.e., high-frequency pulsing), the attractor is a $1^2$ rhythm (Fig. \ref{fig:restitution}(a)). For intermediate PCLs ($1240$~ms $\lesssim {\rm PCL} \lesssim$ $1435~$ms), the cell exhibits complex EAD activity, including $1^2 1^3$ alternans (Fig. \ref{fig:restitution}(b)) and $1^2 (1^3)^3$ rhythms (Fig. \ref{fig:restitution}(c)). For large PCLs (i.e., low-frequency pulsing), the cell is in a $1^3$ state (Fig. \ref{fig:restitution}(d)). \begin{figure}[ht] \centering \includegraphics[width=5in]{PCL_Bifurcation} \put(-368,201){(a)} \put(-182,201){(b)} \put(-368,138){(c)} \put(-182,138){(d)} \put(-300,72){(e)} \caption{Dynamics of the model cardiomyocyte \eqref{eq:model} under variations in the PCL. In (a)--(d), the stimulus pulse is `on' during the cyan segments. The attractor of the cell shows (a) $1^2$ APs with EADs for ${\rm PCL} = 1200~$ms, (b) $1^2 1^3$ alternans for ${\rm PCL}=1300~$ms, (c) $1^2 (1^3)^3$ APs with EADs for ${\rm PCL} = 1420$~ms, and (d) $1^3$ APs with EADs for ${\rm PCL} = 1500$~ms. (e) APD versus PCL bifurcation diagram. There is an intermediate band of PCLs ($1240$~ms $\lesssim {\rm PCL} \lesssim$ $1435~$ms) over which the attractor has complex EAD signature.} \label{fig:restitution} \end{figure} We summarize the behaviour of the model cell and its response to periodic stimulation at various frequencies by constructing a bifurcation diagram (Fig. \ref{fig:restitution}(e)). To do this, we used a dynamic restitution protocol \cite{Koller1998} in which the cell was paced at a fixed PCL until steady-state was reached, after which the action potential duration (APD) and PCL were recorded. We took the APD to be the amount of time the cell spends with $V > -70~$mV. With this choice of restitution protocol, the PCL is the sum of the APD and the diastolic interval, so our bifurcation diagram encodes the restitution curves (i.e., the plot of the APD as a function of the diastolic interval has the same qualitative features as shown in Fig. \ref{fig:restitution}(e)). The bifurcation diagram shows that the periodic stimulation elicits three types of behaviour. For high- and low-frequency stimulation, the model cell is in a purely $1^2$ or $1^3$ state, respectively. In the intermediate-frequency forcing range, the model cell has a complex signature of the form $(1^2)^p (1^3)^q$, where $p$ and $q$ are integers. We observe that the AP signature becomes more complicated near the transition to the $1^3$ state. This increasing complexity of the AP signature near a transition is robust; it occurs for a wide range of $g_K$ and $g_{Ca}$ in \eqref{eq:model} and has also been observed in other forced conductance-based cardiomyocyte models \cite{Sato2010,Tran2009}. Now that we have demonstrated the rich variety of dynamics present in the minimal model \eqref{eq:model}, we next investigate the dynamical mechanisms that underlie the observed rhythms. We use Geometric Singular Perturbation Theory \cite{Fenichel1979,Jones1995} as the basis of our analysis.
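For readers who wish to reproduce traces like those in Fig.~\ref{fig:restitution}, a minimal simulation sketch is given below (ours: it uses forward Euler with a small step and a naive APD extractor, rather than the stiff solver and event handling one would use in production; all numerical choices such as the step size and final time are assumptions):

\begin{verbatim}
import numpy as np

# Parameters from Table 1; stimulus amplitude/duration as in the text.
Cm, gCa, gK = 0.5, 0.025, 0.04
VCa, VK = 100.0, -80.0
tau_n, tau_h = 300.0, 80.0

def m_inf(V): return 1.0 / (1.0 + np.exp((-35.0 - V) / 6.24))
def n_inf(V): return 1.0 / (1.0 + np.exp((-40.0 - V) / 5.0))
def h_inf(V): return 1.0 / (1.0 + np.exp((V + 20.0) / 8.6))

def simulate(PCL, t_end=6000.0, dt=0.05, V0=-80.0):
    """Forward-Euler integration of the minimal model under periodic
    square-pulse stimulation (1 ms, 40 uA/cm^2, period PCL in ms)."""
    ts = np.arange(0.0, t_end, dt)
    V, n, h = V0, n_inf(V0), h_inf(V0)
    Vs = np.empty_like(ts)
    for idx, t in enumerate(ts):
        Isti = 40.0 if (t % PCL) < 1.0 else 0.0
        IK = gK * n * (V - VK)
        ICa = gCa * m_inf(V) * h * (V - VCa)
        V += dt * (Isti - (IK + ICa)) / Cm
        n += dt * (n_inf(V) - n) / tau_n
        h += dt * (h_inf(V) - h) / tau_h
        Vs[idx] = V
    return ts, Vs

def apds(ts, Vs, thresh=-70.0):
    """Action potential durations: contiguous spans with V > thresh."""
    above = (Vs > thresh).astype(int)
    starts = np.where(np.diff(above) == 1)[0]
    ends = np.where(np.diff(above) == -1)[0]
    if len(ends) and len(starts) and ends[0] < starts[0]:
        ends = ends[1:]  # discard an AP already in progress at t = 0
    return [ts[e] - ts[s] for s, e in zip(starts, ends)]
\end{verbatim}

Sweeping \texttt{PCL} and recording the steady-state APDs returned by \texttt{apds} mimics the dynamic restitution protocol described above.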
\section*{EADs Arise From Canard Dynamics} \label{sec:local} In this section, we show that the dynamical mechanisms responsible for the EADs are canards. To facilitate the analysis, we consider \eqref{eq:model} with no stimulus input. A similar demonstration was provided by \cite{kugler2018}, but we elaborate on how the EADs emerge as the cell capacitance is increased from 0 (i.e., moving the system away from the singular limit), and we demonstrate how the underlying rotational sectors determine the number and duration of EADs. We first show that the model has a slow/fast structure. We use this slow/fast splitting to identify the geometric cast of characters involved in producing APs and EADs. We then demonstrate that folded node canards generate EADs, and that these canards are robust under parameter variation. Finally, we demonstrate how drugs that inhibit K$^+$\ channels or a hypokalemic environment can facilitate EAD production. \subsection*{The dynamics evolve over multiple timescales} \label{subsec:timescales} A key observation is that the dynamics of the cell evolve over multiple timescales, with slow depolarized/hyperpolarized epochs interspersed with rapid transitions between them. We formally show this multi-timescale structure by introducing dimensionless variables, $v$ and $t_s$, via the rescalings \[ v = \frac{V}{k_V} \quad \text{ and } \quad t_s = \frac{t}{k_t}, \] where $k_V$ and $k_t$ are reference voltage and timescales, respectively. With these rescalings, the minimal model \eqref{eq:model} becomes \begin{equation} \label{eq:dimless} \begin{split} \eps \frac{dv}{dt_s} &= - \left( \overline{g}_K n \left( v-\overline{V}_{\rm K} \right) + \overline{g}_{Ca} m_{\infty}(v) h \left( v-\overline{V}_{\rm Ca} \right) \right) \equiv f(v,n,h), \\ \frac{dn}{dt_s} &= \frac{k_t}{\tau_n} \left( n_{\infty}(v) - n \right) \equiv g_1(v,n), \\ \frac{dh}{dt_s} &= \frac{k_t}{\tau_h} \left( h_{\infty}(v) - h \right) \equiv g_2(v,h), \end{split} \end{equation} where $\overline{g}_u = \frac{g_u}{g_{\rm ref}}$ for $u \in \{ \rm K,Ca \}$ denotes the dimensionless conductances with reference conductance $g_{\rm ref}$, $\overline{V}_u = \frac{V_u}{k_V}$ for $u \in \{ \rm K,Ca \}$ denotes the dimensionless reversal potentials, and $0 < \eps = \frac{C_m / g_{\rm ref}}{k_t} \ll 1$ is the ratio of the voltage timescale ($C_m/g_{\rm ref}$) to the reference timescale. The benefit of recasting the model in the dimensionless form \eqref{eq:dimless} is that it reveals the typical timescales in the model. The voltage variable is fast with a timescale of $\frac{C_m}{g_{\rm ref}} \approx 5$~ms for $C_m = 0.5~\mu$F/cm$^2$ and $g_{\rm ref}=0.1$~mS/cm$^2$. The inactivation variable, $h$, for the L-type Ca$^{2+}$\ channels is slow with timescale $\tau_h = 80$~ms, and the activation variable, $n$, for the K$^+$\ channels is superslow with timescale $\tau_n = 300$~ms. Thus, the system \eqref{eq:dimless} is a three-timescale problem. One effective approach to the analysis of multiple-timescale problems, as pioneered in the neuroscience context in \cite{Rinzel1987}, is Geometric Singular Perturbation Theory (GSPT). The idea of GSPT is to decompose a slow/fast system into lower dimensional slow and fast subsystems, analyze these simpler subsystems, and combine their information in order to understand the origin and properties of the dynamics of the original model. However, the GSPT approach is currently limited to two-timescale (i.e., slow/fast) problems.
In three-timescale systems such as \eqref{eq:dimless}, a choice is usually made: to either group $v$ and $n$ together as `fast', or to group $n$ and $h$ together as `slow'. Prior studies of the minimal model chose to group $v$ and $n$ together as fast, whilst using $h$ as the sole slow variable \cite{Sato2010}. In this 1-slow/2-fast approach, the EADs arise because the depolarized steady state of the $(v,n)$ subsystem loses stability via a Hopf bifurcation (with respect to $h$), leading to oscillations that are destroyed at a homoclinic bifurcation \cite{Sato2009,Tran2009,Xie2007}. Whilst this mechanism is consistent with the {\em in-vitro} and {\em in-silico} observations that the EADs appear irregularly under periodic stimulation, it does not provide insight into how many EADs should be observed or why the number of EADs changes with the PCL. Here, we take the alternative approach and treat $v$ as the only fast variable, whilst grouping $n$ and $h$ together as slow. We will show that this 2-slow/1-fast approach allows us to predict the maximal number of EADs that can be generated, and explain why the number of EADs changes with the PCL. \subsection*{Underlying geometric structure} \label{subsec:gspt} We now identify the geometric features that organize the EADs and APs. We begin by reformulating \eqref{eq:dimless} in terms of the fast time, $t_f = \frac{1}{\eps} t_s$, which gives \begin{equation} \label{eq:fast} \begin{split} \frac{dv}{dt_f} &= f(v,n,h), \\ \frac{dn}{dt_f} &= \eps g_1(v,n), \\ \frac{dh}{dt_f} &= \eps g_2(v,h). \end{split} \end{equation} System \eqref{eq:fast} is equivalent to \eqref{eq:dimless} in the sense that solutions of both systems trace out the same paths in the $(v,n,h)$ phase space, just at different speeds. We have seen that the dynamics of \eqref{eq:model} alternate between slow and fast epochs. The rapid transitions between depolarized and repolarized phases are approximated by solutions of the 1D {\em fast subsystem} \begin{equation} \label{eq:layer} \begin{split} \frac{dv}{dt_f} &= f(v,n,h), \\ \frac{dn}{dt_f} &= 0, \\ \frac{dh}{dt_f} &= 0, \end{split} \end{equation} which is the approximation of \eqref{eq:dimless} in which the slow variables move so slowly that they are fixed. (The fast subsystem is obtained by taking the singular limit $\eps \to 0$ in \eqref{eq:fast}.) Similarly, the slow depolarized/repolarized segments of the dynamics are approximated by solutions of the 2D {\em slow subsystem} \begin{equation} \label{eq:reduced} \begin{split} 0 &= f(v,n,h), \\ \frac{dn}{dt_s} &= g_1(v,n), \\ \frac{dh}{dt_s} &= g_2(v,h), \end{split} \end{equation} which is the approximation of \eqref{eq:dimless} in which the fast voltage variable moves so rapidly that it (i) has already reached steady state and (ii) instantly adjusts to any changes in the slow gating dynamics. (The slow subsystem is obtained by taking the singular limit $\eps \to 0$ in \eqref{eq:dimless}.) Recall that the idea of GSPT is to analyze the 1D fast and 2D slow subsystems, and combine their information in order to understand the origin and properties of the dynamics in the full 3D system. We begin with linear stability analysis of the 1D fast subsystem \eqref{eq:layer}.
The equilibria, $S_0$, of \eqref{eq:layer} form a cubic-shaped surface (in the $(v,n,h)$ space) called the critical manifold \begin{equation} \label{eq:criticalmanifold} S_0 = \left\{ (v,n,h) : f(v,n,h) = 0 \right\} = \left\{ (v,n,h) : h = h_S(v,n) = - \frac{\overline{g}_{K} n(v-\overline{V}_K)}{\overline{g}_{Ca} m_{\infty}(v) (v-\overline{V}_{Ca}) } \right\}. \end{equation} The outer sheets are stable and the middle sheet is unstable; these are separated by curves, $L^{\pm}$, of points corresponding to fold bifurcations of \eqref{eq:layer} \begin{equation} \label{eq:foldcurves} L^{\pm} = \left\{ (v,n,h) \in S_0 : \frac{\partial f}{\partial v} = 0 \right\}. \end{equation} For the cardiomyocyte model, the fold conditions \eqref{eq:foldcurves} reduce to a set of lines on $S_0$ at constant voltage values (Fig.~\ref{fig:slowflow}; red curves); $L^+$ denotes the fold curve at a depolarized voltage level, and $L^-$ denotes the fold curve at a hyperpolarized voltage that is the firing threshold. We note that the $V$-axis is also a fold curve (see `\nameref{subsec:twoparam}' section). \begin{figure}[ht] \centering \includegraphics[width=5in]{SlowFlow} \put(-360,120){(a)} \put(-162,120){(b)} \caption{Geometric structure of the cardiomyocyte model \eqref{eq:model} for the parameter set in Table \ref{tab:params}. (a) The outer attracting sheets (blue surfaces) of the critical manifold are separated from the middle repelling sheet (red surface) by the (red) fold curves, $L^{\pm}$. The slow flow (given by \eqref{eq:slowprojection}; black curves) is directed towards the folds. There is a folded node (green marker) on $L^+$ with singular strong canard, $\gamma_0$ (green trajectory). The full system equilibrium (cyan marker) is a saddle. (b) Projection into the $(V,n)$ plane. The funnel region (gray) for trajectories that enter FN is enclosed by $L^+$ and $\gamma_0$.} \label{fig:slowflow} \end{figure} From the linear stability analysis, we conclude that most solutions of \eqref{eq:layer} end up on either the depolarized attracting sheet, $S_0^{a,+}$, or the hyperpolarized attracting sheet, $S_0^{a,-}$. Once trajectories reach one of these sheets, the slow dynamics dominate the evolution and the appropriate approximating system is the slow subsystem \eqref{eq:reduced}. The algebraic equation in \eqref{eq:reduced} constrains the phase space to the critical manifold, whilst the differential equations describe the slow motions along $S_0$. Thus, the geometric singular perturbation analysis partitions the phase space into the fast dynamics away from the critical manifold together with the slow dynamics on the critical manifold. The critical manifold itself is the interface where the fast and slow subsystems interact. For the slow evolution on $S_0$, we have differential equations to describe the motions of $n$ and $h$, whilst the algebraic equation implicitly describes the associated motions in $v$ (slaved to $S_0$; Fig.~\ref{fig:slowflow}; black curves). To obtain an explicit description of the $v$-motions, we differentiate $f(v,n,h)=0$ with respect to the slow time, $t_s$, and use the graph representation of the critical manifold given in \eqref{eq:criticalmanifold}. This gives \begin{equation} \label{eq:slowprojection} \begin{split} \frac{dv}{dt_s} &=-\left( \frac{\partial f}{\partial v}\right)^{-1} \left(\frac{\partial f}{\partial n} g_1 + \frac{\partial f}{\partial h} g_2 \right), \\ \frac{dn}{dt_s} &= g_1, \end{split} \end{equation} where $h$ has been replaced by $h_S(v,n)$.
We stress that \eqref{eq:slowprojection} is equivalent to \eqref{eq:reduced}; we have simply incorporated the restriction to $S_0$ explicitly by setting $h = h_S(v,n)$. In this formulation, it becomes clear that the slow flow is singular along the fold curves, $L^\pm$, where $\frac{\partial f}{\partial v} = 0$. To deal with this finite-time blow-up of solutions, we perform the time rescaling $dt_s = -\frac{\partial f}{\partial v} dt_d$, which transforms the slow system \eqref{eq:slowprojection} to the {\em desingularized system}, \begin{equation} \label{eq:desingularized} \begin{split} \frac{dv}{dt_d} &=\frac{\partial f}{\partial n} g_1 + \frac{\partial f}{\partial h} g_2, \\ \frac{dn}{dt_d} &= -\left( \frac{\partial f}{\partial v}\right) g_1, \end{split} \end{equation} where again, $h = h_S(v,n)$. In this setting, the finite-time singularities of \eqref{eq:slowprojection} along the fold curves have been transformed into nullclines of \eqref{eq:desingularized}. Since the transformation that led to the desingularized system is phase space-dependent, some care must be taken when comparing trajectories of the desingularized system \eqref{eq:desingularized} with those of the true slow subsystem \eqref{eq:slowprojection}. On the attracting sheets, $S_0^{a,\pm}$, the flow of \eqref{eq:desingularized} is topologically equivalent to the flow of \eqref{eq:slowprojection} since $\frac{\partial f}{\partial v}<0$ (and hence $t_s$ and $t_d$ have the same sign). On the repelling sheet, $S_0^r$, the flow of \eqref{eq:desingularized} is in the opposite direction to the flow of \eqref{eq:slowprojection} since $\frac{\partial f}{\partial v}>0$ (and hence $t_s$ and $t_d$ have opposite signs). With this relation between the slow and desingularized systems in mind, we now analyze the desingularized system in order to learn about the dynamics of the slow subsystem. The desingularized system possesses two types of equilibria or singularities. Ordinary singularities are isolated points such that $\{ g_1 = g_2 = 0 \}$, and correspond to true equilibria of the desingularized system \eqref{eq:desingularized}, of the slow subsystem \eqref{eq:slowprojection}, and of the original model \eqref{eq:model}. For the parameter set in Table \ref{tab:params}, there is an ordinary singularity on $S_0^r$ (Fig. \ref{fig:slowflow}; cyan marker), corresponding to a saddle equilibrium. Folded singularities, $M$, are isolated points on $L^{\pm}$ where the right-hand-side of the $v$-equation in \eqref{eq:desingularized} is zero, i.e., \begin{equation} \label{eq:foldedsing} M = \left\{ (v,n,h) \in L^{\pm} : \frac{\partial f}{\partial n} g_1 + \frac{\partial f}{\partial h} g_2 =0 \right\}. \end{equation} Folded singularities correspond to equilibria of \eqref{eq:desingularized}, however, they are not equilibria of the slow subsystem \eqref{eq:slowprojection} or the original model \eqref{eq:model}. Instead, they are points where both the numerator and denominator of the right-hand-side of the $v$-equation in \eqref{eq:slowprojection} vanish at the same time, so there may be a cancellation of a simple zero. This allows the possibility of solutions of the slow flow to cross the fold curves (via the folded singularity) with finite speed and move from an attracting sheet to the repelling sheet (or vice versa). Such solutions are called {\em singular canards} \cite{Szmolyan2001}, and play important roles in applications.
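As an illustration of how folded singularities can be located in practice, the following sketch (ours) sets up $f$, $g_1$, and $g_2$ symbolically for the parameters of Table~\ref{tab:params} and solves the defining conditions $f = \partial f/\partial v = \partial f/\partial n\, g_1 + \partial f/\partial h\, g_2 = 0$ numerically; the initial guess near the depolarized fold $L^+$ is ours and may require tuning for the solver to converge to the folded node:

\begin{verbatim}
import sympy as sp

v, n, h = sp.symbols("v n h", real=True)
# Steady-state functions and parameters from Table 1 (dimensional form;
# rescaling by Cm does not move the zero sets used below).
m_inf = 1 / (1 + sp.exp((-35 - v) / 6.24))
n_inf = 1 / (1 + sp.exp((-40 - v) / 5))
h_inf = 1 / (1 + sp.exp((v + 20) / 8.6))

f  = -(0.04 * n * (v + 80) + 0.025 * m_inf * h * (v - 100))
g1 = (n_inf - n) / 300
g2 = (h_inf - h) / 80

fold_cond   = sp.diff(f, v)                             # fold curves L+/L-
folded_cond = sp.diff(f, n) * g1 + sp.diff(f, h) * g2   # folded singularity

# Initial guess near the depolarized fold L+ (an assumption; adjust if
# nsolve fails to converge or lands on a different singularity).
sol = sp.nsolve([f, fold_cond, folded_cond], [v, n, h], [-25, 0.1, 0.1])
print(sol)
\end{verbatim}

The type of the folded singularity then follows from the eigenvalues of the Jacobian of the desingularized system \eqref{eq:desingularized} evaluated at this point.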
We refer to \cite{Desroches2012,Kuehn2015} for extensive overviews of applications of folded singularities and canards in chemical, neural, and engineering contexts. Folded singularities are classified as equilibria of the desingularized system. A folded singularity with real eigenvalues of the same sign is a folded node; with real eigenvalues of opposite signs it is a folded saddle; and with complex conjugate eigenvalues it is a folded focus. Folded nodes and folded saddles possess singular canards, whereas folded foci do not. The cardiomyocyte model possesses a folded node on $L^+$ for the standard parameter set (Fig. \ref{fig:slowflow}; green marker). \subsection*{EADs originate from a folded node} \label{subsec:mmos} We now demonstrate the origin of EADs in terms of the geometric structures identified in the prior section `\nameref{subsec:gspt}'. To motivate this, we first take a $1^3$ attractor of \eqref{eq:model} (without periodic stimulation) and compare it to the critical manifold in the $(V,n,h)$ phase space (Fig. \ref{fig:epsunfolding}(a); magenta curve). The three EADs can be seen as small loops in the magenta trajectory about the upper fold curve, $L^+$. We observe that (i) the EADs are localized to the neighbourhood of the folded node; (ii) by decreasing $\eps$, or $C_m$, the EADs decrease in amplitude (compare curves of different colors in Fig. \ref{fig:epsunfolding}); (iii) by decreasing $\eps$, the location in phase space where the trajectory transitions from a depolarized state to a hyperpolarized state converges to the folded node. These observations lead us to conjecture that the EADs observed for $0< \eps \ll 1$ arise from the folded node itself. \begin{figure}[ht] \centering \includegraphics[width=5in]{EpsUnfolding} \put(-364,112){(a)} \put(-240,112){(b)} \put(-120,112){(c)} \put(-240,50){(d)} \put(-120,50){(e)} \caption{Origin of the EADs near the folded node (green marker) for the standard parameter set. (a) Singular (black) and nonsingular (magenta, cyan, and yellow) $1^3$ attractor compared to the critical manifold. All orbits enter the depolarized sheet, $S_0^{a,+}$, inside the funnel enclosed by the singular strong canard $\gamma_0$ (green curve) and $L^+$ (red curve). The corresponding voltage time series are shown for (b) $C_m = 0.5~\mu$F/cm$^2$ (magenta), (c) $C_m = 0.25~\mu$F/cm$^2$ (cyan), (d) $C_m = 0.1~\mu$F/cm$^2$ (yellow), and (e) $C_m = 0~\mu$F/cm$^2$ (black).} \label{fig:epsunfolding} \end{figure} How do the small oscillations emerge from the folded node? To answer this, we examine how the sheets, $S_0^{a,+}$ and $S_0^r$, of the critical manifold persist for small and nonzero $\eps$. As $\eps$ is increased away from zero, the attracting and repelling sheets, $S_0^{a,+}$ and $S_0^r$, perturb to attracting and repelling slow manifolds, $S_{\eps}^{a,+}$ and $S_{\eps}^r$, respectively \cite{Fenichel1979,Jones1995}. These slow manifolds are the surfaces to which the slow segments of trajectories of \eqref{eq:model} are slaved. Both $S_{\eps}^{a,+}$ and $S_{\eps}^r$ are small and regular perturbations of $S_0^{a,+}$ and $S_0^r$, except in the neighbourhood of the folded node, where they instead twist around a common axis of rotation \cite{Szmolyan2001,Wexy2005}. The axis of rotation corresponds to the weak eigendirection of the folded node. The twisted slow manifolds are shown in Fig. \ref{fig:slowmans} for various perturbations, corresponding to the $C_m$ values used in Fig. \ref{fig:epsunfolding}. 
(For the purposes of visualization, the slow manifolds have only been computed up to a plane, $\Sigma$, passing through the folded node. The method of computation is detailed in \cite{Desroches2008}.) The twisting of the slow manifolds (and the slow flow on them) is confined to an $\mathcal{O}\left(\sqrt{\eps} \right)$ neighbourhood of the folded node \cite{Brons2006}. Thus, the EADs arise from locally twisted slow manifolds around the folded node. This can be seen in the insets of Fig. \ref{fig:slowmans}, where the folded node is the intersection of the dashed blue curve (intersection of $S_0^{a,+}$ with $\Sigma$) and the dashed red curve (intersection of $S_0^{a,-}$ with $\Sigma$). \begin{figure}[ht] \centering \includegraphics[width=5in]{SlowMans} \put(-366,288){(a)} \put(-182,288){(b)} \put(-366,135){(c)} \put(-185,135){(d)} \caption{Attracting (blue) and repelling (red) slow manifolds, $S_{\eps}^{a,+}$ and $S_{\eps}^r$, for (a) $C_m=0.01~\mu$F/cm$^2$, (b) $C_m = 0.1~\mu$F/cm$^2$, (c) $C_m = 0.25~\mu$F/cm$^2$, and (d) $C_m = 0.5~\mu$F/cm$^2$. The twisting of the slow manifolds produces the EADs. Insets: intersections of $S_{\eps}^{a,+}$ (solid blue) and $S_{\eps}^{r}$ (solid red) with $\Sigma$. Also shown for comparison are the intersections of $S_0^{a,+}$ (dashed blue) and $S_0^r$ (dashed red) with $\Sigma$. The folded node is at the intersection of the dashed blue and dashed red curves.} \label{fig:slowmans} \end{figure} \subsection*{Canards organize the EADs} \label{subsec:sectors} The local twisting of the slow manifolds results in a finite number of intersections between $S_{\eps}^{a,+}$ and $S_{\eps}^r$, called {\em maximal canards}. For the standard parameter set, we find that there are 5 maximal canards. The outermost, $\gamma_0$, is called the {\em maximal strong canard} and is the phase space boundary between those trajectories that exhibit EADs near the folded node and those that do not (Fig. \ref{fig:sectors}). That is, any solution of the cardiomyocyte model \eqref{eq:model} with initial condition to the left of $\gamma_0$ in Fig. \ref{fig:sectors} is a regular $1^0$ AP (Fig. \ref{fig:sectors}(a) and (d); cyan curves). Any solution with initial condition between $\gamma_0$ and the secondary maximal canard $\gamma_1$ executes 1 EAD in the neighbourhood of the folded node (Fig. \ref{fig:sectors}(b) and (d); beige curves). Any solution with initial condition enclosed by the secondary canards $\gamma_1$ and $\gamma_2$ exhibits 2 EADs around the folded node (Fig. \ref{fig:sectors}(c) and (d); brown curves). In general, an orbit in the sector between the maximal secondary canards $\gamma_{k-1}$ and $\gamma_k$ will execute $k$ EADs. The innermost maximal canard, $\gamma_w$, is called the {\em maximal weak canard} and is the axis of rotation for both the slow manifolds and the other maximal canards. Thus, the maximal canards organize the EADs in phase space; the path taken by the trajectory relative to the maximal canards determines the number of EADs produced. \begin{figure}[ht] \centering \includegraphics[width=5in]{Sectors} \put(-366,256){(a)} \put(-182,258){(b)} \put(-366,122){(c)} \put(-182,122){(d)} \caption{Organization of the EADs by maximal canards for the standard parameter set. Only the first three maximal canards, $\gamma_0$ (green), $\gamma_1$ (magenta) and $\gamma_2$ (yellow), are shown. (a) A solution ($\Gamma$; cyan) outside the rotational sectors has no EADs. (b) A solution ($\Gamma$; beige) in the sector between $\gamma_0$ and $\gamma_1$ exhibits 1 EAD. 
(c) A solution ($\Gamma$; orange) in the sector between $\gamma_1$ and $\gamma_2$ exhibits 2 EADs. (d) Corresponding time series, showing a regular AP (cyan), an AP with 1 EAD (beige), and an AP with 2 EADs (orange).} \label{fig:sectors} \end{figure} \subsection*{Folded Node and EAD Dynamics Are Robust} \label{subsec:twoparam} Given that the EADs arise from canard dynamics due to twisted slow manifolds around a folded node, is it possible to predict the number of maximal canards and associated EADs? The answer is `yes', and it is encoded in the strong and weak eigenvalues, $\lambda_s < \lambda_w <0$, of the folded node (when considered as an equilibrium of the desingularized system). Let $\mu = \frac{\lambda_w}{\lambda_s}$ denote the eigenvalue ratio. Then, provided $\eps$ is sufficiently small and $\mu \gg \sqrt{\eps}$, the maximal number, $s_{\max}$, of EADs around the folded node is \begin{equation} \label{eq:smax} s_{\max} = \lfloor \frac{\mu+1}{2\mu} \rfloor, \end{equation} where $\lfloor \frac{\mu+1}{2\mu} \rfloor$ denotes the greatest integer less than or equal to $\frac{\mu+1}{2\mu}$ \cite{Brons2006,Szmolyan2001}. The corresponding number of maximal canards is $s_{\max}+1$. For the folded node discussed in Figs. \ref{fig:slowflow} -- \ref{fig:sectors}, we find $\mu \approx 0.13$ so that the maximal number of EADs that can be observed is $s_{\max} = 4$, and there are 5 maximal canards (consistent with Fig. \ref{fig:slowmans}). Not only does the formula \eqref{eq:smax} predict the number of EADs, it also predicts how the number of EADs changes with parameters. Bifurcations of maximal canards occur whenever $\mu^{-1}$ passes through an odd integer value \cite{Wexy2005}. That is, if the system parameters are varied so that $\mu^{-1}$ increases through $3$, then $s_{\max}$ increases from $1$ to $2$. If the system parameters are varied so that $\mu^{-1}$ increases through $5$, then $s_{\max}$ increases from $2$ to $3$, and so on. \begin{figure}[ht] \centering \includegraphics[width=3in]{TwoParam} \caption{Genericity of canard-induced EADs. The $(g_K,V_K)$ parameter plane has been partitioned according to the properties of the folded singularity. Folded nodes and EADs exist in the region enclosed by the blue ($\mu=0$) curves and the red ($\mu=1$) curve. Within this region, the maximal number of EADs that can be observed increases as the parameters are moved from the red $\mu = 1$ boundary to the blue $\mu=0$ boundaries. The two thick arrows indicate possible effects of drugs that reduce the K$^+$\ current conductance (leftward arrow) or increase the magnitude of the K$^+$\ Nernst potential (downward arrow). } \label{fig:twoparam} \end{figure} There are two special cases, $\mu = 0$ and $\mu=1$, where the folded node ceases to exist and hence the canard-induced EADs are eliminated. The resonance $\mu = 1$ corresponds to the boundary where the folded node becomes a folded focus. Folded foci do not possess any canards. Hence, the $\mu=1$ resonance serves as the transition between regular $1^0$ APs and APs with EADs. This is illustrated in a two-parameter diagram, where the conductance of the K$^+$\ current ($g_K$) and the K$^+$\ Nernst potential ($V_K$) are varied and the asymptotic state of the system \eqref{eq:model} is shown (Fig. \ref{fig:twoparam}). The red curve is the set of parameter values for which $\mu=1$. For parameter values within the region enclosed by the red curve the folded singularity is a folded focus, so only APs are produced (without EADs). The dark green curves in Fig. 
\ref{fig:twoparam} are parameter combinations such that $\mu=1/3$, so in the region delimited by these curves and the red $\mu=1$ curve there is a single maximal canard ($s_{\rm max}=1$) and APs with a single EAD are possible. On the olive curves $\mu=1/5$ and in the region delimited by these curves and the dark green curves APs with two EADs are possible. This process can be continued to higher odd integer values of $\mu^{-1}$; in the region between the olive curves and blue curves APs with three or more EADs are possible. On the blue curves $\mu = 0$. The $\mu = 0$ resonance is known as a folded saddle-node (FSN) bifurcation and can occur in several distinct ways. The FSN bifurcation of type II (FSN II) is a bifurcation of the desingularized system in which a folded singularity and an ordinary singularity coalesce and swap stability in a hybrid transcritical bifurcation \cite{Guckenheimer2008,Krupa2010}. That is, for $\mu>0$, the folded singularity on $L^+$ is a folded node and the ordinary singularity on $S_0^r$ is a saddle equilibrium. For $\mu<0$, the folded singularity on $L^+$ is a folded saddle and the ordinary singularity has moved to $S_0^{a,+}$ where it is a stable node. Hence, the FSN II bifurcation corresponds to the transition between EADs and stable depolarized steady states (Fig. \ref{fig:twoparam}; left blue curve). The other way in which the FSN bifurcation can occur in the desingularized system is via a true transcritical bifurcation of folded singularities. That is, for $\mu>0$, there is a folded node on $L^+$ and there is a folded saddle on the $V$-axis. At $\mu = 0$, the folded node and folded saddle coalesce, and for $\mu <0$, the folded singularity on $L^+$ is a folded saddle whereas the folded singularity on the $V$-axis is a folded node. The slow flow around the folded node on the $V$-axis is directed away from the $V$-axis, and so EADs will not be observed. Thus, for $\mu<0$, orbits of the slow flow encounter regular fold points on $L^+$, and the corresponding rhythm exhibits regular APs (without EADs). Hence, this FSN bifurcation corresponds to the transition between EADs and regular APs (Fig. \ref{fig:twoparam}; right blue curve). We note that, to the best of our knowledge, this type of FSN bifurcation (in which a pair of folded singularities undergo a true transcritical bifurcation) has not yet been reported or studied. The two-parameter diagram (Fig. \ref{fig:twoparam}) illustrates that, in this model, there is a large set of $g_K$, $V_K$ parameters in which EADs can be produced. Thus, the behavior is generic, not limited to small regions of parameter space. It also illustrates the precision that GSPT provides in the determination of when EADs are possible, and the maximum number of EADs that are possible. Finally, the diagram shows that decreasing the K$^+$\ conductance, as is done with drugs like azimilide that act as K$^+$\ channel antagonists, can induce EADs (thick leftward arrow). Also, increasing the magnitude of the K$^+$\ Nernst potential, as in hypokalemia, can induce EADs (thick downward arrow). These observations are consistent with experimental studies \cite{Madhvani2011,Sato2010,Yan2001}. \section*{Periodic Stimulation \& Mixed-Mode Oscillations} \label{sec:global} We have established that EADs originate from canard dynamics around a folded node, and that the canards organize the EADs in both phase and parameter space. In this section, we restore the periodic stimulation and study the stimulus-driven EAD attractors. 
Our aim is to explain the bifurcation diagram in Fig. \ref{fig:restitution}. We will show that the variety of AP morphologies exhibited under various PCLs can be explained by the canards. \subsection*{High- and low-frequency pacing: canard-induced mixed-mode oscillations} \label{subsec:mmos} Recall that when there is periodic stimulation, $I_{\rm sti}$, the system entrains to the driving oscillator. For low PCLs (i.e., high-frequency pacing), the attractor is a $1^2$ AP with EADs (see Figs. \ref{fig:restitution}(a) and (e)). Using the results of our geometric analysis from the `\nameref{sec:local}' section above, we now deconstruct the $1^2$ rhythm (Fig. \ref{fig:highfrequency}) and find that it consists of \begin{enumerate}[(i)] \setlength{\itemsep}{0pt} \item canard-induced EADs around the folded node due to twisted slow manifolds, \item a fast transition from the depolarized folded node region to the hyperpolarized slow manifold, and \item a stimulus-driven transition from the hyperpolarized slow manifold to the depolarized slow manifold. \end{enumerate} A representative example is shown in Fig. \ref{fig:highfrequency}, where we compare the $1^2$ attractor to the slow and fast subsystems (panel (a)) and to the twisted slow manifolds (panel (b)). Note in Fig.~\ref{fig:highfrequency}(a) that the weak canard is approximately given by the stable manifold of the (cyan) saddle, and that the EADs are centered on this weak canard (i.e., the weak canard is the axis of rotation). \begin{figure}[ht!] \centering \includegraphics[width=5in]{HighFrequency} \put(-362,135){(a)} \put(-192,135){(b)} \caption{Geometric mechanism for the stimulus-driven $1^2$ attractor $\Gamma$ (thick black and cyan). Parameters are as in Fig.~\ref{fig:restitution}(a). (a) Comparison of $\Gamma$ with the slow subsystem flow (thin black) and fast subsystem geometric structures. The stimulus (cyan segment) induces a transition from the hyperpolarized sheet to the funnel of the folded node on the depolarized sheet. (b) Comparison of $\Gamma$ with the slow manifolds; $\Gamma$ lies in the sector bounded by the canards $\gamma_1$ and $\gamma_2$, and thus has 2 EADs.} \label{fig:highfrequency} \end{figure} The periodic stimulus provides the mechanism for returning orbits to the neighbourhood of the folded node. More specifically, the stimulus switches `on' during the slow hyperpolarized segment of the trajectory. This drives the orbit away from the hyperpolarized sheet before it can reach the lower firing threshold $L^-$. Moreover, the amplitude of the stimulus pulse is large enough that it pushes the orbit past the repelling sheet of the critical manifold and into the basin of attraction of the depolarized sheet, $S_{\eps}^{a,+}$. The timing of the stimulus is also such that the orbit is injected into the rotational sector enclosed by the maximal canards $\gamma_1$ and $\gamma_2$, and hence exhibits 2 EADs. This combination of a local canard mechanism (for the EADs) and a global (stimulus-induced) return mechanism is known as a canard-induced mixed-mode oscillation (MMO) \cite{Brons2006}. Similarly, we find that for large PCLs (i.e., low-frequency pacing), the stimulus-driven $1^3$ attractor is a canard-induced MMO with period set by the PCL (see Fig. \ref{fig:restitution}(d) and (e)). 
The $1^3$ MMO attractor consists of (local) canard-induced EADs around the folded node combined with a global stimulus-driven return that projects orbits from the hyperpolarized sheet into the rotational sector enclosed by the canards $\gamma_2$ and $\gamma_3$ (Fig. \ref{fig:lowfrequency}). \begin{figure}[ht!] \centering \includegraphics[width=5in]{LowFrequency} \put(-362,135){(a)} \put(-192,135){(b)} \caption{Local and global mechanisms for a stimulus-driven $1^3$ attractor $\Gamma$ (thick black and cyan). Parameters are as in Fig.~\ref{fig:restitution}(d). (a) Comparison of $\Gamma$ with the slow subsystem flow (thin black) and fast subsystem geometric structures in the $(V,n)$ projection. The stimulus (cyan segment) projects the orbit into the funnel of the folded node on the depolarized sheet. (b) The orbit is injected into the rotational sector delimited by the canards $\gamma_2$ and $\gamma_3$, and hence exhibits 3 EADs.} \label{fig:lowfrequency} \end{figure} \subsection*{Intermediate-frequency pacing: EAD alternans due to reinjection into adjacent rotational sectors} \label{subsec:alternans} In Fig.~\ref{fig:restitution}(e), we found that there is a band of intermediate pacing frequencies for which the stimulus-driven attractor is a $1^2 1^3$ alternator (see Fig.~\ref{fig:restitution}(b) for a representative time series). We now compare the $1^2 1^3$ attractor with the underlying geometric structures of the model cell (Fig.~\ref{fig:intfreqalternans}). As in the low- and high-frequency forcing cases, we find that the $1^2$ and $1^3$ segments are each canard-induced MMOs. The difference here is that the timing of the stimulus is such that the orbit visits different (contiguous) rotational sectors on each stimulus pulse. \begin{figure}[ht!] \centering \includegraphics[width=5in]{IntFreq_Alternans} \put(-362,135){(a)} \put(-192,135){(b)} \caption{Geometric mechanism for stimulus-driven $1^2 1^3$ alternans $\Gamma$. Parameters are as in Fig.~\ref{fig:restitution}(c). (a) Comparison of $\Gamma$ with the slow flow (thin black). The stimulus (cyan) projects the orbit into the funnel at different locations, causing $\Gamma$ to visit different rotational sectors. (b) The orbit alternately enters the rotational sector enclosed by $\gamma_1$ and $\gamma_2$ (2 EADs), and the rotational sector enclosed by $\gamma_2$ and $\gamma_3$ (3 EADs).} \label{fig:intfreqalternans} \end{figure} The $1^2 1^3$ alternans attractor, $\Gamma$, decomposes as follows. Starting on the hyperpolarized sheet, the first stimulus pulse (Fig. \ref{fig:intfreqalternans}(a); leftmost cyan segment) projects the orbit $\Gamma$ into the rotational sector enclosed by $\gamma_2$ and $\gamma_3$ (Fig. \ref{fig:intfreqalternans}(b); inset -- black marker above $\gamma_2$). Thus, $\Gamma$ exhibits 3 EADs. After these 3 EADs are completed, the orbit transitions to the hyperpolarized sheet where it slowly drifts towards the firing threshold $L^-$. Before it can reach $L^-$, the next stimulus pulse (Fig. \ref{fig:intfreqalternans}(a); rightmost cyan segment) projects the orbit into the rotational sector enclosed by $\gamma_1$ and $\gamma_2$ (Fig. \ref{fig:intfreqalternans}(b); inset -- black marker below $\gamma_2$), and thus $\Gamma$ exhibits only 2 EADs. The orbit then returns to the hyperpolarized sheet where it again slowly drifts towards $L^-$. Since $\Gamma$ only underwent 2 EADs, the APD is shorter (compared to the previous one) and the corresponding diastolic interval (DI) is longer. 
As such, the orbit is able to drift further along the hyperpolarized sheet before the next stimulus occurs. Once the stimulus `switches on', the process repeats periodically, thus producing the $1^2 1^3$ attractor. \subsection*{Intermediate-frequency pacing: dynamical chaos and intermittency due to sensitivity near maximal canards} \label{subsec:chaos} In Fig.~\ref{fig:restitution}(e), we found a band of intermediate pacing frequencies for which the model cell exhibited seemingly chaotic and intermittent behaviour. Here, we show that the complex EAD signatures arise from the crossing of a maximal canard. We do this for a representative $1^2 (1^3)^3$ attractor (Fig.~\ref{fig:intfreqchaos}), which we denote by $\Gamma$. As before, the individual APs with EADs are canard-induced MMOs. The variability in the number and magnitude of the EADs is due to the stimulus, which perturbs the orbit away from the hyperpolarized sheet at different locations on each pulse. \begin{figure}[ht!] \centering \includegraphics[width=5in]{IntFreq_Chaos} \put(-340,198){(a)} \put(-188,198){(b)} \put(-370,66){(c)} \put(-276,66){(d)} \put(-176,66){(e)} \caption{Geometric explanation of `dynamical chaos' and intermittency. The cause for the variability in the number and magnitude of the EADs is that $\Gamma$ peels off $\gamma_2$ at different times. Parameters are as in Fig.~\ref{fig:restitution}(d). (a) Projection of the $1^2 (1^3)^3$ attractor, $\Gamma$, into the $(V,n)$ plane. (b) The orbit stays close to the maximal canard $\gamma_2$ on each return to $S_{\eps}^{a,+}$. (c) Zoom of the $(V,n)$ plane where the stimuli are applied. (d) Zoom of the EADs as they peel off the maximal canard $\gamma_2$. (e) Time series of $\Gamma_2$ and $\Gamma_{3j}$ for $j=1,2,3$. The APD (DI) is the time spent above (below) the threshold $V = -70~$mV (dashed purple). } \label{fig:intfreqchaos} \end{figure} Let $\Gamma_2$ denote the $1^2$ segment of $\Gamma$, and $\Gamma_{3,j}, j=1,2,3$ denote the $1^3$ segments of $\Gamma$, i.e., $\Gamma = \Gamma_2 \cup \Gamma_{31} \cup \Gamma_{32} \cup \Gamma_{33}$. Starting with $\Gamma_2$, the stimulus (Fig.~\ref{fig:intfreqchaos}(c); cyan) induces a fast transition to the depolarized sheet close to $\gamma_2$ and in the sector between $\gamma_1$ and $\gamma_2$ (Fig.~\ref{fig:intfreqchaos}(b)), and hence there are 2 EADs. The intrinsic dynamics of the model cell return $\Gamma_2$ to $S_{\eps}^{a,-}$ where it slowly drifts to smaller $n$. The next stimulus initiates $\Gamma_{31}$ and causes the orbit to enter the depolarized phase in the rotational sector bound by $\gamma_2$ and $\gamma_3$. The additional EAD produced in $\Gamma_{31}$ extends the APD compared to that of $\Gamma_2$ (Fig.~\ref{fig:intfreqchaos}(e)). As such, the DI of $\Gamma_{31}$ is shorter than that of $\Gamma_2$. This means $\Gamma_{32}$ is initiated on $S_{\eps}^{a,-}$ at a larger $n$ value (Fig.~\ref{fig:intfreqchaos}(c)), and enters $S_{\eps}^{a,+}$ closer to $\gamma_2$. Since $\Gamma_{32}$ follows $\gamma_2$ more closely than $\Gamma_{31}$, (i) the resulting EADs are larger amplitude (Fig.~\ref{fig:intfreqchaos}(d)), (ii) the APD is longer, and (iii) the DI is shorter. Consequently, $\Gamma_{33}$ is initiated on $S_{\eps}^{a,-}$ at a larger $n$ value, enters $S_{\eps}^{a,+}$ closer to $\gamma_2$, and hence exhibits the (i) largest EADs, (ii) longest APD, and (iii) shortest DI. The other complex MMO signatures reported in Fig.~\ref{fig:restitution}(e) emerge by the same mechanism. 
That is, the $(1^2)^p (1^3)^q$ attractors for $p,q \in \mathbb{N}$ arise because the PCL is such that the orbit enters the depolarized sheet close to the maximal canard $\gamma_2$. Since the behaviour of trajectories near a maximal canard is exponentially sensitive \cite{Wexy2005}, small changes in the PCL manifest as significant changes in the number, amplitude, and duration of EADs on each pulse. Likewise, at such PCL values, small changes in initial conditions have large effects on the $V$ time course, the hallmark of chaos. \section*{Discussion} \label{sec:discussion} It has been demonstrated previously that early afterdepolarizations produced by a simple cardiomyocyte model \cite{Sato2010}, a reduction of the Luo-Rudy 1 model \cite{Luo1991}, are the consequence of carnard dynamics in the vicinity of a folded node singularity \cite{kugler2018}, a result further illuminated through the geometric analysis shown in Figs. \ref{fig:slowflow}--\ref{fig:sectors}. We showed that these dynamics are robust in the $(g_K,V_K)$ parameter plane (Fig. \ref{fig:twoparam}). These parameters were chosen since they can be modulated by drugs or environment; $g_K$ is reduced by K$^+$\ channel antagonists such as azimilide, while $V_K$ is increased in magnitude in hypokalemia. Figure \ref{fig:twoparam} predicts that both manipulations can induce EADs, and indeed both manipulations have been shown to do this in experiments \cite{Madhvani2011,Sato2010,Yan2001}. The second set of results from our study involves the paced system, which receives periodic depolarizing stimuli (Fig. \ref{fig:restitution}). Each stimulus pushes an orbit into the basin of attraction of the depolarized attracting sheet, triggering an action potential that can be a mixed-mode oscillation if EADs are produced. For high- and low-frequency pacing, orbits land in the rotational sectors delimited by the maximal canards and stay far from any of the maximal canards, so that the voltage time course exhibits regular, periodic behavior. At high stimulus frequencies, the orbits land in the rotational sector with 2 EADs, so each AP is a mixed-mode oscillation with 2 EADs. At low stimulus frequencies, the orbits land in the rotational sector with 3 EADs, so each AP is a mixed-mode oscillation with 3 EADs. The number of EADs depends upon the rotational sector in which the orbit lands in response to the stimulus (Figs. \ref{fig:highfrequency}--\ref{fig:intfreqchaos}). The EAD alternans observed for intermediate-frequency pacing emerge because the stimulus current alternately projects the orbit into different rotational sectors on each pulse. In some cases, the outcome can be quite complex, with a sequence of mixed-mode oscillations of different durations and numbers of EADs. This behavior is what was referred to as ``dynamical chaos" in earlier publications \cite{Sato2010,Tran2009}. The advantage of the minimal model for the analysis presented here is its low dimensionality. More realistic cardiomyocyte models can have 40 or more dimensions, reflecting many types of ionic currents and in many cases equations for Ca$^{2+}$\ handling in the cytosol, the sarcoplasmic reticulum (SR), and the subspace between the SR and the cell membrane \cite{Luo1994b,Luo1994a,Kurata2005,Ohara2011,Williams_sb2010}. 
One major advantage of these larger models is that they have more biological detail that allows for simulation of, for example, the application of pharmacological agents that act as antagonists for specific types of ion channels, such as inward-rectifying K$^+$\ channels, while the minimal model incorporates only a single type of K$^+$\ current and a single type of Ca$^{2+}$\ current. With the correct parameterizations, these more complete models are capable of reproducing the various forms of EADs that have been characterized, each with different, but partially overlapping, biophysical mechanisms \cite{Antzelevitch2011}, while the minimal model was developed to produce EADs of a particular type. EADs are divided broadly into types according to the timing of the events: ``phase-2 EADs" occur during the plateau of an elongated AP, and ``phase-3 EADs" occur during the falling phase of the AP. There are also ``depolarizing afterdepolarizations" that occur after the completion of the action potential. The analysis that we performed herein on a minimal model suggests that the dynamics underlying some phase-2 EADs are canard induced, and we speculate that this will be the case in more complete biophysical models. While the full geometric singular perturbation analysis done with the minimal model is not possible with the high-dimensional models, it is possible to perform a less complete analysis, such as determining the existence of folded node singularities. Indeed, such an analysis is important for establishing that canard dynamics are the basis of phase-2 EADs in more complete models, and is currently being undertaken by our group. Why does it matter whether EADs are due to canard dynamics near a folded node singularity? Although it sounds very abstract, the ramifications of knowing this can be very important and useful. As we have demonstrated, if the EADs are associated with a folded node singularity, then one can simply analyze the eigenvalues of the reduced desingularized system at the folded node to determine how many EADs are possible. Also, through analysis of the eigenvalues, one can determine parameter changes that will enhance EAD production or eliminate the EADs. In particular, one can determine regions of parameter space where canard-induced EADs are not possible, without the need to perform any numerical integrations (as in Fig. \ref{fig:twoparam} and \cite{kugler2018}). So once EADs are linked to folded node singularities, one gains a great deal of predictive capability. In addition to this, knowing the dynamical mechanism for the EADs helps in the understanding of complex behavior, such as dynamical chaos, that would be hard or impossible to understand from the viewpoint of interacting ionic currents (i.e., a biophysical interpretation). Knowing which ion channels are key players in EADs is of course important, and can provide targets for pharmacological or genetic manipulation, but the complexity of the multiscale nonlinear dynamical system provides limitations to interpreting behavior without mathematical tools such as GSPT. The theory of folded singularities has been applied to numerous biological systems. 
This includes intracellular Ca$^{2+}$\ dynamics \cite{Harvey2011}, the electrical activity of neurons \cite{Rotstein2008,Rubin2007,Rubin2008} and pituitary cells \cite{Vo2013}, and mixed-mode oscillations that are likely canard-induced have been observed in the oxidation of platinum \cite{Krischer1992}, dusty plasmas \cite{Mikikian2008}, and chemical oscillations \cite{Petrov1992,Rotstein2003}. The demonstration that some forms of EADs are canard-induced, at least in a minimal cardiomyocyte model, adds cardiac cells to the growing list of the biological and chemical systems whose dynamics are organized by folded singularities. Our system is novel, however, in that it is periodically forced under normal (i.e., physiological) conditions, where the forcing is initiated at the sinoatrial node. As we demonstrated here, this forcing can lead to complicated dynamics due to the injection of the orbit into different rotational sectors, so that the number of EADs produced following each stimulus can vary. The result can appear to be unpredictable, and chaotic, and sensitive to small changes in the forcing frequency and initial conditions. Whether this complex behavior is exhibited in a physiological setting, within an intact heart, is unclear. It is generally accepted that EADs can lead to arrythmias \cite{Cranefield1991,Lerma2007stochastic,Shimizu1997,Shimizu1991}, including ventricular tachycardia, but it has not been establshed that complex, chaotic behavior at the single myocyte level contributes to this. \section*{Conclusions} \label{sec:conclusions} In this report, we showed the benefits of a 2-slow/1-fast analysis of a model for cardiac early afterdepolarizations. Knowing that the small EADs are due to canards organized around a folded node singularity not only explains the origin of the EADs, but provides a viewpoint through which one can comprehend important behaviors. For example, an analysis of the eigenvalues of the folded-node singularity provides information on the number of EADs that are possible for different parameter sets. It also explains why inhibition of K$^+$\ channels or a hypokalemic environment facilitates EAD production. Finally, it provides a solid basis for understanding the effects of periodic stimulation of cardiomyocytes. We used this technique to show why more EADs are generated at low-frequency pacing than at a higher pacing frequency. The technique was also used to explain the origin of complex alternan behavior that occurs with intermediate-frequency pacing. Overall, the use of slow-fast analysis provides information on the dynamics of this multi-timescale system that are hard or impossible to comprehend from a purely biophysical analysis (i.e., in terms of the effects of different ionic currents) or from computer simulations alone. \small
1,108,101,562,444
arxiv
\section{Introduction} Symmetries comprise the most fundamental laws of nature, allowing to f\/ind conserved quantities and exact solutions of the equations of motion. The existence of a suf\/f\/icient number of conserved quantities facilitates the investigation of a given dynamical system. Complete integrability of the Hamiltonian systems is closely related with the separation of variables of the Hamilton--Jacobi equations \cite{SB}. The customary conserved quantities originate from geometrical symmetries of the conf\/i\-gu\-ra\-tion space of the system. These symmetries correspond to Killing (K) vectors representing the isometries of the spacetimes. Beside these symmetries, there exist hidden symmetries ge\-ne\-ra\-ted by higher rank St\"ackel--Killing (SK) tensors. The corresponding conserved quantities are quadratic, or, more general, polynomial in momenta connected with symmetries of the complete phase-space. Another natural generalization of the Killing vectors is represented by the antisymmetric Killing--Yano (KY) tensors \cite{KY}. The `square' of a KY tensor is a SK tensor, but the opposite is not generally true. The symmetries generated by KY tensors are fundamental being involved in many conserved quantities of the quantum systems. For example KY tensors appear in the description of spinning particle systems~\cite{GRH}, construction of dif\/ferential operators which commute with the Dirac operator~\cite{CML}, generation of new exotic supersymmetries~\cite{MC}, and so on. The conformal extension of the Killing vectors and SK, KY tensors is given by conformal Killing (CK) vectors, conformal St\"ackel--Killing (CSK) and conformal Killing--Yano (CKY) tensors respectively. These geometrical objects are involved in the dynamics of the massless particles and are connected with the f\/irst integrals of the null geodesics. In recent times the properties of the CKY tensors stimulated much work in generating background metrics with black-holes solutions in higher-dimensional spacetimes (see, e.g.~\cite{VPF}) or interesting geometrical structures~\cite{IVV}. In the study of the dynamics of particles in external gauge f\/ields, the covariant Hamiltonian formulation proposed by van Holten \cite{vH} proves to be more adequate involving gauge covariant equations of motion. In particular this approach permits the investigation of the possibility for a~higher order symmetry to survive when the electromagnetic interactions are taken into account. This possibility is realized in the Killing--Maxwell (KM) system~\cite{BC} and an explicit example is provided by the Kerr metric. An unavoidable problem is the investigation to what extent the classical conservation laws in a curved spacetime could be associated with quantum symmetry operators. It is shown that the CK vectors and CSK tensors do not in general produce quantum operators for the Klein--Gordon equation. In connection with Dirac type operators constructed from KY tensors we discuss the axial anomalies. The plan of this review paper is as follows. In Section~\ref{section2} we present the generalized Killing equations in a covariant framework including external gauge f\/ields and scalar potentials. In the next section we produce some examples of conserved quantities of Runge--Lenz type involving external electromagnetic f\/ields. In Section~\ref{section4} we present various generalizations of the Killing vectors. 
In Section~\ref{section5} we analyze the possibility for a hidden symmetry to survive when the electromagnetic interaction is taken into account. We describe the KM system giving a concrete realization in the Kerr spacetime. In Section~\ref{section6} we examine the quantum version of the hidden symmetries and analyze the gravitational and axial anomalies connected with quantum symmetry operators for Klein--Gordon and Dirac equations. The last section is devoted to conclusions. \section{Symmetries and conserved quantities}\label{section2} The classical dynamics of a point charged particle subject to an external electromagnetic f\/ield expressed (locally) in terms of the potential $1$-form~$A_{i}$ \begin{gather}\label{FdA} F=dA , \end{gather} is derived from the Hamiltonian \begin{gather}\label{H2} H = \frac{1}{2} g^{ij} (p_i - A_i) (p_j - A_j) + V(x) . \end{gather} We also added an external scalar potential $V(x)$ for later convenience and~$\mathbf{g}$ is the metric of a~(pseudo-)Riemmanian $n$-dimensional manifold~$\mathcal{M}$. In terms of the canonical phase-space coordinates $(x^i, p_i)$ the conserved quantities commute with the Hamiltonian in the sense of Poisson brackets. The disadvantage of the traditional approach is that the canonical momenta~$p_i$ and implicitly the Hamilton equations of motion are not manifestly gauge covariant. This inconvenience can be removed using van Holten's receipt~\cite{vH} by introducing the gauge invariant momenta: \begin{gather* \Pi_i = p_i - A_i . \end{gather*} The Hamiltonian \eqref{H2} becomes \begin{gather}\label{Hcov} H = \frac{1}{2} g^{ij} \Pi_i \Pi_j + V(x) , \end{gather} and equations of motion are derived using the Poisson bracket \begin{gather}\label{covPB} \{P,Q\} = \frac{\partial P}{\partial x^i} \frac{\partial Q}{\partial \Pi_i} -\frac{\partial P}{\partial \Pi_i} \frac{\partial Q}{\partial x^i} + q F_{ij}\frac{\partial P}{\partial \Pi_i} \frac{\partial Q}{\partial \Pi_j} . \end{gather} We mention that in the modif\/ied Poisson bracket the momenta $\Pi_i$ are not canonical. A f\/irst integral of degree $p$ in the momenta $\Pi$ is of the form \begin{gather}\label{cq} K = K_0 + \sum^{p}_{k=1}\frac{1}{k!} K^{i_1 \dots i_k}_k \Pi_{i_1} \cdots \Pi_{i_k}, \end{gather} and it has vanishing Poisson bracket \eqref{covPB} with the Hamiltonian, $\{K,H\} = 0$, which implies \begin{subequations}\label{constr} \begin{gather} K^i_1 V_{,i} = 0 , \label{1}\\ K_{0}^{~,i} + F_{j}^{~i} K^j_1 = K^{ij}_2 V_{,j} , \label{0}\\ K^{(i_1 \dots i_l;i_{l+1})}_l + F_j^{~(i_{l+1}} K^{i_1 \dots i_l) j}_{l+1} = \frac{1}{(l+1)} K^{i_1 \dots i_{l+1}j}_{l+2} V_{,j} \qquad \mbox{for} \quad l= 1,\dots,p-2 , \label{l} \\ K^{(i_1 \dots i_{p-1};i_p)}_{p-1} + F_j^{~(i_p} K^{i_1 \dots i_{p-1}) j}_p =0 ,\label{p-1}\\ K^{(i_1 \dots i_p;i_{p+1})}_p =0 . \label{p} \end{gather} \end{subequations} Here a semicolon denotes the covariant dif\/ferentiation corresponding to the Levi-Civita connection and round brackets denote full symmetrization over the indices enclosed. The last equation~\eqref{p} is the def\/ining equation of a SK tensor of rank~$p$. The SK tensors represent a generalization of the Killing vectors and are responsible for the hidden symmetries of the motions, connected with conserved quantities of the form~\eqref{cq} polynomials in momenta. Indeed, using equation~\eqref{p}, for any geodesic $\gamma$ with tangent vector ${\dot x}^i = p^i$ \begin{gather}\label{QK} Q_K = K^{i_1 \dots i_k}_k p_{i_1} \cdots p_{i_k} , \end{gather} is constant along $\gamma$. 
The rest of the equations~\eqref{constr} mixes up the terms of $K$ with the gauge f\/ield strength~$F_{ij}$ and derivatives of the potential~$V(x)$. Several applications using van Holten's covariant framework \cite{vH} are given in \cite{HN,JPN,MV1,IKI} and a few will be presented in the next section. \section{Explicit examples}\label{section3} Let us illustrate these general considerations by some nontrivial examples. In what follows the Coulomb potential in a $3$-dimensional Euclidean space $\mathbb{E}^3$ will be the basis of the examples superposing dif\/ferent types of electric and magnetic f\/ields. The hidden symmetries which will be found involve SK tensors of rank $2$ looking for constants of motion of the form \begin{gather}\label{RLE} K = K_0 + K^{i}_1 \Pi_i + \frac{1}{2} K^{ij}_2 \Pi_i \Pi_j . \end{gather} \subsection{Coulomb potential} To put in a concrete form, we consider the Hamiltonian for the motion of a point charge $q$ of mass $M$ in the Coulomb potential produced by a charge $Q$ \[ H = \frac{M}{2} \dot{\mathbf{x}}^2 + q\frac{Q}{r} . \] We start with \eqref{p} for $p=2$ which is satisf\/ied by a SK tensor of rank $2$. The most general rank $2$ SK tensor on $3$-dimensional Euclidean space involves some terms which do not contribute nothing of interest for the Coulomb problem and it proved that the following form of the SK tensor is adequate \cite{Crampin}: \begin{gather}\label{K2} K^{ij}_2 = 2 \delta^{ij} \mathbf{n}\cdot \mathbf{x} - \big(n^i x^j + n^j x^i\big) , \end{gather} with $\mathbf{n}$ an arbitrary constant vector. Corresponding to this SK tensor the non-relativistic Coulomb problem admits the Runge--Lenz vector constant of motion \begin{gather}\label{RL} \mathbf{K}_2 = \mathbf{p} \times \mathbf{L} + MqQ\frac{\mathbf{x}}{r} , \end{gather} where \begin{gather}\label{am} \mathbf{L} =\mathbf{x} \times \mathbf{p} , \end{gather} is the angular momentum and $\mathbf{p} = M \dot{\mathbf{x}}$. \subsection{Constant electric f\/ield} The next more involved example consists of an electric charge $q$ moving in the Coulomb potential in the presence of a constant electric f\/ield $\mathbf{E}$. The corresponding Hamiltonian is: \[ H = \frac{1}{2 M} \mathbf{p}^2 + q\frac{Q}{r} - q \mathbf{E}\cdot\mathbf{x}. \] Again it is adequate to take for the SK tensor of rank $2$ the simple form \eqref{K2} choosing $\mathbf{n}=\mathbf{E}$. Using this form for $K^{ij}_2$ after a straightforward working out~\eqref{0} \[ K_0 = \frac{MqQ}{r} \mathbf{E}\cdot \mathbf{x} - \frac{Mq}{2} \mathbf{E}\cdot [\mathbf{x} \times (\mathbf{x} \times \mathbf{E})] . \] Concerning equation \eqref{1}, it is automatically satisf\/ied by a vector $\mathbf{K}_1$ of the form \[ \mathbf{K}_1 = \mathbf{x} \times \mathbf{E} , \] modulo an arbitrary constant factor. This vector $\mathbf{K}_1$ contribute to a conserved quantity with a~term proportional to the angular momentum $\mathbf{L}$ along the direction of the electric f\/ield $\mathbf{E}$. In conclusion, when a uniform constant electric f\/ield is present, the Cou\-lomb system admits two constants of motion $\mathbf{L}\cdot\mathbf{E}$ and $\mathbf{C}\cdot\mathbf{E}$ where $\mathbf{C}$ is a generalization of the Runge--Lenz vector~\eqref{RL}: \[ \mathbf{C} = \mathbf{K}_2 - \frac{Mq}{2} \mathbf{x} \times ( \mathbf{x} \times \mathbf{E}) . 
\] \subsection{Spherically symmetric magnetic f\/ield} Another conf\/iguration which admits a hidden symmetry is the superposition of an external spherically symmetric magnetic f\/ield \[ \mathbf{B} = f(r) \mathbf{x} , \] over the Coulomb potential acting on a electric charge $q$. This conf\/iguration is quite similar to the Dirac charge-monopole system. For $K^{ij}_2$ we use again the form \eqref{K2} and $F_{ij}$ in this case is \[ F_{ij} = \epsilon_{ijk} B^k = \epsilon_{ijk} x^k f(r) . \] The system of constraint \eqref{constr} can be solely solved only for a def\/inite form of the function $f(r)$ \[ f(r) = \frac {g}{r^{5/2}} , \] with $g$ a constant connected with the strength of the magnetic f\/ield. With this special form of the function $f(r)$ we get \[ K_0 = \left [ \frac{MqQ}{r} - \frac{2g^2 q^2}{r}\right ](\mathbf{n}\cdot \mathbf{x}) , \] and \[ K^{i}_1 = - \frac{2gq}{r^{1/2}}(\mathbf{x}\times \mathbf{n})^i . \] Collecting the terms $K_0$, $K_1^{i}$, $K_2^{ij}$ the constant of motion \eqref{RLE} becomes \begin{gather}\label{ssmf} K = \mathbf{n}\cdot \left (\mathbf{K}_2 + \frac{2gq}{r^{1/2}}\mathbf{L} - 2 g^2 q^2 \frac{\mathbf{x}}{r}\right ) , \end{gather} with $\mathbf{n}$ an arbitrary constant unit vector and $\mathbf{K}$, $\mathbf{L}$ given by~\eqref{RL},~\eqref{am} respectively. The angular momentum~$\mathbf{L}$ is not separately conserved, entering the constant of motion~\eqref{ssmf}. \subsection{Magnetic f\/ield along a f\/ixed direction} Another example consists in a magnetic f\/ield directed along a f\/ixed unit vector $\mathbf{n}$ \[ \mathbf{B} = B(\mathbf{x} \cdot \mathbf{n}) \mathbf{n} , \] where, for the beginning, $B(\mathbf{x} \cdot \mathbf{n})$ is an arbitrary function. Again we are looking for a constant of motion of the form \eqref{RLE} with the SK tensor of rank~$2$~\eqref{K2}. Equations~\eqref{constr} prove to be solvable only for a particular form of the magnetic f\/ield \[ \mathbf{B} = \frac{\alpha}{\sqrt{\alpha \mathbf{x} \cdot \mathbf{n} + \beta }}\, \mathbf{n} , \] with $\alpha$ and $\beta$ two arbitrary constants. Consequently we get for $K_0$ and $K_1^{i}$ \begin{gather*} K_0 = \frac{MqQ}{r} (\mathbf{x}\cdot \mathbf{n}) + \alpha q^2 (\mathbf{x} \times \mathbf{n})^2 ,\qquad K_1^{i} = - 2 q \sqrt{\alpha \mathbf{x} \cdot \mathbf{n} + \beta }\, (\mathbf{x} \times \mathbf{n})_i . \end{gather*} The f\/inal form of the conserved quantity in this case is: \[ K= \mathbf{n} \cdot \left [ \mathbf{K}_2 + 2 q\sqrt{\alpha \mathbf{x} \cdot \mathbf{n} + \beta }\, \mathbf{L} \right ] + \alpha q^2 (\mathbf{x} \times \mathbf{n})^2 . \] As in the previous example the angular momentum $\mathbf{L}$ is forming part of this constant of motion~$K$. \subsection{Taub-NUT space and its generalizations} The four-dimensional Euclidean Taub-NUT geometry is involved in many modern studies in physics (see e.g.~\cite{VV} and reference therein). The Taub-NUT metric is Ricci-f\/lat self dual on~$\mathbb{R}^4$ and gives an example of non-trivial gravitational instanton. A Kaluza--Klein monopole was described by embedding the Taub-NUT gravitational instanton into f\/ive-dimensional Kaluza--Klein theory. Since the classical equations of motion contain the Dirac monopole, a Coulomb potential and a velocity-square dependent term, the Taub-NUT system represents a non-trivial generalization of the Coulomb/Kepler system. 
More interestingly, the Kaluza--Klein monopole in classical and quantum mechanics possesses conserved quantities that are analog of the angular momentum and the Runge--Lenz vector of the Kepler problem. Also various generalizations of the Kaluza--Klein monopole system are superintegrable, multiseparable~\cite{IM}. Let us consider the radially symmetric generalized Taub-NUT metric \cite{IK}: \begin{gather}\label{gTNUT} ds^2 = f(r) \delta_{ij} dx^i dx^j + h(r)\big(dx^4 + A_k dx^k\big)^2 ,\qquad i,j,k =1,2,3 , \end{gather} where $r$ is the radial coordinate on $\mathbb{R}^4 - \{0\}$ and $A_k$ is the gauge potential of a Dirac monopole. For the geodesic motion on the $4$-manifold endowed with the metric \eqref{gTNUT} the canonical momenta conjugate to the coordinates $(x^i, x^4)$ are \begin{gather*} p_j = f(r)\delta_{ij} \frac{d x^i}{d t} + h(r)\left (\frac{d x^4}{d t} + A_k\frac{d x^k}{d t} \right ) A_j ,\qquad p_4 = h(r)\left (\frac{d x^4}{d t} + A_k\frac{d x^k}{d t}\right ) = q . \end{gather*} Let us remark that the momentum associated with the cyclic variable $x^4$ is conserved and interpreted as {\it relative electric charge}. Geodesic motion on the $4$-manifold projects onto the curved $3$-manifold with metric $g_{ij} = f(r) \delta_{ij}$ augmented with a potential \cite{IK,JPN}. In the Hamiltonian~\eqref{Hcov} the potential is \begin{gather}\label{VTNUT} V(r) = \frac{q^2}{2 h(r)} , \end{gather} and accordingly the conserved energy is \[ E = \frac{\mathbf{\Pi}^2}{2 f(r)} + \frac{q^2}{2 h(r)} . \] Now the search of conserved quantities of motion in the $3$-dimensional curved space in the presence of the potential~\eqref{VTNUT} proceeds as in the previous examples. The zero-order consistency condition~\eqref{1} is satisf\/ied for an arbitrary radial potential and entails the conserved angular momentum which involves a typical monopole term \begin{gather}\label{JTNUT} \mathbf{J} = \mathbf{x} \times \mathbf{\Pi} + q \frac{\mathbf{x}}{r} . \end{gather} Next we search for a Runge--Lenz type vector and for this purpose we start again with the SK tensor of the form \eqref{K2}. The set of equations \eqref{constr} could be solved in some favorable circumstances. First of all, in the original Taub-NUT space \[ f(r) = \frac{1}{h(r)} = 1 + \frac{4 m}{r} , \] where $m$ is a real parameter, the Runge--Lenz type vector is \begin{gather}\label{RLTNUT} \mathbf{K}_2 = \mathbf{\Pi} \times \mathbf{J} - 4 m \big(E - q^2\big) \frac{\mathbf{x}}{r} . \end{gather} Another notable case is represented by the Iwai--Katayama generalizations of the Taub-NUT metric \cite{IK} \[ f(r) = \frac{a+b r}{r} ,\qquad h(r) = \frac{a r + b r^2}{1+ d r + c r^2} , \] with $a, b, c, d \in \mathbb{R}$. The corresponding Runge--Lenz type vector is \begin{gather}\label{RLgTNUT} \mathbf{K}_2 = \mathbf{\Pi} \times \mathbf{J} - \left(a E -\frac{d}{2} q^2\right) \frac{\mathbf{x}}{r} . \end{gather} In both cases, due to the simultaneous existence of the conserved angular momentum \eqref{JTNUT} and the conserved Runge--Lenz vectors \eqref{RLTNUT} and respectively \eqref{RLgTNUT} the motions of the particles are conf\/ined to conic sections. \section{Generalizations of the Killing vectors}\label{section4} A vector f\/ield $X$ on $\mathcal{M}$ is said to be a Killing vector f\/ield if the Lie derivative with respect to~$X$ of the metric $\mathbf{g}$ vanishes: \[ L_Xg=0 . \] Killing vector f\/ields can be generalized to CK vector f\/ields \cite{KY}, i.e.\ vector f\/ields with a f\/low preserving a given conformal class of metrics. 
Furthermore a natural generalization of CK vector f\/ields is given by the CKY tensors \cite{KSW}. A CKY tensor of rank $p$ on a (pseudo-)Riemmanian manifold $(\mathcal{M},\mathbf{g})$ is a $p$-form $Y (p\leq n)$ which satisf\/ies: \begin{gather}\label{CKY} \nabla_X Y = \frac{1}{p+1} X\hook dY - \frac{1}{n-p+1} X^\flat \wedge d^* Y , \end{gather} for any vector f\/ield $X$ on $\mathcal{M}$, where $\nabla$ is the Levi-Civita connection of $\mathbf{g}$, $X^\flat$ is the 1-form dual to the vector f\/ield $X$ with respect to the metric, $\hook$ is the operator dual to the wedge product and $d^*$ is the adjoint of the exterior derivative $d$. Let us recall that the Hodge dual maps the space of $p$-forms into the space of $(n-p)$-forms. The square of $*$ on a $p$-form $Y$ is either~$+1$ or~$-1$ depending on $n$, $p$ and the signature of the metric~\cite{JK,MC} \[ **Y = \epsilon_p Y ,\qquad *^{-1} Y = \epsilon_p * Y , \] with the number $\epsilon_p$ \[ \epsilon_p = (-1)^p *^{-1}\frac{\det g}{\vert \det g \vert} . \] With this convention, the exterior co-derivative can be written in terms of $d$ and the Hodge star: \[ d^* Y = (-1)^p *^{-1} d * Y . \] If $Y$ is co-closed in \eqref{CKY}, then we obtain the def\/inition of a KY tensor~\cite{KY} \begin{gather}\label{KY} \nabla_X Y = \frac{1}{p+1} X\hook dY . \end{gather} This def\/inition is equivalent with the property that $\nabla_j Y_{i_1 \dots i_p}$ is totally antisymmetric or, in components, \[ Y_{i_1 \dots i_{p-1}(i_p;j)} = 0 . \] The connection with the symmetry properties of the geodesic motion is the observation that along every geodesic $\gamma$ in $\mathcal{M}$, $Y_{i_1 \dots i_{p-1}j} \dot{x}^j$ is parallel. There is also a conformal generalization of the SK tensors, namely a symmetric tensor $K_{i_1 \dots i_p}= K_{(i_1 \dots i_p)}$ is called a conformal Killing (CSK) tensor if it obeys the equation \begin{gather}\label{CSK} K_{(i_1 \dots i_p;j)} = g_{j(i_1}\tilde{K}_{i_2 \dots i_p)} , \end{gather} where the tensor $\tilde{K}$ is determined by tracing the both sides of equation~\eqref{CSK}. Let us note that in the case of CSK tensors, the quantity~\eqref{QK} is constant only for null geodesics~$\gamma$. These generalizations of the Killing vectors could be related. Let $Y_{i_1 \dots i_p}$ be a (C)KY tensor, then the symmetric tensor f\/ield \begin{gather}\label{KYY} K_{ij} = Y_{i i_2 \dots i_p}Y_{j}^{~i_2 \dots i_p} , \end{gather} is a (C)SK tensor and it sometimes refers to this (C)SK tensor as the associated tensor with $Y_{i_1 \dots i_p}$. That is the case of the Kerr metric \cite{RF,RP} or the Euclidean Taub-NUT space \cite{GR,VV}. However, the converse statement is not true in general: not all SK tensors of rank $2$ are associated with a KY tensor. To wit in the Taub-NUT geometry there are known to exist four KY tensors of which three are covariantly constant. The components of the Runge--Lenz vector \eqref{RLTNUT} are SK tensors which are associated with the KY tensors of the Taub-NUT space. On the other hand, in the case of the Runge--Lenz vector \eqref{RLgTNUT} its components are also SK tensors but not associated with KY tensors since the generalized Taub-NUT space \cite{IK} does not admit KY tensors \cite{MV3}. Drawing a parallel between def\/initions \eqref{CKY} and \eqref{KY} we remark that all KY tensors are co-closed but not necessarily closed. From this point of view CKY tensors represent a generalization more symmetric in the pair of notions. 
CKY equation \eqref{CKY} is invariant under Hodge duality that if a $p$-form $Y$ satisf\/ies it, then so does the $(n-p)$-form $*Y$. Moreover the dual of a CKY tensor is a KY tensor if and only if it is closed. Let us assume that a CKY tensor of rank $p = 2$ is closed ($d Y = 0$) and non-degenerate called a {\it principal CKY tensor}. The principal CKY tensor obeys the following equation \cite{JJ,Frolov} \[ \nabla_X Y = X^\flat \wedge \xi^\flat , \] where $X$ is an arbitrary vector f\/ield and \begin{gather}\label{pKV} \xi_i = \frac{1}{n-1} \nabla_j Y^j_{~i} . \end{gather} Starting with a principal CKY tensor one can construct a tower of CKY tensors formed from external powers $Y^{\wedge k}$ which again are closed CKY tensors. Taking the Hodge dual of these tensors one obtains a set of KY tensors. On the other hand, the vector $\xi_i$ \eqref{pKV} obeys the following equation \[ \xi_{(i;j)} = -\frac{1}{n-2} R_{l(i} Y_{j)}^{~~l} . \] It is obvious that in a Ricci f\/lat space ($R_{ij} = 0$) or in an Einstein space ($R_{ij} \sim g_{ij}$), $\xi_{i}$ is a~Killing vector and we shall refer to it as the {\it primary Killing vector}. \section[Killing-Maxwell system]{Killing--Maxwell system}\label{section5} In this section we shall analyze what is the condition which must be imposed on the gauge f\/ields to preserve the hidden symmetry of the system. Examining the set of coupled equations~\eqref{constr}, we observe that the vanishing of the terms $F_j^{~(i_l} K^{i_1 \dots i_{l-1}) j} $ guarantees that the gauge f\/ields do not af\/fect the symmetries of the system. To make things more specif\/ic, let us assume that the system admits a hidden symmetry encapsulated in a SK tensor of rank~$2$, $K_{ij}$, associated with a~KY tensor $Y_{ij}$ according to~\eqref{KYY}. The suf\/f\/icient condition of the electromagnetic f\/ield to preserve the hidden symmetry is \cite{MV1} \begin{gather}\label{cond} F_{k[i} Y_{j]}^{~k} = 0 , \end{gather} where the indices in square bracket are to be antisymmetrized. It is worth mentioning that this condition appear in many other contexts. For example, using conformal Killing spinors~\cite{HPSW} this constraint was resorted to maintain the constant of motion along a null geodesic. Also in the case of the motion of pseudo-classical spinning point particles, relation \eqref{cond} assures the preservation of the non-generic supersymmetry associated with a KY tensor~\cite{MT}. In the case of spinor f\/ields on curved space-times this condition is necessary in the construction of Dirac-type operators that commute with the standard Dirac operator~\cite{McLS}. The KM system described by Carter \cite{BC} represent an interesting realization of the condition~\eqref{cond}. Let us consider the source equation of an electromagnetic f\/ield~$F_{ij}$ in $4$-dimensions \begin{gather}\label{Max} F^{ij}_{~~;j} = 4\pi j^i, \end{gather} and assume that the current $j^i$ is a primary Killing vector. Drawing a parallel between equations~\eqref{pKV} and \eqref{Max} we conclude that the electromagnetic f\/ield in the KM system is a CKY tensor which, in addition, is a closed $2$-form, as in \eqref{FdA}. Therefore its Hodge dual \begin{gather}\label{YHF} Y_{ij} = * F_{ij} , \end{gather} is a KY tensor which generates a hidden symmetry \eqref{KYY} associated with it. It is quite simple to verify that $F_{ij} Y_{k}^{~j} \sim F_{ij} *F_{k}^{~j}$ is a symmetric matrix ( in fact proportional with the unit matrix) making obvious that the constraint \eqref{cond} is fulf\/illed. 
We complete the description of the KM system observing that the $2$-form $Y$ \eqref{YHF} can be written, at least locally, as \[ Y = * d A , \] the form $A$ being usually called a {\it KY potential}. \subsection{The Kerr metric} To exemplify the results presented previously, let us consider the Kerr solution to the vacuum Einstein equations which in the Boyer--Lindquist coordinates $(t, r, \theta, \phi)$ has the form \[ g= - \frac{\Delta}{\rho^2}\big( d t - a \sin^2 \theta d \phi\big)^2 + \frac{\sin^2 \theta}{\rho^2} \big[\big(r^2 + a^2\big) d \phi - a d t\big]^2 + \frac{\rho^2}{\Delta} d r^2 + \rho^2 d \theta^2 , \] where \begin{gather*} \Delta = r^2 + a^2 - 2 m r , \qquad \rho^2 = r^2 + a^2 \cos^2 \theta . \end{gather*} This metric describes a rotating black hole of mass $m$ and angular momentum $J=a m$. As was found by Carter \cite{BC2}, the Kerr space admits the SK tensor \begin{gather*} K_{ij} dx^i dx^j = - \frac{\rho^2 a^2 \cos^2 \theta}{\Delta} d r^2 + \frac{\Delta a^2 \cos^2 \theta}{\rho^2} \big( d t - a \sin^2 \theta d \phi\big)^2 \\ \phantom{K_{ij} dx^i dx^j =}{} +\frac{r^2 \sin^2 \theta}{\rho^2} \big[{-} a d t +\big(r^2 + a^2\big) d \phi\big]^2 + \rho^2 r^2 d \theta^2 , \end{gather*} in addition to the metric tensor $g_{ij}$. This tensor is associated with the KY tensor \cite{GRH,JL} \[ Y = r \sin \theta d \theta \wedge \big[{-} a d t +\big(r^2 + a^2\big) d \phi\big] + a \cos \theta dr \wedge \big( d t - a \sin^2 \theta d \phi\big) , \] and the dual tensor \[ *Y = a \sin \theta \cos \theta d \theta \wedge \big[{-} a d t +\big(r^2 + a^2\big) d \phi\big] + r dr \wedge \big({-} d t + a \sin^2 \theta d \phi\big) , \] is a CKY tensor. The existence of this CKY tensor represents a realization of the KM system and we can verify that the KM four-potential one-form is \[ A = \frac{1}{2} \big( a^2 \cos^2 \theta - r^2\big) d t + \frac{1}{2} a \big( r^2 + a^2\big) \sin^2 \theta d \phi . \] Finally, the current is to be identif\/ied with the primary Killing vector \eqref{pKV} \[ Y^{l}\partial_l := Y^{kl}_{~~;k} \partial_l = 3 \partial_t . \] \section{Quantum anomalies}\label{section6} In this section we shall establish a quantum version of the hidden symmetries generated by Killing tensors. We have to be aware of the fact that the classical conserved quantities may not be preserved when we pass to the quantum mechanical level~-- that is, {\it anomalies} may occur. To begin with, we shall investigate the quantum symmetry operators for the Klein--Gordon equation. Finally we def\/ine conserved operators in the case of the Dirac equation and analyze the axial anomalies for the (generalized) Taub-NUT metric. \subsection{Gravitational anomalies} Let us now consider the necessary conditions for the existence of constants of motion in the f\/irst-quantized system. We start with the classical conserved quantities and take into account that in the quantized system the momentum operator is given by the covariant dif\/ferential operator~$\nabla$ on the manifold $\mathcal{M}$. The corresponding Hamilton operator for a free scalar particle is given by the covariant Laplacian acting on scalars: \begin{gather}\label{KG} {\cal H} = \square = \nabla_i g^{ij} \nabla_j = \nabla_i \nabla^i , \end{gather} and for a (C)K vector we def\/ine the quantum symmetry operator \[ {\cal Q}_{V} = K^i \nabla_i .
\] To consider the analogous condition for the existence of constants of motion in the f\/irst-quantized system, we must evaluate the commutator $[\square,{\cal Q}_{V}] \Phi$ acting on some function $\Phi \in {\cal C}^\infty (\mathcal{M})$, a solution of the Klein--Gordon equation $\square \Phi = 0$. The evaluation of this commutator gives: \[ [{\cal H},{\cal Q}_{V}] = \frac{2-n}{n} K_k^{~;ki} \nabla_i + \frac{2}{n} K^k_{~;k} \square . \] In the case of ordinary Killing vectors, the r.h.s.\ of this commutator vanishes on the strength of the Killing equation, but for CK vectors the situation is dif\/ferent. Since the term $K_k^{~;ki} \nabla_i$ survives for CK vectors, in general the system is af\/fected by quantum anomalies. For (C)SK tensors the situation is more intricate. The quantum analogue of the classical conserved quantity for a (C)SK tensor of rank $2$ is \[ {\cal Q}_{T} = \nabla_i K^{ij} \nabla_j . \] On a general manifold this operator does not commute with the Klein--Gordon operator \eqref{KG} \cite{IVV, MV2}: \begin{gather} [\square, {\cal Q}_{T}] = 2 \big( \nabla^{(k} K^{ij)}\big) \nabla_{k} \nabla_{i} \nabla_{j} + 3 \nabla_{m} \big(\nabla^{(k} K^{mj)}\big) \nabla_{j}\nabla_{k} \nonumber\\ {+} \left \{ -\frac{4}{3} \nabla_{k} \big( R_{m}^{~[k} K^{j]m}\big) + \nabla_{k}\left ( \frac{1}{2}g_{ml} (\nabla^{k}\nabla^{(m} K^{lj)} - \nabla^{j}\nabla^{(m} K^{kl)} ) + \nabla_{i}\nabla^{(k} K^{ij)}\right )\right \} \nabla_{j} . \label{comuttotal} \end{gather} We mention that the last term is missing in the corresponding equation in \cite{BC1}. Some simplif\/ications occur for SK tensors since all the symmetrized derivatives vanish and we end up with a simpler result \cite{BC1} \[ [\square, {\cal Q}_{T}] = - \frac{4}{3} \nabla_{i} \big(R^{~~[i}_{l} K^{j]l}\big) \nabla_{j} . \] There are a few notable situations for which the quantum system is free of anomalies. Of course, if the space is Ricci f\/lat or Einstein, i.e.\ $R_{ij} \sim g_{ij}$, the r.h.s.\ of the commutator vanishes. A more interesting and quite unexpected case is represented by SK tensors associated with KY tensors of rank $2$ as in~\eqref{KYY}~\cite{BC1}, a situation which occurs for some spaces~\cite{RF,RP,GR,VV}. In the case of CSK tensors, practically all terms in~\eqref{comuttotal} survive. Taking into account that the terms are arranged into groups with three, two and just one derivative, it is impossible to have compensations between them. In conclusion, CK vectors and CSK tensors do not in general produce symmetry operators for the Klein--Gordon equation, the quantum system being af\/fected by quantum anomalies. \subsection{Dirac symmetry operators} We shall assume that it is possible to def\/ine a Dirac spinor structure on the base manifold~$\mathcal{M}$. Spinor f\/ields carry an irreducible representation of the Clif\/ford algebra and in what follows we shall identify its elements with dif\/ferential forms. Adopting the convention for the Clif\/ford algebra \[ e^a e^b + e^b e^a = 2 g^{ab} , \] the standard Dirac operator $D$ is written in terms of the spinor connection as \[ D = e^a \nabla_{X_a} . \] A key property of CKY tensors is that one may construct from them symmetry operators for the massless Dirac equation \cite{Benn,CKK}. Let us construct an operator acting on spinors: \[ L_Y = e^a Y \nabla_{X_a} + \frac{p}{p+1} dY - \frac{n-p}{n-p+1} d^* Y . \] In order to construct a Dirac type operator, it proves to be convenient to add to $L_Y$ a term proportional to the Dirac operator: \[ D_Y = L_Y - (-1)^p Y D .
\] This operator $D_Y$ is said to be $R$-(anti)commuting, as the graded commutator with the Dirac operator satisfies \[ \{ D , D_Y\}_p = R D , \] where the graded commutator is \cite{Benn,CKK} \[ \{ D , D_Y\}_p = D D_Y + (-1)^p D_Y D . \] For the symmetry operator $D_Y$ the explicit form of the $R$ operator is \[ R = \frac{2 (-1)^p}{n-p+1} d^* Y D . \] Let us remark that the operator $R$ vanishes when $Y$ is a KY tensor; otherwise the Dirac type operator $D_Y$ gives an {\it on-shell} ($D\Psi=0$) symmetry operator for the Dirac operator. The four dimensional case was considered in \cite{KL}. \subsection{Axial anomalies} Having in mind that the KY tensors prevent the appearance of gravitational anomalies for the scalar f\/ield, it is natural to investigate whether they also play a role for {\it axial anomalies}. Various authors have discussed the electromagnetic and gravitational anomalies in the divergence of the axial current, which is closely related to the existence of zero eigenvalues of the Dirac operator. The index of the Dirac operator is useful as a tool to investigate topological properties of the space as well as in computing anomalies in quantum f\/ield theories. In what follows we shall consider even-dimensional spaces in which one can def\/ine the index of a Dirac operator as the dif\/ference in the number of linearly independent zero modes with eigenvalues $+1$ and $-1$ under $\gamma_5$: \[ {\rm index}(D) = n^0_+ - n^0_- . \] The commutation relation between the standard and non-standard Dirac operators generated by a KY tensor \[ [D_Y , D] = 0 , \] leads to a remarkable result for the index of the non-standard Dirac operator: \begin{theorem} \[ {\rm index}(D_Y) = {\rm index}(D) . \] \end{theorem} \begin{proof} For a sketch of the proof see~\cite{HWP}. \end{proof} Since we analyzed various properties of the Taub-NUT metric and its generalizations, in what follows we shall also consider the problem of axial anomalies for this space. The remarkable result is that the Taub-NUT metric makes no contribution to the axial anomaly: \begin{theorem} The Dirac operator associated with the standard Taub-NUT metric on $\mathbb{R}^4$ does not admit any $L^2$ zero modes. \end{theorem} \begin{proof} We sketch the proof \cite{MV}, observing that the scalar curvature $\kappa$ of the standard Taub-NUT metric vanishes. By the Lichnerowicz formula \[ D^2 = \nabla^* \nabla + \frac{\kappa}{4} = \nabla^* \nabla . \] Let $\Psi \in L^2$ be a solution of $D\Psi = 0$; then $\nabla^* \nabla \Psi = 0$ and hence $\nabla \Psi = 0$. Since a parallel spinor has a constant pointwise norm, it cannot be in $L^2$ unless it is $0$, because the volume of $\mathbb{R}^4$ with respect to the Taub-NUT metric is inf\/inite. Therefore $\Psi = 0$. \end{proof} We turn now to the generalized Taub-NUT metric. It has been proved in \cite{MV} that on the whole generalized Taub-NUT space, although the Dirac operator is not Fredholm, it only has a f\/inite number of null states. In~\cite{MM}, suf\/f\/icient conditions were found for the absence of harmonic~$L^2$ spinors on the Iwai--Katayama~\cite{IK} generalizations of the Taub-NUT space. \begin{theorem} There do not exist $L^2$ harmonic spinors on $\mathbb{R}^4$ for the generalized Taub-NUT metrics. In particular, the $L^2$ index of the Dirac operator vanishes. \end{theorem} \begin{proof} We delegate the proof to \cite{MM}.
\end{proof} \section{Concluding comments}\label{section7} The (C)SK and (C)KY tensors are related to a multitude of dif\/ferent topics such as classical integrability of systems together with their quantization, supergravity, string theories, hidden symmetries in higher-dimensional black-hole spacetimes, etc. To conclude, let us brief\/ly discuss some problems that deserve further attention. An obvious extension of the gauge covariant approach to hidden symmetries is represented by the non-abelian dynamics using the appropriate Poisson brackets \cite{vH,HN}. In Section~\ref{section3} we worked out some examples in a Euclidean $3$-dimensional space, restricting ourselves to SK tensors of rank~$2$. More elaborate examples working in an $N$-dimensional curved space and involving SK tensors of higher ranks will be presented elsewhere~\cite{MVprep}. Finally, let us mention that the extension of the (C)KY symmetry to spaces with a skew-symmetric torsion is desirable and may provide new insight into the theory of black holes \cite{HKWY}. \subsection*{Acknowledgements} Support through CNCSIS program IDEI-571/2008 is acknowledged. \pdfbookmark[1]{References}{ref}
\section{Introduction} Neural networks provide state-of-the-art results in a variety of machine learning tasks; however, several aspects of neural networks complicate their usage in practice, including overconfidence~\cite{deepens}, vulnerability to adversarial attacks~\cite{adv_attacks}, and overfitting~\cite{overfitting}. One of the ways to compensate for these drawbacks is using deep ensembles, i.\,e. ensembles of neural networks trained from different random initializations~\cite{deepens}. In addition to increasing the task-specific metric, e.\,g.\,accuracy, deep ensembles are known to improve the quality of uncertainty estimation, compared to a single network. There is yet no consensus on how to measure the quality of uncertainty estimation. \citet{pitfalls} consider a wide range of possible metrics and show that the calibrated negative log-likelihood (CNLL) is the most reliable one because it avoids the majority of pitfalls revealed in the same work. Increasing the size $n$ of the deep ensemble, i.\,e.\,the number of networks in the ensemble, is known to improve the performance~\cite{pitfalls}. The same effect holds for increasing the size $s$ of a neural network, i.\,e. the number of its parameters. Recent works~\citep{double_descent2, double_descent} show that even in an extremely overparameterized regime, increasing $s$ leads to a higher quality. These works also mention a curious effect of non-monotonicity of the test error w.\,r.\,t.\,the network size, called the double descent behaviour. In figure~\ref{fig:motivation}, left, we may observe the saturation and stabilization of quality with the growth of both the ensemble size $n$ and the network size $s$. The goal of this work is to study the asymptotic properties of CNLL of deep ensembles as a function of $n$ and $s$. We investigate under which conditions and w.\,r.\,t.\,which dimensions the CNLL follows a power law for deep ensembles in practice. In addition to the horizontal and vertical cuts of the diagram shown in figure~\ref{fig:motivation}, left, we also study its diagonal direction, which corresponds to the increase of the total parameter count. \begin{figure} \begin{center} \centerline{ \begin{tabular}{cc} \includegraphics[width=0.31\textwidth]{figures_final/motivational_carpet.pdf}& \includegraphics[width=0.65\textwidth]{figures_final/pl_vgg64_cifar100.pdf} \end{tabular}} \caption{Non-calibrated NLL and CNLL of VGG on CIFAR-100. Left: the $(n, s)$-plane for the CNLL. Middle and right: non-calibrated $\mathrm{NLL}_n$ and $\mathrm{CNLL}_n$ can be closely approximated with a power law (VGG of the commonly used size as an example).} \label{fig:motivation} \end{center} \end{figure} The power-law behaviour of deep ensembles has previously been touched upon in the literature. \citet{scaling_description} consider simple shallow architectures and reason about the power-law behaviour of the test error of a deep ensemble as a function of $n$ when $n \rightarrow \infty$, and of a single network as a function of $s$ when $s \rightarrow \infty$. \citet{kaplan2020scaling, rosenfeld2019constructive} investigate the behaviour of \textit{single} networks of modern architectures and empirically show that their NLL and test error follow power laws w.\,r.\,t.\,the network size $s$. In this work, we perform a broad empirical study of power laws in deep ensembles, relying on the practical setting with properly regularized, commonly used deep neural network architectures.
Our main contributions are as follows: \begin{enumerate} \item for the practically important scenario with NLL calibration, we derive the conditions under which CNLL of an ensemble follows a power law as a function of $n$ when $n \rightarrow \infty$; \item we empirically show that, in practice, the following dependencies can be closely approximated with a power law on the \emph{whole} considered range of their arguments: (a) CNLL of an ensemble as a function of the ensemble size $n \geqslant 1$; (b) CNLL of a single network as a function of the network size $s$; (c) CNLL of an ensemble as a function of the total parameter count; \item based on the discovered power laws, we make several practically important conclusions regarding the use of deep ensembles in practice, e.\,g.\,using a large single network may be less beneficial than using a so-called memory split --- an ensemble of several medium-size networks of the same total parameter count; \item we show that, using the discovered power laws for $n \geqslant 1$ and having a small number of trained networks, we can predict the CNLL of large ensembles and the optimal memory split for a given memory budget. \end{enumerate} {\bf Definitions and notation.$\quad$} In this work, we treat a power law as a family of functions $\mathrm{PL}_m = c + b m^a$, $m=1, 2, 3,\dots$; $a<0$, $b \in \mathbb{R}$, $c \in \mathbb{R}$ are the parameters of the power law. Parameter $c = \lim_{m\rightarrow \infty} (c + b m^a) = \lim_{m\rightarrow \infty} \mathrm{PL}_m \overset{\mathrm{def}}{=} \mathrm{PL}_{\infty}$ reflects the asymptote of the power law. Parameter $b=\mathrm{PL}_1-c=\mathrm{PL}_1 - \mathrm{PL}_\infty$ reflects the difference between the starting point of the power law and its asymptote. Parameter $a$ reflects the speed of approaching the asymptote. In the rest of the work, $\mathrm{(C)NLL}_m$ denotes (C)NLL as a function of $m$. \section{Theoretical view} \label{sec:our_theory} The primary goal of this work is to perform an empirical study of the conditions under which NLL and CNLL of deep ensembles follow a power law. Before diving into a discussion of our empirical findings, we first provide a theoretical motivation for anticipating power laws in deep ensembles, and discuss the applicability of this theoretical reasoning to the practically important scenario with calibration. We begin with a theoretical analysis of the non-calibrated NLL of a deep ensemble as a function of the ensemble size $n$. Assume that an ensemble consists of $n$ models that return independent, identically distributed probabilities $p_{\mathrm{obj}, i}^* \in [0, 1], i=1,\dots,n$ of the correct class for a single object from the dataset $\mathcal{D}$. Hereinafter, the superscript $*$ denotes retrieving the prediction for the correct class. We introduce the \emph{model-average} NLL of an ensemble of size $n$ for the \emph{given object}: \begin{equation} \label{eq:nll_ens_single} \mathrm{NLL}_{n}^{\mathrm{obj}} = -\mathbb{E} \log \left( \frac{1}{n} \sum_{i=1}^n p_{\mathrm{obj}, i}^* \right). \end{equation} The expectation in~\eqref{eq:nll_ens_single} is taken over all possible models that may constitute the ensemble (e.\,g. random initializations). The following proposition describes the asymptotic power-law behavior of $\mathrm{NLL}_{n}^{\mathrm{obj}}$ as a function of the ensemble size.
\begin{prop} \label{prop:pl_ens} Consider an ensemble of $n$ models, each producing independent and identically distributed probabilities of the correct class for a given object: $p_{\mathrm{obj}, i}^* \in \left[\epsilon_{\mathrm{obj}}, 1\right]$, $\epsilon_{\mathrm{obj}} > 0$, $i=1,\dots,n$. Let $\mu_{\mathrm{obj}} = \mathbb{E} p_{\mathrm{obj}, i}^*$ and $\sigma_{\mathrm{obj}}^2 = \mathbb{D} p_{\mathrm{obj}, i}^*$ be, respectively, the mean and variance of the distribution of probabilities. Then the model-average NLL of the ensemble for a single object can be decomposed as follows: \begin{equation} \label{eq:nll_ens_single_pl} \mathrm{NLL}_{n}^{\mathrm{obj}} = \mathrm{NLL}_{\infty}^{\mathrm{obj}} + \frac 1 n \frac{\sigma_\mathrm{obj}^2}{2 \mu_\mathrm{obj}^2} + \mathcal{O}\left(\frac{1}{n^2}\right), \end{equation} where $\mathrm{NLL}_{\infty}^{\mathrm{obj}} = -\log \left( \mu_\mathrm{obj} \right)$ is the ``infinite'' ensemble NLL for the given object. \end{prop} The proof is based on the Taylor expansions for the moments of functions of random variables; we provide it in Appendix~\ref{app:theory_prop_proof}. The assumption about the lower limit of model predictions $\epsilon_\mathrm{obj} > 0$ is necessary for the accurate derivation of the asymptotics in~\eqref{eq:nll_ens_single_pl}. We argue, however, that this condition is fulfilled in practice, as real softmax outputs of neural networks are always positive and separated from zero. The model-average NLL of an ensemble of size $n$ on the whole dataset, $\mathrm{NLL}_n$, can be obtained via summing $\mathrm{NLL}_{n}^{\mathrm{obj}}$ over objects, which implies that $\mathrm{NLL}_n$ also behaves as $c + bn^{-1}$, where $c, b>0$ are constants w.\,r.\,t.\,$n$, as $n \rightarrow \infty$. However, for finite $n$, the dependency of $\mathrm{NLL}_n$ on $n$ may be more complex. \citet{pitfalls} emphasize that the comparison of the NLLs of different models with suboptimal softmax temperature may lead to an arbitrary ranking of the models, so the comparison should only be performed after \emph{calibration}, i.\,e. with an optimally selected temperature $\tau$. The model-average CNLL of an ensemble of size $n$, measured on the whole dataset $\mathcal{D}$, is defined as follows: \begin{equation} \mathrm{CNLL}_n = \mathbb{E} \min_{\tau>0} \biggl\{ -\sum_{\mathrm{obj} \in \mathcal{D}} \log \bar{p}^*_{\mathrm{obj}, n}(\tau) \biggr\}, \label{eq:true_cnll} \end{equation} where the expectation is also taken over models, and $\bar{p}_{\mathrm{obj}, n}(\tau) \in [0, 1]^K $ is the distribution over $K$ classes output by the ensemble of $n$ networks with softmax temperature $\tau$. \citet{pitfalls} obtain this distribution by averaging predictions $p_{\mathrm{obj}, i} \in [0, 1]^K$ of the member networks $i=1,\dots, n$ for a given object and applying the temperature $\tau>0$ on top of the ensemble: $\bar{p}_{\mathrm{obj}, n}(\tau) = \mathrm{softmax} \bigl\{ \bigl( \log (\frac 1 n \sum_{i=1}^n p_{\mathrm{obj}, i} )\bigr) \bigl/ \tau \bigr\} $. This is a native way of calibrating, in the sense that we plug the ensemble into a standard procedure of calibrating an arbitrary model. We refer to the described calibration procedure as applying temperature \emph{after} averaging. In our work, we also consider another way of calibrating, namely applying temperature \emph{before} averaging: $\bar{p}_{\mathrm{obj}, n}(\tau) =\frac 1 n \sum_{i=1}^n \mathrm{softmax} \{ \log (p_{\mathrm{obj}, i}) / \tau \}$.
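To make the two procedures concrete, the following minimal NumPy sketch computes the calibrated ensemble prediction for a single object; the function names and interface are ours and merely illustrate the formulas above (a calibration loop would then scan a temperature grid on held-out data and keep the $\tau$ with the lowest NLL).
\begin{verbatim}
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_probs_tau_before(probs, tau):
    # probs: array of shape (n, K), member predictions for one object;
    # temperature applied *before* averaging
    return softmax(np.log(probs) / tau, axis=1).mean(axis=0)

def ensemble_probs_tau_after(probs, tau):
    # temperature applied *after* averaging
    return softmax(np.log(probs.mean(axis=0)) / tau)
\end{verbatim}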
The two calibration procedures perform similarly in practice; in most cases, the second one performs slightly better (see Appendix~\ref{app:cnll_1}). The following series of derivations helps to connect the non-calibrated and calibrated NLLs. If we fix some $\tau>0$ and apply it \emph{before} averaging, $\bar p_{\mathrm{obj}, n} (\tau)$ fits the form of the ensemble in the right-hand side of equation~\eqref{eq:nll_ens_single}, and according to Proposition~\ref{prop:pl_ens}, we obtain that the model-average NLL of an $n$-size ensemble with fixed temperature $\tau$, $\mathrm{NLL}_n(\tau)$, follows a power law w.\,r.\,t.\,$n$ as $n \rightarrow \infty$. Applying $\tau$ \emph{after} averaging complicates the derivation, but the same result is generally still valid, see Appendix~\ref{app:theory_nll_temp_after}. However, the parameter $b$ of the power law may become negative for certain values of $\tau$. In contrast, when we apply $\tau$ before averaging, $b$ always remains positive, see eq.~\eqref{eq:nll_ens_single_pl}. Minimizing $\mathrm{NLL}_n(\tau)$ w.\,r.\,t.\,$\tau$ results in a lower envelope of the (asymptotic) power laws: \begin{equation} \label{eq:le_nll} \mathrm{LE\mbox{-}NLL}_n = \min_{\tau>0} \mathrm{NLL}_n(\tau), \quad \mathrm{NLL}_n(\tau) \overset{n \rightarrow \infty}{\sim} \mathrm{PL}_n. \end{equation} The lower envelope of power laws also follows an (asymptotic) power law. Consider for simplicity a finite set of temperatures $\{\tau_1, \dots, \tau_T\}$, which is the conventional practical case. As each of $\mathrm{NLL}_n(\tau_t), t = 1,\dots,T$ converges to its asymptote $c(\tau_t)$, there exists an optimal temperature $\tau_{t^*}$ corresponding to the lowest $c(\tau_{t^*})$. The above implies that starting from some point $n$, $\mathrm{LE\mbox{-}NLL}_n$ will coincide with $\mathrm{NLL}_n(\tau_{t^*})$ and hence follow its power law. We refer to Appendix~\ref{app:theory_lower_env_cont_temp} for further discussion of the continuous temperature case. Substituting the definition of $\mathrm{NLL}_n(\tau)$ into~\eqref{eq:le_nll} results in: \begin{equation} \mathrm{LE\mbox{-}NLL}_n = \min_{\tau>0} \mathbb{E} \biggl\{ - \sum_{\mathrm{obj} \in \mathcal{D}} \log \bar{p}^*_{\mathrm{obj}, n}(\tau) \biggr \}, \end{equation} from which we obtain that the only difference between $\mathrm{LE\mbox{-}NLL}_n$ and $\mathrm{CNLL}_n$ is the order of the minimum operation and the expectation. Although this results in a different calibration procedure from the commonly used one, we show in Appendix~\ref{app:two_types_nll} that the difference between the values of $\mathrm{LE\mbox{-}NLL}_n$ and $\mathrm{CNLL}_n$ is negligible in practice. Conceptually, applying the expectation inside the minimum is also a reasonable setting: in this case, when choosing the optimal $\tau$, we use a more reliable estimate of the NLL of the $n$-size ensemble with temperature $\tau$. This setting is not generally considered in practice, since it requires training several ensembles and, as a result, is more computationally expensive. In the experiments, we follow the definition of CNLL~\eqref{eq:true_cnll} to consider the most practical scenario. To sum up, in this section we derived an asymptotic power law for LE-NLL, which may be treated as another definition of CNLL, and which closely approximates the commonly used CNLL in practice.
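For the reader's convenience, the leading-order computation behind Proposition~\ref{prop:pl_ens}, which underlies the above discussion, can be sketched in a few lines (the complete argument with remainder control is given in Appendix~\ref{app:theory_prop_proof}). Writing $S_n = \frac{1}{n}\sum_{i=1}^n p^*_{\mathrm{obj},i}$, so that $\mathbb{E} S_n = \mu_{\mathrm{obj}}$ and $\mathbb{D} S_n = \sigma^2_{\mathrm{obj}}/n$, the second-order Taylor expansion of the logarithm around $\mu_{\mathrm{obj}}$ gives \begin{equation*} \log S_n = \log \mu_{\mathrm{obj}} + \frac{S_n - \mu_{\mathrm{obj}}}{\mu_{\mathrm{obj}}} - \frac{(S_n - \mu_{\mathrm{obj}})^2}{2 \mu_{\mathrm{obj}}^2} + \mathcal{O}\bigl((S_n - \mu_{\mathrm{obj}})^3\bigr); \end{equation*} taking expectations, the first-order term vanishes, while $\mathbb{E}(S_n - \mu_{\mathrm{obj}})^2 = \sigma^2_{\mathrm{obj}}/n$, which yields exactly \eqref{eq:nll_ens_single_pl}.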
\section{Experimental setup} \label{exp_setup} We conduct our experiments with convolutional neural networks, WideResNet~\cite{wrn} and VGG16~\cite{vgg}, on the CIFAR-10~\cite{CIFAR10} and CIFAR-100~\cite{CIFAR100} datasets. We consider a wide range of network sizes $s$ by varying the width factor $w$: for VGG\,/\,WideResNet, we use convolutional layers with $[w, 2w, 4w, 8w]$\,/\,$[w, 2w, 4w]$ filters, and fully-connected layers with $8w$\,/\,$4w$ neurons. For VGG\,/\,WideResNet, we consider $2 \leqslant w \leqslant 181$ / $5 \leqslant w \leqslant 453$; $w=64$\,/\,$160$ corresponds to a standard, commonly used configuration with $s_{\mathrm{standard}}$ = 15.3M / 36.8M parameters. These sizes are later referred to as the standard budgets. For each network size, we tune hyperparameters (weight decay and dropout) using grid search. We train all networks for 200 epochs with SGD with an annealing learning rate schedule and a batch size of 128. We aim to follow the practical scenario in the experiments, so we use the definition of CNLL~\eqref{eq:true_cnll}, not LE-NLL~\eqref{eq:le_nll}. Following~\cite{pitfalls}, we use the ``test-time cross-validation'' to compute the CNLL. We apply the temperature before averaging; the motivation for this is given in section~\ref{sec:ensemble_size}. More details are given in Appendix~\ref{app:experimental_setup}. For each network size $s$, we train at least $\ell = \max\{N, 8 s_{\mathrm{standard}}/s\}$ networks, $N=64$\,/\,$12$ for VGG\,/\,WideResNet. For each $(n, s)$ pair, given the pool of $\ell$ trained networks of size $s$, we construct $\lfloor \frac \ell n \rfloor$ ensembles of $n$ distinct networks. The NLLs of these ensembles have some variance, so in all experiments, we average NLL over $\lfloor \frac \ell n \rfloor$ runs. We use these values to approximate NLL with a power law along the different directions of the $(n, s)$-plane. For this, we only consider points that were averaged over at least three runs. {\bf Approximating sequences with power laws.$\quad$} Given an arbitrary sequence $\{\hat y_m\},\,m=1,\dots, M$, we approximate it with a power law $\mathrm{PL}_m = c + b m^{a}$. In the rest of the work, we use the hat notation $\hat y_m$ to denote the observed data, while the value without a hat, $y_m$, denotes $y$ as a function of $m$. To fit the parameters $a,\,b,\,c$, we solve the following optimization task using BFGS: \begin{equation} \label{fit_loss} (a, b, c) = \underset{a, b, c}{\mathrm{argmin}} \frac 1 M \sum_{m=1}^M \bigl (\log_2 (\hat y_m - c) - \log_2 (b m^{a}) \bigr)^2. \end{equation} We use the logarithmic scale to pay more attention to the small differences between values $\hat y_m$ for large $m$. For a fixed $c$, optimizing the given loss is equivalent to fitting a linear regression model with one factor $\log_2 m$ in the coordinates $\log_2 m$ --- $\log_2(y_m-c)$ (see fig.~\ref{fig:motivation}, right, as an example). \section{NLL as a function of ensemble size} \label{sec:ensemble_size} In this section, we would like to answer the question of whether the NLL as a function of ensemble size can be described by a power law in practice. We consider both the calibrated NLL and the NLL with a fixed temperature. To answer the stated question, we fit the parameters $a,\,b,\,c$ of the power law on the points $\widehat {\mathrm{NLL}}_n(\tau)$ or $\widehat {\mathrm{CNLL}}_n$, $n=1, 2, 3, \dots$, using the method described in section~\ref{exp_setup}, and analyze the resulting parameters and the quality of the approximation.
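For concreteness, the fitting step described in section~\ref{exp_setup} can be sketched as follows (illustrative code of ours, not the released implementation); it minimizes the loss~\eqref{fit_loss} with BFGS from \texttt{scipy}, using a parameterization that keeps both logarithms well defined for decreasing sequences.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_power_law(y_hat):
    # Fit PL_m = c + b * m**a to y_hat[0], ..., y_hat[M-1] in log-space.
    # For a decreasing sequence we keep b = exp(beta) > 0 and
    # c = min(y_hat) - exp(gamma) < min(y_hat), so all logs are defined.
    y_hat = np.asarray(y_hat, dtype=float)
    m = np.arange(1, len(y_hat) + 1, dtype=float)
    y_min = float(y_hat.min())

    def loss(theta):
        a, beta, gamma = theta
        b, c = np.exp(beta), y_min - np.exp(gamma)
        return np.mean((np.log2(y_hat - c) - np.log2(b * m**a)) ** 2)

    res = minimize(loss, x0=np.array([-1.0, 0.0, -3.0]), method="BFGS")
    a, beta, gamma = res.x
    return a, np.exp(beta), y_min - np.exp(gamma)

# example: extrapolate CNLL to larger ensembles from the first points,
# a, b, c = fit_power_law(cnll_hat[:4]);  prediction: c + b * n**a
\end{verbatim}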
As we show in Appendix~\ref{app:cnll_2}, when the temperature is applied \textit{after} averaging, $\mathrm{NLL}_n(\tau)$ is, in some cases, an increasing function of $n$. As for CNLL, we found settings in which $\mathrm{CNLL}_n$ with the temperature applied \textit{after} averaging is not a convex function for small $n$ and, as a result, cannot be described by a power law. In the rest of the work, we apply temperature \textit{before} averaging, as in this case, both $\mathrm{NLL}_n(\tau)$ and $\mathrm{CNLL}_n$ can be closely approximated with power laws in all considered cases. {\bf NLL with fixed temperature.$\quad$} For all considered dataset--architecture pairs, and for all temperatures, $\widehat {\mathrm{NLL}}_n(\tau)$ with fixed $\tau$ can be closely approximated with a power law. Figure~\ref{fig:motivation}, middle and right, shows an example approximation for VGG of the commonly used size with the temperature equal to one. Figure~\ref{fig:ens_fixed_t} shows the dynamics of the parameters $a,\,b,\,c$ of the power laws approximating the NLL with a fixed temperature, for ensembles of different network sizes and for different temperatures, for VGG on the CIFAR-100 dataset. The rightmost plot reports the quality of the approximation measured with RMSE in the \emph{log}-space. We note that even the highest RMSE in the \emph{log}-space corresponds to a low RMSE in the \emph{linear} space (the RMSE in the \emph{linear} space is less than 0.006 for all lines in figure~\ref{fig:ens_fixed_t}). In theory, starting from large enough $n$, $\mathrm{NLL}_n(\tau)$ follows a power law with parameter $a$ equal to $-1$, and for small $n$, more than one term in eq.~\eqref{eq:nll_ens_single_pl} is significant, resulting in a more complex dependency $\mathrm{NLL}_n(\tau)$. In practice, we observe the power-law behaviour over the whole considered range of $\mathrm{NLL}_n(\tau)$, $n \geqslant 1$, but with $a$ slightly larger than $-1$. This result is consistent for all considered dataset--architecture pairs, see Appendix~\ref{app:nll_fixed_temp_setup2}. When the temperature grows, the general behaviour is that $a$ approaches $-1$ more and more tightly. This behaviour breaks down for the ensembles of small networks (blue lines). The reason is that the number of trained small networks is large, and the NLL for large ensembles with high temperature is noisy in log-scale, so the approximation of the NLL with a power law is slightly worse than for other settings, as confirmed in the rightmost plot of fig.~\ref{fig:ens_fixed_t}. Nevertheless, these approximations are still very close to the data; we present the corresponding plots in Appendix~\ref{A:large_sized_t}. Parameter $b=\mathrm{NLL}_1(\tau)-\mathrm{NLL}_\infty(\tau)$ reflects the potential gain from the ensembling of the networks with the given network size and the given temperature. For a particular network size, the gain is higher for low temperatures, since networks with low temperatures are overconfident in their predictions, and ensembling reduces this overconfidence. With high temperatures, the predictions of both a single network and an ensemble get closer to the uniform distribution over classes, and $b$ approaches zero. Parameter $c$ approximates the quality of the ``infinite''-size ensemble, $\mathrm{NLL}_\infty(\tau)$. For each network size, there is an optimal temperature, which may be either higher or lower than one, depending on the dataset--architecture combination (see Appendix~\ref{app:nll_fixed_temp_setup2} for more examples).
This shows that even large ensembles need calibration. Moreover, the optimal temperature increases when the network size grows. Therefore, not only are large single networks more confident in their predictions than small single networks, but the same holds even for large ensembles. A higher optimal temperature reduces their confidence. We notice that for a given network size, the optimal temperature converges as $n \rightarrow \infty$; we show the corresponding plots in Appendix~\ref{app:temp}. \begin{figure} \centering \includegraphics[width=\textwidth]{figures_final/pls_vs_temp_vgg_100_loss.pdf} \caption{Parameters of power laws and the quality of the approximation for $\mathrm{NLL}_n(\tau)$ with a fixed temperature $\tau$ for VGG on CIFAR-100.} \label{fig:ens_fixed_t} \end{figure} {\bf NLL with calibration.$\quad$} When the temperature is applied before averaging, $\widehat{\mathrm{CNLL}}_n$ can be closely approximated with a power law for all considered dataset--architecture pairs, see Appendix~\ref{cnll:setup2}. Figure~\ref{fig:cnll_ens} shows how the resulting parameters of the power law change when the network size increases for different settings. The rightmost plot reports the quality of the approximation. In figure~\ref{fig:cnll_ens}, we observe that for WideResNet, parameter $b$ decreases as $s$ becomes large, and $c$ starts growing for large $s$. For VGG, this effect also occurs in a milder form but is almost invisible in the plot. This suggests that large networks gain less from ensembling, and therefore the ensembles of larger networks are less effective than the ensembles of smaller networks. We suppose the described effect is a consequence of under-regularization (large networks need more careful hyperparameter tuning and regularization), because we also observed it in a stronger form for networks with all regularization turned off, see Appendix~\ref{app:noreg_b_c}. However, the described effect might also be a consequence of the decreased diversity of wider networks~\cite{neal2018modern}, and needs further investigation. \begin{figure} \centering \includegraphics[width=\textwidth]{figures_final/pls_cnll_vgg_wr_100.pdf} \caption{Parameters of power laws and the quality of the approximation for $\mathrm{CNLL}_n$ for different network sizes $s$. VGG and WideResNet on CIFAR-100.} \label{fig:cnll_ens} \end{figure} \section{NLL as a function of network size} \label{sec:network_size} In this section, we analyze the behaviour of the NLL of an ensemble of a fixed size $n$ as a function of the member network size $s$. We consider both the non-calibrated and the calibrated NLL, and analyze the cases $n=1$ and $n>1$ separately. \citet{scaling_description} reason about a power law of accuracy for $n=1$ when $s \rightarrow \infty$, considering shallow fully-connected networks on the MNIST dataset. We would like to check whether $\widehat{\mathrm{(C)NLL}}_s$ can be approximated with a power law on the whole reasonable range of $s$ in practice. {\bf Single network.$\quad$} Figure~\ref{fig:nll_network_size}, left, shows the NLL with $\tau = 1$ and the CNLL of a single VGG on the CIFAR-100 dataset as a function of the network size. We observe the double descent behaviour~\cite{double_descent, double_descent2} of the non-calibrated NLL, which could not be approximated with a power law for the considered range of $s$.
The calibration removes the double descent behaviour and allows a close power-law approximation, as confirmed in the middle plot of figure~\ref{fig:nll_network_size}. Interestingly, parameter $a$ is close to $-0.5$, which coincides with the results of~\citet{scaling_description} derived for the test error. The results for other dataset--architecture pairs are given in Appendix~\ref{app:network_size}. \citet{double_descent} observe the double descent behaviour of \emph{accuracy} as a function of the network size for highly overfitted networks, when training networks without regularization, with label noise, and for many more epochs than is usually needed in practice. In our practical setting, accuracy and CNLL are monotonic functions of the network size, while for the non-calibrated NLL, the double descent behaviour is observed. \citet{pitfalls} point out that accuracy and CNLL usually correlate, so we hypothesize that the double descent \emph{may be} observed for CNLL in the same scenarios in which it is observed for accuracy, while the non-calibrated NLL exhibits the double descent at earlier epochs in these scenarios. To sum up, our results support the conclusions of~\cite{pitfalls} that the comparison of the NLL of models of different \emph{sizes} should only be performed with an optimal temperature. {\bf Ensemble.$\quad$} As can be seen from figure~\ref{fig:nll_network_size}, right, for ensemble sizes $n > 1$, CNLL starts increasing at some network size $s$. This agrees with the behaviour of parameter $c$ of the power law for CNLL shown in figure~\ref{fig:cnll_ens}, as discussed in section~\ref{sec:ensemble_size}. Because of this behaviour, we do not perform experiments on approximating $\mathrm{CNLL}_s$ for $n>1$ with a power law. \begin{figure} \centering \includegraphics[width=\textwidth]{figures_final/pl_vgg_100_ms_ind_and_ens.pdf} \caption{Non-calibrated $\mathrm{NLL}_s$ and $\mathrm{CNLL}_s$ for VGG on CIFAR-100. Left and middle: for a single network, $\mathrm{NLL}_s$ exhibits double descent, while $\mathrm{CNLL}_s$ can be closely approximated with a power law. Right: $\mathrm{NLL}_s$ and $\mathrm{CNLL}_s$ of an ensemble of several networks may be non-monotonic functions.} \label{fig:nll_network_size} \end{figure} \section{NLL as a function of the total parameter count} \label{sec:budgets} In the previous sections, we analyzed the vertical and horizontal cuts of the $(n, s)$-plane shown in figure~\ref{fig:motivation}, left. In this section, we analyze the diagonal cuts of this plane. One diagonal direction corresponds to a fixed total parameter count, later referred to as a memory budget, and the orthogonal direction reflects an increasing budget. We first investigate CNLL as a function of the memory budget. In figure~\ref{fig:budgets}, left, we plot the sequences $\widehat{\mathrm{CNLL}}_n$ for different network sizes $s$, aligning the plots by the total parameter count. CNLL as a function of the memory budget is then introduced as the lower envelope of the described plots. As in the previous sections, we approximate this function with a power law and observe that the approximation is tight; the corresponding visualization is given in figure~\ref{fig:budgets}, middle. The same result for other dataset--architecture pairs is shown in Appendix~\ref{app:budgets}. Another practically important effect is that the lower envelope may be reached at $n>1$.
In other words, for a fixed memory budget, a single network may perform worse than an ensemble of several medium-size networks of the same total parameter count, called a memory split in the subsequent discussion. We refer to the described effect itself as the Memory Split Advantage (MSA) effect. We further illustrate the MSA effect in figure~\ref{fig:budgets}, right, where each line corresponds to a particular memory budget, the x-axis denotes the number of networks in the memory split, and the lowest CNLL, shown on the y-axis, is achieved at $n > 1$ for all lines. We consistently observe the MSA effect for different settings and metrics, i.\,e. CNLL and accuracy, for a wide range of budgets, see Appendix~\ref{app:budgets}. We note that the MSA effect holds even for budgets smaller than the standard budget. We also show in Appendix~\ref{app:budgets} that using a memory split with a relatively small number of networks is only moderately slower than using a single wide network, in both the training and testing stages. We describe the memory split advantage effect in more detail in~\cite{chirkova2020deep}. \begin{figure} \begin{center} \centerline{ \begin{tabular}{c@{}c} \includegraphics[height=26mm]{figures_final/pl_vgg_cifar100_budget.pdf}& \includegraphics[height=26mm]{figures_final/vgg_cifar100_budgets_cnll.pdf} \end{tabular}} \caption{Left and middle: $\mathrm{CNLL}_B$ for VGG on CIFAR-100 can be closely approximated with a power law. $\mathrm{CNLL}_B$ is the lower envelope of $\mathrm{CNLL}_n$ for different network sizes $s$. Right: Memory Split Advantage effect, VGG on CIFAR-100. For different memory budgets $B$, the optimal CNLL is achieved at $n>1$.} \label{fig:budgets} \end{center} \end{figure} \section{Prediction based on power laws} \label{sec:prediction} One of the advantages of a power law is that, given a few starting points $y_1, \dots, y_m$ satisfying the power law, one can predict the values $y_i$ exactly for any $i \gg m$. In this section, we check whether the power laws discovered in section~\ref{sec:ensemble_size} are stable enough to allow accurate predictions. We use the CNLL of the ensembles of sizes $1$--$4$ as starting points, and predict the CNLL of larger ensembles. We first conduct the experiment using the values of the starting points obtained by averaging over a large number of runs. In this case, the CNLL of large ensembles may be predicted with high precision, see Appendix~\ref{app:prediction}. Secondly, we conduct the experiment in the practical setting, where the values of the starting points are obtained using only 6 trained networks (using 6 networks allows a more stable estimation of the CNLL of ensembles of sizes $1$--$3$). The two left plots of figure~\ref{fig:predictions_ensembles} report the error of the prediction for different ensemble sizes and network sizes of VGG and WideResNet on the CIFAR-100 dataset. The plots for other settings are given in Appendix~\ref{app:prediction}. The experiment was repeated 10 times for VGG and 5 times for WideResNet with independent sets of networks, and we report the average error. The error is $1$--$2$ orders of magnitude smaller than the value of CNLL; based on this, we conclude that the discovered power laws allow quite accurate predictions. In section~\ref{sec:budgets}, we introduced memory splitting, a simple yet effective method of improving the quality of the network, given the memory budget $B$.
Using the obtained predictions for CNLL, we can now predict the optimal memory split (OMS) for a fixed $B$ by selecting the optimum along a specific diagonal of the predicted $(n, s)$-plane, see Appendix~\ref{app:prediction} for more details. We show the results for the practical setting with 6 given networks in figure~\ref{fig:predictions_ensembles}, right. The plots depict the number of networks $n$ in the true and predicted OMS; the network size can be uniquely determined by $B$ and $n$. In most cases, the discovered power laws predict either the exact or the neighboring split. If we predict the neighboring split, the difference in CNLL between the true and predicted splits is negligible, i.\,e.\,of the same order as the errors presented in figure~\ref{fig:predictions_ensembles}, left. To sum up, we observe that the discovered power laws not only \textit{interpolate} $\widehat{\mathrm{CNLL}}_n$ on the \textit{whole} considered range of $n$, but are also able to \textit{extrapolate} this sequence, i.\,e.\,a power law fitted on a \emph{short} segment of $n$ approximates the \emph{full} range well, providing an argument for using power laws in particular, rather than other functions. \begin{figure} \begin{center} \centerline{ \begin{tabular}{@{}c@{}c} \multicolumn{2}{c}{\footnotesize RMSE between true and predicted CNLL} \\ \includegraphics[width=0.27\textwidth]{figures_final/predicted_carpet_vgg100.pdf}& \includegraphics[width=0.27\textwidth]{figures_final/predicted_carpet_wr100.pdf} \end{tabular} \begin{tabular}{@{}c@{}c} \multicolumn{2}{c}{\footnotesize Optimal memory splits: predicted vs true}\\ \includegraphics[width=0.22\textwidth]{figures_final/vgg_cifar100_predcited_memory_splits.pdf}& \includegraphics[width=0.22\textwidth]{figures_final/wr_cifar100_predcited_memory_splits.pdf} \end{tabular}} \caption{Predictions based on $\mathrm{CNLL}_n$ power laws for VGG and WideResNet on CIFAR-100. Predictions are made for large $n$ based on $n = 1, \dots, 4$. Left pair: RMSE between true and predicted CNLL. Right pair: predicted optimal memory splits vs true ones. Mean $\pm$ standard deviation is shown for predictions.} \label{fig:predictions_ensembles} \end{center} \end{figure} \section{Related Work} {\bf Deep ensembles and overparameterization.$\quad$} The two main approaches to improving deep neural network accuracy are ensembling and increasing the network size. While a number of works report the quantitative influence of the above-mentioned techniques on model quality~\cite{fort2019deep, ju2018relative, deepens, double_descent, neyshabur2018towards, novak2018sensitivity}, few investigate the qualitative side of the effect. Some recent works~\cite{d2020double, scaling_description, geiger2019jamming} consider a simplified or narrowed setup to tackle it. For instance,~\citet{scaling_description} similarly discover power laws in the test error w.\,r.\,t.\,model and ensemble size for simple binary classification with hinge loss, and give a heuristic argument supporting their findings. We provide an extensive theoretical and empirical justification of similar claims for the calibrated NLL using modern architectures and datasets.
Other lines of work studying neural network ensembles and overparameterized models include, but are not limited to, the Bayesian perspective~\cite{he2020bayesian, wilson2020case, wilson2020bayesian}, ensemble diversity improvement techniques~\cite{kim2018attention, lee2016stochastic, sinha2020dibs, zaidi2020neural}, the neural tangent kernel (NTK) view on overparameterized neural networks~\cite{arora2019exact, jacot2018neural, lee2019wide}, etc. {\bf Power laws for predictions.$\quad$} A few recent works also empirically discover power laws with respect to data and model size and use them to extrapolate the performance of small models/datasets to larger scales~\cite{kaplan2020scaling, rosenfeld2019constructive}. Their findings even allow estimating the optimal compute budget allocation given limited resources. However, these studies do not account for the ensembling of models and the calibration of NLL. {\bf MSA effect.$\quad$} Concurrently with our work, \citet{google_msa} investigate a similar effect for budgets measured in FLOPs. Earlier, an MSA-like effect had also been noted in~\cite{coupled_ensembles,li2019ensemblenet}. However, the mentioned works did not consider the proper regularization of networks of different sizes and did not propose a method for predicting the OMS, while both aspects are important in practice. \section{Conclusion} In this work, we investigated the power-law behaviour of the CNLL of deep ensembles as a function of the ensemble size $n$ and the network size $s$ and observed the following power laws. Firstly, with a minor modification of the calibration procedure, CNLL as a function of $n$ follows a power law on a wide finite range of $n$, starting from $n=1$, but with the power parameter slightly higher than the one derived theoretically. Secondly, the CNLL of a single network follows a power law as a function of the network size $s$ on the whole reasonable range of network sizes, with the power parameter approximately the same as the one derived in~\cite{scaling_description}. Thirdly, the CNLL also follows a power law as a function of the total parameter count (memory budget). The discovered power laws allow predicting the quality of large ensembles based on the quality of smaller ensembles consisting of networks with the same architecture. The practically important finding is that for a given memory budget, the number of networks in the optimal memory split is usually much higher than one, and can be predicted using the discovered power laws. Our source code is available at \url{https://github.com/nadiinchi/power_laws_deep_ensembles}. \section*{Broader Impact} In this work, we provide an empirical and theoretical study of existing models (namely, deep ensembles); we propose neither new technologies nor architectures, thus we are not aware of any specific ethical or future societal impact of this work. We would, however, like to point out a few benefits gained from our findings, such as the optimization of resource consumption when training neural networks and a contribution to the overall understanding of neural models. To the best of our knowledge, no negative consequences may follow from our research. \begin{ack} We would like to thank Dmitry Molchanov, Arsenii Ashukha, and Kirill Struminsky for the valuable feedback. The theoretical results presented in section~\ref{sec:our_theory} were supported by Samsung Research, Samsung Electronics.
The empirical results presented in sections~\ref{sec:ensemble_size},~\ref{sec:network_size},~\ref{sec:budgets},~\ref{sec:prediction} were supported by the Russian Science Foundation grant \textnumero 19-71-30020. This research was supported in part through the computational resources of HPC facilities at NRU HSE. Additional revenues of the authors for the last three years: Stipend by Lomonosov Moscow State University, Travel support by ICML, NeurIPS, Google, NTNU, DESY, UCM. \end{ack} \medskip \small \bibliographystyle{apalike}
\section{Introduction} Motivated by high accuracy at reduced computational cost with respect to uniform grid methods, numerous adaptive discretization schemes for evolutionary partial differential equations (PDEs) have been developed over the past decades, see, e.g., \cite{Brandt77}. Real-world problems, for instance fluid and plasma turbulence or reactive flows, typically involve a multitude of active spatial and temporal scales, and adaptivity allows one to concentrate the computational effort at locations and time instants where it is necessary to ensure a given numerical accuracy, while elsewhere efforts may be significantly reduced. Among adaptive approaches, multiresolution and wavelet methods offer an attractive possibility to introduce locally refined grids, which dynamically track the evolution of the solution in space and scale. Automatic error control of the adaptive discretization with respect to a uniform grid solution is an advantageous feature of these methods \cite{Cohen00}. For a review of adaptive multiresolution methods in the context of computational fluid dynamics (CFD) we refer to \cite{ScVa10}. In many applications, in particular in CFD, Galerkin truncated discretizations of the underlying PDEs which use a finite number of modes are the methods of choice. Spectral methods \cite{CQHZ88} are a prominent example, and Fourier-Galerkin schemes are widely used for direct numerical simulation of turbulence \cite{IGKa2009} due to their high accuracy. For efficiency reasons, the convolution product in spectral space, arising from the quadratic nonlinear term typically encountered in hydrodynamic equations, is evaluated in physical space, and aliasing errors are completely removed. This implementation, called the pseudo-spectral formulation with full dealiasing using the $2/3$ rule, is equivalent to a Fourier-Galerkin scheme up to round-off errors \cite{CQHZ88}. Thus the discretization conserves the $L^2$-norm of the solution. A classical test to check the stability of pseudo-spectral codes for the viscous Burgers or Navier-Stokes equations is to perform simulations with vanishing viscosity. This allows one to verify whether the $L^2$ norm of the solution, i.e., typically the energy, is conserved; for sufficiently small time steps the truncated Galerkin schemes are stable. However, the solution of the Galerkin truncated inviscid equations, e.g., inviscid Burgers or incompressible Euler, shows artefacts in the form of oscillations and the computed solution is not physical. Already T.D. Lee \cite{Lee1952} predicted energy equipartition between all Fourier coefficients in spectral approximations for 3D incompressible Euler, called thermalization, by applying Liouville's theorem from statistical mechanics. The effect of truncating Fourier-Galerkin schemes has been studied in \cite{RFNM11,MFNBR2020} for the 1D Burgers and 2D incompressible Euler equations. The observed short-wavelength oscillations were named `tygers' and were interpreted as first manifestations of thermalization \cite{Lee1952}. The proposed cause was the resonant interaction between fluid particle motion and truncation waves. Motivated by this work, a detailed numerical analysis of Fourier-Galerkin methods for nonlinear evolutionary PDEs, in particular for inviscid Burgers and incompressible Euler, was then performed in \cite{BaTa13}. The authors showed spectral convergence for smooth solutions of the inviscid Burgers equation and the incompressible Euler equations.
However, when the solution lacks sufficient smoothness, both the spectral and the $2/3$ pseudo-spectral Fourier methods exhibit nonlinear instabilities which generate spurious oscillations. In particular it was shown that after the shock formation in the inviscid Burgers equation, the total variation of bounded (pseudo-) spectral Fourier solutions must increase with an increasing number of modes. The $L^2$-energy conservation of the spectral solution is reflected through spurious oscillations, which is in contrast with energy-dissipating Onsager solutions. A complete explanation of these nonlinear instabilities was thus given and `tygers' \cite{RFNM11} were demystified. To remove these non-physical oscillations in Galerkin truncated approximations, different numerical regularization techniques have been proposed, as commonly used in numerical methods for solving hyperbolic conservation laws. If the solution is not unique, the `regularized' numerical scheme selects one weak solution, which should correspond to the physically relevant one, e.g., the entropy solution of the inviscid Burgers equation, which can be computed exactly using the Legendre transform \cite{Vergassola1994}. These approaches include upwind techniques \cite{Osher1982}, total variation diminishing schemes \cite{Harten1983}, shock limiters \cite{Sweby1984}, spectral vanishing viscosity \cite{Tadmor1989, Gottlieb2001}, classical viscosity and hyperviscosity \cite{BLSB1981} and also inviscid regularization schemes \cite{BLTi2008, KhTi2008}. In the context of adaptive wavelet schemes, numerical experiments with the 1D inviscid Burgers equation showed that wavelet filtering of the Fourier-Galerkin truncated solution in each time step, which corresponds to denoising and removes the oscillations, yields the solution of the viscous Burgers equation \cite{Nguyenvanyen2008}. For the 2D incompressible Euler equations \cite{Nguyenvanyen2009}, different wavelet techniques for regularizing truncated Fourier-Galerkin solutions were studied using either real-valued or complex-valued wavelets, and the results were compared with viscous and hyperviscous regularization methods. The results show that nonlinear wavelet filtering with complex-valued wavelets preserves the flow dynamics and suggest $L^2$ convergence to the reference solution. The wavelet representation offers at the same time a non-negligible compression rate of about $3$ for fully developed 2D turbulence. Simulations of the 3D wavelet-filtered Navier-Stokes equations \cite{OYSFK11} showed that the statistical predictability of isotropic turbulence can be preserved with a reduced number of degrees of freedom. This approach, called Coherent Vorticity Simulation (CVS) \cite{FSK1999}, is a multiscale method to compute incompressible turbulent flows based on the wavelet filtered vorticity field. The coherent vorticity, corresponding to the few coefficients whose modulus is larger than a threshold, represents the organized and energetic flow part, while the remaining incoherent vorticity is noise-like. Applying wavelet-based denoising, i.e., CVS filtering, to the 3D Galerkin truncated incompressible Euler equations confirmed that this adaptive regularization models turbulent dissipation and thus allows one to compute turbulent flows with intermittent nonlinear dynamics and a $k^{-5/3}$ Kolmogorov energy spectrum \cite{Farge2017}.
A significant compression rate of the wavelet coefficients of vorticity is likewise observed, which reduces the number of active degrees of freedom to only about 3.5\% of the total number of coefficients for the studied turbulent flows, computed at a Taylor microscale based Reynolds number of $200$. Filtering the wavelet representation of the Galerkin truncated inviscid Burgers and 2D incompressible Euler equations in \cite{PNFS13}, by retaining only the significant coefficients, showed that the spurious oscillations due to resonance can be filtered out, and dissipation can thus be introduced by the adaptive representation. The aim of the current work is to provide a rigorous mathematical framework to analyze and to understand the properties of adaptive discretizations of evolutionary PDEs based on dynamical Galerkin schemes. To this end, we analyze these adaptive Galerkin discretizations. Galerkin schemes by themselves are particularly appealing due to their optimality properties, conservation of energy and the ease of numerical analysis using Hilbert space techniques. Introducing space adaptivity, e.g., by wavelet filtering in each time step, implies that the projection operator changes over time, as only a subset of basis functions is used. Hence, the projection operator is non-differentiable in time and we propose the use of an integral formulation. The projected equations are then analyzed with respect to existence and uniqueness of the solution. It is proven that non-smooth projection operators introduce dissipation, a result which is crucial for adaptive discretizations of nonlinear PDEs. Existence and uniqueness of the solution of the projected equations is likewise shown. Tools from the theory of countable systems of ordinary differential equations and from functional analysis in Banach spaces are used. For related background we refer the reader to the text books \cite{Deim77, Schwabik1992} and \cite{Fili2013}. The remainder of the article is organized as follows. Dynamical Galerkin schemes are defined in section~\ref{sec:dyngal} and the existence and uniqueness of the projected equations is analyzed, giving an explanation of the introduced energy dissipation. Space and time discretization of the Burgers and incompressible Euler equations is described in section~\ref{sec:discret}. Numerical examples are presented in section~\ref{sec:numex} to illustrate the dissipation mechanism. Section~\ref{sec:appl} shows applications of the CVS filtering to the inviscid Burgers equation in 1D and the 2D and 3D incompressible Euler equations. Some conclusions are drawn in section~\ref{sec:concl}. \section{Dynamical Galerkin schemes} \label{sec:dyngal} \subsection{Motivation} Evolutionary PDEs can be discretized with a Galerkin method in space, by projecting the equation onto a sequence of finite dimensional linear spaces, which approximate the solution in space when the discretization parameter, $h$, goes to zero. By truncating to a finite number of modes, the infinite dimensional countable system of ordinary differential equations in time is reduced to a finite dimensional one. An important restriction of such methods is that the projection space typically does not evolve in time and the number of modes is fixed. Here, we propose a formulation of adaptive Galerkin discretizations where the projection operator and the number of modes can change over time, and we show that under suitable conditions adaptation can introduce dissipation.
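Before giving the formal definition, the following self-contained sketch illustrates the idea for the 1D Burgers equation: a pseudo-spectral Fourier-Galerkin right-hand side combined with a projector that is re-selected in every time step by coefficient thresholding. Hard thresholding in Fourier space serves here merely as a simple stand-in for the wavelet filtering used in the numerical examples below; all names and parameter values are illustrative.
\begin{verbatim}
import numpy as np

def dynamical_galerkin_step(u_hat, k, dt, nu=0.0, eps=1e-6):
    # One explicit Euler step of a Fourier-Galerkin scheme for Burgers,
    # f(u) = nu*u_xx - u*u_x, followed by a projection P_h(t) whose
    # retained index set, i.e. H_h(t), is recomputed in every step.
    u   = np.fft.irfft(u_hat)
    u_x = np.fft.irfft(1j * k * u_hat)
    rhs = -np.fft.rfft(u * u_x) - nu * k**2 * u_hat  # (no dealiasing here)
    u_hat_new = u_hat + dt * rhs
    mask = np.abs(u_hat_new) >= eps * np.abs(u_hat_new).max()
    return np.where(mask, u_hat_new, 0.0)            # apply P_h(t)

# toy usage: N = 1024 modes on the torus [0, 1)
N = 1024
x = np.arange(N) / N
k = 2.0 * np.pi * np.fft.rfftfreq(N, d=1.0 / N)
u_hat = np.fft.rfft(np.sin(2.0 * np.pi * x))
for _ in range(2000):
    u_hat = dynamical_galerkin_step(u_hat, k, dt=5e-5)
\end{verbatim}
Switching a coefficient off and later on again makes the projector discontinuous in time, which is precisely the situation the integral formulation introduced below is designed to handle.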
\subsection{Formal definition} Let $H$ be a Banach space, and consider the evolution equation \begin{equation} \label{eq:burgers_abstract} u' = f(u) \end{equation} where $u'$ denotes the weak time derivative of $u$ and $f$ is defined and continuous from some sub-Banach space $D(f) \subset H$ into $H$. Equation (\ref{eq:burgers_abstract}) is completed by a suitable initial condition $u(0) = u_0$. To be more specific, we shall focus below on the case of the one-dimensional Burgers equation on the torus $\mathbb{R}/\mathbb{Z}$: \begin{equation} \label{eq:burgers} \partial_t u + u \partial_x u = \nu \partial_{xx} u \end{equation} which corresponds to (\ref{eq:burgers_abstract}) with \begin{equation} \label{eq:burgers_case} f(u) = \nu \partial_{xx} u - u \partial_x u \end{equation} and $u = u(x,t)$. The classical Galerkin discretization of (\ref{eq:burgers_abstract}) is defined as follows: for $h>0$, let $H_h$ be a fixed finite dimensional subspace of $D(f)$, such that: $$ \overline{\bigcup_{h>0} H_h} = H $$ where the closure is taken in $H$, and let $P_h$ be the orthogonal projector on $H_h$. Find $u_h : [0,T] \to H_h$ such that: \begin{equation} \label{eq:burgers_galerkin} u_h' = P_h f(u_h) = P_h(\nu \partial_{xx} u_h - u_h \partial_x u_h) \end{equation} Now for $t\in [0,T]$, assume that $P_h(t)$ is an orthogonal projector on some finite dimensional subspace $H_h(t)$ of $H$. The dimension of $H_h(t)$ is allowed to change in time, but we assume that $H_h(t)$ remains within a fixed finite dimensional subspace $H_h^0$. $P_h$ therefore takes its values in the set of orthogonal projectors $H_h^0 \to H_h^0$, which we denote by $\Pi_h^0$, with its natural smooth manifold structure as a closed subset of the space of linear mappings $H_h^0 \to H_h^0$. We want to find $u_h : [0,T] \to H_h^0$ with $u_h(t) \in H_h(t)$, which is an approximation of $u$. Let us first assume that $P_h$ is a smooth function of time. As in the case where $P_h$ is time independent, we apply $P_h(t)$ to the differential equation to get: \begin{equation} \label{eq:burgers_dynamic_galerkin_1} P_h(t) u_h'(t) = P_h(t) f(u_h(t)) \end{equation} but now, since $P_h$ does not commute with the time-derivative, this equation is not sufficient to determine $u_h'(t)$ entirely. We need another equation to fix the component of $u_h'(t)$ which lies in the orthogonal complement of $H_h(t)$, i.e., in $H^\perp_h(t)$. To derive this equation, we start from the condition that $u_h(t) \in H_h(t)$ for every $t$, which is equivalent to \begin{equation} P_h(t) u_h(t) = u_h(t). \end{equation} Differentiating this identity in time leads to: \begin{equation} P_h(t) u_h'(t) + P_h'(t) u_h(t) = u_h'(t) \end{equation} or equivalently \begin{equation} \label{eq:burgers_dynamic_galerkin_2} \left(1-P_h(t) \right) u_h'(t) = P_h'(t) u_h (t) \end{equation} which is exactly the equation we were looking for. By adding (\ref{eq:burgers_dynamic_galerkin_1}) and (\ref{eq:burgers_dynamic_galerkin_2}) together, we obtain the definition of the dynamical Galerkin scheme: \begin{equation} \label{eq:burgers_dynamic_galerkin} u_h'(t) = P_h(t) f \left(u_h(t)\right) + P_h'(t) u_h(t) \end{equation} By comparing this differential equation with (\ref{eq:burgers_galerkin}), we observe the appearance of a new term proportional to the time-derivative of $P_h$. This is the essential ingredient which characterizes the dynamical Galerkin scheme.
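Before turning to the analysis, the mechanics of (\ref{eq:burgers_dynamic_galerkin}) can be checked on a toy problem. The following minimal Python sketch, in which the rotating one-dimensional subspace, the skew-symmetric right-hand side and all numerical values are purely illustrative choices, integrates the scheme with a classical Runge-Kutta method and verifies that $u_h(t)$ remains in $H_h(t)$ and that the energy balance only involves $(u_h,f(u_h))$, anticipating the lemma below.
\begin{verbatim}
import numpy as np

# Toy model in R^2 (standing in for H_h^0): H_h(t) is the line spanned
# by e(t) = (cos wt, sin wt), so P(t) = e e^T is a smooth rank-1
# orthogonal projector; f(u) = A u with A skew-symmetric, (u, f(u)) = 0.
w = 0.7                                   # illustrative rotation rate
A = np.array([[0.0, -1.0], [1.0, 0.0]])

def e(t):  return np.array([np.cos(w*t), np.sin(w*t)])
def de(t): return w*np.array([-np.sin(w*t), np.cos(w*t)])
def P(t):  return np.outer(e(t), e(t))
def dP(t): return np.outer(de(t), e(t)) + np.outer(e(t), de(t))

def rhs(t, u):
    # dynamical Galerkin scheme: u' = P(t) f(u) + P'(t) u
    return P(t) @ (A @ u) + dP(t) @ u

t, dt, u = 0.0, 1e-3, e(0.0)              # u(0) lies in H_h(0)
for _ in range(5000):                     # classical RK4 in time
    k1 = rhs(t, u)
    k2 = rhs(t + dt/2, u + dt/2*k1)
    k3 = rhs(t + dt/2, u + dt/2*k2)
    k4 = rhs(t + dt, u + dt*k3)
    u, t = u + dt/6*(k1 + 2*k2 + 2*k3 + k4), t + dt

print("|(1-P)u| =", np.linalg.norm(u - P(t) @ u))  # ~0: u stays in H_h(t)
print("energy drift =", 0.5*(u @ u) - 0.5)         # ~0 since (u, f(u)) = 0
\end{verbatim}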
We now show the following \medskip \begin{lemma} \label{thm:smooth_projector} Any solution of (\ref{eq:burgers_dynamic_galerkin}) such that $u_h(0) \in H_h(0)$ also satisfies $u_h(t) \in H_h(t)$ for all $t$, and moreover \begin{equation} \label{eq:energy_equation} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d}t} \Vert u_h(t) \Vert^2 = (u_h(t), f(u_h(t))) \end{equation} \end{lemma} \begin{proof} By differentiating $P_h(t)^2 = P_h(t)$ and $P_h(t)^3 = P_h(t)$ respectively, we obtain the identities $$P_h(t) P_h(t)' + P_h(t)' P_h(t) = P_h(t)' \quad \mathrm{and} \quad P_h(t) P_h(t)' P_h(t) = 0,$$ which imply that \begin{equation} \frac{\mathrm{d}}{\mathrm{d}t}\left((1-P_h(t))u_h(t)\right) = -P_h(t) P_h'(t)\left(1-P_h(t)\right)u_h(t), \end{equation} so that $(1-P_h(t))u_h(t)$ satisfies a linear homogeneous equation with vanishing initial datum; it therefore vanishes identically and the first part follows. To prove the second part, take the inner product of the equation with $u_h$: \begin{equation} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d}t} \Vert u_h(t) \Vert^2 = (u_h(t), f(u_h(t))) + (u_h(t), P_h'(t) u_h(t)), \label{eq:energy_eq2} \end{equation} where the last term can be rewritten \begin{equation} (P_h(t) u_h(t), P_h'(t) P_h(t) u_h(t)) = (u_h(t), P_h(t) P_h'(t) P_h(t) u_h(t)) = 0 \; , \nonumber \end{equation} which proves (\ref{eq:energy_equation}). \end{proof} \medskip The above computations are valid when $P_h$ is differentiable, which is a severe restriction and in particular forbids us to dynamically switch on and off some functions in the basis of integration, which is the goal that we had set ourselves in the beginning. To proceed we therefore need to extend the definition of the scheme to non-differentiable $P_h$. For this we consider the integral formulation of (\ref{eq:burgers_dynamic_galerkin}), namely \begin{equation} \label{eq:burgers_dynamic_galerkin_integral1} u_h(t) = u_h(0) + \int_0^t P_h(\tau) f(u_h(\tau)) \mathrm{d}\tau + \int_0^t P_h'(\tau) u_h(\tau)\mathrm{d}\tau. \end{equation} This equation can be rewritten using a Stieltjes integral with respect to $P_h$: \begin{equation} \label{eq:burgers_dynamic_galerkin_integral} u_h(t) = u_h(0) + \int_0^t P_h(\tau) f(u_h(\tau)) \mathrm{d}\tau + \int_0^t \mathrm{d} P_h(\tau) u_h(\tau) \end{equation} which we call the integral formulation of the dynamical Galerkin scheme. This equation makes sense as soon as $P_h$ has bounded variation (BV), which gives it a much wider range of applicability than (\ref{eq:burgers_dynamic_galerkin}), allowing in particular discontinuities in $P_h$. To solve such an equation we need to resort to the theory of generalized ordinary differential equations, which we now recall. \subsection{Existence and uniqueness of a solution to the projected equations} The rigorous setting for integral equations such as (\ref{eq:burgers_dynamic_galerkin_integral}) involving Stieltjes integrals is explained in detail in the book \cite{Schwabik1992}. An alternative introduction can be found in \cite{Pandit1982}. We summarize the main consequences of the theory for our problem in the following: \medskip \begin{theorem} Assume that $P_h : [0,T] \to \Pi_h^0$ is BV and left-continuous, that $P_h(0) u_h(0) = u_h(0)$ (i.e., $u_h(0) \in H_h(0)$), and that $f:H_h^0 \to H$ is locally Lipschitz. Then \begin{enumerate}[label=(\roman{*}), ref=(\roman{*})] \item[(i)] There exists $T^*$, $0 < T^* \leq T$, such that the integral equation \begin{equation} \label{eq:burgers_dynamic_galerkin_integral0} u_h(t) = u_h(0) + \int_0^t P_h(\tau) f(u_h(\tau)) \mathrm{d}\tau + \int_0^t \mathrm{d} P_h(\tau) u_h(\tau) \end{equation} has a unique BV, left-continuous solution $u_h : [0,T^*] \to H_h^0$.
\item[(ii)] This solution satisfies \begin{equation} \forall t \in [0,T^*], \quad P_h(t) u_h(t) = u_h(t) \end{equation} \item[(iii)] $u_h$ is continuous at any point of continuity of $P_h$, and more generally for any $t$: \begin{equation} u_h(t^+) - u_h(t) = ( P_h(t^+) - P_h(t) ) u_h(t) \end{equation} or equivalently \begin{equation} u_h(t^+) = P_h(t^+) u_h(t) \end{equation} \item[(iv)] The energy equation (\ref{eq:energy_equation}) for smooth $P_h$ is replaced in general by: \begin{multline} \label{eq:energy_equation2} \frac{1}{2} (\Vert u_h(t) \Vert^2-\Vert u_h(0) \Vert^2) = \\ \int_0^t (u_h(\tau), f(u_h(\tau)))\mathrm{d}\tau - \frac{1}{2} \sum_{\{i\mid t_i < t\}} \Vert (1-P_h(t_i^+)) u_h(t_i) \Vert^2, \end{multline} where $(t_i)_{i \in \mathbb{N}}$ are the points of discontinuity of $P_h$. \end{enumerate} \end{theorem} \medskip \begin{proof} To prove part (i) of the theorem we first need to familiarize ourselves with a few key concepts used in \cite{Schwabik1992}. \begin{definition} Let $G = \{ x \in \mathbb{R}^n \mid \Vert x \Vert \leq c \} \times [0,T]$, $h : [0,T] \to \mathbb{R}$ a nondecreasing, left-continuous function, and $\omega : [0,+\infty) \to \mathbb{R}$ a continuous, increasing function with $\omega(0) = 0$. We will say that a function $F : G \to \mathbb{R}^n$ belongs to the class $\mathcal{F}(G,h,\omega)$ if and only if \begin{equation} \Vert F(x,t_2) - F(x,t_1) \Vert \leq \vert h(t_2) - h(t_1) \vert \end{equation} and \begin{equation} \Vert F(x,t_2) - F(x,t_1) - F(y,t_2) + F(y,t_1) \Vert \leq \omega(\Vert x - y \Vert) \vert h(t_2) - h(t_1) \vert \end{equation} for all $(x,t_2), (x,t_1), (y,t_2), (y,t_1) \in G$. \end{definition} \medskip The proof of the existence is based on the Schauder-Tichonov fixed point theorem, using theorem 4.2, p.~114 of ref.~\cite{Schwabik1992}. The uniqueness can be shown using theorem 4.8, p.~122 of ref.~\cite{Schwabik1992}, which proves the local uniqueness property in the future, i.e., for increasing $t$. Now let us turn to (ii). The idea is to approximate $P_h$ by a family of smooth functions $P_{h,\varepsilon}$, $\varepsilon > 0$, and then to apply Lemma \ref{thm:smooth_projector} to the corresponding solution $u_{h,\varepsilon}$, giving \begin{equation} \left(1-P_{h,\varepsilon}(t) \right) \, u_{h,\varepsilon}(t) = 0 \end{equation} and then passing to the limit. For this we need $u_{h,\varepsilon}(t) \to u_{h}(t)$, which means that the solution depends continuously on $P_h$ (see chapter 8, p.~262 of ref.~\cite{Schwabik1992} on continuous dependence on parameters). The continuity of $u_h$ in part (iii) follows directly from the fact that $P_h$ is left-continuous and BV. The energy equation in part (iv) can be shown by integrating (\ref{eq:energy_eq2}) in time and replacing $P_h'(t) u_h(t)$ by $(1 - P_h(t)) u'_h(t)$, cf. (\ref{eq:burgers_dynamic_galerkin_2}). \end{proof} In the case when the projector $P_h(t)$ depends on $u(t)$, e.g., when using adaptive wavelet thresholding, we have \begin{subequations} \label{eq:coupled_dynamical_galerkin} \begin{align} \label{eq:coupled_dynamical_galerkin1} u_h(t) & = u_h(0) + \int_0^t P_h(\tau) f(u_h(\tau)) \mathrm{d}\tau + \int_0^t \mathrm{d} P_h(\tau) u_h(\tau) \\ \label{eq:coupled_dynamical_galerkin2} P_h(t) & = \Phi(u_h(t)) \end{align} \end{subequations} \begin{theorem} Under certain conditions, the system (\ref{eq:coupled_dynamical_galerkin}) has a unique solution. \end{theorem} \begin{proof} We proceed by iteration.
Let $P_h^0$ be the projector on the time-independent approximation space $H_h^0$, and let $u_h^0$ be the corresponding solution of (\ref{eq:coupled_dynamical_galerkin1}). We then define recursively \begin{equation} P_h^{n+1}(t) = \Phi(u_h^n(t)) \end{equation} and $u_h^{n+1}$ as the solution of (\ref{eq:coupled_dynamical_galerkin1}) with $P_h = P_h^{n+1}$. If $\Phi$ is such that this iteration defines a contraction on a suitable function space, the sequence $(u_h^n, P_h^n)$ converges and its limit is the unique solution of (\ref{eq:coupled_dynamical_galerkin}). \end{proof} \section{Space and time discretization} \label{sec:discret} For the space discretization in the numerical results below we use a classical Fourier pseudo-spectral scheme \cite{CQHZ88}. The spectral Fourier projection of $u \in L^1(\mathbb{T}^d)$ where $\mathbb{T} = \mathbb{R} / (2 \pi \mathbb{Z})$ is given by \begin{equation} P_N u (\bm x) = u_N (\bm x) = \sum_{|{\bm k}| \lesssim N/2} \widehat u_k \, e^{i {\bm k} \cdot {\bm x}} \; , \; \widehat u_{\bm k} = \frac{1}{(2 \pi)^d} \int_{\mathbb{T}^d}\, u({\bm x}) \, e^{-i {\bm k} \cdot {\bm x}} \, d{\bm x} \label{eq:Fourierprojector} \end{equation} Note that $|{k}| \lesssim N/2$ is understood in the sense $-N/2 \le k < N/2$ and correspondingly in higher dimensions for each component of $\bm k$. Applying the spectral discretization to the one-dimensional inviscid Burgers equation ($d=1$), \begin{equation} \partial_t u + \frac{1}{2} \partial_x u^2 \, = \, 0 \quad {\text{for}} \quad x \in \mathbb{T} \quad {\text{and }} \quad t>0 \label{eq:inviscidBurgers} \end{equation} with periodic boundary conditions and a suitable initial condition $u(x,t=0) = u_0(x)$ yields the Galerkin scheme \begin{equation} \partial_t u_N + \frac{1}{2} \partial_x \left( P_N (u_N)^2 \right) \, = \, 0 \quad {\text{for}} \quad x \in \mathbb{T} \quad {\text{and }} \quad t>0 \end{equation} which corresponds to a nonlinear system of $N$ coupled ODEs for $\widehat u_k(t)$ with $|{k}| \lesssim N/2$. A pseudo-spectral evaluation of the nonlinear term is used, and the product in physical space is fully dealiased. In other words, the Fourier modes retained in the expansion of the solution are such that $|k| \le k_C$, where $k_C$ is the desired cut-off wave number, but the grid has $N= 3k_C$ points in each direction, versus $N= 2k_C$ for a non-dealiased, critically sampled product. This dealiasing makes the pseudo-spectral scheme equivalent to a Fourier-Galerkin scheme up to round-off errors \cite{CQHZ88}, and is thus conservative. For the two- and three-dimensional incompressible Euler equations ($d=2, 3$) with periodic boundary conditions, \begin{eqnarray} \label{eq:Euler} \partial_t {\bm u} + \left( {\bm u} \cdot \nabla \right) {\bm u} \, & = & \, - \nabla p \quad {\text{for}} \quad {\bm x} \in \mathbb{T}^d \quad {\text{and }} \quad t>0 \\ \nonumber \nabla \cdot {\bm u} & = & 0 \end{eqnarray} a similar spectral discretization can be applied. The pressure $p$ is eliminated using the Leray projection onto divergence-free vector fields. Eventually a nonlinear system of coupled ODEs is obtained for the Fourier coefficients of the velocity $\widehat{\bm u}_{\bm k}(t)$. For the time discretization of the resulting ODE systems we use classical Runge-Kutta schemes, of order 4 for the 1D Burgers equation and the 3D Euler equations, while for 2D Euler a third-order Runge-Kutta scheme with a low-storage formulation is used, see \cite{Orlandi2000}, p.~20. For details on the convergence and stability of the above spectral schemes we refer to \cite{BaTa13}. Implementation features for the 1D Burgers equation and the 2D Euler equation can be found in \cite{Nguyenvanyen2009} and \cite{PNFS13}, and for details on the scheme for the 3D Euler equations we refer to \cite{Farge2017}.
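To make the dealiasing just described concrete, a minimal Python sketch of the fully dealiased pseudo-spectral right-hand side for the 1D inviscid Burgers equation could read as follows; the torus length $2\pi$, the grid size and the single explicit Euler update are illustrative choices, not those of the production codes.
\begin{verbatim}
import numpy as np

def burgers_rhs_hat(u_hat, N):
    # RHS of u_t = -(1/2) (u^2)_x in Fourier space on [0, 2*pi).
    # Modes with |k| <= kC = N//3 are kept (grid of N = 3 kC points),
    # so the quadratic product is alias-free.
    k = np.fft.fftfreq(N, d=1.0/N)          # integer wave numbers
    mask = np.abs(k) <= N // 3
    u = np.fft.ifft(u_hat * mask).real      # back to the N-point grid
    u2_hat = np.fft.fft(u**2)               # pointwise product, then FFT
    return -0.5j * k * u2_hat * mask        # -(1/2) d/dx P_N u^2

N = 96
x = 2*np.pi*np.arange(N)/N
u_hat = np.fft.fft(np.sin(x))
u_hat = u_hat + 1e-3*burgers_rhs_hat(u_hat, N)  # one explicit Euler step
\end{verbatim}
In the actual computations this right-hand side is of course advanced with the Runge-Kutta schemes mentioned above.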
\begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{FIGS/wavelets.pdf} \end{center} \caption{Shannon wavelet (top) and Meyer wavelet (bottom) in physical space $\psi(x)$ (left) and the corresponding modulus of the Fourier transform $|\widehat{\psi}(k)|$ (right).\label{fig:wavelet}} \end{figure} The Fourier space discretization described above could be replaced by any other Galerkin discretization, using for instance finite elements, or wavelets as basis functions. The interest of using wavelets is to introduce adaptive discretizations, see e.g., \cite{ScVa10, ESRF2021}. In this case the projector $P$ changes over time and is non-smooth, which means that dissipation is introduced by removing/adding basis functions during the time stepping. This technique has been previously used for regularizing the Burgers equation and the incompressible Euler equations without a rigorous mathematical justification. To test the influence of wavelet thresholding we introduce the concept of pseudo-adaptive simulations. The Fourier Galerkin discretization is used to solve the PDE, but in each time step the numerical solution $u_N$ is decomposed into a periodic orthogonal wavelet series of $L^2(\mathbb{T}^d)$. For $d=1$ we thus have the 1D truncated wavelet series \begin{equation} P_J u_N(x) = u^J_N(x) = \overline u_{00} + \sum_{j=0}^{J-1} \sum_{i=0}^{2^j -1} \widetilde u_{ji} \psi_{ji} (x) \, , \quad \widetilde u_{ji} = \int_{\mathbb{T}} u_N(x) \psi_{ji} (x) dx \label{eq:OWS} \end{equation} where $\overline u_{00}$ is the mean value of the solution and $\widetilde u_{ji}$ are its wavelet coefficients. The wavelet $\psi_{ji}(x) = 2^{j/2} \psi(2^j x - i)$ quantifies fluctuations at scale $2^{-j}$ around position $i/2^j$, and $N=2^J$ denotes the total number of grid points, corresponding to the finest resolution. Figure~\ref{fig:wavelet} illustrates Shannon and Meyer wavelets together with their Fourier transforms, which are compactly supported. This implies that both are trigonometric polynomials and can be represented in a finite Fourier basis. For extensions to higher dimensions using tensor product constructions of wavelets, we refer to the literature \cite{daubechies1992}. Wavelet filtering, which is the basis of the Coherent Vorticity Simulation (CVS) \cite{FSK1999}, introduces a sparse representation of the solution by removing weak wavelet coefficients. Thresholding of the wavelet coefficients with a threshold $\epsilon$, which typically depends on time, is performed. This yields a projection of the numerical solution $u_N$ \begin{equation} P_J^{\epsilon} u_N(x) = u^J_{\epsilon} (x) = \overline u_{00} + \sum_{j=0}^{J-1} \sum_{i=0}^{2^j -1} \rho_{\epsilon}\left(\widetilde u_{ji} \right) \psi_{ji} (x) \, , \label{eq:OWS_epsilon} \end{equation} where $\rho_{\epsilon}$ is the (hard) thresholding operator defined as \begin{equation} \rho_{\epsilon}( x) \, = \, \left \{ \begin{array} {ll} x \quad \quad \quad \; \mbox{\rm for} \quad |x| > \epsilon\\ 0 \quad \quad \quad \; \mbox{\rm for} \quad |x| \le \epsilon\\ \end{array} \right. \label{eqn:hardthres} \end{equation} and $\epsilon$ denotes the threshold. The thresholding error can be estimated (see e.g., \cite{Cohen00}) and we have $$ || P_J u_N(x) - P_J^{\epsilon} u_N(x) ||_2 \le C \epsilon \,. $$
\medskip Using pseudo-adaptive simulations, the CVS algorithm can be summarized as follows \cite{PNFS13}: \medskip \begin{itemize} \item[i)] The Fourier coefficients of the solution $\widehat u_k$ for $|{k}| \lesssim N/2$ are advanced in time to $t=t_{n+1}$ and an inverse Fourier transform is applied on a grid of size $N$ to obtain $u_N$. \item[ii)] A forward wavelet transform is performed to obtain $P_J u_N(x)$, according to equation (\ref{eq:OWS}). \item[iii)] CVS filtering removes wavelet coefficients having magnitude below the threshold $\epsilon$. The threshold value is determined iteratively~\cite{azzalini2005} and initialized with $\epsilon_0 = q \sqrt{||u||_2 /(2N)}$ where $q$ is a compression parameter. The iteration steps are then obtained by $\epsilon_{s+1} = q \sigma[\widetilde u^{s}_{ji}]$ until $\epsilon_{s+1} = \epsilon_s$, where $\widetilde u^{s}_{ji}$ are the wavelet coefficients below $\epsilon_s$ and $\sigma[\cdot]$ is the standard deviation of the set of these coefficients. \item[iv)] A safety zone is added in wavelet space. The index set of retained wavelet coefficients in step iii) is denoted by $\Lambda$ and for each retained wavelet coefficient indexed by $(j,i) \in \Lambda$ neighboring coefficients in position and scale (5 in the present case) are added, as illustrated in figure~\ref{fig:safetyzone}. \item[v)] An inverse wavelet transform is applied to the wavelet coefficients above the final threshold and a Fourier transform is then performed to obtain the Fourier coefficients of the filtered solution at time step $t_{n+1}$. \end{itemize} \medskip Different choices of the wavelet basis for regularization have been tested, e.g., in \cite{PNFS13}, including various orthogonal wavelets and a Dual-Tree Complex Wavelet basis we refer to as `Kingslets' \cite{Kingslets}. The value of the compression parameter $q$ controls the number of discarded coefficients; in previous studies we experimentally found the value $q=5$ for `Kingslets' (complex-valued wavelets), while for orthogonal wavelets we used $q=8$. Adding a safety zone is necessary due to the lack of translational invariance of orthogonal wavelets, but also for local dealiasing. The idea is to keep neighboring coefficients in space and scale to account for the translation of shocks or steep gradients and the generation of finer scale structures. For complex-valued wavelets, which are translation invariant, no safety zone is required, as shown in~\cite{PNFS13}. For details and further discussion on possible choices of the safety zone we refer the reader to~\cite{OYSFK11}. \begin{figure} \begin{center} \includegraphics[width=0.55\textwidth]{FIGS/safetyzone.pdf} \end{center} \caption{Safety zone in wavelet coefficient space around an active coefficient $(j, i)$ in position $i$ and finer ($j+1$) and coarser scale ($j-1$).\label{fig:safetyzone}} \end{figure}
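The iterative threshold selection of step iii) and the hard thresholding of Eq. (\ref{eqn:hardthres}) can be condensed into a short sketch; the initialization below follows one possible reading of the formula for $\epsilon_0$ given above, and the synthetic coefficients are purely illustrative.
\begin{verbatim}
import numpy as np

def cvs_threshold(coeffs, q, max_iter=100):
    # Iterative threshold selection (step iii of the CVS algorithm):
    # eps_{s+1} = q * std(coefficients with modulus below eps_s),
    # iterated until the threshold no longer changes.
    N = coeffs.size
    eps = q*np.sqrt(np.sum(coeffs**2)/(2.0*N))  # one reading of eps_0
    for _ in range(max_iter):
        weak = coeffs[np.abs(coeffs) <= eps]
        if weak.size == 0:
            break
        eps_new = q*np.std(weak)
        if np.isclose(eps_new, eps):            # fixed point reached
            break
        eps = eps_new
    return eps

def hard_threshold(coeffs, eps):
    # hard thresholding operator rho_eps: keep only |c| > eps
    return np.where(np.abs(coeffs) > eps, coeffs, 0.0)

# synthetic coefficients: few strong (coherent) + many weak (incoherent)
c = np.concatenate([5.0*np.random.randn(100), 0.1*np.random.randn(4000)])
eps = cvs_threshold(c, q=8.0)      # q = 8 for real orthogonal wavelets
c_coherent = hard_threshold(c, eps)
\end{verbatim}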
\section{Numerical experiments} \label{sec:numex} In the following we show results to illustrate the properties of dynamical Galerkin schemes, and in particular their ability to introduce energy dissipation into the numerical method, which can be useful for stabilization. As examples we consider first the inviscid 1D Burgers equation using periodic boundary conditions. The initial condition is a simple sine wave given by $u(x,t=0) = \sin(2 \pi x)$ for $x \in \mathbb{T}$. Unless explicitly noted, computations are done with $N=2048$ collocation points and the time step $\Delta t$ is chosen so that $\Delta x/\Delta t = 16$, where $\Delta x = 1/N$ is the grid discretization size. This choice ensures that the CFL condition is met \cite{CQHZ88}. \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{FIGS/singlecoeff_energy_all_sine.pdf} \end{center} \caption{\label{fig:test} Filtering of one mode in (a) Fourier space and (b) in wavelet space for the inviscid 1D Burgers equation. Time evolution of the energy. As expected, energy loss is observed.} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.56\textwidth]{FIGS/energyleak_all_sine.pdf} \end{center} \caption{\label{fig:test2} Difference between dissipated energy and filtered energy (equation \ref{eq:delta}) as a function of the time step $\Delta t$, when a single Fourier mode or wavelet coefficient is filtered. A residual difference remains when Daubechies wavelets are employed. \label{fig:test2}} \end{figure} \subsection{Punctual selection in the Fourier basis} The simplest illustration, which we develop as a proof of concept, is a punctual selection in the Fourier basis. Starting at some time instant $t_b$ and during an entire interval $[t_b, t_e]$, we set to zero the Fourier coefficients corresponding to a given wave number $k_f$ after each time step (both positive and negative modes are erased, such that the solution remains real). The projection operator thus becomes time dependent and discontinuous and we have \begin{equation} P_N(t)_{[t_b, t_e]}^{k_f} u (x) \, = \, \left \{ \begin{array} {ll} \sum_{|{k}| \lesssim N/2, |k| \ne k_f} \widehat u_k \, e^{i { k} \, {x}}\quad \quad \; \mbox{\rm for} \quad {t} \in [t_b, t_e] \\ \sum_{|{ k}| \lesssim N/2} \widehat u_k \, e^{i { k} \, { x}} \quad \; \, \, \quad \quad \quad \; \mbox{elsewhere.}\\ \end{array} \right. \label{eqn:P_filter_fourier} \end{equation} The removal of these modes instantly dissipates energy of the numerical solution, but from there on energy is conserved. This remains true even after the reintroduction of the coefficients into the projection basis, despite the discontinuity of the projection operator. Indeed, according to (\ref{eq:energy_equation2}) dissipation is observed as long as $\Vert (1-P_h(t^+)) u_h(t) \Vert^2$ is non zero, but at $t=t_e$ this quantity vanishes and therefore energy is conserved. We note that since a multistage time marching scheme is employed, it is necessary to reset the removed coefficients to zero after each substage, to ensure they have no effect on the solution. We show in figure~\ref{fig:test}(a) the time evolution of the energy when the filtering wave number is $k_f=2$. The projection operator changes at $t_b=0.16$ and is then restored at $t_e=0.2$. Dissipation is introduced by this change of projection basis and, up to numerical errors, the lost energy amounts to the energy content of the discarded coefficients. This can be seen in figure~\ref{fig:test2}, where we plot, as a function of the time step $\Delta t$, the quantity \begin{equation}\label{eq:delta} \delta = (\Vert u_N(0) \Vert^2-\Vert u_N(t_b) \Vert^2) - \Vert (1-P_N(t_b^+)^{k_f}_{[t_b,t_e]}) u_N(t_b)\Vert^2, \end{equation} which should be zero according to (\ref{eq:energy_equation2}), since the PDE is energy conserving up to time $t_b$. One observes that $\delta$ indeed converges to zero up to machine precision (of order $10^{-15}$) as $\Delta t$ is decreased.
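The energy bookkeeping behind Eq. (\ref{eq:delta}) is easily mimicked in a few lines; the following sketch uses an illustrative two-mode signal rather than an actual Burgers solution.
\begin{verbatim}
import numpy as np

N = 2048
x = np.arange(N)/N
u = np.sin(2*np.pi*x) + 0.3*np.sin(4*np.pi*x)
u_hat = np.fft.fft(u)
k = np.fft.fftfreq(N, d=1.0/N)

kf = 2                                           # filtered wave number
Pu_hat = np.where(np.abs(k) == kf, 0.0, u_hat)   # erase the +-kf modes

energy = lambda v_hat: np.sum(np.abs(v_hat)**2)/N**2  # Parseval
print(energy(u_hat) - energy(Pu_hat))  # = ||(1-P)u||^2 = 0.3^2/2 here
\end{verbatim}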
\subsection{Punctual selection in real orthogonal wavelet bases} To illustrate dissipation through reprojection on a wavelet basis, we extend the previous idea of a punctual selection now to wavelet space. The solution of the Fourier Galerkin method is decomposed in each time step into an orthogonal wavelet basis, as in equation (\ref{eq:OWS}). A single energy-containing coefficient, of scale index $j_f$ and position index $i_f$, is then set to zero after every time step during some given time interval $[t_b,t_e]$. The projection operator is once again time dependent and discontinuous, and may be written as \begin{equation} P_J(t)_{[t_b, t_e]}^{j_f,i_f} u (x) \, = \, \left \{ \begin{array} {ll} \overline u_{00} + \sum_{j=0}^{J-1} \sum_{i=0}^{2^j -1} \widetilde u_{ji} \psi_{ji} (x) (1-\delta_{jj_f}\delta_{ii_f})\quad \; \; \mbox{\rm for} \quad {t} \in [t_b, t_e] \\ \overline u_{00} + \sum_{j=0}^{J-1} \sum_{i=0}^{2^j -1} \widetilde u_{ji} \psi_{ji} (x) \quad \quad \quad \quad \quad \quad \quad \; \mbox{elsewhere,}\\ \end{array} \right. \label{eqn:P_filter_wavelet} \end{equation} for a chosen orthogonal wavelet $\psi_{ji}(x)$. We show in figure~\ref{fig:test}(b) the energy time evolution for the case of projections in the Meyer wavelet basis. The filtered coefficient corresponds to $j_f=1$ and $i_f=1$. As before, the filtering happens from time $t_b=0.16$ to $t_e=0.2$. Energy is dissipated instantaneously at the first change of the projector, but is otherwise conserved. Figure~\ref{fig:test2} also shows the convergence of the quantity $\delta$ from equation \ref{eq:delta}, now with the projector replaced by equation \ref{eqn:P_filter_wavelet}. Similar results are also obtained with projections onto a Shannon wavelet basis. Interestingly, the same convergence is not observed in figure~\ref{fig:test2} when Daubechies wavelets are used. As illustrated in figure~\ref{fig:wavelet}, working with Shannon wavelets is actually equivalent to working with the Fourier basis, since they are compactly supported in spectral space, with a sharp cut-off. Combining multiscale Shannon wavelets amounts to covering the spectral space up to some Galerkin cut-off frequency. When projecting with this basis, one is simply damping some existing Fourier coefficients without introducing new wave numbers. Hence, when going back to the fully dealiased Fourier space, no further energy is lost. The Meyer wavelet is likewise compactly supported in spectral space; however, the projection onto Meyer wavelets is only equivalent to a Fourier projection when the number of Fourier modes is increased from $N$ to $3N/2$, which is the case when dealiasing is applied. Therefore, in both cases the dissipated energy indeed corresponds to the energy lost due to the discontinuity of the projection operator. The Daubechies wavelet, on the other hand, is not compactly supported in spectral space. When a projection is made in wavelet space and some coefficient is discarded, this will affect wave numbers beyond the dealiased ones, which then cease to vanish. After returning to Fourier space, the dealiasing operation will set all these to zero and further energy dissipation occurs. For this reason, the quantity $\delta$ shows a residual value as the time step decreases and does not attain machine precision, as seen in figure~\ref{fig:test2}. In this simulation, Daubechies 12 wavelets were employed and the projector corresponds to equation \ref{eqn:P_filter_wavelet} with $j_f=0$ and $i_f=0$; a minimal sketch of such a punctual wavelet filter is given below.
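This sketch, with the Haar basis chosen for simplicity instead of the wavelets used in the experiments, erases one coefficient of a periodized orthogonal decomposition and recovers the dissipated energy from orthogonality; it relies on the PyWavelets package, and the indexing convention below is that of its \texttt{wavedec} output.
\begin{verbatim}
import numpy as np
import pywt  # PyWavelets

N, J = 2048, 11                 # N = 2^J grid points on the unit torus
x = np.arange(N)/N
u = np.sin(2*np.pi*x)

# full periodized orthogonal wavelet decomposition, cf. Eq. (OWS):
# coeffs[0] holds the (scaled) mean, coeffs[j+1] the scale-j details
coeffs = pywt.wavedec(u, 'haar', mode='periodization', level=J)

jf, i_f = 0, 0                  # erase the coefficient (j_f, i_f)
coeffs[jf + 1][i_f] = 0.0
u_filt = pywt.waverec(coeffs, 'haar', mode='periodization')

# by orthogonality the lost energy equals the squared coefficient / N
print("dissipated energy:", np.mean(u**2) - np.mean(u_filt**2))
\end{verbatim}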
Note that the indices are chosen so that the amount of dissipated energy is comparable in all cases. This additional energy dissipation can once again be understood as due to a change in the projector, i.e., going from the wavelet projector removing one coefficient, given in equation \ref{eqn:P_filter_wavelet}, to the Fourier projector given in equation \ref{eq:Fourierprojector}. In other words, it is the fact that these two projectors do not commute when Daubechies wavelets are used (or any other basis not compactly supported in Fourier space, i.e., within the fully dealiased spectral space) which leads to more dissipation than that introduced by the filtering. This shows that pseudo-adaptive simulations, such as those discussed in section~\ref{sec:discret}, must be interpreted with care, since they may not exactly reproduce what one would get with a fully adaptive scheme in wavelet space. Still, they are valuable tools to predict the solution's behavior in a simpler and faster setup, and we shall apply them to illustrate the introduction of dissipation in conservation laws through a dynamical Galerkin scheme. \section{Application to the inviscid Burgers equation and incompressible Euler using CVS filtering} \label{sec:appl} In the following section we present in a concise way some results from the literature to illustrate the dissipation properties of adaptive Galerkin methods using CVS filtering. We show some numerical examples for the one-dimensional inviscid Burgers equation, including a space-time convergence study, and for the incompressible Euler equations in two and three dimensions. For details on the numerical simulations we refer to \cite{PNFS13} and \cite{Farge2017}. \subsection{Inviscid Burgers} We consider the inviscid Burgers equation (\ref{eq:inviscidBurgers}), discretized with a Fourier pseudo-spectral method and endowed with CVS filtering, described in section~\ref{sec:discret}, using $N=16384$ Fourier modes. For the sinusoidal initial condition $u(x,t=0)=\sin(2\pi x)$ the time evolution of the reference solution, the so-called entropy solution, can be easily computed with the method of characteristics, separately in each half of the domain. Figure \ref{fig:burgers_cvs} shows the solution of the standard Fourier Galerkin method, which preserves energy, and the solution obtained with the dynamical Galerkin scheme using CVS filtering with `Kingslets'. We observe that the oscillations (also called resonances, see \cite{RFNM11}), which appear as soon as the shock is formed, are removed using CVS filtering. This is further confirmed in figure \ref{fig:burgers_cvs_zoom} (left), where the oscillations are shown to be completely filtered out and a smooth solution close to the reference solution is obtained. To assess the filtering performance, we develop a space-time convergence analysis by computing the time integrated relative $L^2$-distance from the filtered solution $u_N$ to the analytical reference solution $u_\mathrm{ref}$. We compute \begin{equation}\label{eq:error} \mathcal{E} = \int_{t_0}^{t_1} \frac{\Vert u_N(t) - u_\mathrm{ref}(t)\Vert^2}{\Vert u_\mathrm{ref}(t)\Vert^2} dt, \end{equation} for different space resolutions, while keeping fixed the previous relation between time and space discretization, that is, $\Delta x/\Delta t = 16$.
Since the filtering is only relevant after the shock formation, we actually start the analysis from a time right before the shock time $t_s = \inf_x \left[ -1/u'(x,0)\right] \approx 0.1592$, i.e., $t_0 = t_s - \Delta t$, and carry on the integration up to $t_1=0.3$. Results for complex-valued Kingslets and real-valued Shannon wavelets, with and without the safety zone discussed in section \ref{sec:discret}, are shown in figure \ref{fig:convergence}. We can observe that CVS with Kingslets is in excellent agreement with the reference solution, showing an $\mathcal{O}(\Delta x)$ convergence rate. Although typically one order of magnitude poorer (an under-performance that we quantify here, and which had previously only been assessed visually in \cite{PNFS13}), CVS with Shannon wavelets also shows first order convergence towards the reference solution if the safety zone is present. Meanwhile, as anticipated in section \ref{sec:discret}, figure \ref{fig:convergence}(c) shows that CVS is not able to properly regularize the solution when employing real orthogonal wavelets if a safety zone is not introduced. \begin{figure} \begin{center} \includegraphics[width=0.96\textwidth]{FIGS/profiles_new.pdf} \end{center} \caption{CVS-filtered Galerkin truncated inviscid Burgers equation using complex-valued wavelets (Kingslets, in black) together with the non-dissipative Galerkin truncated solution (blue) at times $t=0.1644$, $0.1793$ and $0.3$. The solutions are periodically shifted to the right, so that both the resonances and the shocks can be easily seen.\label{fig:burgers_cvs}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.98\textwidth]{FIGS/convergence_CVS_kings_shannon.pdf} \end{center} \caption{Time integrated relative $L^2$-error (equation \ref{eq:error}) as a function of space resolution $\Delta x$. (a) Kingslets (b) Shannon wavelet with the safety zone (c) Shannon wavelet without the safety zone. The straight lines have slope 1.\label{fig:convergence}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.98\textwidth]{FIGS/detail_energy_new.pdf} \end{center} \caption{Left: Detail of the solution of the CVS-filtered Galerkin truncated inviscid Burgers equation using complex-valued wavelets (Kingslets, in black) together with the non-dissipative Galerkin truncated solution (blue) at time $t=0.1644$. Right: Time evolution of the energy $E(t)$ of CVS filtered solutions for different wavelets, with and without safety zone, together with the analytical result. \label{fig:burgers_cvs_zoom} } \end{figure} The evolution of the energy $E= ||u||^2$ shown in figure~\ref{fig:burgers_cvs_zoom} (right) further quantifies the dissipation of the adaptive schemes for different real orthogonal wavelets. Once again, in the presence of the safety zone the wavelet adaptation removes sufficient energy, thus matching the analytical energy evolution. However, it is now seen that without the safety zone not enough energy is dissipated and the solution is not properly regularized. For a detailed description of similar simulations and a physical interpretation we refer to \cite{PNFS13}. \subsection{Incompressible Euler equations} To illustrate the effect of dissipation when adapting the basis functions using projectors changing over time, we consider the incompressible Euler equations given in (\ref{eq:Euler}) and discretize them with a classical Fourier Galerkin scheme. In these pseudo-adaptive simulations we apply CVS filtering in each time step.
Detailed results can be found in \cite{PNFS13} and \cite{Farge2017} for the two- and three-dimensional cases, respectively. \begin{figure}[h!] \begin{center} \includegraphics[width=1.0\textwidth]{FIGS/fig_euler2dzoom.pdf} \end{center} \caption{Filtering of 2D incompressible Euler using complex-valued wavelets (Kingslets). Left: Contours of the Laplacian of vorticity $\Delta \omega$ at $t=0.71$. The Galerkin truncated solution is shown in gray, the CVS solution is given in black. Right: 1D cut of the Laplacian of vorticity for the oscillatory Galerkin truncated solution and the wavelet-filtered smooth solution. From \cite{PNFS13}.\label{fig:2deulerzoom}} \end{figure} In the two-dimensional case a random initial condition is evolved in time with third order Runge-Kutta time integration using a resolution of $N=1024^2$ Fourier modes \cite{PNFS13}. Visualizations of the Laplacian of the vorticity $\omega = \nabla \times {\bm u}$ in the fully developed nonlinear regime are shown in figure~\ref{fig:2deulerzoom} (left). For the Galerkin truncated solution we find oscillations in the isolines of $\Delta \omega$ (a small scale quantity, which is sensitive to oscillations), while the regularized solution using complex-valued wavelets with CVS filtering yields a smooth solution. A one-dimensional cut in figure~\ref{fig:2deulerzoom} (right) illustrates that in the CVS solution the oscillations have indeed been removed. The time evolution of the enstrophy, defined as $\frac{1}{2} ||\omega ||_2^2$ and shown in figure~\ref{fig:2deuler_enstrophy}, confirms that, in contrast to the Galerkin truncated simulation, the CVS computation is dissipative: the enstrophy departs from that of the conservative Galerkin truncated case and decays for times larger than 1.4. For more details including a physical interpretation we refer to \cite{PNFS13}. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{FIGS/fig_euler2d_enstrophy.pdf} \end{center} \caption{Filtering of 2D incompressible Euler using complex-valued wavelets (Kingslets). Evolution of enstrophy $1/2 || \omega ||_2^2$ for the Galerkin truncated case and the adaptive wavelet filtered case using Kingslets. From \cite{PNFS13}.\label{fig:2deuler_enstrophy}} \end{figure} \medskip The three-dimensional Fourier Galerkin computations of incompressible Euler have been performed at resolution $N=512^3$ in a periodic cubic domain with a fourth order Runge-Kutta scheme for time integration \cite{Farge2017}. A statistically stationary flow of fully developed homogeneous isotropic turbulence obtained by DNS is used as initial condition. For CVS filtering Coiflet 12 wavelets \cite{daubechies1992} were used. Note that the wavelet decomposition and subsequent filtering have been applied to the vorticity ${\bm \omega} = \nabla \times {\bm u}$ (and not to the velocity ${\bm u}$) in each time step, and subsequently the filtered velocity has been computed by applying the Biot-Savart operator $(\nabla \times)^{-1}$ in Fourier space. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{FIGS/fig_3dEuler_energy.pdf} \includegraphics[width=0.49\textwidth]{FIGS/fig_euler3d_enstrophy.pdf} \end{center} \caption{Energy (left) and enstrophy (right) evolution for 3D incompressible Euler using Galerkin truncated Euler (Euler), wavelet filtered Euler (CVS) and Navier-Stokes (NS). HV and EV stand for hyperviscous regularization and Euler-Voigt regularization, respectively, which are not discussed here.
From \cite{Farge2017}.\label{fig:3deuler_energy}} \end{figure} The time evolution of the energy, $\frac{1}{2} || {\bm u} ||_2^2 $, and enstrophy, $\frac{1}{2} || {\bm \omega} ||_2^2 $, in figure~\ref{fig:3deuler_energy} first shows that the Galerkin truncated Euler computation preserves energy and that the enstrophy grows rapidly in time due to the absence of regularization. For CVS we observe that energy is dissipated, similarly to what is observed for Navier-Stokes, and that the enstrophy likewise exhibits an evolution similar to NS and does not grow rapidly. Visualizations of intense vorticity structures in figure~\ref{fig:3deuler_vorticty} for CVS and NS show their similar tube-like character, while the Galerkin truncated Euler solution resembles Gaussian white noise without the presence of coherent structures. For details including a physical interpretation of the results we refer to \cite{Farge2017}. \begin{figure} \begin{center} \includegraphics[width=0.31\textwidth]{FIGS/fig_euler3d_ee.pdf} \includegraphics[width=0.31\textwidth]{FIGS/fig_euler3d_cvs.pdf} \includegraphics[width=0.33\textwidth]{FIGS/fig_euler3d_ns.pdf} \end{center} \caption{Vorticity isosurfaces, $|{\bm \omega}| = M + 4 \sigma$ (where $M$ is the mean value and $\sigma$ the standard deviation of the modulus of vorticity of NS) for 3D incompressible Euler using Galerkin truncated Euler (Euler, left), wavelet filtered Euler (CVS, center) and Navier-Stokes (NS, right) at time $t/\tau = 3.4$. From \cite{Farge2017}. \label{fig:3deuler_vorticty}} \end{figure} \section{Conclusions} \label{sec:concl} We presented a mathematical framework for analyzing dynamical Galerkin discretizations of evolutionary PDEs. The concept of weak formulations of countable ODEs with non-smooth right-hand side in Banach spaces is used. We showed that changing the set of active basis functions, which implies that the projection operators are non-differentiable in time, can introduce energy dissipation. This feature is of crucial interest for adaptive schemes for time dependent equations, e.g., adaptive wavelet schemes for hyperbolic conservation laws, and yields a mathematical explanation for their regularizing properties due to dissipation. Numerical experiments illustrated the above results for the inviscid Burgers equation and the incompressible Euler equations in two and three space dimensions. To this end the concept of pseudo-adaptive simulations was introduced to test the influence of wavelet thresholding, while solving the PDE with the classical Fourier Galerkin discretization. The results showed that adaptive wavelet-based regularization (i.e., filtering out the weak wavelet coefficients) of Galerkin schemes introduces dissipation, together with the related space adaptivity. The latter can be used for reducing the computational cost in fully adaptive computations. Finally, let us mention that an interesting link exists with LES models, see e.g., \cite{SZFA2006}, as the equivalence between nonlinear wavelet thresholding (using Haar wavelets) and a single step of explicitly discretized nonlinear diffusion can be shown, see \cite{MWS2003}. Perspectives of this work are systematic studies of nonlinear hyperbolic conservation laws using adaptive Galerkin discretizations, in particular wavelet-based schemes and their regularization properties. \bibliographystyle{siam}
\section{Introduction} Dark energy is a fundamental component of the current standard cosmological model. It would be very difficult to explain the set of present-day cosmological observations without it. Specifically, we refer to the luminosity-redshift relationship from observations of supernovae of type Ia (SNIa) \cite{Riess:1998cb,Perlmutter:1998np,Riess:2004nr}, the matter power spectrum of large scale structure as inferred from galaxy redshift surveys like the Sloan Digital Sky Survey (SDSS) \cite{Tegmark:2003ud} and the 2dF Galaxy Redshift Survey (2dFGRS) \cite{Colless:1998yu}, and the anisotropies in the Cosmic Microwave Background Radiation (CMBR) \cite{Spergel:2003cb}. Despite its major importance in explaining the astrophysical data, the nature of dark energy is one of the greatest mysteries of modern cosmology. The simplest and most popular candidates for it are the cosmological constant (see e.g. \cite{Carroll:2000fy}) and minimally coupled scalar fields (see e.g. \cite{Wetterich:1994bg,Ratra:1987rm,Caldwell:1997ii,Zlatev:1998tr}). However, many other candidates were proposed based on high energy physics phenomenology (see e.g. \cite{Amendola:1999er,Farrar:2003uw,Brookfield:2005td,Bagla:2002yn,Padmanabhan:2002cp,Armendariz-Picon:2000ah,Bertolami:1998dn,Boisseau:2000pr, Caldwell:1999ew,Gibbons:2003gb,Chiba:1999ka}), and many investigations of their possible astrophysical and cosmological signatures were undertaken (see e.g. \cite{Seljak:2004xh,Abramo:2004ji,Mota:2003tc,Evans:2004iq,Manera:2005ct,Amarzguioui:2004kc, Alam:2003fg,Mota:2003tm,Melchiorri:2002ux,Mota:2004pa,Hannestad:2002ur, Koivisto:2004ne,Nunes:2004wn,Koivisto:2005nr}). With so many possible candidates, it is imperative to understand which main properties of the dark energy component could leave specific signatures in the astronomical data, and so could help us to discriminate among all these models. In a phenomenological approach, dark energy might be mainly characterized by its equation of state $w$, its sound speed $c_s$, and its anisotropic stress $\sigma$ \cite{Hu:1998tj}. Much effort has been put into determining the equation of state of dark energy, in an attempt to constrain theories. The equation of state determines the decay rate of the energy density and thus affects both the background expansion and the evolution of matter perturbations (see e.g. \cite{Peebles:2002gy}). An equally insightful characteristic of dark energy is its speed of sound. This does not affect the background evolution but is fundamental in characterizing the behavior of its perturbations. Hence many authors have explored its effect on the evolution of fluctuations in the matter distribution (see e.g. \cite{Bean:2003fb,Sandvik:2002jz,Avelino:2002fj,Balakin:2003tk}). However, the investigation of the effects of the anisotropic stress has been largely neglected. The main reason for disregarding the anisotropic stress in the dark energy fluid might be that conventional dark energy candidates, such as the cosmological constant or scalar fields, are perfect fluids with $\sigma=0$. However, since there is no fundamental theoretical model to describe dark energy, there are no strong reasons to stick to such an assumption. In fact, dark energy vector field candidates have been proposed \cite{Armendariz-Picon:2004pm,Kiselev:2004py,Zimdahl:2000zm, Novello:2003kh,Wei:2006tn}, and these have $\sigma\neq 0$. Of course, if dark energy is such a vector, one might break the isotropy of a Friedmann-Robertson-Walker universe.
However, as long as it remains subdominant, this violation is likely to be observationally irrelevant \cite{Barrow:1997as}. Once dark energy comes to dominate though, one would expect an anisotropic expansion of the universe, in conflict with the significant isotropy of the CMBR \cite{Bunn:1996ut}. On the other hand, there appear to be hints of statistical anisotropy in the CMBR fluctuations \cite{Jaffe:2005pw,Bielewicz:2004en,Larson:2004vm,Schwarz:2004gk,Copi:2003kt,deOliveira-Costa:2003pu}. Recently the possibility of viscous dark energy has gained attention \cite{Brevik:2004sd,Brevik:2005ue,Nojiri:2005sr, Brevik:2005bj}. These models are usually restricted to the context of bulk viscosity, although one could expect the shear viscosity to be dominant \cite{Brevik:2005bj}. One can allow bulk viscosity in a Friedmann-Robertson-Walker (FRW) universe, but when the shear is not neglected one has to face the difficulties of an anisotropic universe. However, shear viscosity at the perturbative level is compatible with the assumption of an isotropic FRW background. In fact the anisotropic stress perturbation is crucial to the understanding of the evolution of inhomogeneities in the early, radiation dominated universe. Therefore an obviously interesting question is whether present observational data could allow for an anisotropic stress perturbation in the late universe, which is dominated by the mysterious dark energy fluid. Motivated by all these possibilities, we investigate whether the possible existence of an anisotropic stress in the dark energy component would result in a specific cosmological signature which could be probed using large scale structure data, and whether it would still be compatible with the latest CMBR temperature anisotropies and the matter power spectrum. The article is organized as follows: In section II we discuss the parameters describing a general dark energy fluid with anisotropic stress. In section III we consider dark energy imperfect fluid models parameterized with a constant equation of state, sound speed and anisotropic stress. We investigate the effects on the late-time perturbation evolution, on the integrated Sachs-Wolfe (ISW) effect in the CMBR anisotropies and on the matter power spectrum. In section IV we extend the analysis to models unifying dark energy with dark matter. We end the article with a summary of our findings and conclusions. \section{Dark Energy Stress Parameterization} In its simplest description, the dark energy component is fully characterized by its equation of state, defined as \begin{eqnarray} w \equiv \frac{p}{\rho}, \end{eqnarray} where $\rho$ is the energy density and $p$ is the pressure of the fluid. If $u_\mu$ is the four-velocity of the fluid, and the projection tensor $h_{\mu\nu}$ is defined as $h_{\mu\nu} \equiv g_{\mu\nu} + u_\mu u_\nu$, we can write the energy momentum tensor for a general cosmological fluid as \begin{eqnarray} \label{fluid} T_{\mu\nu}= \rho u_\mu u_\nu + ph_{\mu\nu} + \Sigma_{\mu\nu}, \end{eqnarray} where $\Sigma_{\mu\nu}$ can include only spatial inhomogeneity. We define a perfect fluid by the condition $\Sigma_{\mu\nu}=0$. If in addition the fluid is adiabatic, $p=p(\rho)$, the evolution of its perturbations is described by the adiabatic speed of sound $c_a$. This is in turn fully determined by the equation of state $w$, \begin{eqnarray} \label{ca} c_a^2 \equiv \frac{\dot{p}}{\dot{\rho}} = w - \frac{\dot{w}}{3H(1+w)}. \end{eqnarray} For an adiabatic fluid, $\delta p = c_a^2 \delta \rho$.
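As a simple worked example of Eq. (\ref{ca}), writing $\dot{w} = aH \, \mathrm{d}w/\mathrm{d}a$ gives $c_a^2 = w - \frac{a}{3(1+w)} \frac{\mathrm{d}w}{\mathrm{d}a}$, so that a constant equation of state yields simply $c_a^2 = w$, which is negative for dark energy; we will make repeated use of this fact below.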
In the general case, there may be more degrees of freedom and the pressure $p$ might not be a unique function of the energy density $\rho$. An extensively studied example is quintessence \cite{Wetterich:1994bg,Ratra:1987rm,Caldwell:1997ii,Zlatev:1998tr}. For such a scalar field the variables $w$ and $c_s^2$ depend on two degrees of freedom: the field and its derivative, or equivalently, the kinetic and the potential energy of the field. Then the dark energy (entropic) sound speed is defined as the ratio of pressure and density perturbations in the frame comoving with the dark energy fluid, \begin{eqnarray}\label{cs} c_s^2 \equiv \frac{\delta p}{\delta \rho}_{|de}. \end{eqnarray} In the adiabatic case, $c_s^2= c_a^2$, which holds in any frame, but in general the ratio $\delta p/\delta \rho$ is gauge dependent. Hence, in the case of an entropic fluid such as a scalar field, one needs both its equation of state and its sound speed as defined in Eq. (\ref{cs}) to have a complete description of dark energy and its perturbations. However, in order to have an even more general set of parameters to fully describe a dark energy fluid and its perturbations, besides $w$ and $c_s$ one should also consider the possibility of anisotropic stress. This is important because it enters directly into the Newtonian metric, as opposed to $w$ and $c_s$ which only contribute through the causal motion of matter \cite{Hu:1998tj}. Taking this generalization into account, in the synchronous gauge \cite{Ma:1995ey}, the evolution equations for the dark energy density perturbation and velocity potential can be written as \cite{Hannestad:2005ak} \begin{eqnarray} \label{deltaevol} \dot{\delta} &=& -(1+w)\left\{\left[k^2+9H^2(c_s^2-c_a^2)\right]\frac{\theta}{k^2} + \frac{\dot{h}}{2}\right\} \nonumber\\ &-& 3H(c_s^2-w)\delta, \end{eqnarray} \begin{eqnarray} \label{thetaevol} \dot{\theta} = -H(1-3c_s^2)\theta+\frac{c_s^2k^2}{1+w}\delta-k^2\sigma, \end{eqnarray} where $h$ is the trace of the synchronous metric perturbation. Here $\sigma$ is the anisotropic stress of dark energy, related to the notation of Eq.(\ref{fluid}) by $(\rho + p)\sigma \equiv -(\hat{k}_i\hat{k}_j-\frac{1}{3}\delta_{ij})\Sigma^{ij}$. Basically, while $w$ and $c_s^2$ determine, respectively, the rotationally invariant background and perturbative pressure of the fluid, $\sigma$ quantifies how much the pressure of the fluid varies with direction. Generally such a property implies shear viscosity in the fluid, and thus its effect is to damp perturbations. A covariant form for the viscosity generated in the fluid flow is \cite{misner} \begin{eqnarray} \label{l-l} \Sigma_{\mu\nu} = \varsigma\left(u_{\mu;\alpha}h^\alpha_\nu +u_{\nu;\alpha}h^\alpha_\nu - u^\alpha_{\phantom{\alpha};\alpha} h_{\mu\nu}\right) + \zeta u^\alpha_{\phantom{\alpha};\alpha} h_{\mu\nu}. \end{eqnarray} Now the conservation equations $T^{\mu \nu}_{\phantom{\mu\nu};\mu}=0$ reduce to the Navier-Stokes equations in the non-relativistic limit. Here $\varsigma$ is the shear viscosity coefficient, and $\zeta$ represents bulk viscosity. We set the latter to zero since we demand that $\Sigma_{ij}$ is traceless. In cosmology we have $u_\mu = (1,-v_{,i})/a$ in the synchronous gauge, and the velocity potential $\theta$ is the divergence of the fluid velocity $v$. It is then straightforward to check that the components of Eq.(\ref{l-l}) vanish except in the off-diagonal of the perturbed spatial metric.
One finds that \begin{eqnarray} \label{n-s} \sigma = \frac{\varsigma}{k}\left(\theta - \dot{H}_T\right), \end{eqnarray} where $H_T$ is the scalar potential of the tensorial metric perturbations, which in the synchronous gauge equals $H_T=-h/2-3\eta$, where $\eta$ is a metric perturbation. From the coordinate transformation properties of $T_{\mu\nu}$ it follows that $\sigma$ must be gauge-invariant, and indeed the linear combination $\theta-\dot{H}_T$ is frame-independent. However, the anisotropic stress is not necessarily given directly by $\theta-\dot{H}_T$. For neutrinos this term instead acts as a source for the anisotropic stress, which is also coupled to higher multipoles in the Boltzmann hierarchy. Thus the evolution of the stress must, at least in principle, be solved from a complicated system of evolving multipoles. The approach we will use in this article to specify the shear viscosity of the fluid is more in line with the neutrino stress than with Eq.(\ref{n-s}). Following Hu \cite{Hu:1998kj}, we describe the evolution of the anisotropic stress with the equation \begin{eqnarray} \label{sigmaevol} \dot{\sigma}+3H\frac{c_a^2}{w}\sigma = \frac{8}{3}\frac{c_{vis}^2}{1+w}(\theta+\frac{\dot{h}}{2}+3\dot{\eta}). \end{eqnarray} Then the shear stress is not determined algebraically from fluctuations in the fluid as was the case in Eq. (\ref{n-s}), but instead it must be solved from a differential equation. This phenomenological set-up is motivated as follows \cite{Hu:1998tj}. One can guess that the anisotropic stress is sourced by shear in the velocity and in the metric fluctuations. Again one must take into account the coordinate transformation properties of $\sigma$, and construct a gauge-invariant source term in the differential equation. As mentioned, an appropriate linear combination is $\theta-\dot{H}_T$. Up to the viscosity parameter $c^2_{vis}$, this determines the right hand side of Eq. (\ref{sigmaevol}). On the left hand side there also appears a drag term accounting for dissipative effects. We have adopted a natural choice for the dissipation time-scale, $\tau^{-1}_\sigma = 3H$. One may then check that Eq. (\ref{sigmaevol}) with $w=c_{vis}^2=1/3$ reduces to the evolution equation for the massless neutrino quadrupole in the truncation scheme where the higher multipoles are neglected \cite{Ma:1995ey} (this applies also to photons when one ignores their polarization and coupling to baryons). In what follows, we will study the consequences of Eq.(\ref{sigmaevol}) for fluids with negative equations of state. For $w<-1$, one should consider negative values of $c_{vis}^2$, as was suggested in Ref. \cite{Huey:2004jz}, so that the parameter $c_{vis}^2/(1+w)$ remains positive. We will return to this in section III.C. Note that the parameterization of Eqs.(\ref{deltaevol}), (\ref{thetaevol}) and (\ref{sigmaevol}) describes cosmological fluids in a very general way. The system reduces to the cold dark matter equations when $(w,c_s^2,c_{vis}^2)$ is $(0,0,0)$, and relativistic matter corresponds to $(1/3,1/3,1/3)$. A scalar field with a canonical kinetic term is given by $(w(a),1,0)$, where $-1 < w(a) < 1$. With an arbitrary kinetic term one can construct k-essence models \cite{Armendariz-Picon:2000ah} characterized by unrestricted equations of state and speeds of sound, $(w(a),c_s^2(a),0)$, but vanishing shear. On the other hand, one should keep in mind that the parameterization cannot be completely exhaustive.
It does not cover, for example, a cosmological fluid with anisotropic stress determined by Eq. (\ref{n-s}) when $\varsigma \neq 0$. We might address the viability of this approximation elsewhere, but restrict ourselves here to the parameterization given in Eq.(\ref{sigmaevol}). \section{Stressed Dark Energy Fluid} We will investigate the effect of dark energy perturbations on the CMBR anisotropies and on the matter power spectrum with the simplest assumption that all of the three parameters $w$, $c_s^2$ and $c_{vis}^2$ are constant. This is an accurate description for a wide variety of models for which these parameters can be well approximated at moderate redshifts by their time-averaged values. For the energy content of the universe we use $\Omega_b=0.044$, $\Omega_{cdm}=0.236$, $\Omega_{de}=0.72$ and $h_0 = 0.68$. In all the numerical calculations we assume a scale invariant initial power spectrum and set the optical depth to last scattering to zero. We normalize the perturbation variables by setting the primordial comoving curvature perturbation to unity. To compare with observations, the resulting power spectra must then be multiplied by the primordial amplitude of the curvature perturbation. For this we employ the same normalization for all the models considered in this article. Although we do not search for the best-fit models here, we include the WMAP data \cite{Spergel:2003cb} and the SDSS data \cite{Tegmark:2003ud} in the figures in order to give an idea about the viability of the studied models. The WMAP error bars include the cosmic variance, which is the dominant source of uncertainty for the small $\ell$'s. The calculations are performed with a modified version of the CAMB code \cite{Lewis:1999bs}. In addition to specifying the cosmological parameters and the evolution equations (\ref{deltaevol}), (\ref{thetaevol}) and (\ref{sigmaevol}), we must address the initial conditions in the early universe. For the relative entropy between radiation and dark energy to vanish, one must impose \begin{eqnarray} \label{adi} \delta & = & \frac{3}{4}(1+w)\delta_r, \nonumber \\ \theta & = & \frac{1}{1+9\frac{H^2}{k^2}(c_s^2-c_a^2)}\left[\theta_r-\frac{9}{4}H(c_s^2-c_a^2)\delta_r\right]. \end{eqnarray} When there is no inherent entropy in the dark energy fluid, i.e. $c_s^2=c_a^2$, adiabaticity means that the velocity potentials of all cosmic fluids are equal. On the other hand, when $c_s^2 \neq c_a^2$, we see that the above condition implies that the dark energy velocity potential is negligible in the early universe, since the relevant scales are far outside the horizon, $k \ll H$. However, the condition that the relative entropy of Eq.(\ref{adi}) vanishes, when $c_a^2=c_s^2$, would be strictly valid only at the instant when it is imposed. Thus, although there is some arbitrariness in the choice of initial values, the late evolution is not affected by this choice as long as the values lie within some reasonable region. We use, for all models, the initial values \begin{eqnarray} \delta = \frac{3}{4}(1+w)\delta_r, \quad \theta = \theta_r, \quad \sigma = 0. \end{eqnarray} The first two initial conditions are derived assuming that $c_s^2=c_a^2$ and that the relative entropy between dark energy and radiation together with its first derivatives vanish. The third condition says that we set the anisotropic stress to zero at very large scales at an early time. Only then is no shear generated when $c_{vis}^2=0$. Thus fluids with a vanishing viscous parameter are perfect.
\subsection{Dark Energy models with $-1<w<0$} When $w$ is negative but larger than $-1$, the evolution of perturbations is determined by the two sound speeds of dark energy. The evolution has been analyzed in Ref. \cite{Weller:2003hw}, but without including the anisotropic stress. The metric perturbation $h$ is now a source in Eq. (\ref{deltaevol}), which tends to draw dark energy into overdensities of cold dark matter. However, for large scales the source due to velocity perturbations is proportional to $-(1+w)(c^2_s-w)\theta/k^2$, and this term can dominate the metric source term and drive $\delta$ to smaller values. In fact $\delta$ drops below zero when evaluated in the synchronous gauge \footnote{More accurately, we evaluate the transfer functions of the perturbations. A negative value for the transfer function indicates that a perturbation variable acquires the opposite sign to its initial value.}. This happens especially for large sound speeds, since then both the friction term in Eq. (\ref{thetaevol}) and the source term in Eq. (\ref{deltaevol}) are larger (see FIG. \ref{perta}). Thus $\delta$ gets smaller when dark energy begins to dominate. The ISW effect is enhanced when one increases the sound speed squared. \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{perta2.eps} \caption{\label{perta} Late evolution of the dark energy density perturbation and velocity potential for $k=1.3\cdot 10^{-4}$ Mpc$^{-1}$ when $w=-0.8$. Solid lines from top to bottom correspond to $\delta$, and dashed lines from bottom to top correspond to $(1+w)H\theta/k^2$ when ($c_{s}^2$, $c_{vis}^2$) = (0,0), (0.6,0), (0,0.6), (0.6,0.6). The effect of $c_{vis}^2$ is to damp density perturbations, which in the synchronous gauge is seen as a consequence of enhancing the velocity perturbations.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{gravs1.eps} \caption{\label{gravis1} Late evolution of the gravitational potentials at large scales ($k=1.3\cdot 10^{-4}$ Mpc$^{-1}$) when $w=-0.8$ and $c_s^2=0$. Solid lines are for the case of perfect dark energy, and dashed for the imperfect case with $c_{vis}^2=1.0$. The upper lines are $\psi$, the lower lines are $\phi$.} \end{center} \end{figure} The effect of the anisotropic stress is also to wash out overdensities. This is because the metric part of the source term in Eq. (\ref{sigmaevol}) turns out negative, and it dominates over the velocity term. Thus $\sigma$ is driven to negative values, and in Eq. (\ref{thetaevol}) it will act to increase the growth of $\theta$. This is similar to the free-streaming of neutrinos, although for them the effect is relevant at smaller scales. Since now $c_a^2<0$, the source term $\sim H^2\theta/k^2$ in Eq. (\ref{deltaevol}) inhibits structure growth at large scales. Therefore, as the dark energy becomes dominant, the overall density structure is smaller when $c_{vis}^2$ is larger, and the ISW effect is amplified. It is illuminating to describe the same thing also in terms of the Newtonian gauge perturbations. This gauge is defined by the line element \begin{eqnarray} ds^2 = a^2(\tau)\left[-(1+2\phi)d\tau^2 + (1-2\psi)dx^idx_i\right]. \end{eqnarray} Here $\tau$ is the conformal time.
We remind that the ISW effect stems from the time variation of the metric fluctuations, \begin{eqnarray} \label{isw} C^{ISW}_\ell \propto \int\frac{dk}{k}\left[\int_0^{\tau_{LSS}} d\tau (\dot{\phi} + \dot{\psi})j_\ell(k\tau)\right]^2, \end{eqnarray} where $\tau_{LSS}$ is the conformal distance to the last scattering surface and $j_\ell$ the $\ell$'th spherical Bessel function. The ISW effect occurs because photons can gain energy as they travel through time-varying gravitational wells. These wells are in turn caused by matter, since \begin{eqnarray} \label{phi} -k^2\phi = 4\pi G a^2\rho\left[\delta + 3\frac{H}{k^2}(1+w)\theta\right]_{|T}. \end{eqnarray} We have indicated with the subscript $ _{|T}$ that the variables on the right-hand side refer to all matter present, and not just dark energy. Note also that the term in square brackets is gauge-invariant. Thus, evaluated in any frame, it equals $\delta_T$, the overdensity of energy seen in the comoving frame. During matter domination, $\delta_T$ grows in such a way that the gravitational potentials stay constant. It is then clear that as dark energy begins to take over, the gravitational potential $|\phi|$ begins to decay. Contrary to expectations from FIG. \ref{perta}, this decay is not more efficient at large scales when there is shear, as shown in FIG. \ref{gravis1}. This is because the dark energy shear influences gravitational wells in such a way that the growth of matter perturbations does not slow down as much as in a perfect universe. However, there is an important twist to the story. This is seen in FIG. \ref{gravis1}, where the evolution of the potentials $\phi$ and $\psi$ is plotted at very large scales. At an early time the potentials are unequal because of the free streaming of radiation. However, our attention is now on the late evolution of the potentials. Due to dark energy, the potentials can re-depart from each other at smaller redshifts. This can happen only when $c_{vis}^2 \neq 0$, since \begin{eqnarray} \label{psi} \psi = \phi - 12 \pi G a^2(1+w)\rho \sigma_{|T}, \end{eqnarray} i.e. shear gives the difference between the depth of the matter-induced gravity well and the amount of spatial curvature. Since $\sigma$ is gauge-invariant, and we found that it becomes negative for dark energy, we can see that the shear perturbation drives $|\psi|$ to vanish more efficiently. Thereby we find that the effect of shear on Eq.(\ref{phi}) only partly compensates for the effect on $\psi$ from Eq.(\ref{psi}), and thus the overall ISW from Eq.(\ref{isw}) will be amplified when dark energy perturbations tend to be smoothed out as in FIG. \ref{perta}. In FIG. \ref{sosspic1} we show the large angular scales of the CMBR spectrum when $w=-0.8$ and the two other parameters are varied. The upper panel depicts the case where the sound speed of dark energy vanishes. Then the pressure perturbation vanishes and the clustering of dark energy is inhibited only by the free-streaming effect of shear viscosity. Therefore the large scale power of the CMBR is increased by increasing $c_{vis}^2$. In the lower panel $c_s^2=1$. Then dark energy is almost smooth (except at the largest scales) even without anisotropic stress, and thus we see a smaller effect when $c_{vis}^2$ is increased. When $c_{vis}^2<0$ the metric and the fluid sources drive the perturbations in the same direction, resulting in explosive growth. Since this would spoil the evolution except when $c_{vis}^2$ is tuned to infinitesimal negative values, we will not consider such a case here.
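The structure of Eq.(\ref{isw}) can be made concrete with a schematic evaluation, following the printed formula literally. The assumed late-time decay history of $\phi+\psi$ and all amplitudes are arbitrary, so only the relative $\ell$-dependence of the result is meaningful:
\begin{verbatim}
# Schematic evaluation of the ISW integral, Eq. (isw), for an assumed
# late-time decay history of phi + psi; units are arbitrary.
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import trapezoid

def cl_isw(ell, tau_lss=1.4e4):
    ks = np.logspace(-4.0, -2.0, 200)              # Mpc^-1
    tau = np.linspace(0.5 * tau_lss, tau_lss, 400)
    # toy: (phi+psi)' switches on when dark energy starts to dominate
    dpot = -np.exp((tau - tau_lss) / (0.1 * tau_lss)) / (0.1 * tau_lss)
    inner = np.array([trapezoid(dpot * spherical_jn(ell, k * tau), tau)
                      for k in ks])
    return trapezoid(inner ** 2 / ks, ks)          # integral dk/k [...]^2

print([cl_isw(ell) for ell in (2, 10, 30)])
\end{verbatim}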
\begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{clfiga2.eps} \includegraphics[width=0.47\textwidth]{clfigc2.eps} \caption{\label{sosspic1} The CMBR anisotropies for $w=-0.8$. In the upper panel $c_s^2=0$ and in the lower panel $c_s^2=1.0$. The ISW contribution increases with the parameter $c^2_{vis}$: thick lines are for $c^2_{vis}=0$, dash-dotted for $c^2_{vis}=0.001$, dashed for $c^2_{vis}=0.01$, dotted for $c^2_{vis}=0.1$ and the solid lines for $c^2_{vis}=1.0$.} \end{center} \end{figure} \subsection{Phantom Dark Energy models with $w<-1$} When the dark energy equation of state is less than $-1$, the effects of both the sound speed and the viscosity parameter are opposite to those in the previous case. Now the source term in Eq. (\ref{deltaevol}) has its sign reversed, and because of that dark energy falls out of the overdensities. Similarly, the velocity potential now acts as a source for the overdensities. Therefore increasing the sound speed will drive dark energy to cluster more efficiently. Now the effect of $c_{vis}^2>0$ has the same sign as that of the metric sources, and therefore we must consider negative values for this parameter. Then, if we increase the parameter $c_{vis}^2/(1+w)$, the dark energy perturbations grow more efficiently, as shown in FIG. \ref{pertb}. This is because $\sigma$ is negative, just like in the previous case, and again tends to enhance the velocity potential. The crucial difference in the perturbation evolution for imperfect dark energy here, as compared to the imperfect $w>-1$ case, is that more shear in the perturbations results in clumpier structure in the density of phantom dark energy. \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{pertb2.eps} \caption{\label{pertb} Late evolution of the dark energy density perturbation and the velocity potential for $k=1.3 \cdot 10^{-4}$ Mpc$^{-1}$ when $w=-1.2$. Solid lines from bottom to top correspond to $\delta$, the dashed lines from top to bottom correspond to $(1+w)H\theta/k^2$ when ($c_s^2$, $c_{vis}^2$) = (0,0), (0.6,0), (0,-0.6), (0.6,-0.6). The effect of $c_{vis}^2$ is to increase clustering, which in the synchronous gauge is seen as a consequence of enhancing the velocity perturbations.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{gravs2.eps} \caption{\label{gravis2} Late evolution of the gravitational potentials at large scales ($k=1.3\cdot 10^{-4}$ Mpc$^{-1}$) when $w=-1.2$ and $c_s^2=0$. Solid lines are for the case of perfect dark energy, and dashed for the imperfect case with $c_{vis}^2=-1.0$. The upper lines are $\phi$, the lower lines are $\psi$.} \end{center} \end{figure} One can again consider the ISW in terms of the Newtonian gauge potentials, Eq.(\ref{isw}). The effect is not directly seen from the behaviour of $\delta$ in FIG. \ref{pertb}, partly because of the different gauge and partly because the anisotropic stress induces a compensation on the other gravitational potential and thereby also influences the matter perturbation. This is shown in FIG. \ref{gravis2}. The anticipated simple result (that the decay of the gravitational potentials is reduced since $\delta$ is enhanced when there is more shear) again holds for the sum of the gravitational potentials, but considering $\phi$ or $\psi$ separately reveals the intricacy of the fluctuation dynamics due to anisotropic stress.
Again the evolution of $\phi$ implies, through Eq.(\ref{phi}), that the influence of $\sigma$ on dark matter is the opposite of that on dark energy, but on the other hand, the evolution of the spatial curvature $\psi$ implies that the sum $\phi+\psi$ behaves according to the dominating component. Now the gravitational well $\psi$ grows deeper, because the contribution from shear in the phantom fluid in Eq.(\ref{psi}) comes with a minus sign. In FIG. \ref{sosspic2} we have plotted the large angular scales of the CMBR spectrum when $w=-1.2$ and the two other parameters are varied. The upper panel depicts the case where the sound speed of dark energy vanishes. Then the ISW effect without anisotropic stress is large since dark energy perturbations are nearly washed out. Consequently, the large scale power of the CMBR is decreased as $|c_{vis}^2|$ is increased, since the ``anti-viscosity'' will then amplify perturbations. In the lower panel $c_s^2=1$. There the effect of $c_s^2$ already dominates, and we see a smaller difference when $c_{vis}^2/(1+w)$ is increased. \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{clfigb2.eps} \includegraphics[width=0.47\textwidth]{clfigd2.eps} \caption{\label{sosspic2} The CMBR anisotropies for $w=-1.2$. In the upper panel $c_s^2=0$ and in the lower panel $c_s^2=1.0$. The ISW contribution decreases with the parameter $c^2_{vis}$: thick lines are for $c^2_{vis}=0$, dash-dotted lines for $c^2_{vis}=0.001$, dashed lines for $c^2_{vis}=0.01$, dotted lines for $c^2_{vis}=0.1$ and the solid lines for $c^2_{vis}=1.0$.} \end{center} \end{figure} \subsection{Dark Energy models with $c_s^2<0$} For perfect dark energy models without shear, the case $c_s^2<0$ leads to explosive growth of perturbations. This is analogous to the behaviour of a simple wave, which has a solution $\sim e^{-ic_s k t + i\bar{k}\cdot\bar{x}}$, diverging when the sound velocity is imaginary. It is not clear, however, how useful the analogy to the sound speed of a simple plane wave is for the interpretation of the variable defined by Eq. (\ref{cs}). For instance, in the modified gravity context \cite{Koivisto:2004ne} this formal definition does not describe propagation of waves in any physical matter. A priori one should not discard the possibility $c_s^2<0$ without careful deliberation. In fact, given a fluid with a negative equation of state, one would expect, from Eq. (\ref{ca}), also a negative sound speed squared. To get rid of this feature, extra degrees of freedom must be assumed to exist in such a way that the variable defined by Eq. (\ref{cs}) turns out positive. When the generation of shear in the fluid is taken into account, the perturbation growth for $c_s^2<0$ can be stabilized. This is because shear is sourced by the perturbations, and in turn the shear will inhibit clustering. Here it is possible to choose the parameters in such a way that the dark energy perturbation grows steadily at late times. Then the ISW effect comes with the opposite sign to the Sachs-Wolfe effect, which leaves its imprint in the CMBR earlier. These effects cancel each other, and thus the large scale power in the CMBR spectrum is reduced, in accordance with the measured low quadrupole. We show in FIG. \ref{sosspic3} such a case together with various other choices for the other two parameters when $c_s^2=-1$. \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{clfign2.eps} \caption{\label{sosspic3} The CMBR spectra for $c_s^2=-1.0$. The thick line is for $w=-1$ (unperturbed dark energy).
The solid line is for $w=-0.8$ and $c_{vis}^2=0.5$ and the dash-dotted line for $w=-0.8$ and $c_{vis}^2=1.0$. The dotted line is for $w=-1.2$ and $c_{vis}^2=0.5$ and the dashed line for $w=-1.2$ and $c_{vis}^2=1.0$. } \end{center} \end{figure} \subsection{A summary} We summarize the features of dark energy perturbations in different parameter regions in Table \ref{tab}. Half of the parameter space is excluded because divergent behaviour occurs, and much of the remaining parameter space is degenerate. Even when restricting to the simplest case where the parameters are kept constant, it seems clear that present observational data allow a large variety of interesting models with non-vanishing shear, $c_{vis}^2 \neq 0$. In some parameter regions of Table \ref{tab} new features appear at observable scales. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline $w$ & $c_s^2$ & $c_{vis}^2<0$ & $c_{vis}^2 = 0$ & $c_{vis}^2>0$ \\ \hline $ > -1 $ & $ > 0$ & diverges & canonic scalar field & $\searrow \phantom{\dag}$ (FIG. \ref{sosspic1}) \\ \cline{2-5} $ $ & $ < 0$ & diverges & diverges & $\nearrow$ \dag (FIG. \ref{sosspic3}) \\ \hline $<-1 $ & $ > 0$ & $\nearrow \phantom{\dag}$ (FIG. \ref{sosspic2}) & phantom scalar field & diverges \\ \cline{2-5} $ $ & $ < 0$ & $\searrow$ $\dag$ (FIG. \ref{sosspic3}) & diverges & diverges \\ \hline \end{tabular} \caption{\label{tab} Summary of different parameter regions for dark energy fluids. We have indicated with $\nearrow$ the cases where superhorizon perturbations are increased as $|c_{vis}^2|$ is increased, and with $\searrow$ the cases where superhorizon perturbations in dark energy are smoothed out as $c_{vis}^2$ is increased. We indicate by $\dag$ that the shear perturbation also significantly influences the small scale perturbations. } \end{table} In FIG. \ref{matterpic1} we plot the matter power spectra including cold dark matter and baryons for various parameter choices. When $c_{vis}^2$ is varied, the effect occurs only at scales much larger than what current observations are able to probe. In the spectrum of the total density perturbation one would see more pronounced features, at scales tantalizingly near the current limits of observations. However, there is no way to directly measure the dark energy density perturbation, and therefore we have plotted only the power spectrum of non-relativistic matter. In FIG. \ref{gravis1} and FIG. \ref{gravis2} it was seen that the shear changes the Newtonian gravitational potentials significantly. Thus one might hope to study whether effects from an anisotropic stress could be measured by using, for example, the cross-correlation of the ISW signal with large scale structure observations, or gravitational lensing experiments. However, one should keep in mind that we have considered perturbations at vast scales. We have found that fluctuations in an imperfect as well as in a perfect fluid with a constant equation of state $w<-1/3$ are confined to superhorizon scales, except in special cases, in particular when the parameter $c_s^2$ is negative or when the perturbations behave pathologically due to the wrong sign of the viscous parameter. For the largest scales, the viscous parameter determines the evolution of the perturbations. It is clear from FIG. \ref{perta} and FIG. \ref{pertb} that the variation of the sound speed $c_s^2$ has much less effect on the evolution of perturbations in the limit $k \rightarrow 0$.
However, the parameter $c_s^2$ sets the scale at which the fluctuations in dark energy become negligible. For smaller $c_s^2$, there are fluctuations at smaller wavelengths. Therefore the shear would be best seen when $c_s^2$ is nearly zero or even negative. The main impact of dark energy anisotropic stress on observations seems to be the modification of the CMBR at very large scales, where it would be very difficult to detect unambiguously. However, this changes when one considers the perhaps physically better motivated situation where the parameters $w$, $c_s^2$ and $c_{vis}^2$ are allowed to evolve in time. \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{figmat.eps} \caption{\label{matterpic1} The total matter power spectra when $c^2_{vis}=1.0$. The thick line is for $w=-0.8$ and $c_s^2=1.0$. The dash-dotted line for $w=-0.8$ and $c_s^2=0$, the dotted line for $w=-0.8$ and $c_s^2=1$. The dashed line is for $w=-1.2$ and $c_s^2=0$, the solid line for $w=-1.2$ and $c_s^2=1$.} \end{center} \end{figure} \section{Imperfect unified models} The Chaplygin gas \cite{Kamenshchik:2001cp} is a prototype of a unified model of dark matter and dark energy \cite{Bilic:2001cg}. In such models a single energy component accounts for both the dark matter and dark energy. Thus this component must resemble cold dark matter in the early universe, whereas it should exhibit large negative pressure nowadays. These models are, however, problematic because of the suppression of structure formation by the adiabatic pressure perturbations \cite{Carturan:2002si,Sandvik:2002jz,Amendola:2003bz}. A solution for this problem has been based on the observation that due to entropy, the sound speed is not necessarily the adiabatic one \cite{Reis:2003mw,Koivisto:2004ne,Zimdahl:2005ir}. In the so-called silent quartessence model entropy perturbations cancel the effect of the adiabatic sound speed \cite{Amendola:2005rk}. The modified polytropic Cardassian expansion \cite{Freese:2002gv} (MPC) in the fluid interpretation \cite{Gondolo:2002fh} provides a general parameterization which encompasses a wide variety of unified models. For the MPC case, one can write the energy density as a function of the scale factor as \begin{eqnarray} \label{quart} \rho = [A a^{3 q(\nu-1)} + B a^{-3q}]^{\frac{1}{q}}. \end{eqnarray} The exponents $q$, $\nu$ are given as parameters, and the constants $A$, $B$ have the appropriate mass dimension. This parameterization is equivalent to the New Generalized Chaplygin gas \cite{Zhang:2004gc}. When $\nu=2$, one gets the Generalized Chaplygin gas \cite{Bento:2002ps} where $q$ can vary, and setting further $q=1$, one is left with the original Chaplygin gas \cite{Kamenshchik:2001cp}. On the other hand, when the parameter $\nu$ can take any value and $q$ is set to $q=2$, the Variable Chaplygin gas \cite{Guo:2005qy} is recovered. Finally, when $q=1$ and $\nu$ is arbitrary, one has the simplest version of the Cardassian expansion which reproduces the background expansion of a universe with standard CDM and dark energy with $w=\nu-2$ \cite{Freese:2002sq}. We will consider here effects of shear in models where the unified dark density is defined by Eq. (\ref{quart}). In the above references, various theoretical origins for such density ansatzes have been proposed, but we will not consider here whether they necessitate the incorporation of shear at linear order in cosmology.
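The background equation of state implied by the ansatz Eq. (\ref{quart}), whose closed form is quoted below, can also be obtained numerically from the continuity equation, $w = -1 - \frac{1}{3}\,d\ln\rho/d\ln a$. The following sketch does this for arbitrary illustrative values of $A$ and $B$, and verifies the $q=\nu=1$ limit discussed below:
\begin{verbatim}
# w(a) from the MPC ansatz, Eq. (quart), via the continuity equation,
#   w = -1 - (1/3) dln(rho)/dln(a).
# A and B are set to arbitrary illustrative values.
import numpy as np

def w_of_a(a, q=1.0, nu=1.0, A=1.0, B=1.0):
    rho = (A * a ** (3 * q * (nu - 1)) + B * a ** (-3 * q)) ** (1.0 / q)
    return -1.0 - np.gradient(np.log(rho), np.log(a)) / 3.0

a = np.logspace(-2, 1, 500)
w = w_of_a(a)        # q = nu = 1: CDM plus a cosmological constant
print(w[0], w[-1])   # ~0 at early times (CDM-like), -> -1 at late times
\end{verbatim}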
Previously, the Cardassian expansion has been studied in the modified gravity context \cite{Koivisto:2004ne}, where it was shown that an effective anisotropic stress can appear in the late universe. With certain assumptions for the modified gravity, the cold dark matter density perturbation generates a shear perturbation algebraically determined from the density and velocity fields of the matter, in a way interestingly similar to the case motivated by the covariant generalization of the Navier-Stokes equation (\ref{l-l}). Then the resulting matter power spectrum is in better accordance with observations than in the standard adiabatic case, but as the anisotropic stress affects the gravitational potentials and thus enhances the ISW effect, the CMBR spectrum then restricts the allowed parameter space stringently. However, we will here consider the anisotropically stressed fluid as parameterized by Eq.(\ref{sigmaevol}). As expected, our results will be different from the modified gravity approach of Ref. \cite{Koivisto:2004ne}. The equation of state for the unified fluid described by Eq.(\ref{quart}) is \begin{eqnarray} w = -\frac{\nu A a^{3q(\nu-1)}}{A a^{3 q(\nu-1)} + B a^{-3q}}, \end{eqnarray} and it follows that \begin{eqnarray} c_a^2 = \frac{w\left[1-\nu q+w(1-q)\right]}{1+w}. \end{eqnarray} When both $q$ and $\nu$ are equal to one, the model is equivalent to cold dark matter and a cosmological constant. When either $q$ or $\nu$ is smaller than one, $c_a^2$ will be negative in the late universe, and when either $q$ or $\nu$ is greater than one, the $c_s^2$ will stay positive and grow in the late universe. For instance, if $\nu=1$, the asymptotic value is $c_s^2=q-1$. In the adiabatic case then $c_s = c_a$, but for the silent quartessence one imposes a special condition on $c_s^2$, namely that the pressure perturbation vanishes in the synchronous gauge \cite{Reis:2003mw,Amendola:2005rk}. Here we consider the case where all three sound speeds are equal in magnitude, including the viscosity parameter as it appears in Eq. (\ref{sigmaevol}). Thus $c^2_{vis}=|c_a^2|$ and $c_s^2=c_a^2$. This seems a natural generalization of the characteristics of better known cosmic fluids, e.g. neutrinos. Then, in the terminology employed here, the fluid is adiabatic but imperfect\footnote{The presence of anisotropic stress does not lead to generation of entropy at the linear order \cite{Maartens:1997sh}.}. For comparison, we also include results for the silent quartessence, as an example of an entropic but perfect fluid. In addition, results for the adiabatic and perfect model are shown. We plot the results for the CMBR spectrum in FIG. \ref{chaps}, and for the matter power spectra in FIG. \ref{mpschaps1} and FIG. \ref{mpschaps2}. Here the matter power spectra include all the components in the energy budget, since one cannot distinguish dark matter from the unified model. In all the figures, the adiabatic case is shown with dash-dotted lines, the entropic case with dashed lines and the imperfect case with solid lines. \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{chgl1.eps} \includegraphics[width=0.47\textwidth]{chgl2.eps} \caption{\label{chaps} The CMBR anisotropies for the MPC model with $q=1.0$. The dash-dotted lines are for the adiabatic case, the solid lines for the same model with the shear included, and the dashed lines correspond to the silent case. In the upper panel $\nu=1.1$. In the lower panel $\nu=0.9$ for the silent and the imperfect case.
The adiabatic model has $\nu=0.994$, which already gives a disproportionately large ISW effect.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{chgm4.eps} \includegraphics[width=0.47\textwidth]{chgm3.eps} \caption{\label{mpschaps1} The total matter power spectra for the MPC model when $q=1.0$. In the upper panel $\nu=1.01$, and in the lower panel $\nu=0.999$.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{chgm1.eps} \includegraphics[width=0.47\textwidth]{chgm2.eps} \caption{\label{mpschaps2} The total matter power spectra for the MPC model when $\nu=1.0$. In the upper panel $q=1.01$, and in the lower panel $q=0.999$. } \end{center} \end{figure} The effect of shear is to stabilize the perturbations. When $q$ or $\nu$ is greater than one, the adiabatic pressure tends to drive the density perturbation to oscillate. However, including the anisotropic stress will remove the oscillations, since the damping effect of shear compensates steep gradients. The overall suppression of growth is alleviated, but not as much as in the silent model. When either $q$ or $\nu$ is smaller than one, the adiabatic pressure would drive the density perturbation to a very fast growth. However, as discussed in the previous section, the shear viscosity eliminates this driving force. \section{Conclusions} In this article we have investigated the effects of an anisotropic stress in the dark energy component on large scale structures. We have parameterized the dark energy component with three variables. The equation of state determines the decay rate of dark energy, and the sound speed characterizes the evolution of its fluctuations. These two were treated as independent parameters, thus accounting for possible entropy in the fluid. In addition we allowed for shear viscosity at linear order. We discussed the possibility of applying a Navier-Stokes type viscosity to determine the additional degree of freedom for dark energy fluctuations, the amount of shear viscosity, but we adopted the parameterization utilizing a viscosity parameter $c_{vis}^2$, motivated by the fact that it seems to generalize the familiar and well understood cosmological fluids in a natural way \cite{Hu:1998kj}. Using this phenomenological three-parameter fluid description we investigated the effect of an imperfect dark energy fluid and of unified dark matter and dark energy models on the matter power spectrum and on the CMBR temperature anisotropies. For most models we find that free-streaming effects tend to smooth density fluctuations. However, there are some exceptions, described below. In dark energy models where $-1\le w<0$, we found that increasing the anisotropic stress results in a swifter decay of dark energy overdensities, which is seen in the CMBR spectrum as an amplification of the ISW effect. The opposite occurs in the case of phantom dark energy ($w<-1$), for which the anisotropic stress supports the growth of overdensities and thus reduces the ISW effect. However, the impact of anisotropic stress on the CMBR spectrum can be closely mimicked by varying the sound speed of dark energy. This makes it difficult to distinguish between these two fluid properties. In addition, we found that negative sound speeds are also consistent with observations, if shear viscosity is included.
The situation where the pressure perturbation (evaluated in the comoving frame) has the opposite sign to the density perturbation is formally unproblematic to define, but when $c_{vis}^2=0$ it leads to unlimited growth of density fluctuations. However, when $c_{vis}^2>0$ this does not occur. For a suitable choice of parameters, a low amplitude for the CMBR quadrupole is produced, in accordance with observations. In unified dark matter and dark energy models extended with shear, we found that the anisotropic stress can stabilize the effect of the adiabatic pressure perturbation, thus slightly improving the compatibility of these models with large scale structure observations. It remains to be seen how far one can loosen the constraints by allowing for an anisotropic stress. Our main objective here was to use these models as examples of dark energy with evolving $w$, $c_s$ and $c_{vis}$. The conclusion is that, in contrast to the simplest fluid models with constant $w$, $c_s$ and $c_{vis}$, in specific scenarios the shear stress can have consequences distinguishable with present observational data. In general, we found that anisotropic perturbations in dark energy are an interesting possibility which is not excluded by the present-day observational data. Furthermore, we found that the CMBR large scale temperature fluctuations, due to the ISW effect, are a promising tool to constrain the possible imperfectness of the dark energy component. Even when the anisotropic stress cannot be directly measured, it can still bias measurements of other parameters, for instance the dark energy speed of sound or its equation of state. \acknowledgments We thank H. Kurki-Suonio for useful discussions. TK is supported by the Magnus Ehrnrooth Foundation. DFM acknowledges support from the Research Council of Norway through project number 159637/V30.
\section{Introduction} \label{sec:intro} A scalar boson around 125 GeV was observed in 2012 by ATLAS~\cite{:2012gk} and CMS~\cite{:2012gu} at CERN with more than $5\sigma$ significance.
The discovery of such a particle was based on the analyses of the following channels: $\gamma\gamma$, $WW^*$, $ZZ^*$ and $\tau^+\tau^-$ with errors of order 20-30\%, and the $b\bar{b}$ channel with an error of order 40-50\%. The recent updates from ATLAS and CMS with $7\oplus 8$ TeV data~\cite{ATLAS24,ATLAS2G,atlas034,CMS2G,CMS24} indicate possible deviations from the standard model (SM) predictions. Although the errors of the current data are still somewhat large, the new physics signals may become clear in the second run of the LHC at 13-14 TeV. It is expected that the Higgs couplings to gauge bosons (fermions) at the LHC could indeed reach 4-6\% (6-13\%) accuracy once the collected data reach an integrated luminosity of 300 fb$^{-1}$~\cite{accuracy1,accuracy2}. Furthermore, an $e^+ e^-$ Linear Collider (LC) would be able to measure the Higgs couplings at the percent level~\cite{accuracy3}. Therefore, the goals of the LHC at run II are (a) to pin down the nature of the observed scalar and see if it is the SM Higgs boson or a new scalar boson; (b) to reveal the existence of new physics effects, such as the measurement of flavor-changing neutral currents (FCNCs) in top-quark decays, i.e. $t\to q h$. Motivated by the observations of the diphoton, $WW^*$, $ZZ^*$, and $\tau^+\tau^-$ processes at ATLAS and CMS, it is interesting to investigate what sorts of models may naturally be consistent with these measurements and what the implications are for other channels, e.g. $h\to Z \gamma$ and $t\to ch$. Although many possible extensions of the SM have been discussed~\cite{Arhrib:2012yv,Chiang:2012qz}, it is interesting to study the simplest extension from one Higgs doublet to the two-Higgs-doublet model (2HDM) \cite{Lee:1973iz,Branco:2011iw,Ferreira:2011aa,Chiang:2013ixa,Ferreira:2013qua,Barger:2013ofa,Wang:2013sha,Gunion}. According to how the Higgs fields couple to the fermions, the 2HDMs are classified as type-I, -II, and -III models, the lepton-specific model, and the flipped model. The 2HDM type-III is the case where both Higgs doublets couple to all fermions; as a result, FCNCs appear at the tree level. Detailed discussions of the 2HDMs can be found elsewhere~\cite{Branco:2011iw}. After the discovery of the 125 GeV scalar particle, the implications of the observed $h \to \gamma\gamma$ in the type-I and -II models were studied~\cite{Celis:2013rcs}, and the impacts on $h \to \gamma Z$ were given in Refs.~\cite{joao,aawh}. As is known, $\tan\beta$ and the angle $\alpha$ are important free parameters in the 2HDMs, where the former is the ratio of the two vacuum expectation values (VEVs) of the Higgses and the latter is the mixing parameter between the two CP-even scalars. It is found that the current LHC data put rather severe constraints on the free parameters~\cite{Ferreira:2011aa}. For instance, the large $\tan\beta\sim m_t/m_b$ scenario in the type-I and -II models is excluded unless the $\alpha$ parameter is tuned to be rather small, $\alpha < 0.02$. Nevertheless, both the type-I and type-II models can still fit the data in some small regions of $\tan\beta$ and $\alpha$. In this paper, we will explore the influence of the new Higgs couplings on the $h\to \tau^+\tau^-$, $h\to gg,\gamma\gamma, WW, ZZ$ and $h\to Z\gamma$ decays in the framework of the 2HDM type-III. We will show which regions of the type-III parameter space are most favored when the theoretical and experimental constraints are considered simultaneously.
Heavy-quark FCNCs such as $t\to qh$ have been intensively studied from both the experimental and theoretical points of view~\cite{rev}. The SM predictions for such processes are well established, and they are excellent probes for the existence of new physics. In the SM and the 2HDM type-I and -II, the top-quark FCNCs are generated at the one-loop level by charged currents and are highly suppressed due to the GIM mechanism. The branching ratio (BR) for $t\to ch$ in the SM is estimated to be $3 \times 10^{-14}$ \cite{sm1}. If the decay $t\to ch$ were observed, it would be an indisputable sign of new physics. Since tree-level FCNCs appear in the type-III model, we explore whether $Br(t\to ch)$ can reach the order of $10^{-5}$--$10^{-4}$ \cite{ATLAStch,CMStch}, the sensitivity expected with an integrated luminosity of 3000 fb$^{-1}$. The paper is organized as follows. In section II, we introduce the scalar potential and the Yukawa interactions in the 2HDM type-III. The theoretical and experimental constraints are described in section III. We set up the free parameters and establish the $\chi$-square for the best-fit approach in section IV. In the same section, we discuss the numerical results when all theoretical and experimental constraints are taken into account. The conclusions are given in section V. \section{Model} In this section we define the scalar potential and the Yukawa sector in the 2HDM type-III. The scalar potential with $SU(2)_L\otimes U(1)_Y$ gauge symmetry and CP invariance is given by \cite{Gunion:2002zf} \begin{eqnarray} V(\Phi_1,\Phi_2) &=& m^2_1 \Phi^{\dagger}_1\Phi_1+m^2_2 \Phi^{\dagger}_2\Phi_2 -(m^2_{12} \Phi^{\dagger}_1\Phi_2+{\rm h.c}) +\frac{1}{2} \lambda_1 (\Phi^{\dagger}_1\Phi_1)^2 \nonumber \\ &+& \frac{1}{2} \lambda_2 (\Phi^{\dagger}_2\Phi_2)^2 + \lambda_3 (\Phi^{\dagger}_1\Phi_1)(\Phi^{\dagger}_2\Phi_2) + \lambda_4 (\Phi^{\dagger}_1\Phi_2)(\Phi^{\dagger}_1\Phi_2) \nonumber \\ &+& \left[\frac{\lambda_5}{2}(\Phi^{\dagger}_1\Phi_2)^2 + \left(\lambda_6 \Phi^\dagger_1 \Phi_1 + \lambda_7 \Phi^\dagger_2 \Phi_2 \right) \Phi^\dagger_1 \Phi_2+{\rm h.c.} \right] ~, \label{higgspot} \end{eqnarray} where the doublets $\Phi_{1,2}$ have weak hypercharge $Y=1$, the corresponding VEVs are $v_1$ and $v_2$, and $\lambda_i$ and $m^2_{12}$ are real parameters. After electroweak symmetry breaking, three of the eight degrees of freedom in the two Higgs doublets are the Goldstone bosons ($G^\pm$, $G^0$) and the remaining five degrees of freedom become the physical Higgs bosons: two CP-even states $h$ and $H$, one CP-odd state $A$, and a pair of charged Higgses $H^\pm$. After using the minimization conditions and the $W$ mass, the potential in Eq.~(\ref{higgspot}) has nine parameters, which will be taken as $(\lambda_i)_{i=1,\ldots,7}$, $m^2_{12}$, and $\tan\beta \equiv v_2/v_1$. Equivalently, we can use the masses as the independent parameters; therefore, the set of free parameters can be chosen to be \begin{equation} \{ m_{h}\,, m_{H}\,, m_{A}\,, m_{H^\pm}\,, \tan\beta\,, \alpha\,, m^2_{12} \}\,, \label{parameters} \end{equation} where we only list seven of the nine parameters; the angle $\beta$ diagonalizes the squared-mass matrices of the CP-odd and charged scalars, and the angle $\alpha$ diagonalizes the CP-even squared-mass matrix. In order to avoid generating spontaneous CP violation, we further require \begin{eqnarray} m^2_{12} - \frac{\lambda_6 v^2_1}{2} - \frac{\lambda_7 v^2_2}{2} \geq \zeta \lambda_5 v_1 v_2 \end{eqnarray} with $\zeta=1(0)$ for $\lambda_5> (< )0$~\cite{Gunion:2002zf}.
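The condition above is straightforward to implement as a filter on parameter points. In the minimal sketch below, $v=246$ GeV and the sample point are illustrative choices, not fitted values:
\begin{verbatim}
# Filter implementing the condition for avoiding spontaneous CP violation:
#   m12^2 - lam6*v1^2/2 - lam7*v2^2/2 >= zeta*lam5*v1*v2,
# with zeta = 1 (0) for lam5 > 0 (< 0).  Inputs are illustrative.
import math

def no_spontaneous_cpv(m12sq, lam5, lam6, lam7, tanb, v=246.0):
    beta = math.atan(tanb)
    v1, v2 = v * math.cos(beta), v * math.sin(beta)
    zeta = 1.0 if lam5 > 0 else 0.0
    return (m12sq - 0.5 * lam6 * v1**2 - 0.5 * lam7 * v2**2
            >= zeta * lam5 * v1 * v2)

print(no_spontaneous_cpv(m12sq=1.0e4, lam5=0.1, lam6=0.0, lam7=0.0, tanb=2.0))
\end{verbatim}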
It is known that by assuming neutral flavor conservation at the tree level \cite{Glashow:1976nt}, one obtains four types of Higgs couplings to the fermions. In the 2HDM type-I, the quarks and leptons couple only to one of the two Higgs doublets, as in the SM. In the 2HDM type-II, the charged leptons and down-type quarks couple to one Higgs doublet and the up-type quarks couple to the other. The lepton-specific model is similar to type-I, but the leptons couple to the other Higgs doublet. In the flipped model, which is similar to type-II, the leptons and up-type quarks couple to the same doublet. If tree-level FCNCs are allowed, both doublets can couple to leptons and quarks, and the associated model is called the 2HDM type-III~\cite{Cheng:1987rs,Atwood:1996vj,Branco:2011iw}. Thus, the Yukawa interactions for quarks are written as % \begin{equation} {\cal {L}}_{Y} = \bar Q_{L} Y^{k} d_{R} \phi_k+ \bar Q_{L } \tilde{Y}^{k} u_{R} \tilde{\phi}_k + h.c. \end{equation} where the flavor indices are suppressed, $Q^T_L=(u_L, d_L)$ is the left-handed quark doublet, $Y^k$ and $\tilde{Y}^k$ denote the $3\times 3$ Yukawa matrices, $\tilde{\phi_k}=i\sigma_2 \phi^*_k$, and $k$ is the doublet number. Similar formulas can be applied to the lepton sector. Since the mass matrices of the down (up) type quarks are combinations of $Y^{1} (\tilde Y^{1})$ and $Y^2 (\tilde Y^{2})$, and $Y^{1,2}(\tilde Y^{1,2})$ generally cannot be diagonalized simultaneously, tree-level FCNCs appear, and their effects lead to $K-\bar K$, $B_q-\bar B_q$ and $D-\bar D$ oscillations at the tree level. To get naturally small FCNCs, one can use the ansatz formulated as $Y^{k}_{ij},\,\tilde Y^k_{ij} \propto \sqrt{m_i m_j}/v$ \cite{Cheng:1987rs,Atwood:1996vj}. After spontaneous symmetry breaking, the scalar couplings to fermions can be expressed as \cite{GomezBock:2005hc} % \begin{eqnarray} {\cal L}^{\rm 2HDM-III}_Y &=& \bar u_{Li} \left( \frac{\cos\alpha}{\sin\beta}\frac{m_{u_i}}{v} \delta_{ij} - \frac{\cos(\beta-\alpha)}{\sqrt{2}\sin\beta} X^u_{ij} \right) u_{Rj} h \nonumber \\ &+& \bar d_{Li} \left( -\frac{\sin\alpha}{\cos\beta}\frac{m_{d_i}}{v} \delta_{ij} + \frac{\cos(\beta-\alpha)}{\sqrt{2}\cos\beta} X^d_{ij} \right) d_{Rj} h \nonumber \\ &+& \bar u_{Li} \left( \frac{\sin\alpha}{\sin\beta}\frac{m_{u_i}}{v} \delta_{ij} + \frac{\sin(\beta-\alpha)}{\sqrt{2}\sin\beta} X^u_{ij} \right) u_{Rj} H \nonumber\\ &+& \bar d_{Li} \left(\frac{\cos\alpha}{\cos\beta}\frac{m_{d_i}}{v} \delta_{ij} - \frac{\sin(\beta-\alpha)}{\sqrt{2}\cos\beta} X^d_{ij} \right) d_{Rj} H \nonumber\\ &-& i\bar u_{Li} \left(\frac{1}{\tan\beta}\frac{m_{u_i}}{v} \delta_{ij} - \frac{X^u_{ij} }{\sqrt{2}\sin\beta} \right) u_{Rj} A \nonumber \\ &+& i\bar d_{Li} \left(-\tan\beta\frac{m_{d_i}}{v} \delta_{ij} + \frac{X^d_{ij}}{\sqrt{2}\cos\beta} \right) d_{Rj} A +{\rm h.c}\,, \label{eq:LhY} \end{eqnarray} where $v=\sqrt{v^2_1 + v^2_2}$, $X^q_{ij}=\sqrt{m_{q_i} m_{q_j}}/v \chi^q_{ij}$ ($q=u,d$ ) and the $\chi^{q}_{ij}$ are free parameters. In the above formulation, if the FCNC effects are ignored, the results reduce to the case of the 2HDM type-II, given by \begin{eqnarray} {\cal L}^{2HDM-II}_Y &=& \bar u_{Li} \left( \frac{\cos\alpha}{\sin\beta}\frac{m_{u_i}}{v} \delta_{ij} \right) u_{Rj} h +\bar d_{Li} \left(-\frac{\sin\alpha}{\cos\beta}\frac{m_{d_i}}{v} \delta_{ij} \right) d_{Rj} h + {\rm h.c}\,. \label{eq:LhYII} \end{eqnarray} The couplings of the other scalars to fermions can be found elsewhere~\cite{GomezBock:2005hc}.
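For orientation, the $h$ couplings to up-type quarks of Eq.~(\ref{eq:LhY}) with the Cheng-Sher ansatz $X^u_{ij}=\sqrt{m_{u_i} m_{u_j}}/v\,\chi^u_{ij}$ can be evaluated as in the following sketch; the $\chi$ matrix, the quark masses and the mixing angles are illustrative inputs:
\begin{verbatim}
# Hedged sketch of the h couplings to up-type quarks in Eq. (eq:LhY),
# with the Cheng-Sher ansatz X_ij = sqrt(m_i m_j)/v * chi_ij.
import numpy as np

V = 246.0
MU = np.array([2.2e-3, 1.27, 173.0])      # (u, c, t) masses in GeV

def h_up_couplings(alpha, beta, chi):
    X = np.sqrt(np.outer(MU, MU)) / V * chi
    diag = (np.cos(alpha) / np.sin(beta)) * np.diag(MU) / V
    return diag - np.cos(beta - alpha) / (np.sqrt(2.0) * np.sin(beta)) * X

y = h_up_couplings(alpha=0.1, beta=np.arctan(2.0), chi=np.ones((3, 3)))
print(y[2, 2])   # flavor-diagonal htt coupling
print(y[1, 2])   # flavor-changing htc coupling, relevant for t -> ch
\end{verbatim}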
It can be seen clearly that if the $\chi^{u,d}_{ij}$ are of ${\cal{O}}(10^{-1})$, the new effects are dominated by the heavy fermions and are comparable with those in the type-II model. The couplings of $h$ and $H$ to the gauge bosons $V=W,Z$ are proportional to $\sin(\beta-\alpha)$ and $\cos(\beta-\alpha)$, respectively. Therefore, the SM-like Higgs boson $h$ is recovered when $\cos(\beta-\alpha)\approx 0$. The decoupling limit can be achieved if $\cos(\beta-\alpha)\approx 0$ and $m_h \ll m_H, m_A, m_{H\pm}$ are satisfied~\cite{Gunion:2002zf}. From Eqs. (\ref{eq:LhY}) and (\ref{eq:LhYII}), one can also see that in the decoupling limit the $h$ couplings to quarks reduce to the SM case. In this analysis, since we take $\alpha$ in the range $-\pi/2 \leq \alpha\leq \pi/2$, $\sin\alpha$ can have either sign. In the 2HDM type-II, if $\sin\alpha <0$ then the Higgs couplings to up- and down-type quarks have the same sign as those in the SM. It is worth mentioning that $\sin\alpha$ in the minimal supersymmetric SM (MSSM) is negative unless some extremely large radiative corrections flip its sign \cite{Gunion:2002zf}. If $\sin\alpha$ is positive, then the Higgs coupling to down quarks has a different sign with respect to the SM case; this is called the wrong-sign Yukawa coupling in the literature~\cite{wrong2,Gunion:2002zf}. Later we will examine whether the type-III model favors such a wrong-sign scenario. \section{Theoretical and experimental constraints} The free parameters in the scalar potential defined in Eq.~(\ref{higgspot}) can be constrained by theoretical requirements and by experimental measurements, where the former mainly include tree-level unitarity and vacuum stability when the electroweak symmetry is broken spontaneously. Since the unitarity constraint involves a variety of scattering processes, we adopt the results of Refs.~\cite{abdesunit,abdesunit1}. We also force the potential to be perturbative by requiring that all quartic couplings of the scalar potential obey $|\lambda_i| \leq 8 \pi$ for all $i$. For the vacuum stability conditions, which ensure that the potential is bounded from below, we require that the parameters satisfy the conditions~\cite{vac1,vac2} \begin{eqnarray} \nonumber && \lambda_1 > 0\,,\quad \lambda_2 > 0\,,\quad \lambda_3 + \sqrt{\lambda_1\lambda_2 } > 0\,, \quad \sqrt{\lambda_1\lambda_2 } + \lambda_{3} + \lambda_{4} -|\lambda_{5}| >0\,,\\ && {2|\lambda_6 + \lambda_7| \le \frac{1}{2}(\lambda_1 + \lambda_2) + \lambda_3 + \lambda_4 + \lambda_5}\,; \label{vac} \end{eqnarray} a simple numerical filter bundling these requirements is sketched below. In the following we state the constraints from experimental data. The new neutral and charged scalar bosons in the 2HDM will affect the self-energies of the W and Z bosons through loop effects. Therefore, the involved parameters can be constrained by the precision measurements of the oblique parameters, denoted by S, T, and U~\cite{Peskin}. Taking $m_h = 125$ GeV, $m_t = 173.3$ GeV and assuming that $U=0$, the tolerated ranges for S and T are found to be~\cite{Baak:2014ora} \begin{eqnarray} \Delta S = 0.06\pm0.09\,, \ \ \Delta T = 0.10\pm0.07\,, \label{test:ST} \end{eqnarray} where the correlation factor is $\rho=+0.91$, $\Delta S = S^{\textrm{2HDM}} - S^{\textrm{SM}}$ and $\Delta T = T^{\textrm{2HDM}} - T^{\textrm{SM}}$; their explicit expressions can be found in Ref.~\cite{Gunion:2002zf}. We note that in the limit $m_{H^\pm}=m_{A^0}$ or $m_{H^\pm}=m_{H^0}$, $\Delta T$ vanishes~\cite{Gerard:2007kn,Cervero:2012cx}.
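As announced above, the tree-level theoretical requirements can be bundled into one vetting function. This is only a sketch of how parameter points could be filtered before the fit, assuming all couplings are supplied in the $\lambda$ basis:
\begin{verbatim}
# Perturbativity |lambda_i| <= 8*pi plus the vacuum stability
# inequalities of Eq. (vac); the sample point is illustrative.
import math

def theory_ok(lams):              # lams = (l1, l2, l3, l4, l5, l6, l7)
    l1, l2, l3, l4, l5, l6, l7 = lams
    if any(abs(l) > 8.0 * math.pi for l in lams):
        return False              # perturbativity
    return (l1 > 0 and l2 > 0
            and l3 + math.sqrt(l1 * l2) > 0
            and math.sqrt(l1 * l2) + l3 + l4 - abs(l5) > 0
            and 2.0 * abs(l6 + l7) <= 0.5 * (l1 + l2) + l3 + l4 + l5)

print(theory_ok((0.5, 0.5, 0.1, 0.1, 0.05, 0.0, 0.0)))
\end{verbatim}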
The second set of constraints comes from B physics observables. It has been shown recently in Ref.~\cite{Misiak:2015xwa} that $Br(\overline{B}\to X_s \gamma)$ gives a lower limit of $m_{H^\pm}\geq 480$ GeV in the type-II model at $95\%$ CL. Precision measurements of $Z\to b \bar{b}$ and of $B_q - \bar{B}_q$ mixing have excluded values of $\tan\beta < 0.5$~\cite{Baak:2011ze}. In this work we allow $\tan\beta \geq 0.5$. Except in some specific scenarios, $\tan \beta$ cannot be too large due to the requirement of perturbativity. With the observation of the scalar boson at $m_h \approx 125$ GeV, the Higgs boson searches at ATLAS and CMS give strong bounds on the free parameters. The signal events in the Higgs measurements are represented by the signal strength, which is defined by the ratio of the Higgs signal to the SM prediction and given by \begin{eqnarray} \mu^f_i= \frac{\sigma_i(h) \cdot Br(h\to f) }{\sigma^{SM}_i(h) \cdot Br^{SM}(h\to f) } \equiv \bar \sigma_i \cdot \mu_f \,, \label{eq:kvf} \end{eqnarray} where $\sigma_i(h)$ denotes the Higgs production cross section in channel $i$ and $Br(h\to f)$ is the BR for the Higgs decay $h\to f$. Since several Higgs boson production channels are available at the LHC, we are interested in gluon fusion production ($ggF$), $t \bar t h$, vector boson fusion (VBF) and Higgs-strahlung $Vh$ with $V=W/Z$; they are grouped into $\mu^f_{ggF+t\bar t h}$ and $\mu^f_{VBF+Vh}$. In order to consider the constraints from the current LHC data, the scaling factors, which quantify the deviations of the Higgs couplings from the SM, are defined as \begin{align} &\kappa_V^{} =\kappa_W = \kappa_Z \equiv \frac{g_{hVV}^{\text{2HDM}}}{g_{hVV}^{\text{SM}}},~\quad \kappa_f^{} \equiv \frac{y_{hff}^{\text{2HDM}}}{y_{hff}^{\text{SM}}},\quad \label{scaling1} \end{align} where $g_{hVV}$ and $y_{hff}$ are the Higgs couplings to gauge bosons and fermions, respectively, and $f$ stands for the top and bottom quarks and the tau lepton. The scaling factors for the loop-induced channels are defined by \begin{eqnarray} \kappa_\gamma^2 &\equiv& \frac{\Gamma(h\to \gamma\gamma)_{\text{2HDM}}}{\Gamma(h\to \gamma\gamma)_{\text{SM}}}\,,~\quad \kappa_g^2 \equiv \frac{\Gamma(h\to g\,g)_{\text{2HDM}}}{\Gamma(h\to g\,g)_{\text{SM}}}\,, \nonumber \\ \quad \kappa_{Z\gamma}^2 &\equiv& \frac{\Gamma(h\to Z\gamma)_{\text{2HDM}}}{\Gamma(h\to Z\gamma)_{\text{SM}}}\,,~\quad \kappa_{h}^2 \equiv \frac{\Gamma(h)_{\text{2HDM}}}{\Gamma(h)_{\text{SM}}}\,, \end{eqnarray} where $\Gamma(h\to XY)$ is the partial decay rate for $h\to XY$. In this study, the partial decay widths of the Higgs are taken from Ref.~\cite{anatomy}, where QCD corrections have been taken into account. In the decay modes $h\to \gamma\gamma$ and $h\to Z\gamma$, we have included the contributions of the charged Higgs and the new Yukawa couplings. Accordingly, the ratio of the cross section to the SM prediction for the production channels $ggF+t\bar t h$ and VBF+$Vh$ can be expressed as \begin{eqnarray} \overline{\sigma}_{ggF+t\bar t h} &=& \frac{\kappa^2_{g}\sigma_{SM}(ggF) + \kappa^2_{t}\sigma_{SM}(tth)}{\sigma_{SM}(ggF) + \sigma_{SM}(tth)}\,, \quad \\ \overline{\sigma}_{VBF+Vh} &=& \frac{\kappa^2_{V}\sigma_{SM}(VBF)+\widetilde{\kappa}_{Zh}\widetilde{\sigma}_{SM}(Zh) + \kappa^2_{V}\sigma_{SM}(Zh) + \kappa^2_{V}\sigma_{SM}(Wh)}{\sigma_{SM}(VBF)+\widetilde{\sigma}_{SM}(Zh) + \sigma_{SM}(Zh) + \sigma_{SM}(Wh)}\,, \label{Eq:XS1} \end{eqnarray} where $\sigma_{SM}(Zh)$ arises from the $ZZh$ coupling at tree level and $\widetilde{\sigma}_{SM}(Zh)\equiv \sigma_{SM} (gg\to Zh)$ represents the effect of the top-quark loop.
With $m_h=125.36$ GeV, the scaling factor $\widetilde{\kappa}_{Zh}$ can be written as~\cite{ATLAS24} \begin{eqnarray} \widetilde{\kappa}_{Zh} &=& 2.27\kappa^2_Z + 0.37\kappa^2_t - 1.64\kappa_Z\kappa_t\,. \end{eqnarray} In the numerical estimations, we use $m_h = 125.36$ GeV, which is from the LHC Higgs Cross Section Working Group~\cite{accuracy1} at $\sqrt{s} = 8$ TeV. The experimental values of the signal strengths are shown in Table~\ref{data:1}, where the results of ATLAS~\cite{atlas034} and CMS~\cite{cms005} are combined and denoted by $\widehat{\mu}^f_{ggF+t\bar t h}$ and $\widehat{\mu}^f_{VBF+Vh}$. \begin{table}[hptb] \begin{center} \caption{ Measured signal strengths $\widehat{\mu}_{\rm{ggF+tth}}$ and $\widehat{\mu}_{\rm{VBF+Vh}}$ that combine the best fits of ATLAS and CMS, and the correlation coefficient $\rho$ for each Higgs decay mode~\cite{atlas034,cms005}.} \renewcommand{\arraystretch}{1.1} \begin{tabular}{c||ccccc} \hline\hline $f$ & $\widehat{\mu}^{f}_{\rm{ggF+tth}}$ & $\widehat{\mu}^{f}_{\rm{VBF+Vh}}$ &$\pm\,\,1\widehat{\sigma}_{\rm{ggF+tth}}$ & $\pm\,\,1\widehat{\sigma}_{\rm{VBF+Vh}}$& $\rho$ \\ \hline $\gamma\gamma$ & $1.32 $ & 0.8 & 0.38 & 0.7 & -0.30\\ \hline $ZZ^*$ & $1.70 $ & 0.3 & 0.4 & 1.20 & -0.59\\ \hline $WW^*$ & $0.98 $ & 1.28 & 0.28 & 0.55 & -0.20\\ \hline $\tau\tau$ & $2 $ & 1.24 & 1.50 & 0.59 & -0.42\\ \hline $b\bar{b}$ & 1.11 & 0.92 & 0.65 & 0.38 & 0 \\ \hline\hline \end{tabular} \label{data:1} \end{center} \end{table} \section{Parameter setting, global fitting, and numerical results} \subsection{Parameters and global fitting} Having introduced the scaling factors that display the new physics in the various channels, we now show their explicit relations to the free parameters of the type-III model. From the definitions in Eq.~(\ref{scaling1}), the scaling factors $\kappa_V$ and $\kappa_f$ in the type-III model are given by \begin{eqnarray} \kappa_V^{} &=& \sin(\beta-\alpha)\,, \nonumber \\ \kappa_U^{} &=& \kappa_t = \kappa_c = \frac{\cos\alpha}{\sin\beta} - \chi_F \frac{\cos(\beta-\alpha)}{\sqrt{2}\sin\beta}\,, \nonumber \\ \kappa_D^{} &=& \kappa_b = \kappa_{\tau} = -\frac{\sin\alpha}{\cos\beta} + \chi_F \frac{\cos(\beta-\alpha)}{\sqrt{2}\cos\beta}\,. \label{scaling3} \end{eqnarray} Although FCNC processes give strict constraints on the flavor-changing couplings $\chi^{f}_{ij}$ with $i\neq j$, these constraints apply to the flavor-changing processes in the $K$, $D$ and $B$ meson systems. Since the couplings of the scalars to the light quarks are suppressed by $m_{q_i}/v$, the direct limit on the flavor-conserving coupling $\chi^f_{ii}$ is mild. Additionally, since signals of top-quark flavor-changing processes have not been observed yet, the direct constraint on $X^u_{3i}= \sqrt{m_t m_{q_i}}/v \chi^u_{3i}$ comes from the experimental bound on $t\to h q_i$. Hence, to simplify the numerical analysis, in Eq.~(\ref{scaling3}) we have set $\chi^u_{22}=\chi^u_{33} = \chi^{d}_{33} =\chi^{\ell}_{33}=\chi_F$. Since $X^u_{33} = m_t/v \chi_F$, it is conservative to adopt a value of $\chi_F$ of ${\cal O}(1)$. In the 2HDM, the charged Higgs also contributes to the $h\to \gamma \gamma$ decay, and the associated scalar triple coupling $hH^+ H^-$ reads \begin{eqnarray} \lambda_{hH^{\pm}H^{\mp}} &=& \frac{1}{2m^2_W}\Bigg(\frac{\cos(\alpha+\beta)}{\sin2\beta}(2m^2_h - 2\lambda_5 v^2) - \sin(\beta-\alpha)(m^2_h - 2m^2_{H^\pm})\nonumber \\ &+& m^2_W\cos(\beta-\alpha)\left( \frac{\lambda_6}{\sin^2\beta} - \frac{\lambda_7}{\cos^2\beta}\right)\Bigg)\,.
\label{eq:hHH} \end{eqnarray} The scaling factors for the loop-induced processes $h\to (\gamma \gamma, Z\gamma, gg)$ can be expressed as \begin{eqnarray} \kappa_{\gamma}^2 & \sim & \Big|1.268 \kappa_W - 0.279\kappa_t + 0.0042\kappa_b + 0.0034\kappa_c + 0.0036\kappa_\tau - 0.0014\lambda_{hH^{\pm}H^{\mp}} \Big|^2\,, \nonumber \\ \kappa_{Z\gamma}^2 & \sim &\Big|1.058 \kappa_W - 0.059 \kappa_t + 0.00056 \kappa_b + 0.00014 \kappa_c -0.00054 \lambda_{hH^{\pm}H^{\mp}} \Big|^2\,, \nonumber \\ \kappa_{g}^2 &\sim & \Big|1.078 \kappa_t - 0.065\kappa_b - 0.013\kappa_c \Big|^2\,, \label{scaling2} \end{eqnarray} where we have used $m_h = 125.36$ GeV and taken $m_{H^\pm} = 480$ GeV. It is clear that the charged Higgs contribution to $h\to \gamma\gamma$ and $h\to Z\gamma$ is small. In order to study the influence of the new free parameters and to understand their correlations, we perform a $\chi$-square fit using the LHC data for Higgs searches~\cite{:2012gk,:2012gu,ATLAS2G,CMS2G}. For a given channel $f=\gamma\gamma, W W^*, Z Z^*, \tau \tau$, we define the $\chi^2_f$ as \begin{equation} \chi^2_f = \frac{1}{\hat{\sigma}^2_{1}(1-\rho^2)}(\mu^{f}_{1} - \hat{\mu}^{f}_{1})^2 + \frac{1}{\hat{\sigma}^2_{2}(1-\rho^2)}(\mu^{f}_{2} - \hat{\mu}^{f}_{2})^2 - \frac{2\rho}{\hat{\sigma}_{1}\hat{\sigma}_{2}(1-\rho^2)}(\mu^{f}_{1} - \hat{\mu}^{f}_{1})(\mu^{f}_{2} - \hat{\mu}^{f}_{2})\,, \label{eq:chi2} \end{equation} where $\hat{\mu}^{f}_{1,2}$, $\hat{\sigma}_{1,2}$ and $\rho$ are the measured Higgs signal strengths, their one-sigma errors, and their correlation, respectively; their values are given in Table~\ref{data:1}. The indices $1$ and $2$ stand for $\rm ggF+tth$ and $\rm VBF+Vh$, respectively, and $\mu^{f}_{1,2}$ are the results in the 2HDM. The global $\chi$-square is defined by \begin{equation} \chi^2 = \sum_{f}\chi^2_{f} + \chi^2_{ST} \,, \end{equation} where $\chi^2_{ST}$ is the $\chi^2$ for the S and T parameters; its definition can be obtained from Eq.~(\ref{eq:chi2}) by replacing $\mu^f_1 \to S^{2HDM}$ and $\mu^f_2\to T^{2HDM}$, with the corresponding values given in Eq.~(\ref{test:ST}). We do not include the $b\bar{b}$ channel in our analysis because the errors of the data are still large. In order to display the allowed regions for the parameters, we show the best fit at $68\%$, $95.5\%$, and $99.7\%$ confidence levels (CLs), that is, the corresponding errors of $\chi^2$ are $\Delta \chi^2 \leq 2.3$, $5.99$, and $11.8$, respectively. To compare with the LHC data, we require the calculated results to be in agreement with those shown in Fig.~3 of the ATLAS Ref.~\cite{ATLAS24} and in Fig.~5 of the CMS Ref.~\cite{CMS24}. \subsection{Numerical results} In the following we present the limits from the current LHC data based on the three kinds of CL introduced above. In our numerical calculations, we set the mass of the SM Higgs to $m_h=125.36$ GeV and scan the involved parameters in the regions \begin{eqnarray} && 480 \,{\rm GeV} \le m_{H^\pm} \le 1 \,{\rm TeV}, \quad 126 \,{\rm GeV} \le m_{H} \le 1 \,{\rm TeV}, \quad 100 \,{\rm GeV} \le m_{A} \le 1 \,{\rm TeV} \,, \nonumber \\ && -1 \le \sin\alpha \le 1, \quad 0.5 \le \tan\beta \le 50, \quad -(1000\, {\rm GeV})^2 \le m^2_{12} \le (1000\, {\rm GeV})^2 \,. \label{numbers} \end{eqnarray}
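For concreteness, the channel-wise $\chi^2$ of Eq.~(\ref{eq:chi2}) takes the standard correlated bivariate form and is evaluated as in the following minimal sketch; the model signal strengths $\mu_1$, $\mu_2$ are placeholder inputs that would come from the scaling factors above:
\begin{verbatim}
# The correlated two-channel chi-square of Eq. (eq:chi2), evaluated
# with the gamma-gamma row of Table I; mu1 and mu2 are placeholders.
def chi2_channel(mu1, mu2, mu1_hat, mu2_hat, s1, s2, rho):
    d1, d2 = mu1 - mu1_hat, mu2 - mu2_hat
    return ((d1 / s1) ** 2 + (d2 / s2) ** 2
            - 2.0 * rho * d1 * d2 / (s1 * s2)) / (1.0 - rho ** 2)

# gamma-gamma: mu_hat = (1.32, 0.8), sigma = (0.38, 0.7), rho = -0.30
print(chi2_channel(1.0, 1.0, 1.32, 0.8, 0.38, 0.7, -0.30))  # SM-like point
\end{verbatim}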
With the introduction of the $\lambda_{6,7}$ terms in the potential, not only are the mass relations of the scalar bosons modified, but the scalar triple and quartic couplings also receive contributions from $\lambda_{6}$ and $\lambda_{7}$. Since the masses of the scalar bosons are regarded as free parameters, the relevant $\lambda_{6,7}$ effects in this study enter only through the triple coupling $h$-$H^+$-$H^-$, which contributes to the $h\to \gamma\gamma$ decay, as shown in Eq.~(\ref{eq:hHH}) and the first line of Eq.~(\ref{scaling2}). Since the contribution of the charged-Higgs loop to the $h\to \gamma\gamma$ decay is small, the influence of $\lambda_{6,7}$ on the parameter constraints is expected to be insignificant. To demonstrate that the contributions of $\lambda_{6,7}$ are indeed unimportant, we present the allowed ranges of $\tan\beta$ and $\sin\alpha$ obtained by scanning $\lambda_{6,7}$ in the region $[-1,1]$ in Fig.~\ref{fig:satb_l67}, where the theoretical and experimental constraints mentioned earlier are included and the plots from left to right stand for $\Delta \chi^2 =11.8$, $5.99$, and $2.3$, respectively. Additionally, to understand the influence of $\chi_F$ defined in Eq.~(\ref{scaling3}), we also scan $\chi_F=[-1,1]$ in these plots. By comparing the results with the case of $\lambda_{6,7}\ll 1$ and $\chi_F= 1$, which is displayed in the third plot of Fig.~\ref{fig:satb}, it can be seen that only a small region of positive $\sin\alpha$ is modified, and the modifications occur only for the larger $\Delta\chi^2$ values; the plot with $\Delta \chi^2=2.3$ shows almost no change. Therefore, to simplify the numerical analysis and reduce the number of scanned parameters, it is reasonable in this study to assume $\lambda_{6,7}\ll 1$. Since the influence of $|\chi_F|\leq 1$ should be smaller, to capture the typical size of the FCNC effects we illustrate our studies by setting $\chi_F = \pm 1$ in the whole numerical analysis. \begin{figure}[h!] \includegraphics[width=0.32\textwidth]{figureswithouTopCH/sa_tb99CL_chivay.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/sa_tb95CL_chivay.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/sa_tb68CL_chivay.jpg} \caption{The allowed regions in $(\sin\alpha, \tan\beta)$ constrained by theoretical and current experimental inputs, where we have used $m_h = 125.36$ GeV in type-III with $-1\le\chi_F\le 1$ and $-1\le\lambda_{6,7}\le 1$. The errors for the $\chi$-square fit are 99.7$\%$ CL (left panel), 95.5$\%$ CL (middle panel), and 68$\%$ CL (right panel). } \label{fig:satb_l67} \end{figure} With $\lambda_{6,7}\ll 1$, we present the allowed regions for $\sin\alpha$ and $\tan\beta$ in Fig.~\ref{fig:satb}, where the left, middle, and right panels stand for the 2HDM type-II, type-III with $\chi_F= -1$, and type-III with $\chi_F = +1$, respectively; in each plot we show the constraints at the $68\%$ CL (green), 95.5$\%$ CL (red), and 99.7$\%$ CL (black). Our results in type-II are consistent with those obtained by the authors of Refs.~\cite{Celis:2013rcs,Ferreira:2011aa} when the same conditions are chosen. From the plots, we see that in type-III with $\chi_F=-1$, because the sign of the coupling is the same as in type-II, the allowed values of $\sin\alpha$ and $\tan\beta$ shrink further; in particular, $\sin\alpha$ is limited to less than $0.1$. On the contrary, in type-III with $\chi_F=+1$, the allowed ranges of $\sin\alpha$ and $\tan\beta$ are broader. As discussed before, the decoupling limit occurs at $\alpha\to \beta -\pi/2$, i.e.
$\sin\alpha=-\cos\beta< 0$. Since we regard the masses of the new scalars as free parameters and scan them in the regions shown in Eq.~(\ref{numbers}), the three plots in Fig.~\ref{fig:satb} cover both lighter and heavier charged-Higgs masses. We further check that $\sin\alpha >0$ could be excluded at the 95.5(99.7)$\%$ CL when $m_{H^\pm} \geq 585(690)$ GeV. The main differences between type-II and type-III are the Yukawa couplings, as shown in Eq.~(\ref{eq:LhY}). \begin{figure}[hptb] \includegraphics[width=0.32\textwidth]{figureswithouTopCH/sa_tb_xij_0.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/sa_tb_xij_m1.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/sa_tb_xij_p1.jpg} \caption{The allowed regions in $(\sin\alpha, \tan\beta)$ constrained by theoretical and current experimental inputs, where we have used $m_h = 125.36$ GeV; the left, middle, and right panels stand for the 2HDM type-II, type-III with $\chi_F=-1$, and type-III with $\chi_F=+1$, respectively. The errors for the $\chi$-square fit are 99.7$\%$ CL (black), 95.5$\%$ CL (red), and 68$\%$ CL (green). } \label{fig:satb} \end{figure} In order to see the influence of the new effects in type-III, we plot the allowed $\kappa_g$ as a function of $\sin\alpha$ and $\tan\beta$ in Fig.~\ref{checkpl}, where the three plots from left to right correspond to type-II, type-III with $\chi_F=-1$, and type-III with $\chi_F=+1$. The solid, dashed, and dotted lines in each plot stand for the decoupling limit (DL) of the SM, a $15\%$ deviation from the DL, and a $20\%$ deviation from the DL, respectively. For comparison, we also overlay the $99.7\%$ CL results of Fig.~\ref{fig:satb} in each plot. From this analysis, we see that the deviations of $\kappa_g$ from the DL for $\chi_F=+1$ are clear and significant, while the influence of $\chi_F=-1$ is small. \begin{figure}[hptb]\centering \includegraphics[width=0.32\textwidth]{figureswithouTopCH/typ2xij0.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/typ2xijm1.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/typ2xijp1.jpg} \caption{$\kappa_{g}$ as a function of $\sin\alpha$ and $\tan\beta$ in type-II (left) and type-III with $\chi_F=(-1, +1)$ (middle, right), where the solid, dashed, and dotted lines in each plot stand for the decoupling limit (DL) of the SM, a $15\%$ deviation from the DL, and a $20\%$ deviation from the DL, respectively. The dotted points are the allowed values of the parameters resulting from Fig.~\ref{fig:satb}. } \label{checkpl} \end{figure} It has been pointed out that a wrong-sign Yukawa coupling to down-type quarks can occur in the type-II 2HDM \cite{wrong2,Gunion:2002zf}. To understand the sign flip, we rewrite $\kappa_D$, defined in Eq.~(\ref{scaling3}), as \begin{eqnarray} \kappa_{D}= -\frac{\sin\alpha}{\cos\beta} \left( 1- \frac{\chi_F \sin\beta}{\sqrt{2}}\right) + \frac{\chi_F \cos\alpha}{\sqrt{2}}\,. \label{eq:kD} \end{eqnarray} In the type-II case, we know that $\kappa_D=1$ in the decoupling limit, but $\kappa_D < 0$ if $\sin\alpha >0$. According to the results in the left panel of Fig.~\ref{fig:satb}, $\sin\alpha >0$ is allowed when the errors of the best fit are taken to be $2\sigma$ or 3$\sigma$. The situation in type-III is more complicated. From Eq.~(\ref{eq:kD}), we see that the factor in the brackets is always positive; therefore, the sign of the first term is the same as in the type-II case. However, since $\alpha \in [ -\pi/2, \pi/2]$, the sign of the second term in Eq.~(\ref{eq:kD}) depends on the sign of $\chi_F$.
For $\chi_F = -1$, even if $\sin\alpha < 0$, $\kappa_D$ can be negative when the first term is smaller in magnitude than the second. For $\chi_F=+1$, if $\sin\alpha >0$ and the first term dominates over the second, $\kappa_D < 0$ is still possible. To understand the values of $\kappa_D$ that remain available once the constraints are imposed, we present the correlation of $\kappa_U$ and $\kappa_D$ in Fig.~\ref{fig:cdcu}, where the panels from left to right stand for type-II, type-III with $\chi_F = -1$, and type-III with $\chi_F=+1$. In each plot, the results obtained by the $\chi$-square fitting are applied. The analogous correlation of $\kappa_V$ and $\kappa_D$ is presented in Fig.~\ref{fig:cdcv}. From these results, we find that, compared with the type-II model, negative $\kappa_D$ is more strictly limited in type-III, although a wider parameter space with $\sin\alpha > 0$ is allowed in type-III with $\chi_F = +1$. \begin{figure}[hptb] \includegraphics[width=0.32\textwidth]{figureswithouTopCH/Cd_Cu_xij_0.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/Cd_Cu_xij_m1.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/Cd_Cu_xij_p1.jpg} \caption{ Correlation of $\kappa_D$ and $\kappa_U$, where the left, middle, and right panels represent the allowed values in type-II, type-III with $\chi_F=-1$, and type-III with $\chi_F=+1$, respectively, and the results of Fig.~\ref{fig:satb} are applied. } \label{fig:cdcu} \end{figure} \begin{figure}[hptb] \includegraphics[width=0.32\textwidth]{figureswithouTopCH/Cd_CV_xij_0.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/Cd_CV_xij_m1.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/Cd_CV_xij_p1.jpg} \caption{ The legend is the same as that in Fig.~\ref{fig:cdcu}, but for the correlation of $\kappa_V$ and $\kappa_D$. } \label{fig:cdcv} \end{figure} Besides the scaling factors of the tree-level Higgs decays, $\kappa_{D,U}$ and $\kappa_V$, it is also interesting to understand the allowed values for the loop-induced processes in the 2HDM, e.g., $h\to \gamma\gamma$, $gg$, and $Z\gamma$. It is known that the differences in the associated couplings between $h\to \gamma \gamma$ and $h\to gg$ arise from the colorless $W$-, $\tau$-, and $H^\pm$-loops. By Eq.~(\ref{scaling2}), we see that the contributions of $\tau$ and $H^\pm$ are small; therefore, the main difference comes from the $W$-loop, in which $\kappa_V$ is involved. Using the $\chi$-square fitting approach with the experimental data and theoretical constraints as inputs, the allowed regions of $\kappa_\gamma$ and $\kappa_g$ in type-II and type-III are displayed in Fig.~\ref{fig:gaga}, where the panels from left to right are type-II, type-III with $\chi_F=-1$, and $+1$; the green, red, and black colors in each plot stand for the $68\%$, $95.5\%$, and $99.7\%$ CL, respectively. We find that, except that a slightly lower $\kappa_\gamma$ is allowed in type-II, the first two plots show similar results. The situation can be understood from Figs.~\ref{fig:cdcu} and \ref{fig:cdcv}, where $\kappa_U$ in both models is similar while $\kappa_V$ in type-II could be smaller in the region of negative $\kappa_D$; that is, a smaller $\kappa_V$ will lead to a smaller $\kappa_\gamma$. In the $\chi_F=+1$ case, the allowed values of $\kappa_\gamma$ and $\kappa_g$ spread over a wider region.
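To make the quantities shown in Figs.~\ref{fig:cdcu}--\ref{fig:cdcv} concrete, the sketch below evaluates the tree-level scaling factors of Eq.~(\ref{scaling3}) and the numerical loop decomposition of Eq.~(\ref{scaling2}). It is an illustration only, written under the identifications $\kappa_W=\kappa_V$, $\kappa_t=\kappa_c=\kappa_U$, and $\kappa_b=\kappa_\tau=\kappa_D$, with the small $\lambda_{hH^\pm H^\mp}$ contribution set to zero by default; the example point is arbitrary.
\begin{verbatim}
import math

def tree_kappas(alpha, beta, chi_F):
    """kappa_V, kappa_U, kappa_D in type-III; type-II is chi_F = 0."""
    kV = math.sin(beta - alpha)
    kU = (math.cos(alpha) / math.sin(beta)
          - chi_F * math.cos(beta - alpha) / (math.sqrt(2.0) * math.sin(beta)))
    kD = (-math.sin(alpha) / math.cos(beta)
          + chi_F * math.cos(beta - alpha) / (math.sqrt(2.0) * math.cos(beta)))
    return kV, kU, kD

def loop_kappas(kV, kU, kD, lam_hHH=0.0):
    """|amplitude| ratios for m_h = 125.36 GeV and m_H+- = 480 GeV."""
    k_ga = abs(1.268 * kV - 0.279 * kU + 0.0042 * kD  # W, t, b loops
               + 0.0034 * kU + 0.0036 * kD            # c, tau loops
               - 0.0014 * lam_hHH)                    # charged-Higgs loop
    k_zga = abs(1.058 * kV - 0.059 * kU + 0.00056 * kD
                + 0.00014 * kU - 0.00054 * lam_hHH)
    k_g = abs(1.078 * kU - 0.065 * kD - 0.013 * kU)   # t, b, c loops
    return k_ga, k_zga, k_g

# Arbitrary example point: sin(alpha) = 0.3, tan(beta) = 5, chi_F = +1.
kV, kU, kD = tree_kappas(math.asin(0.3), math.atan(5.0), +1.0)
print(kD < 0, loop_kappas(kV, kU, kD))  # wrong-sign kappa_D check
\end{verbatim}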
\begin{figure}[hptb] \includegraphics[width=0.32\textwidth]{figureswithouTopCH/ka_kg_xij_0.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/ka_kg_xij_m1.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/ka_kg_xij_p1.jpg} \caption{Correlation of $\kappa_\gamma$ and $\kappa_g$, where the left, middle, and right panels represent the allowed values in type-II, type-III with $\chi_F=-1$, and type-III with $\chi_F=+1$, respectively, and the results in Fig.~\ref{fig:satb} obtained by the $\chi$-square fitting are applied.} \label{fig:gaga} \end{figure} It is known that, except for the different gauge couplings, the loop diagrams for $h\to Z\gamma$ and $h\to \gamma\gamma$ are exactly the same. One can understand the loop effects from the numerical form of Eq.~(\ref{scaling2}). Therefore, we expect the correlation between $\kappa_{Z\gamma}$ and $\kappa_\gamma$ to be close to linear. We present the correlation between $\kappa_\gamma$ and $\kappa_{Z\gamma}$ in Fig.~\ref{fig:zga}, where the legend is the same as that of Fig.~\ref{fig:gaga}. From the plots, we see that in most of the allowed region $\kappa_{Z\gamma}$ is less than the SM prediction. Type-III with $\chi_F=-1$ receives a stricter constraint, and the change is within $10\%$. For $\chi_F=+1$, the deviation of $\kappa_{Z\gamma}$ from unity could be over $10\%$. From Run I data, the LHC has set an upper bound on $h\to Z\gamma$; at Run II this decay mode will be probed further. We give the predictions at the 13 TeV LHC for the signal strengths $\mu^{\gamma\gamma}_{ggF+tth}$ and $\mu^{Z\gamma}_{ggF+tth}$ in Fig.~\ref{fig:zga13}. With the theoretical and experimental constraints, $\mu^{Z\gamma}_{ggF+tth}$ is bounded and could be ${\cal O}(10\%)$ away from the SM at the 68$\%$ CL. \begin{figure}[h!] \includegraphics[width=0.32\textwidth]{figureswithouTopCH/kga_kzg_xij_0.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/kga_kzg_xij_m1.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/kga_kzg_xij_p1.jpg} \caption{The allowed regions in the $(\kappa_{\gamma}, \kappa_{Z\gamma})$ plane after imposing the theoretical and experimental constraints. The color coding is the same as in Fig.~\ref{fig:satb}.} \label{fig:zga} \end{figure} \begin{figure}[h!] \includegraphics[width=0.32\textwidth]{figureswithouTopCH/RZga_Rgaga_xij_0.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/RZga_Rgaga_xij_m1.jpg} \includegraphics[width=0.32\textwidth]{figureswithouTopCH/RZga_Rgaga_xij_p1.jpg} \caption{Correlation between $\mu^{\gamma\gamma}_{ggF+tth}$ and $\mu^{Z\gamma}_{ggF+tth}$ at $\sqrt{s} = 13$ TeV after imposing the theoretical and experimental constraints. The left, middle, and right panels represent the allowed values in type-II, type-III with $\chi_F=-1$, and type-III with $\chi_F=+1$, respectively, and the results in Fig.~\ref{fig:satb} obtained by the $\chi$-square fitting are applied.} \label{fig:zga13} \end{figure} \section{$t\to ch$ decay} In this section, we study the flavor-changing $t\to c h$ process in the type-III model. Experimentally, there have been intensive efforts to explore top FCNCs; the CDF, D0, and LEPII collaborations have reported bounds on them. At the LHC, with its rather large top-quark production cross section, ATLAS and CMS have searched for top FCNCs and placed limits on the branching fraction: $Br(t\to ch)< 0.82$\% for ATLAS \cite{ATLAStch} and $Br(t\to ch)< 0.56$\% for CMS \cite{CMStch}. Note that the CMS limit is slightly better than the ATLAS one.
CMS searched for $t\to ch$ in several channels, $h\to \gamma\gamma$, $WW^*$, $ZZ^*$, and $\tau^+\tau^-$, while ATLAS used only the diphoton channel. With the high-luminosity option of the LHC, the above limit is expected to improve to about $Br(t\to ch)< 1.5 \times 10^{-4}$ \cite{ATLAStch} for the ATLAS detector. From the Yukawa couplings in Eq.~(\ref{eq:LhY}), the partial width for the $t\to ch$ decay is given by \begin{eqnarray} \Gamma(t\to ch) &=& \left(\frac{\cos(\beta-\alpha)X_{23}^u}{\sin\beta}\right)^2\frac{m_t}{32\pi} \left( (x_c + 1)^2 - x^2_h\right) \nonumber \\ &\times & \sqrt{1-(x_h - x_c)^2}\sqrt{1-(x_h + x_c)^2}\,, \end{eqnarray} where $x_c = m_c/m_t$, $x_h =m_h/m_t$, and $X_{23}^u$ is a free parameter that dictates the FCNC effect. It is clear from the above expression that the partial width of $t\to ch$ is proportional to $\cos^2(\beta-\alpha)$. As seen in the previous section, in the case where $h$ is SM-like, $\cos(\beta-\alpha)$ is constrained by the LHC data to be rather small, and the $t\to ch$ branching fraction is accordingly limited. As we will see later, in the type-II 2HDM with flavor conservation the rate for $t\to ch$ is much smaller than in type-III~\cite{thc:Arhrib}. Since we assume that the charged Higgs is heavier than 400 GeV, the total decay width of the top contains only $t\to ch$ and $t\to bW$. With $m_h=125.36$ GeV, $m_t=173.3$ GeV, and $m_c=1.42$ GeV, the total width can be written as \begin{eqnarray} \Gamma_t = \Gamma^{SM}_t + 0.0017 \left(\frac{\cos(\beta-\alpha)X^u_{23}}{\sin\beta}\right)^2\,\,{\rm GeV}\,, \end{eqnarray} where $\Gamma^{SM}_t$ is the partial decay rate for $t\to Wb$ and is given by \begin{eqnarray} \Gamma^{SM}_t= \frac{G_F m_t^3}{8\pi \sqrt{2}}\left(1-\frac{m_W^2}{m_t^2}\right)^2 \left(1+2 \frac{m_W^2}{m_t^2}\right) \left(1-\frac{2 \alpha_s(m_t)}{3 \pi} \left(\frac{2\pi^2}{3}-\frac{5}{2}\right)\right)=1.43 \, {\rm GeV}\,,\nonumber \end{eqnarray} in which the QCD corrections have been included. Using the above numerical expressions together with the current limits from ATLAS and CMS, the limits on the $tch$ FCNC coupling are found to be \begin{eqnarray} \left(\frac{\cos(\beta-\alpha)X^u_{23}}{\sin\beta}\right) < 2.2 \quad\quad {\rm for} \quad \quad Br(t\to ch)< 8.2 \times 10^{-3}\,, \nonumber\\ \left(\frac{\cos(\beta-\alpha)X^u_{23}}{\sin\beta}\right) < 0.36 \quad\quad {\rm for} \quad \quad Br(t\to ch)< 5.6 \times 10^{-3}\,, \label{eq:bounds} \end{eqnarray} in agreement with~\cite{wshou}. We perform a systematic scan over the 2HDM parameters, as described in Eq.~(\ref{numbers}), taking the LHC and theoretical constraints into account. Although $X^u_{23}$ is a free parameter, in order to suppress the FCNC effects naturally, as stated earlier we adopt $X^u_{23} = \sqrt{m_t m_c}/v\, \chi^u_{23}$. Since the current experimental measurements only give an upper limit on $t\to h c$, $\chi^u_{23}$ is essentially limited by Eq.~(\ref{eq:bounds}) and could be as large as ${\cal O}(1-10^{2})$, depending on the allowed value of $\cos(\beta-\alpha)$. In order to use the constrained results obtained from the Higgs measurements together with the self-consistent parametrisation $X^u_{33}=m_t /v\, \chi_F$ used before, we assume $\chi^u_{23} = \chi_F = \pm 1$; under this assumption, our numerical results should be conservative. In Fig.~\ref{fig:brtch} (left) we illustrate the branching fraction of $t\to ch$ in 2HDM-III as a function of $\cos(\beta-\alpha)$. The LHC constraints within 1$\sigma$ restrict $\cos(\beta-\alpha)$ to the range $[-0.27,0.27]$. The branching fraction for $t\to ch$ is then slightly above $10^{-4}$.
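As a numerical cross-check of the expressions above, the following sketch (our illustration, not the analysis code) combines the quoted $\Gamma^{SM}_t$ and the $0.0017$ GeV coefficient into $Br(t\to ch)$ and inverts the relation to translate a branching-fraction limit into a bound on $g \equiv \cos(\beta-\alpha)X^u_{23}/\sin\beta$; since different normalisation conventions can be absorbed into the numerical coefficient, the resulting bounds need not coincide exactly with Eq.~(\ref{eq:bounds}).
\begin{verbatim}
import math

GAMMA_T_SM = 1.43  # GeV, t -> Wb width with QCD corrections (see above)

def br_t_to_ch(g):
    """Br(t -> ch) with Gamma(t -> ch) = 0.0017 g^2 GeV."""
    gamma_tch = 0.0017 * g * g
    return gamma_tch / (GAMMA_T_SM + gamma_tch)

def coupling_limit(br_max):
    """Invert br_t_to_ch: the bound on g implied by a Br limit."""
    return math.sqrt(br_max * GAMMA_T_SM / (0.0017 * (1.0 - br_max)))

for br in (8.2e-3, 5.6e-3, 1.5e-4):  # ATLAS, CMS, expected HL-LHC limits
    print(br, coupling_limit(br))
\end{verbatim}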
The current CMS and ATLAS constraint $Br(t\to ch)<5.6 \times 10^{-3}$ does not restrict $\cos(\beta-\alpha)$. The expected limit from the ATLAS detector with a high luminosity of 3000 fb$^{-1}$ is depicted as the dashed horizontal line. As one can see, the expected ATLAS limit is comparable to the LHC constraints within 1$\sigma$. In the right panel, we show the allowed parameter space in the $(\sin\alpha, \tan\beta)$ plane when the ATLAS expected limit $Br(t\to ch)< 1.5 \times 10^{-4}$ is applied. This plot should be compared to the right panel of Fig.~\ref{fig:satb}; it is then clear that this additional constraint only acts on the 3$\sigma$ region allowed by the LHC data. \begin{figure}[h!] \begin{center} \includegraphics[width=0.33\textwidth]{figuresWithTopCH/Br_cba.jpg} \includegraphics[width=0.32\textwidth]{figuresWithTopCH/sa_tb_with_Exp_limit_Brtch.jpg} \end{center} \caption{Left: branching ratio Br$(t\to ch)$ as a function of $\cos(\beta -\alpha)$; the two horizontal lines correspond to the current LHC limit (upper line) and the expected limit from ATLAS with 3000 fb$^{-1}$ of luminosity (dashed line). Right: allowed parameter space in type-III with the ATLAS expected limit $Br(t\to ch)< 1.5 \times 10^{-4}$. } \label{fig:brtch} \end{figure} In Figs.~\ref{fig:brtch1} and \ref{fig:brtch2}, we show the fitted branching fractions for $t\to ch$ (left), $t\to cH$ at $m_H=150$ GeV (middle), and $t\to cA$ at $m_A=130$ GeV (right) as functions of $\kappa_U$, where Fig.~\ref{fig:brtch1} is for $\chi_{F} = +1$ while Fig.~\ref{fig:brtch2} is for $\chi_{F}=-1$. In the case of $\chi_{F} = +1$, the fitted value of $\kappa_U$ at the 3$\sigma$ level is in the range $[0.6,1.18]$, and the branching fractions for $t\to ch, cH$ are less than $10^{-3}$ while $Br(t\to cA)$ slightly exceeds the $10^{-3}$ level. Similarly, for $\chi_{F} = -1$ the fitted value of $\kappa_U$ at the 3$\sigma$ level is in the range $[0.85,1.25]$, and the branching fractions for $t\to ch, cH, cA$ are of the same size as in the previous case. \begin{figure}[h!] \includegraphics[width=4.8cm,height=6cm]{figureswithouTopCH/Fig_tch.jpg} \includegraphics[width=4.8cm,height=6cm]{figureswithouTopCH/Fig_tcH.jpg} \includegraphics[width=4.8cm,height=6cm]{figureswithouTopCH/Fig_tcA.jpg} \caption{Branching ratios Br$(t\to ch)$ (left), Br$(t\to cH)$ (middle), and Br$(t\to cA)$ (right) as functions of $\kappa_U$ in type-III with $\chi_{F} = +1$, for $m_h=125.36$ GeV, $m_H = 150$ GeV, and $m_A = 130$ GeV.} \label{fig:brtch1} \end{figure} \begin{figure}[ht!] \includegraphics[width=4.8cm,height=6cm]{figureswithouTopCH/Fig_tchxijm1.jpg} \includegraphics[width=4.8cm,height=6cm]{figureswithouTopCH/Fig_tcHxijm1.jpg} \includegraphics[width=4.8cm,height=6cm]{figureswithouTopCH/Fig_tcAxijm1.jpg} \caption{The legend is the same as in Fig.~\ref{fig:brtch1}, but for $\chi_{F} = -1$. } \label{fig:brtch2} \end{figure} \section{Conclusions} To study the constraints from the 8 TeV LHC experimental data, we have performed a $\chi$-square analysis to find the most favorable regions of the free parameters in two-Higgs-doublet models. For comparison, we focus on the type-II and type-III models; the latter not only modifies the flavor-conserving Yukawa couplings but also generates scalar-mediated flavor-changing neutral currents at the tree level.
Although the only difference between type-II and type-III lies in the Yukawa sector, the new Yukawa couplings in type-III are associated with $\cos(\beta-\alpha)$ and $\sin(\beta-\alpha)$, so the modified $tth$ and $btH^{\pm}$ couplings change the constraints on the free parameters. In order to present the influence of the modified Yukawa couplings, we show the allowed values of $\sin\alpha$ and $\tan\beta$ in Fig.~\ref{fig:satb}, where the updated LHC data for $pp\to h \to f$ with $f=\gamma\gamma$, $WW^*/ZZ^*$, and $\tau^+\tau^-$ are applied, and other bounds are also included. From the results, we see that $\sin\alpha$ and $\tan\beta$ in type-III get an even stronger constraint if $\chi_F = -1$ is adopted; on the contrary, if we take $\chi_F=+1$, the allowed ranges of $\sin\alpha$ and $\tan\beta$ are wider. It has been pointed out that there exist wrong-sign Yukawa couplings to down-type quarks in the type-II model, i.e., $\sin\alpha >0$ or $\kappa_D <0$. From our study, we find that, apart from a slight shrinking of the allowed parameter regions, the situation for $\chi_F=-1$ is similar to the type-II case. For $\chi_F=+1$, although $\kappa_D <0$ is not yet completely excluded, it is strictly limited by current data. We show the analyses in Figs.~\ref{fig:cdcu} and \ref{fig:cdcv}. In these figures, one can also see the correlations with the modified Higgs couplings to the top quark, $\kappa_U$, and to the gauge bosons, $\kappa_V$. When the parameters are bounded by the observed channels, we show the influence on the unobserved channel $h\to Z\gamma$ by using the scaling factor $\kappa_{Z\gamma}$, which is defined as the ratio of the decay rate to the SM prediction. We find that the change of $\kappa_{Z\gamma}$ in type-III with $\chi_F=-1$ is less than $10\%$; however, with $\chi_F=+1$, the value of $\kappa_{Z\gamma}$ could be lower than the SM prediction by over $10\%$. We also show our predictions for the signal strengths $\mu_{\gamma\gamma}$ and $\mu_{Z\gamma}$ and their correlation at 13 TeV. The main difference between the type-II and type-III models is that the flavor-changing neutral currents in the former are only induced by loops, while in the latter they can occur at the tree level. We study the scalar-mediated $t\to c (h, H, A)$ decays in the type-III model and find that when all current experimental constraints are considered, $Br(t\to c(h, H) )< 10^{-3}$ for $m_h=125.36$ GeV and $m_H=150$ GeV, and $Br(t\to cA)$ slightly exceeds $10^{-3}$ for $m_A =130$ GeV. The detailed numerical analyses are shown in Figs.~\ref{fig:brtch}, \ref{fig:brtch1}, and \ref{fig:brtch2}. \section*{Acknowledgments} The authors thank Rui Santos for useful discussions. A.A. would like to thank NCTS for warm hospitality, where part of this work was done. The work of CHC is supported by the Ministry of Science and Technology of R.O.C. under Grant \#: MOST-103-2112-006-004-MY3. The work of M. Gomez-Bock was partially supported by UNAM under PAPIIT IN111115. This work was also supported by the Moroccan Ministry of Higher Education and Scientific Research MESRSFC and CNRST: "Projet dans les domaines prioritaires de la recherche scientifique et du d\'eveloppement technologique": PPR/2015/6.
\section{Introduction}\label{sec:introduction} Type Ic supernovae (SNe~Ic) are classified from their optical spectra as having no hydrogen or helium present \citep[for a review of supernova classification, see][]{F97}. They constitute $\sim 10$\% of the total number of SNe in the local universe \citep{Li:2011cl}. SNe~Ic are believed to be core-collapse events from either a massive Wolf-Rayet (WR) star that has lost its outer layers via a wind-loss mechanism \citep{Gaskell:1986ge}, or a less massive star whose envelope has been stripped by a binary companion \citep{Podsiadlowski:1992ij,Nomoto:1995ej}. For a recent review of the progenitors of all core-collapse SNe, see \citet{Smartt:2009kr}. One subgroup of SNe~Ic, referred to as broad-lined Type Ic supernovae (SNe~Ic-BL) or sometimes ``hypernovae,'' exhibits very high line velocities in the spectra, indicating an explosion with a high kinetic energy per unit mass. These objects have been linked to gamma-ray bursts (GRBs), initially with the observation that the broad-lined, energetic SN\,1998bw was coincident with the long-duration GRB\,980425 \citep{Galama:1998ea}. Subsequently, five other spectroscopically confirmed SNe have been identified with GRBs or X-ray flashes (XRFs) between redshifts $z$ of 0.03 and 0.2: GRB\,030329/SN\,2003dh \citep{Hjorth:2003jv,Stanek:2003ef,Matheson:2003ey}, GRB\,031203/SN\,2003lw \citep{Malesani:2004do,GalYam:2004hh,Thomsen:2004fq,Cobb:2004jj}, XRF\,060218/SN\,2006aj \citep{Pian:2006ho,Mirabal:2006fg,Sollerman:2006bv,Modjaz:2006dl,Cobb:2006jy,Ferrero:2006dm}, XRF\,100316D/SN\,2010bh \citep{Starling:2011jm,Chornock:2010ue,Cano:2011jl,Bufano:2012ke}, and GRB\,130702A/SN\,2013dx \citep{Schulze:2013grb,Cenko:2013grb,DElia:2013grb,Singer:2013aa}. There is also a large and growing number of cases where the optical afterglows of GRBs or XRFs exhibit features typical of (or consistent with) those of SNe~Ic-BL \citep[for example,][]{Soderberg:2005jb,Bersier:2006gu,Cano:2011cy,Berger:2011ee,Sparre:2011go,Melandri:2012ce,Xu:2013ww,Levan:2013vg,Jin:2013cf}. However, there are also many examples of high-energy SNe~Ic for which no associated GRB has been found, including SN\,1997ef \citep{Mazzali:2000cx} and SN\,2002ap \citep{Mazzali:2002bf,GalYam:2002fc,Foley:2003hp}. It has been suggested that all high-energy SNe~Ic form GRBs and that we do not observe the gamma-ray jet because of our viewing angle \citep{Podsiadlowski:2004dn}. This hypothesis is supported by the rates and measurements of the energetics \citep{Smartt:2009ge}, but not by radio observations \citep{Soderberg:2006hq}. The Palomar Transient Factory \citep[PTF;][]{Law:2009cq,Rau:2009fp} was an optical survey of the variable sky using a 7.3 square degree camera installed on the 48-inch Samuel Oschin telescope at Palomar Observatory. PTF conducted real-time analysis and had a number of follow-up programmes designed to obtain colours and light curves of detected transients from a variety of facilities \citep{GalYam:2011fv}. A major science goal of PTF was to conduct a SN survey free from host-galaxy bias and sensitive to events in low-luminosity hosts. Such a survey was particularly well suited to searching for SNe~Ic-BL, which appear to be more abundant in low-luminosity dwarf galaxies \citep{Arcavi:2010ky}. In this paper, we present optical photometry and spectra of PTF10qts, a SN~Ic-BL. Section \ref{sec:observations} describes the observations, which are analysed in Section \ref{sec:discuss}. We summarise our results in Section \ref{sec:conc}.
Throughout the paper, we assume $\Omega_M = 0.3$, $\Omega_{\Lambda} = 0.7$, and H$_0 = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$. \begin{figure} \centering \includegraphics[width=\columnwidth]{lightcurve_all_ABSMAG_newR_040613_col.eps} \caption{Light curve of PTF10qts from photometry taken on the P48 and P60 telescopes. The data points are given in Table \ref{tab:phot} and converted to absolute magnitudes. The solid line represents the $R$-band light curve of SN\,1998bw \citep{Galama:1999et}, the first GRB-SN, and the dot-dashed line shows the $R$-band light curve of SN\,2006aj \citep{Ferrero:2006dm}, which was accompanied by an X-ray flash. The dashed line shows SN\,2003jd \citep{Valenti:2008dz}. No K-corrections have been applied to the individual PTF10qts data, except for the long-dashed red line, which shows the PTF10qts $R_{\rm PTF}$ points K-corrected using spectra where possible.} \label{fig:lc_all} \end{figure} \begin{table} \caption{$R_{\rm PTF}$ photometry taken with the P48 and $Bgriz$ with the P60. Dates and phases are given in the rest frame relative to $R$-band maximum, and the photometry has been corrected for Galactic extinction ($E(B-V)=0.029$\,mag). These data are plotted in Figure \ref{fig:lc_all} after conversion to absolute magnitudes assuming a distance modulus of $\mu = 38.09$\,mag (corresponding to a distance of 414.9\,Mpc). The first line of the table is the upper limit of the last non-detection before the supernova was discovered. \label{tab:phot} } \begin{center} \begin{tabular} {c c c c c} \hline MJD & Band & Phase & Magnitude & Error \\ &&(days)\\ \hline 55410.288 & $R_{\rm PTF}$ & -13.03 & $>$20.1\\ 55413.260 & $R_{\rm PTF}$ & -10.31 & 21.25 & 0.13\\ 55413.304 & $R_{\rm PTF}$ & -10.26 & 21.26 & 0.20\\ 55416.160 & $R_{\rm PTF}$ & -7.65 & 19.87 & 0.12\\ 55416.204 & $R_{\rm PTF}$ & -7.61 & 19.67 & 0.04\\ 55419.175 & $R_{\rm PTF}$ & -4.88 & 19.13 & 0.03\\ 55419.218 & $R_{\rm PTF}$ & -4.84 & 19.19 & 0.02\\ 55422.170 & $R_{\rm PTF}$ & -2.14 & 18.87 & 0.03\\ 55422.214 & $R_{\rm PTF}$ & -2.10 & 18.89 & 0.02\\ 55425.200 & $R_{\rm PTF}$ & 0.64 & 18.79 & 0.03\\ 55425.242 & $R_{\rm PTF}$ & 0.68 & 18.82 & 0.02\\ 55428.199 & $R_{\rm PTF}$ & 3.39 & 18.87 & 0.02\\ 55428.252 & $R_{\rm PTF}$ & 3.44 & 18.88 & 0.02\\ 55431.236 & $R_{\rm PTF}$ & 6.18 & 18.98 & 0.02\\ 55431.280 & $R_{\rm PTF}$ & 6.22 & 18.92 & 0.03\\ 55438.160 & $R_{\rm PTF}$ & 12.52 & 19.34 & 0.04\\ 55438.204 & $R_{\rm PTF}$ & 12.56 & 19.36 & 0.05\\ 55442.192 & $R_{\rm PTF}$ & 16.22 & 19.62 & 0.03\\ 55442.240 & $R_{\rm PTF}$ & 16.26 & 19.62 & 0.03\\ 55472.110 & $R_{\rm PTF}$ & 43.65 & 21.20 & 0.10\\ 55472.162 & $R_{\rm PTF}$ & 43.70 & 20.99 & 0.14\\ 55478.111 & $R_{\rm PTF}$ & 49.15 & 20.97 & 0.12\\ 55478.154 & $R_{\rm PTF}$ & 49.19 & 21.26 & 0.17\\ 55483.097 & $R_{\rm PTF}$ & 53.72 & 21.20 & 0.29\\ 55483.141 & $R_{\rm PTF}$ & 53.76 & 22.19 & 0.66\\ \hline 55423.197 & $B$ & -1.19 & 19.68 & 0.06\\ 55423.211 & $B$ & -1.18 & 19.51 & 0.07\\ 55436.286 & $B$ & 10.81 & 20.79 & 0.27\\ 55450.204 & $B$ & 23.57 & 21.25 & 0.18\\ \hline 55417.201 & $g$ & -6.69 & 19.59 & 0.04\\ 55423.206 & $g$ & -1.19 & 19.15 & 0.04\\ 55423.259 & $g$ & -1.14 & 19.20 & 0.04\\ 55434.307 & $g$ & 8.99 & 20.02 & 0.21\\ 55436.289 & $g$ & 10.81 & 20.37 & 0.12\\ 55450.220 & $g$ & 23.58 & 21.12 & 0.13\\ 55506.082 & $g$ & 74.80 & 21.65 & 0.15\\ 55515.090 & $g$ & 83.06 & 21.60 & 0.35\\ \hline 55423.195 & $r$ & -1.20 & 19.05 & 0.04\\ 55423.210 & $r$ & -1.18 & 19.03 & 0.04\\ 55434.302 & $r$ & 8.99 & 19.34 & 0.08\\ 55450.203 & $r$ & 23.57 & 20.13 & 0.07\\ 55461.229 & $r$ & 33.67 & 20.66 & 0.23\\ 55513.092 & $r$ & 81.22
& 21.38 & 0.23\\ 55515.086 & $r$ & 83.05 & 21.06 & 0.19\\ \hline 55417.198 & $i$ & -6.69 & 19.70 & 0.06\\ 55423.193 & $i$ & -1.20 & 19.14 & 0.05\\ 55423.208 & $i$ & -1.18 & 19.12 & 0.05\\ 55450.201 & $i$ & 23.56 & 19.93 & 0.09\\ 55463.233 & $i$ & 35.51 & 20.59 & 0.29\\ 55482.180 & $i$ & 52.88 & 20.75 & 0.25\\ 55506.076 & $i$ & 74.79 & 21.42 & 0.31\\ 55513.090 & $i$ & 81.22 & 21.00 & 0.24\\ 55515.085 & $i$ & 83.05 & 21.42 & 0.35\\ \hline 55423.205 & $z$ & -1.19 & 19.33 & 0.13\\ 55423.251 & $z$ & -1.15 & 19.75 & 0.17\\ 55434.305 & $z$ & 8.99 & 19.49 & 0.26\\ 55436.287 & $z$ & 10.81 & 19.42 & 0.15\\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{ptf10qts_all_spec_FINAL_bw.eps} \caption{Photospheric spectra of PTF10qts. The phases are given in the rest frame relative to $R$-band maximum. The spectra are labelled showing important species discussed in the text. The number in brackets shows the offset applied to each spectrum.} \label{fig:spec-all} \end{center} \end{figure} \section{Observations}\label{sec:observations} \subsection{Optical Photometry}\label{sec:phot} On 2010 August 05.230 (UT dates are used throughout this paper), PTF10qts was discovered by the Palomar 48-inch telescope \citep[P48,][]{Rahmer:2008hs} at $R_{\rm PTF} \approx 20.3$\,mag\footnote{$\lambda_c = 6540$\,\AA}; its coordinates were $\alpha$(J2000) = 16$^h$41$^m$37.60$^s$, $\delta$(J2000) = +28$^\circ$58$'$21.1$''$. It was detected again in an image taken later that night, on 2010 August 05.305. It was not detected in an image taken three nights previously with a limit of $R_{\rm PTF} = 20.1$\,mag. A small, faint ($r = 21.1$\,mag) host galaxy, object J164137.53+285820.3, is visible in the SDSS catalogue $1.2''$ away from the supernova. Its redshift, $z=0.0907$, was measured from the H$\alpha$ and [\ion{O}{ii}] narrow emission lines in spectra of PTF10qts. At the host's distance (414.9\,Mpc), the offset of the supernova from the centre of this galaxy corresponds to a projected physical distance of 2.4\,kpc. No host is discernible in the P48 images. PTF10qts was also observed at the Palomar 60-inch telescope \citep[P60;][]{Cenko:2006im} in $B$, $g$, $r$, $i$, and $z$, although the cadence of these observations was lower than at the P48. For observations with the P48, measurements were performed by standard image subtraction, using a deep, good-seeing reference constructed from images taken before the SN exploded. The reference was matched astrometrically to field stars in each image containing the SN and subtracted, and point-spread function (PSF) photometry was then performed. For the P60 data, we employed direct aperture photometry without host-galaxy subtraction, as the host is very faint. The data are calibrated to SDSS magnitudes. A light curve is plotted in Figure \ref{fig:lc_all} and listed in Table \ref{tab:phot}. We give the phase in the rest frame relative to $R$-band maximum, determined from fitting the points around maximum light with a parabola. The MJD of the $R$-band maximum is $55424.6 \pm 0.5$ (16.6 August 2010). With the non-detection on 2 August 2010 (MJD = 55410.244), we can determine the date of explosion to within 3 days; thus, we constrain the rise time in the $R$ band to $12.7 \pm 1.5$\,days in the observed frame or $11.6 \pm 1.4$\,days in the rest frame.
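As an illustration of the peak-fitting step, the short sketch below fits a parabola to the near-peak $R_{\rm PTF}$ points of Table~\ref{tab:phot}. It is a simplified reconstruction (unweighted, five points only), not the fit actually used, so its peak date will differ somewhat from the quoted $55424.6 \pm 0.5$.
\begin{verbatim}
import numpy as np

# Near-peak R_PTF photometry from the table above (MJD, magnitude).
mjd = np.array([55419.175, 55422.170, 55425.200, 55428.199, 55431.236])
mag = np.array([19.13, 18.87, 18.79, 18.87, 18.98])

t = mjd - mjd[0]                  # days relative to the first epoch
a, b, c = np.polyfit(t, mag, 2)   # m(t) = a t^2 + b t + c; a > 0 at a peak
mjd_max = mjd[0] - b / (2.0 * a)  # vertex of the parabola = maximum light
mag_max = c - b ** 2 / (4.0 * a)
print(mjd_max, mag_max)
\end{verbatim}
In practice one would weight the points by their photometric errors and estimate the uncertainty on the peak date, for example by bootstrapping.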
The data points in Figure \ref{fig:lc_all} have been corrected for Milky Way extinction, $E(B-V) = 0.029$\,mag, using the dust maps of \citet{Schlegel:1998fw} and the extinction curve of \citet{Cardelli:1989dp}. The equivalent width of the \ion{Na}{i}~D line at zero redshift measured in the spectrum of PTF10qts taken at +7\,days is $0.15\pm0.12$\,\AA, which can be converted to a measurement of extinction via the relation of \citet{Turatto:2003}. The measured value of $E(B-V) = 0.024\pm0.019$\,mag is consistent with that determined from the dust maps (but see \citealt{Poznanski:2011cc}). This is also consistent with the value derived from \citet{Poznanski:2012bf} of $E(B-V) = 0.021^{+0.09}_{-0.014}$\,mag. We observe no \ion{Na}{i}~D at the redshift of the SN, so no correction has been applied for host-galaxy extinction. The correction for Milky Way extinction has been applied to the individual points shown in Figure \ref{fig:lc_all}, with no K-corrections (see below). Figure \ref{fig:lc_all} also shows the $R$-band light curves of three other SNe~Ic for comparison. SN\,1998bw is a broad-lined SN~Ic and the first GRB-SN; the values of $M_R$ are similar for both objects. We also include SN\,2006aj, which was accompanied by an X-ray flash, and SN\,2003jd, which appears to be spectroscopically similar to PTF10qts (see Section \ref{sec:spec}). From the raw $R$-band light curve, it appears that the SN reaches a more luminous absolute magnitude than SN\,1998bw, but with a light-curve width more similar to those of SN\,2003jd and SN\,2006aj. The long-dashed line shows the $R$-band light curve of PTF10qts with K-corrections based on the photospheric spectra. This confirms that the $R$-band light curve is slightly more luminous than that of SN\,1998bw, but the decline rate is faster. K-corrections are discussed in more detail in Section \ref{sec:rband}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{ptf10qts_-7_comp_FINAL_col.eps} \caption{A comparison of spectra before maximum light. Telluric features are marked. The spectra are labelled showing important species discussed in the text. The number in brackets shows the offset applied to each spectrum.} \label{fig:specpremax} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{ptf10qts_+7_comp_FINAL_col.eps} \caption{A comparison of spectra around +7 days after maximum light. Telluric features are marked. The number in brackets shows the offset applied to each spectrum.} \label{fig:spec-7} \end{center} \end{figure} \subsection{Optical Spectroscopy}\label{sec:spec} \begin{table*} \centering \caption{A summary of spectroscopic observations of PTF10qts. The phase is relative to $R$-band maximum (16.6 August 2010) and then converted to the rest frame. Note that for the Lick/Kast spectrum, the blue-side resolution is 4.1\,\AA\ (FWHM) and the red-side resolution is 9.1\,\AA\ (FWHM).
The velocities, as plotted in Figure \ref{fig:velocities}, are determined from the \ion{Si}{ii} 6355\,\AA\ line.} \label{tab:spec-obs} \begin{tabular} {c c c c c c} \hline Date & Phase & Telescope & Range & Resolution & Velocity \\ (UT) & (days) & & (\AA) & (\AA\,pix$^{-1}$) & (1000\,km\,s$^{-1}$)\\ \hline 2010-08-13 & -3.6 & P200/DBSP & 3505--10,100 & 5 & --\\ 2010-08-15 & -1.8 & Lick/Kast & 3480--10,000 & 4.1/9.1 & 19.1 $\pm$ 0.75\\ 2010-08-25 & +7.4 & TNG/DOLORES & 3360--8050 & 2.25 & 14.4 $\pm$ 0.5\\ 2010-09-02 & +14.8 & P200/DBSP & 3440--9850 & 2 & 12.0 $\pm$ 0.5\\ 2010-09-05 & +17.4 & P200/DBSP & 3440--9850 & 2 & 8.5 $\pm$ 0.75\\ 2010-09-09 & +21.2 & KPNO/RC Spec & 3620--8140 & 5.5 & --\\ 2011-04-27 & +231.7 & Keck/LRIS & 3100--10,200 & 2 & --\\ \hline \end{tabular} \end{table*} Follow-up spectroscopy of PTF10qts was carried out at a number of international observatories and is summarised in Table \ref{tab:spec-obs}. The SN was classified as an SN~Ic-BL based on its broad features, the lack of obvious hydrogen and helium, and the weak silicon in its spectra. The photospheric spectra are plotted in Figure \ref{fig:spec-all}, where all phases are given relative to $R$-band maximum for each object. Standard \textsc{IRAF} routines as well as custom \textsc{IDL} procedures were used to remove the bias and flat-field correct the spectra, and to create wavelength and flux solutions for the data. These were then applied to the frames, and calibrated spectra were extracted. All spectra of PTF10qts are publicly available via WISeREP\footnote{http://www.weizmann.ac.il/astrophysics/wiserep/} \citep{Yaron:2012bp}. We compare the spectra of PTF10qts to those of other known SNe~Ic and SNe~Ic-BL in Figures \ref{fig:specpremax}--\ref{fig:spec-21}. Again, all phases are given relative to the $R$-band maximum for that particular object. We divide our spectra into four periods of observation --- pre-maximum, $+7$\,days after $R$-band maximum, $+14$\,days, and $+21$\,days --- and consider each of these separately. Before maximum light, the spectrum of PTF10qts is dominated by broad, high-velocity absorption lines which are blended together. The absorptions at 4400\,\AA\ and 4800\,\AA\ are dominated by \ion{Fe}{ii}. \ion{Si}{ii} may be seen in the $-2$\,day spectrum and later as the elbow at 5800\,\AA, but it is blended with other features, making isolation of this feature and a measurement of the photospheric velocity difficult. We also note that, visually, the features at 4000--6000\,\AA\ of SN\,2006aj \citep{Pian:2006ho} are similar to those of the early phases of PTF10qts: the spectrum is blue and contains broad absorptions around 4000\,\AA. We do not see the absorption due to \ion{O}{i} in the 7000--7600\,\AA\ region that is visible in spectra of SN\,1998bw \citep{Patat:2001jt} or SN\,2004aw \citep{Taubenberger:2006cc}, another SN~Ic-BL. The $t = -2$\,day spectrum is redder than the $t=-4$\,day spectrum, reflecting the fact that the temperature is decreasing as the ejecta expand. Around a week past maximum brightness, the spectrum of PTF10qts resembles that of SN\,2003jd \citep{Valenti:2008dz}, both in the three broad absorption features in the blue and in the shape of the continuum in the red. Our next phase of spectroscopy is around two weeks past maximum light. As seen in Figure \ref{fig:spec-14}, the SN ejecta have expanded sufficiently that we can now see individual features, such as the \ion{Si}{ii} lines around 6200\,\AA\ and the strong \ion{Ca}{ii} absorption at 8100\,\AA.
The velocity of the \ion{Ca}{ii} near-infrared triplet is $\sim$\,18,000\,km\,s$^{-1}$ (measured from the blueshift of the feature's minimum), which is higher than for both SN\,2003jd and SN\,2004aw, as shown in the figure. In the blue we also see absorption caused by \ion{Mg}{ii}, \ion{Ca}{ii}, and \ion{Fe}{ii}. Visually, the spectra retain their similarity to those of SN\,2003jd, SN\,2004aw, and, to a lesser degree, SN\,1998bw, although \ion{O}{i} is still absent; this may be indicative of a smaller ejecta mass. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{ptf10qts_+14_comp_FINAL_col.eps} \caption{A comparison of spectra around +14 days after maximum light. The spectra are labelled showing important species discussed in the text. The number in brackets shows the offset applied to each spectrum.} \label{fig:spec-14} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{ptf10qts_+21_comp_FINAL_col.eps} \caption{A comparison of spectra around +21 days after maximum light. The spectra are labelled showing important species discussed in the text. The number in brackets shows the offset applied to each spectrum.} \label{fig:spec-21} \end{center} \end{figure} The final spectrum of PTF10qts taken during the photospheric phase is shown in Figure \ref{fig:spec-21}. It is quite noisy, but visually the spectral evolution continues to be similar to that of SN\,2003jd and SN\,2004aw. There may be slight absorption from \ion{O}{i} visible in these later spectra. This raises the possibility of a sequence of oxygen masses in SNe~Ic-BL, ranging from strong oxygen features in supernovae such as SN\,2004aw, through objects like SN\,2003jd, to objects like PTF10qts which show no oxygen. From this spectral comparison, we conclude that PTF10qts is not a good match to any single well-observed SN~Ic-BL over its entire evolution, although at some phases there appear to be reasonable matches to other known SNe~Ic-BL. PTF10qts lacks the very high velocities (i.e., energy per unit mass) of SN\,1998bw, and the spectral features (related to element abundances) do not match those seen in the lower-velocity examples of SN\,2003jd, SN\,2004aw, and SN\,2006aj. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{velocities_col.eps} \caption{A plot of the photospheric velocities of supernovae at different times after explosion. Different symbol shapes correspond to different types of supernova: diamond - IIb; square - Ic/Ic-BL; circle - GRB/SNe; triangle - XRF/SN Ic; cross - XRF/SN Ib; and bowtie - PTF10qts. This figure is augmented from the one produced in \citet{Mazzali:2008vn}.} \label{fig:velocities} \end{center} \end{figure} \section{Discussion}\label{sec:discuss} \subsection{Velocity Determination}\label{sec:vel} Before determining the physical parameters of this supernova, it is necessary to constrain its photospheric velocity at maximum light, which is typically characterised by the minimum of the blueshifted absorption of the \ion{Si}{ii} feature around $\sim 6150$\,\AA. However, as seen below, we use two different methods which involve two different dates of maximum light, and we do not have spectral coverage around these times to measure values directly from the spectra. An additional problem is that the supernova features are blended together, so the \ion{Si}{ii} line, which is thought to give a clean determination of the photospheric velocity, is not always visible as a separate feature.
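For reference, converting the wavelength of an absorption minimum into an expansion velocity is simple to script. The sketch below uses the relativistic Doppler formula; both the choice of formula and the example wavelength are our illustrative assumptions rather than measurements reported in this paper.
\begin{verbatim}
C_KMS = 2.99792458e5  # speed of light in km/s

def line_velocity(lam_min, lam_rest=6355.0):
    """Expansion velocity (km/s) from the rest-frame wavelength lam_min
    (Angstroms) of a blueshifted absorption minimum (relativistic Doppler)."""
    r = (lam_min / lam_rest) ** 2
    return C_KMS * (1.0 - r) / (1.0 + r)  # positive for a blueshift

# Example: a Si II 6355 minimum near 5960 A gives ~19,000 km/s,
# comparable to the -1.8 d entry in the table above.
print(line_velocity(5960.0))
\end{verbatim}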
In Figure \ref{fig:velocities}, we show the velocities of PTF10qts (bowtie) compared to a number of other types of supernovae, including the two we will use as analogues, SN\,1998bw and SN\,2006aj. We can define the velocity to use in the later analysis in different ways, such as the velocity at maximum light in the $R$ band or the velocity at the maximum of the bolometric light curve. The time between explosion and maximum varies between supernovae, but if we wanted a uniform time, we could also take a fixed date after explosion. Applying these three methods to PTF10qts reveals that both the $R$-band and bolometric maxima fall between the first two velocity measurements from the spectra. We interpolate linearly between the velocities measured at $-1.8$\,days and $+7.4$\,days as given in Table \ref{tab:spec-obs} and assign PTF10qts a photospheric velocity of $17{,}000 \pm 1500$\,km\,s$^{-1}$. Owing to the small difference in rise times between the bolometric and $R$-band light curves of SN\,1998bw and SN\,2006aj, we use two different velocities for these objects in the following sections and assign an uncertainty of $\pm 1000$\,km\,s$^{-1}$ to each. We have been conservative with the velocity errors, but they are only a small contribution to the final error on the physical parameters we derive below. \subsection{$R$ Band as a Proxy for Bolometric}\label{sec:rband} We first attempt to use the $R$-band light curve, which has the best phase coverage, to estimate some of the physical parameters of the SN explosion. It has also been suggested that the $R$ band can be used as a proxy for the bolometric light curve when calculating the physical parameters of the explosion \citep{Drout:2011iw}. We employ a well-studied example as an analogue and scale the physical parameters based on the modeling of that object. Ideally, the analogue would match both the light curve and the spectrum. This is particularly important for SNe~Ic-BL, as the kinetic energy is dominated by the broadest parts of the lines. These features are usually blended; thus, as well as matching the velocities, it is important to match the spectra to reduce the error when scaling the parameters. Unfortunately, as discussed above, there is no single good analogue of PTF10qts. Instead, we take two examples for which bolometric light curves and other physical parameters are well modeled, and we use their properties to estimate the ejecta mass $M_{\rm ej}$, kinetic energy $E_{\rm K}$, and nickel mass $M(^{56}\textrm{Ni})$ of PTF10qts. We adopt SN\,1998bw, as this is the most similar in absolute $R$ magnitude to PTF10qts at maximum light ($R=-19.16$\,mag for SN\,1998bw compared to $R_{\rm PTF}=-19.30$\,mag for PTF10qts), and also SN\,2006aj, which is similar spectroscopically. Radiative transfer models of their light curves have been developed to derive the physical parameters of these explosions \citep{Nakamura:2001gw,Mazzali:2006bm}. In order to compare PTF10qts properly with existing samples in the literature, we use photometry and spectra to measure $R-I$ colours and obtain the transformation from magnitudes in $R_{\rm PTF}$ to $R$ as in \citet{Ofek:2012aa} and \citet{Jordi:2006do}. We find that, given the sparse light-curve coverage, it is not possible to infer a relationship between $R-I$ and phase. We therefore assume a constant value of $R-I = 0.24\pm0.12$\,mag, which is the mean of all the measured values. This corresponds to $R = R_{\rm PTF} - 0.14\pm0.01$\,mag, where the quoted uncertainty comes only from the colour term.
Given the redshift of PTF10qts, the observed $R$ band is very different from the observed $R$ band of the local comparison SNe we have used. To compensate for this, we calculate K-corrections using the spectra of PTF10qts following \citet{Humason:1956gc}. The spectra acquired on 25 August (TNG) and 9 September (KPNO) fall short of covering the full $R$ band by a few hundred Angstroms when shifted to the rest frame. Therefore, at wavelengths longer than their red end, we assumed that their behaviour is similar to that of the spectra taken on 15 August (Lick) and 2 September (P200), respectively, based on the similarity of the spectra at bluer wavelengths. We then interpolated the measurements to obtain K-corrections at 0 and +15 days relative to $R$-band maximum. The calculated values are $-0.174$ and $-0.027$\,mag, respectively. We interpolated the $R$-band light curve of PTF10qts to obtain final values of $R(0) = -19.27\pm0.06$ and $R(15) = -18.69\pm0.06$\,mag. The uncertainties include measurement errors, uncertainties in the K-correction, and the conversion from $R_{\rm PTF}$ to $R$. We therefore find $\Delta m_{15}(R) = 0.58\pm0.08$\,mag for PTF10qts. This is similar to that of SN\,1998bw ($\Delta m_{15}(R) = 0.56$\,mag), but much smaller than that of SN\,2006aj ($\Delta m_{15}(R) = 0.86$\,mag). Note that the K-corrected values in $R$ differ from the light curve for PTF10qts shown in Figure \ref{fig:lc_all}, as that is for $R_{\rm PTF}$. Following \citet{Arnett:1982cv}, we have the following relations for an SN at maximum light: \begin{eqnarray} M_{\rm ej} &\propto & \tau^2 v_{\rm phot},~{\rm and} \label{eq:mass} \\ E_{\rm K} & \propto& \tau ^2 v_{\rm phot}^3, \label{eq:energy} \end{eqnarray} \noindent where $\tau$ is the light-curve width, which is proportional to $1/\Delta m_{15}(R)$, and $v_{\rm phot}$ is the photospheric velocity. We have chosen to use $\Delta m_{15}(R)$ instead of $\tau$ for this measurement because of the uncertainty in the K-corrections before the first epoch of spectroscopy. For $v_{\rm phot}$ we adopt the values of 15,500\,km\,s$^{-1}$ for SN\,2006aj and 18,000\,km\,s$^{-1}$ for SN\,1998bw. As discussed above, we have assumed $v_{\rm phot} = 17{,}000$\,km\,s$^{-1}$ for PTF10qts. With Equations \ref{eq:mass} and \ref{eq:energy}, we can calculate the physical parameters for PTF10qts assuming that it is analogous to either SN\,1998bw or SN\,2006aj\footnote{Note that for SN\,2006aj, the light-curve data for the $R$ band end at +12 days relative to $R$-band maximum, but the bolometric light curve extends to +14 days owing to the availability of data in other filters. We evaluated bolometric magnitudes from these by assuming a constant bolometric correction with respect to the $V$ band. We obtain the same result if we extrapolate just the $R$-band light curve over the longer interval.}. The resulting parameters are given in Table \ref{tab:phys-params}. \begin{table*} \centering \caption{A summary of the physical parameters assumed for SN\,1998bw and SN\,2006aj, and those derived for PTF10qts, first in the $R$ band and then using the bolometric light curve. $E_{\rm K}$ is the kinetic energy and $L_{i}$ is the peak luminosity in either $R$ or bolometric.
The parameters for SN\,1998bw and SN\,2006aj are either measured from their published light curves or, in the case of $E_{\rm K}$, $v$, $M_{\rm ej}$, and $M(^{56}\textrm{Ni})$, taken from the modeling in \citet{Nakamura:2001gw} and \citet{Mazzali:2006bm}, respectively.} \label{tab:phys-params} \begin{tabular} {c c c c c} \hline Parameter & SN\,1998bw & SN\,2006aj & PTF10qts& PTF10qts\\ &&& SN\,1998bw-like & SN\,2006aj-like \\ \hline \multicolumn{5}{c}{$R$ band}\\ \hline $\Delta m_{15}(R)$ (mag)& 0.56 & 0.86 &0.58$\pm$0.18 & 0.58$\pm$0.18 \\ $v$ (km\,s$^{-1}$)& 19,000$\pm$1000 &15,000$\pm$1000 &\multicolumn{2}{c}{17,000$\pm$1500}\\ $L_{R}$ (ergs\,s$^{-1}$) & 1.87 $\times 10^{42}$& 1.08$\times 10^{42} $& \multicolumn{2}{c}{(2.10$\pm$0.05) $\times 10^{42}$}\\ $t_R$ (days) & 17 & 10.5 & \multicolumn{2}{c}{11.6$\pm$1.3}\\ $M_{\rm ej}$ (M$_{\odot}$)& 10$\pm$1 & 1.8$\pm$0.8 & 8.3$\pm$2.6 &4.3$\pm$1.3 \\ $E_{\rm K}$ (ergs) & (50$\pm$10) $\times 10^{51}$ & (2$\pm$1) $\times 10^{51}$ & (33.4$\pm$14.4) $\times 10^{51}$ & (6.1$\pm$3.9) $\times 10^{51} $ \\ $M(^{56}\textrm{Ni})$ (M$_{\odot}$) & 0.43$\pm$0.05 & 0.2$\pm$0.04 & 0.34$\pm$0.09 & 0.42$\pm$0.08 \\ \hline \multicolumn{5}{c}{Bolometric}\\ \hline $\tau$ (days) & 21.7$\pm$0.5 & 16.6$\pm$0.5 & \multicolumn{2}{c}{16.8$\pm$1}\\ $v$ (km\,s$^{-1}$)& 20,000$\pm$1000 &16,000$\pm$1000 &\multicolumn{2}{c}{17,000$\pm$1500}\\ $L_{\rm bol}$ (ergs\,s$^{-1}$) & 8.32 $\times 10^{42}$ & 5.58$\times 10^{42} $& \multicolumn{2}{c}{(7.7$\pm$1.4) $\times 10^{42}$}\\ $t_{\rm bol}$ (days)& 15 & 9.6 & \multicolumn{2}{c}{13.4$\pm$2.3}\\ $M_{\rm ej}$ (M$_{\odot}$) &10$\pm$1 & 1.8$\pm$0.8 & 5.1$\pm$0.9 & 2.0$\pm$0.3\\ $E_{\rm K}$ (ergs) & (50$\pm$10) $\times 10^{51} $ & (2$\pm$1) $\times 10^{51}$ & (18.5$\pm$6.6) $\times 10^{51}$ & (2.5$\pm$1.4) $\times 10^{51} $ \\ $M(^{56}\textrm{Ni})$ (M$_{\odot}$) & 0.43$\pm$0.05 & 0.2$\pm$0.04 & 0.36$\pm$0.1 & 0.36$\pm$0.08\\ \hline \end{tabular} \end{table*} We propagate the errors in $\Delta m_{15}(R)$, in the analogue values of $M_{\rm ej}$ and $E_{\rm K}$, and in the measured velocities through the equations to obtain an uncertainty for each parameter estimate. We note that the largest contribution to the error budget comes from the errors in the quantities of the analogues, not from anything measured from the PTF10qts light curve. There is a large discrepancy between the values of both quantities when using the two different analogues. The amount of nickel produced can be estimated from the peak bolometric luminosity following the assumptions of \citet{Arnett:1982cv}. Assuming a constant bolometric correction from the $R$ band, as in \citet{Drout:2011iw}, we can instead use the K-corrected magnitude in $R$. All three SNe have different rise times, so we introduce a correction to account for the varying number of $e$-folding times, primarily for $^{56}$Ni, which has a half-life of 6.08\,days, but also for the decay product $^{56}$Co ($t_{1/2} = 77.23$\,days), assuming that no $^{56}$Co is produced in the SN explosion itself.
For an $R$-band luminosity $L_R$ and a nickel mass $M(^{56}\textrm{Ni})$, we find the relation \begin{eqnarray} L_R \propto M(^{56}\mathrm{Ni}) \Bigg (\frac{E_{\mathrm{\footnotesize{Ni}}}} {\tau_{\mathrm{\footnotesize{Ni}}}}e^{-\frac{t_R}{\tau_{\mathrm{\footnotesize{Ni}}}}} + \frac{E_{\mathrm{\footnotesize{Co}}}}{\tau_{\mathrm{\footnotesize{Co}}}} \frac{\tau_{\mathrm{\footnotesize{Co}}}}{\tau_{\mathrm{\footnotesize{Co}}}-\tau_{\mathrm{\footnotesize{Ni}}}} \left[e^{-\frac{t_R}{\tau_{\mathrm{\footnotesize{Co}}}}} - e^{-\frac{t_R}{\tau_{\mathrm{\footnotesize{Ni}}}}} \right] \Bigg), \end{eqnarray} \noindent where $t_R$ is the rise time in the $R$ band, and $\tau_{\mathrm{Ni}}$ and $\tau_{\mathrm{Co}}$ are the respective mean lifetimes of $^{56}$Ni and $^{56}$Co, with $\tau_i = t_{1/2} / \ln 2$. $E_{\mathrm{\footnotesize{Ni}}}$ and $E_{\mathrm{\footnotesize{Co}}}$ are the energies released by a unit mass of Ni and Co, respectively; the energy per decay is 1.7\,MeV for $^{56}$Ni and 3.67\,MeV for $^{56}$Co. As for the parameters estimated in Section \ref{sec:rband}, we use the values measured for SN\,1998bw and SN\,2006aj to provide two estimates of the nickel mass, which we can then combine. The individual values for each SN are given in Table \ref{tab:phys-params}. This estimate is significantly different from the value obtained using the relationship in \citet{Drout:2011iw} ($\sim 0.2$\,M$_{\odot}$), despite the fact that PTF10qts is not unusual in either its $\Delta m_{15}(R)$ or $M_R$ values (see their Figure 22). This is because their relation relies purely on the absolute magnitude of the supernova at maximum and does not take into account differences in rise times. In this study we see that PTF10qts has a peak $R$-band magnitude similar to that of SN\,1998bw, but its rise time is $\approx 5.5$\,days shorter. This would imply a much reduced nickel production in PTF10qts, which is not reflected in the \citet{Drout:2011iw} estimate. The fact that we obtain an even lower value with the \citet{Drout:2011iw} formula is curious; however, it also predicts a lower nickel mass for SN\,1998bw, 0.34\,M$_{\odot}$. The value for SN\,2006aj is in good agreement with that obtained from the modelling (0.19\,M$_{\odot}$). We attribute this to the fact that SN\,1998bw has a much longer rise time than all of the supernovae used in \citet{Drout:2011iw}, whereas SN\,2006aj has a more typical rise time. The simplification of assuming $L \propto M(^{56}\mathrm{Ni}) / \tau$ is not appropriate for comparing supernovae with significantly different rise times, or where the light curves deviate from the parabolic shape for which $\tau \propto 1/\Delta m_{15}(R)$; Figures \ref{fig:lc_all} and \ref{fig:bolLC} show that this assumption does not hold for either the $R$-band or the bolometric light curves. These results clearly show that it is not possible to use just the $R$ band to determine the physical parameters of this supernova, and so we caution against extending the \citet{Drout:2011iw} relations to other supernovae, in particular where the rise time is poorly constrained or differs from $\approx$10--12 days. Instead we now focus on the generation of a bolometric light curve.
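Before doing so, we collect the scalings used in this section into a single sketch (ours, under the assumptions of Eqs.~\ref{eq:mass} and \ref{eq:energy} and the decay relation above, with the half-lives and decay energies quoted in the text). With the bolometric SN\,1998bw numbers from Table~\ref{tab:phys-params}, it reproduces the corresponding PTF10qts column to within rounding.
\begin{verbatim}
import math

LN2 = math.log(2.0)
TAU_NI = 6.08 / LN2     # 56Ni mean lifetime in days (t_1/2 = 6.08 d)
TAU_CO = 77.23 / LN2    # 56Co mean lifetime in days (t_1/2 = 77.23 d)
E_NI, E_CO = 1.7, 3.67  # energy per decay in MeV (values quoted above)

def q_dep(t):
    """Shape of the radioactive energy-deposition rate per unit 56Ni
    mass at time t (days): the bracket in the L_R relation above."""
    ni = (E_NI / TAU_NI) * math.exp(-t / TAU_NI)
    co = ((E_CO / TAU_CO) * TAU_CO / (TAU_CO - TAU_NI)
          * (math.exp(-t / TAU_CO) - math.exp(-t / TAU_NI)))
    return ni + co

def scale_to_analogue(m_ej_ref, e_k_ref, m_ni_ref,
                      tau, tau_ref, v, v_ref,
                      L_peak, L_peak_ref, t_rise, t_rise_ref):
    """Scale an analogue's (M_ej, E_K, M_Ni) to a new SN using
    M_ej ~ tau^2 v, E_K ~ tau^2 v^3 and the rise-time-corrected peak."""
    m_ej = m_ej_ref * (tau / tau_ref) ** 2 * (v / v_ref)
    e_k = e_k_ref * (tau / tau_ref) ** 2 * (v / v_ref) ** 3
    m_ni = m_ni_ref * (L_peak / L_peak_ref) * q_dep(t_rise_ref) / q_dep(t_rise)
    return m_ej, e_k, m_ni

# Bolometric scaling from SN 1998bw (values from the table above):
# M_ej [Msun], E_K [1e51 erg], M_Ni [Msun]; widths and rise times in
# days; velocities in km/s; peak luminosities in erg/s.
print(scale_to_analogue(10.0, 50.0, 0.43,
                        16.8, 21.7, 17000.0, 20000.0,
                        7.7e42, 8.32e42, 13.4, 15.0))
# -> approximately (5.1, 18.4, 0.36)
\end{verbatim}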
\subsection{Bolometric Light Curve}\label{sec:bolLC} We combine photometric and spectroscopic data to construct a pseudo-bolometric\footnote{We use the term pseudo-bolometric because the light curve we generate covers the UV to NIR only and cannot be described as truly bolometric, since it excludes contributions at wavelengths outside this region, particularly gamma-rays.} light curve, as this will remove the assumption that the bolometric corrections from the $R$ band for PTF10qts and either analogue are the same at all phases. Bolometric fluxes were computed from the six spectra by integrating their dereddened signal in the interval 4000--8500\,\AA. As when calculating the K-corrections, we extend the 25 August (TNG) and 9 September (KPNO) spectra in the red to cover this range of wavelengths. We have also computed bolometric fluxes from the photometry at all epochs in which at least three bands were covered. After correcting for Milky Way reddening, we converted them to fluxes according to \citet{Fukugita:1996jr}, and then splined and integrated them in the observed 4000--8500\,\AA\ range. In the rest frame, the red boundary of this integration interval corresponds to $\sim 7800$\,\AA; thus, we have increased all bolometric fluxes by 15\% to account for the ultraviolet (UV) and near-infrared contributions (based on comparisons to other SNe that have been observed accurately both in the optical and near-infrared). Considering the uncertainty related to this assumption and the lack of UV information, we associate an uncertainty of 20\% with each bolometric flux. When combining the data points generated by these two different routes, we noted that the spectroscopically-derived points were systematically offset by a small amount to brighter magnitudes than the photometrically-derived points. We attribute this offset to inconsistencies in the two methods used to derive the individual points. To align the spectroscopically-generated points, we used the bolometric light curve of SN\,1998bw (itself generated via the photometric route) and fitted it to just the photometrically-derived data points of PTF10qts, allowing a temporal ``stretch'' and a constant magnitude shift up and down. Treating this warped light curve as a template, we then used $\chi^2$ minimisation to apply a constant shift to the spectroscopically-derived data to bring them in line with the photometrically-derived points. The final bolometric light curve is reported in Figure \ref{fig:bolLC}, where phases are plotted relative to the date of bolometric maximum, which occurs 1.84 rest-frame days after $R$-band maximum. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{bol_mag_FINAL_col.eps} \caption{The bolometric light curve of PTF10qts calculated from spectroscopic points or from photometry. Also shown for comparison are light curves of SN\,1998bw, SN\,2006aj, SN\,2006aj $- 0.5$\,mag, and SN\,2003jd.} \label{fig:bolLC} \end{center} \end{figure} Comparing the shapes of bolometric light curves is another approximate way to examine the physical similarity of PTF10qts to other SNe Ic-BL: supernovae with similar physical properties will have similarly shaped light curves. In Figure \ref{fig:bolLC}, we show PTF10qts with the bolometric light curves of other SNe Ic-BL so that the dates of maximum align. We see that the bolometric light curve of PTF10qts is most similar to that of SN\,1998bw, although the later points of PTF10qts may decline slightly faster, implying a lower nickel mass in PTF10qts.
SN\,2006aj is also a good match around maximum if it is made brighter by 0.5\,mag, although the light curve is narrower, so we would expect a higher kinetic energy and nickel mass in PTF10qts than in SN\,2006aj. We can use these observations as a sanity check when deriving physical properties from the bolometric light curve. We also show that SN\,2003jd, which is a good match at some spectroscopic phases, is a poor match to the bolometric light curve before maximum brightness, again showing that spectroscopic similarity does not always mean that the physics of the supernova explosion is the same. We estimate the physical parameters using the relationships discussed in Section \ref{sec:rband}, but now using the bolometric quantities. With bolometric data significantly before maximum brightness, we can switch to using $\tau$, the light-curve width, instead of just the post-maximum $\Delta m_{15} \propto 1/\tau$, which we have shown to be only an approximation. We define $\tau$ to be the width of the light curve at 0.5\,mag below peak magnitude. This should better reflect the differences between the SN light curves because, as Figure \ref{fig:bolLC} shows, after maximum the slopes of SN\,1998bw, SN\,2006aj, and PTF10qts are very similar, but before maximum, they differ significantly. For PTF10qts, SN\,1998bw, and SN\,2006aj, we measure $\tau = (16.8\pm1,~21.7\pm0.5,~16.6\pm0.5)$\,days, respectively; by this measure PTF10qts is much less similar to SN\,1998bw and more like SN\,2006aj. We measure the quantities when using both the SN\,1998bw and SN\,2006aj bolometric light curves, and these results are given in Table \ref{tab:phys-params}. We again see how important it is to choose an analogue which matches both the spectroscopy and the light curve, as the estimates of the physical parameters based on SN\,1998bw and SN\,2006aj do not agree. This is due to the different values of $E_{\rm K}/M_{\rm ej}$ and of the mass of $^{56}$Ni for the two analogues. We take the weighted mean of the two analogues as the best estimate of the physics of PTF10qts: $M_{\rm ej} = 2.3 \pm 0.3$\,M$_{\odot}$ and $E_{\rm K} = (3.2 \pm 1.4)\times 10^{51}$\,ergs. We also derive a nickel mass of $M(^{56}\textrm{Ni}) = 0.36 \pm 0.07$\,M$_{\odot}$. The measurements of the ejecta mass and the kinetic energy are lower than those derived using just the $R$ band, and the nickel mass is slightly higher. We note that these estimates are similar to those for SN\,2010ah \citep{Mazzali:2013jn}, but the spectra are very different. To estimate the zero-age main sequence (ZAMS) mass of the progenitor, we use the models of \cite{Sugimoto:1980hz}, assuming a remnant mass of 2\,M$_{\odot}$ as they do. PTF10qts corresponds to a progenitor star with a ZAMS mass of $\sim 20\pm2$\,M$_{\odot}$. \subsection{Nebular Spectrum} We can also estimate the nickel mass from the nebular spectrum, which was obtained with LRIS at the Keck-I telescope 230 rest-frame days after $R$-band maximum. This is shown in Figure \ref{fig:neb-spec} with a continuum subtracted from it. Also shown is a synthetic spectrum. The observed spectrum has a low signal-to-noise ratio, so the resultant model fit parameters should not be used to draw any firm conclusions. We used a code for the synthesis of nebular spectra as described by \citet{Mazzali:2001gz}. The synthetic spectrum was obtained using $M(^{56}\textrm{Ni}) = 0.35\pm0.1$\,M$_{\odot}$, which is in good agreement with the estimate from the bolometric light curve.
The red part of the spectrum also appears to indicate a low oxygen mass ($\sim0.7$\,M$_{\odot}$) in the SN, which would support the lack of detection in the post-maximum spectra (Figure \ref{fig:spec-14}); however, the blueshifted profile of the [\ion{O}{i}] emission suggests that the line may not yet be optically thin. The oxygen mass may therefore be underestimated, although we tried to take this into account in the model by requiring a stronger line than the observed one. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{nebular_models_FINAL_col.eps} \caption{A spectrum of PTF10qts obtained at the Keck 10\,m telescope 230 days after $R$-band maximum. A continuum has been subtracted from the data to account for host-galaxy contamination. The dashed line is a fit to the spectrum.} \label{fig:neb-spec} \end{center} \end{figure} \subsection{Comparison to Other SNe~Ic} Table \ref{snicbl} contains a compilation of all SNe~Ic-BL published in the literature for which physical parameters have been derived, as well as a few intermediate cases between normal SNe~Ic and SNe~Ic-BL. Those SNe with which GRB events have been associated are marked by an asterisk. For SN\,2010bh we have used the models of \cite{Sugimoto:1980hz} to infer the progenitor properties from the published energetics. PTF10qts is unremarkable among this type of SN in terms of the kinetic energy and ejecta mass, but the nickel mass is toward the higher end of the observed range. To explore this more fully, in Figure \ref{fig:phys-params} we compare PTF10qts to the trends published by \citet{Mazzali:2013jn} for energetic SNe and hypernovae where all physical parameters and the progenitor mass have been determined. There appears to be a strong relation between the progenitor mass and the kinetic energy of the SN, and PTF10qts lies on this trend. For example, SN\,2006aj has the same progenitor mass, and a similar kinetic energy is derived from the bolometric light curve. The relationship between the progenitor mass and the mass of synthesised $^{56}$Ni is much looser, and PTF10qts lies away from the apparent trend, producing more nickel than would be expected for its progenitor mass. In fact, PTF10qts has an ejected nickel mass comparable to that of events classified as hypernovae. We thus call PTF10qts a nickel-rich Type Ic-BL SN. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{phys_param_plots_final_col.eps} \caption{A reproduction of the plots from \citet{Mazzali:2013jn} with the addition of new data. The objects in the plot are (in order of ascending progenitor mass) \emph{SN~IIP} -- SN\,1987A; \emph{SNe~IIb/Ib} -- SN\,1993J, SN\,2003bg; \emph{SNe~Ic/Ic-BL} -- SN\,1994I, SN\,2002ap, SN\,2010ah, SN\,2003jd, SN\,2004aw, SN\,1997ef; \emph{GRB/SNe~Ic-BL} -- SN\,2003dh, SN\,1998bw, SN\,2003lw; \emph{XRF/SN~Ic} -- SN\,2006aj; and \emph{XRF/SN~Ib} -- SN\,2008D. PTF10qts is shown as the bowtie symbol.} \label{fig:phys-params} \end{center} \end{figure} \subsection{Search for a GRB} \label{sub:search_for_GRB} Although PTF10qts is not spectroscopically similar to SN\,1998bw, it is still photometrically similar and the event was clearly energetic. We used Interplanetary Network (IPN) data to search for a possible GRB companion to PTF10qts in case $\gamma$-rays had been detected by any of the orbiting satellites. The IPN includes Mars Odyssey, Konus-Wind, RHESSI, INTEGRAL (SPI-ACS), Swift-BAT, Suzaku, AGILE, MESSENGER, and Fermi (GBM). The date of the PTF10qts explosion is uncertain; we know only the first detection of the SN, 5 August 2010.
The rise time of PTF10qts is estimated to be $12.7\pm1.5$\,days. We searched for a GRB around 16\,days before PTF10qts maximum light (allowing for any delay between a GRB and the emergence of the SN). This corresponds to a date range of 1--5 August 2010. During this period, six bursts were detected by the nine spacecraft of the IPN. During the same period there were also 14 unconfirmed bursts, which have been excluded from further analysis. The sample also excludes bursts from known sources such as anomalous X-ray pulsars and soft gamma repeaters. Of these six bursts, three were observed with the coded fields of view of the Swift-BAT or INTEGRAL IBIS instruments, which have a positional accuracy of several arcminutes. These bursts were inconsistent with the position of PTF10qts. Two were observed either by the Fermi GBM alone, or by the Fermi GBM and one or more near-Earth spacecraft. The GBM error contours are not circles, although they are characterised as such, and they have at least several degrees of systematic uncertainties associated with them. Since no other confidence contours are specified, it is difficult to judge accurately the probability that any particular GBM burst is associated with the SN. In this analysis, we have simply multiplied the $1\sigma$ statistical-only error radius by 3 to obtain a rough idea of the $3\sigma$ error contours. One further event was observed by Konus and MESSENGER; in this case the probability that the burst was due to PTF10qts is 0.04, which excludes this burst as coincident with the supernova. The total area of the localisations of the six bursts was $\sim 0.04 \times 4\pi$\,steradians. This implies that there is a very low probability of finding an unassociated $\gamma$-ray source coincident with our SN during the time window we are investigating. There is another approach to the probability calculation. Since only 0 or 1 GRBs in our sample can be physically associated with the SN, we can calculate two other probabilities. The first is the probability that, in our ensemble of six bursts, none is associated by chance with the SN. Let $P_i$ be the fraction of the sky which is occupied by the localisation of the $i^{th}$ burst. Then the probability that no GRB is associated with the SN is \begin{eqnarray} \label{eq:nogrb} P(\mathrm{No\:GRB}) &=& \prod_i (1 - P_i). \end{eqnarray} \noindent For our sample, this probability is 0.96. The second probability is that any one burst is associated by chance with the SN, and that all the others are not: \begin{eqnarray} \label{eq:onegrb} P(\mathrm{One\:GRB\:by\:chance}) &=& \sum_i P_i \prod_{j \not= i} (1-P_j). \end{eqnarray} \noindent For our sample, this probability is approximately 0.04. This analysis covers a very narrow range of dates for any potential GRB. If we extend the search period to the 30 days preceding the first optical detection of PTF10qts, there is still no statistically significant detection of any $\gamma$-rays associated with the SN event. In light of this, we assume that we have not detected any $\gamma$-rays associated with PTF10qts. \section{Conclusions}\label{sec:conc} We have presented optical follow-up data for the Type Ic-BL supernova PTF10qts, discovered at $z=0.0907$ by the Palomar Transient Factory. We find that the $R$-band light curve of PTF10qts is not a good representation of the bolometric light curve; hence, we used photometric and spectroscopic data to produce a pseudo-bolometric light curve from which to estimate the physical parameters of the SN explosion.
PTF10qts appears to be a SN~Ic-BL from a progenitor of $\sim20$\,M$_{\odot}$, which is a smaller mass than for some other SN~Ic-BL events, such as SN\,1998bw, SN\,2003dh, and SN\,2003lw, for which the progenitors are all believed to be $>35$\,M$_{\odot}$. However, PTF10qts produces a similar amount of $^{56}$Ni to these events, which are all associated with GRBs. A search of IPN data, though, found no evidence for $\gamma$-rays associated with the supernova event. PTF10qts falls on the general trends of SNe~Ic in terms of the relation between progenitor mass and kinetic energy, but for its ZAMS mass of $\sim20$\,M$_{\odot}$, it produced more $^{56}$Ni than would be expected. This is evidenced by its luminous light curve despite its narrower width compared with that of SN\,1998bw ($\tau = 16.8$\,days versus $\tau = 21.7$\,days). We note that the $^{56}$Ni masses we obtained by analogy with SN\,1998bw and SN\,2006aj using the $R$-band light curve (line 7 of Table \ref{tab:phys-params}) are different from those calculated via the bolometric light curve (line 14 of Table \ref{tab:phys-params}). This indicates that the $R$-band light curve is not a completely reliable proxy for the bolometric light curve, and the latter is preferable when evaluating physical parameters. We would caution against using physical relationships based on monochromatic light curves as anything other than a first approximation. This is because assumptions such as constant opacity, constant bolometric correction, and $L \propto 1/\tau$ are oversimplifications. In this study we have compared two methods using $R$-band and bolometric data. We find that the bolometric method is more suitable, but it is still only an approximation. A constraint on the time of explosion is required for this to provide anything other than a lower limit on the nickel mass. The physical parameters of a supernova explosion of this type can \emph{only} be determined with full modelling of the light curve and spectra. We encourage future observations of similar objects discovered early and with light curves and spectral coverage across the entire UV-optical-infrared range in order to better understand their nature. \section*{Acknowledgements} We acknowledge financial contributions from contract ASI I/016/07/0 (COFIS), ASI I/088/06/0, and PRIN INAF 2009 and 2011. PTF is a collaboration of Caltech, LCOGT, the Weizmann Institute, LBNL, Oxford, Columbia, IPAC, and UC Berkeley. Collaborative work between A.G. and P.A.M. is supported by a Minerva grant. The Weizmann PTF membership is supported by the ISF via grants to A.G. Joint work of A.G. and S.R.K. is supported by a BSF grant. A.G. also acknowledges support by grants from the GIF, EU/FP7 via ERC grant 307260, ``The Quantum Universe'' I-Core program by the Israeli Committee for planning and budgeting, the Kimmel award, and the Lord Sieff of Brimpton Fund. A.V.F.'s group at UC Berkeley has received generous financial assistance from Gary and Cynthia Bengier, the Christopher R. Redlich Fund, the Richard and Rhoda Goldman Fund, the TABASGO Foundation, and NSF grant AST-1211916. JMS is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-1302771. E.O.O. is incumbent of the Arye Dissentshik career development chair and is grateful for support by a grant from the Israeli Ministry of Science and the I-CORE Program of the Planning and Budgeting Committee and The Israel Science Foundation (grant No 1829/12).
We thank the very helpful staffs of the various observatories (Palomar, Lick, KPNO, TNG, Keck) at which data were obtained. The W. M. Keck Observatory is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA; it was made possible by the generous financial support of the W. M. Keck Foundation. M. T. Kandrashoff and J. Rex assisted with the Lick observations. We are grateful to the following contributors to the IPN for support and sharing their data: I.~G.~Mitrofanov, D.~Golovin, M.~L.~Litvak, A.~B.~Sanin, C.~Fellows, K.~Harshman, and R.~Starr (for the Odyssey team), R.~Aptekar, E.~Mazets, V.~Pal'shin, D.~Frederiks, and D.~Svinkin (for the Konus-Wind team), A.~von Kienlin and A.~Rau (for the INTEGRAL team), T.~Takahashi, M.~Ohno, Y.~Hanabata, Y.~Fukazawa, M.~Tashiro, Y.~Terada, T.~Murakami, and K.~Makishima (for the Suzaku team), T.~Cline, J.~Cummings, N.~Gehrels, H.~Krimm, and D.~Palmer (for the Swift team), and V.~Connaughton, M.~S.~Briggs, and C.~Meegan (for the Fermi GBM team). K.H.~acknowledges NASA support for the IPN under the following grants: NNX10AI23G (Swift), NNX09AV61G (Suzaku), NNX09AU03G (Fermi), and NNX09AR28G (INTEGRAL). \nocite{Iwamoto:2000dq} \nocite{Nakamura:2001gw} \nocite{Mazzali:2002bf} \nocite{Mazzali:2003df} \nocite{Valenti:2008dz} \nocite{Mazzali:2006dk} \nocite{Taubenberger:2006cc} \nocite{Drout:2011iw} \nocite{Mazzali:2006bm} \nocite{Pian:2006ho} \nocite{Young:2010gv} \nocite{Sahu:2009is} \nocite{Pignata:2011hc} \nocite{Berger:2011ee} \nocite{Corsi:2011fq} \nocite{Sanders:2012dc} \nocite{Bufano:2012ke} \nocite{Cano:2011jl} \nocite{OlivaresE:2012jf} \nocite{Deng:2005aa} \nocite{Milisavljevic:2013aa} \bibliographystyle{mn2e}
\section{Introduction and Main Goal} In \cite{Ram}, the authors give five continued fractions for certain ${\mathbb Z}$-linear combinations of zeta values, obtained and checked only numerically, as well as other linear combinations involving powers of $\pi$, Catalan's constant, etc... The purpose of the present paper is to show that these continued fractions are completely elementary. In fact, we explain three general methods for constructing them. In particular we give seven infinite families of such continued fractions. We also discuss analogous results involving $L$-values for Dirichlet characters of conductor $3$ and $4$. We use the following notation for a continued fraction $S$, which may differ from notation used in other papers in the literature: $$S=a(0)+b(0)/(a(1)+b(1)/(a(2)+b(2)/(a(3)+\cdots)))\;,$$ and we denote as usual by $p(n)/q(n)$ the $n$th convergent, so that $p(0)/q(0)=a(0)/1$, $p(1)/q(1)=(a(0)a(1)+b(0))/a(1)$, etc... When $a(n)$ and $b(n)$ are polynomials $A(n)$ and $B(n)$ for $n\ge1$, we will write the continued fraction as $S=[[a(0),A(n)],[b(0),B(n)]]$. For instance, the continued fraction $$\zeta(2)=2/(1+1/(3+16/(5+81/(7+256/(9+\cdots)))))$$ will simply be written as $\zeta(2)=[[0,2n-1],[2,n^4]]$. \smallskip We recall the following trivial result due to Euler: \begin{lemma} Let $f(n)$ be a nonzero arithmetic function, and set by convention $f(0)=0$. When the left-hand side converges, we have $$\sum_{n\ge1}\dfrac{z^n}{f(n)}=[[0,f(n)+zf(n-1)],[z,-zf(n)^2]]\;,$$ and in addition the $N$th partial sum of the series is equal to the $N$th convergent $p(N)/q(N)$ of the continued fraction. \end{lemma} As a trivial application, for $k\ge2$, we have the trivial continued fraction $$\zeta(k)=[[0,n^k+(n-1)^k],[1,-n^{2k}]]\;.$$ The second trivial lemma that we will use is the following: \begin{lemma} Let $(a(n),b(n))$ define a continued fraction with convergents $(p(n),q(n))$, and let $r(n)$ be an arbitrary nonzero arithmetic function with $r(0)=1$. Then if we set $a'(n)=r(n)a(n)$ and $b'(n)=r(n)r(n+1)b(n)$, the corresponding convergents $(p'(n),q'(n))$ are given by $(p'(n),q'(n))=r!(n)(p(n),q(n))$ with evident notation, and in particular $p'(n)/q'(n)=p(n)/q(n)$.\end{lemma} Thanks to the first lemma above, we can thus transform any series into a continued fraction, although the result is not really interesting. For instance, assume that I want a continued fraction (CF) for $\zeta(2)+\zeta(3)$: we have $\zeta(2)+\zeta(3)=\sum_{n\ge1}(n+1)/n^3$, so applying the first lemma to $f(n)=n^3/(n+1)$ and $z=1$, we get $$\zeta(2)+\zeta(3)=[[0,n^3/(n+1)+(n-1)^3/n],[1,-n^6/(n+1)^2]]\;,$$ and applying the second lemma to $r(0)=1$, $r(n)=n^2+n$ for $n\ge1$, we get $$\zeta(2)+\zeta(3)=[[0,2n^4-2n^3+2n-1],[2,-(n^8+2n^7)]]\;.$$ As mentioned, this is not very interesting, in particular because continued fractions involving $\zeta(3)$ should have $a(n)$ a polynomial of degree at most $3$, and $b(n)$ of degree at most $6$. \smallskip We can now state our goal more precisely; it is much wider than the simple proofs of the continued fractions given in \cite{Ram}. First, we set the following definition: \begin{definition} Let $k\ge0$ be an integer.
A \emph{rational period} of degree $k$ is the sum of a convergent series of the form $\sum_{n\ge1}\chi(n)f(n)$, where $\chi(n)$ is a periodic arithmetic function taking rational values, and $f\in{\mathbb Q}(x)$ is a rational function with rational coefficients whose denominator is of degree $k$.\end{definition} Two remarks concerning this definition: first, it is \emph{not} compatible with the definition of periods as given in \cite{Kon-Zag}. Second, one could allow the coefficients to be algebraic instead of rational, but this leads to a theory which is too general. Examples: $\log(2)$, $\pi$, and $L(\chi,1)$ for a nontrivial Dirichlet character all have degree $1$; more generally $\pi^k$ and $L(\chi,k)$ are rational periods of degree $k$. Note also that a ${\mathbb Q}$-linear combination of rational periods of degree $\le k$ is again a rational period of degree $\le k$. Thus, our goal will be as follows: find continued fractions $(a(n),b(n))$ for rational periods of degree $k$ where for $n$ sufficiently large $a(n)$ is a polynomial of degree at most $k$ and $b(n)$ of degree at most $2k$, which we abbreviate by saying that it has \emph{bidegree} at most $(k,2k)$. The prototypical ``trivial'' example is $S=\sum_{n\ge1}1/P(n)$ with $P$ a polynomial of degree $k\ge2$, and $S=[[0,P(n)+P(n-1)],[1,-P(n)^2]]$ is a continued fraction of the required bidegree $(k,2k)$, by Euler's lemma above, and similarly for $S=\sum_{n\ge1}(-1)^{n-1}/P(n)$. On the contrary, as mentioned above, such a continued fraction does not seem to exist for $\sum_{n\ge1}(n+1)/n^3$. \section{First Method: use of Polynomial Multipliers} \begin{proposition} Fix an integer $k\ge2$, and let $P\in{\mathbb Q}[x]$ be a nonzero polynomial with rational coefficients such that $P(x)$ divides $x^kP(x+1)+(x-1)^kP(x-1)$, and set $R(x)=(x^kP(x+1)+(x-1)^kP(x-1))/P(x)$. \begin{enumerate}\item We have the continued fraction expansion $$S=\sum_{n\ge1}\dfrac{1}{n^kP(n)P(n+1)}=[[0,R(x)],[1/P(0)^2,-n^{2k}]]\;,$$ which is a continued fraction of bidegree $(k,2k)$. \item If $P(x)$ and $P(x+1)$ are coprime polynomials and $d=\deg(P)\le k$, $S$ is a rational period of degree at most $k$. \end{enumerate} \end{proposition} {\it Proof. \/} (1). By Euler's lemma above we have $$S=[[0,n^kP(n)P(n+1)+(n-1)^kP(n-1)P(n)],[1,-n^{2k}P(n)^2P(n+1)^2]]\;.$$ I claim that $P(1)\ne0$: indeed, if $P(1)=0$ we deduce that $1^kP(2)=R(1)P(1)$, so $P(2)=0$, and the recursion $(n-1)^kP(n)=R(n-1)P(n-1)-(n-2)^kP(n-2)$ for $n\ge3$ implies that $P(n)=0$ for all $n$, so that $P$ has infinitely many roots and is identically zero, a contradiction. We can thus apply the second lemma to $r(n)=1/P(n)^2$ for $n\ge1$ and $r(0)=1$, and we obtain immediately $S=[[0,R(n)],[1/P(1)^2,-n^{2k}]]$; since $P(1)^2=P(0)^2$ by the symmetry $P(1-x)=(-1)^dP(x)$ recalled in the remarks below, this proves (1). \smallskip For (2), we first note that using the same proof as in (1) but using the recursion backwards, we have $P(0)\ne0$. We can thus write a partial fraction decomposition in the form $$\dfrac{1}{x^kP(x)P(x+1)}=\sum_{1\le j\le k}\dfrac{c_j}{x^j}+\dfrac{N(x)}{P(x)P(x+1)}\;,$$ where the $c_j$ are constants and $\deg(N(x))\le 2d-1$, where $d=\deg(P)$. Since by assumption $P(x)$ and $P(x+1)$ are coprime, by the extended Euclidean algorithm there exist polynomials $U$ and $V$ such that $U(x)P(x)+V(x)P(x+1)=N(x)$, and $U$ and $V$ can be chosen of degree less than or equal to $d-1$.
Thus $N(x)/(P(x)P(x+1))=U(x)/P(x+1)+V(x)/P(x)$, so $S$ is a rational period of degree at most $d\le k$.\qed\removelastskip\vskip\baselineskip\relax \smallskip {\bf Remarks} \smallskip \begin{enumerate}\item It is possible that the condition that $P(x)$ and $P(x+1)$ are coprime can be lifted. \item One can prove that $P(x)$ satisfies the identity $P(x+1)=(-1)^dP(-x)$, or equivalently $P(1-x)=(-1)^dP(x)$. I am grateful to ``Ilya Bogdanov'' from the MathOverflow forum for the proof of this fact.\end{enumerate} \medskip Exactly the same proposition with an identical proof applies to alternating sums: \begin{proposition} Fix an integer $k\ge2$, and let $P\in{\mathbb Q}[x]$ be a nonzero polynomial with rational coefficients such that $P(x)$ divides $x^kP(x+1)-(x-1)^kP(x-1)$, and set $R(x)=(x^kP(x+1)-(x-1)^kP(x-1))/P(x)$. \begin{enumerate}\item We have the continued fraction expansion $$S=\sum_{n\ge1}\dfrac{(-1)^{n-1}}{n^kP(n)P(n+1)}=[[0,R(x)],[1/P(0)^2,n^{2k}]]\;,$$ which is a continued fraction of bidegree $(k-1,2k)$. \item If $P(x)$ and $P(x+1)$ are coprime polynomials and $d=\deg(P)\le k$, $S$ is a rational period of degree at most $k$. \end{enumerate} \end{proposition} \smallskip The next section consists of searching for suitable polynomials $P(x)$ and writing down the corresponding rational period and continued fraction. \section{Examples} \begin{proposition} The condition of the proposition, namely that $P\in{\mathbb Q}[x]$ divides $x^kP(x+1)+(x-1)^kP(x-1)$, is satisfied in the following cases: \begin{enumerate} \item $k\equiv0\pmod2$ and $P(x)=2x-1$. \item $k\equiv-1\pmod3$ and $P(x)=3x^2-3x+1$. \item $k\equiv-1\pmod4$ and $P(x)=2x^2-2x+1$. \item $k\equiv-1\pmod6$ and $P(x)=x^2-x+1$. \item $k=5$ and $P(x)=5x^4-10x^3+19x^2-14x+4$. \end{enumerate} \end{proposition} {\it Proof. \/} In the first four cases, it is sufficient to check that any root of $P(x)=0$ is also a root of $x^kP(x+1)+(x-1)^kP(x-1)$. For instance in the first case $P(x)=2x-1$: for $a=1/2$ we have $P(a+1)=2$ and $P(a-1)=-2$, and indeed $2^{-k}\cdot 2+(-2)^{-k}\cdot(-2)=0$ when $k$ is even. The last case is done by a direct divisibility test.\qed\removelastskip\vskip\baselineskip\relax The same proof shows the following: \begin{proposition} The condition of the proposition, namely that $P\in{\mathbb Q}[x]$ divides $x^kP(x+1)-(x-1)^kP(x-1)$, is satisfied in the following cases: \begin{enumerate} \item $k\equiv1\pmod2$ and $P(x)=2x-1$. \item $k\equiv1\pmod4$ and $P(x)=2x^2-2x+1$. \item $k\equiv2\pmod6$ and $P(x)=x^2-x+1$. \end{enumerate} \end{proposition} Note that I have not found any examples other than the ones given in the above two propositions, but I may have missed some. \smallskip Thanks to these propositions, it is now just a matter of working out explicitly all the above examples, in other words of computing the partial fraction expansions of the expressions $1/(x^kP(x)P(x+1))$, which is routine so not given explicitly. In particular, we will see that in the non-alternating cases the sum $S$ is a ${\mathbb Q}$-linear combination of $1$ and of $\zeta(k)$ for $k\ge2$ of fixed parity, and in the alternating cases with in addition $\log(2)$. We now give the corresponding formulas, and give examples afterwards. The following corollary immediately follows from the above propositions: \vfill\eject \begin{corollary} By convention, set $\zeta(0)=\zeta^*(0)=0$, $\zeta^*(1)=\log(2)$, and $\zeta^*(k)=(2^{k-1}-1)\zeta(k)$ for $k\ge2$.
We have the following general continued fractions: \begin{align*}&\sum_{j=0}^{k-1}2^{2j}\zeta(2(k-j))=[[2^{2k-1},R_1(n)],[-1,-n^{4k}]]\;,\text{ with} \\ &R_1(x)=(x^{2k}(2x+1)+(x-1)^{2k}(2x-3))/(2x-1)\;,\\ &\sum_{j=0}^k(-3)^{3j}(\zeta(6(k-j)+2)+3\zeta(6(k-j)))=[[-(-3)^{3k+1}/2,R_2(n)],[1,-n^{12k+4}]]\;,\text{ with}\\ &R_2(x)=(x^{6k+2}(3x^2+3x+1)+(x-1)^{6k+2}(3x^2-9x+7))/(3x^2-3x+1)\;,\\ &\sum_{j=0}^k(-3)^{3j}(\zeta(6(k-j)+5)+3\zeta(6(k-j)+3))=[[(-3)^{3k+2}/2,R_3(n)],[1,-n^{12k+10}]]\;,\text{ with}\\ &R_3(x)=(x^{6k+5}(3x^2+3x+1)+(x-1)^{6k+5}(3x^2-9x+7))/(3x^2-3x+1)\;,\\ &\sum_{j=0}^k(-4)^j\zeta(4(k-j)+3)=[[(-4)^k,R_4(n)],[1,-n^{8k+6}]]\;,\text{ with}\\ &R_4(x)=(x^{4k+3}(2x^2+2x+1)+(x-1)^{4k+3}(2x^2-6x+5))/(2x^2-2x+1)\;,\\ &\sum_{j=0}^k(\zeta(6(k-j)+5)-\zeta(6(k-j)+3))=[[-1/2,R_5(n)],[1,-n^{12k+10}]]\;,\text{ with}\\ &R_5(x)=(x^{6k+5}(x^2+x+1)+(x-1)^{6k+5}(x^2-3x+3))/(x^2-x+1)\;,\\ &\sum_{j=0}^k2^{4j}\zeta^*(2(k-j)+1)=[[2^{4k},R_6(n)],[-2^{2k},n^{4k+2}]]\;,\text{ with}\\ &R_6(x)=(x^{2k+1}(2x+1)-(x-1)^{2k+1}(2x-3))/(2x-1)\;,\\ &\sum_{j=0}^k(-64)^j\zeta^*(4(k-j)+1)=[[(-1)^k2^{6k-1},R_7(n)],[2^{4k},n^{8k+2}]]\;,\text{ with}\\ &R_7(x)=(x^{4k+1}(2x^2+2x+1)-(x-1)^{4k+1}(2x^2-6x+5))/(2x^2-2x+1)\;,\\ &\sum_{j=0}^k2^{6j}(\zeta^*(6(k-j)+2)-4\zeta^*(6(k-j)))=[[2^{6k},R_8(n)],[2^{6k+1},n^{12k+4}]]\;,\text{ with}\\ &R_8(x)=(x^{6k+2}(x^2+x+1)-(x-1)^{6k+2}(x^2-3x+3))/(x^2-x+1)\;.\end{align*} \end{corollary} We now give corresponding examples: \begin{align*} &\zeta(4)+4\zeta(2)=[[8,2n^4-4n^3+10n^2-8n+3],[-1,-n^8]]\;,\\ &27\zeta(2)-3\zeta(6)-\zeta(8)=[[81/2,R(n)],[-1,-n^{16}]]\;,\text{ with}\\ &R(n)=2n^8-8n^7+46n^6-110n^5+178n^4-182n^3+118n^2-44n+7\;,\\ &\zeta(5)+3\zeta(3)=[[9/2,2n^5-5n^4+22n^3-28n^2+23n-7],[1,-n^{10}]]\;,\\ &4\zeta(3)-\zeta(7)=[[4,2n^7-7n^6+37n^5-75n^4+99n^3-77n^2+31n-5],[-1,-n^{14}]]\;,\\ &\zeta(3)-\zeta(5)=[[1/2,2n^5-5n^4+22n^3-28n^2+15n-3],[-1,-n^{10}]]\;,\\ &4\zeta(5)+11\zeta(3)=[[273/16,2n^5-5n^4+42n^3-58n^2+45n-13],[4,-n^{10}]]\;,\\ &3\zeta(3)+16\log(2)=[[16,5n^2-5n+3],[-4,n^6]]\;,\\ &15\zeta(5)+48\zeta(3)+256\log(2)=[[256,7n^4-14n^3+18n^2-11n+3],[-16,n^{10}]]\;,\\ &64\log(2)-15\zeta(5)=[[32,9n^4-18n^3+30n^2-21n+5],[-16,n^{10}]]\;,\\ &127\zeta(8)-124\zeta(6)+64\zeta(2)=[[64,R(n)],[128,n^{16}]]\;,\text{ with}\\ &R(n)=12n^7-42n^6+110n^5-170n^4+154n^3-82n^2+24n-3\;.\end{align*} \section{Second Method: use of the $\psi$ Function and Derivatives} Recall that $\psi(z)$ is the logarithmic derivative of the gamma function (most authors call $\psi$ the digamma function, and $\psi'$, $\psi''$, etc... the trigamma, tetragamma functions, but this is terrible terminology).
By orthogonality of characters, it is immediate to show that for $k\ge1$ $$\psi^{(k)}(r/m)=(-1)^{k-1}\dfrac{k!m^{k+1}}{\phi(m)}\sum_{\chi\bmod m}\overline{\chi}(r)L(\chi,k+1)\;,$$ and for $k=0$ the same formula is valid if we interpret $L(\chi_0,1)$ as $$L(\chi_0,1)=\sum_{d\mid m}\dfrac{\mu(d)}{d}\log(d)-\dfrac{\phi(m)\log(m)}{m}\;.$$ In particular, for $m=1$, $2$, $3$, $4$, and $6$, which are the values of $m$ for which $\phi(m)\le2$, we obtain the following table, where as usual we set $G=L(\chi_{-4},2)$, Catalan's constant, and $G_3=L(\chi_{-3},2)$: \bigskip \centerline{ \begin{tabular}{|c||c|c|c||} \hline $r/m$ & $\psi(r/m)+\gamma$ & $\psi'(r/m)$ & $\psi''(r/m)$ \\ \hline\hline 1 & $0$ & $\zeta(2)$ & $-2\zeta(3)$ \\ 1/2 & $-2\log(2)$ & $3\zeta(2)$ & $-14\zeta(3)$ \\ 1/3 & $-3\log(3)/2-\pi/(2\sqrt{3})$ & $4\zeta(2)+9G_3/2$ & $-26\zeta(3)-4\pi^3/(3\sqrt{3})$ \\ 2/3 & $-3\log(3)/2+\pi/(2\sqrt{3})$ & $4\zeta(2)-9G_3/2$ & $-26\zeta(3)+4\pi^3/(3\sqrt{3})$ \\ 1/4 & $-3\log(2)-\pi/2$ & $6\zeta(2)+8G$ & $-56\zeta(3)-2\pi^3$ \\ 3/4 & $-3\log(2)+\pi/2$ & $6\zeta(2)-8G$ & $-56\zeta(3)+2\pi^3$ \\ 1/6 & $-\log(432)/2-3\pi/(2\sqrt{3})$ & $12\zeta(2)+45G_3/2$ & $-182\zeta(3)-12\pi^3/\sqrt{3}$ \\ 5/6 & $-\log(432)/2+3\pi/(2\sqrt{3})$ & $12\zeta(2)-45G_3/2$ & $-182\zeta(3)+12\pi^3/\sqrt{3}$ \\ \hline \end{tabular}} \bigskip On the other hand, there exist many continued fractions for $\psi(z)$ and its derivatives. The ones for $\psi(z)$ itself are rather complicated, and those for $\psi^{(k)}(z)$ for $k\ge3$ are trivial transformations of the defining series, so the only remaining interesting ones are those for $\psi'(z)$ and $\psi''(z)$. This of course implies that we restrict to rational periods of degree two and three. We choose the nicest continued fractions, taken from \cite{Cuyt}: \medskip For $\psi'(z)$ we have $$\psi'(z)=[[0,(2z-1)(2n-1)],[2,n^4]]\;,$$ valid for $z>1/2$. However, from the trivial identity $\psi'(z)=\psi'(z+1)+1/z^2$, we can deduce infinitely many other continued fractions: $$\psi'(z)=[[\sum_{0\le j<k}1/(z+j)^2,(2z+2k-1)(2n-1)],[2,n^4]]\;,$$ now valid for $z>1/2-k$. Referring to the above table and choosing $z=1/3$, $2/3$, $1/4$, $3/4$, $1/6$, and $5/6$, and $k=0$, $1$, etc..., we obtain as many continued fractions as we like for $8\zeta(2)\pm9G_3$, $3\zeta(2)\pm4G$, and $8\zeta(2)\pm15G_3$. For instance, after simplifications: \begin{align*} 8\zeta(2)-9G_3&=[[0,2n-1],[12,9n^4]]\;,\\ 8\zeta(2)+9G_3&=[[18,10n-5],[12,9n^4]]\;,\\ 3\zeta(2)-4G&=[[0,2n-1],[2,4n^4]]\;,\\ 3\zeta(2)+4G&=[[8,6n-3],[2,4n^4]]\;,\\ 8\zeta(2)-15G_3&=[[0,4n-2],[4,9n^4]]\;,\\ 8\zeta(2)+15G_3&=[[24,8n-4],[4,9n^4]]\;.\end{align*} \medskip For $\psi''(z)$ we have $$\psi''(z)=[[0,(2n-1)(n^2-n+1+2z(z-1))],[-2,-n^6]]\;,$$ valid for $z>1/2$. However, from the trivial identity $\psi''(z)=\psi''(z+1)-2/z^3$, we can deduce infinitely many other continued fractions: $$\psi''(z)=[[-2\sum_{0\le j<k}1/(z+j)^3,(2n-1)(n^2-n+1+2(z+k)(z+k-1))],[-2,-n^6]]\;,$$ now valid for $z>1/2-k$. Referring to the above table and choosing $z=1/3$, $2/3$, $1/4$, $3/4$, $1/6$, and $5/6$, and $k=0$, $1$, etc..., we obtain as many continued fractions as we like for $39\zeta(3)\pm2\pi^3/\sqrt{3}$, $28\zeta(3)\pm\pi^3$, and $91\zeta(3)\pm6\pi^3/\sqrt{3}$. 
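All of these evaluations are easy to confirm numerically before any simplification. As a minimal sketch (ours, not taken from \cite{Cuyt}; the helper name and truncation depth are arbitrary), the following evaluates the $\psi''$ fraction at $z=3/4$, which is within its domain of validity $z>1/2$, and compares it with the tabulated value $\psi''(3/4)=-56\zeta(3)+2\pi^3$:
\begin{verbatim}
# Sketch: check psi''(z) = [[0, (2n-1)(n^2-n+1+2z(z-1))], [-2, -n^6]]
# at z = 3/4 against the table value psi''(3/4) = -56 zeta(3) + 2 pi^3.
from math import pi

def cf_eval(a, b, N):
    """a(0) + b(0)/(a(1) + b(1)/(... + b(N-1)/a(N))), evaluated bottom-up;
    this is exactly the N-th convergent p(N)/q(N)."""
    tail = a(N)
    for n in range(N - 1, -1, -1):
        tail = a(n) + b(n) / tail
    return tail

ZETA3 = 1.2020569031595943
z = 0.75
a = lambda n: 0.0 if n == 0 else (2*n - 1) * (n*n - n + 1 + 2*z*(z - 1))
b = lambda n: -2.0 if n == 0 else -float(n)**6

print(cf_eval(a, b, 50000))   # tends to -5.302633...
print(-56*ZETA3 + 2*pi**3)    # -5.302633...
\end{verbatim}
Since the convergence is only polynomial, a fairly deep truncation is needed for several correct digits.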
For instance, after simplifications: \begin{align*} 39\zeta(3)-2\pi^3/\sqrt{3}&=[[0,(2n-1)(9n^2-9n+5)],[27,-81n^6]]\;,\\ 39\zeta(3)+2\pi^3/\sqrt{3}&=[[81,(2n-1)(9n^2-9n+17)],[27,-81n^6]]\;,\\ 28\zeta(3)-\pi^3&=[[0,(2n-1)(8n^2-8n+5)],[8,-64n^6]]\;,\\ 28\zeta(3)+\pi^3&=[[64,(2n-1)(8n^2-8n+13)],[8,-64n^6]]\;,\\ 91\zeta(3)-6\pi^3/\sqrt{3}&=[[0,(2n-1)(9n^2-9n+13/2)],[9,-81n^6]]\;,\\ 91\zeta(3)+6\pi^3/\sqrt{3}&=[[216,(2n-1)(9n^2-9n+25/2)],[9,-81n^6]]\;. \end{align*} \section{Third Method: Bauer--Muir Acceleration} This very classical method is just as elementary as the previous ones, but the formulas are slightly more complicated. Let $(a(n),b(n))_{n\ge0}$ be a continued fraction with convergents $(p(n),q(n))$, and let $r(n)_{n\ge1}$ be any sequence (for now). For $n\ge1$ we define $$R(n)=a(n)+r(n)\text{ and }d(n)=r(n)R(n+1)-b(n)=r(n)(a(n+1)+r(n+1))-b(n)\;.$$ We make the following two essential assumptions: $R(1)=a(1)+r(1)\ne0$, and $d(n)\ne0$ for all $n\ge1$. We define: \begin{align*} A(0)&=a(0)+\dfrac{b(0)}{R(1)}\;,\quad B(0)=\dfrac{b(0)d(1)}{R(1)^2}\;,\quad A(1)=\dfrac{a(1)R(2)+b(1)}{R(1)}\;,\\ A(n)&=R(n+1)-r(n-1)\dfrac{d(n)}{d(n-1)}\text{ for $n\ge2$, and } B(n)=b(n)\dfrac{d(n+1)}{d(n)}\text{ for $n\ge1$.}\end{align*} The following result is easy to prove by induction: \begin{proposition} Let $(P(n),Q(n))$ be the convergents of the continued fraction defined by $(A(n),B(n))$. For $n\ge2$ we have $$(P(n),Q(n))=(p(n+1),q(n+1))+r(n+1)(p(n),q(n))\;.$$ In particular, if $p(n)/q(n)$ and $P(n)/Q(n)$ both tend to a limit as $n\to\infty$, these limits are equal.\end{proposition} This process is called Bauer--Muir acceleration, because if $r(n)$ is chosen appropriately, it accelerates the convergence of the continued fraction. An important fact is that if the accelerated formulas are simple enough, for instance when $d(n)$ is \emph{constant}, the acceleration process can be \emph{iterated}. This fact, combined with a suitable diagonal process, is the basis of Ap\'ery's initial proofs of the irrationality of $\zeta(2)$ and $\zeta(3)$. However, we will not consider this here. Let us consider some simple examples. \bigskip {\bf Example 1: $\log(2)$} \smallskip The trivial continued fraction for $\log(2)$, directly coming from the series $\log(2)=\sum_{n\ge1}(-1)^{n-1}/n$, is $\log(2)=[[0,1],[1,n^2]]$. Applying Bauer--Muir acceleration iteratively, we immediately obtain $$\log(2)=[[0,1],[1,n^2]]=[[1,3],[-1,n^2]]=[[1/2,5],[1,n^2]]=[[5/6,7],[-1,n^2]]\;,$$ and so on, the general formula being $$\log(2)=[[\sum_{1\le j\le k}(-1)^{j-1}/j,2k+1],[(-1)^k,n^2]]\;.$$ Note that this is \emph{not} the same as the trivial continued fraction obtained from $\sum_{n>k}(-1)^{n-1}/n$, since that continued fraction converges like $(-1)^n/n$, just as the initial one does, while the accelerated formula given above converges like $(-1)^n/n^{2k+1}$. \bigskip {\bf Example 2: $\pi^2/6$} \smallskip The trivial continued fraction for $\zeta(2)=\pi^2/6$, directly coming from the series $\pi^2/6=\sum_{n\ge1}1/n^2$, is $\pi^2/6=[[0,2n^2-2n+1],[1,-n^4]]$.
Applying Bauer--Muir acceleration iteratively, we immediately obtain \begin{align*}\pi^2/6 &=[[0,2n^2-2n+1],[1,-n^4]]=[[2,2n^2-2n+3],[-1,-n^4]]\\ &=[[3/2,2n^2-2n+7],[1,-n^4]]=[[31/18,2n^2-2n+13],[-1,-n^4]]\\ &=[[115/72,2n^2-2n+21],[1,-n^4]]=[[3019/1800,2n^2-2n+31],[-1,-n^4]]\\ &=[[973/600,2n^2-2n+43],[1,-n^4]]\;, \end{align*} and so on, the general formula being $$\pi^2/6=[[2\sum_{1\le j\le k}(-1)^{j-1}/j^2,2n^2-2n+k^2+k+1],[(-1)^k,-n^4]]\;.$$ Once again, this is not the tail of the series defining $\pi^2/6$, since it converges like $1/n^{2k+1}$. \bigskip {\bf Example 3: $\pi^2/6$ (again)} \smallskip Another trivial continued fraction for $\pi^2/6$, directly coming from the series $\pi^2/6=2\sum_{n\ge1}(-1)^{n-1}/n^2$, is $\pi^2/6=[[0,2n-1],[2,n^4]]$. Applying Bauer--Muir acceleration iteratively, we immediately obtain \begin{align*}\pi^2/6 &=[[0,2n-1],[2,n^4]]=[[1,6n-3],[2,n^4]]=[[5/4,10n-5],[2,n^4]]\\ &=[[49/36,14n-7],[2,n^4]]=[[205/144,18n-9],[2,n^4]]\\ &=[[5269/3600,22n-11],[2,n^4]]\;,\end{align*} and so on, the general formula being $$\pi^2/6=[[\sum_{1\le j\le k}1/j^2,(2k+1)(2n-1)],[2,n^4]]\;.$$ Note that the constant term of the continued fraction for $\sum_{j\ge1}1/j^2$ is the partial sum of the series $\sum_{j\ge1}(-1)^{j-1}/j^2$ and the constant term of the continued fraction for $\sum_{j\ge1}(-1)^{j-1}/j^2$ is the partial sum of the series $\sum_{j\ge1}1/j^2$. \bigskip {\bf Example 4: $G=L(\chi_{-4},2)$, Catalan's constant} \smallskip Here, we could take the trivial continued fraction for $G$, directly coming from the series $G=\sum_{n\ge1}(-1)^{n-1}/(2n-1)^2$, and apply iteratively Bauer--Muir, giving \begin{align*}G &=[[0,1,8(n-1)],[1,(2n-1)^4]]=[[1/6,7/3,24(n-1)],[16/9,(2n-1)^4]]\\ &=[[19/82,145/41,40(n-1)],[4096/1681,(2n-1)^4]]\;,\end{align*} and so on, but this is not pretty, first because $b(0)$ changes, and second because one needs to specify both $a(0)$ and $a(1)$ since $n-1$ vanishes for $n=1$. \smallskip A nicer continued fraction taken from \cite{Cuyt}, which in fact we are going to prove, is $G=[[1,8n^2-8n+7],[-1/2,-16n^4]]$. We thus obtain \begin{align*}G&=[[0,8n^2-8n+3],[1/2,-16n^4]]\\ &=[[1,8n^2-8n+7],[-1/2,-16n^4]]=[[8/9,8n^2-8n+19],[1/2,-16n^4]]\\ &=[[209/225,8n^2-8n+39],[-1/2,-16n^4]]\\ &=[[10016/11025,8n^2-8n+67],[1/2,-16n^4]]\;,\end{align*} and so on, the general formula being $$G=[[\sum_{1\le j\le k}(-1)^{j-1}/(2j-1)^2,8n^2-8n+4k^2+3],[(-1)^k/2,-16n^4]]\;,$$ and the continued fraction for $k=0$ being obtained by \emph{reverse} Bauer--Muir, with extremely slow convergence in $1/\log(n)$. But in turn these formulas \emph{prove} that the initial continued fraction converges to $G$, since if we set $S_k=\sum_{1\le j\le k}(-1)^{j-1}/(2j-1)^2$, which is the $k$th partial sum of the series defining $G$, the $k$th continued fraction is $S_k+(-1)^k/2/(4k^2+3-16/(4k^2+19-\cdots))$, and this clearly tends to $\lim_{k\to\infty}S_k=G$ as $k\to\infty$. \bigskip {\bf Example 5: $\zeta(3)$} \smallskip The trivial continued fraction for $\zeta(3)$, directly coming from the series $\zeta(3)=\sum_{n\ge1}1/n^3$, is $\zeta(3)=[[0,(2n-1)(n^2-n+1)],[1,-n^6]]$. 
Applying Bauer--Muir acceleration iteratively, we immediately obtain \begin{align*}\zeta(3) &=[[0,(2n-1)(n^2-n+1)],[1,-n^6]]=[[1,(2n-1)(n^2-n+5)],[1,-n^6]]\\ &=[[9/8,(2n-1)(n^2-n+13)],[1,-n^6]]\\ &=[[251/216,(2n-1)(n^2-n+25)],[1,-n^6]]\;, \end{align*} and so on, the general formula being $$\zeta(3)=[[\sum_{1\le j\le k}1/j^3,(2n-1)(n^2-n+2k^2+2k+1)],[1,-n^6]]\;.$$ \smallskip The reader can check that unfortunately, the method does not work (i.e., the formulas become extremely complicated) for the alternating sum giving $3\zeta(3)/4$, nor for $\zeta(k)$ for $k\ge4$, explaining in large part why Ap\'ery's method has not been extended to $\zeta(k)$ for $k\ge5$. Note that, on the contrary, Ap\'ery's method does work for a large number of other series, and will be the object of a future paper. \section{Conclusion} We have given three rather different methods to obtain infinitely many continued fractions for certain linear combinations of zeta and $L$ values. Note, however, that they are all polynomially convergent (i.e., in $C/n^k$ for some $k\ge1$), while really interesting continued fractions are exponentially or at least sub-exponentially convergent. As already mentioned, this will be the subject of a future paper \cite{Coh}. \bigskip
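As a final numerical illustration of this polynomial convergence (a small sketch of ours, not part of the text above; the names and depths are arbitrary), one can evaluate the $N$th convergent of the accelerated $\zeta(3)$ fractions of Example 5 for several $k$ and watch the error fall:
\begin{verbatim}
# Sketch: error of the N-th convergent of
#   zeta(3) = [[sum_{1<=j<=k} 1/j^3, (2n-1)(n^2-n+2k^2+2k+1)], [1, -n^6]]
# for several k, illustrating the speed-up from each Bauer--Muir iteration.
def cf_eval(a, b, N):
    """N-th convergent, evaluated bottom-up (as in the earlier sketch)."""
    tail = a(N)
    for n in range(N - 1, -1, -1):
        tail = a(n) + b(n) / tail
    return tail

ZETA3 = 1.2020569031595943

def zeta3_cf(k, N):
    a0 = sum(1.0 / j**3 for j in range(1, k + 1))
    a = lambda n: a0 if n == 0 else (2*n - 1) * (n*n - n + 2*k*k + 2*k + 1)
    b = lambda n: 1.0 if n == 0 else -float(n)**6
    return cf_eval(a, b, N)

for k in range(4):
    print(k, abs(zeta3_cf(k, 200) - ZETA3))  # error shrinks rapidly with k
\end{verbatim}
For $k=0$ the convergents are just the partial sums of $\sum 1/n^3$, with error of order $1/(2N^2)$; each acceleration step visibly improves the exponent, while the convergence remains polynomial throughout.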
\section*{Acknowledgments} I am grateful to Antonio Pich for the illuminating discussions during this work and for many important suggestions on the manuscript. Many thanks to Anjan S. Joshipura, Namit Mahajan, Saurabh D. Rindani, and Utpal Sarkar for valuable and critical feedback on this work. I thank Mehran Zahiri Abyaneh, Alessio Maiezza, and Rahul Srivastava for various discussions. This paper is dedicated to Aliza for her love, patience, and everlasting support. This work has been supported by the Spanish Government and ERDF funds from the EU Commission [Grants No. FPA2011-23778 and No. FPA2014-53631-C2-1-P, and CSD2007-00042 (Consolider Project CPAN)].
\section{Introduction} During disruptions on the largest operating tokamak, JET, the current profile, $K\equiv\mu_0j_{||}/B$, flattens on a time scale $\lesssim 1$~ms, which is orders of magnitude shorter than the time scale for resistive diffusion. The evidence \cite{Wesson:1990,de Vries:2016} for the flattening is a current spike, which is an increase in the net plasma current, and a drop in the plasma internal inductance, which is a measure of the radial width of $K$. The flattening of $K$ on a time scale orders of magnitude shorter than possible with resistive diffusion was found to require magnetic surface breakup and magnetic helicity conservation in a 1991 numerical study by Merrill, Jardin, Ulrickson, and Bell \cite{Jardin:1991}. The speed of surface breakup can be understood as a fast magnetic reconnection \cite{Boozer:ideal-ev}. A fast magnetic reconnection breaks magnetic surfaces on a time scale primarily determined by the properties of the evolution, not resistivity, and flattens the current profile on a time scale determined by the Alfv\'en speed. The spike in the current is an implication of a helicity-conserving current flattening on a time scale short compared to the resistive time scale. The physics of the flattening through shear Alfv\'en waves is the focus of this paper. The breakup of magnetic surfaces is of central importance to understanding the danger of runaway electrons to ITER \cite{Boozer-spikes:2016}. When magnetic field lines from a large fraction of the plasma volume can reach the walls, the energetic electrons that serve as a seed for electron runaway are quickly lost. The magnitude of the current spike is a measure of the volume in which magnetic surfaces have been destroyed \cite{Boozer-spikes:2016}. However, a thermal quench, a large drop in the electron temperature, precedes or accompanies the current spike, and the enhanced dissipation of magnetic helicity associated with the lower temperature reduces the magnitude of the spike and complicates its interpretation. An equation derived using a mean-magnetic-field approximation \cite{Boozer:surf-loss} could be used to study the spatial and temporal extent of the breakup of magnetic surfaces when reliable measurements of the plasma current, internal inductance, and the electron temperature are available. This analysis would be simpler and faster than that in \cite{Jardin:1991}. The mean-field approximation does not address the Alfv\'enic process that gives the flattening of $K$, which is the focus of this paper, but uses the helicity-conserving property of a fast magnetic reconnection \cite{Boozer:acc} to obtain a differential equation of the simplest physically-consistent form for the evolution of $K$. Section \ref{sec:background} gives background information on three topics: (1) fast magnetic reconnection, Section \ref{sec:fast}, (2) the phenomenology of tokamak disruptions, Section \ref{sec:phenomenology}, and (3) the drive and damping of Alfv\'en waves, Section \ref{sec:Alfven}. Those familiar with this material can go directly to Section \ref{sec:flattening}, which derives the equation for the Alfv\'en waves that flatten the current. Section \ref{sec:Monte-Carlo} explains how this equation can be solved for the current flattening using a Monte-Carlo method. Section \ref{sec:discussion} discusses the paper and its conclusions.
\section{Background information \label{sec:background} } \subsection{Fast magnetic reconnection \label{sec:fast} } Fast reconnection arises naturally when an evolving magnetic field depends on all three spatial coordinates \cite{Boozer:ideal-ev}. The magnetic field line velocity $\vec{u}$ of an ideal evolution can exponentially distort magnetic surfaces or more generally magnetic flux tubes. This distortion leads to a multiplication of the non-ideal effects by a factor that increases exponentially on a time scale determined by the ideal evolution. Large current densities are not required for a fast reconnection. Magnetic flux tubes are an essential concept for understanding magnetic fields that are smooth functions of the spatial coordinates. There need be no implication that the field is unusually strong in the interior of a flux tube, as is often the case in the astrophysical literature. The surface of a flux tube is formed by the field lines that pass through a particular closed curve---often taken to be a circle of radius $r_c(0)$. The cross-sectional shape of a flux tube distorts with distance $\ell$ along the tube. As $r_c(0)\rightarrow0$, the distortion becomes simple, an ellipse. Since $\vec{\nabla}\cdot\vec{B}=0$, the major $r_\ell(\ell)$ and minor $r_s(\ell)$ radii of the ellipse satisfy $r_\ell r_s=r_c^2$ when $r_c(\ell)$ is defined so $B(\ell)r_c^2(\ell)$ is constant. The exponentiation $\sigma_e(\ell)$ is defined by $r_\ell=r_c e^{\sigma_e}$ and $r_s=r_c e^{-\sigma_e}$. For all but special ideal field line flows $\vec{u}$, the cross-sectional distortion of a given flux tube becomes larger as time advances \cite{Boozer:ideal-ev}; typically $\sigma_e$ is approximately proportional to time. Resistive magnetic reconnection competes with the ideal evolution when the time required for resistive diffusion across the narrow dimension of a flux tube, $r_s^2\mu_0/\eta$, becomes comparable to the evolution time scale, $\tau_{ev}\equiv 1/|\vec{\nabla}\vec{u}| \approx r_c/u$, where $r_c$ is a characteristic initial dimension of the tube. The resistive time scale is defined by $\tau_\eta\equiv r_c^2\mu_0/\eta$, so resistive diffusion competes with the ideal evolution when $R_m\equiv \tau_\eta/\tau_{ev} = e^{2\sigma_e}$, and a fast magnetic reconnection occurs. $R_m$, the magnetic Reynolds number, is of order $10^4$ to $10^8$ in tokamaks and up to $10^{14}$ in the solar corona. The required current density to produce an exponentiation $\sigma_e$ is proportional to $\sigma_e$, or equivalently to $\ln(\sqrt{R_m})$, and not to $R_m$, as would be required for reconnection to compete with the evolution in the absence of exponentiation. In a two-dimensional ideal evolution, exponentially large distortions of flux tubes require an exponentially large change in the magnetic field strength, but no change in the magnetic field strength is required for exponentially large distortions in a three-dimensional ideal evolution \cite{Boozer:ideal-ev}. Magnetic reconnection can be studied in three dimensions ignoring the effect of exponentiation, but this is as misleading as ignoring the non-diffusive advective stirring of air when calculating the time it takes a hot radiator to warm a room. The ideal advection of air is a divergence-free flow, which causes tubes of air-flow to distort exponentially, just as magnetic flux tubes distort, which enhances diffusive mixing. Exponentiation changes the time scale for warming a room from several weeks to tens of minutes.
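As a rough quantitative illustration (a minimal sketch of ours, using only the relation $R_m=e^{2\sigma_e}$ above; nothing here is specific to a particular device), the number of e-folds needed for resistive diffusion to compete stays modest across the full range of magnetic Reynolds numbers just quoted:
\begin{verbatim}
# Sketch: exponentiations sigma_e at which resistive diffusion competes
# with an ideal evolution, from R_m = exp(2 sigma_e) => sigma_e = ln(R_m)/2.
from math import log

for Rm in (1e4, 1e6, 1e8, 1e10, 1e12, 1e14):
    print(f"R_m = {Rm:7.0e}  ->  sigma_e = {log(Rm) / 2:5.1f}")
\end{verbatim}
Even at the coronal value $R_m\sim10^{14}$, only $\sigma_e\approx16$ e-folds are required, which is why the required current density grows only logarithmically with $R_m$.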
In three-dimensional simulations of reconnection, one can verify that reconnection occurs where the exponentiation is large, as was done by Daughton et al \cite{Daughton:2014}. Numerical resolution limited the number of exponentiations that they could observe to $\sigma_e\lesssim8$, which implies their code can resolve the physics only when $R_m \lesssim 10^7$. A fast magnetic reconnection conserves magnetic helicity \cite{Boozer:acc} with even greater accuracy than the limit given in 1984 by Berger \cite{Berger:1984}. Helicity conservation requires an increase in the plasma current when the current profile, $K=\mu_0j_{||}/B$, is flattened \cite{Boozer-spikes:2016}. As will be discussed, the time scale for the flattening is determined by the time required for a shear Alfv\'en wave to propagate along magnetic field lines. To obtain a current spike on the observed sub-millisecond time scale, chaotic magnetic field lines must cross a large fraction of the $j_{||}/B$ profile and reach the edge of the plasma in of order a hundred toroidal transits. In JET, a shear Alfv\'en wave requires $\approx 3~\mu$s to make a full toroidal transit. A hundred transits is comparable to what is seen in independent numerical simulations of tokamak disruptions by Valerie Izzo \cite{Izzo:2020} and by Eric Nardon et al., the latter not yet published. The speed of the flattening rules out simple resistive diffusion as an explanation. The time scale for resistive diffusion of the current density using a cylindrical model is $\tau_j = (\mu_0/\eta)(a/2.40)^2$. The resistive diffusion coefficient is $\eta/\mu_0\approx 2\times10^{-2}/T^{3/2}$, where the temperature is in keV, distances are in meters, and times are in seconds. Plasma cooling precedes or accompanies current flattening. But, even at the lowest estimated plasma temperature of 10~eV, $\eta/\mu_0\approx 20$, and $\tau_j\approx9~$ms in JET, which has a minor radius $a\simeq1~$m. The flattening takes place on a sub-millisecond time scale. In the solar corona, it is the motion of the magnetic field lines on the photosphere that is thought to drive what is initially an ideal evolution, which ultimately leads to a fast magnetic reconnection. In tokamak disruptions, the ideal drive is an increasingly contorted annulus of magnetic surfaces between low order magnetic islands. These islands grow at a rate that appears to be consistent with the Rutherford rate \cite{Rutherford:rate}. As illustrated in de Vries et al \cite{de Vries:2016}, JET shows a sudden acceleration in the evolution: a Rutherford-like slow growth of non-axisymmetric magnetic fields is followed by a current spike and a drop in the internal inductance that evolve approximately three orders of magnitude faster. This is also much faster than the time scale of the observed subsequent current quench, $\sim20~$ms, which occurs after the thermal quench. \subsection{Phenomenology of tokamak disruptions \label{sec:phenomenology} } Tokamak disruptions have many causes. Sometimes they are initiated by intentional impurity injection, which produces strong radiative cooling early in the disruption and can result in strong currents of relativistic electrons that allow studies of the behavior of such currents in tokamaks. A more serious issue for ITER is naturally arising disruptions, but details of only a few examples have been published. Two examples have been published for JET, but even these lack important details.
Figure 1 in the classic paper by Wesson et al \cite{Wesson:1990} on a carbon-wall JET disruption showed a drop in the central electron temperature from 1.6~keV to 0.5~keV starting approximately 3.5~ms before the current spike. This temperature drop was supposedly due to internal MHD activity breaking the magnetic surfaces and flattening the temperature in the inner half of the plasma. Resistive diffusion would require approximately 10~s, so a fast magnetic reconnection is required. As Wesson et al noted, intact outer magnetic surfaces would shield the outer world from a current spike, and even at 10~eV the breakup time for the outer magnetic surfaces would be of order 10~ms. The current spike occurs over 200~$\mu$s, and the electron temperature drops from 500~eV to an estimated 10~eV within 300~$\mu$s. This temperature drop was thought to be due to an impurity influx. As discussed below, heat flow along chaotic magnetic field lines could easily explain the drop in temperature from 1.6~keV to 0.5~keV, but heat flow along chaotic field lines becomes extremely slow at lower temperatures, and impurity radiation seems the only credible explanation for reaching 10~eV. The subsequent current quench has a characteristic time scale of 30~ms, which is extended by an approximate factor of two by a loop voltage that reaches 100~V, so the current quench is consistent with resistive dissipation. Figure 1 in a paper by de Vries et al \cite{de Vries:2016} is essentially a unique figure of a natural disruption in JET with an ITER-like wall. The results are qualitatively different from those of Wesson et al. The primary temperature collapse, from approximately 1~keV to 200~eV, and the current spike occur simultaneously, which means within the 1~ms time differences that can be distinguished on the published figure. The internal inductance, which is a direct measure of the width of the current profile, drops by a factor of two and remains low. The decay time for the current is approximately 20~ms over the next 10~ms, which is consistent with a temperature of 17~eV, not 200~eV. An obvious explanation would be that the current profile remains broad over that 10~ms due to magnetic field lines covering the plasma volume. The destruction of magnetic helicity and hence the plasma current is then determined more by the edge than the central plasma temperature \cite{Boozer:pivotal}. This evidence for the maintenance of chaotic magnetic field lines, rather than the fast re-formation of magnetic surfaces, is optimistic for the avoidance of runaways in the non-nuclear period of ITER operations, but the persistence of chaotic lines is unlikely to ensure the avoidance of runaway problems during nuclear operations on ITER \cite{Mitgation:Chalmers}. The chaotic magnetic field lines produced in a fast magnetic reconnection will cause a rapid drop in the electron temperature---at least until the mean-free-path of the heat-carrying electrons becomes short compared to the distance required for a chaotic field line to cross a large fraction of the plasma volume. This is consistent with the data on DIII-D thermal quenches in Figure 10 of Paz-Soldan et al \cite{Paz-Soldan:2020}, which shows thermal quench times $\sim50~\mu$s. Assuming a deuterium plasma and measuring electron density in $10^{20}$/m$^3$, the electron mean free path is $\lambda_e \approx 33 T^2/n$.
But, collisional heat transport along a magnetic field line is proportional to $T^{7/2}$, which implies the electrons that carry the heat have an energy of approximately $7T/2$. Their mean free path, $\lambda_e^h$, is approximately twelve times longer than that of thermal electrons, $\lambda_e^h \approx 400 T^2/n$. JET has a major radius of 3~m and a circumference of approximately 19~m, so the heat carrying electrons move through approximately $20 T^2/n$ toroidal transits between collisions. The speed of the heat carrying electrons along the magnetic field lines is much faster than the Alfv\'en speed; the ratio is $v_e^h/V_A\approx 20 \sqrt{nT}/B$. Electron cooling can also occur by radiation from impurities, and this is presumably required for a fast reduction of the electron temperature to values far below 1~keV. The shortness of the electron mean-free-path at low temperatures prevents a rapid heat flow along chaotic magnetic field lines. \subsection{Drive and damping of Alfv\'en waves \label{sec:Alfven} } As discussed in \cite{Boozer:acc}, a fast magnetic reconnection can be viewed as a quasi-ideal process, which conserves magnetic helicity and directly dissipates little energy. Energy transfer out of the magnetic field is given by $\vec{j}\cdot\vec{E}$. In a fast magnetic reconnection, the dominant part is given by the non-dissipative term, $\vec{u}\times\vec{B}$, in Ohm's law, $\vec{E}+\vec{u}\times\vec{B}=\vec{\mathcal{R}}$, namely $\vec{u}\cdot(\vec{j}\times\vec{B})$. The condition $\vec{\nabla}\cdot\vec{j}=0$ implies that \begin{eqnarray} \vec{B}\cdot\vec{\nabla}\left(\frac{j_{||}}{B}\right)&=&\vec{B}\cdot\vec{\nabla}\times\left(\frac{\vec{f}_L}{B^2}\right) \nonumber\\ &=&\frac{\vec{B}\cdot\vec{\nabla}\times \vec{f}_L}{B^2}- \vec{B}\cdot\left(\vec{f}_L\times\vec{\nabla}\frac{1}{B^2}\right), \hspace{0.2in} \label{j_|| Lorentz} \\ \mbox{where } && \vec{f}_L\equiv\vec{j}\times\vec{B}. \end{eqnarray} Any variation in $j_{||}/B$ along a magnetic field line implies a Lorentz force $\vec{f}_L$. The first term on the right-hand side of Equation (\ref{j_|| Lorentz}) gives the variation in what is known as the net $j_{||}/B$, which is zero along a magnetic field line in an equilibrium plasma, $\vec{f}_L=\vec{\nabla}p$, and the second term gives what is known as the Pfirsch-Schl\"uter variation in $j_{||}/B$. In a fast magnetic reconnection, two magnetic field lines with different magnitudes of $j_{||}/B$ can be quickly joined together, which makes $\vec{B}\cdot\vec{\nabla}(j_{||}/B)=B\partial(j_{||}/B)/\partial\ell$ large and spatially complicated even in regions in which $\vec{\nabla}B^2$ is zero, where the Pfirsch-Schl\"uter term vanishes. A curl of the Lorentz force is required. In a scalar-pressure model of the plasma, $\vec{f}_L=\rho \partial\vec{u}/\partial t +\vec{\nabla}p$. Taking the density $\rho$ to be a spatial constant and letting $\hat{b}\equiv\vec{B}/B$, one finds that $\hat{b}\cdot\vec{\nabla}\times\vec{f}_L = \rho \partial\Omega/\partial t$, where $\Omega\equiv\hat{b}\cdot\vec{\nabla}\times\vec{u}$, the parallel component of the vorticity of the plasma flow. As will be seen, this twisting motion drives a shear Alfv\'en wave. The propagation of Alfv\'en waves along chaotic field lines is thought to produce strong phase mixing and wave damping \cite{Heyvaerts-Priest:1983,Similon:1989}, which could heat the solar corona and slow the flattening of the $j_{||}/B$ profile.
But, the flattening of the $j_{||}/B$ profile appears to be approximately Alfv\'enic in tokamaks, and electron runaway provides a simpler explanation for corona formation, Appendix E of \cite{Boozer:acc}. On the sun, the footpoint motions of magnetic field lines naturally produce sufficiently large values of $j_{||}/B$ for runaway, Appendix B of \cite{Boozer:ideal-ev}, with the short correlation distances across the field that are needed to avoid kinking. The wave damping of \cite{Heyvaerts-Priest:1983,Similon:1989} is due to the exponentially increasing separation between neighboring chaotic lines. But, the characteristic distance for an e-folding is apparently of order a thousand kilometers along magnetic field lines in the corona \cite{Boozer:acc}. This is much longer than the height of the transition region above the photosphere, so exponentiation is unlikely to directly determine the height of the transition from the cold photospheric to the hot coronal plasma. \section{Alfv\'en waves that flatten $K$ \label{sec:flattening} } The standard assumptions of linearized reduced-MHD \cite{Kadomtsev,Strauss} will be made to derive the equations for the Alfv\'en waves that relax $\partial(j_{||}/B)/\partial\ell\rightarrow0$, where $\ell$ is the distance along a magnetic field line. The required equations are simple and are derived in \cite{Boozer:acc} and below for $K\equiv \mu_0j_{||}/B$ and for $\Omega\equiv \hat{b}\cdot\vec{\nabla}\times\vec{u}$, the vorticity of the magnetic field line velocity along the magnetic field: \begin{eqnarray} &&\frac{\partial \Omega}{\partial t} = V_A^2 \frac{\partial K}{\partial \ell} + \nu_v \nabla_\bot^2 \Omega, \label{dOmega/dt} \\ && \frac{\partial \Omega}{\partial\ell} = \frac{\partial K}{\partial t} -\frac{\eta}{\mu_0}\nabla^2_\bot K, \label{dK/dt} \end{eqnarray} where $V_A$ is the Alfv\'en speed. The field strength, the plasma density $\rho$, the resistivity $\eta$, and the viscosity $\nu_v$ are assumed to be slowly varying in space and time when compared to $K$ and $\Omega$. The variables are time, the differential distance along a magnetic field line, $d\ell=R_0d\varphi$ in a torus, and two coordinates across the field lines. The spatial scale of the solution across the magnetic field lines, $\ell_\bot$, will be seen to be short compared to that along the lines, $\ell_{||}$, so $\nabla^2\approx \nabla_\bot^2$. Although Equations (\ref{dOmega/dt}) and (\ref{dK/dt}) follow obviously from the linearized reduced-MHD equations, short derivations are sketched here for completeness. Equation (\ref{dOmega/dt}) follows from the curl of the linearized force-balance equation, $\rho\partial\vec{u}/\partial t =-\vec{\nabla}p +\vec{j}\times\vec{B}+\rho\nu_v\nabla^2\vec{u}$. The curl of the Lorentz force is $\vec{\nabla}\times(\vec{j}\times\vec{B})=\vec{B}\cdot\vec{\nabla}\vec{j} - \vec{j}\cdot\vec{\nabla}\vec{B}$. The component of $\vec{\nabla}\times(\vec{j}\times\vec{B})$ parallel to the magnetic field is approximated by $B \partial j_{||}/\partial \ell - j_{||} \partial B/\partial \ell - \vec{j}_\bot\cdot\vec{\nabla}B\approx B^2 \partial(j_{||}/B)/\partial\ell$. The current density $\vec{j}$ is divergence free, so $|j_{||}|/|\vec{j}_\bot|\sim \ell_{||}/\ell_\bot >>1$. The gradients of the field strength across and along the magnetic field lines have more comparable scales. The component of the curl of $\partial \vec{u}/\partial t$ that is parallel to the magnetic field gives Equation (\ref{dOmega/dt}).
Equation (\ref{dK/dt}), for the current evolution, follows from Ampere's law, Faraday's law, and Ohm's law, $\vec{E}+\vec{u}\times\vec{B}=\eta\vec{j}$. The implication is $\mu_0\partial\vec{j}/\partial t = \vec{\nabla}\times(\vec{\nabla}\times(\vec{u}\times\vec{B}-\eta\vec{j}))$. A vector identity implies $\vec{\nabla}\times(\vec{u}\times\vec{B})=\vec{B}\cdot\vec{\nabla}\vec{u}-\vec{u}\cdot\vec{\nabla}\vec{B}$. The parallel component of the $\partial\vec{j}/\partial t$ equation gives Equation (\ref{dK/dt}). The evolution equation for $K$ is obtained using the mixed-partials theorem applied to $\Omega$: \begin{eqnarray} \frac{\partial^2K}{\partial t^2}- \frac{\partial }{\partial\ell}\left(V_A^2 \frac{\partial K}{\partial\ell}\right) = \left(\nu_v+\frac{\eta}{\mu_0}\right)\nabla_\bot^2 \frac{\partial K}{\partial t}. \ \end{eqnarray} Neglecting the slow time dependence of the coefficients of the differential equation, \begin{eqnarray} \omega^2 K + \frac{\partial }{\partial\ell}\left(V_A^2 \frac{\partial K}{\partial\ell}\right) = i\omega \left(\nu_v+\frac{\eta}{\mu_0}\right)\nabla_\bot^2K, \label{full wave eq} \end{eqnarray} where $\omega$ is a frequency. The viscosity and resistivity are assumed to be small, so a term proportional to $\nu_v\eta$ has been ignored. Equation (\ref{full wave eq}) can be solved using the WKB method. In this method, $K=K_s(\vec{x}_\bot) e^{iS}$, where the eikonal $S = S_0+S_1$ with $(\partial S/\partial t)_\ell = -\omega$, so \begin{eqnarray} &&\omega^2 K - V_A^2 \left(\frac{\partial S}{\partial\ell}\right)^2 K + K \frac{\partial}{\partial \ell}\left(iV_A^2 \frac{\partial S}{\partial\ell} \right) \nonumber\\ && \hspace{1.0in} = i\omega \left(\nu_v+\frac{\eta}{\mu_0}\right)\nabla_\bot^2K. \end{eqnarray} Choose $S_0$ so $(\partial S_0/\partial\ell)^2=\omega^2/V_A^2$. The assumption is that the parallel wavenumber, $k_{||}\equiv\partial S_0/\partial\ell$, varies slowly as a function of $\ell$, which would be exactly true if the coefficients in Equation (\ref{full wave eq}) had no $\ell$ dependence. There are two solutions: forward shear Alfv\'en waves moving in the direction of the field and backward waves moving in the opposite direction: \begin{eqnarray} S_{0f}&=& - \omega T_f \hspace{0.1in}\mbox{with} \hspace{0.1in}\\ T_f &\equiv& t -\int\frac{d\ell}{V_A} \hspace{0.1in}\mbox{and} \hspace{0.1in}\\ S_{0b}&=& - \omega T_b \hspace{0.1in}\mbox{with} \hspace{0.1in}\\ T_b &\equiv& t +\int\frac{d\ell}{V_A}. \end{eqnarray} The solution for the forward wave can be approximated by $K=K_s e^{-i\omega T_f} e^{iS_1}$, where $S_1$ is slowly varying, and \begin{eqnarray} && V_A^2 \left(2\frac{\partial S_{0f}}{\partial\ell}\frac{\partial S_{1f}}{\partial\ell}\right) K - K \frac{\partial}{\partial \ell}\left(iV_A^2 \frac{\partial S_{0f}}{\partial\ell} \right) \nonumber\\ && \hspace{0.66in} = \left(\nu_v+\frac{\eta}{\mu_0}\right)\nabla_\bot^2\frac{\partial K}{\partial T_f}, \hspace{0.1in}\mbox{where} \hspace{0.2in} \\ && \frac{\partial S_{0f}}{\partial\ell} =\frac{\omega}{V_A}. \hspace{0.2in}\mbox{Consequently,} \\ && 2 V_A \frac{\partial iS_{1f}}{\partial\ell} K' + \frac{\partial V_A}{\partial \ell}K' \nonumber\\ && \hspace{1.0in} = \left(\nu_v+\frac{\eta}{\mu_0}\right)\nabla_\bot^2K', \mbox{where} \hspace{0.3in}\\ &&K' \equiv \frac{\partial K}{\partial T_f} = - i\omega K = \left(\frac{\partial K}{\partial t}\right)_\ell.
\end{eqnarray} The resulting equation for the evolution of $K'$ is \begin{eqnarray} &&\frac{1}{\sqrt{V_A}} \left(\frac{\partial \sqrt{V_A}K'}{\partial\ell} \right)_{T_f}= \frac{\Delta_d}{2}\nabla_\bot^2K'; \label{slow-ev}\\ &&\Delta_d \equiv \frac{\nu_v+\frac{\eta}{\mu_0}}{V_A} \label{Delta-d}\\ && \hspace{0.3in} \approx (1+P_{rm}) \frac{1.4\times10^{-8} n}{T^{3/2} B}, \end{eqnarray} where $P_{rm}\equiv\mu_0\nu_v/\eta$ is the magnetic Prandtl number. $\Delta_d$ has units of length (meters), the number density has units of $10^{20}/$m$^3$, the temperature has units of keV, and the magnetic field has units of Tesla. The solution for the backwards wave, which propagates in the negative $\ell$ direction, is identical except for the sign of the right-hand side. The cross-field ion viscosity $\nu_v$ is difficult to estimate, but the physics of the viscosity is closely related to that of the ion thermal transport. If one assumes ion transport is gyro-Bohm-like, then $\nu_v=(r_i/R_0) T/eB$ with $r_i$ the ion gyroradius and $R_0$ a typical spatial scale, such as the major radius of a tokamak. Then, the magnetic Prandtl number is $P_{rm}\approx200 T^3/R_0B^2$. The definition of $K'$ for a forward-going wave can be understood as follows. Over distances $\ell$ sufficiently short that $\sqrt{\Delta_d\ell}<<V_A/|\vec{\nabla}_\bot V_A|$, the functional form of $K$ is $K(t-\int\frac{d\ell}{V_A})$. Letting a prime denote the derivative of $K(t-\int\frac{d\ell}{V_A})$ relative to its argument, $(\partial K/\partial t)_\ell= K'$ and $(\partial K/\partial \ell)_t = - K'/V_A$. The interpretation of Equations (\ref{full wave eq}) and (\ref{slow-ev}) is that shear Alfv\'en waves, which propagate along a magnetic field line with $d\ell/dt=\pm V_A$, serve as the basic characteristics for defining the solutions to Equation (\ref{full wave eq}) for $K$. The part of $K$ that is not constant along the magnetic field line diffuses off the characteristics at the rate given by Equation (\ref{slow-ev}). When both $\Delta_d$ and $V_A$ are constant, the $K'$ in a magnetic flux tube obeys a conservation law---any change along the tube is due to diffusion through the sides. \section{Monte-Carlo solution of Equation (\ref{slow-ev}) \label{sec:Monte-Carlo} } \subsection{Initial $K'$} Equation (\ref{slow-ev}) can be used to study the relaxation of $K'$ from an initial distribution $K'_0$. The distribution of the parallel current, or more precisely the distribution of $K'$, along a magnetic field line immediately after magnetic surfaces have broken can be calculated using the dominance of the dependence of $K_0$ on $T_f$. Since $\vec{B}\cdot\vec{\nabla}K_0= K'_0 \vec{B}\cdot\vec{\nabla}T_f=-(B/V_A)K'_0$, \begin{equation} K'_0=-V_A\frac{\vec{B}\cdot\vec{\nabla}K_0}{B} \label{K'_0} \end{equation} for the forward wave. The sign of the right-hand side is opposite for the backwards wave. For the forward wave, $K'$ propagates along the magnetic field lines at the Alfv\'en speed, $d\ell/dt = V_A$, and diffuses off the lines at the slow rate given by Equation (\ref{slow-ev}). \subsection{Monte Carlo operator} Equation (\ref{slow-ev}) can be solved using a Monte Carlo approach that is derived in Section IV of \cite{Boozer:Monte Carlo}. The term $\nabla^2_\bot K'$ can be calculated using ordinary $R,Z$ cylindrical coordinates for a tokamak since the toroidal magnetic field is assumed far stronger than the poloidal.
In the large aspect ratio limit \begin{equation} \nabla_\bot^2 K'=\frac{\partial^2K'}{\partial R^2} + \frac{\partial^2K'}{\partial Z^2}, \end{equation} where $R$ and $Z$ give the position of a particular magnetic field line as it is followed using the distance along the line, $\ell=R_0\varphi$. Equation (\ref{slow-ev}) implies that when $K'$ is non-zero only within a small range of $R$ and $Z$ then at a constant $T_f$ the function $K'(\ell,R,Z,T_f)$ obeys \begin{eqnarray} &&\frac{\partial \int R K'dRdZ}{\partial \ell} = \nonumber\\ && \hspace{0.2in}\frac{\Delta_d}{2} \int R \left(\frac{\partial^2K'}{\partial R^2} + \frac{\partial^2K'}{\partial Z^2}\right)dRdZ= \nonumber\\ && \hspace{0.2in}\frac{\Delta_d}{2} \int \frac{\partial}{\partial R} \left(R\frac{\partial K'}{\partial R} - K' \right)dRdZ =0. \hspace{0.3in} \end{eqnarray} This equation and the similar equation for $\int ZK'dRdZ$ imply there is no systematic drift of $K'$ off the field line. But, $K'$ does diffuse off the field line, for \begin{eqnarray} &&\frac{\partial \int R^2K'dRdZ}{\partial \ell} = \nonumber\\ && \hspace{0.2in} \frac{\Delta_d}{2} \int R^2\left(\frac{\partial^2K'}{\partial R^2} + \frac{\partial^2K'}{\partial Z^2}\right)dRdZ =\nonumber \\ &&\hspace{1.0in} \Delta_d \int K'dRdZ \end{eqnarray} with a similar equation for $\int Z^2K'dRdZ$. Following the Monte-Carlo derivation in Section IV of \cite{Boozer:Monte Carlo}, the interpretation is that when $K'$ is a delta function about $R_s,Z_s$ before the application of Equation (\ref{slow-ev}), then after the application, $K'$ will have a Gaussian distribution about the point $R_s,Z_s$ with a variance $\sigma^2$ that grows as $\partial\sigma^2/\partial\ell=\Delta_d$. Each small step $\delta\ell$ along a magnetic field line consists of two operations: (1) The $R$ and $Z$ are changed to track a particular line. (2) Steps $\delta R=\pm\sqrt{ \Delta_d\delta\ell}$ and $\delta Z=\pm \sqrt{ \Delta_d\delta\ell}$ are taken to a new field line. The integration can then be followed for another $\delta\ell$ step. The symbol $\pm$ implies the sign is chosen with equal probability of being plus or minus. The advance in time during a step is $\delta t= \delta\ell/V_A$ for the forward-moving and $\delta t= -\delta\ell/V_A$ for the backward-moving wave. \subsection{A study of the flattening} The chaotic magnetic field that arises in a disruption simulation can be used to study the flattening of the current profile. To do this, the plasma volume can be separated into cells, each with the same volume. The initial $K'_0$ can be obtained by superimposing the parallel current distribution in the pre-disruption plasma on the chaotic magnetic field and using Equation (\ref{K'_0}) to find a value for $K'_0$ in each cell. Start $N_0$ trajectories in each cell with half propagating forward and half propagating backward along the field lines. The value of $K'_j(t)$ in cell $j$ at time $t$ is the sum, over the trajectories that are in cell $j$ at time $t$, of the values $K'_i(0)$ from their starting cells $i$, divided by $N_0$. The statistical error scales as $1/\sqrt{N_0}$. The magnetic field lines and the volume in which they are chaotic change over the time scale of the current flattening. This can be studied by updating the field line trajectories as the current profile flattens. Before each step, $\delta t=\pm\delta\ell/V_A$, the magnetic field line trajectories should be updated, and $K'_0$ in each cell at the beginning of the new step is given by Equation (\ref{K'_0}). A minimal numerical sketch of the stepping scheme is given below.
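The following Python sketch illustrates the two-operation step described above for a single trajectory. The field-line tracer \texttt{trace\_field\_line} is a hypothetical placeholder for a tracer supplied by a disruption simulation, and all parameter values are illustrative assumptions:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def trace_field_line(R, Z, dl):
    """Hypothetical placeholder: advance (R, Z) a distance dl along
    a field line of the simulated chaotic magnetic field."""
    return R, Z   # a real tracer would integrate dR/dl and dZ/dl

def step(R, Z, t, dl, Delta_d, V_A, forward=True):
    # (1) track the field line for a distance dl
    R, Z = trace_field_line(R, Z, dl)
    # (2) diffuse off the line: +/- kicks with equal probability
    R += rng.choice((-1.0, 1.0)) * np.sqrt(Delta_d * dl)
    Z += rng.choice((-1.0, 1.0)) * np.sqrt(Delta_d * dl)
    # time advances by +dl/V_A (forward) or -dl/V_A (backward wave)
    t += dl / V_A if forward else -dl / V_A
    return R, Z, t

# illustrative values: Delta_d ~ 4e-8 m, V_A ~ 1e7 m/s, dl = 1 m
R, Z, t = 4.0, 0.0, 0.0
for _ in range(1000):
    R, Z, t = step(R, Z, t, dl=1.0, Delta_d=4e-8, V_A=1e7)
\end{verbatim}
Binning such trajectories into the equal-volume cells and averaging their initial values $K'_i(0)$ then gives the cell estimate $K'_j(t)$ described above.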
The initial $K'_0$ should be calculated using the part of the parallel current that is independent of non-inertial forces; the part of the parallel current driven by non-inertial forces, such as the pressure gradient, is the Pfirsch-Schl\"uter current. \subsection{Alfv\'en wave reflection} In a tokamak, the wall is not normally a magnetic surface; it is penetrated by what is known as the vertical magnetic field. An implication is that a region of chaotic magnetic field lines can extend all the way to the walls. The Alfv\'en waves that give the relaxation of $K'$ are naturally reflected by the walls---either by perfectly insulating or by perfectly conducting walls---but the sign of the reflected wave is opposite in the two cases. Wave reflection switches the characteristic that the wave is following. When the WKB approximation is valid, which requires that $k_{||}$ change slowly with respect to $\ell$, a switch in the wave from following one characteristic to the other is not possible. \subsubsection{Reflection from an insulating wall} When the wall is a perfect insulator, $K=0$ on the wall. A steady state current cannot flow along a chaotic field line that strikes an insulating wall, and the reflected Alfv\'en waves serve to cancel $K'$. The net parallel current drops to zero in an outer region of chaotic field lines on the time scale for a shear Alfv\'en wave to traverse the region by propagating along the chaotic field lines. The decay of the current after the current spike appears to be far slower than the time it takes an Alfv\'en wave propagating along chaotic field lines to reach the walls, which implies the insulating-wall boundary condition $K=0$ is not realistic. The flux of magnetic helicity along the magnetic field lines, which is denoted by $2\mathcal{F}_{||}$ in \cite{Boozer:runaway-ITER}, is not blocked by a wall that is a perfect insulator but is blocked when the wall is a perfect conductor. \subsubsection{Reflection by drag} Even a perfectly conducting medium can exert a drag force on the motion of the magnetic field lines, which is balanced by the Lorentz force that causes a change in $K=\mu_0 j_{||}/B$, Equation (\ref{j_|| Lorentz}). The drag force can be quantified by a drag time $\tau_d$. In one dimension plus time, the equations are \begin{eqnarray} \frac{\partial\Omega}{\partial t} = V_A^2\frac{\partial K}{\partial \ell} -\frac{\Omega}{\tau_d(\ell)} \hspace{0.2in}\mbox{and}\hspace{0.2in} \frac{\partial\Omega}{\partial\ell} = \frac{\partial K}{\partial t}. \label{d Omega / d ell} \end{eqnarray} The mixed-partials theorem applied to $K$ implies \begin{equation} V_A^2 \frac{\partial^2\Omega}{\partial\ell^2}= \frac{\partial^2\Omega}{\partial t^2} +\frac{1}{\tau_d} \frac{\partial\Omega}{\partial t}. \label{drag eq} \end{equation} The drag, which is proportional to $1/\tau_d$, will be assumed to be zero for $\ell<\ell_0$ but a non-zero constant for $\ell>\ell_0$. The wave equation for $\Omega$ is simpler than the equation for $K$ since that equation includes a term proportional to $d(1/\tau_d)/d\ell$. In the two regions in which $\tau_d$ is constant, Equation (\ref{drag eq}) can be solved by $\Omega \propto \exp\big(i(k\ell -\omega t)\big)$.
Let \begin{eqnarray} k_A &\equiv& \frac{\omega}{V_A} \hspace{0.2in}\mbox{and}\hspace{0.2in} \ell_d \equiv V_A\tau_d, \hspace{0.2in}\mbox{then}\hspace{0.2in} \\ k_\pm&=&\pm k_A \sqrt{1+\frac{ i}{\Lambda_d}}, \hspace{0.2in}\mbox{where}\hspace{0.2in} \Lambda_d\equiv k_A\ell_d.\\ \Omega&=&\mathcal{R}_\Omega e^{i(k_+\ell-\omega t)} \hspace{0.2in}\mbox{for}\hspace{0.1in} \ell>\ell_0 \\ &=&\left(R_\Omega e^{ik_A\ell} + L_\Omega e^{-ik_A\ell}\right) e ^{-i\omega t} \hspace{0.1in}\mbox{for}\hspace{0.1in} \ell<\ell_0 \hspace{0.2in}. \end{eqnarray} Neither $\Omega$ nor $\partial\Omega/\partial\ell$ is discontinuous at $\ell_0$, so $\mathcal{R}_\Omega= R_\Omega+ L_\Omega$ and $k_+\mathcal{R}_\Omega=k_A(R_\Omega-L_\Omega)$, which imply \begin{eqnarray} L_\Omega&=& -\frac{\sqrt{1+\frac{i}{\Lambda_d}}-1}{\sqrt{1+\frac{i}{\Lambda_d}}+1}R_\Omega; \\ \mathcal{R}_\Omega &=& \frac{2}{\sqrt{1+\frac{i}{\Lambda_d}}+1}R_\Omega. \end{eqnarray} Equation (\ref{d Omega / d ell}) implies $K=(i/\omega)\partial\Omega/\partial\ell$ has the same form as $\Omega$ but with coefficients $\mathcal{R}_K$, $R_K$, and $L_K$: \begin{eqnarray} \mathcal{R}_K &=& -\frac{2\sqrt{1+\frac{i}{\Lambda_d} } }{\sqrt{1+\frac{i}{\Lambda_d}}+1}\frac{R_\Omega}{V_A};\\ \nonumber\\ R_K &=&-\frac{R_\Omega}{V_A};\\ L_K &=& - \frac{\sqrt{1+\frac{i}{\Lambda_d}}-1}{\sqrt{1+\frac{i}{\Lambda_d}}+1}\frac{R_\Omega}{V_A};\\ R_K+L_K&=&-\frac{2\sqrt{1+\frac{i}{\Lambda_d}} }{\sqrt{1+\frac{i}{\Lambda_d}}+1}\frac{R_\Omega}{V_A}= \mathcal{R}_K. \end{eqnarray} Both the vorticity $\Omega$ and the parallel current, or $K$, are continuous at $\ell_0$, the location at which the drag jumps from zero to a finite value. A strong drag, $\Lambda_d\rightarrow0$, implies the wave is stopped in a far shorter distance than a wavelength, which reflects the wave perfectly. When $R_K$ is the amplitude of the parallel current function propagating towards the region of strong damping, $L_K=R_K$ is the amplitude of the reflected wave propagating away. When small but non-zero $\Lambda_d$ effects are retained, $L_K/R_K = 1+(i-1)\sqrt{2\Lambda_d}$. The imaginary term is equivalent to a time delay. \subsubsection{Reflection by a jump in Alfv\'en speed} A sudden change in the Alfv\'en speed at $\ell=\ell_0$ will also violate the WKB approximation. Assume the Alfv\'en speed jumps from $V_n$ to $V_p$ as $z\equiv\ell-\ell_0$ goes from negative to positive. This boundary condition is probably not applicable to a tokamak with chaotic field lines at its edge but is of interest for solar problems. The Alfv\'en equation for the parallel current $K\equiv \mu_0j_{||}/B$ is \begin{equation} \frac{\partial^2K}{\partial t^2} = \frac{\partial}{\partial z}\left(V_A^2(z) \frac{\partial K}{\partial z}\right). \end{equation} The solution for a wave launched so it is going to the right from the negative $z$ side is \begin{eqnarray} K &= R_n e^{i(k_nz-\omega t)} + L_n e^{-i(k_nz+\omega t)} \hspace{0.1in} &z<0;\hspace{0.2in}\\ &= R_p e^{i(k_pz-\omega t)} &z>0, \end{eqnarray} where $k_n = \omega/V_n$ and $k_p=\omega/V_p$. Two conditions must be satisfied at $z=0$, the continuity of $K$ and the continuity of $V_A^2 \partial K/\partial z$. These two conditions imply \begin{eqnarray} R_n + L_n &=& R_p; \\ V_n^2 k_n(R_n-L_n) &=& V_p^2 k_pR_p, \hspace{0.1in}\mbox{or} \hspace{0.2in}\\ V_n(R_n-L_n) & =& V_p R_p. \end{eqnarray} The solution is \begin{eqnarray} L_n &=& \frac{V_n-V_p}{V_p+V_n}R_n;\\ R_p &=& \frac{2V_n}{V_p+V_n}R_n.
\end{eqnarray} When an Alfv\'en wave carrying $K$ propagates from the solar photosphere towards the corona, the Alfv\'en speed undergoes a large increase, which implies $K$ is reduced in amplitude by a factor $V_n/V_p$ on the corona side from the incoming $K$ on the photosphere side. In the limit as $V_n/V_p\rightarrow0$, the boundary acts as an insulator when viewed from the photosphere. An Alfv\'en wave propagating from the corona towards the photosphere undergoes a large reduction in the Alfv\'en speed, which, when it occurs over a sufficiently short spatial scale, causes a reflection of the wave back into the corona, with the amplitude of $K$ in the photosphere twice that of the incoming $K$ in the corona. \section{Discussion \label{sec:discussion} } Understanding the physical states through which ITER may evolve during disruptions is essential for an assessment of how the issue of runaway electrons can be managed to minimize the impact on the ITER mission. Much of the needed information is encoded in the flattening of $K\equiv\mu_0j_{||}/B$, and this defines the importance of the derivations given in this paper. As has been known for almost thirty years \cite{Jardin:1991}, the rapid breaking of magnetic surfaces and helicity conservation are fundamental to the physics of current spikes. For a current spike to be observed, the time scale for the flattening of the parallel current density $j_{||}$ must be short in comparison to the resistive dissipation of the current; the reconnection must be fast. Current spikes and magnetic reconnections were seen in three-dimensional NIMROD simulations in 2010 by Izzo and Parks \cite{Izzo:2010}. Eric Nardon and collaborators have made related calculations with the JOREK code \cite{Nardon:JOREK2017}. Three-dimensional simulations of large tokamaks, but especially ITER, are computationally demanding, so only a few cases can be studied, and even these contain simplifying assumptions. Their reliability and utility depend on understanding the physical and mathematical reasons for the results. From the mathematics of fast magnetic reconnection, one expects flux tubes in annular regions of intact magnetic surfaces to show exponentially large distortions in the cross-sectional shape as that annular region evolves toward a state in which fast magnetic reconnection occurs. Reconnection occurs when resistive diffusion across the thinnest part of a flux tube can compete with the evolution time scale. Unfortunately, no one has documented this effect in tokamak disruption simulations, but Daughton et al \cite{Daughton:2014} studied the relation between reconnection regions and large exponential separations of neighboring magnetic field lines and found a close connection. There are two parts to the rapid flattening of the current: (1) a fast magnetic reconnection of the surfaces, which conserves magnetic helicity \cite{Boozer:ideal-ev,Boozer:surf-loss,Boozer:acc,Boozer:pivotal} and (2) a flattening of the parallel current along the newly chaotic magnetic field lines by Alfv\'en waves \cite{Boozer:acc}. Alfv\'en waves propagating along chaotic field lines can be heavily damped \cite{Heyvaerts-Priest:1983,Similon:1989}, which could in principle extend the time required for the flattening sufficiently to eliminate current spikes.
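For orientation, the size of this damping effect can be checked numerically from Equation (\ref{Delta-d}); a minimal Python sketch with assumed JET-like parameter values (illustrative, not measured):
\begin{verbatim}
import numpy as np

# Illustrative inputs: n in 1e20/m^3, T in keV, B in T, R0 in m.
n, T, B, R0 = 1.0, 1.0, 3.0, 3.0

P_rm = 200.0 * T**3 / (R0 * B**2)        # gyro-Bohm Prandtl estimate
Delta_d = (1.0 + P_rm) * 1.4e-8 * n / (T**1.5 * B)  # Eq. (Delta-d), m

ell_p = 100 * 2 * np.pi * R0             # ~100 toroidal transits, m
print(f"P_rm = {P_rm:.1f}")
print(f"Delta_d = {Delta_d:.2e} m")
print(f"spreading sqrt(Delta_d*ell_p) = {np.sqrt(Delta_d*ell_p):.3f} m")
\end{verbatim}
With these numbers the cross-field spreading is below a centimeter, anticipating the centimeter-scale estimate quoted in the summary below.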
This paper found that the viscosity $\nu_v$ and the resistivity $\eta$ diffusively spread shear Alfv\'en waves across the field lines by a distance $\approx\sqrt{\Delta_d\ell_p}$, where the distance $\Delta_d$ is given in Equation (\ref{Delta-d}) and $\ell_p\approx 100\times 2\pi R_0$ is the distance Alfv\'en waves must propagate along the field lines to flatten the current. Using the estimates for $\Delta_d$ and $\ell_p$ given in the paper, the distance $\sqrt{\Delta_d\ell_p}$ appears to be of order centimeters, which seems unlikely to significantly slow the flattening. The Monte Carlo methods developed in the paper, together with numerical models of the chaotic magnetic fields of a tokamak disruption, could be used to determine how large $\Delta_d$ would have to be to significantly slow the flattening of the current. \vspace{0.2in} \section*{Acknowledgements} This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences under Award Numbers DE-FG02-03ER54696, DE-SC0018424, and DE-SC0019479. \section*{Data availability statement} Data sharing is not applicable to this article as no new data were created or analyzed in this study.
\section{Introduction} The Born-Oppenheimer (BO) problem \cite{bo} is concerned with the analysis of Schr\"odinger-type operators where the small electron to nucleus mass ratio plays the role of the semiclassical parameter \cite{aventini-seiler,berry,lh,mead,mead2,mead_truhlar,jackiw,wilczek}. The theory identifies distinct energy scales: the electronic scale, which, in atomic units, is of order one, and the scale of nuclear vibrations, which is of order $(1/M)^{1/2}$ in these units. $M$ is the nucleus to electron mass ratio. The identification of the electrons as the fast degrees of freedom is central to the theory. The clean splitting between fast and slow degrees of freedom fails near an eigenvalue crossing of the electronic Hamiltonian, where there is strong mixing between electronic and vibrational modes. This lies at the boundary of the conventional BO theory. Since the coupling between different electronic energy surfaces becomes infinite near a crossing, the nuclear wave function does not reduce to a solution of a scalar (second order) Schr\"odinger equation. We describe the (double surface) nuclear wave function near an isotropic conical crossing, for energies close to the energy of the crossing. The strong mixing of the electronic and nuclear degrees of freedom near crossing leads to an anomalous Zeeman effect. To describe the anomaly, recall that the Zeeman splitting in molecules is reduced compared to the Zeeman splitting of atoms. It is convenient to parameterize the reduction by a parameter $\gamma$ so that the Zeeman splitting is of the form (and order) $ M^{-\gamma} B$ with $B$ the external magnetic field. The low-lying vibrational levels have a large reduction, $\gamma=1$. This is what one expects from nuclei whose magnetic moments are smaller by a factor $M$ than the Bohr magneton. Levels {\em near the crossing energy} can have a small reduction, expressed by the fact that $\gamma<1$. For the isotropic situation we calculate $\gamma=1/6$, so that the Zeeman shift is anomalously large: larger by a factor of about 2000 than the normal Zeeman splitting of molecular levels. More precisely, we find that the Zeeman splitting near an isotropic crossing is \begin{equation}\label{main2} \Delta E(m)\approx \frac 1 {M^{1/6}}\frac{g(m)}{T_e}\,B. \end{equation} The sign $\approx$ means equality in the limit $M \to \infty$. $B$ is proportional to the magnetic field, with a coefficient dependent only on the electronic wave functions at the crossing. $m$, a half odd integer, is the azimuthal quantum number, and $T_e$ is an electronic time scale; see Eq.~(\ref{period}) below. $g(m)$ is a universal dimensionless factor which is determined by the nuclear wave function near the crossing, see Eq.~(\ref{gyro}) below. As we shall see, $g(m)=-g(-m)$, and numerical estimates of Eq.~(\ref{gyro}) give \begin{equation}g\left(\frac{1}{2}\right)= 0.961,\,\, g\left(\frac 3 2 \right)=0.543,\,\,g\left(\frac 5 2 \right)=0.396.\end{equation} $g(m)$ is a molecular analog of the Land\'e g factor in atoms: while the Land\'e g factor describes the Zeeman shift due to the mixing of spin and orbital degrees of freedom, $g(m)$ does so for the nuclear and electronic ones. One can formulate the BO problem in the following way \cite{mead}: \begin{equation}\label{Hbo} H_{bo}= -\frac 1 {M}\Delta_x +H_e(x), \end{equation} where $H_e(x)$, the electronic Hamiltonian, depends parametrically on the nuclear coordinates $x$. When time reversal is not broken, $H_e(x)$ is a real symmetric matrix \cite{spin}.
The Wigner-von Neumann crossing rule \cite{wvn} says that $H_e(x)$ generically has a crossing point for two modes of vibrations, $x\in{\rm I\kern-.2em R}^2$. Here we shall consider the simple scenario where $H_e(x)$ is a $2\times 2$ matrix and $x\in{\rm I\kern-.2em R}^2$. This means that we shall treat only the restriction of the electronic Hamiltonian to the two-dimensional subspace spanned by the two degenerate eigenstates at the crossing point. We shall assume that $H_e(x)$ has a single crossing point at $x=0$, and set the crossing energy at $0$. We further assume that the crossing is conic, that $H_e(x)$ is isotropic about the origin, and that the $x$ dependence of $H_e(x)$ is smooth near the origin. This is a common model \cite{mead_truhlar}. We first recall why the standard BO theory fails near a crossing. When $H_e(x)$ is symmetric it can be diagonalized by an {\em orthogonal} transformation $R(x)$. In the $2\times 2$ case, and when $H_e(x)$ is non-degenerate, $R(x)$ is uniquely determined, up to an overall sign, by requiring $\det R(x)=1$. Hence, away from crossing, $H_{bo}$ is unitarily equivalent to \begin{equation}\label{sHbo} -\frac 1 M\Big(\nabla_x+iA(x)\Big)^2 +\pmatrix{E_1(x)&0\cr 0&E_2(x)}, \end{equation} where $E_{1,2}(x)$ are the two eigenvalues of $H_e(x)$. The vector potential $A(x)=-iR^T(x) \nabla_x R(x)$ is purely off-diagonal. For a linear crossing, $R(x)\to - R(x)$ as $x$ encircles the origin \cite{lh}. This forces a $1/|x|$ singularity of the vector potential for small $x$. Far from the origin, to leading order in $1/M$, the two components of the wave function decouple and are given by \begin{eqnarray}\label{bo} \Psi_{bo,j}(x)\approx\psi_{cl,j}(x) R(x)\pmatrix{\delta_{j,1}\cr \delta_{j,2}}, \quad j=1,2, \end{eqnarray} where $\psi_{cl,j}$ is a semiclassical solution of the Schr\"odinger equation with potential $E_j(x)$ in the cut plane with antiperiodic boundary conditions \cite{lh,mead_truhlar}. This decoupling holds provided $|x|>>M^{-1/3}$ \cite{prep}. We shall henceforth denote $\epsilon=M^{-1/3}$, and refer to the region $|x|>>\epsilon$ as ``far from the crossing''. In contrast, the divergence of the off-diagonal part of $A$ near the crossing prevents such a decoupling near the crossing. Our aim is to analyze the BO theory, to leading order in $\epsilon$, near the crossing of $H_e$. The reason one can still hope to say something useful near the crossing is that the asymptotic form of $H_e(x)$ near the crossing, i.e., for $|x|<<1$, is universal \cite{mead2}: \begin{eqnarray}\label{can2} H_e(x)= x_1 \sigma _1 + x_2 \sigma_3+O(x^2), \end{eqnarray} where $\sigma$ are the Pauli matrices. We shall refer to $|x|<<1$ as ``close to the crossing''. Notice that our notions of close and far from the crossing have a nonempty intersection. This enables us to match the solution close to the crossing with the standard, decoupled, BO solution, Eq.~(\ref{bo}). {\bf The zero energy wave function close to the crossing.} We study the wave functions near the crossing for energies close to the crossing energy. Everything to be said from now on is true in the limit $M \to \infty$, to leading order in negative powers of $M$. We shall assume that zero is an eigenvalue of (\ref{Hbo}), since there is always an eigenvalue that is close enough to zero \cite{prep}. It turns out to be convenient first to unitarily transform (\ref{can2}) with $e^{-i\frac \pi 4 \sigma_1}$. This will replace $\sigma_3$ in (\ref{can2}) by $\sigma_2$.
In this representation, close to the crossing, the zero energy wave function approximately satisfies the differential equation \begin{equation}\label{scaled2} \left\{ -\nabla_\xi^2 + \xi_1 \sigma _1 + \xi_2 \sigma_2\right\}\Psi(\xi_1,\xi_2) = 0, \end{equation} where $\Psi$ stands for a two-component column vector and $\xi_i=M^{1/3}\,x_i$ is a scaled variable. The operator $J_3=L_3+\frac{1}{2}\sigma_3=-i \xi_1\partial_{\xi_2}+i \xi_2\partial_{\xi_1}+\frac{1}{2}\sigma_3$ commutes with the operator on the l.h.s.~of (\ref{scaled2}). It does not have the meaning of total angular momentum, since the Pauli matrices do not represent spin. We thus consider solutions of the differential equation (\ref{scaled2}) which are eigenfunctions of $J_3$ with an eigenvalue $m$, namely: \begin{eqnarray}\label{radial} \Psi(\xi;m)=e^{im\theta}\, \pmatrix {\varphi_1(\rho;m)e^{-i\theta/2} \cr \varphi_2(\rho;m)e^{i\theta/2}}, \end{eqnarray} where $\rho, \theta$ are the polar coordinates associated with $\xi$. $m$ must be a {\it half odd integer} for the wave function to be single valued. Separating variables, the radial equation obtained from (\ref{scaled2}), in the $m$-th sector, takes the form: \begin{eqnarray}\label{scaled} \left\{ -\frac{d^2}{d\rho^2}-\frac{1}{\rho}\frac{d}{d\rho}+ \frac{m^2+\frac 1 4}{\rho^2} +\pmatrix{ -\frac{m}{\rho^2} & \rho \cr \rho & \frac{m}{\rho^2}}\right\}\pmatrix{\varphi_1 \cr \varphi_2}=0. \end{eqnarray} Of the four linearly independent solutions of Eq.~(\ref{scaled}), only one linear combination, singled out by the boundary conditions, is fit to represent a wave function. We denote it by ${\cal F}_c(\rho;m)$ and its components by $\varphi_{1 c}(\rho;m)$ and $\varphi_{2 c}(\rho;m)$. ${\cal F}_c(\rho;m)$ is to a crossing point what the Airy function is to a classical turning point \cite{landau,powell}: It interpolates between the near region, where the wave function is intrinsically a two-component spinor, and the far region, where the wave function is highly oscillatory and given by Eq.~(\ref{bo}). ${\cal F}_c(\rho;m)$ has a closed expression in terms of the generalized hypergeometric functions of the kind $_0F_3$. It has the asymptotic (for $\rho>>1$) form \begin{equation}\label{cos} {\cal F}_c(\rho;m) \approx \frac{1}{\rho^{3/4}}\,\cos \left(\tiny{\frac{2}{3}}\rho^{3/2}-\pi\left(\frac m 3 + \frac 1 4\right)\right)\pmatrix{1\cr -1}. \end{equation} Solving (\ref{scaled}) asymptotically near the origin gives \cite{boyce} \begin{equation}\label{origin} \pmatrix{\varphi_1(\rho;m)\cr \varphi_2(\rho;m)} \sim \pmatrix{a_+ \rho^{m-1/2}+a_-\rho^{-m+1/2} \cr b_+ \rho^{m+1/2}+b_-\rho^{-m-1/2}}. \end{equation} Solving (\ref{scaled}) asymptotically at infinity we obtain \begin{eqnarray}\label{infinity} \pmatrix{\varphi_1(\rho;m)\cr \varphi_2(\rho;m)} \sim\frac{1}{\rho^{3/4}}\left\{\left(A_+e^z+A_-e^{-z}\right)\pmatrix{1\cr 1} +C\cos \left(z+\phi\right)\pmatrix{1\cr -1}\right\},\quad z=\tiny{\frac{2}{3}}\rho^{3/2}. \end{eqnarray} The four-dimensional family of solutions can be parameterized by either $a_+, a_-, b_+, b_-$ or $A_+, A_-, C, \phi$. Requiring the solution to be bounded at the origin and at infinity means (for positive $m$) that $A_+=0, a_-=0, b_-=0$. Imposing three homogeneous conditions on a four-dimensional linear space leaves us with a one-dimensional subspace, i.e., a certain function times an arbitrary constant. This is the celebrated ${\cal F}_c(\rho;m)$.
While the reason we require ${\cal F}_c(\rho;m)$ to be bounded at the origin is obvious, the reason we require it to be bounded at infinity is a bit subtle, since (\ref{scaled}) is meaningful only close to the crossing. However, ``close to the crossing'', $\rho<<M^{1/3}$, extends farther and farther in terms of $\rho$ as $M$ gets large. $A_+$ in (\ref{infinity}) should be exponentially small, and can be set to zero to leading order. The solutions which are regular at the origin can be obtained from the fourth-order differential equations, one for each component $\varphi_j$, that follow from (\ref{scaled}). These are related to the differential equation that defines the generalized hypergeometric functions $_0F_3$. The two linearly independent solutions that are regular at the origin are \cite{prep}: \begin{eqnarray}\label{hyper_solutions1} {\cal F}_1(\rho;m)&=& \pmatrix{ \rho^{m-\frac{1}{2}}{ _0F_3}( ;\frac{1}{3},\frac{1}{2}+\frac{m}{3},\frac{5}{6}+ \frac{m}{3};\frac{\rho^6}{6^4})\cr \frac{\rho^{m+\frac{5}{2}}}{6+4m}{ _0F_3}( ;\frac{4}{3},\frac{3}{2}+\frac{m}{3},\frac{5}{6}+ \frac{m}{3};\frac{\rho^6}{6^4})};\cr \cr {\cal F}_2(\rho;m)&=&\pmatrix { \frac{\rho^{m+\frac{7}{2}}}{12+8m}{ _0F_3}( ;\frac{5}{3},\frac{3}{2}+\frac{m}{3},\frac{7}{6}+ \frac{m}{3};\frac{\rho^6}{6^4})\cr \rho^{m+\frac{1}{2}}{ _0F_3}( ;\frac{2}{3},\frac{1}{2}+\frac{m}{3},\frac{7}{6}+ \frac{m}{3};\frac{\rho^6}{6^4})}, \end{eqnarray} where $_0F_3(;a,b,c;\rho)$ are generalized hypergeometric functions \cite{handbook,edm}. The linear combination \begin{equation}\label{F} {\cal F}_c(\rho;m)= A_1(m){\cal F}_1(\rho;m)+ A_2(m){\cal F}_2(\rho;m) \end{equation} is bounded at infinity provided \cite{prep}: \begin{equation}\label{A} A_j(m)=\frac{-(-1)^j 2\pi^{3/2}6^{\frac{(-1)^j-2m}{3}}}{3\Gamma(\frac 1 2 +\frac m 3)\Gamma(\frac 1 2 -\frac{(-1)^j}{6})\Gamma(1+\frac{2m-(-1)^j}{6})}. \end{equation} We have set the global coefficient in (\ref{F}) such that (\ref{cos}) holds. Formulae (\ref{hyper_solutions1}-\ref{A}) are correct only for positive $m$. Their negative-$m$ counterparts can be obtained by interchanging the lower and upper components, as can be seen from (\ref{scaled}). {\bf The anomalous Zeeman shift.} Let us now turn to the Zeeman shift. The magnetic field breaks time reversal symmetry, which means that it adds an imaginary term to $H_e(x)$, in the representation where $H_e(x)$ is real. Generically the magnetic field will remove the conical intersection of $H_e(x)$ at $x=0$ and create a gap between the two energy sheets. The gap will be proportional to the magnetic field in atomic units, and the coefficient will in general be of order one. We can therefore introduce the magnetic field into our model by adding the term $B\sigma_2$ to $H_e(x)$ in (\ref{can2}). There is no harm in taking $B$ to be independent of $x$, since only the value of $B$ at the origin will be of importance to us. $B$ is therefore a constant, proportional to the magnetic field in atomic units. We shall not minimally couple the magnetic field directly to the vibrations, for this turns out to be a weaker effect, of order $M^{-1}$, while the shift mediated by the electrons, in the rotationally invariant case, is of order $M^{-1/6}$, as we shall see. Let us consider the case where the rest of $H_e(x)$, namely the $O(x^2)$ part in (\ref{can2}), does not break the full rotational symmetry of its linear part, so that $m$ is still a good quantum number.
In the rotationally invariant case, the model has, for $B=0$, a two-fold degeneracy: the states $m$ and $-m$ are degenerate. The magnetic field $B$ breaks this degeneracy. The splitting is twice the Zeeman shift in the energy, for the two states $\pm m$ move in opposite directions. Equipped with approximants to the wave function near and far from the crossing, one can use degenerate perturbation theory to calculate, to leading order in $1/M$, the Zeeman splitting and obtain (\ref{main2}). We describe this calculation in detail elsewhere \cite{prep}. Here we would only like to sketch the derivation of the power in (\ref{main2}). Far from the crossing, one can neglect the vector potential in (\ref{sHbo}). The WKB approximant to the radial part of $\psi_{cl,2}$ is \begin{equation}\label{psi_cl} \psi_{cl,2}(r) \approx \frac{N}{r^{3/4}}\cos(\sqrt{M}\frac 2 3 r^{3/2}+\phi), \end{equation} where we have employed the linearity of the energy surfaces near the crossing. It is a general property of the WKB approximation that the normalization coefficient $N$ is independent of $M$, to leading order in $1/M$. From (\ref{cos}) and (\ref{psi_cl}) one sees that the interpolation of the BO radial wave function towards the crossing is \begin{equation}\label{interp} \Psi(r;m)\approx NM^{1/4}{\cal F}_c(M^{1/3}r;m). \end{equation} $B$ removes the degeneracy of the electronic levels at $x=0$. The gap created there due to $B$ is equal to $2B$. Intuitively, the Zeeman shift of a vibrational level will be proportional to the amount of probability density in the vicinity of the crossing times $B$. By ``vicinity'' we mean a neighbourhood of order $M^{-1/3}$ of the origin, the area of which is of order $M^{-2/3}$. The density associated with the wave function is large there and by (\ref{interp}) is proportional to $M^{1/2}$. Hence the weight is proportional to $M^{-1/6}$, which gives the power of the Zeeman splitting in Eq.~(\ref{main2}). The proportionality coefficient includes an integral over the components of ${\cal F}_c$ \cite{prep}, which gives $g(m)$: \begin{equation}\label{gyro} g(m)=\int_0^\infty \rho\, d\rho\, (\varphi_{1c}^2(\rho;m)- \varphi_{2c}^2(\rho;m)). \end{equation} $N$ gives the factor $1/T_e$, where $T_e$ is \begin{equation}\label{period} T_e =\int dr \frac{1}{\sqrt{-E_1(r)}}. \end{equation} The integration is carried out between the two turning points of $E_1$, the negative energy sheet in (\ref{sHbo}). Eq.~(\ref{period}) is proportional to the time it takes a particle with a unit (electronic) mass to travel classically across the potential. It is independent of the nuclear mass, and $T_e^{-1}$ has the order of magnitude of electronic energies. From the invariance of $H_{bo}(B=0)$ under time reversal it follows that $g(m)=-g(-m)$. One motivation for this work was an attempt to gain some understanding of the different status of crossing in theory and experiment. Theory puts crossing and avoided crossing in distinct baskets: conic crossings come with fractional azimuthal quantum numbers while avoided crossings come with integral quantum numbers. In contrast, measurements of molecular spectra normally cannot tell a crossing from a near avoided crossing. Only with {\em precision} measurements \cite{bush} and {\em precise} quantum mechanical calculations \cite{kendrick} can one tell when molecular spectra favor an interpretation in terms of crossing and half integral quantum numbers or avoided crossing with integral quantum numbers. Zeeman splitting appears to be a useful tool to study crossing.
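The values of $g(m)$ quoted above can, in principle, be reproduced directly from Eqs.~(\ref{hyper_solutions1}), (\ref{A}) and (\ref{gyro}). The following Python sketch using \texttt{mpmath} is a crude illustration; the finite cutoff and the simple quadrature are assumptions, and the slowly decaying oscillatory tail of the integrand requires more careful treatment for accurate values:
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 30   # extra precision: the two terms in F_c cancel strongly

def Fc(rho, m):
    """Components (phi_1c, phi_2c) of F_c(rho; m) for positive m,
    built from Eqs. (hyper_solutions1) and (A)."""
    z = rho**6 / mp.mpf(6)**4
    f = lambda a, b, c: mp.hyper([], [a, b, c], z)
    F1 = (rho**(m - mp.mpf(1)/2)
            * f(mp.mpf(1)/3, mp.mpf(1)/2 + m/3, mp.mpf(5)/6 + m/3),
          rho**(m + mp.mpf(5)/2) / (6 + 4*m)
            * f(mp.mpf(4)/3, mp.mpf(3)/2 + m/3, mp.mpf(5)/6 + m/3))
    F2 = (rho**(m + mp.mpf(7)/2) / (12 + 8*m)
            * f(mp.mpf(5)/3, mp.mpf(3)/2 + m/3, mp.mpf(7)/6 + m/3),
          rho**(m + mp.mpf(1)/2)
            * f(mp.mpf(2)/3, mp.mpf(1)/2 + m/3, mp.mpf(7)/6 + m/3))
    def A(j):
        s = (-1)**j
        return (-s * 2 * mp.pi**mp.mpf('1.5') * mp.mpf(6)**(mp.mpf(s - 2*m)/3)
                / (3 * mp.gamma(mp.mpf(1)/2 + m/3)
                     * mp.gamma(mp.mpf(1)/2 - mp.mpf(s)/6)
                     * mp.gamma(1 + mp.mpf(2*m - s)/6)))
    return (A(1)*F1[0] + A(2)*F2[0], A(1)*F1[1] + A(2)*F2[1])

def g(m, rho_max=10):
    """Crude finite-cutoff estimate of Eq. (gyro)."""
    return mp.quad(lambda r: r*(Fc(r, m)[0]**2 - Fc(r, m)[1]**2),
                   [0, rho_max])

print(g(mp.mpf(1)/2))
\end{verbatim}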
The anomaly of crossing is characterized by a fractional power $\gamma$ of the molecular reduction of the Zeeman splitting, $M^{-\gamma}$. Molecular trimers are systems of choice for studying crossings \cite{bush,kendrick}. Since trimers are not rotationally invariant, Eq.~(\ref{main2}) does not apply, and we cannot conclude that $\gamma=1/6$ near crossing for trimers. However, it is natural to expect that the qualitative features of our results carry over also to the non-isotropic case, where crossing will manifest itself by an anomalously large Zeeman splitting and a {\em fractional} $\gamma$. It is an interesting challenge to calculate or measure the value of $\gamma$ for (other) molecular crossings, and trimers in particular. \section*{Acknowledgments} We thank M.V. Berry for encouraging us to look for a special function that characterizes the crossing and E. Berg, A. Elgart and L. Sadun for helpful suggestions. We thank C.~A.~Mead for useful comments. This research was supported in part by the Israel Science Foundation, the Fund for Promotion of Research at the Technion and the DFG.
\section{Introduction} \label{sec:Introduction} Advances in the field of low-temperature scanning tunneling microscopy (STM) have enabled the detection and manipulation of the spin of individual magnetic atoms and molecules \cite{Stipe1998}. With current STM techniques magnetic atoms can be arranged into artificial assemblies such as chains, ladders or few-atom aggregates \cite{Hirjibehedin2006,Loth2010,Gauyacq2012,Spinelli2014}, hereafter referred to as engineered atomic spin devices (EASDs). The ability to manipulate and monitor individual atomic spins using inelastic electron tunneling spectroscopy has permitted us to address a set of new questions such as the origin and nature of magnetism in few-atom aggregates and nanostructures, the effects of many-particle correlations between the localized atomic spins and the itinerant electrons crossing the system, and the identification of spin excitations from differential conductance spectra. In parallel to fundamental physics aspects, EASDs are of major interest for spintronic applications \cite{Leuenberger2001,Troiani2005,Imre2006,Khajetoorians2011}. Up to now, EASDs have mostly been applied to improve classical information storage technology. However, as the exploration of coherent quantum regimes is becoming experimentally reachable, these devices hold great potential for applications in quantum information processing and manipulation. A typical EASD consists of a set of magnetic atoms deposited on a crystalline few-atoms-thick layer of insulating material that coats a metallic substrate (see Fig. \ref{fgr:Sketch}). The presence of the insulator reduces the hybridization of the atoms with the underlying metallic substrate and strongly suppresses charge fluctuations. This leaves the atomic spin as the only relevant low-energy degree of freedom. Each atom can be addressed individually by a metallic spin-polarized STM tip. An electronic current ensues by applying a finite bias voltage between the substrate and the tip, collecting contributions from elastic and inelastic processes. Elastic processes arise when electrons pass from one metallic lead to the other with no energy change. They can be due to direct tip-substrate hopping, amounting to a trivial contribution to the differential conductance, or due to mediated hopping via degenerate energy states of the atomic structure -- the mechanism responsible for Kondo-like physics \cite{Hewson1997}. However, for temperatures or voltages larger than the Kondo energy scale, nontrivial elastic processes can be neglected. Inelastic processes arise when the electrons, while tunneling through the atomic structure, exchange energy with its internal degrees of freedom. The theory of inelastic tunneling through EASDs has received important contributions in recent years. A perturbative approach, assuming small tip-atom and substrate-atom couplings, was developed \cite{Fransson2009,Fernandez-Rossier2009,Delgado2010a,Ternes2015} in parallel with a strong-coupling approach \cite{Persson2009,Lorente2009}. These approaches, based on a set of classical rate equations, predict the current-voltage characteristics of the system. In particular, they model the signature of the atomic structure excitation spectrum in the measured differential conductance \cite{Otte2009,Delgado2010a}. Despite these substantial advances, a complete picture of the nonequilibrium transport processes in EASDs is still lacking.
A particular aspect that requires better understanding is the role of nondiagonal components of the density matrix, i.e., quantum coherences. Existing works mostly concentrate on the computation of the decoherence times \cite{Delgado2010,Gauyacq2015,Delgado2016}, leaving out the question of the effect of coherences on the observables. This issue is of major importance if EASDs are to be operated in quantum coherent regimes, e.g., as devices for quantum information processing. In this work we take the first steps toward a quantum mechanical description of the dynamics in EASDs. We use a theoretical approach based on the microscopically derived Redfield equation \cite{Pollard1996,Breuer2002} for the density matrix of the atomic subsystem. The Redfield equation is a type of master equation describing the evolution of an open quantum system weakly coupled to its environment. Originally employed to model nuclear magnetic resonance \cite{Wangsness1953,Bloch1957,Redfield1957}, it has been applied in various fields including quantum optics \cite{Scully1997,Breuer2002,Gardiner2004}, chemical dynamics \cite{Nitzan2006}, and electronic transport \cite{Esposito2009}. Our goal is to describe inelastic transport processes in EASDs, in particular to predict the average value of the current and the shot noise measured by STM. To access the information about the electronic current through the system, we generalize the Redfield equation approach to charge-specific density matrices \cite{Rammer2004,Flindt2004,Flindt2005}. We derive expressions for the steady-state values of the average current and of the shot noise. In order to illustrate our method, we consider single atoms of different total spin and an atomic chain as examples. We study how coherences affect the current and shot-noise characteristics for several setups including different tip polarization geometries. The results are compared with the previous approaches where coherences are neglected \cite{Fernandez-Rossier2009,Delgado2010a} in order to highlight regimes where coherent dynamics sets in. The paper is organized as follows. Section \ref{sec:Model} gives a description of the setup and the model Hamiltonian. Section \ref{sec:Method} describes the method. Section \ref{sec:MethodSummary} summarizes the methodology and provides the final expressions for the average current and the shot noise. The details of the derivation are presented in Sec. \ref{sec:MethodDetails} and the application to EASDs is given in Sec. \ref{sec:Application}. In Sec. \ref{sec:Results} we present some illustrative examples: a single atom with spin $1/2$, a single atom with spin $5/2$, and a chain of atoms with spin $1/2$. We discuss our results and draw conclusions in Sec. \ref{sec:Discussion}. Appendices are devoted to technical details of the derivation. \section{Model} \label{sec:Model} A generic setup of an EASD, sketched in Fig. \ref{fgr:Sketch}, can be described by the Hamiltonian $H=H_{A}+H_{R}+H_{I}$, which includes the Hamiltonian of the atomic subsystem $H_{A}$, the Hamiltonian of the electronic degrees of freedom of the tip and of the substrate $H_{R}$, and the coupling Hamiltonian $H_{I}$. In the following we specify and describe each term. \paragraph*{Magnetic atoms.} We consider the limit when the atomic charge gap is much larger than other characteristic energies. The atoms thus possess a well-defined number of electrons, and tunneling through atomic orbitals is only possible by virtual excitations of different charge states.
Therefore, each atom behaves as a localized spin coupled to other atoms and to the spin of conduction electrons by an effective exchange term \cite{Anderson1966}. As a result, the low-energy Hamiltonian of the atomic ensemble can be expressed solely in terms of spin degrees of freedom \cite{Gatteschi2006}, with symmetry arguments dictating its generic form \cite{Hirjibehedin2007,Otte2008,Fernandez-Rossier2009} \begin{equation} \begin{split} &H_{A}=\sum_{r}\left[DS^{2}_{rz'}+E\left(S^{2}_{rx'}-S^{2}_{ry'}\right)\right]+\\ &+\sum_{\langle rr'\rangle}J_{rr'}\mathbf{S}_{r}\cdot\mathbf{S}_{r'}+g\mu_{B}\sum_{r}\mathbf{B}\cdot\mathbf{S}_{r}, \end{split} \label{eqn:Cluster} \end{equation} where $r=1,...,L$ enumerates the atoms. The first term describes the magnetic anisotropy of the crystal parametrized by the coefficients $D$ and $E$. Here the spin is quantized along the principal axes of the crystal $x'$ (hard axis), $y'$ (intermediate axis), and $z'$ (easy axis). The second term corresponds to an effective exchange $J_{rr'}$ between pairs of neighboring atoms $\langle rr'\rangle$ arising, e.g., from superexchange or from the RKKY interaction mediated by the substrate. The third term is the Zeeman splitting induced by an external magnetic field $\mathbf{B}$ and proportional to the atomic $g$ factor. \paragraph*{Substrate and tip.} We model the substrate as a set of identical metallic reservoirs, each one coupled to a single atom (see Fig. \ref{fgr:Sketch}). This describes the limit when the substrate-mediated correlations between the atoms, other than the effective exchange interaction, are negligible. The polarized tip is modeled as an additional metallic reservoir coupled to a specific atom $r_{0}$. For an ensemble of $L$ atoms, this amounts to considering an environment consisting of $L+1$ electronic reservoirs in total. The Hamiltonian of the reservoirs is given by $H_{R}=H_{T}+ H_{S}$ with the corresponding tip and substrate Hamiltonians \begin{equation} H_{T}=\sum_{\sigma k}\varepsilon_{\sigma k}^{(T)}f_{\sigma k}^{\dag}f_{\sigma k},~~~H_{S}=\sum_{r\sigma k}\varepsilon_{\sigma k}^{(S)}c_{r\sigma k}^{\dag}c_{r\sigma k}, \end{equation} where $\sigma=\uparrow,\downarrow$ is the spin of electrons quantized along the tip polarization vector $\mathbf{P}$, and $k$ runs over single-particle states of the reservoirs. Electrons in all reservoirs (tip and substrate) are in thermal equilibrium with a common temperature $1/\beta$ (in energy units) and chemical potentials $\mu_{S}$ for the substrate and $\mu_{T}=\mu_{S}-eV$ for the tip, where $V$ is the applied voltage and $-e$ is the electron charge. The metallic nature of the electronic reservoirs translates to a local density of states $\varrho_{\eta\sigma}(\varepsilon)=\mathcal{V}_{\eta}^{-1}\sum_{k}\delta\left(\varepsilon-\varepsilon_{\sigma k}^{(\eta)}\right)$ with $\eta=T,S$, that may be considered energy-independent within the energy scales of interest. Here $\mathcal{V}_{\eta}$ stands for the volume of the reservoir. We introduce a spin-dependent density of states to account for the tip polarization. For electrons in the tip we assign $\varrho_{T\sigma}=w_{\sigma}\varrho_{T}$, with $w_{\uparrow}=1+p$, $w_{\downarrow}=1-p$, where $p$ is the polarization parameter ranging from $-1$ to $1$. For electrons in the unpolarized substrate $\varrho_{S\uparrow}=\varrho_{S\downarrow}=\varrho_{S}$.
Even though we work in the wideband approximation, for regularization purposes we use rectangular-shaped densities of states \begin{equation} \varrho_{\eta}(\varepsilon)=\varrho_{\eta}\Theta\left(W-|\varepsilon|\right), \end{equation} where $\Theta$ is the Heaviside function and $W$ is the bandwidth, much larger than the other energy scales of the system. \begin{figure}[t] \includegraphics[width=0.8\columnwidth]{Sketch} \caption{(a) Schematics of a typical EASD. Magnetic atoms are deposited on an insulating layer coating a metallic substrate. Upon applying a voltage difference between the metallic STM tip and the substrate, a charge current ensues. (b) Sketch of the model. The substrate is modeled by a set of independent reservoirs sharing the same chemical potential $\mu_{S}$. The tip is modeled by an additional reservoir with $\mu_{T}=\mu_{S}-eV$. All reservoirs are assumed to be wideband metals.} \label{fgr:Sketch} \end{figure} \paragraph*{Coupling.} The coupling of the atoms to the electrons in the leads is described by the exchange interaction Hamiltonian \cite{Appelbaum1967,Kim2004,Fernandez-Rossier2009,Delgado2010} $H_{I}=\sum_{\eta\eta'}H_{\eta\eta'}$ with \begin{equation} \begin{split} & H_{TS}=\sqrt{J_{T}J_{S}}\sum_{a\sigma\sigma'kk'} {S}_{r_{0}a}\otimes c_{r_{0}\sigma k}^{\dag}\tau_{\sigma\sigma'}^{a}f_{\sigma'k'},\\ & H_{TT}=J_{T}\sum_{a\sigma\sigma'kk'} {S}_{r_{0}a}\otimes f_{\sigma k}^{\dag}\tau_{\sigma\sigma'}^{a}f_{\sigma'k'},\\ & H_{SS}=J_{S}\sum_{ra\sigma\sigma'kk'} {S}_{ra}\otimes c_{r\sigma k}^{\dag}\tau_{\sigma\sigma'}^{a}c_{r\sigma'k'}, \end{split} \label{eqn:Coupling} \end{equation} where $J_{\eta}\simeq2u_{\eta}^{2}U\Delta_{\eta}^{-1}\left(\Delta_{\eta}+U\right)^{-1}$ are the exchange coupling energies determined by the lead-atom hopping amplitude $u_{\eta}$, the intra-atomic Coulomb repulsion $U$ between electrons, and the energy difference $\Delta_{\eta}$ between the atomic level and the Fermi energy of the lead \cite{Schrieffer1966}. $\tau^{a}$ and ${S}_{ra}$, with $a=x,y,z$, are the Pauli matrices and the spin operators of the atom $r$, respectively. The axes are chosen so that $z$ is aligned with the tip polarization $\mathbf{P}$. The inelastic current through the cluster originates from tip-to-substrate $H_{TS}$ and substrate-to-tip $H_{ST}=H_{TS}^{\dag}$ tunneling, while the terms $H_{TT}$ and $H_{SS}$ yield purely relaxational contributions due to tip-to-tip and substrate-to-substrate electron scattering processes. In Eq. (\ref{eqn:Coupling}) we have neglected the momentum dependence of the lead-atom hopping amplitude and used a spin-rotation-invariant exchange coupling. In the following we use the dimensionless parameters $\gamma_{\eta}=\pi J_{\eta}\varrho_{\eta}\mathcal{V}_{\eta}$ to characterize the strength of the tip-atom and substrate-atom couplings. \section{Method} \label{sec:Method} \subsection{Summary} \label{sec:MethodSummary} In this section we summarize the main results of our approach to the description of the transport and dynamics in EASD setups. We discuss the properties of the master equation governing the dynamics of the atomic subsystem and present the generic expressions for the average value of the inelastic current and the shot noise.
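Before summarizing the master equation, it may help to fix ideas with a minimal Python/NumPy sketch of how the atomic Hamiltonian of Eq.~(\ref{eqn:Cluster}) can be assembled numerically. The parameter values are placeholders, and the crystal axes $x',y',z'$ are taken to coincide with the laboratory axes for simplicity:
\begin{verbatim}
import numpy as np

def spin_ops(s):
    """Spin-s operators (S_x, S_y, S_z) in the |s, m> basis."""
    m = np.arange(s, -s - 1, -1)
    Sz = np.diag(m)
    ap = np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1))  # <m+1|S+|m>
    Sp = np.diag(ap, k=1)
    return (Sp + Sp.T) / 2, (Sp - Sp.T) / 2j, Sz

def H_A(L, s, D, E, J, B, g=2.0, muB=1.0):
    """Eq. (eqn:Cluster): anisotropy + NN exchange + Zeeman."""
    ops, d = spin_ops(s), int(2 * s + 1)
    def site(op, r):            # embed single-site operator at site r
        mats = [np.eye(d)] * L
        mats[r] = op
        out = mats[0]
        for M in mats[1:]:
            out = np.kron(out, M)
        return out
    H = sum(D * site(ops[2], r) @ site(ops[2], r)
            + E * (site(ops[0], r) @ site(ops[0], r)
                   - site(ops[1], r) @ site(ops[1], r))
            for r in range(L))
    H = H + sum(J * sum(site(ops[a], r) @ site(ops[a], r + 1)
                        for a in range(3)) for r in range(L - 1))
    H = H + g * muB * sum(B[a] * site(ops[a], r)
                          for a in range(3) for r in range(L))
    return H

# single spin-5/2 atom with placeholder meV-scale parameters
H = H_A(L=1, s=2.5, D=-0.04, E=0.007, J=0.0, B=np.array([0, 0, 0.1]))
levels = np.linalg.eigvalsh(H)
\end{verbatim}
Diagonalizing $H_{A}$ in this way yields the spin multiplet structure that enters the master equation discussed below.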
\paragraph*{Master equation.} Following a standard set of approximations \cite{Breuer2002,Rammer2004} (see below), we derive a Redfield-type master equation for the density matrix of the atomic subsystem \begin{equation} \partial_{t}\rho=\mathcal{L}\rho, \label{eqn:summaryME} \end{equation} where the superoperator $\mathcal{L}$ is given in Eq. (\ref{eqn:LsuperEASD}). The steady state density matrix $\rho_{\infty}$ is calculated as the eigenstate of $\mathcal{L}$ corresponding to the zero eigenvalue, i.e., $\mathcal{L}\rho_{\infty}=0$. The derivation of Eq. (\ref{eqn:summaryME}) assumes the lead-atom coupling to be small within the Born approximation and the leads to have a short memory time. Nonetheless, although a Markov-like approximation is employed, the Redfield equation does not lead to purely Markovian evolution \cite{Wolf2008,Breuer2009,Rivas2010,Hall2014,Ribeiro2015}. Therefore, the Redfield equation is generally not of the Lindblad form and may violate the positivity of the density matrix \cite{Alicki2007}. To prevent the breakdown of positivity, the rotating wave approximation (RWA) is sometimes invoked, leading to an equation where the dynamics of populations and coherences decouple \cite{Breuer2002}, which implies that the coherences vanish in the steady state. This further approximation is valid when the damping rate is much smaller than the Bohr frequencies of the system and is equivalent to a treatment in terms of rate equations. Neglecting coherences may lead to wrong predictions when they become of the same order as the populations \cite{Kaiser2006,Harbola2006}. On the other hand, the violation of positivity during the dynamics by the Redfield equation generally occurs only far from equilibrium; the description of the stationary regime is in general accurate given that the density matrix remains physical \cite{Pechukas1994}. In the present case, at low temperatures as compared with the energy scales of the atomic spin system, this approach is valid as long as the lead-atom coupling is moderate. Away from its range of validity, the steady-state density matrix of Eq. (\ref{eqn:summaryME}) may violate positivity, yielding unphysical results. For vanishing coupling we recover the rate equation results for the evolution of the populations. The method thus suitably describes moderate lead-atom coupling regimes where coherences cannot be disregarded. In our numerical studies below we explicitly checked that $\rho_{\infty}$ is a physically sound density matrix, i.e., has no negative eigenvalues. The approach followed here, due to its perturbative nature, is unable to capture phenomena that are nonperturbative in the lead-atom coupling, e.g., elastic processes responsible for the Kondo-like physics when the atomic structure has a degenerate ground-state manifold. Here we assume nontrivial elastic processes to be absent either by considering nondegenerate atomic spectra or by assuming temperature regimes where such effects are washed away. \paragraph*{Current and shot noise.} In order to describe transport properties, Eq. (\ref{eqn:summaryME}) has been generalized to the evolution of charge-specific density matrices (CSDMs), which give the state of the system conditioned on the number of charge carriers that have left the tip.
Using the method of CSDMs \cite{Rammer2004,Flindt2004,Flindt2005}, we obtain the expression for the average value of the inelastic current in the steady state as \begin{equation} I=-e\,\mbox{tr}\left(\mathcal{J}\rho_{\infty}\right), \end{equation} where the current superoperator $\mathcal{J}$ is defined in Eqs. (\ref{eqn:DJ-definition}) and (\ref{eqn:DsuperEASD}). Elastic terms, appearing in the current spectra due to direct tunneling of electrons between the tip and the substrate, are not accounted for in this expression. These contributions have no impact on the dynamics of the atoms and can be calculated independently. The shot noise of the inelastic current in the steady state can be expressed as \cite{Flindt2004,Flindt2005} \begin{equation} S=4e^{2}\mbox{tr}\left(\mathcal{D}\rho_{\infty}-\mathcal{J}\mathcal{L}^{-1}\mathcal{J}\rho_{\infty}\right), \label{eqn:summaryShotNoise} \end{equation} where $\mathcal{L}^{-1}$ is the pseudoinverse of $\mathcal{L}$, and the superoperator $\mathcal{D}$ is defined in Eqs. (\ref{eqn:DJ-definition}) and (\ref{eqn:DsuperEASD}). The above set of expressions allows us to reproduce the results of Sec. \ref{sec:Results} and is given here for the benefit of a reader who might not be interested in the detailed derivation of the method. \subsection{Derivation} \label{sec:MethodDetails} In this section we provide a derivation of the master equation for a generic system, as well as expressions for the current and the shot noise in the steady state. Our approach is based on the master equations for CSDMs introduced in Ref. \cite{Rammer2004} for an open quantum system driven by a particle flow. In Ref. \cite{Rammer2004} the authors consider a system coupled to two reservoirs (here identified as tip and substrate) with a coupling Hamiltonian $H_{I}$ containing the terms $H_{TS}$ and $H_{ST}$ of Eq. (\ref{eqn:Coupling}). Here we generalize this approach to include relaxation processes due to tip-to-tip and substrate-to-substrate scattering of the electrons, i.e., the terms $H_{TT}$ and $H_{SS}$ in Eq. (\ref{eqn:Coupling}). In order not to restrict the derivation to our particular spin system, in this section we write \begin{equation} H_{\eta\eta'}=\sqrt{J_{\eta}J_{\eta'}}\sum_{\alpha\alpha'}T_{\alpha\alpha'}c_{\alpha}^{\dag}c_{\alpha'}, \label{eqn:R-operators} \end{equation} where $T_{\alpha\alpha'}$ are generic operators of the atomic subsystem, and the index $\alpha$ parametrizes quantum numbers of the electrons in the substrate ($\eta=S$) or the tip ($\eta=T$), i.e., $\alpha=(\sigma,k)$ for $\eta=T$ and $\alpha=(r,\sigma,k)$ for $\eta=S$. The identification of $T_{\alpha\alpha'}$ with specific spin operators of the magnetic atoms is done in Sec. \ref{sec:Application}. \subsubsection{Charge-specific density matrices} CSDMs $\rho_{n}$ of the atomic subsystem are defined as \begin{equation} \rho_{n}=\mbox{tr}_{R}\left(\mathcal{P}_{n}\rho_{\text{tot}}\right), \label{eqn:CSDMdef} \end{equation} where $\rho_{\text{tot}}$ is the full density matrix of the system (atoms plus leads) and $\mbox{tr}_{R}$ stands for the trace over all reservoirs. The operator $\mathcal{P}_{n}$ projects the full Hilbert space onto the subspace with $n$ particles transferred from the tip to the substrate (compared to the initial state). Note that summing up the CSDMs recovers the density matrix of the system, $\sum_{n}\rho_{n}=\rho$.
As shown in Appendix \ref{sec:Derivation}, the CSDMs evolve according to the equations of motion \begin{equation} \partial_{t}\rho_{n}+i\left[H_{A},\rho_{n}\right]=-i\mbox{tr}_{R}\left(\mathcal{P}_{n}\left[H_{I},\rho_{\text{tot}}\right]\right). \label{eqn:EOMinitial} \end{equation} The substitution of $H_{I}=\sum_{\eta\eta'}H_{\eta\eta'}$ into the right-hand side of Eq. (\ref{eqn:EOMinitial}) leads to \begin{equation} \begin{split} &\partial_{t}\rho_{n}+i\left[H_{A},\rho_{n}\right]=-i\left[\sum_{\eta\alpha}J_{\eta}f_{\alpha}T_{\alpha\alpha},\rho_{n}\right]+\\ &+\sum_{\eta\eta'\alpha\alpha'}\sqrt{J_{\eta}J_{\eta'}}\left(T_{\alpha\alpha'}C_{\alpha\alpha'}^{(n)}+\mbox{h.c.}\right) \end{split} \label{eqn:EOMnonliouv} \end{equation} (see Appendix \ref{sec:Derivation}) with operators $C_{\alpha\alpha'}^{(n)}$ defined as \begin{equation} iC_{\alpha\alpha'}^{(n)}=\mbox{tr}_{R}\left(\left(c_{\alpha}^{\dag}c_{\alpha'}-f_{\alpha}\delta_{\alpha\alpha'}\right)\rho_{\text{tot}}\mathcal{P}_{n}\right). \label{eqn:C-operators} \end{equation} The numbers $f_{\alpha}=\langle c_{\alpha}^{\dag}c_{\alpha}\rangle$ are determined from the distribution function of electrons in the leads. As shown in Appendix \ref{sec:C-operators}, the operators $C_{\alpha\alpha'}^{(n)}$ satisfy the equations of motion \begin{equation} \begin{split} &\partial_{t}C_{\alpha\alpha'}^{(n)}+i[H_{A},C_{\alpha\alpha'}^{(n)}]-i\left(\varepsilon_{\alpha}-\varepsilon_{\alpha'}\right)C_{\alpha\alpha'}^{(n)}=\\ &=-\mbox{tr}_{R}\left(\left(c_{\alpha}^{\dag}c_{\alpha'}-f_{\alpha}\delta_{\alpha\alpha'}\right)\left[H_{I},\rho_{\text{tot}}\right]\mathcal{P}_{n}\right). \end{split} \label{eqn:ExactEOMforC} \end{equation} \subsubsection{Approximations} Up to this point all the equations were exact. To proceed and obtain a closed set of equations for the evolution of the CSDMs, a number of physically motivated approximations have to be made. Following the standard derivation of the Redfield master equation \cite{Pollard1996,Breuer2002}, we employ both the Born and Markov approximations. Within these approximations, components of the full density matrix $\mathcal{P}_{m}\rho_{\text{tot}}\mathcal{P}_{n}$ with $m\neq n$ vanish. This is due to the fact that tunneling is rare and superpositions of states with different numbers of particles in the leads do not occur at this order in the lead-atom coupling. For the diagonal components we assume separability $\mathcal{P}_{n}\rho_{\text{tot}}\mathcal{P}_{n}\approx\rho_{n}\otimes\rho_{R}$ within the Born approximation.
This yields an approximate equation of motion for $C_{\alpha\alpha'}^{(n)}$ \begin{equation} \begin{split} &\partial_{t} {C}_{\alpha\alpha'}^{(n)}+i\left[ {H}_{A}, {C}_{\alpha\alpha'}^{(n)}\right]-i\left(\varepsilon_{\alpha}-\varepsilon_{\alpha'}\right) {C}_{\alpha\alpha'}^{(n)}\approx\sqrt{J_{\eta}J_{\eta'}}\times\\ &\times\left(\left(1-f_{\alpha}\right)f_{\alpha'} {\rho}_{n-n_{\alpha\alpha'}} {T}_{\alpha\alpha'}^{\dag}-f_{\alpha}\left(1-f_{\alpha'}\right) {T}_{\alpha\alpha'}^{\dag} {\rho}_{n}\right), \end{split} \label{eqn:ApproximateEOMforC} \end{equation} whose solution is given by \begin{equation} \begin{split} &C_{\alpha\alpha'}^{(n)}(t)=\sqrt{J_{\eta}J_{\eta'}}\int\limits_{0}^{t}e^{-i {H}_{A}\tau}\left(\left(1-f_{\alpha}\right)f_{\alpha'} {\rho}_{n-n_{\alpha\alpha'}}(t-\tau)\times\right.\\ &\left.\times {T}^{\dag}_{\alpha\alpha'}-f_{\alpha}\left(1-f_{\alpha'}\right) {T}^{\dag}_{\alpha\alpha'} {\rho}_{n}(t-\tau)\right)e^{i {H}_{A}\tau}e^{i(\varepsilon_{\alpha}-\varepsilon_{\alpha'})\tau}d\tau \end{split} \label{eqn:ApproximateEOMforCSolution} \end{equation} (see Appendix \ref{sec:C-operators}). We assume that the memory time of the leads is short enough to extend the integration limit in the former expression to infinity. Additionally, within the Born approximation we obtain \begin{equation} e^{-i {H}_{A}\tau} {\rho}_{n}(t-\tau)e^{i {H}_{A}\tau}\approx {\rho}_{n}(t). \label{eqn:ZeroApproxForCSDM} \end{equation} Then $ {C}_{\alpha\alpha'}^{(n)}$ are time-independent and expressed as \begin{equation} \begin{split} & {C}_{\alpha\alpha'}^{(n)}=\sqrt{J_{\eta}J_{\eta'}}\left(\left(1-f_{\alpha}\right)f_{\alpha'} {\rho}_{n-n_{\alpha\alpha'}} {\mathcal{T}}_{\alpha\alpha'}^{\dag}-\right.\\ &\left.-f_{\alpha}\left(1-f_{\alpha'}\right) {\mathcal{T}}_{\alpha\alpha'}^{\dag} {\rho}_{n}\right), \end{split} \label{eqn:Cfinal} \end{equation} where we have introduced the operators \begin{equation} {\mathcal{T}}_{\alpha\alpha'}=\int\limits _{0}^{\infty}e^{-i {H}_{A}\tau} {T}_{\alpha\alpha'}e^{i {H}_{A}\tau}e^{-i(\varepsilon_{\alpha}-\varepsilon_{\alpha'})\tau}d\tau. \label{eqn:NewOps} \end{equation} In the eigenbasis $|m\rangle$ of $ {H}_{A}$, i.e., $ {H}_{A}|m\rangle=E_{m}|m\rangle$, the matrix elements of $ {\mathcal{T}}_{\alpha\alpha'}$ are given by \begin{equation} \begin{split} &\left\langle m\left| {\mathcal{T}}_{\alpha\alpha'}\right|n\right\rangle=\pi\delta\left(\varepsilon_{\alpha}-\varepsilon_{\alpha'}+E_{m}-E_{n}\right)\left\langle m\left| {T}_{\alpha\alpha'}\right|n\right\rangle-\\ &-iP\frac{1}{\varepsilon_{\alpha}-\varepsilon_{\alpha'}+E_{m}-E_{n}}\left\langle m\left| {T}_{\alpha\alpha'}\right|n\right\rangle. \end{split} \label{eqn:NewOpsElements} \end{equation} They include singularities at $\varepsilon_{\alpha}-\varepsilon_{\alpha'}=E_{n}-E_{m}$ which disappear after integrating over quasicontinuous spectra of electronic momentum in the leads, as we show below. \subsubsection{Equation of motion for CSDMs} Substituting Eq. (\ref{eqn:Cfinal}) into Eq. 
(\ref{eqn:EOMnonliouv}) results in the equation of motion for CSDMs \begin{equation} \partial_{t} {\rho}_{n}=\mathcal{L} {\rho}_{n}-\mathcal{J} {\rho}'_{n}+\mathcal{D} {\rho}''_{n} \label{eqn:MEforCSDM} \end{equation} (see Appendix \ref{sec:EOMforCSDM} for derivation), where $ {\rho}'_{n}$ and $ {\rho}''_{n}$ stand for the discrete derivatives \begin{equation} \begin{split} & {\rho}'_{n}=\frac{1}{2}\left( {\rho}_{n+1}- {\rho}_{n-1}\right),\\ & {\rho}''_{n}= {\rho}_{n+1}+ {\rho}_{n-1}-2 {\rho}_{n}, \end{split} \label{eqn:DiscreteDerivatives} \end{equation} and $\mathcal{L}$, $\mathcal{J}$, $\mathcal{D}$ are linear superoperators defined below. The superoperator $\mathcal{L}$ is responsible for the evolution of the density matrix. Its action on a generic matrix $ {\chi}$ is given by \begin{equation} \begin{split} &\mathcal{L} {\chi}=-i\left[ {H}'_{A}, {\chi}\right]+\sum_{\eta\eta'\alpha\alpha'}J_{\eta}J_{\eta'}\left(1-f_{\alpha}\right)f_{\alpha'}\times\\ &\times\left( {\mathcal{T}}_{\alpha\alpha'} {\chi} {T}_{\alpha\alpha'}^{\dag}-\frac{1}{2}\left\{ {T}_{\alpha\alpha'}^{\dag} {\mathcal{T}}_{\alpha\alpha'}, {\chi}\right\}+\mbox{h.c.}\right), \end{split} \label{eqn:L-superoperatorDef} \end{equation} where the curly braces stand for the anticommutator and \begin{equation} {H}'_{A}= {H}_{A}+\Delta {H}_{A},\label{eqn:Hshifted} \end{equation} accounts for the autonomous evolution of the atoms governed by the Hamiltonian $H_{A}$ and corrected by the coupling to the leads as \begin{equation} \begin{split} &\Delta {H}_{A}=\sum_{\eta\alpha}J_{\eta}f_{\alpha} {T}_{\alpha\alpha}+\sum_{\eta\eta'\alpha\alpha'}J_{\eta}J_{\eta'}\times\\ &\times\left(1-f_{\alpha}\right)f_{\alpha'}\frac{1}{2i}\left( {T}_{\alpha\alpha'}^{\dag} {\mathcal{T}}_{\alpha\alpha'}- {\mathcal{T}}_{\alpha\alpha'}^{\dag} {T}_{\alpha\alpha'}\right). \end{split} \label{eqn:HamilShift} \end{equation} The superoperators $\mathcal{J}$ and $\mathcal{D}$ acting on an arbitrary matrix $\chi$ are defined as \begin{equation} \begin{split} &\mathcal{J} {\chi}=\mathcal{D}_{+} {\chi}-\mathcal{D}_{-} {\chi},\\ &\mathcal{D} {\chi}=\frac{1}{2}\left(\mathcal{D}_{+} {\chi}+\mathcal{D}_{-} {\chi}\right), \end{split} \label{eqn:DJ-definition} \end{equation} with \begin{equation} \begin{split} &\mathcal{D}_{+} {\chi}=J_{T}J_{S}\sum_{st}\left(1-f_{s}\right)f_{t}\left( {T}_{st} {\chi} {\mathcal{T}}_{st}^{\dag}+ {\mathcal{T}}_{st} {\chi} {T}_{st}^{\dag}\right),\\ &\mathcal{D}_{-} {\chi}=J_{T}J_{S}\sum_{st}\left(1-f_{t}\right)f_{s}\left( {T}_{ts} {\chi} {\mathcal{T}}_{ts}^{\dag}+ {\mathcal{T}}_{ts} {\chi} {T}_{ts}^{\dag}\right), \end{split} \label{eqn:D-superoperators} \end{equation} where indices $t$ and $s$ parametrize electronic states in the tip and the substrate correspondingly. \subsubsection{Summation over bands} We now perform the summation over $k,k'$ in Eqs. (\ref{eqn:L-superoperatorDef}), (\ref{eqn:HamilShift}), and (\ref{eqn:D-superoperators}) for the specific case in which the operators $ {T}_{\alpha\alpha'}$ do not depend on the momenta and the bandwidth of the reservoirs is much larger than other energy scales, i.e., $W\gg\Delta_{\eta},U,eV,1/\beta$. We introduce the index $\lambda=(\eta,r,\sigma)$ that enumerates quantum numbers of the reservoirs other than momentum, so that $\alpha=(\lambda,k)$ and $ {T}_{\alpha\alpha'}= {T}_{\lambda\lambda'}$. Using Eq. 
(\ref{eqn:NewOpsElements}), we evaluate the following sum \begin{equation} \begin{split} & \sum_{kk'}(1-f_{\alpha})f_{\alpha'} {\mathcal{T}}_{\alpha\alpha'}=\varrho_{\eta\sigma}\varrho_{\eta'\sigma'}\mathcal{V}_{\eta}\mathcal{V}_{\eta'}\left(\frac{\pi}{\beta}T'_{\lambda\lambda'}-\right.\\ &\left.-iW{T}_{\lambda\lambda'}\ln4+i\ln\frac{2\beta W}{\pi}\left(\left(\mu_{\eta}-\mu_{\eta'}\right)T_{\lambda\lambda'}+\left[H_{A},T_{\lambda\lambda'}\right]\right)\right) \end{split} \label{eqn:SumOverMomenta} \end{equation} (see Appendix \ref{sec:WBA} for derivation), where $T'_{\lambda\lambda'}$ are operators with matrix elements \begin{equation} \left\langle m\left|T'_{\lambda\lambda'}\right|n\right\rangle=g\left(\beta\left(\mu_{\eta}-\mu_{\eta'}+E_{m}-E_{n}\right)\right)\left\langle m\left|T_{\lambda\lambda'}\right|n\right\rangle, \end{equation} and $g\left(x\right)=x\left(e^{x}-1\right)^{-1}$. After substitution into Eq. (\ref{eqn:MEforCSDM}), the imaginary part of Eq. (\ref{eqn:SumOverMomenta}) contributes to the Hamiltonian shift (\ref{eqn:HamilShift}) as \begin{equation} \begin{split} & \Delta {H}_{A}=\frac{W}{\pi}\sum_{\lambda}\gamma_{\lambda}T_{\lambda\lambda}+\frac{1}{\pi\beta}\sum_{\lambda\lambda'}\gamma_{\lambda}\gamma_{\lambda'}\times\\ & \times\frac{1}{2i}\left(T_{\lambda\lambda'}^{\dag}T'_{\lambda\lambda'}-\text{h.c.}\right)-\frac{W\ln4}{\pi^{2}}\sum_{\lambda\lambda'}\gamma_{\lambda}\gamma_{\lambda'}T_{\lambda\lambda'}^{\dag}T_{\lambda\lambda'}+\\ & +\frac{1}{2\pi^{2}}\ln\frac{2\beta W}{\pi}\left[H_{A},\sum_{\lambda\lambda'}\gamma_{\lambda}\gamma_{\lambda'}T^{\dag}_{\lambda\lambda'}T_{\lambda\lambda'}\right], \end{split} \label{eqn:HamilShiftSummed} \end{equation} where we have identified the parameters $\gamma_{\lambda}=\pi J_{\eta}\varrho_{\eta\sigma}\mathcal{V}_{\eta}$. Substituting Eq. (\ref{eqn:SumOverMomenta}) into Eq. (\ref{eqn:L-superoperatorDef}), one obtains \begin{equation} \begin{split} & \mathcal{L} {\chi}=-i\left[H'_{A},\chi\right]+\frac{1}{\pi\beta}\sum_{\lambda\lambda'}\gamma_{\lambda}\gamma_{\lambda'}\times\\ & \times\left(T''_{\lambda\lambda'}\chi T_{\lambda\lambda'}^{\dag}-\frac{1}{2}\left\{T_{\lambda\lambda'}^{\dag}T''_{\lambda\lambda'},\chi\right\} +\text{h.c.}\right). \end{split} \label{eqn:LsuperSummed} \end{equation} where we defined \begin{equation} T''_{\lambda\lambda'}=T'_{\lambda\lambda'}+i~\frac{\beta}{\pi}\ln\frac{2\beta W}{\pi}\left[H_{A},T_{\lambda\lambda'}\right]. \label{eqn:TppOperators} \end{equation} In a similar way Eq. (\ref{eqn:D-superoperators}) becomes \begin{equation} \begin{split} & \mathcal{D}_{+} {\chi}=\frac{1}{\pi\beta}\sum_{\lambda_{S}\lambda_{T}}\gamma_{\lambda_{S}}\gamma_{\lambda_{T}}\left(T''_{\lambda_{S}\lambda_{T}}\chi T_{\lambda_{S}\lambda_{T}}^{\dag}+\text{h.c.}\right),\\ & \mathcal{D}_{-} {\chi}=\frac{1}{\pi\beta}\sum_{\lambda_{S}\lambda_{T}}\gamma_{\lambda_{T}}\gamma_{\lambda_{S}}\left(T''_{\lambda_{T}\lambda_{S}}\chi T_{\lambda_{T}\lambda_{S}}^{\dag}+\text{h.c.}\right). \end{split} \label{eqn:DsuperSummed} \end{equation} In the following we do not take the imaginary part of the operators (\ref{eqn:TppOperators}) into account, as it leads to unphysical results. We believe that this term is an artifact of performed approximations and would vanish in a more rigorous treatment, e.g., going beyond the Born approximation. We thus use $T'_{\lambda\lambda'}$ instead of $T''_{\lambda\lambda'}$ in Eqs. (\ref{eqn:LsuperSummed}) and (\ref{eqn:DsuperSummed}). 
We however leave the corresponding logarithmic term in the Hamiltonian shift (\ref{eqn:HamilShiftSummed}), as it has a physical meaning \cite{Oberg2014}. \subsubsection{Master equation} As stated in Sec. \ref{sec:MethodSummary}, $\mathcal{L}$ determines the evolution of the atomic subsystem. This can be seen by summing Eq. (\ref{eqn:MEforCSDM}) over the charge-specific components, which leads to the equation of motion for the unconditioned density matrix $\rho=\sum_{n}\rho_{n}$. We use $\sum_{n}\rho'_{n}=0$ and $\sum_{n}\rho''_{n}=0$ to obtain \begin{equation} \partial_{t}\rho=\mathcal{L}\rho. \label{eqn:MEforDM} \end{equation} In principle, this equation can be put in a canonical form in order to identify the decoherence rates that characterize the dissipative dynamics \cite{Hall2014}. We were not able to perform this procedure in general but observed in the specific examples below that the decoherence rates are not always positive. This implies that the evolution of the density matrix is generally non-Markovian. A general proof that the density matrix evolving according to Eq. (\ref{eqn:MEforDM}) remains positive semidefinite has also not been found. Nevertheless, for all the examples worked out in Sec. \ref{sec:Results} we checked numerically that this was the case. We note that the usual Markovian master equation is recovered in some limiting cases; see Sec. \ref{sec:Application}. \subsubsection{Current} The probability that $n$ electrons have been transferred from the tip to the substrate is given by $p_{n}=\mbox{tr}\,\rho_{n}$. The average current from the tip to the substrate is thus given by $I=-e\partial_{t}\langle n\rangle=-e\,\mbox{tr}\sum_{n}n\partial_{t}\rho_{n}$. Using Eq. (\ref{eqn:MEforCSDM}) and the relations $\sum_{n}n\rho'_{n}=-\rho$, $\sum_{n}n\rho''_{n}=0$, one can show that \begin{equation} I=-e\,\mbox{tr}\mathcal{J}\rho. \label{eqn:Current} \end{equation} The steady state value of the current is calculated by substituting $\rho=\rho_{\infty}$ into Eq. (\ref{eqn:Current}), where the steady state density matrix $\rho_{\infty}$ is calculated as the eigenstate of $\mathcal{L}$ associated with the zero eigenvalue. \subsubsection{Shot noise} Fluctuations of the current are characterized by the shot noise defined as \begin{equation} S=2e^{2}\partial_{t}\left(\langle n^{2}\rangle-\langle n\rangle^{2}\right). \end{equation} Using arguments similar to those for the current, one can show that \begin{equation} S=4e^{2}\mbox{tr}\left(\mathcal{D}\rho+\mathcal{J}\sum_{n}\left(n-\langle n\rangle\right)\rho_{n}\right). \label{eqn:Shotnoise} \end{equation} In contrast to the case of the average current, the shot noise cannot be expressed through the density matrix alone. One also needs to evaluate the quantity $\sum_{n}(n-\langle n\rangle)\rho_{n}=\rho^{(1)}$, which satisfies the equation of motion \begin{equation} \partial_{t}\rho^{(1)}=\mathcal{L}\rho^{(1)}+\mathcal{J}\rho-\rho\,\mbox{tr}\mathcal{J}\rho. \label{eqn:MEforDM1} \end{equation} In the steady state we obtain \begin{equation} \mathcal{L}\rho_{\infty}^{(1)}=\rho_{\infty}\mbox{tr}\left(\mathcal{J}\rho_{\infty}\right)-\mathcal{J}\rho_{\infty}, \label{MEforDM1_2} \end{equation} which has the formal solution \begin{equation} \rho_{\infty}^{(1)}=-\mathcal{L}^{-1}\mathcal{J}\rho_{\infty} \label{eqn:rho1steady} \end{equation} (see Appendix \ref{sec:Inversion}), where $\mathcal{L}^{-1}$ is the pseudoinverse of $\mathcal{L}$, i.e., taken excluding the zero eigenvalue of $\mathcal{L}$.
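The steady-state quantities defined above are straightforward to evaluate numerically once $\mathcal{L}$, $\mathcal{J}$, and $\mathcal{D}$ are represented as matrices acting on the vectorized density matrix. The following Python sketch is only a minimal illustration of this step (it is not the code used for the results of Sec. \ref{sec:Results}; the column-stacking convention and all names are ours): it computes $\rho_{\infty}$, the current (\ref{eqn:Current}), and the shot noise via Eqs. (\ref{MEforDM1_2}) and (\ref{eqn:rho1steady}).
\begin{verbatim}
import numpy as np

def vec(m):                          # column-stacked vectorization
    return m.reshape(-1, order="F")

def steady_state(L, d):
    # right null vector of the Liouvillian matrix L, reshaped into a
    # d x d density matrix with unit trace (assumes a unique steady state)
    w, V = np.linalg.eig(L)
    rho = V[:, np.argmin(np.abs(w))].reshape((d, d), order="F")
    rho = rho / np.trace(rho)
    return 0.5 * (rho + rho.conj().T)    # remove numerical asymmetry

def current_and_noise(L, J, D, d, e=1.0):
    rho = steady_state(L, d)
    r, tr = vec(rho), vec(np.eye(d))     # tr(X) = tr . vec(X)
    I = -e * np.real(tr @ (J @ r))
    # solve L x = rho tr(J rho) - J rho for rho^(1), then project out the
    # stationary component so that tr(rho^(1)) = 0; this realizes the
    # pseudoinverse of L taken excluding its zero eigenvalue
    rhs = r * np.real(tr @ (J @ r)) - J @ r
    x, *_ = np.linalg.lstsq(L, rhs, rcond=None)
    x = x - r * (tr @ x)
    S = 4.0 * e**2 * np.real(tr @ (D @ r) + tr @ (J @ x))
    return rho, I, S
\end{verbatim}
The final projection enforces the gauge condition $\mbox{tr}\,\rho^{(1)}_{\infty}=0$, which is precisely the component left undetermined by the singular $\mathcal{L}$.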
\subsection{Application to EASD} \label{sec:Application} Here we apply the presented method to the model of the EASD introduced in Sec. \ref{sec:Model}. In particular, we specify Eqs. (\ref{eqn:HamilShiftSummed}), (\ref{eqn:LsuperSummed}), (\ref{eqn:DsuperSummed}) using the coupling Hamiltonian (\ref{eqn:Coupling}) that may be recovered from the generic one used in Sec. \ref{sec:MethodDetails} by the substitution \begin{equation} T_{\lambda\lambda'}=S_{r\sigma\sigma'}\delta_{rr'}\left(\delta_{rr_{0}}+\left(1-\delta_{rr_{0}}\right)\delta_{\eta S}\delta_{\eta'S}\right), \label{eqn:EASDjumpopers} \end{equation} where $\lambda=\left(\eta,r,\sigma\right)$ and $ {S}_{r\sigma\sigma'}$ stands for the atomic operators \begin{equation} {S}_{r\sigma\sigma'}= \begin{cases} ~ {S}_{rz} & \mbox{if}~\sigma=\sigma'=\uparrow,\\ ~ {S}_{r+}= {S}_{rx}+i {S}_{ry} & \mbox{if}~\sigma=\downarrow,~\sigma'=\uparrow,\\ ~ {S}_{r-}= {S}_{rx}-i {S}_{ry} & \mbox{if}~\sigma=\uparrow,~\sigma'=\downarrow,\\ ~- {S}_{rz} & \mbox{if}~\sigma=\sigma'=\downarrow. \end{cases} \label{eqn:Soperators} \end{equation} The delta functions are introduced in Eq. (\ref{eqn:EASDjumpopers}) to account for the features of the model: (i) electrons only tunnel between the leads coupled to the same atom; (ii) the tip is only coupled to the atom $r_{0}$. As shown in Appendix \ref{sec:EASD}, the resulting expressions for Eqs. (\ref{eqn:HamilShiftSummed}), (\ref{eqn:LsuperSummed}), (\ref{eqn:DsuperSummed}) include $ {S}_{r\sigma\sigma'}$ and operators $Q_{r\sigma\sigma'}^{(0)}$, $Q_{r\sigma\sigma'}^{(+)}$ and $Q_{r\sigma\sigma'}^{(-)}$ whose matrix elements are given by \begin{equation} \begin{split} &\langle m|Q_{r\sigma\sigma'}^{(0)}|n\rangle=g\left(\beta\left(E_{m}-E_{n}\right)\right)\langle m| {S}_{r\sigma\sigma'}|n\rangle,\\ &\langle m|Q_{r\sigma\sigma'}^{(+)}|n\rangle=g\left(\beta\left(E_{m}-E_{n}+eV\right)\right)\langle m| {S}_{r\sigma\sigma'}|n\rangle,\\ &\langle m|Q_{r\sigma\sigma'}^{(-)}|n\rangle=g\left(\beta\left(E_{m}-E_{n}-eV\right)\right)\langle m| {S}_{r\sigma\sigma'}|n\rangle. \end{split} \label{eqn:Qoperators} \end{equation} For the Hamiltonian shift (\ref{eqn:HamilShiftSummed}) we obtain \begin{equation} \begin{split} & \Delta H_{A}=\frac{1}{\pi\beta}\sum_{r\sigma\sigma'}\frac{1}{2i}\left(S_{r\sigma\sigma'}^{\dag}A_{r\sigma\sigma'}-\text{h.c.}\right)+\frac{1}{\pi}p\gamma_{T}W S_{r_{0}z}- \\ & -\frac{8\ln2}{\pi^{2}}p^{2}\gamma_{T}^{2}W S_{r_{0}z}^{2}+\frac{2}{\pi^{2}}p^{2}\gamma^{2}_{T}\ln\frac{2\beta W}{\pi}\left[H_{A},S^{2}_{r_{0}z}\right]+C, \end{split} \label{eqn:HamilShiftEASD} \end{equation} where the constant part is given by \begin{equation} \begin{split} & C=\frac{4W\ln2}{\pi^{2}}\left(\gamma_{S}^{2}\sum_{r}\mathbf{S}_{r}^{2}+\right.\\ & \left.+2\gamma_{S}\gamma_{T}\mathbf{S}_{r_{0}}^{2}+\gamma_{T}^{2}(1-p^{2})\mathbf{S}_{r_{0}}^{2}\right), \end{split} \end{equation} and we have introduced operators \begin{equation} \begin{split} & A_{r\sigma\sigma'}=\delta_{rr_{0}}\gamma_{S}\gamma_{T}\left(w_{\sigma}Q_{r\sigma\sigma'}^{(+)}+w_{\sigma'}Q_{r\sigma\sigma'}^{(-)}\right)+ \\ & +\left(\gamma_{S}^{2}+\delta_{rr_{0}}\gamma_{T}^{2}w_{\sigma}w_{\sigma'}\right)Q_{r\sigma\sigma'}^{(0)}. \end{split} \label{eqn:Aoperators} \end{equation} The terms in the shift (\ref{eqn:HamilShiftEASD}), except for the first one, act as a renormalization of the magnetic field and the anisotropy parameters in Eq. (\ref{eqn:Cluster}). We thus do not explicitly account for them in the numerical calculations. For Eq. 
(\ref{eqn:LsuperSummed}) we obtain \begin{equation} \begin{split} &\mathcal{L}\chi=-i\left[H'_{A},\chi\right]+\frac{1}{\pi\beta}\sum_{r\sigma\sigma'}\left(A_{r\sigma\sigma'}\chi S_{r\sigma\sigma'}^{\dag}-\right.\\ &\left.-\frac{1}{2}\left\{S_{r\sigma\sigma'}^{\dag}A_{r\sigma\sigma'},\chi\right\} +\text{h.c.}\right). \end{split} \label{eqn:LsuperEASD} \end{equation} Finally, the result for Eq. (\ref{eqn:DsuperSummed}) is expressed as \begin{equation} \begin{split} &\mathcal{D}_{+}\chi=\frac{\gamma_{T}\gamma_{S}}{\pi\beta}\sum_{\sigma\sigma'}w_{\sigma'}\left(Q_{r_{0}\sigma\sigma'}^{(+)}\chi S_{r_{0}\sigma\sigma'}^{\dag}+\mbox{h.c.}\right),\\ &\mathcal{D}_{-}\chi=\frac{\gamma_{S}\gamma_{T}}{\pi\beta}\sum_{\sigma\sigma'}w_{\sigma}\left(Q_{r_{0}\sigma\sigma'}^{(-)}\chi S_{r_{0}\sigma\sigma'}^{\dag}+\mbox{h.c.}\right). \end{split} \label{eqn:DsuperEASD} \end{equation} The superoperator (\ref{eqn:LsuperEASD}) of the master equation has the Lindblad form when $A_{r\sigma\sigma'}\sim S_{r\sigma\sigma'}$. As shown in Appendix \ref{sec:Lindblad}, this happens in the following cases: (i) infinite temperature, $\beta\to0$; (ii) infinite voltage, $|V|\to\infty$; (iii) a single atom in a magnetic field parallel to the tip polarization, $\mathbf{B}\parallel\mathbf{P}$. The obtained superoperator does not couple the diagonal and off-diagonal elements of the density matrix in the case of a single atom with $\left[H_{A},S_{z}\right]=0$. We thus always obtain results equivalent to those of the rate-equation method for single atoms in the parallel geometry, as shown in the next section. \section{Results} \label{sec:Results} In this section we provide two examples using the equations derived above: (i) a single spin in the presence of a spin-polarized tip, and (ii) a spin chain. In addition to the transport properties and observables of the atomic subsystem, we also compute the von Neumann entropy $S=-\mbox{tr}\left(\rho\ln\rho\right)$ that characterizes the degree of purity of the atomic state. \subsection{Single atom with $S=1/2$} \label{subsec:Single1/2} The simplest example of a magnetic structure is an atom with spin $S=1/2$, for which the density matrix can be expressed through the average spin projections as $\rho=\frac{1}{2}+\langle\mathbf{S}\rangle\cdot\mathbf{\tau}$. In this case the anisotropy terms in the Hamiltonian may be discarded as they only yield a constant energy contribution. The Hamiltonian is thus reduced to the contribution of the external magnetic field $\mathbf{B}$, yielding a Zeeman energy gap $\Delta=g\mu_{B}\abs{\mathbf{B}}$ between the two energy levels of the atom. For $V=0$, relaxation processes due to the interaction with the electronic leads bring the atom to a thermal state $\rho\propto e^{-\beta H_{A}}$. At low temperatures $\beta>\Delta^{-1}$ the atomic spin is fully polarized along the magnetic field. A finite applied voltage $V \neq 0$ drives a current through the atom, inducing spin excitations and changing the atomic steady state. The inelastic contribution to the current results from a spin-flip process $|\uparrow\rangle\rightarrow|\downarrow\rangle$ driven by the tunneling electrons. In the following we choose the parameters $g=2$, $B=5$ T, $\gamma_{T}=\gamma_{S}=0.8$, $(\beta k_{B})^{-1}=1$ K and vary the value of the polarization $p=0, 0.5, 1$. These are typical experimental parameters \cite{Hirjibehedin2006,Hirjibehedin2007} within the applicability domain of our method.
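To make the assembly of Eqs. (\ref{eqn:Soperators})--(\ref{eqn:DsuperEASD}) concrete, the sketch below constructs the superoperator (\ref{eqn:LsuperEASD}) for a single atom as a dense matrix, in the same column-stacked convention as the sketch at the end of Sec. \ref{sec:MethodDetails}. It is an illustration under simplifying assumptions, not the code used for the figures: the Hamiltonian shift (\ref{eqn:HamilShiftEASD}) is omitted, and all function and variable names are ours rather than part of any library.
\begin{verbatim}
import numpy as np

def g(x):                                  # g(x) = x / (e^x - 1), g(0) = 1
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    nz = np.abs(x) > 1e-12
    out[nz] = x[nz] / np.expm1(x[nz])
    return out

def easd_liouvillian(HA, Sx, Sy, Sz, beta, eV, gT, gS, p):
    # superoperator of the master equation for a single atom, acting on
    # the column-stacked density matrix (Delta H_A omitted for brevity);
    # e.g., for S = 1/2: Sx = [[0, .5], [.5, 0]], etc.
    d = HA.shape[0]
    E, U = np.linalg.eigh(HA)
    dE = E[:, None] - E[None, :]           # E_m - E_n
    def Q(S, shift):                       # spectral filtering, Q operators
        return U @ (g(beta * (dE + shift)) * (U.conj().T @ S @ U)) @ U.conj().T
    lmul = lambda A: np.kron(np.eye(d), A)      # chi -> A chi
    rmul = lambda A: np.kron(A.T, np.eye(d))    # chi -> chi A
    def diss(A, B):                        # chi -> A chi B+ - {B+A, chi}/2
        Bd = B.conj().T
        return np.kron(Bd.T, A) - 0.5 * (lmul(Bd @ A) + rmul(Bd @ A))
    jump = {("u", "u"): Sz, ("d", "u"): Sx + 1j * Sy,    # S operators
            ("u", "d"): Sx - 1j * Sy, ("d", "d"): -Sz}
    w = {"u": 1.0 + p, "d": 1.0 - p}
    L = -1j * (lmul(HA) - rmul(HA))
    for (s, sp), S in jump.items():
        A = gS * gT * (w[s] * Q(S, eV) + w[sp] * Q(S, -eV)) \
            + (gS**2 + gT**2 * w[s] * w[sp]) * Q(S, 0.0)  # A operators
        L += (diss(A, S) + diss(S, A)) / (np.pi * beta)   # term plus h.c.
    return L
\end{verbatim}
Sweeping $eV$ and feeding the result into the steady-state routine sketched above yields $dI/dV$ curves of the type discussed below; the superoperators $\mathcal{D}_{\pm}$ of Eq. (\ref{eqn:DsuperEASD}) are assembled analogously from the same ingredients.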
We investigate the steady state of the atom, the differential conductance $dI/dV$, and the differential shot noise $dS/dV$. In order to identify the contribution due to coherences, we compare our results, obtained with the master equation (ME), with those obtained using rate equations (REs). As explained above, the ME method deals with the full density matrix and thus accounts for coherence effects, in contrast to REs that operate with the diagonal elements of $\rho$. However, the master equation cannot be used to study nonperturbative phenomena, such as Kondo correlations, unless the lead-atom coupling is treated beyond the second order. We consider two different geometries where the applied field is either parallel or perpendicular to the polarization vector of the tip. In the parallel geometry, when both $\mathbf{B}$ and $\mathbf{P}$ are along the $z$ axis, the ME and REs yield equivalent spectra for any polarization parameter. Indeed, since $\langle S_{x}\rangle=\langle S_{y}\rangle=0$, the off-diagonal elements of the density matrix vanish and coherences do not affect the average current and the shot noise. The curves for the steady state observables are given in Appendix \ref{sec:Parallel} and reproduce already known results \cite{Delgado2010,Delgado2010a}. \begin{figure}[t] \includegraphics[width=1\columnwidth]{SingleOneHalfPerpendicular} \caption{Steady state characteristics for a single spin $S=1/2$ in a perpendicular geometry ($\mathbf{B}$ along the $z$ axis, $\mathbf{P}$ along the $x$ axis) for different values of the polarization parameter $p$ and for $g=2$, $B=5$ T, $\gamma_{T}=\gamma_{S}=0.8$, $(\beta k_{B})^{-1}=1$ K. The quantities presented as functions of voltage are (a) differential conductance, (b) differential shot noise, (c) average spin component $\langle S_{x}\rangle$, (d) average spin component $\langle S_{y}\rangle$, (e) average spin component $\langle S_{z}\rangle$, (f) entropy. In this geometry the RE and ME approaches are not equivalent for $p\neq0$, as the coherences determined by $\langle S_{x}\rangle$ and $\langle S_{y}\rangle$ do not vanish and contribute to the results. While the ME gives different $dI/dV$ and $dS/dV$ curves for different $p$, the results obtained with REs are independent of $p$ and coincide with the ME results for $p=0$.} \label{fgr:SingleOneHalfPerpendicular} \end{figure} The calculated spectra in the perpendicular geometry, when $\mathbf{B}$ is along the $z$ axis and $\mathbf{P}$ is along the $x$ axis, are shown in Fig. \ref{fgr:SingleOneHalfPerpendicular}. In this case the RE approach gives the same result for any $p$. This is due to the fact that a change in the polarization parameter does not affect the spin population of electrons in the tip measured along a perpendicular direction. Therefore, if coherences are ignored, a polarization perpendicular to the magnetic field applied to the spin should not affect the current. On the contrary, if coherences are taken into account, the mismatch between the polarization of the electrons and the direction of the atomic spin reduces both the average current and the shot noise. This decrease depends on the polarization parameter, reaching a maximum for $p=1$ (fully polarized tip) and vanishing for $p=0$ (unpolarized tip). The clear difference between the curves calculated with REs and the ME shows that in this geometry it is essential to take into account the effects of coherences to correctly describe the average current and the shot noise.
In other words, interference effects within the atomic subsystem substantially modify its conductance properties. It is worth noting that, although the spin is polarized in the $z$ direction and the magnetic field is in the $x$ direction, all three components of the spin acquire a nonzero mean value. This effect is a direct result of the spin-transfer torque \cite{Slonczewski1996}. It has been studied theoretically in quantum dots coupled to magnetic leads in noncollinear arrangements \cite{Konig2003,Braun2004,Rudzinski2004,Weymann2007}. For larger voltages we observe that the entropy is suppressed as the polarization degree of the tip is increased. \begin{figure}[t] \includegraphics[width=1\columnwidth]{SingleOneHalfPerpendicularCoupling} \caption{Dependence of (a) the average current and (b) the shot noise on the coupling strength for a single spin $S=1/2$ in a perpendicular geometry ($\mathbf{B}$ along the $z$ axis, $\mathbf{P}$ along the $x$ axis). The $dI/dV$ and $dS/dV$ curves are computed with REs and the ME for different values of $\gamma_{T}=\gamma_{S}=\gamma$ and for $g=2$, $B=5$ T, $(\beta k_{B})^{-1}=2$ K, $p=1$. In the limit of weak coupling the $dI/dV$ and $dS/dV$ curves obtained with the ME coincide with the results obtained with REs.} \label{fgr:SingleOneHalfPerpendicularCoupling} \end{figure} To analyze the dependence of the inelastic current on the coupling strength, in Fig. \ref{fgr:SingleOneHalfPerpendicularCoupling} we compare $dI/dV$ and $dS/dV$ curves scaled by a $\gamma^{-2}$ factor for different values of $\gamma=\gamma_{S}=\gamma_{T}$. As expected, for a vanishing coupling both the RE and ME methods yield the same results, since the relative contribution of coherences to $dI/dV$ and $dS/dV$ vanishes. To emphasize this contribution to the spectra and make it more pronounced, we use values of $\gamma$ at the limit of validity of the Born approximation. \subsection{Single atom with $S=5/2$} Atoms used in spin-polarized STM experiments typically have spins higher than $S=1/2$. Therefore we now analyze the case of a Mn atom with spin $S=5/2$. Here, even in the absence of an external magnetic field, the energy levels can be split by the anisotropy terms. For $D<0$ the states with $S_{z}=+5/2$ and $S_{z}=-5/2$ are separated by an energy barrier and may be used for quantum information storage \cite{Miyamachi2013}. In the following we set $D=-0.04$ meV, $E=0$, $g=2$, $B=0$ T, $\gamma_{S}=\gamma_{T}=0.6$ and $(\beta k_{B})^{-1}=0.5$ K, taken from Refs. \cite{Hirjibehedin2006,Hirjibehedin2007}. We do not consider the case $E\neq0$ separately, as the corresponding results are not qualitatively different from the ones presented below for the perpendicular geometry. The transport through nanomagnets has been previously studied in a number of papers \cite{Timm2006,Elste2006,Elste2007,Misiorny2007,Misiorny2009}. Here, we focus on the difference between the results of the ME method, which takes coherences into account, and the ones obtained within previous approaches based on the rate equations. In the parallel geometry, with both $\mathbf{B}$ and $\mathbf{P}$ along the $z$ axis, the ME and RE approaches give the same results, similarly to the single atom with spin $S=1/2$. The spectra of the steady state observables are shown in Appendix \ref{sec:Parallel} and coincide with the ones presented in Refs. \cite{Delgado2010,Delgado2010a}.
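The level structure underlying this subsection is easy to check directly by diagonalizing the anisotropy part of Eq. (\ref{eqn:Cluster}) for $S=5/2$ with the parameters above. The following standalone Python sketch (our own illustration, independent of the transport code) exhibits the three doubly degenerate levels with the $S_{z}=\pm5/2$ pair at the bottom of the barrier.
\begin{verbatim}
import numpy as np

def spin_ops(s):
    # spin matrices for arbitrary spin s in the |s, m> basis (m descending)
    m = np.arange(s, -s - 1.0, -1.0)
    Sz = np.diag(m).astype(complex)
    ap = np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1))   # <m+1|S+|m>
    Sp = np.diag(ap, 1).astype(complex)
    return 0.5 * (Sp + Sp.T), -0.5j * (Sp - Sp.T), Sz  # Sx, Sy, Sz

D, E = -0.04, 0.0                    # meV, the values used in the text
Sx, Sy, Sz = spin_ops(2.5)
H = D * Sz @ Sz + E * (Sx @ Sx - Sy @ Sy)
print(np.round(np.linalg.eigvalsh(H), 4))
# -> [-0.25 -0.25 -0.09 -0.09 -0.01 -0.01]: doubly degenerate levels,
#    with the S_z = +-5/2 pair lowest and the anisotropy barrier above it
\end{verbatim}
These spin matrices can also be fed into the single-atom Liouvillian sketch above to treat $S=5/2$ on the same footing as $S=1/2$.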
\begin{figure}[t] \includegraphics[width=1\columnwidth]{SingleFiveHalfPerpendicular} \caption{Steady state characteristics for a single spin $S=5/2$ in a perpendicular geometry ($z$ is the easy axis of the crystal and $\mathbf{P}$ is along the $x$ axis) for different values of the polarization parameter $p$ and for $D=-0.04$ meV, $\gamma_{T}=\gamma_{S}=0.6$, $(\beta k_{B})^{-1}=0.5$ K. The quantities presented as functions of voltage are (a) differential conductance, (b) differential shot noise, (c) average spin component $\langle S_{x}\rangle$, (d) entropy. In this geometry the RE and ME approaches are not equivalent for $p\neq0$, as the coherences determined by $\langle S_{x}\rangle$ do not vanish and contribute to the results. Other components of the spin vanish, i.e., $\langle S_{y}\rangle=\langle S_{z}\rangle=0$. While the ME gives different $dI/dV$ and $dS/dV$ curves for different $p$, the results obtained with REs are independent of $p$ and coincide with the ME results for $p=0$.} \label{fgr:SingleFiveHalfPerpendicular} \end{figure} The spectra of the steady state current in the perpendicular geometry, when $z$ is the easy axis and $\mathbf{P}$ is along the $x$ axis, are shown in Fig. \ref{fgr:SingleFiveHalfPerpendicular}. In this case the RE approach gives slightly different curves for different $p$, in contrast to the single atom with spin $S=1/2$. However, we do not show this difference as it is small compared to the contribution due to coherences, which grows with the polarization parameter. The switching of the atom to the state whose magnetization is collinear with the tip polarization requires higher voltages than for the parallel geometry. This is explained by the change in the atomic spectrum due to the magnetic field produced by the polarized current. The switching occurs for the polarized tip with $p\neq0$ and is accompanied by a decrease in the entropy as the voltage increases. For the unpolarized tip $p=0$, there is no switching and the entropy monotonically increases with the voltage. \subsection{Spin-$1/2$ chain} The manipulation capabilities of STM can be used to assemble chains of magnetic atoms on the substrate. Compared to the case of single atoms, the conductivity profile of an atom in the chain is modified by the inter-atomic coupling. Here we study the effect of coherences on the inelastic current when the tip drives a current through one of the atoms of a linear chain of $4$ atoms. We consider the chain in the external magnetic field $B$ and study three geometries of the setup: (i) $\mathbf{B}=0$, (ii) $\mathbf{B}\neq0$, $\mathbf{B}\perp\mathbf{P}$, (iii) $\mathbf{B}\neq0$, $\mathbf{B}\parallel\mathbf{P}$. The results calculated with the ME and RE methods are shown in Figs. \ref{fgr:ChainZero}, \ref{fgr:ChainParallel}, and \ref{fgr:ChainPerpendicular} for the same parameters as in Sec. \ref{subsec:Single1/2} and for the case when the tip is coupled to one of the central atoms, $r=2$. The spectra of the steady state current through the chain in zero magnetic field are presented in Fig. \ref{fgr:ChainZero}. In this case the energy scale is set by the coupling constant $J=0.3$ meV. Due to the antiferromagnetic coupling, the ground state of the chain has the total spin $S_{\text{tot}}=0$. The difference between the ME and RE approaches increases with $p$ for the $dI/dV$ curve and is of the same order for all $p$ for the $dS/dV$ curve.
Driving a polarized current through the chain results in the switching to the collinearly polarized state, i.e., the state with the ferromagnetic order of spins. The switching is accompanied by a decrease in the entropy as the voltage increases. \begin{figure}[t] \includegraphics[width=1\columnwidth]{ChainZero} \caption{Steady state characteristics for a chain of 4 spins $S=1/2$ in zero magnetic field for different values of the polarization parameter $p$ and for $(\beta k_{B})^{-1}=1$ K, $\gamma_{T}=\gamma_{S}=0.8$, $J=0.3$ meV. The quantities presented as functions of voltage are (a) differential conductance, (b) differential shot noise, (c) average spin component $\langle S_{z}\rangle$, (d) entropy. Other components of the spin vanish, i.e., $\langle S_{x}\rangle=\langle S_{y}\rangle=0$.} \label{fgr:ChainZero} \end{figure} In the case of the parallel geometry, with both $\mathbf{B}$ and $\mathbf{P}$ along the $z$ axis, the two approaches give different results for any polarization parameter, including the unpolarized tip with $p=0$; i.e., the coherences contribute to the current. This is in contrast to the case of a single spin, see Appendix \ref{sec:Parallel}, where coherences vanish. The contribution of coherences is particularly noticeable in the shot noise, which is suppressed. We explain this by the fact that the coupling drives individual atoms into a coherent superposition of states. The entropy is smaller compared to the case of zero magnetic field. \begin{figure}[t] \includegraphics[width=1\columnwidth]{ChainParallel} \caption{Steady state characteristics for a chain of 4 spins $S=1/2$ in a parallel geometry (both $\mathbf{B}$ and $\mathbf{P}$ along the $z$ axis) for different values of the polarization parameter $p$ and for $g=2$, $B=5$ T, $(\beta k_{B})^{-1}=1$ K, $\gamma_{T}=\gamma_{S}=0.8$, $J=0.3$ meV. The quantities presented as functions of voltage are (a) differential conductance, (b) differential shot noise, (c) average spin component $\langle S_{z}\rangle$, (d) entropy. Other components of the spin vanish, i.e., $\langle S_{x}\rangle=\langle S_{y}\rangle=0$.} \label{fgr:ChainParallel} \end{figure} In the case of the perpendicular geometry, with $\mathbf{B}$ along the $z$ axis and $\mathbf{P}$ along the $x$ axis, the results obtained within the two approaches are not equivalent for any polarization parameter, including $p=0$, unlike the case of a single atom, where the ME and RE results coincide for the unpolarized tip. The difference between the methods is especially pronounced for the shot noise calculations. Note also that, similarly to the case of a single atom, see Fig. \ref{fgr:SingleOneHalfPerpendicular}, the RE approach yields the same result for different tip polarizations. \begin{figure}[t] \includegraphics[width=1\columnwidth]{ChainPerpendicular} \caption{Steady state characteristics for a chain of 4 spins $S=1/2$ in a perpendicular geometry ($\mathbf{B}$ along the $z$ axis, $\mathbf{P}$ along the $x$ axis) for different values of the polarization parameter $p$ and for $g=2$, $B=5$ T, $(\beta k_{B})^{-1}=1$ K, $\gamma_{T}=\gamma_{S}=0.8$, $J=0.3$ meV.
The quantities presented as functions of voltage are (a) differential conductance, (b) differential shot noise, (c) average spin component $\langle S_{x}\rangle$, (d) average spin component $\langle S_{y}\rangle$, (e) average spin component $\langle S_{z}\rangle$, (f) entropy.} \label{fgr:ChainPerpendicular} \end{figure} \section{Conclusion} \label{sec:Discussion} A master equation of the Redfield type describing the dynamics of the density matrix of an atomic spin structure was derived in the limit of a small lead-atom coupling and a short lead memory time, as compared with the energy and time scales of the isolated atomic spin system. Its generalization to charge-specific density matrices allows for the description of transport quantities such as the current and the shot noise, in addition to the observables of the atomic subsystem. Unlike approaches based on rate equations, this description accounts for the dynamics of coherences, i.e., the off-diagonal elements of the density matrix. It is suitable for describing the moderate lead-atom coupling regime where coherences cannot be disregarded. This approach is, however, unable to capture phenomena that are nonperturbative in the lead-atom coupling, such as the Kondo effect, and may yield unphysical results for large coupling. The simplest example where coherence effects are important is a setup made of a single atom with spin $S=1/2$ precessing under an applied magnetic field in the presence of a spin-polarized tip. If the applied field and the tip polarization are parallel, the rate equations yield the same results as our method. In fact, in this case the process can essentially be described in a classical way. However, our results show that if the tip polarization and the applied field are perpendicular, superposition effects are important and we find strong corrections to the rate equation results within the range of applicability of our approach. Atoms with higher total spin, employed in engineered nanomagnets, yield qualitatively similar results that can be monitored by measuring the average current or the shot noise. For more complex systems, such as spin chains, our results show that coherences contribute to the average current already at zero tip polarization. Although the present work only analyzes the steady state properties, coherence effects are crucial for describing the real time dynamics. The present approach is therefore suitable for modeling the high-frequency magnetization dynamics observed in recent experiments \cite{Baumann2015,Krause2016}. Calculation of the time dynamics will also allow us to make a comparison with numerically exact schemes such as the density matrix renormalization group \cite{Schollwock2005} and the quantum Monte Carlo method \cite{Antipov2016}. It is also worthwhile to compare our results with the recently presented kinetic equation approach \cite{Maslova2016a,Maslova2016b}. To summarize, the approach developed in this article provides a further step towards the full quantum mechanical description of atomic spin devices and can therefore be used to explore new quantum coherent regimes that are of crucial importance if these systems are to be used for quantum information processing. \section*{Acknowledgments} We gratefully acknowledge discussions with S. Otte, J. Fernandez-Rossier and A. Lichtenstein. P.R. acknowledges support by FCT through Investigador FCT Contract No. IF/00347/2014. The method derivation (Sec. \ref{sec:Method}) was supported by RFBR Grant No. 16-32-00554.
The numerical modeling (Sec. \ref{sec:Results}) was funded by RSF Grant No. 16-42-01057.
\section{INTRODUCTION} The Dubins path planning problem is widely used for path planning and trajectory planning for unmanned aerial vehicles with a finite yaw rate. For vehicles with a finite yaw rate, it is natural to use Dubins paths to generate flyable trajectories that satisfy the curvature constraints. Given initial and final points in a plane, and a heading direction at each of these two points, a Dubins path gives the shortest path between these points that satisfies the minimum turn radius constraint. There are several results in the literature related to Dubins paths \cite{bui1994accessibility, bui1994shortest, yang2002optimal, wong2004uav, manyam2017tightly, manyam2018tightly}. In \cite{bui1994accessibility, bui1994shortest}, the accessibility regions of Dubins paths are analyzed and the Dubins synthesis problem is presented. The three point Dubins problem, which is a generalization of the Dubins path planning problem, is presented in \cite{yang2002optimal, wong2004uav}; here, an initial and a final configuration are prescribed along with a third point, and the curvature constrained path between the initial and final points should pass through the given third point. The curvature constrained path planning problem in the presence of wind is addressed in \cite{mcgee2005optimal,techy2009minimum}. The problem of finding the shortest curvature constrained path in the presence of obstacles is presented in \cite{boissonnat1996polynomial, macharet2009generation, agarwal1995motion, maini2016path}. Another generalization, the Dubins interval problem, is presented in \cite{manyam2017tightly, manyam2018tightly}; it gives an algorithm to find the shortest curvature constrained path between two points, where the heading is restricted to prescribed intervals at the initial and the final points. This generalization helped in significantly improving the lower bounds for the Dubins traveling salesman problem. In this paper, we propose another generalization of the Dubins path planning problem: given an initial location and direction, a fixed target circle, and a rotational direction, find the shortest curvature constrained path from the initial configuration to a point on the circle, such that the final heading of the path is tangent to the circle in the prescribed direction. This fundamental problem has significant applications, such as obstacle avoidance path planning and the neighborhood Dubins traveling salesman problem \cite{isaacs2011algorithms, macharet2012evolutionary, hespanha}. This problem also arises when finding the shortest path in a pursuit problem where the evader follows a cyclical path. Thus the shortest Dubins path to a target circle presented here has significant applications in several path planning problems. In \cite{Dubins1957}, Dubins shows that the shortest path consists of at most three segments, where each segment is either a circular arc or a straight line. If we represent a circular arc by C and a straight line by S, the shortest path is of the type CSC or CCC. Let L and R represent counter-clockwise and clockwise circular arcs respectively; the shortest path is then one of the following six combinations: LSL, RSL, RSR, LSR, LRL, and RLR. When the distance between the points is greater than four times the minimum turn radius, the shortest path is one of the four combinations of CSC paths \cite{bui1994shortest, goaoc2013bounded}.
Clearly, the shortest path from an initial configuration to a point on the circle can also only consist of one of these six combinations, or the degenerate cases thereof. If one could find the shortest possible path of each of these types that starts at the initial configuration and ends on the circle with the final direction tangential in the given orientation, then the minimum over all six cases gives the shortest Dubins path to the circle. In this paper, we address the four cases of the CSC paths, \textit{i.e.}, LSL, RSL, RSR, and LSR. Under the assumption that the straight line distance between the initial position and any point on the circle is greater than four times the minimum turn radius, it is sufficient to consider only these four cases; however, one needs to analyze all six cases to find the optimal path in the general case. As a first step towards solving the general case, we address the problem with this assumption on the distances. \subsection{Problem Statement} Given an initial configuration ($x$, $y$ coordinates and the heading direction) $(x_i,y_i,\theta_i)$, the target circle, and the rotational direction (clockwise/counter-clockwise), find the shortest path, subject to the minimum turn radius constraints, from the initial configuration to a point on the target circle with the final heading direction tangential to the circle in the specified rotational direction. The problem setup and feasible paths with the final heading tangential to the circle in the clockwise and counter-clockwise directions are shown in Figs. \ref{fig:pathCW} and \ref{fig:pathCCW} respectively. \begin{figure}[htpb] \begin{center} \subfigure[Final tangential direction is clockwise to the target circle]{\includegraphics[width=3in]{FeasPathCW.pdf}\label{fig:pathCW}} \subfigure[Final tangential direction is counter-clockwise to the target circle]{\includegraphics[width=3in]{FeasPathCCW.pdf}\label{fig:pathCCW}} \end{center} \caption{Feasible paths to a circle with the final direction tangential to the target circle} \label{figure_ASME} \end{figure} \subsection*{Notation:} \begin{table}[h!] \caption{Notation} \begin{center} \label{tab:notation} \begin{tabular}{rl} $C_1/C_2:$ & First or second arc/circle of the CSC path \\ $C_3:$ & Target circle \\ $r$: & Minimum turn radius and radius of the target circle\\ $\theta:$& Final heading of the Dubins path\\ $\alpha:$& Angular position of the final point on the target circle \\ $\phi_1/ \phi_2:$& Angle subtended by the first/second arc of the CSC path\\ $L_S:$&Length of the straight line/middle segment of the CSC path \end{tabular} \end{center} \end{table} \section{Main Result} \label{sec:main} \begin{assumption} \label{assum:4r} The distance between the initial and final positions is always greater than four times the minimum turn radius ($r$). \end{assumption} \begin{assumption}\label{assum:equalr} The radius of the target circle $C_3$ and the minimum turn radius of the vehicle are equal. \end{assumption} To find the shortest Dubins path, one can aim to find the shortest path in each of the six classes of Dubins paths; the one with minimum length among these six paths is then the shortest Dubins path to the circle. However, we restrict our analysis in this paper to the four types of CSC paths, which is sufficient to find the optimal path under Assumption \ref{assum:4r}.
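Throughout the following analysis, a candidate final configuration is fully determined by the angular position $\alpha$ on the target circle and the chosen rotational direction. A minimal Python helper encoding this parametrization (the function name is ours, and the tangent heading $\theta=\alpha\mp\frac{\pi}{2}$ is derived in the next section) reads:
\begin{verbatim}
import numpy as np

def target_config(c, d, r, alpha, direction):
    # final configuration on the target circle C3 centered at (c, d) with
    # radius r: position at angular position alpha, heading tangential to
    # the circle in the requested rotational direction
    x = c + r * np.cos(alpha)
    y = d + r * np.sin(alpha)
    theta = alpha - np.pi / 2 if direction == "cw" else alpha + np.pi / 2
    return x, y, theta
\end{verbatim}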
Note that the analysis done in this paper is not restricted to instances that satisfy Assumption \ref{assum:4r}; it applies to any type of CSC path whenever it exists. Furthermore, from the symmetry and the equivalence between Dubins paths \cite{shkel2001dubins}, it is clear that we need to analyze only two of the four CSC path types. The RSR and LSR paths are symmetrical to the LSL and RSL paths respectively, and therefore the results for the LSL and RSL paths are directly applicable to the other two paths. The heading direction at the final position on the circle is tangential to the circle, and this direction could be either clockwise or counter-clockwise with respect to the target circle. Depending on the rotational direction chosen on the target circle, the shortest Dubins paths occur at different positions on the target circle. We analyze the LSL and RSL paths for these two cases separately in Subsections \ref{subsec:cw} and \ref{subsec:ccw}. \subsection{Clockwise tangential direction}\label{subsec:cw} \begin{prop}\label{thm:csccw} The maximum and minimum of the lengths of the LSL and RSL paths with respect to $\alpha$ occur when the direction of the straight line segment passes through the center of the target circle. \end{prop} Since the final heading direction is the clockwise tangent to the target circle, we have $\theta = \alpha - \frac{\pi}{2}$. Using this relation, we prove Proposition \ref{thm:csccw} in the following subsections. The plots of the lengths of the paths against the angular position on the target circle are shown in Fig. \ref{fig:cscalcw}. Though we see only one discontinuity in each plot, there could potentially be two positions where the length is discontinuous. \begin{figure}[h] \begin{center} \includegraphics[width=2.75in]{CSCVsAlphaCW.pdf}\end{center} \caption{The length of the LSL and RSL paths vs the angular position on the target circle. Final headings are clockwise tangents to the target circle.} \label{fig:cscalcw} \end{figure} \subsubsection{LSL Paths} \begin{figure}[h] \begin{center} {\includegraphics[width=3.25in]{PathLSLa.pdf}} \end{center} \caption{LSL Path} \label{fig:pathlsl} \end{figure} Without loss of generality, we assume that the initial position is at the origin and the initial heading is towards the positive $x$-axis, i.e., the initial heading is at zero degrees with respect to the $x$-axis, as shown in Fig. \ref{fig:pathlsl}. Let $(c,d)$ be the center of the target circle and $r$ be its radius. We will express the length of the LSL path for an arbitrary position on the target circle as a function of the angular position $\alpha$. Let $(x,y)$ be the coordinates of the final position of the Dubins path, and let $\theta$ be the final heading direction. Using elementary geometry, one can see that the center of the second circle $C_2$ is $(x-r \sin \theta, y+ r\cos \theta)$. The length of the LSL path is given by the sum of the three segments: the first circular arc, the straight line, and the second circular arc. Let $\phi_1$ and $\phi_2$ be the angles subtended by the first and second arcs respectively, and let $L_S$ be the length of the straight line (second segment of the CSC path). The length $L_S$ is equal to the distance between the centers of the circles $C_1$ and $C_2$. The length of the LSL path is given by the sum of these three segments: $L_{LSL} = L_S + r(\phi_1+\phi_2)$.
Using geometry, one can derive $L_S$, $\phi_1$ and $\phi_2$, which are given as follows: \begin{flalign} L_S &= \sqrt{(x-r\sin \theta)^2 +(y + r\cos \theta -r)^2}, \label{eqn:lslls}\\ \phi_1 &= \mod\left(\mbox{atan2}\left(\frac{y+r\cos \theta -r}{x - r\sin \theta}\right),2\pi\right), \label{eqn:lslphi1}\\ \phi_2 &= \mod(\theta - \phi_1, 2 \pi). \label{eqn:lslphi2} \end{flalign} For this case, the final heading is in the clockwise tangential direction at the target circle, $\theta = \alpha - \frac{\pi}{2}$, and substituting $(x,y) = (c+r \cos\alpha, d+r \sin\alpha )$ in the equations (\ref{eqn:lslls} - \ref{eqn:lslphi1}) gives the following: \begin{flalign} L_S &= \sqrt{(c+2 r \cos \alpha)^2 +(d + 2r\sin \alpha-r)^2}, \label{eqn:lsllscw}\\ \phi_1 &= \mod\left(\mbox{atan2}\left(\frac{d + 2r\sin \alpha-r}{c+2 r\cos \alpha}\right),2\pi\right). \label{eqn:lslphi1cw} \end{flalign} One can find the maximum and minimum of the LSL paths by solving \begin{equation} \frac{d}{d \alpha} \left( L_S+r(\phi_1+\phi_2) \right)= 0. \label{eqn:ddallsl} \end{equation} \begin{lemma}\label{lem:lslmin} The maximum or minimum of the length of the LSL path with respect to the angular position on the target circle ($\alpha$) occurs when $\phi_2=\frac{\pi}{3}$ or $\frac{5 \pi}{3}$, and at these two positions, the direction of the straight line segment of the LSL path (second segment) passes through the center of the target circle. \end{lemma} \begin{proof} Refer to the Appendix for the proof. \end{proof} \begin{corol} The absolute value of the derivative of the length of the LSL path, $\left|\frac{d}{d \alpha} L_{LSL}\right|$, represents the perpendicular distance from the center of the target circle to the straight line segment of the LSL path. \end{corol} \begin{proof} From Fig. \ref{fig:pathlsl}, the perpendicular distance from the center of the target circle to the straight line segment of the LSL path is $r+2r\sin(\phi_2-\frac{\pi}{2})$, which is equal to $r-2r\cos\phi_2$. \end{proof} \subsubsection{RSL Paths} \begin{figure}[htpb] \begin{center} {\includegraphics[width=3.25in]{PathRSLa.pdf}} \end{center} \caption{RSL Path} \label{fig:pathrsl} \end{figure} Without loss of generality, we assume that the initial position is at the origin and the initial heading is towards the positive $y$-axis, i.e., the initial heading is at an angle of $\frac{\pi}{2}$ with respect to the $x$-axis, as shown in Fig. \ref{fig:pathrsl}. Let $(x,y)$ be the coordinates of the final position, which lies on the target circle, and let $\theta$ be the final heading direction. The center of the second circle $C_2$ is given by $(x-r\sin \theta, y+ r\cos \theta)$. The length of the RSL path is the sum of the three segments, $L_S+r(\phi_1+\phi_2)$. Let $L_{cc}$ be the distance between the centers of the circles $C_1$ and $C_2$, which is given by \begin{equation} L_{cc}=\sqrt{(x -r\sin \theta-r)^2+(y+ r\cos \theta)^2}. \label{eqn:rsldcc} \end{equation} The length of the straight line segment and the angles subtended by the arcs are given as follows: \begin{flalign} L_S &= \sqrt{L_{cc}^2 - 4r^2}, \label{eqn:rslls}\\ \phi_1 &= \mod( - \psi_1+\psi_2 + \frac{\pi}{2}, 2\pi), \label{eqn:rslphi1} \\ \phi_2 &= \mod(\theta + \phi_1 - \frac{\pi}{2}, 2 \pi), \label{eqn:rslphi2} \end{flalign} where $\psi_1$ and $\psi_2$ are given as \begin{flalign} \psi_1&= \mbox{atan2}\left(\frac{y+ r\cos \theta}{x -r\sin \theta-r}\right), \label{eq:psi1} \\ \psi_2 &=\mbox{atan2}\left(\frac{2r}{L_S}\right).
\label{eq:psi2} \end{flalign} By substituting $(x,y) = (c+r\cos\alpha, d+r\sin\alpha )$ and $\theta = \alpha - \frac{\pi}{2}$ in the eqs. (\ref{eqn:rsldcc}), (\ref{eq:psi1}) and (\ref{eq:psi2}), $L_{cc}$, $\psi_1$ and $\psi_2$ reduces to the following: \begin{flalign} L_{cc} &= \sqrt{(c + 2r\cos \alpha-r)^2+(d + 2r\sin \alpha)^2} \label{eqn:rsldcccw} \\ \psi_1 &= \mbox{atan2}\left(\frac{d + 2r\sin \alpha}{c + 2r\cos \alpha-r}\right) \\ \psi_2 &= \mbox{atan2}\left(\frac{2r}{L_S}\right) \end{flalign} \begin{lemma}\label{lem:rslmin} The extremum of the length of the RSL path with respect to the angular position on the target circle ($\alpha$) occurs when $\phi_2=\frac{\pi}{3}$ or $\frac{5 \pi}{3}$; and at these two positions, the direction of the straight line of RSL path (second segment) passes through the center of the target circle. \end{lemma} \begin{proof} Refer to the Appendix for the proof. \end{proof} \begin{corol} The absolute value of the derivative of the length of the RSL path $\frac{d}{d \alpha} L_{RSL}$ represents the perpendicular distance from the center of the target circle to the straight line segment of the RSL path. \end{corol} \begin{proof} From Fig. \ref{fig:pathrsl}, the perpendicular distance from the center of the target circle to the straight line segment of the RSL path is $r+2rsin(\phi_2-\frac{\pi}{2})$, which is equal to $r-2rcos(\phi_2)$. \end{proof} \subsection{Counter clockwise tangential direction}\label{subsec:ccw} \begin{prop} \label{prop:lslrslccw} The length of the LSL and RSL paths vary linearly with the angular position ($\alpha$) on the target circle, except for a discontinuity when the final arc of the paths disappears and the paths degenerate to two segment paths (LS or RS), and this corresponds to the shortest LSL or RSL path. \end{prop} A sample plot of the length of these paths is shown in the Fig. \ref{fig:cscalccw}. We will prove this proposition in the following subsections for the LSL and RSL paths. \begin{figure}[h] \begin{center} {\includegraphics[width=2.75in]{CSCVsAlphaCCW.pdf}} \end{center} \caption{The length of the LSL and RSL paths vs the angular position on target circle. Final headings are counter-clockwise tangents to the target circle.} \label{fig:cscalccw} \end{figure} \subsubsection{LSL Paths} The final heading direction is counter-clockwise tangent to the target circle, and it implies that $\theta = \alpha + \frac{\pi}{2}$. Substituting for $(x,y)$ and $\theta$ in equations (\ref{eqn:lslls} - \ref{eqn:lslphi1}) gives the following: \begin{flalign} L_S &= \sqrt{(c^2 +(d-r)^2}, \label{eqn:lsllsccw}\\ \phi_1 &= \mod\left(\mbox{atan2}\left(\frac{d-r}{c}\right),2\pi\right). \label{eqn:lslphi1ccw} \end{flalign} Clearly, the length of the first arc and the straight line segment are constant, and the second circle $C_2$ is co-located with the target circle $C_3$. From eq. (\ref{eqn:lslphi2}), one can see that length of the final arc changes linearly with $\alpha$, except for a discontinuity due to the modulus function. This confirms with the plot of the length of the LSL path as shown in the Fig. \ref{fig:cscalccw}. The minimum of the LSL path occurs when the length of the third segment goes to zero, or $\theta = \phi_1$, which means the direction of the straight line segment is same as the final heading. Thus, the LSL path is shortest at the position where the straight line segment is tangential to the target circle (point $B$ shown in the Fig. \ref{fig:lslccw}), and here the LSL path degenerates to a two segment path LS. 
For any other position, the LSL paths would consist of these two LS segments, and a third arc on target circle $C_3$ as shown in the Fig. \ref{fig:lslccw}. \begin{figure}[htpb] \begin{center} {\includegraphics[width=3in]{LSL_CCW.pdf}} \end{center} \caption{LSL Path where the final heading is a counter clockwise tangent to the target circle} \label{fig:lslccw} \end{figure} \subsubsection{RSL Paths} In this case, as the final heading direction is in the counter-clockwise tangential direction, $\theta =\alpha + \frac{\pi}{2}$, and substituting this in eqs. (\ref{eqn:rsldcc})-(\ref{eqn:rslphi2}) gives the folllowing: \begin{flalign} L_{cc} &= \sqrt{(c-r)^2+(d)^2} \label{eqn:rsldccccw} \\ L_S &= \sqrt{L_{cc}^2 - 4r^2}, \label{eqn:rsllsccw}\\ \phi_1 &=\mod \left(\mbox{atan2}\left(\frac{2r}{L_S}\right) - \mbox{atan2}\left(\frac{d}{c-r}\right) + \frac{\pi}{2}, 2 \pi\right), \label{eqn:rslphi1ccw}\\ \phi_2 &= \mod\left(\theta + \phi_1 - \frac{\pi}{2}, 2 \pi\right). \label{eqn:rslphi2ccw} \end{flalign} Similar to the LSL path, the first two segments are constant for any $\alpha$, and the circles $C_2$ and $C_3$ are co-located. Hence, the length of the $RSL$ path varies only due to the third segment. The value of $\phi_2$ is a piecewise linear function of $\alpha$, and the discontinuity as shown in the Fig. \ref{fig:cscalccw} is due to the modulus function which occurs when $\phi_2$ goes to zero. Also, the RSL path is shortest when this third segment goes to zero, \textit{i.e.} the point where the straight line segment is tangential to the target circle $C_3$ (point $B$ shown in the Fig. \ref{fig:rslccw}). \begin{figure}[htpb] \begin{center} {\includegraphics[width=2.5in]{RSL_CCW.pdf}} \end{center} \caption{RSL Path with final heading in the counter-clockwise tangential direction to the target circle.} \label{fig:rslccw} \end{figure} Lemmas \ref{lem:lslmin} and \ref{lem:rslmin} completes the proof for Proposition \ref{thm:csccw}. \section{Summary of results} For a given rotational direction of the tangent at the target circle, using the analysis in Section \ref{sec:main}, we could find the shortest LSL and RSL paths. Due to the symmetry of the CSC paths, one could evidently extend these conditions to the RSR and LSR paths. In summary, one could find the shortest of any CSC path based on whether the rotational direction of the second arc ($C_2$) of the CSC path and the direction of the tangent at the target circle are same or different. $(i)$ If they are same, the circles $C_2$ and $C_3$ will be co-located, and the shortest CSC path occurs when third arc disappears and the path degenerates to a CS path. $(ii)$ If they are different, the shortest path occurs at the angular position on the target circle at which the second arc will be of angle $\frac{\pi}{3}$, and the direction of the straight line passes through the center of the target circle. Now the path of minimum length among the shortest LSL, LSR, RSR and RSL gives the shortest CSC path: $CSC_{min} = \min \{LSL_{min}, LSR_{min}, RSR_{min}, RSL_{min} \}$. We summarize the conditions for the minimum and the discontinuity of the four CSC paths in the Tables \ref{tab:ressum1} and \ref{tab:ressum2}. Though we assume the initial heading to be zero for LSL and $\frac{\pi}{2}$ for the RSL paths in the Section \ref{sec:main}, the results could be generalized for any initial heading. We list the conditions in the Tables \ref{tab:ressum1} and \ref{tab:ressum2} for any given initial heading $\theta_i$. 
\begin{table}[t] \caption{SUMMARY OF RESULTS FOR COUNTER-CLOCKWISE TANGENT AT TARGET CIRCLE} \begin{center} \label{tab:ressum1} \begin{tabular}{ccl} & & \\ \hline Path type & Minimum & Discontinuity \\ \hline LSL & $\alpha = \theta_i+\phi_1 - \frac{\pi}{2} $ & $\alpha = \theta_i+\phi_1 - \frac{\pi}{2} $\\ RSL & $\alpha = \theta_i -\phi_1 - \frac{\pi}{2} $ & $\alpha =\theta_i -\phi_1 - \frac{\pi}{2}$ \\ RSR & $\alpha = \theta_i-\phi_1 - \frac{5\pi}{6} $ & $ (i) \alpha =\theta_i -\phi_1 - \frac{\pi}{2}$ \\ & & $(ii) (\phi_1=0)$ \\ LSR & $\alpha = \theta_i + \phi_1 - \frac{5\pi}{6} $ & $(i)\alpha = \theta_i+\phi_1 - \frac{\pi}{2}$ \\ & & $(ii) (\phi_1=0)$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[t] \caption{SUMMARY OF RESULTS FOR CLOCKWISE TANGENT AT TARGET CIRCLE} \begin{center} \label{tab:ressum2} \begin{tabular}{ccl} & & \\ \hline Path type & Minimum & Discontinuity \\ \hline LSL & $\alpha = \theta_i+\phi_1 + \frac{5\pi}{6} $ & $(i) \alpha = \theta_i+\phi_1 + \frac{\pi}{2} $\\ & & $(ii) (\phi_1=0)$ \\ RSL & $\alpha = \theta_i -\phi_1 + \frac{5\pi}{6} $ & $(i) \alpha =\theta_i -\phi_1 + \frac{\pi}{2}$ \\ & & $(ii) \phi_1=0$\\ RSR & $\alpha = \theta_i -\phi_1 + \frac{\pi}{2} $ & $\alpha = \theta_i -\phi_1 + \frac{\pi}{2} $ \\ LSR & $\alpha = \theta_i + \phi_1 + \frac{\pi}{2} $ & $\alpha = \theta_i + \phi_1 + \frac{\pi}{2} $\\ \hline \end{tabular} \end{center} \end{table} \section{Conclusion} We considered a generalization of the Dubins path problem, which is to find a shortest Dubins path from an initial configuration to a final position that lies on a given target circle and the final heading is a tangent to the circle. We assume the distance between the initial and final configurations is always greater than four times minimum turn radius, and this leads to only paths of type CSC. We characterized the length of the four CSC paths with respect to the angular position on the target circle, and presented the necessary conditions for the minimum and maximum. The minimum of these four shortest paths would give the shortest Dubins path to the circle. We also derive for the angular positions ($\alpha$), at which the lengths of these paths are discontinuous. Here, we assume the minimum turn radius for the paths and the radius of the target circle are equal, a first future direction would be to extend these results to the case when these two radii are different. Another future direction is to characterize the CCC paths, and all the six classes of paths would complete the analysis for the shortest Dubins path to circle. \bibliographystyle{asmems4}
1,108,101,562,456
arxiv
\section{Introduction} \label{intro} Age- and cause-specific under-five mortality (ACSU5M) rates are a critical indicator of the health and well-being of children worldwide. Therefore, understanding patterns and trends in ACSU5M is crucial for informing and evaluating age and disease-targeted interventions and policies aimed at reducing child mortality. ACSU5M data are typically collected through demographic and health surveys, sample registration systems (SRS), and vital registration (VR) systems. A persistent challenge for accurate and reliable ACSU5M estimation is the lack of complete and consistent data, particularly in low-income countries, where VRs are often incomplete or unavailable. In the absence of individual-level registration data, researchers often have to rely on multiple data sources to estimate ACSU5M, such as SRSs, household surveys, and verbal autopsy (VA) questionnaires. These data are often provided in an aggregated manner, with ages grouped into various levels, making it difficult to perform age-sensitive analysis as it requires standardized age-disaggregated death counts~\citep{diaz2021call}. Despite the importance of data reconciliation in child mortality research, particularly in ACSU5M studies, the theoretical support for the appropriate data disaggregation approach has not been fully developed. In many cases, researchers face a shortage of individual death data with exact recorded ages and must rely on aggregated data where ages are grouped. When dealing with aggregated data, researchers often resort to past studies in choosing the appropriate age categories to use, without considering the empirical evidence. This can result in the use of age groupings that may not accurately reflect the true age distribution within each cause of death (COD). Table~\ref{tab:age} gives an example of typical breakdowns in age categories at increasing levels of disaggregation. \begin{table}[!htb] \centering \includegraphics[width=0.5\linewidth]{nicetable.png} \vspace{2mm} \caption{Age category disaggregation in ACSU5M studies.} \label{tab:age} \end{table} This paper proposes a Bayesian approach to calibrate across data sources reported at different levels of disaggregation and provides estimates of standardized age group distributions. The method combines both individual registration data, if available, and fully-classified age-disaggregated death counts as group counts from a multinomial distribution. The partially-classified aggregated data are then incorporated to jointly estimate the multinomial probabilities, potentially resulting in improved estimates of the age group distributions as well as age- and cause-specific death counts at the desired level. The problem of concurrently estimating multinomial probabilities has been widely studied in the literature, with numerous Bayesian methods proposed~\citep{fienberg1973simultaneous,leonard1977bayesian,alam1986empirical,albert1987empirical}. The case of partial classification in contingency tables has also been explored~\citep{chen1976analysis,albert1985bayesian,gibbons1994bayesian}, where data are partially classified by rows or columns.~\cite{ahn2010bayesian} expanded upon these ideas by proposing a Bayesian approach to handle incompletely classified multinomial data in the study of pathogen diversity. However, the decision-theoretic foundations of these methods have not been fully established. 
Unlike the previous approaches that limit the partial classification to row (column) sums, our method considers a more general scenario, where partial classifications can be any partitions of the fully-classified group index set and provides conditions that incorporating partially-classified data can lead to better Bayes estimators. Additionally, we tailor the method to address the age category reconciliation problem in ACSU5M studies in a more general setting, considering scenarios where the age groups are not completely nested between data sources, as opposed to the nested age structures depicted in Table~\ref{tab:age}. In both cases, we provide comprehensive frameworks for conducting Bayesian inference. Our contributions are as follows: \begin{itemize} \item Our work extends the existing literature on the simultaneous estimation of multinomial probabilities to a more versatile setting. This allows for greater applicability to a wider range of problems. \item We provide theoretical support and numerical studies from the decision-theoretic perspective, demonstrating how the integration of partially classified data can result in enhanced Bayesian estimators of multinomial probabilities under certain conditions, which we explicitly define. \item We conduct simulation studies based on observed, disaggregated data to assess the effectiveness of our age reconciliation method. Our results demonstrate that the proposed approach is promising, and offers novel and valuable perspectives on the mitigation of age inconsistencies in ACSU5M studies. \end{itemize} \section{Method} \subsection{Problem Statement} Let $\bm X = (X_i)_{i=1}^k$ be the fully classified observations that follow a multinomial distribution with parameters $(N, \bm \theta)$, \[\bm X \sim \text{Multi}(\bm x | N, \bm\theta) = \frac{N!}{\prod_{i=1}^k x_i!} \prod_{i=1}^k \theta_i^{x_i},\] where $\bm \theta = \{\theta_i\}_{i=1}^k$ is the unknown vector of group probabilities and $\bm x = (x_i)_{i=1}^k\in \{(x'_i)_{i=1}^k \in \mathbb{N}_0|\sum_{i=1}^k x'_i = N\}$, $\mathbb{N}_0 = \{0\} \cup \mathbb{N} = \{0, 1, 2, \dots\}$. In the ACU5M case, for example, this could be the age- and cause-spefic death counts we observe at a desirable disaggregated level with $k$ different groups. Suppose that we have additional data that are partially classified with respect to age levels, that is, $\bm Y'= (Y'_i)_{i=1}^k\sim \mbox{Multi}(N', \bm \theta)$ independently of $\bm X$, but we only observe the partially classified data $\bm Y = (Y_{A_j})_{j=0}^m$ that follow a multinomial distribution with parameters $(N', \bm \tau)$, where $(A_j)_{j=1}^m$ are the distinct non-singleton proper subsets of $S = \{1,\dots,k\}$, $A_0 = S - \cup_{j=1}^m A_j$, and $\bm \tau = (\tau_j)_{j=0}^m$ with $\tau_j = \sum_{i\in A_j}\theta_i$. Note that $A_0$ can be $\emptyset$. An example of an age disaggregation setting using the notation introduced above is presented below: \begin{equation*} \begin{dcasesnoquad} \mbox{\{0-27d\}}:= \{1, 2\} &\begin{cases}[r]\mbox{\{0-6d\}}:=\{1\} &\\ \mbox{\{7-27d\}}:=\{ 2 \}&\end{cases}\\ \mbox{\{1-11m\}}:= \{3, 4\} & \begin{cases}[r] \mbox{\{1-5m\}} :=\{3\}& \\ \mbox{\{6-11m\}}:=\{ 4 \}&\end{cases}\\ \mbox{\{12-59m\}}:= \{5, 6\}&\begin{cases} [r]\mbox{\{12-23m\}}:=\{ 5 \} & \\ \mbox{\{24-59m\}} :=\{ 6 \}&\end{cases}, \end{dcasesnoquad} \end{equation*} from which we have that \begin{align*} S &= \{1, 2, 3, 4, 5, 6\},\\ A_1 & = \{1,2\}, A_2 = \{3, 4\}, A_3 = \{5, 6\}. 
\end{align*} We consider the problem of estimating $\bm \theta$ with the prior distribution for $\bm \theta$ being the Dirichlet distribution with parameter $\bm \alpha = (\alpha_i)_{i=1}^k$ under the KL loss that can be directly interpreted as divergence measures induced by entropy~\citep{nayak1989estimating}, \[L(\bm p, \bm \theta) = \sum_{i=1}^k \theta_i \log \frac{\theta_i}{p_i},\] where $\bm p = (p_i)_{i=1}^k \in \Theta = \{(\theta_i)_{i=1}^k\in (0,\infty)^{k}|\sum_{i=1}^k \theta_i = 1\}.$ The problem is further broken down into two scenarios: (i) where the $A_j$ sets are mutually exclusive; and (ii) where the $A_j$ sets may overlap. \subsection{Bayes Estimator with Disjoint Partial Classifications} Let $\bm\rho_{A_j} = (\theta_i/\tau_j,i\in A_j)^T$, $j = 0, \dots, m$ be the conditional probabilities of each individual cell within each group $A_j$ and $\bm \eta = (\bm \tau, \bm \rho_{A_j}, j = 0,\dots, m)$. Denote $(\bm X, \bm Y) = (X_1,X_2,\dots, X_k, Y_{A_1}, \dots, Y_{Am})$ and the probability mass function of $(\bm X, \bm Y) = (\bm x,\bm y)$ is given by \begin{align*} & f(\bm x, \bm y|\bm\eta) = \\ & \frac{(N!)(N'!)}{(\prod_{i=1}^k x_i!)(\prod_{j=0}^m y_{A_j}!)}\prod_{j=0}^m \tau_j^{x_{A_j} + y_{A_j}}\prod_{j = 0}^m\prod_{r\in A_j}(\rho_{A_j}^{(r)})^{x_r}, \end{align*} where $x_{A_j} = \sum_{r\in A_j} x_r$. Consider the Bayes estimator with the prior distribution for $\bm \theta$ being the Dirichlet distribution with parameter $\bm \alpha$, which is \begin{align*} f(\bm\theta|\bm\alpha) \propto \prod_{i=1}^k \theta_i^{\alpha_i - 1}. \end{align*} We can show that the posterior density function is given by \begin{equation} \begin{aligned} f(\bm \eta|\bm x, \bm y) &\propto \tau_{0}^{x_{A_0}+\alpha_{A_0}-1}\prod_{j=1}^m \tau_{j}^{x_{A_j} + y_{A_j}+\alpha_{A_j} -1}\\ &\times \prod_{j=0}^m\prod_{r\in A_j}\left(\rho_{A_j}^{(r)}\right)^{x_r + \alpha_r - 1}, \end{aligned} \label{eq:1} \end{equation} where $\alpha_{A_j} = \sum_{r\in A_j}\alpha_r$. Since $\bm \theta$ and $\bm\eta$ bear a one-to-one relationship, they are equivalent parameterizations. It follows from~\ref{eq:1} that $\bm \tau$ and $\bm (\bm\rho_{A_j})_{j=0}^m$ are jointly independent and $(\rho_{A_j}^{(r)})_{r\in A_j = \{j_1,\dots, j_{n_j}\}}$ follows a Dirichlet distribution with parameter $(\alpha_{j_1} + x_{j_1},\dots, \alpha_{j_{n_j}} + x_{j_{n_j}})$ for $j = 0,\dots, m$. Moreover, it can be readily shown that $\bm \tau$ also follows a Dirichlet distribution with parameter $(x_{A_0} + \alpha_{A_0}, x_{A_1} + y_{A_1} + \alpha_{A_1}, \dots, x_{A_j} + y_{A_j} + \alpha_{A_j})$. The closed-form Bayes estimator under the KL loss, which is the posterior mean of $\bm \theta$ based on $(\bm x, \bm y)$ is given in Lemma~\ref{lemma1}. \begin{lemma} With respect to the Dirichlet prior with parameter $\bm \alpha = (\alpha_1, \dots, \alpha_k)$, given data consist of fully-classified $\bm x = (x_i)_{i=1}^{k}$ and partially-classified $\bm y = (y_{A_j})_{j=0}^m$ with disjoint $A_j$'s, the Bayes estimator $(\hat\theta_i)_{i=1}^k$, for $i\in A_j$, is given by \begin{align*} \hat{\theta}_i = \frac{\alpha_i + x_i}{x_{A_j} + \alpha_{A_j}} \frac{y_{A_j} + \alpha_{A_j} + x_{A_j}}{N + N' + \alpha_S}, \end{align*} \label{lemma1} \end{lemma} where $\alpha_S = \sum_{i=1}^k \alpha_i$. 
\subsection{Decision-theoretic Justification for Age Reconciliation} In order to provide a decision-theoretic rationale for the impact of incorporating partially-classified data on the estimations of the multinomial probabilities, we compare the risk functions of $\bm \hat \theta$ and the Bayes estimator $\Tilde{\bm\theta}$ based only on the fully-classified data $\bm x$, which is given by \begin{align*} \tilde{\theta_i} = \frac{x_i + \alpha_i}{\alpha_S + N}, \end{align*} owing to the conjugation of the Dirichlet prior for the multinomial distribution. Let the risk difference be $\Delta_{\bm\theta}(N, N') = \mathbb{E}_{\bm\theta}[L(\hat{\bm \theta}, \bm\theta)] - \mathbb{E}_{\bm\theta}[(L(\tilde{\bm \theta}, \bm\theta)]$. The following lemma gives the decomposition of $\Delta_{\bm\theta}(N, N')$. \begin{lemma} The risk difference between estimators $\hat{\bm\theta}$ and $\Tilde{\bm\theta}$ can be decomposed as \begin{align*} \Delta_{\bm\theta}(N, N') &= \mathbb{E}_{\bm \theta}\bigg[\log \frac{N' + N + \alpha_S}{N +\alpha_S } \\ & +\sum_{j=0}^m \theta_{A_j}\log\left\{\frac{x_{A_j} + \alpha_{A_j}}{y_{A_j} +x_{A_j} + \alpha_{A_j} }\right\}\bigg]\\ &= \sum_{u=1}^{N'}\Delta_{\bm \theta}(N + u - 1, 1) . \end{align*} \label{lemma2} \end{lemma} All proofs can be found in the supplementary materials. As shown in Lemma~\ref{lemma1}, $\Delta_{\bm\theta}(N, N')$ can be expressed as the sum of $\Delta_{\bm \theta}(N + u - 1, 1)$, where $u = 1,\dots, N'$ represents the number of additional partially-classified data points we considered. Without loss of generality, we consider $\Delta_{\bm \theta}(N, 1)$. The following lemma gives the condition when the maximum of $\Delta_{\bm \theta}(N, 1)$ can be achieved at $\bm \theta^* = (\theta_{i}^*)_{i=0}^k \in \bm\Theta$ where for $i\in A_j = \{ j_1, \dots, j_{n_j}\}$, $\theta_{i: i\in A_j}^* = 1/[(1+m)n_{j}]$ and $\sum_{i=0}^k \theta^*_{i:i\in A_j} = \theta_{A_j} = 1/(m+1)$ for all $j = 0, 1,\dots, m$. \begin{lemma} Suppose that $\min_{j}\alpha_{A_j} \geq 2$, then the risk difference $\Delta_{\bm\theta}(N,1)$ is maximized at $\bm\theta = \bm \theta^*$: \[\max_{\bm\theta \in \Theta} \Delta_{\bm \theta}(N, 1) = \Delta_{\bm\theta^*}(N,1).\] \label{lemma3} \end{lemma} We then establish the dominance of the estimators $\hat{\bm\theta}$ and $\tilde{\bm\theta}$ in the following theorem. \begin{theorem} \begin{enumerate}[(i)] \item \label{t1} Fix $m\in \mathbb{N}$ and $N\in \mathbb{N}$. Suppose we have $A_j$'s and $\alpha_i$'s such that $\min_{j}\alpha_{A_j} \geq 2$, then $\hat{\bm\theta}$ dominates $\Tilde{\bm\theta}$ when $N'$ is sufficiently large. \item \label{t2} Fix $m\in \mathbb{N}$ and $N'\in \mathbb{N}$. Suppose we have $A_j$'s and $\alpha_i$'s such that $\min_{j}\alpha_{A_j} \geq 2$, then $\hat{\bm\theta}$ dominates $\Tilde{\bm\theta}$ when $N$ is sufficiently large. \end{enumerate} \label{theorem1} \end{theorem} Theorem~\ref{theorem1} states the sufficient condition for $\Tilde{\bm \theta}$ to be dominated by $\hat{\bm \theta}$, which requires $\alpha_{A_j} = \sum_{i\in A_j}\alpha_i \geq 2$ for all $j = 0,\dots, m$. For example, if non-informative priors are used, the condition is $\min_j |A_j| \geq 2$ for the uniform prior and $\min_j |A_j| \geq 4$ for Jeffrey's prior. With $m$ being fixed, the conditions also require the number of classes $k$ to be at minimum $2(m+1)$ with the uniform prior and $4(m+1)$ with Jeffrey's prior since $\sum_{j=0}^m |A_j| = k$. 
Although this may seem counter-intuitive, it can be interpretable in the context of ACSU5M age reconciliation. The condition for $k$ suggests that the granularity of the age groups must be sufficient to overcome the uncertainties introduced by the partial classifications. This requirement is consistent with the nature of the ACSU5M age reconciliation problem, where the age range of the observations is fixed to be between 0 to 5 years old, and the partially-classified age groups are typically reported in an aggregated manner with a fixed $m$. Additionally, it is also worth mentioning that the value of $k$ can be regarded as a truncation level, which is closely related to the truncation bounds for the Dirichlet Process~\citep{ishwaran2001gibbs} and parallel work for the Indian buffet process~\citep{doshi2009variational}, that the bound decreases with the truncation level. Moreover, the Dirichlet parameter vector $\bm \alpha$ captures the prior belief about $\bm \theta$. It can be seen as a pseudo-count of observations of each class before the actual data is collected~\citep{teh2010dirichlet}. The resulting Dirichlet-multinomial distribution approximates the multinomial distribution arbitrarily well for large $\bm\alpha$ values, which corresponds to strong prior knowledge about the distribution whereas small $\bm\alpha$ values correspond to weak or none prior information. This provides us with a natural framework for incorporating pre-existing knowledge about the age distributions, such as from census data, especially when individual registration or fully-classified data is limited. \subsection{Gibbs Sampling Scheme with Overlapping Partial Classifications} In the context of ACSU5M studies, it is possible for data sources to exhibit non-nested age structures. This means that the age groups $A_j$ may overlap with each other, making the posterior distribution intractable. However, it is still possible to make inferences by utilizing the Gibbs sampling scheme~\citep{gelfand2000gibbs}. Let $((Z_{i|A_j})_{i=1}^k)_{j=0}^k$ denote the count of the partially-classified observations that belong to category $i$, and are included when counting for $A_j$. Then $Y_{A_j}$ can be written as \[Y_{A_j} = \sum_{i=1}^k Z_{i|A_j}.\] Then the Gibbs sampling scheme can be implemented as follows, with the $t$-th iterations obtained by: \begin{align*} (Z_{i|A_j})_{i=1}^k & \sim \mbox{Multi}(y_{A_j}, \bm \pi_j ) \\ c_{i}^{(t)} &= x_i + \sum_{j=0}^m z_{i|A_j}\\ \bm\theta^{(t+1)} & \sim \mbox{Dir}(\bm \alpha_t) \end{align*} where $\bm \pi_j = (\theta_i\mathbbm{1}\{i\in A_j\}/\theta_{A_j})_{i=1}^k$, and $\bm \alpha_t = (\alpha_i + c_i^{(t)})_{i=1}^k$. \section{Experiments} \subsection{Numerical Study} In this section, we present the results of the numerical studies we conducted to investigate the finite-sample performance of $\hat{\bm \theta}$ and $\Tilde{\bm \theta}$ in various classification settings: \begin{enumerate}[(i)] \item \label{s1} $S = \{1, 2, 3\}$, $A_0 = \{1\}$, $A_1 = \{2, 3\}$ \item \label{s2} $S = \{1, 2,\dots, 9\}$, $A_0 = \{1,2,3,4\}$, $A_1 = \{5$,$6,7,8,9\}$ \item \label{s3} $S = \{1, 2, 3\}$, $A_0 = \{1,2\}$, $A_1 = \{2, 3\}$ \item \label{s4} $S = \{1, 2,\dots, 9\}$, $A_0 = \{1,2,3,4,5\}$, $A_1 = \{5$,$6,7,8,9\}$ \end{enumerate} For each setting, we use Jeffrey's prior for $\bm \theta_1 = (1/3, 1/3, 1/3)$ and $\bm \theta_2 = (1/9, \dots, 1/9)$. The following sample sizes are considered: \begin{itemize} \item Fix $N \in \{50, 100, 200\}$ and vary $N'\in \{50, 100$, $200, 500, 1000, 2000\}$. 
\item Fix $N' \in \{50, 100, 200\}$ and vary $N\in \{50, 100$, $200, 500, 1000, 2000\}$ \end{itemize} The results are shown in Figure~\ref{fig:1} and Figure~\ref{fig:2} for disjoint index sets. Overall, as we increase $k$ while fixing other parameters, the risk of both $\hat{\bm \theta}$ and $\Tilde{\bm \theta}$ increases, which is as expected since we aim to capture the variability at a finer level with the same sample size. The first row of the plot for both scenarios represents the case when we fix $N$ and vary $N'$. Note that the sufficient condition for the dominance of $\hat{\bm \theta}$ is not satisfied in~\ref{s1}, and we notice some cases where $\Tilde{\bm \theta}$ has a lower risk when $N'$ is relatively small. In contrast, $\hat{\bm \theta}$ outperforms $\Tilde{\bm \theta}$ overall in~\ref{s2} even when $N'$ is small. The second row represents the case when we fix $N'$ and vary $N$. The risk decreases drastically as $N$ increases due to the fact that we incorporate more and more observations that are fully classified. Figure~\ref{fig:3} and Figure~\ref{fig:4} display the Bayes risk of $\hat{\bm \theta}_1$ and $\hat{\bm \theta}_2$ estimated using Gibbs sampling. Overall, we observe that $\hat{\bm \theta}$ has a higher risk than $\tilde{\bm \theta}$, particularly when $N'$ is large. When $N'$ is fixed, increasing $N$ leads to a decrease in the risk of $\hat{\bm \theta}$, as incorporating more fully classified data reduces the uncertainties introduced by adding $N'$. We test our data disaggregation approach on two empirical examples, where we utilize data from the Sample Registration System (SRS) and Demographic and Health Surveys (DHS) as the sources of information, respectively. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{risk_fix_N_Jeffreys.png} \includegraphics[width=0.9\linewidth]{risk_fix_N_prime_Jeffreys.png} \caption{Comparison of risk between $\bm{\hat{\theta}}_1$ and $\bm{\tilde{\theta}}_1$ estimators in simulation~\ref{s1}.} \label{fig:1} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{risk_fix_N_Jeffreys_s2.png} \includegraphics[width=0.9\linewidth]{risk_fix_N_prime_Jeffreys_s2.png} \caption{Comparison of risk between $\bm{\hat{\theta}}_2$ and $\bm{\tilde{\theta}}_2$ estimators in simulation~\ref{s2}.} \label{fig:2} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{risk_fix_N_Jeffreys_overlap_s1.png} \includegraphics[width=0.9\linewidth]{risk_fix_N_prime_Jeffreys_overlap_s1.png} \caption{Comparison of risk between $\bm{\hat{\theta}}_1$ and $\bm{\tilde{\theta}}_1$ estimators in simulation~\ref{s3}.} \label{fig:3} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{risk_fix_N_Jeffreys_overlap_s2.png} \includegraphics[width=0.9\linewidth]{risk_fix_N_prime_Jeffreys_overlap_s2.png} \caption{Comparison of risk between $\bm{\hat{\theta}}_2$ and $\bm{\tilde{\theta}}_2$ estimators in simulation~\ref{s4}.} \label{fig:4} \end{figure} \subsection{MCHSS Data of China} In this section, we provide the first empirical example using MCHSS data~\citep{schumacher2020flexible} obtained through China's sample registration system dedicated to maternal and child health. 
Over a span of 20 years, from 1996 to 2015, all deaths of children under five years of age residing within the surveillance areas were recorded and grouped into six distinct age categories: 0-6 days, 7-27 days, 1-5 months, 6-11 months, 12-23 months, and 24-59 months, with eight non-overlapping, exhaustive categories of CODs. To demonstrate the effectiveness of our disaggregation method under a nested age structure, we create a synthetic dataset based on the MCHSS data collected during the period of $1996$ to $2005$. In the synthetic dataset, the ages are partially classified into three groups: (i) 0-27 days, (ii) 1-11 months, and (iii) 12-59 months. Our goal is to use the fully classified MCHSS data collected from $2006$ to $2015$ to perform age group disaggregation. \begin{table}[!htb] \parbox[b]{0.48\linewidth}{ \centering \caption*{True 1996-2005 MCHSS Data} \vspace{1mm} \begin{tabular}{l|llllll} \hline & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & 3550 & 638 & 138 & 9 & 2 & 0 \\ 2 & 4398 & 183 & 24 & 0 & 0 & 0 \\ 3 & 1783 & 568 & 732 & 373 & 264 & 270 \\ 4 & 95 & 78 & 459 & 224 & 268 & 473 \\ 5 & 732 & 231 & 463 & 176 & 584 & 1333 \\ 6 & 39 & 92 & 501 & 361 & 270 & 196 \\ 7 & 1227 & 991 & 1671 & 570 & 428 & 328 \\ 8 & 1051 & 722 & 491 & 269 & 230 & 261 \\ \hline \end{tabular} } \parbox[b]{0.48\linewidth}{ \centering \caption*{Predicted 1996-2005 MCHSS Data} \vspace{1mm} \begin{tabular}{l|llllll} \hline & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & 3474 & 714 & 145 & 2 & 1 & 1 \\ 2 & 4258 & 323 & 23 & 1 & 0 & 0 \\ 3 & 1659 & 692 & 740 & 365 & 291 & 243 \\ 4 & 101 & 72 & 454 & 229 & 274 & 467 \\ 5 & 572 & 391 & 471 & 168 & 667 & 1250 \\ 6 & 26 & 105 & 536 & 326 & 321 & 145 \\ 7 & 1094 & 1124 & 1635 & 606 & 435 & 321 \\ 8 & 1059 & 714 & 515 & 245 & 243 & 248 \\ \hline \end{tabular} } \vspace{4mm} \caption{Comparison of true and disaggregated MCHSS data using Bayesian age reconciliation for the years 1996-2005.} \label{tab:mchss-pred} \end{table} We evaluate the effectiveness of our proposed Bayesian age reconciliation method by comparing its predicted results to the actual MCHSS data from 1996 to 2005, as shown in Table~\ref{tab:mchss-pred}. We also compare the accuracy of our method to the data integration by random assignment approach. Our findings demonstrate that our method performs well, achieving a prediction accuracy of $93.0\%$ while only $52.6\%$ of the observations are classified correctly using the random assignment. These results highlights our method's ability to preserve the joint COD-age distribution information with high accuracy. To examine the impact of utilizing the disaggregated data on the estimation of age- and cause-specific child mortality, we fit separate Bayesian models to the true and predicted 1996-2005 MCHSS data, as proposed by~\cite{schumacher2020flexible}. Figures~\ref{fig:logmx-compare-east-urban} present the posterior medians and $80\%$ intervals for the estimated log mortality rates in each age group over the period of 1996-2005 based on the true and predicted data in selected causes and ages. Additionally, we provide posterior median and $80\%$ intervals for the estimated log mortality rates in models with fixed effects only and models with added random effect error terms, as previously discussed in~\cite{schumacher2020flexible}. 
\begin{figure}[!htb] \centering \vspace{1mm} \caption*{True 1996-2005 MCHSS Data} \includegraphics[width = 0.7\linewidth]{true_1.png} \vspace{1mm} \caption*{Predicted 1996-2005 MCHSS Data} \includegraphics[width = 0.7\linewidth]{pred_1.png} \vspace{1mm} \caption{Estimation of log mortality rates for non-communicable diseases in the east urban region using MCHSS data. The plot shows empirical data, estimated posterior medians, and posterior 80\% intervals. Combinations with zero deaths are indicated by an open square. } \label{fig:logmx-compare-east-urban} \end{figure} \begin{figure}[!htb] \centering \caption*{True 1996-2005 MCHSS Data } \includegraphics[width=0.8\linewidth]{true_csmf.png} \caption*{Predicted 1996-2005 MCHSS Data} \includegraphics[width=0.8\linewidth]{pred_csmf.png} \vspace{4mm} \caption{Comparisons of estimated CSMFs between models based the true MCHSS data and the estimated MCHSS data for selected regions, showing agreement in temporal trends and estimated CSMFs.} \label{fig:csmf-compare} \end{figure} Figure~\ref{fig:logmx-compare-east-urban} shows the results of the analysis of non-communicable diseases in the 0-6 days and 7-27 days age groups. Both the true and predicted data fit the models well, but some discrepancies are observed. Specifically, the estimated log mortality rates in the 0-6 days age group are consistently higher when using the predicted data compared to the true data. This discrepancy may be due to some deaths being incorrectly classified in the 7-27 days age group during the disaggregation process. However, the model fitted to the predicted data effectively captures the overall age-specific time trend and benefits from borrowing strength from other age strata. Furthermore, in Figure~\ref{fig:csmf-compare}, the estimated cause-specific mortality fractions (CSMFs) in the selected region are compared between the model fit of the real MCHSS data and the predicted data. This comparison demonstrates that the temporal trends and the estimated CSMFs from the model fit of the predicted data are in agreement with those of the true data. \subsection{BDHS Data} The BDHS data was collected through VA questionnaires in 2011, as well as in 2017-2018. Physical reviews were performed to determine the COD for each individual, with 11 common CODs being recorded. The ages at death for each individual were also documented. To demonstrate the effectiveness of our proposed method under a non-nested age structure, we aggregated the 2011 data into non-standard age categories: (i) 0-3 months, (ii) 4-11 months, and (iii) 12-59 months. For the 2017-2018 data, we created standardized age-disaggregated data with six age groups, as listed in Table~\ref{tab:age}. Table~\ref{tab:bdhs-pred} presents the comparison between the actual and predicted 2011 BDHS data, which shows overall satisfactory results, albeit with a few instances of misclassification in the non-nested age categories. Our proposed method achieved a slightly improved prediction accuracy of $70.4\%$, compared to $54.9\%$ with random assignment. 
\begin{table}[!htb] \parbox[b]{0.48\linewidth}{ \centering \caption*{True 2011 BDHS data} \vspace{1mm} \begin{tabular}{l|llllll} \hline & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & 4 & 4 & 0 & 0 & 0 & 0 \\ 2 & 1 & 3 & 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 & 1 & 13 & 12 \\ 4 & 55 & 2 & 0 & 0 & 0 & 0 \\ 5 & 9 & 0 & 0 & 0 & 0 & 0 \\ 6 & 0 & 0 & 4 & 2 & 4 & 0 \\ 7 & 22 & 21 & 39 & 14 & 6 & 7 \\ 8 & 5 & 5 & 0 & 0 & 0 & 0 \\ 9 & 26 & 5 & 0 & 0 & 0 & 0 \\ 10 & 44 & 14 & 0 & 0 & 0 & 0 \\ 11 & 0 & 0 & 1 & 1 & 0 & 0 \\ \hline \end{tabular} } \parbox[b]{0.48\linewidth}{ \centering \caption*{Predicted 2011 BDHS data} \vspace{1mm} \begin{tabular}{l|llllll} \hline & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & 4 & 3 & 1 & 0 & 0 & 0 \\ 2 & 2 & 1 & 1 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 & 1 & 13 & 12 \\ 4 & 55 & 2 & 0 & 0 & 0 & 0 \\ 5 & 8 & 1 & 0 & 0 & 0 & 0 \\ 6 & 0 & 0 & 4 & 2 & 2 & 2 \\ 7 & 23 & 13 & 57 & 3 & 2 & 11 \\ 8 & 7 & 3 & 0 & 0 & 0 & 0 \\ 9 & 26 & 4 & 1 & 0 & 0 & 0 \\ 10 & 31 & 23 & 4 & 0 & 0 & 0 \\ 11 & 0 & 1 & 1 & 0 & 0 & 0 \\ \hline \end{tabular} } \vspace{4mm} \caption{Evaluating proposed method for non-standard age categories using 2011 BDHS data: comparison of actual and predicted results demonstrates promising performance with some misclassification observed.} \label{tab:bdhs-pred} \end{table} \section{Discussion} In this work, we extend the existing literature on simultaneous estimation of multinomial probabilities to a broader and more versatile setting. We provide theoretical support and numerical studies, demonstrating how the integration of partially classified data can lead to improved Bayesian estimators of multinomial probabilities. Our proposed age reconciliation method is based on this approach and has been tested through simulation studies using observed, disaggregated data. The results show that our method is promising and offers a novel perspective on mitigating age inconsistencies in ACSU5M studies. Our proposed age reconciliation method is a promising first step in tackling the problem of age inconsistencies in ACSU5M studies. However, there are several future directions for research that could further address this issue. First, the CODs are commonly assigned and analyzed by statistical algorithms. For example, several Bayesian methods have been developed to infer CODs based on verbal autopsies~\citep[e.g,][]{mccormick2016probabilistic,kunihama2020bayesian,li2021bayesian,wu2021tree} and estimate the population-level cause specific mortality fractions~\citep[e.g,][]{serina2015improving,byass2019integrated,moran2021}. However, it has been shown that considerable uncertainties exist in the classification in these models. One possible extension of our proposed method is to account for misclassifications in both CODs and age groups through the use of joint misclassification matrices in the model parameter estimations. This can be done by extending Bayesian hierarchical models such as the one proposed by~\cite{mulick2021bayesian}. Our proposed method can extend the approach by accounting for misclassifcations from both CODs and age groups through of use of joint misclassification matrices in the model parameter estimations. Second, although our proposed method offers improved estimators under certain conditions, it fails to provide information theoretic metrics that quantify the effects of data disaggregation. One possible extension of the current approach is to use Dampster-Shafer inference~\citep{dempster1976mathematical} procedures as an alternative approach for multinomial inference~\citep{lawrence}. 
This approach can incorporate epistemic uncertainty and have the potential to quantify the effects of adding partial-classified multinomial data as adversarial attacks. Overall, we believe that these future research directions can further improve the accuracy and reliability of age reconciliations in ACSU5M studies. \bibliographystyle{unsrt}
1,108,101,562,457
arxiv
\section{Introduction} \label{Intro} The ($K^-$,~$\pi^-$) reaction has been an essential tool for studying spectroscopy in hypernuclear physics and strange particle physics \cite{Bruckner76,Bruckner78,Chrien79}. This reaction has played an important role in understanding hypernuclear structures related to the nature of $YN$ interaction, by controlling a momentum transfer over a wide range of $q =$ 0 to a few hundred MeV/$c$ in the exothermic reaction \cite{Feshbach66,Kerman71}, in comparison with endothermic reactions such as ($\pi^+$,~$K^+$) and ($e$,~$e'K^+$) having large momentum transfers of $q \simeq$ 300--500 MeV/$c$. Several authors \cite{Dover79,Auerbach83,Zofka84,Bando90,Itonaga94} studied a shell-model approach to $\Lambda$ hypernuclear spectroscopy in $p$-shell nuclei within a distorted-wave impulse approximation (DWIA), considering a Fermi averaging of a $K^-n \to \pi^-\Lambda$ amplitude, recoil corrections, and distorted waves obtained by solving the Klein-Gordon equations for $K^-$ and $\pi^-$ mesons. The Fermi averaging treatment \cite{Rosenthal80} may essentially affect the shape and magnitude of the production cross sections in the $K^-n \to \pi^-\Lambda$ reaction on nuclei because there appear narrow $Y^*$ resonances whose widths are smaller than the Fermi-motion energy of a struck nucleon in the nuclei \cite{Auerbach83}. These studies extract valuable information on the structure of hypernuclear states and the mechanism of hyperon production reactions from available experimental data at CERN, BNL, and KEK. The experimental studies are now in progress at J-PARC \cite{Nagae21}. However, the authors \cite{Harada05,Harada06,Harada18} showed that the energy and angular dependence of an in-medium amplitude of ${\overline{f}}_{\pi^-p \to K^+\Sigma^-}$ is significant to explain the behavior of $\Sigma^-$ production spectra for nuclear ($\pi^-$,~$K^+$) reactions in the DWIA, using the optimal Fermi averaging (OFA) procedure \cite{Harada04}, which provides the Fermi motion of a nucleon on the on-energy-shell $\pi^-p \to K^+ \Sigma^-$ reaction condition in a nucleus. This procedure was also applied to $\Lambda$ production via the ($\pi^+$,~$K^+$) reaction \cite{Harada04} and $\Xi^-$ production via the ($K^-$,~$K^+$) on nuclei \cite{Harada21}, and indicated a successful description for the $K^+$ spectra of the data. Therefore, it seems that our OFA procedure works very well for these endothermic reactions characterized by large momentum transfers of $q \gtrsim$ 300--500 MeV/$c$. Kohno and his collaborators \cite{Kohno06,Hashimoto08} also discussed the inclusive $K^+$ spectra via nuclear ($\pi^\pm$,~$K^+$) and ($K^-$,~$K^+$) reactions using the semiclassical distorted-wave (SCDW) model \cite{Luo91}. Considering the semiclassical approximation \cite{Kawai62}, Luo and Kawai \cite{Luo91} showed a successful description of ($p$, $p'x$) and ($p$, $nx$) inclusive cross sections for intermediate energy nucleon reactions by the SCDW model. When we attempt to apply the OFA procedure to a calculation for exothermic $\Lambda$ production reactions such as ($K^-$,~$\pi^-$) on nuclei, however, we realize an unavoidable difficulty that no $\Lambda$ hyperon is populated in hypernuclear states under a near-recoilless environment, e.g., $q \lesssim 80$ MeV/$c$, contrary to the evidence of experimental observations \cite{Bruckner76,Bruckner78,Chrien79}. 
In the OFA procedure, the on-energy-shell equation [see Eq.~(\ref{eqn:e11})] needs to satisfy approximately the condition as a discriminant, \begin{eqnarray} \frac{{\bm q}^2}{2\Delta m} - \Delta \omega \geq 0, \label{eqn:e1} \end{eqnarray} with $\Delta \omega = \varepsilon_\Lambda({j_\Lambda}) - \varepsilon_N({j_N})$ and $\Delta m = m_\Lambda - m_N$, where $\varepsilon_\Lambda({j_\Lambda})$ and $\varepsilon_N({j_N})$ ($m_\Lambda$ and $m_N$) are energies of the single-particle states (masses) for $\Lambda$ and $N$, respectively. Considering the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction at $p_K=$ 800 MeV/$c$ in the $\pi^-$ forward direction, we perhaps have difficulty of $q^2/2\Delta m < \Delta \omega$ due to $q \lesssim 80$ MeV/$c$. This conjecture implies that the OFA procedure is not applicable in describing the angular distributions of $d\sigma/d\Omega_{\rm lab}$ in this near-recoilless ($K^-$,~$\pi^-$) reaction. In this paper, we propose to extend the OFA procedure \cite{Harada04} theoretically in order to calculate an in-medium $K^-n\to \pi^- \Lambda$ amplitude of $\overline{f}_{K^-n\to\pi^-\Lambda}$ for the exothermic ($K^-$, $\pi^-$) reaction on nuclei in the framework of the DWIA, taking into account the local momentum transfer generated by semiclassical distorted waves for $K^-$ and $\pi^-$ \cite{Kawai62}. Applying the extended OFA in the DWIA, we estimate the angular distributions of $d\sigma/d\Omega_{\rm lab}$ for the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction at $p_K=$ 800 MeV/$c$, in comparison with those obtained in other standard DWIA calculations. \section{Procedure and Formulas} \subsection{Distorted-wave impulse approximation} We briefly mention a formulation of the angular distributions for the nuclear ($K^-$,~$\pi^-$) reaction in the DWIA. Considering only the non-spin-flip processes in this reaction, the differential cross section for the $\Lambda$ bound state with a spin parity $J^P$ at the $\pi^-$ forward direction angle of $\theta_{\rm lab}$ is often written in the DWIA \cite{Hufner74,Dover80,Auerbach83} as (in units $\hbar = c =1$) \begin{eqnarray} \left({d\sigma \over d\Omega}\right)_{\rm lab}^{J^P} &=& \alpha \frac{1}{2J_A+1} \sum_{m_Am_B} \biggl| \Bigl\langle {\Psi}_B \Big\vert\, \overline{f}_{K^-n\to\pi^-\Lambda} \nonumber\\ &\times & \chi^{(-)*}_{b}\left({\bm p}_{\pi},{\bm r}\right) \chi^{(+)}_{a}\left({\bm p}_{K},{\bm r}\right) \Big| \Psi_{A} \Bigr\rangle \biggr|^2, \label{eqn:e2} \end{eqnarray} where $\Psi_B$ and $\Psi_A$ are wave functions of the hypernuclear final state and the initial state of the target nucleus, respectively. $\chi_{b}^{(-)}$ and $\chi_{a}^{(+)}$ are distorted waves for outgoing $\pi^-$ and incoming $K^-$ mesons, respectively. The kinematical factor $\alpha$ denotes the translation from a two-body $K^-$-nucleon laboratory system to a $K^-$-nucleus laboratory system \cite{Dover83}. The energy and momentum transfers to the final state are given by \begin{eqnarray} &\omega = E_K-E_\pi, &\quad {\bm q} ={\bm p}_K-{\bm p}_\pi, \label{eqn:e3} \end{eqnarray} where $E_{K}=({\bm p}_{K}^2+m_{K}^2)^{1/2}$ and $E_\pi=({\bm p}_\pi^2+m_\pi^2)^{1/2}$ (${\bm p}_{K}$ and ${\bm p}_{\pi}$) are laboratory energies (momenta) of $K^-$ and $\pi^-$ in this reaction, respectively; $m_{K}$ and $m_\pi$ are masses of $K^-$ and $\pi^-$, respectively. The quantity $\overline{f}_{K^-n\to\pi^-\Lambda}$ denotes the in-medium $K^-n \to \pi^- \Lambda$ non-spin-flip amplitude. 
An in-medium $K^-n \to \pi^- \Lambda$ spin-flip amplitude $\overline{g}_{K^-n\to\pi^-\Lambda}$ is neglected in this work because the spin-flip part of the elementary $K^-n\to \pi^-\Lambda$ amplitude gives negligible contributions near the forward direction in the ($K^-$,~$\pi^-$) reaction \cite{Auerbach83}. \subsection{Local momentum transfer} In the semiclassical approximation \cite{Kawai62}, the {\it local} momentum transfer in the nucleus may be defined as \begin{eqnarray} {\bm q}({\bm r}) &\equiv& \frac{{\rm Re}\{(-i{\bm \nabla})\chi_b^{(-)*}({\bm p}_{\pi},{\bm r})\chi_a^{(+)}({\bm p}_{K},{\bm r})\}} {\bigl|\chi_b^{(-)*}({\bm p}_{\pi},{\bm r})\chi_a^{(+)}({\bm p}_{K},{\bm r})\bigr|} \nonumber\\ &=& {\bm p}_K({\bm r})-{\bm p}_\pi({\bm r}), \label{eqn:e4} \end{eqnarray} where ${\bm p}_K({\bm r})$ and ${\bm p}_\pi({\bm r})$ are {\it local} momenta for $K^-$ and $\pi^-$, respectively, which are generated by the semiclassical distorted waves of $\chi_a^{(+)}$ and $\chi_b^{(-)}$ that are assumed to behave as a slowly varying function of a local point ${\bm r}$ in the trajectory. We obtain these distorted waves numerically in program PIRK \cite{Eisenstein74}, solving the Klein-Gordon equations for the $K^-$ and $\pi^-$ mesons with the standard Kisslinger optical potentials, \begin{eqnarray} 2EU(r)=-p^2 b_0 \rho_A(r)+ b_1 {\bm \nabla} \cdot \rho_A(r) {\bm \nabla}, \label{eqn:e5} \end{eqnarray} where $\rho_A(r)$ is the nuclear density normalized to the total number of nucleons, $A$. Considering the ($K^-$, $\pi^-$) reaction on $^{12}$C at $p_K=$ 800 MeV/$c$, we have $p_\pi=$ 732 MeV/$c$ for the $[(0s_{1/2})_\Lambda(0p_{3/2})_n^{-1}]_{1^-}$ state and $p_\pi=$ 744 MeV/$c$ for the $[(0p_{3/2})_\Lambda(0p_{3/2})_n^{-1}]_{0^+,2^+}$ states in $^{12}_\Lambda$C. For $K^-$, we determine the parameters of $b_0$ and $b_1$ in Eq.~(\ref{eqn:e5}), fitting to the data of the 800-MeV/$c$ scattering on $^{12}$C \cite{Marlow82}, leading to $b_0=$ $0.309+i0.498$ fm$^{3}$ and $b_1=$ 0 fm$^{3}$ at $p_K=$ 800 MeV/$c$. For $\pi^-$ at $p_\pi=$ 732 and 744 MeV/$c$, we interpolate the values of $b_0$ and $b_1$ from the corresponding parameters determined by fits to the data of the 710- and 790-MeV/$c$ scatterings on $^{12}$C \cite{Takahashi95}; we have $b_0=$ ($-0.099+i\,0.202$) fm$^{3}$ and $b_1=$ ($-0.258+i\,0.736$) fm$^{3}$ at $p_\pi=$ 732 MeV/$c$, and $b_0=$ ($-0.095+i\,0.201$) fm$^{3}$ and $b_1=$ ($-0.228+i\,0.736$) fm$^{3}$ at $p_\pi=$ 744 MeV/$c$. \begin{figure}[tb] \begin{center} \includegraphics[width=1.0\linewidth]{fig1.eps} \end{center} \caption{\label{fig:1} Magnitude of the local momentum transfers $q(r)$ for the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction at the incident $K^-$ momentum of $p_K=$ 800 MeV/$c$ and $\theta_{\rm lab}=$ 0$^\circ$. The calculated values of the distorted waves (DW) and the plane wave (PW) for the mesons are shown, as a function of the relative distance $r$ between the mesons and the center of the nucleus. The asymptotic momentum transfer corresponds to $q_{K\pi}=$ 56.5 MeV/$c$. } \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=1.0\linewidth]{fig2.eps} \end{center} \caption{\label{fig:2} Meson absorption factor $A_{K\pi}(r)$ in the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction at the incident $K^-$ momentum of $p_K=$ 800 MeV/$c$ and $\theta_{\rm lab}=$ 0$^\circ$, as a function of the relative distance $r$. Dot-dashed, dashed, and dotted curves denote the components of the angular momentum transfers with $\Delta L=$ 0, 1, and 2, respectively. 
The distribution of the neutron density $\rho_n(r)$ in $^{12}$C is also drawn. } \end{figure} Figure \ref{fig:1} shows the calculated values of the magnitude of the local momentum transfer $q(r)=|{\bm q}(r)|$ for the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction at $p_{K}=$ 800 MeV/$c$, as a function of the relative distance $r$ between the mesons and the center of the nucleus. We find that the value of $q(r)$ amounts to about 300 MeV/$c$ at the nuclear center, and decreases toward the nuclear surface of $R=r_0A^{1/3}=$ 2.91 fm for $^{12}$C; it becomes asymptotically $q_{K\pi}=|{\bm p}_K-{\bm p}_\pi|$ at $|{\bm r}| \to \infty$ outside the nucleus, where $q_{K\pi}$ is an asymptotic momentum transfer corresponding to $q$ given in Eq.~(\ref{eqn:e3}). This behavior is determined by the attractive (repulsive) nature of the potential used for $K^-$ ($\pi^-$) in Eq.~(\ref{eqn:e5}). In the plane-wave (PW) approximation for the meson waves, the value of $q(r)$ is equal to $q_{K\pi}$ as a constant. To see clearly the effect of the local momentum transfer in the nucleus, we estimate the root-mean-square value of $q$ defined as \begin{eqnarray} \langle q^2 \rangle^{1/2}= \left[\frac{\int_0^\infty dr r^2 \rho_n(r)A_{K\pi}(r)|{\bm q}(r)|^2} {\int_0^\infty dr r^2 \rho_n(r)A_{K\pi}(r)} \right]^{1/2}, \label{eqn:e7} \end{eqnarray} where $\rho_n(r)$ is the neutron density in the target nucleus, normalized by $\int \rho_n(r)d{\bm r}=N$, and $A_{K\pi}(r)$ is a meson absorption factor \cite{Dover80,Matsuyama88}, which is given by \begin{eqnarray} A_{K\pi}(r) &=&\frac{1}{4\pi}\int \left|\chi_b^{(-)*}({\bm p}_{\pi},{\bm r}) \chi_a^{(+)}({\bm p}_{K},{\bm r})\right|^2 d\Omega. \label{eqn:e8} \end{eqnarray} Figure~\ref{fig:2} shows the neutron density $\rho_n(r)$ in $^{12}$C and the absorption factor $A_{K\pi}(r)$, as a function of the radial distance. We use a modified harmonic oscillator model with the parameters of $\alpha=$ 2.234 and $b=$ 1.516 fm for $^{12}$C \cite{Vries87}, and the distorted waves obtained by Eq.~(\ref{eqn:e5}). Because the asymptotic momentum transfer is small for the near-recoilless ($K^-$,~$\pi^-$) reaction, the partial waves of the angular momentum transfer $\Delta L \lesssim$ 2 in $A_{K\pi}(r)$ contribute to the $\Lambda$ production; the component of $\Delta L=$ 0 is dominant inside the nucleus, whereas the components of $\Delta L=$ 1 and 2 grow toward the outside of the nucleus. We find $\langle q^2 \rangle^{1/2}=$ 203 MeV/$c$ at $\theta_{\rm lab}=$ 0$^\circ$, of which the value is sufficiently larger than $q_{K\pi}=$ 56.5 MeV/$c$. The value of $\langle q^2 \rangle^{1/2}$ may effectively indicate the momentum transfer in the nucleus. Therefore, the local momentum transfer generated by the distorted waves is expected to significantly influence the $\Lambda$ production cross section for the near-recoilless ($K^-$,~$\pi^-$) reaction. \subsection{Extended optimal Fermi averaging} Following the semiclassical approximation in Ref.~\cite{Kawai62}, we attempt to extend the OFA procedure~\cite{Harada04} for the in-medium $K^-n \to \pi^-\Lambda$ amplitude of $\overline{f}_{K^-n\to\pi^-\Lambda}$, taking into account the effect of the local momentum transfer. 
The extended optimal Fermi-averaged $K^-n\to \pi^-\Lambda$ $t$ matrix in the nucleus can be defined as \begin{eqnarray} \overline{{t}^{\rm opt}}(p_K; \omega,{\bm q}) &=& \frac{\int_0^\infty dr r^2 \rho_n(r)A_{K\pi}(r)\,{t}^{\rm opt}(p_K; \omega,{\bm q}(r))} {\int_0^\infty dr r^2 \rho_n(r)A_{K\pi}(r)}, \nonumber\\ \label{eqn:e9} \end{eqnarray} where ${t}^{\rm opt}(p_K; \omega,{\bm q})$ is the optimal Fermi-averaged $K^-n\to \pi^-\Lambda$ $t$ matrix at a point of ($\omega$,~${\bm q}$)~\cite{Harada04}, which is given by \begin{eqnarray} &&{t}^{\rm \,opt}(p_K; \omega,{\bm q}) \nonumber\\ &&=\frac{ \int_0^{\pi} \sin{\theta_N}d\theta_N \int_{0}^{\infty} dp_{N} p_N^2 n(p_N) \,{t}(E_{2};{\bm p}_K,{\bm p}_N) }{ \int_0^{\pi}\sin{\theta_N}d{\theta_N} \int_{0}^{\infty} dp_{N} p_N^2 n(p_N)} \Biggl|_{{\bm p}_N={\bm p}^*_N}, \label{eqn:e10} \end{eqnarray} where ${t}(E_{2};{\bm p}_K,{\bm p}_N)$ is the two-body on-shell $t$ matrix for the $K^-n \to \pi^-\Lambda$ reaction in free space, $E_{2}=E_{K}+E_{N}$ is a total energy of the $K^- N$ system, and $\cos{\theta_N}= \hat{\bm p}_K\cdot\hat{\bm p}_N$; $E_N$ and ${\bm p}_N$ are an energy and a momentum of the nucleon in the nucleus, respectively. The function $n(p)$ is a momentum distribution of a struck nucleon in the nucleus, normalized by $\int n(p)d{\bm p}/(2\pi)^3=1$; we estimate $\langle p^2 \rangle^{1/2} \simeq$ 147 MeV/$c$, assuming a harmonic oscillator model with a size parameter $b_N=$ 1.64 fm for $^{12}$C. The subscript ${\bm p}={\bm p}^*$ in Eq.~(\ref{eqn:e10}) means the integral with a constraint imposed on the valuables of ($p_N$, $\theta_N$) that fulfill the condition for ($p_N$, $\theta_N$) = ($p_N^*$, $\theta_N^*$) in an on-energy-shell momentum ${\bm p}_N^*$. The momentum ${\bm p}_N^*$ is a solution that satisfies the on-energy-shell equation for a struck nucleon at the point ($\omega$, ${\bm q}$) in the nuclear systems, \begin{eqnarray} \sqrt{({\bm p}_N^*+{\bm q})^2+m_\Lambda^2}-\sqrt{({\bm p}_N^*)^2+m_N^2}=\omega, \label{eqn:e11} \end{eqnarray} where $m_\Lambda$ and $m_N$ are masses of the $\Lambda$ and the nucleon, respectively. Note that this procedure keeps the on-energy-shell $K^-n \to \pi^-\Lambda$ processes in the nucleus \cite{Gurvitz86}, so that it guarantees to take ``optimal'' values for $t^{\rm opt}$; binding effects for the nucleon and the $\Lambda$ in the nucleus are considered automatically when we input experimental values for the binding energies of the nuclear and hypernuclear states. According to the optimal momentum approximation \cite{Gurvitz86}, the use of the on-shell $K^-n \to \pi^-\Lambda$ $t$ matrix may be valid in the impulse approximation because the leading-order correction caused by the Fermi motion is minimized. Therefore, the OFA procedure is a straightforward way of dealing with the Fermi averaging for the elementary reaction amplitude in the optimal momentum approximation. Moreover, this extension in this work provides the effect of the local momentum transfers in the semiclassical approximation that meson-baryon collisions are spatially localized at a point in the nucleus without interfering with the collisions at different points \cite{Kawai62}. 
Thus, by using the extended optimal Fermi-averaged $t$ matrix in Eq.~(\ref{eqn:e9}), the in-medium $K^-n\to\pi^-\Lambda$ amplitude for the nucleus in Eq.~(\ref{eqn:e2}) is given as \begin{eqnarray} \overline{f}_{K^-n\to\pi^-\Lambda} &=& -\frac{1}{2\pi}\left(\frac{p_\pi E_\pi E_K}{\alpha p_K}\right)^{1/2} \overline{t^{\rm \,opt}}(p_K; \omega,{\bm q}), \label{eqn:e12} \end{eqnarray} as a function of the incident $K^-$ momentum $p_K$ and the detected $\pi^-$ angle $\theta_{\rm lab}$ and momentum $p_\pi$ in the laboratory frame. The on-energy-shell equation of Eq.~(\ref{eqn:e11}) has a solution ${\bm p}^*_N$ only under the condition $q^2/2\Delta m > \Delta \omega$ in Eq.~(\ref{eqn:e1}). Considering the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction, we have $\Delta m=1115 - 940 \simeq$ 175 MeV, $\Delta \omega = \varepsilon_\Lambda(0p_{3/2})-\varepsilon_N(0p_{3/2}) \simeq -1 - (-19) =$ 18 MeV for the $0^+$ and $2^+_{1,2}$ excited states (exc.), and $\Delta \omega = \varepsilon_\Lambda(0s_{1/2})-\varepsilon_N(0p_{3/2}) \simeq -11 -(-19) =$ 8 MeV for the $1^-$ ground state (g.s.). When $q <$ 80 MeV/$c$, it is impossible to populate a $\Lambda$ in the $(0p_{3/2})_\Lambda$ states in the framework of the OFA due to $q^2/2\Delta m < \Delta \omega$; when $q <$ 53 MeV/$c$, it is also impossible to populate a $\Lambda$ in the $(0s_{1/2})_\Lambda$ state for the same reason. These threshold values follow from $q_{\rm th}=\sqrt{2\Delta m\, \Delta\omega}$, which gives $q_{\rm th} \simeq$ 79 MeV/$c$ for $\Delta\omega =$ 18 MeV and $q_{\rm th} \simeq$ 53 MeV/$c$ for $\Delta\omega =$ 8 MeV. Therefore, we expect that the extended OFA procedure will overcome the difficulty of $q^2/2\Delta m < \Delta \omega$ even if the near-recoilless reaction has $q \lesssim$ 80 MeV/$c$. Note that the standard Fermi averaging (SFA)~\cite{Auerbach83,Rosenthal80} supplies the in-medium amplitude $\overline{f}_{K^-n\to\pi^-\Lambda}$ only by assuming off-energy-shell components under the nuclear ($K^-$,~$\pi^-$) reaction condition. \section{Results and discussion} Let us consider the angular distributions for the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction at $p_K=$ 800 MeV/$c$ in the DWIA. Here we obtain the single-particle states for a neutron, using the Woods-Saxon potential~\cite{Bohr69} with a strength parameter of $V_0^N$= $-64.8$ MeV, which is adjusted to reproduce the measured charge radius of 2.46 fm \cite{Jacob66}. For a $\Lambda$, we calculate the single-particle states, using the Woods-Saxon potential with $V_0^\Lambda$= $-30.3$ MeV, $a=$ 0.60 fm, $R=$ 2.58 fm, and a spin-orbit strength of $V_{ls}^\Lambda=$ 2 MeV for $A=$ 12 \cite{Millener88,Gal16}, leading to the calculated energies of $\varepsilon_\Lambda(0s_{1/2})=$ $-11.36$ MeV, $\varepsilon_\Lambda(0p_{3/2})=$ $-0.60$ MeV, and $\varepsilon_\Lambda(0p_{1/2})=$ $-0.32$ MeV. We perform the extended OFA for the $K^-n\to \pi^-\Lambda$ reaction, using the elementary amplitudes analyzed by Gopal {\sl et al}.~\cite{Gopal77}, and we estimate the angular distributions for the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction according to Eq.~(\ref{eqn:e2}). \subsection{\boldmath In-medium $K^-n \to \pi^-\Lambda$ differential cross sections } \begin{figure}[tb] \begin{center} \includegraphics[width=0.90\linewidth]{fig3a.eps} \includegraphics[width=0.90\linewidth]{fig3b.eps} \end{center} \caption{\label{fig:3} (a) In-medium $K^-n\to \pi^-\Lambda$ differential cross sections of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ obtained by the extended OFA (EOFA) on the $^{12}$C target, together with the elementary cross sections in free space (FREE) including the kinematical factor $\alpha$.
The amplitudes analyzed by Gopal {\sl et al}.~\cite{Gopal77} are used. Solid, dot-dashed, and dashed curves denote the values of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ for $\theta_{\rm lab}=$ 0$^\circ$, 6$^\circ$, and 12$^\circ$, respectively. (b) The asymptotic momentum transfer $q_{K\pi}$ in the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction, as a function of $p_{K}$ in the laboratory frame. } \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=0.90\linewidth]{fig4.eps} \end{center} \caption{\label{fig:4} Comparison of the calculated results of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ obtained by the extended OFA (dot-dashed curve) with those obtained by the SFA and by the FREE on a $^{12}$C target at $\theta_{\rm lab}=$ 0$^\circ$. Solid and dashed curves denote the values of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ without and with a 140-MeV/$c$ upward momentum shift $\Delta p_K$ due to the binding effects ($B_{\rm eff}$), respectively. } \end{figure} Figure~\ref{fig:3} shows the calculated results of the in-medium $K^-n\to \pi^-\Lambda$ differential cross sections of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ on the $^{12}$C target, together with the asymptotic momentum transfer $q_{K\pi}$ in the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction, as a function of $p_{K}$ in the laboratory frame. We find that the values of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ obtained by the extended OFA are reduced in the region of $p_K=$ 550--900 MeV/$c$ that corresponds to $q_{K\pi} \lesssim$ 80 MeV/$c$; the peak position is located at $p_K\simeq$ 900 MeV/$c$, which appears shifted upward relative to that of the elementary cross sections in free space (FREE) \cite{Gopal77} including the kinematical factor $\alpha$. These values of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ may be simulated by the extended OFA using a constant value of $\langle q^2 \rangle^{1/2}\simeq$ 200 MeV/$c$ as an effective momentum transfer near the nuclear surface. We examine the behavior of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ by the extended OFA, comparing it with that obtained by the SFA \cite{Rosenthal80} on the $^{12}$C target at $\theta_{\rm lab}=$ 0$^\circ$, as shown in Fig.~\ref{fig:4}. We find that the absolute values of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ by the SFA (FREE) are 1.8 (2.4) times larger than those by the extended OFA at 800 MeV/$c$ at forward angles; the values by the SFA agree with the results of $\alpha |\langle f_L(0) \rangle|^2$ shown in Fig.~4 of Ref.~\cite{Auerbach83}. Here we also estimate the values by the SFA including the binding effects for a struck neutron in the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction (SFA+$B_{\rm eff}$), because such effects are not taken into account in the SFA. These effects are roughly evaluated by a momentum shift $\Delta p_K$ that is needed to populate a $\Lambda$ hyperon from the $0p_{3/2}$ neutron bound in $^{12}$C, supplying a separation energy of $|\varepsilon_N(0p_{3/2})| \simeq$ $(\Delta p_K)^2/2m_K$, where $m_K$ is the mass of the $K^-$. Thus, we have \begin{eqnarray} \Delta p_K &=& \sqrt{2 m_K |\varepsilon_N(0p_{3/2})|} \nonumber\\ &=& \sqrt{2 \times 494 \times 19}\simeq 140 \,\mbox{MeV/$c$}. \label{eqn:e13} \end{eqnarray} In Fig.~\ref{fig:4}, we also draw the values of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ by the SFA+$B_{\rm eff}$, which are shifted upward by the momentum shift $\Delta p_K=$ 140 MeV/$c$ when the binding effects are taken into account.
We find that these values by the SFA+$B_{\rm eff}$ are nearly equal to those by the extended OFA at $p_K \gtrsim$ 1000 MeV/$c$ because the extended OFA provides the binding effects automatically. On the other hand, the difference between the former and the latter gradually grows at $p_K \lesssim$ 1000 MeV/$c$, reflecting the region of $q^2/2\Delta m < \Delta \omega$. For the FREE, we realize that the position of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ should be shifted upward by the momentum shift $\Delta p_K$ when the binding effects are taken into account (FREE+$B_{\rm eff}$), as seen in Fig.~\ref{fig:4}. Consequently, these results demonstrate the validity of the extended OFA and the importance of the binding effects, which are necessary for a good description of the $K^-n \to \pi^-\Lambda$ differential cross sections at forward angles. Furthermore, we note that when the elementary $K^-n \to \pi^-\Lambda$ amplitudes analyzed by Zhang {\sl et al}.~\cite{Zhang13} are used, the calculated values of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ in the extended OFA are very similar to those obtained with the amplitudes analyzed by Gopal {\sl et al}.~\cite{Gopal77}. This situation is the same as that in the SFA. Therefore, we believe that the dependence of $\alpha |\overline{f}_{K^-n\to\pi^-\Lambda}|^2$ on the elementary $K^-n\to \pi^-\Lambda$ amplitudes is relatively small in the extended OFA and the SFA. \subsection{\boldmath Angular distributions for the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction at 800 MeV/$c$ } \begin{figure}[tb] \begin{center} \includegraphics[width=0.8\linewidth]{fig5.eps} \end{center} \caption{\label{fig:5} Calculated angular distributions of the laboratory differential cross sections $d\sigma/d\Omega_{\rm lab}$ for (a) the $0^+$(exc.) and $2^+_{1,2}$(exc.) states, and for (b) the $1^-$(g.s.) state in $^{12}_\Lambda$C via the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction at $p_K=$ 800 MeV/$c$, together with the experimental data, as a function of the angle $\theta_{\rm lab}$ in the laboratory frame. The calculated results are obtained in the DWIA with the extended OFA. The data are taken from Ref.~\cite{Chrien79}. } \end{figure} Now we estimate the angular distributions of the laboratory differential cross sections $d\sigma/d\Omega_{\rm lab}$ for $^{12}_\Lambda$C in the DWIA with the extended OFA. Figure~\ref{fig:5} shows the calculated values of $d\sigma/d\Omega_{\rm lab}$ for $1^-$(g.s.) and for $0^+$, $2^+_1$, and $2^+_2$(exc.) in $^{12}_\Lambda$C, together with the experimental data \cite{Chrien79}. In Fig.~\ref{fig:5}(a), we display the calculated angular distributions for $0^+$(exc.) and $2^+_{1,2}$(exc.), which have the $[(0p_{3/2,1/2})_\Lambda(0p_{3/2})^{-1}_n]_{0^+}$ and $[(0p_{3/2,1/2})_\Lambda(0p_{3/2})^{-1}_n]_{2^+}$ configurations, respectively. These $\Lambda$ excited states are located near the $\Lambda$-$^{11}$C threshold at $B^{\rm cal}_\Lambda(0^+,2^+_{1,2})=$ 0.32--0.60 MeV. We find that the shape and magnitude of the calculated summed values of $0^+$, $2^+_1$, and $2^+_2$(exc.) in the extended OFA are in good agreement with those of the data over the whole angular range of $\theta_{\rm lab}=$ 0$^\circ$--20$^\circ$. Note that no renormalization factor for these calculated cross sections is necessary to reproduce the magnitude of the data, in contrast with several results of earlier DWIA calculations \cite{Dover79,Auerbach83,Bando90,Itonaga94}. In Fig.~\ref{fig:5}(b), we display the calculated angular distribution for $1^-$(g.s.)
having the $[(0s_{1/2})_\Lambda(0p_{3/2})^{-1}_n]_{1^-}$ configuration. We find that the shape of the calculated value of $1^-$(g.s.) agrees with that observed in the data, whereas its magnitude is rather underestimated at 4$^\circ$ $< \theta_{\rm lab}<$ 16$^\circ$, being about 20\% smaller than the data. This discrepancy suggests that more sophisticated treatments of the nuclear wave functions are needed for a more detailed comparison, e.g., configuration mixing \cite{Dover79,Itonaga94}, 2p-2h admixtures, and other many-body effects beyond single-particle descriptions. \begin{figure}[tb] \begin{center} \includegraphics[width=0.80\linewidth]{fig6.eps} \end{center} \caption{\label{fig:6} Comparison of the angular distributions estimated in several DWIA calculations. The calculated results of $d\sigma/d\Omega_{\rm lab}$ are shown for (a) $0^+ +2^+_1 +2^+_2$(exc.) and (b) $1^-$(g.s.) in $^{12}_\Lambda$C via the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction at $p_K=$ 800 MeV/$c$. Solid curves denote our results of the extended OFA (EOFA). Dotted, dot-dashed, and dashed curves denote the values of the OFA omitting the effect of the local momentum transfer, the SFA, and the FREE, respectively. The data are taken from Ref.~\cite{Chrien79}. } \end{figure} Figure~\ref{fig:6} shows the comparison of our results for the extended OFA with those for the ``standard'' DWIA calculations. The quantities $\overline{f}_{K^-n\to\pi^-\Lambda}$ are estimated in the DWIA with the elementary amplitude in free space (FREE)~\cite{Dover79} and with the SFA amplitude~\cite{Rosenthal80,Auerbach83}. We confirm that the magnitude of $d\sigma/d\Omega_{\rm lab}$ with the FREE amplitude is 2.1 times as large as that of the data, and the magnitude with the SFA amplitude is still larger than that of the data by a factor of 1.6, as discussed in Ref.~\cite{Dover79}. Therefore, it seems that the estimations of the standard DWIA are insufficient to explain the data, although their shapes of $d\sigma/d\Omega_{\rm lab}$ moderately agree with the data \cite{Dover79,Auerbach83,Bando90,Itonaga94}. In the case of the OFA omitting the effect of the local momentum transfers, we confirm that the shape and magnitude of $d\sigma/d\Omega_{\rm lab}$ hardly agree with those of the data; no $\Lambda$ in $(0p_{3/2})_\Lambda$ is populated at $\theta_{\rm lab} \lesssim$ 4$^\circ$ due to $q^2/2\Delta m < \Delta \omega$ for $q = q_{K\pi} \simeq$ 55--80 MeV/$c$, whereas a $\Lambda$ in $(0s_{1/2})_\Lambda$ is populated due to $q^2/2\Delta m \gtrapprox \Delta \omega$. As seen in Fig.~\ref{fig:6}(b), its shape varies strongly owing to the on-energy-shell condition ($p^*_N$, $\theta^*_N$) determined from Eq.~(\ref{eqn:e11}) in the OFA, which involves a Fermi averaging over a narrow width of $2p_N^*q/m_\Lambda$. This result indicates that the OFA with only $q_{K\pi}$ does not work in the near-recoilless reaction having $q \lesssim$ 80 MeV/$c$ because of the unavoidable difficulty of $q^2/2\Delta m < \Delta \omega$ or $q^2/2\Delta m \approx \Delta \omega$. This difficulty is overcome by the extended OFA providing the effect of the local momentum transfer; the calculated shape and magnitude of $d\sigma/d\Omega_{\rm lab}$ for $0^++2^+_1+2^+_2$(exc.) and $1^-$(g.s.) can quantitatively explain the data without any renormalization factor.
Consequently, we show that the effect of the local momenta generated by the semiclassical distorted waves for the mesons overcomes the severe difficulties of the previous OFA procedure in near-recoilless reactions such as ($K^-$, $\pi^-$). This result may imply the validity of the semiclassical picture of localized on-energy-shell collisions by distorted waves for the mesons. On the other hand, it should be noticed that the endothermic ($\pi^+$, $K^+$) reaction on nuclei satisfies the condition $q(r) \lesssim q_{\pi K}$, where $q_{\pi K}$ is the asymptotic momentum transfer, $q_{\pi K} \simeq$ 300--500 MeV/$c$. In this case the OFA encounters no difficulty even without handling the local momentum transfer, because its effect is rather small in the nuclear ($\pi^+$,~$K^+$) reaction. Therefore, we recognize that the OFA procedure based on the asymptotic $q_{\pi K}$ works well in the ($\pi^+$, $K^+$) reaction \cite{Harada04}. \section{Summary and conclusion} \label{summary} We proposed to extend the OFA procedure theoretically in order to calculate the in-medium $K^-n\to \pi^- \Lambda$ amplitude $\overline{f}_{K^-n\to\pi^-\Lambda}$ for the exothermic ($K^-$, $\pi^-$) reaction on nuclei in the framework of the DWIA, taking into account the local momentum transfer generated by semiclassical distorted waves for $K^-$ and $\pi^-$. Applying the extended OFA procedure, we estimated the angular distributions for the $^{12}$C($K^-$,~$\pi^-$)$^{12}_\Lambda$C reaction at $p_K=$ 800 MeV/$c$ under the near-recoilless condition of $q \lesssim$ 80 MeV/$c$, and we showed that the calculated angular distributions are in good agreement with those of the data. In conclusion, the extended OFA procedure provides the effect of the local momentum transfer generated by the meson distorted waves. This extension is a successful prescription that makes it possible to describe the reaction cross sections in near-recoilless reactions such as ($K^-$,~$\pi^-$) within our framework. This work may serve as a basis for studies clarifying the mechanism of hadron production reactions on nuclei and for extracting the properties of a hadron-nucleus potential from experimental data \cite{Cieply11}. \begin{acknowledgments} The authors thank Professor~M.~Kawai for many valuable discussions and comments. This work was supported by Grants-in-Aid for Scientific Research (KAKENHI) from the Japan Society for the Promotion of Science: Scientific Research (C) (Grant No.~JP20K03954). \end{acknowledgments}
\section{Continuum limit} \label{app:cont_model} The magnetic Hamiltonian $H_{\rm ph}$ for the photonic lattice is quadratic in the field operators and can be written in a diagonal form as \begin{equation} H_{\rm ph}= \sum_\lambda \omega_\lambda \Psi_\lambda^\dag \Psi_\lambda, \qquad {\rm where}\qquad [\Psi_\lambda,\Psi_{\lambda'}^\dag]=\delta_{\lambda,\lambda'}. \end{equation} By making the ansatz $|\varphi_\lambda\rangle= \Psi^\dag_\lambda|{\rm vac}\rangle = \sum_i f_\lambda(\vec r_i) \Psi^\dag(\vec r_i)|{\rm vac}\rangle$ for a single-photon eigenstate of $H_{\rm ph}$, the eigenfrequencies $\omega_\lambda$ and the corresponding mode functions $f_\lambda(\vec r)$ can be derived from the eigenvalue equation \begin{equation}\label{eq:EVequation} (\omega_\lambda-\omega_p) f_\lambda(\vec r_i ) = - J \left[e^{ -i\phi_x} f_\lambda ( \vec r_i+ \vec e_x) +e^{ i\phi_{x}} f_\lambda ( \vec r_i- \vec e_x) +e^{ -i\phi_{y}} f_\lambda( \vec r_i+ \vec e_y)+ e^{ i\phi_{y}}f_\lambda ( \vec r_i- \vec e_y) \right]. \end{equation} Here $\vec e_{x,y}$ are the two lattice unit vectors and we introduced the shorthand notation \begin{equation} \phi_{x,y} = \frac{e}{\hbar} \int_{\vec{r}_i}^{\vec{r}_i + \vec e_{x,y}} \vec{A}(\vec{r})\cdot d\vec{r} \simeq \frac{e}{\hbar}\vec{A}(\vec{r}_i)\cdot \vec e_{x,y}. \end{equation} In the last step we have assumed that the vector potential does not vary considerably over the extent of one lattice site. If we restrict ourselves to moderate fields and low-frequency excitations, we can also replace $f_\lambda(\vec r)$ by a continuous function and perform a Taylor expansion, \begin{equation} f_\lambda ( \vec r_i+ \vec e_x)\simeq f_\lambda( \vec r_i)+ l_0 \frac{\partial }{\partial x} f_\lambda ( \vec r_i)+\frac{l_0^2}{2} \frac{\partial^2 }{\partial x^2} f_\lambda( \vec r_i). \end{equation} Then, up to second order in $l_0$, the terms on the right hand side of Eq.~\eqref{eq:EVequation} can be approximated by \begin{equation} - J \left[e^{ - i\phi_x} f_\lambda( \vec r_i+ \vec e_x) +e^{ i\phi_{x}} f_\lambda ( \vec r_i- \vec e_x)\right] \simeq -2J f_\lambda (\vec r_i) -J l_0^2\left[ \frac{\partial }{\partial x} - i \frac{e}{\hbar} A_{x}(\vec{r}_i) \right]^2 f_\lambda (\vec r_i) +O(l_0^3). \end{equation} Therefore, we end up with a partial differential equation \begin{equation}\label{eq:EVcontinuum} \hbar (\omega_\lambda-\omega_b) f(\vec r ) = \frac{1}{2m} \left[-i\hbar \vec \nabla - e \vec A(\vec r)\right]^2 f(\vec r ), \end{equation} where $\omega_b=\omega_p-4J$ and $m=\hbar/(2J l_0^2)$ is the effective mass in the lattice. \subsection{Landau orbitals} Equation~\eqref{eq:EVcontinuum} is the Schr\"odinger equation for a particle of charge $e$ in a magnetic field, for which the eigenfunctions are the well-known Landau orbitals, $f_\lambda(\vec r)\equiv \tilde \Phi_{\ell k}(\vec r)$. In this work we use the symmetric gauge, $\vec{A} = B(- y/2, x/2,0)$, where \cite{Page1930S} \begin{equation}\label{eq:LOcontinuum} \tilde \Phi_{\ell k}(\vec r)= \frac{1}{\sqrt{2\pi l_{B}^2}}\sqrt{\frac{\ell!}{k!} } \xi^{k-\ell} e^{-\frac{|\xi|^2}{2}} L_\ell^{k-\ell}\left(|\xi|^2 \right). \end{equation} Here $L_\ell^{k-\ell}(x)$ are generalised Laguerre polynomials, $l_{B}= \sqrt{\hbar/(eB)}$ and $\xi=(x+iy)/\sqrt{2l^2_B}$. The wavefunctions depend on two indices, $\ell$ and $k$. The index $\ell=0,1,2,\dots$ labels the Landau levels with frequencies $\omega_\ell =\omega_b +\omega_c (\ell+1/2)$, where \begin{equation} \omega_c = \frac{eB}{m} = 4\pi \alpha J.
\end{equation} The last equality follows from $m=\hbar/(2Jl_0^2)$ together with the flux relation $2\pi\alpha=(l_0/l_B)^2$ used below. Each of these Landau levels contains a large number of degenerate sublevels, which are labeled by the second quantum number $k=0,1,2, ..., k_{\rm max}$ \cite{Girvin1999S}. For a finite system the level of degeneracy can be estimated by $k_{\rm max}\approx \alpha M \gg1$ (where $M$ is the total number of lattice sites). For all our analytic calculations we take the limit $k_{\rm max}\rightarrow \infty$, which is a good approximation for moderate field strengths and sufficiently far away from the boundaries. Note that the Landau orbitals given in Eq.~\eqref{eq:LOcontinuum} denote the wavefunctions in the continuum. They are normalized to \begin{equation} \int d^2 r \, \tilde \Phi^*_{\ell k}(\vec r) \tilde \Phi_{\ell' k'}(\vec r) =\delta_{\ell\ell'} \delta_{kk'}. \end{equation} The corresponding normalized wavefunctions on the lattice, as given in Eq.~(3) in the main text, can then be obtained by identifying $ \Phi_{\ell k}(\vec r_i)= \tilde \Phi_{\ell k}(\vec r=\vec r_i) l_0$. These wavefunctions have the important property that \begin{equation} \Phi_{\ell \ell}(\vec r=0) = \sqrt{\alpha}. \end{equation} This implies that the coupling between a single emitter and a single photon is independent of $\ell$. \subsection{Lattice corrections to the Landau-level energies} The continuum approximation is strictly valid only in the limit $\omega_c/J\sim \alpha \rightarrow 0$. While for the parameter regimes considered in this work this approximation still gives very accurate predictions for the wavefunctions, there are notable corrections to the frequencies $\omega_\ell$. To derive the lowest-order corrections to the equally spaced Landau levels, it is more convenient to use the so-called Harper equation \cite{Hofstadter1976S}, which is just the discrete single-particle Schr\"odinger equation from above, but expressed in the Landau gauge, where $\vec A=B(0,x,0)$. This equation reads \begin{equation}\label{eq:Harper} -J[f_\lambda ( \vec r_j+ \vec e_x) + f_\lambda ( \vec r_j- \vec e_x)] - 2J\cos \left( 2\pi \alpha j - k_y \right)f_\lambda ( \vec r_j) = (\omega_{\ell}-\omega_p) f_\lambda ( \vec r_j), \end{equation} where $k_y$ labels the momentum in the $y$-direction, which is a good quantum number in the Landau gauge, and $f_\lambda ( \vec r_j)= \chi_\lambda ( x_j)e^{i k_y y_j}$. Different values of $k_y$ only lead to a translation of the wavefunction, and for sufficiently large lattices we can take $k_y=0$ without loss of generality. Then, following Ref.~\cite{Harper2014S}, we replace $\chi_\lambda ( x_j)$ by a continuous, slowly varying function and expand both the cosine and the discrete derivative in Eq.~\eqref{eq:Harper} up to fourth order in $l_0$, i.e., \begin{equation} -J[\chi_\lambda ( x+ l_0) + \chi_\lambda ( x- l_0) ] \simeq -2J \chi_\lambda( x) - J l_0^2 \frac{\partial^2}{\partial x^2} \chi_\lambda (x) - \frac{J l_0^4}{12} \frac{\partial^4}{\partial x^4}\chi_\lambda (x) \end{equation} and, using $x = jl_0$ and $2\pi\alpha=(l_0/l_B)^2$, \begin{equation} -2J\cos \left( 2\pi \alpha j \right)\chi_\lambda (x)\simeq \left[ -2J + J \frac{l_0^2}{l_B^4} x^2 - J \frac{l_0^4}{12 l_B^8} x^4 \right]\chi_\lambda (x).
\end{equation} With the definitions introduced above we then obtain the Schr\"odinger equation \begin{equation} \hbar (\omega_{\ell}-\omega_p-4J) \chi_\lambda (x)= \left[- \frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2}+ \frac{1}{2} m \omega_c^2 x^2 \right] \chi_\lambda (x) - \frac{1}{48\hbar J} \left[ \frac{\hbar^4}{m^2} \frac{\partial^4}{\partial x^4}+ m^2 \omega_c^4 x^4 \right] \chi_\lambda (x). \end{equation} The first term on the right hand side is just the Hamiltonian of a harmonic oscillator, from which we recover the equidistant Landau levels, $\omega_\ell=\omega_b +\omega_c(\ell+1/2)$. The second term contains the lowest-order corrections to the purely harmonic oscillator, which are fourth order in the momentum and the position operators. By including these corrections in perturbation theory we obtain the more accurate Landau spectrum~\cite{Harper2014S} \begin{equation} \omega_{\ell} \simeq \omega_b + \omega_c \left( \ell + \frac{1}{2}\right)- \frac{\omega_c^2 }{32J}(2\ell^2+2\ell+1). \end{equation} For example, based on this formula, the gap between the two lowest Landau levels is given by \begin{equation} \omega_1 -\omega_0 \approx 4\pi \alpha J \left( 1 - \frac{\pi}{2} \alpha \right). \end{equation} If we assume a value of $\alpha=0.08$, as in many examples in the main text, we find \begin{equation} \frac{\omega_1 -\omega_0}{J} \approx 0.874. \end{equation} This value already deviates by about $13 \%$ from the zeroth-order approximation and agrees very well with the exact numerical result. \subsection{Photon current} In Fig. 2(c) in the main text we plot the profile of the mean photon current $\vec j_p(\vec r_i)$. On the discrete lattice we define the $x$ ($y$) component of $\vec j_p(\vec r_i)$ as the average between the number of photons per unit of time passing from site $\vec r_i$ to site $\vec r_i+ \vec e_x$ ($\vec r_i+ \vec e_y$) and the number of photons per unit of time passing from site $\vec r_i- \vec e_x$ ($\vec r_i- \vec e_y$) to site $\vec r_i$. Explicitly, the two components of the photon current are defined as \begin{eqnarray} \vec j^{x}_p(\vec r_i)&=&i \frac{J}{2}\left[ \left(e^{i\phi_x} \Psi^{\dag}(\vec{r}_i+\vec e_{x}) - e^{-i\phi_x} \Psi^{\dag}(\vec{r}_i-\vec e_{x}) \right)\Psi(\vec{r}_i) - {\rm H. c. }\right],\\ \vec j^{y}_p(\vec r_i)&=&i \frac{J}{2}\left[ \left(e^{i\phi_y} \Psi^{\dag}(\vec{r}_i+\vec e_{y}) - e^{-i\phi_y} \Psi^{\dag}(\vec{r}_i-\vec e_{y}) \right)\Psi(\vec{r}_i) - {\rm H. c. }\right]. \end{eqnarray} The plots in Fig. 2(c) in the main text show a vector plot of the expectation value of this operator with respect to the exact single-excitation wavefunction $|\psi\rangle(t_\pi)$. To connect this expression to the usual current density operator in the continuum limit we identify $\vec j_c(\vec r_i)= \vec j_p(\vec r_i)/l_0$ and $\Psi_c(\vec r_i)= \Psi(\vec r_i) /l_0$, such that $[\Psi_c(\vec r),\Psi^\dag_c(\vec r')]\approx \delta (\vec r-\vec r')$ in the limit $l_0\rightarrow0$. Then, by expanding $\vec j_p(\vec r_i)$ to lowest order in $l_0$ we obtain \begin{equation} \vec j_c(\vec r)=\frac{1}{2m} \left[\Psi_c^{\dag}(\vec{r})\left(-i\hbar \vec \nabla \right)\Psi_c(\vec{r}) - {\rm H.c.} \right] - \frac{e}{m} \vec A(\vec r) \Psi_c^{\dag}(\vec{r})\Psi_c(\vec{r}) .
\end{equation} \section{Photon propagator and Landau Green's function} Since the photons are noninteracting, the dynamics of the photonic lattice can be fully captured by the single-photon Green's function, \begin{equation} G(t ,\vec r_i, \vec r_j)= \langle {\rm vac}| \Psi(\vec r_i,t )\Psi^\dag(\vec r_j,0)|{\rm vac}\rangle=\sum_\lambda f_\lambda(\vec r_i) f^*_\lambda(\vec r_j) e^{-i\omega_\lambda t}. \end{equation} In the long-wavelength limit and for moderate magnetic fields, the mode functions $f_\lambda(\vec r_i)$ can be approximated by Landau orbitals and \begin{equation} G(t, \vec{r}_i, \vec{r}_j) \simeq \sum_{\ell k} \Phi_{\ell k}(\vec{r}_i) \Phi^*_{\ell k}(\vec{r}_j) e^{-i\omega_\ell t}. \end{equation} Note that for a simple square lattice it is in principle still possible to obtain an exact expression for $G(t, \vec{r}_i, \vec{r}_j)$ in terms of a continued fraction \cite{Ueta1997S}. However, this expression must still be evaluated numerically and does not offer much physical insight in the considered regime of moderate field strengths, where the continuum approximation is more intuitive and provides sufficiently accurate results. To carry out the sum over the index $k$ in the continuum limit, it is convenient to re-express the Landau orbitals as \begin{equation} \Phi_{\ell k}(\vec r) = \sqrt{\alpha}\braket{k | \mathcal{D}(\xi ) | \ell }, \end{equation} where $\mathcal{D}(\xi )=e^{\xi a^\dag -\xi^* a}$ is the displacement operator for a bosonic mode with annihilation operator $a$ and $|\ell,k\rangle$ are the corresponding number states \cite{galuber1969S}. This identification allows us to make use of the general relation for displacement operators, $\mathcal{D}^\dag(\xi) \mathcal{D}(\beta) = \mathcal{D}(\beta-\xi) e^{-\frac{1}{2}(\xi \beta^* - \xi^* \beta)}$, to show that \begin{equation} \begin{split} \sum_{k} \Phi_{\ell k}(\vec r_i) \Phi^*_{\ell k}(\vec r_j) & = \alpha \sum_k \braket{\ell | \mathcal{D}^\dag( \xi_j) | k }\braket{ k | \mathcal{D} (\xi_i) | \ell } \\ & = \alpha \braket{\ell | \mathcal{D}^\dag( \xi_j) \mathcal{D} (\xi_i) | \ell }= \alpha e^{\frac{1}{2}(\xi_i \xi_j^*- \xi_i^*\xi_j)} \braket{\ell | \mathcal{D} (\xi_i-\xi_j) | \ell },\\ & = \sqrt{\alpha} e^{i \theta_{ij} } \Phi_{\ell \ell} (\vec r_i-\vec r_j), \end{split} \end{equation} where $\theta_{ij}=-i(\xi_i\xi_j^*-\xi_i^*\xi_j)/2=-(x_iy_j-x_jy_i )/(2l_B^2)$. Note that by going from the first to the second line we have used the completeness relation, $\mathbbm{1} \simeq \sum_k |k \rangle \langle k |$. This assumes that the degeneracy of each Landau level is sufficiently large, which corresponds to having a system sufficiently larger than the magnetic length $l_B$, such that finite-size effects are negligible. Under these approximations the total lattice Green's function reduces to the continuum Green's function of a single charged particle \cite{Ueta1992S}. It can be explicitly expressed as a sum over all Landau levels \begin{equation}\label{eq:SuppG} G(t, \vec{r}_i, \vec{r}_j)\simeq \sum_{\ell } G_\ell(\vec{r}_i, \vec{r}_j) e^{-i\omega_\ell t}, \end{equation} where \begin{equation} G_\ell(\vec{r}_i, \vec{r}_j) = \sqrt{\alpha} e^{i\theta_{ij}} \Phi_{\ell \ell} (\vec r_i-\vec r_j). \end{equation} Remarkably, after resumming over the degenerate sublevels of each Landau level, the only non-vanishing contribution to the Green's function comes from the orbitals $\Phi_{\ell \ell}$ with zero angular momentum, $L_z \sim k-\ell = 0$.
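For reference, the band-projected propagator $G_\ell(\vec{r}_i, \vec{r}_j)$ can be evaluated directly from the closed-form expressions above. The following is a minimal numerical sketch (illustrative only; the symmetric gauge is assumed and all function and variable names are our own):
\begin{verbatim}
import numpy as np
from scipy.special import eval_laguerre

def G_ell(ri, rj, ell, alpha, l0=1.0):
    # l_B^2 = l_0^2/(2 pi alpha), since 2 pi alpha = (l_0/l_B)^2
    lB2 = l0**2 / (2*np.pi*alpha)
    dx, dy = ri[0] - rj[0], ri[1] - rj[1]
    xi2 = (dx**2 + dy**2) / (2*lB2)                   # |xi|^2
    theta = -(ri[0]*rj[1] - rj[0]*ri[1]) / (2*lB2)    # phase theta_ij
    # sqrt(alpha) Phi_ll(r) = alpha exp(-|xi|^2/2) L_ell(|xi|^2)
    return alpha*np.exp(1j*theta)*np.exp(-xi2/2)*eval_laguerre(ell, xi2)
\end{verbatim}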
\subsection{Gauge transformations} The vector potential $\vec{A}$ is only defined up to the gradient of an arbitrary function. Once a representation of the vector potential is fixed, one can still change to an equivalent representation by adding the gradient of a suitable function, $\vec{A}(\vec{r}) \longmapsto \vec{A}(\vec{r}) - \vec \nabla \Lambda(\vec{r})$. In order to have a gauge-independent Schr\"odinger equation (and thus, consistently, gauge-independent observables) the phase of the wave function must change accordingly, $\psi \longmapsto e^{ie\Lambda/\hbar}\psi$. The same is true for the photonic Green's function, which transforms under gauge transformations as \begin{equation} G (\tau, \vec{r}_i, \vec{r}_j) \longmapsto e^{ie(\Lambda(\vec{r}_i) - \Lambda(\vec{r}_j))/\hbar}G (\tau, \vec{r}_i, \vec{r}_j). \end{equation} The immediate consequence of this is that the Green's function must split into two parts: a gauge-invariant amplitude, which depends only on the distance $|\vec{r}_i - \vec{r}_j|$, and a gauge-dependent phase: \begin{equation} G (\tau, \vec{r}_i, \vec{r}_j) = e^{ i\theta_{ij}}G^{\rm inv} (\tau, |\vec{r}_i - \vec{r}_j| ). \end{equation} In the intermediate flux regime, where the continuum approximation holds, $G^{\rm inv} (\tau, |\vec{r}_i - \vec{r}_j| ) \sim \sum_{\ell} \Phi_{\ell \ell} (|\vec r_i-\vec r_j|)e^{-i\omega_\ell \tau}$, while $\theta_{ij}$ still depends on the choice of the gauge. \subsection{Landau-level projector} Equation \eqref{eq:SuppG} shows that in the continuum limit the photonic Green's function can be written as the sum over the components $G_{\ell}(\vec{r}_i, \vec{r}_j)$ for each band. This decomposition is particularly relevant when the splitting $\omega_c$ is sufficiently large and emitters couple dominantly to a single band. The $G_{\ell}(\vec{r}_i, \vec{r}_j)$ are real-space representations of the band-projector operators $\hat{\mathcal{P}}_{\ell}$ \cite{Alicki1993S,Assaad1995S}, i.e., \begin{equation} \langle r_i |\hat{\mathcal{P}}_{\ell} | r_j \rangle=G_{\ell}(\vec{r}_i, \vec{r}_j) = \sum_k \Phi_{\ell k} (\vec{r}_i ) \Phi_{\ell k}^* (\vec{r}_j ). \end{equation} In this sense, one can define photonic operators $\tilde \Psi_{\ell}(\vec r_i) = \sum_j \, G_{\ell}(\vec{r}_i, \vec{r}_j) \Psi(\vec{r}_j)$, which are field operators projected onto a single Landau level. In general, these operators are not orthogonal and therefore the bosonic operators $B_{\ell n}$ introduced in Eq. (8) in the main text are linear combinations of those projected operators. By evaluating the commutators \begin{equation} \begin{split} [B_{\ell n},B_{\ell n'}^\dag] & = \sum_{m,m'} K^{-1}_{n m} (K^{-1}_{n' m'})^{*} \sum_{ij} G_{\ell } (\vec r_e^{\, m}, \vec{r}_i) G^*_{\ell } (\vec r_e^{\,m'}, \vec{r}_j) \delta_{ij} \\ & = \sum_{m,m'} K^{-1}_{n m} (K^{-1}_{n' m'})^{*} G_{\ell } (\vec r_e^{\,m}, \vec r_e^{\,m'}) \\ & = \left[K^{-1} G (K^{-1})^\dag\right]_{nn'} \overset{!}{=} \delta_{n n'} \end{split} \end{equation} we see that the operators $B_{\ell n}$ represent an independent set of modes when $K K^{\dag} = G$, where $G$ is an $N\times N$ matrix with elements $G_{\ell } (\vec r_e^{\,m}, \vec r_e^{\,m'}) $. For explicit calculations we diagonalize $G$ and take the square root of each of its eigenvalues $\chi_i$. After transforming back to the original basis we obtain \begin{equation} K = U^{\dag} {\rm diag}(\sqrt{\chi_1}, \sqrt{\chi_2} \cdots \sqrt{\chi_{N}})U, \end{equation} where $U$ is the diagonalizing matrix.
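Numerically, this construction is simply a Hermitian matrix square root. A minimal sketch (illustrative only) is
\begin{verbatim}
import numpy as np

def coupling_matrix_K(G):
    # np.linalg.eigh returns G = V diag(chi) V^dagger with the
    # eigenvectors as columns of V; with U = V^dagger this reproduces
    # the expression K = U^dagger diag(sqrt(chi)) U given above.
    chi, V = np.linalg.eigh(G)
    chi = np.clip(chi, 0.0, None)   # guard against round-off negatives
    return (V * np.sqrt(chi)) @ V.conj().T   # satisfies K K^dagger = G
\end{verbatim}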
Note that the matrix $K$ is not uniquely defined, and here we always use the positive square roots of the $\chi_i$. In the case of $N=2$ emitters we obtain the result \begin{equation} K = \frac{1}{\sqrt{ {\rm Tr}[G] + 2 \sqrt{ {\rm det}[G] }}} \left( G + \sqrt{{\rm det}[G]} \mathbbm{1} \right), \end{equation} or, explicitly, \begin{equation} K = \sqrt{\frac{\alpha}{2}} \begin{pmatrix} \sqrt{1 + \sqrt{1-e^{-|\xi_0|^2}L_{\ell}^2(|\xi_0|^2) } } & \frac{e^{-|\xi_0|^2/2}L_{\ell}(|\xi_0|^2)}{\sqrt{1 + \sqrt{1-e^{-|\xi_0|^2}L_{\ell}^2(|\xi_0|^2) } }} \\ \frac{e^{-|\xi_0|^2/2}L_{\ell}(|\xi_0|^2)}{\sqrt{1 + \sqrt{1-e^{-|\xi_0|^2}L_{\ell}^2(|\xi_0|^2) } }} & \sqrt{1 + \sqrt{1-e^{-|\xi_0|^2}L_{\ell}^2(|\xi_0|^2) } } \end{pmatrix}, \end{equation} where $\xi_0 = |\vec{r}_1 - \vec{r}_2|/\sqrt{2 l_{B}^2}$. \section{Resonant interactions in the single excitation sector} We consider the dynamics in the single excitation sector, meaning that we restrict the dynamics to states of the form \begin{equation} |\psi\rangle(t)= \left[ \sum_{n=1}^N c_n(t)\sigma_+^n + \sum_\lambda \varphi_\lambda(t) \Psi_\lambda^\dag\right]|g\rangle|{\rm vac}\rangle, \end{equation} where $\lambda$ labels the single-photon eigenstates. Plugging this ansatz into the time-dependent Schr\"odinger equation, $i\partial_t | \psi \rangle = H | \psi \rangle$, where $H$ is given in Eq.~(2) in the main text, we obtain the following equations of motion \begin{equation}\label{eq:single_ex_one_atom_dyn} \begin{split} i \dot{c}_n & = \left(\omega_e - i\gamma_e/2\right) c_n + g\sum_{\lambda} f_{\lambda}(\vec{r}_e^{\, n}) \varphi_{\lambda}, \\ i \dot{\varphi}_{\lambda} & = (\omega_\lambda - i \gamma_p/2) \varphi_{\lambda} + g \sum_m f^*_{\lambda}(\vec{r}_e^{\, m})c_m, \end{split} \end{equation} where we included a decay of the emitters with rate $\gamma_e$ and photon losses with rate $\gamma_p$. We can formally integrate the second equation (for the photon amplitudes) and obtain \begin{equation}\label{eq:photon_wave_fun_modespace} \varphi_{\lambda}(t) = - i g\sum_m f^*_{\lambda}(\vec{r}_e^{\, m})\int_0^t e^{-i(\omega_\lambda-i\gamma_p/2)(t-t')}c_m(t') dt', \end{equation} where we assumed $\varphi_{\lambda}(t=0) = 0$ (i.e., initially there are no photons in the system). By reinserting this result into the equations for the emitter amplitudes we end up with \begin{equation}\label{eq:many_atoms_sing_ex_integral_eq} \dot{c}_n(t) = - i(\omega_e-i\gamma_e/2) c_n - g^2 \sum_m \int_0^t G(t-t',\vec{r}_e^{\, n}, \vec{r}_e^{\, m})e^{-\gamma_p(t-t')/2} c_m(t') dt'. \end{equation} This result is still completely general and is used to produce the numerical results presented in Fig. 2 in the main text. \subsection{Spontaneous emission in a non-magnetic lattice} We first consider in detail the single-emitter case. Applying the transformation $c_e(t) \mapsto c_e(t) e^{- i(\omega_e-i\gamma_e/2)t}$, Eq. \eqref{eq:many_atoms_sing_ex_integral_eq} can be rewritten as \begin{equation} \dot{c}_e(t) = - g^2\int_0^t K(t-t') e^{\bar \gamma (t-t')/2} c_e(t') dt', \end{equation} where $\bar \gamma = \gamma_e - \gamma_p$ and the integral kernel is given by \begin{equation} K(t) = \int_{-\infty}^{+\infty} \rho(\vec{r}_e, \omega) e^{-i(\omega-\omega_e)t } d\omega, \end{equation} with $\rho(\vec{r}_e, \omega) = \sum_\lambda |f_{\lambda}(\vec{r}_e)|^2\delta(\omega-\omega_\lambda)$, as defined in the main text. In an infinitely large system, the density of states becomes a smooth function of $\omega$.
When the coupling is small and the emitter's resonance is sufficiently far away from possible singular points \cite{Gonzalez-Tudela2017S}, we can approximate it as a constant, $\rho(\vec{r}_e, \omega) \simeq \rho(\vec{r}_e, \omega_e)=\tau/(2\pi)$. In this way the integral kernel can be approximated by a delta function, $K(t-t') \simeq \tau \delta(t-t')$, which is evaluated at the upper bound of the integral. We then recover the usual exponential decay \begin{equation} \dot{c}_e(t) = - \frac{g^2 \tau}{2} c_e(t). \end{equation} In a 2D system with eigenmodes $f_{\lambda} \sim e^{i \vec{k}\cdot \vec{r}}$ and an approximately quadratic dispersion, $\omega_k \simeq \omega_b +J |\vec k|^2$, we obtain $\tau \simeq 1/(2J)$ and \begin{equation} \Gamma \simeq \frac{g^2}{2J}. \end{equation} For smaller lattices, delimited by sharp edges, the emitted photons will be reflected at the boundaries, and for longer times the decay of the emitter will deviate from a purely exponential shape. To avoid such boundary effects we have included in the numerical simulations in Fig. 2(a) in the main text larger photon loss rates at the edges to mimic an infinitely extended system. To implement the dissipative boundaries it is more convenient to rewrite Eq. \eqref{eq:single_ex_one_atom_dyn} using the photon's wave function $\varphi(t, \vec{r}) = \sum_{\lambda} f_{\lambda}(\vec{r}) \varphi_{\lambda}(t)$, which gives (in general for $N$ emitters) \begin{equation} \begin{split} i \dot{c}_n & = \left(\omega_e - i\gamma_e/2\right) c_n + g\varphi(t, \vec{r}_e^{\, n}), \\ i \dot{\varphi}(t, \vec{r}_i) & = \sum_j\left[- J_{ij} + (\omega_p - i \tilde{\gamma}_p(\vec{r}_i)/2 )\delta_{ij} \right] \varphi(t, \vec{r}_j) + g \sum_m \delta_{m i} c_m , \end{split} \end{equation} where we now introduced a space-dependent photonic dissipation $\tilde{\gamma}_p(\vec{r})$. In our simulations we used a Fermi-function-like profile \begin{equation} \tilde{\gamma}_p(\vec{r}) = \gamma_p + \frac{\gamma_{\rm edge}}{1+\exp[-(r-R_0)/2]}. \end{equation} Typically we tune the parameters such that $R_0 \simeq L/2$, where $L$ is the characteristic size of the system, and $\gamma_{\rm edge}\simeq \gamma_p \times 10^{3}$. Note that these additional loss channels do not affect the evolution of the coupled emitter-photon state in the case of a finite $\alpha$. \subsection{Flat-band approximation} When the light-matter coupling $g$ is larger than the width of the $\ell$-th band, but still much smaller than the gap to the other bands, we can make a resonance approximation. To do so we discard the contributions from all the other bands and treat the $\ell$-th band as degenerate. Under these assumptions, i.e., $|\omega_e- \omega_{\ell k}| \ll g $ and $g \ll |\omega_{\ell k} - \omega_{\ell\pm 1 k'}|$, and by changing into a damped rotating frame, $c_n(t) \mapsto c_n(t) e^{-i(\omega_e - i\gamma_p/2 ) t}$, we obtain the approximate result \begin{equation} \dot{c}_n(t) \simeq - \frac{\bar \gamma}{2} c_n - g^2\sum_m \int_0^t G_\ell (\vec{r}_e^{\, n}, \vec{r}_e^{\, m}) c_m(t') dt', \end{equation} where $\bar \gamma = \gamma_e - \gamma_p$ is the difference between the loss rates. Taking the time derivative of this equation we obtain a set of second-order differential equations for $N$ coupled harmonic oscillators, \begin{equation}\label{eq:atom_atom_res_eff_int_eq} \ddot{c}_n(t) = -\frac{\bar \gamma}{2} \dot{c}_n(t) - g^2 \sum_m G_\ell (\vec{r}_e^{\, n}, \vec{r}_e^{\, m}) c_m(t).
\end{equation} \subsection{LPP spectrum} By taking the Fourier transform of the $c_n(t)$ in Eq.~\eqref{eq:atom_atom_res_eff_int_eq} we obtain the eigenvalue equation \begin{equation}\label{eq:fourier_atom_atom_res_eff_int_eq} (\omega^2 + i\omega \bar \gamma /2 - \Omega^2) c_n(\omega) = g^2 \sum_{m\neq n} G_\ell (\vec{r}_e^{\, n}, \vec{r}_e^{\, m}) c_m(\omega), \end{equation} from which we can derive the complex eigenvalues of the resonant LPPs, which represent the resonance frequencies and the decay rates of the coupled eigenmodes. After transforming back into the original frame, these complex eigenvalues are \begin{equation}\label{eq:generic_singl_ex_eigenvalues} \omega_{\nu} = \omega_e - i\frac{\gamma_e + \gamma_p}{4} \pm \Omega\sqrt{1+ \Lambda_{\nu}-\bar \gamma^2/(16\Omega^2) }, \end{equation} where the $\Lambda_{\nu}$ are the eigenvalues of the matrix \begin{equation} \mathcal{M} = \frac{1}{\alpha} \begin{pmatrix} 0 & G_\ell(\vec{r}_e^{\, 1}, \vec{r}_e^{\, 2}) & G_\ell(\vec{r}_e^{\, 1}, \vec{r}_e^{\, 3}) & \cdots & G_\ell(\vec{r}_e^{\, 1}, \vec{r}_e^{\, N}) \\ G_\ell(\vec{r}_e^{\, 2}, \vec{r}_e^{\, 1}) & 0 & G_\ell(\vec{r}_e^{\, 2}, \vec{r}_e^{\, 3}) & \cdots & G_\ell(\vec{r}_e^{\, 2}, \vec{r}_e^{\, N}) \\ \vdots & & \ddots & &\cdots \\ \end{pmatrix}. \end{equation} The symmetry between the different single-excitation states in the upper and lower polaritons, which is visible in Eq.~\eqref{eq:generic_singl_ex_eigenvalues} as the $\pm$ in front of the $\Lambda_\nu$-dependent square root and in the single-excitation manifold of Fig.~3(a) as a symmetry of the branches with respect to $\omega_e$, is a consequence of the fact that the emitters are resonantly coupled to a single, degenerate Landau level. Quite interestingly, a similar symmetry with respect to $n\omega_e$ is also visible in the $n$-excitation manifold, e.g., in the $n=2$ manifold of Fig.~3(a). For the example of three equidistant emitters, \begin{equation} \frac{1}{\alpha} G_\ell(\vec{r}_e^{\, n}, \vec{r}_e^{\, m}) = e^{-\frac{d^2 }{4l_B^2}} L_\ell^0 \left( \frac{d^2}{2l_B^2}\right) e^{i\theta_{nm}} \end{equation} and $\Lambda_\nu =e^{-d^2/(4l_B^2)} L_\ell^0 \left( d^2/(2l_B^2)\right) \times \lambda_\nu$, where $\lambda_\nu$ are the eigenvalues of the reduced matrix \begin{equation} \tilde{\mathcal{M}} =\begin{pmatrix} 0 & e^{i \theta_{12}} & e^{-i \theta_{31}} \\ e^{-i \theta_{12}} & 0 & e^{i \theta_{23}} \\ e^{i \theta_{31}} & e^{-i \theta_{23}} & 0 \\ \end{pmatrix}. \end{equation} Therefore, the $\lambda_\nu$ are determined by the solutions of \begin{equation} \lambda^3 -3\lambda-2\cos(\theta_\triangle)=0, \end{equation} which only depend on the gauge-invariant sum of all the phases, \begin{equation} \theta_\triangle=\theta_{12}+\theta_{23}+\theta_{31}= \frac{A_\triangle}{l_B^2}= \frac{e B A_\triangle}{\hbar}. \end{equation} The solutions are explicitly given by \begin{equation} \lambda_\nu = 2 \cos\left( \frac{\theta_\triangle+2\pi \nu}{3}\right), \end{equation} with $\nu=0,1,2$, as follows from the identity $(2\cos x)^3-3\,(2\cos x)=2\cos(3x)$. \section{Band-gap chiral excitation flow} The condition of perfect chiral or non-chiral excitation flow in an equilateral triangle of emitters, strongly detuned from any specific Landau level, is related to the eigenvalues of $\tilde{J}_{nm}$. In particular, a fully chiral or completely non-chiral flow appears when one of the single-excitation eigenvalues becomes zero or when two of them become degenerate.
Indeed, the single-excitation sector of the equilateral triangular system is fully described by the eigenvalues and eigenstates of the band-gap interaction itself, \begin{equation}\label{eq:chiral_dyn_matrix_3atoms} \tilde{J} = G_0 \begin{pmatrix} 0 & e^{i \theta_{12}} & e^{i \theta_{13}} \\ e^{-i \theta_{12}} & 0 & e^{i \theta_{23}} \\ e^{-i \theta_{13}} & e^{-i \theta_{23}} & 0 \\ \end{pmatrix}, \end{equation} where $G_0 = g^2/(\omega_e-\omega_\ell) \Phi_{\ell\ell}(|\vec r_e^{\,n}-\vec r_e^{\,m}|)$ can be regarded as a constant, since we consider an equilateral triangle geometry. The characteristic polynomial of the system is given by \begin{equation} \lambda^3 - 3 G_0^2 \lambda - 2G_0^3 \cos( \theta_{\Delta}) = 0, \end{equation} which is exactly the same polynomial as the one used to find the eigenvalues in the resonant case (up to a scale factor $G_0$). Perfect chirality and non-chirality are realised, respectively, when $\theta_{\Delta} = n\pi/2$ with $n$ an odd integer, or $\theta_{\Delta}= n\pi$ with $n$ an even integer. This information is encoded in the determinant of the effective interaction, ${\rm det}[\tilde{J}_{nm}] = 2G_0^3 \cos( \theta_{\Delta})$. When ${\rm det}[\tilde{J}_{nm}] = 0$ we have perfect chirality; on the contrary, when ${\rm det}[\tilde{J}_{nm}] = \pm 2G_0^3$ chirality is lost, as if the magnetic field were turned off. This can be worked out exactly by considering that $c_n(t) = \sum_{\nu} \sum_m c_m(t=0)f^*_{\nu} (m) f_{\nu}(n) e^{-i\lambda_{\nu} t}$, where $c_n(t)$, for $n=1,2,3$, is the amplitude of the $n$-th emitter, and $f_{\nu}(n)$, $\lambda_{\nu}$ are, respectively, the eigenvectors and eigenvalues of the dynamical matrix \eqref{eq:chiral_dyn_matrix_3atoms}. Assuming the excitation is initially loaded just in the first emitter, i.e., $c_n(t=0) = \delta_{n1}$, and considering $ \theta_{\Delta} = n\pi/2$ with $n$ odd, we have \begin{equation} \begin{split} |c_1(t)| & = \bigg| \frac{1}{3} + \frac{2}{3}\cos\left[\sqrt{3}G_0 t \right] \bigg| \\ |c_2(t)| & = \bigg| \frac{1}{3} + \frac{2}{3}\cos\left[\sqrt{3}G_0 t + \frac{4\pi}{3} \right] \bigg| \\ |c_3(t)| & = \bigg| \frac{1}{3} + \frac{2}{3}\cos\left[\sqrt{3}G_0 t + \frac{2\pi}{3} \right] \bigg| \end{split} \end{equation} This solution clearly shows that the chirality emerges from the $2\pi/3$ phase shift between the oscillations of the three populations. \section{Disorder}\label{sec:SuppDisorder} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig_supp_dis_new.pdf} \caption{(a-d) Disorder-averaged excitation spectrum $\bar{\mathcal{S}}_e^n(\omega)$ for a fixed value of the disorder strength (as indicated in each plot). Each plot is averaged over $N_{\rm dis}=1000$ realisations. (e) Disorder-averaged excitation spectrum $\bar{\mathcal{S}}_e^n(\omega)$ as a function of $\Delta \omega_p$. For each value of $\Delta \omega_p$ the excitation spectrum is averaged over $N_{\rm dis}=50$ realisations. (f) Plot of the photon wavefunction $|\varphi(\vec{r}_n)|^2$ of the lowest LPP. The disorder strength in this plot is chosen as $\Delta \omega_p/g = 0.7$. The left panel shows the case without disorder, the center panel the wavefunction for a single disorder realisation, and the right panel depicts the average over $N_{\rm dis} = 200$ realisations. For all figures we have assumed a $M=20\times 20$ photonic lattice, $\alpha=0.08$, $\delta_e/J = 0.47$ (corresponding to the resonance with the $\ell=0$ Landau level) and $g/J=0.08$.
} \label{Fig:Supp_disorder} \end{figure} All our calculations in the main text are based on the assumption of an ideal lattice for the photons. In practice, fabrication uncertainties will result, for example, in random local offsets of the bare photon frequency $\omega_p$, which will affect the energies and wavefunctions of the photons. To estimate the effect of disorder on the LPPs, we now replace $\omega_p$ with a random offset at every site, $\tilde{\omega}_p^i = \omega_p + \delta \omega_p^i$, where $\delta \omega_p^i$ is sampled from a Gaussian distribution centered around zero and with a width $\Delta \omega_p$. In the case of emitters resonantly coupled to the lattice, we expect that the main physics is barely affected by the disorder, provided that $\omega_c \gg g \gg \Delta \omega_p$ (where for higher Landau levels $\omega_c$ is replaced by the frequency difference between two neighbouring levels). We now illustrate this point more explicitly for the simplest case of a single emitter. We consider the excitation spectrum, as defined in the main text, \begin{equation} \mathcal{S}^{n}_e(\omega) = \left| \braket{G | \sigma_-^n \frac{1}{H - \omega - i\frac{\gamma_e}{2}\sum_{m}\sigma_+^m\sigma_-^m }\sigma_+^n | G} \right|^2, \end{equation} where $H$ is now affected by the onsite disorder, as defined above. A good quantity that provides a clear visualization of the effect of disorder is the average excitation spectrum defined as \begin{equation} \bar{\mathcal{S}}_e^n(\omega) = \frac{1}{N_{\rm dis}} \sum_{k=1}^{N_{\rm dis} }\mathcal{S}_e^n(\omega), \end{equation} where $N_{\rm dis}$ is the number of disorder realizations. In each realization the onsite energies $\tilde{\omega}_p^i$ for each site are chosen randomly, as described above. In Fig. \ref{Fig:Supp_disorder}(a-e) we plot the resulting average excitation spectrum for a single emitter, in resonance with the lowest Landau level. This plot shows that the Rabi splitting (and thus the presence of the chiral bound state) is almost unaffected for small disorder strengths; even up to values of $\Delta \omega_p/g\simeq 1$ the splitting is still visible. In this regime the main effect of disorder is a broadening of the lines. Only at larger disorder strengths do the LPPs break up, and the excitation spectrum reduces to a single line centered around the emitter frequency. Note that in the considered regime of interest, $\omega_c> g$, the condition $\Delta \omega_p< g$ also implies that the disorder does not mix the Landau levels. Therefore, the chiral properties of the LPPs remain preserved. Moreover, in Fig. \ref{Fig:Supp_disorder}(f) we visualize the spatial profile of the photon wavefunction, $|\varphi(\vec{r}_n)|^2$, of the lowest LPP in the presence of disorder. Note that for this example a rather large disorder strength of $\Delta \omega_p/g = 0.7$ has been assumed. Even in this regime the Landau level is on average still clearly recognizable (right panel), although in a single disorder realization its rotational symmetry is already partially lost (center panel). \begin{figure} \centering \includegraphics[width=\columnwidth]{fig_disorder_bandgap.pdf} \caption{(a1-a4) Plot of the lowest eigenvalues $\omega_\lambda$ of $H_{\rm ph}$ in the presence of disorder and for a $31\times 31$ triangular lattice. (b1-b4) Disorder-averaged evolution of the excited-state population, $\bar{p}_e$, for three equidistant emitters with $d/l_0=4$.
Here the bar denotes the average over $N_{\rm dis}=100$ realizations, see Eq.~\eqref{eq:pebar}. The disorder strengths used are the same as reported in the corresponding panels (a1-a4). (c1-c4) Single realization of the population's time evolution under the same conditions as in panels (a) and (b). The other parameters used in all the plots are $\omega_{\rm p} =9.5$, $\omega_e=0.5$, $J=0.75$, $g=0.1$, $\alpha = 1/(16\sqrt{3})$, $d/l_0=4$, $\gamma=10^{-5}$ (all frequencies are given in arbitrary units). } \label{Fig:dis_bandgap} \end{figure} For emitters that are detuned from the nearest Landau level we expect that the constraint on the tolerable level of disorder can be further relaxed, and that the sufficient condition to observe all non-resonant effects detailed in the main text is ${\rm min}\{\omega_c, |\omega_e-\omega_\ell|\} \gg \Delta \omega_p$. Large quantitative and qualitative deviations from the main results of this work are expected once the disorder approaches the scale of the cyclotron frequency, affecting both the amplitude and the phase of the emerging dipole-dipole interactions. In Fig. \ref{Fig:dis_bandgap} we report the result of another numerical experiment. In each row of this figure, the labels (1), (2), (3), and (4) correspond to the disorder strengths $\Delta \omega_{\rm p} = 0.05, 0.1, 0.2, 0.5$. In the column (a1-a4) we report the lowest part of the spectrum of $H_{\rm ph}$ for the different disorder strengths. The column (b1-b4) shows the time evolution of the average population \begin{equation}\label{eq:pebar} \bar{p}_e(t) = \bigg| \frac{1}{N_{\rm dis.}} \sum_{\rm dis.} c_e(t) \bigg|^2, \end{equation} where $N_{\rm dis.}$ is the number of disorder realisations considered (in our simulations $N_{\rm dis.} = 100$), and $\sum_{\rm dis.}$ denotes the sum over all these realisations. The average time evolution defined in this way allows us to study dephasing due to the disorder. When the disorder strength is much smaller than the cyclotron frequency, it only weakly affects the effective magnetic phase between the emitters. This can be seen in Fig. \ref{Fig:dis_bandgap}(b1-b2), where up to $\Delta \omega_{\rm p}/\omega_c \lesssim 0.1 $ the average time evolution still exhibits many oscillations before it is washed out by dephasing. Looking at single disorder realizations, Fig. \ref{Fig:dis_bandgap}(c1-c2), one typically observes perfect chirality (up to very long times) with just small ripples in the dynamics. When the disorder starts to be comparable to the cyclotron frequency, the magnetic phases in the effective model $H_{\rm eff}$ start to be affected. This is shown in Fig. \ref{Fig:dis_bandgap}(b3-c3) for $\Delta \omega_{\rm p}/\omega_c = 0.2$. While for a single realization the emitters still undergo many chiral oscillations, the average dynamics is already strongly damped. This means that the emitter dynamics is still governed by the effective Hamiltonian $H_{\rm eff}$, but the tunneling amplitudes and phases are no longer predictable. In the last example, we set $\Delta \omega_{\rm p}/\omega_c = 0.5$. The Landau gap is still open, but the width of each level is now comparable to the gap between the levels. In this regime an approximate description in terms of separated Landau levels is no longer possible. This can be seen immediately from the average time evolution, Fig. \ref{Fig:dis_bandgap}(b4), where the dephasing is so strong that not even a single oscillation can occur.
Also for most individual disorder realizations the chiral flow of excitations is broken, as shown in Fig. \ref{Fig:dis_bandgap}(c4). \section{Experimental implementations} To probe the main properties of LPPs in experiments, we need a photonic lattice with a synthetic magnetic field, two-level emitters coupled to the photonic lattice with a strength $g$ that exceeds the bare loss rates $\gamma_p$ and $\gamma_e$, and a sufficiently low level of disorder in the lattice, such that the formation of discrete Landau levels and a hybridization between photons and emitters is still possible, see Sec.~\ref{sec:SuppDisorder}. While in the long term these conditions might be achievable in different platforms in the optical and microwave regime, we here briefly discuss a setup proposed in Ref. \cite{Anderson2016S} and implemented in Ref. \cite{Owens2018S}, where the physics of LPPs can already be probed with existing technology. Ref. \cite{Anderson2016S} describes a 2D magnetic photonic lattice composed of 3D microwave resonators, which contain a magnetic material. By applying an external field, this magnetic component breaks the time-reversal symmetry of the local modes and allows one to realize in a scalable way large lattices with effective fields of $\alpha=1/4$ or $\alpha=1/6$ \cite{Anderson2016S}. The on-site frequency of these microwave resonators can take values in the range of hundreds of MHz to tens of GHz, with quality factors $Q \sim 10^3-10^5$. The tunneling rate between neighboring resonators can be engineered to be $J \sim 100$ MHz. At the same time, the onsite disorder, $\Delta \omega_{p}$, for such microwave resonators can be quite small, and in similar coupled resonator arrays values of $\Delta \omega_{p} \lesssim 1$ MHz have been reported \cite{Mirhosseini2018S,Saxberg2019S}. The two-level systems can be implemented in this platform by superconducting qubits, where qubit-resonator couplings in the range of $g \sim 1-100$ MHz are readily achievable (see, e.g., Refs.~\cite{Mirhosseini2018S,Saxberg2019S,Kim2020S}). Superconducting qubits are typically designed with a frequency of about $\omega_e\sim 3-5$ GHz and exhibit coherence times of about $0.1-1$ ms, which translates into typical decay rates of about $\gamma_e \sim$ kHz. Since the qubit frequency is tunable by several 100 MHz, it can readily be made resonant with one of the Landau levels. Based on these estimates, we can identify a set of experimentally realistic parameters, which are close to the values assumed for most of the results discussed in the main text: \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline $\omega_{\rm p}$ & $\omega_e$ & $J$ & $g$ & $\gamma_{p,e}$ & $\,\,\Delta \omega_p\,\,$ & \\ \hline $\,\,5.4 \times 10^3\,\,$ & $\,\,5\times 10^3\,\,$ & $\,\,100\,\,$ & $\,\,20\,\,$ & $\,\,0.05\,\,$ & $\,\,1\,\,$ & \,\,MHz\,\,\\ \hline \end{tabular} \end{table} Note that while in the setting described in Ref. \cite{Anderson2016S} the magnetic flux assumes the fixed values $\alpha =1/4,1/6$, these values are still in the intermediate regime, where a continuum approximation is valid. To achieve arbitrary values of $\alpha$, more flexible approaches, such as that demonstrated in Ref. \cite{Roushan2017S}, can be used, where arbitrary phase patterns can be imprinted by external driving fields.
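As a rough consistency check of these numbers (our own estimate, not a statement taken from Refs.~\cite{Anderson2016S,Owens2018S}), the tabulated parameters yield a cyclotron gap of \begin{equation} \omega_c = 4\pi\alpha J \approx 210\ \mbox{MHz} \qquad (\alpha=1/6,\ J=100\ \mbox{MHz}), \end{equation} so that the hierarchy $\omega_c \gg g \gg \Delta \omega_p$ invoked in Sec.~\ref{sec:SuppDisorder} (with $\gamma_{p,e}$ smaller still) is comfortably satisfied.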
\section{Introduction} The smoothed boundary method \cite{Bueno-Orovio:2006a,Bueno-Orovio:2006b,Bueno-Orovio:2006c} and other similar approaches \cite{Gal:2006,Wu:2007,Gal:2008} have recently been demonstrated as powerful tools for solving various partial differential equations with boundary conditions imposed within the computational domain. The method's origin can be traced to the embedded boundary method and the immersed boundary method (for an overview, see Refs.~\cite{Badea:2001,Peskin:2002,Boyd:2005,Lui:2009,Sabetghadam:2009}). This method has been successfully employed in simulating diffusion processes \cite{Kockelkoren:2003,Levine:2005} and wave propagation \cite{Bueno-Orovio:2006a,Bueno-Orovio:2006b,Bueno-Orovio:2006c,Fenton:2005,Buzzard:2007} constrained within geometries described by a continuously transitioning domain indicator function (hereafter, the domain parameter), with a no-flux boundary condition imposed on the diffuse interface (as defined by the narrow transitioning region of the domain parameter). Those works demonstrated the potential of this type of numerical method, which circumvents the difficulties of constructing a finite element mesh (e.g., meshing the surface and then building a volumetric mesh based on the surface mesh, or combining regular subdomains that can be easily meshed) and is therefore particularly useful when dealing with complex structures. However, the method was only applicable to no-flux boundary conditions, and no approaches were available to extend it to other types of equations or boundary conditions. Recently, a different formulation, based on asymptotic analyses, was proposed for solving partial differential equations in a similar manner \cite{Li:2009,Lowengrub:2009,Ratz:2006,Teigen:2009a,Sohn:2009,Teigen:2009b}, providing a justification of the method as well as increasing the applicability of the approach. In this paper, we present a mathematically consistent smoothed boundary method and give a precise derivation of the resulting equations. The specific equations that we consider are: (1) the diffusion equation with Neumann and/or Dirichlet boundary conditions, (2) the bulk diffusion equation coupled with surface diffusion, (3) the mechanical equilibrium equation for linear elasticity, and (4) Allen-Cahn or Cahn-Hilliard equations with contact angles as boundary conditions. The method is especially useful for three-dimensional image-based simulations. \section{Background} The method is based on a diffuse interface description of different phases, similar to the continuously transitioning order parameters in the phase-field method \cite{Cahn:1958,Cahn:1959,Allen:1979,Ginzburg:1950,Chen:2002,Emmerich:2003} often used in studying phase transformations and microstructural evolution in materials. In phase-field models, phases (which could be liquid, solid, vapor, or two different solids/liquids having different compositions) are described by one or more order parameters having prescribed bulk values within each phase. Across the interface, the order parameter changes in a controlled manner. Asymptotic analyses \cite{Emmerich:2003} can be used to show that the phase-field governing equations approach the corresponding sharp interface problems in the sharp interface limit. We adopt this concept to describe internal domain boundaries by an order-parameter-like domain parameter, which may or may not be stationary and takes a value of 1 inside the domain of interest and 0 outside.
The equations will be solved where the domain parameter is 1, with boundary conditions imposed where the domain parameter is at the intermediate value (approximately 0.5). Figure~\ref{Domain} illustrates a schematic diagram of the sharp and diffuse interfaces. In the conventional sharp interface description, the domain of interest is $\Omega$ and is bounded by a zero-thickness boundary denoted by $\partial \Omega$ [Fig.~\ref{Domain}(a)]. Within $\Omega$, the partial differential equations need to be solved according to the boundary conditions imposed at $\partial \Omega$. In the diffuse interface description, we employ a continuous domain parameter, which is uniformly 1 within the domain of interest and uniformly 0 outside. In this case, the originally sharp domain boundary is smeared to a diffuse interface with a finite thickness indicated by $0<\psi<1$. Our goal is to solve partial differential equations within the region where $\psi=1$ while imposing boundary conditions at the narrow transitioning interface region where $0<\psi<1$. In this description, there is no explicitly defined domain boundary; the boundary is instead determined by the variation of the domain parameter. In addition, the gradient of the domain parameter, $\nabla \psi$, automatically determines the inward normal vector of the contour level sets of $\psi$ (see Fig.~\ref{Domain}(c)). \section{Formulation} \subsection{General Approach} The general approach is as follows. The domain parameter describes the domain of interest ($\psi=1$ inside the domain, and $\psi=0$ outside). The transition between these two values is smooth, and is taken to be the solution of an Allen-Cahn-type dynamic equation (having the form of a hyperbolic tangent function), as described later. To derive the smoothed boundary formulation for a Neumann boundary condition, the differential equation of interest (written here in terms of a generic field $H$) is multiplied by the domain parameter, $\psi$. By using identities of the product rule of differentiation such as \begin{equation} \label{ChainRule1} \psi \nabla^2 H = \nabla \cdot (\psi \nabla H) - \nabla \psi \cdot \nabla H, \end{equation} we obtain terms proportional to $\nabla \psi$. Since the unit (inward) normal of the boundary, $\vec{n}$, is given by $\nabla \psi/ |\nabla \psi |$, such terms can be written in terms of $\partial H/\partial n = \nabla H \cdot \vec{n} = \nabla H \cdot \nabla \psi /|\nabla \psi|$, and thus reformulated as the Neumann boundary condition imposed on the diffuse interface. Similarly, to derive the smoothed boundary formulation for the Dirichlet boundary condition, the equation of interest is multiplied by the square of the domain parameter. Again using mathematical identities, $\psi^2 \nabla^2 H = \psi \nabla \cdot (\psi \nabla H) - \psi \nabla \psi \cdot \nabla H$ where $\psi \nabla \psi \cdot \nabla H = \nabla \psi \cdot \nabla \left( \psi H \right) - H \left| \nabla \psi \right|^{2}$, we obtain \begin{equation} \label{ChainRule2} \psi^2 \nabla^2 H = \psi \nabla \cdot (\psi \nabla H) - [\nabla \psi \cdot \nabla \left( \psi H \right)-H |\nabla \psi|^2]. \end{equation} Note that the $H$ associated with $|\nabla \psi|^2$ in the last term becomes $H|_{\partial \Omega}$, the boundary value imposed on the diffuse interface. Specific details of the derivation depend on the equation to which the approach is applied, and we therefore provide four examples below.
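Both identities can be verified symbolically. The following minimal sketch (our own illustration using the sympy library, not part of the derivation itself) confirms Eqs.~\eqref{ChainRule1} and \eqref{ChainRule2} in one dimension for arbitrary smooth $\psi$ and $H$:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
psi = sp.Function('psi')(x)
H = sp.Function('H')(x)
d = lambda f: sp.diff(f, x)   # shorthand for d/dx

# Eq. (ChainRule1): psi*H'' = (psi*H')' - psi'*H'
lhs1 = psi * d(d(H))
rhs1 = d(psi * d(H)) - d(psi) * d(H)
print(sp.simplify(lhs1 - rhs1))   # prints 0

# Eq. (ChainRule2): psi^2*H'' = psi*(psi*H')' - [psi'*(psi*H)' - H*psi'^2]
lhs2 = psi**2 * d(d(H))
rhs2 = psi * d(psi * d(H)) - (d(psi) * d(psi * H) - H * d(psi)**2)
print(sp.simplify(lhs2 - rhs2))   # prints 0
\end{verbatim}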
\subsection{Diffusion Equation} \label{DiffEqn} The first example is the diffusion equation with Neumann and/or Dirichlet boundary conditions. The Neumann boundary condition is appropriate, for example, as the no-flux boundary condition, while the Dirichlet boundary condition is necessary when the diffusion equation is solved with a fixed concentration on the boundaries. For Fick's second law of diffusion, the original governing equation is expressed as \begin{equation} \frac{\partial C}{\partial t} = -\nabla \cdot \vec{j} + S = \nabla \cdot (D \nabla C) + S, \label{OGE2} \end{equation} where $\vec{j}$ is the flux vector, $D$ is the diffusion coefficient, $C$ is the concentration, $S$ is the source term, and $t$ is time. Instead of directly solving the diffusion equation, we multiply both sides of Eq.~\eqref{OGE2} by the domain parameter $\psi$ that describes the domain of the solid phase: \begin{equation} \psi \frac{\partial C}{\partial t} = \psi \nabla \cdot (D \nabla C) + \psi S. \label{MGE5} \end{equation} Using the identity $\psi \nabla \cdot (D \nabla C) = \nabla \cdot (\psi D \nabla C) - \nabla \psi \cdot (D \nabla C)$, Eq.\ (\ref{MGE5}) becomes \begin{equation} \psi \frac{\partial C}{\partial t} = \nabla \cdot (\psi D \nabla C) - \nabla \psi \cdot (D \nabla C) + \psi S. \label{MGE6} \end{equation} Now, let us consider the boundary condition in this formulation. The Neumann boundary condition specifies the inward flux across the domain boundary (mathematically, the normal gradient of $C$ at the diffuse interface) and is treated as \begin{equation} \label{NBC1} \vec{n} \cdot \vec{j} = \frac{\nabla \psi}{\left| \nabla \psi \right|} \cdot \vec{j} = - \frac{\nabla \psi \cdot (D \nabla C) }{\left| \nabla \psi \right|} = - D \frac{\partial C}{\partial n}= -B_{N}, \end{equation} where $\vec{n} = \nabla \psi/|\nabla \psi|$ is the unit inward normal vector at the boundaries defined by the diffuse interface description. Equation \eqref{NBC1} can be rearranged as $\nabla \psi \cdot (D \nabla C) = \left| \nabla \psi \right| B_{N}$ and substituted back into Eq.~\eqref{MGE6}; thus, we obtain \begin{equation} \label{MGE7} \psi \frac{\partial C}{\partial t} = \nabla \cdot (\psi D \nabla C) - \left| \nabla \psi \right| B_{N} + \psi S. \end{equation} To demonstrate that this smoothed boundary diffusion equation satisfies the assigned Neumann boundary condition (i.e., a specified boundary flux or normal gradient), we use the one-dimensional version of Eq.~\eqref{MGE7} without loss of generality. By reorganizing terms and integrating over the interfacial region, we obtain \begin{equation} \label{SBM-Nm-prf-01} \int_{a_i-\xi/2}^{a_i+\xi/2}\psi \left( \frac{\partial C}{\partial t} - S \right) dx = \left. \psi D \frac{\partial C}{\partial x} \right|_{a_i - \xi/2}^{a_i + \xi/2} - \int_{a_i - \xi/2}^{a_i + \xi/2} \left| \frac{\partial \psi}{\partial x} \right| B_{N} dx, \end{equation} where $a_i-\xi/2 < x < a_i+\xi/2$ is the region of the interface, and $\xi$ is the thickness of the interface. Following Refs.~\cite{Bueno-Orovio:2006b,Bueno-Orovio:2006c,Kockelkoren:2003,Buzzard:2007}, we introduce the mean value theorem for integrals, which states that, for a continuous function $g(x)$, there exists a constant value $h_0$ such that \begin{equation} \min{g(x)} < \frac{1}{q-p} \int_{p}^{q} g(x) dx = h_0 < \max{g(x)}, \end{equation} where $p<x<q$. By eliminating the second term on the right-hand side of Eqs.~\eqref{MGE7} and \eqref{SBM-Nm-prf-01}, the no-flux boundary condition can be imposed; the resulting equation is similar to those proposed in Refs.~\cite{Bueno-Orovio:2006b,Bueno-Orovio:2006c,Kockelkoren:2003,Buzzard:2007}.
However, we retain the term in order to maintain the generality of the method. Therefore, the analysis presented here leads to an extension of the original method that greatly expands its applicability. Since the function on the left-hand side of Eq.~\eqref{SBM-Nm-prf-01} is continuous and finite within the interfacial region, we can use the mean value theorem for integrals to obtain the relation: \begin{equation} \int_{a_i-\xi/2}^{a_i+\xi/2}\psi \left( \frac{\partial C}{\partial t} - S \right) dx = h_0 \xi. \label{SmVal-01} \end{equation} Using the conditions that $\psi = 1$ at $x=a_i + \xi/2$ and $\psi = 0$ at $x=a_i - \xi/2$, the first term on the right-hand side of Eq.~\eqref{SBM-Nm-prf-01} is written as \begin{equation}\label{FLX-NM-01} \left. 1 \cdot D \frac{\partial C}{\partial x} \right|_{a_i + \xi/2} - \left. 0 \cdot D \frac{\partial C}{\partial x} \right|_{a_i - \xi/2} = \left. D \frac{\partial C}{\partial x} \right|_{a_i + \xi/2}. \end{equation} Since $| \partial \psi / \partial x | = 0$ for $x < a_i - \xi/2$ or $x > a_i+\xi/2$, the second term on the right-hand side of Eq.~\eqref{SBM-Nm-prf-01} can be replaced by \begin{equation}\label{BC-NM-01} \int_{a_i - \xi/2}^{a_i + \xi/2} \left| \frac{\partial \psi}{\partial x} \right| B_{N} dx = \int_{-\infty}^{+ \infty} \left| \frac{\partial \psi}{\partial x} \right| B_{N} dx. \end{equation} Substituting Eqs.~\eqref{SmVal-01}, \eqref{FLX-NM-01} and \eqref{BC-NM-01} back into Eq.~\eqref{SBM-Nm-prf-01}, we obtain \begin{equation} \label{SBM-Nm-prf-02} h_0 \xi = \left. D \frac{\partial C}{\partial x} \right|_{a_i+\xi/2} - \int_{-\infty}^{+ \infty} \left| \frac{\partial \psi}{\partial x} \right| B_{N} dx. \end{equation} Taking the limit of Eq.~\eqref{SBM-Nm-prf-02} for $\xi \rightarrow 0$, we obtain \begin{equation} \label{SBM-Nm-prf-03} \begin{split} \left. D \frac{\partial C}{\partial x} \right|_{a_i} = \int_{-\infty}^{+ \infty} \delta(x-a_i) B_N dx = B_N \bigg|_{a_i}, \end{split} \end{equation} where $\partial C/\partial x|_{a_i+\xi/2} \cong \partial C/\partial x|_{a_i}$ and $\lim_{\xi \rightarrow 0} |\partial \psi / \partial x | = \delta(x-a_i)$ when $\psi$ takes the form of a hyperbolic tangent function, with $\delta(x-a_i)$ the Dirac delta function. The Dirac delta function has the property that $\int_{-\infty}^{+\infty} \delta(x-a_i) f(x) dx = f(a_i)$, which provides the second equality in Eq.~\eqref{SBM-Nm-prf-03}. Therefore, Eq.~\eqref{SBM-Nm-prf-03} clearly shows that the smoothed boundary method recovers the Neumann boundary condition at the boundary when the thickness of the diffuse boundary approaches zero. This convergence is satisfied for both stationary and moving boundaries \cite{Kockelkoren:2003}.
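The key limit used in this argument, $\lim_{\xi \rightarrow 0} |\partial \psi/\partial x| = \delta(x-a_i)$, can also be checked numerically. The short sketch below (the test function, interface location, and thicknesses are arbitrary choices of ours for illustration) evaluates $\int |\partial \psi/\partial x| f(x)\, dx$ for hyperbolic tangent profiles of decreasing thickness and shows convergence to $f(a_i)$:
\begin{verbatim}
import numpy as np

a_i = 10.0
f = lambda x: np.sin(x)            # arbitrary smooth test function
x = np.linspace(0.0, 20.0, 200001)
for xi in (2.0, 0.5, 0.1):         # decreasing interface thickness
    psi = 0.5 * (1.0 + np.tanh(2.0 * (x - a_i) / xi))
    dpsi = np.gradient(psi, x)
    val = np.trapz(np.abs(dpsi) * f(x), x)
    print(xi, val, f(a_i))         # val -> f(a_i) = sin(10) as xi -> 0
\end{verbatim}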
For imposing the Dirichlet boundary condition, we can manipulate the original governing equation following a procedure similar to the derivation of Eq.~\eqref{MGE7}. Multiplying both sides of Eq.\ (\ref{MGE6}) by $\psi$, we obtain \begin{equation} \label{MGE10} \psi^{2} \frac{\partial C}{\partial t} = \psi \nabla \cdot ( \psi D \nabla C) - \psi \nabla \psi \cdot (D \nabla C) + \psi^{2} S, \end{equation} where the second term on the right-hand side can be replaced using $\psi \nabla \psi \cdot( D \nabla C) = D [\nabla \psi \cdot \nabla \left( \psi C \right) - C \nabla \psi \cdot \nabla \psi] = D [\nabla \psi \cdot \nabla \left( \psi C \right) - C |\nabla \psi|^2]$. Equation \eqref{MGE10} is then rewritten as \begin{equation} \label{MGE11} \psi^{2} \frac{\partial C}{\partial t} = \psi \nabla \cdot (\psi D \nabla C) - D[ \nabla \psi \cdot \nabla \left( \psi C \right) - C |\nabla \psi|^2] + \psi^{2} S, \end{equation} where $C$ in the third term will be the Dirichlet boundary condition, $B_D$, imposed at the diffuse interface. Therefore, the smoothed boundary formulated diffusion equation with the Dirichlet boundary condition is \begin{equation} \label{MGE12} \psi^{2} \frac{\partial C}{\partial t} = \psi \nabla \cdot (\psi D \nabla C) - D[ \nabla \psi \cdot \nabla \left( \psi C \right) - B_D \left| \nabla \psi \right|^{2}] + \psi^{2} S. \end{equation} To prove the convergence of the solution at the boundaries to the specified boundary value, we again use a one-dimensional version of the smoothed boundary formulated equation. Integrating Eq.~\eqref{MGE12} over the interfacial region and reorganizing terms, we obtain \begin{equation}\label{Int-Dirich-01} \int_{a_i-\xi/2}^{a_i+\xi/2} \left[ \psi^2 \frac{\partial C}{\partial t} - \psi \frac{\partial}{\partial x} \left( \psi D \frac{\partial C}{\partial x} \right) - \psi^2 S \right] dx = -\int_{a_i-\xi/2}^{a_i+\xi/2} D \bigg(\frac{\partial \psi}{\partial x}\bigg) \left[ \frac{\partial \psi C}{\partial x} - B_D \frac{\partial \psi}{\partial x} \right] dx. \end{equation} Similar to the derivation of Eq.~\eqref{SmVal-01}, the left-hand side of Eq.~\eqref{Int-Dirich-01} is proportional to the interfacial thickness and approaches zero in the limit of $\xi \rightarrow 0$. On the right-hand side of Eq.~\eqref{Int-Dirich-01}, the gradient of $\psi$ approaches the Dirac delta function, $\delta(x-a_i)$, as the interface thickness approaches zero. Therefore, we can reduce Eq.~\eqref{Int-Dirich-01} to \begin{equation} \label{Int-Dirich-02} 0 =D \bigg[ \frac{\partial \psi C}{\partial x} - B_D \frac{\partial \psi}{\partial x}\bigg] ~~~\Longrightarrow~~~ \frac{\partial \psi C}{\partial x} = B_D \frac{\partial \psi}{\partial x} \end{equation} in the limit $\xi \rightarrow 0$. By integrating Eq.~\eqref{Int-Dirich-02} over the interfacial region again, we obtain \begin{equation} 1 \cdot C \bigg|_{a_i+\xi/2} - 0 \cdot C \bigg|_{a_i-\xi/2} = \int_{a_i-\xi/2}^{a_i+\xi/2} B_D \frac{\partial \psi}{\partial x} dx,\label{Int-Dirich-03} \end{equation} which in the limit of $\xi \rightarrow 0$ gives \begin{equation} C \bigg|_{a_i+\xi/2} \cong C \bigg|_{a_i} =\int_{-\infty}^{+ \infty} \delta(x-a_i) B_D dx = B_D \bigg|_{a_i}. \end{equation} Therefore, the smoothed boundary formulation recovers the specified Dirichlet boundary condition: $C|_{a_i} = B_D|_{a_i}$. In this method, the boundary gradient, $B_N$, and the boundary value, $B_D$, are not restricted to constant values; they can vary spatially and/or temporally, or be functions of $C$ or other parameters. In addition, one can impose Neumann and Dirichlet boundary conditions simultaneously to yield mixed (or Robin) boundary conditions.
The equation then becomes \begin{equation}\label{Ch6-SBM-02} \psi^2 \frac{\partial C}{\partial t} = \psi \nabla \cdot (\psi D \nabla C) - \psi | \nabla \psi |_N B_N(\mathbf{x}) - \nabla \psi \cdot D[ \nabla (\psi C) - B_D(\mathbf{x})\nabla \psi ]_D+\psi^2 S, \end{equation} where $B_N(\mathbf{x})$ and $B_D(\mathbf{x})$ are spatially dependent Neumann and Dirichlet boundary conditions specified at different parts of the boundary, and the subscripts `$N$' and `$D$' denote the quantities associated with the boundaries on which the Neumann and Dirichlet boundary conditions are imposed. \subsection{Surface Diffusion Coupled Bulk Diffusion} \label{SurfDiffFormulation} The second example demonstrates that surface diffusion can be incorporated into the smoothed boundary equation derived above. For this case, we take the set of equations that includes surface reaction, bulk diffusion and surface diffusion to describe an oxygen reduction model in a solid oxide fuel cell (SOFC) cathode \cite{Lu:2009}. The evolution of the oxygen vacancy concentration, $C$, on the cathode surface is governed by \begin{equation} \label{FSL-S1} D_b\frac{\partial C}{\partial n} = \kappa C -l D_s \bigg( \frac{\partial^2}{\partial s^2}+\frac{\partial^2}{\partial \tau^2} \bigg)C + L \frac{\partial C}{\partial t}, \end{equation} where $n$, $s$ and $\tau$ are the unit normal, primary tangent and secondary tangent vectors of the surface, respectively. Here, the parameter $l$ is the characteristic thickness of the surface and is multiplied into the surface Laplacian term to maintain dimensional consistency. The parameters $D_b$, $\kappa$, $D_s$, $L$ and $t$ are the bulk diffusivity, reaction rate, surface diffusivity, accumulation coefficient and time, respectively. Thus, the term on the left-hand side represents the flux from the bulk, and the terms on the right-hand side represent the surface reaction, surface Laplacian and concentration accumulation, respectively. For simplicity, these parameters are all assumed to be constant. In the bulk of cathode particles, the oxygen vacancy diffusion is governed by Fick's second law: \begin{equation} \label{FSL-B1} \frac{\partial C}{\partial t} = D_b \nabla^2 C. \end{equation} To simulate the oxygen vacancy concentration evolution in the cathode, the two diffusion equations, Eqs.~\eqref{FSL-S1} and \eqref{FSL-B1}, are coupled and need to be solved simultaneously. In this case, the two equations share the flux normal to the cathode surface as the common boundary condition. Recently, this set of equations was formulated using a diffuse interface approach \cite{Teigen:2009a}, which leads to two differential equations that are coupled by boundary conditions. We show below that the coupling can instead be achieved by applying the smoothed boundary formulation described herein to obtain a single equation that governs both surface and bulk effects. The derivation is as follows. We first multiply Eq.~\eqref{FSL-B1} by $\psi$ and apply the product rule of differentiation to obtain \begin{equation} \label{SBM-FSL-B1} \psi \frac{\partial C}{\partial t} = D_b\nabla \cdot (\psi \nabla C) - D_b \nabla \psi \cdot \nabla C. \end{equation} As in Eq.~\eqref{NBC1}, the normal derivative at the diffuse interface is defined by $\partial C/\partial n = \nabla C \cdot \nabla \psi /|\nabla \psi |$.
Substituting this relation back into Eq.~\eqref{FSL-S1} and rearranging terms give \begin{equation} \label{SBM-FSL-BC-1} \nabla \psi \cdot \nabla C = \frac{|\nabla \psi|}{D_b} \bigg[ \kappa C - l D_s \bigg( \frac{\partial^2}{\partial s^2}+\frac{\partial^2}{\partial \tau^2} \bigg)C + L \frac{\partial C}{\partial t} \bigg]. \end{equation} Substituting Eq.~\eqref{SBM-FSL-BC-1} into the second term of Eq.~\eqref{SBM-FSL-B1}, we obtain \begin{equation} \label{SBM-FSL-1} \psi \frac{\partial C}{\partial t} = D_b\nabla \cdot (\psi \nabla C) - |\nabla \psi| \bigg[ \kappa C- l D_s \bigg( \frac{\partial^2}{\partial s^2} +\frac{\partial^2}{\partial \tau^2} \bigg) C + L\frac{\partial C}{\partial t} \bigg]. \end{equation} This equation combines the bulk diffusion and surface diffusion into a single equation, and will be used in the examples presented in Sections \ref{bulkSurf_cylinder} and \ref{SOFC_Diff}. In the bulk ($|\nabla \psi|=0$ and $\psi = 1$), Eq.~\eqref{SBM-FSL-1} reduces back to Eq.~\eqref{FSL-B1}. When the interfacial thickness approaches zero, Eq.~\eqref{SBM-FSL-1} reduces to Eq.~\eqref{FSL-S1} at the interface ($|\nabla\psi|\neq0$), as was proven in Section \ref{DiffEqn}. To calculate the surface Laplacian, we use the following method. The unit vector of the concentration gradient is given by $\vec{p} = \nabla C/|\nabla C|$. The unit secondary tangential vector on the surface can be obtained by $\vec{\tau} = (\vec{n} \times \vec{p})/|\vec{n} \times \vec{p}|$, and the unit primary tangential vector is then obtained by $\vec{s} = (\vec{\tau}\times \vec{n})/|\vec{\tau}\times \vec{n}|$. In this case, the surface flux has no projection in the $\tau$ direction ($\vec{p} \cdot \vec{\tau} = 0$). We can therefore calculate the surface diffusion flux simply by projecting the concentration gradient onto the primary tangential direction. The surface flux is obtained by taking the inner product between the concentration gradient and the unit primary tangential vector for its magnitude, and it points along the negative primary tangential direction: \begin{equation} \vec{j}_s = - l D_s(\nabla C \cdot \vec{s})\,\vec{s}. \end{equation} Since the Laplacian operator is independent of the choice of coordinate system, the value of the surface Laplacian can then be obtained by taking the negative divergence of the surface flux: \begin{equation} l D_s \bigg( \frac{\partial^2}{\partial s^2} +\frac{\partial^2}{\partial \tau^2} \bigg) C = - \nabla \cdot \vec{j}_s, \end{equation} where $\nabla \cdot \vec{j}_s$ is the divergence of $\vec{j}_s$ in the global Cartesian grid system of the computational box. To simulate only the surface diffusion on a diffuse-interface described geometry, one can simply eliminate all bulk-related terms to obtain \begin{equation} L\frac{\partial C}{\partial t} = - \nabla \cdot \vec{j}_s, \end{equation} such that only a concentration evolution along the interfacial region will occur.
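On a uniform Cartesian grid, this tangent-projection construction can be sketched as follows (the function, its regularization constant, and its arguments are illustrative choices of ours, not a prescribed implementation):
\begin{verbatim}
import numpy as np

def surface_laplacian(C, psi, l, D_s, h):
    # returns l*D_s*(d2/ds2 + d2/dtau2)C computed as -div(j_s) on a 3D grid
    gC = np.array(np.gradient(C, h))       # grad C, shape (3, nx, ny, nz)
    gpsi = np.array(np.gradient(psi, h))   # grad psi
    eps = 1e-12                            # avoids division by zero
    n = gpsi / (np.linalg.norm(gpsi, axis=0) + eps)   # unit inward normal
    p = gC / (np.linalg.norm(gC, axis=0) + eps)       # unit grad-C direction
    tau = np.cross(n, p, axis=0)
    tau /= (np.linalg.norm(tau, axis=0) + eps)        # secondary tangent
    s = np.cross(tau, n, axis=0)
    s /= (np.linalg.norm(s, axis=0) + eps)            # primary tangent
    j_s = -l * D_s * np.einsum('i...,i...->...', gC, s) * s   # surface flux
    div = sum(np.gradient(j_s[i], h)[i] for i in range(3))
    return -div
\end{verbatim}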
\subsection{Mechanical Equilibrium Equation} The smoothed boundary method can also be applied to the mechanical equilibrium equation. When a solid body is in mechanical equilibrium, all the forces are balanced in all directions, as represented by \begin{equation} \label{ME-1} \frac{\partial \sigma_{ij}}{\partial x_j} = 0, \end{equation} where the subscript `$i$' indicates the component along the $i$-th direction, and $\sigma_{ij}$ is the stress tensor. Repeated indices indicate summation over the index. For a linear elasticity problem, the stress tensor is given by the generalized Hooke's law: \begin{equation} \label{ESS-1} \sigma_{ij} = C_{ijkl} (\varepsilon_{kl}- \rho \delta_{kl}), \end{equation} where $C_{ijkl}$ is the elastic constant tensor, and $\rho$ is a scalar stress-free (eigen-) strain, such as a thermal expansion strain ($\alpha \Delta T$) or a misfit strain ($\varepsilon^0 = (a_p-a_m)/a_m$), depending on the governing physics. The total strain tensor is defined by the gradients of the displacements as \begin{equation} \label{TSN-1} \varepsilon_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i} \right), \end{equation} where $u_i$ is the infinitesimal displacement along the $i$-th direction. Substituting Eqs.~\eqref{TSN-1} and \eqref{ESS-1} back into Eq.~\eqref{ME-1} gives \begin{equation} \label{ME-2} \frac{\partial }{\partial x_j} C_{ijkl} \frac{1}{2} \left(\frac{\partial u_k}{\partial x_l}+\frac{\partial u_l}{\partial x_k} \right) = \frac{\partial }{\partial x_j} \bigg( \rho C_{ijkl}\delta_{kl} \bigg). \end{equation} We can multiply Eq.~\eqref{ME-2} by the domain parameter that distinguishes the elastic solid region ($\psi = 1$) from the environment ($\psi=0$) to perform the smoothed boundary formulation. After collecting the terms associated with $\partial \psi/\partial x_j$ on one side of the equation, we obtain \begin{equation} \label{SBM-ME-1} \begin{split} \frac{\partial}{\partial x_j} \left[ \psi C_{ijkl} \frac{1}{2} \left(\frac{\partial u_k}{\partial x_l}+\frac{\partial u_l}{\partial x_k} \right) \right] - \left(\frac{\partial \psi}{\partial x_j}\right) \bigg\{ C_{ijkl} \bigg[ \frac{1}{2}\bigg(\frac{\partial u_k}{\partial x_l} +\frac{\partial u_l}{\partial x_k} \bigg)-\rho\delta_{kl} \bigg] \bigg\} \\ = \frac{\partial}{\partial x_j} \bigg( \psi \rho C_{ijkl}\delta_{kl} \bigg). \end{split} \end{equation} The traction exerted on the solid surface is defined by $N_{i} = -\sigma_{ij}n_{j}$, where $n_j$ is the inward unit normal of the solid surface. We again use the definition of the inward unit normal of the boundary: $n_{i} = (\partial \psi/\partial x_i) /|\nabla \psi |$. (In indicial notation, $\partial \psi/\partial x_i$ corresponds to $\nabla \psi$ and $\sqrt{(\partial \psi/\partial x_i)(\partial \psi/\partial x_i)} = |\nabla \psi|$.) Therefore, the traction force is given by \begin{equation} \label{Trac-1} N_{i} = -\bigg\{ C_{ijkl} \left[ \frac{1}{2}\left(\frac{\partial u_k}{\partial x_l}+\frac{\partial u_l}{\partial x_k} \right)-\rho\delta_{kl} \right] \bigg\} \left(\frac{\partial \psi/\partial x_j}{|\nabla \psi|} \right). \end{equation} Substituting Eq.~\eqref{Trac-1} back into Eq.~\eqref{SBM-ME-1} yields the smoothed boundary formulation of the mechanical equilibrium equation with a traction boundary condition on the solid surface: \begin{equation} \label{SBM-ME-2} \frac{\partial}{\partial x_j} \left[ \psi C_{ijkl} \frac{1}{2} \left(\frac{\partial u_k}{\partial x_l}+\frac{\partial u_l}{\partial x_k} \right) \right] + |\nabla \psi| N_{i} = \frac{\partial}{\partial x_j} \bigg( \psi \rho C_{ijkl}\delta_{kl} \bigg), \end{equation} where $\partial(\psi \rho C_{ijkl}\delta_{kl})/\partial x_j = \tilde{\rho}_i$ can be treated as an effective body force along the $i$-th direction.
For linear elasticity problems with prescribed displacements at the solid surface, one can perform the smoothed boundary formulation as in the derivation of the Dirichlet boundary condition in Section \ref{DiffEqn}, by multiplying Eq.~\eqref{ME-2} by $\psi^2$ and using the product rule to obtain \begin{equation} \begin{split} \psi\frac{\partial}{\partial x_j} \left[ \psi C_{ijkl} \frac{1}{2} \left(\frac{\partial u_k}{\partial x_l}+\frac{\partial u_l}{\partial x_k} \right) \right] & - \bigg\{ \bigg(\frac{\partial \psi}{\partial x_j}\bigg) \bigg[ C_{ijkl} \frac{1}{2}\bigg( \frac{\partial \psi u_k}{\partial x_l}+\frac{\partial \psi u_l}{\partial x_k} \bigg) \bigg] \\ - \bigg(\frac{\partial \psi}{\partial x_j} \bigg) C_{ijkl} \frac{1}{2} \bigg( u_k \frac{\partial \psi}{\partial x_l} & + u_l \frac{\partial \psi}{\partial x_k}\bigg) \bigg\} = \psi^2 \frac{\partial}{\partial x_j}\bigg(\rho C_{ijkl}\delta_{kl}\bigg), \end{split} \end{equation} where the displacements $u_k$ and $u_l$ appearing in the third term on the left-hand side become the boundary values of the displacements at the solid surface. An equivalent formulation for the mechanical equilibrium equation can also be obtained by an asymptotic approach \cite{Voigt:2009}. \subsection{Equations for Phase Transformations with Additional Boundaries} \label{ContactAngleFormulation} Phase transformations affected by a mobile or immobile surface or other boundary are of importance in many materials processes, including heterogeneous nucleation at material interfaces \cite{Granasy:2007,Warren:2009}. Maintaining a proper contact angle at the three-phase boundary (where the interface between the two phases meets the surface) is necessary for capturing the dynamics accurately, since the contact angle reflects the balance between the surface energies (tensions) of the different phase boundaries. While previous works developed a method to impose the contact-angle boundary condition \cite{Granasy:2007,Warren:2009} on sharp domain walls, here we show that a similar model with diffuse domain walls can be obtained simply by applying the approach described above. Below, we assume that the boundary is immobile, but this assumption can easily be removed by describing the evolution of the domain parameter as dictated by the physics of the system. In the Allen-Cahn and Cahn-Hilliard equations of the phase field model, the total free energy has the following form \cite{Cahn:1958,Cahn:1959}: \begin{equation} \label{eqTeng} F = \int_\Omega \bigg [ f(\phi)+\frac{\epsilon^{2}}{2} \left | \nabla \phi \right \vert^{2}\bigg] d\Omega , \end{equation} where $\phi$ is referred to as the phase field or order parameter commonly used to distinguish different phases, and $\epsilon$ is the gradient energy coefficient in the phase field model. We take the variational derivative according to Euler's equation: \begin{equation} \label{eqVD} \delta F = \int_\Omega \bigg( \frac{\partial f}{\partial \phi} - \epsilon^{2} \nabla^{2} \phi \bigg) d\Omega+ \int_{\partial \Omega} \bigg( \epsilon^{2} \nabla \phi \cdot \vec{n}\bigg) d \vec{A}, \end{equation} where $\vec{n}$ is the unit normal vector to the domain boundary $\partial \Omega$. The bulk chemical free energy, $f$, is a double-well function of $\phi$. (The same conditions can also be derived by considering the order parameter $\phi$ changing with a local ``velocity'' $\dot{\phi}$.) For an extremum of the functional $F$, $\delta F = 0$ must be satisfied.
This requirement provides two conditions: \begin{subequations} \begin{equation}\label{eqBC1} \frac{\partial f}{\partial \phi} - \epsilon^{2} \nabla^{2} \phi = 0 \quad \text{in}~~ \Omega, \end{equation} \begin{equation} \label{eqBC2} \epsilon^{2} \nabla \phi \cdot \vec n = 0 \quad \quad \text{on}~~ \partial \Omega. \end{equation} \end{subequations} Multiplying Eq.~\eqref{eqBC1} by $\nabla \phi$ (and noting that, for a profile that varies only along its normal direction, $\nabla^{2} \phi \, \nabla \phi = \nabla (|\nabla \phi|^2)/2$), we find \begin{equation} \label{eqSBC1} \frac{\partial f}{\partial \phi} \nabla \phi = \epsilon^{2} \nabla^{2} \phi \nabla \phi = \frac{\epsilon^{2}}{2} \nabla (\left | \nabla \phi \right \vert^{2}), \end{equation} which can be rewritten as $\nabla f = \nabla ( \epsilon^2 | \nabla \phi |^{2})/2$. Integrating, with the integration constant vanishing because both $f$ and $\nabla \phi$ vanish in the bulk phases, we find a useful equality for deriving the contact angle boundary condition: \begin{equation} \label{eqSBC2} \left | \nabla \phi \right \vert = \frac{\sqrt{2 f}}{\epsilon}. \end{equation} In the smoothed boundary method, we introduce a domain parameter $\psi$ to incorporate boundary conditions into the original governing equation. As mentioned earlier, the level sets of this domain parameter $\psi$ describe the original boundaries and satisfy $\vec{n} = \nabla \psi/| \nabla \psi |$. On $\partial \Omega$, we impose a contact angle $\theta$. Thus, \begin{equation} \label{eqAngBC} \vec n \cdot \frac{\nabla \phi}{\left | \nabla \phi \right \vert} = \frac{\nabla \psi}{|\nabla \psi|} \cdot \frac{\nabla \phi}{|\nabla \phi|}= \cos \theta. \end{equation} Substituting Eq.~\eqref{eqSBC2} into Eq.~\eqref{eqAngBC}, one derives the following boundary condition formulation: \begin{equation} \label{eqAngBCF} \nabla \psi \cdot \nabla \phi = \left | \nabla \psi \right \vert \cos \theta \frac{\sqrt{2f}}{\epsilon}. \end{equation} This contact-angle boundary condition is similar to the one suggested by Warren et al.~\cite{Warren:2009} for a sharp wall, in which a Dirac delta function replaces $|\nabla \psi |$. The bulk chemical potential is defined by the variational derivative of the total free energy of the system: \begin{equation} \label{mu-1} \mu = \frac{\delta F}{\delta \phi} = \frac{\partial f}{\partial \phi} - \epsilon^2 \nabla^2 \phi, \end{equation} as it appeared in the first term of Eq.~\eqref{eqVD}. Multiplying both sides of Eq.~\eqref{mu-1} by the domain parameter $\psi$ gives \begin{equation} \label{SBM-mu-1} \psi \mu = \psi \frac{\partial f}{\partial \phi} - \psi \epsilon^2 \nabla^2 \phi = \psi \frac{\partial f}{\partial \phi} - \epsilon^2 \nabla \cdot ( \psi \nabla \phi ) + \epsilon^2 \nabla \psi \cdot \nabla \phi. \end{equation} We substitute the contact-angle boundary condition, Eq.~\eqref{eqAngBCF}, into the third term of Eq.~\eqref{SBM-mu-1} and obtain the smoothed boundary formulation for the chemical potential by dividing both sides by $\psi$: \begin{equation} \mu = \frac{\partial f}{\partial \phi} - \frac{\epsilon^2}{\psi} \nabla \cdot (\psi \nabla \phi) + \frac{\epsilon |\nabla \psi |}{\psi} \sqrt{2f} \cos{\theta}. \end{equation} For a nonconserved order parameter in phase field models, the evolution is governed by the Allen-Cahn equation \cite{Allen:1979}, in which the order parameter evolves according to the local chemical potential variation: \begin{equation} \label{AC-1} \frac{\partial \phi}{\partial t} = - M \mu = -M \bigg( \frac{\partial f}{\partial \phi} - \frac{\epsilon^2}{\psi} \nabla \cdot (\psi \nabla \phi) + \frac{\epsilon |\nabla \psi |}{\psi} \sqrt{2f} \cos{\theta} \bigg).
\end{equation} For a conserved order parameter, the evolution of the order parameter is governed by the divergence of the order-parameter flux, where the flux is proportional to the gradient of the chemical potential. This process is governed by the Cahn-Hilliard equation \cite{Cahn:1958,Cahn:1959}: \begin{equation} \label{CH-1} \frac{\partial \phi}{\partial t} = \nabla \cdot (M \nabla \mu), \end{equation} for which the smoothed boundary formulation is obtained as (see Section \ref{DiffEqn}) \begin{equation} \label{SBM-CH-1} \psi \frac{\partial \phi}{\partial t} = \nabla \cdot (\psi M \nabla \mu) - \nabla \psi \cdot (M \nabla \mu). \end{equation} Note that $-M \nabla \mu = \vec{j}$ is the flux of the conserved order parameter. Therefore, the second term represents the flux normal to the domain wall (equivalent to Eq.~\eqref{NBC1}): \begin{equation} \nabla \psi \cdot (M \nabla \mu) = -(\vec{j}\cdot \vec{n}) |\nabla \psi |. \end{equation} Substituting the flux across the domain wall, our final smoothed boundary formulation of the Cahn-Hilliard equation is then written as \begin{equation} \psi \frac{\partial \phi}{\partial t} = M \nabla \cdot \bigg[ \psi \nabla \bigg( \frac{\partial f}{\partial \phi} - \frac{\epsilon^2}{\psi} \nabla \cdot (\psi \nabla \phi) + \frac{\epsilon| \nabla \psi |}{\psi} \sqrt{2f} \cos{\theta} \bigg) \bigg] + |\nabla \psi |J_n, \label{SBM-CH-3} \end{equation} where $J_n = \vec{j}\cdot \vec{n}$. In practice, $\psi$ is given a very small cutoff value such that the terms containing $1/\psi$ can be numerically evaluated. For time dependent problems, the equation is divided by $\psi$ before numerical implementation. \section{Validation of the approach} We demonstrate the validity and accuracy of the approach using the bulk/surface diffusion formulations of Sections \ref{DiffEqn} and \ref{SurfDiffFormulation}, as well as the phase transformation formulation for three-phase systems of Section \ref{ContactAngleFormulation}. \subsection{1D Diffusion Equation} First, we perform a 1D simulation to demonstrate that the Neumann and Dirichlet boundary conditions are satisfied on two different sides of the domain. Fick's second law of diffusion, Eq.~\eqref{OGE2}, with the given source term is solved within the solid phase that is defined by $\psi=1$. The diffusion coefficient $D$ is set to 1, and the source $S$ is 0.02. On the right boundary of the solid, the gradient of $C$ is set to $-0.05$, while on the left boundary, the value of $C$ is set to 0.4. We perform the smoothed boundary formulation, as in the derivation of Eq.~\eqref{Ch6-SBM-02}, to obtain \begin{equation} \label{SBM-Diff-A} \psi^2 \frac{\partial C}{\partial t} = \psi \nabla \cdot (\psi\nabla C) - \psi [| \nabla \psi | (-0.05)]_r -[\nabla \psi \cdot \nabla (\psi C) - |\nabla \psi|^2(0.4)]_l +\psi^2 (0.02), \end{equation} where the subscripts `$r$' and `$l$' indicate the right and left interfaces. The solid region lies approximately between the 102nd and 298th grid points. We use a hyperbolic tangent form for the continuous domain parameter $\psi$, \begin{equation} \psi = \frac{1}{2}\{\tanh{[0.8(x-10)+1]}-\tanh{[0.8(x-30)+1]}\}, \end{equation} such that the interfacial thickness is approximately 6 grid spacings. The initial concentration is $C=0$ everywhere in the computational box. A standard central finite-difference scheme in space and the explicit Euler time-stepping scheme are employed in the simulation. The grid spacing is taken to be $\Delta x = 0.1$.
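A condensed sketch of this setup is given below (the interface profile, width parameter, cutoff value, time step, and iteration count are simplified choices of ours, so the snippet illustrates the structure of Eq.~\eqref{SBM-Diff-A} rather than reproducing the exact simulation). At steady state, $C \approx 0.4$ at the left interface and $dC/dx \approx -0.05$ at the right interface can be checked directly from the resulting array:
\begin{verbatim}
import numpy as np

dx, D, S = 0.1, 1.0, 0.02
x = np.arange(0.0, 40.0, dx)
w = 0.75                                 # interface width parameter (our choice)
psi = 0.5 * (np.tanh((x - 10.0) / w) - np.tanh((x - 30.0) / w))
dpsi = np.gradient(psi, dx)
left, right = x < 20.0, x >= 20.0        # masks selecting the two interfaces
psi_c = np.maximum(psi, 1.0e-3)          # small cutoff for the division by psi^2

def ddx(f):
    return np.gradient(f, dx)

C = np.zeros_like(x)
dt = 0.2 * dx**2 / D                     # explicit-stability margin
for step in range(400000):               # iterate toward steady state
    rhs = (psi * ddx(psi * D * ddx(C))
           - psi * np.abs(dpsi) * (-0.05) * right              # Neumann term
           - D * (dpsi * ddx(psi * C) - dpsi**2 * 0.4) * left  # Dirichlet term
           + psi**2 * S)
    C += dt * rhs / psi_c**2
\end{verbatim}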
Figure \ref{1D_demo} shows the concentration profiles taken at four different times (blue solid lines). The domain parameter is plotted as the red dashed line. At the right interface, it can be clearly observed that $dC/dx = -0.05$ at all times, except for a rapid change from $dC/dx=0$ to $dC/dx = -0.05$ in the very early transient period. In this early period, the concentration even takes negative values in order to satisfy the gradient boundary condition imposed at the right boundary. On the other hand, the concentration at the left interface remains at 0.4 during the entire diffusion process, except in the very early transient period during which $C$ changes from 0 to 0.4. This result clearly demonstrates that both the Neumann and Dirichlet boundary conditions are satisfied on the diffuse interfaces. \subsection{Surface Diffusion and Bulk Diffusion in a Cylinder} \label{bulkSurf_cylinder} To further demonstrate the validity of the smoothed boundary method, we apply the method to a cylinder, for which a cylindrical coordinate grid system can be used. We solve the coupled surface-bulk diffusion problem using both the smoothed boundary and standard sharp interface formulations on the same grid system for comparison. Again, we use a continuous domain parameter $\psi$ to define the solid region of the cylinder ($\psi = 1$ for the solid, and $\psi=0$ for the environment). The solid surface is then represented by $0<\psi<1$. For the smoothed boundary case, we solve Eq.~\eqref{SBM-FSL-1} using the central differencing scheme in space. The radial direction is discretized into 80 grid points, and the longitudinal direction is discretized into 600 grid points. The grid spacing $\Delta x$ is $1/60$, such that the radius of the cylinder is $R = 60\Delta x = 1$, and the length of the cylinder is $10R$. The thickness of the diffuse interface is approximately 4$\sim$5 grid spacings; thus, the characteristic thickness appearing in Eq.~\eqref{SBM-FSL-1} is set to $l= 4.5\Delta x = 0.075$. Here, we set the surface accumulation coefficient $L$ to 0 for simplicity. We investigate two cases: one with a low surface reaction rate, $\kappa = 2.1$, and the other with a high surface reaction rate, $\kappa = 1000$. To compare the results, we solve the original form of the coupled surface and bulk diffusion equations using the sharp interface approach with the same finite difference method. The same grid system is used, except that the cylinder surface is now explicitly placed at $R = 1$, where the boundary condition is imposed. In this case, we calculate the normal flux to the surface from the right-hand side of Eq.~\eqref{FSL-S1} on the cylinder-surface grid, and then use this flux as the boundary condition for the concentration evolution in the bulk and on the surface, Eq.~\eqref{FSL-B1}. Note that the characteristic thickness $l$ drops out of the sharp interface description as the limit $l \rightarrow 0$ is taken. Figures \ref{Cyl-Con}(a) and (b) show the concentration profiles in the cylinder at steady state obtained using the smoothed boundary method and the central finite-difference sharp interface method. For clarity, only the concentration in the region $0<z<5R$ is presented. The top rows in Figs.~\ref{Cyl-Con}(a) and (b) are the smoothed boundary results, and the bottom rows are the sharp interface results. The results from the two methods are clearly in excellent agreement. Shown in Figs.~\ref{Cyl-Err}(a) and (b) are the concentration profiles plotted along longitudinal lines at $r=0$, $r =2R/3$, and $r = R$.
Again, the plots show that the differences between the results from the two methods are small for the cylindrical geometry. As mentioned in a previous section, the error of the smoothed boundary method is proportional to the interfacial thickness. Based on our tests, we found that, even for an interfacial-thickness-to-radius ratio of around 1/5, the maximum error between the two methods, appearing near the surface, is still around 2\% (shown by the solid square markers), while the error in the bulk region is significantly smaller. If we select the interfacial-thickness-to-radius ratio to be 1/10, the maximum error over the entire solid region is on the order of $1\times10^{-3}$ (including the region near the surface). Another factor controlling the error is the number of grid points across the diffuse interface. From our numerical tests, we noticed that at least 4 grid points are required to properly resolve the sharp change in $\psi$ across the interface such that the errors remain reasonably small. In addition to the steady state solution, the transient solutions are also in excellent agreement during the entire diffusion process. This demonstrates that the smoothed boundary method can be employed to accurately solve coupled surface diffusion and bulk diffusion problems. \subsection{Contact Angle Boundary Condition} We perform a simple 2D simulation to validate the smoothed boundary formulation for the contact-angle boundary condition at the three-phase boundary. Equations \eqref{AC-1} and \eqref{SBM-CH-3} are tested for nonconserved and conserved order parameters, respectively. The computational box sizes are $L_x= 100$ and $L_y=100$, and the parameters used are $\Delta x= 1$, $M= 1$, and $\epsilon = 1$. On the computational box boundaries, the normal gradients of the order parameter are set to zero: $\partial \phi/\partial x = 0$ at $x = 0$ and $x=100$, and $\partial \phi/\partial y =0$ at $y=0$ and $y=100$. A horizontal flat wall is defined by a hyperbolic tangent function of the domain parameter $\psi$, \begin{equation} \psi = \frac{1}{2}\tanh{(y-30)}+\frac{1}{2}, \end{equation} such that $\psi=0.5$ at $y=30$ and $\psi$ gradually transitions from 0 below the wall to 1 above it. The wall thickness is approximately 5 grid spacings. The initial phase boundary is placed vertically at the middle of the domain ($x=50$), with phase 1 ($\phi=1$) and phase 0 ($\phi=0$) on the left and right halves, respectively. In the first case, with a nonconserved order parameter, we evolve Eq.~\eqref{AC-1} with a 60-degree contact angle. The result clearly shows a 60-degree contact angle at the three-phase boundary, as imposed (Fig.~\ref{CA-Validate}(a)). The angle can be measured from the intersection between the two contours $\psi = 0.5$ and $\phi=0.5$, as shown in Fig.~\ref{CA-Validate}(b). The 60-degree angle is maintained during the entire evolution, except for the very early transient period when the contact angle changes from 90 to 60 degrees. Due to the imposed contact angle, the initially flat phase boundary bends and creates a negative curvature of phase 1. As a result, the phase boundary moves toward phase 0, and eventually only phase 1 remains in the system. For the second case, with a conserved order parameter, we evolve Eq.~\eqref{SBM-CH-3} with a contact angle of 120 degrees. As expected, the phase boundary intersects the wall at a 120-degree contact angle (Fig.~\ref{CA-Validate}(c) and (d)).
In contrast to the Allen-Cahn type dynamics, due to the conservation of the order parameter, the phase boundary near the wall moves toward the left while the phase boundary away from the wall moves in the opposite direction. As a result, the phase boundary deforms into a curved shape. When the system reaches its equilibrium state, the phase boundary forms a circular arc with a uniform curvature everywhere along the phase boundary, such that the total surface energy is minimized (see Fig.~\ref{CA-Validate}(c) for $t=1.3\times10^5$). \section{Applications} While the details of the scientific calculations performed by applying these methods will be published elsewhere, it is worthwhile to show some of the results to demonstrate the potential of the method. \subsection{Surface-Reaction Diffusion Kinetics} \label{SOFC_Diff} The first example is ionic transport through a complex microstructure. Here, the ion diffusion is driven by a sinusoidal voltage perturbation. For the steady state solution, the time dependence of the form $\exp(\mathrm{i} \omega t)$, where $\omega$ is the angular frequency and $\mathrm{i}=\sqrt{-1}$, can be factored out, as in the equation derived by Lu et al.~\cite{Lu:2009}. As a demonstration, we solve for the steady state in the case without surface diffusion, and for the transient behavior in the case with surface diffusion. For the first case, the smoothed boundary formulated equation is given by \begin{equation} \label{SBM-Cx-1} \nabla \cdot (\psi \nabla \tilde{C}) - |\nabla \psi|\kappa \tilde{C} = \mathrm{i} \omega \psi \tilde{C}, \end{equation} where $\tilde{C}$ is the concentration amplitude consisting of a real and an imaginary part. This equation is solved by a standard alternating direction implicit (ADI) method with a second-order central-difference scheme in space ($\Delta x= 0.04$). For the transient solution, we keep the time dependence as is, and the smoothed boundary formulation is given by Eq.~\eqref{SBM-FSL-1}, in which surface diffusion, bulk diffusion and surface reactions are all considered. For simplicity, the surface accumulation term is ignored ($L=0$). Equation \eqref{SBM-FSL-1} is solved by a second-order central-difference scheme in space ($\Delta x= 0.04$) and the explicit Euler time-stepping scheme ($\Delta t = 0.01$). Here, we employed an Allen-Cahn type equation \cite{Beckermann:1999,Jamet:2008a,Jamet:2008b} to smooth the initially sharp boundaries of experimentally obtained 3D voxelated data ($\psi =1$ for the voxels in the cathode and $\psi=0$ for the voxels in the pores): \begin{equation} \label{S-1} \frac{\partial \psi}{\partial t} = -\frac{\partial f}{\partial \psi}+\epsilon^2 \nabla^2 \psi - \epsilon\sqrt{2f}\frac{|\nabla \psi | \nabla^2 \psi - \nabla \psi \cdot \nabla |\nabla \psi |}{|\nabla \psi|^2}\chi, \end{equation} where $f = \psi^2(1-\psi)^2$ is a typical double-well function, and $\epsilon$ is the gradient energy coefficient. The interfacial thickness ($0.1<\psi<0.9$) is given by $2\epsilon\sqrt{2}$. Note that the third term in Eq.~\eqref{S-1} removes the curvature effect, such that the location of $\psi = 0.5$ does not change during the smoothing process when $\chi=1$. The computational box contains 321, 160 and 149 grid points along the $x$, $y$ and $z$ directions, respectively.
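For orientation, a one-dimensional analogue of Eq.~\eqref{SBM-Cx-1} can be assembled and solved directly, as sketched below (the geometry is a toy profile with a single pore-like dip in $\psi$, and all numerical values are ours for illustration; the actual computation is three-dimensional and uses the ADI solver described above):
\begin{verbatim}
import numpy as np

N, Ly = 400, 6.4
dy = Ly / (N - 1)
y = np.linspace(0.0, Ly, N)
psi = 1.0 - 0.9 * np.exp(-((y - 3.2) / 0.5)**2)   # toy profile, pore-like dip
kappa, omega = 0.1, 0.55
psi_f = 0.5 * (psi[1:] + psi[:-1])                # psi at the cell faces
dpsi = np.abs(np.gradient(psi, dy))

# assemble d/dy(psi dC/dy) - |psi'|*kappa*C - i*omega*psi*C = 0
A = np.zeros((N, N), dtype=complex)
b = np.zeros(N, dtype=complex)
for i in range(1, N - 1):
    A[i, i - 1] = psi_f[i - 1] / dy**2
    A[i, i + 1] = psi_f[i] / dy**2
    A[i, i] = (-(psi_f[i - 1] + psi_f[i]) / dy**2
               - kappa * dpsi[i] - 1j * omega * psi[i])
A[0, 0] = A[-1, -1] = 1.0                 # Dirichlet rows on the box boundaries
b[0] = 1.0                                # C~ = 1 at y = 0, C~ = 0 at y = Ly
C_tilde = np.linalg.solve(A, b)           # complex concentration amplitude
\end{verbatim}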
Figure \ref{CRI-AC-1} shows the steady-state concentration for the case in which $D_b = 1$, $\kappa = 0.1$, $D_s= 0$ and $\omega=0.55$. The boundary conditions on the computational box are $\tilde{C} = 1$ at $y=0$, $\tilde{C} = 0$ at $y=6.4$, and no-gradient conditions on the remaining four sides. As shown in Fig.~\ref{CRI-AC-1}(a), the real part of the concentration decays from 1 to 0 across the complex cathode microstructure to satisfy the boundary conditions imposed on the computational box at $y=0$ and $y = 6.4$. For the imaginary part, the values at $y=0$ and $y=6.4$ remain at 0, as assigned. In the middle region, a negative value of the imaginary part occurs due to the phase shift resulting from the delayed response. Figure \ref{SurfDiff_1} shows the concentration distribution taken at two different times for the case in which $D_b = 1$, $\kappa = 2.1$ and $D_s= 10$, with DC loading ($\omega=0$). The boundary conditions on the computational box are the same as in the AC loading case above. An enhanced concentration along the irregular surface due to surface diffusion can be clearly observed in the intermediate stage, Fig.~\ref{SurfDiff_1}(a). As the concentration propagates through the bulk region, the system eventually approaches its steady state, and the concentration enhancement diminishes, Fig.~\ref{SurfDiff_1}(b). Figures \ref{SurfDiff_1}(c) and (d) are magnified views of (a) and (b), respectively. The smoothed boundary method can also be used to impose Dirichlet boundary conditions on irregular surfaces. For example, if the ion diffusivity is much higher in the electrolyte phase than in the cathode phase, the concentration in the electrolyte will be nearly uniform. To simulate this scenario, we impose a fixed concentration at the electrolyte-cathode contacting surface as the boundary condition. On the computational box boundaries, we set $C=0$ at $y=10.44$ and the no-flux boundary condition on the remaining five sides. The material parameters are selected to be $D_b= 1$, $\kappa=0$, and $D_s = 0$. Figure \ref{DiriBC} shows the simulation results for a pure bulk diffusion example with a fixed value $C=1$ imposed at the LSC (cathode)--YSZ (electrolyte) interfaces. In this case, since the contacting areas are small (compared to the cross-sectional area of LSC on the $x$-$z$ plane in Fig.~\ref{CRI-AC-1}(a)), ion diffusion along the lateral directions ($x$ and $z$) is significant. As a result, the concentration drops very rapidly within a short distance from the contacting areas. Therefore, the concentration distribution is very different from the ones shown in Figs.~\ref{CRI-AC-1} and \ref{SurfDiff_1}, where the cross-sectional areas at $y=0$ and $y=6.4$ ($x$-$z$ planes of the computational box) are approximately equal. \subsection{Kirkendall Effect Diffusion with a Moving Boundary Driven by Coupled Navier-Stokes-Cahn-Hilliard Equations} \label{Kirkendall effect deformation} The next application demonstrates the smoothed boundary method's broad applicability by applying it to the coupled Navier-Stokes-Cahn-Hilliard equations \cite{Gurtin:1996,Jacqmin:1999,Kim:2005,Zhou:2006,Villanueva:2008}. This particular formulation aims to solve diffusion problems exhibiting the Kirkendall effect, with vacancy sources and sinks in the bulk of the solid \cite{Kirkendall:1939,Kirkendall:1942,Smigelskas:1947,Darken:1948,AtomMovements:Bardeen}. In this case, the solid experiences deformation due to vacancy generation and elimination. The Navier-Stokes-Cahn-Hilliard equations are coupled to the smoothed boundary formulation of the diffusion equation of Section \ref{DiffEqn} as a model of plastic deformation due to volume expansion and contraction resulting from vacancy flow.
When the diffusing species of a binary substitutional alloy have different mobilities, the diffusion fluxes of the two species are unbalanced, creating a net vacancy flux toward the fast-diffuser side. Here, we denote the slow diffuser, fast diffuser and vacancy by $A$, $B$ and $V$, respectively. Due to the accommodation/supply of excess/depleted vacancies, the solid locally expands/shrinks \cite{Strandlund:2004,Larsson:2006,Strandlund:2006,Yu:2007} while the vacancy mole fraction is maintained at its thermal-equilibrium value. We treat the solid as a very viscous fluid \cite{Stephenson:1988,Boettinger:2005,Dantzig:2006,Boettinger:2007} with a much larger viscosity than the surrounding environment. In this case, we solve the Navier-Stokes-Cahn-Hilliard equations to update the shape of the material as follows \cite{Yu:2009a}: \begin{subequations} \begin{equation} \label{NS-CH-1} -\nabla P+\nabla \cdot (\eta \nabla \mathbf{v})+\nabla \bigg(\frac{2\eta}{d}S_V\bigg)+\frac{1}{C_a}\mu\nabla \psi=0, \end{equation} \begin{equation} \label{NS-CH-2} \nabla \cdot \mathbf{v}=-S_V, \end{equation} \begin{equation} \label{NS-CH-3} \frac{\partial \psi}{\partial t} -\mathbf{v}\cdot \nabla \psi = M \nabla^2\bigg( \frac{\partial f}{\partial \psi} -\epsilon^2 \nabla^2\psi\bigg), \end{equation} \end{subequations} where $P$ is the effective pressure, $\eta$ is the viscosity, $\mathbf{v}$ is the velocity vector, $d$ is the number of dimensions, and $C_a$ is the Cahn number, reflecting the capillary force compared to the pressure gradient. One great convenience of solving this type of phase field equation is that it automatically maintains the domain parameter in the form of a hyperbolic tangent function while updating the location of the diffuse interface. Note that we have ignored the inertial force in the Navier-Stokes equation to obtain Eq.~\eqref{NS-CH-1}, since the deformation is assumed to be a quasi-steady-state process. The vacancy generation rate that results in the local volume change is given by \begin{equation} S_V = -\frac{\nabla \cdot (D_{VB} \nabla X_B)}{\rho_l(1-X_V^{eq})}, \end{equation} where $X_B$ is the mole fraction of the fast diffuser, $X_V^{eq}$ is the thermal-equilibrium vacancy mole fraction, $D_{VB}$ is the diffusivity for the vacancy flux associated with $\nabla X_B$, and $\rho_l$ is the lattice site density. The fast-diffuser mole fraction evolution is governed by an advective Fick's diffusion equation, \begin{equation} \label{KE-Trad-1} \frac{\partial X_B}{\partial t} -\mathbf{v} \cdot \nabla X_B= \nabla \cdot (D_{BB}^V\nabla X_B) -X_B S_V, \end{equation} where $D_{BB}^V$ is the diffusivity for the fast-diffuser flux associated with $\nabla X_B$, and the advective term accounts for the lattice shift due to volume change. Since the diffusing species cannot leave the solid, a no-flux boundary condition is imposed at the solid surface. Thus, we obtain the smoothed boundary formulation of Eq.~\eqref{KE-Trad-1} as \begin{equation} \label{KE-Trad-SBM-1} \psi \bigg( \frac{\partial X_B}{\partial t} -\mathbf{v} \cdot \nabla X_B \bigg)= \nabla \cdot (\psi D_{BB}^V\nabla X_B) -\psi X_B S_V. \end{equation} As the concentration evolves, the shape of the solid is updated via Eq.~\eqref{NS-CH-3}, with Eqs.~\eqref{NS-CH-1} and \eqref{NS-CH-2} solved iteratively by applying a projection method \cite{Kim:2006a,Kim:2006b}. The slow and fast diffusers are initially placed in the left and right halves of the solid, respectively.
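The structure of one explicit evaluation of Eq.~\eqref{KE-Trad-SBM-1}, including the vacancy source $S_V$, is sketched below for a 2D grid (the function and its arguments are hypothetical conveniences of ours; the cutoff mirrors the small-$\psi$ regularization mentioned earlier):
\begin{verbatim}
import numpy as np

def kirkendall_rhs(X_B, psi, v, D_BB, D_VB, rho_l, X_V_eq, h):
    # right-hand side of Eq. (KE-Trad-SBM-1), divided through by psi (sketch)
    gX = np.gradient(X_B, h)                      # [dX_B/dx, dX_B/dy]
    # vacancy source S_V generated by the unbalanced fluxes
    S_V = -sum(np.gradient(D_VB * gX[i], h)[i] for i in range(2)) \
          / (rho_l * (1.0 - X_V_eq))
    # smoothed boundary divergence term, div(psi D grad X_B)
    div = sum(np.gradient(psi * D_BB * gX[i], h)[i] for i in range(2))
    adv = v[0] * gX[0] + v[1] * gX[1]             # lattice-shift advection
    psi_c = np.maximum(psi, 1.0e-3)               # cutoff for division by psi
    return adv + (div - psi * X_B * S_V) / psi_c, S_V
\end{verbatim}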
We use theoretically calculated diffusivities for this simulation \cite{Moleko:1989,Manning:1971,Belova:2000,VanDerVen:2009}. Figure \ref{Def_Con} shows snapshots of the concentration profiles (left column) and velocity fields (right column) from a 2D simulation. As the fast diffuser diffuses from the right to the left side, vacancy elimination and generation cause contraction and expansion on the right and left sides, respectively. As a result, the initially rectangular slab deforms into a bottle-shaped object. In another scenario, in which the vacancy diffusion length is comparable to or smaller than the distance between vacancy sources and sinks, the explicit vacancy diffusion process must be considered \cite{VanDerVen:2009,Yu:2008}. In this case, vacancies diffuse in the same manner as the atomic species. In the bulk of a solid devoid of vacancy sources/sinks, the concentration evolution is governed by \begin{subequations} \label{KE-Rig-1} \begin{equation} \frac{\partial X_V}{\partial t} = \nabla \cdot (D_{VV} \nabla X_V + D_{VB} \nabla X_B), \end{equation} \begin{equation} \frac{\partial X_B}{\partial t} = \nabla \cdot (D_{BV} \nabla X_V + D_{BB}^V \nabla X_B). \end{equation} \end{subequations} Since the solid surfaces are very efficient vacancy sources/sinks \cite{Yu:2009a,Yu:2009b}, we impose the thermal-equilibrium vacancy mole fraction at the solid surfaces as the Dirichlet boundary condition for solving Eq.~\eqref{KE-Rig-1}. In this case, the smoothed boundary formulation of Eq.~\eqref{KE-Rig-1} is given by \begin{subequations} \begin{equation} \psi^2 \frac{\partial X_V}{\partial t} = \psi \nabla \cdot [\psi (D_{VV} \nabla X_V+D_{VB} \nabla X_B)]-K, \end{equation} \begin{equation} \psi^2 \frac{\partial X_B}{\partial t} = \psi \nabla \cdot [\psi (D_{BV} \nabla X_V+D_{BB}^V \nabla X_B)]+\frac{X_B}{1-X_V^{eq}}K, \end{equation} \end{subequations} where $K = D_{VV} [\nabla \psi \cdot \nabla (\psi X_V)-|\nabla \psi|^2 X_V^{eq}]$. Since the vacancy generation and elimination in this scenario occur only on the solid surfaces, no internal volume change needs to be considered in the bulk. Therefore, instead of using a plastic deformation model as in the previous case, we adopt Cahn-Hilliard-type dynamics to track the shape change: \begin{equation} \frac{\partial \psi}{\partial t} = M\nabla^2 \mu+\frac{\nabla \psi}{|\nabla \psi |}\cdot \frac{\vec{J}_V}{1-X_{V}^{eq}}, \end{equation} where $\vec{J}_V = D_{VV}\nabla X_V + D_{VB}\nabla X_B$ is the vacancy flux, and the last term represents the normal velocity of the solid surfaces due to vacancy injection into, or ejection from, the solid. An example of the results obtained using this approach is the growth of a void in a rod \cite{Fan:2006,Fan:2007,Yu:2009b}. The above equations are solved using a central difference scheme in space and an implicit time-stepping scheme. The vacancy mole fraction is fixed at the void and cylinder surfaces. The fast diffuser initially occupies the center region, while the slow diffuser occupies the outer region. A void is initially placed off-center in the fast-diffuser region. Figure \ref{Hallow-1} shows snapshots of the fast-diffuser mole fraction profile and the vacancy mole fraction profile (normalized to its equilibrium value). As the fast diffuser diffuses outward, vacancies diffuse inward from the rod surface to the void surface, causing vacancy concentration enhancement and depletion in the center and outer regions, respectively.
To maintain the equilibrium vacancy mole fraction at the rod and void surfaces, vacancies are injected and ejected at those surfaces. As a result, the rod radius increases, and the void grows. Such dynamics was examined using a sharp interface approach \cite{Yu:2009b}, but this new method provides the flexibility in geometry to examine cases where a void initially forms off-center. \subsection{Thermal Stress} Since an SOFC operates at temperatures near 500$\sim$1000$^\circ$C, the thermal stress is important for analyzing mechanical failure. Here, we expand the generalized mechanical equilibrium equation, Eq.~\eqref{SBM-ME-2}, for a linear, elastic and isotropic solid. In this case, the elastic constant tensor is expressed by \begin{subequations} \begin{equation} \lambda_{11} = C_{1111} = C_{2222} = C_{3333}, \end{equation} \begin{equation} \lambda_{12} = C_{1122} = C_{2211} = C_{2233} = C_{3322} = C_{3311} = C_{1133}, \end{equation} \begin{equation} \begin{split} \lambda_{44} =& C_{1212} = C_{1221} = C_{2112} = C_{2121} = C_{2323} = C_{2332} \\= &C_{3223} = C_{3232} = C_{1313} = C_{1331} = C_{3113} = C_{3131}. \end{split} \end{equation} \end{subequations} The remainder of the elastic constant components vanish. We use the coordinate notation to replace the indices $i = 1$, $2$ and $3$ by $x$, $y$ and $z$, respectively. The infinitesimal displacements along the $x$, $y$ and $z$ directions are then replaced by $u$, $v$ and $w$, respectively. Thus, Eq.~\eqref{SBM-ME-2} in the three Cartesian directions is rewritten as \begin{subequations} \label{ME-ISO-1} \begin{equation} \label{ME-I1-3} \begin{split} \frac{\partial}{\partial x}\bigg[ \psi \lambda_{11} \bigg(\frac{\partial u}{\partial x} \bigg) \bigg] + \frac{\partial}{\partial x}\bigg[ \psi \lambda_{12} \bigg( \frac{\partial v}{\partial y} \bigg) \bigg] + \frac{\partial}{\partial x}\bigg[ \psi \lambda_{12} \bigg( \frac{\partial w}{\partial z} \bigg) \bigg] & + \\ \frac{\partial}{\partial y}\bigg[ \psi \lambda_{44} \bigg( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \bigg) \bigg] + \frac{\partial}{\partial z}\bigg[ \psi \lambda_{44} \bigg( \frac{\partial u}{\partial z} + \frac{\partial w}{\partial x} \bigg) \bigg] & + \\ |\nabla \psi | N_x = \frac{\partial}{\partial x} [ \psi \rho (\lambda_{11}+2 \lambda_{12}) ] &, \end{split} \end{equation} \begin{equation} \label{ME-I2-3} \begin{split} \frac{\partial}{\partial x}\bigg[ \psi \lambda_{44} \bigg( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \bigg) \bigg] & + \\ \frac{\partial}{\partial y}\bigg[ \psi \lambda_{12} \bigg( \frac{\partial u}{\partial x} \bigg) \bigg] + \frac{\partial}{\partial y}\bigg[ \psi \lambda_{11} \bigg( \frac{\partial v}{\partial y} \bigg) \bigg] + \frac{\partial}{\partial y}\bigg[ \psi \lambda_{12} \bigg( \frac{\partial w}{\partial z} \bigg) \bigg] & + \\ \frac{\partial}{\partial z}\bigg[ \psi \lambda_{44} \bigg( \frac{\partial v}{\partial z} + \frac{\partial w}{\partial y} \bigg) \bigg] & + \\ |\nabla \psi | N_y= \frac{\partial}{\partial y} [ \psi \rho (\lambda_{11}+2 \lambda_{12}) ] &, \end{split} \end{equation} \begin{equation} \label{ME-I3-3} \begin{split} \frac{\partial}{\partial x}\bigg[ \psi \lambda_{44} \bigg( \frac{\partial u}{\partial z} + \frac{\partial w}{\partial x} \bigg) \bigg] + \frac{\partial}{\partial y}\bigg[ \psi \lambda_{44} \bigg( \frac{\partial v}{\partial z} + \frac{\partial w}{\partial y} \bigg) \bigg] &+\\ \frac{\partial}{\partial z}\bigg[ \psi \lambda_{12} \bigg( \frac{\partial u}{\partial x} \bigg) \bigg] + 
\frac{\partial}{\partial z}\bigg[ \psi \lambda_{12} \bigg( \frac{\partial v}{\partial y} \bigg) \bigg] + \frac{\partial}{\partial z}\bigg[ \psi \lambda_{11} \bigg( \frac{\partial w}{\partial z} \bigg) \bigg] & + \\ |\nabla \psi |N_z= \frac{\partial}{\partial z} [ \psi \rho (\lambda_{11}+2 \lambda_{12}) ] &, \end{split} \end{equation} \end{subequations} for the $x$, $y$ and $z$ directions, respectively. To numerically solve this equation, we reorganize the terms in Eqs.~\eqref{ME-I1-3}--\eqref{ME-I3-3} to form \begin{equation} \label{LDOP-1} \mathcal{L}_1u = h_1,~\mathcal{L}_2v = h_2,~\text{and}~\mathcal{L}_3w = h_3, \end{equation} where $\mathcal{L}_1$, $\mathcal{L}_2$ and $\mathcal{L}_3$ are the linear differential operators associated with $u$, $v$ and $w$, respectively, in Eq.~\eqref{ME-ISO-1}. The right-hand sides, $h_1$, $h_2$ and $h_3$, are the remaining terms collected in Eq.~\eqref{ME-ISO-1}. The linear differential operators are discretized in the second-order central differencing scheme in space. We employ an ADI solver for the linear differential operators and iterate Eq.~\eqref{LDOP-1} until $u$, $v$ and $w$ all converge to their equilibrium values; a schematic sketch of this iteration is given at the end of this subsection. We select the material parameters as follows: $\alpha^{YSZ} \Delta T = 1\%$ and $\alpha^{LSC}\Delta T= 2\%$. The elastic constants are chosen arbitrarily as $\lambda_{11}^{YSZ}=20\times10^7$, $\lambda_{12}^{YSZ}=10\times10^7$, and $\lambda_{44}^{YSZ} = 5\times10^7$ (dimensionless) such that the solid is isotropic in mechanical behavior, $(\lambda_{11}-\lambda_{12})/(2\lambda_{44})=1$. The LSC (cathode) phase is softer than the YSZ (electrolyte) phase, and its elastic constant is assumed to be $0.75\lambda_{ij}^{YSZ}$. Again, we use domain parameters to indicate the YSZ phase ($\psi_{YSZ} = 1$ inside YSZ and $\psi_{YSZ} = 0$ outside YSZ) and LSC phase ($\psi_{LSC}=1$ inside LSC and $\psi_{LSC}=0$ outside LSC). The entire solid phase is then indicated by the sum of the two phases, $\psi = \psi_{YSZ}+\psi_{LSC} = 1$. The body force term and elastic constant tensor in Eq.~\eqref{ME-ISO-1} are replaced by an interpolated, spatially dependent thermal expansion, $\psi \rho = 0.01\psi_{YSZ} + 0.02\psi_{LSC}$, and elastic constant tensor, $\lambda_{ij} = \lambda_{ij}^{YSZ}\psi_{YSZ}+\lambda_{ij}^{LSC}\psi_{LSC}$. The solid surface is assumed to be traction-free, $N_i=0$. In this simulation, we select a computational box containing 160, 160, and 149 grid points along the $x$, $y$, and $z$ directions, respectively. The grid spacing is $\Delta x = 0.04$. We assume a rigid computational box with frictionless boundaries on the six sides, which means that $u=0$ while $v$ and $w$ are free on the two $y$-$z$ planes, $v=0$ while $u$ and $w$ are free on the two $x$-$z$ planes, and $w=0$ while $u$ and $v$ are free on the two $x$-$y$ planes of the computational box boundaries. Figure \ref{TherStress}(a) illustrates our experimentally obtained microstructure containing the cathode (LSC) and electrolyte (YSZ) phases. The yellow color indicates the cathode phase, and the cyan color indicates the electrolyte phase. Shown in Fig.~\ref{TherStress}(b) are the calculated Von-Mises stresses resulting from the thermal expansion. Due to the porosity, an overall stress enhancement occurs in the cathode phase, as can be observed from the overall light blue-green color. Figure \ref{TherStress}(c) shows the Von-Mises stress on the YSZ surface after rotating the volume 180$^\circ$ around the $z$-axis. Figure \ref{TherStress}(d) shows the Von-Mises stress in the LSC phase.
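As referenced above, the following is a schematic sketch of the iterative solution of Eq.~\eqref{LDOP-1}. For brevity, the sketch is a two-dimensional analogue relaxed with damped Jacobi sweeps rather than the ADI solver employed here; the elastic constants, grids, damping factor and tolerance are illustrative placeholders. The point of the example is the structure of the iteration: assemble the residuals of the reorganized mechanical equilibrium equations, update the displacements, impose the rigid frictionless-box conditions, and repeat until convergence.
\begin{verbatim}
import numpy as np

# Illustrative 2D setup (placeholders, not the 3D configuration above).
n, dx, omega = 64, 0.04, 0.8
lam11, lam12, lam44 = 20.0e7, 10.0e7, 5.0e7  # isotropic: (lam11-lam12)/(2*lam44) = 1
psi = np.ones((n, n))          # domain parameter (all solid in this sketch)
rho = 0.01 * np.ones((n, n))   # thermal expansion field, alpha * dT
u = np.zeros((n, n))           # displacement along x
v = np.zeros((n, n))           # displacement along y

def d_dx(f): return np.gradient(f, dx, axis=0)
def d_dy(f): return np.gradient(f, dx, axis=1)

diag = 2.0 * (lam11 + lam44) / dx**2  # magnitude of the discrete diagonal
for it in range(2000):
    # Residuals of the 2D analogues of the x and y equations,
    # with traction-free surfaces (N_x = N_y = 0).
    rx = (d_dx(psi * lam11 * d_dx(u)) + d_dx(psi * lam12 * d_dy(v))
          + d_dy(psi * lam44 * (d_dy(u) + d_dx(v)))
          - d_dx(psi * rho * (lam11 + 2.0 * lam12)))
    ry = (d_dx(psi * lam44 * (d_dy(u) + d_dx(v)))
          + d_dy(psi * lam12 * d_dx(u)) + d_dy(psi * lam11 * d_dy(v))
          - d_dy(psi * rho * (lam11 + 2.0 * lam12)))
    du, dv = omega * rx / diag, omega * ry / diag
    u += du
    v += dv
    # Rigid, frictionless box: u = 0 on the x = const edges,
    # v = 0 on the y = const edges.
    u[0, :] = u[-1, :] = 0.0
    v[:, 0] = v[:, -1] = 0.0
    if max(np.abs(du).max(), np.abs(dv).max()) < 1.0e-10:
        break
\end{verbatim}
In practice, each relaxation sweep here would be replaced by the implicit, direction-by-direction ADI solves, which converge far more quickly for the stiff operators involved.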
Three types of stress enhancements can be noticed from the simulation result. At the cathode-electrolyte contacting surfaces, stress is enhanced due to the mismatch of thermal expansion and elastic constants between the two materials (see the red arrows in Figs.~\ref{TherStress}(c) and (d)). The second is the concentrated stress observed at the grooves on the electrolyte surface (not contacting the cathode) as shown in the white arrows in Fig.~\ref{TherStress}(c). The third type is the stress concentration effect at the bottlenecks in the cathode phase as shown in the green arrows in Fig.~\ref{TherStress}(d). The simulation results demonstrate that the smoothed boundary method can properly capture the linear elasticity behavior and the geometric effects based on a diffuse-interface defined geometry. \subsection{Phase Transformations in the Presence of a Foreign Surface} \label{Contact angle AC} The Allen-Cahn equation describes the dynamics of a nonconserved order parameter, which can be taken as a model for the ordering of magnetic moments \cite{Chen:2002} and diffusionless phase transformations that involve only crystalline order change \cite{Chen:2002}. It can also be used as a model for evaporation-condensation dynamics \cite{Chen:2002,Emmerich:2003}. Here, we use the Allen-Cahn equation to examine the evaporation of a droplet on a rough surface. The domain parameter is given a ripple-like feature as shown in Fig.\ \ref{droplet}, and pre-smoothed using Allen-Cahn dynamics, Eq.~\eqref{S-1}. The droplet phase is placed on top of the boundary, and its shape is evolved by the smoothed boundary formulation of the Allen-Cahn equation, Eq.\ \eqref{AC-1}. The simulation is performed in two dimensions using parameters $\Delta x = 1$, $M=1$ and $\epsilon=1$ with a domain size of $L_x=100$ and $L_y=100$. The evolution of the droplet surface as it evaporates is illustrated in Fig.\ \ref{droplet}(a) as a contour ($\phi=0.5$) plotted at equal intervals of 270 in dimensionless time. The blue to red colors indicate the initial to final stages. As it evolves, it is clear that the contact angle is maintained, as shown in Fig.\ \ref{droplet}(b). The dynamics of the motion of the three-phase boundary is interesting in that the velocity changes depending on the angle of the surface (with respect to the horizontal axis), which can be inferred from the change in the density of the contours. Since the interfacial energy is assumed to be constant, the droplet would prefer to have a circular cap shape. However, the contact angle imposes another constraint at the three-phase boundary. When the orientation of the surface is such that both of these conditions are nearly met, the motion of the three-phase boundary is slow while the droplet continues to evaporate. When the orientation becomes such that the shape of the droplet near the three-phase boundary must be deformed (compared to the circular cap), the three-phase boundary moves very quickly. This leads to an unsteady motion of the three-phase boundary. On the other hand, at the top of the droplet far from the substrate, the curvature is barely affected by the angle of the substrate surface; thus, the phase boundary there moves at a speed inversely proportional to the radius. \subsection{Motion of a Droplet due to Unbalanced Surface Tensions} \label{Contact angle CH} As another application, we have modeled a self-propelling droplet. 
Here, two different contact-angle boundary conditions are imposed on the right and left sides of the droplet placed on a flat surface. The smoothed boundary formulation of the Cahn-Hilliard equation, Eq.~\eqref{SBM-CH-3}, is used with $J_{n} = 0$ in this simulation. The parameters used are $\Delta x = 1$, $M=1$, and $\epsilon=1$, and the domain sizes are $L_x=240$ and $L_y=60$. The contact angle on the right side of the droplet is set to 45 degrees and that on the left side to 60 degrees by imposing position-dependent boundary conditions. Note that this setup is equivalent to the situation in which the wall-environment, droplet-wall and droplet-environment surface energies satisfy the conditions of Young's equation as \begin{subequations} \begin{equation} \gamma_{we}-\gamma_{wd} = \gamma_{de} \cos{60^{\circ}} ~~\text{for the left side}, \end{equation} \begin{equation} \gamma_{we}-\gamma_{wd} = \gamma_{de} \cos{45^{\circ}}~~\text{for the right side}, \end{equation} \end{subequations} where $\gamma_{we}$, $\gamma_{wd}$ and $\gamma_{de}$ are the surface energies of the wall-environment, droplet-wall and droplet-environment surfaces, respectively. Therefore, this model can be used to simulate a case where the surface energies are spatially and/or temporally dependent on other fields, such as surface temperature or surface composition, as in Ref.~\cite{Tersoff:2009}. The evolution of the droplet surface is illustrated in Fig.~\ref{self-propel-droplet}. The droplet initially has the shape of a hemisphere, with a 90-degree contact angle with the wall surface. The early evolution is marked by the evolution of the droplet shape as it relaxes to satisfy the contact-angle boundary condition, as seen in Fig.~\ref{self-propel-droplet}(a). Then the droplet begins to accelerate. Once the contact angle reaches the prescribed value, it is maintained as the droplet moves toward the right (see Fig.~\ref{self-propel-droplet}(b)). In the steady state, the droplet moves at a constant speed in the absence of other effects. Such motions of droplets have been observed and explained as a result of an unbalanced surface tension between the head portion (with a dry surface) and the tail portion (with a wet surface) due to the resulting spatially varying composition and composition-dependent surface energy \cite{Tersoff:2009}. Figure \ref{relaxingdroplet} shows the relaxation of an initially hemispherical droplet on an irregular substrate surface in a 3D simulation. The contact-angle boundary condition imposed at the three-phase boundary is 135 degrees. The computational box sizes are $L_x=L_y=120$ and $L_z= 80$. As can be seen, the droplet changes its shape to satisfy the imposed contact angle, and the droplet evolves to a shape for which the total surface energy is minimized. The behavior favoring dewetting imposed by the contact angle ($\theta > 90^{\circ}$) is properly reflected in the lifting of the droplet, as shown in Figs.~\ref{relaxingdroplet}(a)--(c) and (d)--(f). During this relaxation process, the three-phase boundary shrinks toward the center as the droplet-wall contacting area decreases, as shown in Figs.~\ref{relaxingdroplet}(a)--(c). \section{Discussion and Conclusions} In this paper, we have demonstrated a generalized formulation of the smoothed boundary method. This method can properly impose Neumann and/or Dirichlet boundary conditions on a diffuse interface for solving partial differential equations within the region where the domain parameter $\psi$ uniformly equals 1.
The derivation of the method, as well as its implementation, is straightforward. It can numerically solve differential equations without complicated and time-consuming meshing of the domain of interest since the domain boundary is specified by a spatially varying function; instead, any grid system, including a regular Cartesian grid system, can be used with this method. This smoothed boundary approach is flexible in coupling multiple differential equations. We have demonstrated how this method can couple bulk diffusion and surface diffusion into one single equation while the two equations serve as the boundary condition for each other in Section \ref{SurfDiffFormulation}. In principle, this method can couple multiple differential equations in different regions that are defined by different domain parameters. For example, the physics within a domain defined by $\psi_i = 1$ is governed by a differential equation $H_i$. The overall phenomenon will then be represented by $H = \sum_{i} \psi_i H_i$, where the subscript `$i$' denotes the $i$-th domain, and $\sum_{i} \psi_i = 1$ represents the entire computational box. Where domains share diffuse interfaces, the physical quantities can be connected to one another as boundary conditions for the equation in each domain. Therefore, this method could be used to simulate coupled multi-physics and/or multiple-domain problems, such as fluid-solid interaction phenomena or diffusion in hetero-polycrystalline solids. We have also demonstrated the capability of applying the smoothed boundary method to moving boundary problems in Section \ref{Kirkendall effect deformation}. When the locations of domain boundaries are updated by a phase-field type dynamics such that the domain parameter remains uniformly at 1 and 0 on each side of the interface, the smoothed boundary method can be conveniently employed to solve differential equations with moving boundaries. In addition to Neumann and Dirichlet boundary conditions, we have also shown the capability of the smoothed boundary method for specifying contact angles between the phase boundaries and domain boundaries (Sections \ref{Contact angle AC} and \ref{Contact angle CH}). This type of boundary condition is difficult to impose using conventional sharp interface models. Although the smoothed boundary method has many advantages, as shown in Section \ref{DiffEqn}, the nature of the diffuse interface inevitably introduces an error proportional to the interfacial thickness since we smear an originally zero-thickness boundary into a finite-thickness interface. Another error results from the resolution of the rapid transition of the domain parameter across the interfacial region. When numerically solving the smoothed-boundary formulated equations, properly capturing the gradient of the domain parameter across the interface becomes very important. From our experience, at least 4$\sim$6 grid points are necessary to resolve the diffuse interfaces such that the errors are controlled. Moreover, when solving time dependent equations, a singularity occurs because of the $1/\psi$ and $1/\psi^2$ terms for imposing Neumann and Dirichlet boundary conditions, respectively. In practice, cutoffs at small $\psi$ values are necessary to avoid numerical instabilities. These cutoff values can be smaller as the diffuse interface is better resolved, i.e., by using more grid points across the interface. However, only a small number of grid points will typically be used across the interface for computational efficiency.
In our simulations, when 4$\sim$6 grid spacings are used for the interfacial regions, the cutoff values are around $1\times10^{-6}\sim1\times10^{-8}$ for the Neumann boundary condition and $1\times10^{-2}\sim1\times10^{-4}$ for the Dirichlet boundary condition to maintain numerical stability while keeping the errors reasonably small. On the other hand, when solving time independent equations such as the mechanical equilibrium equation and the steady state diffusion equation, there are no singular terms in the equations. The cutoff value is simply used to avoid the singularity of the matrix solver. In this case, the cutoff value can be as small as the order of numerical precision, such as $1\times10^{-16}$. All of these numerical instability and error behaviors require more systematic and theoretical studies; thus, the interfacial thickness and resolution should be optimized in future work. Based on the general nature of the derivation, the smoothed boundary method is applicable to generalized boundary conditions (including time-dependent boundary values that are important for simulating the evolution of many physical systems). Since the domain boundaries are not specifically defined in the smoothed boundary method, this method can be applied to almost any geometry as long as it can be defined by the domain parameter. This is very powerful and convenient for solving differential equations in complex geometries that are often difficult and time-consuming to mesh. As three-dimensional image-based calculations become more prevalent in scientific and engineering research fields \cite{Thornton:2008}, in which voxelated data from serial scanning or sectioning are often utilized and are difficult to render as meshes, the smoothed boundary method is expected to be widely employed to simulate and study physics in complex geometries defined by 2D pixelated and 3D voxelated data with a simple process of smoothing the domain boundaries. \textbf{Acknowledgements}: HCY and KT thank the National Science Foundation for financial support under Grant Nos. 0511232, 0502737, and 0854905. HCY and KT thank the National Science Foundation for financial support under Grant Nos. 0542619 and 0907030. The authors thank John Lowengrub, Axel Voigt, Xiaofan Li, Anton Van der Ven and James Warren for valuable discussions and comments. The authors also thank Scott Barnett and Stu Adler for providing the experimental 3D microstructures used in the demonstration.
\section{Introduction} \label{Sec:Introduction} Cool subdwarfs are the oldest members of the low-mass stellar population, with spectral types of K and M, masses between $\sim$0.6 and $\sim$0.08 M$_\sun$, and surface effective temperatures between $\sim$4000 and $\sim$2300 K \citep{kal09}. The term, first coined by \citet{kui39}, describes the low-luminosity, metal-poor ([Fe/H] $<$ -1) spectral counterparts to the main sequence dwarfs. On a color-magnitude diagram, subdwarfs lie between white dwarfs and the main sequence \citep{adam15}. With decreased metal opacity, subdwarfs have smaller stellar radii and are bluer at a given luminosity than their main sequence counterparts \citep{sand59}. These low-mass stars are members of the Galactic halo and have higher systematic velocities and proper motions than disk dwarf stars. Traditionally, subdwarfs have been identified using high proper motion surveys. Although 99.7\% of stars in the Galaxy are disk main sequence stars, subdwarfs appear in disproportionately high numbers in these high proper motion surveys \citep{reid05}. The search for companions to stars of different masses gives clues to the star formation process, as any successful model must account for both the frequency of multiple star systems and the properties of the systems. In addition, monitoring the orbital characteristics of multiple star systems yields information otherwise unattainable for single stars, such as the relative brightness and masses of the components \citep{good07a}, lending further constraints to mass-luminosity relationships \citep{chabrier00}. Old Population II stars are important probes of the early history of star formation in the Galaxy \citep{zhang13}. The formation process of low mass stars remains less well understood than that of solar-like stars. Although multiple indications suggest they form as the low-mass tail of regular star formation \citep{bourke06}, other mechanisms have been proposed for some or all of these objects \citep{good07b, thies07, basu12}. A firm binary fraction for low-metallicity cool stars could assist in constraining various formation models. This again motivates the need for a comprehensive binarity survey, sensitive to small angular separations. The multiplicity of main sequence dwarfs has been well explored in the literature. A consistent trend that has prevailed is that the percentage of stars with stellar companions seems to depend on the mass of the stars. For AB-type stars, \citet{pete12} used a sample of 148 stars to determine a companion fraction of $\sim$70\%. For solar type stars (FGK-type), around 57\% have companions \citep{duq91}, although \citet{rag10} have revised the fraction down to $\sim$46\%. Fischer and Marcy (1992) looked at M-dwarfs and found a multiplicity fraction of 42$\pm$9\%. More recently, \citet{janson12} find a binary fraction for late K- to mid M-type dwarfs of 27 $\pm$ 3$\%$ from a sample of 701 stars. For late M-dwarfs, a slightly lower fraction of 7$\pm$ 3$\%$ was found by \citet{law06b}. Extending their previous study to mid/late M-type dwarfs, M5-M8, \citet{janson14} find a multiplicity fraction of 21\%-27\% using a sample of 205 stars. While the multiplicity of dwarf stars has been heavily studied with comprehensive surveys, detailed multiplicity studies of low-mass subdwarfs have, historically, been hindered by their low luminosities and relative rarity in the solar neighborhood. Within 10 pc, there are three low-mass subdwarfs, compared to 243 main sequence stars \citep{monteiro06}.
Subsequently, multiplicity surveys of cool subdwarfs have been relatively small. The largest, a low angular resolution search by \citet{zhang13}, mined the Sloan Digital Sky Survey \citep{york00} to find 1826 cool subdwarfs, picking out subdwarfs by their proper motions and identifying spectral types by fitting an absolute magnitude-spectral type relationship. They find 45 subdwarf multiple systems in total, 30 with wide companions and 15 with partially resolved companions. When adjusting for the incompleteness of their survey, a binary fraction of $>$10\% is estimated. The authors note the need for a high spatial resolution imaging survey to search for close binaries ($<$100 AU) and put tighter constraints on the binary fraction of cool subdwarfs. The high-resolution subdwarf surveys completed thus far have been comparatively small. \citet{gizis00} detected no companions in a sample of eleven cool subdwarfs. \citet{riaz08} similarly found no companions in a sample of nineteen M subdwarfs using the \textit{Hubble Space Telescope}. \citet{lodieu09} reported one companion in a sample of 33 M type subdwarfs. \citet{jao09} found four companions in a sample of 62 cool subdwarf systems. With the high variance in small number statistics, the relationship between dwarf and subdwarf multiplicity fractions remains inconclusive. We present here the largest high resolution cool subdwarf multiplicity survey yet performed, making use of the efficient Robo-AO system. The Robo-AO system allows us to detect cooler and closer companion stars in a much larger sample than previously possible. This survey combines previously known wide proper-motion pairs, spectroscopic binaries, and high angular resolution images able to detect companions with $\rho$ $\ge$ 0$\farcs$15 and $\Delta m_i \le$ 6. The paper is organized as follows. In Section \ref{sec:Targetselection} we describe the target selection, the Robo-AO system, and follow-up observations. In Section \ref{sec:Data} we describe the Robo-AO data reduction and the companion detection and analysis. In Section \ref{sec:Discoveries} we describe the results of this survey, including discovered companions, and compare to similar dwarf surveys. The results are discussed in Section \ref{sec:Discussion} and put in the context of previous literature. We conclude in Section \ref{sec:Conclusions}. \section{Survey Targets and Observations} \label{sec:Targetselection} \subsection{Sample Selection} \begin{figure} \centering \includegraphics[width=245pt]{redpropmotion_subdwarfs.eps} \caption{Reduced proper motion diagram of the complete rNLTT \citep{gould03}, with our observed subdwarfs marked by red \textit{X}'s. The discriminator lines between solar-metallicity dwarfs, metal-poor subdwarfs, and white dwarfs are at $\eta$ = 0 and 5.15, respectively, with \textit{b}=$\pm$30. The subdwarfs plotted make use of the improved photometry of \citet{marshall07}.} \label{fig:redprop} \end{figure} \begin{figure} \centering \includegraphics[width=241pt]{histogram_colorvmag.eps} \caption{(\textit{a}) Histogram of magnitudes in the V band of the 348 observed subdwarfs. (\textit{b}) Histogram of the $(V - J)$ colors of the observed subdwarf sample, with the approximate spectral type regions K and M marked.
Both plots use the photometry of \citet{marshall07}.} \label{fig:hist_colorvmag} \end{figure} \begin{table} \renewcommand{\arraystretch}{1.3} \begin{longtable}{ll} \caption{\label{tab:survey_specs}The specifications of the Robo-AO subdwarf survey} \\ \hline Filter & Sloan \textit{i}\textsuperscript{$\prime$}-band \\ FWHM resolution & 0$\farcs$15 \\ Field size & 44\arcsec $\times$ 44\arcsec\\ Detector format & 1024$^2$ pixels\\ Pixel scale & 43.1 mas / pix\\ Exposure time & 120 seconds \\ Subdwarf targets & 344 \\ Targets observed / hour & 20\\ Observation dates & September 1 2012 --\\ & August 21 2013\\ \hline \label{tab:specs} \end{longtable} \end{table} \begin{figure*}[!htb] \centering \includegraphics*[width=400pt]{psf_paper.eps} \caption{Example of PSF subtraction on NLTT31240 with a companion separation of 0$\farcs$74. The red X marks the position of the primary star's PSF peak. Successful removal of the PSF leaves residuals consistent with photon noise.} \label{fig:psf} \end{figure*} We selected targets from the 564 spectral type F- through M-subdwarfs studied by \citet{marshall07}. These targets were selected from the New Luyten Two-Tenths (NLTT) catalog \citep{luyten79, luyten80} of high proper motion stars ($>$0.18 arcsec/year) using a reduced proper motion diagram (RPM). To distinguish subdwarf stars from their solar-metallicity counterparts on the main sequence, the RPM used a $(V-J)$ optical-infrared baseline, a technique first used by \citet{salim02}, rather than the shorter $(B-R)$ baseline used by Luyten. This method uses the high proper motion as a proxy for distance and the blueness of subdwarfs relative to equal luminosity dwarf stars to separate out main sequence members of the local disk and the halo subdwarfs \citep{marshall08}. The reduced proper motion, H$_M$, is defined as \begin{equation} H_{M} = m + 5\log\mu + 5 \end{equation} where $m$ is the apparent magnitude and $\mu$ is the proper motion in $\arcsec$/yr. The discriminator, $\eta$, developed by Salim \& Gould to separate luminosity classes, is defined as \begin{equation} \eta(H_{V},V-J,\sin b)=H_{V}-3.1(V-J) - 1.47|\sin b| - 7.73 \end{equation} where \textit{b} is the Galactic latitude; a brief numerical illustration of this criterion is given below. The reduced proper motion diagram for the revised NLTT (rNLTT) catalog \citep{gould03} and our subdwarf targets is presented in Figure$~\ref{fig:redprop}$. The improved photometry of \citet{marshall07} placed 12 of the originally suspected subdwarfs outside the subdwarf sequence. These stars were rejected from our sample. Of the 552 subdwarfs confirmed by Marshall, a randomly selected sample of 348 K- and M-subdwarfs was observed by Robo-AO when time was available between other high-priority surveys. The V-band magnitudes and $(V-J)$ colors of the observed subdwarf sample are shown in Figure$~\ref{fig:hist_colorvmag}$. \subsection{Observations} \subsubsection{Robo-AO} We obtained high-angular-resolution images of the 348 subdwarfs during 32 separate nights of observations between 2012 September 3 and 2013 August 21 (UT). The observations were performed using the Robo-AO laser adaptive optics system \citep{baranec13, baranec14, riddle12} mounted on the Palomar 60 inch telescope. Robo-AO, the first robotic laser guide star adaptive optics system, can efficiently execute large, high-resolution surveys. All images were taken using the Sloan \textit{i}\textsuperscript{$\prime$}-band filter \citep{york00} with exposure times of 120 s.
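As a brief numerical illustration of the selection criterion above, the following sketch computes $H_V$ and $\eta$ for a single hypothetical high proper motion star and classifies it using the thresholds at $\eta = 0$ and $5.15$ shown in Figure~\ref{fig:redprop}; the example star's photometry is invented for illustration and is not a survey target.
\begin{verbatim}
import numpy as np

def reduced_proper_motion(m, mu):
    """H_M = m + 5 log10(mu) + 5, with mu in arcsec/yr (Equation 1)."""
    return m + 5.0 * np.log10(mu) + 5.0

def eta(H_V, V_minus_J, sin_b):
    """Salim & Gould discriminator of Equation 2."""
    return H_V - 3.1 * V_minus_J - 1.47 * abs(sin_b) - 7.73

# Hypothetical star: V = 14.2, mu = 0.35 arcsec/yr, (V-J) = 2.4, b = 42 deg.
H_V = reduced_proper_motion(14.2, 0.35)
e = eta(H_V, 2.4, np.sin(np.radians(42.0)))

if e > 5.15:
    cls = "white dwarf"
elif e > 0.0:
    cls = "subdwarf candidate"
else:
    cls = "main-sequence dwarf"
print(f"H_V = {H_V:.2f}, eta = {e:.2f}: {cls}")  # eta ~ 0.8: subdwarf candidate
\end{verbatim}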
Typical seeing at the Palomar Observatory is between 0$\farcs$8 and 1$\farcs$8, with a median around 1$\farcs$1 \citep{baranec14}. The typical (diffraction-limited) FWHM resolution of the Robo-AO system is 0$\farcs$12-0$\farcs$15. Specifications of the Robo-AO system are summarized in Table$~\ref{tab:specs}$. The images were reduced by the Robo-AO imaging pipeline described in \citet{law06a, law06b, law09, law14}. The EMCCD frames are dark-subtracted and flat-fielded and then, using the Drizzle algorithm \citep{fruchter02}, stacked and aligned, while correcting for image motion using a bright star in the field. The algorithm also introduces a factor-of-two up-sampling to the images. Since the subdwarf targets are in relatively sparse stellar fields, for the majority of the images the only star visible is the target star, which was thus used to correct for the image motion. \subsubsection{Keck LGS-AO} Six candidate multiple systems were selected for re-imaging by the NIRC2 camera behind the Keck II laser guide star adaptive optics system \citep{wiz06, vandam06}, located on Maunakea, Hawaii, on 2014 August 17 (UT) to confirm possible companions. The targets were selected for their low detection significance, either because of a low contrast ratio or a small angular separation. The observations were done in the K\textsuperscript{$\prime$} and H bands with three 90 s exposures for two targets and three 30 s exposures for five targets in a 3-position dither pattern that avoided the noisy, lower-left quadrant. We used the narrow camera setting (0$\farcs$0099/px), which gave a single-frame field of view of 10\arcsec $\times$ 10\arcsec. \subsubsection{SOAR Goodman Spectroscopy} We took spectra of 24 of the subdwarfs using the Southern Astrophysical Research Telescope and the Goodman Spectrograph \citep{clemens04} on 2014 July 15. We observed twelve targets with companions and twelve single stars as references. The spectra were taken using a 930 lines/mm grating with 0.42 \AA/pixel, a 1$\farcs$07 slit, and exposure times of 480 s. \section{Data Reduction and Analysis} \label{sec:Data} \subsection{Robo-AO Imaging} \subsubsection{Target Verification} To verify that each star viewed in the image is the desired subdwarf target, we created Digital Sky Survey cutouts of similar angular size around the target coordinates. Each image was then manually checked to ensure no ambiguity in the target star. The vast majority of the targets are in relatively sparse stellar regions. Four target stars in crowded fields whose identification was ambiguous were discarded, leaving 344 verified subdwarf targets. \subsubsection{PSF Subtraction} To locate close companions, a custom locally optimized point spread function (PSF) subtraction routine \citep{law14} based on the Locally Optimized Combination of Images algorithm \citep{lafreniere07} was applied to centered cutouts of all stars. Other subdwarf observations taken at similar times were used as references, as it is unlikely that a companion would be found at the same position for two different targets. For each target image and for 20 reference images selected as the closest to the target image in observation time, the region around the star was subdivided into polar sections, five up-sampled pixels in radius and 45$^{\circ}$ in angle. A locally optimized estimate of the PSF for each section was then generated using a linear combination of the reference PSFs.
The algorithm begins with an average over the reference PSFs, then uses a downhill simplex algorithm to optimize the contributions from each reference image to find the best fit to the target image. The optimization is done on several coincident sections simultaneously to minimize the probability of subtracting out a real companion, with only the central region output to the final PSF. This also provides smoother transitions between adjacent sections, as many of the image pixels are shared in the optimization. After iterating over all sections of the image, the final PSF is an optimal local combination of all the reference PSFs. This final PSF is then subtracted from the original target image, leaving residuals that are consistent with photon noise. Figure$~\ref{fig:psf}$ shows an example of the PSF subtraction performance. We manually checked the final subtracted images for close companion detections ($>$5$\sigma$). The initial search was limited to a detection radius of 1$\arcsec$ from the target star. We subsequently performed a secondary search out to a radius of 2$\arcsec$. \subsubsection{Imaging Performance Metrics} \label{sec:imageperf} The two dominant factors that affect the image performance of the Robo-AO system are seeing and target brightness. To further classify the image performance for each target, an automated routine was run on all images. Described in detail in \citet{law14}, the code uses two Moffat functions fit to the PSF to separate the widths of the core and halo. We found that the core size was an excellent predictor of the contrast performance, and used it to group targets into three levels (low, medium and high). Counter-intuitively, the PSF core size decreases as image quality decreases. This is caused by poor S/N in the shift-and-add image alignment of the EMCCD frames. The frame alignment subsequently locks onto photon noise spikes, leading to single-pixel-sized spikes in the images \citep{law06b,law09}. The images with diffraction-limited core sizes ($\sim$0.15\arcsec) were assigned to the high-performance group, with smaller cores assigned to lower-performance groups. Using a companion-detection simulation with a group of representative targets, we determine the angular separation and contrast consistent with a 5$\sigma$ detection. For clarity, the contrast curves of the simulated targets are fitted with functions of the form $a - b/(r - c)$ (where \textit{r} is the radius from the target star and \textit{a, b,} and \textit{c} are fitting variables). Contrast curves for the three performance groups are shown in Section$~\ref{sec:Discoveries}$. \begin{figure} \centering \includegraphics[width=250pt]{sgi3075_2.eps} \caption{The extracted spectrum for NLTT52532 showing subdwarf characteristics, most notably the weakness of the 7050~\AA\ TiO band and the strength of the 6380~\AA\ CaH band.
The y-axis is given in normalized arbitrary flux units.} \label{fig:spect} \end{figure} \begin{table} \begin{center} \caption{Full SOAR Spectroscopic Observation List} \begin{tabular}{cccc} \hline \hline \noalign{\vskip 3pt} \text{NLTT} & \text{m$_v$} & \text{ObsID} & \text{Companion?}\\ [0.2ex] \hline \\ [-1.5ex] 2205 & 14.0 & 2014 Jul 14 & yes\\ 7301 & 14.9 & 2014 Jul 14 & yes\\ 7914 & 14.3 & 2014 Jul 14 & yes\\ 9597 & 12.0 & 2014 Jul 14 & \\ 9898 & 14.2 & 2014 Jul 14 & \\ 10022 & 15.8 & 2014 Jul 14 & \\ 10135 & 15.7 & 2014 Jul 14 & \\ 33971 & 12.8 & 2014 Jul 14 & \\ 37342 & 14.4 & 2014 Jul 14 & yes\\ 37807 & 12.0 & 2014 Jul 14 & \\ 40022 & 13.9 & 2014 Jul 14 & \\ 40313 & 13.7 & 2014 Jul 14 & \\ 41111 & 13.7 & 2014 Jul 14 & \\ 44039 & 11.5 & 2014 Jul 14 & \\ 44568 & 12.3 & 2014 Jul 14 & \\ 49486 & 16.0 & 2014 Jul 14 & yes\\ 50869 & 15.8 & 2014 Jul 14 & \\ 52377 & 14.5 & 2014 Jul 14 & yes\\ 52532 & 15.5 & 2014 Jul 14 & yes\\ 53255 & 15.0 & 2014 Jul 14 & yes\\ 55603 & 12.1 & 2014 Jul 14 & yes\\ 56818 & 14.0 & 2014 Jul 14 & yes\\ 57038 & 13.9 & 2014 Jul 14 & yes\\ 58812 & 14.9 & 2014 Jul 14 & yes\\ \end{tabular} \label{tab:soar} \end{center} \end{table} \subsubsection{Contrast Ratios} For wide companions, the contrast ratios were determined using aperture photometry on the original images. The aperture size was determined uniquely for each system based on the separation and the presence of non-associated background stars. For close companions, the estimated PSF was used to remove the blended contributions of each star before aperture photometry was performed. The locally optimized PSF subtraction algorithm attempts to remove the flux from companions using other reference PSFs with excess brightness in those areas. For detection purposes, we use small PSF sections for optimization, which reduces the algorithm's ability to remove the companion light. However, the companion is left artificially faint, as some flux has still been subtracted. To avoid this, the PSF fit was redone excluding a six-pixel-diameter region around the detected companion. The large PSF regions allow the excess light from the primary star to be removed, while not reducing the brightness of the companion. \subsubsection{Separations and Position Angles} Separations were determined from the raw pixel positions. Uncertainties were found using estimated systematic errors due to blending between components. The typical uncertainty in the position for each star was 1-2 pixels. Position angles were calculated using a distortion solution produced using Robo-AO measurements of a globular cluster.\footnote{S. Hildebrandt (2013, private communication).} \subsection{Previously Detected Binaries} To further realize our goal of a comprehensive cool subdwarf survey, we included in our statistics previously confirmed binary systems in the literature with separations outside of our field of view. Common proper motion is a useful indicator of wider binary systems. Wide ($>$30\arcsec) common proper motion companions among our target subdwarfs were previously identified in the Revised New Luyten Two-Tenths (rNLTT) catalog \citep{salim02,chan04} and in a search by \citet{lopez12} of the Lepine and Shara Proper Motion-North catalog \citep{lepine05}.
The target list was also cross-checked against the {\it Ninth Catalogue of Spectroscopic Binary Orbits} \citep[\protect$S\!_{B^9}$]{pourbaix04}, a catalogue of known spectroscopic binaries available online.\footnote{http://sb9.astro.ulb.ac.be/} While these systems were included in the total subdwarf binary numbers, the compiled nature of this catalogue leaves some uncertainty in the completeness of the spectroscopic search. \begin{figure*} \centering \includegraphics[width=400pt]{keck_roboao.eps} \caption{Keck-AO imaging confirming the Robo-AO companion to NLTT52532. The exposure time for the Robo-AO image is 120 s and for the Keck-AO image is 90 s.} \label{fig:keck} \end{figure*} \subsection{Spectroscopy} To further verify that the targets selected are cool subdwarfs, we took spectra of 7$\%$ of the total survey and 31$\%$ of the candidate companion systems. Past spectroscopic studies of cool subdwarfs at high resolution have proven difficult as, at the low temperatures present, a forest of molecular absorption lines conceals most atomic lines used in spectral analysis. Subdwarfs can instead be classified spectroscopically using two molecular bands \citep{gizis97}. Comparing titanium oxide (TiO) bands to metal hydride bands (typically CaH in M subdwarfs), Gizis classified two groups, the intermediate and extreme subdwarfs. As the metallicity decreases, the TiO absorption also decreases, but the CaH remains largely unaffected for a given spectral type. This classification system was expanded and revised to include ultra subdwarfs by \citet{lepine07}, who introduced the useful new parameter $\zeta_{TiO/CaH}$. Spectra were taken for wavelengths 5900-7400~\AA\ and reduced (dark-subtracted and flat-fielded) using IRAF reduction packages, particularly onedspec.apall to extract the trace of the spectrum and onedspec.dispcor to apply the wavelength calibration. An Fe+Ar arc lamp was recorded for wavelength calibration. All observed target subdwarfs were confirmed to show the spectral characteristics of subdwarf stars described above, specifically the reduced strength of the 7050~\AA\ TiO5 band. An example of the extracted spectra is given in Figure$~\ref{fig:spect}$. The full observation list for SOAR is given in Table$~\ref{tab:soar}$. \begin{figure} \centering \includegraphics[width=245pt]{binary_frac_color.eps} \caption{Binary fraction of the target subdwarfs binned by their $(V-J)$ color.} \label{fig:binary_frac_color} \end{figure} \begin{figure} \centering \includegraphics[width=245pt]{dwarf_subdwarf_comp.eps} \caption{Comparison of the separation and the magnitude difference in the i-band between our subdwarf companions $(<$6$\arcsec$) and the dwarf companions found by \citet{janson12}. The detectable magnitude ratios for our image performance groups are also plotted, with the number of observed subdwarf targets in each image performance group, as described in Section \ref{sec:imageperf}.} \label{fig:dwarfscomp} \end{figure} \subsection{Candidate Companion Follow-ups} With either a high contrast ratio or a small angular separation, six candidate subdwarf binary systems with low detection significance ($<$6$\sigma$) were selected for follow-up imaging using Keck II. One low-probability candidate companion star was rejected after follow-ups using Keck II, an apparent close ($\rho\simeq0.15\arcsec$) binary to NLTT50869, probably resulting from a cosmic ray in the original Robo-AO image. A wider binary to NLTT50869, with high detection significance, was not in the image field of view.
In addition to the six target stars with low-significance companions, another candidate companion star, NLTT4817, was observed; it had no companion inside the field of view of the Keck II image, but it did have a high-significance companion ($>$7$\sigma$) in the Robo-AO field of view. An example of the Keck II images and the Robo-AO images is given in Figure$~\ref{fig:keck}$. The full Keck II observations are listed in Table$~\ref{tab:keck}$, with the last column indicating the presence of a low detection significance companion. \begin{table} \renewcommand{\arraystretch}{1.3} \begin{center} \caption{Full Keck-AO Observation List} \begin{tabular}{cccc} \hline \hline \noalign{\vskip 1pt} \text{NLTT} & \text{m$_v$} & \text{ObsID} & \text{Low-sig. Companion?}\\ [0.2ex] \hline\ 4817 & 11.4 & 2014 Aug 17 & \\ 7914 & 14.3 & 2014 Aug 17 & yes \\ 50869 & 15.8 & 2014 Aug 17 & \\ 52377 & 14.5 & 2014 Aug 17 & yes \\ 52532 & 15.5 & 2014 Aug 17 & yes \\ 53255 & 15.0 & 2014 Aug 17 & yes \\ 56818 & 14.0 & 2014 Aug 17 & yes \\ \end{tabular} \label{tab:keck} \end{center} \end{table} \begin{figure*} \centering \includegraphics[width=500pt]{cutout_labels.eps} \caption{Grayscale cutouts of the 22 multiple star systems with separations $<$7$\arcsec$ resolved with Robo-AO. The angular scale and orientation are similar for each cutout.} \label{fig:cutouts} \end{figure*} \begin{figure} \centering \includegraphics[width=245pt]{histogram_magnitude.eps} \caption{Histogram of the magnitude difference in the i-band between all our subdwarf companions and the dwarf companions found by \citet{janson12}.} \label{fig:hist_mag} \end{figure} \section{Discoveries} \label{sec:Discoveries} Of the 344 verified subdwarf targets observed, 40 appear to be in multiple star systems, for an apparent binary fraction of 11.6\%$\pm$1.8$\%$, where the error is based on Poissonian statistics \citep{burgasser03}. This count includes 6 multiple systems first recorded in the NLTT, 13 systems first recorded in the rNLTT, 1 wide binary found in the LSPM \citep{lopez12}, 6 spectroscopic binaries, and 16 newly discovered multiple systems. We also found four new companions to already recorded binary systems, including two new triple systems, for a total of 6 triple star systems and a triplet fraction of 1.7\%$\pm$0.7$\%$. One quarter (26$\%$) of the companions would only be observable in a high-resolution survey ($<$2.0\arcsec separation). The binary fraction of the target stars binned by their $(V-J)$ color is given in Figure$~\ref{fig:binary_frac_color}$. Cutouts of the closest 22 multiple star systems are shown in Figure$~\ref{fig:cutouts}$. Measured companion properties are detailed in Table$~\ref{tab:measurements}$. \subsection{Probability of Association} The associations of all discovered and previously recorded companions were confirmed using the Digitized Sky Survey (DSS) \citep{reid91}. Since all the targets have high proper motions, systems that are not physically associated would show highly apparent shifts in separation and position angle over the past two decades. For the widely separated systems with both stars visible in the DSS, we checked the angular separation in the DSS and in our survey to confirm a relatively constant separation. For closely separated systems where both stars are merged in the DSS, we looked for a background star at the DSS position that does not appear in our images.
In addition, since our stars appear in relatively sparse stellar regions of the sky, well outside the Galactic disk, the probability of a background star appearing within a small radius of an observed star is low. Using the total number of known non-associated stars in our images, we expect, over all target stars in our survey, only 1.2 background stars within a radius of 2$\farcs$5 of a target star, compared to the 10 stars observed in that range. \subsection{Photometric Parallaxes} Very few subdwarfs in our sample have accurate parallax measurements. Only 43 of the targets have published parallaxes, most with significant measurement errors. To estimate the distances to our subdwarf targets, we employed an expression for $M_{R}$ as a function of $(R-I)$ estimated by \citet{siegel02} from a color-magnitude diagram, together with the photometric measurements of \citet{marshall07}. The polynomial fit found by Siegel for subdwarfs with measured parallaxes and an estimated mean [Fe/H] of $-1.2$, with the \citet{lutz73} correction, is \begin{equation} M_{R} = 2.03 + 10 \times (R-I) - 2.21 \times (R-I)^2. \end{equation} The color-absolute magnitude relation has an uncertainty of $\sim$0.3 mag. In all cases, the published parallax errors are much larger than the photometric errors of $<$0.03 mag. The estimated distances for the primary stars in the subdwarf multiple systems are listed in Table$~\ref{tab:measurements}$. \section{Discussion} \label{sec:Discussion} \subsection{Comparison to Main-Sequence Dwarfs} With a comparable sample size and spectral types, the cool dwarf survey of \citet{janson12} is a useful metal-rich analog to this work. The most striking disparity between the two samples is the lack of low-contrast ($\Delta m_i \leq$2), close ($\rho \leq 1 \arcsec$) companions to the subdwarf stars, a regime heavily populated by solar-metallicity dwarf companions. This is clearly seen in a plot of the companion's magnitude difference versus angular separation for the two populations, as in Figure$~\ref{fig:dwarfscomp}$. The dissimilarity in contrast ratios between dwarfs and subdwarfs is further illustrated in Figure$~\ref{fig:hist_mag}$. A two-sample Kolmogorov-Smirnov test rejects the null hypothesis that the two populations are similar at a confidence of $\sim$2.8$\sigma$. \begin{table*} \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{6pt} \caption{Multiple subdwarf systems resolved using Robo-AO and previously detected systems} \small \centering \begin{tabular}{l c c c c c c c c c} \hline \hline NLTT & Comp & \textit{$m_v$}\footnote{\citep{marshall07}} & ObsID & $\Delta$ \textit{i}\textsuperscript{$\prime$} & $\rho$ & $\rho$ & P.A. & Dist & Prev Det?\\ & NLTT & (mag) & & (mag) & ($\arcsec$) & (AU) & (deg.)
& (pc)\\ \hline 2045AB & \nodata & 13.5 & 2013 Aug 15 & \nodata & \nodata & \nodata & \nodata & 183.3$\pm$21.0 & SB9\\ 2205AB & 2206 & 13.9 & 2013 Aug 15 & 0.18 & 3.37 & 475.5$\pm$54.3 & 123$\pm$2 & 140.9$\pm$16.1 & L79\\ 2324AB & 2325 & 15.7 & 2013 Aug 16 & 1.16 & 3.84 & 138.8$\pm$15.9 & 254$\pm$2 & 36.1$\pm$4.1 & L79\\ 2324AC & \nodata & 15.7 & 2013 Aug 16 & 4.14 & 23.48 & 847.8$\pm$96.2 & 159$\pm$2 & 36.1$\pm$4.1 & \\ 4817AB & 4814 & 11.4 & 2012 Sep 3 & 4.30 & 24.59 & 3615$\pm$413 & 218$\pm$2 & 147$\pm$16.8 & S02\\ 7301AB & 7300 & 14.9 & 2012 Sep 3 & 2.48 & 4.87 & 105.7$\pm$12.1 & 57$\pm$2 & 21.7$\pm$2.5 & S02\\ 7914AB & \nodata & 14.3 & 2012 Sep 3 & 3.76 & 2.53 & 424.4$\pm$48.5 & 150$\pm$2 & 167.6$\pm$19.2 & \\ 10536AB & 10548 & 11.2 & 2013 Aug 15 & \nodata & 185.7 & 30633$\pm$3501 & 85.5 & 164.9$\pm$18.9 & S02\\ 11015AB & 11016 & 16.3 & 2013 Aug 16 & 0.94 & 9.24 & 1399$\pm$160 & 57$\pm$2 & 151.3$\pm$17.3 & S02\\ 12845AB & \nodata & 10.6 & 2012 Oct 3 & 4.71 & 1.85 & 149.4$\pm$17.1 & 92$\pm$2 & 80.6$\pm$9.2 & \\ 15973AB & 15974 & 9.3 & 2012 Oct 7 & 3.47 & 6.88 & 303.1$\pm$34.6 & 227$\pm$2 & 44$\pm$5.0 & S02\\ 15973AC & \nodata & 9.3 & 2012 Oct 7 & 5.02 & 8.23 & 362.2$\pm$41.1 & 217$\pm$2 & 44$\pm$5.0 & \\ 17485AB & \nodata & 11.9 & 2012 Oct 10 & \nodata & \nodata & \nodata & \nodata & 191.3$\pm$21.9 & SB9\\ 18502AB & \nodata & 12.2 & 2013 Jan 19 & 3.18 & 5.95 & 1262$\pm$144 & 331$\pm$2 & 212.1$\pm$24.3 & \\ 18798AB & 18799 & 14.5 & 2013 Jan 19 & 3.12 & 12.82 & 2270$\pm$259 & 172$\pm$2 & 177$\pm$20.2 & S02\\ 19210AB & 19207 & 11.2 & 2013 Jan 20 & & 102.5 & 18468$\pm$2110 & 285.4 & 180.2$\pm$20.6 & S02,SB9\\ 20691AB & \nodata & 9.6 & 2013 Jan 19 & \nodata & \nodata & \nodata & \nodata & 70.6$\pm$8.1 & SB9\\ 21370AB & \nodata & 13.7 & 2013 Jan 19 & 2.46 & 19.83 & 6603$\pm$755 & 322$\pm$2 & 332.9$\pm$38.1 & SB9\\ 24082AB & \nodata & 13.1 & 2013 Jan 19 & 4.46 & 5.81 & 1683$\pm$192 & 187$\pm$2 & 289.7$\pm$33.1 & \\ 24082AC & \nodata & 13.1 & 2013 Jan 19 & 4.17 & 12.00 & 3476$\pm$397 & 267$\pm$2 & 289.7$\pm$33.1 & \\ 25234AB & 25233 & 13.2 &2013 Jan 18 & 3.05 & 8.29 & 1175$\pm$134 & 287$\pm$2 & 141.7$\pm$16.2 & S02\\ 28434AB & \nodata & 14.9 & 2013 Jan 17 & 2.46 & 2.54 & 652.9$\pm$74.6 & 202$\pm$2 & 256.7$\pm$29.3 & \\ 29551AB & \nodata & 11.5 & 2012 Sep 3 & 3.29 & 0.51 & 104.6$\pm$12.0 & 355$\pm$2 & 206.5$\pm$23.6 & \\ 29594AB & \nodata & 13.2 & 2013 Apr 22 & \nodata & 38.10 & 12834$\pm$1466 & 269 & 336.8$\pm$38.5 & L12\\ 30193AB & \nodata & 14.6 & 2013 Apr 21 & 1.99 & 0.95 & 304.8$\pm$34.8 & 304$\pm$2 & 321.5$\pm$36.7 & \\ 30838AB & 30837 & 12.5 & 2013 Apr 22 & 5.69 & 16.25 & 4436$\pm$507 & 25$\pm$2 & 273$\pm$31.2 & S02\\ 31240AB & \nodata & 15.0 & 2013 Apr 21 & 4.16 & 0.74 & 251.2$\pm$28.7 & 210$\pm$2 & 338.3$\pm$38.7 & \\ 31240AC & \nodata & 15.0 & 2013 Apr 21 & 3.86 & 10.32 & 3491$\pm$399 & 157$\pm$2 & 338.3$\pm$38.7 & \\ 34051AB & \nodata & 13.5 & 2013 Jan 19 & \nodata & \nodata & \nodata & \nodata & 242.3$\pm$27.7 & SB9\\ 37342AB & 37341 & 14.4 & 2013 Apr 22 & 1.37 & 5.75 & 123.4$\pm$14.1 & 54$\pm$2 & 21.4$\pm$2.5 & S02\\ 45616AB & \nodata & 11.9 & 2012 Sep 3 & 2.59 & 28.31 & 4696$\pm$536.8 & 113$\pm$2 & 165.9$\pm$19.0 & SB9\\ 49486AB & 49487 & 15.9 & 2012 Oct 4 & 1.48 & 4.51 & 390.3$\pm$44.6 & 148$\pm$2 & 86.4$\pm$9.9 & S02\\ 49819AB & 49821 & 14.0 & 2013 Aug 19 & 1.12 & 25.28 & 10263$\pm$1173 & 84$\pm$2 & 406$\pm$46.4 & S02\\ 50759AB & 50751 & 15.9 & 2012 Sep 13 & \nodata & 297.7 & 79156$\pm$9046 & 267.7 & 265.8$\pm$30.4 & S02\\ 50869AB & \nodata & 15.8 & 2013 Aug 8 & 3.15 & 8.17 & 
1707$\pm$195 & 19$\pm$2 & 209.0$\pm$24.0 & \\ 52377AB & \nodata & 14.5 & 2012 Sep 4 & 2.35 & 0.92 & 561.3$\pm$64.2 & 211$\pm$2 & 585.3$\pm$66.9 & \\ 52532AB & \nodata & 15.5 & 2012 Sep 4 & 2.60 & 0.30 & 52.82$\pm$6.0 & 168$\pm$2 & 175$\pm$20.0 & \\ 52532AC & 52538 & 15.5 & 2012 Sep 4 & 3.35 & 37.14 & 6536$\pm$780 & \nodata & 176$\pm$21.0 & L79\\ 53255AB & \nodata & 15.0 & 2013 Aug 16 & 0.75 & 1.07 & 123.9$\pm$14.2 & 68$\pm$2 & 112.7$\pm$12.9 & \\ 53255AC & 53254 & 15.0 & 2013 Aug 16 & \nodata & 53.8 & 6063$\pm$694 & \nodata & 112.7$\pm$12.9 & L79\\ 55603AB & \nodata & 12.1 & 2013 Aug 18 & 3.54 & 4.45 & 886.9$\pm$101.4 & 29$\pm$2 & 199.2$\pm$22.8 & \\ 56818AB & \nodata & 14.0 & 2012 Sep 3 & 2.04 & 0.63 & 169.8$\pm$19.4 & 44$\pm$2 & 246.2$\pm$28.1 & \\ 57038AB & \nodata & 13.9 & 2013 Aug 16 & 0.19 & 8.14 & 2508$\pm$286.7 & 335$\pm$2 & 308.3$\pm$35.2 & \\ 57452AB & \nodata & 13.6 & 2013 Aug 16 & 1.91 & 1.98 & 474.5$\pm$54.2 & 77$\pm$2 & 234.9$\pm$26.9 & \\ 57856AB & \nodata & 13.2 & 2013 Aug 17 & 5.08 & 2.00 & 585.3$\pm$66.9 & 169$\pm$2 & 289.7$\pm$33.1 & \\ 58812AB & 58813 & 15.0 & 2013 Aug 16 & 1.40 & 2.81 & 743.6$\pm$85.0 & 69$\pm$2 & 264.4$\pm$30.2 & \\ \hline \end{tabular} \small \label{tab:measurements} \begin{flushleft} Notes. --- References for previous detections are denoted using the following codes: Pourbaix et al. 2004 (SB9); Luyten 1979 (L79); Salim et al. 2002 (S02); L\'{o}pez et al. 2012 (L12). \end{flushleft} \end{table*} The lack of close subdwarf companions has been noted previously by \citet{jao09} and by \citet{abt08}, however with significantly smaller samples. A direct comparison of orbital separations is biased by the relative distance variation in the two samples. With their relative rarity in the solar neighborhood, the subdwarf sample is overall approximately a factor of 4 more distant than the dwarf sample. If the populations were similar, this would result in a relative abundance of tight dwarf binaries, while the 6$\arcsec$ limit of the Janson et al. survey reduces the number of observed wide dwarf binaries. Attempts to pick out similar systems by relative distance or by orbital separation from the two surveys result in a small statistical sample. Nonetheless, the relative lack of close stars in the subdwarf sample, as illustrated in Figure$~\ref{fig:hist_sep}$, and confirmed at high confidence in our survey, warrants further investigation. \begin{figure} \centering \includegraphics[width=245pt]{histogram_separations.eps} \caption{Histogram of the angular separations of our subdwarf companions and the dwarf companions found by \citet{janson12}. Only systems resolvable in both surveys were plotted (0$\farcs 15< \rho < 6 \farcs$0).} \label{fig:hist_sep} \end{figure} \subsection{Binarity and Metallicity} The binary fraction we have found further confirms what has been suspected by past studies: that the binary fraction of subdwarfs is substantially lower than that of their dwarf cousins. The largest survey of cool subdwarfs, that of \citet{zhang13}, although limited by the low angular resolution of the SDSS, finds a multiplicity for late K and M type subdwarfs of 2.41\%, with an estimated lower bound of 10\% when adjusting for survey incompleteness. This estimate and our work leave subdwarf multiplicity rates approximately a factor of 2 to 4 lower than those of solar-metallicity stars of the same spectral types. Historically, it has been a widely held view that metal-poor stars possess fewer stellar companions \citep{batten73, latham04}.
A deficiency of eclipsing binaries was found in globular clusters by \citet{kopal59}, while \citet{jaschek59} discovered a deficiency of spectroscopic binaries in a sample of high-velocity dwarfs. \citet{abt87} used higher resolution CCD spectra to conclude that the frequency of spectroscopic binaries in high-velocity stars was half that of metal-rich stars. Recently, however, this view has been challenged. \citet{carney94} used radial velocity measurements of 1464 stars, along with metallicity data \citep{carney87}, and found the difference in binary frequency between metal-rich and metal-poor stars to be insignificant. Likewise, \citet{grether07} found a $\sim$2$\sigma$ anti-correlation between metallicity and companion stars. In recent years, the relationship between planetary systems and metallicity has also been explored. \citet{fischer04} found a positive correlation between planetary systems and the metallicity of the host star. This correlation has been reinforced to $\sim$4$\sigma$ by \citet{grether07}. Recently, \citet{wang14} found that planets in multiple-star systems occur 4.5$\pm$3.2, 2.6$\pm$1.0, and 1.7$\pm$0.5 times less frequently when the companion star is separated by 10, 100, and 1000 AU, respectively. The solution may lie in the differences between halo and thick disk stars. \citet{latham02} found no obvious difference between the binary fractions of the two populations; however, \citet{chiba00} found a 55\% multiplicity rate for thick disk stars and 12\% for halo stars. \citeauthor{grether07} also find that the thick disk shows a $\sim$4 times higher binary fraction than halo stars, further hypothesizing that the mixing of the populations is the explanation for the perceived anti-correlation of metallicity and binarity. The large difference between the M subdwarfs and thick-disk M dwarfs, apparent from this work and \citet{janson12}, seems to imply that the two populations formed under different initial conditions. Star formation in less dense regions appears to produce lower binary rates. \citet{kohler06} found a factor of 3-5 difference in binary fraction between the low-density Taurus star-forming region and the dense Orion cluster. It is also possible that, being older than solar-abundance stars, the metal-poor subdwarfs could have suffered more disruptive encounters with other stars. These disturbances could separate companions with separations larger than a few AU, with the tighter, more highly bound systems being less affected \citep{sterzik98, abt08}, a theory derived from $N$-body simulations \citep{aarseth72, kroupa95}. This, however, is contrary to our tentative result of a lack of close subdwarf companions, and to the similar observations of \citet{jao09} and \citet{abt08} that close subdwarf binaries are rare. This implies that metal-poor subdwarfs had shorter lifetimes in clusters than their younger, metal-rich cousins, either being ejected or formed in a disrupted cluster. Another possible explanation is that a large number of low-metallicity stars in the Milky Way could have resulted from past mergers with satellite galaxies. Simulations from \citet{abadi06} predict that the early Galaxy underwent a period of active merging. From these mergers, the Galaxy would inherit large numbers of metal-poor stars. \citet{meza05} observe a group of metal-poor stars with angular momenta similar to the cluster $\omega$ Cen, long theorized to be the core of a dwarf galaxy that merged with the Milky Way.
The environment of these foreign galaxies is unknown, so star formation there could have been quite different from that in our own Galaxy. It is also possible that multiple close stellar encounters and perturbations during the mergers altered their primordial binary properties. \section{Conclusions} \label{sec:Conclusions} In the largest high-resolution binary survey of cool subdwarfs, we observed 344 stars with the Robo-AO robotic laser adaptive optics system, sensitive to companions at $\rho \ge 0\farcs$15 and $\Delta m_i \le$ 6. Among those targets, we detected 16 new multiple systems and 4 new companions to previously known binary systems. When including previously recorded multiple systems, this implies a multiplicity rate for cool subdwarfs of $11.6\%\pm1.8\%$ and a triplet fraction of 1.7\%$\pm$0.7\%. This is significantly lower than the cool subdwarf binarity of 26$\%\pm$6\% observed by \citet{jao09} and in agreement with the completeness-adjusted estimate of $>$10\% of \citet{zhang13}. When comparing our results to similar surveys of dwarf binarity, we note a $\sim$2.8$\sigma$ difference in the distribution of magnitude differences between primaries and companions. An apparent lack of close binaries is noted, as has been previously observed in the literature. The high efficiency of Robo-AO makes large, high-angular-resolution surveys practical and will continue to put tighter constraints on the properties of stellar populations in the future. \section*{Acknowledgements} The Robo-AO system is supported by collaborating partner institutions, the California Institute of Technology and the Inter-University Centre for Astronomy and Astrophysics, by the National Science Foundation under Grant Nos. AST-0906060, AST-0960343, and AST-1207891, by the Mount Cuba Astronomical Foundation, and by a gift from Samuel Oschin. We are grateful to the Palomar Observatory staff for their ongoing support of Robo-AO on the 60 inch telescope, particularly S. Kunsman, M. Doyle, J. Henning, R. Walters, G. Van Idsinga, B. Baker, K. Dunscombe and D. Roderick. The SOAR telescope is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement between the CNPq, Brazil, the National Observatory for Optical Astronomy (NOAO), the University of North Carolina, and Michigan State University, USA. We also thank the SOAR operators, notably Sergio Pizarro. We recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. C.B. acknowledges support from the Alfred P. Sloan Foundation. This research has made use of the SIMBAD database, operated by the Centre des Donn\'ees Stellaires (Strasbourg, France), and of bibliographic references from the Astrophysics Data System maintained by SAO/NASA. {\it Facilities:} \facility{PO:1.5m (Robo-AO)}, \facility{Keck:II (NIRC2-LGS)}, \facility{SOAR (Goodman)} \section{appendix} In Table~\ref{tab:roboaolist}, we list all subdwarfs observed with Robo-AO, including the date each target was observed, the observation quality as described in Section~\ref{sec:imageperf}, and the presence of detected companions. \bigskip \bigskip \bigskip \begin{center} \begin{longtable}{cclcc} \caption{Full Robo-AO Observation List}\\ \hline \hline \noalign{\vskip 3pt} \text{NLTT} & \textit{m$_v$} & \text{ObsID} & \text{Obs.
qual} & \text{Companion?}\\ [0.3ex] \hline \noalign{\vskip 3pt} \endfirsthead \multicolumn{5}{c} {\tablename\ \thetable\ -- \textit{Continued}} \\ \hline \noalign{\vskip 3pt} \text{NLTT} & \textit{m$_v$} & \text{ObsID} & \text{Obs. qual} & \text{Companion?}\\ [0.3ex] \hline \noalign{\vskip 3pt} \endhead \endfoot \hline \endlastfoot 69 & 15.2 & 2012 Oct 10 & low & \\ 193 & 15.5 & 2013 Aug 15 & medium & \\ 341 & 12.1 & 2012 Oct 10 & high & \\ 361 & 15.4 & 2013 Aug 17 & low & \\ 496 & 15.8 & 2012 Sep 04 & medium & \\ 660 & 15.7 & 2012 Sep 03 & low & \\ 812 & 12.8 & 2012 Sep 03 & high & \\ 933 & 15.5 & 2013 Aug 16 & low & \\ 1020 & 15.3 & 2013 Aug 15 & medium & \\ 1059 & 13.8 & 2012 Sep 04 & medium & \\ 1231 & 11.9 & 2013 Aug 16 & high & \\ 1509 & 15.8 & 2013 Aug 16 & low & \\ 1575 & 16.2 & 2012 Sep 03 & low & \\ 1635 & 13.2 & 2012 Sep 03 & high & \\ 1684 & 15.1 & 2012 Sep 13 & low & \\ 1815 & 15.5 & 2012 Sep 04 & low & \\ 1870 & 13.9 & 2012 Sep 03 & medium & \\ 2045 & 13.5 & 2013 Aug 15 & medium & yes\\ 2107 & 15.5 & 2012 Sep 04 & low & \\ 2205 & 14.0 & 2013 Aug 15 & medium & yes\\ 2324 & 15.7 & 2013 Aug 16 & medium & yes\\ 2868 & 13.5 & 2013 Aug 16 & medium & \\ 2953 & 15.9 & 2012 Sep 04 & low & \\ 2966 & 15.6 & 2012 Sep 04 & medium & \\ 3035 & 15.9 & 2012 Sep 04 & low & \\ 3965 & 16.1 & 2013 Aug 16 & medium & \\ 4245 & 15.6 & 2013 Aug 15 & low & \\ 4447 & 15.9 & 2012 Sep 03 & low & \\ 4817 & 11.4 & 2012 Sep 03 & high & yes\\ 4838 & 15.4 & 2012 Sep 03 & low & \\ 5022 & 13.9 & 2012 Sep 03 & medium & \\ 5192 & 14.3 & 2012 Sep 03 & medium & \\ 5289 & 15.6 & 2012 Sep 03 & low & \\ 6519 & 14.8 & 2012 Sep 03 & medium & \\ 6582 & 15.7 & 2013 Aug 17 & low & \\ 6614 & 15.7 & 2012 Sep 03 & medium & \\ 6816 & 16.1 & 2013 Aug 15 & low & \\ 6856 & 16.1 & 2012 Sep 03 & low & \\ 6863 & 15.3 & 2013 Aug 17 & low & \\ 7078 & 14.4 & 2012 Sep 03 & medium & \\ 7207 & 14.5 & 2013 Aug 15 & medium & \\ 7299 & 11.5 & 2013 Aug 16 & high & \\ 7301 & 14.9 & 2012 Sep 03 & high & yes\\ 7415 & 9.1 & 2012 Sep 03 & high & \\ 7417 & 11.6 & 2013 Aug 15 & high & \\ 7467 & 15.9 & 2012 Sep 13 & low & \\ 7596 & 16.2 & 2013 Aug 17 & low & \\ 7654 & 16.1 & 2013 Aug 16 & medium & \\ 7769 & 14.0 & 2012 Sep 03 & medium & \\ 7914 & 14.3 & 2012 Sep 03 & medium & yes\\ 8034 & 11.8 & 2012 Sep 03 & high & \\ 8227 & 10.5 & 2013 Aug 17 & high & \\ 8342 & 14.9 & 2012 Sep 03 & medium & \\ 8405 & 15.8 & 2012 Sep 03 & medium & \\ 8507 & 13.9 & 2012 Sep 03 & medium & \\ 8783 & 11.5 & 2012 Sep 03 & high & \\ 8866 & 15.8 & 2013 Aug 16 & low & \\ 9523 & 15.4 & 2013 Aug 15 & low & \\ 9550 & 15.5 & 2013 Aug 19 & low & \\ 9578 & 10.5 & 2013 Aug 15 & high & \\ 9597 & 12.0 & 2012 Sep 13 & high & \\ 9622 & 14.3 & 2012 Sep 04 & medium & \\ 9648 & 14.9 & 2012 Sep 04 & medium & \\ 9653 & 15.6 & 2013 Aug 16 & low & \\ 9727 & 15.8 & 2013 Aug 15 & medium & \\ 9734 & 15.0 & 2012 Sep 04 & medium & \\ 9799 & 15.4 & 2012 Sep 13 & low & \\ 9848 & 16.6 & 2013 Aug 19 & low & \\ 9898 & 14.2 & 2013 Aug 19 & low & \\ 9938 & 16.2 & 2013 Aug 15 & low & \\ 10018 & 15.4 & 2013 Aug 17 & low & \\ 10022 & 15.8 & 2013 Aug 16 & medium & \\ 10135 & 15.7 & 2012 Sep 04 & low & \\ 10176 & 15.8 & 2013 Aug 20 & low & \\ 10243 & 14.1 & 2012 Sep 04 & medium & \\ 10401 & 14.6 & 2013 Aug 18 & low & \\ 10517 & 14.5 & 2012 Sep 04 & medium & \\ 10536 & 11.2 & 2013 Aug 15 & high & yes\\ 10548 & 15.9 & 2013 Aug 15 & low & \\ 10850 & 10.7 & 2012 Sep 04 & high & \\ 10883 & 15.9 & 2012 Sep 04 & low & \\ 11007 & 12.2 & 2013 Aug 21 & high & \\ 11010 & 14.1 & 2012 Sep 04 & medium & \\ 11015 & 16.3 & 
2013 Aug 16 & low & yes\\ 11032 & 14.2 & 2012 Sep 04 & medium & \\ 11068 & 15.4 & 2013 Aug 21 & low & \\ 11938 & 14.3 & 2012 Sep 04 & medium & \\ 12017 & 12.3 & 2013 Aug 17 & high & \\ 12026 & 15.8 & 2013 Aug 18 & low & \\ 12044 & 15.8 & 2012 Sep 13 & low & \\ 12227 & 14.2 & 2013 Aug 18 & medium & \\ 12350 & 12.1 & 2013 Aug 18 & medium & \\ 12489 & 14.6 & 2012 Oct 10 & low & \\ 12537 & 14.5 & 2013 Aug 21 & medium & \\ 12704 & 15.4 & 2012 Oct 10 & low & \\ 12769 & 14.1 & 2013 Aug 18 & medium & \\ 12829 & 14.6 & 2012 Oct 03 & medium & \\ 12845 & 10.6 & 2012 Oct 03 & high & yes\\ 12856 & 10.8 & 2013 Aug 18 & high & \\ 12876 & 15.6 & 2012 Oct 03 & low & \\ 12923 & 15.2 & 2013 Aug 18 & low & \\ 13022 & 15.9 & 2012 Oct 03 & low & \\ 13344 & 13.8 & 2012 Oct 03 & medium & \\ 13368 & 15.5 & 2012 Oct 03 & low & \\ 13402 & 14.7 & 2012 Oct 03 & low & \\ 13469 & 15.1 & 2013 Aug 18 & low & \\ 13470 & 13.8 & 2012 Oct 03 & medium & \\ 13641 & 12.9 & 2012 Oct 06 & high & \\ 13660 & 12.4 & 2012 Oct 03 & high & \\ 13694 & 15.4 & 2013 Aug 20 & medium & \\ 13706 & 14.5 & 2012 Oct 03 & low & \\ 13770 & 12.4 & 2012 Oct 03 & high & \\ 13811 & 13.4 & 2012 Oct 03 & medium & \\ 13920 & 14.4 & 2013 Aug 20 & medium & \\ 13940 & 14.4 & 2012 Oct 05 & medium & \\ 14091 & 13.9 & 2012 Oct 05 & medium & \\ 14131 & 13.4 & 2012 Oct 03 & medium & \\ 14169 & 13.4 & 2012 Oct 05 & medium & \\ 14197 & 12.4 & 2012 Oct 04 & low & \\ 14391 & 13.5 & 2012 Oct 04 & low & \\ 14450 & 14.7 & 2012 Oct 04 & low & \\ 14549 & 14.5 & 2012 Oct 10 & low & \\ 14822 & 12.7 & 2012 Oct 03 & medium & \\ 14864 & 14.3 & 2012 Oct 07 & low & \\ 15039 & 14.8 & 2012 Oct 10 & low & \\ 15183 & 12.6 & 2012 Oct 07 & medium & \\ 15218 & 12.3 & 2012 Oct 06 & high & \\ 15973 & 9.3 & 2012 Oct 07 & high & yes\\ 15974 & 13.8 & 2012 Oct 07 & high & \\ 16030 & 13.9 & 2012 Oct 07 & low & \\ 16185 & 14.4 & 2012 Oct 10 & low & \\ 16242 & 10.6 & 2012 Oct 06 & medium & \\ 16579 & 12.3 & 2012 Oct 09 & high & \\ 16606 & 12.3 & 2012 Oct 10 & high & \\ 16849 & 15.3 & 2012 Oct 10 & low & \\ 16869 & 13.2 & 2013 Jan 20 & high & \\ 16986 & 15.8 & 2013 Jan 20 & low & \\ 17039 & 12.9 & 2012 Oct 10 & medium & \\ 17485 & 11.9 & 2012 Oct 10 & high & yes\\ 17680 & 13.6 & 2013 Jan 20 & medium & \\ 17786 & 12.0 & 2013 Jan 20 & high & \\ 17872 & 10.7 & 2013 Jan 20 & high & \\ 18019 & 13.3 & 2012 Oct 10 & medium & \\ 18131 & 14.4 & 2013 Jan 20 & medium & \\ 18424 & 12.7 & 2013 Jan 18 & high & \\ 18463 & 13.8 & 2013 Jan 20 & high & \\ 18502 & 12.2 & 2013 Jan 19 & high & yes\\ 18731 & 13.1 & 2013 Jan 19 & high & \\ 18798 & 14.5 & 2013 Jan 19 & high & yes\\ 18799 & 11.0 & 2013 Jan 19 & high & \\ 19037 & 14.9 & 2013 Jan 20 & medium & \\ 19210 & 11.2 & 2013 Jan 20 & high & yes\\ 19301 & 14.7 & 2013 Jan 19 & low & \\ 19570 & 14.4 & 2013 Apr 22 & medium & \\ 19614 & 15.7 & 2013 Apr 22 & medium & \\ 19643 & 11.9 & 2013 Jan 19 & high & \\ 19824 & 14.6 & 2013 Jan 19 & medium & \\ 20252 & 14.9 & 2013 Apr 22 & medium & \\ 20288 & 14.9 & 2013 Apr 22 & medium & \\ 20392 & 13.8 & 2013 Jan 22 & low & \\ 20476 & 13.2 & 2013 Apr 22 & high & \\ 20492 & 13.3 & 2013 Jan 19 & high & \\ 20684 & 12.0 & 2013 Jan 19 & high & \\ 20691 & 9.6 & 2013 Jan 19 & high & yes\\ 20768 & 14.0 & 2013 Jan 19 & medium & \\ 21039 & 14.0 & 2013 Jan 19 & medium & \\ 21112 & 15.3 & 2013 Apr 22 & medium & \\ 21133 & 12.7 & 2013 Jan 19 & medium & \\ 21341 & 14.3 & 2013 Jan 19 & low & \\ 21370 & 13.7 & 2013 Jan 19 & medium & yes\\ 21449 & 12.6 & 2013 Apr 22 & high & \\ 21601 & 14.6 & 2013 Apr 22 & medium & \\ 22026 & 12.6 & 2013 Apr 22 & 
high & \\ 22053 & 12.1 & 2013 Jan 19 & high & \\ 22520 & 10.8 & 2013 Jan 19 & high & \\ 22752 & 13.9 & 2013 Jan 19 & medium & \\ 22945 & 13.2 & 2013 Apr 22 & medium & \\ 23894 & 14.6 & 2013 Jan 18 & low & \\ 24006 & 15.5 & 2013 Apr 22 & medium & \\ 24082 & 13.1 & 2013 Jan 19 & medium & yes\\ 24353 & 13.2 & 2013 Jan 18 & medium & \\ 24371 & 14.2 & 2013 Jan 18 & low & \\ 24718 & 13.1 & 2013 Jan 18 & medium & \\ 24984 & 12.5 & 2013 Apr 21 & high & \\ 25006 & 14.1 & 2013 Apr 21 & medium & \\ 25177 & 12.2 & 2013 Apr 22 & high & \\ 25190 & 13.9 & 2013 Jan 18 & low & \\ 25234 & 13.2 & 2013 Jan 18 & medium & yes\\ 25475 & 13.9 & 2013 Apr 21 & medium & \\ 25776 & 13.8 & 2013 Apr 22 & medium & \\ 25909 & 13.5 & 2013 Apr 22 & high & \\ 25970 & 14.9 & 2013 Jan 18 & low & \\ 26232 & 14.4 & 2013 Jan 18 & low & \\ 26482 & 12.5 & 2013 Jan 18 & medium & \\ 26503 & 14.2 & 2013 Apr 21 & medium & \\ 26532 & 14.8 & 2013 Jan 18 & low & \\ 26565 & 14.8 & 2013 Jan 18 & low & \\ 26588 & 13.6 & 2013 Apr 21 & high & \\ 26677 & 13.5 & 2013 Jan 18 & low & \\ 27436 & 13.0 & 2013 Jan 18 & medium & \\ 27763 & 13.6 & 2013 Jan 18 & medium & \\ 27767 & 14.7 & 2013 Apr 21 & medium & \\ 28199 & 13.2 & 2013 Jan 18 & medium & \\ 28304 & 13.3 & 2013 Apr 22 & medium & \\ 28434 & 14.9 & 2013 Jan 17 & low & yes\\ 29023 & 13.0 & 2013 Jan 18 & medium & \\ 29064 & 14.0 & 2013 Apr 21 & medium & \\ 29256 & 14.7 & 2013 Jan 18 & low & \\ 29442 & 14.4 & 2013 Jan 18 & low & \\ 29551 & 11.5 & 2013 Apr 21 & high & yes\\ 29594 & 13.2 & 2013 Apr 22 & high & yes\\ 29933 & 10.2 & 2013 Apr 22 & high & \\ 30128 & 13.1 & 2013 Apr 21 & high & \\ 30193 & 14.6 & 2013 Apr 21 & medium & yes\\ 30462 & 12.8 & 2013 Jan 18 & medium & \\ 30636 & 14.8 & 2013 Jan 18 & low & \\ 30824 & 14.6 & 2013 Jan 17 & low & \\ 30838 & 12.5 & 2013 Apr 22 & high & yes\\ 31146 & 12.0 & 2013 Apr 21 & high & \\ 31155 & 13.6 & 2013 Jan 18 & medium & \\ 31240 & 15.0 & 2013 Apr 21 & medium & yes\\ 31965 & 14.2 & 2013 Jan 19 & medium & \\ 32316 & 11.3 & 2013 Apr 22 & high & \\ 32392 & 14.6 & 2013 Jan 19 & medium & \\ 32562 & 14.3 & 2013 Jan 17 & low & \\ 32648 & 12.8 & 2013 Jan 18 & medium & \\ 32917 & 13.8 & 2013 Apr 22 & medium & \\ 32995 & 13.4 & 2013 Apr 22 & high & \\ 33104 & 14.0 & 2013 Jan 18 & low & \\ 33156 & 14.2 & 2013 Apr 22 & medium & \\ 33371 & 12.8 & 2013 Jan 17 & medium & \\ 33971 & 12.8 & 2013 Jan 18 & medium & \\ 34051 & 13.5 & 2013 Jan 19 & low & yes\\ 34628 & 11.9 & 2013 Apr 21 & high & \\ 35068 & 13.2 & 2013 Jan 18 & medium & \\ 35318 & 13.4 & 2013 Apr 21 & high & \\ 36020 & 14.2 & 2013 Apr 22 & medium & \\ 37342 & 14.4 & 2013 Apr 22 & high & yes\\ 37684 & 13.3 & 2013 Apr 22 & high & \\ 37807 & 12.0 & 2013 Apr 22 & high & \\ 39378 & 13.5 & 2013 Apr 22 & high & \\ 39721 & 13.6 & 2013 Apr 22 & high & \\ 40022 & 13.9 & 2013 Apr 22 & medium & \\ 40313 & 13.7 & 2013 Apr 22 & high & \\ 41111 & 13.7 & 2013 Apr 22 & medium & \\ 44039 & 11.5 & 2012 Sep 14 & high & \\ 44233 & 15.2 & 2012 Sep 04 & low & \\ 44568 & 12.3 & 2012 Sep 04 & high & \\ 44639 & 11.8 & 2012 Sep 04 & high & \\ 44769 & 15.2 & 2013 Apr 21 & medium & \\ 45609 & 12.5 & 2012 Sep 04 & high & \\ 45616 & 11.9 & 2012 Sep 04 & high & yes\\ 47480 & 13.8 & 2012 Oct 05 & low & \\ 47543 & 9.2 & 2012 Oct 05 & medium & \\ 48011 & 14.7 & 2012 Oct 05 & high & \\ 48056 & 13.7 & 2012 Oct 07 & low & \\ 48391 & 15.2 & 2012 Oct 05 & medium & \\ 48592 & 12.2 & 2012 Oct 04 & medium & \\ 48866 & 12.7 & 2012 Oct 04 & medium & \\ 49486 & 16.0 & 2012 Oct 04 & medium & yes\\ 49487 & 12.3 & 2012 Oct 04 & medium & \\ 49488 & 14.9 & 
2013 Aug 19 & medium & \\ 49618 & 12.2 & 2012 Oct 04 & medium & \\ 49726 & 15.9 & 2013 Aug 19 & low & \\ 49749 & 14.8 & 2012 Oct 03 & medium & \\ 49819 & 14.0 & 2013 Aug 19 & high & yes\\ 49821 & 12.8 & 2013 Aug 19 & high & \\ 49897 & 15.8 & 2012 Oct 04 & low & \\ 50257 & 13.8 & 2013 Aug 18 & low & \\ 50376 & 13.9 & 2012 Sep 13 & medium & \\ 50556 & 15.7 & 2012 Sep 13 & low & \\ 50759 & 15.9 & 2012 Sep 13 & low & yes\\ 50869 & 15.8 & 2013 Aug 19 & low & \\ 50911 & 11.6 & 2012 Sep 13 & high & \\ 51006 & 14.1 & 2013 Aug 19 & medium & \\ 51153 & 15.1 & 2012 Sep 13 & low & \\ 51740 & 15.3 & 2012 Sep 13 & low & \\ 51754 & 15.0 & 2012 Sep 13 & low & \\ 51824 & 11.9 & 2013 Aug 18 & medium & \\ 51856 & 13.4 & 2012 Sep 04 & medium & \\ 52089 & 14.9 & 2012 Sep 04 & medium & \\ 52377 & 14.5 & 2012 Sep 04 & medium & yes\\ 52532 & 15.5 & 2012 Sep 04 & low & yes\\ 52573 & 15.3 & 2013 Aug 18 & low & \\ 52666 & 15.0 & 2013 Aug 19 & low & \\ 52816 & 15.7 & 2012 Sep 13 & low & \\ 52894 & 16.0 & 2012 Sep 13 & low & \\ 53190 & 15.4 & 2013 Aug 16 & medium & \\ 53254 & 14.7 & 2013 Aug 16 & medium & \\ 53255 & 15.0 & 2013 Aug 16 & medium & yes\\ 53274 & 11.9 & 2013 Aug 17 & high & \\ 53316 & 15.4 & 2012 Sep 13 & low & \\ 53346 & 13.8 & 2013 Aug 17 & medium & \\ 53480 & 12.6 & 2013 Aug 17 & high & \\ 53702 & 15.3 & 2012 Sep 13 & medium & \\ 53707 & 12.1 & 2013 Aug 18 & medium & \\ 53781 & 13.8 & 2013 Aug 17 & medium & \\ 53801 & 11.8 & 2012 Sep 13 & high & \\ 53823 & 13.8 & 2013 Aug 18 & low & \\ 54027 & 13.3 & 2013 Aug 19 & medium & \\ 54088 & 14.1 & 2013 Aug 18 & low & \\ 54168 & 13.4 & 2013 Aug 17 & medium & \\ 54184 & 14.0 & 2013 Aug 17 & medium & \\ 54349 & 14.4 & 2012 Sep 13 & medium & \\ 54450 & 15.6 & 2013 Aug 16 & low & \\ 54578 & 15.8 & 2013 Aug 18 & low & \\ 54608 & 16.0 & 2013 Aug 16 & low & \\ 54620 & 15.2 & 2013 Aug 17 & medium & \\ 54699 & 15.1 & 2012 Sep 13 & low & \\ 54710 & 15.2 & 2012 Sep 13 & low & \\ 54730 & 11.5 & 2012 Sep 13 & high & \\ 55411 & 15.9 & 2013 Aug 16 & low & \\ 55603 & 12.1 & 2013 Aug 18 & medium & yes\\ 55732 & 13.4 & 2013 Aug 17 & medium & \\ 55733 & 14.5 & 2012 Sep 03 & medium & \\ 55942 & 13.5 & 2013 Aug 16 & medium & \\ 56002 & 14.4 & 2012 Sep 03 & medium & \\ 56290 & 12.6 & 2013 Aug 16 & high & \\ 56420 & 15.6 & 2012 Sep 03 & low & \\ 56533 & 15.9 & 2013 Aug 16 & low & \\ 56534 & 12.7 & 2013 Aug 17 & high & \\ 56774 & 12.9 & 2013 Aug 18 & low & \\ 56817 & 16.1 & 2013 Aug 17 & low & \\ 56818 & 14.0 & 2012 Sep 03 & medium & yes\\ 56855 & 13.7 & 2013 Aug 16 & medium & \\ 57038 & 13.9 & 2013 Aug 16 & medium & yes\\ 57214 & 15.8 & 2013 Aug 16 & low & \\ 57452 & 13.6 & 2013 Aug 16 & medium & yes\\ 57546 & 16.2 & 2013 Aug 17 & low & \\ 57564 & 10.6 & 2013 Aug 17 & high & \\ 57630 & 15.0 & 2013 Aug 16 & medium & \\ 57631 & 13.5 & 2013 Aug 17 & medium & \\ 57647 & 14.7 & 2013 Aug 17 & medium & \\ 57741 & 14.2 & 2013 Aug 17 & medium & \\ 57744 & 16.1 & 2013 Aug 17 & low & \\ 57781 & 10.1 & 2013 Aug 16 & high & \\ 57832 & 15.2 & 2012 Sep 03 & medium & \\ 57851 & 15.2 & 2012 Sep 03 & medium & \\ 57856 & 13.2 & 2013 Aug 17 & medium & yes\\ 58071 & 13.1 & 2012 Sep 03 & medium & \\ 58141 & 15.8 & 2013 Aug 16 & low & \\ 58403 & 15.2 & 2013 Aug 16 & low & \\ 58522 & 15.0 & 2013 Aug 17 & medium & \\ 58555 & 15.1 & 2012 Sep 03 & medium & \\ 58812 & 14.9 & 2013 Aug 16 & medium & yes \label{tab:roboaolist} \end{longtable} \end{center}
\section{Introduction} The effect of electric fields on the behavior of dielectric block copolymer melts in bulk and in thin films has found increasing interest in recent years \cite{wirtz-ma92}-\cite{matsen06} (and references therein) due to the possibility to create uniform alignment in macroscopic microphase-separated samples. This is of special relevance for applications using self-assembled block copolymer structures for patterning and templating of nanostructures \cite{park03}. The driving force for electric-field-induced alignment is the orientation-dependent polarization in a material composed of domains with anisotropic shape. The reason for the orientation in electric fields has a very simple explanation in samples where the inhomogeneities appear only at interfaces of cylinders or lamellae, as is roughly the case in the strong segregation limit. The polarization of the sample in this case induces surface charges at the interfaces, depending on the relative orientation of the interfaces with respect to the field. The system lowers its free energy if the interfaces orient parallel to the field. If the composition of a block copolymer sample, and consequently also the local dielectric constant, varies gradually, the polarization charges appear in the whole system. However, interfaces parallel to the field possess the lowest electric energy in this case, too. The effects of electric fields on the behavior of diblock copolymer melts have been studied in \cite{AHQH+S94} by adding the electric contribution to the free energy (quadratic in the strength of the electric field and the order parameter) to the thermodynamic potential including composition fluctuations in the absence of the field. The influence of an electric field on the composition fluctuations themselves has not been considered yet. However, the general relation between the derivatives of the thermodynamic potential and the correlation function of the order parameter, given by Eq.~(\ref{gf_phi}), requires the inclusion of the electric field into the correlation function of the Brazovskii self-consistent Hartree approach, too. The angular dependence of the structure factor without taking into account the fluctuation effects was derived previously for polymer solutions in \cite{wirtz-ma92}, and for copolymer melts in \cite{onuki-95}. Intuitively it seems obvious that fluctuations become anisotropic in an electric field, and moreover that fluctuations of modes with wave vectors parallel to the electric field are suppressed. The effects of an electric field on composition fluctuations are directly accessible in scattering experiments, and were studied for polymer solutions in \cite{wirtz-ma92} and for asymmetric diblock copolymers in \cite{tta02}. In this paper we present the results of a generalization of the Fredrickson-Helfand theory \cite{FrHe87} that takes into account the effects of the electric field on the composition fluctuations in symmetric diblock copolymer melts. The paper is organized as follows. Section \ref{sect-bulk} reviews the collective description of the diblock copolymer melt. Section \ref{sect-efield} describes the coupling of the block copolymer melt to external time-independent electric fields. Section \ref{sect-hartree} gives an introduction to the Brazovskii-Fredrickson-Helfand treatment of composition fluctuations in the presence of time-independent electric fields. Section \ref{sect-results} contains our results.
\section{Theory} \subsection{A brief review of the collective description} \label{sect-bulk} \noindent In a diblock copolymer ($AB$), a chain of $N_{A}$ subunits of type $A$ is at one end covalently bonded to a chain of $N_{B}$ subunits of type $B$. A net repulsive $A$-$B$ interaction energy $E\propto \varepsilon _{AB}-(\varepsilon _{AA}+\varepsilon _{BB})/2$ between the monomers leads to microphase separation. Thus, at an order-disorder transition, concentration waves are formed spontaneously, having a wavelength of the same order as the radius of gyration of the coils. The type of long-range order that forms depends on the composition of the copolymers, $f=N_{A}/N$ with $N=N_{A}+N_{B}$. Here we treat only the symmetric composition $f=1/2$, for which the lamellar mesophase is formed. As an order parameter we consider the deviation of the density of $A$-polymers from its mean value, $\delta \Phi _{A}(\mathbf{r})=\rho _{A}(\mathbf{r})-f\rho _{m}$, where $\rho _{A}(\mathbf{r})$ is the monomer density of the $A$ monomers and $\rho _{m}$ is the average monomer density of the melt. Since the system is assumed to be incompressible, the condition $\delta \Phi _{A}(\mathbf{r})+\delta \Phi _{B}(\mathbf{r})=0$ should be fulfilled. The expansion of the effective Landau Hamiltonian in powers of the fluctuating order parameter $\delta \Phi (\mathbf{r})\equiv \delta \Phi _{A}(\mathbf{r})$ was derived by Leibler \cite{leibler80} using the random phase approximation (RPA). Following Fredrickson and Helfand \cite{FrHe87}, let us introduce instead of $\delta \Phi (\mathbf{r})$ the dimensionless order parameter $\psi (\mathbf{r})=\delta \Phi _{A}(\mathbf{r})/\rho _{m}$. The effective Hamiltonian in terms of $\psi (\mathbf{r})$ is given in units of $k_{B}T$ for symmetric composition by \cite{FrHe87} \begin{eqnarray} H(\psi ) &=&\frac{1}{2}\int_{q}\psi (-\mathbf{q})\gamma _{2}(q)\psi (\mathbf{q})+\frac{1}{4!}\int_{q_{1}}\int_{q_{2}}\int_{q_{3}}\gamma _{4}(\mathbf{q}_{1},\mathbf{q}_{2},\mathbf{q}_{3},-\mathbf{q}_{1}-\mathbf{q}_{2}-\mathbf{q}_{3}) \notag \\ &\times &\psi (\mathbf{q}_{1})\psi (\mathbf{q}_{2})\psi (\mathbf{q}_{3})\psi (-\mathbf{q}_{1}-\mathbf{q}_{2}-\mathbf{q}_{3})\,, \label{1} \end{eqnarray} where the wave vector $\mathbf{q}$ is the Fourier conjugate to $\mathbf{x}\equiv \rho _{m}^{1/3}\mathbf{r}$, and the shortcut $\int_{q}\equiv \int \frac{d^{3}q}{(2\pi )^{3}}$ is introduced. Furthermore, $\mathbf{q}_{r}$ is the Fourier conjugate to $\mathbf{r}$ and is defined via $\mathbf{q}=\mathbf{q}_{r}\rho _{m}^{-1/3}$. The quantity $\gamma _{2}(q)$ in Eq.~(\ref{1}) is defined as \begin{equation*} \gamma _{2}(q)=\left( F(y)-2\chi N\right) /N \end{equation*} with \begin{equation*} F(y)=\frac{g_{D}(y,1)}{g_{D}(y,f)g_{D}(y,1-f)-\frac{1}{4}\left[ g_{D}(y,1)-g_{D}(y,f)-g_{D}(y,1-f)\right] ^{2}}, \end{equation*} and \begin{equation*} g_{D}(y,f)=\frac{2}{y^{2}}\left( fy+e^{-fy}-1\right) \,. \end{equation*} Here $g_{D}(y,f)$ is the Debye function and $y=q_{r}^{2}R_{g}^{2}=q^{2}\rho _{m}^{2/3}R_{g}^{2}=q^{2}N/6$, where $R_{g}=b\sqrt{N/6}$ is the unperturbed radius of gyration of the copolymer chain. We assume that both blocks have equal statistical segment lengths, denoted by $b$, and furthermore $b=\rho _{m}^{-1/3}$. The collective structure factor of the copolymer melt, $S_{c}(q)=\left\langle \delta \Phi _{A}(\mathbf{q}_{r})\delta \Phi _{A}(-\mathbf{q}_{r})\right\rangle $, is related to $\gamma _{2}(q)$ by \begin{equation*} S_{c}(q)=\gamma _{2}^{-1}(q).
\end{equation*} The scattering function of a Gaussian polymer chain, $S(q_{r})=(2/N)\sum_{i<j}\left\langle \exp (-i\mathbf{q}_{r}\mathbf{r}_{ij})\right\rangle $, can be expressed by the Debye function $g_{D}(y,1)$ via the relation $S(q)=Ng_{D}(y,1)$. The vertex function $\gamma _{4}(\mathbf{q}_{1},\mathbf{q}_{2},\mathbf{q}_{3},-\mathbf{q}_{1}-\mathbf{q}_{2}-\mathbf{q}_{3})$ is expressed in the random phase approximation through the correlation functions of one and two Gaussian copolymer chains \cite{leibler80}. The vertex function as well as the quantity $F(y)$ in $\gamma _{2}(q)$ are independent of temperature. The collective structure factor $S_{c}(q)$ has a pronounced peak at the wave vector $q^{\ast }$, obeying the condition $y^{\ast }=\left( q_{r}^{\ast }R_{g}\right) ^{2}=3.7852$ for $f=1/2$, i.e., $y^{\ast }$ is independent of both temperature and molecular weight. As is well known \cite{FrHe87}, the composition fluctuations, which are described according to Brazovskii \cite{brazovskii75} within the self-consistent Hartree approach, change the type of the phase transition to the ordered state from second order to fluctuation-induced weak first order. In this theory \cite{brazovskii75}, \cite{FrHe87} the inverse of the collective structure factor is approximated near the peak by \begin{eqnarray} N\gamma _{2}(q) &=&F(y^{\ast })-2\chi N+\frac{1}{2}\frac{\partial ^{2}F}{\partial y^{\ast 2}}\left( y-y^{\ast }\right) ^{2}+... \notag \\ &=&2\left( \chi N\right) _{s}-2\chi N+\frac{1}{2}\frac{\partial ^{2}F}{\partial y^{\ast 2}}4y^{\ast }\frac{N}{6}\left( q-q^{\ast }\right) ^{2}+... \notag \\ &\simeq &Nc^{2}\left( \tilde{\tau}+\left( q-q^{\ast }\right) ^{2}\right) . \label{S_c1} \end{eqnarray} According to \cite{FrHe87} the notations \begin{eqnarray} \left( 2\chi N\right) _{s} &=&F(y^{\ast })=20.990, \notag \\ c &=&\sqrt{y^{\ast }\partial ^{2}F/\partial y^{\ast 2}/3}=1.1019, \notag \\ \tilde{\tau} &=&\frac{2\left( \chi N\right) _{s}-2\chi N}{Nc^{2}} \label{tildtau} \end{eqnarray} are introduced. Redefining the order parameter by $\psi =c^{-1}\phi $ and inserting Eq.~(\ref{S_c1}) into Eq.~(\ref{1}), the effective Hamiltonian can be written as \begin{eqnarray} H(\phi ) &=&\frac{1}{2}\int_{q}\phi (-\mathbf{q})\left( \tilde{\tau}+\left( q-q^{\ast }\right) ^{2}\right) \phi (\mathbf{q}) \notag \\ &+&\frac{\tilde{\lambda}}{4!}\int_{q_{1}}\int_{q_{2}}\int_{q_{3}}\phi (\mathbf{q}_{1})\phi (\mathbf{q}_{2})\phi (\mathbf{q}_{3})\phi (-\mathbf{q}_{1}-\mathbf{q}_{2}-\mathbf{q}_{3}), \label{H_Phi} \end{eqnarray} where $\tilde{\lambda}=\gamma _{4}(\mathbf{q}^{\ast },\mathbf{q}^{\ast },\mathbf{q}^{\ast },-\mathbf{q}^{\ast }-\mathbf{q}^{\ast }-\mathbf{q}^{\ast })/c^{4}$. Following \cite{FrHe87}, the vertex $\gamma _{4}$ is approximated by its value at $\mathbf{q}^{\ast }$. The quantity $\tilde{\tau}$ plays the role of the reduced temperature in the Landau theory of phase transitions. Note that the scattering function is obtained from Eq.~(\ref{S_c1}) as \begin{equation} S_{c}^{-1}(q)=\tilde{\tau}+\left( q-q^{\ast }\right) ^{2}. \label{scat-fl} \end{equation} In order to study the composition fluctuations in symmetric diblock copolymer melts based on the effective Hamiltonian (\ref{H_Phi}), it is convenient to introduce an auxiliary field $h(\mathbf{x})$, which is coupled linearly to the order parameter. Thus, the term $\int d^{3}x\,h(\mathbf{x})\phi (\mathbf{x})$ should be added to Eq.~(\ref{H_Phi}).
Consequently, the average value of the order parameter can be written as $\bar{\phi}(\mathbf{x})=-\delta \mathcal{F}(h)/\delta h(\mathbf{x})|_{h=0}$, where $\mathcal{F}(h)$ is the free energy related to the partition function $Z(h)$ by $\mathcal{F}(h)=-k_{B}T\ln Z(h)$. The Legendre transformation, $\delta \left( \mathcal{F}+\int d^{3}x\,h(\mathbf{x})\bar{\phi}(\mathbf{x})\right) \equiv \delta \Gamma (\bar{\phi})=\int d^{3}x\,h(\mathbf{x})\delta \bar{\phi}(\mathbf{x})$, introduces the thermodynamic potential $\Gamma (\bar{\phi})$, which is a functional of the average value of the order parameter $\bar{\phi}(\mathbf{x})$. In terms of the Gibbs potential $\Gamma (\bar{\phi})$ the spontaneous value of the order parameter is determined by the equation $0=\delta \Gamma (\bar{\phi})/\delta \bar{\phi}(\mathbf{x})$. The potential $\Gamma (\bar{\phi})$ is the generating functional of the one-particle irreducible Green's functions, and can be represented as a series by using Feynman diagrams \cite{zinn-justin}. The second derivative of the Gibbs potential with respect to the order parameter yields the inverse correlation function \begin{equation} \frac{\delta ^{2}\Gamma (\bar{\phi})}{\delta \bar{\phi}(\mathbf{x}_{1})\delta \bar{\phi}(\mathbf{x}_{2})}=S^{-1}(\mathbf{x}_{1}\mathbf{,x}_{2},\bar{\phi}). \label{gf_phi} \end{equation} The correlation functions of composition fluctuations in the disordered and ordered phase are defined by \begin{equation*} S(\mathbf{x}_{1}\mathbf{,x}_{2})=\left\langle \phi (\mathbf{x}_{1})\phi (\mathbf{x}_{2})\right\rangle ,\ \ S(\mathbf{x}_{1}\mathbf{,x}_{2},\bar{\phi})=\left\langle \Delta \phi (\mathbf{x}_{1})\Delta \phi (\mathbf{x}_{2})\right\rangle , \end{equation*} respectively, where the abbreviation $\Delta \phi (\mathbf{x})=\phi (\mathbf{x})-\bar{\phi}(\mathbf{x})$ is introduced. Eq.~(\ref{gf_phi}) is the counterpart of the well-known relation \begin{equation} \frac{\delta ^{2}\mathcal{F}(h)}{\delta h(\mathbf{x}_{1})\delta h(\mathbf{x}_{2})}|_{\,h=0}=S(\mathbf{x}_{1}\mathbf{,x}_{2}), \label{F-diff2} \end{equation} which represents the relation between the thermodynamic quantities and the correlation function of the composition $\phi (\mathbf{x})$. For $\bar{\phi}=0$, Eq.~(\ref{gf_phi}) is obtained from Eq.~(\ref{F-diff2}) using the Legendre transform. In the case of a constant auxiliary field the variational derivatives on the left-hand side of Eq.~(\ref{F-diff2}) should be replaced by partial ones, while the integration over $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ is carried out on the right-hand side. Then the derivative $-\partial \mathcal{F}/\partial h$ is the mean value of the order parameter $\bar{\phi}(\mathbf{x})$ multiplied by the volume of the system. The second derivative of the free energy, $\partial ^{2}\mathcal{F}/\partial h^{2}$, is related to the susceptibility, which in accordance with Eq.~(\ref{F-diff2}) is equal to the integral of the correlation function. The order parameter for a symmetric diblock copolymer melt can be approximated in the vicinity of the critical temperature of the microphase separation by \begin{equation} \bar{\phi}(\mathbf{x})=2A\cos \left( q^{\ast }\mathbf{n}\cdot \mathbf{x}\right) , \label{OP} \end{equation} where $\mathbf{n}$ is a unit vector in the direction of the wave vector perpendicular to the lamellae and $A$ is the amplitude.
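The numerical constants $y^{\ast }$, $\left( 2\chi N\right) _{s}$ and $c$ quoted in Eq.~(\ref{tildtau}) can be reproduced directly from the Debye functions; a minimal numerical sketch (our addition; Python with SciPy assumed):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def g_D(y, f):
    # Debye function g_D(y,f) = (2/y^2)(f y + exp(-f y) - 1)
    return 2.0 / y**2 * (f * y + np.exp(-f * y) - 1.0)

def F(y, f=0.5):
    num = g_D(y, 1.0)
    den = (g_D(y, f) * g_D(y, 1.0 - f)
           - 0.25 * (g_D(y, 1.0) - g_D(y, f) - g_D(y, 1.0 - f))**2)
    return num / den

res = minimize_scalar(F, bracket=(1.0, 4.0, 10.0))
y_star = res.x
h = 1e-4   # second derivative of F at y* by central differences
F2 = (F(y_star + h) - 2.0 * F(y_star) + F(y_star - h)) / h**2
print(y_star)                      # 3.7852
print(F(y_star))                   # (2 chi N)_s = 20.990
print(np.sqrt(y_star * F2 / 3.0))  # c = 1.1019
\end{verbatim}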
The Brazovskii self-consistent Hartree approach, which takes into account the fluctuation effects on the microphase separation, is based on the following expression for the derivative of the Gibbs potential with respect to the amplitude $A$ of the order parameter: \begin{equation} \frac{1}{A}\frac{\partial \Gamma (A)/V}{\partial A}=2\left( \tilde{\tau}+\frac{\tilde{\lambda}}{2}\int_{q}S_{0}(\mathbf{q,}A)+\frac{\tilde{\lambda}}{2}A^{2}\right) . \label{braz1} \end{equation} The second term in Eq.~(\ref{braz1}) includes the propagator \begin{equation} S_{0}(\mathbf{q,}A)=1/\left( \tilde{\tau}+\left( q-q^{\ast }\right) ^{2}+\tilde{\lambda}A^{2}\right) \label{2} \end{equation} and represents the first-order correction to the thermodynamic potential owing to the self-energy. The first two terms in the brackets on the right-hand side of Eq.~(\ref{braz1}) are combined into an effective reduced temperature denoted by $\tilde{\tau}_{r}$. The equation for $\tilde{\tau}_{r}$ becomes self-consistent by replacing $\tilde{\tau}$ in Eq.~(\ref{2}) for $S_{0}(\mathbf{q,}A)$ by $\tilde{\tau}_{r}$. Then we get \begin{equation} \tilde{\tau}_{r}=\tilde{\tau}+\frac{\tilde{\lambda}}{2}\int_{q}S(\mathbf{q,}A), \label{braz2} \end{equation} where $S^{-1}(\mathbf{q,}A)=\tilde{\tau}_{r}+\left( q-q^{\ast }\right) ^{2}+\tilde{\lambda}A^{2}$ is the inverse of the effective propagator. \subsection{Contribution of the electric field to the effective Hamiltonian} \label{sect-efield} \noindent In this subsection we discuss the coupling of the diblock copolymer melt to an external time-independent electric field. The system we consider is a linear dielectric and is free of external charges. The field satisfies the Maxwell equation \begin{equation*} \mathrm{div}\left( \varepsilon (\mathbf{r})\mathbf{E}(\mathbf{r})\right) =0, \end{equation*} where the inhomogeneities of the dielectric constant $\varepsilon (\mathbf{r})$ are caused by the inhomogeneities of the order parameter. According to \cite{AHQS93} we adopt the expansion of the dielectric constant in powers of the order parameter up to quadratic terms, \begin{equation} \varepsilon (\mathbf{r})=\varepsilon _{D}(\mathbf{r})+\beta \bar{\phi}(\mathbf{r})+\frac{1}{2}\frac{\partial ^{2}\varepsilon }{\partial \bar{\phi}^{2}}\bar{\phi}(\mathbf{r})^{2}. \label{eps-r} \end{equation} In the case of zero order parameter, $\bar{\phi}(\mathbf{r})=0$, the dielectric constant is assumed to be homogeneous, i.e., $\varepsilon _{D}(\mathbf{r})=\varepsilon _{D}$. The above Maxwell equation can be rewritten as an integral equation as follows: \begin{equation} \mathbf{E}(\mathbf{r})=\mathbf{E}_{0}+\frac{1}{4\pi }\boldsymbol{\nabla }\int d^{3}r_{1}G_{0}(\mathbf{r}-\mathbf{r}_{1})\left( \mathbf{E}(\mathbf{r}_{1})\boldsymbol{\nabla }\right) \ln \varepsilon (\mathbf{r}_{1}), \label{E_r} \end{equation} where $G_{0}(\mathbf{r})=1/r$ is the Green's function of the Poisson equation. The integral equation (\ref{E_r}) is convenient for deriving iterative solutions for the electric field, and for taking into account the dependence of $\varepsilon (\mathbf{r})$ on the order parameter. The second term in Eq.~(\ref{E_r}) takes into account the polarization due to the inhomogeneities of the order parameter.
The substitution $\mathbf{E}(\mathbf{r}_{1})=\mathbf{E}_{0}$ on the right-hand side of Eq.~(\ref{E_r}) gives the first-order correction to the external electric field as \begin{eqnarray} \mathbf{E}(\mathbf{r}) &=&\mathbf{E}_{0}+\mathbf{E}_{1}(\mathbf{r})+\dots \notag \\ &=&\mathbf{E}_{0}+\frac{1}{4\pi }\frac{\beta }{\varepsilon _{D}}\boldsymbol{\nabla }\int d^{3}r_{1}G_{0}(\mathbf{r}-\mathbf{r}_{1})\left( \mathbf{E}_{0}\boldsymbol{\nabla }\right) \bar{\phi}(\mathbf{r}_{1})+\dots \label{E-r1} \end{eqnarray} The higher-order terms $\mathbf{E}_{i}(\mathbf{r})$ ($i=2$, $3$, $...$) in the last equation are linear in the external field, too. In taking into account the electric energy in thermodynamic potentials one should distinguish between the thermodynamic potentials at fixed charges and at fixed potential \cite{landau-lifshitz8}. These thermodynamic potentials are connected with each other by a Legendre transformation. Here, in calculating the effects of fluctuations, we in fact interpret the Landau free energy as a Hamiltonian, which weights the fluctuations by the Boltzmann factor $\exp (-H)$. Therefore, the contribution of the electric field to the effective Hamiltonian corresponds to the energy of the electric field, and is given in Gaussian units by \begin{equation*} k_{B}T\Gamma _{el}=\frac{1}{8\pi }\int d^{3}r\,\varepsilon (\mathbf{r})\mathbf{E}^{2}(\mathbf{r}), \end{equation*} where $\varepsilon (\mathbf{r})$ and $\mathbf{E}(\mathbf{r})$ are given by Eqs.~(\ref{eps-r},\ref{E-r1}). In the following we consider only the polarization part of $\Gamma _{el}$. Its part quadratic in powers of the order parameter is given by \begin{eqnarray} k_{B}T\Gamma _{el} &=&\frac{1}{8\pi }\int d^{3}r\frac{1}{2}\frac{\partial ^{2}\varepsilon }{\partial \bar{\phi}^{2}}\bar{\phi}(\mathbf{r})^{2}\mathbf{E}_{0}^{2} \notag \\ &&+\frac{1}{8\pi }\int d^{3}r\,\varepsilon _{D}\frac{1}{4\pi }\nabla ^{m}\int d^{3}r_{1}G_{0}(\mathbf{r}-\mathbf{r}_{1})E_{0}^{n}\frac{\beta }{\varepsilon _{D}}\nabla ^{n}\bar{\phi}(\mathbf{r}_{1}) \notag \\ &&\times \frac{1}{4\pi }\nabla ^{m}\int d^{3}r_{2}G_{0}(\mathbf{r}-\mathbf{r}_{2})E_{0}^{k}\frac{\beta }{\varepsilon _{D}}\nabla ^{k}\bar{\phi}(\mathbf{r}_{2}) \notag \\ &=&\frac{1}{2}\int d^{3}r_{1}\int d^{3}r_{2}\,\bar{\phi}(\mathbf{r}_{1})\tilde{\gamma}_{2}^{el}(\mathbf{r}_{1}\mathbf{,r}_{2})\bar{\phi}(\mathbf{r}_{2}), \label{G_el} \end{eqnarray} with \begin{equation*} \tilde{\gamma}_{2}^{el}(\mathbf{r}_{1}\mathbf{,r}_{2})=\frac{1}{8\pi }\frac{\partial ^{2}\varepsilon }{\partial \bar{\phi}^{2}}\mathbf{E}_{0}^{2}\delta (\mathbf{r}_{1}\mathbf{-r}_{2})+\frac{\beta ^{2}}{\left( 4\pi \right) ^{3}\varepsilon _{D}}\int d^{3}r\,\nabla _{\mathbf{r}}^{m}\nabla _{\mathbf{r}_{1}}^{n}G_{0}(\mathbf{r}-\mathbf{r}_{1})\nabla _{\mathbf{r}}^{m}\nabla _{\mathbf{r}_{2}}^{k}G_{0}(\mathbf{r}-\mathbf{r}_{2})E_{0}^{n}E_{0}^{k}. \end{equation*} Note the summation convention over the indices $m$, $n$, $k$ ($=x$, $y$, $z$) in the above two equations and in Eq.~(\ref{CF-el}).
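The Fourier transform performed in the next step can be checked with the momentum representation of the Coulomb Green's function; as a brief intermediate step (our addition, with $\int_{q_{r}}\equiv \int d^{3}q_{r}/(2\pi )^{3}$ and $G_{0}(\mathbf{q}_{r})=4\pi /q_{r}^{2}$), one has \begin{equation*} \nabla _{\mathbf{r}}^{m}\nabla _{\mathbf{r}_{1}}^{n}G_{0}(\mathbf{r}-\mathbf{r}_{1})=\int_{q_{r}}q_{r}^{m}q_{r}^{n}\,\frac{4\pi }{q_{r}^{2}}\,e^{i\mathbf{q}_{r}(\mathbf{r}-\mathbf{r}_{1})}, \end{equation*} so that the $\mathbf{r}$-integration in the nonlocal term contracts the index $m$, $\sum_{m}q_{r}^{m}q_{r}^{m}=q_{r}^{2}$, and leaves \begin{equation*} \frac{\beta ^{2}}{\left( 4\pi \right) ^{3}\varepsilon _{D}}\,q_{r}^{2}\left( \frac{4\pi }{q_{r}^{2}}\right) ^{2}q_{r}^{n}q_{r}^{k}E_{0}^{n}E_{0}^{k}=\frac{\beta ^{2}}{4\pi \varepsilon _{D}}\,\frac{q_{r}^{n}q_{r}^{k}}{q_{r}^{2}}\,E_{0}^{n}E_{0}^{k}, \end{equation*} which, divided by $k_{B}T$ and rescaled with $\rho _{m}$, reproduces the second term of Eq.~(\ref{CF-el}) below.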
Expressing $\Gamma _{el}$ by the Fourier components of the order parameter yields \begin{equation} \Gamma _{el}=\frac{1}{2}\int_{q}\bar{\phi}(-\mathbf{q})\tilde{\gamma}_{2}^{el}(\mathbf{q})\bar{\phi}(\mathbf{q}), \label{G_el-FT} \end{equation} where $\tilde{\gamma}_{2}^{el}(\mathbf{q})$ is given by \begin{equation} \tilde{\gamma}_{2}^{el}(\mathbf{q})=\frac{1}{4\pi \rho _{m}k_{B}T}\left( \frac{1}{2}\frac{\partial ^{2}\varepsilon }{\partial \bar{\phi}^{2}}\mathbf{E}_{0}^{2}+\frac{\beta ^{2}}{\varepsilon _{D}}\frac{q^{n}q^{k}}{\mathbf{q}^{2}}E_{0}^{n}E_{0}^{k}\right) . \label{CF-el} \end{equation} The electric contribution to the correlation function can be obtained using Eq.~(\ref{gf_phi}). Note that the factor $\rho _{m}^{-1}$ in Eq.~(\ref{CF-el}) is due to the length redefinition $\mathbf{r}=\rho _{m}^{-1/3}\mathbf{x}$. Eqs.~(\ref{G_el})-(\ref{CF-el}) are used in the subsequent section to analyze the influence of the electric field on the composition fluctuations of the order parameter. Assuming that the electric field is directed along the $z$-axis, and denoting the angle between the field and the wave vector $\mathbf{q}$ by $\theta $, we obtain the quantity $\tilde{\gamma}_{2}^{el}(\mathbf{q})$ in Eq.~(\ref{CF-el}) as \begin{equation} \tilde{\gamma}_{2}^{el}(\mathbf{q})=\tilde{\alpha}\cos ^{2}\theta +\tilde{\alpha}_{2}, \label{gamma-el} \end{equation} where the notations \begin{equation} \tilde{\alpha}=\frac{1}{4\pi \rho _{m}k_{B}T}\frac{\beta ^{2}}{\varepsilon _{D}}\mathbf{E}_{0}^{2},\ \ \tilde{\alpha}_{2}=\frac{1}{4\pi \rho _{m}k_{B}T}\frac{1}{2}\frac{\partial ^{2}\varepsilon }{\partial \bar{\phi}^{2}}\mathbf{E}_{0}^{2} \label{alpha} \end{equation} are used. The thermodynamic potential given by Eq.~(\ref{G_el}) is quadratic in both the external electric field $E_{0}$ and the order parameter. The terms $E_{i}(r)$ with $i>1$ in Eq.~(\ref{E-r1}) are still linear in $E_{0}$, but contain higher powers of the order parameter and its derivatives. In the vicinity of the order-disorder transition, where the order parameter is small and its inhomogeneities are weak, these higher-order contributions to $\Gamma _{el}$ can be neglected. \subsection{Hartree treatment of fluctuations in the presence of the electric field} \label{sect-hartree} The Brazovskii-Hartree approach in the absence of the electric field is summarized in Eqs.~(\ref{braz1}, \ref{braz2}). According to Eq.~(\ref{gf_phi}), the contribution of the electric field should be taken into account both in the thermodynamic potential and in the propagator. Therefore, instead of Eq.~(\ref{braz1}) we obtain \begin{eqnarray} &&\frac{1}{A}\frac{\partial }{\partial A}\left( g-\tilde{\gamma}_{2}^{el}(\mathbf{q}^{\ast })A^{2}\right) =2\left( \tilde{\tau}+\right. \notag \\ &&\left. \frac{\tilde{\lambda}}{2}\int_{q}\frac{1}{\tilde{\tau}+\tilde{\gamma}_{2}^{el}(\mathbf{q})+(q-q^{\ast })^{2}+\tilde{\lambda}\,A^{2}}+\frac{\tilde{\lambda}}{2}\,A^{2}\right) , \label{GAns} \end{eqnarray} where $g=\Gamma (A)/V$ is the thermodynamic potential per volume. The term $\tilde{\gamma}_{2}^{el}(\mathbf{q}^{\ast })$ in (\ref{GAns}) is the contribution to $g$ associated with the lamellae in the electric field below the transition, with the orientation defined by the angle between the wave vector $\mathbf{q}^{\ast }$ and the field strength $\mathbf{E}_{0}$, $\mathbf{q}^{\ast }\mathbf{E}_{0}=q^{\ast }E_{0}\cos \theta ^{\ast }$.
The equilibrium orientation of the lamellae is derived by minimization of the thermodynamic potential with respect to the angle $\theta ^{\ast }$, and yields $\theta ^{\ast }=\pi /2$. As a consequence, the modulations of the order parameter perpendicular to the electric field possess the lowest electric energy. The term $\tilde{\gamma}_{2}^{el}(\mathbf{q}^{\ast })A^{2}$ in (\ref{GAns}) then drops out in the equilibrium state and will not be considered below. The isotropic part of $\tilde{\gamma}_{2}^{el}(\mathbf{q})$ in Eq.~(\ref{alpha}), which is associated with $\tilde{\alpha}_{2}$, shifts $\tilde{\tau}$ for positive $\partial ^{2}\varepsilon /\partial \bar{\phi}^{2}$ to higher values, i.e., it shifts the critical temperature to lower values and favors mixing. The sign of this term agrees with that in \cite{wirtz-ma92}, where the effect of this term was observed and studied for polymer solutions near the critical point. Since this term is isotropic and small, it does not contribute directly to alignment and will not be considered further. The demixing of low-molecular-weight mixtures studied recently in \cite{tsori-nature04} is due to field gradients. The fluctuations in the presence of the electric field become anisotropic due to $\tilde{\gamma}_{2}^{el}(\mathbf{q})$ in Eq.~(\ref{GAns}). The first two terms in the brackets on the right-hand side of Eq.~(\ref{GAns}) define, as before, an effective $\tilde{\tau}$, which is denoted by $\tilde{\tau}_{r}$. Replacing $\tilde{\tau}$ under the integral in Eq.~(\ref{GAns}) by $\tilde{\tau}_{r}$, we obtain a self-consistent equation for $\tilde{\tau}_{r}$, \begin{equation} \tilde{\tau}_{r}=\tilde{\tau}+\frac{\tilde{\lambda}}{2}\int_{q}\frac{1}{\tilde{\tau}_{r}+\tilde{\alpha}\cos ^{2}\theta +(q-q^{\ast })^{2}+\tilde{\lambda}\,A^{2}}\,, \label{m} \end{equation} which generalizes Eq.~(\ref{braz2}) to $\mathbf{E}\neq 0$. Carrying out the integration over $\mathbf{q}$ we obtain \begin{eqnarray} \frac{\tilde{\lambda}}{2}\int_{q}\frac{1}{\tilde{\tau}_{r}+\tilde{\lambda}\,A^{2}+\tilde{\alpha}\cos ^{2}\theta +(q-q^{\ast })^{2}} &=&\frac{\tilde{\lambda}q^{\ast 2}}{4\pi \sqrt{\tilde{\alpha}}}\mathrm{arcsinh}\sqrt{\frac{\tilde{\alpha}}{\tilde{\tau}_{r}+\tilde{\lambda}\,A^{2}}} \notag \\ &=&\frac{d\tilde{\lambda}}{N\sqrt{\tilde{\alpha}}}\mathrm{arcsinh}\sqrt{\frac{\tilde{\alpha}}{\tilde{\tau}_{r}+\tilde{\lambda}\,A^{2}}}, \label{HarApp} \end{eqnarray} where the notation $d=3y^{\ast }/2\pi $ with $y^{\ast }=q^{\ast 2}\,N/6=3.7852$ is used. Eq.~(\ref{m}) shows that the fluctuations in the presence of the electric field are suppressed due to the angular dependence of the integrand. Consequently, an electric field weakens the first-order phase transition. In computing the integral in Eq.~(\ref{HarApp}) we note that the leading contribution comes from the peak of the structure factor at $q^{\ast }$. The contributions of large wave vectors, which become finite after the introduction of an appropriate cutoff at large $q$, are expected to renormalize local parameters such as the $\chi $ parameter. Very recently these fluctuations have been considered in \cite{kudlay03}. The self-consistent equation for $\tilde{\tau}_{r}$ in the presence of the electric field reads \begin{equation} \tilde{\tau}_{r}=\tilde{\tau}+\frac{\,\tilde{\lambda}d}{N\sqrt{\tilde{\alpha}}}\mathrm{arcsinh}\sqrt{\frac{\tilde{\alpha}}{\tilde{\tau}_{r}+\tilde{\lambda}\,A^{2}}}\,.
\label{tau-E} \end{equation} The equation for $\tilde{\tau}_{r}$ in the Brazovskii-Fredrickson-Helfand theory is obtained from Eq.~(\ref{tau-E}) in the limit $\tilde{\alpha}\rightarrow 0$ as \begin{equation*} \tilde{\tau}_{r}=\tilde{\tau}+\frac{\,\tilde{\lambda}d}{N}\left( \tilde{\tau}_{r}+\tilde{\lambda}A^{2}\right) ^{-1/2}\,. \end{equation*} Because the quantities $\tilde{\tau}$ and $\tilde{\lambda}$ are of order $O(N^{-1})$, it is convenient to replace them by \begin{equation} \tau =\tilde{\tau}N\ ,\ \lambda =\tilde{\lambda}N. \label{tau} \end{equation} Note that the transformation from $\tilde{\tau}$ to $\tau $ implies the redefinition of the thermodynamic potential $g\rightarrow gN$. Instead of Eq.~(\ref{tau-E}) we then obtain \begin{equation} \tau _{r}=\tau +\frac{\,\lambda d}{\sqrt{N\alpha }}\mathrm{arcsinh}\sqrt{\frac{\alpha }{\tau _{r}+\lambda \,A^{2}}}\,, \label{tau-E1} \end{equation} where the quantity $\alpha $ is defined by $\alpha =\tilde{\alpha}N=(\beta ^{2}N/4\pi \rho _{m}k_{B}T\varepsilon _{D})\mathbf{E}_{0}^{2}$. For symmetric composition $\lambda $ was computed in \cite{leibler80} as $\lambda =106.18$. Using the substitution $t=\tau _{r}+\lambda \,A^{2}$ we obtain \begin{equation} t=\tau +\frac{\,\lambda d}{\sqrt{N\alpha }}\mathrm{arcsinh}\sqrt{\frac{\alpha }{t}}+\lambda \,A^{2}. \label{t} \end{equation} The derivative of the potential $g$ with respect to the amplitude of the order parameter $A$ is obtained from Eq.~(\ref{GAns}) as \begin{equation*} \frac{\partial g}{\partial A}=2\tau _{r}(A)\,A+\lambda \,A^{3}=2t(A)\,A-\lambda \,A^{3}. \end{equation*} The integration gives the thermodynamic potential as \begin{equation} g=\int\limits_{0}^{A}2t(A)\,A\,dA-\frac{\lambda }{4}A^{4}=\frac{1}{2\lambda }\left( t{}^{2}-t_{0}^{2}\right) +\frac{d}{\sqrt{N}}\left( \sqrt{t+\alpha }-\sqrt{t_{0}+\alpha }\right) -\frac{\lambda }{4}A^{4}\,, \label{g} \end{equation} where the inverse susceptibility of the disordered phase, $t_{0}\equiv t(A=0)$, satisfies the equation \begin{equation} t_{0}=\tau +\frac{d\,\lambda }{\sqrt{N\alpha }}\mathrm{arcsinh}\sqrt{\frac{\alpha }{t_{0}}}. \label{t0} \end{equation} Eqs.~(\ref{t}, \ref{g}, \ref{t0}) generalize the Fredrickson-Helfand treatment of the composition fluctuations in symmetric diblock copolymer melts to the presence of an external time-independent electric field. \section{Results} \label{sect-results} \noindent The position of the phase transition is determined by the conditions \begin{equation} g=0,\ \ \frac{\partial g}{\partial A}=0, \label{T_c} \end{equation} which result in the following equation: \begin{equation} \frac{1}{2}\left( t{}^{2}+t_{0}^{2}\right) =\frac{d\lambda }{\sqrt{N}}\left( \sqrt{t+\alpha }-\sqrt{t_{0}+\alpha }\right) . \label{gTr} \end{equation} The perturbational solution of Eqs.~(\ref{t}), (\ref{t0}) and (\ref{gTr}) to first order in powers of $\alpha $ yields for the transition temperature \begin{equation} \tau _{t}=-2.0308\left( d\,\lambda \right) ^{2/3}N^{-1/3}+0.48930\,\alpha . \label{tau_t} \end{equation} By making use of Eqs.~(\ref{tildtau}) and (\ref{tau}) we find \begin{equation} \left( \chi N\right) _{t}=\left( \chi N\right) _{s}+1.0154\,c^{2}\left( d\,\lambda \right) ^{2/3}N^{-1/3}-0.48930\,\alpha \frac{c^{2}}{2}.
\label{chi_t} \end{equation} The solution of Eqs.~(\ref{t}) and (\ref{gTr}) results in the following expression for the amplitude of the order parameter at the transition: \begin{equation} A_{t}=1.4554\,\left( d^{2}/\lambda \right) ^{1/6}\,N^{-1/6}-0.42701\,\left( d^{2}\lambda ^{5}\right) ^{-1/6}\,N^{1/6}\,\alpha \,, \label{A} \end{equation} where $\alpha =N\beta ^{2}/(4\pi \rho _{m}k_{B}T\varepsilon _{D})\mathbf{E}_{0}^{2}$, as defined above. The corrections associated with the electric field in Eqs.~(\ref{tau_t}-\ref{A}) are controlled by the dimensionless expansion parameter $\alpha N^{1/3}/(d\lambda )^{2/3}$. Inserting the values of the coefficients for $f=1/2$ gives \begin{equation} \left( \chi N\right) _{t}=10.495+41.018\,N^{-1/3}-0.29705\,\alpha , \label{chi-t1} \end{equation} \begin{equation} A_{t}=0.81469\,N^{-1/6}-0.0071843\,N^{1/6}\,\alpha \,. \label{A2-1} \end{equation} Note that for $N\rightarrow \infty $ the strength of the electric field $E_{0}$ should tend to zero in order that the dimensionless expansion parameter $\alpha N^{1/3}/(d\lambda )^{2/3}$ remain small. Eqs.~(\ref{chi-t1}-\ref{A2-1}) are our main results. The terms in (\ref{chi-t1}-\ref{A2-1}) depending on $\alpha $ describe the effects of the electric field on fluctuations, and were not considered in previous studies \cite{AHQS93}-\cite{AHQH+S94}. This influence of the electric field on fluctuations originates from the term $\tilde{\gamma}_{2}^{el}(q)$ under the integral in Eq.~(\ref{GAns}). Eq.~(\ref{chi-t1}) shows that the electric field shifts the parameter $\left( \chi N\right) _{t}$ to lower values, and correspondingly the transition temperature to higher values, towards its mean-field value. The latter also means that the electric field favors demixing with respect to the field-free case. According to Eq.~(\ref{A2-1}) the electric field lowers the value of the order parameter at the transition point. In other words, the electric field weakens the fluctuations, and consequently the first-order phase transition. We will now discuss the limits of the Brazovskii-Hartree approach in the presence of the electric field. The general conditions for the validity of the Brazovskii-Hartree approach, which are discussed in \cite{FrHe87}, \cite{swift-hohenberg77}, hold also in the presence of the electric field. Essentially, when taking into account the fluctuations, the peak of the structure factor at the transition should remain sufficiently sharp, i.e., the transition should be a weak first-order transition. This requires large values of $N$ \cite{FrHe87}. A specific approximation used in the presence of the electric field consists in truncating the expansion of the dielectric constant in powers of the order parameter at second order, which, however, is justified in the vicinity of the transition. The smallness of the term linear in $\alpha $ in Eqs.~(\ref{chi-t1}-\ref{A2-1}) imposes a condition on $E_{0}$: $E_{0}^{2}\ll \lambda ^{2/3}\rho _{m}k_{B}T\varepsilon _{D}/\beta ^{2}N^{4/3}$. The applicability of the linear-dielectric approximation also requires a limitation on the strength of the electric field. The dependence of the propagator on $\alpha $ in the field theory associated with the effective Hamiltonian, which is due to the term $\tilde{\gamma}_{2}^{el}(q)$, enables us to draw the general conclusion that fluctuations (in the Brazovskii-Hartree approach and beyond) are suppressed for large $\alpha $.
Accordingly, one expects that the order-disorder transition will become second order for strong fields. The numerical solution of Eqs.~(\ref{T_c}) yields that the mean-field behavior is recovered only in the limit $\alpha \rightarrow \infty $, which is, however, outside the applicability of linear electrodynamics. The angular dependence of $\gamma _{4}$, which has not been taken into account in the present work, gives rise to corrections beyond linear order in $\alpha $. The main prediction of the present work, that the electric field weakens fluctuations, agrees qualitatively with the behavior of diblock copolymer melts in shear flow studied in \cite{cates-milner89}, where the shear also suppresses the fluctuations. Due to the completely different couplings to the order parameter, the calculation schemes are different in the two cases. We will now estimate the shift of the critical temperature for the diblock copolymer poly(styrene-block-methylmethacrylate) in an electric field. Without an electric field its transition temperature is at $182^{\circ }$C for a molecular weight of $31000$~g/mol~\cite{AHPQ+S92}. We use the following values of the parameters~\cite{AHQH+S94}: $\varepsilon _{\text{PS}}=2.5$, $\varepsilon _{\text{PMMA}}=5.9$\ \footnote{these values for $\varepsilon $ refer to the liquid state ($T\approx 160^{\circ }$C), since the corresponding orientation experiments are performed in this temperature range}, $\beta =\varepsilon _{\text{PMMA}}-\varepsilon _{\text{PS}}$, $\varepsilon _{D}=(\varepsilon _{\text{PS}}+\varepsilon _{\text{PMMA}})/2$, and $\chi =0.012+17.1/T$. The estimation of the number of statistical segments $N$ using the relation $\left( \chi N\right) _{t}=10.495+41.018\,N^{-1/3}$ for $T_{t}=182^{\circ }$C yields $N\approx 331$. As described in \cite{AHQH+S94} and \cite{RHS90}, for this calculation a mean statistical segment length $b=7.1~\mathrm{\mathring{A}}$ was assumed, while the approximation $b=\rho _{m}^{-1/3}$ used here implies $b\approx 5.2~\mathrm{\mathring{A}}$~\footnote{The relation $\rho _{m}=\rho\,N\,N_A/M$, where $N_A$ denotes the Avogadro number, $M$ the molecular mass of the block copolymer and $\rho\approx1.12~\mathrm{g}/\mathrm{cm}^3$ its mass density, results in $\rho_m\approx7.20\times10^{21}\mathrm{cm}^{-3}$}. This comparison reflects the limits of the above approximation. For a field strength $E_{0}=40~\mathrm{V/\mu m}$ the shift is obtained using Eq.~(\ref{chi-t1}) as \begin{equation*} \Delta T_{t}\approx 2.5\ \mathrm{K}. \end{equation*} The numerical value of the dimensionless expansion parameter, $\alpha N^{1/3}/(d\lambda )^{2/3}$, is computed in the case under consideration as $0.059$. The experimental determination of $\Delta T_{t}$ would be very helpful for a more detailed fit of the theory to experimental data.
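This estimate is easily reproduced numerically; a minimal sketch (our addition; the Gaussian-unit conversion of $E_{0}$ and the bracketing interval are our assumptions, and the resulting shift agrees with the quoted value to within the accuracy of the parameter values):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

N     = 331                  # statistical segments
eps_D = (2.5 + 5.9) / 2.0    # mean dielectric constant
beta  = 5.9 - 2.5
rho_m = 7.20e21              # monomer density [cm^-3]
k_B   = 1.3807e-16           # erg/K
E0    = 40e6 / 2.9979e4      # 40 V/um -> statvolt/cm

def alpha(T):                # alpha = N beta^2 E0^2/(4 pi rho_m k_B T eps_D)
    return N * beta**2 * E0**2 / (4 * np.pi * rho_m * k_B * T * eps_D)

def transition(T, field):    # chi(T) N - (chi N)_t, chi = 0.012 + 17.1/T
    a = alpha(T) if field else 0.0
    return (0.012 + 17.1 / T) * N - (10.495 + 41.018 * N**(-1/3)
                                     - 0.29705 * a)

T0 = brentq(lambda T: transition(T, False), 300.0, 600.0)
T1 = brentq(lambda T: transition(T, True), 300.0, 600.0)
print(T0 - 273.15, T1 - T0)  # ~182 C, and a shift of a few K
\end{verbatim}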
The scattering function in the disordered phase, with the composition fluctuations in the presence of the electric field taken into account, reads \begin{equation} S_{dis}(\mathbf{q})\simeq \frac{1}{t_{0}+\alpha \cos ^{2}\theta +(q-q^{\ast })^{2}}. \label{S_q} \end{equation} The fluctuational part of the scattering function in the ordered state is obtained as \begin{equation} S_{ord}(\mathbf{q})\simeq \frac{1}{t+\alpha \cos ^{2}\theta +(q-q^{\ast })^{2}}. \label{Sq-A} \end{equation} At the transition point the expansion of $t_{0}$ and $t$ in powers of the field strength is derived from Eqs.~(\ref{t}, \ref{t0}) as \begin{equation} t_{0,t}=0.20079(\lambda \,d)^{2/3}\,N^{-1/3}-0.20787\,\alpha +..., \label{t0-E} \end{equation} \begin{equation} t_{t}=1.0591(\lambda \,d)^{2/3}\,N^{-1/3}-0.62147\,\alpha +.... \label{t-E} \end{equation} The difference between $t_{0,t}$ and $t_{t}$, which is due to the finite value of the order parameter at the transition, results in a jump of the peak height at the transition point. Owing to the term $\alpha \cos ^{2}\theta $, the structure factor becomes anisotropic in the presence of the electric field. The structure factor depends on the electric field via $t_{0,t}$ ($t_{t}$) and via the term $\alpha \cos ^{2}\theta $. The suppression of $t_{0,t}$ ($t_{t}$) in an electric field according to Eqs.~(\ref{t0-E}, \ref{t-E}) results in an increase of the peak height. Thus, for wave vectors perpendicular to the field direction, where the angular-dependent term is zero, the peak is more pronounced than for $\mathbf{E}_{0}=0$. In the opposite case of wave vectors parallel to $\mathbf{E}_{0}$, the anisotropic term $\alpha \cos ^{2}\theta $ dominates, so that the peak is less pronounced than for $\mathbf{E}_{0}=0$. Composition fluctuations can be associated with fluctuational modulations of the order parameter. According to Eqs.~(\ref{S_q}-\ref{Sq-A}), fluctuational modulations of the order parameter with wave vectors parallel to the field are suppressed most strongly. The latter correlates with the behavior in the ordered state, where the lamellae with the wave vector perpendicular to the field direction possess the lowest energy. \section{Conclusions} We have generalized the Fredrickson-Helfand theory of microphase separation in symmetric diblock copolymer melts by taking into account the effects of the electric field on the composition fluctuations. We have shown that an electric field suppresses the fluctuations and therefore weakens the first-order phase transition. However, the mean-field behavior is recovered only in the limit $\alpha \rightarrow \infty $, which is outside the applicability of linear electrodynamics. The collective structure factor in the disordered phase becomes anisotropic in the presence of the electric field. Fluctuational modulations of the order parameter along the field direction are suppressed most strongly. Thus, the anisotropy of fluctuational modulations in the disordered state correlates with the parallel orientation of the lamellae in the ordered state. \begin{acknowledgments} \noindent Financial support from the Deutsche Forschungsgemeinschaft, SFB 418, is gratefully acknowledged. \end{acknowledgments}
\section{Introduction} Within the field of spintronics,\cite{book_awschalom_2002,zutic_2004,awschalom_2007} semiconductor devices with spin-orbit coupling have attracted great attention in recent years because they offer a setting where electronic spin polarizations can be generated and manipulated in the absence of ferromagnetism or external magnetic fields.\cite{awschalom_2009} This opens the perspective of adding the spin degree of freedom to the existing semiconductor logic in information technology without encountering the challenge of artificially integrating local magnetic fields in devices. From the applications point of view it is clearly desirable to maximize the spin lifetimes and coherence lengths in semiconductor spintronics devices. In this respect an ideal candidate is the persistent spin helix (PSH), a spin density wave state with infinite lifetime, which exists in two-dimensional electron systems with Rashba and linear Dresselhaus spin-orbit interaction of equal magnitude~\cite{schliemann,bernevig1} due to an SU(2) symmetry of the corresponding Hamiltonian.\cite{bernevig1} On a less abstract level this can be understood as the combined effect of diffusion and spin precession: the momentum-dependent spin-orbit field is perpendicular to the PSH wave vector, and its magnitude grows linearly with the projection of the momentum onto the direction of this wave vector. If, for instance, a spin-up electron starts at the crest of $z$-spin polarization and travels at the Fermi velocity in the direction of the PSH wave vector, its spin precesses precisely by an angle of $2\,\pi$ during the time it takes to cover the distance of one PSH wavelength. If the electron propagates away from this direction, the spin will still match the phase of the PSH everywhere because the larger traveling time to, e.g., the neighboring crest is exactly compensated by a smaller precession frequency.
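This phase-matching argument can be made explicit in a few lines of code. The toy sketch below (illustrative units only; the prefactors are chosen so that one PSH wavelength corresponds to a $2\,\pi$ precession) confirms that the accumulated precession angle per wavelength is independent of the propagation direction:
\begin{verbatim}
import math

# Precession frequency ~ cos(theta) (projection of the momentum on
# the PSH wave vector), traveling time per wavelength ~ 1/cos(theta):
# the accumulated phase is direction independent.
v, q0 = 1.0, 2.0 * math.pi      # arbitrary units, PSH wavelength = 1
omega0 = v * q0                 # precession rate for theta = 0
for theta in (0.0, 0.3, 0.6, 1.2):
    t = 1.0 / (v * math.cos(theta))       # time per axial wavelength
    phase = omega0 * math.cos(theta) * t
    print(theta, phase / (2.0 * math.pi))  # -> 1.0 for every theta
\end{verbatim}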
One promising development in this direction is the recent realization of the persistent spin helix in a GaAs/AlGaAs quantum well by Koralek {\it et al.}~\cite{koralek}. They used transient spin grating spectroscopy\cite{cameron} to optically excite a sinusoidal profile of out-of-plane spin polarization with the ``magic'' PSH wave vector. Due to the presence of symmetry breaking effects in a real quantum well, instead of a state with infinite lifetime, two decaying modes were observed. Koralek \etal~named these two modes the symmetry--{\em reduced} and {\em --enhanced} mode---the latter being the PSH. Although the lifetime of the observed PSH mode is not infinite, it is still of the order of $100\, \mathrm{ps}$, exceeding typical transient spin grating lifetimes by two orders of magnitude. Intriguingly, the temperature dependence of the PSH lifetime displays a maximum close to $100 \, \mathrm{K}$. In order to improve the lifetimes it is necessary to figure out what the dominant relaxation mechanisms are. The temperature dependence of the PSH lifetime suggests the involvement of electron-electron interactions,~\cite{koralek} which are known to relax spin currents via the spin Coulomb drag effect.\cite{Amico1, flensberg, Amico2, weber} However, since electron-electron interactions respect the SU(2) symmetry of the PSH state, they cannot be the sole reason for a finite lifetime; in addition, a symmetry breaking term must be present.\cite{bernevig1} Here, we consider extrinsic spin-orbit interaction~\cite{raimondi} and cubic Dresselhaus spin-orbit interaction\cite{stanescu} as possible sources of symmetry breaking, as proposed by Koralek {\it et al.}\cite{koralek}. It is the purpose of this work to develop a theoretical understanding of the PSH lifetime and of how it could be enhanced. In particular we consider the effect of electron-electron interactions in the diffusive D'yakonov-Perel' regime. Regarding symmetry breaking mechanisms, our model (Sec.~\ref{sec:model}) takes into account the effect of extrinsic spin-orbit coupling, which results from the interaction of the conduction electron spins with impurities, as well as the cubic Dresselhaus spin-orbit interaction, which is known to be present in the experimental quantum well to a non-negligible extent.~\cite{koralek} In Sec.~\ref{sec:spindiffeq} we derive a diffusion equation for the spin density in our model system and discuss the contribution of the different symmetry breaking mechanisms. In Sec.~\ref{sec:schematic} we present analytical solutions for the simplified situation where only one symmetry breaking mechanism is present. We propose that a spatially damped spin profile could enhance the lifetime compared to the PSH lifetime. For the parameters of the GaAs/AlGaAs quantum well used by Koralek {\it et al.}~\cite{koralek} (Sec.~\ref{sec:numbers}) it turns out that electron-electron interactions in combination with cubic Dresselhaus spin-orbit interaction are the key ingredients to understand the temperature dependence of the PSH lifetime. Detailed conclusions and an outlook are given in Sec.~\ref{sec:conclusions}. \section{Model\label{sec:model}} In an envelope-function description\cite{winkler} of the conduction band electrons in semiconductor quantum wells, the spin-orbit interaction takes the form of a momentum-dependent, in-plane effective magnetic field. The two dominant contributions to this field are linear in the in-plane momentum: The Rashba field,~\cite{rashba} which has winding number $1$ in momentum space, is caused by structure inversion asymmetry and can be tuned by changing the doping imbalance on both sides of the quantum well. The linear Dresselhaus\cite{dresselhaus} contribution, in contrast, has winding number $-1$ and its physical origin is the bulk inversion asymmetry of the zinc-blende type quantum well material. It is proportional to the kinetic energy of the electron's out-of-plane motion and therefore decreases quadratically with increasing well width. In addition, a small cubic Dresselhaus spin-orbit interaction is present as well. Thus we write the Hamiltonian for conduction band electrons in the (001) grown quantum well as \begin{align} H&~=~H_\textnormal{0}+H_\textnormal{imp}+H_\textnormal{e-e}\label{H}.
\end{align} The first term represents a two-dimensional electron gas (2DEG) with a quadratic dispersion and intrinsic spin-orbit interaction \begin{align} H_0&~=~\sum_{s,s'; \vc{k}} \psi_{\vc{k} s'}^\dagger\,\mathcal{H}_{0 s's}\,\psi_{\vc{k} s}\label{general2ndq} \end{align} with the $2 \times 2$ matrix in spin space \begin{align} \mathcal{H}_0&~=~\epsilon_k+ \vc{b}(\vc{k})\cdot {\bf \vc{\sigma}}.\label{H0} \end{align} The $\psi_{\vc{k} s}^\dagger \left(\psi_{\vc{k} s}\right)$ are creation (annihilation) operators for electrons with momentum $\vc{k}$ and spin projection $s$. Within the standard envelope function approximation\cite{winkler} one finds $\epsilon_k=\frac{\hbar^2 k^2}{2\, m}$ where $m$ is the effective mass. The vector of Pauli matrices is denoted by $\vc{\sigma}$ and the in-plane spin-orbit field \begin{align} \vc{b}(\vc{k})&~=~\vc{b}_R(\vc{k})+\vc{b}_D(\vc{k})\label{bk} \end{align} contains Rashba- as well as linear and cubic Dresselhaus spin-orbit interactions\cite{weng} (henceforth $\hbar \equiv 1$), \begin{align} \vc{b}_R(\vc{k})&= \alpha\,v_F\, \begin{pmatrix} k_y\\ -k_x \end{pmatrix}\label{br},\\ \vc{b}_D(\vc{k})&=v_F\cos 2\phi\left[{\beta'} \begin{pmatrix} -k_x\\ k_y \end{pmatrix} -\gamma\,\frac{k^3}{4} \begin{pmatrix} \cos3\theta\\ \sin3\theta \end{pmatrix} \right]\nonumber\\ &\quad+ v_F\sin 2\phi\left[{\beta'} \begin{pmatrix} k_y\\ k_x \end{pmatrix} +\gamma\frac{k^3}{4} \begin{pmatrix} \sin3\theta\\ -\cos 3\theta \end{pmatrix}\label{bd} \right]. \end{align} Here, $v_F$ is the Fermi velocity, the angle $\theta$ gives the direction of $\vc{k}$ with respect to the $x$ axis and $\phi$ denotes the angle between the latter and the (100) crystal axis. The strength of the Rashba spin-orbit field is controlled by $\alpha$ and the coefficient for linear Dresselhaus coupling $\beta'$ contains a momentum-dependent renormalization due to the presence of cubic Dresselhaus coupling, \begin{align} {\beta'}&~=~\beta-\gamma\,{k^2}/{4},\label{betapr} \end{align} where the ``bare'' linear Dresselhaus coefficient $\beta$ is related to the one for cubic Dresselhaus $\gamma$ via $\beta=\gamma \mean{k_z^2}=\gamma\left(\pi/d \right)^2$ ($d$ being the quantum well width). We assume in the following that the spin-orbit interaction is small compared to the Fermi energy $E_F$, i.e., $b_F/E_F\ll 1$, where $b_F\equiv b(k_F)$ with $k_F$ being the Fermi momentum. Furthermore, we have included in Eq.~\eqref{H} electron-impurity interactions, \begin{align} H_\textnormal{imp}&~=~\frac{1}{V} \sum_{s,s';\vc{k}, \vc{k'}}\psi_{\vc{k'} s'}^\dagger U_{\vc{k'}\vc{k} s's}\, \psi_{\vc{k}s}, \end{align} (henceforth volume $V\equiv 1$). The impurity potential is a matrix in spin space, \begin{align} \hat{U}_{\vc{k}\vc{k'}}&~=~ V^\textnormal{imp}_{\vc{k}\vc{k'}}\left(\left\{ \vc{R}_i\right\}\right)\left(1+\sigma_z\,\frac{i \lambda_0^2}{4}\,\left[\vc{k}\times\vc{k'}\right]_z\right),\label{U} \end{align} where the spin-dependent part arises from extrinsic spin-orbit interaction\cite{raimondi} of the conduction electrons with the impurity potential. 
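Before turning to the real-space form of the impurity potential, we note that the intrinsic spin-orbit field of Eqs.~\eqref{br}-\eqref{bd}, including the renormalization \eqref{betapr}, is straightforward to encode numerically. The following Python sketch (an illustration, not part of the derivation; all coupling constants are left as free inputs) does so:
\begin{verbatim}
import numpy as np

# In-plane spin-orbit field b(k) = b_R(k) + b_D(k), Eqs. (br)-(bd);
# hbar = 1. theta is the direction of k, phi the angle between the
# x axis and the (100) crystal axis.
def b_field(k, theta, phi, alpha, beta, gamma, v_F):
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    beta_p = beta - gamma * k**2 / 4.0          # Eq. (betapr)
    bR = alpha * v_F * np.array([ky, -kx])      # Eq. (br)
    c3, s3 = np.cos(3*theta), np.sin(3*theta)
    bD = (v_F * np.cos(2*phi) * (beta_p * np.array([-kx, ky])
                                 - gamma * k**3 / 4.0 * np.array([c3, s3]))
        + v_F * np.sin(2*phi) * (beta_p * np.array([ky, kx])
                                 + gamma * k**3 / 4.0 * np.array([s3, -c3])))
    return bR + bD
\end{verbatim}
For $\phi=\pi/4$, $\alpha=\beta$ and $\gamma=0$ the field returned by this function points along $x$ with a magnitude proportional to $k_y$, which is precisely the SU(2)-symmetric configuration that supports the persistent spin helix.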
In real space, the matrix operator for electron-impurity interactions reads $ \hat{U}(\vc{x})=V^\textnormal{imp}(\vc{x})+i \,{\lambda_0^2}/{4}\left[\vc{\sigma}\times \vc{\nabla}V^\textnormal{imp}(\vc{x}) \right]\cdot \vc{\nabla}$, with $V^\textnormal{imp}(\vc{x})=\sum_i v(\vc{x}-\vc{R}_i)$, where $v(\vc{x})$ denotes the potential of each single impurity, $\left\{ \vc{R}_i\right\}$ are the impurity positions (eventually to be averaged over) and $\lambda_0$ is a known material parameter ($\lambda_0=4.7\times 10^{-10}\,\textnormal{m}$ for GaAs). Eq.~\eqref{U}, with $V^{\rm imp}_{\vc{k}\vc{k'}}\left(\left\{\vc{R}_j\right\}\right)=\sum_j v(\vc{k'}-\vc{k})\,e^{-i (\vc{k'}-\vc{k})\cdot \vc{R}_j} $, is then obtained by Fourier transformation. Finally, the Hamiltonian \eqref{H} contains electron-electron interactions, \begin{align} H_\textnormal{e-e}&~=~\frac{1}{2} \sum_{\substack{\vc{k_1}\dots \vc{k_{4}}\\ s_1, s_2}}V_{\vc{k_{3}},\vc{k_{4}},\vc{k_{1}},\vc{k_{2}}}\,\psi_{\vc{k_{4}} s_2}^\dagger \psi_{\vc{k_{3}} s_1}^\dagger \psi_{\vc{k_1} s_1}\psi_{\vc{k_2} s_2} \end{align} with a Thomas-Fermi screened Coulomb potential of the form $V_{\vc{k_{3}},\vc{k_{4}},\vc{k_{1}},\vc{k_{2}}}\approx \frac{v(|\vc{k_{3}}-\vc{k_{1}}|)}{\epsilon(|\vc{k_{3}}-\vc{k_{1}}|)}$ where $v(q)=\frac{\hbar^2 2\,\pi}{m\, q\, a^*}$ and $\epsilon(q)\approx1+\frac{2}{q\, a^*}$ with $a^*=\frac{\hbar^2 4 \,\pi\, \epsilon_0\,\epsilon_r }{m\, e^2}$ being the effective Bohr radius. For the GaAs dielectric constant we take a standard value, $\epsilon_r=12.9$. \section{Spin diffusion equations\label{sec:spindiffeq}} \subsection{Semiclassical kinetic equations} Our goal is to describe the dynamics of the spin density in real space. Using the nonequilibrium statistical operator method\cite{AltDerivations} (see Ref.~\onlinecite{zubarev}) we derive kinetic equations for the charge and spin components of the Wigner-transformed density matrix \begin{align} \hat{\rho}_\vc{k}(\vc{x},t)&~=~n_\vc{k}(\vc{x},t)+\vc{s}_\vc{k}(\vc{x},t)\cdot\vc{\sigma}, \end{align} where \begin{align} \rho_{\vc{k};s s'}(\vc{x},t)&=\int d \vc{r}\, e^{i \vc{k}\cdot \vc{r}}\mean{\psi^\dagger_{s'}(\vc{x}-\vc{r}/2,t)\,\psi_{s}(\vc{x}+\vc{r}/2,t)} . \end{align} If we restrict our calculation to the zeroth order in ${b}/E_F$ and furthermore neglect terms that are nonlinear in the spin density $\vc{s}_\vc{k}(\vc{x},t)$,\cite{HartreeFock} the equations for charge and spin read \begin{align} \partial_t\,n_\vc{k}+\vc{v}\cdot\partial_{\vc{x}}\,n_\vc{k}&~=~{\mathcal{J}}^\textnormal{imp}_\vc{k}+{\mathcal{J}}^\textnormal{e-e}_\vc{k},\label{charge}\\ 2\,\vc{s}_\vc{k}\times\vc{b}(\vc{k})+\partial_t\,\vc{s}_\vc{k}+\vc{v}\cdot \partial_{\vc{x}}\,\vc{s}_\vc{k}&~=~\vc{\mathcal{J}}^\textnormal{imp}_\vc{k}+\vc{\mathcal{J}}^\textnormal{e-e}_\vc{k}\label{spin} \end{align} with $v_i=k_i/m$, where the index $i=x,y$ labels the in-plane spatial directions. Note that spin and charge equations decouple in this approximation because the gradient terms containing $\partial_{k_i}\vc{b}({\vc{k}})$, which would couple the spin and charge equations, are of higher order in ${b}/E_F$. 
Moreover, in the diffusive limit $b_F\,\tau\ll 1$ (where $\tau$ is the momentum relaxation time), they would yield terms of higher order in this small parameter $b_F\,\tau$.\cite{stanescu, burkov} On the right-hand side of Eqs.~\eqref{charge}-\eqref{spin}, we have the collision integrals for impurity scattering, \begin{align} {\mathcal{{J}}}^\textnormal{imp}_\vc{k} &=-\sum_\vc{k'}\,W_{\vc{k}\vc{k'}}\,\delta(\Delta \epsilon)\,\Delta n\left\{1+\frac{\lambda_0^4}{16}\left[(\vc{k}\times\vc{k'})_z\right]^2\right\},\\ \vc{\mathcal{{J}}}^\textnormal{imp}_\vc{k} &=-\sum_\vc{k'}\,W_{\vc{k}\vc{k'}}\,\delta(\Delta \epsilon)\left\{\Delta\vc{s}+\frac{ \lambda_0^2}{2}\left[\vc{k}\times\vc{k'}\right]_z \begin{pmatrix} -s_y'\\ s_x'\\ 0 \end{pmatrix}\right.\nonumber\\ &\left.\qquad\qquad\quad\quad +\frac{\lambda_0^4}{16}\left[\vc{k}\times\vc{k'}\right]_z^2 \begin{pmatrix} s_x+s_x'\\ s_y+s_y'\\ s_z-s_z' \end{pmatrix} \right\},\label{Jimpspin} \end{align} with the transition rate $W_{\vc{k}\vc{k'}}= {2\,\pi}\,n_i \,|v\left(\vc{k'}-\vc{k}\right)|^2$, where $n_i$ is the impurity concentration, $\Delta \epsilon\equiv \epsilon_k-\epsilon_{k'}$, $ \Delta n \equiv n_k - n_{k'}$ and $\,\Delta \vc{s} \equiv \vc{s}_k- \vc{s}_{k'}$, as well as electron-electron scattering, \begin{align} \mathcal{J}^\textnormal{e-e}_\vc{k_1} &=~2\,\pi\sum_{2,3,4}\left(2 |V_{1 2 3 4}|^2- V_{1 2 3 4}V_{1 2 4 3}\right)\delta(\Delta \tilde{\epsilon})\nonumber\\ &~\quad \left[(1-n_1)(1-n_2)\,n_3\, n_4-\left(1\leftrightarrow 3,~2 \leftrightarrow 4\right) \right],\\ \vc{\mathcal{J}}^\textnormal{e-e}_\vc{k_1}\nonumber &=~2\,\pi\sum_{2,3,4}\,\delta(\Delta \tilde{\epsilon})\,\label{vectorJee} \left\{(1-n_1)(1-n_2)\,n_3\,n_4\right. \nonumber\\ &~\quad \left[ 2 |V_{1 2 3 4}|^2 \left(\frac{\vc{s}_3}{n_3}-\frac{\vc{s}_1}{1-n_1} \right)\right.\nonumber\\ &\quad\quad \left. - V_{1 2 3 4}V_{1 2 4 3}\left(\frac{\vc{s}_3}{n_3}+\frac{\vc{s}_4}{n_4}-\frac{\vc{s}_1}{1-n_1}-\frac{\vc{s}_2}{1-n_2}\right)\right]\nonumber\\ &\quad~ - \left(1\leftrightarrow 3,~2 \leftrightarrow 4\right)\left. \right\}. \end{align} Here, we abbreviated $j\equiv \vc{k_j}~$(where $j=1,2,3,4 $ labels initial and final states of the two collision partners) and $\Delta \tilde{\epsilon} \equiv \epsilon_\vc{k_1}+\epsilon_\vc{k_2}-\epsilon_{\vc{k_3}}-\epsilon_{\vc{k_4}}$. In our approximation the charge kinetic equation \eqref{charge} decouples from the spin kinetic equation \eqref{spin} and is independently solved by the Fermi-Dirac distribution $n_\vc{k}(\vc{x},t)=f(\epsilon_k)=\left[1+e^{(\epsilon_k-E_F)/k_B T}\right]^{-1}$, where $k_B$ is the Boltzmann constant and $T$ the temperature. Since we are not interested in charge transport or local charge excitations, we assume that the charge distribution is given by this spatially uniform solution. In the next subsection we use the spin kinetic equation~\eqref{spin} to derive a drift-diffusion equation for the real space spin density, cf.~Refs.~\onlinecite{burkov, mishchenko, stanescu, weng}. \subsection{Spin diffusion equations in the D'yakonov-Perel' regime} In the following, we consider the D'yakonov-Perel'\cite{dyakonov} regime of strong scattering and/or weak spin-orbit interaction, $b_F\,\tau\ll 1$. During the time interval $\tau$ between two collisions which alter the momentum of an electron---and thereby $\vc{b}(\vc{k})$---its spin precesses around the spin-orbit field only by the small angle $b_F\,\tau$. 
This results in a random walk behavior of the spin.\cite{yang} In contrast to the weak scattering limit $b_F\,\tau\gg 1$, the spin polarization is actually stabilized by scattering in the strong scattering regime $b_F\,\tau\ll 1$: the stronger the scattering, the slower the D'yakonov-Perel' spin relaxation---a phenomenon often referred to as ``motional narrowing'' in analogy to the reduction of linewidths in NMR spectroscopy due to disorder in the local magnetic fields. In the spirit of the derivation by D'yakonov and Perel'\cite{dyakonov} we will exploit the separation of the timescales that govern the evolution of isotropic (in momentum space) and anisotropic parts of the spin distribution function. Since we deal with a spatially inhomogeneous spin density we also have to assume that the timescale connected to the gradient term in Eq.~\eqref{spin} is long compared to the transport time, i.e.~$v_F\,q\,\tau\ll1$, where $q$ is a typical wave vector of the Fourier transformed spin density. Thus when speaking of ``orders in $b_F\,\tau$'' in the following, we actually have in mind ``orders in $\max\{b_F\,\tau,\,v_F\,q\,\tau\}$''. In order to solve the spin kinetic equation~\eqref{spin} we split off an isotropic component $\vc{S}(\vc{x},t)$ from the spin density $\vc{s}_\vc{k}$ and expand the remaining anisotropic component in winding numbers and powers of momentum $k$, \begin{align} \vc{s}_\vc{k}&~=~-\frac{2\,\pi}{m}\,f'(\epsilon_k)\,\vc{S}+\vc{s}_{\vc{k},1}+\vc{\tilde{s}}_{\vc{k},1}+\vc{s}_{\vc{k},3},\label{ansatz} \end{align} with \begin{align} \vc{s}_{\vc{k},1}&~=~f'(\epsilon_k)\,\frac{k}{m} \sum_{n=\pm 1} \vc{\delta k_{n}}(\vc{x},t)\,e^{i \,n\, \theta},\label{sk1}\\ \vc{\tilde{s}}_{\vc{k},1}&~=~f'(\epsilon_k)\,\frac{k^3}{k_F^2\,m} \sum_{n=\pm 1} \vc{\delta \tilde{k}_{n}}(\vc{x},t)\,e^{i \,n\, \theta},\label{sk1tilde}\\ \vc{s}_{\vc{k},3}&~=~f'(\epsilon_k)\,\frac{k^3}{k_F^2\,m} \sum_{n=\pm 3} \vc{\delta k_{n}}(\vc{x},t)\,e^{i \,n\, \theta}.\label{sk3} \end{align} The anisotropic components of the distribution function arise due to the gradient term in the Boltzmann equation and the precession around the spin-orbit field. Since the spin-orbit fields~\eqref{br}, \eqref{bd} contain terms with winding numbers $\pm 1$ and $\pm 3$, only these winding numbers have to be considered for the anisotropic part of the spin density to lowest order in $b_F \tau$. Furthermore, one can show that the spin density contains only the same powers of $k$ as the corresponding driving terms in Hamiltonian~\eqref{H0}. Thus we consider a $k$- and a $k^3$-term in the ansatz for the winding number $\pm1$-terms of the spin density~\eqref{sk1} and \eqref{sk1tilde}, because the winding number $\pm 1$-terms of the kinetic equation~\eqref{spin} are the gradient term, the linear Rashba and Dresselhaus spin-orbit fields as well as the renormalization of the linear Dresselhaus term due to cubic Dresselhaus spin-orbit interaction. The winding number $\pm 3$-component of the spin density~\eqref{sk3}, on the other hand, contains only a $k^3$-term because only the cubic Dresselhaus spin-orbit field contributes to winding number $\pm 3$ in the kinetic equation~\eqref{spin}. In the following we consider point-like impurities, i.e., isotropic scattering with $\tau^{-1} = m\, n_i\,v(0)^2$. Furthermore we assume low temperature $T\ll T_F\equiv E_F/k_B$ and perform a Sommerfeld expansion up to order $(T/T_F)^2$ in all momentum integrations.
In this procedure we encounter integrals of the form ($n=2,3,4,6,8$) \begin{align} \int_0^\infty d \epsilon_k\,f'(\epsilon_k)\,k^n &~=~-k_F^n\,z_n(T) \end{align} with $z_2=1$ and the Sommerfeld functions (see Appendix~\ref{app:sommerfeld}) \begin{align} z_3&~=~1+\frac{\pi^2}{8}\,\frac{T^2}{T_F^2}+\mathcal{O}\left(\frac{T^4}{T_F^4}\right),\label{z3}\\ z_4&~=~1+\frac{\pi^2}{3}\,\frac{T^2}{T_F^2},\\ z_6&~=~1+\pi^2\,\frac{T^2}{T_F^2},\\ z_8&~=~1+2\,\pi^2\,\frac{T^2}{T_F^2}+\mathcal{O}\left(\frac{T^4}{T_F^4}\right).\label{z8} \end{align} With the goal of obtaining diffusion equations for the real space spin density we start by momentum integration of the kinetic equation, $\frac{1}{(2\pi)^2}\int d\,\vc{k}\left[\mathrm{Eq}.~\eqref{spin}\right]$, using the ansatz \eqref{ansatz}. This yields the {\it isotropic} equation for the isotropic component of the spin density \begin{widetext} \begin{align} \partial_t\,S_x &~=~\frac{k_F^2}{2\, \pi} \left\{\frac{1}{2m} \left(\partial_x \delta \hat k_{c,x} +\partial_y \delta \hat k_{s,x}\right)+\alpha v_F\delta \hat k_{c,z} - \beta v_F \Big{(} \sin 2\phi \delta \bar k_{c,z} + \cos 2\phi \delta \bar k_{s,z} \Big{)}\right\}-z_4\,\gamma_\mathrm{ey}\,S_x,\label{Sxextr}\\ \partial_t\,S_y &~=~\frac{k_F^2}{2\, \pi}\left\{\frac{1}{2m}\left(\partial_x\delta \hat k_{c,y}+\partial_y\delta \hat k_{s,y}\right)+\alpha v_F \delta \hat k_{s,z}+\beta v_F \left(\sin 2\phi \delta \overline{\bar k}_{s,z} -\cos 2\phi \delta \overline{\bar k}_{c,z}\right)\right\}-z_4\,\gamma_\mathrm{ey}\,S_y,\label{Syextr}\\ \partial_t\,S_z &~=~\frac{k_F^2}{2\, \pi}\left\{\frac{1}{2m}\left(\partial_x \delta \hat k_{c,z}+\partial_y \delta \hat k_{s,z}\right)-\alpha v_F (\delta \hat k_{c,x}+\delta \hat k_{s,y}) +\beta v_F \left[ \sin 2\phi \left(\delta \bar k_{c,x}-\delta \overline{\bar k}_{s,y}\right)+\cos 2\phi \left( \delta \overline{\bar k}_{c,y} +\delta \bar k_{s,x}\right) \right]\right\}\label{Szextr} \end{align} \end{widetext} with \begin{align} \vc{\delta \hat k}_{c(s)} &~=~ \vc{\delta k}_{c(s)} + z_4 \vc{\delta \tilde k}_{c(s)}, \\ \vc{\delta \bar k}_{c(s)} &~=~\vc{\delta \hat k}_{c(s)} \!-\! \zeta (z_4 \vc{\delta k}_{c(s)} + z_6 \vc{\delta \tilde k}_{c(s)}+ z_6 \vc{\delta k}_{c3(s3)}) ,\nonumber \\ \vc{\delta \overline{\bar k}}_{c(s)} &~=~\vc{\delta \hat k}_{c(s)} \!-\! \zeta (z_4 \vc{\delta k}_{c(s)} + z_6 \delta \vc{\tilde k}_{c(s)}- z_6 \vc{\delta k}_{c3(s3)}), \nonumber \end{align} where \begin{align} \zeta&~=~\frac{\gamma\,k_F^2}{4 \,\beta}\label{zeta} \end{align} represents the ratio of cubic and linear Dresselhaus coupling strengths and \begin{equation} \begin{tabular}{ll} $\vc{\delta k}_{c( c 3)}~=~2\, \textnormal{Re }\vc{\delta k}_{1(3)},\,\,$ & $\vc{\delta \tilde{k}}_{c}~=~2\, \textnormal{Re }\vc{\delta \tilde{k}}_{1},$\\ $\vc{\delta k}_{s( s 3)} ~=~ -2\, \textnormal{Im }\vc{\delta k}_{1(3)},\,\,$ & $\vc{\delta \tilde{k}}_{s}~=~-2\, \textnormal{Im }\vc{\delta \tilde{k}}_{1}$. \end{tabular} \end{equation} Eqs.\eqref{Sxextr}-\eqref{Szextr} can be seen as continuity equations for the spin density where the anisotropic components $\vc{\delta k}_{c(s)}$, $\vc{\delta k}_{c(s)3}$ and $\vc{\delta \tilde k}_{c(s)}$ play the role of (generalized) spin currents. 
The impurity collision integral~\eqref{Jimpspin} contains a spin-dependent part due to extrinsic spin-orbit interaction, which acts as a drain for in-plane spin-polarization with the Elliot-Yafet relaxation rate\cite{raimondi} \begin{align} \gamma_{\textnormal{ey}}&~=~\left(\frac{\lambda_0\, k_F}{2}\right)^4\frac{1}{\tau}\label{Gammaey}. \end{align} This relaxation mechanism can be understood as the net effect of the electron spins precessing by a small angle around the extrinsic spin-orbit field {\it during} the collision with an impurity. Since this field is perpendicular to the electronic motion, i.e., it points in $z$-direction, the $z$ component of the isotropic spin density is unaffected by the Elliot-Yafet mechanism. The anisotropic components $\vc{\delta k}_{c(s)}$, $\vc{\delta \tilde k}_{c(s)}$ and $\vc{\delta k}_{c3(s3)}$ can in turn be expressed in terms of the isotropic spin density $S_i$ by integrating the kinetic equation~\eqref{spin} times velocity, where, this time, we omit the time derivative. The justification for doing so is that, in order to capture the slow precession-diffusion dynamics of the real space density, we can interpret the time derivative as a coarse-grained one, i.e.~$\partial_t \,\vc{S}\rightarrow \Delta \vc{S}/\Delta t$ with $\Delta t\approx b_F^{-1} \gg \tau$. Then the fast relaxation of the anisotropic components into the steady state at the beginning of each time interval $\Delta t$ contributes only in higher order in $b_F\,\tau$ to the average over $\Delta t$. Thus, to leading order, it is sufficient to find the (quasi-)equilibrium solutions for the anisotropic coefficients. Another way of seeing this is in analogy with the Born-Oppenheimer approximation: similarly to the fast moving electrons in a molecule, which almost instantaneously find their equilibrium positions with respect to the slowly vibrating nuclei, the anisotropic parts of the spin distribution quickly adjust to the momentary isotropic spin density. The backaction of the anisotropic parts on the isotropic spin density is then well described using their steady state solution. 
By integrating $\frac{1}{(2\pi)^2}\int d\,\vc{k}\,v_{x(y)}\left[\mathrm{Eq}.~\eqref{spin}\right]$, equating terms of the same order in $k$ and solving for the coefficients, we obtain the following anisotropic equations: \begin{widetext} \begin{align} \delta k_{c,x}&~=~4\pi \left[\alpha v_F (1+z_4 \gamma_\mathrm{sw} \tau_1) - \beta v_F \sin 2\phi(1-z_4 \gamma_\mathrm{sw} \tau_1) \right] \tau_1 S_z+ \frac{2\pi}{m} \tau_1 \left(\partial_x S_x + z_4 \gamma_\mathrm{sw} \tau_1 \partial_y S_y \right), \label{kcx}\\ \delta k_{c,y}&~=~-4\,\pi \beta v_F \tau_1 \cos 2\phi\,\left(1-z_4 \gamma_\mathrm{sw} \tau_1\right)\, S_z+\frac{2 \,\pi}{m} \tau_1\left(\partial_x \,S_y-z_4 \gamma_\mathrm{sw} \tau_1\partial_y \,S_x\right),\label{5nnew}\\ \delta k_{c,z}&~=~ 4\,\pi \left(-\alpha v_F+\beta v_F \sin 2\phi\right) \tau_1 S_x+4\,\pi \beta v_F\,\tau_1\cos 2\phi\,S_y +\frac{2\,\pi}{m} \tau_1 \partial_x \,S_z\label{6new},\\ \delta k_{s,x}&~=~-4\,\pi \beta v_F \tau_1 \cos 2\phi\,\left(1-z_4 \gamma_\mathrm{sw} \tau_1\right)\, S_z +\frac{2\,\pi}{m}\tau_1\left(\partial_y \,S_x-z_4 \gamma_\mathrm{sw} \tau_1\partial_x \,S_y\right)\label{7nnew},\\ \delta k_{s,y}&~=~4\pi \left[\alpha v_F (1+z_4 \gamma_\mathrm{sw} \tau_1) + \beta v_F \sin 2\phi(1-z_4 \gamma_\mathrm{sw} \tau_1) \right] \tau_1 S_z+\frac{2\,\pi}{m}\tau_1\left(\partial_y \,S_y+z_4 \gamma_\mathrm{sw} \tau_1\partial_x \,S_x\right),\label{deltaksy}\\ \delta k_{s,z}&~=~4\,\pi\beta v_F\,\tau_1 \cos 2\phi\,S_x-4\,\pi \left[\alpha v_F+\beta v_F\sin2\phi\right]\tau_1 S_y +\frac{2\,\pi}{m}\tau_1 \partial_y \,S_z\label{9nnew},\\ \delta \tilde{k}_{c,x}&~=-\delta \tilde{k}_{s,y}= ~4\,\pi\beta v_F \zeta \sin 2\phi \tilde \tau_1(1-\frac{z_6}{z_4} \gamma_\mathrm{sw} \tilde \tau_1) S_z,\\ \delta \tilde{k}_{c,y}&~=\delta \tilde{k}_{s,x}= ~ 4\,\pi\beta v_F \zeta \cos 2\phi \tilde \tau_1 (1-\frac{z_6}{z_4} \gamma_\mathrm{sw} \tilde \tau_1) S_z,\label{deltaktildecy}\\ \delta \tilde{k}_{c,z}&~=~ -4\,\pi\,\beta v_F \zeta \tilde{\tau}_1 (\sin2\phi\, S_x+\cos2\phi\, S_y),\\ \delta \tilde{k}_{s,z}&~=~ -4\,\pi\,\beta v_F \zeta \tilde{\tau}_1 (\cos 2\phi\, S_x-\sin2\phi\, S_y)\label{deltaktildesz}. \end{align} \end{widetext} The spin densities $S_i$ act as sinks and sources in the equations for the anisotropic coefficients ${\delta k}_{\pm 1,\pm 3,i},{\delta\tilde{k}}_{\pm 1,i}$. Since the spin densities $S_i$ are determined by the initial conditions at $t=0$, they are of zeroth order in $b_F\,\tau$, whereas the anisotropic coefficients ${\delta k}_{\pm 1,\pm 3,i},{\delta\tilde{k}}_{\pm 1,i}$ are already first order in $b_F\,\tau$. Had we included parts with higher winding numbers $\pm2, \pm4, \pm5, \dots$ in our ansatz, these would have been generated only indirectly via the ${\delta k}_{\pm 1,\pm 3,i},{\delta\tilde{k}}_{\pm 1,i}$ (all of which are already of first order in $b_F\,\tau$) and would therefore be of even higher order in $b_F\,\tau$. In Eqs.~\eqref{kcx}-\eqref{deltaktildesz} we have defined the rate of ``swapping of the spin currents''\cite{lifshits} as \begin{align} \gamma_\mathrm{sw}&~=~ \left(\frac{\lambda_0\, k_F}{2}\right)^2\frac{1}{\tau}\label{Gammasw}, \end{align} which is due to extrinsic spin-orbit interaction like the Elliot-Yafet rate $ \gamma_\mathrm{ey}$ (Eq.~\eqref{Gammaey}), but lower order in $\lambda_0$. It leads to a ``swapping of spin currents'' because a finite $\gamma_\mathrm{sw}$ generates, e.g., a $S_y$ spin current in response to a gradient of the $S_x$ spin density in $x$ direction (see Eq.~\eqref{deltaksy}). 
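To get a feeling for the size of the extrinsic rates \eqref{Gammaey} and \eqref{Gammasw}, the sketch below evaluates them with $\lambda_0=4.7\times 10^{-10}\,$m (given above) and the experimental $100$~K transport time $\tau=1$~ps and Fermi temperature $T_F=400$~K quoted in Sec.~\ref{sec:numbers}; the GaAs effective mass $m=0.067\,m_e$ used to infer $k_F$ is our own input, not a number taken from the text:
\begin{verbatim}
import numpy as np

# Elliot-Yafet and swapping rates, Eqs. (Gammaey), (Gammasw).
hbar, kB, m_e = 1.054571817e-34, 1.380649e-23, 9.1093837015e-31
m, tau, lam0, T_F = 0.067 * m_e, 1e-12, 4.7e-10, 400.0

k_F = np.sqrt(2.0 * m * kB * T_F) / hbar   # from E_F = kB*T_F, ~2.5e8 1/m
x = (lam0 * k_F / 2.0) ** 2
print("tau*gamma_sw =", x)                 # ~3e-3
print("tau*gamma_ey =", x**2)              # ~1e-5, i.e. gamma_ey ~ 1e7/s
\end{verbatim}
The resulting $\tau\,\gamma_\mathrm{sw}\approx 3\times 10^{-3}$ is consistent with the estimate quoted in Sec.~\ref{sec:numbers} and justifies the linearization used next.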
Eqs.~\eqref{kcx}-\eqref{deltaktildesz} are valid to linear order in $\tau\,\gamma_\mathrm{sw}\ll 1$. Since the anisotropic components $\vc{\delta k}_{\pm1}$ and $\vc{\delta \tilde k}_{\pm1}$ are related to (generalized) spin currents, the anisotropic equations \eqref{kcx}-\eqref{deltaktildesz} express generalized Ohm's laws. The effective relaxation times for the anisotropic parts of the spin distribution function are obtained as the inverse sum of the collision integrals for normal impurity scattering, spin-dependent impurity scattering and electron-electron scattering, \begin{align} \tau_1&~=~\left( \frac{1}{\tau}+\gamma_\mathrm{ey}\,{z_6}+\frac{1}{{\tau}_{\textnormal{e-e},1}}\right)^{-1},\label{tau1}\\ \tilde{\tau}_1&~=~\left( \frac{1}{\tau}+\gamma_\mathrm{ey}\,\frac{z_8}{z_4}+\frac{1}{z_4\,\tilde{\tau}_{\textnormal{e-e},1}}\right)^{-1}.\label{tau1tilde} \end{align} Here, the temperature-dependent rates $\tau_{\textnormal{e-e},1}^{-1},\,\tilde{\tau}_{\textnormal{e-e},1}^{-1}$ account for the decay of the respective component ($\vc{s}_{\vc{k},1}$ or $\vc{\tilde{s}}_{\vc{k},1}$) of the spin distribution due to two-particle Coulomb scattering. The rate at which winding-number-$\pm 1$ and linear-in-$k$ components of the spin distribution relax due to electron-electron interaction is \begin{align} {\tau}_{\textnormal{e-e},1}^{-1} &=-\frac{1}{k_B\,T\,k_F \,m\,(2 \pi)^4} \iiint {d \vc{k_1}d \vc{k_2}d \vc{k_3}}\,\delta(\Delta\tilde{\epsilon})\,k_1\nonumber\\ &\qquad \left[1-f(\epsilon_{k_3})\right]\left[1-f(\epsilon_{\vc{k_1}+\vc{k_2}-\vc{k_3}})\right]f(\epsilon_{k_1})f(\epsilon_{{k_2}})\nonumber\\ &\qquad \left\{2|V(|\vc{k_1}-\vc{k_3}|)|^2 \,\left[\cos (\theta_3-\theta_1)\,k_3-k_1 \right] \right.\nonumber\\ &\quad~\left.+V(|\vc{k_1}-\vc{k_3}|)V(|\vc{k_2}-\vc{k_3}|) \right.\nonumber\\ &\quad\left. ~~\left[k_1+\cos (\theta_2-\theta_1)\,k_2-\cos (\theta_3-\theta_1)\,k_3 \right.\right.\nonumber\\ &\quad\left.\left. ~-\cos 3(\theta_{1+2-3}-\theta_1)\,|\vc{k_1}+\vc{k_2}-\vc{k_3}|\right] \right\}.\label{tauee1} \end{align} It is related to the spin Coulomb drag conductivity from Refs.~\onlinecite{Amico1,flensberg, Amico2} via the Drude formula. The analogous expression for the winding-number-$\pm 1$ but cubic-in-$k$ components reads \begin{align} \tilde{\tau}_{\textnormal{e-e},1}^{-1} &=-\frac{1}{k_B\,T\,k_F^4 m\,(2 \pi)^4} \iiint {d \vc{k_1}d \vc{k_2}d \vc{k_3}} \,\delta(\Delta\tilde{\epsilon})\,k_1\nonumber\\ &\qquad \left[1-f(\epsilon_{k_3})\right]\left[1-f(\epsilon_{\vc{k_1}+\vc{k_2}-\vc{k_3}})\right]f(\epsilon_{k_1})f(\epsilon_{{k_2}})\nonumber\\ &\qquad \left\{2|V(|\vc{k_1}-\vc{k_3}|)|^2 \,\left[\cos (\theta_3-\theta_1)\,k_3^3-k_1^3 \right] \right.\nonumber\\ &\quad~\left.+V(|\vc{k_1}-\vc{k_3}|)V(|\vc{k_2}-\vc{k_3}|) \right.\nonumber\\ &\quad\left. ~~\left[k_1^3+\cos (\theta_2-\theta_1)\,k_2^3-\cos (\theta_3-\theta_1)\,k_3^3 \right.\right.\nonumber\\ &\quad\left.\left. 
~-\cos 3(\theta_{1+2-3}-\theta_1)\,|\vc{k_1}+\vc{k_2}-\vc{k_3}|^3\right] \right\}.\label{tauee1tilde} \end{align} To find the anisotropic equations for $\vc{\delta k}_{\pm3}$ we follow a procedure similar to the one above and integrate $\frac{1}{(2\pi)^2}\int d\,\vc{k}\,e^{\pm i 3 \theta}\left[\mathrm{Eq}.~\eqref{spin}\right]$, which results in \begin{align} \vc{\delta k}_{c3}&~{=}~{\gamma\, v_F \,k_F^2\,\pi\, {\tau_3}} \begin{pmatrix} \sin 2 \phi\, S_z\\ -\cos 2 \phi \, S_z\\ \cos 2 \phi \, S_y- \sin 2 \phi\, S_x \end{pmatrix},\label{kc3}\\ \vc{\delta k}_{s3}&~{=}~{\gamma\, v_F \,k_F^2\,\pi\, {\tau_3}} \begin{pmatrix} \cos 2 \phi \, S_z\\ \sin 2 \phi \, S_z\\ -\sin 2 \phi \, S_y- \cos 2 \phi\, S_x \end{pmatrix},\label{ks3} \end{align} with \begin{align} {\tau}_3 &~{=}~\left( \frac{1}{\tau}+\gamma_\mathrm{ey}\,\frac{z_8}{z_3}+\frac{1}{z_3\,{\tau}_{\textnormal{e-e},3}}\right)^{-1}.\label{tau3} \end{align} The electron-electron scattering rate that enters the effective relaxation time \eqref{tau3} for the winding-number-$\pm 3$ parts of the spin distribution is given by \begin{align} \tau_{\textnormal{e-e},3}^{-1} &=- \frac{1}{k_B\,T\,k_F^3 m\,(2 \pi)^4}\iiint {d \vc{k_1}d \vc{k_2}d \vc{k_3}} \,\delta(\Delta\tilde{\epsilon})\nonumber\\ &\qquad \left[1-f(\epsilon_{k_3})\right]\left[1-f(\epsilon_{\vc{k_1}+\vc{k_2}-\vc{k_3}})\right]f(\epsilon_{k_1})\,f(\epsilon_{{k_2}})\nonumber\\ &\qquad \left\{2|V(|\vc{k_1}-\vc{k_3}|)|^2 \,\left[\cos 3 (\theta_3-\theta_1)\,k_3^3-k_1^3 \right] \right.\nonumber\\ &\quad~\left.+V(|\vc{k_1}-\vc{k_3}|)V(|\vc{k_2}-\vc{k_3}|) \right.\nonumber\\ &\quad\left. ~~\left[k_1^3+\cos 3 (\theta_2-\theta_1)\,k_2^3-\cos 3 (\theta_3-\theta_1)\,k_3^3 \right.\right.\nonumber\\ &\quad\left.\left. ~-\cos 3 (\theta_{1+2-3}-\theta_1)\,|\vc{k_1}+\vc{k_2}-\vc{k_3}|^3\right] \right\}.\label{tauee3} \end{align} Finally we insert the steady-state solutions for the anisotropic coefficients~\eqref{kcx}-\eqref{deltaktildesz} and \eqref{kc3}-\eqref{ks3} into the isotropic equations~\eqref{Sxextr}-\eqref{Szextr} and obtain a closed set of coupled diffusion equations for the three spatial components of the spin density, \begin{widetext} \begin{align} \partial_t\,\vc{S} &= \begin{pmatrix} {D}\,\grad^2-{\Gamma}_x-{\gamma}_{\textnormal{cd}}\,z_6-\gamma_\mathrm{ey}\,z_4&~&{L}&~&{K}_{xz}\,\partial_x- {M}\,\partial_y\\ {L}&~& {D}\,\grad^2-{\Gamma}_y-{\gamma}_{\textnormal{cd}}\,z_6-\gamma_\mathrm{ey}\,z_4&~&{K}_{yz}\,\partial_y- {M}\,\partial_x\\ -{K}_{zx}\,\partial_x+{M}_{z}\,\partial_y&~&-{K}_{zy}\,\partial_y+{M}_{z}\,\partial_x&~& {D}\,\grad^2-{\Gamma}_x-{\Gamma}_y-2 \,{\gamma}_{\textnormal{cd}}\,z_6-\Gamma_\mathrm{sw} \end{pmatrix} \vc{S}.\label{matrixeq} \end{align} \end{widetext} On its diagonal the matrix operator contains the pure diffusion terms with $ \grad^2=\partial_x^2+\partial_y^2$ and the Elliot-Yafet relaxation rate $\gamma_{\rm ey}$ due to extrinsic spin-orbit interaction. In addition, it contains the D'yakonov-Perel' relaxation rates $\Gamma_{x(y)}$ and $\gamma_{\textnormal{cd}}$ which reflect the randomization of the spin orientation due to precession (between the collisions) around the winding-number-$\pm 1$ and winding-number-$\pm 3$ spin-orbit fields, respectively. The $S_x$ component is relaxed as a consequence of precession about the $y$ component of the spin-orbit field only, and vice versa. In contrast, the $S_z$ component is relaxed by the precession about the full spin-orbit field.
Thus the relaxation rate of $S_z$ due to precession is the sum of the rates for $S_x$ and $S_y$, plus a correction $\Gamma_\mathrm{sw}$ for processes that involve the swapping of the spin currents due to extrinsic spin-orbit interaction. Due to precession there are also off-diagonal rates $L$, which couple the in-plane spin components, as well as several off-diagonal mixed diffusion-precession rates, which are accompanied by partial derivatives. In terms of the parameters of our model and previously defined quantities, the coefficients in the spin diffusion equation~\eqref{matrixeq} are given by: \begin{widetext} \begin{align} \gamma_{\textnormal{cd}}&~{=}~\frac{1}{8}\,v_F^2\,\gamma^2\,k_F^6 \,{\tau}_3,\label{Gammacd}\\ \Gamma_{x(y)}(\phi)&~{=}~\frac{1}{4}\,q_0^2\left(D\mp\frac{\beta}{\alpha}\left[2 \,D-\zeta\,z_4\,(D+\tilde{D})\right]\sin2\phi+\frac{\beta^2}{\alpha^2}\left[D-\zeta\,z_4\,(D+\tilde{D})+\zeta^2z_6\,\tilde{D}\right]\right),\label{Gxysw}\\ \Gamma_\mathrm{sw}&~{=}~\frac{1}{2}\,q_0^2\,\gamma_\mathrm{sw}\left[D\,\tau_1\,z_4-\frac{\beta^2}{\alpha^2}\left(D\,\tau_1\,z_4-\zeta\,\tilde{D}\,\tilde{\tau}_1\,z_6-\zeta\,D\,\tau_1\,z_4^2+\zeta^2\,\tilde{D}\,\tilde{\tau}_1\,\frac{z_6^2}{z_4}\right)\right],\\ K_{xz(yz)}(\phi)&~{=}~q_0\left(D\mp\frac{\beta}{\alpha}\left[D-\frac{1}{2}\,\zeta\,z_4\,(D+\tilde{D})\right]\sin2\phi\right)+\frac{1}{2}\,\gamma_\mathrm{sw}\,q_0\left(\tau_1\,D\,z_4\pm\frac{\beta}{\alpha}\left[\tau_1\,D\,z_4-\zeta\,\tilde{\tau}_1\,\tilde{D}\,z_6\right]\sin 2 \phi\right),\label{Kxzsw}\\ K_{zx(zy)}(\phi)&~{=}~q_0\left(D\mp\frac{\beta}{\alpha}\left[D-\frac{1}{2}\,\zeta\,z_4\,(D+\tilde{D})\right]\sin2\phi\right)+\frac{1}{2}\,\gamma_\mathrm{sw}\,\tau_1\,q_0\,D\,z_4\left[1\pm\frac{\beta}{\alpha}\left(1-\zeta\,z_4\right)\sin2\phi\right],\label{Kzxsw}\\ M(\phi)&~{=}~\cos 2\phi\,q_0\,\frac{\beta}{\alpha}\left[D-\frac{1}{2}\,\zeta\,z_4\,(D+\tilde{D})\right]-\frac{1}{2}\,\gamma_\mathrm{sw}\,q_0\,\cos2\phi\,\frac{\beta}{\alpha}\left[\tau_1\,D\,z_4-\zeta\,\tilde{\tau}_1\,\tilde{D}\,z_6\right],\label{Mxysw}\\ M_{z}(\phi)&~{=}~\cos 2\phi\,q_0\,\frac{\beta}{\alpha}\left[D-\frac{1}{2}\,\zeta\,z_4\,(D+\tilde{D})\right]-\frac{1}{2}\,\gamma_\mathrm{sw}\,\tau_1\,q_0\,D\,z_4 \,\cos 2\phi\,\frac{\beta}{\alpha}\left(1-\zeta\,z_4\right),\label{Mzsw}\\ L(\phi)&~{=}~\cos 2\phi\,\frac{1}{2}\,q_0^2\,\frac{\beta}{\alpha}\left[D-\frac{1}{2}\,\zeta\,z_4\,(D+\tilde{D})\right]\label{L} \end{align} \end{widetext} with the PSH wave vector \begin{align} {q}_0&~=~4\,v_F\,m\,\alpha \label{magicq} \end{align} and the effective diffusion constants \begin{equation} D=\frac{1}{2}\, v_F^2\,\tau_1,\quad\tilde{D}=\frac{1}{2}\, v_F^2\,\tilde{\tau}_1. \end{equation} At $T=0$, we have $z_n=1$ and electron-electron interactions are absent, such that $\tilde{D}=D$. Then, if we leave out extrinsic spin-orbit interaction in the spin diffusion equation \eqref{matrixeq}, it agrees with the one presented in Ref.~\onlinecite{weng} (except for the sign of $L$). If we further omit cubic Dresselhaus spin-orbit interaction in our diffusion equation, it also concurs with the one of Ref.~\onlinecite{bernevig1} provided that the spin-charge coupling is negligible. \section{Persistent spin helix in the presence of symmetry breaking mechanisms\label{sec:schematic}} In this section, we use the spin diffusion equation~\eqref{matrixeq} to calculate the lifetime of the persistent spin helix in the presence of symmetry breaking mechanisms.
We consider extrinsic spin-orbit interaction, cubic Dresselhaus spin-orbit interaction or simple spin-flip scattering as possible symmetry breaking mechanisms. In order to allow for simple analytical solutions we discuss each of the symmetry breaking mechanisms separately. In the case of cubic Dresselhaus spin-orbit interaction we neglect at first the renormalization of the linear Dresselhaus spin-orbit interaction (see~Eq.~\eqref{betapr}). This is formally achieved by setting $\zeta = 0$ in Eqs.~\eqref{Gxysw}-\eqref{L} while keeping the ${\gamma}_\textnormal{cd}$ term in Eq.~\eqref{matrixeq}. However, we will include the renormalization of the linear Dresselhaus spin-orbit interaction when we discuss a possible stationary solution and when we compare to the experimental results in a GaAs/AlGaAs quantum well in Sec.~\ref{sec:numbers}. We choose our coordinate system such that the $x$ axis points into the (110)-crystal direction, corresponding to $\phi=\frac{\pi}{4}$ in Eqs.~\eqref{Gxysw}-\eqref{L}. For an initial spin polarization that is uniform in the $x$-direction, the $S_x$ component decouples from the $S_y$ and $S_z$ components due to $L(\frac{\pi}{4})=M(\frac{\pi}{4})=0$, and we can set $S_x=0$. For $\alpha=\beta$ Eq.~\eqref{matrixeq} reduces for the remaining $S_y$ and $S_z$ components to \begin{align} \partial_t\,\vc{S}&~=~ \begin{pmatrix} {D}\,\partial_y^2-q_0^2\,D-X &2\,q_0\,D\,\partial_y\\ -2\,q_0\,D\,\partial_y& {D}\,\partial_y^2-q_0^2\,D-N\,X \end{pmatrix}\,\vc{S},\label{2by2eqatSP} \end{align} where the relaxation rates due to the respective symmetry-breaking mechanism are represented by $X$ and an integer $N$ according to Table~\ref{tab:XN}. \begin{table} \caption{Specification of $X$ and $N$ in Eq.~\eqref{2by2eqatSP} \label{tab:XN}} \vspace{0.2 cm} \begin{tabular}{c|c|c|c} &\,\,\textnormal{simple spin flips}\,\,&\,\textnormal{extr.~spin-orbit int.}&\, \textnormal{ cubic Dress. }\,\\ \hline $X$\,&$1/\tau_\textnormal{sf}$&$\,\,\, \gamma_\textnormal{ey}$ &$\,\gamma_\textnormal{cd}\,z_6$\\ $N$\,&$1$ & 0 &$2$ \end{tabular} \end{table} For the SU(2) symmetric situation $X=0$ there exists a steady state solution with wave vector $q_0$. This is the persistent spin helix state. More precisely, for an initial spin polarization of the form \begin{align} \vc{S}(\vc{x},t=0)&~=~S_0\left(0,\,0,\,\cos q_0 y\right),\label{initial} \end{align} similar to the experimental set-up,\cite{koralek} one finds that the time-dependent solution to Eq.~\eqref{2by2eqatSP} is \begin{align} \vc{S}^{X=0}(y,t)&~=~\frac{S_0}{2} \begin{pmatrix} [e^{-4\,q_0^2\,D\,t}-1]\,\sin q_0 y\\ [e^{-4\,q_0^2\,D\,t}+1]\,\cos q_0 y \end{pmatrix}.\label{symmetric} \end{align} For $t\rightarrow \infty$, i.e., in the stationary limit, this reduces to the persistent spin helix state.
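In the mode basis $S_y=u(t)\sin q_0 y$, $S_z=v(t)\cos q_0 y$, Eq.~\eqref{2by2eqatSP} at $X=0$ reduces to $\dot{u}=\dot{v}=-2\,q_0^2\,D\,(u+v)$, and the following short sketch (units $q_0=D=S_0=1$, simple Euler stepping; not part of the original derivation) verifies the closed form \eqref{symmetric}:
\begin{verbatim}
import numpy as np

# Integrate the X = 0 mode equations and compare with Eq. (symmetric).
q0 = D = S0 = 1.0
u, v = 0.0, S0                    # initial profile (0, 0, S0*cos(q0*y))
dt, t_end = 1e-4, 1.0
for _ in range(int(t_end / dt)):
    du = -2.0 * q0**2 * D * (u + v)   # du/dt = dv/dt here
    u, v = u + dt * du, v + dt * du
a = np.exp(-4.0 * q0**2 * D * t_end)
print(u, 0.5 * S0 * (a - 1.0))    # both ~ -0.4908
print(v, 0.5 * S0 * (a + 1.0))    # both ~  0.5092
\end{verbatim}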
In the presence of symmetry breaking mechanisms, i.e., for $X \ne 0$, one can still find a steady state solution of the form \begin{align} S_y(y)&~=~-\frac{S_0}{2}\,e^{-{y}/{l_\textnormal{X}}}\,C_1\,\sin q_\textnormal{X}y,\label{dampedy}\\ S_z(y)&~=~\frac{S_0}{2}\,e^{-{y}/{l_\textnormal{X}}}\left(C_2\,\sin q_\textnormal{X}y+\cos q_\textnormal{X}y\right).\label{dampedz} \end{align} This solution is a spatially damped persistent spin helix state with coefficients given by \begin{align} l_{X}^{-1}&~=~\frac{q_0}{2}\,\sqrt{2\,\Xi+(N+1)\,\xi-2},\label{lx}\\ q_{X}&~=~\frac{q_0}{2}\,\sqrt{2\,\Xi-(N+1)\,\xi+2},\label{qx}\\ C_1&~=~\frac{4\,\sqrt{2\,\Xi-(1+N)\,\xi+2}}{\xi^2\,(-8(N^2-1)+(N-1)^3\xi)}\nonumber\\ &\qquad\Big{[}4+(3\,N+1)\,\xi -N\,(N-1)\,\xi^2\nonumber\\ &\qquad\,-(4+(N-1)\,\xi)\,\Xi\,\Big{]},\\ C_2&~=~\frac{8-(N-1)^2\xi^2-4\left(2\,\Xi-(N+1)\,\xi\right)}{(N-1)\,\xi\,\sqrt{8\,(N+1)\xi-(N-1)^2\xi^2}},\label{C2} \end{align} where $\xi\equiv X/({q_0^2\,D})$ and $\Xi\equiv\sqrt{(1+\xi)(1+N\,\xi)}$. In the absence of symmetry breaking mechanisms ($\xi\rightarrow 0$) the $t\rightarrow \infty$ asymptotics of Eq.~\eqref{symmetric}, i.e., the truly persistent spin helix state, is recovered.
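The damping length and wave vector \eqref{lx}-\eqref{qx} are easily evaluated; the sketch below (with illustrative values of $\xi$, not fitted ones, and omitting the amplitudes $C_1$, $C_2$) shows how the undamped helix is recovered as $\xi\rightarrow 0$:
\begin{verbatim}
import numpy as np

# Damped-helix parameters, Eqs. (lx)-(qx); xi = X/(q0^2 D).
def damped_helix(xi, N, q0=1.0):
    Xi = np.sqrt((1.0 + xi) * (1.0 + N * xi))
    l_inv = 0.5 * q0 * np.sqrt(2.0*Xi + (N + 1.0)*xi - 2.0)
    q_X   = 0.5 * q0 * np.sqrt(2.0*Xi - (N + 1.0)*xi + 2.0)
    return l_inv, q_X

for xi in (0.0, 0.05, 0.2):       # N = 2: cubic Dresselhaus case
    print(xi, damped_helix(xi, N=2))
# xi = 0 gives l_inv = 0 (no damping) and q_X = q0.
\end{verbatim}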
The spatially damped persistent spin helix state \eqref{dampedy}-\eqref{dampedz} could in principle be excited with the initial spin polarization profile \begin{align} \vc{S}(\vc{x},t=0)&~=~S_0\,e^{-{y}/{l_\textnormal{X}}}\left(0,\,0,\,\cos q_\textnormal{X} y\right).\label{damped} \end{align} Although the spatially damped persistent spin helix is clearly a steady state solution when the symmetry breaking is caused by simple spin flips or extrinsic spin-orbit interaction, it is not obvious that this applies also to the case of cubic Dresselhaus spin-orbit interaction, since we have neglected the renormalization of the linear Dresselhaus spin-orbit interaction (present for $\zeta \ne 0$), which might lead to a finite lifetime of the spatially damped state. Nevertheless, even when the renormalization of the linear Dresselhaus spin-orbit interaction is taken into account one can still find a steady state solution of the form~\eqref{dampedy}-\eqref{C2} when the ratio of the linear Rashba and Dresselhaus spin-orbit interactions is given by \begin{align} \frac{\beta}{\alpha}&~=~\frac{D}{D-\frac{1}{2}\,\zeta\,z_4\,(D+\tilde{D})}.\label{B1} \end{align} Then the spin diffusion equation~\eqref{matrixeq} can still be cast into the form of Eq.~\eqref{2by2eqatSP} when the symmetry breaking rate $X$ is redefined as $\tilde{X}= X+ q_0^2 D\, F(T)$ with the temperature dependent dimensionless function $F(T)= \frac{1}{4}\Big(\frac{D^2-\zeta z_4\,D\,(D+\tilde{D})+\zeta^2 z_6\, D\,\tilde{D}}{D^2-\zeta z_4 D(D+\tilde{D})+\frac{1}{4}\zeta^2 z_4^2( D+\tilde{D})^2}-1\Big).$ For this symmetry breaking rate $\tilde{X}$ and spin-orbit couplings satisfying Eq.~\eqref{B1} the spatially damped spin profile of the form \eqref{dampedy}-\eqref{C2} is again infinitely long-lived. This stationary state should in principle be realizable in the GaAs/AlGaAs quantum well used in Ref.~\onlinecite{koralek} because there the ratio $\beta/\alpha$ almost fulfills relation~\eqref{B1} at a temperature of $T=100$ K. For the parameters of the GaAs/AlGaAs quantum well of Ref.~\onlinecite{koralek} the steady state solution \eqref{dampedy}-\eqref{C2} would be characterized by a wave vector of $q_{\tilde{X}}\approx q_0$ and a damping length of a bit more than a PSH wavelength, $l_{\tilde{X}}\approx 1.06\,\frac{2\,\pi}{q_0}$. Although a spin grating with such a strong spatial damping might be difficult to realize, it should be noted that the required damping length is $\propto \zeta^{-1}$, so that one can expect much longer damping lengths for thinner quantum wells. We now want to consider the conventional PSH solution. If we stick to an initial spin polarization in the form of a plane wave~\eqref{initial}, similar to the experimental set-up,~\cite{koralek} the time-dependent solution is given by a double exponential decay, \begin{align} S_y(y,t)&~=~\frac{S_0}{2}\,\sin q_0y\,\frac{4\,q_0^2\,D\left(e^{-\frac{t}{\tau_R}}- e^{-\frac{t}{\tau_E}}\right)}{\sqrt{(4\,q_0^2\,D)^2+(N-1)^2\, X^2}},\label{Sydouble}\\ S_z(y,t)&~=~\frac{S_0}{2}\,\cos q_0y\left[e^{-\frac{t}{\tau_R}}+ e^{-\frac{t}{\tau_E}}\right.\nonumber\\ &\quad\quad\qquad\left.+\frac{(N-1)\,X\left(e^{-\frac{t}{\tau_R}}- e^{-\frac{t}{\tau_E}}\right)}{\sqrt{(4\,q_0^2\,D)^2+(N-1)^2\, X^2}} \right]\label{Szdouble} \end{align} with the symmetry--{\it enhanced} and {\it --reduced} lifetimes \begin{align} \tau_{E(R)}^{-1}&~=~2\,q_0^2\,D+\frac{1}{2}\,(N+1)\,X\nonumber\\ &~\quad\mp\frac{1}{2}\sqrt{(4\,q_0^2\,D)^2+(N-1)^2\, X^2} .\label{tauexact} \end{align} In the absence of any symmetry-breaking relaxation mechanism, i.e., for $X=0$, the proper persistent spin helix state is recovered ($\tau_E=\infty$). Expanding Eq.~\eqref{tauexact} for small $X/(4\,q_0^2\,D)\ll 1$ we obtain \begin{align} \tau_{E}&~\approx~ \frac{2}{(N+1)}\,X^{-1}+\left(\frac{N-1}{N+1}\right)^2\frac{1}{4\,q_0^2\,D} ,\label{tauE}\\ \tau_{R}&~\approx~ \frac{1}{4\,q_0^2\,D} -\frac{(N+1)\,X}{2\,(4\,q_0^2\,D)^2} \label{tauR}. \end{align} The reduced lifetime $\tau_{R}$ is not very sensitive to details of the symmetry-breaking mechanism as long as it is weak. Correspondingly, the temperature dependence of the reduced lifetime $\tau_R$ is almost independent of the symmetry breaking mechanism (and is given by the electron-electron relaxation rate $\tau_{\textnormal{e-e},1}^{-1}$ contained in $D$ via $\tau_1$, see~Eq.~\eqref{tau1}). The temperature dependence of the enhanced lifetime $\tau_E$, by contrast, depends crucially on the symmetry breaking mechanism under consideration and thus offers a way to discriminate between the different symmetry breaking mechanisms. For small symmetry breaking terms the enhanced lifetime $\tau_E$ is proportional to the inverse of the respective scattering rate, $X^{-1}$. Therefore the temperature dependence of $\tau_E$ is likewise determined by the respective scattering rate. For simple spin-flip scattering, $X=\tau_{\rm sf}^{-1}$, we expect a temperature independent lifetime $\tau_E$ due to constant $\tau_{\rm sf}$. For extrinsic spin-orbit interactions, $X=\gamma_{\rm ey}$, to leading order in $X/(4\,q_0^2\,D)$ the only temperature dependence comes from the Sommerfeld corrections. Thus $\tau_E$ decreases quadratically with temperature. For cubic Dresselhaus spin-orbit interaction one finds \begin{align} \tau_E&~\approx~ \frac{2}{3}\, \gamma_\textnormal{cd}^{-1}z_6^{-1} \label{tauEcD} \end{align} and therefore $\tau_E$ is proportional to $\tau_3^{-1}$ (see Eq.~\eqref{Gammacd}). Since $\tau_3$ decreases with temperature because of enhanced electron-electron scattering $\tau_{\textnormal{e-e},3}^{-1}$ (see Eq.~\eqref{tau3}), the lifetime $\tau_E$ initially increases with temperature due to the motional narrowing effect in the D'yakonov-Perel' regime. The presence of the Sommerfeld function $z_6$, on the other hand, leads to a decrease of $\tau_E$ with increasing temperature.
Thus for cubic Dresselhaus spin-orbit interaction we find that the temperature dependence is governed by a competition between increasing and decreasing contributions. We will compare this theoretical interpretation with experimental results for the persistent spin helix in GaAs/AlGaAs quantum wells~\cite{koralek} in the next section. \section{Persistent spin helix in G\lowercase{a}A\lowercase{s}/A\lowercase{l}G\lowercase{a}A\lowercase{s} quantum wells} \label{sec:numbers} \begin{figure} \psfrag{u}[bl]{{$T$/K}} \psfrag{v}[bc]{{$\tau^{-1}_{\textnormal{e-e},1(3)},\tilde{\tau}^{-1}_{\textnormal{e-e},1}$/ps$^{-1}$}} \psfrag{x}[bl]{{$T$/K}} \psfrag{y}[bc]{{$\tau^{-1}_{1(3)},\tilde{\tau}^{-1}_{1}$/ps$^{-1}$}} $\begin{array}[b]{l} \multicolumn{1}{c}{\mbox{{ \bf{(a)}}}} \\ [-0.2 cm] \includegraphics[width=0.56\columnwidth,clip=true ]{Fig1a.eps}\\ {\vspace{0.2 cm}}\\ \multicolumn{1}{c}{\mbox{{ \bf{(b)}}}} \\ [-0.2 cm] \includegraphics[width=0.56\columnwidth,clip=true ]{Fig1b.eps} \\ \end{array}$ \caption{(a) Temperature-dependent relaxation rates due to electron-electron interactions $\tau^{-1}_{\textnormal{e-e},1}$ (solid line), $\tilde{\tau}^{-1}_{\textnormal{e-e},1}$ (dot-dashed line) and $\tau^{-1}_{\textnormal{e-e},3}$ (dashed line), as numerically computed using the experimental parameters of Ref.~\onlinecite{koralek}. In order to continuously interpolate between the data points, we made a fit to the functional form $A\,T^2+B\,T^2\ln T$, which has been shown to be correct for the spin Coulomb drag conductivity at low temperatures in Ref.~\onlinecite{Amico2}. For comparison we also show (blue dotted line) the inverse transport time $\tau^{-1}$(100 K). (b) The resulting effective relaxation rates $\tau^{-1}_{1}$ (solid), $\tau^{-1}_{3}$ (dashed) and $\tilde{\tau}^{-1}_{1}$ (dot-dashed), cf.~Eqs.~\eqref{tau1}-\eqref{tau1tilde} and \eqref{tau3}.} \label{fig:eerates} \end{figure} In order to address the lifetime of the PSH observed experimentally in GaAs/AlGaAs quantum wells\cite{koralek} we consider cubic Dresselhaus alongside extrinsic spin-orbit interaction as possible symmetry breaking mechanisms. We also include the renormalization of the linear Dresselhaus coupling constant due to cubic Dresselhaus spin-orbit interaction ($\zeta \neq 0$ in Eqs.~\eqref{Gxysw}-\eqref{L}). Analogously to the previous section we can set $S_x=0$ and then the spin diffusion equation~\eqref{matrixeq} reduces for the remaining components $S_y$ and $S_z$ to: \begin{align} \partial_t\,\vc{S}&~=~ \begin{pmatrix} {D}\,\partial_y^2-Y &K_{yz}(\pi/4)\,\partial_y\\ -K_{zy}(\pi/4)\,\partial_y& {D}\,\partial_y^2-Z \end{pmatrix}\,\vc{S}\label{2by2eq} \end{align} with \begin{align} Y&~=~\Gamma_y(\pi/4) +\gamma_\textnormal{cd}\,z_6+\gamma_\textnormal{ey}\,z_4,\\ Z&~=~\Gamma_x(\pi/4)+\Gamma_y(\pi/4)+2\,\gamma_\textnormal{cd}\,z_6+\Gamma_\textnormal{sw}.
\end{align} For an initial spin polarization of the form $\vc{S}(\vc{x},t=0)=S_0\left(0,\,0,\,\cos q_0 y\right)$ the time-dependent part of the solution is given by a double exponential decay, i.e., a sum of two exponentially decaying terms with a symmetry-enhanced lifetime $\tau_E$ and a symmetry-reduced lifetime $\tau_R$ given by \begin{align}\label{tauER} \tau_{E(R)}^{-1}&~=~\frac{1}{2}\,(Y+Z)+q_0^2\,D\\ &\qquad\mp\frac{1}{2}\,\sqrt{(Y-Z)^2+4 \, q_0^2\,K_{yz}(\pi/4)\,K_{zy}(\pi/4)}.\nonumber \end{align} In order to compare our theory with the experiment of Ref.~\onlinecite{koralek} we need to calculate the coefficients that occur in Eq.~\eqref{tauER}---in particular the temperature-dependent rates for electron-electron scattering. Fig.~\ref{fig:eerates} (a) shows $\tau^{-1}_{\textnormal{e-e},1}$, $\tilde{\tau}^{-1}_{\textnormal{e-e},1}$ and $\tau^{-1}_{\textnormal{e-e},3}$, evaluated from Eqs.~\eqref{tauee1}-\eqref{tauee3} by Monte Carlo integration for the parameters of Ref.~\onlinecite{koralek}. With these electron-electron scattering rates we find for the effective scattering rates in Eqs.~\eqref{tau1}-\eqref{tau1tilde} and \eqref{tau3} the results depicted in Fig.~\ref{fig:eerates}(b). \begin{figure}[t] \psfrag{x}[bl]{{$T$/K}} \psfrag{y}[bc]{{$\tau_{E(R)}$/ps}} \psfrag{u}[bc]{{$\tau_{E}$/ps}} $\begin{array}[b]{l} \multicolumn{1}{c}{\mbox{{ \bf{(a)}}}} \\ [-0.86 cm] \includegraphics[width=0.66\columnwidth,clip=true ]{Fig2aa.eps}\\ {\vspace{0.4 cm}}\\ \multicolumn{1}{c}{\mbox{{ \bf{(b)}}}} \\ [-0.3 cm] \hspace{0.2 cm}\includegraphics[width=0.66\columnwidth,clip=true ]{Fig2bb.eps}\\ {\vspace{0.4 cm}}\\[0.36 cm] \multicolumn{1}{c}{\mbox{{ \bf{(c)}}} } \\ [-0.86 cm] \includegraphics[width=0.66\columnwidth,clip=true ]{Fig2cc.eps}\\ \end{array}$ \caption{(Color online) (a) Temperature-dependent lifetimes of the enhanced (red/grey) and reduced (blue/black) modes. The points are experimental data from Ref.~\onlinecite{koralek}. Solid lines are the respective theoretical curves including extrinsic and cubic Dresselhaus spin-orbit interactions as well as electron-electron interactions; the thin dashed line is the simplified result from Eq.~\eqref{tauEcD}. In panel (b) we zoom in on the theoretical curve of panel (a) close to the maximum of $\tau_{E}$. The dashed line is the theoretical curve without extrinsic spin-orbit interaction. Panel (c) depicts the results of a calculation, where we include extrinsic and cubic Dresselhaus spin-orbit interaction but exclude electron-electron interactions. (Also here, a comparison as in (b) would show that the influence of the extrinsic spin-orbit interactions is marginal.)} \label{lifetimeplot} \end{figure} In Fig.~\ref{lifetimeplot}, we show the numerical results for the temperature dependence of the symmetry-enhanced and reduced lifetimes $\tau_{E(R)}$, where we use the experimental parameters of Ref.~\onlinecite{koralek}. In particular, we take $\alpha=0.0013$ for the Rashba spin-orbit interaction and $\gamma\,v_F=5.0\textnormal{ eV}\textnormal{ \r{A}}^3$ for the cubic Dresselhaus spin-orbit interaction. We adjust the linear Dresselhaus spin-orbit interaction to $\beta= 1.29 \,\alpha$ in order to maximize $\tau_E$ at $T=75$ K---the temperature at which the spin-orbit interaction was tuned to maximize $\tau_E$ in the experiment as well.
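The lifetimes \eqref{tauER} follow from the eigenvalues of the $2\times 2$ operator \eqref{2by2eq} evaluated at the wave vector $q_0$. The sketch below (with placeholder parameter values, not the fitted experimental ones) evaluates Eq.~\eqref{tauER} and cross-checks the one-mechanism limit, Eq.~\eqref{tauexact}, which corresponds to $Y=q_0^2D+X$, $Z=q_0^2D+N\,X$ and $K_{yz}=K_{zy}=2\,q_0\,D$:
\begin{verbatim}
import numpy as np

# Lifetimes from Eq. (tauER).
def lifetimes(Y, Z, D, q0, Kyz, Kzy):
    mean  = 0.5 * (Y + Z) + q0**2 * D
    split = 0.5 * np.sqrt((Y - Z)**2 + 4.0 * q0**2 * Kyz * Kzy)
    return 1.0 / (mean - split), 1.0 / (mean + split)  # (tau_E, tau_R)

D, q0, X, N = 1.0, 1.0, 0.05, 2      # illustrative numbers; N = 2:
tauE, tauR = lifetimes(q0**2*D + X,  # cubic Dresselhaus case
                       q0**2*D + N*X, D, q0, 2.0*q0*D, 2.0*q0*D)
rateE = (2.0*q0**2*D + 0.5*(N + 1)*X
         - 0.5*np.sqrt((4.0*q0**2*D)**2 + (N - 1)**2 * X**2))
print(tauE, 1.0 / rateE)             # agree: Eq. (tauexact) recovered
\end{verbatim}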
At intermediate temperatures around $100$ K, i.e., in the temperature range where our theory should be most applicable, we find very good agreement between our theory (solid lines) and the experimental lifetimes (dots), see Fig.~\ref{lifetimeplot} (a). We observe a maximum in $\tau_E$ roughly where the experimental points exhibit one. Also the size of $\tau_E$ as well as of $\tau_R$ is very close to the experimental values. Since the scattering rates due to extrinsic spin-orbit interaction are very small in the GaAs/AlGaAs quantum well under consideration, i.e., $\gamma_\textnormal{ey}/\gamma_\textnormal{cd}\approx 10^{-4}$ and $\tau\,\gamma_\textnormal{sw}\approx 3\times 10^{-3} $, effects of extrinsic spin-orbit interaction turn out to be negligible, see Fig.~\ref{lifetimeplot} (b). A calculation which includes extrinsic spin-orbit interactions and electron-electron interactions but excludes cubic Dresselhaus spin-orbit interaction (not depicted in Fig.~2) would yield enhanced lifetimes that exceed the experimental ones by a factor $\sim 10^3$. Interestingly, the simple result \eqref{tauEcD} for the enhanced lifetime, where we neglected the renormalization of the linear Dresselhaus spin-orbit interaction due to cubic Dresselhaus spin-orbit interaction, is a fairly good approximation (see dashed curve in Fig.~\ref{lifetimeplot} (a)). Thus the simple interpretation of the temperature dependence of $\tau_E$ can also be extended to the present situation. The formation of the maximum in $\tau_E$ at intermediate temperatures around $100$ K is caused by the competition between two effects: on the one hand $\tau_E$ increases with temperature due to increasing electron-electron scattering, which leads in the presence of symmetry breaking cubic Dresselhaus spin-orbit interaction to the usual motional-narrowing effect in the D'yakonov-Perel' regime. On the other hand the magnitude of Sommerfeld corrections increases with temperature reducing the lifetime $\tau_E$ in two ways: (i) by increasing the effective cubic Dresselhaus scattering rate $\gamma_{\rm cd}\,z_6$ and (ii) by increasing the linear renormalization of the Dresselhaus spin-orbit interaction, which leads to a detuning of the Rashba and the effective linear Dresselhaus spin-orbit interactions. The important effect of electron-electron interaction for the temperature dependence of the lifetimes $\tau_E$ and $\tau_R$ can also be deduced from Fig.~\ref{lifetimeplot}(c), where we show the lifetimes excluding the effect of electron-electron interactions. Obviously the initial increase of the lifetimes with temperature is absent for both $\tau_E$ and $\tau_R$ in the absence of electron-electron interaction. At low temperatures and at high temperatures deviations between our theory and the experimental lifetimes are more pronounced. We suppose that at high temperatures symmetry breaking mechanisms that are not captured by our model (e.g.~effects involving phonons) could become important. Furthermore, since the Fermi temperature in the GaAs/AlGaAs quantum well under consideration is only $T_F=400$ K we cannot expect our calculation, which is based on a low-order Sommerfeld expansion, to be as accurate in the high temperature range above $200$ K. The disagreement at low temperatures, on the other hand, results most likely from the fact that we do not take into account the temperature dependence of the transport lifetime but rather use the experimental $100$ K-transport lifetime $\tau(100\textnormal{ K})= 1$ ps at all temperatures. 
In reality, however, the transport lifetime increases strongly with decreasing temperature~\cite{koralek} such that $b_F\,\tau_1\gtrsim 1$ for low temperatures, i.e., the system is outside the D'yakonov-Perel' regime and our theory is no longer applicable. In this low-temperature regime other approaches which account for strong spin-orbit interaction could be used.\cite{bernevig3,liu-2011} \section{Conclusions\label{sec:conclusions}} Using a spin-coherent Boltzmann-type approach we have derived semiclassical spin-diffusion equations for a two-dimensional electron gas with Rashba and Dresselhaus spin-orbit interactions including the effect of cubic Dresselhaus and extrinsic spin-orbit interactions as well as the influence of electron-electron interactions. Based on this approach we have analyzed the role of electron-electron interaction in generating a finite lifetime of the persistent spin helix state. Our calculation shows that the Hamiltonian has to contain SU(2)-breaking terms such as cubic Dresselhaus or extrinsic spin-orbit interactions in addition to electron-electron interactions. Otherwise the persistent spin helix remains infinitely long-lived. We find that in this respect the effect of extrinsic spin-orbit interaction is negligible in the quantum wells used in the experiment by Koralek \etal\cite{koralek} Instead, the experimentally observed temperature dependence of the lifetime of the persistent spin helix, which displays a maximum at intermediate temperatures close to 100 K, is caused by the interplay of cubic Dresselhaus spin-orbit interaction and electron-electron interactions. The formation of the maximum can be understood as follows: due to electron-electron interactions the scattering rates of the winding number $\pm 3$ components of the spin distribution function grow with increasing temperature. Since the inverse of these rates enters the effective scattering rate in the D'yakonov-Perel' regime, electron-electron interactions increase the PSH lifetime with temperature. On the other hand, Sommerfeld corrections of the cubic Dresselhaus spin-orbit interaction enter directly into the expressions for the effective scattering rates and thus decrease the lifetime of the PSH state with increasing temperature. Temperature-induced deviations from the SU(2) point due to a renormalization of the linear Dresselhaus coupling constant by cubic Dresselhaus spin-orbit interaction also increase with temperature and thus effectively reduce the lifetime of the PSH state. Since these corrections due to cubic Dresselhaus spin-orbit interaction dominate for larger temperatures, whereas the effect of electron-electron interaction prevails for lower temperatures, a maximum of the PSH lifetime emerges at intermediate temperatures. Our theory reproduces qualitatively the lifetime of the PSH state in the whole temperature range accessed experimentally by Koralek \etal\cite{koralek}. For intermediate temperatures close to the maximum, i.e., in the regime where our diffusive theory should be valid, we also find quantitative agreement with the experimental data. In order to maximize the lifetime, we propose to use a spatially damped sinusoidal spin profile as an initial condition for a transient spin grating spectroscopy experiment. When cubic Dresselhaus spin-orbit interaction is the only SU(2)-breaking element, the proposed spin density profile is infinitely long-lived, similar to the PSH state in the absence of symmetry-breaking terms.
It may be interesting to also include disorder in the local Rashba spin-orbit coupling or spin-dependent electron-electron scattering in order to apply our theory to situations where the cubic Dresselhaus spin-orbit interaction is less dominant. These relaxation mechanisms are currently discussed in the context of spin relaxation in (110)-grown GaAs quantum wells.\cite{sherman, glazov} \begin{acknowledgments} We thank F.\ von Oppen for helpful discussions and J.\,D.\ Koralek for providing us with experimental data. This work was supported by SPP 1285 of the DFG. \end{acknowledgments} \begin{appendix} \section{Sommerfeld functions}\label{app:sommerfeld} From the standard Sommerfeld technique in the theory of the Fermi gas it is well known that the approximation \begin{align} \int_0^\infty d \epsilon\,g(\epsilon)\, f(\epsilon)&~=~\int_0^{E_F} d \epsilon\, g(\epsilon)+\frac{\pi^2}{6} \,(k_B T)^2\,g'(E_F)\nonumber\\ &\qquad+\mathcal{O}(T^4/T_F^4) \end{align} holds, where $f(\epsilon)$ is the Fermi distribution and $g(\epsilon)$ is a function of the energy that varies slowly for $\epsilon\approx E_F$. In the derivation of the spin-diffusion equations we have to deal with powers of momentum $k^2,k^3,k^4,k^6,k^8$. Since the dispersion is quadratic and the 2d DOS is constant, the problem reduces to ($n=1,\frac{3}{2},2,3,4$) \begin{align} \int_0^\infty d \epsilon\,\epsilon^n\, f'(\epsilon)&= -\int_0^\infty d \epsilon\,n\,\epsilon^{n-1}\, f(\epsilon)\nonumber\\ &= -(E_F)^n\left[1+n\,\left(n-1\right)\frac{\pi^2}{6}\left(\frac{k_B\,T}{E_F}\right)^2\right]\nonumber\\ &\quad +\mathcal{O}(T^4/T_F^4). \end{align} Thus, $k^2$ terms do not acquire any $T$-dependent corrections, whereas the higher powers, $k^3,k^4,k^6$ and $k^8$, are not simply replaced by $-k_F^3,\dots,-k_F^8$ but acquire corrections in the form of the factors $z_3,\dots z_8$, see Eqs.~\eqref{z3}-\eqref{z8}.
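The accuracy of this replacement is easy to check numerically. The short Python sketch below compares the integral $\int_0^\infty d\epsilon\,\epsilon^n f'(\epsilon)$ with the $\mathcal{O}(T^2)$ formula above, in units with $k_B=E_F=1$ and for a hypothetical temperature $T/T_F=0.05$; the residual differences are of the expected order $T^4/T_F^4$.
\begin{verbatim}
import numpy as np

# Numerical check of the moment formula above, in units with k_B = E_F = 1
# and for a hypothetical temperature T/T_F = 0.05.
EF, T = 1.0, 0.05
eps, de = np.linspace(1e-8, 8.0, 400001, retstep=True)
x = (eps - EF) / T
fprime = -np.exp(x) / (T * (1.0 + np.exp(x))**2)      # df/d(epsilon)

for n in (1.0, 1.5, 2.0, 3.0, 4.0):
    y = eps**n * fprime
    numeric = de * (0.5 * (y[0] + y[-1]) + y[1:-1].sum())  # trapezoidal rule
    formula = -EF**n * (1.0 + n * (n - 1.0) * np.pi**2 / 6.0 * (T / EF)**2)
    print(f"n = {n:3.1f}: numeric = {numeric:+.6f}, Sommerfeld = {formula:+.6f}")
\end{verbatim}
\end{appendix} \pagebreak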
\section{Introduction} The study of the size and shape of the density distributions of protons and neutrons in nuclei is a classic, yet always contemporary area of nuclear physics. The proton densities of a host of nuclei are known quite well from the accurate nuclear charge densities measured in experiments involving the electromagnetic interaction \cite{fricke95}, such as elastic electron scattering. In contrast, the neutron densities have been probed in fewer nuclei and are generally much less certain. The neutron distribution of $^{208}$Pb, and its rms radius in particular, is nowadays attracting significant interest in both experiment and theory. Indeed, the neutron skin thickness, i.e., the neutron-proton rms radius difference \begin{equation} \label{skin} \Delta r_{np}= \langle r^2 \rangle_n^{1/2} - \langle r^2 \rangle_p^{1/2} , \end{equation} of this nucleus has close ties with the density-dependent nuclear symmetry energy and with the equation of state of neutron-rich matter. In nuclear models, $\Delta r_{np}$ of $^{208}$Pb displays nearly linear correlations with the slope of the equation of state of neutron matter \cite{bro00,typ01,cen02}, with the density derivative $L$ of the symmetry energy \cite{fur02,dan03,bal04,ava07,cen09,cen09b,vid09}, and with the surface-symmetry energy of the finite nucleus \cite{cen09}. At first sight, it may seem intriguing that a property of the mean position of the surface of the nucleon densities ($\Delta r_{np}$) is correlated with a purely bulk property of infinite nuclear matter ($L$). However, we have to keep in mind that $\Delta r_{np}$ depends on the surface-symmetry energy. This quantity reduces the bulk-symmetry energy due to the finite size of the nucleus. Assuming a local density approximation, we can correlate the surface-symmetry energy with the density slope $L$, which determines the departure of the symmetry energy from the bulk value. The correlation of $\Delta r_{np}$ with $L$ then follows. Actually, these correlations can be derived almost analytically starting from the droplet model (DM) of Myers and \'{S}wi\c{a}tecki \cite{MS,MSskin} as we showed in Refs.\ \cite{cen09,cen09b}. Because of its close connections with the nuclear symmetry energy, an accurate knowledge of $\Delta r_{np}$ of $^{208}$Pb can have important implications in diverse problems of nuclear structure and of heavy-ion reactions, in studies of atomic parity violation, as well as in the description of neutron stars and in other areas of nuclear astrophysics (see, e.g., Refs.\ \cite{hor01,die03,sil05,ste05a,lat07,dhiman07,samaddar07,kli07,li08, xu09,sun09,khoa09,cen10,carbone10,chen10}). Since the charge radius of $^{208}$Pb has been measured with extreme accuracy ($r_{ch}= 5.5013(7)$ fm \cite{fricke95}), the neutron rms radius of $^{208}$Pb is the principal unknown piece of the puzzle. The lead parity radius experiment (PREX) \cite{prex1} is a challenging experimental effort that aims to determine $\langle r^2 \rangle_n^{1/2}$ of $^{208}$Pb almost model independently and to 1\% accuracy by parity-violating electron scattering \cite{prex1,prex2}. This purely electroweak experiment has been run at Jefferson Lab very recently, although results are not yet available. Parity-violating electron scattering is useful for measuring neutron densities because in the low-momentum transfer regime the $Z^0$ boson couples mainly to neutrons. For protons, this coupling is highly suppressed because of the value of the Weinberg angle.
Therefore, from parity-violating electron scattering one can obtain the weak charge form factor and the closely related neutron form factor. From these data, the neutron rms radius can in principle be deduced \cite{prex2}. This way of proceeding is similar to how the charge density is obtained from unpolarized electron scattering data \cite{prex2}. The electroweak experiments get rid of the complexities of the hadronic interactions, and the reaction mechanism does not have to be modeled. Thus, the analysis of the data can be both clean and model independent. There may be a certain model dependence, in the end, in having to use some neutron density shape to extract the neutron rms radius from the parity-violating asymmetry measured at a finite momentum transfer. To date, the existing constraints on neutron radii, skins, and neutron distributions of nuclei have mostly used strongly interacting hadronic probes. Unfortunately, the measurements of neutron distributions with hadronic probes are bound to have some model dependence because of the uncertainties associated with the strong force. Among the more frequent experimental techniques we may mention nucleon elastic scattering \cite{kar02,cla03}, the inelastic excitation of the giant dipole and spin-dipole resonances \cite{kra99,kra04}, and experiments in exotic atoms \cite{trz01,jas04,fried03,fried05}. Recent studies indicate that the pygmy dipole resonance may be another helpful tool to constrain neutron skins \cite{kli07,carbone10}. The extraction of neutron radii and neutron skins from experiment is intertwined with the dependence of these quantities on the shape of the neutron distribution \cite{trz01,jas04,fried03,fried05,don09}. The data typically do not indicate unambiguously, by themselves, whether the difference between the peripheral neutron and proton densities that gives rise to the neutron skin is caused by an extended bulk radius of the neutron density, by a modification of the width of the surface, or by some combination of both effects. In the present work we look for theoretical indications on this problem and study whether the origin of the neutron skin thickness of $^{208}$Pb comes from the bulk or from the surface of the nucleon densities according to the mean-field models of nuclear structure. The answer turns out to be connected with the density dependence of the nuclear symmetry energy in the theory. We described in Ref.\ \cite{warda10} a procedure to discern bulk and surface contributions in the neutron skin thickness of nuclei. It can be applied to both theoretical and experimental nucleon densities as it only requires knowledge of the equivalent sharp radius and surface width of these densities, which one can obtain by fitting the actual densities with two-parameter Fermi (2pF) distributions. The 2pF shape is commonly used to characterize nuclear densities and nuclear potentials in both theoretical and experimental analyses. The doubly magic number of protons and neutrons in $^{208}$Pb ensures that deformations do not influence the results and spherical density distributions describe the nuclear surface very well. We perform our calculations with several representative effective nuclear forces, namely, nonrelativistic interactions of the Skyrme and Gogny type and relativistic mean-field (RMF) interactions.
The free parameters and coupling constants of these nuclear interactions have usually been adjusted to describe data that are well known empirically, such as binding energies, charge radii, single-particle properties, and several features of the nuclear equation of state. However, the same interactions predict widely different results for the size of the neutron skin of $^{208}$Pb and, as we will see, for its bulk or surface nature. We also study the halo or skin character \cite{trz01,jas04,fried03,fried05,don09} of the nucleon densities of $^{208}$Pb in mean-field models. Finally, we perform calculations of parity-violating electron scattering on $^{208}$Pb. We show that if 2pF nucleon densities are assumed, the parity-violating asymmetry as predicted by mean-field models can be approximated by a simple and analytical expression in terms of the central radius and surface width of the neutron and proton density profiles. This suggests that an experiment such as PREX could allow one to obtain some information about the neutron density profile of the $^{208}$Pb nucleus in addition to its neutron rms radius. The rest of the article proceeds as follows. In Sec.\ \ref{formalism}, we summarize the formalism to decompose the neutron skin thickness into bulk and surface components. The results obtained in the nuclear mean-field models are presented and discussed in Sec.\ \ref{results}. A summary and the conclusions are given in Sec.\ \ref{summary}. \section{Formalism} \label{formalism} The analysis of bulk and surface contributions to the neutron skin thickness of a nucleus requires proper definitions of these quantities based on nuclear density distributions. We presented in Ref.\ \cite{warda10} such a study, and we summarize only its basic points here. One can characterize the size of a nuclear density distribution $\rho(r)$ through several definitions of radii, and each definition may be more useful for a specific purpose (see Ref.\ \cite{has88} for a thorough review). Among the most common radii, we have the {\it central radius}~$C$: \begin{equation} \label{c} C= \frac{1}{\rho(0)} \int_0^{\infty} \rho(r) dr \,; \end{equation} the {\it equivalent sharp radius}~$R$: \begin{equation} \label{r} \frac 43 \pi R^3 \rho({\rm bulk}) = 4\pi \int_0^{\infty} \rho(r) r^2 dr , \end{equation} i.e., the radius of a uniform sharp distribution whose density equals the bulk value of the actual density and has the same number of particles; and the {\it equivalent rms radius}~$Q$: \begin{equation} \label{q} \frac 35 \, Q^2= \langle r^2 \rangle , \end{equation} which describes a uniform sharp distribution with the same rms radius as the given density. These three radii are related by expansion formulas \cite{has88}: \begin{equation} \label{qr} Q = R \Big(1+\frac 52\frac {b^2}{R^2} + \ldots \Big) , \quad C = R \Big(1-\frac {b^2}{R^2} + \ldots \Big) . \end{equation} Here, $b$ is the {\it surface width} of the density profile: \begin{equation} \label{b} b^2= - \frac{1}{\rho(0)} \int_0^{\infty} (r-C)^2 \frac{d \rho(r)}{dr} dr , \end{equation} which provides a measure of the extent of the surface of the density. Relations (\ref{qr}) usually converge quickly because $b/R$ is small in nuclei, especially in heavy-mass systems. Nuclear density distributions have oscillations in the inner bulk region, and a meaningful average is needed to determine the density values $\rho(0)$ and $\rho({\rm bulk})$ appearing in the above equations.
This can be achieved by matching the original density with a 2pF distribution: \begin{equation} \label{2pf} \rho(r) = \frac{\rho_0}{1+ \exp{ [(r-C)/a] }} . \end{equation} In 2pF functions the bulk density value corresponds very closely to the central density, and the latter coincides to high accuracy with the $\rho_0$ parameter if $\exp{(-C/a)}$ is negligible. The surface width $b$ and the diffuseness parameter $a$ of a 2pF function are related by $b= (\pi/ \sqrt{3}) a$. As discussed in Ref.\ \cite{has88}, the equivalent sharp radius $R$ is the quantity of basic geometric importance among the $C$, $Q$, and $R$ radii. This is because a sharp distribution of radius $R$ has the same volume integral as the density of the finite nucleus and differs from it only in the surface region. We illustrate this fact in Fig.~\ref{radii} using as an example the neutron density of $^{208}$Pb from a mean-field calculation. We can see that the mean-field density is clearly overestimated in the whole nuclear interior by a sharp sphere of radius $C$. The equivalent rms radius $Q$ also fails, underestimating the density instead. Only the equivalent sharp radius $R$ is able to reproduce properly the bulk part of the original density profile of the nucleus. Therefore, $R$ appears as the suitable radius to describe the size of the bulk of the nucleus. \begin{figure} \includegraphics[width=0.98\columnwidth,clip=true] {FIG01_lepto_dens_Pb.eps} \caption{\label{radii} (Color online) Comparison of sharp density profiles having radii $C$, $R$, and $Q$ with the mean-field and 2pF density distributions for the neutron density of $^{208}$Pb. The RMF interaction NL3 has been used in the mean-field calculation.} \end{figure} As the neutron skin thickness (\ref{skin}) is defined through rms radii, it can be expressed with $Q$: \begin{equation} \label{r0} \Delta r_{np}=\sqrt{\frac{3}{5}} \left(Q_n-Q_p\right) . \end{equation} Recalling from (\ref{qr}) that $Q\simeq R + \frac 52 (b^2/R)$, we have a natural distinction in $\Delta r_{np}$ between bulk ($\propto R_n-R_p$) and surface contributions. That is to say, \begin{equation} \label{rtot} \Delta r_{np} = \Delta r_{np}^{\rm bulk} + \Delta r_{np}^{\rm surf} , \end{equation} with \begin{equation} \label{rb} \Delta r_{np}^{\rm bulk} = \sqrt{\frac{3}{5}}\left(R_n-R_p\right) \end{equation} independent of surface properties, and \begin{equation} \label{rs} \Delta r_{np}^{\rm surf} = \sqrt{\frac{3}{5}} \, \frac{5}{2} \Big(\frac{b_n^2}{R_n}-\frac{b_p^2}{R_p}\Big) \end{equation} of surface origin. The nucleus may develop a neutron skin by separation of the bulk radii $R$ of neutrons and protons or by modification of the width $b$ of the surfaces of the neutron and proton densities. In the general case, both effects are expected to contribute. We note that Eq.\ (\ref{rs}) coincides with the expression of the surface width contribution to the neutron skin thickness provided by the DM of Myers and \'{S}wi\c{a}tecki \cite{MS,MSskin} if we set in Eq.\ (\ref{rs}) $R_n= R_p= r_0 A^{1/3}$. The next-order correction to Eq.\ (\ref{rs}) can be easily evaluated for 2pF distributions (cf.\ Ref.\ \cite{has88} for the higher-order corrections to the expansions (\ref{qr})) and gives \begin{equation} \label{rscorr} \Delta r_{np}^{\rm surf,corr} = - \sqrt{\frac{3}{5}} \, \frac{5}{2} \, \frac{21}{20} \Big(\frac{b_n^4}{R_n^3}-\frac{b_p^4}{R_p^3}\Big) . \end{equation} This quantity is usually very small---indeed, we neglected it in \cite{warda10}.
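As an illustration, the following minimal Python sketch evaluates the decomposition of Eqs.~(\ref{rb})--(\ref{rscorr}) using, for definiteness, the NL3 values of $R$ and $b$ for $^{208}$Pb listed in Table~\ref{TABLE2} below; it reproduces the corresponding entries of Table~\ref{TABLE1}.
\begin{verbatim}
import math

# Bulk/surface split of the neutron skin, Eqs. (rb)-(rscorr), evaluated
# with the NL3 values for 208Pb from Table II (R and b in fm).
Rn, Rp = 7.060, 6.821
bn, bp = 1.017, 0.807

c = math.sqrt(3.0 / 5.0)
bulk = c * (Rn - Rp)
surf = c * 2.5 * (bn**2 / Rn - bp**2 / Rp)
surf_corr = -c * 2.5 * 1.05 * (bn**4 / Rn**3 - bp**4 / Rp**3)  # 21/20 = 1.05

print(f"bulk  = {bulk:.3f} fm")                      # ~0.185 fm
print(f"surf  = {surf + surf_corr:.3f} fm")          # ~0.095 fm
print(f"total = {bulk + surf + surf_corr:.3f} fm")   # ~0.280 fm, cf. Table I
\end{verbatim}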
In the case of $^{208}$Pb, we have found in all calculations with mean-field models that this correction lies between $-0.0025$ fm and $-0.004$ fm, and thus it can be neglected for most purposes. But because in the present work we deal with some detailed comparisons among the models, we have included (\ref{rscorr}) in the numerical values shown for the surface contribution $\Delta r_{np}^{\rm surf}$ in the later sections. It should be mentioned that there is no universal method for parametrizing the neutron and proton densities with 2pF functions. A popular prescription is to use a $\chi^2$ minimization of the differences between the density to be reproduced and the 2pF profile, or of the differences between their logarithms. These methods may depend somewhat on the conditions imposed during the minimization (number of mesh points, limits, etc.). As in \cite{warda10}, we have preferred to extract the parameters of the 2pF profiles by imposing that they reproduce the same quadratic $\langle r^2 \rangle$ and quartic $\langle r^4 \rangle$ moments of the self-consistent mean-field densities, and the same number of nucleons. These conditions determine the equivalent 2pF densities uniquely and ensure a good reproduction of the surface region of the original density, because the local distributions of the quantities $r^2 \rho(r)$ and $r^4 \rho(r)$ peak at the peripheral region of the nucleus. An example of this type of fit is displayed in Fig.~\ref{radii} by the dash-dotted line. It can be seen that the equivalent 2pF distribution nicely averages the quantal oscillations at the interior and reproduces accurately the behavior of the mean-field density at the surface. \section{Results} \label{results} \subsection{Survey of model predictions and of data for $\Delta r_{np}$ of $^{208}$Pb} We calculate our results with nonrelativistic models of Skyrme type (SGII, Ska, SkM*, Sk-T4, Sk-T6, Sk-Rs, SkMP, SkSM*, MSk7, SLy4, HFB-8, HFB-17) and of Gogny type (D1S, D1N), as well as with several relativistic models (NL1, NL-Z, NL-SH, NL-RA1, NL3, TM1, NLC, G2, FSUGold, DD-ME2, NL3*). The original references to the different interactions can be found in the papers \cite{xu09,HFB-17} for the Skyrme models, \cite{cha08} for the Gogny models, and \cite{patra02b,sulaksono07,FSUG,DDME2,NL3S} and Ref.\ [19] in \cite{patra02b} for the RMF models. It may be mentioned that the recent force HFB-17 \cite{HFB-17} achieves the lowest rms deviation with respect to experimental nuclear masses found to date in a mean-field approach. As is well known, nonrelativistic and relativistic models differ in the stiffness of the symmetry energy. Note that by soft or stiff symmetry energy we mean that the symmetry energy increases slowly or rapidly as a function of the nuclear density around the saturation point. Of course, the soft or stiff character can depend on the explored density region; for example, it is possible that a symmetry energy that is soft at nuclear densities becomes stiff at much higher densities \cite{HFBXII}, or that a model with a stiff symmetry energy at normal density has a smaller symmetry energy at low densities \cite{todd03}. The density dependence of the nuclear symmetry energy $c_{\rm sym}(\rho)$ around saturation is frequently parametrized through the slope $L$ of $c_{\rm sym}(\rho)$ at the saturation density: \begin{equation} L = \left. 3\rho_0\frac{\partial c_{\rm sym}(\rho)}{\partial \rho} \right|_{\rho_0} .
\label{Lsym} \end{equation} The pressure of pure neutron matter is directly proportional to $L$ \cite{piek09} and thus the $L$ value has important implications for both neutron-rich nuclei and neutron stars. The symmetry energy of the Skyrme and Gogny forces analyzed in this work ranges, as is usual in nonrelativistic models, from very soft to moderately stiff at nuclear densities (see Table~\ref{TABLE1} for the $L$ parameter of the models). On the contrary, the majority of the relativistic parameter sets have a stiff or very stiff symmetry energy around saturation. The exceptions to the last statement in our case are the covariant parameter sets FSUGold and DD-ME2, which have a milder symmetry energy than the typical RMF models. FSUGold achieves this through an isoscalar-isovector nonlinear meson coupling \cite{FSUG} and DD-ME2 through density-dependent meson-exchange couplings \cite{DDME2}. In Table~\ref{TABLE1} we display the neutron skin thickness of $^{208}$Pb obtained from the self-consistent densities of the various interactions (denoted as $\Delta r_{np}^{\rm s.c.}$). It is evident that the nuclear mean-field models predict a large window of values for this quantity. The nonrelativistic models with softer symmetry energies point toward a range of about 0.1--0.17 fm. Most of the relativistic models, having a stiff symmetry energy, point toward larger neutron skins of 0.25--0.3~fm. In between, the relativistic models DD-ME2 and FSUGold predict a result close to 0.2~fm and the Skyrme interactions that have relatively stiffer symmetry energies fill in the range between 0.2 and 0.25~fm. \begin{table}[t] \caption{Neutron skin thickness in $^{208}$Pb calculated with the self-consistent densities of several nuclear mean-field models ($\Delta r_{np}^{\rm s.c.}$) and its partition into bulk and surface contributions defined in Sec.\ \ref{formalism}, as well as the relative weight of these bulk and surface parts. The models are listed in order of increasing $\Delta r_{np}^{\rm s.c.}$. The density slope $L$ of the symmetry energy of the models is also listed. In order to help distinguish relativistic and nonrelativistic models, we have preceded the relativistic ones with an r in this table.} \begin{ruledtabular} \begin{tabular}{lcccccc} Model & $\Delta r_{np}^{\rm s.c.}$ & $\Delta r_{np}^{\rm bulk}$ & $\Delta r_{np}^{\rm surf}$ & bulk & surf & $L$ \\ & (fm) & (fm) & (fm) & \% & \% & (MeV) \\ \hline \; HFB-8 & 0.115 & 0.031 & 0.084 & 27 & 73 & \ 14.8 \\ \; MSk7 & 0.116 & 0.030 & 0.086 & 26 & 74 & \ \ 9.4\\ \; D1S & 0.135 & 0.062 & 0.073 & 46 & 54 & \ 22.4 \\ \; SGII & 0.136 & 0.065 & 0.071 & 48 & 52 & \ 37.6 \\ \; D1N & 0.142 & 0.070 & 0.072 & 49 & 51 & \ 31.9 \\ \; Sk-T6 & 0.151 & 0.067 & 0.084 & 44 & 56 & \ 30.9 \\ \; HFB-17 & 0.151 & 0.067 & 0.084 & 44 & 56 & \ 36.3 \\ \; SLy4 & 0.161 & 0.086 & 0.075 & 53 & 47 & \ 46.0 \\ \; SkM* & 0.170 & 0.093 & 0.077 & 55 & 45 & \ 45.8 \\ r\! DD-ME2 & 0.193 & 0.098 & 0.095 & 51 & 49 & \ 51.3 \\ \; SkSM* & 0.197 & 0.116 & 0.082 & 58 & 42 & \ 65.5 \\ \; SkMP & 0.197 & 0.123 & 0.074 & 62 & 38 & \ 70.3 \\ r\! FSUGold & 0.207 & 0.105 & 0.102 & 51 & 49 & \ 60.5 \\ \; Ska & 0.211 & 0.140 & 0.071 & 66 & 34 & \ 74.6 \\ \; Sk-Rs & 0.215 & 0.146 & 0.069 & 68 & 32 & \ 85.7 \\ \; Sk-T4 & 0.248 & 0.163 & 0.085 & 66 & 34 & \ 94.1 \\ r\! G2 & 0.257 & 0.171 & 0.086 & 66 & 34 & 100.7 \\ r\! NLC & 0.263 & 0.174 & 0.089 & 66 & 34 & 108.0 \\ r\! NL-SH & 0.266 & 0.169 & 0.097 & 64 & 36 & 113.6 \\ r\!
TM1 & 0.271 & 0.172 & 0.098 & 64 & 36 & 110.8 \\ r\! NL-RA1 & 0.274 & 0.179 & 0.095 & 65 & 35 & 115.4 \\ r\! NL3 & 0.280 & 0.185 & 0.095 & 66 & 34 & 118.5 \\ r\! NL3* & 0.288 & 0.191 & 0.097 & 66 & 34 & 122.6 \\ r\! NL-Z & 0.307 & 0.209 & 0.098 & 68 & 32 & 133.3 \\ r\! NL1 & 0.321 & 0.216 & 0.105 & 67 & 33 & 140.1 \end{tabular} \end{ruledtabular} \label{TABLE1} \end{table} Before proceeding, we would like to briefly survey some of the recent results deduced for $\Delta r_{np}$ in $^{208}$Pb from experiment. For example, the recent analysis in Ref.\ \cite{klo07} of the data measured in the antiprotonic $^{208}$Pb atom \cite{trz01,jas04} gives $\Delta r_{np}= 0.16 \pm(0.02)_{\rm stat} \pm(0.04)_{\rm syst}$ fm, including statistical and systematic errors. Another recent study \cite{bro07} of the antiprotonic data for the same nucleus leads to $\Delta r_{np}= 0.20 \pm(0.04)_{\rm exp} \pm(0.05)_{\rm th}$ fm, where the theoretical error is suggested from comparison of the models with the experimental charge density. These determinations are in consonance with the {\em average} value of the hadron scattering data for the neutron skin thickness of $^{208}$Pb, namely, $\Delta r_{np} \sim 0.165 \pm 0.025$ fm (taken from the compilation of hadron scattering data in Fig.~3 of Ref.\ \cite{jas04}). We may also mention that the constraints on the nuclear symmetry energy derived from isospin diffusion in heavy-ion collisions of neutron-rich nuclei suggest $\Delta r_{np}= 0.22\pm 0.04$ fm in $^{208}$Pb \cite{che05}. Following Ref.\ \cite{ste05}, the same type of constraints excludes $\Delta r_{np}$ values in $^{208}$Pb less than 0.15 fm. A recent prediction based on measurements of the pygmy dipole resonance in $^{68}$Ni and $^{132}$Sn gives $\Delta r_{np}= 0.194 \pm 0.024$ fm in $^{208}$Pb \cite{carbone10}. Finally, we quote the new value $\Delta r_{np}= 0.211^{+0.054}_{-0.063}$ fm determined in \cite{zenihiro10} from proton elastic scattering. Thus, in view of the empirical information for the central value of $\Delta r_{np}$ and in view of the $\Delta r_{np}^{\rm s.c.}$ values predicted by the theoretical models in Table~\ref{TABLE1}, it may be said that those interactions with a soft (but not very soft) symmetry energy, for example, HFB-17, SLy4, SkM*, DD-ME2, or FSUGold, agree better with the determinations from experiment. Nevertheless, the uncertainties in the available information for $\Delta r_{np}$ are rather large, and one cannot rule out the predictions by other interactions. If the PREX experiment \cite{prex1,prex2} achieves the purported goal of accurately measuring the neutron rms radius of $^{208}$Pb, it will allow us to pin down more strictly the constraints on the neutron skin thickness of the mean-field models. \subsection{Bulk and surface contributions to $\Delta r_{np}$ of $^{208}$Pb in nuclear models and the symmetry energy} \label{bulksurf} We next discuss the results for the division of the neutron skin thickness of $^{208}$Pb into bulk ($\Delta r_{np}^{\rm bulk}$) and surface ($\Delta r_{np}^{\rm surf}$) contributions in the nuclear mean-field models, following Sec.\ \ref{formalism}. We display this information in Table~\ref{TABLE1}. It may be noticed that the value of $\Delta r_{np}^{\rm bulk}$ plus $\Delta r_{np}^{\rm surf}$ (quantities obtained from Eqs.\ (\ref{rb})--(\ref{rscorr})) agrees excellently with $\Delta r_{np}^{\rm s.c.}$ (neutron skin thickness obtained from the self-consistent densities).
One finds that the bulk contribution $\Delta r_{np}^{\rm bulk}$ to the neutron skin of $^{208}$Pb varies in a window from about 0.03 fm to 0.22 fm. The surface contribution $\Delta r_{np}^{\rm surf}$ lies approximately between 0.07 fm and 0.085 fm in the nonrelativistic forces, and between 0.085 fm and 0.105 fm in the relativistic ones. Thus, whereas the bulk contribution to the neutron skin thickness of $^{208}$Pb changes largely among the different mean-field models, the surface contribution remains confined to a narrower band of values. Table~\ref{TABLE1} shows that the size of the neutron skin thickness of $^{208}$Pb is divided into bulk and surface contributions in almost equal parts in the nuclear interactions that have soft symmetry energies (say, $L\sim20$--60). This is the case of multiple nonrelativistic interactions and of the covariant DD-ME2 and FSUGold parameter sets. When the symmetry energy becomes softer, the bulk part tends to be smaller. Indeed, we see that in the models that have a very soft symmetry energy ($L\lesssim20$), which we may call ``supersoft'' \cite{wen09}, the surface contribution takes over and is responsible for the largest part ($\sim75\%$) of $\Delta r_{np}$ of $^{208}$Pb. At variance with this situation, in the models with stiffer symmetry energies ($L\gtrsim75$) about two thirds of $\Delta r_{np}$ of $^{208}$Pb come from the bulk contribution, as seen in the Skyrme forces of stiffer symmetry energy and in all of the relativistic forces that have a conventional isovector channel (G2, TM1, NL3, etc.). We therefore note that in a heavy neutron-rich nucleus with a sizable neutron skin such as $^{208}$Pb, the nuclear interactions with a soft symmetry energy predict that the contribution to $\Delta r_{np}$ produced by differing widths of the surfaces of the neutron and proton densities ($b_n \ne b_p$) is similar to, or even larger than, the effect from differing extensions of the bulk of the nucleon densities ($R_n \ne R_p$). On the contrary, the nuclear interactions with a stiff symmetry energy favor a dominant bulk nature of the neutron skin of $^{208}$Pb, and then the largest part of $\Delta r_{np}$ is caused by $R_n \ne R_p$. We collect in Table~\ref{TABLE2} the resulting equivalent sharp radii $R_n$ and $R_p$ and surface widths $b_n$ and $b_p$ of the densities of $^{208}$Pb in the present mean-field models. \begin{table}[t] \caption{Equivalent sharp radius and surface width of the 2pF neutron and proton density distributions of $^{208}$Pb in mean-field models.
Units are fm.} \begin{ruledtabular} \begin{tabular}{lcccc} Model & $R_n$ & $R_p$ & $b_n$ & $b_p$ \\ \hline HFB-8 & 6.822 & 6.782 & 0.991 & 0.819 \\ MSk7 & 6.847 & 6.808 & 0.980 & 0.801 \\ D1S & 6.830 & 6.751 & 0.994 & 0.846 \\ SGII & 6.890 & 6.806 & 0.971 & 0.821 \\ D1N & 6.845 & 6.755 & 0.979 & 0.828 \\ Sk-T6 & 6.862 & 6.775 & 0.994 & 0.820 \\ HFB-17 & 6.883 & 6.797 & 0.996 & 0.821 \\ SLy4 & 6.902 & 6.790 & 1.007 & 0.852 \\ SkM* & 6.907 & 6.786 & 1.007 & 0.847 \\ DD-ME2 & 6.926 & 6.800 & 1.026 & 0.829 \\ SkSM* & 6.955 & 6.805 & 0.970 & 0.790 \\ SkMP & 6.943 & 6.784 & 0.997 & 0.839 \\ FSUGold & 6.971 & 6.836 & 1.024 & 0.808 \\ Ska & 6.970 & 6.789 & 0.998 & 0.844 \\ Sk-Rs & 6.950 & 6.762 & 0.962 & 0.806 \\ Sk-T4 & 6.991 & 6.780 & 1.008 & 0.823 \\ G2 & 7.037 & 6.817 & 1.012 & 0.824 \\ NLC & 7.087 & 6.863 & 1.016 & 0.820 \\ NL-SH & 7.039 & 6.821 & 0.989 & 0.772 \\ TM1 & 7.085 & 6.862 & 1.005 & 0.787 \\ NL-RA1 & 7.065 & 6.834 & 1.008 & 0.797 \\ NL3 & 7.060 & 6.821 & 1.017 & 0.807 \\ NL3* & 7.052 & 6.806 & 1.026 & 0.814 \\ NL-Z & 7.134 & 6.865 & 1.058 & 0.847 \\ NL1 & 7.100 & 6.822 & 1.065 & 0.840 \end{tabular} \end{ruledtabular} \label{TABLE2} \end{table} As we have seen, the neutron skin thickness of a heavy nucleus is strongly influenced by the density derivative $L$ of the symmetry energy. Indeed, one easily suspects from Table~\ref{TABLE1} that $\Delta r_{np}$ of $^{208}$Pb is almost linearly correlated with $L$ in the nuclear mean-field models, which Fig.~\ref{correlation} confirms for the present interactions. The correlation of the neutron skin thickness of $^{208}$Pb with $L$ has been amply discussed in the literature \cite{fur02,dan03,bal04,ava07,cen09,cen09b,vid09}, as it implies that an accurate measurement of the former observable could allow one to tightly constrain the density dependence of the nuclear symmetry energy. In particular, we studied the aforementioned correlation in Ref.\ \cite{cen09} where it is shown that the expression of the neutron skin thickness in the DM of Myers and \'{S}wi\c{a}tecki \cite{MS,MSskin} can be recast to leading order in terms of the $L$ parameter. To do that, we use the fact that in all mean-field models the symmetry energy coefficient computed at $\rho \approx 0.10$ fm$^{-3}$ is approximately equal to the DM symmetry energy coefficient in $^{208}$Pb, which includes bulk- and surface-symmetry contributions \cite{cen09}. In the standard DM, where the surface widths of the neutron and proton densities are taken to be the same \cite{MS,MSskin}, the neutron skin thickness is governed by the ratio between the bulk-symmetry energy at saturation $J \equiv c_{\rm sym}(\rho_0)$ and the surface stiffness coefficient $Q$ of the DM \cite{cen09,cen09b} (the latter is not to be confused with the equivalent rms radius $Q$ of Eq.\ (\ref{q})). The DM coefficient $Q$ measures the resistance of the nucleus against the separation of the neutron and proton surfaces to form a neutron skin. We have shown \cite{cen09,cen09b} in mean-field models that the DM formula for the neutron skin thickness in the case where one assumes $b_n=b_p$ undershoots the corresponding values computed by the semiclassical extended Thomas-Fermi method in finite nuclei and, therefore, a nonvanishing surface contribution is needed to describe more accurately the mean-field results.
However, this surface contribution has a more involved dependence on the parameters of the interaction and does not show a definite correlation with the $J/Q$ ratio (see Fig.~4 of Ref.\ \cite{cen09b}). We now ask to what degree the correlation with $L$ of the neutron skin thickness of $^{208}$Pb holds for its bulk and surface parts extracted from actual mean-field densities. From our discussion of the indications provided by the DM, we can expect this correlation to be strong in the bulk part and weak in the surface part. Indeed, the plots of $\Delta r_{np}^{\rm bulk}$ and $\Delta r_{np}^{\rm surf}$ against $L$ in Fig.~\ref{correlation} show that the bulk part displays the same high correlation with $L$ as the total neutron skin (the linear correlation factor is 0.99 in both cases), whereas the surface part exhibits a mostly flat trend with $L$. The linear fits in Fig.~\ref{correlation} of the neutron skin thickness of $^{208}$Pb and of its bulk part also have quite similar slopes. One thus concludes that the linear correlation of $\Delta r_{np}$ of $^{208}$Pb with the density content of the nuclear symmetry energy arises mainly from the bulk part of $\Delta r_{np}$. In other words, the correlation arises from the change induced by the density dependence of the symmetry energy in the equivalent sharp radii of the nucleon density distributions of $^{208}$Pb rather than from the change of the width of the surface of the nucleon densities. The value of about 0.1 fm that the surface contribution to $\Delta r_{np}$ takes in $^{208}$Pb can be understood as follows starting from Eq.\ (\ref{rs}). Taking into account that in 2pF distributions fitted to mean-field densities $R_n \sim R_p \sim 1.16 A^{1/3}$ fm and $b_n + b_p \sim 1.8$ fm (see Table \ref{TABLE2}), Eq.\ (\ref{rs}) can be approximated as \begin{equation} \Delta r^{\rm surf}_{np} \sim 3 A^{-1/3} (b_n -b_p) . \end{equation} Given that $b_n - b_p \sim 0.2$ fm for $^{208}$Pb on the average in mean-field models (see Table \ref{TABLE2}), one finds $\Delta r^{\rm surf}_{np} \sim$ 0.1 fm, rather independently of the model used to compute it. It is interesting to ask why the range of variation of $b_n$ relative to $b_p$ is not larger in nuclear models, given that $R_n-R_p$ takes a much wider range of values. As discussed in Ref.\ \cite{prex2}, this constraint is imposed on the models most likely by the mass fits. For example, a model having nucleon densities with very small or very large surface widths (i.e., very sharp or very extended surfaces) would produce a large change in the surface energy of the nucleus, and would hardly be successful in reproducing the known nuclear masses. \begin{figure}[t] \includegraphics[width=1.0\columnwidth,clip=true] {FIG02_LINEARITY_WITH_L.eps} \caption{\label{correlation} (Color online) Correlation of the neutron skin thickness of $^{208}$Pb and of its bulk and surface parts with the density derivative $L$ of the nuclear symmetry energy.} \end{figure} \subsection{Discussion of the shape of the neutron density profiles} The use of 2pF functions in order to represent the nuclear densities by approximate distributions is also quite common in experimental investigations. The parameters of the proton 2pF distribution can be assumed to be known in experiment, by unfolding them from the accurately measured charge density \cite{warda10}. However, the shape of the neutron density is more uncertain, and even if the neutron rms radius is determined, it can correspond to different shapes of the neutron density.
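This degeneracy is easy to make explicit for 2pF profiles, for which $\langle r^2 \rangle = \frac{3}{5}\,C^2 + \frac{7}{5}\,\pi^2 a^2$ to leading order (this relation is used again below). The following short sketch fixes a hypothetical neutron rms radius and shows that rather different pairs $(C,a)$ reproduce it equally well.
\begin{verbatim}
import math

# For a 2pF profile, <r^2> = (3/5) C^2 + (7/5) pi^2 a^2 to leading order.
# Fix a hypothetical neutron rms radius and scan the diffuseness a: each a
# yields a central radius C reproducing the same rms radius.
rms = 5.74                      # hypothetical neutron rms radius (fm)
r2 = rms**2
for a in (0.45, 0.50, 0.55, 0.60):
    C = math.sqrt((5.0 / 3.0) * (r2 - 1.4 * math.pi**2 * a**2))
    print(f"a = {a:.2f} fm  ->  C = {C:.3f} fm")
\end{verbatim}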
Actually, the shape of the neutron density is a significant question in the extraction of nuclear information from experiments in exotic atoms \cite{trz01,jas04,fried03,fried05} and from parity-violating electron scattering \cite{don09}. To handle the possible differences in the shape of the neutron density when analyzing the experimental data, the so-called ``halo'' and ``skin'' forms are frequently used \cite{trz01,jas04,fried03,fried05,don09}. In the ``halo-type'' distribution the nucleon 2pF shapes have $C_n = C_p$ and $a_n > a_p$, whereas in the ``skin-type'' distribution they have $a_n = a_p$ and $C_n > C_p$. To complete our study, we believe it is worthwhile to discuss the predictions of the theoretical models for the parameters of the 2pF shapes in $^{208}$Pb. We compile in Table~\ref{TABLE3} the central radii $C_n$ and $C_p$ and the diffuseness parameters $a_n$ and $a_p$ of the 2pF nucleon density profiles of $^{208}$Pb obtained from the mean-field models of Table~\ref{TABLE1}. We see that $C_n$ of neutrons spans a range of approximately 6.7--6.85 fm in the nonrelativistic interactions and of approximately 6.8--7 fm in the relativistic parameter sets. In the case of the proton density distribution, the value of $C_p$ is smallest ($\sim$6.65 fm) in the two Gogny forces, it is about 6.67--6.71 fm in the Skyrme forces, and it is in a range of 6.7--6.77 fm in the RMF models. We then note that not only $C_n$ of neutrons but also $C_p$ of protons is generally smaller in the nonrelativistic forces than in the relativistic forces. The total spread in $C_p$ among the models (about 0.12 fm) is, though, less than half the spread found in $C_n$ (about 0.3 fm). Indeed, the accurately known charge radius of $^{208}$Pb is an observable that usually enters the fitting protocol of the effective nuclear interactions. \begin{table}[t] \caption{Central radius and surface diffuseness of the 2pF neutron and proton density distributions of $^{208}$Pb in mean-field models.
Units are fm.} \begin{ruledtabular} \begin{tabular}{lcccccc} Model & $C_n$ & $C_p$ & $a_n$ & $a_p$ &$C_n-C_p$& $a_n-a_p$ \\ \hline HFB-8 & 6.679 & 6.683 & 0.546 & 0.451 &$-$0.004 \ \ & 0.095 \\ MSk7 & 6.707 & 6.714 & 0.540 & 0.442 &$-$0.007 \ \ & 0.099 \\ D1S & 6.687 & 6.649 & 0.546 & 0.464 & 0.038 & 0.082 \\ SGII & 6.753 & 6.707 & 0.536 & 0.453 & 0.046 & 0.083 \\ D1N & 6.705 & 6.657 & 0.537 & 0.453 & 0.048 & 0.084 \\ Sk-T6 & 6.718 & 6.676 & 0.548 & 0.452 & 0.042 & 0.096 \\ HFB-17 & 6.739 & 6.697 & 0.549 & 0.453 & 0.042 & 0.096 \\ SLy4 & 6.755 & 6.683 & 0.555 & 0.470 & 0.072 & 0.085 \\ SkM* & 6.760 & 6.681 & 0.555 & 0.467 & 0.079 & 0.088 \\ DD-ME2 & 6.774 & 6.699 & 0.566 & 0.457 & 0.075 & 0.109 \\ SkSM* & 6.819 & 6.713 & 0.535 & 0.436 & 0.106 & 0.099 \\ SkMP & 6.799 & 6.680 & 0.550 & 0.463 & 0.119 & 0.087 \\ FSUGold & 6.821 & 6.740 & 0.564 & 0.446 & 0.081 & 0.118 \\ Ska & 6.827 & 6.684 & 0.550 & 0.465 & 0.143 & 0.085 \\ Sk-Rs & 6.817 & 6.665 & 0.530 & 0.444 & 0.152 & 0.086 \\ Sk-T4 & 6.846 & 6.681 & 0.555 & 0.453 & 0.165 & 0.102 \\ G2 & 6.891 & 6.717 & 0.558 & 0.454 & 0.174 & 0.104 \\ NLC & 6.941 & 6.765 & 0.560 & 0.452 & 0.176 & 0.108 \\ NL-SH & 6.900 & 6.733 & 0.546 & 0.426 & 0.167 & 0.120 \\ TM1 & 6.942 & 6.772 & 0.554 & 0.434 & 0.170 & 0.120 \\ NL-RA1 & 6.921 & 6.741 & 0.556 & 0.440 & 0.180 & 0.116 \\ NL3 & 6.914 & 6.726 & 0.560 & 0.445 & 0.188 & 0.115 \\ NL3* & 6.903 & 6.709 & 0.566 & 0.449 & 0.194 & 0.117 \\ NL-Z & 6.977 & 6.761 & 0.584 & 0.467 & 0.216 & 0.117 \\ NL1 & 6.940 & 6.718 & 0.587 & 0.463 & 0.222 & 0.124 \end{tabular} \end{ruledtabular} \label{TABLE3} \end{table} If we inspect the results for the surface diffuseness of the density profiles of $^{208}$Pb in Table~\ref{TABLE3}, we see that $a_n$ of neutrons lies in a window of 0.53--0.59 fm (with the majority of the models having $a_n$ between 0.545 and 0.565 fm). The nonrelativistic interactions favor $a_n \lesssim 0.555$ fm, whereas the RMF sets favor $a_n \gtrsim 0.555$ fm. This indicates that the fall-off of the neutron density of $^{208}$Pb at the surface is in general faster in the interactions with a soft symmetry energy than in the interactions with a stiff symmetry energy. The surface diffuseness $a_p$ of the proton density spans {\em in both} the nonrelativistic and the relativistic models almost the same window of values (0.43--0.47 fm; with the majority of the models having $a_p$ between 0.445 and 0.465 fm). This fact is in contrast to the other 2pF parameters discussed so far. Actually, the $a_p$ value of the proton density can be definitely larger in some nonrelativistic forces than in some relativistic forces (for example, in the case of SkM* and NL3). One finds that the total spread of $a_n$ and $a_p$ within the analyzed models is quite similar: about 0.05 fm in both $a_n$ and $a_p$. This spread corresponds roughly to a 10\% variation compared to the mean values of $a_n$ and $a_p$. It is remarkable that while among the models $C_n$ has a significantly larger spread than $C_p$, the surface diffuseness $a_n$ of the neutron density has essentially the same small spread as the surface diffuseness $a_p$ of the proton density. As we have discussed at the end of Sec.\ \ref{bulksurf}, this is likely imposed by the nuclear mass fits. This means that our ignorance about the neutron distribution in $^{208}$Pb does not seem to produce in the mean-field models a larger uncertainty for $a_n$ of neutrons than for $a_p$ of protons, and that most of the uncertainty goes to the value of $C_n$.
The difference $C_n-C_p$ of the central radii of the nucleon densities of $^{208}$Pb turns out to range approximately between 0 and 0.2 fm. It is smaller for soft symmetry energies and larger for stiff symmetry energies. We realize that the limiting situation of a halo-type distribution where the nucleon densities of $^{208}$Pb have $C_n = C_p$ and $a_n>a_p$ is actually attained in the nuclear mean-field models with a very soft symmetry energy (as in HFB-8 or MSk7, where $C_n-C_p$ is even slightly negative). The difference $a_n-a_p$ of the neutron and proton surface diffuseness in $^{208}$Pb lies between nearly 0.08 and 0.1 fm in the nonrelativistic forces and between nearly 0.1 and 0.12 fm in the RMF forces. This implies that no interaction predicts $a_n-a_p$ of $^{208}$Pb as close to vanishing as $C_n-C_p$ is in some forces. Thus, the limiting situation where the nucleon densities in $^{208}$Pb would have $a_n=a_p$ and $C_n > C_p$ is not found in the nuclear mean-field models. Indeed, we observe in Table~\ref{TABLE3} that if $C_n-C_p$ becomes larger in the models, $a_n-a_p$ also tends, overall, to become larger. To help visualize the change in the mean-field nucleon densities of $^{208}$Pb from having a nearly vanishing $C_n-C_p$ or a large $C_n-C_p$, we have plotted in Fig.~\ref{profiles} the example of the densities of the MSk7 and NL3 interactions. On the one hand, we see that both models MSk7 and NL3 predict basically the same proton density, as expected. On the other hand, the difference between having $C_n \approx C_p$ in MSk7 and $C_n > C_p$ in NL3 can be appreciated in the higher bulk and the faster fall-off at the surface of the neutron density of MSk7 compared with NL3. In summary, we conclude that the nuclear mean-field models favor in $^{208}$Pb the halo-type distribution with $C_n\approx C_p$ and $a_n > a_p$ if they have a very soft (``supersoft'') symmetry energy, a mixed-type distribution if they have mild symmetry energies, and a situation where $C_n$ is clearly larger than $C_p$ if the symmetry energy is stiff. The pure skin-type distribution with $a_n-a_p=0$ in $^{208}$Pb (or even $a_n-a_p \approx 0$) is, however, not supported by the mean-field models. Although the experimental evidence available to date on the neutron skin thickness of $^{208}$Pb is compatible with the ranges of the $C_n-C_p$ and $a_n-a_p$ parameters considered in our study, it is not to be excluded that the description of a precision measurement in $^{208}$Pb may require nucleon densities with $C_n-C_p$ or $a_n-a_p$ values outside the ranges of Table~\ref{TABLE3}. However, a sizable deviation (such as $a_n-a_p=0$) could mean that there is some missing physics in the isospin channel of present mean-field interactions, because once these interactions are calibrated to reproduce the observed binding energies and charge radii of nuclei they typically lead to the ranges of Table~\ref{TABLE3}. \begin{figure} \includegraphics[width=0.98\columnwidth,clip=true] {FIG03_dens_MSk7_vs_NL3.eps} \caption{\label{profiles} (Color online) Comparison of the nucleon densities predicted in $^{208}$Pb by the mean-field models MSk7 ($C_n-C_p \approx 0$ fm) and NL3 ($C_n-C_p \approx 0.2$ fm).} \end{figure} \subsection{Application to parity-violating electron scattering} Parity-violating electron scattering is expected to be able to accurately determine the neutron density in a nucleus since the $Z^0$ boson couples mainly to neutrons \cite{prex1,prex2}.
Specifically, the PREX experiment \cite{prex1} aims to provide a clean measurement of the neutron radius of $^{208}$Pb. In this type of experiment one measures the parity-violating asymmetry \begin{equation} A_{LR}\equiv \frac{\displaystyle \frac{d\sigma_+}{d\Omega}-\frac{d\sigma_-}{d\Omega}} {\displaystyle \frac{d\sigma_+}{d\Omega}+\frac{d\sigma_-}{d\Omega}} , \label{alr} \end{equation} where $d\sigma_\pm/d\Omega$ is the elastic electron-nucleus cross section. The plus (minus) sign accounts for the fact that electrons with a positive (negative) helicity state scatter from different potentials ($V_\pm(r)=V_{\rm Coulomb}(r) \pm V_{\rm weak}(r)$ for ultra-relativistic electrons). Assuming for simplicity the plane-wave Born approximation (PWBA) and neglecting nucleon form factors, the parity-violating asymmetry at momentum transfer $q$ can be written as \cite{prex2} \begin{equation} A_{LR}^{\rm PWBA} = \frac{G_F q^2}{4\pi \alpha \sqrt{2}} \Big[ 4 \sin^2\theta_W + \frac{F_n(q) - F_p(q)}{F_p(q)} \Big] , \label{alrPWBA} \end{equation} where $\sin^2\theta_W \approx 0.23$ for the Weinberg angle and $F_n(q)$ and $F_p(q)$ are the form factors of the point neutron and proton densities. Because $F_p(q)$ is known from elastic electron scattering, it is clear from (\ref{alrPWBA}) that the largest uncertainty in computing $A_{LR}$ comes from our lack of knowledge of the distribution of neutrons inside the nucleus. PREX intends to measure $A_{LR}$ in $^{208}$Pb with a 3\% error (or smaller). This accuracy is thought to be enough to determine the neutron rms radius with a 1\% error \cite{prex1,prex2}. To compute the parity-violating asymmetry we essentially follow the procedure described in Ref.\ \cite{prex2}. For realistic results, we perform the exact phase-shift analysis of the Dirac equation for electrons moving in the potentials $V_\pm(r)$ \cite{roca08}. This method corresponds to the distorted-wave Born approximation (DWBA). The main inputs needed for solving this problem are the charge and weak distributions. To calculate the charge distribution, we fold the mean-field proton and neutron point-like densities with the electromagnetic form factors provided in \cite{emff}. For the weak distribution, we fold the nucleon point-like densities with the electric form factors reported in \cite{prex2} for the coupling of a $Z^0$ to the protons and neutrons. We neglect the strange form factor contributions to the weak density \cite{prex2}. Because the experimental analysis may involve parametrized densities, in our study we use the 2pF functions extracted from the self-consistent densities of the various models. The difference between $A_{LR}$ calculated in $^{208}$Pb with the 2pF densities and with the self-consistent densities is in any case marginal. In Fig.~\ref{alr2pF}, $A_{LR}^{\rm DWBA}$ obtained with the Fermi distributions listed in Table \ref{TABLE3} is plotted against the values of $C_n-C_p$ (lower panel) and $a_n-a_p$ (upper panel). To simulate the kinematics of the PREX experiment \cite{prex1}, we set the electron beam energy to 1~GeV and the scattering angle to $5^\circ$, which corresponds to a momentum transfer in the laboratory frame of $q=0.44$ fm$^{-1}$. First, one can see from Fig.~\ref{alr2pF} that the mean-field calculations constrain the value of the parity-violating asymmetry in $^{208}$Pb to a rather narrow window. The increasing trend of $A_{LR}$ with decreasing $C_n-C_p$ indicates that $A_{LR}$ is larger when the symmetry energy is softer.
Note that a large value $A_{LR} \approx 7 \times 10^{-7}$ (at 1~GeV and $5^\circ$) would be in support of a more surface than bulk origin of the neutron skin thickness of $^{208}$Pb and of the halo-type density distribution for this nucleus. Second, $A_{LR}^{\rm DWBA}$ displays, to a good approximation, a linear correlation with $C_n-C_p$ ($r=0.978$), while the correlation with $a_n-a_p$ is not remarkable. \begin{figure}[t] \includegraphics[width=0.98\columnwidth,clip=true] {FIG04_Alr_2pF_CORRELATIONS.eps} \caption{\label{alr2pF} (Color online) Parity-violating asymmetry for 1~GeV electrons at $5^\circ$ scattering angle calculated from the 2pF neutron and proton density distributions of $^{208}$Pb in nuclear mean-field models.} \end{figure} Nevertheless, we have found a very good description of $A_{LR}^{\rm DWBA}$ of the mean-field models---well below the 3\% limit of accuracy of the PREX experiment---by means of a fit in $C_n-C_p$ and $a_n-a_p$ (red crosses in Fig.~\ref{alr2pF}): \begin{equation} A_{LR}^{\rm fit} = [ \alpha + \beta (C_n-C_p) + \gamma (a_n-a_p) ]\times 10^{-7} , \label{alrfit} \end{equation} with $\alpha=7.33$, $\beta=-2.45$ fm$^{-1}$, and $\gamma=-3.62$ fm$^{-1}$. The parametrization (\ref{alrfit}) may be easily understood if we consider the PWBA expression of $A_{LR}$ given above in Eq.\ (\ref{alrPWBA}). At low momentum transfer, the form factors $F_n(q)$ and $F_p(q)$ of the neutron and proton densities (these are point densities in PWBA) can be expanded to first order in $q^2$, so that the numerator inside brackets in Eq.\ (\ref{alrPWBA}) becomes $-(q^2/6)\, (\langle r^2 \rangle_n - \langle r^2 \rangle_p)$. In 2pF density distributions we have $\langle r^2 \rangle_q = (3/5)\, C_q^2 + (7\pi^2/5)\, a_q^2$. Now, assuming constancy of $F_p(q^2)$ in the nuclear models and taking into account that $C_n+C_p \gg C_n-C_p$ and $a_n+a_p \gg a_n-a_p$, it is reasonable to assume that the variation of $A_{LR}$ is dominated by the change of $C_n-C_p$ and $a_n-a_p$ as proposed in Eq.~(\ref{alrfit}). In the analysis of a measurement of $A_{LR}$ in $^{208}$Pb through parametrized Fermi densities, one could set $C_p$ and $a_p$ to those known from experiment \cite{warda10} and then vary $C_n$ and $a_n$ in (\ref{alrfit}) to match the measured value. According to the predictions of the models in Table~\ref{TABLE3}, it would be reasonable to restrict this search to windows of about 0--0.22 fm for $C_n-C_p$ and 0.08--0.125 fm for $a_n-a_p$. Therefore, the result of a measurement of the parity-violating asymmetry together with Eq.~(\ref{alrfit}) (or Fig.~\ref{alr2pF}) would allow one not only to estimate the neutron rms radius of $^{208}$Pb but also to obtain some insight into the neutron density profile of this nucleus. This assumes that the experimental value for $A_{LR}$ will fall in, or at least not far from, the region allowed by the mean-field calculations at the same kinematics.
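As a simple illustration of how Eq.~(\ref{alrfit}) can be used, the following Python sketch evaluates the fit for the NL3 and MSk7 entries of Table~\ref{TABLE3} and then inverts it for $C_n-C_p$ at several fixed values of $a_n-a_p$; the ``measured'' asymmetry used in the inversion is a hypothetical number chosen only for illustration.
\begin{verbatim}
# Evaluate the fit (alrfit), alpha = 7.33, beta = -2.45/fm, gamma = -3.62/fm,
# and invert it for C_n - C_p at fixed a_n - a_p. The "measured" asymmetry
# below is a hypothetical number, chosen only for illustration.
alpha, beta, gamma = 7.33, -2.45, -3.62

def A_fit(dC, da):                    # dC = C_n - C_p, da = a_n - a_p (fm)
    return (alpha + beta * dC + gamma * da) * 1e-7

print(A_fit(0.188, 0.115))    # NL3 values from Table III   -> ~6.5e-7
print(A_fit(-0.007, 0.099))   # MSk7 values from Table III  -> ~7.0e-7

A_meas = 6.8e-7               # hypothetical measured asymmetry
for da in (0.08, 0.10, 0.125):
    dC = (A_meas * 1e7 - alpha - gamma * da) / beta
    print(f"a_n - a_p = {da:.3f} fm  ->  C_n - C_p = {dC:.3f} fm")
\end{verbatim}
\section{Summary} \label{summary} We have investigated, using Skyrme, Gogny, and relativistic mean-field models of nuclear structure, whether the difference between the peripheral neutron and proton densities that gives rise to the neutron skin thickness of $^{208}$Pb is due to an enlarged bulk radius of neutrons with respect to that of protons or, rather, to the difference between the widths of the neutron and proton surfaces.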
The decomposition of the neutron skin thickness into bulk and surface components has been obtained through two-parameter Fermi distributions fitted to the self-consistent nucleon densities of the models. Nuclear models that correspond to a soft symmetry energy, like various nonrelativistic mean-field models, favor the situation where the size of the neutron skin thickness of $^{208}$Pb is divided similarly into bulk and surface components. If the symmetry energy of the model is ``supersoft'', the surface part even becomes dominant. Instead, nuclear models that correspond to a stiff symmetry energy, like most of the relativistic models, predict a bulk component about twice as large as the surface component. We have found that the size of the surface component changes little among the various nuclear mean-field models and that the known linear correlation of $\Delta r_{np}$ of $^{208}$Pb with the density derivative of the nuclear symmetry energy arises from the bulk part of $\Delta r_{np}$. The latter result implies that an experimental determination of the equivalent sharp radius of the neutron density of $^{208}$Pb could be as useful for the purpose of constraining the density-dependent nuclear symmetry energy as a determination of the neutron rms radius. We have discussed the shapes of the 2pF distributions predicted for $^{208}$Pb by the nuclear mean-field models in terms of the so-called ``halo-type'' ($C_n-C_p=0$) and ``skin-type'' ($a_n-a_p=0$) distributions of frequent use in experiment. It turns out that the theoretical models can accommodate the halo-type distribution in $^{208}$Pb if the symmetry energy is supersoft. However, they do not support a purely skin-type distribution in this nucleus, even if the model has a very stiff symmetry energy. Let us mention that the information on neutron densities from antiprotonic atoms favored the halo-type over the skin-type distribution \cite{trz01,jas04}. We have closed our study with a calculation of the asymmetry $A_{LR}$ for parity-violating electron scattering off $^{208}$Pb under conditions matching the recently run PREX experiment \cite{prex1}, using the equivalent 2pF shapes of the models. This has allowed us to find a simple parametrization of $A_{LR}$ in terms of the differences $C_n-C_p$ and $a_n-a_p$ of the parameters of the nucleon distributions. Given a measured value of the parity-violating asymmetry, this parametrization would provide a new correlation between the central radius and the surface diffuseness of the neutron distribution in $^{208}$Pb, with the properties of the proton density taken as known from experiment. \begin{acknowledgments} Work partially supported by the Spanish Consolider-Ingenio 2010 Programme CPAN CSD2007-00042 and by Grants No.\ FIS2008-01661 from MICIN (Spain) and FEDER, No.\ 2009SGR-1289 from Generalitat de Catalunya (Spain), and No.\ N202~231137 from MNiSW (Poland). \end{acknowledgments}
\section{Introduction} One of the main challenges in molecular and systems biology is to infer mechanistic details of the processes that underlie available experimental data. A common strategy consists in applying controlled perturbations to the system of interest, and comparing model predictions with the gathered quantitative data. As an example, consider a simple two-component system in which a histidine kinase $E$ transfers a phosphate group to a response-regulator $S$: \begin{align*} E & \ce{->} E_p & S + E_p & \ce{ ->} S_p + E & S_p& \ce{->} S. \end{align*} Let us imagine a protein $I$, which forms an inhibitory complex $Y$ with the substrate by binding exactly one of the two forms, $S$ or $S_p$. This gives rise to two rival inhibition models with the following additional reactions \begin{align*} \textrm{Model 1: }\quad S + I & \ce{ <=> }Y & \textrm{Model 2: }\quad S_p + I & \ce{ <=> }Y. \end{align*} In this small (and artificial) example, measuring the concentration of $I$ at steady state for two starting conditions that differ only slightly in the amount of kinase $E$ helps us to discriminate qualitatively between these two models. Indeed, the steady-state concentration of $I$ increases with $E$ in the first model, while it decreases in the second model. This exemplifies the basis of perturbation-based studies, in which the response of a system to an intervention (the addition of a protein, the knockdown of a component, a modification of reaction rate constants, etc.) is recorded \cite{gardner,villaverde,paradoxical,Kholodenko:untangling}. In this paper we focus on how to predict this response given the model, so that a comparison with experimental data can take place. The modeling setting is based on reaction networks and their associated evolution equations for the concentrations of the species in the network. In the examples, we employ the mass-action assumption, although this is not required for the theoretical framework. We consider perturbations of the parameters of the model, these typically being either initial concentrations or total amounts, and kinetic parameters. We assume that the system is at steady state, and that a small perturbation to the parameter vector is performed. If the perturbation is small enough and the steady state is non-degenerate and \blue{stable}, then the system is expected to converge to a new steady state. Our goal is to determine, component-wise, the sign of the difference of the two steady states as a function of the starting steady state and the parameters of the system. Mathematically, this translates into determining the signs of the derivatives of all concentrations at steady state with respect to the performed perturbation; these signs are referred to as \emph{sign-sensitivities} \cite{SontagSigns}. Numerous computer-based approaches exist to find sign-sensitivities, within the same modeling framework as the one we use here or within different ones \cite{Kholodenko:untangling,gupta:sensitivity,vera:inferring}. A general strategy assumes all parameters known except the one of interest, and the response of the system is investigated using simulations; a minimal sketch of this strategy is given below. However, most quantities and parameters are notoriously hard to measure or estimate, which reduces the applicability of these methods. 
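As an illustration of this simulation-based strategy, the following Python sketch integrates the mass-action ODEs of the two rival inhibition models for two nearby initial amounts of $E$ and compares the steady-state concentration of $I$. All rate constants and initial concentrations are arbitrary values chosen only for the illustration.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

k1 = k2 = k3 = k4 = k5 = 1.0  # arbitrary illustrative rate constants

def rhs(model):
    # species order: E, Ep, S, Sp, I, Y
    def f(t, x):
        E, Ep, S, Sp, I, Y = x
        v1, v2, v3 = k1 * E, k2 * S * Ep, k3 * Sp
        # net binding of I to S (model 1) or to S_p (model 2)
        vb = k4 * (S if model == 1 else Sp) * I - k5 * Y
        dS = -v2 + v3 - (vb if model == 1 else 0.0)
        dSp = v2 - v3 - (vb if model == 2 else 0.0)
        return [-v1 + v2, v1 - v2, dS, dSp, -vb, vb]
    return f

def steady_I(model, E0):
    x0 = [E0, 0.0, 1.0, 0.0, 1.0, 0.0]
    sol = solve_ivp(rhs(model), (0.0, 2000.0), x0,
                    rtol=1e-10, atol=1e-12)
    return sol.y[4, -1]  # concentration of I at the final time

for model in (1, 2):
    dI = steady_I(model, 1.0 + 1e-3) - steady_I(model, 1.0)
    print(f"model {model}: I {'increases' if dI > 0 else 'decreases'}")
\end{verbatim}
The expected output, consistent with the derivatives computed below, is that $I$ increases with $E$ in model 1 and decreases in model 2.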
It is worth highlighting the algorithm by Sontag \cite{SontagSigns} to determine sign-sensitivities with respect to the increase of a total amount (a quantity that is preserved under the dynamics of the system), while treating the rest of the parameters as unknowns. Alternatively, if the network is small enough, one can attempt direct manipulation of the steady state equations \cite{Feliu:2010p94,bluthgen:feedback}. In recent works \cite{okada:sensitivity,okada:law}, Okada and Mochizuki provide a theorem to determine zero sensitivities from network structure alone. Similarly, Brehm and Fiedler study whether the sensitivity is zero \cite{brehm:sensitivity}. Another series of works \cite{shinar:sensitivity,shinar:sensitivity2} addresses the question of ``how large the absolute value of the sensitivity is'', by finding upper bounds for reaction networks in a specific class. \medskip In this paper we derive a closed formula for the sensitivities. When the kinetics are polynomial, the derivative is expressed as a rational function in the parameters of the system and the concentrations of the species at the steady state. If the numerator and the denominator of this function have all their coefficients of the same sign, then the sign of the derivative is easily determined, and it depends neither on the initial steady state nor on the parameters of the system. When the signs of the coefficients differ, one can employ standard techniques, such as those based on the Newton polytope, e.g. \cite{FeliuPlos}, to determine whether the derivative can be both positive and negative, thereby concluding that the sign depends on the chosen steady state and/or parameter values. For example, for the two rival inhibition models above, we find that the derivative $\se'_I$ of the concentration of $I$ with respect to the initial concentration of $E$ at a steady state $\se$ is, for model 1 and model 2, \begin{align*} \se_I' &=k_1k_2k_4 \se_{S}\se_{I}\, / \, q_1(k,\se) \quad \textrm{(model 1)} & \se_I' &= -k_1k_2k_4 \se_{S}\se_{I} \, / \, q_2(k,\se) \quad \textrm{(model 2)}, \end{align*} where $k_i>0$ stands for the reaction rate constant of the $i$-th reaction in the network (in the order given above), $\se_Z$ denotes the concentration of the species $Z$ at the steady state, and \begin{align*} q_1(k,\se)&= k_{1}k_{2}k_{4}\se_{E_p}\se_{S}+k_{2}k_{3}k_{4}\se_{S}^{2}+k_{2}k_{3}k_{4}\se_{S}\se_{I}+k_{1}k_{2}k_{5}\se_{E_p}+k_{1}k_{3}k_{4}\se_{S}+\\ & \qquad k_{1}k_{3}k_{4}\se_{I}+k_{2}k_{3}k_{5}\se_{S}+k_{1}k_{3}k_{5}, \\ q_2(k,\se)&= k_{1}k_{2}k_{4}\se_{E_p}\se_{S_p}+k_{1}k_{2}k_{4}\se_{E_p}\se_{I}+k_{2}k_{3}k_{4}\se_{S}\se_{S_p}+k_{1}k_{2}k_{5}\se_{E_p}+k_{1}k_{3}k_{4}\se_{S_p}+k_{2}k_{3}k_{5}\se_{S}+k_{1}k_{3}k_{5}. \end{align*} By inspecting the signs of these polynomials at positive values of $\se$ and $k$, we conclude that model 1 leads to an increase of the concentration of $I$, while model 2 leads to a decrease. In \cite{paradoxical}, apparently paradoxical results on sign-sensitivities were brought to attention. We recover these phenomena in this work, and encounter a new, surprising counter-intuitive result: the concentration of a species $X$ at steady state might decrease as a function of $X$ itself; that is, the concentration of $X$ might decrease after the addition of $X$ to the system (Section~\ref{hybridhistine kinase:section}). \medskip After exemplifying how to find the sensitivities for different types of perturbations (Section~\ref{sec:sensitivities}), we focus on perturbations of concentrations. 
We argue that, mathematically, it is better posed to discuss responses to a change of an initial concentration rather than to a change of a total amount (Section~\ref{sec:conc}). We then employ recent results from \cite{MullerSigns}, relating the sign of \blue{the determinant of the} Jacobian of a function to sign-vector conditions, to determine, without computing the sensitivities, whether the sign depends on the steady state and parameters of the system (Section~\ref{sec:indep}). We conclude by discussing the existence of multiple steady states and the restriction to stable steady states, which are the only ones observable in an experimental setting (Section~\ref{hybridhistine kinase:section}). \section{Reaction networks} The processes we consider are modeled by \textbf{reaction networks}, which can be seen as directed graphs. Specifically, a reaction network consists of a set of species $\{X_1,\dots,X_n\}$, and a directed graph whose nodes are finite linear combinations of species (called \emph{complexes}). The directed edges are called \emph{reactions}. We let $r$ be the number of reactions. An example of a reaction network, modeling the transfer of phosphate groups from a kinase $E$ to a substrate $S$ that has two phosphorylation sites \cite{SontagSigns}, is: \begin{equation}\label{network:phosphotransfer} S_0 + E_p \ce{<=>} S_1 + E \qquad S_1 + E_p \ce{<=>} S_2 + E. \end{equation} The species of the network are $E, E_p, S_0,S_1,S_2$: $E,E_p$ are the unphosphorylated and phosphorylated forms of the kinase $E$, and $S_0,S_1,S_2$ denote the substrate with no, one or two phosphate groups attached. The complexes are $S_0 + E_p, S_1 + E, S_1 + E_p$ and $S_2 + E$. This network, which is used as a running example, also originates from Sontag's work on sign-sensitivities \cite{SontagSigns}. See \cite{gunawardena-notes,feinbergnotes} for an expanded introduction to reaction networks. The source of a reaction is called the \emph{reactant}, while the target is called the \emph{product}. We assume the set of species is numbered such that each complex $y$ can be identified with a vector in $\R^n$; for instance, the complex $X_1+2 X_2$ is identified with the vector $(1,2,0,\dots,0)$ in $\R^n$, where $n$ is the number of species. In this way, each reaction $y\rightarrow y'$ gives rise to a vector $y'-y$ in $\R^n$, encoding the net production of each species by the reaction. After choosing an order of the set of reactions, these vectors are gathered as the columns of a matrix, called the stoichiometric matrix $N\in \R^{n\times r}$. The stoichiometric matrix for network \eqref{network:phosphotransfer} is \begin{align}\label{eq:N} N=\left(\begin{array}{rrrr} -1&1&0&0\\ 1&-1&-1&1\\ 0&0&1&-1\\ 1&-1&1&-1\\ -1&1&-1&1 \end{array}\right), \end{align} where the species are ordered as $S_0,S_1,S_2,E,E_p$. As is customary in chemical reaction network theory, in this work we model the time evolution of the concentrations of the species in the network by means of a system of ordinary differential equations (ODEs). Specifically, denote by $x_i(t)$ the concentration of $X_i$ at time $t$ (or $x_C(t)$ for a species $C$). One chooses the rate $v_{y\rightarrow y'}(x)$ of each reaction of the network to be a differentiable function from $\R^n_{\geq 0}$ to $\R_{\geq 0}$, and gathers these rates into a function $v(x)$ from $\R^n_{\geq 0}$ to $\R^r_{\geq 0}$, using the established orders of the sets of species and reactions. 
Then the \textbf{evolution equations} for the vector of concentrations $x=(x_1,\dots,x_n)$ take the form \begin{equation}\label{eq:ode} \frac{d x}{dt}= N v(x),\qquad x\in \R^n_{\geq 0}. \end{equation} The vector $v(x)$ often depends on parameters, as exemplified below. Therefore, we often write $v_k(x)$ to indicate the dependence on some vector of parameters $k$, and define $$ f_k(x)= N v_k(x).$$ Under the assumption of \textbf{mass-action kinetics}, we have $$ v_{y\rightarrow y'} (x)= k_{y\rightarrow y'} \prod_{i=1}^n x_i^{y_i},$$ with $0^0=1$. Here, $ k_{y\rightarrow y'}>0$ is called the \emph{reaction rate constant} and is treated as a parameter, since it is often unknown. Let $B$ be the $n\times r$ matrix whose $i$-th column is the reactant vector of the $i$-th reaction. Then, under the assumption of mass-action kinetics, the right-hand side of system \eqref{eq:ode} can be written equivalently as \begin{equation}\label{eq:ode2} N v_k(x)= N \diag(k) x^B, \end{equation} where $x^B\in \R^r_{\geq 0}$ is defined by $(x^B)_j= \prod_{i=1}^n x_i^{y_{i}}$ if $y$ is the reactant of the $j$-th reaction. The reaction rate constants are incorporated in the reaction network as labels of the reactions, such that for the network in \eqref{network:phosphotransfer} we write \begin{equation}\label{network:phosphotransfer2} S_0 + E_p \ce{<=>[k_1][k_2]} S_1 + E \qquad S_1 + E_p \ce{<=>[k_3][k_4]} S_2 + E. \end{equation} We write $x_1,\dots,x_5$ for the concentrations of $S_0,S_1,S_2,E,E_p$ respectively. Under the assumption of mass-action kinetics, the matrix $B$ and the vector $v(x)$ are \begin{equation}\label{eq:B} B=\left(\begin{array}{cccc} 1&0&0&0\\ 0&1&1&0\\ 0&0&0&1\\ 0&1&0&1 \\ 1 & 0 & 1 & 0 \end{array}\right), \qquad v(x)=(k_1x_1x_5, k_2x_2x_4, k_3x_2x_5, k_4x_3x_4 ), \end{equation} which together with the matrix $N$ in \eqref{eq:N} give the following ODE system: \begin{align*} \frac{dx_1}{dt} & = - k_{1} x_1 x_5 +k_{2} x_2 x_4 \\ \frac{dx_2}{dt} & = k_{1} x_1 x_5 - k_{2} x_2 x_4 - k_{3} x_2 x_5 + k_{4} x_3 x_4 \\ \frac{dx_3}{dt} & = k_{3} x_2 x_5 - k_{4} x_3 x_4 \\ \frac{dx_4}{dt} & = k_{1} x_1 x_5 - k_{2} x_2 x_4 + k_{3} x_2 x_5 - k_{4} x_3 x_4 \\ \frac{dx_5}{dt} & = -k_{1} x_1 x_5 + k_{2} x_2 x_4 - k_{3} x_2 x_5 + k_{4} x_3 x_4. \end{align*} It is clear from \eqref{eq:ode} that the vector $\frac{dx}{dt}$ belongs to the column span $\im(N)$ of $N$, called the \emph{stoichiometric subspace}. Thus, given an initial condition $x^0$, the solution to \eqref{eq:ode} is confined to the linear subspace $x^0+\im(N)$. Further, both $\R^n_{\geq 0}$ and $\R^n_{>0}$ are forward-invariant for the trajectories of \eqref{eq:ode} \cite{Sontag:2001}. Each of the sets $(x^0+\im(N)) \cap \R^n_{\geq 0} \subseteq \R^n_{\geq 0}$ is called a \textbf{stoichiometric compatibility class}. In this work we parametrize these sets in two ways. First, choose a matrix $W$ whose rows form a basis of $\im(N)^\perp$. Then, the set $(x^0+\im(N))\cap \R^n_{\geq 0}$ agrees with the subset of $\R^n_{\geq 0}$ defined by the equation $$W x= W x^0,\qquad x\in \R^n_{\geq 0}.$$ This set is independent of the choice of the matrix $W$ and is parametrized by $x^0\in \R^n_{\geq 0}$. Now let $d$ be the dimension of $\im(N)^\perp$. Alternatively, one might take vectors $T=(T_1,\dots,T_d)\in \R^d$ and consider the sets $$ \big\{ x\in \R^n_{\geq 0} \mid Wx = T \big\}.$$ Each such set corresponds to a stoichiometric compatibility class. 
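In practice, a matrix $W$ whose rows form a basis of $\im(N)^\perp$ can be computed as a basis of the left null space of $N$. A minimal sympy sketch for the matrix $N$ in \eqref{eq:N} follows; note that the basis returned by the computer algebra system need not coincide with the matrix $W$ displayed below in \eqref{W}, since bases of $\im(N)^\perp$ are not unique.
\begin{verbatim}
import sympy as sp

# Stoichiometric matrix of the running example,
# species ordered as S0, S1, S2, E, Ep
N = sp.Matrix([[-1,  1,  0,  0],
               [ 1, -1, -1,  1],
               [ 0,  0,  1, -1],
               [ 1, -1,  1, -1],
               [-1,  1, -1,  1]])

# Rows of W span im(N)^perp, the left null space of N
W = sp.Matrix.vstack(*[v.T for v in N.T.nullspace()])
print(W)      # one choice of a matrix of conservation laws (d = 3 rows)
print(W * N)  # the zero matrix, i.e. W N = 0
\end{verbatim}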
The set above, however, depends on both $W$ and $T$; that is, the vector $T$ alone does not characterize the class. We refer to $T$ as the vector of \textbf{total amounts} and to $W$ as a matrix of \textbf{conservation laws}. \medskip A matrix of conservation laws $W$ for network \eqref{network:phosphotransfer} is \begin{align}\label{W} W=\left(\begin{array}{ccccc}1&1&1&0&0\\0&1&2&0&1\\0&0&0&1&1\end{array}\right),\end{align} which gives rise to the following equations for the stoichiometric compatibility class with vector of total amounts $(T_S,T_p,T_E)$: \begin{align}\label{eq:cons1} x_1+x_2+x_3&=T_S, & x_2+2x_3+x_5&=T_p, & x_4+x_5&=T_E. \end{align} The first equation encodes that the substrate is conserved, the third that the kinase is conserved, and the second that each phosphate group resides in $S_1$, $S_2$ or $E_p$, with $S_2$ carrying two of them. Alternatively, we can write the stoichiometric compatibility class of $x^0$ as \begin{align*} x_1+x_2+x_3&=x_1^0+x_2^0+x_3^0, & x_2+2x_3+x_5&=x_2^0+2x_3^0+x_5^0, & x_4+x_5&=x_4^0+x_5^0. \end{align*} In Section~\ref{sec:conc} we illustrate that this second parametrization, in terms of $x^0$, is the more meaningful one when studying sign-sensitivities. In this work, we are interested in the \textbf{positive steady states} of the system \eqref{eq:ode} restricted to a stoichiometric compatibility class. These are defined by the system of equations $$ f_k(x)=0, \qquad Wx = Wx^{0}.$$ Since $d$ equations of $f_k(x)$ are redundant (as $WN=0$), we remove them from the system $f_k(x)=0$ and obtain a system with $n$ equations and $n$ variables, which we write as \begin{equation}\label{eq:F} F_{k,x^0} (x )=0, \end{equation} with the first $n-d$ components of $F_{k,x^0}(x)$ obtained from $f_k(x)$ after removing the redundant equations, and the last $d$ components equal to $W (x-x^0)^{tr}$. \blue{Here \emph{tr} stands for the transpose of a vector or matrix.} If we wish to parametrize stoichiometric compatibility classes with $T\in \R^d$, then we simply write $F_{k,T}(x)=0$ for the corresponding system. For network \eqref{network:phosphotransfer}, we remove the equations corresponding to the species $S_0,S_1,E$, and obtain the following system defining the steady states in the stoichiometric compatibility class of $x^0$: \begin{align} k_{3} x_2 x_5 - k_{4} x_3 x_4 &= 0,\nonumber \\ -k_{1} x_1 x_5 + k_{2} x_2 x_4 - k_{3} x_2 x_5 + k_{4} x_3 x_4 &=0,\nonumber \\ x_1+x_2+x_3- (x_1^0+x_2^0+x_3^0) &=0, \label{eq:Fex} \\ x_2+2x_3+x_5-(x_2^0+2x_3^0+x_5^0)&=0, \nonumber\\ x_4+x_5-(x_4^0+x_5^0) &=0. \nonumber \end{align} We conclude this section with a definition: we say that a steady state $x^*$ is \textbf{degenerate} if the Jacobian of $F_{k,x^*}(x)$ evaluated at $x^*$ is singular, that is, has vanishing determinant. This is equivalent to the Jacobian of the function $f_k(x)$ being singular on $\im(N)$, c.f. \cite[Eq (6.1)]{wiuf-feliu}. \section{Sign-sensitivities}\label{sec:sensitivities} We consider a reaction network with associated ODE system \eqref{eq:ode} and a steady state $\se$. In this section we investigate how the steady state $\se$ changes upon a small perturbation of a parameter of the system, that being either $k$ or $x^0$ (or $T$). Specifically, we consider the vector of parameters $k\in \R^{m}$ of the rate function $v_k(x)$, and the vector of parameters of initial conditions $x^0\in \R^n$, or the vector of parameters of total amounts $T\in \R^d$. 
These live in a subspace $\Omega$ of $\R^{M}$ with $M=m+n$ or $M=m+d$, and we write generically $\alpha\in \Omega$ for the vector of parameters, of either form $(k,x^0)$ or $(k,T)$. We let $\gamma_0\in \Omega$ be the vector of parameters corresponding to our steady state $\se$, such that $F_{\gamma_0}(\se)=0$. \medskip \textbf{A formula for sensitivities. } We consider a continuously differentiable map $$\gamma\colon \ (-\epsilon, \epsilon) \rightarrow \Omega,$$ where $\epsilon>0$, such that $\gamma(0)=\gamma_0$. If $\se$ is non-degenerate, then the Implicit Function Theorem implies that locally around $0$, there is a continuously differentiable curve $\se(s)$ with $\se(0)=\se$ and such that $\se(s)$ is a steady state of the reaction network in the stoichiometric compatibility class with parameters $\gamma(s)$, that is, $F_{\gamma(s)} (\se(s))=0$. The question we address here is how to determine the sign of the derivative of $\se(s)$ with respect to $s$ at $s=0$, which we denote by $\se'(0)$. We let $\gamma'(s)$ denote the derivative of $\gamma$ with respect to $s$. We view $F_{\alpha}(x)$ as a function of both $\alpha$ and $x$, and let $J_{\alpha,1}(x):=\frac{\partial F_{\alpha}(x)}{\partial x}\in \R^{n\times n}$ denote the Jacobian matrix of $F_{\alpha}(x)$ with respect to the vector $x$ and, similarly, $J_{\alpha,2}(x):=\frac{\partial F_{\alpha}(x)}{\partial \alpha}\in \R^{n\times M}$ the Jacobian matrix of $F_{\alpha}(x)$ with respect to the vector $\alpha$. Differentiation of $F_{\gamma(s)} (\se(s))=0$ with respect to $s$ and evaluation at $s=0$ gives \begin{equation} \label{eq:differentiate} J_{\gamma_0,1}(\se) \cdot \se'(0) + J_{\gamma_0,2}(\se) \cdot\gamma'(0) =0. \end{equation} This results in a linear system in the $n$ unknowns $\se'_1(0), \dots, \se'_n(0)$ with coefficient matrix $J_{\gamma_0,1}(\se)$ and constant term $-J_{\gamma_0,2}(\se) \cdot\gamma'(0)$, which we can find if the steady state $\se$ and $\gamma_0$ are given. Since the steady state $\se$ is non-degenerate, the coefficient matrix has full rank $n$, and hence this system has a unique solution. Note that neither the coefficient matrix of the linear system $J_{\gamma_0,1}(\se)$ nor $J_{\gamma_0,2}(\se)$ depends on the specific perturbation $\gamma$. Further, the last $d$ rows of $J_{\gamma_0,1}(\se)$ are $W$. \blue{When $\alpha=(k,x^0)$, the last $d$ rows of $J_{\gamma_0,2}(\se)$ are $(0\, |\,-W)$, where $0$ is the zero matrix of size $d\times m$. Similarly, when $\alpha=(k,T)$, the last $d$ rows of $J_{\gamma_0,2}(\se)$ are $(0\, | \, -I_{d\times d})$. In both cases the upper $n-d$ rows of $J_{\gamma_0,2}(\se)$ are zero in the entries corresponding to $x^0$ (respectively $T$).} \smallskip Using Cramer's rule, $\se'_i(0)$ is expressed as a fraction where the denominator is the determinant of $J_{\gamma_0,1}(\se)$ and the numerator is the determinant of the matrix obtained by replacing the $i$-th column of $J_{\gamma_0,1}(\se)$ by $- J_{\gamma_0,2}(\se) \cdot\gamma'(0) $. When the rate functions are mass-action, $\se'_i(0)$ becomes a rational function in the parameters and the entries of $\se=(\se_1,\dots,\se_n)$. \medskip In the general scenario, neither the steady state $\se$ nor the parameter value $\gamma_0$ is known, and therefore we aim at determining the sign of $\se'_i(0)$ for all values of $\se$ and $\gamma_0$, and at deciding whether this sign is independent of these values. As has been done in several works, e.g. 
\cite{FeliuPlos,Dickenstein:structured}, the set of all positive steady states is studied by means of a \textbf{parametrization} $$ \varphi\colon U \rightarrow \R^n_{>0},$$ such that the image of $\varphi$ is the set of positive steady states (see \cite{FeliuPlos} for strategies to find parametrizations). \medskip We illustrate this framework and the computations with selected perturbations $\gamma$ for our running example \eqref{network:phosphotransfer}. First, note that due to the matrix of conservation laws in \eqref{W}, the steady state equations for $x_1$, $x_2$ and $x_4$ are redundant. Thus, a positive steady state is simply a point $x\in \R^5_{>0}$ satisfying the steady state equations for $x_3$ and $x_5$: \begin{align*} 0 & = k_{3} x_2 x_5 - k_{4} x_3 x_4 & 0 & = -k_{1} x_1 x_5 + k_{2} x_2 x_4 - k_{3} x_2 x_5 + k_{4} x_3 x_4, \end{align*} or equivalently \begin{align}\label{eq:ss} 0 & = k_{3} x_2 x_5 - k_{4} x_3 x_4 & 0 & = -k_{1} x_1 x_5 + k_{2} x_2 x_4. \end{align} Any solution to this system is of the form \begin{align}\label{parametirzation:example} \se= \Big( x_1, x_2, \frac{k_{2}k_{3}x_2^2}{k_{1}k_{4}x_1}, x_4, \frac{k_{2}x_4x_2}{k_{1}x_1}\Big), \end{align} that is, the set of positive steady states is parametrized by $x_1$, $x_2$ and $x_4$, with $U=\R^3_{>0}$. If these three variables are positive, then so is $\se$. Let $\alpha=(k_1,k_2,k_3,k_4, x_1^0,x_2^0,x_3^0,x_4^0,x_5^0) \in \R^9_{>0}$ be the vector of parameters. The function $F_{\alpha}(x)$ is taken to be the left-hand side of the system in \eqref{eq:Fex}. This leads to the following Jacobian matrices: \begin{align} \label{JW} J_{\alpha,1}(x) &=\left(\begin{matrix}0 & k_{3} x_5& - k_{4} x_4 & - k_{4} x_3 & k_{3} x_2 \\ - k_{1} x_5 & k_{2} x_4 - k_{3} x_5 & k_{4} x_4 & k_{2} x_2 + k_{4} x_3 & - k_{1} x_1 - k_{3} x_2 \\ 1 & 1 & 1 & 0 & 0\\ 0 & 1 & 2 & 0 & 1\\ 0 & 0 & 0 & 1 & 1 \end{matrix}\right), \\ J_{\alpha,2}(x) &= \begin{pmatrix} 0&0&x_{{2}}x_{{5}}&-x_{{3}}x_{{4}}&0 &0&0&0&0\\ -x_{{1}}x_{{5}}&x_{{2}}x_{{4}}&-x_{{2}}x_{{5}}&x_{{3}}x_{{4}}&0&0&0&0&0\\ 0&0&0&0&-1&-1&-1&0&0 \\ 0&0&0&0&0&-1&-2&0&-1\\ 0&0&0&0&0 &0&0&-1&-1\end{pmatrix}. \label{JW2} \end{align} \medskip \textbf{Perturbing $k_1$. } We consider first the perturbation that maps $k_1$ to $k_1+s$. This gives $\gamma'(0)=(1,0,0,0,0,0,0,0,0)$ and hence $J_{\alpha,2}(x) \cdot\gamma'(0)$ is simply the first column of $J_{\alpha,2}(x)$ in \eqref{JW2}, which is $(0,-x_1 x_5,0,0,0)^{tr}$. We solve system \eqref{eq:differentiate} with these data and obtain for $\se=(x_1,\dots,x_5)$ that \begin{align*} \se'_1(0) &=\frac {-x_{{1}}x_{{5}} \left( k_{{3}}x_{{2}}+k_{{3}}x_{{5}}+k_{{4}}x_{{3}}+k_{{4}}x_{{4}} \right) }{q(k,x)}, & \se'_3(0) & =\frac {x_{{1}}x_{{5}} \left( -k_{{3}}x_{{2}}+k_{{3}}x_{{5}}-k_{{4}}x_{{3}} \right) }{q(k,x)},\\ \se'_2(0) & =\frac {x_{{1}}x_{{5}} \left( 2\,k_{{3}}x_{{2}}+2\,k_{{4}}x_{{3}}+k_{{4}}x_{{4}} \right) }{q(k,x)},& \se'_4(0) & =\frac {x_{{1}}x_{{5}} \left( 2\,k_{{3}}x_{{5}}+k_{{4}}x_{{4}} \right) }{q(k,x)},\\ \se'_5(0) & =\frac {-x_{{1}}x_{{5}} \left( 2\,k_{{3}}x_{{5}}+k_{{4}}x_{{4}} \right) }{q(k,x)}, \end{align*} where \begin{multline*} q(k,x)=2\,k_{1}k_{3}x_{1}x_{5}+k_{1}k_{3}x_{2}x_{5}+k_{1}k_{3}x_{5}^{2}+k_{1}k_{4}x_{1}x_{4}+k_{1}k_{4}x_{3}x_{5}+k_{1}k_{4}x_{4}x_{5}\\ +2\,k_{2}k_{3}x_{2}x_{4}+2\,k_{2}k_{3}x_{2}x_{5}+k_{2}k_{4}x_{2}x_{4}+2\,k_{2}k_{4}x_{3}x_{4}+k_{2}k_{4}x_{4}^{2}. \end{multline*} We readily see that $\se_1$ and $\se_5$ \blue{decrease}, and $\se_2$ and $\se_4$ \blue{increase} when $k_1$ is slightly increased. 
The sign of $\se_3'(0)$ is not yet determined. But we have not imposed that $\se$ is a steady state. In order to do that, we evaluate $\se_3'(0)$ in the parametrization and obtain that the sign of $\se_3'(0)$ at a steady state is the sign of $$ -k_3x_2+k_3\frac{k_{2}x_4x_2}{k_{1}x_1}-k_4\frac{k_{2}k_{3}x_2^2}{k_{1}k_{4}x_1} = \frac{k_{3} x_2}{k_1x_1} \big( -k_1x_1+k_{2}x_4 - k_{2}x_2\big).$$ Clearly, this expression can be positive, negative or zero, after appropriately choosing $k_1,k_2,x_1,x_2,x_4$. We conclude that the sign of the change of $\se_3$ under this perturbation is not independent of the parameters and variables, and therefore information on the specific value of the steady state is required. \medskip \textbf{Perturbing $x_4^0$. } We now perturb $x_4^0$ (that is, the initial concentration of $E$) by the addition of a small amount $s$. We now have $\gamma'(0)=(0,0,0,0,0,0,0,1,0)$ and hence $J_{\alpha,2}(x) \cdot\gamma'(0)$ is the eighth column of $J_{\alpha,2}(x)$, which is $(0,0,0,0,-1)^{tr}$. We solve the resulting system \eqref{eq:differentiate} and obtain for $\se=(x_1,\dots,x_5)$ that \begin{align*} \se'_1(0)&= \frac{-k_1k_4x_1x_3+k_2k_3x_2^2+k_2k_3x_2x_5+k_2k_4x_2x_4+k_2k_4x_3x_4}{q(k,x)}. \end{align*} After evaluating the numerator of $\se'_1(0)$ in the parametrization, we obtain $$ \frac{2k_2^2 k_3x_2^2x_4}{k_1x_1} + k_2k_4x_2x_4,$$ which only attains positive values. We conclude that the concentration of $S_0$ at steady state increases when an infinitesimal amount of $E$ is added to the system. \medskip We proceed in the same way to determine $\se'_i(0)$ after perturbing each of the reaction rate constants $k_j$ and initial concentrations $x_j^0$ one by one by adding a small amount $s$. The sign-sensitivities are summarised in Table~\ref{table:signs}. A seemingly striking insight from this table is that an increase of a certain species can be paired with either an increase or a decrease of another species, depending on the perturbation applied. For instance, $E_p$ increases ($\se_5'(0)>0)$ after increasing either $x_3^0$ or $x_4^0$, while $E$ decreases ($\se_4'(0)<0$) for the first perturbation and increases ($\se_4'(0)>0$) for the second. This highlights that perturbation studies need to be appropriately interpreted, as it would be wrong to conclude, from the column for $x_3^0$, that $E$ decreases when $E_p$ increases. That is, one needs to pair the direct perturbation with the response, and not two responses to the same perturbation. This ``paradoxical'' result was first pointed out in \cite{paradoxical}. 
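These computations are easily mechanized. The following sympy sketch sets up and solves the linear system \eqref{eq:differentiate} for the perturbation $k_1\mapsto k_1+s$ in the running example; the matrices are those displayed in \eqref{JW} and \eqref{JW2}, and the printed expression should reproduce $\se_1'(0)$ above, up to an overall rearrangement of numerator and denominator.
\begin{verbatim}
import sympy as sp

xs = sp.symbols('x1:6', positive=True)
x1, x2, x3, x4, x5 = xs
k1, k2, k3, k4 = sp.symbols('k1:5', positive=True)

# Components of F_alpha: the steady state equations kept for x3 and x5,
# and the three conservation laws (the constants W x^0 drop out when
# differentiating with respect to x)
F = sp.Matrix([k3*x2*x5 - k4*x3*x4,
               -k1*x1*x5 + k2*x2*x4 - k3*x2*x5 + k4*x3*x4,
               x1 + x2 + x3,
               x2 + 2*x3 + x5,
               x4 + x5])

J1 = F.jacobian(sp.Matrix(xs))      # J_{alpha,1}(x)
# Right-hand side of (eq:differentiate): minus the first column of (JW2)
b = sp.Matrix([0, x1*x5, 0, 0, 0])
sens = J1.solve(b)                  # the vector (se_1'(0), ..., se_5'(0))
print(sp.cancel(sens[0]))
\end{verbatim}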
\begin{table}[t] \begin{center} \begin{tabular} {c||c|c|c|c|c|c|c|c|c|c|} & $k_1$ & $k_2 $ & $k_3$ & $k_4$ & $x_1^0$ & $x_2^0$ & $x_3^0$ & $x_4^0$ & $x_5^0$ \\\hline\hline $\se'_1(0)$ & $-$ & $+$ & \cellcolor{gray!30!white} $+\tau_1$ & \cellcolor{gray!30!white}$-\tau_1$ &$+$&$+$& \cellcolor{gray!30!white} $-\tau_1$& \cellcolor{gray!30!white} $+$& \cellcolor{gray!30!white} $-$\\ \hline $\se'_2(0)$ & $+$ & $-$ & $- $ & $+$ &$+$&$+$&$+$& \cellcolor{gray!30!white} $-\tau_2$&\cellcolor{gray!30!white} $+\tau_2$ \\ \hline $\se'_3(0)$ & \cellcolor{gray!30!white}$-\tau_1$ & \cellcolor{gray!30!white} $+\tau_1$ & $+$ & $-$ & \cellcolor{gray!30!white} $-\tau_1$&$+$&$+$& \cellcolor{gray!30!white} $- $& \cellcolor{gray!30!white} $+$ \\ \hline $\se'_4(0)$ & $+$ & $-$ & $+$ & $-$ & $+ $ & \cellcolor{gray!30!white} $-\tau_2$ & $-$ & $+$ &$+$\\ \hline$\se'_5(0)$ & $-$ & $+$ & $-$ & $+$ & $-$ & \cellcolor{gray!30!white}$+\tau_2$&$+$&$+$&$+$\\ \hline \end{tabular} \end{center} \caption{Sign-sensitivities with respect to adding a small amount to each of the parameters. Each column gives the sign-sensitivity with respect to one parameter. $\tau_1$ is the sign of $k_1x_1 + k_2(x_2-x_4)$ and $\tau_2$ is the sign of $k_1k_4x_1^2-k_2k_3x_2^2$ at the steady state (which can be zero). Gray cells are determined using the parametrization, and for the other cells the sign is determined for all $x\in \R^5_{>0}$.}\label{table:signs} \end{table} \medskip \textbf{General perturbations. } The outlined framework accommodates all types of perturbations, not only those consisting of adding a small amount to one of the parameters. We illustrate this with two perturbations: in the first we scale two reaction rate constants by $1+s$, and in the second a small amount $s$ is added to $x_4^0$ and $x_5^0$ simultaneously. \smallskip First, consider the perturbation of $\alpha=(k_1,k_2,k_3,k_4, x_1^0,x_2^0,x_3^0,x_4^0,x_5^0)$ such that $$\gamma(s)= ((1+s)k_1,k_2,(1+s)k_3,k_4, x_1^0,x_2^0,x_3^0,x_4^0,x_5^0).$$ Then $\gamma'(0)=(k_1,0,k_3,0,0,0,0,0,0)$ and $J_{\gamma_0,2}(\se)\cdot \gamma'(0)$ is the vector $(k_3x_2x_5,-k_1x_1x_5-k_3x_2x_5,0,0,0)^{tr}$. Solving the corresponding system \eqref{eq:differentiate}, we obtain that $\se_1'(0)$ and $\se_5'(0)$ are negative, $\se_3'(0)$ and $\se_4'(0)$ are positive, and $\se_2'(0)$ can be of either sign. Here only the signs of $\se_4'(0)$ and $\se_5'(0)$ can be determined without the parametrization. If instead we consider the perturbation $\gamma(s)= ((1+s)k_1,(1+s)k_2,k_3,k_4, x_1^0,x_2^0,x_3^0,x_4^0,x_5^0)$, then all derivatives become zero, that is, the steady state is invariant under simultaneously scaling $k_1$ and $k_2$ (as is readily seen from \eqref{eq:ss}). \smallskip Consider next the perturbation of $\alpha$ such that $$\gamma(s)=(k_1,k_2,k_3,k_4, x_1^0,x_2^0,x_3^0,x_4^0+s,x_5^0+s).$$ Then $\gamma'(0)=(0,0,0,0,0,0,0,1,1)$ and $J_{\gamma_0,2}(\se)\cdot \gamma'(0)$ is the sum of the last two columns of $J_{\gamma_0,2}(\se)$, namely the vector $ (0,0,0,-1,-2)^{tr}$. Then the solution $\se_i'(0)$ of the corresponding system is the sum of $\se_i'(0)$ for the perturbation $x_4^0 \mapsto x_4^0 +s$ and $\se_i'(0)$ for the perturbation $x_5^0 \mapsto x_5^0 +s$. By Table~\ref{table:signs}, the signs of $\se_4'(0)$ and $\se_5'(0)$ are $+$. We further obtain that the sign of each of $\se_1'(0)$, $\se_2'(0)$ and $\se_3'(0)$ can be any of $-,0,+$. \medskip In this example it is straightforward to decide whether the numerator of $\se_i'(0)$ can attain any sign when the polynomial has coefficients of both signs. 
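For instance, the sign of $\se_3'(0)$ under the perturbation of $k_1$ was found above to be, up to a positive factor, the sign of $-k_1x_1+k_2x_4-k_2x_2$; evaluating this polynomial at two positive points suffices to exhibit both signs. A minimal sympy check (the chosen points are arbitrary):
\begin{verbatim}
import sympy as sp

k1, k2, x1, x2, x4 = sp.symbols('k1 k2 x1 x2 x4', positive=True)
expr = -k1*x1 + k2*x4 - k2*x2   # sign of se_3'(0) up to a positive factor

print(expr.subs({k1: 1, k2: 1, x1: 1, x2: 1, x4: 5}))  # 3, positive
print(expr.subs({k1: 1, k2: 1, x1: 1, x2: 1, x4: 1}))  # -1, negative
\end{verbatim}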
For larger systems, an often successful approach consists in investigating the vertices of the Newton polytope associated with the polynomial. If two of the vertices correspond to monomials with coefficients of opposite signs, then the polynomial attains all signs for positive values of the variables. This strategy has been used in numerous recent works in chemical reaction network theory, e.g. \cite{FeliuPlos,obatake:hopf,conradi:mixed}, and we refer the reader to \cite{FeliuPlos} for an expository account. \begin{remark} In \cite{brehm:sensitivity} the authors provide structural conditions to determine whether a sign-sensitivity is zero, for ODE systems arising from a subclass of rate functions that does not include mass-action. In that work, perturbations of reaction rate constants of the form $k_i\mapsto k_i + s$ are considered using the corresponding equation \eqref{eq:differentiate}. In Metabolic Control Analysis (MCA) \cite{Fell:MCA}, so-called \emph{flux/concentration control coefficients} are considered. The latter measure sensitivity similarly to here: one takes the derivative of the logarithm of a concentration $\se_i$ with respect to the logarithm of another concentration $x_j^0$, or equivalently \[ \se_i'(0) \cdot \tfrac{x_j^0}{\se_i(0)}. \] These are found using \eqref{eq:differentiate} as well, after adjusting the formula. Since $\tfrac{x_j^0}{\se_i(0)}$ is positive, this factor is redundant when considering sign-sensitivities; in MCA, however, it is the value of this (normalized) derivative that is of relevance, and not only its sign. See \cite{gunawardena:MCA} for a gentle introduction to MCA and control coefficients. \end{remark} \section{Perturbing concentrations}\label{sec:conc} In this section we take a closer look at perturbations caused by a change in the stoichiometric compatibility class. The first observation we make is that perturbations of the total amounts might lead to apparently contradictory results. To see this, consider network \eqref{network:phosphotransfer}, with conservation laws and total amounts as given in \eqref{eq:cons1} and corresponding function $F_{k,T}(x)$. The vector of parameters is now $\alpha=(k_1,k_2,k_3,k_4,T_S,T_p,T_E)$. Under the perturbation $\gamma$ of the total amount of phosphorylated proteins, $T_p \mapsto T_p + s$, we have $\gamma'(0)=(0,0,0,0,0,1,0)$ and we obtain $$ \sign(\se_1'(0))= - ,\quad \sign(\se_2'(0))=\pm, \quad \sign(\se_3'(0))=+,\quad \sign(\se_4'(0))= -,\quad \sign(\se_5'(0))= -. $$ We now consider another matrix of conservation laws $W'$, with the same second row as $W$ in \eqref{W}: $$W'=\left(\begin{array}{ccccc}1&0&-1&0&-1\\0&1&2&0&1\\0&0&0&1&1\end{array}\right). $$ We perform the same perturbation on $T_p$, and obtain the following sign-sensitivities: $$ \sign(\se_1'(0))= + ,\quad \sign(\se_2'(0))=+, \quad \sign(\se_3'(0))=+,\quad \sign(\se_4'(0))= \pm,\quad \sign(\se_5'(0))= \pm. $$ Although we did not change the expression for the total amount $T_p$, the sign-sensitivities changed drastically. For example, $S_0$ decreases when $T_p$ is increased for the first matrix of conservation laws, while it increases for the second choice. We conclude that perturbations with respect to total amounts might not be meaningful and need to be appropriately interpreted \blue{as perturbations of the considered system}. \medskip We proceed to investigate perturbations with respect to initial concentrations, and show that in this case the sign-sensitivities do not depend on the choice of the matrix of conservation laws. 
For the rest of the section we let $\alpha=(k,x^0)$. Note that $J_{\alpha,2}(x)$ is independent of $x^0$ since $F_\alpha(x)$ is linear in $x^0$. \begin{lemma} Consider the perturbation $\gamma$ sending $x_i^0$ to $x_i^0+s$, and being the identity on the other parameters. For $j=1,\dots,n$, the derivative $\se_j'(0)$ does not depend on the basis of $\im(N)^\perp$ used to construct the function $F_\alpha(x)$. \end{lemma} \begin{proof} Let $W,W'$ be two matrices of conservation laws. Then there exists an invertible $d\times d$ matrix $A$ such that $W'=AW$. Let $F_{\alpha}(x)$ and $F'_{\alpha}(x)$ be the corresponding steady state functions from \eqref{eq:F}, and denote by $J,J'$ (with the appropriate subindices) their respective Jacobian matrices. Then $$ J'_{\gamma_0,1}(\se) \cdot \se'(0) + J'_{\gamma_0,2}(\se) \cdot\gamma'(0) = \left(\begin{array}{cc}I_{n-d}& 0\\ 0&A\end{array}\right) \Big(J_{\gamma_0,1}(\se) \cdot \se'(0) + J_{\gamma_0,2}(\se) \cdot\gamma'(0)\Big), $$ where $I_{n-d}$ is the identity matrix of size $n-d$ (see the text after \eqref{eq:differentiate}). Since $ \left(\begin{array}{cc}I_{n-d}&0\\ 0&A\end{array}\right)$ is invertible, the solutions to \eqref{eq:differentiate} for $W$ and $W'$ agree. \end{proof} Having established that $\se_j'(0)$ does not depend on the choice of the matrix of conservation laws, we can easily prove a series of lemmas by appropriately selecting this matrix. \blue{First, we note that if a concentration does not take part in any conservation law, then all sensitivities with respect to changes of this concentration are zero. } \begin{lemma}\label{lem:zero} Consider the perturbation sending $x_i^0$ to $x_i^0+s$, and being the identity on the other parameters. If the $i$-th component of all vectors in $\im(N)^\perp$ is zero, then $\se_j'(0)=0$ for all $j=1,\dots,n$. In other words, the sign-sensitivities are all zero. \end{lemma} \begin{proof} By hypothesis, the $i$-th column of any matrix of conservation laws is zero. Consequently $J_{\gamma_0,2}(\se) \cdot\gamma'(0)$ is the zero vector and the only solution to \eqref{eq:differentiate} is the zero vector. \end{proof} In the next proposition we discuss perturbations with respect to an initial concentration that only appears in one conservation law, and how they relate to the perturbation with respect to the corresponding total amount. \begin{proposition}\label{prop:form} Assume the matrix of conservation laws $W=(w_{j,i})$ is such that the $i$-th column has only one non-zero entry, that is, there exists $\ell$ such that \begin{align*} w_{\ell',i} & =0 \quad \textrm{for }\ell'\neq \ell \quad\textrm{and}\quad w_{\ell,i} \neq 0. \end{align*} Let $M_j$ be the minor of $ J_{\gamma_0,1}(\se)$ obtained by removing the $j$-th column and the $(n-d+\ell)$-th row, divided by $\det J_{\gamma_0,1}(\se)$. Then \begin{itemize} \item $\se_j'(0)$ for the perturbation $\gamma_i$ sending $x_i^0$ to $x_i^0+s$ equals $(-1)^{n-d+\ell+j}w_{\ell,i} M_j$. \item $\se_j'(0)$ for the perturbation $\gamma^*_\ell$ sending $T_\ell$ to $T_\ell+s$ equals $(-1)^{n-d+\ell+j} M_j$. \end{itemize} \end{proposition} \begin{proof} The statement of the proposition follows after noticing that $J_{\gamma_0,2}(\se) \cdot\gamma_i'(0)$ is the vector with $-w_{\ell,i}$ in the $(n-d+\ell)$-th entry and zero everywhere else, and that the vector $J_{\gamma_0,2}(\se) \cdot (\gamma^*_\ell)'(0)$ has $-1$ in the $(n-d+\ell)$-th entry and zero otherwise. 
\end{proof} In particular, it follows from Proposition~\ref{prop:form} that if $w_{\ell,i}>0$, then the perturbations with respect to $x_i^0$ and $T_\ell$ yield sensitivities with the same sign. As a consequence, if the matrix $W$ is row reduced, then the perturbation given by a slight increase of a total amount can be interpreted as the perturbation with respect to $x_i^0$, for $i$ the index of the first non-zero entry of the corresponding conservation law. Hence, the value of the perturbation with respect to this total amount is well defined under the restriction that the other conservation laws do not involve $x_i$. An immediate consequence of Proposition~\ref{prop:form} is that \blue{if two columns $i,i'$ of $W$ each have only one non-zero entry, located in the $\ell$-th row, then the sensitivities with respect to $x_i^0$ and $x_{i'}^0$ agree up to the ratio of these non-zero entries.} This is summarized in the following lemma. \begin{lemma}\label{corollary} If there exists a basis $\{w_1,\dots,w_d\}$ of $\im(N)^\perp$ and three indices $i,i',\ell$ such that \begin{align*} w_{\ell',i} & =w_{\ell',i'}=0 \quad \textrm{for }\ell'\neq \ell & w_{\ell,i} & \neq 0, & w_{\ell,i'} &\neq 0, \end{align*} then $\se_j'(0)$ for the perturbation $\gamma_i$ sending $x_i^0$ to $x_i^0+s$ agrees with that for the perturbation $\gamma_{i'}$ sending $x_{i'}^0$ to $x_{i'}^0+s$ times $w_{\ell,i}/w_{\ell,i'}$. \end{lemma} \begin{proof} By Proposition~\ref{prop:form}, the numerator of $\se_j'(0)$ for the perturbation $\gamma_i$ is $(-1)^{n-d+\ell + j}w_{\ell,i}$ times the minor of $ J_{\gamma_0,1}(\se)$ obtained by removing the $j$-th column and the $(n-d+\ell)$-th row, while for $\gamma_{i'}$ this same minor is multiplied by $(-1)^{n-d+\ell+j} w_{\ell,i'}$. \end{proof} As an example, consider the two rival models in the introduction. For model 1, a matrix of conservation laws is $$W= \begin{pmatrix}1 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 1 & 1\end{pmatrix},$$ with the order of species $E, E_p, S_0,S_1,I,Y$. The pairs $(E,E_p)$ and $(S_0,S_1)$ satisfy the hypothesis of Lemma~\ref{corollary}. Therefore, for any $j$, the sign of $\se_j'(0)$ is the same when either $E$ or $E_p$ is increased, and similarly, is the same when either $S_0$ or $S_1$ is increased. \section{Parameter-independent sign-sensitivities}\label{sec:indep} Although the computation of $\se_i'(0)$ by solving system \eqref{eq:differentiate} might seem straightforward, it requires the computation and analysis of two symbolic determinants. As noticed in \cite{feliu-bioinfo,baudier:biomodels}, these computations are expensive for relatively large reaction networks, such as those encountered in applications. In this section we investigate an alternative approach to decide whether the sign of $\se_i'(0)$ depends neither on the value of the parameters nor on $\se = (x_1,\dots,x_n)\in \R^n_{>0}$, that is, without imposing that $\se$ is a steady state. Once it has been established that the sign is independent of the parameters and $x$, it can easily be determined by arbitrarily choosing values. The subsequent results are based on the study of \emph{injective networks} and sign vectors, as presented in \cite{MullerSigns}. For that, some notation needs to be introduced. The sign-vector $\sigma(v)$ of a vector $v$ is obtained by taking signs component-wise. 
For $V\subseteq \R^n$, let $\sigma(V)$ be the set of sign vectors of all elements in $V$, and let $\Sigma(V)$ be the subset of $\R^n$ consisting of all, possibly lower dimensional, orthants that $V$ intersects. Consider a function on $\R^n_{>0}$ of the form $$ g_k(x)=N \diag(k) x^B,$$ with $N\in \R^{n\times r}$ of rank $n-d$, $B\in \R^{n\times r}$, $k\in \R^r_{>0}$. Let $S\subseteq \R^n$ be a vector subspace of dimension $d$ and $N'$ be a submatrix of $N$ given by $n-d$ linearly independent rows of $N$ (such that $\ker(N)=\ker(N')$). Define a function $G_k(x)$ with the first $n-d$ components equal to $N'\diag(k) x^B$, and the last $d$ components $Wx^{tr}$, for $W$ any matrix whose rows form a basis of $S^\perp$. Then the determinant of the Jacobian of $G_k$ is a polynomial in $k$ and $x$ such that all coefficients have the same sign if and only if \begin{equation}\label{eq:sign} \sigma(\ker(N))\cap\sigma\big(B^{tr}(\Sigma(S \backslash\{0\}))\big)=\emptyset \end{equation} (see \cite{MullerSigns}). Further, if these equivalent conditions hold, the function $G_k(x)$ is injective on all cosets $(x^0+S)\cap \R^n_{>0}$ for any choice of $k$. \blue{To understand how \eqref{eq:sign} arises, one first notes that the determinant of the Jacobian $J_{G_k}$ of $G_k$ is a polynomial in $k$ and $x$ such that all coefficients have the same sign if and only if it never vanishes for positive $k$ and $x$. Vanishing of $\det J_{G_k}$ means that the Jacobian of $G_k$ has a non-trivial kernel, that is, there exists a non-zero vector $u$ in $\ker (J_{g_k})$ which further satisfies $Wu=0$, i.e. $u\in S\setminus \{0\}$. Using $J_{g_k}(x)=N \diag(k) B^{tr} \diag(\tfrac{1}{x})$ (after absorbing the positive factors $x^B$ into $k$), we have $u\in \ker(J_{g_k}(x))$ if and only if $\ker(N)$ contains $\diag(k) B^{tr} \diag(\tfrac{1}{x})u.$ Condition \eqref{eq:sign} arises from noticing that, as $x$ ranges over $\R^n_{>0}$ and $u$ over $S\setminus \{0\}$, the vectors $\diag(\tfrac{1}{x})u$ sweep all orthants that $S\setminus \{0\}$ intersects, that is, $\Sigma(S \backslash\{0\})$, and that a vector $k$ such that $\diag(k) B^{tr} \diag(\tfrac{1}{x})u$ belongs to $\ker(N)$ exists if and only if $B^{tr} \diag(\tfrac{1}{x})u$ has the sign of some vector in $\ker(N)$. For details of this construction we refer the reader to \cite{MullerSigns}. } When \blue{applying \eqref{eq:sign}} to a reaction network with mass-action kinetics, we consider $S=\im(N)$, and if \eqref{eq:sign} holds, then the reaction network is said to be injective. By verifying the sign condition \eqref{eq:sign} with $N$ the stoichiometric matrix, $S=\im(N)$, and $B$ the exponent matrix in \eqref{eq:ode2}, we can determine whether the sign of the determinant of $J_{\alpha,1}(x)$, and hence of the denominator of $\se_i'(0)$, is constant. We emphasize that we do not impose that $x$ is a steady state in the computations of this section. In order to study the numerator, \blue{we interpret it as the determinant of the Jacobian of a function of the form $g_k(x)$ as above, with the same coefficient matrix $N$, a suitable exponent matrix $B_j$ and a suitable vector space $S_j$, and afterwards apply \eqref{eq:sign}. The specific form of these objects is given in the next proposition. } \begin{proposition}\label{prop:signs} Assume mass-action kinetics. Consider the perturbation $\gamma_i$ and assume that $\im(N)^\perp$ contains vectors with non-zero $i$-th entry. Let $W$ be a matrix of conservation laws such that the only row where the $i$-th component is non-zero is the first, where it takes the value $1$. 
Let $S_{j}\subseteq\R^{n-1}$ be the kernel of the matrix obtained from $W$ by deleting the first row and the $j$-th column, and let $B_j$ be obtained from $B$ by removing the $j$-th row. Then the sign of the numerator of $\se_j'(0)$, as a function of $k$ and $x$, is independent of $k$ and $x$ if and only if \[\sigma(\ker(N))\cap\sigma(B_j^{tr}(\Sigma(S_{j}\backslash\{0\})))=\emptyset.\] \end{proposition} \begin{proof} Let $\widehat{x}=(x_1,\dots,x_{j-1},x_{j+1},\dots x_n)\in \R^{n-1}$ be the vector $x$ with the $j$-th entry deleted. Let $N'$ be a matrix formed by $n-d$ linearly independent rows of $N$ (such that $\ker(N)=\ker(N')$), and for $\ell=1,\dots,r$, let $\widehat{k}_\ell = k_\ell x_j^{y_j}$ if the reactant of the $\ell$-th reaction is $y$. With the choice of $W$ and the considerations before Lemma~\ref{corollary}, the numerator of $\se_j'(0)$ is, up to a constant sign, the determinant of the submatrix $J'$ of $ J_{\alpha,1}(\se)$ obtained by removing the $j$-th column and the $(n-d+1)$-th row. This matrix $J'$ agrees with the Jacobian of the function $G_k(\widehat{x})$ on $\R^{n-1}$ with the first $n-d$ entries equal to $N' \diag\big(\, \widehat{k}\, \big) \widehat{x}^{B_j}$ and the bottom $d-1$ entries equal to $W' \widehat{x}^{tr}$, where $W'$ is the matrix obtained from $W$ by deleting the first row and the $j$-th column. As recalled in \eqref{eq:sign}, by \cite{MullerSigns}, the sign of the determinant of $J'$ depends neither on $\widehat{k}$ nor on $\widehat{x}$, hence neither on $k$ nor on $x$, if and only if the sign condition in the statement holds. \end{proof} To illustrate this result, we consider network \eqref{network:phosphotransfer}, the perturbation of $x_2^0$ by adding $s$, and focus on $\se_1'(0)$. By Table~\ref{table:signs}, we already know that the sign of $\se_1'(0)$ is $+$, and that this holds for any $x$ without imposing the steady state condition. In particular, in the notation of Proposition~\ref{prop:signs}, $i=2$ and $j=1$. The kernel of $N$ in \eqref{eq:N} is generated by the vectors $(1,1,0,0)$ and $(0,0,1,1)$, and hence for any $u=(u_1,u_2,u_3,u_4)$ in $\ker(N)$, the signs of $u_1$ and $u_2$ agree, and the signs of $u_3$ and $u_4$ agree. The matrix $B_1$ obtained by removing the first row of $B$ in \eqref{eq:B} and a matrix of conservation laws satisfying the hypothesis of Proposition~\ref{prop:signs} are given as $$ B_1=\left(\begin{array}{cccc} 0&1&1&0\\ 0&0&0&1\\ 0&1&0&1 \\ 1 & 0 & 1 & 0 \end{array}\right),\qquad W=\left(\begin{array}{ccccc} 1&1&1&0&0\\ -1&0&1&0&1\\ 0&0&0&1&1\end{array}\right).$$ Hence $S_1=\ker\left(\begin{array}{cccc} 0&1&0&1\\ 0&0&1&1\end{array}\right)$ is generated by $(1,0,0,0)$ and $(0,-1,-1,1)$. In particular, for any vector $(a,b,c,d)$ in $\Sigma(S_{1}\backslash\{0\})$, at least one entry is non-zero, and the signs of $b$ and $c$ agree and are opposite to the sign of $d$, unless $b,c,d$ are all zero. Now, $B_1^{tr}$ times $(a,b,c,d)^{tr}$ is the vector $u=(d,a+c,a+d,b+c)^{tr}$. If $d$ is positive, then $b$, $c$ and $b+c$ are negative, and for $u$ to have the sign of a vector in $\ker(N)$, it is necessary that $a+d$ is negative (hence $a$ negative) and $a+c$ is positive (hence $a$ positive), a contradiction. Similarly, we argue that if $d$ is negative, then $u$ does not have the sign of any vector in $\ker(N)$. Finally, if $d$ is zero, then so are $b,c$, and hence $u=(0,a,a,0)$, which has the sign of a vector in $\ker(N)$ only if $a=0$, a contradiction. 
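The same case analysis can be explored computationally. The following sketch randomly samples points of $B_1^{tr}(\Sigma(S_1\setminus\{0\}))$ (recall that $\Sigma(S_1\setminus\{0\})$ consists of the coordinate-wise positive scalings of the non-zero vectors of $S_1$) and checks their sign vectors against $\sigma(\ker(N))$. A random search of this kind can only falsify the emptiness condition of Proposition~\ref{prop:signs}, so finding no match is consistent with, but not a proof of, the condition; in particular, sign vectors with zero entries are not sampled.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

B1T = np.array([[0, 0, 0, 1],    # transpose of the matrix B_1 above
                [1, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 1, 1, 0]])

basis_S1 = np.array([[1, 0, 0, 0],
                     [0, -1, -1, 1]])

def in_sigma_kerN(s):
    # ker(N) is spanned by (1,1,0,0) and (0,0,1,1), so its sign vectors
    # are exactly those with s1 == s2 and s3 == s4
    return s[0] == s[1] and s[2] == s[3]

hits = 0
for _ in range(100000):
    u = rng.normal(size=2) @ basis_S1      # random point of S_1
    lam = rng.uniform(0.1, 10.0, size=4)   # positive coordinate scalings
    w = B1T @ (lam * u)
    if in_sigma_kerN(np.sign(w)):
        hits += 1
print("sign matches found:", hits)         # expected: 0
\end{verbatim}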
We have therefore verified that the sign condition in Proposition~\ref{prop:signs} holds, and hence the sign of $\se_1'(0)$ depends neither on $k$ nor on $x$. Clearly, finding the sign vectors by hand is far from optimal. In \cite{MullerSigns}, see also \cite{dickenstein:messi}, strategies to verify whether this condition holds are presented. \section{Stable vs unstable steady states}\label{hybridhistine kinase:section} In the previous sections we have not taken into consideration whether the steady states are asymptotically stable or unstable. In practice, in an experimental setting, only stable steady states are observable. Although it is often not possible to restrict parametrizations of the set of steady states to only the stable ones, some relevant information can be extracted from the sign of the determinant of the Jacobian $J_{\alpha,1}(x)$. Specifically, assume the function $F_{\alpha}(x)$ is constructed from a matrix of conservation laws $W$ that is row reduced, and let $i_1,\dots,i_d$ be the indices of the first non-zero entries of the rows of $W$. The first $n-d$ entries of $F_\alpha(x)$ can be chosen to be the entries of $f_k(x)$ with index different from $i_1,\dots,i_d$. Let $\tau=\sum_{\ell=1}^d (n-\ell - i_\ell)$, so that $(-1)^\tau$ is the sign of the permutation that reorders the entries of $F_\alpha(x)$ such that the entries defined by $W$ are at positions $i_1,\dots,i_d$. Then, by \cite[Prop. 5.3]{wiuf-feliu}, the determinant of $J_{\alpha,1}(x)$ is $(-1)^\tau$ times the product of the $n-d$ nonzero eigenvalues of the Jacobian of $f_k$ evaluated at $x$. Hence, if the steady state is hyperbolic and asymptotically stable, then necessarily the sign of this determinant is $(-1)^{\tau + n-d}$. In the previous examples, the determinant of the Jacobian $J_{\alpha,1}(x)$ at a steady state had a constant sign, which was actually $(-1)^{\tau+n-d}$, and hence in accordance with stability. We now illustrate by means of an example what can be said about sensitivities when the network has unstable steady states. For that, we consider a simple model of a hybrid histidine kinase from \cite{feliu:unlimited}. The network is depicted in Figure~\ref{fig:HK}(a). Under mass-action kinetics, there exist stoichiometric compatibility classes for which this network has three positive steady states \cite{feliu:unlimited}, two of which are asymptotically stable \cite{torres:stability}. Further, by \cite{FeliuPlos}, the network admits three positive steady states in some stoichiometric compatibility class if and only if $k_3>k_1$. If $k_1\geq k_3$, then the network has exactly one positive steady state in each stoichiometric compatibility class. 
\begin{figure}[t] \begin{minipage}[b]{0.45\textwidth} \begin{center} \begin{align*} {\rm HK}_{00} \ce{->[k_1]} {\rm HK}_{p0} & \ce{->[k_2]} {\rm HK}_{0p} \ce{->[k_3]} {\rm HK}_{pp} \\ {\rm HK}_{0p} +{\rm RR} & \ce{->[k_4]} {\rm HK}_{00} +{\rm RR}_p \\ {\rm HK}_{pp} +{\rm RR} & \ce{->[k_5] }{\rm HK}_{p0} +{\rm RR}_p\\ {\rm RR}_p & \ce{->[k_6]} {\rm RR} \end{align*} (a) \end{center} \end{minipage} \begin{minipage}[b]{0.45\textwidth} \begin{center} \begin{tabular} {c||c|c|} & $x_1^0$ & $x_5^0$ \\\hline\hline $\se'_1(0)$ & \cellcolor{gray!30!white} $\pm$ & $+$ \\ \hline $\se'_2(0)$ & $+$ & \cellcolor{gray!30!white} $\pm^*$ \\ \hline $\se'_3(0)$ & $+$ & \cellcolor{gray!30!white} $\pm$ \\ \hline $\se'_4(0)$ & $+$ & $-$ \\ \hline $\se'_5(0)$ & $-$ & $+$ \\ \hline $\se'_6(0)$ & $+$ & \cellcolor{gray!30!white} $\pm^*$ \\ \hline \end{tabular} \medskip (b) \end{center} \end{minipage} \caption{(a) A simple network of a hybrid histidine kinase, taken from \cite{feliu:unlimited}. (b) Sign-sensitivities with respect to an increase of $x_1^0$ and $x_5^0$. $\pm^*$ means that the sign is $+$ when $k_1\geq k_3$, that is, when the network has exactly one positive steady state. }\label{fig:HK} \end{figure} We order the species as HK$_{00}$, HK$_{p0}$, HK$_{0p}$, HK$_{pp}$, RR and RR$_{p}$, and let $x_1,\dots,x_6$ denote their concentrations respectively. Following \cite{FeliuPlos}, the set of positive steady states admits a parametrization in terms of $x_1$ and $x_5$, obtained by solving the steady state equations of $x_2,x_3,x_4,x_6$ in these variables: \[x_{2}=\frac {k_{1}x_{1}( k_{4}x_{5}+k_{3}) }{k_{2}k_{4}x_{5}}, \quad x_{3}=\frac {k_{1}x_{1}}{k_{4}x_{5}}, \quad x_{4}=\frac{k_1k_3x_{1}}{k_{4}k_{5}x_{5}^{2}}, \quad x_{6}=\frac {k_{1}x_{1} ( k_{4}x_{5}+k_{3}) }{k_{4}k_6x_{5}}.\] We choose the matrix of conservation laws $$ W=\begin{pmatrix} 1& 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix},$$ and construct $F_{k,x^0}(x)$ with first four components equal to $f_2,f_3,f_4,f_6$. The determinant of $J_{\alpha,1}(x)$ evaluated at the parametrization yields: \begin{align*} \det J_{\alpha,1}(x_1,x_5) &= -(k_{1}-k_{3})k_{1}k_{2}k_{5}x_{1} -(k_{1}+k_2)k_{4}k_{5}k_{6}x_{5}^{2} -k_{1}(k_{2}+k_3)k_{5}k_{6}x_{5}-k_{1}k_{2}k_{3}k_{6} \\ & \qquad -\frac{2\,k_{1}^{2}k_{2}k_{3}x_{1}}{x_{5}}-\frac {k_{1}^{2}k_{2}k_{3}^{2}x_{1}}{x_{5}^{2}k_{4}}. \end{align*} Here we see that if $k_1\geq k_3$, then this determinant is negative, which is precisely the sign it attains when the steady state is asymptotically stable and hyperbolic. Indeed, in this case $n-d=4$ and $(-1)^\tau=(-1)^{(6-1-1) + (6-2-5)} = (-1)^{3}=-1$. If $k_3>k_1$, then the stable steady states will necessarily satisfy that the sign of $\det J_{\alpha,1}(x_1,x_5)$ is negative. Using this, we proceed as above to compute the sign of the sensitivities with respect to adding a small amount to each of the $x_i^0$. By Lemma~\ref{corollary}, it is enough to compute the sensitivities with respect to perturbing $x_1^0$ (which agrees with the perturbations with respect to $x_2^0$, $x_3^0$ and $x_4^0$) and $x_5^0$ (which agrees with $x_6^0$). Figure~\ref{fig:HK}(b) shows the obtained sign-sensitivities under the assumption that $\det J_{\alpha,1}(x_1,x_5) $ is negative. If this determinant is positive, then all signs are reversed, but this implies that the steady state is unstable. An apparently surprising property of this network is that the addition of $\HK_{00}$, that is, $x_1^0$, might lead to the decrease of $\HK_{00}$. 
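As a numerical illustration of this last claim, the following sketch integrates the mass-action ODEs of the network in Figure~\ref{fig:HK}(a) with all $k_i=1$, starting from the steady state with $x_1=2$, $x_5=1$ (which equals $(2,4,2,2,1,4)$ by the parametrization above) after a small amount of ${\rm HK}_{00}$ has been added; the perturbation size and the integration horizon are arbitrary choices made for the sketch.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # x1=HK00, x2=HKp0, x3=HK0p, x4=HKpp, x5=RR, x6=RRp; all k_i = 1
    x1, x2, x3, x4, x5, x6 = x
    v = [x1, x2, x3, x3*x5, x4*x5, x6]   # rates of the six reactions
    return [-v[0] + v[3],
             v[0] - v[1] + v[4],
             v[1] - v[2] - v[3],
             v[2] - v[4],
            -v[3] - v[4] + v[5],
             v[3] + v[4] - v[5]]

ss = np.array([2.0, 4.0, 2.0, 2.0, 1.0, 4.0])   # steady state (x1=2, x5=1)
eps = 1e-2                                      # small addition of HK00
x0 = ss + np.array([eps, 0, 0, 0, 0, 0])
sol = solve_ivp(rhs, (0.0, 500.0), x0, rtol=1e-10, atol=1e-12)
print(sol.y[0, -1] - ss[0])   # negative: HK00 settles below its old value
\end{verbatim}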
To inspect this phenomenon more closely, note that, using the parametrization, $\se_1'(0)$ at the steady state defined by $x_1, x_5$ is $$\se_1'(0)= k_2k_5(k_1k_3x_1 - k_4k_6 x_5^2) \, / \, \det J_{\alpha,1}(x_1,x_5). $$ By letting $k_1=\dots =k_6=1$, the system has exactly one positive steady state in each stoichiometric compatibility class and $\det J_{\alpha,1}(x_1,x_5)<0$. The $x_1$-component of the steady state defined by $x_1=2,x_5=1$ will decrease after a small amount of $x_1$ is added to the system. On the other hand, the $x_1$-component of the steady state defined by $x_1=1,x_5=2$ will increase. \medskip \textbf{Two-site sequential and distributive phosphorylation cycle. } We conclude with one extra example, in which we analyze the sign-sensitivities of a classical model. We consider the reaction network in which a substrate $S$ becomes doubly phosphorylated by a kinase $E$ and dephosphorylated by a phosphatase $F$. We let $S_0,S_1,S_2$ be the three phosphoforms of $S$ with $0,1,2$ phosphorylated sites, respectively. This gives rise to the following reactions \cite{Wang:2008dc,conradi-mincheva}: \begin{align}\label{eq:network} \begin{split} S_0 + E \ce{<=>[k_1][k_2]} ES_0 \ce{->[k_3]} S_1+E \ce{<=>[k_7][k_8]} ES_1 \ce{->[k_9]} S_2+E \\ S_2 + F \ce{<=>[k_{10}][k_{11}]} FS_2 \ce{->[k_{12}]} S_1+F \ce{<=>[k_4][k_5]} FS_1 \ce{->[k_6]} S_0+F. \end{split} \end{align} We order the species as $E,F,S_0,S_1,S_2,ES_0,FS_1,ES_1,FS_2$ and let $x_1,\dots,x_9$ denote their concentrations respectively. A matrix of conservation laws is $$ W = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix}.$$ The set of positive steady states admits a positive parametrization in $x_1,x_2,x_3$, obtained by solving the system $f_4=\dots=f_9=0$ in $x_4,\dots,x_9$, where $f_i$ is the mass-action evolution equation for $x_i$. It is well known that this network admits between one and three positive steady states in each stoichiometric compatibility class, e.g. \cite{Wang:2008dc,conradi-mincheva}. We consider $\det J_{\alpha,1}(x)$ evaluated at the parametrization, and assume it is negative; this is namely the sign of the determinant when the steady state is asymptotically stable and hyperbolic. Under this assumption, we compute the sign-sensitivities $\se_1'(0),\dots,\se_5'(0)$ with respect to a small increase of $x_1^0$, $x_2^0$ and $x_3^0$, and obtain that none of them is given by a rational function with a numerator of fixed sign, indicating that all signs might be possible for this system. However, it is not straightforward to analyze the sign of the numerators while imposing that $\det J_{\alpha,1}(x)$ is negative. Nevertheless, a few cases have a nice and simple form. Specifically: \begin{itemize} \item With respect to adding $x_3^0$, that is, adding substrate $S_0$, we have that $\se_1'(0)>0$, $\se_2'(0)>0$ and $\se_4'(0)<0$ if $k_3k_{12}\geq k_6k_9$. \item With respect to adding $x_2^0$, that is, adding phosphatase $F$, we obtain that $\se_3'(0)<0$ if $k_3k_{12}\leq k_6k_9$, and $\se_5'(0)>0$ if $k_3k_{12}\geq k_6k_9$. \item Symmetrically, with respect to adding $x_1^0$, that is, adding kinase $E$, we obtain that $\se_3'(0)>0$ if $k_3k_{12}\leq k_6k_9$, and $\se_5'(0)<0$ if $k_3k_{12}\geq k_6k_9$. \end{itemize} \section*{Acknowledgements } This work has been partially supported by the Independent Research Fund of Denmark. 
Janne Kool is thanked for her input and development of the ideas presented in this manuscript, and especially for pointing out the problem with perturbations with respect to total amounts. Beatriz Pascual Escudero is thanked for comments on a preliminary version of this manuscript, and in particular for suggesting the inclusion of Proposition~\ref{prop:form}.
\section*{INTRODUCTION} The tumor suppressor p53 plays a central role in cellular responses to various stresses, such as oxidative stress, hypoxia, telomere erosion and DNA damage \cite{Levine2009,Junttila2009}. As a powerful transcription factor, p53 primarily functions by inducing the transcription of many different downstream genes, including p21/WAF1/CIP1 and GADD45, which are involved in cell cycle arrest, and PUMA, Bax and PIG3, which induce apoptosis \cite{Hollstein1991,Laptenko2006,Hanahan2000}. p53 can also control apoptosis through a transcription-independent mechanism \cite{Mihara2003}. Fine control of p53 activity is crucial for proper cellular responses. In unstressed cells, p53 is maintained at low levels via interactions with E3 ubiquitin ligases, such as Mdm2 \cite{Kubbutat1997}, Pirh2 \cite{Leng2003}, COP1 \cite{Dornan2004} and ARF-BP1 \cite{Chen2005}, which mediate p53 degradation through the ubiquitin-proteasome pathway. Under stressed conditions such as DNA damage, p53 is stabilized and activated to induce the expression of downstream target genes. This process leads to different cellular responses such as cell cycle arrest and apoptosis; the former facilitates DNA repair and promotes cell survival, whereas the latter provides an efficient way to remove damaged cells \cite{Rich2000}. These processes are tightly controlled by its binding partners and post-translational modifications \cite{Meek2009}. For example, upon the occurrence of DNA double-strand breaks (DSBs), the DSB detector ATM is activated and induces the phosphorylation of p53 and Mdm2 \cite{Bakkenist2003, Kitagawa2005,Prives1998,Stommel2004}. Phosphorylation of p53 and Mdm2 inhibits Mdm2-mediated p53 degradation and therefore stabilizes p53. p53 can be phosphorylated or acetylated at multiple sites by different protein kinases, and its stability and sequence-specific DNA binding activity are modulated through these processes \cite{Bode2004}. Phosphorylation at Ser15 by ATM/ATR leads to cell cycle arrest \cite{Abraham2001}, whereas further phosphorylation at Ser46 by HIPK2 promotes the expression of pro-apoptotic genes such as \textit{p53AIP1} \cite{Pomerening2005}. Acetylation of p53 at Lys120 by Tip60 induces the expression of pro-apoptotic genes such as \textit{bax} and \textit{puma} \cite{Tang2006}. Programmed Cell Death 5 (PDCD5; formerly referred to as TF-1 cell apoptosis-related gene 19 (TFAR19)) is known to promote apoptosis in different cell types in response to various stimuli and also to enhance TAJ/TROY-induced paraptosis-like cell death \cite{Liu1999,Wang2004}. PDCD5 is rapidly upregulated following apoptotic stimuli and translocates from the cytoplasm to the nucleus during early apoptosis \cite{Chen2001}. Decreased expression of PDCD5 has been detected in various human tumors, including lung cancer \cite{Spinola2006}, gastric cancer \cite{Yang2006}, chronic myelogenous leukemia \cite{Ruan2006}, prostate cancer \cite{Du2009}, epithelial ovarian carcinoma \cite{Zhang2011}, astrocytic glioma \cite{Li2008} and chondrosarcoma \cite{Chen2010}. Moreover, the restoration of PDCD5 with recombinant protein or an adenovirus expression vector can significantly sensitize different cancers to chemotherapies \cite{Ruan2008,Chen2010,Shi2010,Wang2009}. Thus, PDCD5 likely plays a critical role in multiple tissues during tumorigenesis. However, the molecular mechanisms that underlie the function of PDCD5 during cell growth, proliferation and apoptosis remain largely unclear. 
Previous experiments have demonstrated that PDCD5 is apparently upregulated in cells following apoptotic stimulation \cite{Liu1999}, enhances caspase-3 activity by modulating Bax translocation from the cytosol to the mitochondrial membrane \cite{Chen2006a}, interacts with Tip60 to enhance histone acetylation and p53 acetylation at Lys120, and promotes the expression of Bax \cite{Xu2009}. Recently, novel evidence indicated that PDCD5 is a p53 regulator during gene expression and the cell cycle \cite{Xu2012a}. It was shown that PDCD5 interacts with the p53 pathway by inhibiting the Mdm2-mediated ubiquitination and nuclear export of p53 and that knockdown of PDCD5 can decrease the ubiquitination level of Mdm2 and attenuate the expression and transcription of p21. Hence, upon DNA damage, PDCD5 can function as a co-activator of p53 to regulate cell cycle arrest and apoptosis. Many computational models have been constructed to investigate the mechanism of the p53-mediated cell-fate decision \cite{Bar-Or2000, Mihalas2000,Tiana2002,Michael2003,Ma2005,Geva-Zatorsky2006,Zhang2009, Zhang2010a, ZhangPNAS2011, Zhang2012, Tian2012, Batchelor2008, Kim2013}. In these models, the p53/Mdm2 oscillation is highlighted as important to the cell-fate decision following DNA damage. Integrated models of the p53 signaling network have been established to study the process of cell fate decision in response to DNA damage \cite{Zhang2009,Zhang2010a,ZhangPNAS2011,Zhang2012}. These models advance the understanding of the dynamics and functions of the p53 pathway in the DNA damage response. In \cite{Zhang2009}, it has been suggested that the decision between the cell fates of survival and death might be determined by counting the number of p53 pulses. In \cite{ZhangPNAS2011}, the two feedback loops of ATM-p53-Wip1 and p53-PTEN-Akt-Mdm2 are combined in the p53 signaling network. A two-phase p53 response has been shown in this model: p53 pulses during DNA repair, and it switches to a sustained high level that triggers apoptosis if the damage cannot be fixed after a critical number of p53 pulses. Furthermore, dynamical analysis has shown that the ATM-p53-Wip1 loop is essential for the generation of the p53 pulses and that the PTEN level determines whether p53 acts as a pulse generator or a switch. Despite extensive studies of the p53 pathway, little work has been focused on modeling the PDCD5 interactions. The first model of PDCD5-regulated DNA damage decisions was established in \cite{Zhuge2011}. Two known pathways were considered in this model: the interaction of PDCD5 with Tip60 in the nucleus, and the regulation of Bax translocation in the cytoplasm. This model revealed that the cytoplasmic pathway plays an important role in PDCD5-regulated cell apoptosis \cite{Zhuge2011}. However, how PDCD5 interactions with the p53 pathway affect the cell fate decision has not been considered in previous models. Motivated by the above considerations, we constructed a mathematical model of the p53 signaling network with PDCD5 interactions in the present study to examine the effects of PDCD5 on p53-mediated cell fate decisions in response to DNA damage. The main results of this study suggest that PDCD5 can function as a co-activator of p53 to regulate p53-dependent cell fate decisions by mediating the dynamics of p53. The effects of PDCD5 are dose dependent, and various cell fates can occur for cells with different PDCD5 levels. 
\section*{MATERIALS AND METHODS} \subsection*{Model description} Our model was based on p53 responses to DNA damage caused by ionizing radiation (IR) \cite{Zhang2009,ZhangPNAS2011} and PDCD5 interactions with the p53 pathway \cite{Xu2012a} (Figure \ref{fig:SimplifiedModel}). In the model, the cell fate decision following DNA damage is mediated by p53 pulses through the p53-Mdm2 oscillator, and PDCD5 interacts with p53 and functions as a positive regulator in the p53 pathway. An integrated model with four modules for the p53 signaling network has been developed in \cite{Zhang2009,ZhangPNAS2011}. This model includes the following processes: DNA repair, the ATM switch, p53-Mdm2 oscillation, and the cell fate decision. When a cell is exposed to IR, a certain number of DSBs are generated in the cell and induce the formation of DSB repair-protein complexes (DSBCs), and the repair process ensues. Subsequently, DSBCs promote the conversion of inactive ATM monomers to active forms \cite{Bakkenist2003} such that active ATM ($\mathrm{ATM}^*$) becomes dominant after exposure to IR. After the activation of ATM, the p53 level exhibits a series of pulses due to the feedback loops in the p53-Mdm2 oscillator. The protein p53 and its negative regulator Mdm2 are the core proteins in this oscillator (Figure \ref{fig:SimplifiedModel}a). In the nucleus, p53 is activated by $\mathrm{ATM}^*$ in two ways: first, $\mathrm{ATM}^*$ promotes the phosphorylation of p53 on Ser-15 \cite{Prives1998}, which converts p53 from the inactive state to the active state ($\mathrm{p53}^*$); second, $\mathrm{ATM}^*$ accelerates the degradation of Mdm2 through phosphorylation \cite{Stommel2004}. $\mathrm{p53}^*$ is deactivated at a basal rate. In the model, only $\mathrm{p53}^*$ can induce the production of $\mathrm{Mdm2}_{\mathrm{cyt}}$, which in turn promotes the translation of \textit{p53} mRNA in the cytoplasm \cite{Yin2002}. In undamaged cells, p53 levels are kept low by Mdm2 through the negative feedback between $\mathrm{p53}^*$ and $\mathrm{Mdm2}_{\mathrm{nuc}}$. After damage, the p53-Mdm2 complex is dissociated due to the activation of p53 by $\mathrm{ATM}^*$, and the levels of $\mathrm{p53}^*$ and $\mathrm{Mdm2}_{\mathrm{cyt}}$ increase abruptly through the positive feedback between $\mathrm{p53}^*$ and $\mathrm{Mdm2}_{\mathrm{cyt}}$. In the cell fate decision module, p53 coordinates cell cycle arrest and apoptosis to govern cell fate through the phosphorylation of p53 at distinct sites (Figure \ref{fig:SimplifiedModel}b). The primary phosphorylation of p53 on Ser-15 leads to cell cycle arrest, whereas the further phosphorylation of p53 on Ser-46 promotes the expression of pro-apoptotic genes such as \textit{p53AIP1} \cite{Oda2000}. These two forms of phosphorylated p53 are termed p53 arrester and p53 killer, respectively \cite{Zhang2009}. There are three feedback loops involved in the conversion between p53 arrester and p53 killer. The p53 arrester-inducible gene \textit{Wip1} can promote the reversion of p53 killer to p53 arrester \cite{Fiscella1997}, while the gene \textit{p53DINP1}, which is induced by both p53 arrester and p53 killer, contributes to the formation of p53 killer \cite{Okamura2001}. p53 arrester induces cell cycle arrest through the transcriptional activation of \textit{p21}, and p53 killer promotes cell death via pro-apoptotic genes such as \textit{p53AIP1}. 
The over-expression of \textit{p53AIP1} induces the release of cytochrome \textit{c} from mitochondria, and apoptosis rapidly ensues after the activation of caspase-3. A positive-feedback loop between cytochrome \textit{c} and caspase-3 \cite{Kirsch1999,Bagci2006} underlies the apoptotic switch in the model developed in \cite{Zhang2009}. \begin{figure}[htbp] \centering \includegraphics[width=8cm]{fig1} \caption{Model of the PDCD5 pathway that regulates p53/Mdm2 oscillation and the cell fate decision. (a) Model of PDCD5-regulated p53/Mdm2 oscillation. (b) Model of the cell fate decision. Red lines show the PDCD5 interactions.} \label{fig:SimplifiedModel} \end{figure} The PDCD5 protein is weakly expressed under unstressed conditions and is upregulated in cells upon apoptotic stimulation \cite{Liu1999}. Following the onset of an apoptotic stimulus, the cytoplasmic PDCD5 protein level first increases rapidly and forms an inward gradient from the cytoplasm to the nucleus \cite{Liu1999,Chen2001,Xu2009}. This pattern is maintained for a few hours until the initiation of apoptosis, which is paralleled by a rapid translocation of PDCD5 protein from the cytoplasm to the nucleus \cite{Chen2001}. Hence, after DNA damage, nuclear PDCD5 is maintained at an intermediate level during the DNA repair process. PDCD5 has been found to up-regulate p53 activity through at least two interactions \cite{Xu2012a}. When U2OS cells were transfected with either control or PDCD5-specific siRNA, p53 protein levels decreased following the knockdown of PDCD5. Simultaneously, the knockdown of PDCD5 failed to influence p53 mRNA levels, which suggests that PDCD5 enhances the stability of p53 and does not regulate p53 at the transcriptional level \cite{Xu2012a}. Co-localization analysis in U2OS cells further revealed that PDCD5 co-localizes with p53 in the nucleus. Furthermore, NMR experiments indicated that PDCD5 can bind the N-terminal domain of p53 (p53$_{15-61}$) \cite{Yao2012}, which overlaps with the binding site between p53 (p53$_{15-29}$) and Mdm2 \cite{Schon2004}. When p53 was incubated with whole lysates of HeLa cells that overexpressed Mdm2, p53 strongly bound to Mdm2, but this interaction between p53 and Mdm2 decreased significantly in the presence of recombinant human PDCD5. Moreover, PDCD5 can be pulled down with p53. These results suggest that PDCD5 might disrupt the p53-Mdm2 interaction via the direct interaction between p53 and PDCD5 \cite{Xu2012a}, which is consistent with the results of the NMR study \cite{Yao2012}. Interestingly, PDCD5 has been found to be capable of dose-dependently decreasing the protein level of Mdm2. Knockdown of endogenous PDCD5 could increase the accumulation of Mdm2 and decrease the ubiquitination level of Mdm2 \cite{Xu2012a}. Hence, PDCD5 dissociates the p53-Mdm2 complex and promotes Mdm2 degradation. These interactions are shown with red lines in Figure \ref{fig:SimplifiedModel}a. In the cell fate decision module, PDCD5 in the nucleus interacts with Tip60 to promote the Tip60-induced Lys120 acetylation of p53 (the killer form of p53) \cite{Xu2009}. In the cytoplasm, PDCD5 promotes the translocation of Bax from the cytosol to the mitochondrial outer membrane to induce the release of cytochrome \textit{c} \cite{Chen2006a}. ChIP assays in U2OS cells have shown that PDCD5 might associate with the \textit{p21} promoter to promote transcriptional activation after DNA damage \cite{Xu2012a}. Knockdown of PDCD5 attenuates the expression and transcription of p21 \cite{Xu2012a}. 
Hence, PDCD5 in the nucleus increases the transition from p53 arrester to p53 killer and promotes the transcription of p21, and PDCD5 in the cytoplasm up-regulates Bax translocation to increase cytochrome \textit{c} release (Figure \ref{fig:SimplifiedModel}b). Based on the model simulation reported in \cite{Zhang2009}, active ATM is dominant following IR, and the level of $\mathrm{ATM}^*$ remains mostly constant during the DNA repair process. Hence, in our model, the four modules of the model of \cite{Zhang2009} were simplified to include only the two modules of the p53-Mdm2 oscillator and the cell fate decision, and the levels of $\mathrm{ATM}^*$ and nuclear PDCD5 were represented by time-dependent functions to mimic the DNA repair process. This simplification is acceptable in the current study because we intended to investigate the effects of PDCD5 on p53 dynamics after DNA damage. For a more complete understanding of the effect of PDCD5 on cell fate decisions following DNA damage, an integrated model that incorporates PDCD5 dynamics \cite{Zhuge2011} and the responses of the p53 pathway \cite{Zhang2009,Zhang2011} is certainly required and will be the subject of further studies. \subsection*{Formulations} In the formulations, we first simplified the models presented in \cite{Zhang2009,Zhang2011} to a system of six differential equations for the modules of the p53-Mdm2 oscillator and the cell fate decision, and a time-dependent $\mathrm{ATM}^*$ level was introduced for the DNA repair process. In the p53-Mdm2 oscillator, inactive p53 in the nucleus is degraded rapidly by Mdm2 and was thus assumed to be at quasi-equilibrium in our model. Hence, three components were included in the p53-Mdm2 oscillator: active p53 in the nucleus $[\mathrm{p53}]$, Mdm2 in the nucleus $[\mathrm{Mdm2_{nuc}}]$, and Mdm2 in the cytoplasm $[\mathrm{Mdm2_{cyt}}]$. Active p53 promotes the production of $\mathrm{Mdm2}_{\mathrm{cyt}}$, which in turn promotes the translation of \textit{p53} mRNA to produce p53, forming a positive-feedback loop. In the nucleus, active p53 is degraded slowly by weakly binding to $\mathrm{Mdm2_{nuc}}$, and this interaction is disrupted by PDCD5. Mdm2 shuttles between the nucleus and the cytoplasm at different rates. The degradation of $\mathrm{Mdm2}_{\mathrm{nuc}}$ is promoted by both $\mathrm{ATM}^*$ and PDCD5. These interactions resulted in differential equations (1)-(3), which are given in the Supporting Material. In the cell fate decision module, there are two forms of active p53: p53 arrester and p53 killer. p53 arrester and p53 killer transform into each other at rates that are regulated by their inducible genes \textit{Wip1} and \textit{p53DINP1}. PDCD5 in the nucleus can increase the transition from p53 arrester to p53 killer. p53 killer induces apoptosis through the killer-inducible gene \textit{p53AIP1}, which up-regulates the expression of the pro-apoptotic gene \textit{Bax}. PDCD5 in the cytoplasm enhances Bax translocation and promotes the release of cytochrome \textit{c} from the mitochondria and hence the activation of caspase-3. In our model, we omitted components for the proteins Wip1, p53DINP1, and p53AIP1 and considered the dynamics of p53 killer $[\mathrm{killer}]$ (the p53 arrester concentration is given by $[\mathrm{arrester}] = [\mathrm{p53}] - [\mathrm{killer}]$), cytochrome \textit{c} $[\mathrm{CytoC}]$ and active caspase-3 $[\mathrm{C3}]$. This process resulted in equations (4)-(6) in the Supporting Material. 
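Schematically, the resulting six-variable system can be integrated with a standard ODE solver. The following Python sketch is illustrative only: the rate expressions and constants are placeholders of ours that mimic the coupling structure described above (the actual equations (1)-(6) and input functions (20)-(21) are given in the Supporting Material), and they are not expected to reproduce the simulated dynamics quantitatively.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical stand-ins for the predefined inputs A(t) and P(t):
# smooth steps that are high during repair (t < t_c) and decay afterwards.
t_c, P0 = 48.0, 0.8
A = lambda t: 1.0 / (1.0 + np.exp(2.0 * (t - t_c)))   # active ATM
P = lambda t: P0 / (1.0 + np.exp(2.0 * (t - t_c)))    # nuclear PDCD5

def rhs(t, y):
    # Placeholder kinetics (NOT the actual equations (1)-(6)):
    # ATM* activates p53 and promotes Mdm2_nuc degradation; PDCD5 weakens
    # Mdm2-mediated p53 degradation; p53 killer releases cytochrome c,
    # which switches on caspase-3.
    p53, mdm2n, mdm2c, killer, cytoc, c3 = y
    dp53 = 0.9 * A(t) + 0.3 * mdm2c - 0.8 * mdm2n * p53 / (1 + 2 * P(t))
    dmdm2n = 0.5 * mdm2c - (0.2 + 0.6 * A(t) + 0.3 * P(t)) * mdm2n
    dmdm2c = 0.7 * p53 - 0.6 * mdm2c
    dkiller = (0.4 + 0.5 * P(t)) * (p53 - killer) - 0.6 * killer
    dcytoc = 0.3 * (1 + P(t)) * killer - 0.2 * cytoc
    dc3 = 0.8 * cytoc * (1 - c3) + 2.0 * c3**2 * (1 - c3) - 0.3 * c3
    return [dp53, dmdm2n, dmdm2c, dkiller, dcytoc, dc3]

sol = solve_ivp(rhs, (0.0, 80.0), [0.1, 0.1, 0.1, 0.0, 0.0, 0.0],
                max_step=0.1)
print("caspase-3 at t = 80 h:", sol.y[5, -1])
\end{verbatim}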
Here we note that the downstream p21 transcription signal was omitted. The intensities of $\mathrm{ATM}^*$ and PDCD5 were included in the model through their incorporation in the equation coefficients (refer to the Supporting Material, Section 1.6). This study aimed to examine p53 dynamics during DNA repair, while both $\mathrm{ATM}^*$ and PDCD5 remained at high levels \cite{Zhang2009,Zhuge2011}. Hence, the $\mathrm{ATM}^*$ and PDCD5 levels were described by the predefined functions $A(t)$ and $P(t)$ given by equations (20)-(21) in the Supporting Material. The parameter $P_0$ was introduced for the different PDCD5 levels, with $P_0 = 0.8$ for the wild-type cells and $P_0=0.2$ for the siPDCD5 cells. Despite this specificity, extensive simulations showed that the results of the current study were insensitive to different mathematical formulations of these two functions. The original integrated model of the cell fate decision mediated by p53 pulses has thus been simplified to the above model with six differential equations. With this simplification, all parameters were adjusted to reproduce the p53 pulses reported in \cite{Zhang2009,Zhang2011}, and the PDCD5 interaction parameters were estimated based on experimental results regarding the degradation of p53 with or without PDCD5 \cite{Xu2012a}. Details of the parameter values are given in Table 1 in the Supporting Material. \subsection*{Numerical methods} In the numerical simulations, the model equations were solved numerically using \verb|NDSolve| on the Mathematica 8.0 platform \cite{mathematica8}. \section*{RESULTS} \subsection*{PDCD5 regulates p53 dynamics in a dose-dependent manner} Signaling dynamics are known to encode and decode the cellular information that controls cellular responses \cite{Purvis:2013}. p53 dynamics can control the cell fate decision in response to DSBs: cells that experience p53 pulses recover from DNA damage, whereas cells that are exposed to sustained p53 signaling frequently undergo senescence \cite{Purvis:2012}. To examine how PDCD5 regulates the dynamics and functions of p53, we performed simulations with various PDCD5 levels. The integrated model showed that cell fate is governed by the number of p53 pulses during DNA repair \cite{Zhang2009}. In our simulations, there were seven p53 pulses when the DNA repair process required $t_{c}=48$ h, and apoptosis was induced by an obvious increase in caspase-3 levels. When the DNA repair process is shortened to $t_{c}=30$ h, there are four p53 pulses and the cells recover to normal growth, while caspase-3 is maintained at a low level (Figure \ref{fig:SamplePaths}a). These results reproduced the p53 dynamics and cell fate decision obtained from the integrated model of \cite{Zhang2009}; therefore, our simplified model is suitable for investigating the effects of PDCD5 on p53 dynamics. \begin{figure}[htbp] \centering \includegraphics[width=8cm]{fig2} \caption{Typical p53 dynamics after DNA damage. The DNA repair processes are shown by high ATM levels (red dashed lines), the PDCD5 level was adjusted by $P_{0}$ (shown in each figure panel), and the cell fate is indicated by caspase-3 (green dash-dotted lines). (a) Wild-type cells with $P_{0} = 0.8$ and $48$ h and $30$ h DNA repair processes, respectively. (b) Cells with various PDCD5 levels and $48$ h DNA repair processes.} \label{fig:SamplePaths} \end{figure} To examine the effects of PDCD5 on p53 dynamics, we fixed the DNA repair process at $t_{c}=48$ h and varied the PDCD5 level to examine the cell response. 
Increasing $P_{0}$ elicited no changes in cell fate (data not shown), whereas reducing the PDCD5 level can lead to various p53 dynamics and cell fates (Figure \ref{fig:SamplePaths}b). When $P_{0}=0.4$, there are six p53 pulses during DNA repair and caspase-3 is maintained at low levels, so that the cells recover to normal growth. The p53 pulses are repressed if $P_{0}$ is further reduced. When $P_{0}=0.3$, the cells exhibit sustained p53 signaling and caspase-3 increases to levels as high as those observed when $P_{0}=0.8$. When $P_{0}$ is reduced to $0.2$, the p53 level is attenuated and fails to induce cell apoptosis. These results suggest that p53 dynamics and cell fate can be modulated by PDCD5 in a dose-dependent manner. Here we note that when $P_{0}=0.3$, caspase-3 reaches levels as high as those in the wild-type cells ($P_{0} = 0.8$). However, at this point we cannot conclude that the cells undergo apoptosis \cite{Abraham2004}, because some other response pathways not included in the current study, such as cell senescence, can be triggered by sustained p53. In \cite{Purvis:2012}, it was suggested that a proper stimulus with Nutlin-3 can induce sustained p53. Our simulations predict that a proper dose of PDCD5 can also result in sustained p53 after DNA damage. This prediction requires experimental confirmation. \begin{figure}[htbp] \centering \includegraphics[width=6cm]{fig3} \caption{The effects of PDCD5 on p53 dynamics. Here we set $t_{c}=48$ h and varied $P_{0}$ from $0$ to $1$. Upper panel, the mean p53 level over the DNA repair process. Bottom panel, the p53 pulse number, period, and amplitude during DNA repair. The shaded regions indicate the $P_{0}$ range in which caspase-3 is activated. The vertical dashed line separates the $P_{0}$ axis into regions of sustained and pulsed p53 dynamics. } \label{fig:dyns} \end{figure} To further investigate the effects of PDCD5 on p53 dynamics, we varied $P_{0}$ over the wider range of $0$ to $1$. There is a threshold of $P_{0}=0.33$ that corresponds to a Hopf bifurcation of the p53 oscillator module (dashed line in Figure \ref{fig:dyns}), such that p53 is sustained when $P_{0}$ is below the threshold and pulsed when $P_{0}$ is above the threshold. In both regions of sustained and pulsed p53, caspase-3 is activated when $P_{0}$ is relatively large (shaded regions in Figure \ref{fig:dyns}), but with different mechanisms. In the sustained region, during DNA repair, the p53 level increases with PDCD5 to induce caspase-3 activation through a saddle-node bifurcation; in the pulsed region, p53 oscillates while its mean value is nearly unchanged with the PDCD5 level, and hence caspase-3 is activated by an alternative mechanism (see Figure \ref{fig:dyns} upper panel, to be detailed below). In the pulsed region, the period decreases with $P_{0}$, such that the pulse number increases from $4$ to $7$ when $P_{0}$ varies from $0.33$ to $1$, and the amplitude slightly increases with $P_{0}$ (Figure \ref{fig:dyns}, bottom panel). We note that caspase-3 can be either active or inactive when there are $7$ pulses, which indicates that the p53 pulse number alone is insufficient to determine cell fate. These results suggest that PDCD5 regulates the p53 dynamics by different mechanisms in the p53 sustained and pulsed regions, which are separated by a Hopf bifurcation of the p53 oscillator module. 
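For reference, pulse statistics of the kind shown in Figure \ref{fig:dyns} can be extracted from a trajectory by simple peak detection. The sketch below is ours (the trace is synthetic, not model output) and only illustrates the quantification:
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

# Synthetic p53 trace standing in for a simulated trajectory
t = np.linspace(0.0, 48.0, 4801)              # hours, dt = 0.01 h
p53 = 0.5 + 0.4 * np.sin(2 * np.pi * t / 6.7)

# Peaks must exceed a height threshold and be at least 1 h apart
peaks, props = find_peaks(p53, height=0.6, distance=100)
print("pulse number:", len(peaks))
print("mean period (h):", np.diff(t[peaks]).mean())
print("mean amplitude:", props["peak_heights"].mean())
\end{verbatim}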
\subsection*{PDCD5 regulates caspase-3 activation by two mechanisms} To investigate the mechanism by which PDCD5 regulates caspase-3 activation, we considered the cell fate decision module, which can be described by the dynamics of cytochrome \textit{c} and caspase-3 given by equations (22)-(23) in the Supporting Material. The caspase-3 dynamics are controlled by the cytochrome \textit{c} release rate $v_0$, which is dependent on PDCD5 and p53 killer. We performed bifurcation analysis of the equations to seek the mechanisms by which PDCD5 induces caspase-3 activation. In the case of sustained p53, the rate $v_{0}$ is a constant, such that the cell fate module exhibits either bistability for small $v_0$ or monostability for larger $v_0$ (Figure \ref{fig:bif}a). The saddle-node bifurcation defines a critical release rate (red point in Figure \ref{fig:bif}a) such that when $v_0$ increases above the critical rate, the low-caspase-3 state vanishes and the system switches to the state of caspase-3 activation. This critical rate defines a critical curve of p53 versus $P_{0}$ through equation (21) in the Supporting Material, as shown by the red dashed line in Figure \ref{fig:bif}b. To test whether PDCD5 induces caspase-3 activation during sustained p53 through a saddle-node bifurcation, we superpose this critical curve with the dependence of the mean p53 on $P_0$ (Figure \ref{fig:dyns}). The two curves meet at a point at which $P_0$ is at the critical value for inducing caspase-3 activation (Figure \ref{fig:bif}b). This consistency indicates that PDCD5 induces caspase-3 activation through a saddle-node bifurcation in the region of sustained p53. \begin{figure}[htbp] \centering \includegraphics[width=8cm]{fig4} \caption{Bifurcation analysis of the cell fate decision module. (a) Black shows the dependence of the steady state caspase-3 level on the cytochrome \textit{c} release rate $v_0$ (unstable steady states are marked by the dashed line). Green shows the solution of the original model with $P_0 = 0.66$ and $t_c=48$ h (same as in (c) and (d)). (b) The superposition of the dependence of the mean p53 on $P_0$ and the curve corresponding to the saddle-node bifurcation. (c) Phase plane analysis with $v_0 = 0.02$. The black triangles indicate stable steady states, and the red dot indicates the unstable steady state (saddle point). The red curve shows the stable manifolds of the saddle point that divide the phase plane into two regions. The green curve shows the solution of the original model with $P_0 = 0.66$ and $t_c=48$ h (the cell proceeds to apoptosis), and the blue indicates the solution with $P_0=0.65$ (cell survival). (d) Enlargement of the two solutions shown in (c). The inset enlarges the square region.} \label{fig:bif} \end{figure} When p53 is pulsed, equations (22)-(23) in the Supporting Material are time-dependent and have periodic coefficients. In this case, caspase-3 is not always active even when the mean p53 level is above the critical value (Figure \ref{fig:bif}b). Hence, some other mechanism must be at work to induce caspase-3 activation with increases in the PDCD5 level $P_{0}$. In the absence of DNA-damaging stimuli, the caspase-3 dynamical system has two stable steady states of survival and apoptosis (Figure \ref{fig:bif}c). The two stable states are separated by the stable manifolds of an unstable steady state (red curve in Figure \ref{fig:bif}c). 
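The saddle-node mechanism can be illustrated with a minimal one-variable caricature of the cell fate module (a toy stand-in of ours; the actual equations (22)-(23) are in the Supporting Material): $dC_3/dt = v_0 + k C_3^2/(K^2+C_3^2) - d C_3$. Scanning the release rate $v_0$ locates the fold at which the low-caspase-3 branch disappears:
\begin{verbatim}
import numpy as np

# Toy bistable caspase-3 module:
#   dC3/dt = v0 + k*C3^2/(K^2 + C3^2) - d*C3
k, K, d = 1.0, 0.5, 0.9

def stable_states(v0):
    # Steady states: roots of -d*C^3 + (v0+k)*C^2 - d*K^2*C + v0*K^2 = 0
    roots = np.roots([-d, v0 + k, -d * K**2, v0 * K**2])
    real = roots[np.abs(roots.imag) < 1e-9].real
    real = real[real >= 0]
    dfdC = 2 * k * real * K**2 / (K**2 + real**2)**2 - d
    return np.sort(real[dfdC < 0])          # keep the stable branches

for v0 in np.linspace(0.0, 0.1, 101):
    if len(stable_states(v0)) == 1:         # low branch lost at the fold
        print(f"saddle-node bifurcation near v0 = {v0:.3f}")
        break
\end{verbatim}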
After DNA damage, starting from the survival state, a cell transitions to the apoptotic state with an increase in the amount of cytochrome \textit{c} released and a rapid caspase-3 activation (Figure \ref{fig:bif}c). The simulations showed that the final cell fate is determined by whether the solution trajectory shown in Figure \ref{fig:bif}c crosses the boundary between the survival and apoptosis regions. During DNA repair, p53 oscillates and switches between pro-apoptosis and pro-survival (Figure \ref{fig:bif}a), such that the released cytochrome \textit{c} accumulates at each p53 pulse. Simulations showed that PDCD5 can increase the accumulation of cytochrome \textit{c} and hence induce caspase-3 activation by driving the solution trajectory across the boundary curve during DNA repair (Figure \ref{fig:bif}d). These results indicate that PDCD5 promotes cell apoptosis (caspase-3 activation) as a co-activator of p53 that accelerates cytochrome \textit{c} release in the region of pulsed p53. This observation highlights the crucial role of PDCD5 in the cytoplasm and is in agreement with our previous study \cite{Zhuge2011}. The above analyses indicated that PDCD5 promotes caspase-3 activation by accelerating cytochrome \textit{c} release. Consequently, the time of caspase-3 activation should decrease with increases in PDCD5 levels. This notion was confirmed by our simulations, as shown in Figure \ref{fig:P0}a. Next, we asked whether the PDCD5 level required to induce apoptosis is related to the duration of DNA repair. We changed both the DNA repair duration $t_c$ and $P_0$ to examine cell responses. The simulations showed that caspase-3 activation is induced only when the DNA repair duration is sufficiently long ($t_{c} > 29$ h in this study). When $t_{c}$ is sufficiently large, the critical PDCD5 level $P_{0}$ required to induce caspase-3 activation decreases with increases in $t_{c}$ in both regions of sustained and pulsed p53 (Figure \ref{fig:P0}b). We note that the boundary value of $P_{0}$ at which p53 transitions from sustained to pulsed dynamics is independent of $t_{c}$ (dashed line in Figure \ref{fig:P0}b). This $P_{0}$ value is determined by the Hopf bifurcation of the p53/Mdm2 oscillation module. \begin{figure}[htbp] \centering \includegraphics[width=6cm]{fig5} \caption{Dependence of cell fate on the DNA repair time $t_c$ and the PDCD5 level $P_0$. (a) The dependence of the caspase-3 activation time on $P_{0}$; here $t_c = 48$ h. (b) The shaded area shows the DNA repair duration for each $P_{0}$ required to activate caspase-3 with sustained and pulsed p53. } \label{fig:P0} \end{figure} \subsection*{The cytoplasm pathway and the regulation of Mdm2-mediated p53 degradation are the primary effects of PDCD5 on the cell fate decision} PDCD5 interacts with the p53 pathway in multiple ways; it stabilizes p53 by disrupting the p53-Mdm2 interaction, enhances Mdm2 degradation, and promotes the Tip60-induced Lys120 acetylation of p53. In the cytoplasm, PDCD5 enhances Bax translocation and promotes the release of cytochrome \textit{c}. PDCD5 regulates the cell fate decision via the combination of these multi-site interactions. To investigate which role is the most essential for the regulatory function of PDCD5, we altered the strengths of each interaction and examined the DNA repair time required to induce cell apoptosis. 
Four parameters were considered: $\alpha_{1}$ for PDCD5 disrupting the p53-Mdm2 interaction, $K_{4}$ for enhancing Mdm2 degradation, $K_{9}$ for promoting the p53 killer transformation, and $\alpha_{2}$ for the cytoplasm pathway (Figure \ref{fig:par}). The results revealed that cell fate is sensitive to changes in $\alpha_{2}$, i.e., the PDCD5 interactions with the cytoplasm pathway. Among the functions of PDCD5 in the p53 pathway, changes in the strength with which PDCD5 disrupts the p53-Mdm2 interaction ($\alpha_{1}$) result in the greatest changes in the DNA repair duration required to induce apoptosis. These results show that the cytoplasm pathway is essential for the regulatory function of PDCD5, which agrees with previous studies \cite{Zhuge2011}, and that the PDCD5-regulated disruption of Mdm2-mediated p53 degradation is important among the interactions of PDCD5 with the p53 pathway. These observations provide potential options for killing cancer cells in clinical treatments, which are discussed below. \begin{figure}[htbp] \centering \includegraphics[width=6cm]{fig6} \caption{Dependence of the critical duration of DNA repair on the model parameters. The WT cells took the values listed in the Supporting Material; for the other cells, each of the labeled parameters was decreased ($-$) or increased ($+$) by $20\%$.} \label{fig:par} \end{figure} \subsection*{Effects of PDCD5 disrupting Mdm2-mediated p53 degradation} PDCD5 disrupts Mdm2-mediated p53 degradation via a direct interaction with p53 \cite{Yao2012,Xu2012a}. Biologically, the parameter value $\alpha_1$ is adjustable if we can modify the binding affinity between PDCD5 and p53 via methods such as single-molecule engineering. To further explore how various affinities ($\alpha_1$) affect the cell fate decision, we varied $\alpha_1$ and $P_0$ to examine the p53 dynamics and caspase-3 activity (Figure \ref{fig:a1}a). The results revealed that caspase-3 was activated only when $\alpha_1 P_0 > 0.7$, and that the two parameter regions with either sustained p53 or pulsed p53 were separated by the curve $\alpha_1 P_0 = 1.1$. For small $\alpha_1$ values ($\alpha_{1} < 1.7$), pulsed p53 always yielded caspase-3 activity; however, when $\alpha_1$ was large, pulsed p53 did not necessarily imply caspase-3 activity when $P_0$ was not sufficiently large (see Figure \ref{fig:SamplePaths}b). \begin{figure}[htbp] \centering \includegraphics[width=8cm]{fig7} \caption{Dependence of the cell fate decision on $\alpha_1$. (a) The cell fate decision with various parameters ($\alpha_1$ and $P_0$). Yellow shows the region with caspase-3 activity, and the regions with sustained and pulsed p53 are separated by the red curve. (b) Apoptosis probabilities obtained from multicell simulations with varying $P_0$ and different $\alpha_1$ values. Here, $t_c = 48$ h in all simulations.} \label{fig:a1} \end{figure} To further investigate the cellular apoptosis probability, we applied the method of multicell simulation \cite{Zhuge2011} to simulate a group of $10^4$ cells, each of which had parameters that were randomly chosen from a range of $\pm 20\%$ away from their default values. The cell fate of apoptosis was marked by caspase-3 activation, and the fraction of apoptotic cells provided the apoptosis probability. Figure \ref{fig:a1}b shows the apoptosis probabilities with $P_{0}$ from $0$ to $1$ and strong and weak affinity $\alpha_{1}$ values. 
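A minimal version of such a multicell simulation can be sketched as follows. It reuses the toy caspase-3 module introduced above rather than the full model, with illustrative rates of ours; `apoptosis' is scored by convergence to the high-caspase-3 branch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_cells = 10_000
# Each cell draws its parameters uniformly within +/-20% of the defaults
# of the toy caspase-3 module used above (illustrative rates only).
v0 = 0.055 * rng.uniform(0.8, 1.2, n_cells)
k  = 1.0   * rng.uniform(0.8, 1.2, n_cells)
K  = 0.5   * rng.uniform(0.8, 1.2, n_cells)
d  = 0.9   * rng.uniform(0.8, 1.2, n_cells)

# Vectorized Euler integration from the survival state c3 = 0
c3 = np.zeros(n_cells)
dt = 0.02
for _ in range(int(200.0 / dt)):
    c3 += dt * (v0 + k * c3**2 / (K**2 + c3**2) - d * c3)

# Cells that crossed to the high-caspase-3 branch are scored apoptotic
print("apoptosis probability:", np.mean(c3 > 0.5))
\end{verbatim}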
The results suggested that when PDCD5 is low, a significant increase in $\alpha_{1}$ can markedly enhance cell apoptosis. We note that when $P_{0}$ takes an intermediate value ($P_{0}\approx 0.5$, arrow in Figure \ref{fig:a1}b), increasing $\alpha_{1}$ tends to decrease the probability of apoptosis. This counterintuitive result is due to the possibility that a cell can display pulsed p53 without activation of caspase-3. These results provide possible directions for interfering with the binding affinity of PDCD5 and p53 to modulate cell fate decisions in clinical treatments. \section*{DISCUSSION} PDCD5 is known to interact with p53 and functions as a regulator in the p53 pathway during responses to DNA damage. In the present study, we constructed a mathematical model of the p53 signaling network with the interactions of PDCD5. The model was based on the integrated model of the p53 signaling network that was previously proposed \cite{Zhang2009,Zhang2012}, and the interactions of PDCD5 with the p53/Mdm2 oscillator and the cell fate decision were included in accordance with recent observations \cite{Xu2012a}. The computational model consisted of two modules: the p53/Mdm2 oscillator and the cell fate decision. The DNA repair process was represented by increases in active ATM and PDCD5 concentrations, both of which were given by pre-defined time-dependent functions and were incorporated into the equation coefficients. The model simulations showed that PDCD5 can modulate the cell fate decision by mediating p53 dynamics in a dose-dependent manner, such that p53 can display either sustained or pulsed dynamics in cells with different levels of PDCD5. Moreover, PDCD5 regulates caspase-3 activation via two mechanisms that operate in the two regions of sustained and pulsed p53 dynamics. We found that the cell fate decision is sensitive to the cytoplasm pathway of PDCD5, which agrees with the results of our previous studies \cite{Zhuge2011}. Moreover, the PDCD5-regulated disruption of Mdm2-mediated p53 degradation is also important for the interaction of PDCD5 with the p53 pathway, which suggests that it is possible to modulate the cell fate decision by interfering with the binding affinity between PDCD5 and p53. This study sought to investigate the effects of PDCD5 on p53 dynamics following DNA damage. A more comprehensive analysis of p53-Mdm2 dynamics has recently been provided by \cite{Bi:2015}. For a more complete understanding of how PDCD5 functions to regulate DNA repair and apoptosis following DNA damage, the PDCD5 dynamics \cite{Zhuge2011} and the p53 pathway response need to be incorporated \cite{Zhang2009,Zhang2012}. These results are certainly important for additional studies that seek to improve our understanding of how recombinant human PDCD5 can be used in cancer treatment. \section*{SUPPORTING MATERIAL} \ack{An online supplement to this article can be found by visiting BJ Online at http://www.biophysj.org.}\vspace*{-3pt} \section*{AUTHOR CONTRIBUTIONS} J.L. and Y.C. designed the research; C.Z. performed the research; C.Z. and X.S. analyzed data; C.Z. drafted the manuscript; C.Z., X.S. and J.L. edited and revised the manuscript; C.Z., X.S., Y.C. and J.L. approved the final version of the manuscript. \section*{ACKNOWLEDGMENTS} \ack{This work was supported by the Fundamental Research Funds for the Central Universities (NO. 
BLX2014-29) and the National Natural Science Foundation of China (11272169, 91430101, 31370898)}\vspace*{6pt} \bibliographystyle{plain} \input{Zhuge-BJ.bbl} \end{document}
\section{Introduction} Higher-order topological superconductors (HOTSC) are novel forms of gapped quantum matter that host gappable surfaces but gapless corners or hinges in-between\cite{benalcazar2017quantized,schindler2018higher}. Since their initial discovery, HOTSC and their descendants have been discussed extensively and have become an active area of theoretical and experimental research. Recent progress has included topological classifications\cite{benalcazar2017electric,song2017d,langbehn2017reflection,khalaf2018higher,benalcazar2019quantization,you2021multipolar}, topological field theories\cite{you2021multipolar, may2021crystalline}, and experimental realizations of various classes of HOTSC\cite{noh2018topological,serra2018observation,peterson2018quantized,imhof2018topolectrical,schindler2018higherb,xue2019acoustic,zhang2019second,ni2019observation,noguchi2021evidence,aggarwal2021evidence}. Despite the rapid progress in the understanding of higher-order topological superconductors from a band structure perspective~\cite{isobe2015theory,huang2017building,song2017interaction,song2017topological,you2018higher,rasmussen2018intrinsically,rasmussen2018classification,thorngren2018gauging,benalcazar2018quantization,zhang2019construction,tiwari2019unhinging,you2018higher,jiang2019generalized}, experimentally accessible fingerprints for observing HOTSC still remain challenging in strongly correlated systems. Notably, the observation of gapless Majorana modes at the corners or hinges does not fully guarantee that the bulk is a higher-order topological superconductor, as some of these gapless modes can potentially be annihilated via surface gap closing, even if the bulk spectrum remains gapped\cite{you2019multipolar,tiwari2019unhinging}. Alternatively, some HOTSCs can exhibit fully gappable boundaries, including corners and hinges~\cite{tiwari2019unhinging}, while still exhibiting a non-trivial entanglement structure that distinguishes them from trivial superconductors. Previous works have established that higher-order topological insulators and superconductors can be probed by their geometric responses, such as the creation of lattice defects like dislocations or disclinations. By creating these defects, one can observe Majorana zero modes at the disclination point in two dimensions or chiral fermion modes localized at the dislocation/disclination lines in three dimensions\cite{liu2019shift,you2018highertitus,teo2012majorana,li2020fractional,may2021crystalline,queiroz2019partial,zhang2022bulk,schindler2022topological}. In contrast to these approaches, which are primarily based on the non-interacting limit, we aim to identify the universal fingerprints and topological responses specifically for strongly interacting higher-order topological superconductor (HOTSC) phases. In this work, we seek to unravel the nature of superconducting $\pi$ fluxes and their corresponding topological response features in 2D and 3D HOTSC. First, we begin with the 2D HOTSC model on a square lattice proposed in Ref.~\cite{wang2018weak}, protected by $C_4$ symmetry. We demonstrate that creating a $\pi$ superconducting flux engenders a projective symmetry between $C_4$ and fermion parity $P$, so the resultant flux state contains a protected two-fold degeneracy. Remarkably, this projective symmetry within the flux uniquely generates the $N=2$ supersymmetry (SUSY) algebra of quantum mechanics\cite{hsieh2016all}. 
Motivated by these observations, we extend our horizon to frustrated spin systems whose emergent quasiparticles and flux excitations exactly reproduce the topological features of HOTSC\cite{dwivedi2018majorana}. Namely, we construct a bosonic spin-$\frac{3}{2}$ model whose low-energy excitations contain emergent Majoranas coupled to a $Z_2$ gauge field. The Majoranas form a superconducting band akin to the 2D HOTSC, while the $Z_2$ gauge flux excitation carries an $N=2$ SUSY structure. In Section~\ref{sec:3d}, we examine the role of the superconducting $\pi$ flux in 3D HOTSC with $C_4^T$ symmetry. One of our main findings is that the flux lines inside the HOTSC trap 1D helical Majorana modes with an intrinsic quantum anomaly. In particular, the gapless modes inside the flux line are anomalous in the sense that the $C_4^T$ symmetry will inevitably be broken if we gauge the fermion parity inside the flux line. Based on this observation, the helical Majorana modes inside the flux exhibit a global anomaly, signaling the impossibility of realizing them in an isolated one-dimensional lattice model. Notably, such a quantum anomaly, manifested by a `conflict of symmetries', has been widely observed in the surface theory of symmetry-protected topological phases\cite{ryu2015interacting,cho2017relationship}. Our result provides a new route to detect HOTSC in numerical simulations via flux responses. The projective symmetry in the 2D HOTSC flux state can be detected from the shift of the entanglement spectrum upon flux insertion. Suppose one creates a rotationally symmetric cut of the ground state wave function after $\pi$ flux insertion; the entanglement spectrum will then display a robust two-fold degeneracy in all spectral levels. This degeneracy persists even for small system sizes, where finite-size effects are inevitable. More precisely, the whole entanglement Hamiltonian develops a projective symmetry under rotation and fermion parity after flux insertion, which results in a degenerate spectrum of the entanglement Hamiltonian for both the ground state and highly excited states. Notably, this degeneracy in the entanglement spectrum is not a manifestation of the corner mode, but a consequence of the projective symmetry due to the anomalous flux. Our result suggests that the entanglement features of HOTSC also reveal unique properties of topological flux responses, and that the projective symmetry in the anomalous flux state can be observed from the entanglement Hamiltonian. This paves the way for a promising new route for exploring HOTSC in numerical simulations. \section{Projective symmetry in the superconducting flux of 2D HOTSC} \label{sec:2d} This section investigates the topological features of a $\pi$ flux inside a 2D higher-order topological superconductor (HOTSC) protected by $C_4$ and fermion parity symmetry. The motivation comes from the expectation that for a topological quantum phase protected by a symmetry $G$, one can detect its topological features by observing the anomalous symmetry structure inside the symmetry defect (e.g., the gauge flux for $G$ symmetry)\cite{ryu2015interacting,levin2012braiding,chen2017symmetry}. For instance, in a 2D $p+ip$ superconductor with fermion parity symmetry, the superconducting vortex contains a Majorana zero mode\cite{read2009non}. In a 3D $\mathcal{T}$-invariant topological superconductor, the $\pi$ flux line carries a 1D chiral Majorana mode with $c=1/2$ central charge\cite{qi2009time}. 
For a general $G$ symmetry-protected topological phase, once we gauge the symmetry $G$, the symmetry flux either carries a fractional quantum number of $G$ or contains anomalous gapless modes. This aspect provides an alternative way to visualize the underlying quantum structure of symmetry-protected topological phases. In addition, exploring symmetry fluxes offers a feasible way to detect topological responses via numerical simulations or experimental measurements. \begin{figure}[h] \centering \includegraphics[width=2.1in]{3D.png} \caption{The HOTSC on the square lattice with four Majoranas per site. In the zero-correlation length limit, the four Majoranas on the four corners of the square hybridize within the plaquette (solid lines). The ground state Hamiltonian contains $\pi$ flux per plaquette. By inserting a superconducting $\pi$ flux in the center, the central plaquette becomes $\Phi=0$.} \label{2d} \end{figure} To set the stage, we begin with the 2D higher-order TSC proposed in Ref.~\cite{wang2018weak}. The model contains four Majoranas (two complex fermions) living inside each unit cell of a square lattice, as shown schematically in Fig.~\ref{2d}. The four Majoranas at the corners of each square mainly tunnel with their nearest neighbors within the plaquette, with $\pi$ flux per square. The resultant superconducting state is fully gapped inside the bulk and on the smooth edges, while a corner intersecting two boundaries contains a Majorana zero mode (MZM). Our model contains a plaquette-centered $C_4$ rotation symmetry in addition to the fermion parity conservation $P$. The non-interacting Hamiltonian in the Majorana basis is, \begin{align} &H=\eta^T \left[(t+\cos(k_x))\Gamma^3+(t+\cos(k_y))\Gamma^1 \right.\nonumber\\ &\left.+\sin(k_x)\Gamma^4+\sin(k_y)\Gamma^2\right] \eta\nonumber\\ &\Gamma^1=-\sigma^y \otimes \tau^x,\Gamma^2=\sigma^y \otimes \tau^y,\Gamma^3=\sigma^y \otimes \tau^z,\nonumber\\ & \Gamma^4=\sigma^x \otimes I. \label{eq1} \end{align} Here $\eta^T=(\eta_1,\eta_2,\eta_3,\eta_4)$ denotes the four Majoranas on each site, and $t$ is the strength of the intra-site Majorana coupling. For $|t|<1$, the system is in the HOTSC phase. When $t=0$, the model reduces to the zero-correlation length limit, in which the four Majoranas on the four corners of each square hybridize only within the plaquette. The $C_4$ rotation acts on $\eta$ as, \begin{align} C_4: \begin{pmatrix} 0 & \tau^z \\ \tau^x & 0 \end{pmatrix} \end{align} This model has been widely explored in the literature for both weakly and strongly interacting systems\cite{wang2018weak,benalcazar2017quantized,benalcazar2019quantization}. The $\pi$ flux inside each plaquette is essential for acquiring a fully gapped bulk spectrum, and the resultant $C_4$ symmetry satisfies $(C_4)^4=-1$ due to the $\pi$ flux. In particular, the Majorana zero mode localized at the corner is robust against any interaction provided the $C_4$ symmetry is unbroken and the SC bulk is gapped. It was pointed out that if one creates a symmetry defect of the $C_4$ symmetry, namely a $\pi/2$ disclination obtained by removing a quadrant and reconnecting the disclination branch cut, there exists a Majorana zero mode localized at the disclination core\cite{you2018higher,li2020fractional}. Now we consider the symmetry flux of the fermion parity $P$ by inserting an additional $\pi$ flux at the center of a plaquette as in Fig.~\ref{2d}. 
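As a sanity check (ours, not part of Ref.~\cite{wang2018weak}), the mutual anticommutation of the $\Gamma$ matrices, the bulk gap for $|t|<1$, the relation $(C_4)^4=-1$, and the plaquette parity algebra derived in the following paragraphs can all be verified numerically with a small Python script:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Gamma matrices of Eq. (1); they mutually anticommute
G1, G2, G3, G4 = -np.kron(Y, X), np.kron(Y, Y), np.kron(Y, Z), np.kron(X, I2)
for a in (G1, G2, G3, G4):
    for b in (G1, G2, G3, G4):
        target = 2 * np.eye(4) if a is b else np.zeros((4, 4))
        assert np.allclose(a @ b + b @ a, target)

def H(kx, ky, t):
    return ((t + np.cos(kx)) * G3 + (t + np.cos(ky)) * G1
            + np.sin(kx) * G4 + np.sin(ky) * G2)

# Bulk gap for |t| < 1: E(k)^2 is a sum of squares, minimal at k = (pi, pi)
t = 0.5
ks = np.linspace(-np.pi, np.pi, 101)
gap = min(np.abs(np.linalg.eigvalsh(H(kx, ky, t))).min()
          for kx in ks for ky in ks)
print(f"min |E| = {gap:.4f}, expected sqrt(2)*(1-|t|) = {np.sqrt(2)*(1-t):.4f}")

# (C4)^4 = -1 due to the pi flux per plaquette
U = np.block([[np.zeros((2, 2)), Z], [X, np.zeros((2, 2))]])
assert np.allclose(np.linalg.matrix_power(U, 4), -np.eye(4))

# Parity algebra of the central plaquette (derived below), checked in a
# 4-Majorana matrix representation with P = gamma1 gamma2 gamma3 gamma4
g1, g2, g3, g4 = np.kron(X, I2), np.kron(Y, I2), np.kron(Z, X), np.kron(Z, Y)
P = g1 @ g2 @ g3 @ g4
assert np.allclose(g2 @ g3 @ g4 @ (-g1), P)   # pi flux: C4 and P commute
assert np.allclose(g2 @ g3 @ g4 @ g1, -P)     # zero flux: they anticommute
\end{verbatim}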
As the pairing Hamiltonian already contains a $\pi$ flux in each plaquette, the additional flux insertion erases it and makes the central plaquette flux-free, $\Phi=0$. In the limit $t=0$ in Eq.~\ref{eq1}, the flux insertion only changes the coupling of the four Majoranas at the central plaquette, leaving a two-fold degeneracy at the center. In what follows, we demonstrate that this degeneracy is robust against any interaction or coupling due to the projective symmetry. We begin with the particular case $t=0$, but our argument applies to more general circumstances, as we elaborate later. In this zero-correlation length limit, the four Majoranas at the corners of the central plaquette are coupled like a 1D ring of four Majoranas on four sites. The ground state containing $\pi$ flux inside the plaquette imposes an anti-periodic boundary condition on the 1D ring. The plaquette-centered $C_4$ rotation symmetry acts as a `translation symmetry' that permutes the four Majoranas on the ring. Due to the $\pi$ flux at the center, the $C_4$ symmetry permutes the four Majoranas as, \begin{align} & C_4(\pi): \gamma_1 \rightarrow \gamma_2,\gamma_2 \rightarrow \gamma_3,\gamma_3 \rightarrow \gamma_4,\gamma_4 \rightarrow -\gamma_1 \end{align} So $(C_4)^4=-1$. If we apply this rotation to the fermion-parity symmetry operator $P=\gamma_1 \gamma_2 \gamma_3 \gamma_4$, \begin{align} & C_4(\pi)P C^{-1}_4(\pi) = -\gamma_2 \gamma_3 \gamma_4 \gamma_1=\gamma_1 \gamma_2 \gamma_3 \gamma_4=P \end{align} The fermion parity and $C_4$ rotation commute. However, once we insert an additional flux in the center, the zero-flux plaquette can be viewed as a ring of four Majoranas with periodic boundary conditions. The $C_4$ symmetry is now defined as, \begin{align} & C_4(0): \gamma_1 \rightarrow \gamma_2,\gamma_2 \rightarrow \gamma_3,\gamma_3 \rightarrow \gamma_4,\gamma_4 \rightarrow \gamma_1 \end{align} Inside the superconducting flux, the $C_4$ symmetry anti-commutes with the fermion parity operator, \begin{align} & C_4(0)P C^{-1}_4(0) = \gamma_2 \gamma_3 \gamma_4 \gamma_1=-P \end{align} This anti-commutation relation guarantees an additional zero mode at the flux center. To this end, we have demonstrated that adding a $\pi$ flux at the rotation center engenders a projective symmetry such that the fermion parity and $C_4$ symmetry anti-commute at the flux center. As a result, the flux traps two degenerate modes with different fermion parity, connected by a $C_4$ rotation. We can label these degenerate modes as the even (odd) fermion parity states $|\Psi\rangle_0^a$ ($|\Psi\rangle_0^b$), related by $C_4$: \begin{align} & C_4|\Psi\rangle_0^a = |\Psi\rangle_0^b \end{align} The demonstration above is based on the zero-correlation length limit in the absence of interactions. We now extend this argument to the more general case with additional symmetry-preserving couplings and interactions. If the correlation length is finite, the four Majoranas in the central plaquette with zero net flux would unavoidably couple to the rest of the system. As long as there is no gap closing in the bulk, the flux state wave function $|\Psi\rangle$ can be connected to the aforementioned zero-correlation length limit wave function by a finite-depth local unitary circuit $U$\cite{chen2010local} that commutes with the $C_4$ symmetry and fermion parity, 
\begin{align} & U|\Psi\rangle_0^a = |\Psi\rangle^a,\quad U|\Psi\rangle_0^b = |\Psi\rangle^b. \end{align} As the unitary operator commutes with all symmetries, the new degenerate states $|\Psi\rangle^a,|\Psi\rangle^b$ carrying different fermion parity are still related by a $C_4$ rotation. We conclude that the $C_4$ symmetry and fermion parity always anti-commute inside the flux regardless of the correlation length or additional interactions. Thus, due to the projective symmetry structure inside the flux, there is no consistent way to hybridize or lift the degeneracy that preserves both rotation and fermion parity. To summarize, we have elucidated that the insertion of a $\pi$ flux in a higher-order topological superconductor engenders a projective symmetry between $C_4$ and fermion parity, so the resultant flux state contains a localized zero mode. Our argument can be generalized to other 2D higher-order topological superconductors with $C_{2n}$ symmetries\cite{zhang2022bulk}; in these cases, inserting a gauge flux is expected to engender a projective symmetry between $C_{2n}$ and $P$. \subsection{Emergent supersymmetry (SUSY) inside the flux} There has been growing interest in realizing supersymmetry, a highly appealing concept from particle physics relating bosonic and fermionic modes, in solid-state systems. In this section, we establish a general theorem that all 2D HOTSC exhibit an $N=2$ SUSY algebra inside the superconducting flux. As demonstrated in Sec.~\ref{sec:2d}, adding a flux to the HOTSC creates a projective symmetry between fermion parity and rotation. We will show that all flux states in HOTSC have an underlying $N = 2$ supersymmetry and explicitly construct the generator of the supersymmetry\cite{hsieh2016all}. To set the stage, we first shift all the eigenvalues of the Hamiltonian by a constant so that they are all non-negative. Then we define the following fermionic, non-Hermitian operators based on $C_4$ and $P$, \begin{align} Q=\sqrt{\frac{H}{2}}C_4(1+P),~Q^{\dagger}=\sqrt{\frac{H}{2}}(1+P)(C_4)^{-1}. \label{susy} \end{align} Both $Q$ and $Q^{\dagger}$ commute with the HOTSC Hamiltonian, $[H,Q]=[H,Q^{\dagger}] = 0$. Most importantly, due to the projective symmetry inside the flux, $Q, Q^{\dagger}$ obey the algebra: \begin{align} (Q)^2=0, (Q^{\dagger})^2=0,~ Q Q^{\dagger}+Q^{\dagger} Q=2H \end{align} Indeed, using $C_4P=-PC_4$ and $P^2=1$, one finds $Q^2=\frac{H}{2}C_4(1+P)C_4(1+P)=\frac{H}{2}C_4^2(1-P)(1+P)=0$, while $QQ^{\dagger}=H(1-P)$ and $Q^{\dagger}Q=H(1+P)$ sum to $2H$. Therefore, $Q$ is the generator of an $N = 2$ supersymmetry. Such supersymmetry naturally explains the degenerate modes inside the flux. Upon adding a flux to the HOTSC, all energy levels become doubly degenerate, and the corresponding eigenstates can be chosen as fermion parity eigenstates with different parity. The operators $Q,Q^{\dagger}$, assisted by the spatial rotation, play the role of exchanging these fermion parity sectors. Notably, while emergent supersymmetry in condensed matter typically requires fine-tuned Hamiltonians or critical points\cite{grover2014emergent}, the $N=2$ SUSY in the HOTSC flux is guaranteed by the projective symmetry between $C_4$ and $P$, and is thus robust against perturbations. \subsection{Detecting flux responses from the entanglement Hamiltonian} In this section, we show that the topological flux response of the higher-order topological superconductor can be assessed from a quantum information perspective by examining the entanglement Hamiltonian. 
This method not only enables the detection of the higher-order topological superconductor numerically, but also suggests that the hidden topological structure, including the topological flux response, can be understood from an entanglement viewpoint. \begin{figure}[h] \centering \includegraphics[width=3.5in]{entangle.png} \caption{Tracing out the central block of the wave function yields the reduced density matrix $\rho^A$, which resembles a 1D Majorana chain along the square cut. Each quadrant of $\rho^A$ contains an odd number of Majoranas.} \label{entangle} \end{figure} To begin with, we trace out the system's central block, with a size larger than the correlation length but still finite compared to the thermodynamic limit. The resultant reduced density matrix $\rho^A=e^{-\beta H_{e}}$ can be viewed as the Gibbs state of an entanglement Hamiltonian $H_{e}$ that resembles a 1D `square frame' along the cut in Fig.~\ref{entangle}. Such a cut contains four corners, with each quadrant carrying an odd number of Majoranas. The $C_4$ rotation operator acts as a translation operator $T_{L/4}$ on the 1D entanglement Hamiltonian that shifts the fermions by a quarter of the cut length. In the thermodynamic limit, the four Majoranas at the corners of the `square frame' generate four Majorana zero modes of the entanglement Hamiltonian $H_{e}$. However, these zero modes can be hybridized for finite-size cuts. If the length of the 1D entanglement Hamiltonian is $L$, the coupling strength between the four corner Majoranas in the entanglement spectrum scales as $e^{-L/\xi}$, so Majorana zero-mode hybridization is inevitable for a finite-size system. In addition, the correspondence between the `ground state of the entanglement Hamiltonian' and the wave function correlations cannot be taken too literally. Since the reduced density matrix is the Gibbs state of the entanglement Hamiltonian (EH) at finite temperature, the high-energy modes in the entanglement spectrum (ES) also contribute to the entanglement features of the ground state. In particular, the low-lying states of the ES may undergo a phase transition while the bulk phase remains unchanged~\cite{chandran2014universal}. In terms of the ground state wave function, the central block in the reduced density matrix (with an odd number of plaquettes) contains a total $\pi$ flux. The entanglement Hamiltonian defined on the ring with $\pi$ flux inside has an anti-periodic boundary condition. The resultant $C_4$ symmetry operator of the 1D entanglement Hamiltonian can be defined as, \begin{align} & C_4(\pi): \gamma_i \rightarrow \gamma_{N+i},\gamma_{N+i} \rightarrow \gamma_{2N+i},\gamma_{2N+i} \rightarrow \gamma_{3N+i},\nonumber\\ &\gamma_{3N+i} \rightarrow -\gamma_i \end{align} Here $i$ labels the Majoranas in each quadrant, with $4N$ being the total number of Majoranas in the effective 1D entanglement Hamiltonian $H_e$. It is not hard to check that the $C_4$ rotation and fermion parity commute for the entanglement Hamiltonian $H_e$. This also agrees with the fact that the entanglement Hamiltonian can have a fully gapped spectrum for a finite-size system. To visualize the projective symmetry and zero modes inside the $\pi$ superconducting flux from the entanglement Hamiltonian, we consider the wave function with an additional $\pi$ flux in the center (so the central plaquette has zero net flux) and trace out the central block to obtain the entanglement Hamiltonian $H_e^{\text{flux}}$. 
Due to the additional flux insertion, the 1D entanglement Hamiltonian $H_e^{\text{flux}}$ has no net flux inside the ring and hence periodic boundary conditions. The resultant $C_4$ symmetry operator of the entanglement Hamiltonian can be defined as, \begin{align} C_4(0): ~&\gamma_i \rightarrow \gamma_{N+i},\gamma_{N+i} \rightarrow \gamma_{2N+i},\nonumber\\ &\gamma_{2N+i} \rightarrow \gamma_{3N+i},~\gamma_{3N+i} \rightarrow \gamma_i \end{align} $N$ is an odd number since each quadrant contains an odd number of Majoranas. After some simple algebra, we find that the $C_4$ rotation and fermion parity anti-commute, $C_4 P=-P C_4$, for the entanglement Hamiltonian $H_e^{\text{flux}}$. This indicates that these two symmetries act projectively on $H_e^{\text{flux}}$, so the full entanglement spectrum should display a robust two-fold degeneracy for all eigenstates. It is worth emphasizing that this degeneracy has nothing to do with the corner zero modes of the original Hamiltonian, which can be gapped by finite-size effects. The projective symmetry-enforced degeneracy survives even for finite-size cuts and is robust against any interaction or perturbation.
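The `simple algebra' can be spelled out explicitly (a short check of our own): the fermion parity is proportional to the product of all $4N$ Majoranas, $P\propto \prod_{j=1}^{4N}\gamma_j$, and under a relabeling $\gamma_j \rightarrow \pm\gamma_{\sigma(j)}$ it picks up the sign of the permutation $\sigma$ times one minus sign per flipped Majorana. The cyclic shift by $N$ of $4N$ Majoranas decomposes into $N$ four-cycles, each of odd parity, giving the sign $(-1)^N$. For $C_4(0)$ there are no extra sign flips, so $C_4 P C_4^{-1}=(-1)^N P=-P$ for odd $N$, whereas for $C_4(\pi)$ the $N$ extra minus signs contribute another $(-1)^N$ and the two factors cancel, $C_4 P C_4^{-1}=+P$, consistent with the commuting algebra found for $H_e$ above.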
\section{3D flux lines in HOTSC} \label{sec:3d} This section extends our discussion of anomalous flux states to interacting HOTSCs in 3D. We begin with the 3D HOTSC that supports chiral Majorana hinge modes proposed in Ref.~\cite{wang2018weak,langbehn2017reflection,benalcazar2017quantized,may2022interaction}, \begin{align} &H=\eta^T[(1-m\cos{k_z}+\cos(k_x))\Gamma^3+(1-m\cos{k_z}\nonumber\\ &+\cos(k_y))\Gamma^1+\sin(k_x)\Gamma^4+\sin(k_y)\Gamma^2-m \sin{k_z}\Gamma^0]\eta\nonumber\\ &\Gamma^1=-\sigma^y \otimes \tau^x,\Gamma^2=\sigma^y \otimes \tau^y,\Gamma^3=\sigma^y \otimes \tau^z,\nonumber\\ &\Gamma^4=\sigma^x \otimes I, \Gamma^0=\sigma^z \otimes I. \label{3dmodel} \end{align} For $-2<m<0$, the model is in the HOTSC phase\cite{benalcazar2017electric}. This model exhibits a special $C_4^T$ symmetry that combines a rotation of the $x$-$y$ plane with the time-reversal operation. The $C_4^T$ acts on the Majorana field $\eta$ as, \begin{align} \mathcal{K}\begin{pmatrix} 0 & -\tau^z \\ \tau^x & 0 \end{pmatrix} \end{align} Notably, if we take a dimension-reduction view by fixing the momentum $k_z$, the momentum layer with $k_z=\pi$ resembles the aforementioned 2D HOTSC with $C_4$ symmetry while the $k_z=0$ layer corresponds to the trivial one. $(C_4^T)^2=-1$ indicates that the Hamiltonian has a $\pi$ flux penetrating each tube along the z-direction, illustrated in Fig.~\ref{helical}. If we insert an additional $\pi$ flux along the z-direction, the corresponding central tube then contains no net flux. In what follows, we will demonstrate that such a flux insertion engenders a gapless 1D mode that is anomalous and cannot be realized in purely lower-dimensional systems. To warm up, recall from our discussion in Sec.~\ref{sec:2d} that adding a $\pi$ flux to a 2D HOTSC gives rise to a projective representation between $C_4$ and $P$. Our 3D model can be treated as layers of 2D superconductors with fixed $k_z$ momentum, so that we can treat different $k_z$ layers independently. We consider two special momentum slices, $k_z=0,\pi$, which resemble the 2D trivial and higher-order topological superconductors, respectively. Based on our discussion in Sec.~\ref{sec:2d}, it is clear that implementing a $C_4^T$ operation would change the fermion parity $P(\pi)=(-1)^{n_{\pi}}$ of the modes inside the flux line that carry momentum $k_z=\pi$. In contrast, the fermion parity $P(0)=(-1)^{n_{0}}$ of the modes carrying momentum $k_z=0$ is not affected. Based on this argument, we conclude that the algebra between $C_4^T$ and fermion parity inside the flux line has the form, \begin{align} C_4^T P(\pi)=-P(\pi)C_4^T, \end{align} Unfortunately, the above argument relies on the fact that the fermion parity number in each momentum layer (with fixed $k_z$) is a well-defined quantum number. However, as our HOTSC does not require translation symmetry, one can add disorder along the z-direction, and the corresponding $k_z$ is no longer a good quantum number. Further, in the presence of strong interactions, fermions with distinct momenta $k_z$ can hybridize and interact. In this sense, $n_{\pi}$ again becomes ill-defined when the single-particle picture breaks down. \subsection{Conflict of symmetry and quantum anomaly} Here we provide a more detailed and systematic study of the flux state based on the symmetry anomaly argument. For concreteness, we will demonstrate that the flux line inside the HOTSC displays a quantum anomaly that can be manifested as a `conflict of symmetry.' If the 1D flux line is invariant under two independent symmetries $G_1$ and $G_2$, the theory is anomalous if gauging $G_1$ would break the symmetry $G_2$ or vice versa\cite{cho2014conflicting}. Applying this `conflict of symmetry' criterion to our case, we will demonstrate that after gauging the fermion parity inside the flux line, a fermion parity gauge transformation inside the flux automatically breaks the $C_4^T$ symmetry. This conflict of symmetry alternatively suggests that the symmetry assignments of $C_4^T$ and $P$ are incompatible with open boundaries, so the corresponding 1D theory does not admit a lattice realization. In the zero correlation length limit, the HOTSC model in Eq.~\ref{3dmodel} has a coupled wire construction\cite{may2022interaction}. We can decompose the complex fermions along each z-row into two up-moving and two down-moving chiral Majoranas. Treating the z-tube as an elementary building block, it contains four chiral Majoranas living at the four hinges of the tube, illustrated in Fig.~\ref{helical}. \begin{align} H_{tube}=\eta^T (k_z) \sigma^{30} \eta \label{1dflux} \end{align} We consider the general case where the four hinges along each z-tube with counter-propagating Majorana modes are coupled in a $C_4^T$ symmetric way. After inserting an additional $\pi$ flux into the central plaquette, the $C_4^T$ symmetry permutes the four components of the 1D Majorana modes in the central tube as, \begin{align} C_4^T: \mathcal{K}\begin{pmatrix} 0 & I \\ \tau^x & 0 \end{pmatrix} \end{align} with $(C_4^T)^4=1$ provided there is no net flux inside the tube center\footnote{The $\pi$ flux from the Hamiltonian and the additional $\pi$ flux we insert cancel.}. The possible gapping terms for each z-tube in Eq.~\ref{1dflux} are: \begin{align} &m_1=\sigma^{20},m_2=\sigma^{21},m_3=\sigma^{23},m_4=\sigma^{12},\nonumber\\ C_4^T:&m_1 \rightarrow m_2,m_2 \rightarrow m_1,\nonumber\\ & m_3 \rightarrow m_4,m_4 \rightarrow -m_3, \end{align} The mass terms $m_3,m_4$ are also odd under $C_2$ symmetry, so they cannot appear as fermion bilinear masses. To make the theory compatible with $C_4^T$, we require $m_1=m_2$; since $\sigma^{20}$ and $\sigma^{21}$ commute, the sector with opposite eigenvalues of the two mass terms remains massless, and the resultant 1D flux line always remains gapless regardless of the strength of $m_1=m_2$. Thus, no band mass can fully gap out the helical modes inside the flux.
This obstruction to gapping can be generalized to the interacting case due to the existence of an anomalous symmetry. \begin{figure}[h] \centering \includegraphics[width=3.5in]{helical.png} \caption{A) 3D HOTSC top-down view of the x-y plane. An additional $\pi$ flux penetrates the central plaquette so the total flux in the central plaquette is zero. B) Treating the z-tube as an elementary building block, it contains up-moving/down-moving chiral Majoranas living at the four hinges of the tube along the z-direction.} \label{helical} \end{figure} In what follows, we will demonstrate that the helical modes in Eq.~\ref{1dflux} are anomalous and cannot exist in purely 1D lattice models. This further suggests that the helical modes cannot be trivially gapped unless we break the $C_4^T$ symmetry. We elaborate on this point by gauging the fermion parity symmetry inside the flux line and examining the role of $C_4^T$ under such a gauge transformation. Central to the discussion below is the bosonization picture of the helical Majoranas in Eq.~\ref{1dflux}. \begin{align} &\Psi^{\dagger}_L=\eta^L_1+i\eta^L_3=e^{i\theta+i\phi+i\pi/4},\nonumber\\ &\Psi^{\dagger}_R=\eta^R_2+i\eta^R_4=e^{-i\theta+i\phi+i\pi/4}\nonumber\\ &\hat{n}=\frac{\partial_z \theta}{\pi} \end{align} Here $\theta,\phi$ are bosonic fields, and the fermion charge density $\hat{n}$ is only defined modulo 2.\footnote{Here we add an additional $i\pi/4$ phase factor as a gauge choice that will simplify the $C_4^T$ symmetry transformation in the bosonization language.} The $C_4^T$ symmetry acts as, \begin{align} &\Psi^{\dagger}_L \rightarrow \Psi_R ,~\Psi^{\dagger}_R \rightarrow -i\Psi^{\dagger}_L \nonumber\\ &\theta \rightarrow \phi , ~\phi\rightarrow -\theta \end{align} The possible interactions that do not break $C_4^T$ are $\cos(2\theta)+\cos(2\phi)$ or their higher-order descendants. Precisely, the $C^T_4$ symmetry exchanges the particle-hole channel tunneling term $\cos(2\theta)$ and the particle-particle channel pairing term $\cos(2\phi)$, enforcing them to have the same strength. These terms cannot symmetrically gap out the helical modes, so the resultant theory is either gapless or symmetry-breaking. We now apply a gauge transformation of $P$ along the string from $-\infty$ to $z$, \begin{align} &G(z)=e^{i \int_{-\infty}^{z} d z' \pi n(z')}=e^{i \theta(z)} \end{align} Such a gauge transformation can be viewed as the fermion parity operator defined on an open string with its half-end terminated at $z$. The $C_4^T$ symmetry transforms $G(z)$ as, \begin{align} &C_4^T e^{i \theta(z)} (C_4^T)^{-1}=e^{-i \phi(z)}=-G(z)e^{-i \theta(z)-i \phi(z)} \label{ano} \end{align} Such a gauge transformation, equivalent to the fermion parity defined on an open chain, is not invariant under the $C_4^T$ symmetry. Notably, $e^{-i \theta(z)-i \phi(z)}$ is a fermion operator, so the $C^T_4$ transformation creates an additional fermion at the end of the fermion parity string. Such a conflict of symmetry indicates that the theory cannot be placed on an open 1D chain, as the fermion parity operator on the open chain is not invariant under $C^T_4$. As a result, the helical modes inside the flux line cannot be realized in isolated 1D lattice models with the same symmetry assignment. It is worth mentioning that the conflict of symmetry has been widely explored as a signature of anomalous surface states in symmetry-protected topological phases.
In Refs.~\cite{cho2014conflicting,kapustin2014anomalous}, it was shown that a conflict of the symmetries at an SPT surface signals that the edge theory can never be realized as a purely lower-dimensional lattice model. Our argument can be treated as a complementary theorem signaling that the flux state inside the HOTSC also contains a gapless mode with anomalous symmetry action. \section{Emergent HOTSC from Kitaev spin liquids} We conclude our discussion by extending our horizon to frustrated spin systems whose emergent quasiparticle excitations exactly reproduce the topological features of the HOTSC discussed in Sec.~\ref{sec:2d}. Namely, we begin with a bosonic spin model on a honeycomb lattice. Intriguingly, the low-energy excitations of such a bosonic system contain emergent Majoranas coupled to an emergent $Z_2$ gauge field. The Majoranas form a superconductor reminiscent of the 2D HOTSC in Sec.~\ref{sec:2d}, while the emergent flux excitation carries the $N=2$ SUSY structure. To continue, we focus on a specific solvable honeycomb lattice model. However, it is worth mentioning that the protocol and strategy we develop here can be applied to a wider class of lattice models, as we will elaborate on later. We begin with a spin-$\frac{3}{2}$ honeycomb model with strong bond anisotropy. \begin{widetext} \begin{align} &H=\sum_{i \in A, j\in B}~ [ \sum_{ij \in \text{green}}(\Gamma_1^i \Gamma_1^j-\Gamma_4^i\Gamma_1^i \Gamma_4^j\Gamma_1^j)-\sum_{ij \in \text{blue}}(\Gamma_3^i\Gamma_4^i \Gamma_3^j\Gamma_4^j+\Gamma_5^i\Gamma_3^i \Gamma_5^j\Gamma_3^j)+\sum_{ij \in \text{red}}(\Gamma_2^i \Gamma_2^j-\Gamma_5^i\Gamma_2^i \Gamma_5^j\Gamma_2^j)~] \label{spin} \end{align} \end{widetext} Here $\Gamma_a~(a=1,\dots,5)$ are the $4\times 4$ Gamma matrices with $-i\prod_{a=1}^{5}\Gamma_a=1$. At each A/B site on the hexagonal lattice, we color the three directional bonds red/green/blue as in Fig.~\ref{kitaev}. Each spin only interacts with its nearest neighbor across the red/green/blue bond, and each colored bond has two preferred spin bilinear interactions. \begin{figure}[h] \centering \includegraphics[width=3.0in]{kitaev.png} \caption{The spin-$\frac{3}{2}$ degree of freedom and its Majorana representation on the A/B sublattice. The $\pi_i$ Majoranas are the itinerant fermions that hybridize only with their nearest neighbors within the hexagon (illustrated as the red dashed line). The $\eta_i$ Majoranas play the role of the emergent $Z_2$ gauge potential. The Hamiltonian commutes with the flux operator defined on the blue hexagon.} \label{kitaev} \end{figure} Although the model appears non-integrable, it admits an exact solution in the spirit of the original Kitaev model\cite{kitaev2006anyons}. In terms of the parton construction, we can fermionize the spin-$3/2$ operators by introducing six Majoranas $\pi_1, \pi_2, \pi_3, \eta_1,\eta_2,\eta_3$ per site as in Fig.~\ref{kitaev}. We restrict our Hilbert space to fixed onsite parity $i\pi_1\pi_2\pi_3 \eta_1\eta_2\eta_3=1$, so the six Majoranas with even parity generate a four-level system per site, akin to the spin-$3/2$ degree of freedom. The spin Gamma matrices can be expressed as, \begin{align} &\Gamma^1=i\pi_1 \eta_1,~ \Gamma^2=i\pi_1 \eta_2,~ \Gamma^3=i\pi_1 \eta_3,~ \Gamma^4=i\pi_2 \pi_1,~\nonumber\\ &\Gamma^5=i\pi_3 \pi_1 \end{align} So the Clifford algebra is automatically satisfied.
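As an explicit check (our own two-line algebra, using $\pi_a^2=\eta_a^2=1$ and the anticommutation of distinct Majoranas), \begin{align} (\Gamma^1)^2 &= -\pi_1\eta_1\pi_1\eta_1 = \pi_1^2\eta_1^2 = 1,\nonumber\\ \Gamma^1\Gamma^2 &= -\pi_1\eta_1\pi_1\eta_2 = \eta_1\eta_2 = -\Gamma^2\Gamma^1, \end{align} and similarly for the remaining pairs, so that $\{\Gamma^a,\Gamma^b\}=2\delta^{ab}$.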
We can express the spin operators in the Hamiltonian as, \begin{align} &\Gamma^1=i\pi_1 \eta_1,~ i\Gamma^4 \Gamma^1=-i\pi_2 \eta_1,~ \Gamma^2=i\pi_1 \eta_2,~\nonumber\\ &i\Gamma^5 \Gamma^2=-i\pi_3 \eta_2,~ i\Gamma^3 \Gamma^4=-i\eta_3 \pi_2,~ i\Gamma^5 \Gamma^3=i\eta_3 \pi_3 \end{align} The model displays a special $C'_6$ symmetry that combines a hexagon-centered $C_6$ rotation with an $S_3$ spin rotation, $C'_6=C_6 \times S_3$. Under the Majorana representation, the spin rotation becomes an $S_3$ permutation between Majorana flavors, \begin{align} &\pi_3 \rightarrow \pi_1,~\pi_1 \rightarrow \pi_2,~\pi_2 \rightarrow \pi_3,\nonumber\\ &\eta_3 \rightarrow \eta_2,~\eta_2 \rightarrow \eta_1,~\eta_1 \rightarrow \eta_3,~ \label{gauge} \end{align} It is not hard to find a locally conserved hexagon operator, illustrated in Fig.~\ref{kitaev}, that commutes with all spin interactions in the Hamiltonian. This enables us to treat the $\eta_i$ fermion bilinears as the gauge potential on the links, \begin{align} &e^{i\pi A_{ij \in \text{green}}}=i\eta^i_1 \eta^j_1,~~e^{i\pi A_{ij \in \text{red}}}= i\eta^i_2 \eta^j_2,\nonumber\\ &e^{i\pi A_{ij \in \text{blue}}}=i\eta^i_3 \eta^j_3 \label{flux} \end{align} $A_{ij}$ denotes the $Z_2$ gauge potential on the link between sites $i$ and $j$ (with $i$ ($j$) belonging to the A (B) sublattice). The $Z_2$ potential on the tricolored links can be written as the Majorana fermion bilinears $i\eta^i_a \eta^j_a~(a=1,2,3)$ that cross the links. As a result, the total flux in each hexagon, $\oint \vec{A} \cdot d\vec{l}=\Phi$, is manifested by the product of the Majorana bilinears defined in Eq.~\ref{flux} across the six links along the hexagon loop, which reduces to the hexagon operator in Fig.~\ref{kitaev}. Our argument makes it clear that, under the Majorana representation of the spin operators, the $\eta$ fermions play the role of the $Z_2$ gauge potential akin to the original Kitaev model. Likewise, the $\pi_a~(a=1,2,3)$ fermions can be treated as itinerant Majoranas that hop between nearest sites, minimally coupled to the gauge potential $A_{ij}$ on the links. To make this manifest, we can decompose the spin interactions as \begin{align} &\sum_{ij \in \text{green}}\Gamma_1^i \Gamma_1^j =-\sum_{ij \in \text{green}}\pi_1^i \eta_1^i \pi_1^j \eta_1^j \nonumber\\ &\sum_{ij \in \text{green}}\Gamma_4^i\Gamma_1^i \Gamma_4^j\Gamma_1^j=\sum_{ij \in \text{green}}\pi_2^i \eta_1^i \pi_2^j \eta_1^j\nonumber\\ &\sum_{ij \in \text{blue}}\Gamma_3^i\Gamma_4^i \Gamma_3^j\Gamma_4^j=\sum_{ij \in \text{blue}} \pi_2^i \eta_3^i \pi_2^j \eta_3^j\nonumber\\ &\sum_{ij \in \text{blue}}\Gamma_5^i\Gamma_3^i \Gamma_5^j\Gamma_3^j =\sum_{ij \in \text{blue}} \pi_3^i \eta_3^i \pi_3^j \eta_3^j\nonumber\\ &\sum_{ij \in \text{red}}\Gamma_2^i \Gamma_2^j =-\sum_{ij \in \text{red}} \pi_1^i \eta_2^i \pi_1^j \eta_2^j\nonumber\\ &\sum_{ij \in \text{red}}\Gamma_5^i\Gamma_2^i \Gamma_5^j\Gamma_2^j =\sum_{ij \in \text{red}} \pi_3^i \eta_2^i \pi_3^j \eta_2^j \end{align} In the Majorana representation, it is clear that all bond interactions in Eq.~\ref{spin} can be treated as Majorana hopping between nearest sites, minimally coupled to the gauge potential $A_{ij}$ represented by the $\eta$ fermion bilinears. Since the flux operators commute with the Hamiltonian, we can fix the flux sector $\Phi$ when focusing on the ground state manifold and treat $A_{ij}$ as a constant. For the zero-net-flux sector, we can simply take $A_{ij}=0$, and the permutation symmetry of the $\pi$ Majorana fermions still holds.
With $\pi$ flux patterns, any specific gauge choice of $A_{ij}$ will break the permutation symmetry of the $\eta$ Majorana fermions. From the Majorana construction perspective, the effective Hamiltonian becomes a free Majorana model with three orbitals $\pi_1,\pi_2,\pi_3$ per site. Each orbital hybridizes only with one of the three adjacent hexagons, as in Fig.~\ref{kitaev}. Consequently, the fermion model decomposes into non-overlapping clusters formed by the hexagons. Each hexagon contains six Majoranas hybridized with their nearest neighbors, resembling the higher-order topological superconductor on the honeycomb lattice\cite{zhang2022bulk}. In particular, one can easily check that the lowest-energy state requires $\Phi=\pi$ flux per plaquette, so the ground state is in the $\pi$-flux sector. The effective band structure for the itinerant Majoranas $\pi_1,\pi_2,\pi_3$ is reminiscent of the higher-order topological superconductor on the honeycomb lattice. Remarkably, such an HOTSC may not exhibit protected corner modes for sharp corners with $2\pi/3$ angles. Nonetheless, the topological response still holds. By creating an additional $\pi$ flux excitation in the center, the $C'_6$ symmetry and fermion parity anti-commute, so the flux excitations display a projective symmetry. As both the itinerant Majoranas and the $Z_2$ flux excitations originate from the spin model as fractionalized excitations, the $Z_2$ flux should be treated as an intrinsic excitation rather than an external field that characterizes and probes the response. In particular, the flux excitation in this spin model contains the $N=2$ SUSY structure we explored in Eq.~\ref{susy}. Finally, the construction we adopt here can be generalized to spin models on other 2D lattices. The essence relies on the fact that for any HOTSC, we can introduce a $Z_2$ gauge potential on the links and express it as a pair of ``auxiliary'' Majorana fermion bilinears across each link. After onsite fermion parity projection, the resultant onsite degree of freedom becomes a hyper-spin operator, and the fermion hopping terms minimally coupled to the $Z_2$ gauge potential can be written in terms of spin-bilinear interactions. Following this protocol, one can build a zoology of `Kitaev spin liquids' whose low-energy excitations comprise Majoranas with an HOTSC band structure coupled to an emergent $Z_2$ gauge field. The flux excitations in these Kitaev-type models carry exotic SUSY structures with a projective symmetry between spatial rotation and fermion parity. \acknowledgments Y.Y. was supported by the Gordon and Betty Moore Foundation through Grant GBMF8685 and by the Marie Sklodowska-Curie Actions under the Horizon 2020 programme. Y.Y. acknowledges informative discussions with Taylor Hughes and Rui-Xing Zhang.
\section{Conclusion} \label{conclusion} This paper presented an open-source labeled dataset and a benchmark for object detection in the \gls{ssl}. The proposed dataset guarantees variety, with images extracted from different sources under distinct lighting conditions and camera configurations. The labeled objects are Robot, Ball, and Goal, which are the essential objects found during an \gls{ssl} game. The dataset's images can contain multiple instances of these objects, including images with no objects at all. We also presented a pipeline to train a \gls{cnn} and deploy it on an embedded device with limited computational power. The results show that \gls{cnn} models are robust to variable light conditions and can also detect robots with different structures. This result contrasts with using color segmentation with scan lines. Color segmentation can be easily disturbed by these circumstances since it needs fine-tuned parameters that rely on image saturation and brightness. The presented dataset has a similar size to the datasets used in other RoboCup leagues. However, it is smaller than general purpose object detection datasets. So, data augmentation techniques were applied to increase diversity and model generalization. A future improvement to the dataset is to add images from game situations and different field configurations. Besides, increasing the number of distinct robot instances and the number of Large instances will boost the dataset's robustness. This paper uses the proposed dataset to evaluate \gls{ap}, \gls{ar}, and \gls{fps} of four different \gls{cnn} models on constrained hardware. Furthermore, this paper highlights the importance of model architectural optimizations. Future works will analyze other models and modifications to hyperparameters, such as input size, to enhance Small object detection. Further work will also analyze other techniques, such as tracking, to continuously detect objects across multiple frames in a real game environment. \section{Dataset} \label{dataset} \subsection{Dataset Creation} The first step to create a labeled dataset is to select the images to be part of it. The proposed dataset's images come from three different sources, to include images under different conditions and angles. The first set of images consists of 259 pictures taken from outside of the \gls{ssl} field, obtained from public image repositories of league teams. This set contains a variety of robot models and images taken under various light conditions. The second set has 516 brand-new images taken for this dataset with a smartphone camera on a university laboratory field. Furthermore, the remaining 156 images were collected similarly to the second set but came from the final configuration, a camera placed on the \gls{ssl} robot. The images from this last set came from videos, sampled at a rate of 10 \gls{fps} to avoid using near-identical images. The combination of those sets results in a dataset of 931 images. After collecting the images, they were resized to a standard resolution of $224 \times 224$ pixels, as used by \cite{mobilenet,mobilenetv2}. \fref{fig:dataset_sample} shows some labeled examples from this dataset; each column of this figure has two examples from one of the sets of images. \input{figures/dataset/dataset_sample} The next step in creating the proposed dataset was to add labels to the objects in the images. Three object classes were defined for labeling: Robot, Ball, and Goal. These classes are the distinctive and relevant ones to detect in an \gls{ssl} game.
Each image in this dataset can contain multiple labels, or none at all. LabelImg \cite{labelimg} was used to label the images. This tool outputs rectangular bounding-box annotations in Pascal VOC and YOLO formats. After labeling the images, they were randomly divided into train and test sets using a 70/30 proportion, as in other robotics object detection works \cite{dataset_msl}; \tref{tab:dataset_size} shows the final result of this division. This creation process took 160 working hours, most of them spent manually adding labels to each image. The proposed dataset is fully available on the authors' GitHub\footnote{\url{https://github.com/bebetocf/ssl-dataset}}. \input{tables/dataset/size} \subsection{Dataset Statistics} The proposed dataset's main objective is to support detecting objects in distinct game situations, so it is important to have multiple instances in each image. \fref{fig:instance_size} shows the number of instances per image. It is possible to see that most images have more than one instance, so there are more class instances than images. The dataset has 4182 instances, averaging 4.5 instances per image, which helps mitigate the low number of images. \input{figures/dataset/instance_size} \tref{tab:dataset_div} shows the instance division in the proposed dataset. The Goal class has fewer examples than the Robot and Ball classes because not all images have a Goal instance, and when one appears, there is only one instance per image. Besides, the Goal instances are characteristic since they are very similar to each other. Like COCO \cite{dataset_coco}, other datasets also have imbalanced classes, and techniques could be used to balance a dataset \cite{oksuz2020imbalance}. \input{tables/dataset/division} The proposed dataset classifies the objects by their area into Small, Medium, and Large, similarly to COCO \cite{dataset_coco}. Small objects have an area of less than $32\times32~(1024)$ pixels and represent 2919 instances. Medium objects, with 1225 instances, have an area between $32\times32~(1024)$ and $96\times96~(9216)$ pixels. Moreover, Large objects are bigger than $96\times96~(9216)$ pixels, representing 38 instances. Most objects concentrate in the Small area class, approximately 70\% of all objects, due to the low-resolution images. \fref{fig:dataset_hist} shows each instance's division by class and area size. It is possible to see that almost all Ball instances are Small due to the ball's actual size. More than half of the Robot examples are Small due to the images from the first set taken from outside the field, where robots are far from the camera. Furthermore, most of the Goal samples are in the Medium class. \input{figures/dataset/size_histogram}
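For concreteness, these COCO-style area buckets can be expressed as a small helper (a sketch in Python; the function name and the $(x_{min}, y_{min}, x_{max}, y_{max})$ box format in pixels are our own illustrative choices, not part of the released tooling): \begin{verbatim}
def area_class(box):
    """Classify a bounding box into the COCO-style size buckets
    used by the proposed dataset (thresholds 32*32 and 96*96)."""
    xmin, ymin, xmax, ymax = box
    area = (xmax - xmin) * (ymax - ymin)
    if area < 32 * 32:
        return "Small"
    elif area <= 96 * 96:
        return "Medium"
    return "Large"
\end{verbatim}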
\subsection{Environment} \label{environment} One of the main drawbacks of using a \gls{cnn} is the requirement of a \gls{gpu} to run inference at a frequency suitable for real-time detection \cite{gpucpu}. Besides, a \gls{gpu} is too big to use in an \gls{ssl} robot, and its power consumption is too high for the battery that fits in one of these robots. However, improvements in \gls{cnn} inference time on embedded systems make this method an excellent option for this technical challenge. The primary constraint on this work is the environment delimitation due to the league's restrictions \cite{rules_ssl}. The robot used for testing was a modified version of the RobôCIn v2020 \cite{tdp2020robocin}. As the technical challenge does not have a height restriction, another floor was added to the robot to fit additional hardware. All the modifications should have low power consumption, as the robot uses a LiPo 2200~mAh 4S 35C battery. This battery is enough to supply four brushless motors of 50W each and all of the robot's other needs. A Raspberry Pi 4 Model B, a Google Coral Edge TPU accelerator, and a camera module were added to the robot, composing the vision system. The camera can capture images at up to 90 \gls{fps} at a resolution of $640 \times 480$ pixels. These new components aim to tackle the lack of computational power in the main microcontroller, an STM32F767ZI. The power consumption of a Raspberry Pi 4 with the camera module is up to 7.5W, and that of the Google Coral is 4.5W, which fits within the power budget of the robot's battery. The vision system was included in a way that avoids modifying the architecture and data flow of the current robot. In the current robot, the microcontroller controls the motors to operate at the desired speeds. In the new system, the Raspberry Pi receives the camera's captured frames and feeds them to the inference model running on the Google Coral Edge TPU. After the inference, the model outputs the detected objects to the Raspberry Pi, which computes where the robot should go and sends this position to the microcontroller. \fref{fig:system} shows the new system architecture of the robot. \input{figures/environment/system_overview}
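A minimal sketch of this capture-and-infer loop on the Raspberry Pi is shown below (Python with the \texttt{tflite\_runtime} package; the model file name, the $224 \times 224$ uint8 input, and the output parsing are illustrative assumptions, not the exact code used): \begin{verbatim}
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the quantized detection model onto the Edge TPU.
interpreter = Interpreter(
    model_path="ssl_detector_edgetpu.tflite",  # hypothetical file name
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

def detect(frame):
    """Run one inference on a 224x224 uint8 RGB frame and return
    the raw output tensors (e.g., boxes, classes, scores)."""
    interpreter.set_tensor(inp["index"], np.expand_dims(frame, 0))
    interpreter.invoke()
    return [interpreter.get_tensor(o["index"]) for o in outs]
\end{verbatim} In the full system, \texttt{frame} would come from the camera module, and the parsed boxes, classes, and scores would drive the navigation code before the target position is sent to the microcontroller.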
\section{Evaluation Methodology} \label{evaluation} \input{chapters/environment} \input{chapters/models} \section{Introduction} \label{introduction} The \acrfull{ssl} is one of the most traditional leagues in RoboCup. In this league, it is possible to precisely perform a wide range of dynamic plays at every moment during a game. The decision-making process at each play needs to be fast to keep up with the fast-paced game, in which robots usually move at $3m/s$, and the ball reaches $6.5m/s$. These actions are possible due to the use of omnidirectional wheeled robots and the use of SSL-Vision \cite{sslvision} as a global vision system. Due to the use of SSL-Vision, all robots have all the field information, making it easy to design and develop a tactic. With this external vision system, a team in the \gls{ssl} is considered a semi-autonomous system. As a comparison, in the \gls{msl} and \gls{spl}, instead of using external information, each robot has its own camera and vision system, limiting the information to which it has access. Thus, they are considered fully autonomous systems because each robot can perform a tactic without receiving external information. A technical challenge \cite{tech_ssl} was introduced in 2019 to evolve the league, encouraging the teams to develop and propose a local vision system. This challenge aims to bring autonomy to an \gls{ssl} robot, in a similar way to the \gls{msl} and \gls{spl}. An \gls{msl} robot fits in $52\times52\times80cm$, which can accommodate a full-size computer, and an \gls{spl} robot cannot be modified, since the league uses the NAO as its standard platform. However, an \gls{ssl} robot needs to fit in a cylinder with a height of 15cm and a diameter of 18cm \cite{rules_ssl}, which constrains the robot's vision system complexity. In the first three steps of this challenge, a robot has to grab a stationary ball, find a goal, and score against a static defender robot without receiving any information from the SSL-Vision. Therefore, a robot has to detect a Robot, a Ball, and a Goal autonomously. It also has to respect the league requirements, except for the height restriction, leaving only a small room for hardware improvements. The straightforward option to detect these objects uses scan lines and color segmentation to detect the ball \cite{tigers_scan}, as the league uses an orange golf ball. However, this approach cannot detect robots and goals because they do not have a unique pattern. For instance, a team can use robots of any color, making it harder to use this technique. Besides, the color segmentation approach needs to be re-calibrated on each slight environment variation, such as uneven illumination or field changes \cite{segmentation}. The state of the art in object detection relies on \acrfull{cnn} \cite{yolov4}, which, given a labeled dataset, trains a model once and does not need any further calibration or modification. Besides, this approach is robust to occlusion, scale transformation, and background switches \cite{cnn_survey}, which makes \gls{cnn} models strong candidates for use in the \gls{ssl}. For other RoboCup leagues, like the \gls{spl} \cite{spl_dataset} and the \gls{msl} \cite{dataset_msl}, there are public object detection datasets. However, the \gls{ssl} has no open-source labeled dataset, and creating a new one takes time, making the research and development of object detection models in this league even harder. Therefore, given the \gls{ssl} technical challenge, the league constraints, and the lack of an open-sourced dataset, this paper has two main contributions: \begin{itemize} \item Propose a novel open-source dataset for the \gls{ssl}, containing labels for Robot, Ball, and Goal, intended to benchmark object detection in this league. \item Evaluate and compare \gls{cnn} models, respecting the league's hardware constraints while achieving an inference frequency of at least 24 \acrfull{fps}, the real-time rate necessary during actual games. \end{itemize} This paper's remainder is organized as follows: \sref{related_work} will present some related works. \sref{dataset} will detail the dataset. \sref{evaluation} will explain the evaluation methodology. \sref{results} will show and discuss the achieved results. \sref{conclusion} will present what can be concluded and propose some future works. \subsection{Models and Experiments} \label{models} The pipeline to train, run, and evaluate the models follows the same standards for each approach. Transfer Learning was used due to the proposed dataset's size and the system restrictions; it speeds up training and takes advantage of low-level learned features \cite{transfer}. This technique takes a model pre-trained on another dataset and then fine-tunes it on the proposed dataset. The proposed dataset was evaluated using MobileNet SSD v1 \cite{mobilenet}, MobileNet SSD v2 \cite{mobilenetv2}, MobileDet \cite{mobiledet} and YOLO v4 Tiny \cite{yolotiny}, which are state-of-the-art object detection models. The TensorFlow Object Detection API \cite{tf_object_api} was used to train the MobileNet and MobileDet models. These models' training was improved using data augmentation techniques such as Horizontal Flip, Image Crop, Image Scale, Brightness Adjustment, Contrast Adjustment, Saturation Adjustment, and Black Patches. YOLO v4 tiny is a shallow version of YOLO v4 \cite{yolov4}, designed to run on embedded systems. It already uses CutMix, Mosaic, Class Label Smoothing, and Self-Adversarial Training, so this architecture does not need any extra data augmentation technique. Furthermore, due to limitations of the portability process for the Google Coral Edge TPU, the YOLO v4 tiny uses ReLU rather than Leaky ReLU as its activation function. These models were optimized using Integer Quantization, which increases the inference speed while maintaining network precision \cite{quantization}. This method consists of converting the network weights from floating-point numbers to integer values. After training, the models were quantized and converted to a TensorFlow Lite compatible model, which is required to compile the model to run on a Google Coral Edge TPU accelerator.
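This conversion step can be sketched as follows (TensorFlow post-training full-integer quantization; the saved-model path and the calibration iterable are placeholders, not the exact code used): \begin{verbatim}
import tensorflow as tf

def representative_data_gen():
    # Yield a few hundred preprocessed 224x224 training images so the
    # converter can calibrate activation ranges for int8.
    for image in calibration_images:  # placeholder iterable
        yield [image[None, ...].astype("float32")]

converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())
\end{verbatim} The resulting \texttt{model\_quant.tflite} would then be compiled with the \texttt{edgetpu\_compiler} tool before deployment on the accelerator.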
\subsection{Running and Evaluating} \label{running} The primary constraint on using a trained model on an \gls{ssl} robot is inferring in real time. A model has to run at 24 \gls{fps} or more to be considered a real-time inference. This frame rate is acceptable given the league's object speeds since a ball, the fastest object on the field, with a maximum speed of $6.5 m/s$ \cite{rules_ssl}, would move only $27cm$ between inferences. The models were evaluated using the same metrics as the COCO dataset, namely \gls{ap} and \gls{ar}. In those metrics, determining whether a detected object is a true positive or a false positive requires defining a \gls{iou} threshold for what counts as a correct prediction. The \gls{ap} and \gls{ar} metrics use the mean over ten \gls{iou} threshold values from $0.5$ to $0.95$ with a step of $0.05$. Besides, these metrics are presented for each object size. The evaluation is made using an open-source tool \cite{metrics_tool} which, given ground-truth labels and predictions, outputs the COCO metrics used to compare each model's results.
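For reference, the \gls{iou} underlying these metrics is the ratio between the intersection and union areas of a predicted and a ground-truth box; a minimal sketch follows (our own helper, with boxes given as $(x_{min}, y_{min}, x_{max}, y_{max})$): \begin{verbatim}
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
\end{verbatim} A detection whose \gls{iou} with a same-class ground-truth box meets a given threshold, e.g., $0.5$, counts as a true positive at that threshold.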
\section{Related Work} \label{related_work} Object detection has been one of the most studied fields in computer vision since the first use of \gls{cnn} \cite{lenet}. Since then, datasets have been released to improve object detection models. Among these released datasets, some label many classes, such as COCO with 91 classes, and others are task-specific, with fewer than three classes. This section will present some of these datasets, such as COCO and some datasets used in other RoboCup leagues. The most famous and widely used dataset for object detection is COCO \cite{dataset_coco}, released in 2014. This dataset contains 328,000 images, collected from Flickr, chosen to avoid iconic-object images containing a single object centered in the image. Thus, the COCO dataset focuses on non-iconic images, which means images with multiple categories in diverse contexts. This strategy helps trained models to generalize over object instances, given the multiple contexts. The classes used in the COCO dataset were chosen among 255 candidates given by children from 4 to 8 years old. The authors then voted on these categories based on how often each category occurred, and the most voted ones were selected, resulting in 80 classes. This dataset consists of 2.5 million instances, averaging 7.7 instances per image. It took 77,000 working hours to label all of these instances. Moreover, in other RoboCup leagues, some datasets appear as good options. For instance, in the \gls{msl}, there is an open object detection dataset \cite{dataset_msl}, which consists of 1456 images, divided into train and test sets using a 70/30 proportion. This dataset uses images taken from the robot camera and images taken from outside of the field at different competitions to increase the variety of the dataset. This dataset provides the annotations in Pascal VOC and YOLO formats, although it has only one class, labeling robot instances. The \gls{spl} has an open-source tool to create and share datasets for object detection \cite{imagetagger}, which hosts several images labeling Robot, Ball, and Goalpost. Besides, teams have been regularly releasing their datasets, for instance, the SPQR dataset \cite{spl_dataset}, which labels the same three classes. The SPQR dataset contains 2411 images collected under various game conditions, such as natural and artificial light. Other \gls{spl} datasets focus only on detecting the Ball, as in \cite{ball_spl}. This dataset has 6564 images collected from RoboCup logs and the authors' laboratory, under varying lighting conditions. The images have a fixed size of $640\times480$ pixels showing static and moving balls, resulting in 5209 ball examples. \section{Results} \label{results} \tref{tab:results_coco_ap} shows the \gls{ap} for the four models, separated by \gls{iou} threshold and object area. The \gls{ap} for Medium and Large objects shows how powerful these models can be in less challenging scenarios, where the object is much closer to the robot. However, the results for Small objects are worse than for Medium and Large objects, which indicates a high false-positive rate. This error occurs due to the limited information in objects of Small size. \input{tables/results/coco_ap} From the \gls{ap} perspective, the MobileNet SSD v1 had the best result overall and for Large objects. The \gls{ap} for Large objects on the YOLO v4 tiny model was worse than for Medium objects, which is a peculiar behavior since the other models achieve better \gls{ap} when detecting Large objects. This result can indicate that YOLO v4 tiny needs more labeled data of Large size, as there are only 38 objects of this size in the proposed dataset. \tref{tab:results_coco_ar} shows the \gls{ar} results separated by maximum detections per image and detected object size. A high \gls{ar} is important for the Robot and Goal classes, as the robot relies on it to avoid colliding with other robots when navigating, and it helps the robot identify the Goal faster. The Robot and Goal classes represent all of the Large objects and $95\%$ of the Medium objects, as shown in \fref{fig:dataset_hist}. \input{tables/results/coco_ar} The obtained \gls{ar} for Medium and Large object sizes shows a high detection rate, with the MobileNet SSD v1 achieving the best \gls{ar} results overall. It was also observed that YOLO v4 tiny had worse results for Large objects, which supports the necessity of more samples of Large objects for this model. \tref{tab:results_fps} shows each model's inference frequency, where the MobileNet SSD v1 had the best \gls{fps} overall, while MobileNet SSD v2 and MobileDet also had rates that fit the requirement of at least 24 \gls{fps}. However, the YOLO v4 Tiny had a poor result with only 10 \gls{fps}, which is caused by the lack of architectural optimizations in the network compared with the other evaluated models. This shortage of optimizations results in a poor mapping to the Google Coral. \input{tables/results/fps} \fref{fig:precision_recall} shows the Precision-Recall curve for each model, separated by class, using a \gls{iou} threshold of $0.5$. The Ball class is the most difficult class to detect due to the object size. At this \gls{iou}, the MobileNet SSD v2 had a good result in all three classes but a smaller recall in the Goal class. The YOLO v4 tiny had good precision when detecting all classes but could detect only $20\%$ of the Ball instances in the test set.
This result explains why the \gls{ap}$_{50}$ for this model is lower than that of MobileNet SSD v2, since it is calculated by averaging precision across recall values from 0 to 1. This figure also shows that the smaller number of examples in the Goal class was not a problem, since the Precision-Recall curve for this class is very similar to those of the other classes. \input{figures/results/precision_recall_all} \subsubsection{Acknowledgements} The authors would like to acknowledge the RoboCIn team and the Centro de Informática - UFPE for all the research support. The first and second authors were also funded by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). The authors thank all the \gls{ssl} teams for the open-source images from past competitions.
\section{Introduction} Data augmentation techniques are used to enhance models' performance by adding additional variations to the training data. These techniques are widely applied to improve automatic speech recognition (ASR) performance \cite{ko15_interspeech,kim17_interspeech,Hannun2014DeepSS,nguyen2020improving}. In \cite{ko15_interspeech}, the authors used speed perturbation to create new speech utterances by changing the frequency components and number of time frames of speech recordings. This additional training data helped to decrease the word error rate (WER) by 3.2\% relative on the Librispeech task with 960 hours of Librispeech data. In \cite{kim17_interspeech}, reverberation was added to the speech to make it more realistic. Recently, a common technique is to remove or mask information in the spectrogram domain. For instance, SpecAugment \cite{park2019specaugment} removes speech information in $T$ contiguous random time frames or $F$ frequency bins. At the time, this augmentation not only increased ASR accuracy, but also achieved the state-of-the-art WER on the LibriSpeech 960-hour dataset at 5.8\%. \cite{Hannun2014DeepSS} proposed data augmentation via adding additional noise to speech, reducing WER by 21.3\% relative on their self-constructed 100-sentence evaluation set. Recently, data augmentation techniques have been introduced that utilize importance or saliency maps. There are many methods to predict importance and saliency maps, e.g., \cite{itti1998model,Harel2006,jetley2016end,Kummerer_2017_ICCV,hou2007saliency,pan2017salgan,kim2017bubbleview,spille2017listening,Trinh2020,trinh2018bubble,9271908}, but few previous studies have investigated applications of such maps. In the visual domain, a recent work \cite{gong2021keepaugment} used saliency maps for data augmentation. Instead of using noise, the authors cut random rectangles out of an image if the sum of the importance scores of all the pixels inside the rectangle was smaller than a threshold. In speech, \cite{do2018weighting} used a bottom-up approach to predicting auditory saliency maps to improve ASR performance. They used Gabor filters to extract intensity and contrast in time and frequency to find the saliency maps. This saliency map is then multiplied with the spectrogram, resulting in a weighted spectrogram, from which features are extracted for ASR. This approach achieved a 5.3\% relative WER reduction compared to a baseline that did not use importance maps. We introduced a top-down adversarial approach to predicting importance maps in \cite{trinh2018bubble,kavaki20_interspeech}. The current paper builds upon those approaches to introduce a method of using our top-down importance maps for data augmentation in speech command recognition. In contrast to \cite{do2018weighting}, we use a top-down approach to identify the regions that are important for recognizing the specific production of the specific words in a given utterance. Furthermore, these regions are directly related to the speech recognition task, which is different from bottom-up approaches, which produce the same prediction regardless of the task. For instance, a bottom-up approach using intensity filters might predict that a spectrogram area containing loud noise is important for the speech recognition task.
\begin{figure*} \centering \begin{adjustbox}{width=\textwidth} \begin{tikzpicture}[every node/.style={font=\footnotesize}] \node (word2_pic)[inner sep=0pt] {\includegraphics[width=0.1\textwidth]{figures_importantAug/clean_speech.eps}}; \node (dot1) [c,right of = word2_pic, xshift = 0.5 cm, fill = black,minimum size=0.5em]{}; \node (text) [above of= word2_pic,yshift=-0.2cm] {Clean speech $S$}; \node (gen) [process, right of =dot1,xshift= 0.7 cm] {Generator G}; \node (mask_pic) [inner sep=0pt,right of= gen, xshift=1.5 cm] {\includegraphics[width=0.1\textwidth]{figures_importantAug/mask.eps}}; \node (multi1) [c, right of= mask_pic, xshift=1 cm] {$\odot$}; \node (add2) [c, right of= multi1, xshift=5 cm] {$+$}; \node (G2) [tria_r,below of =multi1, xshift= 1 cm,yshift= -0.2 cm] {A}; \node[inner sep=0pt,right of=G2, xshift=1 cm] (noise) {\includegraphics[width=0.1\textwidth]{figures_importantAug/noise.eps}}; \node (text) [above of= noise,yshift=-0.2cm] {Noise N}; \node (text) [above of= mask_pic,yshift=-0.2cm] {Importance map $M_{\theta}$}; \node[inner sep=0pt,above of= add2, xshift=2.2 cm, yshift= -0.2 cm] (noisy_pic) {\includegraphics[width=0.1\textwidth]{figures_importantAug/x2.eps}}; \node (text) [above of= noisy_pic,yshift=-0.2cm] {Noisy mixtures}; \node (SCR) [process, right of= add2, xshift=4 cm] {Speech Command Recognizer}; \draw (dot1) -- ($(dot1)+(0,1.1)$); \draw [arrow] ($(dot1)+(0,1.1)$) -| (add2); \draw [arrow] (noise) -- (G2); \draw (word2_pic) -- (dot1); \draw [arrow] (dot1) -- (gen); \draw [arrow] (gen) -- (mask_pic); \draw [arrow] (mask_pic) -- (multi1); \draw [arrow] (multi1) -- (add2); \draw [arrow] (add2) -- (SCR); \draw[arrow,rounded corners=5pt] (G2) -| (multi1); \end{tikzpicture} \end{adjustbox} \caption{ImportantAug scheme. The mask generator's task is to output an importance map (mask) for an utterance that admits maximal noise while interfering with the recognizer as little as possible. The mask is point-wise multiplied ($\odot$) with the scaled noise and added to the clean speech. The mask contains values close to 0 at important points and values close to 1 at unimportant points.} \label{fig:impAugArch} \end{figure*} In section 2, we discuss our ImportantAug\footnote[1]{The code is available at https://github.com/tvanh512/importantAug} method, where we first identify the importance maps and then utilize them to augment the data. In section 3, we present our experimental setup with details about the data, hyperparameter settings, and experiments. The results on clean, in-domain noisy, and out-of-domain noisy test sets are illustrated in section 4. \section{Method} The proposed network has a speech command recognizer and a mask generator, as illustrated in Figure \ref{fig:impAugArch}. The speech command recognizer's task is to classify the input utterances into the correct classes. The mask generator's task is to add as much noise as possible to utterances without harming the performance of the recognizer. This has the effect of generating importance maps, which are utilized for data augmentation. Our networks are trained in two stages. In the first stage, we train the generator so that it can output importance maps (masks). We load a recognizer that is pre-trained on clean speech. Then, we freeze the recognizer and train only the mask generator. The generator receives clean speech as input and outputs a mask. This mask is multiplied with the noise and then added to the clean speech, resulting in a noisy utterance.
The recognizer receives this noisy speech as input and predicts a class. Note that in the Google Speech Commands (GSC) dataset \cite{warden2018speech}, each utterance is at most 1s long and only contains a single word in the presence of noise. Thus this is a speech classification task as opposed to a full speech recognition task. We designed the loss function for our network to encourage the mask to maximize the amount of noise while the speech recognizer maintains good performance. This loss function therefore forces the generator to output a mask with less noise in regions that are important to the recognizer, and with more noise in regions that are unimportant to the recognizer. In the second stage, we freeze the generator and train only the speech command recognizer. We aim to create additional data to train the recognizer. To create additional data, noise is added to the unimportant regions of the clean speech. Less or no noise is added to the important regions. Denote $S(f,t)$ and $N(f,t)$ as the complex spectrograms of the speech and noise, respectively, where $f$ is the frequency index and $t$ is the time index. These spectrograms are created by applying the short time Fourier transform (STFT) to the time domain signal $s(t)$ of the speech and $n(t)$ of the noise. The generator $G$ with parameters $\theta$ takes $\tilde{S}(f,t) = 20 \log_{10}|S(f,t)|$ as input and predicts a mask $M_{\theta}(f,t)$ with the same shape as $\tilde{S}(f,t)$ \begin{align} M_{\theta}(f,t) = G(\tilde{S}(f,t); \theta) \in [0,1]^{F \times T} \end{align} An additional augmentation shifts the mask slightly in time or frequency to further increase variability in the training data for the recognizer. The mask output by the generator, $M_{\theta}$, is rolled along the frequency and time dimensions \begin{align} M_{\theta r} = r(M_{\theta};\delta) \end{align} where $r$ is the roll operator (we use torch.roll) and $\delta$ is the number of time frames or frequency bins by which the elements of the mask are shifted. $\delta$ is drawn uniformly at random from the interval $(-D,D)$. Furthermore, to create additional variation, with probability $0.5$, the mask $M_{\theta r}$ is replaced by a mask of all 1's. Denote whichever mask is selected as $M$. This rolling augmentation is only used when re-training the recognizer using the predicted importance maps and not when training the mask generator itself. This mask is then applied point-wise to a noise instance $N$, scaled by gain $A$. The gain $A$ is adjusted in each training batch such that the signal-to-noise ratio is maintained at a target value \begin{align} A=\sqrt{\frac{\sum_{b,t,f}|S_{btf}|^2}{10^{v/10} \sum_{b,t,f} |N_{btf}|^2 }}, \end{align} where $v$ is the target SNR expressed in decibels, and $b$, $t$, $f$ denote the batch, time, and frequency dimensions, respectively. The resulting masked-scaled noise $A N \odot M$ (where $\odot$ denotes point-wise multiplication) is added to the clean speech $S$. The resulting noisy mixture is input to the speech command recognizer $R$, which predicts the probability of the class $\hat{y}$ \begin{align} \hat{y} = R(S + A N \odot M). \end{align} The model is trained to minimize \begin{align} \mathcal{L}(\theta) &= \lambda_r \mathcal{L}_{\textrm{R}}(y, \hat{y}) - \frac{\lambda_e}{TF}\sum_{f,t} \log M \nonumber \\ &+ \frac{\lambda_f}{TF}\sum_{f,t} |\Delta_f M| + \frac{\lambda_t}{TF}\sum_{f,t} |\Delta_t M|. \label{eq:bcnLoss} \end{align} where $\mathcal{L}_{\textrm{R}}$ is the loss of the speech recognizer, $\Delta_f$ is the difference operation along frequency, $\Delta_t$ is the difference operation along time, and $\lambda_r, \lambda_e, \lambda_f,$ and $\lambda_t$ are weights set as hyperparameters of the model. The recognizer loss is the cross entropy between the prediction $\hat{y}$ and the ground truth label $y$. This loss forces the recognizer to maintain high accuracy in predicting the correct class. The $-\sum_{f,t} \log M$ term forces the mask's value to be close to one, thus maximizing the amount of noise added. The terms associated with $\lambda_f$ and $\lambda_t$ encourage the mask to be smooth in frequency and time.
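For illustration, the augmentation described by the equations above can be sketched in a few lines of PyTorch (our own sketch, not the released code; the tensor shapes $[B, F, T]$, the variable names, and the use of a single global shift are assumptions): \begin{verbatim}
import torch

def important_aug(S, N, M, snr_db=-12.5, max_shift=30):
    """Mix noise N into clean speech S outside the important regions.
    S, N: complex STFTs of shape [B, F, T]; M: mask in [0, 1]."""
    # Roll the mask in frequency and time for extra variation.
    df, dt = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
    M = torch.roll(M, shifts=(df, dt), dims=(-2, -1))
    # With probability 0.5, fall back to an all-ones mask.
    if torch.rand(1).item() < 0.5:
        M = torch.ones_like(M)
    # Scale the noise so the batch-level SNR matches the target v.
    A = torch.sqrt(S.abs().pow(2).sum()
                   / (10 ** (snr_db / 10) * N.abs().pow(2).sum()))
    return S + A * N * M
\end{verbatim}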
\section{Experimental setup} \subsection{Dataset} We use the Google Speech Commands (GSC) dataset version 2 \cite{warden2018speech} for our experiments. This dataset includes 105,829 single-word utterances of 35 unique words. Many utterances include noise or other distortions. The models were trained on the training set and evaluated on the test set. The development set was used for early stopping. We also employ additional noise from the MUSAN dataset \cite{snyder2015musan} to augment the speech from the GSC dataset. The recordings in MUSAN have different lengths, so we only used the first second from each recording and excluded any recordings shorter than one second, as the speech utterances are restricted to be at most one second long. There are 877 noise files after filtering out the short recordings. We randomly choose 702 files (80\%) for training. We mix the remaining 175 files with the utterances from the GSC test set, creating a new noisy test set that we call GSC-MUSAN. To evaluate our trained model in out-of-domain noisy environments, we also create another test set. First, we select the file ``HOME-LIVINGB-1.wav'', which contains 40 minutes of noise recorded in a living room environment, from the QUT corpus \cite{dean2015qut}. We then resample this file from 48 to 16~kHz, the same rate as the GSC utterances, and choose random sections of this noise file to mix with the utterances in the GSC test set. We call this dataset GSC-QUT. \begin{table} \caption{Recognizer error rate (\%) on the Google Speech Command v2 (GSC) development set with conventional noise augmentation at different SNRs} \label{tab:importantAug_result1} \begin{center} \begin{tabular}{cc@{$\qquad\quad$}cc} \toprule SNR & Dev & SNR & Dev \\ \midrule $\infty$ & 7.74 & 15 & \textbf{5.83} \\ 40 & 6.39 & 10 & 6.11 \\ 35 & 7.65 & 5 & 6.00 \\ 30 & 6.10 & 0 & 5.97 \\ 25 & 6.19 & $-5$ & 6.24 \\ 20 & 6.22 & $-10$ & 6.16 \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Experiments} We compare our proposed method against two other methods. In the first method (baseline), we train a recognizer that does not utilize any data augmentation. It is trained on the GSC training set and selected using early stopping on the development set. All other methods are trained by initializing their parameters to those of this pre-trained baseline recognizer. In the second method, we utilize a conventional noise augmentation technique that treats all time-frequency points as equally important and applies noise directly to the speech without importance maps ($S + AN$). We perform an experiment to identify the best single signal-to-noise ratio (SNR) to use, comparing those ranging from $-10$~dB to 40~dB in steps of 5~dB. We also evaluate $\infty$~dB by training on clean data.
In our proposed method, ImportantAug, we performed the two-stage training as described above. First, we load and freeze the recognizer from the baseline and train the generator. Then, we freeze the generator and train the recognizer. The noise from the MUSAN dataset was multiplied with the rolled importance maps and added to the speech. In addition, we perform an ablation study by evaluating the recognizer performance when we remove the importance map from the proposed approach, by setting the mask to be all 1's, which we call the ``Null ImportantAug'' condition. In this case, no region is more important than other regions and the noise is added directly to the speech. We evaluate the baseline (no augmentation), conventional noise augmentation, ImportantAug, and Null ImportantAug on the standard GSC test set and the GSC-MUSAN and GSC-QUT noisy test sets. \begin{figure*} \begin{center} \scriptsize \begin{tabular}{ccccc} \includegraphics[width=0.18\textwidth]{figures_importantAug/clean_speech.eps} & \includegraphics[width=0.18\textwidth]{figures_importantAug/mask.eps} & \includegraphics[width=0.18\textwidth]{figures_importantAug/mask_roll.eps} & \includegraphics[width=0.18\textwidth]{figures_importantAug/noise.eps} & \includegraphics[width=0.18\textwidth]{figures_importantAug/x2.eps} \\ (a) Clean speech & (b) Importance map (IM) & (c) Rolled IM & (d) MUSAN noise & (e) Noisy speech \\ \end{tabular} \end{center} \caption{ (a) Clean utterance from the Google Speech Commands dataset. (b) Importance map (blue areas) from the generator. (c) Rolled importance map. (d) MUSAN noise. (e) Noisy speech created by multiplying the noise from (d) with the mask from (c) and adding the clean speech from (a) } \label{fig:Important_aug_process} \end{figure*} In addition to using continuous-valued importance maps, we also experimented with binarizing the importance maps. We considered the $q$\% of time-frequency points with the lowest value in the continuous-valued importance map as being important and did not add any noise to them. The other $100-q$\% of the points were considered unimportant and noise was added to them. In this experiment, the mask was not replaced by an all 1's mask at all. \subsection{Hyperparameter settings} The signal was sampled at 16~kHz with a window length of 512 and a hop length of 128 samples, leading to a spectrogram with 257 frequency bins and 126 time frames for a 1~s utterance. In all experiments, we use the same default setting for the speech command recognizer, which is a neural network with 5 layers. Each layer has a 1D depth-wise and 1D point-wise convolution \cite{chollet2017xception, Somshubra20}, followed by SELU activation \cite{klambauer2017self}. The depth-wise convolution has a kernel size of $9\times 9$ (281.25 Hz x 96 ms), a stride value of 1, a dilation value of 1, and its inputs and outputs are both 257 channels. The point-wise convolution consists of a kernel of size $1 \times 1$ and also has inputs and outputs of size 257. The generator is a neural network with 4 layers, where each layer is a 2D convolutional network. The first layer takes one channel in and outputs 2 channels. The second and third layers have 2 channels in their input and output. The last layer has 2 channels of input and one of output. All the layers have a kernel size of $5 \times 5$ (156.25 Hz x 64 ms), a stride value of 1, a dilation value of 1, and padding so that the output has the same height and width as the input.
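A sketch of this generator in PyTorch is given below (our own rendering of the description above; the final sigmoid, which keeps the mask values in $[0,1]$, is an assumption not specified in the text): \begin{verbatim}
import torch.nn as nn

# Four 2D conv layers with 1 -> 2 -> 2 -> 2 -> 1 channels, 5x5 kernels,
# stride 1, dilation 1, and "same" padding, as described above.
generator = nn.Sequential(
    nn.Conv2d(1, 2, kernel_size=5, stride=1, dilation=1, padding=2),
    nn.Conv2d(2, 2, kernel_size=5, stride=1, dilation=1, padding=2),
    nn.Conv2d(2, 2, kernel_size=5, stride=1, dilation=1, padding=2),
    nn.Conv2d(2, 1, kernel_size=5, stride=1, dilation=1, padding=2),
    nn.Sigmoid(),  # assumed output nonlinearity constraining M to [0, 1]
)
\end{verbatim}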
In the proposed ImportantAug method, we selected the hyperparameters $\lambda_r = 1$, $\lambda_e=\lambda_f=\lambda_t=3$, and $v=-12.5$~dB. First, the weights $\lambda_r, \lambda_e, \lambda_f, \lambda_t$, and $v$ were manually adjusted over a small number of settings so that the speech command recognizer performed well and the mask values were close to all 1's on the development set. Then we chose $D$, the maximum number of time frames or frequency bins by which the elements of the mask are shifted, to be 30, equivalent to 937.5~Hz and 264~ms. This was selected to keep the mask from shifting too far from its original position. All the models are trained with the Adam optimizer with an initial learning rate of $0.001$, which is decayed by half every 20 epochs, and a batch size of 256. The models are trained for 200 epochs with early stopping on the development set loss with a patience of 30. \section{Results} Table~\ref{tab:importantAug_result1} shows the error rate on the development set for the conventional augmentation method at different signal-to-noise ratios. We can see that adding too much noise leads to a high error rate; for example, SNRs of $-10$ and $-5$~dB have error rates of 6.16\% and 6.24\%, respectively, on the development set. Adding too little noise is also not optimal; for instance, SNRs of 40 and 35~dB have error rates of 6.39\% and 7.65\% on the development set. Using no noise at all does not provide good performance either, with an error rate of 7.74\%. However, adding the right amount of noise is beneficial for the recognizer as it balances variation in the training data with speech fidelity. As shown in Table~\ref{tab:importantAug_result1}, the best error rate (5.83\%) is obtained with an SNR of 15~dB. The model trained with an SNR of 15~dB has the best performance on the development set, so we choose this model to evaluate on the test set and compare with other approaches in Table~\ref{tab:importantAug_result2}. \begin{table} \caption{Recognizer error rate (\%) with various augmentation approaches on GSC test set} \label{tab:importantAug_result2} \begin{center} \begin{tabular}{lcc} \toprule Augmentation method & Initial SNR (dB) & Error \\ \midrule No augmentation & $\infty$ & 6.70 \\ Conventional noise augmentation & 15.0 & 6.52 \\ ImportantAug & -12.5 & \textbf{5.00} \\ Null ImportantAug & -12.5 & 6.12 \\ \bottomrule \end{tabular} \end{center} \end{table} Table~\ref{tab:importantAug_result2} shows the results on the standard GSC test set. The baseline speech command recognizer has an error rate of 6.70\%. The conventional noise augmentation method produces a model with an error rate of 6.52\%. Our proposed method has the best error rate at 5.00\%, which is a 25.4\% relative improvement over the no-augmentation baseline and a 23.3\% relative improvement over the conventional noise augmentation method. We also perform an ablation study with the Null ImportantAug method, using a ``mask'' that is all 1's, which leads to an error rate of 6.12\%. Null ImportantAug is similar to traditional noise augmentation in that it does not utilize importance maps. The difference is that Null ImportantAug is trained at the same SNR as ImportantAug ($-12.5$~dB), while traditional noise augmentation uses the 15~dB SNR chosen based on performance on the development set. The error rates with and without importance maps are 5.00\% and 6.12\%, respectively; thus, the importance map is necessary for the observed performance gains.
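The mask rolling controlled by $D$ can be sketched as follows (illustrative code; sampling the frequency and time shifts uniformly and independently is our assumption):
\begin{verbatim}
import torch

def roll_mask(mask, max_shift=30):
    """Randomly shift an importance mask along frequency and time by up
    to `max_shift` bins/frames, wrapping around (torch.roll)."""
    df = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    dt = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    return torch.roll(mask, shifts=(df, dt), dims=(-2, -1))

# Noise is multiplied by the rolled mask before being added to speech,
# so regions marked important (mask near 0) stay clean.
spec = torch.randn(1, 257, 126).abs()
noise = torch.randn(1, 257, 126).abs()
mask = torch.rand(1, 257, 126)
noisy = spec + noise * roll_mask(mask, max_shift=30)
\end{verbatim}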
\begin{table} \caption{Recognizer error rate (\%) of augmentations on the in-domain noise test set (GSC-MUSAN) as a function of test SNR.} \label{tab:importantAug_result_MUSAN} \begin{center} \scriptsize \begin{tabular}{lrrrrrrr} \toprule & \multicolumn{7}{c}{Test SNR} \\ Method & $-12.5$ &$-10$ &0 & 10 & 20 & 30 &40 \\ \midrule No aug. (baseline) & 77.6 & 72.7 & 45.2 & 21.0 & 11.5 & 8.4 & 7.3 \\ Noise aug.~(SNR 15) & 65.8 & 57.7 & 26.3 & 10.8 & 7.3 & 6.6 & 6.4 \\ ImportantAug & \textbf{43.5} & \textbf{35.0} & \textbf{13.3} & \textbf{7.4} & \textbf{5.7} & \textbf{5.2} & \textbf{5.1} \\ Null ImportantAug & 45.2 & 37.0 & 15.0 & 8.5 & 6.9 & 6.2 & 6.0 \\ \bottomrule \end{tabular} \end{center} \end{table} Table~\ref{tab:importantAug_result_MUSAN} shows the results on the GSC-MUSAN test set. We observe that the proposed ImportantAug method achieves the best results over the entire SNR range. For example, ImportantAug achieves a 13.3\% error rate at 0~dB, which is around one-third of the baseline's error rate of 45.2\% and half of that of the conventional augmentation method. We also observe that the error rates go up if we remove the importance map (IM), as seen by comparing rows 3 and 4 of Table~\ref{tab:importantAug_result_MUSAN}. For example, at an SNR of 0~dB, the error rate goes up from 13.3\% to 15.0\% if we remove the IM and train with the noise only. Table~\ref{tab:importantAug_result_QUT} shows the results on the GSC-QUT test set, which is an out-of-domain noise test set because the models are trained with MUSAN noise, not with QUT noise. Here, we observe the same trend: ImportantAug outperforms both the baseline and the conventional augmentation method. \begin{table} \caption{Recognizer error rate (\%) of augmentations on out-of-domain noise test set (GSC-QUT) as a function of test SNR.} \label{tab:importantAug_result_QUT} \begin{center} \scriptsize \begin{tabular}{lrrrrrrr} \toprule & \multicolumn{7}{c}{Test SNR} \\ Method & $-12.5$ &$-10$ &0 & 10 & 20 & 30 &40 \\ \midrule No aug. (baseline) & 90.9 & 87.3 & 55.8 & 20.8 & 9.6 & 7.4 & 7.0 \\ Noise aug.~(SNR 15) & 89.0 & 83.5 & 42.0 & 12.9 & 7.3 & 6.5 & 6.2 \\ ImportantAug & \textbf{72.0} & \textbf{61.3} & \textbf{23.5} & \textbf{8.9} & \textbf{5.8} & \textbf{5.1} & \textbf{4.8} \\ Null ImportantAug & 72.3 & 61.6 & 24.8 & 10.0 & 6.8 & 6.1 & 6.0 \\ \bottomrule \end{tabular} \end{center} \end{table} Figure~\ref{fig:Important_aug_process}(b) shows an example of an importance map for an utterance of the word ``four'' in the GSC dataset. The importance map includes the fundamental frequency, the harmonics, and the outer border shape of the speech. These regions are predicted to be necessary for the speech command recognizer to identify this specific utterance. Thus, keeping these regions clean and adding noise outside of them makes the data more diverse while not affecting the recognition. \begin{table} \caption{Recognizer error rate (\%) with binarized ImportantAug using different important region ratios $q$ on the original GSC test set.} \label{tab:importantAug_result3} \begin{center} \begin{tabular}{ccc} \toprule $q$ (\%) & Dev & Test \\ \midrule 70 & 5.42 & 5.64 \\ 50 & 5.49 & 5.92 \\ 40 & 5.19 & 5.71 \\ 20 & 5.17 & 5.15 \\ 10 & \textbf{5.00} & 5.43 \\ 5 & 5.09 & 4.92 \\ 1 & 5.12 & 4.94\\ 0 & 6.03 & 6.12\\ \bottomrule \end{tabular} \end{center} \end{table} Table~\ref{tab:importantAug_result3} shows the error rate on the development and test sets for the binary ImportantAug method with different important region ratios.
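A minimal sketch of this binarization step (our illustrative NumPy code; the convention, detailed next, is that the $q$\% of points with the lowest mask values are protected from noise):
\begin{verbatim}
import numpy as np

def binarize_mask(mask, q):
    """Mark the q% of time-frequency points with the LOWEST mask values
    as important (no noise added there); the rest receive noise."""
    thresh = np.percentile(mask, q)
    # Binary mask: 0 = important (keep clean), 1 = add noise.
    return (mask > thresh).astype(mask.dtype)

mask = np.random.rand(257, 126)
binary = binarize_mask(mask, q=10)  # protect the 10% most important points
\end{verbatim}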
In this experiment, we consider the $q$\% of regions with the lowest mask values to be important. The best result on the development set is achieved by choosing 10\% of points to be important, which provides an 11.3\% relative error reduction on the test set compared to not multiplying the noise with the importance map ($q=0$). Thus, only a very small proportion of points needs to be preserved in this way to enhance the data augmentation performance. \section{Conclusion} In conclusion, we have demonstrated a data augmentation agent that improves a speech command recognizer. Our proposed ImportantAug method produced a 25.4\% relative error rate reduction compared to the no-augmentation method and a 23.3\% relative reduction compared to the conventional noise augmentation method. Taken together, this work shows that importance maps can be estimated accurately enough to be helpful for data augmentation, providing one of the first such demonstrations, especially for speech. In the future, we will extend this framework by replacing the speech command recognizer with a full large-vocabulary continuous speech recognizer, and we will deploy different methods to identify the importance map and use the map to augment the speech data, such as those based on human responses. The proposed method could also be used in computer vision tasks, such as image recognition, by predicting importance maps for images. \section{Acknowledgements} This material is based upon work supported by the National Science Foundation (NSF) under Grant IIS-1750383. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF. \balance \bibliographystyle{IEEEbib}
\section{Introduction} On September 26, 2022, NASA's Double Asteroid Redirection Test (DART) mission will kinetically deflect Dimorphos, the smaller component of the binary asteroid 65803 Didymos, as a planetary defense demonstration test \citep{Rivkin2021}. Prior to the impact, DART will deploy the Light Italian CubeSat for Imaging of Asteroids (LICIACube), which will fly by the system to image the initial phase of the cratering process as well as improve Dimorphos's shape determination \citep{Dotto2021,Cheng2022}. Following the impact, the change in the mutual orbit period will be measured via ground-based observations and used to infer the momentum enhancement factor, commonly referred to as $\beta$ \citep{Rivkin2021}. Due to the contribution of ejecta that exceeds the escape speed, $\beta$ is expected to exceed 1. Four years after DART, the European Space Agency's Hera mission will rendezvous with Didymos to characterize the physical, dynamical, and compositional properties of the system. Hera will also measure in detail the effects of the DART impact, including the crater's properties and the mass of Dimorphos, allowing for a more precise determination of $\beta$ \citep{Michel2022}. In addition to abruptly reducing the binary semimajor axis and orbit period, the impact will also change the eccentricity and inclination \citep{Cheng2016}. Due to a high degree of spin-orbit coupling, the dynamical evolution of Dimorphos strongly depends on the initial conditions at the time of impact and the body's shape, which are currently unknown \citep{Agrusa2020}. Depending on $\beta$ and Dimorphos's shape, it is possible that Dimorphos may enter a chaotic rotation state following the DART impact \citep{Agrusa2021,Richardson2022}. Furthermore, numerical simulations that treat Dimorphos as a rubble pile indicate that boulders may move on the surface, depending on Dimorphos's spin state, bulk shape, and material properties \citep{Agrusa2022a}. In this study, we take a closer look at the possibility of post-impact surface motion on Dimorphos as a function of its complex spin and orbital environment. Observational evidence and theoretical arguments both indicate that chaotic rotation is not uncommon for secondaries in tight binary systems \citep{Pravec2016,Cuk2021, Seligman2021, Quillen2022a}, and it is plausible that many synchronous secondaries have undergone some level of chaotic rotation in their past or during their formation \citep{Wisdom1987b,Jacobson2011a,Davis2020b}. Therefore, the methods and results presented here are also broadly applicable to the general binary asteroid population. \vspace{-5pt} \section{Methods} Focusing on the DART impact, we first ran a simulation to capture the system's dynamics, from which the local slopes can be computed, in an approach analogous to previous studies of dynamically triggered regolith motion \citep{Yu2014,Ballouz2019}. In order to capture the coupled spin and orbital motion of the secondary, we used the General Use Binary Asteroid Simulator (\textsc{gubas}), an efficient rigid full two-body problem (F2BP) code \citep{Davis2020a,Davis2021}. \textsc{gubas} has been benchmarked against other F2BP simulation codes and has been used extensively to study the dynamics of Didymos and other binary systems \citep{Agrusa2020, Davis2020b, Meyer2021a,Meyer2021b}. 
In accordance with previous studies, the \textsc{gubas} simulations expand the gravitational potential of the polyhedral shape models to degree and order 4 to adequately capture their irregular gravity fields. All simulations presented herein were run for 1 yr of integration time. \begin{table} \caption{Selected physical and dynamical parameters used for the simulated Didymos system, consistent with the current best estimates \citep{Rivkin2021}. The body diameters are the volume-equivalent spherical diameters. A synchronous spin state for Dimorphos is {assumed}, and we refer the reader to \cite{Richardson2022} for further discussion on this assumption.} \label{tab:params} \centering \begin{tabular}{ll} \hline \hline Parameter & Value \\ \hline Primary bulk density ($\rho_\text{P}$) & $2.2$ g cm$^{-3}$ \\ Secondary bulk densities ($\rho_\text{S}$) & $[1.85, 2.20, 2.55]$ g cm$^{-3}$\\ Primary mass ($M_\text{P}$) & $5.47\times10^{11}$ kg\\ Secondary masses ($M_\text{S}$) & $[4.20, 4.99, 5.78]\times10^9$ kg\\ Primary Diameter ($D_\text{P}$) & $780$ m \\ Secondary Diameter ($D_\text{S}$) & $164$ m \\ Initial body separation ($a_{\mathrm{orb}}$) & $1200$ m \\ Initial Orbital Period ($P_{\mathrm{orb}}$) & $11.92$ h \\ Primary Spin Period ($P_\text{P}$) & $2.26$ h \\ Secondary Spin Period ($P_\text{S}$) & $11.92$ h\\ Assumed DART Mass ($M_{\text{DART}}$) & 536 kg\\ Assumed DART Speed ($v_{\text{DART}}$) & 6.15 km/s \\ \hline \end{tabular} \end{table} \begin{figure*} \centering \begin{minipage}[b]{0.3\hsize} \centering \includegraphics[clip, trim=2.25cm 5.22cm 1.9cm 4.8cm,width=\textwidth]{topDown.pdf} (a) \includegraphics[clip, trim=2.25cm 5.1cm 1.75cm 4.6cm, width=\textwidth]{side.pdf} (b) \includegraphics[clip, trim=3cm 3.4cm 0.65cm 3.4cm, width=\textwidth]{slope3D.pdf} (c) \end{minipage} \begin{minipage}[b]{0.34\hsize} \centering \includegraphics[width=\textwidth]{spinOrbit_50d_rho2.2_beta3.pdf} (d) \end{minipage} \begin{minipage}[b]{0.34\hsize} \centering \includegraphics[width=\textwidth]{accelSlope_50d_rho2.2_beta3_long0_lat45.pdf} (e) \end{minipage} \caption{\label{fig:timeSeries} Surface slope evolution as a function of Dimorphos's dynamical evolution. (a) Top-down view of the ``Didymos-Squannit'' system. From this view, the spin and mutual orbit poles are pointing out of the page. (b) Side view. (c) Surface slopes for a Squannit-shaped Dimorphos with a bulk density of $\rho_\text{S}=2.2\text{ g cm}^{-3}$ in an idealized, relaxed dynamical state. The black facet corresponds to the sub-Didymos point (at zero libration amplitude) with a longitude and latitude of $\phi\approx\lambda\approx0^\circ$. The white facet has a longitude and latitude of $(\phi, \lambda)\approx(0^\circ,45^\circ)$ and corresponds to the time-series plots in part (e). (d) Spin and orbital evolution for the Squannit-shaped Dimorphos when $\beta=3\, (e=0.023)$. The Euler angles are the 1-2-3 Euler angle set (roll-pitch-yaw) expressed in the rotating orbital frame, while the body spin rates are in the secondary's body-fixed frame. (e) Slope and surface accelerations on the white facet from part (c). The vertical accelerations point along the facet's surface normal and are generally dominated by self-gravity. The horizontal accelerations are expressed as magnitudes and point parallel to the surface. Initially, the Euler acceleration is relatively small and the tides are the dominant time-varying acceleration. 
After about 5 days, Dimorphos enters NPA rotation, and the Euler accelerations become comparable to both the tidal and centrifugal accelerations. We refer the reader to Appendix \ref{app:timeSeries} for an identical plot showing the full 365 d simulation.} \end{figure*} \vspace{-5pt} \subsection{Simulation setup} In the F2BP simulations, the primary's gravity is modeled using Didymos's radar-derived polyhedral shape model \citep{Naidu2020}. Dimorphos's shape is still unknown, so we used the radar shape model for Squannit, the secondary component of the binary asteroid (66391) Moshup, scaled to the expected volume of Dimorphos. Squannit is arguably the best available analog for Dimorphos. Both the Didymos and Moshup systems are S types \citep{Binzel2004,Dunn2013} and have similar properties, including a fast-rotating primary with a raised equatorial ridge and a tidally locked secondary component on a tight, approximately circular orbit \citep{Scheeres2006b}.\footnote{There are no observations that show Dimorphos is spin locked, but circumstantial evidence indicates that this is likely. We refer the reader to \cite{Richardson2022} for a detailed discussion on this assumption.} Squannit is the only currently available secondary shape model for a near-Earth binary and contains ${\sim}2300$ facets \citep{Ostro2006}. Radar data tend to smooth and flatten surface features, making the surface slope analysis presented here somewhat conservative. When scaled to the dimensions of Dimorphos, Squannit's average facet has a surface area of ${\approx} 38 \text{ m}^2$. Schematics showing the shape models for the primary and secondary are shown in Fig.\ \ref{fig:timeSeries}(a-c). We focused this short study on the role of $\beta$ and Dimorphos's bulk density ($\rho_\text{S}$) as they play a significant role in determining the surface slope evolution of Dimorphos. The bulk density sets the mass and therefore the self-gravity of the body, which has a significant effect on the surface slope of a given shape model \citep{RichardsonJ2014,RichardsonJ2019}. For a fixed $\beta$, a smaller bulk density (i.e., lower mass) will result in a larger perturbation to the mutual orbit, which can lead to larger changes in surface slopes over time. We tested values of $\beta$ in the range $0\leq\beta\leq5$, in accordance with the best estimates from hydrodynamic simulations of the DART impact \citep{Raducan2022a,Stickle2022}. Based on light curve and radar observations, the Didymos system is expected to have a bulk density with $1\sigma$ uncertainties of $\rho{\approx}2.2 \pm 0.35\text{ g cm}^{-3}$ \citep{Naidu2020,Rivkin2021}. Assuming Dimorphos has a bulk density within this range, we tested values of $1.85, 2.2$, and $2.55\text{ g cm}^{-3}$. It should be noted that the reported uncertainties are for the bulk density of the {entire system}, which is of course dominated by the primary, and it is certainly possible for Dimorphos to have a bulk density outside of the range explored here (see the discussion on Dimorphos's density in \cite{Rivkin2021}). Table \ref{tab:params} provides the adopted physical and dynamical parameters for this study. First, the binary was given dynamically relaxed initial conditions (i.e., a circular orbit with a synchronous secondary). 
Then, a change in velocity ($\Delta \vec{v}$) was applied to the secondary's instantaneous orbital velocity consistent with a head-on DART impact and a given selection for $\beta$ and $\rho_\text{S}$.\footnote{The DART impact will not be ideally head-on and centered, but recent work indicates that these effects should be negligible in terms of determining the system's bulk dynamical properties \citep{Richardson2022}.} This $\Delta \vec{v}$ reduces Dimorphos's velocity, causing the body to fall into a tighter, more eccentric orbit.\footnote{$\Delta \vec{v}$ is dependent on $\beta$, the impactor mass and velocity, as well as the secondary's mass. In a simplified scalar form, it can be written as $\Delta v = -\beta M_\text{DART}v_\text{DART}/M_\text{S}$, where the negative sign indicates that Dimorphos's speed is reduced.} Due to the increased eccentricity, Dimorphos then begins librating and can also enter a chaotic non-principal axis (NPA) rotation state at later times depending on its shape. The attitude instability that leads to NPA rotation is driven by intersections of various spin-orbit resonances among Dimorphos's frequencies of free libration, spin precession, nutation, and mean motion --- more details can be found in \cite{Agrusa2021}. In the results presented here, we give both the value for $\beta$ and the corresponding binary eccentricity, $e$, in an effort to make the results of this paper broadly applicable to other similar binary systems. Due to the non-Keplerian nature of small binary systems, we report $e$ as the geometric eccentricity, which is a function of the periapsis $(r_\text{p})$ and apoapsis $(r_\text{a})$ distances: $e = (r_\text{a}-r_\text{p})/(r_\text{a}+r_\text{p})$. \vspace{-5pt} \subsection{Computation of external accelerations} At each timestep, the \textsc{gubas} code outputs the full state of the system, including the body locations, orientations, velocities, and spins, from which the net surface accelerations of the secondary can be readily computed. The net acceleration is evaluated at the center of each triangular facet (indexed by $i$) of the shape model at each timestep (indexed by $t$) and can be written as \begin{equation} \mathbf{a}_{i,t}^{\text{net}} = \mathbf{a}_{i,t}^{\text{grav}} + \mathbf{a}_{i,t}^{\text{tides}} + \mathbf{a}_{i,t}^{\text{cent}} + \mathbf{a}_{i,t}^{\text{Euler}}, \end{equation} where the vectors $\mathbf{a}_{i,t}^{\text{grav}}$, $\mathbf{a}_{i,t}^{\text{tides}}$, $\mathbf{a}_{i,t}^{\text{cent}}$, and $\mathbf{a}_{i,t}^{\text{Euler}}$ are the secondary's self-gravity, the primary's tidal acceleration, the centrifugal acceleration, and the Euler acceleration, respectively. The Coriolis acceleration is neglected because this study is focused on the conditions to trigger surface motion, rather than details of the motion itself \citep{Kim2021}. The details of how each respective acceleration was computed can be found in Appendix \ref{app:accels}. On each facet, the surface slope is then defined as the angle between the outward surface normal and the direction opposite the net acceleration, \begin{equation} \theta_{i,t} = \arccos\left(-\mathbf{\hat{n}}_i\cdot\mathbf{\hat{a}}_{i,t}^\text{net}\right), \end{equation} where $\mathbf{\hat{n}}_i$ is the surface normal and $\mathbf{\hat{a}}_{i,t}^\text{net}=\frac{\mathbf{a}_{i,t}^{\text{net}}}{\lVert \mathbf{a}_{i,t}^{\text{net}}\rVert}$. \vspace{-5pt} \section{Results} \vspace{-5pt} \subsection{A conceptual example} To demonstrate how the various acceleration components affect the surface slope, we show time-series plots for a scenario in which $\beta{=}3$ and $\rho_\text{S}{=}2.2\text{ g cm}^{-3}$ in Fig.
\ref{fig:timeSeries}. The initial slopes of the secondary are shown in Fig.\ \ref{fig:timeSeries}(c), and the post-impact spin and orbital evolution is shown in Fig.\ \ref{fig:timeSeries}(d). The slope and accelerations are shown in Fig.\ \ref{fig:timeSeries}(e) for the facet shown in white in Fig.\ \ref{fig:timeSeries}(c), which has a longitude and latitude of $(\phi,\lambda) \approx (0^\circ,45^\circ)$. This particular example was chosen to illustrate the relative importance of the various accelerations considered here, as well as the sensitivity of the slope evolution to the spin and orbit of Dimorphos. The DART perturbation reduces the semimajor axis and increases the eccentricity to $e{\sim}0.023$, the effect of which can be seen in the top plot of Fig.\ \ref{fig:timeSeries}(d). Through spin-orbit coupling, Dimorphos's spin state is also excited, and it begins librating while its spin rate oscillates. In only ${\sim}5\text{ d}$, the secondary becomes attitude unstable, as indicated by the nonzero roll and pitch angles, although Dimorphos technically remains in the 1:1 spin-orbit resonance (yaw angle $<90^\circ$). The influence of these dynamical changes can be seen on the surface slope plot at the top of Fig.\ \ref{fig:timeSeries}(e). At early times, changes in the surface slope are dominated by the tidal acceleration. When Dimorphos enters slight NPA rotation, the centrifugal and Euler accelerations become much more important, leading to abrupt and chaotic surface slope changes. \vspace{-5pt} \subsection{Dependence on momentum enhancement (\texorpdfstring{$\beta$}{beta})} \begin{figure} \centering \includegraphics[clip, trim=0.225cm 0.25cm 0.25cm 0.25cm, width=0.75\hsize]{deltaSlopes_rho2.20_beta1.00.pdf} \includegraphics[clip, trim=0.225cm 0.25cm 0.25cm 0.25cm, width=0.75\hsize]{deltaSlopes_rho2.20_beta3.00.pdf} \caption{\label{fig:squannit_slopes_timeSeries} Time-series plots of the change in surface slope $(\Delta\theta$) of each facet in the secondary shape model. Each line is colored based on its initial surface slope $(\theta_0)$. As $\beta$ (or $e$) increases, we see much larger changes in surface slope. The bulk density is $\rho_\text{S}=2.2\text{ g cm}^{-3}$. See Appendix \ref{app:surfaceSlopePlots} for equivalent plots showing the full 365 d simulation and additional values for $\beta$.} \end{figure} \begin{figure} \centering \includegraphics[clip, trim=8.5cm 0.75cm 9.5cm 0.75cm,width=\hsize]{slopeMap_maxSlope_rho2.2_beta1.pdf} \includegraphics[clip, trim=8.5cm 0.75cm 9.5cm 0.75cm,width=\hsize]{slopeMap_maxSlope_rho2.2_beta3.pdf} \caption{\label{fig:squannit_slopes_maps} Maximum slope achieved after a $365\textrm{ d}$ simulation, with arrows indicating the down-slope direction. See Appendix \ref{app:surfaceSlopePlots} for equivalent plots for other values of $\beta$.} \end{figure} In Fig.\ \ref{fig:squannit_slopes_timeSeries} we show time-series plots of the change in surface slope $(\Delta\theta = \theta(t) - \theta_0)$ of each surface facet for $\beta=1$ and $\beta=3$ with $\rho_\text{S}$ fixed at $2.2 \text{ g cm}^{-3}$. The color of each line corresponds to the slope at the start of the simulation, $\theta_0$. When $\beta=1$, the orbit is not significantly perturbed. As such, the tidal acceleration is weak and Dimorphos exhibits little NPA rotation, resulting in small surface slope changes of $\Delta\theta\lessapprox2^\circ$.
When $\beta=3$, the tidal environment becomes strong and Dimorphos enters NPA rotation after only ${\sim}5$ d, resulting in surface slope changes as large as $\Delta\theta{\sim}10^\circ$. The results of Fig.\ \ref{fig:squannit_slopes_timeSeries} highlight the strong temporal dependence of the surface slopes. The surface slope evolution is also spatially dependent, as demonstrated by Fig.\ \ref{fig:squannit_slopes_maps}. These plots show the maximum slope achieved over the same simulations shown in Fig.\ \ref{fig:squannit_slopes_timeSeries}. The arrows on the plots indicate the down-slope direction. These plots suggest that the highest slopes are achieved in regions that start off with a high slope. For this particular shape, and assuming loose regolith covering the surface, we would expect most motion near the equator and mid-latitudes, and very little, if any, near the poles. This spatial dependence may have implications for inferred crater ages in different regions of Dimorphos's surface. In addition, the spatial dependence of the surface slope evolution could be leveraged to distinguish between causes of surface refreshment. For example, we might expect surface motion triggered by the re-accretion of impact ejecta to occur over much of Dimorphos's surface, while tidal and rotationally induced surface motion may be restricted to regions that can achieve high slopes. We refer the reader to Appendix \ref{app:surfaceSlopePlots} for additional plots that show the surface slopes for other values of $\beta$. \vspace{-5pt} \subsection{Dependence on the bulk density (\texorpdfstring{$\rho_\text{S}$}{rho\_S})} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{maxSlope_vs_rhoBeta.pdf} \caption{\label{fig:maxSlope} Maximum slope as a function of $\beta$ and $\rho_\text{S}$. The slope over a given simulation increases with $\beta$ due to the spin and orbit of Dimorphos being more excited. Lower densities achieve higher slopes due to the higher orbital eccentricity for a given $\beta$, in addition to a weaker self-gravity in relation to the tidal and rotational accelerations.} \end{figure} The surface slopes of a given shape are highly dependent on the body's bulk density \citep{RichardsonJ2014,Susorney2022}. It sets the mass and self-gravity, which partially determine the initial slope of each facet. On a related note, a low density means that the self-gravity is weaker, making the accelerations due to tides and rotation stronger in comparison and in turn allowing larger slope changes. Finally, a low density (i.e., a low mass) means a higher eccentricity (and shorter periapsis distance) for a fixed value of $\beta$. Therefore, a lower density will result in a more perturbed orbit, in which the tidal and rotational accelerations play an increasingly important role. For these reasons, the possibility and magnitude of any granular motion will be highly dependent on Dimorphos's bulk density. We see precisely this result in Fig.\ \ref{fig:maxSlope}, which shows the maximum surface slope achieved as a function of $\rho_\text{S}$ and $\beta$. The color of the dots indicates the eccentricity of the particular orbit, which depends on both $\beta$ and $\rho_\text{S}$. We see that the surface slopes increase dramatically as a function of $\beta$, especially for $\rho_\text{S}=1.85\text{ g cm}^{-3}$, reaching ${\sim}40^\circ$ for high $\beta$ due to the higher eccentricity and the resulting stronger tidal and rotational forces.
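As an illustration of this procedure, the per-facet slope evaluation can be sketched as follows (a minimal NumPy sketch; the facet arrays and acceleration inputs are placeholders for the quantities defined in the methods section):
\begin{verbatim}
import numpy as np

def surface_slopes(normals, a_net):
    """Slope (deg) of each facet: the angle between the outward facet
    normal and the direction opposite the net surface acceleration.

    normals : (F, 3) unit outward normals of the facets
    a_net   : (F, 3) net acceleration (self-gravity + tides
              + centrifugal + Euler) at each facet center
    """
    a_hat = a_net / np.linalg.norm(a_net, axis=1, keepdims=True)
    cos_slope = -np.einsum('ij,ij->i', normals, a_hat)
    return np.degrees(np.arccos(np.clip(cos_slope, -1.0, 1.0)))

# Placeholder example: 4 facets with inward-pointing net acceleration.
normals = np.tile([0.0, 0.0, 1.0], (4, 1))
a_net = np.array([[0, 0, -1], [0.1, 0, -1], [0, 0.3, -1], [0.5, 0, -1]])
print(surface_slopes(normals, a_net))  # ~[0, 5.7, 16.7, 26.6] deg
\end{verbatim}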
\vspace{-5pt} \section{Discussion} If Dimorphos's surface has an angle of repose of ${\sim}35^\circ$, similar to that reported at Ryugu and Bennu \citep{Watanabe2019,Barnouin2022}, then we would expect significant landslides and shape changes in cases where $\theta$ exceeds this value. For the Dimorphos shape used in this study, this would only occur for lower densities and high $\beta$ values. Without knowing the true shape of Dimorphos, however, it is impossible to say with certainty how probable any post-impact surface motion is. The aim of this paper is to demonstrate the plausibility of any dynamics-induced granular motion or shape change, and this topic will be revisited once Dimorphos's true shape is known. Recent work focused on surface refreshment on Mars's moon Phobos indicates that a time-varying $\Delta\theta$ of only a few degrees can lead to a gradual creep motion of granular material, without the slope ever exceeding the formal angle of repose. \cite{Ballouz2019} combined dynamical modeling, granular physics, and geologic mapping of color units to demonstrate that regions of combined high values of $\theta$ and $\Delta\theta$ coincide with Phobos's blue units. This work indicated an active surface-refreshing process that could excavate pristine un-weathered material. Depending on Dimorphos's geophysical properties, it may be plausible that a similar creep motion process will occur following the DART impact. We note that surface refreshment could be currently ongoing, if Dimorphos is already in an NPA rotation state as predicted by \cite{Quillen2022a}. It is also important to consider that both $\beta$ and $\rho_\text{S}$ could lie outside the range explored in this paper. Of course, Dimorphos's real shape and surface geology are also unknown, so the results presented here are illustrative and meant to highlight the range of post-impact possibilities. After DART's impact, this phenomenon can be explored with higher fidelity, incorporating the initial shape model and surface geology obtained with DART and LICIACube imagery. When Hera arrives, its optical instruments and CubeSats, especially the Juventas CubeSat and its onboard GRAvimeter for small Solar System bodies (GRASS) instrument, will measure the dynamical slopes as one of its science objectives \citep{Michel2018,Karatekin2021,Ritter2021}. The seismic pulse delivered by the DART impact may significantly alter surface features on Dimorphos \citep{Quillen2022b,Thomas2005}. We also note that the global shape of Dimorphos may also be immediately altered by the DART impact itself \citep{Raducan2022b}. In addition to affecting the system dynamics \citep{Nakano2022}, these processes will create a unique challenge in discerning the various surface refreshment mechanisms upon Hera's arrival. The results of the work presented here have the following implications, in the context of the DART and Hera missions as well as binary asteroids in general: \textbf{Granular motion and surface changes.} Through images and infrared measurements, Hera may identify refreshed areas of Dimorphos's surface exposed by dynamics-induced surface motion. Furthermore, a comparison of images taken by DART and Hera may be used to identify surface features that have moved or changed during the four years between the missions. If there is long-term boulder motion on the surface, Hera may detect the motion of boulders over the course of its six-month mission lifetime. Furthermore, this effect may noticeably alter the system's dynamics \citep{Brack2019}. 
\textbf{Crater degradation.} Impact craters (both natural craters and DART's crater) may degrade at different rates based on their location on the surface as surface slope changes are spatially dependent. This may have important implications for understanding crater morphology and the surface age of Dimorphos, a challenge that does not usually require consideration for single asteroids due to their quasi-static spin states \citep{Sugita2019,Walsh2019,RichardsonJ2020}. \textbf{Tidal dissipation.} Granular surface motion may affect tidal dissipation in two ways. First, any material undergoing surface motion will dissipate energy through friction, potentially enhancing dissipation beyond what is assumed from traditional tidal theories \citep{Goldreich2009,Nimmo2019}. Second, granular motion will change Dimorphos's mass distribution and, therefore, its gravitational potential. This mechanism could subtly remove energy from the system, an effect not captured by simplified tidal treatments. \textbf{Binary formation and evolution.} One proposed scenario of binary formation assumes the secondary forms through a spin-up fission event driven by the Yarkovsky–O'Keefe–Radzievskii–Paddack (YORP) effect and initially orbits chaotically. At some later time, the secondary must fission a second time, forming a short-lived triple system and liberating excess free energy in order to enter a stable, synchronous spin state \citep{Jacobson2011a}. Given the results presented herein, we might expect landslides on the surface well before a secondary fission event. This process may dissipate energy and reshape the secondary, allowing for synchronous rotation without the need to invoke additional fissions. Furthermore, if all secondaries undergo chaotic rotation at some point, then we might expect the population to have broadly similar shapes. However, this would largely depend on the relative timescales for tidal locking and surface refreshment, as well as other competing slope-altering processes such as meteorite impacts. In any case, rotation-driven surface motion, shape change, and energy dissipation may be important effects that should be accounted for in any binary asteroid formation scenario. \vspace{-5pt} \section{Conclusions} In this paper we have shown that perturbed post-impact spin and orbital dynamics may lead to significant fluctuations in Dimorphos's surface slopes. Depending on Dimorphos's shape, bulk density, surface geology, and $\beta$, we predict that this may trigger long-lived granular motion on the surface. The implications for dynamics-driven granular motion include a refreshment of Dimorphos's surface, impact crater degradation, and enhanced tidal dissipation. Understanding these effects will help guide and interpret the measurements Hera will obtain on Dimorphos's surface and interior. In addition, this effect may have implications for the formation and evolution of small binary systems in general. Thanks to this initial study, post-impact granular motion will be explored more closely and with higher fidelity when Dimorphos's shape model first becomes available. Future work includes directly modeling granular motion on the surface in addition to coupling that motion back to the resulting dynamical evolution. \begin{acknowledgements} We kindly thank the anonymous referee, whose insightful comments significantly improved the manuscript. This work was supported in part by the DART mission, NASA Contract \#80MSFC20D0004 to JHU/APL. E.T., G.N., O.K., and P.M. 
acknowledge support from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 870377 (project NEO-MAPP). E.T., G.N., and O.K. acknowledge support by the Belgian Federal Science Policy (BELSPO) through the ESA/PRODEX Program. P.M. acknowledges support from ESA and from the French Space Agency CNES. The simulations herein were carried out on The University of Maryland Astronomy Department’s YORP cluster, administered by the Center for Theory and Computation. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec:intro} The automated audio captioning task is a combination of audio and natural language processing to create meaningful natural language sentences \cite{DBLP:journals/corr/DrossosAV17}. The purpose of audio captioning is different from that of previous audio processing tasks such as audio event/scene detection and audio tagging. Those tasks do not aim to create descriptive natural language sentences, whereas audio captioning aims to capture relations between events, scenes, and objects to create meaningful sentences. This technical report presents the details of our submission for DCASE 2021 Task 6, automated audio captioning. We propose an encoder-decoder model using sound event detection and PANNs (Pretrained Audio Neural Networks). Since sound events in audio clips are informative for capturing the main context of an audio clip, we propose a new model using sound event detection to obtain more semantic information. This paper is organized as follows: Section 2 describes the details of the system architecture. The experimental setup and dataset information are presented in Section 3. Section 4 shows the results and Section 5 gives the conclusion. \section{SYSTEM ARCHITECTURE} \label{sec:system} The overall system architecture is shown in Figure 1. The details of the proposed model are presented in this section. \begin{figure*} \centering \includegraphics[scale=0.5]{model_event.pdf} \caption{The illustration of the proposed audio captioning model. The PANNs are used to extract both audio features and sound events. (+) denotes concatenation.} \label{figModelPng} \end{figure*} \subsection{PANNs Feature Extraction} \label{ssec:panns} The PANNs are pretrained on the AudioSet dataset \cite{7952261}. The Wavegram-Logmel-CNN14 model is used to extract the PANNs features. A 96~ms Hamming window with 50\% overlap is applied, similar to \cite{Drossos_2020}. We denote the PANNs features by $\textbf{x}=[x_1,...,x_T]$, $x_t \in \mathbb{R}^{2048}$. \subsection{Sound Event Extraction} \label{ssec:sound} The PANNs are also used to extract sound events. The last layer of the PANNs gives the probabilities of each sound event in the AudioSet dataset, which contains 527 sound classes. We obtain $\textbf{e}=[e_1,...,e_M] \in \mathbb{R}^{527}$, where $e_m$ is the probability of the $m$-th sound class in the AudioSet dataset. We apply a threshold of 0.1 to the sound event probabilities, and the events with probability greater than 0.1 are selected for each audio clip. Therefore, the most probable events are obtained for a given audio clip. After that, an event tokenizer is generated from the AudioSet events to split event labels that consist of more than one word. The purpose of tokenization is to capture the similarity of words across different sound events. For instance, AudioSet contains different classes such as ``{\textit{Funny Music}}'', ``{\textit{Sad Music}}'', ``{\textit{Scary Music}}'', ``{\textit{Middle Eastern Music}}'', etc. The tokenization method can capture the similarities between four audio clips that contain these different music events. After tokenization, we obtain an event corpus represented as $\textbf{c}=[c_1,...,c_K]$ with $K=600$ unique tokens. Then, the sound event vector of each audio clip is obtained by one-hot encoding: for the $j$-th audio clip, $c_{jk}=1$ if the clip contains the $k$-th token and $c_{jk}=0$ otherwise.
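As an illustration, the thresholding and one-hot encoding can be sketched as follows (hypothetical code with a toy label subset; \texttt{panns\_probs} stands for the 527-dimensional output of the PANNs classifier):
\begin{verbatim}
import numpy as np

AUDIOSET_LABELS = ["Funny music", "Sad music", "Speech", "Dog"]  # toy subset

def event_vector(panns_probs, corpus, threshold=0.1):
    """One-hot vector over the token corpus: a token is set to 1 if it
    appears in any event whose PANNs probability exceeds the threshold."""
    tokens = set()
    for prob, label in zip(panns_probs, AUDIOSET_LABELS):
        if prob > threshold:
            tokens.update(label.lower().split())  # tokenize multi-word labels
    return np.array([1.0 if t in tokens else 0.0 for t in corpus])

corpus = sorted({w for l in AUDIOSET_LABELS for w in l.lower().split()})
print(event_vector([0.4, 0.05, 0.2, 0.0], corpus))  # shared "music" token
\end{verbatim}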
\subsection{Encoder-Decoder} \label{ssec:encoder} We use the same encoder-decoder model as in \cite{9327916}. In the proposed encoder-decoder architecture, there are two BiGRU layers for encoding the PANNs features. The first BiGRU layer contains 32 cells and the second BiGRU layer contains 64 cells. One GRU layer with 128 cells is used for encoding partial captions. The numbers of cells are selected empirically. After obtaining a sound event vector for each audio file, the PANNs features and sound event vectors are concatenated as input to the encoder. The captions in the dataset are preprocessed before being fed to the model. All words are converted to lowercase and all punctuation is removed. \texttt{$<$sos$>$} and \texttt{$<$eos$>$} tokens are added to the beginning and end of the captions. Previous studies show that the inclusion of Word2Vec \cite{DBLP:journals/corr/MikolovSCCD13} improves the performance of audio captioning systems \cite{eren2020audio}; therefore, Word2Vec is used for representing the captions in the training dataset. Each unique word in the dataset is represented by an embedding $v_i \in \mathbb{R}^{256}$, where 256 is the word embedding dimension. Encoded partial captions are concatenated with the encoded PANNs features and sound event vectors to feed the decoder. The decoder contains one GRU layer with 128 cells. A Softmax function is applied after the fully connected layer. The decoder predicts probabilities over the unique words in the dataset and selects the most probable word as the predicted word. Once the \texttt{$<$eos$>$} token is produced, the whole sentence is formed from the predicted words. \section{EXPERIMENTS} \label{sec:experiments} This section describes the details of the dataset and the implementation. \subsection{Dataset} \label{ssec:dataset} We use the Clotho \cite{Drossos_2020} audio captioning dataset for our experiments. The challenge presents a new version of the Clotho dataset, called Clotho V2. Clotho V2 contains 3840 audio clips in the development split, 1046 audio files in the validation split, 1045 audio files in the evaluation split, and 1043 audio files in the test split. All of the splits have five captions for each audio clip. For our experiments, we use each audio file five times with its corresponding captions, similar to \cite{Drossos_2020}. \subsection{Experimental Setup} \label{ssec:setup} Our system is implemented using the Keras framework \cite{keras}, and our experiments are run on a computer with a GTX 1660 Ti GPU running Ubuntu 18.04. Python 3.6 is used for the implementation. We run all experiments for 100 epochs and choose the model with the minimum validation error. Different batch sizes were applied to our proposed model, and a batch size of 128 gave the best results. The Adam optimizer, LeakyReLU activation function, and cross-entropy loss are used. Batch normalization \cite{DBLP:journals/corr/IoffeS15} and a dropout rate of 0.5 are also used. The number of parameters in our proposed model is nearly 2,500,000. \section{RESULTS} \label{sec:results} For evaluation, the BLEU-n \cite{Papineni2002}, METEOR \cite{Banerjee2005}, ROUGE$_L$ \cite{Lin2004}, CIDEr \cite{Vedantam2015}, SPICE \cite{10.1007/978-3-319-46454-1_24}, and SPIDEr \cite{Liu_2017} metrics are used. BLEU-n counts the matching words between the actual and predicted captions; it calculates the precision for n-grams. METEOR is based on both recall and precision.
ROUGE$_L$ is based on the longest common subsequence. CIDEr provides a more semantic measure by calculating the cosine similarity between actual and predicted captions. SPICE parses the actual and predicted captions and creates scene graph representations. SPIDEr is a linear combination of CIDEr and SPICE. The comparison of our proposed method with the Clotho V2 baseline is shown in Table 1. The results show that our model significantly outperforms the challenge baseline across all evaluation metrics. \begin{table*} [t] \caption{The comparison of our proposed method and Clotho V2 baseline results} \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{ l|l|c|c|c|c|c|c|c|c} \hline \multirow{2}*{\bfseries Method} & \multicolumn{9} {c}{\bfseries Metric} \\ \cline{2-10} & \textbf{BLEU-1}& \textbf{BLEU-2}& \textbf{BLEU-3} & \textbf{BLEU-4} & \textbf{METEOR} & \textbf{ROUGE$_L$} & \textbf{CIDEr} & \textbf{SPICE} & \textbf{SPIDEr}\\ \hline \textbf{Clotho V2 baseline} & 0.378 & 0.119 & 0.050 & 0.017 & 0.078 & 0.263 & 0.075 & 0.028 & 0.051\\ \textbf{Proposed Method} & \textbf{0.586} & \textbf{0.356} & \textbf{0.268} & \textbf{0.150} & \textbf{0.214} & \textbf{0.444} & \textbf{0.328} & \textbf{0.155} & \textbf{0.242} \\ \hline \end{tabular} } \end{center} \label{table1} \end{table*} The predicted captions on the evaluation dataset show that our proposed model can produce meaningful captions. \section{CONCLUSION} \label{sec:conclusion} This technical report presents our system details for participating in DCASE 2021 Task 6. An encoder-decoder model with sound event detection and pretrained audio features is proposed for the challenge. The results show that the inclusion of sound event detection improves audio captioning performance. In future work, different fusion and extraction methods will be applied to the audio features and sound events. \bibliographystyle{IEEEtran}
\section{Introduction} In typical modern machine learning tasks, we often encounter large-scale optimization problems, which require huge computational time to solve. Hence, saving computational time in optimization processes is practically quite important and is a main interest in the optimization community. \par To tackle large-scale problems, a gold-standard approach is the use of the {\it{Stochastic Gradient Descent}} (SGD) method \cite{robbins1951stochastic}. To reduce the loss, SGD updates the current solution in each iteration using a stochastic gradient, that is, the average of the gradients of the loss functions corresponding to a random subset of the dataset (mini-batch) rather than the whole dataset. This (stochastic) mini-batch approach allows SGD to be faster than deterministic full-batch methods in terms of computational time \cite{dekel2012optimal, li2014efficient}. Furthermore, the {\it{Stochastic Nesterov's Accelerated Gradient}} (SNAG) method and its variants have been proposed \cite{hu2009accelerated, chen2012optimal, ghadimi2016accelerated}, which are based on the combination of SGD with Nesterov's acceleration \cite{nesterov2013introductory, nesterov2013gradient, tseng2008accelerated}. Mini-batch SNAG theoretically outperforms vanilla mini-batch SGD for moderate optimization accuracy, though its asymptotic convergence rate matches that of SGD. \par For realizing further scalability, {\it{distributed optimization}} has received much research attention \cite{bekkerman2011scaling, duchi2011dual, jaggi2014communication, gemulla2011large, dean2012large, ho2013more, arjevani2015communication, chen2016revisiting, goyal2017accurate}. Distributed optimization methods are mainly classified as synchronous centralized \cite{zinkevich2010parallelized, dekel2012optimal, shamir2014distributed}, asynchronous centralized \cite{recht2011hogwild, agarwal2011distributed, lian2015asynchronous, liu2015asynchronous, zheng2017asynchronous}, synchronous decentralized \cite{nedic2009distributed, yuan2016convergence, lian2017can, lan2017communication, uribe2017optimal, scaman2018optimal} and asynchronous decentralized \cite{lian2017asynchronous, lan2018asynchronous} ones by their communication types. In this paper, we particularly focus on data-parallel stochastic gradient methods for {\it{synchronous centralized}} distributed optimization with a smooth objective function $F: \mathbb{R}^d \to \mathbb{R}, F(x) = \frac{1}{P}\sum_{p=1}^P\frac{1}{N}\sum_{i=1}^N f_{i, p}(x)$, where each $\{f_{i, p}\}_{i=1}^N$ corresponds to a data partition of the whole dataset assigned to the $p$-th node (or processor). In this setting, each processor $p$ first computes a stochastic gradient of $(1/N)\sum_{i=1}^N f_{i, p}(x)$ and then the nodes send the gradients to each other. Finally, the current solution is updated using the averaged gradient on each processor. Here we assume that node-to-node broadcasts are used, but it is also possible to utilize an intermediate parameter server. \par A main concern in synchronous distributed optimization is communication cost because it can easily be a bottleneck in optimization processes. Theoretically, naive parallel mini-batch SGD achieves linear speed-up with respect to the number of processors \cite{dekel2012optimal, li2014efficient}, but not empirically, due to this cost \cite{shamir2014distributed, chen2016revisiting}. For leveraging the power of parallel computing, it is essential to reduce the communication cost.
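To fix ideas, one iteration of this synchronous scheme can be sketched as follows (a simulated NumPy illustration on a toy least-squares problem of our own choosing; in practice the averaging step is an all-reduce communication round across nodes):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
P, N, d = 4, 100, 10                                # processors, samples/node, dim
A = [rng.standard_normal((N, d)) for _ in range(P)] # local data partitions
b = [rng.standard_normal(N) for _ in range(P)]
x = np.zeros(d)                                     # shared model parameters
eta, batch = 0.1, 10

def local_stoch_grad(p, x):
    """Mini-batch gradient of the local least-squares loss on node p."""
    idx = rng.choice(N, size=batch, replace=False)
    return A[p][idx].T @ (A[p][idx] @ x - b[p][idx]) / batch

for t in range(100):
    # Each node computes its gradient; the mean plays the role of the
    # broadcast-and-aggregate communication step.
    g = np.mean([local_stoch_grad(p, x) for p in range(P)], axis=0)
    x -= eta * g
\end{verbatim}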
\par One fascinating technique for reducing communication cost in distributed optimization is compression of the communicated gradients \cite{aji2017sparse, lin2017deep, wangni2018gradient, alistarh2018convergence, stich2018sparsified, shi2019distributed, karimireddy2019error, seide20141, wen2017terngrad, alistarh2017qsgd, wu2018error}. {\it{Sparsification}} is an approach in which the gradient is compressed by sparsifying it on each local node before communication \cite{aji2017sparse, lin2017deep, wangni2018gradient, alistarh2018convergence, stich2018sparsified, shi2019distributed, karimireddy2019error}. For sparsifying a gradient, the top-$k$ algorithm, which keeps the $k$ largest components of the gradient in absolute value and drops the remaining $d-k$ components, has typically been used. Another example of compression is quantization, a technique that limits the number of bits used to represent the communicated gradients. Several studies have demonstrated that parallel SGD with quantized gradients has good practical performance \cite{seide20141, wen2017terngrad, alistarh2017qsgd, wu2018error}. In particular, Alistarh et al. \cite{alistarh2017qsgd} have proposed Quantized SGD (QSGD), the first quantization algorithm with a theoretical convergence rate. QSGD is based on unbiased quantization of the communicated gradient. \par However, there theoretically exists an essential trade-off between communication cost and convergence speed when we use naive gradient compression schemes. Specifically, naive compression (including sparsification and quantization) causes large variances and is theoretically always slower than vanilla SGD, though it surely reduces the communication cost \cite{stich2018sparsified, alistarh2017qsgd}. \par The {\it{error feedback}} scheme partially solves this trade-off problem. Some studies have considered the usage of compressed gradients together with the locally accumulated compression errors on each node, and its effectiveness has been validated empirically \cite{aji2017sparse, lin2017deep, wu2018error}. Very recently, several studies have attempted to analyse and justify the effectiveness of error feedback from a theoretical view \cite{alistarh2018convergence, stich2018sparsified, cordonnier2018convex, karimireddy2019error, tang2019doublesqueeze, zheng2019communication}. Surprisingly, it has been shown that sparsified SGD with error feedback {\it{asymptotically}} (in terms of optimization accuracy) achieves the {\it{same}} rate as non-sparsified SGD. \par Nevertheless, from a theoretical point of view, the method may still require large communication cost for maintaining the rate of ideal SGD, particularly in early iterations, due to the compression error. Therefore, more communication-efficient methods are desired. The goal of this paper is the creation of an algorithm that requires smaller per iteration communication cost than sparsified SGD with error feedback while maintaining the same rate as vanilla SGD. \paragraph*{Main contribution} We construct and analyse the Sparsified Stochastic Nesterov's Accelerated Gradient method (S-SNAG-EF) based on the combination of (i) unbiased sparsification of the stochastic gradients; (ii) an error feedback scheme; and (iii) Nesterov's acceleration technique. The main message of this paper is the following: \begin{framed} S-SNAG-EF maintains the convergence rate of vanilla SGD with per iteration communication cost $O(P^{3/4}\varepsilon d)$\footnotemark for general convex problems.
In contrast, non-accelerated methods require $O(P^{1/2}\sqrt{\varepsilon}d)$. Namely, the per iteration communication cost of our method has a better dependence on $\varepsilon$ than those of previous methods. This superiority also holds for nonconvex problems. \end{framed} \footnotetext{Here, $P$ is the number of processors, $\varepsilon$ is the desired optimization accuracy and $d$ is the dimension of the parameter space. $\varepsilon \leq 1/P$ is assumed for simplicity. } We also give a thorough analysis of non-accelerated sparsified SGD with error feedback based on {\it{unbiased}} random compression (we call this algorithm S-SGD-EF in this paper) and show better results than previously known ones\footnotemark. A comparison of our method with the most relevant previous methods is summarized in Table \ref{table: commu_cost_comparison}. \par \footnotetext{These improvements come from the unbiasedness of the random compression. Several previous works have analysed (non-accelerated) sparsified SGD with error feedback in distributed settings based on more general compression schemes, including unbiased random compression \citep{alistarh2018convergence,cordonnier2018convex,tang2019doublesqueeze,zheng2019communication}. However, these approaches do not fully utilize the unbiasedness of the compression, and the convergence rates in parallel settings are worse than ours. } \begin{table}[] \centering \scalebox{0.94}{ \label{table: commu_cost_comparison} \begin{tabular}{c c c c} \hline & \hspace{-1em}general convex & \hspace{-1em}strongly convex & \hspace{-1em}general nonconvex \\ \hline S-SGD & $d$ & $d$ & $d$ \\ S-SNAG & $d$ & $d$ & $d$ \\ \begin{tabular}{c} MEM-SGD \cite{cordonnier2018convex}\end{tabular}& $(P\sqrt{\varepsilon}\wedge 1) d$ & $(P\sqrt{\varepsilon}\wedge 1) d$ & No Analysis \\ \begin{tabular}{c} DoubleSqueeze \cite{tang2019doublesqueeze} \end{tabular} & No Analysis & No Analysis & $(P\sqrt{\varepsilon}\wedge 1) d$ \\ {\color{red}S-SGD-EF} & {\color{red}$(\sqrt{P\varepsilon}\wedge 1)d$ }& {\color{red}$(\sqrt{P\varepsilon}\wedge 1)d$} & {\color{red}$(\sqrt{P\varepsilon}\wedge 1)d$} \\ {\color{red}S-SNAG-EF} & {\color{red}$(P^{\frac{3}{4}}\varepsilon\wedge 1)d$} & {\color{red}$((P^{\frac{1}{3}}\mu^{\frac{1}{3}}\varepsilon^{\frac{2}{3}} + P^\frac{3}{4}\mu^\frac{1}{4}\varepsilon^\frac{3}{4})\wedge 1)d$} & {\color{red}$((P^\frac{3}{4}\varepsilon^\frac{3}{4} + P^\frac{4}{3}\varepsilon^\frac{2}{3})\wedge 1)d$}\\ \hline \end{tabular} } \caption{Comparison of the order of the per iteration communication cost for maintaining the rate of vanilla SGD. Here, we refer to the standard SGD iteration complexities for achieving $F(x) - F(x_*) \leq \varepsilon$ (for convex objectives) or $\|\nabla F(x)\|^2 \leq \varepsilon$ (for nonconvex ones), which are $O(1/\varepsilon + 1/(P\varepsilon^2))$ for general convex problems, $O(1/\mu + 1/(P\mu\varepsilon))$ for $\mu$-strongly convex ones, and $O(1/\varepsilon + 1/(P\varepsilon^2))$ for general nonconvex ones. The per iteration communication cost is simply computed as $\widetilde{O}(kP \wedge d)$, where $d$ is the dimension of the parameter space, $k/d$ is the compression ratio for sparsified methods, and $P$ is the number of processors. For simple comparison, we assume that $\varepsilon \leq O(1/P)$. Also $L$, $\mathcal{V}$, $\Delta = F(x_{\mathrm{ini}}) - F(x_*)$, and $D = \|x_{\mathrm{ini}} - x_*\|^2$ are assumed to be $\Theta(1)$. Extra logarithmic factors are ignored.
} \end{table} \setlength{\textfloatsep}{8pt} {\bf{Related Work}}\ \ \ We briefly describe the papers most relevant to this work. Stich et al. \cite{stich2018sparsified} first provided a theoretical analysis of sparsified SGD with error feedback (called MEM-SGD) and showed that MEM-SGD asymptotically achieves the rate of non-sparsified SGD. However, their analysis is limited to serial computing settings, i.e., $P = 1$. Independently, Alistarh et al. \cite{alistarh2018convergence} have also theoretically considered sparsified SGD with error feedback in parallel settings for convex and nonconvex objectives. However, their analysis is still unsatisfactory because it relies on an artificial assumption due to the usage of the top-$k$ algorithm for gradient compression, and it is unclear from their results whether the algorithm asymptotically possesses the linear speed-up property with respect to the number of nodes. After their work, Cordonnier et al. \cite{cordonnier2018convex} analysed sparsified SGD with error feedback in parallel settings and showed the linear speed-up property for the stochastic error term, but not for the compression error terms. Recently, Karimireddy et al. \cite{karimireddy2019error} have also analysed a variant of sparsified SGD with error feedback (called EF-SGD) for convex and nonconvex cases in serial computing settings. Differently from ours, their analysis allows non-smoothness of the objectives in convex cases, though the convergence rate is always worse than that of vanilla SGD and the algorithm does not possess asymptotic optimality. More recently, Tang et al. \citep{tang2019doublesqueeze} have proposed and analysed DoubleSqueeze in parallel and nonconvex settings. In DoubleSqueeze, the error feedback scheme is applied to each worker and also to the parameter server. They have also shown the linear speed-up property for the stochastic error term, but not for the compression error terms. Zheng et al. \citep{zheng2019communication} have proposed blockwise compression with error feedback and its acceleration by Nesterov's momentum. They have analysed the algorithms in parallel and nonconvex settings, but the convergence rates are essentially the same as those of DoubleSqueeze. Importantly, they have not shown any theoretical superiority of their accelerated method over the non-accelerated one. \section{Notation and Assumptions} $\| \cdot \|$ denotes the Euclidean $L_2$ norm $\| \cdot \|_2$: $\|x\| = \|x\|_2 = \sqrt{\sum_{i}x_i^2}$. For a natural number $m$, $[m]$ denotes the set $\{1, 2, \ldots, m\}$. We define $Q(z): \mathbb{R}^d \to \mathbb{R}$ as the quadratic function with center $z$, i.e., $Q(z)(x) = \|x - z\|^2$. The sparsification operator $\mathrm{RandComp}$ is defined as $\mathrm{RandComp}(x, k)_j = (d/k)x_j$ for $j$ in a uniformly random subset $J$ with $\#J=k$, and $\mathrm{RandComp}(x, k)_j = 0$ otherwise. The following are the theoretical assumptions for our analysis. They are very standard in the optimization literature. We always assume the first three assumptions. \begin{assump}\label{assump: sol_existence} $F$ has a minimizer $x_* \in \mathbb{R}^d$. \end{assump} \begin{assump}\label{assump: smoothness} $F$ is $L$-smooth ($L > 0$), i.e., $\|\nabla F(x) - \nabla F(y)\| \leq L\|x-y\|, \forall x, y \in \mathbb{R}^d$. \end{assump} \begin{assump}\label{assump: bounded_variance} $\{f_{i, p}\}_{i, p}$ has $\mathcal{V}$-bounded variance, i.e., $\frac{1}{NP}\sum_{i, p}\|\nabla f_{i, p}(x) - \nabla F(x)\|^2 \leq \mathcal{V}, \forall x \in \mathbb{R}^d$.
\end{assump} \begin{assump}\label{assump: strong_convexity} $F$ is $\mu$-strongly convex ($\mu >0$), i.e., $F(y) - (F(x) + \langle \nabla F(x), y - x\rangle) \geq (\mu/2)\|x - y\|^2, \forall x, y \in \mathbb{R}^d$. \end{assump} \section{Proposed Algorithm} In this section, we first illustrate the three core techniques used to construct our algorithm. Then we describe our proposed algorithm S-SNAG-EF. \par {\bf{Sparsification}}\ \ \ Gradient sparsification is an intuitive approach for reducing the communication cost of distributed optimization. Concretely, on each processor, $k$ of the $d$ elements of the local stochastic gradient are selected and the others are set to zero. Then, the sparsified gradients are communicated between the processors. Several selection methods have been proposed, but we adopt random sparsification, which is the simplest one and is desirable from a theoretical point of view because of its unbiasedness. It is known that this naive sparsification causes a $d/k$-times larger variance and hence $d/k$-times slower convergence than vanilla SGD. \par {\bf{Error feedback}}\ \ \ The key observation is that each processor can make use of the history of its local, uncompressed stochastic gradients to correct the compression. For the first update, each processor sparsifies its local gradient, broadcasts it to the other processors, and aggregates the received gradients. The difference from naive sparsification is that each processor cumulatively saves the difference of the non-compressed gradient from the compressed gradient (we call this the compression error). During subsequent updates, each processor sparsifies the sum of its local gradient and the appropriately scaled cumulative compression error, rather than the gradient alone. We call this process error feedback. The formal description of the algorithm is given in Algorithm \ref{alg: s_sgd_ef}. It is known that sparsified SGD with error feedback asymptotically achieves the same rate as vanilla SGD. \begin{algorithm}[H] \label{alg: s_sgd_ef} \caption{S-SGD-EF($F$, $x_{\mathrm{in}}$, $\{\eta_t\}_{t=1}^{\infty}$, $\gamma$, $k$, $T$)} \begin{algorithmic}[1] \STATE Set: $x_0 = x_{\mathrm{in}}$, $m_{0, p} = 0\ (p \in [P])$. \FOR {$t=1$ to $T$} \FOR {$p=1$ to $P$ \it{in parallel}} \STATE Compute i.i.d. stochastic gradient of the partition of $F$: $\nabla f_{i, p}(x_{t-1})$. \STATE Correct gradient based on cumulative compression error: $g_{t, p} = \nabla f_{i, p}(x_{t-1}) + (\gamma/\eta_t) m_{t-1, p}$. \STATE Compress: $\bar g_{t, p} = \mathrm{RandComp}(g_{t, p}, k)$. \STATE Update cumulative compression error: \\ $m_{t, p} = m_{t-1, p} + \eta_t(\nabla f_{i, p}(x_{t-1}) - \bar g_{t, p})$. \ENDFOR \STATE Broadcast and Receive: $\bar g_{t, p}\ (p \in [P])$. \FOR {$p=1$ to $P$ \it{in parallel}} \STATE Update solution: $x_t = x_{t-1} - \eta_t \frac{1}{P}\sum_{p=1}^P \bar g_{t, p}$. \ENDFOR \ENDFOR \ENSURE $x_{\hat t}$. \end{algorithmic} \end{algorithm} \begin{rem}[Difference from previous algorithms] Algorithm \ref{alg: s_sgd_ef} can be regarded as an extension of MEM-SGD \cite{stich2018sparsified} or EF-SGD \cite{karimireddy2019error} to parallel computing settings, though these two methods mainly utilize top-$k$ compression for gradient sparsification. In contrast, we use unbiased random compression. This difference is essential for our analysis. \end{rem}
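For concreteness, a minimal Python sketch (our own, purely illustrative notation) of the unbiased operator $\mathrm{RandComp}$ and of one local error-feedback correction step corresponding to lines 5--7 of Algorithm \ref{alg: s_sgd_ef} could read:
\begin{verbatim}
import numpy as np

def rand_comp(x, k, rng):
    # Unbiased random sparsification: keep k of the d coordinates,
    # rescaled by d/k so that E[rand_comp(x, k)] = x.
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = (d / k) * x[idx]
    return out

def error_feedback_step(grad, m, eta, gamma, k, rng):
    # One S-SGD-EF-style local step (sketch): sparsify the gradient
    # corrected by the scaled cumulative error m, then accumulate
    # the new compression error.
    g = grad + (gamma / eta) * m
    g_bar = rand_comp(g, k, rng)
    m_new = m + eta * (grad - g_bar)
    return g_bar, m_new
\end{verbatim}
Note that, unlike top-$k$ selection, the random index set makes the compressed vector an unbiased estimator of the corrected gradient, which is the property our analysis exploits.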
{\bf{Acceleration}}\ \ \ It is well-known that Nesterov's accelerated method achieves faster convergence than vanilla GD and is optimal in convex optimization. Hence, acceleration is also expected to be effective for improving compressed stochastic gradient methods. The most famous form of the acceleration algorithm uses momentum: the solution is constructed as the sum of the standard gradient descent solution and the appropriately scaled momentum, that is, the difference of the current solution from the previous one. In an alternative form of the acceleration algorithm, we maintain three solutions: (i) a conservative solution (updated from (iii) at each iteration with a small step size); (ii) an aggressive solution (updated with a large step size); and (iii) the convex combination of (i) and (ii). At first sight, they seem to be different algorithms, but it is easy to show their equivalence. We adopt the latter form for our algorithm. The concrete procedure of NAG is illustrated in Algorithm \ref{alg: nag}. \begin{algorithm}[t] \label{alg: one_iter_nag} \caption{OneIterNAG($x$, $y$, $z$, $\Delta_y$, $\Delta_z$, $\alpha$, $\beta$)} \begin{algorithmic}[1] \STATE $y = y - \Delta_y$. \STATE $z = (1-\beta)z + \beta x - \Delta_z$. \STATE $x = (1 - \alpha)y + \alpha z$. \ENSURE $x, y, z$ \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \label{alg: nag} \caption{NAG($F$, $x_{\mathrm{in}}$, $\{\eta_t, \lambda_t, \alpha_t, \beta_t\}_{t=1}^{\infty}$, $T$)} \begin{algorithmic}[1] \STATE Set: $y_0 = z_0 = x_{\mathrm{in}}$. \FOR {$t=1$ to $T$} \STATE $x_t, y_t, z_t = $ OneIterNAG($x_{t-1}$, $y_{t-1}$, $z_{t-1}$, $\eta_t \nabla F(x_{t-1})$, $\lambda_t \nabla F(x_{t-1})$, $\alpha_t$, $\beta_t$). \ENDFOR \ENSURE $y_T$ \end{algorithmic} \end{algorithm} {\bf{Proposed Algorithm: S-SNAG-EF}}\ \ \ The procedure of S-SNAG-EF for convex objectives is provided in Algorithm \ref{alg: s_snag_ef}. First, each processor computes an i.i.d. stochastic gradient in line 4. In lines 5 and 6, two different compressed gradient estimators are computed by randomly picking $k/2$ coordinates for each. Then, in line 7, we update the three cumulative compression errors. Why are different compressed estimators and cumulative errors necessary for appropriate updates? In a typical acceleration algorithm we construct two different solution paths $\{y_t\}$ and $\{z_t\}$, and their aggregations $\{x_t\}$, as in line 11. The aggregation of the ``conservative'' solution $y_t$ (because of its small learning rate $\eta_t$) and the ``aggressive'' solution $z_t$ (because of its large learning rate $\lambda_t$) is the essence of Nesterov's acceleration. On the other hand, from a theoretical point of view, the error feedback correction to the vanilla stochastic gradient should be scaled by the inverse of the learning rate, as in line 5. Therefore, to use two different learning rates, it is necessary to construct two compressed gradient estimators and hence three compression errors. Finally, we update the three solutions similarly to Nesterov's accelerated algorithm. \par \begin{algorithm}[t] \label{alg: s_snag_ef} \caption{S-SNAG-EF($F$, $x_{\mathrm{in}}$, $\{\eta_t, \lambda_t, \alpha_t, \beta_t\}_{t=1}^{\infty}$, $\gamma$, $k$, $T$)} \begin{algorithmic}[1] \STATE Set: $y_0 = z_0 = x_{\mathrm{in}}$, $m_{0, p} = m_{0, p}^{(y)} = m_{0, p}^{(z)} = 0\ (p \in [P])$. \FOR {$t=1$ to $T$} \FOR {$p=1$ to $P$ \it{in parallel}} \STATE Compute i.i.d. stochastic gradient of the partition of $F$: $\nabla f_{i, p}(x_{t-1})$. 
\STATE Correct gradients based on cumulative compression errors: $g_{t, p}^{(y)} = g_{t, p}^{(z)} = \nabla f_{i, p}(x_{t-1})$, \\ $\begin{cases}g_{t, p}^{(y)} += (\gamma/\eta_t) m_{t-1, p}, \\ g_{t, p}^{(z)} += (\gamma/\lambda_t)((1-\beta_t) m_{t-1, p}^{(z)}+\beta_t m_{t-1, p}). \end{cases}$ \STATE Compress corrected gradients: \\ $\begin{cases}\bar g_{t, p}^{(y)} = \mathrm{RandComp}(g_{t, p}^{(y)}, k/2), \\ \bar g_{t, p}^{(z)} = \mathrm{RandComp}(g_{t, p}^{(z)}, k/2). \end{cases}$ \STATE Update cumulative compression errors: \\ $m_{t, p}, m_{t, p}^{(y)}, m_{t, p}^{(z)} =$ OneIterNAG($m_{t-1, p}$, $m_{t-1, p}^{(y)}$, $m_{t-1, p}^{(z)}$, $\Delta_y^t$, $\Delta_z^t$, $\alpha_t$, $\beta_t$), where $\Delta_y^t = \eta_t(\bar g_{t, p}^{(y)} - \nabla f_{i, p}(x_{t-1}))$ and $\Delta_z^t = \lambda_t(\bar g_{t, p}^{(z)} - \nabla f_{i, p}(x_{t-1}))$. \ENDFOR \STATE Broadcast and Receive: \\ $\bar g_{t, p}^{(y)}, \bar g_{t, p}^{(z)}\ (p \in [P])$. \FOR {$p=1$ to $P$ \it{in parallel}} \STATE Update solutions: \\ $x_t, y_t, z_t =$ OneIterNAG($x_{t-1}$, $y_{t-1}$, $z_{t-1}$, $\eta_t \frac{1}{P}\sum_{p=1}^P \bar g_{t, p}^{(y)}$, $\lambda_t \frac{1}{P}\sum_{p=1}^P \bar g_{t, p}^{(z)}$, $\alpha_t$, $\beta_t$) \ENDFOR \ENDFOR \RETURN $x_{\mathrm{out}} = y_{\hat t}$. \end{algorithmic} \end{algorithm} \begin{rem}[Parameter tuning] It seems that Algorithm \ref{alg: s_snag_ef} has many tuning parameters. However, this is not the case. Specifically, as Theorem \ref{thm: s_snag_ef_strongly_convex} in Section \ref{sec: convergence_analysis} indicates, the actual tuning parameters are only the constant learning rate $\eta$, the strong convexity parameter $\mu$ and $\gamma$, and the other parameters are theoretically determined. This means that the additional tuning parameter compared to S-SGD-EF is essentially only the strong convexity parameter $\mu$. Practically, fixing $\gamma = 0.5 \times k/d$ works well. \end{rem} \section{Convergence Analysis}\label{sec: convergence_analysis} In this section, we provide the convergence analysis of our proposed S-SNAG-EF. Due to space limitations, we give the analysis of S-SGD-EF in Section \ref{sec: analysis_s_sgd_ef} of the supplementary material. For convex cases, we always assume the strong convexity of the objective in this paper\footnotemark. \footnotetext{For non-strongly convex cases, we can immediately derive the convergence rate from the one for strongly convex cases by taking the standard dummy regularizer approach, and we omit it here.} \par Let $m_t$ be the mean of the cumulative compression errors of all the nodes at the $t$-th iteration, i.e., $m_t = (1/P)\sum_{p=1}^P m_{t, p}$. We use the $\widetilde{O}$ notation to hide additional logarithmic factors for simplicity. For the proofs of the statements, see Section \ref{sec: analysis_s_snag_ef} of the supplementary material. \par The following proposition holds for a strongly convex objective $F$. \begin{prop}[Strongly convex]\label{prop: s_snag_ef_final_obj_gap_bound} Suppose that Assumptions \ref{assump: sol_existence}, \ref{assump: smoothness}, \ref{assump: bounded_variance} and \ref{assump: strong_convexity} hold. Let $\eta_t = \eta \leq 1/(2L)$, $\lambda_t = \lambda = (1/2)\sqrt{\eta/\mu}$, $\alpha_t = \alpha = \lambda\mu/(2 + \lambda\mu)$ and $\beta_t = \beta = \lambda\mu/(1 + \lambda\mu)$. Then S-SNAG-EF satisfies \begin{align*} \mathbb{E}[F(x_{\mathrm{out}}) - F(x_*)] \leq&\ \Theta\bigg(\mu(1-\sqrt{\eta\mu})^T\|x_0 - x_*\|^2 + \sqrt{\frac{\eta}{\mu}}\frac{\mathcal{V}^2}{P} \\ &+ \sum_{t=1}^T(1-\sqrt{\eta\mu})^{T-t}\left(\lambda L^2\mathbb{E}\|m_{t-1}\|^2 - \eta \mathbb{E}\|\nabla F(x_{t-1})\|^2\right) + L\mathbb{E}\|m_{T-1}\|^2\bigg), \end{align*} where $x_{\mathrm{out}} = x_{T-1}$. \end{prop} \begin{rem} The first, deterministic error term scales as $(1 - \sqrt{\eta\mu})^T$ rather than $(1 - \eta\mu)^T$ thanks to the acceleration scheme, at the expense of a $1/\sqrt{\eta\mu}$-times larger stochastic error (the second term) than that of vanilla SGD. This evokes the bias-variance trade-off in the rate of vanilla accelerated SGD. The third and last terms are the compression error caused by the gradient sparsification. \end{rem} The compression error terms are bounded by the following proposition. \begin{prop}\label{prop: s_snag_ef_m_t_bound} Suppose that Assumption \ref{assump: bounded_variance} holds. Let $\gamma = \Theta(k/d)$ and $\beta_t \leq \Theta(\gamma^3/\alpha_t^2)$ be sufficiently small, and let $\{\alpha_t\}$ be monotonically non-increasing. Then S-SNAG-EF satisfies \begin{align*} \mathbb{E}\|m_t\|^2 \leq \Theta\left(\sum_{t'=1}^t\frac{(\eta_{t'}^2 + (\alpha_{t'}^2/\gamma^2)\lambda_{t'}^2)d}{kP}(1 - \gamma)^{t - t'}(\mathcal{V} + \mathbb{E}\|\nabla F(x_{t'-1})\|^2)\right). \end{align*} \end{prop} \begin{rem} This proposition shows that the cumulative compression error is bounded even if $t \to \infty$. This is the key property for obtaining the asymptotic rate of vanilla SGD. Also note that the cumulative compression error has a factor $1/P$, which does not arise in any previous analysis. \end{rem} Combining Propositions \ref{prop: s_snag_ef_final_obj_gap_bound} and \ref{prop: s_snag_ef_m_t_bound} yields the following theorem. \begin{thm}[Strongly convex]\label{thm: s_snag_ef_strongly_convex} Suppose that Assumptions \ref{assump: sol_existence}, \ref{assump: smoothness}, \ref{assump: bounded_variance} and \ref{assump: strong_convexity} hold. Let $\lambda_t$, $\alpha_t$ and $\beta_t$ be the same as in Proposition \ref{prop: s_snag_ef_final_obj_gap_bound}, and let $\gamma = \Theta(k/d)$ be sufficiently small. Then the iteration complexity $T$ of S-SNAG-EF with appropriate $\eta_t = \eta$ to achieve $\mathbb{E}[F(x_{\mathrm{out}}) - F(x_*)] \leq \varepsilon$ is \begin{align} \widetilde O&\left( \sqrt{\frac{L}{\mu}} + \frac{\mathcal{V}}{P}\frac{1}{\mu\varepsilon} + \frac{d}{k} + \frac{d^{\frac{3}{2}}}{k^{\frac{3}{2}}\sqrt{P}}\sqrt{\frac{L}{\mu}} + \frac{d^{\frac{4}{3}}}{k^{\frac{4}{3}}P^{\frac{1}{3}}}\frac{L^{\frac{2}{3}}}{\mu^{\frac{2}{3}}} + \frac{d}{kP^{\frac{1}{4}}}\frac{(L^2\mathcal{V})^{\frac{1}{4}}}{\mu^{\frac{3}{4}}\varepsilon^{\frac{1}{4}}}\right), \label{iter_comp: s_snag_ef} \end{align} where $x_{\mathrm{out}} = x_T$. \end{thm} \begin{rem} In contrast, non-accelerated S-SGD-EF only achieves \begin{align*} \widetilde{O}\left( \frac{L}{\mu} + \frac{\mathcal{V}}{P}\frac{1}{\mu\varepsilon} + \frac{d}{k} + \frac{d}{k\sqrt{P}}\left(\frac{L}{\mu} + \frac{\sqrt{L\mathcal{V}}}{\mu\sqrt{\varepsilon}}\right) \right). \end{align*} \end{rem} {\bf{Asymptotic View: Same Rate as Non-compressed SGD}}\ \ \ As $\varepsilon \to 0$, the asymptotic rate of S-SNAG-EF becomes $1/(\mu\varepsilon)$, which is the convergence rate of vanilla SGD. Several previous methods, as well as S-SGD-EF, also possess this property. \par {\bf{Non-asymptotic View: Less Compression Error than Non-accelerated Methods}} \ \ \ The compression error (third to last terms in Eq.~(\ref{iter_comp: s_snag_ef})) has a better dependence on $\varepsilon$ than that of S-SGD-EF. 
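To make this concrete, the following short Python sketch (ours; all problem constants $L$, $\mathcal{V}$, $D$ set to one and logarithmic factors dropped) evaluates the compression ratios $k/d$ from Table \ref{table: commu_cost_comparison} in the strongly convex case:
\begin{verbatim}
import numpy as np

# Compression ratios k/d that maintain the vanilla SGD rate
# (strongly convex case, constants dropped; illustrative only).
P, mu = 100, 1e-2
eps = np.logspace(-1, -6, 6)

k_ssgd = np.minimum(np.sqrt(P * eps), 1.0)                       # S-SGD-EF
k_snag = np.minimum((P * mu) ** (1 / 3) * eps ** (2 / 3)
                    + (P ** 3 * mu) ** (1 / 4) * eps ** (3 / 4),
                    1.0)                                         # S-SNAG-EF

for e, a, b in zip(eps, k_ssgd, k_snag):
    print(f"eps={e:.0e}:  S-SGD-EF k/d={a:.3g},  S-SNAG-EF k/d={b:.3g}")
\end{verbatim}
For moderate accuracies, the accelerated method tolerates a markedly smaller $k/d$.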
As a result, the necessary number of communicated components $k$ to maintain the vanilla SGD rate $1/(\mu\varepsilon)$ is $(\widetilde O(P^{1/3}\mu^{1/3}\varepsilon^{2/3}+P^{3/4}\mu^{1/4}\varepsilon^{3/4}) \wedge 1)d$, which can be much better than that of S-SGD-EF, $(O(P^{1/2}\sqrt\varepsilon)\wedge 1 )d$, for moderate $\varepsilon > 0$. Here, for simple comparison, we assume that $\varepsilon \leq 1/P$.\par \section{Extension to Nonconvex Cases} In this section, we briefly discuss an extension of S-SNAG-EF (Algorithm \ref{alg: s_snag_ef}) to nonconvex cases. Unfortunately, Algorithm \ref{alg: s_snag_ef} has no theoretical guarantee for general nonconvex cases. Hence, we adopt the standard recursive regularization scheme to resolve this problem. Specifically, Algorithm \ref{alg: reg_s_snag_ef} repeatedly minimizes the ``regularized'' objective $F + \sigma Q(x_{s-1})$ by using S-SNAG-EF, where $Q(x_{s-1})(x) = \|x - x_{s-1}\|^2$ and $x_{s-1}$ is the current solution. This means that the objective function is convexified by $L_2$-regularization around the current solution, and the regularized objective is minimized by S-SNAG-EF at each iteration. We call this algorithm Reg-S-SNAG-EF (Algorithm \ref{alg: reg_s_snag_ef}). \begin{algorithm}[H] \label{alg: reg_s_snag_ef} \caption{Reg-S-SNAG-EF ($F$, $x_{\mathrm{in}}$, $\{\eta_t, \lambda_t, \alpha_t, \beta_t \}_{t=1}^{\infty}$, $\gamma$, $k$, $T$, $\sigma$, $S$)} \begin{algorithmic} \STATE Set: $x_0 = x_{\mathrm{in}}$. \FOR {$s=1$ to $S$} \STATE Run: $x_s = $ S-SNAG-EF($F + \sigma Q(x_{s-1})$, $x_{s-1}$, $\{\eta_t, \lambda_t, \alpha_t, \beta_t \}_{t=1}^{\infty}$, $\gamma$, $k$, $T$) \ENDFOR \ENSURE $x_{\hat s}$. \end{algorithmic} \end{algorithm} \begin{thm}[General nonconvex]\label{thm: reg_s_snag_ef_general_nonconvex} Suppose that Assumptions \ref{assump: sol_existence}, \ref{assump: smoothness} and \ref{assump: bounded_variance} hold. Let $\sigma = L$, let $\lambda_t$, $\alpha_t$, $\beta_t$ and $\gamma$ be the same as in Theorem \ref{thm: s_snag_ef_strongly_convex} (with $\mu \leftarrow \sigma$), and let $T = \widetilde\Theta(1/\sqrt{\eta L})$ and $S = \Theta(1+L\Delta/\varepsilon)$ be sufficiently large. Then the iteration complexity $ST$ of Reg-S-SNAG-EF with appropriate $\eta_t = \eta$ for achieving $\mathbb{E}\|\nabla F(x_{\mathrm{out}})\|^2 \leq \varepsilon$ is \begin{align*} \widetilde O&\left( \frac{L\Delta}{\varepsilon} + \frac{\mathcal{V}}{P}\frac{L\Delta}{\varepsilon^2} + \left(\frac{d}{k} + \frac{d^{\frac{3}{2}}}{k^{\frac{3}{2}}\sqrt{P}} + \frac{d^{\frac{4}{3}}}{k^{\frac{4}{3}}P^{\frac{1}{3}}}\right)\frac{L\Delta}{\varepsilon} + \frac{d}{kP^{\frac{1}{4}}}\frac{L\mathcal{V}^{\frac{1}{4}}\Delta}{\varepsilon^{\frac{5}{4}}}\right), \end{align*} where $x_{\mathrm{out}} = x_{\hat s}$ and $\hat s$ is drawn uniformly from $[S]$. \end{thm} \begin{rem} In contrast, S-SGD-EF only achieves \begin{align*} O\left( \frac{L\Delta}{\varepsilon} + \frac{\mathcal{V}}{P}\frac{L\Delta}{\varepsilon^2} + \frac{d}{k} + \frac{d}{k\sqrt{P}} \left(\frac{L\Delta}{\varepsilon} + \frac{L\sqrt{\mathcal{V}}\Delta}{\varepsilon^{\frac{3}{2}}}\right)\right). \end{align*} From Theorem \ref{thm: reg_s_snag_ef_general_nonconvex}, we can see that even in nonconvex cases, acceleration can be beneficial. Indeed, the compression error terms (third and fourth terms) have a better dependence on $\varepsilon$ than those of S-SGD-EF. 
\end{rem} \section{Numerical Experiments} In this section, we provide numerical experiments to demonstrate the performance of our methods. \par {\bf{Experimental settings}}\ \ \ We conducted standard $L_2$-regularized logistic regression for multi-class classification on the publicly available CIFAR 10 dataset\footnotemark. \footnotetext{\url{https://www.cs.toronto.edu/~kriz/cifar.html}.} The regularization parameter was set to $10^{-4}$. We normalized each channel of the images to have mean and standard deviation $0.5$. We compared our proposed S-SNAG-EF with non-compressed SGD, sparsified SGD without error feedback, top-$k$ sparsified SGD with error feedback and S-SGD-EF. We implemented all the algorithms in a pseudo-distributed setting on a single node. In our experiments, the number of processors $P$ ranged in $\{10, 100\}$ and the compression ratio $k/d$ in $\{0.01, 0.1\}$. We fairly tuned all the hyperparameters\footnotemark. \footnotetext{For non-compressed SGD, sparsified SGD, top-$k$ SGD with error feedback and S-SGD-EF, we only tuned the learning rate $\eta$, which ranged in $\{10^{-i} \mid i \in \{1, \ldots, 6\}\}$. For S-SNAG-EF, we additionally tuned the strong convexity parameter $\mu \in \{10^{-i} \mid i \in \{1, \ldots, 4\}\}$.} We independently ran each experiment four times and report the mean and standard deviation of the train and test loss and accuracy against the number of iterations. \par \begin{figure*}[t] \begin{subfigmatrix}{4} \subfigure[$P=10$, $k/d=0.01$ ]{\includegraphics[width=3.4cm]{figs/num_processors10_comp_ratio0.01/train_loss.png}} \subfigure[$P=10$, $k/d=0.01$ ]{\includegraphics[width=3.4cm]{figs/num_processors10_comp_ratio0.01/train_acc.png}} \subfigure[$P=10$, $k/d=0.01$]{\includegraphics[width=3.4cm]{figs/num_processors10_comp_ratio0.01/test_loss.png}} \subfigure[$P=10$, $k/d=0.01$ ]{\includegraphics[width=3.4cm]{figs/num_processors10_comp_ratio0.01/test_acc.png}} \end{subfigmatrix} \begin{subfigmatrix}{4} \subfigure[$P=10$, $k/d=0.1$]{\includegraphics[width=3.4cm]{figs/num_processors10_comp_ratio0.1/train_loss.png}} \subfigure[$P=10$, $k/d=0.1$]{\includegraphics[width=3.4cm]{figs/num_processors10_comp_ratio0.1/train_acc.png}} \subfigure[$P=10$, $k/d=0.1$]{\includegraphics[width=3.4cm]{figs/num_processors10_comp_ratio0.1/test_loss.png}} \subfigure[$P=10$, $k/d=0.1$ ]{\includegraphics[width=3.4cm]{figs/num_processors10_comp_ratio0.1/test_acc.png}} \end{subfigmatrix} \begin{subfigmatrix}{4} \subfigure[$P=100$, $k/d=0.01$]{\includegraphics[width=3.4cm]{figs/num_processors100_comp_ratio0.01/train_loss.png}} \subfigure[$P=100$, $k/d=0.01$]{\includegraphics[width=3.4cm]{figs/num_processors100_comp_ratio0.01/train_acc.png}} \subfigure[$P=100$, $k/d=0.01$]{\includegraphics[width=3.4cm]{figs/num_processors100_comp_ratio0.01/test_loss.png}} \subfigure[$P=100$, $k/d=0.01$ ]{\includegraphics[width=3.4cm]{figs/num_processors100_comp_ratio0.01/test_acc.png}} \end{subfigmatrix} \begin{subfigmatrix}{4} \subfigure[$P=100$, $k/d=0.1$]{\includegraphics[width=3.4cm]{figs/num_processors100_comp_ratio0.1/train_loss.png}} \subfigure[$P=100$, $k/d=0.1$]{\includegraphics[width=3.4cm]{figs/num_processors100_comp_ratio0.1/train_acc.png}} \subfigure[$P=100$, $k/d=0.1$]{\includegraphics[width=3.4cm]{figs/num_processors100_comp_ratio0.1/test_loss.png}} \subfigure[$P=100$, $k/d=0.1$]{\includegraphics[width=3.4cm]{figs/num_processors100_comp_ratio0.1/test_acc.png}} \end{subfigmatrix} \caption{Comparisons of our methods with existing methods on $L_2$-regularized logistic regression tasks on CIFAR 10 for 
different numbers of processors $P$ and gradient compression ratios $k/d$. The first, second, third and last columns depict the comparison of train loss, train accuracy, test loss and test accuracy, respectively, for each $(P, k/d)$ setting. } \label{fig: comparison} \end{figure*} {\bf{Results}}\ \ \ Figure \ref{fig: comparison} shows the comparisons of our proposed S-SNAG-EF with the previous methods and S-SGD-EF. In the cases $(P, k/d)=(10, 0.01)$ and $(100, 0.1)$, S-SNAG-EF significantly outperformed the other methods except non-compressed SGD. In the cases $(P, k/d)=(10, 0.1)$ and $(100, 0.01)$, S-SGD-EF showed the best performance except for vanilla SGD. The convergence of S-SNAG-EF was initially slow, and this is perhaps a reason why S-SNAG-EF was outperformed by S-SGD-EF in some cases. The performance of top-$k$ SGD with error feedback was unstable, and it did not converge, particularly for large $P$. \section{Conclusion} In this paper, we considered an accelerated sparsified SGD with error feedback (S-SNAG-EF) in parallel computing settings. We gave a theoretical analysis of S-SNAG-EF and showed that our proposed algorithm achieves (i) asymptotic linear speedup with respect to the number of nodes, and (ii) a lower communication cost for maintaining the rate of vanilla SGD than non-accelerated methods, thanks to Nesterov's acceleration. We also gave a better analysis of the non-accelerated S-SGD-EF than previous work by fully utilizing the unbiasedness of the sparsification. In numerical experiments, we compared our methods with several previous methods, and our methods showed comparable or better performance. \section*{Acknowledgement} TS was partially supported by JSPS KAKENHI (18K19793, 18H03201, and 20H00576), Japan DigitalDesign, and JST CREST.
\section{Optical and mechanical design} \label{sec:appA} The optomechanical crystal (OMC) studied in this work is numerically optimized for both optomechanical coupling and optical/acoustic quality factor, via finite-element method (FEM) simulation in COMSOL Multiphysics~\cite{COMSOL}, according to the procedure outlined in Ref.~\cite{Chan2012}. The holes on the ends of the beam support simultaneous bandgaps for optical light with wavelengths near $1550$~nm and acoustic waves with frequencies from $3-5.5$~GHz, while the variation of the holes towards the center of the beam perturbs the bandgaps so as to create co-localized optical and acoustic midgap resonances. The fundamental optical mode (Fig.~\ref{Sfig:modes}a) has a nominal wavelength of $1545$~nm and the fundamental acoustic mode (Fig.~\ref{Sfig:modes}b) has a nominal resonance frequency of $5.1$~GHz. The nominal optomechanical vacuum coupling rate, due predominantly to photoelastic effects, is predicted to be $g_{\text{0}}/2\pi = 860$~kHz. Physically, this coupling rate is the optical resonance frequency shift due to the zero-point fluctuations of the acoustic resonator. \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{./FigureS1.pdf} \caption{\textbf{FEM simulations.} \textbf{a}, Electric field $E_{\text{y}}$ component of the optical mode at frequency $\omega_{\text{c}}/2\pi = 194$~THz (polarization in the plane of the page and transverse to the long axis of the nanobeam). \textbf{b} Displacement field of the mechanical breathing mode at frequency $\omega_{\text{m}}/2\pi = 5.1$~GHz.} \label{Sfig:modes} \end{center} \end{figure} \section{Fabrication} \label{sec:appB} The devices are fabricated from a silicon-on-insulator (SOI) wafer (SOITEC, $220$~nm device layer, $3 \mu$m buried oxide) using electron beam lithography followed by reactive ion etching (RIE/ICP). The Si device layer is then masked using a ProTEK PSB photoresist to define a mesa region of the chip that a tapered lensed fiber can access. Outside of the protected mesa region, the buried oxide is removed with a plasma etch and a trench is formed in the underlying silicon substrate using tetramethylammonium hydroxide (TMAH). The devices are then released in hydrofluoric acid ($49\%$ aqueous HF solution) and cleaned in a piranha solution ($3$-to-$1$ H$_2$SO$_4$:H$_2$O$_2$) before a final hydrogen termination in diluted HF. In fabrication, arrays of the nominal design shown in Fig.~\ref{Sfig:modes} are scaled by $\pm2\%$ to account for frequency shifts due to fabrication imperfections and disorder. \section{Experimental setup} \label{sec:appC} The full experimental setup for phonon counting and intensity interferometry is shown in Fig.~\ref{Sfig:setup}. A fiber-coupled, wavelength tunable external cavity diode laser is used as the light source, with a small portion of the laser output sent to a wavemeter ($\lambda$-meter) for frequency stabilization. The remaining laser power is sent through an electro-optic phase modulator ($\phi$-m), used to generate optical sidebands for locking the filter cavities, and a variable optical attenuator (VOA) to allow control of the input power to the cavity. The signal is then sent into an optical circulator, which sends the optical probe to a lensed fiber tip for end-fire coupling to the device. The cavity reflection can then be switched to one of two detection setups. 
The first allows the signal to be switched to a power meter (PM) for measuring the reflected signal power or to an erbium-doped fiber amplifier (EDFA) followed by a high-speed photodetector (PD). The resulting photocurrent can be sent to a real-time spectrum analyzer (RSA) in order to measure the noise power spectral density (NPSD) of the optical signal or to a vector network analyzer (VNA) which can be used to measure the full complex response of the optical cavity. The second detection setup sends the cavity reflection through a series of narrowband tunable Fabry-Perot filters ($\sim50$~MHz bandwidth, $\sim20$~GHz free-spectral range) in order to reject the pump frequency. The signal then travels through a variable coupler (VC) and is sent to the dilution refrigerator, where it is detected by two superconducting single photon detectors (SPDs). The output of these detectors is sent to a time-correlated single photon counting (TCSPC) module for calculation of the detection correlation function. \begin{figure}[btp] \begin{center} \includegraphics[width=\columnwidth]{./FigureS2.pdf} \caption{\textbf{Experimental setup.} Phonon counting setup. $\lambda$-meter: wavemeter, FPC: fiber polarization controller, $\phi$-m: electro-optic phase modulator, VOA: variable optical attenuator, SW: optical switch, PM: optical power meter, EDFA: erbium-doped fiber amplifier, PD: fast photodiode, RSA: real-time spectrum analyzer, VNA: vector network analyzer, VC: variable coupler, SPD: superconducting single photon detector, TCSPC: time-correlated single photon counting.} \label{Sfig:setup} \end{center} \end{figure} Since the pump laser is tuned to a motional sideband during the phonon counting measurement, the two Fabry-Perot filters used in this work must be tuned to the optical cavity resonance via an initial lock and stabilization procedure. This procedure is as follows. First, the pump wavelength is tuned to the blue or red OMC sideband by optimizing the mechanical transduction signal on a spectrum analyzer. Since the power of the radiated Stokes- or anti-Stokes-scattered light is too low to provide a feedback signal for filter stabilization, we then bypass the OMC and phase-modulate the pump at frequency $\omega_{\text{m}}$. The filters are then locked to maximize transmission of the sideband which is resonant with the cavity. After a stabilization period of a few seconds, the filter positions are held without further feedback while the pump modulation is turned off, the pump power is adjusted for the desired $n_{\text{c}}$, and the OMC is switched back into the optical path. Once locked, the transmission of the filters is observed to be stable to within $5-10\%$ for several minutes in the absence of active feedback locking. In order to avoid pile-up artifacts~\cite{Becker2005} in the acquired $g^{(2)}(\tau)$ histograms presented in the main text, the photon count rate incident upon the SPDs is kept at or below $30$~kHz. This is accomplished with a variable optical attenuator on the output of the filters, and is sufficient to maintain a flat histogram background over a $5$~$\mu$s window. The absolute count rate reported in Fig.~\ref{fig:count_rate}a takes the variable attenuation into account. 
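As an aside, the $g^{(2)}(\tau)$ histograms are, in essence, histograms of the time differences between detection events on the two SPDs. A minimal Python sketch of this start--stop histogramming (our own simplification; the TCSPC module performs this in hardware) is:
\begin{verbatim}
import numpy as np

def g2_histogram(t1, t2, window=5e-6, bin_width=10e-9):
    # Histogram the delays t2 - t1 within +/- window seconds,
    # for sorted timestamp arrays t1, t2 (in seconds).
    edges = np.arange(-window, window + bin_width, bin_width)
    counts = np.zeros(edges.size - 1)
    lo = 0
    for t in t1:
        while lo < t2.size and t2[lo] < t - window:
            lo += 1
        hi = lo
        while hi < t2.size and t2[hi] <= t + window:
            hi += 1
        counts += np.histogram(t2[lo:hi] - t, bins=edges)[0]
    return edges, counts
\end{verbatim}
Keeping the count rate low, as described above, ensures that this histogram has the flat background from which $g^{(2)}(\tau)$ can be normalized.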
\section{Device characterization} \label{sec:appD} Full characterization of the optical resonance involves determination of the single pass fiber-to-waveguide coupling efficiency $\eta_{\text{cpl}}$, the total energy decay rate $\kappa$, and the waveguide-cavity coupling efficiency $\eta_\kappa = \kappa_{\text{e}}/\kappa$ ($\kappa_{\text{e}}$ is the decay rate into the detection channel). The fiber collection efficiency is determined by observing the calibrated reflection level far-off resonance with the cavity and is found to be $\eta_\text{cpl} = 0.63$. The total cavity decay rate is determined by fitting the optical reflection spectrum of the cavity (Fig.~\ref{Sfig:optical_scan}a) and yields $\kappa/2\pi = 818$~MHz (optical quality factor $Q_{\text{c}} = 236,000$). The reflection level on resonance, when normalized to the off-resonance reflection level, is related to the cavity-waveguide coupling efficiency by $R_0 = (1-2\eta_\kappa)^2$. However, for single-sided coupling this is not a single-valued function of the coupling efficiency. Consequently, the complex response of the cavity is measured by locking the laser off-resonance from the cavity and using a vector network analyzer (VNA) to drive an electro-optic modulator (EOM) and sweep an optical sideband across the cavity. By detecting the reflected power on a high-speed photodetector connected to the VNA input, the phase response of the cavity can be measured (Fig.~\ref{Sfig:optical_scan}b). Fitting this with prior knowledge of the cavity resonance frequency and decay rate yields $\eta_\kappa = 0.52$. \begin{figure}[btp] \begin{center} \includegraphics[width=\columnwidth]{./FigureS3.pdf} \caption{\textbf{Optical cavity response.} \textbf{a}, Optical reflection spectrum of the cavity resonance versus cavity detuning $\Delta = \omega_{\text{c}}-\omega_{\text{l}}$ (blue) with a Lorentzian fit (red) yielding $Q_{\text{c}} = 236,000$. \textbf{b}, Phase response of the optical resonance, yielding $\kappa_{\text{e}}/\kappa = 0.52$.} \label{Sfig:optical_scan} \end{center} \end{figure} To characterize the acoustic resonance, the cavity reflection is sent through an erbium-doped fiber amplifier (EDFA) and detected on a high-speed photodetector. The EDFA is used to amplify the signal so that the optical noise floor overcomes the detector's electronic noise, and the noise power spectral density (NPSD) of the optical reflection is measured on a real-time spectrum analyzer (RSA), where a Lorentzian response due to transduction of the acoustic thermal Brownian motion can be observed at the acoustic resonant frequency $\omega_{\text{m}}/2\pi = 5.6$~GHz. For a pump laser locked onto the red or blue mechanical sideband of the cavity ($\Delta = \omega_{\text{c}} - \omega_{\text{l}} = \pm \omega_{\text{m}}$) the linewidth of the transduced signal is given by $\gamma = \gamma_{\text{i}} \pm \gamma_{\text{OM}}$, where $\gamma_{\text{OM}} = 4 g_{\text{0}}^2 n_{\text{c}} / \kappa$ ($n_{\text{c}}$ is the steady state intracavity photon number) and the upper (lower) sign corresponds to red (blue) detuning. The dependence of the linewidth for both detunings versus $n_{\text{c}}$ is shown in Fig.~\ref{Sfig:gammaRSA}. By averaging the two sets of data we can extract the intrinsic acoustic damping rate $\gamma_{\text{i}}/2\pi = 3$~MHz ($Q_{\text{m}} = 1850$), which is seen to remain constant as a function of $n_{\text{c}}$. By fitting the excess optomechanically induced damping $\gamma_{\text{OM}}$ as a function of $n_{\text{c}}$ we extract a coupling rate of $g_{\text{0}} = 645$~kHz. 
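The linear fit in the inset of Fig.~\ref{Sfig:gammaRSA} is straightforward to reproduce. A minimal sketch (with hypothetical, purely illustrative values for the data arrays \texttt{nc} and \texttt{gamma\_om}) is:
\begin{verbatim}
import numpy as np

# Hypothetical data: intracavity photon number and the corresponding
# optomechanically induced damping gamma_OM/2pi (Hz); values illustrative.
nc = np.array([1e2, 3e2, 1e3, 3e3])
gamma_om = 2.03e3 * nc

kappa = 2 * np.pi * 818e6            # measured cavity decay rate (rad/s)

# gamma_OM = 4 g0^2 nc / kappa  =>  slope of the fit = 4 g0^2 / kappa
slope = np.linalg.lstsq(nc[:, None], 2 * np.pi * gamma_om, rcond=None)[0][0]
g0 = np.sqrt(slope * kappa / 4) / (2 * np.pi)
print(f"g0 ~ {g0 / 1e3:.0f} kHz")    # ~644 kHz for these synthetic values
\end{verbatim}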
\begin{figure}[btp] \begin{center} \includegraphics[width=\columnwidth]{./FigureS4.pdf} \caption{\textbf{Calibration of $g_{\text{0}}$.} Mechanical linewidth $\gamma$ versus intracavity photon number $n_{\text{c}}$ for $\Delta = \omega_{\text{m}}$ (red) and $\Delta = -\omega_{\text{m}}$ (blue). The intrinsic linewidth of the acoustic resonator $\gamma_{\text{i}}$ (black) is determined by averaging the blue detuned data and yields $Q_{\text{m}} = 1850$. The inset shows the optomechanically induced damping $\gamma_{\text{OM}}$, obtained by subtracting $\gamma_{\text{i}}$ from $\gamma$, versus $n_{\text{c}}$. The linear fit shown in red yields a vacuum optomechanical coupling rate of $g_{\text{0}} = 645$~kHz.}\label{Sfig:gammaRSA} \end{center} \end{figure} \section{Single photon detectors} \label{sec:appE} The detectors used in this work are amorphous WSi-based superconducting nanowire single-photon detectors (SNSPDs, or SPDs hereafter) developed in collaboration between the Jet Propulsion Laboratory and NIST. The SPDs are designed for high-efficiency detection of individual photons in the wavelength range $\lambda = 1520 - 1610$~nm with maximum count rates of about $25 \times 10^{6}$~counts per second (c.p.s.; reset time $t_R = 40$~ns)~\cite{Marsili2013} and very low dark count rates (DCRs). With the SPD in the superconducting state (below its critical temperature $T_\text{c} = 3.7$~K), a DC bias current $I_\text{b}$ of a few microamps is maintained through the nanowire by an external current source. The operating range of the detector lies between a lower cutoff current and an upper switching current $I_\text{sw}$, above which the nanowire switches to a non-superconducting state. The choice of quiescent operating current $I_\text{b}$ for each detector was made to roughly maximize the ratio of the quantum efficiency $\eta_\text{SPD}$ to DCR while operating within the ``plateau'' region where both $\eta_\text{SPD}$ and the DCR are nearly constant as $I_\text{b}$ is adjusted slightly. The SPDs are mounted on the still stage of a $^{3}$He/$^{4}$He dilution refrigerator at $700$~mK. Single-mode optical fibers (Corning SMF-28) are passed into the refrigerator through vacuum feed-throughs and coupled to the SPDs via a fiber sleeve attached to each SPD mount. Proper alignment of the incoming fiber with the $15$ $\mu$m $\times$ $15$ $\mu$m square area of the SPD nanowire is ensured by a self-aligned mounting system incorporated into the design of the SPD~\cite{Marsili2013}. The radio-frequency output of each SPD is amplified by a cold-amplifier mounted on the $50$~K stage of the refrigerator as well as a room-temperature amplifier, then read out by a triggered PicoQuant PicoHarp 300 time-correlated single photon counting module. The counting module is triggered by input pulses reaching a voltage above a fixed discriminator value $V_\text{d}$. Amplified photon-detection pulse heights of $150 - 250$~mV are typical, and corresponding discriminator values in the range $110 - 150$~mV were chosen for each SPD by measuring nominal count rates as a function of $V_\text{d}$ and choosing an operating value of $V_\text{d}$ to be near the center of the plateau region in which the observed count rates are independent of small changes in the discriminator setting. Initial characterization of the SPDs was centered on measuring dark count rates and the quantum efficiencies of the detectors. 
The measured DCRs are sensitive to various channels by which stray light may couple into the fiber-detector system, including ambient laboratory lighting and thermal radiation both inside and outside the refrigerator. By tightly spooling ($\sim1.5$~inch diameter) the optical fiber within the fridge to filter out long wavelength blackbody radiation and systematically isolating the optical fiber from environmental light sources we have achieved DCRs of $2-4$~c.p.s. Quantum efficiency measurements of the SPDs were made using laser light of $\lambda = 1554$~nm attenuated to an input power of $1.53$~fW at the input to the fridge, corresponding to $N \approx 12,000$ incoming photons per second. We calculate $\eta_\text{SPD}$ by referring the detected photon count rate (less the corresponding known DCR) to the nominal input flux $N$. This efficiency incorporates the intrinsic detection efficiency of the SPDs as well as any losses in the fiber run within the fridge and in the coupling between the fiber and the SPD itself. At just below the respective switching currents of the detectors, we find $\eta_\text{SPD} = 70\%$, with this result depending on photon polarization ($\lesssim 20\%$ variability). \section{Phonon counting sensitivity} \label{sec:appF} For sufficiently weak optomechanical coupling ($g_{\text{0}} \ll \kappa$) and small mechanical amplitude, the equations of motion for the optomechanical system can be linearized about a large steady state optical field amplitude. For a sideband resolved system ($\kappa/2 \ll \omega_{\text{m}}$) and a red-detuned pump ($\Delta \approx \omega_{\text{m}}$) the output optical field may then be written in the Fourier domain (in a frame rotating at the pump frequency) as~\cite{Safavi-Naeini2013a} \begin{eqnarray} \hat{a}_\text{out}(\omega) & = & \left(1-\frac{\kappa_{\text{e}}}{i(\Delta-\omega)+\kappa/2} \right) \hat{a}_\text{in}(\omega) \notag \\ &&-\frac{\sqrt{\kappa_{\text{e}}\kappa_{\text{i}}}}{i(\Delta-\omega)+\kappa/2} \hat{a}_\text{i}(\omega) \notag \\ &&-i\frac{\sqrt{\kappa_{\text{e}} n_{\text{c}}}g_{\text{0}}}{i(\Delta-\omega)+\kappa/2} \hat{b}(\omega), \end{eqnarray} \noindent where $\hat{a}_\text{in}(\omega) = \alpha \delta(\omega) + \hat{a}_\text{vac}(\omega)$ ($\alpha$ is the steady-state optical field at the pump frequency, $\hat{a}_\text{vac}(\omega)$ is the vacuum noise of the pump), $\kappa_{\text{i}} = \kappa-\kappa_{\text{e}}$ is the intrinsic loss rate of the optical cavity, $\hat{a}_\text{i}(\omega)$ is additional vacuum noise admitted via the intrinsic loss channels, and $\hat{b}(\omega)$ is the annihilation operator for the acoustic resonator. Note that $\hat{b}^{\dagger}(\omega)$ takes the place of $\hat{b}(\omega)$ for a blue-detuned pump ($\Delta \approx -\omega_{\text{m}}$). As $\hat{b}(\omega)$ is sharply peaked around $\omega = \omega_{\text{m}}$, we can spectrally filter out the strong optical pump at $\omega = 0$. The additional optical noise, assumed to be white Gaussian noise, cannot be filtered out in this way. However, in the case that the optical noise is pure vacuum noise it will not contribute to any photon counting events. 
Thus, for the purposes of photon counting, the output optical field can be written post-filtering as \begin{equation} \hat{a}_\text{out}(t) \approx \frac{2 \sqrt{\kappa_{\text{e}} n_{\text{c}}} g_{\text{0}}}{\kappa} \hat{b}(t) = \sqrt{\frac{\kappa_{\text{e}}}{\kappa}} \sqrt{\gamma_{\text{OM}}} \hat{b}(t), \end{equation} \noindent which shows explicitly that in this linearized regime photon counting is equivalent to phonon counting ($\langle \hat{a}^{\dagger}_\text{out} \hat{a}_\text{out} \rangle \propto \langle \hat{b}^{\dagger} \hat{b} \rangle$). As can be seen in the above equation, the optically induced acoustic damping rate $\gamma_{\text{OM}}$ physically represents the per-phonon rate of sideband photon emission, corresponding to phonon absorption (emission) for $\Delta = \omega_{\text{m}}$ ($\Delta = -\omega_{\text{m}}$). Of the sideband photons emitted into the optical cavity, a fraction $\kappa_{\text{e}}/\kappa$ are subsequently emitted into the detection channel and detected with overall system efficiency $\eta$, which includes both the detection efficiency of the SPDs and the insertion loss along the path from cavity to detector. The count rate per phonon is thus given by $\Gamma_\text{SB,0}= \eta (\kappa_{\text{e}}/\kappa) \gamma_{\text{OM}}$, and the total count rate is given by \begin{equation} \Gamma_\text{tot} = \Gamma_\text{SB,0} \langle n \rangle + \Gamma_\text{pump} + \Gamma_\text{dark}, \end{equation} \noindent where $\langle n \rangle$ is the average phonon occupancy of the acoustic resonator, $\Gamma_\text{pump}$ is the count rate due to residual pump transmission through the filters, and $\Gamma_\text{dark}$ is the intrinsic dark count rate of the SPD. To assess the sensitivity of the phonon counting measurement, the noise count rate can be divided by the per-phonon sideband count rate to obtain a noise-equivalent phonon number \begin{equation} n_\text{NEP} = n_\text{pump} + n_\text{dark} = \frac{\Gamma_\text{pump}}{\Gamma_\text{SB,0}} + \frac{\Gamma_\text{dark}}{\Gamma_\text{SB,0}}. \end{equation} The dark count rate $\Gamma_\text{dark}$ is simply a measured constant, while $\Gamma_\text{pump} = \eta A \dot{N}_\text{pump}$, where $A$ is the transmission of the filters at the pump frequency relative to the peak transmission, and $\dot{N}_\text{pump}$ is the input photon flux of the pump, which is nearly perfectly reflected from the cavity when the pump is far off-resonance. For a pump detuning from the cavity of $\Delta = \omega_{\text{m}}$, the input photon flux can be related to the intracavity photon number $n_{\text{c}}$ by $\dot{N}_\text{pump} \approx \omega_{\text{m}}^2 n_{\text{c}} / \kappa_{\text{e}}$. Thus, we can write the total noise-equivalent phonon number as \begin{equation} n_\text{NEP} = \frac{\kappa^2 \Gamma_\text{dark}}{4 \eta \kappa_{\text{e}} g_{\text{0}}^2 n_{\text{c}}} + A \left( \frac{\kappa \omega_{\text{m}}}{2 \kappa_{\text{e}} g_{\text{0}}}\right)^2. \end{equation} \section{Oscillation amplitude above threshold} \label{sec:appG} The classical equation of motion for the optical cavity field $\alpha$ is given in the frame rotating at the pump frequency by \begin{equation} \dot{\alpha} = -\left(i \left(\Delta+g_{\text{0}} x\right) +\frac{\kappa}{2}\right) \alpha + \Omega, \end{equation} \noindent where $x$ is the position of the acoustic resonator, $\Omega = \sqrt{\kappa_{\text{e}} P_\text{in} / \hbar \omega_{\text{c}}}$, and $P_\text{in}$ is the optical input power at the device. 
In the regime of parametrically driven self-oscillation, the amplitude of the acoustic oscillator can become large enough that the usual linearization approximation becomes invalid. However, using the ansatz that the mechanical oscillation amplitude is given by $x(t) = \beta \text{sin}(\omega_{\text{m}} t)$ (note that $\beta$ is given in units of the zero-point amplitude, so that $\beta^2 = 4 \langle n \rangle+2$), an exact solution for the optical cavity field can be written as a sum of sidebands. In particular, the equation of motion can be formally integrated to yield \begin{equation} \alpha(t) = \Omega e^{-i z \text{cos}(\omega_{\text{m}} t)} \int_{0}^{\infty} d\tau e^{-\left(i \Delta + \frac{\kappa}{2}\right) \tau} e^{i z \text{cos}(\omega_{\text{m}} (t-\tau))}, \end{equation} \noindent where $z = g_{\text{0}} \beta/\omega_{\text{m}}$. This can be solved exactly by using the Jacobi-Anger expansion $e^{i z \text{cos}(\theta)} = \sum_{n} i^n J_\text{n} (z) e^{i n \theta}$, where $J_\text{n}$ is a Bessel function of the first kind, to yield \begin{align} \alpha(t) & = e^{-i z \text{cos}(\omega_{\text{m}} t)} \sum_n e^{i n \omega_{\text{m}} t} \frac{i^n \Omega J_\text{n} (z)}{\kappa/2+i(\Delta+n\omega_{\text{m}})} \notag \\ & = \sum_n \sum_m e^{i(n-m)\omega_{\text{m}} t} \frac{i^{n-m} \Omega J_\text{n} (z) J_\text{m} (z)}{\kappa/2+i(\Delta+n\omega_{\text{m}})}, \label{eqn:sideband_deriv} \end{align} \noindent which can be reindexed to have the form $\alpha(t) = \sum_n \alpha_n e^{i n \omega_{\text{m}} t}$, with \begin{equation} \alpha_\text{n} = i^n \Omega \sum_m \frac{J_\text{m} (z) J_\text{m-n} (z)}{h_\text{m}}, \label{eqn:sideband_expr} \end{equation} \noindent where $h_\text{m} = \kappa/2 + i(\Delta + m\omega_{\text{m}})$. Note that if we are only interested in the total energy in the cavity, $|\alpha(t)|^2$, the final expansion of the global phase factor in Eqn.~\ref{eqn:sideband_deriv} is unnecessary and a simpler form for $\alpha_\text{n}$ involving no sums may be used~\cite{Marquardt2006,Rodrigues2010}. However, for the case considered here, where a specific frequency component of the cavity output is filtered before detection, it is necessary to use the full expression given in Eqn.~\ref{eqn:sideband_expr}. In the regime of self-oscillation, the oscillation amplitude $\beta$ is determined by balancing the optically induced mechanical gain with the intrinsic mechanical loss \begin{equation} \gamma_{\text{OM}} = \frac{4 g_{\text{0}}^2 \Omega^2}{\omega_{\text{m}}} \text{Im} \left[ \sum_\text{n} \frac{J_\text{n}(z) J_\text{n+1}(z)}{z h_\text{n} h_\text{n+1}^{*}} \right] = -\gamma_{\text{i}}. \label{eqn:gain_balance} \end{equation} The presence of the filters at the cavity output ensures that the SPD count rate is proportional to the number of intracavity photons in the first Stokes sideband at $\omega = \omega_{\text{l}}-\omega_{\text{m}}$, which is given by $n_1 = |\alpha_1|^2$. For the lowest input power, below threshold, $n_1$ is given by the simple linear approximation $n_1 = [(4G^2/\kappa)n_{\text{b}}]/\kappa$ (with $G = g_{\text{0}}\sqrt{n_{\text{c}}}$ the parametrically enhanced coupling rate), where $n_{\text{b}} \approx 1100$ is the thermal occupancy of the mechanical resonator. Near and above threshold, $n_1$ can be determined from the ratio of the above and below threshold SPD count rates and the known value of $n_1$ below threshold. 
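The truncated Bessel sums in Eqn.~\ref{eqn:sideband_expr} are simple to evaluate numerically. A minimal sketch (our own, with the sum truncated at $|m| \le 20$ and illustrative parameter values) is:
\begin{verbatim}
import numpy as np
from scipy.special import jv   # Bessel functions of the first kind

def alpha_n(n, z, Delta, kappa, Omega, wm, mmax=20):
    # Sideband amplitude alpha_n from the truncated double sum.
    m = np.arange(-mmax, mmax + 1)
    h = kappa / 2 + 1j * (Delta + m * wm)
    return (1j ** n) * Omega * np.sum(jv(m, z) * jv(m - n, z) / h)

# First Stokes sideband population n1 = |alpha_1|^2
# (all rates in rad/s; values illustrative).
kappa, wm = 2 * np.pi * 818e6, 2 * np.pi * 5.6e9
Delta, z, Omega = -1.067 * wm, 0.15, 1.0
n1 = abs(alpha_n(1, z, Delta, kappa, Omega, wm)) ** 2
\end{verbatim}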
For a given input power with $n_1$ constrained, Eqs.~\ref{eqn:sideband_expr} and \ref{eqn:gain_balance}, along with the condition that $|\gamma_{\text{OM}}| \approx \gamma_{\text{i}}$ above threshold, can be used to solve for the detuning $\Delta$ and amplitude $\beta$ commensurate with the observed count rate. For our highest input power, we find $\Delta \approx -1.067 \omega_{\text{m}}$ and $z \approx 0.15$. This amplitude is small enough that the linear approximation ($\alpha_1 \propto z$) is still valid. In particular, in the linear regime the relation $J_1(z) \approx z/2$ should hold. For the largest value of $z$ in our measurements, we find that $J_1(z)$ differs from $z/2$ by only about $0.3\%$. The shift in detuning, and the concomitant reduction in oscillation amplitude, are expected due to the thermo-optic effect, which tends to shift the cavity resonance to lower frequencies as the total intracavity photon population is increased by the amplified Stokes scattering~\cite{Krause2014}. In the linearized regime, $n_{\text{c}}$ (the intra-cavity photon number of the 0th sideband at $\omega_{\text{l}}$) is to a good approximation equal to the intra-cavity photon number in the absence of optomechanical coupling, $n_{\text{c}} = (4\kappa_{\text{e}} P_{\text{in}}/\hbar\omega_{\text{l}})/(\kappa^2 + 4\Delta^2)$. The total optomechanical back-action rate ($\gamma_{\text{OM}}$) is also approximately equal to the scattering rate from the $0$th sideband to the $1$st Stokes sideband ($\gamma_{0,1}$) in the linearized regime, which for blue-detuned pumping ($\Delta=-\omega_{\text{m}}$) yields $|\gamma_{\text{OM}}|\approx\gamma_{0,1}\approx 4G^2/\kappa$. As this is the case, in the main text we do not differentiate between the back-action damping and the Stokes scattering rate, and $n_{\text{c}}$ is a simple placeholder for $P_{\text{in}}$ in all of our plots. \end{document}
\section{Introduction} Particle-in-Cell (PIC) codes are among the most popular tools for the kinetic simulation of plasmas~\cite{birdsall}. They consist in following the continuous trajectories of charged particles moving through a spatial domain under the action of external and self-induced electromagnetic (EM) fields. These fields are represented on a discrete grid that also holds the plasma charge and current densities, entirely defined by the particles' phase-space distribution. Because the speed of light is finite, the nature of the physics described by EM PIC codes, the interactions between particles and fields, is spatially local. This property makes them a good candidate for massive parallelization, and several codes have indeed demonstrated virtually unlimited weak scaling~\cite{smilei,osiris,picongpu} provided load balance is maintained~\cite{loadbalance}. In contrast, the PIC algorithm is not well adapted for high performance at the single-node level, for several reasons. First, in most cases, PIC simulations are becoming increasingly memory bound as memory performance is not ramping up as fast as the computation capabilities. Second, particles are free to move anywhere in the domain and therefore trigger inefficient random accesses to memory each time they interact with the grid. This randomness also prevents the use of Single Instruction Multiple Data (SIMD) instructions, which are very efficient at speeding up memory-bound operations but are restricted to very regular memory access patterns. Finally, the wide variety of possible numerical configurations is difficult to optimize with a single technique. PIC code optimization must therefore rely on randomness mitigation for optimized memory usage, independently of the simulation parameters. A first approach to mitigate randomness in PIC codes is to use the standard domain decomposition on domains so small that they can fit in the cache of the system. This method is commonly used in recent implementations and the small domains are often referred to as \textit{patches}\cite{smilei} or \textit{tiles}\cite{vincenti2017,picador} (from now on the term \textit{patch} is used as the generic denomination). It exposes a very high level of parallelism and mitigates memory access randomness since particles of each patch all access the same grid region, which is limited by the patch extension. Another approach to further reduce randomness is particle sorting. This consists in organizing the particles in memory according to their location. This idea was introduced in PIC codes in 1977 as part of a binary collision model \cite{Takizuka1977}, but only considered for optimization twenty years later on CPU \cite{decyk1996,bowers2001} and on GPU a decade later \cite{gpusort,mertmann2011,decyk2014}. Its purpose was to make memory accesses less random while maximizing the cache efficiency. Since all computing systems have had multi-level caches for decades, sorting is nowadays very common in PIC codes, but it paradoxically implies a significant computation overhead because of potentially heavy data movements. Moreover, when coupled to patching, sorting only has a minimal impact on cache management efficiency. For those reasons, most PIC codes do not keep particles sorted at all times or perform only a coarse-grain sort\cite{gpusort}. Nevertheless, in addition to improving cache use, particle sorting can also favor SIMD operations by structuring memory accesses into repeatable patterns. 
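To fix ideas, sorting particles by cell is conceptually a stable sort on a linearized cell index, so that particles belonging to the same cell become contiguous in memory. A deliberately simplified NumPy sketch (illustrative only; it is not {\sc Smilei}\xspace's cycle sort, which is detailed in Sec.~\ref{sort} and avoids the auxiliary copies implied by the permutation below) is:
\begin{verbatim}
import numpy as np

def sort_particles_by_cell(x, y, z, dx, dy, dz, ny, nz):
    # Compute a linearized (row-major) cell index per particle
    # (positions assumed >= 0) and apply a stable permutation to
    # every particle array so that particles of the same cell
    # become contiguous in memory.
    ix = (x / dx).astype(np.int64)
    iy = (y / dy).astype(np.int64)
    iz = (z / dz).astype(np.int64)
    cell = (ix * ny + iy) * nz + iz
    order = np.argsort(cell, kind="stable")
    return x[order], y[order], z[order]
\end{verbatim}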
This article focuses on the fact that, with the increasing importance of these operations in today's hardware, the benefits of sorting at all times can actually overcome its cost. The benefits of frequent sorting and its possible implementation are discussed in \cite{nakashima2015,nakashima2017}. In these works, particles are stored in many different cell-dependent arrays and moved in memory when they change cell. This approach improves SIMD efficiency on many-core architectures such as the Intel Xeon Phi, provided that the particle arrays have enough elements. A similar approach has been extended in \cite{barsamian_2018}, where the authors use additional strategies such as the division of a cell's particle set into chunks to improve cache coherence and reduce memory transfers. They report acceleration when using a few hundred particles per cell. The present work proposes a vectorized PIC algorithm based on a new fine-grain particle sorting performed at all times. The algorithm relies on a cycle sort and retains a single particle array per patch. It is combined with an adaptive mode that selects at runtime and locally (at the patch level) between the scalar and vectorized algorithms, depending on the local conditions, in order to efficiently support any number of particles per cell. This approach was implemented in the code {\sc Smilei}\xspace\footnote{{\sc Smilei}\xspace is an open-source project. Both the code and its documentation are available online at \url{http://www.maisondelasimulation.fr/smilei}} \cite{smilei}, and its impact on the code performance is discussed throughout this paper. The paper is structured as follows. Section~\ref{sec:pic} summarizes the PIC algorithm and its implementation in {\sc Smilei}\xspace. The performance of the most important operators acting on the particles (namely the interpolator, pusher and projector), in their scalar versions, is analyzed in terms of their computational cost. The following Sec.~\ref{sort} details the fine-grain cycle sort algorithm and its benefits. Section \ref{sec:vectorization_operators} then focuses on the vectorization of each operator. For generality, emphasis is placed on the algorithms rather than on the implementation itself. Section \ref{sec:vecto_efficiency} analyzes and compares the performance measurements between the scalar and vectorized operators. We demonstrate that the vectorized algorithms are more efficient only for a large enough number of particles per cell. This motivated the development of an adaptive method to select locally and dynamically (at runtime) the most efficient operators, scalar or vectorized, depending on the local number of particles per cell in the patch. This adaptive method is presented in Sec.~\ref{sec:adaptive_operators}. Section \ref{sec:simulation_benchmark} presents the performance gain that can typically be obtained by using the fully vectorized and adaptive modes in large-scale 3D simulations. Three configurations are presented: two related to laser-plasma interaction and the third to astrophysics. In all three cases, the scalar, vectorized and adaptive modes are used and their performances are compared. Finally, conclusions are given in Sec.~\ref{sec:conclusion}. \section{The PIC method} \label{sec:pic} This first section briefly summarizes the basics of the PIC method for collisionless plasma simulation. 
This presentation introduces in particular the main operators that act on the particles in {\sc Smilei}\xspace, and whose performance, in their scalar versions, is presented at the end of the section. More detailed descriptions of the PIC method can be found in \cite{birdsall,dawson1983,hockney1988}, and {\sc Smilei}\xspace's implementation is more specifically explained in \cite{smilei}. \subsection{The Maxwell-Vlasov model} The kinetic description of a collisionless (fully or partially ionized) plasma relies on the Vlasov-Maxwell system of equations. In this description, the different species of particles constituting the plasma are described by their respective distribution functions $f_s(t,\mathbf{x},\mathbf{p})$, where $s$ denotes a given species consisting of particles with charge $q_s$ and mass $m_s$, and $\mathbf{x}$ and $\mathbf{p}$ denote the position and momentum of a phase-space element. The distribution $f_s$ satisfies Vlasov's equation\footnote{SI units are used throughout this work.}: \begin{eqnarray}\label{eq_Vlasov} \left(\partial_t + \frac{\mathbf{p}}{m_s \gamma} \cdot \nabla + \mathbf{F}_L \cdot \nabla_{\mathbf{p}} \right) f_s = 0\,, \end{eqnarray} where $\gamma = \sqrt{1+\mathbf{p}^2/(m_s\,c)^2}$, $c$ is the speed of light in vacuum, and \begin{eqnarray}\label{eq_LorentzForce} \mathbf{F}_L = q_s\,(\mathbf{E} + \mathbf{v} \times \mathbf{B}) \end{eqnarray} is the Lorentz force acting on a particle with velocity $\mathbf{v}=\mathbf{p}/(m_s\gamma)$. This force follows from the existence, in the plasma, of collective electric [$\mathbf{E}(t,\mathbf{x})$] and magnetic [$\mathbf{B}(t,\mathbf{x})$] fields satisfying Maxwell's equations: \begin{subequations}\label{eq_Maxwell} \begin{eqnarray} \label{eq_BGauss} \nabla \cdot \mathbf{B} &=& 0 \,,\\ \label{eq_Poisson} \nabla \cdot \mathbf{E} &=& \rho/\epsilon_0 \,,\\ \label{eq_Ampere}\nabla \times \mathbf{B} &=& \mu_0\, \mathbf{J} + \mu_0 \epsilon_0\,\partial_t \mathbf{E} \,,\\ \label{eq_Faraday}\nabla \times \mathbf{E} &=& -\partial_t \mathbf{B} \,, \end{eqnarray} \end{subequations} where $\epsilon_0$ and $\mu_0$ are the vacuum permittivity and permeability, respectively. The Vlasov-Maxwell system of Eqs.~\eqref{eq_Vlasov} -- \eqref{eq_Maxwell} describes the self-consistent dynamics of the plasma whose constituents are subject to the Lorentz force, and in turn modify the collective electric and magnetic fields through their charge and current densities: \begin{subequations}\label{eq_rhoJ} \begin{eqnarray} \rho(t,\mathbf{x}) &=& \sum_s q_s\int\!d^3\!p f_s(t,\mathbf{x},\mathbf{p})\,,\\ \mathbf{J}(t,\mathbf{x}) &=& \sum_s q_s\int\! d^3\!p\,\mathbf{v} f_s(t,\mathbf{x},\mathbf{p})\,. \end{eqnarray} \end{subequations} In the electromagnetic code {\sc Smilei}\xspace, velocities are normalized to $c$. Charges and masses are normalized to $e$ and $m_e$, respectively, with $-e$ the electron charge and $m_e$ its mass. Momenta and energies (and by extension temperatures) are then expressed in units of $m_e c$ and $m_e c^2$, respectively. The normalization for time and space is not decided {\it a priori}. Instead, all the simulation results may be scaled by an arbitrary factor, chosen here to be an angular frequency $\omega$. Temporal and spatial quantities are then expressed in units of $\omega^{-1}$ and $c/\omega$, respectively, while (number) densities are in units of $\epsilon_0 m_e \omega^2/e^2$. More details are given in \cite{smilei}. 
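As a side note, the reference density $\epsilon_0 m_e \omega^2/e^2$ is easily evaluated once $\omega$ is chosen. A short illustrative computation (our own example, with $\omega$ taken as the angular frequency of a $0.8~\mu$m laser) is:
\begin{verbatim}
import scipy.constants as sc

# Reference density n_ref = eps0 * m_e * omega^2 / e^2 for a
# 0.8 um laser (illustrative choice of the normalization omega).
lam = 0.8e-6
omega = 2 * sc.pi * sc.c / lam
n_ref = sc.epsilon_0 * sc.m_e * omega ** 2 / sc.e ** 2
print(f"n_ref = {n_ref:.2e} m^-3")   # ~1.7e27 m^-3 (the critical density)
\end{verbatim}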
\subsection{Data structures: Macro-particles and fields} The ``Particle-In-Cell'' method owes its name to the discretization of the distribution function $f_s$ as a sum of $N_s$ ``macro-particles'' (also referred to as ``super-particles'' or ``quasi-particles''): \begin{eqnarray}\label{eq_fs_discretized} f_s(t,\mathbf{x},\mathbf{p}) = \sum_{p=1}^{N_s}\,w_p\,\,S\big(\mathbf{x}-\mathbf{x}_p(t)\big)\,\delta\big(\mathbf{p}-\mathbf{p}_p(t)\big)\,, \end{eqnarray} where $w_p$ is the $p^{th}$ macro-particle ``weight'', $\mathbf{x}_p$ is its position and $\mathbf{p}_p$ its momentum. $\delta(\mathbf{p})$ is the Dirac distribution and $S(\mathbf{x})$ is the so-called shape-function of all macro-particles. These macro-particles are advanced, knowing the electromagnetic fields at their position, by solving their relativistic equations of motion. For convenience, in the rest of this article macro-particles will be referred to simply as ``particles''. \\ Particle weights, momentum components and position components are stored separately in contiguous arrays. These arrays are elements of a structure of arrays called \verb+Particles+. The EM fields experienced by the particles (obtained at the particles' positions after the interpolation step, see section \ref{interpolation}) as well as their Lorentz factors are stored in temporary contiguous arrays. {\sc Smilei}\xspace uses the Finite Difference Time Domain (FDTD) method~\cite{taflove2005} to solve Maxwell's equations. The EM field components, charge densities and current density components are thus stored on Cartesian staggered grids as illustrated in Fig.~\ref{fig_Yee}. This Yee grid~\cite{yee1966} is a very standard mesh layout used in most FDTD approaches, as well as in refined methods based on this technique~\cite{nuter2014}. It involves two regularly-spaced grids: \textit{primal} and \textit{dual}. Primal vertices are points where the charge density $\rho$ is evaluated; they delimit the primal cells. Dual vertices are located at the center of the primal cells and form the dual grid. Apart from the charge density, the other quantities are not evaluated at either of these vertices, but at the midpoints highlighted in Fig.~\ref{fig_Yee}. As an example, the current component $J_x$ is dual in $x$, primal in $y$ and primal in $z$. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{Figures/yee_grid.pdf} \caption{Representation of the staggered Yee grids. The location of all fields and current densities follows from the common convention to define charge densities at the cell vertices. The black cell is part of the \textit{primal} grid whose vertices carry the charge density. The red cell is part of the \textit{dual} grid whose vertices are located at the center of the primal cells. Primal and dual vertices are respectively represented by blue and red circles.} \label{fig_Yee} \end{figure} \subsection{The PIC time-loop iteration} The explicit PIC time loop operations consist in solving successively Maxwell's and Vlasov's equations. Maxwell's equations are solved with an explicit FDTD scheme. Vlasov's equation, solved by advancing particles in phase space, requires three steps: \begin{itemize} \item Field interpolation (or field gathering): the freshly updated electric and magnetic fields from the Maxwell solver, being known only at the grid vertices, are interpolated at each particle's position. The interpolation method accounts for the fields of several neighboring cells according to a particle shape function. 
\item Particle push: the equation of motion is solved using the interpolated fields. This typically relies on finite difference leap-frog methods (e.g. schemes from Boris \cite{boris1970,birdsall}, Vay \cite{vay2008} or Higuera-Cary \cite{higuera2017}) which advance the momenta at the middle of the time step before computing the positions at the next time step. \item Projection (or current deposition): once the particles have been pushed, their contributions to the current need to be projected back to the grids. As for field interpolation, this step uses the particle shape function. Note that, in {\sc Smilei}\xspace, current deposition relies on the charge-conserving method developed by Esirkepov~\cite{esirkepov}, and this projection method will always be used throughout this work. The current projected onto the grids is then used in the Maxwell solver to compute the following time step. \end{itemize} \subsection{The PIC time-loop performance} \label{sec:pic_scalar_performance} In plasma simulations, advancing the particles is usually much more expensive than solving Maxwell's equations. The computational cost thus scales with the number of particles which is, in most cases, vastly larger than the number of vertices. The computational cost also varies between operators. In this Section, the performance of the scalar particle operators (namely the interpolator, pusher and projector) is analyzed. To do so, we consider the simple case of a thermal plasma. A homogeneous, Maxwellian, hydrogen plasma fills the entire simulation domain, with an initial proton temperature of 10 keV and an electron temperature of 100 keV; particles are initially randomly distributed in space. The domain has periodic boundary conditions and the cell dimensions are $\Delta x = \Delta y = \Delta z \simeq 0.22\ c / \omega$, where $\omega$ denotes the electron plasma frequency in this particular case. Simulations were run for 100 iterations with a time step $\Delta t = 0.95\,\Delta_{\rm CFL} = 0.12\ \omega^{-1}$, where $c\, \Delta_{\rm CFL} = \left( \Delta x^{-2} + \Delta y^{-2} + \Delta z^{-2} \right)^{-1/2}$ corresponds to the timestep at the Courant-Friedrichs-Lewy (CFL) condition. The shape function for interpolation and projection is of order 2, i.e. it extends over 3 vertices in each direction, and, as stressed earlier, Esirkepov's charge-conserving current projection scheme is used. The simulations are performed on a single node of the Skylake super-computer \textit{Irene Joliot-Curie} in France (see \ref{compilation}). The domain is divided into $8\times8\times6$ patches. The run uses 2 MPI processes with 24 OpenMP threads each, so that each core has 8 patches to handle. Each patch contains $8\times8\times8$ cells, which is small enough for the field data to fit in the L2 cache. The load is balanced during the entire simulation as the plasma remains uniform. This study neglects the cost of communications between nodes, focusing instead on the particle operators (interpolator, pusher, projector) and the Maxwell solver. As a consequence, the type of particles and their velocities have little impact on the results. In the following, we present a parametric study of the scalar operators' performance as a function of the number of particles per cell (from 1 to 256). Throughout this work, the performance of various operators will be measured by the computation time per particle per iteration. In order to facilitate the comparison between architectures, this computation time is considered at the node level. 
More precisely, it is computed as: \begin{eqnarray}\label{eq:comptime} \tau_{\rm part} = \frac{T_{\rm wall-clock}}{N_{\rm part} \times N_t} \times N_{\rm Nodes}\,, \end{eqnarray} where $T_{\rm wall-clock}$ is the wall-clock time spent in the considered operators, $N_{\rm part}$ is the total number of particles in the simulation, $N_t$ the number of timesteps over which the simulation is run and $N_{\rm Nodes}$ is the number of nodes used for the simulation. In Sec.~\ref{sec:simulation_benchmark}, we will also present a time-resolved version of this measure, obtained by summing not over the total number of timesteps but over a reduced number of them, and doing so several times during the simulation. The computation times obtained per particle and per iteration for each operator are shown in Fig. \ref{fig_particle_scalar_operator_times_skl}. They appear to depend weakly on the number of particles per cell, gaining $\sim$19\% at the higher end. With little vectorization and neglecting cache issues, the scalar operators should not depend on the number of particles per cell but on the total number of particles to be computed per patch. This is approximately verified. The small gain is partly due to the scalar operators having vectorized sequences (the code is always compiled with the vectorization flags) even if the most intensive loops are not optimized. Cache memory effects could also impact the particle computation time, but this analysis would require a deep instrumentation of the code. The projection appears to be the most time-consuming operator ($\sim$65\% of the whole particle pushing time on average), followed by the interpolator ($\sim$30\% contribution). The pusher represents $\sim$5\% (or less) of the particle pushing time, although slightly increasing with the number of particles contrary to the other operators. Note that the sum of all contributions is not exactly 100\% because the particle processing includes additional small computations such as the exchange preprocessing. The time spent in the Maxwell solver is independent of the number of particles per cell. In relative terms, it represents 12\% of the particle computation time for 1 particle per cell, and becomes rapidly negligible above that, as particles consume more and more time. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/particle_scalar_operator_times_skl} \caption{Computational cost [see Eq.~\ref{eq:comptime}] of each particle operator for the scalar version of the code, as a function of the number of particles per cell. Simulations run on a single Skylake node.} \label{fig_particle_scalar_operator_times_skl} \end{figure} The objective of the vectorization method described in this paper is to reduce the cost of the three particle operators, which are further detailed in sections \ref{interpolation} to \ref{projection}. \section{Particle sorting} \label{sort} This section describes the algorithm used to sort particles in {\sc Smilei}\xspace, which is a fine-grain and frequent sorting. \subsection{Sorting definition and purpose} A sorting technique in a PIC code is defined by: (i) the ``grain'' of the sorting, or resolution, often expressed as an elementary volume (e.g. sub-cells, a single cell or multiple cells), (ii) the ordering of the set of grains, and (iii) the frequency of the sorting (usually a fixed periodicity expressed in number of time steps). The objective is that, after sorting, all particles within the same ``grain'' are stored contiguously in memory. 
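As an illustration of the ``grain'' concept, the following sketch (free-standing code with hypothetical names, not {\sc Smilei}\xspace's actual implementation) assigns to each particle an integer key identifying its grain, taken here as a single cell of a uniform 3D grid, in C++ row-major order:
\begin{verbatim}
#include <cmath>
#include <cstddef>
#include <vector>

// Assign to each particle a row-major "grain" key on a uniform grid.
// Sketch only: single-cell grain and hypothetical names.
void computeCellKeys(const std::vector<double>& x,
                     const std::vector<double>& y,
                     const std::vector<double>& z,
                     double x0, double y0, double z0, // patch origin
                     double dx, double dy, double dz, // cell sizes
                     int ny, int nz,                  // cells along y, z
                     std::vector<int>& cellKeys)
{
    const std::size_t npart = x.size();
    cellKeys.resize(npart);
    for (std::size_t p = 0; p < npart; ++p) {
        const int ix = static_cast<int>(std::floor((x[p] - x0) / dx));
        const int iy = static_cast<int>(std::floor((y[p] - y0) / dy));
        const int iz = static_cast<int>(std::floor((z[p] - z0) / dz));
        // Row-major order matching the field arrays: z is contiguous.
        cellKeys[p] = (ix * ny + iy) * nz + iz;
    }
}
\end{verbatim}
Sorting the particles by this key in ascending order then makes all particles of a given grain contiguous, in the same memory order as the field arrays.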
The vectorization strategy in {\sc Smilei}\xspace requires particles sharing the same primal indices to be contiguous in memory. This is slightly different from a standard cell-based sorting and is in fact equivalent to a dual-cell-based sorting, as illustrated in figure \ref{sorting_scheme} for a two-dimensional situation. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{Figures/particle_sorting.pdf} \caption{2D dual-cell-based particle sorting in {\sc Smilei}\xspace. The left panel represents the sorted particles before movements, and the unsorted ones afterwards. The right panel illustrates the ordered particles after sorting. Each panel outlines both primal and dual grids. Particles are sorted according to the nearest primal vertex (i.e. located in the same dual cell), the number next to each particle being its position in memory. After sorting, particles sharing the same primal vertex are contiguous in memory.} \label{sorting_scheme} \end{figure*} Many authors suggest that single-cell sortings are also a good practice to maximize cache efficiency. Since such a sorting is usually not executed at every time step, the ordering of the sorted cells matters. Indeed, cache use is optimized by the ordering if, as particles move, they only travel to cells close in memory to the cell they originate from. It has been shown that ordering them along elaborate structures such as Z curves provides the best performance~\cite{data-structure}. However, the present situation is different: the objective is to guarantee that SIMD operations can be executed at every time step. Therefore the sorting, in {\sc Smilei}\xspace, must be done at every time step as well. This high frequency ensures an optimized cache use, independently of the cell ordering. As a consequence, a cell ordering that benefits best from the vectorized operators is chosen (see section \ref{projection}): the \verb!C++! natural row-major order, which matches that of all field data. \subsection{Counting sort} Sorting at every time step is a potentially costly operation which, without proper care, could outweigh the benefits of having a well sorted array of particles. The most expensive operation of the whole sorting process is particle copying, because a single particle copy in memory involves a significant amount of data movement. Consequently, an efficient sorting algorithm should aim at minimizing the number of particle copies. In that regard, the counting sort has been a standard choice because it involves exactly one copy per particle. The whole point of this algorithm is to determine, before any data movement, where each particle is supposed to be moved. Pseudo code of the counting sort is given in algorithm \ref{countingsort}, where the expression $range(N)$ refers to an array of integers ranging from 0 to $N-1$. \begin{algorithm} \DontPrintSemicolon \KwData{ \\$Particles$: array of unsorted particles. \\$CellKeys$: array of the cell indices of the particles. \\$Count$: array counting the occurrence of each cell key. \\$First\_index$: index of the first unsorted particle of each cell. \\$Npart$: number of particles. \\$Ncell$: number of cells. 
} \KwResult{$PartSorted$: array of sorted particles} \Begin{ \textcolor{blue}{\tcp{$Count$ is evaluated.}} \For{$ipart \in range(Npart)$}{ $Count[CellKeys[ipart]] \mathrel{+}= 1$\; } $First\_index[0] \longleftarrow 0$\; \textcolor{blue}{\tcp{Accumulate $Count$.}} \For{$icell \leftarrow 1$ \KwTo $Ncell-1$}{ $First\_index[icell] \longleftarrow First\_index[icell-1]+Count[icell-1]$\; } \textcolor{blue}{\tcp{Copy particles into the sorted array}} \For{$ipart \in range(Npart)$}{ $PartSorted[First\_index[CellKeys[ipart]]] \longleftarrow Particles[ipart]$ \; $First\_index[CellKeys[ipart]] \mathrel{+}= 1$\; } \KwRet $PartSorted$ } \caption{Counting Sort. } \label{countingsort} \end{algorithm} This algorithm is standard in PIC codes where the sorting is executed at low frequency. Between two sortings, each particle has time to travel several cells away from its original cell. The algorithm must therefore be efficient at treating a completely disordered plasma, and the counting sort is perfectly adapted to this. Its major drawback is that it is an ``out-of-place'' sorting and therefore requires another full array of particles, doubling the memory occupation of the particles. \subsection{Cycle sort} In the case of a high-frequency sorting, there is little particle movement between two sortings and the particles remain relatively well ordered at all times. Although these conditions seem favorable to the sorting, the counting sort still pays the full cost of one copy per particle and of the doubled memory. It is therefore more efficient to use an ``in-place'' sorting and copy only particles that effectively change cells. This can be achieved with the cycle sort given in \ref{ap_cycle_sort}. The purpose of this algorithm is to find a succession of circular permutations, or cycles, leading to a fully sorted array while copying only particles which have effectively moved to a different cell. Unlike the counting sort, the total number of copies is variable and depends on the particles' movement and the length of the cycles found. For a given cycle, the number of copies per particle is given by $N_c=(L+1)/L$ where $L$ is the length of the cycle. This accounts for the necessary copy of one particle in a temporary variable. The total number of particle copies can be approximated by $N_c\times N_m$ where $N_m$ is the total number of particles moving to a different cell. In the worst case scenario, all cycles have the minimum length 2, $N_c=1.5$ and the number of copies is $1.5\times N_m$. As long as $N_m < 2N_{part}/3$, where $N_{part}$ is the total number of particles, the total number of copies is still lower than when using a counting sort. In general, few particles change cells between sortings when the sort is done frequently, hence the obvious advantage of the cycle sort over the counting sort. \subsection{Optimized cycle sort} The cycle sort minimizes the number of copies at the cost of a theoretical complexity of $\mathcal{O}\left(N_{part}^2\right)$: for each particle at index $cycleStart$, the algorithm has to compute its future index in the array by traveling through all particles located after $cycleStart$. This part of the algorithm can be significantly accelerated in the case of many duplicates. This is usually the case in PIC codes because there are many more particles than cells; for that reason, many particles share the same $CellKeys$ value. A useful optimization consists in building the $Count$ array in the same manner as in the counting sort. 
This array is then used to keep track of the index where, in each cell, the next particle can be inserted. It reduces the complexity of the algorithm to $\mathcal{O}\left(N_{part}+N_{cell}\right)$, i.e. effectively to $\mathcal{O}\left(N_{part}\right)$ since $N_{part}\gg N_{cell}$ in most simulations. The optimized cycle sort is given in algorithm \ref{optimizedcyclesort}. \begin{algorithm} \DontPrintSemicolon \KwData{ \\$Particles$: array of unsorted particles. \\$CellKeys$: array of the cell indices of the particles. \\$Count$: array counting the occurrence of each cell key. \\$First\_index$: index of the first unsorted particle of each cell. \\$Last\_index$: index of the last particle of each cell. \\$Npart$: number of particles. \\$Ncell$: number of cells. } \KwResult{$Particles$: array of sorted particles} \Begin{ \textcolor{blue}{\tcp{$Count$ is initialized.}} \For{$ipart \in range(Npart)$}{ $Count[CellKeys[ipart]] \mathrel{+}= 1$\; } $First\_index[0] \longleftarrow 0$\; \textcolor{blue}{\tcp{Accumulate $Count$.}} \For{$icell \leftarrow 1$ \KwTo $Ncell-1$}{ $First\_index[icell] \longleftarrow First\_index[icell-1]+Count[icell-1]$\; $Last\_index[icell-1]\longleftarrow First\_index[icell]$\; } $Last\_index[Ncell-1]\longleftarrow Last\_index[Ncell-2]+Count[Ncell-1]$\; \textcolor{blue}{\tcp{Loop on each cell}} \For{$icell \in range(Ncell)$}{ \For{$cycleStart\leftarrow First\_index[icell]$ \KwTo $ Last\_index[icell]$}{ \If{$CellKeys[cycleStart]\mathrel{=}= icell$}{ \textcolor{blue}{\tcp{Particle already well placed}} $\mathbf{continue}$\; } $cell\_dest \longleftarrow CellKeys[cycleStart]$\; $ip\_dest\longleftarrow First\_index[cell\_dest]$\; $Cycle.resize(0)$\; $Cycle.push\_back(cycleStart)$\; \textcolor{blue}{\tcp{Build a cycle}} \While{$ip\_dest\ \mathrel{!}=\ cycleStart$}{ \textcolor{blue}{\tcp{Do not swap twins}} \While{$CellKeys[ip\_dest]\mathrel{=}=\ cell\_dest$}{ $ip\_dest \mathrel{+}= 1$\; } $First\_index[cell\_dest]\longleftarrow ip\_dest + 1$\; $Cycle.push\_back(ip\_dest)$\; $cell\_dest \longleftarrow CellKeys[ip\_dest]$\; $ip\_dest\longleftarrow First\_index[cell\_dest]$\; } \textcolor{blue}{\tcp{Proceed to the swap}} $Ptemp\longleftarrow Particles[Cycle.back()]$\; \For{$i \leftarrow Cycle.size()-1$ \KwTo $1$}{ $Particles[Cycle[i]]\longleftarrow Particles[Cycle[i-1]]$\; } $Particles[Cycle[0]]\longleftarrow Ptemp$\; } } \KwRet $Particles$ } \caption{Optimized Cycle Sort. } \label{optimizedcyclesort} \end{algorithm} \subsection{Sorting in a parallel environment} \label{sec:sorting-parallel} PIC codes are usually executed in a parallel environment. This poses two issues for the cycle sort algorithm. First, particles are constantly exchanged with neighboring domains. The size of each particle array changes and gaps appear in the middle of the array, preventing a standard cycle sort. A simple way of dealing with this issue is the following. All particles entering a given patch are stored in a buffer; they have their own $CellKeys$, and contribute to the $Count$ array. All $CellKeys$ of exiting particles are set to $-1$. The cycle sort algorithm is then executed through the particle array. The first cycles start at the entering particles and end when they hit a $CellKeys$ value of $-1$. The particles of these cycles are simply copied to their destinations, eventually overwriting the exiting particles. This process is repeated for all entering particles. At this point, all gaps are filled and all entering particles are placed. 
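Whether it is applied serially or combined with the entering-particle buffers described above, the elementary operation remains the rotation of one cycle of particles using a single temporary, which is the origin of the $(L+1)/L$ copies per particle quoted earlier. A minimal sketch (with a hypothetical \texttt{Particle} aggregate; {\sc Smilei}\xspace actually stores properties in separate arrays) reads:
\begin{verbatim}
#include <cstddef>
#include <vector>

struct Particle { double w, x, y, z, px, py, pz; }; // hypothetical aggregate

// Rotate particles along one cycle of indices: for a cycle of length L,
// this performs L+1 particle copies (one of them into the temporary).
void rotateCycle(std::vector<Particle>& particles,
                 const std::vector<int>& cycle)
{
    Particle temp = particles[cycle.back()];            // 1 copy
    for (std::size_t i = cycle.size() - 1; i > 0; --i)
        particles[cycle[i]] = particles[cycle[i - 1]];  // L-1 copies
    particles[cycle.front()] = temp;                    // 1 copy
}
\end{verbatim}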
If unsorted particles remain, the optimized cycle sort from the previous section is applied. Figure \ref{sorting_sketch} sketches the whole process and illustrates the differences with the counting sort. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/sorting_sketch.pdf} \caption{Comparison between counting sort and optimized cycle sort starting from an identical unsorted array of particles. Particles are colored as a function of their cell keys. Lost particles are given a cell key of -1. Particles coming from other patches are represented in a separate buffer. In panel a) the counting sort directly copies particles into a new array. Only copies required for the first cell are represented, for readability. In panel b) the cycle sort performs particle permutations. Fewer copies are needed and they are all represented.} \label{sorting_sketch} \end{figure} The second issue is related to load balancing. Most advanced PIC codes deploy elaborate techniques to balance the computational load between the different compute units. Since most of the computational load is proportional to the number of particles, the effort mainly consists in balancing the number of particles per compute unit. The counting sort cost is proportional to the number of particles as well and is not problematic. However, the cost of a cycle sort strongly depends on the local disorder of the particle array, and is thus likely to cause significant load imbalance. The disorder is difficult to estimate, and taking it into account in a load balancing procedure proves complicated. Instead, a good task scheduler can smooth out this imbalance and is easier to implement. In {\sc Smilei}\xspace, this is the role of the OpenMP dynamic scheduler and the patch-based domain decomposition \cite{smilei}. \section{Vectorization of the PIC operators} \label{sec:vectorization_operators} In most PIC simulations, an important fraction of the computation time is spent in the three main operators, namely the interpolation, the pusher and the projection. This section describes the workflow of these three functions, how they are vectorized and why they benefit from the dual-cell fine-grain sorting. Note that the vectorization effort in {\sc Smilei}\xspace focuses only on the algorithm and data structures and not on the implementation itself. This means that no specific intrinsics were introduced in the code. The only additions to the C++ code are \texttt{\#pragma omp simd} directives on critical loops and \texttt{aligned(64)} attributes on critical arrays. Vectorization in {\sc Smilei}\xspace therefore relies only on auto-vectorization. \subsection{Interpolation}\label{interpolation} In the interpolation operation, also referred to as ``field gathering'', the EM field defined on the grid must be evaluated at each particle's position. This operation on a single particle can be broken down into the three following sub-steps. \begin{enumerate} \item Extract the field data from the global field arrays in the neighborhood of the particle (the \textit{stencil}). \item Compute the interpolation coefficients affecting each field data point, depending on the particle position relative to that of its cell. \item Multiply fields by coefficients and sum all terms. \end{enumerate} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/PIC_interpolation_step.pdf} \caption{Illustration, in one dimension, of the primal and dual grid vertices accessed during the interpolation process with a second order shape function. 
Vertical dotted lines mark the boundary of the dual cells. Green and orange particles are in the same dual cell; they share the same primal index and therefore access the same 3 primal vertices. However, their dual indices differ by one, which extends the number of dual vertices accessed to 4.} \label{fig_interpolation} \end{figure} The extracted portion of the field data depends on the position of the particle. If particles are not well sorted, step 1 involves random accesses in a potentially large array. In addition to significant cache misses, this also prevents SIMD operations. It has even been reported that, for some architectures, the complicated memory-access pattern of interpolation behaved better when specifically instructing the compiler not to use SIMD operations \cite{picador}. In {\sc Smilei}\xspace, as explained in section \ref{sort}, particles are sorted in dual cells so that groups of particles sharing the same primal indices are contiguous in memory. These groups are treated successively and each of them is vectorized efficiently as follows. The first benefit of sorting is that, for step 1, it completely removes particle dependency since all particles of the group require the same data. Access to the global memory is thus minimized, which improves cache use. Primal components of the fields are common to all particles of the group since they share the same primal indices. Dual components extend to only one additional vertex, as illustrated in figure \ref{fig_interpolation}. In these conditions, steps 2 and 3 can be easily vectorized. They operate on the full stencil, with the exception of one point that depends on each particle's position. This is effectively dealt with via the use of a mask (see figure \ref{fig_interpolation}). Sorting also guarantees that the local data involved in steps 2 and 3 are contiguous and can therefore be easily vectorized. The local positions (relative to that of the cell) are stored for later reuse, whereas the interpolation coefficients are loaded into a temporary buffer. Particle groups are treated in sub-groups of 32 particles in order to limit the total size of these temporary buffers and fit them into the cache while retaining a reasonable vector length. The optimal size of these sub-groups depends on the architecture and may change in the future. Finally, the interpolated EM fields are returned and stored for each particle of the currently treated patch, for later use in the pusher. \subsection{Pusher}\label{pusher} The pusher is the operation that benefits the most from vectorization with minimal adjustments, provided that the data structure for particle properties is appropriate. Its algorithm remains almost unchanged thanks to the optimized cycle sort. It is performed on all particles of the patch regardless of the cell they occupy. \begin{algorithm} \DontPrintSemicolon \KwData{ \\$particles$: array of sorted particles } \KwResult{$particles$: array of pushed and unsorted particles} \Begin{ \textcolor{blue}{\tcp{Vectorized loop on particles}} \For{$particle \in particles$}{ $\triangleright$ Update the momentum of $particle$\\ $\triangleright$ Update the position of $particle$ } \KwRet $particles$ } \caption{Particle pusher.} \label{particle_pusher} \end{algorithm} The \verb+CellKeys+ array, containing the dual cell index of each particle, is updated after the pusher. When a particle crosses the patch boundary, \verb+CellKeys+ is set to $-1$ as a tag for the boundary condition treatment (see section \ref{sec:sorting-parallel}). 
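To make the strategy concrete on this simplest operator, the sketch below shows a simplified push over a structure of arrays, with one \texttt{\#pragma omp simd} directive on the particle loop. It is a minimal illustration, not {\sc Smilei}\xspace's actual implementation: the names are hypothetical, the magnetic-field rotation of the leap-frog schemes is omitted for brevity, and momenta and velocities are assumed normalized to $m_s c$ and $c$ as in Sec.~\ref{sec:pic}.
\begin{verbatim}
#include <cmath>

// Simplified relativistic push over a structure of arrays. One array per
// particle property makes consecutive loop iterations access consecutive
// memory locations, so the loop auto-vectorizes without gather/scatter.
void push(int npart, double dt, double charge_over_mass,
          double* px, double* py, double* pz,
          double* x,  double* y,  double* z,
          const double* Ex, const double* Ey, const double* Ez)
{
    #pragma omp simd
    for (int p = 0; p < npart; ++p) {
        // Momentum update from the interpolated electric field
        // (magnetic rotation omitted in this sketch).
        px[p] += charge_over_mass * dt * Ex[p];
        py[p] += charge_over_mass * dt * Ey[p];
        pz[p] += charge_over_mass * dt * Ez[p];
        // Position update with the relativistic velocity p/gamma.
        const double inv_gamma =
            1.0 / std::sqrt(1.0 + px[p]*px[p] + py[p]*py[p] + pz[p]*pz[p]);
        x[p] += dt * px[p] * inv_gamma;
        y[p] += dt * py[p] * inv_gamma;
        z[p] += dt * pz[p] * inv_gamma;
    }
}
\end{verbatim}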
\subsection{Projection}\label{projection} In the projection operation, also referred to as ``current deposition'', the current density carried by each particle must be evaluated at the coordinates of the surrounding vertices and added to the current density global arrays. Nowadays, one standard approach is the charge-conserving Esirkepov projection algorithm \cite{esirkepov}. The direct algorithm is not vectorizable in its naive form since two particles located in the same cell could project their charge or current contributions to the same vertices, leading to memory races. Nonetheless, efficiently vectorized algorithms have been implemented to get around this limitation \cite{vincenti2017}. Esirkepov's method is even more challenging to vectorize because the computation not only depends on the particles' positions but also on their displacements. Sorting suppresses all randomness in positions but not in displacements. This section explains how {\sc Smilei}\xspace benefits from sorting during the projection phase and how it deals with the randomness of the displacements. In Esirkepov's projection method, the current densities along each dimension of the grid are computed from the charge flux through the cell borders. By definition, fluxes are computed from the particle's present and former positions, respectively $x^{t}$ and $x^{t-\Delta t}$. The simpler direct projection algorithm only uses the present particle position $x^{t}$, but does not conserve charge. In both methods, the operation on a single particle can be broken down into the two following sub-steps: \begin{itemize} \item Step 1 - Compute the projection contributions of the particle depending on its relative position in its local cell and its displacement. \item Step 2 - Add these contributions to the global array of current density according to the particle global position. \end{itemize} 
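The principle of charge-conserving deposition is most easily seen in one dimension (a textbook reduction given here for illustration, not the actual 3D scheme): discretizing the continuity equation $\partial_t \rho + \partial_x J_x = 0$ on the staggered grid yields \begin{eqnarray} J_{x,i+1/2}^{\,t+\Delta t/2} = J_{x,i-1/2}^{\,t+\Delta t/2} - \frac{\Delta x}{\Delta t} \left( \rho_i^{\,t+\Delta t} - \rho_i^{\,t} \right)\,, \end{eqnarray} so that the deposited current is entirely determined by the change of the deposited charge, i.e. by the particle shape function evaluated at the present and former positions. Esirkepov's method generalizes this flux decomposition to two and three dimensions and to arbitrary shape-function orders. The Esirkepov projection, operating with a shape function of either 2nd or 4th order, has been vectorized by exploiting the properties of the sorted particles. Algorithm \ref{particle_projection} presents the method concisely at 2nd order. This algorithm is repeated for each current component $J_x$, $J_y$, $J_z$. 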
\begin{algorithm} \DontPrintSemicolon \KwData{ \\$clusters$: List of clusters of 4 cells \\$vectors$: List of vectors of 8 particles contained in a given cell cluster \\$J_{local}$: local buffer to gather the current contribution from the cluster particles } \KwResult{$J$: current grids} \Begin{ $\triangleright$ For each current component $J_x$, $J_y$ and $J_z$: \textcolor{blue}{\tcp{Loop 1 - on 4-cell clusters}} \For{$cluster \in clusters$}{ \textcolor{blue}{\tcp{Loop 2 - on the cluster cells}} \For{$cell \in cluster$}{ \textcolor{blue}{\tcp{Loop 3 - on particle vectors}} \For{$vector\in vectors$}{ \textcolor{blue}{\tcp{Loop 4 - Vectorized loop on the vector particles}} \For{$particle \in vector$}{ $\triangleright$ Compute each particle's coefficients and distances to the vertices \\ $\triangleright$ Compute each particle's charge weight } \textcolor{blue}{\tcp{Loop 5 - Vectorized loop on the vector particles}} \For{$particle \in vector$}{ $\triangleright$ Compute the current contributions and store in $J_{local}$ } } } \textcolor{blue}{\tcp{Loop 6 - on vertex indices}} \For{$i \in range(5)$}{ \For{$j \in range(5)$}{ \textcolor{blue}{\tcp{Vectorized loop in the $z$ contiguous direction}} \For{$k \in range(8)$}{ \textcolor{blue}{\tcp{Unrolled loop on the particle vector size}} \For{$ipart \in range(8)$}{ $\triangleright$ Reduction of $J_{local}$ in the main current array $J$ } } } } } } \caption{Particle projection for order 2.} \label{particle_projection} \end{algorithm} In order to take advantage of the sorting, the first loops (1 and 2 in Algorithm \ref{particle_projection}) iterate on the cells. The particularity of {\sc Smilei}\xspace's projection is the gathering of the cells in clusters of 4 cells in the $z$ direction (of index $k$). Note that the number of cells per cluster actually depends on the order of the projection; the 4-cell size is only valid for order 2. The advantage of this decomposition is clarified in the following description. The first part of the algorithm corresponds to step 1. Particles in the same dual cell (sharing the same primal indices) are clustered into vectors of 8 particles to minimize the local buffer size while retaining enough data for an efficient vectorization. The loop on these vectors is denoted \textit{loop 3} in Algorithm \ref{particle_projection}. As illustrated in Fig. \ref{fig_projection}a, computing the coefficients requires up to 4 vertices for each particle in each direction, but potentially 5 vertices if all particles of the same cell are considered. This is due to the Esirkepov scheme, which applies a shift depending on the particle displacement. Since each particle only uses 4 vertices among 5, it has one unused value at one vertex. For the vectorization, the shape factor coefficients and the intermediate flux coefficients are computed and stored in separate, dedicated buffers for each direction. These buffers are carefully allocated (aligned and contiguous in the particle direction) so that the computation of these coefficients is vectorized in the particle loop 4 in Algorithm \ref{particle_projection}. Each buffer has by default a size of 8 (vector particles) $\times$ 5 (vertices). \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/PIC_projection_step.pdf} \caption{a) Primal vertices accessed during the projection process with a second order shape factor. 
b) Schematic of the multi-cell approach that uses a larger temporary current projection buffer in the $z$ direction and helps reduce the number of projections in the patch grids. c) Drawing of the $J_{local}$ local buffer reduction process into the main array $J$.} \label{fig_projection} \end{figure} The computation of step 2 can be divided into two sub-steps. During sub-step 2.1 (loop 5 in Algorithm \ref{particle_projection}), the current contributions, calculated using the previously computed coefficients, are stored on a small local grid, called $J_{local}$ in Algorithm \ref{particle_projection}, in order to avoid concurrent memory accesses and enable vectorization. $J_{local}$ is treated like eight small, separate current grids so that the currents of the 8 particles of each vector can be stored independently without concurrency. In 3D, the grid size required to satisfy the projection of the particles in the 4-cell cluster is $5 \times 5 \times 8$ cells, as schematically shown in Fig.~\ref{fig_projection}b. Therefore, the local buffer $J_{local}$ is composed of $5 \times 5 \times 8 \times 8$ elements (12.5 kB). The fast (contiguous) axis is the particle index. Sub-step 2.2 (loop 6 in Algorithm \ref{particle_projection}) reduces the local grid $J_{local}$ into the main one, $J$. Vectorization is applied along the $z$ direction, which is contiguous for $J$. The 4-cell cluster provides 8 elements in this direction. For each vertex, the 8 particles' contributions to $J_{local}$ are summed in a temporary buffer as described in Fig. \ref{fig_projection}c. This buffer is then added to the main grid $J$. The 4-cell cluster further helps optimize this step by pooling 4 reductions. The particle vector size and the number of cells in a cluster may be adjusted to optimize the vectorization efficiency. A large vector size requires more memory that will not necessarily be used entirely if there are not enough particles per cell. The buffer memory size must remain small enough to fit in the L2 cache. Larger cell clusters may help minimize the number of reductions but require more memory. When the number of particles per cell is not a multiple of the vector size (8 for \textsc{AVX512}, 4 for \textsc{AVX2}), the remaining particles are treated in a smaller vector. When the number of cells in $z$ is not a multiple of the cluster size (4), the remaining cells are treated sequentially (i.e. one reduction per cell). \section{Vectorization performances} \label{sec:vecto_efficiency} The vectorized operators implemented in {\sc Smilei}\xspace are designed to be efficient when a systematic sorting algorithm is used, as described above. Their performance is first evaluated using the 3D homogeneous Maxwellian benchmark from section \ref{sec:pic_scalar_performance}, as a function of the number of particles per cell (PPC) ranging from 1 to 256. This study is focused on the particle operators (interpolator, pusher, projector, sorting) and discards the computational costs of the Maxwell solver and of the communications between processes. The patch size is kept constant at $8\times8\times8$ cells. The test runs have been performed on 4 clusters equipped with different Intel architectures typically used for {\sc Smilei}\xspace: Haswell, Broadwell, Knights Landing (KNL) and Skylake. The clusters' properties and the code compilation parameters are described in \ref{compilation}. Each run has been performed on a single node. 
Since the number of cores varies from one architecture to another, the runs were conducted so that the load per core (i.e. per OpenMP thread) is constant. In other words, the number of patches per core is constant. The total number of patches for each architecture is determined so that each core has 8 patches to handle. There is 1 MPI process per NUMA domain (NUMA stands for non-uniform memory access), which means a single process per socket on Haswell, Broadwell and Skylake nodes, which all have 2 sockets per node. A KNL node, configured in quadrant cache mode, has only 1 socket, and among the 68 available cores, 64 are used for the simulations and 4 for the system. The total number of patches is $8\times4\times3$ on Haswell (24 cores), $8\times8\times4$ on Broadwell (32 cores), $8\times8\times8$ on KNL (64 cores), $8\times8\times6$ on Skylake (48 cores). \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/vecto_particle_times_o2_all.pdf} \caption{Particle computational cost as a function of the number of PPC. Vectorized operators are compared to their scalar versions on various cluster architectures. Note that the Skylake processor accepts both the AVX512 and AVX2 instruction sets.} \label{fig_particle_times} \end{figure} The first series of tests considers an interpolation shape function of order 2 and compares the computation times to advance a particle (interpolation, pusher, projection) per iteration. The results for both scalar and vectorized versions are shown in Fig. \ref{fig_particle_times}. Contrary to the scalar mode, the vectorized operators' efficiency depends strongly on the number of PPC. It shows improved efficiency, compared to the scalar mode, above a certain number of PPC, denoted ``inversion point'' in Fig. \ref{fig_particle_times}. The lower performance of the vectorized operators at low PPC can be easily understood. First, their complexity is higher than that of their scalar counterparts. As explained in sec. \ref{sec:vectorization_operators}, the interpolation and projection masks increase the arithmetic intensity of the operations on a single particle. Moreover, there are two additional loops, one over the cells and one sub-loop over groups of particles. They are ineffective if cells are practically empty of particles. Finally, SIMD instructions operate at a lower clock frequency than scalar ones \cite{Intel2018}. For low numbers of PPC, these overheads are not compensated by the more efficient SIMD operations because the vector registers are not entirely filled and do not provide enough gain. The location of the inversion point depends on the architecture: 10 PPC for Haswell and Broadwell, 12 for KNL, and 10 for Skylake, considering the most advanced instruction set for each processor type. Since Skylake can handle both the \textsc{AVX512} and the \textsc{AVX2} instruction sets, the results from the two compilations are presented in Fig. \ref{fig_particle_times}a for comparison. The compilation in \textsc{AVX2} does not affect the run performance below the inversion point, where the scalar mode dominates. However, the \textsc{AVX512} mode appears up to 30\% more efficient than \textsc{AVX2} above 10 PPC. In vectorized mode, the computation time decreases with the number of PPC and stabilizes after 100 PPC around a final value that depends on the architecture. On Haswell, a speed-up factor of 1.9 is obtained at 512 PPC compared to the scalar mode. On Broadwell, the same value is reached at 256 PPC. 
On KNL, a factor of 2.8 is obtained at 512 PPC (the highest among all considered architectures), but this fills the entire high-bandwidth memory (16 GB), preventing tests at higher PPC. On Skylake, a maximum gain of 2.1 is reached at 256 PPC with \textsc{AVX512}, while \textsc{AVX2} reaches 1.7 at 1024 PPC. Neglecting memory and cache effects, an ideal vectorization should give an almost constant computation time per particle when the vector registers are filled. In other words, the maximum gain from vectorization should be equal to the vector register size (8 in double precision on the most recent architectures, KNL and Skylake, when compiled with the AVX512 instruction set, and 4 in double precision with the AVX2 instruction set for Haswell and Broadwell). As demonstrated above, {\sc Smilei}\xspace's vectorized algorithms are not perfect due to the nature of the operators (interpolation and projection), which induce the presence of semi-vectorized or scalar sequences. This is highlighted in Fig. \ref{fig_particle_operators}, showing the computational cost of each operator from the same test case, simulated on Skylake only. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/particle_operator_times_skl.pdf} \caption{Computational cost of the four particle operators as a function of the number of PPC, for the Skylake test cases, in vectorized and scalar modes. Scalar cases are always compiled with the most advanced instruction set.} \label{fig_particle_operators} \end{figure} With \textsc{AVX512} vectorization, the projector remains the most time-consuming operator, even though it features the highest gain compared to the scalar mode: $\times$3.5 at 256 PPC (and $\times$2.5 with \textsc{AVX2}). Its cost decreases from 66\% of the total particle time at 1 PPC to 37\% at 256 PPC. The interpolator is most efficient above 32 PPC with \textsc{AVX512} vectorization, reaching a speed-up factor of 2 over the scalar mode, and of 1.5 over \textsc{AVX2}. The pusher remains negligible: it represents 1 to 12\% of the total particle time, depending on the number of PPC. As it is automatically vectorized with the compilation flag \texttt{-O3}, there is little speed gain. However, it benefits from the decomposition of particle data into blocks, thus showing higher efficiency above 128 PPC. The cost of the sorting operation does not depend much on the number of PPC (although, in relative terms, it varies from 1 to 18\% of the total particle cost). Indeed, the complexity of the cycle sort would only increase with a higher proportion of particles changing cells (higher temperatures or higher $\Delta t/\Delta x$), which does not vary with the number of PPC in a given thermal plasma benchmark. This step does not benefit from vectorization (it mainly consists of data transfers), so there is no difference between \textsc{AVX2} and \textsc{AVX512}. However, its cost is low compared to the speed gain from the interpolator and projector optimizations, consequently ensuring an overall improvement. The same parametric study is performed with different electron temperatures $T_e$ ranging from 10 keV to 100 MeV and an ion temperature of $T_i = T_e / 10$. The results confirm that the sorting cost increases with the temperature whereas the interpolation, projection and pusher costs are unchanged. At 100 MeV, the sorting takes 7\% longer than the interpolator. At this temperature, the thermal velocity (most probable velocity) is close to $c$ and more than half of the particles change cell every time step. 
These are extreme conditions for EM PIC codes, and yet the cost of the sorting is still compensated by the vectorization speed-up. The same parametric study has been conducted with a 4th-order interpolation shape function. The global trends are similar to those at order 2: in scalar mode, times do not depend significantly on the number of PPC, while they decrease in vectorized mode. The inversion point is located at 10 PPC for Haswell and Broadwell, 4 for KNL and 6 for Skylake. At 256 PPC, the vectorized particle operators (\textsc{AVX512}) are respectively 1.4 times faster on Haswell, 1.7 on Broadwell, 5 on KNL and 2.8 on Skylake, compared to the scalar version. The most recent architectures benefit the most from vectorization, in particular KNL, which may prove even faster with more PPC. \section{Adaptive Vectorization Mode} \label{sec:adaptive_operators} According to section \ref{sec:vecto_efficiency}, the scalar operators are significantly more efficient when the number of PPC is under the inversion point, which depends on the architecture. However, in both laser-matter interaction and astrophysical cases, the number of PPC may be vastly different from one domain to another, and this number may evolve significantly during a simulation. Consequently, the vectorized (or scalar) operators may not be adequate in all spatial regions, or at all times. This issue can be addressed by using an adaptive vectorization mode which can locally switch between the scalar and vectorized operators during the simulation, choosing the most efficient one in the region of interest. Every given number of time steps, for each patch, and for each species, the most efficient operator is determined from the number of PPC. This provides an automated, fine-grain adjustment in both space and time. It also contributes to the dynamic load balancing since patches with more PPC will be treated more efficiently. This mode is hereafter referred to as ``adaptive''. Note that two different adaptive modes exist in {\sc Smilei}\xspace: \begin{itemize} \item Adaptive mode 1: the sorting methods of the scalar and vectorized operators are different, respectively the standard coarse-grain sort and the cycle sort described in section \ref{sort}. Switching modes thus requires sorting particles again. \item Adaptive mode 2: the cycle sort method is used with both operators. The scalar operators have been adapted to fit the new sorted structure. \end{itemize} A naive criterion to determine which operators should be applied locally consists in using a threshold on the average number of particles per cell. Another simple method, implemented at first, consists in counting the number of cells whose particle count lies below or above the inversion point. Then, the ratio of the two is computed. A threshold on this ratio determines the most suitable operators. With a statistical study, an adequate threshold could be found, although this criterion still selected the wrong operators when the particle distribution was broad. This criterion is nonetheless computationally cheap and satisfactory in many cases. A more elaborate empirical criterion has been developed. It is computed from the parametric studies presented in Sec.~\ref{sec:vecto_efficiency}. Fig. \ref{fig_particle_times} summarizes their results and indicates, for a given species in a given patch, the approximate time to compute the particle operators using both the scalar and the vectorized operators. 
The computation times have been normalized to that of the scalar operator for a single particle and 2nd-order shape functions. The outcomes from different architectures appear sufficiently similar to consider an average of their results, as shown in the same figure. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/vecto_efficiency_o2_all_fit.pdf} \caption{a) Normalized time per particle spent for all particle operators in the scalar and vectorized modes with various architectures, and 2nd-order interpolation shape functions. b) Averages of the curves in panel a), and polynomial regressions. } \label{fig_vecto_efficiency_o2_all_fit} \end{figure} A linear regression (in $\log N$) of the average of all the scalar results reads \begin{eqnarray} S(N) = -1.17 \times 10^{-2} \log{\left( N \right)} + 9.47 \times 10^{-1} \end{eqnarray} where $S$ is the computation time per particle normalized to that with 1 PPC, and $N$ is the number of PPC. For the average of the vectorized results, a fourth-order polynomial regression reads \begin{eqnarray} V(N) = -4.27 \times 10^{ -3 } \log{ \left( N \right)}^4 \\ \nonumber + 3.69 \times 10^{ -2 } \log{ \left( N \right)}^3 \\ \nonumber + 4.07 \times 10^{ -2 } \log{ \left( N \right)}^2 \\ \nonumber -1.07 \log{ \left( N \right) } \\ \nonumber + 2.88 \end{eqnarray} These functions are implemented in the code to determine approximately the normalized single-particle cost. Assuming every particle takes the same amount of time, the total time to advance a species in a given patch can then be simply evaluated with a sum over all cells within the patch as \begin{equation} \sum_{c \ \in\ patch\ cells} N(c) \times F\!\left(N(c)\right) \end{equation} where $F$ is either $S$ or $V$. Comparing these two total times, for $S$ and $V$, determines which of the scalar or vectorized operators should be locally selected. This operation is repeated every given number of time steps to adapt to the evolving plasma distribution. Note that similar approximations may be computed for specific processors instead of using a general rule. In {\sc Smilei}\xspace, other typical processors have been included, requiring an additional compilation flag. 
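To make this criterion concrete, the sketch below evaluates the fitted costs and the per-patch comparison for the 2nd-order case. It is a minimal, free-standing illustration with hypothetical names, not {\sc Smilei}\xspace's actual implementation; natural logarithms are assumed, consistently with the inversion point quoted below.
\begin{verbatim}
#include <cmath>
#include <vector>

// Approximate normalized time per particle of the scalar operators,
// from the linear fit in log(N) given above (natural log assumed).
double scalarCost(double N) {
    return -1.17e-2 * std::log(N) + 9.47e-1;
}

// Same for the vectorized operators (4th-order polynomial fit in log(N)).
double vectorCost(double N) {
    const double l = std::log(N);
    return (((-4.27e-3 * l + 3.69e-2) * l + 4.07e-2) * l - 1.07) * l + 2.88;
}

// Decide, from the per-cell particle counts of one patch and one species,
// whether the vectorized operators are expected to be faster.
bool useVectorizedOperators(const std::vector<int>& partsPerCell) {
    double timeScalar = 0.0, timeVector = 0.0;
    for (int n : partsPerCell) {
        if (n == 0) continue;  // empty cells contribute no particle work
        timeScalar += n * scalarCost(n);
        timeVector += n * vectorCost(n);
    }
    return timeVector < timeScalar;
}
\end{verbatim}
Summing $N\,F(N)$ cell by cell, rather than applying the fits to the patch-averaged number of PPC, is what keeps the criterion robust when the particle distribution within the patch is broad, which is precisely where the simpler counting criterion described above fails. To confirm that the adaptive mode results in the lowest particle computation time of both scalar and vectorized modes, Fig. \ref{fig_vecto_particle_times_dynamic_all} shows the measured times in the same Maxwellian plasma cases. In this particular configuration, the plasma remains uniform in the whole domain during the simulation, but the computation times vary depending on the (initial) number of PPC. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/vecto_particle_times_dynamic_all.pdf} \caption{Particle computation times as a function of the number of PPC in the adaptive modes 1 and 2, for various architectures. } \label{fig_vecto_particle_times_dynamic_all} \end{figure} As expected, the inversion point between scalar and vectorized modes is located between 10 and 12 PPC. The adaptive mode 1 is slightly more efficient than mode 2 below 12 PPC because of the slightly more expensive sorting method used in mode 2. As expected, both modes provide the same performance above 12 PPC. Fig. \ref{fig_vecto_efficiency_o4_all_fit} shows the normalized times for a 4th-order shape factor. Contrary to the 2nd-order case, the differences between architectures are more pronounced and the use of a general fitting function is less reliable. 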
Nevertheless, averages for all architectures and polynomial regressions are shown in the same figure, as they provide a sufficient estimate of the vectorization speed gain. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/vecto_efficiency_o4_all_fit.pdf} \caption{a) Normalized time per particle spent for all particle operators in the scalar and vectorized modes with various architectures, and 4th-order interpolation shape functions. b) Averages of the curves in panel a), and polynomial regressions. } \label{fig_vecto_efficiency_o4_all_fit} \end{figure} \section{Simulation performance benchmarks} \label{sec:simulation_benchmark} In this section, the advantages of the adaptive mode are presented considering three different simulation setups. The first two are related to laser-plasma interaction at ultra-high intensity, and the last one to astrophysics. All three setups have been chosen as typical of current interests of the plasma simulation community, and realistic parameters were selected for each. In all cases, the second-order interpolation and (Esirkepov) projection were used. All simulations were run on the Skylake partition of the \textit{Irene Joliot-Curie} supercomputer. \subsection{Laser Wakefield Acceleration} Laser wakefield acceleration (LWFA) consists in accelerating electrons in the wake of a laser propagating through a low-density (transparent) plasma. A plasma wave is generated in the wake of the laser pulse as a result of the collective response of the electrons to the electromagnetic field associated with the laser pulse\cite{Malka2002,Esarey2009,Malka2012}. At large laser intensities, nonlinear effects may lead to a succession of electron-depleted cavities separated by steep and dense electron shells, instead of a smooth sinusoidal wave. In some specific cases, the cavities look like bubbles, empty of electrons, and one then speaks of the \textit{bubble regime} of acceleration\cite{Pukhov2002}. At the back side of the bubble (the front side being the one closest to the laser pulse), some electrons can be injected in the first half of the bubble and then accelerated forward due to the existence of a strong negative longitudinal electric field, eventually reaching speeds close to that of light. This method is used in the laboratory to accelerate electrons up to energies at the multi-GeV level over very short distances of a few mm to a few cm. A strong effort is made to improve the control and quality of the produced electron beams. The beam quality depends on various plasma and laser parameters, and this effort strongly relies on massive 3D PIC simulations. In this Section, we study the impact of the vectorization strategy on LWFA simulations. To do so, three series (considering 4, 8 and 16 PPC, respectively) of three simulations (considering the scalar, vectorized and adaptive modes) are presented. In these simulations, a laser pulse with wavelength $\lambda$ (corresponding to an angular frequency $\omega=2\pi c/\lambda$) is sent onto a fully ionized hydrogen plasma. The plasma density profile consists of a long linear ramp (from $x=100\ c/\omega$ to $1280\ c/\omega$) preceding a plateau at the density $n_0 = 5 \times 10^{-3} n_c$, with $n_c = \epsilon_0 m_e \omega^2/e^2$ the critical density. The laser pulse, with maximum field strength $a_0=10$ (in units of $m_e c \omega/e$), is injected from the $x=0$ boundary. It has a Gaussian temporal profile of $20\pi\ \omega^{-1}$ FWHM (Full Width at Half Maximum) and a Gaussian transverse spatial profile of waist $24\pi \ c / \omega$. 
Its propagation through the plasma is followed up to a distance of $2050\ c/\omega$. Yet, instead of simulating the full propagation length, which would be too costly, the simulation domain consists of a moving window, sufficiently large to contain the laser and a few wakefield periods, traveling at the laser group velocity. The overall domain has dimensions of $503 \times 503 \times 503\ (c/\omega)^3$. It is discretized in $1280 \times 320 \times 320$ cells, corresponding to spatial steps of $\Delta x = 0.39\ c / \omega \sim \lambda / 16$ and $\Delta y = \Delta z = 1.57\ c / \omega \sim \lambda/4$. The time step is computed from the CFL condition as $\Delta t = 0.96\, \Delta_{\rm CFL} \simeq 0.31\ \omega^{-1}$. A patch contains $10\times10\times10$ cells, for a total of $128 \times 32 \times 32$ patches. Only the electron species is considered and an immobile ion background is assumed, taking advantage of the charge-conserving current deposition scheme (no Poisson solver is used at the initial time). This simulation setup was run with the scalar, vectorized and adaptive modes, with 4, 8 and 16 PPC at initialization. The adaptive mode reconfiguration is done every 50 iterations. These simulations were run on 96 Skylake processors (48 nodes), corresponding to 2304 cores, with 1 MPI process per processor and 24 OpenMP threads per MPI process. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/lwfa_figures/LWFA_3d_ne_vecto_9000.pdf} \caption{Laser wakefield acceleration. a) Volume rendering of the electron charge density (in units of $e n_c$) and laser magnetic field ($B_y$, in units of $m_e \omega/e$), at time $t =2770\ \omega^{-1}$. b) Patches using a vectorized operator for the electron species at the same time. An animated version of these quantities can be viewed in the supplementary materials.} \label{fig_LWFA_3d_ne_vecto_9000} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/lwfa_figures/LWFA_ne_vecto_09000.pdf} \caption{Laser wakefield acceleration. Same as Fig.~\ref{fig_LWFA_3d_ne_vecto_9000} but for a 2D slice at $z = 0.5\ L_z$ ($L_z$ being the domain length in the $z$ direction). a) Electron charge density $-n_e / n_c$ at time $t =2770\ \omega^{-1}$. b) Patches using a vectorized operator for the electron species at the same time.} \label{fig_LWFA_2d_ne_vecto_9000} \end{figure} Figure \ref{fig_LWFA_3d_ne_vecto_9000}a shows a volume rendering of the electron density illustrating the wakefield cavities surrounded by dense electron layers. For the reader's convenience, Fig. \ref{fig_LWFA_2d_ne_vecto_9000}a presents a 2D slice of the electron density taken at $z = 0.5\ L_z$ ($L_z$ being the domain length in the $z$ direction). At the rear of each cavity, very high density electron bunches are accelerated by the strong charge-separation electric field. These beams have a density that can be several orders of magnitude higher than the initial plasma density, which translates into a large load imbalance. In particular, patches in these high-density regions see their average number of particles per cell largely exceed the initial value, and will thus benefit the most from the vectorized operators. Figure \ref{fig_LWFA_3d_ne_vecto_9000}b (see also Fig.~\ref{fig_LWFA_2d_ne_vecto_9000}b) highlights the regions where the adaptive mode has switched to vectorized operators. 
As expected, these regions correspond to the patches containing a large number of PPC, such as the rear side of the wakefield cavities (containing the electron beams) and a thin circle around the first cavity at $x = 200\ c/\omega$. Let us note an additional advantage of the adaptive mode: it helps mitigate the load imbalance at the node level, as patches holding many particles can be treated more efficiently than those holding only a few of them. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/lwfa_figures/lwfa_particle_time_ppc_comparision.pdf} \caption{Laser wakefield acceleration. Temporal evolution of the mean particle computation time (only in the particle operators) spent per particle per iteration at 4, 8 and 16 PPC (panels a, b and c, respectively) for the scalar, vectorized and adaptive modes.} \label{fig_LWFA_particle_time} \end{figure} Figure \ref{fig_LWFA_particle_time} presents the temporal evolution of the computation (node) time per particle and iteration considering 4, 8 and 16 PPC (panels a, b and c, respectively). One recovers that when using few particles per cell (4 PPC in panel a), the scalar operator is the most efficient one, while with a larger number of particles per cell (16 PPC in panel c), the vectorized one performs better. Importantly, the adaptive mode selects the optimal operator in all three cases and provides the most efficient approach. Overall, considering 4 PPC, the computation time spent in the particle operators is close to $680$ s for both the scalar and adaptive modes, and $1080$ s for the vectorized one. As most of the simulation box contains few PPC, the adaptive mode adequately selects the scalar operator. With 8 PPC, the computation times for all three modes are similar, equal to $1300$ s (scalar), $1400$ s (vectorized) and $1260$ s (adaptive). This number of PPC is indeed close to what was referred to as the inversion point in Sec.~\ref{sec:vecto_efficiency}. With 16 PPC, the vectorized mode is the most efficient with a computation time of $1950$ s, while the scalar mode is significantly slower with a particle computation time of $2500$ s. The adaptive mode hence adequately selects the vectorized operator, leading to the same time of $\sim 1950$ s, see also Fig. \ref{fig_LWFA_particle_time}c. Let us finally note that the computation time per particle per iteration decreases with the number of PPC using the adaptive mode, while it is barely sensitive to the number of PPC with the scalar one. At the modest number of 16 PPC, the vectorized operator already decreases the computation time by more than 20\% with respect to the scalar one. Finally, for all cases, the time allocated to the adaptive reconfiguration process is well below 1\% of the simulation time. \subsection{Laser interaction with a solid-density thin foil} Laser interaction with high-density ($n_0 \gg n_c$) plasmas created by irradiating solid-density foils is at the center of various experimental and theoretical investigations by the laser-plasma community. These studies are motivated by the broad range of physical mechanisms and potential applications of this kind of interaction, ranging from electron and ion acceleration and new radiation sources (from THz to XUV and $\gamma$) to the possibility of addressing strong-field quantum electrodynamics effects \cite{daido2012,macchi2013,dipiazza2012}.
In this Section, we illustrate the impact of the vectorization strategy on the simulation of such a high-density target irradiated by an ultra-intense laser pulse. To do so, three simulations using either the scalar, vectorized or adaptive operators are reported. In these simulations, a laser pulse with wavelength $\lambda$ (corresponding to an angular frequency $\omega = 2\pi c/\lambda$) is focused at normal incidence onto a carbon foil located at $\sim 37.7\ c/\omega$ ($6\lambda$) from the $x=0$ boundary. The carbon foil is a fully-ionized plasma whose density increases from 0 to its maximum $n_0 = 492\ n_c$ linearly over $\sim 12.6\ c/\omega$ ($2\lambda$) (this ramp mimics a pre-plasma), then forms a plateau with thickness $\sim 12.6\ c/\omega$ ($2\lambda$). The foil is otherwise uniform over the full simulation domain in the transverse ($y$ and $z$) directions. At initialization, both carbon ions and electrons have the same uniform temperature of 1 keV. The laser pulse, with maximum field strength $a_0=100$ (in units of $m_e c \omega/e$), is injected from the $x=0$ boundary. It has a fourth-order hyper-Gaussian temporal profile of FWHM $\sim 188.5\ \omega^{-1}$ ($30\lambda/c$) and a transverse Gaussian profile with waist $\sim 12.6\ c/\omega$ ($2\lambda$). It is focused at the front of the pre-plasma ($x=6\ \lambda$) and at the center of the simulation box in the $y$ and $z$ directions. The simulation lasts for 100 laser periods ($\lambda/c$), the time needed for the laser interaction to complete. The simulation domain extends over $\sim 100 \times 67 \times 67 (c/\omega)^3$ (approximately $16\lambda \times 11\lambda \times 11\lambda$) discretized in $1024 \times 256 \times 256$ cells, corresponding to a spatial resolution $\Delta x = \lambda / 64 \simeq 0.10 c/\omega$ and $\Delta y = \Delta z = \lambda / 24 \simeq 0.26 c/\omega$, and the time step is $\Delta_t = 0.96 \Delta_{\rm CFL} \sim 0.083\ \omega^{-1}$. Cells containing plasma are initialized with 32 randomly-distributed PPC and the simulation domain is decomposed into $128\times 32 \times 32$ patches, each patch containing $8\times8\times8$ cells. Three simulations have been run considering the scalar, vectorized and adaptive modes, respectively. For the latter, the adaptive mode reconfiguration is done every 8 iterations. These simulations run on 64 Skylake processors (32 nodes, 1536 cores) with 1 MPI process per processor and 24 OpenMP threads per MPI process. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/tf_figures/TF_3d_ne_vecto_it6450.pdf} \caption{Laser over-dense foil interaction. a) Volume rendering of the normalized electron density $n_e / n_c$ at time $t = 537\ \omega^{-1}$. Only half of the target (subset $0 < z \leq 0.5 L_z$) is shown. The cross-section highlights the inner target structure in addition to the outer target distortion effects. b) Patches using a vectorized operator (adaptive mode) for the electron species at the same time. An animated version of these quantities can be viewed in the supplementary materials.} \label{fig_TF_ne_vecto_3D} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/tf_figures/TF_ne_vecto_it5700.pdf} \caption{Laser over-dense foil interaction. a) Slice at $z = 0.5 L_z$ of the electron density $n_e / n_c$ at time $t = 475 \omega^{-1}$ corresponding to the end of the laser interaction.
b) Patches using a vectorized operator (adaptive mode) for the electron species at the same time.} \label{fig_TF_ne_vecto} \end{figure} Figure \ref{fig_TF_ne_vecto_3D}a illustrates the deformation of the foil as it is irradiated by the ultra-intense laser pulse. Indeed, the overdense (i.e. with density $n_0>n_c$) plasma is opaque to the laser light, which is thus reflected at the foil's surface. As the laser pulse bounces off the target, it exerts a strong (radiation) pressure onto its surface, which is pushed inward, a process known as {\it hole boring} and highlighted in Fig.~\ref{fig_TF_ne_vecto_3D}a. At the same time, the laser-plasma interaction in the pre-plasma at the target front side leads to the copious production of relativistic electrons that propagate throughout the foil, and eventually escape at its back as a hot, low-density electron gas. This tenuous electron plasma escaping from the target, also visible in Fig.~\ref{fig_TF_ne_vecto_3D}a, is better seen in Fig.~\ref{fig_TF_ne_vecto}a, which shows a 2D slice (at $z = 0.5 L_z$) of the electron density in logarithmic scale. Figure \ref{fig_TF_ne_vecto_3D}b presents, for the simulation in adaptive mode, the distribution of patches relying on vectorized operators. Interestingly, these patches are located where the particle density is high, that is, in the region corresponding to the initial target location minus the front-side hole-boring region that has been depleted of particles. Note also that patches located in the region at the back of the target, where hot electrons are escaping, use the scalar operator, as the hot electron gas is tenuous and thus described by only a few PPC. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/tf_figures/TF_particle_time.pdf} \caption{Laser over-dense foil interaction. Temporal evolution of the particle computation time (only in the particle operators) spent per particle per iteration for the scalar, vectorized and adaptive modes. Among all MPI processes, average, minimum and maximum times are shown. Note that the time acquisition is started at $t=37 \omega^{-1}$ when the laser strikes the target.} \label{fig_TF_particle_time} \end{figure} Figure \ref{fig_TF_particle_time} presents the temporal evolution of the computation (node) time per particle and iteration. The mean value (solid line) shows that for these simulations, the adaptive mode is the one that provides the most efficient treatment over the full simulation. In addition to the mean value, the computation (node) time per particle and iteration was also computed for each MPI task separately, and the minimum and maximum values are reported in Fig. \ref{fig_TF_particle_time}. The maximum value is particularly interesting as it refers to the computation time on the least efficient MPI task. Following this value in time allows one to see how the adaptive mode adapts to each phase of the physical process. At early times, the dense target is associated with a large number of PPC, the vectorized operator is the most efficient one, and the adaptive mode adequately selects it. At later times, the electron population expands, its density decreases and more and more patches with few PPC are generated. As a result, the vectorized mode becomes less and less efficient and, at $t\sim 400\omega^{-1}$, the scalar operator becomes the more efficient choice. The adaptive mode thus eventually selects the scalar operator, and throughout the simulation, the adaptive mode is the one that proves the most efficient.
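The per-task statistics discussed here can be gathered with simple reductions over the MPI ranks; the sketch below (Python with mpi4py, purely for illustration, since {\sc Smilei}\xspace itself is written in C++) shows the idea:
\begin{verbatim}
# Illustrative sketch: reduce the per-rank particle time into the
# min/mean/max statistics plotted in the figure.
from mpi4py import MPI

comm = MPI.COMM_WORLD

def particle_time_stats(local_time, local_nparticles):
    """local_time: time spent in the particle operators this iteration."""
    t = local_time / max(local_nparticles, 1)  # time per particle on this rank
    tmin = comm.allreduce(t, op=MPI.MIN)
    tmax = comm.allreduce(t, op=MPI.MAX)
    tmean = comm.allreduce(t, op=MPI.SUM) / comm.Get_size()
    return tmin, tmean, tmax
\end{verbatim}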
Overall, after 7550 iterations, the computation time spent in the particle operators is 912 s with the scalar mode, 647 s with the vectorized mode and 604 s with the adaptive mode. The adaptive mode thus reduces the time spent in the particle operators by $\sim 34\%$ with respect to the scalar mode and, for this case, the overhead due to the adaptive reconfiguration is again below one percent. \subsection{Mildly-relativistic collisionless shock} Ubiquitous in astrophysics, collisionless shocks have been identified as one of the major sources of high-energy particles and radiation in the Universe~\cite{KirkDuffy1999}, and, as such, have been the focus of numerous PIC simulations over the last decade~\cite{spitkovsky2008,sironi2013,plotnikov2018}. Collisionless shocks can form during the interpenetration of two colliding plasmas. In the absence of an external magnetic field, the Weibel instability~\cite{weibel1959} provides the dissipation mechanism necessary for shock formation. This instability quickly grows in the overlapping plasma region (see, e.g. \cite{grassi2017}), and leads to the formation of current filaments associated with a strong magnetic field perturbation. At the end of the linear phase, the magnetic and current filaments distort into a region of electromagnetic turbulence, decelerating and transversely heating the flow's particles, ultimately leading to their isotropization and thermalization. This leads to a pile-up of particles in the turbulent region, during which both the plasma density and pressure increase up to the formation of a shock front. To illustrate this process and the impact of adaptive vectorization on its simulation, three series (considering either 4, 8 or 32 PPC) of three simulations (using the scalar, vectorized and adaptive modes) are presented. In these simulations, two counter-propagating electron-positron plasma flows are initialized, each filling half of the simulation domain (in the $x$-direction). Both flows, with density $n_0$, have opposite drift velocities of $\pm 0.9~c$ (in the $x$-direction), corresponding to a Lorentz factor $\gamma_0 = 2.3$, so that they collide at the center of the 3D simulation domain. The domain size is $300 \times 28.5 \times 28.5 \ (c/ \omega)^3$, with $\omega=\sqrt{e^2 n_0/(m_e\epsilon_0)}$ the electron plasma frequency associated with the initial flow density $n_0$. The cell sizes were set to $\Delta_x \simeq 0.11 \ c/\omega$ and $\Delta_y = \Delta_z \simeq 0.15 \ c / \omega$, and the time step to $\Delta_t = 0.95 \Delta_{\rm CFL}$. This ensures a good resolution of the relativistic electron skin-depth $d_{e,{\rm rel}} = \sqrt{\gamma_0} c/\omega \simeq 1.5\,c/\omega$ and thus of the Weibel filaments. The simulation lasts $100\ \omega^{-1}$. Each patch contains $8 \times 8 \times 8$ cells, initialized with either 4, 8 or 32 randomly-distributed PPC. The adaptive mode reconfiguration is done every 8 iterations. These simulations have been run on 64 Skylake processors (32 nodes), corresponding to 1536 cores. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/weibel_figures/Weibel_3d_ne_vecto_it510.pdf} \caption{a) Volume rendering of the normalized electron density $n_e / n_c$ at time $t =34\ \omega^{-1}$ after the beginning of the collision. b) Patches in vectorized mode for the electron species at the same time.
An animated version of these quantities can be viewed in the supplementary materials.} \label{fig_Weibel_ne_vecto_3D} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/weibel_figures/Weibel_bz_ne_vecto_it510.pdf} \caption{a) Slice of the transverse normalized magnetic field $B_z / B_0$ during the plasma flow collision at $z = 0.5 L_z$, $L_z$ being the domain length in the $z$ direction, at time $t = 34\ \omega^{-1}$. b) Slice of the normalized electron density $n_e / n_c$ at time $t = 34\ \omega^{-1}$ and $z = 0.5 L_z$. c) Computational mode (scalar or vectorized) of the electron species for each patch in the slice $z = 0.5 L_z$ at $t = 34\ \omega^{-1}$.} \label{fig_Weibel_bz_ne_vecto} \end{figure} Figure \ref{fig_Weibel_ne_vecto_3D}a shows a 3D volume rendering of the electron density at an early stage of the interaction ($t = 34\omega^{-1}$). The Weibel filamentation region is clearly visible, as well as the onset of turbulence in the central region. Figure \ref{fig_Weibel_ne_vecto_3D}b shows, at the same time, the distribution of patches for which the adaptive mode switched to the vectorized operator. It is clear that these patches are located in the high-density regions at the position of the Weibel filaments, as well as in the central region where the density increases by a factor of nearly 4, as expected for a fully formed 3D shock. For the reader's convenience, a 2D slice taken at $z = 0.5\ L_z$ is also presented in Fig.~\ref{fig_Weibel_bz_ne_vecto}. Panel a also presents the magnetic-field structure characteristic of the Weibel instability and of its later, more turbulent state in the central region. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figures/weibel_figures/Weibel_particle_time_ppc_comparision.pdf} \caption{For the collisionless shock simulations: temporal evolution of the mean particle computation time (only in the particle operators) spent per particle per iteration at 4, 8 and 32 PPC (panels a, b and c, respectively) for the scalar, vectorized and adaptive modes.} \label{fig_Weibel_particle_time} \end{figure} Figure \ref{fig_Weibel_particle_time} provides the detailed evolution of the computation (node) time per particle per iteration at 4, 8 and 32 PPC (panels a, b and c, respectively). As expected, at 4 PPC, the scalar operator is the most efficient and the adaptive mode adequately selects it, leading to a similar computation time. As the simulation goes on, particles start piling up in the overlapping region at the center of the simulation domain. The effective number of PPC in the central patches increases and the vectorized operator becomes more and more advantageous with respect to the scalar one. The adaptive mode benefits from this speed-up by selecting the vectorized operator wherever it allows for improved efficiency. For 8 and 32 PPC, the vectorized operator and the adaptive mode lead to the highest efficiency. Overall, at 4 PPC, the computation time spent in particle operators for the full simulation (1515 iterations) is 433 s, 534 s and 405 s in the scalar, vectorized and adaptive modes, respectively. Even with such a small number of PPC, the adaptive approach allows for a 6\% gain in efficiency with respect to the scalar mode. At 8 PPC, the particle computation time is 826 s, 723 s and 709 s for the scalar, vectorized and adaptive modes, respectively. The gain in efficiency thus increases to 14\%.
Finally, at 32 PPC, the particle computation time is 3160 s for the scalar mode and is reduced to 1660 s for both the vectorized and adaptive modes. In this case, the gain in efficiency due to the vectorized operators is 47\%, which corresponds to a speed-up of almost $\times 2$ for this configuration. Last, we note that, in this configuration again, the time per particle per iteration decreases with the number of particles. In addition, the overhead due to the adaptive reconfiguration remains for all cases below 1\% of the full simulation time. \section{Conclusion} \label{sec:conclusion} The new vectorized particle operators implemented in {\sc Smilei}\xspace rely on an optimized cycle sort. It sorts particles by dual cell (particles with the same dual-cell index are contiguous in memory) at all times. It takes advantage of the low number of particles changing cells between time steps and of the fact that there are several particles in each cell. The number of particle copies is minimized and the algorithm complexity is reduced to $\mathcal{O}\left(N_{part}\right)$ by keeping track of the cell locations, as in a counting sort. The interpolation operator has been efficiently vectorized thanks to this sorting method, which avoids random memory access and facilitates data reuse. Although the pusher was already efficiently vectorized using the particles' structure of arrays, it is now applied more efficiently on particle groups instead of the full arrays. The Esirkepov projection, hardly vectorizable in its naive implementation due to concurrent memory accesses, takes advantage of the new cycle sort and is now also efficiently vectorized. The presented method computes particles by groups of 8 and uses temporary buffers sized accordingly (a reduction step is eventually necessary to update the main arrays). Clusters of 4 cells are considered to limit the memory footprint and the number of reductions. An improved efficiency of the vectorized operators is obtained, compared to their original scalar implementation, when the number of particles per cell is sufficiently large, generally above 8 particles per cell. This threshold depends on the processor architecture (vector instruction set) and the order of the interpolation shape functions. In all cases, the vectorized operators, combined with the cycle sort, significantly speed up the particle processing when the number of particles per cell is several multiples of the vector register length. But when the number of particles per cell is lower than the vector register length, the vectorized operators become less efficient than their scalar counterparts. This issue is addressed by using an \textit{adaptive mode} able to pick locally (for each patch and each species) and dynamically (every given number of time steps) the most efficient version. Simulations presenting a strong imbalance in the number of particles per cell contain both vectorized and scalar patches, depending on their load. If the plasma evolves, the mode of each patch changes accordingly. This adaptive approach results in a simulation cost equal to or lower than that of the best static mode (scalar or vectorized). The adaptive reconfiguration overhead appears negligible. The optimal scenario corresponds, as expected, to a fully vectorized simulation, but this is not suitable for all physical cases, hence the adaptive approach. This adaptive mode does not require any input from the user as the algorithm detects automatically which operators to pick.
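The selection logic can be summarized by the following sketch (illustrative Python with placeholder cost functions, not {\sc Smilei}\xspace's actual C++ implementation):
\begin{verbatim}
# Illustrative sketch of the adaptive selection: every reconfiguration
# period, each patch compares empirical cost models for the scalar and
# vectorized operators at its current mean number of particles per
# cell (PPC) and keeps the cheaper one.  The cost functions below are
# placeholders chosen to reproduce an inversion point near 8 PPC.
scalar_cost = lambda ppc: 1.0              # roughly flat with PPC
vecto_cost = lambda ppc: 0.5 + 4.0 / ppc   # improves as PPC grows

def reconfigure(patch_ppc):
    """Return the operator mode chosen for each patch."""
    return ["vectorized" if vecto_cost(ppc) < scalar_cost(ppc) else "scalar"
            for ppc in patch_ppc]

print(reconfigure([4, 8, 16, 64]))  # low-PPC patches stay scalar
\end{verbatim}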
However, the implementation is based on empirical, architecture-dependent metrics, and should be reevaluated on other processor types for optimal performance. Fortunately, several architectures can share a similar behavior and it is possible to build common approximate metrics. The order of the interpolation shape functions, the MPI/OpenMP ratio, the compiler version, or other parameters may also modify these results. In the future, an automated analysis could be performed by the code at initialization to compute the most suitable metrics. For large-scale simulations, this evaluation would represent a negligible cost. There is, for the moment, no overlapping strategy between computation and communications. This constitutes a next development axis that would enhance the benefits brought by this adaptive strategy. A better integration of the dynamic load balancing with the adaptive vectorization mode constitutes a second possible improvement: they are not coupled, even though they can both contribute separately to the simulation efficiency. For instance, they do not share the metrics used to estimate the particle computation time. \section{Compilation} \label{compilation} Four different clusters were used in this article. Each of them is equipped with processors of a different Intel architecture, representative of those most used with {\sc Smilei}\xspace: \begin{itemize} \item Jureca supercomputer: 2 x Haswell node (Intel Xeon E5-2680 v3, 12 cores) \item Tornado supercomputer: 2 x Broadwell node (Intel Xeon CPU E5-2697 v4, 16 cores, 2.3 GHz) \item Frioul supercomputer: Knights Landing (KNL) node (Intel Xeon Phi 7250, 68 cores, 1.4 GHz) \item Irene Joliot-Curie supercomputer: 2 x Skylake node (Intel Skylake 8168, 24 cores, 2.7--1.9 GHz) \end{itemize} On each of them, the code is compiled with the following versions: \begin{itemize} \item Intel compiler 18.0.1.163, IntelMPI 18.0.1.163 \item Intel compiler 17.3.191, OpenMPI 1.6.5 \item Intel compiler 18.0.1.163, IntelMPI 18.0.1.163 \item Intel compiler 18.0.1.163, IntelMPI 18.0.1.163 \end{itemize} The most recent architecture is Skylake; it uses the extended \textsc{AVX512} vector instruction set coming from the Xeon Phi family (including KNL). It has the largest vector size, able to treat 8 double-precision floats in a single instruction. The Skylake architecture can also handle, as a legacy, the \textsc{AVX2} instruction set inherited from the Haswell and Broadwell processors. The Intel Turbo Boost technology allows the processor to adjust the core frequency to the total number of used cores and the required instruction set. Regarding the Skylake processor used in this article, the base frequency without vectorization is 2.7 GHz, 2.3 GHz for \textsc{AVX2} and 1.9 GHz for \textsc{AVX512}. The code is compiled with the most advanced architecture vectorization flags, i.e. \textsc{-xCORE-AVX2} on Haswell and Broadwell, \textsc{-xMIC-AVX512} on KNL and \textsc{-xCOMMON-AVX512} on Skylake. The flag \textsc{-xCORE-AVX2} can also be used on KNL and Skylake to test the code with the \textsc{AVX2} instruction set, which limits the vector register size to 256 bits (4 double-precision floats). These flags are completed by \textsc{-O3 -ip -ipo -inline-factor=1000 -fno-alias} for best performance. The KNL cluster is configured in Quadrant cache mode. On KNL, OpenMP is used to keep 64 of the 68 available cores busy. The remaining cores are left to the system. Hyperthreading is not activated. For the other types of processors, we use all available cores.
\section{Cycle sort} \label{ap_cycle_sort} \begin{algorithm} \DontPrintSemicolon \KwData{ \\$Particles$: array of unsorted particles. \\$CellKeys$: array of the cell indexes of the particles. \\$Npart$: number of particles. } \KwResult{$Particles$: array of sorted particles} \Begin{ \textcolor{blue}{\tcp{Loop on particles}} \For{$cycleStart \in range(Npart-2)$}{ $cell\_dest \longleftarrow CellKeys[cycleStart]$\; $ip\_dest \longleftarrow cycleStart$\; \textcolor{blue}{\tcp{Compute the destination}} \For{$i\leftarrow cycleStart+1$ \KwTo $Npart-1$}{ \If{$CellKeys[i] < cell\_dest$}{ $ip\_dest\mathrel{+}= 1$\; } } \If{$ip\_dest== cycleStart$}{ \textcolor{blue}{\tcp{Particle already well placed}} $\mathbf{continue}$\; } \textcolor{blue}{\tcp{Do not swap twins}} \While{$CellKeys[ip\_dest]\mathrel{=}=\ cell\_dest$}{ $ip\_dest\mathrel{+}= 1$\; } $Cycle.resize(0)$\; $Cycle.push\_back(cycleStart)$\; $Cycle.push\_back(ip\_dest)$\; \textcolor{blue}{\tcp{Build a cycle}} \While{$ip\_dest\ \mathrel{!}=\ cycleStart$}{ $cycleStart\longleftarrow ip\_dest$\; $cell\_dest \longleftarrow CellKeys[cycleStart]$\; \For{$i\leftarrow cycleStart+1$ \KwTo $Npart-1$}{ \If{$CellKeys[i] < cell\_dest$}{ $ip\_dest\mathrel{+}= 1$\; } } \While{$CellKeys[ip\_dest]\mathrel{=}=\ cell\_dest$}{ $ip\_dest\mathrel{+}= 1$\; } $Cycle.push\_back(ip\_dest)$\; } \textcolor{blue}{\tcp{Proceed to the swap}} $Ptemp\longleftarrow Particles[Cycle[0]]$\; \For{$i \leftarrow Cycle.size()-1$ \KwTo 2}{ $Particles[Cycle[i]]\longleftarrow Particles[Cycle[i-1]]$\; } $Particles[Cycle[1]]\longleftarrow Ptemp$\; } \KwRet $Particles$ } \caption{Cycle Sort.} \label{cyclesort} \end{algorithm} \section{Acknowledgements} \label{sec:Acknowledgements} The authors are grateful to M. Haefele, J. Bigot, P. Kestener, A. Durocher, O. Iffrig, V. Soni and H. Vincenti for fruitful discussions. This work was granted access to the HPC resources of TGCC/CINES under the allocation 2017 - A0020607484 and \textit{Grand Challenge} ``Irene'' 2018 project gch0313 made by GENCI. The authors are grateful to the TGCC and CINES engineers for their support. Access to the KNL cluster Frioul was granted via the Cellule de Veille technologique. The authors acknowledge the European EoCoE project for sharing HPC resources on the supercomputer Jureca. The authors thank engineers of the LLR HPC clusters for resources and help. The authors are thankful to M. Mancip for his help in rendering 3D images on the Mandelbrot cluster and the Mur d'Image.
\section{Introduction} \label{intro} There are many different types of bound-state problem that arise in atomic and molecular physics. These range from the electronic structure problem, involving antisymmetrised many-particle wavefunctions and Cou\-lomb interaction potentials, to low-amplitude molecular vibrational problems that can be solved in basis sets of harmonic-oscillator functions. In the absence of wide-amplitude motion, the rovibrational bound-state problem is often formulated in terms of Eckart-Watson Hamiltonians~\cite{Watson:1968, Watson:1970, Matyus:2007}. Wide-amplitude motion and exchange of identical nuclei often require special techniques, even when the motion takes place on a single electronic potential-energy surface~\cite{Bowman:multimode:2003, Bowman:variational:2008, McCoy:DMC:2006, Yurchenko:2007}. The {\sc bound}\ and {\sc field}\ programs deal with an intermediate set of problems involving interactions between two particles (atoms or molecules), in some cases on multiple coupled surfaces, where the total Hamiltonian of the system may be written \begin{equation} H=-\frac{\hbar^2}{2\mu}R^{-1}\frac{d^2\,}{d R^2}R +\frac{\hbar^2 \hat L^2}{2\mu R^2}+H_{\rm intl}(\xi_{\rm intl})+V(R,\xi_{\rm intl}), \label{eqh} \end{equation} where $R$ is a radial coordinate describing the separation of two particles and $\xi_{\rm intl}$ represents all the other coordinates in the system. $H_{\rm intl}$ represents the sum of the internal Hamiltonians of the isolated particles, and depends on $\xi_{\rm intl}$ but not $R$, and $V(R,\xi_{\rm intl})$ is an interaction potential. The operator $\hbar^2 \hat L^2/2\mu R^2$ is the centrifugal term that describes the end-over-end rotational energy of the interacting pair. The Hamiltonian \eqref{eqh} is usually appropriate for pairs of particles that interact weakly enough that the particles retain their chemical identity. Such problems commonly arise in the spectroscopy of van der Waals complexes~\cite{Hutson:AMVCD:1991} and in the near-threshold bound states that are important in the creation and control of ultracold molecules~\cite{Hutson:Cs2:2008}. The internal Hamiltonian $H_{\rm intl}$ is a sum of terms for the two particles 1 and 2, \begin{equation} H_{\rm intl}(\xi_{\rm intl}) = H_{\rm intl}^{(1)}(\xi_{\rm intl}^{(1)}) + H_{\rm intl}^{(2)}(\xi_{\rm intl}^{(2)}), \end{equation} with eigenvalues $E_{{\rm intl},i}=E_{{\rm intl},i}^{(1)}+E_{{\rm intl},i}^{(2)}$, where $E_{{\rm intl},i}^{(1)}$ and $E_{{\rm intl},i}^{(2)}$ are energies of the separated monomers $1$ and $2$. The individual terms can vary enormously in complexity: each one may represent a structureless atom, requiring no internal Hamiltonian at all, a vibrating and/or rotating molecule, or a particle with electron and/or nuclear spins. The problems that arise in ultracold physics frequently involve pairs of atoms or molecules with electron and nuclear spins, often in the presence of external electric, magnetic or photon fields. All these complications can be taken into account in the structure of $H_{\rm intl}$ and the interaction potential $V(R,\xi_{\rm intl})$, which may both involve terms dependent on spins and external fields. It is possible to solve the bound-state problem for the Hamiltonian \eqref{eqh} using basis sets for both the internal coordinates $\xi_{\rm intl}$ and the interparticle distance $R$. 
Such methods have been used with considerable success for highly excited states of molecules such as H$_3^+$ and H$_2$O on a single surface, often using discrete variable representations~\cite{Tennyson:2004}. However, they have the disadvantage that the computer time generally scales as the cube of the number of radial basis functions. This problem becomes worse for levels very close to dissociation. It can be ameliorated to some extent by using sparse-matrix techniques and basis-set contraction, but the scaling remains poor. An alternative is the \emph{coupled-channel} approach, which handles the radial coordinate $R$ by direct numerical propagation on a grid, and all the other coordinates using a basis set~\cite{Hutson:CPC:1994}. This is the approach that is implemented in {\sc bound}\ and {\sc field}. It has the advantage that the computer time scales \emph{linearly} with the number of points on the radial propagation grid. In the coupled-channel approach, the total wavefunction is expanded \begin{equation} \Psi(R,\xi_{\rm intl}) =R^{-1}\sum_j\Phi_j(\xi_{\rm intl})\psi_{j}(R), \label{eqexp} \end{equation} where the functions $\Phi_j(\xi_{\rm intl})$ form a complete orthonormal basis set for motion in the coordinates $\xi_{\rm intl}$ and the factor $R^{-1}$ serves to simplify the form of the radial kinetic energy operator. The wavefunction in each {\em channel} $j$ is described by a radial \emph{channel function} $\psi_{j}(R)$. The expansion (\ref{eqexp}) is substituted into the total Schr\"odinger equation, and the result is projected onto a basis function $\Phi_i(\xi_{\rm intl})$. The resulting coupled differential equations for the channel functions $\psi_{i}(R)$ are \begin{equation}\frac{d^2\psi_{i}}{d R^2} =\sum_j\left[W_{ij}(R)-{\cal E}\delta_{ij}\right]\psi_{j}(R). \end{equation} Here $\delta_{ij}$ is the Kronecker delta and ${\cal E}=2\mu E/\hbar^2$, where $E$ is the total energy, and \begin{equation} W_{ij}(R)=\frac{2\mu}{\hbar^2}\int\Phi_i^*(\xi_{\rm intl}) [\hbar^2 \hat L^2/2\mu R^2 + H_{\rm intl}+V(R,\xi_{\rm intl})] \Phi_j(\xi_{\rm intl})\,d\xi_{\rm intl}. \label{eqWij} \end{equation} The different equations are coupled by the off-diagonal terms $W_{ij}(R)$ with $i\ne j$. The coupled equations may be expressed in matrix notation, \begin{equation} \frac{d^2\boldsymbol{\psi}}{d R^2}= \left[{\bf W}(R)-{\cal E}{\bf I}\right]\boldsymbol{\psi}(R). \label{eqcp} \end{equation} If there are $N$ basis functions included in the expansion (\ref{eqexp}), $\boldsymbol{\psi}(R)$ is a column vector of order $N$ with elements $\psi_{j}(R)$, ${\bf I}$ is the $N\times N$ unit matrix, and ${\bf W}(R)$ is an $N\times N$ interaction matrix with elements $W_{ij}(R)$. In general there are $N$ linearly independent solution vectors $\boldsymbol{\psi}(R)$ that satisfy the Schr\"o\-ding\-er equation subject to the boundary condition that $\boldsymbol{\psi}(R)\rightarrow0$ at one end of the range. These $N$ column vectors form a wavefunction matrix $\boldsymbol{\Psi}(R)$. The propagators in {\sc bound}\ and {\sc field}\ all propagate the log-derivative matrix ${\bf Y}(R) = \boldsymbol{\Psi}'(R) [\boldsymbol{\Psi}(R)]^{-1}$, rather than $\boldsymbol{\Psi}(R)$ itself. The particular choice of the basis functions $\Phi_j(\xi_{\rm intl})$ and the resulting form of the interaction matrix elements $W_{ij}(R)$ depend on the physical problem being considered. The complete set of coupled equations often factorises into blocks determined by the symmetry of the system. 
In the absence of external fields, the \emph{total angular momentum} $J_{\rm tot}$ and the \emph{total parity} are conserved quantities. Different or additional symmetries arise in different physical situations. The programs are designed to loop over total angular momentum and parity, constructing a separate set of coupled equations for each combination and solving them by propagation. These loops may be repurposed for other symmetries when appropriate. {\sc bound}\ and {\sc field}\ can also handle interactions that occur in external fields, where the total angular momentum is no longer a good quantum number. \subsection{Location of bound states}\label{theory:boundcalcs} True bound states exist only at energies where all asymptotic channels are energetically closed, $E<E_{{\rm intl},i}$ for all $i$. Under these circumstances the bound-state wavefunction $\boldsymbol{\psi}(R)$ is a column vector of order $N$ that must approach zero in the classically forbidden regions at both short range, $R\rightarrow 0$, and long range, $R\rightarrow \infty$. Continuously differentiable solutions of the coupled equations that satisfy the boundary conditions at both ends exist only at specific energies $E_n$. These are the eigenvalues of the total Hamiltonian \eqref{eqh}; we refer to them (somewhat loosely) as the eigenvalues of the coupled equations, to distinguish them from eigenvalues of other operators that also enter the discussion below. Wavefunction matrices $\boldsymbol{\Psi}(R)$ that satisfy the boundary conditions in \emph{one} of the classically forbidden regions exist at any energy. We designate these $\boldsymbol{\Psi}^+(R)$ for the solution propagated outwards from short range and $\boldsymbol{\Psi}^-(R)$ for the solution propagated inwards from long range. The corresponding log-derivative matrices are ${\bf Y}^+(R)$ and ${\bf Y}^-(R)$. It is convenient to choose a matching distance $R_{\rm match}$ where the outwards and inwards solutions are compared. A solution vector that is continuous at $R_{\rm match}$ must satisfy \begin{equation} \boldsymbol{\psi}(R_{\rm match})=\boldsymbol{\psi}^+(R_{\rm match})=\boldsymbol{\psi}^-(R_{\rm match}). \end{equation} Since the derivatives of the outwards and inwards solutions must match too, we also require that \begin{equation} \frac{d}{dR}\boldsymbol{\psi}^+(R_{\rm match})=\frac{d}{dR}\boldsymbol{\psi}^-(R_{\rm match}) \end{equation} so that \begin{equation} {\bf Y}^+(R_{\rm match})\boldsymbol{\psi}(R_{\rm match}) = {\bf Y}^-(R_{\rm match})\boldsymbol{\psi}(R_{\rm match}). \end{equation} Equivalently, \begin{equation} \left[{\bf Y}^+(R_{\rm match}) - {\bf Y}^-(R_{\rm match})\right] \boldsymbol{\psi}(R_{\rm match}) = 0, \label{eq:ymatch} \end{equation} so that the wavefunction vector $\boldsymbol{\psi}(R_{\rm match})$ is an eigenvector of the log-derivative matching matrix, $\Delta{\bf Y} = \left[{\bf Y}^+(R_{\rm match}) - {\bf Y}^-(R_{\rm match})\right]$, with eigenvalue zero~\cite{Hutson:CPC:1994}. For each $J_{\rm tot}$ and symmetry block, {\sc bound}\ propagates log-derivative matrices to a matching point $R_{\rm match}$, both outwards from the classically forbidden region at short range (or from $R=0$) and inwards from the classically forbidden region at long range. At each energy $E$, it calculates the multichannel node count~\cite{Johnson:1978}, defined as the number of zeros of $\boldsymbol{\psi}(R)$ between $R_{\rm min}$ and $R_{\rm max}$. 
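To make the matching procedure concrete, the following toy single-channel sketch (Python, with $\hbar^2/2\mu = 1$ and a model harmonic interaction chosen so that the answer is known exactly; it is purely illustrative and not part of the distributed programs) propagates log-derivatives outwards and inwards and converges on the energy at which the matching function vanishes:
\begin{verbatim}
# Toy single-channel illustration of bound-state location by
# outward/inward log-derivative propagation and matching.
# Units: hbar^2/(2 mu) = 1.  Model potential W(R) = (R - 5)^2, whose
# exact eigenvalues are E_n = 2n + 1, so the ground state is E_0 = 1.
import numpy as np
from scipy.optimize import brentq

def W(R):
    return (R - 5.0)**2

def propagate_Y(E, R0, R1, nsteps=20000):
    """RK4 integration of the (here scalar) Riccati equation
    Y' = W(R) - E - Y**2 from R0 to R1, starting from the WKB value
    Y = +/- k in the classically forbidden region.  Note: this naive
    scheme fails if psi passes through a node; the log-derivative
    propagators used by the programs do not suffer from this."""
    h = (R1 - R0) / nsteps
    Y = np.sqrt(abs(W(R0) - E)) * (1.0 if h > 0 else -1.0)
    R = R0
    f = lambda r, y: W(r) - E - y * y
    for _ in range(nsteps):
        k1 = f(R, Y)
        k2 = f(R + 0.5 * h, Y + 0.5 * h * k1)
        k3 = f(R + 0.5 * h, Y + 0.5 * h * k2)
        k4 = f(R + h, Y + h * k3)
        Y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        R += h
    return Y

def delta_Y(E, Rmatch=5.0):
    return propagate_Y(E, 0.0, Rmatch) - propagate_Y(E, 10.0, Rmatch)

# Bracket chosen (by inspection for this model) to contain one level;
# the programs use the node count to find such windows automatically.
E0 = brentq(delta_Y, 0.5, 1.5)
print(E0)   # ~1.0, the nodeless ground state
\end{verbatim}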
Johnson \cite{Johnson:1978} showed that this node count is equal to the number of eigenvalues of the coupled equations that lie below $E$. It may be calculated as a simple byproduct of the propagations and the matching matrix. {\sc bound}\ uses the node count to determine the number of eigenvalues of the coupled equations in the specified range, and then uses bisection to identify energy windows that contain exactly one eigenvalue. In each such window, it uses a combination of bisection and the Van Wijngaarden-Dekker-Brent algorithm~\cite{VWDB} to converge on the energy where an eigenvalue of the log-derivative matching matrix $\Delta {\bf Y}$ is zero. This energy is an eigenvalue of the coupled equations. The program extracts the local wavefunction vector $\boldsymbol{\psi}(R_{\rm match})$, and optionally calculates the complete bound-state wavefunction $\boldsymbol{\psi}(R)$ using the method of Thornley and Hutson~\cite{THORNLEY:1994}. {\sc field}\ operates in a very similar manner to locate eigenvalues of the coupled equations as a function of external field at fixed energy (or energy fixed with respect to a field-dependent threshold energy). The one significant difference is that the multichannel node count is not guaranteed to be a monotonic function of field, and it is in principle possible to miss pairs of states that cross the chosen energy in opposite directions as a function of field. In practice this seldom happens. The choice of $R_{\rm match}$ is significant. It does not affect the energies or fields at which the matching condition (\ref{eq:ymatch}) is satisfied, but it does affect the matching matrix at other energies or fields and hence the rate of convergence on eigenvalues of the coupled equations. In particular, it is usually inappropriate to place $R_{\rm match}$ far into a classically forbidden region. \subsection{Matrix of the interaction potential}\label{theory:W} In order to streamline the calculation of matrix elements for the propagation, {\sc bound}\ and {\sc field}\ express the interaction potential in an expansion over the internal coordinates, \begin{equation} V(R,\xi_{\rm intl})=\sum_\Lambda v_\Lambda(R){\cal V}^\Lambda(\xi_{\rm intl}). \label{eqvlambda} \end{equation} The specific form of the expansion depends on the nature of the interacting particles. The radial potential coefficients $v_\Lambda(R)$ may either be supplied explicitly, or generated internally by numerically integrating over $\xi_{\rm intl}$. The $R$-independent coupling matrices $\boldsymbol{\cal V}^\Lambda$ with elements ${\cal V}^\Lambda_{ij}=\langle\Phi_i|{\cal V}^\Lambda|\Phi_j\rangle_{\rm intl}$ are calculated once and stored for use in evaluating $W_{ij}(R)$ throughout the course of a propagation. \subsection{Matrices of the internal and centrifugal Hamiltonians}\label{theory:Wextra} Coupled-channel theory is most commonly formulated in a basis set where $\hat L^2$ and $H_{\rm intl}$ are both diagonal. All the built-in coupling cases use basis sets of this type. The matrix of $H_{\rm intl}$ is $\langle\Phi_i|H_{\rm intl}|\Phi_j\rangle_{\rm intl}=E_{{\rm intl},i}\delta_{ij}$. The diagonal matrix elements of $\hat L^2$ are often of the form $L_i(L_i+1)$, where the integer quantum number $L_i$ (sometimes called the partial-wave quantum number) represents the end-over-end angular momentum of the two particles about one another. However, the programs also allow the use of basis sets where one or both of $\hat L^2$ and $H_{\rm intl}$ are non-diagonal.
If $H_{\rm intl}$ is non-diagonal, it is expanded as a sum of terms \begin{equation} H_{\rm intl}(\xi_{\rm intl}) =\sum_\Omega h_\Omega {\cal H}^\Omega_{\rm intl}(\xi_{\rm intl}), \label{eqHomega1} \end{equation} where the $h_\Omega$ are scalar quantities, some of which may represent external fields if desired. The programs generate additional coupling matrices $\boldsymbol{\cal H}^\Omega$ with elements ${\cal H}^\Omega_{ij}=\langle\Phi_i|{\cal H}^\Omega_{\rm intl}|\Phi_j\rangle_{\rm intl}.$ These are also calculated once and stored for use in evaluating $W_{ij}(R)$ throughout the course of a propagation. A similar mechanism is used for basis sets where $\hat L^2$ is non-diagonal, with \begin{equation} \hat L^2 =\sum_\Upsilon {\cal L}^\Upsilon. \label{eqL2} \end{equation} If $H_{\rm intl}$ is non-diagonal, the allowed energies $E_{{\rm intl},i}$ of the pair of monomers at infinite separation are the eigenvalues of $H_{\rm intl}$. The wavefunctions of the separated pair are represented by simultaneous eigenvectors of $H_{\rm intl}$ and $\hat L^2$. \subsection{Boundary conditions} For deeply bound states, it is often sufficient to require that $\boldsymbol{\Psi}(R)\rightarrow0$ in the classically forbidden regions at short and long range, or equivalently that ${\bf Y}(R)\rightarrow\pm\infty$. However, there are circumstances where more general boundary conditions are required: \begin{itemize}[nosep] \item In systems where $R=0$ is energetically accessible, some states require ${\bf Y}(0)=0$. \item In model systems with a Fermi pseudopotential, corresponding to a $\delta$-function at the origin or elsewhere, a finite value of ${\bf Y}$ may be required. \item For states very close to dissociation, the wavefunction $\boldsymbol{\psi}(R)$ dies off very slowly at long range, and it may be inefficient to propagate far enough that $\boldsymbol{\psi}(R) \rightarrow 0$. For a single channel, the wavefunction approximately follows the Wentzel-Kramers-Brillouin (WKB) approximation in the far classically forbidden region, \begin{eqnarray} \psi(R)&=&[k(R)]^{-\frac{1}{2}} \exp\left(\pm\int_{R_{\rm turn}}^R k(R')\,d R'\right),\\ \psi'(R)&=&[k(R)]^{-\frac{1}{2}}\left[\pm k(R)-\frac{1}{2}\frac{k'(R)}{k(R)} \right] \exp\left(\pm\int_{R_{\rm turn}}^R k(R')\,d R'\right),\\ Y(R)&=&\pm k(R)-\frac{1}{2}\frac{k'(R)}{k(R)},\label{eq:bcwkb} \end{eqnarray} where $k(R) = [2\mu(V(R)-E)/\hbar^2]^{1/2}$ and $V(R)$ is an effective potential energy for the channel concerned. The $+$ sign applies inside the inner turning point (where the phase integral itself is negative) and the $-$ sign applies outside the outer turning point. The first term in Eq.\ \ref{eq:bcwkb} dominates either when $k(R)$ is large (in a strongly classically forbidden region) or when the interaction potential is nearly constant (at very long range). The term involving $k'(R)$ is therefore neglected in the implementation of WKB boundary conditions. \end{itemize} {\sc bound}\ and {\sc field}\ allow the imposition of separate boundary conditions for ${\bf Y}$ in closed and in open channels at $R_{\rm min}$ and at $R_{\rm max}$, and by default apply WKB boundary conditions for closed channels (neglecting the term involving $k'(R)$ in Eq.\ \ref{eq:bcwkb}). This gives faster convergence with respect to $R_{\rm min}$ and $R_{\rm max}$ than ${\bf Y}(R)\rightarrow\pm\infty$. \subsection{Perturbation calculations} {\sc bound}\ can calculate expectation values using a finite-difference approach \cite{Hutson:expect:88}. 
After a bound state is located at energy $E_n^{(0)}$, {\sc bound}\ repeats the calculation with a small perturbation $a \hat A(R)$ added to the Hamiltonian to obtain a modified energy $E_n(a)$. From perturbation theory, \begin{equation} E_n(a) = E_n^{(0)} + a \langle\hat A\rangle_n + {\cal O}(a^2), \end{equation} where ${\cal O}(a^2)$ are second-order terms. The finite-difference approximation to the expectation value $\langle\hat A\rangle_n$ is \begin{equation} \langle\hat A\rangle_n = \frac{E_n(a) - E_n^{(0)}}{a}, \end{equation} and is accurate to order $a$. For built-in coupling cases, {\sc bound}\ can calculate expectation values of an operator $\hat A$ that is made up of a product of one of the angular functions in the potential expansion and a power of $R$. For coupling cases implemented in plug-in basis-set suites, any required operator can be implemented in the basis-set suite. \subsection{Richardson extrapolation} For propagators that use equally spaced steps, the error in bound-state energies due to a finite step size is proportional to a power of the step size (in the limit of small steps). {\sc bound}\ can obtain an improved estimate of the bound-state energy by performing calculations with two different step sizes and extrapolating to zero step size. For a propagator whose leading error varies as $h^4$ (as in the Richardson $h^4$ extrapolations used in the examples below), calculations with step sizes $h_1$ and $h_2$ yield the improved estimate \begin{equation} E_n(0) \approx E_n(h_2) - \frac{h_2^4\left[E_n(h_1)-E_n(h_2)\right]}{h_1^4-h_2^4}. \end{equation} \subsection{Reference energies} By default, the zero of energy used for total energies is the one used for monomer energies, or defined by the monomer Hamiltonians programmed in a plug-in basis-set suite. However, it is sometimes desirable to use a different zero of energy (reference energy). This may be specified: \begin{itemize}[nosep] \item{as a value given directly in the input file;} \item{as the energy of a particular scattering threshold or pair of monomer states, which may depend on external fields.} \end{itemize} \subsection{Locating zero-energy Feshbach resonances} Zero-energy Feshbach resonances occur at fields where bound states cross a scattering threshold as a function of external field, provided there is coupling between the bound state and the threshold. {\sc field}\ may be used to locate such crossings by choosing the energy of the desired threshold as the reference energy and setting the relative energy to zero. \subsection{Wavefunctions} The programs always extract the local wavefunction vector $\boldsymbol{\psi}(R_{\rm match})$ at the matching point. If desired, they can calculate the complete bound-state wavefunction $\boldsymbol{\psi}(R)$ using the method of Thornley and Hutson~\cite{THORNLEY:1994}. \section{Systems handled}\label{interactiontypes} The programs provide built-in Hamiltonians and basis sets to handle a number of common cases.
In particular, they can calculate bound states in the close-coupling approximation (with no dynamical approximations except basis-set truncation) for the following pairs of species: \begin{enumerate}[nosep] \item Atom + linear rigid rotor~\cite{Arthurs:1960}; \item Atom + vibrating diatom with interaction potentials independent of diatom rotational state~\cite{Green:1979:vibrational}; \item Linear rigid rotor + linear rigid rotor~\cite{Green:1975,Green:1977:comment,Heil:1978:coupled}; \item Asymmetric top + linear molecule~\cite{Phillips:1995}; \item Atom + symmetric top (also handles near-symmetric tops and linear molecules with vibrational angular momentum)~\cite{Green:1976,Green:1979:IOS}; \item Atom + asymmetric top~\cite{Green:1976} (also handles spherical tops~\cite{Hutson:spher:1994}); \item Atom + vibrating diatom with interaction potentials dependent on diatom rotational state~\cite{Hutson:sbe:1984}; \item Atom + rigid corrugated surface \cite{Wolken:1973:surface,Hutson:1983} (band structure). At present, the code is restricted to centrosymmetric lattices, for which the potential matrices are real, and is not included in {\sc field}. \end{enumerate} The close-coupling calculations are all implemented in a fully coupled space-fixed representation, with the calculations performed separately for each total angular momentum and parity. In addition, the programs implement a variety of dynamical approximations (decoupling methods) that offer considerable savings of computer time at the expense of accuracy. Some of these are of significance only for scattering calculations, such as the effective potential approximation~\cite{Rabitz:EP}, the $L$-labelled coupled-states (CS) approximation~\cite{McG74} and the decoupled $L$-dominant approximation~\cite{Green:1976:DLD,DePristo:1976:DLD}. However, bound-state calculations frequently use the helicity decoupling approximation~\cite{Hutson:AMVCD:1991}, which is implemented in {\sc bound}\ and {\sc field}\ in the framework of the CS approximation. In addition to the built-in cases, the programs provide an interface that allows users to specify Hamiltonians and basis sets for different pairs of species. These have been used for numerous different cases, and routines are provided for two cases of current interest: \begin{enumerate}[nosep] \item Structureless atom + $^3\Sigma$ molecule in a magnetic field, demonstrated for Mg + NH; \item Two alkali-metal atoms in $^2$S states, including hyperfine coupling, in a magnetic field, demonstrated for $^{85}$Rb$_2$. \end{enumerate} \section{Propagators}\label{propagators} {\sc bound}\ and {\sc field}\ can solve the coupled equations using a variety of different propagation methods, all of which propagate the log-derivative matrix ${\bf Y}(R)$ rather than the wavefunction matrix $\boldsymbol{\Psi}(R)$. Log-derivative propagators are particularly suitable for the bound-state problem, both because they allow a very simple form of the matching equation and because they inherently avoid the instability associated with propagating in the presence of deeply closed channels. The propagators currently implemented in {\sc bound}\ and {\sc field}\ are: \begin{itemize}[leftmargin=13pt] \item{Log-derivative propagator of Johnson (LDJ)~\cite{Johnson:1973, Manolopoulos:1993:Johnson}: This is a very stable propagator.
It has largely been superseded by the LDMD propagator, but can be useful in occasional cases where that propagator has trouble evaluating node counts.} \item{Diabatic log-derivative propagator of Manolopoulos (LDMD)~\cite{Manolopoulos:1986}: This is a very efficient and stable propagator, especially at short and medium range.} \item{Quasiadiabatic log-derivative propagator of Manolopoulos (LDMA)~\cite{Manolopoulos:PhD:1988, Hutson:CPC:1994}: This is similar to the LDMD propagator, but operates in a quasiadiabatic basis. It offers better accuracy than LDMD for very strongly coupled problems, but is relatively expensive. It is recommended for production runs only for very strongly coupled problems. However, it is also useful when setting up a new system, because it can output eigenvalues of the interaction matrix at specific distances (adiabats) and nonadiabatic couplings between the adiabatic states.} \item{Symplectic log-derivative propagators of Manolopoulos and Gray (LDMG) \cite{MG:symplectic:1995}: This offers a choice of 4th-order or 5th-order symplectic propagators. These are 1.5 to 3 times more expensive per step than the LDMD and LDJ propagators, but can have smaller errors for a given step size. They can be the most efficient choice when high precision is required.} \item{Airy propagator: This is the AIRY log-derivative propagator of Alexander~\cite{Alexander:1984} as reformulated by Alexander and Manolopoulos~\cite{Alexander:1987}. It uses a quasiadiabatic basis with a linear reference potential (which results in Airy functions as reference solutions). This allows the step size to increase rapidly with separation, so that this propagator is particularly efficient at long range.} \end{itemize} Calculations with {\sc bound}\ and {\sc field}\ may use different log-derivative propagators at short and long range. This is particularly useful for bound states close to a dissociation threshold, where it may be necessary to propagate inwards from very large values of $R$ to obtain converged results. The AIRY propagator incorporates a variable step size, and can be used to propagate inwards with an initially very large but decreasing step size at very low cost. However, it is not particularly efficient when the interaction potential is fast-varying, so it is often used in combination with a fixed-step-size method such as the LDMD propagator at short and intermediate range. \section{Computer time} The computer time required to solve a set of $N$ coupled equations is approximately proportional to $N^3$. The practical limit on $N$ is from a few hundred to several thousand, depending on the speed of the computer and the amount of memory available. The computer time also depends linearly on the number of radial steps required to solve the coupled equations to the desired accuracy. The step size required is typically proportional to the minimum local wavelength, so that the time scales approximately with $(\mu E_{\rm max}^{\rm kin})^{1/2}$, where $E_{\rm max}^{\rm kin}$ is the maximum local kinetic energy; for bound states near dissociation, $E_{\rm max}^{\rm kin}$ may be approximated by the well depth of the interaction potential. \section{Plug-in functionality} \subsection{Potential or potential expansion coefficients} The programs internally express the interaction potential as an expansion over the internal coordinates, as in Eq.~\eqref{eqvlambda}. 
The expansion coefficients $v_\Lambda(R)$ may be supplied in a variety of ways: \begin{itemize}[nosep, leftmargin=13pt] \item For very simple potentials, where the functions $v_\Lambda(R)$ are sums of exponentials and inverse powers, the parameters that specify them may be supplied in the input file. \item For more complicated functions, plug-in routines may be supplied to return individual values of $v_\Lambda(R)$ at a value of $R$ specified in the calling sequence. \item For most of the built-in coupling cases, plug-in routines may be supplied to return the unexpanded potential $V(R,\xi_{\rm intl})$ at specified values of $R$ and the internal coordinates $\xi_{\rm intl}$. The general-purpose potential routine supplied then performs numerical quadrature over $\xi_{\rm intl}$ to evaluate the expansion coefficients $v_\Lambda(R)$ internally. \item If none of these approaches is convenient (or efficient enough), a replacement potential routine may be supplied to return the complete potential expansion at once (all values of $v_\Lambda(R)$ at a value of $R$ specified in the calling sequence). \end{itemize} \subsection{Basis sets and coupling matrices} The programs provide an interface for users to supply a set of routines that specify an additional type of basis set, select the elements that will be used in a particular calculation, and calculate the matrices of coupling coefficients for the operators ${\cal V}^\Lambda(\xi_{\rm intl})$ used to expand the interaction potential. The routines must also specify the matrices of $H_{\rm intl}$ and $\hat L^2$, which may be diagonal or non-diagonal. If desired, $H_{\rm intl}$ may contain terms that depend on external fields. \subsection{External fields and potential scaling} The programs incorporate data structures to handle external electric, magnetic or photon fields. There may be multiple fields at arbitrary relative orientations. Internally, the field-dependent terms in the Hamiltonian are a subset of those in $H_{\rm intl}$, \begin{equation} H_{\rm intl}(\xi_{\rm intl},\boldsymbol{B}) =\sum_\Omega B_\Omega {\cal H}^\Omega_{\rm intl}(\xi_{\rm intl}), \label{eqHomegaB} \end{equation} where the vector $\boldsymbol{B}$ represents all the fields present. The elements of $\boldsymbol{B}$ may each be expressed as a \emph{nonlinear} function of external field variables (EFVs); the EFVs may thus (for example) represent the magnitudes, orientations, or relative angles of the component fields. {\sc bound}\ allows calculations on a grid of values of any one EFV, and {\sc field}\ allows bound states to be located as a function of one EFV with all the others fixed. The programs also allow calculations as a function of a scaling factor that multiplies the entire interaction potential, or a subset of the potential expansion coefficients $v_\Lambda(R)$. The scaling factor is handled internally using the same structures as external fields. 
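As a concrete illustration of the expansion \eqref{eqvlambda}, the sketch below shows the kind of radial-coefficient routine a user might supply for the model used in the first example of the next section (a Lennard-Jones 12-6 term for $\Lambda=0$ and a dispersion-like $R^{-6}$ term for $\Lambda=2$); it is written in Python with made-up parameters purely for illustration, whereas the actual plug-in routines are Fortran:
\begin{verbatim}
# Illustrative radial potential coefficients v_Lambda(R) for an
# atom + linear rigid rotor model: v_0 is a Lennard-Jones 12-6 well,
# v_2 a dispersion-like anisotropy.  All parameters are made up.
def v_lambda(R, eps=100.0, r_m=4.0, c6=5.0e3):
    """Return [v_0(R), v_2(R)] (e.g. in cm-1, with R in Angstrom)."""
    v0 = eps * ((r_m / R)**12 - 2.0 * (r_m / R)**6)  # depth eps at R = r_m
    v2 = -c6 / R**6                                  # anisotropic term
    return [v0, v2]
\end{verbatim}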
\section{Distributed files and example calculations}\label{use:input} \subsection{Distributed files} The programs are supplied as a tarred, zipped file, which contains: \begin{itemize} \item{the full program documentation in pdf format;} \item{a directory {\tt source\_code} containing \begin{itemize} \item{the Fortran source code;} \item{a GNU makefile ({\tt GNUmakefile}) that can build the executables needed for the example calculations;} \end{itemize}} \item{a directory {\tt examples} containing \begin{itemize} \item{a sub-directory {\tt input} containing input files for the example calculations described below;} \item{a sub-directory {\tt output} containing the corresponding output files;} \end{itemize}} \item{a directory {\tt data} containing auxiliary data files for some potential routines used in the example calculations;} \item{a plain-text file {\tt README} that gives information on changes that may be needed to adapt the GNUmakefile to a specific target computer;} \item{a plain-text file {\tt COPYING} that contains the text of the GNU General Public License, Version 3.} \end{itemize} \subsection{Example calculations} The executables used for different calculations may differ in the routines linked to construct the basis set, specify the internal Hamiltonian, and evaluate the interaction potential. The executables required for the example calculations can all be built using {\tt GNUmakefile}. \subsubsection{All available propagators}\label{testfiles:bound:intflgs} \begin{tabular}{ll} input file: & \file{bound-all\_propagators.input}\\ executable: & \file{bound-basic} \end{tabular} \file{bound-all\_propagators.input} performs close-coupling calculations on the bound states of a simple model of a complex formed between an atom and a linear rigid rotor. The radial potential coefficients are provided in the input data file and consist of a Lennard-Jones 12-6 potential for $\lambda=0$ and a dispersion-like $R^{-6}$ form for $\lambda=2$. The calculation is repeated using combinations of short-range and long-range propagators that exercise every propagation method available in {\sc bound}\ (though not every possible combination). The calculation is done twice for the LDMD/AIRY combination: once with $R_{\rm mid} < R_{\rm match}$ and once with $R_{\rm mid} > R_{\rm match}$. The calculation that uses just the LDMD propagator employs a different step length for the inwards propagation. This input file should produce the same results regardless of which {\sc bound}\ executable is used. \subsubsection{Bound states of Ar-HCl with expectation values}\label{testfiles:bound:ityp1} \begin{tabular}{ll} input file: & \file{bound-Ar\_HCl.input}\\ executable: & \file{bound-Rg\_HX} \end{tabular} \file{bound-Ar\_HCl.input} performs calculations on the states of Ar-HCl bound by more than 80 cm$^{-1}$, using the H6(4,3,0) potential of Hutson~\cite{H92ArHCl} and the LDMD propagator, for total angular momentum $J_{\rm tot}=0$ and 1 and both parities. The first run does close-coupling calculations. The second run does calculations in the helicity decoupling approximation, and in addition calculates expectation values $\langle P_2(\cos\theta)\rangle$ and $\langle 1/R^2 \rangle$ for all the states. The results may be compared with Table IV of ref.~\cite{H92ArHCl}. The third run calculates the wavefunction for the first bound state identified in the first run. The wavefunction is written to unit 109; the resulting file is included as \file{bound-Ar\_HCl.wavefunction} in \file{examples/output}.
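The components may be plotted with any standard plotting package; for instance, with the matplotlib sketch below (the assumed column layout, $R$ followed by the channel components $\psi_j(R)$, is our guess and should be checked against the program documentation):
\begin{verbatim}
# Hypothetical plotting sketch for the wavefunction file; the column
# layout (R, then one column per channel) is an assumption.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("examples/output/bound-Ar_HCl.wavefunction")
R, components = data[:, 0], data[:, 1:]
for j in range(components.shape[1]):
    plt.plot(R, components[:, j], label=f"channel {j + 1}")
plt.xlabel("R / Angstrom")
plt.ylabel("psi_j(R)")
plt.legend()
plt.show()
\end{verbatim}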
The components may be plotted with any standard plotting package. \subsubsection{\texorpdfstring{Bound states of Ar-CO$_2$ with Richardson extrapolation}{Bound states of Ar-CO2 with Richardson extrapolation}}\label{testfiles:bound:ityp1b} \begin{tabular}{ll} input file: & \file{bound-Ar\_CO2.input}\\ executable: & \file{bound-Rg\_CO2} \end{tabular} \file{bound-Ar\_CO2.input} performs close-coupling calculations on the ground and first vibrationally excited state of Ar-CO$_2$, using the split repulsion potential of Hutson \emph{et al.{}}~\cite{H96ArCO2fit} and the LDJ propagator, for total angular momentum $J_{\rm tot}=0$. The results may be compared with Table IV of ref.~\cite{H96ArCO2fit}. It first calculates the ground-state energy using a fairly large (unconverged) step size of 0.03 \AA. It then repeats the calculation with an even larger step size, and extrapolates to zero step size using Richardson $h^4$ extrapolation. \subsubsection{\texorpdfstring{Bound states of Ar-H$_2$}{Bound states of Ar-H2}}\label{testfiles:bound:ityp7} \begin{tabular}{ll} input file: & \file{bound-Ar\_H2.input}\\ executable: & \file{bound-Rg\_H2}\\ also required: & \file{data/h2even.dat} \end{tabular} \file{bound-Ar\_H2.input} performs close-coupling calculations on the ground state of Ar-H$_2$ with H$_2$ in its $v=1$, $j=1$ state, for total angular momentum $J_{\rm tot}=1$ and even parity ($j+L$ even). For this parity there is no allowed $j=0$ channel, so the state is bound except for vibrational predissociation to form H$_2$ ($v=0$)~\cite{HUTSON:ArH2:1983}, which is not taken into account by {\sc bound}. The run uses the LDMD propagator and the TT3(6,8) potential of Le~Roy and Hutson~\cite{LeR87}, evaluated for H$_2$ states $(j,v) = (0,0)$, (2,0) and (4,0) using H$_2$ matrix elements provided in the file \file{data/h2even.dat}. {\sc bound}\ first calculates the ground-state energy using a fairly large (unconverged) step size of 0.04 \AA. It then repeats the calculation with an even larger step size, and extrapolates to zero step size using Richardson $h^4$ extrapolation. \subsubsection{\texorpdfstring{Bound states of H$_2$-H$_2$ (ortho-para)}{Bound states of H2-H2 (ortho-para)}}\label{testfiles:bound:ityp3} \begin{tabular}{ll} input file: & \file{bound-ityp3.input}\\ executable: & \file{bound-H2\_H2} \end{tabular} \file{bound-ityp3.input} performs close-coupling calculations on bound states of H$_2$-H$_2$ with one para-H$_2$ molecule (even $j$) and one ortho-H$_2$ molecule (odd $j$). It uses the LDMD propagator. The interaction potential is that of Zarur and Rabitz~\cite{Zarur:1974}. The states are bound by less than 2~cm$^{-1}$ (below the $j=0$ + $j=1$ threshold). \subsubsection{\texorpdfstring{Bound states of He-NH$_3$}{Bound states of He-NH3}}\label{testfiles:bound:ityp5} \begin{tabular}{ll} input file: & \file{bound-ityp5.input}\\ executable: & \file{bound-basic} \end{tabular} \file{bound-ityp5.input} performs close-coupling calculations on bound states of He-NH$_3$, taking account of the tunnelling splitting of NH$_3$, using a simple analytical interaction potential and the LDMD propagator. The input file selects rotational functions of E symmetry by setting \basisitem{ISYM(3)} to 1 and specifies that the H nuclei are fermions by setting \basisitem{ISYM(4)} to 1. 
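The Richardson $h^4$ extrapolation used in the Ar-CO$_2$ and Ar-H$_2$ examples above combines two energies obtained with different step sizes. Assuming that the leading error of the propagator scales as $h^4$, energies $E(h_1)$ and $E(h_2)$ calculated with step sizes $h_1<h_2$ may be combined to cancel the leading error term,
\begin{equation*}
E(0)\;\approx\;\frac{h_2^4\,E(h_1)-h_1^4\,E(h_2)}{h_2^4-h_1^4}.
\end{equation*}
This is a sketch of the principle only; the actual error behavior depends on the propagator chosen.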
\subsubsection{\texorpdfstring{Bound states of Ar-CH$_4$}{Bound states of Ar-CH4}}\label{testfiles:bound:ityp6} \begin{tabular}{ll} input file: & \file{bound-Ar\_CH4.input}\\ executable: & \file{bound-Ar\_CH4} \end{tabular} \file{bound-Ar\_CH4.input} performs close-coupling calculations on bound states of Ar-CH$_4$, using $\basisitem{ITYPE}=6$. It uses the interaction potential of Buck \emph{et al.{}}~\cite{Buck:1983} and the LDMD propagator. CH$_4$ is a spherical top, and the input file selects rotor functions of F symmetry and even $k$ by setting \basisitem{ISYM} to 177. The results may be compared with Table II of ref.~\cite{Hutson:spher:1994}. \subsubsection{Bound-state energies of the hydrogen atom}\label{testfiles:bound:H} \begin{tabular}{ll} input file: & \file{bound-hydrogen.input}\\ executable: & \file{bound-basic} \end{tabular} \file{bound-hydrogen.input} carries out single-channel bound-state calculations on the hydrogen atom, and demonstrates how to handle calculations in atomic units. It sets \inpitem{MUNIT} to the electron mass in Daltons, \inpitem{RUNIT} to the Bohr radius in \AA\ and $\inpitem{EUNITS}=7$ to select input energies in hartrees. It sets up a simple Coulomb potential, with the energy scaling factor set to the hartree in cm$^{-1}$, so that the potential is handled in atomic units. It uses the atom-rigid rotor basis with $\basisitem{JMAX}=0$ to generate a simple single-channel problem. Note that \basisitem{ROTI}\code{(1)} is set to the dummy value \code{1.0}; this value is not used because $\basisitem{JMAX}=0$, but it prevents the program from terminating prematurely. The wavefunction at the origin is of the form $r^{l+1}$, so its log-derivative is infinite at the origin. This is the default for locally closed channels, but is specified explicitly for the locally open $l=0$ channel. Because $\basisitem{JMAX}=0$, the orbital angular momentum $l$ is equal to \var{JTOT}. $\var{JTOT}=0$ produces $n$s levels at energies of $-1/(2n^2)$ for $n=1,2,...$, while $\var{JTOT}=1$ produces $n$p levels starting at $n=2$. \subsubsection{Bound-state energies of Mg + NH at specified magnetic fields}\label{testfiles:bound:MgNH} \begin{tabular}{ll} input file: & \file{bound-Mg\_NH.input}\\ executable: & \file{bound-Mg\_NH}\\ also required: & \file{data/pot-Mg\_NH.data} \end{tabular} \file{bound-Mg\_NH.input} locates the bound states of MgNH at specified magnetic fields. It uses a plug-in basis-set suite for a $^3\Sigma$ diatom colliding with a structureless atom. Radial potential coefficients are obtained by RKHS interpolation of the potential points of Sold\'an \emph{et al.{}}~\cite{Soldan:MgNH:2009}. The coupled equations are solved using the LDMD/AIRY hybrid propagation scheme. The run locates a single bound state at four different magnetic fields from 370~G to 385~G, from which it may be inferred that the state will cross threshold near 387~G. \subsubsection{Bound states of Mg + NH as a function of magnetic field} \label{testfiles:field:MgNH} \begin{tabular}{ll} input file: & \file{field-Mg\_NH.input}\\ executable: & \file{field-Mg\_NH}\\ also required: & \file{data/pot-Mg\_NH.data} \end{tabular} \file{field-Mg\_NH.input} locates magnetic fields in the range 0 to 400~G at which bound states exist for specific energies relative to the lowest scattering threshold of Mg + NH in a magnetic field. It uses the same basis-set suite and interaction potential as in section \ref{testfiles:bound:MgNH}. The coupled equations are solved using the LDMD/AIRY hybrid propagation scheme.
The run locates the same level as in section \ref{testfiles:bound:MgNH} at energies of 0, 20 and 40 MHz $\times\ h$ below threshold, and shows that it crosses threshold near 387.28~G. \subsubsection{\texorpdfstring{Locating threshold crossings for $^{85}$Rb$_2$}{Locating threshold crossings for 85Rb2}}\label{basic:rb2:field} \begin{tabular}{ll} input file: & \file{field-basic\_Rb2.input}\\ executable: & \file{field-Rb2} \end{tabular} \file{field-basic\_Rb2.input} locates magnetic fields where bound states cross the lowest scattering threshold for $^{85}$Rb$_2$. These are the fields at which zero-energy Fesh\-bach resonances exist. It uses a plug-in basis-set suite for a pair of alkali-metal atoms in a magnetic field, including hyperfine interactions. It uses the potential of Strauss \emph{et al.{}}~\cite{Strauss:2010}, implemented with potential coefficients incorporated in the executable. The coupled equations are solved using the LDMD/AIRY hybrid propagation scheme. The basis-set suite for this interaction requires information about the hyperfine properties of the atoms in an additional namelist block named \namelist{\&BASIS9}. The potential expansion comprises 3 terms: the singlet and triplet interaction potentials, and the spin-spin dipolar term, which is modelled in the form \begin{equation} \lambda(R)=E_{\rm h}\alpha^2\left[\frac{g_S^2}{4(R/a_0)^3}+A\exp(-\beta R/a_0)\right]. \end{equation} \subsubsection{\texorpdfstring{Bound states of $^{85}$Rb$_2$ as a function of magnetic field}{Bound states of 85Rb2 as a function of magnetic field}}\label{testfiles:field:85Rb2} \begin{tabular}{ll} input file: & \file{field-Rb2.input}\\ executable: & \file{field-Rb2} \end{tabular} \file{field-Rb2.input} locates bound states of $^{85}$Rb$_2$ as a function of magnetic field, using the same potential and basis-set suite as in section \ref{basic:rb2:field}. The calculation locates the magnetic fields (in the range 750 to 850 G) at which bound states exist with binding energies of 225, 175, 125, 75 and 25 MHz below the lowest threshold. There are, however, two bound states that these calculations fail to find, as they run almost parallel to the threshold, at about 140 and 220 MHz below it. To locate these bound states, one would need to do a calculation using {\sc bound}. \section{Program history}\label{history} {\sc bound}\ was originally written by Jeremy Hutson in 1984 to calculate bound states of van der Waals complexes by coupled-channel methods, using the same structures as {\sc molscat}~\cite{molscat:2019} to generate the coupled equations. Subsequent versions incorporated basis-set enhancements as they were made in {\sc molscat}. A fundamental change was made in {\sc bound}\ version 5 (1993) to base the convergence algorithm on individual eigenvalues of the log-derivative matching matrix~\cite{Hutson:CPC:1994}, rather than its determinant. Versions 4 (1992) and 5 (1993)~\cite{Hutson:bound:1993} were distributed via CCP6, the Collaborative Computational Project on Heavy Particle Dynamics of the UK Science and Engineering Research Council. {\sc bound}\ was extended to handle calculations in external electric and magnetic fields in 2007. {\sc field}\ was written by Jeremy Hutson in 2010, using the same structures as {\sc bound}\ to generate the coupled equations but designed to locate bound states as a function of external field at fixed energy, rather than as a function of energy. 
There has been no fully documented publication of {\sc bound}\ since version 5, and {\sc field}\ has never been published. \subsection{Principal changes in version \currentversion}\label{changes} \begin{itemize}[leftmargin=13pt] \item The basis-set plug-in mechanism has been extended to allow propagation in basis sets that are not eigenfunctions of the internal Hamiltonian $H_{\rm intl}$. This makes implementing new types of system much simpler than before, especially where the individual interaction partners have complicated Hamiltonians. \item The basis-set plug-in functionality has been used to add new capabilities to carry out calculations in external fields (electric, magnetic, and/or photon) and to loop over (sets of) values of the fields. \item The distance at which the calculation switches between short-range and long-range propagators ($R_{\rm mid}$) is now distinct from the distance at which the incoming and outgoing wavefunctions are matched ($R_{\rm match}$). \item The programs now do an outwards propagation from $R_{\rm min}$ to $R_{\rm match}$ and an inwards propagation from $R_{\rm max}$ to $R_{\rm match}$. The node count is calculated without needing a third propagation from $R_{\rm match}$ to $R_{\rm min}$ or $R_{\rm max}$. \item A more general mechanism for combining propagators has been implemented, allowing any sensible combination of propagators at short and long range. \item A more general choice of log-derivative boundary conditions at the starting points for propagation is now allowed. \item An additional propagation approach~\cite{MG:symplectic:1995} has been included, implemented by George McBane, which takes advantage of the symplectic nature of the multichannel radial Schr\"odinger equation. \end{itemize} \section{Acknowledgements} We are grateful to an enormous number of people who have contributed routines, ideas, and comments over the years. Any attempt to list them is bound to be incomplete. We owe an enormous debt to the late Sheldon Green, who developed the original {\sc molscat}\ program on which {\sc bound}\ and {\sc field}\ are based. Robert Johnson, David Manolopoulos, Millard Alexander and George McBane all contributed propagation methods and routines. Alice Thornley developed routines to propagate wavefunctions from log-derivative propagators. Maykel Leonardo Gonz\'alez-Mart\'\i{}nez worked on the addition of structures for non-diagonal Hamiltonians, including magnetic fields. This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) Grant Nos.\ EP/P01058X/1, EP/P008275/1 and EP/N007085/1. \bibliographystyle{elsarticle-num}
\section{Introduction} In this note we investigate Adrian Raftery's {\em mixture transition distribution model} (MTD) from the perspective of algebraic statistics \cite{LAS, ascb}. The MTD model, which was first proposed in \cite{Raf}, has a wide range of applications in engineering and the sciences \cite{RT}. The article by Berchtold and Raftery \cite{BR} offers a detailed introduction and review. The point of departure for this project was a conjecture due to Donald Richards \cite{Ric}, stating that the likelihood function of an MTD model can have multiple local maxima. We establish this conjecture for the case of binary states in Proposition \ref{thm:twolocal}. Our main result, to be derived in Section 4, gives an explicit Gr\"obner basis for the MTD model. Here, both the sequence length and the number of states are arbitrary. We begin with an algebraic description of the model in \cite{BR, Raf}. Fix a pair of positive integers $l$ and $m$, and set $N = m^{l+1}-1$. We define the statistical model ${\rm MTD}_{l,m}$ whose state space is the set $[m]^{l+1}$ of sequences $i_0 i_1 \cdots i_l$ of length $l+1$ over the alphabet $[m] = \{1,2,\ldots,m\}$. The model has $(m-1)m + l-1$ parameters, given by the entries of an $m \times m$-transition matrix $(q_{ij})$ and a probability distribution $\lambda = (\lambda_1,\ldots,\lambda_l)$ on the set $[l] = \{1,2,\ldots,l\}$ of the hidden states. Thus the parameter space is the product of simplices $\,(\Delta_{m-1})^m \times \Delta_{l-1}$. The model ${\rm MTD}_{l,m}$ will be a semialgebraic subset of the simplex $\Delta_N$. That simplex has its coordinates $p_{i_0 i_1 \cdots i_l}$ indexed by sequences in $[m]^{l+1}$. The model ${\rm MTD}_{l,m}$ is the image of the bilinear map $$ \phi_{l,m} \,: \, \,(\Delta_{m-1})^m \times \Delta_{l-1} \, \rightarrow \, \Delta_N $$ which is defined by the formula \begin{equation} \label{eq:param} p_{i_0 i_1 \ldots i_{l-1} i_l} \quad = \quad \frac{1}{m^{l}} \cdot \sum_{j=1}^{l} \lambda_j q_{i_{j-1},i_l} \end{equation} As is customary in algebraic statistics, we pass to a simpler object of study by considering the Zariski closure $\overline{{\rm MTD}}_{l,m}$ of our model in the complex projective space $\mathbb{P}^N$, and we seek to compute the homogeneous prime ideal of all polynomials in the $N+1$ unknowns $p_{i_0 i_1 \ldots i_l}$ that vanish on $\overline{{\rm MTD}}_{l,m}$. This particular goal will be reached in our Theorem \ref{thm:main}. The following probabilistic interpretation of the formula $(\ref{eq:param})$ makes it evident that $\sum p_{i_0 i_1 \cdots i_l} = 1$ holds on the image of $\phi_{l,m}$. We generate a sequence of length $l+1$ on $m$ states as follows. First we select from the uniform distribution on all $m^l$ sequences $i_0 i_1 \cdots i_{l-1}$ of length $l$. All that remains is to determine the state $i_l$ in position $l$. The mixture distribution $\lambda$ determines which of the earlier states gets used in the transition. With probability $\lambda_j$, we select position $j-1$ for that. The character in the last position $l$ is determined from the state $i_{j-1}$ in position $j-1$ using the transition matrix $(q_{ij})$. The model ${\rm MTD}_{l,m}$ is known to be identifiable \cite[\S 4.2]{BR}. Consequently, the dimension of the projective variety $\overline{{\rm MTD}}_{l,m}$ is equal to the number $(m-1)m + l-1$ of model parameters. A geometric characterization of this variety will be given in Corollary \ref{cor:segre}. 
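To make the parametrization (\ref{eq:param}) concrete, the following minimal computational sketch (included purely as an aid to the reader) evaluates the image point of $\phi_{l,m}$; it uses $0$-based state labels and illustrative parameter values.
\begin{verbatim}
# Minimal sketch of the parametrization phi_{l,m}:
#   p_{i_0 ... i_l} = m^{-l} * sum_{j=1}^{l} lambda_j q_{i_{j-1}, i_l}.
# States are 0-based here; all parameter values are illustrative.
import itertools
import numpy as np

def mtd_distribution(q, lam):
    """Return the tensor p of shape (m,)*(l+1) for MTD_{l,m}."""
    m, l = q.shape[0], lam.shape[0]
    p = np.empty((m,) * (l + 1))
    for seq in itertools.product(range(m), repeat=l + 1):
        head, last = seq[:-1], seq[-1]
        p[seq] = sum(lam[j] * q[head[j], last] for j in range(l)) / m**l
    return p

q = np.array([[0.7, 0.3], [0.2, 0.8]])   # rows are probability vectors
lam = np.array([0.4, 0.6])               # mixture distribution lambda
p = mtd_distribution(q, lam)             # here l = 2 and m = 2
print(p.sum())                           # total mass: prints 1.0
\end{verbatim}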
Equations defining Markov chains and Hidden Markov Models have received considerable attention in algebraic statistics \cite{Cri, HT, HDY, Sch}. We contribute to this literature by studying the algebraic geometry of a fundamental model for higher order Markov chains. In addition to our theoretical results in Theorems \ref{thm:eins} and \ref{thm:main}, readers from statistics will find in Section 3 an analysis of the behavior of the EM algorithm for binary MTD models. \section{Binary States} Our first result concerns the geometry of the model in the case $m=2$ of binary states. \begin{theorem} \label{thm:eins} The variety $\,\overline{{\rm MTD}}_{l,2}\,$ is a linear subspace of dimension $l+1$ in the projective space $\mathbb{P}^N$. This variety intersects the probability simplex $\Delta_N$ in a regular cross-polytope of dimension $l+1$. The model ${\rm MTD}_{l,2}$ is the union of two $(l+1)$-simplices spanned by vertices of the cross-polytope $\overline{{\rm MTD}}_{l,2} \cap \Delta_N$. The two simplices meet along a common~edge. \end{theorem} The {\em cross-polytope} is the free object in the category of centrally symmetric polytopes \cite{Zie}. It can be represented as the convex hull of all signed unit vectors $e_i$ and $-e_i$ where $i=0,1,\ldots,l$, so it is an $(l+1)$-dimensional polytope with $2l+2$ vertices and $2^{l+1}$ facets. Before we come to the proof of Theorem \ref{thm:eins}, let us first see some examples to illustrate it. In what follows we abbreviate the model parameters by $q_{11} = a$, $q_{21} = b$ and $\lambda_2 = \lambda$. \begin{example} \rm Theorem \ref{thm:eins} also applies in the trivial case $l = 1$, where (\ref{eq:param}) reads \begin{equation} \label{eq:2by2} \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix} \quad = \quad \begin{pmatrix} a/2 & (1-a)/2 \\ b/2 & (1-b)/2 \end{pmatrix}. \end{equation} The variety $\overline{{\rm MTD}}_{1,2}$ is the plane in $\mathbb{P}^3$ given by $p_{11} + p_{12} = p_{21} + p_{22}$. Its intersection with the tetrahedron $\Delta_3$ coincides with the model ${\rm MTD}_{1,2}$, which is a regular square: $$ {\rm MTD}_{1,2} \,\, = \,\, \overline{{\rm MTD}}_{1,2} \cap \Delta_3 \,= \, {\rm conv} \biggl\{ \begin{pmatrix} 1/2 &\! 0 \\ 1/2 &\! 0 \end{pmatrix},\, \begin{pmatrix} 1/2 &\! 0 \\ 0 &\!\!1/2 \end{pmatrix},\, \begin{pmatrix} 0 &\!\! 1/2 \\ 1/2 &\! 0 \end{pmatrix},\, \begin{pmatrix} 0 &\! 1/2 \\ 0 &\! 1/2 \end{pmatrix} \biggr\}. $$ The first three and last three matrices in this list form the two triangles referred to in Theorem \ref{thm:eins}. Their common edge consists of all transition matrices (\ref{eq:2by2}) of rank~$1$.~\qed \end{example} \begin{example} \rm Our first non-trivial example arises for $l=m=2$. The map $\phi_{2,2}$ is given~by $$ \!\! (a,b,\lambda) \mapsto p = \frac{1}{4} \! \begin{bmatrix} a e_{111} + (\lambda b + (1-\lambda) a ) e_{121} + (\lambda a + (1-\lambda) b ) e_{211} + b e_{221} + (1{-}a) e_{112} \\ + (\lambda (1{-}b) {+} (1{-}\lambda) (1{-}a) ) e_{122} + (\lambda (1{-}a) {+} (1{-}\lambda) (1{-}b )) e_{212} + (1{-}b) e_{222} \end{bmatrix} $$ Here $\{e_{111}, e_{112},\ldots, e_{222}\}$ denotes the standard basis in the space of $2 \times 2 \times 2$-tensors. The variety $\overline{{\rm MTD}}_{2,2} $ is the $3$-dimensional linear subspace of $\mathbb{P}^7$ defined by $$ \begin{matrix} p_{111}+p_{112} = p_{121} + p_{122}, & p_{211}+p_{212} = p_{221} + p_{222} , \\ p_{121}+p_{122} = p_{221} + p_{222}, & p_{111}+p_{221} = p_{121} + p_{211}.
\end{matrix} $$ The intersection of this linear space with the simplex $\Delta_7$ is the regular octahedron whose vertices are the images under $\phi_{2,2}$ of the vertices of the cube $\,(\Delta_1)^2 \times \Delta_1 $. The model ${\rm MTD}_{2,2}$ consists of two tetrahedra formed by vertices of the octahedron. Their common edge is the segment between $\frac{1}{4}(e_{111}+e_{121}+e_{211}+e_{221})$ and $\frac{1}{4}(e_{112}+e_{122}+e_{212}+e_{222})$. \qed \end{example} \begin{example} \label{ex23} \rm The statement of Theorem \ref{thm:eins} does not extend to $m \geq 3$. Consider the case $l=2, m = 3$. The $7$-dimensional variety $\overline{{\rm MTD}}_{2,3} $ lives in $\mathbb{P}^{26}$, and it is not a linear space. The linear span of $\overline{{\rm MTD}}_{2,3} $ is $10$-dimensional. Inside this $\mathbb{P}^{10}$, the variety $\,\overline{{\rm MTD}}_{2,3} $ has codimension $3$, degree $4$, and it is cut out by six quadrics. In Example~\ref{ex:neun} we shall display a Gr\"obner basis consisting of $16$ linear forms and six quadrics for its prime ideal. \qed \end{example} \begin{proof}[Proof of Theorem \ref{thm:eins}] It is known by \cite[\S 4.2]{BR} that the model is identifiable, so ${\rm MTD}_{l,2}$ is a semi-algebraic set of dimension $l+1$ in $\Delta_N$. Its Zariski closure $\overline{{\rm MTD}_{l,2}}$ is a variety of dimension $l+1$ in $\mathbb{P}^N$. That variety is irreducible because it is defined by way of a rational parametrization. For any binary sequence $i_0 i_1 \cdots i_{l-1}$, the identity \begin{equation} \label{eq:lin1} p_{i_0 i_1 \cdots i_{l-1} 2} \,\,=\,\, 2^{-l} - p_{i_0 i_1 \cdots i_{l-1} 1} \end{equation} holds on ${\rm MTD}_{l,2}$, so it suffices to consider relations on probabilities of sequences that end with $1$. On our model, these probabilities satisfy the linear equations \begin{equation} \label{eq:lin2} p_{i_0 i_1 \cdots i_r \cdots i_s \cdots i_{l-1} 1} + p_{i_0 i_1 \cdots \tilde{i}_r \cdots \tilde{i}_s \cdots i_{l-1} 1} \,\,=\,\, p_{i_0 i_1 \cdots i_r \cdots \tilde{i}_s \cdots i_{l-1} 1} + p_{i_0 i_1 \cdots \tilde{i}_r \cdots i_s \cdots i_{l-1} 1} . \end{equation} In other words, the $l$-dimensional $2 {\times} 2 {\times} \cdots {\times} 2$-tensor $(p_{i_0 i_1 \cdots i_{l-1} 1})$ has tropical rank $1$. The set of such tensors is a classical linear space of dimension $l+1$. Solving the linear equations (\ref{eq:lin1}) and (\ref{eq:lin2}) on the simplex $\Delta_N$, we obtain an $(l+1)$-dimensional polytope $P$ that contains the model ${\rm MTD}_{l,2}$. Its Zariski closure in $\mathbb{P}^N$ is an $(l+1)$-dimensional linear space that contains the variety $\overline{{\rm MTD}_{l,2}}$. Being irreducible varieties of the same dimension, they must be equal. This proves the first assertion. We next claim that the polytope $P$ of all non-negative real solutions to (\ref{eq:lin1}) and (\ref{eq:lin2}) is a regular cross-polytope. For $r \in \{0,1,\ldots,l-1\}$ and $s \in \{1,2\}$ define the $2l$ points $$ E_{rs} \,\,\,=\,\,\, \frac{1}{2^l} \cdot \biggl[ \sum \bigl\{ \,e_{i_0 i_1 \cdots i_{l-1} 1} \,| \, i_r = s \,\bigr\} \,+\, \sum \bigl\{ \,e_{i_0 i_1 \cdots i_{l-1} 2} \,| \, i_r \not= s \,\bigr\} \biggr] \quad \in \,\,\Delta_N . $$ These are extreme non-negative solutions of (\ref{eq:lin1}) and (\ref{eq:lin2}). They form the vertices of an $l$-dimensional cross-polytope, since $\,\frac{1}{2}(E_{r1} + E_{r2}) \,$ is equal to the uniform distribution $\, \frac{1}{2^{l+1}} e_{++\cdots++}\,$ for all $r$. 
In addition to the $2l$ vertices $E_{rs}$, the polytope $P$ has two more vertices, namely, $\,\frac{1}{2^l} e_{++\cdots+1}\,$ and $\,\frac{1}{2^l} e_{++\cdots+2}$. Hence $P$ is a bipyramid over the $l$-dimensional cross-polytope, so it is an $(l+1)$-dimensional cross-polytope. It remains to identify the model ${\rm MTD}_{l,2}$ inside $P$. The parameter polytope is the product $(\Delta_1)^2 \times \Delta_{l-1}$, and, as before, we chose coordinates $(a,b)$ on the square $(\Delta_1)^2$. The map $\phi_{l,2}$ contracts the simplex $\{(0,0)\} \times \Delta_{l-1}$ onto the vertex $\,\frac{1}{2^l} e_{++\cdots+2}$ of $P$, and it contracts the simplex $\{(1,1)\} \times \Delta_{l-1}$ onto the vertex $\,\frac{1}{2^l} e_{++\cdots+1}$ of $P$. The vertex $(0,1) \times e_r$ is mapped to the vertex $E_{r,2}$, and the vertex $(1,0) \times e_r$ is mapped to the vertex $E_{r,1}$. The parameter points with $a = b$ are contracted onto the line segment $S = [\frac{1}{2^l} e_{++\cdots+1},\frac{1}{2^l} e_{++\cdots+2}]$. The parameter points with $a < b$ are mapped bijectively onto the $(l+1)$-simplex formed by $S$ and $\{E_{0,2}, E_{1,2}, \ldots, E_{l-1,2}\}$, but with $S$ removed. The parameter points with $a > b$ are mapped bijectively onto the $(l+1)$-simplex formed by $S$ and $\{E_{0,1}, E_{1,1}, \ldots, E_{l-1,1}\}$, but with $S$ removed. Hence ${\rm MTD}_{l,2}$ equals the union of two $(l+1)$-simplices glued along the special diagonal $S$ of the cross-polytope $P$. \end{proof} \begin{corollary} For large $l$, there are far fewer distributions in the model ${\rm MTD}_{l,2}$ than distributions in its Zariski closure. Namely, with respect to Lebesgue measure, we have $$ \frac{{\rm vol}({\rm MTD}_{l,2})}{ {\rm vol}(\overline{{\rm MTD}}_{l,2} \cap \Delta_N)} \,\, = \,\,\frac{1}{2^{l-1}}. $$ \end{corollary} \begin{proof} We can triangulate the cross-polytope $P$ into $2^l$ simplices, all of the same volume and containing the special diagonal $S$. The model ${\rm MTD}_{l,2}$ consists of two of them. Hence $2/2^l$ is the fraction of the volume of $P = \overline{{\rm MTD}}_{l,2} \cap \Delta_N$ that is occupied by ${\rm MTD}_{l,2}$. \end{proof} \section{Likelihood inference} We next discuss maximum likelihood estimation (MLE) for the mixture transition distribution model ${\rm MTD}_{l,m}$. Any data set is represented by a function $\,u : [m]^{l+1} \rightarrow \mathbb{N}\,$ that records the frequency counts of the observed sequences. Given such a function $u$, our objective is to maximize the corresponding log-likelihood function \begin{equation} \label{eq:loglike} L_u \quad = \quad \sum_{i_0 i_1 \cdots i_l} u_{i_0 i_1 \cdots i_l} \cdot {\rm log}(p_{i_0 i_1 \cdots i_l}) \end{equation} over all probability distributions that lie in the model ${\rm MTD}_{l,m}$. A standard method for solving this optimization problem is the expectation-maximization (EM) algorithm. Other algorithms for the same task can be found in \cite{Ber, RT}. A general version of the EM algorithm for algebraic models with discrete data is described in \cite[\S 1.3]{ascb}, while the specific case of the MTD model is treated in \cite[\S 4.5]{BR}. Richards \cite{Ric} conjectured that the EM algorithm for the MTD model may get stuck in local maxima. Our next result confirms that this is indeed the case, even for $m=2$. \begin{proposition} \label{thm:twolocal} The log-likelihood function $L_u$ on the binary model ${\rm MTD}_{l,2}$ has either one or two local maxima. 
With probability one, there will be two local maxima, and both of these will be reached by the EM algorithm for different choices of initial parameters. \end{proposition} Here the statement about ``probability one'' in the second sentence refers to any absolutely continuous probability distribution that is positive on the simplex $\Delta_N$. \begin{proof} We saw in Theorem \ref{thm:eins} that ${\rm MTD}_{l,2}$ is the union of two convex polytopes. The log-likelihood function $L_u$ is strictly concave on the ambient simplex $\Delta_N$, so it attains a unique maximum on each of the two polytopes. This proves the first statement. For the second statement consider the empirical distribution $u/|u|$ which is a point in $\Delta_N$. Its log-likelihood function $L_u$ has a unique maximum $p^*$ in the interior of the cross-polytope $P$. With probability one, this maximum $p^*$ will not lie in the segment $S$, so let us assume that this is the case. Then either $p^*$ lies in precisely one of the two $(l+1)$-simplices that make up ${\rm MTD}_{l,2}$, or $p^*$ does not lie in ${\rm MTD}_{l,2}$. In the former case, $p^*$ is the MLE, and the maximum over the other simplex is in the boundary of that simplex and constitutes a second local maximum. In the latter case, each of the two simplices has a local maximum in its boundary. When choosing starting parameter values near either of these local maxima, the EM algorithm converges to that local maximum. \end{proof} The point $p^*$ in the cross-polytope $P$ at which $L_u$ attains its maximum is an algebraic function of the data $u$. The degree of this algebraic function is the {\em ML degree} (see \cite{HKS}) of the linear subvariety $\,\overline{{\rm MTD}}_{l,2}\,$ of $\mathbb{P}^N$. By Varchenko's Formula \cite[Theorem 1.5]{ascb}, this ML degree coincides with the number of bounded regions in an arrangement of hyperplanes. This arrangement lives inside the affine space that is cut out by (\ref{eq:lin1}) and (\ref{eq:lin2}) and it consists of the restrictions of the coordinate hyperplanes $\{ p_\bullet = 0\}$. Computations show that the ML degree equals $9$ for $l = 3$, and it equals $209$ for $l = 4$. It would be interesting to find a general formula for that ML degree as a function of $l$. The local maxima that occur on the boundary of the two simplices of ${\rm MTD}_{l,2}$ have ML degree $1$, that is, they are expressed as rational functions in the data $u$. Indeed, these local maxima are precisely the estimates for the Markov chain obtained by fixing $\lambda_i = 1$ for some $i$. Hence, if $p^* \not\in {\rm MTD}_{l,2}$, then the MLE is a rational expression in $u$. The next example illustrates the behavior of the EM algorithm for $m=2$ and $l = 3$. \begin{example} \rm The data consists of eight positive integers, here written as a matrix $$ U \,\,\, = \,\,\, \begin{pmatrix} u_{111} & u_{121} & u_{211} & u_{221} \\ u_{112} & u_{122} & u_{212} & u_{222} \end{pmatrix}. 
$$ The MLE $\hat p$ will be either $$ p' \quad = \quad \frac{1}{2|u|} \begin{pmatrix} u_{111} + u_{211} & u_{121} + u_{221} & u_{111} + u_{211} & u_{121} + u_{221} \\ u_{112} + u_{212} & u_{122} + u_{222} & u_{112} + u_{212} & u_{122} + u_{222} \end{pmatrix} $$ or $$ p'' \quad = \quad \frac{1}{2|u|} \begin{pmatrix} u_{111}+u_{121} & u_{111}+u_{121} & u_{211}+u_{221} & u_{211}+u_{221} \\ u_{112}+u_{122} & u_{112}+u_{122} & u_{212}+u_{222} & u_{212}+u_{222} \end{pmatrix}, $$ or it will be the unique probability distribution satisfying (\ref{eq:lin1}), (\ref{eq:lin2}), and \begin{equation} \label{eq:rank4} {\rm rank} \begin{pmatrix} u_{111} & u_{112} & u_{121} & u_{122} & u_{211} & u_{212} & u_{221} & u_{222} \\ p_{111} & p_{112} & p_{121} & p_{122} & p_{211} & p_{212} & p_{221} & p_{222} \\ p_{111} & p_{112} & -p_{121} & -p_{122} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & p_{211} & p_{212} & -p_{221} & -p_{222} \\ 0 & 0 & p_{121} & p_{122} & 0 & 0 & -p_{221} & -p_{222} \\ p_{111} & 0 & -p_{121} & 0 & -p_{211} & 0 & p_{221} & 0 \end{pmatrix} \,\leq \,5 . \end{equation} This is the matrix denoted $\begin{bmatrix} u \\ \tilde{J} \end{bmatrix} $ in \cite[\S 3]{HKS}. The rank constraint (\ref{eq:rank4}) represents Proposition~2 in \cite{HKS}. The unique probability distribution that lies in our model and also satisfies (\ref{eq:rank4}) was called $p^*$ in the proof of Proposition \ref{thm:twolocal}. Its defining constraints (\ref{eq:lin1}), (\ref{eq:lin2}) and (\ref{eq:rank4}) form a system of polynomial equations that has $9$ complex solutions. The distribution $p^*$ is the unique solution to that system whose coordinates are both real and positive. The trichotomy in this example is best explained by the following observations: For almost all data matrices $U$, the three points $p', p'', p^*$ are distinct, one of them coincides with the global maximum $\hat p$ of $L_u$ over ${\rm MTD}_{l,2}$, and another one is a local maximum. \qed \end{example} It would be interesting to extend the findings in Proposition \ref{thm:twolocal} to $m \geq 3$. The algebraic tools that may be needed for such an analysis are developed in the next section. \section{Non-linear Models} In this section we examine the geometry of the model ${\rm MTD}_{l,m}$ and the variety $\overline{{\rm MTD}}_{l,m}$ for an arbitrary number $m$ of states. In particular, we prove that its prime ideal is minimally generated by linear forms and quadrics. These minimal generators form a Gr\"obner basis. \begin{theorem} \label{thm:main} The variety $\overline{{\rm MTD}}_{l,m}$ spans a linear space of dimension $(m-1)(lm-l+1)$ in $\mathbb{P}^N$. In this linear space, its prime ideal is given by the $2 \times 2$-minors of an $l \times (m-1)^2$-matrix of linear forms. The linear and quadratic ideal generators form a Gr\"obner basis. \end{theorem} This theorem explains our earlier result that the model is linear for binary states. Indeed, for $m=2$, the dimension $(m-1)m+l-1$ of the model coincides with the dimension $(m-1)(lm-l+1)$ of the ambient linear space, and there are no $2 \times 2$-minors. \begin{proof} We shall present an explicit Gr\"obner basis consisting of linear forms and quadrics. The term order we choose is the reverse lexicographic term order induced by the lexicographic order on the states $i_0 i_1 \cdots i_l $ of the model. We first consider the linear relations \begin{equation} \label{eq:linrel1} \underline{ p_{i_0 i_1 i_2 \cdots i_{l-1} i_l}} - \sum_{j=0}^{l-1} p_{m \cdots m i_j m \cdots m i_l} + (l-1) p_{m m \cdots mm i_l} .
\end{equation} This linear form is non-zero and has the underlined leading term if and only if at least two of the entries of the $l$-tuple $(i_0,i_1,\ldots,i_{l-1})$ are not equal to $m$. Thus the number of distinct Gr\"obner basis elements (\ref{eq:linrel1}) equals $\,m^{l+1} - m (1+l(m-1))$. Our second class of Gr\"obner basis elements consists of the linear relations \begin{equation} \label{eq:linrel2} \begin{matrix} & \underline{p_{m \cdots m i_j m \cdots m 1}} + p_{m \cdots m i_j m \cdots m 2} + \cdots + p_{m \cdots m i_j m \cdots m m} \\ - & p_{m \cdots m m m \cdots m 1} - p_{m \cdots m m m \cdots m 2} - \cdots - p_{m \cdots m m m \cdots m m} . \end{matrix} \end{equation} These linear forms are non-zero with the underlined leading term provided $0 \leq j \leq l-1$ and $1 \leq i_j \leq m-1$. The number of distinct linear forms (\ref{eq:linrel2}) equals $l(m-1)$, and the set of their leading terms is disjoint from the set of leading terms in (\ref{eq:linrel1}). The number of unknowns $p_\bullet$ not yet underlined equals $\,l(m-1)^2+(m-1) + 1 $. We use these unknowns to form $m-1$ matrices $A_2,A_3,\ldots,A_{m}$, each having format $l \times (m-1)$, as follows. Define the matrix $A_r$ by placing the following entry in row $j$ and column $i_j$: \begin{equation} \label{eq:matrixrel1} \underline{p_{m \cdots m i_j m \cdots m r}} \,-\, p_{m \cdots mmm \cdots m r} . \end{equation} We finally form an $l \times (m-1)^2$ matrix by concatenating these $m-1$ matrices: \begin{equation} \label{eq:matrixrel2} A \,\, = \,\, \bigl( \,A_2 \, A_3 \, \,\cdots \,\,A_{m} \bigr) . \end{equation} The third and last group of polynomials in our Gr\"obner basis is the set of $2 \times 2$-minors of $A$. The entries of $A$ have distinct leading terms, underlined in (\ref{eq:matrixrel1}), and the leading term of each $2 \times 2$-minor is the product of the leading terms on the main diagonal. Note that we could also define the matrix $A_1$ and include it when forming (\ref{eq:matrixrel2}). This would not change the ideal, but it would lead to a generating set that is not minimal. It is well-known that the $ 2 \times 2$-minors of a matrix of unknowns form a Gr\"obner basis for the prime ideal they generate. Since no unknown $p_\bullet$ underlined in (\ref{eq:linrel1}) or (\ref{eq:linrel2}) appears in the matrix $A$, it follows that these linear relations together with the $2 \times 2$-minors of (\ref{eq:matrixrel2}) generate a prime ideal and form a Gr\"obner basis for that prime ideal. The ideal of $2 \times 2$ minors of $A$ has codimension $l(m-1)^2 - l - (m-1)^2 + 1$. Subtracting this quantity from the number $\,l(m-1)^2+(m-1) + 1 \,$ of unknowns not underlined in (\ref{eq:linrel1}) or (\ref{eq:linrel2}), we obtain $\, l+(m-1)^2-1 + (m-1) + 1 \, = \,(m-1)m+l$. This is the dimension of the affine variety defined by our prime ideal. The corresponding irreducible projective variety has dimension $\,(m-1)m+l-1$. This is precisely the dimension of $\overline{{\rm MTD}}_{l,m}$. It hence suffices to prove that our variety contains the model ${\rm MTD}_{l,m}$, or, equivalently, that the linear forms (\ref{eq:linrel1}) and (\ref{eq:linrel2}) are mapped to $0$ by the parameterization (\ref{eq:param}), and that the specialized matrix $\phi_{l,m}(A)$ has rank $1$. 
For (\ref{eq:linrel2}) this is obvious because, for fixed $i_j$, $$ \sum_{r=1}^m \phi^*_{l,m} \bigl( p_{m \cdots m i_j m \cdots m r} \bigr)\,\, = \,\, \frac{1}{m^l} .$$ Here $\phi^*_{l,m}$ denotes the homomorphism of polynomial rings induced by the map $\phi_{l,m}$. The indices of the unknowns in the linear form (\ref{eq:linrel1}) all have the same letter $i_l$ at the end. The formula (\ref{eq:param}) for the corresponding probabilities can thus be written as $$ \phi^*_{l,m} (p_{i_0 i_1 \cdots i_{l-1} i_l}) \,\, = \,\, u + x_{i_0} + y_{i_1} + \cdots + z_{i_{l-1}}. $$ In other words, for any fixed $i_l$, the resulting $l$-dimensional tensor has tropical rank $1$. This representation implies linear relations like (\ref{eq:lin2}), and these are equivalent to (\ref{eq:linrel1}). Finally, if we apply our ring homomorphism to (\ref{eq:matrixrel1}) then we get \begin{equation} \label{eq:getlambda} \phi_{l,m}^*(p_{m \cdots m i_j m \cdots m r}) \,-\, \phi_{l,m}^*(p_{m \cdots mmm \cdots m r}) \,\, = \,\, m^{-l} \lambda_{j} \cdot (q_{i_j,r} -q_{m,r}). \end{equation} Thus, the matrix $\phi_{l,m}^*(A)$ is $m^{-l}$ times the product of the column vector $(\lambda_1,\ldots,\lambda_l)$ and a row vector of length $(m-1)^2$ whose entries are $q_{i_j,r}-q_{m,r}$ for $2 \leq r \leq m$ and $1 \leq i_j \leq m-1$. In particular, the matrix $\phi_{l,m}^*(A)$ has rank $\leq 1$. This completes the proof of Theorem~\ref{thm:main}. \end{proof} \begin{remark} \rm The prime ideal in Theorem \ref{thm:main} is the kernel of $\phi_{l,m}^*$, so it characterizes the image of the model parametrization $\phi_{l,m}$. On the model ${\rm MTD}_{l,m}$, the map $\phi_{l,m}$ can be inverted as long as the rows of the transition matrix $(q_{ij})$ are distinct. Indeed, $q_{ij}$ equals $m^l \phi^*_{l,m}(p_{ii\cdots iij})$, and the coordinates of $\lambda$ are identified from (\ref{eq:getlambda}). Thus, our result refines the well-known fact that MTD models are identifiable \cite[\S 4.2]{BR}. \end{remark} \begin{example} \label{ex:neun} \rm We illustrate Theorem \ref{thm:main} for the case $l=2, m=3$, by presenting the Gr\"obner basis promised in Example~\ref{ex23}. Note that $N = 26$. Here the ambient linear space has dimension $(m-1)(lm-l+1)= 10$, and our Gr\"obner basis for that linear space consists of twelve linear forms (\ref{eq:linrel1}) and four linear forms (\ref{eq:linrel2}). These are, respectively, $$ \begin{matrix} \underline{p_{111}} {-} p_{311} {-} p_{131} {+} p_{331} ,\, \underline{p_{121}} {-} p_{321} {-} p_{131} {+} p_{331} ,\, \underline{p_{211}} {-} p_{311} {-} p_{231} {+} p_{331} , \, \underline{p_{221}} {-} p_{321} {-} p_{231} {+} p_{331} ,\\ \underline{p_{112}} {-} p_{312} {-} p_{132} {+} p_{332} , \, \underline{p_{122}} {-} p_{322} {-} p_{132} {+} p_{332} ,\, \underline{p_{212}} {-} p_{312} {-} p_{232} {+} p_{332} , \, \underline{p_{222}} {-} p_{322} {-} p_{232} {+} p_{332} ,\\ \underline{p_{113}}{-} p_{313} {-} p_{133} {+} p_{333} , \, \underline{p_{123}} {-} p_{323} {-} p_{133} {+} p_{333} ,\, \underline{p_{213}} {-} p_{313} {-} p_{233} {+} p_{333} , \, \underline{p_{223}} {-} p_{323} {-} p_{233} {+} p_{333}. \end{matrix} $$ $$ \begin{matrix} {\rm and} \quad & \underline{p_{311}} +p_{312}+p_{313}-p_{331}-p_{332}-p_{333}\,,\,\, \underline{p_{321}} +p_{322}+p_{323}-p_{331}-p_{332}-p_{333} \,,\\ & \underline{p_{131}} +p_{132}+p_{133}-p_{331}-p_{332}-p_{333} \,,\,\, \underline{p_{231}} +p_{232}+p_{233}-p_{331}-p_{332}-p_{333}.
\end{matrix} $$ The remaining $l(m-1)^2 + (m-1) + 1 = 8 + 2 + 1 = 11$ unknowns that are not yet underlined are $\,p_{132}, p_{232}, p_{312}, p_{322}, p_{133}, p_{233}, p_{313}, p_{323}, \, p_{332}, p_{333},\,p_{331}$. These represent coordinates on the linear subspace $\mathbb{P}^{10}$ of $\mathbb{P}^{26}$ that is cut out by these linear forms. Inside that linear subspace $\mathbb{P}^{10}$, our variety $\overline{{\rm MTD}}_{2,3} $ has codimension $3$, and it is defined ideal-theoretically by the $2 \times 2$-minors of the $2 \times 4$-matrix $$ A \quad = \quad \bigl(\,A_2 \,\,A_3 \,\bigr) \,\, = \,\, \begin{pmatrix} \, \underline{p_{132}}-p_{332} & \underline{p_{232}}-p_{332} \, &\, \, \underline{p_{133}}-p_{333} & \underline{p_{233}}-p_{333} \,\\ \, \underline{p_{312}}-p_{332} & \underline{p_{322}}-p_{332} \,&\, \, \underline{p_{313}}-p_{333} & \underline{p_{323}}-p_{333}\, \end{pmatrix}. $$ These six quadrics, together with the $16$ linear forms, form a reduced Gr\"obner basis. \qed \end{example} Our proof of Theorem \ref{thm:main} gives rise to the following geometric description: \begin{corollary} \label{cor:segre} The projective variety $\overline{{\rm MTD}}_{l,m} $ is a cone with base $\mathbb{P}^{m-1}$ over the Segre variety $\mathbb{P}^{l-1} \times \mathbb{P}^{m^2-2m}$. If $m \geq 3$, then this variety is singular and its singular locus is the $\mathbb{P}^{m-1}$ that forms the base of that cone. The degree of $\overline{{\rm MTD}}_{l,m} $ equals $\binom{l+(m-1)^2-2}{l-1}$. \end{corollary} \begin{proof} The ideal of the singular locus of $\overline{{\rm MTD}}_{l,m}$ is generated by the entries of the matrix $A$ together with the linear forms (\ref{eq:linrel1}) and (\ref{eq:linrel2}). Together, these linear equations are equivalent to requiring that the value of $\,p_{i_0 i_1 \cdots i_{l-1} r} \,$ depends only on $r$, and not on $i_0 i_1 \cdots i_{l-1}$. These constraints define a linear space $\mathbb{P}^{m-1}$ in $\mathbb{P}^N$. The $2 \times 2$-minors of an $l \times (m{-}1)^2$ matrix define the Segre variety $\mathbb{P}^{l-1} \times \mathbb{P}^{m^2-2m}$, whose degree is known to be the binomial coefficient. \end{proof}
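The rank-one structure of $\phi^*_{l,m}(A)$ used at the end of the proof of Theorem \ref{thm:main} is easy to verify symbolically in small cases. The following sketch (illustrative only, and not part of the mathematical argument) checks for $l=2$ and $m=3$ that all $2\times 2$-minors of the specialized matrix $A$ from Example \ref{ex:neun} vanish:
\begin{verbatim}
# Symbolic check that phi*(A) has rank <= 1 for l = 2, m = 3, i.e.
# that all 2x2 minors of the specialized matrix A vanish identically.
import itertools
import sympy as sp

l1, l2 = sp.symbols('lambda1 lambda2')              # mixture weights
q = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'q{i+1}{j+1}'))

def p(i0, i1, r):   # the parametrization for l = 2, m = 3 (1-based)
    return (l1 * q[i0 - 1, r - 1] + l2 * q[i1 - 1, r - 1]) / 9

cols = [(i, r) for r in (2, 3) for i in (1, 2)]     # columns of (A2 A3)
A = sp.Matrix(2, 4, lambda row, c:
              (p(cols[c][0], 3, cols[c][1]) if row == 0
               else p(3, cols[c][0], cols[c][1])) - p(3, 3, cols[c][1]))

assert all(sp.expand(A[0, a] * A[1, b] - A[0, b] * A[1, a]) == 0
           for a, b in itertools.combinations(range(4), 2))
print("all 2x2 minors vanish")
\end{verbatim}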
\section{Introduction} More than fifty years after the seminal work of Anderson \cite{Ander58}, the localization of single-particle wavefunctions in disordered quantum systems continues to attract significant interest \cite{abr2010}. Exactly solvable models have naturally played a crucial role in the understanding of the transition between localized and extended wavefunctions. One of the most important models consists of a single particle hopping among the nodes of an infinitely large Cayley tree with on-site disorder \cite{Abou73}. In contrast to its counterpart in finite dimensions, this mean-field version of the Anderson model is exactly solvable thanks to the absence of loops. The statistics of energy levels and eigenfunctions of the Anderson model on locally treelike random graphs have lately re-emerged as a central problem in condensed matter theory. This is due to the connection between this class of models and localization in interacting many-body systems. Essentially, the structure of localized wavefunctions in the Fock space of many-body quantum systems can be mapped onto the localization problem of a single particle hopping on a treelike graph with quenched disorder \cite{Altshuler1997,Gornyi2005,Basko2006}. The phenomena of many-body localization and ergodicity breaking prevent isolated quantum systems from equilibrating, which has serious consequences for the foundations of equilibrium statistical mechanics \cite{Huse2015,Pino2016}. The prototypical model for inspecting the statistics of energy levels and eigenfunctions is the Anderson model on a regular random graph (RRG). Regular random graphs have a local treelike structure, but loops containing typically $O(\ln N)$ sites are present. Another difference between a RRG and a Cayley tree (both with finite $N$) is the absence of boundary nodes in the former. The majority of sites of a Cayley tree lies on its boundary, which pathologically influences the eigenfunctions within the delocalized phase \cite{Monthus,Tikhonov2016}. Although the complete characterization of the phase diagram of the Anderson model on a RRG is still a work in progress \cite{Biroli2010,Aizenman1,Aizenman2}, it is well established that the extended phase appears at the center of the band as long as the disorder strength $W$ is smaller than a critical value $W_c$ \cite{Abou73,Biroli2012}. Recently there has been an intense debate concerning the ergodicity of the eigenfunctions within the extended phase of the Anderson model on RRGs, and two main pictures have emerged. On one side, it has been put forward that, for a certain interval $W_E < W < W_c$, there exists an intermediate phase where the eigenfunctions are multifractal \cite{Biroli2012,Luca2014,Alt2016,Altshuler2016} and the inverse participation ratio scales as $N^{-\tau(W)}$ ($N \gg 1$), with $0 < \tau(W) < 1$ \cite{Alt2016,Altshuler2016}. The results supporting this picture are mostly based on a numerical diagonalization study of the fractal exponents \cite{Luca2014,Alt2016,Altshuler2016}, combined with a semi-analytical approach to solve the self-consistent equations \cite{Abou73}. The transition between ergodic and non-ergodic extended eigenstates at $W_E$ is discontinuous \cite{Alt2016,Altshuler2016}, somewhat analogous to the one-step replica symmetry breaking transition observed in some spin-glass systems \cite{MezPar,MezPar1}.
On the other side, it has been argued that the inverse participation ratio scales as $1/N$ ($N \gg 1$) and the energy levels follow the Wigner-Dyson statistics within the whole extended phase \cite{Mirlin2016,Garcia2016,Tikhonov2016}. The results supporting the ergodicity of the extended eigenstates are mainly based on numerical diagonalization \cite{Mirlin2016,Tikhonov2016}. The statistical properties of the energy levels and eigenfunctions display a non-monotonic behavior for increasing $N$, essentially reducing the non-ergodicity of the eigenfunctions to a finite-size effect. This picture is consistent with earlier analytical predictions for the problem of a single quantum particle hopping on an Erd\"os-R\'enyi random graph \cite{Mirlin91,Fyod}, a treelike model closely related to the Anderson model on a RRG. In this work we add an important contribution to this heated debate. We probe the eigenvalue statistics by solving an exact set of equations for the level compressibility $\chi$ of the number of energy levels inside the interval $[-L/2,L/2]$. By considering the limit $L \rightarrow 0^{+}$ (see below), this quantity allows one to distinguish among the three conventional statistical behaviors of the energy levels found in Anderson models: Wigner-Dyson statistics \cite{BookMehta}, Poisson statistics (localized states) \cite{BookMehta} and sub-Poisson statistics (multifractal states) \cite{Altshuler88,Chalker1996,Bogomolny2011,Mirlin}. We calculate $\chi$ as a function of $L \ll 1$ within the extended phase, including some values of $W < W_c$ in an interval of disorder strength close to the critical point. This is the relevant regime of $W$ where one would expect the presence of multifractal eigenstates, according to recent numerical results \cite{Alt2016}. Our results consistently show that $\chi$ approaches zero in the limit $L \rightarrow 0^{+}$ for all values of $W < W_c$ considered here, which strongly supports the Wigner-Dyson statistics of the energy levels in the whole extended phase. The level-compressibility has a non-monotonic behavior as a function of $L$, which resembles the system-size dependence discussed in previous works \cite{Mirlin2016,Alt2016}. Our approach is based on the numerical solution of an exact set of equations derived previously through the replica-symmetric method and valid in the limit $N \rightarrow \infty$ \cite{Metz2016}. We discuss the possible role of replica symmetry breaking in $\chi$ and the relation of our results with the problem of the existence of non-ergodic extended states. The paper is organized as follows. In the next section we define the Anderson model on a regular random graph. We present the main equations for the level compressibility and the corresponding results within the extended phase in section \ref{results}. Section \ref{discussion} discusses the possible role of replica symmetry breaking and the relation of our results with previous works. The details of the numerical procedure to solve the equations for $\chi$ are presented in the appendix.
\section{The Anderson model on a regular random graph} The tight-binding Hamiltonian for a spinless particle moving in a random potential is given by \beeq{ \mathcal{H}=-\sum_{ij = 1}^N t_{ij}\left(c_{i}^\dag c_{j}+c_{j}^\dag c_i \right)+\sum_{i=1}^N\epsilon_i c_i^\dag c_i\,, \label{eq1aa} } where $t_{ij}$ is the energy for the hopping between nodes $i$ and $j$, while $\epsilon_1, \dots,\epsilon_N $ are the on-site random potentials drawn from the uniform distribution $P_{\epsilon}(\epsilon)=(1/W)\Theta(W/2-|\epsilon|)$, with $W \geq 0$. The hopping coefficients $\{ t_{ij} \}_{i,j=1,\dots,N}$ correspond to the entries of the adjacency matrix of a regular random graph (RRG) with connectivity $k+1$ \cite{Bollobas,Wormald}. A matrix element $t_{ij}$ is equal to one if there is a link between nodes $i$ and $j$, and $t_{ij} = 0$ otherwise. The ensemble of random graphs can be defined through the full distribution of the adjacency matrix elements $\{ t_{ij} \}_{i,j=1,\dots,N}$. For the explicit form of this distribution, we refer the reader to \cite{Metz2016}. For $k=2$, where each node is connected to precisely three neighbors, all eigenfunctions become localized provided $W > W_c \simeq 17.4$ \cite{Abou73,Biroli2010}. The value of $W_c$ is the same for the infinitely large Cayley tree and the RRG. \section{Results for the level compressibility} \label{results} Let $\mathcal{I}_L^{(N)}$ denote the number of energy levels inside $[-L/2,L/2]$, \begin{equation} \begin{split} \mathcal{I}^{(N)}_L= N\int_{-L/2}^{L/2} dE \, \rho_N(E)\,, \label{ssq1} \end{split} \end{equation} where $\rho_N(E)=(1/N)\sum_{i=1}^N\delta(E-E_i)$ is the density of energy levels $E_1,\ldots,E_N$ between $E$ and $E + dE$. We define the $N \rightarrow \infty$ limit of the level-compressibility as follows \cite{BookMehta,Mirlin} \begin{equation} \chi(L,W) = \lim_{N \rightarrow \infty} \frac{n^{(N)}(L) }{\left\langle \mathcal{I}^{(N)}_L \right\rangle } \,, \label{hhj} \end{equation} with the number variance \begin{equation} n^{(N)}(L) = \left\langle \left( \mathcal{I}^{(N)}_L \right)^2 \right\rangle - \left\langle \mathcal{I}^{(N)}_L \right\rangle^2 \end{equation} characterizing the fluctuation of the energy levels within $[-L/2,L/2]$. The symbol $\langle \dots \rangle$ represents the ensemble average with respect to the graph distribution and the distribution of the on-site potentials. Let us consider the behavior of $\chi(L,W)$ when $L=s/N$, i.e., the interval $[-L/2,L/2]$ is measured in units of the mean level spacing $1/N$. Energy levels following the Wigner-Dyson statistics strongly repel each other and the number variance scales as $n^{(N)}(L) \propto \ln \left\langle \mathcal{I}^{(N)}_L \right\rangle$ ($s \gg 1$), yielding $\chi(L,W) = 0$ \cite{BookMehta}. In the case of localized eigenfunctions, the energy levels are uncorrelated random variables with a Poisson distribution, the number variance is given by $n^{(N)}(L) = \langle \mathcal{I}^{(N)}_L \rangle$ ($s \gg 1$) and, consequently, we have $\chi(L,W) = 1$ \cite{BookMehta}. Finally, if the energy levels follow a sub-Poisson distribution, the number variance scales linearly with $\langle \mathcal{I}^{(N)}_L \rangle$ ($s \gg 1$), but $0 < \chi(L,W) < 1$ \cite{Altshuler88,Chalker1996,Bogomolny2011,Mirlin}. This is the typical behavior of $\chi(L,W)$ at the critical point for the Anderson transition in finite dimensions \cite{Chalker1996} as well as for critical random matrix ensembles \cite{Bogomolny2011}, in which the eigenfunctions exhibit a multifractal behavior.
Thus, the level-compressibility is a suitable quantity to distinguish among Wigner-Dyson, Poisson and sub-Poisson statistics of the energy levels. The first $\kappa_1^{(N)}$ and second $\kappa_2^{(N)}$ cumulants of the random variable $\mathcal{I}^{(N)}_L$ read \begin{eqnarray} \kappa_1^{(N)}(L,W)= \frac{\partial \mathcal{F}_L^{(N)}(y)}{\partial y}\Big|_{y=0} = \frac{ \left\langle \mathcal{I}^{(N)}_L \right\rangle}{N} \,, \label{fgh0} \\ \kappa_2^{(N)}(L,W)= - \frac{\partial^2 \mathcal{F}_L^{(N)}(y)}{\partial y^2}\Big|_{y=0} = \frac{n^{(N)}(L)}{N} \,, \label{fgh} \end{eqnarray} where the cumulant generating function $\mathcal{F}_L^{(N)}(y)$ for the statistics of $\mathcal{I}^{(N)}_L$ is given by \begin{equation} \begin{split} \mathcal{F}_L^{(N)}(y)=-\frac{1}{N}\ln \bracket{e^{-y\mathcal{I}_L^{(N)}}} \,. \label{eq:cgf1} \end{split} \end{equation} Substituting eqs. (\ref{fgh0}) and (\ref{fgh}) in eq. (\ref{hhj}), we see that the level-compressibility can be written in terms of the cumulants \begin{equation} \chi(L,W) = \frac{\kappa_2(L,W) }{\kappa_1(L,W)}\,, \end{equation} with $\kappa_{1,2}(L,W) \equiv \lim_{N \rightarrow \infty} \kappa_{1,2}^{(N)}(L,W)$. Thus, the calculation of $\chi(L,W)$ reduces to evaluating $\mathcal{F}_L^{(N)}(y)$ in the limit $N \rightarrow \infty$, from which the first and second cumulants are readily obtained. Here we briefly recall the analytical approach to calculate $\lim_{N \rightarrow \infty} \mathcal{F}_L^{(N)}(y)$ and then we present the final equations for the first and second cumulants. A detailed account of this computation is presented in \cite{Metz2016}. Our first task consists in expressing $\mathcal{F}_L^{(N)}(y)$ in terms of the disordered Hamiltonian $\mathcal{H}$, such that we are able to compute analytically the ensemble average $\langle \dots \rangle$ in eq. (\ref{eq:cgf1}). This is achieved by representing the Heaviside step function $\Theta(x)$ ($x \in \mathbb{R}$) in terms of the discontinuity of the complex logarithm along the negative real axis, i.e., $\Theta(-x)=\frac{1}{2\pi i}\lim_{\eta\to 0^{+}}[\ln (x+i\eta)-\ln (x-i\eta)]$. Using this prescription in eq. (\ref{ssq1}), we derive \begin{equation} \begin{split} \mathcal{I}_L^{(N)}&=-\frac{1}{\pi i}\lim_{\eta\to0^{+}}\ln\left[\frac{Z( L/2-i\eta)Z(-L/2+i\eta)}{Z( L/2+i\eta)Z( -L/2-i\eta)}\right] \label{eq:pin}\,, \end{split} \end{equation} where $Z(z)=\left[\det\left(\mathcal{H}-z\openone\right)\right]^{-1/2}$ ($z \in \mathbb{C}$), with $\openone$ the $N\times N$ identity matrix, $(\cdots)^\star$ the complex conjugation, and $\eta > 0$ a regularizer. Combining eqs. \eqref{eq:pin} and \eqref{eq:cgf1}, one can write \begin{equation} \begin{split} \mathcal{F}_L^{(N)}(y)=- \lim_{\eta\to 0^+}\frac{1}{N}\ln \mathcal{Q}_L^{(N)}(y)\,, \label{sda} \end{split} \end{equation} with \begin{equation} \begin{split} \mathcal{Q}_L^{(N)}(y)&=\Big\langle Z^{\frac{i y}{\pi }}(L/2+i\eta) Z^{\frac{i y}{\pi }}(-L/2- i{\eta})\\ &\times Z^{-\frac{i y}{\pi }}(L/2 -i\eta)Z^{-\frac{i y}{\pi }}(-L/2+i{\eta}) \Big\rangle \,. \label{eq:fq} \end{split} \end{equation} The ensemble average in eq. (\ref{eq:fq}) can be calculated analytically using the replica approach of spin-glass theory \cite{BookParisi,Metz2016}. The limit $N \rightarrow \infty$ of $\mathcal{F}_L^{(N)}(y)$ follows from the solution of a saddle-point integral, in which we have restricted our analysis to those saddle-points that are replica symmetric, i.e., invariant with respect to the permutation of two or more replicas.
For all details involved in the calculation of $\lim_{N \rightarrow \infty} \mathcal{F}_L^{(N)}(y)$, we refer the reader to the supplemental information of \cite{Metz2016}. Following this approach we find the expressions for the first two cumulants: \begin{align} \kappa_{1}(L,W) &= \frac{1}{\pi} \lim_{\eta \rightarrow 0^{+}} \left[ \frac{(k+1)}{2} \left\langle F \right\rangle_{\nu} - \left\langle R \right\rangle_{\mu} - (k+1) \left\langle R \right\rangle_{\nu} \right]\,, \label{k1} \\ \kappa_{2}(L,W)&= \frac{1}{\pi^2} \lim_{\eta \rightarrow 0^{+}} \Bigg[ \left\langle R^2 \right\rangle_{\mu} - \left\langle R \right\rangle^{2}_{\mu} \nonumber \\ &+ 2 (k+1) \left( \left\langle R F \right\rangle_{\nu} - \left\langle R \right\rangle_{\nu} \left\langle F \right\rangle_{\nu} \right)\nonumber\\ &- \frac{(k+1)}{ 2} \left( \left\langle F^{2} \right\rangle_{\nu} - \left\langle F \right\rangle^{2}_{\nu} \right) \nonumber \\ &- (k+1) \left( \left\langle R^2 \right\rangle_{\nu} - \left\langle R \right\rangle^{2}_{\nu} \right) \Bigg]\,, \label{k2} \end{align} with the functions $R = R(u,v)$ and $F = F(u,v;u^{\prime},v^{\prime})$ \begin{align} R(u,v) &= \frac{i}{2} \ln{\left[ \frac{u v }{\left(u v \right)^{*} } \right]}\,, \\ F(u,v;u^{\prime},v^{\prime}) &= R(u,v) + R(u^{\prime},v^{\prime})\nonumber\\ & + \varphi(u,u^{\prime}) + \varphi(v,v^{\prime})\,, \end{align} and \begin{equation} \varphi(u,u^{\prime}) = - \frac{i}{2} \ln{\left[ \frac{ 1 +u u^{\prime} }{ \left( 1 + u u^{\prime} \right)^{*} } \right]}\,. \end{equation} The average $\left\langle \dots \right\rangle_{\mathcal{P}}$ of integer powers of $R(u,v)$ and $F(u,v;u^{\prime},v^{\prime})$ with an arbitrary distribution $\mathcal{P}$ is defined by the general formula \begin{equation} \begin{split} \left\langle R^{m} F^{n} \right\rangle_{ \mathcal{P} } &= \int d u \, d v \, d v^{\prime} \, d u^{\prime} \, \mathcal{P}(u,v) \mathcal{P}(u^{\prime},v^{\prime})\\ &\times R^{m}(u,v) F^{n}(u,v;u^{\prime},v^{\prime}) \,, \label{gqp} \end{split} \end{equation} where $m \geq 0$ and $n \geq 0$. The distributions $\mu(u,v)$ and $\nu (u,v)$, which enter in the averages appearing in eqs. (\ref{k1}) and (\ref{k2}), are determined from the self-consistent equations \begin{widetext} \begin{align} \mu(u,v) &= \int \left( \prod_{r=1}^{k+1} du_r \, dv_r \, \nu(u_r,v_r) \right) \left\langle \delta\left[ u - \frac{1}{i \left( \epsilon - \frac{L}{2}-i\eta \right) - \sum_{r=1}^{k+1} {u_r}} \right] \delta\left[ v + \frac{1}{i \left( \epsilon + \frac{L}{2}+i\eta \right) - \sum_{r=1}^{k+1}{v_r}} \right] \right\rangle_{\epsilon} , \label{1tra} \\ \nu(u,v) &= \int \left( \prod_{r=1}^{k} du_r \, dv_r \, \nu(u_r,v_r) \right) \left\langle \delta\left[ u - \frac{1}{i \left( \epsilon - \frac{L}{2}-i\eta \right) - \sum_{r=1}^{k} {u_r}} \right] \delta\left[ v + \frac{1}{i \left( \epsilon + \frac{L}{2}+i\eta \right) - \sum_{r=1}^{k} {v_r}} \right] \right\rangle_{\epsilon} , \label{2tra} \end{align} \end{widetext} where $\langle \dots \rangle_{\epsilon}$ is the average over the on-site random potentials. Equations (\ref{k1}-\ref{2tra}) are exact in the limit $N\to\infty$ for any fixed $L =\mathcal{O}(1)$, and the level-compressibility is evaluated with high accuracy using the population dynamics algorithm \cite{Metz2016}. However, as we discussed above, we should calculate $\chi(L,W)$ at the scale $L=\mathcal{O}(1/N)$ in order to probe the statistics of the energy levels within the extended phase.
The reason for working at the scale $L=\mathcal{O}(1/N)$ is twofold: (i) in this regime we are inspecting the fluctuations of low-lying energies of $O(1/N)$ (or long time scales of order $O(N)$, much larger than the typical size $\ln N$ of the loops); (ii) the average density of states $\langle \rho(E) \rangle$ is uniform along an interval of size $L=\mathcal{O}(1/N)$, so that we avoid the spurious influence on $\chi(L,W)$ of significant variations of $\langle \rho(E) \rangle$ \cite{Metz2016}. In principle, one should employ the formalism of \cite{Metz2016} and determine the cumulants when $L=s/N$ ($s \gg 1$). However, one immediately concludes that the terms arising from rescaling $L \rightarrow s/N$ and $\eta \rightarrow \eta/N$ in the formal development of \cite{Metz2016} would enter only in a calculation of finite-size corrections, i.e., when one considers $N$ large but finite \cite{MetzPar,Metz2015b}. Thus, the leading behavior of the level-compressibility $\lim_{N \rightarrow \infty} \kappa_2^{(N)}/\kappa_1^{(N)}$ in the scaling regime $L=O(1/N)$ should already be given by eqs. (\ref{k1}-\ref{k2}) in the limit $L \rightarrow 0^{+}$ and $\eta \rightarrow 0^+$. The central idea here is to explore this limit numerically using population dynamics. Note that the imaginary part $\eta$ of the energy also goes to zero: the interesting limit is $L \to 0^{+}$ and $\eta \to 0^+$ with the ratio $L/\eta$ kept large. Essentially, $\eta$ plays the role of the level spacing in a regularized density of states, and we want to have many levels within $[-L/2,L/2]$. Due to the $\eta$-dependence of eqs. (\ref{k1}-\ref{2tra}), it is convenient to introduce the level-compressibility $\chi_\eta(W,L)$ at fixed $\eta$. For given values of $L$ and $W$, we have \beeq{ \chi(W,L)=\lim_{\eta\to 0^{+}}\chi_\eta(W,L)\,. \label{ss1} } Henceforth, we restrict ourselves to $k=2$. For this connectivity, the eigenfunctions at the center of the band undergo an Anderson localization transition at $W_c = 17.4$ \cite{Abou73,Biroli2010}. In figure \ref{fig1} we present results for $\chi_\eta(W,L)$ as a function of $W$ for fixed $\eta=10^{-6}$ and different values of the size $L$ of the interval. As $L$ decreases, $\chi_\eta(W,L)$ clearly converges to $1$ for $W > W_c$ and to $0$ for $W < W_c$, as long as $W$ is not too close to the critical value $W_c = 17.4$. \begin{figure} \begin{center} \includegraphics[width=8.5cm,height=6cm]{fig1.pdf} \caption{The level-compressibility as a function of $W$ for a fixed imaginary energy $\eta=10^{-6}$ and different sizes of the interval $[-L/2,L/2]$. The number of neighbors connected to each node is $k+1 = 3$.} \label{fig1} \end{center} \end{figure} Importantly, one observes a non-monotonic behavior of $\chi_{\eta}(W,L)$ as a function of $L$ for some values of $W < W_c$. This is particularly evident for $W=10$ and $W=12.5$. However, it is not clear from figure \ref{fig1} that $\eta$ is small enough for the limit $\eta \rightarrow 0^+$ to have been reached, especially close to the critical point. In order to obtain reliable data in the delocalized phase $W< W_c$, it is crucial to understand the dependence of $\chi_\eta(W,L)$ on $\eta$. We have thus solved eqs. (\ref{k1}-\ref{2tra}) for several values of $\eta$, keeping $L$ fixed. 
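The limit $\eta \to 0^{+}$ in eq. (\ref{ss1}) is then taken by extrapolating these fixed-$\eta$ results with the power-law form described next. A minimal sketch of this extrapolation step, with synthetic data standing in for the population-dynamics output, could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Power-law extrapolation of chi_eta to eta -> 0+; the data below are
# synthetic placeholders for the population-dynamics output.
etas = np.logspace(-7, -4, 12)
rng = np.random.default_rng(2)
chi_eta = 0.12 + 0.8 * etas**0.5 + 1e-4 * rng.standard_normal(12)
fit = lambda eta, chi0, a, b: chi0 + a * eta**b
(chi0, a, b), _ = curve_fit(fit, etas, chi_eta, p0=(0.1, 1.0, 0.5))
print(chi0)   # estimate of chi(W, L) = lim_{eta -> 0+} chi_eta(W, L)
\end{verbatim}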
For sufficiently small $\eta<\eta_{*} \sim L$, $\chi_\eta(W,L)$ is well fitted by the function $\chi_0+a \eta^{b}$, where the fitting parameters $\chi_0(W,L)$, $a(W,L)$ and $b(W,L)$ can be determined with high accuracy (see the appendix). This procedure allows us to obtain $\chi(W,L)=\lim_{\eta\to 0^+}\chi_{\eta}(W,L)$ simply by reading off the value of $\chi_0$. Performing this numerical computation for many values of $W$ and $L$ is highly time-consuming, so we have focused on some values of $W$ within the extended phase for which the eigenfunctions would be multifractal, according to recent works \cite{Luca2014,Alt2016, Altshuler2016}. The main outcome of this calculation is shown in figure \ref{fig3}, where we plot $\chi(W,L)$ as a function of $L$. \begin{figure}[h] \begin{center} \includegraphics[width=8.5cm,height=6cm]{fig3.pdf} \caption{The behavior of the level-compressibility $\chi(W,L)$ as a function of $L$ for connectivity $k+1=3$ and different values of the disorder strength $W$. For $W< W_c \simeq 17.4$, the function $\chi(W,L)$ approaches zero in the limit $L \rightarrow 0^+$, signaling the Wigner-Dyson statistics of the energy levels.} \label{fig3} \end{center} \end{figure} As we approach $W_c$ from the delocalized side, the level-compressibility $\chi(W,L)$ behaves non-monotonically as a function of $L$: initially it tends to its Poisson value $\chi(W,L) =1$, but as $L$ is further decreased, the level-compressibility clearly moves towards the limit $\lim_{L \rightarrow 0^{+}}\chi(W,L) = 0$, characteristic of Wigner-Dyson statistics \cite{BookMehta}. As the critical point is approached, the regime where $\chi(W,L)$ attains its maximum value sets in for smaller and smaller $L$, making the numerical calculation highly demanding. In spite of this numerical difficulty, our results strongly indicate that $\lim_{L\to 0^{+}} \chi(W,L)= 0$ for $W < W_c$, supporting the Wigner-Dyson statistics of the energy levels in the whole extended phase. \section{Discussion} \label{discussion} In this work we have calculated the level-compressibility $\chi(W,L)$ of the energy levels within a box of size $L=O(1)$ for the Anderson model on an infinitely large regular random graph (RRG). We have focused on the behavior of $\chi(W,L)$ for $L \rightarrow 0^{+}$, from which we expect to characterize the statistics of the energy levels (Poisson, sub-Poisson or Wigner-Dyson) at a local scale, i.e., when $L=O(1/N)$. This expectation is confirmed by the behavior of $\lim_{L \rightarrow 0^{+}} \chi(W,L)$ {\it away from the critical point} $W_c$: we have found that $\lim_{L \rightarrow 0^{+}} \chi(W,L)$ converges to one for $W > W_c$ and to zero for $W < W_c$. Hence we have employed the level-compressibility to probe the eigenvalue statistics as we approach the critical point $W_c$ from the delocalized phase. Our results show that, {\it even for values of $W$ close to the critical point $W_c$}, $\chi(W,L)$ approaches zero in the limit $L \rightarrow 0^{+}$. This is consistent with earlier analytical predictions for the Anderson model on Erd\H{o}s-R\'enyi random graphs \cite{Mirlin91} as well as with recent numerical results for the Anderson model on regular random graphs \cite{Mirlin2016,Tikhonov2016}. Our results are free of finite-size effects, since they are obtained from a set of exact equations valid in the limit $N \rightarrow \infty$. 
In particular, we do not observe any change of behavior of $\lim_{L \rightarrow 0^+} \chi(W,L)$ for $W < W_c$ (see figure \ref{fig3}), as one would expect from the results for the fractal exponents \cite{Alt2016,Altshuler2016}, in combination with the standard view according to which the level-compressibility at the scale $L = O(1/N)$ is directly related to the statistics of the eigenfunctions \cite{Chalker1996,Bogomolny2011}. From this perspective, our results support the ergodicity of the eigenfunctions within the whole extended phase, entirely consistent with numerical diagonalization results \cite{Mirlin2016}. On the other hand, recent works \cite{Kravtsov,Biroli1,Amini} show that, in the Rosenzweig-Porter model, the non-ergodic extended states are unveiled by considering the statistics of the eigenfunctions at the scale of the Thouless energy $E_{th} \sim N^{1-\gamma}$ ($1 < \gamma < 2$), much larger than the mean level spacing $1/N$. Since the Anderson model on a RRG belongs, in a certain sense, to the same class as the Rosenzweig-Porter model \cite{Kravtsov}, the results for $\chi$ alone do not seem sufficient to settle the ergodicity of the eigenfunctions within the extended phase. As a future perspective, it would be interesting to extend our method in order to compute $\chi$ at the scale $L =O(E_{th})$. Recently it has been put forward that replica symmetry breaking should be taken into account to correctly describe the eigenfunctions in the extended phase \cite{Altshuler2016}. Equations (\ref{k1}-\ref{2tra}) are derived assuming replica symmetry, which is exact provided we fix $L=O(1)$ for $N \rightarrow \infty$ \cite{Metz2016,Metz2015}. This is corroborated by an abundance of works \cite{Perez2008,Kuhn2008,Ergun,Metz2010,Metz2015}, where observables related to the global density of states $\langle \rho(E) \rangle$ of the adjacency matrix of several treelike random graphs are calculated using replica symmetry and confirmed through direct diagonalization. Nevertheless, the issue of replica symmetry breaking could arise in the limit $L \rightarrow 0^{+}$, since we expect to approach the local scale $L=s/N \, (s \gg 1)$ characterizing the mean level spacing. From the replica calculation of the connected part of the two-level correlation function $R^{(c)}(s)$ for the GUE ensemble \cite{Kamenev}, the replica-symmetric saddle-point yields the decay $R^{(c)}(s) \propto 1/s^2$, which already gives the leading contribution $\chi(W,s/N) \propto s^{-1} \ln s \overset{s \rightarrow \infty}{\longrightarrow} 0$ \cite{BookMehta,Mirlin}. The inclusion of saddle-points that break replica symmetry allows one to capture the oscillatory behavior of $R^{(c)}(s)$ \cite{Kamenev}; this does not affect the leading value $\lim_{s \rightarrow \infty} \chi(W,s/N)=0$, but only introduces sub-leading corrections due to finite $s$. We expect the situation to be similar for the Anderson model on regular random graphs, i.e., replica symmetry breaking is important only when one is interested in sub-leading corrections for finite $s$. This is also supported by the absence of multiple solutions to the cavity or population dynamics eqs. (\ref{1tra}) and (\ref{2tra}); the appearance of many such solutions would be a common sign of replica symmetry breaking \cite{BookParisi}. \vfill \begin{acknowledgments} The authors thank Alexander Mirlin and Yan V. Fyodorov for illuminating discussions. FLM thanks the Institute of Physics at UNAM for its hospitality. 
The authors acknowledge the support of DGTIC for the use of the HP Cluster Platform 3000SL (codename Miztli); this work was partly done under the Mega-project LANCAD-UNAM-DGTIC-333. We also thank Cecilia Nogu\'ez for letting us use the computer cluster Baktum during the initial stages of this project. This work has been funded by the program UNAM-DGAPA-PAPIIT IA101815. \end{acknowledgments}
\section{Introduction} \label{sec_intro} The aim of this work is to explore connections between the concepts given in the title: \begin{enum} \item \emph{Noise-induced phase slips} are occasional losses of synchrony of two coupled phase oscillators, due to stochastic perturbations~\cite{PRK}. The problem of finding the distribution of their location and length can be formulated as a stochastic exit problem, which involves the exit through a so-called \emph{characteristic boundary}~\cite{Day7,Day3}. \item \emph{Log-periodic oscillations} designate the periodic dependence of a quantity of interest, such as a power law exponent, on the logarithm of a system parameter. They often occur in systems presenting a discrete scale invariance~\cite{Sornette_98}. In the context of the stochastic exit problem, they are connected to the phenomenon of \emph{cycling} of the exit distribution through an unstable periodic orbit~\cite{Day6,BG7,BG_periodic2}. \item The \emph{Gumbel distribution} is one of the max-stable distributions known from extreme-value theory~\cite{Gnedenko_1943}. This distribution has been known to occur in the exit distribution through an unstable periodic orbit~\cite{MS4,BG7,BG_periodic2}. More recently, the Gumbel distribution has also been found to govern the length of \emph{reactive paths} in one-dimensional exit problems~\cite{CerouGuyaderLelievreMalrieu12,Bakhtin_2013a,Bakhtin_2014a}. \end{enum} In this work, we review a number of prior results on exit distributions, and build on them to derive properties of the phase slip distributions. We start in Section~\ref{sec_synchro} by recalling the classical situation of two coupled phase oscillators, and the phenomenology of noise-induced phase slips. In Section~\ref{sec_exit}, we present the mathematical set-up for the systems we will consider, and introduce different tools that are used for the study of the stochastic exit problem. Section~\ref{sec_logper} contains our main results on the distribution of the phase slip location. These results are mainly based on those of~\cite{BG_periodic2} on the exit distribution through an unstable planar periodic orbit, slightly reformulated in the context of limit distributions. We also discuss links to the concept of log-periodic oscillations. In Section~\ref{sec_Gumbel}, we discuss a number of connections to extreme-value theory. After summarizing properties of the Gumbel distribution relevant to our problem, we give a short review of recent results by C\'erou, Guyader, Leli\`evre and Malrieu~\cite{CerouGuyaderLelievreMalrieu12} and by Bakhtin~\cite{Bakhtin_2013a,Bakhtin_2014a} on the appearance of the Gumbel distribution in transition path theory for one-dimensional problems. Section~\ref{sec_slips} presents our results on the duration of phase slips, which build on the previous results from transition path theory. Section~\ref{sec_conclusion} contains a summary and some open questions, while the proofs of the main theorems are contained in the Appendix. \subsection*{Acknowledgments} This work is based on a presentation given at the meeting \lq\lq Inhomogeneous Random Systems\rq\rq\ at Institut Henri Poincar\'e, Paris, on January 28, 2014. It is a pleasure to thank Giambattista Giacomin for inviting me, and Fran\c cois Dunlop, Thierry Gobron and Ellen Saada for organising this interesting meeting. I am grateful to Barbara Gentz for critical comments on the manuscript, and to Arkady Pikovsky for suggesting to look at the phase slip duration. 
The connection with elliptic functions was pointed out to me by G\'erard Letac. Finally, thanks are due to an anonymous referee for comments leading to an improved presentation. \section{Synchronization of phase oscillators} \label{sec_synchro} In this section, we briefly recall the setting of two coupled phase oscillators showing synchronization, following mainly~\cite{PRK}. \subsection{Deterministic phase locking} \label{ssec_synchrodet} Consider two oscillators, whose dynamics is governed by ordinary differential equations (ODEs) of the form \begin{align} \nonumber \dot x_1 &= f_1(x_1)\;, \\ \dot x_2 &= f_2(x_2)\;, \label{sdet01} \end{align} where $x_1\in\R\!^{n_1}, x_2\in\R\!^{n_2}$ with $n_1,n_2\geqs 2$. A classical example of a system displaying oscillations is the Van der Pol oscillator~\cite{vanderPol20,vanderPol26,vanderPol27} \begin{equation} \label{sdet02} \ddot \theta_i - \gamma_i (1 - \theta_i^2) \dot\theta_i + \theta_i = 0\;, \end{equation} which can be transformed into a first-order system by setting $x_i=(\theta_i,\dot\theta_i)\in\R^2$. The precise form of the vector fields $f_i$, however, does not matter. What is important is that each system admits an asymptotically stable periodic orbit, called a \emph{limit cycle}. These limit cycles can be parametrised by angular variables $\phi_1,\phi_2\in\R/\Z$ in such a way that \begin{align} \nonumber \dot \phi_1 &= \omega_1\;,\\ \dot \phi_2 &= \omega_2\;, \label{sdet03} \end{align} where $\omega_1,\omega_2\in\R$ are constant angular frequencies~\cite[Section~7.1]{PRK}. Note that the product of the two limit cycles forms a two-dimensional invariant torus in phase space $\R\!^{n_1+n_2}$. Consider now a perturbation of System~\eqref{sdet01} in which the oscillators interact weakly, given by \begin{align} \nonumber \dot x_1 &= f_1(x_1) + \eps g_1(x_1,x_2)\;, \\ \dot x_2 &= f_2(x_2) + \eps g_2(x_1,x_2)\;. \label{sdet04} \end{align} The theory of normally hyperbolic invariant manifolds (see for instance~\cite{HirschPughShub}) shows that the invariant torus persists for sufficiently small nonzero $\eps$ (for stronger coupling, new phenomena such as oscillation death can occur~\cite[Section~8.2.2]{PRK}). For small $\eps$, the reduced equations~\eqref{sdet03} for the dynamics on the torus take the form \begin{align} \nonumber \dot \phi_1 &= \omega_1 + \eps Q_1(\phi_1,\phi_2)\;,\\ \dot \phi_2 &= \omega_2 + \eps Q_2(\phi_1,\phi_2)\;, \label{sdet05} \end{align} where $Q_{1,2}$ can be computed perturbatively in terms of $f_{1,2}$ and $g_{1,2}$. Assume that the natural frequencies $\omega_1, \omega_2$ are different, but that the \emph{detuning} $\nu=\omega_2-\omega_1$ is small. Introducing new variables $\psi=\phi_1-\phi_2$ and $\ph = (\phi_1+\phi_2)/2$ yields a system of the form \begin{align} \nonumber \dot \psi &= -\nu + \eps q(\psi,\ph)\;,\\ \dot \ph &= \omega + \Order{\eps}\;, \label{sdet06} \end{align} where $\omega=(\omega_1+\omega_2)/2$ is the mean frequency. Note that the phase difference $\psi$ evolves more slowly than the mean phase $\ph$, so that the theory of averaging applies~\cite{Bogoliubov56,Verhulst}. For small $\nu$ and $\eps$, solutions of~\eqref{sdet06} are close to those of the averaged system \begin{equation} \label{sdet07} \omega \frac{\6\psi}{\6\ph} = -\nu + \eps \bar q(\psi)\;, \qquad \bar q(\psi) = \int_0^1 q(\psi,\ph) \6\ph \end{equation} (recall our convention that the period is equal to $1$). 
In particular, solutions of the equation $-\nu + \eps \bar q(\psi)=0$ correspond to stationary solutions of the averaged equation~\eqref{sdet07}, and to periodic orbits of the original equation~\eqref{sdet06} (and thus also of~\eqref{sdet05}). For example, in the case of \emph{Adler's equation}, $\bar q(\psi) = \sin(2\pi\psi)$, there are two stationary points whenever $\abs{\nu}<\abs{\eps}$. They give rise to one stable and one unstable periodic orbit. The stable periodic orbit corresponds to a synchronized state, because the phase difference $\psi$ remains bounded for all times. This is the phenomenon known as \emph{phase locking}. \begin{remark} Similar phase locking phenomena appear when the ratio $\omega_2/\omega_1$ is close to any rational number $m/n\in\Q$. Then for small $\eps$ the quantity $n\phi_1-m\phi_2$ may stay bounded for all times ($n\colon m$ \emph{frequency locking}). The sets of parameter values $(\eps,\nu)$ for which frequency locking with a specific ratio occurs are known as \emph{Arnold tongues} \cite{Arnold1961}. \end{remark} \subsection{Noise-induced phase slips} \label{ssec_slips} \begin{figure} \begin{center} \begin{tikzpicture}[>=stealth',main node/.style={draw,thick,circle,blue,fill=blue!20,minimum size=5pt,inner sep=0pt},scale=0.55,x=1.25cm,y=0.5cm, declare function={ pot(\x) = cos(4*\x r) - 1.5*\x; } ] \newcommand*{0}{0} \newcommand*{8}{8} \newcommand*{-13.5}{-13.5} \path[fill=teal!20,thick,-,smooth,domain=0:8,samples=60,/pgf/fpu, /pgf/fpu/output format=fixed] plot (\x, {pot(\x) }) -- (8,{pot(8)}) -- (8,-13.5) -- (0,-13.5) -- (0,{pot(0)}); \draw[teal,thick,-,smooth,domain=0:8,samples=60,/pgf/fpu, /pgf/fpu/output format=fixed] plot (\x, {pot(\x) }); \path[fill=white] plot (0,-13.5) -- (8,-13.5) -- (8,-13.5-1.5) -- (0,-13.5-1.5); \node[main node] at (2.45,{pot(2.45)+0.3}) {}; \draw[blue,semithick,->] (2.7,-3.7) .. controls (3,-3) and (3.2,-3) .. 
(3.5,-3.7); \node[] at (0.3,2.7) {{\bf (a)}}; \end{tikzpicture} \hspace{5mm} \begin{tikzpicture}[>=stealth',main node/.style={draw,semithick,circle,fill=white,minimum size=2pt,inner sep=0pt},scale=0.55,x=2cm,y=0.8cm, declare function={ stab(\x) = 1.5-0.6*sin(\x r+4); unstab(\x) = 5+0.8*cos(\x r); trans(\x) = stab(\x) + 3.14*(1+tanh(2*(\x-3.14))); } ] \path[fill=green!30,smooth,domain=0:6.28,samples=60,/pgf/fpu, /pgf/fpu/output format=fixed] plot (\x, {stab(\x)+0.5}) -- plot(6.28-\x, {stab(6.28-\x)-0.5}); \path[fill=green!30,smooth,domain=0:6.28,samples=60,/pgf/fpu, /pgf/fpu/output format=fixed] plot (\x, {stab(\x)+6.28+0.5}) -- plot(6.28-\x, {stab(6.28-\x)+6.28-0.5}); \draw[->,thick] (0,0) -- (0,9.3); \draw[->,thick] (0,0) -- (6.8,0); \draw[green!50!black,thick,-,smooth,domain=0:6.28,samples=60,/pgf/fpu, /pgf/fpu/output format=fixed] plot (\x, {stab(\x)}); \draw[green!50!black,thick,-,smooth,domain=0:6.28,samples=60,/pgf/fpu, /pgf/fpu/output format=fixed] plot (\x, {6.28+stab(\x)}); \draw[violet,thick,-,smooth,domain=0:6.28,samples=60,/pgf/fpu, /pgf/fpu/output format=fixed] plot (\x, {unstab(\x)}); \pgfmathsetseed{16825527} \draw[red,semithick,-,smooth,domain=0:6.28,samples=60,/pgf/fpu, /pgf/fpu/output format=fixed] plot (\x, {trans(\x) + 0.5*rand}); \newcommand*{2.2}{2.2} \draw[dashed] ({2.2},0) -- ({2.2},{stab(2.2)+0.5}); \node[main node] at ({2.2},0) {}; \node[main node] at ({2.2},{stab(2.2)+0.5}) {}; \node[] at ({2.2+0.1},-0.6) {$\ph_{\tau_-}$}; \renewcommand*{2.2}{3.05} \draw[dashed] ({2.2},0) -- ({2.2},{unstab(2.2)}); \node[main node] at ({2.2},0) {}; \node[main node] at ({2.2},{unstab(2.2)}) {}; \node[] at ({2.2+0.1},-0.6) {$\ph_{\tau_0}$}; \renewcommand*{2.2}{3.7} \draw[dashed] ({2.2},0) -- ({2.2},{stab(2.2)+6.28-0.5}); \node[main node] at ({2.2},0) {}; \node[main node] at ({2.2},{stab(2.2)+6.28-0.5}) {}; \node[] at ({2.2+0.1},-0.6) {$\ph_{\tau_+}$}; \node[] at (6.3,-0.6) {$\ph$}; \node[] at (-0.3,8.5) {$\psi$}; \node[] at (-0.2,10) {{\bf (b)}}; \end{tikzpicture} \vspace{-6mm} \end{center} \caption[]{{\bf (a)} Washboard potential $V(\psi)$ of the averaged system~\eqref{slips01}, in a case where $\nu<0$. Phase slips correspond to transitions over a local potential maximum. {\bf (b)} For the unaveraged system~\eqref{slips03}, phase slips involve crossing the unstable periodic orbit delimiting the basin of attraction of the synchronized state. } \label{fig_phase_slip} \end{figure} Consider now what happens when noise is added to the system. This is often done (see e.g.~\cite[Chapter~9]{PRK}) by looking at the effect of noise on the averaged system~\eqref{sdet07}, which becomes \begin{equation} \label{slips01} \omega \frac{\6\psi}{\6\ph} = -\nu + \eps \bar q(\psi) + \text{noise}\;, \end{equation} where we will specify in the next section what kind of noise we consider. The first two terms on the right-hand side of~\eqref{slips01} can be written as \begin{equation} \label{slips02} -\frac{\partial}{\partial\psi} V(\psi)\;, \qquad \text{where } V(\psi) = \nu\psi - \eps\int_0^\psi \bar q(x)\6x\;. \end{equation} In the synchronization region, the potential $V(\psi)$ has the shape of a tilted periodic, or \emph{washboard} potential (\figref{fig_phase_slip}a). The local minima of the potential represent the synchronized state, while the local maxima represent an unstable state delimiting the basin of attraction of the synchronized state. In the absence of noise, trajectories are attracted exponentially fast to the synchronized state and stay there. 
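A direct simulation of~\eqref{slips01} makes the effect of the noise concrete. The following Euler--Maruyama sketch assumes Adler's nonlinearity $\bar q(\psi)=\sin(2\pi\psi)$, additive white noise of intensity $\sigma$, and purely illustrative parameter values; the phase $\ph$ plays the role of time.
\begin{verbatim}
import numpy as np

# Euler--Maruyama integration of the noisy averaged dynamics (slips01),
# assuming qbar(psi) = sin(2*pi*psi); all parameter values are illustrative.
rng = np.random.default_rng(0)
omega, nu, eps, sigma = 1.0, 0.05, 0.1, 0.08
dt, nsteps = 5e-3, 2_000_000
psi = np.empty(nsteps)
psi[0] = 0.5 - np.arcsin(nu / eps) / (2 * np.pi)  # stable zero of the drift
noise = rng.standard_normal(nsteps - 1)
for n in range(nsteps - 1):
    drift = (-nu + eps * np.sin(2 * np.pi * psi[n])) / omega
    psi[n + 1] = psi[n] + drift * dt + (sigma / omega) * np.sqrt(dt) * noise[n]
print(round(psi[0] - psi[-1]))  # net number of phase slips in this run
\end{verbatim}
A typical run shows long quiescent stretches near the synchronized state interrupted by rapid unit jumps of $\psi$, which is precisely the phenomenology described next.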
When weak noise is added, solutions still spend most of the time in a neighbourhood of a synchronized state. However, occasional transitions through the unstable state may occur, meaning that the system temporarily desynchronizes before returning to synchrony. This behaviour is called a \defwd{phase slip}. Transitions in both directions may occur, that is, $\psi$ can increase or decrease by $1$ per phase slip. When detuning and noise are small, however, transitions over the lower local maximum of the washboard potential are more likely. In reality, however, we should add noise to the unaveraged system~\eqref{sdet06}, which becomes \begin{align} \nonumber \dot \psi &= -\nu + \eps q(\psi,\ph)+ \text{noise}\;,\\ \dot \ph &= \omega + \Order{\eps}+ \text{noise}\;. \label{slips03} \end{align} Phase slips are now associated with transitions across the unstable orbit (\figref{fig_phase_slip}b). Two important random quantities characterising the phase slips are \begin{enum} \item the value of the phase $\ph_{\tau_0}$ at the time $\tau_0$ when the unstable orbit is crossed, and \item the duration of the phase slip, which can be defined as the phase difference between the time $\tau_-$ when a suitably defined neighbourhood of the stable orbit is left, and the time $\tau_+$ when a neighbourhood of (a translated copy of) the stable orbit is reached. \end{enum} Unless the system~\eqref{slips03} is independent of the phase $\ph$, there is no reason for the slip phases to have a uniform distribution. Our aim is to determine the weak-noise asymptotics of the phase $\ph_{\tau_0}$ and of the phase slip duration $\ph_{\tau_+}-\ph_{\tau_-}$. \section{The stochastic exit problem} \label{sec_exit} Let us now specify the mathematical set-up of our analysis. The stochastically perturbed systems that we consider are It\^o stochastic differential equations (SDEs) of the form \begin{equation} \label{exit01} \6x_t = f(x_t)\6t + \sigma g(x_t)\6W_t\;, \end{equation} where $x_t$ takes values in $\R^2$, and $W_t$ denotes $k$-dimensional standard Brownian motion, for some $k\geqs2$. Physically, this describes the situation of Gaussian white noise with a state-dependent amplitude $g(x)$. Of course, one may consider more general types of noise, such as time-correlated noise, but such a setting is beyond the scope of the present analysis. The drift term $f$ and the diffusion term $g$ are assumed to satisfy the usual regularity assumptions guaranteeing the existence of a unique strong solution for all square-integrable initial conditions $x_0$ (see for instance~\cite[Section~5.2]{Oksendal}). In addition, we assume that $g$ satisfies the uniform ellipticity condition \begin{equation} \label{exit02} c_1 \norm{\xi}^2 \leqs \langle \xi, D(x) \xi \rangle \leqs c_2 \norm{\xi}^2 \qquad \forall x,\xi\in\R^2\;, \end{equation} where $c_2\geqs c_1>0$. Here $D(x)=gg^\text{T}(x)$ denotes the \emph{diffusion matrix}. We finally assume that the drift term $f(x)$ results from a system of the form~\eqref{sdet06}, in a synchronized case where there is one stable and one unstable orbit. It will be convenient to choose coordinates $x=(\ph,r)$ such that the unstable periodic orbit is given by $r=0$ and the stable orbit is given by $r=1/2$. The original system is defined on a torus, but we will unfold everything to the plane $\R^2$, considering $f$ (and $g$) to be periodic with period $1$ in both variables $\ph$ and $r$. 
The resulting system has the form\footnote{In the situation of synchronised phase oscillators considered here, the change of variables yielding~\eqref{exit03} is global. In more general situations considered in~\cite{BG_periodic2}, such a transformation may only exist locally, in a domain surrounding the stable and unstable orbit, and the analysis applies up to the first-exit time from that domain.} \begin{align} \nonumber \6r_t &= f_r(r_t,\ph_t)\6t + \sigma g_r(r_t,\ph_t)\6W_t\;, \\ \6\ph_t &= f_\ph(r_t,\ph_t)\6t + \sigma g_\ph(r_t,\ph_t)\6W_t\;, \label{exit03} \end{align} and admits unstable orbits of the form $\set{r=n}$ and stable orbits of the form $\set{r=n+1/2}$ for any integer $n$. In particular, $f_r(n/2,\ph)=0$ for all $n\in\Z$ and all $\ph\in\R$. Using a so-called equal-time parametrisation of the periodic orbits, it is also possible to assume that $f_\ph(0,\ph)=1/T_+$ and $f_\ph(1/2,\ph)=1/T_-$ for all $\ph\in\R$, where $T_\pm$ denote the periods of the unstable and stable orbit~\cite[Proposition~2.1]{BG_periodic2}\footnote{Because of second-order terms in It\^o's formula, the periodic orbits of the reparametrized system may not lie exactly on horizontal lines $r=n/2$, but be shifted by a small amount of order $\sigma^2$.}. The instability of the orbit $r=0$ means that the characteristic exponent \begin{equation} \label{exit04} \lambda_+ = \int_0^1 \partial_r f_r(0,\ph)\6\ph \end{equation} is strictly positive. The similarly defined exponent $-\lambda_-$ of the stable orbit is negative. It is then possible to redefine $r$ in such a way that \begin{align} \nonumber f_r(r,\ph) &= \lambda_+ r + \Order{r^2}\;, \\ f_r(r,\ph) &= -\lambda_-(r-1/2) + \Order{(r-1/2)^2} \label{exit05} \end{align} for all $\ph\in\R$ (see again~\cite[Proposition~2.1]{BG_periodic2}). It will be convenient to assume that $f_\ph(r,\ph)$ is positive, bounded away from zero, for all $(r,\ph)$. Finally, for definiteness, we assume that the system is asymmetric, in such a way that it is easier for the system starting with $r$ near $-1/2$ to reach the unstable orbit in $r=0$ rather than its translate in $r=-1$. This corresponds intuitively to the potential in~\figref{fig_phase_slip} tilting to the right, and can be formulated precisely in terms of large-deviation rate functions introduced in Section~\ref{ssec_ldp} below. \subsection{The harmonic measure} \label{ssec_hm} Fix an initial condition $(r_0\in(-1,0),\ph_0=0)$ and let \begin{equation} \label{hm01} \tau_0 = \inf\setsuch{t>0}{r_t = 0} \end{equation} denote the first-hitting time of the unstable orbit. Note that $\tau_0$ can also be viewed as the first-exit time from the set $\cD=\set{r<0}$. The crossing phase $\ph_{\tau_0}$ is equal to the exit location from $\cD$, and its distribution is also known as the \emph{harmonic measure} associated with the infinitesimal generator \begin{equation} \label{hm02} L = \sum_{i\in\set{r,\ph}} f_i(x) \dpar{}{x_i} + \frac{\sigma^2}{2} \sum_{i,j\in\set{r,\ph}}D_{ij}(x) \dpar{^2}{x_i\partial x_j} \end{equation} of the diffusion process. It is known that the harmonic measure admits a smooth density for sufficiently smooth $f$, $g$ and $\partial\cD$ \cite{BenArous_Kusuoka_Stroock_1984}. It follows from Dynkin's formula~\cite[Section 7.4]{Oksendal} that for any continuous bounded test function $b:\partial\cD\to\R$, the function $h(x)=\expecin{x}{b(\ph_{\tau_0})}$ satisfies the boundary value problem\footnote{Several tools will require $\cD$ to be a bounded set. 
This does not create any problems, because our assumptions on the deterministic vector field imply that probabilities are only affected by a negligible amount if $\cD$ is replaced by its intersection with some large compact set.} \begin{alignat}{3} \nonumber Lh(x) &= 0 &x\in&\cD\;, \\ h(x) &= b(x) \qquad &x\in&\partial\cD\;. \label{hm03} \end{alignat} One may think of the case of a sequence $b_n$ converging to the indicator function $1_{\set{\ph\in B}}$. Then the associated $h_n$ converge to $h(x)=\probin{x}{\ph_{\tau_0}\in B}$, giving the harmonic measure of $B\subset\partial\cD$. While it is in general difficult to solve the equation~\eqref{hm03} explicitly, the fact that $Lh=0$ ($h$ is said to be \emph{harmonic}) yields some useful information. In particular, $h$ satisfies a maximum principle and Harnack inequalities~\cite[Chapter~9]{Gilbarg_Trudinger}. \subsection{Large deviations} \label{ssec_ldp} The theory of large deviations has been developed in the context of general SDEs of the form~\eqref{exit01} by Freidlin and Wentzell~\cite{FW}. With a path $\gamma:[0,T]\to\R\!^2$ it associates the \emph{rate function} \begin{equation} \label{ldp01} I_{[0,T]}(\gamma) = \frac12 \int_0^T (\dot\gamma_s-f(\gamma_s))^\text{T} D(\gamma_s)^{-1} (\dot\gamma_s-f(\gamma_s)) \6s\;. \end{equation} Roughly speaking, the probability of the stochastic process tracking a particular path $\gamma$ on $[0,T]$ behaves like $\e^{-I_{[0,T]}(\gamma)/\sigma^2}$ as $\sigma\to0$. In the case of the stochastic exit problem from a domain $\cD$, containing a unique attractor\footnote{In the present context, an attractor $\cA$ is an equivalence set for the equivalence relation $\sim_\cD$ on $\cD$, defined by $x\sim_\cD y$ whenever one can find a $T>0$ and a path $\gamma$ connecting $x$ and $y$ in time $T$ and staying in $\cD$ such that $I_{[0,T]}(\gamma)=0$, cf.~\cite[Section~6.1]{FW}. In addition, $\cD$ should belong to the basin of attraction of $\cA$. In other words, deterministic orbits starting in $\cD$ should converge to $\cA$, and the set $\cA$ should have no proper subsets invariant under the deterministic flow.} $\cA$, the theory of large deviations yields in particular the following information. For $y\in\partial\cD$ let \begin{equation} \label{ldp02} V(y) = \inf_{T>0}\inf_{\gamma\colon \cA\to y} I_{[0,T]}(\gamma)\;, \end{equation} be the \emph{quasipotential}, where the second infimum runs over all paths connecting $\cA$ to $y$ in time $T$. Then for $x_0\in\cA$ \begin{equation} \label{ldp03} \lim_{\sigma\to0} \sigma^2 \log\bigexpecin{x_0}{\tau_0} = \inf_{y\in\partial\cD} V(y)\;. \end{equation} Furthermore, if the quasipotential reaches its infimum at a unique isolated point $y^*\in\partial\cD$, then \begin{equation} \label{ldp04} \lim_{\sigma\to0} \bigprobin{x_0}{\norm{x_{\tau_0} - y^*} > \delta} = 0 \end{equation} for all $\delta>0$. This means that exit locations concentrate at points where the quasipotential is minimal. If we try to apply this last result to our problem, however, we realise that it does not give any useful information. Indeed, the quasipotential $V$ is constant on the unstable orbit $\set{r=0}$, because any two points on the orbit can be connected at zero cost, just by tracking the orbit. Nevertheless, the theory of large deviations provides some useful information, since it allows one to determine the most probable exit paths. The rate function~\eqref{ldp01} can be viewed as a Lagrangian action. 
Minimizing the action via Euler--Lagrange equations is equivalent to solving Hamilton equations with Hamiltonian \begin{equation} \label{ldp05} H(\gamma,\eta) = \frac12 \eta^\text{T} D(\gamma) \eta + f(\gamma)^\text{T} \eta\;, \end{equation} where $\eta=D(\gamma)^{-1}(\dot\gamma - f(\gamma))=(p_r,p_\ph)$ is the momentum conjugate to $\gamma$. This is a two-degrees-of-freedom Hamiltonian, whose orbits live in a four-dimensional space, which is, however, foliated into three-dimensional hypersurfaces of constant $H$. Writing out the Hamilton equations (cf.~\cite[Section~2.2]{BG_periodic2}) shows that the plane $\set{p_r=p_\ph=0}$ is invariant. It corresponds to deterministic motion, and contains in particular the periodic orbits of the original system. These turn out to be hyperbolic periodic orbits of the three-dimensional flow on the set $\set{H=0}$, with characteristic exponents $\pm\lambda_+T_+$ and $\mp\lambda_-T_-$. Typically, the unstable manifold of the stable orbit and the stable manifold of the unstable orbit will intersect transversally, and the intersection will correspond to a minimiser $\gamma_\infty$ of the rate function, connecting the two orbits in infinite time. In the sequel, we will assume that this is the case, and that $\gamma_\infty$ is unique up to shifts $\ph\mapsto\ph+n$ (cf.~\cite[Assumption~2.3 and Figure 2.2]{BG_periodic2} and~\figref{fig_ldp_section}). \begin{example} \label{ex:Melnikov} Assume that in~\eqref{exit03}, $f_r(r,\ph)=\sin(2\pi r)[1+\eps\sin(2\pi r)\cos(2\pi\ph)]$, whereas $f_\ph(r,\ph)=\omega$ and $g_r=g_\ph=1$. The resulting Hamiltonian takes the form \begin{equation} \label{ldp06} H(r,\ph,p_r,p_\ph) = \frac12(p_r^2+p_\ph^2) + \sin(2\pi r)[1+\eps\sin(2\pi r)\cos(2\pi\ph)] p_r + \omega p_\ph\;. \end{equation} In the limiting case $\eps=0$, the system is invariant under shifts along $\ph$, and thus $p_\ph$ is a first integral. The unstable and stable manifolds of the periodic orbits do not intersect transversally. In fact, they are identical, and given by the equation $p_r=-2\sin(2\pi r)$, $p_\ph=0$. However, for small positive $\eps$, Melnikov's method~\cite[Chapter~6]{GH} allows one to prove that the two manifolds intersect transversally. \end{example} \subsection{Random Poincar\'e maps} \label{ssec_rpm} \begin{figure} \centerline{\includegraphics*[clip=true,height=55mm]{figs/random_poincare2}} \figtext{ \writefig 0.9 5.5 $r$ \writefig 3.35 5.4 $1$ \writefig 5.55 5.4 $2$ \writefig 7.7 5.4 $3$ \writefig 9.9 5.4 $n$ \writefig 11.2 5.35 $\ph_{\tau_0}$ \writefig 12.1 5.4 $n+1$ \writefig 14.0 5.4 $\ph$ \writefig 6.5 4.2 $\gamma_\infty$ \writefig 1.35 1.75 $R_0$ \writefig 3.525 1.85 $R_1$ \writefig 5.7 1.65 $R_2$ \writefig 7.88 1.5 $R_3$ \writefig 9.45 3.95 $R_n$ \writefig 0.1 1.2 $-\frac12$ \writefig 0.6 4.5 $-\delta$ \writefig 2.35 5.35 $s^*_\delta$ } \caption[]{Definition of the random Poincar\'e map. The sequence $(R_0,R_1,\dots,R_{\intpart{\ph_{\tau_0}}})$ forms a Markov chain, killed at $\intpart{\ph_{\tau_0}}$, where $\tau_0$ is the first time the process hits the unstable periodic orbit in $r=0$. The process is likely to track a translate of the path $\gamma_\infty$ minimizing the rate function. } \label{fig_random_poincare} \end{figure} The periodicity in $\ph$ of the system~\eqref{exit03} yields useful information on the distribution of the crossing phase $\ph_{\tau_0}$. We fix an initial condition $(r_0,\ph_0=0)$ with $-1<r_0<0$ and define for every $n\in\N$ \begin{equation} \label{rpm01} \tau_n = \inf\setsuch{t>0}{\ph_t = n}\;. 
\end{equation} In addition, we kill the process at the first time $\tau_0$ it hits the unstable orbit at $r=0$, and set $\tau_n=\infty$ whenever $\ph_{\tau_0} < n$. The sequence $(R_0,R_1,\dots,R_N)$, defined by $R_k=r_{\tau_k}$ and $N = \intpart{\ph_{\tau_0}}$, forms a substochastic Markov chain on $E=\R_-$, which records the successive values of $r$ whenever $\ph$ reaches for the first time the vertical lines $\set{\ph=k}$ (\figref{fig_random_poincare}). This Markov chain has a transition kernel with density $k(x,y)$, that is, \begin{equation} \label{rpm02} \pcond{R_{n+1}\in B}{r_{\tau_n}=R_n} =: K(R_n,B) = \int_B k(R_n,y)\6y\;, \qquad B\subset E \end{equation} for all $n\geqs0$. In fact, $k(x,y)$ is obtained\footnote{Again, for technical reasons, one has to replace the set $\set{\ph<1,r<0}$ by a large bounded set, but this modifies probabilities by exponentially small errors that will be negligible.} by restricting to $\set{\ph=1}$ the harmonic measure for exit from $\set{\ph<1,r<0}$, for a starting point $(0,x)$. We denote by $K^n$ the $n$-step transition probabilities defined recursively by \begin{equation} \label{rpm03} K^n(R_0,B) := \probin{R_0}{R_{n}\in B} = \int_E K^{n-1}(R_0,\6y)K(y,B)\;. \end{equation} If we decompose $\ph=n+s$ into its integer part $n$ and fractional part $s$, we can write \begin{equation} \label{rpm04} \probin{0,R_0}{\ph_{\tau_0}\in n+\6s} = \int_E K^n(R_0,\6y) \probin{0,y}{\ph_{\tau_0}\in\6s}\;. \end{equation} Results by Fredholm~\cite{Fredholm_1903} and Jentzsch~\cite{Jentzsch1912}, extending the well-known Perron--Frobenius theorem, show that $K$ admits a spectral decomposition. In particular, $K$ admits a \emph{principal eigenvalue} $\lambda_0$, which is real, positive, and larger than the modulus of all other eigenvalues $\lambda_i$. The substochastic nature of the Markov chain, due to the killing, implies that $\lambda_0<1$. If we can obtain a bound $\rho<1$ on the ratio $\abs{\lambda_i}/\lambda_0$ valid for all $i\geqs1$ (\emph{spectral gap} estimate), then we can write \begin{equation} \label{rpm05} K^n(R_0,B) = \lambda_0^n \pi_0(B) \bigl[ 1+\Order{\rho^n} \bigr] \end{equation} as $n\to\infty$. Here $\pi_0$ is the probability measure defined by the right eigenfunction of $K$ corresponding to $\lambda_0$~\cite{Jentzsch1912,KreinRutman1948,Birkhoff1957}. Since \begin{equation} \label{rpm06} \pcondin{R_0}{R_n\in B}{N>n} = \frac{K^n(R_0,B)}{K^n(R_0,E)} = \pi_0(B) \bigl[ 1+\Order{\rho^n} \bigr]\;, \end{equation} the measure $\pi_0$ represents the asymptotic probability distribution of the process conditioned on having survived. It is called the \emph{quasistationary distribution} of the process~\cite{Yaglom56,Seneta_VereJones_1966}. Plugging~\eqref{rpm05} into~\eqref{rpm04}, we see that \begin{equation} \label{rpm07} \probin{0,R_0}{\ph_{\tau_0}\in n+\6s} = \lambda_0^n \int_E \pi_0(\6y) \probin{0,y}{\ph_{\tau_0}\in\6s} \bigl[ 1+\Order{\rho^n} \bigr]\;. \end{equation} This implies that the distribution of crossing phases $\ph_{\tau_0}$ asymptotically behaves like a periodically modulated geometric distribution: its density $P$ satisfies $P(\ph+1) = \lambda_0 P(\ph)$ for large $\ph$. \section{Log-periodic oscillations} \label{sec_logper} In this section, we formulate our main result on the distribution of crossing phases $\ph_{\tau_0}$ of the unstable orbit, which describe the position of phase slips. This result is based on the work~\cite{BG_periodic2}, but we will reformulate it in order to allow comparison with related results. 
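Before doing so, it is instructive to illustrate the spectral picture of~\eqref{rpm05} on a toy example. The Gaussian kernel below is a hypothetical stand-in for $k(x,y)$, chosen only for illustration: discretizing it yields a substochastic matrix from which $\lambda_0$ and $\pi_0$ can be read off numerically.
\begin{verbatim}
import numpy as np

# Toy illustration of (rpm05): a hypothetical Gaussian kernel on E = [-1, 0)
# with killing at r = 0; we extract the principal eigenvalue lambda_0 and
# the quasistationary distribution pi_0.
x = np.linspace(-0.9995, -0.0005, 400)   # grid on E = [-1, 0)
dx = x[1] - x[0]
mean = x + 0.5 * (-0.5 - x)              # relaxation towards the stable orbit
s = 0.2                                  # effective noise level (illustrative)
K = dx * np.exp(-(x[None, :] - mean[:, None])**2 / (2 * s**2)) \
    / (s * np.sqrt(2 * np.pi))
# mass leaking past r = 0 is lost, so K is substochastic and lambda_0 < 1
evals, evecs = np.linalg.eig(K.T)        # transposed kernel: eigen-measures
i0 = np.argmax(evals.real)
lam0 = evals[i0].real                    # principal eigenvalue
pi0 = np.abs(evecs[:, i0].real)
pi0 /= pi0.sum()                         # quasistationary distribution
print(1 - lam0)                          # killing probability per period
\end{verbatim}
The principal eigenvalue comes out slightly below $1$, in line with the exponentially small probability of reaching the unstable orbit during one rotation.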
\subsection{The distribution of crossing phases} \label{ssec_cycling} Before stating the results applying to general nonlinear equations of the form~\eqref{exit03}, let us consider a system approximating it near the unstable orbit at $r=0$, given by \begin{align} \nonumber \6r_t &= \lambda_+ r_t \6t + \sigma g_r(0,\ph_t)\6W_t\;, \\ \6\ph_t &= \frac{1}{T_+}\6t\;. \label{cycling01} \end{align} This system can be transformed into a simpler form by combining a $\ph$-dependent scaling and a random time change. Indeed, let $h^{\text{\rm{per}}}(\ph)$ be the periodic solution of \begin{equation} \label{cycling03} \frac{\6h}{\6\ph} = 2\lambda_+T_+ h - D_{rr}(0,\ph)\;, \end{equation} and set $r=[2\lambda_+T_+h^{\text{\rm{per}}}(\ph)]^{1/2}y$. Then It\^o's formula yields \begin{equation} \label{cycling03b} \6y_t = \frac{D_{rr}(0,\ph_t)}{2T_+h^{\text{\rm{per}}}(\ph_t)} y_t \6t + \sigma \frac{g_r(0,\ph_t)}{\sqrt{2\lambda_+T_+h^{\text{\rm{per}}}(\ph_t)}}\6W_t\;. \end{equation} Next, we introduce the function \begin{equation} \label{cycling05} \theta(\ph) = \lambda_+T_+\ph - \frac12 \log \biggl( \frac{h^{\text{\rm{per}}}(\ph)}{2h^{\text{\rm{per}}}(0)^2}\biggr)\;, \end{equation} which should be thought of as a parametrisation of the unstable orbit that makes the stochastic dynamics as simple as possible. Indeed, note that $\theta(\ph+1)=\theta(\ph)+\lambda_+T_+$ and $\theta'(\ph) = D_{rr}(0,\ph)/(2h^{\text{\rm{per}}}(\ph)) > 0$. Thus the random time change $\6t = [\lambda_+T_+/\theta'(\ph_t)]\6s$ yields the equation \begin{equation} \6y_s = \lambda_+ y_s \6s + \sigma \tilde g_r(0,s)\6W_s\;, \qquad \tilde g_r(0,s) = \frac{g_r(0,\ph_t)}{\sqrt{D_{rr}(0,\ph_t)}}\;, \label{cycling02} \end{equation} in which the effective noise intensity is constant, i.e.\ $\widetilde D_{rr}(0,s) = \tilde g_r(0,s)\tilde g_r(0,s)^\text{T}=1$. In order to formulate the main result of this section, we set \begin{equation} \label{cycling06} \theta_\delta(\ph) = \theta(\ph) - \log\delta + \log \biggl( \frac{h^{\text{\rm{per}}}(s^*_\delta)}{h^{\text{\rm{per}}}(0)}\biggr)\;, \end{equation} where $s^*_\delta\in[0,1)$ is such that $(s^*_\delta,-\delta)$ belongs to a translate of the optimal path $\gamma_\infty$ (\figref{fig_random_poincare}). A real-valued random variable $Z$ is said to follow the \emph{standard Gumbel law} if \begin{equation} \label{cycling07} \bigprob{Z\leqs t} = \e^{-\e^{-t}} \qquad \forall t\in\R\;. \end{equation} \figref{fig_Gumbel} shows the density $\e^{-t-\e^{-t}}$ of a standard Gumbel law. \begin{figure} \begin{center} \begin{tikzpicture}[>=stealth',main node/.style={circle,minimum size=0.25cm,fill=blue!20,draw},x=1.2cm,y=0.8cm] \draw[->,thick] (-5,0) -> (5,0); \draw[->,thick] (0,-0.5) -> (0,5); \foreach \x in {1,...,4} \draw[semithick] (\x,-0.1) -- node[below=0.1cm] {{\small $\x$}} (\x,0.1); \foreach \x in {-1,...,-4} \draw[semithick] (\x,-0.1) -- node[below=0.1cm] {{\small $\x$}} (\x,0.1); \foreach \y in {1,...,4} \draw[semithick] (-0.08,\y) -- node[right=0.1cm] {{\small $0.\y$}} (0.08,\y); \draw[blue,thick,-,smooth,domain=-4.5:4.5,samples=75,/pgf/fpu,/pgf/fpu/output format=fixed] plot (\x, { 10*exp(-\x -exp(-\x) ) }); \node[] at (5.0,-0.5) {$t$}; \node[] at (0.8,5.0) {$\e^{-t-\e^{-t}}$}; \end{tikzpicture} \vspace{-5mm} \end{center} \caption[]{Density of a standard Gumbel random variable. 
} \label{fig_Gumbel} \end{figure} \begin{theorem}[{\cite[Theorem~2.4]{BG_periodic2}}] \label{thm_periodic2} Fix an initial condition $(r_0,\ph_0=0)$ of the nonlinear system~\eqref{exit03} with $r_0$ sufficiently close to the stable orbit in $r=-1/2$. There exist $\beta, c>0$ such that for any sufficiently small $\delta,\Delta>0$, there exists $\sigma_0>0$ such that for $0<\sigma<\sigma_0$, \begin{align} \nonumber \biggprobin{r_0,0}{\frac{\theta_\delta(\ph_{\tau_0})}{\lambda_+T_+} \in [t,t+\Delta]} ={}& \Delta [1-\lambda_0(\sigma)]\lambda_0(\sigma)^t Q_{\lambda_+T_+} \biggl( \frac{\abs{\log\sigma}}{\lambda_+T_+} - t + \Order{\delta}\biggr) \\ &{}\times \biggl[ 1 + \Order{\e^{-ct/\abs{\log\sigma}}} + \Order{\delta\abs{\log\delta}} + \Order{\Delta^\beta}\biggr]\;. \label{cycling08} \end{align} Here $\lambda_0(\sigma)$ is the principal eigenvalue of the Markov chain, and $1-\lambda_0(\sigma)$ is of order $\e^{-I_\infty/\sigma^2}$, where $I_\infty=I(\gamma_\infty)$ is the value of the rate function for the path $\gamma_\infty$. Furthermore, $Q_{\lambda_+T_+}(x)$ is the periodic function, with period $1$, given by \begin{equation} \label{cycling08A} Q_{\lambda_+T_+}(x) = \sum_{n\in\Z} A\bigl( \lambda_+T_+(n-x) \bigr)\;, \end{equation} where \begin{equation} \label{cycling09} A(x) = \exp \Bigl\{ -2x - \frac12 \e^{-2x} \Bigr\} \end{equation} is the density of $(Z-\log2)/2$, with $Z$ a standard Gumbel variable. \end{theorem} We will discuss various implications of this result in the next sections. The periodic dependence on $\log\sigma$ will be addressed in Section~\ref{ssec_osc}, and we will say more on the Gumbel law in Section~\ref{sec_Gumbel}. For now, let us give a reformulation of the theorem, which will ease comparison with other related results. Following~\cite{HitczenkoMedvedev,HitczenkoMedvedev1}, we say that an integer-valued random variable $Y$ is \emph{asymptotically geometric} with success probability $p$ if \begin{equation} \label{cycling10} \lim_{n\to\infty} \pcond{Y=n+1}{Y>n} = p\;. \end{equation} We use $\lim_{n\to\infty}\mathsf{Law}(X_n)=\mathsf{Law}(X)$ to denote convergence in distribution of a sequence of random variables $X_n$ to a random variable $X$. \begin{theorem} \label{thm_convergence_Gumbel} There exists a family $(Y_m^\sigma)_{m\in\N,\sigma>0}$ of asymptotically geometric random variables such that \begin{equation} \label{cycling11} \lim_{m\to\infty} \Bigl[ \lim_{\sigma\to0} \mathsf{Law} \bigl( \theta(\ph_{\tau_0}) - \abs{\log\sigma} - \lambda_+T_+ Y_m^\sigma \bigr) \Bigr] = \mathsf{Law} \biggl( \frac{Z}{2} - \frac{\log2}{2}\biggr) \;, \end{equation} where $Z$ is a standard Gumbel random variable independent of the $Y_m^\sigma$. The success probability of $Y_m^\sigma$ is of the form $p_{m,\sigma} = \e^{-I_m/\sigma^2}$, where $I_m = I_\infty + \Order{\e^{-2m\lambda_+T_+}}$. \end{theorem} This theorem is almost a corollary of Theorem~\ref{thm_periodic2}, but a little work is required to control the limit $m\to\infty$, which corresponds to the limit $\delta\to0$. We give the details in Appendix~\ref{sec_proof_Gumbel}. The interpretation of~\eqref{cycling11} is as follows. To reach the unstable orbit at $\ph_{\tau_0}$, the system will track, with high probability, a translate $\gamma_\infty(\cdot+n)$ of the optimal path $\gamma_\infty$ (\figref{fig_random_poincare}). The random variable $Y_m^\sigma$ is the index $n$ of the chosen translate. 
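The identification of $A(x)$ in~\eqref{cycling09} as the density of $(Z-\log2)/2$ is easily checked by direct sampling; the following minimal sketch (sample size and seed are arbitrary) compares a histogram of transformed Gumbel samples with $A$:
\begin{verbatim}
import numpy as np

# Sampling check of (cycling09): (Z - log 2)/2, with Z standard Gumbel,
# has density A(x) = exp(-2x - exp(-2x)/2).
rng = np.random.default_rng(3)
Z = -np.log(-np.log(rng.uniform(size=1_000_000)))  # standard Gumbel samples
X = (Z - np.log(2)) / 2
A = lambda x: np.exp(-2 * x - 0.5 * np.exp(-2 * x))
hist, edges = np.histogram(X, bins=200, range=(-2, 4), density=True)
mid = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - A(mid))))   # small, up to sampling error
\end{verbatim}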
The index $Y_m^\sigma$ follows an approximately geometric distribution of parameter $1-\lambda_0(\sigma) \simeq \e^{-I_\infty/\sigma^2}$, which also manifests itself in the factor $(1-\lambda_0)\lambda_0^t$ in~\eqref{cycling08}. The distribution of $\ph_{\tau_0}$ conditional on the event $\set{Y_m^\sigma=n}$ converges to a shifted Gumbel distribution --- we will come back to this point in Section~\ref{sec_Gumbel}. We may not be interested in the integer part of the crossing phase $\ph_{\tau_0}$, which counts the number of rotations around the orbit, but only in its fractional part $\hat\ph_{\tau_0}=\ph_{\tau_0}\pmod{1}$. Then it follows immediately from~\eqref{cycling11} and the fact that $Y_m^\sigma$ is integer-valued that \begin{equation} \label{cycling12} \lim_{\sigma\to0} \mathsf{Law} \bigl( \theta(\hat\ph_{\tau_0}) - \abs{\log\sigma} \bigr) = \mathsf{Law} \biggl( \biggl[\frac{Z}{2} - \frac{\log2}{2}\biggr] \pmod{\lambda_+T_+}\biggr) \;. \end{equation} The random variable on the right-hand side has a density given by~\eqref{cycling08A}. This result can also be derived directly from~\cite[Corollary~2.5]{BG_periodic2}, by the same procedure as the one used in Appendix~\ref{ssec_pG_u}. \begin{remark} \hfill \begin{enum} \item The index $m$ in~\eqref{cycling11} seems artificial, and one would like to have a similar result for the law of $\theta(\ph_{\tau_0})-\abs{\log\sigma}-\lambda_+T_+Y_\infty^\sigma$. Unfortunately, the convergence as $\sigma\to0$ is not uniform in $m$, so that the two limits in~\eqref{cycling11} have to be taken in that particular order. \item The speed of convergence in~\eqref{cycling10} depends on the spectral gap of the Markov chain. In~\cite[Theorem~6.14]{BG_periodic2}, we proved that this spectral gap is bounded by $\e^{-c/\abs{\log\sigma}}$ for some constant $c>0$, though we expect that the gap can be bounded uniformly in $\sigma$. We expect, but have not proved, that the constant $c$ is uniform in $m$ (i.e.\ uniform in the parameter $\delta$). \end{enum} \end{remark} \subsection{The origin of oscillations} \label{ssec_osc} A striking aspect of the expression~\eqref{cycling08} for the distribution of $\ph_{\tau_0}$ is that it depends periodically on $\abs{\log\sigma}$. This means that as $\sigma\to0$, the distribution does not converge, but is endlessly shifted around the unstable orbit proportionally to $\abs{\log\sigma}$. This phenomenon was discovered by Martin Day, who called it \emph{cycling} \cite{Day7,Day3,Day6,Day4}. See also~\cite{MS4,BG7,Getfert_Reimann_2009,Getfert_Reimann_2010} for related work. The intuitive explanation of cycling is as follows. We have seen that the large-deviation rate function is minimized by a path $\gamma_\infty$ (and its translates). The path approaches the unstable orbit as $\ph\to\infty$, and the stable one as $\ph\to-\infty$. The distance between $\gamma_\infty$ and the unstable orbit satisfies \begin{equation} \label{osc01} |r(\ph)| \simeq c\e^{-\theta(\ph)} \qquad \text{as } \ph\to\infty\;. \end{equation} This implies \begin{equation} \label{osc02} |r(\ph)| = \sigma \quad \Leftrightarrow \quad \theta(\ph) \simeq |\log\sigma| + \log c\;. \end{equation} Thus everything behaves as if the unstable orbit has an \lq\lq effective thickness\rq\rq\ equal to the standard deviation $\sigma$ of the noise. Escape becomes likely once the optimal path $\gamma_\infty$ touches the thickened unstable orbit. 
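A quick numerical check of the cycling profile~\eqref{cycling08A} confirms both its periodicity and its normalization (since $A$ integrates to $1$, the profile integrates to $1/(\lambda_+T_+)$ over one period); the value $\lambda_+T_+=2$ below is an arbitrary illustration:
\begin{verbatim}
import numpy as np

# Check of (cycling08A): Q has period 1 and integrates to 1/(lambda_+ T_+).
lamT = 2.0                                   # illustrative lambda_+ T_+
A = lambda x: np.exp(-2 * x - 0.5 * np.exp(-2 * x))
def Q(x, nmax=50):
    n = np.arange(-nmax, nmax + 1)[:, None]
    return A(lamT * (n - x)).sum(axis=0)
xs = np.linspace(0.0, 1.0, 2001)
print(lamT * np.trapz(Q(xs), xs))            # ~= 1.0
print(np.max(np.abs(Q(xs + 1.0) - Q(xs))))   # ~= 0 (periodicity)
\end{verbatim}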
It is interesting to note that the periodic dependence on the logarithm of a parameter, or \emph{log-periodic oscillations}, appears in many systems presenting discrete-scale invariance~\cite{Sornette_98}. These include for instance hierarchical models in statistical physics \cite{Derrida_Itzykson_Luck_84,Costin_Giacomin_2013} and self-similar networks \cite{Doucot_etal_PRL_86}, diffusion through fractals \cite{Akkermans_etal_EPL_2009,Dunne_JPA_2012} and iterated maps \cite{deMoura_etal_PRE_2000,Derrida_Giacomin_2014}. One link with the present situation is that~\eqref{osc01} implies a discrete-scale invariance, since scaling $r$ by a factor $\e^{-(\theta(\ph+1)-\theta(\ph))}=\e^{-\lambda_+T_+}$ is equivalent to scaling the noise intensity by the same factor. There might be deeper connections due to the fact that certain key functions, such as the Gumbel distribution in our case, obey functional equations --- see for instance the similar behaviour of the example in~\cite[Remark~3.1]{Derrida_Giacomin_2014}. \begin{remark} \label{rem_osc} The periodic \lq\lq cycling profile\rq\rq\ $Q_{\lambda_+T_+}$ admits the Fourier series representation \begin{equation} \label{osc03} Q_{\lambda_+T_+}(x) = \sum_{k\in\Z} a_k \e^{2\pi\icx k x}\;, \qquad a_k = \frac{2^{-\pi\icx k/(\lambda_+T_+)}}{\lambda_+T_+} \Gamma \biggl( 1 - \frac{\pi\icx k}{\lambda_+T_+}\biggr)\;, \end{equation} where $\Gamma$ is Euler's Gamma function. $Q_{\lambda_+T_+}$ is also an \emph{elliptic function}, since in addition to being periodic in the real direction, it is also periodic in the imaginary direction. Indeed, by definition of $A(x)$, we have \begin{equation} \label{osc04} Q_{\lambda_+T_+} \biggl( z + \frac{\pi\icx}{\lambda_+T_+}\biggr) = Q_{\lambda_+T_+}(z) \qquad \forall z\in\C\;. \end{equation} Being non-constant and doubly periodic, $Q_{\lambda_+T_+}$ necessarily admits at least two poles in every translate of the unit cell $(0,1)\times(0,\pi\icx/(\lambda_+T_+))$. \end{remark} \section{The Gumbel distribution} \label{sec_Gumbel} \subsection{Extreme-value theory} \label{ssec_evt} Let $X_1,X_2,\dots$ be a sequence of independent, identically distributed (i.i.d.) real random variables, with common distribution function $F(x)=\prob{X_1\leqs x}$. Extreme-value theory is concerned with deriving the law of the maximum \begin{equation} \label{evt01} M_n = \max\set{X_1,\dots,X_n} \end{equation} as $n\to\infty$. It is immediate that the distribution function of $M_n$ is $F(x)^n$. We will say that $F$ belongs to the domain of attraction of a distribution function $\Phi$, and write $F\in D(\Phi)$, if there exist sequences of real numbers $a_n>0$ and $b_n$ such that \begin{equation} \label{evt02} \lim_{n\to\infty} F(a_nx+b_n)^n = \Phi(x) \qquad \forall x\in\R\;. \end{equation} This is equivalent to the sequence of random variables $(M_n-b_n)/a_n$ converging in distribution to a random variable with distribution function $\Phi$. Clearly, if $F\in D(\Phi)$, then one also has $F\in D(\Phi(ax+b))$ for all $a>0,b\in\R$, so it makes sense to work with equivalence classes $\set{\Phi(ax+b)}_{a,b}$. Any possible limit of~\eqref{evt02} should satisfy the functional equation \begin{equation} \label{evt03} \Phi(ax+b)^2 = \Phi(x) \qquad \forall x\in\R \end{equation} for some constants $a>0, b\in\R$. 
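For instance, the Gumbel distribution function $\e^{-\e^{-x}}$, already encountered in~\eqref{cycling07}, satisfies~\eqref{evt03} with $a=1$ and $b=\log 2$: indeed $\bigl(\e^{-\e^{-(x+\log2)}}\bigr)^2 = \e^{-2\e^{-x}/2} = \e^{-\e^{-x}}$, which expresses its max-stability under samples of size $2$.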
Fr\'echet~\cite{Frechet_1927}, Fisher and Tippett~\cite{Fisher_Tippett_1928} and Gnedenko~\cite{Gnedenko_1943} have shown that if one excludes the degenerate case $F(x)=1_{\set{x\geqs c}}$, then the only possible solutions of~\eqref{evt03} are in one of the following three classes, where $\alpha>0$ is a parameter: \begin{alignat}{3} \nonumber \Phi_\alpha(x) &= \e^{-x^{-\alpha}} 1_{\set{x>0}} & \qquad &\text{Fr\'echet law\;,} \\ \nonumber \Psi_\alpha(x) &= \e^{-(-x)^{\alpha}} 1_{\set{x\leqs 0}} + 1_{\set{x>0}} & &\text{(reversed) Weibull law\;,} \\ \Lambda(x) &= \e^{-\e^{-x}} & &\text{Gumbel law\;.} \label{evt04} \end{alignat} In~\cite{Gnedenko_1943}, Gnedenko gives precise characterizations on when $F$ belongs to the domain of attraction of each of the above laws. Of particular interest to us is the following result. Let \begin{equation} \label{evt05} R(x) = 1-F(x) = \prob{X_1 > x} \end{equation} denote the tail probabilities of the i.i.d.\ random variables $X_i$. \begin{lemma}[{\cite[Lemma~4]{Gnedenko_1943}}] \label{lem_Gendenko_4} A nondegenerate distribution function $F$ belongs to the domain of attraction of $\Phi$ if and only if there exist sequences $a_n>0$ and $b_n$ such that \begin{equation} \label{evt06} \lim_{n\to\infty} n R(a_n x + b_n) = -\log\Phi(x) \qquad \forall x \text{ such that } \Phi(x)>0\;. \end{equation} \end{lemma} The sequences $a_n$ and $b_n$ are not unique, but \cite[Theorem~6]{Gnedenko_1943} shows that in the case of the Gumbel distribution $\Phi=\Lambda$, \begin{equation} \label{evt07} b_n = \inf\biggsetsuch{x}{F(x) > 1-\frac1n} \;, \qquad a_n = \inf\biggsetsuch{x}{F(x + b_n) > 1-\frac1{n\e}} \end{equation} is a possible choice. In this way it is easy to check that the normal law is attracted to the Gumbel distribution. Another related characterization of $F$ being in the domain of attraction of the Gumbel law is the following. \begin{theorem}[{\cite[Theorem~7]{Gnedenko_1943}}] Let $x_0=\inf\setsuch{x}{F(x)=1}\in\R\cup\set{\infty}$. Then $F\in D(\Lambda)$ if and only if there exists a continuous function $A(z)$ such that $\lim_{z\to x_0-}A(z)=0$ and \begin{equation} \label{evt08} \lim_{z\to x_0-} \frac{R(z(1+A(z)x))}{R(z)} = -\log\Lambda(x) = \e^{-x} \qquad \forall x\in\R\;. \end{equation} The function $A(z)$ can be chosen such that $A(b_n)=a_n/b_n$ for all $n$, where $a_n$ and $b_n$ satisfy~\eqref{evt06}. \end{theorem} The quantity on the left-hand side of~\eqref{evt08} can be rewritten as \begin{equation} \label{evt09} \bigpcond{X_1 > z(1 + A(z)x)}{X_1>z}\;, \end{equation} that is, it represents a \emph{residual lifetime}. See also~\cite{Balkema_deHaan_74}. \subsection{Length of reactive paths} \label{ssec_lrp} The Gumbel distribution also appears in the context of somewhat different exit problems (which, however, will turn out not to be so different after all). In~\cite{CerouGuyaderLelievreMalrieu12}, C\'erou, Guyader, Leli\`evre and Malrieu consider one-dimensional SDEs of the form \begin{equation} \label{lrp01} \6x_t = -V'(x_t)\6t + \sigma\6W_t\;, \end{equation} where $V(x)$ is a double-well potential (\figref{fig_double_well}). 
\begin{figure} \begin{center} \begin{tikzpicture}[>=stealth',main node/.style={draw,circle,fill=white,minimum size=3pt,inner sep=0pt}] \draw[->,thick] (-5.5,0) -> (5.5,0); \draw[->,thick] (0,-4.5) -> (0,2.0); \draw[dashed,semithick] (-2.55,0) -- (-2.55,-4); \draw[dashed,semithick] (2.3,0) -- (2.3,-3); \draw[blue,thick] plot[smooth,tension=.6] coordinates{(-4.6,1.5) (-2.6,-4) (-0.05,0) (2.4,-3) (4.4,1.5)}; \node[main node,blue,fill=white,semithick] at (0,0) {}; \node[main node,semithick] at (2.3,0) {}; \node[main node,blue,fill=white,semithick] at (2.3,-3) {}; \node[main node,semithick] at (-2.55,0) {}; \node[main node,blue,fill=white,semithick] at (-2.55,-4) {}; \node[main node,semithick] at (-2.0,0) {}; \node[] at (-2.55,0.3) {$x^*_-$}; \node[] at (2.3,0.3) {$x^*_+$}; \node[] at (-2,0.3) {$a$}; \node[] at (5.0,-0.25) {$x$}; \node[] at (0.5,1.5) {$V(x)$}; \end{tikzpicture} \end{center} \caption[]{An example of double-well potential occurring in~\eqref{lrp01}. } \label{fig_double_well} \end{figure} Assume, without loss of generality, that the local maximum of $V$ is at $0$. Denote the local minima of $V$ by $x^*_- < 0 < x^*_+$, and assume $\lambda=-V''(0)>0$. Pick an initial condition $x_0\in(x^*_-,0)$. A classical question is to determine the law of the first-hitting time $\tau_b$ of a point $b\in(0,x^*_+]$. The expected value of $\tau_b$ obeys the so-called \emph{Eyring--Kramers law}~\cite{Arrhenius,Eyring,Kramers} \begin{equation} \label{lrp02} \expecin{x_0}{\tau_b} = \frac{2\pi}{\sqrt{V''(x^*_-)\abs{V''(0)}}} \e^{2[V(0)-V(x^*_-)]/\sigma^2} \bigl[ 1 + \Order{\sigma} \bigr]\;. \end{equation} In addition, Day~\cite{Day1} has proved (in a more general context) that the distribution of $\tau_b$ is asymptotically exponential: \begin{equation} \label{lrp03} \lim_{\sigma\to0} \bigprobin{x_0}{\tau_b > s \, \expecin{x_0}{\tau_b}} = \e^{-s} \;. \end{equation} The picture is that sample paths spend an exponentially long time near the local minimum $x^*_-$, with occasional excursions away from $x^*_-$, until ultimately managing to cross the saddle. See for instance~\cite{Berglund_irs_MPRF} for a recent review. In transition-path theory~\cite{E_VandenEijnden_06,Vanden_Eijnden_LNP06}, by contrast, one is interested in the very last bit of the sample path, between its last visit to $x^*_-$ and its first passage at $b$. The length of this transition is considerably shorter than $\tau_b$. A way to formulate this is to fix a point $a\in(x^*_-,x_0)$, and to condition on the event that the path hits $b$ before hitting $a$. The result can be formulated as follows (note that our $\sigma$ corresponds to $\sqrt{2\eps}$ in~\cite{CerouGuyaderLelievreMalrieu12}): \begin{theorem}[{\cite[Theorem~1.4]{CerouGuyaderLelievreMalrieu12}}] \label{thm_CGLM} For any fixed $a < x_0 < 0 < b$ in $(x^*_- , x^*_+)$, \begin{equation} \label{lrp04} \lim_{\sigma\to0} \mathsf{Law} \bigl( \lambda \tau_b - 2 \abs{\log\sigma} \bigm| \tau_b < \tau_a \bigr) = \mathsf{Law} \Bigl( Z + T(x_0,b) \Bigr) \;, \end{equation} where $Z$ is a standard Gumbel variable, and \begin{equation} \label{lrp05} T(x_0,b) = \log \bigl(\abs{x_0}b\lambda \bigr) + \int_{x_0}^0 \biggl( \frac{\lambda}{V'(y)} + \frac{1}{y}\biggr) \6y - \int_0^b \biggl( \frac{\lambda}{V'(y)} + \frac{1}{y}\biggr) \6y\;. \end{equation} \end{theorem} The proof is based on Doob's $h$-transform, which allows one to replace the conditioned problem by an unconditioned one, with a modified drift term. The new drift term becomes singular as $x\to a_+$.
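To make the last statement more explicit (a standard computation, sketched here for the one-dimensional equation~\eqref{lrp01}): conditioning on the event $\set{\tau_b<\tau_a}$ amounts to applying Doob's $h$-transform with $h(x)=\bigprobin{x}{\tau_b<\tau_a}$, which for~\eqref{lrp01} is given by the ratio of scale functions \begin{equation} h(x) = \frac{\int_a^x \e^{2V(y)/\sigma^2}\6y}{\int_a^b \e^{2V(y)/\sigma^2}\6y}\;, \end{equation} and the conditioned process solves the unconditioned SDE \begin{equation} \6x_t = \biggl[ -V'(x_t) + \sigma^2 \frac{h'(x_t)}{h(x_t)} \biggr]\6t + \sigma\6W_t\;. \end{equation} Since $h(x)\to0$ as $x\to a_+$ while $h'(x)$ stays positive, the additional drift term $\sigma^2 h'/h$ indeed diverges near $a$, pushing the conditioned process away from $a$.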
See also~\cite{Lu_Nolen_2014} for other uses of Doob's $h$-transform in the context of reactive paths. As shown in~\cite[Section~4]{CerouGuyaderLelievreMalrieu12}, $2\abs{\log\sigma} + T(x_0,b)/\lambda$ is the sum of the deterministic time needed to go from $\sigma$ to $b$, and of the deterministic time needed to go from $-\sigma$ to $a$ (in the one-dimensional setting, paths minimizing the large-deviation rate function are time-reversed deterministic paths). \subsection{Bakhtin's approach} \label{ssec_Bakhtin} Yuri Bakhtin has recently provided some interesting insights into the question of why the Gumbel distribution governs the length of reactive paths~\cite{Bakhtin_2013a,Bakhtin_2014a}. They apply to linear equations of the form \begin{equation} \label{Bakhtin01} \6x_t = \lambda x_t\6t + \sigma\6W_t\;, \end{equation} where $\lambda > 0$. However, we will see in Section~\ref{sec_slips} below that they can be extended to the nonlinear setting by using the technique outlined in Appendix~\ref{ssec_pG_u}. The solution of~\eqref{Bakhtin01} is an \lq\lq explosive Ornstein--Uhlenbeck process\rq\rq \begin{equation} \label{Bakhtin02} x_t = \e^{\lambda t} \biggl( x_0 + \sigma \int_0^t \e^{-\lambda s}\6W_s\biggr)\;, \end{equation} which can also be represented in terms of a time-changed Brownian motion, \begin{equation} \label{Bakhtin03} x_t = \e^{\lambda t} \tilde x_t\;, \qquad \tilde x_t = x_0 + \widetilde W_{\sigma^2(1-\e^{-2\lambda t})/(2\lambda)} \end{equation} (this follows by evaluating the variance of $\tilde x_t$ using It\^o's isometry). Thus $\tilde x_t - x_0$ is equal in distribution to $\sigma\sqrt{(1-\e^{-2\lambda t})/(2\lambda)}\,N$, where $N$ is a standard normal random variable. Assume $x_0<0$ and denote by $\tau_0$ the first-hitting time of $x=0$. Then Andr\'e's reflection principle allows one to write \begin{equation} \label{Bakhtin04} \bigpcond{\tau_0 < t}{\tau_0 < \infty} = \frac{\prob{\tau_0 < t}}{\prob{\tau_0 < \infty}} = \frac{2\prob{\tilde x_t>0}}{2\prob{\tilde x_\infty>0}} = \bigpcond{\tilde x_t>0}{\tilde x_\infty>0}\;. \end{equation} Now we observe that \begin{align} \nonumber \Bigpcond{\tau_0 < t + \frac{1}{\lambda} |\log\sigma|}{\tau_0 < \infty} &= \Bigpcond{\tilde x_{t+\frac{1}{\lambda} |\log\sigma|}>0}{\tilde x_\infty>0} \\ &= \biggpcond{N > \frac{|x_0|}{\sigma}\sqrt{\frac{2\lambda}{1-\sigma^2\e^{-2\lambda t}}}} {N > \frac{|x_0|}{\sigma}\sqrt{2\lambda}}\;, \label{Bakhtin05} \end{align} where $N$ is a standard normal random variable. It follows that \begin{equation} \label{Bakhtin06} \lim_{\sigma\to0} \Bigpcond{\tau_0 < t + \frac{1}{\lambda} |\log\sigma|}{\tau_0 < \infty} = \exp \bigl\{ -x_0^2 \lambda\e^{-2\lambda t}\bigr\}\;. \end{equation} This can be checked by a direct computation, using tail asymptotics of the normal law. However, it is more interesting to view the last expression in~\eqref{Bakhtin05} as a residual lifetime, given by the expression~\eqref{evt09} with $z=(|x_0|/\sigma)\sqrt{2\lambda}$, $A(z)=z^{-2}$ and $x=x_0^2 \lambda\e^{-2\lambda t}$. The right-hand side of~\eqref{Bakhtin06} is the distribution function of $(Z+\log(x_0^2\lambda))/(2\lambda)$, where $Z$ is a standard Gumbel variable. Building on this computation, Bakhtin provided a new proof of the following result, which was already obtained by Day in~\cite{Day7}. \begin{theorem}[{\cite{Day7} and \cite[Theorem~3]{Bakhtin_2014a}}] \label{thm_Bakhtin_tau0} Fix $a<0$ and an initial condition $x_0\in(a,0)$.
Then \begin{equation} \label{Bakhtin07} \lim_{\sigma\to0} \mathsf{Law} \bigl( \lambda \tau_0 - \abs{\log\sigma} \bigm| \tau_0 < \tau_a \bigr) = \mathsf{Law} \biggl( \frac{Z}{2} + \frac{\log(x_0^2\lambda)}{2} \biggr) \;. \end{equation} \end{theorem} Observe the similarity with Theorem~\ref{thm_convergence_Gumbel} (and also with Proposition~\ref{prop_pG_p1}). The proof in~\cite{Bakhtin_2014a} uses the fact that conditioning on $\set{\tau_0 < \tau_a}$ is asymptotically equivalent to conditioning on $\set{\tau_0 < \infty}$. Note that we use a similar argument in the proof of Theorem~\ref{thm_convergence_Gumbel} in Appendix~\ref{ssec_pG_proof}. The expression~\eqref{Bakhtin07} differs from~\eqref{lrp04} by some factors of $2$. This is due to the fact that Theorem~\ref{thm_Bakhtin_tau0} considers the first-hitting time $\tau_0$ of the saddle, while Theorem~\ref{thm_CGLM} considers the first-hitting time $\tau_b$ of a point $b$ in the right-hand potential well. We will come back to this point in Section~\ref{ssec_tba}. The observations presented here provide a connection between first-exit times and extreme-value theory, via the reflection principle and residual lifetimes. As observed in~\cite[Section~4]{Bakhtin_2013a}, the connection depends on the seemingly accidental property \begin{equation} \label{Bakhtin08} -\log\Lambda(\e^{-x}) = \Lambda(x)\;, \end{equation} or $\Lambda(\e^{-x}) = \e^{-\Lambda(x)}$, of the Gumbel distribution function. Indeed, the right-hand side in~\eqref{Bakhtin06} is identified with $-\log\Lambda(x)$, evaluated at a point $x$ proportional to $\e^{-2\lambda t}$. \section{The duration of phase slips} \label{sec_slips} \subsection{Leaving the unstable orbit} \label{ssec_luo} Consider again, for a moment, the linear equation~\eqref{Bakhtin01}. Now we are interested in the situation where the process starts in $x_0=0$, and hits a point $b>0$ before hitting a point $a<0$. In this section, $\Theta$ will denote the random variable $\Theta=-\log|N|$, where $N$ is a standard normal variable. Its density is given by \begin{equation} \label{luo01} \frac{\6}{\6t} 2 \prob{N < -\e^{-t}} = \sqrt{\frac2\pi} \e^{-t-\frac12\e^{-2t}}\;, \end{equation} which is similar to, but different from, the density of a Gumbel distribution, see~\figref{fig_Theta}. \begin{figure} \begin{center} \begin{tikzpicture}[>=stealth',main node/.style={circle,minimum size=0.25cm,fill=blue!20,draw},x=1.2cm,y=0.8cm] \draw[->,thick] (-5,0) -> (5,0); \draw[->,thick] (0,-0.5) -> (0,6.0); \foreach \x in {1,...,4} \draw[semithick] (\x,-0.1) -- node[below=0.1cm] {{\small $\x$}} (\x,0.1); \foreach \x in {-1,...,-4} \draw[semithick] (\x,-0.1) -- node[below=0.1cm] {{\small $\x$}} (\x,0.1); \foreach \y in {1,...,5} \draw[semithick] (-0.08,\y) -- node[right=0.1cm] {{\small $0.\y$}} (0.08,\y); \draw[blue,thick,-,smooth,domain=-4.5:4.5,samples=75,/pgf/fpu,/pgf/fpu/output format=fixed] plot (\x, { sqrt(2/pi)*10*exp(-\x -0.5*exp(-2*\x) ) }); \node[] at (5.0,-0.5) {$t$}; \node[] at (1.2,6.0) {$\sqrt{\frac2\pi}\e^{-t-\frac12\e^{-2t}}$}; \end{tikzpicture} \vspace{-5mm} \end{center} \caption[]{Density of the random variable $\Theta=-\log|N|$. } \label{fig_Theta} \end{figure} \begin{theorem}[{\cite{Day2,Bakhtin_2008_SPA,Bakhtin_2011_PTRF}}] \label{thm_Bakhtin_taub} Fix $a<0<b$ and an initial condition $x_0=0$.
Then the linear system~\eqref{Bakhtin01} satisfies \begin{equation} \label{luo02} \lim_{\sigma\to0} \mathsf{Law} \bigl( \lambda \tau_b - \abs{\log\sigma} \bigm| \tau_b < \tau_a \bigr) = \mathsf{Law} \biggl( \Theta + \frac{\log(2b^2\lambda)}{2} \biggr) \;. \end{equation} \end{theorem} The intuition for this result is as follows. Consider first the symmetric case where $a=-b$ and let $\tau=\inf\setsuch{t>0}{|x_t|=b}=\tau_a\wedge\tau_b$. The solution of~\eqref{Bakhtin01} starting at $0$ can be written $x_t = \e^{\lambda t} \tilde x_t$, where $\tilde x_t = \sigma\sqrt{(1-\e^{-2\lambda t})/(2\lambda)}\,N$ and $N$ is a standard normal random variable, cf.~\eqref{Bakhtin03}. The condition $|x_\tau|=b$ yields \begin{equation} \label{luo03} b = \e^{\lambda\tau} \sigma \sqrt{\frac{1-\e^{-2\lambda\tau}}{2\lambda}}|N| \simeq \e^{\lambda\tau} \sigma \frac{1}{\sqrt{2\lambda}}|N|\;. \end{equation} Solving for $\tau$ yields $\lambda\tau - |\log\sigma| \simeq \log(2\lambda b^2)/2 - \log|N|$. One can also show~\cite[Theorem~2.1]{Day2} that $\sign(x_\tau)$ converges to a random variable $\nu$, independent of $N$, such that $\prob{\nu=1}=\prob{\nu=-1}=1/2$. This implies~\eqref{luo02} in the symmetric case, and the asymmetric case is dealt with in~\cite[Theorem~1]{Bakhtin_2008_SPA}. Let us return to the nonlinear system~\eqref{exit03} governing the coupled oscillators. We seek a result similar to Theorem~\ref{thm_Bakhtin_taub} for the first exit from a neighbourhood of the unstable orbit. The scaling argument given at the beginning of Section~\ref{ssec_cycling} indicates that simpler expressions will be obtained if this neighbourhood has a non-constant width of size proportional to $\sqrt{2\lambda_+T_+h^{\text{\rm{per}}}(\ph)}$. This is also consistent with the discussion in~\cite[Section~3.2.1]{BGbook}. Let us thus set \begin{equation} \label{luo03b} \tilde\tau_{\delta} = \inf\Bigsetsuch{t>0}{r_t = \delta\sqrt{2\lambda_+T_+h^{\text{\rm{per}}}(\ph)}}\;. \end{equation} \begin{theorem} \label{thm_exit} Fix an initial condition $(\ph_0,r_0=0)$ on the unstable periodic orbit. Then the system~\eqref{exit03} satisfies \begin{equation} \label{luo04} \lim_{\sigma\to0} \mathsf{Law} \bigl( \theta(\ph_{\tilde\tau_{\delta}}) - \theta(\ph_0) - \abs{\log\sigma} \bigm| \tilde\tau_{\delta} < \tilde\tau_{-\delta} \bigr) = \mathsf{Law} \biggl( \Theta + \frac{\log ( 2\lambda_+\delta^2 )}2 + \Order{\delta} \biggr) \end{equation} as $\delta\to0$. \end{theorem} We give the proof in Appendix~\ref{ssec_proof_exit}. Note that this result is indeed consistent with~\eqref{luo02}, if we take into account the fact that $h^{\text{\rm{per}}}(\ph)\equiv1/(2\lambda_+T_+)$ in the case of a constant diffusion matrix $D_{rr}\equiv 1$. \subsection{There and back again} \label{ssec_tba} A nice observation in~\cite{Bakhtin_2014a} is that Theorems~\ref{thm_Bakhtin_tau0} and~\ref{thm_Bakhtin_taub} imply Theorem~\ref{thm_CGLM} on the length of reactive paths in the linear case. This follows immediately from the following fact. \begin{lemma}[\cite{Bakhtin_2014a}] \label{lem_Bakhtin} Let $Z$ and $\Theta=-\log|N|$ be independent random variables, where $Z$ follows a standard Gumbel law, and $N$ a standard normal law. Then \begin{equation} \label{tba10} \mathsf{Law} \biggl( \frac12 Z + \Theta \biggr) = \mathsf{Law} \biggl( Z + \frac{\log2}2 \biggr) \;.
\end{equation} \end{lemma} \begin{proof} This follows directly from the expressions \begin{equation} \label{tba11} \bigexpec{\e^{\icx t Z}} = \Gamma(1-\icx t) \qquad \text{and} \qquad \bigexpec{\e^{\icx t \Theta}} = \bigexpec{|N|^{-\icx t}} = \frac{2^{-\icx t/2}}{\sqrt{\pi}}\Gamma \biggl( \frac{1-\icx t}{2}\biggr) \end{equation} for the characteristic functions of $Z$ and $\Theta$, and the duplication formula for the Gamma function, $\sqrt{\pi}\,\Gamma(2z)=2^{2z-1} \Gamma(z) \Gamma(z+\tfrac12)$. \end{proof} Let us now apply similar ideas to the nonlinear system~\eqref{exit03} in order to derive information on the duration of phase slips. In order to define this duration, consider two families of continuous curves $\Gamma^s_-$ and $\Gamma^s_+$, depending on a parameter $s\in\R$, periodic in the $\ph$-direction, and such that each $\Gamma^s_-$ lies in the set $\set{-1/2<r<0}$ and each $\Gamma^s_+$ lies in $\set{0<r<1/2}$. We set \begin{equation} \label{tba12} \tau^s_\pm = \inf\setsuch{t>0}{(r_t,\ph_t)\in\Gamma^s_\pm}\;, \end{equation} while $\tau_0$ is defined as before by~\eqref{hm01}. Given an initial condition $(r_0=-1/2,\ph_0)$, let us call a \emph{successful phase slip} a sample path that does not return to the stable orbit $\set{r=-1/2}$ between $\tau^s_-$ and $\tau_0$, and that does not return to $\Gamma^s_-$ between $\tau_0$ and $\tau^s_+$ (see~\figref{fig_phase_slip}). Then we have the following result. \begin{theorem} \label{thm_duration} There exist families of curves $\set{\Gamma^s_\pm}_{s\in\R}$ such that conditionally on a successful phase slip, \begin{align} \label{tba14a} \lim_{\sigma\to0} \mathsf{Law} \bigl( \theta(\ph_{\tau_0}) - \theta(\ph_{\tau^s_-}) - \abs{\log\sigma} \bigr) &= \mathsf{Law} \biggl( \frac{Z}{2} - \frac{\log(2)}{2} + s \biggr)\;, \\ \label{tba14b} \lim_{\sigma\to0} \mathsf{Law} \bigl( \theta(\ph_{\tau^s_+}) - \theta(\ph_{\tau_0}) - \abs{\log\sigma} \bigr) &= \mathsf{Law} \Bigl( \Theta + s \Bigr)\;, \\ \label{tba14c} \lim_{\sigma\to0} \mathsf{Law} \bigl( \theta(\ph_{\tau^s_+}) - \theta(\ph_{\tau^s_-}) - 2\abs{\log\sigma} \bigr) &= \mathsf{Law} \Bigl( Z + 2s \Bigr)\;, \end{align} where $Z$ denotes a standard Gumbel variable, $\Theta=-\log|N|$, and $N$ is a standard normal random variable. The curves $\Gamma^s_\pm$ are ordered in the sense that if $s_1<s_2$, then $\Gamma^{s_1}_\pm$ lies below $\Gamma^{s_2}_\pm$. Furthermore, $\Gamma^s_+$ converges to the unstable orbit $\set{r=0}$ as $s\to-\infty$, and to the stable orbit $\set{r=1/2}$ as $s\to\infty$. Similarly, $\Gamma^s_-$ converges to the unstable orbit $\set{r=0}$ as $s\to\infty$, and to the stable orbit $\set{r=-1/2}$ as $s\to-\infty$. \end{theorem} We give the proof in Appendix~\ref{ssec_proof_duration}, along with more details on how to construct the curves $\Gamma^s_\pm$. In a nutshell, they are obtained by letting the curves $\set{r=\pm\delta\sqrt{2\lambda_+T_+h^{\text{\rm{per}}}(\ph)}}$ introduced in the previous section evolve under the deterministic flow. The parameter $s$ plays a role analogous to that of $T(x_0,b)$ in~\eqref{lrp05}. \section{Conclusion and outlook} \label{sec_conclusion} Let us restate our main results in an informal way.
Theorem~\ref{thm_convergence_Gumbel} shows that in the weak-noise limit, the position of the center $\ph_{\tau_0}$ of a phase slip, defined by the crossing location of the unstable orbit, behaves like \begin{equation} \label{conc1} \theta(\ph_{\tau_0}) \simeq |\log\sigma| + \lambda_+T_+ Y^\sigma + \frac{Z}{2} - \frac{\log2}{2}\;, \end{equation} where $Y^\sigma$ is an asymptotically geometric random variable with success probability of order $\e^{-I_\infty/\sigma^2}$, and $Z$ is a standard Gumbel random variable. This expression is dominated by the term $Y^\sigma$, which accounts for exponentially long waiting times between phase slips. The term $\abs{\log\sigma}$ is responsible for the cycling phenomenon, and the term $(Z-\log(2))/2$ determines the shape of the cycling profile. Theorem~\ref{thm_duration} shows in particular that the duration of a phase slip behaves like \begin{equation} \label{conc2} \theta(\ph_{\tau_+}) - \theta(\ph_{\tau_-}) \simeq 2|\log\sigma| + Z + 2s\;, \end{equation} where $s$ is essentially the deterministic time required to travel between $\sigma$-neighbourhoods of the orbits, while the other two terms account for the time spent near the unstable orbit. The dominant term here is $2\abs{\log\sigma}$, which reflects the intuitive picture that noise enlarges the orbit to a thickness of order $\sigma$, outside which the deterministic dynamics dominates. The phase slip duration is split into two contributions from before and after crossing the unstable orbit, of respective size $(Z-\log2)/2 + s$ and $\Theta + s$. Decreasing the noise intensity has two main effects. The first one is to increase the duration of phase slips by an amount $2|\log\sigma|$, which is due to the longer time spent near the unstable orbit. The second effect is to shift the phase slip location by an amount $|\log\sigma|$, which results in log-periodic oscillations. Note that other quantities of interest can be deduced from the above expressions, such as the distribution of residence times, which are the time spans separating phase slips when the system is in a stationary state. The residence-time distribution is given by the sum of an asymptotically geometric random variable and a so-called logistic random variable, i.e., a random variable having the law of the difference of two independent Gumbel variables, with density proportional to $1/\cosh^2 (\theta)$~\cite{BG9}. The connection between first-exit distributions and extreme-value theory is partially understood in terms of residual lifetimes, as summarized in Section~\ref{ssec_Bakhtin}. It is probable that other connections remain to be discovered. For instance, functional equations satisfied by the Gumbel distribution seem to play an important r\^ole. One of them is the equation \begin{equation} \label{conc3} \Lambda\bigl(x+\log 2\bigr)^2 = \Lambda(x) \end{equation} which results from the Gumbel law being max-stable. Another one is the equation \begin{equation} \label{conc4} \Lambda\bigl(\e^{-x}\bigr) = \e^{-\Lambda(x)} \end{equation} which appears in the context of the residual-lifetime interpretation. These functional equations may prove useful to establish other connections with critical phenomena and discrete scale invariance.
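These identities, as well as the distributional identity of Lemma~\ref{lem_Bakhtin}, are easy to confirm numerically. A minimal Python sketch (sampling-based, so the last comparison holds only up to Monte-Carlo error):
\begin{verbatim}
import numpy as np

Lam = lambda y: np.exp(-np.exp(-y))   # Gumbel distribution function
x = np.linspace(-3.0, 3.0, 13)

# Max-stability: Lambda(x + log 2)^2 = Lambda(x)
print(np.max(np.abs(Lam(x + np.log(2.0))**2 - Lam(x))))

# Residual-lifetime equation: Lambda(exp(-x)) = exp(-Lambda(x))
print(np.max(np.abs(Lam(np.exp(-x)) - np.exp(-Lam(x)))))

# Lemma: Z/2 + Theta is equal in law to Z + log(2)/2
rng = np.random.default_rng(1)
n = 10**6
Z = -np.log(-np.log(rng.uniform(size=n)))          # standard Gumbel
Theta = -np.log(np.abs(rng.standard_normal(n)))    # -log|N|
q = np.linspace(5, 95, 19)
print(np.max(np.abs(np.percentile(Z/2 + Theta, q)
                    - np.percentile(Z + np.log(2.0)/2, q))))
\end{verbatim}
The first two deviations vanish up to floating-point accuracy, while the quantile comparison in the last line is small and shrinks as the sample size grows.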
\section{Introduction} Maritime history is the study of human activity at sea. It covers a broad thematic element of history, focusing on understanding humankind's various relationships to the oceans, seas, and major waterways of the globe \cite{hattendorf2012maritime}. A large area of research in this field requires the collection and integration of data coming from multiple and diverse historical sources, in order to perform qualitative and quantitative analysis of empirical facts and draw conclusions on possible impact factors \cite{fafalios2021FastCat,petrakis2021}. Consider, for instance, the real use case of the SeaLiT project (ERC Starting Grant in the field of maritime history)\footnote{\url{https://sealitproject.eu/}}, which studies the transition from sail to steam navigation and its effects on seafaring populations in the Mediterranean and the Black Sea between the 1850s and the 1920s \cite{delis2020seafaring}. Historians in this project have collected a large number of archival documents of different types and languages, including crew lists, payrolls, sailor registers, naval ship register lists, and employment records, gathered from multiple authorities in different countries (more about this project in Sect.~\ref{subsec:sealit}). Complementary information about the same entity of interest, such as a ship, a port, or a captain, may exist in different archival documents. For example, for the same ship, one source may provide information about its owners, another source may provide construction details and characteristics of the ship (length, width, tonnage, horsepower, etc.), while other sources may provide information about the ship's voyages and crew. Information integration is crucial in this context for performing valid data analysis and drawing safe conclusions, such as finding answers to questions that require combining and aggregating information, like \textit{\q{finding the number of sailors per residence location who arrived at a specific port and who were crew members in ships of a specific type, e.g. Brig}}. Moreover, information integration under a common data model can produce data of high value and long-term validity that can be reused beyond a particular research activity or project, as well as integrated with other datasets by the wider (historical science) community. To this end, this paper describes the construction and use of the \textit{SeaLiT Ontology}. The ontology aims at facilitating a shared understanding of maritime history information by providing a common and extensible semantic framework for information modeling and integration. It uses and extends the CIDOC Conceptual Reference Model (CRM) (ISO 21127:2014)\footnote{\url{https://cidoc-crm.org/}} as a formal ontology of human activity, things and events happening in space and time \cite{doerr2003cidoc}. The ontology was designed considering requirements and knowledge of domain experts (a large group of maritime historians), expressed through research needs, inference processes they follow, and exceptions they make. It was developed in a bottom-up manner by analysing large and heterogeneous amounts of primary data, in particular archival documents of different types and languages gathered from authorities in several countries, including crew lists, payrolls, civil registers, sailor registers, naval ship registers, employment records, censuses, and others.
All modeling decisions were validated by the domain experts and, in practice, by transforming their data (transcripts) into a rich semantic network based on the SeaLiT Ontology, which enables them (through a user-friendly interface) to find answers to information needs that require combining information from different sources. We describe the methodology and the steps we followed for designing the ontology, and provide its specification, RDFS and OWL implementations, as well as knowledge graphs that make use of the ontology for integrating data transcribed from a large and diverse set of archival documents. We also describe a data exploration application that operates over these knowledge graphs and which currently supports maritime historians in exploring and analysing the integrated data. Table \ref{tab:links} provides the key access links to the SeaLiT Ontology as well as related resources and information. \begin{table}[h] \begin{center} \caption{Key access links and information of the SeaLiT Ontology.} \label{tab:links} \vspace{-2mm} \begin{tabular}{ll} \toprule SeaLiT Ontology Specification & \url{https://zenodo.org/record/6797750} \\ DOI of the SeaLiT Ontology & 10.5281/zenodo.6797750\\ Namespace of the SeaLiT Ontology & \url{http://www.sealitproject.eu/ontology/} \\ SeaLiT Ontology RDFS (Turtle) & \url{https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1_RDFS.ttl} \\ SeaLiT Ontology RDFS (RDF/XML) & \url{https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1_RDFS.rdf} \\ SeaLiT Ontology OWL (RDF/XML) & \url{https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1.owl} \\ \midrule SeaLiT Knowledge Graphs (KGs) & \url{https://zenodo.org/record/6460841} \\ DOI of SeaLiT KGs & 10.5281/zenodo.6460841 \\ ResearchSpace application over the KGs & \url{http://rs.sealitproject.eu/} \\ \midrule License of SeaLiT Ontology \& KGs & Creative Commons Attribution 4.0 \\ \bottomrule \end{tabular} \end{center} \end{table} The rest of this paper is organised as follows: Section~\ref{sec:background} describes the context of this work, provides the required background, and discusses related work. Section~\ref{sec:methodology} details the methodology and principles we have followed for building the ontology. Section~\ref{sec:ontology} presents the ontology, describes an example of how a part of the model was revised several times to incorporate new historical knowledge, and provides its specification as well as an RDFS and an OWL implementation. Section~\ref{sec:application} describes the application of the ontology in a real context. Section~\ref{sec:usage} discusses its usage and sustainability. Finally, Section~\ref{sec:conclusion} concludes the paper and outlines future work. \section{Context, Background and Related Work} \label{sec:background} \subsection{The SeaLiT Project} \label{subsec:sealit} The ontology has been developed in the context of the SeaLiT project\footnote{\url{https://sealitproject.eu/}}, a European project in the field of maritime history (ERC Starting Grant, No 714437). The project studies the transition from sail to steam navigation and its effects on seafaring populations in the Mediterranean and the Black Sea between the 1850s and the 1920s. Historians in SeaLiT investigate the maritime labour market, the evolving relations among ship-owners, captain, crew, and local societies, and the development of new business strategies, trade routes, and navigation patterns, during the transitional period from sail to steam.
The main concepts on which the scientific research focuses are the ships (including various information such as type, usage, dimensions, technology), the people related to the ships (sailors, ship owners, students, relatives) and the historical events/activities related to these (such as voyages, recruitments, payments). The archival sources considered and studied in SeaLiT range from handwritten ship logbooks, crew lists, payrolls and employment records, to registers of different types such as civil, sailors, students and naval ship registers. These archival sources have been gathered from different authorities in countries of the Mediterranean and the Black Sea, and are written in different languages, including Spanish, Italian, French, Russian, and Greek. The full archival corpus studied in SeaLiT is described in the project's web site.\footnote{\url{https://sealitproject.eu/archival-corpus}} \subsection{The ISO standard CIDOC-CRM} The SeaLiT Ontology uses and extends the CIDOC-CRM (Conceptual Reference Model)\footnote{\url{http://www.cidoc-crm.org/}}, in particular its stable version 7.1.1, which means that each class of the SeaLiT Ontology is a direct subclass or a descendant of a CIDOC-CRM class. CIDOC-CRM is a high-level, event-centric ontology of human activity, things and events happening in spacetime, providing definitions and a formal structure for describing the implicit and explicit concepts and relationships used in cultural heritage documentation \cite{doerr2003cidoc}. It is the international standard (ISO 21127:2014)\footnote{\url{https://www.iso.org/standard/57832.html}} for the controlled exchange of cultural heritage information, intended to be used as a common language for domain experts and implementers to formulate requirements for information systems, providing a way to integrate cultural heritage information from different sources. The considered stable release of CIDOC-CRM (version 7.1.1) consists of 81 classes and 160 unique properties. The highest-level distinction in CIDOC-CRM is represented by the top-level concepts of {\tt E77 Persistent Item} (equivalent to the philosophical notion of endurant), {\tt E2 Temporal Entity} (equivalent to the philosophical notion of perdurant) and, further, the concept of {\tt E92 Spacetime Volume} which describes the entities whose substance has or is an identifiable, confined geometrical extent in the material world that may vary over time. Fig.~\ref{fig:crm1} depicts how the high level classes of CIDOC-CRM are connected. \begin{figure}[h] \centering \fbox{\includegraphics[width=0.8\textwidth]{figures/crm_mainClasses.png}} \vspace{-2mm} \caption{High level properties and classes of CIDOC-CRM.} \label{fig:crm1} \end{figure} \subsection{Related Work} Over the last years, methods and technologies of the Semantic Web have started playing a significant and ever-increasing role in historical research. The survey in \cite{merono2015semantic} reviews the state of the art in the application of semantic technologies to historical research, in particular works related to i) knowledge modeling (ontologies, data linking), ii) text processing and mining, iii) search and retrieval, and iv) semantic interoperability (data integration, classification systems).
As regards ontologies for the modeling of \textit{maritime history} information, the most relevant work is an ongoing project on the ontology management environment OntoME~\cite{beretta2021challenge} that aims to provide a data model for the field of maritime/nautical history.\footnote{\url{https://ontome.net/namespace/66}} The project is a cooperation between the Huygens Institute for the History of the Netherlands, LARHRA and the Data for History consortium. The current (draft) model consists of 13 classes and 12 properties, while it makes use of CIDOC-CRM as well as extensions of CIDOC-CRM. The ontology is unfinished and not for use yet (as of December 15, 2022). \textit{Conflict}\footnote{\url{http://ontologies.michelepasin.org/docs/conflict/index.html}} is an ontology developed in the context of the SAILS project (2010-2013)\footnote{\url{http://sailsproject.cerch.kcl.ac.uk/}} that models concepts useful for describing the First World War. The provided ontology version (0.1) is actually a \textit{taxonomy} consisting of 175 classes, some of which allow modeling information related to maritime history, like the classes {\tt Ship}, {\tt Ship\_journey}, {\tt Ship\_type}, and {\tt Ownership}. Similarly, there are ontologies that could be used for modeling other \textit{parts} of the model, such as \textit{GoodRelations}~\cite{hepp2008goodrelations}, a lightweight ontology for exchanging e-commerce information, for the part that concerns payments for products. We chose to use CIDOC-CRM because it is the standard ontology for cultural heritage documentation, extensively used in the fields of cultural heritage, history and archaeology. It is directly related to the domain of discourse of history, as a discipline that studies the life of humans and societies in the past. This scope, studied from the point of view of maritime historical research, can be represented by the abstraction of reality offered by CIDOC-CRM. As an example, we can directly take advantage of the (direct or inherited) properties of the CIDOC-CRM class {\tt E7 Activity}, such as \textit{\sq{P14 carried out by}}, \textit{\sq{P4 has time-span}}, \textit{\sq{P7 took place at}}, etc., and use them for describing instances of classes of the SeaLiT Ontology that are subclasses of {\tt E7 Activity} (e.g. {\tt Voyage}, {\tt Arrival}, {\tt Recruitment}, etc.). Therefore, using CIDOC-CRM facilitates data integration with relevant (existing or future) datasets that also make use of CIDOC-CRM, and it also enables data sustainability because CIDOC-CRM is a living standard and has a very active community that constantly works on it and improves it. Finally, there is a plethora of ontologies which have been developed as extensions of CIDOC-CRM, e.g. CRMas~\cite{niccolucci2017documenting} for documenting archaeological science, CRMgeo~\cite{hiebel2017crmgeo} for geospatial information, CRMdig~\cite{theodoridou2010modeling} for provenance of digital objects, IAM~\cite{doerr2011factual} for factual argumentation, and others. \section{Design Methodology and Principles} \label{sec:methodology} \subsection{Overall Methodology} The ontology has been created gradually, following a bottom-up strategy \cite{gandon2002distributed}, working with real empirical data and information needs, in particular digitised historical records (transcripts) and corresponding data structures in various forms, as well as research questions provided by a large group of historians.
The archival material together with the research questions defines the modeling requirements. The main characteristics of our strategy are summarised as follows: \begin{itemize} \item Study and analysis of a large and diverse set of archival sources related to maritime history. This material provides historical information about ships, persons (such as sailors, captains, ship owners, students), and relevant activities and events (such as voyages, recruitments, payments, teaching activities). \item Gathering of research questions and corresponding information needs (\textit{competency questions}) for which the considered archival sources can provide answers or important relevant information. \item Lengthy discussions with a large group of maritime historians from different institutions and countries (Spain, Italy, France, Croatia, Greece), for consultation as well as for understanding the inference processes they follow and the exceptions they make. \end{itemize} \begin{table} \begin{center} \caption{Considered archival sources and type of recorded information.} \label{tab:archSources} \scriptsize \begin{tabular}{p{3.9cm}|p{10cm}} \toprule Archival source & Overview of recorded information and example transcript\\ \midrule Crew and displacement list (Roll) & ships (name, type, construction location, construction year, registry location, owners), ports of provenance, arrival ports, destination ports, crew members (name, father's name, birth place, residence location, profession, age), embarkation ports, discharge ports. \textbf{[example transcript: \url{https://tinyurl.com/4ukzezfe}]} \\ \midrule Crew List (Ruoli di Equipaggio) & ships (name, type, construction location, construction year, registry number, registry port, owners), voyages (date from/to, duration, total crew number), destinations, departure ports, arrival ports, crew members (name, residence location, birth year, serial number, profession), embarkation ports, discharge ports. \textbf{[example transcript: \url{https://tinyurl.com/2u35frya}]} \\ \midrule General Spanish Crew List & ships (name, type, tonnage, registry port), ship owners, crew members (name, age, residence location), voyages (date from/to, total crew number), embarkation ports, destinations. \textbf{[example transcript: \url{https://tinyurl.com/3axs6ret}]} \\ \midrule Sailors Register (Libro de registro de marineros) & seafarers (name, father's name, mother's name, birth date, birth place, profession, military service organisation locations) \textbf{[example transcript: \url{https://tinyurl.com/2p8kzm6n}]} \\ \midrule Register of Maritime Personnel & persons (name, father's name, mother's name, birth place, birth date, residence location, marital status, previous profession, military service organisation location). \textbf{[example transcript: \url{https://tinyurl.com/4v6hnwjj}]} \\ \midrule Seagoing Personnel & persons (name, father's name, marital status, birth date, profession, end of service reason, work status type), ships (name), destinations. \textbf{[example transcript: \url{https://tinyurl.com/2x5cu37n}]} \\ \midrule Naval Ship Register List & ships (name, type, tonnage, length, construction location, registration location, owner). \textbf{[example transcript: \url{https://tinyurl.com/bdhx87tr}]} \\ \midrule List of Ships & ships (name, previous name, type, registry port, registry year, construction place, construction year, tonnage, engine construction place, engine manufacturer, nominal power, indicated power, owners).
\textbf{[example transcript: \url{https://tinyurl.com/2cphfpef}]} \\ \midrule Civil Register & persons (name, profession, origin location, age, sex, marital status, death location, death reason, related persons). \textbf{[example transcript: \url{https://tinyurl.com/bdzeja8n}]} \\ \midrule Maritime Register, La Ciotat & persons (name, birth date, birth place, residence location, profession, service sector), embarkation locations, disembarkation locations, ships (name, type, navigation type), captains, patrons. \textbf{[example transcript: \url{https://tinyurl.com/fkhyyp4a}]} \\ \midrule Students Register & students (origin location, profession, employment company, religion, related persons), courses (title, subject, date from/to, semester, total number of students). \textbf{[example transcript: \url{https://tinyurl.com/mryp6cbb}]} \\ \midrule Census La Ciotat & occupants (name, age, birth year, birth place, nationality, marital status, religion, profession, working organisation, household role, address). \textbf{[example transcript: \url{https://tinyurl.com/4dzfcbtt}]} \\ \midrule Census of the Russian Empire & occupants (name, patronymic, sex, age, marital status, estate, religion, native language, household role, occupation, address). \textbf{[example transcript: \url{https://tinyurl.com/43xczvux}]} \\ \midrule Payroll (of Greek Ships) & ships (name, type, owners), captains, voyages (date from/to, total days, days at sea, days at port, overall total wages, overall pension fund, overall net wage), persons (name, adult/child, literacy, origin location, profession/rank), employments (recruitment date, discharge date, recruitment location, monthly wage, total wage, pension fund, net wage). \textbf{[example transcript: \url{https://tinyurl.com/ztjk4jw7}]} \\ \midrule Payroll (of Russian Steam Navigation and Trading Company) & ships (name, owners), persons (name, patronymic, adult/child, sex, birth date, estate, registration place), recruitments (port, type of document, rank/specialisation, salary per month). \textbf{[example transcript: \url{https://tinyurl.com/y5urjhc9}]} \\ \midrule Employment records (Shipyards of Messageries Maritimes, La Ciotat) & workers (name, sex, birth year, birth place, residence location, marital status, profession, status of service in company, workshop manager). \textbf{[example transcript: \url{https://tinyurl.com/yc3havkc}]} \\ \midrule Logbook & ships (name, type, telegraphic code, tonnage, registry port, owners), captains, departure ports, destination ports, route movements, calendar event types. \textbf{[example transcript: \url{https://tinyurl.com/mrx2re9k}]} \\ \midrule Accounts Book & ships (name, type, owners), voyages, captains, departure ports, destination ports, ports of call, transactions (type, recording location, supplier, mediator, receiver). \textbf{[example transcript: \url{https://tinyurl.com/4uf3bye8}]} \\ \bottomrule \end{tabular} \end{center} \end{table} In more detail, our approach focused on studying and analysing the historical sources from the historians' perspective, following their respective research questions and practices of documentation. In order to achieve that, we had to consult all the data providers (coming from different research teams and countries) for a long period and to do extensive research on their practices and the historical data for the development and the validation of the model.
As a result, the model was designed from actual data values, from existing (and used) structured information sources (such as spreadsheets) and historical records (transcripts) that include the original information. The model's concepts were refined several times during the span of the project in order to accommodate new information coming from new kinds of sources. Table~\ref{tab:archSources} provides the considered archival sources as well as an overview of the recorded information and an example record (transcript) for each source.\footnote{A web application that allows exploring the data in the transcripts of these archival sources is available at: \url{https://catalogues.sealitproject.eu/}} As regards the research questions and information needs provided by the historians, most of them concern aggregated information, such as \textit{number of sailors per origin location who arrived at a specific port}, \textit{average tonnage of ships}, \textit{wage level per country}, \textit{percentages of immigration in relation to the sailors' profession}, etc. Other information needs concern the retrieval of a specific list of entities (e.g. \textit{ship construction places during a specific time period}), comparative information (e.g. \textit{time of sailors' service in relation to the time on land}, \textit{number of women/men in ships}, etc.), or the retrieval of a specific value (e.g. \textit{total number of officers employed by the company in a specific year or span of years}).\footnote{The full list of information needs is available at \url{https://users.ics.forth.gr/~fafalios/SeaLiT_Competency_Questions_InfoNeeds.pdf}} For creating the ontology, we followed a custom engineering methodology~\cite{kotis2020ontology} which nevertheless maintains most of the features supported by existing methodologies, such as HCOME~\cite{kotis2006human} and DILIGENT~\cite{pinto2009ontology}. In particular: \begin{itemize} \item Data-driven / bottom-up processing (our strategy for the development of the ontology) \item Involvement of domain experts (maritime historians in our case) \item Iterative processing (gradual, highly-iterative ontology development) \item Collaborative engineering processing (within a small team of conceptual modeling experts) \item Validation and exploitation (validation by domain experts and application in a real context) \item Detailed versioning (multiple intermediate versions, currently in stable version 1.1) \end{itemize} \subsection{Design Steps and Principles} The basis for the model was CIDOC-CRM since it is a standard suitable for recording historical information relating who, when, where, and what. From an ontological point of view, we followed the steps below: \begin{enumerate} \item We have extended CIDOC-CRM by creating new classes as subclasses of CIDOC-CRM classes and defining properties accordingly (with some of them being subproperties of CIDOC-CRM properties). After extending or revising the model for a given type of archival source and corresponding information needs, we created mappings for transforming the data from the source schema to a semantic network (RDF triples) based on the designed (target) model. This conceptual alignment was an important step in the ontology development process, contributing to the redesign of concepts and the finalisation of the model.
\item We distinguished the entities included in the existing schemata into those that directly or indirectly imply an \textit{event} and those that imply \textit{objects}, mobile or immobile, and classified them in abstraction levels according to whether they represent individuals or sets of individuals. We realised that most binary relationships acquire substance as temporal entities (e.g. \textit{has met}, \textit{has created}, etc.). This principle helped us to detect hidden events in the data structures. \item We classified the existing relations between the entities according to the abstraction level to which their domain and range entities belong, and created class and property hierarchies accordingly. We did not define the same property twice for different classes, but found the most general (super)class that the property applies to. The discovery of repeating properties for different classes suggested that they rely on a common, more general concept, which underlies the ability to have such a relation in the first place. Finding the single most general concept to describe this common generalization allowed the creation of a general class to which the properties can be applied and from which these relations can be inherited by assigning the originally modelled classes as subclasses of the newly created generalization (as in the case of the classes {\em Money for Service} and {\em Legal Object Relationship}). \item We found classes for the relevant properties, and not properties for relevant classes (e.g. \textit{Voyage} for the property \sq{voyages}, \textit{Ship Construction} for \sq{constructed}, etc.). We detected the general classes of which each property is characteristic. In other words, we found the one most specific class that generalizes over all classes for which the property applies as domain or range. \item We defined concepts by finding their identity criteria, by distinguishing what is and what is not an instance of these concepts. We identified classes that exist independently of the property, and not \q{anything that has this property} (e.g. the case of the \textit{Service} concept). \item The classes and relationships developed make it possible to answer queries of a \textit{global} nature. By global queries we mean those that users would address to more than one database (source) at the same time in order to get a comprehensive answer, in particular including joins across databases. It should also be emphasised that the goal was not to model \sq{everything} but rather to model the necessary and well-understood concepts for this specific domain. \end{enumerate} The ontology was built following these principles. Its design and development was an iterative process with several repetitions of the steps described above. \section{The SeaLiT Ontology} \label{sec:ontology} We first provide an overview of the ontology (Sect.~\ref{subsec:ontOverview}), then we describe an ontology evolution example (Sect.~\ref{subsec:evolution}), and finally we present the specification of the ontology as well as RDFS and OWL implementations (Sect.~\ref{subsec:specAndRdfs}). \subsection{Ontology Overview} \label{subsec:ontOverview} The ontology currently (version 1.1) contains 46 classes, 79 properties and 4 properties of properties, allowing the description of information about \textit{ships}, \textit{ship voyages}, \textit{seafaring people}, \textit{employments} and \textit{payments}, \textit{teaching activities}, as well as a plethora of other related activities and characteristics.
Appendices \ref{appendix:A} and \ref{appendix:B} provide the full class and property hierarchy, respectively. Fig.~\ref{fig:model_ship} shows how information about a \textit{ship} is modelled.\footnote{The classes whose name starts with the letter 'E' followed by a number are CIDOC-CRM classes (these are in green boxes in the figures). All others are classes of the SeaLiT Ontology (in blue boxes). Accordingly, all properties whose name starts with the letter 'P' followed by a number are properties of CIDOC-CRM, while all others are properties of the SeaLiT Ontology.} A {\tt Ship} (subclass of {\tt E22 Human-Made Object}) is the result of a {\tt Ship Construction} activity (subclass of {\tt E12 Production}) which gave the {\tt Ship Name} (subclass of {\tt E41 Appellation}) to the ship. A ship also has some characteristics, like {\tt Horsepower} and {\tt Tonnage} (subclasses of {\tt E54 Dimension}; this allows providing, apart from the value, the corresponding measurement unit, a note, etc.), and is registered through a {\tt Ship Registration} (subclass of {\tt E7 Activity}) by a {\tt Port of Registry} (subclass of {\tt E74 Group}), with a ship flag of a particular {\tt Country} (subclass of {\tt E53 Place}) and with a particular {\tt Ship ID} (subclass of {\tt E42 Identifier}). Modeling the ship ID as a class allows including additional information about the identifier, such as which authority provided the identifier, when, etc. (by connecting it to the CIDOC-CRM class {\tt E15 Identifier Assignment}). Finally, a ship has one or more {\tt Ship Ownership Phase}s (subclass of {\tt Legal Object Relationship}), each one initialized by a {\tt Ship Registration} and terminated by a {\tt De-flagging} activity. Note here that all classes related to activities (like {\tt Ship Construction}, {\tt Ship Repair}, {\tt De-flagging}, etc.) can make use of the CIDOC-CRM property {\em \sq{P4 has time-span}} for providing temporal information. \begin{figure} \centering \fbox{\includegraphics[width=15cm]{figures/sealitOntology_ship.png}} \vspace{-2mm} \caption{Modelling information about a ship.} \label{fig:model_ship} \end{figure} \begin{figure} \centering \fbox{\includegraphics[width=14cm]{figures/sealitOntology_voyage.png}} \vspace{-2mm} \caption{Modelling information about a ship voyage.} \label{fig:model_voyage} \end{figure} Fig.~\ref{fig:model_voyage} shows how information about a \textit{ship voyage} is modelled in the ontology. First, a {\tt Voyage} (subclass of {\tt E7 Activity}) concerns a particular {\tt Ship}, navigated by one or more captains ({\tt E39 Actor}), and has a \textit{starting from} place, a \textit{destination} place, and a \textit{finally arriving at} place ({\tt E53 Place}). Then, the main activities during a ship voyage include {\tt Loading} things, {\tt Leaving} from a place, {\tt Passing} by or through a place, {\tt Arrival} at a place, and {\tt Unloading} things. All these activities are linked to an {\tt E52 Time-Span} through the CIDOC-CRM property {\em \sq{P4 has time-span}}. Fig.~\ref{fig:model_payments} shows how the ontology allows describing information about \textit{employments and payments}. {\tt Money for Service} (subclass of {\tt E7 Activity}) is given to an {\tt E39 Actor} for a particular {\tt Service} (subclass of {\tt E7 Activity}).\footnote{We use the term \sq{money} instead of \sq{payment}, because we want to indicate that there was a money transaction, e.g. using lira, franc, etc. (in older times, a payment could be conducted without the use of money, e.g.
using things).} The class {\tt Money for Service} has two specialisations (subclasses): {\tt Money for Things} and {\tt Money for Labour}, while the class {\tt Employment} is a specialisation of the class {\tt Service}. A {\tt Crew Payment} concerns a particular {\tt Voyage} and is a specialisation of {\tt Money for Labour}. In this context, a {\tt Labour Contract} (subclass of {\tt E29 Design or Procedure}) specifies the conditions of {\tt Money for Labour}. An {\tt Employment} starts with a {\tt Recruitment} (subclass of {\tt E7 Activity}) and ends with a {\tt Discharge} (subclass of {\tt E7 Activity}). \begin{figure} \centering \fbox{\includegraphics[width=15cm]{figures/sealitOntology_payments.png}} \vspace{-2mm} \caption{Modelling information about employments and payments.} \label{fig:model_payments} \end{figure} Fig.~\ref{fig:model_persons} shows how information about \textit{persons} (seagoing people, such as captains, crew members, students, etc.) is modelled in the ontology. A person ({\tt E21 Person}) is registered through a {\tt Civil Registration} activity and receives an identifier ({\tt E42 Identifier}). A person has a first name and last name ({\tt E62 String}), works at an organisation or company ({\tt E74 Group}), has an age ({\tt E60 Number}) at a specific time (the time of the information recording), as well as a set of other properties, in particular a {\tt Religion Status}, a {\tt Literacy Status}, a {\tt Sex Status}, a {\tt Language Capacity}, a {\tt Social Status}, and a {\tt Profession} (all subclasses of {\tt E55 Type}). The use of {\tt E55 Type} as superclass of these properties/qualities (instead of modeling them as temporal entities) is a good solution when the sources (such as a civil register or a census document) do not provide enough temporal information to infer/observe the corresponding event (this is exactly the case with the archival sources of the SeaLiT project). In addition, a {\tt Punishment} (subclass of {\tt E7 Activity}) or {\tt Promotion} (subclass of {\tt E13 Attribute Assignment}) can be given to a person. A {\tt Promotion} is related either to a {\tt Social Status} promotion or to a job/career ({\tt Profession}) promotion. \begin{figure} \centering \fbox{\includegraphics[width=15cm]{figures/sealitOntology_persons.png}} \vspace{-2mm} \caption{Modelling information about persons.} \label{fig:model_persons} \end{figure} Finally, Fig.~\ref{fig:model_teaching} shows how the ontology allows describing information about teaching activities related to seafaring. A {\tt Teaching Unit} is an activity that can be specialised to {\tt Course} or {\tt Section}. It is connected to a {\tt Subject} (subclass of {\tt E55 Type}), the students ({\tt E39 Actor}) who participated in the teaching unit, the number of participating students ({\tt E60 Number}), as well as one or more other teaching units through the CIDOC-CRM property {\em \sq{P9 consists of}}. The latter allows, in particular, describing the information that a course consists of sections. \begin{figure}[t] \centering \fbox{\includegraphics[width=12.5cm]{figures/sealitOntology_teaching.png}} \vspace{-2mm} \caption{Modelling information about teaching activities.} \label{fig:model_teaching} \end{figure} \subsection{Ontology Evolution Example} \label{subsec:evolution} The ontology development process lasted more than two years, including a large number of intermediate versions, before releasing the first \textit{stable} version (1.0). 
In particular, the ontology elements (classes and properties) were revised several times based on (a) new evidence coming from newly-considered archival sources, and (b) new requirements (information needs) by the domain experts (maritime historians). Such new evidence and requirements required either the definition of new elements, such as the creation of a new class or property, or the revision of an existing set of elements that concern a part of the model. Fig.~\ref{fig:evolution} shows how the part of the ontology that concerns \textit{ship ownership} was revised several times during the ontology development process. A first requirement provided by the historians was the ability to find all ships per owner. The analysed archival material (\textit{crew lists}) only provided the name of the owner, where the value was either the name of a person or the name of a company. Based on this evidence, the property {\em \sq{has owner}} was created connecting an instance of {\tt Ship} with an instance of the CIDOC-CRM class {\tt E39 Actor} (v1 in Fig.~\ref{fig:evolution}). Another source (\textit{naval ship register lists}) provided information about ships' previous owners, while a new requirement was the ability to find the number of first owners per ship during a period of time. Based on this, as well as on the fact that the binary relationship \textit{has owner} implies/hides a temporal entity, we defined the class {\tt Ship Ownership Phase}, the property {\em \sq{has phase}} for connecting a ship to a ship ownership phase, the property {\em \sq{in time}} for connecting the ownership phase to an {\tt E52 Time-Span}, while the property {\em \sq{has owner}} was revised for connecting the ship ownership phase with an {\tt E39 Actor} (v2 in Fig.~\ref{fig:evolution}). A ship can have many names during its lifespan, while an owner can own more than one ship with the same name (as shown in \textit{logbooks} and \textit{crew and displacement lists}). According to the historians, ownership usually assigns a name to a ship and a ship changes its name under a new ownership state at a specific time. Based on this historical knowledge, the property {\em \sq{ownership under name}} was created to enable linking the ship ownership phase to a {\tt Ship Name} (v3 in Fig.~\ref{fig:evolution}). Evidence shows that ownership of a ship is a type of information that can be inferred and not directly observed. An ownership phase can be traced by the \textit{ship registration} activity that initiates it and by the \textit{de-flagging} activity that terminates it. The documentation of a ship registration in \textit{Austrian Lloyd's fleet lists}, in particular, includes information about the ship's construction place and date, which together with the name given to the ship after construction constitute safe criteria to identify a ship.
Based on this, the classes {\tt Ship Registration} (subclass of {\tt E7 Activity}), {\tt De-flagging} (subclass of {\tt E7 Activity}) and {\tt Ship Construction} (subclass of {\tt E12 Production}) were defined, together with the properties {\em \sq{registers}} (for linking a registration activity to a ship), {\em \sq{ownership is initialized by}} (for linking an ownership phase to a registration activity), {\em \sq{de-flagging of}} (for linking a de-flagging activity to a ship), {\em \sq{ownership is terminated by}} (for linking an ownership phase to a de-flagging activity), {\em \sq{constructed}} (for linking a construction activity to a ship), and {\em \sq{under name}} (for linking a construction activity to a ship name) (v4 in Fig.~\ref{fig:evolution}). The ownership of a ship is actually a legal agreement in which an owner holds shares. For example, according to Italian sources (\textit{maritime registers}), the ownership of a ship was structured in 24 parts (\q{carati}). Sometimes only one ship owner possessed all 24 parts. However, much more frequently the 24 parts were distributed among several ship owners. Based on this evidence, a new class {\tt Shareholding} was created as a specialisation (subclass) of {\tt Ship Ownership Phase}, together with the property {\em \sq{of share}} for assigning the number of shares to a shareholding phase (v5 in Fig.~\ref{fig:evolution}). In the last ontology version (see Fig.~\ref{fig:model_ship}), {\tt Ship Ownership Phase} is defined as a specialisation (subclass) of the class {\tt Legal Object Relationship}, together with the class {\tt Legal Document with Temporal Validity} which comprises official documents or legal agreements that are valid for a specific time-span. The more general class {\tt Legal Object Relationship} represents kinds of relationships whose state and time-span are not documented and thus cannot be directly observed. We can only observe the relationship through the events that initialise or terminate the state (starting and terminating events).
\begin{figure}[t] \centering \fbox{\includegraphics[width=16.0cm]{figures/ModelPartEvolution.jpg}} \vspace{-5mm} \caption{Ontology evolution example for modeling ship ownership information.} \label{fig:evolution} \end{figure}
\subsection{Specification, RDFS and OWL Implementation} \label{subsec:specAndRdfs} The specification of the ontology and its RDFS implementation are available through the Zenodo repository (DOI: {\tt 10.5281/zenodo.6797750})\footnote{\url{https://zenodo.org/record/6797750}}, under a Creative Commons Attribution 4.0 license. The (resolvable) namespace of the ontology pointing to the RDFS implementation is: \url{http://www.sealitproject.eu/ontology/}. The specification document defines the ontology classes and properties. For each class, it provides: i)~its superclasses, ii)~its subclasses (if any), iii)~a scope note (a textual description of the class's intension), iv)~one or more examples of instances of this class, and v)~its properties (if any), each one represented by its name and the range class that it links to. For each property, the specification provides: i)~its domain, ii)~its range, iii)~its superproperties (if any), iv)~its subproperties (if any), v)~a scope note, vi)~one or more examples of instances of this property, and vii)~its properties (if any). If a property has an inverse property, this is provided in parentheses next to the property name.
Scope notes are not formal modelling constructs, but are provided to help explain the intended meaning and application of a class or property. They refer to a conceptualisation common to domain experts (maritime historians) and disambiguate between different possible interpretations. The RDFS implementation provides the scope note of each class or property using {\em \sq{rdfs:comment}}. For producing the class and property URIs, the space character in the name of a class or property is replaced by the underscore character. Inverse properties are provided using {\em \sq{owl:inverseOf}}. The version of the ontology is provided through the property {\em \sq{owl:versionInfo}} and its license through the Dublin Core term {\em \sq{dc:license}}. For the properties pointing to classes that are represented as literals in RDF (seven properties in total, pointing to the CIDOC-CRM classes {\tt E60~Number} or {\tt E62~String}), we define their range as {\tt rdfs:Literal}. We also provide an OWL implementation of the ontology, containing 71 object properties, 7 datatype properties and 1 symmetric property (the property \textit{\sq{related to}}).\footnote{\url{https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1.owl}} Since RDF does not provide a direct way to express properties of properties, we make use of \textit{property classes} (as suggested and implemented by CIDOC-CRM) as a reification method for encoding the four properties of properties defined in the SeaLiT Ontology. Using this method, a class is created for each property having a property. This property class can then be instantiated and used together with the properties {\em \sq{P01~has~domain}} and {\em \sq{P02~has~range}} provided by the RDFS implementation of CIDOC-CRM.\footnote{\url{https://cidoc-crm.org/rdfs/7.1.1/CIDOC_CRM_v7.1.1_PC.rdf}} For example, Fig.~\ref{fig:propOfProp} depicts how the property {\em \sq{in the role of}} of the property {\em \sq{works~at}} is implemented using the idea of property classes. First, the property class {\tt PC~works~at} is provided for representing the property {\em \sq{works~at}}. During data generation/instantiation, an instance of this property class is created pointing to the domain (an instance of {\tt E21~Person}) and the range (an instance of {\tt E74~Group}) of the original property {\em \sq{works~at}} using the properties {\em \sq{P01~has~domain}} and {\em \sq{P02~has~range}}, respectively. Then, we can provide the property of property {\em \sq{in the role of}} by directly linking it to the property class instance.
\begin{figure}[h] \centering \fbox{\includegraphics[width=12.5cm]{figures/propOfProp.png}} \vspace{-2mm} \caption{Representing a property of property in RDF using a property class.} \label{fig:propOfProp} \end{figure}
\section{Application} \label{sec:application} \subsection{SeaLiT Knowledge Graphs} The SeaLiT Ontology has been used in the context of the SeaLiT project (cf.~Section~\ref{subsec:sealit}) for transforming the data transcribed from a set of disparate, localised information sources of maritime history to a rich and coherent semantic network of integrated data (a \textit{knowledge graph}). The objective of this transformation is the ability to answer complex questions over the integrated data, like those provided by the historians, which require combining information from more than one source.
In particular, the original archival documents are collaboratively transcribed and documented by historians in tabular form (similar to spreadsheets) using the FAST CAT system~\cite{fafalios2021FastCat}. In FAST CAT, data from different sources are transcribed as \textit{records} belonging to specific \textit{templates}. A \textit{record} organises the data and metadata of an archival document in a set of tables, while a \textit{template} represents the structure of a single data source, i.e. it defines the data entry tables. Currently, more than 600 records have already been created and filled in FAST CAT by historians of SeaLiT. An example of a record for each different type of source (template) is provided in Table~\ref{tab:archSources}. For transforming the transcribed data to RDF based on the SeaLiT Ontology, schema mappings are created for each distinct FAST CAT template. These mappings define how the data elements of the FAST CAT records (e.g. the columns of a table) are mapped to ontology classes and properties. To create the schema mappings and run the transformations, we make use of the X3ML mapping definition language and framework~\cite{marketakis2017x3ml}. The transformed data (RDF triples) are then ingested into a semantic repository (RDF triplestore) which can be accessed by external applications and services using the SPARQL language and protocol. The ResearchSpace application (described below) operates over such a repository for supporting historians in searching and quantitatively analysing the integrated data. The reader can refer to \cite{fafalios2021FastCat} for more information about the FAST CAT system and the data transcription, curation and transformation processes. The generated knowledge graphs are available through the Zenodo repository (DOI: 10.5281/zenodo.6460841)\footnote{\url{https://zenodo.org/record/6460841}}, under a Creative Commons Attribution 4.0 license. This dataset currently consists of more than 18.5M triples, providing integrated information for about 3,170 ships, 92,240 persons, 935 legal bodies, and 5,530 locations. These numbers might change in a future version since data curation, including instance matching, is still ongoing and new archival documents are still being transcribed in FAST CAT. \subsection{ResearchSpace Application} For supporting historians in exploring the SeaLiT Knowledge Graphs (and thus the integrated data), we make use of ResearchSpace~\cite{oldman2018reshaping}, an open-source platform that offers a variety of functionalities, including a \textit{query building} interface that supports users in gradually building complex queries through an intuitive (user-friendly) interface. The results can then be browsed, filtered, or analysed quantitatively through different visualisations, such as bar charts. The application is accessible at: \url{http://rs.sealitproject.eu/}. The query building interface of ResearchSpace has been configured for the case of the SeaLiT Knowledge Graphs. In particular, the following search categories have been defined: \textit{Ship, Person, Legal Body, Crew Payment, Place, Voyage, Course, Record, Source}. By selecting a category (e.g. \textit{Ship}) the user is shown a list with its connected categories. By selecting a connected category (e.g. \textit{Place}) the user can then select a property connecting them (e.g. \textit{arrived at}) as well as an instance/value (e.g. \textit{Marseille}; thus the user is searching for ships that arrived at Marseille).
Such a property actually corresponds to a path in the knowledge graph that connects instances of the selected categories.
\begin{figure} \centering \fbox{\includegraphics[width=15.5cm]{figures/rs.png}} \vspace{-1mm} \caption{Query building and visualisation of results in the ResearchSpace application.} \label{fig:rs} \end{figure}
Fig.~\ref{fig:rs} shows a screenshot of the system. In this example, the user has searched for \textit{persons that were crew members at ships that arrived at Marseille},\footnote{ResearchSpace link to the query: \url{https://tinyurl.com/2p8ky96e}} and has selected to group the persons by their \textit{residence location} and visualise the result in a bar chart. From the bar chart we see that the majority of persons had \textit{Camogli} as their residence location. This query corresponds to a real information need provided by the historians of SeaLiT. For retrieving the results and creating the chart, ResearchSpace internally translates the user interactions to SPARQL queries that are executed over the SeaLiT Knowledge Graphs. For instance, the SPARQL query below retrieves the persons that were crew members at ships that had \textit{Marseille} as their final destination:
\small \begin{Verbatim}[frame=lines,numbers=left,numbersep=1pt]
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX sealit: <http://www.sealitproject.eu/ontology/>
SELECT DISTINCT ?person WHERE {
  ?ship sealit:voyages ?voyage .
  ?voyage sealit:finally_arriving_at
      <https://rs.sealitproject.eu/kb/location/Marseille> ;
    crm:P14_carried_out_by ?person
}
\end{Verbatim} \normalsize
\noindent For grouping the persons by their residence location and showing a chart, the following SPARQL query is executed to retrieve the relevant data:
\small \begin{Verbatim}[frame=lines,numbers=left,numbersep=1pt]
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX sealit: <http://www.sealitproject.eu/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?location ?locationName
                (COUNT(?person) AS ?numOfPersons)
WHERE {
  ?ship sealit:voyages ?voyage .
  ?voyage sealit:finally_arriving_at
      <https://rs.sealitproject.eu/kb/location/Marseille> ;
    crm:P14_carried_out_by ?person .
  ?person crm:P74_has_current_or_former_residence ?location .
  ?location rdfs:label ?locationName .
}
GROUP BY ?location ?locationName
ORDER BY ?locationName
\end{Verbatim} \normalsize
Such queries can also utilise the RDFS inference rules, e.g. those based on the \textit{subClassOf} and \textit{subPropertyOf} relations. An example is the use of the CIDOC-CRM property \textit{\sq{P9 consists of}} for getting all voyage-related activities of a particular ship (leaving from a place, arrival at a place, passing by or through a place, loading things, unloading things), as shown in the following SPARQL query:
\small \begin{Verbatim}[frame=lines,numbers=left,numbersep=1pt]
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX sealit: <http://www.sealitproject.eu/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?activity ?activityName WHERE {
  <SHIP-URI> sealit:voyages ?voyage .
  ?voyage crm:P9_consists_of ?activity .
  ?activity rdfs:label ?activityName
}
\end{Verbatim} \normalsize
In this case, we exploit the fact that the property \textit{\sq{P9 consists of}} is a super-property of the properties \textit{\sq{consists of leaving}}, \textit{\sq{consists of arrival}}, \textit{\sq{consists of passing}}, \textit{\sq{consists of loading}}, and \textit{\sq{consists of unloading}}.
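Property classes can be queried in the same way as ordinary properties. The following query is an illustrative sketch rather than a query taken from the deployed application; it assumes URIs derived with the underscore convention of Section~\ref{subsec:specAndRdfs} ({\tt PC\_works\_at} and {\tt in\_the\_role\_of} are our hypothetical renderings of {\tt PC~works~at} and {\em \sq{in the role of}}) and the CIDOC-CRM properties {\em \sq{P01~has~domain}} and {\em \sq{P02~has~range}}. It would retrieve, for each person, the group they work at together with the role held there:
\small \begin{Verbatim}[frame=lines,numbers=left,numbersep=1pt]
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX sealit: <http://www.sealitproject.eu/ontology/>
SELECT DISTINCT ?person ?group ?role WHERE {
  ?pc a sealit:PC_works_at ;      # property class instance
    crm:P01_has_domain ?person ;  # domain of "works at" (E21 Person)
    crm:P02_has_range ?group ;    # range of "works at" (E74 Group)
    sealit:in_the_role_of ?role   # the property of the property
}
\end{Verbatim} \normalsize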
The type of historians' research questions (information needs) that can be answered (either directly or indirectly) using the ResearchSpace platform over the integrated data mainly depends on the actual archival material that is transcribed and transformed to RDF based on the SeaLiT Ontology, and less on the ontology itself. Specifically, the ontology was designed considering community requirements and material evidence. Therefore, if the data needed to answer an information need (or to find important information related to it) exists in the transcripts (and thus in the transformed data), then the question can be answered either fully, or partially through the retrieval of relevant information. For example, in the case of SeaLiT, there are transcripts (FAST CAT records) containing tables that are not fully filled, either because some archival documents do not provide the corresponding information, or just because historians did not fill in the columns during data transcription (planning to do it at a later stage). In this case, information needs that require this missing information cannot be satisfied. In the future, if new types of information (and corresponding information needs) appear that cannot be modelled by the ontology, the ontology will be extended/revised and a new version will be released. With respect to incomplete information, missing entity attributes (e.g. an unknown construction location for a particular ship) are in general very common in historical-archival research, but at the same time important for historians to know about, because they can affect the interpretation of quantitative analysis results. Our configuration of ResearchSpace considers missing information by representing it as an \sq{unknown} value, e.g. by showing an \sq{unknown} column in a bar chart. \section{Usage and Sustainability} \label{sec:usage} As already stated, the ontology has been created and used in the context of the SeaLiT project for transforming data transcribed from archival documents of maritime history to a rich semantic network. The integrated data of the semantic network allows a large group of maritime historians to perform quantitative and qualitative analysis of the transcribed material (through the user-friendly interface provided by the ResearchSpace platform) and find important information relevant to their research needs. A continuation of the relevant activities is expected after the end of the SeaLiT project through the close collaboration of the two involved institutes of the Foundation for Research and Technology - Hellas (FORTH): the Institute of Mediterranean Studies (coordinator of SeaLiT) and the Institute of Computer Science (data engineering partner in SeaLiT). In particular, the ontology will be extended as soon as a new type of archival material needs to be transcribed and integrated into the SeaLiT Knowledge Graphs. The long-term sustainability of the ontology is assured through our participation in relevant communities, in particular the CIDOC-CRM SIG\footnote{\url{https://www.cidoc-crm.org/sig-members}} and the Data for History Consortium\footnote{\url{http://dataforhistory.org/members}}, an international consortium aiming at establishing a common method for modelling, curating and managing data in historical research. There is already interest in using (and possibly extending) the ontology in the context of other (ongoing) projects in the field of historical/archival research.
In addition, the part of the model that concerns employments and payments is being considered for the creation of a new CIDOC-CRM family model about social transactions and bonds (there are relevant discussions on this in the CIDOC-CRM Special Interest Group; see issues 420 and 557\footnote{\url{https://cidoc-crm.org/issue_summary}}). \section{Conclusion} \label{sec:conclusion} We have presented the construction and use of the SeaLiT Ontology, an extension of CIDOC-CRM for the modeling and integration of data in the field of maritime history. The ontology aims at facilitating a shared understanding of maritime history information, by providing a common and extensible semantic framework (a \textit{common language}) for evidence-based information integration. We provide the specification of the ontology, an RDFS and an OWL implementation, as well as knowledge graphs that make use of the ontology for integrating a large and diverse set of archival documents into a rich semantic network. We have also presented a real-world application (a ResearchSpace deployment) that operates on top of the knowledge graphs and supports maritime historians in exploring and analysing the integrated data through a user-friendly interface. In the near future, we plan to a) investigate possible extensions of the ontology based on new data modeling requirements, b) improve the scope notes of classes and properties in the specification document and add more examples (and then provide a new ontology version), c) create and make available a JSON-LD context of the ontology for use in Web-based programming environments. \subsection*{Acknowledgements} This work has received funding from the European Union's Horizon 2020 research and innovation programme under i) the Marie Sklodowska-Curie grant agreement No 890861 (Project ReKnow), and ii) the European Research Council (ERC) grant agreement No 714437 (Project SeaLiT). \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Coalgebra \cite{jacobs,r:universal-coalgebra} is a general framework in which several types of transition systems can be studied (deterministic and non-deterministic automata, weighted automata, transition systems with non-deterministic and probabilistic branching, etc.). One of the strong points of coalgebra is that it induces -- via the notion of coalgebra homomorphism and final coalgebra -- a notion of behavioral equivalence for all these types of systems. The resulting behavioral equivalence is usually some form of bisimilarity. However, \cite{hasuo} has shown that by modifying the category in which the coalgebra lives, one can obtain different notions of behavioral equivalence, such as trace equivalence. We briefly describe the basic idea: given an endofunctor $F$ on $\mathbf{Set}$, the category of sets and total functions, describing the branching type of the system, a coalgebra in the category $\mathbf{Set}$ is a function $\alpha\colon X\to FX$, where $X$ is a set. Consider, for instance, the functor $FX = \mathcal{P}_\mathit{fin}(\ensuremath{\mathcal{A}} \times X+\mathbf{1})$, where $\mathcal{P}_\mathit{fin}$ is the finite powerset functor and $\ensuremath{\mathcal{A}}$ is a given alphabet. This setup allows us to specify finitely branching non-deterministic automata where a state $x\in X$ is mapped to a set of tuples of the form $(a,y)$, for $a\in \ensuremath{\mathcal{A}}, y\in X$, describing transitions. The set contains the symbol $\checkmark$ (for termination) -- the only element contained in the one-element set $\mathbf{1}$ -- if and only if $x$ is a final state. A coalgebra homomorphism maps the set of states of a coalgebra to the set of states of another coalgebra, preserving the branching structure. Furthermore, the final coalgebra -- if it exists -- is the final object in the category of coalgebras. Every coalgebra has a unique homomorphism into the final coalgebra and two states of a transition system modelled as a coalgebra are mapped to the same state in the final coalgebra iff they are behaviorally equivalent. Now, applying this notion to the example above induces bisimilarity, whereas usually the appropriate notion of behavioral equivalence for non-deterministic finite automata is language equivalence. One of the ideas of \cite{hasuo} is to view a coalgebra $X\to\mathcal{P}(\mathcal{A}\times X+\mathbf{1})$ not as an arrow in $\mathbf{Set}$, but as an arrow $X\to \mathcal{A}\times X+\mathbf{1}$ in $\mathbf{Rel}$, the category of sets and relations, which is also the Kleisli category of the powerset monad. This induces trace equivalence, instead of bisimilarity, with the underlying intuition that non-determinism is a side effect that is ``hidden'' within the monad. This side effect is not present in the final coalgebra (which consists of the set $\mathcal{A}^*$ with a suitable coalgebra structure), but in the arrow from $X$ to $\mathcal{A}^*$, which is a relation relating each state to all words accepted from that state. More generally, coalgebras are given as arrows $X\to TFX$ in a Kleisli category, where a monad $T$ describes implicit branching and an endofunctor $F$ specifies explicit branching with the underlying intuition that the implicit branching (for instance non-determinism or probabilistic branching) is aggregated and abstracted away in the final coalgebra. For several monads this yields a form of trace semantics.
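To make the $\mathbf{Rel}$ example more concrete, the arrow into the final coalgebra can be characterised explicitly. The following is an informal sketch (the notation $\mathbf{tr}$ is ours; the formal treatment can be found in \cite{hasuo}): writing $\mathbf{tr}(x) \subseteq \mathcal{A}^*$ for the set of words that the relation into the final coalgebra assigns to a state $x$ of a coalgebra $\alpha\colon X \to \mathcal{P}(\mathcal{A}\times X+\mathbf{1})$, we have
\begin{align*}
\varepsilon \in \mathbf{tr}(x) &\iff \checkmark \in \alpha(x),\\
aw \in \mathbf{tr}(x) &\iff (a,y) \in \alpha(x) \text{ and } w \in \mathbf{tr}(y) \text{ for some } y \in X,
\end{align*}
i.e. $\mathbf{tr}(x)$ is precisely the language accepted from the state $x$.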
In \cite{hasuo} a theorem gives sufficient conditions for the existence of a final coalgebra for Kleisli categories over $\mathbf{Set}$, which -- interestingly -- can be obtained as initial $F$-algebra in $\mathbf{Set}$. In \cite{hasuo} it is also proposed to obtain probabilistic trace semantics for the Kleisli category of the (discrete) subdistribution monad $\mathcal{D}$ on $\mathbf{Set}$. The endofunctor of that monad maps a set $X$ to the set $\mathcal{D}(X)$ of all functions $p\colon X \to [0,1]$ satisfying $\sum_{x \in X} p(x) \leq 1$. Coalgebras in this setting are functions of the form $X\to \mathcal{D}(\mathcal{A}\times X+\mathbf{1})$ (modeling probabilistic branching and termination), seen as arrows in the corresponding Kleisli category. From the general result in \cite{hasuo} mentioned above it again follows that the final coalgebra is carried by $\mathcal{A}^*$, where the mapping into the final coalgebra assigns to each state a discrete probability distribution over its traces. In this way one obtains the finite trace semantics of generative probabilistic systems \cite{s:coalg-ps-phd,glabbeek}. The contribution in \cite{hasuo} is restricted to discrete probability spaces, where the probability distributions always have at most countable support \cite{Sokolova20115095}. This might seem sufficient for practical applications at first glance, but it has two important drawbacks: first, it excludes several interesting systems that involve uncountable state spaces (see for instance the examples in Section~\ref{sec:advexamples} or the examples in \cite{Pan09}). Second, it excludes the treatment of infinite traces, as detailed in \cite{hasuo}, since the set of all infinite traces is uncountable and hence needs measure theory to be treated appropriately. This is an intuitive reason for the choice of the subdistribution monad -- instead of the distribution monad -- in \cite{hasuo}: for a given state, it might always be the case that a non-zero ``probability mass'' is associated to the infinite traces leaving this state, which -- in the discrete case -- cannot be specified by a probability distribution over all words. Hence, we generalize the results concerning probabilistic trace semantics from \cite{hasuo} to the case of uncountable state spaces, by working in the Kleisli category of the (continuous) sub-probability monad over $\mathbf{Meas}$ (the category of measurable spaces). Unlike in \cite{hasuo} we do not derive the final coalgebra via a generic construction (building the initial algebra of the functor), but we construct the final coalgebra directly. Furthermore we consider the Kleisli category of the (continuous) probability monad (Giry monad) and treat the case with and without termination. In the former case we obtain a coalgebra over the set $\mathcal{A}^\infty$ (finite and infinite traces over $\mathcal{A}$) and in the latter over the set $\mathcal{A}^\omega$ (infinite traces), which shows the naturality of the approach. For completeness we also consider the case of the sub-probability monad without termination, which results in a trivial final coalgebra over the empty set. In all cases we obtain the natural trace measures as instances of the generic coalgebraic theory. Since, to our knowledge, there is no generic construction of the final coalgebra for these cases, we construct the respective final coalgebras directly and show their correctness by proving that each coalgebra admits a unique homomorphism into the final coalgebra. 
Here we rely on the measure-theoretic extension theorem for $\sigma$-finite pre-measures and the identity theorem. In the conclusion we will further compare our approach to \cite{hasuo} and discuss why we took an alternative route. \subsection{Another paper?} This paper is the extended version of the paper \cite{KK12a} first published at CONCUR 2012 and thus it necessarily contains all results of that paper. Due to page limitations some of the proofs were omitted in the published version and hence in the technical report \cite{KK12TR} we provided a version which is identical to the original paper but contains an appendix with the missing proofs. In contrast to that, the paper at hand contains all the proofs in place and also some corrections. Moreover, more details are presented, mainly taken from \cite{kerstan}, which was the starting point for everything. Last but not least, the paper at hand includes the new Section \ref{sec:advexamples} containing two examples with uncountable state spaces and some additional theory needed in order to understand them. \section{Background Material and Preliminaries} \label{sec:prelim} We assume that the reader is familiar with the basic definitions of category theory. However, we will provide a brief introduction to notation, measure theory and integration, coalgebra, coalgebraic trace semantics and Kleisli categories -- of course all geared to our needs. \subsection{Notation} By $\mathbf{1}$ we denote a singleton set; its unique element is $\checkmark$. For arbitrary sets $X, Y$ we write $X \setminus Y$ for set difference, $X \times Y$ for the usual cartesian product and the disjoint union $X + Y$ is the set $\set{(x,0), (y,1)\mid x \in X, y \in Y}$. Whenever $X \cap Y = \emptyset$ this coincides with (is isomorphic to) the usual union $X \cup Y$ in an obvious way. For set inclusion we write $\subset$ for strict inclusion and $\subseteq$ otherwise. The set of real numbers is denoted by $\mathbb{R}$, the set of extended reals is the set $\overline{\mathbb{R}} := \mathbb{R} \cup \set{\pm\infty}$ and $\mathbb{R}_+$ and $\overline{\mathbb{R}}_+$ are their restrictions to the non-negative (extended) reals. We require $0 \cdot \pm \infty = \pm \infty \cdot 0 = 0$. For a function $f\colon X \to Y$ and a set $A \subseteq X$ the restriction of $f$ to $A$ is the function $f|_A\colon A \to Y$. \subsection{A Brief Introduction to Measure Theory} Within this section we want to give a very brief introduction to measure theory. For a more thorough treatment there are many standard textbooks, see e.g. \cite{ash,Els07}. Measure theory generalizes the idea of length, area or volume. Its most basic notion is that of a \emph{$\sigma$-algebra} (sigma-algebra). Given an arbitrary set $X$ we call a set $\Sigma$ of subsets of $X$ a \emph{$\sigma$-algebra} iff it contains the empty set and is closed under complement and countable union. The tuple $(X, \Sigma)$ is called a \emph{measurable space}. We will sometimes call the set $X$ itself a measurable space, keeping in mind that there is an associated $\sigma$-algebra which we will then denote by $\Sigma_X$. For any subset $\mathcal{G} \subseteq \powerset{X}$ we can always uniquely construct the smallest $\sigma$-algebra on $X$ containing $\mathcal{G}$ which is denoted by $\sigalg[X]{\mathcal{G}}$. We call $\mathcal{G}$ the \emph{generator} of $\sigalg[X]{\mathcal{G}}$, which in turn is called \emph{the $\sigma$-algebra generated by $\mathcal{G}$}.
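As a small concrete example (standard, and easy to check directly from the definition): for $X = \set{1,2,3}$ and the generator $\mathcal{G} = \set{\set{1}}$ we obtain
\begin{align*}
\sigalg[X]{\mathcal{G}} = \set{\emptyset, \set{1}, \set{2,3}, X},
\end{align*}
the smallest collection of subsets of $X$ that contains $\set{1}$ and the empty set and is closed under complement and countable union.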
It is known (and easy to show) that $\sigma_X$ is a monotone and idempotent operator. The elements of a $\sigma$-algebra on $X$ are called the \emph{measurable sets} of $X$. Among all possible generators for $\sigma$-algebras, there are special ones, so-called \emph{semirings of sets}. \begin{defi}[Semiring of Sets] Let $X$ be an arbitrary set. A subset $\S \subseteq \powerset{X}$ is called a \emph{semiring of sets} if it satisfies the following three properties. \begin{enumerate}[label=(\alph*)] \item $\S$ contains the empty set, i.e. $\emptyset \in \S$. \item $\S$ is closed under pairwise intersection, i.e. for $A, B \in \S$ we always require $(A \cap B)\in \S$. \item The set difference of any two sets in $\S$ is the disjoint union of finitely many sets in $\S$, i.e. for any $A, B \in \S$ there is an $N \in \mathbb{N}$ and pairwise disjoint sets $C_1,\hdots,C_N \in \S$ such that $A\setminus B = \cup_{n=1}^N C_n$. \end{enumerate} \end{defi} \noindent It is easy to see that every $\sigma$-algebra is a semiring of sets but the reverse is false. Please note that a semiring of sets is different from a semiring in algebra. For our purposes, we will consider special semirings containing a countable cover of the base set. \begin{defi}[Countable Cover, Covering Semiring] Let $\S$ be a semiring of sets on a set $X$. A countable sequence $(S_n)_{n \in \mathbb{N}}$ of sets in $\S$ such that $\cup_{n \in \mathbb{N}}S_n = X$ is called a \emph{countable cover of $X$ (in $\S$)}. If such a countable cover exists we call $\S$ a \emph{covering} semiring. \end{defi} With these basic structures at hand, we can now define pre-measures and measures. A non-negative function $\mu \colon \S \to \overline{\mathbb{R}}_+$ defined on a semiring $\S$ is called a \emph{pre-measure} on $X$ if it assigns $0$ to the empty set and is \emph{$\sigma$-additive}, i.e. for a sequence $(S_n)_{n \in \mathbb{N}}$ of pairwise disjoint sets in $\mathcal{S}$ where $\left(\cup_{n \in \mathbb{N}}S_n\right) \in \mathcal{S}$ we must have \begin{align} \mu\left(\bigcup_{n \in \mathbb{N}}S_n\right) = \sum_{n \in \mathbb{N}}\mu\left(S_n\right). \end{align} A pre-measure $\mu$ is called \emph{$\sigma$-finite} if there is a countable cover $(S_n)_{n \in \mathbb{N}}$ of $X$ in $\mathcal{S}$ such that $\mu\left(S_n\right) < \infty$ for all $n \in \mathbb{N}$. Whenever $\S$ is a $\sigma$-algebra we call $\mu$ a \emph{measure} and the tuple $(X, \S, \mu)$ a \emph{measure space}. In that case $\mu$ is said to be \emph{finite} iff $\mu(X) < \infty$ and for the special cases $\mu(X) = 1$ (or $\mu(X) \leq 1$) $\mu$ is called a \emph{probability measure} (or \emph{sub-probability measure} respectively). Measures are \emph{monotone}, i.e. if $A,B$ are measurable, then $A \subseteq B$ implies $\mu(A) \leq \mu(B)$, and \emph{continuous}, i.e. for measurable $A_1 \subseteq A_2 \subseteq \hdots \subseteq A_n \subseteq \hdots$ we always have $\mu\left(\cup_{n=1}^\infty A_n\right) = \lim_{n \to \infty} \mu(A_n)$ and for measurable $B_1 \supseteq B_2 \supseteq \hdots \supseteq B_n \supseteq \hdots$ with $\mu(B_1) < \infty$ we have $\mu\left(\cap_{n=1}^\infty B_n\right) = \lim_{n \to \infty} \mu(B_n)$ \cite[1.2.5 and 1.2.7]{ash}. Given a measurable space $(X, \Sigma_X)$, a simple and well-known probability measure is the so-called \emph{Dirac measure}, which we will use later. It is defined as $\delta_x^X\colon \Sigma_X \to [0,1]$, where $\delta_x^X(S) = 1$ if $x \in S$ and $\delta_x^X(S) = 0$ otherwise.
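To connect the notions of semiring, countable cover and pre-measure, here is a standard example (cf.~\cite{Els07}): the left-open intervals $\S := \set{(a,b] \mid a,b \in \mathbb{R}, a \leq b}$ form a covering semiring on $\mathbb{R}$, and the function
\begin{align*}
\lambda\colon \S \to \overline{\mathbb{R}}_+, \quad \lambda\big((a,b]\big) := b - a
\end{align*}
is a $\sigma$-finite pre-measure: the intervals $(-n,n]$ form a countable cover of $\mathbb{R}$ in $\S$ with $\lambda\big((-n,n]\big) = 2n < \infty$ for all $n \in \mathbb{N}$. By the extension theorem presented below, $\lambda$ extends uniquely to a measure on $\sigalg[\mathbb{R}]{\S}$, the Lebesgue-Borel measure.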
The most significant theorems from measure theory which we will use in this paper are the identity theorem and the extension theorem for $\sigma$-finite pre-measures, for which a proof can be found e.g. in~\cite[II.5.6 and II.5.7]{Els07}. \begin{prop}[Identity Theorem] Let $X$ be a set, $\mathcal{G} \subseteq \powerset{X}$ be a set which is closed under pairwise intersection and $\mu, \nu \colon \sigalg[X]{\mathcal{G}} \to \overline{\mathbb{R}}_+$ be measures. If $\mu|_\mathcal{G} = \nu|_\mathcal{G}$ and $\mathcal{G}$ contains a countable cover $(G_n)_{n \in \mathbb{N}}$ of $X$ satisfying $\mu(G_n) = \nu(G_n) < \infty$ for all $n \in \mathbb{N}$, then $\mu = \nu$.\qed \end{prop} \begin{prop}[Extension Theorem for $\sigma$-finite Pre-Measures] \label{prop:extension} Let $X$ be a set, $\S \subseteq \powerset{X}$ be a semiring of sets and $\mu\colon \S \to \overline{\mathbb{R}}_+$ be a $\sigma$-finite pre-measure. Then there exists a uniquely determined measure $\hat{\mu} \colon \sigalg[X]{\S} \to \overline{\mathbb{R}}_+$ such that $\hat{\mu}|_\S = \mu$. \qed \end{prop} As we are only interested in finite measures, we provide a result which can be derived easily from the identity theorem. \begin{cor}[Equality of Finite Measures on Covering Semirings] \label{cor:equality_of_measures} Let $X$ be an arbitrary set, $\S \subseteq \powerset{X}$ be a covering semiring and $\mu, \nu \colon \sigalg[X]{\S} \to \overline{\mathbb{R}}_+$ be finite measures. Then $\mu = \nu$ if and only if $\mu|_\S = \nu|_\S$. \end{cor} \proof Obviously we get $\mu|_\S = \nu|_\S$ if $\mu = \nu$. For the other direction let $(S_n)_{n \in \mathbb{N}}$ be a countable cover of $X$ in $\S$. Then $\mu|_\S = \nu|_\S$ together with the finiteness and monotonicity of $\mu$ and $\nu$ yields $\mu(S_n) = \nu(S_n) \leq \nu(X)< \infty$ for all $n \in \mathbb{N}$. Since $\S$ is a semiring of sets, it is closed under pairwise intersection, which allows us to apply the identity theorem, yielding $\mu = \nu$. \qed \subsection{The Category of Measurable Spaces and Functions} Let $X$ and $Y$ be measurable spaces. A function $f \colon X \to Y$ is called \emph{measurable} iff the pre-image of any measurable set of $Y$ is a measurable set of $X$. The category $\mathbf{Meas}$ has measurable spaces as objects and measurable functions as arrows. Composition of arrows is function composition and the identity arrows are the identity functions. The product of two measurable spaces $(X, \Sigma_X)$ and $(Y, \Sigma_Y)$ is the set $X \times Y$ endowed with the $\sigma$-algebra generated by $\Sigma_X \ast \Sigma_Y$, the set of so-called ``rectangles'' of measurable sets, which is $\set{S_X \times S_Y\mid S_X \in \Sigma_X, S_Y \in \Sigma_Y}$. It is called the \emph{product $\sigma$-algebra} of $\Sigma_X$ and $\Sigma_Y$ and is denoted by $\Sigma_X \otimes \Sigma_Y$. Whenever $\Sigma_X$ and $\Sigma_Y$ have suitable generators, we can also construct a possibly smaller generator for the product $\sigma$-algebra by taking only the ``rectangles'' of the generators. \begin{prop}[Generators for the Product $\sigma$-Algebra] \label{prop:generator_product} Let $X, Y$ be arbitrary sets and $\mathcal{G}_X \subseteq \powerset{X}, \mathcal{G}_Y \subseteq \powerset{Y}$ such that $X \in \mathcal{G}_X$ and $Y \in \mathcal{G}_Y$. Then the following holds: \[ \sigalg[X\times Y]{\mathcal{G}_X \ast \mathcal{G}_Y} = \sigalg[X]{\mathcal{G}_X} \otimes \sigalg[Y]{\mathcal{G}_Y}\,.
\eqno{\qEd} \] \end{prop} A proof of this proposition can be found in many standard textbooks on measure theory, e.g. in \cite{Els07}. We remark that there are (obvious) product endofunctors on the category of measurable spaces and functions. \begin{defi}[Product Functors] Let $(Z, \Sigma_Z)$ be a measurable space. The endofunctor $Z \times \mathrm{Id}_\mathbf{Meas}$ maps a measurable space $(X, \Sigma_X)$ to $\left(Z \times X, \Sigma_Z \otimes \Sigma_X\right)$ and a measurable function $f\colon X \to Y$ to the measurable function $Z \times f \colon Z \times X \to Z \times Y, (z,x) \mapsto \left(z,f(x)\right)$. The functor $\mathrm{Id}_\mathbf{Meas} \times Z$ is constructed analogously. \end{defi} The coproduct of two measurable spaces $(X, \Sigma_X)$ and $(Y, \Sigma_Y)$ is the set $X + Y$ endowed with $\Sigma_X \oplus \Sigma_Y := \set{S_X + S_Y\mid S_X \in \Sigma_X, S_Y \in \Sigma_Y}$ as $\sigma$-algebra, the \emph{disjoint union $\sigma$-algebra}. Note that in contrast to the product no $\sigma$-operator is needed because $\Sigma_X \oplus \Sigma_Y$ itself is already a $\sigma$-algebra, whereas $\Sigma_X \ast \Sigma_Y$ is usually not a $\sigma$-algebra. For generators of the disjoint union $\sigma$-algebra we provide and prove a result comparable to the one given above for the product $\sigma$-algebra. \begin{prop}[Generators for the Disjoint Union $\sigma$-Algebra] \label{prop:generator_union} Let $X, Y$ be arbitrary sets and $\mathcal{G}_X \subseteq \powerset{X}, \mathcal{G}_Y \subseteq \powerset{Y}$ such that $\emptyset \in \mathcal{G}_X$ and $Y \in \mathcal{G}_Y$. Then the following holds: \begin{align} \sigalg[X + Y]{\mathcal{G}_X \oplus \mathcal{G}_Y} = \sigalg[X]{\mathcal{G}_X} \oplus \sigalg[Y]{\mathcal{G}_Y}\label{eq:generator_union}\,. \end{align} \end{prop} \noindent In order to prove this, we cite another result from \cite[I.4.5 Korollar]{Els07}. \begin{lem} \label{lem:trace_sigma_algebra} Let $X$ be an arbitrary set, $\mathcal{G} \subseteq \powerset{X}$ and $S \subseteq X$. Then $\sigalg[S]{\mathcal{G}|S} = \sigalg[X]{\mathcal{G}}|S$ where $\mathcal{G} | S := \set{G \cap S \mid G \in \mathcal{G}}$ and analogously $\sigalg[X]{\mathcal{G}}|S := \set{G \cap S \mid G \in \sigalg[X]{\mathcal{G}}}$.\qed \end{lem} \proof[Proof of Proposition~\ref{prop:generator_union}] Without loss of generality we assume that $X$ and $Y$ are disjoint. Hence for any subsets $A \subseteq X$, $B \subseteq Y$ we have $A \cap B = \emptyset$ and thus $A + B \cong A \cup B$. In order to prove equation \eqref{eq:generator_union} we show both inclusions. \begin{itemize}[label=$\subseteq$] \item We have $\mathcal{G}_X \oplus \mathcal{G}_Y \subseteq \sigalg[X]{\mathcal{G}_X} \oplus \sigalg[Y]{\mathcal{G}_Y}$ and thus monotonicity and idempotence of the $\sigma$-operator immediately yield $\sigalg[X\cup Y]{\mathcal{G}_X \oplus \mathcal{G}_Y} \subseteq \sigalg[X]{\mathcal{G}_X} \oplus \sigalg[Y]{\mathcal{G}_Y}$. \item [$\supseteq$] Let $G \in \sigalg[X]{\mathcal{G}_X} \oplus \sigalg[Y]{\mathcal{G}_Y}$. Then $G = G_X\cup G_Y$ with $G_X \in \sigalg[X]{\mathcal{G}_X}$ and $G_Y \in \sigalg[Y]{\mathcal{G}_Y}$. We observe that $\mathcal{G}_X = (\mathcal{G}_X \oplus \mathcal{G}_Y) | X$ and by applying Lemma~\ref{lem:trace_sigma_algebra} we obtain that $\sigalg[X\cup Y]{\mathcal{G}_X \oplus \mathcal{G}_Y} | X = \sigalg[X]{\mathcal{G}_X}$. Thus there must be a $G'_Y \in \powerset{Y}$ such that $G_X \cup G'_Y \in \sigalg[X\cup Y]{\mathcal{G}_X \oplus \mathcal{G}_Y}$.
Analogously there must be a $G'_X \in \powerset{X}$ such that $G'_X \cup G_Y \in \sigalg[X\cup Y]{\mathcal{G}_X \oplus \mathcal{G}_Y}$. We have $Y = \emptyset \cup Y \in \sigalg[X\cup Y]{\mathcal{G}_X \oplus \mathcal{G}_Y}$ and hence we also have $X = (X\cup Y)\setminus Y \in \sigalg[X\cup Y]{\mathcal{G}_X \oplus \mathcal{G}_Y}$. Thus we calculate \begin{align*} G = G_X \cup G_Y = \big( (G_X \cup G'_Y) \cap X \big) \cup \big( (G'_X \cup G_Y) \cap Y\big) \in \sigalg[X\cup Y]{\mathcal{G}_X \oplus \mathcal{G}_Y} \end{align*} and hence we can conclude that $\sigalg[X\cup Y]{\mathcal{G}_X \oplus \mathcal{G}_Y} \supseteq \sigalg[X]{\mathcal{G}_X} \oplus \sigalg[Y]{\mathcal{G}_Y}$.\qed \end{itemize} \noindent As before we have endofunctors for the coproduct, the coproduct functors. \begin{defi}[Co-Product Functors] Let $(Z, \Sigma_Z)$ be a measurable space. The endofunctor $\mathrm{Id}_\mathbf{Meas} + Z$ maps a measurable space $(X, \Sigma_X)$ to $\left(X+Z, \Sigma_X \oplus \Sigma_Z\right)$ and a measurable function $f\colon X \to Y$ to the measurable function $f + Z \colon X + Z\to Y + Z$, $(x,0) \mapsto (f(x),0)$, $(z,1) \mapsto (z,1)$. The functor $Z + \mathrm{Id}_\mathbf{Meas}$ is constructed analogously. \end{defi} For isomorphisms in $\mathbf{Meas}$ we provide the following characterization which we will need later for our main result. \begin{prop}[Isomorphisms in $\mathbf{Meas}$] \label{prop:isomorphisms} Two measurable spaces $X$ and $Y$ are isomorphic in $\mathbf{Meas}$ iff there is a bijective function $\phi\colon X \to Y$ such that\footnote{For $\S \subseteq \powerset{X}$ and a function $\phi \colon X \to Y$ let $\phi(\S) = \set{\phi\left(S_X\right) \mid S_X \in \S} = \set{ \set{\phi(x) \mid x \in S_X} \mid S_X \in \S}$.} $\phi\left(\Sigma_X\right) = \Sigma_Y$. If $\Sigma_X$ is generated by a set $\S \subseteq \powerset{X}$ then $X$ and $Y$ are isomorphic iff there is a bijective function $\phi\colon X \to Y$ such that $\Sigma_Y$ is generated by ${\phi\left(\S\right)}$. In this case $\S$ is a (covering) semiring of sets [a $\sigma$-algebra] iff $\phi(\S)$ is a (covering) semiring of sets [a $\sigma$-algebra]. \end{prop} Again, we need a result from measure theory for the proof. This auxiliary result and its proof can be found e.g. in \cite[I.4.4 Satz]{Els07}. \begin{lem} \label{lem:generator_inverse} Let $X, Y$ be sets, $f\colon X \to Y$ be a function. Then for every subset $\S \subseteq \powerset{Y}$ it holds that $\sigalg[X]{f^{-1}(\S)} = f^{-1}\left(\sigalg[Y]{\S}\right)$.\qed \end{lem} \proof[Proof of Proposition~\ref{prop:isomorphisms}] Since the identity arrows in $\mathbf{Meas}$ are the identity functions, we can immediately derive that any isomorphism $\phi\colon X \to Y$ must be a bijective function. Measurability of $\phi$ and its inverse function $\phi^{-1}\colon Y \to X$ yield $\phi\left(\Sigma_X\right) = \Sigma_Y$. The equality $\sigalg[Y]{\phi(\S)} = \phi\left(\sigalg[X]{\S}\right)$ follows from Lemma~\ref{lem:generator_inverse} by taking $f = \phi^{-1}$. The last equivalence is easy to verify using bijectivity of $\phi$ and $\phi^{-1}$.\qed \subsection{Kleisli Categories and Liftings of Endofunctors} Recall that a monad on a category $\mathbf{C}$ is a triple $(T, \eta, \mu)$ where $T\colon \mathbf{C} \to \mathbf{C}$ is an endofunctor together with two natural transformations\footnote{This is the second meaning of the symbol $\mu$.
Until now, $\mu$ was used as a symbol for a (pre-)measure.} $\eta \colon \mathrm{Id}_\mathbf{C} \Rightarrow T$ and $\mu\colon T^2 \Rightarrow T$ such that the following diagrams commute for all $\mathbf{C}$-objects $X$. \[\xymatrix@C+20 pt{ TX \ar[r]^{T\eta_X} \ar[dr]_{\,\mathrm{id}_{TX}\!} \ar[d]_{\eta_{TX}} & T^2X \ar[d]^{\mu_X} & & T^3X \ar[r]^{T\mu_X} \ar[d]_{\mu_{TX}} & T^2X \ar[d]^{\mu_X}\\ T^2X \ar[r]_{\mu_X} & TX & & T^2X \ar[r]_{\mu_X} & TX }\] \noindent Given a monad $(T, \eta, \mu)$ on a category $\mathbf{C}$ we can define a new category, the Kleisli category of $T$, where the objects are the same as in $\mathbf{C}$ but every arrow in the new category corresponds to an arrow $f\colon X \to TY$ in $\mathbf{C}$. Thus, arrows in the Kleisli category incorporate side effects specified by a monad~\cite{hasuo,abhkms:coalgebra-min-det}. Formally we will use the following definition. \begin{defi}[Kleisli Category] Let $(T, \eta, \mu)$ be a monad on a category $\mathbf{C}$. The \emph{Kleisli category of $T$} has the same objects as $\mathbf{C}$. For any two such objects $X$ and $Y$, the Kleisli arrows with domain $X$ and codomain $Y$ are exactly the $\mathbf{C}$-arrows $f\colon X \to TY$. Composition of Kleisli arrows $f \colon X \to TY$ and $g \colon Y\to TZ $ is defined as $g\circ_T f := \mu_Z \circ T(g)\circ f$, the identity arrow for any Kleisli object $X$ is $\eta_X$. \end{defi} Given an endofunctor $F$ on $\mathbf{C}$, we want to construct an endofunctor $\overline{F}$ on $\mathcal{K}\ell(T)$ that ``resembles'' $F$: Since objects in $\mathbf{C}$ and objects in $\mathcal{K}\ell(T)$ are the same, we want $\overline{F}$ to coincide with $F$ on objects i.e. we want $\overline{F}X = FX$. It remains to define how $\overline{F}$ shall act on Kleisli arrows $f\colon X \to TY$ such that it ``resembles'' $F$. Formally we require $\overline{F}$ to be a \emph{lifting} of $F$ in the following sense: Given a monad $(T,\eta,\mu)$ and its Kleisli category $\mathcal{K}\ell(T)$, there is a canonical adjunction\footnote{Explicitly: The left-adjoint $L\colon \mathbf{C} \to \mathcal{K}\ell(T)$ is given by $LX = X$ for all $\mathbf{C}$-objects $X$ and $L(f) = \eta_Y \circ f$ for all $\mathbf{C}$-arrows $f\colon X \to Y$. The right-adjoint $R\colon \mathcal{K}\ell(T) \to \mathbf{C}$ is given by $RX = TX$ for all $\mathcal{K}\ell(T)$-objects $X$ and $R(f)=\mu_Y \circ Tf$ for all $\mathcal{K}\ell(T)$-arrows $f \colon X \to TY$.} \begin{align*} \big(L\colon \mathbf{C} \to \mathcal{K}\ell(T)\big) \quad \dashv \quad \big(R\colon \mathcal{K}\ell(T) \to \mathbf{C}\big) \end{align*} with unit $\eta'\colon \mathrm{Id}_\mathbf{C} \Rightarrow RL$ and counit $\epsilon\colon LR \Rightarrow \mathrm{Id}_{\mathcal{K}\ell(T)}$ giving rise to the monad, i.e. $T = RL$, $\eta=\eta'$, $\mu = R\epsilon L$. Then an endofunctor $\overline{F}$ on $\mathcal{K}\ell(T)$ is called a \emph{lifting of $F$} if it satisfies $\overline{F}L = LF$. We will use the fact that these liftings are in one-to-one correspondence with distributive laws \cite{mulry-lifting}. \begin{defi}[Distributive Law] Let $(T, \eta, \mu)$ be a monad on a category $\mathbf{C}$ and $F$ be an endofunctor on $\mathbf{C}$. 
A natural transformation $\lambda\colon FT \Rightarrow TF$ is called a \emph{distributive law} if for all $\mathbf{C}$-objects $X$ the following diagrams commute in $\mathbf{C}$: \[\xymatrix{ FX \ar[r]^{F\eta_X} \ar[dr]_{\eta_{FX}} & FTX \ar[d]^{\lambda_X} & & FT^2X \ar[r]^{\lambda_{TX}} \ar[d]_{F\mu_X} & TFTX \ar[r]^{T\lambda_X} & T^2FX \ar[d]^{\mu_{FX}}\\ & TFX & & FTX \ar[rr]_{\lambda_X} & & TFX }\] or equivalently $\lambda_X \circ F\eta_X = \eta_{FX}$ and $\mu_{FX} \circ T\lambda_X \circ \lambda_{TX} = \lambda_X \circ F\mu_X$. \end{defi} Whenever we have such a distributive law we get the lifting of a functor as defined above in the following way \cite{mulry-lifting}. \begin{prop}[Lifting via Distributive Law] Let $(T, \eta, \mu)$ be a monad on a category $\mathbf{C}$ and $F$ be an endofunctor on $\mathbf{C}$ with a distributive law $\lambda\colon FT \Rightarrow TF$. The distributive law induces a lifting of $F$ to an endofunctor $\overline{F}\colon \mathcal{K}\ell(T) \to \mathcal{K}\ell(T)$ if we define $\overline{F}X =FX$ for each object $X$ of $\mathcal{K}\ell(T)$ and $\overline{F}(f) := \lambda_Y \circ Ff$ for each Kleisli arrow $f \colon X \to TY$. \qed \end{prop} \subsection{Coalgebraic Trace Semantics} We first recall the central notions of coalgebra, coalgebra homomorphism and final coalgebra. \begin{defi}[Coalgebra, Coalgebra-Homomorphism, Final Coalgebra] \label{def:coalgebra} For an endofunctor $F$ on a category $\mathbf{D}$ an $F$-coalgebra is a pair $(X, \alpha)$ where $X$ is an object and $\alpha\colon X \to FX$ is an arrow of $\mathbf{D}$. An $F$-coalgebra homomorphism between two $F$-coalgebras $(X, \alpha), (Y, \beta)$ is an arrow $\phi \colon X \to Y$ in $\mathbf{D}$ such that $\beta \circ \phi = F(\phi)\circ \alpha$. We call an $F$-coalgebra $(\Omega, \kappa)$ final if and only if for every $F$-coalgebra $(X,\alpha)$ there is a unique $F$-coalgebra-homomorphism $\phi_\alpha \colon X \to \Omega$. \end{defi} By choosing a suitable category and a suitable endofunctor, many (labelled) transition systems can be modelled as $F$-coalgebras. The final coalgebra -- if it exists -- can be seen as the ``universe of all possible behaviors'' and the unique map into it yields a behavioral equivalence: Two states are equivalent iff they have the same image in the final coalgebra. Whenever transition systems incorporate side-effects, these can be ``hidden'' in a monad $T$. This leads to the following setting: the category $\mathbf{D}$ of Definition \ref{def:coalgebra} is $\mathcal{K}\ell(T)$, i.e., the Kleisli category for the monad $T$ and a functor $\overline{F}\colon \mathcal{K}\ell(T)\to \mathcal{K}\ell(T)$ is obtained by suitably lifting a functor $F$ of the underlying category (such that $\overline{F}X = FX$ on objects, see above). Then coalgebras are defined as arrows $\alpha\colon X\to\overline{F}X$ in the Kleisli category, which can be regarded as arrows $X\to TFX$ in the base category. As indicated in the introduction, the monad can be seen as describing implicit branching (side effects), whereas $F$ describes the explicit branching structure. In this setup the final coalgebra in the Kleisli category often yields a notion of trace semantics \cite{hasuo,Sokolova20115095}. The side effects specified via the monad are not part of the final coalgebra, but are contained in the unique map into the final coalgebra (which is again a Kleisli arrow).
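Before we move to the measure-theoretic setting, it is instructive to spell this out for the introductory example; the following is only a sketch of the standard construction. For the powerset monad $\mathcal{P}$ on $\mathbf{Set}$ and $F = \mathcal{A} \times \mathrm{Id}_{\mathbf{Set}} + \mathbf{1}$, a distributive law $\lambda\colon F\mathcal{P} \Rightarrow \mathcal{P}F$ is given by
\begin{align*}
\lambda_X(a,U) = \set{(a,u) \mid u \in U} \quad \text{for } (a,U) \in \mathcal{A} \times \mathcal{P}(X), \qquad \lambda_X(\checkmark) = \set{\checkmark}.
\end{align*}
The induced lifting $\overline{F}$ on $\mathcal{K}\ell(\mathcal{P}) = \mathbf{Rel}$ is exactly the functor whose coalgebras are the non-deterministic automata from the introduction, and the unique arrow into the final $\overline{F}$-coalgebra assigns to each state its accepted language.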
In our case $T$ is either the sub-probability or the probability monad on $\mathbf{Meas}$ (which will be defined later), whereas $F$ is defined as $F = \ensuremath{\mathcal{A}} \times \mathrm{Id}_\mathbf{Meas} + \mathbf{1}$ or $F = \ensuremath{\mathcal{A}} \times \mathrm{Id}_\mathbf{Meas}$ for a given finite alphabet $\ensuremath{\mathcal{A}}$. That is, the monad $T$ describes probabilistic branching, whereas the endofunctor $F$ specifies (explicitly observable) labels and possibly termination. \subsection{Borel-Sigma-Algebras and the Lebesgue Integral} Before we can define the probability and the sub-probability monad, we give a crash course in integration loosely based on \cite{ash,Els07}. For that purpose let us fix a measurable space $X$ and a measure $\mu$ on $X$. We want to integrate numerical functions $f\colon X \to \overline{\mathbb{R}}$ and in order to do that we need a suitable $\sigma$-algebra on $\overline{\mathbb{R}}$ to define measurability of such functions. Recall that a topological space is a tuple $(Y, \mathcal{T})$, where $Y$ is a set and $\mathcal{T} \subseteq \powerset{Y}$ is a set that contains the empty set and the set $Y$ itself, and is closed under arbitrary unions and finite intersections. The set $\mathcal{T}$ is called the \emph{topology} of $Y$ and its elements are called \emph{open sets}. The \emph{Borel $\sigma$-algebra} on $Y$, denoted $\mathcal{B}(Y)$, is the $\sigma$-algebra generated by the open sets $\mathcal{T}$ of the topology, i.e. $\mathcal{B}(Y) = \sigalg[Y]{\mathcal{T}}$. Thus the Borel $\sigma$-algebra provides a connection between topological aspects and measurability. For the set of real numbers, it can be shown (\cite[I.4.3 Satz]{Els07}) that the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$ is generated by the semiring of all left-open intervals \begin{align*} \mathcal{B}(\mathbb{R}) = \sigalg[\mathbb{R}]{\set{\,(a,b]\mid a,b \in \mathbb{R}, a \leq b}}. \end{align*} With this definition at hand, we now equip the set $\overline{\mathbb{R}}$ of extended reals with its Borel $\sigma$-algebra which can be defined as \begin{align*} \mathcal{B}(\overline{\mathbb{R}}) = \sigalg[\overline{\mathbb{R}}]{\set{B \cup E \mid B \in \mathcal{B}(\mathbb{R}), E \subseteq \set{-\infty, \infty}}}. \end{align*} A function $f\colon X \to \overline{\mathbb{R}}$ is called \emph{(Borel-)measurable} if it is measurable with respect to this Borel $\sigma$-algebra. Given two Borel-measurable functions $f,g \colon X \to \overline{\mathbb{R}}$ and real numbers $\alpha, \beta$, also $\alpha f+\beta g$ is Borel-measurable \cite[III.4.7]{Els07} and thus so are all finite linear combinations of Borel-measurable functions. Moreover, if $(f_n)_{n \in \mathbb{N}}$ is a sequence of Borel-measurable functions $f_n\colon X \to \overline{\mathbb{R}}$ converging pointwise to a function $f\colon X \to \overline{\mathbb{R}}$, then also $f$ is Borel-measurable \cite[1.5.4]{ash}. In the remainder of this section we will just consider Borel-measurable functions. We call $f$ \emph{simple} iff it attains only finitely many values, say $f(X) = \set{\alpha_1, \hdots, \alpha_N}$. The integral of such a simple function $f$ is then defined to be the $\mu$-weighted sum of the $\alpha_n$, formally $\Int{f}[\mu] = \sum_{n=1}^{N}\alpha_n\mu(S_n)$ where $S_n = f^{-1}(\set{\alpha_n}) \in \Sigma_X$. Whenever $f$ is non-negative we can approximate it from below using non-negative simple functions.
In this case we define the integral to be \[\Int{f}[\mu] := \sup\set{\Int{s}[\mu]\mid s \mbox{ non-negative and simple s.t. } 0 \leq s \leq f}.\] For arbitrary Borel-measurable $f$ we decompose it into its positive part $f^+ := \max\set{f,0}$ and negative part $f^- := \max\set{-f, 0}$ which are both non-negative and Borel-measurable. We note that $f = f^+ - f^{-}$ and consequently we define the integral of $f$ to be the difference $\Int{f}[\mu] := \Int{f^+}[\mu] - \Int{f^-}[\mu]$, provided that not both integrals on the right-hand side are $+\infty$; if both are $+\infty$ we say that the integral does not exist. Whenever it exists and is finite we call $f$ a \emph{$\mu$-integrable function} or simply an \emph{integrable function} if the measure $\mu$ is obvious from the context. For every measurable set $S \in \Sigma_X$, its characteristic function $\chi_S \colon X \to \mathbb{R}$, which is $1$ if $x \in S$ and $0$ otherwise, satisfies $\Int{\chi_S}[\mu] = \mu(S)$ and is thus $\mu$-integrable whenever $\mu(S) < \infty$; moreover, for $\mu$-integrable $f$ the product $\chi_S \cdot f$ is always $\mu$-integrable and we write \begin{align*} \Int[S]{f}[\mu] := \Int{\chi_S \cdot f}[\mu] \,. \end{align*} Instead of $\Int[S]{f}[\mu]$ we will sometimes write $\Int[S]{f(x)}[\mu(x)]$ or $\Int[{x \in S}]{f(x)}[\mu(x)]$ which is useful if we have functions with more than one argument or multiple integrals. Note that this does not imply that singleton sets are measurable. Some useful properties of the integral are that it is \emph{linear}, i.e. for $\mu$-integrable functions $f,g\colon X \to \overline{\mathbb{R}}$ and real numbers $\alpha, \beta$ we have \begin{align*} \Int{\alpha f + \beta g}[\mu] = \alpha \Int{f}[\mu] + \beta\Int{g}[\mu] \end{align*} and the integral is \emph{monotone}, i.e. $f \leq g$ implies $\Int{f}[\mu] \leq \Int{g}[\mu]$. We will state one result explicitly which we will use later in our proofs. This result and its proof can be found e.g. in \cite[Theorem 1.6.12]{ash}. \begin{prop}[Image Measure] Let $X, Y$ be measurable spaces, $\mu$ be a measure on $X$, $f\colon Y \to \overline{\mathbb{R}}$ be a Borel-measurable function and $g\colon X \to Y$ be a measurable function. Then $\mu \circ g^{-1}$ is a measure\footnote{This notation is a bit lax; if we wanted to be really precise we would have to write $\mu \circ \left(g^{-1}|_{\Sigma_Y}\right)$.} on $Y$, the so-called \emph{image measure}, and $f$ is $(\mu \circ g^{-1})$-integrable iff $f \circ g$ is $\mu$-integrable and in this case we have $\Int[S]{f}[(\mu \circ g^{-1})] = \Int[{g^{-1}(S)}]{f \circ g}[\mu]$ for all $S \in \Sigma_Y$.\qed \end{prop} \subsection{The Probability and the Sub-Probability Monad} We will now introduce the probability monad (Giry monad) and the sub-probability monad as e.g. presented in \cite{Gir82} and \cite{Pan09}. First, we take a look at the endofunctors of these monads. \begin{defi}[The Sub-Probability and the Probability Functor] The \emph{sub-probability functor} $\mathbb{S} \colon \mathbf{Meas} \to \mathbf{Meas}$ maps a measurable space $(X, \Sigma_X)$ to the measurable space $\big(\mathbb{S}(X), \Sigma_{\mathbb{S}(X)}\big)$ where $\mathbb{S}(X)$ is the set of all sub-probability measures on $\Sigma_X$ and $\Sigma_{\mathbb{S}(X)}$ is the smallest $\sigma$-algebra such that for all $S \in \Sigma_X$ the \emph{evaluation maps}: \begin{align} p_S\colon \mathbb{S}(X) \to [0,1],\quad p_S(P) = P(S) \label{eq:evaluation_map} \end{align} are Borel-measurable.
For any measurable function $f \colon X \to Y$ between measurable spaces $(X, \Sigma_X)$, $(Y, \Sigma_Y)$ the arrow $\mathbb{S}(f)$ maps a sub-probability measure $P$ to its image measure:
\begin{align}
\mathbb{S}(f) \colon \mathbb{S}(X) \to \mathbb{S}(Y), \quad \mathbb{S}(f)(P) := P \circ f^{-1}.
\end{align}
If we take probability measures instead of sub-probability measures we analogously get another endofunctor, the \emph{probability functor} $\mathbb{P}$.
\end{defi}

Both the sub-probability functor $\mathbb{S}$ and the probability functor $\mathbb{P}$ are the functor parts of monads with the following unit and multiplication natural transformations.

\begin{defi}[Unit and Multiplication]
Let $T$ be either the sub-probability functor $\mathbb{S}$ or the probability functor $\mathbb{P}$. We obtain two natural transformations $\eta \colon \mathrm{Id}_\mathbf{Meas} \Rightarrow T$ and $\mu \colon T^2\Rightarrow T$ by defining for every measurable space $(X,\Sigma_X)$:
\begin{align}
\eta_X \colon X \to TX,\ & \quad \eta_X(x) = \delta_x^X \label{eq:giry_unit}\\
\mu_X \colon T^2X \to TX,\ & \quad \mu_X(P)(S) := \Int{p_S}[P] \quad \text{for } S \in \Sigma_X\label{eq:giry_mult}
\end{align}
where $\delta_x^X\colon \Sigma_X \to [0,1]$ is the Dirac measure and $p_S$ is the evaluation map \eqref{eq:evaluation_map} from above.
\end{defi}

If we combine all the ingredients we obtain the following result which also guarantees the soundness of the previous definitions.

\begin{prop}[\cite{Gir82,Pan09}]
$(\mathbb{S}, \eta, \mu)$ and $(\mathbb{P}, \eta, \mu)$ are monads on $\mathbf{Meas}$.\qed
\end{prop}

\subsection{A Category of Stochastic Relations}

The Kleisli category of the sub-probability monad $(\mathbb{S}, \eta, \mu)$ is sometimes called the \emph{category of stochastic relations} \cite{Pan09} and denoted by $\mathbf{SRel}$. Let us briefly analyze the arrows of this category: Given two measurable spaces $(X,\Sigma_X)$, $(Y, \Sigma_Y)$ a Kleisli arrow $h \colon X \to \mathbb{S}Y$ maps each $x \in X$ to a sub-probability measure $h(x) \colon \Sigma_Y \to [0,1]$. By uncurrying we can regard $h$ as a function $h\colon X \times \Sigma_Y\to [0,1]$. Certainly for each $x \in X$ the function $S \mapsto h(x,S)$ is a (sub-)probability measure and one can show that for each $S \in \Sigma_Y$ the function $x \mapsto h(x,S)$ is Borel-measurable. Any function $h \colon X \times \Sigma_Y\to [0,1]$ with these properties is called a \emph{Markov kernel} or a \emph{stochastic kernel} and it is known \cite[Proposition 2.7]{Dob07b} that these Markov kernels correspond exactly to the Kleisli arrows $h \colon X \to \mathbb{S}Y$. We will later need the following simple result about Borel-measurable functions and Markov kernels:

\begin{lem}
\label{lem:measMarkovKernel}
Let $(X, \Sigma_X)$ and $(Y, \Sigma_Y)$ be measurable spaces, $g \colon Y \to [0,1]$ be a Borel-measurable function and $h\colon X \times \Sigma_Y \to [0,1]$ be a Markov kernel. Then the function $f\colon X \to [0,1]$, $f(x) := \Int[y \in Y]{g(y)}[h(x,y)]$ is Borel-measurable.
\end{lem}

\proof
If $g$ is a simple and Borel-measurable function, say $g(Y) = \set{\alpha_1, \hdots, \alpha_N}$, then $f(x) = \sum_{n=1}^N \alpha_n h(x,A_n)$ where $A_n = g^{-1}(\set{\alpha_n})$ and hence $f$ is Borel-measurable as a linear combination of Borel-measurable functions. If $g$ is an arbitrary Borel-measurable function, we approximate it from below with simple functions $s_i$, $i \in \mathbb{N}$, and define $f_i\colon X \to [0,1]$ with $f_i(x) = \Int[y \in Y]{s_i(y)}[h(x,y)]$.
Then by the monotone convergence theorem (\cite[1.6.2]{ash}) we have $f(x) = \Int[y \in Y]{\lim_{i \to \infty}s_i(y)}[h(x,y)] = \lim_{i \to \infty}f_i(x)$. As shown before, each of the $f_i$ is Borel-measurable and thus also the function $f$ is Borel-measurable as pointwise limit of Borel-measurable functions. \qed

\section{Main Results}

\subsection{Continuous Probabilistic Transition Systems}

There is a wide variety of probabilistic transition systems \cite{Sokolova20115095,glabbeek}. We will deal with four slightly different versions of so-called \emph{generative} PTS. The underlying intuition is that, according to a sub-probability measure, an action from the alphabet $\mathcal{A}$ and a possible successor state are chosen. We distinguish between probabilistic branching according to sub-probability and probability measures and furthermore we treat systems with and without termination.

\begin{defi}[Probabilistic Transition System]
A \emph{probabilistic transition system}, short \emph{PTS}, is a tuple $(\ensuremath{\mathcal{A}}, X, \alpha)$ where $\ensuremath{\mathcal{A}}$ is a finite alphabet (endowed with $\powerset{\ensuremath{\mathcal{A}}}$ as $\sigma$-algebra), $X$ is the \emph{state space}, an arbitrary measurable space with $\sigma$-algebra $\Sigma_X$, and $\alpha$ is the \emph{transition function} which has one of the following forms and determines the type\footnote{The reason for choosing these symbols as type-identifiers will be revealed later in this paper.} of the PTS.
\begin{center}\begin{tabular}{p{5cm}|c}
\hline
Transition Function $\alpha$ & Type $\diamond$ of the PTS\\
\hline
$\alpha\colon X \to \mathbb{S}(\ensuremath{\mathcal{A}} \times X)$ & $0$\\
$\alpha\colon X \to \mathbb{S}(\ensuremath{\mathcal{A}} \times X + \mathbf{1})$ & $*$\\
$\alpha\colon X \to \mathbb{P}(\ensuremath{\mathcal{A}} \times X)$ & $\omega$\\
$\alpha\colon X \to \mathbb{P}(\ensuremath{\mathcal{A}} \times X + \mathbf{1})$ & $\infty$\\
\hline
\end{tabular}\end{center}
For every symbol $a \in \ensuremath{\mathcal{A}}$ we define a Markov kernel $\mathbf{P}_{a}\colon X \times \Sigma_X \to [0,1]$ where
\begin{align}
\P{a}{x}{S} := \alpha(x)(\set{a} \times S)\,.
\end{align}
Intuitively, $\P{a}{x}{S}$ is the probability of making an $a$-transition from the state $x \in X$ to any state $y \in S$. Whenever $X$ is a countable set and $\Sigma_X = \powerset{X}$ we call the PTS \emph{discrete}. The unique state $\checkmark \in \mathbf{1}$ -- whenever it is present -- denotes termination of the system.
\end{defi}

We will now take a look at a small example of an $\infty$-PTS before we continue with our theory.

\begin{exa}[Discrete PTS with Finite and Infinite Traces]
\label{ex:pts}
Let $\ensuremath{\mathcal{A}} = \set{a,b}$, $X = \set{0,1,2}$, $\Sigma_X = \powerset{X}$ and $\alpha \colon X \to \mathbb{P}(\ensuremath{\mathcal{A}} \times X + \mathbf{1})$ such that we obtain the following system.
\begin{center}
\begin{tikzpicture}[node distance=1.4 and 2.8, on grid, shorten >=1pt, >=stealth', semithick]
\node[state, inner sep=2pt, minimum size=20pt,draw](q0) {$0$};
\node[state, inner sep=2pt, minimum size=20pt,draw, right=of q0] (q1) {$1$};
\node[state, inner sep=2pt, minimum size=20pt,draw, below=of q1] (q2) {$2$};
\node[state, inner sep=2pt, minimum size=20pt,draw, right=of q1, accepting] (q3) {$\checkmark$};
\draw[->] (q0) edge[loop left] node[left] {$b,1$} (q0);
\draw[->] (q1) edge node[above] {$b, 1/3$} (q0);
\draw[->] (q1) edge node[left] {$a, 1/3$} (q2);
\draw[->] (q1) edge node[above] {$1/3$} (q3);
\draw[->] (q2) edge[loop left] node[left] {$a, 2/3$} (q2);
\draw[->] (q2) edge node[below] {$1/3$} (q3);
\end{tikzpicture}
\end{center}
\noindent As stated in the definition, $\checkmark$ is the unique final state. It has only incoming transitions bearing probabilities and no labels. The intuitive interpretation of these transitions can be stated as follows: ``From state $1$ the system terminates immediately with probability $1/3$''.
\end{exa}

\subsection{Towards Measurable Sets of Words: Cones and Semirings}
\label{sec:cones}

In order to define a trace measure on these probabilistic transition systems we need suitable $\sigma$-algebras on the sets of words. While the set of all finite words, ${\mathcal{A}^*}$, is rather simple -- we will take $\powerset{{\mathcal{A}^*}}$ as $\sigma$-algebra -- the set of all infinite words, ${\mathcal{A}^\omega}$, and also the set of all finite and infinite words, ${\mathcal{A}^\infty}$, need some consideration. For a word $u \in {\mathcal{A}^*}$ we call the set of all infinite words that have $u$ as a prefix the \emph{$\omega$-cone} of $u$, denoted by $\cone{\omega}{u}$, and similarly we call the set of all finite and infinite words having $u$ as a prefix the \emph{$\infty$-cone} \cite[p.~23]{Pan09} of $u$ and denote it with $\cone{\infty}{u}$. Sometimes, e.g. in \cite{baierkatoen2008}, these sets are also called ``cylinder sets''. A cone can be visualized in the following way: For a given alphabet $\ensuremath{\mathcal{A}} \not = \emptyset$ we consider the undirected, rooted and labelled tree given by $\mathcal{T} := (V, E,\epsilon, l)$ with infinitely many vertices $V := {\mathcal{A}^*}$, edges $E := \set{ \set{u, ua}\mid u \in \mathcal{A}^*, a \in \mathcal{A}}$, root $\epsilon \in \mathcal{A}^*$ and edge-labeling function $l \colon E \to \mathcal{A}, \set{u, ua} \mapsto a$. For $\mathcal{A} = \set{a,b,c}$ the first three levels of the tree can be depicted as follows:
\[\begin{xy}\xymatrix{
& &&& \ar@{-}[dlll]_a\epsilon\ar@{-}[d]_b\ar@{-}[drrr]^c\\
& \ar@{-}[dl]_a a \ar@{-}[d]_b\ar@{-}[dr]^c &&& \ar@{-}[dl]_a b \ar@{-}[d]_b\ar@{-}[dr]^c &&& \ar@{-}[dl]_a c \ar@{-}[d]_b\ar@{-}[dr]^c\\
aa & ab & ac & ba & bb & bc & ca & cb & cc
}\end{xy}
\]
Given a finite word $u \in \mathcal{A}^*$, the $\omega$-cone of $u$ is represented by the set of all infinite paths\footnote{Within this paper a path of an undirected graph $(V,E)$ is always considered to be \emph{simple}, i.e. any two vertices in a path are different.} that begin in $\epsilon$ and contain the vertex $u$ and the $\infty$-cone of $u$ is represented by the set of all finite and infinite paths that begin in $\epsilon$ and contain the vertex $u$ (and thus necessarily have a length which is greater than or equal to the length of $u$).
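Computationally, membership in a cone is nothing but a prefix check, and the first levels of the tree above can be enumerated directly. The following Haskell sketch is our own illustration and not part of the formal development; all names in it are ours:
\begin{verbatim}
-- A minimal sketch (our own): finite words as lists over an alphabet.
import Data.List (isPrefixOf)

type FinWord = String            -- an element of A*, e.g. "ab"

-- Is a word w (or any extension of it) in the cone of u,
-- i.e. is u a prefix of w?
inCone :: FinWord -> FinWord -> Bool
inCone u w = u `isPrefixOf` w

-- All vertices of the first n+1 levels of the tree, i.e. A^{<=n}.
levels :: [Char] -> Int -> [FinWord]
levels alphabet n = concatMap wordsOfLen [0 .. n]
  where wordsOfLen k = sequence (replicate k alphabet)
\end{verbatim}
For instance, \texttt{levels "abc" 2} enumerates exactly the thirteen vertices shown in the picture above, and \texttt{inCone "a" "ab"} holds because $ab \in \cone{\infty}{a}$. We now define cones formally.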
\begin{defi}[Cones]
Let $\ensuremath{\mathcal{A}}$ be a finite alphabet and let $\sqsubseteq \, \subset {\mathcal{A}^*} \times {\mathcal{A}^\infty}$ denote the usual prefix relation on words. For $u \in {\mathcal{A}^*}$ we define its $\omega$-\emph{cone} to be the set $\cone{\omega}{u}:=\set{v \in {\mathcal{A}^\omega}\mid u \sqsubseteq v }$ and analogously we define $\cone{\infty}{u}:=\set{v \in {\mathcal{A}^\infty}\mid u \sqsubseteq v }$, the $\infty$-\emph{cone} of $u$.
\end{defi}

With this definition at hand, we can now define the semirings we will use to generate $\sigma$-algebras on $\emptyset$, ${\mathcal{A}^*}$, ${\mathcal{A}^\omega}$ and ${\mathcal{A}^\infty}$.

\begin{defi}[Semirings of Sets of Words]
\label{def:semirings_of_words}
Let $\ensuremath{\mathcal{A}}$ be a finite alphabet. We define
\begin{align*}
\S_0 &:= \set{\emptyset} \subset \powerset{\emptyset},\\
\S_* &:= \set{\emptyset}\cup \set{\set{u}\mid u \in {\mathcal{A}^*}} \subset \powerset{\ensuremath{\mathcal{A}}^*},\\
\S_\omega &:= \set{\emptyset}\cup \set{\cone{\omega}{u}\mid u \in {\mathcal{A}^*}} \subset \powerset{\ensuremath{\mathcal{A}}^\omega},\\
{\mathcal{S}_\infty} &:= \set{\emptyset}\cup \set{\set{u}\mid u \in {\mathcal{A}^*}} \cup \set{\cone{\infty}{u}\mid u \in {\mathcal{A}^*}}\subset \powerset{{\mathcal{A}^\infty}}.
\end{align*}
\end{defi}

\noindent For the next proposition the fact that $\ensuremath{\mathcal{A}}$ is a finite alphabet is crucial.

\begin{prop}
\label{prop:semirings_of_words}
The sets $\S_0$, ${\mathcal{S}_*}$, ${\mathcal{S}_\omega}$ and ${\mathcal{S}_\infty}$ are covering semirings of sets.
\end{prop}

\proof
For $\S_0 = \set{\emptyset}$ nothing has to be shown. Obviously we have $\emptyset \in {\mathcal{S}_*}$ and for elements $\set{u}, \set{v} \in {\mathcal{S}_*}$ we remark that $\set{u} \cap \set{v}$ is either $\set{u}$ iff $u = v$ or $\emptyset$ else. Moreover, $\set{u} \setminus \set{v}$ is either $\emptyset$ iff $u=v$ or $\set{u}$ else. We proceed with the proof for ${\mathcal{S}_\infty}$; the proof for ${\mathcal{S}_\omega}$ can be carried out almost analogously (in fact, it is simpler). By definition we have $\emptyset \in {\mathcal{S}_\infty}$. An intersection $\cone{\infty}{u} \,\cap \cone{\infty}{v}$ is non-empty iff either $u \sqsubseteq v$ or $v \sqsubseteq u$ and is then equal to $\cone{\infty}{v}$ or to $\cone{\infty}{u}$ respectively and thus an element of $\mathcal{S}_\infty$. Similarly an intersection $\cone{\infty}{u} \cap \set{v}$ is non-empty iff $u \sqsubseteq v$ and is then equal to $\set{v} \in {\mathcal{S}_\infty}$. As before we have $\set{u} \cap \set{v} = \set{u}$ for $u=v$ and $\set{u} \cap \set{v} = \emptyset$ else. For the set difference $\cone{\infty}{u} \,\setminus \cone{\infty}{v}$ we note that this is either $\emptyset$ (iff $v \sqsubseteq u$) or $\cone{\infty}{u}$ (iff $v \not \sqsubseteq u$ and $u \not \sqsubseteq v$) or otherwise ($u \sqsubseteq v$) the following union\footnote{For $n \in \mathbb{N}$ we define $\ensuremath{\mathcal{A}}^{<n} := \set{u \in {\mathcal{A}^*} \mid |u| < n}$.} of finitely many disjoint sets in ${\mathcal{S}_\infty}$:
\begin{align*}
\cone{\infty}{u} \,\setminus \cone{\infty}{v} = \left(\bigcup\limits_{v' \in \ensuremath{\mathcal{A}}^{|v|}\setminus\set{v}, u \sqsubseteq v'} \! \cone{\infty}{v'}\right) \cup \left(\bigcup\limits_{v' \in \ensuremath{\mathcal{A}}^{< |v|},~u \sqsubseteq v'}\set{v'}\right).
\end{align*}
As before we get $\set{u}\setminus\set{v}=\emptyset$ iff $u=v$ and $\set{u}\setminus\set{v} = \set{u}$ else.
For $\set{u} \setminus \cone{\infty}{v}$ we observe that this is either $\set{u}$ iff $v \not \sqsubseteq u$ or $\emptyset$ else. Finally, $\cone{\infty}{u} \,\setminus \set{v}$ is either $\cone{\infty}{u}$ (iff $u \not \sqsubseteq v$) or ($u \sqsubseteq v$) the following union of finitely many disjoint sets in ${\mathcal{S}_\infty}$:
\begin{align*}
\cone{\infty}{u} \,\setminus \set{v} = \left(\bigcup\limits_{v' \in \ensuremath{\mathcal{A}}^{|v|}\setminus\set{v}, u \sqsubseteq v'} \! \cone{\infty}{v'}\right) \cup \left(\bigcup\limits_{v' \in \ensuremath{\mathcal{A}}^{< |v|},~u \sqsubseteq v'}\set{v'}\right) \cup \left(\bigcup\limits_{a \in \ensuremath{\mathcal{A}}} \! \cone{\infty}{va}\right)
\end{align*}
which completes the proof that the given sets are semirings. The countable (and even disjoint) covers are: $\emptyset = \emptyset$, ${\mathcal{A}^*} = \cup_{u \in {\mathcal{A}^*}}\set{u}$, ${\mathcal{A}^\omega} =\, \cone{\omega}{\epsilon}$ and ${\mathcal{A}^\infty} =\, \cone{\infty}{\epsilon}$. \qed

We remark that many interesting sets will be measurable in the $\sigma$-algebra generated by these cones. The singleton-set $\set{u}$ will be measurable for every $u \in {\mathcal{A}^\omega}$ because $\set{u} = \bigcap_{v \sqsubseteq u}\cone{\omega}{v} = \bigcap_{v \sqsubseteq u}\cone{\infty}{v}$ which are countable intersections, and (for $\infty$-cones only) the set ${\mathcal{A}^*} = \cup_{u \in {\mathcal{A}^*}}\set{u}$ and consequently also the set ${\mathcal{A}^\omega} = {\mathcal{A}^\infty} \setminus {\mathcal{A}^*}$ will be measurable. The latter will be useful to check to what ``extent'' a state of an $\infty$-PTS accepts finite or infinite behavior.

\subsection{Measurable Sets of Words}

Let us now take a closer look at the $\sigma$-algebras generated by the semirings which we defined in the last section. We obviously obtain the trivial $\sigma$-algebra $\sigalg[\emptyset]{\S_0} = \set{\emptyset}$. Since $\ensuremath{\mathcal{A}}$ is finite, ${\mathcal{A}^*}$ is countable and we can easily conclude $\sigalg[{\mathcal{A}^*}]{{\mathcal{S}_*}} = \powerset{{\mathcal{A}^*}}$. The other two cases need a more thorough treatment. For the remainder of this section let thus $\diamond \in \set{\omega,\infty}$. We will use the concepts of transfinite induction (cf. e.g. \cite{Dud89} for an introduction) to extend the semiring $\S_\diamond$ to the $\sigma$-algebra it generates. A similar construction is well-known and presented e.g. in \cite{Els07}. Usually this explicit construction is not needed but for our proofs it will turn out to be useful.

\begin{defi}
For any set $X$ and $\mathcal{G} \subseteq \powerset{X}$ let $\Unions{\mathcal{G}}$ and $\Intersections{\mathcal{G}}$ be the closure of $\mathcal{G}$ under countable unions and intersections. We define $\mathcal{R}_\diamond(0) :=\set{\cup_{n=1}^N S_n \mid N \in \mathbb{N}, S_n \in \S_\diamond \text{ disjoint}}$, $\mathcal{R}_\diamond(\alpha+1) := \Unions{\Intersections{\mathcal{R}_\diamond(\alpha)}}$ for every ordinal $\alpha$ and $\mathcal{R}_\diamond(\gamma) := \cup_{\alpha < \gamma} \mathcal{R}_\diamond(\alpha)$ for every limit ordinal $\gamma$.
\end{defi}

Obviously we have $\mathcal{R}_\diamond(\alpha) \subseteq \mathcal{R}_\diamond(\beta)$ for all ordinals $\alpha < \beta$. Since $\S_\diamond$ is a semiring of sets, it is easy to see that $\mathcal{R}_\diamond(0)$ is an \emph{algebra}, i.e. it contains the base set $\ensuremath{\mathcal{A}}^\diamond$, is closed under complement and binary (and hence all finite) unions and intersections.
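To make the closure under complement concrete, we note that the disjoint decompositions appearing in the proof of Proposition~\ref{prop:semirings_of_words} are effectively computable; e.g. $\compl{\cone{\infty}{v}} = \cone{\infty}{\epsilon} \setminus \cone{\infty}{v}$ is a finite disjoint union of semiring elements. The following Haskell sketch is our own illustration (all names in it are ours); it lists the generators of $\cone{\infty}{u} \setminus \cone{\infty}{v}$ for $u \sqsubseteq v$:
\begin{verbatim}
import Data.List (isPrefixOf)

-- Sketch (our own): generators of the finite disjoint decomposition of
-- ConeInf(u) \ ConeInf(v), where u is a prefix of v, as in the proof
-- above: cone generators at level |v| and singletons strictly below.
coneDiff :: [Char] -> String -> String -> ([String], [String])
coneDiff alphabet u v = (coneGens, singletons)
  where
    wordsOfLen k = sequence (replicate k alphabet)
    coneGens   = [ v' | v' <- wordsOfLen (length v)
                      , v' /= v, u `isPrefixOf` v' ]
    singletons = [ v' | k <- [0 .. length v - 1]
                      , v' <- wordsOfLen k, u `isPrefixOf` v' ]
\end{verbatim}
For example, \texttt{coneDiff "ab" "" "a"} yields the cone generator \texttt{"b"} and the singleton \texttt{""}, i.e. $\compl{\cone{\infty}{a}} = \cone{\infty}{b} \cup \set{\epsilon}$, a finite disjoint union of semiring elements as claimed.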
\begin{lem}
\label{lem:complementsLimit}
$A \in \mathcal{R}_\diamond(\gamma) \implies \compl{A} \in \mathcal{R}_\diamond(\gamma)$ for every limit ordinal $\gamma$.
\end{lem}

\proof
We will show that $A \in \mathcal{R}_\diamond(\alpha) \implies \compl{A} \in \Intersections{\mathcal{R}_\diamond(\alpha)}$ for every ordinal $\alpha$. This is true for the algebra $\mathcal{R}_\diamond(0)$. Now let $\alpha$ be an ordinal satisfying the implication and let $A \in \mathcal{R}_\diamond(\alpha+1)$. Then $A = \cup_{m=1}^\infty \cap_{n=1}^\infty A_{m,n}$ with $A_{m,n} \in \mathcal{R}_\diamond(\alpha)$ and by De Morgan's rules $\compl{A} = \cap_{m=1}^\infty \cup_{n=1}^\infty \compl{A_{m,n}}$ where by hypothesis $\compl{A_{m,n}} \in \Intersections{\mathcal{R}_\diamond(\alpha)}$, thus $\cup_{n=1}^\infty \compl{A_{m,n}} \in \Unions{\Intersections{\mathcal{R}_\diamond(\alpha)}} = \mathcal{R}_\diamond(\alpha+1)$ and therefore $\compl{A} \in \Intersections{\mathcal{R}_\diamond(\alpha+1)}$. Finally, let $\gamma$ be a limit ordinal and suppose the implication holds for all ordinals $\alpha < \gamma$. For any $B\in \mathcal{R}_\diamond(\gamma)$ there is a $\beta < \gamma$ such that $B \in \mathcal{R}_\diamond(\beta)$. Hence we have $\compl{B} \in \Intersections{\mathcal{R}_\diamond(\beta)} \subseteq \Unions{\Intersections{\mathcal{R}_\diamond(\beta)}} = \mathcal{R}_\diamond(\beta+1) \subseteq \mathcal{R}_\diamond(\gamma)$, where the last inclusion holds because $\beta+1 < \gamma$ for the limit ordinal $\gamma$. \qed

\begin{lem}
\label{lem:finite_union_intersection}
$A, B \in \mathcal{R}_\diamond(\alpha) \implies A\cup B, A \cap B \in \mathcal{R}_\diamond(\alpha)$ for every ordinal $\alpha$.
\end{lem}

\proof
This is true for the algebra $\mathcal{R}_\diamond(0)$. Let $\alpha$ be an ordinal satisfying the implication and $A, B \in \mathcal{R}_\diamond(\alpha+1)$, then $A = \cup_{k=1}^\infty \cap_{l=1}^\infty A_{k,l}$ and $B = \cup_{m=1}^\infty \cap_{n=1}^\infty B_{m,n}$ with $A_{k,l}, B_{m,n} \in \mathcal{R}_\diamond(\alpha)$. Obviously $A \cup B = \cup_{k,m=1}^\infty \cap_{l,n=1}^\infty ( A_{k,l} \cup B_{m,n})$ and $A \cap B = \cup_{k,m=1}^\infty \cap_{l,n=1}^\infty ( A_{k,l} \cap B_{m,n})$ where by hypothesis $A_{k,l} \cup B_{m,n}, A_{k,l} \cap B_{m,n} \in \mathcal{R}_\diamond(\alpha)$. Let $\gamma$ be a limit ordinal, suppose the statement is true for all $\alpha < \gamma$ and let $A, B \in \mathcal{R}_\diamond(\gamma)$. There must be ordinals $\alpha, \beta < \gamma$ such that $A \in \mathcal{R}_\diamond(\alpha)$ and $B \in \mathcal{R}_\diamond(\beta)$. Assume w.l.o.g. $\alpha \leq \beta$; then $A \in \mathcal{R}_\diamond(\beta)$ and hence $A \cup B, A \cap B \in \mathcal{R}_\diamond(\beta) \subseteq \mathcal{R}_\diamond(\gamma)$ which completes the proof. \qed

\begin{lem}
\label{lem:IntersectionIsClosedUnderFiniteUnion}
$A, B \in \Intersections{\mathcal{R}_\diamond(\alpha)} \implies A\cup B \in \Intersections{\mathcal{R}_\diamond(\alpha)}$ for every ordinal $\alpha$.
\end{lem}

\proof
Let $A, B \in \Intersections{\mathcal{R}_\diamond(\alpha)}$, then $A = \cap_{m=1}^\infty A_m$ and $B = \cap_{n=1}^\infty B_n$ with $A_m, B_n \in \mathcal{R}_\diamond(\alpha)$. Then $A \cup B = \cap_{m,n=1}^\infty (A_m \cup B_n)$ where $A_m \cup B_n \in \mathcal{R}_\diamond(\alpha)$ by Lemma \ref{lem:finite_union_intersection} and thus $A \cup B \in \Intersections{\mathcal{R}_\diamond(\alpha)}$. \qed

\begin{prop}
\label{prop:TransFiniteSigAlg}
$\sigalg[\ensuremath{\mathcal{A}}^\diamond]{\mathcal{R}_\diamond(0)} = \mathcal{R}_\diamond(\omega_1)$ where $\omega_1$ is the smallest uncountable limit ordinal.
\end{prop}

\proof[Proof (adapted from \cite{Els07}).]
Throughout this proof we write $X := \ensuremath{\mathcal{A}}^\diamond$ for brevity. We first show $\mathcal{R}_\diamond(\omega_1) \subseteq \sigalg[X]{\mathcal{R}_\diamond(0)}$. We know that $\mathcal{R}_\diamond(0) \subseteq \sigalg[X]{\mathcal{R}_\diamond(0)}$. For an ordinal $\alpha$ with $\mathcal{R}_\diamond(\alpha) \subseteq \sigalg[X]{\mathcal{R}_\diamond(0)}$ let $A \in \mathcal{R}_\diamond(\alpha+1)$. Then $A = \cup_{m=1}^\infty\cap_{n=1}^\infty A_{m,n}$ with $A_{m,n} \in \mathcal{R}_\diamond(\alpha)$ yielding $A \in \sigalg[X]{\mathcal{R}_\diamond(0)}$. If $\gamma$ is a limit ordinal with $\mathcal{R}_\diamond(\alpha) \subseteq \sigalg[X]{\mathcal{R}_\diamond(0)}$ for all ordinals $\alpha < \gamma$ then for any $A \in \mathcal{R}_\diamond(\gamma)$ there must be an ordinal $\alpha < \gamma$ such that $A \in \mathcal{R}_\diamond(\alpha)$ and hence $A \in \sigalg[X]{\mathcal{R}_\diamond(0)}$. In order to show $\mathcal{R}_\diamond(\omega_1) \supseteq \sigalg[X]{\mathcal{R}_\diamond(0)}$ it suffices to show that $\mathcal{R}_\diamond(\omega_1)$ is a $\sigma$-algebra. We have $X \in \mathcal{R}_\diamond(0) \subseteq \mathcal{R}_\diamond(\omega_1)$ and Lemma \ref{lem:complementsLimit} yields closure under complements. Let $A_n \in \mathcal{R}_\diamond(\omega_1)$ for $n \in \mathbb{N}$. Then for each $A_n$ we have an $\alpha_n$ such that $A_n \in \mathcal{R}_\diamond(\alpha_n)$. Since $\omega_1$ is the first uncountable ordinal, we can find an $\alpha < \omega_1$ such that $\alpha_n < \alpha$ for all $n \in \mathbb{N}$. Hence we have $A_n \in \mathcal{R}_\diamond(\alpha)$ for all $n \in \mathbb{N}$. Thus $\cup_{n=1}^\infty A_n \in \mathcal{R}_\diamond(\alpha+1) \subseteq \mathcal{R}_\diamond(\omega_1)$. \qed

\subsection{The Trace Measure}

We will now define the trace measure which can be understood as the behavior of a state: it measures the probability of accepting a set of words.

\begin{defi}[The Trace Measure]
\label{def:trace_premeasure}
Let $(\ensuremath{\mathcal{A}}, X, \alpha)$ be a $\diamond$-PTS. For every state $x \in X$ we define the trace (sub-)probability measure $\mathbf{tr}(x) \colon \sigalg[\ensuremath{\mathcal{A}}^\diamond]{\mathcal{S}_\diamond} \to [0,1]$ as follows: In all four cases we require $\mathbf{tr}(x)(\emptyset) = 0$. For $\diamond \in \set{*, \infty}$ we define
\begin{align}
\mathbf{tr}(x)(\set{\epsilon}) = \alpha(x)(\mathbf{1}) \label{eq:trace_emptyword}
\end{align}
and
\begin{align}
\mathbf{tr}(x)\big(\set{au}\big) := \Int[{x' \in X}]{\mathbf{tr}(x')(\set{u})}[\P{a}{x}{x'}] \label{eq:trace_main_equation}
\end{align}
for all $a \in \ensuremath{\mathcal{A}}$ and all $u \in {\mathcal{A}^*}$. For $\diamond \in \set{\omega, \infty}$ we define
\begin{align}
\mathbf{tr}(x)(\cone{\diamond}{\epsilon}) = 1\label{eq:trace_wholespace}
\end{align}
and
\begin{align}
\mathbf{tr}(x)\big(\cone{\diamond}{au}\big) := \Int[{x' \in X}]{\mathbf{tr}(x')(\cone{\diamond}{u})}[\P{a}{x}{x'}] \label{eq:trace_main_equation2}
\end{align}
for all $a \in \ensuremath{\mathcal{A}}$ and all $u \in {\mathcal{A}^*}$.
\end{defi}

We need to verify that everything is well-defined and sound. In the next proposition we explicitly state what has to be shown.

\begin{prop}
\label{prop:trace_premeasure}
For all four types $\diamond \in \set{0,*,\omega, \infty}$ of PTS the equations in Definition~\ref{def:trace_premeasure} yield a $\sigma$-finite pre-measure $\mathbf{tr}(x)\colon \S_\diamond \to [0,1]$ for every $x \in X$. Moreover, the unique extension of this pre-measure is a (sub-)probability measure.
\end{prop}

Before we prove this proposition, let us try to get a more intuitive understanding of Definition~\ref{def:trace_premeasure} and especially equation \eqref{eq:trace_main_equation}. First we check how the above definition reduces when we consider discrete systems.

\begin{rem}
Let $(\ensuremath{\mathcal{A}}, X, \alpha)$ be a discrete\footnote{If $Z$ is a countable set and $\mu\colon \powerset{Z} \to [0,1]$ is a measure, we write $\mu(z)$ for $\mu(\set{z})$.} $*$-PTS, i.e. $X$ is a countable set with $\sigma$-algebra $\powerset{X}$ and the transition probability function is $\alpha \colon X \to \mathbb{S}(\ensuremath{\mathcal{A}} \times X + \mathbf{1})$. Then $\mathbf{tr}(x)(\epsilon) := \alpha(x)(\checkmark)$ and \eqref{eq:trace_main_equation} is equivalent to
\begin{align}
\quad \mathbf{tr}(x)(au) := \sum_{x' \in X} \mathbf{tr}(x')(u) \cdot \P{a}{x}{x'}
\end{align}
for all $a \in \ensuremath{\mathcal{A}}$ and all $u \in {\mathcal{A}^*}$ which in turn is equivalent to the discrete ``trace distribution'' presented in \cite{Hasuo06generictrace} for the sub-distribution monad $\mathcal{D}$ on $\mathbf{Set}$.
\end{rem}

Having seen this coincidence with known results, we proceed to calculate the trace measure for our example (Example \ref{ex:pts}) which we can only do in our more general setting because this $\infty$-PTS is a discrete probabilistic transition system which exhibits both finite and infinite behavior.

\begin{exa}[Example \ref{ex:pts} continued.]
We calculate the trace measures for the $\infty$-PTS from Example \ref{ex:pts}. We have $\mathbf{tr}(0) = \delta_{b^\omega}^{\mathcal{A}^\infty}$ because
\begin{align*}
\mathbf{tr}(0)(\set{b^\omega}) &= \mathbf{tr}(0)\left(\bigcap_{k=0}^{\infty}\cone{\infty}{b^k}\right)=\mathbf{tr}(0)\left({\mathcal{A}^\infty} \setminus \bigcup_{k=0}^{\infty}\left({\mathcal{A}^\infty} \setminus \cone{\infty}{b^k}\right)\right) \\
&= \mathbf{tr}(0)\left({\mathcal{A}^\infty}\right) - \mathbf{tr}(0)\left(\bigcup_{k=0}^{\infty}\left({\mathcal{A}^\infty} \setminus \cone{\infty}{b^k}\right)\right) \geq 1 - \sum_{k=0}^{\infty}\mathbf{tr}(0)\left({\mathcal{A}^\infty} \setminus \cone{\infty}{b^k}\right)\\
&= 1 - \sum_{k=0}^{\infty} \left(1- \mathbf{tr}(0)\left(\cone{\infty}{b^k}\right)\right) = 1-\sum_{k=0}^{\infty}(1-1) = 1
\end{align*}
and since $\mathbf{tr}(0)$ is a probability measure the inequality above must in fact be an equality. Thus we have $\mathbf{tr}(0)({\mathcal{A}^*}) = \mathbf{tr}(0)\left(\cup_{u \in {\mathcal{A}^*}}\set{u}\right) = 0$ and $\mathbf{tr}(0)({\mathcal{A}^\omega}) = 1$. By induction we can show that $\mathbf{tr}(2)(\set{a^k}) = (1/3) \cdot (2/3)^k$ and thus $\mathbf{tr}(2)({\mathcal{A}^*}) = 1$ because
\begin{align*}
1 \geq \mathbf{tr}(2)({\mathcal{A}^*}) = \mathbf{tr}(2)\left(\bigcup_{u \in {\mathcal{A}^*}} \set{u}\right)\geq \mathbf{tr}(2)\left(\bigcup_{k=0}^\infty \set{a^k}\right)= \frac{1}{3}\cdot\sum_{k=0}^\infty\left(\frac{2}{3}\right)^k = 1
\end{align*}
and hence $\mathbf{tr}(2)({\mathcal{A}^\omega}) = 0$. Furthermore we calculate $\mathbf{tr}(1)(\set{b^\omega})= 1/3$, $\mathbf{tr}(1)(\cone{\infty}{a}) = 1/3$ and $\mathbf{tr}(1)(\set{\epsilon}) = 1/3$ yielding $\mathbf{tr}(1)({\mathcal{A}^*}) = 2/3$ and $\mathbf{tr}(1)({\mathcal{A}^\omega}) = 1/3$.
\end{exa}

Recall that we still have to prove Proposition~\ref{prop:trace_premeasure}. In order to simplify this proof, we provide a few technical results about the sets ${\mathcal{S}_*}$, ${\mathcal{S}_\omega}$, ${\mathcal{S}_\infty}$. For all these results remember again that $\ensuremath{\mathcal{A}}$ is required to be a \emph{finite} alphabet. Before we do so, we briefly note that the discrete recursion from the remark above is directly executable; see the sketch below.
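The following Haskell sketch is our own illustration of Example~\ref{ex:pts} (the encoding and all names in it are ours; only finite words are treated); it computes $\mathbf{tr}(x)(\set{u})$:
\begin{verbatim}
-- Sketch (our own) of the discrete recursion
--   tr(x)({eps}) = alpha(x)(1)
--   tr(x)({au})  = sum over x' of tr(x')({u}) * P_a(x,{x'})
-- for the example system above.
type State = Int

trans :: State -> [(Char, State, Double)]  -- (label, successor, prob.)
trans 0 = [('b', 0, 1)]
trans 1 = [('b', 0, 1/3), ('a', 2, 1/3)]
trans 2 = [('a', 2, 2/3)]
trans _ = []

term :: State -> Double                    -- alpha(x)(1)
term 1 = 1/3
term 2 = 1/3
term _ = 0

tr :: State -> String -> Double            -- tr x u  =  tr(x)({u})
tr x []    = term x
tr x (a:u) = sum [ p * tr x' u | (b, x', p) <- trans x, b == a ]
\end{verbatim}
For instance, \texttt{tr 2 "aa"} evaluates to $(2/3)^2 \cdot 1/3$ and \texttt{tr 1 ""} to $1/3$, in accordance with the computations above. We now return to the announced technical results.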
The finiteness of $\ensuremath{\mathcal{A}}$ is a crucial point, particularly in the next lemma.

\begin{lem}[Countable Unions]
\label{lem:union_cones}
Let $(S_n)_{n \in \mathbb{N}}$ be a sequence of pairwise disjoint sets in ${\mathcal{S}_\omega}$ or in ${\mathcal{S}_\infty}$ such that their union, $\cup_{n \in \mathbb{N}}S_n$, is itself an element of ${\mathcal{S}_\omega}$ or ${\mathcal{S}_\infty}$. Then $S_n = \emptyset$ for all but finitely many $n$.
\end{lem}

\proof
We have several cases to consider.\\
\emph{Case 1:} If $\cup_{n \in \mathbb{N}}S_n = \emptyset \in \S_\diamond$ for $\diamond \in \{\omega, \infty\}$, we have $S_n = \emptyset$ for all $n \in \mathbb{N}$.\\
\emph{Case 2:} If $\cup_{n \in \mathbb{N}}S_n = \{u\} \in {\mathcal{S}_\infty}$ with suitable $u \in {\mathcal{A}^*}$ we get $S_n=\emptyset$ for all but one $n \in \mathbb{N}$ since the $S_n$ are disjoint.\\
\emph{Case 3:} Let $\cup_{n \in \mathbb{N}}S_n =~ \cone{\diamond}{u}$ with a suitable $u \in {\mathcal{A}^*}$ for $\diamond \in \{\omega, \infty\}$. Suppose there are infinitely many $n \in \mathbb{N}$ such that $S_n \not = \emptyset$. Without loss of generality we can assume $S_n\not=\emptyset$ for all $n \in \mathbb{N}$ and thus there is an infinite set $U :=\set{u_n\mid n \in \mathbb{N}}$ of words such that for each $n \in \mathbb{N}$ we either have $S_n = \{u_n\}$ (only for $\diamond=\infty$) or $S_n =~\cone{\diamond}{u_n}$ (for $\diamond \in \{\omega, \infty\}$). Necessarily we have $u \sqsubseteq u_n$ for all $n \in \mathbb{N}$. We will now revive our tree metaphor from Section \ref{sec:cones}: The prefix-closure $\mathrm{pref}(U) = \set{v \in {\mathcal{A}^*} \mid \exists n \in \mathbb{N}: v \sqsubseteq u_n}$ of $U$ is the set of vertices contained in the paths from the root $\epsilon$ (via $u$) to $u_n$. We consider the subtree $\mathcal{T'} = (\mathrm{pref}(U), E', \epsilon, l|_{E'})$ with $E' = \set{\set{u,ua} \mid a \in \ensuremath{\mathcal{A}}, u, ua \in \mathrm{pref}(U)}$. Since the set $U$ and hence also $\mathrm{pref}(U)$ is infinite, we have thus constructed an infinite, connected graph where every vertex has finite degree (because $\ensuremath{\mathcal{A}}$ is finite). By König's Lemma \cite[Satz 3]{koenigd} there is an infinite path starting at the root $\epsilon$. Let $v \in {\mathcal{A}^\omega}$ be the unique infinite word associated to that path (which we get by concatenating all the labels along this path). Since $u \sqsubset v$ we must have $v \in ~\cone{\diamond}{u}$. Moreover, we know that $~\cone{\diamond}{u} = \cup_{n \in \mathbb{N}} S_n$ and due to the fact that the $S_n$ are pairwise disjoint we must find a unique $m \in \mathbb{N}$ with $v \in S_m$. This necessarily requires $S_m$ to be a cone of the form $S_m = ~\cone{\diamond}{u_m}$ with $u_m \in U$ and $u_m \sqsubset v$. Again due to the fact that the $S_n$ are disjoint we know that there cannot be a $u' \in U$ with $u_m \sqsubset u'$ and hence there also cannot be a $u' \in \mathrm{pref}(U)$ with $u_m \sqsubset u'$. Thus the vertex $u_m$ is a leaf of the tree $\mathcal{T}'$ and therefore the finite path from $\epsilon$ to $u_m$ is the only path from $\epsilon$ that contains $u_m$. This contradicts the existence of $v$ because this path is infinite and contains $u_m$. Hence our assumption must have been wrong and there cannot be infinitely many $n \in \mathbb{N}$ with $S_n \not = \emptyset$. \qed

\begin{lem}
\label{lem:premeas_singletons}
Any map $\mu \colon {\mathcal{S}_*} \to \overline{\mathbb{R}}_+$ where $\mu(\emptyset) = 0$ is $\sigma$-additive and thus a pre-measure.
\end{lem}

\proof
Let $\left(S_n\right)_{n \in \mathbb{N}}$ be a family of disjoint sets from ${\mathcal{S}_*}$ with $\left(\cup_{n \in \mathbb{N}}S_n\right) \in {\mathcal{S}_*}$, then we have $S_n = \emptyset$ for all but at most one $n \in \mathbb{N}$ and $\sigma$-additivity holds trivially. \qed

\begin{lem}
\label{lem:premeas_omega_cone}
A map $\mu\colon \S_\omega \to \overline{\mathbb{R}}_+$ where $\mu(\emptyset) = 0$ is $\sigma$-additive and thus a pre-measure if and only if the following equation holds for all $u \in {\mathcal{A}^*}$.
\begin{align}
\mu\left(\cone{\omega}{u}\right) = \sum_{a \in \ensuremath{\mathcal{A}}}\mu\left(\cone{\omega}{ua}\right)\label{eq:premeas_omega_cone}
\end{align}
\end{lem}

\noindent We omit the proof of this lemma as it is very similar to the proof of the following lemma.

\begin{lem}
\label{lem:premeas_infty_cone}
A map $\mu\colon {\mathcal{S}_\infty} \to \overline{\mathbb{R}}_+$ where $\mu(\emptyset) = 0$ is $\sigma$-additive and thus a pre-measure if and only if the following equation holds for all $u \in {\mathcal{A}^*}$.
\begin{align}
\mu\left(\cone{\infty}{u}\right) = \mu\left(\set{u}\right) + \sum_{a \in \ensuremath{\mathcal{A}}}\mu\left(\cone{\infty}{ua}\right)\label{eq:premeas_infty_cone}
\end{align}
\end{lem}

\proof
Obviously $\sigma$-additivity of $\mu$ implies equality \eqref{eq:premeas_infty_cone}. Let now $\left(S_n\right)_{n \in \mathbb{N}}$ be a family of disjoint sets from ${\mathcal{S}_\infty}$ with $\left(\cup_{n \in \mathbb{N}}S_n\right) \in {\mathcal{S}_\infty}$. Using Lemma~\ref{lem:union_cones} we know that (after reordering) we can assume that there is an $N \in \mathbb{N}$ such that $S_n \not= \emptyset$ for $1 \leq n \leq N$ and $S_n = \emptyset$ for $n > N$. For non-trivial cases (trivial means $S_n = \emptyset$ for all but one set) there must be a word $u \in {\mathcal{A}^*}$ such that $\cone{\infty}{u} = \left(\cup_{n = 1}^NS_n\right)$. Because $u$ is an element of $\cone{\infty}{u}$ there must be a natural number $m$ with $u \in S_m$ which is unique because the family is disjoint. Without loss of generality we assume that $u \in S_1$. By construction of ${\mathcal{S}_\infty}$ and the fact that $\cup_{n=1}^NS_n =~\cone{\infty}{u}$ there are two cases to consider: either $S_1 = \set{u}$ or $S_1=~\cone{\infty}{u}$. The latter cannot be true since this would imply $S_n = \emptyset$ for $n\geq 2$ which we explicitly excluded. Thus we have $S_1 = \set{u}$. We remark that
\begin{align*}
\bigcup_{a \in \ensuremath{\mathcal{A}}} \cone{\infty}{ua} =~\cone{\infty}{u}\setminus \set{u} = \left(\bigcup_{n=2}^NS_n\right).
\end{align*}
Again by construction of ${\mathcal{S}_\infty}$ we must be able to select sets $S_k^a \in \set{S_n \mid 2 \leq n \leq N}$ for all $a \in \ensuremath{\mathcal{A}}$ and all $k$ where $1 \leq k \leq K_a < N$ for a constant $K_a$ such that $\cup_{k =1}^{K_a} S_k^a =~\cone{\infty}{ua}$. This selection is unique in the following sense: For $a,b\in\ensuremath{\mathcal{A}}$ where $a \not = b$ and $1\leq k \leq K_a$, $1\leq l \leq K_b$ we have $S_k^a \not= S_l^b$. Additionally it is complete in the sense that $\set{S_k^a\mid a \in \ensuremath{\mathcal{A}}, 1 \leq k \leq K_a} = \set{S_n\mid 2 \leq n \leq N}$.
We apply our equation \eqref{eq:premeas_infty_cone} to get
\begin{align*}
\mu\left(\bigcup_{n=1}^NS_n\right)= \mu\left(\cone{\infty}{u}\right) = \mu\left(S_1\right) + \sum_{a \in \ensuremath{\mathcal{A}}}\mu\left(\bigcup_{k =1}^{K_a} S_k^a\right)
\end{align*}
and note that we can repeat the procedure for each of the disjoint unions $\cup_{k=1}^{K_a} S_k^a$. Since $K_a < N$ for all $a$ this procedure stops after finitely many steps, yielding $\sigma$-additivity of $\mu$. \qed

Using these results, we can now finally prove Proposition~\ref{prop:trace_premeasure}.

\proof[Proof of Proposition~\ref{prop:trace_premeasure}]
We will look at the different types of PTS separately. For $\diamond = 0$ nothing has to be shown because $\sigalg[\emptyset]{\set{\emptyset}} = \set{\emptyset}$ and $\mathbf{tr}(x)\colon \set{\emptyset} \to [0,1]$ is already uniquely defined by $\mathbf{tr}(x)(\emptyset) = 0$. For $\diamond = *$ Lemma~\ref{lem:premeas_singletons} yields immediately that the equations define a pre-measure. For $\diamond = \infty$ we have to check validity of equation \eqref{eq:premeas_infty_cone} of Lemma~\ref{lem:premeas_infty_cone}. We will do so using induction on the length of the word $u \in {\mathcal{A}^*}$ in that equation. We have
\begin{align*}
&\mathbf{tr}(x)(\cone{\infty}{\epsilon}) = 1 = \alpha(x)(\ensuremath{\mathcal{A}} \times X + \mathbf{1}) = \alpha(x)(\mathbf{1}) + \sum_{a \in \ensuremath{\mathcal{A}}}\P{a}{x}{X} \\
&= \mathbf{tr}(x)(\set{\epsilon}) + \sum_{a \in \ensuremath{\mathcal{A}}}\Int[x'\in X]{1}[\P{a}{x}{x'}]\\
&= \mathbf{tr}(x)(\set{\epsilon}) + \sum_{a \in \ensuremath{\mathcal{A}}}\Int[x'\in X]{\mathbf{tr}(x')(\cone{\infty}{\epsilon})}[\P{a}{x}{x'}]\\
&=\mathbf{tr}(x)(\set{\epsilon}) + \sum_{a \in \ensuremath{\mathcal{A}}}\mathbf{tr}(x)(\cone{\infty}{a\epsilon}) = \mathbf{tr}(x)(\set{\epsilon}) + \sum_{a \in \ensuremath{\mathcal{A}}}\mathbf{tr}(x)(\cone{\infty}{\epsilon a})
\end{align*}
for all $x \in X$. Now let us assume that for all $x \in X$ and all words $u \in \ensuremath{\mathcal{A}}^{\leq{n}}$ of length less than or equal to a fixed $n \in \mathbb{N}$ the induction hypothesis
\begin{align*}
\mathbf{tr}(x)(\cone{\infty}{u}) = \mathbf{tr}(x)(\set{u}) + \sum_{b \in \ensuremath{\mathcal{A}}} \mathbf{tr}(x)(\cone{\infty}{ub})
\end{align*}
is fulfilled. Then for all $x \in X$, all $a \in \ensuremath{\mathcal{A}}$ and all $u \in \ensuremath{\mathcal{A}}^{\leq n}$ we calculate
\begin{align*}
&\mathbf{tr}(x)(\cone{\infty}{au}) = \Int[{x' \in X}]{\mathbf{tr}(x')(\cone{\infty}{u})}[\P{a}{x}{x'}] \\
&\quad = \Int[{x' \in X}]{\left(\mathbf{tr}(x')(\set{u}) + \sum_{b \in \ensuremath{\mathcal{A}}} \mathbf{tr}(x')(\cone{\infty}{ub})\right)}[\P{a}{x}{x'}]\\
&\quad = \Int[{x' \in X}]{\mathbf{tr}(x')(\set{u})}[\P{a}{x}{x'}] + \sum_{b \in \ensuremath{\mathcal{A}}} \Int[x'\in X]{\mathbf{tr}(x')(\cone{\infty}{ub})}[\P{a}{x}{x'}]\\
&\quad = \mathbf{tr}(x)(\set{au}) + \sum_{b \in \ensuremath{\mathcal{A}}}\mathbf{tr}(x)(\cone{\infty}{aub})
\end{align*}
and hence equation \eqref{eq:premeas_infty_cone} is also fulfilled for $au \in \ensuremath{\mathcal{A}}^{\leq {n+1}}$ and by induction we conclude that it is valid for all $u \in {\mathcal{A}^*}$. The only difficult case is $\diamond = \omega$ where we will, of course, apply Lemma~\ref{lem:premeas_omega_cone}.
Let $u = u_1\dots u_m$ with $u_k \in \ensuremath{\mathcal{A}}$ for every $k \in \mathbb{N}$ with $k \leq m$; then multiple application of the defining equation \eqref{eq:trace_main_equation} yields
\begin{align*}
\mathbf{tr}(x)\big(\cone{\omega}{u}\big) &= \int\limits_{x_1 \in X}\hdots\int\limits_{x_m \in X}\!1\,\mathrm{d}\P{u_m}{x_{m-1}}{x_m}\hdots\mathrm{d}\P{u_1}{x}{x_1}
\end{align*}
and for arbitrary $a \in \mathcal{A}$ we obtain analogously:
\begin{align*}
\mathbf{tr}(x)\big(\cone{\omega}{ua}\big) &= \int\limits_{x_1 \in X}\hdots\int\limits_{x_m \in X}\!\P{a}{x_m}{X}\,\mathrm{d}\P{u_m}{x_{m-1}}{x_m}\hdots\mathrm{d}\P{u_1}{x}{x_1}\,.
\end{align*}
All integrals exist and are bounded above by $1$ so we can use the linearity and monotonicity of the integral to exchange the finite sum and the integrals. Using the fact that
\begin{align*}
\sum_{a \in \mathcal{A}}\P{a}{x_m}{X} = \sum_{a \in \mathcal{A}}\alpha(x_m)(\set{a} \times X) = \alpha(x_m)(\ensuremath{\mathcal{A}} \times X) = 1
\end{align*}
we obtain that indeed the necessary and sufficient equality
\begin{align*}
\mathbf{tr}(x)\big(\cone{\omega}{u}\big) = \sum_{a \in \mathcal{A}}\mathbf{tr}(x)\big(\cone{\omega}{ua}\big)
\end{align*}
is valid for all $u \in \mathcal{A}^*$ and thus Lemma~\ref{lem:premeas_omega_cone} yields that also $\mathbf{tr}(x)\colon {\mathcal{S}_\omega} \to \overline{\mathbb{R}}_+$ is $\sigma$-additive and thus a pre-measure. Now let us check that the pre-measures for $\diamond \in \set{*, \omega, \infty}$ are $\sigma$-finite and that their unique extensions must be (sub-)probability measures. For $\diamond \in \set{\omega, \infty}$ this is obvious and in these cases the unique extension must be a probability measure because by definition we have $\mathbf{tr}(x)({\mathcal{A}^\omega}) = 1$ and $\mathbf{tr}(x)({\mathcal{A}^\infty}) = 1$ respectively. For the remaining case ($\diamond = *$) we will use induction. We have $\mathbf{tr}(x)(\{\epsilon\}) = \alpha(x)(\mathbf{1}) \leq 1$ for every $x \in X$. Let us now assume that for a fixed but arbitrary $n \in \mathbb{N}$ the inequality $\mathbf{tr}(x)(\{u\}) \leq 1$ is valid for all $x \in X$ and all words $u \in \mathcal{A}^{\leq n}$ of length less than or equal to $n$. Then for any word $u' \in \mathcal{A}^{n+1}$ of length $n+1$ we have $u' = au$ with $a \in \mathcal{A}$ and $u \in \mathcal{A}^n$. We observe that
\begin{align*}
\mathbf{tr}(x)(\{au\}) = \Int[{x' \in X}]{\underbrace{\mathbf{tr}(x')(\{u\})}_{\leq 1}}[\P{a}{x}{x'}] \leq \Int{1}[\P{a}{x}{x'}]=\P{a}{x}{X} \leq 1
\end{align*}
and conclude by induction that $\mathbf{tr}(x)(\{u\}) \leq 1$ is valid for all $u \in \mathcal{A}^*$ and all $x \in X$. Due to the fact that $\mathcal{A}^* = \cup_{u \in \mathcal{A}^*} \{u\}$ this yields that $\mathbf{tr}(x)$ is $\sigma$-finite. Again by induction we will show that $\mathbf{tr}(x)$ is bounded above by $1$ and thus a sub-probability measure. We have $\mathbf{tr}(x)\left(\mathcal{A}^{\leq 0}\right) = \mathbf{tr}(x)(\{\epsilon\})\leq 1$ for all $x \in X$. Suppose that for a fixed but arbitrary $n \in \mathbb{N}$ the inequality $\mathbf{tr}(x)\left(\mathcal{A}^{\leq n-1}\right) \leq 1$ holds for all $x \in X$.
We conclude with the following calculation
\begin{align*}
\mathbf{tr}(x)\left(\mathcal{A}^{\leq n}\right) &= \mathbf{tr}(x)\left( \cup_{u \in \mathcal{A}^{\leq n}} \{u\}\right) = \sum\limits_{u \in \mathcal{A}^{\leq n}} \mathbf{tr}(x)\left( \{u\}\right)\\
&= \mathbf{tr}(x)(\{\epsilon\}) + \sum_{a \in \mathcal{A}} \sum_{u \in \mathcal{A}^{\leq n-1}} \mathbf{tr}(x)\left(\{au\}\right) \\
&= \alpha(x)(\mathbf{1}) + \sum\limits_{a \in \mathcal{A}} \sum\limits_{u \in \mathcal{A}^{\leq n-1}} \Int{\mathbf{tr}(x')\left(\set{u}\right)}[\P{a}{x}{x'}] \\
&= \alpha(x)(\mathbf{1}) + \sum\limits_{a \in \mathcal{A}} \Int{\sum_{u \in \mathcal{A}^{\leq n-1}}\!\mathbf{tr}(x')(\{u\})}[\P{a}{x}{x'}]\\
&= \alpha(x)(\mathbf{1}) + \sum\limits_{a \in \mathcal{A}} \Int{\underbrace{\left(\mathbf{tr}(x')\left(\mathcal{A}^{\leq n-1}\right)\right)}_{\leq 1}}[\P{a}{x}{x'}]\\
&\leq \alpha(x)(\mathbf{1}) + \sum\limits_{a \in \mathcal{A}}\Int{1}[\P{a}{x}{x'}]=\alpha(x)(\mathbf{1}) + \sum\limits_{a \in \mathcal{A}} \P{a}{x}{X}\\
&= \alpha(x)(\mathbf{1}) + \sum\limits_{a \in \mathcal{A}} \alpha(x)(\{a\} \times X)= \alpha(x)(\mathcal{A} \times X + \mathbf{1}) \leq 1
\end{align*}
using the linearity and monotonicity of the integral. These can be applied here since $\mathcal{A}$, and hence also $\mathcal{A}^{\leq n-1}$, is finite and since all the integrals $\Int{\mathbf{tr}(x')\left(\set{u}\right)}[\P{a}{x}{x'}]$ exist because $\mathbf{tr}(x')\left(\set{u}\right)$ is bounded above by $1$. By induction we can thus conclude that
\begin{align*}
\forall x \in X\ \forall n \in \mathbb{N}_0: \mathbf{tr}(x)\left(\mathcal{A}^{\leq n} \right) \leq 1
\end{align*}
which is equivalent to
\begin{align*}
\forall x \in X: \sup_{n \in \mathbb{N}_0}\left(\mathbf{tr}(x)\left(\mathcal{A}^{\leq n} \right)\right) \leq 1\,.
\end{align*}
Since $\mathbf{tr}(x)$ is a measure (and thus non-negative and $\sigma$-additive), the sequence given by $\left(\mathbf{tr}(x)\left(\mathcal{A}^{\leq n}\right)\right)_{n \in \mathbb{N}_0}$ is a monotonically increasing sequence of real numbers bounded above by $1$. Furthermore, $\mathbf{tr}(x)$ is continuous from below as a measure and we have $\mathcal{A}^{\leq n} \subseteq \mathcal{A}^{\leq n+1}$ for all $n \in \mathbb{N}_0$ and thus we obtain
\begin{align*}
\mathbf{tr}(x)\left(\mathcal{A}^*\right) = \mathbf{tr}(x)\left( \bigcup\limits_{n =1}^\infty \mathcal{A}^{\leq n}\right) = \lim_{n \to \infty} \mathbf{tr}(x)\left( \mathcal{A}^{\leq n}\right) = \sup_{n \in\mathbb{N}_0}\mathbf{tr}(x)\left(\mathcal{A}^{\leq n} \right) \leq 1\,.
\end{align*}
\qed

\subsection{The Trace Function is a Kleisli Arrow}

Now that we know that our definition of a trace measure is mathematically sound, we remember that we wanted to show that it is ``natural'', meaning that it arises from the final coalgebra in the Kleisli category of the (sub-)probability monad. We start by showing that the function $\mathbf{tr}\colon X \to T\ensuremath{\mathcal{A}}^\diamond$ we get from Definition \ref{def:trace_premeasure} is a Kleisli arrow by proving that it is a Markov kernel. Since $\mathbf{tr}(x)$ is a sub-probability measure for each $x \in X$ by Proposition \ref{prop:trace_premeasure} we just have to show that for each $S \in \sigalg[\ensuremath{\mathcal{A}}^\diamond]{\S_\diamond}$ the function $x \mapsto \mathbf{tr}(x)(S)$ is Borel-measurable.
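Before doing so, it may help to see why no such argument is necessary in the discrete case, where Kleisli arrows and their composition can be written down directly. The following Haskell sketch is our own illustration (finitely supported sub-distributions only; all names in it are ours) and mirrors composition in $\mathbf{SRel}$:
\begin{verbatim}
-- Sketch (our own): finitely supported sub-probability distributions
-- and Kleisli composition, the discrete analogue of composition in SRel.
newtype SubDist a = SubDist { weights :: [(a, Double)] }

type Kernel x y = x -> SubDist y   -- a discrete Markov kernel

eta :: a -> SubDist a              -- the unit: Dirac distribution
eta x = SubDist [(x, 1)]

-- Kleisli composition g . f: branch according to f, then according to
-- g, multiplying the probabilities along the way.
-- (Duplicate entries for the same outcome are not merged in this sketch.)
kleisli :: Kernel x y -> Kernel y z -> Kernel x z
kleisli f g x =
  SubDist [ (z, p * q) | (y, p) <- weights (f x), (z, q) <- weights (g y) ]
\end{verbatim}
The weight of each outcome arises as a sum of products; in the continuous setting this sum becomes the integral $\Int[{y \in Y}]{g(y)(S)}[f(x)(y)]$, which is precisely why Lemma~\ref{lem:measMarkovKernel} and the measurability results of this subsection are needed.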
Showing this measurability is easy for elements $S$ of the previously defined semirings:

\begin{lem}
\label{lem:measurabilityGenerators}
For every $S \in \S_\diamond$ the function $x \mapsto \mathbf{tr}(x)(S)$ is Borel-measurable.
\end{lem}

\proof
For $\diamond=0$ nothing has to be shown. For the other cases we will use induction on the length of a word $u$. For $\diamond \in \set{*,\infty}$ measurability of $x \mapsto \mathbf{tr}(x)(\set{\epsilon})$ follows from measurability of $x \mapsto \alpha(x)(\mathbf{1})$ and for $\diamond \in \set{\omega,\infty}$ the function $x \mapsto \mathbf{tr}(x)(\cone{\diamond}{\epsilon})$ is the constant function with value $1$ and thus is measurable. Suppose now that for an $n \in \mathbb{N}$ we have established that for all $u \in \ensuremath{\mathcal{A}}^n$ the functions $x \mapsto \mathbf{tr}(x)(\set{u})$ and $x \mapsto \mathbf{tr}(x)(\cone{\diamond}{u})$ (whenever they are meaningful) are measurable. Then for all $a \in \ensuremath{\mathcal{A}}$ and all $u \in \ensuremath{\mathcal{A}}^n$ we have $\mathbf{tr}(x)(\set{au}) = \Int[x' \in X]{\mathbf{tr}(x')(\set{u})}[{\mathbf{P}_{a}(x,x')}]$ and also $\mathbf{tr}(x)(\cone{\diamond}{au}) = \Int[x' \in X]{\mathbf{tr}(x')(\cone{\diamond}{u})}[{\mathbf{P}_{a}(x,x')}]$ and by applying Lemma \ref{lem:measMarkovKernel} we get the desired measurability. \qed

Without any more complicated tools we get the complete result for any $*$-PTS:

\begin{prop}
\label{prop:MeasurabilityFiniteTrace}
For every $S \in \powerset{{\mathcal{A}^*}}$ the function $x \mapsto\mathbf{tr}(x)(S)$ is Borel-measurable.
\end{prop}

\proof
We know from Lemma \ref{lem:measurabilityGenerators} that $x\mapsto\mathbf{tr}(x)(S)$ is measurable for every $S \in {\mathcal{S}_*}$. Recall that $\sigalg[{\mathcal{A}^*}]{{\mathcal{S}_*}} = \powerset{{\mathcal{A}^*}}$ and every $S \in \powerset{{\mathcal{A}^*}}$ is at most countable\footnote{For finite $S$ the proof works analogously but is even simpler.}, say $S := \set{u_1, u_2,\hdots}$, and we have the trivial, disjoint decomposition $S = \cup_{n=1}^\infty\set{u_n}$. If we define $T_N := \cup_{n=1}^N \set{u_n}$ we get an increasing sequence of sets converging to $S$. Hence by continuity of the sub-probability measures $S' \mapsto \mathbf{tr}(x)(S')$ we get $\mathbf{tr}(x)(S) = \lim_{N\to \infty}\mathbf{tr}(x)(T_N) = \lim_{N\to \infty}\sum_{n=1}^N \mathbf{tr}(x)(\set{u_n})$. Thus $x \mapsto\mathbf{tr}(x)(S)$ is the pointwise limit of finite sums of measurable functions and therefore measurable. \qed

For the rest of this subsection we restrict $\diamond$ to be either $\omega$ or $\infty$ unless indicated otherwise. As before, we will rely on transfinite induction for our proof.

\begin{lem}
For every $S \in \mathcal{R}_\diamond(0)$ the function $x \mapsto\mathbf{tr}(x)(S)$ is measurable.
\end{lem}

\proof
We know from Lemma \ref{lem:measurabilityGenerators} that $x\mapsto\mathbf{tr}(x)(S)$ is measurable for every $S \in \S_\diamond$. Let $S \in \mathcal{R}_\diamond(0)$, then $S = \cup_{n=1}^N S_n$ with $S_n \in \S_\diamond$ disjoint for $1 \leq n \leq N \in \mathbb{N}$. We have $\mathbf{tr}(x)(S) = \sum_{n=1}^N \mathbf{tr}(x)(S_n)$ which is measurable as a finite sum of measurable functions. \qed

\begin{lem}
Let $\alpha$ be an ordinal s.t. the function $x \mapsto \mathbf{tr}(x)(S)$ is measurable for each $S \in \mathcal{R}_\diamond(\alpha)$. Then $x \mapsto \mathbf{tr}(x)(S)$ is measurable for each $S \in \Intersections{\mathcal{R}_\diamond(\alpha)}$.
\end{lem}

\proof
Let $S \in \Intersections{\mathcal{R}_\diamond(\alpha)}$, then $S = \cap_{n=1}^\infty S_n$ with $S_n \in \mathcal{R}_\diamond(\alpha)$. We define $T_N := \cap_{n=1}^N S_n$ for all $N \in \mathbb{N}$, then $T_N \in \mathcal{R}_\diamond(\alpha)$ by Lemma \ref{lem:finite_union_intersection}. We have $T_N \supseteq T_{N+1}$ for all $N \in \mathbb{N}$ and $S = \cap_{N=1}^\infty T_N$. Continuity of $S' \mapsto \mathbf{tr}(x)(S')$ for every $x \in X$ yields $\mathbf{tr}(x)(S) = \lim_{N \to \infty} \mathbf{tr}(x)\left(T_N\right)$. Hence $x \mapsto \mathbf{tr}(x)(S)$ is measurable as pointwise limit of measurable functions. \qed

\begin{lem}
Let $\alpha$ be an ordinal s.t. the function $x \mapsto \mathbf{tr}(x)(S)$ is measurable for each $S \in \Intersections{\mathcal{R}_\diamond(\alpha)}$. Then $x \mapsto \mathbf{tr}(x)(S)$ is measurable for each $S \in \mathcal{R}_\diamond(\alpha+1)$.
\end{lem}

\proof
Let $S \in \mathcal{R}_\diamond(\alpha+1)$, then $S = \cup_{n=1}^\infty S_n$ with $S_n \in \Intersections{\mathcal{R}_\diamond(\alpha)}$. We define $T_N:= \cup_{n=1}^N S_n$ for all $N \in \mathbb{N}$. Then we know that $T_N \in \Intersections{\mathcal{R}_\diamond(\alpha)}$ by Lemma \ref{lem:IntersectionIsClosedUnderFiniteUnion}. We have $T_N \subseteq T_{N+1}$ for all $N \in \mathbb{N}$ and $S = \cup_{N=1}^\infty T_N$. Continuity of the sub-probability measures $S' \mapsto \mathbf{tr}(x)(S')$ yields for every $x \in X$ that $\mathbf{tr}(x)(S) = \lim_{N \to \infty} \mathbf{tr}(x)\left(T_N\right)$. Hence the function $x \mapsto \mathbf{tr}(x)(S)$ is measurable as pointwise limit of measurable functions. \qed

\begin{lem}
Let $\gamma$ be a limit ordinal s.t. for all ordinals $\alpha < \gamma$ the function $x \mapsto \mathbf{tr}(x)(S)$ is measurable for each $S \in \mathcal{R}_\diamond(\alpha)$. Then $x \mapsto \mathbf{tr}(x)(S)$ is measurable for each $S \in \mathcal{R}_\diamond(\gamma)$.
\end{lem}

\proof
Let $S \in \mathcal{R}_\diamond(\gamma)$, then there is an $\alpha < \gamma$ such that $S \in \mathcal{R}_\diamond(\alpha)$ and hence $x \mapsto\mathbf{tr}(x)(S)$ is measurable for this $S$. \qed

By using the characterization $\sigalg[\ensuremath{\mathcal{A}}^\diamond]{\S_\diamond} = \mathcal{R}_\diamond(\omega_1)$ of Proposition \ref{prop:TransFiniteSigAlg} and combining the four preceding lemmas we get the desired result:

\begin{prop}
\label{prop:TraceIsMeasurable}
For every $S \in \sigalg[\ensuremath{\mathcal{A}}^\diamond]{\S_\diamond}$ the function $x \mapsto \mathbf{tr}(x)(S)$ is measurable. \qed
\end{prop}

Finally, combining this result with Proposition \ref{prop:trace_premeasure} and the fact that Markov kernels are in one-to-one correspondence with Kleisli arrows \cite[Proposition 2.7]{Dob07b} yields:

\begin{prop}
\label{prop:TraceIsKleisli}
Let $\diamond \in \set{0, *, \omega, \infty}$ and $(T, \eta, \mu)$ be the (sub-)probability monad. Then the function $\mathbf{tr}\colon X \to T\ensuremath{\mathcal{A}}^\diamond$ given by Definition \ref{def:trace_premeasure} is a Kleisli arrow. \qed
\end{prop}

\subsection{The Trace Measure and Final Coalgebra}

Before stating the next proposition, which presents a close connection between the unique existence of the map into the final coalgebra and the unique extension of a family of $\sigma$-finite pre-measures, we first give some intuition: in order to show that a coalgebra is final it is enough to show that every other coalgebra admits a unique homomorphism into it.
Commutativity of the square underlying the homomorphism and uniqueness have to be shown for every element of a $\sigma$-algebra, and one of our main contributions is to reduce the proof obligations to a smaller set of generators, which form a covering semiring. This proposition will later be applied to our four types of transition systems by using the semirings defined earlier and showing that there can be only one way to assign probabilities to their elements.\newpage

\begin{prop}
\label{thm_finalcoalg}
Let $(T, \eta, \mu)$ be either the sub-probability monad $(\mathbb{S}, \eta, \mu)$ or the probability monad $(\mathbb{P}, \eta, \mu)$, $F$ be an endofunctor on $\mathbf{Meas}$ with a distributive law $\lambda \colon FT \Rightarrow TF$ and $(\Omega, \kappa)$ be an $\overline{F}$-coalgebra where $\Sigma_{F\Omega} = \sigalg[F\Omega]{\S_{F\Omega}}$ for a covering semiring $\S_{F\Omega}$. Then the following statements are equivalent:
\begin{enumerate}
\item $(\Omega, \kappa)$ is a final $\overline{F}$-coalgebra in $\mathcal{K}\ell(T)$.
\item For every $\overline{F}$-coalgebra $(X, \alpha)$ in $\mathcal{K}\ell(T)$ there is a unique Kleisli arrow $\mathbf{tr}\colon X \to T\Omega$ satisfying the following condition:
\begin{align}
\forall x \in X, \forall S \in \S_{F\Omega}: \quad \Int[\Omega]{p_S \circ \kappa}[\mathbf{tr}(x)] = \Int[{FX}]{p_S \circ \lambda_\Omega \circ F(\mathbf{tr})}[\alpha(x)]\,. \label{eq:giry_final_coalgebra}
\end{align}
\end{enumerate}
\end{prop}

\proof
We consider the final coalgebra diagram in $\mathcal{K}\ell(T)$.
\begin{align*}\begin{xy}\xymatrix{
X \ar[rr]^{\alpha} \ar[d]_{\mathbf{tr}} && \overline{F}X \ar[d]^{\overline{F}(\mathbf{tr}) = \lambda_\Omega \circ F(\mathbf{tr})}\\
\Omega \ar[rr]^{\kappa} && \overline{F}\Omega
}\end{xy}\end{align*}
By definition $(\Omega, \kappa)$ is final iff for every $\overline{F}$-coalgebra $(X , \alpha)$ there is a unique Kleisli arrow $\mathbf{tr} \colon X \to T\Omega$ making the diagram commute. We define
\begin{align*}
g := \mu_{F\Omega} \circ T(\kappa)\circ \mathbf{tr}\ \mbox{(down, right)} \quad \text{and}\quad h:= \mu_{F\Omega} \circ T\left(\overline{F}(\mathbf{tr})\right) \circ \alpha\ \mbox{(right, down)}
\end{align*}
and note that commutativity of the final coalgebra diagram is equivalent to
\begin{align}
\forall x \in X,\forall S \in \S_{F\Omega}: \quad g(x)(S) &= h(x)(S) \label{eq:g_equals_h}
\end{align}
because $\S_{F\Omega}$ is a covering semiring and for all $x \in X$ both $g(x)$ and $h(x)$ are sub-probability measures and thus finite measures, which allows us to apply Corollary \ref{cor:equality_of_measures}. We calculate
\begin{align*}
g(x) (S) &= (\mu_{F\Omega} \circ T(\kappa) \circ \mathbf{tr}) (x) (S) =\mu_{F\Omega}\left(T(\kappa)(\mathbf{tr}(x))\right)(S)\\
& =\mu_{F\Omega} \left(\mathbf{tr}(x) \circ \kappa^{-1}\right) (S) = \Int{p_S}[{\left(\mathbf{tr}(x) \circ \kappa^{-1}\right)}] = \Int{p_S\circ\kappa} [\mathbf{tr}(x)]
\end{align*}
and if we define $\rho := \overline{F}(\mathbf{tr}) = \lambda_\Omega \circ F(\mathbf{tr}) \colon FX \to TF\Omega$ we obtain
\begin{align*}
h(x)(S) &= (\mu_{F\Omega} \circ T(\rho) \circ \alpha) (x) (S) = \mu_{F\Omega} \left(T(\rho) (\alpha (x))\right) (S) =\mu_{F\Omega} \left(\alpha(x)\circ \rho^{-1}\right) (S) \\
&= \int\! p_S\, \mathrm{d}\left(\alpha(x)\circ \rho^{-1}\right) = \int\! p_S \circ \rho\, \mathrm{d}\alpha(x) = \int\! p_S \circ \lambda_\Omega \circ F(\mathbf{tr})\, \mathrm{d}\alpha(x)
\end{align*}
and thus \eqref{eq:g_equals_h} is equivalent to \eqref{eq:giry_final_coalgebra}.
\qed

We immediately obtain the following corollary.

\begin{cor}
\label{cor:main_corollary}
In Proposition \ref{thm_finalcoalg}, let $\kappa = \eta_{F\Omega} \circ \phi$ for an isomorphism $\phi \colon \Omega \to F\Omega$ in $\mathbf{Meas}$ and let $\S_\Omega \subseteq \powerset{\Omega}$ be a covering semiring such that $\Sigma_\Omega = \sigalg[\Omega]{\S_\Omega}$. Then equation \eqref{eq:giry_final_coalgebra} is equivalent to
\begin{align}
\forall x \in X, \forall S \in \S_\Omega: \quad \mathbf{tr}(x)(S) = \Int{p_{\phi(S)} \circ \lambda_\Omega \circ F(\mathbf{tr})}[\alpha(x)]\,. \label{eq:giry_final_coalgebra_semiring}
\end{align}
\end{cor}

\proof
Since $\phi$ is an isomorphism in $\mathbf{Meas}$ we know from Proposition~\ref{prop:isomorphisms} that $\Sigma_{F\Omega} = \sigalg[F\Omega]{\phi(\S_\Omega)}$. For every $S \in \S_{\Omega}$ and every $u \in \Omega$ we calculate
\[p_{\phi(S)}\circ \kappa (u) = p_{\phi(S)}\ \circ\eta_{F\Omega} \circ \phi (u) = \delta_{\phi(u)}^{F\Omega} (\phi(S))= \chi_{\phi(S)}(\phi(u)) = \chi_{S}({u}) \]
and hence we have $\int\! p_{\phi(S)}\circ\kappa\, \mathrm{d}\mathbf{tr}(x) = \int\! \chi_S \, \mathrm{d}\mathbf{tr}(x) = \mathbf{tr}(x)(S)$. \qed

Since we want to apply this corollary to sets of words, we now define the necessary isomorphism $\phi$ using the characterization given in Proposition~\ref{prop:isomorphisms}.

\begin{prop}
\label{prop:words_iso}
Let $\mathcal{A}$ be an arbitrary alphabet and let
\begin{align}
\phi\colon {\mathcal{A}^\infty} \to \ensuremath{\mathcal{A}} \times {\mathcal{A}^\infty} + \mathbf{1}, \quad \epsilon \mapsto \checkmark, \quad au \mapsto (a,u)\,.
\end{align}
Then $\phi$, $\phi|_{{\mathcal{A}^*}}\colon {\mathcal{A}^*} \to \phi({\mathcal{A}^*})$ and $\phi|_{\mathcal{A}^\omega}\colon {\mathcal{A}^\omega} \to \phi({\mathcal{A}^\omega})$ are isomorphisms in $\mathbf{Meas}$ because they are bijective functions\footnote{Note that we restrict not only the domain of $\phi$ here but also its codomain.} and we have
\begin{align}
\sigalg[\ensuremath{\mathcal{A}} \times {\mathcal{A}^\omega}]{\phi({\mathcal{S}_\omega})} &= \powerset{\ensuremath{\mathcal{A}}} \otimes \sigalg[{\mathcal{A}^\omega}]{{\mathcal{S}_\omega}}\,, \\
\sigalg[\ensuremath{\mathcal{A}} \times {\mathcal{A}^*} + \mathbf{1}]{\phi({\mathcal{S}_*})} &= \powerset{\ensuremath{\mathcal{A}}} \otimes \sigalg[{\mathcal{A}^*}]{{\mathcal{S}_*}} \oplus \powerset{\mathbf{1}}\,, \label{eq:sigalg_eq_Astar}\\
\sigalg[\ensuremath{\mathcal{A}} \times {\mathcal{A}^\infty} + \mathbf{1}]{\phi({\mathcal{S}_\infty})} &= \powerset{\ensuremath{\mathcal{A}}} \otimes \sigalg[{\mathcal{A}^\infty}]{{\mathcal{S}_\infty}} \oplus \powerset{\mathbf{1}}\,. \label{eq:sigalg_eq_Ainfty}
\end{align}
\end{prop}

\proof
Bijectivity is obvious.
We will now show validity of \eqref{eq:sigalg_eq_Ainfty}; the other equations can be verified analogously.\footnote{For proving \eqref{eq:sigalg_eq_Astar} we can use Proposition \ref{prop:generator_product} because $\sigalg[{\mathcal{A}^*}]{{\mathcal{S}_*}} = \sigalg[{\mathcal{A}^*}]{{\mathcal{S}_*} \cup \set{{\mathcal{A}^*}}}$.} Let $\S_\ensuremath{\mathcal{A}} := \set{\emptyset} \cup \set{\set{a} \mid a \in \ensuremath{\mathcal{A}}} \cup \set{\ensuremath{\mathcal{A}}}$. Then it is easy to show that we have $\sigalg[\ensuremath{\mathcal{A}}]{\mathcal{S}_\ensuremath{\mathcal{A}}} = \powerset{\ensuremath{\mathcal{A}}}$ and Propositions \ref{prop:generator_product} and \ref{prop:generator_union} yield that \begin{align*} \powerset{\ensuremath{\mathcal{A}}} \otimes \sigalg[{\mathcal{A}^\infty}]{{\mathcal{S}_\infty}} \oplus \powerset{\mathbf{1}} = \sigalg[\ensuremath{\mathcal{A}} \times {\mathcal{A}^\infty} + \mathbf{1}]{\S_\ensuremath{\mathcal{A}} \ast {\mathcal{S}_\infty} \oplus \powerset{\mathbf{1}}}\,. \end{align*} We calculate $\phi\left(\emptyset\right) = \emptyset$, $\phi\left(\set{\epsilon}\right) = \mathbf{1}$, $\phi\left(\cone{\omega}{\epsilon}\right) = \ensuremath{\mathcal{A}} \times {\mathcal{A}^\omega}$, $\phi\left(\cone{\infty}{\epsilon}\right) = \ensuremath{\mathcal{A}} \times {\mathcal{A}^\infty} + \mathbf{1}$, and for all $a \in \ensuremath{\mathcal{A}}$ and all $u \in {\mathcal{A}^*}$ we have $\phi\left(\set{au}\right) = \set{(a,u)}$ and also $\phi\left(\cone{\infty}{au}\right) = \set{a} \times \cone{\infty}{u}$. This yields \begin{align*} \phi({\mathcal{S}_\infty}) &= \set{\emptyset, \emptyset + \mathbf{1}, \ensuremath{\mathcal{A}} \times {\mathcal{A}^\infty} + \mathbf{1}} \cup \set{\set{a} \times \set{u} + \emptyset, \set{a} \times \cone{\infty}{u}+\emptyset\mid a \in \ensuremath{\mathcal{A}}, u \in {\mathcal{A}^*}} \end{align*} and furthermore we have \begin{align*} \mathcal{S}_\ensuremath{\mathcal{A}} \ast {\mathcal{S}_\infty} \oplus \powerset{\mathbf{1}} = \set{\emptyset, \emptyset + \mathbf{1}} &\cup \set{\set{a} \times \set{u} + \emptyset, \set{a} \times \cone{\infty}{u}+\emptyset\mid a \in \ensuremath{\mathcal{A}}, u \in {\mathcal{A}^*}}\\ &\cup \set{\set{a} \times \set{u} + \mathbf{1}, \set{a} \times \cone{\infty}{u}+\mathbf{1}\mid a \in \ensuremath{\mathcal{A}}, u \in {\mathcal{A}^*}}\\ &\cup \set{\ensuremath{\mathcal{A}}\, \times \set{u} + \emptyset, \ensuremath{\mathcal{A}}\, \times \cone{\infty}{u}+\emptyset\mid u \in {\mathcal{A}^*}}\\ &\cup \set{\ensuremath{\mathcal{A}}\, \times \set{u} + \mathbf{1}, \ensuremath{\mathcal{A}}\, \times \cone{\infty}{u}+\mathbf{1}\mid u \in {\mathcal{A}^*}}. \end{align*} Due to the fact that $\ensuremath{\mathcal{A}} \times {\mathcal{A}^\infty} + \mathbf{1} = \ensuremath{\mathcal{A}} \times \cone{\infty}{\epsilon} + \mathbf{1}$ we have $\phi({\mathcal{S}_\infty}) \subseteq \S_\ensuremath{\mathcal{A}} \ast {\mathcal{S}_\infty} \oplus \powerset{\mathbf{1}}$ and the monotonicity of the $\sigma$-operator yields \begin{align*} \sigalg[\ensuremath{\mathcal{A}} \times {\mathcal{A}^\infty} + \mathbf{1}]{\phi({\mathcal{S}_\infty})} \subseteq \sigalg[\ensuremath{\mathcal{A}} \times {\mathcal{A}^\infty} + \mathbf{1}]{\S_\ensuremath{\mathcal{A}} \ast {\mathcal{S}_\infty} \oplus \powerset{\mathbf{1}}}\,.
\end{align*} For the other inclusion we remark that \begin{align*} \set{a} \times \set{u} + \mathbf{1} &= (\set{a} \times \set{u} + \emptyset) \cup (\emptyset + \mathbf{1})\\ \set{a} \times \cone{\infty}{u} + \mathbf{1} &= (\set{a} \times \cone{\infty}{u} + \emptyset) \cup (\emptyset + \mathbf{1}) \end{align*} and together with the countable decomposition $\ensuremath{\mathcal{A}} = \bigcup_{a \in \ensuremath{\mathcal{A}}} \set{a}$ it is easy to see that \begin{align*} \S_\ensuremath{\mathcal{A}} \ast {\mathcal{S}_\infty} \oplus \powerset{\mathbf{1}} \subseteq \sigalg[\ensuremath{\mathcal{A}} \times {\mathcal{A}^\infty} + \mathbf{1}]{\phi({\mathcal{S}_\infty})} \end{align*} and monotonicity and idempotence of the $\sigma$-operator complete the proof. \qed We recall that -- in order to get a lifting of an endofunctor on $\mathbf{Meas}$ -- we also need a distributive law for the functors and the monads we are using to define PTS. In order to define such a law we first provide two technical lemmas. \begin{lem} \label{lem:semirings} Let $\ensuremath{\mathcal{A}}$ be an alphabet and $(X, \Sigma_X)$ be a measurable space. \begin{enumerate} \item The sets $\powerset{\ensuremath{\mathcal{A}}} \ast \Sigma_X$ and $\powerset{\ensuremath{\mathcal{A}}} \ast \Sigma_X \oplus \powerset{\mathbf{1}}$ are covering semirings of sets.\label{itm:semirings:one} \item $\powerset{\ensuremath{\mathcal{A}}} \otimes \Sigma_X = \sigalg[\ensuremath{\mathcal{A}} \times X]{\powerset{\ensuremath{\mathcal{A}}} \ast \Sigma_X}$. \item $\powerset{\ensuremath{\mathcal{A}}} \otimes \Sigma_X \oplus \powerset{\mathbf{1}} = \sigalg[\ensuremath{\mathcal{A}} \times X+\mathbf{1}]{\powerset{\ensuremath{\mathcal{A}}} \ast \Sigma_X \oplus \powerset{\mathbf{1}}}$. \end{enumerate} \end{lem} \proof Showing property \eqref{itm:semirings:one} is straightforward and will thus be omitted. The rest follows by Propositions \ref{prop:generator_product} and \ref{prop:generator_union}. \qed \begin{lem}[Product Measures] \label{lem:productmeasures} Let $\ensuremath{\mathcal{A}}$ be an alphabet, $a \in \ensuremath{\mathcal{A}}$ and $(X, \Sigma_X)$ be a measurable space with a sub-probability measure $P \colon \Sigma_X \to [0,1]$. Then the following holds: \begin{enumerate} \item The \emph{product measure} $\delta_a^\ensuremath{\mathcal{A}} \otimes P\colon \powerset{\ensuremath{\mathcal{A}}} \otimes \Sigma_X \to \mathbb{R}_+$ of $\delta_a^\ensuremath{\mathcal{A}}$ and $P$, which is the unique extension of the pre-measure satisfying \begin{align} (\delta_a^\ensuremath{\mathcal{A}} \otimes P)(S_\ensuremath{\mathcal{A}} \times S_X) := \delta_a^\ensuremath{\mathcal{A}}(S_\ensuremath{\mathcal{A}}) \cdot P(S_X)\label{eq:product_measure} \end{align} for all $S_\ensuremath{\mathcal{A}} \times S_X \in \powerset{\ensuremath{\mathcal{A}}} \ast \Sigma_X$, is a sub-probability measure on $\ensuremath{\mathcal{A}} \times X$. If $P$ is a probability measure on $X$, then also $\delta_a^\ensuremath{\mathcal{A}} \otimes P$ is a probability measure on $\ensuremath{\mathcal{A}} \times X$.
\item The measure $\delta_a^\ensuremath{\mathcal{A}} \odot P\colon \powerset{\ensuremath{\mathcal{A}}} \otimes \Sigma_X \oplus \powerset{\mathbf{1}} \to \mathbb{R}_+$, which is defined via the equation \begin{align} \quad (\delta_a^\ensuremath{\mathcal{A}} \odot P)(S) := (\delta_a^\ensuremath{\mathcal{A}} \otimes P) \left(S \cap (\ensuremath{\mathcal{A}} \times X)\right)\label{eq:product_coproduct_measure} \end{align} for all $S \in \powerset{\ensuremath{\mathcal{A}}} \otimes \Sigma_X \oplus \powerset{\mathbf{1}}$, is a sub-probability measure on $\ensuremath{\mathcal{A}} \times X + \mathbf{1}$. If $P$ is a probability measure on $X$, then also $\delta_a^\ensuremath{\mathcal{A}} \odot P$ is a probability measure on $\ensuremath{\mathcal{A}} \times X+\mathbf{1}$. \end{enumerate} \end{lem} \proof Before proving the statement, we check that the two equations yield unique measures. \begin{enumerate} \item Existence and uniqueness of the product measure is a well-known fact from measure theory and follows immediately by Proposition~\ref{prop:extension} because equation \eqref{eq:product_measure} defines a $\sigma$-finite pre-measure on $\powerset{\ensuremath{\mathcal{A}}} \ast \Sigma_X$, which by Lemma~\ref{lem:semirings} is a covering semiring of sets and a generator for the product-$\sigma$-algebra. \item We obviously have non-negativity and $(\delta_a^\ensuremath{\mathcal{A}} \odot P)(\emptyset)=0$. Let $(S_n)_{n \in \mathbb{N}}$ be a family of pairwise disjoint sets in $\powerset{\ensuremath{\mathcal{A}}} \otimes \Sigma_X \oplus \powerset{\mathbf{1}}$. Then the following holds \begin{align*} &(\delta_a^\ensuremath{\mathcal{A}} \odot P)\left(\bigcup_{n \in \mathbb{N}}S_n\right) = (\delta_a^\ensuremath{\mathcal{A}} \otimes P)\left(\bigcup_{n \in \mathbb{N}}(S_n\cap (\ensuremath{\mathcal{A}} \times X))\right)\\ &\quad =\sum_{n \in \mathbb{N}}(\delta_a^\ensuremath{\mathcal{A}} \otimes P)(S_n\cap (\ensuremath{\mathcal{A}} \times X)) =\sum_{n \in \mathbb{N}}(\delta_a^\ensuremath{\mathcal{A}} \odot P)\left(S_n\right) \end{align*} and hence $\delta_a^\ensuremath{\mathcal{A}} \odot P$ as defined by equation \eqref{eq:product_coproduct_measure} is $\sigma$-additive and thus a measure. \end{enumerate} For the proof of the lemma we observe that \begin{align*} (\delta_a^\ensuremath{\mathcal{A}} \odot P)(\ensuremath{\mathcal{A}} \times X+\mathbf{1}) = (\delta_a^\ensuremath{\mathcal{A}} \otimes P)(\ensuremath{\mathcal{A}} \times X) = \delta_a^\ensuremath{\mathcal{A}}(\ensuremath{\mathcal{A}}) \cdot P(X) = P(X) \end{align*} which immediately yields that both measures are sub-probability measures and, if $P$ is a probability measure, they are probability measures. \qed With the help of the preceding lemmas, we can now state and prove the distributive laws for the endofunctors $\ensuremath{\mathcal{A}} \times \mathrm{Id}_\mathbf{Meas}$, $\ensuremath{\mathcal{A}} \times \mathrm{Id}_\mathbf{Meas} + \mathbf{1}$ on $\mathbf{Meas}$ and the sub-probability monad and the probability monad. \begin{prop}[Distributive Laws] \label{prop:distributive_law_giry} Let $(T, \eta, \mu)$ be either the sub-probability monad $(\mathbb{S}, \eta, \mu)$ or the probability monad $(\mathbb{P}, \eta, \mu)$ and $\ensuremath{\mathcal{A}}$ be an alphabet with $\sigma$-algebra $\powerset{\ensuremath{\mathcal{A}}}$. \begin{enumerate} \item Let $F = \mathcal{A} \times \mathrm{Id}_\mathbf{Meas}$.
For every measurable space $(X, \Sigma_X)$ we define \begin{align} \lambda_X&\colon \ensuremath{\mathcal{A}} \times TX \to T(\ensuremath{\mathcal{A}} \times X),~ (a,P) \mapsto \delta_a^\ensuremath{\mathcal{A}} \otimes P\,.\label{eq:distributivelaw_noterm} \end{align} Then $\lambda\colon FT \Rightarrow TF$ is a distributive law. \item Let $F = \mathcal{A} \times \mathrm{Id}_\mathbf{Meas} + \mathbf{1}$. For every measurable space $(X, \Sigma_X)$ we define \begin{align} &\lambda_X\colon \ensuremath{\mathcal{A}} \times TX + \mathbf{1} \to T(\ensuremath{\mathcal{A}} \times X + \mathbf{1})\nonumber\\ &(a,P) \mapsto \delta_a^\ensuremath{\mathcal{A}} \odot P \label{eq:distributivelaw_term}, \quad \checkmark \mapsto \delta_\checkmark^{\ensuremath{\mathcal{A}} \times X + \mathbf{1}}\, . \end{align} Then $\lambda\colon FT \Rightarrow TF$ is a distributive law. \end{enumerate} \end{prop} \proof In order to show that the given maps are distributive laws we have to check commutativity of the following three diagrams \[\xymatrix{ FTY \ar[r]^{\lambda_Y} \ar[d]_{FTf} & TFY \ar[d]^{TFf} & FX \ar[r]^{F\eta_X} \ar[dr]_{\eta_{FX}} & FTX \ar[d]^{\lambda_X} & FT^2X \ar[r]^{\lambda_{TX}} \ar[d]_{F\mu_X} & TFTX \ar[r]^{T\lambda_X} & T^2FX \ar[d]^{\mu_{FX}}\\ FTX \ar[r]^{\lambda_X} & TFX & & TFX & FTX\ar[rr]_{\lambda_X} & & TFX }\] for all measurable spaces $(X, \Sigma_X)$, $(Y, \Sigma_Y)$ and all measurable functions $f \colon Y \to X$. By Lemma~\ref{lem:semirings} we know that $\powerset{\ensuremath{\mathcal{A}}} \ast \Sigma_X$ and $\powerset{\ensuremath{\mathcal{A}}} \ast \Sigma_X \oplus \powerset{\mathbf{1}}$ are covering semirings of sets and that they are generators for the $\sigma$-algebras $\powerset{\ensuremath{\mathcal{A}}} \otimes \Sigma_X$ and $\powerset{\ensuremath{\mathcal{A}}} \otimes \Sigma_X\oplus \powerset{\mathbf{1}}$. Moreover, we know from Lemma~\ref{lem:productmeasures} that the measures assigned in equations~\eqref{eq:distributivelaw_noterm} and \eqref{eq:distributivelaw_term} are sub-probability measures and thus finite. We can therefore use Corollary \ref{cor:equality_of_measures} to check the equality of the various (sub-)probability measures. We will provide the proofs for the second distributive law only; the proofs for the first law are simpler and can in fact be derived from the given proofs. Let $S := S_\mathcal{A} \times S_X + S_\mathbf{1} \in \powerset{\ensuremath{\mathcal{A}}} \ast \Sigma_X \oplus \powerset{\mathbf{1}}$. \begin{enumerate} \item Let $f \colon Y \to X$ be a measurable function. For $(a,P) \in \mathcal{A} \times TY$ we calculate \begin{align*} (TFf \circ \lambda_Y)(a,P)(S) &= (\delta_a^{\mathcal{A}}\odot P)\left((Ff)^{-1}(S)\right) =(\delta_a^{\mathcal{A}}\odot P) (S_{\mathcal{A}} \times f^{-1}(S_X) + S_\mathbf{1})\\ &= \delta_a^{\mathcal{A}} (S_{\mathcal{A}}) \cdot P\left(f^{-1}(S_X)\right) = (\delta_a^{\mathcal{A}} \odot (P\circ f^{-1}))(S_{\mathcal{A}} \times S_X + S_\mathbf{1}) \\ &= (\lambda_X \circ FTf)(a,P)(S) \end{align*} and analogously we obtain \begin{align*} &(TFf \circ \lambda_Y) (\checkmark) (S) =\delta_\checkmark^{\mathcal{A}\times Y + \mathbf{1}}\left((Ff)^{-1} (S)\right) \\ &\quad=\delta_\checkmark^{\mathcal{A}\times Y + \mathbf{1}}\left(S_{\mathcal{A}} \times f^{-1}(S_X) + S_\mathbf{1}\right) = \delta_\checkmark^{\mathcal{A}\times X + \mathbf{1}} (S)= (\lambda_X \circ FTf)(\checkmark)(S)\,.
\end{align*} \item For $(a,x) \in \ensuremath{\mathcal{A}} \times X$ we calculate \begin{align*} \eta_{FX}(a,x)(S) &= \delta_{(a,x)}^{FX}(S_\mathcal{A} \times S_X + S_\mathbf{1}) = \delta_a^\mathcal{A}(S_\mathcal{A}) \cdot \delta_x^X(S_X)\\ &= (\delta_a^\mathcal{A} \odot \delta_x^X)(S) =\lambda_X(a,\delta_x^X)(S) = \big(\lambda_X \circ F\eta_X\big)(a,x)(S) \end{align*} and also \begin{align*} \eta_{FX} (\checkmark) = \delta_\checkmark^{FX} = \lambda_X(\checkmark) = \lambda_X\left(F\eta_X(\checkmark)\right) = \big(\lambda_X\circ F\eta_X\big)(\checkmark)\,. \end{align*} \item For $(a,P) \in FT^2X$ we calculate \begin{align*} \left(\lambda_X \circ F\mu_X\right)(a,P)(S) &= \left(\lambda_X\left(a, \mu_X(P)\right)\right)(S) = \left(\delta_a^\mathcal{A}\odot \mu_X(P)\right)(S) \\ &=\delta_a^\mathcal{A}(S_{\mathcal{A}}) \cdot \mu_X(P)(S_X) = \delta_a^\mathcal{A}(S_{\mathcal{A}}) \cdot \Int{p_{S_X}}[P] \end{align*} and \begin{align*} &\left(\mu_{FX} \circ T\lambda_X\circ \lambda_{TX}\right)\!(a,P)(S) = \mu_{FX}\left(\left(\delta_a^\mathcal{A} \odot P\right) \circ \lambda^{-1}_X\right)(S) \\ &\quad = \Int[TFX]{p_S}[{\left(\left(\delta_a^\mathcal{A}\odot P\right) \circ \lambda^{-1}_X\right)}] = \Int[\lambda^{-1}_X(TFX)]{p_S\circ \lambda_X}[\big(\delta_a^\mathcal{A} \odot P\big)] \\ &\quad = \Int[\set{a} \times TX]{p_S\circ \lambda_X}[\big(\delta_a^\mathcal{A} \odot P\big)] = \Int[P' \in TX]{\big(\delta_a^\mathcal{A} \otimes P'\big)(S)}[P(P')]\\ &\quad = \Int[P' \in TX]{\delta_a^\mathcal{A}(S_{\mathcal{A}})\cdot P'(S_X)}[P(P')] = \delta_a^\mathcal{A}(S_{\mathcal{A}})\cdot \Int{p_{S_X}}[P]\,. \end{align*} Analogously we obtain \begin{align*} \left(\lambda_X \circ F\mu_X \right)(\checkmark) = \lambda_X(\checkmark) = \delta_{\checkmark}^{\mathcal{A}\times X+\mathbf{1}} \end{align*} and \begin{align*} &\left(\mu_{FX} \circ T\lambda_X\circ \lambda_{TX}\right)(\checkmark)(S) = \mu_{FX}\left(\delta_\checkmark^{\mathcal{A} \times TX+\mathbf{1}} \circ \lambda^{-1}_X\right)(S)\\ &\quad=\Int[TFX]{p_S}[\left(\delta_\checkmark^{\mathcal{A}\times TX+\mathbf{1}}\circ \lambda^{-1}_X\right) ] = \Int[\lambda^{-1}_X(TFX)]{p_S \circ \lambda_X}[\delta_\checkmark^{\mathcal{A}\times TX+\mathbf{1}}]\\ &\quad = (p_S \circ \lambda_X)(\checkmark)= \delta_\checkmark^{\mathcal{A}\times X+\mathbf{1}}(S)\,. \end{align*} \end{enumerate} \qed \noindent With this result at hand we can finally apply Corollary \ref{cor:main_corollary} to the measurable spaces $\emptyset$, ${\mathcal{A}^*}$, ${\mathcal{A}^\omega}$, ${\mathcal{A}^\infty}$, each of which is of course equipped with the $\sigma$-algebra generated by the covering semirings $\S_0$, ${\mathcal{S}_*}$, ${\mathcal{S}_\omega}$, ${\mathcal{S}_\infty}$ as defined in Proposition~\ref{prop:semirings_of_words}, to obtain the final coalgebra and the induced trace semantics for PTS as presented in the following theorem. \begin{thm}[Final Coalgebra and Trace Semantics for PTS] Let $(T, \eta, \mu)$ be either the sub-probability monad $(\mathbb{S}, \eta, \mu)$ or the probability monad $(\mathbb{P}, \eta, \mu)$ and $F$ be either $\ensuremath{\mathcal{A}} \times \mathrm{Id}_\mathbf{Meas}$ or $\ensuremath{\mathcal{A}} \times \mathrm{Id}_\mathbf{Meas} + \mathbf{1}$. A PTS $(\ensuremath{\mathcal{A}}, X, \alpha)$ is an $\overline{F}$-coalgebra $(X, \alpha)$ in $\mathcal{K}\ell(T)$ and vice versa. In the following table we present the (carriers of) final $\overline{F}$-coalgebras $\left(\Omega, \kappa\right)$ in $\mathcal{K}\ell(T)$ for all suitable choices of $T$ and $F$ (depending on the type of the PTS).
\begin{align*}\begin{tabular}{c|c|l|c} \hline Type $\diamond$~ & ~Monad $T$ ~ & ~Endofunctor $F$ ~ & Carrier $\Omega$ \\\hline $0$ & $\mathbb{S}$ & ~$\ensuremath{\mathcal{A}} \times \mathrm{Id}_\mathbf{Meas}$ & $(\emptyset, \set{\emptyset})$ \\ $*$ & $\mathbb{S}$ & ~$\ensuremath{\mathcal{A}} \times \mathrm{Id}_\mathbf{Meas} + \mathbf{1}$ & $\left({\mathcal{A}^*}, \sigalg[{\mathcal{A}^*}]{{\mathcal{S}_*}}\right) $ \\ $\omega$ & $\mathbb{P}$ & ~$\ensuremath{\mathcal{A}} \times \mathrm{Id}_\mathbf{Meas}$ & $\left({\mathcal{A}^\omega}, \sigalg[{\mathcal{A}^\omega}]{{\mathcal{S}_\omega}}\right) $\\ $\infty$ & $\mathbb{P}$ & ~$\ensuremath{\mathcal{A}} \times \mathrm{Id}_\mathbf{Meas} + \mathbf{1}$ & ~$\left({\mathcal{A}^\infty}, \sigalg[{\mathcal{A}^\infty}]{{\mathcal{S}_\infty}}\right)$\\\hline \end{tabular}\end{align*} where for $\diamond \in \set{*, \omega, \infty}$ we have $\kappa = \eta_{F\Omega}\circ \phi$ where $\phi$ is the isomorphism as defined in Proposition~\ref{prop:words_iso} and for $\diamond = 0$ we take $\kappa = \eta_{F\emptyset} \circ \phi$ with $\phi$ being the empty function $\phi \colon \emptyset \to \emptyset$. The unique arrow into the final coalgebra is the map $\mathbf{tr}\colon X \to T\Omega$ given by Definition~\ref{def:trace_premeasure}. \end{thm} \proof For the whole proof we always assume that the combinations of the type $\diamond$ of the PTS, the monad $T$, the endofunctor $F$ and the carrier $(\Omega, \Sigma_\Omega)$ are chosen as presented in the table given in the theorem. Thus e.g. $\diamond = *$ automatically yields $T = \mathbb{S}$, $F = \ensuremath{\mathcal{A}} \times \mathrm{Id}_\mathbf{Meas} + \mathbf{1}$, $\Omega={\mathcal{A}^*}$, $\Sigma_\Omega = \sigalg[{\mathcal{A}^*}]{{\mathcal{S}_*}}$ and we automatically work in the Kleisli category $\mathcal{K}\ell(\mathbb{S})$ of the sub-probability monad. The first statement of the theorem is obvious by construction of the transition function $\alpha$. For $\diamond \in \set{*, \omega, \infty}$ we remark that the preconditions of Corollary \ref{cor:main_corollary} are fulfilled and aim at applying this corollary, and especially at evaluating equation \eqref{eq:giry_final_coalgebra_semiring} for the covering semirings ${\mathcal{S}_*}, {\mathcal{S}_\omega}, {\mathcal{S}_\infty}$. Let us carry out these calculations in various steps to obtain all the equations of Definition \ref{def:trace_premeasure}. For all $(b,x') \in \ensuremath{\mathcal{A}} \times X$ we calculate \begin{align*} (\lambda_\Omega \circ F(\mathbf{tr})) (b,x') = \begin{cases} \delta_b^\ensuremath{\mathcal{A}} \otimes \mathbf{tr}(x'), & \diamond = \omega\\ \delta_b^\ensuremath{\mathcal{A}} \odot \mathbf{tr}(x'), & \diamond \in \set{*, \infty}. \end{cases} \end{align*} Now suppose $S$ is chosen as $S=\set{au}$, $S=~\cone{\omega}{au}$ or $S=~\cone{\infty}{au}$ respectively for an arbitrary $a \in \ensuremath{\mathcal{A}}$ and an arbitrary $u \in {\mathcal{A}^*}$. Then $\phi(S) = \set{a} \times S'$ with $S'=\set{u}$, $S'=~\cone{\omega}{u}$ or $S'=~\cone{\infty}{u}$ respectively and hence we obtain \begin{align*} &(p_{\phi(S)} \circ \lambda_\Omega \circ F(\mathbf{tr})) (b,x') = \left(\delta_b^\ensuremath{\mathcal{A}} \otimes \mathbf{tr}(x')\right)(\set{a} \times S') \\ &\quad = \delta_b^\ensuremath{\mathcal{A}}(\set{a}) \cdot \mathbf{tr}(x')(S') = \chi_{\set{a} \times X}(b,x') \cdot \mathbf{tr}(x')(S')\,.
\end{align*} Using this, we evaluate equation \eqref{eq:giry_final_coalgebra_semiring} of Corollary \ref{cor:main_corollary} for these sets and get \begin{align*} \mathbf{tr}(x)(S) = \Int[(b,x') \in \set{a} \times X]{\mathbf{tr}(x')(S')}[\alpha(x)] = \Int[x' \in X]{\mathbf{tr}(x')(S')}[\P{a}{x}{x'}] \end{align*} which yields equations \eqref{eq:trace_main_equation} and \eqref{eq:trace_main_equation2} of Definition~\ref{def:trace_premeasure}. For $\diamond \in \set{*,\infty}$ we calculate \begin{align*} (\lambda_\Omega \circ F(\mathbf{tr})) (\checkmark) = \delta_\checkmark^{\ensuremath{\mathcal{A}} \times \Omega + \mathbf{1}} \end{align*} and conclude that for $z \in \ensuremath{\mathcal{A}} \times X + \mathbf{1}$ we have $(p_{\phi(\set{\epsilon})} \circ \lambda_\Omega \circ F(\mathbf{tr})) (z) = 1$ if and only if $z = \checkmark$. Hence evaluating equation \eqref{eq:giry_final_coalgebra_semiring} in this case yields \begin{align*} \mathbf{tr}(x)(\set{\epsilon}) = \Int{p_{\phi(\set{\epsilon})} \circ \lambda_\Omega \circ F(\mathbf{tr})}[\alpha(x)] = \Int{\chi_\mathbf{1}}[\alpha(x)] = \alpha(x)(\mathbf{1}) \end{align*} which is equation \eqref{eq:trace_emptyword}. For $\diamond \in \set{\omega, \infty}$ we have $\mathbf{tr}(x)(\ensuremath{\mathcal{A}}^\diamond) = 1$ due to the fact that $\mathbf{tr}(x)$ must be a probability measure. This is already equation \eqref{eq:trace_wholespace} because $\ensuremath{\mathcal{A}}^\diamond=\epsilon \ensuremath{\mathcal{A}}^\diamond$. Moreover $\phi(\cone{\diamond}{\epsilon}) = \phi(\Omega) = F\Omega$ and since also $\lambda_\Omega \circ F(\mathbf{tr})$ must be a probability measure, evaluating \eqref{eq:giry_final_coalgebra_semiring} yields the same: \begin{align*} \mathbf{tr}(x)(\cone{\diamond}{\epsilon}) &= \Int{p_{\phi(\cone{\diamond}{\epsilon})} \circ \lambda_\Omega \circ F(\mathbf{tr})}[\alpha(x)] =\Int{1}[\alpha(x)] = \alpha(x)(FX) = 1\,. \end{align*} Finally, for $\diamond=0$ we remark that the $\mathcal{K}\ell(\mathbb{S})$-object $(\emptyset, \set{\emptyset})$ is the unique final object of $\mathcal{K}\ell(\mathbb{S})$: Given any $\mathcal{K}\ell(\mathbb{S})$-object $(X, \Sigma_X)$, the unique map into the final object is given as $f \colon X \to \mathbb{S}(\emptyset) = \set{(p \colon \set{\emptyset} \to [0,1], p(\emptyset) = 0)}$ mapping any $x \in X$ to the unique element of that set. Moreover, $(\emptyset, \set{\emptyset})$ together with $\kappa = \eta_{F\emptyset} \circ \phi$, where the map $\phi \colon \emptyset \to \ensuremath{\mathcal{A}} \times \emptyset$ is the obvious and unique isomorphism $(\emptyset, \powerset{\emptyset}) \cong (\ensuremath{\mathcal{A}} \times \emptyset, \powerset{\ensuremath{\mathcal{A}}} \otimes \powerset{\emptyset})$, is an $\overline{F}$-coalgebra and thus final. In all cases we have obtained exactly the equations from Definition~\ref{def:trace_premeasure}, which by Proposition \ref{prop:trace_premeasure} yield a unique function $\mathbf{tr}\colon X \to T\ensuremath{\mathcal{A}}^\diamond$. From Proposition \ref{prop:TraceIsKleisli} we know that this function is indeed a Kleisli arrow.\qed \section{Examples} \label{sec:advexamples} In this section we will define and examine two truly continuous probabilistic systems and calculate their trace measures or parts thereof. However, in order to deal with these systems, we first need to provide some additional measure-theoretic results and tools.
First, we will explain the \emph{counting measure} on countable sets and also the \emph{Lebesgue measure} as this is ``the'' standard measure on the reals. Afterwards we will take a quick look into the theory of measures with \emph{densities}. With these tools at hand we can finally present the examples. All of the presented results should be contained in any standard textbook on measure and integration theory. We use \cite{Els07} as our primary source for this short summary. \begin{defi}[Counting Measure] Let $X$ be a countable set. The \emph{counting measure} on $(X, \powerset{X})$ is the cardinality map \begin{align} \#\colon \powerset{X} \to \overline{\mathbb{R}}_+, \quad A \mapsto |A| \end{align} assigning to each finite subset of $X$ its number of elements and $\infty$ to each infinite subset of $X$. It is uniquely defined as the extension of the $\sigma$-finite pre-measure on the set of all singletons (and $\emptyset$) which is $1$ on every singleton and $0$ on $\emptyset$. \end{defi} \subsection{Completion and the Lebesgue Measure} The (one-dimensional) \emph{Lebesgue-Borel measure} is the unique measure $\lambda'$ on the reals equipped with the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$ satisfying $\lambda'\left((a,b]\right) = b-a$ for every $a, b \in \mathbb{R}$, $a \leq b$. In order to obtain the \emph{Lebesgue measure}, we will refine both the measure and the set of measurable sets by \emph{completion}. We call a measure space $(X, \Sigma, \mu)$ \emph{complete} if every subset of a $\mu$-null-set (i.e. a measurable set $S \in \Sigma$ such that $\mu(S) = 0$) is measurable (and necessarily also a $\mu$-null-set). For any measure space $(X, \Sigma, \mu)$ there is always a smallest complete measure space $(X, \tilde{\Sigma}, \tilde{\mu})$ such that $\Sigma \subseteq \tilde{\Sigma}$ and $\tilde{\mu}|_\Sigma = \mu$ called the \emph{completion} (\cite[II.~§6]{Els07}). The completion of the Lebesgue-Borel measure yields the \emph{Lebesgue $\sigma$-algebra} $\mathcal{L}$ and the \emph{Lebesgue measure}\footnote{This is the second meaning of the symbol $\lambda$. Until here, $\lambda$ was used as a symbol for a distributive law.} $\lambda \colon \mathcal{L} \to \overline{\mathbb{R}}$. For the Lebesgue measure we will use the following notation for integrals: \begin{align*} \Int[a][b]{f} := \Int[{[a,b]}]{f}[\lambda]\,. \end{align*} \subsection{Densities} When dealing with measures on arbitrary measurable spaces -- especially in the context of probability measures -- it is sometimes useful to describe them using so-called \emph{densities}. We will give a short introduction to the theory of densities here, which is sufficient for understanding the upcoming examples. Given a measurable space $(X, \Sigma_X)$ and measures $\mu, \nu \colon \Sigma_X \to \overline{\mathbb{R}}_+$ we call a Borel-measurable function $f \colon X \to \overline{\mathbb{R}}$ satisfying \begin{align} \nu (S) = \Int[S]{f}[\mu]\label{eq:density} \end{align} for all measurable sets $S \in \Sigma_X$ a \emph{$\mu$-density of $\nu$}. In that case $\mu(S) = 0$ implies $\nu(S)=0$ for all measurable sets $S \in \Sigma_X$ and we say that $\nu$ is \emph{absolutely continuous} with respect to $\mu$ and write $\nu \ll \mu$. Densities are neither unique nor do they always exist. However, if $\nu$ has two $\mu$-densities $f,g$, then $f = g$ holds $\mu$-almost everywhere, i.e. there is a $\mu$-null-set $N \in \Sigma_X$ such that for all $x \in X\setminus N$ we have $f(x) = g(x)$.
Moreover, any such $\mu$-density uniquely defines the measure $\nu$. If $\mu = \lambda$, i.e. $\mu$ is the Lebesgue measure, and \eqref{eq:density} holds for a measure $\nu$ and a function $f$, then $f$ is called \emph{Lebesgue density} of $\nu$. For our examples we will make use of the following Proposition which can be found e.g. in \cite[IV.2.12 Satz]{Els07}. \begin{prop}[Integration and Measures with Densities] Let $(X, \Sigma_X)$ be a measurable space and let $\mu, \nu \colon \Sigma_X \to \mathbb{R}_+$ be measures such that $\nu$ has a $\mu$-density $f$. If $g \colon X \to \mathbb{R}_+$ is $\nu$-integrable, then $\int\!g\,\mathrm{d}\nu = \int\!gf\,\mathrm{d}\mu.$\qed \end{prop} \subsection{Examples} With all the previous results at hand, we can now present our two continuous examples using densities to describe the transition functions. \begin{exa} We will first give an informal description of this example as a kind of one-player-game which is played in the closed real interval $[0,1]$. The player, who is at some point $z \in [0,1]$, can jump up and will afterwards touch down at a new position $x \in [0,1]$ which is determined probabilistically. After a jump, the player announces whether he is left ``$L$'' or right ``$R$'' of his previous position. The total probability of jumping from $z$ to the left is $z$ and the probability of jumping to the right is $1-z$. In both cases, we have a continuous uniform probability distribution. As we are within the set of reals, the probability of hitting a specific point $x_0 \in [0,1]$ is always zero. Let us now continue with the precise definition of our example. Let $\mathcal{A} := \set{L,R}$. We consider the PTS $(\set{L,R}, [0,1], \alpha)$ where $[0,1]$ is equipped with the Lebesgue $\sigma$-algebra of the reals, restricted to that interval, denoted by $\mathcal{L}([0,1])$. The transition probability function $\alpha \colon [0,1] \to \mathbb{P}(\set{L,R} \times [0,1])$ is given as \begin{align*} \alpha(z)(S) = \Int[S]{f_z}[(\# \otimes \lambda)] \end{align*} for every $z \in [0,1]$ and all sets $S \in \powerset{\set{L,R}} \otimes \mathcal{L}([0,1])$ with the $(\# \otimes \lambda)$-densities \begin{align*} f_z \colon \set{L,R} \times [0,1] \to \mathbb{R}^+, \quad (a,x) \mapsto \chi_{\set{L} \times [0,z]}(a,x) + \chi_{\set{R} \times [z,1]}(a,x)\,. \end{align*} We observe that $S \mapsto \P{L}{z}{S}, S \mapsto \P{R}{z}{S} \colon \mathcal{L}([0,1]) \to \mathbb{R}^+$ thus have Lebesgue densities \begin{align*} \P{L}{z}{S} = \Int[S]{\chi_{[0,z]}}[\lambda] = \Int[S]{\chi_{[0,z]}(x)}, \quad \P{R}{z}{S} = \Int[S]{\chi_{[z,1]}}[\lambda] = \Int[S]{\chi_{[z,1]}(x)} \end{align*} with the following graphs (in the real plane) \begin{center} \input{square} \end{center} Evaluating these measures on $[0,1]$ yields \begin{align*} \P{L}{z}{[0,1]} = \int_0^z\!1\,\mathrm{d}x = z, \quad \P{R}{z}{[0,1]} = \int_z^1\! 1\,\mathrm{d}x = 1-z\,. \end{align*} With these preparations at hand we calculate the trace measure on some cones.
\begin{align*} \mathbf{tr}(z)(\cone{\omega}{\epsilon}) &= 1\\ \mathbf{tr}(z)(\cone{\omega}{L}) &= \Int[{[0,1]}]{1}[\P{L}{z}{z'}] = \P{L}{z}{[0,1]} = z\\ \mathbf{tr}(z)(\cone{\omega}{R}) &= \Int[{[0,1]}]{1}[\P{R}{z}{z'}] = \P{R}{z}{[0,1]} = 1-z\\ \mathbf{tr}(z)(\cone{\omega}{LL}) &= \Int[{[0,1]}]{x}[\P{L}{z}{x}] = \Int[0][1]{x \cdot \chi_{[0,z]}(x)} = \Int[0][z]{x} = \left[\frac{1}{2} x^2\right]_0^z = \frac{1}{2}z^2\\ \mathbf{tr}(z)(\cone{\omega}{LR}) &= \Int[{[0,1]}]{1-x}[\P{L}{z}{x}] = \Int[0][z]{(1-x)} = \left[x-\frac{1}{2} x^2\right]_0^z = z- \frac{1}{2}z^2\\ \mathbf{tr}(z)(\cone{\omega}{RL}) &= \Int[{[0,1]}]{x}[\P{R}{z}{x}] = \Int[0][1]{x \cdot \chi_{[z,1]}(x)} = \Int[z][1]{x} = \left[\frac{1}{2} x^2\right]_z^1 = \frac{1}{2} - \frac{1}{2} z^2\\ \mathbf{tr}(z)(\cone{\omega}{RR}) &= \Int[{[0,1]}]{1-x}[\P{R}{z}{x}] = \Int[z][1]{(1-x)} = \left[x-\frac{1}{2} x^2\right]_z^1 = \frac{1}{2} - z + \frac{1}{2} z^2 \end{align*} Thus for any word $u \in {\mathcal{A}^*}$ of length $n$ there is a polynomial $p_u \in \mathbb{R}[Z]$ in one variable $Z$ with degree $\mathop{deg}(p_u) = n$. Evaluating this polynomial for an arbitrary $z \in [0,1]$ yields the value of the trace measure $\mathbf{tr}(z)$ on the cone $\cone{\omega}{u}$ generated by $u$, i.e. $\mathbf{tr}(z)(\cone{\omega}{u}) = p_u(z)$. \end{exa} While the previous example provides some understanding of how to describe a continuous PTS and also of how to calculate its trace measure, we are interested in trace equivalence. The second example will thus be a system which is trace equivalent to a finite state system. \begin{exa} As before, we will give an informal description as a kind of one-player-game first. There is exactly one player, who starts at some point $z \in \mathbb{R}$, jumps up and touches down somewhere on the real line announcing whether he is left ``$L$'' or right ``$R$'' of his previous position or has landed back on his previous position ``$N$''. The probability of landing is initially given via a normal distribution centered on the original position $z$. Thus, the probability of landing in close proximity to $z$, i.e. in the interval $[z-\epsilon, z + \epsilon]$, is high for sufficiently big $\epsilon \in \mathbb{R}_+\setminus\set{0}$ whereas the probability of landing far away, i.e. outside of that interval, is negligible. The player has a finite amount of energy and each jump drains that energy so that after finitely many jumps he will not be able to jump again, resulting in an infinite series of ``$N$'' messages. Before that, the energy level determines the likelihood of his jump width, i.e. the standard deviation of the normal distributions. Now let us give a formal description of such a system. Recall that the density function of the normal distribution with expected value\footnote{This is the third meaning of $\mu$. Until here, $\mu$ was used as a symbol for a measure and also as a symbol for the multiplication natural transformation of a monad.} $\mu \in \mathbb{R}$ and standard deviation $\sigma \in \mathbb{R}^+ \setminus\set{0}$ is the Gaussian function \begin{align*} \phi_{\mu, \sigma}\colon \mathbb{R} \to \mathbb{R}^+, \quad \phi_{\mu, \sigma}(x) = \frac{1}{\sigma \sqrt{2\pi}} \cdot \exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right) \end{align*} with the following graph (in the real plane), often called the ``bell curve''. \begin{center} \input{gauss} \end{center} Let now the finite ``energy level'' or ``time horizon'' (which is the maximal number of jumps) $T \in \mathbb{N}$, $T \geq 2$ be given.
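Since the transition densities defined next are built from the two halves of this bell curve, it is instructive to verify numerically that each half carries probability mass exactly $\frac{1}{2}$, independently of the centre and the standard deviation. The following minimal sketch (our illustration in Python; \texttt{numpy} and \texttt{scipy} are assumed to be available, and the state $(t,z)$ is an arbitrary example) performs this check:
\begin{verbatim}
# Sanity check (sketch): each half of the Gaussian density integrates to 1/2.
import numpy as np
from scipy.integrate import quad

def phi(x, mu, sigma):
    # Gaussian density with expected value mu and standard deviation sigma
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

z, t = 0.3, 2                      # arbitrary example state with t < T
sigma = 1.0 / (t + 1)
left, _ = quad(phi, -np.inf, z, args=(z, sigma))
right, _ = quad(phi, z, np.inf, args=(z, sigma))
print(left, right)                 # both are approximately 0.5
\end{verbatim}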
We consider the PTS with alphabet $\mathcal{A} := \set{L,N,R}$, state space $(\mathbb{N}_0 \times \mathbb{R}, \powerset{\mathbb{N}_0} \otimes \mathcal{L})$ and transition probability function $\alpha\colon \mathbb{N}_0 \times \mathbb{R} \to \Prob{\mathcal{A} \times \mathbb{N}_0 \times \mathbb{R}}$ which we define in two steps. For all $(t,z) \in \mathbb{N}_0 \times \mathbb{R}$ with $t < T$ and all measurable sets $S \in \powerset{\mathcal{A}} \otimes \powerset{\mathbb{N}_0} \otimes \mathcal{L}$ we set \begin{align*} \alpha(t,z)(S) := \Int[S]{f_{(t,z)}}[(\# \otimes \# \otimes \lambda)] \end{align*} where the $(\#\otimes\#\otimes\lambda)$-density $f_{(t,z)}$ is \begin{align*} f_{(t,z)}\colon \mathcal{A} \times \mathbb{N}_0 \times \mathbb{R} \to \mathbb{R}^+, (a, t', x) \mapsto \begin{cases} \chi_{(-\infty,z]}(x) \cdot \phi_{z, 1/(t+1)}(x), & a = L \wedge t' = t+1\\ \chi_{[z, +\infty)}(x) \cdot \phi_{z, 1/(t+1)}(x), & a = R \wedge t' = t+1\\ 0, & \text{else.} \end{cases} \end{align*} Thus in the first two cases the density is the left (or right) half of the Gaussian density function with expected value $\mu = z$ and standard deviation $\sigma = 1/(t+1)$ and the constant zero function in all other cases. For the remaining $(t,z) \in \mathbb{N}_0 \times \mathbb{R}$ with $t \geq T$ we define the transition probability function to be \begin{align*} \alpha(t,z):= \delta_{(N, t+1, z)}^{\mathcal{A} \times \mathbb{N}_0 \times \mathbb{R}}\,. \end{align*} We observe that for $(t,z) \in \mathbb{N}_0 \times \mathbb{R}$ with $t < T$ we have $\P{N}{(t,z)}{\mathbb{N}_0 \times \mathbb{R}} = 0$ and \begin{align*} \P{L}{(t,z)}{\mathbb{N}_0 \times \mathbb{R}} = \Int[-\infty][z]{\phi_{z,1/(t+1)}(x)} = \frac{1}{2} = \Int[z][\infty]{\phi_{z,1/(t+1)}(x)} = \P{R}{(t,z)}{\mathbb{N}_0\times \mathbb{R}}. \end{align*} For $t \geq T$ we have $\P{N}{(t,z)}{\mathbb{N}_0 \times \mathbb{R}} = 1$ and $\P{L}{(t,z)}{\mathbb{N}_0 \times \mathbb{R}} = \P{R}{(t,z)}{\mathbb{N}_0 \times \mathbb{R}} = 0$. When we combine these results we obtain the trace measure. For $t <T$ we get \begin{align*} \mathbf{tr}(t,z) = \sum\limits_{u \in \set{L,R}^{T-t}} \left(\frac{1}{2}\right)^{T-t} \!\cdot \delta_{uN^\omega}^{\mathcal{A}^\omega} \end{align*} and for $t \geq T$ the trace measure is $\mathbf{tr}(t,z) = \delta_{N^\omega}^{\mathcal{A}^\omega}$. Obviously the trace measure does not depend on $z$, i.e. $\mathbf{tr}(t,z_1) = \mathbf{tr}(t, z_2)$ for all $t \in \mathbb{N}$ and all $z_1, z_2 \in \mathbb{R}$. Moreover, there is a simple finite state system which is trace equivalent to this system. 
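As a quick numerical cross-check of this claim, and of the independence from $z$, the following Monte Carlo sketch (again our illustration in Python, with \texttt{numpy} assumed) simulates the game and tallies the announced words; the finite state system itself is described next.
\begin{verbatim}
# Monte Carlo sketch: the empirical word distribution matches tr(t, z)
# and does not depend on the start position z.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def play(t, z, T):
    word = []
    while t < T:
        x = rng.normal(z, 1.0 / (t + 1))     # jump with a full Gaussian ...
        word.append('L' if x <= z else 'R')  # ... and announce the side
        z, t = x, t + 1
    return ''.join(word)                     # afterwards only 'N' follows

T, n = 4, 100_000
for z in (0.0, 100.0):                       # two different start positions
    counts = Counter(play(2, z, T) for _ in range(n))
    print(z, {w: round(c / n, 3) for w, c in sorted(counts.items())})
# each word in {L,R}^2 occurs with relative frequency close to (1/2)^2
\end{verbatim}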
The finite system has the same alphabet $\ensuremath{\mathcal{A}}$, its state space is $(\set{0,\hdots,T},\powerset{\set{0, \hdots, T}})$, and the transition function $\alpha\colon \set{0,\hdots,T} \to \Prob{\ensuremath{\mathcal{A}}\, \times \set{0,\hdots, T}}$ is given as follows: \begin{center}\begin{tikzpicture}[node distance=1.8 and 2.5, on grid, shorten >=1pt, >=stealth', semithick] \begin{scope}[state, inner sep=2pt, minimum size=32pt] \draw node [draw] (q0) {$0$}; \draw node [draw, right=of q0] (q1) {$1$}; \draw node [draw, right=of q1] (q2) {$2$}; \draw node [draw, right=of q2] (q3) {$T-1$}; \draw node [draw, right=of q3] (q4) {$T$}; \end{scope} \begin{scope}[->] \draw (q0) edge[bend left] node[above] {$L, 1/2$} (q1); \draw (q0) edge[bend right] node[below] {$R, 1/2$} (q1); \draw (q1) edge[bend left] node[above] {$L, 1/2$} (q2); \draw (q1) edge[bend right] node[below] {$R, 1/2$} (q2); \draw (q3) edge[bend left] node[above] {$L, 1/2$} (q4); \draw (q3) edge[bend right] node[below] {$R, 1/2$} (q4); \draw (q4) edge[loop right] node[right] {$N, 1$} (q4); \end{scope} \draw (q2) edge[dashed] node {} (q3); \end{tikzpicture}\end{center} i.e. for $t < T$ we define \begin{align*} \alpha(t) = \frac{1}{2} \cdot \left(\delta_{(L, t+1)}^{\ensuremath{\mathcal{A}} \times \set{0,\hdots,T}} + \delta_{(R, t+1)}^{\ensuremath{\mathcal{A}} \times \set{0,\hdots,T}}\right) \end{align*} and for $t = T$ we define $\alpha(t) = \delta_{(N,T)}^{\ensuremath{\mathcal{A}} \times \set{0,\hdots,T}}$. \end{exa} \section{Conclusion, Related and Future Work} We have shown how to obtain coalgebraic trace semantics for generative probabilistic transition systems in a general measure-theoretic setting, thereby allowing uncountable state spaces and infinite trace semantics. In particular, we have presented final coalgebras for four different types of probabilistic systems. There is a huge body of work on Markov processes and probabilistic transition systems, but only part of it deals with behavioral equivalences, as in our setting. Even when the focus is on behavioral equivalences, so far usually bisimilarity and related equivalences have been studied (see for instance \cite{larsenskou89}), neglecting the very natural notion of trace equivalence. Furthermore, many papers restrict themselves to countable state spaces and discrete probability theory. Our work is clearly inspired by \cite{hasuo}, which presents the idea of obtaining trace equivalence by considering coalgebras in suitable Kleisli categories; we generalize their instantiation of generative probabilistic systems to a general measure-theoretic setting and consider new types of systems. Different from the route we took in this paper, another option might have been to extend the general theorem (Theorem~3.3) of \cite{hasuo}. The theorem gives sufficient conditions under which a final coalgebra in a Kleisli category coincides with an initial algebra in the underlying category $\mathbf{Set}$. This theorem is given for Kleisli categories over $\mathbf{Set}$ and requires that the Kleisli category is $\mathbf{Cppo}$-enriched, i.e., each homset carries a complete partial order with bottom and some additional conditions hold. This theorem is non-trivial to generalize. First, it would be necessary to extend it to $\mathbf{Meas}$ and second -- and even more importantly -- the requirement of the Kleisli category being $\mathbf{Cppo}$-enriched is quite restrictive.
For the case of the sub-probability monad a bottom element exists (the arrow which maps everything to the constant $0$-measure), but this is not the case for the probability monad, which is the more challenging part, giving rise to infinite words. Hence we would require a different approach, which can also be seen from the fact that in the case of the probability monad the final coalgebra is \emph{not} the initial algebra in $\mathbf{Meas}$. The study of probabilistic systems using coalgebra is not a new approach. An extensive survey on the coalgebraic treatment of these systems can be found in \cite{Sokolova20115095}, including an overview of various different types of transition systems containing probabilistic effects alongside user-input, non-determinism and termination, extensions that we did not consider in this paper (apart from termination). A thorough consideration of coalgebras and especially theorems guaranteeing the existence of final coalgebras for certain functors on $\mathbf{Meas}$ is given in \cite{viglizzofinal}, but since all these are coalgebras in $\mathbf{Meas}$ and not in the Kleisli category over a suitable monad, the obtained behavioral equivalence is probabilistic Larsen-Skou \cite{larsenskou89} bisimilarity instead of trace equivalence and the results do not directly apply to our setting. Also, \cite{doberkat2007stochastic} and \cite{Pan09} give a very thorough and general overview of properties of labelled Markov processes, including the treatment and evaluation of temporal logics on probabilistic systems. However, the authors do not explicitly cover a coalgebraic notion of trace semantics. Infinite traces in a general coalgebraic setting have already been studied in \cite{Cirstea:2010:GIT:1841982.1842047}. However, this generic theory, once applied to probabilistic systems, is restricted to coalgebras with countable carrier, while our setting, which is undoubtedly specific and covers only certain functors and branching types, allows arbitrary carriers for coalgebras of probabilistic systems. As future work we plan to apply the minimization algorithm introduced in \cite{abhkms:coalgebra-min-det} and adapt it to this general setting, by working out the notion of canonical representatives for probabilistic transition systems. We are especially interested in comparing this to the canonical representatives for weak and strong bisimilarity presented recently in \cite{eisentrautetal2013}. Furthermore we plan to define and study a notion of probabilistic trace distance, similar to the distance measure (for bisimilarity) considered in \cite{bw:behavioural-distances,bw:behavioural-pseudometric}. We are also interested in algorithms for calculating this distance, perhaps similar to what has been proposed in \cite{chen} for probabilistic bisimilarity or the more recent on-the-fly algorithm presented in \cite{baccietal2013}. \section*{Acknowledgement} We would like to thank Paolo Baldan, Filippo Bonchi, Mathias Hülsbusch, Sebastian Küpper and Alexandra Silva for discussing this topic with us and giving us some valuable hints. Moreover, we are grateful for the detailed feedback from our reviewers of both the conference version \cite{KK12a} of this paper and the version at hand. \bibliographystyle{alpha}
\section*{} We have created an experimental procedure for determining the temperature coefficient of resistivity, $\alpha_R$, for introductory physics laboratories. As in the procedure from Henry [1], this method examines the relationship between temperature and resistivity to establish $\alpha_R$ within 10\% of the accepted value. \\ Electrical resistivity, $\rho$, varies with temperature according to: \begin{equation} \rho = \rho_o (1+ \alpha_R (T - T_o)) \end{equation} where $\rho_o$ is the resistivity at a given temperature $T_o$, $T$ is the temperature of the material, and $\alpha_R$ is the temperature coefficient of resistivity. For a wire of length $L$ and cross-sectional area $A$, the resistance $R$ is accordingly defined as \begin{equation} R= \rho \frac{L}{A} \ . \end{equation} While resistance will increase as a result of both increased length and increased resistivity, the increase in length provides a negligible increase in resistance. This is evident from observing that the thermal coefficient of resistivity is approximately two orders of magnitude larger than the coefficient of thermal expansion. As such, the change in resistivity is primarily responsible for the increase in resistance. Hence, $R$ will vary with temperature in the same way: \begin{equation} R=R_o(1+\alpha_R(T-T_o)) \end{equation} where $R_o$ is the resistance of the wire at a temperature $T_o$. When a current is applied through the wire, its temperature will also vary as a result of Joule heating, and the resistance of the wire can be determined from a given current, $I$, and voltage difference, $\Delta V$, by \begin{equation} R=\frac{\Delta V}{I} \ . \end{equation} Hence, by measuring the resistance as a function of temperature, we can determine $\alpha_R$ by plotting $R$ vs. $T-T_o$ and performing a linear fit using Eq. (3). To perform the experiment, we created a closed circuit (see Fig. 1) in which a carbon steel wire [2] is suspended under tension above a surface, as in most stringed instruments. Two digital multimeters were used to record the voltage across and current through a 0.016 inch (0.406 mm) diameter, 40-cm long wire. Temperature measurements were taken using liquid crystal thermometers [3] placed in thermal contact with the wire by fastening them to the wire with an adhesive backing. Two thermometers were used, one ranging from $14-31 ^o C$ and the other from $32-49 ^o C$, to provide an overall temperature range of $14-49 ^o C$. For the most accurate temperature readings, we found it essential to avoid all contact with the thermometers during the experiment. To collect the data, we applied different currents to the wire, ranging from $0.2 A$ to $1.0 A$. We used a BK Precision 1787B power supply that allows for digital control of the current. We found that using initial current steps of $0.2 A$, later reduced to $0.1 A$, created consistent temperature changes in the wire of $2-3 ^o C$. The experiment proved much more difficult to complete using an analog power supply because of the difficulty in creating the precise changes in current needed to produce a well-formed data set. After allowing around 30 seconds for the system to reach thermal equilibrium, the recorded temperature of each trial was taken as the uppermost visible temperature reading of the liquid crystal thermometer, as seen in Fig. 2. We recorded the temperature of, current through, and voltage across the wire, and calculated its resistance using Eq. (4).
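As described next, extracting $\alpha_R$ from these measurements amounts to a straight-line fit of Eq. (3) to the $R$ vs. $T-T_o$ data. A minimal sketch of that computation follows (in Python, with numpy assumed; the arrays hold illustrative placeholder readings, not our recorded data):
\begin{verbatim}
# Sketch of the analysis: compute R via Eq. (4) and fit Eq. (3).
import numpy as np

T = np.array([16.0, 19.0, 22.0, 25.0, 28.0])       # temperature (deg C)
I = np.array([0.2, 0.4, 0.6, 0.8, 1.0])            # applied current (A)
V = np.array([0.149, 0.301, 0.457, 0.616, 0.779])  # measured voltage (V)

R = V / I                  # Eq. (4): resistance at each temperature
dT = T - T[0]              # temperature change relative to T_o

slope, R0 = np.polyfit(dT, R, 1)   # linear fit of Eq. (3)
alpha_R = slope / R0               # slope = R_o * alpha_R
print(R0, alpha_R)                 # approx. 0.745 ohm and 0.004 1/K
\end{verbatim}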
By graphing the resistance of the wire as a function of $T-T_o$, where $T_o$ is the temperature of the wire with the lowest current applied, we can then apply a linear fit to the data, with the y-intercept yielding $R_o$ and the slope giving $R_o \alpha_R$, according to Eq. (3). Figure 3 shows example experimental results with the fit giving $R_o=0.744 \Omega$ and $\alpha_R = 0.0039 K^{-1}$. This is within 5\% of the accepted value of $\alpha_R = 0.0041 K^{-1}$ [4]. Repeated experiments found these results to be reproducible, with $\alpha_R$ consistently measured within 10\% of the accepted value. To get the best results, we found that the resistance of the wire should be at least $0.3\Omega$ to allow for accurate resistance measurements. Furthermore, the wires need to be thick enough to support the thermometers. As such, we achieved the best results when using steel as opposed to other materials with lower resistivity and tensile strength, such as copper and aluminum. This experiment can be completed in less than 2 hours and uses equipment that is commonly present in a typical introductory physics lab, with the exception of relatively low-cost supplies such as music wire and liquid crystal thermometers. It also reinforces key ideas from introductory physics such as conservation of energy (electrical energy becoming thermal energy), Ohm's Law, and the temperature dependence of resistance. \section*{References} 1. D. Henry, ``Resistance of a wire as a function of temperature", The Physics Teacher 33, 96-97 (1995) \url{https://doi.org/10.1119/1.2344149}\\ 2. Precision Brand Music Wire (0.016 inch diameter); UPC. No. 21016 \\ 3. TelaTemp reversible LCT strip model 416-2 ($14-31 ^o C$) and 416-3 ($32-49 ^o C$) \\ 4. S. Yafei, N. Dongjie, and S. Jing, 4th IEEE Conference on Industrial Electronics and Applications, 368-372 (2009). \begin{figure}[h] \centering \includegraphics[scale=.4]{paper_1_figure_3.png} \caption{The experimental setup (bottom) with the wire suspended above the box, and a circuit diagram (top). The circuit consists of two multimeters set up to measure current and voltage and a digital power supply. The liquid crystal thermometers are affixed to the top of the wire.} \end{figure} \begin{figure}[h] \centering \includegraphics[width=6cm, height=9cm,angle= 0]{example_reading.jpg} \caption{An example of a temperature reading corresponding to $37^{\circ} C$ ($98^{\circ} F$), according to the procedure of reading the uppermost visible indication of the liquid crystal thermometer (the yellow bar spanning $37^{\circ} C$).} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8cm, height=6cm, angle= 0]{FINAL.png} \caption{Plot of resistance against the associated change in temperature for currents from 0.2 $A$ to 1.0 $A$. A linear fit of the data yields $\alpha_R = 0.0039 K^{-1}$.} \end{figure} \end{document}
\section{Introduction} To process autonomous tasks, some blockchains execute digital programs called \emph{smart contracts} that become immutable once deployed \citep{zheng2020overview}. Smart contracts are autonomous programs that can (1) customize contracting rules and functions between contractors and (2) facilitate the transfer of irreversible and traceable digital cryptocurrency transactions \citep{hewa2020survey}. Smart contracts were born and have continued to grow with noteworthy achievements in various fields, including industry, finance, and economics. Due to the immaturity of blockchain technology, vulnerabilities in smart contracts can have severe consequences and result in substantial financial losses. For instance, the infamous DAO attack in 2016 \citep{Dao1} resulted in the theft of around 50 million dollars through the exploitation of a re-entrancy vulnerability in the Distributed Autonomous Organizations (DAO) contract\footnote{https://www.coindesk.com/understanding-dao-hack-journalists}. Therefore, understanding vulnerabilities in smart contracts is critical to perceiving the threats they represent, e.g., to develop predictive models or software engineering tools that can predict or detect threats with high precision \citep{seacord2005structured}. Furthermore, classifying smart contract vulnerabilities enables researchers and practitioners to better understand their frequency and trends over time. Existing studies attempt to categorize vulnerabilities in Ethereum smart contracts~\cite{atzei2017survey,dingman2019defects,chen2020defining,zhang2020framework,rameder2021systematic}. While they provide valuable insights into existing security issues in smart contracts, they fall short in several ways: \begin{enumerate} \item \textbf{There is no unified view on smart contract vulnerabilities.} For instance, \citep{atzei2017survey} classify security vulnerabilities according to their network level, \citep{chen2020defining} classify defects in smart contracts according to their impact on quality attributes, and \citep{zhang2020framework} categorize smart contract defects according to sources of error. While these dimensions are all relevant, they are orthogonal and cannot easily be compared. \item \textbf{Several studies mix different classification dimensions.} For instance, \citep{zhang2020framework} focus primarily on error sources (e.g., data and interface errors), but also include categories concerned with effects/impact on quality attributes (e.g., security and performance). This leads to classifications where categories are not orthogonal and, since these dimensions are not discussed, often confusing. \item \textbf{Vulnerabilities are classified into broad categories.} Several studies classify vulnerabilities into broad categories, which results in two main shortcomings. First, a vulnerability can be assigned to more than one category. Second, the differences among the categories are too general to be useful to reason about the vulnerabilities. \item \textbf{Data sources differ widely.} Several existing studies rely only on vulnerabilities published in academic literature or white literature, e.g., \citep{atzei2016survey} and \citep{alharby2017blockchain}, while \citep{chen2020defining} used posts on StackExchange, and \citep{zhang2020framework} used a mix of academic literature and GitHub project data. This makes the comparison challenging or even impossible.
\item \textbf{Important data sources are omitted in existing classifications.} To our knowledge, no existing study uses established vulnerability and defect registries such as the Smart Contract Weakness Registry (SWC)\footnote{https://swcregistry.io/} and vulnerabilities in the Common Vulnerability and Exposure (CVE)\footnote{https://cve.mitre.org/}. \end{enumerate} To address these gaps, the contribution of this study is twofold: \begin{enumerate} \item To unify existing classifications of smart contract vulnerabilities by providing an overview of the different classification dimensions, and by mapping existing classifications to a single classification scheme using error source and impact as dimensions of the vulnerabilities. \item To complement existing studies by classifying smart contract vulnerabilities extracted from a variety of important data sources according to the different dimensions presented in existing work. \end{enumerate} We extracted and analyzed data related to Ethereum smart contracts written in Solidity from four data sources, i.e., Stack Overflow, GitHub, CVE and SWC. Using a card sorting approach, we devised a classification scheme that uses the error source and impact of a vulnerability as dimensions. Furthermore, we mapped existing classifications to this scheme and analyzed the frequency distribution of the defined categories per data source. The resulting classification scheme consists of 11 categories describing the error source, and 13 categories describing potential impacts. Our findings show that language-specific coding and structural data flow categories are the dominant categories of vulnerabilities in Ethereum smart contracts. However, the frequency distributions of the error source categories differ widely across data sources. With respect to the existing classifications, we find that the majority of sources use broad categories that are applicable to many vulnerabilities, such as ``security'' or ``availability''. The remainder of this paper is organized as follows. Section~\ref{sec:background} presents the background on Ethereum smart contracts, Solidity and the used data sources. Section~\ref{sec:related_work} discusses how existing work relates to our study and what gaps exist. Section~\ref{fig:method} describes the methodology that we followed. In Section~\ref{sec:results}, we report our findings in terms of the obtained classification and mapping to existing work. We then discuss our findings in Section~\ref{sec:discussion} and potential threats to validity in Section~\ref{sec:threats}. The paper is concluded in Section~\ref{sec:conclusion}. \section{Background} \label{sec:background} In this section, we discuss the background of our work. Specifically, we discuss definitions used in this paper, the Ethereum environment, and existing vulnerability and weakness databases. \subsection{Definitions} Before we can discuss a classification scheme for smart contracts' vulnerabilities, it is fundamental that we have a sufficiently specific definition of what we are classifying. There have been efforts to formally define concepts such as vulnerability in SE. However, we will use the formal definitions of vulnerability and weakness as given in the Ethereum Improvement Proposals (EIPs)\footnote{https://eips.ethereum.org/}: \begin{itemize} \item \textbf{Vulnerability:} ``A weakness or multiple weaknesses which directly or indirectly lead to an undesirable state in a smart contract system'' (\cite{EIP1470} 1470).
A vulnerable contract is not necessarily an exploited one \citep{perez2019smart}. Moreover, as the contract is a digital agreement between two or more parties, exploiting the contract is not always done by external malicious actors. A vulnerable contract can also be exploited by one of the contracting parties, such as the contract owner, who can use vulnerabilities to gain more profits, as in CVE-2018-13783\footnote{https://nvd.nist.gov/vuln/detail/CVE-2018-13783}. It can likewise be exploited by any of the contractors, by miners, or even by the developers who implemented the contract (e.g., CVE-2018-17968 \footnote{https://nvd.nist.gov/vuln/detail/CVE-2018-17968}). \item \textbf{Weakness:} ``a software error or mistake in contract code that in the right conditions can by itself or coupled with other weaknesses lead to a vulnerability'' (\cite{EIP1470} 1470). \end{itemize} What distinguishes smart contract code weaknesses from weaknesses in other software applications is that any smart contract code instruction costs a specific amount of gas (see Section~\ref{systemgas}). This means that even if the weakness is not exploited, it results in losing Ether\footnote{Ether is Ethereum's corresponding cryptocurrency.} whenever it is triggered and executed by the contract itself, which makes the contract vulnerable even without an exploit. Thus, it is of high importance to study smart contracts' weaknesses along with smart contracts' vulnerabilities. \subsection{\textbf{Ethereum Virtual Machine (EVM)}} Ethereum\footnote{https://ethereum.org/en/} is a globally open decentralized blockchain framework that supports smart contracts, referred to as Ethereum Virtual Machine (EVM). Because EVM hosts and executes smart contracts, it is often referred to as the programmable blockchain. EVM contracts reside on the blockchain in a Turing-complete bytecode language; however, they are implemented by developers using high-level languages such as Solidity or Vyper and then compiled to bytecode to be uploaded to the EVM. Users on the EVM can create new contracts, invoke methods in a contract, and transfer Ether. All of the transactions on EVM are recorded publicly and their sequence determines the state of each contract and the balance of each user. In order to ensure the correct execution of smart contracts, EVM relies on a large network of mutually untrusted peers (miners) to process the transactions. EVM also uses the Proof-of-Work (PoW) consensus protocol to ensure that a trustworthy third party (e.g., banks) is not needed to validate transactions, fostering trust among users to build a dependable transaction ledger. EVM gained remarkable popularity among blockchain users as it was the first framework to support smart contracts for managing digital assets and building decentralized applications (DApps) \citep{khan2020ethereum}. \subsection{\textbf{Ethereum Smart Contracts}} A smart contract is a general-purpose digital program that can be deployed and executed on the blockchain. An Ethereum smart contract is identified by a unique 160-bit hexadecimal string which is the contract address. It is written in a high-level language, either Solidity or Vyper. In this paper, we focus on Ethereum smart contracts written in Solidity because it is the most popular language in the EVM community and most of the deployed contracts on EVM are written using Solidity \citep{bhat2017probabilistic,badawi2020cryptocurrencies}. A smart contract can call other accounts, as well as other contracts on the EVM.
For example, it can call a function in another contract and send Ether to a user account. In the EVM, internal transactions (i.e., calls from within a smart contract) do not create new transactions and are therefore not directly recorded on-chain. \subsection{\textbf{Ethereum Gas System}} \label{systemgas} In order to execute a smart contract, a user has to send a transaction (i.e., make a function call) to the target contract and pay a transaction fee that is measured in units of gas, referred to as the gas usage of a transaction. The transaction fee is derived from the contract’s computational cost, i.e., the type and the number of executed instructions during runtime. Each executed instruction in the contract consumes an agreed-upon amount of gas. Instructions that need more computational resources cost more gas than instructions that need fewer resources. This helps secure the system against denial-of-service attacks and prevents flooding the network. Hence, the gas system in the EVM has two main benefits: (1) it motivates developers to implement efficient applications and smart contracts and (2) it compensates miners who are validating transactions and executing the needed operations for their contributed computing resources. The transaction fee equals \( \mathit{gas\ used} \times \mathit{gas\ price} \). For example, a plain Ether transfer consumes 21{,}000 gas; at a gas price of 20 Gwei ($2 \times 10^{10}$ Wei), the resulting fee is $21{,}000 \times 2 \times 10^{10} = 4.2 \times 10^{14}$ Wei, i.e., 0.00042 Ether. The smallest unit of the gas price is Wei (1 Ether $= 10^{18}$ Wei). Therefore, Ether can be thought of as the fuel for operating Ethereum. \subsection{\textbf{Solidity}} Solidity\footnote{https://docs.soliditylang.org/} is a domain-specific language (DSL) that is used to implement smart contracts on the Ethereum blockchain. It is the most widely used open-source programming language for implementing blockchains and smart contracts. Although it was originally designed for Ethereum, it can also be used on other blockchain platforms such as Hyperledger and Monax\footnote{https://monax.io/}. Solidity is statically typed, which requires specifying the type of all variables in the contract. It does not support any ``undefined'' or ``null'' values, and any newly defined variable has a default value based on its type. Smart contracts written in Solidity are organized in terms of subcontracts, interfaces, and libraries. They may contain state variables, functions, function modifiers, events, struct types, and enum types. Also, Solidity contracts can inherit from other contracts, and can call other contracts. In Solidity, there are two kinds of function calls: \textit{internal function calls}, which do not create an actual EVM call, and \textit{external function calls}, which do. Due to this distinction, Solidity supports four types of visibility for functions and state variables: \begin{itemize} \item \textit{External} functions can be called from other contracts using transactions as they are part of the contract interface. They cannot be called internally (e.g., \emph{externalFunction()} does not work, while \emph{this.externalFunction()} works). State variables cannot be external. \item \textit{Public} functions can be called internally or via message calls, and they are part of the contract interface. For public state variables, an automatic getter function is generated by the Solidity compiler to avoid high gas costs when returning an entire array. \item \textit{Internal} functions and state variables can only be accessed internally (i.e., from within the current contract or contracts deriving from it).
\item \textit{Private} functions and state variables are only visible within the contract they are defined in, and not in derived contracts. \end{itemize} State variables can be declared using the keywords \textit{constant} or \textit{immutable}. Immutable variables can be assigned at construction time, while constant variables must be fixed at compile time. Functions can additionally be declared as follows. \begin{itemize} \item \textit{Pure Functions} promise not to read or modify the state. They can use the \textit{revert()} and \textit{require()} functions to revert potential state changes when an error occurs. \item \textit{View Functions} promise not to modify the state, e.g., by writing to state variables. \item \textit{Receive Ether Functions} can exist at most once in a contract. They cannot have any arguments and cannot return any value, and must be declared with external visibility and payable state mutability, as in \textit{receive() external payable\{ ...\}}. These functions are executed on plain Ether transfers (e.g., using \textit{.send()} or \textit{.transfer()}), i.e., on calls to the contract with empty \textit{calldata}. \item \textit{Fallback Functions} are similar to \textit{Receive Ether} functions, as any contract can have at most one such function. It must have external visibility, and is executed on a call to the contract if none of the other functions match the given function signature, or if no data was supplied at all and there is no receive function. The \textit{fallback} function always receives data, but in order to also receive Ether it must be marked as \textit{payable}. \end{itemize} Finally, any function that is supposed to receive Ether has to be declared as \textit{payable}. An example of a Solidity contract is shown in Figure~\ref{fig:SolidityVotingContract}. It shows a voting contract as explained in Solidity's official documentation \citep{Soliditydocumentation}. \begin{figure}[ht] \centering \includegraphics[width=.5\textwidth, bb=0 0 678 1091]{SolidityCode.png} \caption{Voting Contract Example Written in Solidity \citep{Soliditydocumentation}} \label{fig:SolidityVotingContract} \end{figure} \subsection{\textbf{Common Vulnerabilities and Exposures (CVE) and National Vulnerability Database (NVD)}} CVE is a list of publicly disclosed vulnerabilities and exposures that is maintained by MITRE\footnote{https://cve.mitre.org/cve/}. It feeds into the NVD\footnote{http://nvd.nist.gov/}, so both are synchronized at all times. NVD is a comprehensive repository with information about all publicly known software vulnerabilities and includes all public sources of vulnerabilities (e.g., alerts from SecurityFocus\footnote{https://www.securityfocus.com/}). NVD also provides more information about the CVE list's vulnerabilities, such as severity scores and patch availability. It also provides an easy mechanism to search for vulnerabilities using various criteria. Both CVE and NVD are sponsored by the US federal government. An example of a reported smart contract vulnerability in NVD is shown in Table~\ref{table:cveExample}. It also shows the impact metrics (Common Vulnerability Scoring System - CVSS), vulnerability types (Common Weakness Enumeration - CWE), applicability statements (Common Platform Enumeration - CPE), and other relevant meta-data. Each CVE in the list also has a unique identifier that shows the affected software product, sub-products, and various versions.
\begin{table}[htbp] \centering \caption{An example of a vulnerability in NVD} \label{table:cveExample} \begin{tabular}{|p{3.4 cm} |p{4.4 cm}|} \hline \hline CVE ID & CVE-2021-3006\\ NVD Published Date & 01/03/2021 \\ Source & CVE MITRE \\ Description & The breed function in the smart contract implementation for Farm in Seal Finance (Seal), an Ethereum token, lacks access control and thus allows price manipulation, as exploited in the wild in December 2020 and January 2021. \\ CVSS & 7.5 \\ Weakness Enumeration & CWE-863 \\ CWE Name & Incorrect Authorization \\ Hyperlink & Link\footnote{https://etherscan.io/address/0x33c2da7fd5b125e629b3950f3c38d7f721d7b30d} \\ Integrity & High\\ Impact score & 3.6 \\ \hline \hline \end{tabular} \end{table} \subsection{\textbf{Common Weakness Enumeration (CWE)}} CWE\footnote{http://cwe.mitre.org} is a community-developed list of common software security weaknesses. It is considered a comprehensive online dictionary of weaknesses that have been found in computer software. It also serves as a baseline for weakness identification, mitigation, and prevention efforts. Its primary purpose is to promote the effective use of tools to identify, find, and repair vulnerabilities and exposures in computer software before the programs are distributed to the public. \subsection{\textbf{Smart Contract Weakness Classification Registry (SWC)}} SWC is an implementation of the weakness classification scheme that is proposed in the \textit{Ethereum Improvement Proposals}\footnote{https://eips.ethereum.org/}. It is also aligned with the terms and structures described in the CWE. Each SWC has an identifier (ID), a weakness title, a CWE parent, and a list of related code samples. \section{Related Work} \label{sec:related_work} Multiple studies that classify smart contract vulnerabilities have been published since the first attack on Ethereum smart contracts in 2016 \citep{daian2016analysis}, e.g., \cite{atzei2016survey}, \cite{alharby2018blockchain}, \cite{dingman2019defects}, and \cite{zhang2020framework}. This section summarizes these studies in relation to our work. \subsection{Literature-Based Vulnerability Classification} Several studies on Ethereum smart contract vulnerabilities classify vulnerabilities reported in blogs, academic literature, and white papers, i.e., \cite{atzei2016survey,alharby2017blockchain,huang2019smart,sanchez2018raziel,chen2020survey,praitheeshan2019security,dingman2019defects}. \cite{atzei2016survey} propose a classification scheme comprising three categories to classify security vulnerabilities in Ethereum smart contracts. In addition to blogs and academic literature, the authors also employ their own practical experience as a resource for their classification. Vulnerabilities are classified into language-related issues, blockchain issues, and EVM bytecode issues. The classification is followed by a brief discussion on potential attacks that result in stealing money or causing other damage. \cite{alharby2017blockchain} also use academic literature as the main source of data, classifying the identified issues into four main classes: codifying, security, privacy, and performance issues. Additionally, they discuss proposed solutions based on suggestions provided by smart contract analysis tools. The proposed classification suffers from a significant overlap between categories. For example, codifying issues can lead to security and privacy issues, as in the case of the popular re-entrancy vulnerability (classified as a security issue in the paper).
\cite{huang2019smart} report a literature review of smart contract security from a software lifecycle perspective. The authors analyze blockchain features that may result in security issues in smart contracts and summarize popular smart contracts’ vulnerabilities based on four development phases, i.e., design, implementation, testing before deployment, and monitoring and analysis. Finally, they classify 10 vulnerabilities into three broad categories (i.e., Solidity, blockchain, and misunderstanding of common practices). Unfortunately, there is no explanation of how or on what basis these categories were designed. \cite{dingman2019defects} study well-known vulnerabilities reported in white and gray literature and classify them according to the National Institute of Standards and Technology Bugs Framework (NIST-BF)\footnote{https://samate.nist.gov/BF/} into security, functional, developmental, and operational vulnerabilities. The results show that the majority of vulnerabilities fall outside the scope of any category. However, the categories and the classification process are not defined or described in the paper. Similar to \cite{dingman2019defects}, \cite{samreen2021survey} survey and map eight popular smart contracts’ vulnerabilities in the literature to the NIST-BF. Their results show that only three of the studied eight vulnerabilities could be matched with two NIST-BF classes. They also suggest a preventive technique per classified vulnerability. Finally, a map between existing analysis tools and the eight vulnerabilities is provided in the paper. \cite{praitheeshan2019security} classify 16 smart contracts’ vulnerabilities reported in literature based on their internal mechanisms. The authors use three categories, i.e., blockchain, software security issues, and Ethereum and Solidity vulnerabilities. Finally, \cite{chen2020survey} survey Ethereum system vulnerabilities, including smart contract vulnerabilities reported in literature, classifying them according to two dimensions. First, they group vulnerabilities into four groups according to their location, i.e., the application, data, consensus, or network layer. Second, they group vulnerabilities according to their cause into Ethereum design and implementation, smart contract programming, Solidity language and tool-chain, and human factors. This classification focuses on a few locations at which a vulnerability might occur, while omitting others. For instance, the vulnerability could be located in the source code of the smart contract itself or in its dependencies. Furthermore, there is no clear indication of how these categories were defined or whether a systematic classification process was followed. \subsection{Repository-Based Vulnerability Classification} In addition to classifications that are based on published vulnerabilities, two papers attempt classifications based on data extracted from public repositories such as Stack Exchange, i.e., \cite{chen2020defining,zhang2020framework}. \cite{chen2020defining} collect smart contract vulnerabilities from discussions available on the Ethereum Stack Exchange, classifying them based on five high-level aspects according to their consequences, i.e., security, availability, performance, maintainability, and reusability. To evaluate whether the selected vulnerabilities are harmful, the authors conduct an online survey to collect feedback from practitioners.
The proposed categories have the drawback that not all vulnerabilities can be clearly placed in a single category, i.e., one vulnerability could have various consequences. \cite{zhang2020framework} classify smart contract vulnerabilities from both literature and open projects on GitHub. The authors classify extracted vulnerabilities into 9 categories based on an extension of the IEEE Standard Classification for Software Anomalies\footnote{https://standards.ieee.org/standard/1044-2009.html}. Finally, they propose a four-category classification scheme for the impact of a vulnerability, i.e., unwanted function executed, performance, security, and serviceability. This classification is based on GitHub vulnerability reports, and some categories are too general to be useful in any detailed engineering analysis (e.g., the security category). \subsection{Tool-Based Vulnerability Detection} As a last category of related work, several publications study existing tools to detect vulnerabilities in smart contracts, and propose classifications based on the tools' capabilities, i.e., \cite{khan2020survey,rameder2021systematic}. \cite{khan2020survey} provide an overview of current smart contract vulnerabilities and testing tools. In addition, they propose a taxonomy for smart contract vulnerabilities. The proposed taxonomy consists of seven categories, i.e., inter-contractual vulnerabilities, contractual vulnerabilities, integer bugs, gas-related issues, transactional vulnerabilities, deprecated vulnerabilities, and randomization vulnerabilities. The authors then provide a mapping between the surveyed vulnerabilities and the available detection tools. Unfortunately, this categorization is not actually a classification scheme, in the sense that it fails to identify a category unique to each vulnerability, and no structured method is provided to show how the classification was performed. In a Master's thesis, \cite{rameder2021systematic} provides a comprehensive overview of state-of-the-art tools that analyze Ethereum smart contracts, together with an overview of known security vulnerabilities and the available classification schemes in the literature. The studied vulnerabilities are classified into 10 novel categories, i.e., malicious environment, environment dependency/blockchain, exception \& error handling disorders, denial of service, gas related issues, authentication, arithmetic bugs, bad coding quality, environment configuration, and deprecated vulnerabilities. However, the proposed categories cover different dimensions, e.g., vulnerability consequences as well as programming errors. Finally, some categories are not defined in the work. \subsection{Summary and Research Gap} In summary, various attempts to classify smart contract vulnerabilities have been published, both on reported vulnerabilities and by mining software repositories. A third line of research focuses on studying smart contract vulnerability detection tools, classifying what vulnerabilities they are able to detect. These papers suffer from four flaws. First, they rely exclusively on vulnerabilities reported in literature and might, therefore, provide a skewed image, e.g., \cite{atzei2016survey}. Second, they propose classifications that mix different concerns or dimensions, such as consequences of exploiting a vulnerability and the source of error in \cite{rameder2021systematic}.
Third, they use overly broad categories that do not allow for detailed reasoning, such as the privacy and security categories in \cite{alharby2017blockchain} and \cite{zhang2020framework}. Finally, they provide only a limited view on smart contract vulnerabilities due to focusing on a single dimension, such as the consequences in \cite{chen2020survey}. Therefore, there is a need to unify existing taxonomies and classification schemes and provide a reference taxonomy that includes several dimensions such as root cause, impact, or scope \cite{vacca2021systematic}. The aim of this paper is to arrive at such a classification by integrating existing work and complementing it with additional data from software repositories and well-known sources such as the CVE and SWC registries. \section{Research Method} \label{sec:method} This section presents the method we followed in this paper, as shown in Figure~\ref{fig:method}. It includes the study setup, data sources, data cleaning, and data analysis. \begin{figure*}[htbp] \centering \includegraphics[width=0.94\textwidth, bb=0 0 791 518]{journalapproach2-eps-converted-to.pdf} \caption{Empirical Study Method} \label{fig:method} \end{figure*} \subsection{Study setup} In this study, we aim to answer the following research questions (RQs): \begin{itemize} \item \emph{RQ1. What categories of vulnerabilities appear in smart contracts?} Goal: To comprehensively categorize the vulnerabilities that appear in Ethereum smart contracts, to study the different dimensions of the problem, and to map literature-based classifications of Ethereum smart contract vulnerabilities and unify them in one thorough classification. A thorough classification scheme makes it possible to collect statistics on the frequency and trends of vulnerabilities, as well as to evaluate countermeasures. \item \emph{RQ2. Are the frequency distributions of smart contract vulnerability categories similar across all studied data sources?} Goal: To investigate whether all data sources have the same frequency distributions of vulnerabilities. If so, we can rank vulnerability categories from the most common to the least common. If not, we can reason about the skew or bias when different sources are used. Further research can then find solutions for the most common vulnerabilities, and more effort could be put into addressing the most prevalent categories before deploying contracts to the blockchain. \item \emph{RQ3. What impact do the different categories of smart contract vulnerabilities and weaknesses have?} Goal: To investigate the impacts of smart contract vulnerabilities and code weaknesses, to define the various dimensions of impact classifications, and to map literature-based impact classifications into a thorough impact classification of smart contract vulnerabilities and code weaknesses. More effort can then be put into vulnerabilities and code weaknesses with critical impacts. \end{itemize} \subsubsection{Data Sources} To answer the proposed research questions, we analyzed and studied smart contract code vulnerabilities and weaknesses from four primary sources. Two of these sources are widely used by developers (i.e., GitHub and Stack Overflow), and two are well-known publicly accessible sources (i.e., SWC and NVD) for reporting vulnerabilities and weaknesses in Ethereum smart contracts and other software systems. Table~\ref{table:datasetSummary} shows the final number of data records during and after data pre-processing.
\begin{table}[htbp] \centering \caption{Summary of the sampled vulnerabilities in the selected data sources} \label{table:datasetSummary} \begin{tabular}{||p{2.6 cm} |p{1.4 cm} |p{0.9 cm} | p{0.6 cm} | p{0.8 cm}||} \hline Data Source & Stack Overflow & GitHub & CVE & SWC\\ [0.5ex] \hline\hline \# of collected data & 2065 & 3160 & 523 & 37 \\ \hline \# of data records after preprocessing steps & 1490 & 1160 & 523 & 37 \\ \hline Final \# of vulnerabilities & 765 & 818 & 523 & 37 \\ \hline \end{tabular} \end{table} \textbf{Data-source 1:} We opted for extracting data from Stack Overflow, as it has successfully been leveraged in existing work on smart contracts~\citep{ayman2019smart, aymansmart, chen2020defining}, and in general Software Engineering research \citep{bajaj14,ponzanelli14,calefato15,chen16,ahasanuzzaman16}. We used Stack Overflow posts to study weaknesses and vulnerabilities in smart contracts. To do so, we extracted Q\&A posts tagged with ``smart contract'', ``Solidity'', and ``Ethereum'', posted between January 2015 and April 2021. We discarded posts with the ``Ethereum'' tag, but without the ``Solidity'' or ``smart contract'' tags. To retrieve the related information for each post, we used Scrapy~\cite{Scrapy}, an open-source Python library that facilitates Web crawling. For each of the 2065 extracted posts, we extracted the post title, URL, related tags, post time, and accepted answer time. \begin{figure}[ht] \centering \includegraphics[width=.4\textwidth, bb=0 0 908 640]{VULT.png} \caption{Vulnerability Example from GitHub} \label{fig:Githubvul} \end{figure} \textbf{Data-source 2:} We used GitHub as the second data source for our study, as it is the most popular social coding platform \citep{cosentino2017systematic}. Moreover, many studies on Ethereum smart contracts and smart contract analysis tools have reported findings based on data published in GitHub open-source projects (e.g., \cite{durieux2020empirical}). We studied vulnerabilities and code weaknesses reported in open-source projects that contain Ethereum smart contracts written in Solidity. We used the keywords ``smart contract'', ``Solidity'', and ``Ethereum'' to search for these projects. Then, we searched for vulnerabilities and weaknesses based on the fixes in each project, using the keywords ``Vulnerability'', ``bug'', ``defect'', ``issue'', ``problem'', and ``weakness''. We also studied the official Ethereum GitHub repository. Figure~\ref{fig:Githubvul} shows an example of a reported smart contract vulnerability on GitHub. \textbf{Data-source 3:} We used the NVD search interface to collect and extract all reported CVEs on smart contracts and their CVSS scores until April 2021. We searched using the keywords ``smart contracts'', ``Solidity'', and ``Ethereum''. Then, we manually extracted the reported CVEs that are related to smart contracts. \textbf{Data-source 4:} We extracted all the reported code weaknesses in SWC until April 2021. For each code weakness, we extracted the ID, title, relationships, and test cases. \subsubsection{Data Cleaning and Pre-Processing} After the initial extraction, we applied several filtering steps to obtain a clean dataset. First, we removed posts with duplicate titles or marked as \textit{[duplicate]} on Stack Overflow. Second, we manually inspected the title and the body of each question and decided whether the post actually discussed smart contracts in Ethereum/Solidity. Finally, we removed vague, ambiguous, incomplete, and overly broad posts.
As an indication for such posts, we considered the number of negative votes and/or the negative feedback. We then extracted the code of the smart contract in each post for further analysis. In order to pre-process the collected GitHub data, we removed the duplicates based on the description of the vulnerability. For further analysis, we also extracted the code of the smart contract that contained the vulnerability. We also double-checked data collected from both the NVD database and the SWC registry for duplicates. After this stage, we had a clean dataset with records from the four data sources. Table~\ref{table:datasetSummary} shows a summary of the collected dataset after applying the aforementioned cleaning and pre-processing steps. \subsection{Data Analysis and Classification Categories} \label{sec:met_categories} To analyze and label the cleaned dataset, we manually inspected each record and read the description of each vulnerability and weakness. Then, to define the categories of vulnerabilities and weaknesses in smart contracts, two experts (i.e., a software engineering expert and a cybersecurity expert) together used card sorting to propose a classification scheme based on the cleaned data. After that, we discussed each category and gave it a name based on the categories that were defined by Beizer~\citep{beizer1984software}. To identify the root causes of the categories, we analyzed the question and/or answer information. Finally, we followed the same categorization approach to propose a classification scheme for the impact of the recorded vulnerabilities and weaknesses. For each record in the cleaned dataset, we created a card containing the information of the vulnerability as collected from the original data source. Then, the two experts independently determined the category of each vulnerability (RQ1) based on its root cause, as well as its impact category (RQ3) based on its consequences for the software product (i.e., the smart contract). When cards did not fit into any of the existing categories, we extended the schemes accordingly. We started with a random 10\% of the data, and measured inter-rater agreement after independent labeling using Cohen's Kappa coefficient. The Kappa value between the two experts was 0.60, showing moderate agreement \citep{viera2005understanding}. We then clarified major disagreements to reach a common decision, and continued with the remaining 90\% of the data. We repeated the clarification discussions after 20\% and 40\% of the posts were labeled (with Kappa values of 0.75 and 0.82, respectively). Finally, calculations of the Kappa coefficient at 60\%, 80\%, and 100\% resulted in values of $>0.90$, indicating almost perfect agreement. The same approach was followed to label the impact of each vulnerability in the cleaned dataset. The Kappa value was also $>0.90$ in the final discussion of the impact labeling. \subsubsection{Classification Scheme Attribute-value Design} To design a thorough classification scheme for smart contracts' vulnerabilities, we also followed the structured approach and recommendations of \cite{seacord2005structured}. To eliminate the problem of having vulnerabilities that fit into multiple classifications and therefore invalidate frequency data, we added attribute-value pairs in our classification for each vulnerability. This also helped us to provide an overall picture of the vulnerability. A smart contract has many attributes such as size, complexity, performance, and other quality attributes.
In this paper, we are only interested in attributes that characterize the overall vulnerability of a smart contract. Therefore, these attributes can also represent code weaknesses (that may or may not lead to vulnerabilities). All attributes and values that we list in this paper are selected based on engineering differences in the vulnerabilities' sources of error, as concluded from the expert discussions while analyzing the data (see Section~\ref{sec:met_categories}). An example of the proposed attribute-value pairs is shown in Figure~\ref{fig:exampleatt}. We present a sample contract (i.e., not a real contract) demonstrating Solidity functions with a set of weaknesses and vulnerabilities. In particular, the contract has multiple vulnerabilities, some of which are popular, such as authorization via tx.origin and reentrancy. In addition, the contract exhibits the attribute insufficient/outdated compiler version, as it uses an insufficient pragma version (0.x.x), and the attribute wrong visibility initialization, i.e., wrong visibility in the state variable initialization. \begin{figure}[ht] \centering \includegraphics[width=.5\textwidth, bb=0 0 701 550]{examplecodevul-eps-converted-to.pdf} \caption{Sample Solidity contract annotated with vulnerability attribute-value pairs} \label{fig:exampleatt} \end{figure} The attribute-value pairs eliminate the possibility of vulnerabilities fitting into multiple classifications and therefore invalidating frequency data \cite{seacord2005structured}. \subsection{Unifying Classification Schemes} \label{sec:met_unification} To unify existing vulnerability classification schemes in the literature, we gathered all the categories proposed in the literature into an Excel sheet. The first and second authors then discussed each category in relation to our proposed categories. We subsequently defined three dimensions of the problem (i.e., the vulnerability's source of error; the location of the vulnerability at the network level; and the behavior and consequences arising from an exploit of the vulnerability). Afterwards, we mapped existing classifications to the defined dimensions, as illustrated in Figure~\ref{fig:dimensions}. We name these vulnerability dimensions \emph{V-D}. The error source dimension (V-D1) describes the main cause that, when triggered, can result in executing the vulnerability, such as the logic of the contract and the data initialization. V-D2 is the network dimension, which indicates at which network layer the vulnerability occurs. Finally, V-D3 describes the resulting behavior and consequences of the vulnerability, i.e., the result of executing the vulnerability. \begin{figure}[ht] \centering \includegraphics[width=.5\textwidth, bb=0 0 618 352]{dim-eps-converted-to.pdf} \caption{Dimensions of Smart Contract Vulnerabilities (\emph{V-D})} \label{fig:dimensions} \end{figure} We followed the same approach to devise a thorough classification scheme for the impacts of smart contract vulnerabilities. We defined two dimensions (i.e., impact on the software product, and impact on business factors), as shown in Figure~\ref{fig:impacts}. We name these impact dimensions \emph{I-D}. I-D1 is the impact of the vulnerability on the software product itself and its resulting behavior, whereas I-D2 describes the impact of the vulnerability on the business level, e.g., losing money or important information.
\begin{figure}[ht] \centering \includegraphics[width=.49\textwidth, bb=0 0 624 253]{impactss-eps-converted-to.pdf} \caption{Dimensions of Smart Contract Vulnerability Impacts (\emph{I-D})} \label{fig:impacts} \end{figure} \section{Results} \label{sec:results} In this section, we present the results of our study and answer the proposed RQs. Our objective is to unify the existing classification schemes and define the causes, impacts, and recurrences of smart contract vulnerabilities and code weaknesses. Moreover, we propose thorough classification schemes for both the impacts and categories of smart contracts' weaknesses and vulnerabilities. \begin{tcolorbox} Smart contracts' code weaknesses and vulnerabilities are found in more than half (66.7\%) of the cleaned data from all four data sources (Stack Overflow, GitHub, CVE and SWC). \end{tcolorbox} \subsection{What categories of vulnerabilities appear in smart contracts? (RQ1)} \label{sec:resCategories} To answer RQ1, we followed the analysis approach in Section~\ref{sec:met_categories}, then mapped the result of our classification to other classification schemes as explained in Section~\ref{sec:met_unification}. We classified the 2143 extracted vulnerabilities and weaknesses into 47 unique smart contract weaknesses and vulnerabilities, grouped into 11 categories. We then mapped the existing literature classification schemes onto our classification scheme. Table~\ref{table:classsch} shows a mapping between our categories and Beizer’s categories \citep{beizer1984software}. We define each category in our proposed classification scheme and show its relation to Beizer’s categories. The categories \emph{Interface}, \emph{Dependency and upgradability}, \emph{Authentication \& authorization}, and \emph{Deployment and configurations} were added to the classification based on discussions throughout the card sorting process, and do not correspond to any category in Beizer's classification. Table~\ref{table:mapping_vd} maps literature-based classification schemes of smart contract vulnerabilities to our own. The categories proposed in the literature are listed in the rows, while our categories are listed in the columns of the table. The table covers all three dimensions of \emph{V-D} discussed in Section~\ref{sec:met_unification}. As can be seen, some broad categories listed in the literature essentially cover all of our classification categories. \begin{tcolorbox} Most literature-based classification schemes for smart contract vulnerabilities include broad categories, such as security and availability. \end{tcolorbox} \begin{table*}[htbp] \centering \caption{Classification Scheme of Vulnerabilities in Smart Contracts} \label{table:classsch} \begin{tabular}{|p{3 cm}| p{13 cm} | p{0.7 cm}|} \hline Category Name & Description & Short \\ \hline\hline Language Specific Coding & Syntax mistakes in implementing Solidity contracts that are not captured by the Solidity compiler can introduce unexpected behavior or damage the contract. Similar to \emph{Implementation} in \cite{beizer1984software}, but specific to Solidity. & CV \\ \hline Data Initialization and Validation & The input data types to the contract or the field data types in the contract are not initialized correctly. Also includes the data passed from/sent to other contracts. Corresponds to \emph{Data definition} and \emph{Data access} in \cite{beizer1984software}. & DV \\ & \textbf{Predictable resources} is a subcategory of data initialization and validation.
Weaknesses and vulnerabilities resulting from using expected values in state variables or functions. These vulnerabilities can allow malicious miners to take advantage of them and control the contract. & DV-PR\\ \hline Structural Sequence \& Control & Problems with the structure of the contract control flow, specifically resulting from incorrect control flow structures such as require, if-else conditions, assert, and loop structures. Corresponds to \emph{Flow control and sequence} in \cite{beizer1984software}. & SV-SC \\ \hline Structural Data Flow & Problems with the structure of the contract data flow. The main difference to \emph{Data} is that \emph{Data} originates in the data fields and input parameters to the subcontracts or the contract methods. Instead, in this category, changing data fields in a wrong way during and after the execution of the contract leads to issues. Corresponds to \emph{Data-flow anomaly} in \cite{beizer1984software}. & SV-DF \\ \hline Logic & Inconsistency between the intention of the programmer and the coded contract, and not one of the other categories. Corresponds to \emph{Logic} in \cite{beizer1984software}. & LV\\ \hline Timing \& Optimization & Performance and timing issues that can affect execution time and result in abnormal responsiveness under a normal workload. Corresponds to \emph{Performance and timing} in \cite{beizer1984software}, considering the existence of the Ether/gas concept. & TV \\ \hline Compatibility & Required software and packages are not compatible with the available resources (e.g., operating system, CPU architecture). Corresponds to \emph{Configuration sensitivity} in \cite{beizer1984software}. & CoV \\ \hline Deployment \& Configurations & Weaknesses during deployment of implemented contracts on the Ethereum blockchain. & DL\\ \hline Authentication \& Authorization & Vulnerabilities that allow malicious actors to take control of the contract. & SV\\ \hline Dependency \& Upgradability & Upgrades of a smart contract that break dependencies in the new contract. & UV \\ \hline Interface & Vulnerabilities and weaknesses in the interface of smart contracts, e.g., when the contract is functioning correctly, but the interface shows a wrong output that contradicts the contract’s execution logs and transaction logs. & IB \\ \hline \end{tabular} \end{table*} \begin{table*}[htbp] \centering \caption{Mapping Literature-based Classifications to V-D. The rows marked \emph{V-D1} refer to the source of error dimension, rows marked \emph{V-D2} refer to the network view dimension, and rows marked \emph{V-D3} refer to the resulting behavior and consequences dimension. {$\subset$}* indicates that the category in the V-D classification is a subset of the corresponding category in the literature marked by *. *{$\subset$} means the category in the literature is a subset of a proposed category in V-D.
Finally, {=} means the categories are identical.} \label{table:mapping_vd} \begin{tabular}{|l||*{10}{c|}}\hline \backslashbox{Literature}{Ours} &\makebox[1.5em]{CV}&\makebox[1.5em]{DV}&\makebox[1em]{SV} &\makebox[1em]{LV}&\makebox[1.2em]{TV}&\makebox[2em]{CoV}&\makebox[1.2em]{DL}&\makebox[1.2em]{SV}&\makebox[1.2em]{UV}&\makebox[1.2em]{IB}\\\hline\hline \tikzmark[xshift=-8pt,yshift=1ex]{m}Codifying \citep{alharby2017blockchain}&\textbf{$\subset$}*&&&\textbf{$\subset$}*&&&&&&\\\hline Data* \citep{zhang2020framework} &*{$\subset$}&*{$\subset$}&&&&&&&&\\\hline Description* \citep{zhang2020framework} &*{$\subset$}&&&&&&&&&\\\hline Interaction* \citep{zhang2020framework} &*{$\subset$}&*{$\subset$}&&*{$\subset$}&&&&&&\\\hline Interface* \citep{zhang2020framework} &&&&&&&&&&=\\\hline Logic* \citep{zhang2020framework} &&&&=&&&&&&\\\hline Standard* \citep{zhang2020framework} &*{$\subset$}&&&&&&&&&\\\hline Authentication* \citep{rameder2021systematic} &&&&&&&&=&&\\\hline Arithmetic* \citep{rameder2021systematic} &&&&=&&&&&&\\\hline Bad Coding Quality* \citep{rameder2021systematic} &=&&&&&&&&&\\\hline Environment Configuration* \citep{rameder2021systematic} &&&&&&&=&&&\\\hline \tikzmark[xshift=-8pt,yshift=-1ex]{l}Deprecated* \citep{rameder2021systematic} &*{$\subset$}&&&&&&&&&\\\hline \hline \tikzmark[xshift=-8pt,yshift=1ex]{x}Solidity* \citep{atzei2016survey}& *\textbf{$\subset$} & \textbf{$\subset$}* &\textbf{$\subset$}*& \textbf{$\subset$}* &\textbf{$\subset$}* & & \textbf{$\subset$}*& \textbf{$\subset$}*&\textbf{$\subset$}*&\tikzmark[xshift=3.5em]{a} \\ \hline EVM* \citep{atzei2016survey}&&&&&&\textbf{$\subset$}*&&&&\\\hline \tikzmark[xshift=-8pt,yshift=-1ex]{y}Blockchain* \citep{atzei2016survey} &&&&&&&\textbf{$\subset$}*&&& \tikzmark[xshift=3.5em]{b} \\ \hline \hline \tikzmark[xshift=-8pt,yshift=1ex]{w}Security* \citep{alharby2017blockchain}&\textbf{$\subset$}*&\textbf{$\subset$}*&\textbf{$\subset$}*&\textbf{$\subset$}*&\textbf{$\subset$}*&\textbf{$\subset$}*&\textbf{$\subset$}*&\textbf{$\subset$}*&\textbf{$\subset$}*&\textbf{$\subset$}*\\\hline Privacy* \citep{alharby2017blockchain}&{$\subset$}*&{$\subset$}*&&{$\subset$}*&&&&&&\\\hline Performance* \citep{alharby2017blockchain}&&&&&=&&&&&\\\hline Availability* \citep{chen2020defining}&{$\subset$}*& {$\subset$}*&{$\subset$}*&{$\subset$}*&{$\subset$}*&{$\subset$}*&{$\subset$}*&{$\subset$}*&{$\subset$}*&{$\subset$}*\\\hline Maintainability* \citep{chen2020defining} &{$\subset$}*&&{$\subset$}*&{$\subset$}*&&{$\subset$}*&&&{$\subset$}*&\\\hline \tikzmark[xshift=-8pt,yshift=-1ex]{z}Reusability* \citep{chen2020defining}&{$\subset$}*&{$\subset$}*&{$\subset$}*&{$\subset$}*&&&{$\subset$}*&{$\subset$}*&&{$\subset$}*\\\hline \end{tabular} \drawbrace[brace mirrored, thick]{x}{y} \drawbrace[brace mirrored, thick]{w}{z} \drawbrace[brace mirrored, thick]{m}{l} \annote[left]{brace-1}{V-D2} \annote[left]{brace-2}{V-D3} \annote[left]{brace-3}{V-D1} \end{table*} The following subsections present the key findings related to the defined categories. For each category, we define and explain the most critical vulnerabilities and weaknesses as agreed by the two raters. We give examples for some vulnerabilities and weaknesses, which are directly taken from our dataset. For the full list of vulnerability and code weakness definitions, we refer to our published dataset~\citep{dataset}. 
\subsubsection{Language Specific Coding Vulnerabilities and Weaknesses} \begin{table}[htbp] \centering \caption{Language Specific Coding Attribute} \label{table:CVattribute} \begin{tabular}{||p{2 cm} |p{6 cm}||} \hline Attribute & Values\\ [0.5ex] \hline Language specific coding & pragma version, fallback function, pre-defined functions in the language, language standards, language defined libraries, syntax issues (not detectable by the compiler), style guide and recommended language patterns, experimental language features, deprecated code, unsafe language features. \\ \hline \end{tabular} \end{table} In this category, smart contracts' vulnerabilities and weaknesses result from language-based errors not captured by the compiler. The source of error in this category can be in the language's pre-defined functions, events, libraries, and/or language standards. Table~\ref{table:CVattribute} shows the attribute-value pair for this category. We define 14 Language Specific Coding vulnerabilities and weaknesses that can result in an undesirable state in a smart contract or can be exploited by attackers in their favor. Next, we show a sample of these vulnerabilities and weaknesses. \renewcommand{\labelitemi}{$\blacksquare$} \renewcommand\labelitemii{$\square$} \textbf{Insufficient compiler version or pragma version --- CV\#1.} A so-called version \emph{pragma} should be included in the source code of a smart contract to reject compiling the contract with incompatible compiler versions. If the version \emph{pragma} in the contract allows compiler versions later than the one the contract was developed with, compilation with a newer compiler may introduce incompatible changes and lead to vulnerabilities in the compiled smart contract code. Moreover, future compiler versions may handle language constructs in a way that introduces unclear changes affecting the behavior of the contract, as shown in Listing~\ref{lst:cv1}. \begin{lstlisting}[language=Python, caption=Version pragma, label={lst:cv1}]
pragma solidity ^0.6.3; // weakness/vulnerability: floating pragma
pragma solidity 0.6.3;  // recommended: fixed compiler version
\end{lstlisting} \textbf{Fallback function not payable --- CV\#2.} Smart contracts written in Solidity versions 0.6.0+ should have the fallback function split up into a \emph{receive()} and a \emph{fallback()} function (i.e., a new fallback function that is defined using the \emph{fallback} keyword and a \emph{receive} ether function defined using the \emph{receive} keyword). If present, the \emph{receive} function is called whenever the call data is empty. The \emph{receive} function is implicitly payable. The new \emph{fallback} function is called when no other function matches. However, if \emph{fallback()} is not payable and \emph{receive()} does not exist, transactions that send Ether and match no other function will revert, resulting in an undesirable state in the contract. \textbf{Fallback function does not exist --- CV\#3.} In addition to CV\#2, when sending Ether from a contract to another contract without calling any of the receiving contract’s functions, the transfer will fail if the receiving contract has no fallback function. Thus, a payable fallback function should be added to the receiver before deployment. Otherwise, there is no way to receive the Ether unless the sender has previous knowledge of the exact functions of the receiving contract, which is not usually the case.
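To make CV\#2 and CV\#3 concrete, Listing~\ref{lst:cv23} shows a minimal, hypothetical receiver contract. It is an illustrative sketch constructed by us (the contract and event names are our own and are not taken from our dataset), showing a contract that accepts Ether both on plain transfers and on calls that match no other function. \begin{lstlisting}[language=Python, caption=Receiver contract with a receive() and a payable fallback() function (illustrative sketch), label={lst:cv23}]
pragma solidity ^0.6.3;
contract Receiver {
    event Received(address sender, uint amount);
    // called on plain Ether transfers (empty calldata)
    receive() external payable {
        emit Received(msg.sender, msg.value);
    }
    // called when no other function matches; marked payable so that
    // Ether sent with non-empty calldata is not reverted (cf. CV#2)
    fallback() external payable {
        emit Received(msg.sender, msg.value);
    }
}
\end{lstlisting} Omitting both functions, or leaving \emph{fallback()} non-payable in the absence of a \emph{receive()} function, reproduces the failure modes described above.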
\textbf{Violating splitting Fallback function --- CV\#4.} Smart contracts written in Solidity versions 0.6.0+ should have the fallback function split up into \emph{receive()} and \emph{fallback()} (i.e., a new fallback function that is defined using the \emph{fallback} keyword and a \emph{receive} ether function defined using the \emph{receive} keyword). If present, the \emph{receive} function is called whenever the call data is empty. The \emph{receive} function is implicitly payable. The new \emph{fallback} function is called when no other function matches. The fallback function in Listing~\ref{lst:cv4} can be payable or not; however, if it is not payable, transactions that send value and match no other function will revert. \begin{lstlisting}[language=Python, caption=Fallback function, label={lst:cv4}]
contract payment {
    mapping(address => uint) _balance;
    fallback() external payable {
        _balance[msg.sender] += msg.value;
    }
}
\end{lstlisting} \textbf{Violating modifier definition --- CV\#5.} Solidity provides modifiers that are used to change the behavior of functions in a declarative way. They can be used to enforce pre/post-conditions on the execution of a function. The \_ operator should be used in defining a modifier, and starting from Solidity version 0.4.0+ a semicolon should be added after the \_ operator. The operator represents the actual code of the function that is being modified. Thus, the code for the function being modified is inserted where the \_ is placed in the modifier. Omitting the \_ operator might generate unwanted results. For example, in Listing~\ref{lst:modifierDef}, line 8, every time transferOwnership is invoked, the onlyOwner modifier will get into play first. If the owner invokes it, then the control flow will reach the \_ operator, so the transferOwnership statements will be executed. Otherwise, the execution will just throw, revert, and exit. \begin{lstlisting}[language=Python,label={lst:modifierDef}, caption=Violating \emph{modifier} definition]
contract owned {
    address public owner;
    function owned() { owner = msg.sender; }
    modifier onlyOwner {
        if (msg.sender != owner) throw;
        _;
    }
    function transferOwnership(address newOwner) onlyOwner {
        owner = newOwner;
    }
}
\end{lstlisting} \textbf{Manipulated language standard --- CV\#6.} Ethereum has adopted many standards to guarantee the composability of smart contracts and DApps. Those standards are in Ethereum's official EIPs and include token\footnote{An Ethereum token can represent anything, including lottery tickets, financial assets, a fiat currency like USD, an ounce of gold, etc.} standards. For example, ERC-20 is a token technical standard that allows developers to implement cryptocurrency tokens. It contains nine unique functions and two events to guarantee the possibility of exchanging tokens based on ERC-20 with other ERC-20 tokens. Any modification of the function names, parameter types, or return values in the standard might change its functionality while leaving the developer believing it is the same as ERC-20. The implementation of ERC-20 in any contract must strictly follow the standard template. \textbf{Violating call-stack depth limit --- CV\#7.} In Ethereum, the call-stack has a hard limit of 1024 frames. Each time the contract calls an external contract, the call-stack depth of the transaction increases by one. Thus, when the number of calls exceeds the limit of the call-stack, an exception is thrown and the call is aborted by Solidity.
Moreover, Solidity does not support exceptions in low-level external calls. Therefore, a malicious actor can recursively call a contract 1023 times, then call a victim contract to reach the call-stack depth limit. This will fail any subsequent call made by the victim contract without the victim contract owner being aware of the attack. Recently, EIP 150\footnote{https://github.com/ethereum/EIPs/blob/master/EIPS/eip-150.md} made it impossible to reach stack depths of 1024, effectively eliminating call depth attacks. \textbf{Insufficient Address split --- CV\#8.} Starting from Solidity 0.5.0, the address type is split into address and address payable, where only address payable provides the transfer function. Otherwise, sending tokens to ``unpayable'' addresses will be reverted. Moreover, there is no direct way to convert an address to address payable. \textbf{Mixing \emph{pure} and \emph{view} --- CV\#9.} In Solidity, \emph{pure} and \emph{view} are function modifiers that describe how the logic in that function will interact with the contract's state. Functions that are declared \emph{view} promise not to modify the state, while functions that are declared \emph{pure} promise not to read or write the state. By using no specifier, the state can be read as well as modified. Developers can mix up these two modifiers, e.g., by using \emph{pure} instead of \emph{view} or omitting the modifier altogether, resulting in unexpected state changes or incorrect reads from the state. \textbf{Using \emph{balance} as an attribute of the contract --- CV\#10.} One of the features of Solidity is that contracts inherit all members from \emph{Address}, meaning that the keyword \emph{this} is the pointer to the current instance of the type derived from Address. In other words, if the developer wants to access members of the address type (e.g., the balance) of the current contract instance, then the developer should use \emph{this} and should use \emph{balance} as an attribute of the address type, not of the contract, as shown in Listing~\ref{lst:cv11}. We noticed confusion in using balance and other address attributes as if they were attributes of the contract. \begin{lstlisting}[language=Python, caption=Using \emph{balance} as an attribute of the contract, label={lst:cv11}]
function getSummary() public view returns(
    uint, uint, uint, uint, address
    ){
    return (
        minimumContribution,
        this.balance,          // incorrect
        address(this).balance, // correct (use one of the two)
        requests.length,
        approversCount,
        manager
    );
}
\end{lstlisting} \textbf{Unsafe delegatecall (code injection) --- CV\#11.} A special variant of a message call in Solidity is \emph{delegatecall}. With this feature, the code of the called contract is executed in the context of the calling contract, while msg.sender and msg.value remain unchanged. Consequently, it allows an external contract to modify the storage of the calling contract. This can be exploited by a malicious caller to manipulate the caller's contract state variables and take full control over the balance. \textbf{Variable shadowing --- CV\#12.} Solidity supports ambiguous naming when inheritance is used. For instance, contract Alpha with a variable V could inherit contract Beta that also has a state variable V defined. Consequently, there would be two versions of V, one accessed from contract Alpha and the other from contract Beta. In complex contract systems, this condition might go undetected and ultimately cause security issues. Shadowing can also occur at the contract level (e.g., a variable with more than one definition at the contract and function level).
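Listing~\ref{lst:cv12} gives a minimal sketch of CV\#12, constructed by us for illustration (the contract names Alpha and Beta mirror the description above and are not taken from our dataset). Each contract ends up with its own storage slot for \emph{v}, and inherited functions keep reading the base contract's copy, which is easy to overlook in larger contract systems. \begin{lstlisting}[language=Python, caption=State variable shadowing through inheritance (illustrative sketch), label={lst:cv12}]
pragma solidity ^0.5.0; // compilers from 0.6.0 on reject this outright
contract Beta {
    uint internal v = 1;
    function betaValue() public view returns (uint) {
        return v; // reads Beta's v (storage slot 0)
    }
}
contract Alpha is Beta {
    uint internal v = 2; // shadows Beta.v; only a warning in pre-0.6 compilers
    function alphaValue() public view returns (uint) {
        return v; // reads Alpha's v (storage slot 1), not Beta's
    }
}
\end{lstlisting}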
\textbf{Deprecated Solidity code --- CV\#13.} As Solidity evolves, several of its functions and operators are deprecated. Making use of them leads to poor code quality. It is strongly discouraged to use deprecated Solidity language code with new major versions of the Solidity compiler, since it can cause unwanted behavior and vulnerabilities. \textbf{Experimental Language Features --- CV\#14.} Similar to CV\#13, it is strongly discouraged to use experimental Solidity language features since they can cause undesired behavior and code weaknesses. \subsubsection{Data Vulnerabilities and Weaknesses} Most of the data vulnerabilities result from the use of wrong or insufficient data types, or from passing wrong data formats to arguments without knowing the exact required type. Moreover, organizing the memory and storage in Solidity is the responsibility of programmers, which many developers are not used to doing. Table~\ref{table:dataattribute} shows the attribute-value pairs for this category. \begin{table}[htbp] \centering \caption{Data vulnerabilities and weaknesses Attribute} \label{table:dataattribute} \begin{tabular}{||p{2 cm} |p{6 cm}||} \hline Attribute & Values\\ [0.5ex] \hline Data vulnerabilities and weaknesses & Insufficient/wrong data type, wrong addresses initialization, writing on arbitrary locations, insufficient memory and storage management, improper data validation, improper state variable initialization, data pointer initialization, function pointer initialization. \\ \hline \end{tabular} \end{table} \textbf{Violating explicit data location --- DV\#1.} For Solidity versions 0.5.0+, an explicit data location for all variables of type struct, array, or mapping is mandatory. This also applies to function parameters and return variables. For instance, \emph{calldata} is a special data location that contains the function arguments, which is only available for external function call parameters. If the data location is not included in the initialization, it results in unexpected values, as shown in Listing~\ref{lst:dv1}. \begin{lstlisting}[language=Python, label={lst:dv1}, caption=Using \emph{storage} instead of \emph{memory}]
contract StructExample {
    struct SomeStruct {
        int someNumber;
        string someString;
    }
    SomeStruct[] someStructs;
    function addSomeStruct() {
        SomeStruct storage someStruct = SomeStruct(123, "test"); // insufficient use
        SomeStruct memory someStruct = SomeStruct(123, "test");  // correct (use one of the two)
        someStructs.push(someStruct);
    }
}
\end{lstlisting} \textbf{Using \emph{storage} instead of \emph{memory} --- DV\#2.} In addition to \emph{calldata}, Solidity provides two more data locations for reference types (structs, arrays, and mappings), called \emph{storage} and \emph{memory}. The Solidity contract can use any amount of memory (based on the amount of Ether that the contract owns and can pay for) during execution. However, when execution stops, the entire content of the memory is wiped, and the next execution will start fresh. The \emph{storage} is persisted into the blockchain itself, so the next time the contract executes, it has access to all the data it previously stored in its storage area. Confusing storage and memory can result in data loss. \textbf{Violating array indexing --- DV\#3.} Developers make numerous mistakes when initializing and accessing arrays in Solidity. Most of the time, discovering these violations is not easy, especially if there is no syntax error or an error that can be detected by the compiler. This can result in returning incorrect values.
A clear violation of array indexing is shown in Listing~\ref{lst:arrayViol}: in \emph{moveUser()}, the developer tries to access a single element of a three-dimensional array, but only provides two sets of square brackets. Therefore, the developer returns an array instead of a single object. \begin{lstlisting}[language=Python,label={lst:arrayViol}, caption=Violating arrays indexing ]
contract Game {
    struct User {
        address owner;
    }
    User[][10][10] public gameBoard;
    function addUser (uint _x, uint _y) public {
        gameBoard[_x][_y].push(User(msg.sender, 10, 5, 5, 5, 5));
    }
    function moveUser (uint _fromX, uint _fromY) public {
        User memory mover = gameBoard[_fromX][_fromY]; // incorrect access
        // correct: User memory mover = gameBoard[_fromX][_fromY][0];
        if (mover.owner != msg.sender) return;
    }
}
\end{lstlisting} \textbf{Hard-coded address --- DV\#4.} An existing bad practice is the use of hard-coded addresses in smart contract code, as shown in Listing~\ref{lst:dv4}. Any incorrect or missing digit in the address may result in losing Ether, in case Ether is sent to that wrong address, or in unexpected outcomes. This vulnerability is also known as ``Transfer to orphan address'' \cite{atzei2016survey}. \begin{lstlisting}[language=Python,label={lst:dv4}, caption=Hardcoded address ]
address recieveraddress;
function initializeAddress1 () {
    recieveraddress = 0x98081c...8e5ace; // hardcoded address
}
\end{lstlisting} \textbf{Improper data validation --- DV\#5.} It is necessary to validate input from untrusted sources, such as external libraries or contracts, before integrating it into any contract logic. \textbf{Unintentional write to arbitrary storage location --- DV\#6.} Because Solidity storage is not dynamically allocated, it can lead to unpredictable behavior, unauthorized data access, and other vulnerabilities, especially if the data location of data types like structs, mappings, and arrays is not clarified, allowing writes that overwrite entries of other data structures. \subsubsection{Predictable Data Values and Resources} We encountered several issues in Solidity smart contracts that relate to values that can be guessed even though they are intended to serve as an element of randomness. \textbf{Timestamp dependency --- DV-PR\#7.} To keep contracts safe from malicious actors, developers should avoid using the block variables as a source of randomness or as part of triggering conditions for executing significant operations in their contracts, such as transferring Ether. When submitting blocks, miners determine the value of block variables such as block.timestamp, block.coinbase, and block.difficulty. Thus, these values can affect the contract's outcome and can be used to benefit the attacker. For example, Listing~\ref{lst:dv-pr7} shows an insecure lottery contract in which block.timestamp is used as a source of entropy. \begin{lstlisting}[language=Python,label={lst:dv-pr7}, caption=Insecure lottery contract using block variables]
function setWinner() public {
    bytes32 hash = keccak256(abi.encode(block.timestamp));
    bytes4[2] memory x0 = [bytes4(0), 0];
    assembly {
        mstore(x0, hash)
        mstore(add(x0, 4), hash)
    }
}
\end{lstlisting} \textbf{Blockhash dependency --- DV-PR\#8.} Using blockhash has the same risks as block.timestamp in DV-PR\#7, especially when used in critical operations such as Ether transfer. It can lead to serious attacks as malicious miners can tamper with the blockhash and take full control over the contract.
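As a minimal, hypothetical sketch of the pattern behind DV-PR\#8 (the contract and function names are ours and not taken from our dataset), Listing~\ref{lst:dvpr8} derives a ``random'' draw from the previous block hash; a miner who can influence which blocks are published can bias the outcome in their favor. \begin{lstlisting}[language=Python, caption=Blockhash-dependent randomness (illustrative sketch), label={lst:dvpr8}]
pragma solidity ^0.6.3;
contract BlockhashLottery {
    // insecure: the block hash is visible to and influenceable by miners
    function draw() public view returns (uint) {
        return uint(blockhash(block.number - 1)) % 100;
    }
}
\end{lstlisting} Mitigations discussed in the literature typically move the entropy source away from block variables, e.g., via commit--reveal schemes or external randomness oracles.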
\textbf{Bad random number generation--- DV-PR\# 9} Using random numbers is not avoidable in some smart contracts, e.g., games or lotteries. It is important that the randomness is not based on global blockchain variables, as that leaves the contract open to manipulation by malicious miners similar to DV-PR\#7 and DV-PR\#8. \subsubsection{Sequence and Control Vulnerabilities and Weaknesses} This category of vulnerabilities is corresponding to incorrect control structure and loop control statements. These can be exploited and help the attacker to steal money in the contract. Moreover, they can also result in losing all the money in the contract without even being attacked, just because of vulnerabilities in these structures. Table~\ref{table:seqattribute} shows the attribute-value pairs for this category. \begin{table}[htbp] \centering \caption{Sequence and control vulnerabilities and weaknesses} \label{table:seqattribute} \begin{tabular}{||p{2 cm} |p{6 cm}||} \hline Attribute & Values\\ [0.5ex] \hline Sequence and control & Wrong use of assert, wrong use of require. \\ \hline \end{tabular} \end{table} \textbf{Using assert instead of require --- SV-SC\#1.} The \emph{assert} statement should be only used for conditions that indicate you have an internal vulnerability in the contract code. The \emph{require} statement should be used to check valid conditions (e.g., state variables conditions are met, validate input, and validate return value from external contracts). A valid code with correct functions should never fail \emph{assert} conditions. Otherwise, there is a vulnerability in the contract and something unexpected has happened. In smart contracts, \emph{assert} can consume all the gas in the contract as shown in Listing~\ref{lst:sv-sc1}. If the example is tested with run(8), the function runs successfully and 1860 gas will be consumed based on the cost of the function and the loop iterations\footnote{The actual gas costs are stated in the Solidity documentation and depend on numerous factors, such as the executed functions and the used data types.}. If it is tested with run(15), then the function passes assert, fails at require and the first loop only will be executed and consume 1049 gas. Finally, testing the same example with run(25) causes the function to fail the assert statement. The execution continues and thus iterates 25 times through the loop, resulting in a very high cost of gas. \begin{lstlisting}[language=Python, caption=Using assert instead of require, label={lst:sv-sc1}] contract Test { function run(uint8 i) public pure { uint8 total = 0; for (uint8 j = 0; j < 10; j++) total += j; assert (i < 20); require (i < 10); for (j = 0; j < 10; j++) total += j; }\end{lstlisting} \subsubsection{Data Flow Vulnerabilities and Weaknesses} These are vulnerabilities and weaknesses in the data flow of smart contracts, so that data fields are changing unexpectedly or incorrectly. We defined two vulnerabilities that belong to this category. Table~\ref{table:seqdataflowattribute} shows the attribute-value pairs for this category. 
\begin{table}[htbp] \centering \caption{Data flow vulnerabilities and weaknesses} \label{table:seqdataflowattribute} \begin{tabular}{||p{2 cm} |p{6 cm}||} \hline Attribute & Values\\ [0.5ex] \hline Data flow & Unexpected integer overflow/underflow, unexpected conversion in data values, unexpected arithmetic operation behavior \\ \hline \end{tabular} \end{table} \textbf{Updating storage in fallback functions--- SV-DF\#1.} Upon receiving Ether without a function being called, either the receive Ether or the fallback function is executed. If the contract does neither have a receive nor a fallback function, the Ether will be rejected by throwing an exception. During the execution of one of these functions, the contract can only rely on the passed gas (i.e., 2300 gas) being available to it at that time. This stipend is not enough to modify storage. However, we found that developers sometimes are updating state variables, trying to write to the storage in the fallback functions. Updating the variables with such a gas limit will fail. If the data flow of the contract depends on the failed state variables, this results in an incorrect data flow and in unexpected outcomes. \textbf{Arithmetic operation/calculation overflow/ underflow --- SV-DF\#2.} An overflow can happen as a result of an arithmetic operation or calculation that falls outside the range of a Solidity data type, resulting in unwanted behavior or unauthorized manipulation of the contract balance. Underflow happens when an arithmetic operation reaches the minimum size of a type. This is a data flow vulnerability, as the code of the contract does not perform correct validation on the numeric inputs and the calculations. In addition, the Solidity compiler does not enforce detecting integer overflow/underflow. An example is shown in Listing~\ref{lst:overflow}, where the computation of \emph{mask} overflows at x $>$= 248. \begin{lstlisting}[language=Python,label={lst:overflow}, caption=Arithmetic Operation/calculation overflow ] uint256 public MAXUINT256 = 2*256 - 1; for (uint256 x = 0; x < 255; x++) { var mask = MAXUINT256 * (2 ** x); var key = signature & bytes32(mask);}\end{lstlisting} \subsubsection{Logic Vulnerabilities and Weaknesses} This category reflects inconsistencies with the contract and the programmer’s intention, which is usually mentioned in the question information. These issues relate to a number of reasons, e.g., vague developer intentions, misunderstanding of language components, incorrect usage of Math, and incorrect gas predictions. \textbf{Greedy contract--- LV\#1.} This vulnerability occurs when implementing a contract logic that is only locking Ether balance all the time because of its inability to access the external library contract to transfer Ether. For instance, the contract logic may only accept transferring money based on a specific value in the code, which happens to be unreachable due to incorrect logic. In this case, the Ether will be locked in the deployed contract forever. In the example of Listing~\ref{lst:greedy}, the function \emph{refundMoney()}, line 8, does not decrease the \emph{weiRaised} value, meaning that once starting a refund, the developer can no longer use \emph{forwardAllRaisedFunds()} to drain the contract. This code weakness would be triggered even in the regular course of action and Ether in this contract is stuck. It can receive any funds but the received funds can never be retrieved. 
\begin{lstlisting}[language=Java, label={lst:greedy}, caption=Greedy Contract] contract SwordCrowdsale is Ownable { //amount of raised money in wei uint256 public weiRaised; bool public isSoftCapHit = false; //send ether to the fund collection wallet function forwardAllRaisedFunds() internal { wallet.transfer(weiRaised);} function refundMoney(address _address) onlyOwner public { uint amount = contributorList[_address].contributionAmount; if (amount > 0 && _address.send(amount)) { //user got money back contributorList[_address].contributionAmount = 0; contributorList[_address].tokensIssued = 0;} }\end{lstlisting} \textbf{Transaction order dependency --- LV\#2} The vulnerability arises when a contract's logic is dependent on the order in which transactions are executed and processed in a block. It is a type of race condition inherent to Blockchains. By manipulating the order of transaction processing, malicious miners can take advantage of the contract and benefit from it. Therefore, the logic of the contract should not rely on the transaction order. \textbf{Call to the unknown --- LV\#3} This vulnerability arises when a function unexpectedly invokes the fallback function of the recipient. Consequently, malicious code can be introduced. For example, the unknown call could trigger the recipient's fallback function, allowing malicious code to execute.Also, this can be done via direct call, delegatcall, send, or only call functions. In the MultiSig Wallet Attack\footnote{https://blog.openzeppelin.com/on-the-parity-wallet-multisig-hack-405a8c12e8f7/}, an attacker exploited this vulnerability to steal 30M USD from the Parity Wallet. Another example is shown in Listing~\ref{lst:Unknown}. In which, pong function uses a direct call to invoke Alice’s ping. However, if the interface of contract Alice by mistake was mistyped by declaring the parameter as \emph{int} instead of \emph{unit} and Alice has no function with \emph{int} type, then the call to ping results in calling Alice’s fallback function. \begin{lstlisting}[language=Python, ,label={lst:Unknown},caption=Call to the unknown \cite{atzei2016survey}] contract Alice {function ping(uint) returns (uint)} contract Bob {function pong(Alice c){c.ping(42);}} \end{lstlisting} \textbf{DoS by external contract --- LV\#4} It is possible for external calls to fail and throw exceptions or revert the transaction. Inefficient management of these calls in the contract logic can lead to critical vulnerabilities, such as a Denial of Service (DoS) or loss of funds. \subsubsection{Timing and Optimization Vulnerabilities and Weaknesses} Performance and timing vulnerabilities/weaknesses in smart contracts usually affect the gas amount in the contracts. In the following, we define 2 vulnerabilities belonging to this category. \textbf{Unbounded loops --- TO\#1.} In Solidity, iterating through a potentially unbounded array of items can be costly, as exemplified in \emph{getNotes()} in Listing~\ref{lst:unbounded}. Since the array \emph{notes} is provided as an input, the smart contract has no control over the maximum length, allowing a malicious actor to send in large arrays. \textbf{Creating subcontracts cost --- TO\#2.} Contract deployments are very expensive operations. For instance, deploying a contract for every patient in Listing~\ref{lst:unbounded} is very costly. A malicious developer can use this weakness to cost the owner of the contract more Ether. 
\begin{lstlisting}[language=Python, ,label={lst:unbounded},caption=Unbounded loops and creating subcontracts] contract MedicalRecord { struct Doctor { bytes32 name; uint id;} struct Note { bytes32 title; bytes32 note;} function getNotes() view public isCurrentDoctor returns (bytes32[], bytes32[]) { bytes32[] memory titles = new bytes32[](notes.length); bytes32[] memory noteTexts = new bytes32[](notes.length); for (uint i = 0; i < notes.length; i++) { Note storage snote = notes[i]; titles[i] = snote.title; noteTexts[i] = snote.note; } return (titles, noteTexts);} \end{lstlisting} \textbf{Costly state variable data type --- TO\#3} Because of the padding rules, the byte[] data type consumes more gas than a byte array. In addition, declaring variables without constant consumes more gas than one declared with a constant. This weakness is also reported in \cite{chen2020defining}. \textbf{Costly function type --- TO\#4} A function declared public rather than external and not utilized within the contract consumes more gas on deployment than it should.This weakness is also reported in \cite{chen2020defining}. \subsubsection{Compatibility Vulnerabilities and Weaknesses} This category is related to vulnerabilities that prevent Ethereum from running normally on the developer machine. We find that the main root causes of this category are: (1) developers are not using the latest binaries/releases and (2) the hardware that is in use does not meet the minimum requirements. As this category played only a minor role in the analyzed posts, we did not label the posts in detail, but decided to keep this for future work. \subsubsection{Deployment and Configurations} This category of vulnerabilities and weaknesses is caused by wrong configurations and weaknesses in deployment. \textbf{Improper configuration --- DL\#1} Wrong or improper configuration of the smart contract application tool-chain can result in weaknesses, errors, or vulnerabilities, which applies even if the contract itself is free of vulnerabilities. \textbf{Violating contract size limit --- DL\#2.} In Ethereum, limits are imposed by the gas consumed for the transaction. While there is no exact size limit there is a block gas limit and the amount of gas provided has to be within that limit. When deploying a contract, there are three factors to consider: (1) an intrinsic gas cost, (2) the cost of the constructor execution, and (3) the cost of storing the bytecode. The intrinsic gas cost is static, but the other two are not. The more gas consumed in the constructor, the less is available for storage. Normally, the vast majority of gas consumed is based on the size of the contract. If the contract size is large, the consumed gas can get close to the block gas size limits, preventing the deployment of the contract. \begin{table}[htbp] \centering \caption{Authentication \& authorization vulnerabilities and weaknesses Attribute} \label{table:Authenticationattribute} \begin{tabular}{||p{2 cm} |p{6 cm}||} \hline Attribute & Values\\ [0.5ex] \hline Authentication \& authorization vulnerabilities and weaknesses & unauthorized function call, wrong permissions, lack of access control, signature issues. \\ \hline \end{tabular} \end{table} \subsubsection{Authorization and Authentication} The following vulnerabilities and weaknesses directly affect the security of a smart contract and could enable attacks/exploits. \textbf{Lack of access control management --- SV\#1.} Access control is an essential element to the security of a smart contract. 
Based on the privileges of each client/contractor party, there have to be strict rules implemented in the contract that enforce access control. For example, the contract in Listing~\ref{lst:access} is trying to provide functionality to whitelist addresses. The original function in line 10 does not have any access restrictions, meaning any caller can whitelist addresses. The modified version in line 15 only allows the contract owner to do so. \begin{lstlisting}[language=Python, label={lst:access}, caption=Lack of access control management] contract WHITELIST { address owner; //set during the first call modifier isOwner() { require(msg.sender == owner); _; } // insecure function enableWhitelist(address address) { //Whitelist an address } // secure function enableWhitelist(address address) external isOwner { //Whitelist an address } } \end{lstlisting} \textbf{Authorization via tx.origin --- SV\#2} tx.origin is a global variable in Solidity which returns the address of the account that sent the transaction. Rather than returning the immediate caller, tx.origin returns the address of the original sender (i.e., the first sender who initiated the transaction chain). It can make the contract vulnerable, if an authorized account calls into a malicious contract. Therefore, a call could be made to the vulnerable contract that passes the authorization check as tx.origin returns the original sender of the transaction, which in this case is the authorized account. \textbf{Signature based vulnerabilities --- SV\#3} These vulnerabilities are introduced as a result of insufficient signature information or weaknesses in signature generation and verification. Those include but not limited to: \begin{itemize} \item Lack of proper signature verification: One example can be relying on \emph{msg.sender }for authentication and assuming that if a message is signed by the sender address, then it has also been generated by the sender address. Particularly in scenarios where proxies can be used to relay transactions, this can lead to vulnerabilities. For more information, we refer to SWC-122\footnote{https://swcregistry.io/docs/SWC-122} \item Missing Protection against Signature Replay Attacks: To protect against Signature Replay Attacks, a secure implementation should keep track of all processed message hashes and only allows new message hashes to be processed. Without such control, it would be possible for a malicious actor to attack a contract and receive message hashes that were sent by another user multiple times. For more information, we refer to SWC-121\footnote{https://swcregistry.io/docs/SWC-121} \item Signature Malleability: The implementation of a cryptographic signature system in Ethereum contracts often assumes that the signature is unique, but signatures can be altered without the possession of the private key and still be valid. Valid signatures could be created by a malicious user to replay previously signed messages. For more information, we refer to SWC-117\footnote{https://swcregistry.io/docs/SWC-117} \end{itemize} \subsubsection{Dependency and Upgradability Vulnerabilities and Weaknesses} Dependencies of and upgrades to smart contracts can lead to a number of issues in smart contracts. We describe one of these weaknesses/vulnerabilities below. \textbf{Insecure contract upgrading --- UV\#1.} There are two ways to upgrade a contract. 
First, to use a registry contract that keeps track of the updated contracts, and second, to split the contract into a logic contract and a proxy contract so that the logic contract is upgradable while the proxy contract is the same. Both approaches allow untrusted developers to introduce dependency vulnerabilities in the updated contract's logic, allowing attackers to modify the logic of the upgraded contract using its dependencies, e.g. another contract. This vulnerability was also reported in \cite{atzei2017survey} and \cite{chen2020survey}. \subsubsection{Interface Vulnerabilities and Weaknesses} This category describes weaknesses resulting in incorrect display of results. However, in these cases the smart contracts actually worked and produced the expected outcomes. The weaknesses were instead found in other applications that displayed the results. As these applications are not part of the smart contract, and there was no apparent code weakness in the contracts, we did not further investigate these kind of weaknesses. \subsection{Are the frequency distributions of vulnerabilities similar across all studied data sources? (RQ2)} \label{sec:resfreqt} To answer RQ2, we analyzed the frequency distributions of the defined eleven vulnerability categories across the four data sources (i.e., Stack Overflow GitHub, CVE, and SWC). The frequency distribution of the defined categories is shown in Figure~\ref{fig:frequency} for each data source. \begin{figure*}[htbp] \centering \subfloat[][Stack Overflow]{\resizebox{0.25\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, ymin=0, width = 10cm, height = 8cm, bar width=14pt, ylabel={Percentage}, xticklabel style={rotate=90}, xtick = data, table/header=false, table/row sep=\\, xticklabels from table={\footnotesize CoV\\\footnotesize CV\\\footnotesize DL\\\footnotesize DV\\ \footnotesize IB\\\footnotesize LV\\\footnotesize SV\\\footnotesize SV-DF\\ \footnotesize SV-SC\\\footnotesize TV\\\footnotesize UV\\ }{[index]0}, enlarge y limits={value=0.2,upper} ] \addplot table[x expr=\coordindex,y index=0]{1.83 \\29.02\\10.46\\16.99\\12.42\\14.25\\3.53\\5.1\\1.05\\1.83\\3.53\\}; \pgfplotsinvokeforeach{0,3,4,7,8,11}{\coordinate(l#1)at(axis cs:#1,0);} \end{axis} \end{tikzpicture} }} \subfloat[][GitHub]{\resizebox{0.25\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, ymin=0, width = 10cm, height = 8cm, bar width=14pt, ylabel={Percentage}, xticklabel style={rotate=90}, xtick = data, table/header=false, table/row sep=\\, xticklabels from table={\footnotesize CoV\\\footnotesize CV\\\footnotesize DL\\\footnotesize DV\\ \footnotesize IB\\\footnotesize LV\\\footnotesize SV\\\footnotesize SV-DF\\ \footnotesize SV-SC\\\footnotesize TV\\\footnotesize UV\\ }{[index]0}, enlarge y limits={value=0.2,upper} ] \addplot table[x expr=\coordindex,y index=0]{4.36\\20.03\\ 2.18\\17.65\\10.91\\10.11\\1.388\\16.07\\7.53\\5.35\\4.36\\}; \pgfplotsinvokeforeach{0,3,4,7,8,11}{\coordinate(l#1)at(axis cs:#1,0);} \end{axis} \end{tikzpicture} }} \subfloat[][CVE]{\resizebox{0.25\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, ymin=0, width = 10cm, height = 8cm, bar width=14pt, ylabel={Percentage}, xticklabel style={rotate=90}, xtick = data, table/header=false, table/row sep=\\, xticklabels from table={\footnotesize CoV\\\footnotesize CV\\\footnotesize DL\\\footnotesize DV\\ \footnotesize IB\\\footnotesize LV\\\footnotesize SV\\\footnotesize SV-DF\\ \footnotesize SV-SC\\\footnotesize TV\\\footnotesize UV\\ }{[index]0}, enlarge y limits={value=0.2,upper} ] \addplot table[x 
expr=\coordindex,y index=0]{0\\0.19\\0\\2.88\\0\\5.96\\1.34\\89.61\\0\\0\\0\\}; \pgfplotsinvokeforeach{0,3,4,7,8,11}{\coordinate(l#1)at(axis cs:#1,0);} \end{axis} \end{tikzpicture} }} \subfloat[][SWC]{\resizebox{0.25\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, ymin=0, width = 10cm, height = 8cm, bar width=14pt, ylabel={Percentage}, xticklabel style={rotate=90}, xtick = data, table/header=false, table/row sep=\\, xticklabels from table={\footnotesize CoV\\\footnotesize CV\\\footnotesize DL\\\footnotesize DV\\ \footnotesize IB\\\footnotesize LV\\\footnotesize SV\\\footnotesize SV-DF\\ \footnotesize SV-SC\\\footnotesize TV\\\footnotesize UV\\ }{[index]0}, enlarge y limits={value=0.2,upper} ] \addplot table[x expr=\coordindex,y index=0]{21.05\\15.78\\7.89\\10.52\\5.26\\5.26\\0\\0\\31.57\\0\\2.63\\}; \pgfplotsinvokeforeach{0,3,4,7,8,11}{\coordinate(l#1)at(axis cs:#1,0);} \end{axis} \end{tikzpicture} }} \caption{Distribution of SC vulnerability categories for each of the studied data sources.} \label{fig:frequency} \end{figure*} \begin{tcolorbox} The resulting frequency distribution shows that the Language specific coding category and the Structural data flow category are the most common vulnerability categories in Ethereum smart contracts \end{tcolorbox} The language specific coding category is dominant on both Stack Overflow and GitHub. In contrast, structural data flow vulnerabilities are most frequent on CVE and SWC. Additionally, almost 80\% of the reported structural data flow vulnerabilities in the NVD database (CVEs) are integer overflow/ underflow vulnerabilities. Interestingly, issues on StackOverflow and GitHub appear to have similar frequency distributions. However, these are not statistically significant. \subsection{What impact do the different categories of smart contract vulnerabilities and weaknesses have? (RQ3)} \label{sec:resImpact} In this section, we present the main impacts of vulnerabilities and bugs in smart contracts, thus answering RQ3. We unify all the proposed impact categories in the literature (i.e., \citep{chen2020defining,zhang2020framework}), and present a thorough classification scheme of vulnerability impacts on Ethereum smart contracts. We followed a similar approach to what we did in RQ1. The definitions of the final impact categories are depicted in Table~\ref{table:impacts} and Table~\ref{table:impacttable2}. Furthermore, Table~\ref{table:mapping_id} shows how the impact categories in literature related to our classification. Note that IP5 (in the second-last row) does not relate to any of our categories, as the category describes smart contracts that function as intended, something that we did not include in our classification. \begin{tcolorbox}Our mapping shows that the impact of smart contract vulnerabilities and code weaknesses on certain aspects has not been examined in detail. For instance, the impact on the development process and the productivity of a software development team. Additional research in this area can quantify the extent to which the vulnerabilities impact smart contract development, the developing team, and the development of decentralized applications based on smart contracts.\end{tcolorbox} \begin{table}[htbp] \centering \caption{Classification Scheme of Impacts (I-D1)} \label{table:impacts} \begin{tabular}{||p{2.2 cm} | p{4.8cm}| p{.4 cm} ||} \hline Impact & Description & App.\\ \hline\hline Unexpected behavior & Contract behaves abnormally, e.g., generating incorrect output. 
& UB\\ Unwanted functionality & Contract executes wrong functionality because of wrong logic. & UF\\ Long response time & Long runtime of a smart contract to any input without providing the desired output.& LRT\\ Data Corruption & Data becomes unreadable, unusable or inaccessible, and unexpected output is generated.& DC\\ Memory disclosure & Problems in the memory storage of the smart contract. & MF \\ Poor performance & Non-optimal resource usage in terms of gas and time. & PPC\\ Unexpected stop & Unexpected exit and execution stop at the point of triggering the vulnerability in the code.& USP \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \centering \caption{Classification Scheme of Impacts (I-D2)} \label{table:impacttable2} \begin{tabular}{||p{2 cm} | p{5 cm} | p{.5 cm}||} \hline Impact & Description & App.\\ \hline\hline Information disclosure & When a code weakness or vulnerability is exploited, sensitive information is exposed to an actor not explicitly authorized to see it.& ID\\ Lost Ether or assets& An exploited code weakness can lead to unauthorized actors taking over the Ether of the contract and losing it.& LEA\\ Locked Ether& In the situation of triggering code weaknesses in a contract, one can lose access to the contract and lock the Ether in it without having access to it again. & LE\\ Lost control over the contract & By exploiting code weaknesses or vulnerabilities, an unauthorized actor can take over the contract. & LC\\ \hline \end{tabular} \end{table} \begin{table*}[htbp] \centering \caption{Mapping literature-based Impact classifications to I-D. \emph{D1}: refers to the Impact on software product. Note: IP2 belongs to D1 and D2. {$\subset$}* indicates the corresponding category in our own classification is a subset of the corresponding category in the literature marked by *. *{$\subset$} means the category in the literature is a subset of our proposed category.} \label{table:mapping_id} \begin{tabular}{|l||*{11}{c|}}\hline \backslashbox{Literature}{Ours} &\makebox[1.5em]{UB}&\makebox[1.5em]{UF}&\makebox[1em]{LRT} &\makebox[1em]{DC}&\makebox[1.2em]{MF}&\makebox[2em]{PPC}&\makebox[1.2em]{USP}&\makebox[1.2em]{ID}&\makebox[1.2em]{LEA}&\makebox[1.2em]{LE}&\makebox[1.2em]{LC}\\\hline\hline \tikzmark[xshift=-8pt,yshift=1ex]{x}Functionality*. 
\citep{zhang2020framework}& & = && & & & & && &\tikzmark[xshift=3.5em]{a} \\ \hline Performance* \citep{zhang2020framework}& & && & &= & & && &\\\hline Security \citep{zhang2020framework}& \textbf{$\subset$}* & \textbf{$\subset$}* &\textbf{$\subset$}*& \textbf{$\subset$}* &\textbf{$\subset$}* &\textbf{$\subset$}* &\textbf{$\subset$}* &\textbf{$\subset$}* &\textbf{$\subset$}*&\textbf{$\subset$}* &\textbf{$\subset$}*\\\hline Serviceability *\citep{chen2020defining}& \textbf{$\subset$}* & \textbf{$\subset$}* &\textbf{$\subset$}*& \textbf{$\subset$}* &\textbf{$\subset$}* &\textbf{$\subset$}* &\textbf{$\subset$}* & && &\\\hline IP1 unwanted behaviors* \citep{chen2020defining} & = & && & & & & && &\\\hline IP3 non-exploitable UB* \citep{chen2020defining} & *\textbf{$\subset$} & && & & & & && &\\\hline IP4 Errors outside program call \citep{chen2020defining} & &* \textbf{$\subset$} && & & & & && &\\\hline \tikzmark[xshift=-8pt,yshift=1ex]{y}IP5 No errors* \citep{chen2020defining} & & && & & & & && &\tikzmark[xshift=3.5em]{a} \\\hline IP2 UB without losses* \citep{chen2020defining} & *\textbf{$\subset$} & && & & & & & &*\textbf{$\subset$}& \\\hline \end{tabular} \drawbrace[brace mirrored, thick]{x}{y} \annote[left]{brace-4}{I-D1} \end{table*} The analysis of the impacts of vulnerabilities and code weaknesses in smart contracts shows that unexpected stop is the most prevalent impact category among all proposed categories. It is caused primarily by vulnerabilities in the Language Specific Coding category. The second most prevalent impact category is unexpected functionality. This mostly happens in smart contracts when the transferred gas amount does not match the expected amount expected from the logic of the code, or when incorrect amounts of Ether are transferred. \section{Discussion} \label{sec:discussion} In the following, we discuss our findings in terms of implications and relation to existing work. We find a substantial number of vulnerabilities and weaknesses being discussed in social coding platforms and existing vulnerability repositories. This clearly shows that this is an important topic to study and analyze. Because of the unique characteristics of smart contracts, e.g., immutability and gas consumption, it is important to make sure that vulnerabilities and code weaknesses with severe impacts are fixed or even detected before deploying the contract to the blockchain. As demonstrated in our findings, existing classifications either focus on a single dimension of smart contract vulnerabilities, such as the error source, or mix multiple dimensions in a single classification. Our mapping unifies these different dimensions to some extent and shows how different classification schemes relate. In addition to mixing dimensions, the majority of existing classification schemes for smart contract vulnerabilities include broad categories, such as security and availability, to which many vulnerabilities can be assigned. As such, these categories do not support reasoning about the included vulnerabilities, which is an important quality criterion for classifications \cite{ralph18}. Furthermore, broad categories might prevent orthogonality of the categories, i.e., that a single vulnerability fits into a single category only. In the example mentioned above, i.e., security and availability, many vulnerabilities can lead to negative effects on both, and thus could be labeled both. 
Due to the use of attribute-value pairs \cite{seacord2005structured}, we believe that our unified classification avoids this issue. The frequency distributions discussed in Section~\ref{sec:resfreqt} show notable differences between the frequencies of found vulnerabilities across different data sources. This, once again, highlights that focusing on a single source biases the resulting study. It further demonstrates that established databases such as CVE and SWC do not reflect well the topics discussed in public coding platforms such as Stack Overflow or GitHub. On the latter platforms, we observe specifically that developers seem to have a poor understanding of pre-defined functions such as \emph{view} and \emph{pure}, and that it is hard for them to cope with continuous changes and updates in Solidity and its documentation. On the one hand, this finding suggests that tools for verification and analysis of smart contracts is of high importance, especially focusing on the prevalent vulnerability categories in our classification. On the other hand, the observed frequencies might only be a symptom of the technology maturity. Hence, these issues might become less prevalent once Solidity matures and updates become less frequent. Based on our findings, we can provide a number of recommendations to researchers and practitioners in order to improve smart contract development. First, available static detection tools must urgently target the defined categories of vulnerabilities and code weaknesses. Our breakdown of the frequencies at which the different categories occur can help prioritizing this development to target the most important categories first. Second, there is a need to define coding best practices of smart contracts and make them available to developers. Coding guidelines are available for many programming languages and technologies, and can help improving quality and reducing code weaknesses and unexpected gas consumption. Based on the analyzed data from social coding platforms, we believe that this list can include, e.g., avoiding multidimensional arrays, or arrays in general, if possible; carefully checking gas consumption amounts; recomming specific safe libraries such as \emph{SafeMath}~\cite{Safemath} to avoid common pitfalls such as underflows; and avoiding substantial sub-contract creation. Finally, we see a need for action from the Solidity team, in particular when it comes to clearly documenting existing libraries and code weaknesses they can resolve, as well as clearly defining gas requirements for patterns, functions, declaration types, creating subcontracts, upgrading contracts, and other language artifacts and operations. Such documentation could substantially contribute to a reduction in smart contract vulnerabilities and weaknesses in Solidity/Ethereum. \section{Threats to Validity} \label{sec:threats} In the following, we discuss threats to the validity of our findings according to internal and external validity, as well as reliability. \subsection{Internal Validity} Card sorting was used as the main method to categorize and label the posts in this work. This method is subjective and open to bias, hence two experts verified the manual labeling twice independently in order to mitigate this threat. In case of any disagreement, the two experts would discuss it until reaching to a consensus. An additional threat to internal validity lies in the keyword search employed during data collection. 
For instance, to collect Q\&A posts from Stack-Overflow, we used the tags ``Smart contract'', ``Ethereum'', and ``Solidity'', while for GitHub we used ``smart contract'', ``Solidity'', and ``Ethereum''. Also, a part of the collected data with insufficient information or with information that is not related to smart contract vulnerabilities and code weaknesses written in Solidity were excluded from the analysis. This search and filtering process might have led to exclude relevant data that could have led to further insights. As trustworthiness of the collected posts can be a source of noise in the data, we removed negative voted questions from the dataset. This might have resulted in a systematic under-representation of certain types of vulnerabilities/bugs. Given the maturity of Solidity, we deemed this step warranted to allow for data of sufficiently high quality. \subsection{External Validity} A threat to external validity is the continuous updates of Ethereum using the hard fork. More improvements are being added to Ethereum with each hard fork such as the EIPs \footnote{https://github.com/ethereum/EIP} that ensures the energy efficiency of the proof-of-work. Furthermore, many new features are added to Solidity newer versions. Therefore, new contract vulnerabilities and bugs may be introduced and some others may be resolved. This means that our classification might not generalize to newer (or older) versions of Solidity. Focusing on Solidity and the Ethereum blockchain only limits the external validity of the results, as there are other blockchains and other smart contract languages that could have yielded further or different vulnerabilities/bugs. We focus on Ethereum since it is the second-largest blockchain, and the largest blockchain that supports smart contracts (written in Solidity). However, additional studies into other technologies could be a valuable addition to our findings, and our results might not generalize to other blockchains or smart contract languages. \subsection{Reliability} To ensure reliability of the results, we described in detail the data collection and analysis process. We systematically calculated inter-rater reliability coefficients in iterative rounds, to ensure sufficiently clear categories and agreement among two expert raters. Finally, we published the full dataset~\cite{dataset}. Overall, these steps should ensure reliability and enable replication of our study. \section{Conclusion} \label{sec:conclusion} Due to the immaturity of blockchain technology, vulnerabilities in smart contracts can have severe consequences and result in substantial financial losses. Hence, it is essential to understand vulnerabilities and code weaknesses of smart contracts. However, there exist several shortcomings in existing classifications, i.e., they focus on single dimensions, mix dimensions, propose too broad categories, or rely on single data source omitting important sources for vulnerabilities. To address this gap, we extracted smart contract vulnerabilities written in Solidity from a number of important data sources, classified them in terms of error source and impact, and related them to existing classifications in literature. Our findings show that language-specific coding and structural data flow are the dominant categories of vulnerabilities in Solidity smart contracts. We also find that many vulnerabilities and code weaknesses are similar to known issues in general purpose programming languages, such as integer overflow and erroneous memory management. 
However, the immaturity and rapid evolution of the technology and the Solidity language, and the added concept of gas furthermore adds vulnerabilities and code weaknesses, and increases the risk of attacks and financial losses. Interestingly, we find that the frequency at which the different categories occur differs widely across data sources, indicating that they should not be viewed in isolation. Our classification scheme is a further step to standardize and unify vulnerability analysis in smart contracts. This can support researchers in building tools and methods to avoid, detect, and fix smart contract vulnerabilities in the future. Specifically, we see a number of directions for future research. First, future studies should investigate whether our classification scheme can be generalized by investigating other smart contracts in other blockchain networks, e.g., Hyperledger, Stellar, and Openchain. Potentially, our classification needs to be modified to fit into other networks or languages. Similarly, the classification might have to be adapted as Solidity evolves and as new languages are developed for Ethereum smart contracts. Second, more dimensions and characteristics of the studied vulnerabilities and code weaknesses can be explored in the future. For instance, patterns among the extracted vulnerabilities and code weaknesses could be abstracted, as well as the evolution over time. For each of the categories, code metrics and detection tools can be explored. Finally, it should be investigated in which ways the defined vulnerabilities and code weaknesses can be exploited, as well as what the impact of these exploits will be. This can be accomplished by developing automated scripts or manually devising such exploits. \section*{Acknowledgment} The authors would like to thank {Mohammad Alsarhan}, a security expert, for participating in the card sorting and inter-rater agreement discussions. This work was supported by the {Icelandic} Research Fund (Rannís) grant number 207156-051. \bibliographystyle{abbrvnat}
1,108,101,562,482
arxiv
\section{Introduction: A Motivating Question} Let ${\mathcal X}=\{x_1,x_2,...,x_n\}$ be a fixed set of real numbers, and let $X_1,X_2,...,X_n$ be the successive values of a sample of size $n$ that is drawn sequentially without replacement from the set $\mathcal X$. We are concerned here with a systematic process by which one can construct martingales with respect to the sequence of sigma-fields $\sigma (X_1, X_2, ...,X_k)$, $1 \leq k \leq n$. For example, we consider the partial sums $$ S_k=X_1+X_2+ \cdots+X_k, \quad 1 \leq k \leq n, $$ and ask: Is there an $\{\F_k: 1 \leq k \leq n\}$ martingale where the values $\{S_k^2: 1 \leq k \leq n\}$ appear in an simple and explicit way? What about the values $\{S_k^3: 1 \leq k \leq n\}$, or the partial sums of $X_i^2$, etc. We show that there is a practical, unified approach to these problems. It faces some limitations due to the burden of algebra, but one can make considerable progress before those burdens become too cumbersome. We first illustrate the construction with two basic examples. These lead in turn to several martingales whose usefulness we indicate by the derivation of permutation inequalities --- both old and new. \section{First Example of the Construction} To begin, we consider $T_k=X_1^2+X_2^2+ \cdots+X_k^2$ for $1\leq k\leq n$, and we ask for a martingale where $T_k$ (or a deterministic multiple of $T_k$) appears as a summand. The set $\mathcal X$ is known before sampling beings, and the only source of randomness is the sampling process itself. The totals are known, deterministic, values which we denote by \begin{equation} M=S_n \quad \text{and} \quad B=T_n. \end{equation} We first compute the conditional expectation of $S^2_{k+1}$ given the immediate past, \begin{align*} E[S_{k+1}^2|\F_k]&=E[(S_{k}+X_{k+1})^2|\F_k]\\ &=E[(S_{k}^2+2S_kX_{k+1}+X_{k+1}^2|\F_k]\\ &=S_k^2+2S_k\frac{M-S_k}{n-k}+\frac{B-T_k}{n-k}\\ &=\frac{n-k-2}{n-k}S_k^2+\frac{2M}{n-k}S_k-\frac{1}{n-k}T_k+\frac{B}{n-k}. \end{align*} To organize this information, introduce the vector-column $\bxi_k=(S_k^2,S_k,T_k,1)^\top$, and note that we also have \begin{align*} E[S_{k+1}|\F_k]&=S_k+\frac{M-S_k}{n-k}=\frac{n-k-1}{n-k}S_k+\frac{M}{n-k}, \quad \text{and }\\ E[T_{k+1}|\F_k]&=T_k+\frac{B-T_k}{n-k}=\frac{n-k-1}{n-k}T_k+\frac{B}{n-k}. \end{align*} These observations can be combined into one matrix equation for the one-step conditional expected values $$E[\bxi_{k+1}|\F_k]=A_{k+1}\bxi_k,$$ where here the deterministic $4 \times 4$ matrix $A_{k+1}$ is given explicitly by {\small $$A_{k+1}=\left( \begin{array}{cccc} \displaystyle \frac{n-k-2}{n-k} & \displaystyle\frac{2M}{n-k} & \displaystyle-\frac{1}{n-k} & \displaystyle\frac{B}{n-k} \\ &&&\\ \displaystyle 0 & \displaystyle\frac{n-k-1}{n-k} & 0 & \displaystyle\frac{M}{n-k} \\ &&&\\ \displaystyle 0 & 0 & \displaystyle\frac{n-k-1}{n-k} & \displaystyle\frac{B}{n-k} \\ &&&\\ \displaystyle 0 & 0 & 0 & 1 \\ \end{array} \right). $$} The matrices $\{A_k: 1 \leq k \leq n-2\}$ are invertible and deterministic, so the vector process \begin{equation}\label{eq:MartingaleRepr} \bM_k=A_1^{-1}A_2^{-1}\cdots A_k^{-1}\bxi_k \end{equation} is well-defined and adapted to $\{\F_k: 1 \leq k \leq n-2 \}$. To check that it is a vector martingale we only need to note that \begin{align} E(\bM_{k+1}|\F_k)=&E(A_1^{-1}A_2^{-1}\cdots A_{k+1}^{-1}\bxi_{k+1}|\F_k) \label{ew:GeneralProcedure} \\ =&A_1^{-1}A_2^{-1}\cdots A_{k+1}^{-1}E(\bxi_{k+1}|\F_k) \notag \\ =&A_1^{-1}A_2^{-1}\cdots A_{k+1}^{-1}A_{k+1}\bxi_k =A_1^{-1}A_2^{-1}\cdots A_k^{-1}\bxi_k =\bM_k. 
\notag \end{align} Now, to extract the benefit from the martingale $\{\bM_k\}$, one just needs to make it more explicit. Here one is fortunate; an easy induction confirms that $A_1^{-1}A_2^{-1}\cdots A_k^{-1}$ is given by the upper-triangular matrix: {\small $$\frac{1}{n-k}\left( \begin{array}{cccc} \displaystyle \frac{n(n-1)}{n-k-1} & \displaystyle-\frac{2knM}{n-k-1} & \displaystyle\frac{kn}{n-k-1}& \displaystyle\frac{k(k+1)M^2-knB}{n-k-1 }\\ &&&\\ \displaystyle 0 & \displaystyle n & 0 & \displaystyle-kM \\ &&&\\ \displaystyle 0 & 0 & \displaystyle n & \displaystyle-kB \\ &&&\\ \displaystyle 0 & 0 & 0 & n-k \\ \end{array} \right). $$} Now, the coordinates of the vector martingale $\bM_k= (M_{1,k},M_{2,k},M_{3,k},M_{4,k})^\top$ are martingales in their own right, and it is worthwhile to examine them individually. The fourth coordinate just gives the trivial martingale $M_{4,k}\equiv 1$, but the other coordinates are much more interesting. The second and third coordinates give us two useful --- but known --- martingales, \begin{equation}\label{eq:MartingalsM2M3} M_{2,k}=(nS_k-kM)/(n-k) \quad \text{and } M_{3,k}=(nT_k-kB)/(n-k). \end{equation} Actually, we have only one martingale here; one gets the martingale $\{M_{3,k}\}$ from the martingale $\{M_{2,k}\}$ if one replaces ${\mathcal X}$ with the set of squares ${\mathcal X'}=\{x_1^2,x_2^2,...,x_n^2\}$. The martingale $\{M_{2,k}\}$ is given in Serfling (1974) and Stout (1974, p.~147). The earliest source we could identify for the martingale $\{M_{2,k}\}$ is Garsia (1968,~p.~82), but it is hard to say if this was its first appearance. The martingale $\{M_{2,k}\}$ is a natural one with few impediments to its discovery. To get a martingale that is more novel (and less transparent), we only need to consider the first coordinate of the vector martingale $\bM_k$. One can write this coordinate explicitly as \begin{align} M_{1,k}=&\frac{n(n-1)}{(n-k)(n-k-1)}S_k^2-\frac{2knM}{(n-k)(n-k-1)}S_k \notag \\ &+\frac{kn}{(n-k)(n-k-1)}T_k+\frac{k(k+1)M^2-knB}{(n-k)(n-k-1)}. \notag \end{align} This may seem complicated at first sight, but there is room for simplification. In many situations it is natural to assume that $M=S_n=0$, and, in that case, $M_{1,k}$ reduces to the more manageable martingale sequence that we denote by \begin{equation}\label{eq:FirstNewMartingale} \widetilde{M}_k=\frac{1}{(n-k)(n-k-1)}\left[(n-1)S_k^2-(B-T_k)k\right], \quad \quad 1\leq k \leq n-2. \end{equation} We will give several applications of this martingale to permutation inequalities, and, in particular, we use it in Section \ref{sec:QuadraticRearrangement} to get a new bi-quadratic maximal inequality for permuted arrangements. We should also note that this martingale also has a potentially useful monotonicity property. Specifically, $T_k$ is monotone increasing, so by a little surgery on $T_k$ (say by replacing $T_k$ by $k^\alpha T_k$) will yield one a rich family of submartingales or supermartingales. The device used here to get the 4-vector martingale $\{\bM_n: 1 \leq k \leq n-2 \}$ can be extended in several ways. The most direct approach begins with $S_k^3$ in addition to $S_k^2$. In that case, linearization of the recursions requires one to introduce the terms $S_kT_k$ and $U_k=X_1^3+X_2^3+ \cdots+X_k^3$. The vectors $\{\bxi_k\}$ are then 7-dimensional, and, the matrix algebra is still tractable through symbolic computation, but it is unpleasant to display. One can follow the construction and obtain seven martingales. Some of these are known, but many as four may be new. 
The ones with $S_k^3$ and $S_kT_k$ are (almost) guaranteed to be new. Nevertheless, we do not pursue the seven-dimensional example here. Instead, for our second example, we apply the general construction to a simpler two-dimensional problem. In this example, the algebra is much more attractive, and the martingale that one finds has considerable flexibility. \section{Second Example of the Construction}\label{sec:SecondConstruction} As before, we assume that $X_1, X_2, ...,X_n$ is a sequential sample taken without replacement from $\mathcal X$, and we impose the centering condition $S_n=M=0$. Further, we consider a system of ``multipliers" $a_k$, $1\leq k\leq n$ where each $a_k$ is assumed to be an $\mathcal{F}_{k-1}$ measurable random variable. Since our measure space is finite, these non-anticipating random variables are automatically bounded. The basic building block for our next collection of martingales is the sequence of random variables defined by setting $$ W_k=a_1X_1+a_2X_2+\cdots+a_kX_k \quad \text{for }1\leq k\leq n. $$ The immediate task is to find a martingale that has $W_k$ (or a deterministic multiple of $W_k$) as a summand. As before, we begin by calculating the one-step conditional expectations: $$ E[W_{k+1}|\mathcal{F}_k]=W_k-\frac{a_{k+1}}{n-k}S_k \quad \text{and } \quad E[S_{k+1}|\F_k]=\frac{n-k-1}{n-k}S_k. $$ If we introduce the vector $\bEta_k=(W_k,S_k)^{\top}$ we have $$ E[\bEta_{k+1}|\mathcal{F}_k]=A_{k+1}\bEta_k \quad \text{where } { A_{k+1}=\left( \begin{array}{cc} \displaystyle 1 & \displaystyle -\frac{a_{k+1}}{n-k} \\ &\\ \displaystyle 0 &\displaystyle \frac{n-k-1}{n-k} \end{array} \right)}. $$ Inversion is now especially easy, and we note that for $$A_{k}=\left( \begin{array}{cc} \displaystyle 1 & \displaystyle -\frac{a_{k}}{n-k+1} \\ &\\ \displaystyle 0 & \displaystyle \frac{n-k}{n-k+1} \end{array} \right) \quad \text{we have } \quad A_{k}^{-1}=\left( \begin{array}{cc} \displaystyle 1 & \displaystyle \frac{a_{k}}{n-k} \\ &\\ \displaystyle 0 & \displaystyle \frac{n-k+1}{n-k} \end{array} \right), $$ so induction again confirms the critical inverse: $$A_1^{-1}A_2^{-1}\cdots A_{k}^{-1}=\left( \begin{array}{cc} \displaystyle 1 & \displaystyle \frac{a_1+a_2+\cdots+a_{k}}{n-k} \\ &\\ \displaystyle 0 & \displaystyle \frac{n}{n-k} \end{array} \right).$$ The general recipe \eqref{ew:GeneralProcedure} then gives us a new martingale: \begin{equation}\label{eq:WeightedMartingale} M_k= W_k+\frac{a_1+a_2+ \cdots +a_k}{n-k}S_k \quad \quad 1\leq k < n \end{equation} Once this is martingale is written down, one could also verify the martingale property by a direct calculation. In this instance, linear algebra has served us mainly as a tool for discovery. Some of the benefits of the martingale \eqref{eq:WeightedMartingale} are brought to life through interesting choices for the non-anticipating factors $\{a_k: 1\leq k \leq n\}$. For, example, if we take $a_1=0$ and set $a_k=X_{k-1}$ for $k\geq 2$, we find a curious quadratic martingale $$ M_k= X_1X_2+X_2X_3+\cdots+X_{k-1}X_k+\frac{(X_1+\cdots+X_{k-1})(X_1+\cdots+X_k)}{n-k}. $$ One is unlikely to have hit upon this martingale without the benefit of a systematic construction. For the moment, no application of this martingale comes to mind, but it does seem useful to know that there is such a simple quadratic martingale. Perhaps a nice application is not too far away. 
Out main purpose here is to expose the general construction \eqref{ew:GeneralProcedure}, but, through the illustrations just given, we now have several martingales that speak directly to permutation inequalities --- an extensive subject where martingales have traditionally been part of the toolkit. As we explore what can be done with the new martingales \eqref{eq:FirstNewMartingale} and \eqref{eq:WeightedMartingale}, we will also address some of the classic results on permutation inequalities. \section{A Permutation Inequality}\label{sec:firstRAI} In the first application, we quickly check what can be done with the martingale $M_{2,k}=(nS_k-kM)/(n-k)$ given by \eqref{eq:MartingalsM2M3} in our first construction. To keep the formulas simple, we impose a standing assumption, \begin{equation}\label{M=0}S_n=M=0,\end{equation} so, $ {S_k}/({n-k}) $ is an $\{\F_k \}$ martingale with expectation zero. The Doob-Kolmogorov $L^2$ maximal inequality (see e.g.~Shiryaev (1995, p.~493)) then gives us $$E\max_{1\leq k\leq n-1}\left|({n-k})^{-1}{S_k}\right|^2\leq 4 ES_{n-1}^2.$$ This can be simplified by noting that $$ ES_{n-1}^2=E(M-X_n)^2=EX_n^2=EX_1^2={B}/{n}, $$ so, in the end, we have \begin{equation}\label{easy*Garsia} E\max_{1\leq k\leq n-1}\left|({n-k})^{-1}{S_k}\right|^2\leq 4B/{n}. \end{equation} One could immediately transcribe this as a permutation inequality, but first we put it in a form that seems more natural for applications. For any fixed permutation $\sigma$, the distribution of the vector $(X_1, X_2,\dots, X_n)$ is the same as distribution the vector $(X_{\sigma(1)}, X_{\sigma(2)}, \ldots ,X_{\sigma(n)})$ (cf.~Feller (1971, p.~228)), so, in particular the distribution of $(X_1, X_2,\dots, X_n)$ is the same as the distribution of $(X_n, X_{n-1},\dots, X_1)$. By the centering assumption \eqref{M=0}, we know $$ (X_1+X_2+\cdots+X_k)^2=(X_{k+1}+X_{k+2}+\cdots +X_n)^2, $$ so applying both observations gives us the identity $$E\max_{1\leq k\leq n-1}\left|({n-k})^{-1}{S_k}\right|^2=E\max_{1\leq k\leq n-1}\left|{S_k}/{k}\right|^2.$$ Using this bound in \eqref{easy*Garsia} then gives us \begin{equation}\label{eq:M2Identity} E\max_{1\leq k\leq n-1}\left|{S_k}/{k}\right|^2\leq {4B}/{n}, \end{equation} so by the sampling model and the definition of $B$, we come to an attractive inequality for the maximum of averages drawn sequentially from a randomly permutated sample. \begin{proposition}[Max-Averages Inequality]\label{Serfling*Martingale} For real numbers $\{x_1,x_2,...,x_n\}$ with $x_1+x_2+\cdots+x_n=0$, we have \begin{equation}\label{eq:SerflingInequality} \frac{1}{n!} \sum_{\sigma}\max_{1\leq k\leq n}\left\{ \frac{1}{k} {\sum_{i=1}^kx_{\sigma(i)}}\right\}^2 \leq \frac{4}{n}\sum_{i=1}^nx_i^2. \end{equation} \end{proposition} This bound is so natural, it is likely to be part of the folklore of permutation inequalities, but we have been unable to locate it in earlier work. Still, even if the inequality is known, it seems probable that it has been under appreciated. In two examples in Section \ref{sec:AlternaitingSums} we find that it provides an efficient alternative to other, more complicated tools. This inequality also bears a family relationship to an important inequality that originated with Hardy (1920). 
Hardy's inequality went through some evolution before it reached its modern form (see Steele (2004), pp.~169 and 290 for historical comments), but now one may write Hardy's inequality in a definitive way that underscores the analogy with \eqref{eq:SerflingInequality}: \begin{equation}\label{eq:Hardy} \max_{\sigma} \sum_{k=1}^n \left\{ \frac{1}{k} {\sum_{i=1}^kx_{\sigma(i)}}\right\}^2 \leq 4 \sum_{i=1}^n x_i^2. \end{equation} It is well known (and easy to prove) that the constant in Hardy's inequality is best possible, and it is feasible that the constant in \eqref{eq:SerflingInequality} is also best possible. We have not been able to resolve this question. \section{Exchangeability and a Folding Device}\label{sec:foldingdevice} One also has the possibility of using exchangeability more forcefully; in particular, one can exploit exchangeability around the center of the sample. For a preliminary illustration of this possibility, we fix $1\leq m < n$ and note that the martingale property \eqref{eq:FirstNewMartingale} of $\widetilde{M}_k$ gives us \begin{align*} ES_m^2=&E\frac{m}{n-1}[B-T_m] =\frac{m}{n-1}[B-ET_m]. \end{align*} The martingale property \eqref{eq:MartingalsM2M3} of $M_{3,k}$ also gives us $ET_m=({m}/{n})B$, so we have \begin{equation}\label{S_2*Expectation} ES_m^2=\frac{m(n-m)}{n(n-1)}B, \end{equation} a fact that one can also get by bare hands, though perhaps not so transparently. We can also use the martingale $M_{2,k}=(nS_k-kM)/(n-k)= nS_k/(n-k)$ here. Since we have $M=0$, the $L^2$ maximal inequality for this martingale and the identity \eqref{S_2*Expectation} give us \begin{equation}\label{eq:preFold} E\max_{1\leq k\leq m}\left|\frac{S_k}{n-k}\right|^2\leq \frac{4 ES_{m}^2}{(n-m)^2} = \frac{4 m B}{n(n-1)(n-m)}. \end{equation} Since $(n-k)^{2} \leq n^{-1}(n-1)$ for $1\leq k\leq m$, the bound \eqref{eq:preFold} implies the weaker, but simpler, bound \begin{equation}\label{eq:FirstHalfFold} E\max_{1\leq k\leq m}S_k^2\leq \frac{4mB}{n-m} \quad \quad \text{for } 1 \leq m < n. \end{equation} The idea now is to use symmetry to exploit the fact that \eqref{eq:FirstHalfFold} holds for many choices of $m$. To begin, we note that we always have the crude bound \begin{equation}\label{eq:crude} E\max_{1\leq k\leq n}S_k^2\leq E\max_{1\leq k\leq m}S_k^2+E\max_{m< k\leq n}S_k^2. \end{equation} Typically this is useless, but here it points to a useful observation; we can use the same bounds on the second terms that we used on the first. This is the ``folding device" of the section heading. Specifically, since $M=0$ we have $$ |S_k|=|S_n-S_k|=|X_n+X_{n-1}+\cdots+X_{k+1}|, $$ so, by exchangeability, we see that the second summand of \eqref{eq:crude} is bounded by $4(n-m)B/m$. Thus, we have for all $1\leq m < n$ that \begin{equation E\max_{1\leq k\leq n}S_k^2\leq E\max_{1\leq k\leq m}S_k^2+E\max_{m< k\leq n}S_k^2\leq 4B \left(\frac{m}{n-m}+\frac{n-m}{m}\right). \end{equation} Here, if we just consider $n\geq 8$ and choose $m$ as close as possible to $n/2$, then we see that the worst case occurs when $n=9$ and $m=4$; so we have the bound \begin{equation}\label{backward*Serfling} E\max_{1\leq k\leq n}S_k^2\leq E\max_{1\leq k\leq m}S_k^2+E\max_{m< k\leq n}S_k^2\leq (41/5)B \quad \text{for } n \geq 8. \end{equation} On the other hand, Cauchy's inequality gives us $S_k^2\leq kB \leq nB$, so the bound \eqref{backward*Serfling} is trivially true for $n\leq 7$. Combining these ranges gives us a centralized version of an inequality originating with A. Garsia (cf. Stout (1974, pp. 145--148)). 
\begin{proposition}[A. Garsia] \label{Garsia*Inequality} For any set of real numbers $\{x_1,x_2,...,x_n\}$ with sum $x_1+x_2+\cdots+x_n=0$ we have \begin{equation}\label{eq:GarsiaProp} \frac{1}{n!}\sum_{\sigma}\max_{1\leq k\leq n}|\sum_{i=1}^kx_{\sigma(i)}|^2\leq (8+\frac{1}{5})\sum_{i=1}^nx_i^2, \end{equation} where the sum is taken over all possible permutations of $\{x_1,x_2,...,x_n\}$. \end{proposition} Here we have paid some attention to the constant in this inequality, but already from the crude bound \eqref{eq:crude} one knows that the present approach is not suited to the derivation of a best possible result. Our intention has been simply to illustrate what one can do with reasonable care and robust approach that uses the tools provided by our general martingale construction. Nevertheless, the constant in this inequality has an interesting history. The bound seems first to have appeared in Garsia (1968, Theorem 3.2) with the constant $9$. Curiously, the inequality appears later in Garsia (1970, eqn. 3.7.15, p.~91) where the constant is given as $16$, and for the proof of the inequality one is advised to ``following the same steps" of Garsia (1964). In each instance, the intended applications did not call for sharp constants, so these variations are scientifically inconsequential. Still, they do make one curious. Currently the best value for the constant in the Garsia inequality \eqref{eq:GarsiaProp} is due to Chobanyan (1994, Corollary 3.3) where the stunning value of $2$ is obtained. Moreover, Chobanyan and Salehi (2001, Corollary 2.8)) have a much more general inequality which also gives a constant of $2$ when specialized to our situation. Here we should also note that in all inequalities of this type have both a centralized version where $M=0$ and non-centralized version where $M$ is unconstrained. One can pass easily between the versions (see e.g. Stout (1974, p.~147) or Garsia (1970, p.~93)). The centralized versions are inevitably simpler to state, so we have omitted discussions of the non-centralized versions. \section{Quadratic Permutation Inequality}\label{sec:QuadraticRearrangement} To apply the $L^2$ maximal inequality to the martingale $\{ \widetilde{M}_k : 1 \leq k \leq n-2 \}$ that was discovered by our first construction \eqref{eq:FirstNewMartingale}, one needs a comfortable estimate of the bounding term $4 E[\widetilde{M}_{n-2}^2]$ in the Doob-Kolmogorov inequality. There are classical formulas for the moments for sampling without replacement that simplify this task. These formulas were known to Isserlis (1931), if not before, but they are perhaps easiest to derive on one's own. \begin{lemma}\label{Moments} If $X_1,X_2,X_3,X_4$ are draw without replacement from ${\mathcal X}=\{x_1,x_2,...,x_n\}$ where $x_1+x_2+ \cdots+x_n=0$, then we have the moments \begin{align*} &E(X_1X_2X_3X_4)=\frac{3B^2-6Q}{n(n-1)(n-2)(n-3)},\quad E(X_1^2X_2X_3)=\frac{2Q-B^2}{n(n-1)(n-2)},\\ &E(X_1^2X_2^2)=\frac{B^2-Q}{n(n-1)},\quad E(X_1^3X_2)=-\frac{Q}{n(n-1)}, \quad \text{and} \quad E(X_1^4)={Q}/{n}, \end{align*} where, as usual, we set $B=x_1^2+x_2^2+\cdots+x_n^2$ and $Q=x_1^4+x_2^4+ \cdots+x_n^4$. 
Now, to calculate $4 E[\widetilde{M}_{n-2}^2]$, we first note that $$ S_{n-1}=-X_n \quad \text{and} \quad S_{n-2}=-X_{n-1}-X_n, $$ so just expanding the definition of $\widetilde{M}_{n-2}$ gives us \begin{align*} 4 E[\widetilde{M}_{n-2}^2] = & E\left[\left\{(n-1)S_{n-2}^2-(n-2)(B-T_{n-2})\right\}^2\right]\\ = & E\left[\left\{(n-1)(X_{n-1}+X_n)^2-(n-2)(X_{n-1}^2+X_n^2)\right\}^2\right]\\ = & E\big[X_{n-1}^4+X_n^4+(4n^2-8n+6)X_{n-1}^2X_n^2\\ &\quad +4(n-1)X_{n-1}^3X_n+4(n-1)X_{n-1}X_n^3\big]. \end{align*} By Lemma \ref{Moments} and exchangeability, one then finds after some algebra that the $L^2$ maximal inequality takes the form \begin{equation}\label{eq:MaxEq4us} E\max_{1\leq k\leq n-2}\widetilde{M}_k^2 \leq 4 E[\widetilde{M}_{n-2}^2]=\frac{4n^2-8n+6}{n(n-1)}B^2-\frac{4n^2-2n}{n(n-1)}Q. \end{equation} The only task left is to reframe this inequality so that it is easy to apply as a permutation inequality. Here it is useful to observe that \begin{equation}\label{eq:nbounds} \frac{4n^2-8n+6}{n(n-1)}<4 \quad \text{and} \quad \frac{4n^2-2n}{n(n-1)}>4 \quad \quad \text{for } n\geq 2, \end{equation} moreover, these are not wasteful bounds; they are essentially sharp for large $n$. When we apply these bounds in \eqref{eq:MaxEq4us} we get the nicer bound \begin{equation}\label{our*forward} E\max_{1\leq k\leq n-2}\left|\frac{(n-1)S_k^2-k(B-T_k)}{(n-k)(n-k-1)}\right|^2 \leq 4[B^2-Q]. \end{equation} This brings us closer to our goal, but further simplification is possible if we note that by reverse sampling the left side of \eqref{our*forward} can also be written as \begin{align*}E\max_{1\leq k\leq n-2}\left|\frac{(n-1)S_k^2-k(B-T_k)}{(n-k)(n-k-1)}\right|^2 =&E\max_{2\leq k\leq n-1}\left|\frac{(n-1)S_k^2-(n-k)T_k}{k(k-1)}\right|^2\\ =&E\max_{2\leq k\leq n}\left|\frac{(n-1)S_k^2-(n-k)T_k}{k(k-1)}\right|^2. \end{align*} In permutation terms, the bound \eqref{our*forward} establishes the following proposition. \begin{proposition}[Quadratic Permutation Inequality]\label{our*martingale} For any set of real numbers $\{x_1,x_2,...,x_n\}$ with $x_1+x_2+\cdots+x_n=0$ we have \begin{equation*} \frac{1}{n!}\sum_{\sigma}\! \max_{2\leq k\leq n}\!\left|\frac{\left(\sum_{i=1}^kx_{\sigma(i)}\right)^2 \!\!-\frac{n-k}{n-1}\sum_{i=1}^kx_{\sigma(i)}^2}{k(k-1)}\right|^2 \!\leq \frac{4}{(n-1)^2} \! \left\{\left(\sum_{i=1}^nx_i^2\right)^2-\sum_{i=1}^n x_i^4 \right\}\!, \end{equation*} where the first sum is taken over all possible permutations of $\{x_1,x_2,...,x_n\}$. \end{proposition} At first glance, this may seem complicated, but the components are all readily interpretable and the inequality is no more complicated than it has to be. The main observation is that our general construction \eqref{eq:MartingaleRepr} brought us here in a completely straightforward way. Without such an on-ramp, one is unlikely to imagine any bound of this kind --- and of this \emph{relative} simplicity. Moreover, the inequality does have intuitive content, and this content is made more explicit in the next section.
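Proposition~\ref{our*martingale} can likewise be confirmed by exact enumeration for small $n$. The sketch below (again an added illustration with an arbitrary centered data set) averages the maximal term over all $n!$ permutations and compares it with the stated bound.
\begin{verbatim}
import itertools

x = [2.0, -1.0, -1.0, 1.0, 0.5, -1.5]   # centered: sum(x) == 0
n = len(x)
B = sum(v * v for v in x)
Q = sum(v ** 4 for v in x)

total, count = 0.0, 0
for p in itertools.permutations(x):
    s = t = best = 0.0
    for k, v in enumerate(p, start=1):
        s += v              # partial sum of the permuted values
        t += v * v          # partial sum of their squares
        if k >= 2:
            term = (s * s - (n - k) / (n - 1) * t) / (k * (k - 1))
            best = max(best, term * term)
    total += best
    count += 1

lhs = total / count
rhs = 4.0 / (n - 1) ** 2 * (B * B - Q)
print(lhs, "<=", rhs, ":", lhs <= rhs)
\end{verbatim}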
\section{The Discrete Bridge and Further Folding}\label{sec:DiscreteBridge} Here we set $x_i=1$ for $1 \leq i \leq m$ and take $x_i=-1$ for $m+1 \leq i \leq 2m$. We then let $X_i$, $1\leq i \leq 2m$, denote samples that are drawn without replacement from the set ${\mathcal X}=\{x_1,x_2, \ldots, x_{2m}\}$. If we put $S_0=0$ and denote the usual partial sums by $S_k$, $1\leq k \leq 2m$, then $S_{2m}=0$ and the process $\{S_k: 0 \leq k \leq 2m\}$ is a discrete analog of the Brownian bridge. Alternatively, one can view this process as a simple random walk that is conditioned to return to $0$ at time $2m$. For $\{\widetilde{M}_k\}$, the martingale from the first construction \eqref{eq:FirstNewMartingale}, we now have $T_k\equiv k$, so we have the simple representation \begin{equation}\label{eq:DBMart} \widetilde{M}_k=\frac{(2m-1)S_k^2-(2m-k)k}{(2m-k)(2m-k-1)}, \quad \quad 1\leq k\leq 2m-2. \end{equation} By Lemma~\ref{Moments} with $B=Q=2m$ and by \eqref{S_2*Expectation}, we also find that at the (left) mid-point $m$ of our process we have the nice relations \begin{equation}\label{eq:ExpS4} E[S^2_m]=m^2/(2m-1)\quad \text{and } E[S_m^4]=\frac{3m^4-4m^3}{4m^2-8m+3}. \end{equation} These give us a rational formula for $E[\widetilde{M}_m^2]$, but for the moment, we just use the partial simplification \begin{align*} E[\widetilde{M}_m^2]&=E\left|\frac{(2m-1)S_{m}^2-(2m-m)m}{(2m-m)(2m-m-1)}\right|^2\\ &= \frac{1}{m^2(m-1)^2}\left[(2m-1)^2ES_{m}^4-m^4\right]. \end{align*} For $1\leq k \leq m$ we have the trivial bound $$ \left|\frac{(2m)(2(m-1))}{(2m-k)(2m-k-1)}\right|^2 \geq 1, $$ so the $L^2$ maximal inequality applied to the martingale \eqref{eq:DBMart} gives us \begin{equation}\label{eq:ForBoth} E\max_{1\leq k\leq m}\left|(2m-1)S_{k}^2-(2m-k)k\right|^2 \leq 64\left[(2m-1)^2ES_{m}^4-m^4\right]. \end{equation} We can now take advantage of a symmetry that is special to the discrete bridge. Since $S_{2m}=0$ we have $(X_1+\cdots+X_k)^2=(X_{2m}+\cdots+X_{k+1})^2$, and by exchangeability the vectors $(X_1,\dots,X_m)$ and $(X_{2m},\dots,X_{m+1})$ have the same distribution. Also, the value $(2m-k)k$ is ``invariant'' in the following sense: if we substitute for $k$ (the number of summands in $X_1+X_2+\cdots+X_k$) the value $2m-k$ (the number of summands in $X_{2m}+X_{2m-1}+...+X_{k+1}$), then the symmetric quantity $(2m-k)k$ is unchanged. As a consequence, the random variables $$ \max_{1\leq k\leq m}\left|(2m-1)S_{k}^2-(2m-k)k\right|^2 \quad \text{and} \quad \max_{m\leq k\leq 2m-1}\left|(2m-1)S_{k}^2-(2m-k)k\right|^2 $$ are equal in distribution. We can then apply \eqref{eq:ForBoth} twice to obtain \begin{align*} E\max_{1\leq k\leq 2m-1}\left|(2m-1)S_{k}^2-(2m-k)k\right|^2\leq 128\left[(2m-1)^2ES_{m}^4-m^4\right]. \end{align*} From this bound and the formula \eqref{eq:ExpS4} for $ES_m^4$, we then have \begin{align*} E\max_{1\leq k\leq 2m-1}\left|S_{k}^2-k\frac{2m-k}{2m-1}\right|^2\leq 128\frac{4m^3-8m^2+4m}{8m^3-20m^2+14m-3}m^2, \end{align*} and, for any $m\geq 2$ we have $(4m^3-8m^2+4m)/(8m^3-20m^2+14m-3)<1$. In the end, we have the following proposition. \begin{proposition}[Quadratic Permutation Inequality for Discrete Bridge]\label{Now*done*proposition} For the set ${\mathcal X}=\{x_1,x_2, \ldots, x_{2m}\}$ with $x_i=1$ for $1 \leq i \leq m$ and $x_i=-1$ for $m+1 \leq i \leq 2m$ one has \begin{align}\label{Now*Done} \frac{1}{(2m)!}\sum_{\sigma}\max_{1\leq k\leq 2m-1}\left|\left(\sum_{i=1}^kx_{\sigma(i)}\right)^2 - k\frac{2m-k}{2m-1}\right|^2\leq 128 m^2, \end{align} where the first sum is taken over all possible permutations. \end{proposition} The terms of the squared discrete bridge process $\{S_k^2: 0\leq k \leq 2m\}$ have the expectations $E S_{k}^2=k(2m-k)/(2m-1)$, so \eqref{Now*Done} gives us a rigorous bound on the maximum deviation of the square $S_k^2$ of a discrete bridge from its expected value $ES_k^2$. Here, the order $O(m^2)$ of the bound cannot be improved, but, if necessity called, one may be able to improve on the ungainly constant $128$.
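As a quick numerical illustration (added here, and not needed for the proof), the bound \eqref{Now*Done} can be checked exactly for small $m$ by enumerating all arrangements of the $\pm 1$ sample.
\begin{verbatim}
import itertools

m = 3
x = [1] * m + [-1] * m                  # the discrete bridge sample
total, count = 0.0, 0
for p in itertools.permutations(x):     # all (2m)! ordered arrangements
    s, best = 0, 0.0
    for k in range(1, 2 * m):
        s += p[k - 1]                   # the partial sum S_k
        dev = s * s - k * (2 * m - k) / (2 * m - 1)
        best = max(best, dev * dev)
    total += best
    count += 1

lhs = total / count
print(lhs, "<=", 128 * m * m, ":", lhs <= 128 * m * m)
\end{verbatim}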
\section{Permuted Sums with Fixed Weights}\label{sec:AlternaitingSums} In Section \ref{sec:SecondConstruction}, we considered the weighted sums $$ W_k=a_1X_1+a_2X_2+\cdots+a_kX_k \quad \text{for }1\leq k\leq n, $$ and we found that the process \begin{equation}\label{eq:WeightedMartingale2} M_k= W_k+\frac{a_1+a_2+ \cdots +a_k}{n-k}S_k \quad \quad 1\leq k < n \end{equation} is a martingale whenever the multipliers are non-anticipating random variables (i.e. whenever $a_k$ is $\F_{k-1}$-measurable for each $1\leq k \leq n$). Here we will show that this martingale has informative uses even when one simply takes the multipliers to be fixed real numbers. First we introduce some shorthand. For $1 \leq k \leq n$ we will write \begin{equation}\label{eq:alphaNotation} \alpha_1(k)=a_1+a_2+\cdots+a_k \quad \text{and} \quad \alpha_2(k)=a^2_1+a^2_2+\cdots+a^2_k. \end{equation} Also, we will be most interested in multiplier vectors $a=(a_1, a_2, \ldots, a_n)$ for which we have some control of the quantity \begin{equation}\label{eq:Vcondition} V_n(a)\stackrel{\rm def}{=}\max_{1\leq k\leq n-1}\alpha_1^2(k)\big/\alpha_2(n), \end{equation} which is a measure of cancelation among the multipliers. A leading example worth keeping in mind is the sequence $a_k=(-1)^{k+1}$, $1 \leq k \leq n$, for which we have $V_n(a)=1/n$. We will revisit the measure $V_n(a)$ after we derive a moment bound. \begin{lemma}\label{lm:Msqrd} For the martingale $\{M_k\}$ defined by equation \eqref{eq:WeightedMartingale2}, we have \begin{align}\label{eq:Msqrd} E[M_k^2]= \frac{1}{n-1} \alpha_2(k) B +\frac{1}{(n-1)(n-k)} \alpha_1^2(k) B. \end{align} \end{lemma} \begin{proof} Simply squaring \eqref{eq:WeightedMartingale2}, we have $$ |M_k|^2=W_k^2+\frac{\alpha_1^2(k)}{(n-k)^2}S_k^2+2\frac{\alpha_1(k)}{n-k}W_kS_k, $$ so we just need to find $EW_k^2$, $ES_k^2$ and $EW_kS_k$. We know $ES_k^2$ from (\ref{S_2*Expectation}), and we have $EX_i^2=B/n$ and $EX_iX_j=-B/(n(n-1))$ so \begin{align*} EW_k^2=&E\Big[\sum_{i=1}^{k}a_i^2X_i^2+\sum_{1\leq{i,j}\leq k, i\neq j}a_ia_jX_iX_j\Big]\\ =&\alpha_2(k)\frac{B}{n}+\left[\alpha_1^2(k)-\alpha_2(k)\right]\left[-\frac{B}{n(n-1)}\right] =\alpha_2(k)\frac{B}{n-1}-\alpha_1^2(k)\left[\frac{B}{n(n-1)}\right]. \end{align*} Similarly, we have \begin{align*} EW_kS_k=&E\Big[\sum_{i=1}^{k}a_iX_i^2+\sum_{i=1}^ka_i\sum_{1\leq{j}\leq k, j\neq i}X_iX_j\Big]\\ =&\alpha_1(k)\frac{B}{n}-(k-1)\alpha_1(k)\left[\frac{B}{n(n-1)}\right] =\alpha_1(k)\frac{(n-k)B}{n(n-1)}, \end{align*} so summing up the terms completes the proof of the lemma. \end{proof} From this lemma, the $L^2$ maximal inequality gives us \begin{align}\label{eq:weightedMax} E\max_{1\leq k\leq n-1} M_{k}^2 \leq & 4E[M_{n-1}^2] =4 [\alpha_2(n-1) +\alpha_1^2(n-1)]\frac{B}{n-1}, \end{align} but to extract real value from it we need to relate it to the weighted sum $W_k$. Using \eqref{eq:WeightedMartingale2} we can write $W_k$ as the difference between $M_k$ and $\alpha_1(k){S_k}/({n-k})$, so from the trivial bound $(x+y)^2\leq 2x^2+2y^2$ we have \begin{align*} \max_{1\leq k\leq n-1}|W_k|^2&\leq 2\max_{1\leq k\leq n-1}|M_k|^2+2 \max_{1\leq k\leq n-1}\alpha_1^2(k)\left|\frac{S_k}{n-k}\right|^2\\ &\leq 2\max_{1\leq k\leq n-1}|M_k|^2+2\max_{1\leq k\leq n-1}\alpha_1^2(k) \cdot\max_{1\leq k\leq n-1}\left|\frac{S_k}{n-k}\right|^2.
\end{align*} The second step may look wasteful, but now we can apply both \eqref{eq:weightedMax} and the Max-Averages inequality~(\ref{easy*Garsia}) from Section \ref{sec:firstRAI} to obtain \begin{align*} E\max_{1\leq k\leq n-1}|W_k|^2\leq 8\Big[\alpha_2(n-1)+\alpha_1^2(n-1) \Big]\frac{B}{n-1} +8\max_{1\leq k\leq n-1} \alpha_1^2(k) \frac{B}{n}. \end{align*} Note that we also have \begin{align*} \max_{1\leq k\leq n}|W_k|^2&=\max\left(\max_{1\leq k\leq n-1}\left|W_k\right|^2, (W_{n-1}+a_nX_n)^2\right)\\ &\leq\max\left(\max_{1\leq k\leq n-1}\left|W_k\right|^2, 2W_{n-1}^2+2a_n^2X_n^2\right)\\ &\leq2\max_{1\leq k\leq n-1}\left|W_k\right|^2+2a_n^2X_n^2, \end{align*} and the bottom line is that \begin{equation}\label{eq:quantitytobound} E\left\{\max_{1\leq k\leq n}|W_k|^2\right\} =\frac{1}{n!}\sum_{\sigma}\max_{1\leq k\leq n}\left|\sum_{i=1}^k a_i x_{\sigma(i)}\right|^2 \end{equation} is bounded by the lengthy (but perfectly tractable) sum \begin{equation}\label{weighted*Garsia} 16\Big[\alpha_2(n-1)+\alpha_1^2(n-1)\Big]\frac{B}{n-1} +16\max_{1\leq k\leq n-1}\alpha_1^2(k)\frac{B}{n} +2a_n^2\frac{B}{n}. \end{equation} To make this concrete, note that for $a_k=(-1)^{k+1}$ the bound on \eqref{eq:quantitytobound} that we get from \eqref{weighted*Garsia} is simply $(16n/(n-1)+18/n)B$. The ratio $16n/(n-1)+18/n$ decreases to $16$, and for $n=18$ the upper bound is equal to $(17+{16}/{17})B$. By Cauchy's inequality, we have $|W_k|^2\leq k B$ for all $k$, so the bound $(17+{16}/{17})B$ also holds for all $n\leq 17$. When we assemble the pieces, we have a permutation maximal inequality for sums with alternating signs. \begin{proposition}\label{Garsia*alternating*sums*proposition} For real numbers $\{x_1,x_2,...,x_n\}$ with $x_1+x_2+\cdots+x_n=0$ we have \begin{equation}\label{Garsia*alternating*sums} \frac{1}{n!}\sum_{\sigma}\max_{1\leq k\leq n}|\sum_{i=1}^k(-1)^ix_{\sigma(i)}|^2 \leq \left(17+\frac{16}{17}\right)\sum_{i=1}^nx_i^2 \end{equation} where the sum is taken over all permutations of $\{x_1,x_2,...,x_n\}$. \end{proposition} The argument that leads to \eqref{Garsia*alternating*sums} is useful for more than just alternating sums; it has bite whenever there is meaningful cancelation in $a=(a_1, a_2, \ldots, a_n)$. Specifically, the bound \eqref{weighted*Garsia} is always dominated by $$\Big[16\alpha_2(n)+32 \max_{1\leq k\leq n-1}\alpha_1^2(k)\Big]\frac{B}{n-1},$$ so our proof also gives us a more general --- and potentially more applicable --- bound. \begin{proposition}\label{Garsia*alternating*sums*proposition*two} For sets of real numbers $\{a_1,a_2,...,a_n\}$ and $\{x_1,x_2,...,x_n\}$ such that $x_1+x_2+\cdots+x_n=0$ we have \begin{equation}\label{eq:GwithV} \frac{1}{n!}\sum_{\sigma}\max_{1\leq k\leq n}|\sum_{i=1}^ka_ix_{\sigma(i)}|^2 \leq \frac{16}{n-1}\{1+2V_n(a)\} \sum_{i=1}^na_i^2\sum_{i=1}^nx_i^2, \end{equation} where the sum is taken over all permutations of $\{x_1,x_2,...,x_n\}$ and where $V_n(a)$ is defined in \eqref{eq:Vcondition}. \end{proposition} This inequality shows that there are concrete benefits to introducing $V_n(a)$. For example, by a sustained and subtle argument, Garsia (1970, 3.7.20, p.~92) arrived at a version of our inequality where the coefficient $16 \{1+2V_n(a)\}$ is replaced by ${80}$. Now, for uniform multipliers, one has $V_n(a)=O(n)$ and Garsia's inequality is greatly superior to our bound \eqref{eq:GwithV}. On the other hand, for multipliers that satisfy the cancelation property $V_n(a)\to 0$, the present bound eventually meets and beats Garsia's bound.
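As a sanity check (an added illustration; the data set and the choice of alternating multipliers are arbitrary), the bound \eqref{eq:GwithV} can be verified by exact enumeration for a small $n$.
\begin{verbatim}
import itertools

x = [2.0, -1.0, 1.5, -2.0, -0.5]        # centered: sum(x) == 0
n = len(x)
a = [(-1) ** (k + 1) for k in range(1, n + 1)]       # alternating signs
B = sum(v * v for v in x)
A2 = sum(w * w for w in a)                           # alpha_2(n)
V = max(sum(a[:k]) ** 2 for k in range(1, n)) / A2   # V_n(a)

total, count = 0.0, 0
for p in itertools.permutations(x):
    s, best = 0.0, 0.0
    for w, v in zip(a, p):
        s += w * v                      # weighted partial sum W_k
        best = max(best, s * s)
    total += best
    count += 1

lhs = total / count
rhs = 16.0 / (n - 1) * (1 + 2 * V) * A2 * B
print(lhs, "<=", rhs, ":", lhs <= rhs)
\end{verbatim}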
In particular, for multipliers given by alternating signs, we saw in Proposition~\ref{Garsia*alternating*sums*proposition} that the constant can be taken to be just $17+16/17$. \section{Weighted Sums and Folding} By many accounts, the permutation maximal inequality of Garsia (1970, p.~86) is the salient result in the theory of permutation inequalities, so it is a natural challenge to see if it can be proved by the robust martingale methods that follow from our martingale constructions. We give a proof of this kind --- without appeal to $V_n(a)$. Once the proof is complete, we address the differences between Garsia's inequality and the present bound with its curious constant. \begin{proposition}\label{Garsia*Inequality*Weighted*Sums} For real numbers $\{a_1,a_2,...,a_n\}$ and real numbers $\{x_1,x_2,...,x_n\}$ such that $x_1+x_2+\cdots+x_n=0$ we have \begin{equation}\label{eq:permWweights} \frac{1}{n!}\sum_{\sigma}\max_{1\leq k\leq n}\big|\sum_{i=1}^ka_ix_{\sigma(i)}\big|^2 \leq \big(80+\frac{4}{205}\big){\sum_{i=1}^na_i^2\sum_{i=1}^nx_i^2}\big/(n-1), \end{equation} where the sum is taken over all possible permutations of $\{x_1,x_2,...,x_n\}$. \end{proposition} \begin{proof} First, for all $1\leq k \leq n$, Cauchy's inequality gives us $\alpha_1^2(k)\leq k\alpha_2(k)$. Trivially one has $\alpha_2(k) \leq \alpha_2(n)$, so by Lemma \ref{lm:Msqrd} we have the bound \begin{equation}\label{Max*Ineqiality*for*Weighted*Martingale} E[M_k^2]\leq\bigg(1+\frac{k}{n-k}\bigg)\frac{\alpha_2(n) B}{n-1} \quad \quad \text{for } 1\leq k \leq n-1. \end{equation} Now, just from the definition \eqref{eq:WeightedMartingale2} of $M_k$, we can write $W_k$ as a difference between $M_k$ and $\alpha_1(k){S_k}/({n-k})$. We can then use the crude bound $(x+y)^2\leq 2x^2+2y^2$ and Cauchy's inequality to get \begin{align*} \max_{1\leq k\leq m}|W_k|^2& \leq \max_{1\leq k\leq m}\left(2|M_k|^2+2\alpha_1^2(k)\left|\frac{S_k}{n-k}\right|^2\right)\\ &\leq 2\max_{1\leq k\leq m}|M_k|^2+2\max_{1\leq k\leq m}\alpha_1^2(k)\left|\frac{S_k}{n-k}\right|^2\\ &\leq 2\max_{1\leq k\leq m}|M_k|^2+(2m) \alpha_2(n) \max_{1\leq k\leq m}\left|\frac{S_k}{n-k}\right|^2. \end{align*} Long ago, in \eqref{S_2*Expectation}, we calculated $ES_m^2$, so here we can apply the $L^2$ maximal inequality to both martingales $\{S_k/(n-k)\}$ and $\{M_k\}$ to get the bound \begin{equation}\label{Intermediate*W_k^2*Inequality} E\max_{1\leq k\leq m}W_k^2\leq 8\left(1+\frac{m}{n-m}+\frac{m^2}{n(n-m)}\right)\frac{\alpha_2(n)B}{n-1}. \end{equation} Such an inequality for $1\leq m< n$ suggests the possibility of folding. To pursue this we first note \begin{align} \max_{1\leq k\leq n} W_k^2&= \max\left[\max_{1\leq k\leq m}W_k^2,\, \, \max_{m< k\leq n}W_k^2\right] \notag\\ &= \max\left[\max_{1\leq k\leq m}W_k^2,\, \, \max_{m< k\leq n}|W_m+a_{m+1}X_{m+1}+\cdots+a_kX_k|^2\right] \notag\\ &\leq \max\left[\max_{1\leq k\leq m}W_k^2,\,\, \, \, 2W_m^2+2\max_{m< k\leq n}|a_{m+1}X_{m+1}+\cdots+a_kX_k|^2\right]\notag\\ &\leq 2\left[\max_{1\leq k\leq m} W_k^2+\,\, \max_{m< k\leq n}|a_{m+1}X_{m+1}+\cdots+a_kX_k|^2\right].\label{eq:both} \end{align} By exchangeability, one expects that the second maximum has a bound like the one derived in (\ref{Intermediate*W_k^2*Inequality}). To make this explicit, one simply needs to replace $m$ by $n-m$ in the upper bound of (\ref{Intermediate*W_k^2*Inequality}). Doing so gives us the sister bound \begin{equation*} E\max_{m< k\leq n}|a_{m+1}X_{m+1}+\cdots+a_kX_k|^2\leq8\left(1+\frac{(n-m)}{m}+\frac{(n-m)^2}{nm}\right)\frac{\alpha_2(n)B}{n-1}.
\end{equation*} By our bounds on the two addends of \eqref{eq:both}, we then have $$ E\max_{1\leq k\leq n}W_k^2\leq 16\left(2+ \frac{m}{n-m}+\frac{n-m}{m}+\frac{m^2}{n(n-m)}+\frac{(n-m)^2}{nm}\right)\frac{\alpha_2(n)B}{n-1}. $$ It only remains to take $m= \lfloor n/2 \rfloor$ and to attend honestly to the consequences. If $n$ is even, the constant that multiplies $\alpha_2(n)B/(n-1)$ is exactly $80$. For odd $n$, the constant approaches $80$ from above as $n$ increases to infinity, and for $n=81$ the constant is $80+4/205$. Furthermore, Cauchy's inequality gives us $$ W_k^2 \leq \alpha_2(k) B \leq \alpha_2(n) B \quad \text{for all } 1\leq k \leq n, $$ and we have $\alpha_2(n) B\leq (80+4/205)\alpha_2(n)B/(n-1)$ for all $n\leq 81$. So, in the end, we come to \eqref{eq:permWweights}, our permutation maximal inequality with general weights. \end{proof}
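For small $n$ the inequality \eqref{eq:permWweights} is again easy to confirm by exhaustive enumeration; the following sketch (illustrative only, with arbitrary weights and an arbitrary centered data set) does exactly that.
\begin{verbatim}
import itertools

a = [1.0, -2.0, 0.5, 3.0, -1.0, 2.0]    # arbitrary fixed weights
x = [2.0, -1.0, 1.5, -2.0, -0.5, 0.0]   # centered: sum(x) == 0
n = len(x)
B = sum(v * v for v in x)
A2 = sum(w * w for w in a)

total, count = 0.0, 0
for p in itertools.permutations(x):
    s, best = 0.0, 0.0
    for w, v in zip(a, p):
        s += w * v
        best = max(best, s * s)
    total += best
    count += 1

lhs = total / count
rhs = (80 + 4 / 205) * A2 * B / (n - 1)
print(lhs, "<=", rhs, ":", lhs <= rhs)
\end{verbatim}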
We would greet this result with some fanfare except that Garsia (1970, 3.7.20, p.~92) gives this bound with the constant $80$. Still, one needs to keep in mind that we have pursued this derivation only to illustrate the usefulness of the martingales that are given by our general linear algebraic construction. Perhaps it is victory enough to come so close to a long-standing result that was originally obtained by a delicate, problem-specific argument. Compared to Garsia's argument, the proof of \eqref{eq:permWweights} is straightforward. It is also reasonably robust and potentially capable of further development, even though there seems to be no room to improve the constant. In spirit the proof is close to the elegant argument of Stout (1974, pp. 145--148) for his version of the easier unweighted inequality (Proposition~\ref{Garsia*Inequality}). In each instance, the heart of the matter is the application of the maximal $L^2$ inequality to some martingale. Here we have the benefit of ready access to the martingales \eqref{eq:MartingalsM2M3} and \eqref{eq:WeightedMartingale} that were served up to us by our general construction. \section{Observations and Connections} Our focus here is on methodology, and our primary aim has been to demonstrate the usefulness of a linear algebraic method for constructing martingales from the basic materials of sampling without replacement. Through our examples we hope to have shown that the martingales given by our general construction have honest bite. In particular, these martingales yield reasonably direct proofs of a variety of permutation inequalities --- both new ones and old ones. Among our new inequalities, the simple Hardy-type inequality \eqref{eq:SerflingInequality} seems particularly attractive. If we had to isolate a single open problem for attention, then our choice would be to determine if the constant of inequality \eqref{eq:SerflingInequality} is best possible. This problem seems feasible, but one will not get any help from the easy arguments that show that the corresponding constant in Hardy's inequality is best possible. The other new inequality that seems noteworthy is the Garsia-type inequality of Proposition~\ref{Garsia*alternating*sums*proposition*two}, where we introduce $V_n(a)$, the measure of multiplier cancelations. This inequality may be long-winded, but it isolates a common situation where one can do substantially better than the classic Garsia bound \eqref{eq:permWweights}. The quadratic permutation inequality (Proposition \ref{our*martingale}) and the discrete bridge inequality \eqref{Now*Done} are more specialized, and they may have a hard time finding regular employment. Still, they are perfect for the right job, and they also illustrate the diversity of the martingales that come from the general construction. We have developed several results in the theory of permutation inequalities to test the effectiveness of the permutation martingales given by our construction, but permutation inequalities have a charm of their own, and one could always hope to do more. We have already mentioned the remarkable maximal inequalities of Chobanyan (1994) and Chobanyan and Salehi (2001) that exploit combinatorial mapping arguments in addition to martingale arguments. It would be interesting to see if our new martingales could help more in that context. Finally, we did not touch on the important weak-type (or Levy-type) permutation inequalities such as those studied in Pruss (1998) and Levental (2001), but it seems reasonable to expect that the martingales \eqref{eq:MartingalsM2M3} and \eqref{eq:WeightedMartingale} could also be useful in the theory of weak-type inequalities.
\section{Introduction} Dispersal of infectious pathogens such as viruses through airborne sputum droplets can lead to the rapid transmission of diseases in the general population, posing a great risk to public health. There are several minor and major infectious diseases that are transmitted through virus-carrying sputum droplets. A benign example of airborne disease is the common cold, while more severe examples include H1N1 influenza, severe acute respiratory syndrome (SARS), Middle East respiratory syndrome (MERS), and coronavirus disease 2019 (COVID-19)\cite{wei16,gral11}. Since its outbreak in 2019, COVID-19 has transformed into the most destructive pandemic in over a century. The evidence on COVID-19 so far suggests that the possible modes of transmission of SARS-CoV-2 include respiratory droplets and aerosols, direct person-to-person contact and contact with surfaces (fomite mode of transmission). Direct person-to-person contact transmission and fomite transmission can be controlled by appropriate hygiene practices. However, controlling airborne transmission is far more challenging. Therefore, airborne transmission is likely the primary reason for turning COVID-19 into a global pandemic \cite{morawska2020,asadi2020}. The virus-carrying sputum droplets are generated not only during violent expiratory events like coughing and sneezing, but also during routine respiratory activities like speaking, singing, and breathing. This, coupled with the fact that SARS-CoV-2 is transmitted during the presymptomatic and asymptomatic phases of COVID-19\cite{ferretti2020}, increases the infectiousness and the rapid spread of SARS-CoV-2. As part of developing effective strategies to mitigate the transmission of COVID-19 or any airborne disease in the future, there is a need for quantification of the risk of infection under various social conditions. The well-mixed room approximation is one of the approaches that has been used by researchers for the estimation of infection risk. By treating the concentration of carbon dioxide in exhaled air as a proxy for virion concentration in the ambient environment, the Wells-Riley model has been used for risk estimation in an indoor environment based on room-scale averages\cite{rudnick2003,issarow2015}. Recent works on this quantification of COVID-19 have also used a similar approach and applied it to indoor spaces such as bathrooms, laboratories and offices, wherein the viral shedding of occupants was taken as the source of virions in room-average analyses \cite{smith20,augenbraun20,kolinski21}. These approaches provide a measure of the average infection risk for the whole environment under consideration rather than the local risk of infection for each occupant. Yang et al.\cite{yang20} developed a framework for risk estimation between individuals during casual conversation. The method involves the estimation of viral concentration as a function of space and time by using passive scalars as a proxy for aerosols ejected during speech. The risk of infection is then estimated by evaluating the viral particles that are likely to be within the breathing zone of a susceptible individual. A similar method has been adopted by Singhal et al. for risk estimation during face-to-face conversations\cite{singhal2021}. These methods assume that the aerosols are well-mixed with exhaled air close to the source of aerosol ejection. However, the well-mixed aerosol assumption is reasonable for high aerosol concentration but not when the aerosol concentration is low.
Moreover, the approach cannot account for the possible local variations in aerosol concentration due to changes in environmental factors like temperature and humidity. The aerosolization of small and medium-sized droplets is influenced by the humidity and temperature of the ambient environment. Direct measurement of aerosols and droplets that are likely to be within the breathing zone of a susceptible individual can provide a better estimate of infection risk. The focus of the present work is the estimation of infection risk by direct measurement of droplets and aerosols in the breathing zone of susceptible individuals using numerical simulation of droplet dispersion. To that end, we extend the dose-response model for droplet dispersion simulations for expiratory events like coughing, sneezing, speaking, etc. Using numerical simulations of droplet dispersion, the infection risk during face-to-face conversation is investigated in this work. Furthermore, we also investigate the effect of humidity on the infection probability during casual conversations. \section{Estimation of Infection Probability} The risk of infection due to inhalation of viruses is commonly quantified using the dose-response model\cite{wells1955,watanabe2010,watanabe2012}. The model assumes that the number of viral particles needed, on average, to infect an individual is $ N_0 $. Assuming that the infection is a Poisson process, the probability of infection can be written as \begin{equation} P = 1 - e^{(-\frac{N}{ N_0})}, \label{eqn:Prisk} \end{equation} where $ N $ is the total number of virions inhaled. In order to compute the probability of infection $ P $, both $ N $ and $ N_0 $ have to be estimated. A range of values has been reported in the literature for $ N_0 $. Prentiss et al. have estimated a range of 322--2012 based on superspreading events at the early stage of the COVID-19 pandemic \cite{prentiss20}; other similar studies provide estimates of the value that vary between 100 and 1000 \cite{kolinski21,augenbraun20}. In this study, we choose a value of 900, which lies within the range of values reported in the literature. The number of virions inhaled depends on the total duration of exposure, $ T $, the amount of air inhaled by a person breathing at the rate $ B $, and the local concentration of virions, $ C(t) $, in the breathing zone of the person. Therefore, the relationship of $ N $ with $ T $, $ B $ and $ C $ can be expressed as \begin{equation} N = B \int_{0}^{T}C(t)dt. \label{eqn:N-v1} \end{equation} While the breathing rate is relatively constant for a given physical activity, the local concentration is transient and can vary with time. Depending on the type of physical activity, the breathing rate can vary from moderate values ($ 0.45 $ m$ ^3 $/hr) to values as high as $ 3.3 $ m$ ^3 $/hr \cite{binazzi06,united1989,shimer1995}. The breathing rates for sedentary activities like quiet breathing, speaking at normal voice and singing are in a similar range: $ 0.54\pm0.21 $ m$ ^3 $/hr, $ 0.54\pm0.21 $ m$ ^3 $/hr and $ 0.61\pm0.4 $ m$ ^3 $/hr, respectively\cite{binazzi06}. For moderate physical activities like cycling and climbing stairs the breathing rate varies between $ 1.3-1.5 $ m$ ^3 $/hr, and for heavy activities like climbing with a load and cross-country skiing, the range is $ 2.5-3.3 $ m$ ^3 $/hr \cite{united1989}. For the present study, we choose a value of $ B=0.5 $ m$ ^3 $/hr, which assumes that the subject at risk of infection is not involved in any strenuous activity.
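To make the bookkeeping of Eqs.~\ref{eqn:Prisk} and \ref{eqn:N-v1} concrete, the short Python sketch below (an added illustration; the constant concentration is a hypothetical stand-in for $C(t)$, while $N_0$ and $B$ are the values quoted above) evaluates the infection probability for a constant-concentration exposure.
\begin{verbatim}
import math

N0 = 900.0     # infectious dose (virions), within the reported range
B = 0.5        # breathing rate (m^3/hr), sedentary activity
C = 2000.0     # hypothetical constant virion concentration (virions/m^3)
T = 0.25       # exposure duration (hr), i.e. 15 minutes

N = B * C * T                  # Eq. (N-v1) with C(t) constant
P = 1.0 - math.exp(-N / N0)    # Eq. (Prisk)
print(f"inhaled dose N = {N:.1f} virions, P = {P:.3f}")
\end{verbatim}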
With the value of $ B $ known, the estimation of $ N $ depends on the estimation of $ C(t) $. There have been two main approaches to estimating $ C(t) $ in the literature. The first approach is based on a well-mixed room-averaged approximation\cite{prentiss20,kolinski21}. In this approach, the virions emitted by an infected person are assumed to be well-mixed due to air circulation and mixing within the domain under consideration. The change in virion concentration in the room is tracked over time, but the concentration is assumed to be uniform throughout the domain under investigation. An alternate approach proposed by Yang et al.\cite{yang20} involves estimation of $ C(t) $ over time and space. In this method, assuming that the aerosol-laden air is well-mixed at the point of ejection during expiratory events such as speaking, singing, coughing, etc., a passive scalar is used to model the transport of aerosols in direct numerical simulations or large eddy simulations. Taking the passive scalar concentration as a proxy for virion concentration, an expression for $ C(t) $ is obtained. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{figs/breathing-zone-schematic.pdf} \caption{A schematic of droplet dispersion (A) during a face-to-face conversation between two subjects, (B) between a group of people seated at a table. The region highlighted by a blue box around the mouth and nose of a susceptible subject, termed the breathing zone, is used to track droplets that are likely to be inhaled.} \label{fig:breathing-zone-schematic} \end{figure} In contrast to the two methods available in the literature, in this work we develop a new model to estimate virion inhalation as a function of space and time in droplet dispersion simulations. As opposed to the well-mixed approximation of aerosol-laden air in the passive scalar approach, in simulations modeling direct droplet dispersion, the dynamics of aerosol/droplet transport and evaporation are considered. In contrast to aerosol transport, the dynamics of transport and mixing of droplets can be different due to the larger inertia of the droplets. Aerosolization of small and medium droplets, depending on the ambient temperature and humidity, can significantly alter the local aerosol and virion concentration. Therefore, the results of droplet dispersion simulations can be qualitatively different from those of the passive scalar approach. For the evaluation of infection probability in droplet dispersion simulations, we estimate $ N $ by tracking the number of droplets in a spatial region defined such that the air within this region is likely to be inhaled by a susceptible subject. Henceforth, we shall refer to this region as the breathing zone. A schematic of the breathing zone is shown in Fig.~\ref{fig:breathing-zone-schematic}. For simplicity, we have chosen a small rectangular region around the mouth and nose to represent the breathing zone. The shape of the region could in principle be arbitrary. Examples of how the breathing zone is defined in two scenarios are shown in Fig.~\ref{fig:breathing-zone-schematic}. The first scenario is a face-to-face conversation between two persons. The second is a conversation between a group of people seated at a table. In all such situations, one or more infected subjects are modeled as the source of droplets/aerosols while the remaining subjects are treated as being susceptible to infection. A separate breathing zone is defined for each subject who might be susceptible to inhaling the virus-laden droplets and getting infected.
A separate infection probability for each subject would be evaluated by tracking the droplets in the respective breathing zones. We choose a $ 10\times10\times15 $ cm rectangular region to model the breathing zone for the present study. The longer side of the box is along the nose-ground direction. With regards to the placement of the breathing zone, the centre of one of the edges of the top face of the box is placed at the bottom edge of the nose and an adjacent face is placed in contact with the outermost surface of the mouth of a subject. Lastly, the breathing zone placement is such that its entire volume is outside the subject's interior. The number of virions the subject is likely to inhale depends on the local concentration of virions in the breathing zone. The local concentration can be written as \begin{equation}\label{key} C(t) = \frac{n(t)}{\upsilon_{\text{B}}}, \end{equation} where $ n(t) $ is the instantaneous count of virions at the time instant $ t $ in the breathing zone and $ \upsilon_{\text{B}} $ is the volume of the breathing zone. The instantaneous virion count $ n(t) $ is the product of the total ejection volume of droplets and aerosols, $ \upsilon_{d}^{0}(t) $, in the breathing zone and the viral load or viral density $ \lambda_{v} $ (copies/m$ ^3 $). The superscript $ ^0 $ in $ \upsilon_{d}^{0}(t) $ is used to indicate that the volume of the droplets in question is the volume at the time of ejection from the mouth of the infected subject. Even as a droplet evaporates, its instantaneous volume, $ \upsilon_{d} $, decreases but its ejection volume remains unchanged. Therefore, the total virion count of a given droplet remains constant as the droplet evaporates and aerosolizes. The average viral load of SARS-CoV-2 in the sputum is $ 7\times 10^6 $ copies/ml and the maximum reported value is $ 2\times 10^9 $ copies/ml\cite{wolfel20}. For the analysis in the present work we choose the value $ 7\times 10^6 $ copies/ml. The local virion concentration can now be expressed as a function of viral load and droplet volume \begin{equation} C(t) = \frac{\upsilon_{d}^{0}(t)\lambda_{v}}{\upsilon_{\text{B}}}. \end{equation} Substituting the above expression into Eq.~\ref{eqn:N-v1} we obtain the following expression for $ N $ \begin{equation} N = \frac{B\lambda_{v}}{\upsilon_{\text{B}}} \int_{0}^{T} \upsilon_{d}^{0}(t)dt. \label{eqn:N-v3} \end{equation} In activities like speaking and singing over prolonged periods of time, the rates of droplet generation and dispersion are expected to reach a quasi-steady state. The ejected droplet volume that enters the breathing zone would then reach a steady-state value $ \overline{\upsilon}_{d}^{0} $. Therefore, under steady-state conditions the above equation can be simplified to \begin{equation} N = \frac{B\lambda_{v} \overline{\upsilon}_{d}^{0} T}{\upsilon_{\text{B}}} . \label{eqn:N-steady-state} \end{equation} This equation is applicable to quasi-steady processes like singing and speaking; it is not applicable to transient situations like sneezing and coughing, for which we must resort to Eq.~\ref{eqn:N-v3}.
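Under the quasi-steady assumption, Eq.~\ref{eqn:N-steady-state} combined with Eq.~\ref{eqn:Prisk} gives the infection probability in closed form. The sketch below is illustrative only: the steady-state ejection volume $\overline{\upsilon}_{d}^{0}$ is a hypothetical placeholder, since in practice it is measured from the droplet dispersion simulation.
\begin{verbatim}
import math

N0 = 900.0                # infectious dose (virions)
B = 0.5 / 3600.0          # breathing rate: 0.5 m^3/hr in m^3/s
lam = 7.0e6 * 1.0e6       # viral load: 7e6 copies/ml = 7e12 copies/m^3
vB = 0.10 * 0.10 * 0.15   # breathing-zone volume (m^3)
vd0 = 1.0e-13             # hypothetical steady-state droplet volume (m^3)
T = 15.0 * 60.0           # exposure time (s)

N = B * lam * vd0 * T / vB     # Eq. (N-steady-state)
P = 1.0 - math.exp(-N / N0)    # Eq. (Prisk)
print(f"N = {N:.1f} virions, P = {P:.3f}")
\end{verbatim}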
\noindent\textbf{Risk after vaccination and due to variant strains}: Most of the major vaccines for COVID-19 have reported high efficacy in preventing infection and very high efficacy in preventing severe disease and hospitalization. For example, vaccines by Pfizer, Moderna and AstraZeneca have been reported to have vaccine efficacies of $ 95\% $\cite{polack2020}, $ 94.1\% $\cite{baden2021} and $ 81.5\% $\cite{emary2021}, respectively. From the viewpoint of evaluating the infection probability of a vaccinated subject, the effect of a vaccine may be interpreted as an increase in the minimum number of virions needed to infect a person. This assumption implies that for exposure to small doses of virions, the infection probability will be very low or negligible. However, even a vaccinated person, if exposed to very high doses of virions, may be at risk of infection. As SARS-CoV-2 has spread through populations, it has mutated into several variants\cite{cdc-variants}. It has been reported that the transmissibility of some of the variants of SARS-CoV-2 is higher than that of the original strain \cite{campbell2021,davies2021}. For example, the B.1.1.7 strain (alpha variant) has been reported to be $ 29\% $ more transmissible than the original strain\cite{campbell2021} and the B.1.617.2 strain (delta variant) has been estimated to be 43 to 90$ \% $ more transmissible than the B.1.1.7 strain\cite{davies2021}. Within the framework of the infection risk model considered in this work, the higher transmissibility of a given variant could be interpreted as due to two reasons. First, the viral load of the variant strains could be higher. Second, the minimum virion dose needed for infection, $ N_0 $, could be lower for a variant. The higher transmissibility could be due to one or a combination of these two factors. Under this assumption, the effect of vaccines and variant strains on the probability of infection can be incorporated into the risk model given by Eq.~\ref{eqn:Prisk} as follows: \begin{equation} P = 1 - e^{(-\alpha\frac{N}{ N_0})}, \label{eqn:Prisk-vaccine} \end{equation} where $ \alpha $ is a factor that accounts for the higher transmissibility of variant strains and the lower risk of infection for vaccinated individuals; with $ \alpha = 1 $ the above equation falls back to the original form in Eq.~\ref{eqn:Prisk}. For the case where a person is vaccinated with a vaccine of efficacy $ \eta_{vc} $, it can be shown that $ \alpha = 1-\eta_{vc} $. The details of the derivation of $ \alpha $ can be found in the \textit{Appendix} section. If the efficacy of a vaccine is $ 100\% $ then $ P=0 $, while for a vaccine efficacy of $0$, Eq.~\ref{eqn:Prisk-vaccine} returns to the original form of $ P $ in Eq.~\ref{eqn:Prisk}. For a variant strain with a higher transmissibility factor $ \tau $, we can write $ \alpha = \tau $. For the alpha strain, the transmissibility factor is $ \tau = 1.29 $\cite{campbell2021}, and for the delta strain it is $ \tau = 2.45 $\cite{davies2021} (assuming 90$ \% $ higher transmissibility compared to the alpha variant). For the original strain the transmissibility factor is $ \tau = 1 $, in which case Eq.~\ref{eqn:Prisk-vaccine} again reduces to Eq.~\ref{eqn:Prisk}.
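A small extension of the previous sketch applies Eq.~\ref{eqn:Prisk-vaccine} to the variant and vaccination cases discussed above. One assumption is made explicit here: when a vaccinated person is exposed to a variant, the two factors are combined multiplicatively, which the text does not state; the dose $N$ is again a hypothetical value.
\begin{verbatim}
import math

def infection_probability(N, N0=900.0, tau=1.0, eta_vc=0.0):
    # Eq. (Prisk-vaccine): P = 1 - exp(-alpha*N/N0), where
    # alpha = tau for a variant and alpha = 1 - eta_vc after vaccination
    alpha = tau * (1.0 - eta_vc)   # assumed multiplicative combination
    return 1.0 - math.exp(-alpha * N / N0)

N = 250.0   # hypothetical inhaled dose (virions)
print("original strain   :", infection_probability(N))
print("alpha variant     :", infection_probability(N, tau=1.29))
print("delta variant     :", infection_probability(N, tau=2.45))
print("vaccinated (95%)  :", infection_probability(N, eta_vc=0.95))
print("delta + vaccine   :", infection_probability(N, tau=2.45, eta_vc=0.95))
\end{verbatim}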
\section{Methods} \subsection{Governing Equations} The flow solver used in the present work for carrying out the numerical simulation of droplet dispersion is made of an Eulerian reference frame for solving the fluid flow and species transport equations and a Lagrangian frame for solving the droplet dynamics model. The equations of motion of mass, momentum, energy and species transport can be expressed in compact notation as \begin{equation} \frac{\partial\mathbf{U}}{\partial t} +\nabla\cdot \mathbf{F} = \mathbf{S}. \label{eq:ge} \end{equation} Here, $\mathbf{U}$, $\mathbf{F}$ and $\mathbf{S}$ represent the primitive flow variables, the combined convective and diffusive terms, and the source terms, respectively \cite{poinsot2005theoretical}. The primitive variables vector and the flux vector are expanded below. \begin{equation} \mathbf{U}=\left(\begin{array}{c} \rho \\ \rho u_{1} \\ \rho u_{2} \\ \rho u_{3} \\ \rho e \\ \rho Y_{k} \end{array}\right), \quad F_{i}=\left(\begin{array}{c} \rho u_{i} \\ \rho u_{i} u_{1}+P \delta_{i 1}-\mu A_{i 1} \\ \rho u_{i} u_{2}+P \delta_{i 2}-\mu A_{i 2} \\ \rho u_{i} u_{3}+P \delta_{i 3}-\mu A_{i 3} \\ (\rho e+P) u_{i}-\mu A_{i j} u_{j}+q_{i} \\ \rho u_{i} Y_{k}-\rho \hat{u}_{i}^{k} Y_{k} \end{array}\right) \label{eqn-UF} \end{equation} where the density and viscosity are represented by $ \rho $ and $ \mu $, respectively. $\mathbf{u}$, $e$ and $P$ are the velocity, the total specific energy and the pressure, respectively. The components of the velocity along the principal directions $1,2,3$ are given by $ ( u_1, u_2, u_3 ) $. The vapor phase of water from the liquid sputum is modeled as a passive scalar species along with O$ _2 $ and N$ _2 $. The mass fraction of the species indexed $ k $ is represented by $ Y_k $ and $\hat{u}^{k}_i$ is the corresponding diffusion velocity of the $ k^{\textrm{th}} $ species. $ \mathbf{q} = - \lambda \nabla T $ is the heat flux, where $T$ and $\lambda$ represent the temperature and thermal conductivity, respectively. The density and pressure are constrained together by the state equation $ P=\rho RT $, in which $ R $ is the gas constant and $ T $ is the temperature. The total energy per unit volume is given by \begin{equation} \rho e = \frac{P}{\gamma - 1} + \frac{1}{2} \rho u_i u_i, \end{equation} where $\gamma$ is the ratio of the gas specific heat capacities. The diffusion velocity $\hat{\V{u}}^k Y_k$ of the $k^{\textrm{th}}$ species is defined in terms of the species diffusivity $D_k$; the relationship is given by \begin{equation} \hat{\V{u}}^k Y_k = D_k \nabla Y_k. \end{equation} The contribution to the source comes from the buoyancy term and the weak two-way coupling between the droplet model and the flow equations. The source term vector is given by \begin{equation} \mathbf{S} = \begin{pmatrix} {0} \\ {(\rho-\rho_0)g_1} \\ {(\rho-\rho_0)g_2} \\ {(\rho-\rho_0)g_3} \\ {(\rho-\rho_0)g_i u_i} \\ {S_{\rho Y_{k}} } \end{pmatrix}, \end{equation} where $ \rho $ and $ \rho_0 $ are the local and the far-field ambient density, respectively, and $ \mathbf{g} $ is the acceleration due to gravity (e.g., $ \mathbf{g}=(0,0,-9.81) $ m/s$^2$). Of the species source terms $ S_{\rho Y_{k}} $, those for all species other than the droplet vapor are zero. \subsection{Droplet Model} The widely used single droplet model is adopted in this work for modeling the sputum droplet dynamics. The droplets are modeled as discrete Lagrangian entities which are coupled with the Eulerian fluid flow equations through a weak two-way coupling. The droplet transport and evaporation are influenced by the conditions of the ambient air, but the flow field is not affected by the droplets except through the source term for the vapor phase of the liquid droplets.
The transport of the droplets is modeled by \begin{equation} \begin{aligned} \frac{d \mathbf{x}_d}{d t} &= \mathbf{u}_d, \\ \frac{d \mathbf{u}_{d}}{d t} &=\frac{3 C_{D}}{4 d_{d}} \frac{\rho}{\rho_{d}}\left(\mathbf{u}-\mathbf{u}_{d}\right)\left|\mathbf{u}-\mathbf{u}_{d}\right|+\mathbf{g} , \label{eqn-dropletU} \end{aligned} \end{equation} where $ \mathbf{x}_d $ and $ \mathbf{u}_d $ are the position and velocity of an individual droplet, respectively, and $ d_d $ and $ \rho_d $ are the droplet diameter and the liquid density of the droplet, respectively. $ C_D $ is the drag coefficient, expressed as a function of the droplet Reynolds number $ Re_d $. The droplet evaporation, influenced by the ambient air's velocity, humidity and temperature, is modeled by \begin{equation} \begin{aligned} \frac{d T_{d}}{d t} &=\frac{N u}{3 P r} \frac{c_{p}}{c_{l}} \frac{f_{1}}{\tau_{d}}\left(T-T_{d}\right) + \frac{1}{m_{d}}\left(\frac{d m_{d}}{d t}\right) \frac{L_{V}}{c_{p, d}} \\ \frac{d m_{d}}{d t} &=-\frac{m_{d}}{\tau_{d}}\left(\frac{S h}{3 S c}\right) \ln \left(1+B_{M}\right) \end{aligned} \end{equation} The temperature $ T_d $ is updated by tracking the convective heat transfer with the ambient air and the evaporative heat loss. The mass rate of change of the droplets is influenced by the local vapour fraction and air velocity, which are encapsulated in the mass transfer number $ B_M $ and the Sherwood number $ Sh $. Here, $ m_d $ and $ L_V $ are the mass of the droplet and the latent heat of evaporation at the droplet temperature, respectively. $ c_p $ and $ c_l $ are the specific heat at constant pressure of the ambient air and the specific heat capacity of the liquid droplet, respectively, and $ \tau_d $ is the response time of the droplet. Further details of the various terms involved in the droplet model can be found in the work of Bale et al.\cite{bale20b,bale2021}.
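For intuition, a heavily simplified caricature of this droplet model can be integrated with semi-implicit Euler steps. The sketch below is an added illustration and not the solver's actual implementation: it assumes Stokes drag, quiescent ambient air, and a constant $d^2$-law evaporation rate, all of which are simplifying assumptions.
\begin{verbatim}
import math

rho_d = 1000.0    # droplet density (kg/m^3)
mu = 1.8e-5       # air dynamic viscosity (kg/m/s)
g = -9.81         # gravity (m/s^2)
K = 1.0e-9        # assumed d^2-law evaporation constant (m^2/s)

d = 20e-6         # initial diameter (m)
z, w = 1.6, 0.0   # initial height (m) and vertical velocity (m/s)
t, dt = 0.0, 1e-4
while z > 0.0 and d > 1e-6:
    tau_d = rho_d * d * d / (18.0 * mu)      # Stokes response time
    w = (w + dt * g) / (1.0 + dt / tau_d)    # semi-implicit drag step
    z += dt * w
    d = math.sqrt(max(d * d - K * dt, 0.0))  # d^2-law shrinkage
    t += dt
print(f"t = {t:.2f} s: height z = {z:.3f} m, diameter = {d*1e6:.1f} um")
\end{verbatim}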
\subsection{Solver framework and simulation environment} A multi-physics solver known as CUBE\cite{jans18,nishiguchi19} has been used for all the numerical simulations presented in this work. CUBE is a finite volume solver based on a hierarchical meshing framework known as the building cube method (BCM)\cite{naka03}. The meshing framework allows local mesh refinement, enabling high resolution in regions of interest while limiting the overall cell count. The supercomputer Fugaku has been used for carrying out the numerical simulations presented in this work. Fugaku comprises 158,976 nodes. Each node is equipped with a Fujitsu A64FX processor, which consists of 48 compute cores and 4 additional cores, and a memory of 32 GiB. The nodes are interconnected with a 28 Gbps, 2-lane, 10-port TofuD interconnect. \subsection{Droplet modeling parameters} Numerical simulation of droplet dispersion requires three main input parameters: a) the distribution of the droplet diameter and a count of the droplet number ejected from the mouth, b) the flow profile of the expiratory event in consideration, such as speaking or coughing, and c) the average area of the mouth opening. Data on the droplet size distribution and the number of droplets for speech as well as cough have been widely reported in the literature\cite{loudon67,chao09,asadi2019,xie09}. The distributions and the droplet numbers reported for speaking in these studies vary significantly. The droplet concentrations ($ \# $/L), a proxy for droplet number, for speaking reported by Duguid, Loudon and Roberts, and Chao et al. are 3.72, 223.25 and 150.8\cite{duguid46,loudon67,chao09}, respectively. The reported droplet size distributions are just as disparate as the data on droplet number. The diameter corresponding to the highest droplet count reported by Loudon and Roberts was 6 $ \mu $m; in the studies of Duguid and Chao et al. this value is about 12 $ \mu $m, while Xie et al.\cite{xie09} report a value as high as 50 $ \mu $m. As there is so much variation in the reported droplet numbers and size distributions, in this work we adopt a combination of the distributions of Duguid\cite{duguid46} and Xie et al.\cite{xie09}. The distribution of droplet diameter adopted in this work is shown in Fig.~\ref{fig:droplet-dist-flow}. For the flow generated from the mouth during speech, we adopt a sinusoidal model. The flow rate generated when counting from 1 to 10 is modeled as $ \dot{q}= A_i \sin^2(\pi t/T_i) $, where $ A_i $ is the amplitude and $ T_i $ is the period of the $ i^{th}$ utterance. After the word `five', the flow direction is reversed to model an inhalation balancing the volume of air exhaled while counting from `one' to `five'. A similar inhalation is modeled after the word `ten'. The velocity of the flow over an area of 6 cm$^2 $ over one cycle of counting from 1 to 10, including the two inhalations, is presented in Fig.~\ref{fig:droplet-dist-flow}. The period and amplitude of each utterance and the two inhalation phases of the speech flow can be deduced from the figure. The final parameter needed to complete the boundary condition for droplet ejection is the area of the mouth opening. To the best of the authors' knowledge, data on the area of the mouth opening during speech are not available in the literature. An average mouth opening size of 4 cm$^2 $ was reported by Gupta et al. for coughing. Assuming that the mouth opening during speech is on average larger than the opening during cough, we choose a circular surface that is 6 cm$ ^2 $ in area to model the mouth opening during speech. The droplets are injected at the circular mouth model into the domain at time instants that match the peak of the velocity of each utterance. \begin{figure}[!t] \centering \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/Droplet_number_loud.pdf} } \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/Speak_Breathe.pdf} } \caption{(a) The distribution of droplet diameter used to model the droplet ejection during speech. (b) The velocity profile of the speaking flow over one cycle of speech, which involves counting from 1 to 10 with two inhalation phases that balance the volume of air exhaled.} \label{fig:droplet-dist-flow} \end{figure}
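The speech flow model is straightforward to reproduce; the sketch below is illustrative only, since the per-utterance amplitudes $A_i$ and periods $T_i$ used here are hypothetical placeholders for the values that are read off Fig.~\ref{fig:droplet-dist-flow}.
\begin{verbatim}
import math

# hypothetical (amplitude m/s, period s) pairs: 'one'..'five',
# an inhalation, then 'six'..'ten' and a second inhalation
segments = ([(1.2, 0.4)] * 5 + [(-1.5, 1.6)]) * 2
mouth_area = 6.0e-4          # mouth opening: 6 cm^2 in m^2

def velocity(t):
    # piecewise model u(t) = A_i * sin^2(pi * t / T_i) over one cycle
    for A, T in segments:
        if t < T:
            return A * math.sin(math.pi * t / T) ** 2
        t -= T
    return 0.0

# the exhaled volume integrates to zero over a full cycle
# (the time average of sin^2 over one period is 1/2)
net = sum(A * T / 2.0 for A, T in segments) * mouth_area
print(f"u(0.2 s) = {velocity(0.2):.2f} m/s, net volume = {net:.1e} m^3")
\end{verbatim}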
\subsection{Simulation setup} The simulation setup involves a human model standing in an upright position, similar to the infected subject shown in Fig.~\ref{fig:mesh-geom}. The numerical mesh employed in this work is also shown in the figure. A mesh spacing of $ 4 $ mm is allocated to the region immediately in front of the mouth, which is the source of the speaking flow and droplets, approximately up to a distance of 1 m. The numerical dissipation of the Roe scheme\cite{roe81} used for the convective fluxes in our solver enables us to carry out implicit large eddy simulations (ILES)\cite{grinstein2007}. The human model is treated with the immersed boundary condition\cite{li16} to impose no-slip and isothermal boundary conditions. The temperature of the human body surface is set to 300 K to include the effects of buoyancy-driven flow by the human model, although the effect is not expected to be significant. The outer boundaries of the computational domain are treated with the slip boundary condition. The initial conditions for the simulation were set to the STP conditions: the temperature, pressure and relative humidity were set to 297 K, $101.3\times10^{3}$ Pa and 50$ \% $, respectively. The circular mouth geometry for modeling the speaking flow is placed 1 cm in front of the mouth of the human model. The flow generated by the speaking model is imposed on the circular mouth geometry. The relative humidity of the flow emanating from the mouth geometry is set to 90$ \% $. The droplets generated from speaking are injected into the computational domain at random locations on the circular mouth geometry. The initial velocity of the droplets is set to 0; the droplets are to be driven by the flow from the time of injection into the domain. The initial temperature of the droplets is chosen to be 308 K, matching the temperature of the interior of the human mouth, and the density of the droplets is set to 1000 kg/m$^3$. \begin{figure}[!t] \centering \includegraphics[width=0.24\textwidth]{figs/mesh_geom.pdf} \caption{Blocks of the numerical mesh employed in the present study. Subdivision of the blocks into $ 16 $ equal parts along each direction produces the cells.} \label{fig:mesh-geom} \end{figure} \begin{figure}[!tb] \centering \includegraphics[width=0.5\textwidth]{figs/Single_Drop_evap_validation.pdf} \caption{Comparison of droplet diameter as it evaporates with experimental data of Ranz and Marshall\cite{ranz52a,ranz52b}.} \label{fig:validation} \end{figure} \begin{figure}[!tb] \centering \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/RH50_T12s.pdf} } \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/RH50_T25s.pdf} } \caption{Dispersion of droplets during continuous speech at (a) $ t=12 $ s and (b) $ t = 24 $ s.} \label{fig:droplet-viz} \end{figure} \section{Results} \subsection{Validation} A numerical simulation of the evaporation of a single isolated droplet was carried out to validate the droplet model. The experiment of Ranz and Marshall\cite{ranz52a,ranz52b}, in which the evaporation dynamics of a motionless droplet was measured, is used to validate our numerical simulation. The numerical setup mimicked the experimental setup wherein a motionless droplet of initial diameter $ d_d = 1050\;\mu $m is placed in an environment where the relative humidity and the temperature of the surrounding air were $ RH=0\% $ and $ T= 298$ K, respectively. The initial temperature of the droplet was $ T_d=282 $ K. After the exposure of the droplet to the surrounding environment, the evolution of the droplet's diameter as it evaporates is tracked and compared with the experimental data of Ranz and Marshall. The comparison is plotted in Fig.~\ref{fig:validation}, where it can be seen that there is excellent agreement between the simulation results and the experimental data. \begin{figure}[!tb] \centering \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/P_v_dist.pdf}} \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/P_v_time.pdf}} \caption{(a) The probability of infection due to exposure for 15 min plotted against distance during face-to-face conversation.
(b) Evolution of $ P $ over time at distances of 0.5, 1.0 and 2.0 m from the infection source.} \label{fig:RH50-PvsDist} \end{figure} \subsection{Infection risk during speech} The infection risk model presented in this work is adopted to investigate the probability of infection of a susceptible subject who is in a face-to-face conversation with an infected individual. For this we carried out a numerical simulation of the dispersion of droplets ejected by an infected subject in a standing pose, assumed to be continuously speaking for the duration of the simulation. In the simulation, we model only the geometry of the infected subject; the geometry of the susceptible subject is not modeled. The risk of infection of a virtual subject, whose dimensions are assumed to be identical to those of the infected subject, is evaluated at varying distances from the infected subject and also as a function of time. The numerical simulation is carried out in a computational domain measuring $ 32\times 32\times 16 $ m$^3$ along the x, y and z axes, respectively. The human geometry is placed with the base of its feet at the centre of the computational domain along the x and y axes and at the bottom along the z-axis. The details of the boundary and initial conditions of the setup can be found in the \textit{Methods} section. The details of the speaking model employed in this work are provided in the section \textit{Droplet modeling parameters}. One cycle of the speaking model involves counting from 1 to 10 with two inhalation phases after the words `five' and `ten', respectively, for mass balance (see Fig.~\ref{fig:droplet-dist-flow}). The speaking is modeled for the duration of the simulation by indefinitely looping the cycle of the speaking model. A visualization of instantaneous states of the dispersion of droplets at two time instants is presented in Fig.~\ref{fig:droplet-viz}. The size of the droplets is indicated by the coloring scheme of the droplets. The largest droplets are colored red and the smallest blue. It is evident that many of the droplets larger than 20 $ \mu$m quickly settle on the ground under the influence of gravity. As the initial condition for droplet velocity is 0, the contribution of gravity dominates the velocity of the larger droplets. Therefore, the horizontal distance traversed by larger droplets is not significant when compared to smaller droplets. The influence of gravity on droplets smaller than 10 $ \mu $m is negligible because of aerosolization due to very short evaporation timescales; consequently, the flow-induced drag forces dominate the velocities of the small droplets and aerosols. As a consequence, the smaller droplets and a small number of medium-sized droplets remain airborne and are carried by the flow generated by the speech. The dispersion of the aerosolized droplets in the horizontal direction over two instants of time is shown in Fig.~\ref{fig:droplet-viz}. We next move on to the investigation of the infection probability of a virtual subject placed at different distances in front of the infected subject. The probability of infection at different distances in front of the infected subject for an exposure duration of $ T=15$ min is plotted in Fig.~\ref{fig:RH50-PvsDist}a. As for the infectious dose $ N_0 $, we have chosen a value of 900, which lies within the range of values reported in the literature\cite{prentiss20,kolinski21,augenbraun20}. The variation of infection probability over distance exhibits a decaying profile.
At distances less than 0.5 m the probability of infection is greater than $ 70\% $, which rapidly decays to less than $ 20\% $ as the distance is increased to 2 m. It can be noted from Fig.~\ref{fig:droplet-viz} that the droplet concentration rapidly decays due to its dispersion in the vertical (z-axis) and in-plane direction (x-axis) as the droplets are advected away from the infection source, thereby lowering the infection probability with distance. The shaded region in the figure depicts the change in the infection risk if the infectious dose is changed from 300 to 2000. The infection risk at a given distance is lower for larger values of $ N_0 $ and vice versa. It is interesting to note that the shaded region narrows as the distance from the infected person increases from 0.25 to 2 m. As the distance from the infection source increases, due to the dispersion of droplets, the virion concentration in the inhalation zone decreases, which in turn decreases the number of inhaled virions. When the inhaled virion count is small enough, the magnitude of the infectious dose $ N_0 $ becomes less important, resulting in the narrowing of the shaded region. The variation of infection probability over time at distances $ D= (0.5, 1.0, 2.0) $ m is plotted in Fig.~\ref{fig:RH50-PvsDist}b. For the large virion concentrations at closer distances like $ D=0.5 $ m, $ P $ rapidly increases and saturates at the maximum value (1). On the other hand, at a farther distance, due to the lower virion concentration, the rate of change of $ P $ is more gradual. At any given instant of time, say 10, 20, 30 min, etc., the plot provides the relative risk of maintaining different distances from a likely infected person. Focusing on the horizontal grid line corresponding to $ P=0.2 $, it can be seen that the exposure time required for a 20$ \% $ probability of infection is approximately 3, 10 and 21 min for distances of 0.5, 1 and 2 m, respectively.
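Inverting Eq.~\ref{eqn:Prisk} with a steady dose rate, as in Eq.~\ref{eqn:N-steady-state}, makes such exposure-time thresholds explicit. The sketch below is illustrative: the dose rate is a hypothetical value chosen only so that the resulting times are of the same order as those read off the figure.
\begin{verbatim}
import math

N0 = 900.0        # infectious dose (virions)
dose_rate = 0.5   # hypothetical inhaled virions per second at D = 1 m

# P = 1 - exp(-N/N0) with N = dose_rate * T inverts to
#   T = -N0 * ln(1 - P) / dose_rate
for P_target in (0.1, 0.2, 0.5):
    T = -N0 * math.log(1.0 - P_target) / dose_rate
    print(f"P = {P_target:.1f} reached after {T / 60.0:.1f} min")
\end{verbatim}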
The probability of infection for the delta variant is approximately 2 times greater than that for the original strain. However, the difference between the infection probability of the alpha and the original strain is not very significant. Through the risk evaluation model presented in this work, it is also possible to incorporate the effect of vaccination on the probability of infection. As discussed in the previous section, the generalized form of Eq.~\ref{eqn:Prisk-vaccine} can be applied to vaccinated cases by using the expression $ \alpha = 1-\eta_{vc} $. For unvaccinated cases, we can set $ \eta_{vc}=0 $. To evaluate how the infection risk changes due to vaccination, we choose two values for $ \eta_{vc} $, $ 80\% $ and $ 95\% $, which approximately correspond to the efficacies of the AstraZeneca and Pfizer vaccines, respectively. The comparison of $ P $ evaluated at a distance of 1 m from the infected person for the different values of $ \eta_{vc} $ is presented in Fig.~\ref{fig:RH50-variant}b. The infection probability remains below $ 20\% $ for exposure periods of up to 40 min for $ \eta_{vc} = 0.8 $, and for $ \eta_{vc} = 0.95 $ it remains well below $ 10\% $ for all exposure periods plotted in the figure. \begin{figure}[!tb] \centering \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/RH50_T25s.pdf} } \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/RH100_T25s.pdf} } \caption{A snapshot of droplet dispersion at $ t=24 $ s for (a) $ RH=50\% $ (b) $ RH=100\% $.} \label{fig:droplet-viz-RH} \end{figure} \subsection{Role of humidity} Direct simulation of droplet dispersion, as opposed to simulations of scalar transport as a proxy for aerosols, enables the investigation of the effect of environmental factors like temperature and humidity on droplet aerosolization and dispersion, and consequently on the risk of infection. We extend the numerical simulation of droplet dispersion during speech presented in the previous section to study the effect of humidity on infection risk. The relative humidity of the ambient environment can significantly affect the evaporation of medium and large droplets, either preventing or promoting the aerosolization of medium droplets, which carry a higher virion count than smaller droplets. As a consequence, the concentration of virions could be significantly altered by humidity, thereby influencing the probability of infection. In order to investigate the role of humidity on infection risk, we carried out three separate numerical simulations of droplet dispersion during speech in which the relative humidity ($ RH $) was set to $ 10\% $, $ 50\% $ and $ 100\% $. The numerical setup and the boundary conditions are otherwise identical to those of the simulation in the previous section; the only parameter varied is the relative humidity. In Fig.~\ref{fig:droplet-viz-RH}, a snapshot of the droplet dispersion at $ t=24 $ s for $ RH=50\% $ and $ RH=100\% $ is presented. For the $ RH=100\% $ case, the lack of evaporation prevents the aerosolization of medium-sized droplets. Droplets smaller than $ 5\;\mu$m remain airborne for prolonged periods. The velocity of some of the droplets between 5 and $ 50\;\mu$m in size, whose ejection coincides with the peak velocity of the speaking model, is initially dominated by the fluid velocity. However, as the flow velocity decreases away from the mouth due to dissipation, the influence of gravity dominates the velocity of these droplets.
This results in these droplets settling to the ground under the influence of gravity. In contrast, at lower $ RH $ values, the medium droplets in question can get partially or completely aerosolized and remain airborne. As a result, the concentration of aerosols, and consequently of virions, depends directly on the relative humidity of the ambient environment. This effect is quantified through the evaluation of the infection probability, which directly depends on the local virion concentration. The variation of the infection probability with distance from the infection source for the different humidity cases is compared in Fig.~\ref{fig:Rh-effect-dist-time}a. As the local droplet concentration is strongly influenced by the evaporation of medium droplets with their higher virion count, the number of virions likely to be inhaled decreases with increasing humidity. The consequence can be seen in Fig.~\ref{fig:Rh-effect-dist-time}a and Fig.~\ref{fig:Rh-effect-dist-time}b. The trend of $ P $ decreasing with distance is consistent across all the humidity cases. However, for a given distance from the infection source, the probability of infection is lower for higher humidity. Fig.~\ref{fig:Rh-effect-dist-time}b plots the evolution of $ P $ over time evaluated at a distance of 1 m from the infection source for the different humidity cases. From these plots, it can be inferred that, at a given temperature, higher humidity lowers the risk of infection. However, the magnitude of the reduction in the probability of infection could be strongly influenced by the temperature of the ambient environment. \begin{figure}[!t] \centering \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/RH_effect.pdf} } \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/RH_v_time.pdf} } \caption{Variation of the infection probability with (a) distance from the infection source, and (b) time of exposure, under different humidity conditions.} \label{fig:Rh-effect-dist-time} \end{figure} \section{Summary and Discussion} In this work, we have developed a framework for quantifying the risk of infection due to airborne diseases like COVID-19 by using the dose-response model in droplet-dispersion simulations. We have detailed a methodology for estimating the inhaled dose directly from droplet dispersion simulations by measuring the volume of droplets/aerosols (a proxy for the virion count) in the breathing zone. We have adopted this framework to estimate the risk of infection from a person ejecting droplets while speaking. For this, we have carried out a numerical simulation of droplet dispersion from a single person standing and speaking continuously in an isolated environment. The infection probability was found to decrease with distance from the infected person. The magnitude of the infection probability is strongly influenced by the minimum infection dose $ N_0 $, as can be seen in Fig.~\ref{fig:RH50-PvsDist}a. If the minimum infection dose is assumed to be small, then the infection risk remains relatively large even at distances as far as 2 m from the infected person. On the other hand, if the infection dose is large, then the infection probability is small even at a distance of 1 m. Furthermore, as the simulations have been carried out in a quiescent or poorly ventilated environment, the infection probability is likely to be high. This prediction can drastically change depending on the ambient environment's flow conditions.
Therefore, we would not like to make any specific recommendations on social distancing based on these results. The main purpose of this work is to demonstrate the applicability of the framework of risk estimation using the dose-response model in droplet dispersion simulations. A generalized form of the dose-response model that can incorporate the effect of the increased transmissibility of various strains of COVID-19 and the effect of vaccination has been presented in this work. A comparison of the infection risk due to variant strains of higher transmissibility with that due to the standard strain was presented. The results show that, for short exposure durations, the infection risk of a variant strain can be significantly higher than that of the standard strain. The difference in infection risk between strains decreases as the exposure duration increases. Similarly, we also presented a comparison of the infection risk for a vaccinated person with that for an unvaccinated person. The probability of infection for vaccinated persons is very low for short and medium exposure durations. However, the infection risk can increase significantly provided the exposure duration is very long. One of the main advantages of droplet dispersion simulations is their ability to investigate the effect of environmental factors such as temperature and humidity on the infection risk while keeping all other variables fixed. To demonstrate this point, an investigation of the effect of the humidity of the ambient environment on the infection risk was carried out. This is an aspect that cannot be analyzed using well-mixed room average analysis or passive scalar based aerosol transport models. The results of our analysis show that the infection risk is strongly influenced by humidity due to its effect on the evaporation of medium and small droplets. The infection risk at a given distance from the infected person has an inverse dependence on humidity: lowering the ambient humidity increases the risk and vice-versa. This relationship between infection risk and humidity depends, to some extent, on the droplet diameter distribution adopted in the simulation. For example, a hypothetical droplet diameter profile that includes only aerosols may not exhibit the strong dependence of the infection risk on humidity that the results of this study indicate. \section*{Acknowledgements} This work was supported by JST CREST, Grant Number JPMJCR20H7, Japan, and through the HPCI System Research Project (Project ID: hp210086). \section*{Author contributions statement} RB, AI, MT developed the dose-response model. RB, MT, MY developed the speaking model. RB, CGL, MT developed the flow solver. RB carried out the simulations. RB, MT analyzed the data. RB wrote the manuscript. All authors reviewed the manuscript and provided inputs to the final edit.
\newtheorem{theorem}{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \title{Long tails in the long time asymptotics of quasi-linear hyperbolic-parabolic systems of conservation laws} \author{Guillaume van Baalen\footnote{Dept. of Mathematics and Statistics, Boston University}, Nikola Popovi\'c$^{*}$, and C. Eugene Wayne$^{*}$} \begin{document} \maketitle \begin{abstract} \noindent The long-time behavior of solutions of systems of conservation laws has been extensively studied. In particular, Liu and Zeng \cite{liu:1997} have given a detailed exposition of the leading order asymptotics of solutions close to a constant background state. In this paper, we extend the analysis of \cite{liu:1997} by examining higher order terms in the asymptotics in the framework of the so-called two dimensional {\em p-system}, though we believe that our methods and results also apply to more general systems. We give a constructive procedure for obtaining these terms, and we show that their structure is determined by the interplay of the parabolic and hyperbolic parts of the problem. In particular, we prove that the corresponding solutions develop {\em long tails} that precede the characteristics. \end{abstract} \section{Introduction} In this paper, we consider the long-time behavior of solutions of systems of viscous conservation laws. This topic has been extensively studied. In particular, for the case of solutions close to a constant background state, \cite{liu:1997} contains a detailed exposition of the leading order long-time behavior of such solutions. More precisely, it is shown in \cite{liu:1997} that the leading order asymptotics are given as a sum of contributions moving with the characteristic speeds of the undamped system of conservation laws and that each contribution evolves as either a Gaussian solution of the heat equation or as a self-similar solution of the viscous Burger's equation. Thus, with the exception of the translation along the characteristics, these leading order terms reflect primarily the dissipative aspects of the problem. In this paper, in an effort to better understand the interplay between the hyperbolic and parabolic aspects of the problem, we examine higher order terms in the asymptotics. We work with a specific two-dimensional system of equations -- the {\em p-system} -- but we believe that its behavior is prototypical. In particular, we think that our methods and results would extend to more complicated systems such as the `full gas dynamics' and the equations of Magneto-Hydro-Dynamics (MHD) as considered in \cite{liu:1997}. The specific set of equations we consider is the following: \begin{equa}[2]\label{eqn:p-system} \partial_t a &= c_1 \partial_x b~, & a(x,0)&=a_0(x)~,\\ \partial_t b &= c_2 \partial_x a + \partial_x g(a,b) + \alpha \left( \partial_x^2 b + \partial_x( f(a,b) \partial_x b ) \right)~,~~~~& b(x,0)&=b_0(x)~. \end{equa} We will make precise the assumptions on the nonlinear terms $f$ and $g$ below, but in order to describe our results informally, we basically assume that $|g(a,b)| \sim {\cal O}( (|a|+|b|)^2 )$ and $|f(a,b)| \sim {\cal O}( (|a|+|b|) )$.
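For instance, one illustrative choice consistent with these informal bounds (and with Definition \ref{def:admissible} below) is $g(a,b)=\gamma\,a^2$ and $f(a,b)=a$ for some constant $\gamma$, for which one can take $g_0=g$, so that $\Delta g=0$.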
We also note that without loss of generality, we can set $c_1=c_2=1$ and $\alpha=2$ in (\ref{eqn:p-system}), which can be achieved by appropriate scalings of space, time and the dependent variables, and a possible redefinition of the functions $f$ and $g$. Physically, \reff{eqn:p-system} is a model for compressible, constant entropy flow, where $a$ represents the volume fraction (i.e. the reciprocal of the density) and $b$ is the fluid velocity. The first of the two equations in (\ref{eqn:p-system}) is the consistency relation between these two physical quantities. In particular, it would not be physically reasonable to include a dissipative term in this equation, whereas such a term arises naturally in the second equation, which is essentially Newton's law, in which internal frictional forces are often present. As a consequence of the form of the dissipation, the damping here is not `diagonalizable' in the terminology of \cite{liu:1997}. Next, we note that with the scaling $c_1=c_2=1$ and $\alpha=2$ in (\ref{eqn:p-system}), the characteristic speeds are $\pm 1$. Then, following Liu and Zeng \cite{liu:1997}, we introduce new dependent variables $u$ and $v$ which translate with those characteristic speeds $\pm1$, respectively. If the initial conditions $a_0$ and $b_0$ in (\ref{eqn:p-system}) decay sufficiently fast as $|x|\to\infty$, Liu and Zeng showed that in the translating frame of reference, $u(x,t)=\frac{1}{\sqrt{1+t}} g_0(\frac{x}{\sqrt{1+t}}) + {\cal O}((1+t)^{-\frac{3}{4}})$, and similarly for $v$, where $g_0$ is a self-similar solution of either the heat equation, or of Burger's equation, depending on the detailed form of the nonlinear terms. In this paper, we derive similar expressions for the higher order terms in the asymptotics through a constructive procedure that can be carried out to arbitrary order. More precisely, we show that for any $N \ge 1$, there exist (universal) functions $\{ g_{n}^{\pm}\}_{n=1}^N$ and constants $\{ d_n^{\pm} \}_{n=1}^N$ determined by the initial conditions, such that \begin{equa}[2]\label{asym} u(x,t) &= \frac{1}{\sqrt{1+t}} g_0^{+}({\textstyle\frac{x}{\sqrt{1+t}}}) + \sum_{n=1}^N \frac{1}{(1+t)^{1-\frac{1}{2^{n+1}}}} d_n^{+} g_{n}^{+}({\textstyle\frac{x}{\sqrt{1+t}}}) +{\cal O}\myl{12}\frac{1}{(1+t)^{1-\frac{1}{2^{N+2}}}}\myr{12}~. \end{equa} We give explicit expressions for the functions $g_{n}^{\pm}$ below, but focusing for the moment on the case $N=1$ and the variable $u$, we have \begin{equs} u(x,t) &= \frac{1}{\sqrt{1+t}} g_0^{+}({\textstyle\frac{x}{\sqrt{1+t}}}) + \frac{1}{(1+t)^{\frac{3}{4}}} d_1^{+} g_{1}^{+} ({\textstyle\frac{x}{\sqrt{1+t}}}) + {\cal O}\myl{12}\frac{1}{(1+t)^{\frac{7}{8}}}\myr{12}~, \end{equs} where the functions $g_{0}^{+}(z)$ and $g_{1}^{+}(z)$ are solutions of the following ordinary differential equations: \begin{equs} \partial_z^2 g_{0}^{+}(z) + \frac{1}{2} z \partial_z g_{0}^{+}(z) + \frac{1}{2} g_{0}^{+}(z) + c_{+} \partial_z (g_0^{+}(z)^2) &= 0 \label{eqn:nonlineareq}\\ \partial_z^2 g_{1}^{+}(z) + \frac{1}{2} z \partial_z g_{1}^{+}(z) + \frac{3}{4} g_{1}^{+}(z) + 2 c_{+} \partial_z (g_0^{+}(z)g_{1}^{+}(z)) &= 0~. \label{eqn:lineareq} \end{equs} Here $c_{+}$ is a constant that depends on the Hessian matrix of $g(a,b)$ at $a=b=0$ and that will be specified in the course of our analysis.
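To see where (\ref{eqn:nonlineareq}) and (\ref{eqn:lineareq}) come from, note that they are simply the profile equations obtained by inserting the self-similar forms $u_0(x,t)=\frac{1}{\sqrt{1+t}}g_0^{+}(z)$ and $u_1(x,t)=\frac{1}{(1+t)^{3/4}}d_1^{+}g_1^{+}(z)$, with $z=\frac{x}{\sqrt{1+t}}$, into Burger's equation $\partial_t u_0=\partial_x^2u_0+c_{+}\partial_x(u_0^2)$ and its linearization $\partial_t u_1=\partial_x^2u_1+2c_{+}\partial_x(u_0u_1)$, which, as we show below, govern $u$ to leading and next-to-leading order. For instance, for the first of these forms,
\begin{equs}
\partial_tu_0=-\frac{1}{(1+t)^{\frac{3}{2}}}
\myl{12}\frac{1}{2}g_0^{+}(z)+\frac{1}{2}z\partial_zg_0^{+}(z)\myr{12}~,~~~~
\partial_x^2u_0+c_{+}\partial_x(u_0^2)=\frac{1}{(1+t)^{\frac{3}{2}}}
\myl{12}\partial_z^2g_0^{+}(z)+c_{+}\partial_z(g_0^{+}(z)^2)\myr{12}~,
\end{equs}
and equating the two expressions yields (\ref{eqn:nonlineareq}); the same computation with the exponent $\frac{3}{4}$ in place of $\frac{1}{2}$ produces the coefficient $\frac{3}{4}$ in (\ref{eqn:lineareq}). Note also that if $c_{+}=0$, the integrable solutions of (\ref{eqn:nonlineareq}) are precisely the Gaussians $g_0^{+}(z)=C{\rm e}^{-\frac{z^2}{4}}$, recovering the heat-equation asymptotics mentioned above.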
We will prove that while all solutions of (\ref{eqn:nonlineareq}) have Gaussian decay as $|x|\to\infty$, general solutions of the {\em linear} equation (\ref{eqn:lineareq}) are linear combinations of two functions $g_{1,\pm}^{+}(z)$, where $g_{1,\pm}^{+}(z)$ decays like a Gaussian as $z \to \mp \infty$ but only like $|z|^{-\frac{3}{2}}$ as $z \to \pm \infty$. The graphs of the functions $g_{0}^{+}(z)$ and $g_{1}^{+}(z)$ are presented in Figure \ref{fig:thefigure}. \begin{figure}[t] \unitlength=1mm \begin{center} \begin{picture}(0,0)(0,0) \put(-70,-90){\psfig{file=g0.eps,width=6cm}} \put(0,-90){\psfig{file=f1.eps,width=6cm}} \end{picture} \end{center} \begin{center} \begin{picture}(120,80)(0,0) \put(47,25){$z$} \put(23,75){$g_0^{+}(z)$} \put(117,25){$z$} \put(93,75){$g_1^{+}(z)$} \end{picture} \end{center} \setcaptionwidth{130mm} \caption{Graphs of the functions $g_0^{+}$ (left panel) and $g_1^{+}$ (right panel). Note the {\em long tail} of $g_{1}^{+}$ as $z\to\infty$.} \label{fig:thefigure} \end{figure}% Thus, the higher order terms in the asymptotics develop {\em long tails}. These tails are a manifestation of the hyperbolic part of the problem (or perhaps more precisely of the interplay between the parabolic and hyperbolic parts). Were we to consider just the asymptotic behavior of the viscous Burger's equation, which gives the leading order behavior of the solutions, we would find that if the initial data is well localized, the higher order terms in the long-time asymptotics decay rapidly in space and have temporal decay rates given by half-integers. Another somewhat surprising aspect of our analysis is that the tails actually {\em precede} the characteristics. We also note one additional fact about the expansion in \reff{asym}. Prior research \cite{gallay:1998,wayne:1997} has shown that for both parabolic equations and damped wave equations, the eigenfunctions of the operator \begin{equs} {\cal L} u(z) = \partial_z^2 u + \frac{1}{2} z \partial_z u \end{equs} play an important role for the asymptotics. In particular, on appropriate function spaces, this operator has a sequence of isolated eigenvalues whose associated eigenfunctions can be used to construct an expansion for the long-time asymptotics. In this connection, we prove that the functions $g_{n}^{\pm}$ are closely approximated by eigenfunctions of ${\cal L}$ with eigenvalues $\lambda_n = -\frac{1}{2} + 2^{-(n+1)}$; more precisely, the functions $g_{n}^{\pm}$ are eigenfunctions of a compact perturbation of ${\cal L}$, see e.g. (\ref{eqn:lineareq}). However, so far we have not succeeded in finding a function space which both contains these eigenfunctions (the functions $g_{n}^{\pm}$ decay slowly as $z\to\pm\infty$) and in which the corresponding eigenvalues are isolated points in the spectrum. We plan to investigate this point further in future research. Before moving to a precise statement of our results, we note that our approach makes no use of Kawashima's energy estimates for hyperbolic-parabolic conservation laws \cite{kawashima:1987}. Instead, we prove existence by directly studying the integral form of \reff{eqn:p-system}. We now state our results on the Cauchy problem \reff{eqn:p-system}. We begin by stating the precise assumptions we make on the nonlinearities $f$ and $g$ in \reff{eqn:p-system}.
\begin{definition} \label{def:admissible} The maps $f,g:{\bf R}^2\to{\bf R}$ are admissible nonlinearities for (\ref{eqn:p-system}) if there is a quadratic map $g_0:{\bf R}^2\to{\bf R}$ and a constant $C$ such that for all $|{\bf z}|$, $|{\bf z}_1|$ and $|{\bf z}_2|$ small enough, \begin{equs}[2] |g({\bf z})|&\leq C|{\bf z}|^2~, &~~ |g({\bf z}_1)-g({\bf z}_2)| &\leq C|{\bf z}_1-{\bf z}_2|(|{\bf z}_1|+|{\bf z}_2|)~,\\ |\Delta g({\bf z})|&\leq C|{\bf z}|^3~, &~~ |\Delta g({\bf z}_1)-\Delta g({\bf z}_2)| &\leq C|{\bf z}_1-{\bf z}_2|(|{\bf z}_1|+|{\bf z}_2|)^2~,\\ |f({\bf z})|&\leq C|{\bf z}|~&~\mbox{ and }~~ |f({\bf z}_1)-f({\bf z}_2)|&\leq C|{\bf z}_1-{\bf z}_2|~, \end{equs} where $\Delta g({\bf z})\equiv g({\bf z})-g_0({\bf z})$. \end{definition} The main result of this paper can be formulated as follows: \begin{theorem}\label{thm:maintheorem} Fix $N>0$. There exists $\epsilon_0>0$ sufficiently small such that if \begin{itemize} \item[(i)] $ | a_0 |_{\H^1({ {\mathbb R}})} + |a_0 |_{\L^1({ {\mathbb R}})} < \epsilon_0$ and $ | b_0 |_{\H^2({ {\mathbb R}})} + |b_0 |_{\L^1({ {\mathbb R}})} < \epsilon_0$ \item[(ii)] $ | x^2 a_0 |_{\L^2({ {\mathbb R}})} + | x^2 b_0 |_{\L^2({ {\mathbb R}})} < \infty$, \end{itemize} then \reff{eqn:p-system} has a unique (mild) solution with initial conditions $a_0$ and $b_0$. Moreover, there exist functions $\{ g_{n}^{\pm} \}_{n=0}^{N}$ (independent of initial conditions) and constants $C_N$, $\{d_{n}^{\pm}\}_{n=1}^{N}$ determined by the initial conditions such that if we define \begin{equs} u(x,t) &= a(x-t,t) + b(x-t,t) ~~~\mbox{ and }~~~ v(x,t) = a(x+t,t) - b(x+t,t) \end{equs} then \begin{equa}[2]\label{eqn:asymptoticexpansion} u(x,t) &= \frac{1}{\sqrt{1+t}} g_0^{+}({\textstyle\frac{x}{\sqrt{1+t}}}) + \sum_{n=1}^N \frac{1}{(1+t)^{1-\frac{1}{2^{n+1}}}} d_n^{+} g_{n}^{+}({\textstyle\frac{x}{\sqrt{1+t}}}) + R_u^N(x,t) \\ v(x,t) &= \frac{1}{\sqrt{1+t}} g_0^{-}({\textstyle\frac{x}{\sqrt{1+t}}}) + \sum_{n=1}^N \frac{1}{(1+t)^{1-\frac{1}{2^{n+1}}}} d_n^{-} g_{n}^{-} ({\textstyle\frac{x}{\sqrt{1+t}}}) + R_v^N(x,t)~, \end{equa} where the remainders $R_u^N$ and $R_v^N$ satisfy the estimates \begin{equa}[2]\label{eqn:remainderestimates} \sup_{t\geq0}(1+t)^{\frac{3}{4}-\frac{1}{2^{N+2}}} \|R_{\{u,v\}}^N(\cdot,t)\|_{\L^2({ {\mathbb R}})}&\leq C_N\\ \sup_{t\geq0} (1+t)^{\frac{5}{4}-\frac{1}{2^{N+2}}} \|\partial_x R_{\{u,v\}}^N(\cdot,t)\|_{\L^2({ {\mathbb R}})} &\le C_N~. \end{equa} Furthermore, for $n\geq1$, the functions $g_n^{\pm}$ satisfy $g_n^{\pm}(z)\sim |z|^{-1+2^{-n-1}}$ as $z\to\pm\infty$. \end{theorem} There is a slight incongruity in this result in that the norm in which we estimate the remainder term is weaker than that we use on the initial data; namely, we do not give estimates for the remainder in $\H^2({ {\mathbb R}})$, or in the localization norms $\L^1({ {\mathbb R}})$ and the weighted $\L^2({ {\mathbb R}})$-norm (on that aspect of the problem, see Remark \ref{rem:onestbete} below). Theorem \ref{thm:maintheorem} actually holds for slightly more general initial conditions than those satisfying (i)-(ii). Furthermore, we will prove that the estimates (\ref{eqn:remainderestimates}) hold for all initial conditions $(a_0,b_0)$ in a subset ${\cal D}_2\subset\H_1\times\H_2$ that is {\em positively invariant} under the flow of \reff{eqn:p-system}. 
However, since the topology used to define the subset ${\cal D}_2$ is somewhat non-standard, we have chosen to state the result initially in this slightly weaker, but hopefully more comprehensible, form to keep the introduction as simple as possible. \begin{remark} \label{rem:onestbete} It is interesting to note (see Proposition \ref{prop:weightednorm} below) that $\|x^2a(\cdot,t)\|_{\L^2({ {\mathbb R}})}+\|x^2b(\cdot,t)\|_{\L^2({ {\mathbb R}})}$ is finite for all finite $t>0$, but that the terms with $n\geq1$ in the asymptotic expansion do not satisfy this property due to the long tails of the functions $g^{\pm}_{n}$. \end{remark} \begin{remark} As the asymmetry in the degree of $x$ derivatives in (\ref{eqn:p-system}) suggests, we require more spatial regularity from the second component (the $b$ variable) than from the first (the $a$ variable). It is then natural to expect that $R_u^N$ or $R_v^N$ are not necessarily in $\H^2$, but that only their difference is. \end{remark} We conclude this section with a few remarks. Define $u_{\pm}(x,t)=a(x,t)\pm b(x,t)$. Then the asymptotics of the solutions of (\ref{eqn:p-system}) in the variables $u_{\pm}$ are the same as those of the two dimensional (generalized) Burger's equation \begin{equa}[2]\label{eqn:modelproblem} \partial_t u_{+}&=\partial_x^2u_{+}+\partial_xu_{+} +\partial_x(c_{+}u_{+}^2-c_{-}u_{-}^2)\\ \partial_t u_{-}&=\partial_x^2u_{-}-\partial_xu_{-} +\partial_x(c_{-}u_{-}^2-c_{+}u_{+}^2)~, \end{equa} where the constants $c_{\pm}$ are determined by the Hessian of $g(a,b)$ at $a=b=0$ through \begin{equs} c_{\pm}= \pm \frac{1}{8} (1,\pm1)\cdot \myl{14} \begin{matrix} \partial_a^2g & \partial_a\partial_bg\\ \partial_a\partial_bg & \partial_b^2g \end{matrix} \myr{14} \my{|}{14}_{a=b=0} \cdot \vector{1}{\pm1}~. \end{equs} We will see that the hyperbolic effects manifest themselves through the `source' terms $-c_{-}u_{-}^2$, respectively $c_{+}u_{+}^2$ in the first, respectively second equation in (\ref{eqn:modelproblem}). In particular, none of the terms $g_n^{\pm}$ with $n\geq1$ would be present in the asymptotic expansion if those terms were absent. Finally, note that we have chosen to state Theorem \ref{thm:maintheorem} for finite $N$. As it turns out, the sums appearing in (\ref{eqn:asymptoticexpansion}) converge in the limit as $N\to\infty$, in which case the estimates (\ref{eqn:remainderestimates}) hold with time weights replaced by $(1+t)^{\frac{3}{4}}\ln(2+t)^{-1}$ and $(1+t)^{\frac{5}{4}}\ln(2+t)^{-1}$. The proof can easily be done with the techniques used in this paper and is left to the reader. The remainder of the paper is organized as follows: In Section \ref{sec:cauchy}, we discuss the well-posedness of the Cauchy problem (\ref{eqn:p-system}) in an appropriately defined topology. In Section \ref{sec:asym}, we explain our strategy for proving our main result, Theorem \ref{thm:maintheorem}, on the long time asymptotics of solutions of (\ref{eqn:p-system}). Namely, we decompose that proof into a series of simpler sub-problems which are then tackled in subsequent sections: in Sections \ref{sect:burgers} and \ref{sect:inhomogeneousheat}, we investigate properties of solutions of Burger's type equations, respectively of inhomogeneous heat equations, as they occur naturally in the asymptotic analysis. In Section \ref{sect:cauchyproof}, we collect some estimates that are used in the proof of the well-posedness of (\ref{eqn:p-system}). 
Finally, in Section \ref{sect:remainderestimates}, we specify the sense in which the semigroup of the linearization of (\ref{eqn:p-system}) is close to heat kernels translating along the characteristics, and we give estimates on the remainder terms occurring in Theorem \ref{thm:maintheorem}. \section{Cauchy problem} \label{sec:cauchy} To motivate our technical treatment of the problem and in particular our choice of function spaces, we first note that upon taking the Fourier transform of the linearization of (\ref{eqn:p-system}), it follows that \begin{equs} \partial_t\vector{a}{b}= \L\vector{a}{b}\equiv \myl{14} \begin{matrix} 0 & ik\\ ik &-2k^2 \end{matrix} \myr{14} \vector{a}{b}~. \label{eqn:linearfourier} \end{equs} We then find that the (Fourier transform of) the semigroup associated with (\ref{eqn:linearfourier}) is \begin{equs} {\rm e}^{\L t}= {\rm e}^{-k^2t} \myl{15} \begin{matrix} \cos(kt\Delta)+\frac{k}{\Delta}\sin(kt\Delta) & \frac{i}{\Delta}\sin(kt\Delta)\\ \frac{i}{\Delta}\sin(kt\Delta) & \cos(kt\Delta)-\frac{k}{\Delta}\sin(kt\Delta) \end{matrix} \myr{15}~, \label{eqn:defeLt} \end{equs} where $\Delta=\sqrt{1-k^2}$. The most important fact about the semigroup ${\rm e}^{\L t}$ is that it is close to ${\rm e}^{\L_0 t}$, the semigroup associated with the problem \begin{equs} \partial_t\vector{u}{v} =\L_0\vector{u}{v} \equiv \myl{14} \begin{matrix} \partial_x^2+\partial_x & 0\\ 0 & \partial_x^2-\partial_x \end{matrix} \myr{14} \vector{u}{v} ~. \label{eqn:linearfourieruv} \end{equs} Formally, ${\rm e}^{\L_0 t}$ can be obtained by setting $\Delta=1$ in ${\rm e}^{\L t}$ and by conjugating with the matrix \begin{equs} \S\equiv \myl{14} \begin{matrix} 1 & 1\\ 1 & -1 \end{matrix} \myr{14}~. \label{eqn:defS} \end{equs} These two operations correspond to a long wavelength expansion and a change of dependent variables to quantities that move along the characteristics. More precisely, we will prove that ${\rm e}^{\L t}$ satisfies the intertwining property \begin{equs} \S{\rm e}^{\L t}&\approx{\rm e}^{\L_0 t}\S~, \end{equs} where the symbol $\approx$ means that the action of these two operators is the same in the large scale -- long time limit; see Lemma \ref{lem:closetoheat} at the beginning of Section \ref{sect:remainderestimates} for details. Furthermore, ${\rm e}^{\L t}$ satisfies parabolic-like estimates \begin{equs} |{\rm e}^{\L t}|&\leq C{\rm e}^{-\min(k^2,1)\frac{t}{4}} \myl{15} \begin{matrix} 1 & \frac{1}{\sqrt{1+k^2}}\\ \frac{1}{\sqrt{1+k^2}} & 1 \end{matrix} \myr{15}~, \label{eqn:estimateonkernel} \\ \my{|}{15} {\rm e}^{\L t} \vector{0}{ik} \my{|}{15} & \leq C\frac{{\rm e}^{-\min(k^2,1)\frac{t}{4}}}{\sqrt{t}} \vector{1}{\frac{1}{\sqrt{1+k^2}}} \label{eqn:estimateonDkernel} \end{equs} uniformly in $t\geq0$ and $k\in{\bf R}$. Hence, to summarize, ${\rm e}^{\L t}$ behaves like a superposition of heat kernels translating along the characteristics of the underlying hyperbolic problem. In view of the above observations as well as of classical techniques for parabolic PDE's, see e.g. \cite{temam:1997,bricmont:1994}, we will consider (\ref{eqn:p-system}) in the following (somewhat non-standard) topology (cf also \cite{gvb:2006}): \begin{definition} \label{def:functionspaces} We define ${\cal B}_0$, resp. ${\cal B}$, as the closure of ${\cal C}_0^{\infty}({\bf R},{\bf R}^2)$, resp. ${\cal C}_0^{\infty}({\bf R}\times[0,\infty),{\bf R}^2)$, under the norm $|\cdot|$, resp. 
$\|\cdot\|$, where for ${\bf z}_0=(a_0,b_0):{\bf R}\to{\bf R}^2$ and ${\bf z}=(a,b):{\bf R}\times[0,\infty)\to{\bf R}^2$, we define \begin{equs} |{\bf z}_0|= \|\widehat{\bf z}_0\|_{\infty} +\|{\bf z}_0\|_2 +\|{\rm D}{\bf z}_0\|_2 +\|{\rm D}^2b_0\|_2~,~~~ \|{\bf z}\|&= \|\hat{\bf z}\|_{\infty,0} +\|{\bf z}\|_{2,\frac{1}{4}} +\|{\rm D}{\bf z}\|_{2,\frac{3}{4}}+ \|{\rm D}^2b\|_{2,\frac{5}{4}^{\star}}~. \end{equs} Here $(Da)(x,t)\equiv\partial_xa(x,t)$, $\hat{a}(k,t)$ is the Fourier transform of $a(x,t)$, \begin{equs} \|f\|_{p,q}=\sup_{t\geq0}(1+t)^q\|f(\cdot,t)\|_p~,~~~~ \|f\|_{p,q^{\star}}=\sup_{t\geq0}\frac{(1+t)^q}{\ln(2+t)}\|f(\cdot,t)\|_p \end{equs} and $\|\cdot\|_p$ is the standard $\L^p({\bf R})$ norm. \end{definition} Before turning to the Cauchy problem with initial data in ${\cal B}_0$, we collect a few comments on our choice of function spaces. Consider first the requirements on the initial conditions in (\ref{eqn:p-system}). While the use of the $\H^1$ space is quite natural in this context, we choose to replace the $\L^1$ norm by the (weaker) control of the $\L^{\infty}$ norm in Fourier space. This has the great advantage that all estimates can then be done in Fourier space, where the semigroup ${\rm e}^{\L t}$ has the simple, explicit form (\ref{eqn:defeLt}). In turn, our choice of $q$-exponents in the norm $\|\cdot\|$ is motivated by the fact that these are the highest possible exponents for which the $\|\cdot\|$-norm of the leading order asymptotic term $\frac{1}{\sqrt{1+t}}g_0(\frac{x}{\sqrt{1+t}})$ is bounded. Note also that for the linear evolution (\ref{eqn:linearfourier}), we have \begin{equs} \|{\rm e}^{\L t}{\bf z}_0\|\leq C|{\bf z}_0|~, \label{eqn:linearestimate} \end{equs} since $\hat{j}(k,t)={\rm e}^{-\min(k^2,1)t}\hat{u}_0(k)$ satisfies \begin{equs} \|{\rm D}^nj(\cdot,t)\|_{2} \leq C \myl{12} {\rm e}^{-t}\|{\rm D}^nu_0\|_2 +\min\myl{10} t^{-\frac{1}{4}-\frac{n}{2}}\|\hat{u}_0\|_{\infty}, \|D^nu_0\|_2 \myr{10} \myr{12} \end{equs} for all $n=0,1,\ldots$. Finally, we note that for admissible nonlinearities in the sense of Definition \ref{def:admissible}, the map $h(a,b)=f(a,b)\partial_xb+g(a,b)=h({\bf z})$ satisfies \begin{equs} \label{eqn:H} \|h({\bf z})\|_{1,\frac{1}{2}}+ \|h({\bf z})\|_{2,\frac{3}{4}}+ \|Dh({\bf z})\|_{2,\frac{5}{4}}&\leq C\|{\bf z}\|^2~,\\ \|h({\bf z}_1)-h({\bf z}_2)\|_{1,\frac{1}{2}}+ \|h({\bf z}_1)-h({\bf z}_2)\|_{2,\frac{3}{4}}&\leq C\|{\bf z}_1-{\bf z}_2\|(\|{\bf z}_1\|+\|{\bf z}_2\|)~, \label{eqn:DHone} \\ \|D(h({\bf z}_1)-h({\bf z}_2))\|_{2,\frac{5}{4}}&\leq C\|{\bf z}_1-{\bf z}_2\|(\|{\bf z}_1\|+\|{\bf z}_2\|)~. \label{eqn:DHtwo} \end{equs} We are now fully equipped to study the Cauchy problem (\ref{eqn:p-system}) in ${\cal B}$: \begin{theorem}\label{thm:cauchy} For all ${\bf z}_0\in{\cal B}_0$ with $|{\bf z}_0|=|(a_0,b_0)|\leq\epsilon_0$ small enough, the Cauchy problem (\ref{eqn:p-system}) is (locally) well posed in ${\cal B}$ if the nonlinearities are admissible in the sense of Definition \ref{def:admissible}. In particular, the solution satisfies $\|{\bf z}\|\leq c\epsilon_0$ for some $c>1$ and is unique among functions in ${\cal B}$ satisfying this bound.
\end{theorem} \begin{proof} Upon taking the Fourier transform of (\ref{eqn:p-system}), we get \begin{equs} \partial_t\vector{a}{b}= \myl{14} \begin{matrix} 0 & ik\\ ik &-2k^2 \end{matrix} \myr{14} \vector{a}{b}+ \vector{0}{ikh}~, \label{eqn:fourier} \end{equs} which gives the following representation for the solution \begin{equs} {\bf z}(t)\equiv \vector{a(t)}{b(t)} ={\rm e}^{\L t}\vector{a_0}{b_0} +\int_0^t \hspace{-2mm} {\rm d}s~{\rm e}^{\L(t-s)} \vector{0}{\partial_xh({\bf z}(s))} \equiv {\rm e}^{\L t}{\bf z}_0+ {\cal N}[{\bf z}](t) ~. \label{eqn:defN} \end{equs} We will prove below that for all ${\bf z}_i\in{\cal B}$, $i=1,2$, we have \begin{equs} \|{\cal N}[{\bf z}]\|\leq C\|{\bf z}\|^2~~\mbox{ and }~~ \|{\cal N}[{\bf z_1}]-{\cal N}[{\bf z_2}]\|\leq C\|{\bf z}_1-{\bf z}_2\|(\|{\bf z}_1\|+\|{\bf z}_2\|) \label{eqn:contract} \end{equs} for some constant $C$. The proof of Theorem \ref{thm:cauchy} then follows from the fact that for all ${\bf z}_0\in{\cal B}_0$ with $|{\bf z}_0|\leq \epsilon_0$ small enough and $c>1$, the r.h.s. of (\ref{eqn:defN}) defines a contraction map from some (small) ball of radius $c\epsilon_0$ in ${\cal B}$ onto itself. The general rule for proving the various estimates involved in (\ref{eqn:contract}) is to split the integration interval into two parts, with $s\in{\cal I}_1\equiv[0,\frac{t}{2}]$ and $s\in{\cal I}_2\equiv[\frac{t}{2},t]$. In ${\cal I}_1$, we place as many derivatives (or equivalently, factors of $k$) as possible on the semigroup ${\rm e}^{\L(t-s)}$, while on ${\cal I}_2$, (most of) these derivatives need to act on $h$, since the integral would otherwise be divergent at $s=t$. Additional difficulties arise from the fact that ${\rm e}^{\L t}$ has very little smoothing properties (slow or no decay in $k$ as $|k|\to\infty$), so that in some cases we need to consider separately the large-$k$ part and the small-$k$ part of the $\L^2$ norm, say. This is done through the use of $\P$, defined as the Fourier multiplier with the characteristic function on $[-1,1]$. We decompose the proof of $\|{\cal N}[{\bf z}]\|\leq C\|{\bf z}\|^2$ into that of \begin{equs} \|{\cal N}[{\bf z}]\|&\leq \|\widehat{{\cal N}[{\bf z}]}\|_{\infty,0} +\|{\cal N}[{\bf z}]\|_{2,\frac{1}{4}} +\|\P {\rm D}{\cal N}[{\bf z}]\|_{2,\frac{3}{4}} +\|(1-\P){\rm D}{\cal N}[{\bf z}]\|_{2,\frac{3}{4}}\\ &\phantom{=~} +\|(1-\P){\rm D}^2{\cal N}[{\bf z}]_2\|_{2,\frac{5}{4}^{\star}} +\|(1-{\mathbb Q})\P{\rm D}^2{\cal N}[{\bf z}]_2\|_{2,\frac{5}{4}^{\star}} +\|{\mathbb Q}\P{\rm D}^2{\cal N}[{\bf z}]_2\|_{2,\frac{5}{4}^{\star}} \\&\leq C \|{\bf z}\|^2 ~, \label{eqn:split} \end{equs} where ${\mathbb Q}$ is the characteristic function for $t\geq1$ and ${\cal N}[{\bf z}]_2$ denotes the second component of ${\cal N}[{\bf z}]$. We now consider $\|\P{\rm D}{\cal N}[{\bf z}]\|_{2,\frac{3}{4}}$ as an example of the way we prove the above estimates. 
We have \begin{equs} \|\P{\rm D}{\cal N}[{\bf z}](\cdot,t)\|_2 &\leq \|h({\bf z})\|_{2,\frac{3}{4}} \my{(}{13}\sup_{|k|\leq1,\tau\geq0} \hspace{-3mm} |k|\sqrt{\tau}{\rm e}^{-\frac{k^2\tau}{4}} \my{)}{13} \int_0^{\frac{t}{2}} \hspace{-3mm}{\rm d}s~ \frac{(1+s)^{-\frac{3}{4}}}{t-s} \\&\phantom{=~} + \|{\rm D} h({\bf z})\|_{2,\frac{5}{4}} \my{(}{13} \sup_{|k|\leq1,\tau\geq0} \hspace{-3mm} {\rm e}^{-\frac{k^2\tau}{4}} \my{)}{13} \int_{\frac{t}{2}}^t \hspace{-2mm}{\rm d}s~ \frac{(1+s)^{-\frac{5}{4}}}{\sqrt{t-s}} \\&\leq C\|{\bf z}\|^2 \myl{12} \frac{2}{t} \int_0^{\frac{t}{2}} \hspace{-3mm} \frac{{\rm d}s~}{(1+s)^{\frac{3}{4}}}+ \frac{1}{(1+\frac{t}{2})^{\frac{5}{4}}} \int_{\frac{t}{2}}^t \hspace{-2mm} \frac{{\rm d}s~}{\sqrt{t-s}} \myr{12} \leq C\|{\bf z}\|^2 (1+t)^{-\frac{3}{4}} \label{eqn:firstB} \end{equs} for all $t\geq0$, which shows that $\|\P{\rm D}{\cal N}[{\bf z}]\|_{2,\frac{3}{4}}\leq C\|{\bf z}\|^2$. All other estimates in (\ref{eqn:split}) can be done similarly; we postpone their proof to Section \ref{sect:cauchyproof} below. Finally, we note that the Lipschitz-type estimate in (\ref{eqn:contract}) can be obtained in the same manner, {\em mutatis mutandis}, due to the similarity between (\ref{eqn:DHone}) and (\ref{eqn:DHtwo}) with (\ref{eqn:H}); we omit the details. \end{proof} We can now turn to the question of the asymptotic structure of the solutions of (\ref{eqn:p-system}) provided by Theorem \ref{thm:cauchy}. Note that already if we wanted to prove that ${\rm e}^{\L t}{\bf z}_0$ satisfies `Gaussian asymptotics' we would need more localization properties on ${\bf z}_0$ than those provided by the ${\cal B}_0$-topology. It will turn out to be sufficient to require ${\bf z}_0\in{\cal B}_0\cap \L^2({ {\mathbb R}},x^m\d x)$ for (some) $m\geq2$. We now prove that this requirement is {\em forward invariant} under the flow of (\ref{eqn:p-system}): \begin{proposition}\label{prop:weightednorm} Let $\rho_m(x)=|x|^m$ and define \begin{equs} {\cal D}_m=\my{\{}{12}{\bf z}_0\in{\cal B}_0\mbox{ such that }|{\bf z}_0|+\|\rho_m{\bf z}_0\|_2<\infty \my{\}}{12}~. \end{equs} If ${\bf z}_0\in{\cal D}_m$ and $|{\bf z}_0|\leq\epsilon_0$ such that Theorem \ref{thm:cauchy} holds, then the corresponding solution ${\bf z}(t)$ of (\ref{eqn:p-system}) satisfies ${\bf z}(t)\in{\cal D}_m$ for all finite $t>0$. Furthermore, there holds $|{\bf z}(t)|\leq (1+\delta)\epsilon_0$ for some (small) constant $\delta$. \end{proposition} \begin{proof} Note first that by Theorem \ref{thm:cauchy}, $|{\bf z}(t)|\leq\|{\bf z}\|\leq(1+\delta)\epsilon_0$ since ${\bf z}_0\in{\cal B}_0$ and $|{\bf z}_0|\leq\epsilon_0$. Then, fix $m\in{\bf N}$, $m\geq1$. The proof of Theorem \ref{thm:cauchy} can easily be adapted to show that (\ref{eqn:p-system}) is {\em locally} (in time) well posed in ${\cal D}_m$. Global existence then follows from the fact that the quantity \begin{equs} N(t)=\frac{1}{2} \|\rho_m{\bf z}(\cdot,t)\|^2 =\frac{1}{2}\int_{-\infty}^{\infty} \hspace{-4mm}{\rm d}x~ |x|^m(a(x,t)^2+b(x,t)^2) \end{equs} grows {\em at most exponentially} as $t\to\infty$. 
Namely, we have \begin{equs} \partial_t N(t)&= \int_{-\infty}^{\infty} \hspace{-4mm}{\rm d}x~ |x|^m\myl{12} \partial_x(ab)+ 2b\partial_x^2b +b\partial_x\myl{10}f(a,b)\partial_xb+g(a,b)\myr{10} \myr{12}\\ &=- \int_{-\infty}^{\infty} \hspace{-4mm}{\rm d}x~ m|x|^{m-1}{\rm sign}(x) \myl{12} b(a+g(a,b))+(2+f(a,b))b\partial_xb \myr{12} \\&\phantom{=~} -\int_{-\infty}^{\infty} \hspace{-4mm}{\rm d}x~ |x|^m (\partial_xb)^2 \myl{10} 2+f(a,b) \myr{10} \\&\leq \int_{-\infty}^{\infty} \hspace{-4mm}{\rm d}x~ \myl{10} (m-1)^{m-1}+|x|^m \myr{10} \my{|}{12} b(a+g(a,b))+(2+f(a,b))b\partial_xb \my{|}{12} \\&\phantom{=~} -\int_{-\infty}^{\infty} \hspace{-4mm}{\rm d}x~ |x|^m (\partial_xb)^2 \myl{10} 2+f(a,b) \myr{10} \\&\leq \int_{-\infty}^{\infty} \hspace{-4mm}{\rm d}x~ \myl{10} (m-1)^{m-1}+|x|^m \myr{10} \myl{12} |b(a+g(a,b))|+ 2^{-1}|2+f(a,b)|b^2 \myr{12} \\&\leq C_1(m,\epsilon_0)+C_2(\epsilon_0)N(t)~, \end{equs} due to the estimates $\|f(a,b)\|_{\infty}\leq C\epsilon_0\ll2$ and $\|\frac{g(a,b)}{\sqrt{a^2+b^2}}\|_{\infty}\leq C\epsilon_0$. \end{proof} \section{Asymptotic structure - Proof of Theorem \ref{thm:maintheorem}} \label{sec:asym} We can now state our main result on the asymptotic structure of solutions of (\ref{eqn:p-system}) in a definitive manner: \begin{theorem}\label{thm:asymptoticsrestated} Let ${\cal D}_m$ be as in Proposition \ref{prop:weightednorm} with $m\geq2$, let ${\bf z}_0\in{\cal D}_m$ with $|{\bf z}_0|\leq\epsilon_0$ such that Theorem \ref{thm:cauchy} holds and define \begin{equs} u(x,t) &= a(x-t,t) + b(x-t,t) ~~~\mbox{ and }~~~ v(x,t) = a(x+t,t) - b(x+t,t) \end{equs} for the corresponding solution ${\bf z}(t)=(a(t),b(t))$ of (\ref{eqn:p-system}). Then there exist functions $\{ g_{n}^{\pm} \}_{n=0}^{N}$ (independent of ${\bf z}_0$) and constants $C_N$, $\{d_{n}^{\pm}\}_{n=1}^N$ determined by ${\bf z}_0$ such that \begin{equa}[2]\label{eqn:asymptoticexpansionrap} u(x,t) &= \frac{1}{\sqrt{1+t}} g_0^{+}({\textstyle\frac{x}{\sqrt{1+t}}}) + \sum_{n=1}^N \frac{1}{(1+t)^{1-\frac{1}{2^{n+1}}}} d_n^{+} g_{n}^{+}({\textstyle\frac{x}{\sqrt{1+t}}}) + R_u^N(x,t) \\ v(x,t) &= \frac{1}{\sqrt{1+t}} g_0^{-}({\textstyle\frac{x}{\sqrt{1+t}}}) + \sum_{n=1}^N \frac{1}{(1+t)^{1-\frac{1}{2^{n+1}}}} d_n^{-} g_{n}^{-} ({\textstyle\frac{x}{\sqrt{1+t}}}) + R_v^N(x,t)~, \end{equa} where the remainders $R_u^N$ and $R_v^N$ satisfy the estimates \begin{equa}[2]\label{eqn:remainderestimatesrap} \sup_{t\geq0}(1+t)^{\frac{3}{4}-\frac{1}{2^{N+2}}} \|R_{\{u,v\}}^N(\cdot,t)\|_{\L^2({ {\mathbb R}})}&\leq C_N\\ \sup_{t\geq0} (1+t)^{\frac{5}{4}-\frac{1}{2^{N+2}}} \|\partial_x R_{\{u,v\}}^N(\cdot,t)\|_{\L^2({ {\mathbb R}})} &\le C_N~. \end{equa} Furthermore, for $n\geq1$, the functions $g_n^{\pm}$ satisfy $g_n^{\pm}(z)\sim |z|^{-1+2^{-n-1}}$ as $z\to\pm\infty$. 
\end{theorem} \begin{remark} As will be apparent from the proof of Theorem \ref{thm:asymptoticsrestated}, any hyperbolic-parabolic system of the form \begin{equs} \partial_t{\bf z}+f({\bf z})_x=(B({\bf z}){\bf z}_x)_x \end{equs} with admissible nonlinearities in the sense of (the natural extension of) Definition \ref{def:admissible} gives rise to solutions having the same asymptotic structure as those of the p-system as long as the following two conditions are satisfied: \begin{enumerate} \item\label{item:inter} There exist two matrices ${\cal S}$ and ${\rm A}$ with ${\cal S}$ non-singular and ${\rm A}$ diagonal having eigenvalues of multiplicity $1$ for which ${\cal S}{\rm e}^{\L t}\approx{\rm e}^{\L_0 t}{\cal S}$ in the sense of Lemma \ref{lem:closetoheat} (see Section \ref{sect:remainderestimates}), where $\L_0=\partial_x^2+{\rm A}\partial_x$ and $\L=B(0)\partial_x^2-f'(0)\partial_x$. \item\label{item:cauchy} The Cauchy problem with initial condition in the corresponding functional space (the natural extension of ${\cal B}_0$ to the problem considered) is well posed and satisfies the analogues of Theorem \ref{thm:cauchy} and Proposition \ref{prop:weightednorm}. \end{enumerate} \end{remark} We now briefly comment on the above assumptions for specific systems such as the `full gas dynamics' and the MHD system. The intertwining property of item \ref{item:inter} above is proved in \cite{liu:1997} for quite general systems, though not in exactly the same topology as that used in Lemma \ref{lem:closetoheat}. As for item \ref{item:cauchy}, local well-posedness for initial data in ${\cal B}_0$ is certainly not an issue, the only difficulty is to prove that the various norms of Definition \ref{def:functionspaces} exhibit `parabolic-like' decay as $t\to\infty$. This is very likely to hold, particularly for systems satisfying item \ref{item:inter}. While the variables $(a,b)$ are adapted to the study of the Cauchy problem because of the inherent asymmetry of spatial regularity in (\ref{eqn:p-system}), they are not the best framework for studying the asymptotic structure of the solutions to (\ref{eqn:p-system}). It turns out to be more convenient to change variables to quantities that move along the characteristics. We thus define \begin{equs} \vector{u(x,t)}{v(x,t)} \equiv \myl{14} \begin{matrix} {\cal T}^{-1} & 0\\ 0 & {\cal T} \end{matrix} \myr{14} \myl{14} \begin{matrix} 1 & 1\\ 1 & -1 \end{matrix} \myr{14} \vector{a(x,t)}{b(x,t)} \equiv \myl{14} \begin{matrix} {\cal T}^{-1} & 0\\ 0 & {\cal T} \end{matrix} \myr{14}\S{\bf z}(x,t) ~, \end{equs} where ${\cal T}$ is the translation operator defined by \begin{equs} ({\cal T} f)(x,t)=f(x+t,t)~~\mbox{ or equivalently by }~~ \widehat{{\cal T} f}(k,t)={\rm e}^{ikt}\hat{f}(k,t)~. \label{eqn:translationdef} \end{equs} Note in passing that \begin{equs} a(x,t)=\frac{1}{2}\myl{14} u(x+t,t)+v(x-t,t) \myr{14}~~\mbox{ and }~~ b(x,t)=\frac{1}{2}\myl{14} u(x+t,t)-v(x-t,t) \myr{14}~. 
\end{equs} We then use the fact that ${\bf z}$ satisfies the integral equation \begin{equs} \S{\bf z}(t)&=\S{\rm e}^{\L t}{\bf z}_0+ \int_0^t \hspace{-2mm} {\rm d}s~\S{\rm e}^{\L(t-s)} \vector{0}{\partial_xh({\bf z}(s))}\\ &={\rm e}^{\L_0 t}\S{\bf z}_0+ \int_0^t \hspace{-2mm} {\rm d}s~ {\rm e}^{\L_0(t-s)}\S~ \vector{0}{\partial_xg_0({\bf z}(s))} +{\cal R}[{\bf z}](t)~, \label{eqn:representation} \end{equs} where \begin{equs} {\cal R}[{\bf z}](t)&= \myl{10} \S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S \myr{10}{\bf z}_0 + \int_0^t \hspace{-2mm} {\rm d}s~ \my{[}{14} \S{\rm e}^{\L(t-s)} \vector{0}{\partial_xh({\bf z}(s))} - {\rm e}^{\L_0(t-s)}\S~ \vector{0}{\partial_xg_0({\bf z}(s))} \my{]}{14}~. \end{equs} To justify the notation, which suggests that ${\cal R}$ is a remainder term, we will prove in Section \ref{sect:remainderestimates} that ${\cal R}[{\bf z}] =({\cal R}_u[{\bf z}],{\cal R}_v[{\bf z}])$ satisfies the improved decay rates \begin{equs} \|{\cal R}_{\{u,v\}}[{\bf z}]\|_{2,\frac{3}{4}^{\star}}+ \|{\rm D}{\cal R}_{\{u,v\}}[{\bf z}]\|_{2,\frac{5}{4}^{\star}}\leq C\epsilon_0~, \label{eqn:onRannounce} \end{equs} because of the intertwining relation $\S{\rm e}^{\L t}\approx{\rm e}^{\L_0 t}\S$ (see Lemma \ref{lem:closetoheat}) and the fact that $h({\bf z})=g_0({\bf z})+h.o.t.$. Recalling that $g_0$ is quadratic (cf Definition \ref{def:admissible}), we will write \begin{equs} g_0({\bf z})&=c_{+}(a+b)^2-c_{-}(a-b)^2+c_{3}(a+b)(a-b)\\ &=c_{+}({\cal T} u)^2-c_{-}({\cal T}^{-1}v)^2 +c_{3}({\cal T} u)({\cal T}^{-1}v) \end{equs} for ${\bf z}=(a,b)$. We thus find from (\ref{eqn:representation}) that $u$ and $v$ satisfy \begin{equs} u(t)&={\rm e}^{\partial_x^2t}(a_0+b_0) +\partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)} \myl{12} c_{+}u(s)^2 -c_{-}{\cal T}^{-2}v(s)^2 \myr{12} \\ &\phantom{=~} +{\cal T}^{-1}{\cal R}_{u}[{\bf z}](t) +c_{3}\partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)} {\cal T}^{-1}\myl{10} ({\cal T} u(s))({\cal T}^{-1}v(s)) \myr{10} \label{eqn:firstintegu} ~,\\ v(t)&={\rm e}^{\partial_x^2t}(a_0 - b_0) +\partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)} \myl{12} c_{-}v(s)^2 -c_{+}{\cal T}^{2}u(s)^2 \myr{12} \\ &\phantom{=~} +{\cal T}{\cal R}_{v}[{\bf z}](t) -c_{3}\partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)} {\cal T}\myl{10} ({\cal T} u(s))({\cal T}^{-1}v(s)) \myr{10} \label{eqn:firstintegv} ~. \end{equs} Note that, but for the presence of the second lines in (\ref{eqn:firstintegu}) and (\ref{eqn:firstintegv}), these expressions are precisely Duhamel's formula for the solution of the model problem (\ref{eqn:modelproblem}), written in terms of $u={\cal T}^{-1} u_{+}$ and $v={\cal T} u_{-}$. The next step is to write \begin{equs} u=u_{\star}+R_u^N=u_0+u_1+R_u^N~~\mbox{ and }~~ v=v_{\star}+R_v^N=v_0+v_1+R_v^N~, \end{equs} considering $R_u^N$ and $R_v^N$ as new `unknowns' and \begin{equa}[2] u_0(x,t)&=\frac{1}{\sqrt{1+t}} g_0^{+}({\textstyle\frac{x}{\sqrt{1+t}}})~,& u_1(x,t)&= \sum_{n=1}^{N} \frac{1}{(1+t)^{1-\frac{1}{2^{n+1}}}} d_n^{+}g^{+}_{n}({\textstyle\frac{x}{\sqrt{1+t}}}) \\ v_0(x,t)&=\frac{1}{\sqrt{1+t}} g_0^{-}({\textstyle\frac{x}{\sqrt{1+t}}})~&~\mbox{ and }~~ v_1(x,t)&= \sum_{n=1}^{N} \frac{1}{(1+t)^{1-\frac{1}{2^{n+1}}}} d_n^{-}g^{-}_{n}({\textstyle\frac{x}{\sqrt{1+t}}}) \label{eqn:defu0u1} \end{equa} for some coefficients $\{d_n^{\pm}\}_{n=1}^{N}$ and functions $\{g_{n}^{\pm}\}_{n=0}^{N}$ to be determined later. 
We now use \begin{equs} u^2&=(u-u_{\star})(u+u_{\star})+u_{\star}^2= R_u^N(u+u_{\star})+u_1^2+2u_0u_1+u_0^2~,\\ v^2&=(v-v_{\star})(v+v_{\star})+v_{\star}^2= R_v^N(v+v_{\star})+v_1^2+2v_0v_1+v_0^2~,\\ ({\cal T} u)({\cal T}^{-1}v)&= ({\cal T} R_u^N){\cal T}^{-1}\myl{12}\frac{v+v_{\star}}{2}\myr{12} +({\cal T}^{-1}R_v^N){\cal T}\myl{12}\frac{u+u_{\star}}{2}\myr{12} +({\cal T} u_{\star})({\cal T}^{-1}v_{\star})~. \end{equs} Since \begin{equs}[2] g_0^{+}(x)&=u_0(x,0)~,&~~~ u_1(x,0)&=\sum_{n=1}^{N}d^{+}_ng_{n}^{+}(x)~,\\ g_0^{-}(x)&=v_0(x,0)~&~\mbox{ and }~~ v_1(x,0)&=\sum_{n=1}^{N}d^{-}_ng_{n}^{-}(x)~, \end{equs} we find that $R_u^N$ and $R_v^N$ satisfy \begin{equs} R_u^N(t)&= {\rm e}^{\partial_x^2t}(a_0+b_0-g_0^{+}) \\&\phantom{= } +\my{[}{14} {\rm e}^{\partial_x^2t}u_0(0) + c_{+}\partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)} u_0(s)^2 \my{]}{14}-u_0(t) \\&\phantom{= } +\my{[}{14} {\rm e}^{\partial_x^2 t}u_1(0) + 2c_{+}\partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)} u_0(s)u_1(s) \my{]}{14}-u_1(t) \\&\phantom{= } - c_{-}\my{[}{14} \partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)} {\cal T}^{-2} \myl{12} (v_0(s)^2+2v_0(s)v_1(s)) \myr{12} \my{]}{14} - \sum_{n=1}^{N} {\rm e}^{\partial_x^2 t} d^{+}_ng_{n}^{+} \\ &\phantom{= } +\widetilde{\cal R}_{u}[{\bf z},{\bf R}^{N}](t) +{\cal T}^{-1}{\cal R}_{u}[{\bf z}](t)~, \label{eqn:tractable_u}\\[4mm] R_v^N(t)&= {\rm e}^{\partial_x^2t}(a_0-b_0-g_0^{-}) \\&\phantom{= } +\my{[}{14} {\rm e}^{\partial_x^2t}v_0(0) + c_{-}\partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)} v_0(s)^2 \my{]}{14}-v_0(t) \\&\phantom{= } +\my{[}{14} {\rm e}^{\partial_x^2 t}v_1(0) + 2c_{-}\partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)} v_0(s)v_1(s) \my{]}{14}-v_1(t) \\&\phantom{= } -c_{+}\my{[}{14} \partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)} {\cal T}^{2} \myl{12} (u_0(s)^2+2u_0(s)u_1(s)) \myr{12} \my{]}{14} - \sum_{n=1}^{N} {\rm e}^{\partial_x^2 t} d^{-}_ng_{n}^{-} \\ &\phantom{= } +\widetilde{\cal R}_{v}[{\bf z},{\bf R}^{N}](t) +{\cal T}{\cal R}_{v}[{\bf z}](t)~, \label{eqn:tractable_v} \end{equs} where \begin{equs} \widetilde{\cal R}_{u}[{\bf z},{\bf R}^{N}](t)&= c_{+}{\rm E}_0 [h_{1,u}+h_{3,u}](t) -c_{-}{\rm E}_{-2}[h_{1,v}+h_{3,v}](t) +c_{3}{\rm E}_{-1}[h_{2} +h_{4} ](t) ~, \\ \widetilde{\cal R}_{v}[{\bf z},{\bf R}^{N}](t)&= c_{-}{\rm E}_0 [h_{1,v}+h_{3,v}](t) -c_{+}{\rm E}_{ 2}[h_{1,u}+h_{3,u}](t) -c_{3}{\rm E}_{ 1}[h_{2} +h_{4} ](t)~, \end{equs} with ${\bf R}^N=(R_u^N,R_v^N)$, \begin{equs}[2] {\rm E}_{\sigma}[h](t)&= \partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)}~ {\cal T}^{\sigma}h(s)~&~\mbox{and}\\ h_{1,u}&=R_u^N(u+u_{\star})~,~~~h_{3,u}=u_1^2~,&~~~ h_2&=({\cal T} R_u^N){\cal T}^{-1}\myl{12}\frac{v+v_{\star}}{2}\myr{12} +({\cal T}^{-1}R_v^N){\cal T}\myl{12}\frac{u+u_{\star}}{2}\myr{12}\\ h_{1,v}&=R_v^N(v+v_{\star})~,~~~h_{3,v}=v_1^2~,&~~~ h_4&=({\cal T} u_{\star})({\cal T}^{-1}v_{\star})~. \end{equs} Note that we can write (\ref{eqn:tractable_u}) and (\ref{eqn:tractable_v}) as ${\bf R}^N={\cal F}[{\bf z},{\bf R}^N]$. If we now consider ${\bf z}$ fixed, we can interpret ${\bf R}^N={\cal F}[{\bf z},{\bf R}^N]$ as an equation for ${\bf R}^N$ which can be solved via a contraction mapping argument. 
Namely, we will prove that if $\|{\bf z}\|\leq C\epsilon_0$, ${\bf R}^N\mapsto{\cal F}[{\bf z},{\bf R}^N]$ defines a contraction map inside the ball \begin{equs} \|R_u^N\|_{2,\frac{3}{4}-\epsilon}+ \|{\rm D} R_u^N\|_{2,\frac{5}{4}-\epsilon}+ \|R_v^N\|_{2,\frac{3}{4}-\epsilon}+ \|{\rm D} R_v^N\|_{2,\frac{5}{4}-\epsilon} \leq C \label{eqn:restestim} \end{equs} for $\epsilon=2^{-N-2}$, provided $\{g^{\pm}_{n}\}_{n=0}^{N}$ and $\{d^{\pm}_n\}_{n=1}^{N}$ are appropriately chosen. Basically, we will choose $u_0$, $v_0$, $u_1$ and $v_1$ in such a way that the second and third lines of (\ref{eqn:tractable_u}) and (\ref{eqn:tractable_v}) vanish. Note that if, for instance, we set the second, respectively third lines of (\ref{eqn:tractable_u}) and (\ref{eqn:tractable_v}) equal to zero, the resulting equalities are nothing but Duhamel's formulae for Burger's equations for $u_0$ and $v_0$, respectively for linearized Burger's equations for $u_1$ and $v_1$. Properties of solutions to these types of equations are studied in detail in Section \ref{sect:burgers} below. Once $u_0$, $v_0$, $u_1$ and $v_1$ are fixed, the time convolutions in the fourth lines of (\ref{eqn:tractable_u}) and (\ref{eqn:tractable_v}) can then be viewed as the solution of inhomogeneous heat equations with very specific inhomogeneous terms. Properties of solutions to this type of equations are studied in detail in Section \ref{sect:inhomogeneousheat} below. Assuming all results of Section \ref{sect:burgers} and \ref{sect:inhomogeneousheat}, we now explain how to proceed to prove that ${\cal F}[{\bf z},{\bf R}^N]$ defines a contraction map. Obviously, the requirement on $\{g^{\pm}_{n}\}_{n=0}^{N}$ and $\{d^{\pm}_n\}_{n=1}^{N}$ is that the first four lines in (\ref{eqn:tractable_u}) and (\ref{eqn:tractable_v}) satisfy (\ref{eqn:restestim}). This is achieved in the following way: \begin{enumerate} \item \label{item:mass} The first line of (\ref{eqn:tractable_u}), respectively of (\ref{eqn:tractable_v}) satisfies (\ref{eqn:restestim}) for any $g_0^{\pm}$ such that the total mass of $g_0^{\pm}$ is equal to that of $a_0\pm b_0$, provided $a_0\pm b_0$ and $g_0^{\pm}$ satisfy $\|\,x^2(a_0\pm b_0)\|_2<\infty$ and $\|\,x^2g_0^{\pm}\|_2<\infty$. This fixes the total mass of $g_0^{\pm}$. Note also that we need the estimate $\|\,x^2(a_0\pm b_0)\|_2<\infty$. There is no smallness assumption here, which is to be expected since generically $\|\,x^2(a(\cdot,t)\pm b(\cdot,t))\|_2$ will grow as $t\to\infty$. Note on the other hand that Proposition \ref{prop:weightednorm} shows that $\|\,x^2(a(\cdot,t)\pm b(\cdot,t))\|_2$ remains finite for all $t<\infty$, so requiring $\|\,x^2(a_0\pm b_0)\|_2<\infty$ is acceptable. \item We can set the second lines in (\ref{eqn:tractable_u}) and (\ref{eqn:tractable_v}) equal to zero by picking for $u_0$ and $v_0$ any solution of Burger's equations \begin{equs} \partial_t u_0=\partial_x^2u_0+c_{+}\partial_x(u_0)^2~~\mbox{ and }~~ \partial_t v_0=\partial_x^2v_0+c_{-}\partial_x(v_0)^2 \end{equs} (or of the corresponding heat equations if either $c_{+}$ or $c_{-}$ happen to be zero). In Proposition \ref{prop:burgers}, we will prove that there exist unique functions $u_0$ and $v_0$ of the form given in (\ref{eqn:defu0u1}) that satisfy the conditions of item \ref{item:mass} above (total mass and decay properties). This uniquely determines $u_0$ and $v_0$. 
\item We can also set the third lines in (\ref{eqn:tractable_u}) and (\ref{eqn:tractable_v}) equal to zero, by picking any solutions $u_1$ and $v_1$ of linearized Burger's equations \begin{equs} \partial_t u_1=\partial_x^2u_1+2c_{+}\partial_x(u_0u_1) ~~\mbox{ and }~~ \partial_t v_1=\partial_x^2v_1+2c_{-}\partial_x(v_0v_1)~. \label{eqn:unpmvnpm} \end{equs} In Proposition \ref{prop:burgers}, we will also prove that there is a choice of functions $\{g^{\pm}_{n}\}_{n=1}^{N}$ such that $u_1$ and $v_1$ in (\ref{eqn:defu0u1}) satisfy (\ref{eqn:unpmvnpm}) for any choice of the coefficients $\{d^{\pm}_{n}\}_{n=1}^{N}$. Furthermore, in Proposition \ref{prop:burgers}, we will prove that the choice of functions can be made in such a way that $g^{\pm}_{n}(x)$ have Gaussian tails as $x\to\mp\infty$ and algebraic tails as $x\to\pm\infty$. This actually completely determines $g^{\pm}_{n}(x)$ up to multiplicative constants (this last indeterminacy will be removed when the coefficients $\{d^{\pm}_{n}\}_{n=1}^{N}$ are fixed). \item \label{item:free} We then further decompose the terms involving $g_{n}^{\pm}$ in the fourth lines in (\ref{eqn:tractable_u}) and (\ref{eqn:tractable_v}) as $g_{n}^{\pm}(x)=f_n(\mp x)+R^{\pm}_{n}(x)$. The definition and properties of $f_n(x)$ are given in Lemma \ref{lem:alittlelemma}. In particular, in Proposition \ref{prop:burgers}, we will prove that $R^{\pm}_{n}(x)$ have zero total mass and Gaussian tails as $|x|\to\infty$, which implies that ${\rm e}^{\partial_x^2 t}R^{\pm}_{n}$ also satisfy (\ref{eqn:restestim}). \item\label{item:last} Finally, in Section \ref{sect:inhomogeneousheat}, we will prove that the time convolution part of the fourth lines in (\ref{eqn:tractable_u}) and (\ref{eqn:tractable_v}) can be split into linear combinations of ${\rm e}^{\partial_x^2 t}f_{n}(\mp x)$ with $n=1\ldots N+1$ plus a remainder that satisfies (\ref{eqn:restestim}). The coefficients $\{d^{\pm}_n\}_{n=1}^{N}$ can then be set recursively by requiring that all the terms with $n=1\ldots N$ coming from the time convolution are canceled by those coming from item \ref{item:free} above. This can always be done because the coefficient of ${\rm e}^{\partial_x^2 t}f_{m}(\mp x)$ in the time convolution part of the fourth lines in (\ref{eqn:tractable_u}) and (\ref{eqn:tractable_v}) depends only on $g_0^{\pm}$ if $m=1$ and on $d^{\pm}_{m-1}$ if $m>1$. The only term that cannot be set to zero is the last term in the linear combination (the one with $n=N+1$), which is the one that `drives' the equations and fixes $\epsilon=2^{-N-2}$. \end{enumerate} The procedure outlined in \ref{item:mass}-\ref{item:last} takes care of the first four lines in (\ref{eqn:tractable_u}) and (\ref{eqn:tractable_v}). We will then prove in Section \ref{sect:remainderestimates} that the terms ${\cal R}_{\{u,v\}}[{\bf z}]$ satisfy (\ref{eqn:restestim}) and that \begin{equs} \label{eqn:onRtilde} \sum_{\alpha=0}^1 \|{\rm D}^{\alpha} \widetilde{\cal R}_{\{u,v\}} [{\bf z},{\bf R}^{N}]\|_{2,\frac{3}{4}+\frac{\alpha}{2}-\epsilon} &\leq C\epsilon_0 \sum_{\alpha=0}^1 \|{\rm D}^{\alpha}{\bf R}^{N}\|_{2,\frac{3}{4}+\frac{\alpha}{2}-\epsilon} +C~,\\ \label{eqn:onRtildeLip} \sum_{\alpha=0}^1 \|{\rm D}^{\alpha} (\widetilde{\cal R}_{\{u,v\}}[{\bf z},{\bf R}^N_1] -\widetilde{\cal R}_{\{u,v\}}[{\bf z},{\bf R}^N_2] )\|_{2,\frac{3}{4}+\frac{\alpha}{2}-\epsilon}&\leq C\epsilon_0 \sum_{\alpha=0}^1 \|{\rm D}^{\alpha}({\bf R}^{N}_1-{\bf R}^{N}_2)\|_{2,\frac{3}{4} +\frac{\alpha}{2}-\epsilon}~. 
\end{equs} This finally proves that ${\cal F}[{\bf z},{\bf R}^N]$ defines a contraction map and that the solution of ${\bf R}^N={\cal F}[{\bf z},{\bf R}^N]$ satisfies (\ref{eqn:restestim}), which completes the proof of Theorems \ref{thm:maintheorem} and \ref{thm:asymptoticsrestated}. \section{Burger's type equations}\label{sect:burgers} In this section, we consider particular solutions of Burger's type equations \begin{equs} \partial_tu_0&=\partial_x^2u_0+\gamma\partial_xu_0^2 \label{eqn:burgers} \\ \partial_tu_n^{\pm}&=\partial_x^2u_n^{\pm}+2\gamma\partial_x(u_0u_n^{\pm}) \label{eqn:linburgers} \end{equs} of the form \begin{equs} {\textstyle u_0(x,t)=\frac{1}{\sqrt{1+t}}g_0(\frac{x}{\sqrt{1+t}})~~~~~~ \mbox{ and }~~~~~~ u_n^{\pm}(x,t)=\frac{1}{(1+t)^{1-\frac{1}{2^{n+1}}}} g_n^{\pm}(\frac{x}{\sqrt{1+t}})}~. \label{eqn:scalingform} \end{equs} We will show that for fixed ${\rm M}(u_0)=\intR\hspace{-3mm}{\rm d}x~u_0(x,t)=\intR\hspace{-3mm}{\rm d}x~g_0(x)$ small enough, there is a unique choice of $g_0$ and $g_n^{\pm}$ such that $g_n^{\pm}(x)=f_n(\mp x)+R_n^{\pm}(x)$, where \begin{equs} f_{n}(z)= \int_z^{\infty} \hspace{-3mm}{\rm d}\xi~ \frac{\xi{\rm e}^{-\frac{\xi^2}{4}}}{(\xi-z)^{1-\frac{1}{2^{n}}}} \label{eqn:deffn} \end{equs} and $R_n^{\pm}$ has zero mean and Gaussian tails as $|x|\to\infty$. In particular, $g_n^{\pm}(x)$ decays algebraically as $x\to\pm\infty$, as is apparent from (\ref{eqn:deffn}). Before proceeding to our study of (\ref{eqn:burgers}) and (\ref{eqn:linburgers}), we prove key properties of the functions $f_n$. \begin{lemma}\label{lem:alittlelemma} Fix $1\leq n<\infty$. The function $f_n$ is the unique solution of \begin{equa}\label{eqn:foneeq} \partial_z^2f_{n}(z)+{\textstyle\frac{1}{2}}z\partial_zf_{n}(z) +({\textstyle1-\frac{1}{2^{n+1}}})f_{n}(z)=0~,~~~~\mbox{with}\\ f_{n}(0)=2^{\frac{1}{2^n}}\Gamma({\textstyle\frac{1+2^{-n}}{2}})~~\mbox{ and }~~ \lim_{z\to\infty}z^{-1+\frac{1}{2^n}}{\rm e}^{\frac{z^2}{4}} f_{n}(z)&<\infty~. \end{equa} It satisfies $\intR\hspace{-3mm}{\rm d}z~f_n(z)=0$ and there exists a constant $C(n)$ such that \begin{equa} \sup_{z\in{\bf R}} \sum_{m=0}^{2} \rho_{\frac{1}{2^n} -m,1+m-\frac{1}{2^n}}(z) |\partial_z^m\myl{10}z f_n(z)+2\partial_{z}f_n(z)\myr{10}|&\leq C(n)\\ \sup_{z\in{\bf R}} \sum_{m=0}^{3} \rho_{\frac{1}{2^n}-1-m,2+m-\frac{1}{2^n}}(z) |\partial_z^mf_n(z)| &\leq C(n)~, \label{eqn:fnestimates} \end{equa} where \begin{equs} \rho_{p,q}(z)&=\my{\{}{16} \begin{array}{ll} (1+z^2)^{\frac{p}{2}}{\rm e}^{\frac{z^2}{4}}&\mbox{ if }z\geq0\\[2mm] (1+z^2)^{\frac{q}{2}}&\mbox{ if }z\leq0 \end{array} ~. \end{equs} \end{lemma} \begin{proof} We first note that $f_n$ can be written as \begin{equs} f_{n}(z)= \int_0^{\infty} \hspace{-3mm}{\rm d}\xi~ \frac{(\xi+z){\rm e}^{-\frac{(\xi+z)^2}{4}}}{\xi^{1-\frac{1}{2^{n}}}} = -2 \int_0^{\infty} \hspace{-3mm}{\rm d}\xi~ \xi^{\frac{1}{2^{n}}-1} \partial_{\xi}\myl{10}{\rm e}^{-\frac{(z+\xi)^2}{4}}\myr{10} ~. \label{eqn:otherfndef} \end{equs} This shows that $f_n$ solves (\ref{eqn:foneeq}) since, defining ${\cal L}f\equiv\partial_z^2f+\frac{1}{2}z\partial_zf+ (1-\frac{1}{2^{n+1}})f$, we find \begin{equs} {\cal L}f_n(z)= \int_{0}^{\infty} \hspace{-3mm}{\rm d}\xi~ \my{[}{12} \xi^{\frac{1}{2^n}}\partial_{\xi}^2\myl{10}{\rm e}^{-\frac{(z+\xi)^2}{4}}\myr{10} -{\textstyle\frac{1}{2^{n+1}}} (-2)\xi^{\frac{1}{2^n}-1} \partial_{\xi}\myl{10}{\rm e}^{-\frac{(z+\xi)^2}{4}}\myr{10} \my{]}{12} =0~. 
\end{equs}
Obviously, $f_n(z)$ is finite for all finite $z$, so we only need to prove that $f_n$ satisfies the correct decay properties as $|z|\to\infty$ so that (\ref{eqn:fnestimates}) holds. It is apparent from (\ref{eqn:deffn}) that $f_n$ decays like a (modified) Gaussian as $z\to\infty$ and algebraically as $z\to-\infty$. Furthermore, substituting $f(z)=C |z|^{p_1}$ and $f(z)=C |z|^{p_2}{\rm e}^{-\frac{z^2}{4}}$ into ${\cal L}f=0$ shows that the only decay rates compatible with ${\cal L}f=0$ are $p_1=-2+\frac{1}{2^n}$ and $p_2=1-\frac{1}{2^n}$.
We now complete the proof of the decay estimates (\ref{eqn:fnestimates}). Let $F_{n,m}(\xi,z)=\partial_z^m((\xi+z){\rm e}^{-\frac{(\xi+z)^2}{4}})$ and $G_{n,m}(\xi,z)=\partial_z^m(zF_{n,0}(\xi,z)+2\partial_zF_{n,0}(\xi,z))$. We first consider the case $z>0$ and note that $F_{n,m}$ and $G_{n,m}$ satisfy
\begin{equs}[2]
|F_{n,m}(\xi,z)|&\leq |F_{n,m}(0,z)|&~~~\mbox{ and }~~~ |G_{n,m}(\xi,z)|&\leq |G_{n,m}(0,z)|
\end{equs}
for all $\xi\geq0$ if $z\geq z_0$ for some $z_0$ large enough. We thus get, e.g.,
\begin{equs}
|f_{n}(z)|&= \my{|}{14} \int_0^{\infty} \hspace{-3mm}{\rm d}\xi~ F_{n,0}(\xi,z)\xi^{\frac{1}{2^n}-1} \my{|}{14}\leq |F_{n,0}(0,z)| \int_0^{z^{-1}} \hspace{-3mm}{\rm d}\xi~ \xi^{\frac{1}{2^n}-1}+ z^{1-\frac{1}{2^n}} \int_{z^{-1}}^{\infty} \hspace{-3mm}{\rm d}\xi~ |F_{n,0}(\xi,z)| \leq C z^{1-\frac{1}{2^n}}{\rm e}^{-\frac{z^2}{4}}~.
\end{equs}
The estimates on $|\partial_z^m(zf_n(z)+2\partial_{z}f_n(z))|$ and $|\partial_z^{1+m} f_n(z)|$ when $z>0$ and $m\geq1$ can be done in exactly the same way; hence we omit the details. We now consider the case $z<0$ and note that $F_{n,m}$ and $G_{n,m}$ satisfy
\begin{equs}[2]
|F_{n,m}(\xi,z)|&\leq |F_{n,m}(-{\textstyle \frac{z}{2}},z)|~~ &\mbox{ and }~~ |G_{n,m}(\xi,z)|&\leq |G_{n,m}(-{\textstyle \frac{z}{2}},z)|
\end{equs}
for all $0\leq\xi\leq-\frac{z}{2}$ if $z\leq-z_0$ for some $z_0$ large enough. We thus find (integrating by parts in the second integral below)
\begin{equs}
|f_{n}(z)|&= \my{|}{14} \int_0^{\infty} \hspace{-3mm}{\rm d}\xi~ F_{n,0}(\xi,z)\xi^{\frac{1}{2^n}-1} \my{|}{14}\leq |F_{n,0}(-{\textstyle \frac{z}{2}},z)| \int_0^{-\frac{z}{2}} \hspace{-3mm}{\rm d}\xi~ \xi^{\frac{1}{2^n}-1}+ \my{|}{14} \int_{-\frac{z}{2}}^{\infty} \hspace{-3mm}{\rm d}\xi~ F_{n,0}(\xi,z) \xi^{\frac{1}{2^n}-1}\my{|}{14} \\&
\leq C |z|^{\frac{1}{2^n}-1}{\rm e}^{-\frac{z^2}{16}} + 2\myl{10} 1- {\textstyle\frac{1}{2^n}} \myr{10} \int_{-\frac{z}{2}}^{\infty} \hspace{-3mm}{\rm d}\xi~ {\rm e}^{-\frac{(\xi+z)^2}{4}}\xi^{\frac{1}{2^n}-2} \leq C|z|^{\frac{1}{2^n}-2}~.
\end{equs}
Since the remaining estimates can again be done in exactly the same way, we omit the details. It only remains to show that $f_n(z)$ has zero total mass. This follows from
\begin{equs}
\int_{-\infty}^{\infty} \hspace{-3mm}{\rm d}z~ f_n(z)= ( {\textstyle\frac{1}{2}-\frac{1}{2^{n+1}}} )^{-1}\int_{-\infty}^{\infty} \hspace{-3mm}{\rm d}z~ {\cal L}f_n(z) =0 ~,
\end{equs}
since $\partial_z^2f_n$, $z\partial_zf_n$ and $f_n$ are all integrable over ${\bf R}$.
\end{proof}
\begin{remark}
Using the representation (\ref{eqn:otherfndef}), splitting the integration interval into $[0,2^{-\frac{n}{2}})$ and $[2^{-\frac{n}{2}},\infty)$, integrating by parts and letting $n\to\infty$, one can prove that
\begin{equs}
\lim_{n\to\infty}2^{-n}f_n(z)=z{\rm e}^{-\frac{z^2}{4}}~,
\end{equs}
which shows that the constant $C(n)$ in (\ref{eqn:fnestimates}) grows at most like $2^n$.
\end{remark}
We can now study in detail the solutions of (\ref{eqn:burgers}) and (\ref{eqn:linburgers}) that are of the form (\ref{eqn:scalingform}):
\begin{proposition} \label{prop:burgers}
Fix $1\leq n<\infty$. For all $\alpha,\gamma\in{\bf R}$ with $|\alpha\gamma|$ small enough, there exist unique functions $u_0$ and $u_n^{\pm}$ of the form (\ref{eqn:scalingform}) that solve (\ref{eqn:burgers}) and (\ref{eqn:linburgers}), with $g_0$ satisfying
\begin{equs}
\int_{-\infty}^{\infty}\hspace{-3mm}{\rm d}z~g_0(z)=\alpha~,~~~ \sum_{m=0}^{3} \frac{{\rm e}^{\frac{z^2}{4}}}{ (\sqrt{1+z^2})^{m}}|\partial_z^mg_0(z)| \leq C |\alpha|
\end{equs}
and with $g_n^{\pm}(z)=f_{n}(\mp z)+R_n^{\pm}(z)$, where $R_n^{\pm}$ satisfy
\begin{equs}
\int_{-\infty}^{\infty} \hspace{-3mm}{\rm d}z~R_n^{\pm}(z)=0~~\mbox{ and }~~ {\displaystyle\sup_{z\in{\bf R}} \sum_{m=0}^{3}} \frac{{\rm e}^{\frac{z^2}{4}}}{ (\sqrt{1+z^2})^{1+m-\frac{1}{2^n}}}|\partial_z^mR_n^{\pm}(z)| \leq C |\alpha\gamma|~.
\end{equs}
\end{proposition}
\begin{proof}
The (unique) solution of (\ref{eqn:burgers}) of the form $u_0(x,t)=\frac{1}{\sqrt{1+t}}g_0(\frac{x}{\sqrt{1+t}})$ satisfying $\intR\hspace{-3mm}{\rm d}z~g_0(z)=\alpha$ is given by
\begin{equs}
g_0(z)=\frac{ \tanh(\frac{\alpha\gamma}{2}){\rm e}^{-\frac{z^2}{4}} }{ \gamma\sqrt{\pi} (1+\tanh(\frac{\alpha\gamma}{2})\,{\rm erf}(\frac{z}{2})) }~.
\end{equs}
Indeed, $g_0(z)=\frac{1}{\gamma}\partial_z\log(1+\tanh(\frac{\alpha\gamma}{2})\,{\rm erf}(\frac{z}{2}))$ is the Cole--Hopf form of the solution, and its total mass is $\frac{1}{\gamma}\log\frac{1+\tanh(\alpha\gamma/2)}{1-\tanh(\alpha\gamma/2)}=\alpha$. In particular, we have
\begin{equs}
\sum_{m=0}^{3} \frac{{\rm e}^{\frac{z^2}{4}}}{ (\sqrt{1+z^2})^{m}}|\partial_z^mg_0(z)| \leq C |\alpha|~.
\label{eqn:f0bound}
\end{equs}
We next note that substituting (\ref{eqn:scalingform}) into (\ref{eqn:linburgers}) gives
\begin{equs}
0&= {\textstyle\partial_z^2g_n^{\pm}(z)+\frac{1}{2}z\partial_zg_n^{\pm}(z) +(1-\frac{1}{2^{n+1}})g_n^{\pm}(z)}+ 2\gamma\partial_z(g_0(z)g_n^{\pm}(z))\\[2mm]
&\equiv{\cal L}g_n^{\pm}(z)+2\gamma\partial_z(g_0(z)g_n^{\pm}(z))~.
\label{eqn:fpmeq}
\end{equs}
We formally have (using integration by parts)
\begin{equs}
\int_{-\infty}^{\infty} \hspace{-3mm}{\rm d}z~ g_n^{\pm}(z)= ( {\textstyle\frac{1}{2}-\frac{1}{2^{n+1}}} )^{-1}\int_{-\infty}^{\infty} \hspace{-3mm}{\rm d}z~ \myl{12}{\cal L}g_n^{\pm}(z) +2\gamma\partial_z(g_0(z)g_n^{\pm}(z))\myr{12} = 0 ~,
\label{eqn:formal}
\end{equs}
which shows that $g_n^{\pm}$ have zero total mass, {\em provided the formal manipulations above are justified}, i.e., provided $g_n^{\pm}$ and its derivatives decay fast enough so that the integrals are convergent. As is easily seen, $f_n(z)$ and $f_n(-z)$ are two linearly independent solutions of ${\cal L}f=0$, whose general solution can thus be written as $c_1f_n(z)+c_2f_n(-z)$. Using the variation of constants formula, we get that the solution of (\ref{eqn:fpmeq}) satisfies the integral equation
\begin{equs}
g_n^{\pm}(z) =f_n(z)\myl{14} c_{1}^{\pm}+2\gamma\int_0^{z} \hspace{-2mm}{\rm d}\xi~ {\textstyle\frac{ f_n(-\xi)\partial_{\xi}(g_0(\xi)g_n^{\pm}(\xi)) }{ W(\xi)} } \myr{14}+ f_n(-z)\myl{14} c_{2}^{\pm}-2\gamma\int_0^{z} \hspace{-2mm}{\rm d}\xi~ {\textstyle\frac{ f_n(\xi)\partial_{\xi}(g_0(\xi)g_n^{\pm}(\xi)) }{ W(\xi) }} \myr{14}~,
\end{equs}
where the Wronskian $W(z)$ is given by $W(z)=f_n(z)\partial_zf_n(-z)-f_n(-z)\partial_zf_n(z)$ and $c_{1}^{\pm}$ and $c_{2}^{\pm}$ are free parameters. Note that $W(z)$ satisfies $\partial_zW(z)=-\frac{z}{2}W(z)$ and hence $W(z)=W(0){\rm e}^{-\frac{z^2}{4}}$ for some $W(0)\neq0$.
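Indeed, since $f_n(z)$ and $f_n(-z)$ both solve ${\cal L}f=0$, the second-order terms in $\partial_zW$ can be eliminated and the zeroth-order terms cancel pairwise, so that
\begin{equs}
\partial_zW(z)=f_n(z)\partial_z^2f_n(-z)-f_n(-z)\partial_z^2f_n(z) =-{\textstyle\frac{z}{2}}\myl{12}f_n(z)\partial_zf_n(-z)-f_n(-z)\partial_zf_n(z)\myr{12} =-{\textstyle\frac{z}{2}}W(z)~.
\end{equs}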
We now set $c_{1}^{\pm}$ and $c_{2}^{\pm}$ in such a way that (after integration by parts), we have
\begin{equs}
g_n^{\pm}(z)&=f_n(\mp z) +R[g_n^{\pm}](z)~,
\label{eqn:contractforpm} \\
R[g_n^{\pm}](z)&= {\textstyle\frac{\gamma}{W(0)}} f_n(z) \int_{-\infty}^{z} \hspace{-3mm}{\rm d}\xi~ {\rm e}^{\frac{\xi^2}{4}} (\xi f_n(-\xi)+2\partial_{\xi}f_n(-\xi)) g_0(\xi)g_n^{\pm}(\xi)\\
&\phantom{=}~+ {\textstyle\frac{\gamma}{W(0)}} f_n(-z) \int_{z}^{\infty} \hspace{-3mm}{\rm d}\xi~ {\rm e}^{\frac{\xi^2}{4}} (\xi f_n(\xi)+2\partial_{\xi}f_n(\xi)) g_0(\xi)g_n^{\pm}(\xi)~.
\end{equs}
Using Lemma \ref{lem:alittlelemma} and (\ref{eqn:f0bound}), it is then easy to show that for $|\alpha\gamma|$ small enough, (\ref{eqn:contractforpm}) defines a contraction map in the norm
\begin{equs}
|f|_{2-\frac{1}{2^n}}\equiv \sup_{z\in{\bf R}}(\sqrt{1+z^2})^{2-\frac{1}{2^{n}}}|f(z)|~.
\end{equs}
Namely, we have the improved decay rates
\begin{equs}
\sup_{z\in{\bf R}} \sum_{m=0}^1 \frac{{\rm e}^{\frac{z^2}{4}}}{ (\sqrt{1+z^2})^{1+m-\frac{1}{2^n}}}|\partial_z^mR[g_n^{\pm}](z)| &\leq C|\alpha\gamma|~ |g_n^{\pm}|_{2-\frac{1}{2^n}}~.
\end{equs}
This shows that (\ref{eqn:contractforpm}) has a (locally) unique solution among functions with $|f|_{2-\frac{1}{2^n}}\leq c_0$ if $|\alpha\gamma|$ is small enough. In particular, there holds
\begin{equs}
\sup_{z\in{\bf R}} \sum_{m=0}^1 \frac{{\rm e}^{\frac{z^2}{4}}}{ (\sqrt{1+z^2})^{1+m-\frac{1}{2^n}}}|\partial_z^mR[g_n^{\pm}](z)| &\leq C|\alpha\gamma| ~,
\end{equs}
from which we deduce, using again (\ref{eqn:contractforpm}) and Lemma \ref{lem:alittlelemma}, that $|{\rm D} g_n^{\pm}|_{3-\frac{1}{2^n}}\leq c_1$ and thus
\begin{equs}
\sup_{z\in{\bf R}} \frac{{\rm e}^{\frac{z^2}{4}}}{ (\sqrt{1+z^2})^{3-\frac{1}{2^n}}}|\partial_z^2R[g_n^{\pm}](z)| &\leq C|\alpha\gamma| ~.
\end{equs}
Iterating this procedure shows that $|{\rm D}^mg_n^{\pm}|_{2+m-\frac{1}{2^n}}\leq c_m$ and that
\begin{equs}
\sup_{z\in{\bf R}} \sum_{m=0}^3 \frac{{\rm e}^{\frac{z^2}{4}}}{ (\sqrt{1+z^2})^{1+m-\frac{1}{2^n}}}|\partial_z^mR[g_n^{\pm}](z)| &\leq C|\alpha\gamma|
\end{equs}
as claimed. In turn, this proves that the formal manipulations in (\ref{eqn:formal}) are justified, so that the functions $g_n^{\pm}(z)$ have zero total mass, which shows that the remainders $R[g_n^{\pm}](z)$ have zero total mass as claimed, since $R[g_n^{\pm}](z)=g_n^{\pm}(z)-f_n(\mp z)$ and since both $g_n^{\pm}(z)$ and $f_n(z)$ have zero total mass.
\end{proof}
\section{Inhomogeneous heat equations}\label{sect:inhomogeneousheat}
In this section, we consider solutions of inhomogeneous heat equations of the form
\begin{equs}
\partial_t u=\partial_x^2u+\partial_x \myl{13} (1+t)^{\frac{1}{2^n}-\frac{3}{2}} f\myl{10} {\textstyle \frac{x-2\sigma t}{\sqrt{1+t}} } \myr{10} \myr{13}~,~~~~u(x,0)=0~,
\label{eqn:inhomoheateqgeneral}
\end{equs}
where $f$ is a regular function having Gaussian decay at infinity. Solutions of (\ref{eqn:inhomoheateqgeneral}) satisfy
\begin{theorem}
Let $1\leq n<\infty$, $\sigma=\pm1$, $\Xi(x)={\rm e}^{\frac{x^2}{8}}$, ${\rm M}(f)= \intR\hspace{-3mm}{\rm d}z~f(z)$ and
\begin{equs}
u_{n}(x,t)={\textstyle\frac{\sigma}{(1+t)^{1-\frac{1}{2^{n+1}}}} \frac{2^{-1-\frac{1}{2^n}}}{\sqrt{4\pi}} f_n(\frac{-\sigma x}{\sqrt{1+t}})}~~~~\mbox{ with }~~~~ f_n(z)= \int_z^{\infty} \hspace{-3mm}{\rm d}\xi~ \frac{\xi{\rm e}^{-\frac{\xi^2}{4}}}{(\xi-z)^{1-\frac{1}{2^n}}}~.
\label{eqn:defun} \end{equs} The solution $u$ of (\ref{eqn:inhomoheateqgeneral}) satisfies \begin{equs} \|u-{\rm M}(f)\,u_{n}\|_{2,\frac{3}{4}^{\star}} + \|{\rm D}(u-{\rm M}(f)\,u_{n})\|_{2,\frac{5}{4}^{\star}} \leq C \sum_{m=0}^2\|\Xi{\rm D}^m f\|_{\infty} ~, \label{eqn:estiinhomoheat} \end{equs} for all $f$ such that the r.h.s. of (\ref{eqn:estiinhomoheat}) is finite. \end{theorem} \begin{remark} Note that while $u\to {\rm M}(f)u_{n}$ as $t\to\infty$ in the Sobolev norm (\ref{eqn:estiinhomoheat}), it does not do so in spatially weighted norms such as $\L^2({\bf R},x^2\d x)$, as $u_{n}$ has infinite spatial moments for all times, while all moments of $u$ are bounded for finite time. \end{remark} \begin{proof} We first define \begin{equs} F(\xi)= \int_{-\infty}^{\xi} \hspace{-3mm}{\rm d}z~\myl{10} f(z)- {\rm M}(f)\,{\textstyle \frac{{\rm e}^{-\frac{z^2}{4}}}{\sqrt{4\pi}}} \myr{10}~~~~\mbox{ with }~~~~ {\rm M}(f)=\int_{-\infty}^{\infty} \hspace{-3mm}{\rm d}z~f(z) \label{eqn:defF} \end{equs} and note that $F$ satisfies \begin{equs} \|{\rm D}^3F\|_1 +\sum_{m=0}^{2}\|\rho{\rm D}^m F\|_1 +\sum_{m=1}^{2}\|{\rm D}^m F\|_2 \leq C \sum_{m=0}^2\|\Xi{\rm D}^m f\|_{\infty} ~, \label{eqn:FF} \end{equs} where $\rho(x)=\sqrt{1+x^2}$. Namely, we first note that $\|\rho F\|_1\leq\|\hat{F}\|_2+\|\hat{F}''\|_2$ and $\hat{F}(k)=(ik)^{-1}(\hat{f}(k)-\hat{f}(0){\rm e}^{-k^2})$. Then, since $\|\Xi f\|_{\infty}<\infty$ implies that $\hat{f}$ is analytic, $\hat{F}$ is regular near $k=0$. The proof of (\ref{eqn:FF}) now follows from elementary arguments. We finally note that it follows from (\ref{eqn:defF}) that \begin{equs} (1+t)^{\frac{1}{2^n}-\frac{3}{2}} f\myl{10} {\textstyle \frac{x-2\sigma t}{\sqrt{1+t}} } \myr{10} = {\rm M}(f)~ \underbrace{\frac{(1+t)^{\frac{1}{2^n}-\frac{3}{2}}}{\sqrt{4\pi}} {\rm e}^{-\frac{(x-2\sigma t)^2}{4(1+t)}}}_{\equiv A(x,t)}+ \underbrace{ (1+t)^{\frac{1}{2^n}-1} \partial_x F\myl{10} {\textstyle \frac{x-2\sigma t}{\sqrt{1+t}} } \myr{10}}_{\equiv \partial_x B(x,t)}~. \label{eqn:underbraces} \end{equs} The proof of (\ref{eqn:estiinhomoheat}) is then completed by considering separately the solutions of heat equations with inhomogeneous terms given by $\partial_x A(x,t)$ and $\partial_x^2 B(x,t)$. This is done in Propositions \ref{prop:Gaussian} and \ref{prop:secondder} below. \end{proof} \begin{proposition} \label{prop:Gaussian} Let $\sigma=\pm1$, $1\leq n<\infty$, and let $u_n$ be defined as in (\ref{eqn:defun}). The solution $u$ of \begin{equs} \partial_t u=\partial_x^2u+\partial_xA ~,~~~~u(x,0)=0~, \label{eqn:inhomoheateq} \end{equs} with $A$ defined in (\ref{eqn:underbraces}) satisfies \begin{equs} \|u-u_{n}\|_{2,\frac{3}{4}} + \|{\rm D}(u-u_{n})\|_{2,\frac{5}{4}} \leq C~. \label{eqn:estiinhomoheatrap} \end{equs} \end{proposition} \begin{proof} The solution of (\ref{eqn:inhomoheateq}) is given by \begin{equs} u(x,t)= \partial_x \int_0^t \hspace{-2mm} {\rm d}s \int_{-\infty}^{\infty} \hspace{-3mm} {\rm d}y \frac{{\rm e}^{-\frac{(x-y)^2}{4(t-s)}}}{\sqrt{4\pi(t-s)}} \frac{{\rm e}^{-\frac{(y-2\sigma s)^2}{4(1+s)}}}{\sqrt{4\pi}(1+s)^{\frac{3}{2}-\frac{1}{2^{n}}}}~. 
\label{eqn:inhomoheat} \end{equs} To motivate our result, we note that performing the $y$-integration and changing variables from $s$ to $\xi\equiv\frac{2s-\sigma x}{\sqrt{1+t}}$ in (\ref{eqn:inhomoheat}) leads to \begin{equs} \lim_{t\to\infty} (1+t)^{1-\frac{1}{2^{n+1}}}u(-\sigma z\sqrt{1+t},t)= \lim_{t\to\infty} {\textstyle\frac{\sigma 2^{-1-{\frac{1}{2^n}}}}{\sqrt{4\pi}}} \int_{z}^{\frac{2t}{\sqrt{1+t}}+z} \hspace{-3mm}{\rm d}\xi~ {\textstyle\frac{ \xi{\rm e}^{-\frac{\xi^2}{4}} }{ (\xi-z+\frac{2}{\sqrt{1+t}})^{1-\frac{1}{2^n}} }} ={\textstyle\frac{\sigma 2^{-1-{\frac{1}{2^n}}}}{\sqrt{4\pi}}} f_n(z)~. \end{equs} More formally, taking the Fourier transform of (\ref{eqn:inhomoheat}) gives \begin{equs} \hat{u}(k,t)&=ik{\rm e}^{-k^2(1+t)} \int_0^{t} \hspace{-2mm}{\rm d}s~ \frac{{\rm e}^{2ik\sigma s}}{(1+s)^{1-\frac{1}{2^n}}}~. \end{equs} We now use that \begin{equs} \my{|}{14} \int_0^{t} \hspace{-2mm}{\rm d}s~ \frac{{\rm e}^{2ik\sigma s}}{(1+s)^{1-\frac{1}{2^n}}} -\int_0^{t} \hspace{-2mm}{\rm d}s~ \frac{{\rm e}^{2ik\sigma s}}{s^{1-\frac{1}{2^n}}} \my{|}{14}&\leq C(n)~,\\ \int_0^{t} \hspace{-2mm}{\rm d}s~ \frac{{\rm e}^{2ik\sigma s}}{s^{1-\frac{1}{2^n}}}&= |k|^{-\frac{1}{2^n}} \myl{10} \theta(\sigma k)J_n(|k|t)+\theta(- \sigma k)\overline{J_n(|k|t)} \myr{10}~, \end{equs} where $\theta(k)$ is the Heaviside step function and we defined \begin{equs} J_n(z)= \int_0^{z} \hspace{-2mm}{\rm d}s~ \frac{{\rm e}^{2is}}{s^{1-\frac{1}{2^n}}} \end{equs} for $z\geq0$. This function satisfies \begin{equs} \sup_{z\geq0} z^{1-\frac{1}{2^n}} |J_n(z)-J_{n,\infty}|\leq\frac{1}{2}~~\mbox{ for }~~ J_{n,\infty}=\lim_{z\to\infty}J_n(z)~. \end{equs} Now define \begin{equs} \widehat{u_{n}}(k,t)= ik{\rm e}^{-k^2(1+t)} |k|^{-\frac{1}{2^n}} \myl{10} \theta(\sigma k)J_{n,\infty}+\theta(-\sigma k)\overline{J_{n,\infty}} \myr{10}~. \label{eqn:widehat} \end{equs} We have \begin{equs} |\hat{u}(k,t)-\widehat{u_{n}}(k,t)|\leq (C(n)|k|+t^{-1+\frac{1}{2^n}}) {\rm e}^{-k^2(1+t)} \leq (C(n)|k|+t^{-\frac{1}{2}}) {\rm e}^{-k^2(1+t)}~, \label{eqn:hammer} \end{equs} from which (\ref{eqn:estiinhomoheatrap}) follows by direct integration. We complete the proof by showing that the inverse Fourier transform of the function $\widehat{u_{n}}(k,t)$ defined in (\ref{eqn:widehat}) satisfies \begin{equs} u_{n}(x,t)={\textstyle\frac{\sigma}{(1+t)^{1-\frac{1}{2^{n+1}}}} \frac{2^{-1-\frac{1}{2^n}}}{\sqrt{4\pi}} f_n(\frac{-\sigma x}{\sqrt{1+t}})}~~\mbox{ for }~~ f_n(z)=\int_z^{\infty} \hspace{-3mm}{\rm d}\xi~ \frac{\xi{\rm e}^{-\frac{\xi^2}{4}}}{(\xi-z)^{1-\frac{1}{2^n}}}~. \label{eqn:tocheck} \end{equs} This follows easily from the fact that \begin{equs} \widehat{u_{n}}(k,t)=(1+t)^{-\frac{1}{2}+\frac{1}{2^{n+1}}} \widehat{u_{n}}(k\sqrt{1+t},0)~, \end{equs} and that, since \begin{equs} f_n(z)= \int_0^{\infty} \hspace{-3mm}{\rm d}\xi~ \frac{(z+\xi){\rm e}^{-\frac{(z+\xi)^2}{4}}}{\xi^{1-\frac{1}{2^n}}}~, \end{equs} we get \begin{equs} {\textstyle\frac{\sigma2^{-1-\frac{1}{2^n}}}{\sqrt{4\pi}}} \widehat{f_n}(-\sigma k)&=2^{-\frac{1}{2^n}}i k{\rm e}^{-k^2} \int_0^{\infty} \hspace{-3mm}{\rm d}\xi~ \frac{{\rm e}^{ik\sigma \xi}}{\xi^{1-\frac{1}{2^n}}} =ik{\rm e}^{-k^2}|k|^{-\frac{1}{2^n}} \int_0^{\infty} \hspace{-3mm}{\rm d}\xi~ \frac{{\rm e}^{2i{\rm sign}(k\sigma)\xi}}{\xi^{1-\frac{1}{2^n}}}\\ &= ik{\rm e}^{-k^2}|k|^{-\frac{1}{2^n}} \myl{10} \theta(k\sigma)J_{n,\infty}+ \theta(-k\sigma)\overline{J_{n,\infty}} \myr{10} =\widehat{u_{n}}(k,0) \end{equs} as claimed. 
\end{proof} \begin{proposition} \label{prop:secondder} Let $\sigma=\pm1$, $1\leq n<\infty$ and $\rho(x)=\sqrt{1+x^2}$. The solution $u$ of \begin{equs} \partial_t u=\partial_x^2u+\partial_x^2 B ~,~~~~u(x,0)=0~, \label{eqn:inhomoheateqF} \end{equs} with $B$ defined in (\ref{eqn:underbraces}) satisfies \begin{equs} \|u\|_{2,\frac{3}{4}^{\star}} +\|{\rm D} u\|_{2,\frac{5}{4}^{\star}} \leq C~ \myl{13} \|{\rm D}^3F\|_1 +\sum_{m=0}^{2}\|\rho{\rm D}^m F\|_1 +\sum_{m=1}^{2}\|{\rm D}^m F\|_2 \myr{13} \label{eqn:estiinhomoheatF} \end{equs} for all $F$ for which the r.h.s. of (\ref{eqn:estiinhomoheatF}) is finite. \end{proposition} \begin{proof} We first note that the Fourier transform of $u$ is given by \begin{equs} \hat{u}(k,t)&= -k^2 \int_0^t \hspace{-2mm}{\rm d}s~ {\rm e}^{-k^2(t-s)-2ik\sigma s} \hat{F}(k\sqrt{1+s})(1+s)^{\frac{1}{2^n}-\frac{1}{2}}~, \end{equs} which implies \begin{equs} \|(1-{\mathbb Q})u\|_{2,\frac{3}{4}} + \|(1-{\mathbb Q}){\rm D} u\|_{2,\frac{5}{4}} \leq C\myl{10}\|{\rm D} F\|_2+\|{\rm D}^2 F\|_2\myr{10} \sup_{0\leq t\leq1} \int_0^t \hspace{-2mm} \frac{{\rm d}s}{\sqrt{t-s}} ~. \end{equs} Here ${\mathbb Q}$ is again defined as the characteristic function for $t\geq1$. Next, integrating by parts, we find \begin{equs} \hat{u}(k,t)&= \frac{ik\hat{F}(k){\rm e}^{-k^2t}}{2\sigma} -\frac{ik\hat{F}(k\sqrt{1+t}){\rm e}^{-2ik\sigma t}}{2\sigma (1+t)^{\frac{1}{2}-\frac{1}{2^n}} } +\hat{N}(k,t)\\ \mbox{where }~~~ \hat{N}(k,t)&= \frac{ik}{2\sigma} \int_0^t \hspace{-2mm}{\rm d}s~ {\rm e}^{-k^2(t-s)-2ik\sigma s} \myl{10}k^2+\partial_s\myr{10}~ \myl{14} \frac{\hat{F}(k\sqrt{1+s})}{(1+s)^{\frac{1}{2}-\frac{1}{2^n}}} \myr{14}~. \end{equs} We then note that \begin{equs} \|u-N\|_{2,\frac{3}{4}} + \|{\rm D}(u-N)\|_{2,\frac{5}{4}} \leq C\myl{10}\|F\|_1+\|{\rm D} F\|_2+\|{\rm D}^2 F\|_2\myr{10}~, \end{equs} and that, defining $\hat{G}(k)=\frac{1}{2}\partial_k\hat{F}(k)$, we have $\hat{N}(k,t)=\hat{N}_0(k,t)+\hat{N}_1(k,t)+\hat{N}_2(k,t)$, where \begin{equs} \hat{N}_0(k,t)&= \frac{ik^3}{2\sigma} \int_0^t \hspace{-2mm}{\rm d}s~ {\rm e}^{-k^2(t-s)-2ik\sigma s} ~ \myl{14} \frac{\hat{F}(k\sqrt{1+s})}{(1+s)^{\frac{1}{2}-\frac{1}{2^n}}} \myr{14}~,\\ \hat{N}_1(k,t)&= \frac{ik^2}{2\sigma} \int_0^t \hspace{-2mm}{\rm d}s~ {\rm e}^{-k^2(t-s)-2ik\sigma s} ~ \myl{14} \frac{\hat{G}(k\sqrt{1+s})}{(1+s)^{1-\frac{1}{2^n}}} \myr{14}~,\\ \hat{N}_2(k,t)&= \frac{ik}{2\sigma} \myl{10} {\textstyle \frac{1}{2^n}-\frac{1}{2} } \myr{10} \int_0^t \hspace{-2mm}{\rm d}s~ {\rm e}^{-k^2(t-s)-2ik\sigma s} ~ \myl{14} \frac{\hat{F}(k\sqrt{1+s})}{(1+s)^{\frac{3}{2}-\frac{1}{2^n}}} \myr{14}~. \end{equs} The procedure is now similar to that outlined in the proof of Theorem \ref{thm:cauchy}: split the integration intervals into $[0,\frac{t}{2}]$ and $[\frac{t}{2},t]$ and distribute the derivatives ($k$-factors) either on the functions $F$ and $G$, or on the Gaussian. 
Introducing the notation \begin{equs} \Bone{p_1,q_1}{p_2,q_2} \equiv \int_0^{\frac{t}{2}} \hspace{-3mm}{\rm d}s~ \frac{ (1+s)^{-q_1} }{ (t-s)^{p_1} }+ \int_{\frac{t}{2}}^t \hspace{-2mm}{\rm d}s~ \frac{ (1+s)^{-q_2} }{ (t-s)^{p_2} } ~, \label{eqn:defB} \end{equs} we then find that \begin{equs} \|{\mathbb Q}{\rm D}^{\alpha}N_0\|_{2,\frac{3}{4}+\frac{\alpha}{2}} &\leq C(\|F\|_1+\|{\rm D}^{2+\alpha} F\|_1) \sup_{t\geq1} t^{\frac{3}{4}+\frac{\alpha}{2}} ~\Bone{\frac{7}{4}+\frac{\alpha}{2},0}{\frac{3}{4},1+\frac{\alpha}{2}} ~,\\ \|{\mathbb Q}{\rm D}^{\alpha}N_1\|_{2,\frac{3}{4}+\frac{\alpha}{2}} &\leq C(\|G\|_1+\|{\rm D}^{1+\alpha} G\|_1) \sup_{t\geq1} t^{\frac{3}{4}+\frac{\alpha}{2}} ~\Bone{\frac{5}{4}+\frac{\alpha}{2},\frac{1}{2}} {\frac{3}{4},1+\frac{\alpha}{2}}~,\\ \|{\mathbb Q}{\rm D}^{\alpha}N_2\|_{2,\frac{3}{4}+\frac{\alpha}{2}^{\star}} &\leq C(\|F\|_1+\|{\rm D}^{\alpha} F\|_1) \sup_{t\geq1} \frac{t^{\frac{3}{4}+\frac{\alpha}{2}}}{\ln(2+t)} ~\Bone{\frac{3}{4}+\frac{\alpha}{2},1} {\frac{3}{4},1+\frac{\alpha}{2}} \end{equs} for $\alpha=0,1$. The proof is completed by a straightforward application of Lemma \ref{lem:onB} below, where we consider generalizations of the function $B_1$ in (\ref{eqn:defB}) (see Definition \ref{def:defB} below), since those will occur later on in Sections \ref{sect:cauchyproof} and \ref{sect:remainderestimates}. \end{proof} \section{Proof of Theorem \ref{thm:cauchy}, continued} \label{sect:cauchyproof} In view of the estimates (\ref{eqn:estimateonDkernel}) and (\ref{eqn:H}) on ${\rm e}^{\L t}$ and $h$, respectively, the estimates needed to conclude the proof of Theorem \ref{thm:cauchy} will naturally involve the functions $B_0$ and $B$ which are defined as follows: \begin{definition}\label{def:defB} We define \begin{equs} B_0[q](t)&= \int_0^{t} \hspace{-2mm}{\rm d}s \frac{{\rm e}^{-\frac{t-s}{8}}}{\sqrt{t-s}(1+s)^q}~,\\ \B{p_1,q_1,r_1} {p_2,q_2,r_2,r_3} &= \int_0^{\frac{t}{2}} \hspace{-3mm}{\rm d}s~ \frac{ (1+s)^{-q_1} }{ (t-s)^{p_1}(t-1+s)^{r_1} }+ \int_{\frac{t}{2}}^t \hspace{-2mm}{\rm d}s~ \frac{ (1+s)^{-q_2} \ln(2+s)^{r_3} }{ (t-s)^{p_2}(t-1+s)^{r_2} }~. \label{eqn:defgeneralB} \end{equs} \end{definition} These functions satisfy the following estimates: \begin{lemma} \label{lem:onB} Let $0\leq p_2<1$, $0\leq r_2\leq1-p_2$, $p_1,q_1,q_2,r_1\geq0$ and $r_3\in\{0,1\}$. There exists a constant $C$ such that for all $t\geq0$ there holds \begin{equs} B_0[q_1](t)&\leq C(1+t)^{-q_1}~,\\ \B{p_1,q_1,r_1} {p_2,q_2,r_2,r_3} &\leq C~ \ln(2+t)^{\alpha} \my{\{}{18} \begin{array}{ll} \frac{1}{(1+t)^{\beta}} &~~\mbox{ if }~~0\leq p_1\leq 1 \\ \frac{1}{t^{p_1-1}~(1+t)^{\beta-p_1+1}} &~~\mbox{ if }~~p_1>1 \end{array} ~, \label{eqn:generalB} \end{equs} where $\beta=\min(p_1+\min(q_1-1,0)+r_1,p_2+q_2+r_2-1)$, $\alpha=\max(\delta_{q_1,1},\delta_{p_2+r_2,1}+r_3)$ and $\delta_{i,j}$ is the Kronecker delta. Furthermore, since \begin{equs} \Bone{p_1,q_1}{p_2,q_2}= \B{p_1,q_1,0} {p_2,q_2,0,0}~, \end{equs} the estimate in (\ref{eqn:generalB}) applies for $B_1$ as well. 
\end{lemma} \begin{proof} The proof follows immediately from \begin{equs} B_0[q_1](t)&\leq {\rm e}^{-\frac{t}{16}} \int_0^{\frac{t}{2}} \hspace{-3mm} \frac{{\rm d}s}{\sqrt{t-s}} + \frac{ 1 }{ (\frac{t}{2}+1)^{q_1} } \int_0^{\frac{t}{2}} \hspace{-3mm}{\rm d}s~ \frac{{\rm e}^{-\frac{s}{8}}}{\sqrt{s}} ~,\\ \B{p_1,q_1,r_1} {p_2,q_2,r_2,r_3} &\leq \frac{ 1 }{ (\frac{t}{2})^{p_1}(\frac{t}{2}+1)^{r_1} } \int_0^{\frac{t}{2}} \hspace{-3mm} \frac{{\rm d}s}{(1+s)^{q_1}} + \frac{ \ln(2+t)^{r_3} }{ (\frac{t}{2}+1)^{q_2} } \int_0^{\frac{t}{2}} \hspace{-3mm} \frac{ {\rm d}s }{ s^{p_2}(1+s)^{r_2} } \end{equs} and straightforward integrations. \end{proof} We can now complete the proof of Theorem \ref{thm:cauchy}.\par \begin{proof}[Proof of Theorem \ref{thm:cauchy}, continued]\par First, we recall that our goal is to prove that the map ${\cal N}$ defined by \begin{equs} {\cal N}[{\bf z}](t)= \int_0^t \hspace{-2mm} {\rm d}s~{\rm e}^{\L(t-s)} \vector{0}{\partial_xh({\bf z}(s))} \label{eqn:defNrap} \end{equs} satisfies $\|{\cal N}[{\bf z}]\|\leq C$ for all ${\bf z}\in{\cal B}$ with $\|{\bf z}\|=1$. We have already proved that $\|\P{\rm D}{\cal N}[{\bf z}]\|_{2,\frac{3}{4}}\leq C$. The other necessary estimates are done as follows: \begin{equs} \|\widehat{{\cal N}[{\bf z}]}\|_{\infty,0}&\leq C\sup_{t\geq0} \Bone{\frac{1}{2},\frac{1}{2}}{\frac{1}{2},\frac{1}{2}} \leq C~,\\ \|{\cal N}[{\bf z}]\|_{2,\frac{1}{4}}&\leq C\sup_{t\geq0} (1+t)^{\frac{1}{4}} \Bone{\frac{1}{2},\frac{3}{4}}{\frac{1}{2},\frac{3}{4}}\leq C~,\\ \|\P {\rm D}{\cal N}[{\bf z}]\|_{2,\frac{3}{4}}&\leq C\sup_{t\geq0} (1+t)^{\frac{3}{4}} \Bone{1,\frac{3}{4}} {\frac{1}{2},\frac{5}{4}} \leq C ~,\\ \|(1-\P){\rm D}{\cal N}[{\bf z}]\|_{2,\frac{3}{4}} &\leq \sup_{t\geq0}(1+t)^{\frac{3}{4}} B_0[{\textstyle\frac{5}{4}}](t)\leq C~,\\ \|(1-{\mathbb Q})\P{\rm D}^2{\cal N}[{\bf z}]_2\|_{2,\frac{5}{4}^{\star}}&\leq C\|(1-{\mathbb Q})\P{\rm D}{\cal N}[{\bf z}]_2\|_{2,\frac{3}{4}} \leq C\|\P{\rm D}{\cal N}[{\bf z}]_2\|_{2,\frac{3}{4}}\leq C~, \label{eqn:obvious} \\ \|{\mathbb Q}\P{\rm D}^2{\cal N}[{\bf z}]_2\|_{2,\frac{5}{4}^{\star}}&\leq C \sup_{t\geq1}{\textstyle\frac{(1+t)^{\frac{5}{4}}}{\ln(2+t)}} ~\B{\frac{3}{2},\frac{3}{4},0} {\frac{1}{2},\frac{5}{4},\frac{1}{2},0} \leq C~, \label{eqn:lastineqforremark} \\ \|(1-\P){\rm D}^2{\cal N}[{\bf z}]_2\|_{2,\frac{5}{4}^{\star}} &\leq \sup_{t\geq0}(1+t)^{\frac{5}{4}} B_0[{\textstyle\frac{5}{4}}](t)\leq C~. \label{eqn:koversqrt} \end{equs} In (\ref{eqn:obvious}), we used the obvious estimates $\|\P{\rm D} f\|_2\leq\|\P f\|_2$ and $\|(1-{\mathbb Q})f\|_{2,p}\leq 2^{p-q}\|(1-{\mathbb Q})f\|_{2,q}$ if $q<p$, while in (\ref{eqn:lastineqforremark}), we made use of ${\displaystyle\sup_{|k|\leq1,t\geq0}}|k|\sqrt{1+t}{\rm e}^{-k^2t}\leq1$, and finally in (\ref{eqn:koversqrt}) we used $\sup_{k\in{\bf R}} |k|(1+k^2)^{-\frac{1}{2}}=1$. Incidentally, (\ref{eqn:koversqrt}) is the only place in the above estimates where the (crucial) presence of the extra factor $(1+k^2)^{-\frac{1}{2}}$ in the second component of the r.h.s. of (\ref{eqn:estimateonDkernel}) is used. This concludes the proof of Theorem \ref{thm:cauchy}. \end{proof} \section{Remainder estimates} \label{sect:remainderestimates} We now make precise the sense in which the semigroup ${\rm e}^{\L t}$ is {\em close} to that of (\ref{eqn:linearfourieruv}), whose Fourier transform is given by \begin{equs} {\rm e}^{\L_0 t}&\equiv \myl{14} \begin{matrix} {\rm e}^{-k^2t+ikt} & 0\\ 0 & {\rm e}^{-k^2t-ikt} \end{matrix} \myr{14} \label{eqn:defT}~. 
\end{equs} \begin{lemma} \label{lem:closetoheat} Let $\P$ be the Fourier multiplier with the characteristic function on $[-1,1]$, and let ${\rm e}^{\L t}$ resp.~${\rm e}^{\L_0 t}$ be as in (\ref{eqn:defeLt}), resp.~(\ref{eqn:defT}) and $\S$ be as in (\ref{eqn:defS}). Then one has the estimates \begin{equs} \sup_{t\geq0,k\in{\bf R}} \sqrt{1+t}{\rm e}^{\frac{k^2t}{2}} \my{|}{15} \left( \P\S {\rm e}^{\L t}-{\rm e}^{\L_0 t}\S \vbox to 13pt{} \right)_{i,j} \my{|}{15} &\leq C~, \label{eqn:closetoheat} \end{equs} where $(\P\S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S)_{i,j}$ denotes the $(i,j)$-entry in the matrix $\P\S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S$. \end{lemma} \begin{proof} The proof follows by considering separately $|k|\leq1$ and $|k|>1$. We first rewrite \begin{equs} \P\S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S= \P\myl{11}\S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S\myr{11} +(1-\P){\rm e}^{\L_0 t}\S~. \end{equs} We then have \begin{equs} \sup_{t\geq0,k\in{\bf R}} \sqrt{1+t}{\rm e}^{\frac{k^2t}{2}} \my{|}{15} \left( (1-\P){\rm e}^{\L_0 t}\S \vbox to 13pt{} \right)_{i,j} \my{|}{15} &\leq \sup_{t\geq0,|k|\geq1} \sqrt{1+t}{\rm e}^{-\frac{k^2t}{2}} \leq C~. \end{equs} For $|k|\leq1$, we first compute \begin{equs} {\rm e}^{\L_0 t}\S &= {\rm e}^{-k^2t} \myl{14} \begin{matrix} {\rm e}^{ikt} & {\rm e}^{ikt}\\ {\rm e}^{-ikt} & -{\rm e}^{-ikt} \end{matrix} \myr{14}~,\\ \S{\rm e}^{\L t} &={\rm e}^{-k^2t} \myl{20} \begin{matrix} \cos(kt\Delta)+\frac{1-ik}{\Delta}\,i\,\sin(kt\Delta) & \cos(kt\Delta)+\frac{1+ik}{\Delta}\,i\,\sin(kt\Delta)\\[2mm] \cos(kt\Delta)-\frac{1+ik}{\Delta}\,i\,\sin(kt\Delta) & -(\cos(kt\Delta)-\frac{1-ik}{\Delta}\,i\,\sin(kt\Delta)) \end{matrix} \myr{20}~, \end{equs} where we recall that $\Delta=\sqrt{1-k^2}$. We next note that \begin{equs} \P|\sin(kt\Delta)-\sin(kt)|+\P |\cos(kt\Delta)-\cos(kt)|&\leq \P|\cos(kt(\Delta-1))-1|+\P|\sin(kt(\Delta-1))|\\ &\leq \P|\sqrt{1-k^2}-1|~|k|t\leq \P|k|^3t~,\\ \P|({\textstyle\frac{1}{\Delta}}-1)\sin(kt\Delta)| &\leq \P|\sqrt{1-k^2}-1|~|k|t\leq \P|k|^3t~. \end{equs} The proof is completed noting that \begin{equs} \sup_{|k|\leq1,t\geq0} t^{\frac{m}{2}} |k|^n{\rm e}^{-\frac{k^2t}{2}}\leq C(n) \end{equs} for any (finite) $0\leq m\leq n$. \end{proof} We are now in a position to prove that the remainder \begin{equs} {\cal R}[{\bf z}](t)= \myl{10} \S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S \myr{10}{\bf z}_0+ \int_0^t \hspace{-2mm} {\rm d}s~ \my{[}{14} \S{\rm e}^{\L(t-s)} \vector{0}{\partial_xh({\bf z}(s))} - {\rm e}^{\L_0(t-s)}\S~ \vector{0}{\partial_xg_0({\bf z}(s))} \my{]}{14} \end{equs} satisfies improved estimates as stated in (\ref{eqn:onRannounce}): \begin{theorem} \label{thm:simplificator} Let $\epsilon_0$ be again the (small) constant provided by Theorem \ref{thm:cauchy}. Then for all ${\bf z}_0\in{\cal B}_0$ with $|{\bf z}_0|\leq\epsilon_0$, the solution ${\bf z}$ of (\ref{eqn:p-system}) satisfies \begin{equs} \|{\cal R}[{\bf z}]\|_{2,\frac{3}{4}^{\star}}+ \|{\rm D}{\cal R}[{\bf z}]\|_{2,\frac{5}{4}^{\star}}\leq C\epsilon_0~. 
\label{eqn:onR} \end{equs} \end{theorem} \begin{proof} We first note that \begin{equs} \myl{10}\S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S\myr{10}{\bf z}_0= \myl{10} \S\P{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S \myr{10}{\bf z}_0+\S(1-\P){\rm e}^{\L t}{\bf z}_0 \equiv L_1[{\bf z}_0](t)+L_2[{\bf z}_0](t) ~, \end{equs} and then use the fact that by Lemma \ref{lem:closetoheat}, we have \begin{equs} \|{\rm D}^{\alpha}L_1[{\bf z}_0]\|_{2,\frac{3}{4}+\frac{\alpha}{2}}&\leq C\sup_{t\geq0}(1+t)^{\frac{1}{4}+\frac{\alpha}{2}} \min\myl{10} \|{\rm D}^{\alpha}{\bf z}_0\|_2~,~ t^{-\frac{1}{4}-\frac{\alpha}{2}}\|\widehat{{\bf z}_0}\|_{\infty} \myr{10} \leq C|{\bf z}_0| \end{equs} for $\alpha=0,1$ and finally \begin{equs} \|L_2[{\bf z}_0]\|_{2,\frac{3}{4}}+ \|{\rm D} L_2[{\bf z}_0]\|_{2,\frac{5}{4}} &\leq C (\|{\bf z}_0\|_2+\|{\rm D}{\bf z}_0\|_2) \sup_{t\geq0} (1+t)^{\frac{5}{4}}{\rm e}^{-\frac{t}{4}} \leq C|{\bf z}_0|~. \end{equs} This proves \begin{equs} \|\myl{10}\S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S\myr{10}{\bf z}_0\|_{2,\frac{3}{4}}+ \|{\rm D} \myl{10}\S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S\myr{10} {\bf z}_0\|_{2,\frac{5}{4}} \leq C|{\bf z}_0| \end{equs} for all ${\bf z}_0\in{\cal B}_0$. We then show that \begin{equs} \| {\cal R}[{\bf z}](t)-\myl{10}\S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S\myr{10}{\bf z}_0 \|_{2,\frac{3}{4}^{\star}}+ \|{\rm D}\myl{11} {\cal R}[{\bf z}](t)-\myl{10}\S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S\myr{10}{\bf z}_0 \myr{11}\|_{2,\frac{5}{4}^{\star}} &\leq C\|{\bf z}\|^2 \end{equs} for all ${\bf z}\in{\cal B}$. We only need to prove the estimates for $\|{\bf z}\|=1$. We first decompose \begin{equs} {\cal R}[{\bf z}](t)-\myl{10}\S{\rm e}^{\L t}-{\rm e}^{\L_0 t}\S\myr{10}{\bf z}_0= \S{\cal N}_1[{\bf z}](t)+ \S{\cal N}_2[{\bf z}](t)+ {\cal N}_3[{\bf z}](t)~, \label{eqn:firstdecomposition} \end{equs} where \begin{equs} {\cal N}_1[{\bf z}](t)&=(1-\P) \int_0^t \hspace{-2mm} {\rm d}s~{\rm e}^{\L(t-s)} \vector{0}{\partial_xh({\bf z}(s))} ~,\\ {\cal N}_2[{\bf z}](t)&=\P\int_0^t \hspace{-2mm} {\rm d}s~{\rm e}^{\L(t-s)} \vector{0}{\partial_xh({\bf z}(s))-\partial_xg_0({\bf z}(s))}~,\\ {\cal N}_3[{\bf z}](t)&=\int_0^t \hspace{-2mm} {\rm d}s~ \myl{10}\P\S{\rm e}^{\L(t-s)}-{\rm e}^{\L_0(t-s)}\S\myr{10} \vector{0}{\partial_xg_0({\bf z}(s))}~. \end{equs} We then recall that $h({\bf z})$ satisfies \begin{equs} \|h({\bf z})\|_{2,\frac{3}{4}}+ \|{\rm D} h({\bf z})\|_{2,\frac{5}{4}} \leq C\|{\bf z}\|^2 ~, \end{equs} which implies \begin{equs} \|{\cal N}_1[{\bf z}]\|_{2,\frac{3}{4}} &\leq C\sup_{t\geq0} (1+t)^{\frac{3}{4}}B_0[{\textstyle\frac{3}{4}}](t)\leq C~, ~~~~~ \|{\rm D}{\cal N}_1[{\bf z}]\|_{2,\frac{5}{4}} \leq C\sup_{t\geq0} (1+t)^{\frac{5}{4}}B_0[{\textstyle\frac{5}{4}}](t)\leq C~. \end{equs} Moreover $h_0(a,b)\equiv f(a,b)\partial_xb+g(a,b)-g_0(a,b)$ satisfies \begin{equs} \|h_0({\bf z})\|_{1,1}+ \|{\rm D} h_0({\bf z})\|_{1,\frac{3}{2}^{\star}} \leq C\|{\bf z}\|^2~. \end{equs} Here, we need to consider separately $t\in[0,1]$ and $t\geq1$ when estimating $\|\P{\rm D}{\cal N}_2[{\bf z}]\|_{2,\frac{5}{4}^{\star}}$. 
Writing again ${\mathbb Q}$ for the characteristic function for $t\geq1$, we find \begin{equs} \|\P{\cal N}_2[{\bf z}]\|_{2,\frac{3}{4}^{\star}} &\leq C\sup_{t\geq0} {\textstyle\frac{(1+t)^{\frac{3}{4}}}{\ln(2+t)}} \Bone{\frac{3}{4},1}{\frac{3}{4},1}\leq C~, \\ \|(1-{\mathbb Q})\P{\rm D}{\cal N}_2[{\bf z}]\|_{2,\frac{5}{4}^{\star}} &\leq C\sup_{0\leq t\leq1} (1+t)^{\frac{5}{4}} ~\Bone{\frac{3}{4},\frac{3}{2}}{\frac{3}{4},\frac{3}{2}}\leq C~, \\ \|{\mathbb Q}\P{\rm D}{\cal N}_2[{\bf z}]\|_{2,\frac{5}{4}^{\star}} &\leq C\sup_{t\geq1} {\textstyle\frac{(1+t)^{\frac{5}{4}}}{\ln(2+t)}} ~\B{\frac{5}{4},1,0} {\frac{3}{4},\frac{3}{2},0,1} \leq C~. \end{equs} We finally note that \begin{equs} \|g_0({\bf z})\|_{2,\frac{3}{4}}+ \|{\rm D} g_0({\bf z})\|_{2,\frac{5}{4}} \leq C\|{\bf z}\|^2~, \end{equs} and so, using Lemma \ref{lem:closetoheat}, we find \begin{equs} \|{\cal N}_3[{\bf z}]\|_{2,\frac{3}{4}^{\star}} &\leq \sup_{t\geq0} {\textstyle\frac{(1+t)^{\frac{3}{4}}}{\ln(2+t)}} ~\B{\frac{1}{2},\frac{3}{4},\frac{1}{2}}{\frac{1}{2},\frac{3}{4}, \frac{1}{2},0}\leq C~, \\ \|{\rm D}{\cal N}_3[{\bf z}]\|_{2,\frac{5}{4}^{\star}} &\leq \sup_{t\geq0} {\textstyle\frac{(1+t)^{\frac{5}{4}}}{\ln(2+t)}} ~\B{1,\frac{3}{4},\frac{1}{2}} {\frac{1}{2},\frac{5}{4},\frac{1}{2},0}\leq C~. \end{equs} This completes the proof. \end{proof} It now only remains to prove the estimates (\ref{eqn:onRtilde}) and (\ref{eqn:onRtildeLip}) on the maps $\widetilde{\cal R}_{\{u,v\}}$, where we recall that \begin{equs} \widetilde{\cal R}_{u}[{\bf z},{\bf R}^{N}](t)&= c_{+}{\rm E}_0 [h_{1,u}+h_{3,u}](t) -c_{-}{\rm E}_{-2}[h_{1,v}+h_{3,v}](t) +c_{3}{\rm E}_{-1}[h_{2} +h_{4} ](t) ~, \\ \widetilde{\cal R}_{v}[{\bf z},{\bf R}^{N}](t)&= c_{-}{\rm E}_0 [h_{1,v}+h_{3,v}](t) -c_{+}{\rm E}_{ 2}[h_{1,u}+h_{3,u}](t) -c_{3}{\rm E}_{ 1}[h_{2} +h_{4} ](t) ~, \end{equs} with \begin{equs}[2] {\rm E}_{\sigma}[h](t)&= \partial_x\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{\partial_x^2(t-s)}~ {\cal T}^{\sigma}h(s)~&~\mbox{and}\\ h_{1,u}&=R_u^N(u+u_{\star})~,~~~h_{3,u}=u_1^2~,&~~~ h_2&=({\cal T} R_u^N){\cal T}^{-1}\myl{12}\frac{v+v_{\star}}{2}\myr{12} +({\cal T}^{-1}R_v^N){\cal T}\myl{12}\frac{u+u_{\star}}{2}\myr{12}\\ h_{1,v}&=R_v^N(v+v_{\star})~,~~~h_{3,v}=v_1^2~,&~~~ h_4&=({\cal T} u_{\star})({\cal T}^{-1}v_{\star})~. \end{equs} Here, we will only prove that \begin{equs} \label{eqn:onRtilderap} \sum_{\alpha=0}^1 \|{\rm D}^{\alpha} \widetilde{\cal R}_{\{u,v\}} [{\bf z},{\bf R}^{N}]\|_{2,\frac{3}{4}+\frac{\alpha}{2}-\epsilon} &\leq C\epsilon_0 \sum_{\alpha=0}^1 \|{\rm D}^{\alpha}{\bf R}^{N}\|_{2,\frac{3}{4}+\frac{\alpha}{2}-\epsilon} +C~. \end{equs} It is then straightforward to show (\ref{eqn:onRtildeLip}), namely that the maps $\widetilde{\cal R}_{\{u,v\}}$ are Lipschitz in their second argument; we omit the details. To prove (\ref{eqn:onRtilderap}), we first need estimates on ${\bf h}_1=(h_{1,u},h_{1,v})$, $h_2$, ${\bf h}_3=(h_{3,u},h_{3,v})$ and $h_4$. We note that ${\bf u}_0=(u_0,v_0)$ and ${\bf u}_1=(u_1,v_1)$ satisfy \begin{equs} \|{\bf u}_0\|_{1,0}+\|{\bf u}_1\|_{1,0}+ \|{\rm D} {\bf u}_0\|_{1,\frac{1}{2}}+\|{\rm D} {\bf u}_1\|_{1,\frac{1}{2}} &\leq C~, \\ \sup_{t\geq0} (1+t)^{\frac{3}{2}}\myl{10}|{\bf u}_0(\pm t,t)|+|{\bf u}_1(\pm t,t)|\myr{10} +(1+t)^{2}\myl{10}|{\rm D} {\bf u}_0(\pm t,t)|+|{\rm D} {\bf u}_1(\pm t,t)|\myr{10} &\leq C \end{equs} for some constant $C$; see Proposition \ref{prop:burgers}. 
We thus find that
\begin{equa} \label{eqn:hi}
\|{\bf h}_{1}\|_{1,1-\epsilon}+ \|{\rm D} {\bf h}_{1}\|_{1,\frac{3}{2}-\epsilon} + \|h_2\|_{1,1-\epsilon}+ \|{\rm D} h_2\|_{1,\frac{3}{2}-\epsilon}&\leq C\epsilon_0 \sum_{\alpha=0}^1 \|{\rm D}^{\alpha}{\bf R}^{N}\|_{2,\frac{3}{4}+\frac{\alpha}{2}-\epsilon}~,\\
\|{\bf h}_3\|_{1,1}+\|{\rm D} {\bf h}_3\|_{1,\frac{3}{2}} + \|h_4\|_{1,\frac{3}{2}}+ \|{\rm D} h_4\|_{2,2} &\leq C~.
\end{equa}
The proof of (\ref{eqn:onRtilderap}) then follows from Proposition \ref{prop:hiinhomoheat}, which implies that
\begin{equs}
\sum_{\alpha=0}^{1} \|{\rm D}^{\alpha}{\rm E}_{\sigma}[{\bf h}_{1}]\|_{2,\frac{3}{4}+\frac{\alpha}{2}-\epsilon}+ \|{\rm D}^{\alpha}{\rm E}_{\sigma}[h_{2}]\|_{2,\frac{3}{4}+\frac{\alpha}{2}-\epsilon} &\leq C\epsilon_0 \sum_{\alpha=0}^1 \|{\rm D}^{\alpha}{\bf R}^{N}\|_{2,\frac{3}{4}+\frac{\alpha}{2}-\epsilon}~,\\
\sum_{\alpha=0}^{1} \|{\rm D}^{\alpha}{\rm E}_{\sigma}[{\bf h}_{3}]\|_{2,\frac{3}{4}+\frac{\alpha}{2}^{\star}}+ \|{\rm D}^{\alpha}{\rm E}_{\sigma}[h_{4}]\|_{2,\frac{3}{4}+\frac{\alpha}{2}^{\star}} &\leq C
\end{equs}
for any $\sigma\in\{-2,-1,0,1,2\}$ if the estimates in (\ref{eqn:hi}) are satisfied.
\begin{proposition} \label{prop:hiinhomoheat}
Let $\epsilon>0$ and $\sigma\in\{-2,-1,0,1,2\}$. Then there holds
\begin{equs}
\sum_{\alpha=0}^{1} \|{\rm D}^{\alpha}{\rm E}_{\sigma}[h_{1}]\|_{2,\frac{3}{4}+\frac{\alpha}{2}-\epsilon} &\leq C \sum_{\alpha=0}^{1} \|{\rm D}^{\alpha}h_1\|_{1,1+\frac{\alpha}{2}-\epsilon}~, \\
\sum_{\alpha=0}^{1} \|{\rm D}^{\alpha}{\rm E}_{\sigma}[h_{2}]\|_{2,\frac{3}{4}+\frac{\alpha}{2}^{\star}} &\leq C \sum_{\alpha=0}^{1} \|{\rm D}^{\alpha}h_2\|_{1,1+\frac{\alpha}{2}}~.
\end{equs}
\end{proposition}
\begin{proof}
Let $u_i={\rm E}_{\sigma}[h_{i}]$. Taking the Fourier transform, we find
\begin{equs}
\widehat{u_i}(k,t)= ik\int_0^{t} \hspace{-2mm}{\rm d}s~{\rm e}^{-k^2(t-s)+i\sigma ks}\widehat{h_i}(k,s)~.
\end{equs}
We can restrict ourselves to $\sum_{\alpha=0}^{1} \|{\rm D}^{\alpha}h_{1}\|_{1,1+\frac{\alpha}{2}-\epsilon}=1$ and $\sum_{\alpha=0}^{1} \|{\rm D}^{\alpha}h_{2}\|_{1,1+\frac{\alpha}{2}}=1$. Then, it follows that
\begin{equs}
\|{\rm D}^{\alpha}u_1\|_{2,\frac{3}{4}+\frac{\alpha}{2}-\epsilon} &\leq C \sup_{t\geq0} (1+t)^{\frac{3}{4}+\frac{\alpha}{2}-\epsilon} ~\Bone{\frac{3}{4}+\frac{\alpha}{2},1-\epsilon} {\frac{3}{4},1+\frac{\alpha}{2}-\epsilon} \leq C~,\\
\|{\rm D}^{\alpha}u_2\|_{2,\frac{3}{4}+\frac{\alpha}{2}^{\star}} &\leq C \sup_{t\geq0} {\textstyle\frac{(1+t)^{\frac{3}{4}+\frac{\alpha}{2}}}{\ln(2+t)}} ~\Bone{\frac{3}{4}+\frac{\alpha}{2},1} {\frac{3}{4},1+\frac{\alpha}{2}} \leq C
\end{equs}
for $\alpha=0,1$ as claimed.
\end{proof}
\section{Introduction}
For self-navigation, the most fundamental computation required for a vehicle is to determine its position and orientation, i.e., \textit{pose}, during motion. Higher-level path planning objectives such as motion tracking and obstacle avoidance operate by continuously estimating the vehicle's pose. Recently, deep neural networks (DNNs) have shown a remarkable ability for vision-based pose estimation in highly complex and cluttered environments \cite{kendall2015posenet, zhou2020kfnet, wang2020atloc}. For visual pose estimation, DNNs can learn the correlation between the vehicle's position/orientation and the visual field of a mounted camera. Thereby, the vehicle's pose can be predicted using a monocular camera alone. In contrast, the traditional methods required bulky and power-hungry range sensors or stereo vision sensors to resolve the ambiguity between an object's distance and its scale \cite{fox2001particle,skrzypczynski2017mobile}. However, a standard pose-DNN's \textit{implicit learning} of flying domain features such as the map, placement of objects, coordinate frame, domain structure, \textit{etc.} also affects the robustness and adaptability of pose estimations. The traditional filtering-based approaches \cite{thrun2002probabilistic} account for the flying space structure using explicit representations such as voxel grids, occupancy grids, Gaussian mixture models (GMMs), \textit{etc.} \cite{Dhawale-2020-121381}; thereby, updates to the flying space such as map extension, new objects, and new locations can be more easily accommodated. Comparatively, DNN-based estimators cannot handle selective map updates, and the entire model must be retrained even under small randomized or structured perturbations. Additionally, filtering loops in traditional methods can adjudicate predictive uncertainties against measurements to systematically prune the hypothesis space and can express prediction confidence along with the prediction itself \cite{thrun2002particle}, whereas feedforward pose estimations from a deterministic DNN are vulnerable to measurement and modeling uncertainties. In this paper, we integrate traditional filtering techniques with deep learning to overcome such limitations of DNN-based pose estimation while exploiting the suitability of DNNs to operate efficiently with a monocular camera alone. Specifically, we present a novel framework for visual localization by integrating DNN-based depth prediction and Bayesian filtering-based pose localization. In Figure \ref{fig:mainfig}, avoiding range sensors for localization, we utilize a DNN-based lightweight depth prediction network at the front end and sequential Bayesian estimation at the back end. Our key observation is that, unlike pose estimation, which innately depends on map characteristics such as spatial structure, objects, coordinate frame, \textit{etc.}, depth prediction is {\em map-independent} \cite{godard2017unsupervised, wofk2019fastdepth}. Thus, applying deep learning only to domain-independent tasks and utilizing traditional models where the domain is openly (or explicitly) represented helps improve the predictive robustness. Limiting deep learning to only domain-independent tasks also allows our framework to utilize vast training sets from unrelated domains.
Open representation of map and depth estimates enables faster domain-specific updates and utilization of intermediate feature maps for other autonomy objectives, such as obstacle avoidance, thus improving computational efficiency.
\begin{figure*}[t] \centering \includegraphics[width=0.85\textwidth]{Figures/mainfigdpf.pdf} \caption{Proposed framework integrating depth estimator front-end and particle filter back-end for extremely lightweight and robust localization. DNN-based preprocessing avoids area/power-hungry range sensors. The filtered response of the back-end predictor is robust against measurement and modeling uncertainties. On the right are depth predictions from a lightweight network with varying model sizes.} \label{fig:mainfig} \vspace{-10pt} \end{figure*}
\section{Monocular Localization with Depth Neural Network and Pose Filters}
In Figure \ref{fig:mainfig}, our framework integrates deep learning-based depth prediction and Bayesian filters for visual pose localization in 3D space. At the front end, a depth DNN scans monocular camera images to predict the relative depth of image pixels from the camera's focal point. A particle filter localizes the camera pose at the back end by evaluating the likelihood of the 3D projection of depth scans over a GMM-based map representation of the 3D space. Both components are jointly trained for extremely lightweight operation. The various components of the framework are discussed below:
\vspace{-5pt}
\subsection{Extremely Lightweight Depth Prediction}
DNN-based monocular depth estimation has gained wide interest owing to impressive results. Several fully supervised \cite{zioulis2018omnidepth}, self-supervised \cite{godard2019digging}, and semi-supervised \cite{guizilini2020robust} convolutional neural network (CNN)-based depth estimators have been presented with promising results. However, for low-power edge robotics \cite{floreano2015science}, the existing depth DNNs are often oversized. A typical depth DNN combines an encoder, which extracts the relevant features from the input images, with a decoder, which up-samples the features to predict the depth map. \textit{Skip connections} between various encoding and decoding layers are typically used to obtain high-resolution image features within the encoder, which in turn helps the decoding layers reconstruct a high-resolution depth output. In Figure \ref{fig:depthNN}, we consider a depth DNN that integrates state-of-the-art architectures for lightweight processing on mobile devices. The depth predictor uses MobileNet-v2 as the encoder and RefineNet \cite{nekrasov2019real} as the decoder. MobileNet-v2 concatenates memory-efficient inverted residual blocks (IRBs). The intermediate layer outputs (or RGB-image features) from the encoder are decoded through the successive channels of convolutional sum, chained residual pooling (CRP), and depth-feature upsampling. This architecture uniquely utilizes only 1$\times$1 convolutional layers in the SUM and CRP blocks (replacing traditional high-receptive-field 3$\times$3 CONV layers with 1$\times$1 CONV layers), thus significantly reducing the model parameters. Due to the modular architecture of the depth predictor in Figure \ref{fig:depthNN}, its size can be scaled down by reducing the number of layers in the encoder and decoder. However, with fewer parameters, the prediction quality degrades. Figure \ref{fig:mainfig} (on the right) shows how the depth quality degrades as the number of model parameters is reduced.
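For concreteness, a minimal PyTorch-style sketch of this class of scalable depth predictors is given below. It is an illustrative sketch rather than the exact network of Figure \ref{fig:depthNN}: the block counts, channel widths, and module names are assumptions, but the structure mirrors the description above (inverted-residual encoder stages, $1\times1$ projections fused by summation, CRP blocks built from $1\times1$ convolutions, and a final depth layer), and the \texttt{widths} tuple is the knob by which the model is scaled down.
\begin{verbatim}
# Illustrative sketch of the lightweight depth predictor: a MobileNet-v2
# style inverted-residual encoder with a RefineNet-style decoder whose
# SUM/CRP blocks use only 1x1 convolutions. Names/widths are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InvertedResidual(nn.Module):
    """MobileNet-v2 block: 1x1 expand -> 3x3 depthwise -> 1x1 project."""
    def __init__(self, c_in, c_out, stride=1, expand=4):
        super().__init__()
        hidden = c_in * expand
        self.use_res = stride == 1 and c_in == c_out
        self.block = nn.Sequential(
            nn.Conv2d(c_in, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, c_out, 1, bias=False), nn.BatchNorm2d(c_out))
    def forward(self, x):
        return x + self.block(x) if self.use_res else self.block(x)

class CRP(nn.Module):
    """Chained residual pooling built from 1x1 convolutions only."""
    def __init__(self, ch, n_stages=2):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(ch, ch, 1, bias=False) for _ in range(n_stages))
    def forward(self, x):
        out = x
        for conv in self.convs:
            x = conv(F.max_pool2d(x, 5, stride=1, padding=2))
            out = out + x
        return out

class DepthNet(nn.Module):
    def __init__(self, widths=(16, 24, 32, 64)):  # shrink to scale down
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, widths[0], 3, 2, 1), nn.ReLU6(inplace=True))
        self.stages = nn.ModuleList(
            InvertedResidual(ci, co, stride=2)
            for ci, co in zip(widths[:-1], widths[1:]))
        dec = widths[0]
        # 1x1 projections to a common width; fusion by summation (SUM)
        self.proj = nn.ModuleList(nn.Conv2d(c, dec, 1) for c in widths[1:])
        self.crp = nn.ModuleList(CRP(dec) for _ in widths[1:])
        self.head = nn.Conv2d(dec, 1, 3, padding=1)  # last (tunable) layer
    def forward(self, x):
        feats = []
        x = self.stem(x)
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        y = None  # decode coarsest-to-finest: project, sum, pool-refine
        for f, p, c in zip(reversed(feats), reversed(self.proj),
                           reversed(self.crp)):
            f = p(f)
            if y is not None:
                f = f + F.interpolate(y, size=f.shape[-2:], mode="bilinear",
                                      align_corners=False)
            y = c(f)
        y = F.interpolate(y, scale_factor=4, mode="bilinear",
                          align_corners=False)
        return self.head(y)  # relative depth map

net = DepthNet()
depth = net(torch.randn(1, 3, 224, 224))  # -> (1, 1, 224, 224)
\end{verbatim}
Halving the entries of \texttt{widths} (or dropping encoder stages) is one way a parameter sweep like the one on the right of Figure \ref{fig:mainfig} can be generated.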
Later, we will discuss how, despite lower-quality depth prediction, accurate pose localization can be achieved by adapting maps to depth inaccuracies.
\begin{figure}[h] \centering \includegraphics[width=\textwidth]{Figures/depth-NN.pdf} \caption{Depth neural network architecture with MobileNet-v2 \cite{sandler2018mobilenetv2} encoder and decoder based on RefineNet \cite{nekrasov2019real}.} \vspace{-0.5em} \label{fig:depthNN} \end{figure}
\vspace{-20pt}
\subsection{Memory Efficient Mapping using GMMs}
To minimize the memory footprint of maps, we utilized a GMM-based representation of 3D maps \cite{dhawale2018fast}. The point-cloud distribution of the tested maps was clustered and fitted with a 3D GMM using Expectation-Maximization (EM) procedures. Although alternate map representations are prevalent, the parametric formulation of GMMs can considerably minimize the necessary storage and extraction cost. For example, voxel grids \cite{moravec1985high} use cells to represent dense maps and are simpler to extract; however, the representation suffers from storage inefficiency since the free space also needs to be encoded. Surfels, 3D planes, and triangular meshes \cite{schops2019bad} are storage-efficient; however, retrieving map information from them is expensive. Generative map modeling using GMMs requires only the storage of the means, variances, and mixing proportions. GMM maps also easily adapt to scene complexity; that is, for more complex scenes, we can use more mixture components as necessary.
\subsection{Adapting Maps to Depth Mispredictions}
In Figure \ref{fig:depthNN}, a lightweight depth network with fewer parameters or layers induces significant inaccuracies in the predicted depth map; therefore, the accuracy of pose estimation suffers. We discuss integrated learning of depth and pose reasoning to overcome such deficiencies of the lightweight predictor. In Figure \ref{fig:mainfig}, we integrate a multi-layer perceptron (MLP)-based learnable transform (size: 3$\times$128$\times$3) into the original point-cloud (PC) map, which minimizes the impact of the lightweight depth predictor by translating and/or rotating map points adaptively to the predictor's systematic inaccuracies. The last layer of the depth predictor is also tuned. Joint training of the map transformations and the depth predictor is quite expensive since each update iteration involves nested sampling and particle filtering steps. The complexity of parameter filtering can be significantly reduced using techniques such as hierarchical GMM representations \cite{Dhawale-2020-121381}, beat-tracking \cite{heydari2021don}, \textit{etc.}; however, the resultant formulation is non-differentiable, precluding gradient descent-based optimization. To circumvent the training complexity, instead of directly minimizing the $\ell_2$ norm between the predicted and ground truth pose trajectories, we minimize the negative log-likelihood (NLL) of the input image projection, via the lightweight depth predictor, onto the adapted domain maps. Thus, due to the differentiability of the corresponding loss function, the training can be efficiently implemented using standard optimization tools. However, such indirect training of the map transforms and the depth network is susceptible to overfitting: the loss function focuses on a minimal number of mixture functions in the proximity of the ground truth and can significantly distort the structural correspondence among the original mixture functions.
To alleviate this, we also regularize the loss function by penalizing the distance between the original and adapted maps using the KL (Kullback--Leibler) divergence. Thus, the loss function for the joint training of the map transforms and the depth layer is given as:
\begin{equation}
\mathcal{L}(\theta_\text{M}, \theta_\text{D}) = -\sum_i\text{log}\mathcal{M}_{A,\theta_\text{M}}(\mathcal{D}_{\mathcal{T}_I^i,\mathcal{T}_L^i,\theta_\text{D}}) + \lambda D_\text{KL}(\mathcal{M},\mathcal{M}_A)
\end{equation}
Here, $\theta_\text{M}$ are the parameters of the map transformation, and $\theta_\text{D}$ are the parameters of the last layer of the depth predictor. For a trajectory $\mathcal{T}$, $\mathcal{T}_I$ represents the set of input images and $\mathcal{T}_L$ the corresponding pose labels. $\mathcal{M}$ is the original domain map, and $\mathcal{M}_A$ is the map adapted to compensate for the inaccuracies of the lightweight predictor. Both $\mathcal{M}$ and $\mathcal{M}_A$ are represented as GMMs. $\mathcal{D}_{\mathcal{T}_I^i,\mathcal{T}_L^i,\theta_\text{D}}$ is the projection of the predicted depth map of trajectory image $\mathcal{T}_I^i$ into 3D space using a pin-hole camera model and assuming the camera pose to be at the ground truth label $\mathcal{T}_L^i$.
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{Figures/pc-pics.pdf} \caption{(a) Merged original and transformed point-cloud maps. (b) Bounding boxes of the two maps, with the original map in yellow and the transformed map in red. (c) Relative distance coloring on the reference map data. (d) Histogram of cloud-to-cloud distances.} \label{fig:PCtransforms} \end{figure}
In (1), the regularization term requires computing the KL divergence between the original and adapted maps, namely $\mathcal{M}$ and $\mathcal{M}_A$, respectively. The KL divergence of two Gaussian functions is defined in closed form but cannot be analytically extracted for two GMMs. In the proposed framework, the original and adapted maps, $\mathcal{M}$ and $\mathcal{M}_A$, have the same number of mixture components, and with a strong enough regularization coefficient ($\lambda$), the relative correspondence among mixture functions is maintained, i.e., for the i$^\text{th}$ mixture function in $\mathcal{M}$, the nearest mixture function in $\mathcal{M}_A$ has the same index. Leveraging these attributes, the KL divergence of $\mathcal{M}$ and $\mathcal{M}_A$ can be approximated using Goldberger's approximation as \cite{hershey2007approximating}
\begin{equation}
D_\text{KL}(\mathcal{M},\mathcal{M}_A) \approx \sum_i \pi_i \Big(D_\text{KL}(M_i,M_{A,i}) + \log \frac{\pi_i}{\pi_{A,i}}\Big)
\end{equation}
Here, $M_i$ is the i$^\text{th}$ mixture component of $\mathcal{M}$, and $M_{A,i}$ is the corresponding component in $\mathcal{M}_A$. $\pi_i$ is $M_i$'s weight and $\pi_{A,i}$ is $M_{A,i}$'s weight. The KL divergence of $M_i$ and $M_{A,i}$, i.e., $D_\text{KL}(M_i,M_{A,i})$, is analytically defined. Thus, $D_\text{KL}(\mathcal{M},\mathcal{M}_A)$ can be efficiently computed and is differentiable. Figure \ref{fig:PCtransforms} shows the point cloud adaptations of Scene-02 in the RGBD scenes dataset \cite{lai2014unsupervised} using this method. Figure \ref{fig:PCtransforms}(a) contains both the original and adapted point-cloud (PC) maps. In Figure \ref{fig:PCtransforms}(b), the reference (original) map's 3D points are in yellow while the adapted PC is in red to highlight the adaptation difference.
In Figure \ref{fig:PCtransforms}(c), the reference point cloud's 3D points are color-coded based on the relative distance of the corresponding points in the adapted map. The cloud-to-cloud (C2C) distance histogram is shown in Figure \ref{fig:PCtransforms}(d). Thus, the results demonstrate that only minimal tweaking of the map data is sufficient to improve pose accuracy (evident in later results) despite extremely lightweight depth prediction.
\begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{Figures/paramplots.pdf} \caption{(a) Predicted pose trajectories using the integrated depth estimator and pose filter for varying depth network sizes, without joint training of the depth estimator and pose filter. (b) Pose predictions for varying depth network sizes using the proposed technique, with joint training of the learnable map transforms and the lightweight depth predictor. (c) The structural similarity index of the depth predictions decreases as the network size is reduced. (d) Comparison of pose errors (RMSEs) for the baseline and the proposed technique.} \label{fig:paramvarplots} \end{figure}
\vspace{-1em}
\section{Results and Discussion}
Figures \ref{fig:paramvarplots}(a) and (b) compare the predicted pose trajectory (for varying depth network sizes) from the proposed monocular localization against an equivalent framework where joint training of the depth network and the filtering model is not performed. The comparison uses the RGBD scenes dataset \cite{lai2014unsupervised}. Figure \ref{fig:paramvarplots}(c) shows the corresponding degradation in the depth images, measured using the structural similarity index measure (SSIM). In Figure \ref{fig:paramvarplots}(d), despite significant degradation in depth image quality and a reduction of the depth predictor to one-third of its parameters, the proposed joint training maintains pose prediction accuracy by learning and adapting against systematic inaccuracies in the depth prediction. Another crucial feature is that the original depth predictor can be trained on any dataset and then tuned (on the last layer) for the application domain. For example, in the presented results, the original depth network was trained on NYU-Depth \cite{silberman2012indoor} and applied on RGBD scenes \cite{lai2014unsupervised}. Thus, the predictor has access to vast training data that can be independent of the application domain. Figure \ref{fig:lightvarplots} demonstrates the resilience of the proposed crossmodal pose prediction, compared to a DNN, under extreme lighting variations. An equivalent MobileNet-v2-based PoseNet \cite{kendall2015posenet} is utilized as the DNN for the comparisons. On the top, input images are subjected to extreme lighting variations using the models in \cite{lai2014unsupervised} (L1: high brightness, L2: medium light, and L3: very dim light). Figure \ref{fig:lightvarplots}(a) compares trajectories from PoseNet and our framework (with and without the joint training). In all cases, equivalent-sized models are considered, as shown in Table I. In Figure \ref{fig:lightvarplots}(b), our framework is significantly more accurate than PoseNet in very dim light (L3) conditions due to the in-built filtering loops, demonstrating the superiority of crossmodal estimates over DNN-only estimates.
\begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{Figures/lightvarplots.pdf} \caption{On top: Indoor RGB images captured in different lighting conditions.
(a) Comparison of pose trajectories for the MobileNet-v2-based PoseNet, the baseline GMM map-based pose filtering, and the proposed integrated framework of depth estimator and pose filter in various lighting conditions. (b) Pose error (RMSE) plot in very dim light for the various models.} \label{fig:lightvarplots} \end{figure} \vspace{-2em} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Figures/table-dpf.pdf} \label{fig:table} \end{figure} \section{Conclusions} We presented a novel monocular localization framework that jointly learns depth estimates and map transforms. Compared to standard DNNs for pose estimation, such as PoseNet, the proposed approach is significantly more tolerant to model size reduction and environmental variations. The open representation of map and depth estimates in our approach also allows faster updates and better resource efficiency by making intermediate feature maps available for other automation objectives, such as depth maps for obstacle avoidance.
\section{Introduction} The goal of this paper is to present an alternative but equivalent approach to the classical construction of L-functions for automorphic representations of $\GL(1)$. Philosophically, we are trying to answer the question ``where does an L-function live?'' in the simplest case, Tate's classical construction of Hecke L-functions for $\GL(1)$. Let $F$ be a number field. There are several ways to construct the L-function for an automorphic representation over $F$. For example, one has Tate's thesis for Hecke L-functions for $\GL(1)$, or the methods of Jacquet--Langlands and Godement--Jacquet for $\GL(n)$. Regardless of the chosen method, one has to choose a \emph{test function} out of some space (e.g., $S(\AA_F)$ for Tate's thesis), to which a zeta integral is assigned. If one chooses the ``right'' test function, whose zeta integral is a GCD of all possible ones, then the resulting zeta integral is called an L-function. Instead, our approach is to construct a ring $\cS$ and an $\cS$-module $\cL$ of all zeta integrals. It turns out that the module of zeta integrals $\cL$ already contains the same data as the automorphic L-function for $\GL(1)$, and can be thought of as a categorification of the L-function. In this language, the GCD procedure usually used to select a specific L-function out of all zeta integrals becomes a search for a \emph{generator} of the module $\cL$. If we further specialize to totally real $F$, we can also give an algebro-geometric interpretation to our construction. Let us give a brief overview. Suppose that $F$ is totally real. Denote by $\Chars(F)$ the space of Hecke characters of $F$. In this paper, we are concerned with the definition of Hecke L-functions as functions on the space $\Chars(F)$. In this setting, the module $\cL$ can be turned into a sheaf of modules on $\Chars(F)$. Sections of this module that are supported on finitely many connected components are zeta integrals, and generators of this module are Hecke L-functions. Similarly, $\cS$ can be turned into a sheaf of rings of functions on the space $\Chars(F)$. We will use this point of view to algebraically reformulate and categorify several important properties of Hecke L-functions. Put differently, we are proposing that it is beneficial to think of L-functions in more geometric terms, such as modules and sheaves, instead of as specific functions. From this point of view, the existence of an actual \emph{function} which generates our module $\cL$, the L-function itself, is almost coincidental to the theory. Even if the module $\cL$ did not in fact have a generator, the theory could proceed without a problem. Moreover, it turns out that thinking of L-functions in terms of the module of zeta integrals that they generate has unexpected benefits. In the same manner that thinking of vector spaces in basis-free terms allows one to shift the emphasis from the properties of specific elements of vector spaces to the properties of maps between them, one can use this idea to produce new results. In \cite{abst_aut_reps_arxiv}, the author compares two formulas for zeta integrals for $\GL(2)$ that give the same L-function: the Godement--Jacquet construction and the Jacquet--Langlands construction. The fact that they give the same L-function is well-known. But enhancing this into a correspondence between modules of zeta integrals turns out to induce a novel multiplicative structure on the category of $\GL(2)$-modules, with applications to the theory of automorphic representations.
It must be noted that a similar approach has been taken before by Connes (e.g. Section~3.3 of \cite{riemann_F_one}), and Meyer in \cite{zeta_rep}. Connes and Meyer are able to, in the case of $\GL(1)$, canonically construct a virtual representation whose spectrum is the zeroes (minus the poles) of the L-function. This works because for $\GL(1)$, the module $\cL$ is locally free of rank $1$ over $\cS$, which allows one to attach a corresponding divisor $[\cL]-[\cS]$ over the spectrum of $\cS$. The virtual representation constructed by Connes and Meyer is this divisor, and in fact their construction of it goes through constructing our $\cS$ and $\cL$. However, Connes and Meyer put the focus on the ``divisor'' $[\cL]-[\cS]$ instead of the locally free module $\cL$ itself. \date{\textbf{Acknowledgements: }The author would like to thank his advisor, Joseph Bernstein, for providing the inspiration for this paper, and going through many earlier versions of it. The author would also like to thank both Shachar Carmeli and Yiannis Sakellaridis for their great help improving the quality of this text.} \subsection{Detailed Summary} Let us give a more detailed account of this paper's main ideas. Let $\chi\in\Chars(F)$ be some Hecke character $\chi\co\AA^\times/F^\times\ra\CC^\times$. Tate's classical construction of the complete Hecke L-function $\Lambda(\chi,s)$ of $\chi$ works by a GCD procedure. One defines a zeta integral \begin{equation} \label{eq:tate_zeta_integral} \int_{\AA^\times}\Phi(x)\chi(x)\abs{x}^s\dtimes{x} \end{equation} for every appropriate \emph{test function} $\Phi\in S(\AA)$. These zeta integrals are all meromorphic functions of $s\in\CC$. Every test function gives a zeta integral; however, some zeta integrals are distinguished. It turns out that the collection of zeta integrals has a greatest common divisor, a meromorphic function $\Lambda(\chi,s)$ such that all zeta integrals are multiples of $\Lambda(\chi,s)$ by an entire function. In this text, we will refer to any such GCD as a \emph{complete L-function} of $\chi$. Note that this terminology is usually used to refer to a specific standard choice of GCD. The above GCD procedure only defines $\Lambda(\chi,s)$ up to multiplication by an entire invertible function. For some properties of the L-function, this is enough. For example, the zeroes of the L-function are well-defined in this formalism. This formulation also naturally satisfies a functional equation. However, there are other questions that one would like to ask about the L-function that do not work quite as well with this definition. For example, one is often interested in special values of the L-function. Therefore, in order to speak about specific values of $\Lambda(\chi,s)$, there exists a standard (somewhat ad hoc) choice of GCD, made place-by-place over $F$. This expresses $\Lambda(\chi,s)$ as a product of standard L-factors over all places of $F$. As another example, the growth of the L-function in vertical strips (as $s\ra\sigma\pm i\infty$) is an important problem. However, with the standard choice of GCD, the function $\Lambda(\chi,s)$ decreases rapidly in vertical strips. This is due to the choice of L-factor at the Archimedean places, which decreases so fast that all other behavior becomes irrelevant. In order to remedy this, the usual approach is to work with the \emph{incomplete L-function} $L(\chi,s)$, which simply drops the L-factors at $\infty$. In this paper, we will give an alternative approach to the GCD construction.
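\begin{remark} Before describing the alternative approach, let us recall the most classical instance of the GCD picture (this example is standard, and is included only for orientation). Take $F=\QQ$, let $\chi$ be the trivial character, and choose the test function $\Phi=\one_{\widehat{\ZZ}}\otimes e^{-\pi x^2}\in S(\AA)$. The zeta integral~\eqref{eq:tate_zeta_integral} then factors over the places of $\QQ$ and evaluates, for $\Re{s}>1$, to \begin{equation*} \pi^{-s/2}\Gamma\left(\frac{s}{2}\right)\zeta(s)=\Lambda(s), \end{equation*} the completed Riemann zeta function, which is the standard choice of complete L-function in this case. \end{remark}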
We define a ring $\cS$ of functions over $\Chars(F)$, and a module $\cL$, which we think of as the module of zeta integrals. Our view is that the fundamental object of the theory is this module of zeta integrals. To relate this to the standard view, we will establish a canonical correspondence between generators of $\cL$, i.e. isomorphisms of modules \[ t\co\cS\xrightarrow{\sim}\cL, \] and a set of functions on $\Chars(F)$ which are complete L-functions under the standard formulation of the GCD procedure (see Construction~\ref{const:gens_are_L_funcs}). In this manner, claims about Hecke L-functions (growth, factorization under Abelian extensions, etc.) can be refined to explicit claims about the module $\cL$. The main point is that we provide a geometric viewpoint on the GCD definition of L-functions, i.e. we exhibit L-functions as generators of a module. A less formal, but perhaps more descriptive, explanation is as follows. We introduce the ring $\cS$, its module $\cL$, and a canonical isomorphism of $\cL$ with an extension $\cS'$ of $\cS$ after base change: \[ \cS'\otimes_\cS \cL\xrightarrow{\sim}\cS'. \] Geometrically, the extension $\cS\subseteq\cS'$ corresponds to restricting to a specific right-half-plane. This turns $\cL$ into a kind of ``divisor'' on the space $\Chars(F)$, supported on the corresponding left-half-plane. This ``divisor'' turns out to be principal (in an informal sense), and L-functions are precisely those functions which generate it (see Remark~\ref{remark:L_is_divisor} for more details). The virtual representation constructed by Meyer in \cite{zeta_rep} is the skyscraper sheaf at this divisor, $[\cL]-[\cS]$. On top of providing an interesting perspective, this will be useful in several immediate ways. First of all, the proposed reformulation will enable us to refine some properties of L-functions as algebraic statements about the module $\cL$. We will include examples such as the functional equation, and more precise statements about decomposition properties of L-functions under Abelian extensions. Specifically, it turns out that the fact that L-functions of Abelian extensions decompose into a product of L-functions of characters can be categorified into an explicit canonical decomposition of the corresponding spaces of zeta integrals. See Remark~\ref{remark:cubic_L_decomposes} for an interesting application. Moreover, the set of functions on $\Chars(F)$ given by the algebraic construction is somewhat smaller than that given by the GCD construction. This means that for some statements about the growth of the L-function as $s\ra\sigma\pm i\infty$, no non-canonical choice of representative is necessary. Let us elaborate. It turns out that, among the complete L-functions given by the GCD construction, those that correspond to generators of $\cL$ always have an additional property: they have moderate growth and decay properties in vertical strips. Hence, it makes sense to talk about their growth without introducing the incomplete L-function, or even choosing a standard representative. In other words, the geometric point of view specifies not only the zeroes of the L-function, but also its growth properties. Technically, this happens because the new formalism rejects the standard L-factors at the Archimedean places, which decay much too quickly in vertical strips to generate $\cL$. Instead, different Archimedean local L-factors are required, which ``fixes'' their growth.
See Remark~\ref{remark:growth_of_L_in_vertical_strips} and Appendix~\ref{app:non_canonical_triv} for further discussion of this issue. Another kind of result that can be achieved using the geometric point of view is the trace formula given in \cite{zeta_rep}, where Meyer essentially derives the explicit formula for Hecke L-functions from the geometric construction without any reference to the underlying L-function. However, we will not pursue this direction here. The organization of this paper is as follows. Section~\ref{sect:dixmier_malliavin} contains a brief reminder about the Dixmier-Malliavin theorem, following \cite{dixmier_malliavin_for_born_arxiv}. Section~\ref{sect:module_S} introduces the ring $\cS$ of functions on $\Chars(F)$, and studies its properties. Section~\ref{sect:module_L} introduces the module $\cL$. Section~\ref{sect:canonical_triv} establishes the correspondence between generators of $\cL$ and L-functions. Section~\ref{sect:functional_equation} re-introduces the functional equation in the new language. Section~\ref{sect:decomposition_under_ext} studies the decomposition properties of L-functions under Abelian extensions. Finally, Appendix~\ref{app:non_canonical_triv} shows that a generator for the module $\cL$ actually exists, and discusses the various local L-factors given by the new formalism. \begin{remark} While $\GL(1)$ is a nice test case, the theory of L-functions and automorphic representations often focuses on higher rank reductive groups, such as $\GL(n)$. While the contents of this text can be generalized in various ways to (say) $\GL(2)$, the non-commutativity of the group adds a severe technical complication that we will not handle here. It is the author's belief that the symmetric monoidal structure constructed in \cite{abst_aut_reps_arxiv}, which seems to help $\GL(2)$ manifest some ``commutative-like'' phenomena, would be useful in such an endeavor. See Remarks~3.1 and~4.1 of \cite{abst_aut_reps_arxiv} for details on these phenomena. \end{remark} \begin{remark} When working over number fields, a major technical tool that we will use is the notion of a \emph{bornological vector space}. This notion greatly resembles that of the more commonly used \emph{topological vector space}, but is technically simpler and more suitable for studying the representation theory of locally compact groups. For example, the ring $\cS$ and module $\cL$ can be given the structure of a bornological ring and bornological module, respectively. However, bornologies are not the focus of this paper. Rather, we consider bornological vector spaces because they admit a strong version of the Dixmier-Malliavin theorem, which we want to use. Specifically, we will need the variant given in \cite{dixmier_malliavin_for_born_arxiv}. Moreover, this notion is not actually needed when $F$ is a function field, as the Dixmier-Malliavin theorem becomes trivial in this case. Readers can safely take $F$ to be a function field, and consequently ignore all bornological structures appearing in this text. We will give a brief reminder on the Dixmier-Malliavin theorem in Section~\ref{sect:dixmier_malliavin}. \end{remark} \section{Reminder on the Dixmier-Malliavin Theorem} \label{sect:dixmier_malliavin} This section is a brief reminder (closely following \cite{dixmier_malliavin_for_born_arxiv}) about bornological vector spaces and their relation to the Dixmier-Malliavin theorem, both of which are used in the main body of the text. This section should not be considered an original contribution.
Readers who are only interested in the case of L-functions over function fields can safely skip this section and ignore all mentions of bornological structures. Bornological vector spaces will be used in much of this text as a technically favorable alternative to topological vector spaces. In many applications, the two notions are very similar. However, while they are close enough that many theorems can be successfully stated both in terms of bornologies and in terms of topologies, there are still cases where one language is preferable to the other. For example, there are some theorems (especially in the representation theory of locally compact groups) that have many technical requirements when stated in the traditional language of topological vector spaces. However, these extra technical assumptions disappear when the theorem is stated in the language of bornological vector spaces instead. The main way we will use the notion of a bornological vector space in this text is through the stronger variant of the Dixmier-Malliavin theorem that such spaces support. More concretely, we will use bornological structures to prove that certain rings satisfy a property called quasi-unitality. Specifically, we say that a (non-unital) ring $R$ is \emph{quasi-unital} if the product map: \[ R\otimes_R R\ra R \] is an isomorphism, where the relative tensor product $R\otimes_R R$ is the quotient of $R\otimes R$ by all expressions of the form \[ (ab)\otimes c-a\otimes(bc). \] Similarly, if $R$ is a quasi-unital ring, and $M$ is an $R$-module, then we say that $M$ is \emph{smooth} if the action map: \[ R\otimes_R M\ra M \] is an isomorphism. Once again, we take $R\otimes_R M$ to be the relative tensor product. Let $G$ be an algebraic group, and let $F$ be a number field. Let $G(\AA)=G(\AA_F)$ denote the adelic points of $G$ over $F$. Let $C_c^\infty(G(\AA))$ be the (non-unital) ring of smooth and compactly supported functions on $G(\AA)$, equipped with the convolution product. The ring $C_c^\infty(G(\AA))$ is naturally a bornological non-unital ring. The main result we need from \cite{dixmier_malliavin_for_born_arxiv} is the following variant of the Dixmier-Malliavin theorem: \begin{theorem}[Theorem~5.1 of \cite{dixmier_malliavin_for_born_arxiv}] \label{thm:adelic_garding_is_smooth} The following hold: \begin{enumerate} \item The ring $C_c^\infty(G(\AA))$ is quasi-unital. \item Let $V$ be a complete bornological vector space, equipped with a smooth action of $G(\AA)$ on $V$. Then $V$ is smooth as a $C_c^\infty(G(\AA))$-module. \end{enumerate} \end{theorem} \begin{remark} Essentially, what Theorem~\ref{thm:adelic_garding_is_smooth} means is that given some boundedness and analytic conditions on a representation of $G(\AA)$, that representation is a smooth $C_c^\infty(G(\AA))$-module. Note that while the prerequisites for the theorem are analytic in nature (the existence of a complete bornology on $V$, such that the action of $G(\AA)$ is smooth with respect to it), the consequence of the theorem -- smoothness as a module over a quasi-unital ring -- is purely algebraic: \begin{equation} \label{eq:smoothness_of_module} C_c^\infty(G(\AA))\otimes_{C_c^\infty(G(\AA))}V\xrightarrow{\sim} V. \end{equation} That is, Equation~\eqref{eq:smoothness_of_module} holds as an isomorphism of vector spaces, and requires no notion of bornology or completeness to state. Our uses of bornologies in this paper will almost exclusively be as a way to establish this algebraic property.
That is, the main focus of this paper is algebraic, rather than analytic. \end{remark} \begin{remark} The proof of Theorem~5.1 of \cite{dixmier_malliavin_for_born_arxiv} is written for the case of a Lie group $G$. The generalization to the adelic case follows Remark~5.3 of \cite{dixmier_malliavin_for_born_arxiv}. \end{remark} \begin{remark} This theorem becomes trivially true in the case that $F$ is a function field, and $C_c^\infty(G(\AA))$ is taken to be the ring of smooth and compactly supported functions on $G(\AA)$ in the usual sense, i.e., locally constant functions. In this case, no assumptions about bornologies are necessary. \end{remark} For more details, we direct the reader to \cite{dixmier_malliavin_for_born_arxiv}. See also \cite{borno_quasi_unital_algs2}, which discusses bornological structures in representation theory specifically, and \cite{borno_vs_topo_analysis}, which deals more generally with bornologies and functional analysis. \section{The Space of Automorphic Functions \texorpdfstring{$\cS$}{S}} \label{sect:module_S} Let $F$ be a number field (not necessarily totally real), and let $\Chars(F)$ be the space of Hecke characters $\chi\co\AA_F^\times/F^\times\ra\CC^\times$ with its complex analytic topology. That is, $\Chars(F)$ is a countable disjoint union of copies of $\CC$, each parametrized by $\abs{\cdot}^s\chi$ with $s\in\CC$, for some unitary Hecke character $\chi$. We will say that the component of all Hecke characters of the form $\abs{\cdot}^s\chi$ is the component \emph{corresponding to $\chi$}. \begin{remark} \label{remark:vertical_strips} We will often be speaking about functions on $\Chars(F)$ that are rapidly decreasing in vertical strips. This should be taken to mean that $\abs{f(\sigma+it)}\ra 0$ as $\abs{t}\ra\infty$ faster than any polynomial, uniformly in vertical strips $a\leq \sigma\leq b$, on every copy of $\CC$ separately. \end{remark} On the space $\Chars(F)$, there is a ring of (Mellin transforms of) Bruhat-Schwartz functions, $\cS_F$. Our main construction is a module $\cL_F$ over $\cS_F$, whose generators will correspond to L-functions. In this section, we will construct the space $\cS_F$ itself, and establish some of its basic properties. Instead of directly defining this, let us define its Fourier-Mellin transform, which might be more easily accessible: \begin{definition} Let $\Schw_F=S(\AA^\times)$ denote the (non-unital) ring of Bruhat-Schwartz functions on $\AA^\times$. Specifically, we set \[ S(\AA^\times)=S(F_\infty^\times)\otimes{\bigotimes_{p}}'S(F_p^\times), \] where \begin{itemize} \item The symbol $\bigotimes'$ denotes the restricted tensor product over the finite (i.e., non-Archimedean) places of $F$, with respect to the characteristic function $\one_{\O_p^\times}$. \item For a non-Archimedean place $p$, the space $S(F_p^\times)$ is the space of smooth and compactly supported functions on $F_p^\times$. \item For the Archimedean places, the space $S(F_\infty^\times)$ is the space of smooth functions $f$ on $F_\infty^\times=\prod_{v|\infty}F_v^\times$, satisfying that \[ \abs{\chi(y)\cdot Df(y)} \] is bounded for all multiplicative characters $\chi\co F_\infty^\times\ra\CC^\times$ and differential operators $D$ in the universal enveloping algebra of the Lie algebra of the real group $F_\infty^\times$. \end{itemize} We give $\Schw_F=S(\AA^\times)$ a ring structure via the convolution product with respect to the standard Haar measure on $\AA^\times$ (normalized as in, e.g., page~46 of \cite{aut_reps_book_I}).
When the number field $F$ is clear from context, we will abuse notation and denote $\Schw=\Schw_F$. \end{definition} \begin{remark} The ring $\Schw$ naturally acquires a bornology as follows. We give each of the spaces $S(F_p^\times)$ the bornology consisting of the bounded subsets of its finite-dimensional linear subspaces. Likewise, we give $S(F_\infty^\times)$ the bornology consisting of the subsets where, for each $\chi$ and $D$, the expression $\abs{\chi(y)\cdot Df(y)}$ above is bounded uniformly in $f$ and $y$. The bornivorous topology associated to this bornology on $\Schw$ is the usual topology of Bruhat-Schwartz functions on $\AA^\times$. \end{remark} \begin{remark} \label{remark:schwartz_at_infty_decay_exp} An alternative way to view $S(F_\infty^\times)$ is as follows. By applying the product of the logarithm maps over the Archimedean places $v|\infty$, we can identify $F_\infty^\times$ with a disjoint union of a finite number of copies of $\RR^{r_1+r_2}\times\left(\RR/\ZZ\right)^{r_2}$. Under this identification, the space $S(F_\infty^\times)$ corresponds to smooth functions, all of whose derivatives decrease faster than any exponential in the \emph{logarithmic} coordinates $\RR^{r_1+r_2}$. \end{remark} \begin{remark} The reader should note that we are actually working with \emph{measures}, rather than \emph{functions}, on the space $\AA^\times$ (since we are taking their integrals and convolutions). However, for the sake of simplicity, we are relying on the standard Haar measure $\dtimes{g}$ (see, e.g., page~46 of \cite{aut_reps_book_I} for an explicit description of this standard normalization) to abuse notation and speak of functions regardless. This is done in order to simplify the notation and exposition, while hopefully not confusing the reader too much. \end{remark} \begin{remark} The bornological space $\Schw=S(\AA^\times)$ is complete, and the action of $\AA^\times$ on it is smooth. Thus, by Theorem~\ref{thm:adelic_garding_is_smooth} and Claim~3.20 of \cite{dixmier_malliavin_for_born_arxiv}, the ring $S(\AA^\times)$ is quasi-unital. Note that this is a purely algebraic property. \end{remark} \begin{definition} Let $\cS_F=\Schw_F/F^\times$ be the space of co-invariants of $\Schw_F=S(\AA^\times)$ by the action of $F^\times$ via multiplicative shifts. When the number field $F$ is clear from context, we will abuse notation and denote $\cS=\cS_F$. \end{definition} \begin{remark} We note that the map \[ f(g)\mapsto\sum_{q\in F^\times}f(qg) \] defines an isomorphism of $\cS=\Schw/{F^\times}$ with what is sometimes known as the space of Bruhat-Schwartz functions on $\AA^\times/F^\times$. Thus, we will sometimes refer to $\cS$ as the \emph{space of automorphic functions}. \end{remark} \begin{remark} \label{remark:coinv_are_closed} It is possible, via standard techniques, to construct a bounded section \[ \cS\ra\Schw \] of the canonical projection. In particular, $\cS$ is a complete bornological space. \end{remark} \begin{remark} Because $\Schw$ is commutative, the space $\cS$ is a ring. Once again, Theorem~\ref{thm:adelic_garding_is_smooth} and Claim~3.20 of \cite{dixmier_malliavin_for_born_arxiv}, along with the fact that $\cS$ is complete by Remark~\ref{remark:coinv_are_closed}, let us conclude that $\cS$ is quasi-unital. \end{remark} \begin{remark} \label{remark:paley_wiener} When $F$ is totally real, one can use the Paley-Wiener theorem to give an alternative description of the space $\cS$.
It is isomorphic via the Mellin transform to the space of functions on $\Chars(F)$ which are supported only on finitely many copies of $\CC$, and whose restriction to each copy is an entire function $f(s)$ which is rapidly decreasing in vertical strips. In this view, the bornology of $\cS=\bigoplus\cS_{\chi_0}$ is the direct sum bornology induced from the subspaces $\cS_{\chi_0}$ of functions $f\in\cS$ whose Mellin transform is supported on just one copy of $\CC$ (corresponding to $\chi_0\in\Chars(F)$). The bornology on the subspaces $\cS_{\chi_0}$ themselves is the von Neumann bornology for the topology generated by the semi-norms \[ \norm{\hat{f}(s)}_{\sigma,n}=\sup_{t\in\RR}\,(1+\abs{t}^n)\abs{\hat{f}(\sigma+it)} \] for $\sigma\in\RR$ and $n\geq 0$, with \[ \hat{f}(s)=\int_{\AA^\times/F^\times}f(y)\chi_0(y)\abs{y}^s\dtimes{y} \] the Mellin transform of $f$. \end{remark} \begin{remark} One can give a similar, but more complicated, description when $F$ is not totally real. See also Remark~\ref{remark:paley_wiener_at_C} for a similar issue. \end{remark} \begin{remark} \label{remark:schwatz_is_co_sheaf} A convenient way to assign geometric intuition to the space $\cS$ is to think of it as a \emph{co-sheaf} on $\Chars(F)$. This should express the fact that the Mellin transforms of elements of $\cS$ are supported on a finite number of copies of $\CC$. To be more explicit, suppose that $F$ is totally real as above. We define a co-sheaf on $\Chars(F)$ as follows. To each connected open set $U$ of $\Chars(F)$ (which necessarily lies inside a single copy of $\CC$, corresponding to $\chi_0\in\Chars(F)$), the co-sheaf assigns the subspace $\cS_{\chi_0}\subseteq\cS$ of functions whose Mellin transforms are supported on that copy. For arbitrary open sets $U\subseteq\Chars(F)$, we let the value of the co-sheaf be the direct sum of the values of the co-sheaf on the connected components of $U$. This turns $\cS$ into the global co-sections of a locally constant co-sheaf on $\Chars(F)$. \end{remark} \section{The Module of Zeta Integrals \texorpdfstring{$\cL$}{L}} \label{sect:module_L} We are now ready for the main construction of this paper. We describe a module (of bornological vector spaces) $\cL$ over the ring $\cS$. The elements of this module will correspond to zeta integrals, and its generators $t\co\cS\xrightarrow{\sim}\cL$ will correspond to L-functions. We will show this correspondence explicitly in Section~\ref{sect:canonical_triv}. \begin{definition} Let $S(\AA)$ be the $\Schw=S(\AA^\times)$-module of Bruhat-Schwartz functions on $\AA$. Specifically, we set \[ S(\AA)=S(F_\infty)\otimes{\bigotimes_{p}}'S(F_p), \] where \begin{itemize} \item The symbol $\bigotimes'$ denotes the restricted tensor product over the finite (i.e., non-Archimedean) places of $F$, with respect to the characteristic function $\one_{\O_p}$. \item For a non-Archimedean place $p$, the space $S(F_p)$ is the space of smooth and compactly supported functions on $F_p$. \item For the Archimedean places, the space $S(F_\infty)$ is the space of Schwartz functions $f$ on $F_\infty=\prod_{v|\infty}F_v$. \end{itemize} \end{definition} \begin{remark} Note that $S(\AA)$ is a bornological $\Schw$-module. \end{remark} \begin{remark} Let us take a different point of view on $S(F_\infty)$. Under the identification of $F_\infty^\times$ as a finite disjoint union of copies of $\RR^{r_1+r_2}\times\left(\RR/\ZZ\right)^{r_2}$ (as in Remark~\ref{remark:schwartz_at_infty_decay_exp}), we can look at the restriction $f|_{F_\infty^\times}$ of a function $f\in S(F_\infty)$.
This restriction approaches a limit as any subset of the coordinates approaches $-\infty$, and decays faster than any exponential as any of the coordinates approaches $\infty$. \end{remark} \begin{definition} We let $\cL_F$ denote the vector space of $F^\times$ co-invariants of $S(\AA)$, where $F^\times$ acts via multiplication on $\AA$. When the number field $F$ is clear from context, we will omit it from the notation and denote $\cL=\cL_F$. \end{definition} \begin{remark} It is immediate that $\cL$ is a bornological $\cS$-module. \end{remark} \begin{remark} Just like in Remark~\ref{remark:schwatz_is_co_sheaf}, one can gain some geometric intuition about $\cL$ by thinking of it as a co-sheaf. Suppose that $F$ is totally real. Then one can use the co-sheaf structure on $\cS$ defined in Remark~\ref{remark:schwatz_is_co_sheaf}, combined with the module structure of $\cL$, to turn $\cL$ into a co-sheaf as well. To be as explicit as possible, we construct a co-sheaf on $\Chars(F)$ as follows. To each connected open set $U$ of $\Chars(F)$, which necessarily lies inside a single copy of $\CC$ corresponding to some $\chi_0\in\Chars(F)$, the co-sheaf assigns the subspace $\cL_{\chi_0}=\cS_{\chi_0}\cdot \cL\subseteq\cL$. For arbitrary open sets $U\subseteq\Chars(F)$, we let the value of the co-sheaf be the direct sum of its values on the connected components of $U$. This turns $\cL$ into the global co-sections of a co-sheaf. \end{remark} General $\cS$-modules can be quite badly behaved. However, the specific $\cS$-module $\cL$ is as nice as it can be -- in fact, it is secretly isomorphic to $\cS$, albeit in a non-canonical fashion. This fact will be proven in Appendix~\ref{app:non_canonical_triv}. More specifically, combining Remark~\ref{remark:coinv_are_closed} with Claim~\ref{claim:A_is_A_times}, we see that the bornological vector space $\cL$ is a complete, smooth $\cS$-module. In particular, we have the algebraic property that the action map \[ \cS\otimes_\cS\cL\xrightarrow{\sim}\cL \] is an isomorphism. \section{Correspondence with L-Functions} \label{sect:canonical_triv} One way to think of $\cL$ is as a module of zeta integrals. Indeed, given a function $f\in \cL=S(\AA)_{/F^\times}$, by restricting it to $\AA^\times/F^\times$ and applying the Mellin transform, we precisely get a zeta integral. This is the content of Construction~\ref{const:L_into_S_prime} below, which will let us relate $\cL$ to the classical notion of an L-function, via Construction~\ref{const:gens_are_L_funcs}. Let us give some more details. In order to have a well-defined notion of zeta integral, we will need to extend the ring $\cS$ to a sufficiently large ``ring of periods'' $\cS'$, which contains all the necessary integrals. We will think of the extension $\cS\subseteq\cS'$ as the space of holomorphic functions on some ``right-half-plane'' in $\Chars(F)$. Along with $\cS'$, we will obtain a canonical trivialization \begin{equation} \label{eq:informal_triv} \cS'\otimes_\cS\cL\xrightarrow{\sim}\cS', \end{equation} sending every element of $\cL$ (which is, essentially, a test function coming from $S(\AA)$) to its corresponding zeta integral. This is the algebraic structure which captures, in our setting, the notion of a zeta integral. \begin{remark} \label{remark:L_is_divisor} There is another point of view on this issue as well, with a more geometric flavor. Consider the following informal analogy. Let $X$ be a scheme, with sheaf of functions $\O_X$. In our analogy, these correspond to $\Chars(F)$ and $\cS$, respectively.
Suppose that we are given some localization $\O_X\subset\O_{X'}$ (which is our $\cS'$), allowing poles at certain places of $X$. A line bundle on $X$ is a locally free $\O_X$-module of rank one. The data of a line bundle, along with its trivialization after base change to $\O_{X'}$, is the data of a \emph{divisor} supported on $X-X'$. Therefore, geometrically, the data of the module $\cL$ along with its trivialization~\eqref{eq:informal_triv} can be thought of as a kind of ``divisor'' on the space $\Chars(F)$. The fact that $\cL$ is non-canonically isomorphic to $\cS$ is then a statement that this divisor is principal, and a function giving this principal divisor is an L-function. \end{remark} Since our only real requirement from $\cS'$ is that it is sufficiently large, its construction is relatively ad hoc. We will outline one possible choice for $\cS'$ in Subsection~\ref{subsect:module_S_prime}. In Subsection~\ref{subsect:L_gen_is_L_func}, we will construct the canonical trivialization~\eqref{eq:informal_triv} after base-change to $\cS'$. \subsection{The Extension \texorpdfstring{$\cS'$}{S'}} \label{subsect:module_S_prime} In this subsection, we will construct the extension $\cS\subseteq\cS'$, such that $\cS$ and $\cL$ become canonically isomorphic after base change to $\cS'$. This data will be used to give a correspondence between isomorphisms $t\co\cS\xrightarrow{\sim}\cL$ of $\cS$-modules (which we think of as \emph{generators} of $\cL$) and certain L-functions $L_t$ on the space $\Chars(F)$. This correspondence justifies thinking of $\cL$ as containing the data of an L-function. See Construction~\ref{const:gens_are_L_funcs}. Our immediate goal is to extend $\cS$ just enough that $\cL$ also fits into it. This will be our choice of $\cS'$. Intuitively, we want $\cS'$ to correspond to holomorphic functions on some right-half-plane in $\Chars(F)$ where L-functions converge absolutely. Let us begin by explicitly constructing $\cS'$. Let $\norm{g}_{\AA}$ be the height function on $\AA$ given by \begin{equation*} \norm{g}_\AA=\prod_v\max\{\norm{g_v}_v,1\}, \end{equation*} which we restrict to $\AA^\times$. Similarly, we define a height function on $\AA^\times/F^\times$ by choosing the lowest lift: \[ \norm{g}_{\AA/F^\times}=\inf_{\stackrel{g'\in\AA^\times}{g\equiv g'\pmod{F^\times}}}\norm{g'}_\AA. \] \begin{construction} \label{const:S_prime} The ring extension $\cS'$ of $\cS$ is given by the space of smooth functions $f$ on $\AA^\times/F^\times$ such that: \begin{itemize} \item The stabilizer of $f$ is open in $\AA_\text{fin}^\times$. \item There is a bound: \[ \int_{\AA^\times/F^\times}\abs{g}^{1+\varepsilon}\norm{g}_{\AA/F^\times}^\sigma\cdot\abs{Df(g)}\dtimes{g}<\infty \] for all $\sigma<\infty$, $\varepsilon>0$ and $D$ in the universal enveloping algebra of the Lie algebra of $F_\infty^\times$. \end{itemize} \end{construction} \begin{remark} The convolution product turns $\cS'$ into a ring; this relies on the sub-multiplicativity \[ \norm{gg'}_{\AA/F^\times}\leq\norm{g}_{\AA/F^\times}\norm{g'}_{\AA/F^\times}. \] \end{remark} \begin{remark} The ring $\cS'$ has a natural bornological structure, with respect to which it is complete. By Theorem~\ref{thm:adelic_garding_is_smooth}, it follows that $\cS'$ is smooth over $\cS$. \end{remark} \begin{remark} \label{remark:S_prime_is_localization} The complete bornological ring $\cS'$ induces a localization functor \[ V\mapsto\cS'\widehat{\otimes}_\cS V \] on the category of complete bornological smooth $\cS$-modules.
Here, $\widehat{\otimes}_\cS$ is the completion of the relative tensor product. Indeed, we construct a natural morphism $V\ra\cS'\widehat{\otimes}_\cS V$ via \[\xymatrix{ V & \cS\otimes_\cS V \ar[l]_-\sim \ar[r] & \cS'\widehat{\otimes}_\cS V. }\] Thus, it remains to verify that the completion of the multiplication map \[ \cS'\widehat{\otimes}_\cS\cS'\ra\cS' \] is an isomorphism. This follows because $\cS'{\otimes}_\cS\cS'$ and $\cS'{\otimes}_{\cS'}\cS'$ share a dense subset $\cS$, with the same induced bornology. \end{remark} \begin{remark} \label{remark:S_prime_paley_wiener_in_lines} Suppose that $F$ is totally real. Then we can try giving a description of the image of $\cS'$ under the Mellin transform. This should give some geometric intuition about the ring $\cS'$. Indeed, using Remark~\ref{remark:paley_wiener}, we see that \[ \cS'=\bigoplus\cS'_{\chi_0}, \] where each $\cS'_{\chi_0}$ consists of functions on the right half-plane $\{\abs{\cdot}^s\chi_0\suchthat\Re{s}>1\}$ of $\Chars(F)$ which are analytic in this right half-plane and rapidly decreasing in vertical strips there (in the sense of Remark~\ref{remark:vertical_strips}). Recall that we choose unitary representatives $\chi_0$ from each connected component of $\Chars(F)$. \end{remark} \subsection{Canonical Trivialization of the Module of Zeta Integrals} \label{subsect:L_gen_is_L_func} Our goal for this subsection is to construct the isomorphism $\cS'\otimes_\cS\cL\xrightarrow{\sim}\cS'$, which will be induced from a map $\cL\ra\cS'$. This will let us provide the desired correspondence $t\mapsto L_t$ between generators $t\co\cS\xrightarrow{\sim}\cL$ and L-functions $L_t$. \begin{construction} \label{const:L_into_S_prime} There is a canonical morphism of bornological $\cS$-modules \[ \cL\ra\cS' \] given by: \[ f(g)\mapsto \sum_{q\in F^\times}f(qg). \] \end{construction} \begin{remark} One can similarly define a map \[ \cS\ra\cS', \] using the same formula. It is easy to check, using standard techniques, that the map $\cS\ra\cS'$ is injective. \end{remark} \begin{remark} We will see later (Corollary~\ref{cor:L_inj_into_S_prime}) that the morphism $\cL\ra\cS'$ is also injective. \end{remark} \begin{remark} The morphism $\cL\ra\cS'$ of Construction~\ref{const:L_into_S_prime} already encodes all zeta integrals. To see this, suppose we have some test function $\Psi\in S(\AA)$. Applying the map $\cL\ra\cS'$ along with the Mellin transform, we obtain the corresponding zeta integral via Equation~\eqref{eq:tate_zeta_integral}. \end{remark} Our main result for this section is that the map $\cL\ra\cS'$ of Construction~\ref{const:L_into_S_prime} presents $\cS'$ as the localization of $\cL$ under the functor $\cS'\widehat{\otimes}_\cS-$ of Remark~\ref{remark:S_prime_is_localization}. In other words, $\cS'$ defines a localization of the category of complete smooth $\cS$-modules under which $\cS$ and $\cL$ coincide. This will justify thinking of $\cL$, together with the data of the map $\cL\ra\cS'$, as defining a ``divisor'' on $\Chars(F)$. \begin{theorem} \label{thm:S_prime_L_isom} The morphism $\cL\ra\cS'$ of Construction~\ref{const:L_into_S_prime} induces an isomorphism: \[ \cS'\otimes_\cS\cL\xrightarrow{\sim}\cS'. \] \end{theorem} \begin{remark} In more classical terms, the content of Theorem~\ref{thm:S_prime_L_isom} is that the L-functions we are constructing have a zero-free right-half-plane. \end{remark} The proof of Theorem~\ref{thm:S_prime_L_isom} is fairly heavy, and will be postponed to the end of this section.
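\begin{remark} Let us record the unfolding computation that shows how Construction~\ref{const:L_into_S_prime} encodes zeta integrals; this is the standard manipulation from Tate's thesis, included here for the reader's convenience. Let $\Psi\in S(\AA)$ be a test function, and let $\bar{\Psi}(g)=\sum_{q\in F^\times}\Psi(qg)$ be its image under the map of Construction~\ref{const:L_into_S_prime}. For a Hecke character $\chi$ and $\Re{s}$ large enough that everything converges absolutely, \begin{equation*} \int_{\AA^\times/F^\times}\bar{\Psi}(g)\chi(g)\abs{g}^s\dtimes{g} =\int_{\AA^\times/F^\times}\sum_{q\in F^\times}\Psi(qg)\chi(g)\abs{g}^s\dtimes{g} =\int_{\AA^\times}\Psi(g)\chi(g)\abs{g}^s\dtimes{g}, \end{equation*} where the last equality uses the fact that $\chi(q)=\abs{q}=1$ for $q\in F^\times$. The right-hand side is exactly the zeta integral~\eqref{eq:tate_zeta_integral}. \end{remark}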
At this point, we have all of the constructions necessary for our notion of L-function. We claim that the module $\cL$, together with the canonical embedding $\cL\ra\cS'$, encodes the classical notion of L-function. We formalize this statement as the following construction: \begin{construction} \label{const:gens_are_L_funcs} Suppose that we are given some (non-canonical) isomorphism \[ t\co\cS\xrightarrow{\sim}\cL. \] Applying the functor $\cS'\otimes_\cS-$, we get a morphism of modules \[ t\co\cS'\xrightarrow{\sim}\cS'. \] Recall that this is sometimes called an element of the \emph{roughening} of $\cS'$. After conjugating by the Mellin transform, as in Remark~\ref{remark:S_prime_paley_wiener_in_lines}, the map $t$ acts as multiplication by a function $L_t$ on some right-half-plane of the space $\Chars(F)$. We refer to $L_t$ as the \emph{L-function corresponding to $t$}. \end{construction} \begin{remark} Construction~\ref{const:gens_are_L_funcs} is not vacuous. Specifically, Theorem~\ref{thm:L_is_triv} guarantees the existence of a generator $t\co\cS\xrightarrow{\sim}\cL$ as above. \end{remark} \begin{corollary} \label{cor:L_inj_into_S_prime} The morphism $\cL\ra\cS'$ is injective. \end{corollary} \begin{proof} Pick an isomorphism $t\co\cS\xrightarrow{\sim}\cL$, as in Theorem~\ref{thm:L_is_triv}. Then we get a commutative diagram: \[\xymatrix{ \cS \ar[d]^t & \Schw\otimes_\Schw\cS \ar[l]_-\sim \ar[r] & \cS'\otimes_\cS\cS \ar[d]^{\cS'\otimes_\cS\,\displaystyle t} \ar[r]^-\sim & \cS' \ar@{-->}[d]^{L_t} \\ \cL & \Schw\otimes_\Schw\cL \ar[l]_-\sim \ar[r] & \cS'\otimes_\cS\cL \ar[r]^-\sim & \cS'. }\] By Theorem~\ref{thm:S_prime_L_isom}, we can extend $\cS'\otimes_\cS t$ to an isomorphism $\xymatrix@1{\cS' \ar@{-->}[r]^{L_t} & \cS'}$. Since the top row of the diagram is an injective morphism, so is the bottom row. \end{proof} \begin{remark} Let $t\co\cS\xrightarrow{\sim}\cL$ be some generator of $\cL$. Then $L_t$ is an automorphism of $\cS'$. In particular, it has no zeroes with $\Re{s}>1$. Moreover, it satisfies some moderate growth condition in vertical strips. We note that this L-function is well-defined up to composing $t$ with an automorphism of $\cS$ as a module over itself, i.e., up to the roughening of $\cS$. This makes the L-function $L_t$ well-defined up to multiplication by an entire function without zeroes, such that both it and its inverse satisfy a similar moderate growth condition in vertical strips. \end{remark} \begin{remark} \label{remark:growth_of_L_in_vertical_strips} One should note that the above is slightly stronger than the corresponding classical claim. That is, when the L-function $\Lambda(\chi,s)$ is defined via the GCD procedure, it is only well-defined up to multiplication by an entire function with no zeroes, i.e., the moderate growth condition is absent. The author finds this interesting. Usually, when one considers growth conditions on, say, the Riemann zeta function, one takes the non-completed zeta function $\zeta(s)$. The reason for this is that the completed zeta function \[ \Lambda(s)=\pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right)\zeta(s) \] decreases very quickly in vertical strips, due to the presence of the gamma factor. In other words, the (interesting) growth behavior of the L-function in vertical strips is actually not a well-defined feature of the L-function when it is defined via the GCD procedure.
However, with the new formalism, a statement of the form ``the L-function grows slowly in vertical strips'' makes sense without modifying the definition of the L-function. We will discuss this further in Appendix~\ref{app:non_canonical_triv}. \end{remark} \begin{remark} It is also possible to redo the above definitions based on different variations of the ring $S(\AA^\times)$, and therefore of $\cS$. If, for instance, one chooses to loosen the smoothness requirements, this would result in Mellin transforms that are required to decrease in vertical strips, but not very quickly. That is, instead of rapidly decreasing in vertical strips, our functions would be required to decrease more slowly (say, at some fixed polynomial rate). In turn, this would make the roughening of $\cS$ smaller. The implication would be that the L-function's growth in vertical strips is more strictly controlled. This would make questions of Lindel\"of type well-defined. \end{remark} \begin{remark} \label{remark:almost_smooth} Another way to modify the definition uses something similar to the ``almost smooth'' functions of Appendix~A of \cite{almost_smooth_padic}. These functions are a natural modification of the notion of a smooth function on the $p$-adic part of $\AA^\times$. In brief: instead of taking locally constant functions, one takes functions whose Fourier coefficients are rapidly decreasing. Using a definition based on almost smooth functions, rather than smooth functions, should give a notion of L-function with well-defined growth properties not only in the complex $\pm i\infty$ direction, but also in the conductor aspect. This generalization will be explored elsewhere. \end{remark} Let us return to the proof of Theorem~\ref{thm:S_prime_L_isom}. \begin{proof}[Proof of Theorem~\ref{thm:S_prime_L_isom}] We have a morphism \[\xymatrix{ \cS'\otimes_{\cS}\cL \ar[r] & \cS', }\] and would like to construct an inverse. Such an inverse can always be induced from a map of bornological $\cS$-modules, \[\xymatrix{ \cS \ar@{-->}[r] & \cS'\otimes_{\cS}\cL. }\] Our strategy is as follows. We will construct maps \begin{equation} \label{eq:local_canonical_trivs}\xymatrix{ S(F_v^\times) \ar[r] & \cS'\otimes_{S(F_v^\times)} S(F_v) }\end{equation} for all places $v$, corresponding to a generator $t_v\co S(F_v^\times)\ra S(F_v)$ tensored with its inverse. Since the product defining an L-function converges absolutely in the right-half-plane of $\Chars(F)$ corresponding to $\cS'$, these maps will multiply together to yield the desired map \[\xymatrix{ \cS \ar[r] & \cS'\otimes_{\cS}\cL. }\] Indeed, we define $\cS_v=S(F_v^\times)$ and $\cL_v=S(F_v)$. We also define a local variant of the ring $\cS'$, as follows. Set the ring extension $\cS_v'$ of $\cS_v$ to be the G\r{a}rding space of the space of functions $f$ on $F_v^\times$ such that there is a bound: \[ \int_{F_v^\times}\abs{g}_v^{1+\varepsilon}\max\{1,\abs{g}_v\}^\sigma\cdot\abs{f(g)}\dtimes{g}<\infty \] for all $\sigma<\infty$, $\varepsilon>0$. We also require that $f$ be supported on a compact subset of $F_v$ when $v$ is finite. That is, when $v$ is finite, $\cS_v'$ is the space of locally constant functions on $F_v^\times$, supported on a compact subset of $F_v$, which do not increase much faster than $\abs{g}^{-1}$ near $0$. When $v$ is infinite, $\cS_v'$ is the space of smooth functions on $F_v^\times$ which are rapidly decreasing near $\infty$ and do not increase much faster than $\abs{g}^{-1}$ near $0$.
The reader should note that in the case where $F_v$ is non-Archimedean, the term ``G\r{a}rding space'' simply refers to the subspace of functions on which the action of $F_v^\times$ is locally constant. This is always the case when $F$ is a function field. Now, our claim is that the natural inclusion $\cS_v\hookrightarrow\cL_v$ induces an isomorphism \[\xymatrix{ \cS_v' \ar[r]^-\sim & \cS'_v\otimes_{\cS_v}\cL_v. }\] Indeed, the quotient $\cL_v/\cS_v$ is killed by any inverse of the local L-function at $v$, which is invertible in $\cS_v'$. So, composing the above with the map $\cS_v\ra\cS_v'$ gives the desired local maps of~\eqref{eq:local_canonical_trivs}. Thus, it remains to show the convergence property we are after. Specifically, we claim that for all finite places $v$, the image of the distinguished vector $\one_{\O_v^\times}$ is the element \[ (\one_{\O_v^\times}-\one_{\pi_v\O_v^\times})\otimes\one_{\O_v}\in\cS_v'\otimes_{\cS_v}\cL_v, \] where $\pi_v$ is a uniformizer. Thus, it is enough to check that the product \[ \prod_v (\one_{\O_v^\times}-\one_{\pi_v\O_v^\times}) \] converges in $\cS'$, which indeed holds. \end{proof} \section{Functional Equation} \label{sect:functional_equation} Our goal for this section is to provide a notion of functional equation and analytic continuation that is compatible with our formalism. In terms of the analogy of Remark~\ref{remark:L_is_divisor}, we are showing that the ``divisor'' corresponding to $\cL$ is symmetric. To do this, we will need to examine the operation of the Fourier transform on $\cL$ and its interaction with the embedding of $\cL$ in the space of functions on $\AA^\times/F^\times$. It will turn out that the compatibility (or lack thereof) of this embedding with the Fourier transform is deeply related to the poles of the L-function. We will delve into this issue thoroughly in the following two subsections. For the moment, let us fix some notation and state the main conclusions. We begin by defining a ring involution $\iota\co\cS\ra\cS$ by: \[ \iota(f)(g)=\abs{g}^{-1}f(g^{-1}). \] For an $\cS$-module $M$, we will denote by $\prescript{\iota}{}{M}$ the twist of $M$ by $\iota$. \begin{construction} \label{const:fourier} We define the isomorphism \[ \F\co S(\AA)\xrightarrow{\sim}\prescript{\iota}{}{S(\AA)} \] to be given by the Fourier transform. This isomorphism depends on a choice of additive character $\psi\co\AA/F\ra\CC^\times$. By taking co-invariants, we also obtain an isomorphism: \[ \F\co\cL\xrightarrow{\sim}\prescript{\iota}{}{\cL}, \] which no longer depends on the choice of $\psi$. \end{construction} This means that we have two spaces, $\cS$ and $\cL$, each equipped with an involution, $\iota$ and $\F$ respectively. These two spaces are embedded together in the bigger space $\cS'$. It turns out that the involutions $\iota$ and $\F$ \emph{coincide} on the intersection $\cS\cap\cL$ inside $\cS'$. In particular, one can define a pushout diagram of $\Schw$-modules: \begin{equation} \label{eq:S_ext_pushout} \xymatrix{ \cS\cap\cL \ar[r] \ar[d] & \cL \ar[d] \\ \cS \ar[r] & \cS+\cL=\cS_\ext, }\end{equation} where all of the spaces carry compatible involutions. Let us take a moment to delve into the intuitive meaning of the intersection $\cS\cap\cL$. Under the Mellin transform, $\cS$ should consist of entire functions on $\Chars(F)$ that are rapidly decreasing in vertical strips (satisfying some extra conditions). Similarly, $\cL$ should consist of such functions multiplied by the L-function on $\Chars(F)$.
In particular, $\cS\cap\cL$ should consist of all functions that have the same zeroes as the L-function, no poles, and satisfy some growth conditions. This means that we expect the intersection $\cS\cap\cL$ to be large, while the quotient $\P=\cL/(\cS\cap\cL)$ is small, and is related to the poles of the L-function. So, take for granted for the moment that we know that the quotient $\P=\cL/(\cS\cap\cL)=\cS_\ext/\cS$ is sufficiently ``small''. Then we are able to interpret the above diagram~\eqref{eq:S_ext_pushout} as giving a functional equation and analytic continuation for L-functions: the extension $\cS_\ext$ of $\cS$ is essentially the space of meromorphic functions with a small number of prescribed poles, and the map $\cL\ra\cS_\ext$ which sends a test function to its corresponding zeta integral respects the involutions on both sides. Hence, our problem is reduced to proving some smallness bounds on $\P$. Na\"ively, this sounds hard. The quotient $\cS/(\cS\cap\cL)$ contains all information about the zeroes of the L-function, and so an explanation is needed as to why the seemingly similar quotient $\P=\cL/(\cS\cap\cL)$ is so much easier to characterize. As it turns out, we are able to provide a surprisingly \emph{conceptual} reason for why the poles of the L-function are easier to study. Specifically, we embed $\cL$ in a space $\widetilde{\cS}$, which is somewhat larger than $\cS'$, but carries its own involution. We then show that the difference between the involutions of $\cL$ and $\widetilde{\cS}$ precisely captures the polar divisor $\P$. This allows us to isolate the desired quotient for study in a canonical way. The structure of the rest of this section is as follows. In Subsection~\ref{subsect:analytic_cont_func_eq}, we will formalize the above explanation about the various involutions involved. We will introduce a slightly different definition for $\P$, although it will turn out to be equivalent. In Subsection~\ref{subsect:polar_div}, we will study the properties of the polar divisor $\P$, and prove that it is the same as $\cL/(\cS\cap\cL)$. \subsection{Analytic Continuation and the Polar Divisor} \label{subsect:analytic_cont_func_eq} In order to be able to state a re-interpretation of the functional equation, we first need to change the target space of the map $\cL\ra\cS'$. The reason for this is that the space $\cS'$ is just too small; it is not symmetric under $\iota$. Instead, we define a new target space $\widetilde{\cS}$. It will have the advantage of being symmetric under $\iota$, but it will no longer be a ring. This will allow us to compare $\iota$ with $\F$. \begin{construction} \label{const:schw_prime_prime} Define the $\cS$-module $\widetilde{\cS}$ to be the G\r{a}rding space (i.e., the space of smooth vectors) of the space of functions $f$ on $\AA^\times/F^\times$ that are of moderate growth, with its natural bornology. \end{construction} \begin{remark} The space $\widetilde{\cS}$ is an adelic version of the space $A_\text{umg}(\Gamma\backslash G)$ of functions of uniform moderate growth (cf. \cite{schwartz_of_aut_quotient}). \end{remark} \begin{remark} The bornological space $\widetilde{\cS}$ is complete, and by Theorem~\ref{thm:adelic_garding_is_smooth} it is also smooth over $\cS$. In fact, it is precisely the smoothening of the dual of $\cS$. I.e., $\widetilde{\cS}$ is the \emph{contragredient} of $\cS$. \end{remark} \begin{remark} In the case where $F$ is a function field, no bornologies are necessary.
One can directly define $\widetilde{\cS}$ to be the contragredient of $\cS$, in the sense of being the space of locally constant functions on $\AA^\times/F^\times$ with no conditions on growth. \end{remark} \begin{remark} The isomorphism $\iota\co\cS\xrightarrow{\sim}\prescript{\iota}{}{\cS}$ extends to an isomorphism \[ \iota\co\widetilde{\cS}\xrightarrow{\sim}\prescript{\iota}{}{\widetilde{\cS}} \] via the same formula \[ \iota(f)(g)=\abs{g}^{-1}f(g^{-1}). \] \end{remark} \begin{construction} We define a map $\cL\ra\widetilde{\cS}$ via the composition \[ \cL\ra\cS'\ra\widetilde{\cS}. \] \end{construction} Given this embedding, one may naturally ask about the relation between the two involutions $\F\co\cL\ra\prescript{\iota}{}{\cL}$ and $\iota\co\widetilde{\cS}\ra\prescript{\iota}{}{\widetilde{\cS}}$. As it turns out, they are not intertwined by the embedding, and this failure of compatibility reflects the poles of the L-function. \begin{definition} We let $\delta\co\cL\ra\widetilde{\cS}$ denote the difference between the two maps in the diagram: \[\xymatrix{ \cL \ar[r]_j \ar[d]^\F & \widetilde{\cS} \ar[d]^\iota \ar@{=>}[dl]^{\iota\circ\delta} \\ \prescript{\iota}{}{\cL} \ar[r]_{\iota(j)} & \prescript{\iota}{}{\widetilde{\cS}}. }\] In other words, we have $\delta=j-\iota\circ\iota(j)\circ\F$, where $j\co\cL\ra\widetilde{\cS}$ is the embedding. \end{definition} Our claim is that the map $\delta\co\cL\ra\widetilde{\cS}$ precisely presents the \emph{poles} of our L-function. In the divisor interpretation, this means that it presents the positive part of the formal difference $[\cL]-[\cS]$. In other words, let us define: \begin{definition} Define the bornological $\cS$-module $\P$ to be the co-kernel \[\xymatrix{ 0 \ar[r] & \ker{\delta} \ar[r] & \cL \ar[r] & \P \ar[r] & 0. }\] We refer to $\P$ as the \emph{polar divisor}. \end{definition} This coincides with the interpretation as the positive part of the difference $[\cL]-[\cS]$ via: \begin{proposition} \label{prop:kernel_of_delta} The kernel $\ker{\delta}$ is equal to the intersection $\cS\cap\cL$ in $\widetilde{\cS}$. \end{proposition} \begin{remark} \label{remark:kernel_of_delta} The inclusion $\ker{\delta}\subseteq\cS\cap\cL$ is easy to see. Indeed, note that: \[ \ker{\delta}\subseteq{\cS'}\cap\iota({\cS'}). \] However, it is clear from the definition that \[ {\cS'}\cap\iota({\cS'})\subseteq\cS. \] \end{remark} We will postpone the proof of the rest of Proposition~\ref{prop:kernel_of_delta} to the next subsection. Before we discuss the specific properties of $\P$, let us explain its relation to the analytic continuation property and the functional equation. Informally, the idea is as follows. We are looking for a space of meromorphic functions (under some Paley-Wiener correspondence) which is simultaneously big enough to contain $\cL$, and symmetric with respect to $\iota$. Given such a space, we would be able to ask if its involution $\iota$ is compatible with the involution $\F$ of $\cL$. That would be our functional equation. Using the space $\widetilde{\cS}$ above fails on two counts: its involution is not compatible with $\F$, and it is too ``big'' to be thought of as a space of meromorphic functions. Therefore, we can ask for some minimal space $\cS_\ext$ satisfying our requirements. It turns out that we can safely choose $\cS_\ext$ to be the pushout \[\xymatrix{ \ker{\delta} \ar[d] \ar[r] & \cL \ar@{-->}[d] \\ \cS \ar@{-->}[r] & \cS_\ext.
}\] This space automatically contains both $\cS$ and $\cL$, and has an involution \[ \cS_\ext\xrightarrow{\sim}\prescript{\iota}{}{\cS_\ext} \] compatible with both $\iota\co\cS\ra\prescript{\iota}{}{\cS}$ and $\F\co\cL\ra\prescript{\iota}{}{\cL}$. Moreover, clearly $\cS_\ext$ is an extension of $\cS$ by $\P$. Thus, if we had a strong enough grip on the support of $\P$, we would immediately be able to interpret $\cS_\ext$ as a space of meromorphic functions with prescribed poles, and see that the image of $\cL$ in $\cS_\ext$ satisfies a functional equation. \begin{remark} \label{remark:kernel_delta_means_S_ext_in_S_prime} The essence of Proposition~\ref{prop:kernel_of_delta} is now that $\cS_\ext$ can be identified with the sum $\cS+\cL$ inside $\widetilde{\cS}$. That is, the two involutions $\iota$ and $\F$ can be extended to the sum $\cS+\cL$ in a compatible manner. Alternatively, since both $\cS$ and $\cL$ lie inside ${\cS'}$, the proposition can be interpreted as meaning that the natural map \[ \cS_\ext\ra{\cS'} \] is injective. Since we are informally thinking of ${\cS'}$ as a space of holomorphic functions on some right-half-plane, Proposition~\ref{prop:kernel_of_delta} should be interpreted as meaning that the L-function carries no ``extra'' data that cannot be analytically continued from the right-half-plane. That is, a priori it could have been possible that the correct notion of L-function contained a delta distribution on the complex plane (which would be invisible to analytic continuation). The proposition excludes this possibility. \end{remark} \subsection{Properties of the Polar Divisor \texorpdfstring{$\P$}{P}} \label{subsect:polar_div} Let us now start studying the properties of $\P$. Our goal is to show that its support is sufficiently small that the space $\cS_\ext$ of the previous subsection corresponds to meromorphic functions on the whole of $\CC$. Finally, we will use this to prove Proposition~\ref{prop:kernel_of_delta}. \begin{remark} \label{remark:P_is_smooth} We first note that $\P$ is smooth, as it is the quotient of a smooth module by a closed subspace. This follows from Corollary~5.6 of \cite{dixmier_malliavin_for_born_arxiv}. \end{remark} \begin{remark} In fact, using Proposition~\ref{prop:formula_for_delta}, we will see exactly what $\P$ is. It turns out to be $2$-dimensional, and in particular the bornological $\cS$-module $\P$ is complete. \end{remark} \begin{remark} As our next step in characterizing $\P$, we note that we must have a canonical isomorphism: \[ \P\xrightarrow{\sim}\prescript{\iota}{}{\P}, \] induced by $\F$. That is, the polar divisor is symmetric under the reflection about the critical line. This follows because \[ \iota\circ\delta\circ\F=-\delta, \] and thus $\ker{\delta}$ is preserved by $\F$. \end{remark} In order to make any further progress, we will need to make use of the following formula for $\delta$, which is an incarnation of the Poisson summation formula: \begin{proposition} \label{prop:formula_for_delta} The map $\delta\co\cL\ra\widetilde{\cS}$ is given by the formula: \[ \delta(\Psi)(g)=(\F\Psi)(0)\abs{g}^{-1}-\Psi(0). \] \end{proposition} This will allow us to determine the support of $\P$ exactly, and to prove Proposition~\ref{prop:kernel_of_delta}.
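Although we will not reproduce a full proof here, let us sketch the standard computation behind Proposition~\ref{prop:formula_for_delta}, suppressing all convergence issues. Identify an element of $\cL$ with a test function $\Psi\in S(\AA)$, and abuse notation by writing $\F\Psi$ for its adelic Fourier transform. Poisson summation over the lattice $F\subseteq\AA$ reads \[ \sum_{q\in F}\Psi(qg)=\abs{g}^{-1}\sum_{q\in F}(\F\Psi)(qg^{-1}). \] Separating the terms with $q=0$ on both sides gives \[ j(\Psi)(g)=\sum_{q\in F^\times}\Psi(qg)=\abs{g}^{-1}\sum_{q\in F^\times}(\F\Psi)(qg^{-1})+(\F\Psi)(0)\abs{g}^{-1}-\Psi(0). \] The first summand on the right-hand side is precisely the image of $\Psi$ under $\iota\circ\iota(j)\circ\F$, so the difference $\delta(\Psi)(g)$ is $(\F\Psi)(0)\abs{g}^{-1}-\Psi(0)$, as claimed.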
It is now clear exactly what $\P$ is: \begin{corollary} \label{cor:supp_of_polar} There is an isomorphism: \begin{equation*} \one\oplus\prescript{\iota}{}{\one}\xrightarrow{\sim} \P, \end{equation*} where $\one$ is the trivial $\AA^\times$-module, thought of as an $\cS$-module. \end{corollary} In particular, $\P$ is a sum of skyscraper co-sheaves on $\Chars(F)$. Using this corollary, we can also finally prove Proposition~\ref{prop:kernel_of_delta}. \begin{proof}[Proof of Proposition~\ref{prop:kernel_of_delta}] We want to show that $\ker{\delta}=\cS\cap\cL$. The inclusion of the LHS in the RHS was already shown in Remark~\ref{remark:kernel_of_delta}. The inclusion in the other direction will use Corollary~\ref{cor:supp_of_polar}. We turn $\cS\cap\cL$ into a bornological vector space via the Cartesian diagram \[\xymatrix{ \cS\cap\cL \ar[r] \ar[d] & \cS \ar[d] \\ \cL \ar[r] & \widetilde{\cS}. }\] First, we observe that the restriction $\iota\circ\delta|_{\cS\cap\cL}\co\cS\cap\cL\ra\prescript{\iota}{}{\widetilde{\cS}}$ factors through $\prescript{\iota}{}{{\cS'}}$. Indeed, it is given by the difference of maps $\iota-\F$, and its image lies in $\cS+\prescript{\iota}{}{\cL}\subseteq\prescript{\iota}{}{{\cS'}}$ (we are using the fact that the map ${\cS'}\ra\widetilde{\cS}$ is injective). Hence, we obtain an injective map \[ \frac{\cS\cap\cL}{\ker{\delta}}\ra\prescript{\iota}{}{{\cS'}}, \] and want to show that its domain is $0$. Since $\P$ is (in an informal sense) torsion by Corollary~\ref{cor:supp_of_polar}, and the domain in question is a subspace of $\P$, it is enough to show that ${\cS'}$ is torsion-free in the same sense. That is, it is enough to show that if $f\in\cS'$ satisfies \[ (f(ghh')\abs{h}-f(gh'))-(f(gh)\abs{h}-f(g))=0 \] for all $h,h'\in\AA^\times$, then $f(g)=0$. Indeed, fixing $h$ and letting $h'$ vary, the condition says that the function $g\mapsto f(gh)\abs{h}-f(g)$ is invariant under all multiplicative shifts, and is hence constant; since $\cS'$ is stable under multiplicative shifts, this constant function lies in $\cS'$, and the integrability conditions defining $\cS'$ then force it to vanish. But then $f(gh)\abs{h}=f(g)$ for all $h$, so that $f(g)=f(1)\abs{g}^{-1}$, and the same integrability conditions force $f=0$. \end{proof} \section{Abelian Extensions} \label{sect:decomposition_under_ext} Let $E\supseteq F$ be an Abelian extension. For the sake of simplicity, we suppose that $E$ is quadratic over $F$. Then the zeta function of $E$ factors into a product of two L-functions over $F$. In this section, we will present this statement's incarnation in our language. This turns out to be an actual refinement; the new statement contains additional information allowing one to relate zeta integrals of specific test functions on $E$ with zeta integrals of specific test functions on $F$. We begin by introducing some notation. Denote the character on $\AA^\times_F/F^\times$ corresponding to the extension $E/F$ by $\eta=\eta_{E/F}$. This induces an automorphism \[ \chi\mapsto\chi\eta \] of $\Chars(F)$, along with an automorphism \begin{align*} \eta\co\cS_F & \ra\cS_F \\ f(g) & \mapsto\eta(g)\cdot f(g) \end{align*} of the ring $\cS_F$. For an $\cS_F$-module $M$, we will let $\prescript{\eta}{}{M}$ denote the twist of $M$ by $\eta$. From the geometric point of view, the extension $E\supseteq F$ induces a canonical map \[ N\co\Chars(F)\ra\Chars(E), \] given by pre-composing a Hecke character $\chi\co\AA_F^\times/F^\times\ra\CC^\times$ with $N_{E/F}$. Moreover, we get a morphism of non-unital rings \[ N_!\co \cS_E\ra\cS_F, \] given by integrating along the fibers of the multiplicative norm map \[ N_{E/F}\co\AA_E^\times\ra\AA_F^\times. \] We now claim that, informally, when $\cL_E$ is ``pulled back'' to $\Chars(F)$ along $N$, it splits into a product. This will recover the corresponding classical fact about decomposition of L-functions for quadratic extensions.
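For orientation, let us recall the classical statement that the theorem below refines. For a quadratic extension $E/F$ with associated character $\eta$, the splitting behaviour of the primes of $F$ in $E$ yields the factorization of Euler products \[ \zeta_E(s)=\zeta_F(s)\cdot L(s,\eta). \] For instance, for a quadratic field over the rationals, this is the classical factorization of the Dedekind zeta function as the Riemann zeta function times a Dirichlet L-function.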
\begin{theorem} \label{thm:L_decomposes} There is a canonical isomorphism of bornological $\cS_F$-modules, \[ \cS_F\otimes_{\cS_E}\cL_E\cong\cL_F\otimes_{\cS_F}\prescript{\eta}{}{\cL_F}. \] Moreover, the isomorphism between the two sides is also compatible with the maps into $\cS'_F$. \end{theorem} Before diving into the proof, let us re-interpret Theorem~\ref{thm:L_decomposes} in terms of the analogy of Remark~\ref{remark:L_is_divisor}, to get a more geometric intuition. \begin{remark} Consider the canonical map \[ N\co\Chars(F)\ra\Chars(E), \] given by pre-composing a Hecke character with $N_{E/F}$. This map has kernel $\{1,\eta\}$, and its image is the subgroup of $\Chars(E)$ given by characters that are invariant under the Galois group of $E$ over $F$. Let us denote this image by $\Chars(E/F)=\Chars(E)^{\Gal(E/F)}$. With this language, the informal essence of Theorem~\ref{thm:L_decomposes} is that the push-forward of the ``divisor'' $\cL_F$ to $\Chars(E/F)$ (as a divisor) identifies with the restriction of $\cL_E$ to $\Chars(E/F)$. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:L_decomposes}] We will prove that \[ S(\AA_F^\times)\otimes_{S(\AA_E^\times)}S(\AA_E)\cong S(\AA_F)\otimes_{S(\AA_F^\times)}\prescript{\eta}{}{S(\AA_F)}. \] This will be proven by embedding both sides in the larger space $\Func(\AA_F^\times)$ of functions on $\AA_F^\times$, and showing that their images coincide. Furthermore, we need to show that the bornologies on the two sides coincide. The two embeddings will be induced by the two maps, \begin{equation*}\xymatrix{ S(\AA_E) \ar[r] & \Func(\AA_F^\times), \\ S(\AA_F)\otimes_{S(\AA_F^\times)}\prescript{\eta}{}{S(\AA_F)} \ar[r] & \Func(\AA_F^\times), }\end{equation*} given by \[ \left\{h\mapsto f(h)\right\}\mapsto\left\{g\mapsto \int_{N(h)=g}f(h)\dtimes{h}\right\} \] and \[ f_1\otimes f_2\mapsto f_1*\eta f_2=\left\{g\mapsto \int f_1(g'^{-1}g)f_2(g')\eta(g')\dtimes{g'}\right\} \] respectively. To check this, it is enough to verify the claim place-by-place, which is straightforward (albeit tedious). \end{proof} \begin{remark} \label{remark:abelian_L_decomposes} Let $E/F$ be an Abelian extension, which is no longer necessarily quadratic. Let us state the relevant generalization of Theorem~\ref{thm:L_decomposes}. There are characters $\eta_i\co\AA_F^\times/F^\times\ra\CC^\times$ corresponding to the extension $E/F$, with $0\leq i\leq d-1$, where $d$ is the degree of $E$ over $F$. Moreover, there are maps \begin{align*} N & \co\Chars(F)\ra\Chars(E) \\ N_! & \co \cS_E\ra\cS_F, \end{align*} as above. The claim is that the canonical maps into $\cS_F'$ induce an isomorphism \[ \cS_F\otimes_{\cS_E}\cL_E\cong\prescript{\eta_0}{}{\cL_F}\otimes_{\cS_F}\cdots\otimes_{\cS_F}\prescript{\eta_{d-1}}{}{\cL_F}. \] \end{remark} \begin{remark} \label{remark:cubic_L_decomposes} One can also generalize the above to extensions that are not necessarily Abelian. For example, suppose that $E/F$ is a non-Abelian cubic extension. In this case, the extension defines an irreducible generic automorphic representation $(\pi,V)$ of $\GL_2(\AA_F)$. The author believes (but has not proven) that the correct variant of Remark~\ref{remark:abelian_L_decomposes} is as follows. We define an $S(\AA_F^\times)$-module by restricting $\pi$ to the subgroup $\GL_1(\AA_F)\times \{1\}$ of the diagonal torus. We denote its co-invariants under the resulting action of $F^\times$ by $\cL_F(\pi)$. There is an embedding $V\subset\Func(\AA_F^\times)$ given by the Whittaker model.
After taking quotients by $F^\times$, this gives a map $\cL_F(\pi)\ra\cS'_F$. Now, the author believes (but has not proven in general) that this induces an isomorphism \[ \cS_F\otimes_{\cS_E}\cL_E\cong\cL_F\otimes_{\cS_F}\cL_F(\pi). \] The author finds it interesting that the object $\cS_F\otimes_{\cS_E}\cL_E$, constructed from \emph{Galois} data of the extension $E/F$, can be directly related to the underlying space $V$ of the \emph{automorphic} representation $\pi$, through its quotient $\cL_F(\pi)$. That is, it seems that the factorization of the L-function for a non-Abelian field extension allows constructing a correspondence between the underlying spaces of the automorphic representations associated to the field extension, and a construction made of pure Galois data. \end{remark} \begin{appendices} \section{Generators for \texorpdfstring{$\cL$}{L}} \label{app:non_canonical_triv} The goal of this section is to explicitly show that the $\cS$-module $\cL$ defined above happens to be free of rank one. This will be the main result of this section, Theorem~\ref{thm:L_is_triv}. The choice of generator for $\cL$ is analogous to the process of picking a standard L-factor at every place in the GCD description of L-functions. In particular, the generator itself is not well-defined, and therefore the constructions below will be somewhat ad hoc. For the non-Archimedean places, we will see that the standard choice of L-factor suits our purposes just fine. That is, it serves as a generator for an appropriate module. See Claim~\ref{claim:locally_trivial_at_Qp}. For Archimedean places, this is no longer true. That is, the standard choice of L-factor at the Archimedean places turns out not to be a generator for the appropriate module. The reason for this failure is that the standard L-factor decreases too quickly in vertical strips. Instead, we will merely show the existence of a modification of this L-factor which \emph{does} have the right growth properties to be a generator. This is the content of Claim~\ref{claim:locally_trivial_at_R}. See also Remark~\ref{remark:growth_of_L_in_vertical_strips}. \begin{theorem} \label{thm:L_is_triv} The $\cS$-module $\cL$ is isomorphic to $\cS$. \end{theorem} This will follow from: \begin{claim} \label{claim:A_is_A_times} The $S(\AA^\times)$-module $S(\AA)$ is isomorphic to $S(\AA^\times)$. \end{claim} We will prove this place-by-place. \begin{claim} \label{claim:locally_trivial_at_Qp} Let $F$ be a non-Archimedean local field. Then the $S(F^\times)$-module $S(F)$ is isomorphic to $S(F^\times)$. Moreover, the isomorphism can be chosen such that it sends $\one_{\O^\times}$ to $\one_\O$. \end{claim} This is also proven as Item~(2) of Lemma~4.18 of \cite{zeta_rep}. \begin{proof} We consider the morphism \[ S(F^\times)\ra S(F) \] given by convolution with the distribution \[ f(g)=\begin{cases} 0 & \abs{g}>1 \\ \delta_1(g) & \abs{g}=1 \\ 1 & \abs{g}<1, \end{cases} \] where $\delta_1(g)$ is the delta distribution at $g=1$. It is a direct verification to check that this map satisfies the required properties. \end{proof} \begin{claim} \label{claim:locally_trivial_at_R} Let $F=\RR$. Then the $S(\RR^\times)$-module $S(\RR)$ is isomorphic to $S(\RR^\times)$. \end{claim} \begin{proof} One way to think about this kind of isomorphism is via the Mellin transform. A map \[ S(\RR^\times)\ra S(\RR) \] should correspond to pointwise multiplication by some function in the Mellin picture.
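To fix ideas, let us record the formal computation underlying this correspondence, restricting for simplicity to even functions, identified with functions on $\RR^\times_{>0}$. With the normalization $\hat{f}(s)=\int_0^\infty f(y)\,y^s\dtimes{y}$, the Mellin transform takes multiplicative convolution $(\phi*f)(x)=\int_0^\infty\phi(xy^{-1})f(y)\dtimes{y}$ to multiplication: \[ \widehat{\phi*f}(s)=\hat{\phi}(s)\cdot\hat{f}(s). \] For instance, multiplicative convolution with $e^{-\pi y^2}$ multiplies the Mellin transform by \[ \int_0^\infty e^{-\pi y^2}\,y^s\dtimes{y}=\frac{1}{2}\,\pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right), \] which is, up to the constant $\frac{1}{2}$, the standard L-factor at the real place.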
In order to have the right image, this function (which is, essentially, the local L-function) needs to have the right poles and zeroes, as well as some growth properties in vertical strips. There are some explicit maps which are almost isomorphisms of $S(\RR)$ with $S(\RR^\times)$. The Mellin transforms of functions such as $e^{-\pi y^2}$ and $e^{iy^2}$ have the correct set of poles to give the right ``divisor'', but decrease too fast as $s\ra-i\infty$. The strategy of our proof will be to choose a function with the right poles and zeroes, and then ``fix'' its vertical growth. Let us turn to the proof itself. Note that it is enough to prove the claim separately for even and odd functions on $\RR$. We will define generalized functions $\phi_\pm\co\RR\ra\CC$, one for each parity, which are rapidly decreasing at $\infty$, and such that convolution with $\phi_++\phi_-$ defines the sought-after isomorphism \[ S(\RR^\times)\xrightarrow{\sim}S(\RR). \] We will explicitly describe only $\phi_+$. The odd variant can be given by $\phi_-(y)=y\phi_+(y)$. We will describe $\phi_+(y)$ via its Mellin transform. The idea is this. The usual L-function at $\infty$, \[ f(s)=\pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right), \] has the right set of poles (and no zeroes), but it decreases very quickly in vertical strips. This would prevent it from defining an isomorphism. Specifically, by the Stirling formula its absolute value behaves as \[ \abs{\left(\frac{s}{2e\pi}\right)^{s/2}\cdot\frac{2\sqrt{\pi}}{\sqrt{s}}}\sim C_\sigma\abs{t}^{\frac{\sigma-1}{2}}e^{-\frac{\pi}{4}\abs{t}} \] when $s=\sigma+it$ and $t\ra\pm\infty$, with a constant $C_\sigma$ depending only on $\sigma$. However, applying Lemma~\ref{lemma:correct_vertical} below to $f(s)$ yields a meromorphic function $g(s)$, which has no zeroes, has a simple pole at every non-positive even integer, and such that $g(s)$ is bounded from above and below in vertical strips. We now claim that multiplying the Mellin transform by $g(s)$ gives an isomorphism. Specifically, let \[ \alpha\co S(\RR^\times)_+\ra S(\RR)_+ \] be the map on even functions sending a function whose Mellin transform is $h(s)$ to the function whose Mellin transform is $h(s)g(s)$. We wish to show that this map is bijective. Indeed, the map $\alpha$ is clearly injective. It remains to show that it is a surjection. Let $u(s)$ be the Mellin transform of an even function $v(y)$ in $S(\RR)_+$. We wish to show that $u(s)/g(s)$ is entire and is rapidly decreasing in vertical strips. That it is entire is clear, since $u(s)$ has at most simple poles at non-positive even integers. The rapid decrease in vertical strips follows by induction, using the fact that \[ \frac{v(y)-v(0)e^{-\pi y^2}}{y} \] removes the pole at $0$ and shifts $u(s)$ by $1$. \end{proof} In the course of the proof, we have used the following lemma to correct the behaviour of a function in vertical strips. \begin{lemma} \label{lemma:correct_vertical} Let $f(s)$ be a meromorphic function on $\CC$, which has no zeroes or poles outside the horizontal half-strip $\{\sigma+it\suchthat\text{$\sigma<1$ and $\abs{t}<1$}\}$. Then there exists a meromorphic function $g(s)$ such that $f(s)/g(s)$ has no zeroes or poles, and $g(s)$ is uniformly bounded from above and below outside the half-strip. \end{lemma} \begin{proof} This is a direct consequence of Arakelian's approximation theorem (see \cite{entire_function_approx}). The theorem allows us to create an entire function $\phi(s)$ satisfying that $\abs{\phi(s)-\log(f(s))}$ is uniformly bounded outside the half-strip. The desired function is now \[ g(s)=e^{-\phi(s)}f(s).
\] \end{proof} \begin{claim} \label{claim:locally_trivial_at_C} Let $F=\CC$. Then the $S(\CC^\times)$-module $S(\CC)$ is isomorphic to $S(\CC^\times)$. \end{claim} Recall that the notation $\abs{\cdot}_\CC=\abs{N_{\CC/\RR}(\cdot)}$ denotes the absolute value of the norm, and is thus the square of the usual absolute value. \begin{remark} \label{remark:paley_wiener_at_C} In order to prove Claim~\ref{claim:locally_trivial_at_C}, we will need to give a Paley-Wiener style description for $S(\CC^\times)$. Indeed, we have \[ S(\CC^\times)=S\!\left(\RR^\times_{>0}\times\RR/{2\pi\ZZ}\right)=S(\RR^\times_{>0})\,\hat{\otimes}\,S\!\left(\RR/{2\pi\ZZ}\right). \] Thus, the Mellin transform \[ \hat{f}_n(s)=\int f(z)\left(\frac{z}{\abs{z}}\right)^{n}\abs{z}_\CC^s\dtimes{z} \] of a function $f\in S(\CC^\times)$ is entire in $s$, and satisfies that its semi-norms \[ \norm{\hat{f}_n(s)}_{\sigma,m}=\sup_{\stackrel{t\in\RR}{n\in\ZZ}}(1+\abs{t}^m)(1+\abs{n}^m)\abs{\hat{f}_n(\sigma+it)} \] are bounded for all $\sigma\in\RR$ and $m\geq 0$. This description exactly characterizes the image of $S(\CC^\times)$ under the Mellin transform. In addition, we will make use of the following description of the Mellin transform of $S(\CC)$. A sequence $\{{f}_n(s)\}_{n\in\ZZ}$ of meromorphic functions lies in the image of the Mellin transform of $S(\CC)$ if and only if: \begin{enumerate} \item Each function ${f}_n$ has at most simple poles, and they are located inside the set $-\frac{\abs{n}}{2}-\ZZ_{\geq 0}$. \item The semi-norms $\norm{{f}_n(s)}_{\sigma,m}$ are bounded for all $\sigma\in \frac{1}{4}+\frac{1}{2}\ZZ$ and $m\geq 0$. \end{enumerate} This description can be proven via standard methods, using the fact that the Mellin transforms of the functions $z^m e^{-\pi\abs{z}^2}$ satisfy the above requirements. \end{remark} \begin{proof}[Proof of Claim~\ref{claim:locally_trivial_at_C}] In a similar manner to the proof of Claim~\ref{claim:locally_trivial_at_R}, it is sufficient to supply a sequence of functions $\{{g}_n(s)\}_{n\in\ZZ}$ such that: \begin{enumerate} \item Each function $g_n(s)$ has no zeroes, and has a simple pole at $-\frac{\abs{n}}{2}-m$ for every integer $m\geq 0$. \item The functions $g_n(s),g_n(s)^{-1}$ satisfy some moderate growth condition. We will choose $\{{g}_n(s)\}_{n\in\ZZ}$ such that: \[ \sup_{\stackrel{t\in\RR}{n\in\ZZ}}\abs{{g}_n(\sigma+it)}<\infty, \qquad\sup_{\stackrel{t\in\RR}{n\in\ZZ}}\abs{{g}_n(\sigma+it)}^{-1}<\infty \] for all $\sigma\in \frac{1}{4}+\frac{1}{2}\ZZ$. \end{enumerate} Our choice is simply \[ g_n(s)=g(\abs{n}+2s), \] where $g(s)$ is the same function as in the proof of Claim~\ref{claim:locally_trivial_at_R}. \end{proof} This completes the proof of Theorem~\ref{thm:L_is_triv}. \end{appendices} \Urlmuskip=0mu plus 1mu\relax \bibliographystyle{alphaurl} \section{Introduction} The goal of this paper is to present an alternative but equivalent approach to the classical construction of L-functions for automorphic representations of $\GL(1)$. Philosophically, we are trying to answer the question ``where does an L-function live?'' in the simplest case, Tate's classical construction of Hecke L-functions for $\GL(1)$. Let $F$ be a number field. There are several ways to construct the L-function for an automorphic representation over $F$. For example, one has Tate's thesis for Hecke L-functions for $\GL(1)$, or the methods of Jacquet--Langlands and Godement--Jacquet for $\GL(n)$.
Regardless of the chosen method, one has to choose a \emph{test function} out of some space (e.g., $S(\AA_F)$ for Tate's thesis), to which a zeta integral is assigned. If one chooses the ``right'' test function, whose zeta integral is a GCD for all possible ones, then the resulting zeta integral is called an L-function. Instead, our approach is to construct a ring $\cS$ and an $\cS$-module $\cL$ of all zeta integrals. It turns out that the module of zeta integrals $\cL$ already contains the same data as the automorphic L-function for $\GL(1)$, and can be thought of as a categorification of the L-function. In this language, the GCD procedure usually used to select a specific L-function out of all zeta integrals becomes a search for a \emph{generator} for the module $\cL$. If we further specialize to totally real $F$, we can also give an algebro-geometric interpretation to our construction. Let us give a brief overview. Suppose that $F$ is totally real. Denote by $\Chars(F)$ the space of Hecke characters of $F$. In this paper, we are concerned with the definition of Hecke L-functions as functions on the space $\Chars(F)$. In this setting, the module $\cL$ can be turned into a sheaf of modules on $\Chars(F)$. Sections of this module that are supported on finitely many connected components are zeta integrals, and generators of this module are Hecke L-functions. Similarly, $\cS$ can be turned into a sheaf of rings of functions on the space $\Chars(F)$. We will use this point of view to algebraically reformulate and categorify several important properties of Hecke L-functions. Put differently, we are proposing that it is beneficial to think of L-functions in more geometric terms, such as modules and sheaves, instead of as specific functions. From this point of view, the existence of an actual \emph{function} which generates our module $\cL$, the L-function itself, is almost coincidental to the theory. Even if the module $\cL$ did not in fact have a generator, the theory could proceed without a problem. Moreover, it turns out that thinking of L-functions in terms of the module of zeta integrals that they generate has unexpected benefits. In the same manner that thinking of vector spaces in basis-free terms allows one to shift the emphasis from the properties of specific elements to the properties of maps between them, one can use this idea to produce new results. In \cite{abst_aut_reps_arxiv}, the author compares two formulas for zeta integrals for $\GL(2)$ that give the same L-function: the Godement--Jacquet construction and the Jacquet--Langlands construction. The fact that they give the same L-function is well-known. But enhancing this into a correspondence between modules of zeta integrals turns out to induce a novel multiplicative structure on the category of $\GL(2)$-modules, with applications to the theory of automorphic representations. It must be noted that a similar approach has been taken before by Connes (e.g. Section~3.3 of \cite{riemann_F_one}), and Meyer in \cite{zeta_rep}. Connes and Meyer are able to, in the case of $\GL(1)$, canonically construct a virtual representation whose spectrum is the zeroes (minus the poles) of the L-function. This works because for $\GL(1)$, the module $\cL$ is locally free of rank $1$ over $\cS$, which allows one to attach a corresponding divisor $[\cL]-[\cS]$ over the spectrum of $\cS$.
The virtual representation constructed by Connes and Meyer is this divisor, and in fact their construction of it goes through constructing our $\cS$ and $\cL$. However, Connes and Meyer put the focus on the ``divisor'' $[\cL]-[\cS]$ instead of the locally free module $\cL$ itself. \date{\textbf{Acknowledgements: }The author would like to thank his advisor, Joseph Bernstein, for providing the inspiration for this paper, and going through many earlier versions of it. The author would also like to thank both Shachar Carmeli and Yiannis Sakellaridis for their great help improving the quality of this text.} \subsection{Detailed Summary} Let us give a more detailed account of this paper's main ideas. Let $\chi\in\Chars(F)$ be some Hecke character $\chi\co\AA^\times/F^\times\ra\CC^\times$. Tate's classical construction of the complete Hecke L-function $\Lambda(\chi,s)$ of $\chi$ works by a GCD procedure. One defines a zeta integral \begin{equation} \label{eq:tate_zeta_integral} \int_{\AA^\times}\Phi(x)\chi(x)\abs{x}^s\dtimes{x} \end{equation} for every appropriate \emph{test function} $\Phi\in S(\AA)$. These zeta integrals are all meromorphic functions of $s\in\CC$. Every test function gives a zeta integral; however, some zeta integrals are distinguished. It turns out that the collection of zeta integrals has a greatest common divisor, a meromorphic function $\Lambda(\chi,s)$ such that all zeta integrals are multiples of $\Lambda(\chi,s)$ by an entire function. In this text, we will refer to any such GCD as a \emph{complete L-function} of $\chi$. Note that this terminology is usually used to refer to a specific standard choice of GCD. The above GCD procedure only defines $\Lambda(\chi,s)$ up to multiplication by an entire invertible function. For some properties of the L-function, this is enough. For example, the zeroes of the L-function are well-defined in this formalism. In this formulation, the L-function also naturally satisfies a functional equation. However, there are other questions that one would like to ask about the L-function that do not work quite as well with this definition. For example, one is often interested in special values of the L-function. Therefore, in order to speak about specific values of $\Lambda(\chi,s)$, there exists a standard (somewhat ad hoc) choice of GCD, made place-by-place over $F$. This expresses $\Lambda(\chi,s)$ as a product of standard L-factors over all places of $F$. As another example, the growth of the L-function in vertical strips (as $s\ra\sigma\pm i\infty$) is an important problem. However, with the standard choice of GCD, the function $\Lambda(\chi,s)$ decreases rapidly in vertical strips. This is due to the choice of L-factor at the Archimedean places, which decreases so fast that all other behaviour becomes irrelevant. In order to remedy this, the usual approach is to work with the \emph{incomplete L-function} $L(\chi,s)$, which simply drops the L-factors at $\infty$. In this paper, we will give an alternative approach to the GCD construction. We define a ring $\cS$ of functions over $\Chars(F)$, and a module $\cL$, which we think of as the module of zeta integrals. Our view is that the fundamental object of the theory is this module of zeta integrals. To relate this to the standard view, we will establish a canonical correspondence between generators of $\cL$, i.e.
isomorphisms of modules \[ t\co\cS\xrightarrow{\sim}\cL, \] and a set of functions on $\Chars(F)$ which are complete L-functions under the standard formulation of the GCD procedure (see Construction~\ref{const:gens_are_L_funcs}). In this manner, claims about Hecke L-functions (growth, factorization under Abelian extensions, etc.) can be refined to explicit claims about the module $\cL$. The main point is that we are able to provide a viewpoint which gives a more geometric flavor to the GCD definition of L-functions, i.e., exhibits L-functions as generators of a module. A less formal, but perhaps more descriptive, explanation is as follows. We introduce the ring $\cS$, its module $\cL$, and a canonical isomorphism of $\cL$ with an extension $\cS'$ of $\cS$ after base change: \[ \cS'\otimes_\cS \cL\xrightarrow{\sim}\cS'. \] Geometrically, the extension $\cS\subseteq\cS'$ corresponds to restricting to a specific right-half-plane. This turns $\cL$ into a kind of ``divisor'' on the space $\Chars(F)$, supported on the corresponding left-half-plane. This ``divisor'' turns out to be principal (in an informal sense), and L-functions are precisely those functions which generate it (see Remark~\ref{remark:L_is_divisor} for more details). The virtual representation constructed by Meyer in \cite{zeta_rep} is the skyscraper sheaf at this divisor, $[\cL]-[\cS]$. On top of providing an interesting perspective, this will be useful in several immediate ways. First of all, the proposed reformulation will enable us to refine some properties of L-functions as algebraic statements about the module $\cL$. We will include examples such as the functional equation, and more precise statements about decomposition properties of L-functions under Abelian extensions. Specifically, it turns out that the fact that L-functions of Abelian extensions decompose into a product of L-functions of characters can be categorified into an explicit canonical decomposition of the corresponding spaces of zeta integrals. See Remark~\ref{remark:cubic_L_decomposes} for an interesting application. Moreover, the set of functions on $\Chars(F)$ given by the algebraic construction is somewhat smaller than that given by the GCD construction. This means that for some statements about the growth of the L-function as $s\ra\sigma\pm i\infty$, no non-canonical choice of representative is necessary. Let us elaborate. It turns out that, of the complete L-functions given by the GCD construction, those that correspond to generators of $\cL$ always have an additional property: they have moderate growth and decay properties in vertical strips. Hence, it makes sense to talk about their growth without introducing the incomplete L-function, or even choosing a standard representative. In other words, the geometric point of view specifies not only the zeroes of the L-function, but also its growth properties. Technically, this happens because the new formalism rejects the standard L-factors at the Archimedean places, which decay much too quickly in vertical strips to generate $\cL$. Instead, different Archimedean local L-factors are required, with suitably ``fixed'' growth. See Remark~\ref{remark:growth_of_L_in_vertical_strips} and Appendix~\ref{app:non_canonical_triv} for further discussion of this issue.
Another kind of result that can be achieved using the geometric point of view is the trace formula given in \cite{zeta_rep}, where Meyer essentially derives the explicit formula for Hecke L-functions from the geometric construction without any reference to the underlying L-function. However, we will not pursue this direction here. The organization of this paper is as follows. Section~\ref{sect:dixmier_malliavin} contains a brief reminder about the Dixmier-Malliavin theorem, following \cite{dixmier_malliavin_for_born_arxiv}. Section~\ref{sect:module_S} introduces the ring $\cS$ of functions on $\Chars(F)$, and studies its properties. Section~\ref{sect:module_L} introduces the module $\cL$. Section~\ref{sect:canonical_triv} establishes the correspondence between generators of $\cL$ and L-functions. Section~\ref{sect:functional_equation} re-introduces the functional equation in the new language. Section~\ref{sect:decomposition_under_ext} studies the decomposition properties of L-functions under Abelian extensions. Finally, Appendix~\ref{app:non_canonical_triv} shows that a generator for the module $\cL$ actually exists, and discusses the various local L-factors given by the new formalism. \begin{remark} While $\GL(1)$ is a nice test case, the theory of L-functions and automorphic representations often focuses on higher rank reductive groups, such as $\GL(n)$. While the contents of this text can be generalized in various ways to (say) $\GL(2)$, the non-commutativity of the group adds a severe technical complication that we will not handle here. It is the author's belief that the symmetric monoidal structure constructed in \cite{abst_aut_reps_arxiv}, which seems to help $\GL(2)$ manifest some ``commutative-like'' phenomena, would be useful in such an endeavor. See Remarks~3.1 and~4.1 of \cite{abst_aut_reps_arxiv} for details on these phenomena. \end{remark} \begin{remark} When working over number fields, a major technical tool that we will use is the notion of \emph{bornological vector space}. This notion greatly resembles that of the more commonly used \emph{topological vector space}, but is technically simpler and more suitable for studying the representation theory of locally compact groups. For example, the ring $\cS$ and module $\cL$ can be given the structure of a bornological ring and bornological module, respectively. However, bornologies are not the focus of this paper. Rather, we consider bornological vector spaces because they admit a strong version of the Dixmier-Malliavin theorem, which we want to use. Specifically, we will need the variant given in \cite{dixmier_malliavin_for_born_arxiv}. Moreover, this notion is not actually needed when $F$ is a function field, as the Dixmier-Malliavin theorem becomes trivial in this case. Readers can safely take $F$ to be a function field, and consequently ignore all bornological structures appearing in this text. We will give a brief reminder on the Dixmier-Malliavin theorem in Section~\ref{sect:dixmier_malliavin}. \end{remark} \section{Reminder on the Dixmier-Malliavin Theorem} \label{sect:dixmier_malliavin} This section is a brief reminder (closely following \cite{dixmier_malliavin_for_born_arxiv}) about bornological vector spaces and their relation with the Dixmier-Malliavin theorem, which are used in the main body of the text. This section should not be considered an original contribution.
Readers who are only interested in the case of L-functions over function fields can safely skip this section and ignore all mentions of bornological structures. Bornological vector spaces will be used in much of this text as a technically favorable alternative to topological vector spaces. In many applications, the two notions are very similar. However, while they are close enough that many theorems can be successfully stated both in terms of bornologies and in terms of topologies, there are still cases where one language is preferable to the other. For example, there are some theorems (especially in the representation theory of locally compact groups) that have many technical requirements when stated in the traditional language of topological vector spaces. However, these extra technical assumptions disappear when the theorem is stated in the language of bornological vector spaces instead. The main reason we use bornological vector spaces in this text is that they support a stronger variant of the Dixmier-Malliavin theorem. More concretely, we will use bornological structures to prove that certain rings satisfy a property called quasi-unitality. Specifically, we say that a (non-unital) ring $R$ is \emph{quasi-unital} if the product map: \[ R\otimes_R R\ra R \] is an isomorphism, where the relative tensor product $R\otimes_R R$ is the quotient of $R\otimes R$ by all expressions of the form \[ (ab)\otimes c-a\otimes(bc). \] Similarly, if $R$ is a quasi-unital ring, and $M$ is an $R$-module, then we say that $M$ is \emph{smooth} if the action map: \[ R\otimes_R M\ra M \] is an isomorphism. Once again, we take $R\otimes_R M$ to be the relative tensor product. Let $G$ be an algebraic group, and let $F$ be a number field. Let $G(\AA)=G(\AA_F)$ denote the adelic points of $G$ over $F$. Let $C_c^\infty(G(\AA))$ be the (non-unital) ring of smooth and compactly supported functions on $G(\AA)$, equipped with the convolution product. The ring $C_c^\infty(G(\AA))$ is naturally a bornological non-unital ring. The main result we need from \cite{dixmier_malliavin_for_born_arxiv} is the following variant of the Dixmier-Malliavin theorem: \begin{theorem}[Theorem~5.1 of \cite{dixmier_malliavin_for_born_arxiv}] \label{thm:adelic_garding_is_smooth} The following hold: \begin{enumerate} \item The ring $C_c^\infty(G(\AA))$ is quasi-unital. \item Let $V$ be a complete bornological vector space, equipped with a smooth action of $G(\AA)$ on $V$. Then $V$ is smooth as a $C_c^\infty(G(\AA))$-module. \end{enumerate} \end{theorem} \begin{remark} Essentially, what Theorem~\ref{thm:adelic_garding_is_smooth} means is that given some boundedness and analyticity conditions on a representation of $G(\AA)$, that representation is a smooth $C_c^\infty(G(\AA))$-module. Note that while the prerequisites for the theorem are analytic in nature (the existence of a complete bornology on $V$, such that the action of $G(\AA)$ is smooth with respect to it), the consequence of the theorem -- smoothness as a module over a quasi-unital ring -- is purely algebraic: \begin{equation} \label{eq:smoothness_of_module} C_c^\infty(G(\AA))\otimes_{C_c^\infty(G(\AA))}V\xrightarrow{\sim} V. \end{equation} That is, Equation~\eqref{eq:smoothness_of_module} holds as an isomorphism of vector spaces, and requires no notion of bornology or completeness to state. Our uses of bornologies in this paper will almost exclusively be as a way to establish this algebraic property.
That is, the main focus of this paper is algebraic, rather than analytic. \end{remark} \begin{remark} The proof of Theorem~5.1 of \cite{dixmier_malliavin_for_born_arxiv} is written for the case of a Lie group $G$. The generalization to the adelic case follows Remark~5.3 of \cite{dixmier_malliavin_for_born_arxiv}. \end{remark} \begin{remark} This theorem becomes trivially true in the case that $F$ is a function field, and $C_c^\infty(G(\AA))$ is taken to be the ring of smooth (i.e., locally constant) and compactly supported functions on $G(\AA)$. In this case, no assumptions about bornologies are necessary. \end{remark} For more details, we direct the reader to \cite{dixmier_malliavin_for_born_arxiv}. See also \cite{borno_quasi_unital_algs2}, which discusses bornological structures in representation theory specifically, and \cite{borno_vs_topo_analysis}, which deals more generally with bornologies and functional analysis. \section{The Space of Automorphic Functions \texorpdfstring{$\cS$}{S}} \label{sect:module_S} Let $F$ be a number field (not necessarily totally real), and let $\Chars(F)$ be the space of Hecke characters $\chi\co\AA_F^\times/F^\times\ra\CC^\times$ with its complex analytic topology. That is, $\Chars(F)$ is a countable disjoint union of copies of $\CC$, each parametrized by $\abs{\cdot}^s\chi$ with $s\in\CC$, for some unitary Hecke character $\chi$. We will say that the component of all Hecke characters of the form $\abs{\cdot}^s\chi$ is the component \emph{corresponding to $\chi$}. \begin{remark} \label{remark:vertical_strips} We will often be speaking about functions on $\Chars(F)$ that are rapidly decreasing in vertical strips. This should be taken to mean that $\abs{f(\sigma+it)}\ra 0$ as $t\ra\pm\infty$ faster than $\abs{t}^{-n}$ for every $n$, in vertical strips $a\leq \sigma\leq b$, on every copy of $\CC$ separately. \end{remark} On the space $\Chars(F)$, there is a ring of (Mellin transforms of) Bruhat-Schwartz functions, $\cS_F$. Our main construction is a module $\cL_F$ over $\cS_F$, whose generators will correspond to L-functions. In this section, we will construct the space $\cS_F$ itself, and establish some of its basic properties. Instead of directly defining this, let us define its Fourier-Mellin transform, which might be more easily accessible: \begin{definition} Let $\Schw_F=S(\AA^\times)$ denote the (non-unital) ring of Bruhat-Schwartz functions on $\AA^\times$. Specifically, we set \[ S(\AA^\times)=S(F_\infty^\times)\otimes{\bigotimes_{p}}'S(F_p^\times), \] where \begin{itemize} \item The symbol $\bigotimes'$ denotes the restricted tensor product over the finite (i.e., non-Archimedean) places of $F$, with respect to the characteristic function $\one_{\O_p^\times}$. \item For a non-Archimedean place $p$, the space $S(F_p^\times)$ is the space of smooth and compactly supported functions on $F_p^\times$. \item For the Archimedean places, the space $S(F_\infty^\times)$ is the space of smooth functions $f$ on $F_\infty^\times=\prod_{v|\infty}F_v^\times$, satisfying that \[ \abs{\chi(y)\cdot Df(y)} \] is bounded for all multiplicative characters $\chi\co F_\infty^\times\ra\CC^\times$ and differential operators $D$ in the universal enveloping algebra of the Lie algebra of the real group $F_\infty^\times$. \end{itemize} We give $\Schw_F=S(\AA^\times)$ a ring structure via the convolution product with respect to the standard Haar measure on $\AA^\times$ (normalized as in, e.g., page~46 of \cite{aut_reps_book_I}).
When the number field $F$ is clear from context, we will abuse notation and denote $\Schw=\Schw_F$. \end{definition} \begin{remark} The ring $\Schw$ naturally acquires a bornology as follows. We give each of the spaces $S(F_p^\times)$ a bornology consisting of the bounded subsets of its finite-dimensional linear subspaces. Likewise, we give $S(F_\infty^\times)$ the bornology consisting of the subsets on which, for each $\chi$ and $D$, the expression $\abs{\chi(y)\cdot Df(y)}$ above is bounded uniformly in $f$ and $y$. The bornivorous topology associated to this bornology on $\Schw$ is the usual topology of Bruhat-Schwartz functions on $\AA^\times$. \end{remark} \begin{remark} \label{remark:schwartz_at_infty_decay_exp} An alternative way to view $S(F_\infty^\times)$ is as follows. By applying the product of the logarithm maps over the Archimedean places $v|\infty$, we can identify $F_\infty^\times$ with a disjoint union of a finite number of copies of $\RR^{r_1+r_2}\times\left(\RR/\ZZ\right)^{r_2}$. Under this identification, the space $S(F_\infty^\times)$ corresponds to smooth functions, all of whose derivatives decay faster than any exponential in the \emph{logarithmic} coordinates $\RR^{r_1+r_2}$. \end{remark} \begin{remark} The reader should note that we are actually working with \emph{measures}, rather than \emph{functions}, on the space $\AA^\times$ (since we are taking their integrals and convolutions). However, for the sake of simplicity, we are relying on the standard Haar measure $\dtimes{g}$ (see, e.g., page~46 of \cite{aut_reps_book_I} for an explicit description of this standard normalization) to abuse notation and speak of functions regardless. This is done in order to simplify the notation and exposition, while hopefully not confusing the reader too much. \end{remark} \begin{remark} The bornological space $\Schw=S(\AA^\times)$ is complete, and the action of $\AA^\times$ on it is smooth. Thus, by Theorem~\ref{thm:adelic_garding_is_smooth} and Claim~3.20 of \cite{dixmier_malliavin_for_born_arxiv}, the ring $S(\AA^\times)$ is quasi-unital. Note that this is a purely algebraic property. \end{remark} \begin{definition} Let $\cS_F=\Schw_F/F^\times$ be the space of co-invariants of $\Schw_F=S(\AA^\times)$ by the action of $F^\times$ via multiplicative shifts. When the number field $F$ is clear from context, we will abuse notation and denote $\cS=\cS_F$. \end{definition} \begin{remark} We note that the map \[ f(g)\mapsto\sum_{q\in F^\times}f(qg) \] defines an isomorphism of $\cS=\Schw/{F^\times}$ with what is sometimes known as the space of Bruhat-Schwartz functions on $\AA^\times/F^\times$. Thus, we will sometimes refer to $\cS$ as the \emph{space of automorphic functions}. \end{remark} \begin{remark} \label{remark:coinv_are_closed} It is possible, via standard techniques, to construct a bounded section \[ \cS\ra\Schw \] for the canonical projection. In particular, $\cS$ is a complete bornological space. \end{remark} \begin{remark} Because $\Schw$ is commutative, the space $\cS$ is a ring. Once again, Theorem~\ref{thm:adelic_garding_is_smooth} and Claim~3.20 of \cite{dixmier_malliavin_for_born_arxiv}, along with the fact that $\cS$ is complete by Remark~\ref{remark:coinv_are_closed}, let us conclude that $\cS$ is quasi-unital. \end{remark} \begin{remark} \label{remark:paley_wiener} When $F$ is totally real, one can use the Paley-Wiener theorem to give an alternative description for the space $\cS$.
It is isomorphic via the Mellin transform to the space of functions on $\Chars(F)$ which are supported on only finitely many copies of $\CC$, and which restrict on each copy to an entire function $f(s)$ that is rapidly decreasing in vertical strips. In this view, the bornology of $\cS=\bigoplus\cS_{\chi_0}$ is the direct sum bornology induced from the subspaces $\cS_{\chi_0}$ of functions $f\in\cS$ whose Mellin transform is supported on just one copy of $\CC$ (corresponding to $\chi_0\in\Chars(F)$). The bornology on the subspaces $\cS_{\chi_0}$ themselves is the von Neumann bornology for the topology generated by the semi-norms \[ \norm{\hat{f}(s)}_{\sigma,n}=\sup_{t\in\RR}\,(1+\abs{t}^n)\abs{\hat{f}(\sigma+it)} \] for $\sigma\in\RR$ and $n\geq 0$, with \[ \hat{f}(s)=\int_{\AA^\times/F^\times}f(y)\chi_0(y)\abs{y}^s\dtimes{y} \] the Mellin transform of $f$. \end{remark} \begin{remark} One can give a similar, but more complicated, description when $F$ is not totally real. See also Remark~\ref{remark:paley_wiener_at_C} for a similar issue. \end{remark} \begin{remark} \label{remark:schwatz_is_co_sheaf} A convenient way to assign geometric intuition to the space $\cS$ is to think of it as a \emph{co-sheaf} on $\Chars(F)$. This should express the fact that the Mellin transforms of elements of $\cS$ are supported on a finite number of copies of $\CC$. To be more explicit, suppose that $F$ is totally real as above. We define a co-sheaf on $\Chars(F)$ as follows. To each connected open set $U$ of $\Chars(F)$ (which necessarily lies inside a single copy of $\CC$, corresponding to $\chi_0\in\Chars(F)$), the co-sheaf assigns the subspace $\cS_{\chi_0}\subseteq\cS$ of functions supported on that copy. For arbitrary open sets $U\subseteq\Chars(F)$, we let the value of the co-sheaf be the direct sum of the values of the co-sheaf on the connected components of $U$. This turns $\cS$ into the global co-sections of a locally constant co-sheaf on $\Chars(F)$. \end{remark} \section{The Module of Zeta Integrals \texorpdfstring{$\cL$}{L}} \label{sect:module_L} We are now ready for the main construction of this paper. We describe a module (of bornological vector spaces) $\cL$ over the ring $\cS$. The elements of this module will correspond to zeta integrals, and its generators $t\co\cS\xrightarrow{\sim}\cL$ will correspond to L-functions. We will show this correspondence explicitly in Section~\ref{sect:canonical_triv}. \begin{definition} Let $S(\AA)$ be the $\Schw=S(\AA^\times)$-module of Bruhat-Schwartz functions on $\AA$. Specifically, we set \[ S(\AA)=S(F_\infty)\otimes{\bigotimes_{p}}'S(F_p), \] where \begin{itemize} \item The symbol $\bigotimes'$ denotes the restricted tensor product over the finite (i.e., non-Archimedean) places of $F$, with respect to the characteristic function $\one_{\O_p}$. \item For a non-Archimedean place $p$, the space $S(F_p)$ is the space of smooth and compactly supported functions on $F_p$. \item For the Archimedean places, the space $S(F_\infty)$ is the space of Schwartz functions $f$ on $F_\infty=\prod_{v|\infty}F_v$. \end{itemize} \end{definition} \begin{remark} Note that $S(\AA)$ is a bornological $\Schw$-module. \end{remark} \begin{remark} Let us take a different point of view on $S(F_\infty)$. Under the identification of $F_\infty^\times$ as a finite disjoint union of copies of $\RR^{r_1+r_2}\times\left(\RR/\ZZ\right)^{r_2}$ (as in Remark~\ref{remark:schwartz_at_infty_decay_exp}), we can look at the restriction $f|_{F_\infty^\times}$ of a function $f\in S(F_\infty)$.
This restriction approaches a limit as any subset of the coordinates approaches $-\infty$, and decays faster than any exponential as any of the coordinates approaches $\infty$. \end{remark} \begin{definition} We let $\cL_F$ denote the vector space of $F^\times$ co-invariants of $S(\AA)$, where $F^\times$ acts via multiplication on $\AA$. When the number field $F$ is clear from context, we will omit it from the notation and denote $\cL=\cL_F$. \end{definition} \begin{remark} It is immediate to see that $\cL$ is a bornological $\cS$-module. \end{remark} \begin{remark} Just like in Remark~\ref{remark:schwatz_is_co_sheaf}, one can gain some geometric intuition about $\cL$ by thinking of it as a co-sheaf. Suppose that $F$ is totally real. Then one can use the co-sheaf structure on $\cS$ defined in Remark~\ref{remark:schwatz_is_co_sheaf}, combined with the module structure of $\cL$, to turn $\cL$ into a co-sheaf as well. To be as explicit as possible, we construct a co-sheaf on $\Chars(F)$ as follows. To each connected open set $U$ of $\Chars(F)$, which necessarily lies inside a single copy of $\CC$ corresponding to some $\chi_0\in\Chars(F)$, the co-sheaf assigns the subspace $\cL_{\chi_0}=\cS_{\chi_0}\cdot \cL\subseteq\cL$. For arbitrary open sets $U\subseteq\Chars(F)$, we let the value of the co-sheaf be the direct sum of its values on the connected components of $U$. This turns $\cL$ into the global co-sections of a co-sheaf. \end{remark} General $\cS$-modules can be pretty badly behaved. However, the specific $\cS$-module $\cL$ is as nice as it can be -- in fact, it is secretly isomorphic to $\cS$, albeit in a non-canonical fashion. This fact will be proven in Appendix~\ref{app:non_canonical_triv}. More specifically, combining Remark~\ref{remark:coinv_are_closed} with Claim~\ref{claim:A_is_A_times}, we see that the bornological vector space $\cL$ is a complete, smooth $\cS$-module. In particular, we have the algebraic property that the action map \[ \cS\otimes_\cS\cL\xrightarrow{\sim}\cL \] is an isomorphism. \section{Correspondence with L-Functions} \label{sect:canonical_triv} One way to think of $\cL$ is as a module of zeta integrals. Indeed, given a function $f\in \cL=S(\AA)_{/F^\times}$, by restricting it to $\AA^\times/F^\times$ and applying the Mellin transform, we precisely get a zeta integral. This is the content of Construction~\ref{const:L_into_S_prime} below, which will let us relate $\cL$ to the classical notion of an L-function, via Construction~\ref{const:gens_are_L_funcs}. Let us give some more details. In order to have a well-defined notion of zeta integral, we will need to extend the ring $\cS$ to a sufficiently large ``ring of periods'' $\cS'$ to contain all necessary integrals. We will think of the extension $\cS\subseteq\cS'$ as the space of holomorphic functions on some ``right-half-plane'' in $\Chars(F)$. Along with $\cS'$, we will obtain a canonical trivialization \begin{equation} \label{eq:informal_triv} \cS'\otimes_\cS\cL\xrightarrow{\sim}\cS', \end{equation} sending every element of $\cL$ (which is, essentially, a test function coming from $S(\AA)$) to its corresponding zeta integral. This is the algebraic structure which captures, in our setting, the notion of a zeta integral. \begin{remark} \label{remark:L_is_divisor} There is another point of view on this issue as well, with a more geometric flavor. Consider the following informal analogy. Let $X$ be a scheme, with sheaf of functions $\O_X$. In our analogy, these correspond to $\Chars(F)$ and $\cS$, respectively.
Suppose that we are given some localization $\O_X\subset\O_{X'}$ (which is our $\cS'$), allowing poles at certain places in $X$. A line bundle on $X$ is a locally free $\O_X$-module of rank one. The data of a line bundle, along with its trivialization after base change to $\O_{X'}$, is the data of a \emph{divisor} supported on $X-X'$. Therefore, geometrically, the data of the module $\cL$ along with its trivialization~\eqref{eq:informal_triv} can be thought of as a kind of ``divisor'' on the space $\Chars(F)$. The fact that $\cL$ is non-canonically isomorphic to $\cS$ is then the statement that this divisor is principal, and a function giving this principal divisor is an L-function. \end{remark} Since our only real requirement from $\cS'$ is that it is sufficiently large, its construction is relatively ad hoc. We will outline one possible choice for $\cS'$ in Subsection~\ref{subsect:module_S_prime}. In Subsection~\ref{subsect:L_gen_is_L_func}, we will construct the canonical trivialization~\eqref{eq:informal_triv} after base-change to $\cS'$. \subsection{The Extension \texorpdfstring{$\cS'$}{S'}} \label{subsect:module_S_prime} In this subsection, we will construct the extension $\cS\subseteq\cS'$, such that $\cS$ and $\cL$ become canonically isomorphic after base change to $\cS'$. This data will be used to give a correspondence between isomorphisms $t\co\cS\xrightarrow{\sim}\cL$ of $\cS$-modules (which we think of as \emph{generators} for $\cL$) and certain L-functions $L_t$ on the space $\Chars(F)$. This correspondence justifies thinking of $\cL$ as containing the data of an L-function. See Construction~\ref{const:gens_are_L_funcs}. Our immediate goal is to extend $\cS$ just enough that $\cL$ also fits into it. This will be our choice of $\cS'$. Intuitively, we want $\cS'$ to somehow correspond to holomorphic functions on some right half-plane in $\Chars(F)$ where L-functions are absolutely convergent. Let us begin by explicitly constructing $\cS'$. Let $\norm{g}_{\AA}$ be the height function on $\AA$ given by \begin{equation*} \norm{g}_\AA=\prod_v\max\{\norm{g_v}_v,1\}, \end{equation*} which we restrict to $\AA^\times$. Similarly, we define a height function on $\AA^\times/F^\times$ by choosing the lowest lift: \[ \norm{g}_{\AA/F^\times}=\inf_{\stackrel{g'\in\AA^\times}{g\equiv g'\pmod{F^\times}}}\norm{g'}_\AA. \] \begin{construction} \label{const:S_prime} The ring extension $\cS'$ of $\cS$ is given by the space of smooth functions $f$ on $\AA^\times/F^\times$ such that: \begin{itemize} \item The stabilizer of $f$ is open in $\AA_\text{fin}^\times$. \item There is a bound: \[ \int_{\AA^\times/F^\times}\abs{g}^{1+\varepsilon}\norm{g}_{\AA/F^\times}^\sigma\cdot\abs{Df(g)}\dtimes{g}<\infty \] for all $\sigma<\infty$, $\varepsilon>0$ and $D$ in the universal enveloping algebra of $F_\infty^\times$. \end{itemize} \end{construction} \begin{remark} The convolution product turns $\cS'$ into a ring; this follows from the sub-multiplicativity \[ \norm{gg'}_{\AA/F^\times}\leq\norm{g}_{\AA/F^\times}\norm{g'}_{\AA/F^\times}. \] \end{remark} \begin{remark} The ring $\cS'$ has a natural bornological structure, according to which it is complete. By Theorem~\ref{thm:adelic_garding_is_smooth}, it follows that $\cS'$ is smooth over $\cS$. \end{remark} \begin{remark} \label{remark:S_prime_is_localization} The complete bornological ring $\cS'$ induces a localization functor \[ V\mapsto\cS'\widehat{\otimes}_\cS V \] on the category of complete bornological smooth $\cS$-modules.
Here, $\widehat{\otimes}_\cS$ is the completion of the relative tensor product. Indeed, we construct a natural morphism $V\ra\cS'\widehat{\otimes}_\cS V$ via \[\xymatrix{ V & \cS\otimes_\cS V \ar[l]_-\sim \ar[r] & \cS'\widehat{\otimes}_\cS V. }\] Thus, it remains to verify that the completion of the multiplication map \[ \cS'\widehat{\otimes}_\cS\cS'\ra\cS' \] is an isomorphism. This follows because $\cS'{\otimes}_\cS\cS'$ and $\cS'{\otimes}_{\cS'}\cS'$ share a dense subset $\cS$, with the same induced bornology. \end{remark} \begin{remark} \label{remark:S_prime_paley_wiener_in_lines} Suppose that $F$ is totally real. Then we can try giving a description of the image of $\cS'$ under the Mellin transform. This should give some geometric intuition about the ring $\cS'$. Indeed, using Remark~\ref{remark:paley_wiener}, we see that \[ \cS'=\bigoplus\cS'_{\chi_0}, \] where each $\cS'_{\chi_0}$ consists of functions on the right half plane $\{\abs{\cdot}^s\chi_0\suchthat\Re{s}>1\}$ of $\Chars(F)$ which are analytic in the right half-plane and rapidly decreasing in vertical strips there (in the sense of Remark~\ref{remark:vertical_strips}). Recall that we choose unitary representatives $\chi_0$ out of each connected component of $\Chars(F)$. \end{remark} \subsection{Canonical Trivialization of the Module of Zeta Integrals} \label{subsect:L_gen_is_L_func} Our goal for this subsection is to construct the isomorphism $\cS'\otimes_\cS\cL\xrightarrow{\sim}\cS'$, which will be induced from a map $\cL\ra\cS'$. This will let us provide the desired correspondence $t\mapsto L_t$ between generators $t\co\cS\xrightarrow{\sim}\cL$ and L-functions $L_t$. \begin{construction} \label{const:L_into_S_prime} There is a canonical morphism of bornological $\cS$-modules \[ \cL\ra\cS' \] given by: \[ f(g)\mapsto \sum_{q\in F^\times}f(qg). \] \end{construction} \begin{remark} One can similarly define a map \[ \cS\ra\cS', \] using the same formula. It is easy to check that the map $\cS\ra\cS'$ is injective, by using standard techniques. \end{remark} \begin{remark} We will see later (Corollary~\ref{cor:L_inj_into_S_prime}) that the morphism $\cL\ra\cS'$ is also injective. \end{remark} \begin{remark} The morphism $\cL\ra\cS'$ of Construction~\ref{const:L_into_S_prime} already encodes all zeta integrals. To see this, suppose that we had some test function $\Psi\in S(\AA)$. Applying the map $\cL\ra\cS'$ along with the Mellin transform, we obtain the corresponding zeta integral via Equation~\eqref{eq:tate_zeta_integral}. \end{remark} Our main result for this section is that the map $\cL\ra\cS'$ of Construction~\ref{const:L_into_S_prime} presents $\cS'$ as the localization of $\cL$ under the functor $\cS'\hat{\otimes}_\cS-$ of Remark~\ref{remark:S_prime_is_localization}. In other words, $\cS'$ defines a localization of the category of complete smooth $\cS$-modules under which $\cS$ and $\cL$ coincide. This will justify thinking of $\cL$ together with the data of the map $\cL\ra\cS'$ as defining a ``divisor'' on $\Chars(F)$. \begin{theorem} \label{thm:S_prime_L_isom} The morphism $\cL\ra\cS'$ of Construction~\ref{const:L_into_S_prime} induces an isomorphism: \[ \cS'\otimes_\cS\cL\xrightarrow{\sim}\cS'. \] \end{theorem} \begin{remark} In more classical terms, the content of Theorem~\ref{thm:S_prime_L_isom} is that the L-functions we are constructing have a zero-free right-half-plane. \end{remark} The proof of Theorem~\ref{thm:S_prime_L_isom} is fairly heavy, and will be postponed to the end of this section. 
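Before turning to generators, let us spell out the unfolding computation behind the remark above, which identifies the image of a test function with its zeta integral. For a test function $\Psi\in S(\AA)$ and a Hecke character $\chi$ (which is, by definition, trivial on $F^\times$), we have, formally and for $\Re{s}$ large, \[ \int_{\AA^\times/F^\times}\Big(\sum_{q\in F^\times}\Psi(qg)\Big)\chi(g)\abs{g}^s\dtimes{g}=\int_{\AA^\times}\Psi(g)\chi(g)\abs{g}^s\dtimes{g}, \] which is exactly the zeta integral~\eqref{eq:tate_zeta_integral} of $\Psi$.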
At this point, we have all of the constructions necessary for our notion of L-function. We claim that the module $\cL$, together with the canonical embedding $\cL\ra\cS'$, encodes the classical notion of L-function. We formalize this statement as the following construction: \begin{construction} \label{const:gens_are_L_funcs} Suppose that we are given some (non-canonical) isomorphism \[ t\co\cS\xrightarrow{\sim}\cL. \] Applying the functor $\cS'\otimes_\cS-$, we get a morphism of modules \[ t\co\cS'\xrightarrow{\sim}\cS'. \] Recall that this is sometimes called an element of the \emph{roughening} of $\cS'$. After composing with the Mellin transform from both sides, as in Remark~\ref{remark:S_prime_paley_wiener_in_lines}, the map $t$ acts as multiplication by a function $L_t$ on some right-half-plane of the space $\Chars(F)$. We refer to $L_t$ as the \emph{L-function corresponding to $t$}. \end{construction} \begin{remark} Construction~\ref{const:gens_are_L_funcs} is not vacuous. Specifically, Theorem~\ref{thm:L_is_triv} guarantees the existence of a generator $t\co\cS\xrightarrow{\sim}\cL$ as above. \end{remark} \begin{corollary} \label{cor:L_inj_into_S_prime} The morphism $\cL\ra\cS'$ is injective. \end{corollary} \begin{proof} Pick an isomorphism $t\co\cS\xrightarrow{\sim}\cL$, as in Theorem~\ref{thm:L_is_triv}. Then we get a commutative diagram: \[\xymatrix{ \cS \ar[d]^t & \Schw\otimes_\Schw\cS \ar[l]_-\sim \ar[r] & \cS'\otimes_\cS\cS \ar[d]^{\cS'\otimes_\cS\,\displaystyle t} \ar[r]^-\sim & \cS' \ar@{-->}[d]^{L_t} \\ \cL & \Schw\otimes_\Schw\cL \ar[l]_-\sim \ar[r] & \cS'\otimes_\cS\cL \ar[r]^-\sim & \cS'. }\] By Theorem~\ref{thm:S_prime_L_isom}, we can extend $\cS'\otimes_\cS t$ to an isomorphism $\xymatrix@1{\cS' \ar@{-->}[r]^{L_t} & \cS'}$. Since the top row of the diagram is an injective morphism, so is the bottom row. \end{proof} \begin{remark} Let $t\co\cS\xrightarrow{\sim}\cL$ be some generator for $\cL$. Then $L_t$ is an automorphism of $\cS'$. In particular, it has no zeroes with $\Re{s}>1$. Moreover, it satisfies some moderate growth condition in vertical strips. We note that this L-function is well defined up to composing $t$ with an automorphism of $\cS$ as a module over itself. I.e., up to the roughening of $\cS$. This makes the L-function $L_t$ well-defined up to multiplication by a zero-free entire function such that both it and its inverse satisfy a similar moderate growth condition in vertical strips. \end{remark} \begin{remark} \label{remark:growth_of_L_in_vertical_strips} One should note that the above is slightly stronger than the corresponding classical claim. That is, when the L-function $\Lambda(\chi,s)$ is defined via the GCD procedure, it is only well-defined up to multiplication by an entire function with no zeroes. I.e., the moderate growth condition is absent. The author finds this interesting. Usually, when one considers growth conditions on, say, the Riemann zeta function, one takes the non-completed zeta function $\zeta(s)$. The reason for this is that the completed zeta function \[ \Lambda(s)=\pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right)\zeta(s) \] decreases very quickly in vertical strips, due to the presence of the gamma factor. In other words, the (interesting) growth behavior of the L-function in vertical strips is actually not a well-defined feature of the L-function when it is defined via the GCD procedure.
However, with the new formalism, a statement of the form ``the L-function grows slowly in vertical strips'' makes sense without modifying the definition of the L-function. We will discuss this further in Appendix~\ref{app:non_canonical_triv}. \end{remark} \begin{remark} It is also possible to redo the above definitions based on different variations of the ring $S(\AA^\times)$, and therefore $\cS$. If, for instance, one chooses to loosen the smoothness requirements, this would result in Mellin transforms that are required to decrease in vertical strips, but not very quickly. That is, instead of rapidly decreasing in vertical strips, our functions would be required to decrease more slowly (say, at some fixed polynomial rate). In turn, this would make the roughening of $\cS$ smaller. The implication of that would be that the L-function's growth in vertical strips is more strictly controlled. This would make questions of Lindel\"of type well-defined. \end{remark} \begin{remark} \label{remark:almost_smooth} Another way to modify the definition uses something similar to the ``almost smooth'' functions of Appendix~A of \cite{almost_smooth_padic}. These functions are a natural modification of the notion of a smooth function on the $p$-adic part of $\AA^\times$. In brief: instead of taking locally constant functions, one takes functions whose Fourier coefficients are rapidly decreasing. Using a definition based on almost smooth functions, rather than smooth functions, should give a notion of L-function with well defined growth properties not only in the complex $\pm i\infty$ direction, but also in the conductor aspect. This generalization will be explored elsewhere. \end{remark} Let us return to the proof of Theorem~\ref{thm:S_prime_L_isom}. \begin{proof}[Proof of Theorem~\ref{thm:S_prime_L_isom}] We have a morphism \[\xymatrix{ \cS'\otimes_{\cS}\cL \ar[r] & \cS', }\] and would like to construct an inverse. Such an inverse can always be induced from a map of bornological $\cS$-modules, \[\xymatrix{ \cS \ar@{-->}[r] & \cS'\otimes_{\cS}\cL. }\] Our strategy for doing so will work as follows. We will construct maps \begin{equation} \label{eq:local_canonical_trivs}\xymatrix{ S(F_v^\times) \ar[r] & \cS'\otimes_{S(F_v^\times)} S(F_v) }\end{equation} for all places $v$, corresponding to a generator $t_v\co S(F_v^\times)\ra S(F_v)$ tensored with its inverse. Since the product defining an L-function absolutely converges in the right-half-plane of $\Chars(F)$ corresponding to $\cS'$, these maps will multiply together to yield the desired map \[\xymatrix{ \cS \ar[r] & \cS'\otimes_{\cS}\cL. }\] Indeed, we define $\cS_v=S(F_v^\times)$ and $\cL_v=S(F_v)$. We also define a local variant of the ring $\cS'$, as follows. Set the ring extension $\cS_v'$ of $\cS_v$ to be the G\r{a}rding space of the space of functions $f$ on $F_v^\times$ such that there is a bound: \[ \int_{F_v^\times}\abs{g}_v^{1+\varepsilon}\max\{1,\abs{g}_v\}^\sigma\cdot\abs{f(g)}\dtimes{g}<\infty \] for all $\sigma<\infty$, $\varepsilon>0$. We also require that $f$ be supported on a compact subset of $F_v$ when $v$ is finite. That is, when $v$ is finite, then $\cS_v'$ is the space of locally constant functions on $F_v^\times$, supported on a compact subset of $F_v$, which do not increase much faster than $\abs{g}^{-1}$ near $0$. When $v$ is infinite, then $\cS_v'$ is the space of smooth functions on $F_v^\times$ which are rapidly decreasing near $\infty$ and do not increase much faster than $\abs{g}^{-1}$ near $0$. 
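For orientation, consider the element $\one_{\O_v^\times}-\one_{\pi_v\O_v^\times}\in\cS_v'$ (with $\pi_v$ a uniformizer), which will reappear at the end of this proof. Normalizing the Haar measure so that $\O_v^\times$ has volume one, its Mellin transform is \[ \int_{F_v^\times}\left(\one_{\O_v^\times}-\one_{\pi_v\O_v^\times}\right)(g)\abs{g}_v^s\dtimes{g}=1-q_v^{-s}, \] where $q_v$ is the size of the residue field at $v$. That is, this element is the reciprocal of the standard local L-factor $(1-q_v^{-s})^{-1}$; its inverse, which corresponds to $\one_{\O_v}$ restricted to $F_v^\times$, also lies in $\cS_v'$.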
The reader should note that in the case where $F_v$ is non-Archimedean, the term ``G\r{a}rding space'' simply refers to the subspace of functions on which the action of $F_v^\times$ is locally constant. This is always the case when $F$ is a function field. Now, our claim is that the natural inclusion $\cS_v\hookrightarrow\cL_v$ induces an isomorphism \[\xymatrix{ \cS_v' \ar[r]^-\sim & \cS'_v\otimes_{\cS_v}\cL_v. }\] Indeed, the quotient $\cL_v/\cS_v$ is killed by any inverse of the local L-function at $v$, which is invertible in $\cS_v'$. So, composing the above with the map $\cS_v\ra\cS_v'$ gives the desired local maps of~\eqref{eq:local_canonical_trivs}. Thus, it remains to show the convergence property we are after. Specifically, we claim that for all finite places $v$, the image of the distinguished vector $\one_{\O_v^\times}$ is the element \[ (\one_{\O_v^\times}-\one_{\pi_v\O_v^\times})\otimes\one_{\O_v}\in\cS_v'\otimes_{\cS_v}\cL_v, \] where $\pi_v$ is a uniformizer. Thus, it is enough to check that the product \[ \prod_v (\one_{\O_v^\times}-\one_{\pi_v\O_v^\times}) \] converges in $\cS'$. This indeed holds: in the Mellin picture (see the computation above), it amounts to the absolute convergence of the Euler product $\prod_v(1-q_v^{-s})$ on the right half-plane $\Re{s}>1$. \end{proof} \section{Functional Equation} \label{sect:functional_equation} Our goal for this section is to provide a notion of functional equation and analytic continuation that is compatible with our formalism. In terms of the analogy of Remark~\ref{remark:L_is_divisor}, we are showing that the ``divisor'' corresponding to $\cL$ is symmetric. To do this, we will need to examine the operation of Fourier transform on $\cL$ and its interaction with the embedding of $\cL$ in the space of functions on $\AA^\times/F^\times$. It will turn out that the compatibility (or lack thereof) of this embedding with the Fourier transform is deeply related to the poles of the L-function. We will thoroughly delve into this issue in the following two subsections. For the moment, let us fix some notation and state the main conclusions in advance. We begin by defining a ring involution $\iota\co\cS\ra\cS$ by: \[ \iota(f)(g)=\abs{g}^{-1}f(g^{-1}). \] For an $\cS$-module $M$, we will denote by $\prescript{\iota}{}{M}$ the twist of $M$ by $\iota$. \begin{construction} \label{const:fourier} We define the isomorphism \[ \F\co S(\AA)\xrightarrow{\sim}\prescript{\iota}{}{S(\AA)} \] to be given by the Fourier transform. This isomorphism depends on a choice of additive character $\psi\co\AA/F\ra\CC^\times$. By taking co-invariants, we also obtain an isomorphism: \[ \F\co\cL\xrightarrow{\sim}\prescript{\iota}{}{\cL}, \] which no longer depends on the choice of $\psi$. \end{construction} This means that we have two spaces, $\cS$ and $\cL$, each armed with an involution, $\iota$ and $\F$ respectively. Both spaces are embedded together in the larger space $\cS'$. It turns out that the involutions $\iota$ and $\F$ \emph{coincide} on the intersection $\cS\cap\cL$ inside $\cS'$. In particular, one can define a pushout diagram of $\Schw$-modules: \begin{equation} \label{eq:S_ext_pushout} \xymatrix{ \cS\cap\cL \ar[r] \ar[d] & \cL \ar[d] \\ \cS \ar[r] & \cS+\cL=\cS_\ext, }\end{equation} where all of the spaces carry compatible involutions. Let us take a moment to delve into the intuitive meaning of the intersection $\cS\cap\cL$. Under the Mellin transform, $\cS$ should consist of entire functions on $\Chars(F)$ that are rapidly decreasing in vertical strips (satisfying some extra conditions). Similarly, $\cL$ should consist of such functions multiplied by the L-function on $\Chars(F)$.
In particular, $\cS\cap\cL$ should consist of all functions that have the same zeroes as the L-function, no poles, and satisfy some growth conditions. This means that we expect that the intersection $\cS\cap\cL$ is large, while the quotient $\P=\cL/\cS\cap\cL$ is small, and is related to the poles of the L-function. So, let us take for granted for the moment that the quotient $\P=\cL/\cS\cap\cL=\cS_\ext/\cS$ is sufficiently ``small''. Then we are able to interpret the above diagram~\eqref{eq:S_ext_pushout} as giving a functional equation and analytic continuation for L-functions: the extension $\cS_\ext$ of $\cS$ is essentially the space of meromorphic functions with a small number of prescribed poles, and the map $\cL\ra\cS_\ext$ which sends a test function to its corresponding zeta integral respects the involutions on both sides. Hence, our problem is reduced to proving some smallness bounds on $\P$. Na\"ively, this sounds hard. The quotient $\cS/\cS\cap\cL$ contains all information about the zeroes of the L-function, and so an explanation is needed as to why the seemingly similar quotient $\P=\cL/\cS\cap\cL$ is so much easier to characterize. As it turns out, we are able to provide a surprisingly \emph{conceptual} reason for why the poles of the L-function are easier to study. Specifically, we embed $\cL$ in a space $\widetilde{\cS}$, which is somewhat larger than $\cS'$, but carries its own involution. We then show that the difference between the involutions of $\cL$ and $\widetilde{\cS}$ precisely captures the polar divisor $\P$. This allows us to isolate the desired quotient for study in a canonical way. The structure of the rest of this section is as follows. In Subsection~\ref{subsect:analytic_cont_func_eq}, we will formalize the above explanation about the various involutions involved. We will introduce a slightly different definition for $\P$, although it will turn out to be equivalent. In Subsection~\ref{subsect:polar_div}, we will study the properties of the polar divisor $\P$, and prove that it is the same as $\cL/\cS\cap\cL$. \subsection{Analytic Continuation and the Polar Divisor} \label{subsect:analytic_cont_func_eq} In order to be able to state a re-interpretation of the functional equation, we first need to change the target space for the map $\cL\ra\cS'$. The reason for that is that the space $\cS'$ is just too small; it is not symmetric under $\iota$. Instead, we define a new target space $\widetilde{\cS}$. It will have the advantage of being symmetric under $\iota$, but it will no longer be a ring. This will allow us to compare $\iota$ with $\F$. \begin{construction} \label{const:schw_prime_prime} Define the $\cS$-module $\widetilde{\cS}$ to be the G\r{a}rding space (i.e., the space of the smooth vectors) of the space of functions $f$ on $\AA^\times/F^\times$ that are of moderate growth, with its natural bornology. \end{construction} \begin{remark} The space $\widetilde{\cS}$ is an adelic version of the space $A_\text{umg}(\Gamma\backslash G)$ of functions of uniform moderate growth (cf.~\cite{schwartz_of_aut_quotient}). \end{remark} \begin{remark} The bornological space $\widetilde{\cS}$ is complete, and by Theorem~\ref{thm:adelic_garding_is_smooth} it is also smooth over $\cS$. In fact, it is precisely the smoothening of the dual of $\cS$. I.e., $\widetilde{\cS}$ is the \emph{contragredient} of $\cS$. \end{remark} \begin{remark} In the case where $F$ is taken to be a function field, no bornologies are necessary.
One can directly define $\widetilde{\cS}$ to be the contragredient of $\cS$, in the sense of being the space of locally constant functions on $\AA^\times/F^\times$ with no conditions on growth. \end{remark} \begin{remark} The isomorphism $\iota\co\cS\xrightarrow{\sim}\prescript{\iota}{}{\cS}$ extends to an isomorphism \[ \iota\co\widetilde{\cS}\xrightarrow{\sim}\prescript{\iota}{}{\widetilde{\cS}} \] via the same formula \[ \iota(f)(g)=\abs{g}^{-1}f(g^{-1}). \] \end{remark} \begin{construction} We define a map $\cL\ra\widetilde{\cS}$ via the composition \[ \cL\ra\cS'\ra\widetilde{\cS}. \] \end{construction} Given this embedding, one may naturally ask about the relation between the two involutions $\F\co\cL\ra\prescript{\iota}{}{\cL}$ and $\iota\co\widetilde{\cS}\ra\prescript{\iota}{}{\widetilde{\cS}}$. As it turns out, they do not commute, and this lack of commutativity reflects the poles of the L-function. \begin{definition} We let $\delta\co\cL\ra\widetilde{\cS}$ denote the difference between the two maps in the diagram: \[\xymatrix{ \cL \ar[r]_j \ar[d]^\F & \widetilde{\cS} \ar[d]^\iota \ar@{=>}[dl]^{\iota\circ\delta} \\ \prescript{\iota}{}{\cL} \ar[r]_{\iota(j)} & \prescript{\iota}{}{\widetilde{\cS}}. }\] In other words, we have $\delta=j-\iota\circ\iota(j)\circ\F$, where $j\co\cL\ra\widetilde{\cS}$ is the embedding. \end{definition} Our claim is that the map $\delta\co\cL\ra\widetilde{\cS}$ precisely presents the \emph{poles} of our L-function. In the divisor interpretation, this means that it presents the positive part of the formal difference $[\cL]-[\cS]$. In other words, let us define: \begin{definition} Define the bornological $\cS$-module $\P$ to be the co-kernel \[\xymatrix{ 0 \ar[r] & \ker{\delta} \ar[r] & \cL \ar[r] & \P \ar[r] & 0. }\] We refer to $\P$ as the \emph{polar divisor}. \end{definition} This coincides with the interpretation as the positive part of the difference $[\cL]-[\cS]$ via: \begin{proposition} \label{prop:kernel_of_delta} The kernel $\ker{\delta}$ is equal to the intersection $\cS\cap\cL$ in $\widetilde{\cS}$. \end{proposition} \begin{remark} \label{remark:kernel_of_delta} The inclusion $\ker{\delta}\subseteq\cS\cap\cL$ is easy to see. Indeed, note that: \[ \ker{\delta}\subseteq{\cS'}\cap\iota({\cS'}). \] However, it is clear from the definition that \[ {\cS'}\cap\iota({\cS'})\subseteq\cS. \] \end{remark} We will postpone the proof of the rest of Proposition~\ref{prop:kernel_of_delta} to the next subsection. Before we discuss the specific properties of $\P$, let us consider its relation to the analytic continuation property and the functional equation. Informally, the idea is as follows. We are looking for a space of meromorphic functions (under some Paley-Wiener correspondence) which is simultaneously big enough to contain $\cL$, and is symmetric with respect to $\iota$. Given such a space, we would be able to ask if its involution $\iota$ is compatible with the involution $\F$ of $\cL$. That would be our functional equation. Using the space $\widetilde{\cS}$ above fails on two counts: its involution is not compatible with $\F$, and it is too ``big'' to be thought of as a space of meromorphic functions. Therefore, we can ask for some minimal space $\cS_\ext$ satisfying our requirements. It turns out that we can safely choose $\cS_\ext$ to be the pushout \[\xymatrix{ \ker{\delta} \ar[d] \ar[r] & \cL \ar@{-->}[d] \\ \cS \ar@{-->}[r] & \cS_\ext.
}\] This space automatically contains both $\cS$ and $\cL$, and has an involution \[ \cS_\ext\xrightarrow{\sim}\prescript{\iota}{}{\cS_\ext} \] compatible with both $\iota\co\cS\ra\prescript{\iota}{}{\cS}$ and $\F\co\cL\ra\prescript{\iota}{}{\cL}$. Moreover, clearly $\cS_\ext$ is an extension of $\cS$ by $\P$. Thus, if we had a strong enough grip on the support of $\P$, we would immediately be able to interpret $\cS_\ext$ as a space of meromorphic functions with prescribed poles, and see that the image of $\cL$ in $\cS_\ext$ satisfies a functional equation. \begin{remark} \label{remark:kernel_delta_means_S_ext_in_S_prime} The essence of Proposition~\ref{prop:kernel_of_delta} is now that $\cS_\ext$ can be identified with the sum $\cS+\cL$ inside $\widetilde{\cS}$. I.e., that the two involutions $\iota$, $\F$ can be extended to the sum $\cS+\cL$ in a compatible manner. Alternatively, since both $\cS$ and $\cL$ lie inside ${\cS'}$, the proposition can be interpreted as meaning that the natural map \[ \cS_\ext\ra{\cS'} \] is injective. Since we are informally thinking of ${\cS'}$ as a space of holomorphic functions on some right-half-plane, Proposition~\ref{prop:kernel_of_delta} should be interpreted as meaning that the functional equation carries no ``extra'' data in its L-functions that cannot be analytically continued from the right-half-plane. That is, a priori it could have been possible that the correct notion of L-function contained a delta distribution on the complex plane (which would be invisible to analytic continuation). The proposition excludes this possibility. \end{remark} \subsection{Properties of the Polar Divisor \texorpdfstring{$\P$}{P}} \label{subsect:polar_div} Let us now start studying the properties of $\P$. Our goal is to show that its support is sufficiently small that the space $\cS_\ext$ of the previous subsection corresponds to meromorphic functions on the whole of $\CC$. Finally, we will use this to prove Proposition~\ref{prop:kernel_of_delta}. \begin{remark} \label{remark:P_is_smooth} We first note that $\P$ is smooth, as it is the quotient of a smooth module by a closed subspace. This follows from Corollary~5.6 of \cite{dixmier_malliavin_for_born_arxiv}. \end{remark} \begin{remark} In fact, using Proposition~\ref{prop:formula_for_delta}, we will see that we know exactly what $\P$ is in this case. It turns out to be $2$-dimensional, and in particular the bornological $\cS$-module $\P$ is complete. \end{remark} \begin{remark} As our next step in characterizing $\P$, we note that we must have a canonical isomorphism: \[ \P\xrightarrow{\sim}\prescript{\iota}{}{\P}, \] induced by $\F$. That is, the polar divisor is symmetric about the critical strip. This follows because \[ \iota\circ\delta\circ\F=-\delta, \] and thus $\ker{\delta}$ is preserved by $\F$. \end{remark} In order to make any further progress, we will need to make use of the following formula for $\delta$, which is an incarnation of the Poisson summation formula: \begin{proposition} \label{prop:formula_for_delta} The function $\delta\co\cL\ra\widetilde{\cS}$ is given by the formula: \[ \delta(\Psi)(g)=(\F\Psi)(0)\abs{g}^{-1}-\Psi(0). \] \end{proposition} This will allow us to know exactly where the poles $\P$ are, and to prove Proposition~\ref{prop:kernel_of_delta}.
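Let us indicate where this formula comes from. By the adelic Poisson summation formula, for $\Psi\in S(\AA)$ and $g\in\AA^\times$ one has \[ \sum_{q\in F}\Psi(qg)=\abs{g}^{-1}\sum_{q\in F}(\F\Psi)(qg^{-1}). \] The sums over $q\in F^\times$ compute $j(\Psi)(g)$ and $\abs{g}^{-1}\cdot j(\F\Psi)(g^{-1})$ respectively, so separating the $q=0$ terms on both sides yields \[ \delta(\Psi)(g)=j(\Psi)(g)-\abs{g}^{-1}\,j(\F\Psi)(g^{-1})=(\F\Psi)(0)\abs{g}^{-1}-\Psi(0), \] as claimed.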
It is now clear exactly what $\P$ is: \begin{corollary} \label{cor:supp_of_polar} There is an isomorphism: \begin{equation*} \one\oplus\prescript{\iota}{}{\one}\xrightarrow{\sim} \P, \end{equation*} where $\one$ is the trivial $\AA^\times$-module, thought of as an $\cS$-module. \end{corollary} In particular, $\P$ is a sum of skyscraper co-sheaves on $\Chars(F)$. Using this corollary, we can also finally prove Proposition~\ref{prop:kernel_of_delta}. \begin{proof}[Proof of Proposition~\ref{prop:kernel_of_delta}] We want to show that $\ker{\delta}=\cS\cap\cL$. The inclusion of the LHS in the RHS was already shown in Remark~\ref{remark:kernel_of_delta}. The inclusion in the other direction will use Corollary~\ref{cor:supp_of_polar}. We turn $\cS\cap\cL$ into a bornological vector space via the Cartesian diagram \[\xymatrix{ \cS\cap\cL \ar[r] \ar[d] & \cS \ar[d] \\ \cL \ar[r] & \widetilde{\cS}. }\] First, we observe that the restriction $\iota\circ\delta|_{\cS\cap\cL}\co\cS\cap\cL\ra\prescript{\iota}{}{\widetilde{\cS}}$ factors through $\prescript{\iota}{}{{\cS'}}$. Indeed, it is given by the difference of maps $\iota-\F$, and its image lies in $\cS+\prescript{\iota}{}{\cL}\subseteq\prescript{\iota}{}{{\cS'}}$ (we are using the fact that the map ${\cS'}\ra\widetilde{\cS}$ is injective). Hence, we obtain an injective map \[ \frac{\cS\cap\cL}{\ker{\delta}}\ra\prescript{\iota}{}{{\cS'}}, \] and want to show that its domain is $0$. Since $\P$ is (in an informal sense) torsion by Corollary~\ref{cor:supp_of_polar}, and the domain in question is a subspace of $\P$, it is enough to show that ${\cS'}$ is torsion-free in the same sense. That is, it is enough to show that if $f\in\cS'$ satisfies \[ (f(ghh')\abs{h}-f(gh'))-(f(gh)\abs{h}-f(g))=0 \] for all $h,h'\in\AA^\times$, then $f(g)=0$. However, this is a straightforward verification. \end{proof} \section{Abelian Extensions} \label{sect:decomposition_under_ext} Let $E\supseteq F$ be an Abelian extension. For the sake of simplicity, we suppose that $E$ is quadratic over $F$. Then the zeta function of $E$ factors into a product of two L-functions over $F$. In this section, we will present this statement's incarnation in our language. This turns out to be an actual refinement; the new statement contains additional information allowing one to relate zeta integrals of specific test functions on $E$ with zeta integrals of specific test functions on $F$. We begin by introducing some notation. Denote the character on $\AA^\times_F/F^\times$ corresponding to the extension $E/F$ by $\eta=\eta_{E/F}$. This induces an automorphism \[ \chi\mapsto\chi\eta \] of $\Chars(F)$, along with an automorphism \begin{align*} \eta\co\cS_F & \ra\cS_F \\ f(g) & \mapsto\eta(g)\cdot f(g) \end{align*} of the ring $\cS_F$. For an $\cS_F$-module $M$, we will let $\prescript{\eta}{}{M}$ denote the twist of $M$ by $\eta$. From the geometric point of view, the extension $E\supseteq F$ induces a canonical map \[ N\co\Chars(F)\ra\Chars(E), \] given by pre-composing a Hecke character $\chi\co\AA_F^\times/F^\times\ra\CC^\times$ with $N_{E/F}$. Moreover, we get a morphism of non-unital rings \[ N_!\co \cS_E\ra\cS_F, \] given by integrating along the multiplicative norm map \[ N_{E/F}\co\AA_E^\times\ra\AA_F^\times. \] We now claim that, informally, when $\cL_E$ is ``pulled back'' to $\Chars(F)$ along $N$, it splits into a product. This will recover the corresponding classical fact about decomposition of L-functions for quadratic extensions.
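In classical terms, the fact in question is the factorization of the Dedekind zeta function of $E$: \[ \zeta_E(s)=\zeta_F(s)\,L(s,\eta), \] where $\eta=\eta_{E/F}$ is the quadratic Hecke character attached to $E/F$ by class field theory.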
\begin{theorem} \label{thm:L_decomposes} There is a canonical isomorphism of bornological $\cS_F$-modules, \[ \cS_F\otimes_{\cS_E}\cL_E\cong\cL_F\otimes_{\cS_F}\prescript{\eta}{}{\cL_F}. \] Moreover, the isomorphism between the two sides is compatible with the maps into $\cS'_F$. \end{theorem} Before diving into the proof, let us re-interpret Theorem~\ref{thm:L_decomposes} in terms of the analogy of Remark~\ref{remark:L_is_divisor}, to get a more geometric intuition. \begin{remark} Consider the canonical map \[ N\co\Chars(F)\ra\Chars(E), \] given by pre-composing a Hecke character with $N_{E/F}$. This map has kernel $\{1,\eta\}$, and its image is the subgroup of $\Chars(E)$ given by characters that are invariant under the Galois group of $E$ over $F$. Let us denote this image by $\Chars(E/F)=\Chars(E)^{\Gal(E/F)}$. With this language, the informal essence of Theorem~\ref{thm:L_decomposes} is that the push-forward of the ``divisor'' $\cL_F$ to $\Chars(E/F)$ (as a divisor) identifies with the restriction of $\cL_E$ to $\Chars(E/F)$. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:L_decomposes}] We will prove that \[ S(\AA_F^\times)\otimes_{S(\AA_E^\times)}S(\AA_E)\cong S(\AA_F)\otimes_{S(\AA_F^\times)}\prescript{\eta}{}{S(\AA_F)}. \] This will be proven by embedding both sides in the larger space $\Func(\AA_F^\times)$ of functions on $\AA_F^\times$, and showing that their images coincide. Furthermore, we need to show that the bornologies on the two sides coincide. The two embeddings will be induced by the two maps, \begin{equation*}\xymatrix{ S(\AA_E) \ar[r] & \Func(\AA_F^\times), \\ S(\AA_F)\otimes_{S(\AA_F^\times)}\prescript{\eta}{}{S(\AA_F)} \ar[r] & \Func(\AA_F^\times), }\end{equation*} given by \[ \left\{h\mapsto f(h)\right\}\mapsto\left\{g\mapsto \int_{N(h)=g}f(h)\dtimes{h}\right\} \] and \[ f_1\otimes f_2\mapsto f_1*\eta f_2=\left\{g\mapsto \int f_1(g'^{-1}g)f_2(g')\eta(g')\dtimes{g'}\right\} \] respectively. To see this, it is enough to check that the above claim holds place-by-place, which is a straightforward (albeit tedious) verification. \end{proof} \begin{remark} \label{remark:abelian_L_decomposes} Let $E/F$ be an Abelian extension, which is no longer necessarily quadratic. Let us state the relevant generalization of Theorem~\ref{thm:L_decomposes}. There are characters $\eta_i\co\AA_F^\times/F^\times\ra\CC^\times$ corresponding to the extension $E/F$, with $0\leq i\leq d-1$, where $d$ is the degree of $E$ over $F$. Moreover, there are maps \begin{align*} N & \co\Chars(F)\ra\Chars(E) \\ N_! & \co \cS_E\ra\cS_F, \end{align*} as above. The claim is that the canonical maps into $\cS_F'$ induce an isomorphism \[ \cS_F\otimes_{\cS_E}\cL_E\cong\prescript{\eta_0}{}{\cL_F}\otimes_{\cS_F}\cdots\otimes_{\cS_F}\prescript{\eta_{d-1}}{}{\cL_F}. \] \end{remark} \begin{remark} \label{remark:cubic_L_decomposes} One can also generalize the above to extensions that are not necessarily Abelian. For example, suppose that $E/F$ is a non-Abelian cubic extension. In this case, the extension defines an irreducible generic automorphic representation $(\pi,V)$ of $\GL_2(\AA_F)$. The author believes (but has not proven) that the correct variant of Remark~\ref{remark:abelian_L_decomposes} is as follows. We define an $S(\AA_F^\times)$-module by restriction of $\pi$ to $\GL_1(\AA_F)\times \{1\}$ along the diagonal. We denote its co-invariants under the resulting action of $F^\times$ by $\cL_F(\pi)$. There is an embedding $V\subset\Func(\AA_F^\times)$ given by the Whittaker model.
After taking quotients by $F^\times$, this gives a map $\cL_F(\pi)\ra\cS'_F$. Now, the author believes (but has not proven in general) that this induces an isomorphism \[ \cS_F\otimes_{\cS_E}\cL_E\cong\cL_F\otimes_{\cS_F}\cL_F(\pi). \] The author finds it interesting that the object $\cS_F\otimes_{\cS_E}\cL_E$, constructed from \emph{Galois} data of the extension $E/F$, can be directly related to the underlying space $V$ of the \emph{automorphic} representation $\pi$, through its quotient $\cL_F(\pi)$. That is, it seems that the factorization of the L-function for a non-Abelian field extension allows constructing a correspondence between the underlying spaces of the automorphic representations associated to the field extension, and a construction made of pure Galois data. \end{remark} \begin{appendices} \section{Generators for \texorpdfstring{$\cL$}{L}} \label{app:non_canonical_triv} The goal of this section is to explicitly show that the $\cS$-module $\cL$ defined above happens to be free of rank one. This will be the main result of this section, Theorem~\ref{thm:L_is_triv}. The choice of generator for $\cL$ is analogous to the process of picking a standard L-factor at every place in the GCD description of L-functions. In particular, the generator itself is not well-defined, and therefore the constructions below will be somewhat ad hoc. For the non-Archimedean places, we will see that the standard L-factor suits our purposes just fine. That is, it serves as a generator for an appropriate module. See Claim~\ref{claim:locally_trivial_at_Qp}. For Archimedean places, this is no longer true. That is, the standard choice of L-factor at the Archimedean places turns out not to be a generator for the appropriate module. The reason for this failure is that the standard L-factor decreases too quickly in vertical strips. Instead, we will merely show the existence of a modification for this L-factor which \emph{does} have the right growth properties to be a generator. This is the content of Claim~\ref{claim:locally_trivial_at_R}. See also Remark~\ref{remark:growth_of_L_in_vertical_strips}. \begin{theorem} \label{thm:L_is_triv} The $\cS$-module $\cL$ is isomorphic to $\cS$. \end{theorem} This will follow from: \begin{claim} \label{claim:A_is_A_times} The $S(\AA^\times)$-module $S(\AA)$ is isomorphic to $S(\AA^\times)$. \end{claim} We will prove this place-by-place. \begin{claim} \label{claim:locally_trivial_at_Qp} Let $F$ be a non-Archimedean local field. Then the $S(F^\times)$-module $S(F)$ is isomorphic to $S(F^\times)$. Moreover, the isomorphism can be chosen such that it sends $\one_{\O^\times}$ to $\one_\O$. \end{claim} This is also proven as Item~(2) of Lemma~4.18 of \cite{zeta_rep}. \begin{proof} We consider the morphism \[ S(F^\times)\ra S(F) \] given by convolution with the distribution \[ f(g)=\begin{cases} 0 & \abs{g}>1 \\ \delta_1(g) & \abs{g}=1 \\ 1 & \abs{g}<1, \end{cases} \] where $\delta_1(g)$ is the delta distribution at $g=1$. It is a direct verification to check that this map satisfies the required properties. Indeed, normalizing the Haar measure so that $\O^\times$ has volume one, the Mellin transform of $f$ is $1+\sum_{k\geq 1}q^{-ks}=(1-q^{-s})^{-1}$, where $q$ is the size of the residue field; since convolution multiplies Mellin transforms, this carries the transform of $\one_{\O^\times}$, which is the constant function $1$, to that of $\one_\O$. \end{proof} \begin{claim} \label{claim:locally_trivial_at_R} Let $F=\RR$. Then the $S(\RR^\times)$-module $S(\RR)$ is isomorphic to $S(\RR^\times)$. \end{claim} \begin{proof} One way to think about this kind of isomorphism is via the Mellin transform. A map \[ S(\RR^\times)\ra S(\RR) \] should correspond to pointwise multiplication by some function in the Mellin picture.
In order to have the right image, this function (which is, essentially, the local L-function) needs to have the right poles and zeroes, as well as some growth properties in vertical strips. There are some explicit maps which are almost isomorphisms of $S(\RR)$ with $S(\RR^\times)$. The Mellin transforms of functions such as $e^{-\pi y^2}$ and $e^{iy^2}$ have the correct set of poles to give the right ``divisor'', but decrease too fast as $s\ra-i\infty$. The strategy of our proof will be to choose a function with the right poles and zeroes, and then ``fix'' its vertical growth. Let us address the proof itself. Note that it is enough to prove the claim separately for even and odd functions on $\RR$. We will define generalized functions $\phi_\pm\co\RR\ra\CC$, one for each parity, which are rapidly decreasing at $\infty$, and such that convolution with $\phi_++\phi_-$ defines the sought-after isomorphism \[ S(\RR^\times)\xrightarrow{\sim}S(\RR). \] We will explicitly describe only $\phi_+$. The odd variant can be given by $\phi_-(y)=y\phi_+(y)$. We will describe $\phi_+(y)$ via its Mellin transform. The idea is this. The usual L-function at $\infty$, \[ f(s)=\pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right), \] has the right set of poles, but it decreases very quickly in vertical strips. This would prevent it from defining an isomorphism. Specifically, its absolute value behaves as \[ \abs{\left(\frac{s}{2e\pi}\right)^{s/2}\sqrt{\frac{4\pi}{s}}\,}\sim\abs{t}^{\frac{\sigma-1}{2}}e^{-\frac{\pi}{4}\abs{t}} \] via the Stirling formula, when $s=\sigma+it$ and $t\ra\pm\infty$. However, applying Lemma~\ref{lemma:correct_vertical} below to $f(s)$ yields a meromorphic function $g(s)$, which has no zeroes, has a simple pole at every non-positive even integer, and such that $g(s)$ is bounded from above and below in vertical strips. We now claim that multiplying the Mellin transform by $g(s)$ gives an isomorphism. Specifically, let \[ \alpha\co S(\RR^\times)_+\ra S(\RR)_+ \] be the map on even functions sending a function whose Mellin transform is $h(s)$ to the function whose Mellin transform is $h(s)g(s)$. We wish to show that this map is bijective. Indeed, the map $\alpha$ is clearly injective. It remains to show that it is a surjection. Let $u(s)$ be the Mellin transform of an even function $v(y)$ in $S(\RR)_+$. We wish to show that $u(s)/g(s)$ is entire and is rapidly decreasing in vertical strips. That it is entire is clear, since $u(s)$ has at most simple poles at non-positive even integers. The rapid decrease in vertical strips follows by induction, using the fact that \[ \frac{v(y)-v(0)e^{-\pi y^2}}{y} \] removes the pole at $0$ and shifts $u(s)$ by $1$. \end{proof} In the course of the proof, we have used the following lemma to correct the behaviour of a function in vertical strips. \begin{lemma} \label{lemma:correct_vertical} Let $f(s)$ be a meromorphic function on $\CC$, which has no zeroes or poles outside the horizontal half-strip $\{\sigma+it\suchthat\text{$\sigma<1$ and $\abs{t}<1$}\}$. Then there exists a meromorphic function $g(s)$ such that $f(s)/g(s)$ has no zeroes or poles, and $g(s)$ is uniformly bounded from above and below outside the half-strip. \end{lemma} \begin{proof} This is a direct consequence of Arakelian's approximation theorem (see \cite{entire_function_approx}). The theorem allows us to create an entire function $\phi(s)$ such that $\abs{\phi(s)-\log(f(s))}$ is uniformly bounded outside the half-strip. The desired function is now \[ g(s)=e^{-\phi(s)}f(s).
\] \end{proof} \begin{claim} \label{claim:locally_trivial_at_C} Let $F=\CC$. Then the $S(\CC^\times)$-module $S(\CC)$ is isomorphic to $S(\CC^\times)$. \end{claim} Recall that the notation $\abs{\cdot}_\CC=\abs{N_{\CC/\RR}(\cdot)}$ denotes the absolute value of the norm, and is thus the square of the usual absolute value. \begin{remark} \label{remark:paley_wiener_at_C} In order to prove Claim~\ref{claim:locally_trivial_at_C}, we will need to give a Paley-Wiener style description for $S(\CC^\times)$. Indeed, we have \[ S(\CC^\times)=S\!\left(\RR^\times_{>0}\times\RR/{2\pi\ZZ}\right)=S(\RR^\times_{>0})\,\hat{\otimes}\,S\!\left(\RR/{2\pi\ZZ}\right). \] Thus, the Mellin transform \[ \hat{f}_n(s)=\int f(z)\left(\frac{z}{\abs{z}}\right)^{n}\abs{z}_\CC^s\dtimes{z} \] of a function $f\in S(\CC^\times)$ is entire in $s$, and its semi-norms \[ \norm{\hat{f}_n(s)}_{\sigma,m}=\sup_{\stackrel{t\in\RR}{n\in\ZZ}}(1+\abs{t}^m)(1+\abs{n}^m)\abs{\hat{f}_n(\sigma+it)} \] are bounded for all $\sigma\in\RR$ and $m\geq 0$. This description exactly characterizes the image of $S(\CC^\times)$ under the Mellin transform. In addition, we will make use of the following description of the Mellin transform of $S(\CC)$. A sequence $\{{f}_n(s)\}_{n\in\ZZ}$ of meromorphic functions lies in the image of the Mellin transform of $S(\CC)$ if and only if: \begin{enumerate} \item Each function ${f}_n$ has at most simple poles, and they are located inside the set $-\frac{\abs{n}}{2}-\ZZ_{\geq 0}$. \item The semi-norms $\norm{{f}_n(s)}_{\sigma,m}$ are bounded for all $\sigma\in \frac{1}{4}+\frac{1}{2}\ZZ$ and $m\geq 0$. \end{enumerate} This description can be proven via standard methods, using the fact that the Mellin transforms of the functions $z^m e^{-\pi\abs{z}^2}$ satisfy the above requirements. \end{remark} \begin{proof}[Proof of Claim~\ref{claim:locally_trivial_at_C}] In a similar manner to the proof of Claim~\ref{claim:locally_trivial_at_R}, it is sufficient to supply a sequence of functions $\{{g}_n(s)\}_{n\in\ZZ}$ such that: \begin{enumerate} \item Each function $g_n(s)$ has no zeroes, and has a simple pole at $-\frac{\abs{n}}{2}-m$ for all integer $m\geq 0$. \item The functions $g_n(s),g_n(s)^{-1}$ satisfy some moderate growth condition. We will choose $\{{g}_n(s)\}_{n\in\ZZ}$ such that: \[ \sup_{\stackrel{t\in\RR}{n\in\ZZ}}\abs{{g}_n(\sigma+it)}<\infty, \qquad\sup_{\stackrel{t\in\RR}{n\in\ZZ}}\abs{{g}_n(\sigma+it)}^{-1}<\infty \] for all $\sigma\in \frac{1}{4}+\frac{1}{2}\ZZ$. \end{enumerate} Our choice is simply \[ g_n(s)=g(\abs{n}+2s), \] where $g(s)$ is the same function as in the proof of Claim~\ref{claim:locally_trivial_at_R}. \end{proof} This completes the proof of Theorem~\ref{thm:L_is_triv}. \end{appendices} \Urlmuskip=0mu plus 1mu\relax \bibliographystyle{alphaurl}
\section{Introduction} This package provides an essential feature to \LaTeX~that has been missing for too long. It adds a coffee stain to your documents. A lot of time can be saved by printing stains directly on the page rather than adding them manually. You can choose from four different stain types: \begin{enumerate} \item $270^\circ$ circle stain with two tiny splashes \item $60^\circ$ circle stain \item two splashes with light colours \item and a colourful twin splash. \end{enumerate} \section{Usage} To use the package, simply place the \texttt{coffee4.sty} file in the directory with all of your other \texttt{.tex} files \textit{or} install it properly (consult your distribution's manual). Then include the following line in the header of your document: \begin{verbatim} \usepackage{coffee4} \end{verbatim} To place a coffee stain on a page, put one of the following commands in the source code of the relevant page: \begin{verbatim} \cofeAm{alpha}{scale}{angle}{xoff}{yoff} \cofeBm{alpha}{scale}{angle}{xoff}{yoff} \cofeCm{alpha}{scale}{angle}{xoff}{yoff} \cofeDm{alpha}{scale}{angle}{xoff}{yoff} \end{verbatim} where \texttt{alpha} is the transparency factor $\in [0,1]$. The scale factor is {\tt scale}, and the default is {\tt scale}=1. The angle is in degrees $\in [0,360]$. The position relative to the centre of the page is given by the x and y offsets \texttt{xoff} and \texttt{yoff}. \section{Copyright} You can freely distribute this package as I do not believe in imaginary property. All stains are self-made, photographed by myself, processed with gimp and traced with Inkscape. Donations should be made in coffee only. My address is \begin{quote} Hanno Rein\\ DAMTP, CMS\\ Wilberforce Road\\ Cambridge CB3 0WA\\ United Kingdom \end{quote} See more coffee stains on the next pages. \cofeCm{0.9}{1}{180}{0}{0} \newpage \cofeDm{0.4}{0.5}{90}{0}{0} Coffee is great. \newpage \cofeBm{0.7}{1}{0}{0}{0} Coffee will save the world. \newpage \cofeAm{0.7}{0.75}{2}{0}{0} Coffee will save the world. \end{document} \section{Introduction} There has been some confusion over the role of postselection in quantum information processing protocols. On one hand, postselection is a powerful computational resource~\cite{Aaronson05} and enables technological goals, such as probabilistic photon-photon gates \cite{ObrPryWhi03}. On the other hand, in some situations postselection can impede quantum information processing. Probabilistic metrology---also known as metrology with abstention~\cite{GenRonCal2013a} and weak value amplification~\cite{DixStaJor09}---is the idea that postselection may improve estimation precision beyond the usual quantum limits. When the performance of probabilistic metrology is evaluated with respect to the standard figure of merit for parameter estimation, mean squared error, postselection is provably suboptimal, even when there are imperfections~\cite{Knee2013a,TanYam2013,FerCom2013,Knee2013b,ComFerJia13a,ZhaDatWam13,KneComFerGau14,PanJiaCom13}. Counterclaims have been made in the literature (see Refs.~\cite{JorMarHow14,PanDreBru14,CalBenMun14,PangBrun14,SusaTanaka15}), but the issue is far from settled. In this article we attempt to reconcile the intuition that postselection can help statistical tasks with the fact that, for the standard figures of merit, it generically does not. To simplify the analysis and make our assumptions explicit, we will use a statistical decision theory approach in the context of quantum state discrimination~\cite{BerHerHil04,BarCro09}.
To assert that a state discrimination protocol is optimal, we must first specify a {\em cost} or {\em loss} function which encapsulates how each decision is penalized. Then we minimize the average loss over decision rules and measurements. This approach defines a task for which the optimal protocol incurs the least losses for the specified loss function. For example, consider a two-party discrimination game involving an employer Alice and an employee Bob. Alice gives Bob one of two quantum states $\Psi_1$ or $\Psi_2$. Bob is allowed to perform any generalized measurement on the state but then must report which state Alice gave him; he cannot decline to report a state. Bob's bonus, of at most $\mathbb{D}$ dollars, is tied to his performance in this game. If he reports $\Psi_i$ when $\Psi_j$ is true, his bonus will be reduced to $\$ (1- \lambda_{i,j} )\mathbb{D}$, where $\lambda_{i,j}$ is called the loss function. Bob wants to devise a strategy to minimize his expected losses. When the cost of reporting the correct answer is ``0'' and the incorrect answer is ``1'' or maximal, $\lambda_{i,j}$ is known as the $0$-$1$ loss function. Minimizing the losses from the $0$-$1$ loss function is equivalent to minimizing the probability of misidentifying the states (termed the error probability)~\cite{Helstrom1976Quantum,Fuchs96}. The corresponding optimal measurement strategy, with respect to minimizing losses, is called the Helstrom~\cite{Helstrom1976Quantum} or minimum error measurement. A postselected strategy will have higher expected losses; that is, it is suboptimal with respect to the $0$-$1$ loss function. Postselected strategies for state discrimination were introduced by Ivanovic~\cite{Iva87}, Dieks~\cite{Die88}, and Peres~\cite{Per88} in what is now known as \emph{unambiguous state discrimination} (USD). In USD one allows for an extra ``reject'' decision---postselection---and then two nonorthogonal states can be distinguished without error, albeit probabilistically. The USD measurement is optimized in the sense that it has minimal probability of reporting the inconclusive result ``reject''. Prior work on inconclusive state discrimination has focused on exploring and optimizing schemes which interpolate between minimum error probability and minimum inconclusive result probability~\cite{CheBar98,ZhaLiGou99,TouAdaSte07, HayHasHor2008,SugHasHor2009, BagMunOli12, DreBruKor14}. Typically in USD and its generalizations~\cite{CroAndBar06} there is no explicit penalty for reporting ``reject''. It is unclear if such postselection is optimal with respect to any loss function. Here we re-formalize the inconclusive state discrimination problem by assigning a cost to discarded outcomes. In particular, we modify the most commonly used cost, the 0-1 loss function, to what we call the \mbox{0-1-$\lambda$}\ loss function. In the \mbox{0-1-$\lambda$}\ loss function, $\lambda$ is the cost of reporting ``reject''. In our approach, we find that the USD measurement appears when $\lambda\to 0$. In this limit there is an alternative protocol which is equally optimal: always report ``reject''. Finally, we show how our results can be connected to previous approaches where there is a tradeoff between the rejection probability and the error probability~\cite{CheBar98,ZhaLiGou99,TouAdaSte07, HayHasHor2008,SugHasHor2009, BagMunOli12, DreBruKor14}. Our analysis adheres to the desiderata suggested in Ref.~\cite{ComFerJia13a}, and thus is a definitive case where employing postselection can be said to be optimal.
\cofeAm{0.125}{0.6}{0}{7in}{8in} \section{Statistical decision theory} We start by reviewing statistical decision theory and formally introducing the \mbox{0-1-$\lambda$}\ loss function, which is a special case of Chow's work on hypothesis testing or classification \cite{Chow57,Chow70}. Consider a set of competing hypotheses $\mathcal H_j$ for $j\in \{1,2,\ldots,n\}$ with prior probabilities $\Pr(\mathcal H_j)$. Given some data $\mathbf{D}$ the posterior probability of the $j$'th hypothesis is \begin{align} \Pr(\mathcal H_j| \mathbf{D}) = \frac{\Pr( \mathbf{D}|\mathcal H_j)\Pr(\mathcal H_j)}{\Pr(\mathbf{D})}, \end{align} where \begin{align} \Pr( \mathbf{D}) = \sum_{j=1}^n \Pr( \mathbf{D}|\mathcal H_j)\Pr(\mathcal H_j). \end{align} What we would like to do is have a {\em decision rule} $\delta(\mathbf D)$ that maps the data $\mathbf{D}$ to decision $i$---that is, report hypothesis $i$, where in this case $i\in \{0,1,2,\ldots,n\}$. The decision $i=0$ allows for the possibility that one may not be able to decide, often referred to as the ``don't know'', ``abstain'', or ``reject'' option. In Bayesian decision theory the decision rule must arise from minimizing a loss function, which encapsulates how each decision is penalized. The conditional risk, i.e. the {\em a posteriori} expected loss, for the decision $i$ conditioned on data $\mathbf{D}$ is \begin{align} \mathcal R [i|\mathbf{D}] &= \sum_{j=1}^n\ \lambda_{i,j} \Pr( \mathcal H_j| \mathbf{D} ), \end{align} where the loss function is denoted by $\lambda_{i,j}$, which corresponds to reporting hypothesis $i$ when hypothesis $j$ is true. The loss function $\lambda_{i,j}$ is a good place to start building intuitions for the role of postselection in detection and estimation theory. Following Chow, we will require that \begin{align} \lambda_{i,i} <\lambda_{0,j} <\lambda_{i,j}\quad (i\neq j \neq 0), \end{align} which is interpreted as follows: the loss $\lambda_{i,i}$ ($i\neq0$) for making a correct decision is less than the cost $\lambda_{0,j}$ of rejecting, which in turn is less than the cost $\lambda_{i,j}$ of making a wrong decision. We relax this assumption in \srf{beyondzol}, such that $\lambda_{0,j} >\lambda_{i,j}$ is possible. A good description of the mathematical and philosophical requirements of a loss function can be found in Chapter~2 of Ref.~\cite{berger85}. The optimal decision is \begin{align} \delta^*(\mathbf{D}) \equiv \arg\min_{i} \mathcal R [i|\mathbf{D}]. \end{align} When we turn our attention to quantum hypothesis testing we will need to determine the optimal measurement to pair with this optimal decision rule. The criterion for optimality we adopt will require us to minimize the average of the posterior risk \begin{subequations}\label{total risk} \begin{align} \mathcal R[\delta(\mathbf D)] &= \sum_{\mathbf D} \sum_j \lambda_{\delta(\mathbf D),j} \Pr(\mathcal H_j|\mathbf D) \Pr(\mathbf D), \\ & = \sum_{\mathbf D} \sum_j \lambda_{\delta(\mathbf D),j} \Pr(\mathbf D|\mathcal H_j) \Pr(\mathcal H_j), \end{align} \end{subequations} over the distribution of data and the measurement. When we assume the optimal decision is being used we denote the total risk as $\mathcal R^*=\mathcal R[\delta^*(\mathbf D)]$.
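For concreteness, this decision rule is easy to prototype numerically. The following Python sketch is purely illustrative (the priors, likelihoods, and loss matrix are placeholder values for a two-hypothesis problem with a reject option); it computes the posterior, the conditional risks, and the optimal decision:
\begin{verbatim}
import numpy as np

# Loss matrix lam[i, j]: cost of decision i (row 0 = "reject")
# when hypothesis j is true.  Placeholder 0-1-lambda values.
lam = np.array([[0.3, 0.3],   # reject, at cost lambda = 0.3
                [0.0, 1.0],   # report hypothesis 1
                [1.0, 0.0]])  # report hypothesis 2

prior = np.array([0.5, 0.5])  # Pr(H_j)
like = np.array([0.7, 0.2])   # Pr(D | H_j) for the observed D

posterior = prior * like
posterior /= posterior.sum()  # Bayes rule: Pr(H_j | D)

risk = lam @ posterior        # conditional risks R[i | D]
decision = np.argmin(risk)    # delta*(D); 0 means "reject"
print(risk, decision)
\end{verbatim}
With these numbers the largest posterior is $0.78$, which exceeds $1-\lambda=0.7$, so the sketch reports hypothesis $1$ rather than rejecting, in agreement with the threshold rule derived below.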
To simplify our analysis we will consider binary hypothesis testing (i.e.\ $\mathcal H_1$ vs $\mathcal H_2$) and take \begin{align} \begin{array}{rl}\label{01lam} \lambda_{1,1} &=\lambda_{2,2} = 0,\\ \lambda_{1,2} &=\lambda_{2,1} =1,\\ \lambda_{0,1}&=\lambda_{0,2}=\lambda, \end{array} \end{align} which we call the ``\mbox{0-1-$\lambda$}'' loss function. For the \mbox{0-1-$\lambda$}\ loss function the conditional risks for decisions $i$ are \begin{align}\label{cond_risky} \begin{array}{rl} \mathcal R [2|\mathbf{D}] &= 1-\Pr( \mathcal H_2| \mathbf{D} ),\\ \mathcal R [1|\mathbf{D}] &=1-\Pr( \mathcal H_1| \mathbf{D} ), \\ \mathcal R [0|\mathbf{D}] &= \lambda , \end{array} \end{align} where we have used $\Pr( \mathcal H_1| \mathbf{D} )+ \Pr( \mathcal H_2| \mathbf{D} )=1$. \begin{figure}\centering \includegraphics[width=\columnwidth]{fig1.pdf} \caption{\label{fig} The Bloch representation of the states and POVM elements involved in the state discrimination protocol. The POVM elements $E_{\bf D}(\phi)$ are not mixed states, but subnormalized rank-1 operators, which lie on a circle at a lower level in a cone of positive operators. The grey lines on the left figure are the arc of the POVM elements as $\phi$ is varied in \erf{POVM} from 0 to $\pi/2$. The right figure illustrates two special cases of the POVM elements $E_{\bf D}(\phi)$. When $\phi =\pi/2$ there are only two POVM elements and the measurement is the Helstrom measurement. When $\phi=\theta$ we recover the USD measurement. } \end{figure} Thus our decision rule $\delta^*(\mathbf{D})$ is \begin{align}\label{decision1} \delta^*(\mathbf{D}) = \begin{cases} 2 & \text{if } \mathcal R [2|\mathbf{D}] < \mathcal R [1|\mathbf{D}] \text{ and } \mathcal R [0|\mathbf{D}] \\ 1 & \text{if } \mathcal R [1|\mathbf{D}] < \mathcal R [2|\mathbf{D}] \text{ and } \mathcal R [0|\mathbf{D}] \\ 0 & \text{otherwise} \end{cases}. \end{align} With respect to the posterior probabilities we find \begin{align}\label{decision2} \delta^*(\mathbf{D}) = \begin{cases} 2 & \text{if }\Pr( \mathcal H_2| \mathbf{D} )\ge 1- \lambda \text{ and } \Pr( \mathcal H_1| \mathbf{D} ) \\ 1 & \text{if }\Pr( \mathcal H_1| \mathbf{D} )\ge 1- \lambda \text{ and } \Pr( \mathcal H_2| \mathbf{D} ) \\ 0 & \text{otherwise} \end{cases}. \end{align} In words, the decision rule is as follows: find the largest posterior probability; if it is greater than or equal to the threshold $1-\lambda$, report it; if it is less than $1-\lambda$, report ``reject''. Now we connect this decision theoretic framework to quantum hypothesis testing. \cofeAm{0.05}{0.6}{90}{8in}{-1.5in} \section{State discrimination}\label{sec:statediscrim} In quantum theory the statistics of measurements are described by a positive operator valued measure (POVM) $\{E_{\mathbf D} \}$, the elements of which sum to the identity: $\sum_{\mathbf D}E_{\mathbf D}=\Id$. The number of elements of a POVM is the number of outcomes of the measurement. To match this with our previous terminology, the outcomes of the measurement are the data $\mathbf{D}$. In order to encompass both USD and Helstrom measurements we must consider a three-outcome POVM $E_{\mathbf D}$ where $\mathbf D \in \{0, 1, 2\}$.
Let us make the following symmetry assumptions to make the discussion less cumbersome: \begin{subequations}\label{symm} \begin{align} \Pr( \mathcal H_1 )& = \Pr( \mathcal H_2),\\ \Pr(E_1) &=\Pr(E_2),\\ \Pr( E_1| \mathcal H_1) & =\Pr( E_2| \mathcal H_2),\\ \Pr( E_1| \mathcal H_2) & =\Pr( E_2| \mathcal H_1),\\ \Pr( E_0| \mathcal H_1) & =\Pr( E_0| \mathcal H_2). \end{align} \end{subequations} These symmetries are implied, for example, by the states and operators in Fig.~\ref{fig}. \begin{figure*}\centering \includegraphics[width=\textwidth]{fig2.pdf} \caption{\label{fig2} Expected risk $\mathcal{R}$ (row 1) and decision rule (row 2) for the $\mbox{0-1-$\lambda$}$ loss function. In all figures the abscissa is $\phi$ (the measurement angle) and the ordinate is $\lambda$ (the cost of reporting ``reject''). The dark black line is the minimum risk ($\mathcal{R}^*[\phi^*]$) for a given $\lambda$ and thus specifies the optimal measurement angle. The shaded regions in the second row are simply the regions for which the expected risk is less than $\lambda$; in these regions one always reports $i$ if one obtained outcome $E_i$.} \end{figure*} Utilizing some of these symmetries, the total risk in \erfsub{total risk}{\,b} becomes \begin{align} \mathcal R =& \smallfrac{1}{2} [ (\lambda_{\delta(0),1}+\lambda_{\delta(0),2})\Pr(E_0|\mathcal H_1)+\nonumber\\ &\quad (\lambda_{\delta(1),1}+\lambda_{\delta(2),2})\Pr(E_1|\mathcal H_1)+\nonumber\\ &\quad (\lambda_{\delta(2),1}+ \lambda_{\delta(1),2})\Pr(E_2|\mathcal H_1)]. \end{align} Next we use the optimal decision rule, \erf{decision1} or \erf{decision2}, and more of the symmetries to massage this expression. Further, we assume that $\lambda <1/2$, since for $\lambda\ge1/2$ one can always randomly choose to report $\mathcal H_1$ or $\mathcal H_2$ and reduce the expected risk (in \srf{beyondzol} we will relax this assumption). \erfsub{symm}{e} implies $\Pr(\mathcal H_1|E_0)=\Pr(\mathcal H_2|E_0)=1/2$; thus the conditional risks \erf{cond_risky} imply that the optimal decision for $\mathbf D=0$ is always $\delta^*(0)=0$. The equalities $\lambda_{{\delta^*}(1),1}=\lambda_{\delta^*(2),2}$ and $\lambda_{\delta^*(2),1}=\lambda_{\delta^*(1),2}$ are likewise implied by symmetry. Using these relations we obtain \begin{align} \mathcal R^* =& \lambda_{0,1} \Pr(E_0|\mathcal H_1)+ \lambda_{\delta^*(1),1}\Pr(E_1|\mathcal H_1)+\nonumber\\ & \lambda_{\delta^*(2),1}\Pr(E_2|\mathcal H_1). \end{align} Recall from \erf{01lam} that $\lambda_{0,1}= \lambda$. Using this and Bayes' rule we obtain \begin{align} \mathcal R^* =&2[ \lambda\Pr(\mathcal H_1|E_0) \Pr(E_0)+\lambda_{\delta^*(1),1}\Pr(\mathcal H_1|E_1)\Pr(E_1)\nonumber\\ &+\lambda_{\delta^*(2),1}\Pr(\mathcal H_1|E_2)\Pr(E_2)]. \end{align} Then using $\Pr(E_0)= 1-\Pr(E_1)-\Pr(E_2)=1-2\Pr(E_1)$ we have \begin{align} \mathcal R^* =&2\{ \smallfrac{1}{2} \lambda[1-2 \Pr(E_1)]+\\ & [\lambda_{\delta^*(1),1}\Pr(\mathcal H_1|E_1)+\lambda_{\delta^*(1),2}\Pr(\mathcal H_2|E_1)]\Pr(E_1)\},\nonumber \end{align} where we have used $\Pr(\mathcal H_1|E_2) = \Pr(\mathcal H_2| E_1)$ and \erfsub{symm}{b}. The term $T=[\lambda_{\delta^*(1),1}\Pr(\mathcal H_1|E_1)+\lambda_{\delta^*(2),1}\Pr(\mathcal H_1|E_2)]$ still depends on the optimal decision rule, so we must explicitly use it.
It is important to note that we cannot assume $\delta^*(1)=1$; this means we must consider two cases ($\delta^*(1)=2$ is obviously ruled out by symmetry): (1) $\delta^*(1)=0$: this implies $T= \lambda [ \Pr(\mathcal H_1|E_1)+ \Pr(\mathcal H_2|E_1)]=\lambda$; or (2) $\delta^*(1)=1$: this implies $T= \Pr(\mathcal H_2|E_1)$. Using the optimal decision rule, the risk becomes \begin{align}\label{risk1} \mathcal R^* &= \begin{cases} \lambda\Pr(E_0|\mathcal H_2)+\Pr(E_1|\mathcal H_2) & \text{if } \Pr(\mathcal H_2|E_1)\leq \lambda\\ \lambda & \text{otherwise} \end{cases}. \end{align} Equivalently, this can be written as \begin{align}\label{da risk} \mathcal R^* &= \lambda + \min \left \{ 0,\Pr(E_1|\mathcal H_2)-\lambda [1-\Pr(E_0|\mathcal H_2)] \right \}. \end{align} The above expression for the risk holds for the \mbox{0-1-$\lambda$}\ loss function and any two hypotheses and measurements satisfying the symmetry conditions. The first term represents the part of the expected risk when a rejection is made. The second term is not yet optimized over the possible measurements. As a specific example, here we will consider the problem of discriminating the following two quantum states: \begin{subequations} \begin{align}\label{quantdis} \mathcal H_1: \quad\ket{\Psi_1} &= \cos\smallfrac{\theta}{2} \ket{0} + \sin\smallfrac{\theta}{2} \ket{1},\\ \mathcal H_2: \quad\ket{\Psi_2} &= \cos\smallfrac{\theta}{2} \ket{0} -\sin\smallfrac{\theta}{2} \ket{1}, \end{align} \end{subequations} where $0\le \theta \le \pi/2$, $|\ip{\Psi_2}{\Psi_1}|=\cos\theta$ and the prior probabilities are $\Pr(\mathcal H_1)= \Pr(\mathcal H_2)=1/2$. The symmetries we imposed in \erf{symm} imply that the measurement is in fact a generalized measurement with POVM elements \begin{align}\label{POVM} E_2(\phi)&=\frac{1}{{2 \cos ^2\smallfrac{\phi}{2} }}\left( \begin{array}{cc} \sin ^2\smallfrac{\phi}{2} & -\sin \smallfrac{\phi}{2} \cos \smallfrac{\phi}{2} \\ -\sin \smallfrac{\phi}{2} \cos \smallfrac{\phi}{2} & \cos ^2\smallfrac{\phi}{2} \\ \end{array} \right),\nonumber\\ E_1(\phi)&=\frac{1}{{2 \cos ^2\smallfrac{\phi}{2} }}\left( \begin{array}{cc} \sin ^2\smallfrac{\phi}{2} &\phantom{-} \sin \smallfrac{\phi}{2} \cos \smallfrac{\phi}{2} \\ \phantom{-}\sin \smallfrac{\phi}{2} \cos \smallfrac{\phi}{2} & \cos ^2\smallfrac{\phi}{2} \\ \end{array} \right),\\ E_0(\phi)&=\left( \begin{array}{cc} 1-\tan ^2\smallfrac{\phi}{2} & 0 \\ 0 & 0 \\ \end{array} \right),\nonumber \end{align} such that $E _2(\phi)+E _1(\phi)+E _0(\phi)=\Id$. When $\phi=\pi/2$ we get $E_0= 0, E_1= \op{+}{+}, E_2= \op{-}{-}$ (where $\ket{\pm}$ are the eigenstates of the Pauli $X$ operator), which is the Helstrom measurement for all $\theta$. When $\phi = \theta$ we obtain the USD measurement for all $\theta$. In \frf{fig} the grey lines are the arc traced by \erf{POVM} as a function of $\phi$. Note that for $\phi > \pi/2$ the POVM element $E_0$ is not a positive operator; thus we do not allow these values of $\phi$. To apply the above decision theoretic formalism we need to compute the probabilities given in \erf{da risk}. All of these probabilities can be computed using the usual rule: \begin{align} \Pr(E_{\mathbf D}|\mathcal H_i,\phi) = \bra{\Psi_i}E_{\mathbf D}(\phi)\ket{\Psi_i}, \end{align} see footnote \cite{explicit} for some examples. Notice how all of the probabilities depend on the measurement angle $\phi$; this means the expected risk will also be a function of $\phi$.
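These probabilities are equally easy to tabulate numerically. The following Python sketch (again only illustrative; the chosen values of $\theta$ and $\phi$ are arbitrary) builds the POVM elements of \erf{POVM}, verifies completeness, and evaluates $\Pr(E_{\mathbf D}|\mathcal H_i,\phi)=\bra{\Psi_i}E_{\mathbf D}(\phi)\ket{\Psi_i}$:
\begin{verbatim}
import numpy as np

def povm(phi):
    # POVM elements E0, E1, E2; positivity requires phi <= pi/2.
    s, c = np.sin(phi / 2), np.cos(phi / 2)
    E2 = np.array([[s * s, -s * c], [-s * c, c * c]]) / (2 * c * c)
    E1 = np.array([[s * s, s * c], [s * c, c * c]]) / (2 * c * c)
    E0 = np.array([[1 - (s / c) ** 2, 0], [0, 0]])
    return E0, E1, E2

def outcome_probs(theta, phi):
    # Pr(E_D | H_i, phi) = <Psi_i| E_D |Psi_i> for i = 1, 2.
    psi1 = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    psi2 = np.array([np.cos(theta / 2), -np.sin(theta / 2)])
    return [[psi @ E @ psi for E in povm(phi)]
            for psi in (psi1, psi2)]

theta, phi = np.pi / 6, np.pi / 4
assert np.allclose(sum(povm(phi)), np.eye(2))  # completeness
print(outcome_probs(theta, phi))
\end{verbatim}
One can check in this way, for instance, that $\phi=\theta$ gives $\Pr(E_1|\mathcal H_2)=0$ (the USD measurement never errs), while $\phi=\pi/2$ gives $\Pr(E_0|\mathcal H_i)=0$ (the Helstrom measurement never rejects).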
Given the POVM elements in \erf{POVM} the expected risk is \begin{align}\label{risk_eg} \mathcal{R}^*[\phi]=&\, \lambda\,+\\ &\min \left[0,\frac{(2 \lambda -1) (\cos \theta \cos \phi -1)-\sin \theta \sin \phi}{2 (1+\cos \phi)}\right].\nonumber \end{align} Intuitively this says the risk is at most $\lambda$ and sometimes less. This risk is plotted in \frf{fig2} as a function of $\lambda$ and $\phi$ for particular values of $\theta$. To find the optimal angle we fix $\lambda$ and ask which $\phi$ minimizes $\mathcal{R}^*[\phi]$. This can be done analytically. The trivial case is when $\mathcal{R}^*[\phi]=\lambda$, and thus no optimization over $\phi$ is possible. The optimal measurement is found by solving \begin{align} \frac{\partial}{\partial \phi}\left [ \lambda\,+\frac{(2 \lambda -1) (\cos \theta \cos \phi -1)-\sin \theta \sin \phi}{2 (1+\cos \phi)} \right ]=0 \end{align} for $\phi$. The constraint on the positivity of the measurement operators, i.e.\ $\phi\le \pi/2$, results in the following piecewise definition of the optimal measurement angle \begin{align}\label{phi_opt} \phi^* = \!\left\{\! \begin{array}{cl} 2 \cot^{-1}\!\left[(1-2 \lambda ) \cot \dfrac{\theta }{2}\right] \!& \text{if }\,\lambda < \dfrac 1 2 \left(1 - \tan\dfrac \theta 2 \right )\vspace{5pt}\\ \dfrac{\pi}{2} & \text{if }\,\lambda \ge \dfrac 1 2 \left(1 - \tan\dfrac \theta 2 \right )\vspace{5pt}\\ \end{array} \right .. \end{align} This optimal angle is plotted as the solid black lines in \frf{fig2}. The decision functions plotted in the second row of \frf{fig2} are particularly simple: in the shaded regions report $\mathbf D$ if $E_{\mathbf D}$ is observed, and report ``reject'' or 0 if $E_{\mathbf D}$ is observed in the non-shaded regions. From \frf{fig2} it is clear that, as a function of $\lambda$, the optimal measurement angle interpolates between the USD and the Helstrom measurement. This can be made explicit as follows. The second branch of \erf{phi_opt}, i.e.\ when $\phi^*= \pi/2$, is the Helstrom measurement. To recover the USD measurement, we plug $\lambda = 0$ into \erf{phi_opt}, which gives $\phi^*= \theta$; so $\lambda=0$ implies the USD measurement. However, $\lambda=0$ is also a degenerate case where no cost is assigned to reporting ``reject''. Thus, the risk is {\em also} minimized by reporting ``reject'' for \emph{any} outcome of \emph{any} measurement or, equivalently, not bothering to make the measurement and simply reporting ``reject''. Recall that what we are calling \emph{the} USD measurement is the one which minimizes the probability of obtaining the ``reject'' outcome in the usual paradigm. Here, as expected, the USD measurement is approached for $\lambda\to0$. This is also when the probability of reporting ``reject'' is maximized, see \frf{fig5} of \srf{sec:pr_pe}. \begin{figure}\centering \includegraphics[width=\columnwidth]{fig3.pdf} \caption{\label{fig3} The angle $\phi^*$ of the optimal measurement minimizing the risk for the $\mbox{0-1-$\lambda$}$ loss, i.e.\ \erf{phi_opt}, as a function of $\lambda$ and $\theta$. The dot-dashed line at $\lambda=0$ corresponds to the USD measurement when $\phi^*=\theta$. Above the dashed line the Helstrom measurement is optimal. The optimal angle has been discretized for plotting.} \end{figure} To complete the example we plot in \frf{fig3} the optimal measurement angle $\phi^*$ as a function of $\lambda$ and the angle $\theta$ between the states.
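Equation \erf{phi_opt} is also straightforward to evaluate numerically; a minimal sketch (again Python with \texttt{numpy}; for $\lambda<1/2$ the $\cot^{-1}$ is rewritten as an $\arctan$, which is valid here because its argument is positive):
\begin{verbatim}
import numpy as np

def phi_star(theta, lam):
    # Optimal measurement angle of Eq. (phi_opt); the two branches
    # meet continuously at lam = (1 - tan(theta/2)) / 2.
    if lam < 0.5 * (1 - np.tan(theta / 2)):
        return 2 * np.arctan(np.tan(theta / 2) / (1 - 2 * lam))
    return np.pi / 2  # Helstrom measurement

print(phi_star(np.pi / 8, 0.0))   # = theta: the USD angle
print(phi_star(np.pi / 8, 0.45))  # = pi/2: the Helstrom angle
\end{verbatim}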
The USD protocol corresponds to the line at $\lambda =0$, while the Helstrom measurement is performed when $\phi^*=\pi/2$. The area where $\phi^*=\pi/2$ is approximately half of the parameter space, i.e.\ $\lambda \gtrsim \smallfrac{1}{2} (1-\theta/2) + O(\theta^3)$; thus even when the loss function encourages postselection it is not guaranteed to be optimal. Other studies of inconclusive state discrimination \cite{TouAdaSte07,HayHasHor2008, BagMunOli12, DreBruKor14} concern themselves with the probabilities of error and reporting the ``reject'' result. This avoids the question of what to do \emph{given} the outcome of some measurement. Here we have phrased the problem as a decision theoretic one where the loss is incurred on the decisions, and once that loss is specified, a definitive answer can be given. In real applications, it would be unlikely that an agent's decisions are constrained to be deterministic functions of measurement operators. Indeed, our results imply that loosening that constraint can only decrease the agent's risk if they cannot measure at the optimal angle for a given $\lambda$. \section{Relationship between Risk and error and reject probabilities}\label{sec:pr_pe} So far we have focused on the decision function and the loss function. In this section we connect our approach to the previous approaches which focus on tradeoffs between reject and error probabilities \cite{DreBruKor14}, and rejection thresholds \cite{BagMunOli12}. For equal prior probabilities, the optimal decision rule when measuring at the optimal angle is particularly simple: report $\mathbf D$ if $E_{\mathbf{D}}$ is observed. Let $C$, $E$, $R$, and $A$ denote the events of making the correct decision, making an error, rejecting, and accepting a piece of data, respectively. The corresponding probabilities can be written explicitly as follows: \begin{subequations}\label{Probz} \begin{align} \Pr(C|\theta,\lambda)&=\sum_{i\in\{1,2\}}\Pr(\mathcal H_i) \Pr[E_i(\phi^*)|\Psi_i],\\ \Pr(E|\theta,\lambda)&=\sum_{i, j\in\{1,2\},i\neq j}\Pr(\mathcal H_i) \Pr[E_j(\phi^*)|\Psi_i],\\ \Pr(R|\theta,\lambda)&=\sum_{i\in\{1,2\}}\Pr(\mathcal H_i) \Pr[E_0(\phi^*)|\Psi_i],\\ \Pr(A|\theta,\lambda)&=\Pr(C|\theta,\lambda) +\Pr(E|\theta,\lambda). \end{align} \end{subequations} These probabilities obey $\Pr(E) +\Pr(C)+\Pr(R)=1$, which implies $\Pr(A)+\Pr(R) = 1$. \begin{figure}\centering \includegraphics[width=0.95\columnwidth]{fig4.pdf} \caption{\label{fig4} The probabilities in \erf{Probz} as a function of the angle between the states $\theta$. When $\lambda = 0.5$ it is easy to show that $\Pr(C|\theta)=1-\Pr(E|\theta)=(1/2)(1+\sin\theta)$, $\Pr(A|\theta)= 1$, and $\Pr(R)= 0$, as plotted in the top left plot. These lines are the gray lines on the other figures. Generically, as $\theta\rightarrow 0$ the probability of reporting ``don't know'' approaches one, except when $\lambda =0.5$. When the equality in the second branch of \erf{phi_opt} is satisfied, the measurement switches from one with an inconclusive outcome to the Helstrom measurement, i.e.\ $\Pr(A)=1$, $\Pr(R)=0$, and $\Pr(C|\theta)=1-\Pr(E|\theta)=(1/2)(1+\sin\theta)$. } \end{figure} In \frf{fig4} we plot these probabilities as a function of the angle $\theta$ between the states. A strategy without postselection adheres to the $\lambda=0.5$ lines of \frf{fig4}. Deviating from this behavior indicates postselection. Notice that as $\theta\rightarrow 0$, $\Pr(R)\rightarrow 1$ for all $\lambda$ except $\lambda=0.5$.
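The probabilities in \erf{Probz} follow from the same ingredients. A self-contained sketch (Python with \texttt{numpy}; by the symmetries of \erf{symm} only the matrix element of $E_1(\phi^*)$ is needed, and completeness supplies $\Pr(R)$):
\begin{verbatim}
import numpy as np

def probs(theta, lam):
    # Pr(C), Pr(E), Pr(R), Pr(A) of Eq. (Probz) at the optimal angle.
    phi = (2 * np.arctan(np.tan(theta / 2) / (1 - 2 * lam))
           if lam < 0.5 * (1 - np.tan(theta / 2)) else np.pi / 2)
    s, c = np.sin(phi / 2), np.cos(phi / 2)
    E1 = np.array([[s**2, s * c], [s * c, c**2]]) / (2 * c**2)
    psi1 = np.array([np.cos(theta / 2),  np.sin(theta / 2)])
    psi2 = np.array([np.cos(theta / 2), -np.sin(theta / 2)])
    pC = psi1 @ E1 @ psi1   # = Pr(E2|Psi2) by symmetry
    pE = psi2 @ E1 @ psi2   # = Pr(E2|Psi1) by symmetry
    pR = 1 - pC - pE        # completeness of the POVM
    return pC, pE, pR, pC + pE

# At lam = 0.5 (never reject): Pr(C) = (1 + sin(theta))/2, Pr(R) = 0.
theta = np.pi / 8
print(probs(theta, 0.5), (1 + np.sin(theta)) / 2)
\end{verbatim}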
In \frf{fig5} we plot the error probability and reject probability as functions of the rejection threshold. Postselection occurs whenever $\Pr(A)<1$. Notice that as $\lambda$ approaches 0, the probability of rejection increases toward its maximum for all values of $\theta$. \begin{figure}\centering \includegraphics[width=0.95\columnwidth]{fig5.pdf} \caption{\label{fig5} The rejection and error probabilities as a function of $\lambda$. When $\lambda = 0$ the measurement strategy is precisely the USD measurement and the rejection probability attains its maximum $\Pr(R)=\cos\theta$. Now consider the values of $\lambda$ for which $\Pr(R)=0$. For example when $\theta = \pi/8$, $\Pr(R)=0$ when $\lambda \in [0.4,0.5]$. As $\lambda$ is decreased, the probability of rejection increases and the probability of error decreases, with diminishing returns. } \end{figure} In 1970 Chow \cite{Chow70} showed a particularly simple relationship between the error probabilities and the minimum risk under the optimal decision rule: \begin{subequations}\label{ChowRisk} \begin{align} \mathcal R^*[\phi^*] &= \Pr(E|\theta,\lambda)+ \lambda \Pr(R|\theta,\lambda),\\ &= \int_0^\lambda d\lambda' \Pr(R|\theta,\lambda',\phi). \end{align} \end{subequations} Both of these expressions can be visualized graphically, see \frf{fig6}. Prior to our work, the expression given in \erfsub{ChowRisk}{a} was one of the ways the loss function had been explained, see e.g.\ \cite{DreBruKor14}. It is important that the optimal decision rule and measurement angle are used; otherwise the risk will generally differ from the above risk. It turns out that $\Pr(E)$ can be derived from $\Pr(R)$ for a particular rejection threshold. Chow~\cite{Chow70} has shown that the Stieltjes integral of $\lambda$ with respect to $\Pr(R|\theta,\lambda)$ is precisely the error probability \begin{align}\label{StieltjesPrE} \Pr(E|\theta,\lambda)= - \int_0^\lambda \lambda' \, d\Pr(R|\theta,\lambda'). \end{align} As noted by Chow, this expression is suggestive of an error probability--reject probability tradeoff relation, see \frf{fig7}. If $\Pr(R|\theta,\lambda)$ is differentiable with respect to $\lambda$ then the Stieltjes integral reduces to the Riemann integral \begin{align}\label{RiemannPrE} \Pr(E|\theta,\lambda)= - \int_0^\lambda \lambda' \left [\frac {d}{d\lambda'}\Pr(R|\theta,\lambda') \right ]d\lambda'. \end{align} From \erf{StieltjesPrE} and \erf{RiemannPrE} it is clear that the slope of the error-reject tradeoff curve in \frf{fig7} is exactly the value of the rejection threshold. Consequently, the tradeoff is most effective initially and less rewarding as the desired error decreases. In \frf{fig7} we also see that specifying a particular rejection threshold, e.g.\ $\Pr(R)=Q$ as in \cite{BagMunOli12}, implies a value for $\lambda$ and $\Pr(E)$ (once $\theta$ is fixed). \begin{figure}\centering \includegraphics[width=\columnwidth]{fig6.pdf} \caption{\label{fig6} The relationship between risk and probability for rejection. The rejection probability is plotted as a function of the rejection threshold $\lambda$ when $\theta = \pi/8$. Consider a rejection threshold of $\lambda =0.3$; given this threshold and the angle between the states, the expected risk can be computed from \erf{risk_eg} to be $\mathcal R \approx 0.26$. Equation~\erfsub{ChowRisk}{b} shows this is equivalent to the (shaded) area under the curve up to the rejection threshold.
The area under the curve can be decomposed into a rectangle with height $ \Pr(R|\theta,\lambda)\approx0.724 $ and width $\lambda = 0.3$, so $\lambda \Pr(R|\theta,\lambda)\approx0.2172$; the integral in \erf{RiemannPrE} gives $\Pr(E|\theta,\lambda)\approx0.0428$, and thus $\mathcal R^*= \Pr(E|\theta,\lambda)+ \lambda \Pr(R|\theta,\lambda)$. } \end{figure} \begin{figure}\centering \includegraphics[width=\columnwidth]{fig7.pdf} \caption{\label{fig7} Error-reject tradeoff curve. In fact the derivative of $\Pr(E)$ with respect to $\Pr(R)$ is $\lambda$. These curves are implicit functions of $\lambda$. The tradeoff is not linear in the rejection threshold $\lambda$. This is evident on the line corresponding to $\theta = \pi/8$, where six crosses corresponding to $\lambda\in \{0,0.1,0.2,0.3,0.4,0.5\}$ are plotted. } \end{figure} \section{The 0-$\lambda_E$-$\lambda_R$ loss function}\label{beyondzol} \begin{figure}\centering \includegraphics[width=0.95\columnwidth]{fig8.pdf} \caption{\label{fig8} Decision regions for the 0-$\lambda_E$-$\lambda_R$ loss function. In all figures the angle between the states is $\theta = \pi /8$ and the reject loss was chosen to be $\lambda_R=1$. The shaded regions should be interpreted as ``report the column heading''. In row one, the reporting of a hypothesis given the inconclusive outcome is a result of \erf{gen_risk_D0}. Evidently, as $\lambda_E$ becomes large the decision rule becomes more like unambiguous state discrimination. } \end{figure} Here we generalize the \mbox{0-1-$\lambda$}\ loss function to the 0-$\lambda_E$-$\lambda_R$ loss function, where $\lambda_E$ is the cost of reporting the incorrect decision and $\lambda_R$ is the cost of reporting reject --i.e., \begin{align} \begin{array}{rl}\label{0lamElamR} \lambda_{1,1} &=\lambda_{2,2} = 0,\\ \lambda_{1,2} &=\lambda_{2,1} =\lambda_E,\\ \lambda_{0,1}&=\lambda_{0,2}=\lambda_R. \end{array} \end{align} For the 0-$\lambda_E$-$\lambda_R$ loss function in \erf{0lamElamR} the conditional risks for decision $i$ are \begin{align}\label{gen_risk} \begin{array}{rl} \mathcal R [2|\mathbf{D}] &=\lambda_E [1-\Pr( \mathcal H_2| \mathbf{D} ) ],\\ \mathcal R [1|\mathbf{D}]&=\lambda_E [1-\Pr( \mathcal H_1| \mathbf{D} ) ] , \\ \mathcal R [0|\mathbf{D}]&= \lambda_R . \end{array} \end{align} The following analysis assumes the same states [\erf{quantdis}], prior probabilities [$\Pr(\mathcal H_i)=1/2$], and measurements [\erf{POVM}] as before. Of particular interest is the case when the measurement outcome $E_{0}(\phi)$ is obtained, i.e.\ $\mathbf{D}=0$; then the conditional risks are \begin{align}\label{gen_risk_D0} \begin{array}{rl} \mathcal R [2|0] &=\lambda_E/2,\\ \mathcal R [1|0] &=\lambda_E /2, \\ \mathcal R [0|0] &= \lambda_R. \end{array} \end{align} Thus if $\lambda_R> \lambda_E/2$ we should never reject; instead we should report either hypothesis, as illustrated in row 1 of \frf{fig8}. In \frf{fig8} we have chosen $\lambda_R=1$, so that for all $\lambda_E\le 2$ we must report either hypothesis to minimize our risk. In particular, if we perform a measurement with an inconclusive outcome ($\phi<\pi /2$) and obtain the inconclusive outcome $E_{0}$, we should randomly choose between reporting $\mathcal H_1$ and $\mathcal H_2$.
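The comparison behind this conclusion is just an argmin over the conditional risks of \erf{gen_risk}; a minimal sketch (plain Python; the labels and tie-breaking convention are illustrative):
\begin{verbatim}
def decide(post1, post2, lam_E, lam_R):
    # Conditional risks of Eq. (gen_risk) for reporting 1, 2, or
    # "reject" (0); report the argmin (ties broken arbitrarily).
    risks = {1: lam_E * (1 - post1), 2: lam_E * (1 - post2), 0: lam_R}
    return min(risks, key=risks.get)

# Inconclusive outcome: post1 = post2 = 1/2, so rejecting pays off
# only when lam_R < lam_E / 2, as in Eq. (gen_risk_D0).
print(decide(0.5, 0.5, lam_E=1.0, lam_R=1.0))  # report a hypothesis
print(decide(0.5, 0.5, lam_E=3.0, lam_R=1.0))  # 0: reject
\end{verbatim}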
For $\lambda_R < \lambda_E /2$ we find \begin{align}\label{gen_decision} \delta(\mathbf{D}) = \begin{cases} 2 & \text{if }\Pr( \mathcal H_2| \mathbf{D} )\ge 1- \frac{\lambda_R}{\lambda_E} \text{ and } \Pr( \mathcal H_2| \mathbf{D} ) \ge \Pr( \mathcal H_1| \mathbf{D} ) \\ 1 & \text{if }\Pr( \mathcal H_1| \mathbf{D} )\ge 1- \frac{\lambda_R}{\lambda_E} \text{ and } \Pr( \mathcal H_1| \mathbf{D} ) \ge \Pr( \mathcal H_2| \mathbf{D} ) \\ 0 & \text{otherwise} \end{cases}. \end{align} In words, the decision rule is as follows: find the largest posterior probability; if it is greater than or equal to the threshold $1- \frac{\lambda_R}{\lambda_E} $, report it; if it is less than $1- \frac{\lambda_R}{\lambda_E}$, report ``reject''. Like the \mbox{0-1-$\lambda$}\ loss function, the 0-$\lambda_E$-$\lambda_R$ loss function also interpolates between the Helstrom measurement and unambiguous state discrimination, as illustrated in \frf{fig9}. Notice, for both loss functions, we did not need to ``normalize'' the loss function or add additional constraints such as $\Pr(R)=0$ or $\Pr(E)=0$, unlike other approaches \cite{DreBruKor14}. \begin{figure}\centering \includegraphics[width=0.95\columnwidth]{fig9.jpg} \caption{\label{fig9} Risk as a function of measurement angle $\phi$ and the cost of reporting the wrong decision $\lambda_E$ for the 0-$\lambda_E$-$\lambda_R$ loss function. Here $\theta = \pi /8$ and the reject loss was chosen to be $\lambda_R=1$. For $\lambda_E<2.5$ we see the optimal measurement is the Helstrom measurement, and as $\lambda_E\rightarrow \infty$ the optimal measurement approaches the USD measurement. } \end{figure} \section{discussion} In the ongoing debate about postselection for information theoretic tasks in quantum theory, we have given a plausible example where postselection is a feature of the optimal solution. We say plausible because the loss function on the decisions was not tailored to favor full-blown postselection---the solution was not obvious. In \srf{sec:statediscrim} we have shown that USD measurements only arise in the limit when the cost assigned to discarding data is exactly zero, which corresponds to the line $\lambda=0$ for all $\theta$ in \frf{fig3}. In contrast, the Helstrom measurement appears to be the natural measurement for approximately half of the parameter space, $\lambda \gtrsim \smallfrac{1}{2} (1-\theta/2)$. For the remainder of the parameter space, i.e.\ $\lambda \lesssim \smallfrac{1}{2} (1-\theta/2)$, strategies involving postselection (that are not USD) are optimal. In \srf{sec:pr_pe} we unified three seemingly separate approaches, namely the decision theoretic approach (i.e.\ our \mbox{0-1-$\lambda$}\ loss function), the rejection threshold approach \cite{BagMunOli12}, and the probability tradeoff approach \cite{DreBruKor14}. Section \ref{beyondzol} highlighted that the decision function cannot simply be ignored---in some situations it is better to report an answer even if the inconclusive outcome was obtained. It is natural to ask what the implications of our analysis are. In practical situations it could be desirable to reduce errors by rejecting some data, but excessive rejection is required to reduce the error to zero. And, at the point where the error is zero, one can equivalently reject without bothering to perform any experiment, as the cost of rejection is also zero. Generally, this implies that specifying a loss function conditional on some event being successful is equivalent to assigning zero {\em cost} to a rejection option.
Again, if the cost of rejection is zero, why should you bother to perform the experiment at all? We have suggested that a sensible approach is to embed a postselection protocol into a class of protocols which assign loss for discarding data; this makes the price of postselection clear. For example, consider offline magic state distillation for quantum computation \cite{BravyiKitaev05}. The success probability is relevant for quantifying the efficiency (or expected yield in Sec.~VI of \cite{CamAnwBro12}) of the magic state distillation routine. When the success probability of the scheme is too small, the overall distillation routine is inefficient, even if it performs very well when it does succeed. This is generically true in offline state preparation. If costs are low, we are happy to wait for some time for a state to be prepared. But the costs are not zero, as we actually want to make a state and perform a useful task. The virtue of the decision theoretic approach is that all the assumptions, constraints and figures of merit are made explicit at the outset---the rest is derived. Thus, within this framework it is quite natural to include new constraints and features. For example, if experimental noise or inaccuracies or constraints are of concern, one must include those at the highest level---that is, they must be specified in the initial states, POVM, or loss function. Questions of robustness or imperfections, which plague other approaches, are simply a category mistake here. A number of open questions remain. The first class of questions are about extensions to the specific ideas developed in this manuscript. A simple modification is when Alice makes collective measurements on $N$ copies of $\ket{\Psi_1}$ or $\ket{\Psi_2}$. In this case the states look more orthogonal because $|\ip{\Psi_1}{\Psi_2}|^{2N}\le |\ip{\Psi_1}{\Psi_2}|^2$. Based on our results in \frf{fig3}, we conjecture that the optimal joint measurement for the \mbox{0-1-$\lambda$}\ loss function will look closer to a Helstrom measurement than the USD measurement. The obvious question is: does a bound on the $N$-copy risk exist? Ideally the solution would be something like the quantum Chernoff bound~\cite{QChernoff}, which bounds the minimum error probability asymptotically in $N$ (i.e.\ the risk of the 0/1 loss function). The second class of questions are about the role of postselection in quantum information tasks. Although we have conjured an exotic loss function for which the optimal strategy includes postselection, it is not tied explicitly to an existing operational task. Nevertheless, we suggest that our decision theoretic approach should be taken for any practical state discrimination (or estimation) problem which allows for the possibility of postselection. Extending our approach to parameter estimation seems to be the next great challenge. The results in this manuscript add weight to our suggested loss function \cite{ComFerJia13a}: report ``reject'' and incur loss $\lambda$ for mean squared error (MSE) above some threshold, and incur the MSE loss below that threshold. \acknowledgments{We thank Emili Bagan, Ben Baragiola, John Calsamiglia, Carl Caves, Justin Dressel, Bernat Gendra, Chris Granade, Mark Howard, Norbert L\"utkenhaus, Yihui Quek, Ramon Mu\~{n}oz-Tapia, and Elie Wolfe for discussions and suggestions. We are particularly grateful that Elie pointed out \erf{phi_opt} could be simplified to its present form.
The authors thank Mathematica-gicians Agata Bra\'nczyk and Chris Granade---without the magic the figures in this manuscript would look considerably different. This work was supported in part by NSF Grant Nos. PHY-1212445 and PHY-1314763. JC was also supported by the Australian Research Council Centre of Excellence for Engineered Quantum Systems grant number CE110001013, CERC, NSERC, and FXQI. CF was also supported in part by the Canadian Government through the NSERC PDF program, the IARPA MQCO program, the ARC via EQuS project number CE110001013, and by the US Army Research Office grant numbers W911NF-14-1-0098 and W911NF-14-1-0103.}
\section{Introduction} With many important applications in aerial or underwater missions, systems are underactuated either by design---in order to reduce actuator weight, expenses or energy consumption---or as a result of technical failures. In both cases, it is important to develop control policies that can exploit the nonlinearities of the dynamics, are general enough for this broad class of systems, and are easily computable. Various approaches to nonlinear control range from steering methods using sinusoid controls\cite{MurraySin}, sequential actions of Lie bracket sequences\cite{murray1994book} and backstepping\cite{kokotovic1992joy, seto1994control} to perturbation methods\cite{junkins1986asymptotic}, sliding mode control (SMC)\cite{perruquetti2002sliding,utkin2013sliding, xu2008sliding}, intelligent\cite{brown1997intelligent, harris1993intelligent} or hybrid\cite{fierro1999hybrid} control and nonlinear model predictive control (NMPC) methods\cite{allgower2004nonlinear}. These schemes have been successful on well-studied examples including, but not limited to, the rolling disk, the kinematic car, wheeled mobile robots, the Snakeboard, surface vessels, quadrotors, and cranes \cite{bullo2000controllability, nonholonomiccrane,escareno2012trajectory,reyhanoglu1996nonlinear,fang2003nonlinear,toussaint2000tracking,bouadi2007sliding,bouadi2007modelling,chen2013adaptive, nakazono2008vibration, shammas2012analytic, morbidi2007sliding, roy2007closed, becker2010motion, kolmanovsky1995developments,boskovic1999intelligent}. The aforementioned methods are, however, not ideal for dealing with underactuated controllable systems. In the case of perturbations, the applied controls assume a future of control decisions that did not take the disturbance history into account; backstepping is generally ineffective in the presence of control limits, and NMPC methods are typically computationally expensive. SMC methods suffer from chattering, which results in high energy consumption and instability risks by virtue of exciting unmodeled high-frequency dynamics \cite{khalil1996noninear}; intelligent control methods are subject to data uncertainties\cite{el2014intelligent}, while other methods are often case-specific and will not hold for the level of generality encountered in robotics. We address this limitation by using needle variations to compute feedback laws for general nonlinear systems affine in control, discussed next. \subsection{Needle Variations Advantages to Optimal Control} In this paper, we investigate using needle variation methods to find optimal control for nonlinear controllable systems. Needle variations consider the sensitivity of the cost function to infinitesimal application of controls and synthesize actions that reduce the objective\cite{aseev2014needle,shaikh2007hybrid}. Such control synthesis methods have the advantage of efficiency in terms of computational effort, making them appropriate for online feedback (similar to other model predictive control methods, such as iLQG\cite{todorov2005generalized}, but with the advantage---as shown here---of having provable formal properties over the entire state space). For time-evolving objectives, as in the case of trajectory tracking tasks, controls calculated from other methods (such as sinusoids or Lie brackets for nonholonomic integrators) may be rendered ineffective as the target continuously moves to different states.
In such cases, needle variation controls have the advantage of computing actions that directly reduce the cost, without depending on future control decisions. However, needle variation methods, to the best of our knowledge, have not yet considered higher than first-order sensitivities of the cost function. We demonstrate analytically in Section II that, by considering second-order needle variations, we obtain variations that explicitly depend on the Lie brackets between vector fields and, as a consequence, on the higher-order nonlinearities in the system. Later, in Section III, we show that, for classically studied systems, such as the differential drive cart, this amounts to being able to guarantee that the control approach provides descent at \emph{every} state, despite the conditions of Brockett's theorem \cite{brockett1983asymptotic} on the nonexistence of smooth feedback laws for such systems. \subsection{Paper Contribution and Structure} This paper derives the second-order sensitivity of the cost function with respect to the infinitesimal duration of inserted control, which we will refer to interchangeably as the second-order mode insertion gradient or mode insertion Hessian (MIH). We relate the MIH expression to controllability analysis by revealing its underlying Lie bracket structure and present a method of using second-order needle variation actions to expand the set of states for which individual actions that guarantee descent of an objective function can be computed. Finally, we compute an analytical solution of controls that uses the first two orders of needle variations. Due to length constraints, some proofs are abbreviated, which allows us to include examples demonstrating the method. The content is structured as follows. In Section II, we prove that second-order needle variations guarantee control solutions for systems that are nonlinearly controllable using first-order Lie brackets. In Section III, we present an analytical control synthesis method that uses second-order needle actions. In Section IV, we implement the proposed synthesis method and present simulation results on a controllable, underactuated model of a 2D differential drive vehicle, a 3D controllable, underactuated kinematic rigid body, and a 3D underactuated dynamic model of an underwater vehicle. \section{Needle Variation Controls based on Non-Linear Controllability} In this section, we relate the controllability of systems to first- and second-order needle variation actions. After presenting the MIH expression, we reveal its dependence on Lie bracket terms between vector fields. Using this connection, we tie the descent property of needle variation actions to the controllability of a system and prove that second-order needle variation controls can produce control solutions for a wider subset of the state space than first-order needle variation methods. As a result, we are able to constructively compute, via an analytic solution, control formulas that are guaranteed to provide descent, provided that the system is controllable with first-order Lie brackets. Generalization to higher-order Lie brackets appears to have the same structure, but that analysis is postponed to future work.
\subsection{Second-Order Mode Insertion Gradient} Consider a system with state $x : \mathbb{R} \mapsto \mathbb{R}^{N \times 1} $ and control $u : \mathbb{R} \mapsto \mathbb{R}^{M \times 1} $ with control-affine dynamics of the form \begin{align}\label{dynamics} f(t,x(t), u(t)) = g(t, x(t)) + h(t, x(t)) u(t), \end{align} where $g(t,x(t))$ is the drift vector field. Further consider a time period $[t_0, t_f]$ and control modes described by \begin{align}\label{Dynamics} \dot{x}(t) = \begin{cases} f_1 (x(t), v), & t_0\leq t < \tau - \frac{\lambda}{2} \\ f_2 (x(t), u), & \tau - \frac{\lambda}{2}\leq t < \tau + \frac{\lambda}{2} \\ f_1 (x(t), v), & \tau + \frac{\lambda}{2} \leq t \le t_f , \end{cases} \end{align} where $f_1$ and $f_2$ are the dynamics associated with \textit{default} and \textit{inserted} control $v$ and $u$, respectively. Parameters $\lambda$ and $\tau$ are the duration of the inserted dynamics $f_2$ and the switching time between the two modes. Dynamics of the form \eqref{Dynamics} are typically used in optimal control of hybrid systems to optimize the time scheduling of a-priori known modes\cite{egerstedt2006transition}. Here, we use such dynamics to obtain a new control mode $u$ that will optimally perturb the trajectory of any type of system with a needle action\cite{SAC}. Given a cost function $J$ of the form \begin{equation}\label{cost} J(x(t)) = \int_{t_o}^{t_f} l_1(x(t)) \mathrm{d}t + m(x(t_f)), \end{equation} where $l_1(x(t))$ is the running cost and $m(x(t_f))$ is the terminal cost, the mode insertion gradient (MIG) is \medmuskip = 0.5mu \begin{align}\label{MIG} \frac{dJ}{d\lambda_+} = \rho^T (f_2 - f_1). \end{align} For brevity, the dependencies of variables are dropped. Although not presented here because of the length of the derivation and its similarity to \cite{caldwell2011switching}, a similar analysis shows that, for dynamics that do not directly depend on the control duration, the mode insertion Hessian (MIH) is given by \medmuskip = 0.5mu \begin{align}\label{MIH} \frac{d^2J}{d\lambda_+^2} =& (f_2 - f_1)^T\Omega(f_2-f_1) + \rho^T(D_xf_2 \cdot f_2 + D_xf_1\cdot f_1 \notag\\ &-2 D_xf_1\cdot f_2) - D_x l_1 \cdot (f_2 - f_1), \end{align} where $\rho : \mathbb{R} \mapsto \mathbb{R}^{N \times 1}$ and $\Omega : \mathbb{R} \mapsto \mathbb{R}^{N \times N}$ are the first- and second-order adjoint states (costates). These quantities are calculated from the default trajectory and are given by \begin{align*} \dot{\rho} &= -{D_xl_1}^T - D_xf_1^T\rho \\ \dot{\Omega} &= -{D_xf_1}^T\Omega - \Omega D_xf_1 - D_x^2l_1 - \sum_{i=1}^N \rho_i D_x^2f_1^i , \end{align*} subject to $\rho(t_f) = D_x m(x(t_f))^T$ and $\Omega(t_f) = D_x^2 m(x(t_f))^T$. The superscript $i$ in the dynamics $f_1$ refers to the $i^{th}$ element of the vector and is used to avoid confusion with the default and inserted dynamics $f_1$ and $f_2$, respectively. \subsection{Dependence of Second Order Needle Variations on Lie Bracket Structure} The Lie bracket of two vector fields $f(x)$ and $g(x)$ is \begin{align*} [f, g](x) = \frac{\partial g}{\partial x} f(x) - \frac{\partial f}{\partial x}g(x), \end{align*} which generates a control vector that points in the direction of the net infinitesimal change in state $x$ created by the infinitesimal noncommutative flow $\phi_\epsilon^f\,\circ\,\phi_\epsilon^g\, \circ\,\phi_\epsilon^{-f}\,\circ\,\phi_\epsilon^{-g}\,\circ\,x_0$, where $\phi_\epsilon^f$ is the flow along a vector field $f$ for time $\epsilon$\cite{murray1994book, jakubczyk2001introduction}.
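This computation is easy to carry out symbolically. The following minimal sketch (Python with \texttt{sympy}; the unicycle-style vector fields are an assumption chosen purely for illustration) implements the definition directly:
\begin{verbatim}
import sympy as sp

def lie_bracket(f, g, x):
    # [f, g](x) = Dg(x) f(x) - Df(x) g(x), per the definition above.
    return g.jacobian(x) * f - f.jacobian(x) * g

x, y, th = sp.symbols('x y theta')
q = sp.Matrix([x, y, th])
h1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])  # drive forward
h2 = sp.Matrix([0, 0, 1])                    # turn in place
print(sp.simplify(lie_bracket(h1, h2, q).T))
# [sin(theta), -cos(theta), 0]: a sideways direction that neither
# vector field provides on its own.
\end{verbatim}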
Lie brackets are most commonly used for their connection to controllability\cite{rashevsky1938connecting,Chow1940}, but here they will show up in the expression describing the second-order needle variation. We relate second-order needle variation actions to Lie brackets in order to provide controls that are conditional on the nonlinear controllability of a system. Let $h_i : \mathbb{R} \mapsto \mathbb{R}^{N \times 1}$ denote the column control vectors that compose $h : \mathbb{R} \mapsto \mathbb{R}^{N \times M}$ in \eqref{dynamics} and $u_i \in \mathbb{R}$ be the individual control inputs. Then, we can express the dynamics as \begin{align*} f = g + \sum_{i=1}^M h_iu_i \end{align*} and, for default control $v=0$, we can re-write the MIH as \medmuskip = 0.3mu \begin{align*} \frac{d^2J}{d\lambda_+^2}=&\big(\sum_{i=1}^M h_iu_i\big)^T\,\Omega\sum_{j=1}^Mh_ju_j + \rho^T\Big(\sum_{i=1}^M\big[(D_xh_i)\,g \\ &-(D_xg)\,h_i\big]u_i+\sum_{i=1}^MD_xh_iu_i\sum_{j=1}^Mh_ju_j\Big)-D_xl_1\sum_{i=1}^Mh_iu_i. \end{align*} Splitting the sum expression into diagonal ($i=j$) and off-diagonal ($i\ne j$) elements, and adding and subtracting $2\sum_{i=2}^M\sum_{j=1}^{i-1}(D_xh_iu_i)(h_ju_j)$, we can write \begin{align*} \sum_{i=1}^MD_xh_iu_i\sum_{j=1}^Mh_ju_j =& \sum_{i=2}^M\sum_{j=1}^{i-1}[h_i,h_j]u_iu_j\\& + 2\sum_{i=2}^M\sum_{j=1}^{i-1}(D_xh_iu_i)(h_ju_j) \\ &+ \sum_{i=j=1}^M(D_xh_iu_i)(h_iu_i). \end{align*} Then, we can express the MIH as \begin{align*} \frac{d^2J}{d\lambda_+^2} =& \sum_{i=1}^M \sum_{j=1}^M u_i u_j h_i^T \Omega h_j + \rho^T\Big(\sum_{i=2}^M \sum_{j=1}^{i-1} [h_i,h_j] u_i u_j \notag \\ &+ 2 \sum_{i=2}^M\sum_{j=1}^{i-1} (D_x h_i)h_j u_iu_j + \sum_{i=1}^{M}(D_x h_i)h_iu_iu_i \notag \\ &+\sum_{i=1}^M [g, h_i] u_i\Big) - D_x l_1\big(\sum_{i=1}^M h_iu_i\big). \end{align*} The expression contains Lie bracket terms of the control vectors that appear in the system dynamics, indicating that second-order needle variations consider higher-order nonlinearities. By associating the MIH with Lie brackets, we next prove that second-order needle variation actions can guarantee decrease of the objective for certain types of controllable systems. \subsection{Existence of Control Solutions with First- and Second-Order Mode Insertion Gradients} In this section, we prove that the first two orders of the mode insertion gradient can be used to guarantee controls that reduce objectives of the form \eqref{cost} for systems that are controllable with first-order Lie brackets. The analysis is applicable to optimization problems that satisfy the following assumptions. \begin{assumption}\label{as:1} The vector elements of dynamics $f_1$ and $f_2$ are real, bounded, $\mathcal{C}^2$ in $x$, and $\mathcal{C}^0$ in $u$ and $t$. \end{assumption} \begin{assumption}\label{as:2} The incremental cost $l_1(x)$ is real, bounded, and $\mathcal{C}^2$ in $x$. The terminal cost $m(x(t_f))$ is real and twice differentiable with respect to $x(t_f)$. \end{assumption} \begin{assumption}\label{as:3} Default and inserted controls $v$ and $u$ are real, bounded, and $\mathcal{C}^0$ in $t$. \end{assumption} Under Assumptions \ref{as:1}-\ref{as:3}, the MIG and MIH expressions are well-defined. Then, as we show next, there are control actions that can improve any objective that is not a local optimizer. \begin{definition} A local optimizer of the cost function \eqref{cost} is given by a set $(x^*, u^*)$ if and only if the set describes a trajectory that corresponds to an objective function $J(x^*(t))$ for which $D_xJ(x^*(t))= 0$.
\end{definition} \begin{prop}\label{nonzerorho} Consider a set $(x, v)$ that describes the state and default control of \eqref{Dynamics}. If $(x, v) \ne (x^*, v^*)$, then the first-order adjoint $\rho$ is a non-zero vector. \end{prop} \begin{proof} Using \eqref{cost}, \begin{align*} x \ne x^* &\Rightarrow D_xJ(x(t)) \ne 0 \\ &\Rightarrow \int_{t_0}^{t_f} D_xl_1(x(t)) \mathrm{d}t + D_xm(x(t_f)) \ne 0 \\ &\Rightarrow \int_{t_0}^{t_f} D_xl_1(x(t)) \mathrm{d}t \ne 0~\text{or}~D_xm(x(t_f)) \ne 0 \\ &\Rightarrow D_xl_1(x(t)) \ne 0~\text{or}~D_xm(x(t_f)) \ne 0\\ &\Rightarrow \dot{\rho} \ne 0~\text{or}~\rho(t_f) \ne 0. \end{align*} Therefore, if $x \ne x^*$, then $\exists~ t\in[t_0,t_f]$ such that $\rho \ne 0$. \end{proof} \begin{prop}\label{AdjVec} Consider dynamics given by \eqref{Dynamics} and a pair of state and control $(x, v) \ne (x^*, v^*)$ such that $\frac{dJ}{d\lambda_+} = 0 ~ \forall ~ u~\in \mathbb{R}^M$ and $\forall ~ t \in [t_o, t_f]$. Then, the first-order adjoint $\rho$ is orthogonal to all control vectors $h_i$. \end{prop} \begin{proof} A linear combination $\sum_i k_i w_i$ vanishes for all weights $w_i$ if and only if every coefficient $k_i$ is zero. Given that, rewrite \eqref{MIG} as \begin{align*} \frac{d J}{d \lambda_+}=0 &\Rightarrow \rho^T \sum_{i=1}^M h_i (u_i - v_i) = 0 \\ &\Rightarrow \sum_{i=1}^M k_i w_i = 0 ~ \forall ~ w_i, \end{align*} where $w_i = (u_i - v_i)$ and $k_i = \rho^Th_i \in \mathbb{R}$. The combination vanishes for every choice of $w_i$, so each $k_i$ must be zero. By Proposition \ref{nonzerorho}, $\rho \ne 0$ for a non-optimizer pair of state and control and, as a result, $\rho^T h_i~=~0~\forall~i\in[1,M]$. \end{proof} \begin{prop}\label{AdjLie} Consider dynamics given by \eqref{Dynamics} and a pair of state and control $(x, v) \ne (x^*, v^*)$ such that $\frac{dJ}{d\lambda_+} = 0 ~ \forall ~ u \in \mathbb{R}^M$ and $\forall ~ t \in [t_o, t_f]$. Further assume that the control vectors $h_i$ and their Lie bracket terms $[h_i, h_j]$ span the state space $\mathbb{R}^N$. Then, there exist $i$ and $j$ such that $\rho^T [h_i, h_j] \ne 0$. \end{prop} \begin{proof} The control vectors and their Lie brackets span $\mathbb{R}^N$; it follows that any $N$-dimensional vector can be expressed as a linear combination of the control vectors and their Lie brackets. The first-order adjoint is an $N$-dimensional vector, which is non-zero for a non-optimizer pair $(x,v)$ by Proposition \ref{nonzerorho}. Therefore, it can be expressed as \begin{equation}\label{RHO} \rho = c_1 h_1 + \dots + c_M h_M + \sum_{i\ne j}^M c_{ij}[h_i, h_j] \ne 0. \end{equation} Given that $\frac{dJ}{d\lambda_+}=0$, and by Proposition \ref{AdjVec}, $\rho$ is orthogonal to all control vectors $h_i$ (which also implies that the control vectors $h_i$ alone do not span $\mathbb{R}^N$). Then, left-multiplying \eqref{RHO} by $\rho^T$ yields \begin{align*} \rho^T \rho = \sum_{i\ne j}^M c_{ij}\,\rho^T[h_i, h_j] \ne 0. \end{align*} It follows that there is at least one Lie bracket term $[h_i, h_j]$ that is not orthogonal to the costate $\rho$. \end{proof} \begin{prop}\label{NegGrad} Consider dynamics given by \eqref{Dynamics} and a trajectory described by state and control $(x, v)$.
If $(x, v) \ne (x^*, v^*)$, then there are always control solutions $u \in \mathbb{R}^M$ such that $\frac{dJ}{d\lambda_+} \le 0$ for some $t \in [t_o, t_f].$ \end{prop} \begin{proof} Using dynamics of the form in \eqref{dynamics}, the expression of the mode insertion gradient can be written as \begin{align*} \frac{dJ}{d\lambda_+} = \rho^T(f_2 - f_1) = \rho^T\big(h(u-v)\big). \end{align*} By Proposition \ref{nonzerorho}, $\rho \ne 0$ for a non-optimizer trajectory. Given controls $u$ and $v$ that generate a positive mode insertion gradient, there always exists a control $u'$ such that the mode insertion gradient is negative, i.e.\ $u'-v = - (u-v)$. The mode insertion gradient is zero for all $u\in\mathbb{R}^M$ if and only if the costate vector is orthogonal to each control vector $h_i$\footnote{If the control vectors span the state space $\mathbb{R}^N$, the costate vector $\rho \in \mathbb{R}^N$ cannot be orthogonal to each of them. Therefore, for first-order controllable (fully actuated) systems, there always exist controls for which the cost can be reduced to first order.}. \end{proof} First-order needle variation methods are singular when the mode insertion gradient is zero. When that is true, the second-order mode insertion gradient is guaranteed to be negative for systems that are controllable with first-order Lie brackets, which in turn implies that a control solution can be found with second-order needle variation methods. \begin{prop}\label{nonkin} Consider dynamics given by \eqref{Dynamics} and a trajectory described by state and control $(x, v) \ne (x^*, v^*)$ such that $\frac{dJ}{d\lambda_+} = 0$ for all $u \in \mathbb{R}^M$ and $t \in [t_o, t_f]$. If the control vectors $h_i$ and the Lie brackets $[h_i, h_j]$ and $[g, h_i]$ span the state space ($\mathbb{R}^N$), then there always exist control solutions $u\in \mathbb{R}^M$ such that $\frac{d^2J}{d\lambda_+^2} < 0$. \end{prop} \begin{proof} Let $k \in [1, M]$ be an index chosen such that $[h_i, h_k]$, for some $i\in[1,M]\setminus \{k\}$\footnote{The notation $\setminus$ indicates that the element $k$ is subtracted from the set $[1,M]$.}, is a vector that is linearly independent of all control vectors $h_i~\forall~i\in[1,M]$. The proof then considers controls such that $u_i~=~v_i~\forall~ i\ne k$ and $v_k = 0$, and expresses the MIH \eqref{MIH} as \begin{align*} \frac{d^2J}{d\lambda_+^2} = u^T \mathcal{G} u - u_k\big((D_xl_1) h_k - \rho^T[g, h_k]\big), \end{align*} where $\mathcal{G}_{ij} = 0~\forall~i, j \in[1,M]\setminus \{k\}$, $\mathcal{G}_{ik} = \mathcal{G}_{ki} = \frac{1}{2}\rho^T[h_i, h_k]$, and $\mathcal{G}_{kk} = h_k^T\Omega h_k + \rho^TD_xh_k\cdot h_k$. The matrix $\mathcal{G}$ is shown to be either indefinite or negative semidefinite if there exists a Lie bracket term $[h_i, h_k]$ such that $\rho^T [h_i, h_k] \ne 0$. If, on the other hand, $\rho^T [h_i, h_k] = 0$ for all $i$, then by reasoning similar to Proposition \ref{AdjLie} there is at least one Lie bracket term $[g, h_k]$ with $\rho^T[g, h_k] \ne 0$, and the MIH expression reduces to a quadratic in $u_k$. In either case, it then becomes straightforward to show that there exist controls for which the MIH expression is negative. \end{proof} \begin{theorem}\label{Theorem} Consider dynamics given by \eqref{Dynamics} and a trajectory described by state and control $(x, v) \ne (x^*, v^*)$.
If the control vectors $h_i$ and the Lie brackets $[h_i, h_j]$ and $[g, h_i]$ span the state space $(\mathbb{R}^N)$, then there always exists a control vector $u \in \mathbb{R}^M$ and a duration $\lambda$ such that the cost function \eqref{cost} can be reduced. \end{theorem} \begin{proof} The local change of the cost function \eqref{cost} due to inserted control $u$ of duration $\lambda$ can be approximated with a Taylor series expansion \begin{align*} J(\lambda) - J(0) \approx \lambda \frac{dJ}{d\lambda_+} + \frac{\lambda^2}{2} \frac{d^2J}{d\lambda_+^2}. \end{align*} By Propositions \ref{NegGrad} and \ref{nonkin}, either 1) $\frac{dJ}{d\lambda_+} <0$ or 2) $\frac{dJ}{d\lambda_+} = 0$ and $\frac{d^2J}{d\lambda_+^2}<0$. Therefore, there always exist controls that reduce the cost function \eqref{cost} to first or second order. \end{proof} \section{Control Synthesis} In this section, we present an analytical solution of first- and second-order needle variation controls that reduce the cost function \eqref{cost} to second order. We then describe the algorithmic steps of the feedback scheme used in the simulation results of this paper. \subsection{Analytical Solution for Second Order Actions} For underactuated systems, there are states at which $\rho$ is orthogonal to the control vectors $h_i$ (see Proposition \ref{NegGrad}). At these states, control calculations based only on first-order sensitivities fail, while controls based on second-order information still decrease the objective, provided that the control vectors and their Lie brackets span the state space (see Theorem \ref{Theorem}). We use this property to compute an analytical synthesis method that expands the set of states for which individual actions that guarantee descent of an objective function can be computed. Given the expressions of the first- and second-order mode insertion gradients, we can write the cost function \eqref{cost} as a Taylor series expansion around the infinitesimal duration $\lambda$ of inserted control $u$: \begin{align} J(\lambda) & \approx J(0) + \lambda \frac{dJ}{d\lambda_+} + \frac{\lambda^2}{2} \frac{d^2J}{d\lambda_+^2} \notag. \end{align} The first- and second-order mode insertion gradients used in the expression are functions of the inserted control $u(t)$ in \eqref{Dynamics}. For a fixed $\lambda$, we can minimize this expansion using Newton's method to update the control actions. Control solutions that minimize the Taylor expansion of the cost will have the form \begin{align}\label{Taylor} u^{\>*}(t)=& \underset{u}{\operatorname{argmin}} ~J(0) + \lambda \frac{dJ}{d\lambda_+} + \frac{\lambda^2}{2}\frac{d^2J}{d\lambda_+^2} +\frac{1}{2} \lVert u \rVert^2_R, \end{align} where the MIH has both linear and quadratic terms in $u(t)$. The time dependence of the control $u$ is purposefully used here to emphasize that control solutions are functions of time $t$.
Using the G\^ateaux derivative, we compute the minimizer of \eqref{Taylor} to be \begin{align}\label{optcon} u^{\>*}(t)=&[\frac{\lambda^2}{2}\,\Gamma + R] ^{-1} \, [\frac{\lambda^2}{2}\,\Delta + \lambda (-h^T\rho)], \end{align} where $\Delta: \mathbb{R} \mapsto \mathbb{R}^{M\times1}$ and $\Gamma: \mathbb{R}\mapsto \mathbb{R}^{M\times M}$ are respectively the first- and second-order derivatives of $d^2J/d\lambda_+^2$ with respect to the control $u$ and are given by \begin{align*} \Delta\triangleq& \Big[\big[h^T \big(\Omega^T + \Omega\big)h + 2 h^T \cdot(\sum_{k=1}^{N} (D_xh_k)\rho_{k})^T\big]v \notag\\ & + {(D_xg \cdot{h})}^{T} \rho - (\sum_{k=1}^{N} (D_xh_k)\rho_{k}) \cdot g + h^T D_xl_1^T\Big]\\ \Gamma \triangleq& [h^T \big(\Omega^T + \Omega\big)h + h^T \cdot (\sum_{k=1}^{N} (D_xh_k)\rho_{k})^T+ \sum_{k=1}^{N} (D_xh_k)\rho_{k}\cdot h]^T. \end{align*} The parameter $R$ denotes a metric on control effort. The existence of control solutions in \eqref{optcon} depends on the inversion of the Hessian $H = \frac{\lambda^2}{2}\,\Gamma + R$. To ensure $H$ is positive definite, we implement a spectral decomposition on the Hessian $H~=~VDV^{-1}$, where matrices $V$ and $D$ contain the eigenvectors and eigenvalues of $H$, respectively. We replace all elements of the diagonal matrix $D$ that are smaller than $\epsilon$ with $\epsilon$ to obtain $\bar{D}$, and replace $H$ with $\bar{H} = V\bar{D}V^{-1}$ in \eqref{optcon}. We prefer the spectral decomposition approach to the Levenberg-Marquardt method ($\bar{H} = H + \kappa I \succ 0$), because the latter affects all eigenvalues of the Hessian and further distorts the second-order information. At saddle points, we set the control equal to the eigenvector of $H$ that corresponds to the most negative eigenvalue, in order to descend along the direction of most negative curvature\cite{murray2010newton,schnabel1990new, boyd2004convex, nocedal2006sequential}. This synthesis technique provides controls at time $t$ that are guaranteed to reduce the cost function \eqref{cost} for systems that are controllable using first-order Lie brackets. Control solutions are computed solely by forward simulating the state over a time horizon $T$ and backward simulating the first- and second-order costates $\rho$ and $\Omega$. As we see next, this leads to a very natural, and easily implementable, algorithm for applying cost-based feedback. \subsection{Algorithmic Description of Control Synthesis Method} The proposed second-order analytical controls presented in \eqref{optcon} are implemented in a series of steps shown in Algorithm \ref{algorithm}. \begin{algorithm} \begin{enumerate}[{1.}] \item Simulate states and costates with default dynamics $f_1$ over a time horizon $T$ \item Compute optimal needle variation controls \item Saturate controls \item Find the insertion time that corresponds to the most negative mode insertion gradient \item Use a line search to find a control duration that ensures reduction of the cost function \eqref{cost} \end{enumerate} \caption{} \label{algorithm} \end{algorithm} We compare first- and second-order needle variation actions by implementing different controls in Step 2 of Algorithm \ref{algorithm}.
For the first-order case, we implement controls that are the solution to a minimization problem of the first-order sensitivity of the cost function \eqref{cost} and the control effort \begin{align}\label{optimalu} u^*(t) &= \underset{u}{\operatorname{argmin}} ~~ \frac{1}{2} (\frac{dJ}{d\lambda_+}-\alpha_d)^2+\frac{1}{2} \lVert u \rVert^2_R \notag\\ &= (\Lambda + R^T)^{-1}(\Lambda v + h^T\rho \alpha_d), \end{align} where $\Lambda \triangleq h^T\rho\rho^Th$ and $\alpha_d \in \mathbb{R}^- $ expresses the desired value of the mode insertion gradient term (see, for example, \cite{mamakoukas2016}). Typically, $\alpha_d = \gamma J_o$, where $J_o$ is the cost function \eqref{cost} computed using the default dynamics $f_1$. For second-order needle variation actions, we compute controls using \eqref{optcon}. \subsection{Comparison to Alternative Optimization Approaches} Algorithm \ref{algorithm} differs from controllers that compute control sequences over the entire time horizon in order to locally minimize the cost function. Rather, the proposed scheme utilizes the time-evolving sensitivity of the objective to infinitesimal switched dynamics and searches in a one-dimensional space for a finite duration of a single action that will optimally improve the cost. It does so using a closed-form expression and, as a result, it avoids the expensive iterative computational search in high-dimensional spaces, while it may still get closer to the optimizer with one iterate. \\\indent First-order needle variation solutions \eqref{optimalu} exist globally, demonstrate a larger region of attraction, and have a less complicated representation on Lie groups\cite{taosha}. These traits naturally transfer to the second-order needle controls \eqref{optcon}, which also contain the first-order information present in \eqref{optimalu}. In addition, as this paper demonstrates, the suggested second-order needle variation controller has formal guarantees of descent for systems that are controllable with first-order Lie brackets, which---to the best of our knowledge---is not provided by any alternative method. Given these benefits, the authors propose second-order needle variation actions as a complement to existing approaches for time-sensitive robotic applications that may be subject to large initial error, Euler angle singularities, or fast-evolving (and uncertain) objectives. Next, we implement Algorithm \ref{algorithm} using first- or second-order needle variation controls (shown in \eqref{optimalu} and \eqref{optcon}, respectively) to compare them in terms of convergence success on various underactuated systems. \section{Simulation Results} The proposed synthesis method is implemented on three underactuated examples: the differential drive cart, a 3D kinematic rigid body, and a dynamic model of an underwater vehicle. The kinematic systems of a 2D differential drive and a 3D rigid body are controllable using first-order Lie brackets of the vector fields and help verify Theorem \ref{Theorem}. The underactuated dynamic model of a 3D rigid body serves to compare the controls in \eqref{optcon} and \eqref{optimalu} in a more sophisticated environment.
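Before turning to the examples, the following minimal sketch illustrates the feedback loop of Algorithm \ref{algorithm} for the differential drive model introduced in the next subsection (Python with \texttt{numpy}; this is only an assumed, simplified instance: with $v=0$, driftless kinematics, and no terminal cost, the forward simulation of Step 1 is trivial and the costate has the closed form $\rho(t)=(T-t)\,Q\,(s-s_d)$; the line search of Step 5 is omitted and all parameter values are illustrative):
\begin{verbatim}
import numpy as np

r, L = 3.6, 25.8
Q = np.diag([10.0, 10.0, 1000.0])
R = np.diag([100.0, 100.0])

def h(s):  # control vectors of the differential drive
    c, si = np.cos(s[2]), np.sin(s[2])
    return r * np.array([[c, c], [si, si], [1 / L, -1 / L]])

def control(s, s_d, T=0.5, gamma=-15.0, steps=50):
    # Steps 1-2: with v = 0 the predicted state is constant over the
    # horizon, so rho(t) = (T - t) Q (s - s_d); evaluate the first-order
    # action of Eq. (optimalu) over candidate insertion times and keep
    # the one with the most negative mode insertion gradient.
    best_u, best_dJdl = np.zeros(2), 0.0
    J0 = 0.5 * T * (s - s_d) @ Q @ (s - s_d)
    for t in np.linspace(0.0, T, steps):
        rho = (T - t) * Q @ (s - s_d)
        H = h(s)
        Lam = H.T @ rho[:, None] @ rho[None, :] @ H
        u = np.linalg.solve(Lam + R.T, H.T @ rho * gamma * J0)
        u = np.clip(u, -150 / 36, 150 / 36)  # Step 3: saturate
        dJdl = rho @ (H @ u)                 # mode insertion gradient
        if dJdl < best_dJdl:
            best_u, best_dJdl = u, dJdl      # Step 4
    return best_u

print(control(np.zeros(3), np.array([1000.0, 1000.0, 0.0])))
\end{verbatim}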
In all simulation results, we start with default control $v = 0$ and an objective function of the form \begin{align*} J(x(t)) = \frac{1}{2}\int_{t_o}^{t_f} \lVert \vec{x}(t)-\vec{x}_d (t) \rVert^2_Q dt+\frac{1}{2}\lVert \vec{x}(t_f)-\vec{x}_d(t_f)\rVert^2_{P_1}, \end{align*} where $\vec{x}_d$ is the desired state trajectory, and $Q=Q^T \geq 0$, $P_1=P_1^T \geq 0$ are metrics on state error. \subsection{2D Kinematic Differential Drive} The differential drive system demonstrates that controls shown in \eqref{optimalu}, which are based only on the first-order sensitivity of the cost function \eqref{cost}, can be insufficient for controllable systems, contrary to controls shown in \eqref{optcon}, which guarantee decrease of the objective for systems that are controllable using first-order Lie brackets (see Theorem \ref{Theorem}). The system states are its coordinates and orientation, given by $s = [x, y, \theta]^T$, with kinematic ($g=0$) dynamics \begin{align*} f = r\begin{bmatrix} \cos\theta & \cos\theta \\ \sin\theta & \sin\theta \\ \frac{1}{L} & -\frac{1}{L}\end{bmatrix} \begin{bmatrix}u_R \\ u_L \end{bmatrix}, \end{align*} where $r = 3.6$~cm, $L = 25.8$~cm denote the wheel radius and the distance between the wheels, and $u_R$, $u_L$ are the right and left wheel control angular velocities, respectively (these parameter values match the specifications of the iRobot Roomba). The control vectors $h_1$, $h_2$ and their Lie bracket term $[h_1, h_2] = 2\frac{r^2}{L}\big[-\sin\theta,\,\cos\theta,\,0\big]^T$ span the state space ($\mathbb{R}^3$). Therefore, from Theorem \ref{Theorem}, there always exist controls that reduce the cost to first or second order. Fig.~\ref{Differential Drive} demonstrates how first- and second-order needle variation actions perform in reaching a nearby target. Actions based on first-order needle variations \eqref{optimalu} do not generate solutions that turn the vehicle, but rather drive it straight until the orthogonal displacement between the system and the target location is minimized. Actions based on second-order needle variations \eqref{optcon}, on the other hand, converge successfully. \begin{figure}[] \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics[width=\linewidth,height = 0.15\textheight]{Roomba_Diagonal_1st_states} \caption{} \vspace{4ex} \end{subfigure}\hfill% \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics[width=\linewidth,height = 0.15\textheight]{Roomba_Diagonal_1st} \caption{} \label{fig7:b} \vspace{4ex} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics[width=\linewidth, height = 0.15\textheight]{Roomba_Diagonal_2nd_states} \caption{} \label{fig7:c} \end{subfigure}\hfill% \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics[width=\linewidth,height = 0.15\textheight]{Roomba_Diagonal_2nd.pdf} \caption{} \label{fig7:d} \end{subfigure} \caption{Differential drive using first- (top) and second-order (bottom) needle variation actions. Snapshots of the system are shown at $t = 0, 2.5, 5, 7.5, 10$, and $12.5$~sec. The target state is $[x_d, y_d, \theta_d] = [1000$~mm$,1000$~mm$,0]$.} \label{Differential Drive} \end{figure} We also present a Monte Carlo simulation that compares convergence success using the first- and second-order needle variation controls shown in \eqref{optimalu} and \eqref{optcon}, respectively.
We sampled over initial coordinates $x_0, y_0 \in [-1500, 1500]$~mm using a uniform distribution, keeping only samples for which the initial distance from the origin exceeded $L/5$; $\theta_0 = 0$ for all samples. Successful samples were within $L/5$ of the origin with an angle $\theta <\pi/12$ within 60 seconds, using a feedback sampling rate of 4~Hz. Results were generated using $Q = \text{diag}(10,10,1000)$, $P_1 = \text{diag}(0,0,0)$, $T = 0.5$~s, $R = \text{diag}(100,100)$ for \eqref{optimalu}, $R = \text{diag}(0.1,0.1)$ for \eqref{optcon}, $\gamma = -15$, $\lambda = 0.1$, and saturation limits on the angular velocities of each wheel $\pm$150/36~mm/s\,\footnote{The metric on control effort is necessarily smaller for \eqref{optcon}, due to parameter $\lambda$. The parameter was chosen carefully to ensure that control solutions from \eqref{optcon} and \eqref{optimalu} were comparable in magnitude.}. As shown in Fig.~\ref{DifDriveMC_2nd}, the system always converges to the target using second-order needle variation actions, matching the theory. \begin{figure}[] \centering \includegraphics[width=0.5\linewidth, height = 0.15\textheight]{DifDrive_Convergence_2nd.pdf} \caption{Convergence success rates of first- \eqref{optimalu} and second-order \eqref{optcon} needle variation controls for the kinematic differential drive model. Simulation runs: 1000.}\label{DifDriveMC_2nd} \end{figure} \subsection{3D Kinematic Rigid Body} The underactuated kinematic rigid body is a three-dimensional example of a system that is controllable with first-order Lie brackets. To avoid singularities in the state space, the orientation of the system is expressed in quaternions\cite{titterton2004strapdown, kuipers1999quaternions}. The states are $s = [x, y, z, q_0, q_1, q_2, q_3]$, where $b = [x, y, z]$ are the world-frame coordinates and $q = [q_0, q_1, q_2, q_3]$ are unit quaternions. Dynamics $f = [\dot{b},\dot{q}]^T$ are given by \begin{gather} \dot{b} = R_qv, \label{dotb}\\ \dot{q} = \frac{1}{2}\begin{bmatrix} -q_1 & -q_2 & -q_3 \\ ~~q_0& -q_3& ~~q_2 \\ ~~q_3& ~~q_0& -q_1\\ -q_2& ~~q_1& ~~q_0 \end{bmatrix}\omega, \label{dotq} \end{gather} where $v$ and $\omega$ are the body-frame linear and angular velocities, respectively\cite{da2015benchmark}. The rotation matrix for quaternions is \medmuskip = 0.5mu \begin{align*} R_q=\begin{bmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2& 2(q_1q_2 - q_0q_3)& 2(q_1q_3+q_0q_2) \\ 2(q_1q_2 + q_0q_3)& q_0^2 - q_1^2+q_2^2 - q_3^2& 2(q_2q_3 - q_0q_1)\\ 2(q_1q_3 - q_0q_2)& 2(q_2q_3 + q_0q_1)& q_0^2-q_1^2 -q_2^2+ q_3^2\end{bmatrix}. \end{align*} The system is kinematic: $v = F$ and $\omega = T$, where $F = (F_1, F_2, F_3)$ and $T = (T_1, T_2, T_3)$ describe respectively the surge, sway, and heave input forces, and the roll, pitch, and yaw input torques. We render the rigid body underactuated by removing the sway and yaw control authorities ($F_2 = T_3 = 0$). The four control vectors span a four-dimensional space. First-order Lie bracket terms add two more dimensions to span the state space ($\mathbb{R}^6$) (the fact that there are seven states in the model of the system is an artifact inherent in the quaternion representation; it does not affect controllability, given that the quaternion is constrained to have unit norm).
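This rank condition can be checked quickly at the level of body twists; a minimal numerical sketch (Python with \texttt{numpy}; we identify body velocities with $4\times4$ twist matrices and vector-field Lie brackets with matrix commutators, up to sign conventions, which do not affect the rank):
\begin{verbatim}
import numpy as np

def hat(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def twist(v, w):
    # 4x4 matrix representation of a body twist (v, w).
    T = np.zeros((4, 4)); T[:3, :3] = hat(w); T[:3, 3] = v
    return T

def vee(T):
    return np.r_[T[:3, 3], T[2, 1], T[0, 2], T[1, 0]]

e = np.eye(3)
surge, heave = twist(e[0], 0 * e[0]), twist(e[2], 0 * e[0])
roll, pitch = twist(0 * e[0], e[0]), twist(0 * e[0], e[1])
brackets = [A @ B - B @ A for A, B in [(heave, roll), (roll, pitch)]]
span = np.array([vee(T) for T in [surge, heave, roll, pitch] + brackets])
print(np.linalg.matrix_rank(span))  # 6: full rank
\end{verbatim}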
\begin{figure}[] \centering \includegraphics[width=0.5\linewidth, height = 0.15\textheight]{Kinematic_Convergence_2ndRed.pdf} \caption{Convergence success rates of second-order needle variation controls \eqref{optcon} for the underactuated kinematic vehicle. First-order actions \eqref{optimalu} do not affect the $y$-coordinate of the rigid body and therefore never converge. Simulation runs: 280.}\label{KinMC_2nd} \end{figure} The vectors $h_1, h_2, [h_2, h_3]$ span the $\mathbb{R}^3$ associated with the world-frame coordinate dynamics $\dot{x}, \dot{y}$, and $\dot{z}$. Similarly, vectors $h_3, h_4$, and $[h_4, h_3]$ also span $\mathbb{R}^3$. Thus, the control vectors and their first-order Lie brackets span the state space and, from Theorem \ref{Theorem}, optimal actions shown in \eqref{optcon} will always reduce the cost function \eqref{cost}. To verify the theory, we present the convergence success of the system in 3D motion (see Fig.~\ref{KinMC_2nd}). Using Monte Carlo sampling with a uniform distribution, initial locations were randomly generated such that $x_0, y_0, z_0 \in [-50, 50]$~cm, keeping only samples for which the initial distance from the origin exceeded 6~cm. We regarded each trial as a convergence success if the rigid body came within 6~cm of the origin, at any orientation, by the end of 60 seconds. Results were generated at a sampling rate of 20~Hz using $Q = 0$, $P_1 = \text{diag}(100,200,100,0,0,0,0)$, $T = 1.0$~s, $\gamma = -50000$, $\lambda = 10^{-3}$, $R~=~10^{-6}\,\text{diag}(1,1,100,100)$ for \eqref{optcon}, and $R = \text{diag}(10,10,1000,1000)$ for controls in \eqref{optimalu}. Controls were saturated at $\pm 10$~cm/s for the linear velocities and $\pm 10$~rad/s for the angular ones. As shown in Fig.~\ref{KinMC_2nd}, and as expected, all locomotion trials were successful. \subsection{Underactuated Dynamic 3D Fish} \begin{figure}[] \centering \includegraphics[width=0.5\linewidth, height = 0.15\textheight]{Convergence_DynFish.pdf} \caption{Convergence success rates of first- and second-order needle variation controls (\eqref{optimalu} and \eqref{optcon}, respectively) for the underactuated \textit{dynamic} vehicle model. Simulation runs: 280.} \label{DynMC} \end{figure} \begin{figure*}% \centering \begin{subfigure}{\columnwidth} \includegraphics[width=0.8\columnwidth,height = 0.18\textheight]{SideWaysSnapshots2}% \caption{}% \label{SidewayMovement}% \end{subfigure}\hfill% \begin{subfigure}{\columnwidth} \includegraphics[width=0.8\columnwidth,height = 0.18\textheight]{FullyActuated_Ydrift2}% \caption{}% \label{TrajTrack_drift}% \end{subfigure}\hfill% \caption{Figure \ref{SidewayMovement} shows snapshots of a parallel displacement maneuver using an underactuated dynamic vehicle model with second-order controls given by \eqref{optcon}; first-order solutions \eqref{optimalu} are singular throughout the simulation. Figure \ref{TrajTrack_drift} shows tracking performance of the same system in the presence of +10~cm/s $\hat{y}$ fluid drift. The yellow system corresponds to first-order needle variation actions; the red one to second order. The target trajectory (red ball) is indicated with white traces over a 10-second simulation.
Animation of these results is available at https://vimeo.com/219628387.} \end{figure*} We represent the three-dimensional rigid body with states $s~=~[b,~q,~v,~\omega]^T$, where $b = [x, y, z] $ are the world-frame coordinates, $q = [q_0,q_1, q_2, q_3]$ are the quaternions that describe the world-frame orientation, and $v = [v_x, v_y, v_z]$ and $\omega = [\omega_x, \omega_y, \omega_z]$ are the body-frame linear and angular velocities. The rigid body dynamics are given by $\dot{b}$ and $\dot{q}$ shown in \eqref{dotb} and \eqref{dotq} and \begin{gather*} M \dot{v} = Mv \times \omega + F, \\ J \dot{\omega} = J\omega \times \omega + T, \end{gather*} where the effective mass and moment of inertia of the rigid body are given by $M~=~\text{diag}(6.04, 17.31, 8.39)$~g and $J~=~\text{diag}(1.57, 27.78, 54.11)$g$\cdot$cm$^2$, respectively. This example is inspired by work in \cite{mamakoukas2016,postlethwaite2009optimal} and the parameters used for the effective mass and moment of inertia of a rigid body correspond to measurements of a fish. The control inputs are constrained by $F_2 = T_3 = 0$ and $F_3\ge0$. The control vectors only span a four-dimensional space and, since they are state-independent, their Lie brackets are zero vectors. However, the Lie brackets containing the drift vector field $g$ (that also appear in the MIH expression) add from one to four (depending on the states) independent vectors such that control solutions in \eqref{optcon} guarantee decrease of the cost function \eqref{cost} for a wider set of states than controls in \eqref{optimalu}. Simulation results based on Monte Carlo sampling are shown in Fig.~\ref{DynMC}. Initial coordinates $x_0, y_0, z_0$ were generated using a uniform distribution in $[-100, 100]$~cm, discarding samples for which the initial distance to the origin was less than 15~cm. Successful trials were the ones for which, within a simulation window of 60 seconds, the system approached within 5~cm of the origin (at any orientation) while the magnitude of its linear velocity was less than 5~cm/s. Results were generated at a sampling rate of 20~Hz using \medmuskip=0mu \thinmuskip=0mu \thickmuskip=0mu$T = 1.5$~s, $P_1 = 0$, $Q~=~\frac{1}{200}\text{diag}(10^3,10^3,10^3,0,0,0,0, 1, 1, 1, 2\cdot10^3,10^3,10^3)$, $\gamma = -5$, $R = \text{diag}(10^3,10^3,10^6,10^6)$ for \eqref{optimalu}, $R = \frac{1}{2}\,\text{diag}(10^{-6},10^{-6}, 10^{-3},10^{-3})$ for \eqref{optcon}, and $\lambda = 10^{-4}$.\medmuskip=4mu \thinmuskip=3mu \thickmuskip=5mu ~The same control saturations ($F_1\in[-1, 1]$\,mN, $F_3\in[0,1]$\,mN, $T_1\in[-0.1, 0.1]$\,$\mu$N$\cdot$m, and $T_2\in[-0.1, 0.1]$\,$\mu$N$\cdot$m) were used for all simulations of the dynamic 3D fish. As shown in Fig. \ref{DynMC}, controls computed using second-order needle variations converge faster than those based on first-order needle variations, and 97\% of trials converge within 60 seconds. Both methods converge over time to the desired location; as the dynamic model of the rigid body tumbles around and its orientation changes, possible descent directions of the cost function \eqref{cost} change and the control is able to push the system to the target. Controls for the first-order needle variation case \eqref{optimalu} are singular for a wider set of states than second-order needle variation controls \eqref{optcon} and, for this reason, they benefit more from tumbling.
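For concreteness, the body-frame dynamics above can be stepped forward as in the following minimal sketch (our own NumPy illustration with hypothetical inputs; the authors report a C++ implementation, and units are suppressed here for brevity).
\begin{verbatim}
# One explicit Euler step of the body-frame dynamics
#   M vdot = Mv x w + F,   J wdot = Jw x w + T,
# with the underactuation F2 = T3 = 0.
import numpy as np

M = np.diag([6.04, 17.31, 8.39])    # effective mass
J = np.diag([1.57, 27.78, 54.11])   # effective moment of inertia

def body_accels(v, w, F, T):
    vdot = np.linalg.solve(M, np.cross(M @ v, w) + F)
    wdot = np.linalg.solve(J, np.cross(J @ w, w) + T)
    return vdot, wdot

v = np.array([1.0, 0.0, 0.0])       # linear body velocity
w = np.array([0.0, 0.5, 0.1])       # angular body velocity
F = np.array([0.5, 0.0, 0.2])       # surge/heave only (F2 = 0)
T = np.array([0.01, -0.02, 0.0])    # roll/pitch only (T3 = 0)

dt = 0.05                           # 20 Hz, as in the simulations
vdot, wdot = body_accels(v, w, F, T)
v, w = v + dt * vdot, w + dt * wdot
print(v, w)
\end{verbatim}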
In a 3D parallel locomotion task, only second-order variation controls \eqref{optcon} manage to provide control solutions through successive heave and roll inputs, whereas controls based on first-order sensitivities \eqref{optimalu} fail (see Fig.~\ref{SidewayMovement}). \\ \indent As controls in \eqref{optcon} are non-singular for a wider subset of the configuration state space than the first-order solutions in \eqref{optimalu}, they will provide more actions over a period of time and keep the system closer to a time-varying target. Fig. \ref{TrajTrack_drift} demonstrates the superior trajectory tracking behavior of controls based on \eqref{optcon} in the presence of +10~cm/s $\hat{y}$ fluid drift. The trajectory of the target is given by $[x, y, z]=\big[20+10\cos(\tfrac{t}{5})\cos(\tfrac{3t}{10}),\, 20+10\cos(\tfrac{t}{5})\sin(\tfrac{3t}{10}),\, 10\sin(\tfrac{2t}{5})\big]$, with $T=2$~s, $\lambda=0.01$, $Q~=~\text{diag}(10,10,10,0,0,0,0, 0, 0, 0, 1,1,0.1)$, $\gamma=-50000$, $P_1=\text{diag}(10,10,10,0,0,0,0, 0, 0, 0, 0, 0, 0)$, $R=\text{diag}(10^3,10^3,10^6,10^6)$ for \eqref{optimalu}, and $R=\text{diag}(10,10, 10^4,10^4)$ for \eqref{optcon}. The simulation runs in real time using a C++ implementation on a laptop with Intel$^\circledR$ Core$^{\text{TM}}$ i5-6300HQ CPU @2.30GHz and 8GB RAM. The drift is known for both first- and second-order systems and is accounted for in their dynamics by adding a term $\dot{b}_\text{drift}$, a vector that points in the direction of the fluid flow, to $\dot{b}$. Simulation results demonstrate superior tracking of second-order needle variation controls, which manage to stay with the target, whereas the system driven by first-order needle variation controls drifts away with the flow. \\ \indent We also tested convergence success in the +10~cm/s $\hat{y}$ drift case. Initial conditions $x,y,z$ were sampled uniformly within a $30$~cm radius of the origin, discarding samples for which the initial distance was less than $5$~cm. We consider samples to be successful if, during 60 seconds of simulation, they approached within 5~cm of the origin. Out of 500 samples, controls based on second-order variations converged 91\% of the time (with average convergence time of 5.87~s), compared to 89\% for first-order actions (with average convergence time of 9.3~s). Simulation parameters were $T=1$~s, $\lambda=10^{-4}$, $Q=10^{-3}\text{diag}(10,10,10,0,0,0,0, 1, 1, 1, 1,1,1)$, $P_1=\text{diag}(100,100,100,0,0,0,0, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, 0, 0, 0)$, $\gamma=-25000$, $R=\text{diag}(0.1,0.1,10^4,10^4)$ for \eqref{optimalu}, and $R=\frac{1}{2}\text{diag}(10^{-5},10^{-5}, 1,1)$ for \eqref{optcon}. \section{Conclusion} This paper presents a needle variation control synthesis method for nonlinearly controllable systems that can be expressed in control affine form. Control solutions provably exploit the nonlinear controllability of a system and, contrary to other nonlinear feedback schemes, have formal guarantees to decrease the objective. By optimally perturbing the system with needle actions, the proposed algorithm avoids the expensive iterative computation of controls over the entire horizon that other NMPC methods use and is able to run in real time.
Simulation results on three underactuated systems compare first- and second-order needle variation controls and demonstrate the superior convergence success rate of the proposed feedback synthesis. Because second-order needle variation actions are non-singular for a wider set of the state space than controls based on first-order sensitivity, they are also more suitable for time-evolving objectives, as demonstrated by the trajectory tracking examples in this paper. Second-order needle variation controls are also calculated at little computational cost and make economical use of control effort. These traits, demonstrated in the simulation examples of this paper, render feedback synthesis based on second- and higher-order needle variation methods a promising alternative feedback scheme for underactuated and nonlinearly controllable systems. \section*{Acknowledgments} This work was supported by the Office of Naval Research under grant ONR N00014-14-1-0594. Any opinions, findings, and conclusions or recommendations expressed here are those of the authors and do not necessarily reflect the views of the Office of Naval Research. \clearpage \bibliographystyle{IEEEtran} \balance
\section{Introduction} \label{sec: introduction} One of the most distinctive features of quantum phases of matter is that they are not completely characterized by their pattern of symmetry breaking (order parameters of some kind), which is in sharp contrast to classical statistical systems. Instead, quantum ground states should be described by their pattern of entanglement such as topological or quantum order. \cite{Wen89} However, beyond some simple textbook examples, e.g., a system of two coupled $S=1/2$ spins (qubits), we do not have much intuition about quantum entanglement hidden in many-body wave functions. In the past few years, the entropy of entanglement (von Neumann entropy) \cite{footnote1} \begin{eqnarray} S_A = - \mathrm{tr}_{A}\, \rho_{A} \ln \rho_{A},\quad \rho_{A}=\mathrm{tr}_{B}\, |\Psi\rangle \langle \Psi|, \label{eq: def entanglement entropy} \end{eqnarray} has been used to measure how closely entangled (or how ``quantum'') a given ground state wave function $|\Psi\rangle$ is. Here, the total system is divided into two subsystems $A$ and $B$ and $\rho_{A}$ is the reduced density matrix for the subsystem $A$ obtained by taking a partial trace over the subsystem $B$ of the total density matrix $\rho=|\Psi\rangle \langle \Psi|$. This quantity is zero for classical product states whereas it takes a non-trivial value for valence-bond solid states (VBS), or resonating valence bond states (RVB) of quantum spin systems, say. Recently, the entanglement entropy at and close to quantum phase transitions in low-dimensional strongly correlated systems has been used as a new tool to investigate the nature of quantum criticality. \cite{ Holzhey94, Osterloh02, Osborne02, Vidal03, Fan04, Calabrese04, Furukawa05} Even though one can tell different quantum phases from the scaling of the entanglement entropy, it is still not completely understood what kind of information we can distill from the von Neumann entropy, other than that contained in conventional correlation functions. On the other hand, a phase degree of freedom is also a specific feature of quantum mechanics. Indeed, Berry phases \cite{Berry84} associated with (many-body) wave functions in solids are related to several interesting quantum phenomena which have no classical analogue. It is probably best epitomized by the Thouless-Kohmoto-Nightingale-den Nijs (TKNN) formula in the integer quantum Hall effect (IQHE) \cite{Thouless82,kohmoto}, in which gapped quantum phases are distinguished by an integral topological invariant originating from winding of the phase of wave functions. In addition to the IQHE, the Berry phase also appears in the King-Smith-Vanderbilt (KSV) formula \cite{Kingsmith93, Resta94} of the theory of macroscopic polarization, and its incarnation in quantum spin chains \cite{denNijs89,Nakamura02}, and so on. An observable consequence of the non-trivial Berry phase is the existence of localized states at the boundaries when we terminate a system with open boundaries. \cite{Hatsugai93,Ryu02,Kitaev00} It is then tempting to ask what connection, if any, there is between these two paradigms in quantum physics, namely, entanglement and the Berry phase. In this paper, we discuss this issue by taking a family of translationally invariant lattice free fermion systems in $d$ dimensions as an example. We bipartition the system into two subsystems $A$ and $B$ by introducing $(d-1)$-dimensional flat interfaces.
Within this setup, we can reduce the calculation of the entropy to that in a one-dimensional system by the $(d-1)$-dimensional Fourier transform along the interface. We assume the existence of a finite energy gap $m$ above the ground state, which is inversely proportional to the correlation length, $m \sim \xi_{corr}^{-1}$ (when measured in the unit of the band width). Furthermore, for simplicity, we consider the case in which there are only two bands that are separated by a gap. In this paper, we consider the Berry phase associated with a response of a quantum ground state to a continuous twist of the boundary condition. For the case of free lattice fermion systems, for which a ground state is given by a filled Fermi-Dirac sea, this Berry phase is a phase acquired by an adiabatic transport of the Bloch wave functions in the momentum space and also called Zak's phase.\cite{Zak89} Physically, it is related to macroscopic polarization of the Fermi-Dirac sea. \cite{Kingsmith93} A beauty of the simple two-band example that we discuss is that the Berry phase for the quantum ground state can be easily computed and visualized, following the pioneering work by Berry \cite{Berry84} (see Sec.\ \ref{sec: 1D two-band systems} and Fig.\ \ref{fig: bloch_sphere} below). With this setup, we will demonstrate that taking the partial trace over a subsystem corresponds to creating boundaries in a system. Two contributions to the entanglement entropy will then be identified. The first one is of the type already discussed in a flurry of recent works focusing on detection of quantum critical points. This contribution to the entanglement entropy is largely controlled by the correlation length $\xi_{corr}$. For example, in one-dimensional (1D) many-body systems close to criticality the entanglement entropy obeys a logarithmic law $S_A \sim \mathcal{A}(c/6)\ln \xi_{corr}/a$ where $c$ is the central charge of the conformal field theory that governs the criticality, $a$ the lattice constant, and $\mathcal{A}$ is the number of boundary points of $A$. \cite{Vidal03,Calabrese04} On the other hand, the second contribution to the entropy comes from the localized boundary states of the correlation matrix that exist when the Berry phase of the ground state wave function is non-vanishing. Especially, when the Berry phase is equal to $\pi \times \mbox{(odd integer)}$ and when the ground state respects discrete symmetries of some sort, the localized boundary states are topologically protected as discussed in Refs.\ \cite{Hatsugai93,Ryu02}. For this case, we will show that the contribution from the boundary states to the von Neumann entropy is $\ln 2$ per boundary, i.e., the same amount of entropy carried by a maximally entangled pair of qubits. We will also illustrate, by taking a specific limit, that when $\gamma\neq 0$, the von Neumann entropy from the boundary states is that of partially entangled qubits. We also discuss that the $\ln 2$ contribution to the von Neumann entropy is related to the vanishing of the expectation value of a certain non-local operator which creates a kink in 1D systems. This connection between the entanglement entropy and the kink operator is, in flavor, similar to discussions in Refs.\ \cite{Calabrese04,Casini05} in which the entanglement entropy is expressed as the expectation values of twist operators in conformal field theories. The rest of the paper is organized as follows.
In Sec.\ \ref{sec: 1D two-band systems}, we start our discussions with 1D translationally invariant Hamiltonians with two bands separated by a finite gap. The Berry phase is introduced as an expectation value of a specific non-local operator that twists the phase of wavefunctions. We then discuss its connection to the entanglement entropy by making use of the correlation matrix. The calculation of the entanglement entropy is, in general, a rather difficult task at least analytically. Furthermore, the Berry phase contribution to the entropy might not be of a perturbative nature. We thus consider two limiting situations. In Subsec.\ \ref{subsec: limit 1}, we take the limit of small correlation length $\xi_{corr} \ll 1$ and zero band width. In this specific limit, we can express the entanglement entropy as a function of the Berry phase $\gamma$. We next focus on cases with a discrete unitary particle-hole symmetry (chiral symmetry) in Subsec.\ \ref{subsec: limit 2}. Except for requiring the chiral symmetry, any parameters of the Hamiltonian (the band structure) can be arbitrary. Once we impose the chiral symmetry, the Berry phase $\gamma$ can take only discrete values, integer multiples of $\pi$. We then show that when $\gamma=\pi \times (\mbox{odd integer})$, the entanglement entropy is bounded below as $S_A \ge 2 \ln 2$. In Sec.\ \ref{sec: connection to a kink operator}, we relate the lower bound of the entropy at $\gamma=\pi \times (\mbox{odd integer})$ to the vanishing of the expectation value of a non-local operator that creates a kink. In Sec.\ \ref{sec: 2D systems with the non-vanishing Chern number}, these discussions are applied to a higher-dimensional example, a 2D superconductor with non-zero TKNN integer. We conclude in Sec.\ \ref{sec: conclusion}. \section{1D two-band systems} \label{sec: 1D two-band systems} We start from the following 1D translationally invariant Hamiltonians with two bands separated by a finite gap, \begin{eqnarray} \mathcal{H} = \sum_{x,x'}^{\mbox{\tiny PBC}} \boldsymbol{c}_{x}^{\dag} H_{x-x'} \boldsymbol{c}_{x'}^{\ }, \quad H_{x-x'} = \left( \begin{array}{cc} t_+ & \Delta \\ \Delta' & t_- \end{array} \right)_{x-x'}. \label{eq: def 2-band 1D hamiltonian} \end{eqnarray} Here, a pair of fermion annihilation operators $\boldsymbol{c}^{\mathrm{T}}_x= (c_{+}^{\ }, c_{-}^{\ })_{x}$ is assigned to each site, $x,x'=1,\cdots, N$, and the hermiticity of $\mathcal{H}$ implies $t_{\iota,x-x'}=t_{\iota, x'-x}^*$ and $\Delta_{x-x'}=(\Delta'_{x'-x})^*$ for $\iota=\pm$. We impose the periodic boundary condition (PBC) on the 1D lattice. In spite of its simplicity, this Hamiltonian (\ref{eq: def 2-band 1D hamiltonian}) has a wide range of applicability, such as the Bogoliubov-de Gennes Hamiltonian in superconductivity, graphite systems \cite{Ryu02}, ferroelectricity of organic materials and perovskite oxides \cite{Onoda04}, and the slave boson mean field theory for spin liquid states, say. By the Fourier transformation $ \boldsymbol{c}_x^{\ } = N^{-1/2} \sum_{k \in \mathrm{Bz}} e^{\mathrm{i}k x} \boldsymbol{c}_{k}^{\ } $ where the summation over $k$ extends over the 1st Brillouin zone (Bz), $k=2\pi n/N$ ($n=1,\ldots, N$), the Hamiltonian in the momentum space is given by $\mathcal{H} = \sum_{k \in \mathrm{Bz}} \boldsymbol{c}^{\dag}_{k} H(k) \boldsymbol{c}^{\ }_{k} $, with $H(k) := \sum_{x} e^{-\mathrm{i}k x} H(x)$.
If we introduce an ``off-shell'' four-vector $R^{\mu=0,1,2,3}(k)\in \mathbb{R}$ by $ R^0(k)\mp R^3(k):= t_{\pm}(k) $, $ -R^1(k)+\mathrm{i}R^2(k) := \Delta (k) $, we can rewrite the Hamiltonian in the momentum space as \begin{eqnarray} \mathcal{H} &=& \sum_{k \in \mathrm{Bz}} \boldsymbol{c}_{k}^{\dag} R^{\mu}(k)\sigma_{\mu} \boldsymbol{c}_{k}^{\ }, \label{eq: def 2-band 1D hamiltonian in k-space} \end{eqnarray} where $\boldsymbol{\sigma}_{\mu}=(\sigma_0,-\boldsymbol{\sigma})$ with $\sigma_0=\mathbb{I}_2$. Observing that $R^{\mu}\sigma_{\mu}$ is diagonalized by the same eigen vectors as those of $R_{i}\sigma_i=\boldsymbol{R}\cdot \boldsymbol{\sigma}$ (but with different eigen values), normalized eigen states $\vec{v}_{\pm}$ of $R^{\mu}\sigma_{\mu}$ are given, when $\boldsymbol{R}$ does not lie on the Dirac string, i.e., $(R^1,R^2)\neq (0,0)$, by \cite{Berry84} \begin{eqnarray} \vec{v}_{\pm} &=& \frac{1}{\sqrt{2R(R\mp R^3)}} \left( \begin{array}{c} R^1 -\mathrm{i} R^2 \\ \pm R -R^3 \end{array} \right), \label{eq : wave function for monopole} \end{eqnarray} where $R=|\boldsymbol{R}|$ (should not be confused with $R^0$), and the eigen value for $\vec{v}_{\pm}$ is $E_{\pm}=R^0 \mp R$. The Hamiltonian is then diagonalized as $ \mathcal{H} = \sum_{k} \boldsymbol{\alpha}_{k}^{\dag} \mathrm{diag}(E_+,E_-)_{k} \boldsymbol{\alpha}^{\ }_{k}, $ where $ c_{\iota,k} = (\vec{v}_{\sigma})^{\iota} \alpha_{\sigma,k} $. As we assume there is a finite gap for the entire Brillouin zone, $E_+ > E_-$, ${}^{\forall} k \in\mathrm{Bz}$. The vacuum $|\Psi \rangle$ is the filled Fermi sea $ |\Psi\rangle = \prod_{k \in \mathrm{Bz}} \alpha_{-,k}^{\dag} |0\rangle. $ The Berry phase can be defined through the expectation value of the twist operator: \begin{eqnarray} z &:=& \exp\left[ \mathrm{i}\frac{2\pi}{N}\sum_{x}x n_{x} \right], \label{eq: expectation value of the twist operator} \end{eqnarray} where $n_x$ is the electron number operator at site $x$, $n_{x}=\sum_{\iota}c^{\dag}_{x,\iota}c^{\ }_{x,\iota}$. This operator twists the phase of wave functions along the $x$-direction over the entire system length $N$. If we use the $S_{z}$ component of the spin operator, say, instead of $n_x$, we can define the twist operator in spin systems in a similar fashion. The twist operator has been used to characterize low-dimensional quantum systems \cite{Nakamura02} and to describe macroscopic polarization of insulators \cite{Kingsmith93}, say. For the Fermi-sea $|\Psi\rangle$, the expectation value of the twist operator is calculated as \begin{eqnarray} \langle \Psi |z |\Psi \rangle = (-1)^{N+1} \exp\left[ \mathrm{i}\gamma - \xi_{loc}^2/N +\mathcal{O}(1/N^2) \right], \label{eq: expectation value of the twist operator 2} \end{eqnarray} where the Berry phase (Zak's phase) $\gamma$ is given by a line integral of the gauge field $A(k)$ over the 1D Brillouin zone (Bz) \cite{Zak89, Kingsmith93, Resta94, Rem1}, \begin{eqnarray} \mathrm{i}A_x(k) &:=& \langle v_-(k) | \frac{\mathrm{d}}{\mathrm{d}k} | v_-(k) \rangle, \nonumber \\ \gamma &:=& \mathrm{i}\int_0^{2\pi}\mathrm{i}A_x(k)\, \mathrm{d}k\, . \label{eq: the Berry phase} \end{eqnarray} For the Fermi-sea $|\Psi\rangle$ derived from the Hamiltonian (\ref{eq: def 2-band 1D hamiltonian in k-space}), $\gamma$ is simply equal to half of the solid angle subtended by the loop defined by $\boldsymbol{R}(k)$ in $\boldsymbol{R}$-space \cite{Berry84, Ryu02} (Fig.\ \ref{fig: bloch_sphere}).
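For readers who wish to reproduce $\gamma$ numerically, the following minimal sketch (our own NumPy discretisation, not part of the original analysis) evaluates the Zak phase of the filled band as a gauge-invariant Wilson loop of link overlaps over a discretised Brillouin zone. As a test loop we use $\boldsymbol{R}(k)=(-\Delta\cos k, -\Delta\sin k, \xi)$, the dimerized example of Sec.\ \ref{subsec: limit 1} below, for which the solid-angle formula gives $\gamma = \pi(1-\xi/R)$.
\begin{verbatim}
# Zak phase of the filled band as a discrete Wilson loop:
#   gamma = -Im ln prod_i <v(k_i)|v(k_{i+1})>   (gauge invariant).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def zak_phase(Rvec, N=400):
    ks = 2 * np.pi * np.arange(N) / N
    vs = []
    for k in ks:
        R1, R2, R3 = Rvec(k)
        H = R1 * sx + R2 * sy + R3 * sz
        vs.append(np.linalg.eigh(H)[1][:, 0])   # filled band v_-
    prod = 1.0 + 0.0j
    for i in range(N):
        prod *= np.vdot(vs[i], vs[(i + 1) % N])
    return -np.angle(prod)

Delta, xi = 1.0, 0.5
R = np.hypot(Delta, xi)
g = zak_phase(lambda k: (-Delta*np.cos(k), -Delta*np.sin(k), xi))
print(g / np.pi, 1 - xi / R)   # both ~ 0.553 for this loop
\end{verbatim}
Since the arbitrary phases returned by the diagonalization cancel in the closed product of overlaps, no gauge fixing is needed.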
On the other hand, the $\mathcal{O}(1/N)$ correction to $\ln \langle \Psi |z |\Psi \rangle$ is real and given by the integral of the quantum metric $g_{xx}(k)$ over the Bz \cite{Marzari97}, \begin{eqnarray} g_{xx}(k) &:=& \mathrm{Re}\, \langle \partial_{k} v_- | \partial_{k} v_- \rangle - \langle \partial_{k} v_- | v_- \rangle \langle v_-| \partial_{k} v_- \rangle, \nonumber \\ \xi_{loc}^2 &:=& \pi \int_0^{2\pi} g_{xx}(k)\, \mathrm{d}k\,. \label{eq: the quantum metric} \end{eqnarray} The localization length $\xi_{loc}$ plays a similar role to $\xi_{corr}$ and is known to be related to the most localized Wannier states in an insulating phase. \cite{Marzari97} \begin{figure} \begin{center} \includegraphics[width=8cm,clip]{bloch_sphere.eps} \caption{ \label{fig: bloch_sphere} (Left) The loop defined by a three-component vector $\boldsymbol{R}(k)$ associated with the Hamiltonian in momentum space [Eq.\ (\ref{eq: def 2-band 1D hamiltonian})]. (Right) Loops for chiral-symmetric Hamiltonians. } \end{center} \end{figure} \subsection{Truncated correlation matrix and its zero modes} We next partition the system into two parts, $A=\{x\, |\, x=1,\ldots,N_A\}$ and $B=\{x\, |\, x=N_A+1,\ldots,N\}$ with $N_A+N_B=N$, and ask, with the von Neumann entropy $S_A$, to what extent these two subsystems are entangled. Instead of directly tracing out the subsystem $B$ following the definition (\ref{eq: def entanglement entropy}), we can make use of the correlation matrix $ C_{\iota\lambda}(x-y) := \langle c_{(x,\iota)}^{\dag} c_{(y,\lambda)}^{\ } \rangle $ as shown in Ref.\ \cite{Peschel02}. From the entire correlation matrix, we extract the submatrix $\{C_{\iota\lambda}(x-y)\}_{x,y\in A}$ where $x$ and $y$ are restricted to the subsystem $A$. The entanglement entropy is then given by \begin{eqnarray} S_{A}&=& -\sum_{a} \Big[ \zeta_{a}\ln\zeta_{a} + (1-\zeta_a)\ln(1-\zeta_{a}) \Big], \label{eq: master formula for the entropy} \end{eqnarray} where $\zeta_a$ are the eigen values of the truncated correlation matrix $\{C_{\iota\lambda}(x-y)\}_{x,y\in A}$. With the whole set of the eigen values $\{E_{\pm}(k)\}$ and eigen wavefunctions $\{v_{\pm}(k)\}$ (Eq.\ (\ref{eq : wave function for monopole})) in hand, the correlation matrix $ C_{\iota\lambda}(x-y) = N^{-1} \sum_{k\in \mathrm{Bz}} e^{-\mathrm{i}k(x-y)} C_{\iota\lambda}(k) $ is calculated exactly as \begin{eqnarray} C_{\iota\lambda}(k) &=& \frac{1}{2} \big[ n^{\mu}(k)\sigma_{\mu} \big]_{\iota\lambda}, \label{eq: corr matrix in mom space} \end{eqnarray} where we have introduced an ``on-shell'' four-vector $n^{\mu}$ by $ n^{\mu}=(1,\boldsymbol{R}/R) $. It should be noted that a set of Hamiltonians can share the same ground state wavefunction and thus the same correlation matrix. The basic idea we will use to discuss the entanglement entropy is to think that the correlation matrix $C(x-y)$ defines a 1D ``Hamiltonian'' with PBC. This ``Hamiltonian'' (let us call it the correlation matrix Hamiltonian or the $\mathcal{C}$-Hamiltonian for simplicity) has the same set of eigen wave functions as the original Hamiltonian but all the eigen values are given by either 1 or 0. The range of hopping elements in the generated system is of the order of the inverse gap of the original Hamiltonian. I.e., if there is a finite gap, the $\mathcal{C}$-Hamiltonian is local (short-ranged). Now, all we need to know is what energy spectrum the $\mathcal{C}$-Hamiltonian will have when we cut it into two parts, defined by $A$ and $B$.
This is the same question asked in Ref.\ \onlinecite{Ryu02}, in which a criterion to determine the existence of zero-energy edge states is presented. There are two types of eigen values in the energy spectrum of the truncated $\mathcal{C}$-Hamiltonian in the thermodynamic limit $N_A \to \infty$. Eigen values of the first type are identical to their counterparts in the periodic (untruncated) system. In addition, there appear localized boundary states whose eigen values are located within the bulk energy gap. Since the eigen values that belong to the bulk part of the spectrum are either 1 or 0, they do not contribute to the entanglement entropy as seen from Eq.\ (\ref{eq: master formula for the entropy}) whereas the boundary modes do. The question is then how many boundary states appear and with what energy when the system is truncated. As suggested from the KSV formula in macroscopic polarization, the non-vanishing Berry phase of the filled band of the $\mathcal{C}$-Hamiltonian implies the existence of states localized near the boundary. Here, note that the Berry phase for the generated system ($\mathcal{C}$-Hamiltonian) is identical to that of the original system, since the original and generated Hamiltonians share the same set of eigen wave functions. \subsection{Dimerized limit} \label{subsec: limit 1} Knowing the number of localized states that appear in the spectrum and their energy eigen values is, in general, a difficult task. In this subsection, we consider a limiting situation in which the localization length in Eq.\ (\ref{eq: the quantum metric}) is small, $\xi_{loc} \ll 1$, and the band width of the energy spectrum is zero. More precisely, let us consider the case in which the correlation matrix is given by a four-vector $n^{\mu}$ (Eq.\ (\ref{eq: corr matrix in mom space})) with \begin{eqnarray} && \boldsymbol{R}(k)= (-\Delta \cos k,-\Delta \sin k,\xi), \label{eq: def example 1} \end{eqnarray} where $\Delta,\xi \in\mathbb{R}$. There is a family of Hamiltonians sharing this correlation matrix, which includes the following ``dimerized'' Hamiltonian, \begin{eqnarray} \mathcal{H} = \sum_{x} \left[ \sum_{\iota} \iota \xi c_{x\iota}^{\dag} c_{x\iota}^{\ } + \Delta c_{x+1,+}^{\dag} c_{x,-}^{\ } + \mathrm{h.c.} \right]. \label{eq: dimerized Hamiltonian} \end{eqnarray} The inverse Fourier transformation of Eq.\ (\ref{eq: def example 1}) gives the correlation matrix in the tight-binding notation, \begin{eqnarray} \mathcal{C} = \sum_{x} \left[ \sum_{\iota} \frac{(R - \iota \xi)}{2R} c_{x\iota}^{\dag} c_{x\iota}^{\ } - \frac{\Delta}{2R} c_{x+1,+}^{\dag} c_{x,-}^{\ } + \mathrm{h.c.} \right]. \nonumber \\ \label{eq: dimerized correlation matrix} \end{eqnarray} This $\mathcal{C}$-Hamiltonian can be diagonalized for both periodic and truncated boundary conditions by introducing the ``dimer'' operators via $ d^{\dag}_{\pm,x+\frac{1}{2}} = ( c_{x,+}^{\dag} \pm c_{x+1,-}^{\dag} )/\sqrt{2}. $ (See also Appendix.) The truncated $\mathcal{C}$-Hamiltonian has $(N_A-1)$-fold degenerate eigen values $\zeta=0,1$, and two eigen values $ \zeta= ( 1\pm \frac{\xi}{R} )/2 $ that correspond to edge states. The entanglement entropy (in the thermodynamic limit) is then computed as \begin{eqnarray} \frac{1}{2} S_{A} &=& - \frac{\gamma}{2\pi}\ln \frac{\gamma}{2\pi} - \frac{(2\pi-\gamma)}{2\pi}\ln \frac{2\pi-\gamma}{2\pi}. \label{eq: formula in limit 1} \end{eqnarray} where the Berry phase $\gamma$ for the correlation matrix (\ref{eq: def example 1}) is $\gamma/\pi = 1 -\xi/R$.
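As a sanity check, the following NumPy sketch (our own illustration, with illustrative system sizes) builds the real-space correlation matrix from $C(k)=\frac{1}{2}(\sigma_0-\hat{\boldsymbol{R}}\cdot\boldsymbol{\sigma})$, the projector onto the filled band for the loop (\ref{eq: def example 1}), truncates it to the subsystem $A$, and compares the resulting entropy with the closed form (\ref{eq: formula in limit 1}).
\begin{verbatim}
# Entanglement entropy of the dimerized model from the truncated
# correlation matrix, checked against the closed-form formula.
import numpy as np

N, NA, Delta, xi = 64, 32, 1.0, 0.5
ks = 2 * np.pi * np.arange(N) / N
R = np.hypot(Delta, xi)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Pk = []                                 # projector onto v_- at each k
for k in ks:
    Rhat = np.array([-Delta*np.cos(k), -Delta*np.sin(k), xi]) / R
    Pk.append(0.5*(np.eye(2) - Rhat[0]*sx - Rhat[1]*sy - Rhat[2]*sz))

blocks = {d: sum(np.exp(-1j*k*d)*P for k, P in zip(ks, Pk)) / N
          for d in range(-(NA-1), NA)}  # C(x-y), 2x2 blocks
C = np.zeros((2*NA, 2*NA), dtype=complex)
for x in range(NA):
    for y in range(NA):
        C[2*x:2*x+2, 2*y:2*y+2] = blocks[x-y]

z = np.linalg.eigvalsh(C)
z = z[(z > 1e-12) & (z < 1 - 1e-12)]    # only the boundary modes
S_A = -np.sum(z*np.log(z) + (1-z)*np.log(1-z))

g = np.pi * (1 - xi/R)                  # Berry phase of this loop
S_formula = -2*((g/(2*np.pi))*np.log(g/(2*np.pi))
                + (1 - g/(2*np.pi))*np.log(1 - g/(2*np.pi)))
print(S_A, S_formula)                   # agree
\end{verbatim}
Only the two edge eigen values $\zeta=(1\pm\xi/R)/2$ survive the filtering, one for each of the two interfaces created by the cut.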
In the two extreme cases, $\xi=0$ and $\xi\to \pm \infty$, we have $ S_{A}(\xi=0)=2\ln 2 $ and $ S_{A}(\xi\to \pm \infty)=0 $, respectively. The entanglement entropy in the present case is a concave function with respect to $\gamma \in [0,2\pi]$ and the maximum is achieved when $\gamma=\pi$ whereas two minima are located at $\gamma=0,2\pi$. \subsection{Case of $\gamma=\pi$ with chiral symmetry} \label{subsec: limit 2} Although the formula (\ref{eq: formula in limit 1}) clearly shows the relation between the Berry phase and the entanglement entropy in a specific limit, it is rather difficult to extend Eq.\ (\ref{eq: formula in limit 1}) to more generic situations. However, if we impose a discrete symmetry implemented by a unitary particle-hole transformation, so-called chiral symmetry, on the $\mathcal{C}$-Hamiltonian, it is possible to make a precise prediction for the number of boundary states that have an eigen value $\zeta=1/2$, following the same line of discussion as in Ref.\ \onlinecite{Ryu02}. When the system respects the chiral symmetry, we can find a unitary matrix that anti-commutes with the one-particle Hamiltonian. For this case, $\boldsymbol{n}(k)$ is restricted to lie on a plane through the origin in $\boldsymbol{R}$-space, which in turn means that the Berry phase for the lower band of $\mathcal{H}$ is equal to $n \pi$ ($n \in \mathbb{N}$). (Fig.\ \ref{fig: bloch_sphere}) When $n$ is odd, we can show that there is at least a pair of boundary modes at $\zeta=1/2$, one of which is localized at the left end and the other at the right. \cite{Ryu-unpublished} (The system with $\gamma=\pm \pi$ is, in a sense, ``dual'' to that with the vanishing Berry phase where there is no boundary state. See Appendix.) Basically, this is because, when $n$ is odd, it is always possible to deform the $\mathcal{C}$-Hamiltonian into a ``reference'' one without closing the bulk energy gap and without changing the Berry phase. The reference $\mathcal{C}$-Hamiltonian is similar to the dimerized example (\ref{eq: dimerized correlation matrix}) for which one can exactly show the existence of $n$ pairs of edge modes at $\zeta=1/2$. In the course of deformation, the edge modes present in the reference $\mathcal{C}$-Hamiltonian can move away from $\zeta=1/2$. However, due to the chiral symmetry, the edge modes can escape from $\zeta=1/2$ only in a pairwise fashion, i.e., an edge state localized on the left/right must always be accompanied by one localized on the same end and with the opposite eigen value with respect to $\zeta=1/2$. When $n$ is odd, a pair of edge modes (one for each end) cannot have its partner and hence we are left with at least one edge mode per boundary located exactly at $\zeta=1/2$. See Ref.\ \cite{Ryu02} for more detailed discussions. Then, the lower bound of the entanglement entropy is given by \begin{eqnarray} S_{A}&\ge& - \ln \frac{1}{2} - \ln \frac{1}{2} = 2\ln 2. \end{eqnarray} This lower bound is equal to the entanglement entropy contained in a dimer for each end of the original model, which is consistent with the fact that the origin of the boundary states discussed above can be traced back to dimers in the reference Hamiltonian to which a given target Hamiltonian is adiabatically connected. There can be other contributions from boundary states that are not connected to a dimer in the above sense.
Indeed, as we will explicitly demonstrate below, boundary modes of this kind proliferate as we approach a quantum critical point, their number growing as $\sim \mathcal{A}(c/6)\ln \xi_{corr}/a$, and finally give rise to the logarithmic divergence at the critical point. \cite{Calabrese04} Note also that our discussion here does not apply to gapless systems since the matrix elements of the $\mathcal{C}$-Hamiltonian are long-ranged in this case. \begin{figure} \begin{center} \includegraphics[width=4cm,clip]{ssh_edge_red.eps} \includegraphics[width=4cm,clip]{ssh_corr_red.eps} \\ \includegraphics[width=4cm,clip]{ssh_kink.eps} \includegraphics[width=4.2cm,clip]{ssh_entngl.eps} \caption{ \label{fig: ssh edge} The energy spectra of (a) the Hamiltonian $\mathcal{H}$ with open ends, (b) the truncated correlation matrix $\mathcal{C}$, and (c) the matrix $\mathcal{S}$ (see Sec.\ \ref{sec: connection to a kink operator}) as a function of the dimerization parameter $\phi\in [-1,1]$ for the SSH model. Both energy and dimerization are measured in the unit of the hopping amplitude, $t$. (d) The entanglement entropy of the SSH model. } \end{center} \end{figure} \subsection{Example: the Su-Schrieffer-Heeger model} As an example, let us look at a situation in which two phases with the Berry phase $\gamma=\pi$ and 0 are connected by a quantum phase transition point. Physically, such an example is provided by the Su-Schrieffer-Heeger (SSH) model for a chain of polyacetylene. The 1D tight-binding Hamiltonian of the SSH model is given by $ \mathcal{H} = \sum_{i=1}^{N_i} t \big( -1+(-1)^i\phi_i \big) \big( c_i^{\dag}c_{i+1}^{\ } + \mathrm{h.c.} \big) $ \cite{Heeger88} where $\phi_{i}$ represents dimerization at the $i$-th site, and an alternating sign of the hopping elements reflects dimerization between the carbon atoms in the molecule. Here, we treat the lattice in a classical fashion and neglect its elastic (kinetic) energy. Taking $\phi_i=\phi=\mathrm{const.}$, $t=1$, and defining a spinor at $x=2i-1$ by $ \boldsymbol{c}_{x} = \left( c_i,c_{i+1} \right)^{\mathrm{T}}, $ the Hamiltonian can be written as ($N=N_i/2$) \begin{eqnarray} \mathcal{H} &=& \sum_{x=1}^{N} \boldsymbol{c}^{\dag}_{x} \left( \begin{array}{cc} & -(1+\phi)\\ -(1+\phi) & \end{array} \right) \boldsymbol{c}^{\ }_{x} \nonumber \\ && - \boldsymbol{c}^{\dag}_{x} \left( \begin{array}{cc} & 0\\ 1-\phi & \end{array} \right) \boldsymbol{c}^{\ }_{x+1} + \mathrm{h.c.} \end{eqnarray} Under the PBC, the SSH Hamiltonian can be diagonalized as Eq.\ (\ref{eq: def 2-band 1D hamiltonian in k-space}) with $ R_{x}(k) =-1-\phi -(1-\phi)\cos k $, $ R_{y}(k) =(-1+\phi)\sin k $, $ R_z(k)=0. $ For $\phi \in [-1,0)$, the Berry phase is given by $\gamma=\pi$ whereas for $\phi \in (0,1]$, $\gamma=0$. These two phases are separated by a quantum phase transition at $\phi=0$. Following the discussion in Ref.\ \cite{Ryu02}, there is at least a pair of boundary states for $\phi \in [-1,0)$ when we terminate the system. Indeed, in the numerically computed energy spectrum of the SSH model with open ends (Fig.\ \ref{fig: ssh edge}-(a)) for $\phi\in[-1,+1]$, a pair of edge states appears in the bulk energy gap when $\phi \in [-1,0)$. The entanglement entropy is calculated by diagonalizing the $\mathcal{C}$-Hamiltonian. The energy spectrum of the $\mathcal{C}$-Hamiltonian with open ends is shown in Fig.\ \ref{fig: ssh edge}-(b).
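A compact numerical sketch of this calculation (our own, in NumPy with illustrative system sizes) is given below; it reproduces a pair of correlation-matrix eigen values pinned at $\zeta=1/2$ for $\phi<0$, up to exponentially small finite-size splitting, together with the bound $S_A \ge 2\ln 2$.
\begin{verbatim}
# C-Hamiltonian spectrum and entanglement entropy of the SSH chain.
import numpy as np

def ssh_entropy(phi, N=64, NA=32):
    ks = 2*np.pi*np.arange(N)/N
    Pk = []
    for k in ks:
        off = -(1+phi) - (1-phi)*np.exp(-1j*k)   # R_x - i R_y
        H = np.array([[0, off], [np.conj(off), 0]])
        v = np.linalg.eigh(H)[1][:, 0]           # filled band
        Pk.append(np.outer(v, v.conj()))
    blocks = {d: sum(np.exp(-1j*k*d)*P for k, P in zip(ks, Pk))/N
              for d in range(-(NA-1), NA)}
    C = np.zeros((2*NA, 2*NA), dtype=complex)
    for x in range(NA):
        for y in range(NA):
            C[2*x:2*x+2, 2*y:2*y+2] = blocks[x-y]
    z = np.linalg.eigvalsh(C)
    z = z[(z > 1e-10) & (z < 1 - 1e-10)]
    S = -np.sum(z*np.log(z) + (1-z)*np.log(1-z))
    return S, z[np.argmin(np.abs(z - 0.5))]

for phi in (-0.5, 0.5):
    print(phi, ssh_entropy(phi))  # phi < 0: S >= 2 ln 2 = 1.386...
\end{verbatim}
Note that the projector $|v_-(k)\rangle\langle v_-(k)|$ is gauge invariant, so the arbitrary eigenvector phases returned by the diagonalization are harmless here.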
Again, there is a pair of boundary states for $\phi\in[-1,0)$ and for this case, $S_A$ is bounded from below as $S_A \ge 2 \ln 2$ (Fig.\ \ref{fig: ssh edge}-(d)). When we approach the transition point $\phi=0$, some bulk eigen values turn into boundary eigen values and they give rise to extra contributions other than the zero-energy boundary states. Similar behavior of the entanglement entropy is discussed for the quantum Ising chain in a transverse magnetic field, where the $2\ln 2$ entropy originates from a Schr\"odinger cat state composed of all spin up and down configurations. \section{Connection to a kink operator} \label{sec: connection to a kink operator} We have seen that bipartitioning the system corresponds to the introduction of a sharp ``boundary'' (interface). In this section, we will realize it by a non-local operator, a kink operator \begin{eqnarray} \eta &:=& \exp\left[ \mathrm{i}\sum_{x} \varphi(x) n_{x} \right], \quad \eta^{\dag}=\eta^{-1}, \end{eqnarray} where \begin{eqnarray} \varphi(x)&:=& \left\{ \begin{array}{ll} 0, & x\in A, \\ \pi, & x \in B. \end{array} \right. \end{eqnarray} The geometric mean of this kink operator is the twist operator. \cite{Shindou05} The kink operator attaches a phase factor $\varphi(x)$ to the fermion operators at site $x$, \begin{eqnarray} \eta^{\dag} c_{x\iota}^{\ } \eta = e^{+\mathrm{i}\varphi(x)} c_{x\iota}, \quad \eta^{\dag} c_{x\iota}^{\dag} \eta = e^{-\mathrm{i}\varphi(x)} c_{x\iota}^{\dag}. \label{eq: phase attachment} \end{eqnarray} Thus, if we introduce the reduced density operator through \begin{eqnarray} \tilde{\rho}_{A} &:=& \frac{1}{2} \left[ \eta |\Psi\rangle\langle \Psi|\eta^{\dag} + |\Psi\rangle\langle \Psi| \right], \end{eqnarray} the matrix elements $\mathrm{tr}\, \big[ c_{x,\iota}^{\dag} c_{y,\lambda}^{\ } \tilde{\rho}_A \big]$ are vanishing whenever $x\in A$ and $y\in B$ and vice versa, whereas they coincide with the correlation matrix $C_{\iota\lambda}(x-y)$ when $x,y \in A$. Unlike $\rho_A$, the matrix elements $\mathrm{tr}\, \big[ c_{x,\iota}^{\dag} c_{y,\lambda}^{\ } \tilde{\rho}_A \big]$ are non-zero even for the $B$ subsystem. This ``padding,'' however, has no effect. In the following, we will discuss the expectation value of the kink operator $\langle \Psi| \eta | \Psi \rangle$ with respect to a given ground state wave function $|\Psi\rangle$ which is related to the expectation value of $\tilde{\rho}_A$ as $ \langle \Psi | \tilde{\rho}_A |\Psi \rangle = \frac{1}{2} \left[ |\langle \Psi| \eta | \Psi \rangle|^2 +1 \right]$. As we will see, the vanishing of $\langle \Psi| \eta | \Psi \rangle$ is closely related to a $\ln 2$ contribution to $S_A$ discussed in the previous section. This can be understood intuitively as follows. Classical wave functions can be written as product states and are rather insensitive to the kink operator. Thus, the ground state with the kink operator inserted $\eta |\Psi \rangle$ has a large overlap with the original ground state $|\Psi \rangle$. On the other hand, the kink operator destroys dimers if the Berry phase of the ground state is $\pi \times (\mbox{odd integer})$. As a consequence, the overlap $\langle \Psi|\eta |\Psi \rangle$ is very small in this quantum phase, which in turn suggests that quasi-particles that constitute the continuum spectrum above the ground state can be interpreted as a kink created by $\eta$. Thus the kink operator is capable of distinguishing the quantum phases with different entanglement properties.
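As a minimal illustration of this intuition, consider a two-site toy example (our own, not part of the original analysis): a single fermion shared across the interface in a bonding ``dimer'' state, with $x_A \in A$ and $x_B \in B$. Using Eq.\ (\ref{eq: phase attachment}) with $\varphi(x_A)=0$ and $\varphi(x_B)=\pi$, \begin{eqnarray*} |\Psi\rangle = \frac{1}{\sqrt{2}} \big( c^{\dag}_{x_A} + c^{\dag}_{x_B} \big)|0\rangle, \qquad \eta\, |\Psi\rangle = \frac{1}{\sqrt{2}} \big( c^{\dag}_{x_A} + e^{\mathrm{i}\pi} c^{\dag}_{x_B} \big)|0\rangle = \frac{1}{\sqrt{2}} \big( c^{\dag}_{x_A} - c^{\dag}_{x_B} \big)|0\rangle, \end{eqnarray*} so that $\langle \Psi|\eta|\Psi\rangle = 0$, whereas $|\langle \Psi|\eta|\Psi\rangle| = 1$ for either product state $c^{\dag}_{x_A}|0\rangle$ or $c^{\dag}_{x_B}|0\rangle$. The dimer annihilated in the overlap is precisely the maximally entangled unit that carries the $\ln 2$ entropy per boundary discussed above.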
To put the above statement in a quantum information perspective, recall that the reduced density matrix $\tilde{\rho}_A$ is in general in a mixed state: \begin{eqnarray} \tilde{\rho}_A = \sum_{n} p_{n} |\Psi_n \otimes 0\rangle \langle \Psi_n \otimes 0|, \end{eqnarray} where $|\Psi_n \rangle$ belongs to the subsystem $A$, and $\sum_n p_n =1$. When the wavefunction $|\Psi\rangle$ happens to be a completely entanglement-free product state, $|\Psi\rangle = |\Psi_{A}\rangle \otimes |\Psi_{B}\rangle$, the reduced density matrix $\tilde{\rho}_A$ is in a pure state, i.e., $p_{n\neq 1}=0$, $p_1=1$, $|\Psi_1 \rangle = |\Psi_A \rangle$. On the other hand, when $|\Psi\rangle$ is highly entangled, taking the partial trace over the $B$ subsystem generates many pure states $|\Psi_n\rangle$ with non-zero weight $0< p_n < 1$. How far a given state $|\Psi_n\rangle$ is from a product state can then be measured by taking the expectation value of the reduced density matrix $\tilde{\rho}_A$: \begin{eqnarray} \langle \Psi | \tilde{\rho}_A |\Psi \rangle = \sum_{n} p_{n} \langle \Psi |\Psi_n \otimes 0\rangle \langle \Psi_n \otimes 0|\Psi \rangle. \end{eqnarray} Clearly, it is equal to one when $|\Psi_n\rangle$ is a product state whereas it is expected to be less than one for entangled states. In the following subsections, we will establish that in an insulating phase the expectation value of the kink operator is zero in the thermodynamic limit when the Berry phase is $\pi\times (\mbox{odd integer})$ whereas it is finite otherwise. \subsection{Expectation value of the kink operator as a determinant} \label{subsec: the expectation value of the kink operator as a determinant} The computation of the expectation value of the kink operator for a Fermi-Dirac sea $ |\Psi\rangle = \prod_{k \in \mathrm{Bz}} \alpha_{-,k}^{\dag} |0\rangle $ goes as follows. In the momentum space, the phase attachment transformation (\ref{eq: phase attachment}) reads \begin{eqnarray} \eta^{\dag}\boldsymbol{c}_{k}^{\ }\eta^{\ } = \sum_q f_q \boldsymbol{c}_{k-q}, \quad \eta^{\dag}\boldsymbol{c}_{k}^{\dag}\eta^{\ } = \sum_q f^{*}_q \boldsymbol{c}^{\dag}_{k-q}. \end{eqnarray} where we introduced the Fourier components of $e^{\mathrm{i}\varphi(x)}$ by \begin{eqnarray} e^{\mathrm{i}\varphi(x)} = f(x) = \sum_{q\in \mathrm{Bz}} f_q e^{\mathrm{i}q x}, \end{eqnarray} with $q=2\pi n_q/N$ ($n_q\in \mathbb{N}$) and \begin{eqnarray} f_q&=& 2 \frac{1-e^{-\mathrm{i} \pi n_q} } {1- e^{-\mathrm{i} 2\pi n_q/N }} \nonumber \\ &=& \left\{ \begin{array}{ll} \displaystyle \frac{4} {1- e^{-\mathrm{i} 2\pi n_q/N}}, & n_q = 1,3,\ldots, N-1,\\ \displaystyle 0, & n_q = 0,2,\ldots, N-2. \\ \end{array} \right. \label{eq: fourier component fq} \end{eqnarray} In a basis that diagonalizes the Hamiltonian, \begin{eqnarray} \eta^{\dag} \boldsymbol{\alpha}_{k}^{\ }\eta^{\ } = \sum_{k'} \boldsymbol{S}_{k,k'}^{\dag} \boldsymbol{\alpha}_{k'}, \quad \eta^{\dag} \boldsymbol{\alpha}_{k}^{\dag}\eta^{\ } = \sum_{k'} \boldsymbol{\alpha}_{k'}^{\dag} \boldsymbol{S}_{k,k'}^{\ } \end{eqnarray} where the $2N\times 2N$ matrix $\boldsymbol{S}_{(k\iota)(k'\lambda)}^{\ }$ is given by \begin{eqnarray} \boldsymbol{S}_{(k\iota)(k'\lambda)}^{\ } &=& \sum_q f^{*}_q \left[ v^{\dag}(k-q) v(k) \right]_{\iota\lambda} \delta_{k-q,k'}, \end{eqnarray} and $v^{\dag}(p)= \big( v^{\dag}_{+}(p), v^{\dag}_{-}(p) \big) $.
The expectation value of the kink operator with respect to $|\Psi\rangle$ is then represented as the determinant of the $N\times N$ matrix $\boldsymbol{S}^{\ }_{(k-)(k'-)}$, \begin{eqnarray} \langle \Psi|\eta|\Psi\rangle &=& \mathrm{det}\, \left[ \boldsymbol{S}^{\ }_{(k-)(k'-)} \right]. \label{eq: expectation value of eta as a determinant} \end{eqnarray} If we define the ``hopping'' elements $t_{p,q}$ through \begin{eqnarray} t_{k,k-q}&:=& \left[ v^{\dag}(k) v(k-q) \right]_{--}, \end{eqnarray} the matrix $\boldsymbol{S}_{(k-)(k'-)}$ in Eq.\ (\ref{eq: expectation value of eta as a determinant}) can be represented by a tight-binding Hamiltonian as, \begin{eqnarray} \mathcal{S} &= & \sum_{k,k'} a_{k}^{\dag} \boldsymbol{S}_{(k-)(k'-)} a_{k'}^{\ } \nonumber \\ &=& \sum_k \sum_q f_q t^{\ }_{k,k-q} a_{k}^{\dag}a_{k-q}^{\ }, \end{eqnarray} where $a_{p}^{\dag}$ ($a_{p}^{\ }$) represents a fermionic creation (annihilation) operator defined for $p\in \mathrm{Bz}$. This Hamiltonian can be interpreted as describing a quantum particle hopping on a 1D lattice. Note that the gauge field $A_x(k)$ and the metric $g_{xx}(k)$ are related to the phase and the amplitude of the nearest neighbour hopping elements $t^{\ }_{k,k-2\pi/N}$, respectively. The hopping matrix $t^{\ }_{k,k-q}$ is generically non-local. Also, since the kink operator introduces a sharp boundary in the real space, the dual Hamiltonian is highly non-local in $k$-space. It is evident from Eq.\ (\ref{eq: expectation value of eta as a determinant}) that the vanishing of $\langle \Psi| \eta |\Psi \rangle$ is equivalent to the existence of zero modes in the spectrum of the $\mathcal{S}$-Hamiltonian. As we will see below, the spectrum of the $\mathcal{S}$-Hamiltonian is very similar to that of the $\mathcal{C}$-Hamiltonian: away from a critical point, the spectrum is gapped and all the eigen values are close to either $+1$ or $-1$, except a few eigen values in the gap that reflect the Berry phase if it is non-trivial. If the Berry phase is $\pi\times \mbox{(odd integer)}$, there are exact zero-energy eigen modes. When we approach a critical point, eigen values proliferate around zero energy. Roughly speaking, the entanglement entropy takes into account the distribution of \textit{all} the eigen values of $\mathcal{S}$, whereas the kink operator only takes into account the product of all the eigen values. \subsection{``Chiral symmetry'' and ``time-reversal symmetry''} The $\mathcal{S}$-Hamiltonian has a chiral symmetry. It directly reflects our bipartitioning the original system and has nothing to do with the chiral symmetry in the original system. Indeed, from Eq.\ (\ref{eq: fourier component fq}), one can see that $a_{k}$ with $n_k$ odd (even) is connected only to $a_{k'}$ with $n_{k'}$ even (odd); i.e., the lattice in $k$-space is bipartite. All the eigen states in $k$-space are connected to their partner with the opposite energy via \begin{eqnarray} a_k &\to& a'_k = (-1)^{n_k}a_k, \quad k=\frac{2\pi n_k}{N}. \end{eqnarray} which in turn means in the real space \begin{eqnarray} a_x &\to& a_{x+N_A} = a'_x. \end{eqnarray} When the original system respects the chiral symmetry (not to be confused with the chiral symmetry above), all the single-particle wave functions $\psi(k)$ of $\mathcal{S}$ in $k$-space can be taken to be real by a suitable rotation in $\boldsymbol{R}$-space. [However, when the Berry phase is $\gamma=\pi \times \mathrm{integer}$, this comes at the price of having a Dirac string that intersects $\boldsymbol{R}(k)$.]
The ability to take all ``hopping'' elements $t_{k,k'}$ to be real endows the $\mathcal{S}$-Hamiltonian with an additional ``time-reversal symmetry''; the phase associated with $f_q$ can be removed by a simple gauge transformation, \begin{eqnarray} a_k &\to & b_k = e^{+\mathrm{i}k/2-\mathrm{i}k N_A/2}a_k. \end{eqnarray} [See Fig.\ \ref{fig: arg f}.] Thus, we can take all the matrix elements $f_q t_{k,k-q}^{\ }$ in the $\mathcal{S}$-Hamiltonian to be real. Furthermore, when we go back to the real space, this ``time-reversal'' invariance implies a parity symmetry with respect to an inversion center $x_0=-N_A/2+1/2$. To see this, we first note that all the one-particle eigen states of $\mathcal{S}$ can be taken real in the basis $\{b_p^{\dag},b^{\ }_p\}$; the $\mathcal{S}$-Hamiltonian can be diagonalized as $ \mathcal{S} = \sum_{n} \epsilon_{n} d^{\dag}_{n}d_{n}^{\ } $ with \begin{eqnarray} b^{\ }_{p}= \sum_{n} \phi_{n}(p)d^{\ }_n, \quad b^{\dag}_{p}= \sum_{n} \phi_{n}(p)d_n^{\dag }, \end{eqnarray} where $\phi_{n}(p)$ is a real eigen wavefunction. Since the bases $\{a_x^{\dag},a_x^{\ }\}$ and $\{d_n^{\dag},d_n^{\ }\}$ are related through \begin{eqnarray} a_x &=& \sum_n \frac{1}{\sqrt{N}}\sum_{k} e^{\mathrm{i}k(x-1/2+N_A/2)} \phi_n(k) d_n, \end{eqnarray} the real space eigen wavefunctions $\psi_n(x)$ in the basis $\{a_x^{\dag},a_x^{\ }\}$ are given by \begin{eqnarray} \psi_n(x)= \frac{1}{\sqrt{N}}\sum_{k} e^{\mathrm{i}k(x-1/2+N_A/2)} \phi_n(k), \end{eqnarray} from which one can see $\psi_n(x)$ satisfies \begin{eqnarray} [\psi_n(x)]^{*}&=& \psi_n(-x+1-N_A). \end{eqnarray} I.e., the wave function amplitude is parity symmetric with respect to $x_0 = -N_A/2+1/2$. \begin{figure} \begin{center} \unitlength=10mm \begin{picture}(4,4)(-2,-1) \put(-4,1){\vector(1,0){8}} \put(3.5,1.2){$q$} \put(2.5,0.5){$+\pi$} \put(-3.5,0.5){$-\pi$} \put(0.1,0.5){$0$} \put(0.1,2.1){$+\pi/2$} \put(0.1,-0.4){$-\pi/2$} \put(0,-0.8){\vector(0,1){3.5}} \put(-1,2.5){$\mathrm{arg}\,f_q$} \thicklines \put(-3,1){\line(3,-1){3}} \put(0, 0){\line(0,1){2}} \put(0, 2){\line(3,-1){3}} \end{picture} \caption{ $ \mathrm{arg}\,f_q= \mathrm{arg}\, \left( \frac{4}{1- e^{-\mathrm{i}q}} \right) = - \mathrm{arg}\, \left( 1- e^{-\mathrm{i}q} \right) $ for $n_q=1,3,5,\cdots,N-1$. \label{fig: arg f} } \end{center} \end{figure} This time-reversal symmetry plays an important role in the vanishing of the expectation value of the kink operator. Indeed, it is this symmetry which guarantees the existence of zero modes of $\mathcal{S}$. \subsection{Existence of zero-modes} The argument establishing the existence of zero modes for the $\mathcal{S}$-Hamiltonian is somewhat similar to the ``proof'' of the existence of zero modes for the $\mathcal{C}$-Hamiltonian in that we consider an adiabatic change of the Hamiltonian. The major difference comes from the fact that the chiral symmetry in the $\mathcal{S}$-Hamiltonian is implemented as a kind of time-reversal symmetry as we discussed before. We first establish that there is a pair of zero modes for $\mathcal{S}$ when we take $|\Psi\rangle$ as the ground state of the dimerized Hamiltonian (\ref{eq: def example 1}) with the chiral symmetry. The hopping elements in $\mathcal{S}$ are computed from the overlap of the Bloch wave functions as \begin{eqnarray} \langle v_{\pm}(p)| v_{\pm} (q) \rangle \hphantom{AAAAAAAAAAAAAAAAA} && \nonumber \\ = \frac{1}{2R(R\mp R^3)} \Big[ \Delta^2 e^{\mathrm{i}(p-q)} + R^2 \mp 2 R \xi+\xi^2 \Big].
&& \end{eqnarray} The $\mathcal{S}$-Hamiltonian is then diagonalized as \begin{eqnarray} \mathcal{S}= \frac{1}{2R(R - R^3)} \hphantom{AAAAAAAAAAAAAAAA} && \nonumber \\ \times \sum_{x} \Big[ \Delta^2 f(x+1) + (R-\xi)^{2}f(x) \Big] a_x^{\dag}a^{\ }_x. && \end{eqnarray} We see that there are two mid-gap states with energies $\pm \xi/R$. Especially when $\xi=0$, there is a pair of zero-energy states localized at the interfaces. We then change the Hamiltonian in a continuous fashion in such a way that (i) it respects the chiral symmetry during the deformation, and (ii) it does not cross the gap closing point (the origin of $\boldsymbol{R}$-space). During this deformation, the Berry phase of the ground state wavefunction remains fixed at $\pi$. As already discussed, we can take all the Bloch wave functions to be real and there is a ``time-reversal'' symmetry. One can see that the zero modes never escape from $E=0$ as they are constrained by the time-reversal symmetry, which is nothing but the parity invariance with respect to $x_0=-N_A/2+1/2$. First note that since the $\mathcal{S}$-Hamiltonian is highly non-local in $k$-space, it is short-ranged (quasi-diagonal) in the real space. Thus, if we take the thermodynamic limit $N\to \infty$, states that appear within the gap are spatially localized near the interfaces located at $x=1/2$ and $x= N_A+1/2$, which separate the system into the two subsystems. During the deformation, the two localized states, which are localized at $x=1/2$ and $x=N_A+1/2$, respectively, can in principle go away from $E=0$. Due to the ``chiral symmetry'' of the $\mathcal{S}$-Hamiltonian, if one goes up from $E=0$, the other must go down. However, in the presence of the ``time-reversal symmetry'', each eigen state must be invariant under the space inversion with respect to $-N_A/2+1/2$. In order for the localized states to satisfy these two conditions, both of them must be located at $E=0$. As an example, the spectrum of the $\mathcal{S}$-Hamiltonian for the SSH model is presented in Fig.\ \ref{fig: ssh edge}-(c). The spectrum is almost identical to that of the $\mathcal{C}$-Hamiltonian and a pair of zero modes persists for the entire quantum phase $\phi \in [-1,0)$. \section{2D systems with the non-vanishing Chern number} \label{sec: 2D systems with the non-vanishing Chern number} As far as we consider translationally invariant systems, the above 1D discussions still apply to higher dimensions. When a $d$-dimensional translationally invariant system is bipartitioned by a $(d-1)$-dimensional hyperplane, we can perform the $(d-1)$-dimensional Fourier transformation along the interface. The Hamiltonian is block-diagonal in terms of the wave number along the interface $\boldsymbol{k}_{\parallel}$, $ \mathcal{H} =: \sum_{\boldsymbol{k}_{\parallel}} \mathcal{H}(\boldsymbol{k}_{\parallel})$, where $\mathcal{H}(\boldsymbol{k}_{\parallel})$ is a 1D Hamiltonian for each $\boldsymbol{k}_{\parallel}$-subspace. Then, the previous discussion applies to each $\mathcal{H}(\boldsymbol{k}_{\parallel})$.
As an example of a 2D two-band system, let us consider the 2D chiral $p$-wave superconductor ($p$-wave SC) defined by \begin{eqnarray} \mathcal{H} = \sum_{\boldsymbol{r}} \boldsymbol{c}_{\boldsymbol{r}}^{\dag} \left( \begin{array}{cc} t & \Delta\\ -\Delta & -t \end{array} \right) \boldsymbol{c}_{\boldsymbol{r}+\hat{\boldsymbol{x}}} +\mathrm{h.c.} \qquad\qquad && \nonumber \\ + \boldsymbol{c}_{\boldsymbol{r}}^{\dag} \left( \begin{array}{cc} t & \mathrm{i}\Delta\\ \mathrm{i}\Delta & -t \end{array} \right) \boldsymbol{c}_{\boldsymbol{r}+\hat{\boldsymbol{y}}} +\mathrm{h.c.} + \boldsymbol{c}_{\boldsymbol{r}}^{\dag} \left( \begin{array}{cc} \mu & 0\\ 0 & -\mu \end{array} \right) \boldsymbol{c}_{\boldsymbol{r}}, && \end{eqnarray} where the integer-valued index $\boldsymbol{r}$ runs over the 2D square lattice, $\hat{\boldsymbol{x}}=(1,0)$, $\hat{\boldsymbol{y}}=(0,1)$, and $t,\Delta,\mu \in \mathbb{R}$. For simplicity, we set $t=\Delta=1$ in the following. The chiral $p$-wave SC has been discussed in the context of superconductivity in a ruthenate and paired states in the fractional quantum Hall effect. \cite{Read00, volovik, Senthil98, goryo, morita, Hatsugai02} There are four phases separated by three quantum critical points at $\mu=0,\pm 4$, which are labeled by the Chern number $Ch$ as $Ch=0$ $( |\mu| > 4)$, $Ch=-1$ $(-4 < \mu < 0)$, and $Ch=+1$ $( 0 < \mu < +4)$. The non-zero Chern number implies the IQHE in the spin transport. \cite{Senthil98} The energy spectrum of the family of Hamiltonians $\mathcal{H}(k_y)$ parametrized by the wave number in the $y$-direction, $k_y$, is given in Fig.\ \ref{fig: 2D p-wave edge}-(a),(b). There are branches of edge states that connect the upper and lower bands for phases with $Ch=\pm 1$. These edge states contribute to the entanglement entropy. The energy spectrum of the $\mathcal{C}$-Hamiltonian with open ends is shown in Fig.\ \ref{fig: 2D p-wave edge}-(c),(d). The corresponding entanglement entropy is found in Fig.\ \ref{fig: 2D p-wave edge}-(e) for several values of the aspect ratio $r =N_y/N_x$. We can see that for small $r$, the entanglement entropy shows a cusp-like behavior at quantum phase transitions whereas for larger values of $r$, the cusp is less pronounced. This behavior can be understood as a dimensional crossover of the scaling behavior of the entanglement entropy between 1D and 2D. For small $r$, the entropy behaves 1D-like and the cusp is reminiscent of the logarithmically divergent behavior $S_A \sim \ln N_A$ of the pure 1D case. \cite{Holzhey94} On the other hand, for $r$ close to unity, the entropy exhibits a 2D behavior. In the pure 2D limit ($r=1$), noting that the band structure at the critical points $\mu=\pm 4$ consists of one gapless Dirac fermion, the entanglement entropy scales as $S_{A} = \alpha N_y -\beta N_y/N_A$ where $\alpha$ and $\beta$ are constants. (See Appendix \ref{app 2}.) Notice that unlike the case of a finite Fermi surface \cite{Wolf05,Gioev05}, $S_A/N_y$ is constant for a Dirac fermion. An interesting and direct application of the present section is the entanglement entropy of 2D $d$-wave superconductors and carbon nanotubes. In these systems, different ways of bipartitioning the system lead to different amounts of the entanglement entropy. \cite{Ryu02}
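The dimensional reduction described above translates directly into a numerical procedure: one truncated 1D correlation matrix per $k_y$, with the entropies summed over $k_y$. The sketch below (our own NumPy illustration with illustrative sizes, using the Bloch vector $\boldsymbol{R}(\boldsymbol{k})=(-2\Delta\sin k_y, -2\Delta\sin k_x, 2t(\cos k_x+\cos k_y)+\mu)$ obtained by Fourier transforming the lattice model with $t=\Delta=1$) contrasts the $Ch=0$ and $Ch=-1$ phases.
\begin{verbatim}
# Entanglement entropy of the 2D chiral p-wave SC via the
# k_y-resolved 1D correlation matrices.
import numpy as np

def S_total(mu, Nx=32, Ny=32, NA=16):
    kxs = 2*np.pi*np.arange(Nx)/Nx
    S = 0.0
    for ky in 2*np.pi*np.arange(Ny)/Ny:
        Pk = []
        for kx in kxs:
            Rx = -2*np.sin(ky)
            Ry = -2*np.sin(kx)
            Rz = 2*(np.cos(kx) + np.cos(ky)) + mu
            H = np.array([[Rz, Rx - 1j*Ry], [Rx + 1j*Ry, -Rz]])
            v = np.linalg.eigh(H)[1][:, 0]
            Pk.append(np.outer(v, v.conj()))
        blocks = {d: sum(np.exp(-1j*k*d)*P
                         for k, P in zip(kxs, Pk))/Nx
                  for d in range(-(NA-1), NA)}
        C = np.zeros((2*NA, 2*NA), dtype=complex)
        for x in range(NA):
            for y in range(NA):
                C[2*x:2*x+2, 2*y:2*y+2] = blocks[x-y]
        z = np.linalg.eigvalsh(C)
        z = z[(z > 1e-10) & (z < 1 - 1e-10)]
        S += -np.sum(z*np.log(z) + (1-z)*np.log(1-z))
    return S

print(S_total(-5.0), S_total(-3.0))   # Ch = 0 vs Ch = -1 phase
\end{verbatim}
The extra entropy in the $Ch=-1$ phase reflects the mid-gap boundary modes of the $k_y$-resolved $\mathcal{C}$-Hamiltonians.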
\begin{figure} \begin{center} \includegraphics[width=4cm,clip]{2d_p-wave_edge_mu=-5_red.eps} \includegraphics[width=4cm,clip]{corr_20_10_mu=-5_red.eps} \\ \includegraphics[width=4cm,clip]{2d_p-wave_edge_mu=-3_red.eps} \includegraphics[width=4cm,clip]{corr_20_10_mu=-3_red.eps} \\ \includegraphics[width=5.5cm,clip]{2d_p-wave_entngl.eps} \caption{ \label{fig: 2D p-wave edge} The energy spectrum (measured in units of the hopping $t=1$) vs. $k_y\in [0,2\pi)$ for the 2D $p$-wave SC with boundaries. The chemical potential is $\mu=-5$ (a) and $-3$ (b), and $t=\Delta =1$. The corresponding spectra of the $\mathcal{C}$-Hamiltonian are shown in (c) ($\mu=-5$) and (d) ($\mu=-3$). The entanglement entropy of the 2D chiral $p$-wave SC as a function of $\mu$ is presented in (e). The aspect ratio $r =N_y/N_x$ is $r =1/2, 1/3, 1/4, 1/8$ from the bottom at $\mu=-4$. } \end{center} \end{figure} \section{Conclusion} \label{sec: conclusion} In this paper, we have identified two types of contributions to the entanglement entropy, i.e., one from the boundaries of the system created by taking the partial trace and the other from the bulk energy spectrum. The contribution from the boundaries is controlled by the Berry phase, and hence we can make use of some known facts on the ``bulk-boundary correspondence'' to compute the entanglement entropy. In particular, we have obtained a lower bound on the entanglement entropy for 1D systems with discrete particle-hole symmetries. Intuitively, this means that when the Berry phase is zero, the ground state wave function is very close to a simple product state, and there is not much entanglement. Thus, ground states with a non-trivial Berry phase can be said to be more entangled in general. Recently, it has been revealed that the Berry phase manifests itself in the semiclassical equation of motion \cite{Sundaram99}, the density of states \cite{Xiao05}, the anomalous Hall effect, etc. One can add the Berry phase correction to the entanglement entropy to this catalog. One of the main messages of this paper is the superiority of the entanglement entropy to conventional correlation functions of local operators in describing quantum phases. Indeed, we clarified that the entanglement entropy is related to non-local operators: the twist operator and the kink operator. The bulk contribution to the entanglement entropy is related to the localization length (correlation length), which is the real part of the logarithm of the expectation value of the twist operator and can be expressed by the quantum metric \cite{Marzari97}. On the other hand, the edge contribution is tied to the imaginary part, and hence to the Berry phase. [See Eqs.\ (\ref{eq: expectation value of the twist operator}) to (\ref{eq: the quantum metric}).] We have also made a connection between the entanglement entropy and the kink operator. It is known that several phases of 1D strongly correlated systems (such as the Haldane phase) can be described by these non-local operators. Another connection of the entanglement entropy to some sort of non-local operator can also be seen in a recent proposal of a holographic derivation of the entanglement entropy. \cite{Ryu-Takayanagi06} Thus, the entanglement entropy can potentially be very useful to detect quantum phases that need a more subtle characterization than classically ordered phases. For example, the entanglement entropy can be applied to several types of spin liquid ground states which are speculated to be described by some kind of gauge theories.
Indeed, for gapped phases with topological order, this direction has already been explored to some extent \cite{Kitaev05, Levin05}. However, in order to push this direction further, we still need to deepen our understanding of the entanglement entropy. For example, extensions to multi-band systems, especially to the case of completely degenerate bands, might also be interesting; there, one needs the non-Abelian Berry phase to characterize the system. \cite{Hatsugai04} It is also interesting to investigate whether the Berry phase of quantum ground states can be captured by other types of entanglement measures, such as the concurrence \cite{Wootters98}. Finally, among many other questions, we need to consider how we can measure the entanglement entropy in a direct fashion. \cite{Klich06} \section*{Acknowledgments} We are grateful to R.\ Shindou and T.\ Takayanagi for useful discussions. This research was supported in part by the National Science Foundation under Grant No.\ PHY99-07949 (SR) and a Grant-in-Aid from the Ministry of Education, Culture, Sports, Science and Technology, Japan (YH).
\section{Concluding Remarks} \label{sec-discussion} We analyze the impact of GDPR on storage systems. We find that achieving strict compliance efficiently is hard; a naive attempt at strict compliance results in significant slowdown. We modify Redis to be GDPR-compliant and measure the performance overhead of each modification. Below, we identify three key research challenges that must be addressed to achieve strict GDPR compliance efficiently. \subsection{Research Challenges} \label{sec-discuss-research} \vheading{Efficient Logging}. For strict compliance, every storage operation including reads must be synchronously written to persistent storage; persisting to solid state drives or hard drives results in significant performance degradation. New non-volatile memory technologies, such as Intel 3D XPoint, can help reduce such overheads. Efficient auditing may also be achieved through the use of eidetic systems. For example, Arnold~\cite{eidetic-systems} is able to remember past state with only 8\% overhead; adapting Arnold for GDPR remains a challenge. \vheading{Efficient Deletion}. With all personal data possessing an expiry timestamp, we need data structures to efficiently find and delete (possibly large amounts of) data in a timely manner. As in time-series databases, data can be indexed by its expiration time, and then grouped and sorted by that index to speed up this process. However, GDPR is vague in its interpretation of deletions: it neither advocates a specific timeline for completing the deletions nor mandates any specific techniques. Thus, it remains to be seen if efforts like Google Cloud's guarantee~\cite{google-cloud-deletes} to not retain customer data beyond 180 days of a delete request will be considered compliant behavior. \vheading{Efficient Metadata Indexing}. Several articles of GDPR require efficient access to groups of data based on certain attributes. For example, accessing all the keys that allow processing for a particular \emph{purpose} while ignoring those that object to that purpose; or collating all the files of a particular \emph{user} to be ported to a new controller. While traditional databases natively offer this ability via secondary indices, not all storage systems have efficient or configurable support for this capability. \subsection{Limitations and Importance} \label{sec-discuss-generalize} Given its preliminary nature, our work has several limitations. First, we investigate one particular storage system, Redis, using one benchmark suite, YCSB. Expanding the scope to a broader range of storage systems like relational databases and file systems would increase the confidence of our findings. Next, it is likely that the performance of our GDPR-compliant Redis could be further improved with a deeper knowledge of Redis internals. Finally, while we focus exclusively on storage systems, researchers have shown~\cite{gdpr-sins} how GDPR compliance requires organization-wide changes to the systems that process personal data. With the growing relevance of privacy regulations around the world, we expect this paper to trigger interesting conversations. This is one of the first efforts to systematically analyze the impact of GDPR on storage systems. We would be keen to engage the storage community in identifying and addressing the research challenges in this space. \section{Designing for Compliance} \label{sec-design} Based on our analysis of GDPR, we identify six key features that a storage system must support to be GDPR-compliant.
Then, we characterize how systems show variance in their support for these features. \subsection{Features of GDPR-Compliant Storage} \label{sec-gdpr-storage-features} \vheading{Timely Deletion}. Under GDPR, no personal data can be retained for an indefinite period of time. Therefore, the storage system should support mechanisms to associate time-to-live (TTL) counters with personal data, and then automatically erase the data from all internal subsystems in a timely manner. GDPR allows TTL to be either a static time or a policy criterion that can be objectively evaluated. \vheading{Monitoring and Logging}. In order to demonstrate compliance, the storage system needs an audit trail of both its internal actions and external interactions. Thus, in a strict sense, all operations, whether in the data path (say, reads or writes) or the control path (say, changes to metadata or access control), need to be logged. \vheading{Indexing via Metadata}. Storage systems should have interfaces that allow quick and efficient access to groups of data. For example, accessing all personal data that could be processed under a specific purpose, or exporting all data belonging to a user. Additionally, they should have the ability to quickly retrieve and delete large amounts of data that match a criterion. \vheading{Access Control}. As GDPR aims to limit access to personal data to only permitted entities, for established purposes, and for predefined durations of time, the storage system must support fine-grained and dynamic access control. \vheading{Encryption}. GDPR mandates that personal data be encrypted both at rest and in transit. While pseudonymization may help reduce the scope and size of data needing encryption, encryption is still required and likely degrades storage system performance. \vheading{Managing Data Location}. Finally, GDPR restricts the geographical locations where personal data may be stored. This implies that storage systems should provide the ability to find and control the physical location of data at all times. \subsection{Degree of Compliance} \label{sec-design-degree} Though GDPR is clear in its high-level goals, it is intentionally vague in its technical specifications. For example, GDPR mandates that no personal data be stored indefinitely and that data be deleted after its expiry time. However, it does not specify how soon after expiry the data must be erased: seconds, hours, or even days? GDPR is silent on this, only mentioning that the data should be deleted without undue delay. What this means for system designers is that GDPR compliance need not be a fixed target but can instead be a spectrum. We capture this variance along two dimensions: \emph{response time} and \emph{capability}. \vheading{Real-time vs. Eventual Compliance}. Real-time compliance is when a system completes a GDPR task (\textit{e.g.,}\xspace deleting expired data or responding to user queries) synchronously, in real time. Otherwise, we categorize it as eventually compliant. Given the steep penalties (up to 4\% of global revenue or \euro20M, whichever is higher) for violating compliance, companies would do well to be on the strict end of the spectrum. However, as we demonstrate in \sref{sec-redis}, achieving real-time compliance results in significant overhead unless the challenges outlined in \sref{sec-discuss-research} are solved. This problem is further exacerbated for organizations that operate at scale.
For example, Google Cloud Platform informs~\cite{google-cloud-deletes} its users that it could take up to 6 months for deleted data to be completely removed from all of its internal systems. \vheading{Full vs. Partial Compliance}. Distinct from the response time, systems exhibit varying levels of feature granularity and capability. Such discrepancies arise because many GDPR requirements sit at odds with the design principles and performance guarantees of certain systems. For example, file systems do not implement indexing into files as a core operation since that feature is commonly supported via application software like {\tt grep}. Similarly, many relational databases only partially and indirectly support TTL, as that operation could be realized using user-defined triggers, albeit inefficiently. Thus, we define \emph{full compliance} to be natively supporting all the GDPR features, and \emph{partial compliance} as enabling feature support in conjunction with external infrastructure or policy components. We use the term \emph{strict compliance} to reflect that a system has achieved both full and real-time compliance. \section{Background on GDPR} \label{sec-gdpr} GDPR~\cite{gdpr-regulation} is laid out in 99 \emph{articles} that describe its legal requirements, and 173 \emph{recitals} that provide additional context and clarifications to these articles. GDPR is an expansive set of regulations that covers the entire lifecycle of personal data. As such, achieving compliance requires interfacing with infrastructure components (including compute, network, and storage systems) as well as operational components (processes, policies, and personnel). However, since our investigation primarily concerns GDPR's impact on storage systems, we focus on the articles that describe the behavior of storage systems. These fall into two broad categories: the rights of \emph{the data subjects} (i.e., the people whose personal data has been collected) and the responsibilities of \emph{the data controllers} (i.e., the companies that collect personal data). \subsection{Rights of the Data Subject} \label{sec-gdpr-rights} There are 12 articles that codify the rights and freedoms of people. Among these, four directly concern storage systems. The first one, \textsl {Article 15: \textsc{Right of access by the data subject}}, allows any person whose personal data has been collected by a company to obtain detailed information about its usage, including (i) the purposes of processing, (ii) the recipients to whom it has been disclosed, (iii) the period for which it will be stored, and (iv) its use in any automated decision-making. Thus, the storage system should not only be designed to store this metadata but also be organized to allow timely access to it. Related to this is \textsl {Article 21: \textsc{Right to object}}, which allows a person to object at any time to their personal data being used for the purposes of marketing, scientific research, historical archiving, or profiling. This requires storage systems to know both the whitelisted and blacklisted purposes associated with personal data at all times, and to control access to it dynamically. Most prominently, \textsl{Article 17: \textsc{Right to be forgotten}} grants people the right to require the data controller to erase their personal data without undue delay\footnote {Article 17 covers only the personal data, not the insights derived from it; nor can it be used to violate the rights of other people or law enforcement.}.
This right applies broadly, whether or not the personal data was obtained directly from the customer, and regardless of whether the customer had previously given consent. From a storage perspective, the article demands that the requested data be erased in a timely manner, including all its replicas and backups. Finally, \textsl{Article 20: \textsc{Right to data portability}} states that people have the right to obtain all their personal information in a commonly used format, as well as the right to have it transmitted to another company directly. Thus, storage systems should have the capability to access and transmit all data belonging to a particular user in a timely fashion. \begin{table*}[t] \makebox[1\textwidth][c]{ \begin{minipage}[b]{1\textwidth} \centering \small {\renewcommand{\arraystretch}{1.1} \begin{tabular}{| c | l | l | l |} \hline \thead{\bf No.} & \thead{\bf GDPR article} & \thead{\bf Key requirement} & \thead{\bf Storage feature} \\ \hline \hline {5.1} & Purpose limitation & Data must be collected and used for specific purposes & Metadata indexing \\ \hline {5.1} & Storage limitation & Data should not be stored beyond its purpose & Timely deletion \\ \hline {5.2} & Accountability & Controller must be able to demonstrate compliance & All \\ \hline {13} & Conditions for data collection & Get user's consent on how their data would be managed & All \\ \hline {15} & Right of access by users & Provide users timely access to all their data & Metadata indexing \\ \hline {17} & Right to be forgotten & Find and delete groups of data & Timely deletion \\ \hline {20} & Right to data portability & Transfer data to other controllers upon request & Metadata indexing \\ \hline {21} & Right to object & Data must not be used for objected purposes & Metadata indexing \\ \hline {25} & Protection by design and by default & Safeguard and restrict access to data & Access control, Encryption \\ \hline {30} & Records of processing activity & Store audit logs of all operations & Monitoring \\ \hline {32} & Security of data & Implement appropriate data security measures & Access control, Encryption \\ \hline {33, 34} & Notify data breaches & Share insights and audit trails from concerned systems & Monitoring \\ \hline {46} & Transfers subject to safeguards & Control where the data resides & Manage data location \\ \hline \end{tabular} } \end{minipage}} \caption{\emph {Key GDPR articles that significantly impact the design, interfacing, or performance of storage systems. The table maps the requirements of these articles into storage system features.}} \vspace{-0.4cm} \label{fig:regulation-table} \end{table*} \subsection{Responsibilities of the Data Controller} \label{sec-gdpr-responsibilities} Among the articles that outline the responsibilities of data controllers, 10 concern storage systems. Three articles elucidate the high-level principles of data security and privacy that must be followed by all controllers. \textsl{Article 24: \textsc{Responsibility of the controller}} establishes that the ultimate responsibility for the security of all personal data lies with the controller that has collected it; \textsl{Article 32: \textsc{Security of processing}} requires the controller to implement risk-appropriate and state-of-the-art security measures, including encryption and pseudonymization; and lastly, \textsl{Article 25: \textsc{Data protection by design and by default}} specifies that all systems must be designed, configured, and administered with data protection as a primary goal.
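To make the storage-feature mapping of Table~\ref{fig:regulation-table} concrete, consider the per-item metadata that the above rights presuppose: an owner, whitelisted and blacklisted purposes, and an expiry time. The following toy Python sketch (ours; it describes no production system, and all names are hypothetical) shows purpose-gated reads, per-user export, and per-user erasure built on such metadata:
\begin{verbatim}
from dataclasses import dataclass, field
from time import time

@dataclass
class PersonalRecord:
    value: bytes
    owner: str                 # the data subject this item belongs to
    purposes: set = field(default_factory=set)    # whitelisted purposes
    objections: set = field(default_factory=set)  # purposes objected to
    expires_at: float = float("inf")              # TTL (storage limitation)

class ToyCompliantStore:
    def __init__(self):
        self._data = {}

    def put(self, key, record):
        self._data[key] = record

    def get(self, key, purpose):
        # Purpose-gated access (Articles 5 and 21)
        rec = self._data.get(key)
        if rec is None or time() >= rec.expires_at:
            return None
        if purpose in rec.objections or purpose not in rec.purposes:
            raise PermissionError("purpose not permitted for this item")
        return rec.value

    def export_user(self, owner):
        # Data portability (Article 20): collate one subject's items
        return {k: r.value for k, r in self._data.items()
                if r.owner == owner}

    def forget_user(self, owner):
        # Right to be forgotten (Article 17)
        for k in [k for k, r in self._data.items() if r.owner == owner]:
            del self._data[k]
\end{verbatim}
A real system would additionally need secondary indices over the owner and expiry fields to make the last two operations efficient at scale, which is precisely the metadata-indexing challenge highlighted in \sref{sec-discuss-research}.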
There are several articles that set guidelines for the collection, processing, and transmission of personal data. The purpose limitation of \textsl{Article 5: \textsc{Processing of personal data}} mandates that personal data should only be collected for specific purposes and not be used for any other purposes. From a storage standpoint, this translates to maintaining associated (purpose-)metadata that can be accessed and updated by systems that process personal data. Interestingly, \textsl{Article 13} also establishes that data subjects have the right to know the specific purposes for which their personal data will be used, as well as the duration for which it will be stored. The latter requirement means that storage systems have to support time-to-live mechanisms in order to automatically erase expired personal data. Finally, while \textsl{Article 30: \textsc{Records of processing activities}} requires the controller to maintain logs of all activities concerning personal data, \textsl{Article 33: \textsc{Notification of personal data breach}} mandates that they notify the authorities and users within 72 hours of any personal data breach. In conjunction with the accountability clause of \textsl{Article 5}, which puts the onus of proving compliance on the controller, these articles impose stringent requirements on storage systems: to monitor and maintain detailed logs of all control- and data-path interactions. For instance, every read operation now has to be followed by a (logging-)write operation. Table~\ref{fig:regulation-table} summarizes these articles and translates their key requirements into specific storage features. \section{Introduction} \label{sec-introduction} \setlength{\epigraphwidth}{2.1in} \setlength{\epigraphrule}{0.1pt} \epigraph{\emph{``In law, nothing is certain but the expense.''}}{Samuel Butler} Privacy and protection of personal data (or more aptly, the lack thereof) have become a topic of concern for modern society. The gravity of personal data breaches is evident not only in their frequency ($\sim$1300 in 2017 alone~\cite{data-breaches-2017}) but also in their scale (the Equifax breach~\cite{equifax} compromised the financial information of $\sim$145 million consumers) and scope (the Cambridge Analytica scandal~\cite{cambridge-analytica} harvested personal data to influence the U.K. Brexit referendum and the 2016 U.S. Presidential elections). In response to this alarming trend, the European Union (EU) adopted a comprehensive privacy regulation called the General Data Protection Regulation (GDPR)~\cite{gdpr-regulation}. GDPR defines the privacy of personal data as a fundamental right of all European people, and accordingly regulates the entire lifecycle of personal data. Thus, any company dealing with the personal data of EU residents is legally bound to comply with GDPR. While essential, achieving compliance is not trivial: Gartner estimates~\cite{gartner-prediction} that less than 50\% of the companies affected by GDPR will likely be compliant by the end of 2018. This challenge is exacerbated for the vast majority of companies that rely on third parties for infrastructure services, and hence do not have control over the internals of such services. For example, a company building a service on top of Google's cloud storage system would not be compliant if that cloud subsystem violates GDPR norms. In fact, GDPR prevents companies from using any third-party services that violate its standards.
Though GDPR governs the behavior of most of the infrastructure and operational components of an organization, its impact on storage systems is particularly pronounced: 31 of the 99 articles that make up GDPR directly pertain to storage systems. Motivated by this finding, we set out to investigate the impact of GDPR on storage systems. In particular, we ask the following questions: (i) What features should a storage system have to be GDPR-compliant? (ii) How does compliance affect the performance of different types of storage systems? (iii) What are the technical challenges in achieving strict compliance in an efficient manner? By examining the GDPR articles, we identify a core set of (six) features that must be implemented in the storage layer to achieve compliance. We hypothesize that despite needing to support only a small set of new features, storage systems would experience a significant performance impact. This stems from a key observation: GDPR's goal of \emph{data protection by design and by default} sits at odds with the traditional system design goals (especially for storage systems) of optimizing for performance, cost, and reliability. For example, the regulation on identifying and notifying data breaches requires that a controller keep a record of all interactions with personal data. From a storage system perspective, this turns every read operation into a read followed by a write. To evaluate our hypothesis, we design and implement the changes required to make Redis, a widely used key-value store, \emph{GDPR-compliant}. This not only illustrates the challenges of retrofitting existing systems into GDPR compliance but also quantifies the resulting performance overhead. Our benchmarking using YCSB demonstrates that the GDPR-compliant version experiences a 20$\times$ slowdown compared to the unmodified version. We share several insights from our investigation. First, though GDPR is clear in its high-level goals, it is intentionally vague in its technical specifications. This allows GDPR compliance to be a continuum and not a fixed target. We define \emph{real-time compliance} and \emph{eventual compliance} to describe a system's approach to completing GDPR tasks. Our experiments show the performance impact of this choice. For example, by storing the monitoring logs in batches (say, once every second) as opposed to synchronously, Redis' throughput improves by 6$\times$\xspace while exposing it to the risk of losing one second's worth of logs. Such tradeoffs present design choices for researchers and practitioners building GDPR-compliant systems. Second, some GDPR requirements sit at odds with the design principles and performance guarantees of storage systems. This could lead to storage systems offering differing levels of native support for GDPR compliance (with missing features expected to be handled by other infrastructure or policy components). Finally, we identify three key research challenges (namely, efficient deletion, efficient logging, and efficient metadata indexing) that must be solved to make strict compliance efficient. \section{GDPR-Compliant Redis} \label{sec-redis} Redis~\cite{redis} is a prominent example of a key-value store, a class of storage where unstructured data (i.e., values) is stored in an associative array and indexed by unique keys.
Our choice of Redis as the reference system is motivated by two reasons: (i) it is a modern storage system with active open-source development, and (ii) key-value stores, in general, are not only widely deployed in Internet-scale systems~\cite{dynamo-amazon, voldemort-linkedin, memcached-facebook} but are also an active area of research~\cite{hashcache-nsdi, skimpystash-sigmod, silt-sosp, hyperdex-sigcomm, lsm-trie-atc, triad-atc, pebbles-sosp}. From amongst the features outlined in \sref{sec-gdpr-storage-features}, Redis fully supports {monitoring}, {metadata indexing}, and {managing data locations}; partially supports {timely deletion}; and offers no native support for {access control} and {encryption}. Below, we discuss our changes---some involving implementation while others simply concerning policy and configuration---towards making Redis \emph{GDPR-compliant}. This effort resulted in $\sim$120 lines of code and configuration changes within Redis. Then, we evaluate the performance impact of our modifications to Redis (v4.0.11) using the Yahoo Cloud Serving Benchmark (YCSB)~\cite{ycsb}. We configure the YCSB workloads to use 2M operations, and run them on a Dell Precision Tower 7810 with a quad-core Intel Xeon 2.8GHz processor, 16 GB RAM, and a 1.2TB Intel 750 SSD. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{graphs/encryption/ycsb-2m.pdf} \caption{\emph{Performance overhead of GDPR-compliant Redis. YCSB benchmarking shows that monitoring and encryption each reduce Redis' throughput to $\sim$30\% of the original.}} \vspace{-0.4cm} \label{fig:redis-gdpr-overhead} \end{figure} \subsection{Monitoring and Logging} \label{sec-redis-monitoring} Redis offers several mechanisms to generate complete audit logs: a debugging command called {\tt MONITOR}, configuring the server with the slowlog option, and piggybacking on the append-only file ({\tt AOF}). Our microbenchmarking revealed that since Redis already performs its journaling via {\tt AOF}, the first two options result in more overhead than {\tt AOF}. Also, {\tt MONITOR} streams the logs over the network, thus requiring additional encryption. So, we selected the {\tt AOF} approach. However, {\tt AOF} records only those operations that modify the dataset. Thus, we had to update the {\tt AOF} code to include all of Redis' interactions. Our benchmarking shows that when we set {\tt AOF} to fsync every operation to the disk synchronously, Redis' throughput drops to $\sim$5\% of its original value. But as Figure~\ref{fig:redis-gdpr-overhead} shows, when we relaxed the fsync frequency to once every second, the performance improved by 6$\times$\xspace, i.e., the throughput dropped only to $\sim$30\% of the original. \viheading{Key takeaway}: Even fully supported features like \emph{logging} can cause significant performance overheads. Interestingly, the overheads vary significantly based on how strictly compliance is enforced. \subsection{Encryption} \label{sec-redis-encryption} In lieu of natively extending Redis' limited security model, we incorporate third-party modules for encryption. For data at rest, we use the Linux Unified Key Setup (LUKS)~\cite{luks}, and for data in transit, we set up transport layer security (TLS) using Stunnel~\cite{stunnel}. Figure~\ref{fig:redis-gdpr-overhead} shows that Redis performs at a third of its original throughput when encryption is enabled.
We observed that most of the overhead was due to TLS: the TLS proxies in our setup reduced the average available network bandwidth from 44 Gbps to 4.9 Gbps, thereby affecting both the latency and the throughput of YCSB. While there are alternatives to the LUKS-TLS approach, like key-level encryption, our investigation using the open-source Themis~\cite{themis} cryptographic library showed similar performance overheads. \viheading{Key takeaway}: Retrofitting new features, especially those that do not align with the core design philosophies, will result in excessive performance overheads. \subsection{Timely Deletion} \label{sec-redis-ttl} While GDPR does not mandate a timeline for erasing personal data after a request has been issued, it does specify that such data be removed from everywhere without undue delay. Redis offers three groups of primitives to erase data: (i) {\tt DEL} \& {\tt UNLINK} to remove one or more specified keys immediately, (ii) {\tt EXPIRE} \& {\tt EXPIREAT} to delete a given key after a specified timeout period, and (iii) {\tt FLUSHDB} \& {\tt FLUSHALL} to delete all the keys present in a given database or in all existing databases, respectively. The current mechanisms and policies of Redis present two hindrances. The first issue concerns the lag between the time of request and the time of actual removal. While most of the above commands erase the data proactively, taking a time proportional to the size of the data being removed, the {\tt EXPIRE*} commands take a passive approach. The only way to guarantee the removal of an expired key is for a client to proactively access it. In the absence of this, Redis runs a lazy probabilistic algorithm: once every 100ms, it samples 20 random keys from the set of keys with the expire flag set; if any of these twenty have expired, they are actively deleted; if fewer than 5 keys were deleted, Redis waits until the next iteration, else it repeats the loop immediately. Thus, as the percentage of keys with an associated expiry increases, the probability of their timely deletion decreases. To quantify this delay in erasure, we populate Redis with keys, all of which have an associated expiry time. The time-to-live values are set up such that 20\% of the keys expire in the short term (5 minutes) and 80\% in the long term (5 days). Figure~\ref{fig:delay-expiry} then shows the time Redis took to completely erase the short-term keys once 5 minutes had elapsed. As expected, the time to erasure increases with the database size. For example, when there are 128k keys, the cleanup of expired keys ($\sim$25k of them) took nearly 3 hours. To support stricter compliance, we modify Redis to iterate through the entire list of keys with an associated {\tt EXPIRE}. Then, we re-run the same experiment to verify that all the expired keys are erased with sub-second latency for sizes of up to 1 million keys. The second concern relates to the persistence of deleted data in subsystems beyond the main storage engine. For example, in Redis' AOF persistence model, any deleted data persists in the AOF until its compaction, either via a policy-triggered or a user-induced {\tt BGREWRITEAOF} operation. Though Redis prevents any legitimate access to data that is already deleted, its decision to let such data persist in various subsystems, purely for performance reasons, is antithetical to GDPR's goals, besides exposing it to side-channel attacks. A naive approach to guaranteeing the immediate removal of deleted personal data is to trigger AOF compaction every time a key gets deleted.
However, since GDPR only mandates a reasonable time for cleanup, it may be prudent to configure a periodic (say, hourly) AOF compaction, which in turn would guarantee that no deleted key persists beyond an hour boundary. \viheading{Key takeaway}: Even when a system supports a GDPR feature, system designers should carefully analyze its internal data structures, algorithms, and configuration parameters to gauge the degree of compliance. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{graphs/redis-ttl/ttl.pdf} \caption{\emph{The graph shows the delay in erasing the expired keys (20\% of the total keys in each case) beyond their TTL. In contrast, our GDPR-compliant Redis erases all the expired keys with sub-second latency.}} \vspace{-0.4cm} \label{fig:delay-expiry} \end{figure}
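To illustrate the expiry-time indexing suggested in \sref{sec-discuss-research}, the following minimal Python sketch (ours; it is not Redis code, and all names are hypothetical) maintains a min-heap of (expiry, key) pairs so that an eager sweep visits only keys that have actually expired, in contrast to the random-sampling algorithm described above:
\begin{verbatim}
import heapq
import time

class ExpiryIndexedStore:
    def __init__(self):
        self._data = {}    # key -> (value, expires_at)
        self._heap = []    # (expires_at, key), a min-heap by expiry

    def set(self, key, value, ttl_seconds):
        expires_at = time.time() + ttl_seconds
        self._data[key] = (value, expires_at)
        heapq.heappush(self._heap, (expires_at, key))

    def sweep(self, now=None):
        # Erase every expired key; stale heap entries left behind by
        # overwritten keys are detected and skipped.
        now = time.time() if now is None else now
        removed = []
        while self._heap and self._heap[0][0] <= now:
            expires_at, key = heapq.heappop(self._heap)
            live = self._data.get(key)
            if live is not None and live[1] == expires_at:
                del self._data[key]
                removed.append(key)
        return removed

    def get(self, key):
        entry = self._data.get(key)
        if entry is None or entry[1] <= time.time():
            return None    # expired data is never served
        return entry[0]
\end{verbatim}
A background task invoking sweep() periodically bounds the lag between expiry and erasure by the sweep interval, instead of leaving it dependent on the fraction of keys that carry an expiry.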
\section{Introduction} Turbulent transport governs the spreading of contaminants in the environment, the mixing of chemical constituents in combustion engines or in stellar interiors, accretion in proto-stellar molecular clouds, the acceleration of cosmic rays, and the escape of hot particles from fusion machines. Because of its wide relevance, a fundamental characterization of the dispersive properties of turbulent flows is of practical interest to physicists and engineers. Here we examine the broadly relevant case of the dispersion of Lagrangian tracer particles in statistically homogeneous but not necessarily isotropic turbulence. The Lagrangian viewpoint is particularly suited to the investigation of transport in turbulent fluids. A Lagrangian description of turbulence is based on following the paths of passive tracer particles in a turbulent flow. Single-particle diffusion, as originally addressed by Taylor \cite{taylor:turbdiff}, provides a basic characterization of a flow's transport properties\cite{falkovich_gawedzki_vergassola:review}. A more complete characterization of turbulent transport has conventionally been formed from the relative dispersion of two, three, or four particles \citep{hackl2011multi,sawford2013gaussian,luthi2007lagrangian,toschibodenrev,xu2008evolution,lacasce2008statistics,yeungreview,pumir2000geometry,lin2013lagrangian}. However, in astrophysical environments where the effects of magnetic fields, rotation, or gravity are often significant, the more complex nature of statistically anisotropic or even inhomogeneous nonlinear dynamics warrants additional examination. Dispersion in dynamically anisotropic systems, such as vigorously convecting flows \citep{shearbursts,mazzitelli2014pair,maeder2008convective,brun2011modeling,leprovost2006self} where preferred directions exist and spatially coherent, persistent structures like convective plumes can form, motivates the present consideration of a complementary diagnostic based on a different Lagrangian concept: the convex hull\citep{efron65} of an $n$-particle group ($n\gg 4$). The convex hull is the smallest convex polygon (or, in three dimensions, polyhedron) that encloses a group of particles; two-dimensional convex hulls are pictured in FIG.~\ref{hull}. Convex hull analysis of turbulent dispersion is similar in spirit to following a drop of dye as it spreads in a fluid, or following a puff of smoke as it spreads in the air, both classical fluid dynamics problems \citep{gifsmokepuffs,richardson1926atmospheric,elder1959dispersion}. A large group of tracer particles can be marked, similarly to adding a drop of dye to a fluid flow, so that the same particles can be identified at all later times. Using the convex hull, a size for the marked group of tracer particles can be calculated at each time. The convex hull yields statistical information about a class of Lagrangian particles that is not equivalent to pre-selected tracer particle groups like particle pairs or tetrads. These standard Lagrangian multi-particle statistics represent a fixed and unique structural relationship between specific tracer particles. The evolution of particle-pair structures, expressed e.g. as separation and orientation, is analyzed as the pair of particles is advected by the fluid. In contrast, the convex hull does not establish a unique link between the tracers that generate it, but continuously selects, from a predefined group, the tracer particles that have ventured furthest from the geometrical center of the ensemble.
Unlike particle pairs or tetrads, the particles that constitute the convex hull change dynamically. The definition of the convex hull thus corresponds to a filtering based on the entire dynamical past of each particle in the group. The convex hull captures the extremes of the excursions of a group of particles, information relevant to the non-Gaussian aspects of the dynamics. The behavior of particles that do not exhibit the fastest dispersion is filtered out by the convex hull, allowing a classification of particle dynamics with regard to their dispersion efficiency. In this work we begin to explore this link to extreme value theory, which has the potential to provide new physical insight into turbulent diffusion. The dynamical relation between the Lagrangian particle population forming the convex hull and the bulk ensemble of tracer particles enclosed by it represents another aspect of this diagnostic that could be exploited in investigations of turbulent structure formation. In recent years, convex hull calculations have been used to study diverse topics such as the size of spreading GPS-enabled drifters moving on the surfaces of lakes and rivers \citep{lakes,spencer2014quantifying}, star-forming clusters \citep{schmeja}, forest fires \citep{forestfire}, proteins \citep{li08,millett2013identifying}, and clusters of contaminant particles \citep{dietzel2013numerical}. Studies of the relationships between random walks, anomalous diffusion, extreme statistics, and convex hulls have been motivated by animal home ranges \citep{eco2006,ecoPRL,maj2010,lukovic2013area,vander2013trophic,dumonteil2013spatial,collyer2015habitat}. Convex hulls have also been used to study analytical statistics of Burgers turbulence by analogy with Brownian motion \citep{avellaneda1995statistical,bertoin2001some,chupeau2015convex}. MHD turbulence \citep{busse_hoho,homann2014structures} and turbulence during hydrodynamic convection \citep{schumacher09,schu2008,forster2007parameterization} are areas where statistical analysis of Lagrangian particles has begun to be applied only recently. This work presents new Lagrangian results from three-dimensional direct numerical simulations of turbulent MHD Boussinesq convection, and compares them with turbulent hydrodynamic Boussinesq convection and homogeneous isotropic turbulence. It is structured as follows. In Section \ref{section2} we describe the fluid simulations. In Section \ref{section3} we present standard Lagrangian pair dispersion and discuss the results of these widely-used statistical tools for convective flows. In Section \ref{section4} we describe the convex hull analysis that we perform on groups of many Lagrangian tracer particles. We perform several basic checks on our convex hull calculations. We then compare the dispersion curves obtained from convex hulls of large groups of particles with the expected scalings for particle-pair dispersion. In Section \ref{section5} we demonstrate how the convex hull can be used to examine anisotropy. In Section \ref{section6} we apply extreme value theory, and show that the maximal square extensions of convex hull vertices are well described by a classic extreme value distribution, the Gumbel distribution. In Section \ref{section7} we summarize the results of this validation study and our extreme value statistics, and discuss the potential uses and benefits of convex hull analysis.
\section{Simulations \label{section2}} We investigate three different types of turbulent systems: forced homogeneous isotropic Navier-Stokes turbulence (simulation NST)\citep{angeladiss09}, Boussinesq convection in a neutral fluid (simulation HC), and Boussinesq convection in an electrically conducting fluid (simulation MC)\citep{shearbursts,moll2011}. These simulations are not designed for close comparison, but are produced for a broad exploration of the convex hull analysis. In each of these direct numerical simulations, the equations are solved using a pseudospectral method in a cubic simulation volume with a side of length $2\pi$. The non-dimensional Boussinesq equations for MHD convection in Alfv\'enic units are \begin{eqnarray} \label{realbmhdc} \frac{\partial \vec{\omega} }{\partial t} &-& \nabla \times (\vec{v} \times \vec{\omega} + \vec{j} \times \vec{B}) = \hat{\nu} \nabla^2 \vec{\omega} - \nabla \theta \times \vec{g}_0 \\ \frac{\partial \vec{B} }{\partial t} &-& \nabla \times (\vec{v} \times \vec{B}) = \hat{\eta} \nabla^2 \vec{B} \\ \label{thermeq} \frac{\partial \theta }{\partial t} &+& (\vec{v} \cdot \nabla) \theta = \hat{\kappa} \nabla^2 \theta - (\vec{v} \cdot \nabla) T_0\\ && \nabla \cdot \vec{v}= \nabla \cdot \vec{B}=0 ~~. \end{eqnarray} These equations include the solenoidal velocity field $\vec{v}$, vorticity $\vec{\omega}=\nabla\times\vec{v}$, magnetic field $\vec{B}$, and current $\vec{j}=\nabla\times\vec{B}$. The quantity $\theta$ denotes the temperature fluctuation about a linear mean temperature profile $T_0(z)$, where $z$ is the direction of gravity. In eq. \eqref{thermeq} this mean temperature gradient provides the convective drive of the system. In eq. \eqref{realbmhdc}, the term including the temperature fluctuation $\theta$ is the buoyancy force. The vector $\vec{g}_0$ is a unit vector in the direction of gravity. Three dimensionless parameters appear in the equations: $\hat{\nu}$, $\hat{\eta}$, and $\hat{\kappa}$. They derive from the kinematic viscosity $\nu$, the magnetic diffusivity $\eta$, and the thermal diffusivity $\kappa$. For simulation HC, the magnetic field $\vec{B}$ is set to zero. For simulation NST, both the magnetic field terms and the temperature terms are zero. A fixed time step and a trapezoidal leapfrog method \citep{kurihara1965use} are used for the time integration of simulation NST. The Boussinesq convection simulations HC and MC are integrated in time using a low-storage third-order Runge-Kutta scheme \citep{will80} and an adaptive time step, which allows for better time resolution of the large fluctuations that occur during convection. In this work we discuss turbulent dispersion in an incompressible fluid, where conservation of volume is a primitive concept. A volume of fluid that is convex at an initial time will occupy the same volume after a period of dynamic development but will generally change its shape and lose its convexity. Lagrangian tracer particles that are contained in the initial volume are marked so that they can be followed for the entire time of the simulation. At any later time, the volume of the convex hull of that group of marked particles is generally not conserved. This is illustrated in FIG.~\ref{hull} for a group of particles, and for snapshots taken at three times. The surface area and volume of a convex hull, and their growth in time, are natural quantities to track.
\begin{figure} \resizebox{3.3in}{!}{\includegraphics{figure1.pdf}} \caption{ An illustration of a two-dimensional convex hull (solid line) surrounding a group of particles (solid points) as they disperse in time. The time progression is indicated by arrows, and the particles in each of the three convex hulls shown are the same. \label{hull} } \end{figure} A summary of the fundamental parameters that describe each simulation is given in Table~\ref{simsuma}. In this table, we define the Reynolds number to be $\mathsf{Re= \langle E_v^{1/2} L\rangle /\hat{\nu}} $, where $\mathsf{E_v}= \vec{v}^2/ 2$ is the kinetic energy, and the brackets indicate a time average. We define the characteristic length scale $\mathsf{L}$ based on the largest-scale motions of the system in question. For statistically homogeneous turbulent convection the characteristic length scale is the instantaneous temperature gradient length scale $\mathsf{L}=T_*/\nabla T_0$, where $T_{*}$ is the root-mean-square of temperature fluctuations and $\nabla T_0$ is the constant vertical mean temperature gradient \citep{gibert2006high}. For non-convective statistically homogeneous turbulent flows, the characteristic length scale is a dimensional estimate of the size of the largest eddies, $\mathsf{L}=\mathsf{E_v}^{3/2}/\mathsf{\epsilon_{\mathrm{v}}}$, where $\mathsf{\epsilon_{\mathrm{v}}}=\hat{\nu} \langle \sum_k k^2 \vec{v}^2 \rangle$ is the time-averaged rate of kinetic energy dissipation. The magnetic Reynolds number is defined from the Reynolds number and the magnetic Prandtl number, i.e. $\mathsf{Re_m}=\mathsf{Pr_m Re}$. We measure length in units of the Kolmogorov microscale $\mathsf{\eta_{kol}=(\hat{\nu}^3/\mathsf{\epsilon_{\mathrm{v}}})^{1/4}}$ and also make use of the Kolmogorov time-scale $\mathsf{\tau_{\eta}=(\hat{\nu}/\mathsf{\epsilon_{\mathrm{v}}})^{1/2}}$. The Kolmogorov microscale multiplied by $\mathsf{k_{\mathrm{max}}}$, the highest wavenumber in the simulation, is often used to test whether a simulation is adequately resolved on small spatial scales. In this work all of the simulations fulfill the standard criterion based on the Kolmogorov microscale ($\mathsf{k_{\mathrm{max}} \mathsf{\eta_{kol}} >1.5 }$) for adequate spatial resolution \citep{pope2000turbulent}. The Reynolds numbers in Table~\ref{simsuma} are on the order of $10^3$; both they and the Kolmogorov microscales are in the same range as current studies of moderately turbulent flows \citep[e.g.][]{marino2015helical,bianchi2016evolution}. \begin{table*} \caption{Simulation parameters: grid size $N^3$, total number of particles in the simulation $\mathsf{n_p}$ ($10^6$), Reynolds number $\mathsf{Re}$, magnetic Reynolds number $\mathsf{Re_m}$, Prandtl number $\mathsf{Pr}$, magnetic Prandtl number $\mathsf{Pr_m}$, Rayleigh number $\mathsf{Ra}$, Kolmogorov microscale $\eta_{\mathsf{kol}}$, Kolmogorov time-scale $\tau_{\eta}$, Lagrangian crossing time $\mathsf{LCT}$, average kinetic energy dissipation rate $\epsilon_{\mathrm{v}}$, Alfv\'en ratio $r_{\mathrm{A}}$, average Bolgiano-Obukhov length divided by the height of the simulation volume $\bar{\ell}_{\mathrm{bo}}$, average number of particles per convex hull $\mathsf{n_{pch}}$, number of convex hulls $\mathsf{N_{hulls}}$, initial length scale of convex hull $\ell_{\mathrm{hull}}$.
\label{simsuma} } \begin{tabular}{lcccccccccccccccccccccccccccc} \hline\hline & $N^3$ & $\mathsf{n_p}$($10^6$) & $\mathsf{Re}$ & $\mathsf{Re_m}$ & $\mathsf{Pr}$ & $\mathsf{Pr_m}$ & $\mathsf{Ra}$ ($10^5$) & $\eta_{\mathsf{kol}}$ ($10^{-3}$) & $\tau_{\eta}$($10^{-2}$) & $\mathsf{LCT}$ ($\tau_{\eta})$ &$\epsilon_{\mathrm{v}}$ & $r_{\mathrm{A}}$ & $\bar{\ell}_{\mathrm{bo}}$ & $\mathsf{n_{pch}}$ & $\mathsf{N_{hulls}}$ & $\ell_{\mathrm{hull}}(\eta_{\mathsf{kol}})$ \\ \hline NST & $1024^3$ & 3.2 & 2900 & - & - & - & - & 4.58 & 5.25 & 276 & 0.15 & - & - & 24 & 5000 & 27 \\ \hline HC & $512^3$ & 1.0 & 1500 & - & 2.0 & - & 5 & 12.6 & 3.97 & 340 & 2.54 & - & 0.28 & 214 & 2500 & 22 \\ \hline MC & $512^3$ & 1.0 & 5100 & 7650 & 2.0 & 1.5 & 2.22 & 8.9 & 2.60 & 530 & 4.40 & 1.78 & 0.12 & 48 & 2500 & 30 \\ \hline\hline \end{tabular} \end{table*} Formulation of boundary conditions for simulations of turbulent flows is delicate because boundaries strongly influence the structure and dynamics of the flow. For homogeneous isotropic turbulence, it is standard to employ boundary conditions that are periodic in $x$, $y$, and $z$. These fully periodic boundary conditions are used for simulation NST. For convection simulations the choice of fully periodic boundary conditions (also called homogeneous Rayleigh-B\'enard boundary conditions) allows macroscopic elevator instabilities to form \citep{calzavarini_etal:elevator}. These instabilities destroy the natural pattern of the original turbulent flow field. The convection simulations discussed in this work use quasi-periodic rather than fully periodic boundary conditions. In quasi-periodic boundary conditions the only additional constraint is the explicit suppression of mean flows parallel to gravity, which are removed at each time step. Because our simulations are pseudospectral, the mean flow is straightforwardly isolated as the $z$ component of the $\vec{k}=(0,0,0)$ mode in Fourier space, which corresponds to the volume-averaged velocity in the z-direction. Quasi-periodic boundary conditions combine the conceptual simplicity of statistical homogeneity with a physically natural convective driving of the turbulent flow. These boundary conditions do not enforce a large-scale structuring of the turbulent flow, such as the convection-cell pattern observed when Rayleigh-B\'enard boundary conditions are used. In the quasi-periodic simulations presented in this work, we find no evidence of the macroscopic elevator instability although we follow the evolution of the flow for long times. Quasi-periodic boundary conditions allow for direct comparison with simulations that use fully periodic boundary conditions. In simulation NST the modes $2.5<k<3.5$ are forced using Ornstein-Uhlenbeck processes with a finite time-correlation on the order of the autocorrelation time of the velocity field (for further details of this forcing method, see \citet{eswaran1988examination}). The convection simulations HC and MC are Boussinesq systems driven solely by a constant temperature gradient in the vertical direction. The magnetic field present in simulation MC is generated self-consistently by the flow from a small random seed field through small-scale dynamo action. The system is evolved until a statistically stationary state is reached. 
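In passing, we note that the mean-flow suppression defining the quasi-periodic boundary conditions amounts to a single assignment in a pseudospectral code. A schematic Python sketch (ours; the array layout is an assumption and not that of our solver):
\begin{verbatim}
import numpy as np

def suppress_mean_flow(v_hat):
    # v_hat: complex Fourier coefficients of the velocity field,
    # shape (3, Nx, Ny, Nz), with the k = (0,0,0) mode at [:, 0, 0, 0].
    # The z-component of the k = 0 mode is the volume-averaged
    # velocity parallel to gravity; set it to zero each time step.
    v_hat[2, 0, 0, 0] = 0.0
    return v_hat
\end{verbatim}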
For Boussinesq convection, a length scale that characterizes the scale-dependent importance of convective driving is the Bolgiano-Obukhov length, {$\ell_{\mathrm{bo}}=\mathsf{\epsilon_{\mathrm{v}}^{5/4}/\epsilon_{\mathrm{T}}^{3/4}}$}, where $\mathsf{\epsilon_{\mathrm{ T}}}$ is the average rate of thermal energy dissipation. This length scale separates the convectively driven scales of the flow ($\ell > \ell_{\mathrm{bo}}$) from the range of scales where the temperature fluctuations behave as a passive scalar ($\ell < \ell_{\mathrm{bo}}$). In Table~\ref{simsuma} this length scale is averaged over the simulation time, normalized to the height of the simulation volume, and recorded as $\bar{\ell}_{\mathrm{bo}}$. The table also includes the mean Alfv\'en ratio, $r_{\mathrm{A}} = \langle \mathsf{E_v} /\mathsf{E_b} \rangle$, the time average of the kinetic energy divided by the magnetic energy $\mathsf{E_b}= \vec{B}^2 / 2$. In the present numerical experiments, Navier-Stokes turbulence displays the weakest form of spatial coherence, while Boussinesq magneto-convection exhibits anisotropy with regard to the direction of gravity as well as the occurrence of large-scale spatially coherent structures. Additionally, a dynamical anisotropy arises because of the presence of magnetic fields. The positions of Lagrangian tracer particles are initialized in a homogeneous random distribution at a time when the turbulent flow is in a statistically stationary state. The total number of particles in the simulation, $\mathsf{n_p}$, is listed in Table~\ref{simsuma}. We use at least a million particles for a $512^3$ grid. This is a standard spatial density of tracer particles used to describe homogeneous turbulence \citep[e.g.][]{biferale2005lagrangian,homann2007lagrangian,busse2007statistics}. The Lagrangian statistics we produce have been tested and found to be well-resolved in space and time; we reproduce these statistics with half the particles. At each time step the particle velocities are interpolated from the instantaneous Eulerian velocity field using either a trilinear (for simulations HC and MC) or tricubic (for simulation NST) polynomial interpolation scheme. Particle positions are calculated by numerical integration of the equations of motion using a predictor-corrector method. For the convex hull calculations, the Lagrangian particle data is resampled at intervals of approximately $\tau_{\eta}/10$ for simulations NST and HC. The sampling rate for simulation MC was smaller by a factor of $10$; this was not found to impact the dispersive results examined here. Each simulation is run for a sufficient time that Lagrangian particle pairs separate, on average, by at least the length of the simulation volume. We call this time the Lagrangian crossing time, $\mathsf{LCT}$; it is listed in the table in units of the Kolmogorov time scale. Lagrangian particle-pair dispersion statistics exhibit a diffusive trend near this time, since the velocity fluctuations over this time and distance exhibit low correlation. \section{Pair Dispersion of Lagrangian tracer particles during homogeneous Boussinesq convection}\label{section3} This section presents results for particle-pair dispersion during homogeneous Boussinesq convection for comparison with the many-particle dispersion calculated from a convex hull analysis. For an introduction to the rich field of Lagrangian particle-pair dispersion, we refer the reader to the review of \citet{salazar2009two}.
In addition to this review, several more recent works\citep{bourgoin:disppheno, thalabard_krstulovic_bec:ctrwdisppheno, bitane_homann_bec:disptimescale} propose new dispersion phenomenologies based on locally ballistic dynamics, an alternative to the classical idea of turbulent diffusion exhibiting a scale-dependent diffusivity \citep{richardson1926atmospheric}. Here we briefly recall the basic argument for the scaling regimes of pair dispersion. For times short compared with the autocorrelation time of the Lagrangian velocities, the relative velocity of the particles is approximately constant. The mean-squared separation of a pair of Lagrangian particles is therefore expected to grow quadratically with time at short times. This is called the \emph{ballistic} or \emph{Batchelor} regime. The extent of the ballistic regime is known to depend on the \emph{initial} separation of the particle pair, $\Delta_0$, through the root-mean-square (RMS) velocity fluctuation $v_{\Delta_0}$ at this scale. Recent theoretical\cite{bourgoin:disppheno,thalabard_krstulovic_bec:ctrwdisppheno,bitane_homann_bec:disptimescale} and experimental\cite{ouellette_etal:pairdispexpmodels,ouellette_etal:pairdispexp} works make use of a key time scale linked to $\Delta_0$, the initial nonlinear turnover time $\tau_0 \equiv \Delta_0/v_{\Delta_0}$. In the inertial range of Navier-Stokes turbulence this initial turnover time can be estimated as $\tau_0 \sim v_{\Delta_0}^2/(2\epsilon_{\mathrm{v}})$. For times much larger than the autocorrelation time of the Lagrangian velocity, the velocities of a pair of Lagrangian particles are statistically independent. The mean-squared separation of a pair of Lagrangian particles is then expected to grow linearly with time. This is typically called the \emph{diffusive} regime. In between the ballistic regime and the diffusive regime is a period of time where the mean-squared separation of particle pairs can grow cubically with time. This is typically called the \emph{Richardson-Obukhov} regime. The temporal separation of the ballistic and Richardson-Obukhov regimes \cite{bitane_homann_bec:disptimescale} can be estimated by $\tau_0$. Achieving a clear Richardson-Obukhov regime in direct numerical simulations depends on the initial separation of particles as well as the size of the inertial range, and is the subject of ongoing research for Navier-Stokes turbulence. For this reason, and due to the limited extent of the inertial scaling range that is expected for the Reynolds numbers we obtain, we make no claims of observing a Richardson-Obukhov regime in the present convection simulations. We compute the initial turnover time via a one-dimensional Eulerian kinetic energy spectrum as $\tau_0=(k_0^3 \mathsf{E_v}(k_0))^{-1/2}$ with $k_0=2\pi/\Delta_0$. Although the moderate Reynolds numbers of the present simulations are far from values where a true inertial range, devoid of influences from the largest or smallest scales of the flow, could be realized, for descriptive convenience we will apply this term to the interval of time scales between the ballistic and the diffusive regimes. FIG.~\ref{pairdispfig} illustrates the Lagrangian particle-pair dispersion for simulations HC and MC, both driven by homogeneous Boussinesq convection characterized by a large Bolgiano-Obukhov length. In this figure, thin solid lines indicate Batchelor scaling $\sim t^2$ and diffusive scaling $\sim t$, around the shortest and longest timescales, respectively.
For both cases, HC and MC, we have $\Delta_0\simeq \eta$, and thus $\tau_0\simeq \tau_\eta$. Indeed, both curves deviate from $t^2$-scaling after $\tau_0$. For intermediate times $10 \tau_0 \lesssim (t-t_0) \lesssim 100 \tau_0$, they display a phase of fast separation which eventually levels off toward diffusive dispersion. The onset of fast pair separation in convection at approximately $10\tau_0$ is delayed compared to Navier-Stokes turbulence, where it has been observed\cite{bitane_homann_bec:disptimescale} to begin at $(t-t_0)\simeq \tau_0$. In a simulation of convection an anisotropy exists between the direction of the mean temperature gradient and the directions perpendicular to it. The separation of particle pairs evolves differently in these two directions; it can also evolve differently depending on whether the pair of particles is initially separated in the direction of the mean temperature gradient or perpendicular to it. During Boussinesq convection with large Bolgiano-Obukhov length, the Batchelor regime for pair separations looks similar to randomly forced hydrodynamic turbulence driven at the large scales, as shown by e.g. \citet{sawfordreview,yeung2004relative}. During the diffusive regime, large-scale flow structures associated with large Bolgiano-Obukhov length Boussinesq convection clearly affect the pair dispersion curve. The dispersion curve does not look as smooth as the result obtained from randomly forced hydrodynamic turbulence driven at the large scales. This is not surprising, because the separation of the particle pairs has reached sizes comparable to the large-scale convective plumes. We note that although our convection simulations use quasi-periodic boundary conditions, FIG.~\ref{pairdispfig} is not qualitatively different from figure 2 of \citet{schu2008}, which presents Lagrangian dispersion during Rayleigh-B\'enard convection. For pair dispersion in simulations HC and MC, extensive averaging over different flow realizations would be necessary to achieve a perfectly smooth and universal result, free from the influence of intermittent plumes or large-scale magnetic structures. \begin{figure} \resizebox{3.375in}{!}{\includegraphics[angle=90]{figure2a.pdf}}\resizebox{3.375in}{!}{\includegraphics[angle=90]{figure2b.pdf}} \caption{Mean-square of the separation in the direction of gravity for pairs of Lagrangian tracer particles dispersing in the hydrodynamic convection simulation HC (a) and the MHD convection simulation MC (b). Particle pairs are initially separated in the direction of gravity by $\Delta_0=\eta_{\mathsf{kol}}$ (HC) and $\Delta_0=1.4\eta_{\mathsf{kol}}$ (MC). Thin solid lines indicate Batchelor scaling $\sim t^2$ (short timescales) and diffusive scaling $\sim t$ (long timescales). Time and length are given in units of the initial turnover time $\tau_{0}$ and the Kolmogorov microscale $\eta_{\mathsf{kol}}$, respectively. \label{pairdispfig} } \end{figure} \section{Convex Hull Analysis of Dispersing Tracer Particles \label{section4}} \subsection{Description of the convex hull calculations} We seed tracer particles throughout the simulation volume, producing a fixed density of tracer particles. In simulations NST, HC, and MC the number of tracer particles and their density is based on the number needed to produce well-resolved Lagrangian pair dispersion statistics. A convex hull analysis could potentially make use of a significantly higher density of tracer particles.
To calculate a convex hull, we select and mark a group of Lagrangian tracer particles initially contained in a small cubic sub-volume of our simulation. The initial length scale of the group of particles, $\ell_{\mathrm{hull}}$, is calculated as the side-length of the initial cubic sub-volume; in the limit where the group consists of only two particles, $\ell_{\mathrm{hull}}$ would be equivalent to $\Delta_0$, the initial separation of a particle pair. For the density of tracer particles in simulations NST, HC, and MC, $\ell_{\mathrm{hull}}$ varies between $20$ and $30$ $\eta_{\mathsf{kol}}$. The dependence of convex hull statistics on the initial length scale and density of the particle group is examined in Appendix \ref{appendixsizeden}. Selection of each particle group based on the initial position of the tracer particles yields groups that contain nearly the same number of particles, with random variation of approximately 20\% owing to the homogeneous random initialization of the Lagrangian tracer particles. The average number of particles in a group, $\mathsf{n_{pch}}$, listed for simulations NST, HC, and MC in Table~\ref{simsuma}, is between 24 and 214. We follow the $\mathsf{N_{hulls}}$ convex hulls (cf.~Table~\ref{simsuma}) of the marked particle groups for the span of the simulation. The required calculation of the hulls at each time step is performed using the standard QuickHull algorithm \citep{Barber96thequickhull,2013barber}, implemented in the function \emph{convhulln} in the package \emph{geometry}, publicly available for R from the R Project for Statistical Computing \citep{ihaka1996r,cranr}. The surface area and volume of the convex hulls are obtained based on a Delaunay triangulation of the hull vertices. We stop tracking the convex hull of a group of particles when the Lagrangian crossing time, $\mathsf{LCT}$, is reached, to avoid the possibility of numerical artifacts due to the periodicity of the simulation volume. The initial positions of particle groups could be chosen in regions of special interest in the flow, but in this work we restrict ourselves to a homogeneous initial distribution of the groups. For each simulation the ensemble of particle groups is initially selected to completely fill a horizontal slab. The total number of groups of particles that we analyze using convex hulls is listed as $\mathsf{N_{hulls}}$ in Table~\ref{simsuma}. This large number of convex hulls is more than is required for statistical convergence of average quantities, but allows us to capture some statistically rare flow features. As any pair of particles separates in a turbulent flow, the particles move with the small-scale fluctuations of the velocity field. The distance between the two particles increases monotonically in time on average, but any specific pair of particles will produce an erratic, noisy signal. If a convex hull is defined by a very small group of particles, then most of the particles define the surface of the convex hull. These particles on the surface of the convex hull are called \emph{vertices} of the convex hull. In the situation where most of the particles are vertices, the convex hull, like the particle-pair distance, shrinks or grows erratically as its component tracer particles move in the turbulent flow. The limit where groups contain only small numbers of particles is of little physical interest for convex hull analysis, because particle pairs or particle tetrahedra already provide useful dispersion information.
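As a concrete point of reference for this construction, the following minimal Python sketch (ours; the original analysis used the R function \emph{convhulln}, while SciPy binds the same underlying Qhull implementation of QuickHull) computes the hull, its surface area, and its volume for one particle group. The function name and the random test group are purely illustrative:

\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull  # wraps the Qhull library

def hull_diagnostics(points):
    """Convex hull of one group of tracer particles.

    points : (n, 3) array of particle positions at one time step.
    Returns the surface area S, the volume V, and the indices of
    the particles that are vertices of the hull.
    """
    hull = ConvexHull(points)
    # in three dimensions, hull.area is the surface area
    # and hull.volume the enclosed volume
    return hull.area, hull.volume, hull.vertices

# example: a group of 100 randomly placed particles in a unit cube
rng = np.random.default_rng(0)
S, V, vertices = hull_diagnostics(rng.random((100, 3)))
\end{verbatim}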
In simulations NST, HC, and MC we examine the relative dynamics of larger groups of particles. If a particle that is a vertex of the convex hull moves inward toward the center of the larger group of particles, it is unlikely that it will remain a vertex because of the requirement of convexity. It can become an \emph{interior particle} of the convex hull. Other particles may continue to move away from the group, and the convex hull will typically continue to expand smoothly. The particles that constitute the group of vertices of the convex hull can be exchanged frequently. This distinguishes the convex hull from more common Lagrangian diagnostics such as particle pairs or particle tetrahedra. For statistics constructed from particle pairs or particle tetrahedra, the same particles define the size at each point in time. The convex hull also intrinsically links a macroscopic length scale, the size of the convex hull, with the position of the convex hull's geometrical center. Over this length scale, the convex hull filters out tracer particles which disperse more slowly than its vertices, selecting the most efficiently dispersing members of the particle group. \subsection{Convex hull description of a group of tracer particles} A convex hull is defined by its vertices; these are the particles that dispersed the fastest in a given group of particles. Potentially this could decouple the convex hull from the enclosed particles in two ways. The number of vertices of the convex hull could become extremely small, or the majority of interior particles could detach from the convex hull vertices and clump somewhere in a subregion inside the hull. In this section we devise simple checks for these two scenarios. If the particles contained in the convex hull do not spread throughout the space inside of the convex hull evenly as it grows, the convex hull will fail to characterize the full group of particles. We use the average difference between the geometric center of the convex hull, $\vec{c}_{\mathsf{vtx}}$, and the virtual center of mass of the interior particles contained in the convex hull, $\vec{c}_{\mathsf{int}}$, as an indicator of decoupling. This difference will not be zero, because the particles that make up the convex hull will never fill the space perfectly evenly. Since this difference will grow in time as the particles disperse, we compare it to the maximal extent of the convex hull at any point in time, defined by $d=\left(d_x^2+d_y^2+d_z^2 \right)^{1/2}$, where $d_x$ is the extent of the convex hull projected on the $x$-direction, and $d_y$ and $d_z$ are defined similarly. FIG.~\ref{newclumpgraph1}(a) shows the average difference between the centers normalized by the convex hull's maximal extent, $\delta c=\langle|\vec{c}_{\mathsf{vtx}}-\vec{c}_{\mathsf{int}}|/d\rangle$. This normalized average difference between centers does not become larger than 40\% during an initial phase ($0.2$ LCT $\lesssim t\lesssim 0.4$ LCT$)$ and during the subsequent phase converges toward a quasi-constant level ranging between 15\% and 20\%, which is less than the standard deviation of the coordinates of the group of tracer particles for each simulation. In FIG.~\ref{newclumpgraph1}(a), time is given in terms of the Lagrangian crossing time, LCT. The differences in the initial separation of the particle pairs in FIG.
\ref{pairdispfig} ($\Delta_0\simeq \eta_{\mathsf{kol}}$) and the mean initial length scale of the convex hulls ($\ell_\mathrm{hull}\simeq 20-30 \eta_{\mathsf{kol}}$) generate dispersion curves that reflect different ranges of temporal and spatial scales of the underlying turbulence. Because the observable dispersion regimes and their duration can change as a consequence of different $\Delta_0$ or $\ell_\mathrm{hull}$, a direct comparison of both figures is difficult. In Navier-Stokes turbulence, the initial turnover time, $\tau_0$, has been shown\cite{bourgoin:disppheno,bitane_homann_bec:disptimescale} to signal the transition from the ballistic to the inertial range of dispersion, and thus to provide a reference scale of dispersion. We therefore use the initial turnover time $\tau_0$ to normalize the dispersion of particles contained in a convex hull with initial length scale $\ell_\mathrm{hull}$. It is, however, not expected that this normalization can eliminate the physical differences between isotropic Navier-Stokes and anisotropic convective systems. Moreover, it is not clear whether the universality of $\tau_0$ extends beyond the transition from ballistic to Richardson-like dispersion. Close examination of FIGs.~\ref{pairdispfig} and \ref{newclumpgraph1} shows that the initial phase, during which the average difference in the centers of convex hull and interior particles increases to a maximal value, extends into the fast separation regime of particle pair dispersion. The subsequent phase of decreasing $\delta c$ corresponds to separation scales near to and in the diffusive regime. These signatures, as well as the sharp transients evident between phases of the evolution of $\delta c$ in FIG.~\ref{newclumpgraph1}, indicate a potential utility of the convex hull for studies of the turbulent inertial range. FIG.~\ref{newclumpgraph2} reveals the distribution of the group of particles within the convex hull in the $z$-direction. Here the $z$-direction has been selected because it is the direction of the gravitational anisotropy in the convective cases; however, for one-dimensional cuts in directions other than the $z$-direction, similar curves result. The ratio plotted in FIG.~\ref{newclumpgraph2} is the standard deviation of the particle positions, $\sigma_\text{p,z}$, divided by the extent of the hull in the $z$-direction. This ratio would be small if many of the tracer particles were to form a clump rather than spreading throughout the interior of the convex hull. For each of the three simulations we study, however, this quantity quickly comes to a plateau. After $0.1$ to $0.2$ LCT, i.e. once the particles probe scales approaching the inertial range, the ratio no longer decreases substantially. \begin{figure} \resizebox{3.375in}{!}{\includegraphics[angle=90,width=0.5\textwidth]{figure4a.pdf}}\resizebox{3.375in}{!}{\includegraphics[angle=90,width=0.5\textwidth]{figure4b.pdf}} \caption{ (a) The average distance between the geometric centers of the convex hulls, $\vec{c}_\text{vtx}$, and the virtual center of mass of their interior particles, $\vec{c}_\text{int}$, divided by the convex hull size, $d=(d_{x}^2+d_{y}^2+d_{z}^2)^{1/2}$. (b) Data as shown in panel (a), shifted vertically to a common initial value. Solid line: NST, dotted line: HC, dashed line: MC. Averaging is performed over convex hulls calculated for each group of Lagrangian tracer particles and at each time. Time is given in units of (a) the Lagrangian crossing time, LCT, and (b) the initial turnover time, $\tau_0$.
\label{newclumpgraph1} } \end{figure} \begin{figure} \resizebox{3.375in}{!}{\includegraphics[angle=90]{figure5.pdf}} \caption{ The standard deviation $\sigma_\text{p,z}$ of the $z$ coordinates of interior particles of a convex hull divided by its extent along the $z$-direction, $d_{z}$. Averaging is performed over all convex hulls in each simulation. Time is given in units of the Lagrangian crossing time, LCT. \label{newclumpgraph2} } \end{figure} In all simulations the average number of vertices of the convex hulls decreases only mildly with time; this decrease is on the order of 10\% before the Lagrangian crossing time is reached. After the short initial phase up to $\tau_0$, the decrease in the number of convex hull vertices happens very gradually. We conclude that on average in simulations NST, HC, and MC, the convex hulls and their interior particles do not detach from each other in a way that would render the concept of the convex hull inappropriate for characterizing a pre-selected group of many Lagrangian particles. Based on the measurements presented, a clear distinction can be made between the diffusive regime and the inertial range. The correlated inertial-range velocity fluctuations lead to changes in the relationship of convex hull vertices and interior particles. This trend is reversed as soon as the diffusive regime is reached, largely neutralizing the differences between interior particles and the convex hull vertices that developed on inertial scales. This susceptibility of the convex hull to the different characteristic regimes of ballistic, inertial-range, and diffusive turbulent transport renders this diagnostic attractive for future Lagrangian investigations of turbulence. Apart from the ability of the convex hull to indicate different regimes of turbulent transport, the tests above also yield information about the dynamics of the turbulent velocity field. The average displacement shown in FIG.~\ref{newclumpgraph1} quantifies anisotropic differences between the dynamics of the most efficiently dispersing convex hull vertices and the slower dispersing interior particles. On the spatial scale set by the convex hull, an anisotropic difference of the velocity fluctuations responsible for vertex and interior tracer transport is observable as a relative displacement of the centers of the group of interior particles and of those that define the convex hull. FIG.~\ref{newclumpgraph1}(b), which is a different representation of the data shown in FIG.~\ref{newclumpgraph1}(a), demonstrates this point. All three systems have slightly different initial Lagrangian tracer configurations and, consequently, the corresponding initial values of $\delta c$ differ by up to $8$\% (MC), while NST and HC have an initial difference of approximately $1$\%. Shifting the $\delta c$-curves of all three systems to a common initial level allows a qualitative comparison, although this simple approach cannot eliminate all dynamical differences caused by varying initial tracer separations. The increase observed for $\delta c$ is driven by the particles that are part of the surface of the convex hull, since they determine the geometric center of the convex hull. The relative motion of particles contained in the interior of the hull is driven by velocity fluctuations on scales smaller than the convex hull size.
Particles at opposite locations on the surface of the convex hull will experience velocity differences on the scale of the convex hull and therefore tend to move apart from each other more rapidly than particles in the interior of the hull, which in turn determine the center of mass of the convex hull. Thus on time scales of $(t-t_0)\lessapprox\tau_0$ a significant displacement between the geometric center and the center of mass of a group of particles can occur, evidenced by the rapid growth of $\delta c$. The relative displacement of the geometric center and the center of mass continues to grow at a slower rate for $(t-t_0)> \tau_0$. This can be attributed to a finite time correlation of the velocity fluctuations on the scale of the convex hull. In addition, as time evolves and the hull grows in size, particles in the interior of the convex hull will also experience increasing velocity fluctuations, and thus some interior particles may become particles on the surface of the hull and, vice versa, particles on the surface of the convex hull can move into its interior due to engulfment by other particles. This process eventually leads to a decrease in the relative displacement of the geometric center and the center of mass as the diffusive regime is approached. A noticeable difference between the NST configuration and the convective systems HC and MC is the presence of a plateau for the NST case between $3\tau_0$ and $16\tau_0$, while for HC and MC, $\delta c$ continues to grow during this time. The different behavior may be caused by anisotropy in the convective flows HC and MC, which sustains longer temporal correlations of the velocity fluctuations in preferential directions; this does not occur for the statistically isotropic Navier-Stokes case. The quantity shown in FIG.~\ref{newclumpgraph2} measures the diffusive character of the motion of the interior particles, rather than dynamical anisotropy. This measure exhibits a rapid transient around $\tau_0$ from initial levels towards a first, roughly constant plateau throughout the inertial range, and finally approaches the asymptotic diffusive value around $(t-t_0) \simeq 100\tau_0$. Here, the inherently hydrodynamic simulations NST and HC display less variation throughout the inertial range than system MC, which exhibits additional flow structuring due to the presence of magnetic field fluctuations. This brief interpretation allows for extensions, for example focusing on vertex dynamics or on a detailed direction-specific analysis that introduces spatial projections of the hulls to narrow down the structure of the underlying anisotropic fluctuations. This will be the subject of future work. \subsection{Multi-particle dispersion using convex hull analysis \label{secmaxray}} Because ballistic and diffusive ranges for particle pair dispersion are typically discussed in terms of length squared, we employ analogous measures for a group of particles and convex hulls. This is intended to make comparison with dispersion curves as simple and direct as possible. We calculate a maximal ray $r$ internal to a convex hull defined by a group of particles $G$: \begin{eqnarray}\label{maxraydef} r = \max_{i,j \in G} \sqrt{(x_i - x_j)^2+(y_i - y_j)^2+(z_i - z_j)^2} \end{eqnarray} By definition, the particles $i,j$ that contribute to the maximum in this definition are always vertices of the convex hull. If the group of particles densely filled a sphere, the convex hull would be the surface of the sphere, and the maximal ray would be the diameter of the sphere.
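Because the maximizing pair in eq.~\eqref{maxraydef} always consists of hull vertices, the maximal ray can be computed from the vertices alone rather than from all particle pairs. A minimal sketch (ours, continuing the illustrative SciPy-based helpers above, not the original analysis pipeline) is:

\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist

def maximal_ray(points):
    """Maximal ray r = max_{i,j} |x_i - x_j| of one particle group.

    The maximizing pair is always a pair of convex hull vertices,
    so it suffices to search the vertices instead of all pairs.
    """
    vertices = points[ConvexHull(points).vertices]
    return pdist(vertices).max()
\end{verbatim}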
Because the maximal ray reduces to the diameter in this spherical limit, it is sometimes also called the diameter of a convex hull. However, in this work we examine anisotropic systems where the convex hull of a group of particles is typically not close to spherical; we therefore opt for the more accurate term, maximal ray. The susceptibility of the maximal ray's orientation to deformations of the convex hull can be parameterized by the RMS value of the vertex distance from the hull's geometrical center normalized by the average distance to the center (averages taken over the hull vertices), $Q=\sigma_{r_\text{vtx}}/\overline{r_\text{vtx}}$. If $Q \approx 0 $, i.e. the convex hull is close to spherical, the maximal ray can change its direction by an arbitrary amount and much faster than the autocorrelation times of the underlying turbulent fluctuations would suggest. In this case, small fluctuations of the hull radius, which can occur due to uncorrelated small-scale fluctuations, will lead to rapid changes of orientation of the maximal ray. The maximal ray is thus highly susceptible to anisotropic deformations of the convex hull. In contrast, a significant anisotropic deformation of the convex hull, $Q\gg 1$, acts like a threshold for directional variation of the maximal ray and stabilizes its orientation. This subsection focuses on quantities specific to the convex hull and their relation to the classical Lagrangian mean-square pair separation, $\langle(\Delta-\Delta_0)^2\rangle$. We average the square of the maximal ray over all groups of particles; the results for each simulation are shown in FIG.~\ref{dispgraph}(a). This figure demonstrates that the maximal ray, although it is not tied to the same particle pair in each tracer group, asymptotically approaches a ballistic signature $\sim t^2$ up to approximately $\tau_0$, and an asymptotic diffusive regime $\sim t$ at long times, for all systems considered. The data shown for MC does not attain the same temporal resolution as for systems NST and HC due to a larger time step, but penetrates further into the diffusive regime. Additional length-scale estimates can be obtained by taking appropriate powers of the normalized surface area, $r_\text{S}=(S/(4\pi))^{1/2}$, and the volume, $r_\text{V}=(3V/(4\pi))^{1/3}$, of the convex hulls. Averaging the length scales produced by many different particle groups reveals dispersive behaviors that also tend to obey the ballistic and diffusive scaling laws. A comparison of dispersion curves produced from the surface area and volume of simulation NST is shown in FIG.~\ref{dispgraph}(b). Similar to Lagrangian pair dispersion, the expected asymptotic scaling laws for ballistic and diffusive regimes are approached by the surface- and volume-based distance approximations. However, they hold over a shorter period of time than those shown in FIG.~\ref{pairdispfig}. Although FIG.~\ref{dispgraph}(b) shows dispersion curves only for simulation NST, similar results are found for simulations HC and MC. A Richardson-Obukhov-like regime is not observed. Because achieving a clear Richardson-Obukhov regime in direct numerical simulations depends on the initial separation of particles as well as the size of the inertial range, a Richardson-Obukhov regime is not expected in our simulations. Particle filtering, an inherent property of the selection criterion of convex hull vertices, may also contribute to the lack of a clear Richardson-Obukhov regime resulting from convex hull analysis of dispersion.
During early dispersion, the vertices of a convex hull tend to be particles that move away from the center of the hull most rapidly in the direction radially outward from the center of the particle group; this may explain the quasi-ballistic signature before approximately $16 \tau_0$ (cf. FIG.~\ref{newclumpgraph1}(b)). As noted by \citet{bianchi2016evolution}, although there is a conceptual connection between many-particle groups and particle pairs, many-particle groups provide different information when measuring dispersion scalings. There is a fundamental difference between the maximal ray and the surface- or volume-based length approximations that becomes particularly important with regard to deformations of the convex hull: the maximal ray by definition runs along the direction of maximum extent of the convex hull. In contrast, the other two quantities yield averaged and isotropized approximations of the length scale probed by the hull, i.e. the radius of a reference sphere of the same surface area or volume. Spherical geometry is a natural first-order approximation of a convex hull or, more precisely, of the convex polyhedron that we use as its numerical representation, since convexity implies that the hull has no corners pointing inward. This constraint severely restricts the complexity of the hull's surface structure, since any such corner vertex would turn into an interior point enclosed by the hull. This results in an object which can mainly be deformed by flattening of the inscribed spheroid along some direction perpendicular to the maximal ray. The convex hull is not material and therefore is not constrained by volume conservation in incompressible flow. Although the possible length definitions do not show large qualitative differences compared to the maximal ray, their behavior relative to each other reflects the different responses of hull area and volume to deformations of the convex hull. This will be exploited in Section \ref{section5}. \begin{figure}[H] \resizebox{3.5in}{!}{\includegraphics[angle=90]{figure6a.pdf}} \resizebox{3.5in}{!}{\includegraphics[angle=90]{figure6b.pdf}} \caption{(a) Evolution of the mean-square maximal ray $r$ of the convex hulls in all three systems. (b)~Evolution of mean-square maximal ray $X=r$ (solid curve), of the length based on the hull's surface area, $X=(S/4\pi)^{1/2}$ (dot-dash), and of the length based on the hull's volume, $X=(3V/4\pi)^{1/3}$ (dash-3dot), for simulation NST, thin solid lines as in FIG.~\ref{pairdispfig}. Brackets indicate averaging over all groups of tracer particles in a horizontal slab in each simulation volume. The symbols $r_0$, $S_0$, and $V_0$ denote the respective initial values.\label{dispgraph} } \end{figure} \section{Results: Anisotropic dynamics of convex hull vertices \label{section5}} The relationship between the surface area, $S$, and volume, $V$, of a convex hull reveals the anisotropy of vertex transport in a turbulent flow, which is of particular interest during convection, i.e. in the presence of coherent velocity structures. We introduce the non-dimensional ratio $S/V^{2/3}$ as a direct way to quantify anisotropy. Because a sphere minimizes the amount of surface area for a given volume, an absolute lower bound of $4\pi / (4\pi/3)^{2/3}\approx 4.8$ exists for this non-dimensional surface-volume ratio. An anisotropic convex hull, e.g. a cigar-shaped or a pancake-shaped hull, will have a higher surface-volume ratio, so the ratio gives an impression of how non-spherical the current state of the hull is.
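These reference values are easy to verify numerically. The short check below (ours, again using SciPy purely for illustration) recovers the spherical lower bound of approximately $4.8$ and the value of $6$ attained by a cube:

\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

def surface_volume_ratio(points):
    """Non-dimensional anisotropy measure S / V^(2/3) of a hull."""
    hull = ConvexHull(points)
    return hull.area / hull.volume ** (2.0 / 3.0)

# lower bound attained by a sphere: 4*pi / (4*pi/3)**(2/3) ~ 4.84
print(4 * np.pi / (4 * np.pi / 3) ** (2.0 / 3.0))

# a cube gives 6 a^2 / (a^3)^(2/3) = 6; the hull of the eight
# corners of the unit cube reproduces this value
corners = np.array([[i, j, k] for i in (0, 1)
                    for j in (0, 1) for k in (0, 1)], dtype=float)
print(surface_volume_ratio(corners))  # -> 6.0
\end{verbatim}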
The ratio cannot differentiate between prolate (cigar-shaped) and oblate (pancake-shaped) convex hulls, because it approaches infinity both in the limit of zero pancake thickness and in the limit of infinite cigar length. Higher values indicate a basic level of anisotropic deformation. FIG.~\ref{hullvol}(a) shows the time evolution of the surface-volume ratio, averaged over all convex hulls in each simulation. Because the particle groups consist of small numbers of particles which are randomly distributed, they are not initially perfectly isotropic and do not evenly fill the cubic initial volumes; the resulting convex hulls form neither perfect cubes nor perfect spheres. Thus the surface-volume ratio initially exhibits an average value of approximately 5.6, a low value that lies between the values for perfectly spherical and perfectly cubical volumes. In all simulations, the surface-volume ratio begins to increase around $t=\tau_\eta$, indicating that the convex hulls typically become stretched, \emph{i.e.} anisotropic, as their particles start to disperse due to turbulent fluctuations. In the Navier-Stokes case (NST) no global anisotropy exists in the flow. As expected, the average surface-volume ratio remains relatively low throughout the simulation, reaching its maximal value around 10 $\tau_\eta$. At long times, the average surface-volume ratio returns to approximately its initial value as uncorrelated particle motion begins to eliminate anisotropic deformations of the convex hull. The changes in the surface-volume ratio also slow, and at long times it approaches a flat regime related to the diffusive trend observed in FIG.~\ref{dispgraph}. In the case of hydrodynamic Boussinesq convection (HC), the mean temperature gradient introduces a preferential direction. We would thus naively expect stronger stretching of the hulls in this direction. However, FIG.~\ref{hullvol} shows that this does not take place for the convex hulls we followed; for times greater than $\tau_\eta$ only a slight increase occurs, followed by a plateau phase up to $30 \tau_\eta$. Subsequently, the average surface-volume ratio quickly decreases below its initial value. The convective plumes in simulation HC are large and diffuse, as reflected by the large Bolgiano-Obukhov length $\ell_{\mathrm{bo}}$, the smallest scale on which the cascade of thermal fluctuations is driven by buoyancy \citep{biskampbook}. This large Bolgiano-Obukhov length indicates that smaller-scale turbulent dynamics are not driven by the anisotropic influence of buoyancy. The convex hulls do not tend to become strongly anisotropic, because the length scale of the anisotropic convection differs considerably from the scale of the convex hulls examined. The Reynolds number of HC, which is approximately $50\%$ lower than that of the NST system, also explains why the HC simulation exhibits the lowest level of convex-hull anisotropy. A different behavior is observed for the magnetohydrodynamic convection (MC) simulation, since larger-scale magnetic fluctuations have a strong impact on small-scale dynamics (in contrast to a large-scale velocity, there exists no frame of reference that eliminates the magnetic field); consequently, far higher surface-volume ratios are attained than in the other two cases.
In this simulation the large-scale magnetic field fluctuations result in strong local anisotropy of the small-scale velocity fluctuations \citep{grappin2010scaling,verdini2015anisotropy,matthaeus1996anisotropic,cho2000anisotropy,chandran2008strong,montgomery1981anisotropic,goldreich_sridhar:gs2,boldyrev:bmodelII}; the consequence is considerable stretching of the convex hulls. The mean alone does not characterize the full information that the convex hull analysis can provide about anisotropy in each simulation. The shape of the probability distribution of the surface-volume ratio yields a more comprehensive picture. If all convex hulls in a simulation were perfect spheres, the distribution of the surface-volume ratio would be a delta function. However, the distributions show a strong dependence on the type of turbulence, as indicated by the values of the distribution mean, $\mu$, and standard deviation, $\sigma$, given in the caption of FIG.~\ref{hullvol}. In the hydrodynamic convection case the distribution is the narrowest, with the lowest mean indicating the lowest level of anisotropy, followed by NST and MC. The significant hull anisotropy observed for system MC is a clear indication of the additional anisotropy imposed by the slowly evolving large-scale magnetic field fluctuations on the smaller-scale velocity fluctuations. These results are not surprising and are consistent with the data given in FIG.~\ref{hullvol}(a). In addition, FIG.~\ref{hullvol}(b) shows the centered and normalized distributions of the surface-volume ratio for each of our three simulations after each set of convex hulls has evolved for 20 $\tau_\eta$. All distributions collapse onto a positively skewed functional shape, suggesting a general characteristic of convex hull deformation common to all three turbulent systems. \begin{figure} \resizebox{3.375in}{!}{\includegraphics[angle=90]{figure8a.pdf}}\resizebox{3.375in}{!}{\includegraphics[angle=90]{figure8b.pdf}} \caption{The time evolution of the convex hull's surface area, $S$, divided by the volume, $V$, raised to the power $2/3$. In (a) the evolution of this non-dimensional surface-volume ratio is averaged over all convex hulls in each simulation. In (b) the probability distribution function, $P$, of $(S/V^{2/3}-\mu)/\sigma$ is shown at time 20 $\tau_\eta$, with $\mu$ denoting the mean and $\sigma$ the standard deviation of the respective distribution. The tuples $(\mu,\sigma)$ are NST:~(6.8,0.7), HC:~(6.0,0.3), MC:~(8.5,1.6). \label{hullvol} } \end{figure} The surface-volume ratio varies spatially in each simulation. The time evolution of this ratio for a single convex hull in simulation MC is illustrated in FIG.~\ref{anipdf}. At early times, the surface-volume ratio for this individual hull grows to considerably exceed the mean, indicating that this hull is more stretched than the average convex hull of this ensemble. This surface-volume ratio also exhibits rapid changes in time. For example, during the period between approximately 5 $\tau_\eta$ and 10 $\tau_\eta$ this hull goes from a form that is more anisotropic than average to a considerably less anisotropic form. \begin{figure} \resizebox{3.375in}{3.00in}{\includegraphics[angle=90]{figure9a.pdf}}\resizebox{3.375in}{!}{\includegraphics{figure9b.pdf}} \caption{ (Left) a comparison of the non-dimensional surface-volume ratio between the convex hull of a single arbitrarily chosen group of tracer particles and the average, in the simulation MC.
(Right) a contour plot that shows a horizontal slab filled with convex hulls in simulation MC, at a late time in the simulation. Darker colors represent higher values of the surface-volume ratio. The colors are shown at the initial positions of the convex hulls, and each pixel approximately represents the initial volume of a convex hull. \label{anipdf} } \end{figure} In FIG.~\ref{anipdf}, the surface-volume ratio is also shown as a contour plot for the set of convex hulls that fill a horizontal slab of simulation MC. Dark areas represent regions where convex hulls have grown with significant anisotropy. High spatial intermittency is also noticeable, with areas of large anisotropy bordering areas that grow more isotropically. This pattern of anisotropy remains similar over a long period of time, reflecting the strong influence of the initial configuration of the flow on local dispersion. Although we examine a small number of simulations, the non-dimensional surface-volume ratio that we introduce is clearly capable of revealing aspects of local anisotropy in turbulent flows. \section{Results: Extreme-value statistics of turbulent particle dispersion \label{section6}} The vertices of a convex hull are the particles that disperse fastest among a given group of particles, and the maximal ray defines a maximal dispersion of all particle pairs within the group. Thus the use of the convex hull evokes concepts from extreme value theory \citep{castillo2005extreme,majumdar2010random}. The most widely encountered distribution in extreme value theory, the Gumbel distribution\citep{bramwell2000universal,gumbel1958statistics}, has been frequently employed for climate modeling, including extreme rainfall and flooding \citep{hirabayashi2013global,borga2005regional, koutsoyiannis2004statistics,coles2003fully,yue2000gumbel}, extreme winds \citep{kang2015determination}, avalanches \citep{schweizer2009forecasting}, and earthquakes \citep{pisarenko2014characterization}. The Gumbel distribution has also been found to reasonably characterize the density fluctuations within galaxies \citep{antal2009galaxy,waizmann2012application,chongchitnan2012primordial} and in certain areas of tokamaks \citep{hnat2008characterization,anderson2009predicting,graves2005self}, binding energies in liquids \citep{chempath2010distributions}, as well as turbulent fluctuations \citep{noullez2002global,dahlstedt2001universal}. The cumulative distribution function $F$ for the Gumbel case has the well-known form: \begin{eqnarray}\label{eqgumbel} F(x)=\exp{(-\exp{(-(x-\mu)/\beta)})} \end{eqnarray} where the location parameter $\mu$ gives the mode of the distribution, $\beta$ is commonly called the scale parameter, and the median of the distribution is $\mu-\beta \ln(\ln(2))$. Because extreme value theory is typically developed as an asymptotic theory in the limit of sample size $n \to \infty$, convex hulls with large numbers $n$ of particles facilitate the exploitation of extreme value theory results. We examine the square-length of the maximal ray with extreme value theory, and this choice is crucial. The square-length of the maximal ray is a fundamental scalar commonly associated with dispersion, and thus the most natural physical quantity to consider. The square-length of the maximal ray is also consistent with a simple model of Gaussian displacements. No rigid upper limit exists for the square-length of the maximal ray, and thus the Gumbel distribution is the case that would be anticipated from extreme value theory.
Because Lagrangian tracer particles move in a flow with a finite correlation in space and time, their motions are not independent. The number of particles in each group is also limited in these numerical experiments. Despite these limitations, we find that the shape of the cumulative distribution function of the square of the maximal ray is suggestive of a Gumbel distribution. This observation holds at each point in time, regardless of whether the particle groups sampled are in the ballistic regime, diffusive regime, or a transitional period of dispersion. A Gumbel distribution describes the results well, regardless of the initial length scale of the convex hull, and the initial density of particles, for the range $4 \eta_{\mathsf{kol}}< \ell_{\mathrm{hull}} < 64 \eta_{\mathsf{kol}}$ that we have tested (see Appendix \ref{appendixsizeden}). This suggests that the Gumbel distribution might provide an effective description of the probability of extremes of turbulent dispersion. The location and scale parameters can be different for different $\ell_{\mathrm{hull}}$, and at different times in the dispersion process, although a Gumbel distribution is recovered at each time. In addition, we consider a cumulative distribution function constructed from data at all times throughout the evolution of the convex hulls, as shown in FIG.~\ref{extremepdf}. Using data from all times is a reasonable choice that produces a single form of the cumulative distribution function relevant to the entire simulation. From the perspective of the simple model of Gaussian displacement, noted above, that pragmatic choice actualizes a distribution of values of the scale parameter. Such a possibility is well known in related, but physically distinct, studies of turbulence \citep[e.g.][]{castaing1990velocity}. FIG.~\ref{extremepdf}(a) shows that the distribution of square-length of the maximal ray is fit well with a Gumbel distribution when physically distinct directions, perpendicular and parallel to gravity, are considered individually in the magnetoconvection simulation MC. We found in Section \ref{section5} that the convex hulls in simulation MC become highly anisotropic on average. Thus the fact that a Gumbel distribution with different location and scale parameters accurately describes the extremes of dispersion in both of these physically distinct directions is a new and significant physical observation. \begin{figure} \resizebox{3.375in}{!}{\includegraphics{figure7a.pdf}}\resizebox{3.375in}{!}{\includegraphics{figure7b.pdf}} \caption{The log negative log of the cumulative distribution function, $F$, of the square of the maximal ray of the group of particles defined in eq.~\eqref{maxraydef}. Panel (a) shows the cumulative distribution function of the square of the maximal ray in the directions perpendicular and parallel to gravity from the MHD convection simulation MC. In (b) shows the cumulative distribution function of the square of the maximal ray for each simulation. For each cumulative distribution function shown, a line (solid black line) fits the natural log of the negative natural log of $F$ well. \label{extremepdf} } \end{figure} In FIG.~\ref{extremepdf}(a), we observe an ordering between the scale parameter obtained for the direction perpendicular to gravity and the direction parallel to gravity; the value of the scale parameter is larger in the direction parallel to gravity. 
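For a Gumbel distribution, eq.~\eqref{eqgumbel} implies that $\ln(-\ln F(x)) = -(x-\mu)/\beta$ is linear in $x$; this is the representation plotted in FIG.~\ref{extremepdf}, and it also yields a simple parameter estimator. A minimal sketch follows (ours, illustrative only; a maximum-likelihood fit would weight the tails differently):

\begin{verbatim}
import numpy as np

def gumbel_fit_loglog(samples):
    """Estimate mu and beta from ln(-ln F) = -(x - mu)/beta.

    samples : 1D array, e.g. squared maximal rays of many hulls.
    """
    x = np.sort(samples)
    n = x.size
    F = (np.arange(1, n + 1) - 0.5) / n      # empirical CDF, in (0, 1)
    slope, intercept = np.polyfit(x, np.log(-np.log(F)), 1)
    beta = -1.0 / slope                      # slope = -1/beta
    mu = intercept * beta                    # intercept = mu/beta
    return mu, beta

# synthetic check against known parameters
rng = np.random.default_rng(1)
mu, beta = gumbel_fit_loglog(rng.gumbel(2.0, 0.4, size=20000))
\end{verbatim}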
FIG.~\ref{extremepdf}(b) compares the cumulative distribution functions of the square-length of the full maximal ray of the convex hull in each simulation, and again they demonstrate the linear behavior expected of $\ln{(-\ln(F))}$ for the Gumbel distribution. When $\ln{(-\ln(F))}$ is fit using linear regression, the values of the scale parameters are: $\beta_{\mathsf{HC}}=0.17$, $\beta_{\mathsf{NST}}=0.40$, and $\beta_{\mathsf{MC}}=0.65$. In Section \ref{section5}, ordering these simulations from the least anisotropic to the most anisotropic produced: HC, NST, MC. We thus conjecture from the results in FIG.~\ref{extremepdf}(a) and (b) that faster dispersion linked to anisotropy will lead to a higher value of the scale parameter. \section{Discussion \label{section7}} We have shown that the convex hull can be used to characterize many-particle dispersion in turbulent flows, and can reproduce scalings similar to particle-pair and other multi-particle Lagrangian statistics. The convex hull allows us to extract dispersion behaviors that produce clear scalings from groups of tracer particles that are significantly larger than those typically examined by multi-particle statistics. We have examined particle dispersion using convex hulls across three types of physically distinct turbulence simulations, including Navier-Stokes turbulence, Boussinesq convection, and MHD Boussinesq convection. In each of the simulations that we consider, we have shown that the convex hull describes well the dynamics of the entire group of particles. In addition, these tests yield further information about the turbulent velocity field by quantifying the dynamical differences between interior particles and convex hull vertices. Dispersion curves produced using the maximal ray of the convex hull, the surface area of the convex hull, and the volume of the convex hull produce ballistic and diffusive scalings, which can be compared with particle-pair dispersion curves. Although the convex hull has been used to calculate volumes occupied by particles in some specialized contexts \citep{dietzel2013numerical, lakes}, this is the first time that the convex hull of the positions of Lagrangian tracer particles has been used as a fundamental diagnostic to obtain Lagrangian statistics of multi-particle dispersion in homogeneous turbulent flows. In addition, we have explored the convex hull's fundamental link to extreme value statistics. We have discussed that the convex hull provides new information about extremes of dispersion that standard multi-particle statistics cannot provide. Convex hulls calculated from large numbers of particles provide an ideal application for extreme value theory, an asymptotic theory for large samples. Predictions based on extreme value theory are of practical use for studies of contaminants or of energetic particles, where questions about maximal dispersion are critical. Experimentally it may be simpler to track the convex hull of a large number of particles than to track all the particles in the group individually. We show that the distribution of the square length of the maximal ray of the convex hull follows the Gumbel case of the generalized extreme value distributions. In addition we show that for a system that is anisotropic because of MHD convection, the maximal ray in each physically distinct direction is described well by the Gumbel distribution.
Because the Gumbel distribution has been successful in predicting avalanches, extreme rainfall, and extreme winds, this nontrivial new observation will provide new physical intuition for modeling anomalous dispersion. In a second application of the convex hull analysis, we exploit the relationship between convex hull surface area and volume to examine the degree of anisotropy present in a turbulent convective flow. Our results reveal the extent of spatial variation of anisotropy. Moreover, this quantity also exhibits a probability distribution that has the same universal shape for all three considered physical systems. Convex hull analysis can easily isolate dispersive characteristics in any local region of interest, for example a region where a magnetic structure or a strong convective plume is present. Used in this way, it provides a versatile supplement to standard Lagrangian multi-particle statistics in complex turbulent flows. Because of these advantages, further investigation of the convex hull to analyze many-particle turbulent dispersion is justified. \begin{acknowledgements} {\small We thank Luca Biferale for his helpful comments on this work. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework (FP7/2007-2013)/ERC grant agreement no. 320478. This work has also been supported by the Max-Planck Society in the framework of the Inter-institutional Research Initiative ``Turbulent Transport and Ion Heating, Reconnection and Electron Acceleration in Solar and Fusion Plasmas'' of the MPI for Solar System Research, Katlenburg-Lindau, and the Institute for Plasma Physics, Garching (project MIFIF-A-AERO8047). Original simulations were performed on the VIP, VIZ, and HYDRA computer systems at the Rechenzentrum Garching of the Max Planck Society. Additional calculations were performed on the Konrad and Gottfried computer systems of the Norddeutsche Verbund zur F\"orderung des Hoch- und H\"ochstleistungsrechnens (HLRN). NWW acknowledges travel support from KLIMAFORSK project number 229754 and the London Mathematical Laboratory, and Office of Naval Research NICOP grant NICOP - N62909-15-1-N143 at Warwick and Potsdam. } \end{acknowledgements}
\section{Introduction} Auto2, introduced by the author in \cite{auto2}, is a proof automation tool for the proof assistant Isabelle. It is designed to be a powerful, extensible prover that can consistently solve ``routine'' tasks encountered during a proof, thereby enabling a style of formalization using succinct proof scripts written in a custom, purely declarative language. In this paper, we present an application of auto2 to formalization of mathematics in untyped set theory \footnote{Code available at https://github.com/bzhan/auto2}. In particular, we discuss the formalization in Isabelle/FOL of the entire chain of development from the axioms of set theory to the definition of the fundamental group for an arbitrary topological space. Along the way, we discuss several improvements to auto2 as well as strategies of usage that allow us to work effectively with untyped set theory. The contribution of this paper is two-fold. First, we demonstrate that the auto2 system is capable of independently supporting proof developments on a relatively large scale. In the previous paper, several case studies for auto2 were given in Isabelle/HOL. Each case study is at most several hundred lines long, and the use of auto2 is mixed with the use of other Isabelle tactics, as well as proof scripts provided by Sledgehammer. In contrast, the example we present in this paper is a unified development consisting of over 13,000 lines of theory files and 3,500 lines of ML code (not including the core auto2 program). The auto2 prover is used exclusively starting from basic set theory. Second, we demonstrate one way to manage the additional complexity in proofs that arise when working with untyped set theory. For a number of reasons, untyped set theory is considered to be difficult to work with. For example, everything is represented as sets, including objects such as natural numbers that we usually do not think of as sets. Moreover, statements of theorems tend to be longer in untyped set theory than in typed theories, since assumptions that would otherwise be included in type constraints must now be stated explicitly. In this paper, we show that with appropriate definitions of basic concepts and setup for automation, all these complexities can be managed, without sacrificing the inherent flexibility of the logic. We now give an outline for the rest of the paper. In Section \ref{sec:structures}, we sketch our choice of definitions of basic concepts in axiomatic set theory. In particular, we describe how to use tuples to realize extensible records, and build up the hierarchy of algebraic structures. In Section \ref{sec:auto2}, we review the main ideas of the auto2 system, and describe several additional features, as well as strategies of usage, that allow us to manage the additional complexities of untyped set theory. In Section \ref{sec:exampleselem}, we give two examples of proof scripts using auto2, taken from the proofs of the Schroeder-Bernstein theorem and a challenge problem in analysis from Lasse Rempe-Gillen. In Section \ref{sec:fundamentalgroup}, we describe our main example, the definition of the fundamental group, in detail. Given a topological space $X$ and a base point $x$ on $X$, the fundamental group $\pi_1(X,x)$ is defined on the quotient of the set of loops in $X$ based at $x$, under the equivalence relation given by path homotopy. Multiplication on $\pi_1(X,x)$ comes from joining two loops end-to-end. 
Formalizing this definition requires reasoning about algebraic and topological structures, equivalence relations, as well as continuous functions on real numbers. We believe this is a sufficiently challenging task with which to test the maturity of our framework, although it has been achieved before in the Mizar system. HOL Light and Isabelle/HOL also formalized the essential ideas of path homotopy. We review these and other related works in Section \ref{sec:relatedwork}, and conclude in Section \ref{sec:conclusion}. \paragraph{Acknowledgements.} The author would like to thank the anonymous referees for their comments. This research was completed while the author was supported by NSF Award No. 1400713. \section{Basic constructions in set theory} \label{sec:structures} We now discuss our choice of definitions of basic concepts, starting with the choice of logic. Our development is based on the FOL (first-order logic) instantiation of Isabelle. The initial parts are similar to those in Isabelle/ZF, and we refer to \cite{paulson1,paulson2} for detailed explanations. The only Isabelle types available are $i$ for sets, $o$ for propositions (booleans), and function types formed from them. We call objects with types other than $i$ and $o$ \emph{meta-functions}, to distinguish them from functions defined within set theory (which have type $i$). It is possible to define higher-order meta-functions in FOL, and supply them with arguments in the form of lambda expressions. Theorems can be quantified over variables with functional type at the outermost level. These can be thought of as theorem-schemas in a first-order theory. However, one can only quantify over variables of type $i$ inside the statement of a theorem, and the only equalities defined within FOL are those between types $i$ (notation $\cdot = \cdot$) and $o$ (notation $\cdot \longleftrightarrow \cdot$). In practice, these restrictions mean that any functions that we wish to consider as first-class objects must be defined as set-theoretic functions. \subsection{Axioms of set theory}\label{sec:axioms} For uniformity of presentation, we start our development from FOL rather than theories in Isabelle/ZF. However, the list of axioms we use is mostly the same. The main addition is the axiom of global choice, which we use as an easier-to-apply version of the axiom of choice. Note that as in Isabelle/ZF, several of the axioms introduce new sets or meta-functions, and declare properties satisfied by them. The exact list of axioms is as follows: \begin{isabelle} \ \ extension: \ \ "\isasymforall z. z \isasymin\ x \isasymlongleftrightarrow\ z \isasymin\ y \isasymLongrightarrow\ x = y" \isanewline \ \ empty\_set: \ \ "x \isasymnotin\ \isasymemptyset" \isanewline \ \ collect:\ \ \ \ \ "x \isasymin\ Collect(A,P) \isasymlongleftrightarrow\ (x \isasymin\ A \isasymand\ P(x))" \isanewline \ \ upair:\ \ \ \ \ \ \ "x \isasymin\ Upair(y,z) \isasymlongleftrightarrow\ (x = y \isasymor\ x = z)" \isanewline \ \ union:\ \ \ \ \ \ \ "x \isasymin\ \isasymUnion C \isasymlongleftrightarrow\ (\isasymexists A\isasymin C. x\isasymin A)" \isanewline \ \ power:\ \ \ \ \ \ \ "x \isasymin\ Pow(S) \isasymlongleftrightarrow\ x \isasymsubseteq\ S" \isanewline \ \ replacement: "\isasymforall x\isasymin A. \isasymforall y z. P(x,y) \isasymand\ P(x,z) \isasymlongrightarrow\ y = z \isasymLongrightarrow \isanewline \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ b \isasymin\ Replace(A,P) \isasymlongleftrightarrow\ (\isasymexists x\isasymin A.
P(x,b))" \isanewline \ \ foundation:\ \ "x \isasymnoteq\ \isasymemptyset\ \isasymLongrightarrow\ \isasymexists y\isasymin x. y \isasyminter\ x = \isasymemptyset" \isanewline \ \ infinity:\ \ \ \ "\isasymemptyset\ \isasymin\ Inf \isasymand\ (\isasymforall y\isasymin Inf. succ(y) \isasymin\ Inf)" \isanewline \ \ choice:\ \ \ \ \ \ "\isasymexists x. x\isasymin S \isasymLongrightarrow\ Choice(S) \isasymin\ S" \end{isabelle} Next, we define several basic constructions in set theory. They are summarized in the following table. See \cite{paulson1} for more explanations. \begin{tabular} {c|c} Notation & Definition \\ \hline \isa{THE x. P(x)} & \isa{\isasymUnion(Replace(\isaset{\isasymemptyset}, \isasymlambda x y. P(y)))} \\ \isa{\isaset{b(x). x\isasymin A}} & \isa{Replace(A, \isasymlambda x y. y = b(x))} \\ \isa{SOME x\isasymin A. P(x)} & \isa{Choice(\isaset{x\isasymin A. P(x)})} \\ \isa{\isapair{a,b}} & \isa{\isaset{\isaset{a}, \isaset{a,b}}} \\ \isa{fst(p)} & \isa{THE a. \isasymexists b. p = \isapair{a,b}} \\ \isa{snd(p)} & \isa{THE b. \isasymexists a. p = \isapair{a,b}} \\ \isa{\isapair{\isamath{a_1,\dots,a_n}}} & \isa{\isapair{\isamath{a_1},\isapair{\isamath{a_2},\isapair{\isamath{\cdots,a_n}}}}} \\ \isa{if P then a else b} & \isa{THE z. P \isasymand\ z=a \isasymor\ \isasymnot P \isasymand\ z=b} \\ \isa{\isasymUnion a\isasymin I. X} & \isa{\isasymUnion \isaset{X(a). a\isasymin I}} \\ \isa{A \isasymtimes\ B} & \isa{\isasymUnion x\isasymin A. \isasymUnion y\isasymin B. \isaset{\isapair{x,y}}} \end{tabular} \subsection{Extensible records as tuples}\label{sec:functions} We now consider the problem of representing records. In our framework, records are used to represent functions, algebraic and topological structures, as well as morphisms between structures. It is often advantageous for records of different types to share certain fields. For example, groups and rings should share the multiplication operator, rings and ordered rings should share both addition and multiplication operators, and so on. It is well-known that when formalizing mathematics using set theory, records can be represented as tuples. To achieve sharing of fields, the key idea is to assign each shared field a fixed position in the tuple. We begin with the example of functions. A function is a record consisting of a source set (domain), a target set (codomain), and the graph of the function. In particular, we consider two functions with the same graph but different target sets to be different functions (another structure called \emph{family} is used to represent functions without specified target set). The three fields are assigned to the first three positions in the tuple: \begin{isabelle} \isacommand{definition} "source(F) = fst(F)" \isanewline \isacommand{definition} "target(F) = fst(snd(F))" \isanewline \isacommand{definition} "graph(F) \ = fst(snd(snd(F)))" \end{isabelle} A function with source \isa{S}, target \isa{T}, and graph \isa{G} is represented by the tuple \isa{\isapair{S,T,G,\isasymemptyset}} (we append an \isa{\isasymemptyset} at the end so the definition of \isa{graph} works properly). For \isa{G} to actually represent a function, it must satisfy the conditions for a functional graph: \begin{isabelle} \isacommand{definition} func\_graphs :: "i \isasymRightarrow\ i \isasymRightarrow\ i" \isakeyword{where} \isanewline \ \ "func\_graphs(X,Y) = \isacharbraceleft G\isasymin Pow(X\isasymtimes Y). (\isamath{\forall}a\isasymin X. \isamath{\exists!}y. 
\isapair{a,y}\isasymin G)\isacharbraceright" \end{isabelle} The set of all functions from \isa{S} to \isa{T} (denoted \isa{S \isasymrightarrow\ T}) is then given by: \begin{isabelle} \isacommand{definition} function\_space :: "i \isasymRightarrow\ i \isasymRightarrow\ i" (\isakeyword{infixr} "\isasymrightarrow" 60) \isakeyword{where} \isanewline \ \ "A \isasymrightarrow\ B = \isacharbraceleft\isapair{A,B,G,\isasymemptyset}. G\isasymin func\_graphs(A,B)\isacharbraceright" \end{isabelle} Functions can be created using the following constructor. Note this is a higher-order meta-function. The argument \isa{b} can be supplied by a lambda expression. \begin{isabelle} \isacommand{definition} Fun :: "[i, i, i \isasymRightarrow\ i] \isasymRightarrow\ i" where \isanewline \ \ "Fun(A,B,b) = \isapair{A, B, \isaset{p\isasymin A\isasymtimes B. snd(p) = b(fst(p))}, \isasymemptyset}" \end{isabelle} Evaluation of a function \isa{f} at \isa{x} (denoted \isa{f\isamath{^\backprime} x}) is then defined as: \begin{isabelle} \isacommand{definition} feval :: "i \isasymRightarrow\ i \isasymRightarrow\ i" (\isakeyword{infixl} "\isamath{^\backprime}" 90) \isakeyword{where} \isanewline \ \ "f \isamath{^\backprime}\ x = (THE y. \isapair{x,y}\isasymin graph(f))" \end{isabelle} \subsection{Algebraic structures} The second major use of records is to represent algebraic structures. In our framework, we will define structures such as groups, abelian groups, rings, and ordered rings. The carrier set of a structure is assigned to the first position. The order relation, additive data, and multiplicative data are assigned to the third, fourth, and fifth position, respectively. This is expressed as follows: \begin{isabelle} \isacommand{definition} "carrier(S) \ \ \ \ = fst(S)" \isanewline \isacommand{definition} "order\_graph(S) = fst(snd(snd(S)))" \isanewline \isacommand{definition} "zero(S) \ \ \ \ \ \ \ = fst(fst(snd(snd(snd(S)))))" \isanewline \isacommand{definition} "plus\_fun(S) \ \ \ = snd(fst(snd(snd(snd(S)))))" \isanewline \isacommand{definition} "one(S) \ \ \ \ \ \ \ \ = fst(fst(snd(snd(snd(snd(S))))))" \isanewline \isacommand{definition} "times\_fun(S) \ \ = snd(fst(snd(snd(snd(snd(S))))))" \end{isabelle} Here \isa{order\_graph} is a subset of \isa{S\isasymtimes S}, and \isa{plus\_fun}, \isa{times\_fun} are elements of \isa{S\isasymtimes S\isasymrightarrow S}. Hence, the operators $\le, +,$ and $*$ can be defined as follows: \begin{isabelle} \isacommand{definition} "le(R,x,y) \isasymlongleftrightarrow\ \isapair{x,y}\isasymin order\_graph(R)" \isanewline \isacommand{definition} "plus(R,x,y) = plus\_fun(R)\isamath{^\backprime}\isapair{x,y}" \isanewline \isacommand{definition} "times(R,x,y) = times\_fun(R)\isamath{^\backprime}\isapair{x,y}" \isanewline \end{isabelle} These are abbreviated to \isa{x \isale{R} y}, \isa{x \isaplus{R} y}, and \isa{x \isatimes{R} y}, respectively (in both theory files and throughout this paper, we use $*$ to denote multiplication in groups and rings, and $\times$ to denote product on sets and other structures). We also abbreviate \isa{x \isasymin\ carrier(S)} to \isa{x \isasymin. S}. The constructor for group-like structures is as follows: \begin{isabelle} \isacommand{definition} Group :: "[i, i, i \isasymRightarrow\ i \isasymRightarrow\ i] \isasymRightarrow\ i" \isacommand{where} \isanewline \ \ "Group(S,u,f) = \isapair{S,\isasymemptyset,\isasymemptyset,\isasymemptyset,\isapair{u,\isasymlambda p\isasymin S\isasymtimes S. 
f(fst(p),snd(p))\isasymin S},\isasymemptyset}" \end{isabelle} The following predicate asserts that a structure contains \emph{at least} the fields of a group-like structure, with the right membership properties (\isa{\isaone{G}} abbreviates \isa{one(G)}): \begin{isabelle} \isacommand{definition} is\_group\_raw :: "i \isasymRightarrow\ o" \isacommand{where} \isanewline \ \ "is\_group\_raw(G) \isasymlongleftrightarrow \isanewline \ \ \ \ \ \isaone{G} \isasymin. G \isasymand\ times\_fun(G) \isasymin\ carrier(G) \isasymtimes\ carrier(G) \isasymrightarrow\ carrier(G)" \end{isabelle} To check whether such a structure is in fact a monoid / group, we use the following predicates: \begin{isabelle} \isacommand{definition} is\_monoid :: "i \isasymRightarrow\ o" \isacommand{where} \isanewline \ \ "is\_monoid(G) \isasymlongleftrightarrow\ is\_group\_raw(G) \isasymand\ \isanewline \ \ \ \ \ \ (\isasymforall x\isasymin.G. \isasymforall y\isasymin.G. \isasymforall z\isasymin.G. (x \isatimes{G} y) \isatimes{G} z = x \isatimes{G} (y \isatimes{G} z)) \isasymand \isanewline \ \ \ \ \ \ (\isasymforall x\isasymin.G. \isaone{G} \isatimes{G} x = x \isasymand\ x \isatimes{G} \isaone{G} = x)" \end{isabelle} \begin{isabelle} \isacommand{definition} units :: "i \isasymRightarrow\ i" \isacommand{where} \isanewline \ \ "units(G) = \isaset{x \isasymin. G. (\isasymexists y\isasymin.G. y \isatimes{G} x = \isaone{G} \isasymand\ x \isatimes{G} y = \isaone{G})}" \end{isabelle} \begin{isabelle} \isacommand{definition} is\_group :: "i \isasymRightarrow\ o" \isacommand{where} \isanewline \ \ "is\_group(G) \isasymlongleftrightarrow\ is\_monoid(G) \isasymand\ carrier(G) = units(G)" \end{isabelle} Note that these definitions are meaningful on any structure that has multiplicative data. Likewise, we can define a predicate \isa{is\_abgroup} for abelian groups, which is meaningful for any structure that has additive data. These can be combined with distributive properties to define the predicate for a ring: \begin{isabelle} \isacommand{definition} is\_ring :: "i \isasymRightarrow\ o" \isacommand{where} \isanewline \ \ "is\_ring(R) \isasymlongleftrightarrow\ (is\_ring\_raw(R) \isasymand\ is\_abgroup(R) \isasymand\ is\_monoid(R) \isasymand \isanewline \ \ \ \ \ \ is\_left\_distrib(R) \isasymand\ is\_right\_distrib(R) \isasymand\ \isazero{R} \isamath{\neq} \isaone{R})" \end{isabelle} Likewise, we can define the predicate for ordered rings, and constructors for such structures. Structures are used to represent the hierarchy of numbers: we let \isa{nat}, \isa{int}, \isa{rat}, and \isa{real} denote the \emph{set} of natural numbers, integers, and so on, while \isamath{\mathbb{N}}, \isamath{\mathbb{Z}}, \isamath{\mathbb{Q}}, and \isamath{\mathbb{R}} denote the corresponding structures. Hence, addition on natural numbers is denoted by \isa{x \isasub{+}{\mathbb{N}}\ y}, addition on real numbers by \isa{x \isasub{+}{\mathbb{R}}\ y}, etc. We can also state and prove theorems such as \isa{is\_ord\_field(\isamath{\mathbb{R}})}, which contains all proof obligations for showing that the real numbers form an ordered field. \subsection{Morphisms between structures} Finally, we discuss morphisms between structures. Morphisms can be considered as an \emph{extension} of functions, with additional fields specifying structures on the source and target sets.
The two additional fields are assigned to the fourth and fifth positions in the tuple: \begin{isabelle} \isacommand{definition} "source\_str(F) = fst(snd(snd(snd(F))))" \isanewline \isacommand{definition} "target\_str(F) = fst(snd(snd(snd(snd(F)))))" \end{isabelle} The constructor for a morphism is as follows (here \isa{S} and \isa{T} are the source and target structures, while the source and target sets are automatically derived): \begin{isabelle} \isacommand{definition} Mor :: "[i, i, i \isasymRightarrow\ i] \isasymRightarrow\ i" \isacommand{where} \isanewline \ \ "Mor(S,T,b) = (let A = carrier(S) in let B = carrier(T) in \isanewline \ \ \ \ \ \isapair{A, B, \isaset{p\isasymin A\isasymtimes B. snd(p) = b(fst(p))}, S, T, \isasymemptyset})" \end{isabelle} The space of morphisms (denoted \isa{S \isasymrightharpoonup\ T}) is given by: \begin{isabelle} \isacommand{definition} mor\_space :: "i \isasymRightarrow\ i \isasymRightarrow\ i" (\isakeyword{infix} "\isasymrightharpoonup" 60) \isakeyword{where} \isanewline \ \ "mor\_space(S,T) = (let A = carrier(S) in let B = carrier(T) in \isanewline \ \ \ \ \ \isaset{\isapair{A,B,G,S,T,\isasymemptyset}. G\isasymin func\_graphs(A,B)})" \end{isabelle} Note that the notation \isa{f\isamath{^\backprime} x} for evaluation still works for morphisms. Several other concepts defined in terms of evaluation, such as image and inverse image, continue to be valid for morphisms as well, as are lemmas about these concepts. However, operations that construct new morphisms, such as inverse and composition, must be redefined. We will use \isa{g \isamath{\circ} f} to denote the composition of two functions, and \isa{g \isamath{\circ_{\isasty{m}}}\ f} to denote the composition of two morphisms. Having morphisms store the source and target structures means we can define properties such as being a group homomorphism as a predicate: \begin{isabelle} \isacommand{definition} is\_group\_hom :: "i \isasymRightarrow\ o" \isacommand{where} \isanewline \ "is\_group\_hom(f) \isasymlongleftrightarrow\ (let S = source\_str(f) in let T = target\_str(f) in \isanewline \ \ \ \ \ \ is\_morphism(f) \isasymand\ is\_group(S) \isasymand\ is\_group(T) \isasymand\ \isanewline \ \ \ \ \ \ (\isasymforall x\isasymin.S. \isasymforall y\isasymin.S. f\isamath{^\backprime}(x \isatimes{S} y) = f\isamath{^\backprime} x \isatimes{T} f\isamath{^\backprime} y))" \end{isabelle} The following lemma then states that the composition of two homomorphisms is a homomorphism (this is proved automatically using auto2): \begin{isabelle} \isacommand{lemma} group\_hom\_compose: \isanewline \ \ "is\_group\_hom(f) \isasymLongrightarrow\ is\_group\_hom(g) \isasymLongrightarrow\ \isanewline \ \ \ target\_str(f) = source\_str(g) \isasymLongrightarrow\ is\_group\_hom(g \isamath{\circ_{\isasty{m}}}\ f)" \end{isabelle} \section{Auto2 in untyped set theory} \label{sec:auto2} In this section, we describe several additional features of auto2, as well as general strategies of using it to manage the complexities of untyped set theory. We begin with an overview of the auto2 system (see \cite{auto2} for details). Auto2 is a theorem prover packaged as a tactic in Isabelle. It works with a collection of rules of reasoning called \emph{proof steps}. New proof steps can be added at any time within an Isabelle theory. They can also be deleted at any time, although it is rarely necessary to add and delete the same proof step more than once.
In general, when building an Isabelle theory, the user is responsible for specifying, by adding proof steps, how to use the results proved in that theory. In return, the user no longer needs to worry about invoking these results by name in future developments. The overall algorithm of auto2 is as follows. First, the statement to be proved is converted into contradiction form, so the task is always to derive a contradiction from a list of assumptions. During the proof, auto2 maintains a list of \emph{items}, the two most common types of which are propositions (that are derived from the assumptions) and terms (that have appeared so far in the proof). Each item resides in a \emph{box}, which can be thought of as a subcase of the statement to be proved (the box corresponding to the original statement is called the \emph{home box}). A proof step is a function that takes as input one or two items, and outputs new items, new cases, the shadowing of one of the input items, or the resolution of a box by proving a contradiction in that box. The main loop of the algorithm repeatedly applies the current collection of proof steps and adds any new items and cases in a best-first-search manner, until some proof step derives a contradiction in the home box. In addition to the list of items, auto2 also maintains several tables. The most important of these is the \emph{rewrite table}, which keeps track of the list of currently known equalities (not containing arbitrary variables), and maintains the congruence closure of these equalities. There are two other tables: the property table and the well-form table, which we will discuss later in this section. There are two broad categories of proof steps, which we call the \emph{standard} and \emph{special} proof steps in this paper. A standard proof step applies an existing theorem in a specific direction. It matches the input items to one or two patterns in the statement of the theorem, and applies the theorem to derive a new proposition. Here the matching is up to rewriting (\emph{E-matching}) using the rewrite table. A special proof step can have more complex behavior, and is usually written as an ML function. The vast majority of proof steps in our development are standard, although special proof steps also play an important role. The auto2 prover is not intended to be complete. For example, it may intentionally apply a theorem in only one of several possible directions, in order to narrow the search space. For more difficult theorems, auto2 provides a custom language of proof scripts, allowing the user to specify intermediate steps of the proof. Generally, when proving a result using auto2, the user will first try to prove it without any scripts, and in case of failure, successively add intermediate steps, perhaps by referring to an informal proof of the result. When an attempt fails, auto2 will indicate the first intermediate step that it is unable to prove, as well as what it is able to derive in the course of proving that step. We will show examples of proof scripts in Section \ref{sec:exampleselem}. The current version of auto2 can be set up to work with different logics in Isabelle. It contains a core program, for reasoning about predicate logic and equality, that is parametrized over the list of constants and theorems for the target logic. In particular, auto2 is now set up and tested to work with both HOL and FOL in Isabelle.
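To make this description concrete, the following Python sketch caricatures the main loop. This is our own illustration, not part of the system: the actual implementation is written in Isabelle/ML, tracks which box each item belongs to, and additionally maintains the rewrite, property, and well-form tables; all names below are hypothetical.

\begin{verbatim}
import heapq

def auto2_loop(assumptions, proof_steps, max_items=10000):
    # Items derived so far; for simplicity we keep only the home box.
    items = list(assumptions)
    # Priority queue of item indices, processed in best-first order.
    queue = [(0, i) for i in range(len(items))]
    heapq.heapify(queue)
    while queue and len(items) < max_items:
        score, idx = heapq.heappop(queue)
        item = items[idx]
        for step in proof_steps:
            # A proof step takes one or two items as input ...
            for other in items:
                outputs = step(item, other)
                # ... and may derive a contradiction in the home box,
                # which completes the proof of the original statement.
                if outputs == "CONTRADICTION":
                    return True
                # Otherwise it yields new items (or cases), each with a
                # score used for the best-first ordering.
                for extra_score, new_item in outputs:
                    if new_item not in items:
                        items.append(new_item)
                        heapq.heappush(
                            queue, (score + extra_score, len(items) - 1))
    return False  # saturated or resource limit reached: proof fails
\end{verbatim}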
\subsection{Encapsulation of definitions}\label{sec:definition} One commonly cited problem with untyped set theory is that every object is a set, including those that are not usually considered as sets. Common examples of the latter include ordered pairs, natural numbers, functions, etc. In informal treatments of mathematics, these definitions are only used to establish some basic properties of the objects concerned. Once these properties are proved, the definitions are never used again. In formal developments, when automation is used to produce large parts of the proof, one potential problem is that the automation may needlessly expand the original definitions of objects, rather than focusing on their basic properties. This increases the search space and obscures the essential ideas of the proof. Using the ability to delete proof steps in auto2, this problem can be avoided entirely. For any definition that we wish to drop in the end, we use the following three-step procedure: \begin{enumerate} \item The definition is stated and added to auto2 as rewrite rules. \item Basic properties of the object being defined are stated and proved. These properties are added to auto2 as appropriate proof steps. \item The rewrite rules for the original definition are deleted. \end{enumerate} For example, after the definitions concerning the representation of functions as tuples in Section \ref{sec:functions}, we prove the following lemmas, and add them as appropriate proof steps (as indicated by the attributes in brackets): \begin{isabelle} \isacommand{lemma} lambda\_is\_function [backward]: \isanewline \ \ "\isasymforall x\isasymin A. f(x)\isasymin B \isasymLongrightarrow\ Fun(A,B,f) \isasymin\ A \isasymrightarrow\ B" \end{isabelle} \begin{isabelle} \isacommand{lemma} beta [rewrite]: \isanewline \ \ "F = Fun(A,B,f) \isasymLongrightarrow\ x \isasymin\ source(F) \isasymLongrightarrow\ is\_function(F) \isasymLongrightarrow\ F\isamath{^\backprime} x = f(x)" \end{isabelle} \begin{isabelle} \isacommand{lemma} feval\_in\_range [typing]: \isanewline \ \ "is\_function(f) \isasymLongrightarrow\ x \isasymin\ source(f) \isasymLongrightarrow\ f\isamath{^\backprime} x \isasymin\ target(f)" \end{isabelle} After proving these (and a few more) lemmas, the rewriting rules for the definitions of \isa{Fun}, \isa{function\_space}, \isa{feval}, etc, are removed. Note that all lemmas above are independent of the representation of functions as tuples. Hence, this representation is effectively hidden from the point of view of the prover. Some of the original definitions may be temporarily re-added in rare instances (for example when defining the concept of morphisms). \subsection{Property and well-form tables}\label{sec:tables} In this section, we discuss two additional tables maintained by auto2 during a proof. The property table is already present in the version introduced in \cite{auto2}, but not discussed in that paper. The well-form table is new. The main motivation for both tables is that for many theorems, especially those stated in an untyped logic, some of its assumptions can be considered as ``side conditions''. To give a basic example, consider the following lemma: \begin{isabelle} \isacommand{lemma} unit\_l\_cancel: \isanewline \ \ "is\_monoid(G) \isasymLongrightarrow\ y \isasymin. G \isasymLongrightarrow\ z \isasymin. 
G \isasymLongrightarrow\ x \isatimes{G} y = x \isatimes{G} z \isasymLongrightarrow\ \isanewline \ \ \ x \isasymin\ units(G) \isasymLongrightarrow\ y = z" \end{isabelle} In this lemma, the last two assumptions are the ``main'' assumptions, while the first three are side conditions asserting that the variables in the main assumptions are well-behaved in some sense. In Isabelle/HOL, these side conditions may be folded into type or type-class constraints. We consider two kinds of side conditions. The first kind, like the first assumption above, checks that one of the variables in the main assumptions satisfies a certain predicate. In Isabelle/HOL, these may correspond to type-class constraints. In auto2, we call these \emph{property assumptions}. More precisely, given any predicate (in FOL this means a constant of type \isa{i \isasymRightarrow\ o}), we can register it as a property. The \emph{property table} records the list of properties satisfied by each term that has appeared so far in the proof. Properties propagate through equalities: if \isa{P(a)} is in the property table, and \isa{a = b} is known from the rewrite table, then \isa{P(b)} is automatically added to the property table. The user can also add theorems of certain forms as further propagation rules for the property table (we omit the details here). The second kind of side condition asserts that certain terms occurring in the main assumptions are \emph{well-formed}. We use the terminology of well-formedness to capture a familiar feature of mathematical language: that an expression may make implicit assumptions about its subterms. These conditions can be in the form of type constraints. For example, the expression \isa{a \isaplus{R} b} implicitly assumes that \isa{a} and \isa{b} are elements in the carrier set of \isa{R}. However, this concept is much more general. Some examples of well-formedness conditions are summarized in the following table: \begin{tabular} {c|c} Term & Conditions \\ \hline \isa{\isasymInter A} & \isa{A \isamath{\neq} \isasymemptyset} \\ \isa{f \isamath{^\backprime}\ x} & \isa{x \isasymin\ source(f)} \\ \isa{g \isamath{\circ} f} & \isa{target(f) = source(g)} \\ \isa{g \isamath{\circ_{\isasty{m}}}\ f} & \isa{target\_str(f) = source\_str(g)} \\ \isa{a \isaplus{R} b} & \isa{a \isasymin. R, b \isasymin. R} \\ \isa{inv(R,a)} & \isa{a \isasymin\ units(R)} \\ \isa{a \isamath{/_{\isasty{R}}} b} & \isa{a \isasymin. R, b \isasymin\ units(R)} \\ \isa{subgroup(G,H)} & \isa{is\_subgroup\_set(G,H)} \\ \isa{quotient\_group(G,H)} & \isa{is\_normal\_subgroup\_set(G,H)} \end{tabular} In general, given any meta-function \isa{f}, any propositional expression in terms of the arguments of \isa{f} can be registered as a well-formedness condition of \isa{f}. In particular, well-formedness conditions are not necessarily properties. For example, the condition \isa{a \isasymin. R} for \isa{a \isaplus{R} b} involves two variables and hence is not a property. The \emph{well-form table} records, for every term encountered so far in the proof, the list of its well-formedness conditions that are satisfied. Whenever a new fact is added, auto2 checks against every known term to see whether it verifies a well-formedness condition of that term. The property and well-form tables are used in similar ways in standard proof steps.
After the proof step matches one or two patterns in the ``main'' assumptions or conclusion of the theorem that it applies, it checks for the side conditions in the two tables, and proceeds to apply the theorem only if all side conditions are found. Of course, this requires proof steps to be re-applied if new properties or well-formedness conditions of a term become known. \subsection{Well-formed conversions}\label{sec:conversion} Algebraic simplification is an important part of any automatic prover. For every kind of algebraic structure, e.g. monoids, groups, abelian groups, and rings, there is a concept of normal form of an expression, and two terms can be equated if they have the same normal form. In untyped set theory, such computation of normal forms is complicated by the fact that the relevant rewriting rules have extra assumptions. For example, the rule for associativity of addition is: \begin{isabelle} \ \ is\_abgroup(R) \isasymLongrightarrow\ x \isasymin. R \isasymLongrightarrow\ y \isasymin. R \isasymLongrightarrow\ z \isasymin. R \isasymLongrightarrow\ \isanewline \ \ \ \ \ \ x \isaplus{R} (y \isaplus{R} z) = (x \isaplus{R} y) \isaplus{R} z \end{isabelle} The first assumption can be verified at the beginning of the normalization process. The remaining assumptions, however, are more cumbersome. In particular, they may require membership status of terms that arise only during the normalization. For example, when normalizing the term \isa{a\isaplus{R}(b\isaplus{R}(c\isaplus{R}d))}, we may first rewrite it to \isa{a\isaplus{R}((b\isaplus{R}c)\isaplus{R}d)}. The next step, however, requires \isa{b\isaplus{R}c \isasymin. R}, where \isa{b\isaplus{R}c} does not occur initially and may not have occurred so far in the proof. In typed theories, this poses no problem, since \isa{b\isamath{+}c} will be automatically given the same type as \isa{b} and \isa{c} when the term is created. In untyped set theory, such membership information must be kept track of and derived when necessary. The concept of well-formed terms provides a natural framework for doing this. Before performing algebraic normalization on a term, we first check for all relevant well-formedness conditions. If all conditions are present, we produce a data structure (of type \isa{wfterm} in Isabelle/ML) containing the certified term as well as theorems asserting well-formedness conditions. A theorem is called a \emph{well-formed rewrite rule} if its main conclusion is an equality, each of its assumptions is a well-formedness condition for terms on the left side of the equality, and it has additional conclusions that verify all well-formedness conditions for terms on the right side of the equality that are not already present in the assumptions. For example, the associativity rule stated above is not yet a well-formed rewrite rule: there is no justification for \isa{x\isaplus{R}y \isasymin. R}, which is a well-formedness condition for the term \isa{(x\isaplus{R}y)\isaplus{R}z} on the right side of the equality. The full well-formed rewrite rule is: \begin{isabelle} \ \ is\_abgroup(R) \isasymLongrightarrow\ x \isasymin. R \isasymLongrightarrow\ y \isasymin. R \isasymLongrightarrow\ z \isasymin. R \isasymLongrightarrow\ \isanewline \ \ \ \ \ \ x \isaplus{R} (y \isaplus{R} z) = (x \isaplus{R} y) \isaplus{R} z \isasymand\ x \isaplus{R} y \isasymin.
R \end{isabelle} Given a well-formed rewrite rule, we can produce a \emph{well-formed conversion} that acts on \isa{wfterm} objects, in a way similar to how equalities produce regular conversions that act on \isa{cterm} objects in Isabelle/ML. Like regular conversions, well-formed conversions can be composed in various ways, and full normalization procedures can be written using the language of well-formed conversions. These normalization procedures in turn form the basis of several special proof steps. We give two examples: \begin{itemize} \item Given two terms $s$ and $t$ that are non-atomic with respect to operations in \isa{R}, where \isa{R} is a monoid (group / abelian group / ring), normalize $s$ and $t$ using the rules for \isa{R}. If the normalizations are equal, output $s = t$. \item Given two propositions \isa{a \isale{R} b} and \isa{\isasymnot (c \isale{R} d)}, where \isa{R} is an ordered ring, compare the normalizations of \isa{b \isaminus{R} a} and \isa{d \isaminus{R} c}. If they are equal, output a contradiction. \end{itemize} These proof steps, when combined with proof scripts provided by the user, allow algebraic manipulations to be performed rapidly. They replace the handling of associative-commutative functions for HOL discussed in \cite{auto2}. \subsection{Discussion}\label{sec:discussion} We conclude this section with a discussion of our overall approach to untyped set theory, and compare it with other approaches. One feature of our approach is that we do not seek to re-institute a concept of types in our framework, but simply replace type constraints with set membership conditions (or predicates, for constraints that cannot be described by a set). The aim is to fully preserve the flexibility of set-membership as compared to types. Empirically, most of the extra assumptions that arise in the statement of theorems can be taken care of by classifying them as properties or well-formedness conditions. Our approach can be contrasted with that taken by Mizar, which defines a concept of soft types \cite{Mizar-type1} within the core of the system. Every framework for formalizing modern mathematics needs a way to deal with structures. In Mizar, structures are defined in the core of the system as partial functions on selectors \cite{Mizar-struct,Mizar-struct2}. In both Isabelle/HOL and IsarMathLib's treatment of abstract algebra, structures are realized with extensive use of locales. For Coq, one notable approach is the use of Canonical Structures \cite{coq-canonical} in the formalization of the Odd Order Theorem. We chose a relatively simple scheme of realizing structures as tuples, which is sufficient for the present purposes. Representing them as partial functions on selectors, as in Mizar, is more complicated but may be beneficial in the long run. Finally, we emphasize that we do not make any modification to Isabelle/FOL in our development. The concept of well-formed terms, for example, is meaningful only to the automation. The whole of auto2's design, including the ability for users to add new proof steps, follows the LCF architecture. To have confidence in the proofs, one only needs to trust the existing Isabelle system, the ten axioms stated in Section \ref{sec:axioms}, and the definitions involved in the statement of the results. \section{Examples of proof scripts} \label{sec:exampleselem} Using the techniques in the above two sections, we formalized enough mathematics in Isabelle/FOL to be able to define the fundamental group.
In addition to work directly used for that purpose, we also formalized several interesting results on the side. These include the well-ordering theorem and Zorn's lemma, the first isomorphism theorem for groups, and the intermediate value theorem. Two more examples will be presented in the remainder of this section, to demonstrate the level of succinctness of proof scripts that can be achieved. Throughout our work, we referred to various sources including both mathematical texts and other formalizations. We list these sources here: \begin{itemize} \item Axioms of set theory and basic operations on sets, construction of natural numbers using least fixed points: from Isabelle/ZF \cite{paulson1,paulson2}. \item Equivalence and order relations, arbitrary products on sets, well-ordering theorem and Zorn's lemma: from Bourbaki's \emph{Theory of Sets} \cite{bourbaki}. \item Group theory and the construction of real numbers using Cauchy sequences: from my previous case studies \cite{auto2}, which are in turn based on corresponding theories in the Isabelle/HOL library. \item Point-set topology and construction of the fundamental group: from \emph{Topology} by Munkres \cite{munkres}. \end{itemize} \subsection{Schroeder-Bernstein Theorem} For our first example, we present the proof of the Schroeder-Bernstein theorem. See \cite{paulson2} for a presentation of the same proof in Isabelle/ZF. The bijection is constructed by gluing together two functions. Auto2 is able to prove automatically that under certain conditions, the gluing is a bijection (lemma \isa{glue\_function2\_bij}). For the Schroeder-Bernstein theorem, a proof script (provided by the user) is needed. This is given immediately after the statement of the theorem. \begin{isabelle} \isacommand{definition} glue\_function2 :: "i \isasymRightarrow\ i \isasymRightarrow\ i" \isacommand{where} \isanewline \ \ "glue\_function2(f,g) = Fun(source(f)\isasymunion source(g), target(f)\isasymunion target(g), \isanewline \ \ \ \ \ \isasymlambda x. if x \isasymin\ source(f) then f\isamath{^\backprime} x else g\isamath{^\backprime} x)" \end{isabelle} \begin{isabelle} \isacommand{lemma} glue\_function2\_bij [backward]: \isanewline \ \ "f \isasymin\ A \isasymcong\ B \isasymLongrightarrow\ g \isasymin\ C \isasymcong\ D \isasymLongrightarrow\ A \isasyminter\ C = \isasymemptyset\ \isasymLongrightarrow\ B \isasyminter\ D = \isasymemptyset\ \isasymLongrightarrow \isanewline \ \ \ glue\_function2(f,g) \isasymin\ (A \isasymunion\ C) \isasymcong\ (B \isasymunion\ D)" \end{isabelle} \begin{isabelle} \isacommand{theorem} schroeder\_bernstein: \isanewline \ \ "injective(f) \isasymLongrightarrow\ injective(g) \isasymLongrightarrow\ f \isasymin\ X \isasymrightarrow\ Y \isasymLongrightarrow\ g \isasymin\ Y \isasymrightarrow\ X \isasymLongrightarrow \isanewline \ \ \ equipotent(X,Y)" \isanewline \ \ LET "X\_A = lfp(X, \isasymlambda W.
X -- g\isamath{^{\backprime\backprime}} (Y -- f\isamath{^{\backprime\backprime}} W))" THEN \isanewline \ \ LET "X\_B = X -- X\_A, Y\_A = f\isamath{^{\backprime\backprime}} X\_A, Y\_B = Y -- Y\_A" THEN \isanewline \ \ HAVE "X -- g\isamath{^{\backprime\backprime}} Y\_B = X\_A" THEN \isanewline \ \ HAVE "g\isamath{^{\backprime\backprime}} Y\_B = X\_B" THEN \isanewline \ \ LET "f' = func\_restrict\_image(func\_restrict(f,X\_A))" THEN \isanewline \ \ LET "g' = func\_restrict\_image(func\_restrict(g,Y\_B))" THEN \isanewline \ \ HAVE "glue\_function2(f', inverse(g')) \isasymin\ (X\_A \isasymunion\ X\_B) \isasymcong\ (Y\_A \isasymunion\ Y\_B)" \end{isabelle} \subsection{Rempe-Gillen's challenge} For our second example, we present our solution to a challenge problem proposed by Lasse Rempe-Gillen in a mailing list discussion\footnote{http://www.cs.nyu.edu/pipermail/fom/2014-October/018243.html}. See \cite{hammering} for proofs of the same result in several other systems. The statement to be proved is: \begin{lemma} Let $f$ be a continuous real-valued function on the real line, such that $f(x) > x$ for all $x$. Let $x_0$ be a real number, and define the sequence $x_n$ recursively by $x_{n+1} := f(x_n)$. Then $x_n$ diverges to infinity. \end{lemma} Our solution is as follows. We make use of several previously proved results: any bounded increasing sequence in $\mathbb{R}$ converges (line 2), a continuous function \isa{f} maps a sequence converging to \isa{x} to a sequence converging to \isa{f\isamath{^\backprime} x} (line 4), and finally that the limit of a sequence in $\mathbb{R}$ is unique. \begin{isabelle} \isacommand{lemma} rempe\_gillen\_challenge: \isanewline \ \ "real\_fun(f) \isasymLongrightarrow\ continuous(f) \isasymLongrightarrow\ incr\_arg\_fun(f) \isasymLongrightarrow\ x0 \isasymin. \isamath{\mathbb{R}}\ \isasymLongrightarrow \isanewline \ \ \ S = Seq(\isamath{\mathbb{R}}, \isasymlambda n. nfold(f,n,x0)) \isasymLongrightarrow\ \isasymnot upper\_bounded(S)" \isanewline \ \ HAVE "seq\_incr(S)" WITH HAVE "\isasymforall n\isasymin.\isamath{\mathbb{N}}. S\isamath{^\backprime} (n \isasub{+}{\mathbb{N}}\ 1) \isasub{\ge}{\mathbb{R}}\ S\isamath{^\backprime} n" THEN \isanewline \ \ CHOOSE "x, converges\_to(S,x)" THEN \isanewline \ \ LET "T = Seq(\isamath{\mathbb{R}}, \isasymlambda n. f\isamath{^\backprime} (S\isamath{^\backprime} n))" THEN \isanewline \ \ HAVE "converges\_to(T,f\isamath{^\backprime} x)" THEN \isanewline \ \ HAVE "converges\_to(T,x)" WITH ( \isanewline \ \ \ \ HAVE "\isasymforall r\isasub{>}{\mathbb{R}} \isamath{0_\mathbb{R}}. \isasymexists k\isasymin.\isamath{\mathbb{N}}. \isasymforall n\isasub{\ge}{\mathbb{N}} k. \isasymbar T\isamath{^\backprime} n \isasub{-}{\mathbb{R}} x\isasymbar\isasub{}{\mathbb{R}} \isasub{<}{\mathbb{R}}\ r" WITH ( \isanewline \ \ \ \ \ \ CHOOSE "k \isasymin. \isamath{\mathbb{N}}, \isasymforall n\isasub{\ge}{\mathbb{N}} k. \isasymbar S\isamath{^\backprime} n \isasub{-}{\mathbb{R}}\ x\isasymbar\isasub{}{\mathbb{R}} \isasub{<}{\mathbb{R}}\ r" THEN \isanewline \ \ \ \ \ \ HAVE "\isasymforall n\isasub{\ge}{\mathbb{N}} k. \isasymbar T\isamath{^\backprime} n \isasub{-}{\mathbb{R}}\ x\isasymbar\isasub{}{\mathbb{R}} \isasub{<}{\mathbb{R}}\ r" WITH HAVE "T\isamath{^\backprime} n = S\isamath{^\backprime} (n \isasub{+}{\mathbb{N}}\ 1)")) \end{isabelle}
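The construction in the Schroeder-Bernstein script above is effectively computable on finite sets. As a complement to these scripts, here is a small Python sketch (our own illustration, outside the formalization) that computes the least fixed point $X_A$ of $W \mapsto X - g[Y - f[W]]$ by iteration and glues $f$ on $X_A$ with the inverse of $g$ on $Y_B$:

\begin{verbatim}
def schroeder_bernstein(X, Y, f, g):
    """Build a bijection X -> Y from injections f: X -> Y, g: Y -> X.

    f and g are dicts on finite sets.  Mirrors the proof script:
    X_A is the least fixed point of W |-> X - g[Y - f[W]], and the
    result glues f on X_A with the inverse of g on Y_B.
    """
    X, Y = set(X), set(Y)
    W = set()
    while True:  # iterate the monotone map from the empty set upward
        W_next = X - {g[y] for y in Y - {f[x] for x in W}}
        if W_next == W:
            break
        W = W_next
    X_A = W
    Y_B = Y - {f[x] for x in X_A}   # g maps Y_B bijectively onto X - X_A
    g_inv = {g[y]: y for y in Y_B}  # invert g restricted to Y_B
    return {x: (f[x] if x in X_A else g_inv[x]) for x in X}

# Toy usage with a pair of injective functions.
h = schroeder_bernstein({0, 1, 2}, {'a', 'b', 'c'},
                        {0: 'a', 1: 'b', 2: 'c'},
                        {'a': 1, 'b': 2, 'c': 0})
assert set(h.values()) == {'a', 'b', 'c'}
\end{verbatim}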
We will focus on stating the definitions and main results without proof, to demonstrate the expressiveness of untyped set theory under our framework. The entire formalization including proofs is 864 lines long. Let \isa{I} be the interval \isa{[0,1]}, equipped with the subspace topology from the topology on $\mathbb{R}$. Given two continuous maps \isa{f} and \isa{g} from \isa{S} to \isa{T}, a \emph{homotopy} between \isa{f} and \isa{g} is a continuous map from the product topology on \isa{S \isasymtimes\ I} to \isa{T} that restricts to \isa{f} and \isa{g} at \isa{S \isasymtimes\ \isaset{0}} and \isa{S \isasymtimes\ \isaset{1}}, respectively: \begin{isabelle} \isacommand{definition} is\_homotopy :: "[i, i, i] \isasymRightarrow\ o" \isacommand{where} \isanewline \ \ "is\_homotopy(f,g,F) \isasymlongleftrightarrow\ \isanewline \ \ \ \ \ (let S = source\_str(f) in let T = target\_str(f) in \isanewline \ \ \ \ \ \ continuous(f) \isasymand\ continuous(g) \isasymand\ \isanewline \ \ \ \ \ \ S = source\_str(g) \isasymand\ T = target\_str(g) \isasymand\ F \isasymin\ S \isasub{\times}{\isasty{T}} I \isasub{\isasymrightharpoonup}{\isasty{T}} T \isasymand \isanewline \ \ \ \ \ \ (\isasymforall x\isasymin.S. F\isamath{^\backprime}\isapair{x,\isamath{0_\mathbb{R}}} = f\isamath{^\backprime} x \isasymand\ F\isamath{^\backprime}\isapair{x,\isamath{1_\mathbb{R}}} = g\isamath{^\backprime} x))" \end{isabelle} A \emph{path} is a continuous function from the interval. A homotopy between two paths is a \emph{path homotopy} if it remains constant on \isa{\isaset{0} \isasymtimes\ I} and \isa{\isaset{1} \isasymtimes\ I}: \begin{isabelle} \isacommand{definition} is\_path :: "i \isasymRightarrow\ o" \isacommand{where} \isanewline \ \ "is\_path(f) \isasymlongleftrightarrow\ (f \isasymin\ I \isasub{\isasymrightharpoonup}{\isasty{T}} target\_str(f))" \end{isabelle} \begin{isabelle} \isacommand{definition} is\_path\_homotopy :: "[i, i, i] \isasymRightarrow\ o" \isacommand{where} \isanewline \ \ "is\_path\_homotopy(f,g,F) \isasymlongleftrightarrow\ \isanewline \ \ \ \ (is\_path(f) \isasymand\ is\_path(g) \isasymand\ is\_homotopy(f,g,F) \isasymand \isanewline \ \ \ \ \ (\isasymforall t\isasymin.I. F\isamath{^\backprime}\isapair{\isamath{0_\mathbb{R}},t} = f\isamath{^\backprime}(\isamath{0_\mathbb{R}}) \isasymand\ F\isamath{^\backprime}\isapair{\isamath{1_\mathbb{R}},t} = f\isamath{^\backprime}(\isamath{1_\mathbb{R}})))" \end{isabelle} Two paths are \emph{path-homotopic} if there exists a path homotopy between them. This is an equivalence relation on paths. \begin{isabelle} \isacommand{definition} path\_homotopic :: "i \isasymRightarrow\ i \isasymRightarrow\ o" \isacommand{where} \isanewline \ \ "path\_homotopic(f,g) \isasymlongleftrightarrow\ (\isasymexists F. is\_path\_homotopy(f,g,F))" \end{isabelle} The path product is defined by gluing two morphisms. It is continuous by the pasting lemma: \begin{isabelle} \isacommand{definition} I1 = subspace(\isamath{\mathbb{R}}, closed\_interval(\isamath{\mathbb{R}},\isamath{0_\mathbb{R}},\isaoneR\ \isasub{/}{\mathbb{R}} \isatwoR)) \isanewline \isacommand{definition} I2 = subspace(\isamath{\mathbb{R}}, closed\_interval(\isamath{\mathbb{R}},\isaoneR\ \isasub{/}{\mathbb{R}} \isatwoR,\isamath{1_\mathbb{R}})) \isanewline \isacommand{definition} interval\_lower = Mor(I1,I,\isasymlambda t. \isamath{2_\mathbb{R}}\ \isasub{*}{\mathbb{R}}\ t) \isanewline \isacommand{definition} interval\_upper = Mor(I2,I,\isasymlambda t. 
\isamath{2_\mathbb{R}}\ \isasub{*}{\mathbb{R}}\ t\ \isasub{-}{\mathbb{R}}\ \isamath{1_\mathbb{R}}) \end{isabelle} \begin{isabelle} \isacommand{definition} path\_product :: "i \isasymRightarrow\ i \isasymRightarrow\ i" (\isakeyword{infixl} "\isasymstar" 70) \isakeyword{where} \isanewline \ \ "f \isasymstar\ g = glue\_morphism(I, f \isamath{\circ_{\isasty{m}}}\ interval\_lower, g \isamath{\circ_{\isasty{m}}}\ interval\_upper)" \end{isabelle} The loop space at a point \isa{x} is the set of loops on \isa{X} based at \isa{x}, i.e., paths that start and end at \isa{x}. Path homotopy gives an equivalence relation on the loop space, and we define \isa{loop\_classes} to be the quotient set: \begin{isabelle} \isacommand{definition} loop\_space :: "i \isasymRightarrow\ i \isasymRightarrow\ i" \isacommand{where} \isanewline \ \ "loop\_space(X,x) = \isaset{f \isasymin\ I \isasub{\isasymrightharpoonup}{\isasty{T}} X. f\isamath{^\backprime}(\isamath{0_\mathbb{R}}) = x \isasymand\ f\isamath{^\backprime}(\isamath{1_\mathbb{R}}) = x}" \end{isabelle} \begin{isabelle} \isacommand{definition} loop\_space\_rel :: "i \isasymRightarrow\ i \isasymRightarrow\ i" \isacommand{where} \isanewline \ \ "loop\_space\_rel(X,x) = Equiv(loop\_space(X,x), \isasymlambda f g. path\_homotopic(f,g))" \end{isabelle} \begin{isabelle} \isacommand{definition} loop\_classes :: "i \isasymRightarrow\ i \isasymRightarrow\ i" \isacommand{where} \isanewline \ \ "loop\_classes(X,x) = loop\_space(X,x) // loop\_space\_rel(X,x)" \end{isabelle} Finally, the fundamental group is defined as: \begin{isabelle} \isacommand{definition} fundamental\_group :: "i \isasymRightarrow\ i \isasymRightarrow\ i" ("\isamath{\pi_1}") \isakeyword{where} \isanewline \ \ "\isamath{\pi_1}(X,x) = (let \isamath{\mathcal{R}}\ = loop\_space\_rel(X,x) in \isanewline \ \ \ \ \ Group(loop\_classes(X,x), equiv\_class(\isamath{\mathcal{R}},const\_mor(I,X,x)), \isanewline \ \ \ \ \ \ \ \ \ \ \ \isasymlambda f g. equiv\_class(\isamath{\mathcal{R}},rep(\isamath{\mathcal{R}},f) \isasymstar\ rep(\isamath{\mathcal{R}},g))))" \end{isabelle} To show that the fundamental group is actually a group, we need to show that the path product respects the equivalence relation given by path homotopy, and is associative up to equivalence (along with properties about inverse and identity). The end result is: \begin{isabelle} \isacommand{lemma} fundamental\_group\_is\_group: \isanewline \ \ "is\_top\_space(X) \isasymLongrightarrow\ x \isasymin. X \isasymLongrightarrow\ is\_group(\isamath{\pi_1}(X,x))" \end{isabelle} An important property of the fundamental group is that a continuous function between topological spaces induces a homomorphism between their fundamental groups. This is defined as follows: \begin{isabelle} \isacommand{definition} induced\_mor :: "i \isasymRightarrow\ i \isasymRightarrow\ i" \isacommand{where} \isanewline \ \ "induced\_mor(k,x) = \isanewline \ \ \ \ (let X = source\_str(k) in let Y = target\_str(k) in \isanewline \ \ \ \ \ let \isamath{\mathcal{R}}\ = loop\_space\_rel(X,x) in let \isamath{\mathcal{S}}\ = loop\_space\_rel(Y,k\isamath{^\backprime} x) in \isanewline \ \ \ \ \ Mor(\isamath{\pi_1}(X,x), \isamath{\pi_1}(Y,k\isamath{^\backprime} x), \isasymlambda f.
equiv\_class(\isamath{\mathcal{S}}, k \isamath{\circ_{\isasty{m}}}\ rep(\isamath{\mathcal{R}},f))))" \end{isabelle} The induced map is a homomorphism satisfying functorial properties: \begin{isabelle} \isacommand{lemma} induced\_mor\_is\_homomorphism: \isanewline \ \ "continuous(k) \isasymLongrightarrow\ X = source\_str(k) \isasymLongrightarrow\ Y = target\_str(k) \isasymLongrightarrow \isanewline \ \ \ x \isasymin\ source(k) \isasymLongrightarrow\ induced\_mor(k,x) \isasymin\ \isamath{\pi_1}(X,x) \isasub{\isasymrightharpoonup}{\isasty{G}} \isamath{\pi_1}(Y,k\isamath{^\backprime} x)" \end{isabelle} \begin{isabelle} \isacommand{lemma} induced\_mor\_id: \isanewline \ \ "is\_top\_space(X) \isasymLongrightarrow\ x \isasymin. X \isasymLongrightarrow \isanewline \ \ \ induced\_mor(id\_mor(X),x) = id\_mor(\isamath{\pi_1}(X,x))" \end{isabelle} \begin{isabelle} \isacommand{lemma} induced\_mor\_comp: \isanewline \ \ "continuous(k) \isasymLongrightarrow\ continuous(h) \isasymLongrightarrow \isanewline \ \ \ target\_str(k) = source\_str(h) \isasymLongrightarrow\ x \isasymin\ source(k) \isasymLongrightarrow \isanewline \ \ \ induced\_mor(h \isamath{\circ_{\isasty{m}}}\ k, x) = induced\_mor(h, k\isamath{^\backprime} x) \isamath{\circ_{\isasty{m}}}\ induced\_mor(k, x)" \end{isabelle} \section{Related work} \label{sec:relatedwork} In Isabelle, the main library for formalized mathematics using FOL is Isabelle/ZF. The basics of Isabelle/ZF are described in \cite{paulson1,paulson2}. We also point to \cite{paulson1} for a review of older work on set theory from the automated deduction and artificial intelligence communities. Outside the official library, IsarMathLib \cite{isarmathlib} is a more recent project based on Isabelle/ZF. It formalized more results in abstract algebra and point-set topology, and also constructed the real numbers. The initial parts of our development closely parallel those in Isabelle/ZF, but we go further in several directions including constructing the number system. The primary difference between our work and IsarMathLib is that we use auto2 for proofs, and develop our own system for handling structures, so that we do not make use of Isabelle tactics, Isar, or locales. Outside Isabelle, the major formalization projects using set theory include Metamath \cite{metamath} and Mizar \cite{Mizar}, both of which have extensive mathematical libraries. There are some recent efforts to reproduce the Mizar environment in HOL-type systems \cite{Mizar-HOL2,Mizar-HOL1}. While there are some similarities between our framework and Mizar's, we do not aim for an exact reproduction. In particular, we maintain the typical style of stating definitions and theorems in Isabelle. More comparisons between our approach and Mizar are discussed in Section \ref{sec:discussion}. Mizar formalized not just the definition of the fundamental group \cite{fundamental-group-mizar}, but also several of its properties, including the computation of the fundamental group of the circle. There is also a formalization of path homotopy in HOL Light which was then ported to Isabelle/HOL. This is used for the proof of the Brouwer fixed-point theorem and the Cauchy integral theorem, although the fundamental group itself does not appear to be constructed. In homotopy type theory, one can work with fundamental groups (and higher-homotopy groups) using synthetic definitions.
This has led to formalizations of results about homotopy groups that are well beyond what can be achieved today using standard definitions (see \cite{hott-homotopy-group} for a more recent example). We emphasize that our definition of the fundamental group, as with Mizar's, follows the standard one in set theory. \section{Conclusion} \label{sec:conclusion} We applied the auto2 prover to the formalization of mathematics using untyped set theory. Starting from the axioms of set theory, we formalized the definition of the fundamental group, as well as many other results in set theory, group theory, point-set topology, and real analysis. The entire development contains over 13,000 lines of theory files and 3,500 lines of ML code, taking the author about 5 months to complete. On a laptop with two 2.0GHz cores, it can be compiled in about 24 minutes. Through this work, we demonstrated the ability of auto2 to scale to relatively large projects. We also hope this result can bring renewed interest to formalizing mathematics in untyped set theory in Isabelle.
\section{Introduction} Learning to learn is essential to human intelligence but remains an open area of research in machine learning. \textit{Meta-learning} has emerged as a popular approach to enable models to perform well on new tasks using limited data. It first involves a \textit{meta-training} process, during which the model learns valuable features from a set of tasks. Then, at test time, using only a few datapoints from a new, unseen task, the model (1) \textit{adapts} to this new task (i.e., performs \textit{few-shot learning} with \textit{context data}), and then (2) \textit{infers} by making predictions on new, unseen \textit{query inputs} from the same task. A popular baseline for meta-learning, which has attracted a large amount of attention, is Model-Agnostic Meta-Learning (MAML) \citep{maml}, in which the adaptation process consists of fine-tuning the parameters of the model via gradient descent. However, meta-learning methods can often struggle in several ways when deployed in challenging real-world scenarios. First, when context data is too limited to fully identify the test-time task, accurate prediction can be challenging. As these predictions can be untrustworthy, this necessitates the development of meta-learning methods that can express uncertainty during adaptation \citep{bayesian_maml, alpaca}. In addition, meta-learning models may not successfully adapt to ``unusual'' tasks, i.e., when test-time context data is drawn from an \textit{out-of-distribution} (OoD) task not well represented in the training dataset \citep{ood_maml, meta_learning_ood}. Finally, special care must be taken when learning tasks that have a large degree of heterogeneity. An important example is the case of tasks with a \textit{multimodal} distribution, i.e., when no common features are shared across all the tasks, but the tasks can be broken down into subsets (modes) such that tasks from the same subset share common features \citep{mmaml}. \textbf{Our contributions.}~\, We present \textsc{UnLiMiTD}{} (\textit{uncertainty-aware meta-learning for multimodal task distributions}), a novel meta-learning method that leverages probabilistic tools to address the aforementioned issues. Specifically, \textsc{UnLiMiTD}{} models the true distribution of tasks with a learnable distribution constructed over a linearized neural network and uses analytic Bayesian inference to perform uncertainty-aware adaptation. We present three variants (namely, \approach-\textsc{I}, \approach-\textsc{R}{}, and \approach-\textsc{F}) that reflect a trade-off between learning a rich prior distribution over the weights and maintaining the full expressivity of the network; we show that \approach-\textsc{F}{} strikes a balance between the two, making it the most appealing variant. Finally, we demonstrate that (1) our method allows for efficient probabilistic predictions on in-distribution tasks that compare favorably to, and in most cases outperform, the existing baselines, (2) it is effective in detecting context data from OoD tasks at test time, and (3) both of these findings continue to hold in the multimodal task-distribution setting. The rest of the paper is organized as follows. Section~\ref{sec:problem_statement} formalizes the problem. Section~\ref{sec:background} presents background information on the linearization of neural networks and Bayesian linear regression. We detail our approach and its three variants in Section~\ref{sec:approach}. We discuss related work in detail in Section~\ref{sec:related_work}.
Finally, we present our experimental results concerning the performance of \textsc{UnLiMiTD}{} in Section~\ref{sec:results} and conclude in Section~\ref{sec:conclusion}. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{images/flowchart.pdf} \caption{ The true task distribution $p(f)$ can be multimodal, i.e., containing multiple clusters of tasks (e.g., lines and sines). Our approach \textsc{UnLiMiTD}{} fits $p(f)$ with a parametric, tuneable distribution $\Tilde{p}_\xi(f)$ yielded by Bayesian linear regression on a linearized neural network.} \label{fig:flowchart} \end{figure} \section{Problem statement}\label{sec:problem_statement} A task $\mathcal{T}^i$ consists of a function $f_i$ from which data is drawn. At test time, the prediction steps are broken down into (1) \textit{adaptation}, that is identifying $f_i$ using $K$ context datapoints $(\xinput^i, \youtput^i)$ from the task, and (2) \textit{inference}, that is making predictions for $f_i$ on the \textit{query inputs} $\xinput^i_*$. Later the predictions can be compared with the \textit{query ground-truths} $\youtput^i_*$ to estimate the quality of the prediction, for example in terms of mean squared error (MSE). The meta-training consists in learning valuable features from a \textit{cluster of tasks}, which is a set of similar tasks (e.g., sines with different phases and amplitudes but same frequency), so that at test time the predictions can be accurate on tasks from the same cluster. We take a probabilistic, functional perspective and represent a cluster by $p(f)$, a theoretical distribution over the function space that describes the probability of a task belonging to the cluster. Learning $p(f)$ is appealing, as it allows for performing OoD detection in addition to making predictions. Adaptation amounts to computing the conditional distribution given test context data, and one can obtain an uncertainty metric by evaluating the negative log-likelihood (NLL) of the context data under $p(f)$. Thus, our goal is to construct a parametric, learnable functional distribution $\Tilde{p}_\xi(f)$ that approaches the theoretical distribution $p(f)$, with a structure that allows tractable conditioning and likelihood computation, even in deep learning contexts. In practice, however, we are not given $p(f)$, but only a meta-training dataset $\mathcal{D}$ that we assume is sampled from $p(f)$: $\mathcal{D}=\{ (\widetilde{\xinput}^i, \widetilde{\youtput}^i)\}_{i=1}^{N}$, where $N$ is the number of tasks available during training, and $(\widetilde{\xinput}^i, \widetilde{\youtput}^i) \sim \mathcal{T}^i$ is the entire pool of data from which we can draw subsets of context data $(\xinput^i, \youtput^i)$. Consequently, in the meta-training phase, we aim to optimize $\Tilde{p}_\xi(f)$ to capture properties of $p(f)$, using only the samples in $\mathcal{D}$. Once we have $\Tilde{p}_\xi(f)$, we can evaluate it both in terms of how it performs for few-shot learning (by comparing the predictions with the ground truths in terms of MSE), as well as for OoD detection (by measuring how well the NLL of context data serves to classify in-distribution tasks against OoD tasks, measured via the AUC-ROC score). \nopagebreak[4] \section{Background} \label{sec:background} \subsection{Bayesian linear regression and Gaussian Processes} \label{sec:reglin} Efficient Bayesian meta-learning requires a tractable inference process at test time. In general, this is only possible analytically in a few cases. 
One of them is the Bayesian linear regression with Gaussian noise and a Gaussian prior on the weights. Viewing it from a nonparametric, functional approach, this model is equivalent to a Gaussian process (GP) \citep{rasmussen}. Let ${\bm{X}} = ({\bm{x}}_1, \dots, {\bm{x}}_K) \in \mathbb{R}^{{N_x} \times K}$ be a batch of $K$ ${N_x}$-dimensional inputs, and let ${\bm{y}} = ({\bm{y}}_1, \dots, {\bm{y}}_K) \in \mathbb{R}^{{N_y} K}$ be a vectorized batch of ${N_y}$-dimensional outputs. In the Bayesian linear regression model, these quantities are related according to $ {\bm{y}} = \phi({\bm{X}})^\top \hat{\param} + \varepsilon \in \mathbb{R}^{{N_y} K} $ where $\hat{\param} \in \mathbb{R}^P$ are the weights of the model, and the inputs are mapped via $\phi:\mathbb{R}^{{N_x} \times K} \rightarrow \mathbb{R}^{P \times {N_y} K}$. Notice how this is a generalization of the usual one-dimensional linear regression (${N_y}=1$). If we assume a Gaussian prior on the weights $\hat{\param} \sim \mathcal{N}({\bm{\mu}}, {\bm{\Sigma}})$ and a Gaussian noise $\varepsilon \sim \mathcal{N}(\bm{0}, {\bm{\Sigma}}_\varepsilon)$ with ${\bm{\Sigma}}_\varepsilon = \sigma_\varepsilon^2 {\bm{I}}$, then the model describes a multivariate Gaussian distribution on ${\bm{y}}$ for any ${\bm{X}}$. Equivalently, this means that this model describes a GP distribution over functions, with mean and covariance function (or kernel) \begin{align} \begin{split} \label{eq:prior_pred_dist} {\bm{\mu}}_{\text{prior}} ({\bm{x}}_t) & = \phi({\bm{x}}_t)^\top {\bm{\mu}}, \\ \text{cov}_{\text{prior}} ({\bm{x}}_{t_1}, {\bm{x}}_{t_2}) & = \phi({\bm{x}}_{t_1})^\top {\bm{\Sigma}} \phi({\bm{x}}_{t_2}) + {\bm{\Sigma}}_\varepsilon =: k_{\bm{\Sigma}}({\bm{x}}_{t_1}, {\bm{x}}_{t_2}) + {\bm{\Sigma}}_\varepsilon . \end{split} \end{align} This GP enables tractable computation of the likelihood of any batch of data $({\bm{X}}, {\bm{Y}})$ given this distribution over functions. The structure of this distribution is governed by the feature map $\phi$ and the prior over the weights, specified by ${\bm{\mu}}$ and ${\bm{\Sigma}}$. This distribution can also easily be conditioned to perform inference. Given a batch of data $({\bm{X}}, {\bm{Y}})$, the posterior predictive distribution is also a GP, with an updated mean and covariance function \begin{align} \label{eq:post_pred_dist} \begin{split} {\bm{\mu}}_{\text{post}} ({\bm{x}}_{t_*}) & = k_{\bm{\Sigma}}({\bm{x}}_{t_*}, {\bm{X}}) \left( k_{\bm{\Sigma}}({\bm{X}}, {\bm{X}}) + {\bm{\Sigma}}_\varepsilon \right)^{-1} {\bm{Y}}, \\ \text{cov}_{\text{post}} ({\bm{x}}_{{t_1}_*}, {\bm{x}}_{{t_2}_*}) & = k_{\bm{\Sigma}}({\bm{x}}_{{t_1}_*}, {\bm{x}}_{{t_2}_*}) - k_{\bm{\Sigma}}({\bm{x}}_{{t_1}_*}, {\bm{X}}) \left( k_{\bm{\Sigma}}({\bm{X}}, {\bm{X}}) + {\bm{\Sigma}}_\varepsilon \right)^{-1} k_{\bm{\Sigma}}({\bm{X}}, {\bm{x}}_{{t_2}_*}). \end{split} \end{align} Here, ${\bm{\mu}}_{\text{post}}({\bm{X}}_*)$ represents our model's adapted predictions for the test data, which we can compare to ${\bm{Y}}_*$ to evaluate the quality of our predictions, for example, via mean squared error (assuming that test data is clean, following \citet{rasmussen}). The diagonal of $\text{cov}_{\text{post}}({\bm{X}}_*, {\bm{X}}_*)$ can be interpreted as a per-input level of confidence that captures the ambiguity in making predictions with only a limited amount of context data. 
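As a concrete reference for \eqref{eq:prior_pred_dist} and \eqref{eq:post_pred_dist}, the following NumPy sketch computes the posterior predictive mean and covariance for a generic feature map. It is our own illustration (all names are ours), and it includes the prior-mean correction terms for a nonzero ${\bm{\mu}}$; \eqref{eq:post_pred_dist} corresponds to the zero-mean case.

\begin{verbatim}
import numpy as np

def gp_posterior(phi, mu, Sigma, sigma_eps, X_ctx, Y_ctx, X_query):
    # phi(X): (P, n) feature matrix for n vectorized outputs;
    # mu: (P,) prior mean; Sigma: (P, P) prior covariance over weights.
    def k(X1, X2):  # kernel k_Sigma(x1, x2)
        return phi(X1).T @ Sigma @ phi(X2)

    K = k(X_ctx, X_ctx) + sigma_eps**2 * np.eye(len(Y_ctx))
    K_star = k(X_query, X_ctx)  # cross-covariance query/context
    # Posterior mean: prior mean plus the data-driven correction.
    alpha = np.linalg.solve(K, Y_ctx - phi(X_ctx).T @ mu)
    mean = phi(X_query).T @ mu + K_star @ alpha
    # Posterior covariance: prior kernel minus the explained part.
    cov = k(X_query, X_query) - K_star @ np.linalg.solve(K, K_star.T)
    return mean, cov

# Toy usage: 1-D inputs with a hand-picked 4-dimensional feature map.
phi = lambda X: np.vstack([np.sin(X), np.cos(X), X, np.ones_like(X)])
mean, cov = gp_posterior(phi, np.zeros(4), np.eye(4), 0.1,
                         np.array([0.0, 1.0]), np.array([0.0, 0.9]),
                         np.array([0.5]))
\end{verbatim}

The diagonal of \texttt{cov} gives the per-query confidence discussed above.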
\subsection{The linearization of a neural network yields an expressive linear regression model} \label{sec:linearization} As discussed, the choice of feature map $\phi$ plays an important role in specifying a linear regression model. In the deep learning context, recent work has demonstrated that the linear model obtained when linearizing a deep neural network with respect to its weights at initialization, wherein the Jacobian of the network operates as the feature map, can well approximate the training behavior of wide nonlinear deep neural networks \citep{jacot,nonlinear,liu2020linearity,shallow_nns_infinite_width,dnns_infinite_wdith}. Let $f$ be a neural network $f: \left({\bm{\theta}}, {\bm{x}}_t \right) \mapsto {\bm{y}}_t$, where ${\bm{\theta}} \in \mathbb{R}^{P}$ are the parameters of the model, ${\bm{x}} \in \mathbb{R}^{{N_x}}$ is an input and ${\bm{y}} \in \mathbb{R}^{{N_y}}$ an output. The linearized network (w.r.t. the parameters) around $\param_0$ is \begin{displaymath} f({\bm{\theta}}, {\bm{x}}_t) - f(\param_0, {\bm{x}}_t) \approx {\bm{J}}_{\bm{\theta}}( f )(\param_0, {\bm{x}}_t) ({\bm{\theta}} - \param_0), \end{displaymath} where ${\bm{J}}_{\bm{\theta}}(f)(\cdot, \cdot): \mathbb{R}^P \times \mathbb{R}^{N_x} \rightarrow \mathbb{R}^{{N_y} \times P}$ is the Jacobian of the network (w.r.t. the parameters). In the case where the model accepts a batch of $K$ inputs ${\bm{X}} = ({\bm{x}}_1, \dots, {\bm{x}}_K)$ and returns ${\bm{Y}} = ({\bm{y}}_1, \dots, {\bm{y}}_K)$, we generalize $f$ to $g: \mathbb{R}^P \times \mathbb{R}^{{N_x} \times K} \rightarrow \mathbb{R}^{{N_y} \times K}$, with ${\bm{Y}} = g({\bm{\theta}}, {\bm{X}})$. Consequently, we generalize the linearization: \begin{displaymath} g({\bm{\theta}}, {\bm{X}}) - g(\param_0, {\bm{X}}) \approx {\bm{J}}(\param_0, {\bm{X}}) ({\bm{\theta}} - \param_0), \end{displaymath} where ${\bm{J}}(\cdot, \cdot): \mathbb{R}^P \times \mathbb{R}^{{N_x} \times K} \rightarrow \mathbb{R}^{{N_y} K \times P}$ is a shorthand for ${\bm{J}}_{\bm{\theta}}(g)(\cdot, \cdot)$. Note that we have implicitly vectorized the outputs, and throughout the work, we will interchange the matrices $\mathbb{R}^{{N_y} \times K}$ and the vectorized matrices $\mathbb{R}^{{N_y} K}$. This linearization can be viewed as the ${N_y} K$-dimensional linear regression \begin{equation} \label{eq:linearized_network} {\bm{z}} = \phi_{\param_0}({\bm{X}})^\top \hat{\param} \in \mathbb{R}^{{N_y} K}, \end{equation} where the feature map $\phi_{\param_0}(\cdot): \mathbb{R}^{{N_x} \times K} \rightarrow \mathbb{R}^{P \times {N_y} K}$ is the transposed Jacobian ${\bm{J}}(\param_0, \cdot)^\top$. The parameters of this linear regression $\hat{\param} = \left( {\bm{\theta}} - \param_0 \right)$ are the \textit{correction} to the parameters chosen as the linearization point. Equivalently, this can be seen as a kernel regression with the kernel $ k_{\param_0}({\bm{X}}_1,{\bm{X}}_2) = {\bm{J}}(\param_0, {\bm{X}}_1) {\bm{J}}(\param_0, {\bm{X}}_2)^\top$, which is commonly referred to as the Neural Tangent Kernel (NTK) of the network. Note that the NTK depends on the linearization point $\param_0$. Building on these ideas, \citet{maddox} show that the NTK obtained via linearizing a DNN \textit{after} it has been trained on a task yields a GP that is well-suited for adaptation and fine-tuning to new, similar tasks. 
\citet{maddox} further show that networks trained on similar tasks tend to have similar Jacobians, suggesting that neural network linearization can yield an effective model for multi-task contexts such as meta-learning. In this work, we leverage these insights to construct our parametric functional distribution $\Tilde{p}_\xi(f)$ by linearizing a neural network model. \section{Our approach: \textsc{UnLiMiTD}} \label{sec:approach} In this section, we describe our meta-learning algorithm \textsc{UnLiMiTD}{} and the construction of a parametric functional distribution $\Tilde{p}_\xi(f)$ that can model the true underlying distribution over tasks $p(f)$. First, we focus on the single cluster case, where a Gaussian process structure on $\Tilde{p}_\xi(f)$ can effectively model the true distribution of tasks, and detail how we can leverage meta-training data $\mathcal{D}$ from a single cluster of tasks to train the parameters $\xi$ of our model. Next, we generalize our approach to the multimodal setting, with more than one cluster of tasks. Here, we construct $\Tilde{p}_\xi(f)$ as a mixture of GPs and develop a training approach that can automatically identify the clusters present in the training dataset, without requiring the meta-training dataset to contain any additional structure such as cluster labels. \subsection{Tractably structuring the prior predictive distribution over functions via a Gaussian distribution over the weights} In our approach, we choose $\Tilde{p}_\xi(f)$ to be the GP distribution over functions that arises from a Gaussian prior on the weights of the linearization of a neural network (\eqref{eq:linearized_network}). Consider a particular task $\mathcal{T}^i$ and a batch of $K$ context data $(\xinput^i, \youtput^i)$. The resulting prior predictive distribution, derived from \eqref{eq:prior_pred_dist} after evaluating on the context inputs, is ${\bm{Y}} | \xinput^i \sim \mathcal{N}( {\bm{\mu}}_{\youtput \mid \xcontextinput}, {\bm{\Sigma}}_{\youtput \mid \xcontextinput})$, where \begin{equation} \label{eq:prior_pred_dist_ntk} {\bm{\mu}}_{\youtput \mid \xcontextinput} = {\bm{J}}(\param_0, \xinput^i) {\bm{\mu}}, \quad {\bm{\Sigma}}_{\youtput \mid \xcontextinput} = {\bm{J}}(\param_0, \xinput^i) {\bm{\Sigma}} {\bm{J}}(\param_0, \xinput^i)^\top + {\bm{\Sigma}}_\varepsilon. \end{equation} In this setup, the parameters $\xi$ of $\Tilde{p}_\xi(f)$ that we wish to optimize are the linearization point $\param_0$, and the parameters of the prior over the weights $({\bm{\mu}}, {\bm{\Sigma}})$. Given this Gaussian prior, it is straightforward to compute the joint NLL of the context labels $\youtput^i$, \begin{align} \label{eq:single-nll} \mathrm{NLL}(\xinput^i, \youtput^i) = \frac12\left( \left\| \youtput^i - {\bm{\mu}}_{\youtput \mid \xcontextinput} \right\|^2_{{\bm{\Sigma}}_{\youtput \mid \xcontextinput}^{-1}} + \log\det {\bm{\Sigma}}_{\youtput \mid \xcontextinput} + {N_y} K \log 2 \pi \right). \end{align} The NLL (a) serves as a loss function quantifying the quality of $\xi$ during training and (b) serves as an uncertainty signal at test time to evaluate whether context data $(\xinput^i, \youtput^i)$ is OoD. Given this model, \textit{adaptation} is tractable, as we can condition this GP on the context data analytically. In addition, we can efficiently make probabilistic predictions by evaluating the mean and covariance of the resulting posterior predictive distribution on the query inputs, using \eqref{eq:post_pred_dist}.
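A numerically stable way to evaluate \eqref{eq:single-nll}, with \texttt{mean} and \texttt{cov} the quantities of \eqref{eq:prior_pred_dist_ntk}, is via a Cholesky factorization; a minimal sketch:
\begin{verbatim}
import numpy as np

def gauss_nll(y, mean, cov):
    # Joint NLL of the context labels under N(mean, cov).
    r = y - mean
    L = np.linalg.cholesky(cov)   # cov = L L^T
    z = np.linalg.solve(L, r)     # so that z^T z = r^T cov^{-1} r
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (z @ z + logdet + y.size * np.log(2.0 * np.pi))
\end{verbatim}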
\subsubsection{Parameterizing the prior covariance over the weights} \label{sec:prior_covariance} When working with deep neural networks, the number of weights $P$ can surpass $10^6$. While it remains tractable to deal with $\param_0$ and ${\bm{\mu}}$, whose memory footprint grows linearly with $P$, it can quickly become intractable to make computations with (let alone store) a dense prior covariance matrix over the weights ${\bm{\Sigma}} \in \mathbb{R}^{P \times P}$. Thus, we must impose some structural assumptions on the prior covariance to scale to deep neural network models. \textbf{Imposing a unit covariance.}~\, One simple way to tackle this issue would be to remove ${\bm{\Sigma}}$ from the learnable parameters $\xi$, i.e., fixing it to the identity ${\bm{\Sigma}} = {\bm{I}}_{P}$. In this case, $\xi = (\param_0, {\bm{\mu}})$. This computational benefit comes at the cost of model expressivity, as we lose a degree of freedom in how we can optimize our learned prior distribution $\Tilde{p}_\xi(f)$. In particular, we are unable to choose a prior over the weights of our model that captures correlations between elements of the feature map. \textbf{Learning a low-dimensional representation of the covariance.}~\, An alternative is to learn a low-rank representation of ${\bm{\Sigma}}$, allowing for a learnable weight-space prior covariance that can encode correlations. Specifically, we consider a covariance of the form ${\bm{\Sigma}} = {\bm{Q}}^\top \diag{{\bm{s}}^2} {\bm{Q}}$, where ${\bm{Q}}$ is a fixed projection matrix on an $s$-dimensional subspace of $\mathbb{R}^{P}$, while ${\bm{s}}^2$ is learnable. In this case, the parameters that are learned are $\xi = (\param_0, {\bm{\mu}}, {\bm{s}})$. We define ${\bm{S}} := \diag{{\bm{s}}^2}$. The computation of the covariance of the prior predictive (\eqref{eq:prior_pred_dist_ntk}) can then be broken down into two steps: \begin{displaymath} \left\{ \begin{array}{l} A := {\bm{J}}(\param_0, \xinput^i) {\bm{Q}}^\top \\ {\bm{J}}(\param_0, \xinput^i) {\bm{\Sigma}} {\bm{J}}(\param_0, \xinput^i)^\top = A {\bm{S}} A^\top \end{array} \right. \end{displaymath} which requires a memory footprint of $O(P(s + {N_y} K))$, if we include the storage of the Jacobian. Because ${N_y} K \ll P$ in typical deep learning contexts, it suffices that $s \ll P$ so that it becomes tractable to deal with this new representation of the covariance (see the sketch below). \textbf{A trade-off between feature-map expressiveness and learning a rich prior over the weights.} Note that even if a low-dimensional representation of ${\bm{\Sigma}}$ enriches the prior distribution over the weights, it also restricts the expressiveness of the feature map in the kernel by projecting the $P$-dimensional features ${\bm{J}}(\param_0, {\bm{X}})$ on a subspace of size $s \ll P$ via ${\bm{Q}}$. This presents a trade-off: we can use the full feature map, but limit the weight-space prior covariance to be the identity matrix by keeping ${\bm{\Sigma}} = {\bm{I}}$ (case \approach-\textsc{I}). Alternatively, we could learn a low-rank representation of ${\bm{\Sigma}}$ by randomly choosing $s$ orthogonal directions in $\mathbb{R}^{P}$, with the risk that they could limit the expressiveness of the feature map if the directions are not relevant to the problem at hand (case \approach-\textsc{R}).
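The two-step computation above never forms the $P \times P$ matrix ${\bm{\Sigma}}$ explicitly; a minimal sketch, where the shapes in the comments are the only assumptions:
\begin{verbatim}
import numpy as np

def prior_pred_cov(J, Q, s, sigma_eps):
    # J: (Ny*K, P) Jacobian features; Q: (s_dim, P) fixed projection;
    # s: (s_dim,) learnable scales, so that Sigma = Q^T diag(s^2) Q.
    A = J @ Q.T               # projected features, shape (Ny*K, s_dim)
    cov = (A * s**2) @ A.T    # A diag(s^2) A^T
    return cov + sigma_eps**2 * np.eye(J.shape[0])
\end{verbatim}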
As a compromise between \approach-\textsc{I}{} and \approach-\textsc{R}, we can choose the projection matrix more intelligently and project to the most impactful subspace of the full feature map --- in this way, we can reap the benefits of a tuneable prior covariance while minimizing the useful features that the projection drops. To select this subspace, we construct this projection map by choosing the top $s$ eigenvectors of the Fisher information matrix (FIM) evaluated on the training dataset $\mathcal{D}$ (case \approach-\textsc{F}). Recent work has shown that the FIM for deep neural networks tends to have rapid spectral decay \citep{scod}, which suggests that keeping only a few of the top eigenvectors of the FIM is enough to encode an expressive task-tailored prior. See Appendix~\ref{app:fim} for more details. \subsubsection{Generalizing the structure to a mixture of Gaussians} \label{sec:mixture} When learning on multiple clusters of tasks, $p(f)$ can become multimodal, and thus cannot be accurately described by a single GP. Instead, we can capture this multimodality by structuring $\Tilde{p}_\xi(f)$ as a \textit{mixture} of Gaussian processes. \textbf{Building a more general structure.}~\, We assume that at train time, a task $\mathcal{T}^i$ comes from any cluster $\left\{\mathcal{C}_j \right\}_{j=1}^{j=\alpha}$ with equal probability. Thus, we choose to construct $\Tilde{p}_\xi(f)$ as an equal-weighted mixture of $\alpha$ Gaussian processes. For each element of the mixture, the structure is similar to the single cluster case, where the parameters of the cluster's weight-space prior are given by $({\bm{\mu}}_j, {\bm{\Sigma}}_j)$. We choose to have both the projection matrix ${\bm{Q}}$ and the linearization point $\param_0$ (and hence, the feature map $\phi(\cdot) = {\bm{J}}(\param_0,\cdot)$) shared across the clusters. This yields improved computational efficiency, as we can compute the projected features once, simultaneously, for all clusters. Altogether, the parameters are $\xi_\alpha = (\param_0, {\bm{Q}}, ({\bm{\mu}}_1, {\bm{s}}_1), \ldots, ({\bm{\mu}}_\alpha, {\bm{s}}_\alpha))$. This can be viewed as a mixture of linear regression models, with a common feature map but separate, independent prior distributions over the weights for each cluster. These separate distributions are encoded using the low-dimensional representations ${\bm{S}}_j$ for each ${\bm{\Sigma}}_j$. Notice how this generalizes the single cluster case: when $\alpha=1$, $\Tilde{p}_\xi(f)$ becomes a Gaussian and $\xi_\alpha = \xi$\footnote{In theory, it is possible to drop ${\bm{Q}}$ and extend the identity covariance case to the multi-cluster setting; however, this leads to each cluster having an identical covariance function, and thus is not effective at modeling heterogeneous behaviors among clusters.}. \textbf{Prediction and likelihood computation.}~\, The NLL of a batch of inputs under this mixture model can be computed as \begin{equation} \label{eq:nll_mixture} \mathrm{NLL}_{\text{mixt}}(\xinput^i, \youtput^i) = \log \alpha - \log \add \exp (-\mathrm{NLL}_1(\xinput^i, \youtput^i), \ldots, -\mathrm{NLL}_\alpha(\xinput^i, \youtput^i)), \end{equation} where $\mathrm{NLL}_j(\xinput^i, \youtput^i)$ is the NLL with respect to each individual Gaussian, as computed in \eqref{eq:single-nll}, and $\log\add\exp$ computes the logarithm of the sum of the exponential of the arguments, taking care to avoid underflow issues.
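A sketch of \eqref{eq:nll_mixture}, using SciPy's numerically stable \texttt{logsumexp}:
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def mixture_nll(per_cluster_nlls):
    # Equal-weighted mixture of alpha Gaussians;
    # per_cluster_nlls[j] is the NLL under cluster j's Gaussian.
    nlls = np.asarray(per_cluster_nlls)
    return np.log(nlls.size) - logsumexp(-nlls)
\end{verbatim}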
To make exact predictions, we would require conditioning this mixture model. As this is not directly tractable, we propose to first \textit{infer the cluster} from which a task comes, by identifying the Gaussian $\mathcal{G}_{j_0}$ that yields the highest likelihood for the context data $\left( \xinput^i, \youtput^i \right)$. Then, we can \textit{adapt} by conditioning $\mathcal{G}_{j_0}$ with the context data and finally \textit{infer} by evaluating the resulting posterior distribution on the queried inputs $\xinput^i_*$. \subsection{Meta-training the parametric task distribution} The key to our meta-learning approach is to estimate the quality of $\Tilde{p}_\xi(f)$ via the NLL of context data from training tasks, and use its gradients to update the parameters of the distribution $\xi$. Optimizing this loss over tasks in the dataset draws $\Tilde{p}_\xi(f)$ closer to the empirical distribution present in the dataset, and hence towards the true distribution $p(f)$. We present three versions of \textsc{UnLiMiTD}, depending on the choice of structure of the prior covariance over the weights (see Section~\ref{sec:prior_covariance} for more details). \approach-\textsc{I}{} (Algorithm~\ref{alg:meta_training_identity}) is the meta-training with the fixed identity prior covariance. \approach-\textsc{R}{} and \approach-\textsc{F}{} (Algorithm~\ref{alg:meta_training_learnt_cov}) learn a low-dimensional representation of that prior covariance, either with random projections or with FIM-based projections. \begin{algorithm}[t] \caption{\footnotesize \approach-\textsc{I}: meta-training with identity prior covariance} \footnotesize \label{alg:meta_training_identity} \begin{algorithmic}[1] \State Initialize $\param_0$, ${\bm{\mu}}$. \ForAll{epoch} \State Sample $n$ tasks $\{ \mathcal{T}^i, (\xinput^i, \youtput^i) \}_{i=1}^{i=n}$ \ForAll{$\mathcal{T}^i, (\xinput^i, \youtput^i)$} \State $NLL_i \gets \Call{GaussNLL}{\youtput^i; {\bm{J}}{\bm{\mu}},~ {\bm{J}}\jac^\top + {\bm{\Sigma}}_\varepsilon}$ \Comment{${\bm{J}} = {\bm{J}}(\param_0, \xinput^i)$} \EndFor \State Update $\param_0$, ${\bm{\mu}}$ with $\nabla_{\param_0 \cup {\bm{\mu}}} \sum_i NLL_i$ \EndFor \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{\footnotesize \approach-\textsc{R}{} and \approach-\textsc{F}{}: meta-training with a learnt covariance} \footnotesize \label{alg:meta_training_learnt_cov} \begin{algorithmic}[1] \If{using random projections} \State Find random projection ${\bm{Q}}$ \State Initialize $\param_0$, ${\bm{\mu}}$, ${\bm{s}}$ \ElsIf{using FIM-based projections} \State Find intermediate $\param_0$, ${\bm{\mu}}$ with \approach-\textsc{I}{} \Comment{see Alg.~\ref{alg:meta_training_identity}} \State Find ${\bm{Q}}$ via \Call{FIMProj}{s}; initialize ${\bm{s}}$. \Comment{see Alg.~\ref{alg:fim_proj}} \EndIf \ForAll{epoch} \State Sample $n$ tasks $\{ \mathcal{T}^i, (\xinput^i, \youtput^i) \}_{i=1}^{i=n}$ \ForAll{$\mathcal{T}^i, (\xinput^i, \youtput^i)$} \State $NLL_i \gets \Call{GaussNLL}{\youtput^i; {\bm{J}}{\bm{\mu}},~ {\bm{J}} {\bm{Q}}^\top \diag{{\bm{s}}^2} {\bm{Q}} {\bm{J}}^\top + {\bm{\Sigma}}_\varepsilon}$ \Comment{${\bm{J}} = {\bm{J}}(\param_0, \xinput^i)$} \EndFor \State Update $\param_0$, ${\bm{\mu}}$, ${\bm{s}}$ with $\nabla_{\param_0 \cup {\bm{\mu}} \cup {\bm{s}}} \sum_i NLL_i$ \EndFor \end{algorithmic} \end{algorithm} \textbf{Computing the likelihood.}~\, In the algorithms, the function \Call{GaussNLL}{$\youtput^i$; $m$, $K$} stands for the NLL of $\youtput^i$ under the Gaussian $\mathcal{N}(m, K)$ (see \eqref{eq:single-nll}).
In the mixture case, we instead use \Call{MixtNLL}{}, which wraps \eqref{eq:nll_mixture} and calls \Call{GaussNLL}{} for the individual NLL computations (see discussion in Section~\ref{sec:mixture}). In this case, ${\bm{\mu}}$ becomes $\{{\bm{\mu}}_j\}_{j=1}^{j=\alpha}$ and ${\bm{s}}$ becomes $\{{\bm{s}}_j\}_{j=1}^{j=\alpha}$ when applicable. \textbf{Finding the FIM-based projections.}~\, The FIM-based projection matrix aims to identify the elements of $\phi = {\bm{J}}(\param_0, {\bm{X}})$ that are most relevant for the problem (see Section~\ref{sec:prior_covariance} and Appendix~\ref{app:fim}). However, this feature map evolves during training, because it is $\param_0$-dependent. How do we ensure that the directions we choose for ${\bm{Q}}$ remain relevant during training? We leverage results from \citet{ntk_evolution}, stating that the NTK (the kernel associated with the Jacobian feature map, see Section~\ref{sec:linearization}) changes significantly at the beginning of training and that its evolution slows down as training goes on. This suggests that, as a heuristic, we can compute the FIM-based directions after partial training, as they are unlikely to deviate much after the initial training. For this reason, \approach-\textsc{F}{} (Algorithm~\ref{alg:meta_training_learnt_cov}) first calls \approach-\textsc{I}{} (Algorithm~\ref{alg:meta_training_identity}), which yields intermediate parameters $\param_0$ and ${\bm{\mu}}$, before computing the FIM-based ${\bm{Q}}$. The usual training then takes place, learning ${\bm{s}}$ in addition to $\param_0$ and ${\bm{\mu}}$. \section{Related work} \label{sec:related_work} \textbf{Bayesian inference with linearized DNNs.}~\, Bayesian inference with neural networks is often intractable because the posterior predictive rarely has a closed-form expression. Whereas \textsc{UnLiMiTD}{} linearizes the network to allow for practical Bayesian inference, existing work has used other approximations to tractably express the posterior. For example, it has been shown that in the infinite-width approximation, the posterior predictive of a Bayesian neural network behaves like a GP \citep{shallow_nns_infinite_width, dnns_infinite_wdith}. This analysis can in some cases yield a good approximation to the Bayesian posterior of a DNN \citep{cnns_infinite_width}. It is also common to use Laplace's method to approximate the posterior predictive by a Gaussian distribution and allow practical use of the Bayesian framework for neural networks. This approximation relies in particular on the computation of the Hessian of the network: this is in general intractable, and most approaches use the so-called Gauss-Newton approximation of the Hessian instead \citep{laplace_scalable}. Recently, it has been shown that the Laplace method using the Gauss-Newton approximation is equivalent to working with a certain linearized version of the network and its resulting posterior GP \citep{laplace_linearization}. Bayesian inference is applied across a wide range of domains. For example, recent advances in transfer learning have been possible thanks to Bayesian inference with linearized neural networks. \citet{maddox} have linearized pre-trained networks and performed domain adaptation by conditioning the prior predictive with data from the new task: the posterior predictive is then used to make predictions. Our approach leverages a similar adaptation method and demonstrates how the prior distribution can be learned in a meta-learning setup.
\textbf{Meta-learning.}~\, MAML is a meta-learning algorithm that uses a few steps of gradient descent for adaptation \citep{maml}. It has the benefit of being model-agnostic (it can be used on any model for which we can compute gradients w.r.t. the weights), whereas \textsc{UnLiMiTD}{} requires the model to be a differentiable regressor. MAML has been further generalized to probabilistic meta-learning models such as PLATIPUS or BaMAML \citep{bayesian_maml, probabilistic_maml}, where the simple gradient descent step is augmented to perform approximate Bayesian inference. These approaches, like ours, learn (during meta-training) and make use of (at test-time) a prior distribution on the weights. In contrast, however, \textsc{UnLiMiTD}{} uses exact Bayesian inference at test-time. MAML has also been improved for multimodal meta-learning via MMAML \citep{mmaml, revisit_mmaml}. Similarly to our method, they add a step to identify the cluster from which the task comes \citep{mmaml}. OoD detection in meta-learning has been studied by \citet{ood_maml}, who build upon MAML to perform OoD detection in the classification setting, to identify classes unseen during training. \citet{meta_learning_ood} also implemented OoD detection for classification, by learning a Gaussian mixture model on a latent space. \textsc{UnLiMiTD}{} extends these ideas to the regression task, aiming to identify when test data is drawn from an unfamiliar function. ALPaCA is a Bayesian meta-learning algorithm for neural networks, where only the last layer is Bayesian \citep{alpaca}. Such a framework yields an exact linear regression that uses the activations right before the last layer as its feature map. Our work is a generalization of ALPaCA, in the sense that \textsc{UnLiMiTD}{} restricted to the last layer matches ALPaCA's approach. This link between the methods is discussed further in Appendix~\ref{app:link_with_alpaca}. \section{Results and discussion} \label{sec:results} We wish to evaluate four key aspects of \textsc{UnLiMiTD}. (1) At test time, how do the probabilistic predictions compare to baselines? (2) How well does the detection of context data from OoD tasks perform? (3) How do these results hold in the multimodal setting? (4) Which approach performs best among (a) the identity covariance (\approach-\textsc{I}), (b) the low-dimensional covariance with random directions (\approach-\textsc{R}), and (c) the compromise using FIM-based directions (\approach-\textsc{F}) (see the trade-off in Section~\ref{sec:prior_covariance})? That is, which works best: learning a rich prior distribution over the weights, keeping the full feature map, or a compromise between the two? We consider a cluster of sine tasks, one of linear tasks and one of quadratic tasks, regression problems inspired by \citet{mmaml}. Details on the problems can be found in Appendix~\ref{app:problem-details}. \textbf{Unimodal meta-learning: The meta-learned prior accurately fits the tasks.}~\, First, we investigate the performance of \textsc{UnLiMiTD}{} on a unimodal task distribution consisting of sinusoids of varying amplitude and phase, using the single GP structure for $\Tilde{p}_\xi(f)$. We compare the performance between \approach-\textsc{I}, \approach-\textsc{R}{} and \approach-\textsc{F}. We also compare the results between training with an infinite amount of available sine tasks (infinite task dataset), and with a finite amount of available tasks (finite task dataset). More training details can be found in Appendix~\ref{app:train-details-single}.
Examples of predictions at test time are available in Figure~\ref{fig:single-predictions}, along with confidence levels. \begin{figure}[t] \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/predictions/single_pred_fim_infinite_1.pdf} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/predictions/single_pred_fim_infinite_5.pdf} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/predictions/single_pred_fim_infinite_10.pdf} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/predictions/single_pred_fim_finite_1.pdf} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/predictions/single_pred_fim_finite_5.pdf} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/predictions/single_pred_fim_finite_10.pdf} \end{subfigure} \caption{ Example of predictions for a varying number of context inputs $K$, after meta-training with \approach-\textsc{F}. Top: \approach-\textsc{F}, infinite task dataset. Bottom: \approach-\textsc{F}, finite task dataset. The standard deviation is from the posterior predictive distribution. Note how the uncertainty levels are coherent with the actual prediction error. Also, note how uncertainty decreases when there is more context data. Notice how \approach-\textsc{F}{} recovers the shape of the sine even with a low number of context inputs. Finally, note how \approach-\textsc{F}{} is able to reconstruct the sine even when trained on fewer tasks (bottom). More comprehensive plots are available in Figure~\ref{fig:single-predictions-full}.} \label{fig:single-predictions} \end{figure} In both OoD detection and quality of predictions, \approach-\textsc{R}{} and \approach-\textsc{F}{} perform better than \approach-\textsc{I}{} (Figure~\ref{fig:single-performance}), and this is reflected in the quality of the learned prior $\Tilde{p}_\xi(f)$ in each case (see Appendix~\ref{app:additional-single}). With respect to the trade-off mentioned in Section~\ref{sec:prior_covariance}, we find that for small networks, a rich prior over the weights matters more than the full expressiveness of the feature map, making both \approach-\textsc{R}{} and \approach-\textsc{F}{} appealing. However, further experiments on a deep-learning image-domain problem show that this conclusion does not hold for deep networks (see Appendix~\ref{app:deep}), where keeping an expressive feature map is important (\approach-\textsc{I}{} and \approach-\textsc{F}{} are appealing in that case). Thus, \approach-\textsc{F}{} is the variant that we retain, as it achieves similar or better performance than the other variants in all situations. Note how \approach-\textsc{F}{} outperforms MAML: it achieves much better generalization when decreasing the number of context samples $K$ (Figure~\ref{fig:single-performance}). Indeed, \approach-\textsc{F}{} trained with a finite task dataset performs better than MAML with an infinite task dataset: it is better able to capture the common features of the tasks, even with a smaller task dataset. \begin{figure}[t] \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/single_data.pdf} \caption{Examples of context data from in-dist.
and OoD tasks} \label{fig:single-data} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{single_auc.pdf} \caption{AUC for OoD detection} \label{fig:single-auc} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{single_mse.pdf} \caption{MSE on predictions} \label{fig:single-mse} \end{subfigure} \caption{Unimodal case: Performance of \textsc{UnLiMiTD}{} for OoD detection and inference, as a function of the number of context datapoints $K$. The training dataset consists of sinusoids, while OoD tasks are lines and quadratic tasks. We compare different variants (\approach-\textsc{I}, \approach-\textsc{R}{} and \approach-\textsc{F}), and against MAML for predictions. We also compare training with a finite and infinite task dataset. Note how \approach-\textsc{R}{} and \approach-\textsc{F}{} have efficient OoD detection and outperform MAML in predictions. Also, note how MAML trained with an infinite task dataset performs worse than \approach-\textsc{R}{} and \approach-\textsc{F}{} trained on a finite task dataset.} \label{fig:single-performance} \end{figure} \textbf{Multimodal meta-learning: Comparing the mixture model against a single GP.}~\, Next, we consider a multimodal task distribution with training data consisting of sinusoids as well as lines with varying slopes. Here, we compare the performance of the mixture structure against the single GP structure (see discussion in Section~\ref{sec:mixture}): in both cases, we use \approach-\textsc{F}. More training details can be found in Appendix~\ref{app:train-details-multi}. Both the OoD detection and the prediction performances are better with the mixture structure than with the single GP structure (Figure~\ref{fig:multi-performance}), indicating that the mixture model is a useful structure for $\Tilde{p}_\xi(f)$. This is reflected in the quality of the learned priors (see Appendix \ref{app:additional-multi} for qualitative results including samples from the learned priors). Note how the single GP structure still performs better than both MAML and MMAML for prediction, especially in the low-data regime. This demonstrates the strength of our probabilistic approach for multimodal meta-learning: even if the probabilistic assumptions are not optimal, the predictions are still accurate and can beat baselines. \begin{figure}[t] \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/multi_data.pdf} \caption{Examples of context data from in-dist. and OoD tasks} \label{fig:multi-data} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{multi_auc.pdf} \caption{AUC for OoD detection} \label{fig:multi-auc} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{multi_mse.pdf} \caption{MSE on predictions} \label{fig:multi-mse} \end{subfigure} \caption{Multimodal case: Performance of \textsc{UnLiMiTD}{} for OoD detection and inference, as a function of the number of context datapoints $K$. The training dataset includes both sines and lines, while OoD tasks are quadratic functions. We compare the different variants (\approach-\textsc{F}{} with a single GP or a mixture model), and against MAML/MMAML for predictions. Note how both versions of \approach-\textsc{F}{} yield better predictions than the baselines.
In particular, even with a single GP, \approach-\textsc{F}{} outperforms the baselines.} \label{fig:multi-performance} \end{figure} \section{Conclusion}\label{sec:conclusion} We propose \textsc{UnLiMiTD}{}, a novel meta-learning algorithm that models the underlying task distribution using a parametric and tuneable distribution, leveraging Bayesian inference with linearized neural networks. We compare three variants, and show that among these, the Fisher-based parameterization, \approach-\textsc{F}{}, effectively balances scalability and expressivity, even for deep learning applications. We have demonstrated that (1) our approach makes efficient probabilistic predictions on in-distribution tasks, which compare favorably to, and often outperform, baselines, (2) it allows for effective detection of context data from OoD tasks, and (3) both of these findings continue to hold in the multimodal task-distribution setting. There are several avenues for future work. One direction entails understanding how the performance of \approach-\textsc{F}{} is impacted if the FIM-based directions are computed too early in the training and the NTK changes significantly afterwards. One could also generalize our approach to non-Gaussian likelihoods, making \textsc{UnLiMiTD}{} effective for classification tasks. Finally, further research can push the limits of multimodal meta-learning, e.g., by implementing non-parametric Bayesian methods to automatically infer an optimal number of clusters, thereby eliminating a hyperparameter of the current approach. \subsubsection*{Acknowledgements} The authors acknowledge the MIT SuperCloud \citep{supercloud} and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper. The authors would like to thank MISTI MIT-France for supporting this research. C.A. further acknowledges support from Mines Paris Foundation. N.A. acknowledges support from the Edgerton Career Development Professorship.
\section{Introduction} When cosmic rays collide with air nuclei in the Earth's atmosphere, hadrons are produced in inelastic collisions. These hadrons interact, lose energy, and finally decay, leading to a cascade of particles in the atmosphere that eventually hits the ground. Semileptonic decays of hadrons in the atmosphere generate a flux of neutrinos known as the atmospheric neutrino flux. This is an irreducible background for neutrino observatories such as Super-Kamiokande, IceCube, Antares or the planned KM3NeT. The fluxes at lower energies are quite well understood, and come mainly from decays of long-lived charged pions and kaons, which are produced in essentially every inelastic collision. This ``conventional'' component of the flux falls steeply with increasing energy, both due to the spectral shape of the incoming cosmic ray flux, and due to the energy loss experienced by the mesons before they decay. Charged pions, for example, have a proper decay length of 8 meters, and with time dilation, their interaction lengths are much smaller than their decay lengths so that they have plenty of time to lose energy. The resulting neutrino energies are therefore downgraded compared to the incoming cosmic ray flux. Theoretical predictions by Honda and collaborators~\cite{honda,gaisserhonda} of the conventional flux agree very well with measurements up to energies of roughly $10^{5}$~GeV~\cite{Schukraft:2013ya}. At even higher energies, production of hadrons containing charm and bottom quarks leads to another component of the flux known as the ``prompt flux.'' These hadrons are produced much more rarely, but since they decay promptly without losing much energy, the resulting flux is harder and has more or less the same energy dependence as the incoming cosmic ray flux. At energies $\sim 10^{5}-10^{6}$~GeV, this flux is believed to start to dominate the conventional flux, as shown in Fig.~\ref{fig:fluxes}.\footnote{The importance of charm production in atmospheric cascades has been known for a long time. We have also recently included charm production in the calculation of neutrino fluxes from astrophysical sources \protect\cite{Enberg:2008jm}.} Coincidentally, this is roughly the same energy that is starting to be probed by the IceCube experiment. The prompt flux is much more poorly known than the conventional flux, due to the theoretical difficulties involved in calculating charm quark production at very high energies. The calculation is sensitive to very small Bjorken-$x$ and very forward rapidities, and this kinematic region is not probed by present collider or fixed target data. One of the most recent predictions of the prompt flux is the ``ERS'' flux of Ref.~\cite{ers}, which was computed before the start of the LHC. This prediction is used by the IceCube experiment as a benchmark for the prompt flux. The IceCube experiment has recently observed two very high energy neutrino events at about 1~PeV~\cite{Aartsen:2013bka}, and later 26 more events at slightly lower energies~\cite{Aartsen:2013jdh}, all at and above the energies where the prompt flux becomes important. If these events are to be interpreted as (partly) due to a flux of astrophysical neutrinos, it must be clear that they cannot be explained by the background coming from atmospheric neutrinos alone.
The theoretical uncertainty in the prompt flux would probably not allow assigning the IceCube observations to atmospheric neutrinos---one would need a prompt flux larger than the ERS flux by a factor 15 in order to get a 10\% probability to observe two atmospheric events at 1~PeV~\cite{Aartsen:2013bka}. However, the significance of their observation depends on the normalization of the atmospheric neutrino flux; the significance of the first two events was $2.8\sigma$, but with an increase in the prompt flux by a factor 3.8, the significance is reduced to $2.3\sigma$. Similarly, the significance for the 28 events not being completely of atmospheric origin is $4.1\sigma$, which is reduced to $3.6\sigma$ if the prompt flux background is increased by a factor 3.8. (The factor 3.8 is the level of the measured upper limit on the prompt flux.) Such an increase in the prediction, if the various theoretical uncertainties were better understood, is not completely unlikely. On the other hand, if the prediction would become smaller, the significance would increase. In any case, to understand these events it is necessary to understand the background. The LHC experiments have now measured the charm production cross section at several energies~\cite{LHCcharm}, providing some constraints on the input to the flux calculation. There have also been theoretical developments in QCD since Ref.~\cite{ers} that can be used to constrain the calculation~\cite{NLL,Albacete:2010sy,Kutak:2012qk}. It is therefore time to improve the calculation to get a better handle on the prediction. In this contribution I will discuss some of the ingredients in the calculation, and in particular I will discuss why there are large theoretical uncertainties involved. I will necessarily have to skip a lot of details of the calculation, but everything that is not discussed here can be found in our paper \cite{ers} or in the earlier calculations of the prompt flux \cite{Lipari:1993hd,Gondolo:1995fq,Bugaev:1998bi,Pasquali:1998ji,Martin:2003us}. See also the book by Gaisser~\cite{GaisserBook} for details on the description of the cascade in the atmosphere. \section{Calculation of the neutrino flux} To find the neutrino flux, we must solve a set of cascade equations that describe the energy loss and decay of nucleons, mesons and leptons in the atmosphere, where the cascade is triggered by an incoming cosmic ray proton. Here I will discuss some of the main points in the flux calculation; for details see Ref.~\cite{ers}. The flux equations are \begin{align} \D{\phi_N}{X} &= -\frac{\phi_N}{\lambda_N} + S( N A \to N Y ) \label{nucleonflux} \\ \D{\phi_M}{X} &= S( N A \to M Y ) -\frac{\phi_M}{\rho d_M(E)} -\frac{\phi_M}{\lambda_M} + S( M A \to M Y ) \label{mesonflux} \\ \D{\phi_\ell}{X} &= \sum_M S( M \to \ell Y ) \label{leptonflux} \end{align} where $\ell = \mu,\nu_\mu,\nu_e$ and the mesons include unstable baryons:\ for prompt fluxes from charm $M= D^\pm$, $D^0$, $\bar D^0$, $D_s^\pm$, $\Lambda_c^\pm$. $d_M=c\beta\gamma\tau$ is the decay length and $\lambda_{i}$ are the interaction lengths for hadronic energy loss. The variable $X$ is the slant depth, essentially the amount of atmosphere that a given particle has traversed. The initial conditions for the fluxes are zero for all but the nucleon flux, which is given by the incoming cosmic ray flux, i.e., we assume the cosmic ray flux to be composed of protons. We use a primary nucleon flux parametrization with a knee from Ref.~\cite{Gondolo:1995fq}.
The functions $S( k \to j )$ are the regeneration functions given by \begin{equation} S( k \to j ) = \int_E^\infty dE' \frac{\phi_k(E')}{\lambda_k(E')} \D{n(k \to j ;E',E)}{E}, \end{equation} where $E'$ and $E$ are the energies of the incoming and outgoing particle. This is where the production cross sections and decay matrix elements come in. To solve the flux equations, we use the semi-analytic method of $Z$-moments used e.g.\ in~\cite{GaisserBook,Lipari:1993hd,Gondolo:1995fq,Pasquali:1998ji}. This is known to be a good approximation (see e.g.\ \cite{Gondolo:1995fq}). The $Z$-moments are defined by \begin{equation} Z_{kh}=\int_E^\infty dE' \frac{\phi_k(E',X,\theta)}{\phi_k(E,X,\theta)}\frac{\lambda_k(E)}{\lambda_k(E')} \D{n(k A \to h Y;E',E)}{E}, \end{equation} which after assuming that the energy- and $(X,\theta)$-dependence of the flux is factorized as $\phi_k(E,X,\theta)=E^{-\gamma-1}\phi_k(X,\theta)$ takes the form \begin{equation} Z_{kh}=\int_E^\infty dE' \left(\frac{E'}{E}\right)^{-\gamma-1} \frac{\lambda_k(E)}{\lambda_k(E')} \D{n(k A \to h Y;E',E)}{E}, \label{Zkh} \end{equation} so that only the energy dependence of the flux enters the $Z$-moments. The $Z$-moments $Z_{NN}$ and $Z_{MM}$ as well as the interaction lengths $\lambda_{i}$ describe energy loss and scattering in the atmosphere and are computed using parametrized scattering cross sections (see \cite{ers} for details). The charm production process is described entirely by the $Z$-moment $Z_{NM}$, for all charmed mesons $M$. The decay $Z$-moments $Z_{M\ell}$, finally, are computed using two- or three-body phase space according to Refs.~\cite{Lipari:1993hd,Bugaev:1998bi}. Note that all $Z$-moments depend on the energy. The cascade equation for the mesons can then be rewritten in terms of the $Z$-moments as \begin{equation} \D{\phi_M}{X} = - \frac{\phi_M}{\rho d_M} - \frac{\phi_M}{\lambda_M} +Z_{MM} \frac{\phi_M}{\lambda_M} + Z_{NM} \frac{\phi_N}{\lambda_N} \label{phiM} \end{equation} with simpler equations for the nucleon and lepton fluxes. Eq.~(\ref{phiM}) is solved by obtaining separate solutions in the high- and low-energy limits where the interaction or decay terms dominate, respectively. The full solution is then obtained as an interpolation between the high- and low-energy solutions. The two solutions are separated by a critical energy $\epsilon_{M}$, which is different for different mesons, and which additionally depends on the zenith angle, since the amount of atmosphere the cascade traverses depends on this angle. The resulting solutions for the lepton fluxes are \begin{align} \phi^\text{low}_\ell &= Z_{M\ell,\gamma+1} \frac{Z_{NM}}{1-Z_{NN}}\phi_N(E) \\ \phi^\text{high}_\ell &= Z_{M\ell,\gamma+2} \frac{Z_{NM}}{1-Z_{NN}} \frac{\ln(\Lambda_M/\Lambda_N)}{1-\Lambda_N/\Lambda_M} \frac{\epsilon_M}{E} \phi_N(E), \end{align} where $(\gamma+1)$ is the spectral index of the incoming cosmic ray flux at high and low energy and $Z_{M\ell,\gamma+1}$ and $Z_{M\ell,\gamma+2}$ are calculated using these fluxes. The attenuation lengths $\Lambda_i$ are defined as $\Lambda_N(E)={\lambda_N(E)}/{(1-Z_{NN}(E))}$, etc. The lepton fluxes are thus proportional to the cosmic ray flux, but the energy dependence is modified by the energy dependence of the $Z$-moments and the attenuation lengths, and in addition the high-energy flux is suppressed by one power of energy compared to the cosmic ray flux.
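To make the $Z$-moment machinery concrete, the following sketch evaluates \eqref{Zkh} numerically for a toy scaling yield $dn/dE=F(x_E)/E'$ and an energy-independent interaction length; both are illustrative stand-ins for the parametrized cross sections used in the actual calculation.
\begin{verbatim}
import numpy as np

def z_moment(E, gamma, dn_dE, lam, E_max=1e9, n=2000):
    # Spectrum-weighted moment Z_{kh}(E), trapezoid rule on a log grid in E'.
    Ep = np.logspace(np.log10(E), np.log10(E_max), n)
    w = (Ep / E)**(-gamma - 1.0) * (lam(E) / lam(Ep)) * dn_dE(Ep, E)
    return np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(Ep))

# Toy model: scaling yield F(x) = (1 - x)^3, constant interaction length.
F = lambda x: (1.0 - x)**3
dn_dE = lambda Ep, E: F(E / Ep) / Ep
lam = lambda E: 86.0   # g/cm^2; any constant cancels in the ratio

print(z_moment(1e5, gamma=1.7, dn_dE=dn_dE, lam=lam))
\end{verbatim}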
The additional power of energy in the high-energy flux comes from the gamma factor in the decay length; the suppression only becomes effective when the charmed hadrons start losing energy before they decay, at very high energies. \section{Charm production in QCD} The $Z_{NM}$ functions can be rewritten in terms of the energy fraction $x_{E}=E/E'$ of the charmed particle. The essential ingredient is the differential cross section for production of a charm quark pair ${d\sigma(pp \to c\bar c)}/{dx_E}$. In the calculation of the neutrino flux we actually compute the cross section for production of charmed hadrons $M$ by convoluting the charm quark cross section with the relevant fragmentation functions, but let us first for simplicity consider the charm quark cross section by itself. Then $x_E=E_c/E_p$. At high energy, we have $x_E\simeq x_F$, where $x_{F}$ is the more convenient Feynman variable. In perturbative QCD, the dominant contribution to the cross section comes from the subprocess $gg\rightarrow c\bar{c}$, and the cross section is then given by \begin{equation} \frac{d\sigma}{dx_F}=\int \frac{ dM_{c\bar{c}}^2}{(x_1+x_2) s} \sigma_{gg\rightarrow c\bar{c}}(\hat{s}) G(x_1,\mu^2) G(x_2,\mu^2) \end{equation} where $x_{1,2}$ are the momentum fractions of the gluons, $x_F=x_1-x_2$ is the Feynman variable, and $G(x,\mu^2)$ is the gluon distribution of the proton. The center-of-mass energy of the partonic system is given by $\hat s=x_{1} x_{2} s$, and $\mu$ is the factorization scale. Given the charm--anticharm invariant mass $M_{c\bar{c}}$, the fractional momenta of the gluons, $x_{1,2}$, can be expressed in terms of the Feynman variable, $x_F$, \begin{equation} \label{eq:x12} x_{1,2} = \frac{1}{2}\left( \sqrt{x_F^2+\frac{4M_{c\bar c}^2}{s}} \pm x_F\right) \ . \end{equation} Typically the factorization scale is taken to be of the order of $2m_c$. The squared center-of-mass energy is $s=2E_{p}m_{p}$, so from Eq.\ (\ref{eq:x12}), it is clear that at high energy the dominant contribution is the highly asymmetric case where one gluon PDF is evaluated at $x_1 \sim x_F$ and the other at $x_2\ll 1$. To illustrate the $x$-values involved, for an incoming energy of $E_{p}=100$~TeV and assuming the charm quarks do not have appreciable relative $p_{T}$, we get for $x_{F}=0$ (central production) that $x_2 = 5\times 10^{-3}$, and for $x_F=1$ (forward limit) we get $x_2 = 3\times 10^{-5}$. For $E_{p}=1$~PeV, we instead get $x_2 = 2\times 10^{-3}$ and $x_2 = 3\times 10^{-6}$, respectively (these values are reproduced in the short numerical check below). These are extremely small values of $x$---for forward production they are far smaller than anything accessible at today's accelerators. The gluon distribution cannot be measured directly and has large uncertainties at small $x$, and especially so for the low factorization scales $\mu\sim 2 m_c$ that we are interested in. The evolution of the PDFs at small $x$ involves large logarithms $\alpha_{s}\log(1/x)$ that need to be resummed. This is done by solving the BFKL equation \cite{bfkl}, which predicts a rapid power-law growth of the PDF as $x\to 0$. However, there are no measurements at very small $x$ values, so the behavior of the PDFs is not well known. The commonly used PDF parametrizations have quite small minimum $x$ values, but the shape of the PDF there is an extrapolation from data at higher $x$. For example, the PDF fits from the HERA experiments have $x > 10^{-4}$. The LHC experiments will reach smaller $x$, but at much larger factorization scales. (The ideal machine to measure PDFs at very small $x$ would probably be the proposed LHeC electron--proton collider~\cite{AbelleiraFernandez:2012cc}.)
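The quoted $x$-values are straightforward to reproduce from Eq.~(\ref{eq:x12}); a short numerical check, assuming $m_c=1.3$~GeV (so $M_{c\bar c}\approx 2m_c$), negligible relative $p_T$, and $s=2E_pm_p$:
\begin{verbatim}
import numpy as np

def x2(E_p, x_F, m_c=1.3, m_p=0.938):
    # Smaller gluon momentum fraction, with M_ccbar = 2 m_c.
    s = 2.0 * E_p * m_p             # squared c.m. energy in GeV^2
    M2 = (2.0 * m_c)**2
    return 0.5 * (np.sqrt(x_F**2 + 4.0 * M2 / s) - x_F)

for E_p in (1e5, 1e6):              # 100 TeV and 1 PeV
    print(E_p, x2(E_p, 0.0), x2(E_p, 1.0))
# gives x_2 of about 6e-3 / 3.6e-5 at 100 TeV and 1.9e-3 / 3.6e-6 at 1 PeV,
# in line with the values quoted above (the exact numbers depend on m_c).
\end{verbatim}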
\begin{figure}[t] \includegraphics[width=0.48\columnwidth]{fig_flux_allflavors} \quad \includegraphics[width=0.48\columnwidth]{fig_flux_errorband_both_log} \caption{Left: Prompt and conventional fluxes of $\nu_\mu + \bar\nu_\mu$, $\nu_e + \bar\nu_e$, and $\mu^+ + \mu^-$ in the vertical direction. Conventional fluxes from Thunman, Ingelman and Gondolo (TIG), Ref.~\protect\cite{Gondolo:1995fq}. The three prompt fluxes are approximately equal, so only the $\nu_\mu + \bar\nu_\mu$ flux is shown. Right: Prompt and conventional $\nu_\mu + \bar\nu_\mu$ fluxes in the vertical direction. The shaded band is the theoretical uncertainty band for the prompt flux from \cite{ers}. Conventional fluxes from Gaisser and Honda (GH) \cite{gaisserhonda} and from TIG. Figures from Ref.\ \cite{ers}. \label{fig:fluxes}} \end{figure} The rapid growth of the gluon PDF at small $x$ can be interpreted as a growth in the number density of gluons. When the density becomes large enough, unitarity would be violated; taking into account that gluons may begin to recombine at large densities restores unitarity. This leads to a reduction in the growth at small $x$. This phenomenon is known as parton saturation, and would reduce the gluon density and thus the cross section. The full description of parton saturation is complicated~\cite{MV,JIMWLK,Balitsky,Gelis:2008rw,Kovchegov}, but there is a sort of ``mean-field approximation'' to the full description known as the Balitsky--Kovchegov (BK) equation~\cite{Balitsky,Kovchegov}, which is phenomenologically very useful. In \cite{ers} we used an approximate solution of the BK equation due to Iancu, Itakura and Munier~\cite{Iancu:2003ge}, which has a handful of free parameters that have been fitted to HERA data~\cite{Iancu:2003ge,Soyez:2007kg}. We performed the calculation in a framework known as the dipole picture of small-$x$ QCD. Due to space limitations I will not describe this theoretical framework here, but refer to \cite{ers} and references therein. The main point is that we are taking the effects of parton saturation into account, which leads to a reduction of the cross section compared to a fixed order QCD calculation. The calculation is done using a different way of factorizing the cross section, with the essential ingredient being the dipole cross section $\sigma_\text{dip}$ that describes the scattering of a quark--antiquark pair on the nucleon or nucleus. However, as experimental knowledge of the behavior of the PDFs at small $x$ is limited, it is not known how saturation works and manifests itself---or indeed if it occurs at all. There is thus a substantial theoretical uncertainty in the calculation of the charm cross section. The formalism we are using has been tested against DIS data from HERA at small $x$~\cite{Iancu:2003ge,Soyez:2007kg}. In Ref.~\cite{Goncalves:2006ch}, charm production in hadron--hadron collisions was calculated in the same framework we are using, and was tested against the limited amount of available data on this process. But we need much smaller $x$, so the agreement at larger $x$ does not necessarily mean that the extrapolation works well for smaller $x$. There are two ways of improving the result: there is now data on charm production from the LHC available~\cite{LHCcharm}, and there have been some theoretical developments in improving the predictions in small-$x$ QCD.
It is well-known that the BFKL equation that describes the small-$x$ evolution without saturation must be supplemented by next-to-leading logarithmic corrections to give a stable and phenomenologically sound result. This should also be incorporated in the BK equation, which is essentially the BFKL equation minus a non-linear term, but this must be done approximately, perhaps along the lines of \cite{NLL,Albacete:2010sy,Kutak:2012qk,Cazaroto:2011qq}. There are also inherent uncertainties in the dipole model saturation calculation related to parameter values, choices of parametrization of the gluon distribution, of the factorization scale, and of the treatment of quark fragmentation into hadrons. We have quantified these by varying them within reasonable limits. In particular we vary the factorization scale between $\mu_F=2 m_c$ and $\mu_F=m_c$, and the charm quark mass between $m_c=1.3$~GeV and $m_c=1.5$~GeV. We also choose a few different available gluon PDFs and two different quark fragmentation functions. This gives rise to the uncertainty band shown in the right plot in Fig.~\ref{fig:fluxes}. The shape of the neutrino flux does not depend strongly on these choices, but the overall normalization varies by up to a factor of two. \begin{figure}[t] \includegraphics[width=0.6\columnwidth]{fig_flux_compare_others_lin} \caption{Prompt muon neutrino fluxes. The shaded area is the theoretical uncertainty of the ERS result, as discussed in the text. The solid line in the band is our standard result. The dashed curve is the NLO QCD calculation of Ref.~\protect\cite{Pasquali:1998ji} (PRS), modified to include fragmentation functions. The dotted curve is the saturation model result of Ref.~\protect\cite{Martin:2003us} (MRS). The dash-dotted curve is the LO QCD calculation of Ref.\ \protect\cite{Gondolo:1995fq} (TIG). Figure from Ref.~\cite{ers}. \label{fig:compare_others}} \end{figure} In Ref.~\cite{ers}, we also compared our flux prediction to several earlier evaluations of the flux, see Fig.~\ref{fig:compare_others}. There is a range of predictions with a spread of about a factor of six. The NLO QCD calculation of Ref.~\cite{Pasquali:1998ji} is a regular QCD calculation that does not include saturation but uses a power-law extrapolation of the gluon PDF to small $x$. It is larger than our prediction by roughly a factor two. The toy model saturation prediction of Ref.~\cite{Martin:2003us} is smaller by a similar factor. It is clear that if saturation occurs, the cross section is smaller than it would be if saturation does not occur. The NLO QCD calculation can therefore be seen as an upper limit on the neutrino flux.\footnote{Note that if saturation does occur at the scales probed in existing experiments, then it is effectively included in the existing PDF fits, which makes it hard to discern from other effects.} If saturation does occur, as is expected on theoretical grounds, the ERS result is not likely to be much smaller than a new calculation, but it is not known what result an improved calculation might give. It is known, however, that when next-to-leading corrections to saturation are included, the growth of the cross section with energy is further suppressed~\cite{NLL}. To obtain some more information on these issues, we are planning to update the ERS prediction both with an improved saturation calculation that includes next-to-leading logarithmic corrections, and with an NLO QCD calculation with more modern PDF fits.
\begin{theacknowledgments} Everything described in this talk was done in collaboration with my friends Mary Hall Reno and Ina Sarcevic, and I thank them for a very fruitful collaboration. I also thank the organizers of the VLVnT 2013 workshop for the invitation to present this talk. \end{theacknowledgments}
\section{Introduction} Let $A\in\mathbb{C}^{m\times n}$ be a matrix with singular value decomposition given by $$A=U\Sigma V^*=\left[ \begin{array}{cc} U_{k} & U_{m-k} \\ \end{array} \right]\left[ \begin{array}{cc} \Sigma_k & 0 \\ 0 & \Sigma_{\min\{m,n\}-k} \\ \end{array} \right]\left[ \begin{array}{cc} V_{k} & V_{n-k} \\ \end{array} \right]^*. $$ Its best rank-$k$, $k<\min\{m,n\}$, approximation is obtained using the truncated SVD, \begin{equation}\label{svdapproximation} A_k=U_k\Sigma_k V_k^*. \end{equation} However, approximation~\eqref{svdapproximation} is often hard to interpret in applications, especially when we work with very big matrices. Besides, this approximation does not preserve useful matrix properties such as sparsity and non-negativity. Therefore, in recent years attention has been given to low-rank approximations obtained by interpolatory factorizations. These approximations are suboptimal, but preserve the properties mentioned above and are more suitable in applications where the columns and/or rows should keep their original meaning. The best known examples of the interpolatory factorizations are the CX and CUR factorizations. A matrix CX factorization of $A\in\C^{m\times n}$ is a decomposition of the form \begin{equation}\label{cx} A=CX, \end{equation} where $C\in\C^{m\times k}$ contains $k$ columns of $A$ and $X\in\C^{k\times n}$. A matrix CUR factorization of $A\in\C^{m\times n}$ is a decomposition of the form \begin{equation}\label{cur} A=CUR, \end{equation} where $C\in\C^{m\times k}$ contains $k$ columns of $A$, $R\in\C^{k\times n}$ contains $k$ rows of $A$ and $U\in\C^{k\times k}$. In these factorizations the columns of $C$ and the rows of $R$ keep their original interpretation from $A$. Usually, we are not looking for the exact decompositions~\eqref{cx} or~\eqref{cur}, but for the approximations $$A=CX+E \quad \text{or} \quad A=CUR+E, \quad \text{where } \|E\|\ll\|A\|.$$ Interpolatory decomposition of a tensor $\calX\in\C^{n_1\times n_2\times\cdots\times n_d}$ in the Tucker representation is the product of a core tensor $\calG\in\C^{r_1\times r_2\times\cdots\times r_d}$ and matrices $C_j\in\C^{n_j\times r_j}$, $1\leq j\leq d$. Over the last ten years different authors have studied generalizations of the CUR decomposition. In~\cite{MMD08} the authors study a CUR tensor decomposition where the fibers that define the decomposition are chosen randomly according to a specific probability distribution. In~\cite{CC10} an adaptive algorithm is developed that sequentially selects the fibers of a $3$rd order tensor $\calX$ that form the matrices $C_n$, $n=1,2,3$. Different choices for the matrices $C_n$ are studied in~\cite{Saibaba16}, where the author gives a detailed analysis of the computational costs and error bounds. \textbf{Motivation for the hybrid approach:} Generalizations of the CUR decomposition for tensors represented in the Tucker format give an approximation error that depends on the dimension $d$. This can be a problem for high-dimensional tensors. On the other hand, in applications it is not always important to keep the original entries in all modes. In the same way as the matrix $CX$ decomposition preserves only the original columns of a starting matrix, in the tensor case we can keep the original fibers in only one mode, or in more, but not all modes.
The idea of the hybrid algorithm is to write a tensor $\calX\in\C^{n_1\times n_2\times\cdots\times n_d}$ as a product of a core tensor $\calS\in\C^{r_1\times r_2\times\cdots\times r_d}$, a matrix $C\in\C^{n_k\times r_k}$ obtained by extracting mode-$k$ fibers of $\calX$, and matrices $U_j\in\C^{n_j\times r_j}$, $j=1,\ldots,k-1,k+1,\ldots,d$, chosen to minimize the approximation error. The difference between the error of the hybrid approach and the error of the tensor CUR method becomes more pronounced as the tensor dimension increases. We keep the Tucker representation because it is one of the most commonly used tensor formats. Also, it is suitable for function-related tensors. In Section~\ref{sec:preliminaries} we introduce the notation and give an overview of the concepts used in the paper. We introduce the hybrid method in Section~\ref{sec:hybrid}. Moreover, we give the error bound for the new method and compare it to the error resulting from the standard CUR approach. In Section~\ref{sec:numerical} we present the results of several numerical tests. We refer to the matrix case in Section~\ref{sec:matrix}. \section{Preliminaries and notation}\label{sec:preliminaries} Throughout the paper we use tensor notation from~\cite{KB09}. The \emph{order} of a tensor is the number of its dimensions. Tensors of order three or higher are denoted by calligraphic letters, e.g.\@ $\calX$, while matrices (order two tensors) are, as usually, denoted by capital letters, e.g.\@ $A$. Tensor analogues of rows and columns are called \emph{fibers} and they are extracted from a tensor by fixing all indices but one. For a third order tensor its fibers are columns, rows, and tubes, denoted $x_{:jk},x_{i:k},x_{ij:}$, respectively. If we fix all but two indices, we get tensor \emph{slices}. In the case of a third order tensor the slices are matrices $X_{i::},X_{:j:},X_{::k}$. The \emph{norm} of a tensor $\calX\in\C^{n_1\times n_2\times\cdots\times n_d}$ is a generalization of the matrix Frobenius norm and is given by the relation $$\|\calX\|_F=\sqrt{\sum_{i_1=1}^{n_1}\sum_{i_2=1}^{n_2}\cdots\sum_{i_d=1}^{n_d} |x_{i_1i_2\ldots i_d}|^2}.$$ Tensor \emph{unfolding} is a reordering of an order-$d$ tensor into a matrix. The mode-$m$ unfolding, $1\leq m\leq d$, of $\calX\in\C^{n_1\times n_2\times\cdots\times n_d}$ is an $n_m\times(n_1\cdots n_{m-1}n_{m+1}\cdots n_d)$ matrix $X_{(m)}$ obtained by arranging mode-$m$ fibers of $\calX$ into columns of $X_{(m)}$. The \emph{mode-$m$ product} of a tensor $\calX\in\C^{n_1\times n_2\times\cdots\times n_d}$ with a matrix $U\in\C^{p\times n_m}$ is an $n_1\times\cdots\times n_{m-1}\times p\times n_{m+1}\times\cdots\times n_d$ tensor \begin{equation}\label{eq:product} \calY=\calX\times_mU \quad \text{such that} \quad Y_{(m)}=UX_{(m)}. \end{equation} Elementwise, relation~\eqref{eq:product} can be written as $$\calY_{i_1\ldots i_{m-1}ji_{m+1}\ldots i_d}=\sum_{i_m=1}^{n_m} x_{i_1i_2\ldots i_d}u_{ji_m}.$$ We will use the associativity properties of the mode-$m$ product, \begin{equation}\label{eq:associativity} \begin{aligned} \calX\times_m M \times_m N & =\calX\times_m(NM) \quad \text{and} \\ \calX\times_m M\times_n N & =\calX\times_n N\times_m M, \quad \text{for} \ m\neq n.
\end{aligned} \end{equation} \emph{Tucker decomposition} is a decomposition of a tensor $\calX$ into a core tensor $\calS$ multiplied by a matrix in each mode, $$\calX=\calS\times_1U_1\times_2U_2\times_3\cdots\times_dU_d.$$ If $U_1,U_2,\ldots,U_d$ are unitary matrices, this decomposition is referred to as the higher order singular value decomposition (HOSVD), studied in~\cite{DeL-hosvd}. The matrices $U_j$, $1\leq j\leq d$, from the HOSVD are computed using the SVD of each unfolding of $\calX$. Note that if $$\calY=\calX\times_1M_1\times_2M_2\times_3\cdots\times_dM_d,$$ for some matrices $M_i$, $1\leq i\leq d$, of adequate size, then \begin{equation}\label{eq:productmatrix} Y_{(m)}=M_mX_{(m)}\big{(}M_d\otimes M_{d-1}\otimes\cdots\otimes M_{m+1}\otimes M_{m-1}\otimes\cdots\otimes M_1\big{)}^*, \quad \text{for } 1\leq m\leq d. \end{equation} The \emph{multilinear rank} of $\calX$ is a $d$-tuple $(r_1,r_2,\ldots,r_d)$, where $r_j=\text{rank}(X_{(j)})$, $1\leq j\leq d$. If $r_1=\cdots=r_d=k$ we say that $\calX$ is a rank-$k$ tensor. A simple way to obtain a rank-$(r_1,r_2,\ldots,r_d)$ approximation $\hat{\calX}$ of $\calX$ is using truncated HOSVD (T-HOSVD), which is a tensor analogue of~\eqref{svdapproximation}. It is described in Algorithm~\ref{alg:thosvd}. \begin{Algorithm}\label{alg:thosvd} \hrule\vspace{1ex} \emph{T-HOSVD} \vspace{0.5ex}\hrule\vspace{0.5ex} \begin{algorithmic} \For{$i=1,\ldots,d$} \State Compute matrix $U_i$ containing the leading $r_i$ left singular vectors of $X_{(i)}$. \EndFor \State $\calS=\calX \times_1 U_1^* \cdots\times_d U_d^*$ \State $\hat{\calX}=\calS\times_1U_1 \cdots \times_d U_d$ \end{algorithmic} \hrule \end{Algorithm} The core tensor from the HOSVD is, in general, not diagonal. Thus, the HOSVD does not lead to the best low multilinear rank tensor approximation, as is the case with the SVD. To improve the approximation obtained by T-HOSVD one can use an iterative algorithm with the initialization based on the result obtained by T-HOSVD. A popular iterative algorithm for low-rank tensor approximation is the higher-order orthogonal iteration (HOOI)~\cite{SaadHOOI2006}. If the starting tensor is symmetric or anti-symmetric, a good choice can be the structure-preserving Jacobi algorithm~\cite{Ishteva13,BegKre17}. \subsection{Tensor CUR decomposition} A CUR-type decomposition of a tensor $\calA\in\C^{n_1\times n_2\times\cdots\times n_d}$ is given by \begin{equation}\label{cur-tensor} \calA\approx\calS\times_1C_1\times_2C_2\times_3\cdots\times_dC_d, \end{equation} where $\calS\in\C^{r_1\times r_2\times\cdots\times r_d}$ is a core tensor and matrices $C_j\in\C^{n_j\times r_j}$, $1\leq j\leq d$, contain $r_j$ mode-$j$ fibers of $\calA$. The algorithm that we are going to use for the CUR-type tensor decomposition is the higher order interpolatory decomposition (HOID) from~\cite{Saibaba16}. It is derived for tensors in the Tucker format and it shows good numerical behavior. This decomposition is based on CX decompositions of the unfoldings $A_{(j)}$ of $\calA$. One way to compute the matrices $C_j$, $1\leq j\leq d$, from~\eqref{cur-tensor} is using the QR decomposition with column pivoting (PQR) of $A_{(j)}$, \begin{equation}\label{rel:PQR} A_{(j)}P=QR, \quad 1\leq j\leq d, \end{equation} where $P$ is a permutation matrix, $Q$ is unitary and $R$ is upper-triangular.
We can write relation~\eqref{rel:PQR} using block partitions as \begin{equation}\label{rel:QR} A_{(j)}\left[ \begin{array}{cc} P_1 & P_2 \\ \end{array} \right]=\left[ \begin{array}{cc} Q_1 & Q_2 \\ \end{array} \right]\left[ \begin{array}{cc} R_{11} & R_{12} \\ 0 & R_{22} \\ \end{array} \right], \quad 1\leq j\leq d, \end{equation} where $P_1$ and $Q_1$ have $r_j$ columns and $R_{11}$ is $r_j\times r_j$. Then we set \begin{equation}\label{rel:C} C_j=Q_1R_{11}, \end{equation} and \begin{equation}\label{rel:CX} A_{(j)}=C_jF+E, \end{equation} where \begin{equation}\label{rel:CXerror} F=\left[ \begin{array}{cc} I & R_{11}^{-1}R_{12} \\ \end{array} \right]P^T, \quad E=\left[ \begin{array}{cc} 0 & Q_2R_{22} \\ \end{array} \right]P^T. \end{equation} Knowing $C_j$, $1\leq j\leq d$, the core tensor $\calS$ from~\eqref{cur-tensor} is obtained as $$\calS=\calA \times_1 C_1^{+} \cdots\times_d C_d^{+}.$$ Here $C_j^{+}$ stands for the Moore-Penrose inverse of $C_j$, $1\leq j\leq d$. The HOID algorithm for a tensor $\calA\in\C^{n_1\times n_2\times\cdots\times n_d}$ is presented in Algorithm~\ref{alg:hoid}. \begin{Algorithm}\label{alg:hoid} \hrule\vspace{1ex} \emph{HOID} \vspace{0.5ex}\hrule\vspace{0.5ex} \begin{algorithmic} \For{$i=1,\ldots,d$} \State Compute $C_i$ using PQR of $A_{(i)}$. \EndFor \State $\calS=\calA \times_1 C_1^{+} \cdots\times_d C_d^{+}$ \State $\hat{\calA}=\calS\times_1C_1 \cdots \times_d C_d$ \end{algorithmic} \hrule \end{Algorithm} \section{Hybrid algorithm}\label{sec:hybrid} In this section we describe and analyze our hybrid approach to tensor CUR-type decomposition. Let $\calA\in\mathbb{C}^{n_1\times n_2\times\cdots\times n_d}$. We are looking for a low multilinear rank approximation $\hat{\calA}$ of $\calA$. Precisely, we are looking for a multilinear rank $(r_1,r_2,\ldots,r_d)$ tensor \begin{equation}\label{def:hybrid-approx} \hat{\calA}=\calS\times_1U_1\times_2\cdots\times_{m-1}U_{m-1}\times_mC_m\times_{m+1}U_{m+1}\cdots\times_dU_d, \end{equation} such that $$\calA=\hat{\calA}+\calE, \quad \|\calE\|\ll\|\calA\|.$$ In~\eqref{def:hybrid-approx}, the $n_m\times r_m$ matrix $C_m$ contains $r_m$ columns of $A_{(m)}$, that is mode-$m$ fibers of $\calA$, $\calS$ is an $r_1\times r_2\times\cdots\times r_d$ tensor, and $U_i$ are $n_i\times r_i$ matrices, $i=1,\ldots,m-1,m+1,\ldots,d$. Without loss of generality we can assume that $m=1$. Thus, relation~\eqref{def:hybrid-approx} reads \begin{equation}\label{def:hybrid-approx1} \hat{\calA}=\calS\times_1C\times_2U_2\times_3\cdots\times_dU_d. \end{equation} We determine the matrix $C$ using the PQR decomposition as in relation~\eqref{rel:C} and as in Algorithm~\ref{alg:hoid}. On the other hand, to find $U_i$ we use the SVD of $A_{(i)}$, $2\leq i\leq d$, as in Algorithm~\ref{alg:thosvd}. Then, the core tensor $\calS$ is obtained as \begin{equation}\label{def:hybrid-core} \calS=\calA\times_1C^{+}\times_2U_2^*\times_3\cdots\times_dU_d^*. \end{equation} Let us check that equation~\eqref{def:hybrid-core} gives an optimal $\calS$. We are looking for the core tensor $\calS$ from~\eqref{def:hybrid-approx1} such that \begin{equation}\label{eq:min} \|\calA-\calS\times_1C\times_2U_2\times_3\cdots\times_dU_d\|_F\rightarrow\min.
\end{equation} Using mode-$1$ matricizations and equation~\eqref{eq:productmatrix} we can write minimization problem~\eqref{eq:min} as $$\|A_{(1)}-CS_{(1)}\big{(}U_d\otimes\cdots\otimes U_2\big{)}^*\|_F\rightarrow\min.$$ With the assumption that $C$ has full column rank, it follows from~\cite{FT07} that the optimal $S_{(1)}$ is $$S_{(1)}=C^+A_{(1)}\Big{(}\big{(}U_d\otimes\cdots\otimes U_2\big{)}^*\Big{)}^{+}=C^+A_{(1)}\big{(}U_d\otimes\cdots\otimes U_2\big{)},$$ where the second equality holds because $U_d\otimes\cdots\otimes U_2$ has orthonormal columns. Now we use~\eqref{eq:productmatrix} to go back to the tensor format. Thus, we get $\calS$ as in relation~\eqref{def:hybrid-core}. The idea of the hybrid approach is summarized in Algorithm~\ref{alg:hybrid}. This algorithm corresponds to the problem given in~\eqref{def:hybrid-approx1}. It can easily be modified for $m=2,3,\ldots,d$. \begin{Algorithm}\label{alg:hybrid} \hrule\vspace{1ex} \emph{Hybrid algorithm} \vspace{0.5ex}\hrule\vspace{0.5ex} \begin{algorithmic} \For{$i=2,\ldots,d$} \State Compute matrix $U_i$ containing the leading $r_i$ left singular vectors of $A_{(i)}$. \EndFor \State Compute $C$ using PQR of $A_{(1)}$. \State $\calS=\calA\times_1C^{+}\times_2U_2^*\cdots\times_dU_d^*$ \State $\hat{\calA}=\calS\times_1C\times_2U_2\cdots\times_dU_d$ \end{algorithmic} \hrule \end{Algorithm} In Algorithm~\ref{alg:hybrid} we can choose to extract fibers from more than one mode of $\calA$. Assume that the approximation of $\calA$ requires keeping fibers from the first $t$ modes. In this case we apply the HOSVD to find $U_i$, $t+1\leq i\leq d$, and the PQR decomposition to find $C_j$, $1\leq j\leq t$. Then we set \begin{equation}\label{def:hybrid-more} \begin{aligned} \calS & =\calA\times_1C_1^{+}\cdots\times_tC_t^{+}\times_{t+1}U_{t+1}^*\cdots\times_dU_d^*, \\ \hat{\calA} & =\calS\times_1C_1\cdots\times_tC_t\times_{t+1}U_{t+1}\cdots\times_dU_d. \end{aligned} \end{equation} Note that instead of using PQR as in the HOID algorithm to compute the matrix $C$, one can choose a different method, such as volume maximization in tensor approximations~\cite{OST08}, the leverage scores method~\cite{DMM08,MMD08,MD09,BW17}, or the discrete empirical interpolation method (DEIM)~\cite{DG16,SE16}. The latter two are extended to the tensor case in~\cite{Saibaba16}. \subsection{Error analysis} In Theorem~\ref{tm:error} we give the error bound for the low multilinear rank approximation~\eqref{def:hybrid-approx1}. \begin{Theorem}\label{tm:error} Let $\calA\in\mathbb{C}^{n_1\times n_2\times\cdots\times n_d}$. Let $\hat{\calA}$ be an approximation of $\calA$ computed by Algorithm~\ref{alg:hybrid}. Then the approximation error $\calE$ satisfies the following inequality, \begin{equation}\label{rel:tmerror} \|\calE\|_F^2=\|\calA-\hat{\calA}\|_F^2 \leq p(r_1,n_1)(n_1-r_1)\sigma_{r_1+1}^2(A_{(1)}) + \sum_{j=2}^d (n_j-r_j)\sigma_{r_j+1}^2(A_{(j)}), \end{equation} where \begin{equation}\label{def:p} p(r,n):=\left(1+2r+\sum_{j=1}^{r-1}4^j(r-j)\right)(n-r), \end{equation} and $\sigma_i(X)$ stands for the $i$th singular value of $X$. \end{Theorem} \begin{proof} From~\eqref{def:hybrid-approx1} and~\eqref{def:hybrid-core}, using the properties of the mode-$m$ product given in~\eqref{eq:associativity}, we have \begin{align*} \|\calE\|_F^2 & =\|\calA-\calS\times_1C\times_2U_2\times_3\cdots\times_dU_d\|_F^2 \\ & = \|\calA-(\calA\times_1C^{+}\times_2U_2^*\times_3\cdots\times_dU_d^*)\times_1C\times_2U_2\times_3\cdots\times_dU_d\|_F^2 \\ & = \|\calA-\calA\times_1(CC^{+})\times_2(U_2U_2^*)\times_3\cdots\times_d(U_dU_d^*)\|_F^2. \end{align*} The matrices $CC^{+}$ and $U_jU_j^*$, $2\leq j\leq d$, are orthogonal projections.
Therefore, we can use the following result from~\cite[Lemma 2.1]{Saibaba16}: $$\|\calX-(\calX\times_1\Pi_1\times_2\Pi_2\times_3\cdots\times_d\Pi_d)\|_F^2\leq\sum_{j=1}^d\|\calX-\calX\times_j\Pi_j\|_F^2,$$ which holds for orthogonal projections $\Pi_1,\ldots,\Pi_d$. It follows that \begin{align} \|\calE\|_F^2 & \leq \|\calA-\calA\times_1(CC^{+})\|_F^2+\sum_{j=2}^d\|\calA-\calA\times_j(U_jU_j^*)\|_F^2 \nonumber \\ & = \|(I_{n_1}-CC^+)A_{(1)}\|_F^2+\sum_{j=2}^d\|(I_{n_j}-U_jU_j^*)A_{(j)}\|_F^2. \label{tm:error1} \end{align} For $r<n$, set $$\tilde{I}_{r,n}:= \left[ \begin{array}{cc} I_{r} & 0 \\ 0 & 0 \\ \end{array} \right] \begin{array}{l} \}r \\ \}n-r \\ \end{array} .$$ Using the full matrix SVD of $A_{(j)}$, $$A_{(j)}=U_{(j)}\Sigma_{(j)}V_{(j)}^*, \quad 2\leq j\leq d,$$ we have \begin{align*} (I_{n_j}-U_jU_j^*)A_{(j)} & = (U_{(j)}I_{n_j}U_{(j)}^*-U_{(j)}\tilde{I}_{r_j,n_j}U_{(j)}^*)U_{(j)}\Sigma_{(j)}V_{(j)}^* \\ & = U_{(j)}\left[ \begin{array}{cc} 0 & \\ & I_{n_j-r_j} \\ \end{array} \right]U_{(j)}^*U_{(j)}\Sigma_{(j)}V_{(j)}^* \\ & = U_{(j)} \left[ \begin{array}{cccccc} 0 & & & & & \\ & \ddots & & & & \\ & & 0 & & & \\ & & & \sigma_{r_j+1}(A_{(j)}) & & \\ & & & & \ddots & \\ & & & & & \sigma_{n_j}(A_{(j)}) \\ \end{array} \right] V_{(j)}^*. \end{align*} This implies that \begin{equation}\label{tm:errorHOSVD} \|(I_{n_j}-U_jU_j^*)A_{(j)}\|_F^2 = \sum_{i=r_j+1}^{n_j} \sigma_i^2(A_{(j)}) \leq (n_j-r_j)\sigma_{r_j+1}^2(A_{(j)}). \end{equation} Further on, using~\eqref{rel:CX} for $j=1$ and the property of the Moore-Penrose inverse, $$CC^+C=C,$$ we get \begin{align*} (I_{n_1}-CC^+)A_{(1)} & = (I_{n_1}-CC^+)(CF+E) \\ & = CF+E-CC^+CF-CC^+E \\ & = (I_{n_1}-CC^+)E. \end{align*} Since $I_{n_1}-CC^+$ is an orthogonal projection, we obtain \begin{equation}\label{tm:PQR1} \|(I_{n_1}-CC^+)A_{(1)}\|_F^2 \leq \|E\|_F^2=\|R_{22}\|_F^2, \end{equation} where $R_{22}$ is as in~\eqref{rel:QR} with $j=1$. Equality $\|E\|_F=\|R_{22}\|_F$ follows from~\eqref{rel:CXerror}. To get the upper bound on the norm of $R_{22}$, we use~\cite[Lemma 2.5]{ACGri20}, which gives \begin{equation}\label{rel:R22acg} \|R_{22}\|_2\leq\sqrt{1+2r_1+\sum_{j=1}^{r_1-1}4^j(r_1-j)}\sqrt{n_1-r_1}\sigma_{r_1+1}(A_{(1)})=\sqrt{p(r_1,n_1)}\sigma_{r_1+1}(A_{(1)}), \end{equation} where the function $p$ is defined in~\eqref{def:p}. Applying the norm equivalence $\|R_{22}\|_F\leq\sqrt{n_1-r_1}\,\|R_{22}\|_2$ to~\eqref{rel:R22acg}, it follows that $$\|R_{22}\|_F\leq \sqrt{p(r_1,n_1)}\sqrt{n_1-r_1}\sigma_{r_1+1}(A_{(1)}),$$ that is \begin{equation}\label{rel:R22} \|R_{22}\|_F^2\leq p(r_1,n_1)(n_1-r_1)\sigma_{r_1+1}^2(A_{(1)}). \end{equation} We now use~\eqref{tm:PQR1} and~\eqref{rel:R22} to get \begin{equation}\label{tm:errorPQR} \|(I_{n_1}-CC^+)A_{(1)}\|_F^2 \leq p(r_1,n_1)(n_1-r_1)\sigma_{r_1+1}^2(A_{(1)}). \end{equation} Finally, we insert relations~\eqref{tm:errorHOSVD} and~\eqref{tm:errorPQR} in~\eqref{tm:error1} to obtain the bound~\eqref{rel:tmerror}. \end{proof} An analogous bound to the one in Theorem~\ref{tm:error} can be obtained for the case~\eqref{def:hybrid-more} where we want to keep the original fibers in $t$ modes of a tensor. Then we get \begin{equation}\label{def:hybrid-moreerror} \|\calE\|_F^2\leq \sum_{i=1}^t p(r_i,n_i)(n_i-r_i)\sigma_{r_i+1}^2(A_{(i)}) + \sum_{j=t+1}^d (n_j-r_j)\sigma_{r_j+1}^2(A_{(j)}), \end{equation} with $p(r_i,n_i)$ defined in~\eqref{def:p}, $1\leq i\leq t$.
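Before comparing this bound with the corresponding bound for the plain tensor CUR approach, we give a compact sketch of Algorithm~\ref{alg:hybrid} in code. This is our own minimal NumPy illustration (the helpers \texttt{unfold}, \texttt{fold} and \texttt{mode\_mult} are ours, not from the cited references); it keeps the original fibers in the first mode only.
\begin{verbatim}
import numpy as np
from scipy.linalg import qr

def unfold(T, m):
    # Mode-m unfolding: mode-m fibers become the columns of the matrix.
    return np.moveaxis(T, m, 0).reshape(T.shape[m], -1)

def fold(M, m, shape):
    # Inverse of unfold for a tensor of the given shape.
    rest = [s for i, s in enumerate(shape) if i != m]
    return np.moveaxis(M.reshape([shape[m]] + rest), 0, m)

def mode_mult(T, U, m):
    # Mode-m product: Y_(m) = U @ T_(m).
    shape = list(T.shape)
    shape[m] = U.shape[0]
    return fold(U @ unfold(T, m), m, shape)

def hybrid(A, ranks):
    # Keep ranks[0] original mode-1 fibers (pivoted QR); use the leading
    # left singular vectors of the unfoldings in the remaining modes.
    d = A.ndim
    _, _, piv = qr(unfold(A, 0), mode='economic', pivoting=True)
    C = unfold(A, 0)[:, piv[:ranks[0]]]
    Us = [np.linalg.svd(unfold(A, j), full_matrices=False)[0][:, :ranks[j]]
          for j in range(1, d)]
    S = mode_mult(A, np.linalg.pinv(C), 0)   # core tensor
    for j, U in enumerate(Us, start=1):
        S = mode_mult(S, U.conj().T, j)
    Ahat = mode_mult(S, C, 0)                # reconstruction
    for j, U in enumerate(Us, start=1):
        Ahat = mode_mult(Ahat, U, j)
    return Ahat

# Example: the function-related tensor A(i1,i2,i3) = 1/(i1+i2+i3).
i = np.arange(1, 8, dtype=float)
A = 1.0 / (i[:, None, None] + i[None, :, None] + i[None, None, :])
Ahat = hybrid(A, (2, 2, 2))
print(np.linalg.norm(A - Ahat) / np.linalg.norm(A))  # relative error
\end{verbatim}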
In contrast to the hybrid approach, assume that the approximation of $\calA$ is given as $$\hat{\calA}_{\text{CUR}}=\calS\times_1C_1\times_2C_2\times_3\cdots\times_dC_d,$$ where the matrices $C_i$, $1\leq i\leq d$, are obtained by the PQR decomposition~\eqref{rel:C}. Using the same reasoning it is easy to see that the error of such an approximation satisfies \begin{equation}\label{rel:CURerror} \|\calA-\hat{\calA}_{\text{CUR}}\|_F^2 \leq \sum_{i=1}^d p(r_i,n_i)(n_i-r_i)\sigma_{r_i+1}^2(A_{(i)}), \end{equation} where $p(r_i,n_i)$ is as in~\eqref{def:p} for $1\leq i\leq d$. Since $p(r,n)>1$, the difference between the error bounds~\eqref{rel:tmerror} and~\eqref{rel:CURerror} is increasing as the tensor order $d$ is increasing. Also, the difference between~\eqref{def:hybrid-moreerror} and~\eqref{rel:CURerror} is increasing as the number of modes $t$ in which we preserve the original fibers is decreasing. \section{Numerical examples}\label{sec:numerical} To illustrate the advantages of the hybrid CUR approximation we present three numerical examples. The tests are performed using Matlab 2019b. In Figure~\ref{fig:error-d} we compare the relative error obtained by the HOID method from Algorithm~\ref{alg:hoid} and the hybrid method from Algorithm~\ref{alg:hybrid}. We show the results of the experiments performed on two function related tensors, $$\calA(i_1,\ldots,i_d)=\frac{1}{i_1+i_2+\cdots+i_d}, \qquad \calB(i_1,\ldots,i_d)=\frac{1}{i_1+2\cdot i_2+\cdots+d\cdot i_d},$$ approximated with rank-$1$ tensors. Here, tensor $\calA$ is symmetric, while $\calB$ is not. We set $n_1=n_2=\cdots=n_d=7$ and vary the tensor order, $d=3,4,5,6$. We observe that the relative error is significantly smaller when the hybrid method is used. The difference between the relative errors is increasing with the tensor order $d$. \begin{figure}[h] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{fig1a-71.eps} \caption{Tensor $\calA$} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{fig1b-71.eps} \caption{Tensor $\calB$} \end{subfigure} \caption{Relative approximation error when tensor order varies.}\label{fig:error-d} \end{figure} In Figure~\ref{fig:error-n} we also compare the relative error arising from Algorithm~\ref{alg:hoid} and Algorithm~\ref{alg:hybrid}. Here we vary the number of modes in which we preserve the original fibers when using the hybrid CUR. We perform the test on tensor $\calB$ for $d=3$ and $d=4$. We set $n_1=n_2=\cdots=n_d=30$ and compute a rank-$2$ approximation. As expected, the difference in the approximation error is larger when the number of modes in which we preserve the original fibers is smaller. If we want to preserve the original fibers in all modes, then the hybrid method boils down to the regular CUR method. \begin{figure}[h] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{fig2b3-302.eps} \caption{Tensor $\calB$ for $d=3$} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{fig2b4-302.eps} \caption{Tensor $\calB$ for $d=4$} \end{subfigure} \caption{Relative approximation error when the number of modes in which the fibers of the original tensors are preserved varies.}\label{fig:error-n} \end{figure} Moreover, in Figure~\ref{fig:error-k} we show the approximation error when the approximation rank $k$ varies. In the hybrid method we preserve the original fibers only in the first mode.
We do the test for $d=3$ on a random $100\times100\times100$ tensor, and for $d=6$ on a random $7\times7\times7\times7\times7\times7$ tensor. \begin{figure}[h] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{fig3rand3-100.eps} \caption{$d=3$, $n_1=n_2=n_3=100$} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{fig3rand6-7.eps} \caption{$d=6$, $n_1=\cdots=n_6=7$} \end{subfigure} \caption{Relative approximation error for random tensors when the approximation rank varies.}\label{fig:error-k} \end{figure} \section{Matrix case}\label{sec:matrix} The hybrid CUR-type rank-$k$ approximation of a matrix $A\in\mathbb{R}^{m\times n}$ is a special case of~\eqref{def:hybrid-approx}. Using tensor terminology, the columns of a matrix are its mode-$1$ fibers, while its rows are its mode-$2$ fibers. For the ``matricizations'' of $A$ we have $$A_{(1)}=A \quad \text{and} \quad A_{(2)}=A^T.$$ Assume that the approximation preserves the columns of $A$. Then \begin{equation}\label{def:hybrid-mcol} \hat{A}=S\times_1C\times_2V=(CS)\times_2V=\left(V(CS)^T\right)^T=CSV^T, \end{equation} where $C\in\mathbb{R}^{m\times k}$ is made of $k$ columns of $A$. The matrix $V\in\mathbb{R}^{n\times k}$ consists of the $k$ leading right singular vectors of $A$ obtained by the SVD, which are actually left singular vectors of $A_{(2)}$. The core ``tensor'' is the matrix $$S=A\times_1C^+\times_2V^T=C^+AV.$$ On the other hand, assume that the approximation preserves the rows of $A$. The rows of $A=A_{(1)}$ can also be considered as columns of $A_{(2)}$. Here we have $$\hat{A}=S\times_1U\times_2C=(US)\times_2C=\left(C(US)^T\right)^T=USC^T=USR,$$ where $R=C^T$ contains $k$ rows of $A$, $U$ contains $k$ leading left singular vectors of $A$, and $$S=A\times_1U^T\times_2C^+=A\times_1U^T\times_2(R^T)^+=U^TAR^+.$$ The error bound obtained this way is smaller than the corresponding bound for the CUR decomposition. This difference is quantified in Corollary~\ref{tm:error-matrix}. Its proof follows as a special case of Theorem~\ref{tm:error}. \begin{Corollary}\label{tm:error-matrix} Let $A\in\mathbb{C}^{m\times n}$. Let $\hat{A}$ be a rank-$k$ approximation of $A$ as in relation~\eqref{def:hybrid-mcol}. Assume that the matrix $C$ is obtained by QR with column pivoting. Then the approximation error $E$ satisfies the following inequality, $$\|E\|_F^2=\|A-\hat{A}\|_F^2 \leq p(k,m)(m-k)\sigma_{k+1}^2(A) + (n-k)\sigma_{k+1}^2(A),$$ where $p(k,m)$ is defined by relation~\eqref{def:p}. \end{Corollary} The truncated SVD~\eqref{svdapproximation} attains the smallest rank-$k$ approximation error, so for any rank-$k$ approximation $$\|E\|_2\geq\|A-A_k\|_2=\sigma_{k+1} \quad \text{and} \quad \|E\|_F\geq\|A-A_k\|_F=\sqrt{\sigma_{k+1}^2+\cdots+\sigma_n^2},$$ where $\sigma_i$ denotes the $i$-th singular value of $A$. In Figure~\ref{fig:error-matrixcase} we illustrate the claim of Corollary~\ref{tm:error-matrix} by comparing the relative approximation error in the Frobenius norm obtained by rank-$k$ approximation of a matrix using four matrix decompositions: SVD (for the reference case), CX (where the matrix $X$ is obtained as $X=C^+A$), hybrid CUR as in~\eqref{def:hybrid-mcol}, and matrix CUR. The test is done on a random $2000\times2000$ matrix. The approximation rank $1\leq k\leq20$ is given on the horizontal axis. In the matrix case, our hybrid CUR approach is equivalent to CX matrix decomposition.
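The comparison shown in Figure~\ref{fig:error-matrixcase} is easy to reproduce. The sketch below is our own illustration (with a smaller random matrix than the $2000\times2000$ one used for the figure); it only indicates how the four rank-$k$ approximations are formed.
\begin{verbatim}
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
k = 10

U, s, Vt = np.linalg.svd(A)
svd_err = np.linalg.norm(A - U[:, :k] * s[:k] @ Vt[:k])

_, _, pc = qr(A, mode='economic', pivoting=True)    # column selection
C = A[:, pc[:k]]
_, _, pr = qr(A.T, mode='economic', pivoting=True)  # row selection
R = A[pr[:k], :]

cx_err = np.linalg.norm(A - C @ np.linalg.pinv(C) @ A)
# Hybrid CUR: A ~ C S V^T with S = C^+ A V, V the leading right
# singular vectors of A.
hyb_err = np.linalg.norm(A - C @ (np.linalg.pinv(C) @ A @ Vt[:k].T) @ Vt[:k])
# Matrix CUR with U = C^+ A R^+.
cur_err = np.linalg.norm(A - C @ np.linalg.pinv(C) @ A @ np.linalg.pinv(R) @ R)
print(svd_err, cx_err, hyb_err, cur_err)
\end{verbatim}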
\begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{matrixcase-rand2000.eps} \caption{Relative approximation error for a random matrix when the approximation rank varies.}\label{fig:error-matrixcase} \end{figure} \section*{Acknowledgements} This work has been supported in part by the Croatian Science Foundation under the project UIP-2019-04-5200. The author would like to thank Georgia Tech for the kind hospitality during the process of writing this paper.
\section{Introduction} For a Banach space $X$, the continuous pairing between $X^*$, the topological dual of $X$, and $X$ is denoted by $\langle , \rangle$. A map $A : X \rightarrow X^{\ast}$ is called $(S)_+$ if for every sequence $(u_n) \subset X$, $u_n \rightharpoonup u$, the inequality \begin{equation} \limsup_{n\to\infty} \langle A [u_n], u_n - u \rangle \leq 0, \end{equation} implies $u_n \to u$; see e.g. \cite{Browder82,Berkovits86}. It is well known (see for example \cite{Skrypnik1994}) that the map \begin{equation} A[u]:=\sum_{|\alpha|\leq m} (-1)^{|\alpha|} D^\alpha f_\alpha(x,u,\dots,D^m u), \end{equation} from $X= W_0^{m, p} (\Omega)$ to $X^*= W^{- m, q}(\Omega)$ is $(S)_+$, where the pairing $\langle A[u],\phi \rangle$ is defined by the relation \begin{equation} \langle A [u], \phi \rangle = \sum_{| \alpha | \leq m} \int_{\Omega} f_{\alpha} (x, u,\dots,D^m u) D^{\alpha} \phi. \end{equation} The existence and multiplicity problems of quasi-linear elliptic equations in divergence form can be studied via the degree of the operator $A$ at zero. A degree theory, keeping all classical properties of a topological degree, for bounded demi-continuous $(S)_+$ maps in Hilbert spaces was developed by I. Skrypnik \cite{Skrypnik73}. F. Browder generalized the degree to mappings in uniformly convex reflexive Banach spaces \cite{Browder82,Browder1976}. Browder's construction is based on the direct generalization of the classical Brouwer degree through the Galerkin type approximation of $(S)_+$ mappings. An alternative construction through generalizing the Leray-Schauder degree has been carried out by J. Berkovits \cite{Berkovits1997,Berkovits86}. A new generalization of Browder's degree theory, based on the Nagumo degree, is reported by A. Kartsatos and D. Kerr \cite{Karts11}. A turning point in the application of degree theory was made by Y.Y. Li \cite{Li1989}, where the author uses Fitzpatrick's degree of quasi-linear Fredholm maps to define a degree for fully nonlinear second order elliptic equations. Remarkable progress was reported by I. Skrypnik \cite{Skrypnik1994}, who showed that every fully nonlinear uniformly elliptic equation (satisfying some growth rate conditions) is represented by an operator equation involving an $(S)_+$ mapping. This formulation opens up a way to study the well-posedness problem of such equations by a topological degree argument. Nevertheless, in all constructions of degree, some type of continuity (and in the weakest case, the demi-continuity) is required and cannot be relaxed. A map $A:X\to X^*$ is called demi-continuous if the strong convergence $x_n\stackrel{X}\to x$ implies the weak convergence $A x_n \stackrel{X^*}\rightharpoonup A x$. However, this assumption fails in some real applications. Here we give an example from the phase transition in liquid crystals.
It is shown (see \cite{Niksirat14a,Niksirat14}) that the stationary solution of the Doi-Onsager equation \begin{equation} \label{dyn} \frac{\partial f}{\partial t} = \Delta_r f + {\rm div} (f \nabla_r U (f)), \end{equation} for the interaction potential function $U(r)$ \begin{equation} \label{U2} U (f) (r) = \lambda \int_{S^2} |r\times r'| f (r') d \sigma (r'), \end{equation} and the probability density function $f(r)$ of the directions of the rod-like molecules \begin{equation} \label{f2} f (r) = \left( \int_{S^2} e^{- U (f)} d \sigma \right)^{- 1} e^{- U (f) (r)} , \end{equation} reduces to the fixed point problem $u-\lambda \Gamma[u]=0$, where \begin{equation} \label{Onsag} \Gamma [u] (r) := \left( \int_{S^2} e^{- u (r)} \right)^{- 1} \int_{S^2} \left (|r\times r'|-\frac{\pi}{4}\right) e^{- u (r')} d \sigma (r'). \end{equation} The natural function space for the problem is \begin{equation} H_0(S^2)=\{u\in L^2(S^2),u(-r)=u(r),\int_{S^2}{u(r) d\sigma(r)}=0 \}. \end{equation} It is simply seen that $\Gamma$ fails to be demi-continuous in any open neighbourhood of $0 \in H_0(S^2)$. Fix $\bar{r} \in S^2$, and let $u_n$ be the following sequence \begin{equation} u_n (r) = \left\{ \begin{array}{ll} \log (2 \pi (1 - \cos (1 / n))) & \cos^{- 1} (r\cdot \bar{r}) \in \left( 0, \frac{1}{n} \right)\\ 0 & {\rm otherwise} \end{array} \right. . \end{equation} Obviously, $u_n \xrightarrow{H_0 (S^2)} 0$, while \[ \lim_n \Gamma [u_n] = \lim_n \frac{1}{2 \pi (1 - \cos (1 / n))} \int_0^{1 / n} \int_0^{2 \pi} \hat{K} (\gamma) d \sigma \neq \Gamma [0] = 0, \] where $\gamma$ is the angle between $r$ and $r'$ and $\hat{K}(\gamma):=\sin\gamma-\frac{\pi}{4}$. We have the following theorem. \begin{thrm}[Niksirat \cite{Niksirat14a}] \label{Hw} Fix $\lambda$ and let $\Omega_\lambda$ be the following set \[ \Omega_\lambda= \{u \in H_0 (S^2), |u (r) | \leq \lambda \} . \] The map $\Gamma : \Omega_\lambda \subset H_0(S^2) \to H_0(S^2)$ is continuous and compact. \end{thrm} By the above theorem, the map $\Gamma:W^{k,2}(S^2)\cap H_0(S^2)\to H_0(S^2)$ for $k>3$ is continuous due to the compact embedding of $W^{k,2}$ into the space of continuous functions. For the problem defined on the unit circle, the author \cite{Niksirat14} used the classical degree theory of the map \[ \mathbb{I}-\lambda \Gamma:W^{k,2}(S^1)\cap H_0(S^1)\to \left( W^{k,2}(S^1)\cap H_0(S^1)\right)^*. \] For higher dimensional problems, the calculations are intricate and lengthy. In this paper, we construct a new degree theory for the mapping $A : Y\subset X\to X^*$ where $X$ is a Banach space, $Y$ is a separable reflexive Banach space continuously embedded in $X$, and $A$ is bounded, demi-continuous and of class $(S)_+$. The constructed degree enjoys all properties of a classical topological degree. The class of homotopies for which the suggested degree remains constant is narrower than the one usually used for $(S)_+$ mappings. \section{Definition of the degree} Let $A: Y \to X^*$ be a map where $X$ is a Banach space and $Y$ is a separable reflexive Banach space continuously embedded in $X$. Without loss of generality, we can assume that $Y$ is a reflexive separable locally uniformly convex space due to the Kadec-Klee theorem (see \cite{Fabian11} Theorem 8.1). On the other hand, if $Y$ is a separable locally uniformly convex Banach space, the Browder-Ton embedding theorem, see \cite{Browder1968a,Berkovits2003}, implies the existence of a Hilbert space $H$ such that the embedding $j:H\hookrightarrow Y$ is dense and compact.
Choosing a basis $\mathcal{H}=\{h_1,h_2,\dots\}$ for $H$, we can define the set $\mathcal{Y}=\{y_1,y_2,\dots\}$ where $y_k=j(h_k)$, and accordingly, the filtration $\{Y_1,Y_2,\dots\}$ for $Y$ where $Y_n={\rm span}\{y_1,\dots,y_n\}$. \begin{definition} For $A:Y\to X^*$, the finite rank approximation $A_n: Y \rightarrow Y_n$ is defined by the relation \begin{equation}\label{FRA} A_n (u) = \sum_{k = 1}^n \langle A [u], i (y_k) \rangle y_k, \end{equation} where $\langle, \rangle$ denotes the continuous pairing between $X^{\ast}$ and $X$ and $i : Y \to X$ denotes the continuous embedding of $Y$ into $X$. We should notice that the above approximation is completely different from the finite rank approximation of the map $i^* A:Y \to Y^*$. \end{definition} The inner product $(,)$ in $Y_n$ is uniquely defined by the relation $(y_i,y_j)=\delta_{i j}$. It is simply verified that $A_n$ and $A$ coincide on $Y_n$ in the following sense \begin{eqnarray} \langle A [u], i(v) \rangle = (A_n (u), v),\hspace{0.4cm} \forall v\in Y_n. \end{eqnarray} Let $W$ be a subspace of $Y$. We write $A [u]\stackrel{W} =0$ if $\langle A [u], i(v) \rangle= 0$ for all $v \in W$. In the sequel, we assume that $\Omega\subset Y$ is an open and bounded set, and $A : \Omega \to X^*$ is bounded, demi-continuous and $(S)_+$ in the following sense: \begin{itemize} \item $A$ is bounded if $A [\Omega]$ is bounded in $X^{\ast}$. \item $A$ is demi-continuous if $u_n \stackrel{Y}\to u$ implies $A [u_n] \stackrel{X^*}\rightharpoonup A [u]$. In our setting, the latter weak convergence reads $\langle A [u_n], i (y) \rangle \rightarrow \langle A [u], i (y) \rangle$ for all $y \in Y$. Note that this notion is weaker than the demi-continuity of a map from $X$ to $X^*$. \item $A$ is $(S)_+$ if for every sequence $(u_n) \subset Y$, $u_n \rightharpoonup u $, the inequality \begin{equation} \limsup_{n \rightarrow \infty} \langle A [u_n], i(u_n - u) \rangle \leq 0, \end{equation} implies $u_n \rightarrow u$. \end{itemize} We further assume that $A$ satisfies the following condition $(H)$: \begin{eqnarray} (H) : \{ u \in Y ; A[u]\stackrel{Y} = 0 \} = \{ u \in Y ; A [u] \stackrel{X}= 0 \} . \end{eqnarray} \begin{lem} \label{0-lem}Let $D \subset \bar{\Omega}$ be a closed set. If there exists a sequence $(u_n)$, $u_n \in D\cap Y_n$, such that $A[u_n]\stackrel{Y_n} = 0$, then the equation $A [u] = 0 \in X^*$ is solvable in $D$. In particular, if $u_n \in \partial \Omega$ and $A [u_n] \stackrel{Y_n}= 0$ then there is $u \in \partial \Omega$ such that $A [u] \stackrel{X}= 0$. \end{lem} \begin{proof} Let $(u_n)\subset D$ be such a sequence, with $A[u_n]\stackrel{Y_n} = 0$. Since $D$ is bounded, there exists $u \in Y$ and a subsequence of $(u_n)$ (which we still denote by $u_n$) such that $u_n \rightharpoonup u$. Take an arbitrary sequence $(\zeta_n)$, $\zeta_n \in Y_n$ such that $\zeta_n \stackrel{Y}\longrightarrow u$. By the relation $A[u_n]\stackrel{Y_n}=0$ and the boundedness property of $A$, we have \begin{eqnarray} \limsup_{n \to \infty} \langle A [u_n], i (u_n - u) \rangle & = & \limsup_{n \to \infty} \langle A [u_n], i (\zeta_n - u) \rangle \nonumber\\ & \leq & \limsup_{n \to \infty} \| A [u_n] \|_{X^*} \| \zeta_n - u \|_Y = 0. \label{eq:201507021058} \end{eqnarray} Since $A$ is of class $(S)_+$ on $\Omega$, the inequality (\ref{eq:201507021058}) and the closedness of $D$ imply $u_n \to u\in D$. By the demi-continuity of $A$ on $Y$ we conclude $A [u_n] \rightharpoonup A [u]$. Now let $y \in Y$ be arbitrary.
Take a sequence $(\xi_n)$, $\xi_n \in Y_n$ such that $\xi_n \xrightarrow{Y} y$. We have \begin{eqnarray} \langle A [u], i (y) \rangle & = & \lim_{n \to \infty} \langle A [u_n], i (y) \rangle \\ & = & \lim_{n \to \infty} \langle A [u_n], i (y - \xi_n) \rangle \nonumber\\ & \leq & \lim_{n \to \infty} \| A [u_n] \|_{X^*} \| y - \xi_n \|_{Y} = 0. \end{eqnarray} Replacing $y$ by $- y$ implies $\langle A[u],i(y) \rangle \geq 0$ and thus $A [u]\stackrel{Y}= 0$. Finally, the condition $(H)$ implies $A [u] \stackrel{X}= 0$. \end{proof} \begin{definition} \label{deg-def} Assume that $0 \not\in A [\partial\Omega]$. The index of $A$ in $\Omega$ at $0 \in X^{\ast}$ is defined by the relation \begin{eqnarray} \label{deg-A-An} {\rm ind} (A, \Omega) = \lim_{n \to \infty} \deg_B (A_n, \Omega_n, 0), \end{eqnarray} where $\Omega_n = \Omega \cap Y_n$ and $\deg_B$ denotes the Brouwer degree. \end{definition} \remark{ The index of $A$ is different from the Browder degree of the map $A : Y \rightarrow Y^*$ even if $Y$ is a subspace of $X$. For example, let $X$ be a uniformly convex Banach space $X=Y\oplus {\rm span}\{u\}$ and $J:X \to X^*$ the duality map. Consider the map $A:Y\to X^*$, $\langle A[x],y+t u\rangle=\langle J(x),y\rangle+t $. It is simply seen that the Browder degree of the map $A:Y\to Y^*$ equals $1$ but the index of the map $A:Y\to X^*$ is $0$ by the definition (\ref{deg-def}). We first justify the definition (\ref{deg-def}).} \begin{prop} Assume that $0 \not\in A [\partial\Omega]$. The Brouwer degree of $A_n$ in $\Omega_n$ is stable, that is, there exists $N_0 > 0$ such that for $n \geq N_0$ we have \begin{equation} \label{deg-stbl} \deg_B (A_{n - 1}, \Omega_{n - 1}, 0) = \deg_B (A_n, \Omega_n, 0) . \end{equation} \end{prop} \begin{proof} We first show that there is $N_0 > 0$ such that $0 \notin A_n (\partial \Omega_n)$ for all $n \geq N_0$. Assuming the contrary, let there exist a sequence $u_n \in \partial \Omega_n$ such that $A [u_n] \stackrel{Y_n}= 0$. As $\Omega$ is open in $Y$, we have $u_n \in \partial \Omega$. An argument similar to the one employed in the proof of Lemma (\ref{0-lem}) implies the existence of $u \in \partial \Omega$ such that $A [u] = 0$, a contradiction! Hence, there is $N_0 > 0$ such that the Brouwer degree of $A_n$ in $\Omega_n$ is well defined at $0$ for $n \geq N_0$. Consider the map $B_n : \Omega_n \rightarrow Y_n$ defined by \begin{eqnarray} B_n (u) := (A_{n - 1} (u), Pr_n (u) ), \end{eqnarray} where $Pr_n (u)$ denotes the component of $u$ along $y_n$. Clearly, we have \begin{eqnarray} \label{An-Bn} \deg_B (A_{n - 1}, \Omega_{n - 1}, 0) = \deg_B (B_n, \Omega_n, 0) . \end{eqnarray} Note that the proof ends once we show that for $n$ large enough there holds \begin{eqnarray*} \deg_B (A_n, \Omega_n, 0) = \deg_B (B_n, \Omega_n, 0) . \end{eqnarray*} Consider the following convex homotopy $h_n (t)$: \begin{eqnarray} \label{h-hom} h_n (t) = (1 - t) A_n + t B_n . \end{eqnarray} It suffices to show that $h_n (t) (u) \neq 0$ for all $u \in \partial \Omega_n$ and for all $t \in [0, 1]$ when $n$ is sufficiently large. Assume the contrary. Then there exist $z_n \in \partial \Omega_n$ and $t_n \in [0, 1]$ such that $h_n (t_n) (z_n) \stackrel{Y_n}= 0$. By the definition of finite rank approximation (\ref{FRA}), we can write $A_n(z_n)$ as \[ A_n(z_n)=\left(A_{n-1}(z_n),\langle A[z_n],i(y_n) \rangle y_n \right), \] where $\{y_1,y_2,\dots\}$ is the sequence constructed above.
Now, the relation $h_n(t_n)(z_n)\stackrel{Y_n}=0$ implies that $A[z_n]\stackrel{Y_{n-1}}=0$ and \begin{equation} (1 - t_n) \langle A [z_n], i (y_n) \rangle y_n + t_n Pr_n (z_n)\stackrel{Y_n} = 0. \label{eq:201507021403} \end{equation} But $A[z_n]\stackrel{Y_{n-1}}=0$ implies $A_n (z_n) = r_n y_n$ for some $r_n \in \mathbb{R}$. This together with (\ref{eq:201507021403}) gives $r_n = \frac{- t_n}{1 - t_n} z_{n,n},$ where $z_{n,n}$ is the component of $z_n$ along $y_n$, that is, $Pr_n (z_n) = z_{n,n} y_n$. Therefore, we have \begin{equation} \langle A [z_n], i (z_n) \rangle = (A_n (z_n), z_n) = - \frac{t_n}{1 - t_n} z_{n,n}^2 \leq 0.\label{znn} \end{equation} Since $\partial \Omega$ is bounded and $(z_n)\subset \partial\Omega$, there is a subsequence $(z_{n_k})$ such that $z_{n_k} \rightharpoonup z \in Y$. To keep the notation simple, we denote this subsequence by $(z_n)$. Let $\zeta_n \in Y_{n - 1}$ be a sequence such that $\zeta_n \stackrel{Y}\to z$. We have \begin{eqnarray} \langle A [z_n], i (z_n - z) \rangle & = & \langle A [z_n], i (z_n - \zeta_n) \rangle + \langle A [z_n], i (\zeta_n - z) \rangle \nonumber\\ & \leq & \langle A [z_n], i (z_n) \rangle + \| A [z_n] \| \| \zeta_n - z \| \nonumber\\ & \leq & \| A [z_n] \| \| \zeta_n - z \| . \end{eqnarray} The last inequality follows from (\ref{znn}). Therefore \begin{eqnarray} \limsup_n \langle A [z_n], i (z_n - z) \rangle \leq 0, \end{eqnarray} and since $A$ is $(S)_+$, we conclude $z_n \to z \in \partial \Omega$. Finally, arguing as in the proof of Lemma (\ref{0-lem}), we conclude $A [z] = 0$, a contradiction! \end{proof} \section{Properties of the invariant} We show that the topological invariant defined in the definition (\ref{deg-def}) satisfies the classical properties of a topological degree. First we identify the admissible class of homotopies. \begin{definition}[Admissible homotopy] \label{hom-def}Let $\Omega$ be an open bounded subset of $Y$. A one parameter family of maps $h : [0, 1] \times \Omega \to X^*$ is called an admissible homotopy if it satisfies the following conditions: \begin{enumerate} \item $h : [0, 1] \times \Omega \rightarrow X^{\ast}$ is bounded and demi-continuous, \item $0 \not\in h ([0, 1] \times \partial \Omega)$, \item for every sequence $t_n \rightarrow t$, $t_n \in [0, 1]$ and $u_n \rightharpoonup u$ for $u_n \in \bar{\Omega}$, the inequality \[ \limsup_{n \to \infty} \langle h (t_n) (u_n), i (u_n - u) \rangle \leq 0, \] implies $u_n \to u$, \item for all $t \in [0, 1]$ we have \[ H (t) : \{ z \in Y ; h (t) (z) \stackrel{Y}= 0 \} = \{ z \in Y ; h (t) (z) \stackrel{X}= 0 \} . \] \end{enumerate} \end{definition} \begin{thrm} \label{deg-pro} Let $A : \Omega \rightarrow X^{\ast}$ be a bounded, demi-continuous, $(S)_+$ mapping that satisfies the condition $(H)$ and furthermore, $0 \not\in A [\partial \Omega]$. The index of $A : \Omega \subset Y \to X^*$ given in Definition (\ref{deg-def}) has the following properties: \begin{enumerate} \item the equation $A [u] = 0$ is solvable in $\Omega$ if ${\rm ind} (A, \Omega) \neq 0$, \item if $\Omega = \Omega_1 \cup \Omega_2$ where $\Omega_1$ and $\Omega_2$ are disjoint open sets then \[ \label{D-D} {\rm ind} (A, \Omega) = {\rm ind} (A, \Omega_1) + {\rm ind} (A, \Omega_2), \] \item If $h : [0, 1] \times \Omega \rightarrow X^{\ast}$ is an admissible homotopy then ${\rm ind} (h (t), \Omega)$ is independent of $t \in [0, 1]$. \end{enumerate} \end{thrm} \begin{proof} To prove (1), assume ${\rm ind} (A, \Omega) \neq 0$.
Therefore, there is $N$ such that ${\rm deg}_B(A_n,\Omega_n,0)\neq 0$ for all $n\geq N$. By the properties of the Brouwer degree, the equation $A_n (u) = 0\in Y_n$ is solvable in $\Omega_n$. Let $(u_n)$ be a sequence with $u_n\in \Omega_n$ and $A_n (u_n) = 0$. Now Lemma (\ref{0-lem}) implies the existence of some $u \in \overline{\Omega}$ such that $A [u] \stackrel{X}= 0$. Since $0 \not\in A [\partial \Omega]$, we conclude $u \in \Omega$. The second property also follows directly from the domain decomposition of the Brouwer degree. In fact, if $\Omega=\Omega_1 \cup \Omega_2$ then for sufficiently large $n$ we have \[ {\rm ind}(A,\Omega)={\rm deg}_B(A_n,\Omega_{1,n}\cup \Omega_{2,n},0), \] where $\Omega_{1,n}=\Omega_1\cap Y_n$ and $\Omega_{2,n}=\Omega_2\cap Y_n$. By the domain decomposition of the Brouwer degree, we have \[ {\rm deg}_B(A_n,\Omega_{1,n}\cup \Omega_{2,n},0)={\rm deg}_B(A_n,\Omega_{1,n},0)+{\rm deg}_B(A_n, \Omega_{2,n},0). \] Now the claim is proved due to the relation \[ {\rm deg}_B(A_n,\Omega_{k,n},0)={\rm ind}(A,\Omega_k), k=1,2. \] To prove the homotopy invariance property of the defined index, assume \begin{equation} {\rm ind} (h (t),\Omega) \neq {\rm ind} (h (s),\Omega), \end{equation} for some $s,t\in[0,1]$. Let $n$ be so large that the following relations hold \begin{eqnarray} \label{hom-1} {\rm ind} (h (t),\Omega) = \deg_B (h_n (t), \Omega_n, 0), \end{eqnarray} \begin{eqnarray} \label{hom-2} {\rm ind} (h (s),\Omega) = \deg_B (h_n(s), \Omega_n, 0). \end{eqnarray} Since the Brouwer degrees in (\ref{hom-1}) and (\ref{hom-2}) differ, the homotopy invariance of the Brouwer degree yields $\tau_n \in [s, t]$ and $\zeta_n \in \partial \Omega_n\subset\partial \Omega$ such that $h (\tau_n) (\zeta_n) \stackrel{Y_n}= 0$. Since $(\zeta_n)$ is bounded, by an argument similar to the proof of Lemma (\ref{0-lem}), we derive the inequality \begin{eqnarray} \limsup_{n \rightarrow \infty} \langle h (\tau_n) (\zeta_n), i (\zeta_n - \zeta) \rangle \leq 0, \end{eqnarray} for some $\zeta \in Y$. The condition $(3)$ in Definition (\ref{hom-def}) implies $\zeta_n \to \zeta \in \partial \Omega$. Therefore, $h (\tau) (\zeta) \stackrel{Y}= 0$ for some $\tau $, a limit point of the sequence $(\tau_n)\subset [s, t]$. The condition $H(\tau)$ implies $h (\tau) (\zeta) \stackrel{X}= 0$, which contradicts condition $(2)$ of Definition (\ref{hom-def}). Therefore \begin{eqnarray} {\rm ind} (h (t), \Omega) = {\rm ind} (h (s), \Omega), \quad \forall s,t\in[0,1], \end{eqnarray} and thus the index is constant along the admissible homotopies of Definition (\ref{hom-def}). \end{proof} \section{The stationary solutions of Doi-Onsager equation in $\mathbb{R}^2$} Here we study the one-dimensional Doi-Onsager equation and employ the degree suggested in this article. As we have shown earlier in \cite{Niksirat14}, this problem reduces to the equation \begin{equation} A[u]:=u-\lambda \Gamma[u]=0, \end{equation} where \begin{equation} \Gamma[u]=\left(\int_{0}^{2\pi}e^{-u(\theta)}d\theta\right)^{-1}\int_0^{2\pi}\hat{K}(\theta-\theta') e^{-u(\theta')} d\theta'. \end{equation} The kernel $\hat{K}$ is assumed to be in $W^{1,\infty}([0,2\pi])$, having the following expansion \begin{equation} \hat{K}(\theta)=\sum_{n=1}^\infty{k_n \cos(2 n \theta)}, \end{equation} where $k_n<0$ for all $n$ and $k_1<k_2<k_3<\cdots$. Onsager's kernel $\hat{K}(\theta)=|\sin(\theta)|-\frac{2}{\pi}$ clearly satisfies the above assumptions.
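As a quick numerical sanity check (our own sketch, not part of the argument), the coefficients $k_n$ of Onsager's kernel can be approximated by quadrature; analytically $k_n=-\frac{4}{\pi(4n^2-1)}$, so the assumptions above hold, and the values $\lambda_n=-2/k_n$ appearing in Theorem~\ref{M-Thm} below are easily evaluated.
\begin{verbatim}
import numpy as np

# Riemann-sum approximation of the Fourier coefficients k_n of
# K(theta) = |sin(theta)| - 2/pi, and of lambda_n = -2/k_n.
m = 20000
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
dtheta = 2.0 * np.pi / m
K = np.abs(np.sin(theta)) - 2.0 / np.pi

for n in range(1, 5):
    k_n = (K * np.cos(2 * n * theta)).sum() * dtheta / np.pi
    print(n, k_n, -2.0 / k_n)   # lambda_1 = 3*pi/2 ~ 4.7124
\end{verbatim}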
Let $Y$ be the space \[ Y=\left\{u\in W^{1,2}([0,2\pi]),u(\theta)=u(\pi+\theta) a.e.,u(\theta)=u(2\pi-\theta) a.e.,\int_0^{2\pi}u(\theta)d\theta=0\right\}. \] It is seen that $A:Y\to Y$ is a continuous map in the class of $(S)_+$ mappings. In order to simplify calculations for the degree argument, we employ the degree constructed above and study the map $A:Y\to X$ where $X$ is the Hilbert space \[ X=\left\{u\in L^2([0,2\pi]),u(\theta)=u(\pi+\theta) a.e.,u(\theta)=u(2\pi-\theta) a.e.,\int_0^{2\pi}u(\theta)d\theta=0\right\}. \] \begin{thrm}[Niksirat \cite{Niksirat14}] Assume that $\hat{K}\in W^{1,\infty}([0,2\pi])$. Then $\Gamma:Y\to Y$ is continuous and compact. \end{thrm} The following proposition guarantees that all the solutions of $A[u]=0$ are in the space $Y$. \begin{prop} Assume $u\in X$ and $A[u]=0$, then $u\in Y$. \end{prop} \begin{proof} We have \begin{equation} u(\theta)=\lambda \left(\int_{0}^{2\pi}e^{-u(\theta)}d\theta\right)^{-1} \int_0^{2\pi} \hat{K}(\theta-\theta') e^{-u(\theta')} d\theta'. \end{equation} Differentiating with respect to $\theta$ gives \begin{equation} ||\frac{d}{d\theta} u||^2=\lambda^2 \left(\int_{0}^{2\pi}e^{-u(\theta)}d\theta\right)^{-2} \int_0^{2\pi}\left[\int_0^{2\pi}\frac{d}{d\theta} \hat{K}(\theta-\theta') e^{-u(\theta')} d\theta'\right]^2 d\theta. \end{equation} Since $\hat{K}\in W^{1,\infty}([0,2\pi])$, we obtain \begin{equation} ||\frac{d}{d\theta} u|| \leq 2\pi \lambda ||\frac{d}{d\theta} \hat{K}||_\infty< \infty, \end{equation} and hence $u\in Y$. \end{proof} \begin{lem} For the map $A:Y\to Y^*$, let the finite rank approximation $\tilde{A}_n:Y\to Y_n$ be defined by the relation \begin{equation} \tilde{A}_n(u)=\sum_{k=1}^n{ \langle A[u],y_k \rangle_Y y_k}, \end{equation} and let $\Omega\subset Y$ be an open bounded subset. If $0\not\in A(\partial\Omega)$, then we have the following relation \begin{equation} \lim_{n\to\infty} \deg(\tilde{A}_n, \Omega_n,0)=\lim_{n\to\infty} \deg(A_n,\Omega_n,0), \end{equation} where $A_n$ is the finite rank approximation of $A:Y\to X$ defined in (\ref{FRA}), and $\Omega_n:=\Omega\cap Y_n$. \end{lem} \begin{proof} Assume the contrary. Then, by the homotopy invariance of the Brouwer degree, there are sequences $z_n\in \partial\Omega_n$ and $t_n\in (0,1)$ such that \[ t_n \tilde{A}_n(z_n)+(1-t_n)A_n(z_n)=0. \] Since $\Omega$ is bounded, there is a subsequence (still denoted by $z_n$) such that $z_n \rightharpoonup z$. For any $y \in Y_n$ we have \[ \langle A [ z_n], y \rangle_Y = - \frac{1 - t_n}{t_n} \langle A [ z_n], y \rangle_X . \] Since the embedding $Y \hookrightarrow C ( [ 0, 2 \pi])$ is compact, there is a subsequence (still denoted by $z_n$) such that $z_n \xrightarrow{X} z$, and thus \begin{eqnarray*} \limsup_{n \rightarrow \infty} \langle A [ z_n], y \rangle_Y = - \frac{1 - \tau}{\tau} \langle A [ z], y \rangle_X, \end{eqnarray*} for some $\tau \in [ 0, 1]$. Let $\zeta_n \in Y_n$ be such that $\zeta_n \xrightarrow{Y} z$; then \begin{eqnarray*} \limsup_{n \rightarrow \infty} \langle A [ z_n], z_n - z \rangle_Y & = & \limsup_{n \rightarrow \infty} \langle A [ z_n], z_n - \zeta_n \rangle_Y \\ & = & - \frac{1 - \tau}{\tau} \limsup_{n \rightarrow \infty} \langle A [ z_n], z_n - z \rangle_X \leq 0. \end{eqnarray*} On the other hand, $\Gamma$ is a compact mapping and therefore \[ \limsup_{n \rightarrow \infty} \langle A [ z_n], z_n - z \rangle_Y = \limsup_{n \rightarrow \infty} \langle z_n, z_n - z \rangle_Y \leq 0, \] which implies $z_n \xrightarrow{Y} z \in \partial \Omega$.
But for $y_n= \cos(2 n \theta)$ we have \[ \langle A[z],y_n\rangle_Y=(1+4n^2) \langle A[z],y_n\rangle_X, \] which is compatible with the previous relation only if $\langle A[z],y_n\rangle_X=0$ for all $n$, and thus $A[z]=0$. Since $z\in \partial\Omega$, this is a contradiction. \end{proof} Now we state the main result on the solutions of the equation $A[u]=0$. \begin{thrm}\label{M-Thm} The equation $u-\lambda \Gamma[u]=0$ has the unique trivial solution $u\equiv 0$ for \[ 0<\lambda<\lambda_0:=||\hat{K}||_\infty^{-1}, \] and two solutions bifurcate from the trivial solution at $\lambda_n:=-\frac{2}{k_n}$ for $n \geq 1$. The trivial solution is stable for $\lambda< \lambda_1$, and is unstable for $\lambda>\lambda_1$. \end{thrm} For the proof of the theorem we need the following Gr\"{u}ss-type inequality; see \cite{Dragomir} for a proof. \begin{lem} \label{G-lem}Assume that $d \mu$ is a probability measure on $\Omega$ and $f, g \in L_{\infty} ( \Omega)$. Then \[ \left| \int_{\Omega} f d \mu \int g d \mu - \int f g d \mu \right| \leq \| f \|_{\infty} \| g \|_{\infty} . \] \end{lem} \begin{lem} \label{Isol}If $\lambda < \| \hat{K} \|_{\infty}^{- 1}$, then all solutions to the equation $A [ u] = 0$ are isolated. \end{lem} \begin{proof} According to the Fredholm alternative theorem it is enough to show that $\ker ( Id - \lambda D \Gamma [ u]) = \{ 0 \}$. We calculate the G{\^a}teaux derivative $D \Gamma$. A straightforward calculation gives \begin{equation} \label{DG} D \Gamma [ u] ( v) = \int_0^{2 \pi} v ( \theta) d \mu_u \int_0^{2 \pi} \hat{K} ( \theta - \theta') d \mu_u - \int_0^{2 \pi} \hat{K} ( \theta - \theta') v ( \theta') d \mu_u, \end{equation} where $d \mu_u$ denotes the probability measure \[ d \mu_u ( \theta) = \left( \int_0^{2 \pi} e^{- u ( \theta)} d \theta \right)^{- 1} e^{- u ( \theta)} . \] Assume that $v = \lambda D \Gamma [ u] ( v)$. By Lemma (\ref{G-lem}), we obtain, for every $\theta$, \[ | v ( \theta) | = \lambda \left| \int_0^{2 \pi} v ( \theta') d \mu_u \int_0^{2 \pi} \hat{K} ( \theta - \theta') d \mu_u - \int_0^{2 \pi} \hat{K} ( \theta - \theta') v ( \theta') d \mu_u \right| \leq \lambda \| \hat{K} \|_{\infty} \| v \|_{\infty}, \] so that $\| v \|_{\infty} \leq \lambda \| \hat{K} \|_{\infty} \| v \|_{\infty}$, and thus $v = 0$ since $\lambda \| \hat{K} \|_{\infty} < 1$. \end{proof} \begin{proof}(of theorem \ref{M-Thm}) Any solution of $A [ u] = 0$ satisfies the a priori estimate \begin{equation} \| u \|_{\infty} \leq \lambda \| \hat{K} \|_{\infty}, \end{equation} and thus, for each fixed $\lambda$, there is a bounded open set $\Omega \subset Y$ such that there is no solution to $A [ u] = 0$ on $\Omega^c$. Therefore, by the homotopy invariance property of degree for the convex homotopy $h_t = Id - t \lambda \Gamma$, we have \[ \deg ( {Id} - \lambda \Gamma, \Omega, 0) = \deg ( {Id}, \Omega, 0) = 1. \] Now, we calculate the matrix of the derivative $D \Gamma [ u]$ in the basis $\left\{ \phi_n := \frac{1}{\sqrt{\pi}} \cos ( 2 n \theta) \right\}$, that is, $a_{n m} ( u) := \langle D \Gamma [ u] ( \phi_n), \phi_m \rangle$. According to (\ref{DG}), we derive \[ a_{n m} = \int_0^{2 \pi} \phi_n ( \theta) d \mu \int_0^{2 \pi} \int_0^{2 \pi} \hat{K} ( \theta - \theta') \phi_m ( \theta) d \mu d \theta - \int_0^{2 \pi} \int_0^{2 \pi} \hat{K} ( \theta - \theta') \phi_n ( \theta') \phi_m ( \theta) d \mu d \theta, \] which gives \[ a_{n m} = k_m \left\{ \int_0^{2 \pi} \cos ( 2 n \theta) d \mu \int_0^{2 \pi} \cos ( 2 m \theta) d \mu - \int_0^{2 \pi} \cos ( 2 n \theta) \cos ( 2 m \theta) d \mu \right\}, \] and thus by Lemma (\ref{G-lem}) we obtain $| a_{n m} | \leq | k_m |$.
On the other hand, since the solutions for $\lambda < \| \hat{K} \|_{\infty}^{- 1}$ are isolated due to Lemma (\ref{Isol}), the index of any solution is \[ {\rm ind} ( u, 0) = {\rm sign} \det ( {Id} - \lambda ( a_{n m})) = 1, \] due to the estimate $| a_{n m} | \leq | k_m |$ and the assumption $\lambda < \| \hat{K} \|_{\infty}^{- 1}$. Since the trivial solution $u \equiv 0$ has index $1$ and the degree of $A$ is $1$, we conclude that $u \equiv 0$ is the unique solution to the equation $A [ u] = 0$ if $\lambda < \| \hat{K} \|_{\infty}^{- 1}$. Now we show that two solutions bifurcate at $\lambda_n = - 2 k_n^{- 1}$ from the trivial solution $u \equiv 0$ for each $n \geq 1$. Note that $a_{n m} ( 0) = - \frac{k_n}{2} \delta_{n m}$, and thus \[ A_0 : = ( a_{n m} ( 0)) = {\rm diag} \left( - \frac{k_n}{2} \right), \] a diagonal matrix. Therefore $\det ( {Id} - \lambda A_0)$ changes its sign at $\lambda_n = - 2 k_n^{- 1}$. It is seen that the dimensions of the spaces $\ker ( {Id} - \lambda_n D \Gamma [ 0])$ and $( {\rm Ran} ( {Id} - \lambda_n D \Gamma [ 0]))^{\bot}$ are equal to $1$ and thus ${Id} - \lambda_n D \Gamma [ 0]$ is a Fredholm map of index $0$. The equation $A [ u] = 0$ can be rewritten as \[ F ( \lambda, u) : = u - \lambda D \Gamma [ 0] ( u) + G ( \lambda, u) = 0, \] where $G ( \lambda, u) = -\lambda\left(\Gamma [ u] - D \Gamma [ 0] ( u)\right)$. It is simply seen that $G ( \lambda, u) = o ( \| u \|)$ uniformly in $\lambda$ on bounded intervals. Now, by Theorem (28.3) in {\cite{Deimling}}, we conclude that $\lambda_n$ is a bifurcation point for the equation $A[u]=0$ with two bifurcating solutions of the form \[ F^{- 1} ( 0) = \{ ( \lambda_n + \mu ( t), t v + t z ( t)), - \delta < t < \delta \}, \] for some $\delta > 0$ where $v \in {\rm Ker} ( {Id} -\lambda_n D \Gamma [ 0])$ and $z ( t)$ lies in a complement of ${\rm span} \{ v \}$. For the stability, we notice that the linearization $Id-\lambda D\Gamma[0]$ of $A$ at the trivial solution $u\equiv 0$ has the eigenvalues $1+\lambda k_n/2$, $n\geq1$; these are all positive for $0<\lambda<\lambda_1$ and the first one is negative for $\lambda>\lambda_1$, so the trivial solution is stable for $\lambda<\lambda_1$ and unstable for $\lambda>\lambda_1$. \end{proof}
\section{The problem} Consider a `triangle' of squares in a grid whose sides are $n$ squares long, as illustrated by the diagram in Figure \ref{7.fig}, for which $n=7$. \begin{figure} \caption{A triangle of size 7} \label{7.fig} \begin{equation*} \setlength{\unitlength}{.5mm} \begin{picture}(100,90) \put(10,10){\line(1,0){70}} \put(10,20){\line(1,0){70}} \put(20,30){\line(1,0){60}} \put(30,40){\line(1,0){50}} \put(40,50){\line(1,0){40}} \put(50,60){\line(1,0){30}} \put(60,70){\line(1,0){20}} \put(70,80){\line(1,0){10}} \put(10,10){\line(0,1){10}} \put(20,10){\line(0,1){20}} \put(30,10){\line(0,1){30}} \put(40,10){\line(0,1){40}} \put(50,10){\line(0,1){50}} \put(60,10){\line(0,1){60}} \put(70,10){\line(0,1){70}} \put(80,10){\line(0,1){70}} \put(40,2){\makebox(10,5){$\longleftarrow$ $n$ $\longrightarrow$ }} \end{picture} \end{equation*} \end{figure} We call a southwest-to-northeast diagonal a {\it standard diagonal}. Note that our triangle consists of all the cells in an $n \times n$ square that lie on or below the longest standard diagonal. We denote by $N(n)$ the maximum number of dots that can be placed into the cells of the triangle such that each row, each column, and each standard diagonal contains at most one dot. Determining $N(n)$ is equivalent to solving the following problem: Suppose we have a chessboard made up of hexagonal cells arranged in the shape of an equilateral triangle of side $n$. Then $N(n)$ is the maximum number of non-attacking queens that can be placed on such a board, where a queen can move in any one of the three directions allowed on a hexagonal grid. The following theorem was proven by Vaderlind, Guy and Larson \cite[Problem 252]{VGL} and independently by Nivasch and Lev \cite{NL}: \begin{theorem} \label{main.thm} $N(n) = \NF(n)$, where \begin{eqnarray*} \NF(3t) &=& 2t \\ \NF(3t+1) &=& 2t+1 \\ \NF(3t+2) &=& 2t+1. \end{eqnarray*} \end{theorem} Note that the value of $\NF(n)$ can be stated more succinctly as follows: \[ \NF(n) = \left\lfloor \frac{2n+1}{3} \right\rfloor .\] In order to prove Theorem \ref{main.thm}, we require a construction to establish the lower bound $N(n) \geq \NF(n)$ as well as a proof of the upper bound $N(n) \leq \NF(n)$. In \cite{NL,VGL}, the upper bound was proven by elementary combinatorial arguments. In this note, we give a new proof of the upper bound using linear programming techniques. In the end, our proof is also combinatorial; the main contribution we make is to demonstrate the use of linear programming techniques in deriving the proof. \section{The lower bound} Before proving the upper bound, we give a construction to show that $N(n) \geq \NF(n)$. This construction is essentially the same as the ones in \cite{NL,VGL}. \begin{theorem}\label{thm:construction} $N(n) \geq \NF(n)$. \end{theorem} \begin{proof} First, we show that {$N(3t+1) \geq 2t+1$}: \begin{enumerate} \item Place a dot in the {leftmost cell} of the {$(2t+1)$st row} (where we number rows from top to bottom). \item Place $t$ more dots, each two squares to the right and one square up from the previous dot. \item Place a dot in the {$(t+2)$nd cell from the left} in the {bottom row}. \item Place $t-1$ more dots, each two squares to the right and one square up from the previous dot. \end{enumerate} It is easily verified that at most one dot is contained in each row, column, or standard diagonal. Next, {$N(3t+2) \geq N(3t+1) \geq 2t+1$} (it suffices to add a row of empty cells). 
Finally, {$N(3t) \geq N(3t+1) - 1 \geq 2t$} (delete the bottom row of cells from a triangle of side $3t+1$, noting that any row contains at most one dot). \end{proof} \begin{example}{\rm We show in Figure \ref{fig:n=7} that $N(7) \geq 5$ by applying the construction given in Theorem \ref{thm:construction}.} \end{example} \section{A new proof of the upper bound} \begin{figure}[tb] \caption{$N(7) \geq 5$} \label{fig:n=7} \begin{equation*} \setlength{\unitlength}{.5mm} \begin{picture}(90,90) \put(10,10){\line(1,0){70}} \put(10,20){\line(1,0){70}} \put(20,30){\line(1,0){60}} \put(30,40){\line(1,0){50}} \put(40,50){\line(1,0){40}} \put(50,60){\line(1,0){30}} \put(60,70){\line(1,0){20}} \put(70,80){\line(1,0){10}} \put(10,10){\line(0,1){10}} \put(20,10){\line(0,1){20}} \put(30,10){\line(0,1){30}} \put(40,10){\line(0,1){40}} \put(50,10){\line(0,1){50}} \put(60,10){\line(0,1){60}} \put(70,10){\line(0,1){70}} \put(80,10){\line(0,1){70}} \put(35,35){\circle*{3}} \put(55,45){\circle*{3}} \put(75,55){\circle*{3}} \put(45,15){\circle*{3}} \put(65,25){\circle*{3}} \end{picture} \end{equation*} \end{figure} The computation of $N(n)$ can be formulated as an {integer linear program}. Suppose we number the cells as indicated in Figure \ref{6.fig} (where $n=6$): \begin{figure} \caption{Labelling the cells in a triangle of size $6$} \label{6.fig} \begin{equation*} \setlength{\unitlength}{.7mm} \begin{picture}(75,70) \put(10,10){\line(1,0){60}} \put(10,20){\line(1,0){60}} \put(20,30){\line(1,0){50}} \put(30,40){\line(1,0){40}} \put(40,50){\line(1,0){30}} \put(50,60){\line(1,0){20}} \put(60,70){\line(1,0){10}} \put(10,10){\line(0,1){10}} \put(20,10){\line(0,1){20}} \put(30,10){\line(0,1){30}} \put(40,10){\line(0,1){40}} \put(50,10){\line(0,1){50}} \put(60,10){\line(0,1){60}} \put(70,10){\line(0,1){60}} \put(11,13){{$x_{6,6}$}} \put(21,13){{$x_{6,5}$}} \put(31,13){{$x_{6,4}$}} \put(41,13){{$x_{6,3}$}} \put(51,13){{$x_{6,2}$}} \put(61,13){{$x_{6,1}$}} \put(21,23){{$x_{5,5}$}} \put(31,23){{$x_{5,4}$}} \put(41,23){{$x_{5,3}$}} \put(51,23){{$x_{5,2}$}} \put(61,23){{$x_{5,1}$}} \put(31,33){{$x_{4,4}$}} \put(41,33){{$x_{4,3}$}} \put(51,33){{$x_{4,2}$}} \put(61,33){{$x_{4,1}$}} \put(41,43){{$x_{3,3}$}} \put(51,43){{$x_{3,2}$}} \put(61,43){{$x_{3,1}$}} \put(51,53){{$x_{2,2}$}} \put(61,53){{$x_{2,1}$}} \put(61,63){{$x_{1,1}$}} \end{picture} \end{equation*} \end{figure} Define $x_{i,j} = 1$ if the corresponding cell contains a dot; define $x_{i,j} = 0$ otherwise. The sum of the variables in each row, column, and standard diagonal is at most 1. This leads to {constraints} of the form \begin{equation*} \sum_{j=1}^{i}x_{i,j}\leq 1, \quad \quad \mbox{for } i=1,2,\dotsc,n \end{equation*} \begin{equation*} \sum_{i=j}^{n}x_{i,j}\leq 1, \quad \quad \mbox{for } j=1,2,\dotsc,n \end{equation*} and \begin{equation*} \sum_{i=k+1}^{n}x_{i,i-k}\leq 1, \quad \quad \mbox{for } k=0,1,\dotsc,n-1. \end{equation*} Finally, $x_{i,j} \in \{0,1\}$ for all $i,j$. The {objective function} is to maximize $\sum x_{i,j}$ subject to the above constraints. It is obvious that the optimal value of this integer program is $N(n)$. It is possible to relax the integer program to obtain a linear program, replacing the condition $x_{i,j} \in \{0,1\}$ by $0 \leq x_{i,j} \leq 1$ for all $i,j$. In fact, we do not have to specify $x_{i,j} \leq 1$ as an explicit constraint since it is already implied by the other constraints; it suffices to require $0 \leq x_{i,j}$ for all $i,j$.
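Both the integer program and its relaxation are small enough to solve directly. The following sketch is ours (using SciPy rather than the Maple computations reported in this section); it sets up exactly the constraints above and solves the relaxed problem.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def lp_value(n):
    # One variable per cell x_{i,j} (1 <= j <= i <= n); one "<= 1"
    # constraint per row, per column and per standard diagonal.
    cells = [(i, j) for i in range(1, n + 1) for j in range(1, i + 1)]
    idx = {c: t for t, c in enumerate(cells)}
    groups = []
    for i in range(1, n + 1):                     # rows
        groups.append([(i, j) for j in range(1, i + 1)])
    for j in range(1, n + 1):                     # columns
        groups.append([(i, j) for i in range(j, n + 1)])
    for k in range(0, n):                         # standard diagonals
        groups.append([(i, i - k) for i in range(k + 1, n + 1)])
    A_ub = np.zeros((len(groups), len(cells)))
    for g, cs in enumerate(groups):
        for c in cs:
            A_ub[g, idx[c]] = 1.0
    # Maximize sum(x): minimize -sum(x) subject to A_ub x <= 1, x >= 0.
    res = linprog(-np.ones(len(cells)), A_ub=A_ub,
                  b_ub=np.ones(len(groups)), bounds=(0, None))
    return -res.fun

print(lp_value(6))   # 4.285714... = 4 + 2/7
\end{verbatim}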
Denoting the optimal value of this linear program by $\LP(n)$, we have that $\LP(n) \geq N(n)$. \begin{example} {\rm Using Maple, it can be seen that $\LP(6) = 4\frac{2}{7}$ (a solution to the LP having this value is presented in Figure \ref{LP:n=6}).} \end{example} \begin{figure} \caption{An optimal solution to the LP for $n=6$} \label{LP:n=6} \begin{equation*} \setlength{\unitlength}{.7mm} \begin{picture}(90,80) \put(10,10){\line(1,0){60}} \put(10,20){\line(1,0){60}} \put(20,30){\line(1,0){50}} \put(30,40){\line(1,0){40}} \put(40,50){\line(1,0){30}} \put(50,60){\line(1,0){20}} \put(60,70){\line(1,0){10}} \put(10,10){\line(0,1){10}} \put(20,10){\line(0,1){20}} \put(30,10){\line(0,1){30}} \put(40,10){\line(0,1){40}} \put(50,10){\line(0,1){50}} \put(60,10){\line(0,1){60}} \put(70,10){\line(0,1){60}} \put(14,13){{$0$}} \put(24,13){{$0$}} \put(34,13){{$\frac{2}{7}$}} \put(44,13){{$\frac{4}{7}$}} \put(54,13){{$\frac{1}{7}$}} \put(64,13){{$0$}} \put(24,23){{$\frac{2}{7}$}} \put(34,23){{$0$}} \put(44,23){{$\frac{3}{7}$}} \put(54,23){{$\frac{1}{7}$}} \put(64,23){{$\frac{1}{7}$}} \put(34,33){{$\frac{5}{7}$}} \put(44,33){{$0$}} \put(54,33){{$0$}} \put(64,33){{$\frac{2}{7}$}} \put(44,43){{$0$}} \put(54,43){{$\frac{5}{7}$}} \put(64,43){{$\frac{2}{7}$}} \put(54,53){{$0$}} \put(64,53){{$\frac{2}{7}$}} \put(64,63){{$0$}} \end{picture} \end{equation*} \end{figure} Next, we tabulate the values of $\LP(n)$ for small $n$ in Table \ref{tab1}. Based on the numerical data in Table \ref{tab1}, it is natural to formulate a conjecture about $\LP(n)$: \begin{table}[tb] \caption{Optimal values of the integer and linear programs for small $n$} \label{tab1} \[ \renewcommand{\arraystretch}{1.35} \begin{array}{c|c|c|c} n & N(n) & \LP(n) & \LP(n) - N(n)\\ \hline 3 & 2 & 2\frac{1}{4} & \frac{1}{4} \\ \hline 4 & 3 & 3 & 0 \\ \hline 5 & 3 & 3\frac{3}{5} & \frac{3}{5} \\\hline 6 & 4 & 4\frac{2}{7} & \frac{2}{7} \\\hline 7 & 5 & 5 & 0 \\\hline 8 & 5 & 5\frac{5}{8} & \frac{5}{8} \\\hline 9 & 6 & 6\frac{3}{10} & \frac{3}{10} \\\hline 10 & 7 & 7 & 0 \\\hline 11 & 7 & 7\frac{7}{11} & \frac{7}{11} \\\hline 12 & 8 & 8\frac{4}{13} & \frac{4}{13} \end{array} \] \end{table} \begin{conjecture}[LP Conjecture] Define \begin{eqnarray*} \LPF(3t) &=& 2t + \frac{t}{3t+1} \\ \LPF(3t+1) &=& 2t+1 \\ \LPF(3t+2) &=& 2t+1 + \frac{2t+1}{3t+2}. \end{eqnarray*} Then we conjecture that $\LP(n) = \LPF(n)$. \end{conjecture} It is easy to show the following: \begin{theorem} \label{LPproof.thm} {If the LP Conjecture is true, then $N(n) = \NF(n)$.} \end{theorem} \begin{proof} First, the LP Conjecture asserts that \begin{equation} \label{eq1} \LP(n) = \LPF(n). \end{equation} Because $N(n)$ is an integer and $N(n) \leq \LP(n)$, we have that \begin{equation} \label{eq2} N(n) \leq \lfloor \LP(n) \rfloor . \end{equation} Simple arithmetic establishes that \begin{equation} \label{eq3} \lfloor \LPF(n) \rfloor = \NF(n).\end{equation} Combining (\ref{eq1}), (\ref{eq2}) and (\ref{eq3}), we have \[N(n) \leq \lfloor \LP(n) \rfloor = \lfloor \LPF(n) \rfloor = \NF(n) .\] We showed in Theorem \ref{thm:construction} that $N(n) \geq \NF(n)$; hence $N(n) = \NF(n).$ \end{proof} The optimal solution to the linear program for $n=6$ that we presented in Figure \ref{LP:n=6} does not seem to have much apparent structure that could be the basis of a mathematical proof. Indeed, most of the small optimal solutions that we obtained are quite irregular, which suggests that proving the LP conjecture could be difficult.
We circumvent this problem by instead studying the dual LP and appealing to weak duality. An LP in {\it standard form} is specified as: \begin{center} \begin{tabular}{|ll|}\hline maximize & $c^T x$\rule{0in}{.18in} \\ subject to & $Ax \leq b$, $x \geq 0$.\\ \hline \end{tabular} \end{center} This is often called the {\it primal LP}. Any vector $x$ such that $Ax \leq b$, $x \geq 0$ is called a {\it feasible solution}. The {\it objective function} is the value to be maximized, namely, $c^T x$. The corresponding {\it dual LP} is specified as: \begin{center} \begin{tabular}{|ll|}\hline minimize & $b^T y$\rule{0in}{.18in} \\ subject to & $A^T y \geq c$, $y \geq 0$.\\ \hline \end{tabular} \end{center} Here, a feasible solution is any vector $y$ such that $A^T y \geq c$, $y \geq 0$. The objective function is $b^T y$. We will use the following classic theorem. \begin{theorem}[Weak Duality Theorem] \label{weakduality.thm} The objective function value of the dual LP at any feasible solution is always greater than or equal to the objective function value of the primal LP at any feasible solution. \end{theorem} We now describe the dual LP for our problem. Suppose we label the rows of our triangle by $r_1,r_2,\dots,r_{n}$, such that {$r_i$ is the row containing $i$ squares}, and we label the columns and diagonals similarly. The following simple lemma is very useful. \begin{lemma} \label{sum.lem} If a cell is in row $r_i$, column $c_j$ and diagonal $d_k$, then {$i+j+k=2n+1$}. \end{lemma} In fact, it is not hard to see that there is a {bijection} from the set of $n(n+1)/2$ cells to the set of triples \[ \mathcal{T} = \{ (i,j,k) : i+j+k = 2n+1, 1 \leq i,j,k\leq n\} .\] In the dual LP, the {variables} are $r_1,r_2,\dots,r_{n}$, $c_1,c_2,\dots,c_{n}$, $d_1,d_2,\dots, d_{n}$. There is a {constraint} for each cell $C$. If $C$ is in row $r_i$, column $c_j$ and diagonal $d_k$, then the corresponding constraint is \[ r_i + c_j +d_k \geq 1.\] The {objective function} is to minimize $\sum r_i + \sum c_j + \sum d_k$. \medskip It turns out that there exist optimal solutions for the dual LP that have a very {simple, regular} structure. These were obtained by extrapolating solutions for small cases found by Maple. When $n = 3t+1$, define \begin{equation} \label{1.soln} r_i = c_i =\max\left\{0,\frac{i-t-1}{3t+1}\right\},\quad d_i=\max\left\{0,\frac{i-t}{3t+1}\right\}. \end{equation} When $n = 3t+2$, define \begin{equation} \label{2.soln} r_i= c_i = d_i =\max\left\{0,\frac{i-t-1}{3t+2}\right\}. \end{equation} When $n = 3t$, define \begin{equation} \label{0.soln} r_i = c_i = d_i =\max\left\{0,\frac{i-t}{3t+1}\right\}. \end{equation} \begin{lemma} \label{feas.lem} The values $r_i, c_i$ and $d_i$ defined in \eqref{1.soln}, \eqref{2.soln} and \eqref{0.soln} are {feasible} for the dual LP, and the {value of the objective function} for the dual LP at these solutions is $\LPF(n)$. \end{lemma} \begin{proof} First we consider the case $n = 3t+1$. Consider any cell $C$, and suppose $C$ is in row $r_i$, column $c_j$ and diagonal $d_k$. Then we have that \begin{align*} r_i+c_j+d_{k}&\geq\frac{i-t-1}{3t+1}+\frac{j-t-1}{3t+1}+\frac{k-t}{3t+1}\\ &=\frac{i+j+k- (3t+2)}{3t+1} \\ &=\frac{6t+3 - (3t+2)}{3t+1} \quad \text{(applying Lemma \ref{sum.lem})}\\ &=1. \end{align*} Therefore all constraints are satisfied. 
The value of the objective function is \begin{align*} &\frac{1}{3t+1}\left( \sum_{i=t+1}^{3t+1}(i-t-1)+ \sum_{i=t+1}^{3t+1}(i-t-1)+ \sum_{i=t}^{3t+1}(i-t) \right)\\ &= \frac{1}{3t+1}\left(\frac{2t(2t+1)}{2}+\frac{2t(2t+1)}{2}+ \frac{(2t+1)(2t+2)}{2}\right)\\ &= \frac{(2t+1)(3t+1)}{3t+1}\\ &= 2t+1\\ &= \LPF(3t+1). \end{align*} The proofs for $n = 3t+2$ and $n= 3t$ are very similar. \end{proof} Our new proof of Theorem \ref{main.thm} follows immediately from Lemma \ref{feas.lem} by slightly modifying the proof of Theorem \ref{LPproof.thm}. \begin{proof} First, from weak duality and Lemma \ref{feas.lem}, we have \begin{equation*} \LP(n) \leq \LPF(n). \end{equation*} Combining this inequality with (\ref{eq2}) and (\ref{eq3}), we have \[N(n) \leq \lfloor \LP(n) \rfloor \leq \lfloor \LPF(n) \rfloor = \NF(n) .\] We showed in Theorem \ref{thm:construction} that $N(n) \geq \NF(n)$; hence $N(n) = \NF(n).$ \end{proof} \section{Discussion} We investigated the ``dots in triangles problem'' due to an application to honeycomb arrays (see \cite{BPPS}). However, we did not realize that the dots in triangles problem had already been solved. Since we did not know the value of $N(n)$, we adopted an ``experimental'' approach: \begin{enumerate} \item We used Maple to gather some numerical data. \item We formulated (obvious) conjectures based on the numerical data. \item Finally, we proved the conjectures mathematically. \end{enumerate} Many problems in combinatorics are amenable to such an approach, but this particular problem serves as an ideal illustration of the usefulness of this methodology. Indeed, the problem seemed almost to ``solve itself'', with minimal thought or human ingenuity required! \medskip It should also be emphasized that, in the end, the resulting proof is quite short and simple: \begin{enumerate} \item By a suitable direct construction, prove that $N(n) \geq \left\lfloor \frac{2n+1}{3} \right\rfloor $. \item Show that the dual LP has a feasible solution whose objective function value is {less than} $\left\lfloor \frac{2n+1}{3} \right\rfloor + 1$. \end{enumerate} The first conjecture we posed was the LP Conjecture, concerning the optimal solutions to the LP. In general, to prove a feasible solution to an LP is optimal, it is necessary to do the following: \begin{enumerate} \item Find a feasible solution to the primal LP and denote the value of the objective function by $C$. \item Find a feasible solution to the dual LP and denote the value of the objective function by $C^*$. \end{enumerate} If $C = C^*$, then the solution to the LP is optimal (this is often called {\it strong duality}). When $n \equiv 1 \bmod 3$, our work in fact proves the LP conjecture. This is because Theorem \ref{thm:construction} yields a solution to the primal LP whose objective function value matches the solution we later found to the dual LP. However, when $n \not\equiv 1 \bmod 3$, we do not have a general solution to the primal LP whose objective function value matches the solution to the dual LP. Although we are confident that the LP conjecture is also true for these values of $n$, proving it could get messy! \section*{Acknowledgements} Research of SRB and MBP was supported by EPSRC grant EP/D053285/1 and research of DRS was supported by NSERC grant 203114-06. The authors would like to thank Tuvi Etzion for discussions, funded by a Royal Society International Travel Grant, which inspired this line of research. Many thanks also to Bill Martin, for comments on an earlier version of this paper.
\section{Introduction} Dimensional regularization (DR) \cite{tHooft:1972tcz} is a widely used scheme to regularize divergences in loop Feynman integrals, where the spacetime dimension is continued to non-integer $d=4-2\epsilon$ dimensions. In general $d$ spacetime dimensions, there exists a special class of so-called evanescent operators that vanish in four dimensions (\emph{i.e.}~in the limit $\epsilon\to 0$). In our previous work \cite{Jin:2022ivc}, we discussed the systematic construction of the gluonic evanescent operators and performed the one-loop renormalization. Based on those results, we continue to study the two-loop renormalization of evanescent operators in this paper.

Starting from the two-loop order, evanescent operators are expected to have non-negligible physical effects. This was studied long ago for the four-fermion-type evanescent operators in four-dimensional \cite{Buras:1989xd,Dugan:1990df,Herrlich:1994kh,Buras:1998raa} and two-dimensional \cite{Bondi:1989nq,Vasiliev:1997sk,Gracey:2016mio} theories. There is an important difference between the gluonic evanescent operators and the four-fermion-type evanescent operators. In the four-fermion case, there are infinitely many operators with the same canonical dimension, which in general can mix with each other, and thus a certain special choice of the operators \cite{Buras:1989xd,Dugan:1990df,Herrlich:1994kh} or some truncation of the basis \cite{DiPietro:2017vsp} is required. In our case, the gluonic operator basis at a given mass dimension contains only a finite number of operators, and this allows us to study the evanescent effects in a concrete setup. For example, we can compute the anomalous dimensions with a general choice of a basis and also address the possible issue of the scheme dependence without any ambiguity. We note that such a finite basis also occurs for evanescent operators in scalar theories \cite{Hogervorst:2015akt,Cao:2021cdt}. Evanescent effects in gravitational theories were also considered in \cite{Bern:2015xsa, Bern:2017puu}.

In this paper, we obtain the two-loop anomalous dimensions for the full mass-dimension-10 basis in the planar YM theory.\footnote{Since the calculation of the anomalous dimensions of the dimension-10 operators itself includes the results for lower-dimension operators, our results also provide for the first time the anomalous dimensions of the dimension-8 operators.} The basis contains 36 independent operators, and six of them are evanescent. We use two different schemes to perform the renormalization: the modified minimal subtraction ($\overline{\text{MS}}$) scheme \cite{Bardeen:1978yd} and the finite renormalization scheme \cite{Buras:1989xd,Bondi:1989nq}. The anomalous dimensions are scheme dependent because of the effect of the non-zero beta function. To further check our results, we consider the Wilson-Fisher (WF) conformal fixed point \cite{Wilson:1971dc} of the pure YM theory, where the anomalous dimensions should be scheme independent. We find that the two schemes indeed give the same results, which provides a non-trivial consistency check of our computation.

We would like to stress that the two-loop computation for the high-dimensional and high-length operators in the YM theory is also a challenging technical problem. For example, to calculate the anomalous dimensions of operators at classical dimension 10, two-loop four-point and five-point form factors are required.
We employ a strategy that combines the unitarity method \cite{Bern:1994cg,Bern:1994zx,Britto:2004nc} and the integration by parts (IBP) reduction \cite{Chetyrkin:1981qh,Tkachov:1981wb}. Importantly, since we aim to study the evanescent operators, the computation must be performed in $d$ dimensions. Therefore, it is natural to use the $d$-dimensional unitarity cut method and work in the conventional dimensional regularization (CDR). A technical challenge in computing the form factors of high-length operators is to perform non-trivial tensor integral reductions. We find that this reduction can be done relatively efficiently by a modified Passarino-Veltman reduction method \cite{Passarino:1978jh,Kreimer:1991wj}. The efficiency of the program can be further enhanced by using the numerical reconstruction method. Since the operator basis is finite, it is straightforward to reconstruct the analytic renormalization matrix using a finite set of numerical data. In this way, we obtain the analytic two-loop anomalous dimensions. The strategy we employ is expected to also provide an efficient framework for the two-loop renormalization of general high-length operators.

The paper is arranged as follows. In Section~\ref{sec:prepare}, we introduce necessary notations and the dimension-10 operator basis. In Section~\ref{sec:renorm}, we describe the renormalization procedure and give the explicit renormalization formulas in the $\overline{\text{MS}}$ scheme and the finite renormalization scheme. In Section~\ref{calcbare}, we explain our calculation of the loop form factors using the $d$-dimensional unitarity method, and a detailed description of how to deal with the challenging tensor reduction is also given. In Section~\ref{sec:result}, we present our results, including the renormalization matrices and the anomalous dimensions. A summary and discussion are given in Section~\ref{sec:discuss}. The explicit basis of the mass-dimension-10 operators is given in Appendix \ref{all dim-10}. The two-loop renormalization matrix between physical operators is presented in Appendix~\ref{zppresult}. \section{Preparation}\label{sec:prepare} In this section, we first set up basic notations and our conventions for local operators in the pure YM theory in Section~\ref{ymnotation}. Then in Section~\ref{allopers}, we define the physical and evanescent operators, and we also present the operator basis of classical dimension 10. \subsection{Setup in the YM theory}\label{ymnotation} The Lagrangian of the pure YM theory is\footnote{For simplicity, in this paper we will not distinguish upper and lower Lorentz indices. For example, $\eta^{\mu\nu}$ and $\delta^{\mu}_\nu$ are regarded as equivalent. This will not cause any problem in flat spacetime.} \begin{align} \mathcal{L}=-\frac{1}{2}\text{tr}(F_{\mu\nu}F_{\mu\nu}) = -\frac{1}{4} F^a_{\mu\nu}F^a_{\mu\nu}\,, \end{align} where $F_{\mu\nu}\equiv F^a_{\mu\nu} T^a$, and $T^a$, with $a = 1,\ldots,N_c^2-1$, are the SU($N_c$) generators. The field strength tensor $F^a_{\mu\nu}$ is defined as \begin{align} F^a_{\mu\nu}=\partial_\mu A_\nu^a-\partial_\nu A_\mu^a+g_0 f^{abc}A_\mu^bA_\nu^c\,, \end{align} where $A_\mu^a$ is the gauge field, $f^{abc}$ is the structure constant and $g_0$ is the bare coupling. The Einstein summation is assumed for the repeated indices. The covariant derivative is defined to be \begin{align} D_\mu=\partial_\mu-\text{i}g_0A_\mu^aT^a\,.
\end{align} The action of the covariant derivative is \begin{align} D_\mu X\equiv [D_\mu,X]\,, \label{Dact} \end{align} given that $X=X^aT^a$. The commutator of two $D_\mu$'s reads \begin{align} [D_\mu,D_\nu]=-\text{i}g_0F_{\mu\nu}\,. \label{Dcommute} \end{align} The equation of motion reads \begin{align} D_\mu F_{\mu\nu}=0\,. \label{eom} \end{align} Another important relationship is the Bianchi identity: \begin{align} D_{\mu}F_{\rho\sigma}=D_{\sigma}F_{\rho\mu}+D_{\rho}F_{\mu\sigma}\,. \label{bianchi} \end{align} We aim to study the gauge-invariant local scalar operators. For simplicity, in this paper we will take the large $N_c$ limit. Consequently, we only need to consider the single-trace operators. We will often abbreviate the Lorentz indices by integer numbers, such as \begin{align} \text{tr}(F_{12}F_{12})\equiv \text{tr}(F_{\mu_1\mu_2}F_{\mu_1\mu_2})\,. \end{align} It is conventional to classify the operators according to their classical dimensions. For example, we say that $\text{tr}(F_{12}F_{12})$ is a dimension-4 operator. Given an operator $O$, its $n$-point form factor is defined to be \begin{align} \mathcal{F}_{O,\text{$n$ gluons}}=\int \text{d}^dx e^{-{\text{i}}q\cdot x}\langle \text{g}_1\cdots,\text{g}_n |O|0\rangle\,, \end{align} where $\text{g}_i$ denotes an external gluon carrying an on-shell momentum $p_i$, and $q=\sum_i {p_i}$ is the off-shell momentum associated with the operator. See \emph{e.g.}~\cite{Yang:2019vag} for a recent introduction to form factors. All the gluons are assumed to be outgoing in this paper. For an operator $O$, there exists an integer $m$ such that $\mathcal{F}^{(0)}_{O,\text{$m$ gluons}}\neq0$ and its tree-level $n$-point form factors vanish if $n<m$. We call the $m$-gluon form factor the minimal form factor of this operator. Similarly, an $(m+1)$-point form factor is called a next-to-minimal form factor, and an $(m-1)$-point form factor is called a sub-minimal form factor. The tree-level color-ordered minimal form factor can be obtained straightforwardly from the dictionary (see \emph{e.g.}~\cite{Jin:2022ivc}) \begin{align} F^{\mu\nu}\to \text{i}(p^\mu e^\nu-p^\nu e^\mu),\qquad D^{\mu}\to \text{i}p^\mu\,, \end{align} where $e^\mu$ is the polarization of a gluon and $p^\mu$ is the momentum of a gluon. By definition, there is no tree-level sub-minimal form factor. For the tree-level next-to-minimal form factors, compact formulas are given in \cite{Jin:2022ivc}. The length of an operator is defined to be the number of gluons of its minimal form factor.\footnote{In most cases, one can simply take the length as equal to the number of $F_{\mu\nu}$ in an operator. However, this may cause ambiguity sometimes. For example, the operator $\text{tr}([D_1,D_2]F_{34} F_{41} F_{23})=-\text{i}g_0 \text{tr}([F_{12},F_{34}] F_{41} F_{23})$ does not have a definite length by counting the number of $F_{\mu\nu}$; while using the above definition based on the minimal form factor, this operator should have length 4.} For example, the operator \begin{align} \text{tr}(D_3D_4F_{12}F_{23}F_{41})+\text{tr}\big(D^4(F_{12}F_{12})\big) \label{opexample} \end{align} contains a length-3 and a length-2 monomial operator. The length of the whole operator is two, since the minimal tree form factor is a two-point form factor. In this paper, each length-$L$ monomial operator is implicitly multiplied by a factor $(-\text{i}g_0)^{L-2}$. For example, \eqref{opexample} means $-\text{i}g_0 \text{tr}(D_3D_4F_{12}F_{23}F_{41})+\text{tr}\big(D^4(F_{12}F_{12})\big)$.
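As a simple illustration of the minimal form factor dictionary above, consider the dimension-4 operator $\text{tr}(F_{12}F_{12})$: up to overall normalization conventions, its tree-level two-point form factor evaluates to
\begin{align*}
\text{i}(p_1^{\mu} e_1^{\nu}-p_1^{\nu}e_1^{\mu})\ \text{i}(p_2^{\mu} e_2^{\nu}-p_2^{\nu}e_2^{\mu}) =-2\big[(p_1\cdot p_2)(e_1\cdot e_2)-(p_1\cdot e_2)(p_2\cdot e_1)\big]\,,
\end{align*}
which is the familiar two-gluon matrix element of $\text{tr}(F_{\mu\nu}F_{\mu\nu})$.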
The factor $(-\text{i}g_0)^{L-2}$ is due to the fact that a field strength tensor can be interpreted as a commutator of two $D_{\mu}$'s and the commutation relation \eqref{Dcommute} involves a factor $-\text{i}g_0$. Under charge conjugation, the color trace of a monomial operator is reversed, together with a factor $(-1)^L$ with $L$ the length of the monomial operator \cite{Bardeen:1969md}. For example, \begin{align} {\rm tr}(D_3D_4F_{12}F_{23}F_{41}) \to - {\rm tr}(F_{41}F_{23}D_3D_4F_{12})\,. \end{align} An eigenstate operator under charge conjugation with $+\ (-)$ C-parity is said to be C-even (C-odd). An operator before renormalization is called a bare operator, denoted as $O_\text{b}$. Since we aim to study the evanescent operators, we will adopt the CDR scheme to regularize the divergences of a bare operator, with loop momenta and external momenta all in $d$ spacetime dimensions. Given a set of bare operators $\vec{O}_\text{b}$, one can renormalize them as follows: \begin{align} {O}_i=Z_i^{~j}{O}_{\text{b},j}\,. \label{renormoperator} \end{align} A matrix element $Z^{~j}_i$ represents an operator mixing from $O_{\text{b},i}$ to $O_{\text{b},j}$. In general, operators of the same classical dimension can mix with each other. The dilatation matrix $\mathcal{D}$ is defined as \begin{align} \mathcal{D}\equiv -\mu\frac{\text{d}Z}{\text{d}\mu}Z^{-1}=\sum_{l=1}\left(\frac{\alpha_s}{4\pi}\right)^l\mathcal{D}^{(l)}\,. \label{overallgamma} \end{align} The eigenvalues of the dilatation matrix are the anomalous dimensions, denoted as $\gamma$. \subsection{Physical and evanescent operators}\label{allopers} We call an operator a physical operator if it does not vanish in four spacetime dimensions. Otherwise, we call it an evanescent operator. For example, the operator \begin{align} \delta^{\mu_1\mu_2\mu_3\mu_4\mu_5}_{\mu_6\mu_7\mu_8\mu_9\mu_{10}}\text{tr}(F_{\mu_1\mu_2}F_{\mu_3\mu_4}F_{\mu_5\mu_6}F_{\mu_7\mu_8}F_{\mu_9\mu_{10}})\,, \label{expforeva} \end{align} is an evanescent operator, where the generalized Kronecker symbol is defined to be \begin{equation} \delta^{\mu_1..\mu_n}_{\nu_1...\nu_n}= {\rm det}(\delta^\mu_\nu) = \left| \begin{matrix} \delta^{\mu_1}_{\nu_1} & \ldots & \delta^{\mu_1}_{\nu_n} \\ \vdots & & \vdots\\ \delta^{\mu_n}_{\nu_1} & \ldots & \delta^{\mu_n}_{\nu_n} \end{matrix} \right|\,. \end{equation} One can see that the rank-5 Kronecker symbol guarantees the vanishing of \eqref{expforeva} in four spacetime dimensions. More details of the systematic construction of evanescent operators can be found in \cite{Jin:2022ivc}. We will choose the dimension-10 operators as concrete examples to study the effects of evanescent operators on the two-loop renormalization. As mentioned above, we will consider the large $N_c$ limit and thus we focus only on the single-trace operators. The arrangement of our operator basis is summarized in Table~\ref{opertable}. The basis is first divided into two classes: the physical ones and the evanescent ones. Within each class, we classify the operators according to their C-parities and then their lengths. The physical operators can be further arranged into different helicity sectors. Within a helicity sector, the tree minimal form factors are only non-vanishing in the corresponding helicity configuration (and the conjugate configuration). For example, $\text{tr}(F_{12}F_{12})$ is in the $(-)^2$ sector and its tree-level minimal form factor is only non-vanishing if the helicities are $(-)^2$ or $(+)^2$.
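As a side remark, the defining property of evanescent operators such as \eqref{expforeva} is easy to check numerically. The following Python sketch (a minimal illustration; the helper \texttt{gen\_delta} is ours) verifies that the rank-5 Kronecker symbol vanishes identically once the indices are restricted to four values, since at least two of $\mu_1,\dots,\mu_5$ must then coincide and the determinant has two equal rows:

\begin{verbatim}
import numpy as np

def gen_delta(mu, nu):
    # generalized Kronecker symbol: det of the matrix delta^{mu_a}_{nu_b}
    M = np.array([[1.0 if m == n else 0.0 for n in nu] for m in mu])
    return np.linalg.det(M)

rng = np.random.default_rng(0)
for _ in range(10000):
    mu = rng.integers(0, 4, size=5)   # five indices taking only four values
    nu = rng.integers(0, 4, size=5)
    assert abs(gen_delta(mu, nu)) < 1e-12
\end{verbatim}

In general $d$ dimensions, by contrast, the symbol need not vanish, which is what makes such operators non-trivial in dimensional regularization.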
The explicit definitions of the operators are given in Appendix~\ref{all dim-10}. A similar basis has been given in \cite{Jin:2022ivc}. Here an improvement is made: the basis is reorganized in a form such that the total derivative operators are separated explicitly, see Appendix~\ref{all dim-10} for further details. This reorganization will facilitate the comparison of the anomalous dimensions in different renormalization schemes in Section~\ref{getZ}. \begin{table} \centering \begin{tabular}{|c|l|l|l} \hline \multirow{6}{*}[-40pt]{30 physical} & \multirow{4}{*}[-20pt]{24 C-even} & 1 length-2 & \multicolumn{1}{l|}{$O_1:$ $(-)^2$} \\ \cline{3-4} & & 4 length-3 & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}$O_2,O_3:$ $(-)^3$\\ $O_4,O_5:$ $(-)^2+$\end{tabular}} \\ \cline{3-4} & & 15 length-4 & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}$O_6$--$O_{10}:$ $(-)^4$\\$O_{11}$--$O_{14}:$ $(-)^3+$\\ $O_{15}$--$O_{20}:$ $(-)^2(+)^2$\end{tabular}} \\ \cline{3-4} & & 4 length-5 & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}$O_{21},O_{22}:$ $(-)^5$\\$O_{23},O_{24}:$ $(-)^3(+)^2$\end{tabular}} \\ \cline{2-4} & \multirow{2}{*}[-13pt]{6 C-odd} & 1 length-3 & \multicolumn{1}{l|}{$O_{25}:$ $(-)^2+$} \\ \cline{3-4} & & 5 length-4 & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}$O_{26}:$ $(-)^4$\\ $O_{27},O_{28}:$ $(-)^3+$\\ $O_{29},O_{30}:$ $(-)^2(+)^2$\end{tabular}} \\ \hline \multicolumn{1}{|l|}{\multirow{3}{*}{6 evanescent}} & \multirow{2}{*}{5 C-even} & 3 length-4 & \multicolumn{1}{l|}{$O_{31},O_{32},O_{33}$} \\ \cline{3-4} \multicolumn{1}{|l|}{} & & 2 length-5 & \multicolumn{1}{l|}{$O_{34},O_{35}$} \\ \cline{2-4} \multicolumn{1}{|l|}{} & 1 C-odd & 1 length-4 & \multicolumn{1}{l|}{$O_{36}$} \\ \cline{1-4} \end{tabular} \caption{The arrangement of the dimension-10 single-trace operator basis. The subscript $i$ in $O_i$ corresponds to the row and column in the $Z$ matrix. We first divide the operators into the physical sector and the evanescent sector. Within each sector, the operators are further classified successively according to the C-parity, the length and the helicity sector. } \label{opertable} \end{table} \section{Renormalization}\label{sec:renorm} In this section, we discuss the renormalization and the $Z$ matrix. We first discuss the structure of the $Z$ matrix under our arrangement of operators in Section~\ref{sec:Zform}. Then in Section~\ref{sec:strategy} we show the divergence structure of form factors and the strategy for renormalization. In Section~\ref{sec:scheme}, we discuss the $Z$ matrix in two different renormalization schemes: the $\overline{\text{MS}}$ scheme \cite{Bardeen:1978yd} and the finite renormalization scheme \cite{Buras:1989xd,Bondi:1989nq,Dugan:1990df}. \subsection{Structure of the $Z$ matrix}\label{sec:Zform} Before showing how to calculate the $Z$ matrix, in this section we first give a description of the blockwise structure of the $Z$ matrix resulting from our operator basis choice in Section~\ref{allopers}. The operators are divided into physical and evanescent operators, and we recall that physical operators refer to those that are non-vanishing in four spacetime dimensions. The $Z$ matrix has the following structure \begin{align} \left( \begin{array}{cc} Z_{\text{pp}} & Z_{\text{pe}} \\ Z_{\text{ep}} & Z_{\text{ee}} \end{array} \right). \label{Zfinite} \end{align} In a subscript, an ``e'' denotes ``evanescent'' and a ``p'' denotes ``physical''. For example, $Z_{\text{pp}}$ denotes the block of physical-to-physical mixing.
Other blocks are similarly defined. Since there is no mixing between the C-even and C-odd operators, the $Z$ matrix can be further divided into \begin{align} \left( \begin{array}{cccc} Z_{\text{pp}}^{\text{even}} & 0 &Z_{\text{pe}}^{\text{even}} & 0\\ 0 & Z_{\text{pp}}^{\text{odd}} &0 & Z_{\text{pe}}^{\text{odd}}\\ Z_{\text{ep}}^{\text{even}} & 0 &Z_{\text{ee}}^{\text{even}} & 0\\ 0 & Z_{\text{ep}}^{\text{odd}} &0 & Z_{\text{ee}}^{\text{odd}}\\ \end{array} \right)\,.\label{Zstructure} \end{align} Each block can be further arranged according to the lengths of the operators as follows \begin{align} &Z_{\text{pp}}^{\text{even}}=\left( \begin{array}{cccc} Z_{\text{pp},2\to 2}^{\text{even}} & Z_{\text{pp},2\to 3}^{\text{even}} & Z_{\text{pp},2\to 4}^{\text{even}}& Z_{\text{pp},2\to 5}^{\text{even}}\\ Z_{\text{pp},3\to 2}^{\text{even}} & Z_{\text{pp},3\to 3}^{\text{even}} & Z_{\text{pp},3\to 4}^{\text{even}}& Z_{\text{pp},3\to 5}^{\text{even}}\\ Z_{\text{pp},4\to 2}^{\text{even}} & Z_{\text{pp},4\to 3}^{\text{even}} & Z_{\text{pp},4\to 4}^{\text{even}}& Z_{\text{pp},4\to 5}^{\text{even}}\\ Z_{\text{pp},5\to 2}^{\text{even}} & Z_{\text{pp},5\to 3}^{\text{even}} & Z_{\text{pp},5\to 4}^{\text{even}}& Z_{\text{pp},5\to 5}^{\text{even}}\\ \end{array} \right)\,, &&Z_{\text{pp}}^{\text{odd}}=\left( \begin{array}{cc} Z_{\text{pp},3\to 3}^{\text{odd}} & Z_{\text{pp},3\to 4}^{\text{odd}}\\ Z_{\text{pp},4\to 3}^{\text{odd}} & Z_{\text{pp},4\to 4}^{\text{odd}}\\ \end{array} \right)\,, \label{Zpp}\\ &Z_{\text{pe}}^{\text{even}}=\left( \begin{array}{cc} Z_{\text{pe},2\to 4}^{\text{even}}& Z_{\text{pe},2\to 5}^{\text{even}}\\ Z_{\text{pe},3\to 4}^{\text{even}}& Z_{\text{pe},3\to 5}^{\text{even}}\\ Z_{\text{pe},4\to 4}^{\text{even}}& Z_{\text{pe},4\to 5}^{\text{even}}\\ Z_{\text{pe},5\to 4}^{\text{even}}& Z_{\text{pe},5\to 5}^{\text{even}}\\ \end{array} \right)\,, &&Z_{\text{pe}}^{\text{odd}}=\left( \begin{array}{c} Z_{\text{pe},3\to 4}^{\text{odd}}\\ Z_{\text{pe},4\to 4}^{\text{odd}}\\ \end{array} \right)\,, \label{Zpe}\\ &Z_{\text{ep}}^{\text{even}}=\left( \begin{array}{cccc} Z_{\text{ep},4\to 2}^{\text{even}} & Z_{\text{ep},4\to 3}^{\text{even}} & Z_{\text{ep},4\to 4}^{\text{even}}& Z_{\text{ep},4\to 5}^{\text{even}}\\ Z_{\text{ep},5\to 2}^{\text{even}} & Z_{\text{ep},5\to 3}^{\text{even}} & Z_{\text{ep},5\to 4}^{\text{even}}& Z_{\text{ep},5\to 5}^{\text{even}}\\ \end{array} \right)\,, &&Z_{\text{ep}}^{\text{odd}}=\left( \begin{array}{cc} Z_{\text{ep},4\to 3}^{\text{odd}} & Z_{\text{ep},4\to 4}^{\text{odd}}\\ \end{array} \right)\,, \label{Zep}\\ &Z_{\text{ee}}^{\text{even}}=\left( \begin{array}{cc} Z_{\text{ee},4\to 4}^{\text{even}}& Z_{\text{ee},4\to 5}^{\text{even}}\\ Z_{\text{ee},5\to 4}^{\text{even}}& Z_{\text{ee},5\to 5}^{\text{even}}\\ \end{array} \right)\,, &&Z_{\text{ee}}^{\text{odd}}=\left( \begin{array}{c} Z_{\text{ee},4\to 4}^{\text{odd}}\\ \end{array} \right)\,, \label{Zee} \end{align} where $Z_{L\to L'}$ represents the mixing from length-$L$ operators to length-$L'$ operators. The dilatation matrix $\mathcal{D}$, defined in \eqref{overallgamma}, has the same structure as the $Z$ matrix. The anomalous dimensions are the eigenvalues of $\mathcal{D}$, which is given by the equation \begin{align} \text{Det}\left(\mathcal{D}-\boldsymbol{1}\ \gamma\right)=0\,, \label{eigen} \end{align} where Det$(M)$ means the determinant of the matrix $M$ and $\boldsymbol{1}$ denotes the identity matrix. 
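In practice, once the dilatation matrix is known (analytically or numerically) at a given loop order, solving \eqref{eigen} is a standard eigenvalue problem; schematically, in Python:

\begin{verbatim}
import numpy as np

# hypothetical input: the 36 x 36 dilatation matrix at a given loop order
D = np.loadtxt("dilatation_matrix.dat")
anomalous_dimensions = np.linalg.eigvals(D)   # roots of Det(D - gamma 1) = 0
\end{verbatim}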
$\gamma$ can be expanded in the coupling constant as \begin{align} \gamma=\sum_{l=1}\left(\frac{\alpha_s}{4\pi}\right)^l\gamma^{(l)}\,. \label{gammaexpand} \end{align} In this work, we calculate $\gamma^{(1)}$ and $\gamma^{(2)}$. At the one-loop order, an operator does not mix with operators of lower lengths. In other words, $Z^{(1)}$ is a block upper triangular matrix according to the lengths of operators. This leads to the fact that the two-loop calculation of anomalous dimensions only requires the blocks $Z^{(1)}_{L\to L}$, $Z^{(1)}_{L\to L+1}$, $Z^{(2)}_{L\to L-1}$ and $Z^{(2)}_{L\to L}$. \subsection{Divergence structure of form factors}\label{sec:strategy} The renormalization $Z$ matrices can be computed from the ultraviolet (UV) divergences of the bare form factors. The computation of bare form factors will be the topic of the next section. The bare form factors contain both UV and infrared (IR) divergences. The UV divergence can be obtained by subtracting the IR divergence from the bare form factors. Below we discuss the divergences and renormalization of form factors. According to \eqref{renormoperator}, the renormalized form factor of an operator $O_i$ reads \begin{align} \mathcal{F}_i=Z_i^{\ j} \mathcal{F}_{j,\text{b}}\,, \label{Frenorm} \end{align} with $\mathcal{F}_{j,\text{b}}$ the form factors of the bare operators, called the bare form factors. The $n$-point bare form factors can be expanded over the bare coupling as \begin{align} \mathcal{F}_{i,\text{b}}=g_0^{\delta_n} \sum_{l=0} \left(\frac{\alpha_0}{4\pi}\right)^{l}\mathcal{F}_{i,\text{b}}^{(l)}\,, \label{bare} \end{align} where $\alpha_0=\frac{g_0^2}{4\pi}$. $\mathcal{F}_{i,\text{b}}^{(l)}$ denotes the bare $l$-loop form factor, which will be discussed in detail in Section~\ref{calcbare}. Here $\delta_n$ equals $(n-2)$ according to the convention that each monomial operator has a factor $(-\text{i}g_0)^{L-2}$ as mentioned in Section~\ref{ymnotation}. In the $\overline{\text{MS}}$ scheme, the two-loop renormalization of $\alpha_0$ reads (see \emph{e.g.}~\cite{Gehrmann:2011aa}) \begin{align} \alpha_0=\alpha_s S_\epsilon^{-1}\mu^{2\epsilon}\left(1-\frac{\beta_0}{\epsilon}\frac{\alpha_s}{4\pi} +\left(\frac{\beta_0^2}{\epsilon^2}-\frac{\beta_1}{2\epsilon}\right)\left(\frac{\alpha_s}{4\pi}\right)^2+\mathcal{O}(\alpha_s^3)\right)\,, \label{renormalpha} \end{align} where $\alpha_s=\frac{g_s^2}{4\pi}$ is the renormalized coupling constant, $S_\epsilon=(4\pi e^{\gamma_E})^\epsilon$, and $\mu$ is the renormalization scale. The constants in \eqref{renormalpha} are \begin{align} \beta_0=\frac{11N_c}{3},\quad \beta_1=\frac{34N_c^2}{3}\,. \end{align} The $Z$ matrix can be expanded as \begin{align} Z^{\ j}_i=\delta^j_i+\sum_{l=1}\left(\frac{\alpha_s}{4\pi}\right)^l{{Z^{(l)}}^{\ j}_i}\,. \label{Zexpand} \end{align} Substituting \eqref{Zexpand} into \eqref{overallgamma}, together with the beta function in the pure Yang-Mills theory: \begin{align} \mu\frac{\text{d}\alpha_s}{\text{d}\mu}=-2\epsilon\alpha_s-\frac{\beta_0}{2\pi}\alpha_s^2+\mathcal{O}(\alpha_s^3)\,, \end{align} one obtains the expansion of the dilatation matrix as \begin{align} &\mathcal{D}^{(1)}=2\epsilon Z^{(1)}\,,\label{gamma1}\\ &\mathcal{D}^{(2)}=4\epsilon Z^{(2)}-2\epsilon \left(Z^{(1)}\right)^2+2\beta_0Z^{(1)}\,.
\label{gamma2} \end{align} According to \eqref{gamma2}, a finite two-loop dilatation matrix requires the following matrix equation \begin{align} Z^{(2)}|_{\frac{1}{\epsilon^2}-\text{part}}=\frac{1}{2}(Z^{(1)})^2-\frac{\beta_0}{2\epsilon}Z^{(1)}\,. \label{z2ep2ms} \end{align} Substituting \eqref{bare}, \eqref{renormalpha} and \eqref{Zexpand} into \eqref{Frenorm}, one gets \begin{align} \mathcal{F}_i=g_s^{\delta_n}S_\epsilon^{-\frac{\delta_n}{2}}\sum_{l=0} \left(\frac{\alpha_s}{4\pi}\right)^{l}\mathcal{F}_i^{(l)}\,, \end{align} with \begin{align} &\mathcal{F}_i^{(0)}=\mathcal{F}_{i,\text{b}}^{(0)}\,,\label{reF0}\\ &\mathcal{F}_i^{(1)}=S_\epsilon^{-1}\mathcal{F}_{i,\text{b}}^{(1)}+\left({Z^{(1)}}_i^{\ j}-\frac{\delta_n}{2}\frac{\beta_0}{\epsilon}\delta^{\ j}_i\right)\mathcal{F}_{j,\text{b}}^{(0)}\,,\label{reF1}\\ &\mathcal{F}_i^{(2)}=S_\epsilon^{-2}\mathcal{F}_{i,\text{b}}^{(2)}+S_\epsilon^{-1}\left({Z^{(1)}}^{\ j}_i-\left(1+\frac{\delta_n}{2}\right)\frac{\beta_0}{\epsilon}\delta^{\ j}_i\right)\mathcal{F}_{j,\text{b}}^{(1)}\nonumber\\ &\quad \qquad+ \left({Z^{(2)}}^{\ j}_i-\frac{\delta_n}{2}\frac{\beta_0}{\epsilon}{Z^{(1)}}^{\ j}_i+\frac{\delta_n^2+2\delta_n}{8}\frac{\beta_0^2}{\epsilon^2}\delta^{\ j}_i-\frac{\delta_n}{4}\frac{\beta_1}{\epsilon}\delta^{\ j}_i\right)\mathcal{F}_{j,\text{b}}^{(0)}\,. \label{reF2} \end{align} The IR divergences take the following universal form \cite{Catani:1998bh} \begin{align} &\mathcal{F}_i^{(1)}\big{|}_{\text{IR}}=I^{(1)}(\epsilon)\mathcal{F}_i^{(0)}\,,\label{ir1}\\ &\mathcal{F}_i^{(2)}\big{|}_{\text{IR}}=I^{(2)}(\epsilon)\mathcal{F}_i^{(0)}+I^{(1)}(\epsilon)\mathcal{F}_i^{(1)}\,, \label{ir2} \end{align} where the factors $I^{(1)}$ and $I^{(2)}$ are defined as \begin{align} I_{n}^{(1)}(\epsilon) &= - {e^{\gamma_E \epsilon} \over \Gamma(1-\epsilon)} \bigg( \frac{N_c}{\epsilon^2} + \frac{\beta_0}{2 \epsilon} \bigg) \sum_{i=1}^n (-{s_{i,i+1}} )^{-\epsilon} \,, \label{thei1} \\ I_{n}^{(2)}(\epsilon) &= - {1\over2} \big[ I^{(1)}(\epsilon) \big]^2 - {\beta_0 \over \epsilon} I^{(1)}(\epsilon) + {e^{-\gamma_E \epsilon} \Gamma(1-2\epsilon) \over \Gamma(1-\epsilon)} \left[ \frac{\beta_0}{\epsilon} + {\cal K}\right] I^{(1)}(2\epsilon) + n {e^{\gamma_E \epsilon} \over \epsilon \Gamma(1-\epsilon)}{\cal H}_{\Omega,g}^{(2)} \,, \nonumber \end{align} with \begin{align} {\cal K} = \left({67\over9} - {\pi^2\over3}\right) N_c\,, \qquad {\cal H}_{\Omega,g}^{(2)} = \left( \frac{\zeta_3}{2} + {5\over12} + {11\pi^2 \over 144} \right)N_c^2 \,. \end{align} Given the bare form factors (the calculation of bare form factors will be given in Section~\ref{calcbare}), one can calculate the $Z$ matrix according to \eqref{reF0}$\sim$\eqref{ir2}. \subsection{$Z$ matrix and renormalization schemes}\label{sec:scheme} The definition of the $Z$ matrix depends on the choice of renormalization schemes. In this subsection, we discuss the $Z$ matrix in the $\overline{\text{MS}}$ scheme \cite{Bardeen:1978yd} and the finite renormalization scheme \cite{Buras:1989xd,Bondi:1989nq,Dugan:1990df}, given in Section~\ref{zms} and Section~\ref{zfin} respectively. We will show that it is easier to compute the physical anomalous dimensions in the finite renormalization scheme. \subsubsection{$\overline{\text{MS}}$ scheme}\label{zms} In the $\overline{\text{MS}}$ scheme, the $Z$ matrix is determined only by the UV divergences of form factors.
Using \eqref{reF0}-\eqref{reF2} and \eqref{ir1}-\eqref{ir2}, one can get the relations between the $Z$ matrix and the bare form factors up to two loops: \begin{align} {Z^{(1)}}^{\ j}_i\mathcal{F}_{j,\text{b}}^{(0)}&=\left(I^{(1)}(\epsilon)\mathcal{F}_{i,\text{b}}^{(0)}-S_\epsilon^{-1}\mathcal{F}_{i,\text{b}}^{(1)}+\frac{\delta_n}{2}\frac{\beta_0}{\epsilon}\mathcal{F}_{i,\text{b}}^{(0)}\right)\bigg{|}_{\text{divergent part}}\,.\label{msbareqa1}\\ {Z^{(2)}}^{\ j}_i\mathcal{F}_{j,\text{b}}^{(0)}&=\left[I^{(2)}(\epsilon)\mathcal{F}_{i,\text{b}}^{(0)}+I^{(1)}(\epsilon)\mathcal{F}_i^{(1)}-S_\epsilon^{-2}\mathcal{F}_{i,\text{b}}^{(2)}-S_\epsilon^{-1}\left({Z^{(1)}}^{\ j}_i-\left(1+\frac{\delta_n}{2}\right)\frac{\beta_0}{\epsilon}\delta^{\ j}_i\right)\mathcal{F}_{j,\text{b}}^{(1)}\right.\nonumber\\ & \qquad - \left.\left(-\frac{\delta_n}{2}\frac{\beta_0}{\epsilon}{Z^{(1)}}^{\ j}_i+\frac{\delta_n^2+2\delta_n}{8}\frac{\beta_0^2}{\epsilon^2}\delta^{\ j}_i-\frac{\delta_n}{4}\frac{\beta_1}{\epsilon}\delta^{\ j}_i\right)\mathcal{F}_{j,\text{b}}^{(0)}\right]\bigg{|}_{\text{divergent part}}\,. \label{msbareqa2} \end{align} We make two remarks on practical computations. In our problem, the $Z$ matrix is a finite-dimensional matrix, thus one can reconstruct it from a sufficient set of numerical evaluations of equations \eqref{msbareqa1} and \eqref{msbareqa2}. Here we use $d$-dimensional numerical points, which can be generated by assigning each external Lorentz invariant a random numerical value, with no relations imposed among the Lorentz invariants. In practice, we perform this numerical assignment during the calculation of the bare form factors, as will be seen in Section~\ref{calcbare}. As another remark, since an operator may mix with operators of different lengths, it is necessary to consider the renormalization of form factors with different numbers of external gluons. A convenient order to renormalize the form factors of a given operator is from low-point ones to high-point ones. In this way, one can use the mixing matrix elements associated with lower-length operators as input in the renormalization of a higher-point form factor. This not only simplifies the computation but also provides a check of the computation. From the bare form factors, one thus obtains the $Z$ matrix up to two loops \begin{align} \left( \begin{array}{cc} Z_{\text{pp}}^{(1)} & Z_{\text{pe}}^{(1)} \\ 0 & Z_{\text{ee}}^{(1)} \end{array} \right), \qquad \left( \begin{array}{cc} Z_{\text{pp}}^{(2)} & Z_{\text{pe}}^{(2)} \\ Z_{\text{ep}}^{(2)} & Z_{\text{ee}}^{(2)} \end{array} \right), \label{Zfinite12} \end{align} and the corresponding dilatation matrices are \begin{align} \left( \begin{array}{cc} \mathcal{D}_{\text{pp}}^{(1)} & \mathcal{D}_{\text{pe}}^{(1)} \\ 0 & \mathcal{D}_{\text{ee}}^{(1)} \end{array} \right) , \qquad \left( \begin{array}{cc} \mathcal{D}_{\text{pp}}^{(2)} & \mathcal{D}_{\text{pe}}^{(2)} \\ \mathcal{D}_{\text{ep}}^{(2)} & \mathcal{D}_{\text{ee}}^{(2)} \end{array} \right) . \label{dilatationms} \end{align} Note that starting from two loops, all four blocks of $Z$ and $\mathcal{D}$ matrices are in general non-vanishing. In particular, $\mathcal{D}_{\text{ep}}$ is non-zero starting from two loops. Thus even to compute the two-loop physical anomalous dimensions $\gamma_{\text{pp}}^{(2)}$, it is necessary to calculate the whole dilatation matrix for both physical and evanescent operators up to the two-loop order.
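To make the numerical reconstruction described above more concrete, the extraction of one row of the $Z$ matrix from relations such as \eqref{msbareqa1} amounts to solving a linear system; a minimal sketch (in Python, with the arrays \texttt{F0} and \texttt{rhs} as assumed inputs, and working order by order in $\epsilon$ understood) is:

\begin{verbatim}
import numpy as np

def solve_mixing_row(F0, rhs):
    # F0[a, j]: tree form factors of the basis operators at kinematic point a
    # rhs[a]  : the divergent combination on the right-hand side at point a
    # using more points than operators, the least-squares residual also
    # checks that the divergence is really spanned by the tree form factors
    row, residual, rank, _ = np.linalg.lstsq(F0, rhs, rcond=None)
    assert residual.size == 0 or residual.max() < 1e-10
    return row
\end{verbatim}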
\subsubsection{Finite renormalization scheme}\label{zfin} We give an introduction to the finite renormalization scheme in this subsection. To distinguish from the $\overline{\text{MS}}$ scheme, we use $\hat{Z}$ and $\hat{\mathcal{D}}$ to denote the $Z$ matrix and dilatation matrix in the finite renormalization scheme. The most important feature of the finite renormalization scheme is that the dilatation matrix in this scheme has the following form \cite{Buras:1989xd,Bondi:1989nq,Dugan:1990df}: \begin{align} \left( \begin{array}{cc} \hat{\mathcal{D}}_{\text{pp}} & \hat{\mathcal{D}}_{\text{pe}} \\ 0 & \hat{\mathcal{D}}_{\text{ee}} \end{array} \right), \label{uptriangulargamma} \end{align} to all orders in the perturbation expansion. In the finite renormalization scheme, the renormalization of the physical operators is the same as in the $\overline{\text{MS}}$ scheme, and for the $Z$ matrix we have \begin{align}\label{zppsame} \hat{Z}^{(l)}_{\text{pp}}=Z^{(l)}_{\text{pp}},\qquad \hat{Z}^{(l)}_{\text{pe}}=Z^{(l)}_{\text{pe}}\,. \end{align} The renormalization of the evanescent operators, however, is different. This scheme takes into account the fact that the form factors of an evanescent operator are one order higher in the $\epsilon$ expansion, and that the mixing from evanescent to physical operators at order $\epsilon^{0}$ should also be subtracted. In other words, one will modify the RHS of \eqref{msbareqa1}-\eqref{msbareqa2} by taking into account some ``finite'' part of form factors. Below we first describe how we compute the $Z$ matrix in this scheme, and then we explain how it produces the desired form of the dilatation matrix, \eqref{uptriangulargamma}. Given an evanescent operator $O_i$, we will separate ${\hat{Z}}^{(l)\ j}_{\quad i}$ into two parts: \begin{align}\label{finZ2part} {\hat{Z}}^{(l)\ j}_{\quad i} = {\hat{Z}}^{(l)\ j}_{\quad i}\big|_{\text{div}} + {\hat{Z}}^{(l)\ j}_{\quad i}\big|_{\text{fin}}\,. \end{align} The calculation of the divergent part ${\hat{Z}}^{(l)\ j}_{\quad i}\big|_{\text{div}}$ is similar to that of the $Z$ matrix in the $\overline{\text{MS}}$ scheme, and the calculation of the finite part ${\hat{Z}}^{(l)\ j}_{\quad i}\big|_{\text{fin}}$ will be discussed in detail below. In the following, the index $i$ in ${\hat{Z}}^{(l)\ j}_{\quad i}$ will refer only to the evanescent operators, while $j$ can refer to both physical and evanescent operators. Consider first the one-loop order. We have \begin{align}\label{1loopsame} {\hat{Z}}^{(1)\ j}_{\quad i}\big|_{\text{div}}={Z}^{(1)\ j}_{\quad i}\,, \end{align} where ${Z}^{(1)}$ is the ${\overline{\text{MS}}}$ $Z$ matrix. To compute the finite part ${\hat{Z}}^{(1)\ j}_{\quad i}\big|_{\text{fin}}$, one can consider the one-loop form factors at 4-dimensional numerical points (for example, using the spinor helicity formalism). Since the tree-level form factors of evanescent operators vanish in 4-dimensional spacetime, the one-loop formula \eqref{msbareqa1} is modified as \begin{align} {{\hat{Z}}'^{(1)\ j}_{\quad i}}\mathcal{F}_{j,\text{b}}^{(0),\text{4d}}&=-S_\epsilon^{-1}\mathcal{F}_{i,\text{b}}^{(1),\text{4d}}\big{|}_{\text{divergent and finite parts}}\,,\label{fineqa1} \end{align} and we have \begin{align}\label{zhat} {\hat{Z}}^{(1)\ j}_{\quad i}\big|_{\text{fin}} = \text{finite part of }{\hat{Z}}'^{(1)\ j}_{\quad i}\,. \end{align} The superscript ``4d'' means the form factor is calculated at 4-dimensional numerical points.
It should be clear that $j$ in \eqref{zhat} can only refer to physical operators, thus ${\hat{Z}}^{(1)\ j}_{\quad i}\big|_{\text{fin}}$ only contributes to the block ${\hat Z}_{\text{ep}}^{(1)}$. Next we consider the two-loop order. The calculation of ${\hat{Z}}^{(2)\ j}_{\quad i}\big|_{\text{div}}$ is similar to the $\overline{\text{MS}}$ scheme, except that we need to replace all $Z^{(1)}$ in \eqref{msbareqa2} by $\hat{Z}^{(1)}$. Since $\hat{Z}^{(1)}\neq Z^{(1)}$, at the two-loop order one does not have the simple relation \eqref{1loopsame}. For the finite part, we consider similarly the form factors at 4-dimensional numerical points and calculate ${\hat{Z}'^{(2)\ j}}_{\quad i}$ via the formula: \begin{align} {\hat{Z}'^{(2)\ j}}_{\quad i}\mathcal{F}_{j,\text{b}}^{(0),\text{4d}}&=\left[I^{(1)}(\epsilon)\mathcal{F}_i^{(1),\text{4d}}-S_\epsilon^{-2}\mathcal{F}_{i,\text{b}}^{(2),\text{4d}}-S_\epsilon^{-1}\left({\hat{Z}^{(1)\ j}}_{\quad i}-\left(1+\frac{\delta_n}{2}\right)\frac{\beta_0}{\epsilon}\delta^{\ j}_i\right)\mathcal{F}_{j,\text{b}}^{(1),\text{4d}}\right.\nonumber\\ &\qquad + \left.\frac{\delta_n}{2}\frac{\beta_0}{\epsilon}{\hat{Z}^{(1)\ j}}_{\quad i} \mathcal{F}_{j,\text{b}}^{(0),\text{4d}}\right]\bigg{|}_{\text{divergent and finite parts}}\,, \label{fineqa2} \end{align} and the two-loop finite part is given by \begin{align}\label{zhat2} {\hat{Z}}^{(2)\ j}_{\quad i}\big|_{\text{fin}} = \text{finite part of }{\hat{Z}}'^{(2)\ j}_{\quad i}\,. \end{align} As in the one-loop case, ${\hat{Z}}^{(2)\ j}_{\quad i}\big|_{\text{fin}}$ only contributes to the block $\hat{Z}_{\text{ep}}^{(2)}$. One can also check that the divergent mixing to the physical operators in ${\hat{Z}}'^{(2)\ j}_{\quad i}$ is the same as in ${\hat{Z}}^{(2)\ j}_{\quad i}\big|_{\text{div}}$. From \eqref{gamma2}, one can see that ${\hat{Z}}^{(2)\ j}_{\quad i}\big|_{\text{fin}}$ contributes at $\mathcal{O}(\epsilon)$ to the two-loop dilatation matrix; therefore, it does not contribute to the calculation of the two-loop anomalous dimensions but will begin to contribute at the three-loop order. As mentioned at the beginning of this subsection, the key feature of the finite renormalization scheme is that the dilatation matrix has the form of \eqref{uptriangulargamma}. At the one-loop order, this is straightforward since ${\hat{Z}^{(1)}}_{\text{ep}}$ is finite, thus $\hat{\mathcal{D}}^{(1)}_{\text{ep}}\sim{\cal O}(\epsilon)$. At the two-loop order, the leading divergence of ${\hat{Z}^{(2)}}_{\text{ep}}$ is at ${\cal O}(1/\epsilon)$. The relation \eqref{z2ep2ms} still applies to the leading divergences (which are one order higher in the $\epsilon$-expansion than in usual cases) \cite{tHooft:1972tcz,Dugan:1990df}: \begin{align} {\hat{Z}}_{\text{ep}}^{(2)}|_{\frac{1}{\epsilon}-\text{part}}=\frac{1}{2} \big( {\hat{Z}}_{\text{ep}}^{(1)}{\hat{Z}}_{\text{pp}}^{(1)}+{\hat{Z}}_{\text{ee}}^{(1)}{\hat{Z}}_{\text{ep}}^{(1)} \big) -\frac{\beta_0}{2\epsilon}{\hat{Z}}_{\text{ep}}^{(1)} \,. \label{vansih2loop} \end{align} Using \eqref{vansih2loop} and \eqref{gamma2}, it should then be clear that $\hat{\mathcal{D}}^{(2)}_{\text{ep}}$ is $\mathcal{O}(\epsilon)$ and thus one gets \eqref{uptriangulargamma}. Note that ${\hat{Z}}_{\text{ep}}^{(1)}$, which is finite, is necessary in this cancellation. We check that our explicit two-loop calculations indeed confirm this structure.
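Explicitly, taking the ep block of \eqref{gamma2} and using $\big(\hat{Z}^{(1)}\hat{Z}^{(1)}\big)_{\text{ep}}=\hat{Z}^{(1)}_{\text{ep}}\hat{Z}^{(1)}_{\text{pp}}+\hat{Z}^{(1)}_{\text{ee}}\hat{Z}^{(1)}_{\text{ep}}$, one finds
\begin{align*}
\hat{\mathcal{D}}^{(2)}_{\text{ep}} &= 4\epsilon \hat{Z}^{(2)}_{\text{ep}} - 2\epsilon\big(\hat{Z}^{(1)}_{\text{ep}}\hat{Z}^{(1)}_{\text{pp}}+\hat{Z}^{(1)}_{\text{ee}}\hat{Z}^{(1)}_{\text{ep}}\big) + 2\beta_0 \hat{Z}^{(1)}_{\text{ep}} \\
&= \Big[2\epsilon\big(\hat{Z}^{(1)}_{\text{ep}}\hat{Z}^{(1)}_{\text{pp}}+\hat{Z}^{(1)}_{\text{ee}}\hat{Z}^{(1)}_{\text{ep}}\big)-2\beta_0\hat{Z}^{(1)}_{\text{ep}}+\mathcal{O}(\epsilon)\Big] - 2\epsilon\big(\hat{Z}^{(1)}_{\text{ep}}\hat{Z}^{(1)}_{\text{pp}}+\hat{Z}^{(1)}_{\text{ee}}\hat{Z}^{(1)}_{\text{ep}}\big) + 2\beta_0 \hat{Z}^{(1)}_{\text{ep}} = \mathcal{O}(\epsilon)\,,
\end{align*}
where in the second step the $\frac{1}{\epsilon}$ part of $\hat{Z}^{(2)}_{\text{ep}}$ is replaced using \eqref{vansih2loop}, and its finite part enters only at $\mathcal{O}(\epsilon)$.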
Since the dilatation matrix is block upper triangular as shown in \eqref{uptriangulargamma}, the physical anomalous dimensions are just the eigenvalues of $\hat{\mathcal{D}}_{\text{pp}}$. This does not mean that evanescent operators have no effect on the physical anomalous dimensions. At the two-loop order, the effect of the evanescent operators on $\hat{\mathcal{D}}^{(2)}_{\text{pp}}$ comes from the term $(-2\epsilon {\hat{Z}}^{(1)}_{\text{pe}}{\hat{Z}}^{(1)}_{\text{ep}})$ according to \eqref{gamma2}. Therefore, evanescent operators should be renormalized up to the one-loop order in the calculation of the two-loop physical anomalous dimensions. We point out here that anomalous dimensions are scheme dependent, due to the non-vanishing beta function in the pure YM theory, and therefore, the results in the finite renormalization scheme are different from the ones in the $\overline{\text{MS}}$ scheme. On the other hand, at the conformal fixed point, anomalous dimensions should be independent of the renormalization scheme. A detailed discussion of the scheme dependence of anomalous dimensions will be given in Section~\ref{sec:fixed point}. \section{Calculation of bare form factors}\label{calcbare} In this section, we consider the computation of bare form factors up to the two-loop order. In Section~\ref{allcuts}, we give an overall description of our calculation. In Section~\ref{tensorreduce}, we discuss two methods for integral tensor reduction in detail. \subsection{Unitarity-IBP method}\label{allcuts} The main strategy of our calculation is based on a combination of the unitarity method \cite{Bern:1994cg,Bern:1994zx,Britto:2004nc} and the IBP reduction \cite{Chetyrkin:1981qh,Tkachov:1981wb}. This strategy has been applied to computing form factors (and Higgs amplitudes) in \cite{Jin:2018fak, Jin:2019ile, Jin:2019opr} and for pure gluon amplitudes in \cite{Boels:2017gyc, Boels:2018nrr, Jin:2019nya}. The numerical IBP method by cuts was also studied in \cite{Kosower:2011ty,Larsen:2015ped,Ita:2015tya,Georgoudis:2016wff,Abreu:2017hqn,Abreu:2017xsl}. A loop form factor can be written as a linear combination of a set of IBP master integrals as \begin{align} \mathcal{F}^{(l)}=\sum_i c_i I^{(l)}_i\,, \label{Fdecomposition} \end{align} where the coefficients $c_i$ are rational functions of the external Lorentz invariants and the spacetime dimension $d$. If one imposes a unitarity cut on \eqref{Fdecomposition}, one gets \begin{align} \mathcal{F}^{(l)}|_{\text{cut}}=\sum_{i'} c_{i'} I^{(l)}_{i'}|_{\text{cut}}\,, \label{cutdecomposition} \end{align} where the sum over $i'$ runs over all master integrals which can be detected by the cut, \emph{i.e.} non-vanishing under the cut condition. Below we show how to calculate the coefficients $c_{i'}$ from a single cut. In the end, we need to choose a set of cuts to cover all the masters and then calculate their coefficients. Given a cut, we first compute the cut-integrand of a loop form factor as the product of a tree form factor and tree amplitudes: \begin{align} \mathcal{F}^{(l)}|_{\text{cut}}=\sum_{\text{helicities}} \mathcal{F}^{(0)} \times(\prod_i \mathcal{A}^{(0)}_i)\,, \label{cutF} \end{align} where $\mathcal{F}^{(0)}$ denotes a tree form factor and $\mathcal{A}^{(0)}_i$ denote tree scattering amplitudes.
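For example, for the two-particle cut of a one-loop form factor in the $s_{12}$-channel, \eqref{cutF} takes the schematic form
\begin{align*}
\mathcal{F}^{(1)}\big|_{s_{12}\text{-cut}} = \int \text{d}\text{PS}_{2}\ \sum_{\text{helicities}} \mathcal{F}^{(0)}(l_1,l_2,p_3,\cdots,p_n)\times\mathcal{A}^{(0)}(p_1,p_2,-l_2,-l_1)\,,
\end{align*}
where the integration is over the two-particle phase space of the cut legs $l_1,l_2$, and the helicity sum over the cut legs is performed in $d$ dimensions as described below.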
Since we use $d$-dimensional unitarity cuts, the tree-level results are obtained in terms of Lorentz products of $d$-dimensional momenta $\{p_i, l_a\}$ and polarization vectors $e_i$, and compact formulas for form factors can be found in Appendix F of \cite{Jin:2022ivc}. Our results will be obtained in the CDR scheme, which is valid for states in general $d$ spacetime dimensions.\footnote{ We mention that there exist alternative dimensional reduction schemes where the $4$-dimensional gauge fields may be expressed as $D$-dimensional components plus the $\epsilon$-scalars \cite{Siegel:1979wq, Capper:1979ns, Jack:1993ws, Harlander:2006rj, Nandan:2014oga}. One may also consider 6-dimensional spinor helicity formalism for form factors as in \cite{Huber:2019fea}. The operator renormalization has also been considered for gauge theories in six and eight spacetime dimensions \cite{Gracey:2015xmw, Gracey:2017nly}. For the two-loop renormalization involving fermionic evanescent operators and $\gamma_5$, see also \cite{Buras:1989xd, Schubert:1988ke}. } The sum of helicities is performed for the polarization vectors $e_l$ of internal cut gluon legs, for which we adopt the $d$-dimensional helicity sum: \begin{align} e^\mu_{l} \circ {e^\nu_{l}}^*=\sum_{\text{helicities}} e^\mu_{l} {e^\nu_{l}}^*=\eta^{\mu\nu}-\frac{q^\mu l^\nu+l^\mu q^\nu}{l\cdot q}\,, \label{helsum} \end{align} where $q$ is a light-like reference momentum. After this step, the cut integrand is given as a rational function of Lorentz invariants. In particular, it includes the Lorentz products of the loop momenta and external polarization vectors like $l_j \cdot e_i$ which cannot be expanded in terms of propagators, and thus the IBP reduction cannot be used directly. To eliminate such Lorentz invariants, we multiply back the cut propagators in the cut integrand and then do tensor reduction for Feynman integrals. We adopt two different methods for tensor reduction: 1) the gauge invariant basis projection method, and 2) a hybrid method combining momentum decomposition and the PV reduction. The first method is efficient in the calculations of 2-point and 3-point form factors, while for form factors with more external gluons, the second method is preferable. We will give a detailed description of these two methods in Section~\ref{tensorreduce}. After tensor reduction, we can expand the integrals with a set of chosen propagators which are ready for the IBP reduction (with the cut condition imposed), and in this work we use the package FIRE6 \cite{Smirnov:2019qkx}. After the IBP reduction, the cut form factor is transformed into the desired form as shown in \eqref{cutdecomposition} and one gets $c_{i'}$. Due to the complexity of the expressions, the numerical assignment for external Lorentz invariants is also used during the IBP reduction for the case of length-4 and length-5 operators. To illustrate the above strategy in a concrete setup, below we show all kinds of form factors calculated in this work, as well as all the IBP master integrals and the cuts to cover them. As discussed in Section~\ref{sec:strategy}, we only need to consider the renormalization matrices $Z^{(1)}_{L\to L}$, $Z^{(1)}_{L\to L+1}$, $Z^{(2)}_{L\to L-1}$ and $Z^{(2)}_{L\to L}$.
Consequently, the necessary form factors are \begin{align} &\mathcal{F}^{(1)}_{L\to L},\ L=2,3,4,5;\qquad \mathcal{F}^{(1)}_{L\to L+1},\ L=2,3,4;\\ &\mathcal{F}^{(2)}_{L\to L},\ L=2,3,4,5;\qquad \mathcal{F}^{(2)}_{L\to L-1},\ L=3,4,5\,, \end{align} where $\mathcal{F}_{L\to n}$ represents an $n$-point form factor of a length-$L$ operator. For all these form factors, the one-loop and two-loop master integrals are shown in Figure~\ref{1loop_mi} and Figure~\ref{2loop_mi}. We present all the corresponding cuts in Figure~\ref{1loop_cuts} and Figure~\ref{2loop_cuts}. Notice that all external legs outside the loop are included in the double line. Since the calculation is in the large $N_c$ limit, all the cuts are planar and all the tree blocks are color-ordered. The relations between the form factors, master integrals and the cuts are as follows. \begin{itemize} \item The only master for one-loop minimal form factors is $(a)$ in Figure~\ref{1loop_mi}, detected by the cut $(a)$ in Figure~\ref{1loop_cuts}. \item The masters for one-loop next-to-minimal form factors are $(a)\sim(c)$ in Figure~\ref{1loop_mi}, detected by the cuts $(a)$ and $(b)$ in Figure~\ref{1loop_cuts} respectively. \item A two-loop sub-minimal form factor only has the master $(a)$ in Figure~\ref{2loop_mi}, detected by the cut $(a)$ in Figure~\ref{2loop_cuts}. \item A two-loop $2\to2$ form factor includes the masters $(a)\sim(f)$ in Figure~\ref{2loop_mi}. The detecting cuts are $(a)\sim(e)$ in Figure~\ref{2loop_cuts}. Note that since the local operator is a color singlet, the non-planar masters $(c)$, $(e)$ and $(f)$ in Figure~\ref{2loop_mi} are of leading $N_c$ order. \item A two-loop $3\to3$ form factor includes the masters $(a),\ (b),\ (d),\ (g) \sim (j)$ in Figure~\ref{2loop_mi}. The flipped versions of $(h)$ and $(i)$, which we do not draw, are also included in the masters. The cuts are $(a),\ (b),\ (d)$ and $(f)$ in Figure~\ref{2loop_cuts}. \item All two-loop minimal form factors with more than 3 external legs have all the masters of a two-loop $3\to 3$ form factor and the master $(k)$ in Figure~\ref{2loop_mi}. Accordingly, one needs one more cut, which is the cut $(g)$ in Figure~\ref{2loop_cuts}. \end{itemize} It is worth noting that the coefficients of some master integrals can be calculated via different cuts, \emph{e.g.} the master integral $(j)$ in Figure~\ref{2loop_mi} can be detected by the cuts $(a)$ and $(f)$ in Figure~\ref{2loop_cuts}. In that case, the coefficients of the master integral calculated from different cuts must be the same. All two-loop master integrals can be found in \cite{Gehrmann:2000zt,Gehrmann:2001ck}. \begin{figure}[t] \centering \includegraphics[scale=0.5]{all_1loop.eps} \caption{The one-loop masters.} \label{1loop_mi} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.5]{1loop_cuts.eps} \caption{The one-loop cuts.} \label{1loop_cuts} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.5]{2loop_mis.eps} \caption{The two-loop masters. The flipped versions of $(h)$ and $(i)$ are also included, which we do not draw.} \label{2loop_mi} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.5]{2loop_cuts.eps} \caption{The two-loop cuts.} \label{2loop_cuts} \end{figure} \subsection{Two methods for tensor reduction}\label{tensorreduce} In this subsection, we give a detailed description of two methods for tensor reduction. We first introduce the gauge-invariant basis projection (see \emph{e.g.} \cite{Gehrmann:2011aa,Boels:2018nrr}) in Section~\ref{gibp}.
This method takes advantage of the gauge invariance of the cut form factors and is powerful for the form factors with a small number of external legs. Then in Section~\ref{PV}, we introduce a hybrid method, including loop momentum decomposition and the PV reduction \cite{Passarino:1978jh,Kreimer:1991wj}. The second method is efficient for the calculation of the two-loop high-point form factors. \subsubsection{Gauge invariant basis projection}\label{gibp} In this section, we introduce the gauge invariant basis projection method. Taking into account the gauge invariance of the cut form factor, one has the following ansatz \begin{align} \mathcal{F}|_{\text{cut}}=\sum_{i}f_i(p,l) B_i(p,e)\,, \label{giansatz} \end{align} with $\{B_i\}$ a complete set of gauge-invariant basis elements involving only the external momenta and the corresponding polarization vectors. The coefficients $f_i$ are functions of loop momenta and external momenta. For an $n$-point form factor, the basis can be constructed via the gauge invariant building blocks \cite{Boels:2018nrr} \begin{align} A_{i;j}&=e_i\cdot p_{j+1}\ p_i\cdot p_{j+2}-e_i\cdot p_{j+2}\ p_i\cdot p_{j+1},&i\leq j\leq i+n-3\,,\\ C_{i,j}&=e_i\cdot e_j\ p_i\cdot p_j-e_i\cdot p_j\ e_j\cdot p_i\,,&i,j=1,\cdots,n\text{ and }i\neq j\,, \label{AC} \end{align} where the cyclic convention $i+n=i$ is adopted. For example, the gauge-invariant basis of 3-point form factors reads \begin{align} A_{1;1}A_{2;2}A_{3;3},\ A_{1;1}C_{2,3},\ A_{2;2}C_{1,3},\ A_{3;3}C_{1,2}\,. \end{align} Define the inner product \begin{align} B_1\circ B_2 =\sum_{\text{helicities}} B_1 B_2\,, \end{align} where the expression for the sum of helicities is given in \eqref{helsum}. One can construct the dual basis $\{B^i\}$ as \begin{align} B^i=(G_B^{-1})^{ij}B_j\,, \label{dualB} \end{align} where $G_B$ is the inner product matrix of $\{B_i\}$: \begin{align} (G_B)_{ij}=B_i\circ B_j\,. \end{align} It is straightforward to verify that $B^i\circ B_j=\delta^{\ i}_j$. The $f_i$ in \eqref{giansatz} can be calculated by \begin{align} f_i=B^i\circ \mathcal{F}|_{\text{cut}}\,. \end{align} In general, the matrix $G_B$ may be complicated and it may be hard to calculate the inverse. One can refer to \cite{Boels:2018nrr} for a modified strategy, which does the projection blockwise. To count the number of basis elements, we first note that the gauge-invariant basis elements can be classified into $[\frac{n}{2}]+1$ classes as \begin{align} A^n,A^{n-2}C, A^{n-4}C^2\,,\cdots, A^{n-2k}C^{k},\cdots\,. \label{theACs} \end{align} Together with the counting of $A$ and $C$, one can derive the number of basis elements for an $n$-point form factor: \begin{align} \sum_{i=0}^{[\frac{n}{2}]}(n-2)^{n-2i}\frac{\prod_{j=0}^{i-1}\binom{n-2j}{2}}{i!}\,, \label{numofbases} \end{align} where $\binom{n}{m}$ is the binomial coefficient. The counting is sensitive to the number of external legs. The numbers of basis elements are $4$, $43$ and $558$ respectively for 3-point, 4-point and 5-point form factors. The large number of basis elements makes the projection method impractical for high-point form factors. In our calculation, the projection method is used for 2-point and 3-point form factors. \subsubsection{Loop momentum decomposition and PV reduction}\label{PV} In this subsection, we describe a hybrid method for tensor reduction, which combines the loop momentum decomposition and the PV reduction.
Our problem is to perform tensor reduction for integrals of the form \begin{align}\label{PVproblem} \int [\text{d}l]\frac{\prod_k l_{i_k}^{\mu_k}}{\prod_{j}\text{D}_j}\,, \end{align} where $[\text{d}l]=\prod_{i}\frac{\text{d}^dl_i}{(2\pi)^d}$ and the $\text{D}_j$ denote propagators. We first decompose a loop momentum $l_i^{\mu}$ as \begin{align} l_{i}^\mu=\sum_{k}c_{i,k}\ p_k^\mu +l_{i,\perp}^\mu\,, \label{l decompse} \end{align} where the $p_i$'s are the external momenta in the propagators and $c_{i,k}$ is a rational function of $l_i\cdot p_k$ and $p_j\cdot p_k$. The $l_{i,\perp}^\mu$ is defined to be perpendicular to all external momenta. After the decomposition \eqref{l decompse}, we have a sum of integrals, each term of which has the form \begin{align}\label{PVstep2} \int [\text{d}l]\frac{X}{\prod_{j}\text{D}_j}\prod_{k'} l_{i_{k'},\perp}^{\mu_{k'}}\,, \end{align} where $X$ is a rational function of $l_i\cdot p$, $p\cdot p$ and $p^\mu$. We then perform the PV reduction for the tensor integral \eqref{PVstep2}, which is transverse to all the external momenta. In this case, the only building block is the transverse metric $\eta_\perp^{\mu\nu}$ (see \emph{e.g.} \cite{Kreimer:1991wj,Henn:2014yza}), which is symmetric and has the following properties: \begin{align} {p_i}_\mu \eta_\perp^{\mu\nu}=0,\ {l_{i_m,\perp}}_\mu \eta_\perp^{\mu\nu}=l_{i_m,\perp}^\nu\,. \end{align} Then the PV reduction reads \begin{align} \int [\text{d}l]\frac{X}{\prod_{j}\text{D}_j}\prod_{k'}^n l_{i_{k'},\perp}^{\mu_{k'}}=\begin{cases} 0\,,& n\text{ odd}\\ \sum_\sigma \left[\int [\text{d}l]\frac{X}{\prod_{j}\text{D}_j}y_\sigma(l_\perp\cdot l_\perp,d)\right] \eta_\perp^{\mu_{\sigma_1}\mu_{\sigma_2}}\cdots\eta_\perp^{\mu_{\sigma_{n-1}}\mu_{\sigma_n}}\,,& n\text{ even} \end{cases}\,, \label{tensor decomposition} \end{align} where the sum over $\sigma$ runs over all the inequivalent permutations of $\mu_1\cdots\mu_n$. One can see that $\int [\text{d}l]\frac{X}{\prod_{j}\text{D}_j}$ can be treated as an overall factor, and the coefficients $y_\sigma$, which are functions of $d$ and $l_\perp\cdot l_\perp$, can be calculated by contracting \eqref{tensor decomposition} with $\eta_\perp^{\mu\nu}$ and using the formula $\eta_\perp^{\mu\nu}\eta_{\perp\mu\nu}=d-m$, with $m$ the number of external momenta. Finally, we substitute \begin{align} l_{i,\perp}\cdot l_{j,\perp}=l_i\cdot l_j-\sum_{k,s}^{m}c_{i,k}c_{j,s}\ p_k\cdot p_s \label{lperpsquare} \end{align} for all $l_\perp\cdot l_\perp$ in the $y_\sigma$'s. The above relation can be derived by contracting $l_i$ with $l_j$ and then applying \eqref{l decompse}. In this way, we complete the tensor reduction for \eqref{PVproblem}, and the resulting expression is ready for the IBP reduction. We summarize the above reduction in the following three steps: \begin{enumerate} \item We first perform the momentum decomposition \eqref{l decompse}. \item Then we perform the PV reduction according to \eqref{tensor decomposition}. \item Finally we substitute \eqref{lperpsquare} for all $l_\perp\cdot l_\perp$. \end{enumerate} As a remark, the loop momentum decomposition and the PV reduction involve only the external momenta appearing in the denominators. In other words, the calculation is not sensitive to the number of external legs outside the loops. One can see from Figure~\ref{1loop_cuts} and Figure~\ref{2loop_cuts} that for our calculation, the maximum number of external momenta is three.
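To make step 2 concrete, the following sketch (again illustrative, not our actual implementation) solves for the coefficients $y_\sigma$ in the rank-2 and rank-4 cases with a single loop momentum, by contracting \eqref{tensor decomposition} with $\eta_\perp^{\mu\nu}$ exactly as described above; the symbol \texttt{lp2} stands for $l_\perp\cdot l_\perp$.
\begin{verbatim}
# Solve for the PV coefficients y_sigma by contraction, using
# eta_perp^{mu nu} eta_{perp mu nu} = d - m.
import sympy as sp

d, m, lp2, y = sp.symbols('d m lp2 y')
D = d - m   # trace of the transverse metric

# rank 2: <l_perp^mu l_perp^nu> = y * eta_perp^{mu nu};
# contracting both sides gives lp2 = y*D
print(sp.solve(sp.Eq(lp2, y*D), y)[0])   # -> lp2/(d - m)

# rank 4: the three eta-pair structures share one coefficient y;
# contracting (mu1 mu2) and (mu3 mu4) gives lp2**2 = y*(D**2 + 2*D)
print(sp.factor(sp.solve(sp.Eq(lp2**2, y*(D**2 + 2*D)), y)[0]))
# -> lp2**2/((d - m)*(d - m + 2))
\end{verbatim}
For higher ranks and several loop momenta, the same contractions produce a small linear system for the $y_\sigma$'s.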
Because the reduction involves only the external momenta in the denominators, the method turns out to be efficient for the two-loop calculation of the length-4 and length-5 operators in this work, and it can be straightforwardly applied to operators of higher lengths. \section{Anomalous dimensions of the dimension-10 operators}\label{sec:result} In this section, we present the results of the renormalization matrices and the anomalous dimensions for the dimension-10 operators. In Section~\ref{getZ}, we give the results in the $\overline{\text{MS}}$ scheme and the finite renormalization scheme, up to the two-loop order. In Section~\ref{sec:fixed point}, we present the anomalous dimensions at the Wilson-Fisher conformal fixed point up to the next-to-leading order, which are independent of the renormalization scheme. \subsection{The dimension-10 $Z$ matrix and anomalous dimensions}\label{getZ} Our results include the $Z^{(1)}_{L\to L}$, $Z^{(1)}_{L\to L+1}$, $Z^{(2)}_{L\to L-1}$ and $Z^{(2)}_{L\to L}$ blocks of the $Z$ matrix and the anomalous dimensions up to the two-loop order.\footnote{One will notice that $Z^{(1)}_{2\to 3}$ is not presented in the following. This is because the only length-2 operator is a total derivative of $F^2$, so all $Z_{2\to n}$ with $n>2$ vanish.} We present the results in the $\overline{\text{MS}}$ scheme in Section~\ref{2loopadms}, and then the results in the finite renormalization scheme in Section~\ref{2loopad}. One can refer to \eqref{Zstructure}$\sim$\eqref{Zee} for our arrangement of the $Z$ matrix. \subsubsection{The $\overline{\text{MS}}$ scheme}\label{2loopadms} In this section, we present the results in the $\overline{\text{MS}}$ scheme. We first review the one-loop $Z$ matrix and the one-loop anomalous dimensions $\gamma^{(1)}$, which were computed in \cite{Jin:2022ivc}.\footnote{The one-loop renormalization of dimension-6 and dimension-8 YM operators was given in \cite{Gracey:2002he,Morozov:1984goy, Neill:2009tn, Harlander:2013oja, Dawson:2014ora}. The two-loop renormalization of dimension-6 operators was considered in \cite{Jin:2018fak, Jin:2019opr}, and the two-loop renormalization of length-3 operators up to dimension 16 was obtained in \cite{Jin:2020pwh}.} Then we present our results for the two-loop physical anomalous dimensions $\gamma^{(2)}$. Below is the one-loop result.
The blocks in $Z^{\text{even},(1)}_{\text{pp}}$ are \begin{align} &Z^{\text{even},(1)}_{\text{pp},2\to 2}=\frac{N_c}{\epsilon} \left( \begin{array}{c} -\frac{11}{3 } \\ \end{array} \right)\,, \quad Z^{\text{even},(1)}_{\text{pp},3\to 3}=\frac{N_c}{\epsilon} \left( \begin{array}{cccc} 3 & 0 & 0 & 0 \\ -\frac{3}{5} & \frac{21}{5} & 0 & 0 \\ 0 & 0 & \frac{7}{3} & 0 \\ 0 & 0 & -1 & \frac{14}{3} \\ \end{array} \right)\,,\label{z1begin}\\ &Z^{\text{even},(1)}_{\text{pp},3\to 4}=\frac{N_c}{\epsilon} \left( \begin{array}{ccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{10}{3} & \frac{2}{3 } & 0 & 0 & 0 & -\frac{53}{72 } & -\frac{43}{96 } & 0 & 0 & \frac{7}{8 } & -\frac{1}{8 } & \frac{1}{12 } & 0 & 0 & 0 \\ \end{array} \right)\,,\\ &Z^{\text{even},(1)}_{\text{pp},4\to 4}=\frac{N_c}{\epsilon} \left( \begin{array}{ccccccccccccccc} 0 & \frac{5}{12} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{16}{3} & \frac{17}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{16}{3} & -\frac{5}{12} & \frac{16}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{6} & \frac{2}{3} & 8 & -\frac{2}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{8}{3} & -\frac{5}{12} & \frac{2}{3} & -\frac{10}{3} & \frac{14}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 5 & -\frac{3}{4} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\frac{5}{3} & \frac{9}{4} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\frac{5}{24} & -\frac{3}{8} & \frac{21}{4} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{6} & \frac{1}{2} & -1 & 6 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{14}{3} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 & -\frac{1}{2} & \frac{9}{2} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{15} & \frac{2}{5} & \frac{1}{60} & \frac{112}{15} & -\frac{1}{6} & \frac{1}{15} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{26}{15} & \frac{3}{20} & -\frac{1}{15} & -\frac{1}{5} & \frac{31}{6} & -\frac{4}{15} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{64}{15} & \frac{2}{5} & \frac{14}{15} & \frac{14}{5} & -\frac{28}{3} & \frac{67}{30} \\ \end{array} \right)\,,\label{pp44}\\ &Z^{\text{even},(1)}_{\text{pp},4\to 5}=\frac{N_c}{\epsilon} \left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -\frac{17}{3} & 8 & 0 & 0 \\ -\frac{26}{3} & 10 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -\frac{46}{9} & \frac{22}{3} & \frac{2}{3} & \frac{4}{9} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & -\frac{1}{5} & -\frac{1}{15} \\ 0 & 0 & -\frac{6}{5} & -\frac{7}{30} \\ 0 & 0 & -\frac{197}{10} & -\frac{169}{60} \\ \end{array} \right)\,, \quad Z^{\text{even},(1)}_{\text{pp},5\to 5}=\frac{N_c}{\epsilon} \left( \begin{array}{cccc} -\frac{11}{3} & 10 & 0 & 0 \\ -9 & \frac{49}{3} & 0 & 0 \\ 0 & 0 & \frac{9}{2} & \frac{1}{4} \\ 0 & 0 & 1 & \frac{37}{6} \\ \end{array} \right)\,. \end{align} One can see that at the one-loop order, there is no mixing between different helicity sectors. 
The blocks in $Z_{\text{pp}}^{\text{odd},(1)}$ are \begin{align} &Z^\text{odd,(1)}_{\text{pp},3\to 3}=\frac{N_c}{\epsilon} \left( \begin{array}{c} 4 \\ \end{array} \right)\,, Z^\text{odd,(1)}_{\text{pp},3\to 4}=\frac{N_c}{\epsilon} \left( \begin{array}{ccccc} 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\,, Z^\text{odd,(1)}_{\text{pp},4\to 4}=\frac{N_c}{\epsilon} \left( \begin{array}{ccccc} \frac{16}{3} & 0 & 0 & 0 & 0 \\ 0 & \frac{17}{4} & 0 & 0 & 0 \\ 0 & -1 & \frac{25}{4} & 0 & 0 \\ 0 & 0 & 0 & \frac{37}{10} & -\frac{1}{5} \\ 0 & 0 & 0 & -\frac{3}{10} & \frac{82}{15} \\ \end{array} \right)\,. \end{align} The blocks in $Z_{\text{pe}}^{\text{even},(1)}$ are \begin{align} &Z^{\text{even},(1)}_{\text{pe},3\to 4}=\frac{N_c}{\epsilon} \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \frac{17}{48} & -\frac{5}{192} & 0 \\ \end{array} \right)\,, &&Z^{\text{even},(1)}_{\text{pe},4\to 4}=\frac{N_c}{\epsilon} \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -\frac{1}{3} & -\frac{1}{4} & 0 \\ -\frac{41}{72} & -\frac{23}{36} & -\frac{7}{12} \\ \frac{19}{72} & \frac{13}{36} & \frac{5}{12} \\ -\frac{1}{18} & \frac{13}{72} & 0 \\ -\frac{1}{54} & \frac{49}{216} & 0 \\ -\frac{143}{864} & -\frac{41}{432} & 0 \\ \frac{43}{216} & \frac{7}{27} & \frac{1}{4} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ -\frac{19}{18} & -\frac{29}{72} & 0 \\ -\frac{1}{240} & -\frac{1}{80} & -\frac{1}{60} \\ -\frac{1}{240} & \frac{1}{20} & \frac{1}{40} \\ -\frac{27}{80} & -\frac{11}{80} & -\frac{19}{40} \\ \end{array} \right)\,,\\ &Z^{\text{even},(1)}_{\text{pe},4\to 5}=\frac{N_c}{\epsilon} \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ \frac{65}{18} & \frac{31}{18} \\ -\frac{11}{18} & -\frac{23}{9} \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ \frac{10}{27} & -\frac{46}{27} \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ -\frac{11}{20} & \frac{1}{10} \\ \frac{281}{180} & -\frac{1}{90} \\ \frac{277}{60} & -\frac{63}{20} \\ \end{array} \right)\,, &&Z^{\text{even},(1)}_{\text{pe},5\to 5}=\frac{N_c}{\epsilon} \left( \begin{array}{cc} \frac{25}{18} & -\frac{55}{18} \\ \frac{10}{9} & -\frac{19}{9} \\ \frac{1}{6} & -\frac{5}{6} \\ -\frac{2}{9} & -\frac{4}{9} \\ \end{array} \right)\,. \end{align} The blocks in $Z_{\text{pe}}^{\text{odd},(1)}$ are \begin{align} Z^\text{odd,(1)}_{\text{pe},3\to 4}=\frac{N_c}{\epsilon} \left( \begin{array}{c} 0 \\ \end{array} \right)\,, Z^\text{odd,(1)}_{\text{pe},4\to 4}=\frac{N_c}{\epsilon} \left( \begin{array}{c} \frac{1}{12} \\ 0 \\ -\frac{1}{16} \\ \frac{19}{120} \\ \frac{13}{40} \\ \end{array} \right)\,. \end{align} The blocks in $Z_{\text{ee}}^{\text{even},(1)}$ are \begin{align} Z^{\text{even},(1)}_{\text{ee},4\to 4}=\frac{N_c}{\epsilon} \left( \begin{array}{ccc} 3 & -\frac{8}{3} & 0 \\ -\frac{10}{3} & \frac{14}{3} & 0 \\ 2 & 1 & \frac{19}{3} \\ \end{array} \right)\,, Z^{\text{even},(1)}_{\text{ee},4\to 5}=\frac{N_c}{\epsilon} \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \\ -\frac{112}{9} & -\frac{20}{9} \\ \end{array} \right)\,, Z^{\text{even},(1)}_{\text{ee},5\to 5}=\frac{N_c}{\epsilon} \left( \begin{array}{cc} \frac{32}{9} & -\frac{26}{9} \\ -\frac{55}{9} & \frac{52}{9} \\ \end{array} \right)\,. \end{align} The only block in $Z_{\text{ee}}^{\text{odd},(1)}$ is \begin{align} Z^\text{odd,(1)}_{\text{ee},5\to 5}=\frac{N_c}{\epsilon} \left( \begin{array}{c} 5 \\ \end{array} \right). \label{zee5} \end{align} The block $Z^{(1)}_{\text{ep}}$ is zero in the $\overline{\text{MS}}$ scheme, which can be understood from the fact that the one-loop four-dimensional cut of an evanescent operator vanishes.
In each $Z_{L\to L}$ block, the $Z$ matrix turns out to be block upper triangular within each helicity sector. This is because our operator basis enumerates all the possible total-derivative operators, which, being intrinsically lower-dimension operators, do not mix into higher-dimension ones. One can refer to Appendix~\ref{all dim-10} for a more detailed discussion of total derivative operators. According to \eqref{gamma1}, one gets $\mathcal{D}^{(1)}$. Since $\mathcal{D}^{(1)}_{\text{ep}}$ vanishes, the physical and evanescent anomalous dimensions are simply the eigenvalues of $\mathcal{D}^{(1)}_{\text{pp}}$ and $\mathcal{D}^{(1)}_{\text{ee}}$ respectively. Within each block, there is no mixing between the C-even and C-odd operators. Besides, each C-parity block is upper triangular according to the length. Therefore, the anomalous dimensions can be further classified according to C-parity and length. The one-loop anomalous dimensions are given as follows, where each anomalous dimension includes an implicit factor $N_c$: \begin{align} &\gamma_{\text{p,length-2}}^{\text{even},(1)}:-\frac{22}{3}\,,\label{p2eveng1}\\ &\gamma_{\text{p,length-3}}^{\text{even},(1)}:\frac{14}{3},6,\frac{42}{5},\frac{28}{3}\,,\\ &\gamma_{\text{p,length-3}}^{\text{odd},(1)}:\ 8\,,\\ &\gamma_{\text{p,length-4}}^{\text{even},(1)}: 9,\frac{21}{2},\frac{32}{3},12,\frac{1}{3} \left(17\pm 3 \sqrt{41}\right),\frac{1}{6} \left(31\pm \sqrt{697}\right),\frac{1}{4} \left(29\pm \sqrt{201}\right),\frac{2}{3} \left(19\pm 3 \sqrt{5}\right),\nonumber\\ &\qquad\qquad\qquad\quad x_1, x_2,x_3\,, \\ &\gamma_{\text{p,length-4}}^{\text{odd},(1)}:\ \frac{22}{3},\frac{17}{2},\frac{32}{3},11,\frac{25}{2}\,,\\ &\gamma_{\text{p,length-5}}^{\text{even},(1)}:\frac{2}{3} \left(19\pm 3 \sqrt{10}\right),\frac{1}{3} \left(32\pm \sqrt{34}\right)\,,\\ &\gamma_{\text{e,length-4}}^{\text{even},(1)}:\frac{1}{3}\left(23\pm\sqrt{345}\right),\frac{38}{3}\,,\\ &\gamma_{\text{e,length-5}}^{\text{even},(1)}:\frac{2}{3} \left(14\pm\sqrt{170}\right)\,,\\ &\gamma_{\text{e,length-4}}^{\text{odd},(1)}:10\,.\label{e4oddg1} \end{align} The $x_i$ are the solutions of the equation \begin{align} x^3-\frac{446 x^2}{15}+\frac{769 x}{3}-\frac{8014}{15}=0\,. \end{align} Their numerical values with $x_1<x_2<x_3$ are \begin{align} x_1=3.0565 \,,\quad\ x_2=11.573 \,,\quad\ x_3=15.104 \,. \end{align} The subscript ``p'' (``e'') stands for ``physical'' (``evanescent''). At the two-loop order, the $Z$ matrix is calculated as described in Section~\ref{zms}. The dilatation matrix can then be obtained according to \eqref{gamma2}. The results are given in the auxiliary file. As an example, we also present the block $Z^{(2)}_{\text{pp}}$ in Appendix~\ref{zppresult}.
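Before turning to the two-loop corrections, we note a simple cross-check that can be performed on the one-loop data above: the eigenvalues of the residue of $Z^{\text{even},(1)}_{\text{pp},3\to 3}$, multiplied by the overall factor $2$ implied by \eqref{gamma1} (here we simply read this factor off by comparison), reproduce $\gamma_{\text{p,length-3}}^{\text{even},(1)}$. A minimal sketch:
\begin{verbatim}
# Eigenvalues of the 1/eps residue of Z^{even,(1)}_{pp,3->3},
# times 2, versus the one-loop length-3 C-even list above.
import sympy as sp

Z33 = sp.Matrix([[3, 0, 0, 0],
                 [sp.Rational(-3, 5), sp.Rational(21, 5), 0, 0],
                 [0, 0, sp.Rational(7, 3), 0],
                 [0, 0, -1, sp.Rational(14, 3)]])
print(sorted(2*ev for ev in Z33.eigenvals()))
# -> [14/3, 6, 42/5, 28/3]  (in units of N_c)
\end{verbatim}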
Below are the two-loop corrections to the one-loop anomalous dimensions shown in \eqref{p2eveng1}$\sim$\eqref{e4oddg1}, where each anomalous dimension includes an implicit factor $N_c^2$: \begin{align} &\gamma_{\text{p,length-2}}^{\text{even},(2)}:-\frac{136}{3}\,,\label{l2eveng1}\\ &\gamma_{\text{p,length-3}}^{\text{even},(2)}:\frac{59}{3},\frac{439}{18},\frac{7121}{250},\frac{149525}{3996}\,,\\ &\gamma_{\text{p,length-3}}^{\text{odd},(2)}:\ \frac{206}{9}\,,\label{l3MSodd}\\ &\gamma_{\text{p,length-4}}^{\text{even},(2)}: \frac{1308521}{35532},\frac{12319}{288},\frac{815}{18},\frac{415}{18},\frac{37679\pm 2651 \sqrt{41}}{1476},\frac{2 \left(179129\pm 2352 \sqrt{697} \right)}{18819},\nonumber\\ &\qquad\qquad\qquad\quad \frac{29 \left(4108755061 \sqrt{201}\pm 115875887553\right)}{112160431296},\frac{1}{54} \left(3100\pm 103 \sqrt{5}\right),y_1, y_2,y_3\,, \\ &\gamma_{\text{p,length-4}}^{\text{odd},(2)}:\ \frac{32885}{1188},\frac{3125}{96},\frac{107}{2},\frac{75421}{1188},\frac{64211}{1440}\,,\\ &\gamma_{\text{p,length-5}}^{\text{even},(2)}:\frac{376249\pm 78535 \sqrt{10}}{8604},\frac{108341113246123 \sqrt{34}\pm 4211644375821510}{113297323414176}\,,\label{l5eveng1}\\ &\gamma_{\text{e,length-4}}^{\text{even},(2)}:\frac{\left(6442724032485\pm11542242689 \sqrt{345}\right)}{213976901880},\frac{4755559}{75255}\,,\\ &\gamma_{\text{e,length-5}}^{\text{even},(2)}:\frac{\left(3977690861205\pm50021112896 \sqrt{170}\right)}{114158809836}\,,\\ &\gamma_{\text{e,length-4}}^{\text{odd},(2)}:\frac{3079}{540}\,.\label{e4oddg2} \end{align} The $y_i$ are the solutions of the equation \begin{align} y^3-\frac{44053970579731 y^2}{334691552250}&+\frac{4335623758063848120847262203 y}{800852671362744392040000}\nonumber\\ &-\frac{12858742227506943574716057437659}{194607199141146887265720000}=0\,.\label{yms} \end{align} Their numerical values with $y_1<y_2<y_3$ are \begin{align} y_1=22.029\,,\quad\ y_2=52.952\,,\quad\ y_3=56.644\,. \end{align} An observation is that almost all the anomalous dimensions are positive, with only two exceptions. One is the anomalous dimension of $\text{tr}(F^2)$, which can be expressed through the $\beta$ function~\cite{Spiridonov:1988md} and is negative at both the one-loop and the two-loop order. The other is $\frac{1}{3} \left(17-3 \sqrt{41}\right)$, one of the $\gamma_{\text{p,length-4}}^{\text{even},(1)}$ (its two-loop correction, however, is positive). It would be interesting to understand better the signs of the anomalous dimensions. \subsubsection{The finite renormalization scheme}\label{2loopad} In this section, we present the results in the finite renormalization scheme. Following Section~\ref{zfin}, we use $\hat{Z}$ to denote the $Z$ matrix and $\hat{\gamma}$ to denote the anomalous dimensions in the finite renormalization scheme. The one-loop $Z$ matrix includes four blocks \begin{align} \left( \begin{array}{cc} \hat{Z}_{\text{pp}}^{(1)} & \hat{Z}_{\text{pe}}^{(1)} \\ \hat{Z}_{\text{ep}}^{(1)} & \hat{Z}_{\text{ee}}^{(1)} \end{array} \right)\,. \end{align} The blocks $\hat{Z}_{\text{pp}}^{(1)}$, $\hat{Z}_{\text{pe}}^{(1)}$ and $\hat{Z}_{\text{ee}}^{(1)}$ are the same as the ones in the $\overline{\text{MS}}$ scheme. The only difference is the block $\hat{Z}_{\text{ep}}^{(1)}$, which is finite in this scheme.
The blocks in $\hat{Z}^{\text{even},(1)}_{\text{ep}}$ read \begin{align} &\hat{Z}^{\text{even},(1)}_{\text{ep},4\to 4}=N_c \left( \begin{array}{ccccccccccccccc} \frac{16}{3} & -\frac{2}{3} & 0 & 0 & 0 & \frac{10}{3} & -2 & 0 & 0 & \frac{16}{3} & -\frac{14}{3} & \frac{14}{3} & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & -\frac{1}{3} & 0 & 0 & 0 & -\frac{2}{3} & \frac{7}{3} & -\frac{1}{3} & 0 & 0 & 0 \\ 0 & -\frac{2}{3} & 0 & -\frac{4}{3} & \frac{4}{3} & \frac{2}{9} & -\frac{3}{2} & \frac{10}{3} & -\frac{14}{3} & \frac{26}{9} & 0 & \frac{1}{9} & \frac{28}{3} & -\frac{58}{9} & -\frac{14}{9} \\ \end{array} \right)\,,\\ &\hat{Z}^{\text{even},(1)}_{\text{ep},4\to 5}=N_c \left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -\frac{182}{9} & \frac{92}{3} & -\frac{50}{3} & -4 \\ \end{array} \right)\,, \hat{Z}^{\text{even},(1)}_{\text{ep},5\to 5}=N_c \left( \begin{array}{cccc} \frac{14}{3} & -\frac{17}{3} & \frac{10}{3} & -\frac{1}{2} \\ -\frac{35}{6} & \frac{25}{3} & -\frac{5}{3} & \frac{5}{4} \\ \end{array} \right)\,. \end{align} The only block in $\hat{Z}^{\text{odd},(1)}_{\text{ep}}$ reads \begin{align} \hat{Z}^{\text{odd},(1)}_{\text{ep},4\to 4}=N_c \left( \begin{array}{ccccc} -\frac{8}{3} & 0 & -3 & \frac{4}{3} & -\frac{14}{3} \\ \end{array} \right)\,. \end{align} Since the difference between $\hat{Z}^{(1)}$ and ${Z}^{(1)}$ is finite, the difference between the corresponding dilatation matrices is of order $\epsilon$ according to \eqref{gamma1}. Therefore, the one-loop anomalous dimensions, which are finite, are the same in the two schemes and coincide with \eqref{p2eveng1}$\sim$\eqref{e4oddg1}. The two-loop $Z$ matrix can be calculated as described in Section~\ref{zfin}. The blocks $\hat{Z}^{(2)}_{\text{pp}}$ and $\hat{Z}^{(2)}_{\text{pe}}$ are the same as the ones in the $\overline{\text{MS}}$ scheme, while $\hat{Z}^{(2)}_{\text{ep}}$ and $\hat{Z}^{(2)}_{\text{ee}}$ are different due to the contribution of the finite $\hat{Z}^{(1)}_{\text{ep}}$. In this scheme, the dilatation matrix is block upper triangular, as shown in \eqref{uptriangulargamma}. As discussed in Section~\ref{zfin}, this does not mean that the evanescent operators are irrelevant to the physical anomalous dimensions, since $\hat{\mathcal{D}}^{(2)}_{\text{pp}}$ receives a contribution from the term $(-2\epsilon {\hat{Z}}^{(1)}_{\text{pe}}{\hat{Z}}^{(1)}_{\text{ep}})$. Note that if one only needs the physical anomalous dimensions, the only two-loop block required is $\hat{Z}^{(2)}_{\text{pp}}$, whose expression can be found in Appendix~\ref{zppresult}. The two-loop $Z$ matrix is given in the auxiliary file.
Below are the two-loop anomalous dimensions, where each anomalous dimension includes an implicit factor $N_c^2$: \begin{align} &\hat{\gamma}_{\text{p,length-2}}^{\text{even},(2)}:-\frac{136}{3}\,,\label{l2finite}\\ &\hat{\gamma}_{\text{p,length-3}}^{\text{even},(2)}:\frac{59}{3},\frac{439}{18},\frac{7121}{250},\frac{149525}{3996}\,,\label{l3finiteeven}\\ &\hat{\gamma}_{\text{p,length-3}}^{\text{odd},(2)}: \frac{206}{9}\,,\label{l3finiteodd}\\ &\hat{\gamma}_{\text{p,length-4}}^{\text{even},(2)}:\frac{3427}{108},\frac{12319}{288},\frac{815}{18},\frac{877}{18},\frac{37679\pm 2651 \sqrt{41}}{1476},\frac{2 \left(179129\pm 2352 \sqrt{697}\right)}{18819},\nonumber\\ &\qquad\qquad\qquad\quad \frac{129729219\pm 5049167 \sqrt{201}}{4283712},\frac{1}{270} \left(16160\pm 251 \sqrt{5}\right),y_1,y_2,y_3\,,\\ &\hat{\gamma}_{\text{p,length-4}}^{\text{odd},(2)}: \frac{8048}{297},\frac{3125}{96},\frac{875}{18},\frac{50825}{1188},\frac{13159}{288}\,,\\ &\hat{\gamma}_{\text{p,length-5}}^{\text{even},(2)}:\frac{1}{18} \left(617\pm 142 \sqrt{10}\right),\frac{432664955274\pm 20543721361 \sqrt{34}}{12629285856}\,,\label{l5finite}\\ &\hat{\gamma}_{\text{e,length-4}}^{\text{even},(2)}: \frac{97}{3}\mp\frac{59 \sqrt{\frac{5}{69}}}{3},\frac{9098 }{261}\,,\\ &\hat{\gamma}_{\text{e,length-5}}^{\text{even},(2)}: \frac{\left(2513637\pm54631 \sqrt{170}\right)}{53244}\,,\\ &\hat{\gamma}_{\text{e,length-4}}^{\text{odd},(2)}: \frac{277}{9}\,. \end{align} The $y_i$ are the solutions of the equation \begin{align} y^3-\frac{250350031847 y^2}{1934633250}&+\frac{4824966722800230692858971 y}{925841238569646696000}\nonumber\\ &-\frac{13849613580264790328390513509}{224979420972424147128000}=0\,.\label{yfinite} \end{align} Their numerical values with $y_1<y_2<y_3$ are \begin{align} y_1=20.933\,,\quad\ y_2=53.407\,,\quad\ y_3=55.065\,. \end{align} As in the $\overline{\text{MS}}$ scheme, the two-loop anomalous dimensions in the finite scheme are all positive except for $\hat{\gamma}_{\text{p,length-2}}^{\text{even},(2)}$. From the above results, one can see that all the two-loop \emph{evanescent} anomalous dimensions differ from the ones in the $\overline{\text{MS}}$ scheme. In contrast, due to the relations $\hat{Z}_{\text{pp}}=Z_{\text{pp}}$ and $\hat{Z}_{\text{pe}}=Z_{\text{pe}}$, some physical anomalous dimensions remain the same in the two schemes. From \eqref{l2eveng1}$\sim$\eqref{l3MSodd} and \eqref{l2finite}$\sim$\eqref{l3finiteodd}, we see that all length-2 and length-3 two-loop anomalous dimensions are the same in the two schemes. The reason is that there is no mixing from evanescent operators into length-2 and length-3 operators up to the two-loop order.\footnote{By mixing we mean divergent mixing here. There can be finite mixing from an evanescent operator into length-3 operators in the finite renormalization scheme, but it does not affect the two-loop anomalous dimensions.} Besides, some length-4 anomalous dimensions remain the same in the two schemes. This is due to the fact that the $Z$ matrix is block upper triangular according to the $D$-type (a detailed discussion is given in Appendix~\ref{all dim-10}) and these length-4 operators are in $D$-type sectors that contain no evanescent operator. \subsection{Anomalous dimensions at the conformal fixed point}\label{sec:fixed point} From \eqref{l2eveng1}$\sim$\eqref{yms} and \eqref{l2finite}$\sim$\eqref{yfinite} one can see that the two-loop anomalous dimensions depend on the renormalization scheme.
On the other hand, the anomalous dimensions at a conformal fixed point should not depend on the scheme choice (see \emph{e.g.} \cite{Vasilev:2004yr} and \cite{DiPietro:2017vsp}). This provides a non-trivial cross-check between our results in the two schemes. In the following, we first give a short argument for why the anomalous dimensions are scheme-independent at a conformal fixed point. Then we show how, given $\mathcal{D}^{(1)}$ and $\mathcal{D}^{(2)}$, to calculate the dilatation matrix up to next-to-leading order (NLO) at the WF fixed point. Finally, we give all anomalous dimensions of dimension-10 operators at the WF fixed point. We use $\mathcal{D}^*$ and $\gamma^*$ to denote the dilatation matrix and anomalous dimensions at the WF fixed point. Assume that we have a set of renormalized operators, \begin{align} O_j=({Z})_j^{\ k}O_{k,\text{b}}\,. \end{align} A change of subtraction scheme can generally be thought of as a finite linear transformation of them \cite{Vasilev:2004yr}: \begin{align} {K}_i^{\ j}O_j={K}_i^{\ j}({Z})_j^{\ k}O_{k,\text{b}}\,. \label{lineartransofO} \end{align} Defining $\tilde{Z}\equiv{K}{Z}$ and using \eqref{overallgamma}, one gets $\tilde{\mathcal{D}}$ as \begin{align} \tilde{\mathcal{D}}=-\frac{\partial{K}}{\partial \alpha_s}(\mu\frac{d\alpha_s}{d{\mu}}){K}^{-1}+{K}\mathcal{D}{K}^{-1}\,, \label{gammalineartrans} \end{align} with $\mathcal{D}=-\mu\frac{\text{d}Z}{\text{d}\mu}(Z)^{-1}$. This is not a similarity transformation in general, so the eigenvalues of $\mathcal{D}$ and those of $\tilde{\mathcal{D}}$ are different. But if the theory is at a conformal fixed point, one has $\mu\frac{d\alpha_s}{d{\mu}}=0$. Then a scheme transformation leads to a similarity transformation of the dilatation matrix, so the eigenvalues, \emph{i.e.} the anomalous dimensions, are independent of the renormalization scheme. Below we show the calculation of the anomalous dimensions at the WF fixed point. According to \eqref{renormalpha}, the coupling at the WF fixed point reads \begin{align} \alpha^*=-\frac{4 \pi \epsilon }{\beta_0}-\frac{4 \pi \beta_1 \epsilon ^2}{\beta_0^3}+\mathcal{O}(\epsilon^3)\,. \label{alphaWF} \end{align} One can see that $\alpha^*$ is proportional to $N_c^{-1}$. Since $\alpha^*$ should be positive and the pure YM theory is asymptotically free with $\beta_0>0$, the WF fixed point exists when $\epsilon<0$, corresponding to a $d$-dimensional spacetime with $d>4$. Substituting the $\mathcal{D}^{(1)}$ and $\mathcal{D}^{(2)}$ calculated in the last section into \eqref{overallgamma} and replacing $\alpha_s$ by $\alpha^*$, we get the dilatation matrix expanded in $\epsilon$: \begin{align} \mathcal{D}^*=\sum_{i\geq1}\epsilon^{i}{\mathcal{D}^*_i}=\sum_{l}\left(\frac{\alpha^*}{4\pi}\right)^l\mathcal{D}^{(l)}=\left(-\frac{ \epsilon }{\beta_0}-\frac{ \beta_1 \epsilon ^2}{\beta_0^3}\right)\mathcal{D}^{(1)}+\frac{\epsilon^2}{\beta_0^2}\mathcal{D}^{(2)}+\mathcal{O}(\epsilon^3)\,. \label{Dfix} \end{align} In the planar limit, the factor $N_c^l$ in $\mathcal{D}^{(l)}$ cancels the factor $N_c^{-l}$ in $(\alpha^*)^l$, so the dilatation matrix is independent of $N_c$.\footnote{This is a general feature of the anomalous dimensions at the WF fixed point in the large $N_c$ limit, see \emph{e.g.} Chapter 29 in \cite{Zinn-Justin:2002ecy} for the $O(N)$ theory.} The anomalous dimensions can be expanded in $\epsilon$ as \begin{align} \gamma^*=\sum_{i\geq1}\epsilon^{i}{\gamma^*_i}\,. \end{align}
The dilatation matrix is different in the $\overline{\text{MS}}$ scheme and the finite renormalization scheme, \emph{i.e.} $\mathcal{D}^*_{\overline{\text{MS}}}\neq \mathcal{D}^*_{\text{fin}}$. According to the discussion below \eqref{gammalineartrans}, they are similar matrices. Our results confirm that the anomalous dimensions are indeed the same in the two schemes at the WF fixed point. Below we present the anomalous dimensions. Note that there is no $N_c^{l}$ factor in $\gamma^*_l$. The leading-order (LO) results are \begin{align} &{\gamma^*_1}_{\text{length-2}}^{\text{even}}:2\,,\label{fixeven2}\\ &{\gamma^*_1}_{\text{length-3}}^{\text{even}}:-\frac{28}{11},-\frac{126}{55},-\frac{18}{11},-\frac{14}{11}\,,\\ &{\gamma^*_1}_{\text{length-3}}^{\text{odd}}:\ -\frac{24}{11} \,,\\ &{\gamma^*_1}_{\text{length-4}}^{\text{even}}: -\frac{38}{11},-\frac{36}{11},-\frac{32}{11},-\frac{63}{22},-\frac{27}{11}, \frac{2}{11} \left(-19\pm 3 \sqrt{5}\right),\frac{1}{11} \left(-17\pm 3 \sqrt{41}\right),\nonumber\\ &\qquad\qquad\qquad\quad \frac{3}{44} \left(-29\pm \sqrt{201}\right),\frac{1}{11} \left(-23\pm\sqrt{345}\right),\frac{1}{22} \left(-31\pm\sqrt{697}\right),x_1,x_2,x_3\,, \\ &{\gamma^*_1}_{\text{length-4}}^{\text{odd}}:\ -\frac{75}{22},-3,-\frac{32}{11},-\frac{30}{11},-\frac{51}{22},-2\,, \\ &{\gamma^*_1}_{\text{length-5}}^{\text{even}}:\frac{2}{11} \left(-19\pm 3 \sqrt{10}\right),\frac{2}{11} \left(-14\pm\sqrt{170}\right),\frac{1}{11} \left(-32\pm \sqrt{34}\right)\,.\label{fixeven5} \end{align} The $x_i$ are the roots of \begin{align} x^3+\frac{446 x^2}{55}+\frac{2307 x}{121}+\frac{72126}{6655}=0\,. \end{align} Their numerical values with $x_1<x_2<x_3$ are \begin{align} x_1=-4.1193\,,\quad\ x_2=-3.1562\,,\quad\ x_3=-0.83360\,. \end{align} Actually, one can derive from \eqref{Dfix} that the LO results can be obtained by substituting $\frac{\alpha_s N_c}{4\pi}\to -\frac{3}{11}$ into the one-loop anomalous dimensions calculated in the $\alpha_s$-expansion.
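This substitution is easy to check; with the pure YM value $\beta_0=\frac{11}{3}N_c$, one has $-\frac{\epsilon}{\beta_0}N_c=-\frac{3}{11}\epsilon$, and applying the substitution to the one-loop length-3 C-even values reproduces the fixed-point list above. A minimal sketch (illustrative only):
\begin{verbatim}
# LO check: gamma*_1 = -(3/11) * gamma^(1) (the latter quoted in
# units of alpha_s*N_c/(4*pi)), for the length-3 C-even sector.
import sympy as sp

g1 = [sp.Rational(14, 3), 6, sp.Rational(42, 5), sp.Rational(28, 3)]
print([sp.Rational(-3, 11)*g for g in g1])
# -> [-14/11, -18/11, -126/55, -28/11], matching the set above
\end{verbatim}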
The next-to-leading-order (NLO) corrections are \begin{align} &{\gamma^*_2}_{\text{length-2}}^{\text{even}}:-\frac{204}{121}\,,\\ &{\gamma^*_2}_{\text{length-3}}^{\text{even}}:\frac{376711}{590964},\frac{62379}{332750},\frac{1157}{2662},\frac{519}{1331} \,,\\ &{\gamma^*_2}_{\text{length-3}}^{\text{odd}}:\ -\frac{182}{1331} \,,\\ &{\gamma^*_2}_{\text{length-4}}^{\text{even}}: \frac{59703987}{33388135},-\frac{2779}{2662},\frac{2437}{2662},\frac{32693}{42592},\frac{3520939}{5254788},\frac{10844\pm 4805 \sqrt{5}}{7986},\nonumber\\ &\qquad\qquad\qquad\quad \frac{130093\pm 21023 \sqrt{41}}{218284},\frac{9316861814943\pm 357329198443 \sqrt{201}}{16587281561664},\nonumber\\ &\qquad\qquad\qquad\quad \frac{15093318600615\pm 2298106885061 \sqrt{345}}{31644806266920}, \frac{634967\pm 54897 \sqrt{697}}{2783121}\,,\nonumber\\ &\qquad\qquad\qquad\quad y_1,y_2,y_3\,, \\ &{\gamma^*_2}_{\text{length-4}}^{\text{odd}}:\ \frac{94321}{212960},\frac{35029}{15972},\frac{4065}{2662},-\frac{149731}{79860},\frac{19893}{42592},\frac{5957}{15972}\,, \\ &{\gamma^*_2}_{\text{length-5}}^{\text{even}}:\frac{278813 \sqrt{10}\pm 433283}{1272436}, \frac{7528203818631\pm 2037367447760 \sqrt{170}}{16882819543524},\nonumber\\ &\qquad\qquad\qquad\quad \frac{476265955378374\pm 8389462392725 \sqrt{34}}{1523219570346144}\,. \end{align} The $y_i$ are the roots of \begin{align} y^3-\frac{13294802711131 y^2}{4499741980250}&+\frac{4515566752618635673017361833 y}{1592322513279522803486840000}\nonumber\\ &-\frac{10138741527932683336204035731219}{11444658831945242197781313816000}=0\,. \end{align} Their numerical values, with $y_1<y_3<y_2$, are \begin{align} y_1=0.740685\,,\quad\ y_2=1.27805\,,\quad \ y_3=0.935839\,. \end{align} From \eqref{Dfix}, one can see that $\mathcal{D}^*_2$ includes the one-loop term $-\frac{\beta_1}{\beta_0^3} \mathcal{D}^{(1)}$.\footnote{In the finite renormalization scheme, the term $-\frac{1}{\beta_0}\hat{\mathcal{D}}_{\text{ep}}^{(1)}$ would also contribute to $\mathcal{D}^*_2$.} This may alter the signs of the NLO anomalous dimensions. Let us take the length-3 C-odd operator $O_{25}$ as an example. Since $O_{25}$ is an eigenstate of the dilatation matrix (as discussed in Appendix~\ref{all dim-10}), in this simple case one can obtain its anomalous dimension at the fixed point via the following formula \begin{align} {\gamma^*_2}_{\text{length-3}}^{\text{odd}}&=\left(\gamma_{\text{p},\text{length-3}}^{\text{odd},(1)}\alpha^*+\gamma_{\text{p},\text{length-3}}^{\text{odd},(2)}{\alpha^*}^2\right)\big|_{\text{coefficient of }\epsilon^2}\nonumber\\ &=\gamma_{\text{p},\text{length-3}}^{\text{odd},(1)}\left(-\frac{\beta_1}{\beta_0^3}\right)+\gamma_{\text{p},\text{length-3}}^{\text{odd},(2)}\frac{1}{\beta_0^2}\nonumber\\ &=8\left(-\frac{306}{1331}\right)+\frac{206}{9}\cdot\frac{9}{121}=-\frac{182}{1331}\,. \end{align} One can see that the term $\gamma_{\text{p},\text{length-3}}^{\text{odd},(1)}\left(-\frac{\beta_1}{\beta_0^3}\right)$ alters the sign of the NLO anomalous dimension. \section{Conclusion}\label{sec:discuss} In this paper, we study the two-loop renormalization of gluonic evanescent operators in the pure YM theory. Although the tree-level matrix elements of evanescent operators vanish in four-dimensional spacetime, they are important at the quantum loop level in dimensional regularization, since the internal legs can propagate in $d=4-2\epsilon$ dimensions. The effect of the evanescent operators on the physical anomalous dimensions comes from their mixing with physical ones, and the pattern of this effect depends on the renormalization scheme.
Let us take the $\overline{\text{MS}}$ scheme as an example. At the one-loop order, such mixing is suppressed by the evanescent effect and is finite, so the evanescent operators have no effect on the physical anomalous dimensions. At the two-loop order, however, the mixing can be of order $1/\epsilon$ and can give an important contribution to the physical anomalous dimensions. Our two-loop computation for the dimension-10 operator basis provides a first concrete example in Yang-Mills theory of the effect of gluonic evanescent operators on the physical anomalous dimensions.

We have applied two different schemes to obtain the anomalous dimensions. In the $\overline{\text{MS}}$ scheme, one needs to consider the full renormalization matrix of both physical and evanescent operators up to the two-loop order. In the finite renormalization scheme, the dilatation matrix has the nice property of being block upper triangular \cite{Buras:1989xd,Dugan:1990df}; therefore, to compute physical anomalous dimensions, one only needs to consider physical operators at the two-loop order. However, we stress that it is still necessary to compute the one-loop renormalization of evanescent operators. More generally, to compute $l$-loop physical anomalous dimensions in the finite renormalization scheme, one needs to consider the renormalization of evanescent operators up to the $(l-1)$-loop order.

The anomalous dimensions are renormalization-scheme dependent due to the running of the coupling constant. As a further consideration, we compute the anomalous dimensions for the YM theory at the WF fixed point. In this case, the theory lives in non-integer spacetime dimensions, and physical and evanescent operators are on an equal footing. We obtain the anomalous dimensions up to the next-to-leading order in the $\epsilon$-expansion. As expected, we find that the anomalous dimensions computed in the two renormalization schemes agree at the WF fixed point, which also provides a non-trivial check of our results.

For the two-loop computation, we consider form factors, i.e., matrix elements each involving one physical or evanescent operator. We use the $d$-dimensional unitarity-cut method combined with efficient integral reduction methods. To simplify the computation, we perform numerical computations in the intermediate steps to obtain numerical UV data and finally reconstruct the analytic $Z$ matrix. This provides a first two-loop computation of anomalous dimensions for a closed set of Yang-Mills operators that includes length-4 and length-5 operators. Our strategy can be straightforwardly applied to the two-loop renormalization of YM operators of higher lengths and is also expected to be applicable to higher-dimensional operators in more general theories. \acknowledgments This work is supported in part by the National Natural Science Foundation of China (Grants No.~11935013, 12175291, 11822508, 12047503, 12047502, 11947301). We also thank the support of the HPC Cluster of ITP-CAS. \begin{appendix} \section{Dimension-10 operators}\label{all dim-10} In this appendix, we present our dimension-10 single-trace operator basis. Compared with the basis given in~\cite{Jin:2022ivc}, we organize the operators by separating total derivative operators explicitly. We first give a complementary discussion of total derivative operators. We say that an operator is of $D$-$(i,\alpha)$ type if it is an $i$-th total derivative of a rank-$\alpha$ operator. For example, $O_2$ in \eqref{theO2} is a $D$-$(4,2)$ operator.
In particular, an operator that has no overall covariant derivative $D$ is said to be of $D$-$(0,0)$ type. We order operators strictly according to their $D$-types as \begin{align} &D\text{-}(i,\alpha)>D\text{-}(i',\alpha'),\ \text{if $i>i'$}\,,\\ &D\text{-}(i,\alpha)>D\text{-}(i,\alpha'),\ \text{if $\alpha<\alpha'$}\,. \end{align} Within each helicity sector, our operator basis enumerates all the total derivative operators from the highest to the lowest $D$-type.\footnote{In principle, one should first enumerate the $D$-type operators before classifying the operators according to helicities, to make sure that all the total derivative operators are included in the basis. It turns out, however, that the order is irrelevant in our case.} By considering classical dimensions and Lorentz structures, it is not hard to see that an operator does not mix into any operator of lower $D$-type. Thus the $Z$ matrix and the dilatation matrix are block upper triangular according to $D$-type. There are two special operators in our basis, namely $O_1$ and $O_{25}$. Each of them is the only operator of the highest $D$-type in the corresponding C-parity sector, so they cannot mix with other operators and are eigenstates of the dilatation matrix. Below is the operator basis. For brevity of notation, we drop the symbol ``tr''. For example, $F_{\mu_1\mu_2}F_{\mu_1\mu_2}$ means $\text{tr}(F_{\mu_1\mu_2}F_{\mu_1\mu_2})$. \subsection{The physical operators} \subsubsection*{C-even} The only length-2 operator: \begin{flalign} &O_1=D^6(F_{\mu_1\mu_2}F_{\mu_1\mu_2})&\,. \end{flalign} Below are the length-3 operators. The $(-)^2+$ sector: \begin{flalign} O_2&=D^2 D_4D_5(-\eta_{45} \frac{1}{4}F_{12}F_{13}F_{23} + F_{14}F_{25}F_{12})\,,\label{theO2}&\\ O_3&=D_4D_5D_6(\eta_{56} \frac{1}{2}D_1F_{23}F_{13}F_{24} - \frac{1}{2}D_6F_{25}F_{12}F_{14} + \frac{1}{2}F_{25}D_6F_{12}F_{14}- \frac{1}{2}F_{25}F_{12}D_6F_{14})\,. \end{flalign} The $(-)^3$ sector: \begin{flalign} O_4&=\frac{1}{12}D^4(F_{12}F_{13}F_{23})\,,&\\ O_5&=D_4D_5(-D_1F_{35}D_1F_{24}F_{23} -D_2F_{15}F_{34}D_1F_{23} -\frac{1}{4}F_{35}D_1F_{24}D_1F_{23} \nonumber \\ &+ \frac{3}{2}D_1F_{23}F_{12}D_4F_{35} + \frac{3}{4}F_{35}D_1F_{23}D_1F_{24})\,. \end{flalign} Below are the length-4 operators.
The $(-)^4$ sector: \begin{flalign} O_{6}&=D^2(\frac{1}{8}{F_{12}}{F_{12}}{F_{34}}{F_{34}}+\frac{1}{16}{F_{12}}{F_{34}}{F_{12}}{F_{34}}-\frac{1}{8}{F_{12}}{F_{23}}{F_{34}}{F_{14}}+\frac{3}{8}{F_{12}}{F_{34}}{F_{23}}{F_{14}})\,,& \end{flalign} \begin{flalign} O_{7}&=D^2(-\frac{1}{2}{F_{12}}{F_{12}}{F_{34}}{F_{34}}-\frac{1}{4}{F_{12}}{F_{34}}{F_{12}}{F_{34}}-{F_{12}}{F_{23}}{F_{34}}{F_{14}})\,,& \end{flalign} \begin{flalign} O_{8}&=D_5\bigg[-\frac{1}{2}{F_{12}}{F_{12}}{F_{34}}{D_{4}}{F_{35}}-{F_{13}}{F_{24}}{F_{35}}{D_{4}}{F_{12}}+{F_{13}}{F_{25}}{F_{34}}{D_{4}}{F_{12}}-{F_{13}}{F_{35}}{F_{24}}{D_{4}}{F_{12}}&\nonumber\\& +{F_{24}}{F_{13}}{F_{35}}{D_{4}}{F_{12}}-{F_{24}}{F_{35}}{F_{13}}{D_{4}}{F_{12}}-\frac{1}{2}{F_{34}}{F_{12}}{F_{12}}{D_{4}}{F_{35}}+{F_{34}}{F_{12}}{F_{13}}{D_{4}}{F_{25}}\nonumber\\& +{F_{35}}{F_{13}}{F_{24}}{D_{4}}{F_{12}}+D_6(-\frac{1}{8}{F_{12}}{F_{12}}{F_{36}}{F_{35}}+\frac{1}{4}{F_{12}}{F_{13}}{F_{36}}{F_{25}}+\frac{1}{4}{F_{12}}{F_{25}}{F_{36}}{F_{13}}\nonumber\\& -\frac{1}{8}{F_{12}}{F_{35}}{F_{36}}{F_{12}}+\frac{1}{4}{\eta_{56}}{F_{14}}{F_{12}}{F_{23}}{F_{34}}+\frac{1}{16}{\eta_{56}}{F_{34}}{F_{12}}{F_{12}}{F_{34}}+\frac{1}{16}{\eta_{56}}{F_{34}}{F_{34}}{F_{12}}{F_{12}}\nonumber\\& -\frac{1}{8}{F_{36}}{F_{12}}{F_{12}}{F_{35}}+\frac{1}{4}{F_{36}}{F_{13}}{F_{12}}{F_{25}} +\frac{1}{4}{F_{36}}{F_{25}}{F_{12}}{F_{13}}-\frac{1}{8}{F_{36}}{F_{35}}{F_{12}}{F_{12}})\bigg]\,, \end{flalign} \begin{flalign} O_{9}&=\frac{3}{4}F_{12}F_{12}D_{5}F_{34}D_{5}F_{34}+\frac{1}{4}F_{12}D_{5}F_{12}D_{5}F_{34}F_{34}+\frac{3}{4}F_{12}F_{34}D_{5}F_{12}D_{5}F_{34}&\nonumber\\ &+F_{12}F_{23}D_{5}F_{34}D_{5}F_{14}+2F_{12}F_{23}D_{5}F_{14}D_{5}F_{34}+2F_{12}D_{1}F_{34}D_{5}F_{23}F_{45}&\nonumber\\ &-F_{13}F_{23}D_{2}F_{45}D_{1}F_{45}\,, \end{flalign} \begin{flalign} O_{10}&=-\frac{1}{2}F_{12}F_{12}D_{5}F_{34}D_{5}F_{34}-\frac{1}{2}F_{12}D_{5}F_{12}D_{5}F_{34}F_{34}-\frac{1}{2}F_{12}F_{34}D_{5}F_{12}D_{5}F_{34}&\nonumber\\ &-3F_{12}F_{23}D_{5}F_{14}D_{5}F_{34}+F_{12}D_{5}F_{23}D_{5}F_{14}F_{34}\,. 
\end{flalign} The $(-)^3+$ sector: \begin{flalign} O_{11}&=D_{5}D_{6}(\frac{1}{2}{F_{12}}{F_{36}}{F_{12}}{F_{35}}+{F_{13}}{F_{25}}{F_{36}}{F_{12}}-\frac{1}{4}{\eta_{56}}{F_{34}}{F_{12}}{F_{34}}{F_{12}}-{F_{36}}{F_{12}}{F_{13}}{F_{25}}&\nonumber\\ &+\frac{1}{2}{F_{36}}{F_{12}}{F_{35}}{F_{12}})\,, \end{flalign} \begin{flalign} O_{12}&=D_{5}D_{6}(\frac{1}{2}{F_{12}}{F_{12}}{F_{36}}{F_{35}}-{F_{12}}{F_{13}}{F_{36}}{F_{25}}-{F_{12}}{F_{25}}{F_{36}}{F_{13}}+\frac{1}{2}{F_{12}}{F_{35}}{F_{36}}{F_{12}}&\nonumber\\& -\frac{1}{3}{F_{12}}{F_{36}}{F_{12}}{F_{35}}-\frac{2}{3}{F_{13}}{F_{25}}{F_{36}}{F_{12}}+{F_{25}}{F_{12}}{F_{13}}{F_{36}}+{F_{25}}{F_{36}}{F_{13}}{F_{12}}\nonumber\\& -\frac{1}{4}{\eta_{56}}{F_{34}}{F_{12}}{F_{12}}{F_{34}}+\frac{1}{6}{\eta_{56}}{F_{34}}{F_{12}}{F_{34}}{F_{12}}-\frac{1}{4}{\eta_{56}}{F_{34}}{F_{34}}{F_{12}}{F_{12}}+\frac{1}{2}{F_{36}}{F_{12}}{F_{12}}{F_{35}}\nonumber\\& +\frac{2}{3}{F_{36}}{F_{12}}{F_{13}}{F_{25}}-\frac{1}{3}{F_{36}}{F_{12}}{F_{35}}{F_{12}}+\frac{1}{2}{F_{36}}{F_{35}}{F_{12}}{F_{12}})\,, \end{flalign} \begin{flalign} O_{13}&=D_5\bigg[{D_{4}}{F_{12}}{F_{34}}{F_{13}}{F_{25}}+{D_{4}}{F_{12}}{F_{35}}{F_{13}}{F_{24}}-{D_{4}}{F_{25}}{F_{13}}{F_{34}}{F_{12}}&\nonumber\\& +D_6(-\frac{1}{4}{F_{12}}{F_{12}}{F_{36}}{F_{35}}-\frac{1}{6}{F_{12}}{F_{36}}{F_{12}}{F_{35}}-\frac{1}{3}{F_{13}}{F_{25}}{F_{36}}{F_{12}}+\frac{1}{2}{F_{13}}{F_{36}}{F_{12}}{F_{25}}\nonumber\\& +\frac{1}{12}{\eta_{56}}{F_{34}}{F_{12}}{F_{34}}{F_{12}}+\frac{1}{3}{F_{36}}{F_{12}}{F_{13}}{F_{25}}-\frac{1}{6}{F_{36}}{F_{12}}{F_{35}}{F_{12}}-\frac{1}{2}{F_{36}}{F_{13}}{F_{25}}{F_{12}}\nonumber\\& +\frac{1}{4}{F_{36}}{F_{35}}{F_{12}}{F_{12}})\bigg]\,, \end{flalign} \begin{flalign} O_{14}&=-\frac{1}{3}F_{12}D_{5}F_{34}F_{12}D_{5}F_{34}+\frac{2}{3}F_{13}D_{1}F_{45}F_{23}D_{2}F_{45}+\frac{2}{3}F_{13}D_{2}F_{45}F_{23}D_{1}F_{45}&\nonumber\\& -\frac{2}{3}F_{13}D_{12}F_{45}F_{23}F_{45}\,.& \end{flalign} The $(-)^2(+)^2$ sector: \begin{flalign} O_{15}&=D^2(\frac{1}{8}{F_{12}}{F_{34}}{F_{12}}{F_{34}}+\frac{1}{2}{F_{12}}{F_{34}}{F_{23}}{F_{14}})\,,& \end{flalign} \begin{flalign} O_{16}&=D^2(-\frac{1}{4}{F_{12}}{F_{12}}{F_{34}}{F_{34}}+\frac{1}{8}{F_{12}}{F_{34}}{F_{12}}{F_{34}}-\frac{1}{2}{F_{12}}{F_{23}}{F_{34}}{F_{14}})\,,& \end{flalign} \begin{flalign} O_{17}=&D_5D_6(-\frac{1}{4}{F_{12}}{F_{12}}{F_{35}}{F_{36}}-\frac{1}{2}{F_{12}}{F_{13}}{F_{36}}{F_{25}}+{F_{12}}{F_{25}}{F_{13}}{F_{36}}-\frac{1}{2}{F_{12}}{F_{25}}{F_{36}}{F_{13}}&\nonumber\\& -\frac{1}{4}{F_{12}}{F_{35}}{F_{12}}{F_{36}}+\frac{1}{2}{F_{12}}{F_{35}}{F_{36}}{F_{12}}-\frac{1}{4}{F_{12}}{F_{36}}{F_{12}}{F_{35}}+{F_{12}}{F_{36}}{F_{13}}{F_{25}}+{F_{12}}{F_{36}}{F_{25}}{F_{13}}\nonumber\\& -\frac{3}{4}{F_{12}}{F_{36}}{F_{35}}{F_{12}}-\frac{1}{2}{\eta_{56}}{F_{14}}{F_{12}}{F_{23}}{F_{34}}+{F_{14}}{F_{23}}{F_{12}}{F_{34}}-{F_{25}}{F_{36}}{F_{12}}{F_{13}}\nonumber\\& +\frac{1}{4}{\eta_{56}}{F_{34}}{F_{12}}{F_{12}}{F_{34}}+\frac{1}{8}{\eta_{56}}{F_{34}}{F_{12}}{F_{34}}{F_{12}}-\frac{1}{4}{F_{35}}{F_{12}}{F_{12}}{F_{36}}+\frac{1}{4}{F_{35}}{F_{12}}{F_{36}}{F_{12}}\nonumber\\& +\frac{1}{4}{F_{35}}{F_{36}}{F_{12}}{F_{12}}+{F_{36}}{F_{12}}{F_{13}}{F_{25}}+{F_{36}}{F_{12}}{F_{25}}{F_{13}}-\frac{3}{4}{F_{36}}{F_{12}}{F_{35}}{F_{12}}-\frac{1}{2}{F_{36}}{F_{13}}{F_{12}}{F_{25}}\nonumber\\& +{F_{36}}{F_{13}}{F_{25}}{F_{12}}-\frac{1}{2}{F_{36}}{F_{25}}{F_{12}}{F_{13}}+{F_{36}}{F_{25}}{F_{13}}{F_{12}}-\frac{1}{2}{F_{36}}{F_{35}}{F_{12}}{F_{12}})\,, \end{flalign} \begin{flalign} 
O_{18}&=\frac{1}{4}F_{12}F_{12}D_{5}F_{34}D_{5}F_{34}+\frac{1}{4}F_{12}D_{5}F_{12}D_{5}F_{34}F_{34}-\frac{1}{4}F_{12}F_{34}D_{5}F_{12}D_{5}F_{34}\nonumber\\& +F_{12}F_{23}D_{5}F_{34}D_{5}F_{14}\,,\\ O_{19}&=\frac{1}{2}F_{12}F_{34}D_{5}F_{12}D_{5}F_{34}+F_{12}F_{23}D_{5}F_{14}D_{5}F_{34}+F_{12}D_{5}F_{23}D_{5}F_{14}F_{34}\,,\\ O_{20}&=-\frac{7}{4}F_{12}F_{12}D_{5}F_{34}D_{5}F_{34}+\frac{5}{4}F_{12}D_{5}F_{12}D_{5}F_{34}F_{34}-\frac{3}{4}F_{12}F_{34}D_{5}F_{12}D_{5}F_{34}&\nonumber\\& -F_{12}F_{23}D_{5}F_{34}D_{5}F_{14}-4F_{12}F_{23}D_{5}F_{14}D_{5}F_{34}+2F_{12}D_{5}F_{23}D_{5}F_{14}F_{34}\nonumber\\& -2F_{12}D_{1}F_{34}D_{5}F_{23}F_{45}+F_{13}F_{23}D_{1}F_{45}D_{2}F_{45}\,. \end{flalign} Below are the length-5 operators. The $(-)^5$ sector: \begin{flalign} O_{21}&=5F_{12}F_{12}F_{34}F_{35}F_{45}+F_{12}F_{13}F_{34}F_{45}F_{25}-5F_{12}F_{13}F_{24}F_{35}F_{45}\,,&\\ O_{22}&=\frac{5}{2}F_{12}F_{12}F_{34}F_{35}F_{45}+F_{12}F_{13}F_{24}F_{45}F_{35}-3F_{12}F_{13}F_{24}F_{35}F_{45}\,. \end{flalign} The $(-)^3(+)^2$ sector: \begin{flalign} O_{23}&=\frac{3}{2}F_{12}F_{12}F_{34}F_{35}F_{45}+F_{12}F_{13}F_{34}F_{45}F_{25}-F_{12}F_{13}F_{24}F_{45}F_{35}-2F_{12}F_{13}F_{24}F_{35}F_{45}\,,&\\ O_{24}&=F_{12}F_{12}F_{34}F_{35}F_{45}-2F_{12}F_{13}F_{24}F_{45}F_{35}-2F_{12}F_{13}F_{24}F_{35}F_{45}\,. \end{flalign} \subsubsection*{C-odd} The only length-3 operator of the $d$-sector: \begin{flalign} O_{25}&=D_4D_5D_6(D_6F_{25}F_{12}F_{14} -F_{25}F_{12}D_6F_{14})\,.& \label{O25} \end{flalign} Below are the length-4 operators. The $(-)^4$ sector: \begin{flalign} O_{26}&=D_5\bigg[-{F_{12}}{F_{12}}{F_{34}}{D_{4}}{F_{35}}-2{D_{4}}{F_{12}}{F_{24}}{F_{13}}{F_{35}}+2{D_{4}}{F_{12}}{F_{25}}{F_{13}}{F_{34}}+2{D_{4}}{F_{12}}{F_{34}}{F_{13}}{F_{25}}&\nonumber\\& +2{D_{4}}{F_{12}}{F_{34}}{F_{25}}{F_{13}}-2{D_{4}}{F_{12}}{F_{35}}{F_{13}}{F_{24}}-2{D_{4}}{F_{12}}{F_{35}}{F_{24}}{F_{13}}-2{F_{13}}{F_{24}}{F_{35}}{D_{4}}{F_{12}}\nonumber\\& +2{F_{13}}{F_{25}}{F_{34}}{D_{4}}{F_{12}}-2{F_{13}}{F_{35}}{F_{24}}{D_{4}}{F_{12}}+2{F_{24}}{F_{13}}{F_{35}}{D_{4}}{F_{12}}-2{F_{24}}{F_{35}}{F_{13}}{D_{4}}{F_{12}}\nonumber\\& -2{D_{4}}{F_{25}}{F_{13}}{F_{12}}{F_{34}}-2{D_{4}}{F_{25}}{F_{13}}{F_{34}}{F_{12}}-2{D_{4}}{F_{25}}{F_{34}}{F_{12}}{F_{13}}-{F_{34}}{F_{12}}{F_{12}}{D_{4}}{F_{35}}\nonumber\\& +2{F_{34}}{F_{12}}{F_{13}}{D_{4}}{F_{25}}+2{F_{35}}{F_{13}}{F_{24}}{D_{4}}{F_{12}}+{D_{4}}{F_{35}}{F_{12}}{F_{12}}{F_{34}}+{D_{4}}{F_{35}}{F_{12}}{F_{34}}{F_{12}}\nonumber\\& +{D_{4}}{F_{35}}{F_{34}}{F_{12}}{F_{12}}\nonumber\\& +D_6(\frac{1}{4}{F_{12}}{F_{12}}{F_{35}}{F_{36}}-\frac{1}{2}{F_{12}}{F_{13}}{F_{25}}{F_{36}}-\frac{1}{4}{F_{12}}{F_{35}}{F_{36}}{F_{12}}+{F_{12}}{F_{36}}{F_{12}}{F_{35}}\nonumber\\& -\frac{3}{2}{F_{12}}{F_{36}}{F_{13}}{F_{25}}-\frac{3}{2}{F_{12}}{F_{36}}{F_{25}}{F_{13}}+{F_{12}}{F_{36}}{F_{35}}{F_{12}}-\frac{1}{2}{F_{13}}{F_{36}}{F_{12}}{F_{25}}\nonumber\\& -\frac{3}{2}{\eta_{56}}{F_{14}}{F_{23}}{F_{12}}{F_{34}}-\frac{1}{2}{F_{25}}{F_{36}}{F_{12}}{F_{13}}-\frac{1}{4}{\eta_{56}}{F_{34}}{F_{12}}{F_{12}}{F_{34}}-\frac{3}{8}{\eta_{56}}{F_{34}}{F_{12}}{F_{34}}{F_{12}}\nonumber\\& +\frac{1}{4}{F_{35}}{F_{36}}{F_{12}}{F_{12}}+\frac{3}{4}{F_{36}}{F_{12}}{F_{12}}{F_{35}}-\frac{3}{2}{F_{36}}{F_{12}}{F_{13}}{F_{25}}-\frac{3}{2}{F_{36}}{F_{12}}{F_{25}}{F_{13}}\nonumber\\& +{F_{36}}{F_{12}}{F_{35}}{F_{12}}-\frac{1}{2}{F_{36}}{F_{13}}{F_{25}}{F_{12}})\bigg]\,.
\end{flalign} The $(-)^3+$ sector: \begin{flalign} O_{27}&=D_5D_6(-\frac{1}{2}{F_{12}}{F_{12}}{F_{36}}{F_{35}}+{F_{13}}{F_{36}}{F_{12}}{F_{25}}-{F_{36}}{F_{13}}{F_{25}}{F_{12}}+\frac{1}{2}{F_{36}}{F_{35}}{F_{12}}{F_{12}})& \end{flalign} \begin{flalign} O_{28}&=D_5\bigg[-2{D_{4}}{F_{12}}{F_{13}}{F_{25}}{F_{34}}-2{D_{4}}{F_{12}}{F_{34}}{F_{13}}{F_{25}}-2{D_{4}}{F_{12}}{F_{35}}{F_{13}}{F_{24}}+2{D_{4}}{F_{12}}{F_{35}}{F_{24}}{F_{13}}&\nonumber\\& +2{D_{4}}{F_{25}}{F_{13}}{F_{34}}{F_{12}}+2{D_{4}}{F_{25}}{F_{34}}{F_{12}}{F_{13}}-{D_{4}}{F_{35}}{F_{12}}{F_{34}}{F_{12}}\nonumber\\& +D_6(\frac{3}{4}{F_{12}}{F_{12}}{F_{36}}{F_{35}}-\frac{1}{2}{F_{12}}{F_{13}}{F_{36}}{F_{25}}-\frac{1}{2}{F_{12}}{F_{25}}{F_{36}}{F_{13}}+\frac{1}{4}{F_{12}}{F_{35}}{F_{36}}{F_{12}}\nonumber\\& -\frac{1}{2}{F_{12}}{F_{36}}{F_{12}}{F_{35}}+{F_{12}}{F_{36}}{F_{13}}{F_{25}}+{F_{12}}{F_{36}}{F_{25}}{F_{13}}-\frac{1}{2}{F_{12}}{F_{36}}{F_{35}}{F_{12}}-{F_{13}}{F_{36}}{F_{12}}{F_{25}}\nonumber\\& +{F_{14}}{F_{23}}{F_{12}}{F_{34}}+\frac{1}{2}{F_{25}}{F_{12}}{F_{13}}{F_{36}}+\frac{1}{2}{F_{25}}{F_{36}}{F_{13}}{F_{12}}+\frac{1}{8}{\eta_{56}}{F_{34}}{F_{12}}{F_{12}}{F_{34}}\nonumber\\& +\frac{1}{4}{\eta_{56}}{F_{34}}{F_{12}}{F_{34}}{F_{12}}-\frac{1}{8}{\eta_{56}}{F_{34}}{F_{34}}{F_{12}}{F_{12}}-\frac{1}{4}{F_{36}}{F_{12}}{F_{12}}{F_{35}}+{F_{36}}{F_{12}}{F_{13}}{F_{25}}\nonumber\\& +{F_{36}}{F_{12}}{F_{25}}{F_{13}}-\frac{1}{2}{F_{36}}{F_{12}}{F_{35}}{F_{12}}+{F_{36}}{F_{13}}{F_{25}}{F_{12}}-\frac{1}{4}{F_{36}}{F_{35}}{F_{12}}{F_{12}})\bigg]\,, \end{flalign} The $(-)^2(+)^2$ sector: \begin{flalign} O_{29}&=D_5\bigg[-2{D_{4}}{F_{12}}{F_{13}}{F_{35}}{F_{24}}-2{D_{4}}{F_{12}}{F_{24}}{F_{13}}{F_{35}}+2{D_{4}}{F_{12}}{F_{24}}{F_{35}}{F_{13}}+2{D_{4}}{F_{12}}{F_{34}}{F_{25}}{F_{13}}&\nonumber\\& +2{D_{4}}{F_{12}}{F_{35}}{F_{13}}{F_{24}}-2{D_{4}}{F_{12}}{F_{35}}{F_{24}}{F_{13}}-2{D_{4}}{F_{25}}{F_{13}}{F_{34}}{F_{12}}+{D_{4}}{F_{35}}{F_{12}}{F_{34}}{F_{12}}\nonumber\\& +D_6(-\frac{1}{4}{F_{12}}{F_{12}}{F_{36}}{F_{35}}-\frac{1}{2}{F_{12}}{F_{13}}{F_{25}}{F_{36}}+\frac{1}{2}{F_{12}}{F_{25}}{F_{13}}{F_{36}}+\frac{1}{4}{F_{12}}{F_{35}}{F_{36}}{F_{12}}\nonumber\\& +\frac{1}{2}{F_{12}}{F_{36}}{F_{12}}{F_{35}}-\frac{1}{2}{F_{12}}{F_{36}}{F_{13}}{F_{25}}-\frac{1}{2}{F_{12}}{F_{36}}{F_{25}}{F_{13}}-\frac{1}{2}{\eta_{56}}{F_{14}}{F_{23}}{F_{12}}{F_{34}}\nonumber\\& -{F_{25}}{F_{36}}{F_{12}}{F_{13}}-\frac{1}{8}{\eta_{56}}{F_{34}}{F_{12}}{F_{12}}{F_{34}}-\frac{1}{4}{\eta_{56}}{F_{34}}{F_{12}}{F_{34}}{F_{12}}+\frac{1}{8}{\eta_{56}}{F_{34}}{F_{34}}{F_{12}}{F_{12}}\nonumber\\& -\frac{1}{4}{F_{35}}{F_{12}}{F_{12}}{F_{36}}+\frac{1}{4}{F_{35}}{F_{12}}{F_{36}}{F_{12}}+\frac{1}{4}{F_{35}}{F_{36}}{F_{12}}{F_{12}}+\frac{1}{2}{F_{36}}{F_{12}}{F_{12}}{F_{35}}\nonumber\\& -\frac{1}{2}{F_{36}}{F_{12}}{F_{13}}{F_{25}}-\frac{1}{2}{F_{36}}{F_{12}}{F_{25}}{F_{13}}+\frac{1}{4}{F_{36}}{F_{12}}{F_{35}}{F_{12}}+\frac{1}{2}{F_{36}}{F_{13}}{F_{25}}{F_{12}}\nonumber\\& +\frac{1}{2}{F_{36}}{F_{25}}{F_{13}}{F_{12}}-\frac{1}{2}{F_{36}}{F_{35}}{F_{12}}{F_{12}})\bigg]\,, \end{flalign} \begin{flalign} O_{30}&=D_5\bigg[-2{D_{4}}{F_{12}}{F_{13}}{F_{24}}{F_{35}}-2{D_{4}}{F_{12}}{F_{13}}{F_{35}}{F_{24}}-2{D_{4}}{F_{12}}{F_{24}}{F_{35}}{F_{13}}+2{D_{4}}{F_{12}}{F_{25}}{F_{13}}{F_{34}}&\nonumber\\& +2{D_{4}}{F_{12}}{F_{34}}{F_{13}}{F_{25}}+2{D_{4}}{F_{12}}{F_{34}}{F_{25}}{F_{13}}-2{D_{4}}{F_{25}}{F_{13}}{F_{12}}{F_{34}}-2{D_{4}}{F_{25}}{F_{13}}{F_{34}}{F_{12}}\nonumber\\& 
-2{D_{4}}{F_{25}}{F_{34}}{F_{12}}{F_{13}}+{D_{4}}{F_{35}}{F_{12}}{F_{12}}{F_{34}}+{D_{4}}{F_{35}}{F_{12}}{F_{34}}{F_{12}}+{D_{4}}{F_{35}}{F_{34}}{F_{12}}{F_{12}}\nonumber\\& +D_6(\frac{1}{4}{F_{12}}{F_{12}}{F_{36}}{F_{35}}+\frac{1}{2}{F_{12}}{F_{13}}{F_{25}}{F_{36}}-\frac{1}{2}{F_{12}}{F_{25}}{F_{13}}{F_{36}}-\frac{1}{4}{F_{12}}{F_{35}}{F_{36}}{F_{12}}\nonumber\\& +\frac{1}{2}{F_{12}}{F_{36}}{F_{12}}{F_{35}}-\frac{3}{2}{F_{12}}{F_{36}}{F_{13}}{F_{25}}-\frac{3}{2}{F_{12}}{F_{36}}{F_{25}}{F_{13}}+{F_{12}}{F_{36}}{F_{35}}{F_{12}}-\frac{3}{2}{\eta_{56}}{F_{14}}{F_{23}}{F_{12}}{F_{34}}\nonumber\\& +{F_{25}}{F_{36}}{F_{12}}{F_{13}}-\frac{3}{8}{\eta_{56}}{F_{34}}{F_{12}}{F_{12}}{F_{34}}-\frac{1}{4}{\eta_{56}}{F_{34}}{F_{12}}{F_{34}}{F_{12}}-\frac{1}{8}{\eta_{56}}{F_{34}}{F_{34}}{F_{12}}{F_{12}}\nonumber\\& +\frac{1}{4}{F_{35}}{F_{12}}{F_{12}}{F_{36}}-\frac{1}{4}{F_{35}}{F_{12}}{F_{36}}{F_{12}}-\frac{1}{4}{F_{35}}{F_{36}}{F_{12}}{F_{12}}+\frac{1}{2}{F_{36}}{F_{12}}{F_{12}}{F_{35}}\nonumber\\& -\frac{3}{2}{F_{36}}{F_{12}}{F_{13}}{F_{25}}-\frac{3}{2}{F_{36}}{F_{12}}{F_{25}}{F_{13}}+\frac{3}{4}{F_{36}}{F_{12}}{F_{35}}{F_{12}}-\frac{1}{2}{F_{36}}{F_{13}}{F_{25}}{F_{12}}\nonumber\\& -\frac{1}{2}{F_{36}}{F_{25}}{F_{13}}{F_{12}}+\frac{1}{2}{F_{36}}{F_{35}}{F_{12}}{F_{12}})\bigg]\,. \end{flalign} \subsection{The evanescent operators} \subsubsection*{C-even} The 3 length-4 operators \begin{flalign} O_{31}&=\frac{1}{8}D_9D_{10}\bigg[(2\delta^{56789}_{3412(10)}+\delta^{34789}_{5612(10)})F_{12}F_{34}F_{56}F_{78}\bigg]\,,&\\ O_{32}&=\frac{1}{4}D_9D_{10}\bigg[(\delta^{56789}_{3412(10)}-\delta^{34789}_{5612(10)})F_{12}F_{34}F_{56}F_{78}\bigg]\,,\\ O_{33}&=\frac{1}{4}\delta^{1256(10)}_{34789}\bigg[-{D_{9}}{F_{12}}{F_{56}}{D_{10}}{F_{34}}{F_{78}}+2{D_{9}}{F_{12}}{F_{78}}{F_{56}}{D_{10}}{F_{34}}-{D_{10}}{F_{34}}{F_{78}}{D_{9}}{F_{12}}{F_{56}}\bigg]\,. \end{flalign} The 2 length-5 operators \begin{flalign} O_{34}&=\frac{1}{8}\bigg[\delta^{12347}_{569(10)8}F_{12}F_{56}F_{34}F_{9(10)}F_{78}+\delta^{12349}_{5678(10)}F_{12}F_{56}F_{78}F_{9(10)}F_{34}\bigg]\,,&\\ O_{35}&=\frac{1}{8}\bigg[-2\delta^{12347}_{569(10)8}F_{12}F_{56}F_{34}F_{9(10)}F_{78}+\delta^{12349}_{5678(10)}F_{12}F_{56}F_{78}F_{9(10)}F_{34}\bigg]\,. \end{flalign} \subsubsection*{C-odd} The only length-4 operators \begin{flalign} O_{36}&=\frac{1}{4}D_{10}\bigg[2\delta^{2345(10)}_{67891}D_1F_{23}F_{45}F_{67}F_{89}-D_9\big(\delta^{56789}_{3412(10)}F_{12}F_{34}F_{56}F_{78}\big)\bigg]\,.& \end{flalign} \section{Two-loop renormalization matrix $Z^{(2)}_{\text{pp}}$}\label{zppresult} In this appendix, we present the two-loop mixing between the physical operators. The results in the two schemes are the same, \emph{i.e.} $Z^{(2)}_{\text{pp}}=\hat{Z}^{(2)}_{\text{pp}}$. Because the $\epsilon^{-2}$ parts can be derived by the one-loop $Z$ matrix according to \eqref{z2ep2ms}, below only the $\epsilon^{-1}$ parts are presented. 
The blocks in ${Z}^{\text{even},(2)}_{\text{pp}}$ are \begin{align} &{Z}^{\text{even},(2)}_{\text{pp},2\to 2}=\frac{N_c^2}{\epsilon } \left( \begin{array}{c} -\frac{34}{3}\\ \end{array} \right)\,, {Z}^{\text{even},(2)}_{\text{pp},3\to 2}=\frac{N_c^2}{\epsilon} \left( \begin{array}{c} -\frac{1}{3} \\ -\frac{209}{900} \\ -1 \\ -\frac{19}{36} \\ \end{array} \right)\,, {Z}^{\text{even},(2)}_{\text{pp},3\to 3}=\frac{N_c^2}{\epsilon} \left( \begin{array}{cccc} \frac{439}{72} & 0 & \frac{3}{2} & 0 \\ -\frac{1471}{4500} & \frac{7121}{1000} & \frac{89}{100} & 0 \\ 0 & 0 & \frac{59}{12} & 0 \\ \frac{5923}{28800} & \frac{1531}{3200} & -\frac{655}{1152} & \frac{32459}{3456} \\ \end{array} \right)\,, \end{align} \begin{align} &{Z}^{\text{even},(2)}_{\text{pp},4\to 3}=\frac{N_c^2}{\epsilon} \left( \begin{array}{cccc} 0 & 0 & -\frac{1}{4} & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & \frac{1}{4} & \frac{1}{4} \\ 0 & 0 & -\frac{1}{3} & \frac{1}{9} \\ 0 & 0 & \frac{1}{6} & \frac{1}{9} \\ \frac{29}{80} & -\frac{123}{80} & -\frac{1}{16} & -\frac{73}{144} \\ \frac{31}{48} & -\frac{19}{16} & \frac{55}{144} & -\frac{275}{432} \\ -\frac{1}{120} & -\frac{29}{160} & \frac{13}{288} & -\frac{29}{216} \\ -\frac{11}{60} & \frac{3}{40} & -\frac{1}{24} & -\frac{1}{108} \\ \frac{1}{4} & 0 & 0 & 0 \\ -\frac{13}{18} & 0 & 0 & 0 \\ -\frac{5}{8} & \frac{3}{8} & 0 & 0 \\ \frac{473}{3600} & -\frac{19}{240} & 0 & 0 \\ \frac{2}{25} & \frac{1}{40} & 0 & 0 \\ -\frac{53}{400} & \frac{17}{80} & 0 & 0 \\ \end{array} \right)\,,\\ &{Z}^{\text{even},(2)}_{\text{pp},4\to 4}=\frac{N_c^2}{\epsilon} \left( \begin{array}{ccccccccc} \frac{833}{216} & \frac{449}{864} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{86}{27} & \frac{481}{54} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{907}{108} & -\frac{43}{54} & \frac{815}{72} & 0 & 0 & -\frac{391}{432} & \frac{13}{32} & \frac{31}{72} & 0 \\ -\frac{5}{18} & \frac{43}{96} & \frac{17}{27} & \frac{3695}{216} & -\frac{10}{9} & -\frac{215}{192} & \frac{995}{1152} & -\frac{59}{32} & \frac{131}{72} \\ \frac{143}{27} & -\frac{695}{864} & \frac{2}{27} & -\frac{373}{54} & \frac{791}{72} & \frac{1153}{1728} & -\frac{625}{1152} & \frac{509}{288} & -\frac{109}{72} \\ \frac{551}{432} & \frac{209}{1728} & 0 & 0 & 0 & \frac{1451}{144} & -\frac{103}{64} & 0 & 0 \\ \frac{1949}{1296} & -\frac{1525}{5184} & 0 & 0 & 0 & -\frac{7099}{1296} & \frac{8509}{1728} & 0 & 0 \\ \frac{17}{648} & \frac{25}{5184} & \frac{17}{64} & 0 & 0 & -\frac{30197}{20736} & -\frac{5287}{6912} & \frac{12319}{1152} & 0 \\ -\frac{221}{324} & \frac{299}{648} & -\frac{7}{48} & \frac{83}{18} & -\frac{5}{18} & \frac{2077}{5184} & \frac{173}{1728} & -\frac{127}{288} & \frac{73}{6} \\ -\frac{1}{36} & -\frac{257}{288} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{145}{54} & -\frac{329}{432} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{107}{54} & \frac{43}{54} & 0 & 0 & 0 & -\frac{209}{108} & \frac{25}{18} & 0 & 0 \\ \frac{209}{450} & \frac{499}{14400} & \frac{29}{1080} & -\frac{2299}{2160} & -\frac{121}{2700} & \frac{851}{14400} & -\frac{1909}{28800} & \frac{37}{288} & -\frac{11}{72} \\ -\frac{1141}{2700} & -\frac{5563}{43200} & -\frac{73}{90} & \frac{1577}{720} & \frac{1531}{1800} & \frac{4543}{43200} & -\frac{779}{28800} & \frac{1}{96} & \frac{5}{24} \\ -\frac{20371}{2700} & \frac{31997}{43200} & -\frac{10499}{720} & \frac{3661}{360} & \frac{58397}{3600} & \frac{53003}{43200} & \frac{21041}{28800} & -\frac{659}{96} & \frac{49}{12} \\ \end{array} \right.\,\nonumber\\ &\left.\qquad\qquad\qquad\qquad\qquad \begin{array}{cccccc} \frac{797}{288} & \frac{245}{96} & 0 & 0 & 0 & 0 \\ -\frac{113}{18} & 
-\frac{41}{6} & 0 & 0 & 0 & 0 \\ -\frac{497}{144} & -\frac{227}{144} & -\frac{61}{72} & 0 & 0 & 0 \\ -\frac{1979}{3600} & \frac{1031}{2400} & -\frac{2311}{2400} & -\frac{5903}{900} & \frac{28}{45} & -\frac{331}{600} \\ -\frac{8449}{3600} & -\frac{2867}{7200} & \frac{153}{800} & \frac{3041}{450} & \frac{1279}{360} & \frac{271}{900} \\ -\frac{425}{192} & \frac{41}{192} & -\frac{389}{288} & 0 & 0 & 0 \\ -\frac{251}{64} & -\frac{535}{1728} & -\frac{565}{288} & 0 & 0 & 0 \\ -\frac{263}{192} & \frac{211}{432} & -\frac{125}{128} & 0 & 0 & 0 \\ \frac{6319}{3240} & -\frac{2821}{2160} & \frac{12341}{12960} & -\frac{113}{15} & -\frac{367}{162} & -\frac{83}{405} \\ \frac{953}{216} & \frac{853}{144} & 0 & 0 & 0 & 0 \\ -\frac{2}{9} & \frac{1103}{216} & 0 & 0 & 0 & 0 \\ \frac{11}{16} & -\frac{533}{144} & \frac{299}{54} & 0 & 0 & 0 \\ -\frac{1583}{13500} & \frac{6623}{6000} & -\frac{23039}{108000} & \frac{701849}{54000} & \frac{92}{675} & -\frac{3899}{27000} \\ -\frac{254219}{108000} & \frac{54473}{36000} & -\frac{23063}{216000} & -\frac{353}{125} & \frac{273949}{21600} & -\frac{53341}{108000} \\ \frac{31951}{3000} & \frac{176}{125} & \frac{319699}{72000} & \frac{63643}{3000} & -\frac{125777}{7200} & \frac{854629}{108000} \\ \end{array} \right)\,,\\ &{Z}^{\text{even},(2)}_{\text{pp},5\to 4}=\frac{N_c^2}{\epsilon} \left( \begin{array}{ccccccccccccccc} 0 & 0 & \frac{5}{2} & -\frac{5}{2} & -\frac{5}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{7}{4} & -\frac{5}{4} & -\frac{7}{4} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{96} & 0 & \frac{1}{16} & -\frac{1}{32} & \frac{3}{16} & 0 & \frac{3}{32} & -\frac{1}{8} & -\frac{17}{16} & -\frac{1}{16} \\ 0 & 0 & 0 & 0 & 0 & -\frac{1}{144} & \frac{1}{6} & -\frac{17}{24} & \frac{17}{48} & -\frac{65}{216} & 0 & -\frac{65}{432} & \frac{3}{4} & \frac{91}{216} & -\frac{13}{216} \\ \end{array} \right)\,,\\ &{Z}^{\text{even},(2)}_{\text{pp},5\to 5}=\frac{N_c^2}{\epsilon} \left( \begin{array}{cccc} -\frac{409}{36} & \frac{1045}{36} & \frac{1535}{144} & \frac{1135}{288} \\ -\frac{1123}{72} & \frac{749}{24} & \frac{107}{16} & \frac{755}{288} \\ -\frac{85}{72} & \frac{121}{72} & \frac{13771}{1728} & \frac{2657}{1152} \\ -\frac{11}{18} & \frac{3}{4} & \frac{77}{32} & \frac{14359}{1728} \\ \end{array} \right)\,, \end{align} The blocks in ${Z}^{\text{odd},(2)}_{\text{pp}}$ are \begin{align} &{Z}^{\text{odd},(2)}_{\text{pp},3\to 3}=\frac{N_c^2}{\epsilon} \left( \begin{array}{c} \frac{103}{18}\\ \end{array} \right)\,, {Z}^{\text{odd},(2)}_{\text{pp},4\to 3}=\frac{N_c^2}{\epsilon} \left( \begin{array}{c} 0 \\ -\frac{3}{32} \\ -\frac{9}{32} \\ 0 \\ \frac{1}{6} \\ \end{array} \right)\,, {Z}^{\text{odd},(2)}_{\text{pp},4\to 4}=\frac{N_c^2}{\epsilon} \left( \begin{array}{ccccc} \frac{289}{24} & \frac{17}{24} & -\frac{73}{288} & -\frac{77}{360} & -\frac{11}{45} \\ 0 & \frac{3125}{384} & 0 & 0 & 0 \\ -\frac{1}{24} & -\frac{353}{288} & \frac{13267}{1152} & -\frac{37}{144} & \frac{7}{9} \\ \frac{2797}{720} & -\frac{1}{8} & \frac{541}{1440} & \frac{14867}{2160} & -\frac{19}{60} \\ -\frac{413}{720} & \frac{5}{6} & \frac{89}{160} & -\frac{13}{24} & \frac{10729}{1080} \\ \end{array} \right)\,. \end{align} \end{appendix} \providecommand{\href}[2]{#2}\begingroup\raggedright
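For readers who wish to manipulate these blocks further, the following is a minimal transcription sketch (assuming SymPy is available; the names \texttt{Nc}, \texttt{eps}, \texttt{Z22} and \texttt{Z32} are ours, purely illustrative). Only the two smallest blocks are entered here; exact rationals avoid the rounding that floating-point entry of these coefficients would introduce.

```python
# A minimal transcription sketch using SymPy exact rationals.
import sympy as sp

Nc, eps = sp.symbols('N_c epsilon', positive=True)
prefactor = Nc**2 / eps

# 2 -> 2 block of Z^{even,(2)}_{pp}: a single entry -34/3.
Z22 = prefactor * sp.Matrix([[sp.Rational(-34, 3)]])

# 3 -> 2 block: the 4 x 1 column exactly as printed above.
Z32 = prefactor * sp.Matrix([[sp.Rational(-1, 3)],
                             [sp.Rational(-209, 900)],
                             [sp.Integer(-1)],
                             [sp.Rational(-19, 36)]])

# Shape checks of the kind that catch transcription slips early.
assert Z22.shape == (1, 1) and Z32.shape == (4, 1)
print(sp.simplify(Z22[0, 0] * eps / Nc**2))  # -> -34/3
```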
1,108,101,562,499
arxiv
\section{Introduction} The close proximity of the Magellanic Clouds offers an important opportunity to study in detail the dynamics and composition of these galaxies. One of the important issues that affects our interpretation of the observable parameters is the three-dimensional structure of the Clouds. The Large Magellanic Cloud (LMC) has long been regarded as a thin flat disk seen nearly face-on, but with the east side being closer than the west (Westerlund 1997; see also van der Marel 2004 for an updated review). Recent determinations of the tilt angle place it in the range 30--35$^\circ$ (van der Marel \& Cioni 2001, Olsen \& Salyk 2002, Nikolaev et al. 2004). However, one can find a significantly broader range in the literature (25--45$^\circ$, Westerlund 1997), which can at least partly be explained by the nonplanar geometry of the inner LMC (Nikolaev et al. 2004). In contrast, the structure of the Small Magellanic Cloud (SMC) has proved difficult to define, mostly because of its large inclination angle. Different distance indicators have implied a line-of-sight depth from 7 kpc (O, B and A type stars; Azzopardi 1982) to 6--12 kpc (star clusters; Crowl et al. 2001) and 20 kpc (Cepheids; Mathewson et al. 1986, 1988). It is well established that the asymmetric appearance of the SMC is due almost exclusively to the young population of stars, since the older stellar population shows a very regular distribution (e.g. Zaritsky et al. 2000, Cioni et al. 2000, Maragoudaki et al. 2001). Time-series observations from microlensing projects such as MACHO (Alcock et al. 2000) and OGLE (Udalski et al. 1992), combined with data from two recent near-infrared surveys, 2MASS (Skrutskie 1998) and DENIS (Epchtein 1998), have provided an unprecedented new wave of high-quality data and a dramatic improvement in statistical analyses. The order-of-magnitude increase in the number of well-measured stellar distance indicators has allowed detailed searches for small distance modulus variations across the face of the Clouds. The most recent examples for the LMC include the Cepheid period--luminosity (P--L) relations (Nikolaev et al. 2004), the Asymptotic Giant Branch (AGB) luminosity distribution (van der Marel \& Cioni 2001) and the luminosity of core helium-burning red clump stars (Subramaniam 2003, 2004). The basic idea was the same in all these studies: the observed apparent magnitude deviations were translated into distance modulus variations. The analysed fields ranged from the central 4.5 square degrees of the LMC bar (Subramaniam 2003, 2004) to over 300 square degrees, covering the outermost parts of the disk (van der Marel \& Cioni 2001). In this Letter we present the first attempt to use red giant P--L relations to constrain the three-dimensional structures of the Clouds. Since the discovery of multiple P--L relations of long-period variables in the LMC (Wood et al. 1999, Wood 2000), there has been strong interest in pulsating red giants and many independent analyses have been published (Cioni et al. 2001, 2003; Noda et al. 2002, 2004; Lebzelter, Schultheis \& Melchior 2002; Kiss \& Bedding 2003, 2004 -- Paper I and Paper II; Ita et al. 2004ab; Groenewegen 2004; Schultheis, Glass \& Cioni 2004; Soszy\'nski et al. 2004). Interestingly, none of these studies considered the geometric effects on the apparent magnitudes, as has been done, for instance, by Wray, Eyer \& Paczy\'nski (2004), who analysed short-period red giants in the Galactic bulge.
The main aim of the present paper is to address this issue and draw conclusions on the line-of-sight distance variations of the Clouds, as traced by pulsating red giants.

\section{Data analysis} In this work we used OGLE-II periods and 2MASS $K_S$ magnitudes of red giant variables in the LMC and SMC, as discussed in Papers I and II, where all relevant details on data reduction can be found. Our key assumption here is that the vertical scatter about any particular P--L relation contains information on the relative distances to individual stars. This allows us to examine the magnitude differences from mean P--L relations as a function of position on the sky. Although this sounds simple, we have to ask to what extent this assumption is valid. There are three aspects to the question: {\it (i)} star-to-star extinction variations due to interstellar dust; {\it (ii)} the contribution of stellar variability to the vertical scatter in the P--L relations; {\it (iii)} the intrinsic width of the P--L relations. \begin{table} \begin{centering} \caption{The inverse regression coefficients of the fitted P--L relations. They have the form $\log{P}~=~a^\prime~\times~K_S~+~b^\prime$. `Number' refers to the number of stars used in the line fit for that P--L relation.} \label{linefits} \begin{tabular}[b]{|l|c|c|c|} \hline P--L relation$^a$ & $a^\prime$ & $b^\prime$ & Number \\ \hline \multicolumn{4}{|c|}{{\bf LMC }} \\ R$_3$ (A$^-$) & $-0.237 \pm 0.002$ & $4.340 \pm 0.029$ & $2642$\\ R$_2$ (B$^-$) & $-0.269 \pm 0.004$ & $4.931 \pm 0.045$ & $1634$\\ \multicolumn{4}{|c|}{ } \\ \multicolumn{4}{|c|}{{\bf SMC }} \\ R$_3$ (A$^-$) & $-0.206 \pm 0.006$ & $4.030 \pm 0.120$ & $229$ \\ R$_2$ (B$^-$) & $-0.214 \pm 0.006$ & $4.330 \pm 0.130$ & $117$ \\ 3O (A$^+$) & $-0.206 \pm 0.006$ & $4.038 \pm 0.070$ & $133$ \\ 2O (B$^+$) & $-0.214 \pm 0.006$ & $4.349 \pm 0.069$ & $218$ \\ 1O (C$^\prime$) & $-0.238 \pm 0.006$ & $4.821 \pm 0.069$ & $260$ \\ F (C) & $-0.222 \pm 0.007$ & $4.932 \pm 0.078$ & $405$ \\ L$_2$ (D)$^b$ & $-0.170 \pm 0.005$ & $4.799 \pm 0.062$ & $534$ \\ L$_2$ (D)$^c$ & $-0.170 \pm 0.005$ & $4.810 \pm 0.160$ & $405$ \\ \hline \end{tabular} \end{centering} \vskip1mm $^a$ -- abbreviations by Ita et al. (2004a) in parentheses\\ $^b$ -- above the tip of the Red Giant Branch\\ $^c$ -- below the tip of the Red Giant Branch\\ \end{table} {\it (i)} The influence of dust absorption in the near-infrared was discussed by van der Marel \& Cioni (2001) for the LMC, who concluded that it was only a few hundredths of a magnitude in $K_S$ and could be neglected. This was later criticised by Nikolaev et al. (2004), who preferred the multi-wavelength approach that involved both distance modulus and extinction variations. Their fig.\ 7 shows that most of the individual Cepheid reddenings scatter within $\delta E(B-V)\sim\pm0\fm1$, which can be translated into a random error $\delta K_S\approx\pm0\fm03$, using the extinction law in Schlegel et al. (1998). However, there are also two facts which have to be considered: 1. the Nikolaev et al. sample covered a much larger field of view, where larger extinction variations can occur; 2. Cepheids, as young supergiant stars, are often found close to star-forming regions, where the local dust amount might be much higher than the average. Individual reddenings have also been determined by Subramaniam (2003), who analysed exactly the same OGLE-II field of view as we do (Udalski, Kubiak \& Szymanski 1997), allowing a direct comparison.
She found $\Delta_{max}E(V-I)\approx 0\fm029$ as the maximum reddening dispersion along the bar of the LMC, which corresponds to $\delta K_S\approx\pm0\fm01$ extinction dispersion in $K_S$. From these numbers, we conclude that for each star, the uncertainty in $K_S$ due to extinction is likely to be less than 0\fm03, possibly around 0\fm01, which is indeed negligible if we can take averages over hundreds of stars. {\it (ii)} Since 2MASS magnitudes are single-epoch measurements, intrinsic variability can introduce some uncertainty. For that reason, we decided to use only small-amplitude first ascent Red Giant Branch (RGB) stars in the LMC, of which several thousand were identified in Paper I. Their amplitudes in $I$ range from 0\fm005 to 0\fm02, so that 2MASS $K_S$ measurements should be within 0\fm01--0\fm02 of the mean values. The situation is less favourable in the SMC, for which the smaller number of stars (about 3,200 were analysed in Paper II) forced us to use most P--L relations, including both RGB and AGB variables. The latter may have amplitudes up to several tenths of a mag in the infrared (Whitelock, Marang \& Feast 2000). {\it (iii)} The horizontal width of the P--L relations is affected by errors arising from period determination and intrinsic scatter caused by astrophysics (e.g., mixing different populations of stars or random and time-dependent excitation of pulsations). Considering P--L ridges R$_2$ and R$_3$, we compared the relations presented in Paper I with those determined by Wood (2000) and Soszy\'nski et al. (2004), both based on about 8 years of observations (twice as long as OGLE-II). If the widths were dominated by errors in period determination caused by the short time-span of data, the longer datasets should have led to significantly tighter P--L relations. Somewhat surprisingly, this is not the case: we found the same $\sim$0.1 dex width in $\log P$ in all three studies, which implies that the natural width of the relations is probably resolved by OGLE-II alone. We note that errors in period determination can also arise from the fact that the majority of stars show multiply periodic behaviour, so that period uncertainty does not necessarily scale inverse-proportionally with the full time-span. In this work we assumed that the scatter in the P--L relations is partly due to a random error in period, which will be averaged out in a large sample, and that there is a vertical scatter due to a statistical error in the distance modulus estimates. To calculate distance modulus variations, each star in our sample was classified as lying on one of the P--L relations. To minimize the effects of spurious periods, we only used the dominant period for each star. For the classification, we drew straight-line boundaries between the different relations, similarly to Ita et al. (2004b). The results do not depend heavily on the exact definition of these lines because the majority of the stars lie away from the boundaries. To distinguish between AGB and RGB stars, we adopted the magnitudes of the tip of the Red Giant Branch (TRGB) from Paper II. We then made linear least-squares fits to each of the P--L relations. Because of the non-zero width and the rhomboid-shaped P--L ridges, these fits were performed as inverse regressions in the form $\log~P=a^\prime\times K_S+b^\prime$ (Table\ \ref{linefits}), which were then converted back to the usual P--L relations in the form $K_S=a\times\log~P+b$.
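To make the procedure concrete, here is a minimal sketch of the fit-and-offset step (NumPy assumed; the array names \texttt{logP} and \texttt{Ks} and the helper name are illustrative, not from our pipeline): an inverse regression $\log P = a'K_S + b'$ is fitted to one ridge, converted back to $K_S = a\log P + b$, and each star's relative distance modulus is taken as its vertical offset from the fit.

```python
import numpy as np

def relative_distance_moduli(logP, Ks):
    """Inverse regression logP = a'*Ks + b', converted to Ks = a*logP + b;
    returns each star's vertical offset (relative distance modulus)."""
    aprime, bprime = np.polyfit(Ks, logP, 1)   # fit logP against Ks
    a, b = 1.0 / aprime, -bprime / aprime      # invert to Ks = a*logP + b
    dmu = Ks - (a * logP + b)                  # positive = apparently fainter
    return dmu, (a, b)
```

In the actual analysis the offsets from the different ridges are then combined with weights set by the scatter of each relation, as described below.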
For a star at a given period, the vertical difference between its observed $K_S$ magnitude and the linear fit was taken as the distance modulus of that star relative to the average distance of the host galaxy. Because of the relatively large scatter in these distance moduli, caused by the issues discussed above, it was helpful to bin the distance data to see the underlying trends. When combining the results from different P--L relations, each measured distance modulus was weighted by the overall scatter (combined with the number of points) in its P--L relation. This gave more weight to the distances based on the P--L relations that had the tightest overall correlations.

\section{Results} \subsection{Structure of the LMC} The results presented here for the LMC used only the R$_3$ and R$_2$ P--L relations from Paper I, i.e. those below the TRGB (sequences A$^-$ and B$^-$ in Ita et al. 2004a). They had by far the tightest relations and gave the best distance measurements; 4,276 stars were used for the LMC. The OGLE-II data are limited to the region around the bar of the LMC, which lies approximately in the east-west direction. Therefore, the obvious way to bin the data was in Right Ascension. This is the same approach as used by Subramaniam (2003, 2004), who analysed dereddened OGLE-II red-clump magnitude variations in terms of relative distance variations. She has kindly provided her data and we compare these with our results in Fig.\ \ref{LMC_Sub}. To overlay the Subramaniam $I_0$ distance moduli, we subtracted their weighted mean value ($\langle I_0 \rangle=18\fm16$). Larger values in Fig.\ \ref{LMC_Sub} refer to those parts of the LMC that are further from us. The uncertainty in each RA bin was calculated from the scatter of the distance moduli that were combined to make that point. \begin{figure} \begin{center} \leavevmode \includegraphics[width=85mm]{ME992rvf1.ps} \end{center} \caption{The binned $K_S$ magnitude differences for various Right Ascensions, compared with red clump results by Subramaniam (2003). Larger values correspond to those parts of the LMC that are further from us.} \label{LMC_Sub} \end{figure} It is clear from Fig.\ \ref{LMC_Sub} that red giant P--L relations alone reveal that the bar is inclined away from us, running east-west. This is in perfect agreement with the general view of the LMC (Westerlund 1997). The magnitude variations can be translated to distance variations by assuming a distance modulus to the LMC of $(m-M)_0=18\fm5$ (Alves 2004a), corresponding to a mean distance of 50.1 kpc. The maximum distance modulus range we observe in the LMC is 0\fm1$\pm$0\fm03 (calculated from a linear fit between RA=77$^\circ$ and 87$^\circ$), which translates into a 2.4$\pm$0.7 kpc distance variation. We interpret this as the distance range of a thin but inclined structure, where the thickness of the LMC bar is assumed to vary less than the mean distance along any given line-of-sight. Using Fig.\ \ref{LMC_Sub} and the assumed LMC distance, an inclination angle of 29 degrees (with a formal error of several degrees) can be determined. However, note that OGLE-II data are not well suited for inclination angle determination because the field of view is a narrow, nearly linear strip, so the position angle of the line of nodes, along which the inclination angle has to be measured, is weakly constrained. In this sense, the inclination angle we determined is a lower limit to the real value.
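The magnitude-to-distance conversion quoted above is elementary; as a check (a short sketch, assuming the mean distance of 50.1 kpc adopted in the text):

```python
import numpy as np

D_LMC = 50.1                # kpc, for (m-M)_0 = 18.5
dmu = 0.10                  # mag, fitted range between RA = 77 and 87 deg
delta_d = D_LMC * (10.0**(dmu / 5.0) - 1.0)
print(round(delta_d, 1))    # -> 2.4 kpc, the distance range quoted above
```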
Nevertheless, our 29$^\circ$ is in good agreement with recent inclination angle determinations (e.g., van der Marel \& Cioni 2001, Nikolaev et al. 2004). \begin{figure} \begin{center} \leavevmode \includegraphics[width=8.5cm]{ME992rvf2.eps} \end{center} \caption{A 3-D representation of the LMC. The lighter regions are closer to us and the darker regions are further away.} \label{LMC_3D} \end{figure} There is also evidence for substructures within the bar as deviations from a straight line in Fig.\ \ref{LMC_Sub}. Similar substructures were recently found by Subramaniam (2003, 2004), who interpreted red clump magnitude variations as evidence of a misaligned secondary bar within the primary one. There appears to be good consistency between the two data sets, with the distance values mostly agreeing to within 1$\sigma$. However, there are also some discrepancies, most prominently at RA$\sim 79^\circ$ and RA$\sim83-85^\circ$. At these positions there are two $\sim$0\fm05 dips in the Subramaniam data but not in our data (although we have fewer data points, so that the 2-3 $\sigma$ differences may arise purely from statistical fluctuations). There are two reasons to believe that our data in these regions are to be preferred. Firstly, optical and narrow-band H$\alpha$ images show prominent HII regions in the quoted positions (e.g. giant shells No. 51, 54 and 60 at RA$\sim79^\circ$ and No. 77 at RA$\sim84^\circ$ in Kim et al. 1999), which suggests that the difference may indicate undercorrected extinction in the $I$-band data used by Subramaniam (2003). Secondly, the two RAs where we found the largest deviations coincide exactly with the positions where the average $E(V-I)$ reddening values show sudden jumps in fig.\ 2 of Subramaniam (2003). Both facts imply that the red clump method (developed by Olsen \& Salyk 2002 and used by Subramaniam 2003) may be affected by a systematic error that is related to the red clump reddening determination. The red clump method assumes the red clump has constant $(V-I)_0$ colour everywhere in the LMC. Since we used reddening-insensitive $K$-band data for red giants, the discrepancy with clump results implies the intrinsic $(V-I)_0$ colour of the clump must be redder than the assumed constant value ($(V-I)_0$=0\fm92, Olsen \& Salyk 2002), and this probably shows a change in the mean red clump population. Alves (2004a) showed that population effects on model red-clump colour--magnitude diagrams can significantly affect distances derived from $V$ and $I$ band red clump data. Our findings can be explained by a spatially varying clump population in the LMC, which may have broad implications for understanding the LMC bulge/halo/disk structure and formation history, as discussed recently by Alves (2004b) and Zaritsky (2004). To search for structure in the Declination direction we created a 3-D representation of the LMC in which the relative distance modulus is shown in grey scale (Fig.\ \ref{LMC_3D}). This figure was made using two-dimensional averaging with a Gaussian weight function ($FWHM=8\farcm6$). This representation makes the depression at RA$\sim77^\circ$ quite prominent. The overall appearance shows the tilt of the bar, with no measurable systematic trend along Declination within the OGLE-II field of view. \subsection{Structure of the SMC} \begin{figure} \begin{center} \leavevmode \includegraphics[width=85mm]{ME992rvf3.ps} \end{center} \caption{The binned $K_S$ magnitude differences for various Right Ascensions.
The larger values refer to those parts of the SMC that are further from us.} \label{SMC_RA} \end{figure} In the SMC the smaller number of stars led us to use more P--L relations to reduce the errors. We included P--L ridges R$_3$, R$_2$, 3O, 2O, 1O, F and L$_2$ (A$^-$, B$^-$, A$^+$, B$^+$, C$^\prime$, C and D in Ita et al. 2004a); these were the only relations for which reasonable boundaries could be defined, due to the larger scatter in the SMC. As for the LMC, we weighted the results by the corresponding scatter in each P--L relation, so that better-defined relations received higher weight. The resulting distance modulus variations are plotted in Fig.\ \ref{SMC_RA}. The uncertainties are considerably larger than for the LMC, which is partly due to having fewer stars and partly due to the known larger depth of the SMC (cf. Sect.\ 1). The latter effect is particularly prominent in fig.\ 9 of Ita et al. (2004a): all ridges, without exception, are significantly thicker, including those of the fundamental and first overtone Cepheids. For that reason it would be misleading to conclude from Fig.\ \ref{SMC_RA} that no relative distance modulus variations occur in the SMC. There are hints of substructures but their interpretation is difficult because the mean distance modulus is a complicated function of the stellar distribution along the line of sight. Formally, the given distance modulus variations correspond to a distance range of 3.2$\pm$1.6 kpc, adopting $(m-M)_0=18\fm94$ (Paper II), which, in contrast to the LMC, is likely to be more affected by the overall depth along every line of sight. A better depth measurement would require a study of the vertical scatter along individual P--L ridges, but for that purpose, these red giant P--L relations in the SMC are less useful because they are too close to each other in the P--L plane, which makes it more difficult to distinguish between members of different ridges. Finally, the 3-D representation of the SMC (Fig.\ \ref{SMC_3D}) shows that both the north-eastern and the south-western corners of the OGLE-II field are slightly closer to us, while there is a system of depressions along the northern boundary. Interestingly, the two deepest ``holes'' (at RA$\sim$14$^\circ$ and 11$^\circ$) coincide exactly with the two major concentrations of red giant (RGB and AGB) stars in figs.\ 6-7 in Cioni et al. (2000), thus they represent significant extensions of the main body of the SMC, inclined further away from us. \begin{figure} \begin{center} \leavevmode \includegraphics[width=8.5cm]{ME992rvf4.eps} \end{center} \caption{A 3-D representation of the SMC. The lighter regions are closer to us and the darker regions are further away.} \label{SMC_3D} \end{figure}

\section{Conclusions} We have demonstrated the usefulness of pulsating red giants as distance indicators in external galaxies by measuring the 3-D structures of the Magellanic Clouds. In particular, the variables below the tip of the Red Giant Branch are numerous and their narrow P--L relations are real competitors to those of the Cepheid variables. About a year of continuous monitoring gives very well-defined P--L relations for those stars, which have periods between 15 and 50 days. Because of the much lower amplitudes, intrinsic variability introduces negligible vertical scatter in the relations. Detecting such low amplitudes is, of course, a major observational obstacle: one has to reach several-millimagnitude precision over the whole photometric monitoring period, which should be as continuous as possible.
For that reason, red giant P--L relations in general can be considered as providing information supplementary to other distance indicators, although high-amplitude red giants have already been used for measuring a stand-alone extragalactic distance to NGC~5128=Cen~A (Rejkuba 2004). From an analysis of almost 4,300 RGB variables in the LMC we determined the spatial depth of the LMC bar as a function of celestial position. The results are in good agreement with those based on other distance indicators (Cepheids, red clump stars, AGB luminosity distributions), but are perhaps less affected by dust. We also found possible evidence for spatial variations in the red clump population in the LMC, which suggests that the assumption of constant $(V-I)_0$ red-clump colour over the whole LMC may not be valid. For the Small Magellanic Cloud, we found a patchy structure with the two major concentrations of late-type stars displaced a few ($<3$) kiloparsecs further away. However, the vertical widths of red giant P--L relations seem to be significantly larger in the SMC, which can be attributed to the larger spatial depth range of the galaxy. Because of this, the calculated depth variations are likely to be seriously affected by the overall depth of the galaxy, and the interpretation of the results should be made with care.

\section*{Acknowledgments} This work has been supported by the OTKA Grant \#T042509 and the Australian Research Council. P. Lah received a Vacation Scholarship for this project from the School of Physics, University of Sydney. L.L. Kiss is supported by a University of Sydney Postdoctoral Research Fellowship. Thanks are due to an anonymous referee for very helpful comments and suggestions. We also thank Dr. A. Subramaniam for providing her red clump distance data for the LMC. This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The NASA ADS Abstract Service was used to access data and references.
1,108,101,562,500
arxiv
\section{Introduction} In the past 25 years, considerable progress has been made in loop quantum gravity (LQG)\cite{Ro04,Th07,As04,Ma07}, which is a non-perturbative and background-independent quantization of general relativity. A particular feature of LQG is to inherit the core concept of general relativity that the fundamental physical theory should be background independent. The construction of LQG inspired the research on spin foam models (SFMs) \cite{Rovelli}, which are proposed as the path-integral formulation for LQG. In SFMs, the transition amplitude between physical quantum states is formulated as a sum over histories of all physically appropriate states. The heuristic picture is the following. One considers a 4-dimensional spacetime region $M$ bounded by two spacelike 3-surfaces, which would correspond to the initial and final states for the path integral. The feature of SFMs is to employ the spin network states in LQG to represent the initial and final states. A certain path between the two boundaries is a quantum 4-geometry interpolated between the two spin networks. The interpolated spin foams can be constructed by considering the dual triangulation of $M$ and coloring its surfaces with half integers and edges with suitable intertwiners. In order to obtain the physical inner product between the two boundary states, one has to sum over all possible triangulations and colorings \cite{Ro04}. This is certainly difficult to achieve. Thus it is desirable to examine the idea and construction of SFMs by certain simplified models. It is well known that the idea and technique of canonical LQG are successfully carried out in the symmetry-reduced models, known as loop quantum cosmology (LQC) \cite{Bojowald}. Because of the homogeneity (and isotropy), the infinite degrees of freedom of gravity have been reduced to finite ones in LQC \cite{Boj00}. Hence LQC provides a simple arena to test the ideas and constructions of the full LQG. Therefore, it was proposed to test the idea and construction of SFMs by LQC models \cite{Henderson}. It was shown in Refs.\cite{spinfoam1,spinfoam2} that the transition amplitude in the deparameterized LQC of the $k=0$ Friedmann universe equals the physical inner product in the timeless framework, and concrete evidence has been provided in support of the general paradigm underlying SFMs through the lens of LQC. How to achieve a local spinfoam expansion in LQC was also studied in Refs. \cite{on the spinfoam, RV2}. Recently, the effective Hamiltonian constraint of $k=0$ LQC was derived from the path integral formulation in the timeless framework in Ref.\cite{most recent}, and the effective measure introduces some conceptual subtleties in arriving at the WKB approximation. The path integrals and effective Hamiltonians of LQC for the $k=+1,-1$ FRW models were also derived \cite{Huang}. Coherent state functional integrals have also been proposed to study quantum cosmological models \cite{Qin}. However, in canonical LQC, there are some quantization ambiguities in constructing the Hamiltonian constraint operator. Theoretically, the quantization method which inherits more features from full LQG would be preferred. In the spatially flat FRW model, two alternative versions of the Hamiltonian constraint operator were proposed in Ref. \cite{YDM2}, where big bang singularity resolution, the correct classical limit, and the re-collapse of an expanding universe due to higher-order quantum effects were obtained.
The purpose of this paper is to study the alternative effective dynamics of LQC from the viewpoint of the path integral. The models which we consider are the spatially flat FRW models with a massless scalar field. We will formulate the path integrals for two different proposed dynamics of LQC in both the timeless and deparameterized frameworks. The multiple group averaging method proposed in Ref.\cite{Huang} is employed. It turns out that the alternative effective Hamiltonians derived from the two different viewpoints are equivalent to each other. Moreover, the first-order modified Friedmann equations can be derived, and the quantum bounces for a contracting universe will also be obtained in both kinds of dynamics. In section 2, we will introduce the basic frameworks of LQC and the path integral approach of multiple group averaging. Then we will derive the path integral formulation and effective Hamiltonian of the alternative quantization models I and II in sections 3 and 4 respectively. Finally a summary will be given in section 5.

\section{Basic Scheme} We consider the spatially flat FRW universe filled with a massless scalar field. In the kinematical setting, it is convenient to introduce an elementary cell ${\cal V}$ in the spatial manifold and restrict all integrations to this cell. Fixing a fiducial flat metric ${{}^o\!q}_{ab}$ and denoting by $V_o$ the volume of ${\cal V}$ in this geometry, the physical volume reads $V=a^3V_o$ . The gravitational phase space variables ---the connections and the density weighted triads --- can be expressed as $ A_a^i = c\, V_o^{-(1/3)}\,\, {}^o\!\omega_a^i$ and $E^a_i = p\, V_o^{-(2/3)}\,\sqrt{{}^o\!q}\,\, {}^o\!e^a_i$, where $({}^o\!\omega_a^i, {}^o\!e^a_i)$ are a set of orthonormal co-triads and triads compatible with ${{}^o\!q}_{ab}$ and adapted to ${\cal V}$. $p$ is related to the scale factor $a$ via $|p|=V_o^{2/3}a^2$. The fundamental Poisson bracket is given by: $ \{c,\, p\} = {8\pi G\gamma}/{3} $, where $G$ is Newton's constant and $\gamma$ the Barbero-Immirzi parameter. The gravitational part of the Hamiltonian constraint reads $C_{\mathrm{grav}} = -6 c^2\sqrt{|p|}/\gamma^2$. To apply the area gap $\Delta$ of full LQG to LQC \cite{overview}, it is convenient to introduce the variable $\bar{\mu}$ satisfying \begin{align} \label{f to mu} \bar{\mu}^2~|p|=\Delta\equiv(4\sqrt{3}\pi\gamma)\ell_p^2, \end{align} where $\ell_p^2=G\hbar$, and new conjugate variables \cite{Robustness,CS,DMY,YDM2}: \begin{align} v:=\frac{\text{sgn}(p)|p|^{\frac{3}{2}}}{2\pi\gamma{\ell}^2_{\textrm{p}}\sqrt{\Delta}}, ~~~~~b:=\frac{\bar{\mu}c}{2}.\label{v,b} \end{align} The new canonical pair satisfies $\{b,v\}=\frac{1}{\hbar}$. On the other hand, the matter phase space consists of canonical variables $\phi$ and $p_{\phi}$ which satisfy $\{\phi,p_{\phi}\}=1 $. To mimic LQG, the polymer-like representation is employed to quantize the gravity sector. The kinematical Hilbert space for gravity then reads $\mathcal{H}_{\rm{kin}}^{\rm{grav}}=L^2(\mathbb{R}_{\textrm{Bohr}},d\mu_H)$, where $\mathbb{R}_{\textrm{Bohr}}$ is the Bohr compactification of the real line and $d\mu_H$ is the Haar measure on it \cite{mathematical}. It turns out that the eigenstates of the volume operator $\widehat{v}$, which are labeled by real numbers $v$, constitute an orthonormal basis in $\mathcal{H}_{\rm{kin}}^{\rm{grav}}$ as $\langle v_1| v_2\rangle=\delta_{v_1,v_2}$.
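As a consistency check of the new variables, the bracket $\{b,v\}=1/\hbar$ follows from $\{c,p\}=8\pi G\gamma/3$ by the chain rule; a short symbolic verification (a sketch assuming SymPy, restricted to $p>0$ so that the sign function drops out) reads:

```python
import sympy as sp

c, p, G, gamma, hbar, Delta = sp.symbols('c p G gamma hbar Delta', positive=True)
lp2 = G * hbar                                   # l_p^2 = G*hbar
v = p**sp.Rational(3, 2) / (2 * sp.pi * gamma * lp2 * sp.sqrt(Delta))
b = sp.sqrt(Delta / p) * c / 2                   # b = mubar*c/2, mubar = sqrt(Delta/p)

# {b, v} expressed through the fundamental bracket {c, p} = 8*pi*G*gamma/3
bracket = (sp.diff(b, c) * sp.diff(v, p) - sp.diff(b, p) * sp.diff(v, c)) \
          * (8 * sp.pi * G * gamma / 3)
print(sp.simplify(bracket))                      # -> 1/hbar
```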
For the scalar matter sector, one just uses the standard Schr\"odinger representation for its quantization, where the kinematical Hilbert space is $\mathcal{H}_{\rm{kin}}^{\rm{matt}}=L^2(\mathbb{R},d\phi)$. The total kinematical Hilbert space of the system is a tensor product, $\mathcal{H}_{\rm{kin}}=\mathcal{H}_{\rm{kin}}^{\rm{grav}}\otimes\mathcal{H}_{\rm{kin}}^{\rm{matt}}$, of the above two Hilbert spaces. As a totally constrained system, the dynamics of this model is reflected in the Hamiltonian constraint $C_\mathrm{grav}+C_\mathrm{matt}=0$. Quantum mechanically, physical states are those satisfying the quantum constraint equation \begin{align} (\widehat{C}_\mathrm{grav}+\widehat{C}_\mathrm{matt})\Psi(v,\phi)=0,\label{orin constraint} \end{align} which can be rewritten as a Klein-Gordon-like equation \cite{improved dynamics}: \begin{align} \widehat{C}\Psi(v,\phi)\equiv(\frac{\widehat{p}^2_{\phi}}{\hbar^2}-\widehat{\Theta})\Psi(v,\phi)=0. \label{constraint} \end{align} Eq.\eqref{constraint} indicates that we can get physical states by group averaging kinematical states as \cite{on the spinfoam} \begin{align} \Psi_f(v,\phi)=\lim\limits_{\alpha_o\rightarrow\infty}\int_{-\alpha_o}^{\alpha_o} d\alpha ~e^{i\alpha\widehat{C}}~f(v,\phi),\quad\forall f(v,\phi)\in\mathcal{H}_{\rm{kin}},\label{group average} \end{align} and thus the physical inner product of two states reads \begin{align} \langle~f|g~\rangle_{\rm{phy}}=\langle \Psi_f|g~\rangle=\lim\limits_{\alpha_o\rightarrow\infty}\int_{-\alpha_o}^{\alpha_o} d\alpha ~\langle f|e^{i\alpha\widehat{C}}|g\rangle.\label{innerproduct} \end{align} As is known, in the timeless framework the transition amplitude equals the physical inner product \cite{spinfoam1,spinfoam2}, i.e., \begin{align} A_{tls}(v_f, \phi_f;~v_i,\phi_i)=\langle v_f, \phi_f|v_i,\phi_i\rangle_{phy}=\lim\limits_{\alpha_o\rightarrow\infty} \int_{-\alpha_o}^{\alpha_o}d\alpha\langle v_f,\phi_f|e^{i\alpha\widehat{C}}|v_i,\phi_i\rangle.\label{amplitude} \end{align} On the other hand, Eq.\eqref{constraint} can also be written as \begin{align} \partial^2_\phi\Psi(v,\phi)+\widehat{\Theta}\Psi(v,\phi)=0.\label{Klein-Gordon} \end{align} The similarity between Eq.\eqref{Klein-Gordon} and the Klein-Gordon equation suggests that one can regard $\phi$ as an internal time, with respect to which the gravitational field evolves. In this deparameterized framework, we focus on positive frequency solutions, i.e., those satisfying \begin{align} -i\partial_\phi\Psi_+(v,\phi)=\widehat{\sqrt{\Theta}}\Psi_+(v,\phi)\equiv\widehat{\mathcal{H}}\Psi_+(v,\phi). \label{positive frequency} \end{align} The transition amplitude in the deparameterized framework is then given by \begin{align} A_{dep}(v_f,\phi_f;~v_i,\phi_i)=\langle v_f|e^{i\widehat{\mathcal{H}}(\phi_f-\phi_i)}|v_i\rangle, \label{deparameterized amplitude1} \end{align} where $|v_i\rangle$ and $|v_f\rangle$ are eigenstates of the volume operator in $\mathcal{H}_{\rm kin}^{\rm grav}$, and $\phi$ is the internal time. Starting with \eqref{amplitude}, we now consider the path integral form of the transition amplitude in the timeless framework. In order to compute $\langle v_f, \phi_f|e^{i\alpha\widehat{C}}|v_i,\phi_i\rangle$, a straightforward way is to split the exponential into $N$ identical pieces and insert complete bases as in \cite{most recent}. However, since $\alpha$ is the group-averaging parameter which goes from $-\infty$ to $\infty$, it is unclear whether $\alpha$ can be treated as the time variable $t$ in the path integral of non-relativistic quantum mechanics.
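As an aside, the deparameterized amplitude \eqref{deparameterized amplitude1} is straightforward to realize numerically once $\widehat{\Theta}$ is truncated to a finite lattice. The following toy sketch (NumPy assumed; the $8\times8$ matrix is an arbitrary positive-definite stand-in, \emph{not} the operator $\widehat{\Theta}$ of this model) illustrates building $e^{i\widehat{\mathcal{H}}\Delta\phi}$ with $\widehat{\mathcal{H}}=\sqrt{\Theta}$ by diagonalization:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
Theta = A @ A.T + 8.0 * np.eye(8)        # positive-definite stand-in for Theta

w, U = np.linalg.eigh(Theta)             # Theta = U diag(w) U^T with w > 0

def A_dep(v_i, v_f, dphi):
    """<v_f| exp(i*sqrt(Theta)*dphi) |v_i> via the spectral decomposition."""
    evol = (U * np.exp(1j * np.sqrt(w) * dphi)) @ U.conj().T
    return evol[v_f, v_i]

print(abs(A_dep(0, 3, dphi=1.0)))
```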
We thus consider the alternative path integral formulation of multiple group averaging \cite{Huang}. Here a single group averaging (\ref{group average}) is generalized to multiple ones: \begin{align} &\lim\limits_{\alpha_o\rightarrow\infty}\int_{-\alpha_o}^{\alpha_o}d\alpha ~e^{i\alpha\widehat{C}}|v,\phi\rangle\nonumber\\ =&\lim\limits_{\tilde\alpha_{{No}},...,\tilde\alpha_{{1o}}\rightarrow\infty}\frac{1}{2\tilde\alpha_{{No}}} \int_{-\tilde\alpha_\emph{{No}}}^{\tilde\alpha_\emph{{No}}} d\tilde\alpha_N...\frac{1}{2\tilde\alpha_{{2o}}}\int_{-\tilde\alpha_{{2o}}}^{\tilde\alpha_{{2o}}} d\tilde\alpha_2 \int_{-\tilde\alpha_{{1o}}}^{\tilde\alpha_{{1o}}} d\tilde\alpha_1 ~e^{i(\tilde\alpha_1 +...+\tilde\alpha_N )\widehat{C}}|v,\phi\rangle. \end{align} In order to keep track of the powers in the expansion, we re-scale the parameters by $\tilde\alpha_n=\epsilon\alpha_n$, where $\epsilon=\frac{1}{N}$. Then \eqref{amplitude} becomes \begin{align} &A_{tls}(v_f, \phi_f;~v_i,\phi_i)\nonumber\\ =&\lim\limits_{\alpha_\emph{{No}},...,\alpha_\emph{{1o}}\rightarrow\infty}\frac{1}{2\alpha_\emph{{No}}} \int_{-\alpha_\emph{{No}}}^{\alpha_\emph{{No}}} d\alpha_N...\frac{1}{2\alpha_\emph{{2o}}}\int_{-\alpha_\emph{{2o}}}^{\alpha_\emph{{2o}}} d\alpha_2\cdot\epsilon \int_{-\alpha_\emph{{1o}}}^{\alpha_\emph{{1o}}} d\alpha_1\langle v_f, \phi_f|e^{i\sum\limits_{n=1}^N{\epsilon\alpha_n}\widehat{C}}|v_i,\phi_i\rangle, \label{amplitude2} \end{align} where $\alpha_\emph{no}=\tilde\alpha_\emph{no}/\epsilon, n=1,2,\ldots,N$. Next, we are going to insert a complete basis at each knot. Notice that $|v,\phi\rangle$ is a simultaneous eigenstate of both the volume operator and the scalar field operator in $\mathcal{H}_{\rm kin}$, written as $|v\rangle|\phi\rangle$ for short, and \begin{align} \mathbbm{1}_{\rm kin}=\mathbbm{1}_{\rm kin}^{\rm grav}\otimes\mathbbm{1}_{\rm kin}^{\rm matt}=\sum\limits_{v}|v\rangle\langle v|\int d\phi~|\phi\rangle\langle\phi|. \end{align} Thus, we have \begin{align} \langle v_f, \phi_f|e^{i\sum\limits_{n=1}^N{\epsilon\alpha_n}\widehat{C}}|v_i,\phi_i\rangle=\sum\limits_{v_{N-1},...,v_1}\int d\phi_{N-1}...d\phi_1\prod\limits_{n=1}^N\langle \phi_n|\langle v_n|e^{i\epsilon\alpha_n\widehat{C}}|v_{n-1}\rangle|\phi_{n-1}\rangle, \label{insert basis} \end{align} where $v_f=v_N,\phi_f=\phi_N,v_i=v_0,\phi_i=\phi_0$ have been set. Since the constraint operator $\widehat{C}$ has been separated into a gravitational part and a material part, which live in $\mathcal{H}_{\rm kin}^{\rm grav}$ and $\mathcal{H}_{\rm kin}^{\rm matt}$ respectively, we can calculate the exponential on each kinematical space separately. For the material part, one gets \begin{align} \langle{\phi_n}|e^{i\epsilon\alpha_n\frac{\widehat{p}^2_\phi}{\hbar^2}}|\phi_{n-1}\rangle =&\int dp_{\phi_n}\langle{\phi_n}|p_{\phi_n}\rangle\langle p_{\phi_n}|e^{i\epsilon\alpha_n\frac{\widehat{p}^2_\phi}{\hbar^2}}|\phi_{n-1}\rangle\nonumber\\ =&\frac{1}{2\pi\hbar}\int dp_{\phi_n}e^{i\epsilon(\frac{p_{\phi_n}}{\hbar}\frac{\phi_n-\phi_{n-1}}{\epsilon} +\alpha_n\frac{{p}^2_{\phi_n}}{\hbar^2})}. \label{material amplitude} \end{align} As for the gravitational part, in the limit $N\rightarrow\infty(\epsilon\rightarrow0)$, the operator $e^{-i\epsilon\alpha_n \widehat{\Theta}}$ can be expanded to first order, and hence we get \begin{align} \langle v_{n}|e^{-i\epsilon\alpha_n \widehat{\Theta}}|v_{n-1}\rangle=\delta_{v_n,v_{n-1}}-i\epsilon\alpha_n\langle v_{n}|\widehat{\Theta}|v_{n-1}\rangle+\mathcal{O}(\epsilon^2).
\label{piece} \end{align}

\section{Path Integral and Effective Dynamics of Model I} In this section, we employ one of the alternative Hamiltonian constraint operators for LQC proposed in \cite{YDM2}, where the \emph{extrinsic curvature} $K^{i}_{a}$ in the Lorentz term of the gravitational Hamiltonian constraint was quantized directly following the procedure in full LQG. Using the simplification treatment in \cite{Robustness}, we can get the action of the gravitational Hamiltonian operator in this model as: \begin{align} \widehat{\Theta}^{\rm F}|v\rangle&=\frac{3\pi G\gamma^2}{4}v \big[(v+2)|v+4\rangle-2v|v\rangle+(v-2)|v-4\rangle\big]\nonumber\\ &\quad\quad-\frac{3\pi G(1+\gamma^2)}{16}v \big[(v+4)|v+8\rangle-2v|v\rangle+(v-4)|v-8\rangle\big],\label{theta F} \end{align} which leads to \begin{align} \langle v_{n}|\widehat{\Theta}^{\rm F}|v_{n-1}\rangle&=\frac{3\pi G\gamma^2}{4}v_{n-1}\frac{v_{n}+v_{n-1}}{2} (\delta_{v_{n},v_{n-1}+4}-2\delta_{v_{n},v_{n-1}}+\delta_{v_{n},v_{n-1}-4})\nonumber\\ &\quad-\frac{3\pi G(1+\gamma^2)}{16}v_{n-1}\frac{v_{n}+v_{n-1}}{2} (\delta_{v_{n},v_{n-1}+8}-2\delta_{v_{n},v_{n-1}}+\delta_{v_{n},v_{n-1}-8}).\label{matrix element1} \end{align} Applying \eqref{matrix element1} to \eqref{piece} and writing the Kronecker delta as an integral over $b_n$, which plays the role of the variable conjugate to $v_n$, we have \begin{align} &\langle v_{n}|e^{-i\epsilon\alpha_n \widehat{\Theta}^{\rm F}}|v_{n-1}\rangle\nonumber\\ =&\frac{2}{\pi}\int^{\frac{\pi}{2}}_{0}db_n~e^{-ib_n(v_{n}-v_{n-1})}\Big(1-i\alpha_n\epsilon({3\pi G})v_{n-1}\frac{v_{n}+v_{n-1}}{2}\sin^2{(2b_n)}[1-(1+\gamma^2)\sin^2{(2b_n)}]\Big)+\mathcal{O}(\epsilon^2).\nonumber\\ \label{gravitational amplitude} \end{align} Applying \eqref{material amplitude} and \eqref{gravitational amplitude} to \eqref{amplitude2}, and then taking the `continuum limit', we obtain \begin{align} &A^{\rm F}_{\rm tls}(v_f, \phi_f;~v_i,\phi_i)\nonumber\\ =&\lim\limits_{N\rightarrow\infty}~~~~\lim\limits_{\alpha_\emph{{No}},...,\alpha_\emph{{1o}}\rightarrow\infty} \left(\epsilon\prod\limits_{n=2}^N\frac{1}{2\alpha_\emph{{no}}}\right)\int_{-\alpha_\emph{{No}}}^{\alpha_\emph{{No}}} d\alpha_N...\int_{-\alpha_\emph{{1o}}}^{\alpha_\emph{{1o}}} d\alpha_1\nonumber\\ &\times\int_{-\infty}^{\infty}d\phi_{N-1}...d\phi_1\left(\frac{1}{2\pi\hbar}\right)^N\int_{-\infty}^{\infty} dp_{\phi_N}...dp_{\phi_1}\sum\limits_{v_{N-1},...,v_1}~\left(\frac{2}{\pi}\right)^N\int^{\frac{\pi}{2}}_{0}db_N...db_1\nonumber\\ &\times\prod\limits_{n=1}^{N}\exp{i\epsilon}\left[\frac{p_{\phi_n}}{\hbar}\frac{\phi_n-\phi_{n-1}}{\epsilon} -{b_n}\frac{v_n-v_{n-1}}{\epsilon}+\alpha_n \left(\frac{p_{\phi_n}^2}{\hbar^2}-{3\pi G}v_{n-1}\frac{v_{n}+v_{n-1}}{2} \sin^2{(2b_n)}[1-(1+\gamma^2)\sin^2{(2b_n)}]\right)\right]. \label{timeless F1} \end{align} Finally, we can write the above expression in path integral form as \begin{align} &A^{\rm F}_{\rm tls}(v_f, \phi_f;~v_i,\phi_i)\nonumber\\ =&c\int \mathcal{D}\alpha\int\mathcal{D}\phi\int\mathcal{D}p_{\phi}\int\mathcal{D}v\int\mathcal{D}b ~~\exp{\frac{i}{\hbar}\int_0^1d\tau \left[p_\phi\dot\phi-{\hbar}b\dot{v}+{\hbar}{\alpha}\left(\frac{p_\phi^2}{\hbar^2}-3\pi Gv^2\sin^2{(2b)}[1-(1+\gamma^2)\sin^2{(2b)}] \right)\right]}\label{timeless F2} \end{align} where $c$ is a certain constant, and a dot over a letter denotes the derivative with respect to a \emph{fictitious time} variable $\tau$. The effective Hamiltonian constraint can be read out from Eq.
(\ref{timeless F2}) as \begin{align} C^{\rm F}_{\rm eff}=\frac{p_\phi^2}{\hbar^2}-3\pi Gv^2\sin^2{(2b)}[1-(1+\gamma^2)\sin^2{(2b)}]. \label{effC} \end{align} Using this effective Hamiltonian constraint, we can explore the effective dynamics of the universe via the modified Friedmann equation: \begin{equation}\label{Fdm eq1} H_{\rm F}^2=\frac{8\pi G\rho}{3}\left(1-\frac{\gamma^2+4(1+\gamma^2)\rho/\rho_{\rm c}}{1+\gamma^2} +\frac{\gamma^2\rho_{\rm c}}{2(1+\gamma^2)^2\rho}\left(1-\frac{4(1+\gamma^2)\rho}{\rho_{\rm c}}\right) \left(1-\sqrt{1-\frac{4(1+\gamma^2)\rho}{\rho_{\rm c}}}\right)\right), \end{equation} where $\rho=\frac{p^2_{\phi}}{2V^2}$ is the matter density and $\rho_{\rm c}\equiv\frac{\sqrt3}{32\pi G^2\hbar\gamma^3}$ is a constant. This modified Friedmann equation coincides with the one in \cite{YDM2} if we ignore the higher-order quantum corrections therein. It is easy to see that if the matter density increases to $\rho=\frac{\rho_{\rm c}}{4(1+\gamma^2)}$, the Hubble parameter becomes zero and a \emph{bounce} occurs for a contracting universe. On the other hand, in the large-scale classical region, we have $\rho\ll\rho_{\rm c}$ and hence Eq. (\ref{Fdm eq1}) reduces to the standard classical Friedmann equation: $H^2=\frac{8\pi G\rho}{3}$. Besides the \emph{timeless} framework, we can also employ the above group-averaging viewpoint for the deparameterized framework. To this end, we define a new constraint operator $\widehat{C_+}=\frac{\widehat{p_{\phi}}}{\hbar}-\widehat{\mathcal{H}}$. Then Eq.\eqref{positive frequency} can be rewritten as \begin{align} \widehat{C_+}\Psi_+(v,\phi)=0. \end{align} The transition amplitude for this new Hamiltonian constraint reads \begin{align} A_{\rm dep}(v_f,\phi_f;~v_i,\phi_i)=\lim\limits_{\alpha_o\rightarrow\infty}\int_{-\alpha_o}^{\alpha_o} d\alpha \langle v_f,\phi_f|e^{i\alpha \widehat{C_+}}|v_i,\phi_i\rangle=\lim\limits_{\alpha_o\rightarrow\infty}\int_{-\alpha_o}^{\alpha_o} d\alpha \langle v_f,\phi_f|2\widehat{|{p}_\phi|}\widehat{\theta(p_{\phi})}e^{i\alpha \widehat{C}}|v_i,\phi_i\rangle, \label{deparameterized amplitude2} \end{align} where \begin{align} \widehat{|{p}_\phi|}|p_{\phi}\rangle=|p_{\phi}||p_{\phi}\rangle,~~ \widehat{\theta(p_{\phi})}|p_{\phi}\rangle= \begin{cases} 0& \text{$p_{\phi}\leq0$}\\ |p_{\phi}\rangle& \text{$p_{\phi}>0$}. \end{cases} \end{align} Similar to the timeless case, the integration over a single $\alpha$ can be written as multiple integrations, \begin{align} &A_{\rm dep}(v_f, \phi_f;~v_i,\phi_i)\nonumber\\ =&\lim\limits_{\alpha_\emph{{No}},...,\alpha_\emph{{1o}}\rightarrow\infty}\frac{1}{2\alpha_\emph{{No}}} \int_{-\alpha_\emph{{No}}}^{\alpha_\emph{{No}}} d\alpha_N...\frac{1}{2\alpha_\emph{{2o}}}\int_{-\alpha_\emph{{2o}}}^{\alpha_\emph{{2o}}} d\alpha_2\cdot\epsilon\int_{-\alpha_\emph{{1o}}}^{\alpha_\emph{{1o}}} d\alpha_1 ~~\langle v_f, \phi_f|e^{i\sum\limits_{n=1}^N{\epsilon\alpha_n}\widehat{C_+}}|v_i,\phi_i\rangle. \label{deparameterized amplitude3} \end{align} Since we work in the deparameterized case, it is reasonable to insert the completeness relation \cite{two points} \begin{align} \mathbbm{1}=\sum\limits_v|v,\phi,+\rangle\langle v,\phi,+|, \end{align} where \begin{align} |v,\phi,+\rangle=\lim\limits_{\beta_o\rightarrow\infty}\int_{-\beta_o}^{\beta_o} d\beta~e^{i\beta\widehat{C_+}}|v,\phi\rangle.\label{basis} \end{align} Note that the Hilbert space in the deparameterized framework is unitarily equivalent to the physical Hilbert space with the basis \eqref{basis}.
Then the first piece of the exponential in \eqref{deparameterized amplitude3} becomes \begin{align} \langle v_1,\phi_1,+|v_{0},\phi_{0},+\rangle=&\lim\limits_{\alpha_{1o}\rightarrow\infty}\int_{-\alpha_{1o}}^{\alpha_{1o}} d(\epsilon\alpha_1)~\langle v_1,\phi_1,+|e^{i\epsilon\alpha_1\widehat{C_+}}|v_{0},\phi_{0}\rangle\nonumber\\ =&\lim\limits_{\beta'_{1o}\rightarrow\infty}\int_{-\beta'_{1o}}^{\beta'_{1o}} d\beta'_1\langle v_1,\phi_1|e^{i\beta'_1\widehat{C_+}}|v_{0},\phi_{0}\rangle\nonumber\\ =&\lim\limits_{\beta_{1o}\rightarrow\infty}\epsilon\int_{-\beta_{1o}}^{\beta_{1o}} d\beta_1\langle v_1,\phi_1|e^{i\epsilon\beta_1\widehat{C_+}}|v_{0},\phi_{0}\rangle, \end{align} and the last piece of the exponential reads \begin{align} &\lim\limits_{\alpha_{No},\beta'_{No}\rightarrow\infty}\frac{1}{2\alpha_{No}}\int_{-\alpha_{No}}^{\alpha_{No}} d\alpha_N\int_{-\beta'_{No}}^{\beta'_{No}} d\beta'_N~\langle v_N,\phi_N|e^{i\epsilon\alpha_N\widehat{C_+}}e^{i\beta'_N\widehat{C_+}}|v_{N-1},\phi_{N-1}\rangle\nonumber\\ =&\lim\limits_{\beta'_{No}\rightarrow\infty}\int_{-\beta'_{No}}^{\beta'_{No}} d\beta'_N~\langle v_N,\phi_N|e^{i\beta'_N\widehat{C_+}}|v_{N-1},\phi_{N-1}\rangle\nonumber\\ =&\lim\limits_{\beta_{No}\rightarrow\infty}\epsilon\int_{-\beta_{No}}^{\beta_{No}} d\beta_N~\langle v_N,\phi_N|e^{i\epsilon\beta_N\widehat{C_+}}|v_{N-1},\phi_{N-1}\rangle. \end{align} The remaining pieces of the exponential can also be expressed as \begin{align} &\lim\limits_{\alpha_{no}\rightarrow\infty}\frac{1}{2\alpha_{no}}\int_{-\alpha_{no}}^{\alpha_{no}} d\alpha_n~\langle v_n, \phi_n,+|e^{i\epsilon\alpha_n\widehat{C_+}}|v_{n-1},\phi_{n-1},+ \rangle\nonumber\\ =&\lim\limits_{\beta'_{no}\rightarrow\infty}\int_{-\beta'_{no}}^{\beta'_{no}} d\beta'_n~\langle v_n,\phi_n|e^{i\beta'_n\widehat{C_+}}|v_{n-1},\phi_{n-1}\rangle\nonumber\\ =&\lim\limits_{\beta_{no}\rightarrow\infty}\epsilon\int_{-\beta_{no}}^{\beta_{no}} d\beta_n~\langle v_n,\phi_n|e^{i\epsilon\beta_n\widehat{C_+}}|v_{n-1},\phi_{n-1}\rangle. \end{align} So \eqref{deparameterized amplitude3} becomes \begin{align} &A_{\rm dep}(v_f, \phi_f;~v_i,\phi_i)\nonumber\\ =&\lim\limits_{\beta_{{No}},...,\beta_{{1o}}\rightarrow\infty}\epsilon^{N} \int_{-\beta_{{No}}}^{\beta_{{No}}} d\beta_N...\int_{-\beta_{{1o}}}^{\beta_{{1o}}} d\beta_1\sum\limits_{v_{N-1},...,v_1} \prod\limits_{n=1}^N~\langle v_n, \phi_n|e^{i\epsilon\beta_n\widehat{C_+}}|v_{n-1},\phi_{n-1}\rangle.\nonumber\\ =&\lim\limits_{\beta_\emph{{No}},...,\beta_\emph{{1o}}\rightarrow\infty}\epsilon^{N} \int_{-\beta_\emph{{No}}}^{\beta_\emph{{No}}} d\beta_N...\int_{-\beta_\emph{{1o}}}^{\beta_\emph{{1o}}} d\beta_1\sum\limits_{v_{N-1},...,v_1}\prod\limits_{n=1}^N~\langle v_n, \phi_n|2\widehat{|{p}_{\phi_n}|}\widehat{\theta({p}_{\phi_n})}e^{i\epsilon\beta_n\widehat{C}}|v_{n-1},\phi_{n-1} \rangle,\label{damplitude} \end{align} where \eqref{deparameterized amplitude2} is applied in the second step. Now we can split each piece in \eqref{damplitude} into gravitational and material parts.
Calculations similar to those in the timeless framework lead to \begin{align} &A^{\rm F}_{\rm dep}(v_f, \phi_f;~v_i,\phi_i)\nonumber\\ =&\lim\limits_{N\rightarrow\infty}~~~~\lim\limits_{\beta_\emph{{No}},...,\beta_\emph{{1o}}\rightarrow\infty}\epsilon^{N} \int_{-\beta_\emph{{No}}}^{\beta_\emph{{No}}} d\beta_N...\int_{-\beta_\emph{{1o}}}^{\beta_\emph{{1o}}} d\beta_1\left(\frac{1}{2\pi\hbar}\right)^N\int_{-\infty}^{\infty}dp_{\phi_N}...dp_{\phi_1}\sum \limits_{v_{N-1},...,v_1}~~~\left(\frac{2}{\pi}\right)^N\int^{\frac{\pi}{2}}_{0}db_N...db_1\nonumber\\ &\times\prod\limits_{n=1}^{N}2|p_{\phi_n}|\theta(p_{\phi_n})\exp{i\epsilon}\Big[\frac{p_{\phi_n}}{\hbar} \frac{\phi_n-\phi_{n-1}}{\epsilon}-{b_n}\frac{v_n-v_{n-1}}{\epsilon}\nonumber\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+\beta_n\Big(\frac{p_{\phi_n}^2}{\hbar^2}-{3\pi G}v_{n-1}\frac{v_{n}+v_{n-1}}{2} \sin^2{(2b_n)}[1-(1+\gamma^2)\sin^2{(2b_n)}]\Big)\Big]. \end{align} We can integrate out $\beta_n$ and $p_{\phi_n}$ and arrive at \begin{align} &A^{\rm F}_{\rm dep}(v_f, \phi_f;~v_i,\phi_i)\nonumber\\ =&\lim\limits_{N\rightarrow\infty} \sum\limits_{v_{N-1},...,v_1}~~~\left(\frac{2}{\pi}\right)^N \int^{\frac{\pi}{2}}_{0}db_N...db_1\nonumber\\ &\times\prod\limits_{n=1}^{N}\exp{i\epsilon(\phi_f-\phi_i)}\left[\sqrt{{3\pi G}v_{n-1}\frac{v_{n}+v_{n-1}}{2}\sin^2{(2b_n)}[1-(1+\gamma^2)\sin^2{(2b_n)}]}-{b_n} \frac{v_n-v_{n-1}}{\epsilon(\phi_f-\phi_i)}\right]\nonumber\\ =&c'\int\mathcal{D}v\int\mathcal{D}b~\exp\frac{i}{\hbar}\int d\phi(\sqrt{3\pi G\hbar^2v^2\sin^2{(2b)}[1-(1+\gamma^2)\sin^2{(2b)}]} -{\hbar\dot{v}b}),\label{damplitude2} \end{align} where $\dot{v}=\frac{dv}{d\phi}$. The effective Hamiltonian in the deparameterized framework can be read out from \eqref{damplitude2} as: \begin{equation} \mathcal{H}^{\rm F}_{\rm eff}=-\sqrt{3\pi G\hbar^2v^2\sin^2{(2b)}[1-(1+\gamma^2)\sin^2{(2b)}]}.\label{effc H} \end{equation} We can use this effective Hamiltonian to get the effective equations of motion: \begin{eqnarray} &&\dot{v}=\frac{3\pi Gv^2\sin{(4b)}[1-2(1+\gamma^2)\sin^2{(2b)}]} {\sqrt{3\pi Gv^2\sin^2{(2b)}[1-(1+\gamma^2)\sin^2{(2b)}]}},\\ &&\dot{b}=-\textrm{sgn}(v)\sqrt{3\pi {G} v^2\sin^2{(2b)}[1-(1+\gamma^2)\sin^2{(2b)}]}, \end{eqnarray} which also predict a \emph{bounce} of a contracting universe when $\dot{v}=0$. It is easy to see that the bounce point coincides with that from \eqref{Fdm eq1}. In fact, the effective equations derived in the timeless and deparameterized frameworks coincide with each other. Hence, at least at the first-order level, both path-integral methods confirm the effective dynamics of this model in canonical theory.

\section{Path Integral and Effective Dynamics of Model II} In the other Hamiltonian constraint operator proposed in \cite{YDM2}, the Lorentz term was constructed using the fact that the extrinsic curvature $K^i_a$ is related to the connection $A^i_a$ by $A^i_a=\gamma{K^i_a}$ in the spatially flat case.
Re-expressing the connection in terms of holonomies, we can get a simplified version of this operator $\hat{\Theta}^{\rm S}$ similar to $\hat{\Theta}^{\rm F}$ as: \begin{align} \widehat{\Theta}^{\rm S}|v\rangle&=\frac{3\pi G\gamma^2}{4}v \big[(v+2)|v+4\rangle-2v|v\rangle+(v-2)|v-4\rangle\big]\nonumber\\ &\quad\quad-{3\pi G(1+\gamma^2)}v \big[(v+1)|v+2\rangle-2v|v\rangle+(v-1)|v-2\rangle\big].\label{theta S} \end{align} Following the same procedure as in the previous section, we can get the transition amplitude in the timeless framework as: \begin{align} &A^{\rm S}_{\rm tls}(v_f, \phi_f;~v_i,\phi_i)\nonumber\\ =&\lim\limits_{N\rightarrow\infty}~~~~\lim\limits_{\alpha_\emph{{No}},...,\alpha_\emph{{1o}}\rightarrow\infty} \left(\epsilon\prod\limits_{n=2}^N\frac{1}{2\alpha_\emph{{no}}}\right)\int_{-\alpha_\emph{{No}}}^{\alpha_\emph{{No}}} d\alpha_N...\int_{-\alpha_\emph{{1o}}}^{\alpha_\emph{{1o}}} d\alpha_1\nonumber\\ &\times\int_{-\infty}^{\infty}d\phi_{N-1}...d\phi_1\left(\frac{1}{2\pi\hbar}\right)^N\int_{-\infty}^{\infty} dp_{\phi_N}...dp_{\phi_1}\sum\limits_{v_{N-1},...,v_1}~\frac{1}{\pi^N}\int^{{\pi}}_{0}db_N...db_1\nonumber\\ &\times\prod\limits_{n=1}^{N}\exp{i\epsilon}\left[\frac{p_{\phi_n}}{\hbar}\frac{\phi_n-\phi_{n-1}}{\epsilon} -{b_n}\frac{v_n-v_{n-1}}{\epsilon}+\alpha_n \left(\frac{p_{\phi_n}^2}{\hbar^2}-{12\pi G}v_{n-1}\frac{v_{n}+v_{n-1}}{2} \sin^2{b_n}(1+\gamma^2\sin^2{b_n})\right)\right], \label{timeless S} \end{align} which gives an effective Hamiltonian constraint: \begin{equation} C^{\rm S}_{\rm eff}=\frac{p^2_{\phi}}{\hbar^2}-12\pi{G}v^2\sin^2{b}(1+\gamma^2\sin^2{b}). \end{equation} From this effective constraint, we can derive another modified Friedmann equation: \begin{equation} H^2_{\rm S}=\frac{8\pi{G}\rho}{3}\left(1-\frac{3(1+\gamma^2)+\gamma^2\rho/\rho_{\rm c}}{\gamma^2} +\frac{2(1+\gamma^2)\rho_{\rm c}}{\gamma^4\rho}\left(\sqrt{1+\frac{\gamma^2\rho}{\rho_{\rm c}}}\left(1+\frac{\gamma^2\rho}{\rho_{\rm c}}\right)-1\right)\right).\label{Fdm eq2} \end{equation} It is obvious that this effective equation is different from Eq.(\ref{Fdm eq1}) of model I. By Eq.(\ref{Fdm eq2}), the quantum bounce would occur when the matter density increases to $\rho=4(1+\gamma^2)\rho_{\rm c}$ for a contracting universe. It is easy to see that the Friedmann equation (\ref{Fdm eq2}) also reduces to the classical one when $\rho\ll\rho_{\rm c}$. Moreover, Eq.(\ref{Fdm eq2}) coincides with the corresponding effective Friedmann equation in \cite{YDM2} if the higher-order quantum corrections therein are neglected.
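A quick numerical sanity check of Eqs. (\ref{Fdm eq1}) and (\ref{Fdm eq2}) (a sketch only, with $G=\hbar=1$ and an illustrative value of $\gamma$; the function names are ours) confirms the bounce densities and the classical limits stated above:

```python
import numpy as np

G, hbar, gamma = 1.0, 1.0, 0.2375
g2 = gamma**2
rho_c = np.sqrt(3.0) / (32.0 * np.pi * G**2 * hbar * gamma**3)

def H2_F(rho):                            # Eq. (Fdm eq1), model I
    x = 4.0 * (1.0 + g2) * rho / rho_c
    return 8*np.pi*G*rho/3 * (1 - (g2 + x)/(1 + g2)
           + g2*rho_c/(2*(1 + g2)**2*rho) * (1 - x) * (1 - np.sqrt(1 - x)))

def H2_S(rho):                            # Eq. (Fdm eq2), model II
    y = g2 * rho / rho_c
    return 8*np.pi*G*rho/3 * (1 - (3*(1 + g2) + y)/g2
           + 2*(1 + g2)*rho_c/(g2**2*rho) * ((1 + y)**1.5 - 1))

rho_bF = rho_c / (4*(1 + g2)) * (1 - 1e-12)   # tiny offset keeps 1-x >= 0 in floats
print(H2_F(rho_bF))                       # ~0: model I bounce density
print(H2_S(4*(1 + g2) * rho_c))           # ~0: model II bounce density
rho = 1e-6 * rho_c                        # classical region, rho << rho_c
print(H2_F(rho) / (8*np.pi*G*rho/3),      # both ratios -> 1
      H2_S(rho) / (8*np.pi*G*rho/3))
```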
Similarly, we can also get the deparameterized amplitude: \begin{align} &A^{\rm S}_{\rm dep}(v_f, \phi_f;~v_i,\phi_i)\nonumber\\ =&\lim\limits_{N\rightarrow\infty} \sum\limits_{v_{N-1},...,v_1}~~~\frac{1}{\pi^N} \int^{{\pi}}_{0}db_N...db_1\nonumber\\ &\times\prod\limits_{n=1}^{N}\exp{i\epsilon(\phi_f-\phi_i)}\left[\sqrt{{12\pi G}v_{n-1}\frac{v_{n}+v_{n-1}}{2}\sin^2{b_n}(1+\gamma^2\sin^2{b_n})}-{b_n} \frac{v_n-v_{n-1}}{\epsilon(\phi_f-\phi_i)}\right]\nonumber\\ =&c'\int\mathcal{D}v\int\mathcal{D}b~\exp\frac{i}{\hbar}\int d\phi(\sqrt{12\pi G\hbar^2v^2\sin^2{b}(1+\gamma^2\sin^2{b})} -{\hbar\dot{v}b}),\label{damplitude3} \end{align} which gives an effective Hamiltonian: \begin{equation} \mathcal{H}^{\rm S}_{\rm eff}=-\sqrt{12\pi G\hbar^2v^2\sin^2{b}(1+\gamma^2\sin^2{b})}.\label{effc H2} \end{equation} The evolution of $v$ and $b$ with respect to $\phi$ can be obtained from this effective Hamiltonian as: \begin{eqnarray} &&\dot{v}\equiv\frac{dv}{d\phi}=\frac{12\pi{G}v^2\sin{(2b)}(1+2\gamma^2\sin^2{b})}{2\sqrt{12\pi G\hbar^2v^2\sin^2{b}(1+\gamma^2\sin^2{b})}},\nonumber\\ &&\dot{b}\equiv\frac{db}{d\phi}=-\textrm{sgn}(v)\sqrt{12\pi G\hbar^2v^2\sin^2{b}(1+\gamma^2\sin^2{b})}, \end{eqnarray} which are again consistent with the effective Eq.(\ref{Fdm eq2}) and predict the same bounce when $\dot{v}=0$. Hence, the corresponding effective dynamics of this model in canonical theory is also confirmed at first order by the path integral.

\section{Summary} The motivations to study alternative dynamics of LQC are twofold. First, since there are quantization ambiguities in constructing the Hamiltonian constraint operator in LQC, it is crucial to check whether the key features of LQC, such as the quantum bounce and effective scenario, are robust against the ambiguities. Second, since LQC serves as a simple arena to test ideas and constructions introduced in the full LQG, it is important to implement those treatments from the full theory to LQC as much as possible. Unlike the usual treatment in spatially flat and homogeneous models, the Euclidean and Lorentz terms have to be quantized separately in full LQG. Therefore, this kind of quantization procedure which keeps the distinction between the Lorentz and Euclidean terms was proposed as an alternative dynamics for LQC \cite{YDM2}. It was shown in the resulting canonical effective theory that the classical big bang is again replaced by a quantum bounce. Hence it is desirable to study such predictions from a different perspective. Moreover, it is desirable to examine the idea and construction of SFMs by the simplified models of LQC. The main results of the present paper can be summarized as follows. The path integral formulation is constructed for spatially flat FRW models under the framework of LQC with two alternative dynamics. In both models, we can express the transition amplitude in both the timeless and the deparameterized frameworks by the multiple group-averaging procedure. We can derive the effective Hamiltonians from both viewpoints. It turns out that in both models the resulting effective dynamics from the timeless and the deparameterized path integrals are equivalent to each other. This indicates the equivalence between the two path-integral frameworks. Moreover, the modified Friedmann equations for both models are also obtained and coincide with the corresponding equations in \cite{YDM2} if the higher-order quantum corrections therein are neglected. This indicates the equivalence of the canonical approach and the path integral approach in LQC.
Since our path integral approach inherits significant features of SFMs, it provides supporting evidence for the scheme of SFMs. In both models, the \emph{quantum bounce} replaces the \emph{big bang singularity}, owing to the modified Friedmann equations, when the matter density increases to the order of the \emph{Planck density}. Hence the quantum bounce resolution of the big bang singularity in LQC is robust against the quantization ambiguities of the Hamiltonian constraint. Moreover, the alternative modified Friedmann equations (\ref{Fdm eq1}) and (\ref{Fdm eq2}) set up new arenas for studying phenomenological issues of LQC.
\section*{ACKNOWLEDGMENTS}
We would like to thank Haiyun Huang for helpful discussions. This work is supported by NSFC (No.10975017) and the Fundamental Research Funds for the Central Universities.
\section{Introduction}\label{sec:Introduction} Given a set $P$ of $n$ points, and a set \set{B} of $m$ boxes (i.e. axis-aligned closed hyper-rectangles) in $d$-dimensional space, the \textsc{Box Cover} problem consists in finding a set $\mathcal{C} \subseteq \mathcal{B}$ of minimum size such that \set{C} covers $P$. A special case is the \textsc{Orthogonal Polygon Covering} problem: given an orthogonal polygon \set{P} with $n$ edges, find a set of boxes $\mathcal{C}$ of minimum size whose union covers \set{P}. Both problems are \textsf{NP}-hard~\cite{CulbersonR94,Fowler1981}, but their known approximabilities in polynomial time are different: while \textsc{Box Cover} can be approximated up to a factor within $\bigo{\log \mathtt{OPT}}$, where $\mathtt{OPT}$ is the size of an optimal solution~\cite{BronnimannG95,Clarkson2007}, \textsc{Orthogonal Polygon Covering} can be approximated up to a factor within $\bigo{\sqrt{\log n}}$~\cite{KumarR03}. In an attempt to better understand what makes these problems hard, and why there is such a gap in their approximabilities, we introduce the notion of coverage kernels and study its computational complexity. Given a set \set{B} of $n$ $d$-dimensional boxes, a \emph{coverage kernel} of \set{B} is a subset $\mathcal{K} \subseteq \mathcal{B}$ covering the same region as \set{B}, and a minimum coverage kernel of \set{B} is a coverage kernel of minimum size. The computation of a minimum coverage kernel (namely, the \textsc{Minimum Coverage Kernel} problem) is intermediate between the \textsc{Orthogonal Polygon Covering} and the \textsc{Box Cover} problems. This problem has found applications (under distinct names and slight variations) in the compression of access control lists in networks~\cite{DalyLT16}, and in obtaining concise descriptions of structured sets in databases~\cite{LakshmananNWZJ02,PuM05}. Since \textsc{Orthogonal Polygon Covering} is $\mathsf{NP}$-hard, the same holds for the \textsc{Minimum Coverage Kernel} problem. We are interested in the exact computation and approximability of \textsc{Minimum Coverage Kernel} in various restricted settings: \vskip -10pt \begin{enumerate} \item \textbf{Under which restrictions is the exact computation of \textsc{Minimum Coverage Kernel} still $\mathsf{NP}$-hard}? \item \textbf{How precisely can one approximate a \textsc{Minimum Coverage Kernel}} in polynomial time? \end{enumerate} When the interactions between the boxes in a set $\mathcal{B}$ are simple (e.g., when all the boxes are disjoint), a minimum coverage kernel of $\mathcal{B}$ can be computed efficiently. A natural way to capture the complexity of these interactions is through the intersection graph. The intersection graph of $\mathcal{B}$ is the undirected graph with a vertex for each box, and in which two vertices are adjacent if and only if the respective boxes intersect. When the intersection graph is a tree, for instance, each box of $\mathcal{B}$ is either completely covered by another, or present in any coverage kernel of $\mathcal{B}$, and thus a minimum coverage kernel can be computed efficiently. For problems on graphs, a common approach to understanding when an \textsc{NP}-hard problem becomes easy is to study distinct restricted classes of graphs, in the hope of defining some form of ``boundary classes'' of inputs separating ``easy'' from ``hard'' instances~\cite{AlekseevBKL07}. Based on this, we study the hardness of the problem under restricted classes of the intersection graph of the input.
\paragraph{Our results.} We study the \textsc{Minimum Coverage Kernel} problem under three restrictions of the intersection graph, commonly considered for other problems~\cite{AlekseevBKL07}: planarity of the graph, bounded clique-number, and bounded vertex-degree. We show that the problem remains $\mathsf{NP}$-hard even when the intersection graph of the boxes has clique-number at most 4, and the maximum degree is at most 8. For the \textsc{Box Cover} problem we show that it remains $\mathsf{NP}$-hard even under the severely restricted setting where the intersection graph of the boxes is planar, its clique-number is at most 2 (i.e., the graph is triangle-free), the maximum degree is at most 3, and every point is contained in at most two boxes. We complement these hardness results with two approximation algorithms for the \textsc{Minimum Coverage Kernel} problem running in polynomial time. We describe a $\bigo{\log n}$-approximation algorithm which runs in time within $\bigo{\mathtt{OPT} \cdot n^{\frac{d}{2} + 1}\log^2 n}$; and a randomized algorithm computing a $\bigo{\log \mathtt{OPT}}$-approximation in expected time within $\bigo{\mathtt{OPT} \cdot n^{\frac{d+1}{2}} \log^2 n}$, with high probability (at least $1- \frac{1}{n^{\Omega(1)}}$). Our main contribution in this matter is not the existence of polynomial time approximation algorithms (which can be inferred from results on \textsc{Box Cover}), but a new data structure which allows one to significantly improve the running time of computing those approximations (when compared to the approximation algorithms for \textsc{Box Cover}). This is relevant in applications where a minimum coverage kernel needs to be computed repeatedly~\cite{Agarwal2014,DalyLT16,LakshmananNWZJ02,PuM05}. In the next section we review the reductions between the three problems we consider, and introduce some basic concepts. We then present the hardness results in \Cref{sec:covkernels_hardness}, and describe in \Cref{sec:covkernels_approximation} the two approximation algorithms. We conclude in \Cref{sec:cover_discussion} with a discussion on the results and future work.
\section{Preliminaries}~\label{sec:background}
\begin{figure}[t] \begin{center} \includegraphics[page=2,scale=1]{slabs.pdf} \end{center} \caption{ a) An orthogonal polygon $\mathcal{P}$. b) A set of boxes $\mathcal{B} = \{ b_1, b_2, b_3, b_4\}$ covering exactly $\mathcal{P}$, and such that in any cover of $\mathcal{P}$ with boxes, every box is either in $\mathcal{B}$, or fully covered by a box in $\mathcal{B}$. c) A set of points $\mathcal{D(B)} = \{p_1, p_2, p_3, p_4, p_5\}$ such that any subset of $\mathcal{B}$ covering $\mathcal{D(B)}$ covers also $\mathcal{P}$. d) The subset $\{b_1, b_2, b_4\}$ is an optimal solution for the \textsc{Orthogonal Polygon Covering} problem on $\mathcal{P}$, the \textsc{Minimum Coverage Kernel} problem on $\mathcal{B}$, and the \textsc{Box Cover} problem on $\mathcal{D(B)}, \mathcal{B}$. }\label{fig:intro} \end{figure}
To better understand the relation between the \textsc{Orthogonal Polygon Covering}, the \textsc{Box Cover} and the \textsc{Minimum Coverage Kernel} problems, we briefly review the reductions between them. We describe them in the Cartesian plane, as the generalization to higher dimensions is straightforward. Let $\mathcal{P}$ be an orthogonal polygon with $n$ horizontal/vertical edges.
Consider the grid formed by drawing infinitely long lines through each edge of $\mathcal{P}$ (see \Cref{fig:intro}.a for an illustration), and let $G$ be the set of $\bigo{n^2}$ points of this grid lying on the intersection of two lines. Create a set $\mathcal{B}$ of boxes as follows: for each pair of points in $G$, if the box having those two points as opposite vertices is completely inside $\mathcal{P}$, then add it to $\mathcal{B}$ (see \Cref{fig:intro}.b). Let $\mathcal{C}$ be any set of boxes covering $\mathcal{P}$. Note that for any box $c \in \mathcal{C}$, either the vertices of $c$ are in $G$, or $c$ can be extended horizontally and/or vertically (keeping $c$ inside $\mathcal{P}$) until this property is met. Hence, each $c \in \mathcal{C}$ is covered by at least one box in $\mathcal{B}$, and thus there is a subset $\mathcal{B}' \subseteq \mathcal{B}$ covering $\mathcal{P}$ with $|\mathcal{B}'| \le |\mathcal{C}|$. Therefore, any minimum coverage kernel of $\mathcal{B}$ is also an optimal covering of $\mathcal{P}$ (thus transferring the \textsf{NP}-hardness of the \textsc{Orthogonal Polygon Covering} problem~\cite{CulbersonR94} to the \textsc{Minimum Coverage Kernel} problem). Now, let $\mathcal{B}$ be a set of $n$ boxes, and consider the grid formed by drawing infinite lines through the edges of each box in $\mathcal{B}$. This grid has within $\bigo{n^2}$ cells ($\bigo{n^{d}}$ when generalized to $d$ dimensions). Create a point-set $\mathcal{D(B)}$ as follows: for each cell $c$ which is completely inside a box in $\mathcal{B}$ we add to $\mathcal{D(B)}$ the midpoint of $c$ (see \Cref{fig:intro}.c for an illustration). We call such a point-set a \emph{coverage discretization} of $\mathcal{B}$, and denote it as $\mathcal{D(B)}$. Note that a set $\mathcal{C} \subseteq \mathcal{B}$ covers $\mathcal{D(B)}$ if and only if $\mathcal{C}$ covers the same region as $\mathcal{B}$ (namely, $\mathcal{C}$ is a coverage kernel of $\mathcal{B}$). Therefore, the \textsc{Minimum Coverage Kernel} problem is a special case of the \textsc{Box Cover} problem. The relation between the \textsc{Box Cover} and the \textsc{Minimum Coverage Kernel} problems has two main implications. Firstly, hardness results for the \textsc{Minimum Coverage Kernel} problem can be transferred to the \textsc{Box Cover} problem. In fact, we do this in \Cref{sec:covkernels_hardness}, where we show that \textsc{Minimum Coverage Kernel} remains \textsf{NP}-hard under severely restricted settings, and extend this result to the \textsc{Box Cover} problem under even more restricted settings. The other main implication is that polynomial-time approximation algorithms for the \textsc{Box Cover} problem can also be used for \textsc{Minimum Coverage Kernel}. However, in scenarios where the boxes in $\mathcal{B}$ represent high-dimensional data~\cite{DalyLT16,LakshmananNWZJ02,PuM05} and \textsc{Coverage Kernels} need to be computed repeatedly~\cite{Agarwal2014}, using approximation algorithms for \textsc{Box Cover} can be impractical. This is because constructing $\mathcal{D(B)}$ requires time and space within $\Theta({n^{d}})$. We deal with this in \Cref{sec:covkernels_approximation}, where we introduce a data structure to index $\mathcal{D(B)}$ without constructing it explicitly.
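For concreteness, the following sketch (ours, in Python and in two dimensions, assuming closed boxes given as pairs of coordinate intervals) constructs $\mathcal{D(B)}$ exactly as described above; since the grid is induced by the box edges, each cell lies either entirely inside or entirely outside every box, so testing the midpoint of each cell suffices:
\begin{verbatim}
from itertools import product

def coverage_discretization(boxes):
    # boxes: list of ((x1, x2), (y1, y2)) closed boxes with x1 < x2, y1 < y2
    xs = sorted({x for (x1, x2), _ in boxes for x in (x1, x2)})
    ys = sorted({y for _, (y1, y2) in boxes for y in (y1, y2)})
    points = []
    for (xa, xb), (ya, yb) in product(zip(xs, xs[1:]), zip(ys, ys[1:])):
        px, py = (xa + xb) / 2, (ya + yb) / 2  # midpoint of the grid cell
        if any(x1 <= px <= x2 and y1 <= py <= y2
               for (x1, x2), (y1, y2) in boxes):
            points.append((px, py))
    return points
\end{verbatim}
On $n$ boxes this explicit construction produces a point set of size within $\bigo{n^2}$, matching the $\Theta(n^{d})$ space bound mentioned above for $d=2$; it is precisely this cost that the index of \Cref{sec:covkernels_approximation} avoids.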
We then show how to improve two existing approximation algorithms~\cite{BronnimannG95,Lovasz75} for the \textsc{Box Cover} problem by using this index, making it possible to use them for the \textsc{Minimum Coverage Kernel} problem in the scenarios mentioned above.
\section{Hardness under Restricted Settings}~\label{sec:covkernels_hardness}
We prove that \textsc{Minimum Coverage Kernel} remains $\mathsf{NP}$-hard for restricted classes of the intersection graph of the input set of boxes. We consider three main restrictions: when the graph is planar, when the size of its largest clique (namely the clique-number of the graph) is bounded by a constant, and when its maximum degree (namely the vertex-degree of the graph) is bounded by a constant.
\subsection{Hardness of Minimum Coverage Kernel}
Consider the \textsc{$k$-Coverage Kernel} problem: given a set \set{B} of $n$ boxes, decide whether there are $k$ boxes in \set{B} covering the same region as the entire set. Proving that \textsc{$k$-Coverage Kernel} is $\mathsf{NP}$-complete under restricted settings yields the $\mathsf{NP}$-hardness of \textsc{Minimum Coverage Kernel} under the same conditions. To prove that \textsc{$k$-Coverage Kernel} is $\mathsf{NP}$-hard under restricted settings we reduce instances of the \textsc{Planar 3-SAT} problem (a classical $\mathsf{NP}$-complete problem~\cite{MulzerR08}) to restricted instances of \textsc{$k$-Coverage Kernel}. In the \textsc{Planar 3-SAT} problem, given a boolean formula in 3-CNF whose incidence graph%
\footnote{ The \emph{incidence graph} of a 3-SAT formula is a bipartite graph with a vertex for each variable and each clause, and an edge between a variable vertex and a clause vertex for each occurrence of a variable in a clause. } is planar, the goal is to find whether there is an assignment which satisfies the formula. The (planar) incidence graph of any planar 3-SAT formula $\varphi$ can be represented in the plane as illustrated in \Cref{fig:planar3SAT-sample}, where all variables lie on a horizontal line, and all clauses are represented by {\em non-intersecting} three-legged combs~\cite{KnuthR92}. We refer to such a representation of $\varphi$ as the \emph{planar embedding} of $\varphi$.
\begin{figure}[h] \centering \begin{minipage}{.9\textwidth} \centering \includegraphics[scale=1]{3-SAT_formula_sample.pdf} \caption{\small{Planar embedding of the formula $\varphi=(v_1 \lor \overline{v_2} \lor v_3) \land (v_3 \lor \overline{v_4} \lor \overline{v_5})\land(\overline{v_1} \lor \overline{v_3} \lor v_5) \land(v_1 \lor \overline{v_2} \lor v_4)\land(\overline{v_2} \lor \overline{v_3} \lor \overline{v_4}) \land(\overline{v_4} \lor v_5 \lor \overline{v_6})\land(\overline{v_1} \lor v_5 \lor v_6)$. The crosses and dots at the end of the clause legs indicate that the connected variable appears in the clause negated or not, respectively.}} \label{fig:planar3SAT-sample} \end{minipage} \end{figure}
Based on this planar embedding we prove the results in \Cref{theo:coverage_hardness}. Although our arguments are described in two dimensions, they extend trivially to higher dimensions. \begin{theorem}\label{theo:coverage_hardness} Let $\mathcal{B}$ be a set of $n$ boxes in the plane and let $G$ be the intersection graph of $\mathcal{B}$. Solving \textsc{$k$-Coverage Kernel} over $\mathcal{B}$ is \emph{\textsf{NP}}-complete even if $G$ has clique-number at most 4, and vertex-degree at most 8.
\end{theorem}
\begin{proof} Given any set \set{B} of $n$ boxes in $\mathbb{R}^d$, and any subset \set{K} of \set{B}, certifying that \set{K} covers the same region as \set{B} can be done in time within $\bigo{n^{d/2}}$ using \citeauthor{Chan2013}'s algorithm~\cite{Chan2013} for computing the volume of the union of the boxes in $\mathcal{B}$. Therefore, \textsc{$k$-Coverage Kernel} is in $\mathsf{NP}$. To prove that it is \textsf{NP}-complete, given a planar 3-SAT formula $\varphi$ with $n$ variables and $m$ clauses, we construct a set \set{B} of $\bigo{n + m}$ boxes with a coverage kernel of size $31m + 3n$ if and only if there is an assignment of the variables satisfying $\varphi$. We use the planar embedding of $\varphi$ as a starting point, and replace the components corresponding to variables and clauses, respectively, by gadgets composed of several boxes, adding a total number of boxes polynomial in the number of variables and clauses. We show that this construction can be obtained in polynomial time, and thus any polynomial time solution to $k$-\textsc{Coverage Kernel} yields a polynomial time solution for \textsc{Planar 3-SAT}.
\begin{figure}[h] \centering \begin{minipage}{.9\textwidth} \centering \hspace*{-.2in} \includegraphics[scale=.8,page=8]{kernel-np-hard.pdf} \caption{Variable and clause gadgets for $\varphi = (\overline{v_1} \lor v_2 \lor v_3) \land (v_1 \lor \overline{v_2} \lor v_4) \land (v_1 \lor \overline{v_3} \lor v_4)$. The bold lines highlight one side of each rectangle in the instance, while the dashed lines delimit the regions of the variable and clause components in the planar embedding of $\varphi$. Finding a minimum subset of rectangles covering the non-white regions yields an answer for the satisfiability of $\varphi$.} \label{fig:clausesample-full} \end{minipage} \end{figure}
\begin{figure}[h] \centering \begin{minipage}{.9\textwidth} \centering \hspace*{-.3in} \includegraphics[scale=.8,page=1]{kernel-np-hard.pdf} \caption{ $a)$ Gadget for a variable appearing in two clauses. A clause gadget connects with the variable in the regions marked with a dot or a cross, depending on the sign of the variable in the clause. The optimal way to cover all the redundant regions (the gray regions) is to choose either all the red rectangles (as in $b$) or all the blue rectangles (as in $c$). $d)$ The intersection graph of the variable gadget. } \label{fig:variablegadget} \end{minipage} \end{figure}
\paragraph*{Variable gadgets.} Let $v$ be a variable of a planar 3-SAT formula $\varphi$, and let $c_v$ be the number of clauses of $\varphi$ in which $v$ appears. The gadget for $v$ is composed of $4c_v + 2$ rectangles colored either red or blue (see \Cref{fig:variablegadget} for an illustration): $4 c_v$ \emph{horizontal} rectangles (of $3 \times 1$ units of size), separated into two ``rows'' with $2c_v$ rectangles each, and two \emph{vertical} rectangles (of $1 \times 4$ units of size) connecting the rows. The rectangles in each row are enumerated from left to right, starting at one. The $i$-th rectangle of the $j$-th row is defined by the product of intervals $[4(i-1), 4i-1] \times [4(j-1), 4(j-1)+1]$, for all $i \in [1..2c_v]$ and $j=1,2$. The gadget occupies a rectangular region of $(4c_v + 1) \times 4$ units.
Although the gadget is defined with respect to the origin of coordinates, it is later translated to the region corresponding to $v$ in the embedding of \textcite{KnuthR92}, which we assume without loss of generality to be large enough to fit the gadget. Every horizontal rectangle is colored red if its numbering is odd, and blue otherwise. Besides, the leftmost (resp. rightmost) vertical rectangle is colored blue (resp. red). As we will see later, these colors are useful when connecting a clause gadget with its variables. Observe that: ($i$.) every red (resp. blue) rectangle intersects exactly two others, both blue (resp. red), sharing with each a square region of $1 \times 1$ units (which we call \emph{redundant regions}); ($ii$.) the optimal way to cover the redundant regions is by choosing either all the $2c_v + 1$ red rectangles or all the $2c_v + 1$ blue rectangles (see \Cref{fig:variablegadget} for an example).
\paragraph*{Clause gadgets.} Let $C$ be a clause with variables $u$, $v$, and $w$, appearing in this order from left to right in the embedding of $\varphi$. Assume, without loss of generality, that the component for $C$ in the embedding is above the variables. We create a gadget for $C$ composed of 9 black rectangles, located and enumerated as in \Cref{fig:clausegadget}.$a$. The vertical rectangles numbered 1, 2 and 3 correspond to the legs of $C$ in the embedding, and connect with the gadgets of $u, v$, and $w$, respectively. The remaining six horizontal rectangles connect the three legs with one another. The vertical rectangles have one unit of width and their height is given by the height of the respective legs in the embedding of $C$. Similarly, the horizontal rectangles have one unit of height and their width is given by the separation between the legs in the embedding of $C$ (see \Cref{fig:clausesample-full} for an example of how these rectangles are extended or stretched as needed). Note that: ($i$.) every rectangle in the gadget intersects exactly two others (again, we call \emph{redundant regions} the regions where they meet); ($ii$.) any minimum cover of the redundant regions (edges in \Cref{fig:clausegadget}.$b$) has five rectangles, one of which must be a leg; and ($iii$.) any cover of the redundant regions which includes the three legs must have at least six rectangles (e.g., see \Cref{fig:clausegadget}.$c$).
\begin{figure}[t] \centering \begin{minipage}{.9\textwidth} \centering \hspace*{-.1in} \includegraphics[scale=.7,page=7]{kernel-np-hard.pdf} \caption{ $a)$ A clause gadget with nine black rectangles: three vertical legs (1,2, and 3) and six horizontal rectangles (4-9). We call the regions where they meet \emph{redundant regions}. The striped regions at the bottom of each leg connect with the variables. $b)$ The intersection graph of the gadget. Any minimum cover of the edges (redundant regions) requires 5 vertices (rectangles). $c)$ Any cover of the redundant regions which includes the three legs has 6 or more rectangles. } \label{fig:clausegadget} \end{minipage} \end{figure}
\paragraph{Connecting the gadgets.} Let $v$ be a variable of a formula $\varphi$ and $c_v$ be the number of clauses in which $v$ occurs. The legs of the $c_v$ clause gadgets are connected with the gadget for $v$, from left to right, in the same order they appear in the embedding of $\varphi$. Let $C$ be the gadget for a clause containing $v$ whose component in the embedding of $\varphi$ is above (resp. below) that for $v$.
$C$ connects with the gadget for $v$ in one of the rectangles in the upper (resp. lower) row, sharing a region of $1 \times 1$ units with one of the red (resp. blue) rectangles if the variable appears unnegated (resp. negated) in the clause (see \Cref{fig:clausevarsample}.$a$). We call the region where the variable and clause gadgets meet a \emph{connection region}; its color is given by the color of the respective rectangle in the variable gadget. Note that a variable gadget has enough connection regions for all the $c_v$ clauses in which it appears, because each row of the gadget has $c_v$ rectangles of each color.
\begin{figure}[h] \centering \begin{minipage}{.9\textwidth} \centering \hspace*{-.25in} \includegraphics[scale=.75,page=5]{kernel-np-hard.pdf} \caption{An instance for $(\overline{v_1} \lor v_2 \lor v_3)$. $a)$ The clause gadget connects with a variable gadget in one of its red or blue connection regions depending on the sign of the variable in the clause. $b)$ To complete the instance, green boxes are added to cover all the regions with depth 1. The green rectangles are forced to be in any kernel by making them large enough so that each covers an exclusive region.} \label{fig:clausevarsample} \end{minipage} \end{figure}
\paragraph{Completing the instance.} Each rectangle in a variable or clause gadget, as described, has a region that no other rectangle covers (i.e., of depth 1). Thus, the coverage kernel of the instance described so far is trivial: all the rectangles. To avoid this, we cover all the regions of depth 1 with \emph{green} rectangles (as illustrated in \Cref{fig:clausevarsample}.$b$) which are forced to be in any coverage kernel%
\footnote{ For simplicity, these green rectangles were omitted in \Cref{fig:clausesample-full} and \Cref{fig:clauseevaluation}. }. For every clause gadget we add $11$ such green rectangles, and for each variable gadget for a variable $v$ occurring in $c_v$ clauses we add $3c_v + 2$ green rectangles. Let $\varphi$ be a formula with $n$ variables and $m$ clauses. The instance of $k$-\textsc{Coverage Kernel} that we create for $\varphi$ has a total of $41m + 4n$ rectangles: ($i$.) each clause gadget has 9 rectangles for the comb, and 11 green rectangles, for a total of $20m$ rectangles over all the clauses; ($ii$.) a gadget for a variable $v$ has $4c_v + 2$ red and blue rectangles, and we add a green rectangle for each of those that does not connect to a clause gadget ($3c_v + 2$ per variable), thus adding a total of $7c_v + 4$ rectangles per gadget; and ($iii$.) over all variables, we add a total of $\sum_{i=1}^{n}{\left(7c_{v_i} + 4\right)} = 7(3m) + 4n = 21m+4n$ rectangles%
\footnote{ Note that $\sum_{i=1}^{n}{c_{v_i}} = 3m$ since exactly 3 variables occur in each clause. }.
\paragraph{Intuition: from minimum kernels to boolean values.} Consider a gadget for a variable $v$. Any minimum coverage kernel of the gadget is composed of all its green rectangles together with either all its blue or all its red rectangles. Thus the minimum number of rectangles needed to cover all the variable gadgets is fixed and known. If all the red rectangles are present in the kernel, we consider that $v=1$; otherwise, if all the blue rectangles are present, we consider that $v=0$ (see \Cref{fig:clauseevaluation} for an example).
\begin{figure}[h] \centering \begin{minipage}{.9\textwidth} \centering \hspace*{-.25in} \includegraphics[scale=.75,page=12]{kernel-np-hard.pdf} \caption{$a)$ An instance for $\varphi = (\overline{v_1} \lor v_2 \lor v_3) \land (v_1 \lor \overline{v_2} \lor \overline{v_3})$ (the area covered by green rectangles is highlighted with green falling lines). $b)$ A cover of the gadgets corresponding to the assignment $v_1=0, v_2=1, v_3=1$ that does not satisfy $\varphi$.} \label{fig:clauseevaluation} \end{minipage} \end{figure}
In the same way that choosing a value for $v$ may affect the output of a clause $C$ in which $v$ occurs, choosing a color to cover the gadget for $v$ may affect the number of rectangles required to cover the gadget for $C$. For instance, consider that the gadget for $v$ is covered with blue rectangles (i.e., $v=0$), and that $v$ occurs unnegated in $C$ (see the gadgets for $v_1$ and the second clause in \Cref{fig:clauseevaluation}). The respective leg of the gadget for $C$ meets the gadget for $v$ in one of its red rectangles. Since the red rectangles were not selected to cover the variable gadget, that leg is forced to cover the connection region shared with the variable gadget, and thus is forced to be in any kernel of the gadget for $C$. This corresponds to the fact that the literal of $v$ in $C$ evaluates to 0. If the same happens for the three variables in $C$ (i.e., $C$ is not satisfied by the assignment), then to cover its gadget at least six of the black rectangles will be required (see the gadget for the second clause in \Cref{fig:clauseevaluation}). However, if at least one of its legs can be disposed of (i.e., at least one of the literals evaluates to 1), then the clause gadget can be covered with five of its black rectangles (see the gadget for the first clause in \Cref{fig:clauseevaluation}). The minimum number of rectangles needed to cover all the variable gadgets is fixed and known: all the green rectangles, and one half of the red/blue rectangles of each variable. Therefore, it suffices to show that there is an assignment satisfying a 3-SAT formula if and only if every clause gadget can be covered by five of its black rectangles (plus all its green rectangles).
\paragraph{Reduction.} We prove the theorem in two steps. First, we show that such an instance has a coverage kernel of size $31m + 3n$ if and only if $\varphi$ is satisfiable. Therefore, answering \textsc{$k$-Coverage Kernel} over this instance with $k=31m+3n$ yields an answer for \textsc{Planar 3-SAT} on $\varphi$. Finally, we show that the instance described meets all the restrictions of \Cref{theo:coverage_hardness}.
\paragraph{($\Rightarrow$)} Let $\varphi$ be a 3-CNF formula with $n$ variables $v_1, \ldots, v_n$ and $m$ clauses $C_1, \ldots, C_m$, let $\mathcal{B}$ be a set of boxes created as described above for $\varphi$, and let $\{v_1 = \alpha_1, \ldots, v_n = \alpha_n\}$ be an assignment which satisfies $\varphi$. We create a coverage kernel \set{K} of \set{B} as follows:
\begin{itemize} \item For each variable gadget for $v_i \in \{v_1, \ldots, v_n\}$ such that $\alpha_i = 0$ (resp. $\alpha_i = 1$), add to \set{K} all but its red (resp. blue) rectangles, thus covering the entire gadget minus its red (resp. blue) connection regions. These uncovered regions, which must connect with clauses in which the literal of $v_i$ evaluates to 0, will be covered later with the legs of the clause gadgets.
Over all variables, we add to \set{K} a total of $\sum_{i=1}^{n}{\left( (7-2)c_{v_i} + (4-1)\right)} = 5(3m) + 3n = 15m+3n$ rectangles.
\item For each clause $C_i \in \{C_1, \ldots, C_m\}$, add to \set{K} all its green rectangles and the legs that connect with connection regions of the variable gadgets left uncovered in the previous step. Note that at least one of the legs of $C_i$ is not added to \set{K} since at least one of the literals in the clause evaluates to 1, and the connection region corresponding to that literal is already covered by the variable gadget. Thus, the redundant regions of the gadget for $C_i$ can be covered with five of its black rectangles (including the legs already added). So, finally add to \set{K} black rectangles from the gadget for $C_i$ as needed (up to a total of five), until all the redundant regions are covered (and with the green rectangles, the entire gadget). Over all clauses, we add a total of $(11+5)m = 16m$ rectangles. \end{itemize}
By construction, \set{K} is a coverage kernel of \set{B}: it completely covers every variable and clause gadget. Moreover, the size of \set{K} is $(15m + 3n) + 16m = 31m + 3n$.
\paragraph{($\Leftarrow$)} Let $\varphi$ be a 3-CNF formula with $n$ variables $v_1, \ldots, v_n$ and $m$ clauses $C_1, \ldots, C_m$, let $\mathcal{B}$ be a set of boxes created as described above for $\varphi$, and let \set{K} be a coverage kernel of \set{B} whose size is $(15m + 3n) + 16m = 31m + 3n$. Any coverage kernel of \set{B} must include all its green rectangles. Furthermore, to cover any clause gadget at least five of its black rectangles are required, and to cover any variable gadget, at least half of its red/blue rectangles are required. Thus, any coverage kernel of \set{B} must have at least $(11+5)m = 16m$ of the clause rectangles, and at least $\sum_{i=1}^{n}{\left( (3+4/2)c_{v_i} + 3\right)} = 5(3m) + 3n = 15m+3n$ of the variable rectangles. Hence, \set{K} must be a coverage kernel of minimum size. Since the redundant regions of any two gadgets are independent, \set{K} must cover each gadget optimally (in a local sense). Given that the intersection graph of the red/blue rectangles of a variable gadget is a cycle (see \Cref{fig:variablegadget}.d for an illustration), the only way to cover a variable gadget optimally is by choosing either all its blue or all its red rectangles (together with the green rectangles). Hence, the way in which every variable gadget is covered is consistent with an assignment for its variable as described above. Moreover, the assignment induced by \set{K} must satisfy $\varphi$: in each clause gadget, at least one of the legs was discarded (to cover the gadget with 5 rectangles), and the literal corresponding to that leg evaluates to 1.
\paragraph{Meeting the restrictions.} Now we prove that the instance of \textsc{Minimum Coverage Kernel} generated for the reduction meets the restrictions of the theorem. First, we show the bounded clique-number and vertex-degree properties for the intersection graph of a clause gadget and its three respective variable gadgets. In \Cref{fig:restrictions} we illustrate the intersection graph for the clause $\varphi = (\overline{v_1} \lor v_2 \lor v_3)$. The sign of the variables in the clause does not change the maximum vertex degree or the clique-number of the graph, so the figure is general enough for our purpose.
Since we consider the rectangles composing the instance to be closed, if two rectangles contain at least one point in common (in their interior or boundary), their respective vertices in the intersection graph are adjacent. Note that in \Cref{fig:restrictions} the vertices with highest degree are the ones corresponding to the legs of the clause gadget (rectangles 1, 2, and 3). There are 4-cliques in the graph: for instance, the lower right corner of the green rectangle denoted $h$ is also covered by rectangles $1$, $4$ and $d$, and hence their respective vertices form a clique. However, there is no point that is covered by five rectangles at the same time; since axis-aligned rectangles have the Helly property (every set of pairwise-intersecting rectangles shares a common point), this implies that there are no 5-cliques in the graph.
\begin{figure}[h] \centering \begin{minipage}{.9\textwidth} \centering \hspace*{-.25in} \includegraphics[scale=.8,page=13]{kernel-np-hard.pdf} \caption{The intersection graph of the instance of the \textsc{Minimum Coverage Kernel} problem corresponding to $\varphi = (\overline{v_1} \lor v_2 \lor v_3)$ (see the complete instance in \Cref{fig:clausevarsample}.b). $a)$ Numbering of the rectangles of the clause gadget. $b)$ The intersection graph of the instance: the vertices corresponding to the legs have degree 8, which is the highest; and the fat red edges highlight two 4-cliques.} \label{fig:restrictions} \end{minipage} \end{figure}
Finally, note that, since the clause gadgets are located according to the planar embedding of the formula, they are pairwise independent.%
\footnote{Two clause gadgets are independent if the rectangles composing them are pairwise disjoint, as are the rectangles where they connect with their respective variable gadgets.} Thus, the bounds on the clique-number and vertex-degree of the intersection graph of any clause gadget extend also to the intersection graph of an entire general instance. \qed \end{proof}
\subsection{Extension to Box Cover}
Since the \textsc{Minimum Coverage Kernel} problem is a special case of the \textsc{Box Cover} problem, the result of \Cref{theo:coverage_hardness} also applies to the \textsc{Box Cover} problem. However, in \Cref{theo:coverage_hardness_boxcover} we show that this problem remains hard under even more restricted settings. \begin{theorem}\label{theo:coverage_hardness_boxcover} Let $P$, $\mathcal{B}$ be a set of $m$ points and $n$ boxes in the plane, respectively, and let $G$ be the intersection graph of $\mathcal{B}$. Solving \textsc{Box Cover} over $\mathcal{B}$ and $P$ is \emph{\textsf{NP}}-complete even if every point in $P$ is covered by at most two boxes of $\mathcal{B}$, and $G$ is planar, has clique-number at most 2, and vertex-degree at most 3. \end{theorem} \begin{proof} We use the same reduction from \textsc{Planar 3-SAT}, but with three main variations in the gadgets: we drop the green rectangles of both the variable and clause gadgets, add points within the redundant and connection regions of both variable and clause gadgets, and separate the rectangles numbered 5 and 6 of each clause gadget so they do not intersect (see \Cref{fig:boxcoverinstance} for an example).
\begin{figure}[h] \centering \begin{minipage}{.9\textwidth} \centering \hspace*{-.25in} \includegraphics[scale=.8,page=14]{kernel-np-hard.pdf} \caption{ $a)$ The instance of the \textsc{Box Cover} problem corresponding to $\varphi = (\overline{v_1} \lor v_2 \lor v_3)$.
$b)$ The intersection graph of the instance: the vertices corresponding to the legs have degree 3, which is the highest; there are no 3-cliques in the graph; and the graph is planar and can be drawn within the planar embedding of $\varphi$ (highlighted with dashed lines). } \label{fig:boxcoverinstance} \end{minipage} \end{figure}
Since the interior of every connection or redundant region is covered by at most two of the rectangles in the gadgets, every point of the instance we create is contained in at most two boxes. In \Cref{fig:boxcoverinstance}.b we illustrate the intersection graph for the clause $\varphi = (\overline{v_1} \lor v_2 \lor v_3)$. Since the sign of the variables in the clause does not change the maximum vertex degree or the clique-number of the graph, or its planarity, the properties we mention next are also true for any clause. Note that three is the maximum vertex-degree of the intersection graph, and that there are no 3-cliques. Also note that the intersection graph can be drawn within the planar embedding of $\varphi$ so that no two edges cross, and hence the graph is planar. Again, due to the pairwise independence of the clause gadgets, these properties extend to the entire intersection graph of a general instance. \qed \end{proof}
In the next section, we complement these hardness results with two approximation algorithms for the \textsc{Minimum Coverage Kernel} problem.
\section{Efficient approximation of Minimum Coverage Kernels}\label{sec:covkernels_approximation}
Let $\mathcal{B}$ be a set of $n$ boxes in $\mathbb{R}^{d}$, and let $\mathcal{D(B)}$ be a coverage discretization of $\mathcal{B}$ (as defined in \Cref{sec:background}). A \emph{weight index} for $\mathcal{D(B)}$ is a data structure which can perform the following operations: \begin{itemize} \item \emph{Initialization:} Assign an initial unitary weight to every point in $\mathcal{D(B)}$; \item \emph{Query:} Given a box $b \in \mathcal{B}$, find the total weight of the points in $b$; \item \emph{Update:} Given a box $b \in \mathcal{B}$, multiply the weights of all the points within $b$ by a given value $\alpha \ge 0$. \end{itemize} We assume that the weights are small enough so that arithmetic operations over the weights can be performed in constant time. There is a trivial implementation of a weight index with initialization and update time within $\bigo{n^{d}}$, and with constant query time. In this section we describe an efficient implementation of a weight index, and combine this data structure with two existing approximation algorithms for the \textsc{Box Cover} problem~\cite{Lovasz75,BronnimannG95} and obtain improved approximation algorithms (in the running time sense) for the \textsc{Minimum Coverage Kernel} problem.
\subsection{An Efficient Weight Index for a Set of Boxes}
We describe a weight index for $\mathcal{D(B)}$ which can be initialized in time within $\bigo{n^\frac{d+1}{2}}$, and with query and update time within $\bigo{n^\frac{d-1}{2}\log n}$. Let us first consider the case of a set $I$ of $n$ intervals.
\paragraph{\textbf{A weight index for a set of intervals}.} A trivial weight index which explicitly saves the weights of each point in $\mathcal{D}(I)$ can be initialized in time within $\bigo{n \log n}$, has linear update time, and constant query time. We show that by sacrificing query time (by a factor within $\bigo{\log n}$) one can improve update time to within $\bigo{\log n}$. The main idea is to maintain the weights of each point of $\mathcal{D}(I)$ indirectly using a tree.
Consider a balanced binary tree whose leaves are in one-to-one correspondence with the values in $\mathcal{D}(I)$ (from left to right in non-decreasing order). Let $p_v$ denote the point corresponding to a leaf node $v$ of the tree. In order to represent the weights of the points in $\mathcal{D}(I)$, we store a value $\mu(v)$ at each node $v$ of the tree subject to the following invariant: for each leaf $v$, the weight of the point $p_v$ equals the product of the values $\mu(u)$ of all the ancestors $u$ of $v$ (including $v$ itself). The $\mu$ values allow increasing the weights of many points with only a few changes. For instance, if we want to double the weights of all the points we simply multiply by 2 the value $\mu(r)$ of the root $r$ of the tree. Besides the $\mu$ values, to allow efficient query time we also store at each node $v$ three values $min(v)$, $max(v)$, and $\omega(v)$: the values $min(v)$ and $max(v)$ are the minimum and maximum $p_u$, respectively, such that $u$ is a leaf of the tree rooted at $v$; the value $\omega(v)$ is the sum of the weights of all $p_u$ such that $u$ is a leaf of the tree rooted at $v$. Initially, all the $\mu$ values are set to one. Besides, for every leaf $l$ of the tree $\omega(l)$ is set to one, while $min(l)$ and $max(l)$ are set to $p_l$. The $min$, $max$ and $\omega$ values of every internal node $v$ with children $l,r$, are initialized in a bottom-up fashion as follows: $min(v) = \min\{min(l), min(r)\}$; $max(v) = \max\{max(l), max(r)\}$; $\omega(v)=\mu(v)\cdot\left(\omega(l) + \omega(r)\right)$. It is simple to verify that after this initialization, the tree meets all the invariants mentioned above. We show in \Cref{theo:windex_intervals} that this tree can be used as a weight index for $\mathcal{D}(I)$. \begin{theorem}~\label{theo:windex_intervals} Let $I$ be a set of $n$ intervals in $\mathbb{R}$. There exists a weight index for $\mathcal{D}(I)$ which can be initialized in time within $\bigo{n \log n}$, and with query and update time within $\bigo{\log n}$. \end{theorem} \begin{proof} Since intervals have linear union complexity, $\mathcal{D}(I)$ has within $\bigo{n}$ points, and it can be computed in linear time after sorting, for a total time within $\bigo{n \log n}$. We store the points in the tree described above. Its initialization can be done in linear time since the tree has within $\bigo{n}$ nodes, and when implemented in a bottom-up fashion, the initialization of the $\mu, \omega, min,$ and $max$ values, respectively, costs constant time per node. To analyze the query time, let $\textsf{totalWeight}(a,b,t)$ denote the procedure which finds the total weight of the points corresponding to leaves of the tree rooted at $t$ that are in the interval $[a,b]$. This procedure can be implemented as follows: \begin{enumerate} \item if $[a,b]$ is disjoint to $[min(t),max(t)]$ return 0; \item if $[a,b]$ completely contains $[min(t),max(t)]$ return $\omega(t)$; \item if both conditions fail (leaves must meet either 1.~or 2.), let $l,r$ be the left and right child of $t$, respectively; \item if $a > max(l)$ return $\mu(t) \cdot \textsf{totalWeight}(a,b,r)$; \item if $b < min(r)$ return $\mu(t) \cdot \textsf{totalWeight}(a,b,l)$; \item otherwise return $\mu(t) \cdot (\textsf{totalWeight}(a, \infty, l) + \textsf{totalWeight}(-\infty,b,r))$.
\end{enumerate} \noindent Due to the invariants to which the $min$ and $max$ values are subjected, every leaf $l$ of $t$ corresponding to a point in $[a,b]$ has an ancestor (including $l$ itself) which is visited during the call to \textsf{totalWeight} and which meets the condition in step 2. Because of this, and because of the invariants to which the $\omega$ and $\mu$ values are subjected, the procedure $\textsf{totalWeight}$ is correct. Note that the number of nodes visited is at most 4 times the height $h$ of the tree: when both children need to be visited, one of the endpoints of the interval to query is replaced by $\pm \infty$, which ensures that in subsequent calls at least one of the children is completely covered by the query interval. Since $h \in \bigo{\log n}$, and the operations at each node consume constant time, the running time of \textsf{totalWeight} is within $\bigo{\log n}$. Similarly, to analyze the update time, let $\textsf{updateWeights}(a,b,t, \alpha)$ denote the procedure which multiplies by a value $\alpha$ the weights of the points in the interval $[a,b]$ stored in leaves descending from $t$. This can be implemented as follows: \begin{enumerate} \item if $[a,b]$ is disjoint to $[min(t),max(t)]$, finish; \item if $[a,b]$ completely contains $[min(t),max(t)]$ set $\mu(t)=\alpha \cdot \mu(t)$, set $\omega(t)=\alpha \cdot \omega(t)$, and finish; \item if both conditions fail, let $l,r$ be the left and right child of $t$, respectively; \item if $a > max(l)$, call $\textsf{updateWeights}(a,b,r, \alpha)$; \item else if $b < min(r)$, call $\textsf{updateWeights}(a,b,l, \alpha)$; \item otherwise, call $\textsf{updateWeights}(a, \infty, l, \alpha)$, and $\textsf{updateWeights}(-\infty,b,r, \alpha)$; \item finally, after the recursive calls set $\omega(t)=\mu(t) \cdot \left(\omega(l) + \omega(r)\right)$, and finish. \end{enumerate} \noindent Note that, for every point $p_v$ in $[a,b]$ corresponding to a leaf $v$ descending from $t$, the $\mu$ value of exactly one of the ancestors of $v$ changes (by a factor of $\alpha$): at least one changes because of the invariants to which the $min$ and $max$ values are subjected (as analyzed for \textsf{totalWeight}); and no more than one can change because once $\mu$ is updated for the first time at some ancestor $u$ of $v$, the procedure finishes, leaving the descendants of $u$ untouched. The analysis of the running time is analogous to that of \textsf{totalWeight}, and thus within $\bigo{\log n}$. \qed \end{proof}
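The tree just described is, in essence, a segment tree with multiplicative lazy propagation. The following transcription of \textsf{totalWeight} and \textsf{updateWeights} into Python (ours; the $min$/$max$ values are replaced by index ranges over the sorted point set, and the input is assumed non-empty) illustrates the whole structure:
\begin{verbatim}
class IntervalWeightIndex:
    # Weight index for a set of points on the line: mu[t] is the lazy
    # multiplicative factor of node t, omega[t] the total weight of
    # the points stored in the subtree of t (mu[t] included).
    def __init__(self, points):
        self.pts = sorted(points)
        n = len(self.pts)
        self.mu = [1.0] * (4 * n)
        self.omega = [0.0] * (4 * n)
        self._build(1, 0, n - 1)

    def _build(self, t, lo, hi):
        if lo == hi:
            self.omega[t] = 1.0          # initial unit weights
            return
        mid = (lo + hi) // 2
        self._build(2 * t, lo, mid)
        self._build(2 * t + 1, mid + 1, hi)
        self.omega[t] = self.omega[2 * t] + self.omega[2 * t + 1]

    def total_weight(self, a, b, t=1, lo=0, hi=None):
        # total weight of the points in [a, b]  (totalWeight)
        if hi is None:
            hi = len(self.pts) - 1
        if b < self.pts[lo] or a > self.pts[hi]:
            return 0.0                   # disjoint: step 1
        if a <= self.pts[lo] and self.pts[hi] <= b:
            return self.omega[t]         # fully contained: step 2
        mid = (lo + hi) // 2
        return self.mu[t] * (self.total_weight(a, b, 2 * t, lo, mid)
                             + self.total_weight(a, b, 2 * t + 1, mid + 1, hi))

    def update_weights(self, a, b, alpha, t=1, lo=0, hi=None):
        # multiply by alpha the weight of every point in [a, b]
        if hi is None:
            hi = len(self.pts) - 1
        if b < self.pts[lo] or a > self.pts[hi]:
            return                       # disjoint: step 1
        if a <= self.pts[lo] and self.pts[hi] <= b:
            self.mu[t] *= alpha          # fully contained: step 2
            self.omega[t] *= alpha
            return
        mid = (lo + hi) // 2
        self.update_weights(a, b, alpha, 2 * t, lo, mid)
        self.update_weights(a, b, alpha, 2 * t + 1, mid + 1, hi)
        self.omega[t] = self.mu[t] * (self.omega[2 * t] + self.omega[2 * t + 1])
\end{verbatim}
For instance, after \texttt{idx = IntervalWeightIndex([1, 2, 5, 7])}, calling \texttt{idx.update\_weights(1, 5, 0.0)} and then \texttt{idx.total\_weight(0, 10)} returns \texttt{1.0}: only the point $7$ keeps a non-zero weight.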
The weight index for a set of intervals described in \Cref{theo:windex_intervals} plays an important role in obtaining an index for a higher dimensional set of boxes. As a step towards that, we first describe how to use one-dimensional indexes to obtain indexes for another special case of sets of boxes, this time in high dimension.
\paragraph{\textbf{A weight index for a set of slabs}.} A box $b$ is said to be a slab within another box $\Gamma$ if $b$ covers completely $\Gamma$ in all but one dimension (see \Cref{fig:slabs}.a for an illustration).
\begin{figure}[t] \centering \begin{minipage}{.9\textwidth} \centering \includegraphics[scale=.8]{slabs.pdf} \caption{(a) An illustration in the plane of three boxes equivalent to slabs when restricted to the box $\Gamma$. The dots correspond to a set of points which discretize the (grayed) region within $\Gamma$ covered by the slabs. }\label{fig:slabs} \end{minipage} \end{figure}
Let $\mathcal{B}$ be a set of $n$ $d$-dimensional boxes that are slabs within another $d$-dimensional box $\Gamma$. Let $\mathcal{B} |_\Gamma$ denote the set $\{b \cap \Gamma \mid b \in \mathcal{B}\}$ of the boxes of $\mathcal{B}$ restricted to $\Gamma$. We describe a weight index for $\mathcal{D}(\mathcal{B} |_\Gamma)$ with initialization time within $\bigo{n \log n}$, and with update and query time within $\bigo{\log n}$. For all $i \in [1..d]$, let $\mathcal{B}_i$ be the subset of slabs that are orthogonal to the $i$-th dimension, and let $I_i$ be the set of intervals resulting from projecting $\Gamma$ and each rectangle in $\mathcal{B} |_\Gamma$ to the $i$-th dimension (see \Cref{fig:slabs}.b for an illustration). The key to obtaining an efficient weight index for a set of slabs is the fact that weight indexes for $\mathcal{D}(I_1), \ldots, \mathcal{D}(I_d)$ can be combined without much extra computational effort into a weight index for $\mathcal{D}(\mathcal{B} |_\Gamma)$. Let $p$ be a point of $\mathcal{D}(\mathcal{B} |_\Gamma)$ and let $x_i(p)$ denote the value of the $i$-th coordinate of $p$. Observe that for all $i \in [1..d]$, $x_i(p) \in \mathcal{D}(I_i)$ (see \Cref{fig:slabs}.b for an illustration). This allows the representation of the weight of each point $p \in \mathcal{D}(\mathcal{B} |_\Gamma)$ by means of the weights of $x_i(p)$ for all $i \in [1..d]$. We do this by maintaining the following \emph{weight invariant}: the weight of a point $p \in \mathcal{D}(\mathcal{B} |_\Gamma)$ is equal to $\prod_{i=1}^{d}{\left(\text{weight of }x_i(p) \text{ in } \mathcal{D}(I_i)\right)}$. \begin{lemma}~\label{lem:windex_slabs} Let $\mathcal{B}$ be a set of $n$ $d$-dimensional boxes that are equivalent to slabs when restricted to another $d$-dimensional box $\Gamma$. There exists a weight index for $\mathcal{D}(\mathcal{B}|_\Gamma)$ which can be initialized in time within $\bigo{n \log n}$, and with query and update time within $\bigo{\log n}$. \end{lemma} \begin{proof} Let $\mathcal{B}_i$ be the subset of $\mathcal{B}|_\Gamma$ orthogonal to the $i$-th dimension, and let $I_i$ be the set of intervals resulting from projecting $\Gamma$ and each rectangle in $\mathcal{B}_i$ to the $i$-th dimension. Initialize a weight index for $\mathcal{D}(I_i)$ as in \Cref{theo:windex_intervals}, for all $i \in [1..d]$. Since the weights of all the points in the one dimensional indexes are initialized to one, the weight of every point in $\mathcal{D}(\mathcal{B}|_\Gamma)$ is also initialized to one, according to the weight invariant. This initialization can be done in total time within $\bigo{n \log n}$. Let $b \in \mathcal{B}|_\Gamma$ be a box which covers $\Gamma$ in every dimension except for the $i$-th one, for some $i \in [1..d]$ (i.e., $b \in \mathcal{B}_i$), and let $P_i$ be the subset of $\mathcal{D}(I_i)$ contained within the projection of $b$ to the $i$-th dimension. The set of points of $\mathcal{D}(\mathcal{B}|_\Gamma)$ that are within $b$ can be generated by the expression $\{ (a_1, \ldots, a_d) \mid a_1 \in \mathcal{D}(I_1) \land \ldots \land a_{i-1} \in \mathcal{D}(I_{i-1}) \land a_i \in P_i \land a_{i+1} \in \mathcal{D}(I_{i+1}) \land \ldots \land a_d \in \mathcal{D}(I_d) \}$. Therefore, the total weight of the points within $b$ is given by the total weight of the points in $P_i$ multiplied by the total weight of the points in $\mathcal{D}(I_j)$, for all $j \in [1..d]$ distinct from $i$.
To query the total weight of the points of $\mathcal{D}(\mathcal{B} |_\Gamma)$ within a box $b \in \mathcal{B}_i$ we query the weight index of $\mathcal{D}(I_i)$ to find the total weight of the points in the projection of $b$ to the $i$-th dimension (in time within $\bigo{\log n}$), then query the remaining $d-1$ indexes for their total weight (stored at the $\omega$ value of the root of each tree), and return the product of those $d$ values. Clearly the running time is within $\bigo{\log n}$. The update is similar: to multiply by a value $\alpha$ the weight of all the points of $\mathcal{D}(\mathcal{B} |_\Gamma)$ within a box $b \in \mathcal{B}_i$ we simply update the weight index of $\mathcal{D}(I_i)$, multiplying by $\alpha$ the weights of all the points within the projection of $b$ to the $i$-th dimension, and leave the other $d-1$ weight indexes untouched. The running time of this operation is also within $\bigo{\log n}$, and the invariant remains valid after the update. \qed \end{proof}
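Building on the \texttt{IntervalWeightIndex} sketch above, the combination argument in this proof can be phrased in a few lines (ours; a slab is assumed to be given by its orthogonal dimension $i$ and its projection $[a, b]$ on that axis):
\begin{verbatim}
class SlabWeightIndex:
    # One IntervalWeightIndex per dimension; by the weight invariant,
    # the weight of a grid point p is the product of the weights of
    # its coordinates x_i(p) in the one-dimensional indexes.
    def __init__(self, coords_per_dim):
        # coords_per_dim[i] is the (non-empty) point set D(I_i)
        self.axes = [IntervalWeightIndex(c) for c in coords_per_dim]

    def total_weight(self, i, a, b):
        # total weight inside a slab orthogonal to dimension i whose
        # projection on that axis is [a, b]
        w = self.axes[i].total_weight(a, b)
        for j, axis in enumerate(self.axes):
            if j != i:  # omega at the root: total weight of D(I_j)
                w *= axis.total_weight(float('-inf'), float('inf'))
        return w

    def update_weights(self, i, a, b, alpha):
        # only the i-th one-dimensional index needs to change
        self.axes[i].update_weights(a, b, alpha)
\end{verbatim}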
\Cref{lem:windex_slabs} shows that there are weight indexes for a set of slabs $\mathcal{B}$ within another box $\Gamma$ that significantly improve the approach of explicitly constructing $\mathcal{D}(\mathcal{B}|_\Gamma)$, an improvement that grows exponentially with the dimension $d$. We take advantage of this to describe a similar improvement for the general case.
\paragraph{\textbf{A Weight Index for The General Case}.} We now show how to maintain a weight index of a general set $\mathcal{B}$ of $d$-dimensional boxes. The main idea is to partition the space into cells such that, within each cell $c$, any box $b \in \mathcal{B}$ either completely contains $c$ or is equivalent to a \emph{slab}. Then, we use weight indexes for slabs (as described in \Cref{lem:windex_slabs}) to index the weights within each of the cells. This approach was first introduced by \textcite{Overmars1991} in order to compute the volume of the region covered by a set of boxes, and similar variants were used since then to compute other measures~\cite{2017-COCOON-DepthDistributionInHighDimension-BabrayPerezRojas,Chan2013,YildizHershbergerSuri11}. The following lemma summarizes the key properties of the partition we use: \begin{lemma}[Lemma 4.2 of \textcite{Overmars1991}]~\label{lem:oyap} Let $\mathcal{B}$ be a set of $n$ boxes in $d$-dimensional space. There exists a binary partition tree for storing any subset of $\mathcal{B}$ such that \begin{itemize} \item It can be computed in time within $\bigo{n^{d/2}}$, and it has $\bigo{n^{\frac{d}{2}}}$ nodes; \item Each box is stored in $\bigo{n^{\frac{d-1}{2}}}$ leaves; \item The boxes stored in a leaf are slabs within the cell corresponding to the node; \item Each leaf stores no more than $\bigo{\sqrt{n}}$ boxes. \end{itemize} \end{lemma} Consider the tree of \Cref{lem:oyap}. Analogously to the case of intervals, we augment this tree with information to support the operations of a weight index efficiently. At every node $v$ we store two values $\mu(v), \omega(v)$: the first allows multiplying all the weights of the points of $\mathcal{D(B)}$ that are stored in leaves descending from $v$ (supporting efficient updates); while $\omega(v)$ stores the total weight of these points (supporting efficient queries). To ensure that all and only the nodes that intersect a box $b$ are visited during a query or update operation, we store at each node the boundaries of the cell corresponding to that node. Furthermore, at every leaf node $l$ we implicitly represent the points of $\mathcal{D(B)}$ that are inside the cell corresponding to $l$ using a weight index for slabs. To initialize this data structure, all the $\mu$ values are set to one. Then the weight indexes within the leaf cells are initialized. Finally, the $\omega$ values of every node $v$ with children $l,r$, are initialized in a bottom-up fashion setting $\omega(v)=\mu(v)\cdot\left(\omega(l) + \omega(r)\right)$. We show in \Cref{theo:windex_gcase} how to implement the weight index operations over this tree and we analyze its running times. \begin{theorem}\label{theo:windex_gcase} Let $\mathcal{B}$ be a set of $n$ $d$-dimensional boxes. There is a weight index for $\mathcal{D(B)}$ which can be initialized in time within $\bigo{n^\frac{d+1}{2}}$, and with query and update time within $\bigo{n^\frac{d-1}{2}\log n}$. \end{theorem} \begin{proof} The initialization of the index, when implemented as described before, runs in constant time for each internal node of the tree, and in time within $\bigo{\sqrt{n} \log n}$ for each leaf (due to \Cref{lem:windex_slabs}, and to the last item of \Cref{lem:oyap}). Since the tree has $\bigo{n^{\frac{d}{2}}}$ nodes, the total running time of the initialization is within $\bigo{n^{\frac{d+1}{2}}\log n}$. Since the implementations of the query and update operations are analogous to those for the interval weight index (see the proof of \Cref{theo:windex_intervals}), we omit the details of their correctness. While performing a query/update operation at most $\bigo{n^{\frac{d-1}{2}}}$ leaves are visited (due to the second item of \Cref{lem:oyap}), and since the height of the tree is within $\bigo{\log n}$, at most $\bigo{n^{\frac{d-1}{2}}\log n}$ internal nodes are visited in total. The cost of a query/update operation is within $\bigo{\log n}$ within each leaf (by \Cref{lem:windex_slabs}), and constant within each internal node. Thus, the total running time of a query/update operation is within $\bigo{n^\frac{d-1}{2}\log n}$. \qed \end{proof}
\subsection{Practical approximation algorithms for Minimum Coverage Kernel.}
Approximating the \textsc{Minimum Coverage Kernel} of a set $\mathcal{B}$ of boxes via approximation algorithms for the \textsc{Box Cover} problem requires that $\mathcal{D(B)}$ is explicitly constructed. However, the weight index described in the proof of \Cref{theo:windex_gcase} can be used to significantly improve the running time of these algorithms. We describe below two examples. The first algorithm we consider is the greedy $\bigo{\log n}$-approximation algorithm by \textcite{Lovasz75}. The greedy strategy applies naturally to the \textsc{Minimum Coverage Kernel} problem: iteratively pick the box which covers the most yet uncovered points of $\mathcal{D(B)}$, until there are no points of $\mathcal{D(B)}$ left to cover. To avoid the explicit construction of $\mathcal{D(B)}$ three operations must be simulated: ($i$.) find how many uncovered points are within a given box $b \in \mathcal{B}$; ($ii$.) delete the points that are covered by a box $b \in \mathcal{B}$; and ($iii$.) find whether a subset $\mathcal{B'}$ of $\mathcal{B}$ covers all the points of $\mathcal{D(B)}$; a schematic version of the resulting loop is sketched below.
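The sketch is ours: \texttt{index} stands for an object with the interface of the weight index of \Cref{theo:windex_gcase}, and \texttt{volume} for any union-volume routine (e.g.~\citeauthor{Chan2013}'s algorithm~\cite{Chan2013}); both interfaces are hypothetical, not actual library calls.
\begin{verbatim}
def greedy_coverage_kernel(boxes, index, volume):
    # Greedy O(log n)-approximation of a minimum coverage kernel.
    # index.query(b): total weight of the points of D(B) inside b.
    # index.update(b, alpha): multiply those weights by alpha.
    target = volume(boxes)              # volume of the union of B
    kernel, remaining = [], set(range(len(boxes)))
    while volume([boxes[i] for i in kernel]) < target:
        # operation (i): box covering the most uncovered points
        best = max(remaining, key=lambda i: index.query(boxes[i]))
        kernel.append(best)
        remaining.discard(best)
        index.update(boxes[best], 0.0)  # operation (ii): delete them
    return [boxes[i] for i in kernel]   # operation (iii): loop guard
\end{verbatim}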
For the first two we use the weight index described in the proof of \Cref{theo:windex_gcase}: to delete the points within a given box $b \in \mathcal{B}$ we simply multiply the weights of all the points of $\mathcal{D(B)}$ within $b$ by $\alpha = 0$; and finding the number of uncovered points within a box $b$ is equivalent to finding the total weight of the points of $\mathcal{D(B)}$ within $b$. For the last of the three operations we use the following observation: \begin{observation} Let $\mathcal{B}$ be a set of $d$-dimensional boxes, and let $\mathcal{B'}$ be a subset of $\mathcal{B}$. The volume of the region covered by $\mathcal{B'}$ equals that of $\mathcal{B}$ if and only if $\mathcal{B'}$ and $\mathcal{B}$ cover the exact same region. \end{observation} Let $\mathtt{OPT}$ denote the size of a minimum coverage kernel of $\mathcal{B}$, and let $N$ denote the size of $\mathcal{D(B)}$ ($N \in \bigo{n^{d}}$). The greedy algorithm of \textcite{Lovasz75}, when run over the sets $\mathcal{B}$ and $\mathcal{D(B)}$, works in $\bigo{\mathtt{OPT} \log N}$ steps, and at each step a box is added to the solution. The size of the output is within $\bigo{\mathtt{OPT} \log N} \subseteq \bigo{\mathtt{OPT} \log n}$. This algorithm can be modified to achieve the following running time, while achieving the same approximation ratio: \begin{theorem}\label{theo:approx_logn} Let $\mathcal{B}$ be a set of $n$ boxes in $\mathbb{R}^{d}$ with a minimum coverage kernel of size $\mathtt{OPT}$. Then, a \textsc{Coverage Kernel} of $\mathcal{B}$ of size within $\bigo{\mathtt{OPT} \log n}$ can be computed in time within $\bigo{\mathtt{OPT} \cdot n^{\frac{d}{2} + 1}\log^2 n}$. \end{theorem} \begin{proof} We initialize a weight index as in \Cref{theo:windex_gcase}, which can be done in time $\bigo{n^\frac{d+1}{2}}$, and compute the volume of the region covered by $\mathcal{B}$, which can be done in time within $\bigo{n^{d/2}}$~\cite{Chan2013}. Let $C$ be an empty set. At each stage of the algorithm, for every box $b \in \mathcal{B}\setminus C$ we compute the total weight of the points inside $b$ (which can be done in time within $\bigo{n^{\frac{d-1}{2}}\log n}$ using the weight index). We add to $C$ the box with the highest total weight, and update the weights of all the points within this box to zero (by multiplying their weights by $\alpha=0$) in time within $\bigo{n^{\frac{d-1}{2}}\log n}$. If the volume of the region covered by $C$ (which can be computed in $\bigo{n^{d/2}}$-time~\cite{Chan2013}) is the same as that of $\mathcal{B}$, then we stop and return $C$ as the approximated solution. The total running time of each stage is within $\bigo{n^{\frac{d+1}{2}}\log n}$. This, and the fact that the number of stages is within $\bigo{\mathtt{OPT} \log n}$, yield the result of the theorem. \qed \end{proof} Now, we show how to improve \citeauthor{BronnimannG95}'s $\bigo{\log \mathtt{OPT}}$-approximation algorithm~\cite{BronnimannG95} via a weight index. First, we describe their main idea. Let $w : \mathcal{D(B)} \rightarrow \mathbb{R}$ be a weight function for the points of $\mathcal{D(B)}$, and for a subset $\mathcal{P} \subseteq \mathcal{D(B)}$ let $w(\mathcal{P})$ denote the total weight of the points in $\mathcal{P}$. A point $p$ is said to be $\varepsilon$-heavy, for a value $\varepsilon \in (0,1]$, if $w(p) \ge \varepsilon w(\mathcal{D(B)})$, and $\varepsilon$-light otherwise.
A subset $\mathcal{B}' \subseteq \mathcal{B}$ is said to be an $\varepsilon$-net with respect to $w$ if for every $\varepsilon$-heavy point $p \in \mathcal{D(B)}$ there is a box in $\mathcal{B}'$ which contains $p$. Let $\mathtt{OPT}$ denote the size of a minimum coverage kernel of $\mathcal{B}$, and let $k$ be an integer such that $k/2 \le \mathtt{OPT} < k$. The algorithm initializes the weight of each point in $\mathcal{D(B)}$ to 1, and repeats the following \emph{weight-doubling step} until every point of $\mathcal{D(B)}$ is $\frac{1}{2k}$-heavy: find a $\frac{1}{2k}$-light point $p$ and double the weights of all the points within every box $b \in \mathcal{B}$ that contains $p$. When this process stops, it returns a $\frac{1}{2k}$-net $C$ with respect to the final weights as the approximated solution. Since each point in $\mathcal{D(B)}$ is $\frac{1}{2k}$-heavy, $C$ covers all the points of $\mathcal{D(B)}$. Hence, if a $\frac{1}{2k}$-net of size $\bigo{kg(k)}$ can be computed efficiently, this algorithm computes a solution of size $\bigo{kg(k)}$. Besides, \textcite{BronnimannG95} showed that for a given $k$, if more than $\mu_k = 4k \log (n/k)$ weight-doubling steps are performed, then $\mathtt{OPT} > 2k$. This allows one to guess the correct $k$ via exponential search, and to bound the maximum weight of any point by $n^4/k^3$ (which makes it possible to represent the weights with $\bigo{\log n}$ bits). See \citeauthor{BronnimannG95}'s article~\cite{BronnimannG95} for the complete details of their approach. We simulate the operations over the weights of $\mathcal{D(B)}$ again using a weight index, this time with a minor variation of that of \Cref{theo:windex_gcase}: in every node of the space partition tree, besides the $\omega,\mu$ values, we also store the minimum weight of the points within the cell corresponding to the node. During the initialization and update operations of the weight index this value can be maintained as follows: for a node $v$ with children $l,r$, the minimum weight $min_\omega(v)$ of a point in the cell of $v$ can be computed as $min_\omega(v) = \mu(v) \cdot\min\{min_\omega(l), min_\omega(r)\}$. This value allows us to efficiently detect whether there are $\frac{1}{2k}$-light points, and, if one exists, to find it by tracing down, in the partition tree, the path from which that value comes. To compute a $\frac{1}{2k}$-net, we choose a sample of $\mathcal{B}$ by performing at least $16k \log (16k)$ random independent draws from $\mathcal{B}$. We then check whether it is effectively a $\frac{1}{2k}$-net, and if not, we repeat the process, up to a maximum of $\bigo{\log n}$ times. \textcite{HausslerW87} showed that such a sample is a $\frac{1}{2k}$-net with probability at least $1/2$. Thus, the expected number of samples needed to obtain a $\frac{1}{2k}$-net is constant, and since we repeat the process up to $\bigo{\log n}$ times, the probability of effectively finding one is at least $1 - \frac{1}{n^{\Omega(1)}}$. We analyze the running time of this approach in the following theorem. \begin{theorem}\label{theo:approx_randomized} Let $\mathcal{B}$ be a set of $n$ boxes in $\mathbb{R}^{d}$ with a minimum coverage kernel of size $\mathtt{OPT}$. A coverage kernel of $\mathcal{B}$ of size within $\bigo{\mathtt{OPT} \log \mathtt{OPT}}$ can be computed in $\bigo{\mathtt{OPT} n^{\frac{d+1}{2}} \log^2 n}$-expected time, with probability at least $1- \frac{1}{n^{\Omega(1)}}$. \end{theorem} \begin{proof} The algorithm performs several stages guessing the value of $k$.
Within each stage we initialize a weight index in time within $\bigo{n^\frac{d+1}{2}\log n}$. Finding whether there is a $\frac{1}{2k}$-light point can be done in constant time: the root of the partition tree stores both $w(\mathcal{D(B)})$ and the minimum weight of any point in the $\omega$ and $min_\omega$ values, respectively. Each weight-doubling step consumes time within $\bigolr{n \times \left(n^\frac{d-1}{2}\log n\right)} \subseteq \bigo{n^{\frac{d+1}{2}}\log n}$, one update for each box (by \Cref{theo:windex_gcase}). Since at each stage at most $4k \log (n/k)$ weight-doubling steps are performed, the total running time of each stage is within $\bigo{k n^{\frac{d+1}{2}}\log n \log \frac{n}{k}} \subseteq \bigo{k n^{\frac{d+1}{2}} \log^2 n}$. Given that $k$ increases geometrically while guessing its right value, and since the running time of each stage is a polynomial function, the sum of the running times of all the stages is asymptotically dominated by that of the last stage, for which $k/2 \le \mathtt{OPT} < k$. Thus the result of the theorem follows. \qed \end{proof} Compared to the algorithm of \Cref{theo:approx_logn}, this last approach obtains a better approximation factor on instances with small \textsc{Coverage Kernels} ($\bigo{\log n}$ vs. $\bigo{\log \mathtt{OPT}}$), but the improvement comes with a sacrifice, not only in the running time, but also in the probability of finding such a good approximation. In two and three dimensions, weight indexes might also help to obtain practical $\bigo{ \log \log \mathtt{OPT}}$ approximation algorithms for the \textsc{Minimum Coverage Kernel} problem. We discuss this, and other future directions of research, in the next section. \section{Discussion}\label{sec:cover_discussion} Whether it is possible to close the gap between the factors of approximation of \textsc{Box Cover} and \textsc{Orthogonal Polygon Covering} has been a long-standing open question~\cite{KumarR03}. The \textsc{Minimum Coverage Kernel} problem, intermediate between those two, has the potential of yielding answers in that direction, and has natural applications of its own~\cite{DalyLT16,LakshmananNWZJ02,PuM05}. Trying to understand the differences in hardness between these problems, we studied distinct restricted settings. We showed that while \textsc{Minimum Coverage Kernel} remains \textsf{NP}-hard under severely restricted settings, the same can be said for the \textsc{Box Cover} problem under even more extreme settings; and that while the \textsc{Box Cover} and \textsc{Minimum Coverage Kernel} problems can be approximated by at least the same factors, the running time of obtaining some of those approximations can be significantly improved for the \textsc{Minimum Coverage Kernel} problem. Another approach to understand what makes a problem hard is Parameterized Complexity~\cite{DowneyF99}, where the hardness of a problem is analyzed with respect to multiple parameters of the input, with the hope of finding measures gradually separating ``easy'' instances from the ``hard'' ones. The hardness results described in \Cref{sec:covkernels_hardness} show that for the \textsc{Minimum Coverage Kernel} and \textsc{Box Cover} problems, the vertex-degree and clique-number of the underlying graph are not good candidates for such kind of measures, as opposed to what happens for other related problems~\cite{AlekseevBKL07}. In two and three dimensions, the \textsc{Box Cover} problem can be approximated up to $\bigo{ \log \log \mathtt{OPT}}$~\cite{AronovES10}.
We do not know whether the running time of this algorithm can also be improved for the case of \textsc{Minimum Coverage Kernel} via a weight index. We omit this analysis since the approach described in \Cref{sec:covkernels_approximation} is relevant when the dimension of the boxes is high (while still constant), as in distinct applications~\cite{DalyLT16,LakshmananNWZJ02,PuM05} of the \textsc{Minimum Coverage Kernel} problem.
\section{Introduction} The study of tangent bundles and their relationship to the base manifold often relies on the Sasaki metric. However, we may gain valuable mathematical and physical insights by choosing a more general metric. For mechanical systems, tangent bundles arise naturally: the manifold is the configuration space, and the Lagrangian mechanics involve the configurations and their velocities as state variables \cite{BulloFrancesco2005Gcom}. A fundamental process is damping, in which changes in the configuration depend on changes in its velocity. Hence, we want to study metrics where this kind of interaction is considered. The Sasaki metric does not consider this kind of interaction. The contributions of this paper are the generalization of the Sasaki metric and the derivation of the corresponding Levi-Civita connection on the tangent bundle. We also clarify the application of the results to general vector fields that are not constant along fibers. The paper is organized as follows. A brief overview of relevant differential geometry concepts is provided in \cref{sec:background}, our main results are in \cref{sec:main1} and \cref{sec:main2}, examples are in \cref{sec:experiments}, and the conclusions follow in \cref{sec:conclusions}. \section{Background} \label{sec:background} Let $M$ be an $n$-dimensional differentiable manifold equipped with a Riemannian metric $g$, and let $TM$ be the tangent bundle of $M$. For a point $p \in M$, let $T_{p}M$ denote the tangent space of $M$ at $p$. A point $\bar{P} \in TM$ is a pair in the set $\left\{ \left(p,u\right) \mid p \in M, u \in T_{p}M\right\}$. Let $\pi: TM \rightarrow M$ be the projection map. The differential of the projection map is a smooth map denoted as $d\pi: TTM \rightarrow TM$. For any vector fields $X,Y \in \mathfrak{X}\left(M\right)$, the Levi-Civita connection on $M$ is denoted by $\nabla_{X}Y$. From Sasaki, the tangent space $T_{\bar{P}}TM$ admits a direct sum decomposition $T_{\bar{P}}TM = \mathcal{H}_{\bar{P}} \bigoplus \mathcal{V}_{\bar{P}}$, where $\mathcal{H}_{\bar{P}}$ is the horizontal subspace and $\mathcal{V}_{\bar{P}}$ is the vertical subspace \cite{Sasaki1958}. To construct the subspaces, we begin by defining the exponential map on $M$. For an open neighborhood $U$ of $p := \pi\left(\bar{P}\right)\in M$, the exponential map $exp_{p}: T_{p}M \rightarrow M$ maps a neighborhood $U'$ of 0 in $T_{p}M$ diffeomorphically onto $U$. Let $\tau: \pi^{-1}\left(U\right) \rightarrow T_{p}M$ be the smooth map which parallel transports every $Y \in \pi^{-1}\left(U\right)$ from $q = \pi\left(Y\right)$ to $p$. For $u \in T_{p}M$, let $R_{-u}: T_{p}M \rightarrow T_{p}M$ be the translation defined by $R_{-u}\left(X\right) = X - u$ for $X \in T_{p}M$. Then, the connection map $K_{\left(p,u\right)}:T_{\left(p,u\right)}TM \rightarrow T_{p}M$ corresponding to the Levi-Civita connection is defined as \begin{equation} \label{eq:connectionMap} K\left(\bar{X}\right)_{\left(p,u\right)}= d\left(exp_{p}\circ R_{-u}\circ \tau \right)\left(\bar{X}\right)_{p} \end{equation} for all $\bar{X} \in T_{\left(p,u\right)}TM$. The vertical subspace is then defined as the kernel of the differential $d\pi$, while the horizontal subspace is defined as the kernel of the connection map $K$. Throughout this paper, we will use the $d\pi$ and $K$ mappings as projections onto the horizontal and vertical subspaces.
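For orientation, consider the flat special case (a standard example, which we add here): when $M = \mathbb{R}^{n}$ with the Euclidean metric, parallel transport is trivial, tangent vectors on $T\mathbb{R}^{n} \cong \mathbb{R}^{n}\times\mathbb{R}^{n}$ are pairs, and the two projections reduce to
\[
d\pi\left(a,b\right) = a, \qquad K\left(a,b\right) = b,
\]
so that the horizontal subspace consists of the vectors $\left(a,0\right)$ and the vertical subspace of the vectors $\left(0,b\right)$.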
A curve $\bar{\gamma}: I \rightarrow TM$ in the tangent bundle is said to be horizontal if its tangent $\bar{\gamma}'(t)$ satisfies $\bar{\gamma}'(t) \in \mathcal{H}_{\bar{\gamma}(t)}$ for all $t \in I$. Similarly, a curve $\bar{\gamma}: I \rightarrow TM$ in the tangent bundle is said to be vertical if its tangent $\bar{\gamma}'(t)$ satisfies $\bar{\gamma}'(t) \in \mathcal{V}_{\bar{\gamma}(t)}$ for all $t \in I$. If $X$ is a vector field on $M$, then there is a unique vector field $\hlift{X}$ on $TM$ called the horizontal lift of $X$ and a unique vector field $\vlift{X}$ on $TM$ called the vertical lift of $X$ such that \begin{equation} \label{eq:DefLifts} \begin{split} &d\pi\left(\hlift{X}\right)_{\bar{P}} = X_{\pi\left(\bar{P}\right)}, \quad K\left(\hlift{X}\right)_{\bar{P}} = 0_{\pi\left(\bar{P}\right)}\\ &d\pi\left(\vlift{X}\right)_{\bar{P}} = 0_{\pi\left(\bar{P}\right)}, \quad K\left(\vlift{X}\right)_{\bar{P}} = X_{\pi\left(\bar{P}\right)} \end{split} \end{equation} for all $\bar{P} \in TM$. A result of the tangent space decomposition is that any tangent vector $\bar{X} \in T_{\bar{P}}TM$ can be decomposed into its horizontal and vertical components $\bar{X} = \hlift{A} + \vlift{B}$ where $A = d\pi\left(\bar{X}\right), B = K\left(\bar{X}\right) \in T_{\pi\left(\bar{P}\right)}M$. It is important to note that the standard results and our results in \cref{sec:main1} rely on vector fields that only change along horizontal curves. We will denote this type of vector field as \textbf{\textit{lift-decomposable vector fields}}. \begin{definition} A vector field $\bar{X} \in \mathfrak{X}\left(TM\right)$ is \textbf{\textit{lift decomposable}} if it only changes along horizontal curves. Any such vector field $\bar{X}$ can be decomposed locally around $(p,u) \in TM$ as $\bar{X}_{(p,u)} = \hlift{A}_{(p,u)} + \vlift{B}_{(p,u)}$ for $A,B \in T_{p}M$. \end{definition} \begin{remark} Lift-decomposable vector fields are constant along the fibers in the sense that $d\pi\left(\bar{X}(p,u)\right)_{(p,u)} = d\pi\left(\bar{X}(p,u')\right)_{(p,u')}$, and similarly for the connection map $K$ of $\bar{X}$, for any $p \in M, u, u' \in T_{p}M$, and $\bar{X} \in \mathfrak{X}\left(TM\right)$. \end{remark} In general, lift-decomposable vector fields may be too limiting. In \cref{sec:main2}, we show how to extend the results for lift-decomposable vector fields to general vector fields that may change along both horizontal and vertical curves. As shown in \cite{Dombrowski1962}, the Lie brackets of horizontal and vertical lifts on $TM$ are given by the following \begin{equation} \label{eq:LieBracketRelations} \begin{split} \bliebracket{\vlift{X}}{\vlift{Y}}_{\left(p,u\right)} &= 0\\ \bliebracket{\hlift{X}}{\vlift{Y}}_{\left(p,u\right)} &= \vlift{(\nabla_{X}Y)}_{p}\\ \bliebracket{\hlift{X}}{\hlift{Y}}_{\left(p,u\right)} &= \hlift{\liebracket{X}{Y}}_{p} - \vlift{(\mathfrak{R}(X,Y)u)}_{p}\\ \end{split} \end{equation} for all vector fields $X,Y \in \mathfrak{X}\left(M\right)$ and $\left(p,u\right) \in TM$, where $\hlift{X}, \vlift{X}, \hlift{Y}, \vlift{Y}$ are the respective horizontal and vertical lifts, and $\mathfrak{R}$ is the curvature tensor on $M$. Note that the vector fields are lift decomposable. A metric $\bar{g}$ on the tangent bundle is said to be \textit{natural} with respect to $g$ on $M$ if \begin{equation} \begin{split} &\bmetric[(p,u)]{\hlift{X}}{\hlift{Y}} = \metric[p]{X}{Y}\\ &\bmetric[(p,u)]{\hlift{X}}{\vlift{Y}} = 0 \end{split} \end{equation} for all $X,Y \in \mathfrak{X}\left(M\right)$ and $(p,u) \in TM$.
The Sasaki metric, first introduced in \cite{Sasaki1958}, is a special natural metric that has been widely used to study the relationship between the base manifold and its tangent bundle. The Sasaki metric is given as \begin{equation} \begin{split} &\bmetric[(p,u)]{\hlift{X}}{\hlift{Y}} = \metric[p]{X}{Y}\\ &\bmetric[(p,u)]{\hlift{X}}{\vlift{Y}} = 0\\ &\bmetric[(p,u)]{\vlift{X}}{\vlift{Y}} = \metric[p]{X}{Y}. \end{split} \end{equation} The Koszul formula on $(M,g)$ is given by \begin{multline} \label{eq:KozulFormula} 2\metric{\nabla_{X}Y}{Z} = X\metric{Y}{Z} + Y\metric{X}{Z} - Z\metric{X}{Y}\\ + \metric{\liebracket{X}{Y}}{Z} - \metric{\liebracket{X}{Z}}{Y} - \metric{\liebracket{Y}{Z}}{X} \end{multline} for all vector fields $X, Y, Z \in \mathfrak{X}\left(M\right)$. \section{Levi-Civita Connection for Lift-Decomposable Vector Fields} \label{sec:main1} In this section, we assume that all vector fields on $TM$ are lift decomposable. We define a non-natural metric $\bar{g}_{\left(p,u\right)}$ on the tangent bundle as \begin{equation} \label{eq:NonNaturalMetric} \begin{split} &\bmetric[(p,u)]{\hlift{X}}{\hlift{Y}} = m_{1}\metric[p]{X}{Y}\\ &\bmetric[(p,u)]{\hlift{X}}{\vlift{Y}} = m_{2}\metric[p]{X}{Y}\\ &\bmetric[(p,u)]{\vlift{X}}{\vlift{Y}} = m_{3}\metric[p]{X}{Y} \end{split} \end{equation} where $\metric[p]{X}{Y}$ is the metric on the manifold $M$ at the point $p$, $\hlift{X}, \vlift{X}, \hlift{Y}, \vlift{Y}$ are the respective horizontal and vertical lifts of the vector fields $X, Y \in \mathfrak{X}\left(M\right)$, $(p,u) \in TM$, and $m_{1}, m_{2}, m_{3} \in \mathbb{R}$. The scalars $m_{1}, m_{2}, m_{3}$ must be chosen such that $m_{1}, m_{3} > 0$ and $m_{1}m_{3}-m_{2}^{2} > 0$. \begin{proposition} Given a Riemannian manifold $(M,g)$ and $m_{1},m_{2},m_{3}$ chosen such that $m_{1}, m_{3} > 0$ and $m_{1}m_{3}-m_{2}^{2}>0$, the metric $\bar{g}$ defined in \cref{eq:NonNaturalMetric} is a Riemannian metric on TM. \end{proposition} \begin{proof} The metric $\bar{g}$ must be an inner product on $T_{(p,u)}TM$ at each point $(p,u) \in TM$. The symmetry and linearity properties can be verified through simple calculations. To show positive definiteness, we consider a tangent vector $\bar{Z} \in T_{(p,u)}TM$ where $\bar{Z} = \hlift{X} + \vlift{Y}$ for $X,Y \in T_{p}M$. Then \begin{equation*} \bmetric[(p,u)]{\bar{Z}}{\bar{Z}} = m_{1}\metric[p]{X}{X} + 2m_{2}\metric[p]{X}{Y} + m_{3}\metric[p]{Y}{Y}. \end{equation*} The metric can be bounded from below by using the Cauchy-Schwarz inequality such that \begin{equation*} \bmetric[(p,u)]{\bar{Z}}{\bar{Z}} \geq m_{1}\|X\|^{2} - 2|m_{2}|\|X\|\|Y\| + m_{3}\|Y\|^{2} \end{equation*} where $\|\cdot\|$ is the norm with respect to $g$. The above equation can be rewritten in matrix notation as \begin{equation*} \bmetric[(p,u)]{\bar{Z}}{\bar{Z}} \geq \begin{bmatrix} \|X\| \\ \|Y\| \end{bmatrix}^{T} \begin{bmatrix} m_{1} & -|m_{2}|\\ -|m_{2}| & m_{3} \end{bmatrix} \begin{bmatrix} \|X\| \\ \|Y\| \end{bmatrix}. \end{equation*} From the above inequality, $\bar{g}$ must be positive definite since the middle matrix is positive definite by $m_{1}, m_{3} > 0$ and $m_{1}m_{3}-m_{2}^2 > 0$. \end{proof} The metric in \cref{eq:NonNaturalMetric} allows us to choose from a class of metrics on the tangent bundle with different horizontal and vertical subspaces along with their Levi-Civita connections.
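As a concrete illustration (the numbers are ours, chosen only for the example), the choice
\[
\left(m_{1}, m_{2}, m_{3}\right) = \left(1, \tfrac{1}{2}, 1\right), \qquad m_{1}m_{3}-m_{2}^{2} = \tfrac{3}{4} > 0,
\]
satisfies the admissibility conditions and yields a metric in which horizontal and vertical components interact through the $m_{2}$ cross term, the kind of coupling motivated by damping; setting $m_{2}=0$ removes the interaction.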
Using the metric defined in \cref{eq:NonNaturalMetric}, the Koszul formula in \cref{eq:KozulFormula}, and the relations from \cref{eq:LieBracketRelations}, we derive properties of the corresponding Levi-Civita connection $\bar{\nabla}$ on $TM$ for horizontal and vertical lifts (the proof closely mirrors those found in \cite{Dombrowski1962,Gudmundsson2002,Kowalski1971,Sasaki1958}). \begin{proposition} \label{lem:KozulFormula} Given a Riemannian manifold $(M, g)$ and its tangent bundle $TM$ equipped with the metric in \cref{eq:NonNaturalMetric}, the Levi-Civita connection $\bar{\nabla}$ on $TM$ satisfies \begin{enumerate}[label=(\roman*)] \item $2\bmetric{\bar{\nabla}_{\hlift{X}}\hlift{Y}}{\hlift{Z}} = 2m_{1}\metric{\nabla_{X}Y}{Z} + 2m_{2}\metric{\mathfrak{R}(u,X)Y}{Z}$\label{it:LeviProp1} \item $2\bmetric{\bar{\nabla}_{\hlift{X}}\hlift{Y}}{\vlift{Z}} = 2m_{2}\metric{\nabla_{X}Y}{Z} - m_{3}\metric{\mathfrak{R}(X,Y)u}{Z}$\label{it:LeviProp2} \item $2\bmetric{\bar{\nabla}_{\hlift{X}}\vlift{Y}}{\hlift{Z}} = 2m_{2}\metric{\nabla_{X}Y}{Z} + m_{3}\metric{\mathfrak{R}(u,Y)X}{Z}$\label{it:LeviProp3} \item $2\bmetric{\bar{\nabla}_{\hlift{X}}\vlift{Y}}{\vlift{Z}} = 2m_{3}\metric{\nabla_{X}Y}{Z}$\label{it:LeviProp4} \item $2\bmetric{\bar{\nabla}_{\vlift{X}}\hlift{Y}}{\hlift{Z}} = m_{3}\metric{\mathfrak{R}(u,X)Y}{Z}$\label{it:LeviProp5} \item $2\bmetric{\bar{\nabla}_{\vlift{X}}\hlift{Y}}{\vlift{Z}} = 0$ \label{it:LeviProp6} \item $2\bmetric{\bar{\nabla}_{\vlift{X}}\vlift{Y}}{\hlift{Z}} = 0$ \label{it:LeviProp7} \item $2\bmetric{\bar{\nabla}_{\vlift{X}}\vlift{Y}}{\vlift{Z}} = 0$ \label{it:LeviProp8} \end{enumerate} for all vector fields $X,Y,Z \in \mathfrak{X}\left(M\right)$. \end{proposition} \begin{proof} The Koszul formula on the tangent bundle is used repeatedly to find the properties of the Levi-Civita connection. \begin{itemize} \item[\ref{it:LeviProp1}] Applying the Koszul formula gives the first equality. Then, substituting properties from \cref{eq:LieBracketRelations} and \cref{eq:NonNaturalMetric}, we obtain the second equality. The third equality follows from the fact that six of the terms produce the Koszul formula on $M$. Lastly, we obtain the fourth equality by combining the curvature-tensor-dependent terms so that $Z$ is isolated, using the symmetries of the curvature tensor. \begin{multline*} 2\bmetric{\bar{\nabla}_{\hlift{X}}\hlift{Y}}{\hlift{Z}} = \hlift{X}\bmetric{\hlift{Y}}{\hlift{Z}} + \hlift{Y}\bmetric{\hlift{Z}}{\hlift{X}} - \hlift{Z}\bmetric{\hlift{X}}{\hlift{Y}}\\ - \bmetric{\hlift{X}}{\bliebracket{\hlift{Y}}{\hlift{Z}}} + \bmetric{\hlift{Y}}{\bliebracket{\hlift{Z}}{\hlift{X}}} + \bmetric{\hlift{Z}}{\bliebracket{\hlift{X}}{\hlift{Y}}}\\ = m_{1}X\metric{Y}{Z} + m_{1}Y\metric{Z}{X} - m_{1}Z\metric{X}{Y} -m_{1}\metric{X}{\liebracket{Y}{Z}}\\ + m_{2}\metric{X}{\mathfrak{R}(Y,Z)u} + m_{1}\metric{Y}{\liebracket{Z}{X}} - m_{2}\metric{Y}{\mathfrak{R}(Z,X)u}\\ + m_{1}\metric{Z}{\liebracket{X}{Y}} - m_{2}\metric{Z}{\mathfrak{R}(X,Y)u}\\ = 2m_{1}\metric{\nabla_{X}Y}{Z} + m_{2}\metric{X}{\mathfrak{R}(Y,Z)u} - m_{2}\metric{Y}{\mathfrak{R}(Z,X)u}\\ -m_{2}\metric{Z}{\mathfrak{R}(X,Y)u}\\ = 2m_{1}\metric{\nabla_{X}Y}{Z} + 2m_{2}\metric{\mathfrak{R}(u,X)Y}{Z}\\ \end{multline*} \item[\ref{it:LeviProp2}] The statement is obtained in a similar fashion to \ref{it:LeviProp1}. The first equality is the Koszul formula.
The second equality is obtained by substituting properties from \cref{eq:LieBracketRelations} and \cref{eq:NonNaturalMetric}, followed by expanding the derivatives of the metric terms using metric compatibility. Note that $\metric{X}{Y}$ depends only on the base point and is therefore constant along fibers; thus $\vlift{Z}\metric{X}{Y} = 0$. Finally, the last equality is obtained by expanding the Lie bracket and combining terms. \begin{multline*} 2\bmetric{\bar{\nabla}_{\hlift{X}}\hlift{Y}}{\vlift{Z}} = \hlift{X}\bmetric{\hlift{Y}}{\vlift{Z}} + \hlift{Y}\bmetric{\vlift{Z}}{\hlift{X}} - \vlift{Z}\bmetric{\hlift{X}}{\hlift{Y}}\\ - \bmetric{\hlift{X}}{\bliebracket{\hlift{Y}}{\vlift{Z}}} + \bmetric{\hlift{Y}}{\bliebracket{\vlift{Z}}{\hlift{X}}} + \bmetric{\vlift{Z}}{\bliebracket{\hlift{X}}{\hlift{Y}}}\\ = m_{2}\metric{Z}{\nabla_{Y}X} + m_{2}\metric{X}{\nabla_{Y}Z} + m_{2}\metric{Y}{\nabla_{X}Z} + m_{2}\metric{Z}{\nabla_{X}Y}\\ - m_{2}\metric{X}{\nabla_{Y}Z} + m_{2}\metric{Y}{-\nabla_{X}Z} + m_{2}\metric{Z}{\liebracket{X}{Y}} - m_{3}\metric{Z}{\mathfrak{R}(X,Y)u}\\ = 2m_{2}\metric{\nabla_{X}Y}{Z}-m_{3}\metric{\mathfrak{R}(X,Y)u}{Z}\\ \end{multline*} \ref{it:LeviProp3}-\ref{it:LeviProp7} are analogous to \ref{it:LeviProp2}. \item[\ref{it:LeviProp8}] The statement follows from the fact that the Lie bracket of two vertical vector fields vanishes and that $\metric{\cdot}{\cdot}$ is constant along fibers. \begin{multline*} 2\bmetric{\bar{\nabla}_{\vlift{X}}\vlift{Y}}{\vlift{Z}} = \vlift{X}\bmetric{\vlift{Y}}{\vlift{Z}} + \vlift{Y}\bmetric{\vlift{Z}}{\vlift{X}} - \vlift{Z}\bmetric{\vlift{X}}{\vlift{Y}}\\ - \bmetric{\vlift{X}}{\bliebracket{\vlift{Y}}{\vlift{Z}}} + \bmetric{\vlift{Y}}{\bliebracket{\vlift{Z}}{\vlift{X}}} + \bmetric{\vlift{Z}}{\bliebracket{\vlift{X}}{\vlift{Y}}}\\ = m_{3}\vlift{X}\metric{Y}{Z} + m_{3}\vlift{Y}\metric{Z}{X} - m_{3}\vlift{Z}\metric{X}{Y}\\ - \bmetric{\vlift{X}}{0} + \bmetric{\vlift{Y}}{0} + \bmetric{\vlift{Z}}{0}\\ = 0\\ \end{multline*} \end{itemize} \end{proof} Next, we extract the explicit form of the horizontal and vertical components of the Levi-Civita connection on the tangent bundle from \cref{lem:KozulFormula}. To do so, we first present a useful lemma. \begin{lemma} \label{lem:idMetricTM} Let $\bar{f}$ be a function $\bar{f}: T_{\left(p,u\right)}TM \rightarrow T_{\left(p,u\right)}TM$ such that \begin{equation} \begin{split} &\hcomp{\bar{f}\circ\bar{X}} = \frac{m_{3}\hcomp{\bar{X}}-m_{2}\vcomp{\bar{X}}} {m_{1}m_{3}-m_{2}^{2}}\\ &\vcomp{\bar{f}\circ\bar{X}} = \frac{-m_{2}\hcomp{\bar{X}}+m_{1}\vcomp{\bar{X}}} {m_{1}m_{3}-m_{2}^{2}}\\ \end{split} \end{equation} for all vector fields $\bar{X} \in \mathfrak{X}\left(TM\right)$, $\left(p,u\right) \in TM$, and $m_{1},m_{2},m_{3} \in \mathbb{R}$ such that $m_{1}, m_{3} > 0$ and $m_{1}m_{3}-m_{2}^{2}>0$. Then \begin{equation} \begin{split} \bmetric[\left(p,u\right)]{\bar{f}\circ\bar{X}}{\bar{Y}} ={ }& \bmetric[\left(p,u\right)]{\bar{X}}{\bar{f}\circ\bar{Y}}\\ ={ }&\metric[p]{\hcomp{\bar{X}}}{\hcomp{\bar{Y}}} + \metric[p]{\vcomp{\bar{X}}}{\vcomp{\bar{Y}}}. \end{split} \end{equation} \end{lemma} \begin{proof} The claim follows directly from the definitions of $\bar{f}$ and $\bar{g}$. \end{proof} \cref{lem:idMetricTM} is important in that if there is an expression for $\bar{g}$ with one known tangent vector, the horizontal and vertical components of the unknown tangent vector can be extracted through the metric instead of through the $d\pi$ and $K$ mappings.
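Since the proof of \cref{lem:idMetricTM} is stated as direct, we record the computation here for the reader's convenience (our expansion, writing $H := \hcomp{\bar{X}}$, $V := \vcomp{\bar{X}}$, and $D := m_{1}m_{3}-m_{2}^{2}$). Expanding $\bar{g}$ bilinearly over the horizontal and vertical components of $\bar{f}\circ\bar{X}$ and collecting coefficients gives
\begin{align*}
\bmetric[\left(p,u\right)]{\bar{f}\circ\bar{X}}{\bar{Y}}
={ }& \frac{m_{1}m_{3}-m_{2}^{2}}{D}\metric[p]{H}{\hcomp{\bar{Y}}}
 + \frac{m_{2}m_{3}-m_{2}m_{3}}{D}\metric[p]{H}{\vcomp{\bar{Y}}}\\
&+ \frac{m_{1}m_{2}-m_{1}m_{2}}{D}\metric[p]{V}{\hcomp{\bar{Y}}}
 + \frac{m_{1}m_{3}-m_{2}^{2}}{D}\metric[p]{V}{\vcomp{\bar{Y}}}\\
={ }& \metric[p]{\hcomp{\bar{X}}}{\hcomp{\bar{Y}}} + \metric[p]{\vcomp{\bar{X}}}{\vcomp{\bar{Y}}},
\end{align*}
and the identity with $\bar{f}$ acting on the second argument follows by the symmetry of $\bar{g}$.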
\begin{remark} The results of \cref{lem:idMetricTM} can be better understood in local coordinates using matrix operations. To illustrate the point, we assume $g$ to be the natural Euclidean inner product on $M$; then \begin{equation} \label{eq:NonNaturalMetricMat} \begin{split} \bmetric[\left(p,u\right)]{\bar{X}}{\bar{Y}}={ }& \begin{bmatrix} \hcomp{\bar{X}}\\ \vcomp{\bar{X}} \end{bmatrix}^{T} \begin{bmatrix} m_{1}\mathcal{I}_{n} & m_{2}\mathcal{I}_{n}\\ m_{2}\mathcal{I}_{n} & m_{3}\mathcal{I}_{n} \end{bmatrix} \begin{bmatrix} \hcomp{\bar{Y}}\\ \vcomp{\bar{Y}} \end{bmatrix} \Biggr|_{p}\\ ={ }& \begin{bmatrix} \hcomp{\bar{X}}\\ \vcomp{\bar{X}} \end{bmatrix}^{T} \mathcal{M} \begin{bmatrix} \hcomp{\bar{Y}}\\ \vcomp{\bar{Y}} \end{bmatrix} \Biggr|_{p} \end{split} \end{equation} where $\mathcal{I}_{n}$ is the $n \times n$ identity matrix, $\bar{X}, \bar{Y} \in T_{\left(p,u\right)}TM$, $\left(p,u\right) \in TM$, and $m_{1},m_{2},m_{3}$ $\in \mathbb{R}$ such that $m_{1},m_{3} > 0$ and $m_{1}m_{3}-m_{2}^{2} > 0$. Since $\mathcal{M}$ is positive definite, its inverse $\mathcal{M}^{-1}$ exists. Thus, the function $\bar{f}$ can be interpreted (in matrix notation) as \begin{equation} \bar{f}\circ\bar{X} = \mathcal{M}^{-1} \begin{bmatrix} \hcomp{\bar{X}}\\ \vcomp{\bar{X}} \end{bmatrix}. \end{equation} When $\bar{f}$ acts on a tangent vector in \cref{eq:NonNaturalMetricMat}, we recover the identity matrix and the simple pairing of the horizontal and vertical components. \end{remark} The following theorem combines \cref{lem:KozulFormula} and \cref{lem:idMetricTM} to extract the explicit form of the horizontal and vertical components of the Levi-Civita connection $\bar\nabla_{\bar{X}}\bar{Y}$ on $TM$ for any vector fields $\bar{X}, \bar{Y} \in \mathfrak{X}(TM)$. \begin{theorem} \label{thm:LeviCivitaConn} Let $(M, g)$ be a Riemannian manifold and $\bar{\nabla}$ be the Levi-Civita connection on the tangent bundle $(TM, \bar{g})$ equipped with the metric in \cref{eq:NonNaturalMetric}. Then \begin{enumerate}[label=(\roman*)] \item $\hcomp{\bar{\nabla}_{\hlift{X}}\hlift{Y}} = \nabla_{X}Y + \frac{1}{m_{1}m_{3}-m_{2}^{2}}\Big(m_{2}m_{3}\mathfrak{R}(u,X)Y + \frac{m_{2}m_{3}}{2}\mathfrak{R}(X,Y)u \Big)$ \item $\vcomp{\bar{\nabla}_{\hlift{X}}\hlift{Y}} = \frac{1}{m_{1}m_{3}-m_{2}^{2}}\Big( -m_{2}^{2}\mathfrak{R}(u,X)Y -\frac{m_{1}m_{3}}{2}\mathfrak{R}(X,Y)u \Big)$ \item $\hcomp{\bar{\nabla}_{\hlift{X}}\vlift{Y}} = \frac{1}{m_{1}m_{3}-m_{2}^{2}}\Big( \frac{m_{3}^{2}}{2}\mathfrak{R}(u,Y)X \Big)$ \item $\vcomp{\bar{\nabla}_{\hlift{X}}\vlift{Y}} = \nabla_{X}Y - \frac{1}{m_{1}m_{3}-m_{2}^{2}}\Big(\frac{m_{2}m_{3}}{2}\mathfrak{R}(u,Y)X \Big)$ \item $\hcomp{\bar{\nabla}_{\vlift{X}}\hlift{Y}} = \frac{1}{m_{1}m_{3}-m_{2}^{2}}\Big( \frac{m_{3}^{2}}{2}\mathfrak{R}(u,X)Y \Big)$ \item $\vcomp{\bar{\nabla}_{\vlift{X}}\hlift{Y}} = -\frac{1}{m_{1}m_{3}-m_{2}^{2}}\Big( \frac{m_{2}m_{3}}{2}\mathfrak{R}(u,X)Y \Big)$ \item $\hcomp{\bar{\nabla}_{\vlift{X}}\vlift{Y}} = 0$ \item $\vcomp{\bar{\nabla}_{\vlift{X}}\vlift{Y}} = 0$ \end{enumerate} for all vector fields $X,Y \in \mathfrak{X}\left(M\right)$ and $\left(p,u\right) \in TM$. \end{theorem} \begin{proof} \cref{lem:KozulFormula} provides an expression for $\bmetric[\left(p,u\right)]{\bar{\nabla}_{\bar{X}}\bar{Y}}{\cdot}$ for any vector fields $\bar{X}, \bar{Y} \in \mathfrak{X}\left(TM\right)$ at a point $\left(p,u\right) \in TM$ where the second argument of $\bar{g}$ can be chosen arbitrarily.
Thus, we choose purely horizontal and purely vertical fields to extract the components of the connection. For an arbitrary vector field $Z \in \mathfrak{X}(M)$ and $\bar{f}$ defined in \cref{lem:idMetricTM}, \begin{align*} \bmetric[\left(p,u\right)]{\bar{\nabla}_{\bar{X}}\bar{Y}}{\bar{f}\circ\hlift{Z}} &= \metric[p]{\hcomp{\bar{\nabla}_{\bar{X}}\bar{Y}}}{Z}\\ \bmetric[\left(p,u\right)]{\bar{\nabla}_{\bar{X}}\bar{Y}}{\bar{f}\circ\vlift{Z}} &=\metric[p]{\vcomp{\bar{\nabla}_{\bar{X}}\bar{Y}}}{Z}.\\ \end{align*} Substituting the expressions from \cref{lem:KozulFormula} and letting $Z$ vary then yields the stated formulas. \end{proof} The results of this section allow us to compute the Levi-Civita connection on the tangent bundle for any vector fields $\bar{X}, \bar{Y} \in \mathfrak{X}(TM)$ that are lift decomposable. However, lift-decomposable vector fields do not span the space of all possible smooth vector fields. In general, vector fields on the tangent bundle may change along both horizontal and vertical curves. In the next section, we show how to extend the results for lift-decomposable vector fields to any general vector field. \section{Levi-Civita Connection for General Vector Fields} \label{sec:main2} In this section, we extend the Levi-Civita connection in \cref{sec:main1} to general vector fields on the tangent bundle that may change along both horizontal and vertical curves. As discussed in \cref{sec:background} and \cref{sec:main1}, the Levi-Civita connection in \cref{thm:LeviCivitaConn} is only valid for lift-decomposable vector fields. In general, a vector field $\bar{Y} \in \mathfrak{X}\left(TM\right)$ at a point $(p,u) \in TM$ depends on both horizontal and vertical motions and may be expressed as \begin{equation} \label{eq:GenVF} \bar{Y}_{(p,u)} = \hlift{A}_{(p,u)} + \vlift{B}_{(p,u)} + \bar{C}_{(p,u)} + \bar{D}_{(p,u)} \end{equation} where $A,B \in T_{p}M$, $\bar{C} \in \mathcal{H}_{(p,u)}$, $\bar{D} \in \mathcal{V}_{(p,u)}$, and $\bar{C} = \bar{D} = 0$ at $\left(p,u\right)$ and along horizontal curves passing through $(p,u)$. To be more precise, $\bar{C}$ and $\bar{D}$ are the point-wise horizontal and vertical projections of the field $\bar{Y}_{(p',u')}-\hlift{A}_{(p,u)}-\vlift{B}_{(p,u)}$ for any point $(p',u') \in TM$ in a neighborhood of $(p,u)$. It is important to note that $A$ and $B$ change along horizontal curves, and $\bar{C}$ and $\bar{D}$ change along vertical curves. The standard results and our results in \cref{sec:main1} already considered how vector fields change along horizontal curves to derive the connection in \cref{thm:LeviCivitaConn}. In that formulation, we ignored the motion along the vertical curves because the vector fields are lift decomposable and thus constant along those curves. Now, we must also consider changes along vertical curves to obtain the Levi-Civita connection for general vector fields on the tangent bundle. \begin{corollary} \label{cor:TotalDerTM} The Levi-Civita connection $\bar{\nabla}_{\bar{X}}\bar{Y}$ on the tangent bundle $TM$ for any general vector fields $\bar{X}, \bar{Y} \in \mathfrak{X}\left(TM\right)$ at a point $(p,u) \in TM$ is given by \begin{equation} \bar{\nabla}_{\bar{X}} \bar{Y} = \bar{\nabla}_{(\hlift{F}+\vlift{G})}\left(\hlift{A} + \vlift{B}\right) + \tilde{\nabla}_{\vlift{G}}\left(\bar{C}+\bar{D}\right)\\ \end{equation} where $\bar{Y}$ is decomposed into the components defined in \cref{eq:GenVF} and $\bar{X} = \hlift{F} + \vlift{G}$ for $F,G \in T_{p}M$. The first term is the connection from \cref{thm:LeviCivitaConn}, which captures changes along horizontal curves.
The second term captures changes along vertical curves and does not depend on $\hlift{F}$ since $\bar{C}, \bar{D}$ are zero along any horizontal curve. The connection $\tilde{\nabla}$ is the usual connection on the flat tangent space corresponding to the choice of local coordinates. \end{corollary} \begin{proof} The proof follows from the vector field decomposition in \cref{eq:GenVF} and the properties of the Levi-Civita connection. Note that since $\bar{C}, \bar{D} = 0$ along horizontal curves, the connection $\tilde{\nabla}_{\hlift{F}}\left(\bar{C}+\bar{D}\right) = 0$. \end{proof} \section{Examples} \label{sec:experiments} In this section, we present two applications of our results. In the first, we show that the Sasaki metric and the corresponding Levi-Civita connection on $TM$ arise as a special case. In the second, we apply the results to $SO(3)$ and derive the Levi-Civita connection on $T$SO(3). \subsection{Sasaki Metric} In this example, we show that the Sasaki metric \cite{Sasaki1958} and the induced Levi-Civita connection on $TM$ are a special case of our results. If we choose \begin{equation*} m_{1} = 1, \quad m_{2} = 0, \quad m_{3} = 1 \end{equation*} then the metric \cref{eq:NonNaturalMetric} becomes \begin{equation*} \begin{split} &\bmetric{\hlift{X}}{\hlift{Y}} = \metric{X}{Y}\\ &\bmetric{\hlift{X}}{\vlift{Y}} = 0\\ &\bmetric{\vlift{X}}{\vlift{Y}} = \metric{X}{Y}. \end{split} \end{equation*} The induced connection $\bar{\nabla}$ on $TM$, given by \cref{thm:LeviCivitaConn}, can be shown to be equivalent to the results obtained by Kowalski in \cite{Kowalski1971}. \subsection{SO(3) Example} In this example, we consider the Special Orthogonal Group $SO(3)$ equipped with a metric $g$ and its tangent bundle $TSO(3)$ equipped with the metric in \cref{eq:NonNaturalMetric}. The Levi-Civita connection on $SO(3)$ is given by Edelman in \cite{Edelman1998} as \begin{equation} \nabla_{X}Y = \dot{Y} + \frac{1}{2}R\left(X^{T}Y + Y^{T}X\right) \end{equation} for all vector fields $X,Y \in \mathfrak{X}\left(SO(3)\right)$ at a point $R \in SO(3)$, where $\dot{Y}$ is the usual time derivative. Consider left-invariant vector fields $\bar{X}, \bar{Y} \in \mathfrak{X}\left(TSO(3)\right)$ along a curve $\bar{\gamma}$ such that \begin{equation} \bar{X} = \left(R\hat{\zeta}, R\hat{\eta} \right), \quad \bar{Y} = \left( R\hat{\alpha}, R\hat{\beta} \right), \quad \bar{\gamma} = \left(R, R\hat{\omega}\right) \end{equation} where $\zeta, \eta, \alpha, \beta, \omega \in \mathbb{R}^{3}$ are constants, $\hat{(\cdot)}:\mathbb{R}^{3} \rightarrow so(3)$ is the hat operator which maps vectors of $\mathbb{R}^{3}$ to the Lie algebra, and the pairs above denote horizontal and vertical components under the projections $TT_{R}SO(3) \rightarrow T_{R}SO(3)$.
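For concreteness, we recall the standard coordinate expression of the hat operator (a well-known convention, stated here since the text uses it implicitly):
\[
\hat{\omega} = \begin{bmatrix} 0 & -\omega_{3} & \omega_{2}\\ \omega_{3} & 0 & -\omega_{1}\\ -\omega_{2} & \omega_{1} & 0 \end{bmatrix},
\qquad \hat{\omega}\,x = \omega \times x \quad \text{for } x \in \mathbb{R}^{3},
\]
and the matrix commutator satisfies $\liebracket{\hat{\zeta}}{\hat{\eta}} = \widehat{\zeta \times \eta}$.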
Then the induced Levi-Civita connection $\bar{\nabla}$ on $TSO(3)$, in local coordinates, is given by \begin{enumerate}[label=(\roman*)] \item \begin{equation*} \begin{split} \hcomp{\bar{\nabla}_{\bar{X}}\bar{Y}} ={ }&R\left(\hat{\zeta}\hat{\alpha} + \frac{1}{2}\left(\hat{\zeta}^{T}\hat{\alpha} +\hat{\alpha}^{T}\hat{\zeta}\right)\right)\\ &\quad -R\frac{m_{2}m_{3}}{8(m_{1}m_{3}-m_{2}^{2})}\left( 2\liebracket{\liebracket{\hat{\omega}}{\hat{\zeta}}}{\hat{\alpha}} +\liebracket{\liebracket{\hat{\zeta}}{\hat{\alpha}}}{\hat{\omega}} \right)\\ &\quad -R\frac{m_{3}^{2}}{8(m_{1}m_{3}-m_{2}^{2})} \liebracket{\liebracket{\hat{\omega}}{\hat{\beta}}}{\hat{\zeta}}\\ &\quad -R\frac{m_{3}^{2}}{8(m_{1}m_{3}-m_{2}^{2})} \liebracket{\liebracket{\hat{\omega}}{\hat{\eta}}}{\hat{\alpha}} \end{split} \end{equation*} \item \begin{equation*} \begin{split} \vcomp{\bar{\nabla}_{\bar{X}}\bar{Y}} ={ }&R\left(\hat{\zeta}\hat{\beta} + \frac{1}{2}\left(\hat{\zeta}^{T}\hat{\beta} +\hat{\beta}^{T}\hat{\zeta}\right)\right)\\ &\quad +R\frac{m_{2}m_{3}}{8(m_{1}m_{3}-m_{2}^{2})} \liebracket{\liebracket{\hat{\omega}}{\hat{\beta}}}{\hat{\zeta}}\\ &\quad +R\frac{1}{8(m_{1}m_{3}-m_{2}^{2})}\left( 2m_{2}^{2}\liebracket{\liebracket{\hat{\omega}}{\hat{\zeta}}}{\hat{\alpha}} +m_{1}m_{3}\liebracket{\liebracket{\hat{\zeta}}{\hat{\alpha}}}{\hat{\omega}} \right)\\ &\quad +R\frac{m_{2}m_{3}}{8(m_{1}m_{3}-m_{2}^{2})} \liebracket{\liebracket{\hat{\omega}}{\hat{\eta}}}{\hat{\alpha}}. \end{split} \end{equation*} \end{enumerate} In the general case where $\omega = \omega(t)$, $\alpha = \alpha(\omega)$, $\beta = \beta(\omega)$, an additional term is required to account for changes in the vertical subspace along the curve $\bar{\gamma}$ (see \cref{cor:TotalDerTM}). The connection on $TSO(3)$ for this vector field is given by \begin{enumerate}[label=(\roman*)] \item \begin{equation*} \hcomp{\bar{\nabla}_{\bar{X}}\bar{Y}} = ... + R\left( \frac{\partial\alpha}{\partial\omega}\dot{\omega}\right)\hat{} \text{ }\hat{\eta} \end{equation*} \item \begin{equation*} \vcomp{\bar{\nabla}_{\bar{X}}\bar{Y}} = ... + R\left( \frac{\partial\beta}{\partial\omega}\dot{\omega}\right)\hat{} \text{ }\hat{\eta}. \end{equation*} \end{enumerate} where the additional terms arise from $\tilde{\nabla}$, the usual directional derivative on $\mathbb{R}^{3}$. Both results can be validated by the metric compatibility requirement along their respective curves. \section{Conclusions} \label{sec:conclusions} In this paper, we studied the relationship between Riemannian manifolds and their tangent bundles. Namely, we saw that a manifold equipped with a metric and Levi-Civita connection induces a metric and Levi-Civita connection on its tangent bundle by the natural decomposition of the tangent bundle into the horizontal and vertical subspaces. We then defined a non-natural metric on the tangent bundle and derived the corresponding Levi-Civita connection. In addition, we showed explicitly how to extend the results to vector fields that are not constant along the fibers. As a validation of our results, we saw that under special conditions the non-natural metric reduces to the Sasaki metric and the corresponding Levi-Civita connection agrees with the results of Kowalski. \bibliographystyle{siamplain}
\section{Introduction} In the present paper we consider overdetermined problems for the fractional Laplacian in unbounded exterior sets or bounded annular sets. Different cases will be taken into account, but the results obtained will lead in any case to the classification of the solution and of the domain, which will be shown to possess rotational symmetry. The notation used in this paper will be the following. Given an open set whose boundary is of class~$\mathcal{C}^2$, we denote by~$\nu$ the inner unit normal vector, and for any~$x_0$ on the boundary of such a set, we use the notation \begin{equation}\label{def: s-derivative} (\partial_{\nu})_s u(x_0) := \lim_{t \to 0^+} \frac{u(x_0 + t \nu(x_0))-u(x_0)}{t^s}. \end{equation} Of course, when writing such a limit, we always assume that the limit indeed exists, and we call the above quantity the \emph{inner normal $s$-derivative} of $u$ at~$x_0$. The parameter~$s$ above ranges in the interval~$(0,1)$ and it corresponds to the fractional Laplacian~$(-\Delta)^s$. A brief summary of the basic theory of the fractional Laplacian will be provided in Section~\ref{sec: preliminaries}: for the moment, we just remark that~$(-\Delta)^s$ reduces to (minus) the classical Laplacian as~$s\to 1$, and the quantity in~\eqref{def: s-derivative} becomes in this case the classical Neumann condition along the boundary.\medskip With this setting, we are ready to state our results. For the sake of clarity, we first give some simplified versions of our results which are ``easy to read'' and deal with ``concrete'' situations. These results will indeed be just particular cases of more general theorems that will be presented later in Section~\ref{FFGG}. More precisely, the results we present are basically of two types. The first type deals with \emph{exterior sets}. In this case, the equation is assumed to hold in~$\mathbb{R}^N \setminus \overline{G}$, where~$G$ is a non-empty open bounded set of $\mathbb{R}^N$, not necessarily connected, whose boundary is of class~$\mathcal{C}^2$, and~$\overline{G}$ denotes the closure of $G$. We sometimes split $G$ into its connected components by writing \[ G= \bigcup_{i=1}^k G_i, \] where each $G_i$ is a bounded, open and connected set of class~$\mathcal{C}^2$, and, to avoid pathological situations, we suppose that $k$ is finite. Notice that $\overline{G_i} \cap \overline{G_j} = \emptyset$ if $i \neq j$. \medskip Then, the prototype of our results for exterior sets is the following. \begin{theorem}\label{thm: example} Let us assume that there exists $u \in \mathcal{C}^s(\mathbb{R}^N)$ such that \[ \begin{cases} (-\Delta)^s u =0 & \text{in $\mathbb{R}^N \setminus \overline{G}$} \\ u = a>0 & \text{in $\overline{G}$} \\ u(x) \to 0 & \text{as $|x| \to +\infty$} \\ (\partial_{\nu})_s u= const.= \alpha_i \in \mathbb{R} & \text{on $\partial G_i$}. \end{cases} \] Then $G$ is a ball, and $u$ is radially symmetric and radially decreasing with respect to the centre of $G$. \end{theorem} The second case dealt with in this paper is that of \emph{annular sets}. Namely, in this case the equation is supposed to hold in~$\Omega \setminus \overline{G}$, where~$\Omega$ is a bounded set that contains~$G$ and such that~$\Omega \setminus \overline{G}$ is of class $\mathcal{C}^2$. \medskip Then, the prototype of our results for annular sets is the following.
\begin{theorem}\label{thm: example 2} Let us assume that there exists $u \in \mathcal{C}^s(\mathbb{R}^N)$ such that \[ \begin{cases} (-\Delta)^s u =0 & \text{in $\Omega \setminus \overline{G}$} \\ u = a>0 & \text{in $\overline{G}$} \\ u= 0 & \text{in $\mathbb{R}^N \setminus \overline{\Omega}$} \\ (\partial_{\nu})_s u= const.= \alpha_i \in \mathbb{R} & \text{on $\partial G_i$} \\ (\partial_{\nu})_s u= const.= \beta \in \mathbb{R} & \text{on $\partial \Omega$}, \end{cases} \] where $(\partial_\nu)_s u$ denotes the inner (with respect to $\Omega \setminus \overline{G}$) normal $s$-derivative of $u$. Then $G$ and $\Omega$ are concentric balls, and $u$ is radially symmetric and radially decreasing with respect to the centre of $G$. \end{theorem} We stress that, here and in the following, we do not assume that $\mathbb{R}^N \setminus \overline{G}$ (resp. $\Omega \setminus \overline{G}$) is connected; this is why we do not use the common terminology \emph{exterior domain} (resp. \emph{annular domain}), which indeed usually refers to an exterior connected set (resp. annular connected set). We also stress that, while the Dirichlet boundary datum has to be the same for all the connected components of $G$, the Neumann boundary datum can vary. \medskip Overdetermined elliptic problems have a long history, which begins with the seminal paper by J. Serrin \cite{Serrin}. A complete review of the results which have been obtained since then goes beyond the aims of this work. In what follows, we only review the contributions regarding exterior or annular domains in the local case, and the few results which are available about overdetermined problems for the fractional Laplacian. Overdetermined problems for the standard Laplacian in exterior domains were first studied by W. Reichel in \cite{Reichel1}, where he assumed that both $G$ and $\mathbb{R}^N \setminus \overline{G}$ are connected. In such a situation, W. Reichel proved that if there exists $u \in \mathcal{C}^2(\mathbb{R}^N \setminus G)$ (i.e. of class $\mathcal{C}^2$ up to the boundary of the exterior domain) such that \begin{equation}\label{pb: overdet local} \begin{cases} -\Delta u =f(u) & \text{in $\mathbb{R}^N \setminus \overline{G}$} \\ u = a>0 & \text{on $\partial G$} \\ u(x) \to 0 & \text{as $|x| \to +\infty$}\\ \partial_{\nu} u= const.= \alpha \le 0 & \text{on $\partial G$} \\ 0 \le u <a & \text{in $\mathbb{R}^N \setminus \overline{G}$}, \end{cases} \end{equation} where $\partial_\nu$ denotes the usual normal derivative, and $f(t)$ is a locally Lipschitz function, non-increasing for non-negative and small values of $t$, then $G$ has to be a ball, and $u$ is radially symmetric and radially decreasing with respect to the centre of $G$. The proof is based upon the moving planes method. With a different approach, based upon the moving spheres method, A. Aftalion and J. Busca \cite{AftBus} addressed the same problem when $f$ is not necessarily non-increasing for small positive values of its argument. In particular, they could treat the interesting case $f(t) = t^p$ for $N/(N-2) < p \le (N+2)/(N-2)$. Afterwards, B. Sirakov \cite{Sirakov} proved that the result obtained in~\cite{Reichel1} holds without the assumption $u<a$, and for possibly multi-connected sets $G$.
Moreover, he allowed different boundary conditions on the different components of $G$: that is, $u=a>0$ and $\partial_\nu u = \alpha \le 0$ on $\partial G$ can be replaced by $u=a_i>0$ and $\partial_\nu u=\alpha_i \le 0$ on $\partial G_i$, with $a_i$ and $\alpha_i$ depending on $i=1,\dots,k$. His method works also in the setting considered in \cite{AftBus}. We point out that in \cite{Sirakov} a quasi-linear regular strongly elliptic operator was considered instead of the Laplacian. Concerning quasi-linear but possibly degenerate operators, we refer to \cite{Reichel2}. As far as overdetermined problems in annular domains are concerned, we refer the reader to \cite{Aless, Philippin, PayPhil, Reichel3}. In \cite{Aless}, G. Alessandrini proved the local counterpart of Theorem \ref{thm: example 2} for quasi-linear, possibly degenerate, operators. This enhanced the results in \cite{Philippin, PayPhil}. In \cite{Reichel3}, W. Reichel considered inhomogeneous equations for the Laplace operator in domains with one cavity. Regarding the nonlocal framework, the natural counterpart of Serrin's problem for the fractional $s$-Laplacian ($0<s<1$) has recently been studied by M. M. Fall and S. Jarohs in \cite{FallJarohs}. In that contribution the authors introduced the main tools for dealing with nonlocal overdetermined problems, such as comparison principles for anti-symmetric functions and a nonlocal version of Serrin's corner lemma. Such results will be used in our work. We refer also to \cite{Dalibard}, where the authors considered a similar problem in dimension $N=2$, and for $s=1/2$. \medskip As already mentioned, Theorems~\ref{thm: example} and~\ref{thm: example 2} are just simplified versions of more general results that we obtained. The next section will present the results obtained in full generality. \subsection{The general setting}\label{FFGG} Now we present our results in full generality. For this, first we consider the overdetermined problem in an exterior set~$\mathbb{R}^N \setminus \overline{G}$, namely: \begin{equation}\label{pb: overdet} \begin{cases} (-\Delta)^s u =f(u) & \text{in $\mathbb{R}^N \setminus \overline{G}$} \\ u = a>0 & \text{in $\overline{G}$} \\ (\partial_{\nu})_s u= const.= \alpha_i \in \mathbb{R} & \text{on $\partial G_i$.} \\ \end{cases} \end{equation} Note that the $s$-Neumann boundary datum can depend on $i$. Our first main result in this framework is the following. \begin{theorem}\label{thm: main 1} Let $f(t)$ be a locally Lipschitz function, non-increasing for nonnegative and small values of $t$, and let $\alpha_i <0$ for every $i$. If there exists a weak solution $u \in \mathcal{C}^s(\mathbb{R}^N)$ of \eqref{pb: overdet}, satisfying \[ \text{$0 \le u < a$ in $\mathbb{R}^N \setminus \overline{G}$ and $u(x) \to 0$ as $|x| \to +\infty$}, \] then $G$ is a ball, and $u$ is radially symmetric and radially decreasing with respect to the centre of $G$. \end{theorem} The concept of weak solution and its basic properties will be recalled in Section~\ref{sec: preliminaries} for the reader's convenience. We point out that under additional assumptions on $f$, the assumption $\alpha_i < 0$ in Theorem~\ref{thm: main 1} can be dropped, and the condition $0 \le u < a$ can be relaxed. \begin{theorem}\label{thm: main 1 prime} Let $f$ be a locally Lipschitz function, non-increasing in the whole interval $[0,a]$.
If there exists a weak solution $u \in \mathcal{C}^s(\mathbb{R}^N)$ of \eqref{pb: overdet} satisfying \[ \text{$0 \le u \le a$ in $\mathbb{R}^N \setminus \overline{G}$ and $u(x) \to 0$ as $|x| \to +\infty$}, \] then $G$ is a ball, and $u$ is radially symmetric and radially decreasing with respect to the centre of $G$. \end{theorem} Other results in the same direction are the following. \begin{corollary}\label{corol: main 1} Under the assumptions of Theorem \ref{thm: main 1}, let us assume that $f(a) \le 0$; then the request $\alpha_i<0$ in the statement of Theorem~\ref{thm: main 1} is not necessary, and condition $0 \le u <a$ can be replaced by $0 \le u \le a$. \end{corollary} \begin{corollary}\label{corol: subharmonicity} If under the assumptions of Theorem \ref{thm: main 1}, we suppose that $f(t) \le 0$ for $t \ge 0$, then the request $\alpha_i<0$ is not necessary, and condition $0 \le u <a$ can be replaced by $u \ge 0$. Analogously, if under the assumptions of Theorem \ref{thm: main 1 prime}, we suppose that $f(t) \le 0$ for $t \ge 0$, then condition $0 \le u \le a$ can be replaced by $u \ge 0$. \end{corollary} The proof of the corollaries is based upon simple comparison arguments. When $f \ge 0$, in the same way one could show that assumption $u \ge 0$ is not necessary, obtaining in particular Theorem \ref{thm: example}. It is worth noticing that the regularity assumption $u \in \mathcal{C}^s(\mathbb{R}^N)$ is natural in our framework, see Theorem \ref{thm: regularity} in the appendix at the end of the paper: what we mean is that each bounded weak solution of the first two equations in \eqref{pb: overdet} is of class $\mathcal{C}^s(\mathbb{R}^N)$. All the previous results (and the forthcoming ones) could have been stated for bounded weak solutions, without any regularity assumption. We preferred to assume from the beginning that $u \in \mathcal{C}^s(\mathbb{R}^N)$, since in this way the condition on $(\partial_\nu)_s u$ immediately makes sense, without further observations. The $\mathcal{C}^s(\mathbb{R}^N)$ regularity is optimal, as shown by the simple example \[ \begin{cases} (-\Delta)^s u=1 & \text{in $B_1$}\\ u=0 & \text{in $\mathbb{R}^N \setminus B_1$}, \end{cases} \] which has the explicit solution $u(x)= \gamma_{N,s} (1-|x|^2)^s_+$, where $v_+$ denotes the positive part of $v$, and $\gamma_{N,s}$ is a normalization constant depending on $N$ and on $s$. Also, eigenfunctions are not better than ${\mathcal{C}}^s(\mathbb{R}^N)$, see e.g.~\cite{MR3233760}. \begin{remark}\label{rem: on regularity} In the local setting \cite{Reichel1}, see also \cite{Sirakov}, the solution $u$ of \eqref{pb: overdet local} is supposed to be of class $\mathcal{C}^2$ up to the boundary. In this way, the authors could avoid assuming that $\alpha <0$: indeed, in Proposition 1 in \cite{Reichel1}, as well as in Step 4 in the proof of the main results in \cite{Sirakov}, the authors computed the second derivatives of $u$ on the boundary of $G$. In our context, it does not seem natural to ask that $u$ has better regularity than $\mathcal{C}^s$, and this is the main reason why we need to suppose $\alpha<0$ in Theorem~\ref{thm: main 1}. \end{remark} \begin{remark} We think it is worth pointing out that there exist solutions of \eqref{pb: overdet} satisfying all the assumptions of the above statements when $G$ is a ball. Such existence results will be stated and proved at the end of the paper, in Section~\ref{sec: existence}, for the sake of completeness.
\end{remark} \begin{remark} As already pointed out, the fact that $\mathbb{R}^N \setminus \overline{G}$ is not supposed to be connected marks a difference with respect to the local case. The same difference, which arises also in \cite{FallJarohs}, is related to the non-local nature of both the fractional Laplacian and the boundary conditions. \end{remark} Now we present our results in the general form for an annular set $\Omega \setminus \overline{G}$ (recall that in this case $\Omega$ is bounded, and, by the regularity assumed, $G$ cannot be internally tangent to $\partial \Omega$). Recall that $\Omega$ can be multi-connected but, in this case, we assume that $\Omega$ has a finite number of connected components. The natural counterpart of the overdetermined problem \eqref{pb: overdet} for annular sets is given by \begin{equation}\label{pb: overdet bounded} \begin{cases} (-\Delta)^s u =f(u) & \text{in $\Omega \setminus \overline{G}$} \\ u = a>0 & \text{in $\overline{G}$} \\ u = 0 & \text{in $\mathbb{R}^N \setminus \overline{\Omega}$} \\ (\partial_{\nu})_s u= \alpha_i \in \mathbb{R} & \text{on $\partial G_i$} \\ (\partial_{\nu})_s u= \beta \in \mathbb{R} & \text{on $\partial \Omega$}, \end{cases} \end{equation} where the notation $(\partial_{\nu})_s$ is used for the inner normal $s$-derivative on $\partial (\Omega \setminus \overline{G})$. In this setting we have: \begin{theorem}\label{thm: main 2} Let $f$ be locally Lipschitz. Let us assume that there exists a weak solution $u \in \mathcal{C}^s(\mathbb{R}^N)$ of \eqref{pb: overdet bounded} satisfying \[ 0 < u < a \qquad \text{in $\Omega \setminus \overline{G}$}. \] Then $\Omega$ and $G$ are concentric balls, and $u$ is radially symmetric and radially decreasing with respect to their centre. \end{theorem} Note that in this case no assumption on the monotonicity of $f$, or on the sign of $\alpha_i$ and $\beta$, is needed. As for the problem in exterior sets, the condition $0 < u <a$ can be relaxed under additional assumptions on $f$. \begin{theorem}\label{thm: main 2 prime} Let $f$ be a locally Lipschitz function, non-increasing in $[0,a]$. If there exists a weak solution $u \in \mathcal{C}^s(\mathbb{R}^N)$ of \eqref{pb: overdet bounded} satisfying \[ \text{$0 \le u \le a$ in $\mathbb{R}^N$}, \] then both $\Omega$ and $G$ are concentric balls, and $u$ is radially symmetric and radially decreasing with respect to their centre. \end{theorem} \begin{corollary}\label{corol: main 2} Under the assumptions of Theorem \ref{thm: main 2}, let us assume that $f(a) \le 0$; then the condition $0 < u <a$ can be replaced by $0 < u \le a$ in $\mathbb{R}^N$. Analogously, if $f(0) \ge 0$, then the condition $0 < u <a$ can be replaced by $0 \le u < a$ in $\mathbb{R}^N$. \end{corollary} Clearly, if both $f(a) \le 0$ and $f(0) \ge 0$, we obtain the thesis for $\alpha_i,\beta \in \mathbb{R}$ and $0 \le u \le a$, and then we also obtain Theorem~\ref{thm: example 2} as a particular case. The proof of Corollary \ref{corol: main 2} is analogous to that of Corollary \ref{corol: main 1} (and thus will be omitted). One could also state a counterpart of Corollary \ref{corol: subharmonicity} in the present setting.\medskip At last, we observe that when $G$ (or $\Omega \setminus \overline{G}$) is a priori supposed to be radial, our method permits us to deduce the radial symmetry of the solutions of the Dirichlet problem. In the local framework, results of this type have been proved in \cite{Reichel3,Reichel1,Sirakov}.
\begin{theorem}\label{thm: radial 1} Let $B_{\rho}(x_0)$ be a ball, and let $u \in \mathcal{C}^s(\mathbb{R}^N)$ be a weak solution of \[ \begin{cases} (-\Delta)^s u= f(u) & \text{in $\mathbb{R}^N \setminus \overline{B_\rho(x_0)}$} \\ u = a>0 & \text{in $\overline{B_\rho(x_0)}$}, \end{cases} \] such that \begin{equation}\label{cond Neumann} (\partial_\nu)_s u <0 \quad \text{on $\partial B_{\rho}(x_0)$}. \end{equation} If $f(t)$ is a locally Lipschitz function, non-increasing for nonnegative and small values of $t$, and $u$ satisfies \[ \text{$0 \le u < a$ in $\mathbb{R}^N \setminus \overline{B_{\rho}(x_0)}$ and $u(x) \to 0$ as $|x| \to +\infty$}, \] then $u$ is radially symmetric and radially decreasing with respect to $x_0$. \end{theorem} When compared with the local results, condition \eqref{cond Neumann} may not seem natural. On the other hand, for the reasons already explained in Remark \ref{rem: on regularity}, we could not omit it in general. Nevertheless, under additional assumptions on $f$ it can be dropped. \begin{corollary}\label{corol: radial} If under the assumptions of Theorem \ref{thm: radial 1} we suppose that $f(a) \le 0$, then \eqref{cond Neumann} can be omitted, and condition $0 \le u<a$ can be replaced by $0 \le u \le a$. If moreover $f(t) \le 0$ for every $t \ge 0$, then condition $0 \le u <a$ can be replaced by $u \ge 0$. \end{corollary} It is clear that similar symmetry results hold for Dirichlet problems in annuli. An interesting limit case takes place when $G=\{x_0\}$ is a single point of $\mathbb{R}^N$. The reader can easily check that the proof of Theorem \ref{thm: radial 1} works also in this setting. With some extra work, we can actually obtain a better result. \begin{theorem}\label{thm: radial point} Let us assume that there exists a bounded weak solution of \[ \begin{cases} (-\Delta)^s u=f(u) & \text{in $\mathbb{R}^N \setminus \{x_0\}$} \\ u(x_0)=a. \end{cases} \] If $f(t)$ is a locally Lipschitz function, non-increasing for nonnegative and small values of $t$, and $u$ satisfies \[ \text{$0 \le u \le a$ in $\mathbb{R}^N \setminus \{x_0\}$ and $u(x) \to 0$ as $|x| \to +\infty$}, \] then $u$ is radially symmetric and radially decreasing with respect to $x_0$. \end{theorem} Notice that no condition on the $s$-normal derivative is needed. For this reason, we omitted the assumption $u \in \mathcal{C}^s(\mathbb{R}^N)$, which anyway, by Theorem \ref{thm: regularity}, would be natural also in this context. As a straightforward corollary, we obtain a variant of the Gidas-Ni-Nirenberg symmetry result for the fractional Laplacian. \begin{corollary} Let $u$ be a nonnegative bounded weak solution of $(-\Delta)^s u=f(u)$ in $\mathbb{R}^N$. If $f(t)$ is a locally Lipschitz function, non-increasing for nonnegative and small values of $t$, and $u$ satisfies \[ \text{$u(x) \to 0$ as $|x| \to +\infty$}, \] then $u$ is radially symmetric and radially decreasing with respect to a point of $\mathbb{R}^N$. \end{corollary} Analogous symmetry results have been proved in \cite{FelmerWang}, for a different class of nonlinearities $f$ (having non-empty intersection with the one considered here). We point out that, with respect to \cite{FelmerWang}, we do not require any condition at infinity on the decay of $u$. \subsection{Outline of the paper} The basic technical definitions needed in this paper will be recalled in Section~\ref{sec: preliminaries}.
In Section \ref{sec: over exterior} we consider overdetermined problems in exterior sets, proving Theorems \ref{thm: main 1}, \ref{thm: main 1 prime} and Corollaries \ref{corol: main 1}, \ref{corol: subharmonicity}. Section \ref{sec: over annular} is devoted to overdetermined problems in annular sets. In Section \ref{sec: radial} we study the symmetry of the solutions when the domain is a priori supposed to be radial. In Section \ref{sec: existence} we present some existence results. Finally, in a brief appendix we discuss the regularity of bounded weak solutions of Dirichlet fractional problems in unbounded sets. \section{Definitions and preliminaries}\label{sec: preliminaries} We collect in this section some definitions and results which will be used in the proofs of the main theorems. \subsection{Definitions} Let $N \ge 1$ and $s \in (0,1)$. For a function $u \in \mathcal{C}^\infty_c(\mathbb{R}^N)$, the fractional $s$-Laplacian is defined by \begin{align*} (-\Delta)^s u(x) :&= c_{N,s} \pv \int_{\mathbb{R}^N} \frac{u(x)-u(y)}{|x-y|^{N+2s}}\,dy \\ & = c_{N,s} \lim_{\varepsilon \to 0^+} \int_{ \{ |y-x| > \varepsilon \} } \frac{u(x)-u(y)}{|x-y|^{N+2s}} \, dy , \end{align*} where $c_{N,s}$ is a normalization constant, and $\pv$ stands for ``principal value". In the rest of the paper, to simplify the notation we will always omit both $c_{N,s}$ and $\pv$. The bilinear form associated to the fractional Laplacian is \[ \mathcal{E}(u,v) := \frac{c_{N,s}}{2} \int_{\mathbb{R}^{2N}} \frac{(u(x)-u(y)) (v(x)-v(y) )}{|x-y|^{N+2s}}\,dx\,dy. \] It can be proved that $\mathcal{E}$ defines a scalar product, and we denote by $\mathcal{D}^s(\mathbb{R}^N)$ the completion of $\mathcal{C}^\infty_c(\mathbb{R}^N)$ with respect to the norm induced by $\mathcal{E}$. We also introduce, for an arbitrary open set $\Omega \subset \mathbb{R}^N$, the space $H^s(\Omega):= L^2(\Omega) \cap \mathcal{D}^s(\mathbb{R}^N)$. It is a Hilbert space with respect to the scalar product $\mathcal{E}(u,v) + \langle u,v \rangle_{L^2(\Omega)}$, where $\langle \cdot, \cdot \rangle_{L^2(\Omega)}$ stands for the scalar product in $L^2(\Omega)$. The case $\Omega=\mathbb{R}^N$ is admissible. We write that $u \in H^s_{\loc}(\mathbb{R}^N)$ if $u \in H^s(K)$ for every compact set $K \subset \mathbb{R}^N$. A function $w$ is a weak supersolution of \[ (-\Delta)^s w \ge g(x) \qquad \text{in $\Omega$}, \] if $w \in \mathcal{D}^s(\mathbb{R}^N)$ and \begin{equation}\label{weak sol} \mathcal{E}(w,\varphi) \ge \int_{\Omega} g(x) \varphi(x) \, dx \qquad \forall \varphi \in \mathcal{C}^\infty_c(\Omega), \ \varphi \ge 0. \end{equation} If the opposite inequality holds, we write that $w$ is a weak subsolution. If $w \in \mathcal{D}^s(\mathbb{R}^N)$ and equality holds in \eqref{weak sol} for every $\varphi \in \mathcal{C}^\infty_c(\Omega)$, then we write that $w$ is a weak solution of \begin{equation}\label{vis sol} (-\Delta)^s w = g(x) \qquad \text{in $\Omega$}. \end{equation} Since we will always consider weak solutions (supersolutions, subsolutions), the adjective ``weak" will sometimes be omitted. \subsection{Regularity results}\label{sub: regularity} Let $u \in L^\infty(\mathbb{R}^N) \cap \mathcal{D}^s(\mathbb{R}^N)$ be a weak solution of \begin{equation}\label{pb: Dirichlet} \begin{cases} (-\Delta)^s u= f(u) & \text{in $\mathbb{R}^N \setminus \overline{G}$} \\ u=a & \text{in $\overline{G}$}, \end{cases} \end{equation} with $f$ locally Lipschitz continuous.
By Theorem \ref{thm: regularity} in the appendix, we know that \begin{itemize} \item $u \in \mathcal{C}^{1,\sigma}(\mathbb{R}^N \setminus \overline{G})$ for some $\sigma \in (0,1)$ (\emph{interior regularity}); \item $u \in \mathcal{C}^s(\mathbb{R}^N)$, and in particular $u/\delta^s \in \mathcal{C}^{0,\gamma}(\mathbb{R}^N \setminus G)$ for some $\gamma \in (0,1)$, where $\delta$ denotes the distance from the boundary of $G$ (\emph{boundary regularity}). \end{itemize} Since any weak solution $u \in \mathcal{C}^s(\mathbb{R}^N)$ of \eqref{pb: Dirichlet} such that $u(x) \to 0$ as $|x| \to +\infty$ is in $L^\infty(\mathbb{R}^N)$ (and also in $\mathcal{D}^s(\mathbb{R}^N)$, by definition of weak solution), the previous regularity results will be used throughout the rest of the paper. \subsection{Comparison principles} We recall a strong maximum principle and a Hopf lemma for anti-symmetric functions \cite[Proposition 3.3 and Corollary 3.4]{FallJarohs}. In the quoted paper, the strong maximum principle is stated under the assumption that $\Omega$ is bounded, but this is not necessary for the proof. As a result, the following holds. \begin{proposition}[Fall, Jarohs \cite{FallJarohs}]\label{STRONG} Let $H \subset \mathbb{R}^N$ be a half-space, and let $\Omega \subset H$ (not necessarily bounded). Let $c \in L^\infty(\Omega)$, and let $w$ satisfy \[ \begin{cases} (-\Delta )^s w + c(x) w \ge 0 & \text{in $\Omega$} \\ w(x) = -w(\bar x) & \text{in $\mathbb{R}^N$} \\ w \ge 0 & \text{in $H$}, \end{cases} \] where $\bar x$ denotes the reflection of $x$ with respect to $\partial H$. Then either $w>0$ in $\Omega$, or $w \equiv 0$ in $H$. Furthermore, if $x_0 \in \partial \Omega \setminus \partial H$ and $w(x_0) = 0$, then $(\partial_\eta)_s w(x_0)<0$, where $\eta$ is the outer unit normal vector of $\Omega$ at $x_0$. \end{proposition} \begin{remark} If $\Omega \subset H$ shares part of its boundary with the hyperplane $\partial H$, and $x_0 \in \partial H \cap \partial \Omega$, then we cannot apply the Hopf lemma at $x_0$, since it is necessary to suppose that $x_0$ lies on the boundary of a ball compactly contained in $H$. This assumption is used in the proof in \cite{FallJarohs}. \end{remark} In the proof of Theorem \ref{thm: main 1}, we shall need a version of the Hopf lemma allowing us to deal with points of $\partial \Omega \cap \partial H$. To be more precise, let $\Omega'$ be a $\mathcal{C}^2$ set in $\mathbb{R}^N$, symmetric with respect to the hyperplane $T$, and let $H$ be a half-space such that $T=\partial H$. Let $\Omega := H \cap \Omega'$, and let us assume that $w \in \mathcal{C}^s(\mathbb{R}^N)$ satisfies \[ \begin{cases} (-\Delta)^s w + c(x) w = 0 & \text{in $\Omega'$} \\ w(x) = -w(\bar x) \\ w > 0 & \text{in $\Omega$} \\ w \ge 0 & \text{in $H$}, \end{cases} \] where $c \in L^\infty(\Omega')$ and $\bar x$ denotes the reflection of $x$ with respect to $T$. We note that $\partial \Omega \cap T$ is divided into two parts: a regular part of Hausdorff dimension $N-1$, which is a relatively open set in $T$, and a singular part of Hausdorff dimension $N-2$, which is $\partial \Omega' \cap T$. We also note that, by anti-symmetry, $w(x)=0$ for every $x \in T$. \begin{proposition}\label{prop: new hopf} In the previous setting, if $x_0$ is a point in the regular part of $\partial \Omega \cap T$, then \begin{equation}\label{1st} -\liminf_{t \to 0^+} \frac{w(x_0-t\nu(x_0))}{t} <0, \end{equation} where $\nu(x_0)$ is the outer unit normal vector to $T=\partial H$ at $x_0$.
\end{proposition} \begin{remark} At a first glance it could be surprising that a boundary lemma involving a fractional problem gives a result on the full outer normal derivative of the function. Indeed, functions satisfying a fractional equation of order~$2s$ are usually not better than~$\mathcal{C}^s(\mathbb{R}^N)$ at the boundary, so the first order incremental quotient in~\eqref{1st} is in general out of control. But in our case, if we look at the picture more carefully, we realize that since we are assuming that $w$ is an anti-symmetric solution in the whole $\Omega'$ (which contains both $\Omega$ and its reflection), any $x_0$ on the regular part of $\partial \Omega \cap T$ is actually an interior point for $w$, and hence it is natural to expect some extra regularity. \end{remark} \begin{proof}[Proof of Proposition~\ref{prop: new hopf}] Without loss of generality, we can assume that \[ T=\{x_N=0\} \quad \text{and} \quad H=\{x_N >0\}, \] so that $\Omega=\Omega' \cap \{x_N >0\}$. Let $\rho>0$ be such that $B_{\rho}(x_0) \Subset \Omega'$. If necessary replacing $\rho$ with a smaller quantity, we can suppose that $B:=B_{\rho}(x_0',4\rho)$ and $B':=B_{\rho}(x_0',-4\rho)$ are both compactly contained in $\Omega'$. Now we follow the strategy of Lemma 4.4 in \cite{FallJarohs}: for $\alpha >0$ to be determined in the sequel, we consider the barrier \[ h(x):= x_N \left( \varphi(x) + \alpha(d_1(x) +d_2(x) ) \right), \] where \[ \varphi(x) = (\rho^2 - |x-x_0|^2)_+^s \] is, up to a positive multiplicative constant, the solution of \[ \begin{cases} (-\Delta)^s \varphi = 1 & \text{in $B_\rho(x_0)$} \\ \varphi = 0 & \text{in $\mathbb{R}^N \setminus \overline{B_\rho(x_0)}$}, \end{cases} \] and \[ d_1(x) := (\rho-|x-(x_0', 4\rho)|)_+ \qquad d_2(x) := (\rho-|x-(x_0', -4\rho)|)_+ \] are the truncated distance functions from the boundary of $B$ and $B'$, respectively. With this definition, we can compute $(-\Delta)^s h$ exactly as in Lemma 4.4 in \cite{FallJarohs}, proving that \[ (-\Delta)^s h(x) - c(x) h(x) \le (C_1-\alpha C_2) |x_N| \le 0 \qquad \text{for every $x \in B_{\rho}(x_0) \cap \{x_N >0\}$}, \] provided $\alpha>0$ is sufficiently large. By continuity and recalling that $w>0$ in $\Omega$, we deduce that $w \ge C >0$ in $B$. This permits us to choose a positive constant $\sigma >0$ such that $w \ge \sigma h$ in $\overline{B}$, and hence $w-\sigma h \ge 0$ in $\Omega \setminus B_{\rho}(x_0)$. Therefore, the weak maximum principle (Proposition 3.1 in \cite{FallJarohs}) implies that $w \ge \sigma h$ in $\Omega$, and in particular \[ w(x_0',t) \ge \sigma t (\rho^2 -t^2)_+^s \qquad \forall t \in [0,\rho), \] which gives the desired result. \end{proof} As far as the overdetermined problem in annular sets (Theorem \ref{thm: main 2}) is concerned, we shall make use of a maximum principle in domains of small measure, proved in \cite[Proposition 2.4]{JarohsWeth} in a parabolic setting. In our context, the result reads as follows. \begin{proposition}[Jarohs, Weth \cite{JarohsWeth}]\label{SMALL} Let $H \subset \mathbb{R}^N$ be a half-space and let $c_\infty>0$. There exists $\delta=\delta(N,s,c_\infty)>0$ such that if $U \subset H$ with $|U| < \delta$, and $w$ satisfies in a weak sense \[ \begin{cases} (-\Delta)^s w +c(x) w \ge 0 & \text{in $U$} \\ w \ge 0 & \text{in $H \setminus U$} \\ w(x) = -w(\bar x), \end{cases} \] with $\|c\|_{L^\infty(U)} < c_\infty$, then $w \ge 0$ in $U$. \end{proposition} \section{The overdetermined problem in exterior sets}\label{sec: over exterior} In the first part of this section, we prove Theorem \ref{thm: main 1}.
We follow the same scheme used by Reichel in \cite{Reichel1}, applying the moving planes method to show that for any direction $e \in \mathbb{S}^{N-1}$ there exists $\bar \lambda=\bar \lambda(e)$ such that both the set $G$ and the solution $u$ are symmetric with respect to a hyperplane \[ T_{\lambda}:= \left\{ x \in \mathbb{R}^N: \langle x, e \rangle = \lambda\right\}, \] where $\langle\cdot,\cdot \rangle$ denotes the Euclidean scalar product in $\mathbb{R}^N$. In the following we fix the direction $e=e_N$ and use the notation $x=(x',x_N) \in \mathbb{R}^{N-1} \times \mathbb{R}$ for points of $\mathbb{R}^N$. For $\lambda \in \mathbb{R}$, we set \begin{equation}\label{notation} \begin{split} T_\lambda & := \{x \in \mathbb{R}^N: x_N=\lambda\}; \\ H_{\lambda} & := \{x \in \mathbb{R}^N: x_N>\lambda\}; \\ x^\lambda & := (x',2\lambda-x_N) \quad \text{the reflection of $x$ with respect to $T_\lambda$}; \\ A^\lambda&:= \text{the reflection of a given set $A$ with respect to $T_\lambda$}; \\ \Sigma_\lambda &:= H_\lambda \setminus \overline{G^\lambda} \quad \text{the so-called \emph{reduced half-space}}; \\ d_i &:= \inf \left\{ \lambda \in \mathbb{R}: T_\mu \cap \overline{G_i} = \emptyset \quad \text{for every $\mu > \lambda$}\right\}. \end{split} \end{equation} \begin{figure}[ht] \includegraphics[height=5cm]{FIG1SV.pdf} \caption{A picture of the reflected sets and the reduced half-space.} \end{figure} It is known that, for $\lambda$ a little smaller than $d_i$, the reflection of $G_i \cap H_\lambda$ with respect to $T_\lambda$ lies inside $G_i$, namely \begin{equation}\label{lies i} (\overline{G_i} \cap H_\lambda)^\lambda \subset G_i \cap H_\lambda^\lambda \quad \text{with strict inclusion}. \end{equation} In addition, $\langle \nu(x) ,e_N \rangle > 0$ for every $x \in \partial G_i \cap T_\lambda$ (we recall that $\nu$ denotes the outer normal vector on $\partial G_i$, thus directed towards the interior of $\mathbb{R}^N \setminus \overline{G_i}$); this remains true for decreasing values of $\lambda$ up to a limiting position $\bar \lambda_i$ such that one of the following alternatives takes place: \begin{itemize} \item[($i$)] \emph{internal tangency}: the reflection $(\overline{G_i} \cap H_\lambda)^\lambda$ becomes internally tangent to $\partial G_i$; \item[($ii$)] \emph{orthogonality condition}: $\langle \nu(x_0) ,e_N \rangle = 0$ for some $x_0 \in \partial G_i \cap T_\lambda$. \end{itemize} Let $\bar \lambda:= \max\{\bar \lambda_i: i=1,\dots,k\}$. For $\lambda > \bar \lambda$, it is clear that inclusion \eqref{lies i} holds for every $i$. Since $\overline{G_i} \cap \overline{G_j} = \emptyset$ for every $i \neq j$, it follows straightforwardly that \begin{equation}\label{lies} (\overline{G} \cap H_\lambda)^\lambda \subset G \cap H_\lambda^\lambda \quad \text{with strict inclusion}. \end{equation} Furthermore, $\bar \lambda$ can be characterized as \[ \bar \lambda= \inf \left\{ \lambda \in \mathbb{R} \left| \begin{array}{l} \text{$(\overline{G} \cap H_\mu)^\mu \subset G \cap H_\mu^\mu$ with strict inclusion, and} \\ \text{$\langle \nu(x),e_N \rangle >0$ for every $x \in T_\mu \cap \partial G$, for every $\mu >\lambda$} \end{array} \right.\right\}. \] This simple observation allows us to treat the case of multi-connected interior sets $G$ essentially as if they consisted of only one domain. Two preliminary results regarding the geometry of the reduced half-space are contained in the following statement.
\begin{lemma}\label{lem: geom remark} For every $\lambda \ge \bar \lambda$, the following properties hold: \begin{itemize} \item[($i$)] $\overline{G} \cap H_\lambda$ is convex in the $e_N$-direction; \item[($ii$)] the set $(\mathbb{R}^N \setminus \overline{G}) \cap H_\lambda$ is connected. \end{itemize} \end{lemma} \begin{proof} For property ($i$), we show that for any $\lambda \ge \bar \lambda$, if a point~$x=(x',x_N)$ belongs to~$\overline{G}\cap H_\lambda$, then also~$(x',t)\in \overline{G} \cap H_\lambda$ for every~$t\in [\lambda,x_N)$. If this is not true, then there exist $(x',x_N) \in G \cap H_\lambda$ and $(x',t) \not \in G$ with $t \in [\lambda,x_N)$. For $\lambda':= (x_N+t)/2 \ge \bar \lambda$, we have that $(x',t)= (x',x_N)^{\lambda'}$, but $(x',t) \not \in G$, which is in contradiction with the fact that, since $\lambda'>\bar \lambda$, the reflection of~$\overline{G} \cap H_{\lambda'}$ with respect to~$T_{\lambda'}$ remains inside~$G$, see \eqref{lies}. As far as property ($ii$) is concerned, given two points~$x^{(1)}$, $x^{(2)}\in \Omega \cap H_\lambda$ (here and below $\Omega := \mathbb{R}^N \setminus \overline{G}$), we fix a large~$M>0$ and consider the two vertical segments~$x^{(i)}+t e_N$, with~$t\in[0, M-x^{(i)}_N]$. By point ($i$), these segments lie in~$\Omega\cap H_\lambda$. Each segment connects~$x^{(i)}$ with~$y^{(i)}:= ((x^{(i)})', M)$. Then, since~$G$ is bounded, if~$M$ is large we can connect~$ y^{(1)}$ and~$y^{(2)}$ with a horizontal segment~$ty^{(2)}+(1-t)y^{(1)}$ lying well outside~$G$. In this way, by considering the two vertical segments and the horizontal one as a single polygonal, we have joined~$x^{(1)}$ and~$x^{(2)}$ by a continuous path that lies in~$\Omega\cap H_\lambda$. This shows that~$\Omega\cap H_\lambda$ is connected. \end{proof} We define $u_\lambda(x):= u(x^\lambda)$ and $w_\lambda(x):= u_\lambda(x)-u(x)$. Notice that \begin{equation}\label{ob1} (-\Delta)^s w_\lambda + c_\lambda(x) w_\lambda=0 \qquad \text{in $\Sigma_\lambda$}, \end{equation} where \begin{equation}\label{def: c_lambda} c_\lambda(x):= \begin{cases} -\displaystyle\frac{f(u_\lambda(x)) -f(u(x))}{u_\lambda(x)-u(x)} & \text{if $u_\lambda(x) \neq u(x)$} \\ 0 & \text{if $u_\lambda(x) = u(x)$,} \end{cases} \end{equation} is in $L^\infty(\mathbb{R}^N)$ since $u$ is bounded and $f$ is locally Lipschitz continuous. We aim at proving that the set \begin{equation}\label{def: capital lambda} \Lambda:=\left\{ \lambda > \bar \lambda: \text{$w_\mu \ge 0$ in $\Sigma_\mu$ for every $\mu \ge \lambda$} \right\} \end{equation} coincides with the interval $(\bar \lambda,+\infty)$, that $w_{\lambda}>0$ for every $\lambda \in \Lambda$, and that $w_{\bar \lambda} \equiv 0$ in $\Sigma_{\bar \lambda}$. From this we deduce that $u$ is symmetric with respect to $T_{\bar \lambda}$, and non-increasing in the $e_N$ direction in the half-space $H_{\bar \lambda}$. Furthermore, we shall deduce that $\Omega$ (and hence also $G$) is convex in the $e_N$ direction, and symmetric with respect to $T_{\bar \lambda}$. As a byproduct of the convexity and of the fact that $w_\lambda>0$ in $\Sigma_\lambda$ for $\lambda> \bar \lambda$, it is not difficult to deduce that $u$ is strictly decreasing in $x_N$ in $(\mathbb{R}^N \setminus \overline{G}) \cap H_{\bar \lambda}$. Repeating the same argument for all the directions $e \in \mathbb{S}^{N-1}$, we shall deduce the conclusion. Although the strategy of the proof is similar to that of Theorem 1.1 in \cite{Reichel1}, its intermediate steps will differ substantially. We write that the hyperplane $T_\lambda$ moves, and reaches a position $\mu$, if $w_\lambda \ge 0$ in $\Sigma_\lambda$ for every $\lambda >\mu$.
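\begin{remark} For the reader's convenience, let us briefly justify \eqref{ob1}. The invariance of the kernel $|x-y|^{-N-2s}$ under the reflection $x \mapsto x^\lambda$, which is an isometry of $\mathbb{R}^N$, yields
\[
(-\Delta)^s u_\lambda (x)= \left( (-\Delta)^s u \right)(x^\lambda) = f(u_\lambda(x)) \qquad \text{whenever $x^\lambda \in \mathbb{R}^N \setminus \overline{G}$}.
\]
Since, for $\lambda \ge \bar \lambda$, the reduced half-space $\Sigma_\lambda$ is contained in $\mathbb{R}^N \setminus \overline{G^\lambda}$ (by definition) and in $\mathbb{R}^N \setminus \overline{G}$ (by \eqref{lies} and continuity), both $u$ and $u_\lambda$ solve the equation in $\Sigma_\lambda$; subtracting the two equations and recalling the definition \eqref{def: c_lambda} of $c_\lambda$, we obtain precisely \eqref{ob1}. \end{remark}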
With this terminology, the first step in the previous argument consists in showing that the movement of the hyperplane can start. \begin{lemma}\label{lem: moving initial} There exists $R>0$ sufficiently large such that $w_\lambda \ge 0$ in $\Sigma_\lambda$ for every $\lambda >R$. \end{lemma} \begin{proof} We argue by contradiction, assuming that for a sequence $\lambda_k \to +\infty$ there exists $x_k \in \Sigma_{\lambda_k}$ such that $w_{\lambda_k}(x_k)<0$. Since $w_{\lambda_k} \ge 0$ on $\partial \Sigma_{\lambda_k}$ (indeed $w_{\lambda_k}=0$ on $T_{\lambda_k}$, and $w_{\lambda_k}=a-u \ge 0$ on $\partial G^{\lambda_k}$), and $w_{\lambda_k} \to 0$ as $|x|$ tends to infinity, we can suppose that each $x_k$ is an interior minimum point of $w_{\lambda_k}$ in $\Sigma_{\lambda_k}$. Notice that \begin{equation}\label{io} c_{\lambda_k}(x_k) w_{\lambda_k}(x_k)\le0.\end{equation} Indeed, on one side $w_{\lambda_k}(x_k) < 0$; on the other side, since $|x_k| \to +\infty$ we have $u(x_k) \to 0$, and since $0 \le u(x_k^{\lambda_k}) < u(x_k)$ we have $u(x_k^{\lambda_k}) \to 0$ as well. As $f$ is monotone non-increasing for small values of its argument, we deduce by \eqref{def: c_lambda} that $c_{\lambda_k}(x_k) \ge 0$. Now we show that \begin{equation}\label{io2} (-\Delta)^s w_{\lambda_k}(x_k)\le0.\end{equation} For this, we consider the sets~$U:=\{ w_{\lambda_k} < w_{\lambda_k}(x_k)\}$ and~$V:= \{x_N<\lambda_k\}$. Notice that, by the minimality property of~$x_k$ in~$\overline{H_{\lambda_k}}$, we have that~$U\subset V$. Therefore any integral in~$\mathbb{R}^N$ may be decomposed as the sum of four integrals, namely the ones over~$U$, $V\setminus U$, $U^{\lambda_k}$ and~$H_{\lambda_k}\setminus U^{\lambda_k}$. Using this and the fact that~$w_{\lambda_k}\geq w_{\lambda_k}(x_k)$ outside~$U$, we have that \begin{equation}\label{io3} \begin{split} \int_{\mathbb{R}^N}\frac{w_{\lambda_k}(x_k)-w_{\lambda_k}(y)}{|x_k-y|^{N+2s}}\,dy \le \int_{U} & \frac{w_{\lambda_k}(x_k)-w_{\lambda_k}(y)}{|x_k-y|^{N+2s}}\,dy \\ &+\int_{U^{\lambda_k}}\frac{w_{\lambda_k}(x_k)-w_{\lambda_k}(y)}{|x_k-y|^{N+2s}}\,dy. \end{split} \end{equation} Also, if~$y\in U\subseteq \{x_N\le\lambda_k\}$ we have that~$|x_k-y|\ge |x_k-y^{\lambda_k}|$, and therefore $$ \int_{U}\frac{w_{\lambda_k}(x_k)-w_{\lambda_k}(y)}{|x_k-y|^{N+2s}}\,dy \le \int_{U}\frac{w_{\lambda_k}(x_k)-w_{\lambda_k}(y)}{ |x_k-y^{\lambda_k}|^{N+2s}}\,dy,$$ since the numerator of the integrand is positive in~$U$. By changing variable~$z:=y^{\lambda_k}$ in the latter integral, we obtain \begin{equation}\label{io4} \int_{U}\frac{w_{\lambda_k}(x_k)-w_{\lambda_k}(y)}{|x_k-y|^{N+2s}}\,dy \le \int_{U^{\lambda_k}}\frac{w_{\lambda_k}(x_k)-w_{\lambda_k}(z^{\lambda_k})}{ |x_k-z|^{N+2s}}\,dz.\end{equation} Observing that~$w_{\lambda_k}(z^{\lambda_k})=u(z)-u(z^{\lambda_k})=-w_{\lambda_k}(z)$ (and renaming the last variable of integration), we can write~\eqref{io4} as \begin{equation*} \int_{U}\frac{w_{\lambda_k}(x_k)-w_{\lambda_k}(y)}{|x_k-y|^{N+2s}}\,dy \le \int_{U^{\lambda_k}}\frac{w_{\lambda_k}(x_k)+w_{\lambda_k}(y)}{ |x_k-y|^{N+2s}}\,dy.\end{equation*} By plugging this information into~\eqref{io3}, we conclude that \begin{equation}\label{conclusion computation delta s} (-\Delta)^s w_{\lambda_k}(x_k) \le 2\int_{U^{\lambda_k}}\frac{w_{\lambda_k}(x_k)}{ |x_k-y|^{N+2s}}\,dy\le0, \end{equation} with strict inequality if $U \neq \emptyset$ since $w_{\lambda_k}(x_k)<0$. This proves~\eqref{io2}. Now we claim that \begin{equation}\label{ob2} (-\Delta)^s w_{\lambda_k}(x_k)=0=c_{\lambda_k}(x_k) w_{\lambda_k}(x_k).\end{equation} Indeed, we already know that both the quantities above are non-positive, due to~\eqref{io} and~\eqref{io2}.
If at least one of them were strictly negative, their sum would be strictly negative too, and this is in contradiction with~\eqref{ob1}. Having established~\eqref{ob2}, we use it to observe that~$w_{\lambda_k}$ must be constant. Indeed, we firstly notice that $U = \emptyset$ (otherwise, as already observed, in~\eqref{conclusion computation delta s} we would have a strict inequality). This means that~$w_{\lambda_k} \ge w_{\lambda_k}(x_k)$ in the whole $\mathbb{R}^N$, and if the set~$\{w_{\lambda_k}>w_{\lambda_k}(x_k)\}$ had positive measure, then we would have~$(-\Delta)^s w_{\lambda_k}(x_k)<0$. Thus we conclude that~$w_{\lambda_k} \equiv w_{\lambda_k}(x_k) <0$, in contradiction with its anti-symmetry with respect to $T_{\lambda_k}$. \end{proof} Thanks to the above statement, the value $\mu:= \inf \Lambda$ is a real number. We aim at proving that the hyperplane $T_\lambda$ reaches the position $\bar \lambda$, i.e. $\mu = \bar \lambda$. In this perspective, a crucial intermediate result is the following. \begin{lemma}\label{lem: condizione per simmetria} Let $\lambda \ge \bar \lambda$. If $w_\lambda(x) = 0$ for some $x \in \Sigma_{\lambda}$, then $G$ is symmetric with respect to $T_\lambda$. In particular, if $w_\lambda \ge 0$ in $\Sigma_\lambda$ and $\lambda>\bar \lambda$, then $w_\lambda>0$ in $\Sigma_\lambda$. \end{lemma} \begin{proof} By the strong maximum principle, Proposition \ref{STRONG}, we have that if $w_\lambda(x) = 0$ for a point $x \in \Sigma_\lambda$, then $w_\lambda \equiv 0$ in $H_\lambda$. Let us assume by contradiction that $G$ is not symmetric with respect to $T_\lambda$. At first, it is easy to check that \[ G \cap H_\lambda^\lambda = (G \cap H_\lambda)^\lambda \quad \Longrightarrow \quad G= G^\lambda. \] Hence, having assumed that $G$ is not symmetric, we have $G \cap H_\lambda^\lambda \neq (G \cap H_\lambda)^\lambda$; since for $\lambda > \bar \lambda$ the inclusion \eqref{lies} holds, this implies that $\overline{G} \cap H_\lambda^\lambda \supset (\overline{G} \cap H_\lambda)^\lambda$ with strict inclusion. Let \[ E:= (\overline{G} \cap H_\lambda^\lambda ) \setminus (\overline{G} \cap H_\lambda)^\lambda \neq \emptyset. \] \begin{figure}[ht] \includegraphics[height=5cm]{FIG2SV.pdf} \hspace{1 cm} \includegraphics[height=5cm]{FIG3SV.pdf} \caption{On the left, the situation for $\lambda> \bar \lambda$ is represented; on the right, the one for $\lambda=\bar \lambda$ with $G$ not symmetric with respect to $T_{\bar \lambda}$.} \end{figure} \noindent For every $x \in E^\lambda \subset H_\lambda$, we have \begin{align*} x^\lambda \in E \subset \overline{G} \quad \Longrightarrow \quad u(x^\lambda)=a, \\ x \in \mathbb{R}^N \setminus \overline{G} \quad \Longrightarrow \quad u(x)<a, \end{align*} and hence $w_\lambda(x) >0$, a contradiction. So far we have shown that if $\lambda \ge \bar \lambda$ and $w_\lambda$ vanishes somewhere in $\Sigma_\lambda$, then $G$ is symmetric with respect to $T_\lambda$. But, by definition of $\bar \lambda$, this cannot be the case for $\lambda>\bar \lambda$; hence, for any such $\lambda$, if we can prove that $w_\lambda \ge 0$ in $\Sigma_\lambda$, we immediately deduce that $w_\lambda>0$ there. \end{proof} \begin{remark} In the proof we used in a crucial way both the nonlocal boundary conditions and the nonlocal strong maximum principle. In particular, we point out that if $w_\lambda(x) = 0$ at one point $x \in \Sigma_\lambda$, then $w_\lambda \equiv 0$ in the whole half-space $H_\lambda$.
\end{remark} In the following we shall make use of the fact that from the sign of the $s$-derivative of a function $u$ in a given direction we can infer information about the monotonicity of $u$ itself. \begin{lemma}\label{lem: monotonia} Let $\eta \in \mathbb{S}^{N-1}$ and $w \in \mathcal{C}^s(\mathbb{R}^N)$. If \[ (\partial_{\eta})_s w(x_0) = \lim_{t \to 0^+} \frac{w(x_0 + t \eta) -w(x_0)}{t^s} <0, \] then $w$ is monotone decreasing in the direction $\eta$ in a neighborhood of $x_0$. \end{lemma} \begin{proof} If the conclusion were false, there would exist a sequence $t_k\to 0^+$ for which $w(x_0+t_k\eta)\ge w(x_0)$. Then $$ 0> (\partial_\eta)_s w(x_0) =\lim_{k\to+\infty} \frac{w(x_0+t_k\eta)-w(x_0)}{t_k^s} \ge0,$$ which is a contradiction. Note that the limit does exist, since we are assuming that $w \in \mathcal{C}^s(\mathbb{R}^N)$. \end{proof} We are finally ready to prove that the hyperplane $T_\lambda$ reaches the critical position $\bar \lambda$. \begin{lemma}\label{lem: moving continuation} There holds $\inf \Lambda=\bar \lambda$. \end{lemma} \begin{proof} By contradiction, we suppose that $\mu=\inf \Lambda> \bar \lambda$. By continuity, we have $w_\mu \ge 0$ in $\Sigma_\mu$, and hence Lemma \ref{lem: condizione per simmetria} yields $w_\mu>0$ in $\Sigma_\mu$. Since $\mu=\inf\Lambda$, there exist sequences $\bar \lambda < \lambda_k < \mu$, $\lambda_k \to \mu$, and $x_k \in \Sigma_{\lambda_k}$, such that $w_{\lambda_k}(x_k) <0$. Since $w_{\lambda_k} \ge 0$ on $\partial \Sigma_{\lambda_k}$ and $w_{\lambda_k} \to 0$ as $|x| \to +\infty$, it is not restrictive to assume that the $x_k$ are interior minimum points of $w_{\lambda_k}$ in $\Sigma_{\lambda_k}$. If $|x_k| \to +\infty$, we obtain a contradiction as in Lemma \ref{lem: moving initial}. If $\{x_k\}$ is bounded, there exists $\bar x \in \overline{\Sigma_{\mu}}$ such that, up to a subsequence, $x_k \to \bar x$. Concerning the pre-compactness of the sequence $\{w_{\lambda_k}\}$, we recall that by definition \[ w_{\lambda_k}(x) = u(x',2\lambda_k-x_N)-u(x',x_N). \] The function $u$ is of class $\mathcal{C}^s(\mathbb{R}^N)$, and hence for every compact $K \Subset \mathbb{R}^N$ there exists $C>0$ such that \[ \|w_{\lambda_k}\|_{\mathcal{C}^s\left(K\right)} \le C. \] Therefore, the sequence $w_{\lambda_k}$ converges to $w_{\mu}$ in $\mathcal{C}^{s'}_{\loc}(\mathbb{R}^N)$ for any $0<s'<s$. The uniform convergence entails $w_\mu(\bar x) = 0$, and by Lemma \ref{lem: condizione per simmetria} this implies that $\bar x \in \partial \Sigma_{\mu}$. To continue the proof, we have to distinguish among three different possibilities, and in each of them we have to find a contradiction. \emph{Case 1) $\bar x$ lies on the regular part of $\partial \Sigma_{\mu} \cap T_{\mu}$.} We note that \[ \bar x \in \interior\left(\overline{\Sigma_{\lambda_k} \cup \Sigma_{\lambda_k}^{\lambda_k}}\right) \] for every $k$ sufficiently large, where $\interior$ denotes the interior of a set. Since, by interior regularity (see Subsection \ref{sub: regularity}), we know that $u \in \mathcal{C}^{1,\sigma}(\mathbb{R}^N \setminus \overline{G})$ for some $\sigma \in (0,1)$, by definition $\{w_{\lambda_k}\}$ is uniformly bounded in $\mathcal{C}^{1,\sigma}(\overline{B_\rho(\bar x)})$ for some small $\rho>0$. In particular, since by interior minimality (and interior regularity) we have $\nabla w_{\lambda_k}(x_k) = 0$ and $w_{\lambda_k} \to w_{\mu}$ in $\mathcal{C}^1(\overline{B_\rho(\bar x)})$, we deduce that $\nabla w_{\mu}(\bar x)=0$.
This is in contradiction with Proposition \ref{prop: new hopf}, where we showed that \[ -\liminf_{t \to 0^+} \frac{w_{\mu}(\bar x', \bar x_N +t)}{t} < 0. \] \emph{Case 2) $\bar x \in \partial (G^\mu \cap H_{\mu}) \setminus T_\mu$.} Since $\mu > \bar \lambda$, we know that $\partial (G^\mu \cap H_{\mu}) \cap \overline{G} = \emptyset$ (otherwise we would be in a critical position of internal tangency). Having assumed that $0 \le u <a$ in $\Omega$, we deduce that $u(\bar x) <a$, and hence \[ w_{\mu}(\bar x) = a- u(\bar x) >0, \] in contradiction with the fact that by convergence $w_{\mu}(\bar x) = 0$. \emph{Case 3) $\bar x \in \partial G^{\mu} \cap T_\mu$.} We observe that $\bar x \in \partial G$, and since $\mu>\bar \lambda$, the outer (with respect to $G$) unit normal vector $\nu(\bar x)$ is such that $\langle \nu(\bar x), e_N \rangle > 0$. As $u$ is constant on $\partial G$, we have $(\partial_\eta)_s u(\bar x) = 0$ for every $\eta$ which is tangent to $\partial G$ at $\bar x$. Recalling that $(\partial_{\nu})_s u= \alpha_i <0$, and recalling Lemma \ref{lem: monotonia}, we see that $(\partial_{e_N})_s u < 0$ in a neighborhood $B_{\rho}(\bar x) \cap (\mathbb{R}^N \setminus \overline{G})$. Let $y_k$ denote the reflection of $x_k$ with respect to $T_{\lambda_k}$. Since both $x_k,y_k \to \bar x$, at least for $k$ sufficiently large the whole segment connecting $x_k$ with $y_k$ is contained in $B_{\rho}(\bar x) \cap \overline{\mathbb{R}^N \setminus \overline{G}}$. Recalling that $(\partial_{e_N})_s u < 0$, this implies that $u$ is monotone decreasing along the segment $[y_k,x_k]$, that is \[ w_{\lambda_k}(x_k) = u(y_k)-u(x_k) > 0, \] in contradiction with the fact that $w_{\lambda_k}(x_k) <0$. \end{proof} \begin{proof}[Conclusion of the proof of Theorem \ref{thm: main 1}] For every $\lambda \ge \bar \lambda$, we have $w_\lambda \ge 0$ in $\Sigma_\lambda$. If $\lambda>\bar \lambda$, then the strict inequality holds, and it remains to show that $w_{\bar \lambda} \equiv 0$ in $\Sigma_{\bar \lambda}$. To this aim, we argue by contradiction assuming that $w_{\bar \lambda} >0$ in $\Sigma_{\bar \lambda}$ (by Proposition \ref{STRONG}, this is the only possible alternative), and we distinguish two cases. The following argument is adapted from \cite{FallJarohs}. \emph{Case 1) $\bar \lambda$ is a critical value of internal tangency.} There exists $x_0 \in \partial G \cap \partial( (G \cap H_{\bar \lambda})^{\bar \lambda}) \setminus T_{\bar \lambda}$. Clearly we have $w_{\bar \lambda}(x_0)=0$, so that $x_0^{\bar \lambda} \in \partial G \cap H_{\bar \lambda}$, and by Proposition \ref{STRONG} we deduce that $(\partial_{\nu})_s w_{\bar \lambda}(x_0) <0$, where $\nu$ denotes the outer unit normal vector to $\partial G$ at $x_0$. We observe that it cannot happen that $x_0 \in \partial G_i \cap H_{\bar \lambda}^{\bar \lambda}$ and $x_0 \in (\partial G_j \cap H_{\bar \lambda})^{\bar \lambda}$ with $i \neq j$. This follows from the definition of $\bar \lambda$ and the fact that $\overline{G_i} \cap \overline{G_j} = \emptyset$, see \cite[Lemma 2.1]{Sirakov} for a detailed proof. Hence, having assumed that $(\partial_\nu)_s u = \alpha_i$ on $\partial G_i$, and observing that by internal tangency $\nu(x_0)= - \nu(x_0^{\bar \lambda})$, we find also \[ (\partial_{\nu(x_0)})_s w_{\bar \lambda}(x_0) = (\partial_{\nu(x_0^{\bar \lambda})})_s u(x_0^{\bar \lambda}) - (\partial_{\nu(x_0)})_s u(x_0) = 0, \] which is a contradiction. \emph{Case 2) $\bar \lambda$ is a critical value where the orthogonality condition is satisfied.} Let $x_0 \in \partial G_i \cap T_{\bar \lambda}$ be a point where $\langle \nu(x_0), e_N \rangle = 0$, and let $\delta=\delta_{G_i}$ denote the distance function from the boundary of $G_i$.
Up to rigid motions, it is possible to suppose that $x_0=0$, $\nu(x_0)$ is a vector of the orthonormal basis, say $\nu(x_0)=e_1$, and $\nabla^2 \delta(0)$ is a diagonal matrix; to ensure that the function $\delta$ is twice differentiable, we use the $\mathcal{C}^2$ regularity of $G$. Let $\eta:=(1,0,\dots,0,1)$. Adapting step by step the proof of Lemma 4.3 in \cite{FallJarohs}, it is possible to deduce that $w_{\bar \lambda}(t \eta) = o(t^{1+s})$ as $t \to 0^+$. In this step we need to recall that $u/\delta^s \in \mathcal{C}^{0,\gamma}(\mathbb{R}^N \setminus G)$, see Subsection \ref{sub: regularity}. On the other hand, thanks to the nonlocal version of Serrin's corner lemma (Lemma 4.4 in \cite{FallJarohs}; the result is stated therein in a bounded domain, but the reader can check that this is not used in the proof) we also infer that $w_{\bar \lambda}(t \eta) \ge C t^{1+s}$ for $t$ positive and small, for some constant $C>0$. This gives a contradiction. We have proved that $w_{\bar \lambda} \equiv 0$. By Lemma \ref{lem: condizione per simmetria}, this implies that $G$, and hence $\mathbb{R}^N \setminus \overline{G}$, are symmetric with respect to the hyperplane $T_{\bar \lambda}$. In principle both $\mathbb{R}^N \setminus \overline{G}$ and $G$ could have several connected components. But, as proved in Lemma \ref{lem: geom remark}, $(\mathbb{R}^N \setminus \overline{G}) \cap H_{\bar \lambda}$ is connected, which implies by symmetry that $(\mathbb{R}^N \setminus \overline{G}) \cap H_{\bar \lambda}^{\bar \lambda}$ is in turn connected. Therefore, if $\mathbb{R}^N \setminus \overline{G}$ is not connected, necessarily $G$ contains a neighborhood of the hyperplane $T_{\bar \lambda}$, which is not possible since $G$ is bounded. As far as the connectedness of $G$ is concerned, we firstly observe that by property ($i$) of Lemma \ref{lem: geom remark} and by symmetry, $G$ is convex in the $e_N$-direction. Let us assume by contradiction that there exist at least two connected components $G_1$ and $G_2$ of $G$. It is not possible that $G_1$ and $G_2$ meet at boundary points, since we assumed that $G$ is of class $\mathcal{C}^2$. Since $G$ is convex in the $e_N$-direction, there exists a hyperplane $T'$ not parallel to $T_{\bar \lambda}$ separating $G_1$ and $G_2$. Let $e$ be a direction orthogonal to $T'$. Defining \begin{align*} T_\lambda'&:= \{x \in \mathbb{R}^N: \langle x,e\rangle = \lambda\} \\ d'&:= \inf\left\{ \lambda \in \mathbb{R}: T_\mu' \cap \overline{G} = \emptyset \text{ for every $\mu >\lambda$} \right\}, \end{align*} without loss of generality we can suppose that $\overline{G_1} \cap T_{d'}'\neq \emptyset$, while $\overline{G_2} \cap T_{d'}' =\emptyset$. In the same way as we defined $\bar \lambda$ for the direction $e_N$, we can now define $\bar \lambda'$ for $e$, and prove that $G$ is symmetric with respect to $T'_{\bar \lambda'}$. But this clearly gives a contradiction, since by definition $G_2 \cap \{\langle x,e \rangle \ge \bar \lambda'\} = \emptyset$, while $G_2 \cap \{\langle x,e \rangle < \bar \lambda'\} = G_2$. \end{proof} \subsection{Proof of Theorem \ref{thm: main 1 prime}} The proof of Theorem \ref{thm: main 1 prime} follows a different scheme, being based upon the following known result (we refer to the appendix in \cite{FallJarohs} for a detailed proof). \begin{proposition}\label{prop: criterion} Let $u : \mathbb{R}^N \to \mathbb{R}$ be continuous and such that $u$ admits a limit as $|x| \to +\infty$.
Then the following statements are equivalent: \begin{itemize} \item[($i$)] $u$ is radially symmetric and radially non-increasing with respect to a point of $\mathbb{R}^N$; \item[($ii$)] for every half-space $H$ of $\mathbb{R}^N$ we have that either $u(x) \ge u(R_H(x))$ in $H$, or $u(x) \le u(R_H(x))$ in $H$, where $R_H$ denotes the reflection with respect to the boundary $\partial H$. \end{itemize} \end{proposition} Hence, to prove the radial symmetry of $u$ we aim at showing that condition ($ii$) in Proposition~\ref{prop: criterion} is satisfied. \medskip Let us consider at first all the half-spaces $H$ such that $\partial H$ is orthogonal to the $e_N$ direction. Using the notation introduced at the beginning of this section, and recalling the definition \eqref{def: c_lambda} of $c_\lambda$, we see that $c_\lambda \ge 0$ in $\mathbb{R}^N$ for every $\lambda$. Thus, it is not difficult to adapt the proof of Lemma \ref{lem: moving initial}, using the fact that for $\lambda \ge \bar \lambda$ we have $w_\lambda \ge 0$ in $H_\lambda \setminus \Sigma_\lambda$, to deduce that \begin{equation}\label{GE0} \text{for every $\lambda \ge \bar \lambda$, we have $w_\lambda \ge 0$ in $\Sigma_{\lambda}$.} \end{equation} On the other hand, we point out that now we cannot immediately conclude that $w_\lambda>0$ in $\Sigma_\lambda$ for $\lambda>\bar \lambda$, since in the proof of Lemma \ref{lem: condizione per simmetria} we used both the assumptions $\alpha_i<0$ and $u<a$ in $\mathbb{R}^N \setminus \overline{G}$. Nevertheless, we can prove that \begin{equation}\label{SIM} \text{$w_{\bar \lambda} \equiv 0$ in $H_{\bar \lambda}$}, \end{equation} arguing exactly as in the conclusion of the proof of Theorem \ref{thm: main 1}. \medskip This line of reasoning can be used for all the directions $e \in \mathbb{S}^{N-1}$. To be more explicit, we introduce the following notation: for a direction $e$ and a real number $\lambda$, we set \begin{align*} T_{e,\lambda} & :=\{\langle x,e \rangle= \lambda\} \\ H_{e,\lambda} & := \{ \langle x,e \rangle > \lambda \} \\ R_{e,\lambda} & := \text{reflection with respect to the hyperplane $T_{e,\lambda}$}\\ \Sigma_{e,\lambda} & := (\mathbb{R}^N \setminus \overline{R_{e,\lambda}(G)}) \cap H_{e,\lambda} \\ \bar \lambda(e) &:= \text{critical position for the direction $e$} \\ x^{e,\lambda} &:= x + 2(\lambda- \langle x,e \rangle) e = R_{e,\lambda}(x) \\ w_{e,\lambda}(x)& := u(x^{e,\lambda})- u(x). \\ \end{align*} As in \eqref{GE0} and \eqref{SIM}, for every $e \in \mathbb{S}^{N-1}$ there hold \[ \text{for every $\lambda \ge \bar \lambda(e)$, we have $w_{e,\lambda} \ge 0$ in $\Sigma_{e,\lambda}$,} \] and \[ \text{$w_{e,\bar \lambda(e)} \equiv 0$ in $H_{e,\bar \lambda(e)}$}. \] We show that this implies condition ($ii$) in Proposition \ref{prop: criterion}. The following lemma has been implicitly used in the proof of Theorem 5.1 in \cite{FallJarohs}; here we prefer to include a detailed proof for the sake of completeness. \begin{lemma}\label{090} Let $e \in \mathbb{S}^{N-1}$, and let us assume that $w_{e,\lambda} \ge 0$ in $H_{e,\lambda}$ for every $\lambda > \bar \lambda(e)$, and that $w_{e,\bar \lambda(e)} \equiv 0$ in $H_{e,\bar \lambda(e)}$. Then \[ \text{either $w_{e,\mu} \ge 0$ in $H_{e,\mu}$, or $w_{e,\mu} \le 0$ in $H_{e,\mu}$, for every $\mu \in \mathbb{R}$}. \] \end{lemma} Notice that in principle $\bar \lambda(e) \neq -\bar \lambda(-e)$, and hence the result is not immediate.
\begin{proof} To fix ideas, we consider $e=e_N$, and for the sake of simplicity we omit the dependence on $e_N$ in the notation previously introduced. For $\lambda \ge \bar \lambda$ there is nothing to prove. Let $\lambda < \bar \lambda$. We fix $\mu= 2\bar \lambda - \lambda$, so that $\bar \lambda$ is the midpoint between $\lambda$ and $\mu$. In this way we have \begin{align*} w_\lambda(x',x_N) &= u(x',2\lambda-x_N)-u(x',x_N) = u\left(x',2(2\bar \lambda-\mu)-x_N\right) - u(x',x_N) \\ &= u\left(x',2 \bar \lambda-( 2\mu + x_N - 2\bar \lambda) \right) -u(x',x_N) \\ &= u(x',2\mu+x_N-2 \bar \lambda) - u(x',x_N) \\ & = u\left(x',2\mu-(2\bar \lambda-x_N) \right) -u(x',2\bar \lambda -x_N) \\ & = w_\mu(x',2\bar \lambda-x_N), \end{align*} where we used the fact that $u(x^{\bar \lambda}) = u(x)$ for every $x \in \mathbb{R}^N$. Now it is sufficient to observe that if $x \in H_\lambda$, then \[ 2\bar \lambda -x_N < 2 \bar \lambda-\lambda =\mu, \] that is, $(x',2\bar \lambda-x_N) \in \mathbb{R}^N \setminus \overline{H_\mu}$. Therefore, using the fact that $w_\mu \ge 0$ in $H_\mu$ and is anti-symmetric, we conclude that $w_\lambda \le 0$ in $H_\lambda$ for every $\lambda<\bar \lambda$. \end{proof} The result in Lemma~\ref{090} means that for every $e \in \mathbb{S}^{N-1}$ and $\lambda \in \mathbb{R}$ we have that either $u(x) \ge u(x^{e,\lambda})$ in $H_{e,\lambda}$, or else $u(x) \le u(x^{e,\lambda})$ in $H_{e,\lambda}$. Since the $H_{e,\lambda}$ are all the possible half-spaces of $\mathbb{R}^N$, by Proposition \ref{prop: criterion} we infer that $u$ is radially symmetric and radially non-increasing with respect to some point of $\mathbb{R}^N$. This still does not prove that $G$ is radially symmetric, but it is sufficient to ensure that $\{u <a\}$ is the complement of a ball $B$ of a certain radius $\rho$. Up to a translation, it is not restrictive to assume that the centre of $B$ is the origin. To complete the proof of Theorem \ref{thm: main 1 prime}, we have to show that $B=G$. \medskip Before doing this, we point out that since $u$ is radial and non-constant, if $w_{e,\lambda} \equiv 0$ in $H_{e,\lambda}$, then necessarily $\lambda=\bar \lambda(e)= 0$. Indeed, if this were not true, the set $R_{e,\lambda}(\overline{B}) \setminus \overline{B}$ would be non-empty, and for any $x$ in this set we would have \[ 0 = w_{e,\lambda }(x) = u( x^{e,\lambda})-u(x) = a- u(x) >0, \] a contradiction. Therefore, the strong maximum principle together with \eqref{GE0} implies that $w_{e,\lambda}>0$ in $\Sigma_{e,\lambda}$ for every $e \in \mathbb{S}^{N-1}$ and $\lambda>0$, which in particular proves the strict radial monotonicity of $u$ outside $\overline{B}$. \medskip Now we show that $B=G$. We argue by contradiction, noting that there are two possibilities: either $\overline{B \setminus \overline{G}} \cap \partial \{u<a\} \neq \emptyset$, or the intersection is empty, which means that $G$ is an annular region surrounding $(\mathbb{R}^N \setminus \overline{G}) \setminus \{u<a\}$. \begin{figure}[ht] \includegraphics[height=4.5cm]{FIG4SV.pdf} \hspace{1 cm} \includegraphics[height=5cm]{FIG5SV.pdf} \caption{On the left, the case $\overline{B \setminus \overline{G}} \cap \partial \{u<a\} \neq \emptyset$; on the right, the case $\overline{B \setminus \overline{G}} \cap \partial \{u<a\} = \emptyset$.} \end{figure} If the latter alternative takes place, then there exist $e \in \mathbb{S}^{N-1}$ and $\lambda>0$ such that $\overline{G} \cap H_{e,\lambda}$ is not convex in the $e$-direction. This is in contradiction with Lemma \ref{lem: geom remark}.
It remains to show that $\overline{B \setminus \overline{G}} \cap \partial \{u<a\} \neq \emptyset$ cannot occur either. To this aim, we observe that in such a situation there exist a direction $e \in \mathbb{S}^{N-1}$, a small interval $[\lambda_1,\lambda_2]$ with $\lambda_1>0$, and a small ball $A \subset B \setminus \overline{G}$, such that $\overline{A} \cap T_{e,\lambda} \neq \emptyset$ while $\overline{G} \cap T_{e,\lambda} \neq \emptyset$ for every $\lambda \in [\lambda_1,\lambda_2]$. Let $(\lambda_1+\lambda_2)/2 < \lambda < \lambda_2$. Then the reflection $R_{e,\lambda}(A \cap H_{e,\lambda})$ is contained in $A$, and hence $w_{e,\lambda} \equiv 0$ on $A \cap H_{e,\lambda}$. But on the other hand, since $A \cap H_{e,\lambda} \subset \Sigma_{e,\lambda}$, by the strong maximum principle we also have $w_{e,\lambda} >0$ in $\Sigma_{e,\lambda}$, a contradiction (recall that the only $\mu$ such that $w_{e,\mu} \equiv 0$ in $H_{e,\mu}$ is $\mu=0$). In this way, we have shown that $\overline{G} = \{u=a\}$ is a ball, and the function $u$ is radial with respect to the centre of $G$ and radially decreasing in $\mathbb{R}^N \setminus \overline{G}$, which is the desired result. \subsection{Proof of Corollary \ref{corol: main 1}} To show that $0 \le u \le a$ and $f(a) \le 0$ imply $u <a$ in $\mathbb{R}^N \setminus \overline{G}$, we use a comparison argument, as in the local case (see \cite[Corollary 1]{Reichel1}). Let us set \begin{equation}\label{def c in corol} c(x) := \begin{cases} -\displaystyle\frac{f(u(x))-f(a)}{u(x)-a} & \text{if $u(x) <a$} \\ 0 & \text{if $u(x) = a$}. \end{cases} \end{equation} Then, recalling that $u \le a$, we have \begin{align*} (-\Delta)^s (u-a) + c^+(x) (u-a) &\le (-\Delta)^s (u-a) + c(x)(u-a) \\ & = (-\Delta)^s u - ( f(u)-f(a) ) \le 0 \end{align*} in $\mathbb{R}^N \setminus \overline{G}$. We claim that this implies $u<a$ in $\mathbb{R}^N \setminus \overline{G}$. Indeed, if this were not true, there would exist a point $\bar x \in \mathbb{R}^N \setminus \overline{G}$ with $u(\bar x) = a$. But then $(-\Delta)^s u(\bar x) \le 0$, in contradiction with the fact that \[ (-\Delta)^s u (\bar x) = \int_{\mathbb{R}^N} \frac{u(\bar x) -u(y)}{|\bar x-y|^{N+2s}}\,dy > 0; \] here the strict inequality holds since $u \le a$ in $\mathbb{R}^N$, and $u<a$ in a set of positive measure by the boundary condition $u(x) \to 0$ as $|x| \to +\infty$. Now it remains to show that $\alpha_i<0$ for every $i$. This is a direct consequence of the Hopf lemma for non-negative supersolutions proved in \cite{FallJarohs}, Proposition 3.3 plus Remark 3.5 therein. Indeed, we have already checked that for $c \in L^\infty(\mathbb{R}^N)$ defined in \eqref{def c in corol}, we have \[ \begin{cases} (-\Delta)^s (a-u) + c(x)(a-u) \ge 0 & \text{in $\mathbb{R}^N \setminus \overline{G}$} \\ a-u \ge 0 & \text{in $\mathbb{R}^N$}. \end{cases} \] This implies $(\partial_\nu)_s (a-u) >0$ on $\partial G$, that is $(\partial_\nu)_s u<0$ on $\partial G$, where $\nu$ denotes the unit normal vector to $\partial G$ directed inwards $\mathbb{R}^N \setminus \overline{G}$. \subsection{Proof of Corollary \ref{corol: subharmonicity} } If there exists $x \in \mathbb{R}^N \setminus \overline{G}$ such that $u(x) > a$, then by the boundary conditions $u$ has an interior maximum point $ \bar x \in \mathbb{R}^N \setminus \overline{G}$. Therefore \[ (-\Delta)^s u(\bar x) = \int_{\mathbb{R}^N} \frac{u(\bar x)-u(y)}{|\bar x-y|^{N+2s}}\,dy \ge 0.
\] Since $f(u) \le 0$ in $\mathbb{R}^N$, this forces $(-\Delta)^s u (\bar x) = 0$, and in turn $u \equiv u(\bar x) > a$ in $\mathbb{R}^N$, in contradiction with the fact that $u(x) \to 0$ as $|x| \to +\infty$. \section{Overdetermined problems in annular sets}\label{sec: over annular} The strategy of the proof of Theorem \ref{thm: main 2} is similar to that of Theorem \ref{thm: main 1}. We apply the moving planes method to show that for any direction $e \in \mathbb{S}^{N-1}$ there exists $\bar \lambda=\bar \lambda(e)$ such that both the sets $G$ and $\Omega$, and the solution $u$, are symmetric with respect to the hyperplane $T_{e,\bar \lambda(e)}$. We fix at first $e=e_N$ and, for $\lambda \in \mathbb{R}$, we let $T_\lambda, H_\lambda, x^\lambda,\dots$ be defined as in \eqref{notation}. We only modify the definition of $\Sigma_\lambda$ in the following way: \[ \Sigma_{\lambda} := (\Omega \cap H_\lambda) \setminus \overline{G^\lambda}. \] Furthermore, instead of $d_i$ and $\bar \lambda$ we define \begin{align*} d_G &:= \inf\{ \lambda \in \mathbb{R}: \text{$T_\mu \cap \overline{G}= \emptyset$ for every $\mu > \lambda$}\} \\ d_\Omega &:= \inf\{ \lambda \in \mathbb{R}: \text{$T_\mu \cap \overline{\Omega}= \emptyset$ for every $\mu > \lambda$}\} \\ \bar \lambda_G &:= \inf\left\{ \lambda \le d_G\left| \begin{array}{l} \text{$(\overline{G} \cap H_\mu)^\mu \subset G \cap H_\mu^\mu$ with strict inclusion, and} \\ \text{$\langle \nu(x),e_N \rangle >0$ for every $x \in T_\mu \cap \partial G$, for every $\mu > \lambda$} \end{array} \right.\right\} \\ \bar \lambda_\Omega &:= \inf\left\{ \lambda \le d_\Omega\left| \begin{array}{l} \text{$(\overline{\Omega} \cap H_\mu)^\mu \subset \Omega \cap H_\mu^\mu$ with strict inclusion, and} \\ \text{$\langle \nu(x),e_N \rangle >0$ for every $x \in T_\mu \cap \partial \Omega$, for every $\mu > \lambda$} \end{array} \right.\right\} \\ \bar \lambda&:= \max \{\bar \lambda_G, \bar \lambda_\Omega\}. \end{align*} Note that $\bar \lambda_G$ and $\bar \lambda_\Omega$ are the critical positions for $G$ and $\Omega$, respectively, and $\bar \lambda$ can be considered as a critical position for $\Omega \setminus \overline{G}$. As in the previous section, we start with a simple geometric observation. \begin{lemma}\label{lem: geom remark bdd} The following properties hold: \begin{itemize} \item[($i$)] for $\lambda \ge \bar \lambda_G$, the set $\overline{G} \cap H_\lambda$ is convex in the $e_N$ direction; \item[($ii$)] for $\lambda \ge \bar \lambda_{\Omega}$, the set $\overline{\Omega} \cap H_\lambda$ is convex in the $e_N$ direction. \end{itemize} \end{lemma} The proof is analogous to that of Lemma \ref{lem: geom remark}, and thus is omitted. For $w_\lambda(x):= u(x^\lambda)-u(x)$, we have that \[ (-\Delta)^s w_\lambda+c_\lambda(x) w_\lambda = 0 \qquad \text{in $\Sigma_\lambda$}, \] exactly as in the previous section (we refer to \eqref{def: c_lambda} for the definition of $c_\lambda$). In the first part of the proof, we aim at showing that the set $\Lambda$ defined by \[ \Lambda:=\left\{ \lambda \in (\bar \lambda, d_\Omega): \text{$w_\mu \ge 0$ in $\Sigma_\mu$ for every $\mu \ge \lambda$} \right\} \] coincides with the interval $(\bar \lambda,d_{\Omega})$, that $w_{\lambda}>0$ for every $\lambda \in \Lambda$, and that $w_{\bar \lambda} \equiv 0$ in $\Sigma_{\bar \lambda}$. Since we are not assuming that $f$ is monotone (not even for small values of its argument), the argument in Lemma \ref{lem: moving initial} does not work.
Nevertheless, we can take advantage of the boundedness of $\Omega$ to apply the maximum principle in domains of small measure. \begin{lemma}\label{lem: moving initial bdd} There exists $\sigma >0$ such that $w_{\lambda} \ge 0$ in $\Sigma_\lambda$ for every $\lambda \in (d_{\Omega}-\sigma,d_{\Omega})$. \end{lemma} \begin{proof} Since $f$ is Lipschitz continuous and $u$ is bounded, there exists $c_\infty>0$ independent of $\lambda$ such that $\|c_{\lambda}\|_{L^\infty(\mathbb{R}^N)} \le c_\infty$. Then the value $\delta=\delta(N,s,c_\infty)$ given by Proposition \ref{SMALL} is well defined and independent of $\lambda$. For $\lambda$ a little smaller than $d_{\Omega}$, the measure of $\Sigma_\lambda$ is smaller than $\delta$, and the function $w_\lambda$ satisfies \[ \begin{cases} (-\Delta)^s w_\lambda + c_\lambda(x) w_\lambda = 0 & \text{in $\Sigma_{\lambda}$} \\ w_\lambda(x) = - w_\lambda(x^\lambda) \\ w_{\lambda} \ge 0 & \text{in $H_\lambda \setminus \Sigma_\lambda$}. \end{cases} \] As a consequence, by Proposition \ref{SMALL} we deduce that $w_\lambda \ge 0$ in $\Sigma_\lambda$. \end{proof} This means that the hyperplane $T_\lambda$ moves and reaches a position $\mu = \inf \Lambda <d_\Omega$. We aim at showing that $\mu = \bar \lambda$. This is the object of the next two lemmas. \begin{lemma}\label{lem: condizione simmetria bdd} Let $\lambda \ge \bar \lambda$. If $w_{\lambda}(x) =0$ for some $x \in \Sigma_{\lambda}$, then both $G$ and $\Omega$ are symmetric with respect to $T_\lambda$. In particular, if $\lambda>\bar \lambda$, then $w_\lambda \ge 0$ in $\Sigma_\lambda$ implies $w_\lambda>0$ therein. \end{lemma} \begin{proof} By the strong maximum principle, if $w_{\lambda}(x) =0$ for some $x \in \Sigma_{\lambda}$, then $w_\lambda \equiv 0$ in $H_\lambda$. As in the proof of Lemma \ref{lem: condizione per simmetria}, this implies that $G$ is symmetric with respect to $T_\lambda$, that is, $\lambda= \bar \lambda_G$. It remains to show that $\lambda$ is also equal to $\bar \lambda_\Omega$, and $\Omega$ is symmetric with respect to $T_\lambda$. To this aim, we observe that if this is not the case, then \[ F:= (\overline{\Omega} \cap H_\lambda^\lambda) \setminus (\overline{\Omega} \cap H_\lambda)^\lambda \neq \emptyset. \] Therefore, if $x \in F^\lambda \subset H_\lambda$, we have \begin{align*} x^\lambda \in F \subset \overline{\Omega} \quad \Longrightarrow \quad u(x^\lambda) > 0 \\ x \in \mathbb{R}^N \setminus \overline{\Omega} \quad \Longrightarrow \quad u(x) = 0, \end{align*} and hence $w_\lambda(x) >0$, a contradiction. Then also $\Omega$ is symmetric with respect to $T_\lambda$, which forces $\lambda=\bar \lambda_\Omega$. \end{proof} \begin{lemma}\label{4.4} There holds $\Lambda=(\bar \lambda,d_\Omega)$. \end{lemma} \begin{proof} By contradiction, suppose that~$\mu=\inf \Lambda>\bar \lambda$. Differently from the previous section, we again use the maximum principle in sets of small measure. Let $\delta$ be as in Proposition \ref{SMALL} (we have already observed in the proof of Lemma \ref{lem: moving initial bdd} that $\delta$ can be chosen independently of $\lambda$). By continuity $w_\mu \ge 0$ in $\Sigma_\mu$, and hence, by Lemma \ref{lem: condizione simmetria bdd}, $w_\mu >0$ in $\Sigma_\mu$. Thus, there exists a compact set $K \Subset \Sigma_{\mu}$ such that \[ |\Sigma_\mu \setminus K| < \delta/2 \quad \text{and} \quad w_{\mu} \ge C >0 \text{ in $K$}, \] where we have used the boundedness of $\Sigma_\mu$.
Furthermore, observing that by continuity $\Sigma_{\lambda} \to \Sigma_{\mu}$ as $\lambda \to \mu$, we can suppose that $K \Subset \Sigma_{\lambda}$ provided $|\lambda-\mu|<\varepsilon$ for some $\varepsilon>0$ sufficiently small. If necessary replacing $\varepsilon$ with a smaller quantity, by continuity again it follows that \[ |\Sigma_\lambda \setminus K| < \delta \quad \text{and} \quad w_{\lambda} \ge \frac{C}{2} \text{ in $K$} \] whenever $|\lambda-\mu|<\varepsilon$. For such values of $\lambda$, in the remaining part $\tilde \Sigma_{\lambda}= \Sigma_\lambda \setminus K$ we have \[ \begin{cases} (-\Delta)^s w_{\lambda}+ c_\lambda w_\lambda = 0 & \text{in $\tilde \Sigma_\lambda$} \\ w_\lambda(x) = -w_\lambda(x^\lambda) \\ w_\lambda \ge 0 & \text{in $H_\lambda \setminus \tilde \Sigma_\lambda$}, \end{cases} \] which means that we are in a position to apply Proposition \ref{SMALL}, deducing that $w_\lambda \ge 0$ in $H_\lambda$. In particular, for $\mu-\varepsilon<\lambda \le \mu$ we obtain $w_\lambda >0$ in $\Sigma_\lambda$ thanks to Lemma \ref{lem: condizione simmetria bdd}, in contradiction with the minimality of $\mu$. \end{proof} \begin{proof}[Conclusion of the proof of Theorem \ref{thm: main 2}] We have proved in Lemma~\ref{4.4} that $\Lambda=(\bar \lambda,d_{\Omega})$. Hence, by Lemma \ref{lem: condizione simmetria bdd}, to obtain the symmetry of $G$ and of $\Omega$, it is sufficient to check that $w_{\bar \lambda} \equiv 0$ in $H_{\bar \lambda}$. As in the proof of Theorem \ref{thm: main 1}, we argue by contradiction assuming that $w_{\bar \lambda} >0$ in $\Sigma_{\bar \lambda}$. Note that the critical position $\bar \lambda$ can be reached for four possible reasons: internal tangency for $G$, internal tangency for $\Omega$, orthogonality condition for $G$, orthogonality condition for $\Omega$. In all such cases we can reach a contradiction exactly as in the conclusion of the proof of Theorem \ref{thm: main 1}. This proves that both $\Omega$ and $G$ are symmetric with respect to $T_{\bar \lambda}$, and by Lemma \ref{lem: geom remark bdd}, they are also convex in the $e_N$ direction. If $\Omega$ has two (or more) connected components $\Omega_1$ and $\Omega_2$, then by convexity the only possibility is that $\Omega_1$ and $\Omega_2$ are aligned along $T_{\bar \lambda}$. But in this case we can obtain a contradiction as in the conclusion of the proof of Theorem \ref{thm: main 1}. For $G$ we can argue exactly in the same way. \end{proof} \subsection{Proof of Theorem \ref{thm: main 2 prime}} The proof differs only in some details from that of Theorem \ref{thm: main 1 prime}, and thus it is only sketched. First of all, by monotonicity $c_\lambda \ge 0$ in $\mathbb{R}^N$ for every $\lambda$, and hence by Proposition 3.1 in \cite{FallJarohs} (weak maximum principle for anti-symmetric functions) we have that $w_\lambda \ge 0$ in $\Sigma_\lambda$ for every $\lambda \ge \bar \lambda$. Moreover, as in the conclusion of the proof of Theorem \ref{thm: main 2}, $w_{\bar \lambda} \equiv 0$ in $H_{\bar \lambda}$. Repeating the same argument for any direction $e \in \mathbb{S}^{N-1}$, we deduce by Proposition \ref{prop: criterion} that $u$ is radially symmetric and radially non-increasing in $\mathbb{R}^N$, which implies that $\{u=a\}=B_1$ and $\{u>0\}=B_2$ are concentric balls, and $B_1 \subset B_2$. By the boundary conditions and the continuity of $u$, $G \subset B_1$ and $B_2 \subset \Omega$. Arguing as in the proof of Theorem \ref{thm: main 1 prime}, we deduce that $G=B_1$ and $\Omega=B_2$.
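We conclude this section by pointing out a simple model case: the nonlinearity
\[
f(t) = -t,
\]
being globally Lipschitz and non-increasing on the whole real line, with $f(0)=0$ and $f(a)=-a<0$, fulfils the assumptions of Theorem \ref{thm: main 2 prime}, as well as those of Corollary \ref{corol: main 2}.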
\section{Radial symmetry}\label{sec: radial} \subsection{Proof of Theorem \ref{thm: radial 1}} We briefly describe how the proof of Theorem \ref{thm: main 1} can be adapted to obtain Theorem \ref{thm: radial 1}. Without loss of generality, we suppose that $x_0$, the centre of the cavity, is $0$. Using the same notation introduced in Section \ref{sec: over exterior}, see \eqref{notation}, we observe that for any direction $e \in \mathbb{S}^{N-1}$ the critical position is $\bar \lambda(e)=0$. Let us fix $e=e_N$, and let us introduce \[ \Lambda:= \left\{ \lambda \ge 0: \text{$w_\mu \ge 0$ in $\Sigma_\mu$ for every $\mu \ge \lambda$} \right\}. \] We aim at proving that $\Lambda=(0,+\infty)$, and that $w_\lambda > 0$ in $\Sigma_\lambda$ for every $\lambda>0$. Once this is proved, we can repeat the argument with $e=-e_N$. Since the critical position for $e_N$ and $-e_N$ is the same, we have \[ T_{e_N,\bar \lambda(e_N)} = T_{-e_N, \bar \lambda(-e_N)} = \{x_N=0\}, \] from which we infer that $u$ is symmetric with respect to $\{x_N=0\}$, and strictly decreasing in the $x_N$ variable outside $B_{\rho}(0)$. Symmetry and monotonicity in all the other directions can be obtained in the same way. As in Lemma \ref{lem: moving initial}, we can show that $\Lambda \neq \emptyset$. Once this is done, as in Lemmas \ref{lem: condizione per simmetria} and \ref{lem: moving continuation}, we can show that $\Lambda=(0,+\infty)$. This completes the proof. Notice that assumption \eqref{cond Neumann} is used in Lemma \ref{lem: moving continuation}, case 3). \begin{remark} When $\Omega$ is bounded, by using the method in Section \ref{sec: over annular}, we see that the conclusion of Theorem \ref{thm: radial 1} remains true if we assume that only one of $\Omega$ and $G$ is a ball, and prescribe a constant $s$-Neumann boundary condition on the other component of the boundary. For the same result in the local case, we refer to \cite[Theorem 5]{Sirakov}. \end{remark} \subsection{Proof of Theorem \ref{thm: radial point}} Without loss of generality, we suppose that $x_0=0$. Using the same notation introduced in Section \ref{sec: over exterior}, we fix $e=e_N$ and observe that \[ \Sigma_\lambda=H_\lambda \setminus \{(0',2\lambda)\} \qquad \forall \lambda >0, \] and $\bar \lambda=0$ is the critical position for the hyperplane $T_\lambda$. We slightly modify the definition of $\Lambda$ in the following way: \[ \Lambda:= \left\{ \lambda>0: \text{$w_\mu >0$ in $\Sigma_\mu$ for every $\mu>\lambda$} \right\}, \] where we recall that $w_\lambda(x) = u(x^\lambda)-u(x)$ satisfies the equation \[ (-\Delta)^s w_\lambda+c_\lambda(x) w_\lambda = 0 \qquad \text{in $\Sigma_\lambda$}, \] and $c_\lambda$ has been defined in \eqref{def: c_lambda}. We aim at showing that $\Lambda=(0,+\infty)$. This is the object of the next three lemmas. \begin{lemma} There exists $R>0$ sufficiently large such that $w_\lambda>0$ in $\Sigma_\lambda$ for every $\lambda>R$. \end{lemma} \begin{proof} Exactly as in Lemma \ref{lem: moving initial}, it is possible to show that $w_\lambda \ge 0$ in $\Sigma_\lambda$. By the strong maximum principle, Proposition \ref{STRONG}, either $w_\lambda>0$ in $\Sigma_\lambda$ or $w_\lambda \equiv 0$ in $H_\lambda$. If $w_\lambda \equiv 0$ in $H_\lambda$, then $u(0^\lambda) = u(0',2\lambda) = a$. On the other hand, since $u(x) \to 0$ as $|x| \to +\infty$ we have that for $\lambda$ very large $u(x) \le a/2$ in the whole half-space $H_\lambda$, a contradiction.
\end{proof} Thus, the quantity $\mu:= \inf \Lambda \ge 0$ is a well-defined real number. \begin{lemma}\label{lem: monot} The function $u$ is strictly decreasing in $x_N$ in the half-space $\{x_N > \mu\}$. \end{lemma} \begin{proof} Let $y,z \in \{x_N > \mu\}$ with $y'=z'$ and $y_N<z_N$. We aim at showing that $u(y) > u(z)$. For $\lambda:= (y_N+z_N)/2$, we claim that $z \in \Sigma_\lambda$. Once this is shown, the desired conclusion simply follows from the fact that \[ u(y)-u(z) = w_\lambda(z) >0, \] as $\lambda>\mu$. Since $z_N>\lambda$, if $z \not \in \Sigma_\lambda$, then necessarily $z=0^\lambda$. This means that $y=0$, in contradiction with the fact that $y_N > \mu \ge 0$. \end{proof} We are ready to complete the proof of Theorem \ref{thm: radial point} by showing that $\mu=0$. \begin{lemma} It holds that $\Lambda=(0,+\infty)$. \end{lemma} \begin{proof} By contradiction, let $\mu>0$. At first, by continuity $w_\mu \ge 0$ in $\Sigma_\mu$. Thus, by the strong maximum principle we have that either $w_\mu >0$ in $\Sigma_\mu$, or $w_\mu \equiv 0$ in $H_\mu$. To rule out the latter alternative, we observe that, having assumed $\mu>0$, we obtain $u(0',2\mu) = a$. Thanks to the previous lemma, we infer that $u(0',x_N) > a$ whenever $x_N \in (\mu,2\mu)$, in contradiction with the maximality of $a$. Thus, it remains to reach a contradiction when $w_\mu > 0$ in $\Sigma_\mu$. By the definition of $\inf$, there exist sequences $0<\lambda_k<\mu$ and $x_k \in \Sigma_{\lambda_k}$ such that $\lambda_k \to \mu$ and $w_{\lambda_k}(x_k)<0$. Since $w_{\lambda_k} \ge 0$ on $\partial \Sigma_{\lambda_k}$ and tends to $0$ as $|x| \to +\infty$, it is not restrictive to assume that $x_k$ is an interior minimum point for $w_{\lambda_k}$ in $\Sigma_{\lambda_k}$. If $|x_k| \to +\infty$, we obtain a contradiction as in Lemma \ref{lem: moving initial}. Hence, up to a subsequence, $x_k \to \bar x\in \overline{\Sigma_\mu}$. Notice that by uniform convergence $w_\mu(\bar x) = 0$, which forces $\bar x \in \partial \Sigma_\mu$. If $\bar x= (0',2\mu)$, this means that $u(0',2\mu) = a$, and by Lemma \ref{lem: monot} we obtain a contradiction with the fact that $u \le a$ in $\mathbb{R}^N$. Therefore $\bar x \in T_\mu$. This means that all the points $x_k$, and also $\bar x$, are interior points for the anti-symmetric functions $w_{\lambda_k}$ and $w_\mu$ in the sets \begin{align*} \mathbb{R}^N \setminus \left( \{0\} \cup \{(0',2\lambda_k)\}\right) &= \Sigma_{\lambda_k} \cup T_{\lambda_k} \cup \Sigma_{\lambda_k}^{\lambda_k}, \\ \mathbb{R}^N \setminus \left( \{0\} \cup \{(0',2\mu)\} \right) &= \Sigma_{\mu} \cup T_{\mu} \cup \Sigma_{\mu}^{\mu}, \end{align*} respectively. As a consequence, we can argue as in case 1) of the proof of Lemma \ref{lem: moving continuation}, deducing that for some $\rho,\gamma>0$ the sequence $\{w_{\lambda_k}\}$ is uniformly bounded in $\mathcal{C}^{1,\gamma}(\overline{B_\rho(\bar x)})$. This entails $\mathcal{C}^1$ convergence in $B_{\rho}(\bar x)$, and by minimality $\nabla w_\mu (\bar x) = 0$, in contradiction with Proposition \ref{prop: new hopf}. \end{proof} \section{Existence results}\label{sec: existence} This section is devoted to the proof of the existence of a solution to \begin{equation}\label{ex problem} \begin{cases} (-\Delta)^s u = f(u) & \text{in $\mathbb{R}^N \setminus \overline{B_1}$} \\ u = a & \text{in $\overline{B_1}$}, \end{cases} \end{equation} satisfying all the assumptions of Theorem \ref{thm: main 1 prime}.
To this aim, we recall that the critical exponent for the embedding $H^s(\mathbb{R}^N) \hookrightarrow L^p(\mathbb{R}^N)$ is defined as $2^*_s:= 2N/(N-2s)$. Let~$ {\mathcal{R}}^s$ be the space of functions $u \in H^s(\mathbb{R}^N)$ that are radial and radially decreasing with respect to the origin. We point out that, if~$u \in {\mathcal{R}}^s$, the decay estimate \begin{equation}\label{DDCC} |u(x)| \le C \|u\|_{L^2(\mathbb{R}^N)} |x|^{-N/2} \qquad \forall x \in \mathbb{R}^N \end{equation} holds (see e.g. Lemma 2.4 in \cite{DipPalVal} for a simple proof), and this ensures that $u(x) \to 0$ as $|x| \to +\infty$. Moreover, it is known\footnote{The details of the easy proof of this compactness statement can be obtained as follows. Given a bounded family $F$ in ${\mathcal{R}}^s$ and fixed $\epsilon>0$, we find an $\epsilon$-net for~$F$. That is, first we use~\eqref{DDCC} to say that for any $u\in F$, we have that $\|u\|_{L^p(\mathbb{R}^N\setminus B_R)}<\epsilon/2$ if $R$ is chosen suitably large (in dependence of $\epsilon$). Then we use local compact embeddings (see e.g. Corollary 7.2 in \cite{MR2944369}) for the compactness in $L^p(B_R)$: accordingly, we find $h_1, \dots h_M\in L^p(B_R)$ such that for any $u\in F$ there exists $i\in\{ 1,\dots,M\}$ such that $\| u-h_i\|_{L^p(B_R)} <\epsilon/2$ (of course $M$ may also depend on $\epsilon$). We extend $h_i$ as zero outside $B_R$, and we have found an $\epsilon$-net, since in this way $\| u-h_i\|_{L^p(\mathbb{R}^N)}\leq \| u-h_i\|_{L^p(B_R)}+ \| u\|_{L^p(\mathbb{R}^N\setminus B_R)}<\epsilon$. This shows the compactness that we need. For a more general and comprehensive treatment of this topic, we refer to \cite{MR683027} and Theorem 7.1 in~\cite{PRPR}. } that~${\mathcal{R}}^s$ compactly embeds into $L^p(\mathbb{R}^N)$ for every $2 < p < 2^*_s$. \begin{theorem} Let $f(t):= g(t) -t$ for some $g: \mathbb{R} \to \mathbb{R}$ continuous, odd, and such that \begin{equation}\label{ass existence} g(t) t \le 0 \qquad \text{for every $t \in \mathbb{R}$}. \end{equation} Then there exists a radially symmetric and radially decreasing solution $u \in \mathcal{C}^s(\mathbb{R}^N)$ of problem \eqref{ex problem}, satisfying the additional condition $0 \le u \le a$ in $\mathbb{R}^N$. \end{theorem} \begin{proof} We denote $\Omega = \mathbb{R}^N \setminus \overline{B_1}$, and set \[ X:= \left\{ u \in H^s(\Omega): \text{$u = a$ a.e. in $B_1$} \right\}. \] Let $J:X \to \mathbb{R}$ be defined by \[ J(u) := \frac{1}{2} \int_{\mathbb{R}^{2N}} \frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}\,dx\, dy + \frac{1}{2}\int_{\Omega} u^2 - \int_{\Omega} G(u), \] where $G$ denotes a primitive of $g$. It is not difficult to check that if $u \in X$ is a minimizer for $J$, then $u$ solves \eqref{ex problem}, and hence in the following we aim at proving the existence of such a minimizer. Let $c:= \inf_{X} J$. Since $G \le 0$ by assumption \eqref{ass existence}, we have that \[ J(u) \ge \frac{1}{2} \int_{\mathbb{R}^{2N}} \frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}\,dx\, dy + \frac{1}{2}\int_{\Omega} u^2 = \frac{1}{2} \|u\|_{H^s(\Omega)}^2, \] which implies that $J$ is coercive on $H^s(\Omega)$. Let $\{u_n\}$ be a minimizing sequence for $c$. Since $g$ is odd, we can suppose that $u_n \ge 0$ for every $n$ (recall that, if $u \in H^s(\Omega)$, also $|u| \in H^s(\Omega)$), and thanks to the fractional P\'olya--Szeg\H{o} inequality (see \cite{Park}) it is not restrictive to assume that each $u_n$ is radially symmetric and radially non-increasing with respect to $0$.
Thus $\{u_n\}$ is a bounded sequence in ${\mathcal{R}}^s$, so that by compactness we can extract a subsequence of $\{u_n\}$ (still denoted $\{u_n\}$) and find a function $u \in {\mathcal{R}}^s$ such that $u_n \to u$ weakly in $H^s(\Omega)$ and a.e. in $\mathbb{R}^N$. Notice that $u \in X$. Now by weak lower semi-continuity we infer \[ c \le J(u) \le \liminf_{n \to +\infty} J(u_n) = c, \] namely $u$ is a minimizer for $c$, and hence a solution of \eqref{ex problem}. By convergence, it is radially symmetric and radially non-increasing with respect to $0$, and is nonnegative. Moreover, \[ \begin{cases} (-\Delta)^s u = g(u) -u \le 0 & \text{in $\mathbb{R}^N \setminus \overline{B_1}$}\\ u \le a & \text{in $\overline{B_1}$} \\ u(x) \to 0& \text{as $|x| \to +\infty$}, \end{cases} \] which, as in the proof of Corollary \ref{corol: subharmonicity}, implies $u \le a$ in $\mathbb{R}^N$. Finally, by Theorem \ref{thm: regularity} it follows that $u \in \mathcal{C}^s(\mathbb{R}^N)$. \end{proof}
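\begin{remark}
As a simple illustration of an admissible nonlinearity (our example, not part of the statement above): the choice $g(t) = -t^3$ is continuous, odd, and satisfies
\[
g(t)\, t = -t^4 \le 0 \qquad \text{for every $t \in \mathbb{R}$},
\]
so that \eqref{ass existence} holds and the previous theorem applies to $f(t) = g(t) - t = -t^3 - t$.
\end{remark}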
\section{Introduction\label{sec:intro}} The presence and nature of dust in the universe can be explored by observing both the thermal radiation it emits, and its influence on stellar photons, which it absorbs and scatters to produce extinction. Both of these methods sample different dust populations, with emission being most sensitive to the hottest dust components along the entire line of sight, while extinction is sensitive to the full column of dust between the observer and the extinguished source. Therefore, the wavelength dependence of interstellar extinction can be interpreted in terms of the wavelength dependence of the probability for dust and radiation to interact, i.e. the dust cross-sections. { Extinction is observed to vary on different galactic lines of sight (e.g. \citealt{1990ApJS...72..163F,2007ApJ...663..320F}, hereafter \citetalias{2007ApJ...663..320F}), and extragalactically \citep{1983MNRAS.203..301H,1984A&A...132..389P,1994ApJ...429..582C}.} As a result, attempts are frequently made to analyse the composition of dust on given lines of sight by fitting the extinction curve using extinction cross-sections for likely mixtures of materials and particle sizes. One must, therefore, ensure that all possible biases and systematic effects are accounted for in the treatment of extinction. One key and often overlooked bias is the real angular extent of the observing beam in which extinction measurements are made. Since observations do not use a pencil beam, there is a non-zero probability of detecting scattered light \citep{1972ApJ...176..651M,2009A&A...493..385K}, both increasing the total detected flux and altering the wavelength dependence of extinction. It is also possible that an inhomogeneous dust distribution will present paths with different optical depths, with the relative covering fractions of the different phases influencing the detected flux. In galactic observations of the diffuse ISM the impact of scattering is typically assumed to be negligible, an assumption that we consider more carefully in Sect. \ref{sec:ism}. Nevertheless, in regions where dust and stars are well mixed or where the physical size of the observing beam is large compared to the structure of the dusty medium, the fraction of scattered light can become significant. This may occur in more distant galactic star-forming regions \citep{1984ApJ...287..228N} or for stars embedded in a compact (compared to the resolution) envelope or disc, i.e. dust-enshrouded (young or evolved) stars \citep{1996A&A...312..243V,1998A&A...340..103W,2006ApJ...636..362I}. Similarly, inhomogeneity and scattering effects also become significant in extragalactic astronomy \citep{1988ApJ...333..673B,1994ApJ...429..582C,2000ApJ...528..799W}, where an entire star-forming complex can comfortably fit within a single resolution element. As a result, unresolved observations of such systems must correctly account for these effects, or they will derive significantly different extinction laws that do not necessarily indicate any change in the physical nature of the dust grains. Such effects can include both steepening \citep{2009A&A...493..385K} and flattening \citep{1984ApJ...287..228N} of the extinction curve, under- or overestimation of stellar luminosities, or even negative extinction depending on the distribution of the dust and the size of the aperture \citep{2009A&A...493..385K}. In this paper we make use of numerical radiative transfer models to investigate the effect of scattering and clumpiness on extinction. In Sect. 
\ref{sec:back} we review the relevant theory and previous findings, and Sect. \ref{sec:MC} outlines the computational methods we employ. The remainder of the paper then investigates these effects with particular attention paid to circumstellar shells{ , discs} and the diffuse ISM.

\section{Effective extinction\label{sec:back}} Following \citet{2009A&A...493..385K} we define the interstellar extinction law \begin{equation} \frac{\tau\left(\lambda\right)}{\tau_\mathrm{V}} = \frac{K_{\mathrm{ext}}\left(\lambda\right)}{K_{\mathrm{ext}}\left(\mathrm{V}\right)}, \label{eqn:ext} \end{equation} where $\tau$ is the optical depth and $K_\mathrm{ext}$ the extinction cross-sections of dust, for observations with infinite resolution. The so-called true extinction is therefore influenced only by the column density of extinguishing material (i.e. ISM dust) along the line of sight and the wavelength dependence of its interactions with light. Using the other standard definitions for colour excess $E\left(B-V\right) = A\left(\mathrm{B}\right) - A_{\mathrm{V}}$ and $E\left(\lambda -V\right) = A\left(\lambda\right) - A_{\mathrm{V}}$, where $A$ denotes the extinction in magnitudes, one arrives at \begin{equation} k\left(\lambda - V\right) = \frac{E\left(\lambda -V\right)}{E\left(B-V\right)}, \end{equation} which is the traditional form of the extinction law in terms of colour excess. This then naturally leads to the definition of the ratio of total-to-selective extinction, \begin{equation} R_{\mathrm{V}} = -k\left(0-V\right) = \frac{\tau_\mathrm{V}}{\tau\left(\mathrm{B}\right) - \tau_\mathrm{V}}. \label{eqn:RV} \end{equation} As a result, the broadband behaviour of the extinction curve can be described to first order by this quantity, $R_{\mathrm{V}}$, and hence so can the dust properties. Changes in $R_{\mathrm{V}}$ therefore indicate changes in the dust, usually assumed to result from changes in grain size, as this is, to first order, the dominant factor in the broad-band behaviour of the dust cross-sections. By finding combinations of dust grains whose cross-sections reproduce the observed constraints, one can therefore hope to understand the composition of dust along a particular line of sight. To do this, one must assume some dust constituents, typically some combination of silicon- and carbon-bearing species, which may be in distinct grain types (e.g. separate carbon- and silicate-bearing grains) or mixed together (composite grains). One must also choose a grain geometry (e.g. spherical, spheroidal, fractal etc.) and structure (e.g. homogeneous or porous). Then, by assuming a size distribution of the particles, one can compute the extinction cross-sections, albedo, phase function, etc. for the dust model and compare the wavelength dependence of these properties to those observed for interstellar dust. For a more detailed discussion of the processes involved in fitting the extinction curve, please refer to the literature \citep[e.g.][]{2001ApJ...548..296W,2003ARA&A..41..241D,2004ASPRv..12....1V,2012JQSRT.113.2334V,2014A&A...561A..82S}. However, in real observations a number of effects can complicate the picture.
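As a minimal numerical illustration of Eqs. \ref{eqn:ext} to \ref{eqn:RV} before turning to these complications (a Python sketch with assumed optical depths, not measured values):

\begin{verbatim}
# Sketch: R_V from assumed B- and V-band optical depths (illustrative only)
import numpy as np

tau_V = 1.00                          # assumed V-band optical depth
tau_B = 1.32                          # assumed B-band optical depth (> tau_V)

A_V = 2.5 * np.log10(np.e) * tau_V    # extinction in magnitudes (~1.086 tau)
A_B = 2.5 * np.log10(np.e) * tau_B
E_BV = A_B - A_V                      # colour excess E(B-V)

R_V = A_V / E_BV                      # equals tau_V / (tau_B - tau_V)
print(R_V)                            # ~3.1, typical of the diffuse ISM
\end{verbatim}

Note that $R_{\mathrm{V}}$ depends only on the ratio $\tau\left(\mathrm{B}\right)/\tau_\mathrm{V}$, i.e. it is a pure shape parameter of the extinction curve.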
Firstly, although the extinction cross-sections are defined as \begin{equation} K_{\mathrm{ext}} = K_{\mathrm{abs}} + K_{\mathrm{sca}}, \end{equation} in general scattering is not isotropic, meaning that observationally there is a degeneracy \citep{2002ocd..conf....1V} between the albedo $\omega = K_{\mathrm{sca}} / K_{\mathrm{ext}}$ and the anisotropy parameter \begin{equation} g = \langle \cos\left(\theta\right)\rangle = \int p\left(\cos\left(\theta\right)\right) \cos\left(\theta\right)\mathrm{d}\cos\left(\theta\right)\label{eqn:gfac} \end{equation} \noindent where $p\left(\cos\left(\theta\right)\right)$ is the probability density function of the cosine of the scattering angle $\theta$, which parametrises the expectation of the scattering direction, with $+1$ corresponding to pure forward scattering and $-1$ to pure back-scattering. Furthermore, in real astronomical observations, the aperture or beam in which the extinction is measured is not a pencil beam and has some physical extension, determined by the resolution. Hence, unresolved structure within the beam can alter the observed extinction by, for example, \begin{itemize} \item partially occulting the source; \item inhomogeneities biasing observations toward low-$\tau$ paths; \item scattering light into the beam; \end{itemize} which can combine with the aforementioned degeneracy. Figure \ref{fig:scatter} depicts this in cartoon fashion { for sources in the far field}.

\begin{figure} \resizebox{\hsize}{!}{ \begin{tikzpicture} \fill [gray,opacity=0.2] (0.2,4.5) rectangle (8.6,7.5); \draw [fill=gray] (2,6) circle (0.1); \draw [fill=gray] (2.2,6.6) circle (0.1); \draw [fill=gray] (2.3,5.7) circle (0.1); \draw [fill=gray] (1.7,6.3) circle (0.1); \draw [fill=gray] (2.2,9.6) circle (0.1); \draw [fill=gray] (8.5,6.6) circle (0.1); \draw [fill=gray] (8.6,5.85) circle (0.1); \draw [fill=gray] (8.4,5.1) circle (0.1); \draw [fill=gray] (6.4,6.9) circle (0.1); \draw [fill=gray] (7.4,5.4) circle (0.1); \draw [fill=gray] (3.4,5.7) circle (0.1); \draw [fill=gray] (4.4,5.55) circle (0.1); \draw [fill=gray] (5.4,6.6) circle (0.1); \draw [fill=gray] (6.7,5.85) circle (0.1); \draw [fill=gray] (7.,7.2) circle (0.1); \draw [fill=gray] (3.7,6.75) circle (0.1); \draw [fill=gray] (5.,4.5) circle (0.1); \draw [fill=gray] (2.7,6.9) circle (0.1); \draw [fill=gray] (8.4,5.1) circle (0.1); \draw [fill=gray] (8.4,5.1) circle (0.1); \draw [fill=gray] (2.2,3.6) circle (0.1); \node[star,star point ratio=1.8, fill=blue,minimum width=1mm,scale=0.5,text=white] at (8,6) {1}; \draw [color=black,thick] (0.2,4.5) -- (8.6,4.5); \draw [color=black,<-,thick] (0,7.5) -- (8.6,7.5); \node at (0.4,7.68) {To observer}; \draw [color=blue,<-,thick] (0.2,5.7) -- (2,6); \node at (0,5.7) {a}; \draw [color=blue,<-,thick] (2,6) -- (7.95,6); \draw [color=blue,<-,thick] (2.2,6.6) -- (7.95,6.15); \draw [color=blue,<-,thick] (2.2,9.6) -- (7.95,6.075); \draw [color=blue,<-,thick] (8.4,5.1) -- (7.95,5.85); \draw [color=blue,<-,thick] (0.2,4.8) -- (8.4,5.1); \node at (0,4.75) {b}; \draw[color=cyan,<-,thick,dotted](1.7,6.3) -- (2.2,9.6); \draw[color=cyan,<-,thick,dotted] (0.2,6.5) -- (1.7,6.3); \node at (0,6.4) {f}; \draw [color=cyan,<-,thick,dashed] (0.2,9) -- (2.2,6.6); \node at (0,9) {c}; \draw[color=blue,<-,thick](8,6) -- (2.2,3.6); \draw[color=cyan,<-,thick,dashed] (0.2,3.6) -- (2.2,3.6); \node at (0,3.6) {d}; \node[star,star point ratio=1.8, fill=blue,minimum width=1mm,scale=0.5,text=white] at (8,9) {2}; \draw[color=blue,<-,thick,dotted] (2.3,5.7) -- (8,9); \draw[color=cyan,<-,thick,dotted] (0.2,5.3) -- (2.3,5.7); \node at (0,5.2) {e}; \end{tikzpicture}} \caption{Scattered photons may still be observed on the detector. The grey-shaded region between the two black lines indicates the volume swept out by the observing beam of the telescope as it extends into space. Photons (light and dark blue lines) that arrive at the detector at the end of this region (marked `To observer') will be observed as though they originated at the star. The grey circles represent a distribution of dust along the line of sight toward the star being observed. The dark full lines show the contributions we consider here: `undeflected' photons (a) which are forward scattered and do not leave the beam, and photons which are back scattered (b) into the beam. The pale dashed lines (c,d) show cases where the scattering event leads to the photon leaving the observing beam. Finally, the dotted lines (e,f) represent cases that may contribute to observations, but occur with significantly lower probabilities and are hence not considered in this paper. } \label{fig:scatter} \end{figure}

When the extinguished source and the observer are roughly equidistant from the extinguishing material, this effect is negligible \citep{2009A&A...493..385K}, but it becomes increasingly significant the shorter the physical distance between the star and the attenuating matter. It naturally follows that this effect is most significant for embedded objects and extragalactic observations. To account for this, previous authors \citep[see e.g.][]{2009A&A...493..385K} have defined the \textit{effective} optical depth and extinction curve, i.e. \begin{equation} \frac{\taueff\left(\lambda\right)}{\tau_\mathrm{V,eff}} \neq \frac{K_{\mathrm{ext}}\left(\lambda\right)}{K_{\mathrm{ext}}\left(\mathrm{V}\right)}, \label{eqn:effext} \end{equation} where $\taueff$ is the optical depth one derives from the observations, i.e. the negative of the logarithm of the ratio of the observed flux to the flux that \emph{would be observed in the absence of dust} \begin{equation} \taueff = - \ln\frac{F_{\mathrm{obs}}}{F_{\mathrm{0}}} . \label{eqn:taueff} \end{equation} From this follows the definition of $R_{\mathrm{V,eff}}$, as in equations \ref{eqn:ext} to \ref{eqn:RV} with $\taueff$ instead of $\tau$. This is similar to the definition of attenuation optical depth $\tau_\mathrm{att}$ used in e.g. \citet{2000ApJ...528..799W}, but noticeably different from the definitions of $\taueff$ used in \citet{1996ApJ...463..681W} and \citet{1998A&A...340..103W} and $\tau_\mathrm{att}$ in \citet{2005ApJ...619..340F}, which exclude the contribution from scattered photons. It should also be clear that unlike $\tau$, $\taueff$ is a function not only of the source and its dust distribution, but also of the aperture in which it is observed \citep{2009A&A...493..385K}. \citet{2009A&A...493..385K} also emphasises that $\taueff$ is never larger than $\tau$, and that it can even be negative (e.g. in a reflection nebula). When the dust distribution is homogeneous, $\taueff$ depends only on $\tau$, the dust composition and the aperture, while for inhomogeneous media the spatial distribution of dust and the viewing angle are clearly also important \citep{1998A&A...340..103W}.

\section{Monte Carlo models\label{sec:MC}} As the exploration of the effective extinction necessitates accurate radiative transfer modelling in inhomogeneous media, we must use Monte Carlo methods.
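In practice, every effective extinction curve in this work is derived from simulated photometry via Eq. \ref{eqn:taueff}. As a minimal illustration of this step (a Python sketch with assumed packet counts, not output of our models):

\begin{verbatim}
# Sketch: tau_eff and R_V,eff from detected vs. dust-free fluxes
# (the packet counts below are assumed for illustration)
import numpy as np

N_0   = {'B': 1.0e7, 'V': 1.0e7}  # packets detected without dust
N_obs = {'B': 2.2e6, 'V': 3.3e6}  # packets detected with dust, incl. scattering

tau_eff = {band: -np.log(N_obs[band] / N_0[band]) for band in N_0}
R_V_eff = tau_eff['V'] / (tau_eff['B'] - tau_eff['V'])
print(tau_eff['V'], R_V_eff)      # ~1.11 and ~2.7 for these numbers
\end{verbatim}

Because scattered light can contribute to the detected flux, the resulting $\taueff$ and $R_{\mathrm{V,eff}}$ can differ markedly from their true counterparts.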
We use an implementation originally described in \citet{2008ipid.book.....K}, and significantly expanded upon in \citet{2012A&A...539A..20S} and \citet{2012ApJ...751...27H}. Our code allows for an arbitrary choice of geometry, dust composition, and illumination source, and includes anisotropic scattering. By launching packets of radiation from the source and following their interactions with the surrounding dust distribution, we solve the radiative transfer. The dust distribution consists of a Cartesian grid of densities and temperatures. The interactions of the radiation packets are then computed based on the method in \citet{2008ipid.book.....K}, which employs the `immediate temperature update' method of \citet{2001ApJ...554..615B}. We have extended this method to include the \citet{1999A&A...344..282L} algorithm for the dust temperatures in optically thin regions, reducing the uncertainty in these temperatures, and to include anisotropic scattering by sampling scattering angles from the Henyey-Greenstein (HG) phase function \citep{1941ApJ....93...70H} \begin{equation} p\left(\cos\left(\theta\right)\right) = \frac{1}{4\pi} \frac{1 - g^2}{\left(1 + g^2 - 2 g \cos\left(\theta\right)\right)^{3/2} }, \label{eqn:HG} \end{equation} where $g$ is the anisotropy parameter (Eq. \ref{eqn:gfac}) derived from Mie calculus \citep{Mie1908,1983asls.book.....B}. This can be re-arranged to give \begin{equation} \cos\left(\theta\right) = \frac{1}{2g} \left[1 + g^2 - \left(\frac{1 - g^2}{1+g\left[2P-1\right]}\right)^2\right], \label{eqn:invHG} \end{equation} where $P=\int p\left(\cos\left(\theta\right)\right) \mathrm{d}\cos\left(\theta\right)$ is the cumulative probability distribution, from which the scattering angle can be sampled directly (a minimal numerical sketch of this sampling step is given below). As this expression contains a singularity for $g=0$, it is necessary to treat these cases separately by explicitly interpreting them as isotropic scattering. Figure \ref{fig:HGfun} shows probability density functions $p\left(\theta\right)$ for the HG function over a representative range of the $g$ parameter. The importance of anisotropic scattering { is demonstrated} in Fig. \ref{fig:scadif}, which shows a difference image comparing the same model viewed in scattered light using either the HG function or a pseudo-isotropic approximation, in which the scattering cross-sections are reduced by $K^{\prime}_{sca} = \left(1-g\right)K_{sca}$. This effectively divides the scattering into an isotropically-scattered component and a forward-scattered (unscattered) component, which works well for $0\leq g \ll 1$ and $g=1$ but becomes increasingly poor as $\left| g \right| \rightarrow 1$. It is clear that the HG function shows a completely different distribution of scattered flux, with the near-side of the disc significantly brighter and the far-side darkened.

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[scale=0.5,clip=true,trim=0.3cm 0.5cm 4.5cm 16.5cm]{HGfun2.pdf}} \caption{Probability density function of the Henyey-Greenstein phase function as a function of scattering angle for a representative range of g-factors. $\theta = 0\degr$ indicates that the outgoing direction of the scattered photon is identical to that of the incoming one, while $\theta = \pm 180\degr$ implies a reversal of direction { (back scattering).}} \label{fig:HGfun} \end{figure}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[scale=0.5,clip=true,trim=0.25cm 1.8cm 1.25cm 9.25cm]{scadif2.pdf}} \caption{Difference image between the V-band scattered flux computed assuming the HG phase-function ($F_\mathrm{HG}$) and a pseudo-isotropic one ($F_\mathrm{g-fac}$), computed for a dust disc with a half-opening angle of $30\degr$ viewed from an angle of $45\degr$ from the rotation axis using { a dust model composed of amorphous carbon and silicates}. The region of the disc at $\delta \leq 0$ is the near-side of the disc. The HG function reproduces the strength of forward scattering much more effectively, with differences between the two methods of up to an order of magnitude, although the integrated scattered flux is the same using both methods. } \label{fig:scadif} \end{figure}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[scale=0.5,clip=true,trim=1.5cm .5cm 3.75cm 16.5cm]{kscaplot.pdf}} \caption{Normalised scattering cross-sections (full lines) and albedo (dashed lines) as a function of wavelength for both dust models. While the albedo is rather flat, the cross-sections show a strong peak between $200-250\mbox{ nm}$.} \label{fig:ksca} \end{figure}

Since we are able to follow the packets explicitly, we can directly compute $\taueff$ for all wavelengths, and hence $R_{\mathrm{V,eff}}$, simply by counting how many packets emerge from the cloud at a given wavelength. A number of apertures can be defined on the basis of viewing angle or physical location. As photon packets exit the model grid, they are added to the statistics for the relevant apertures. We thus build up effective extinction curves by computing the number of photons within the aperture and comparing to the number that would have been detected in the absence of dust (Eq. \ref{eqn:taueff}). Our code, including anisotropic scattering, is parallelised using the OpenMP\footnote{http://openmp.org/} API for use on shared-memory machines. When we are only interested in the influence of scattering and extinction by dust on the ultraviolet, optical and near infrared, we further optimise the code by neglecting dust emission. In this case all photon packets absorbed by the dust are discarded, and the runtime of the code is decreased by a factor of four. Nevertheless the models remain computationally intensive, and although the physical scale of the apertures in which the extinction is computed is correct, it is necessary to overestimate their angular extent { as seen from the central star} to develop sufficient statistics without the runtime becoming infeasible. { This is done by placing the apertures closer to the star than the assumed distance to the observer.} In these studies we consider the so-called MRN grain size distribution ($dn\left(a\right) \propto a^{-q} \ da$, $q = 3.5$; \citealt{1977ApJ...217..425M}) of silicate and amorphous carbon. Since we are looking for changes caused by the dust geometry, the precise dust model chosen is not important.
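To make the sampling step concrete, the following minimal Python sketch (our illustration; the function and variable names are not an excerpt from the production code) draws $\cos\left(\theta\right)$ from Eq. \ref{eqn:invHG}, treating $g=0$ as isotropic scattering:

\begin{verbatim}
# Sketch: inverse-CDF sampling of the HG phase function, Eq. (invHG);
# names below are illustrative, not from the production code
import numpy as np

def sample_hg_costheta(g, rng, n=1):
    """Draw n samples of cos(theta) from the HG distribution."""
    P = rng.random(n)                     # uniform deviates in [0, 1)
    if abs(g) < 1e-6:                     # g ~ 0: isotropic scattering
        return 2.0 * P - 1.0
    frac = (1.0 - g**2) / (1.0 + g * (2.0 * P - 1.0))
    return (1.0 + g**2 - frac**2) / (2.0 * g)

rng = np.random.default_rng(42)
samples = sample_hg_costheta(0.6, rng, n=100_000)
print(samples.mean())                     # approaches g = 0.6
\end{verbatim}

The sample mean of $\cos\left(\theta\right)$ converges to $g$ (Eq. \ref{eqn:gfac}), which provides a convenient consistency check of the implementation.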
We make use of two dust models depending on the conditions we wish to explore: \begin{itemize} \item amorphous carbon and silicates (aCSi), using optical constants from \citet{1996MNRAS.282.1321Z} and \citet{2003ApJ...598.1017D}, respectively; \item graphite and silicates (GraSi), using optical constants from \citet{2003ApJ...598.1017D}. \end{itemize} Both models consist of carbonaceous grains with radii between 16\,nm and 130\,nm and silicate grains between 32\,nm and 260\,nm. The bulk density of the dust grains is 2.5$\mbox{ g cm}^{-3}$, and the carbon-to-silicate abundance ratio is 6.5. The scattering cross-sections of both models peak around 250\,nm (see Fig.~\ref{fig:ksca}); { this results from highly efficient scattering from grains with size $a \approx\lambda / 2\pi$, i.e. the size of the smallest silicate grains.}

We are also able to generate high signal-to-noise images by post-processing the output of the radiative transfer simulations with a ray-tracer. Scattered light images require that we first store the position, frequency and direction of photons before a scattering event. Then this information is read into the ray-tracing algorithm, and used to calculate the angle between the incident photon and the observing direction \begin{equation} \cos\theta = \vec{\hat{e}_i} \cdot \vec{\hat{e}_o} \end{equation} where $\vec{\hat{e}_{i,o}}$ indicate the incoming and observer direction unit vectors, respectively. We then determine the probability of scattering the photon packet into the viewing direction from the scattering phase-function (Eq. \ref{eqn:HG}), and this fraction of the packet is added to the ray. This is similar to the so-called peel-off technique \citep{1984ApJ...278..186Y}. Emission images are computed by integrating the emission determined from the dust temperatures, cross-sections and optical depth along the line of sight. Both routines include a correction for the attenuation caused by the line-of-sight optical depth. In the case of a very small aperture (e.g. simulated observations of extinction in the diffuse ISM), the same signal-to-noise ratio can be achieved in a much shorter time by exploiting this capability { to integrate over all scattering events.}

{ While astronomical ray-tracing applications usually only consider models in the far field, allowing them to use parallel rays, models of the ISM need to consider photon scattering along the entire line of sight. Hence, we apply perspective-projection ray tracing \citep[e.g.][]{Appel} to capture the effect of the beam widening as the distance from the observer increases; the direction of a ray now depends upon its position on the detector, and all rays are divergent. This requires us to update the prescription in \citet{2012ApJ...751...27H}. In order to determine the deflection of each ray, we must first know the field of view required from the image. This is calculated from the inverse tangent of the projected size of the model and the distance to the object, e.g. for a cuboid where the long ($z$) axis is parallel to the central ray \begin{equation} \theta_{FOV} = 2 \tan^{-1}\left(\frac{\Delta x}{D}\right) \end{equation} where $\Delta x$ is the length of the $x$-axis of the model and $D$ is the distance from the observer to the object. This ensures that the entire model fits into the image at the location of the object. The deflection between adjacent pixels is then given by $\delta\theta_{pix} = \theta_{FOV}/n_{\rm pix}$ for a square image with $n_{\rm pix} \times n_{\rm pix}$ pixels.
The direction of the ray launched from each pixel can then be found by rotating the direction of the vector joining the detector to the object by integer multiples of $\delta\theta_{pix}$. Because the rays are divergent, the size of each pixel becomes a function of the distance from the detector along the ray. If $d$ is the distance the ray has travelled so far, then the pixel area is $A\left(d\right) = \left(d\, \delta\theta_{pix}\right)^2$. This value must be substituted for the constant value of the pixel size $A$ in Eq. 17 of \citet{2012ApJ...751...27H}. The algorithm is otherwise identical to standard parallel-projection ray-tracing methods. }

We use the Monte Carlo code to calculate the temperature structure and distribution of scattering events in the model cloud and then calculate images at all the wavelengths of interest ($\lambda < 3\,\mu$m) with the ray-tracer. In an analogous manner to real observations, these images are then compared to identical images of dust-free simulations, and the effective optical depth and extinction curve are calculated (Eq.~\ref{eqn:taueff}).

\section{Results\label{sec:res}} \subsection{Influence of clumps on extinction in circumstellar shells\label{sec:shell}} We wish to study the influence of clumps on the effective extinction curve, and so first benchmark the results of our treatment by comparing them to examples from the literature. Therefore, we compute the effective extinction curves for clumpy spherical shells, similar to those treated by \citet{1998A&A...340..103W}. In our case each clump occupies one cell of the model grid. This grid consists of a cube containing $\left[ nx, ny, nz\right] = \left[ 60, 60, 60\right]$ cells of equal size. The shell is completely described by its inner and outer radii $R_\mathrm{in}$ and $R_\mathrm{out}$, the number of clumps $N_\mathrm{cl}$ and the total dust mass in the shell $M_\mathrm{d}$. The range of parameters used is included in Table \ref{tab:shellpar}.

\begin{table} \caption{Clumpy shell model parameters} \label{tab:shellpar} \centering \begin{normalsize} \begin{threeparttable} \begin{tabular*}{\hsize}{l l l l} \hline\hline \multicolumn{3}{c}{Parameter} & Values \\ \hline & & &\\ Inner radius &[AU] & $R_\mathrm{in}$ & 12 \\ Outer radius &[AU] & $R_\mathrm{out}$ & 120 \\ Dust mass\tnote{a} &[$10^{-7} \ M_{\odot}$] & $M_\mathrm{d}$ & 1.8, 5.5 \\ Optical depth &(aCSi) & $\tau_\mathrm{V}$\tnote{b}\, & 1.0, 3.1 \\ &(GraSi) & & 0.9, 2.9\\ Number of clumps&& $N_\mathrm{cl}$ & 0\tnote{c}, 500, 1000, \\ & & & 2000, 3000, \\ & & & 5000, 10000 \\ \hline \end{tabular*} \begin{tablenotes} \item [a] Total mass in dust in the shell. \item [b] Radial optical depths of the homogeneous shells of the respective dust masses. \item [c] 0 corresponds to a homogeneous shell. \end{tablenotes} \end{threeparttable} \end{normalsize} \end{table}

The $N_\mathrm{cl}$ clumps are distributed randomly throughout the volume of the shell by selecting cubes from the model grid; selected cubes will contain dust, and non-selected cubes remain empty\footnote{ To avoid the possibility of infinite loops for high $N_\mathrm{cl}$, if the same cell is selected a second time, it will have its density doubled; if it is selected again it will then have triple the density, \textit{et cetera}.}. The total mass of the shell is then normalised to the input value. As a result, we have a distribution of identical clumps of a given total mass (a minimal sketch of this placement scheme is given below).
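A minimal Python sketch of this placement scheme (with hypothetical helper names; the production code differs in detail):

\begin{verbatim}
# Sketch: random clump placement on the model grid, with repeat selections
# incrementing the density (see footnote); names are illustrative only
import numpy as np

def place_clumps(in_shell, n_cl, m_dust, rng):
    """in_shell: boolean mask of grid cells whose centres lie in the shell."""
    density = np.zeros(in_shell.shape)
    candidates = np.argwhere(in_shell)            # indices of shell cells
    for _ in range(n_cl):
        ix, iy, iz = candidates[rng.integers(len(candidates))]
        density[ix, iy, iz] += 1.0                # repeat hits raise density
    return density * (m_dust / density.sum())    # grid sums to total mass

rng = np.random.default_rng(1)   # varying the seed gives a new realisation
\end{verbatim}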
As the clumps are randomly distributed, we must explore a large range of random seeds\footnote{The random seed is the state used to initialise the random number generator's output. By changing this value between different models by significantly more than the number of random numbers required, we obtain nearly independent streams of pseudo-random numbers.} for each model to be able to extract average behaviour, and to quantify the variations that could be seen between otherwise identical shells. As changing the distribution of clumps and changing the angle from which a clumpy shell is viewed are equivalent, the variations between models with different seeds can also be interpreted in terms of a change in the location of the observer relative to a fixed axis. An example of the density distribution of these shells can be seen in Fig. \ref{fig:shellclumps}. For comparison, we compute homogeneous shells where $R_\mathrm{in}$, $R_\mathrm{out}$, and $M_\mathrm{d}$ are the same as in the clumpy cases. As the dust mass is fixed, models with fewer clumps have clumps of higher optical depths, which lie in the range $0.1 \leq \tau_\mathrm{cl} \leq 30${ , where $\tau_\mathrm{cl}$ is the optical depth in the V band between two opposite faces of a clump}. The extinction curves are computed by treating each face of the model cube as a large aperture (see Sect.~\ref{sec:MC}), and are shown in Figs.~\ref{fig:SaCcurves}--\ref{fig:GraScurves}.

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[clip=true,trim=25cm 8cm 12cm 8cm]{shell_volume_5000_color_bar.jpg}} \caption{An example of the dust distribution in a clumpy shell shown in the 3D model volume. The source is located at the origin. While the colour indicates the dust density (i.e. number density of grains $\times$ mass of dust grains) at a point, the opacity of the colours is related to the total column density. Upon close inspection it is clear that neighbouring clumps may connect to form filamentary structures. } \label{fig:shellclumps} \end{figure}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[scale=0.5,clip=true,trim=2.1cm 0.5cm 4.3cm 5cm]{SaCcurves.pdf}} \caption{Effective extinction curves for clumpy circumstellar shells { as a function of N$_{\rm cl}$} using the aCSi model. The line colours correspond to the models indicated in the top left. As the number of clumps decreases, the clumps become more optically thick and the effective extinction curve flattens. { The white solid line shows the input dust cross-sections normalised to the V band, and the black dotted line the same after a reduction in the scattering efficiencies by $\left(1-g\right)$.} { The two panels refer to different dust masses and hence homogeneous-shell optical depths, as indicated on the panel.}} \label{fig:SaCcurves} \end{figure}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[scale=0.5,clip=true,trim=2.1cm 0.5cm 4.3cm 5cm]{GraScurves.pdf}} \caption{As in Fig. \ref{fig:SaCcurves} using the GraSi model. The same effects occur with both dust models. In addition it is clear that the 2175\AA\,feature is suppressed as the clump optical depth increases.} \label{fig:GraScurves} \end{figure}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[scale=0.5,clip=true,trim=1cm 0.5cm 4.3cm 5cm]{features.pdf}} \caption{Fractional strength of the $2175\mbox{\AA}$ feature of the effective extinction curves shown in Fig. \ref{fig:GraScurves} (colours as in Fig. \ref{fig:SaCcurves}).
On top of the suppression of the feature, it is apparent that the shape of the feature is different from that given by the input dust cross-sections. } \label{fig:feature} \end{figure}

As in \citet{1998A&A...340..103W}, we find that shells that consist of optically thick clumps have generally flatter extinction curves than that given by the dust cross-sections, and in the most extreme cases the extinction curve can become completely grey (Figs.~\ref{fig:SaCcurves}--\ref{fig:GraScurves}), in accordance with \citet{1984ApJ...287..228N}. Furthermore, the homogeneous shells (and those with optically thin clumps) have extinction curves that are significantly steeper than one would derive from the cross-sections, similar to the findings of \citet{2009A&A...493..385K}. When using the GraSi dust model to include the 2175\,$\mbox{\AA}$ extinction bump, we see that as $\tau_\mathrm{cl}$ increases, not only does the extinction curve flatten, but, as in \citet{1984ApJ...287..228N}, the feature is weakened and eventually flattened out (Fig.~\ref{fig:feature}). However, we also notice that in no case does the \emph{shape} of the feature agree with the input dust cross-sections, regardless of whether the cross-sections are parametrised in terms of $K_\mathrm{ext} = K_\mathrm{abs} + K_\mathrm{sca}$ or $K_\mathrm{ext} = K_\mathrm{abs} + \left(1-g\right)K_\mathrm{sca}$. { In particular, the wavelength of the peak of the feature shifts, generally to shorter wavelengths, although there is no clear trend with N$_{\rm cl}$.} Finally, the wavelength dependence of the extinction at $\lambda\ge 1\mu$m tends toward parallel power-laws, i.e. with the same gradient but offset in $\taueff / \tau_\mathrm{V,eff}$ \citep{1984ApJ...287..228N}. This may indicate that other indicators of extinction are preferable to those given in the V-band, e.g. normalised to the JHK or even L bands, provided that one is confident that the dust is sufficiently cold to neglect dust emission in these bands. Alternatively, one may be able to use the wavelength at which the infrared extinction deviates from a power-law to infer the optical depth of clumps in the medium. The wavelength at which this deviation occurs appears to be related to the optical depth of the clumps, with more optically thick clumps showing power-law behaviour at longer wavelengths, where they become optically thin.

A number of differences exist between our models and those in the literature. \citet{1998A&A...340..103W} integrated the emergent flux over $4\pi$ steradians, while we bin the extinction curve into directional apertures; as clumpiness naturally introduces some directionality to the shell, averaging over all directions neglects this. Contrary to the models of \citet{1984ApJ...287..228N}, which treated the extinguishing medium as a clumpy screen, the use of a shell geometry results in the inclusion of back-scattering, which requires that directionality be included. \citet{2009A&A...493..385K} on the other hand tailored their models to low optical depth clumps, neglecting the high clump optical depth cases we include here.

\subsection{Light scattering by clumpy circumstellar discs\label{sec:disc}} Having ensured that we reproduce the literature results concerning the scattering-driven changes to extinction in clumpy media, it may be of interest to consider the influence of clumpiness on the observation of scattered light itself.
To investigate this, we require models in which the view of the source is unobstructed, so that the stellar contribution can be easily subtracted to leave only the scattered photons. We thus model circumstellar discs constructed in a similar manner to the circumstellar shells in Sect. \ref{sec:shell}, but allowing dust only within a given opening angle of the equator; regions above this are dust free. We choose their inner and outer radii to approximately match those observed for the Vega outer debris disc (80 and 200 AU, respectively) and include $0.1\,M_{\oplus}$ of dust using the aCSi model. The number of clumps and the opening angle of the disc, { $\alpha$,}\footnote{The disc consists of a dust filled equatorial region whose surface is at height $h=r\tan\alpha$ above the mid-plane, with conical dust-free regions at each pole.} are treated as free parameters. One such example can be seen in Fig. \ref{fig:discclumps}. The effective extinction is then measured for an observer seeing the disc face-on but unresolved.

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[clip=true,trim=25cm 8cm 12cm 8cm]{disc_volume_5000_color_bar.jpg}} \caption{Similar to Fig. \ref{fig:shellclumps} but showing an example of the dust density distribution in a clumpy disc model.} \label{fig:discclumps} \end{figure}

As the observer's view of the star is unobstructed, the effective optical depth is negative for all wavelengths, due to the addition of the scattered photons to the stellar emission (see the sketch below). The wavelength dependence of this negative extinction (i.e. the scattered flux) can be interpreted to yield information concerning the scattering properties of the dust. However, as seen in Fig. \ref{fig:scacurves}, the presence of clumpy structure alters the wavelength dependence of the scattered light, making the deduction of the scattering properties an unreliable process. Although Fig.~\ref{fig:scacurves} shows only one disc opening angle (in this case 45\degr), the same behaviour is seen for all opening angles between 5 and 45\degr. It is clear that as the clumps become increasingly optically thick, the scattered light in the UV continuum is suppressed compared to the optical. The strong peak at $\sim$ 2000\,\AA\,that is visible in Fig. \ref{fig:scacurves} should not be confused with the 2175\,\AA\,extinction bump. It is created by scattering, coincides with the maximum of $K_{\rm{sca}}$ (see Fig.~\ref{fig:ksca}) and is not affected by the UV absorption. Therefore the feature is unaffected by the optical depth of the clumps. Conversely, because of stronger absorption in the optical and UV, the scattered flux in the NIR domain is enhanced relative to the optical. Due to the clumpy structure of the discs, there may be unobstructed sight-lines to regions deep within the disc. Clumps at such locations can then scatter photons into the observer's line of sight, but before escaping may encounter further clumps. Since the clumps are optically thick at shorter wavelengths, they would preferentially absorb optical/UV photons, while the NIR photons have a significantly higher escape probability.
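As a minimal numerical illustration of this negative extinction (a Python sketch with assumed scattered-light fractions, not our model output):

\begin{verbatim}
# Sketch: negative tau_eff for an unobstructed star plus unresolved
# scattered light; the scattered fractions below are assumed
import numpy as np

F_star = 1.0
f_sca  = np.array([0.02, 0.05, 0.01])   # assumed scattered fractions
                                        # (UV, optical, NIR)
tau_eff = -np.log((F_star + f_sca * F_star) / F_star)
print(tau_eff)                          # all negative; ~ -f_sca when small
\end{verbatim}

For small scattered fractions $\taueff \approx -F_\mathrm{sca}/F_{*}$, so the wavelength dependence of the negative extinction directly traces the scattered spectrum.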
Since the strength and wavelength of the aforementioned scattering peak are functions of the size and chemical composition of the dust grains \citep[see e.g.][]{2009ApJ...696.1502H} and highly model dependent, it may be possible to infer the degree of clumpiness of a disc with sufficiently precise measurements of the integrated scattered flux at NIR, optical and UV wavelengths, and by comparing the shape of the scattered continuum to any features observed.

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[scale=0.5,clip=true,trim=1.5cm 0.6cm 4.4cm 17cm]{scacurves.pdf}} \caption{Effective extinction curves of face-on discs with an opening angle of 45$\degr$, computed for aCSi dust. The colours are again the same as in Fig. \ref{fig:SaCcurves}, with orange indicating $N_{\rm cl} = 1000$, and proceeding through yellow, green and blue for $N_{\rm cl} = 2000, 3000, 5000$ to magenta specifying $N_{\rm cl} = 10000$, with violet indicating a homogeneous disc. The feature at $\sim 4-5\mathrm{\mu m^{-1}}$ does not change, while the continuum scattering shows significant changes in the optical and UV.} \label{fig:scacurves} \end{figure}

If the relative enhancement of infrared scattering continues at wavelengths as long as $5\mu\mbox{m}$, then it may represent a source of contamination for observations of ``core-shine'' \citep{2010A&A...511A...9S}, which are used to infer the presence of large grains in molecular cloud cores. \citet{2010A&A...511A...9S} report excess scattered flux in \textit{Spitzer}/IRAC \citep{2004ApJS..154...10F} bands 1 and 2 (at 3.6 and 4.5$\mu\mbox{m}$, respectively) toward dense regions ($A_\mathrm{V} \geq 10$), while the longer wavelength bands show only absorption in the most dense parts of the clouds. In principle, it is possible that dense clouds consist of many small, dense clumps that are not resolved in the observations, and if so the effective scattering behaviour would be modified, resulting in the apparent increase in infrared scattering.

{ Similarly to Sect. \ref{sec:shell}, we can also explore extinction when the star is viewed through the disc. Different optical paths through the disc will have radically different covering fractions of clumps, with paths through the mid-plane fully covered and lower covering fractions when the disc is viewed at lower inclination angles. As expected, the extinction curve through an edge-on clumpy disc exhibits the same behaviour as that for a clumpy shell \citep[e.g.][]{2015arXiv150804343S}. This remains the same as long as the entire beam is within the disc (i.e. approximately when $i\geq 90\degr-\alpha$). However, for grazing and near-grazing inclinations, the behaviour of the extinction curve becomes chaotic, due to the complexity of the scattered radiation field. This effect is a major concern for studies of extinction towards e.g. AGN tori, where the extinction seen through the torus will bear little resemblance to the wavelength dependence of the dust properties. Our results specifically indicate that studies which infer large grains in the circumnuclear medium \citep[e.g.][]{2001A&A...365...28M,2014ApJ...792L...9L} have to consider the possibility of significant contamination from radiative transfer effects. }

\subsection{Extinction in a clumpy diffuse ISM\label{sec:ism}} The ISM is believed to be a highly turbulent, inhomogeneous medium with structure on all scales in both the dense and diffuse phases \citep[e.g.][]{1994ApJ...423..681V,1997ApJ...474..730P,1998PhRvE..58.4501P}.
It is therefore interesting to consider whether the effect of clumps on extinction described above in Sects. \ref{sec:shell} and \ref{sec:disc} also influences extinction in the diffuse galactic ISM. { The previous two subsections have examined scenarios in which star and dust are co-located relative to the observer, but to assess the influence of scattering on extinction in the diffuse galactic ISM, it is necessary to consider scenarios where the dust is distributed along the entire line of sight between the observer and the star. To explore this, we model the ISM as a cuboid viewed along its long axis. This cuboid is homogeneously filled with dust, such that the optical depth in the V band from the observer to the star ranges from 0.3 to 20. Since we reproduce typical ISM optical depths, the interaction probability for each radiation packet is small. As a result, we must run large numbers of packets ($\sim 10^{9}$) to achieve good statistics. To include the influence of back-scattered as well as forward-scattered photons, 5\% of the model volume is behind the star as seen from the observer. The extinguished star is assumed to be at a distance of $100\mbox{ pc}$; however, as this is a resolution-dependent effect, the model space can be uniformly rescaled to greater distances. The model cuboid is scaled so that the cross-section is 50\arcsec. We then solve the radiative transfer and generate images of both the scattered and emitted radiation by ray-tracing as outlined in Sect.~\ref{sec:MC}. From the images we extract a $5\arcsec\times5\arcsec$ aperture in order { to approximately match the diffraction-limited resolution} of IUE, which remains the major source of UV data for extinction. } Contrary to the clumpy screen models of \citet{2005ApJ...619..340F,2011A&A...533A.117F}, we include the effect of back-scattering by embedding the source within the dust column. Instead of a homogeneous density distribution, clumps are distributed randomly throughout the model space with fixed volume filling factor $f_{\mathrm{V}}\sim$1.5\% \citep[Eq. 27, assuming a two-phase ISM with the properties of the local ISM given therein]{2003ApJ...587..278W}, so that the free parameters are the total dust mass and clump number. The clump radii ${R_\mathrm{cl}}$ are calculated from the number and filling factor of clumps, such that all the clumps within each model are identical, i.e. \begin{equation} {R_\mathrm{cl}} = \left[\frac{3 \ f_{\mathrm{V}} \times x \times y \times z}{4 \pi \ N_\mathrm{cl} }\right]^{1/3} \end{equation} where $x,y,z$ are the dimensions of the model cuboid. As the clumps are randomly distributed, we vary the random seed to explore the parameter space created by the variations in the positions of the clumps.

\begin{table} \caption{Clumpy ISM model parameters} \label{tab:ISMpar} \centering \begin{normalsize} \begin{threeparttable} \begin{tabular*}{\hsize}{l l l l} \hline\hline \multicolumn{3}{c}{Parameter} & Values \\ \hline &\\ Dust mass & [$M_{\odot}$] & $M_\mathrm{d}$ & 0.0058, 0.007, 0.009, \\ && & 0.0115, 0.035, 0.058, \\ && & 0.07, 0.09, 0.115, \\ && & 0.35, 0.58 \\ Optical depth& (GraSi) & $\tau_\mathrm{V}$\tnote{a} & 0.16, 0.2, 0.25 \\ && & 0.3, 1.0, 1.6, \\ && & 1.9, 2.5, 3.2, \\ && & 9.7, 16 \\ Clump number && $N_\mathrm{cl}$ & 10, 50, 100, 500 \\ \hline \end{tabular*} \begin{tablenotes} \item [a] Optical depths (measured in the V band) of the homogeneous models of the respective dust masses.
\end{tablenotes} \end{threeparttable} \end{normalsize} \end{table}

We consider three different simple descriptions of the clumps in these geometries: \begin{enumerate} \item Spherical clumps (1-phase), i.e. $\rho\left(R\right) = \rho_{\mathrm{cl}}$ for $R\leq R_{\mathrm{cl}}$, 0 elsewhere; \item As above, but with the clumps embedded in a diffuse medium (2-phase), i.e. $\rho\left(R\right) = \rho_{\mathrm{cl}}$ for $R\leq R_{\mathrm{cl}}$, $10^{-4}\rho_{\mathrm{cl}}$ elsewhere; \item Pressure-constrained isothermal clumps with \begin{equation}\frac{\rho\left(R\right)}{\rho_{0}} = \frac{1}{1+\left(\frac{R}{R_\mathrm{cl}}\right)^2} \end{equation} to give a smoothly varying density distribution; \end{enumerate} examples of which can be seen in Figs. \ref{fig:1pclumps} and \ref{fig:pcclumps}.

\begin{figure}[t] \resizebox{\hsize}{!}{\includegraphics[clip=true,trim=12cm 8cm 5cm 8cm]{ism_volume_spheres_color_bar.jpg}} \caption{Example section from a 1-phase clumpy density distribution (spherical clumps of constant density). The 2-phase distributions appear identical, albeit with the intraclump space filled with a diffuse medium $10^{-4}$ times less dense than the clumps. This figure is constructed in a similar manner to Figs. \ref{fig:shellclumps} and \ref{fig:discclumps}, but the source is no longer within this section of the model due to the extreme length of the cuboid. In the ISM models, the source is not placed at the centre, but at a point 95\% along the length of the model cuboid. This section is the one closest to the observer.} \label{fig:1pclumps} \end{figure}

\begin{figure}[!t] \resizebox{\hsize}{!}{\includegraphics[clip=true,trim=7cm 16cm 0cm 12cm]{ism_volume_smooth_color_bar.jpg}} \caption{As in Fig. \ref{fig:1pclumps} but showing an example section from a pressure-constrained clump density distribution.} \label{fig:pcclumps} \end{figure}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[scale=0.5,clip=true,trim=1.3cm 0.5cm 4.5cm 5cm]{clumpyismRV5.pdf}} \caption{The range of $R_{\mathrm{V,eff}}$ produced by clumpy ISM models. Our models are indicated by filled blue circles. These are compared with the observed values of $R_{\mathrm{V}}$ reported in \citetalias{2007ApJ...663..320F} (gray full circles), taking $\tau_{\rm V,eff} = A_{\rm V} / 1.086$. The thick black dashed line indicates the $R_\mathrm{V}$ of the dust cross-sections. Except for a small fraction of outliers, which increases toward large optical depth, clumpiness has little effect on typical interstellar extinction curves. } \label{fig:RVrange} \end{figure}

{ The results from the clumpy ISM models can be seen in Fig. \ref{fig:RVrange}. With the exception of a small fraction of outliers\footnote{Approximately consistent with the expected number of cases where a clump is close to the star.}, the effect of clumpiness on extinction is negligible except at high optical depth. This suggests that on lines of sight that avoid the galactic centre, the effect of scattering can be neglected, and extinction can reliably be used as a probe of the properties of interstellar dust, as expected from \citet{1983ApJ...270..169P}. The fact that the OB stars typically used to measure extinction tend to clear a large volume (several pc) of interstellar matter surrounding them through wind and radiation pressure further reduces the probability that interstellar extinction is significantly modified by scattering on distance scales of a few kpc. }

\section{Discussion} From Sect.
\ref{sec:shell} it is clear that if the dust is concentrated in optically thick clumps, the extinction curve is artificially flattened. This has been previously highlighted by \citet{1984ApJ...287..228N}; however, they did not attempt to derive a relationship between the flattening of extinction and the clump properties. Figure \ref{fig:taurv} demonstrates the relation between the V-band optical depth of the clumps and $R_{\mathrm{V,eff}}$, using the results from Sect. \ref{sec:shell} for both dust models{ , including all curves shown in Figs.~\ref{fig:SaCcurves}~\&~\ref{fig:GraScurves}}. While it is clear that both dust models follow similar trends, they appear to form two separate sequences. { The separation between these sequences is typically a few tens of percent.}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[scale=0.5,clip=true,trim=1.5cm .5cm 4.5cm 16.5cm]{tauclrv.pdf}} \caption{Evolution of the effective extinction curve in clumpy shells with the optical depth of the clumps for both dust models. While the two models show generally the same behaviour, there remains an offset ($\sim$ 20\%) between the two.} \label{fig:taurv} \end{figure}

Since the optical depth at an arbitrary wavelength is not directly related to the extinction curve, we wish to transform this to a quantity that is. As the changes in the shape of the extinction curve result from wavelength-dependent optical depth effects, we introduce the quantity $\lambda_\mathrm{crit}$, which is defined as the wavelength for which $\tau_\mathrm{cl}\left(\lambda_\mathrm{crit}\right)=1$. When the change in $R_\mathrm{V}$ is plotted against this critical wavelength (Fig. \ref{fig:lamrv}), the two dust models overlap. The gradient is rather shallow for $\lambda_\mathrm{crit}\leq 600\,\mbox{nm}$ but steepens dramatically beyond this. As $R_{\mathrm{V,eff}}$ is related to the B- and V-band optical depths, it stands to reason that clumps that are optically thin or only marginally optically thick at these wavelengths would only weakly affect the shape of the extinction curve, while clumps that are optically thick at even longer wavelengths will have a much stronger effect.

\begin{figure} \resizebox{\hsize}{!}{\includegraphics[scale=0.5,clip=true,trim=1.5cm .5cm 4.5cm 16.5cm]{lamcrrv.pdf}} \caption{As in Fig. \ref{fig:taurv}, but now comparing $R_\mathrm{V}$ to $\lambda_\mathrm{crit}$, the wavelength at which $\tau_\mathrm{cl} = 1$. The two previously disparate curves are now reconciled.} \label{fig:lamrv} \end{figure}

If the true $R_\mathrm{V}$ of the dust can be determined independently of the extinction measurements, then it is in principle possible to use the relation between $R_{\mathrm{V,eff}}$ and $\lambda_\mathrm{crit}$ to infer the structure of the medium (a minimal numerical sketch of extracting $\lambda_\mathrm{crit}$ is given below). { It is important to note that all the effects described in this paper will become more significant as the distance between the object and the observer increases, as structure will be ever more poorly resolved. Therefore, extragalactic observations are particularly susceptible, and even more so at high redshift. This means that studies that use AGN \citep[e.g.][]{2001A&A...365...28M} or, in particular, GRBs \citep{2011A&A...532A.143Z} to probe dust properties must be especially careful to consider the role of radiative transfer effects.
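Returning to the relation between $R_{\mathrm{V,eff}}$ and $\lambda_\mathrm{crit}$, a minimal Python sketch of extracting the critical wavelength from a tabulated clump optical depth (illustrative values, not our model output):

\begin{verbatim}
# Sketch: lambda_crit where tau_cl(lambda) = 1, by interpolation
# (the tabulated optical depths below are assumed for illustration)
import numpy as np

lam    = np.array([300., 450., 550., 800., 1200.])  # wavelength [nm]
tau_cl = np.array([3.1, 1.9, 1.4, 0.8, 0.4])        # assumed tau_cl(lam)

# -log(tau_cl) increases with lambda here, so interpolate to tau_cl = 1
lam_crit = np.interp(0.0, -np.log(tau_cl), lam)
print(lam_crit)   # ~700 nm; clumps are optically thin longward of this
\end{verbatim}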
\section{Summary} Clumpy media and the collection of scattered light can fundamentally alter the observed extinction curve, which can significantly hinder the accurate interpretation of observations{, in particular when the scattering medium is close to the extinguished object}. In clumpy media, the changes {in the shape of the extinction curve} are related not to the optical depth of the clumps but rather to the critical wavelength for which $\tau_\mathrm{cl}\left(\lambda_\mathrm{crit}\right)=1$. If the true $R_\mathrm{V}$ is known, it is in principle possible to infer the structure of the medium from this relationship. { Furthermore, there is a shift in the wavelength of the peak of the 2175\,\AA\ feature towards shorter wavelengths.} Similarly, the observed scattering behaviour of dust can be markedly different if the scattering medium is clumpy rather than homogeneous. Optically thicker clumps lead to a suppression of the optical and UV scattered flux in the continuum, while scattering features are unaffected, potentially providing a means by which to constrain the structure of a scattering medium. We have shown that the collection of scattered photons represents a major challenge to measurements of extinction towards embedded objects{, particularly in other galaxies, e.g. for AGN or GRBs, where large-scale structure is unresolved}. As a result, there is not necessarily a 1:1 link between the extinction curve and the wavelength dependence of dust cross-sections. { However, the effect on observations of diffuse galactic extinction is negligible.} \begin{acknowledgements} { We thank the anonymous referee and the editor whose comments helped improve this manuscript. We thank Endrik Kr\"{u}gel for helpful discussions and for providing the original version of his MC code, and Frank Heymann for discussions on the implementation of perspective-projection ray tracing. We are grateful to Sebastian Wolf for discussions, comments and suggestions which improved the content of this manuscript. PS is supported under DFG programme no. WO 857/10-1. } \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec:intro} Optical interference coatings with as low as possible optical and mechanical losses are in demand for high-precision optical measurement applications such as atomic clocks and interferometric gravitational-wave detectors~\cite{Harry:2011book}. In gravitational-wave observatories such as Advanced LIGO~\cite{TheLIGOScientific:2014jea} and Advanced Virgo~\cite{Acernese_2014}, optical and mechanical losses of the coatings must be very low in order to not degrade the detector sensitivity. Scatter from gravitational-wave detector optics increases the quantum-noise limit to sensitivity, leads to stray light that causes nonlinear noise by coupling back into the main beam after scattering off moving elements~\cite{Flanagan_1994, Accadia_2010}, and degrades squeezed states of light~\cite{Kwee:2014vba}. Optical absorption drives thermal effects, such as thermal lensing, within the gravitational-wave detector optics which can degrade detector sensitivity and performance~\cite{Wang_2017}. Mechanical loss determines the off-resonant Brownian motion of optical coatings (also known as coating thermal noise)~\cite{Harry:2011book}, which is a limiting noise source in gravitational-wave detectors~\cite{PhysRevD.102.062003}. Achieving the low optical and mechanical loss of coatings for gravitational-wave detectors is accomplished by selecting materials with excellent optical and mechanical properties, using ion-beam sputtering deposition with closely controlled deposition temperature and energy, ensuring cleanliness and purity, and through post-deposition annealing. Currently, Advanced LIGO and Advanced Virgo use coatings formed by TiO$_2$-doped Ta$_2$O$_5$ (high index) and SiO$_2$ (low index) ion-beam sputtered layers produced at Laboratoire des Matériaux Avancés. Post-deposition annealing of such coatings to 600$^{\circ}$C, in air, has been shown to reduce their scatter~\cite{Sayah:21,Capote:21}, absorption~\cite{Fazio:20}, and mechanical loss~\cite{Granata:2019fye}, with higher temperatures (in general) giving better results. The titania dopant is added to further decrease the mechanical loss and the absorption of the Ta$_2$O$_5$ layers~\cite{Granata:2019fye}. We note that other dopants such as zirconia can be added to frustrate crystallization in tantala~\cite{Abernathy_2021} and titania-doped-tantala~\cite{doi:10.1116/6.0001074}, though the samples used here do not include zirconia. There is currently heavy research into identifying coatings for future detectors that will have even lower coating thermal noise~\cite{2018RSPTA.37670282S} and as good optical properties as the current coatings. Post-deposition annealing of materials such as TiO$_2$-doped GeO$_2$ with SiO$_2$ is one path that has already shown promise~\cite{PhysRevLett.127.071101}. The practical limits to the maximum annealing temperatures achievable for ion-beam-sputtered amorphous thin-film coatings are determined by the onset of crystallization or damage mechanisms such as delamination, blisters, and cracks. The presence of such damage is often observed using optical methods such as visual inspection, imaging with a scatterometer or a microscope, and x-ray diffraction for crystallization. Typically, inspection for damage associated with annealing is performed before and after a given annealing regimen, providing incomplete information about the conditions that lead to the onset and growth of damage. 
Previous work by our group introduced an in-situ method to observe optical scattering from coatings while they are annealed in vacuum and showed that scattered light from TiO$_2$:Ta$_2$O$_5$ decreases during annealing to 500$^{\circ}$C in vacuum~\cite{Capote:21}. Thus far, achieving much higher temperatures with this setup has proven challenging. Furthermore, air annealing is more commonly used in the gravitational-wave optics community than vacuum annealing and has the advantage of demonstrated improvements in mechanical and absorption losses. Here, in Section~\ref{sec:setup}, we describe a new instrument that was developed to meet the goal of imaging scattered light from the coatings while they are being annealed in air to temperatures of 800$^{\circ}$C, or higher. In Section~\ref{sec:results} we show that this instrument is capable of imaging the onset and growth of crystals in samples with single-layer coatings of TiO$_2$:Ta$_2$O$_5$ (described in Section~\ref{sec:samples}) and the onset and growth of blisters in TiO$_2$:GeO$_2$/SiO$_2$. In Section~\ref{sec:conclusion}, we discuss ways that this instrument will provide deeper insight into crystallization and other coating damage mechanisms by measuring their onset and evolution versus temperature. \section{Experimental Setup and Procedure} \label{sec:setup} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{images/TheSetupV4.png} \caption{Setup of the Air Annealing Scatterometer. Labeled components are described in the text. \textit{Left:} setup exterior to the oven. \textit{Middle:} setup inside the oven used for the first four samples. \textit{Right:} setup inside the oven for the last sample, where the stainless steel pedestal was replaced by a fused quartz crucible to decrease thermal expansion. The thermocouple is located just to the right of the viewport shown, on the ceramic interior surface. \label{fig:setup}} \end{figure} The setup of the Air Annealing Scatterometer (AAS) is shown in Figure~\ref{fig:setup}. The basic operation is as follows. A sample coated optic is mounted within the oven, then illuminated by an external light source and imaged at regular intervals (once per minute here) using a CCD camera while an annealing temperature profile is carried out. This system was designed to meet the stated goal of imaging coating scatter in-situ to 800$^{\circ}$C, or higher. The components that were required to meet that goal and their interplay are described below. The setup requires a programmable oven (\textbf{a}) capable of reaching 800$^{\circ}$C and of accurately carrying out heating profiles with multiple ``ramp" (increase/decrease temperature) and ``dwell" (maintain constant temperature) segments. We chose the industrial annealing oven ST-1500C-121012 by SentroTech, which uses molybdenum disilicide (MoSi$_2$) heating elements and thick ceramic insulation to reach, for an unmodified oven, 1500$^{\circ}$C. We worked with SentroTech's engineering team to add observation and instrument ports (\textbf{b}) to both the front (the side with the dark orange door) and rear of the oven. These ports use conflat (CF) flanges to allow the use of heat-tolerant gaskets and the option to connect various commercially available components, such as viewports and flanges. Because of these holes through the outer walls and insulation, this modified oven's maximum temperature will be reduced to 900--1100$^{\circ}$C.
We also added an air circulation fan option to the interior of the oven to ensure temperature consistency. The temperature is read by an S-type thermocouple that communicates with the oven's controller. The thermocouple is located between the two upper ports sticking out from the ceramic into the interior of the oven (nearby and to the right of the sample holder shown in Figure~\ref{fig:setup}). The oven's controller (Nanodac from Eurotherm) uses proportional–integral–derivative (PID) control and provides an interface for creating heating profiles with up to 30 (ramp, dwell, or target temperature) segments using the software package iTools. A coated sample optic (\textbf{c}) is mounted within the oven on a solid stainless steel pedestal (\textbf{d}) (later replaced by a fused quartz crucible) so that it can be both illuminated and imaged through a viewport on the back of the oven. To most closely match the use case of coated optics in interferometric gravitational-wave detectors, the setup requires monitoring scattered light from samples illuminated at normal incidence by a light source similar in wavelength to that used by LIGO and Virgo (1064\,nm). To avoid time-dependent speckle effects associated with coherent light~\cite{Bhandari:11,Kontos:21,Capote:21}, thus better allowing association of small changes in scatter with physical changes in the coatings, we use a 1050\,nm superluminescent diode (SLD) (\textbf{e}, Thorlabs S5FC1050P, with 50\,nm bandwidth and coherence length $L_c=\lambda^2/\Delta\lambda\approx 20\,\mu$m). To monitor fluctuations in the incident power, a few percent of the SLD's output is picked off by a beam sampler (\textbf{f}, Thorlabs BSF10-C) and recorded by a calibrated power meter (\textbf{g}, Thorlabs PM100D). The transmitted light is (optionally) measured by a second power meter after passing the viewport on the front door of the oven. The setup is thus capable of measuring in-situ transmittivity and could be modified to record in-situ reflectivity if desired. A low-noise and high-resolution camera is required to image the light scattered from the coated optic and identify defects and damage mechanisms such as point scatterers, blisters, and crystals. The AAS uses a cooled 4096x4096-pixel astronomical CCD camera (\textbf{h}, Apogee Alta F16M) with programmable capture, adjustable exposure times, and high linearity over a large illumination range. An image of the sample's coated surface at a scattering angle (defined as the angle between the sample's normal and the measured scattered light) of $\theta_s=8^{\circ}$ is cast on the CCD chip, with 2X magnification ($M=2.02\pm0.03$), using a single ($f$=200\,mm) converging lens (\textbf{i}) and an adjustable iris (\textbf{j}). As the CCD sensor size is a 3.68\,cm $\times$ 3.68\,cm square, this magnification gives a field of view (at the object plane) of height $1.825\pm0.025$\,cm and width $\cos{\theta_s}\approx 0.99$ times that. The SLD beams, camera, and imaging optics are all at the same height (i.e., in the plane of the SLD beam). A narrow-band filter (Edmunds, 1050\,nm/50\,nm) is installed at the front of the lens tube to keep thermal radiation from the oven's heaters and room light from entering the camera, while allowing the SLD wavelengths to pass. To further limit the effects of thermal radiation and to account for ``hot" pixels in the camera, for each ``bright" image that is taken with the SLD illumination on, a ``dark" image is also taken with the SLD off, which can be subtracted from the bright image during analysis.
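As a quick check of the optical parameters quoted above (our arithmetic, using only values stated in the text):
\begin{verbatim}
# Sketch: reproduce the quoted coherence length and field of view
# from numbers stated in the text.
import math

lam, dlam = 1050e-9, 50e-9                     # SLD wavelength and bandwidth [m]
print(lam**2 / dlam * 1e6, "um")               # L_c ~ 22 um, i.e. roughly 20 um

sensor, M, theta_s = 3.68e-2, 2.02, 8.0        # CCD side [m], magnification, deg
fov_h = sensor / M                             # object-plane field-of-view height
fov_w = fov_h * math.cos(math.radians(theta_s))
print(fov_h * 100, "cm x", fov_w * 100, "cm")  # ~1.82 cm x ~1.80 cm
\end{verbatim}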
Images are recorded using the Flexible Image Transport System (FITS) format, which incorporates metadata such as time stamp, exposure time, camera temperature, and saturation levels. A LabView Virtual Instrument (VI) is used to automatically control and acquire data from the SLD, oven, camera, and power monitors. A typical experiment goes as follows. A sample is installed and the focus of the imaging optics is checked. The desired heating profile, with a typical duration of 1--2 days, is created using iTools and loaded to the oven's controller. The VI is configured with the desired camera exposure time and imaging cadence. The VI is started and it executes the following sequence (where times indicate the duration spent in each state): i) read oven set point, heater power, and thermocouple temperature ($<$1\,s); ii) turn SLD on (1.5\,s); iii) read incident and transmitted power monitors ($<$1\,s); iv) bright image exposure (5\,s); v) transfer bright image (20\,s); vi) turn SLD off (1.5\,s); vii) dark image exposure (5\,s); viii) transfer dark image (20\,s); ix) wait (roughly 9\,s) until the total elapsed time of the entire sequence reaches 60\,s, then repeat. In this way, one bright image and one dark image, along with one data point each for incident and transmitted laser power and oven temperature, are collected per minute. The images and data are written to disk on the PC running LabView and backed up for analysis. The setup is located on a passive seismic isolation optical bench, within a laminar-flow softwall cleanroom, and the room is kept closed and dark during measurement. A PC in an adjacent room can be used to view the images in real time as they are collected. Scattered light is commonly quantified using the Bidirectional Reflectance Distribution Function (BRDF)~\cite{Stover:2012book}. \begin{equation} BRDF = \frac{dP_s/d\Omega_s }{P_i \cos \theta_s} \cong \frac{P_s/\Omega_s }{P_i \cos \theta_s}, \end{equation} where P$_i$ is the incident laser power and P$_s$ is the scattered light power measured at polar angle $\theta_s$ by the imaging system, which subtends a solid angle $\Omega_s$. Analysis of the AAS data to obtain BRDF is accomplished using a custom-written Matlab script, which proceeds as follows for each data point (i.e., each set of bright image, dark image, temperature and power readings for a given time). An elliptically shaped region of interest is defined, by inspecting the bright image, to enclose the coating area that has the beam spot and thus significant scattered light. The dark image is subtracted from its corresponding bright image. The counts of all pixels within the region of interest in the subtracted image are summed, normalized by the exposure time and the incident power, and multiplied by a calibration factor (previously determined by comparing the scattered light from a diffuse reference sample measured by a calibrated power meter and by the CCD camera) to give BRDF~\cite{Magana-Sandoval:12}. To better estimate the optical scatter enclosed by the region of interest, five larger concentric regions of interest are defined around the first. Their counts are also summed and normalized, and then all six values are fitted with a line as a function of pixel area. The resulting y-intercept is taken as the true enclosed BRDF without any additional diffuse light~\cite{Capote:21}.
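A condensed sketch of this procedure follows (ours, in Python rather than the authors' Matlab; the function name, ROI geometry, scale factors, and calibration value are placeholders):
\begin{verbatim}
# Sketch of the BRDF extraction described above (a Python rendering of
# the authors' Matlab procedure; cal_factor and ROI geometry are
# placeholders, not calibrated values).
import numpy as np

def enclosed_brdf(bright, dark, exposure_s, p_incident_w,
                  centre, axes, cal_factor=1.0):
    """Fit enclosed counts vs. ROI pixel area; return the y-intercept BRDF."""
    img = bright.astype(float) - dark.astype(float)   # dark-frame subtraction
    yy, xx = np.indices(img.shape)
    sums, areas = [], []
    for scale in (1.0, 1.2, 1.4, 1.6, 1.8, 2.0):      # first + 5 larger ellipses
        a, b = axes[0] * scale, axes[1] * scale
        roi = ((xx - centre[0]) / a) ** 2 + ((yy - centre[1]) / b) ** 2 <= 1.0
        sums.append(img[roi].sum() / (exposure_s * p_incident_w))
        areas.append(roi.sum())
    slope, intercept = np.polyfit(areas, sums, 1)      # line vs. pixel area
    return cal_factor * intercept                      # diffuse-light-free BRDF
\end{verbatim}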
While commissioning the AAS instrument, we identified a second illumination and measurement channel that provides complementary information. At elevated temperatures, thermal radiation from the heating elements provides an alternative side illumination of the coatings that is particularly useful for viewing blisters and delamination, especially in the ``dark" images with the SLD off. While this was not the primary aim of this instrument, one result demonstrating this capability is presented in the next section. We note that since thermal radiation is so strongly temperature dependent, a more constant dedicated side illumination from an SLD could be added to achieve good damage visibility for the full duration of the experiments. \section{Samples} \label{sec:samples} \begin{figure} \centering \includegraphics[width=\linewidth]{images/samples.png} \caption{Left: Sample PL003, post-annealing, shown as an example. A single quarter-wavelength-thick (for 1064\,nm; 126\,nm physical thickness) layer of TiO$_2$:Ta$_2$O$_5$ (Ti/Ta=0.27) is coated (by Laboratoire des Matériaux Avancés) on a superpolished ($\sigma < 0.1$ nm) Corning 7979 fused silica substrate (from Coastline Optics). Right: Diagram (not to scale) of the single-layer coating and substrate (standard 1-inch optic with 25.4\,mm diameter and 6.35\,mm thickness). Five nominally identical samples were used in this study. \label{fig:samples}} \end{figure} Five nominally identical samples, see Figure~\ref{fig:samples}, were used in this study. The substrates were Corning 7979 fused silica, superpolished by Coastline Optics to $\sigma < 0.1$ nm RMS surface roughness (as measured by Coastline's Zygo 5500 optical profiler), with a 10-5 scratch-dig in the central 80\% of their face surface. The optic barrels were given a standard polish to prevent them from strongly scattering stray light and thus glowing brightly in the images. The edges were chamfered to avoid accidental chipping. Coastline produced 20 such samples with serial numbers 7979FSPL001 through 7979FSPL020. Samples (dropping the prefix) PL001, PL003, PL005, PL006, and PL007 were used in this study. All samples were coated in a single run by Laboratoire des Matériaux Avancés with a single ion-beam-sputtered layer of TiO$_2$:Ta$_2$O$_5$ (dopant level Ti/Ta=0.27) with a quarter-wavelength thickness (126\,nm physical thickness, assuming $n=2.11$) for 1064\,nm light. Thus these layers are produced by the same vendor and use the same titania-doped-tantala material as is used for the high-index-of-refraction layers in the current LIGO and Virgo optical coatings~\cite{Granata:2019fye}. \section{Results} \label{sec:results} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{images/temp-aas-oic.pdf} \includegraphics[width=0.8\linewidth]{images/brdf-aas-oic.pdf} \caption{Top: The temperature profiles, with a 1$^{\circ}$C/min ramp rate and variable soak duration, for each sample shown over 36 hours. Cooling rates below 100$^{\circ}$C do not follow the desired rate because there is no fast-cooling fan, so cooling relies entirely on radiation. Small spikes in the temperature are due to automation or communication errors. Bottom: BRDF for each sample over the same timescale. Samples PL001 (700$^{\circ}$C), PL003 (750$^{\circ}$C), and PL007 (700$^{\circ}$C) show crystallization. Samples PL006 (600$^{\circ}$C) and PL005 (500$^{\circ}$C) show an overall decrease in scatter. The large variations seen at high temperatures are believed to be caused by fluctuations in the thermal radiation of the heaters, and we are working to eliminate this issue.
\label{fig:all-results}} \end{figure} The top panel of Figure~\ref{fig:all-results} shows the PID-controlled temperature, measured by the S-type thermocouple mounted inside the oven, versus elapsed time for all five samples. The bottom panel shows the measured BRDF of the coating surface scatter of each sample at $\theta_s=8^{\circ}$ versus the same elapsed time. Sample PL003 was ramped to 750$^{\circ}$C at a rate of 1.5$^{\circ}$C/minute with a 10-minute soak. This sample crystallized and is described in more detail below. All other samples were ramped at a rate of 1.0$^{\circ}$C/minute. Sample PL001 was ramped to 700$^{\circ}$C with a 10-minute soak and experienced only partial crystallization. Samples PL005 and PL006 were ramped to 500$^{\circ}$C and 600$^{\circ}$C, respectively, each with a 10-hour soak. Neither of these samples showed any signs of crystallization in the BRDF or the images. Instead, they both exhibited a decrease in scatter from the start to the end of annealing, in agreement with previous work~\cite{Capote:21, Sayah:21}. Sample PL007 was ramped to 700$^{\circ}$C with a 10-hour soak and strongly crystallized, as described below. Runs with PL001, PL003, PL005, and PL006 used a stainless steel pedestal to support the optic holder at the height of the viewport. The thermal expansion of this pedestal caused the optic to translate up and down by more than 1\,mm during those runs. Since the beam and the imaging optics were fixed, this motion caused any bright point scatterers to translate through the beam intensity profile, causing shifts in the BRDF seen as the large bumps during the ramp up and ramp down for those traces. For the PL007 run, a fused silica crucible (Advaluetech FQ-2500) was used as the pedestal, greatly reducing the thermal expansion and thus essentially eliminating any translation of the sample during heating. For this sample, the copper gasket for the CF-flange used for the incident light and imaging was removed, which also reduced some stray-light artifacts. The BRDF for all samples shows a large ``noisy" variation at higher temperatures, above 400$^{\circ}$C, with some data points not shown on the logarithmic scale as they were negative (meaning that the region of interest in the ``bright" image encloses less light than in the ``dark" image). This issue was associated with ``flashing" observed in the images on the timescale of tens of seconds, which causes successive bright and dark images to have differing amounts of background light. The flashing is due to varying thermal radiation from the heater elements, driven by the fact that thermal irradiance varies as $T^4$ and that the heating elements experience much larger temperature variations than the air and other components in the oven. Following the measurements presented here, several experiments have been conducted to learn more about this issue. The heaters were observed with a secondary camera at a higher video frame rate, confirming the flashing, and changes were made to the PID parameters in an attempt to change the rate of heater switching. The issue has not yet been solved. As a workaround, we have machined stainless steel radiation blocks to surround the optic, which should passively low-pass filter the thermal radiation. \subsection{TiO$_2$:Ta$_2$O$_5$-coated sample PL003} Figure~\ref{fig:pl003-results} shows the results of annealing sample optic PL003. This simple heating profile was chosen to achieve crystallization in the coating layer based on previous studies~\cite{Fazio:20}.
The bottom left panel shows the time evolution of the oven's temperature and the optic's bidirectional reflectance distribution function (BRDF). The BRDF exhibits a first bright peak at low temperature, due to point scatterers becoming very bright as the sample moves up (from thermal expansion of the tall stainless steel holder) and the beam sweeps over them. At mid-temperatures the aforementioned noisy measurements of BRDF are evident, due to both variations in the heater element thermal radiation and the sample translating within the beam pattern. Starting around 735$^{\circ}$C there is a permanent, factor-of-ten increase in BRDF, which has both the onset temperature and the ``frosted" appearance expected for crystallization (and verified elsewhere with, e.g., x-ray diffraction~\cite{Fazio:20}). Thus, we infer this behavior to be related to crystallization. The images along the top row show increasing scattering with time as the crystals grow, with the final image resembling the incident beam pattern shown in the bottom right. For this run, the beam exhibited a strong circular diffraction pattern that was caused by a small iris (to reduce this, the iris was opened more widely for other samples). The before and after visible-light images show that the coating has become slightly cloudy. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{images/pl003-composite-small} \caption{Results of annealing sample PL003, a single quarter-wavelength layer of TiO$_2$:Ta$_2$O$_5$ on a superpolished fused silica substrate. \textit{Bottom, left to right:} Measured oven temperature (orange) and BRDF (blue) of the sample, both versus elapsed time; sample before annealing; sample after annealing, showing the coating is slightly milky and the stainless steel holder changed color; intensity profile of the incident laser beam at the location of the sample, showing a beam diameter of 4\,mm, and for this run, a strongly diffracted beam. The top row shows cropped images of a (7\,mm-wide) region of the sample illuminated by the SLD. \textit{Top, left to right:} At 734.8$^{\circ}$C some point scatterers are visible, but no crystallization is seen; at 736.5$^{\circ}$C weak diffuse scattering from the onset of crystallization is seen; at 746.6$^{\circ}$C the beam intensity profile is scattered quite uniformly by the crystallized coating; at 724.8$^{\circ}$C on the ramp down, the same pattern is seen more brightly. \label{fig:pl003-results}} \end{figure} \subsection{TiO$_2$:Ta$_2$O$_5$-coated sample PL007} Figure~\ref{fig:pl007-results} shows a similar composite image for sample PL007. For this sample, the two improvements mentioned above were both in place. The sample was held by a small stainless steel holder on a tall fused silica crucible, so the vertical translation due to thermal expansion was negligible. The iris used to pass the beam was enlarged so the beam incident on the sample was more uniform and Gaussian. The sample was ramped at 1$^{\circ}$C/minute and soaked at 700$^{\circ}$C for 10 hours. After the sample reaches its soak temperature at 11:20 elapsed time, a clear and gradually increasing crystallization is seen in the images and the BRDF, lasting until 13:00 elapsed time, at which point no further increase in BRDF is seen. The visible-light ``after" image of the coating is again milky.
\begin{figure} \centering \includegraphics[width=1.0\linewidth, angle=0]{images/pl007-composite-small} \caption{Results of annealing sample PL007, a single quarter-wavelength layer of TiO$_2$:Ta$_2$O$_5$ on a superpolished fused silica substrate. \textit{Bottom, left to right:} Measured oven temperature (orange) and BRDF (blue) of the sample, both versus elapsed time; sample before annealing; sample after annealing, showing the coating is slightly milky; intensity profile of the incident laser beam at the location of the sample, showing a beam diameter of 7\,mm, and for this run, a more uniform Gaussian beam. The top row shows cropped images of a (12\,mm-wide) region of the sample illuminated by the SLD. The 700$^{\circ}$C soak starts at image 543 and elapsed time 11 hours, 21 minutes. \textit{Top, left to right:} Image 572, 37 minutes into the soak, shows several bright point scatterers, but no sign of crystallization; image 581, 48 minutes into the soak, shows the first weak signs of crystallization; image 590, one hour into the soak, shows clear crystallization; in image 607, 81 minutes into the soak, the beam intensity profile is scattered quite uniformly by the crystallized coating. \label{fig:pl007-results}} \end{figure} \subsection{Side illumination example: TiO$_2$:GeO$_2$-coated sample 210811a} As described above, the AAS apparatus was found to have a second possible illumination and measurement channel in addition to the SLD front illumination. At elevated temperatures, thermal radiation from the heating elements provides a bright source of side illumination of the coatings. This light acts similarly to the light in a back- or side-illumination microscope and has proven particularly useful for viewing blisters and coating delamination, especially in the ``dark" images that are taken with the SLD off. Figure~\ref{fig:210811a-results} shows a series of such dark images, for a test optic, from coating run 210811a, a 52-layer quarter-wavelength stack of TiO$_2$:GeO$_2$ and SiO$_2$ coated by Carmen Menoni's group at Colorado State University on a polished fused silica substrate~\cite{Davenport:22}. The ramp rate is 1$^{\circ}$C/minute and the soak is 10 hours long. The images show the nucleation and growth of blisters, with one blister in the bottom right corner of the last image uncapping and ``popping off'' the coating. The onset of blistering, in particular, is much less visible, if visible at all, in the SLD-illuminated images. This coating performance is not indicative of the performance of titania-doped germania and silica coatings, as this combination of rougher polish and many layers was known to lead to blisters. Such results provide insight into blister growth and delamination mechanisms, such as stress or outgassing, through the size and growth rate of blisters versus temperature. Such results will be published separately~\cite{Lalande}. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{images/210811a-composite-small} \caption{Results of annealing sample 210811a, a 52-layer quarter-wavelength stack of TiO$_2$:GeO$_2$ and SiO$_2$ on a polished fused silica substrate. At 6 hours elapsed and 400$^{\circ}$C there are no blisters visible and the thermal radiation is visible but weak. At 500$^{\circ}$C small blisters have formed. At 530$^{\circ}$C the original blisters have grown, and new blisters have formed and grown and begun to run into each other. At 550$^{\circ}$C the majority of the imaged surface is taken up by blisters.
\label{fig:210811a-results}} \end{figure} \section{Conclusion} \label{sec:conclusion} We have described the Air Annealing Scatterometer and shown that it is capable of observing coating optical scatter, crystallization, and blisters/delamination throughout in-air annealing to 750$^{\circ}$C. The images and BRDFs presented here for TiO$_2$:Ta$_2$O$_5$ single-layer coatings with normal SLD illumination reveal the onset, growth, and saturation (for given temperature profiles) of coating crystallization. Because the data are produced in situ and in real time, and include images and thus maps of crystal formation over the coating, this method has advantages over other crystallization measurement methods such as x-ray diffraction. Exemplary results that used an alternative light source, side illumination from the oven's heaters, were also presented, showing the formation and evolution of blisters and delamination in a TiO$_2$:GeO$_2$/SiO$_2$ multilayer coating. These results will be explored further in future work, and they suggest a possible upgrade to the AAS: dedicated side illumination. To our knowledge, the capabilities and type of results presented here have not been previously demonstrated in the literature. Future steps will involve using the AAS to observe heat-induced damage mechanisms in candidate coating materials for future gravitational-wave detectors, especially those for which the achievable annealing temperature appears to limit their performance. By providing onset and evolution data versus temperature, this apparatus will allow for deeper study of the damage mechanisms at play in coatings during high-temperature annealing. Such studies should lead to improvements in the coating manufacture process for low-optical-loss applications and thus improvements to future interferometric gravitational-wave detectors and atomic clocks. Two main issues were identified with the setup in the course of this study. The first was thermal expansion of the stainless steel optic pedestal. This was solved by replacing the majority of the steel, except for a small optic holder, with a fused silica crucible. Results using this crucible showed negligible vertical translation and a BRDF trend without apparent translation artifacts. The second was a large variation in the BRDF caused by thermal-radiation flashing of the heaters on timescales of tens of seconds. This has not been solved yet; it could be addressed with radiation shields or further PID tuning. \begin{backmatter} \bmsection{Funding} Content in the funding section will be generated entirely from details submitted to Prism. \bmsection{Acknowledgments} Portions of this work were presented at the Optical Interference Coatings Conference in 2022, paper number ThB.3, entitled ``Imaging Scatterometer for Observing Changes to Optical Coatings During Air Annealing''~\cite{Rezac:22}. The authors thank the LIGO Scientific Collaboration Optics Working Group, especially Carmen Menoni (Colorado State), François Schiettekatte (Montreal), and Rana Adhikari (Caltech) for helpful discussions regarding this work. This work and the authors were supported by NSF grants PHY-2207998, PHY-1807069, AST-2219109, and AST-1559694, and by the Dan Black Family Trust and Nancy and Lee Begovich. AG was supported in part by Nancy Goodhue-McWilliams. \bmsection{Disclosures} The authors declare no conflicts of interest.
\bmsection{Data availability} Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. \end{backmatter}
\section{Introduction} Let $ K$ be a convex body in $ \mathbb{R}^n $ with $ C^2 $ boundary $ \partial K$ and everywhere positive Gaussian curvature $\kappa$. First, in \cite{gruber1993asymptotic} it was shown that \begin{align*} &\lim_{N\to\infty}\frac{\min\{\text{vol}_n\left(P \setminus K\right)\;|\textrm{$ K \subset P $ and $P$ is a polytope with at most $N$ facets}\}}{N^{-\frac 2{n-1} }} \\&= \frac{1}{2}\text{div}_{n-1}\left(\int_{\partial K} \kappa\left(x\right)^{\frac{1}{n+1}}d\mu_{\partial K}\left(x\right)\right)^{\frac{n+1}{n-1}}, \end{align*} where $ \mu_{\partial K} $ denotes the surface measure of $ \partial K $ and $\text{div}_{n-1}$ is a constant that depends only on the dimension. In \cite{zador1982asymptotic}, Zador proved that $ \text{div}_{n-1} = (2\pi e)^{-1}n + o(n).$ Later, Ludwig \citep{ludwig1999asymptotic} showed a similar formula for arbitrarily positioned polytopes, namely \begin{align*} &\lim_{N\to\infty}\frac{\min\{\Delta_{v}(P,K)\;|\, P\textrm{ is a polytope with at most }N \textrm{ facets}\}}{N^{-\frac 2{n-1} }} =\\& \frac{1}{2}\textrm{ldiv}_{n-1}\left(\int_{\partial K} \kappa\left(x\right)^{\frac{1}{n+1}}d\mu_{\partial K}\left(x\right)\right)^{\frac{n+1}{n-1}}, \end{align*} where $ \textrm{ldiv}_{n-1} $ is a positive constant that depends only on the dimension. In \cite{Lud06}, it was shown that $ \textrm{ldiv}_{n-1} \geq c.$ Specifically, they proved that every polytope $ P $ in $ \mathbb{R}^n $ with at most $ N \geq 10^n $ facets satisfies \begin{align} \Delta_v(D_n, P) \geq cN^{-\frac 2{n-1} }\text{vol}_n\left(D_n\right) . \end{align} For more details, please see Theorem 2 in \cite{Lud06}. The estimate for $ \textrm{div}_{n-1} $ implies that $\textrm{ldiv}_{n-1} \leq c_2n,$ which, until this paper, was the best-known upper bound for $ \textrm{ldiv}_{n-1} $. Clearly, there is a gap of a factor of the dimension between the upper and lower bounds for $\textrm{ldiv}_{n-1}$. In this paper, we prove that removing the circumscribed restriction improves the constant in the approximation by a factor of the dimension; specifically, we show that for all $ N \geq 10^n$ there is a polytope $ P_{n,N} $ in $ \mathbb{R}^n $ with at most $ N $ facets, which is generated from a random construction, that satisfies \begin{align}\label{firstineq} \Delta_v(D_n, P_{n,N}) \leq C_nN^{-\frac 2{n-1} }\text{vol}_n\left(D_n\right), \end{align} where $ C_n$ is a positive constant that depends only on the dimension and is bounded by an absolute constant. A corollary of this result is that $ \textrm{ldiv}_{n-1} \leq C,$ which closes the aforementioned gap in the estimates for $\textrm{ldiv}_{n-1}$ from \cite{ludwig1999asymptotic,Lud06}. This inequality also shows that one can approximate the $ n$-dimensional Euclidean ball in the symmetric volume difference by an arbitrarily positioned polytope with an exponential number of facets. This phenomenon also holds for the Hausdorff metric and the Banach-Mazur distance; see \cite{artstein2015asymptotic,aubrun2017alice}. When $ N $ is large enough, we improve the bound from Eq. \eqref{firstineq} to \[ \Delta_v(D_n, P_{n,N}) \leq \left(\int_{0}^{1}t^{-1}(1-e^{-\ln(2)t})dt + \int_{0}^{\infty}e^{-\ln(2)e^{t}}dt+O(n^{-0.5})\right)N^{-\frac 2{n-1} }\text{vol}_n\left(D_n\right), \] which implies that \[ \text{ldiv}_{n-1} \leq (\pi e)^{-1}\left(\int_{0}^{1}t^{-1}(1-e^{-\ln(2)t})dt+\int_{0}^{\infty}e^{-\ln(2)e^{t}}dt\right) + o(1) \sim \frac{0.96}{\pi e}+o(1).
\] We also optimize the argument of Theorem 2 in \cite{Lud06} and prove that $ \text{ldiv}_{n-1} \geq (4\pi e)^{-1}+o(1).$ Recently, Hoehner, Sch\"utt and Werner \cite{Wer15} considered polytopal approximation of the ball with respect to the surface area deviation, which is defined for any two compact sets $ A,B \subset\mathbb{R}^n$ with measurable boundary as follows: \[ \Delta_s\left(A,B\right) := \text{vol}_{n-1}\left(\partial\left(A\cup B\right)\right) - \text{vol}_{n-1}\left(\partial\left(A\cap B\right)\right). \] It was also shown that every polytope $ Q $ in $ \mathbb{R}^n $ with at most $ N $ facets, where $ N \geq M_n$, satisfies \[ \Delta_s\left(Q,D_n\right) \geq c_1N^{-\frac 2{n-1} }\text{vol}_{n-1}\left(\partial D_n\right), \] where $M_n$ is a natural number that depends only on the dimension $ n $ and $ c_1$ is a positive absolute constant. We show that this bound is optimal up to an absolute constant by using the aforementioned random construction to find a polytope $ Q_{n,N}$ in $ \mathbb{R}^n $ with at most $ N \geq 10^n $ facets that satisfies \[ \Delta_s\left(Q_{n,N},D_n\right) \leq 4C_nN^{-\frac 2{n-1} }\text{vol}_{n-1}\left(\partial D_n\right),\] where $ C_n \leq C$ are the constants that were defined in Eq. \eqref{firstineq}. \paragraph{Notations and Preliminary Results}\ \\ $ D_n$ is the $n$-dimensional centered Euclidean unit ball. $|A|$ is the Lebesgue measure, i.e. the volume, of a set $A$; similarly, $|\partial A|$ is the surface area of the set $A$. $\text{conv}(A)$ denotes the convex hull of the set $A$, and $A^c$ denotes the complement of $A$.\\ The symmetric volume difference $|A \Delta B|$ between two sets is denoted by $\Delta_v(A,B)$.\\ The surface area deviation is $ \Delta_s \left(A,B\right) := |\partial\left(A\cup B\right)| - |\partial\left(A\cap B\right)|$.\\ We denote by $ \text{as}(K) := \int_{\partial K} \kappa\left(x\right)^{\frac{1}{n+1}}d\mu_{\partial K}\left(x\right) $ the affine surface area of a $ C^2 $ convex body $ K$, and by $\sigma$ the uniform measure on $ \mathbb{S}^{n-1}$. Throughout the paper, $ c,c',C,C',c_1,c_2,C_1,C_2$ denote positive absolute constants that may change from line to line. We shall use the following auxiliary results. \begin{lemma} \[ \frac{c}{\sqrt{n}} \leq \frac{|D_n|}{|D_{n-1}|} \leq \frac{C}{\sqrt{n}} \] \end{lemma} \begin{thm}[Isoperimetric inequality] {\label{isoperemetric}} If $ K \subset \mathbb{R}^n $ is a convex body, then \[ |\partial K| \geq n|K|^{\frac{n-1}{n}}|D_n|^{\frac{1}{n}}. \] \end{thm} \begin{thm}[Affine isoperimetric inequality \cite{lutwak1996brunn}]{\label{affine}} Let $ K \subset \mathbb{R}^n $ be a convex body with $ |K|=|D_n| $, and let $ \textrm{as}(K) $ denote its affine surface area as defined above. Then \[ \textrm{as}(K)\leq \textrm{as}(D_n). \] \end{thm} \begin{thm}[Theorem 1 in \cite{ludwig1999asymptotic}]\label{ludlemma} Let $K$ be a convex body in $\mathbb{R}^n$ with $C^2$ boundary and everywhere positive Gaussian curvature. Then \begin{align*} &\lim_{N\to\infty}\frac{\min\{\Delta_v(K,P)\;|\;P\textrm{ is a polytope with at most }N\textrm{ facets}\}}{N^{-\frac 2{n-1} }} =\\& \frac{1}{2}\textrm{ldiv}_{n-1}\left(\int_{\partial K} \kappa\left(x\right)^{\frac{1}{n+1}}d\mu_{\partial K}\left(x\right)\right)^{\frac{n+1}{n-1}}. \end{align*} \end{thm} \begin{thm}[Theorem 2 in \cite{Lud06}]{\label{lowebound}} Assume that $ N>10^{n} $, and let $ P $ be a polytope in $\mathbb{R}^n$ with at most $ N $ facets. Then there exists an absolute constant $ c>0 $ such that \[ \Delta_v(D_n,P) \geq cN^{-\frac 2{n-1} }|D_n|. \] \end{thm}
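These preliminary estimates are elementary to verify numerically. For instance, the first lemma can be checked with the standard formula $|D_n|=\pi^{n/2}/\Gamma(\frac{n}{2}+1)$; the following sketch (ours, for illustration only) shows that $\sqrt{n}\,|D_n|/|D_{n-1}|$ tends to $\sqrt{2\pi}$:
\begin{verbatim}
# Sketch: numerical check of the volume-ratio lemma,
# |D_n|/|D_{n-1}| = Theta(1/sqrt(n)), using
# |D_n| = pi^(n/2)/Gamma(n/2+1); log-gamma avoids overflow.
import math

for n in (10, 100, 1000, 10000):
    log_ratio = (0.5 * math.log(math.pi)
                 + math.lgamma((n + 1) / 2) - math.lgamma(n / 2 + 1))
    print(n, math.exp(log_ratio) * math.sqrt(n))  # -> sqrt(2*pi) ~ 2.5066
\end{verbatim}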
\section{Main results} \begin{thm}{\label{main_thm}} Let $ P^b_{n,N} $ be the polytope with at most $N$ facets that is best-approximating for $D_n$ with respect to the symmetric volume difference. Then for all $ N \geq n^n $, \begin{equation} \Delta_v(D_n,P^b_{n,N}) \leq \left(I+II+O\left(n^{-0.5}\right)\right)N^{-\frac{2}{n-1}}|D_n|, \end{equation} where $ I = \int_{0}^{1}t^{-1}(1-e^{-\ln(2)t})dt$ and $ II= \int_{0}^{\infty}e^{-\ln(2)e^{t}}dt.$ It follows that \[ \text{ldiv}_{n-1} \leq (\pi e)^{-1}(I+II) + o(1) \sim \frac{0.96}{\pi e} + o(1). \] \end{thm} \begin{remark}{\label{niceremark}} The bound on $ N$ can be improved from $ N \geq n^n $ to $ N \geq 10^{n}$; this changes the constant in front of $ N^{-\frac 2{n-1} } $. The proof is slightly different from the proof of Theorem \ref{main_thm}, and for completeness we provide a sketch of it in Section \ref{Techandloose}. \end{remark} In \cite{Lud06}, it was shown that every polytope $P$ with at most $N$ facets satisfies $ \Delta_v(D_n,P) \geq cN^{-\frac 2{n-1} }|D_n|.$ We optimize their argument to obtain the following result. \begin{thm}{\label{suprisig}} Every polytope $ P $ in $\mathbb{R}^n$ with at most $ N \geq n^n$ facets satisfies \[ \Delta_v(D_n,P) \geq \left(\frac{1}{4}+O\left(N^{-\frac 2{n-1} }\right)\right)N^{-\frac{2}{n-1}}|D_n|, \] and therefore $ \text{ldiv}_{n-1} \geq (4\pi e)^{-1} + o\left(1\right)$. \end{thm} \begin{thm}{\label{sec_thm}} Let $ Q^b_{n,N} $ be the polytope with at most $N$ facets that is best-approximating for $D_n$ with respect to the surface area deviation. Then for all $ N \geq n^n $, \begin{equation} \Delta_s\left(Q^b_{n,N}, D_n\right) \leq \left(2\cdot I+II + \frac{1}{2} + O\left(n^{-0.5}\right)\right)N^{-\frac 2{n-1} }|\partial D_n|, \end{equation} where $ I = \int_{0}^{1}t^{-1}(1-e^{-\ln(2)t})dt$ and $ II= \int_{0}^{\infty}e^{-\ln(2)e^{t}}dt.$ \end{thm} \begin{remark} The proof of Theorem \ref{sec_thm} implies that when $N\geq n^n$, there is a polytope $ P_{n,N} $ in $ \mathbb{R}^n $ with at most $ N$ facets that satisfies both \[ \Delta_v\left(P_{n,N}, D_n\right) \leq \left(I+II+ O\left(n^{-0.5}\right)\right)N^{-\frac{2}{n-1}}|D_n| \] and \[ \Delta_s\left(P_{n,N}, D_n\right) \leq \left(2\cdot I+II+ \frac{1}{2} + O\left(n^{-0.5}\right)\right)N^{-\frac 2{n-1} }|\partial D_n|, \] where $ I = \int_{0}^{1}t^{-1}(1-e^{-\ln(2)t})dt$ and $ II= \int_{0}^{\infty}e^{-\ln(2)e^{t}}dt.$ \end{remark} \begin{remark} In Theorem \ref{sec_thm}, the bound on the number of facets can be improved from $ N\geq n^n $ to $ N \geq 10^{n}$; this changes the constant in front of $ N^{-\frac 2{n-1} } $. \end{remark} \begin{remark} The author conjectures that the estimate for the constant in front of $ N^{-\frac 2{n-1} } $ in Theorem \ref{sec_thm} can be improved. \end{remark} \subsection{Asymptotic results} In this section, we present some asymptotic results. First, let $ P^b_{n,N}\subset \mathbb{R}^n$ be the polytope with at most $N$ facets that is best-approximating for $D_n$ with respect to the symmetric volume difference. The following corollaries are consequences of Theorem \ref{main_thm}, Theorem \ref{lowebound} and Remark \ref{niceremark}. \begin{cor} If $ A \geq 10$ and the dimension is large enough, then \[ \frac{\Delta_v\left(D_n,P^b_{n,A^{n}}\right)}{|D_n|} \in [cA^{-2},CA^{-2}]. \] We conjecture that the limit $ \lim_{n\to\infty} \frac{\Delta_v\left(D_n,P^b_{n,A^{n}}\right)}{|D_n|} $ exists.
\end{cor} \begin{cor} Let $ f\left(n\right) $ be a sequence that satisfies $ f(n) = e^{\omega(n)}.$ Then \[ \lim_{n\to\infty} \frac{\Delta_v\left(D_n,P^b_{n,f(n)}\right)}{|D_n|} = 0. \] \end{cor} \begin{remark}{\label{noaprrox}} It can easily be proven that if $ f\left(n\right) =e^{o(n)}$, then \[ \lim_{n\to\infty} \frac{\Delta_v\left(D_n,P^b_{n,f(n)}\right)}{|D_n|} = 1. \] \end{remark} \subsection{Conjectures}\label{asymptoticresutls} Due to symmetry considerations, we believe that Remark \ref{noaprrox} can be strengthened to: \begin{conj} If $ N \leq 2^n$ and the dimension is large enough, then \[ \lim_{n\to\infty} \frac{\Delta_v(D_n,P^b_{n,N})}{|D_n|} = 1. \] \end{conj} In order to motivate the last conjecture, we use a standard argument to show that if the dimension is fixed and the number of facets tends to infinity, then among all convex bodies with the same volume, the Euclidean ball is the hardest to approximate. For this purpose, let $ K $ be a convex body in $ \mathbb{R}^{n}$, and assume without loss of generality that $ |K| = |D_n|$. Then \begin{equation}{\label{conjconj}} \begin{aligned}\lim_{N\to\infty}N^{\frac 2{n-1} }\min_{P\text{ has at most \ensuremath{N} facets}}\Delta_{v}(K,P) & =\frac{1}{2}\textrm{ldiv}_{n-1}\text{as}(K)^{\frac{n+1}{n-1}}\\ & \leq\frac{1}{2}\textrm{ldiv}_{n-1}\text{as}(D_{n})^{\frac{n+1}{n-1}}\\ & =\lim_{N\to\infty}N^{\frac 2{n-1} }\min_{P\text{ has at most \ensuremath{N} facets}}\Delta_{v}(D_n,P), \end{aligned} \end{equation} where the first and the last equalities follow from Theorem \ref{ludlemma}, and the inequality follows from the affine isoperimetric inequality (Theorem \ref{affine}). The author believes that the limit in Eq. \eqref{conjconj} is unnecessary, i.e. \begin{conj}\label{macbeath1} Fix $n\in\mathbb{N}, n\geq 2$ and $N\geq n+1$, and let $K$ be a convex body in $\mathbb{R}^n$. Then \[ \min_{P \text{ has at most $ N $ facets}}\frac{\Delta_v(P,K)}{|K|} \leq \min_{P \text{ has at most $ N $ facets}}\frac{\Delta_v(P,D_n)}{|D_n|}. \] \end{conj} Observe that by Theorem \ref{main_thm} there is a polytope with $ f(\varepsilon,n):=(c\varepsilon)^{-\frac{n-1}{2}}$ facets that gives an $\varepsilon$-approximation of the $ n $-dimensional Euclidean ball, i.e. $ \frac{\Delta_v(P_{n,f(\varepsilon,n)}, D_n)}{|D_n|} \leq \varepsilon.$ Theorem \ref{lowebound} then implies that this result is optimal, up to an absolute constant. If Conjecture \ref{macbeath1} holds, then it follows that all convex bodies can be approximated by polytopes with an exponential number of facets with respect to the symmetric volume difference. \begin{remark} Macbeath \cite{macbeath1951extremal} showed that if $n\geq 2$ and $N\geq n+1$, then for every convex body $ K $ in $ \mathbb{R}^n $ \[ \min_{P \text{ has at most $ N $ vertices, $ P \subset K $}}\frac{\Delta_v(P,K)}{|K|} \leq \min_{P \text{ has at most $ N $ vertices, $ P \subset D_n $}}\frac{\Delta_v(P, D_n)}{|D_n|}. \] \end{remark} \section{Proofs} For the proofs of Theorems \ref{main_thm} and \ref{sec_thm} we may assume that $ N $ is even. Recall that $ \sigma $ denotes the uniform probability measure on $ \mathbb{S}^{n-1}$ and that $ N \geq n^n.$ \subsection{Proof of Theorem \ref{main_thm}} First, choose a random $ y\in \mathbb{S}^{n-1}$ from the uniform distribution on the sphere, and define the random slab of half-width $t$ as the set $ \{x\in\mathbb{R}^n:|\inner{x}{y}|\leq t\} $; the two-sided form is what makes an intersection of finitely many such slabs bounded, with each slab contributing two facets.
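Before computing the relevant probability in closed form, we note that the cap-plus-cone formula of Eq.~\eqref{alphanr} below is easy to check by Monte Carlo; the following sketch (ours; the dimension and the parameters $r$ and $t$ are arbitrary choices) compares the two:
\begin{verbatim}
# Sketch: Monte Carlo check of alpha_{n,r,t} = sigma_y(|<x,y>| >= t)
# against the cap-plus-cone formula below (n, r, t arbitrary).
import math
import numpy as np
from scipy.integrate import quad

n, r, t, trials = 8, 0.95, 0.6, 500_000
rng = np.random.default_rng(1)
y = rng.standard_normal((trials, n))
y /= np.linalg.norm(y, axis=1, keepdims=True)   # uniform points on S^{n-1}
mc = np.mean(np.abs(r * y[:, 0]) >= t)          # take x = r*e_1 by symmetry

s = t / r
vol_ratio = math.exp(math.lgamma(n / 2 + 1)
                     - math.lgamma((n + 1) / 2)) / math.sqrt(math.pi)
cap = quad(lambda u: (1 - u * u) ** ((n - 1) / 2), s, 1)[0]
cone = (s / n) * (1 - s * s) ** ((n - 1) / 2)
closed_form = 2 * vol_ratio * (cap + cone)      # vol_ratio = |D_{n-1}|/|D_n|
print(mc, closed_form)                          # the two should agree
\end{verbatim}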
Then the probability that a point $ x \in\mathbb{R}^n$ lies outside of a random slab of half-width $ t \in \left(0,1\right)$ equals \begin{equation}{\label{alphanr}} \begin{aligned}\sigma_{y\in\mathbb{S}^{n-1}}\left(|\inner{x}{y}|\geq t\right) & =2\,\sigma_{y\in\mathbb{S}^{n-1}}\left(\inner{\frac{x}{\norm{x}}}{y}\geq\frac{t}{\norm{x}}\right)\\ & =\frac{2\,|\text{conv}(\vec{0},\{y\in\mathbb{S}^{n-1}:\inner{\frac{x}{\norm{x}}}{y}\geq\frac{t}{\norm{x}}\}\cap\partial D_{n})|}{|D_{n}|}\\ & =\frac{2\,|\text{conv}(\vec{0},\{y\in\mathbb{R}^{n}:\inner{\frac{x}{\norm{x}}}{y}=\frac{t}{\norm{x}}\}\cap D_{n})|}{|D_{n}|}\\ & \quad+\frac{2\,|\{y\in\mathbb{R}^{n}:\inner{\frac{x}{\norm{x}}}{y}\geq\frac{t}{\norm{x}}\}\cap D_{n}|}{|D_{n}|}\\ & =\frac{2|D_{n-1}|}{|D_{n}|}\left(\int_{\frac{t}{\|x\|_{2}}}^{1}\left(1-u^{2}\right)^{\frac{n-1}{2}}du+\frac{t}{n\norm{x}}\left(1-\frac{t^{2}}{\|x\|_{2}^{2}}\right)^{\frac{n-1}{2}}\right), \end{aligned} \end{equation} where the first term in the last line is the volume of the spherical cap and the second is the volume of the cone with $ \vec{0} $ as its apex, both sets having the common base $ \{y\in\mathbb{R}^{n}:\inneri{\frac{x}{\norm{x}}}{y}=\frac{t}{\norm{x}}\} \cap D_n$; the overall factor of $2$ accounts for the two antipodal caps. For shorthand, we write $ r=\|x\|_2 $ and denote the probability $ \sigma_{y\in \mathbb{S}^{n-1}}\left( |\inner{x}{y}| \geq t\right) $ by $\alpha_{n,r,t}.$ Let $ P $ be the random polytope that is generated by the intersection of $ \frac N2 $ independent random slabs with the same half-width $ t $. Observe that with probability one, $ P $ is \textbf{bounded} and has $ N $ facets. By independence, the probability that a point $x\in\mathbb{R}^n$ lies inside the random polytope $P$ equals \[ \Pr\left(x\in P\right) = \Pr_{y_1,\ldots,y_{\frac N2} \in \mathbb{S}^{n-1}}\left(\cap_{i=1}^{\frac N2} \left\{|\inner{x}{y_i}| \leq t\right\}\right) =\left(1-\alpha_{n,r,t}\right)^{\frac N2}. \] Using Fubini and polar coordinates, we express the expectation of the random variable $ |D_n \setminus P| $ as \begin{align*}{\mathbb{E}}[|D_{n}\setminus P |] & =\int_{\otimes_{i=1}^{\frac{N}{2}}\mathbb{S}^{n-1}}\int_{D_{n}}(1-\mathbbm{1}_{\{x\in\cap_{i=1}^{\frac{N}{2}}\{|\inner{x}{y_{i}}|\leq t\}\}})\,dx\,d\sigma\left(y_{1}\right)\ldots d\sigma(y_{\frac{N}{2}})\\ & =\int_{D_{n}}\int_{\otimes_{i=1}^{\frac{N}{2}}\mathbb{S}^{n-1}}(1-\mathbbm{1}_{\{x\in\cap_{i=1}^{\frac{N}{2}}\{|\inner{x}{y_{i}}|\leq t\}\}})\,d\sigma\left(y_{1}\right)\ldots d\sigma(y_{\frac{N}{2}})\,dx\\ & =\int_{D_{n}}\left(1-\left(1-\alpha_{n,\norm{x},t}\right)^{\frac{N}{2}}\right)dx=|\partial D_{n}|\int_{0}^{1}r^{n-1}\left(1-\left(1-\alpha_{n,r,t}\right)^{\frac{N}{2}}\right)dr\\ & =|\partial D_{n}|\int_{t}^{1}r^{n-1}\left(1-\left(1-\alpha_{n,r,t}\right)^{\frac{N}{2}}\right)dr, \end{align*} where the last equality holds because $\alpha_{n,r,t}=0$ for $r\leq t$. The expectation $ {\mathbb{E}}[|P \setminus D_n|] $ can be expressed similarly, and thus \begin{equation}\label{eq:Main_eq} \begin{aligned} {\mathbb{E}}[\Delta_v(D_n,P)] &= {\mathbb{E}}[|D_n \setminus P|] + {\mathbb{E}}[|P \setminus D_n|] \\& ={|\partial D_n|}\left(\int_{t}^{1}r^{n-1}\left(1 - \left(1-\alpha_{n,r,t}\right)^{\frac N2}\right)dr + \int_{1}^{\infty}r^{n-1}\left(1-\alpha_{n,r,t}\right)^{\frac N2}dr\right). \end{aligned} \end{equation} Now we set $ t=t_{n,N} $ to be \[t_{n,N}=\sqrt{1 - \left(\frac{\gamma|\partial D_n|}{N|D_{n-1}|}\right)^{\frac 2{n-1}}},\] where $ \gamma $ is a positive absolute constant that will be determined later. From now on, we use the notation $ \alpha_{n,r}$ instead of $ \alpha_{n,r,t_{n,N}} $. We split the proof of Theorem \ref{main_thm} into two main lemmas that give upper bounds for the two terms in Eq.~\eqref{eq:Main_eq}.
\begin{lemma}{\label{firstpartlem}} \begin{equation}\label{Partone} \begin{aligned} {\mathbb{E}}[|D_n \setminus P|] &= |\partial D_n|\int_{t_{n,N}}^{1}r^{n-1}\left(1 - \left(1-\alpha_{n,r}\right)^\frac N2\right)dr\\ &= \left(\int_{0}^{1}t^{-1}(1-e^{-\gamma t})dt+O\left(n^{-0.5}\right)\right)N^{-\frac{2}{n-1}}|D_n|. \end{aligned} \end{equation} \end{lemma}\begin{lemma}{\label{lemma_2}} \begin{equation}\label{Parttwo} {\mathbb{E}}[|P \setminus D_n|] = |\partial D_n|\int_{1}^{\infty}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac N2}dr =\left(\int_{0}^{\infty}e^{-\gamma e^{t}}dt+O\left(n^{-0.5}\right)\right)N^{-\frac{2}{n-1}}|D_n|. \end{equation} \end{lemma} First, we show that Theorem \ref{main_thm} follows from the two aforementioned lemmas, and then we prove them. \paragraph{Proof of Theorem \ref{main_thm}} Lemmas \ref{firstpartlem} and \ref{lemma_2} give the upper bound \begin{equation}\label{symbound} {\mathbb{E}}[\Delta_v(P,D_n)] = \left(\int_{0}^{1} t^{-1}(1-e^{-\gamma t})dt + \int_{0}^{\infty}e^{-\gamma e^{t}}dt + O(n^{-0.5})\right)N^{-\frac 2{n-1} }|D_n|. \end{equation} Now we optimize over $ \gamma \in (0,\infty) $ and derive that the minimum is achieved at $ \gamma = \ln(2).$ This follows from the fact that \begin{equation*} \frac{\partial}{\partial\gamma}\left(\int_{0}^{1} t^{-1}(1-e^{-\gamma t})dt + \int_{0}^{\infty}e^{-\gamma e^{t}}dt\right)= \frac{1}{\gamma}(1-2e^{-\gamma}), \end{equation*} which vanishes exactly at $\gamma=\ln(2)$. The main part of the theorem follows from the fact that there is a polytope $ P_{n,N} $, a realization of $ P $, whose symmetric volume difference from $D_n$ is no more than ${\mathbb{E}}[\Delta_v(P,D_n)] $. Finally, we give an upper bound for $ \text{ldiv}_{n-1} $. Observe that by Theorem \ref{ludlemma} and Eq. \eqref{symbound} \begin{align*} & \left(\int_{0}^{1}t^{-1}(1-e^{-\ln(2)t})dt+\int_{0}^{\infty}e^{-\ln(2)e^{t}}dt+O(n^{-0.5})\right)|D_{n}|\\ & \geq\frac{1}{2}\textrm{ldiv}_{n-1}\left(|\partial D_{n}|\right)^{\frac{n+1}{n-1}}=\frac{1}{2}(1+o(1))\textrm{ldiv}_{n-1}\frac{2\pi e}{n}|\partial D_{n}|\\ & =(1+o(1))\textrm{ldiv}_{n-1}\pi e|D_{n}|, \end{align*} and hence \[ \text{ldiv}_{n-1} \leq (\pi e)^{-1}\left(\int_{0}^{1} t^{-1}(1-e^{-\ln(2)t})dt + \int_{0}^{\infty}e^{-\ln(2)e^{t}}dt+o(1)\right). \] \qed \\ Now we turn our attention to the proofs of the main lemmas. We denote $\delta = \left(n-1\right)^{-0.5}N^{-\frac{2}{n-1}}$, and we use the following lemma, which is proven in Section \ref{Techandloose}. \begin{lemma}{\label{main_lemma}} Let $ r \in [1-\frac{N^{-\frac 2{n-1} }}{\sqrt{n-1}}, 1+\frac{N^{-\frac 2{n-1} }}{\sqrt{n-1}}]$. Then \begin{equation} \alpha_{n,r}=\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}. \end{equation} \end{lemma} \subsection{Proof of Lemma \ref{lemma_2}} Let us split Eq. \eqref{Parttwo} into five parts: \begin{equation} \begin{aligned}|\partial D_{n}| & \bigg[\int_{1}^{1+\delta}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr+\int_{1+\delta}^{1+2N^{-\frac 2{n-1} }}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr\\ & \quad+\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr+\int_{1+\frac{2}{n}}^{n^{2}}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr\\ & \quad+\int_{n^{2}}^{\infty}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr\bigg]. \end{aligned} \end{equation} Next, we estimate these integrals in a series of lemmas.
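Parenthetically, the constants $I$, $II$, and the optimal $\gamma=\ln(2)$ appearing in the proof of Theorem \ref{main_thm} above are easy to check numerically; the following sketch (ours, not part of the proof) uses SciPy:
\begin{verbatim}
# Sketch: numerical values of the constants I, II and the optimiser
# gamma = ln 2 from the proof above (a sanity check, not a proof step).
import math
from scipy.integrate import quad
from scipy.optimize import brentq

I  = quad(lambda u: (1 - math.exp(-math.log(2) * u)) / u, 0, 1)[0]
II = quad(lambda u: math.exp(-math.log(2) * math.e ** u), 0, math.inf)[0]
print(I + II)                                   # ~ 0.96, as claimed

# d/dgamma of the bound is (1 - 2 e^{-gamma})/gamma, vanishing at ln 2
gamma_star = brentq(lambda g: 1 - 2 * math.exp(-g), 0.1, 5.0)
print(gamma_star, math.log(2))
\end{verbatim}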
\begin{lemma} \[ |\partial D_{n}|\int_{1}^{1+\delta}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac {N}{2}}dr = \left(\int_{0}^{\infty}e^{-\gamma e^{t}}dt+O\left(n^{-0.5}\right)\right)N^{-\frac 2{n-1} }|D_{n}|. \] \end{lemma} \begin{proof} By Lemma \ref{main_lemma}, if $r\in [1,1+\delta]$ then \[ \alpha_{n,r}=\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}. \] Hence, \begin{equation} \begin{aligned} & |\partial D_{n}|\int_{1}^{1+\delta}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr=\\ & |\partial D_{n}|\int_{1}^{1+\delta}r^{n-1}\left(1-\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}\right)^{\frac{N}{2}}dr=\\ & (1+O(n^{-1}))|\partial D_{n}|\int_{1}^{1+\delta}e^{-\left(1+O\left(n^{-1}\right)\right)\gamma e^{\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}}dr=\\ & \left(1+O\left(n^{-0.5}\right)\right)|D_{n}|N^{-\frac 2{n-1} }\int_{0}^{n^{0.5}}e^{-\gamma e^{t}}dt=\\ & \left(1+O\left(n^{-0.5}\right)\right)|D_{n}|N^{-\frac 2{n-1} }\int_{0}^{\infty}e^{-\gamma e^{t}}dt. \end{aligned} \end{equation} \end{proof} \begin{lemma} \[ |\partial D_{n}|\int_{1+\delta}^{1+2N^{-\frac 2{n-1} }}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr = |D_{n}|N^{-\frac 2{n-1} } o\left(n^{-0.5}\right). \] \end{lemma} \begin{proof} Since $ 1-\alpha_{n,r} $ is a decreasing function of $ r,$ we need to derive a \textbf{lower bound} for $ \alpha_{n,r}.$ First, by Lemma \ref{main_lemma} applied to $ r = {1+\delta}=1+\left(n-1\right)^{-0.5}N^{-\frac{2}{n-1}},$ we get that \begin{align*} \alpha_{n,{1+\delta}}&=\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}\\&=\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{\sqrt{n-1}\left(1+O\left(n^{-0.5}\right)\right)}. \end{align*} Hence, \begin{equation} \begin{aligned} & |\partial D_{n}|\int_{1+\delta}^{1+2N^{-\frac 2{n-1} }}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr\leq\\ & |\partial D_{n}|\int_{1+\delta}^{1+2N^{-\frac 2{n-1} }}r^{n-1}\left(1-\alpha_{n,1+\left(n-1\right)^{-0.5}N^{-\frac{2}{n-1}}}\right)^{\frac{N}{2}}dr=\\ & |\partial D_{n}|\int_{1+\delta}^{1+2N^{-\frac 2{n-1} }}r^{n-1}\left(1-\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{\sqrt{n-1}\left(1+O\left(n^{-0.5}\right)\right)}\right)^{\frac{N}{2}}dr\leq\\ & |\partial D_{n}|\int_{1+\delta}^{1+2N^{-\frac 2{n-1} }}r^{n-1}e^{-c\gamma e^{\sqrt{n-1}\left(1+O\left(n^{-0.5}\right)\right)}}dr\leq\\ & \left(1+o\left(n^{-1}\right)\right)|\partial D_{n}|e^{-c\gamma e^{\sqrt{n-1}\left(1+O\left(n^{-0.5}\right)\right)}}\int_{1}^{1+2N^{-\frac 2{n-1} }}r^{n-1}dr=\\ & |D_{n}|N^{-\frac 2{n-1} } o\left(n^{-0.5}\right). \end{aligned} \end{equation} \end{proof} \begin{lemma} \[ |\partial D_{n}|\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr = |D_{n}|N^{-\frac{2}{n-1}}o\left(n^{-0.5}\right). \] \end{lemma} \begin{proof} By Eq. \eqref{alphanr}, \begin{equation*} \begin{aligned} \alpha_{n,r} &\geq \frac{2|D_{n-1}|}{|D_n|}\frac {t_{n,N}}{nr} \left(1-\frac{t_{n,N}^2}{r^2}\right)^{\frac{n-1}{2}} \\&=\left(1+O\left(n^{-1}\right)\right)\frac{2|D_{n-1}|}{|\partial D_n|}\left(1-\frac{t_{n,N}^2}{r^2}\right)^{\frac{n-1}{2}}, \end{aligned} \end{equation*} where $t_{n,N}=\sqrt{1 - \left(\frac{\gamma|\partial D_n|}{N|D_{n-1}|}\right)^{\frac 2{n-1}}}$.
Hence, \begin{equation*} \begin{aligned} & |\partial D_{n}|\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr\leq\\ & e^{2}|\partial D_{n}|\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr\leq\\ & C|\partial D_{n}|\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}\left(1-\left(1+O\left(n^{-1}\right)\right)\frac{2|D_{n-1}|}{|\partial D_{n}|}\left(1-\frac{t_{n,N}^{2}}{r^{2}}\right)^{\frac{n-1}{2}}\right)^{\frac{N}{2}}dr\leq\\ & C|\partial D_{n}|\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}e^{-N\left(1+O\left(n^{-1}\right)\right)\frac{|D_{n-1}|}{|\partial D_{n}|}\left(1-\frac{t_{n,N}^{2}}{r^{2}}\right)^{\frac{n-1}{2}}}dr\leq\\ & C|\partial D_{n}|\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}e^{-N\left(1+O\left(n^{-1}\right)\right)r{}^{-\left(n-1\right)}\frac{|D_{n-1}|}{|\partial D_{n}|}\left(r^{2}-t_{n,N}^{2}\right)^{\frac{n-1}{2}}}dr. \end{aligned} \end{equation*} Again, using the fact that $ r \leq 1 + \frac 2n, $ the previous expression is no more than \begin{equation*} \begin{aligned} & C|\partial D_{n}|\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}e^{-cN\frac{|D_{n-1}|}{|\partial D_{n}|}\left(\left(1+\left(r-1\right)\right)^{2}-t_{n,N}^{2}\right)^{\frac{n-1}{2}}}dr\leq\\ & C|\partial D_{n}|\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}e^{-cN\frac{|D_{n-1}|}{|\partial D_{n}|}\left(1-t_{n,N}^{2}+2\left(r-1\right)\right)^{\frac{n-1}{2}}}dr. \end{aligned} \end{equation*} Now we use that $t_{n,N} = \sqrt{1-\left(\frac{\gamma|\partial D_n|}{|D_{n-1}|N}\right)^{\frac 2{n-1}}} $ and the fact that $ \left(1+b\right)^n\geq 1+nb $ on $ [0,\infty) $ to derive that the previous expression equals \begin{equation} \begin{aligned} & C|\partial D_{n}|\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}e^{-cN\frac{|D_{n-1}|}{|\partial D_{n}|}\left(\left(\frac{\gamma|\partial D_{n}|}{|D_{n-1}|N}\right)^{\frac{2}{n-1}}+2\left(r-1\right)\right)^{\frac{n-1}{2}}}dr\leq\\ & C|\partial D_{n}|\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}e^{-c\gamma\left(1+2\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(\frac{\ln\left(n\right)}{n}\right)\right)\right)^{\frac{n-1}{2}}}dr\leq\\ & C|\partial D_{n}|\int_{1+2N^{-\frac 2{n-1} }}^{1+\frac{2}{n}}e^{-c\gamma\left(1+\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(\frac{\ln\left(n\right)}{n}\right)\right)\right)}dr\leq\\ & C_{1}|\partial D_{n}|\int_{2N^{-\frac 2{n-1} }}^{\frac{2}{n}}e^{-c_{1}\gamma nN^{\frac{2}{n-1}}r}dr=|D_{n}|N^{-\frac{2}{n-1}}O\left(n^{-1}\right). \end{aligned} \end{equation} \end{proof} \begin{lemma} \[ |\partial D_{n}|\int_{1+\frac{2}{n}}^{n^{2}}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr = |D_{n}|N^{-\frac{2}{n-1}}o\left(n^{-0.5}\right). \] \end{lemma} \begin{proof} Recalling that $ 1-\alpha_{n,r} $ is decreasing in $ r, $ we derive that \begin{equation}\label{blabla} \begin{aligned} |\partial D_{n}|\int_{1+\frac{2}{n}}^{n^{2}}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr&\leq |\partial D_{n}|\int_{1+\frac{2}{n}}^{n^{2}}r^{n-1}\left(1-\alpha_{n,1+\frac 2n}\right)^{\frac{N}{2}}dr\\ &\leq|\partial D_{n}|n^{2n}\int_{1+\frac{2}{n}}^{n^{2}}\left(1-\alpha_{n,1+\frac 2n}\right)^{\frac{N}{2}}dr.
\end{aligned} \end{equation} In order to continue, we derive a lower bound for $ \alpha_{n,1+\frac 2n} .$ Using the fact that \[ \alpha_{n,r} > \frac{2|D_{n-1}|}{|D_n|}\frac {t_{n,N}}{nr} \left(1-\frac{t_{n,N}^2}{r^2}\right)^{\frac{n-1}{2}}, \] and also that $ t_{n,N} = 1-O\left(\frac 1{n^2}\right) $ and $r = 1+\frac{2}{n},$ it holds that \begin{equation*} \begin{aligned}\alpha_{n,1+\frac{2}{n}} & \geq\left(1+O\left(n^{-1}\right)\right)\frac{2|D_{n-1}|}{|\partial D_{n}|}\left(1-\frac{t_{n,N}^{2}}{r^{2}}\right)^{\frac{n-1}{2}}\\ & \geq c_{1}n^{-0.5}\left(\frac{4}{n}+o\left(n^{-1}\right)\right)^{\frac{n-1}{2}}\geq c_{2}\left(\frac{4}{n}\right)^{\frac{n}{2}}. \end{aligned} \end{equation*} Now we continue from the end of Eq. \eqref{blabla} to derive that \begin{equation} \begin{aligned} |\partial D_{n}|n^{2n}\int_{1+\frac{2}{n}}^{n^{2}}\left(1-c_{2}\left(\frac{4}{n}\right)^{\frac{n}{2}}\right)^{\frac{N}{2}}dr&\leq\,|\partial D_{n}|n^{2n}\int_{1+\frac{2}{n}}^{n^{2}}e^{-Nn^{-\frac{n}{2}}}dr\\ &\leq |\partial D_{n}|n^{2n}\int_{1+\frac{2}{n}}^{n^{2}}e^{-\sqrt{N}}dr\\ &\leq|\partial D_{n}|n^{2n+2}e^{-\sqrt{N}}\\ &=|D_{n}|N^{-\frac{2}{n-1}}o\left(n^{-0.5}\right), \end{aligned} \end{equation} where we used the assumption that $ N\geq n^n$, which implies $ Nn^{-\frac n2}\geq\sqrt{N}$. \end{proof} The next lemma is proven in Section \ref{Techandloose} and will be used to prove Lemma \ref{3.9} below. \begin{lemma}{\label{tech_lemma}} Assume that $ r\geq n^2 $. Then \begin{equation} \alpha_{n,r} \geq 1-\frac{C\sqrt{n}}{r} . \end{equation} \end{lemma} \begin{lemma}\label{3.9} \[ |\partial D_{n}|\int_{n^{2}}^{\infty}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr = |D_{n}|N^{-\frac 2{n-1} } o\left(n^{-0.5}\right). \] \end{lemma} \begin{proof} By Lemma \ref{tech_lemma}, $ 1-\alpha_{n,r}\leq\frac{C\sqrt{n}}{r} $ for $ r\geq n^2 $, and therefore \begin{equation} \begin{aligned} |\partial D_{n}|\int_{n^{2}}^{\infty}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr&\leq|\partial D_{n}|\int_{n^{2}}^{\infty}r^{n-1}\left(\frac{C\sqrt{n}}{r}\right)^{\frac{N}{2}}dr\\&\leq |\partial D_{n}|C^{N}n^{\frac{N}{2}}\int_{n^{2}}^{\infty}r^{-\frac{N}{3}}dr\\&\leq|\partial D_{n}|C^{N}n^{\frac{N}{2}}n^{-\frac{2}{3}N+2}\int_{1}^{\infty}r^{-\frac{N}{3}}dr\\&=|D_{n}|N^{-\frac 2{n-1} } o\left(n^{-0.5}\right). \end{aligned} \end{equation} \end{proof} Putting everything together, Lemma \ref{lemma_2} now follows from all of the lemmas that were proven in this subsection, and finally we derive that \begin{equation} {\mathbb{E}}[|P \setminus D_n|] = \left(\int_{0}^{\infty}e^{-\gamma e^{t}}dt+O\left(n^{-0.5}\right)\right)N^{-\frac 2{n-1} }|D_n|. \end{equation} \qed \subsection{Proof of Lemma \ref{firstpartlem}} First, we split the integral of Eq. \eqref{Partone} into two parts \begin{align*} |\partial D_{n}|\left(\int_{1-\delta}^{1}r^{n-1}\left(1-\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}\right)dr + \int_{t_{n,N}}^{1-\delta}r^{n-1}\left(1-\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}\right)dr \right). \end{align*} Next, we estimate the first integral. \begin{lemma} $$ |\partial D_{n}|\int_{1-\delta}^{1}r^{n-1}\left(1-\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}\right)dr = \left(\int_{0}^{1}t^{-1}(1-e^{-\gamma t})dt+O(n^{-0.5})\right)N^{-\frac 2{n-1} }|D_n|. $$ \end{lemma} \begin{proof} For $r\in [1-\delta,1] $, we use Lemma \ref{main_lemma} to estimate $ \alpha_{n,r} $ and derive that \[ \alpha_{n,r}=\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}.
\] Hence, \begin{equation*} \begin{aligned} & |\partial D_{n}|\int_{1-\delta}^{1}r^{n-1}\left(1-\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}\right)dr\leq\\ & |\partial D_{n}|\int_{1-\delta}^{1}r^{n-1}\left(1-\left(1-\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}\right)^{\frac{N}{2}}\right)dr. \end{aligned} \end{equation*} Using the equality $ 1-x_n= \left(1+O\left(x_n^2\right)\right)e^{-x_n} $, where $ x_n = O\left(n^{-1}\right) $, we obtain \begin{equation*} \begin{aligned} & |\partial D_{n}|\int_{1-\delta}^{1}r^{n-1}\left(1-e^{-\gamma e^{\left(1+O\left(n^{-1}\right)\right)\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}}\right)dr=\\ & |\partial D_{n}|\int_{1-\delta}^{1}r^{n-1}\left(1-\left(1+O\left(n^{-1}\right)\right)e^{-\gamma e^{\left(1+O\left(n^{-1}\right)\right)\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}}\right)dr=\\ & (1+O(n^{-1}))|\partial D_{n}|\int_{1-\delta}^{1}\left(1-e^{-\gamma e^{\left(1+O\left(n^{-1}\right)\right)\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}}\right)dr=\\ & (1+O(n^{-0.5}))|D_{n}|N^{-\frac 2{n-1} }\int_{0}^{n^{0.5}}\left(1-e^{-\gamma e^{-x}}\right)dx=\\ & (1+O(n^{-0.5}))|D_{n}|N^{-\frac 2{n-1} }\int_{0}^{1}t^{-1}(1-e^{-\gamma t})dt. \end{aligned} \end{equation*} \end{proof} We now estimate the second integral. \begin{lemma} \[ |\partial D_n|\int_{t_{n,N}}^{1-\delta}r^{n-1}\left(1-\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}\right)dr = o(n^{-0.5})N^{-\frac 2{n-1} }|D_n|. \] \end{lemma} \begin{proof} Using Lemma \ref{main_lemma} with $ r = {1-\delta}=1-\left(n-1\right)^{-0.5}N^{-\frac{2}{n-1}},$ we get that \begin{align*} \alpha_{n,{1-\delta}}&=\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{-\left(n-1\right)\delta N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}\\&=\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{-\sqrt{n-1}\left(1+O\left(n^{-0.5}\right)\right)}. \end{align*} Therefore, \begin{equation}\begin{aligned} & |\partial D_{n}|\int_{t_{n,N}}^{1-\delta}r^{n-1}\left(1-\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}\right)dr\leq\\ & |\partial D_{n}|\int_{t_{n,N}}^{1-\delta}r^{n-1}\left(1-\left(1-\alpha_{n,1-\left(n-1\right)^{-0.5}N^{-\frac{2}{n-1}}}\right)^{\frac{N}{2}}\right)dr=\\ & |\partial D_{n}|\int_{t_{n,N}}^{1-\delta}r^{n-1}\left(1-\left(1-\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{-\sqrt{n-1}\left(1+O\left(n^{-0.5}\right)\right)}\right)^{\frac{N}{2}}\right)dr\leq\\ & |\partial D_{n}|\int_{t_{n,N}}^{1-\delta}r^{n-1}\left(1-e^{-\gamma\left(1+O\left(n^{-1}\right)\right)e^{-\sqrt{n-1}\left(1+O\left(n^{-0.5}\right)\right)}}\right)dr\leq\\ & C|\partial D_{n}|(1-\delta-t_{n,N})e^{-\frac{\sqrt{n-1}}{2}}=|D_{n}|N^{-\frac 2{n-1} } o\left(n^{-0.5}\right). \end{aligned} \end{equation} \end{proof} \section{Proof of Theorem \ref{sec_thm}} Recall that we want to find an upper bound for $ \Delta_s\left(Q^b_{n,N}, D_n\right)$, where $ Q^b_{n,N} $ is a polytope in $ \mathbb{R}^n $ with at most $ N $ facets that minimizes the surface area deviation from the Euclidean ball.
For this purpose, choose a polytope $P$ from the random construction that was used in Theorem \ref{main_thm} which satisfies both: \[ |D_n \setminus P| \leq \left(\int_{0}^{1} t^{-1}(1-e^{-\ln(2)t})dt + O(n^{-0.5})\right)N^{-\frac 2{n-1} }|D_n| \] and \[ |P \setminus D_n| \leq \left( \int_{0}^{\infty}e^{-\ln(2)e^{t}}dt + O(n^{-0.5})\right)N^{-\frac 2{n-1} }|D_n|. \] First, we find a lower bound for $ |\partial\left(D_n \cap P\right)|.$ \begin{lemma}{\label{lem1}} \begin{equation} |\partial\left(P\cap D_n\right)| \geq \left(1 - \left(\int_{0}^{1} t^{-1}(1-e^{-\ln(2)t})dt+O(n^{-0.5})\right)N^{-\frac 2{n-1} }\right)|\partial D_n|. \end{equation} \end{lemma} \begin{proof} By definition, $ P $ satisfies the inequality \[|P\cap D_n| \geq \left(1 -\left(\int_{0}^{1} t^{-1}(1-e^{-\ln(2)t})dt + O(n^{-0.5})\right)N^{-\frac 2{n-1} }\right)|D_n|,\] and by the isoperimetric inequality (Lemma \ref{isoperemetric}) \begin{equation*} \begin{aligned} |\partial\left(P\cap D_n\right)| &\geq n|P\cap D_n|^{\frac{n-1}{n}}|D_n|^{\frac {{1}}{n}}\\&\geq \left(1-\left(\int_{0}^{1} t^{-1}(1-e^{-\ln(2)t})dt + O(n^{-0.5})\right)N^{-\frac 2{n-1} }\right)|\partial D_n|. \end{aligned} \end{equation*} The lemma follows. \end{proof} Finally, we prove an upper bound for $ |\partial\left(D_n\cup P\right)| $. \begin{lemma}{\label{lem2}} \begin{equation} \begin{aligned} & |\partial\left(P\cup D_{n}\right)|\leq \\&\left(1+ \left(\int_{0}^{1} t^{-1}(1-e^{-\ln(2)t})dt + \int_{0}^{\infty}e^{-\ln(2)e^{t}}dt + \frac{1}{2} +O(n^{-0.5})\right)N^{-\frac 2{n-1} }\right)|\partial D_{n}|. \end{aligned} \end{equation} \end{lemma} \begin{proof} By the definition of the symmetric volume difference, $ P $ satisfies the inequality \begin{equation}{\label{aa}} |P \cup D_n| \leq \left(1+ \left(\int_{0}^{1} t^{-1}(1-e^{-\ln(2)t})dt + \int_{0}^{\infty}e^{-\ln(2)e^{t}}dt + O(n^{-0.5})\right)N^{-\frac 2{n-1} }\right)|D_n|. \end{equation} By volume considerations, we notice that the origin is in the interior of $ P$. Hence, by the cone-volume formula, \begin{equation}{\label{bb}} \begin{aligned} |D_{n}\cup P|&= |\text{conv}(\vec{0},\partial P \cap D^c_{n})| + |\text{conv}(\vec{0},\partial D_{n}\cap P^c)| \\&= \frac{t_{n,N}}{n}|\partial P \cap D^c_{n}|+{\frac {{1}}{n}}|\partial D_{n}\cap P^c|, \end{aligned} \end{equation} where in the last equality we used the fact that all the facets have the same distance $ t_{n,N}$ from the origin. Now we use both Eqs. \eqref{aa} and \eqref{bb}, together with $ t_{n,N}\leq 1 $, to derive that \begin{align*} &\frac {t_{n,N}}{n}|\partial\left(P \cup D_n\right)| \leq \\&\left(1+ \left(\int_{0}^{1} t^{-1}(1-e^{-\ln(2)t})dt + \int_{0}^{\infty}e^{-\ln(2)e^{t}}dt + O(n^{-0.5})\right)N^{-\frac 2{n-1} }\right)|D_n|. \end{align*} Since $ t_{n,N}= {1-\frac 12\left(1+O\left(n^{-0.5}\right)\right)N^{-\frac 2{n-1} }},$ the lemma follows. \end{proof} \begin{proof}[Proof of Theorem \ref{sec_thm}] The theorem now follows by using Lemmas \ref{lem1} and \ref{lem2} and the definition of the surface area deviation: \begin{equation} \begin{aligned} \Delta_s\left(D_n,P\right)&=|\partial\left(P\cup D_n\right)|-|\partial\left(P\cap D_n\right)|\\&\leq \left(2\int_{0}^{1} t^{-1}(1-e^{-\ln(2)t})dt + \int_{0}^{\infty}e^{-\ln(2)e^{t}}dt + \frac{1}{2} + O(n^{-0.5})\right)N^{-\frac 2{n-1} }|\partial D_n|. \end{aligned} \end{equation} \end{proof} \section{Proof of Theorem \ref{suprisig}} Let $ P^b_{n,N} $ be the polytope in $ \mathbb{R}^n $ with at most $ N $ facets that minimizes the symmetric volume difference from the $n$-dimensional Euclidean unit ball.
In Theorem 2 of \cite{Lud06}, a lower bound for the volume deviation in terms of the facets $F_1,\ldots,F_N$ of $ P_{n,N} $ was shown; in our normalization it reads \begin{align*} |D_n\setminus P_{n,N}| \geq \frac 1{n}\sum_{i=1}^{N}|F_i\cap D_n|\left(1-\sqrt{1-\left(\frac{|F_i|}{|D_{n-1}|}\right)^{\frac{2}{n-1}}}\right). \end{align*} By Lemma 9 in \cite{Lud06}, each facet of $ P^b_{n,N} $ satisfies \[ |F_i \cap D_n| = |F_i \cap D^c_n|. \] We define $ \sqrt{1-r_i^2}$ to be the height such that $|D_n \cap \{x_1=\sqrt{1-r_i^2}\}| = |F_i|,$ i.e., $ |F_i| = r_i^{n-1}|D_{n-1}|.$ From this definition, we know that $ d(o,F_i) > \sqrt{1-r_i^2} $ and $ |F_i \cap D_n| = \frac{1}{2}|F_i| = \frac{1}{2}r_i^{n-1}|D_{n-1}|.$ Thus \begin{align}\label{target} |D_n\setminus P_{n,N}| \geq \frac {|D_{n-1}|}{2n}\sum_{i=1}^{N}r_i^{n-1}\left(1-\sqrt{1-r_{i}^{2}}\right). \end{align} We bound the right-hand side of Eq. \eqref{target} from below by an optimization problem whose constraint fixes the surface area of our polytope, \[ \min\left\{f\left(r_1,\ldots,r_N\right) :\ |D_{n-1}|\sum_{i=1}^{N}r_{i}^{n-1}=|\partial P^b_{n,N}| \ ,0\leq r_{i}\leq 1 ,\forall i\in\{1,\ldots,N\}\right\}, \] where \[ f\left(r_1,\ldots,r_N\right) = \frac{|D_{n-1}|}{2n}\sum_{i=1}^{N}r_{i}^{n-1}\left(1-\sqrt{1-r_{i}^{2}}\right). \] Using Lagrange multipliers and the separability of both $ f$ and the constraint, we derive that the minimum is achieved at the point \[ r^*_1=\cdots=r^*_N = \left(\frac{|\partial P^b_{n,N}|}{|D_{n-1}|N}\right)^{\frac 1{n-1}}. \] We conclude that \begin{equation} \begin{aligned}\Delta_{v}(P_{n,N}^{b},D_{n}) & \geq f\left(r_{1}^{*},\ldots,r_{N}^{*}\right)=\frac{|D_{n-1}|}{2n}\sum_{i=1}^{N}\frac{|\partial P_{n,N}^{b}|}{N|D_{n-1}|}\left(1-\sqrt{1-\left(\frac{|\partial P_{n,N}^{b}|}{|D_{n-1}|N}\right)^{\frac{2}{n-1}}}\right)\\ & =\frac{|\partial P_{n,N}^{b}|}{2n}\left(1-\sqrt{1-\left(\frac{|\partial P_{n,N}^{b}|}{|D_{n-1}|N}\right)^{\frac{2}{n-1}}}\right)\\ & \geq\left(\frac{1}{2}-cN^{-\frac 2{n-1} }\right)|D_{n}|\left(1-\sqrt{1-\left(\frac{|\partial P_{n,N}^{b}|}{|D_{n-1}|N}\right)^{\frac{2}{n-1}}}\right)\\ & \geq\bigg(\frac{1}{4}-cN^{-\frac 2{n-1} }+O(n^{2}N^{-\frac{4}{n-1}})\bigg)N^{-\frac 2{n-1} }|D_{n}|, \end{aligned} \end{equation} where we used the isoperimetric inequality (Lemma \ref{isoperemetric}), Theorem \ref{main_thm} (which implies $ |\partial P^b_{n,N}| \geq (1-cN^{-\frac 2{n-1} }) |\partial D_n|$) and $ \sqrt{1-x} = 1-\frac{1}{2}x+O(x^2). $ Hence, by taking $ N \to \infty $, \[ \frac 12\text{ldiv}_{n-1}|\partial D_n|^{1+\frac{2}{n-1}} \geq \frac{1}{4}|D_{n}|, \] so by Stirling's formula we obtain $ \text{ldiv}_{n-1} \geq (4\pi e)^{-1} + o(1)$, as desired. \qed \section*{ACKNOWLEDGMENTS} I would like to express my sincerest gratitude to Prof. Bo'az Klartag for the inspiring discussions, and also to Prof. Gideon Schechtman and Dr. Ronen Eldan. I also express my gratitude to my friend Prof. Steven Hoehner and to Ms. Anna Mendelman for editing the content of this paper. \section{Technical lemmas and loose ends}{\label{Techandloose}} Recall that \[ \alpha_{n,r} = \frac{2|D_{n-1}|}{|D_n|}\left(\int_{\frac{t_{n,N}}{r}}^{1}\left(1-x^2\right)^{\frac{n-1}{2}}dx +\frac {t_{n,N}}{nr} \left(1-\frac{t_{n,N}^2}{r^2}\right)^{\frac{n-1}{2}}\right) \] where $t_{n,N} = \sqrt{1 - \left(\frac{\gamma |\partial D_n|}{N|D_{n-1}|}\right)^{\frac 2{n-1}}}$. The integral is the volume of the cap, and the second term is the volume of the cone; the cap and the cone share the common base $ \{x\in\mathbb{R}^n: x_1 = \frac {t_{n,N}}r\} \cap D_n.$ When $N\geq n^n$, $ t_{n,N} $ is very close to 1. When $ r $ is close to 1, the volume of the cone is significantly larger than the volume of the cap.
The following lemma formalizes this. \begin{lemma}{\label{sub_main_lemma}} Assume that $ r \in [1-\frac{N^{-\frac 2{n-1} }}{\sqrt{n-1}}, 1+\frac{N^{-\frac 2{n-1} }}{\sqrt{n-1}}]$. Then for all $N\geq n^n$, \begin{equation} \int_{\frac{t_{n,N}}{r}}^{1}\left(1-x^2\right)^{\frac{n-1}{2}}dx \leq \frac{C}{n^2}\left(\frac {t_{n,N}}{nr} \left(1-\frac{t_{n,N}^2}{r^2}\right)^{\frac{n-1}{2}}\right). \end{equation} \end{lemma} \begin{proof} Observe that $ N^{-\frac 2{n-1} } = O(n^{-2}),$ which implies that $ \frac {t_{n,N}}r = 1-O\left(n^{-2}\right)$. Hence, \begin{equation} \begin{aligned}\int_{\frac{t_{n,N}}{r}}^{1}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx & =\int_{\frac{t_{n,N}}{r}}^{1}(1+x)^{\frac{n-1}{2}}\left(1-x\right)^{\frac{n-1}{2}}dx\\ & \leq2^{\frac{n-1}{2}}\int_{0}^{1-\frac{t_{n,N}}{r}}x^{\frac{n-1}{2}}dx=\frac{2^{\frac{n+1}{2}}}{n+1}\left(1-\frac{t_{n,N}}{r}\right)^{\frac{n+1}{2}}\\ & \leq\frac{C}{n^{3}}2^{\frac{n+1}{2}}\left(1-\frac{t_{n,N}}{r}\right)^{\frac{n-1}{2}}\leq\frac{C}{2n^{3}}\left(1+\frac{t_{n,N}}{r}\right)^{\frac{n-1}{2}}\left(1-\frac{t_{n,N}}{r}\right)^{\frac{n-1}{2}}\\ & \leq\frac{C}{n^{2}}\frac{t_{n,N}}{nr}\left(1-\left(\frac{t_{n,N}}{r}\right)^{2}\right)^{\frac{n-1}{2}}. \end{aligned} \end{equation} \end{proof} Now we can complete all the missing details from the proof of Theorem \ref{main_thm}. First we prove Lemma \ref{main_lemma}. \begin{lemmaa} Assume that $ r \in [1-\frac{N^{-\frac 2{n-1} }}{\sqrt{n-1}}, 1+\frac{N^{-\frac 2{n-1} }}{\sqrt{n-1}}].$ Then it holds that \begin{equation} \alpha_{n,r}=\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)} \end{equation} \begin{proof} Using Lemma \ref{sub_main_lemma} and the fact that both $ t_{n,N}$ and $r $ are of the order $ 1-O\left(n^{-2}\right)$, we derive that \begin{equation*} \begin{aligned}\alpha_{n,r} & =\left(1+O\left(n^{-1}\right)\right)\frac{2|D_{n-1}|}{|D_{n}|}\left(\frac{t_{n,N}}{nr}\right)\left(1-\frac{t_{n,N}^{2}}{\left(1+\left(r-1\right)\right)^{2}}\right)^{\frac{n-1}{2}}\\ & =\left(1+O\left(n^{-1}\right)\right)\frac{2|D_{n-1}|}{|\partial D_{n}|}\frac{1}{\left(1+\left(r-1\right)\right)^{n-1}}\left(\left(1+\left(r-1\right)\right)^{2}-t_{n,N}^{2}\right)^{\frac{n-1}{2}}\\ & =\left(1+O\left(n^{-1}\right)\right)\frac{2|D_{n-1}|}{|\partial D_{n}|}\left(1-t_{n,N}^{2}+2\left(r-1\right)+\left(r-1\right)^{2}\right)^{\frac{n-1}{2}}\\ & =\left(1+O\left(n^{-1}\right)\right)\frac{2|D_{n-1}|}{|\partial D_{n}|}\left(\left(\frac{\gamma|\partial D_{n}|}{|D_{n-1}|N}\right)^{\frac{2}{n-1}}+2\left(r-1\right)+\left(r-1\right)^{2}\right)^{\frac{n-1}{2}}\\ & =\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}\left(1+2\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(\frac{\ln\left(n\right)}{n}\right)\right)\right)^{\frac{n-1}{2}}\\ & =\frac{2\gamma\left(1+O\left(n^{-1}\right)\right)}{N}e^{\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}. \end{aligned} \end{equation*} \end{proof} \end{lemmaa} The following is the proof of Lemma \ref{tech_lemma}. \begin{lemmaa} For all $r \geq n^2$, it holds that \begin{equation} \alpha_{n,r} \geq 1-\frac{C\sqrt{n}}{r} .
\end{equation} \end{lemmaa} \begin{proof} We have \begin{align*}\alpha_{n,r} & =\frac{2|D_{n-1}|}{|D_{n}|}\left(\int_{\frac{t_{n,N}}{r}}^{1}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx+\frac{t_{n,N}}{nr}\left(1-\frac{t_{n,N}^{2}}{r^{2}}\right)^{\frac{n-1}{2}}\right)\\ & \geq\frac{2|D_{n-1}|}{|D_{n}|}\int_{\frac{t_{n,N}}{r}}^{1}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx\\ & =1-2\frac{|D_{n-1}|}{|D_{n}|}\int_{0}^{\frac{t_{n,N}}{r}}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx, \end{align*} where in the last equality we used the fact that $ |D_{n-1}|\int_{0}^{1}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx= \frac{|D_n|}{2}$. Continuing from the previous line, and using that $\left(1-x^{2}\right)^{\frac{n-1}{2}}=\left(1+x\right)^{\frac{n-1}{2}}\left(1-x\right)^{\frac{n-1}{2}}\leq c_{1}\left(1-x\right)^{\frac{n-1}{2}}$ for $ 0\leq x\leq\frac{1}{r}\leq n^{-2} $ as well as Bernoulli's inequality $\left(1-\frac 1r\right)^{\frac{n+1}{2}}\geq1-\frac{n+1}{2r}$, we obtain \begin{align*}\quad\quad\quad\quad\quad\quad & \geq1-2\frac{|D_{n-1}|}{|D_{n}|}\int_{0}^{\frac{1}{r}}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx\geq1-c\sqrt{n}\int_{0}^{\frac{1}{r}}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx\\ & \geq1-c_{1}\sqrt{n}\int_{0}^{\frac{1}{r}}\left(1-x\right)^{\frac{n-1}{2}}dx\\ & \geq1-\frac{c\sqrt{n}}{n}\left(1-\left(1-\frac{1}{r}\right)^{\frac{n+1}{2}}\right)\\ & \geq1-\frac{c}{\sqrt{n}}\left(1-\left(1-\frac{n+1}{2r}\right)\right)=1-\frac{c}{\sqrt{n}}\cdot\frac{n+1}{2r}\\ & \geq1-\frac{c\sqrt{n}}{r}. \end{align*} \end{proof} \subsection*{Sketch of the proof of Remark \ref{niceremark}} We give short proofs of the modifications needed so that Theorem \ref{main_thm} holds when the number of facets $ N $ of the random polytope satisfies $ 10^n \leq N \leq n^n$. For this purpose, we modify Lemmas \ref{firstpartlem} and \ref{lemma_2} so that they will hold when $ 10^n \leq N \leq n^n.$ For both the aforementioned lemmas, we need to estimate the volume of a spherical cap with height $ h< 1 $. For this purpose, we shall use the following integration by parts identity: \begin{equation}\label{IntByParts} \begin{aligned} \int_{a}^{b}e^{ng(x)}dx &= \frac{1}{n}\bigg[ \frac{1}{g'(b)}e^{ng(b)} - \frac{1}{g'(a)}e^{ng(a)} \bigg]- \frac{1}{n}\int_{a}^{b}\frac{d}{dx}\left(\frac{1}{g'(x)}\right)e^{ng(x)}dx. \end{aligned} \end{equation} \begin{lemma}{\label{laplaceaprox}} Let $ a_n \in (\frac{2}{3},1)$ be a number that may depend on the dimension $ n $. Then the following holds: \begin{equation} \begin{aligned} \int_{a_n}^{1}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx = \frac{\left(1-a_n^{2}\right)^{\frac{n+1}{2}}}{a_n\left(n-1\right)}+O\left(\frac{\int_{a_n}^{1}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx}{n}\right). \end{aligned} \end{equation} \end{lemma} \begin{proof} Let $ \varepsilon < \frac{1-a_n}{2}$. Then \begin{align*}\int_{a_{n}}^{1-\varepsilon}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx & =\int_{a_{n}}^{1-\varepsilon}e^{\frac{n-1}{2}\ln(1-x^{2})}dx\\ & =\frac{2}{n-1}\bigg[-\frac{1-(1-\varepsilon)^{2}}{2(1-\varepsilon)}(1-(1-\varepsilon)^{2}){}^{\frac{n-1}{2}}\\ & \,\,\,+\frac{1-a_{n}^{2}}{2a_{n}}(1-a_{n}^{2}){}^{\frac{n-1}{2}}\bigg]-\frac{2}{n-1}\int_{a_{n}}^{1-\varepsilon}\frac{1+x^{2}}{2x^{2}}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx\\ & \leq\frac{2}{n-1}\bigg[-\frac{(1-(1-\varepsilon)^{2})^{\frac{n+1}{2}}}{2(1-\varepsilon)}+\frac{1-a_{n}^{2}}{2a_{n}}(1-a_{n}^{2}){}^{\frac{n-1}{2}}\bigg]+\\ & \quad\quad\frac{C}{n}\int_{a_{n}}^{1-\varepsilon}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx, \end{align*} where the second equality follows from Eq. \eqref{IntByParts}; the matching lower bound with $-\frac{C}{n}\int_{a_{n}}^{1-\varepsilon}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx$ holds since $\frac{1+x^{2}}{2x^{2}}\leq C$ on $[\frac 23,1]$. Taking the limit of both sides of the previous inequality as $\varepsilon\to 0$ yields the lemma. \end{proof} Now we show how to modify the proof of Lemma \ref{lemma_2}; Lemma \ref{firstpartlem} can be obtained by similar modifications.
For this purpose, we need to derive a lower bound for $ \alpha_{n,r}$. First, we show that the volume of the aforementioned cone is larger than the volume of the spherical cap. \begin{lemma}{\label{sub_main_lemma_2}} Assume that $ r \in [1, 1+\frac{N^{-\frac 2{n-1} }}{\sqrt{n-1}}]$ and $ 10^{n}\leq N\leq n^{n} $. When the dimension is sufficiently large, it holds that \begin{equation*} \int_{\frac{t_{n,N}}{r}}^{1}\left(1-x^2\right)^{\frac{n-1}{2}}dx \leq \frac{1}{100}\frac {t_{n,N}}{nr} \left(1-\frac{t_{n,N}^2}{r^2}\right)^{\frac{n-1}{2}}. \end{equation*} \end{lemma} \begin{proof} Applying Lemma \ref{laplaceaprox} with $ a_n = \frac {t_{n,N}}r$ yields \begin{align*}\int_{\frac{t_{n,N}}{r}}^{1}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx & =\frac{1}{n-1}\frac{r}{t_{n,N}}\left(1-\frac{t_{n,N}^{2}}{r^{2}}\right)^{\frac{n+1}{2}}+O\left(n^{-1}\int_{\frac{t_{n,N}}{r}}^{1}\left(1-x^{2}\right)^{\frac{n-1}{2}}dx\right)\\ & \leq\frac{1}{100}\frac{t_{n,N}}{nr}\left(1-\frac{t_{n,N}^{2}}{r^{2}}\right)^{\frac{n-1}{2}}. \end{align*} \end{proof} Using Lemma \ref{sub_main_lemma_2}, one can repeat the proof of Lemma \ref{main_lemma} to derive the following. \begin{lemma}[Modification of Lemma \ref{main_lemma}]{\label{alphnanrmodified}} Assume that $ r \in [1, 1+\frac{N^{-\frac 2{n-1} }}{\sqrt{n-1}}]$ and $ 10^{n}\leq N\leq n^{n} $. Then \begin{equation*} \frac{2\gamma\left(1-\frac{1}{25}+ O\left(n^{-1}\right)\right)}{N}e^{\left(n-1\right)\left(r-1\right)N^{\frac{2}{n-1}}\left(1+O\left(n^{-0.5}\right)\right)}\leq\alpha_{n,r}. \end{equation*} \end{lemma} It remains to modify Lemma \ref{lemma_2} itself. \begin{lemma}[Modification of Lemma \ref{lemma_2}] \begin{equation*} {\mathbb{E}}[|P \setminus D_n|] \leq \left(\frac{\int_{0}^{\infty}e^{-\ln(2)e^{t}}dt}{1-\frac{1}{20}}+O\left(n^{-0.5}\right)\right)N^{-\frac{2}{n-1}}|D_n|. \end{equation*} \end{lemma} \begin{proof} We define $ \delta = \min\{N^{-\frac 2{n-1} }(n-1)^{-0.5},(100n)^{-1}\}$ and split ${\mathbb{E}}[|P \setminus D_n|] $ into three parts: \begin{equation*} \begin{aligned} {\mathbb{E}}[|P\setminus D_n|]=|\partial D_{n}| & \bigg(\int_{1}^{1+\delta}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr+\int_{1+\delta}^{n^{2}}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr\\ & \,\,+\int_{n^{2}}^{\infty}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr\bigg). \end{aligned} \end{equation*} We handle the third integral in the same way as in Lemma \ref{lemma_2}. Moreover, the second integral is negligible: \begin{equation*} \begin{aligned} & |\partial D_{n}|\int_{1+\delta}^{n^{2}}r^{n-1}\left(1-\alpha_{n,r}\right)^{\frac{N}{2}}dr\leq\\ & |\partial D_{n}|\int_{1+\delta}^{n^{2}}r^{n-1}\left(1-\alpha_{n,1+\left(n-1\right)^{-0.5}N^{-\frac{2}{n-1}}}\right)^{\frac{N}{2}}dr\leq\\ & |\partial D_{n}|\int_{1+\delta}^{n^{2}}r^{n-1}\left(1-\frac{2\ln(2)\left(1-\frac{1}{25}+O\left(n^{-1}\right)\right)}{N}e^{\sqrt{n-1}\left(1+O\left(n^{-0.5}\right)\right)}\right)^{\frac{N}{2}}dr\leq\\ & |\partial D_{n}|\int_{1+\delta}^{n^{2}}r^{n-1}e^{-\ln(2)\left(1-\frac{1}{25}+O\left(n^{-1}\right)\right)e^{\sqrt{n-1}\left(1+O\left(n^{-0.5}\right)\right)}}dr\leq\\ & |\partial D_{n}|e^{-\ln(2)\left(1-\frac{1}{25}+O\left(n^{-1}\right)\right)e^{\sqrt{n-1}\left(1+O\left(n^{-0.5}\right)\right)}}\int_{t_{n,N}}^{n^{2}}r^{n-1}dr\leq\\ & C|D_{n}|n^{n^{2}}e^{-\ln(2)\left(1-\frac{1}{25}+O\left(n^{-1}\right)\right)e^{\sqrt{n-1}\left(1+O\left(n^{-0.5}\right)\right)}}=o(n^{-3})|D_{n}|N^{-\frac 2{n-1} }.
\end{aligned} \end{equation*} Finally, using the lower bound for $ \alpha_{n,r}$ that was proven in Lemma \ref{alphnanrmodified}, we can handle the first integral as we did in Lemma \ref{lemma_2} to derive that \begin{align*} {\mathbb{E}}[|P \setminus D_n|] \leq \left(\frac{\int_{0}^{\infty}e^{-\ln(2)e^{t}}dt}{1-\frac{1}{20}}+O\left(n^{-0.5}\right)\right)N^{-\frac{2}{n-1}}|D_n|. \end{align*} \end{proof} \bibliographystyle{plainnat}
\section*{\Large\centering Supplementary Material: \\Occupancy Planes for Single-view RGB-D Human Reconstruction}\vspace{0.2cm} \section{More Qualitative Results}\label{sec: supp qualitative} \subsection{Comparison to IF-Net} \figref{fig: supp s3d if-net} and~\figref{fig: supp apple if-net} provide qualitative results which compare the reconstruction of IF-Net~\cite{Chibane2020ImplicitFI} to the proposed approach. OPlanes successfully deal with humans that are only partially visible, while IF-Net seems to struggle. \begin{figure}[!t] \centering \captionsetup[subfigure]{width=\textwidth} \centering \includegraphics[width=0.9\textwidth]{./supp/figures/s3d_compare_to_if_net} \captionsetup{width=\textwidth} \vspace{-0.2cm} \caption{Qualitative results on S3D~\cite{Hu2021SAILVOS3A}. For each reconstruction, we show two views. IF-Net~\cite{Chibane2020ImplicitFI} struggles to obtain consistent geometry if parts of the human are invisible, while an OPlanes model faithfully reconstructs the visible portion.} \label{fig: supp s3d if-net} \vspace{-0.4cm} \end{figure} \begin{figure}[!t] \centering \captionsetup[subfigure]{width=0.8\textwidth} \centering \includegraphics[width=0.8\textwidth]{./supp/figures/if_net_apple} \captionsetup{width=\textwidth} \caption{Qualitative results on real world data. For each reconstruction, we show two views. Similar to~\figref{fig: supp s3d if-net}, IF-Net~\cite{Chibane2020ImplicitFI} struggles to obtain consistent geometry if parts of the human are invisible, while an OPlanes model faithfully reconstructs the visible portion.} \label{fig: supp apple if-net} \vspace{-0.4cm} \end{figure} \section{More Quantitative Results}\label{sec: supp quantitative} \subsection{Performance across Various Visibility Levels} Since OPlanes can deal with humans of various visibilities, we are interested in understanding how the proposed approach performs across different partial visibility levels. We present results with respect to different visibility levels in~\tabref{tab: supp vis level}. To compute the visibility, we use three steps: 1) we uniformly sample 100k points within the complete mesh of the human; 2) we project those 100k 3D points onto the 2D image and count the number of points which are in view; 3) the level of partial visibility is computed as the ratio of in-view points, i.e., the number of in-view points divided by 100k. We also explicitly consider the fully visible humans in the $4^\text{th}$ row of~\tabref{tab: supp vis level}. Results for different visibility ranges are provided in the $1^\text{st}$ to $3^\text{rd}$ row of \tabref{tab: supp vis level}. As expected, the more visible the human, the better the model performs. Specifically, comparing full visibility to low visibility ($4^\text{th}$~\vs~$1^\text{st}$ row), we obtain a higher IoU (0.707~\vs~0.668), a smaller Chamfer distance (0.109~\vs~0.289), and a higher normal consistency (0.759~\vs~0.703). However, it is notable that the drop in performance is not very severe. To verify this, we also report IF-Net and PIFuHD results for each visibility range in~\tabref{tab: supp vis level}.
Specifically, comparing the $4^\text{th}$~\vs~$1^\text{st}$ row, we observe: 1) for IoU ($\uparrow$ is better), IF-Net's performance drops from 0.644 to 0.365 and PIFuHD results drop from 0.533 to 0.131; 2) for Chamfer distance ($\downarrow$ is better), IF-Net results deteriorate from 0.134 to 0.444 and PIFuHD results worsen from 0.214 to 0.702; 3) for normal consistency ($\uparrow$ is better), IF-Net results drop from 0.828 to 0.715 while PIFuHD results drop from 0.734 to 0.543. Summarizing the three observations, we find the proposed OPlanes model to be more robust to partial visibility. \input{./supp/tables/vis_level} \section{Implementation Details}\label{sec: supp implement} To extract the image features $\mathcal{F}_\text{RGB}^{h_O \times w_O} = f_\text{RGB} (f_\text{FPN} (I_\text{RGB}))$ (see \equref{eq: rgb feat}), instead of feeding the raw RGB image $I_\text{RGB} \in \mathbb{R}^{H\times W\times 3}$ into the FPN backbone, we first concatenate the image $I_\text{RGB}$ along the channel dimension with two simply-processed one-channel features. We therefore use $\hat{I}_\text{RGB} \in \mathbb{R}^{H\times W\times 5}$, which is fed into the FPN, i.e.,~$\mathcal{F}_\text{RGB}^{h_O \times w_O} = f_\text{RGB} (f_\text{FPN} (\hat{I}_\text{RGB}))$. The two one-channel features are: 1) for each pixel, we compute the distance to the visibility mask's boundary; 2) we detect edges with the help of a Farid filter~\cite{Farid2004DifferentiationOD}.
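For concreteness, the following is a minimal sketch of how such a five-channel input $\hat{I}_\text{RGB}$ could be assembled. Only the two feature types themselves (distance to the mask boundary and Farid edges) are taken from the text above; the use of \texttt{scipy}/\texttt{scikit-image}, the unsigned distance transform, and the normalization are assumptions made for illustration.
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.filters import farid

def build_five_channel_input(rgb, mask):
    """Concatenate RGB with two one-channel features along the channels.

    rgb:  (H, W, 3) float image in [0, 1]
    mask: (H, W) binary visibility mask of the human
    Returns an (H, W, 5) array. A sketch only; normalization details
    are assumptions, not taken from the paper.
    """
    # Feature 1: per-pixel distance to the visibility mask's boundary.
    # Inside pixels measure the distance to the background, outside
    # pixels to the foreground; the sum is the boundary distance.
    dist = distance_transform_edt(mask) + distance_transform_edt(1 - mask)
    dist = dist / max(dist.max(), 1.0)  # assumed normalization

    # Feature 2: edge response of a Farid filter on the grayscale image.
    edges = farid(rgb.mean(axis=-1))

    return np.concatenate([rgb, dist[..., None], edges[..., None]], axis=-1)
\end{verbatim}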
\section{Method} \subsection{Overview}\label{sec: approach overview} Given an RGB image, a depth map, a mask highlighting the human of interest in the image as well as the intrinsic camera parameters, our goal is to reconstruct a spatially-aligned human mesh $\mathcal{M}$. To generate the mesh $\mathcal{M}$, we introduce the \textit{Occupancy Planes} (OPlanes) representation, a plane-based representation of the geometry at various depth levels. This representation is inspired by classical semantic segmentation, but extends the segmentation masks from a single image plane to multiple depth levels. OPlanes can be used to generate an occupancy grid, from which the mesh $\mathcal{M}$ is obtained via the marching cubes algorithm~\cite{supplorensen1987marching}. We illustrate the framework in \figref{fig: example img}. In the following we first introduce OPlanes in~\secref{sec: oplane}. Subsequently, \secref{sec: oplane pred} details the developed deep net to predict OPlanes, while the training of the deep net is discussed in~\secref{sec: oplane train}. Finally, \secref{sec: oplane to mesh} provides details about the generation of the mesh from the predicted OPlanes. \begin{figure}[!t] \centering \captionsetup[subfigure]{width=0.8\textwidth} \centering \hspace*{-0.3cm} \includegraphics[width=0.85\textwidth]{./figures/approach.pdf} \captionsetup{width=\textwidth} \vspace{-0.4cm} \caption{Occupancy planes (OPlanes) overview. \textbf{(a)} Occupancy plane $O_{z_i}^{H\times W}$ stores the occupancy information (black plane on the right) at a specific slice (light green plane on the left) in the view frustum. White pixels indicate ``inside'' the mesh (\secref{sec: oplane}). \textbf{(b)} Given RGB-D data and a mask, our approach takes a specific depth $z_i$ as input and predicts the corresponding occupancy plane $\widehat{O}_{z_i}^{H_O\times W_O}$ (\secref{sec: oplane pred}). The convolutional neural network $f_\text{spatial}$ explicitly considers context information for each pixel on the occupancy plane, which we find to be beneficial. During training, we not only supervise $\widehat{O}_{z_i}^{H_O\times W_O}$ through loss $\mathcal{L}^{H_O \times W_O}$ but we also supervise the intermediate feature $\widehat{O}_{z_i}^{h_O\times w_O}$ with loss $\mathcal{L}^{h_O \times w_O}$ (\secref{sec: oplane train}). } \label{fig: example img} \end{figure} \subsection{Occupancy Planes (OPlanes) Representation}\label{sec: oplane} Given an image capturing a human of interest, \emph{occupancy planes} (OPlanes) store the occupancy information of that human in the camera's view frustum. For this, the OPlanes representation consists of several 2D images, each of which stores the mesh occupancy at a specific fronto-parallel slice through the camera's view frustum.
Concretely, let $[z_\text{min}, z_\text{max}]$ be the range of depth we are interested in, i.e., $z_\text{min}, z_\text{max}$ are the near-plane and far-plane of the view frustum of interest. Further, let the set $\mathcal{Z}_N \triangleq \{ z_1, \dots, z_N \,\vert\, z_\text{min} \leq z_i \leq z_\text{max}, \forall i \}$ contain the sampled depths of interest. The OPlanes representation $\mathcal{O}_{\mathcal{Z}_N}^{H\times W}$ for the depths of interest stored in $\mathcal{Z}_N$ refers to the set of planes \begin{align} \mathcal{O}_{\mathcal{Z}_N}^{H\times W} \triangleq \left\{ O_{z_1}^{H\times W}, O_{z_2}^{H\times W}, \dots, O_{z_N}^{H\times W} \,\vert\, z_i \in \mathcal{Z}_N \right\}. \end{align} Each OPlane $O_{z}^{H\times W} \in \{0, 1\}^{H \times W}$ is a binary image of height $H$ and width $W$. To compute the ground-truth binary values of the occupancy plane $O_{z}^{H\times W}$ at depth $z$, let $[x, y, 1]$ be a homogeneous pixel coordinate on the given image $I$. Given a depth $z$ of interest, the homogeneous pixel coordinate can be unprojected into the 3D space coordinate $[x_z, y_z, z] = z \cdot \pi^{-1}([x, y, 1])$, where $\pi(\cdot)$ denotes the perspective projection. Note, this unprojection deviates from prior human mesh reconstruction works~\cite{Zheng2019DeepHuman3H, Saito2019PIFuPI, Saito2020PIFuHDMP, He2020GeoPIFuGA} that assume an orthographic projection with a weak-perspective camera. Instead, we utilize a perspective camera for more general use cases. From 3D meshes available in the training data we obtain a 3D point's ground-truth occupancy value as follows: \begin{align} o([x_z, y_z, z]) = \begin{cases} 1, \;\text{if $[x_z, y_z, z]$ is inside the object},\\ 0, \;\text{otherwise}. \end{cases} \end{align} The value of the occupancy plane $O_{z}^{H\times W} \in \{0, 1\}^{H \times W}$ at depth $z$ and at pixel location $x, y$ can be obtained from the ground-truth occupancy value via \begin{align} O_{z}^{H\times W}[x, y] = o([x_z, y_z, z]), \end{align} where $O_{z}^{H\times W}[x, y]$ denotes the occupancy plane value of pixel $[x, y]$. \subsection{Occupancy Plane Prediction}\label{sec: oplane pred} At test time, ground-truth meshes are not available. Instead we are interested in predicting the OPlanes from 1) a given RGB image $I_\text{RGB} \in \mathbb{R}^{H \times W \times 3}$ illustrating a human, 2) a depth map $\texttt{Depth} \in \mathbb{R}^{H \times W}$, 3) a mask $\texttt{Mask} \in \{0, 1\}^{H \times W}$, and 4) the calibrated camera's perspective projection $\pi$. Specifically, let $H_O\times W_O$ be the operating resolution, where $H_O \leq H$ and $W_O \leq W$. We use a deep net to predict $N$ occupancy planes $\widehat{\mathcal{O}}^{H_O\times W_O}= \{ \widehat{O}_{z_i}^{H_O\times W_O} \}_{i=1}^N$ at various depth levels $z_i$ $\forall i\in\{1, \dots, N\}$, via \begin{align} \widehat{O}_{z_i}^{H_O\times W_O} = f_\text{spatial}([\mathcal{F}_\text{RGB}^{H_O\times W_O}; \mathcal{F}_{z_i}^{H_O\times W_O}] ). \label{eq: O_zi final} \end{align} Here, $[\cdot; \cdot]$ denotes the concatenation operation along the channel dimension. In order to resolve the depth ambiguity, we design $f_\text{spatial} (\cdot)$ to be a simple fully convolutional network that fuses spatial neighborhood information within each occupancy plane prediction $\widehat{O}_{z_i}^{H_O\times W_O}$. Note that this design differs from prior work, which predicts the occupancy for each point independently.
In contrast, we find that spatial neighborhood information is useful to improve occupancy prediction accuracy. For an accurate prediction, the fully convolutional net $f_\text{spatial} (\cdot)$ operates on image features $\mathcal{F}_\text{RGB}^{H_O\times W_O} \in \mathbb{R}^{H_O \times W_O \times C}$ and depth features $\mathcal{F}_{z_i}^{H_O\times W_O} \in \mathbb{R}^{H_O \times W_O \times C}$. In the following we discuss the deep nets to compute the image features $\mathcal{F}_\text{RGB}^{H_O\times W_O}$ and the depth features $\mathcal{F}_{z_i}^{H_O\times W_O}$. \noindent\textbf{Image feature $\mathcal{F}_\text{RGB}$.} The image feature $\mathcal{F}_\text{RGB}^{H_O\times W_O}$ is obtained by bilinearly upsampling a low-resolution feature map to the operating resolution $H_O\times W_O$. Concretely, \begin{align} \mathcal{F}_\text{RGB}^{H_O\times W_O} = \texttt{UpSample}_{h_O\times w_O \rightarrow H_O\times W_O} (\mathcal{F}_\text{RGB}^{h_O \times w_O}), \end{align} where $\mathcal{F}_\text{RGB}^{h_O \times w_O} \in \mathbb{R}^{h_O \times w_O \times C}$ is the RGB feature at the coarse resolution of $h_O \times w_O$. $\texttt{UpSample}_{h_O\times w_O \rightarrow H_O\times W_O}$ refers to standard bilinear upsampling. The coarse resolution RGB feature $\mathcal{F}_\text{RGB}^{h_O \times w_O}$ is obtained via \begin{align} \mathcal{F}_\text{RGB}^{h_O \times w_O} = f_\text{RGB} (f_\text{FPN} (I_\text{RGB})), \label{eq: rgb feat} \end{align} where $f_\text{FPN}$ is the Feature Pyramid Network (FPN) backbone~\cite{Lin2017FeaturePN} and $f_\text{RGB}$ is another fully-convolutional network for further processing. \noindent\textbf{Depth feature $\mathcal{F}_{z_i}$.} The depth feature $\mathcal{F}_{z_i}^{H_O\times W_O}$ for an occupancy plane at depth $z_i$ encodes for every pixel $[x,y]$ the difference between the query depth $z_i$ and the depth at which the object first intersects with the camera ray. Concretely, we obtain the depth feature via \begin{align} \mathcal{F}_{z_i}^{H_O\times W_O} = f_\text{depth} (I_{z_i}^{H_O\times W_O}), \label{eq: depth feat} \end{align} where $f_\text{depth}$ is a fully convolutional network to process the depth difference image $I_{z_i}^{H_O\times W_O}$. For each pixel $[x, y]$, \begin{align} I_{z_i}^{H_O\times W_O} [x, y] = \texttt{PE} (z_i - \texttt{Depth}[x, y]),\label{eq: z diff img} \end{align} where $\texttt{PE}(\cdot)$ is the positional encoding operation~\cite{Vaswani2017AttentionIA}. Intuitively, the depth difference image $I_{z_i}$ represents how far every point on the plane at depth $z_i$ is behind or in front of the front surface of the observed human. \subsection{Training}\label{sec: oplane train} The developed deep net to predict OPlanes is fully differentiable. We use $\theta$ to subsume all trainable parameters within the spatial network $f_\text{spatial}$ (\equref{eq: O_zi final}), the FPN network $f_\text{FPN}$, the RGB network $f_\text{RGB}$ (\equref{eq: rgb feat}), and the depth network $f_\text{depth}$ (\equref{eq: depth feat}). Further, we use $\widehat{\mathcal{O}}_\theta$ to refer to the predicted occupancy planes when using the parameter vector $\theta$.
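Before specifying the losses, we make \equref{eq: z diff img} concrete. The following is a minimal NumPy sketch of the depth-difference input; the sinusoidal frequency schedule and the number of frequencies are assumptions for illustration, since the paper only specifies that a positional encoding of $z_i - \texttt{Depth}[x, y]$ is used.
\begin{verbatim}
import numpy as np

def positional_encoding(x, num_freqs=6):
    """Sinusoidal encoding of a scalar field x of shape (H, W).

    Returns (H, W, 2 * num_freqs). The power-of-two frequency
    schedule is an assumption, not taken from the paper.
    """
    freqs = 2.0 ** np.arange(num_freqs)   # 1, 2, 4, ...
    angles = x[..., None] * freqs         # (H, W, num_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def depth_difference_image(depth, z_i, num_freqs=6):
    """I_{z_i}[x, y] = PE(z_i - Depth[x, y]) for one query depth z_i."""
    return positional_encoding(z_i - depth, num_freqs)
\end{verbatim}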
We train the deep net to predict OPlanes end-to-end with two losses by solving \begin{align} \min_\theta \mathcal{L}_\theta^{H_O \times W_O} + \mathcal{L}_\theta^{h_O \times w_O}.\label{eq: loss} \end{align} Here, $\mathcal{L}_\theta^{H_O \times W_O}$ is the loss computed at the final prediction resolution of $H_O \times W_O$, while $\mathcal{L}_\theta^{h_O \times w_O}$ is used to supervise intermediate features at the resolution of $h_O \times w_O$. We discuss both losses next. \noindent\textbf{Final prediction supervision via $\mathcal{L}_\theta^{H_O \times W_O}$.} During training, we randomly sample $N$ depth values from the view frustum range $[z_\text{min}, z_\text{max}]$ to obtain the set of depth values of interest $\mathcal{Z}_N$ (\secref{sec: oplane}). For this, we use $z_\text{min} = \min \{ \texttt{Depth}[x, y] \,\vert\, \texttt{Mask}[x, y] = 1 \}$ by only considering depth information within the target mask. Essentially, we find the depth value that is closest to the camera. We set $z_\text{max} = z_\text{min} + z_\text{range}$, where $z_\text{range}$ marks the depth range we are interested in. During training, $z_\text{range}$ is computed from the ground-truth range which covers the target mesh. During inference, we set $z_\text{range} = 2$ meters to cover the shapes and gestures of most humans. The high resolution supervision loss $\mathcal{L}_\theta^{H_O \times W_O}$ consists of two terms: \begin{align} \mathcal{L}_\theta^{H_O \times W_O} \triangleq &\; \lambda_\text{BCE} \cdot \mathcal{L}_\text{BCE}(\mathcal{O}^{H_O \times W_O}, \widehat{\mathcal{O}}_\theta^{H_O \times W_O}, \texttt{Mask}^{H_O \times W_O}, z_\text{min}, z_\text{max}) \nonumber \\ &\quad + \lambda_\text{DICE} \cdot \mathcal{L}_\text{DICE}(\mathcal{O}^{H_O \times W_O}, \widehat{\mathcal{O}}_\theta^{H_O \times W_O}, \texttt{Mask}^{H_O \times W_O}, z_\text{min}, z_\text{max}).\label{eq: fine loss} \end{align} Here, $\mathcal{L}_\text{BCE}$ is the binary cross entropy (BCE) loss while $\mathcal{L}_\text{DICE}$ is the DICE loss~\cite{Milletari2016VNetFC}. Both losses operate on the ground-truth OPlanes ${\cal O}^{H_O \times W_O}$ downsampled from the original resolution $H\times W$, the OPlanes $\widehat{\cal O}_\theta^{H_O \times W_O}$ predicted with the current deep net parameters $\theta$, and the human mask $\texttt{Mask}^{H_O \times W_O}$ downsampled from the raw mask. Note, we only consider points behind the human's front surface when computing the loss, \emph{i.e.}, on a plane $\widehat{O}_{z_i}$, we only consider $\{[x, y] \,\vert\, z_i \geq \texttt{Depth}[x, y] \}$. For readability, we drop the superscript ${H_O \times W_O}$ in the following. The BCE loss is computed via \begin{align} \mathcal{L}_\text{BCE} = -\frac{1}{\vert \mathcal{Z}_N\vert \!\cdot\! \texttt{Sum}(\texttt{Mask}) }\hspace{-0.4cm} \sum\limits_{\substack{z_i \in \mathcal{Z}_N \\ x,y:\texttt{Mask}[x, y] = 1 }}\hspace{-0.4cm} \biggl(\! O_{z_i}[x, y] \cdot \log \widehat{O}_{z_i}[x, y] + (1 - O_{z_i}[x, y]) \cdot \log (1 - \widehat{O}_{z_i}[x, y]) \!\biggr), \end{align} where $\texttt{Sum}(\texttt{Mask})$ is the number of pixels within the target's segmentation mask and $x,y:\texttt{Mask}[x, y] = 1$ emphasizes that we only compute the BCE loss on pixels within the mask. Moreover, thanks to the occupancy plane representation inspired by classical semantic segmentation tasks, we can utilize the DICE loss from the semantic segmentation community to supervise the occupancy training.
Specifically, we use \begin{align} \mathcal{L}_\text{DICE} = \frac{1}{\vert \mathcal{Z}_N\vert} \sum\limits_{\substack{z_i \in \mathcal{Z}_N}} \frac{2 \cdot \texttt{Sum}( \texttt{Mask} \cdot O_{z_i} \cdot \widehat{O}_{z_i})}{\texttt{Sum}(\texttt{Mask} \cdot O_{z_i}) + \texttt{Sum}(\texttt{Mask} \cdot \widehat{O}_{z_i})}. \end{align} This is useful because there can be a strong imbalance between the number of positive and negative labels in an OPlane $O_{z_i}$ due to human gestures. The DICE loss has been shown to compellingly deal with such situations~\cite{Milletari2016VNetFC}. \noindent\textbf{Intermediate feature supervision via $\mathcal{L}_\theta^{h_O \times w_O}$.} Besides supervision of the final occupancy image $\widehat{O}_{z_i}$ discussed in the preceding section, we also supervise the intermediate features $\mathcal{F}_\text{RGB}^{h_O\times w_O}$ (\equref{eq: rgb feat}) via the loss $\mathcal{L}_\theta^{h_O \times w_O}$. Analogously to the high-resolution loss, we use two terms,~\emph{i.e.}, \begin{align} \mathcal{L}_\theta^{h_O \times w_O} \triangleq&\; \lambda_\text{BCE} \cdot \mathcal{L}_\text{BCE}(\mathcal{O}^{h_O \times w_O}, \widehat{\mathcal{O}}_\theta^{h_O \times w_O}, \texttt{Mask}^{h_O \times w_O}, z_\text{min}, z_\text{max}) \nonumber \\ &\quad + \lambda_\text{DICE} \cdot \mathcal{L}_\text{DICE}(\mathcal{O}^{h_O \times w_O}, \widehat{\mathcal{O}}_\theta^{h_O \times w_O}, \texttt{Mask}^{h_O \times w_O}, z_\text{min}, z_\text{max}).\label{eq: coarse loss} \end{align} Different from the high-resolution OPlanes representation, we predict the OPlanes representation at the coarse resolution $h_O \times w_O$ via \begin{align} \widehat{O}_{z_i}^{h_O \times w_O}[x, y] = \langle \mathcal{F}_\text{RGB}^{h_O \times w_O}[x, y, \cdot],\, \mathcal{F}_{z_i}^{h_O \times w_O}[x, y, \cdot] \rangle, \end{align} where $\langle \cdot, \cdot \rangle$ is the inner-product operation and $\mathcal{F}_\text{RGB}^{h_O \times w_O}[x, y, \cdot]$ represents the feature vector at the pixel location $[x, y]$. To obtain $\mathcal{F}_{z_i}^{h_O \times w_O}$, we feed the downsampled difference image $I_{z_i}^{h_O \times w_O}$ into $f_\text{depth}$. Intuitively, we use the inner product to encourage the image feature $\mathcal{F}_\text{RGB}^{h_O \times w_O}$ to be strongly correlated with information from the depth feature $\mathcal{F}_{z_i}^{h_O \times w_O}$. \subsection{Inference}\label{sec: oplane to mesh} During inference, to reconstruct a mesh from predicted OPlanes $\widehat{\cal O}$, we first establish an occupancy grid before running a marching cubes~\cite{supplorensen1987marching} algorithm to extract the isosurface. Specifically, we uniformly sample $N$ depths in the view frustum between the depth range $[z_\text{min}, z_\text{min} + 2.0]$,~\emph{e.g.}, $N = 256$. Here, 2.0 is a heuristic depth range which covers most human poses (\secref{sec: oplane train}). The network predicts an occupancy for each pixel on those $N$ planes. Importantly, since OPlanes represent occupancy corresponding to slices through the view frustum, a marching cubes algorithm is not directly applicable. Instead, we first establish a voxel grid to cover the view frustum between $[z_\text{min}, z_\text{min} + 2.0]$. Each voxel's occupancy is sampled from the predicted OPlanes before a marching cubes method is used. We emphasize that the number of planes does not need to be the same during training and inference, as we will show later.
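A minimal sketch of this inference step follows; for simplicity the stacked planes are treated directly as a regular grid (whereas the paper resamples into a true view-frustum voxel grid first), and \texttt{scikit-image}'s marching-cubes implementation is used purely for illustration.
\begin{verbatim}
import numpy as np
from skimage import measure  # provides a marching cubes implementation

def oplanes_to_mesh(oplanes, zs, level=0.5):
    """oplanes: (N, H, W) plane-wise occupancy probabilities at ascending depths zs."""
    volume = np.asarray(oplanes, dtype=np.float32)        # axis 0 indexes depth
    verts, faces, _, _ = measure.marching_cubes(volume, level=level)
    verts[:, 0] = np.interp(verts[:, 0], np.arange(len(zs)), zs)  # index -> metric depth
    return verts, faces  # x/y remain in pixel units in this simplified version
\end{verbatim}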
This decoupling of the number of planes between training and inference ensures that the OPlanes representation is memory efficient at training time while enabling accurate reconstruction at inference time. \section{Introduction} Reconstructing the 3D shape of humans~\cite{guan2009estimating,tong2012scanning,bogo2016keep,lassner2017unite,guler2019holopose,xiang2019monocular,yu2017bodyfusion,varol2018bodynet,Zheng2019DeepHuman3H,Saito2019PIFuPI} has attracted extensive attention. It enables numerous applications such as AR/VR content creation, virtual try-on in the fashion industry, and image/video editing. In this paper, we focus on the task of human reconstruction from a single RGB-D image and its camera information. A single-view setup is simple and alleviates the tedious capturing of sequences from multiple locations. However, while data capture is simplified, the task is more challenging because of the ambiguity in inferring the back of the 3D human shape given only a front view. A promising direction for 3D reconstruction of a human from a single image is the use of implicit functions. Prior works which use implicit functions~\cite{Saito2019PIFuPI,Saito2020PIFuHDMP} have shown compelling results on single view human reconstruction. Common to all these prior works is the use of per-point classification. Specifically, prior works formulate single-view human reconstruction by first projecting a point onto the image. A pixel-aligned feature is subsequently extracted and used to classify whether the 3D point is inside or outside the observed human. While such a formulation is compelling, it is challenged by the fact that the occupancy for every point is essentially predicted independently~\cite{Saito2019PIFuPI,Saito2020PIFuHDMP,Mescheder2019OccupancyNL}. While image features implicitly take context into account, the point-wise prediction often remains noisy. This is particularly true if the data is more challenging than that used in prior works. For instance, we observe prior works to struggle if humans are only partially visible,~\emph{i.e.}, if we are interested in only reconstructing the observed part of a partially occluded person. To address this concern, we propose to formulate single-view RGB-D human reconstruction as an occupancy-plane-prediction task. For this, we introduce the novel \emph{occupancy planes} (OPlanes) representation. The OPlanes representation consists of multiple image-like planes which slice in a fronto-parallel manner through the camera's view frustum and indicate at every pixel location the occupancy of the corresponding 3D point. Importantly, the OPlanes representation permits adaptively adjusting the number and location of the occupancy planes during inference and training. Therefore, its resolution is more flexible than that of a classical voxel grid representation~\cite{varol2018bodynet,maturana2015voxnet}. Moreover, the plane structure naturally enables the model to benefit more directly from correlations between predictions of neighboring locations within a plane than unstructured representations like point clouds~\cite{Fan2017APS,qi2017pointnet} and implicit representations with per-point queries~\cite{Saito2019PIFuPI,Saito2020PIFuHDMP,Mescheder2019OccupancyNL}.
\begin{figure}[!t] \centering \captionsetup[subfigure]{width=0.8\textwidth} \centering \includegraphics[width=0.8\textwidth]{./figures/teaser.pdf} \captionsetup{width=\textwidth} \vspace{-0.3cm} \caption{Reconstruction results compared to PIFuHD~\cite{Saito2020PIFuHDMP} on S3D~\cite{Hu2021SAILVOS3A} data ($1^\text{st}$ row) and real-world data ($2^\text{nd}$ row). \textbf{(a)}: input image; \textbf{(b)} and \textbf{(c)}: results from PIFuHD~\cite{Saito2020PIFuHDMP}. It struggles with partial visibility and non-standing poses; \textbf{(d)}: our reconstruction overlaying the input image with perspective camera projection. \textbf{(e)}: another view of our reconstruction. } \label{fig: teaser} \vspace{-0.3cm} \end{figure} To summarize, our contributions are two-fold: 1) we propose the OPlanes representation for single view RGB-D human reconstruction; 2) we verify that exploiting correlations within planes is beneficial for 3D human shape reconstruction as illustrated in \figref{fig: teaser}. We evaluate the proposed approach on the challenging S3D~\cite{Hu2021SAILVOS3A} data and observe improvements over prior reconstruction work~\cite{Saito2020PIFuHDMP} by a clear margin (+0.267 IoU, -0.179 Chamfer-$\mathcal{L}_1$ distance, and +0.071 Normal consistency), particularly for occluded or partially visible humans. We also provide a comprehensive analysis to validate each of the design choices and results on real-world data. \section{Conclusion}\label{sec:conc} We propose and study the occupancy planes (OPlanes) representation for reconstruction of 3D shapes of humans from a single RGB-D image. The resolution of OPlanes is more flexible than that of a classical voxel grid due to the implicit prediction of an entire plane. Moreover, prediction of an entire plane enables the model to benefit from correlations between predictions, which is harder to achieve for models which use implicit functions for individual 3D points. Due to these benefits we find OPlanes to excel in challenging situations, particularly for occluded or partially visible humans. \noindent\textbf{Limitations:} We used simple convolutional nets to show the benefits of the OPlanes representation. Hence, the reconstructed meshes are not highly detailed. We envision techniques from semantic segmentation which focus on boundary accuracy to provide further improvements, and leave their study to future work. \noindent\textbf{Acknowledgements:} This work is supported in part by the National Science Foundation under Grants 1718221, 2008387, 2045586, 2106825, MRI \#1725729, and NIFA award 2020-67021-32799. \section{Experiments} \subsection{Implementation Details}\label{sec: implement} Here we introduce key implementation details. Please see the appendix for more information. During training, the input has a resolution of $H = 512$ and $W = 512$. We operate at $H_O = 256$, $W_O = 256$, while the intermediate resolution is $h_O = 128$ and $w_O = 128$. During training, for each mesh, we randomly sample $N = 10$ planes in the range of $[z_\text{min}, z_\text{max}]$ at each training iteration. \emph{I.e.}, the set $\mathcal{Z}_N$ contains 10 depth values. As mentioned in~\secref{sec: oplane train}, during training, we set $z_\text{max}$ to be the ground-truth mesh's furthest depth. The four deep nets, all of which we detail next, are mostly convolutional. We use the triple \textit{(in, out, k)} to denote the input channels, the output channels, and the kernel size of a convolutional layer.
\noindent\textbf{Spatial network $f_\text{spatial}$} (\equref{eq: O_zi final}): It is a three-layer convolutional neural net (CNN) with a configuration of (256, 128, 3), (128, 128, 3), (128, 1, 1). We use group norm~\cite{Wu2018GroupN} and ReLU activation. \noindent\textbf{Feature pyramid network $f_\text{FPN}$} (\equref{eq: rgb feat}): We use ResNet50~\cite{He2016DeepRL} as the backbone of our FPN network. We use the output of each stage's last residual block as introduced in~\cite{Lin2017FeaturePN}. The final output of this FPN has 256 channels and a resolution of $\frac{H}{4} \times \frac{W}{4}$. \noindent\textbf{RGB network $f_\text{RGB}$} (\equref{eq: rgb feat}): It is a three-layer CNN with a configuration of (256, 128, 3), (128, 128, 3), (128, 128, 1). We use group norm~\cite{Wu2018GroupN} and ReLU activation. \noindent\textbf{Positional encoding \texttt{PE}} (\equref{eq: z diff img}): We follow~\cite{Vaswani2017AttentionIA} to define \begin{align} \texttt{PE}(\text{pos}) = \left( \texttt{PE}_0(\text{pos}), \texttt{PE}_1(\text{pos}), \dots, \texttt{PE}_{62}(\text{pos}), \texttt{PE}_{63}(\text{pos}) \right), \end{align} where $\texttt{PE}_{2t}(\text{pos}) = \sin(\frac{50 \cdot \text{pos}}{200^{2t / 64}})$ and $\texttt{PE}_{2t + 1}(\text{pos}) = \cos(\frac{50 \cdot \text{pos}}{200^{2t / 64}})$. \noindent\textbf{Depth difference network $f_\text{depth}$} (\equref{eq: depth feat}): It is a two-layer CNN with a configuration of (64, 128, 1), (128, 128, 1). We use group norm~\cite{Wu2018GroupN} and ReLU activation. To train the networks, we use the Adam~\cite{Kingma2015AdamAM} optimizer with a learning rate of 0.001. We set $\lambda_\text{BCE} = 1.0$ and $\lambda_\text{DICE} = 1.0$ (\equref{eq: fine loss} and~\equref{eq: coarse loss}). We set the batch size to 4 and train for 15 epochs. It takes around 22 hours to complete the training using an AMD EPYC 7543 32-Core Processor and an Nvidia RTX A6000 GPU. \subsection{Experimental Setup} \textbf{Dataset.} We utilize S3D~\cite{Hu2021SAILVOS3A} to train our OPlanes-based human reconstruction model. S3D is a photo-realistic synthetic dataset built on the game GTA-V, providing ground-truth meshes together with masks and depths. To construct our train and test set, we sample 27588 and 4300 meshes from its train and validation split respectively. This dataset differs from counterparts in prior works~\cite{Saito2019PIFuPI, Saito2020PIFuHDMP, He2020GeoPIFuGA, Alldieck2022PhotorealisticM3}: there are no constraints on the appearance of humans in the images. In this dataset, humans appear with arbitrary gestures, sizes, positions, and levels of occlusion. In contrast, humans in datasets of prior work usually appear in an upright position and are mostly centered in an image while exhibiting little to no occlusion. We believe this setup strengthens generalization. See~\figref{fig: s3d qualitative} for some examples. \noindent\textbf{Baselines.} We compare to PIFuHD~\cite{Saito2020PIFuHDMP} and IF-Net~\cite{Chibane2020ImplicitFI}. \textbf{1) PIFuHD:} since there is no training code available, we test with the officially-released checkpoints.
Following the author's suggestion in the public code repository to improve the reconstruction quality\footnote{\url{https://github.com/facebookresearch/pifuhd}}, we 1.1) remove the background with the ground-truth mask; 1.2) apply human pose detection and crop the image accordingly to place the human of interest in the center of the image.\footnote{\url{https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch}} \textbf{2) IF-Net: } we evaluate with the officially-released checkpoint. IF-Net uses a 3D voxel grid representation. We set the resolution of the grid to 256 to align with the pretrained checkpoint. \noindent\textbf{Evaluation metrics.} We focus on evaluating the quality of the reconstructed geometry. Following prior works~\cite{Mescheder2019OccupancyNL,Saito2019PIFuPI, Saito2020PIFuHDMP, He2020GeoPIFuGA, Alldieck2022PhotorealisticM3, Huang2020ARCHAR, He2021ARCHAC}, we report the Volumetric Intersection over Union (IoU), the bi-directional Chamfer-$\mathcal{L}_1$ distance, and the Normal Consistency. Please refer to the supplementary material of~\cite{Mescheder2019OccupancyNL} for more details on these metrics. To compute the IoU, we need a finite space to sample points. Since humans in our data appear anywhere in 3D space, the implicit assumption of prior works~\cite{Saito2019PIFuPI, Saito2020PIFuHDMP, He2020GeoPIFuGA} that there exists a fixed bounding box for all objects does not hold. Instead, we use the view frustum between depth $z_\text{min}$ and $z_\text{max}$ as the bounding box. Note, for evaluation purposes, $z_\text{max}$ utilizes the heuristic $z_\text{range}$ of 2.0 meters (\secref{sec: oplane train}). We sample 100k points for an unbiased estimation. When computing the Chamfer distance, we need to avoid the final aggregated results being skewed by a scale discrepancy between different objects. We follow~\cite{Fan2017APS, Mescheder2019OccupancyNL} and let $\frac{1}{10}$ of each object's ground-truth bounding box's longest edge correspond to a unit of one when computing Chamfer-$\mathcal{L}_1$. To resolve the discrepancy between the orthogonal projection and the perspective projection, we utilize the iterative-closest-point (ICP)~\cite{Besl1992AMF} algorithm to register the reconstruction of baselines to the ground-truth, following~\cite{Alldieck2022PhotorealisticM3}. Note, ICP is not applied to our OPlanes method since we directly reconstruct the human in the camera coordinate system. \subsection{Quantitative Results} \input{./tables/new_ablations} In~\tabref{tab: quant} we provide quantitative results, comparing to baselines in the $1^\text{st}$/$2^\text{nd}$~\vs~$6^\text{th}$ row. For a fair comparison when computing the results, we reconstruct the final geometry in a $256^3$ grid. Although 256 OPlanes are inferred, we train with only 10 planes per mesh in each iteration. \noindent\textbf{PIFuHD~\cite{Saito2020PIFuHDMP}:} the OPlanes representation outperforms the PIFuHD results by a clear margin. Specifically, our results exhibit a larger volume overlap with the ground-truth (0.691~\vs~0.428 on IoU, $\uparrow$ is better), more completeness and accuracy (0.155~\vs~0.332 on Chamfer distance, $\downarrow$ is better), and more fine-grained details (0.749~\vs~0.677 on normal consistency, $\uparrow$ is better). \noindent\textbf{IF-Net~\cite{Chibane2020ImplicitFI}:} we also compare to the depth-based single-view reconstruction approach IF-Net~\cite{Chibane2020ImplicitFI}. The results are presented in row 2 \vs 6 in~\tabref{tab: quant}.
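As an aside, the volumetric IoU protocol described above amounts to a simple Monte-Carlo estimate. A minimal sketch follows, with \texttt{occ\_a} and \texttt{occ\_b} as black-box inside/outside predicates and sampling uniform in pixel coordinates and depth (a simplification of uniform-in-frustum-volume sampling); all names are ours.
\begin{verbatim}
import numpy as np

def frustum_iou(occ_a, occ_b, K, H, W, z_min, z_max, n=100_000, seed=0):
    """Monte-Carlo volumetric IoU over the view frustum between z_min and z_max.

    occ_a, occ_b map (n, 3) camera-space points to boolean occupancy.
    """
    rng = np.random.default_rng(seed)
    pix = np.column_stack([rng.uniform(0, W, n), rng.uniform(0, H, n), np.ones(n)])
    z = rng.uniform(z_min, z_max, n)
    pts = z[:, None] * (np.linalg.inv(K) @ pix.T).T  # unproject each pixel to its depth
    a, b = occ_a(pts), occ_b(pts)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / max(union, 1)
\end{verbatim}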
Examining IF-Net more closely, we find that it struggles to reconstruct humans which are partly occluded or outside the field-of-view (see~\figref{fig: supp s3d if-net} and~\figref{fig: supp apple if-net} for some examples). More importantly, we observe IF-Net to yield inferior results with respect to IoU (0.584~\vs~0.691, $\uparrow$ is better) and Chamfer distance (0.216~\vs~0.155, $\downarrow$ is better). Notably, we find the high normal consistency of IF-Net to be due to the high-resolution voxel grid, which provides more details. \subsection{Analysis} To verify design choices, we conduct ablation studies. We report the results in \tabref{tab: quant}'s $3^\text{rd}$ to $5^\text{th}$ row. \textbf{Per-point classification is not all you need:} To understand whether neighboring information is needed, we replace the $3\times 3$ kernel in $f_\text{spatial}$ (\equref{eq: O_zi final}, \secref{sec: implement}) with a $1\times 1$ kernel, which essentially conducts per-point classification for each pixel on the OPlane. Comparing the $3^\text{rd}$~\vs~$5^\text{th}$ row in~\tabref{tab: quant} corroborates the importance of context, as per-point classification yields inferior results. This shows that the conventional way to treat shape reconstruction as a point classification problem~\cite{Saito2019PIFuPI, Saito2020PIFuHDMP, He2020GeoPIFuGA} may be suboptimal. Specifically, without directly taking into account the context information, we observe a lower IoU (0.674~\vs~0.682, $\uparrow$ is better), a larger Chamfer distance (0.161~\vs~0.158, $\downarrow$ is better), and less normal consistency (0.736~\vs~0.745, $\uparrow$ is better). \textbf{Intermediate supervision is important:} To understand whether the supervision of intermediate features is needed, we train our OPlanes model without $\mathcal{L}_\theta^{h_O \times w_O}$ (\equref{eq: coarse loss}). The results in the $4^\text{th}$~\vs~$5^\text{th}$ row of \tabref{tab: quant} verify the benefits of intermediate supervision. Concretely, with intermediate feature supervision, we obtain a better IoU (0.682~\vs~0.681, $\uparrow$ is better), an improved Chamfer distance (0.158~\vs~0.160, $\downarrow$ is better), and a better normal consistency (0.745~\vs~0.739, $\uparrow$ is better). \textbf{Training with more planes is beneficial:} To understand whether training with fewer planes harms the performance of our OPlanes model, we sample only 5 planes per mesh when training the OPlanes model. The results in the $5^\text{th}$~\vs~$6^\text{th}$ row in~\tabref{tab: quant} demonstrate that training with more planes yields better results. Concretely, with more planes, we obtain a better IoU (0.691~\vs~0.682, $\uparrow$ is better), a smaller Chamfer distance (0.155~\vs~0.158, $\downarrow$ is better), and better normal consistency (0.749~\vs~0.745, $\uparrow$ is better). \subsection{Qualitative Results} \textbf{S3D.} We provide qualitative results in~\figref{fig: s3d qualitative} and~\figref{fig: supp s3d if-net}. OPlanes successfully handle various human gestures and different levels of visibility while PIFuHD fails in those situations. \noindent\textbf{Transferring Results to Real-World RGB-D Data.} In \figref{fig: transfer apple} and~\figref{fig: supp apple if-net} we use real-world data collected in the wild to compare to PIFuHD~\cite{Saito2020PIFuHDMP}. OPlanes results are obtained by directly applying the proposed OPlanes model trained on S3D, without fine-tuning or other adjustments.
For this result, we use a 2020 iPad Pro equipped with a LiDAR sensor~\cite{arkit2021} and develop an iOS app to acquire the RGB-D images and camera matrices. The human masks are obtained by feeding RGB images into a Mask2Former~\cite{Cheng2021MaskedattentionMT}. We observe the PIFuHD results to be noisy and to contain holes. Our model benefits from the OPlanes representation which better exploits correlations within a plane. For this reason, OPlanes better capture the human shape despite the model being trained on synthetic data. \begin{figure}[!t] \centering \captionsetup[subfigure]{width=\textwidth} \centering \includegraphics[width=0.9\textwidth]{./figures/s3d_data/s3d} \captionsetup{width=\textwidth} \vspace{-0.2cm} \caption{Qualitative results on S3D~\cite{Hu2021SAILVOS3A}. For each reconstruction, we show two views. PIFuHD~\cite{Saito2020PIFuHDMP} struggles to obtain complete geometry in case of partial visibility or non-standing gestures while our OPlanes model faithfully reconstructs the shape.} \label{fig: s3d qualitative} \vspace{-0.4cm} \end{figure} \begin{figure}[!t] \centering \captionsetup[subfigure]{width=\textwidth} \centering \hspace*{0.1cm}\includegraphics[width=0.95\textwidth]{./figures/apple_data/apple} \captionsetup{width=\textwidth} \vspace*{-0.2cm} \caption{Transfer results on real-world RGB-D data captured with a 2020 iPad Pro.} \label{fig: transfer apple} \vspace{-0.3cm} \end{figure} \section{Related Work} 3D human reconstruction~\cite{guan2009estimating,tong2012scanning,yang2016estimation,zhang2017detailed,bogo2016keep,lassner2017unite,guler2019holopose,kolotouros2019convolutional,xiang2019monocular,xu2019denserac,yu2017bodyfusion,Zheng2019DeepHuman3H,varol2018bodynet} has been extensively studied for the last few decades. We first discuss the most relevant works on single-view human body shape reconstruction~\cite{gabeur2019moulding} and group them into two categories, template-based models and non-parametric models. Then we review the common 3D representations used for human reconstruction. \noindent\textbf{Template-based models for single-view human reconstruction.} Parametric human models such as SCAPE~\cite{anguelov2005scape} and SMPL~\cite{bogo2016keep} are widely used for human reconstruction. These methods~\cite{kanazawa2018end,varol2018bodynet,Zheng2019DeepHuman3H,Huang2020ARCHAR} use the human body shape as a prior to regularize the prediction space and predict or fit the low-dimensional parameters of a human body model. Specifically, HMR~\cite{kanazawa2018end} learns to predict the human shape by regressing the parameters of SMPL from a single image. BodyNet~\cite{varol2018bodynet} predicts a 3D voxel grid of the human shape and fits the SMPL body model to the predicted volumetric shape. DeepHuman~\cite{Zheng2019DeepHuman3H} utilizes the SMPL model as an initialization and further refines it with deep nets. Although parametric human models are deformable and can capture various complex human body poses and different body measurements, these methods generally do not consider surface details such as hair, clothing, and accessories. \noindent\textbf{Non-parametric models for single-view human reconstruction.} Non-parametric methods for human reconstruction~\cite{Saito2019PIFuPI,Saito2020PIFuHDMP,He2020GeoPIFuGA,hong2021stereopifu,gabeur2019moulding,wang2020normalgan} gained popularity recently as they are more flexible in recovering surface details compared to template-based methods.
Among those methods, the use of an implicit function~\cite{sclaroff1991generalized} to predict human body shape achieves state-of-the-art results~\cite{Saito2020PIFuHDMP}, showing that the expressivity of neural nets enables memorizing the human body shape. To achieve this, the task is usually formulated as a per-point classification,~\emph{i.e.}, classifying every point in a 3D space independently into either inside or outside of the observed body. For this, PIFu~\cite{Saito2019PIFuPI} reconstructs the human shape from an image encoded into a feature map, from which it learns an implicit function to predict per-point occupancy. PIFuHD~\cite{Saito2020PIFuHDMP} employs a two-level implicit predictor and incorporates normal information to recover high quality surfaces. GeoPIFu~\cite{He2020GeoPIFuGA} additionally learns latent voxel features to encourage shape regularization. Hybrid methods have also been studied~\cite{Huang2020ARCHAR,Cao2022JIFFJI}, combining human shape templates with a non-parametric framework. These methods usually yield reconstruction results with surface details. However, common to all the aforementioned methods, the per-point classification formulation does not \emph{directly} take correlations between neighboring 3D points into account. Therefore, predictions remain noisy, particularly in challenging situations with occlusions or partial visibility. Because of this, prior works usually consider images where the whole human body is visible and roughly centered. In contrast, for broader applicability and more accurate results in challenging situations, we propose the occupancy planes (OPlanes) representation. \noindent\textbf{3D representations.} Various 3D representations have been developed, such as voxel grids~\cite{varol2018bodynet,maturana2015voxnet,lombardi2019neural}, meshes~\cite{lin2021end,wang2018pixel2mesh}, point clouds~\cite{qi2017pointnet,Fan2017APS,deng2018ppfnet,wu2020multi,aliev2020neural}, implicit functions~\cite{Mescheder2019OccupancyNL,Saito2019PIFuPI,Saito2020PIFuHDMP,He2020GeoPIFuGA,hong2021stereopifu,peng2021neural}, layered representations~\cite{shade1998layered,zhou2018stereo,srinivasan2019pushing,tucker2020single} and hierarchical representations~\cite{meagher1982geometric,hane2017hierarchical,yu2021plenoctrees}. For human body shape reconstruction, template-based representations~\cite{anguelov2005scape,bogo2016keep,pavlakos2019expressive,osman2020star} are also popular. Our proposed occupancy planes (OPlanes) representation combines the benefits of layered representations and implicit representations. Compared to voxel grids, the OPlanes representation is more flexible, enabling prediction at different resolutions because of its implicit formulation of occupancy-prediction of an entire plane. Compared to unstructured representations such as implicit functions and point clouds, the OPlanes representation benefits from the increased context of a per-plane prediction as opposed to a per-pixel or per-point prediction. Concurrently, the Fourier occupancy field (FOF)~\cite{feng2022fof} was proposed, which uses a 2D field orthogonal to the view direction to represent the occupancy. Different from FOF, where coefficients for Fourier basis functions are estimated for each position on the 2D field, OPlanes directly regress the occupancy values.
\section{Conclusion} Several basic but important concepts relevant to QIP are illustrated by experiments on a liquid-state ensemble NMR quantum information processor. While pure quantum mechanical states are not achievable here, the creation and application of pseudo-pure states is demonstrated. Tests of spinor behavior and entanglement are also described, illustrating quantum mechanical dynamics. Finally, building blocks (the Hadamard, c-NOT, and QFT) for a more complicated quantum computer are introduced. \section{Entangled States} The Einstein-Podolsky-Rosen (EPR) \cite{epr:orig,b:qt} paradox, concerning the spatial correlations of two entangled quantum systems, is perhaps the most famous example of quantum dynamics that is incompatible with a classical view. An entangled state is one that cannot be factored into the product of the individual particle wavefunctions. As a result, the state of one particle is necessarily correlated with the state of the other, and these correlations differ from those allowed by classical mechanics. Entanglement in quantum mechanics is normally invoked to explore aspects of non-local effects and hidden variable theories. Due to the close proximity of nuclear spins and the fact that the ensemble is in a highly mixed state, the NMR measurements discussed below do not address these issues. Nevertheless, we can use the ability of liquid state NMR to simulate strong measurement to show that the behavior of an entangled state is inconsistent with a simple classical picture. The entangled state $\mket{\psi}=\frac{1}{\sqrt{2}}(\mket{00}+\mket{11})$, otherwise known as a Bell state, is given by the density matrix \begin{equation} \rho_{\rm Bell} =\tfrac{1}{2}\left(\tfrac{1}{2}{\bf 1} + 2I_zS_z + 2I_xS_x -2I_yS_y \right). \end{equation} The above state can be prepared directly from the pseudo-pure ground state $|00\rangle$ by the transformation \begin{equation} {\cal U} \equiv e^{-i I_x S_y \pi}, \end{equation} which is implemented by the pulse sequence \begin{equation} \left[ \frac{\pi}{2} \right] ^S _{-x} \rightarrow \left[ \frac{\pi}{2} \right] ^I _{y} \rightarrow \left( \frac{1}{2J} \right) \rightarrow \left[ \frac{\pi}{2} \right] ^I _{-y} \rightarrow \left[ \frac{\pi}{2} \right] ^S _{x}. \end{equation} Readout pulses can then be used to verify the creation of this Bell state, as shown in Fig.~\ref{belltomofig}. One of the advantages of working with an ensemble is that we can introduce a pseudo-random phase variation across the sample to simulate the decoherence that accompanies strong measurement. A pseudo-random phase variation in a given basis can be achieved by rotating the preferred axis to the z-axis and then applying a magnetic field gradient followed by the inverse rotation. This leads to the pulse sequence \begin{equation} \left[ \frac{\pi}{2} \right] ^I _{y} \rightarrow \left[ grad(z) \right] \rightarrow \left[ \pi \right] ^S _{y} \rightarrow \left[ grad(z) \right] \rightarrow \left[ \frac{\pi}{2} \right] ^I _{-y}. \label{xmeasure} \end{equation} It can be shown that such a measurement also ``collapses'' the $S$ spin along this direction. Thus, half the magnetization is along the +x-axis and the other half is along the -x-axis, leaving zero magnetization in the $y$--$z$ plane. This is verified in our experiment by applying a series of readout pulses to confirm the creation of the $2I_xS_x$ state, which corresponds to ``collapsing'' the pseudo-pure Bell state along the x-axis. The experimental results are shown in Fig.~\ref{strongmeasure}.
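The preparation of the Bell state by ${\cal U}$ can be checked numerically; below is a small sketch using numpy/scipy (the operator names are ours, spin $I$ acting on the first tensor factor).
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Single-spin product operators (spin-1/2): I_a = sigma_a / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
id2 = np.eye(2, dtype=complex)
Ix, Iy, Iz = (np.kron(s, id2) for s in (sx, sy, sz))  # spin I: first factor
Sx, Sy, Sz = (np.kron(id2, s) for s in (sx, sy, sz))  # spin S: second factor

U = expm(-1j * np.pi * Ix @ Sy)                    # propagator U = exp(-i Ix Sy pi)
psi = U @ np.array([1, 0, 0, 0], dtype=complex)    # act on pseudo-pure ground state |00>
print(np.round(psi, 6))                            # ~ (|00> + |11>)/sqrt(2)

rho_bell = 0.5 * (0.5 * np.eye(4) + 2 * Iz @ Sz + 2 * Ix @ Sx - 2 * Iy @ Sy)
print(np.allclose(rho_bell, np.outer(psi, psi.conj())))  # True
\end{verbatim}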
An incoherent mixture of entangled states is easily generated by the pulse sequence \begin{equation} \left[ \frac{\pi}{2} \right] _{90^{\circ}}^{S}\rightarrow \left( \frac{1}{2J} \right) \rightarrow \left[ \frac{\pi}{2} \right] _{135^{\circ}}^{I}\rightarrow \left( \frac{1}{2J} \right) \rightarrow \left[ \frac{\pi}{2} \right] _{90^{\circ}}^{S} \end{equation} applied to $\rho_{eq}$ (Eq.~\ref{eq:bal}), yielding the reduced density matrix \begin{equation} \rho_f = \left(\begin{array}{cccc} 0& 0&0&\frac{-1-i}{\sqrt{2}}\\ 0& 0&0& 0 \\ 0& 0&0& 0 \\ \frac{-1+i}{\sqrt{2}}&0&0& 0 \end{array}\right). \end{equation} Suppose one wishes to measure the polarization of spin $I$ along the $x$--axis and spin $S$ along the $z$--axis. One possibility is to use selective RF pulses to rotate the desired axis ($x$ in this case) to the $z$--axis, apply a $z$-gradient, and then rotate back to the $x$--$y$ plane to observe the induction signal as in Eq.~\ref{xmeasure}. Alternatively, one could rotate the desired measurement axis of one of the spins to the $z$--axis, rotate the other spin to the $x$--$y$ plane and then spin-lock the sample on resonance. In this latter case the inhomogeneities in the RF pulse and background field serve to effectively remove any signal perpendicular to the desired axis, and the induction signal is the same as in the first case. Thus, for example, if a measurement along $y$ for spin $I$ and along $x$ for spin $S$ were required, one would observe the induction signal after the sequence \begin{equation} \left[\frac{\pi}{2}\right]_{x}^{S}-\left[\text{spin lock}\right]_{x}^{I}. \end{equation} Because one of the spins remains along the $z$--axis while the receiver is in phase with the other, the measured signals are anti-phase. The spectrographic traces shown in Figs.~\ref{eprfig}(a)--(d) indicate the results of the measurements $\mbox{Tr}\left( 4I_xS_y \rho_{f}\right)$, $\mbox{Tr}\left( 4I_yS_x \rho_{f}\right)$, $\mbox{Tr}\left( 4I_yS_y \rho_{f}\right)$, and $\mbox{Tr}\left( 4I_xS_x \rho_{f}\right)$, respectively. The traces show the Fourier-transformed induction signal read on the $^{13}$C channel, with absorptive peaks in phase along either the $+x$-- or $+y$--axis, depending on which axis the carbon nucleus was spin-locked. Notice that Fig.\ \ref{eprfig}(d) shows the same anti-phase signal as the other spectra, but ``flipped'' by $180^{\circ}$. The results of the four plots, taken together, show a simple inconsistency compared to a model of only two uncorrelated classical magnetic dipoles. If each dipole were measured independently of the state of the other, each measurement would simply record the $x$ or $y$ polarization of one spin; since each magnetic moment is measured twice across the four experiments, the signs should cancel in the product. Yet the product of the four traces carries an overall factor of $-1$. \section{Introduction} The fundamental physics of NMR is again, 50 years after its discovery, the subject of much discussion. The impetus behind this recent interest is the dramatic potential of quantum information processing (QIP) \cite{Steane}, particularly quantum computing, along with the realization that liquid-state NMR provides an experimentally accessible testbed for developing and demonstrating these new ideas \cite{coryorig,cory1,coryphysica,gershen,knill97,jonesjchem,chuangseth,chuangkubi,jonesmosca,jonesscience,cory2}. Most descriptions of quantum information processors have focused on the preparation, manipulation, and measurement of a single quantum system in a pure state.
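As an aside, the sign inconsistency of the four traces discussed in the preceding section can be reproduced directly from the displayed $\rho_f$; a minimal numpy sketch:
\begin{verbatim}
import numpy as np

s = 1 / np.sqrt(2)
rho_f = np.zeros((4, 4), dtype=complex)
rho_f[0, 3], rho_f[3, 0] = (-1 - 1j) * s, (-1 + 1j) * s  # the rho_f displayed above

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
# For spin-1/2 operators, 4 I_a S_b = sigma_a (x) sigma_b
obs = {"4IxSy": np.kron(sx, sy), "4IySx": np.kron(sy, sx),
       "4IySy": np.kron(sy, sy), "4IxSx": np.kron(sx, sx)}
vals = {name: np.trace(M @ rho_f).real for name, M in obs.items()}
print(vals)                              # three traces are +sqrt(2), 4IxSx is -sqrt(2)
print(np.prod(list(vals.values())) < 0)  # True: the product carries the factor of -1
\end{verbatim}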
The applicability of NMR to QIP is somewhat surprising because, at finite temperatures, the spins constitute a highly mixed state, as opposed to the preferred pure state. However, NMR technology applied to the mixed state ensemble of spins (the liquid sample) does offer several advantages. Decoherence plays a detrimental role in the storage of quantum information, but decoherence times are conveniently long (on the order of seconds) in a typical solution sample; moreover, decoherence acts on the system by attenuating the elements of the density matrix and rarely mixes them. NMR spectrometers allow for precise control of the spin system via the application of arbitrary sequences of RF excitations, permitting the implementation of unitary transformations on the spins. Effective non-unitary transformations are also possible using magnetic field gradients. The gradient produces a spatially varying phase throughout the sample, and since the detection over the sample is essentially a sum over all the spins, phase cancellations from spins in distinct positions occur. These characteristics of NMR enable the creation of a class of mixed states, called pseudo-pure states, which transform identically to a quantum system in a pure state\cite{cory1}. NMR does have several noteworthy disadvantages. A single density matrix cannot be associated with a unique microscopic picture of the sample, and the close proximity of the spins prevents the study of non-local effects. Additionally, the preparation of pseudo-pure states from the high temperature equilibrium state in solution NMR entails an exponential loss in polarization~\cite{warren}. In this paper, we review the results of a number of simple NMR experiments demonstrating interesting quantum dynamics. The experiments illustrate spinor behavior under rotations, the creation and validation of pseudo-pure states, their transformation into ``entangled'' states, and the simulation of wave function collapse via gradients. Additionally, the implementations of basic quantum logic gates are described, along with the Quantum Fourier Transform. \section{Preparation of Pseudo-Pure States} Before describing the creation of the pseudo-pure state, it is convenient to begin with a system of equal spin populations. This is achieved by applying the pulse sequence \begin{equation} \left[ \frac{\pi}{2} \right]^{I,S} _{x} \rightarrow \left( \frac{1}{4J} \right) \rightarrow \left[ \frac{\pi}{2} \right]^{I,S} _{y}\rightarrow \left( \frac{1}{4J} \right) \rightarrow \left[ \frac{\pi}{2} \right]^{I,S} _{-x} \rightarrow \left[ grad(z) \right], \end{equation} to the equilibrium density matrix, resulting in \begin{equation} \frac{1}{4}{\bf 1} + \frac{\epsilon}{4}\left(1+\tfrac{\gamma_S}{\gamma_I}\right)(I_z + S_z), \label{eq:bal} \end{equation} which has a balanced spin population. Because the eigenvalue structure of this density matrix is different from that of thermal equilibrium, there is no unitary transformation which could transform one to the other. The non-unitary gradient (where the non-unitarity refers to the spatial average over the phases created by the gradient) at the end of the above pulse sequence makes this transformation possible. Figure \ref{ppprepfig} shows a spectrum obtained after applying this sequence.
Since the identity part of the equalized density matrix is unaffected by unitary transformations and undetectable in NMR, only the deviation density matrix, \begin{equation} I_{z}+S_{z}\quad=\quad \begin{array}{rl} & \begin{array}{cccc}| 0^{\tiny I} 0^{\tiny S}\rangle & |0^{\tiny I}1^{\tiny S}\rangle & |1^{\tiny I}0^{\tiny S}\rangle & | 1^{\tiny I}1^{\tiny S}\rangle \end{array} \\ \begin{array}{c} \langle 0^{\tiny I}0^{\tiny S}| \\ \langle 0^{\tiny I}1^{\tiny S}| \\ \langle 1^{\tiny I}0^{\tiny S}| \\ \langle 1^{\tiny I}1^{\tiny S}| \\ \end{array} & { \left( \begin{array}{p{11.5mm}p{11.5mm}p{11.5mm}p{11.5mm}} 1&0&0& 0\\ 0&0&0& 0\\ 0&0&0& 0\\ 0&0&0&-1 \end{array} \right)}, \end{array} \end{equation} which represents the excess magnetization aligned with the external magnetic field, is of interest. The above matrix representation has been made in the eigenbasis of the unperturbed Hamiltonian, and here the rows and columns have been labeled explicitly to avoid ambiguity. In the subsequent matrix expressions, the labels will be dropped. QIP requires the ability to create and manipulate pure states. NMR systems, however, are in a highly mixed state at thermal equilibrium. While single spin manipulation is not feasible in NMR, Cory et al. \cite{coryorig,cory1,gershen} have developed a technique by which the equilibrium state is turned into a pseudo-pure state. Such a state can be shown to transform identically to a true pure state as follows: according to the rules of quantum mechanics, a unitary transformation ${\cal U}$ maps the density matrix $\rho$ to $\rho'={\cal U} \rho {\cal U}^{\dag}$. Thus an $N$-spin density matrix of the form $\rho=({\bf{1}}+\mket{\psi} \mbra{\psi})/2^N$ is mapped to \begin{equation} \frac{\bf{1}+({\cal U} \mket{\psi})({\cal U}\mket{\psi})^{\dag} }{2^N}. \end{equation} This shows that the underlying spinor $\mket{\psi}$ is transformed one-sidedly by ${\cal U}$ just as a spinor which describes a pure state would be. After equalizing the spin population from the thermal equilibrium state (Eq.~\ref{eq:bal}), the application of \begin{equation} \left[ \frac{\pi}{4} \right]^{I,S} _{x} \rightarrow \left( \frac{1}{2J} \right) \rightarrow \left[ \frac{\pi}{6} \right]^{I,S} _{y}\rightarrow \left[ grad(z) \right] \end{equation} results in the pseudo-pure state (neglecting the initial identity component) \begin{equation} \sqrt{\frac{3}{32}}{\bf 1}+\sqrt{\frac{3}{8}}\left(I_{z}+S_{z}+2I_{z}S_{z}\right)= \sqrt{\frac{3}{2}} \left( \begin{array}{rrrr} 1&0&0& 0\\ 0&0&0& 0\\ 0&0&0& 0\\ 0&0&0& 0 \end{array}\right). \end{equation} Figure \ref{pptomofig} shows a series of spectra confirming the preparation of a pseudo-pure state. \section{Quantum Logic Gates} NMR provides a means whereby it is possible to analyze experiments as building blocks for a quantum information processor (QIP). Because spin $\tfrac{1}{2}$ particles can have two possible orientations (up or down), it is natural to associate spin states with computational bits. Further, NMR experiments can be viewed as performing computations on these quantum bits (qubits). \subsection{Pulse Sequences As Logic Gates} Suppose we wanted to implement the controlled-NOT (c-NOT, or XOR) gate, common in computer science, using NMR techniques. A c-NOT gate performs a NOT operation on one bit, conditional on the other bit being set to 1.
The action of a c-NOT gate is summarized by the truth table \begin{center} \begin{tabular}{llll} ${\bf A_{input}}$ & ${\bf B_{input}}$ & ${\bf A_{output}}$ & ${\bf B_{output}}$\\ \cline{1-4} F (up) & F (up) & F (up) & F (up)\\ F (up) & T (down) & F (up) & T (down)\\ T (down) & F (up) & T (down) & T (down)\\ T (down) & T (down) & T (down) & F (up) \end{tabular} \end{center} where the True and False values have been associated with up spins and down spins, respectively. The above truth table corresponds to a unitary transformation that implements \begin{equation} \begin{array}{rcl} \mket{00} & \rightarrow & \mket{00}\\ \mket{01} & \rightarrow & \mket{01}\\ \mket{10} & \rightarrow & \mket{11}\\ \mket{11} & \rightarrow & \mket{10}. \end{array} \end{equation} In a weakly coupled two-spin system, a single transition can be excited via application of the propagator \begin{equation} {\cal U}\;=\;e^{-\imath \tfrac{1}{2} S_x \left(1 - 2 I_z \right) \omega t}\;=\; \left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & \cos{\tfrac{\omega t}{2}} & -\imath \sin{\tfrac{\omega t}{2}}\\ 0 & 0 & \imath \sin{\tfrac{\omega t}{2}} & \cos{\tfrac{\omega t}{2}} \end{array} \right), \end{equation} which for a perfect $\omega t=\pi$ rotation becomes (to within a phase factor) \begin{equation} {\cal U}\;=\; \left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 \end{array} \right). \end{equation} It is clear that exciting a single transition in an NMR experiment is the same as a c-NOT operation from computer logic. In NMR terms, the action of the c-NOT gate is to rotate one spin, conditional on the other spin being down. Figure \ref{cnotfig} shows the result of performing a c-NOT on $\rho_{eq}$. While NMR is certainly capable of implementing the c-NOT operation as is done on a classical computer, that alone does not demonstrate any quantum dynamics. Gates implemented on a quantum information processor which have no classical counterpart are of much more interest. An example of such a gate is the single-spin Hadamard transform, \begin{equation} H\;=\;\tfrac{1}{\sqrt{2}} \left( \begin{array}{cc} 1 & 1\\ 1 & -1\\ \end{array} \right)\;=\;e^{i\left(\tfrac{1}{2}-\tfrac{I_x+I_z}{\sqrt{2}}\right)\pi}, \label{hadamard} \end{equation} which takes a spin from the state $\mket{0}$ into the state $\tfrac{1}{\sqrt{2}} (\mket{0} + \mket{1})$. This is just a $\pi$ rotation around the axis at $45^{\circ}$ between the $x$ and $z$ axes. A spectrum demonstrating the application of the Hadamard transform to the equilibrium state $\rho_{eq}$ is shown in Figure \ref{Hadamardfig}. The c-NOT and single-spin rotations can be combined to generate any desired unitary transformation, and for this reason they are referred to as a universal set of gates~\cite{BBCDMSSSW:95}. Analysis of conventional NMR experiments in terms of quantum information processing has led to a great deal of insight into areas such as the dynamics of pulse sequences for logic gates \cite{price}, and the effective Hamiltonian for exciting a single transition \cite{havel}. \subsection{The Quantum Fourier Transform} One of the most important transformations in quantum computing is the Quantum Fourier Transform (QFT). The QFT is a necessary component of Shor's algorithm, which allows the factorization of numbers in polynomial time\cite{Shor}, a task which no classical computer can achieve (so far as is known).
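As a quick numerical check of the two gates introduced in the preceding subsection, both the single-transition propagator (a c-NOT up to a phase on the flipped block) and the Hadamard rotation can be exponentiated explicitly:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

id2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx, Iz = np.kron(id2, sx / 2), np.kron(sz / 2, id2)

# Single-transition propagator at omega*t = pi
U = expm(-1j * 0.5 * np.pi * Sx @ (np.eye(4) - 2 * Iz))
print(np.round(U, 6))  # upper block identity, lower block -i*sigma_x: c-NOT up to phase

# Hadamard as a pi rotation about the axis bisecting x and z
H = expm(1j * np.pi * (0.5 * id2 - (sx / 2 + sz / 2) / np.sqrt(2)))
print(np.allclose(H, np.array([[1, 1], [1, -1]]) / np.sqrt(2)))  # True
\end{verbatim}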
Essentially, the QFT is the discrete Fourier transform which, for $q$ dimensions, is defined as follows: \begin{equation} QFT_q|a\rangle \rightarrow \frac{1}{\sqrt{q}} \sum^{q-1}_{c=0} \exp(2 \pi iac/q)|c\rangle. \end{equation} This transform measures the input amplitudes of $|a\rangle$ in the $|c\rangle$ basis. Notice how the quantum Fourier transform on $|0\rangle$ will create an equal superposition in the $|c\rangle$ basis, allowing for parallel computation. In matrix form, the two-qubit QFT transformation $QFT_2$ is expressed as \begin{eqnarray} QFT_2 &=& \frac{1}{2} \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \\ \end{array} \right). \end{eqnarray} As formulated by Coppersmith \cite{cop}, the QFT can be constructed from two basic unitary operations: the Hadamard gate $H_j$ (Eq.~\ref{hadamard}), operating on the {\it j}th qubit, and the conditional phase transformation $B_{jk}$, acting on the {\it j}th and {\it k}th qubits, which is given by \begin{eqnarray} B_{jk} &=& \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{{i\theta_{jk}}} \end{array} \right) \;=\; e^{i\theta_{jk}\frac{1}{2}(1-2I_z)\frac{1}{2}(1-2S_z)}, \label{Bgate} \end{eqnarray} where $\theta_{jk} = \frac{\pi}{2^{k-j}}$. The two-qubit QFT, in particular, can be constructed as \begin{equation} QFT_2\;=\; H_0B_{01}H_1. \end{equation} The $B_{jk}$ transformation can be implemented by performing the chemical shift and coupling transformations shown in Eq.~\ref{Bgate}. Figure \ref{QFTfig} shows the implementation of the QFT on a two spin system. The spectra show the $90^{\circ}$ phase shifts created after the QFT application. \section{The Spin System} The experiments were performed on the two-spin heteronuclear spin system, $^{13}$C-labeled chloroform ($^{13}$CHCl$_3$), thereby eliminating the need for shaped RF pulses. The $^{13}$C (I) and the $^1$H (S) nuclei interact via weak scalar coupling, and the Hamiltonian for this system is written as \begin{equation} {\cal{H}} =\omega_{I}I_{z}+\omega_{S}S_{z} +2\pi J I_{z}S_{z}, \end{equation} where $\omega_{I}$ and $\omega_{S}$ are the Larmor frequencies of the $^{13}$C and $^1$H spins respectively and $J\ll\vert\omega_{I}-\omega_{S} \vert$ is the scalar coupling constant. In the standard model of quantum computation, the quantum system is described by a pure state. However, liquid-state NMR samples at room temperature are in highly mixed states, requiring the state of the system to be described by the density operator. In a liquid sample, the inter-molecular interactions are, for most practical purposes, averaged to zero so that only interactions within a molecule are observable; in other words, the sample can be thought of as an ensemble of quantum processors, each permitting quantum coherence within but not between molecules. For the purposes of this paper, the large density matrix of size $2^N \times 2^N$, where $N$ is the number of spins in the sample, may be replaced by a much smaller density matrix of size $2^n \times 2^n$, where $n$ is the number of distinguishable spin-$\tfrac{1}{2}$ nuclei in the molecule.
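Returning to the two-qubit QFT above: the decomposition $QFT_2 = H_0B_{01}H_1$ can be checked numerically, keeping in mind that Coppersmith's construction leaves a final bit-reversal of the qubits implicit (the qubit ordering below is our assumption):
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
H0, H1 = np.kron(H, np.eye(2)), np.kron(np.eye(2), H)  # qubit 0 = first tensor factor
B01 = np.diag([1, 1, 1, np.exp(1j * np.pi / 2)])       # theta_{01} = pi / 2

M = H0 @ B01 @ H1
P = np.eye(4)[[0, 2, 1, 3]]   # bit reversal: swaps the basis states |01> and |10>

QFT2 = 0.5 * np.array([[1, 1, 1, 1],
                       [1, 1j, -1, -1j],
                       [1, -1, 1, -1],
                       [1, -1j, -1, 1j]])
print(np.allclose(M @ P, QFT2))  # True: H_0 B_01 H_1 equals QFT_2 up to bit reversal
\end{verbatim}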
In the high temperature regime ($\epsilon=\frac{\hbar\gamma_IB_o}{2kT} \sim {\cal{O}}(10^{-6})$) the equilibrium density operator for the ensemble is \begin{equation} \rho=\tfrac{e^{-{\cal H}/kT}}{Z}\approx \tfrac{1}{4}{\bf 1}+\tfrac{1}{4}\epsilon\rho_{dev}= \tfrac{1}{4}{\bf 1}+\tfrac{1}{4}\epsilon \left(I_{z} + \frac{\gamma_S}{\gamma_I}S_{z} \right), \end{equation} where the relative value of the gyromagnetic ratios is $\gamma_S/\gamma_I \sim 4$. From the above, it is clear that at room temperature a spin system cannot be prepared in a pure state. However, it is possible to prepare a pseudo-pure state that transforms like a pure state. Also, notice that since the identity part of the density operator is invariant under unitary transformations, it is the deviation part of the density operator that holds the information on the spin dynamics. Henceforth in this paper, the deviation density matrix will simply be referred to as the density matrix. The density operator is often written in the product operator basis formed by the direct product of individual spin operators\cite{productop,somaroo}. The product operator technique is used throughout this paper to express the dynamics of the spin system. Furthermore, if $n$ spins are coupled to one another, any arbitrary unitary operation can be composed from a series of RF pulses, chemical shift evolutions and scalar coupling evolutions \cite{coryphysica,BBCDMSSSW:95}. \section{Spinor Behavior} Particles of half-integral spin have the curious property that when rotated by $2\pi$, their wave functions change sign, while a $4\pi$ rotation returns their phase factors to their original value. The change in the sign of the wavefunction is not observable for a single particle, but it can be seen through an interference effect with a second ``reference spin.'' Spinor behavior, as this effect is called, was first experimentally measured using neutron interferometry \cite{neutron1,neutron2} and later using NMR interferometry \cite{vaughn}. The following simple experiment describes how spinor behavior can be seen in chloroform, where the spinor phase of the $^{13}$C is revealed through its correlation with the $^1$H nucleus as a multiplicative phase factor. Consider the unitary transformation \begin{equation} {\cal U}= \left( \begin{array}{cccc} 1&0&0&0\\ 0&\cos\left(\tfrac{\phi}{2}\right)&0&-\sin\left(\tfrac{\phi}{2}\right)\\ 0&0&1&0\\ 0&\sin\left(\tfrac{\phi}{2}\right)&0&\cos\left(\tfrac{\phi}{2}\right) \end{array}\right)= e^{-i\phi I_y (\tfrac{1}{2} - S_z)}. \end{equation} As explained in Section 6, this can be viewed as a rotation by $\phi$ of the $^{13}$C conditional on the $^1$H being in the down state. This can be implemented via the pulse sequence \begin{equation} \left[ \frac{\phi}{2} \right] _{y}^I \rightarrow \left[ \frac{\pi}{2} \right] _{x}^I \rightarrow \left[ \frac{\phi}{2\pi J} \right] \rightarrow \left[ \frac{\pi}{2} \right] _{-x}^I. \end{equation} Application of this pulse sequence to the state $2I_zS_x$, where the spinor behavior of the I-spin is revealed by its correlation to the S-spin, results in \begin{equation} 2\cos(\phi/2)I_zS_x + 2\sin(\phi/2)I_xS_x. \end{equation} It can be clearly seen that when $\phi=2\pi$ the initial state gains a minus sign, but when $\phi=4\pi$ the state returns to its initial value. The state $2I_zS_x$ is made observable under the evolution of the internal Hamiltonian previously defined and can be created from the equalized equilibrium state (Eq.
\ref{eq:bal}) using the sequence \begin{equation} \left[ \frac{\pi}{2} \right] _{x}^I \rightarrow \left[ grad(z) \right] \rightarrow \left[ \frac{\pi}{2} \right] _{x}^S \rightarrow \left( \frac{1}{2J} \right). \end{equation} Figure \ref{spinorfig} shows the spectra for $\phi=0$, $2\pi$, and $4\pi$.
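The stated spinor phase behavior can also be verified numerically; a minimal numpy/scipy sketch of the conditional rotation acting on $2I_zS_x$ (operator names are ours):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
id2 = np.eye(2, dtype=complex)
Iy, Iz = np.kron(sy, id2), np.kron(sz, id2)
Sx, Sz = np.kron(id2, sx), np.kron(id2, sz)

rho0 = 2 * Iz @ Sx                 # initial state, normalized so Tr(rho0^2) = 1
for phi in (0.0, 2 * np.pi, 4 * np.pi):
    U = expm(-1j * phi * Iy @ (0.5 * np.eye(4) - Sz))  # rotate I conditional on S down
    rho = U @ rho0 @ U.conj().T
    c = np.trace(rho @ rho0).real  # coefficient of 2IzSx, equals cos(phi/2)
    print(phi, round(c, 6))        # 1.0, -1.0, 1.0: sign flip at 2*pi, restored at 4*pi
\end{verbatim}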
\section*{Abstract} Let $X$ be a smooth projective toric surface, and ${\mathbb H}^d(X)$ the Hilbert scheme parametrising the length $d$ zero-dimensional subschemes of $X$. We compute the rational Chow ring $A^*({\mathbb H}^d(X))_{\mathbb Q}$. More precisely, if $T\subset X$ is the two-dimensional torus contained in $X$, we compute the rational equivariant Chow ring $A_T^*({\mathbb H}^d(X))_{\mathbb Q}$ and the usual Chow ring is an explicit quotient of the equivariant Chow ring. Some quasi-projective toric surfaces, such as the affine plane, can be handled by our method as well. \section*{Introduction} \label{sec:introduction} Let $X$ be a smooth projective surface and ${\mathbb H}^d(X)$ the Hilbert scheme parametrising the zero-dimensional subschemes of length $d$ of $X$. The problem is to compute the rational cohomology $H^*({\mathbb H}^d(X))$. The additive structure of the cohomology is well understood. First, Ellingsrud and Str{\o}mme computed in \cite{ellingsrud-stromme87:chow_group_of_hilbert_schemes} the Betti numbers $b_i({\mathbb H}^d(X))$ when $X$ is the projective plane or a Hirzebruch surface ${\mathbb F}_n$. The Betti numbers $b_i({\mathbb H}^d(X))$ for a smooth surface $X$ were computed by G{\"o}ttsche \cite{gottsche90:nbBettiDuSchemaHilbertDesSurfaces}, who realised them as coefficients of an explicit power series in two variables depending on the Betti numbers of $X$. This nice and surprising organisation of the Betti numbers as coefficients of a power series was explained by Nakajima in terms of a Fock space structure on the cohomology of the Hilbert schemes \cite{nakajima97:_heisenberg_et_Hilbert_schemes}. Grojnowski announced similar results \cite{grojnowski:resultatsSimilairesANakajima}. As to the multiplicative structure of $H^*({\mathbb H}^d(X))$, the picture is not as clear. There are descriptions valid for a general surface $X$, but these are quite indirect and not explicit, and there are more explicit descriptions for some special surfaces $X$. The first steps towards the multiplicative structure were performed again by Ellingsrud and Str{\o}mme \cite{ellingsrud-stromme93:towardsTheChowRingOfPP2} (see also Fantechi-G\"ottsche \cite{fantechi-gottsche93:cohomologie-3-points}). They gave an indirect description of the ring structure in the case $X={\mathbb P}^2$ in terms of the action of the Chern classes of the tautological bundles. Explicitly, Ellingsrud and Str{\o}mme constructed a variety $Y$ whose cohomology is computable, an embedding $i:{\mathbb H}^d({\mathbb P}^2)\ensuremath{\rightarrow} Y$, and proved the isomorphism $H^*({\mathbb H}^d({\mathbb P}^2))\simeq H^*(Y)/Ann(i_*(1))$, where $Ann$ denotes the annihilator. When $X=\A^2$, Lehn gave in \cite{lehn99:_chern_classes_of_tautological_sheaves_on_Hilbert_schemes} an identification between the cohomology ring $H^*({\mathbb H}^d(\A^2))$ and a ring of differential operators on a Fock space. With some extra algebraic work, it is possible to derive from it a totally explicit description of the cohomology ring $H^*({\mathbb H}^d(\A^2))$. This was done by Lehn and Sorger in \cite{lehn_sorger01:cup_product_on_Hilbert_schemes}. The same result was obtained independently by Vasserot \cite{vasserot01:anneauCohomologieHilbert} by methods relying on equivariant cohomology. When $X$ is a smooth projective surface, Costello and Grojnowski have identified $H^*({\mathbb H}^d(X))$ with two algebras of operators \cite{costello-grojnowski03:CohomoSchemaHilbertPonctuel}.
Lehn and Sorger extended their results to the case of $K3$ surfaces \cite{lehn_sorger02:cup_product_on_Hilbert_schemes_for_K3}. Li, Qin and Wang have computed the ring structure of $H^*({\mathbb H}^d(X))$ when $X$ is the total space of a line bundle over ${\mathbb P}^1$ \cite{liQinWang04mathAG:cohomoDesSchemaHilbertSurface=FibreSurP1}. \medskip The goal of this work is to compute the Chow ring $A^*({\mathbb H}^d(X))$ when $X$ is a smooth projective toric surface. The description is new even in the case $X={\mathbb P}^2$. Nakajima's construction \cite{nakajima97:_heisenberg_et_Hilbert_schemes} has been fundamental and many of the above papers (\cite{vasserot01:anneauCohomologieHilbert}, \cite{lehn99:_chern_classes_of_tautological_sheaves_on_Hilbert_schemes}, \cite{lehn_sorger01:cup_product_on_Hilbert_schemes}, \cite{lehn_sorger02:cup_product_on_Hilbert_schemes_for_K3}, \cite{liQinWang04mathAG:cohomoDesSchemaHilbertSurface=FibreSurP1}) rely on it. The present work is independent of Nakajima's framework and uses equivariant Chow rings as the main tool. For simplicity, we use the notation ${\mathbb H}^d$ instead of ${\mathbb H}^d(X)$. We use the formalism of Chow rings and work over any algebraically closed field $k$. When $k={\mathbb C}$, the Chow ring coincides with the usual cohomology since the action of the two-dimensional torus $T$ on $X$ induces an action of $T$ on ${\mathbb H}^d$ with a finite number of fixed points. \medskip \textit{Equivariant Chow rings.} The construction of an equivariant Chow ring associated with an algebraic space endowed with an action of a linear algebraic group has been settled by Edidin and Graham \cite{edidinGraham:constructionDesChowsEquivariants}. Their construction is modeled after the Borel construction in equivariant cohomology. Brion \cite{brion97:_equivariant_chow_groups} pushed the theory further in the case where the group is a torus $T$ acting on a variety ${\ensuremath \mathcal X}$. In our setting, ${\ensuremath \mathcal X}$ is smooth and projective. Brion gave a description of Edidin and Graham's equivariant Chow ring by generators and relations. This alternative construction makes it possible to prove that the usual Chow ring is an explicit quotient of the equivariant Chow ring. This is the starting point of this work: to realize the usual Chow ring as a quotient of the equivariant Chow ring. Explicitly, the morphism ${\ensuremath \mathcal X}\ensuremath{\rightarrow} Spec\ k$ yields a pullback on the level of Chow rings and makes $A_T^*({\ensuremath \mathcal X})$ an $A_T^*(Spec\ k)$-algebra. There is an isomorphism $A^*({\ensuremath \mathcal X})\simeq A_T^*({\ensuremath \mathcal X})/A_T^{>0}(Spec\ k) A_T^*({\ensuremath \mathcal X})$. Moreover, over the rationals, the restriction to fixed points $A_T^*({\ensuremath \mathcal X})_{{\mathbb Q}}\ensuremath{\rightarrow} A^*_T({\ensuremath \mathcal X}^T)_{{\mathbb Q}}$ is injective and its image is the intersection of the images of the morphisms $A_T^*({\ensuremath \mathcal X}^{T'})_{\mathbb Q}\ensuremath{\rightarrow} A^*_T({\ensuremath \mathcal X}^T)_{\mathbb Q}$ where $T'$ runs over all one-codimensional subtori of $T$. \\ Thus the natural context is that of rational Chow rings and we lighten the notations: From now on, the symbols $A^*({\ensuremath \mathcal X}), A^*_T({\ensuremath \mathcal X})$ will implicitly stand for the rational Chow rings $A^*({\ensuremath \mathcal X})_{{\mathbb Q}}, A^*_T({\ensuremath \mathcal X})_{{\mathbb Q}}$. \\ We apply Brion's results to ${\ensuremath \mathcal X}={\mathbb H}^d$.
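Before doing so, let us record the standard toy example that this machinery generalizes (it is only an illustration and is not used later): take ${\ensuremath \mathcal X}={\mathbb P}^1$, with $T$ acting through a non-zero character $\chi\in \hat{T}$, so that the fixed points are $0$ and $\infty$. If $S=Sym(\hat{T}\otimes {\mathbb Q})$ denotes the symmetric algebra over the character group (the equivariant Chow ring of the point), the restriction to the fixed points identifies $A_T^*({\mathbb P}^1)$ with \begin{displaymath} \{(f_0,f_\infty)\in S\ensuremath{\times} S,\ f_0\equiv f_\infty\ (\chi)\}, \end{displaymath} and killing the elements $(f,f)$ with $f$ homogeneous of positive degree leaves the usual Chow ring $A^*({\mathbb P}^1)\simeq {\mathbb Q}[h]/(h^2)$, where $h$ is the image of $(\chi,0)$.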
The locus ${\ensuremath \mathcal X}^T={\mathbb H}^{d,T}$ consists of a finite number of points and the ring $A_T^*({\mathbb H}^{d,T})$ is a product of polynomial rings. In particular, the multiplicative structure of $A_T^*({\mathbb H}^{d})\subset A_T^*({\mathbb H}^{d,T})$ is completely determined. In view of the above description, the problem of computing $A^*_T({\mathbb H}^d)\subset A^*_T({\mathbb H}^{d,T})$ reduces to the computation of $A^*_T({\mathbb H}^{d,T'})\subset A^*_T({\mathbb H}^{d,T})$. The steps are as follows. \begin{itemize} \item First, we study the geometry of the locus ${\mathbb H}^{d,T'}\ensuremath{\subset} {\mathbb H}^{d}$. We identify its irreducible components with products $V_1\ensuremath{\times} \dots\ensuremath{\times} V_r$ where each term $V_i$ in the product is a projective space or a graded Hilbert scheme ${\mathbb H}^{T',H}$, in the sense of Haiman-Sturmfels \cite{haiman_sturmfels02:multigradedHilbertSchemes}. \item A graded Hilbert scheme ${\mathbb H}^{T',H}$ appearing as a term $V_i$ is embeddable in a product ${\mathbb G}$ of Grassmannians. A slight modification of an argument by King and Walter \cite{king_walter95:generateurs_anneaux_chow_espace_modules} shows that the restriction morphism $A_T^*({\mathbb G})\ensuremath{\rightarrow} A_T^*({\mathbb H}^{T',H})$ is surjective. The idea for this step is that the universal family over ${\mathbb H}^{T',H}$ is a family of $k[x,y]$-modules with a nice resolution. Since the equivariant Chow ring $A_T^*({\mathbb G})$ is computable, we obtain a description of $A_T^*({\mathbb H}^{T',H})$. \item It then suffices to put the two above steps together via an equivariant K\"unneth formula to obtain a description of the equivariant Chow ring $A^*_T({\mathbb H}^{d,T'})$, thus of $A^*_T({\mathbb H}^d)=\cap _{T'}A^*_T({\mathbb H}^{d,T'})$ (Theorem \ref{thr:description du Chow avec produit tensoriel}). \end{itemize} At this point, the description of the equivariant Chow ring is complete, but the formula in theorem \ref{thr:description du Chow avec produit tensoriel} involves tensor products, direct sums and intersections. The last step consists in an application of a Bott formula (proved by Edidin and Graham in an algebraic context \cite{edidin_Graham98:formuleDeBott}) to get a nicer description. This is done in Theorem \ref{thr:description du Chow avec congruences}: If $\hat{T}$ is the character group of $T$ and $S=Sym(\hat{T}\otimes {\mathbb Q})\simeq {\mathbb Q}[t_1,t_2]$ is the symmetric ${\mathbb Q}$-algebra over $\hat{T}$, then $A^*_T({\mathbb H}^d)\ensuremath{\subset} S^{{\mathbb H}^{d,T}}$ is realised as a set of tuples of polynomials satisfying explicit congruence relations. In this setting, the usual Chow ring is the quotient of the equivariant Chow ring by the ideal generated by the elements $(f,\dots,f)$, $f$ homogeneous with positive degree. We illustrate our method on the case of ${\mathbb H}^3({\mathbb P}^2)$ in theorem \ref{thr:leCasHilbTroisP2}. Though we suppose for convenience that the underlying surface $X$ is projective, the descriptions of the Chow ring could be performed with conditions on $X$ weaker than projectivity: $X$ need only be filtrable \cite{brion97:_equivariant_chow_groups}. In particular, the method applies for the affine plane. The key results about equivariant Chow rings used in the text have been extended to an equivariant $K$-theory setting by Vezzosi and Vistoli \cite{Vezzosi_Vistoli:KTheorieEquivarianteEtSousTores}.
Thus the method developed in the present paper should generalize to equivariant $K$-theory as well. \medskip \textit{Acknowledgments.} Michel Brion generously shared his knowledge about equivariant cohomology. It is a pleasure to thank him for the stimulating discussions we had. \section{Hilbert functions} \label{sec:objectsInvolved} To follow the method sketched in the introduction, we need to compute the irreducible components of ${\mathbb H}^{d,T'}$ for a one-codimensional torus $T'\ensuremath{\subset} T$. In this section, we introduce the basic notations and several notions of Hilbert functions useful to describe these irreducible components. We also define the Hilbert schemes and Grassmannians associated with these Hilbert functions. \subsection*{The toric surface $X$} \label{sec:toric-variety-x} Let $T$ be a 2-dimensional torus with character group $\hat T$. Let $N=Hom_{{\mathbb Z}}(\hat T,{\mathbb Z})$, $N_{\mathbb R}=N\otimes {\mathbb R}$ and $\Delta\subset N_{\mathbb R}$ a fan defining a smooth projective toric surface $X$. Denote the maximal cones of $\Delta$ by $\sigma_1,\dots,\sigma_r$ with the convention $\sigma_{r+1}=\sigma_1$, and by $p_1,\dots,p_r$ the corresponding closed points of $X$. Assume that the cones are ordered such that $\sigma_i\cap \sigma_{i+1}=\sigma_{i,i+1}$ is a one-dimensional cone. Denote respectively by $U_{i,i+1},O_{i,i+1},V_{i,i+1}=\overline O_{i,i+1}$ the open subvariety, the orbit and the closed subvariety of $X$ associated with the cone $\sigma_{i,i+1}$. Define similarly the open subscheme $U_i\ensuremath{\subset} X$ associated with $\sigma_{i}$. Explicitly, if $\sigma_i^\nu\ensuremath{\subset} \hat T\otimes {\mathbb R}$ is the dual cone of $\sigma_i\ensuremath{\subset} N_{\mathbb R}$ and $R_i=k[\sigma_i^\nu\cap \hat T]$, then $U_i = Spec\ R_i$. The inclusion $R_i \ensuremath{\subset} k[\hat T]$ induces an open embedding $T=Spec\ k[\hat T]\hookrightarrow U_i$. The action of $T$ on itself by translation extends to an action of $T$ on $U_i$, and to an action of $T$ on $X=\cup U_i$. The open subscheme $U_i$ is isomorphic to an affine plane $Spec\ k[x,y]$. When using such coordinates $x,y$, we require $xy=0$ to be the equation of $V_{i-1,i} \cup V_{i,i+1}$ around $p_i$. The isomorphism $U_i\simeq Spec\ k[x,y]$ is then defined up to the automorphism of $k[x,y]$ that exchanges the two coordinates. \subsection*{Subtori and their fixed locus} \label{sec:subtori} Let $T'\hookrightarrow T$ be a one-dimensional subtorus of $T$, $\hat{T}'$ its character group. The torus $T'$ acts on $X$ by restriction and $U_i\ensuremath{\subset} X$ is an invariant open subset. The action of $T'$ on $U_i$ induces a decomposition $R_i=\sum_{\chi \in \hat{T}'} R_{T',i,\chi}$, where $R_{T',i,\chi}\subset R_i$ is the subvector space on which $T'$ acts with character $\chi$. \\ One shows easily that the fixed locus $X^{T'}$ admits two types of connected components. Some components are isolated fixed points. We let \begin{displaymath} PFix(T')=\{p\in X^{T'},\ p\ \mathrm{isolated}\}. \end{displaymath} The other components are projective lines $V_{i,i+1}\simeq {\mathbb P}^1$ joining two points $p_i,p_{i+1}$ of $X^{T}$. We let \begin{displaymath} LFix(T')=\{ \{p_i,p_{i+1}\},\ p_i,p_{i+1}\ \mathrm{lie\ in\ an\ invariant\ } {\mathbb P}^1\}. \end{displaymath} By construction, \begin{displaymath} X^T=LFix(T')\cup PFix(T').
\end{displaymath} \subsection*{Staircases and Hilbert functions} \label{sec:staircases} A staircase $E\subset {\mathbb N}^2$ is a subset whose complement $C={\mathbb N}^2\setminus E$ satisfies $C+{\mathbb N}^2\subset C$. In our context, the word staircase will stand for finite staircase. By extension, a staircase $E\subset k[x,y]$ is a set of monomials $m_i=x^{a_i}y^{b_i}$ such that the set of exponents $(a_i,b_i)$ is a staircase of ${\mathbb N}^2$. The automorphism of $k[x,y]$ exchanging $x$ and $y$ preserves the staircases. In particular it makes sense to consider staircases in $R_i$, though the isomorphism $R_i\simeq k[x,y]$ is not canonical. \\ A staircase $E\subset R_i$ defines a monomial zero-dimensional subscheme $Z(E)\subset U_i$ whose ideal is generated by the monomials $m\in R_i\setminus E$. A multistaircase is an $r$-tuple $\underline E=(E_1,\dots,E_r)$ of staircases with $E_i\ensuremath{\subset} R_i$. It defines a subscheme $Z(\underline E)=\coprod Z(E_i)$. \\ In our context, a $T'$-Hilbert function is a function $H:\hat{T}'\ensuremath{\rightarrow} {\mathbb N}$ such that $\#H=\sum_{\chi \in \hat{T}'}H(\chi)$ is finite. A $T'$-Hilbert multifunction $\underline H$ is a collection of $T'$-Hilbert functions $H_C$ parametrized by the connected components $C$ of $X^{T'}$. Its cardinality is by definition \begin{displaymath} \#\underline H=\sum_C \#H_C. \end{displaymath} Equivalently, a $T'$-Hilbert multifunction is an $r$-tuple $\underline H=(H_1,\dots,H_r)$ of Hilbert functions such that $H_i=H_{i+1}$ if $\{p_i,p_{i+1}\}\in LFix(T')$. \\ If $Z\subset X$ is a zero-dimensional subscheme fixed under $T'$, then $H^0(Z,{\ensuremath \mathcal O}_Z)$ is a representation of $T'$ which can be decomposed as $\oplus V_{\chi}$ where $V_{\chi}\ensuremath{\subset} H^0(Z,{\ensuremath \mathcal O}_Z)$ is the subspace on which $T'$ acts through $\chi$. The $T'$-Hilbert function associated with $Z$ is by definition $H_{T',Z}(\chi)=\dim V_\chi$. We also define a Hilbert multifunction $\underline H_{T',Z}$ as follows. If $p_i\in PFix(T')$, let $Z_i\ensuremath{\subset} Z$ be the component of $Z$ located at $p_i$ and $H_i=H_{T',Z_i}$. If $\{p_i,p_{i+1}\}\in LFix(T')$, let $Z_i=Z_{i+1}\ensuremath{\subset} Z$ be the component of $Z$ located on $V_{i,i+1}$ and $H_i=H_{i+1}=H_{T',Z_i}$. The Hilbert multifunction associated with $Z$ is \begin{displaymath} \underline H_{T',Z}=(H_1,\dots,H_r). \end{displaymath} By construction, we have the equality \begin{displaymath} \#\underline H_{T',Z} =length(Z). \end{displaymath} A partition of $n\in {\mathbb N}$ is a decreasing sequence $n_1,n_2,\dots$ of integers ($n_i \geq n_{i+1}\geq 0$) with $n_i=0$ for $i>>0$, and $\sum n_i=n$. The number of parts is the number of integers $i$ with $n_i\neq 0$. \\ We will denote by \begin{listecompacte} \item ${\cal{P}}art(n)$ the set of partitions of $n$, ${\cal{P}}art=\coprod {\cal{P}}art(n)$, \item ${\cal{E}}$ the set of staircases of ${\mathbb N}^2$, \item ${\cal{M}\cal{E}}$ the set of multistaircases, \item ${\cal{H}}(T')$ the set of $T'$-Hilbert functions, \item ${\cal{M}\cal{H}}(T')$ the set of $T'$-Hilbert multifunctions. \end{listecompacte} \subsection*{Hilbert schemes and Grassmannians} \label{sec:hilb-scheme-grassmannians} We denote by ${\mathbb H}$ the Hilbert scheme parametrizing the 0-dimensional subschemes of $X$. It is a disjoint union ${\mathbb H}=\coprod {\mathbb H}^d$, where ${\mathbb H}^d$ parametrizes the subschemes of length $d$.
We denote by ${\mathbb H}_i\subset {\mathbb H}$ the open subscheme parametrizing the subschemes whose support is in $U_i$, and by ${\mathbb H}_{i,i+1}\ensuremath{\subset} {\mathbb H}$ the open subscheme parametrizing the subschemes whose support is in $U_i\cup U_{i+1}$. \\ The action of the torus $T$ on $X$ induces an action of $T$ on ${\mathbb H}$. We denote by ${\mathbb H}^T\ensuremath{\subset} {\mathbb H}$ the fixed locus under this action. If $T'\subset T$ is a one-dimensional subtorus, and $\underline H$ is a $T'$-Hilbert multifunction, ${\mathbb H}^{T',\underline H}\ensuremath{\subset} {\mathbb H}$ parametrizes by definition the subschemes $Z$, $T'$ fixed, with $T'$-Hilbert multifunction $\underline H_{T',Z}=\underline H$. Define similarly ${\mathbb H}^{T',H}$ for a $T'$-Hilbert function $H$. \\ We will freely combine the above notations, intersecting the corresponding subschemes when we concatenate the indices. For instance, ${\mathbb H}_i^T={\mathbb H}_i\cap {\mathbb H}^T$, ${\mathbb H}^{T',H,T}={\mathbb H}^{T',H} \cap {\mathbb H}^T$, ${\mathbb H}_{i,i+1}^{T'}={\mathbb H}_{i,i+1}\cap {\mathbb H}^{T'}$, etc. To avoid ambiguity, the formula is \begin{displaymath} {\mathbb H}_{s}^{s_1,\dots,s_k}={\mathbb H}_{s}\cap {\mathbb H}^{s_1}\cap \dots \cap {\mathbb H}^{s_k}, \end{displaymath} where \begin{displaymath} s\in \{i,\{i,i+1\}\},\ s_i\in \{d,T',(T',\underline H),(T',H),T\}. \end{displaymath} \\ If $T'\hookrightarrow T$ is a one-dimensional subtorus and if $(i,\chi,h)\in \{1,\dots,r\}\ensuremath{\times} \hat{T}' \ensuremath{\times} {\mathbb N}$, we denote by \begin{displaymath} {\mathbb G}_{T',i,\chi,h} \end{displaymath} the Grassmannian parametrising the subspaces of $R_{T',i,\chi}$ of codimension $h$. If $H$ is a $T'$-Hilbert function, ${\mathbb G}_{T',i,H}=\prod_{\chi\in \hat{T}'} {\mathbb G}_{T',i,\chi,H(\chi)}$. It is a well defined finite product since ${\mathbb G}_{T',i,\chi,H(\chi)}$ is a point for all but finitely many values of $\chi$. \section{Description of the fixed loci} \label{sec:descr-fixed-loci} Let $T'\hookrightarrow T$ be a one-dimensional subtorus. The goal of this section is to give a description of the irreducible components of ${\mathbb H}^{T'}$ (proposition \ref{prop:descriptionDesComposantesIrred} and the comment preceding it). \begin{thm} \label{thr:composantesDuHilbertGradue} \begin{displaymath} {\mathbb H}_i^{T'}=\bigcup_{H\in {\cal{H}}(T')}{\mathbb H}_i^{T',H} \end{displaymath} is the decomposition of ${\mathbb H}_i^{T'}$ into smooth disjoint irreducible components. \end{thm} \begin{dem} This is proved in \cite{evain04:irreductibiliteDesHilbertGradues}. \end{dem} \begin{rem} For some $H$, ${\mathbb H}_i^{T',H}$ may be empty, so the result is that the irreducible components are in one-to-one correspondence with the set of possible Hilbert functions $H$. Throwing away the empty sets in the above decomposition is possible: There is an algorithmic procedure to detect the emptiness of ${\mathbb H}_i^{T',H}$ (\cite{evain04:irreductibiliteDesHilbertGradues}, remark 23). \end{rem} Now the goal is to prove that ${\mathbb H}_{i,i+1}^{T', H}$ is empty or a product $P$ of projective spaces when $\{p_i,p_{i+1}\}\in LFix(T')$. An embedding $P\ensuremath{\rightarrow} {\mathbb H}^{T'}$ is constructed in the next proposition. Then it will be shown that a non-empty ${\mathbb H}_{i,i+1}^{T',H}\ensuremath{\subset} {\mathbb H}^{T'}$ is the image of such an embedding. Let $\pi=(\pi_1,\pi_2,\dots)\in {\cal{P}}art(d)$ be a partition. Let $(n_1,\dots,n_s,0=n_{s+1})$ be the strictly decreasing sequence obtained from $\pi$ by removing the duplicates.
Denote by $d_l$ the number of indices $j$ with $\pi_j=n_l$. In other words, \begin{displaymath} \pi=(\underbrace{n_1,\dots,n_1}_{d_1\ times},\underbrace{n_2,\dots,n_2}_{d_2\ times},\dots, \underbrace{n_s,\dots,n_s}_{d_s\ times},0,\dots). \end{displaymath} Let $\{p_i,p_{i+1}\}\in LFix(T')$ and $p\in V_{i,i+1}$. Since $V_{i,i+1}\subset U_i\cup U_{i+1}$, one may suppose by symmetry that $p\in U_i\simeq Spec\ k[x,y]$. Exchanging the roles of $x$ and $y$, we may suppose that $V_{i,i+1}$ is defined by $y=0$ in $U_i$. We denote by $Z_{p,k}$ the subscheme with ideal $(x-x(p),y^k)$. Intrinsically, it is characterized as the only length $k$ curvilinear subscheme $Z\subset X$ supported by $p$, $T'$-fixed, such that $Z\cap V_{i,i+1}=p$ as a schematic intersection. The rational map \begin{eqnarray*} \phi_{\pi}:Sym^{d_1}V_{i,i+1}\ensuremath{\times} \dots \ensuremath{\times} Sym^{d_s}V_{i,i+1} &\dashrightarrow& {\mathbb H}^{d,T'}\\ (p_{11},\dots,p_{1d_1}),\dots,(p_{s1},\dots,p_{sd_s}) &\mapsto& \coprod_{i\leq s,j\leq d_i} Z_{p_{ij},n_i} \end{eqnarray*} is well defined on the locus where all the points $p_{a,b}\in V_{i,i+1}$ are distinct. In fact, it is regular everywhere. \begin{prop} The map $\phi_{\pi}$ extends to a regular embedding $Sym^{d_1}V_{i,i+1}\ensuremath{\times} \dots\ensuremath{\times} Sym^{d_s}V_{i,i+1}\ensuremath{\rightarrow} {\mathbb H}^{d,T'}$. \end{prop} \begin{dem} The extension property is local, thus it suffices to check it on an open covering. The covering $V_{i,i+1}=(V_{i,i+1}\cap U_i)\cup (V_{i,i+1}\cap U_{i+1})=W_i\cup W_{i+1}$ of $V_{i,i+1}$ induces a covering of the symmetric products $Sym^d V_{i,i+1}$. All the open sets in this covering play the same role. Thus by symmetry, it suffices to define an embedding \begin{displaymath} \psi_\pi:Sym^{d_1}W_i\ensuremath{\times} \dots \ensuremath{\times} Sym^{d_s} W_i \ensuremath{\rightarrow} {\mathbb H}^{T'} \end{displaymath} which generically coincides with $\phi_\pi$. \\ Let $Z(p_{11},\dots,p_{sd_s})\subset U_i$ be the subscheme defined by the ideal \begin{displaymath} I_Z=(y^{n_1},\ y^{n_2}\prod_{\beta \leq d_1}(x-x(p_{1\beta})),\ \dots,\ y^{n_{s}} \prod_{\alpha<s}^{\beta\leq d_{\alpha}} (x-x(p_{\alpha\beta})),\ \prod_{\alpha\leq s}^{\beta\leq d_{\alpha}} (x-x(p_{\alpha\beta}))\ ). \end{displaymath} Let \begin{eqnarray*} \psi_\pi:Sym^{d_1}W_i\ensuremath{\times} \dots \ensuremath{\times} Sym^{d_s} W_i &\ensuremath{\rightarrow}& {\mathbb H}^{T'}\\ (p_{11},\dots,p_{1d_1}),\dots,(p_{s1},\dots,p_{sd_s})&\mapsto& Z(p_{11},\dots,p_{sd_s}). \end{eqnarray*} Clearly, $Z(p_{11},\dots,p_{sd_s})$ is $T'$-fixed since $T'$ does not act on $x$. Thus, $\psi_\pi$ is a well defined morphism which extends $\phi_\pi$. Now, for $1\leq \alpha \leq s$, the transporter $(I_Z+y^{n_{\alpha+1}+1}:y^{n_{\alpha+1}})$ defines a subscheme $Z_\alpha$ of $V_{i,i+1}$ of length $d_1+\dots+d_{\alpha}$. Since $Z_{\alpha-1}\subset Z_{\alpha}$, the residual scheme $Z'_{\alpha}=Z_{\alpha}\setminus Z_{\alpha-1}$ is well defined for $\alpha \geq 2$. Consider the morphism \begin{eqnarray*} \rho:Im(\psi_\pi) &\ensuremath{\rightarrow} & Sym^{d_1}W_i\ensuremath{\times} \dots \ensuremath{\times} Sym^{d_s}W_i\\ Z&\mapsto& (Z_1,Z'_2,Z'_3,\dots,Z'_s). \end{eqnarray*} The composition $\rho\circ \psi_{\pi}$ is the identity. Thus, $\psi_{\pi}$ is an embedding, as expected. \end{dem} Remark that the $T'$-Hilbert function $H_{T',Z}$ is constant when $Z$ moves in a connected component of ${\mathbb H}^{T'}$.
In particular, it is constant on $Im(\phi_{\pi})$ and $\phi_\pi$ factorizes: \begin{displaymath} \phi_\pi:\prod_{\alpha \leq s} Sym^{d_\alpha}V_{i,i+1} \ensuremath{\rightarrow} {\mathbb H}_{i,i+1}^{T',H_\pi} \end{displaymath} for a uniquely defined $T'$-Hilbert function $H_{\pi,T',i,i+1}$ which we denote by $H_{\pi}$ for simplicity. \\ \begin{prop} \label{prop:Hii+1=produitDeProjectifs} Let $\{p_i,p_{i+1}\}\in LFix(T')$, $H$ a $T'$-Hilbert function. If $H=H_\pi$ for some $\pi\in {\cal{P}}art$, then ${\mathbb H}_{i,i+1}^{T',H}$ is a product of projective spaces, thus irreducible. If $H\neq H_{\pi}$ then ${\mathbb H}_{i,i+1}^{T',H}=\emptyset$. \end{prop} \begin{dem} If $H=H_\pi$ for some $\pi\in {\cal{P}}art$, it suffices to prove that \begin{displaymath} \phi_\pi:\prod_{\alpha \leq s} Sym^{d_\alpha}V_{i,i+1} \ensuremath{\rightarrow} {\mathbb H}_{i,i+1}^{T',H_\pi} \end{displaymath} is an isomorphism. We already know that $\phi_{\pi}$ is an embedding, thus we only need surjectivity. Let $Z\in {\mathbb H}_{i,i+1}^{T',H_\pi}$. We may suppose without loss of generality that $Z\subset U_i=Spec\ k[x_i,y_i]$ and that $V_{i,i+1}$ is defined by $y_i=0$ in $U_i$. Since the ideal $I$ of $Z$ is $T'$-invariant, it is generated by elements $y_i^lP(x_i)$, where $P$ is a polynomial. The power $l$ being fixed, the polynomials $P$ such that $y_i^lP(x_i)\in I$ form an ideal in $k[x_i]$ generated by a polynomial $P_l$. The condition for $I$ to be an ideal implies the divisibility relation $P_m|P_l$ for $l<m$. Since $Z$ is $0$-dimensional, $P_l=1$ for $l>>0$. Let $t$ be the smallest integer such that $P_t=1$: $1=P_t|P_{t-1}|\dots|P_0$. In particular the sequence \begin{displaymath} D=(D_1,D_2,\dots)=(deg(P_0),deg(P_1),\dots) \end{displaymath} is a partition. Let $D^\nu\in {\cal{P}}art$ be the partition conjugate to $D$, i.e. $D^\nu(k)=\#\{j\ s.t. \ D_j\geq k\}$. By construction, $D^\nu=\pi$. Let $(d_1,d_2,\dots, d_s)$ be the list obtained from the list $(D_{t}-D_{t+1},D_{t-1}-D_{t},\dots,D_1-D_2)$ by removing the zeros. Then $d_\alpha=deg(P_{j-1})-deg(P_{j})$ for some $j$ and we let $p_{\alpha,1},\dots,p_{\alpha,d_{\alpha}}$ be the zeros of the polynomial $\frac{P_{j-1}}{P_{j}}$. By definition of $\phi_\pi$, we have the equality $Z=\phi_{\pi}(p_{11},\dots,p_{sd_s})$, which shows the expected surjectivity.\\ If ${\mathbb H}_{i,i+1}^{T',H}$ is non-empty, it contains a subscheme $Z\ensuremath{\subset} X$ fixed under the action of $T$. Such a $Z=Z(E_i)\cup Z(E_{i+1})$ is characterized by a pair $(E_i,E_{i+1})$ of staircases in $R_i$ and $R_{i+1}$. Suppose as before that $V_{i,i+1}$ is defined by $y_i=0$ around $p_i$ and by $y_{i+1}=0$ around $p_{i+1}$. Using these coordinates, $E_i$ (resp.\ $E_{i+1}$) is associated with a partition $\pi^i$ (resp.\ $\pi^{i+1}$) defined by $x_i^ay_i^b\in E_i \Leftrightarrow b<\pi^i_{a+1}$ (resp.\ $x_{i+1}^ay_{i+1}^b\in E_{i+1} \Leftrightarrow b<\pi^{i+1}_{a+1}$). Let $\pi=((\pi^i)^\nu+(\pi^{i+1})^\nu)^\nu$. Then \begin{displaymath} H=H_{T',Z}=H_{T',Z(E_i)}+H_{T',Z(E_{i+1})}=H_{\pi^i}+H_{\pi^{i+1}}=H_\pi. \end{displaymath} \end{dem} Knowing that ${\mathbb H}_i^{T',H}$ is empty or irreducible (theorem \ref{thr:composantesDuHilbertGradue}) and that ${\mathbb H}_{i,i+1}^{T',H}$ is empty or a product of projective spaces (proposition \ref{prop:Hii+1=produitDeProjectifs}), we obtain easily the irreducible components of ${\mathbb H}^{d,T'}$: According to the last but one item of the next proposition, ${\mathbb H}^{T',\underline H}$ is empty or irreducible.
Thus, the last item is the decomposition of ${\mathbb H}^{d,T'}$ into irreducible components (in fact into empty or irreducible components, and we know which terms in the union are empty). \begin{prop}\label{prop:descriptionDesComposantesIrred} \begin{listecompacte} \item ${\mathbb H}^{T}=\coprod_{\underline E \in {\cal{M}\cal{E}} } Z(\underline E)$. \item ${\mathbb H}^{T'}=\prod_{p_i\in PFix(T')} {\mathbb H}_i^{T'}\ensuremath{\times} \prod_{\{p_i,p_{i+1}\}\in LFix(T')}\ {\mathbb H}_{i,i+1}^{T'}$. \item ${\mathbb H}^{T',\underline H}=\prod_{p_i\in PFix(T')}{\mathbb H}_i^{T',H_i}\ensuremath{\times} \prod_{\{p_i,p_{i+1}\}\in LFix(T')}{\mathbb H}_{i,i+1}^{T',H_i}$. \item ${\mathbb H}^{d,T'}=\coprod_{\underline H\in {\cal{M}\cal{H}}(T'),\#\underline H=d}{\mathbb H}^{T',\underline H}$. \end{listecompacte} \end{prop} \begin{dem} The first point is well known. As to the second point, since the support of a subscheme $Z\ensuremath{\subset} X$ parametrised by $p\in {\mathbb H}_i^{T'}$ (resp. by $p\in {\mathbb H}_{i,i+1}^{T'}$) is $p_i$ (resp. is on $V_{i,i+1}$) and since the various $p_i,V_{i,i+1}$ do not intersect, the union morphism is a well defined embedding \begin{displaymath} \prod_{p_i\in PFix(T')} {\mathbb H}_i^{T'}\ensuremath{\times} \prod_{\{p_i,p_{i+1}\}\in LFix(T')}\ {\mathbb H}_{i,i+1}^{T'} \ensuremath{\rightarrow} {\mathbb H}^{T'}. \end{displaymath} Since the support of a subscheme $Z\in {\mathbb H}^{T'}$ is included in $X^{T'}=PFix(T')\cup LFix(T')$, the surjectivity is obvious. The third point follows from the second. The last point is easy. \end{dem} \section{Equivariant Chow rings of products of Grassmannians} \label{sec:equiv-cohom-grassm} Let $V$ be a vector space with basis ${\ensuremath \mathcal B}=\{e_0,\dots,e_n\}$. Let $\chi_0,\dots,\chi_n \in \hat{T}$ be distinct characters of $T$. These characters define an action of $T$ on $V$ by the formula $t.(v_0,\dots,v_n)=(\chi_0(t)v_0,\dots,\chi_n(t)v_n)$. The $T$-action on $V$ induces a $T$-action on the Grassmannian ${\mathbb G}(d,V)$ parametrising the $d$-dimensional quotients of $V$. In this section, we compute the $T$-equivariant Chow ring of ${\mathbb G}(d,V)$ and of products of such Grassmannians. \subsection*{Equivariant Chow ring of ${\mathbb G}(d,V)$} \label{sec:equiv-cohom-ggd} First, we recall the definition of equivariant Chow rings in the special case of a $T$-action (to keep the conventions of the paper uniform, we work with rational Chow groups, though this is not necessary in this section).\\ Let $U=(k^r\setminus 0)\ensuremath{\times} (k^r\setminus 0)$. The torus $T\simeq k^*\ensuremath{\times} k^*$ acts on $U$ by the formula $(t_1,t_2)(v,w)=(t_1v,t_2w)$. If ${\cal X}$ is a $T$-variety, the quotient $(U\ensuremath{\times} {\cal X})/T=U\ensuremath{\times}^T {\cal X}$ admits a projection to $U/T={\mathbb P}^{r-1}\ensuremath{\times} {\mathbb P}^{r-1}$. The Chow group $A_{l+2r-2}(U\ensuremath{\times}^T {\cal X})$ does not depend on the choice of $r$ provided that $r$ is big enough (explicitly $r>\dim {\cal X}-l$), and this Chow group is by definition the equivariant Chow group $A_l^{T}({\cal X})$. In case ${\cal X}$ is smooth, we let $A^l_T({\cal X})=A_{\dim {\cal X} -l}^T({\cal X})$ and this makes $A^*_T({\cal X})=\oplus_{l\geq 0}A^l_T({\cal X})$ a ring. \\ A $T$-equivariant vector bundle $F\ensuremath{\rightarrow} {\cal X}$ defines equivariant Chern classes: $F\ensuremath{\times}^T U$ is a vector bundle on ${\cal X}\ensuremath{\times}^T U$ and by definition $c_i^T(F)=c_i(F\ensuremath{\times}^T U)$.
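As a direct illustration of these definitions (a standard computation), take ${\cal X}=Spec\ k$: the quotient is $U\ensuremath{\times}^T Spec\ k=U/T={\mathbb P}^{r-1}\ensuremath{\times} {\mathbb P}^{r-1}$, so that for a fixed degree $l$ and $r>l$, \begin{displaymath} A^l_T(Spec\ k)=A^l({\mathbb P}^{r-1}\ensuremath{\times} {\mathbb P}^{r-1}), \end{displaymath} which is the degree $l$ part of ${\mathbb Q}[t_1,t_2]/(t_1^r,t_2^r)$, where $t_1,t_2$ denote the two hyperplane classes. Letting $r$ grow, one gets $A^*_T(Spec\ k)\simeq {\mathbb Q}[t_1,t_2]$.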
\\ The equivariant Chow ring $A^*_T(Spec\ k)$ of a point is a polynomial ring by the above description. A more intrinsic description is as follows. A character $\chi\in \hat{T}$ defines canonically an equivariant line bundle $V_{\chi}$ over $Spec\ k$. The map \begin{eqnarray*} \hat{T} &\ensuremath{\rightarrow}& A^*_T(Spec\ k)\\ \chi &\mapsto& c_1^T(V_\chi) \end{eqnarray*} extends to an isomorphism \begin{displaymath} S=Sym(\hat{T}\otimes {\mathbb Q})\ensuremath{\rightarrow} A^*_T(Spec\ k) \end{displaymath} where $Sym(\hat{T}\otimes {\mathbb Q})$ is the symmetric ${\mathbb Q}$-algebra over $\hat{T}$. \\ The morphism ${\cal X}\ensuremath{\rightarrow} Spec\ k$ induces by pullback an $S$-algebra structure on $A_T^*({\cal X})$. Let us now turn to the case ${\cal X}={\mathbb G}(d,V)$. We denote by ${\ensuremath \mathcal O}(\chi)$ the line bundle $V_{\chi}\ensuremath{\times}^T U\ensuremath{\rightarrow} U/T$. One checks easily that ${\mathbb G}(d,V)\ensuremath{\times}^T U\ensuremath{\rightarrow} U/T$ is the Grassmann bundle ${\mathbb G}(d,{\ensuremath \mathcal O}(\chi_0)\oplus \dots \oplus {\ensuremath \mathcal O}(\chi_n))$. The universal rank $d$ quotient bundle $Q_T\ensuremath{\rightarrow} {\mathbb G}(d,{\ensuremath \mathcal O}(\chi_0)\oplus \dots \oplus {\ensuremath \mathcal O}(\chi_n))$ over the Grassmann bundle has total space $Q_T=Q\ensuremath{\times} ^T U$ where $Q\ensuremath{\rightarrow} {\mathbb G}(d,V)$ is the universal quotient bundle over the Grassmannian. In particular $c_i^T(Q)=c_i(Q_T)$. If $\lambda=(\lambda_1,\dots,\lambda_{\dim V-d},0,\dots)\in {\cal{P}}art$, let us denote by \begin{displaymath} D_{\lambda}=det(c_{\lambda_i+s-i}^T(Q))_{1\leq i,s\leq \dim V-d} \end{displaymath} the associated Schur polynomial in the equivariant Chern classes of $Q$. \begin{prop} The elements $D_{\lambda}$ generate the $S$-module $A^*_T({\mathbb G}(d,V))$. \end{prop} \begin{dem} Let $\delta\in {\mathbb N}$, $A^{\leq \delta}_T \subset A^*_T({\mathbb G}(d,V))$ be the submodule of elements of degree at most $\delta$. It suffices to prove that every class in $A^{\leq \delta}_T$ is a linear combination of $D_{\lambda}$'s with coefficients in $S$. By definition, $A^{\leq \delta}_T = A^{\leq \delta}(U\ensuremath{\times}^T {\mathbb G}(d,V))$ with $U=(k^r\setminus 0)\ensuremath{\times}(k^r\setminus 0)$ and $r>>0$. As explained, the quotient $ U\ensuremath{\times}^T {\mathbb G}(d,V)$ is a Grassmann bundle over $U/T$ and the result follows from \cite{fulton84:_Intersection_theory}, Proposition 14.6.5 and Example 14.6.4, which describe the Chow ring of Grassmann bundles. \end{dem} \begin{rem}\label{rem:nbGenerateursFinis} The number of generators in the proposition is finite since $D_{\lambda}=0$ for $\lambda_1>d$. \end{rem} To realize $A^*_T({\mathbb G}(d,V))$ as an explicit $S$-subalgebra of $S^{{\mathbb G}(d,V)^T}$, we recall the following result from \cite{brion97:_equivariant_chow_groups}: \begin{prop} If ${\cal X}$ is a projective non-singular variety, the inclusion map $i:{\cal X}^T\ensuremath{\rightarrow} {\cal X}$ induces an injective $S$-algebra homomorphism $i_T^*:A_T^*({\cal X})\ensuremath{\rightarrow} A_T^*({\cal X}^T)$. \end{prop} In fact, Brion proved the injectivity of $i_T^*$ when ${\cal X}$ is a smooth filtrable variety, and projective varieties are filtrable. \\ In the present situation, ${\cal X}={\mathbb G}(d,V)$.
A point $p_{\Sigma} \in {\mathbb G}(d,V)^T$ is characterized by a subset $\Sigma=\{e_{i_1},\dots,e_{i_d}\}\subset {\ensuremath \mathcal B}$ of cardinality $d$: if $W\subset V$ is the vector space generated by $\{e_j, e_j\notin \Sigma\}$, then \begin{displaymath} p_{\Sigma}=V/W. \end{displaymath} Let $\sigma_i$ be the $i$-th elementary symmetric polynomial in $d$ variables. Let \begin{displaymath} c_{i,\Sigma}=\sigma_i(\chi_{i_1},\dots, \chi_{i_d})\in S \end{displaymath} and \begin{displaymath} c_{i}=(c_{i,\Sigma})_{\Sigma\subset {\ensuremath \mathcal B},\#\Sigma=d}\in S^{{\mathbb G}(d,V)^T}. \end{displaymath} \begin{prop} Let $i^*_T:A^*_T({\mathbb G}(d,V))\ensuremath{\rightarrow} A^*_T({\mathbb G}(d,V)^T)=S^{{\mathbb G}(d,V)^T}$ be the restriction morphism induced by the inclusion $i:{\mathbb G}(d,V)^T\hookrightarrow {\mathbb G}(d,V)$. Then $i^*_T(c_i^T(Q))=c_{i}$. \end{prop} \begin{dem} The fiber of the universal quotient bundle $Q\ensuremath{\rightarrow} {\mathbb G}(d,V)$ over $p_\Sigma$, $\Sigma=\{e_{i_1},\dots,e_{i_d}\}$, is a direct sum of one-dimensional representations with characters $\chi_{i_1},\dots,\chi_{i_d}$, thus its equivariant total Chern class is $c^T(Q)=\prod_{j\leq d}(1+\chi_{i_j})$. \end{dem} \begin{nt} If $\lambda=(\lambda_1,\dots,\lambda_{\dim V-d},0,\dots) \in {\cal{P}}art$, let \begin{displaymath} \Delta_{\lambda}=det(c_{\lambda_i+s-i})_{1\leq i,s\leq \dim V-d}\in S^{{\mathbb G}(d,V)^T}. \end{displaymath} As in remark \ref{rem:nbGenerateursFinis}, only a finite number of $\Delta_\lambda$ are non-zero. \end{nt} \begin{coro}\label{prop:CohomoEquivGrassmannienneAuxPointsFixes} The $S$-algebra $A_T^*({\mathbb G}(d,V))\subset S^{{\mathbb G}(d,V)^T}$ is generated as an $S$-module by the elements $\Delta_{\lambda}$. \end{coro} \begin{dem} The restriction morphism $A_T^*({\mathbb G}(d,V))\ensuremath{\rightarrow} A_T^*({\mathbb G}(d,V)^T)=S^{{\mathbb G}(d,V)^T}$ is injective and gives the inclusion. Since $c_i^T(Q)$ restricts to $c_i$, the generators $D_{\lambda}$ of the $S$-module $A_T^*({\mathbb G}(d,V))$ restrict to $\Delta_\lambda$. The result follows. \end{dem} \subsection*{Products of Grassmannians} \label{sec:prod-grassm} In the sequel, we will need to compute equivariant Chow rings of products of Grassmannians, and of products in general. The following result explains how to deal with these products. \begin{nt} If $P$ and $Q$ are two finite sets and $M\subset S^P$ and $N\subset S^Q$ are two $S$-modules, we denote by $M\otimes N$ the $S$-submodule of $S^{P\ensuremath{\times} Q}$ which is the image of $M\otimes N$ under the natural isomorphism $ S^{P\ensuremath{\times} Q}\simeq S^P\otimes S^Q$. \end{nt} \begin{prop}\label{prop:ChowDesProduitsRestreintAuxPointsFixes} Let ${X}$ and $Y$ be smooth projective $T$-varieties with a finite number of fixed points. Let $A^*_T(X)\subset S^{X^T}$ and $A^*_T(Y)\subset S^{Y^T}$ be their equivariant Chow rings. Then $A^*_T(X\ensuremath{\times} Y)\subset S^{(X\ensuremath{\times} Y)^T}$ identifies with $A^*_T(X) \otimes A^*_T(Y)$. \end{prop} \begin{lm}\label{lm:ChowEquivariantDuProduit} If $X$ and $Y$ are two smooth $T$-varieties with a finite number of fixed points, then $A^*_T(X\ensuremath{\times} Y)=A^*_T(X)\otimes A^*_T(Y)$. \end{lm} \begin{dem} Let $F$ be a smooth variety with a cellular decomposition and $B$ a smooth variety. According to \cite{edidinGraham97:CharacteristicClasses}, prop.
2, if ${\ensuremath \mathcal F}\stackrel{\pi}{\ensuremath{\rightarrow}}B$ is a locally trivial fibration with fiber $F$, there is a non-canonical isomorphism of $A^*(B)$-modules \begin{displaymath} \phi:A^*({\ensuremath \mathcal F})\ensuremath{\rightarrow} A^*(B) \otimes A^*(F). \end{displaymath} Explicitly, let us denote by $f_i\in A^*(F)$ the classes of the closures of the cells of $F$. They form a basis of $A^*(F)$. If $F_i\in A^*({\ensuremath \mathcal F})$ is such that $F_i\cdot F=f_i$ then \begin{displaymath} \phi^{-1}(b\otimes f_i)=\pi^*b\cdot F_i. \end{displaymath} In our case, $X$ and $Y$ admit cellular decompositions whose cells are the Bialynicki-Birula strata associated with the action of a general one-parameter subgroup $T'\hookrightarrow T$. Let us denote by $V_i \subset X$ and $W_i\subset Y$ the closures of these cells. Let \begin{displaymath} X_i=V_i\ensuremath{\times}^T U \subset X\ensuremath{\times}^T U \end{displaymath} and \begin{displaymath} Y_i=W_i\ensuremath{\times}^T U \subset Y\ensuremath{\times}^T U. \end{displaymath} By the above result about fibrations, we have: \begin{displaymath} A_T^*(X)\simeq A^*(X)\otimes S, \ \ A_T^*(Y)\simeq A^*(Y)\otimes S, \end{displaymath} and the isomorphisms identify $[X_i]$ with $[V_i]\otimes 1$, and $[Y_i]$ with $[W_i]\otimes 1$. The left and bottom arrows of the diagram \begin{eqnarray*} \begin{array}{ccc} (X\ensuremath{\times} Y\ensuremath{\times} U)/T& \ensuremath{\rightarrow}& X\ensuremath{\times}^T U\\ \ensuremath{\downarrow}& & \ensuremath{\downarrow}\\ Y\ensuremath{\times}^T U& \ensuremath{\rightarrow}& U/T \end{array} \end{eqnarray*} yield an identification \begin{displaymath} \psi:A_T^*(X\ensuremath{\times} Y)\ensuremath{\rightarrow} A_T^*(Y)\otimes A^*(X)\ensuremath{\rightarrow} A^*(Y)\otimes S\otimes A^*(X). \end{displaymath} Consider the natural $S$-module morphism (see \cite{edidinGraham:constructionDesChowsEquivariants}) \begin{displaymath} K:A_T^*(X)\otimes _S A_T^*(Y) \ensuremath{\rightarrow} A_T^*(X\ensuremath{\times} Y). \end{displaymath} The composition \begin{displaymath} \psi\circ K: A_T^*(X)\otimes _S A_T^*(Y) \ensuremath{\rightarrow} A^*(X)\otimes S\otimes A^*(Y) \end{displaymath} sends the basis $[X_i]\otimes [Y_j]$ to the basis $[V_i]\otimes 1\otimes [W_j]$, thus $K$ is an isomorphism. \end{dem} Now the proposition follows from the lemma and the commutativity of the following diagram. \begin{eqnarray*} \begin{array}{ccc} A_T^*(X)\otimes A_T^*(Y)& \ensuremath{\rightarrow} & A_T^*(X\ensuremath{\times} Y)\\ \downarrow i_X^*\otimes i_Y^*& & \ensuremath{\downarrow} i_{X\ensuremath{\times} Y}^*\\ A_T^*(X^T) \otimes A_T^*(Y^T)& \ensuremath{\rightarrow}& A_T^*(X^T\ensuremath{\times} Y^T) \end{array} \end{eqnarray*} \section{Chow rings of graded Hilbert schemes} \label{sec:equiv-cohom-equiv-hilbert-schemes} Let $R=k[x,y]$. In this section, the toric variety $X$ is not projective since we consider the case $X=Spec\ R$. The torus $T\simeq k^*\ensuremath{\times} k^*$ acts on $X$ by \begin{displaymath} (t_1,t_2).(x^\alpha y^\beta)=(t_1x)^\alpha (t_2y)^\beta. \end{displaymath} Let $T'\hookrightarrow T$ be a one-dimensional subtorus such that $X^{T'}=\{(0,0)\}$. Let $H\in {\cal{H}}(T')$ be a Hilbert function. The aim of this section is the computation of the image of the restriction morphism $A_T^*({\mathbb H}^{T',H})\ensuremath{\rightarrow} A_T^*({\mathbb H}^{T',H,T})$ (Corollary \ref{coro:descriptionRestrictionUsingGrassmannians}).
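To fix ideas on the grading: for the diagonal subtorus $T'=\{(t,t)\}$, identifying $\hat{T}'$ with ${\mathbb Z}$, the graded piece $R_\chi$, $\chi=n\geq 0$, is the space of homogeneous polynomials of total degree $n$ (of dimension $n+1$), a $T'$-stable ideal is a homogeneous ideal in the usual sense, and the Hilbert function records the dimensions $H(n)=\dim (R/I)_n$. This is the graded case treated by King and Walter, which reappears in the proof below.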
A point $p\in {\mathbb H}^{T',H}$ parametrizes a $T'$-stable ideal \begin{displaymath} I=\bigoplus_{\chi \in \hat{T}'}I_\chi\ensuremath{\subset} R, \end{displaymath} where $T'$ acts with character $\chi$ on $I_{\chi}$. There is a $T$-equivariant embedding \begin{eqnarray*} {\mathbb H}^{T',H}&\stackrel{l}{\hookrightarrow}&{\mathbb G}_{T',H}=\prod_{\chi\in \hat{T}'}{\mathbb G}(H(\chi),R_{\chi}) \\ I&\mapsto& (I_{\chi}). \end{eqnarray*} \begin{prop} \begin{listecompacte} \item $l^*:A^*({\mathbb G}_{T',H})\ensuremath{\rightarrow} A^*({\mathbb H}^{T',H})$ is surjective. \item $l_T^*:A^*_T({\mathbb G}_{T',H})\ensuremath{\rightarrow} A^*_T({\mathbb H}^{T',H})$ is surjective. \end{listecompacte} \end{prop} \begin{dem} The surjectivity of $l^*$ has been shown by King and Walter \cite{king_walter95:generateurs_anneaux_chow_espace_modules} when $T'=\{(t,t)\}$. Their argument is valid for any $T'$ with minor modifications. We recall briefly their method, which uses ideas from \cite{ellingsrud-stromme93:towardsTheChowRingOfPP2}. Let ${\cal S}$ be an associative $k$-algebra and $M$ be a fine moduli space whose closed points parametrize a class ${\ensuremath \mathcal C}$ of ${\cal S}$-modules. Denote by ${\ensuremath \mathcal A}$ the universal ${\cal S}\otimes {\ensuremath \mathcal O}_M$-module associated with the moduli space. King and Walter exhibit generators of $A^*(M)$ when ${\ensuremath \mathcal A}$ admits a nice resolution and some cohomological conditions are satisfied.\\ In the case ${\cal S}=R$, $T'=\{(t,t)\}$, $M={\mathbb H}^{T',H}$, ${\ensuremath \mathcal I}$ the universal ideal over $M$ and ${\ensuremath \mathcal A}=(R\otimes {\ensuremath \mathcal O}_{{\mathbb H}^{T',H}} )/{\ensuremath \mathcal I}=\oplus {\ensuremath \mathcal A}_n$, the resolution is \begin{displaymath} 0\ensuremath{\rightarrow} \bigoplus_n R(-n-2)\otimes _k {\ensuremath \mathcal A}_n \ensuremath{\rightarrow} \bigoplus _n R(-n-1)^2\otimes _k {\ensuremath \mathcal A}_n \ensuremath{\rightarrow} \bigoplus _n R(-n)\otimes _k {\ensuremath \mathcal A}_n \ensuremath{\rightarrow} {\ensuremath \mathcal A} \ensuremath{\rightarrow} 0. \end{displaymath} Consider now a general $T'$. For $\chi \in \hat{T}'$, we denote by $R_{\chi}\ensuremath{\subset} R$ the subvector space on which $T'$ acts through $\chi$ and by $R(\chi)$ the $\hat T'$-graded $R$-module defined by $R(\chi)_{\chi'}=R_{\chi+\chi'}$. Let as above ${\ensuremath \mathcal A}=(R\otimes {\ensuremath \mathcal O}_{{\mathbb H}^{T',H}})/{\ensuremath \mathcal I}=\oplus_{\chi \in \hat{T}'}{\ensuremath \mathcal A}_\chi$. The torus $T'$ acts on $x$ and $y$ with characters $\chi_x,\chi_y$. Multiplications by $x$ and $y$ define morphisms $\xi:{\ensuremath \mathcal A}_\chi\ensuremath{\rightarrow} {\ensuremath \mathcal A}_{\chi+\chi_x}$ and $\eta:{\ensuremath \mathcal A}_\chi\ensuremath{\rightarrow} {\ensuremath \mathcal A}_{\chi+\chi_y}$. The resolution of ${\ensuremath \mathcal A}$ is: \begin{eqnarray*} 0\ensuremath{\rightarrow} \bigoplus_{\chi\in\hat T'}R(-\chi-\chi_x-\chi_y)\otimes _k {\ensuremath \mathcal A}_\chi \stackrel{\alpha}{\ensuremath{\rightarrow}}\\ \bigoplus _{\chi\in\hat T'} (R(-\chi-\chi_x)\oplus R(-\chi-\chi_y))\otimes _k {\ensuremath \mathcal A}_\chi \stackrel{\beta}{\ensuremath{\rightarrow}} \bigoplus _{\chi\in\hat T'} R(-\chi)\otimes _k {\ensuremath \mathcal A}_\chi \ensuremath{\rightarrow} {\ensuremath \mathcal A} \ensuremath{\rightarrow} 0.
\end{eqnarray*} where the morphisms are \begin{displaymath} \alpha=\left( \begin{array}{c} -y\otimes 1 +1 \otimes \eta\\ x\otimes 1 - 1 \otimes \xi \end{array} \right), \beta=(x\otimes 1-1\otimes \xi\ \ y\otimes 1-1\otimes \eta). \end{displaymath} With this resolution, we can follow the rest of the argument of \cite{king_walter95:generateurs_anneaux_chow_espace_modules} to conclude that $A^*({\mathbb H}^{T',H})$ is generated by the Chern classes $c_i({\ensuremath \mathcal A}_\chi)$, hence $l^*$ is surjective. As to the second point, remark that the morphism $l^*$ is obtained from $l_T^*$ by applying the functor $\cdot\otimes_S S/S^+$, where $S^+\ensuremath{\subset} S$ denotes the ideal generated by the homogeneous elements of positive degree. Since $l^*$ is surjective, it follows from the graded Nakayama lemma that $l_T^*$ is surjective. \end{dem} The commutative diagram \begin{displaymath} \begin{array}{ccc} {\mathbb H}^{T',H}& \stackrel{l}{\hookrightarrow}& {\mathbb G}_{T',H}\\ j\uparrow& & \uparrow m\\ {\mathbb H}^{T',H,T}& \stackrel{n}{\hookrightarrow}& {\mathbb G}_{T',H}^T \end{array} \end{displaymath} induces a map on the level of equivariant Chow rings. Using the surjectivity of $l_T^*$, we get: \begin{coro}\label{coro:descriptionRestrictionUsingGrassmannians} $Im\ j_T^*=Im\ n_T^*m_T^*$. \end{coro} \section{The equivariant Chow ring of ${\mathbb H}^d$} \label{sec:conclusion-proof} Let $T'\ensuremath{\subset} T$ be a one-dimensional subtorus. In this section, we define finite $S$-modules \begin{displaymath} {M_{T',i,H_i|}}\subset S^{{\mathbb H}^{T',H_i,T}_i} \end{displaymath} and \begin{displaymath} M_{T',i,i+1,H_i} \subset S^{{\mathbb H}_{i,i+1}^{T',H_i,T}} \end{displaymath} with explicit generators and we prove the formula: \begin{thm} \label{thr:description du Chow avec produit tensoriel} \begin{displaymath} A_T^*({\mathbb H}^d)=\bigcap_{T'\subset T}\bigoplus^{\underline H\in {\cal{M}\cal{H}}(T')}_{ \#\underline H=d} (\ \ \bigotimes_{p_i\in PFix(T')}M_{T',i,H_i|}\bigotimes_{\{p_i,p_{i+1}\}\in LFix(T')}M_{T',i,i+1,H_i}\ \ ) \end{displaymath} \end{thm} \begin{dem} A large part of the proof consists in collecting the results from the preceding sections using the appropriate notations. \\ Let $T'$ be a one-dimensional subtorus of $T$. Let $p_i\in PFix(T')$. Denote by ${\ensuremath \mathcal P}_d(R_{T',i,\chi})$, $\chi\in \hat{T}'$, the set of subsets of monomials of $R_{T',i,\chi}$ of cardinality $d$. A set of monomials $Z\in {\ensuremath \mathcal P}_d(R_{T',i,\chi})$ defines a point $p_Z\in {\mathbb G}_{T',i,\chi,d}^T$ as explained in the preceding sections: the subspace $V_Z\ensuremath{\subset} R_{T',i,\chi}$ associated with $p_Z$ is generated by the monomials $m\in R_{T',i,\chi}\setminus Z$. If $m\in Z$, it is an eigenvector for the action of $T$ and we denote by $\chi_m$ the associated character. Denote by \begin{displaymath} c_{T',i,\chi,d,j,Z}=\sigma_j(\chi_m,m\in Z)\in S \end{displaymath} the $j$-th elementary symmetric polynomial in $d$ variables evaluated on the $\chi_m$, and by \begin{displaymath} c_{T',i,\chi,d,j}=(c_{T',i,\chi,d,j,Z})_{Z\in {\ensuremath \mathcal P}_d(R_{T',i,\chi})}\in S^{{\ensuremath \mathcal P}_d(R_{T',i,\chi})}=S^{{\mathbb G}_{T',i,\chi,d}^T}.
\end{displaymath} For $\lambda=(\lambda_1,\dots,\lambda_{\dim R_{T',i,\chi}-d})\in {\cal{P}}art$, $\lambda_1\leq d$, let \begin{displaymath} \Delta_{T',i,\chi,d,\lambda}=det( c_{T',i,\chi,d,\lambda_{r}+s-r})_{1\leq s,r\leq \dim R_{T',i,\chi}-d}\in S^{{\mathbb G}_{T',i,\chi,d}^T} \end{displaymath} be the associated Schur polynomial. These Schur polynomials generate an $S$-module \begin{displaymath} M_{T',i,\chi,d}\subset S^{{\mathbb G}_{T',i,\chi,d}^T}. \end{displaymath} By corollary \ref{prop:CohomoEquivGrassmannienneAuxPointsFixes}, we have \begin{prop} $A_T^*({\mathbb G}_{T',i,\chi,d})\simeq M_{T',i,\chi,d}$. \end{prop} If $H$ is a $T'$-Hilbert function, denote \begin{displaymath} M_{T',i,H}=\bigotimes_{H(\chi)\neq 0} M_{T',i,\chi,H(\chi)}\subset S^{{\mathbb G}_{T',i,H}^T}. \end{displaymath} According to the description of equivariant Chow rings of products (proposition \ref{prop:ChowDesProduitsRestreintAuxPointsFixes}) and since ${\mathbb G}_{T',i,H}=\prod_{\chi\in \hat{T}'}{\mathbb G}_{T',i,\chi,H(\chi)}$, we have: \begin{prop} $A_T^*({\mathbb G}_{T',i,H})\simeq M_{T',i,H}$. \end{prop} The equivariant embedding \begin{displaymath} {\mathbb H}_i^{T',H}\hookrightarrow {\mathbb G}_{T',i,H} \end{displaymath} yields by restriction a morphism \begin{displaymath} S^{{\mathbb G}_{T',i,H}^T} \ensuremath{\rightarrow} S^{{\mathbb H}_i^{T',H,T}}. \end{displaymath} If $M\subset S^{{\mathbb G}_{T',i,H}^T} $, we denote by $M_{|}$ the image of $M$ under this restriction. \\ The section on the Chow ring of graded Hilbert schemes and corollary \ref{coro:descriptionRestrictionUsingGrassmannians} can be reformulated in this context as: \begin{prop} $A_T^*({\mathbb H}_i^{T',{H}})\simeq M_{T',i,{H}|}\subset S^{{\mathbb H}_i^{T',H,T}}$. In particular, if $\chi_1,\dots, \chi_s \in \hat{T}'$ are the characters such that $H(\chi_i)\neq 0$, the generators of $A_T^*({\mathbb H}_i^{T',{H}})$ are the elements \begin{displaymath} g_{T',i,H,\lambda_1,\dots,\lambda_s}=(\bigotimes_{\chi_j}\Delta_{T',i,\chi_j,H(\chi_j),\lambda_j})_{|}. \end{displaymath} \end{prop} Now we come to the description of $A^*_T({\mathbb H}_{i,i+1}^{T', H})$ when $\{p_i,p_{i+1}\}\in LFix(T')$. Remember that we have associated a $T'$-Hilbert function $H_\pi$ to a partition $\pi$ such that ${\mathbb H}_{i,i+1}^{T',H}\neq \emptyset$ iff $H=H_{\pi}$ for some $\pi$. Thus we are interested in the case $H=H_\pi$ and we start with the case $\pi=\pi(d,k)=(k,k,\dots,k,0,\dots)$ where $k$ appears $d$ times. In this case, a point $p\in {\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)},T}$ parametrizes a subscheme $Z=Z_i \cup Z_{i+1}$ where $Z_i \in {\mathbb H}_i^T$ and $Z_{i+1}\in {\mathbb H}_{i+1}^T$ are characterized by the integers $l_{i,Z}=length(Z_i\cap V_{i,i+1})$ and $l_{i+1,Z}=length(Z_{i+1}\cap V_{i,i+1})=d-l_{i,Z}$ (in local coordinates around $p_i$ (resp. around $p_{i+1}$) $I_{Z_i}=(y^k,x^{l_{i,Z}})$ (resp. $I_{Z_{i+1}}=(y^k,x^{l_{i+1,Z}})$)). \\ There is an action of $T$ on $V_{i,i+1}$ and we let $\chi_i$ (resp. $\chi_{i+1}=-\chi_i$) be the character with which $T$ acts on the tangent space of $V_{i,i+1}$ at $p_i$ (resp. at $p_{i+1}$). \\ For $Z\in {\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)},T}$, we define \begin{displaymath} c_{i,i+1,\pi(d,k),Z}=\frac{l_{i,Z}(l_{i,Z}+1)}{2}\chi_i+ \frac{l_{i+1,Z}(l_{i+1,Z}+1)}{2}\chi_{i+1} \in S.
\end{displaymath} Then we put \begin{displaymath} c_{i,i+1,\pi(d,k)}=(c_{i,i+1,\pi(d,k),Z})\in S^{ {\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)},T}} \end{displaymath} and we define \begin{displaymath} M_{T',i,i+1,H_{\pi(d,k)}}\subset S^{ {\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)},T}} \end{displaymath} to be the $S$-module generated by the powers $c_{i,i+1,\pi(d,k)}^j$, $0\leq j\leq d$. \begin{prop} $A_T^*({\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)}})\simeq M_{T',i,i+1,H_{\pi(d,k)}}\subset S^{ {\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)},T}} $. \end{prop} \begin{dem} We know by proposition \ref{prop:Hii+1=produitDeProjectifs} that ${\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)}}\simeq Sym^d V_{i,i+1}$. Denote by $V$ the vector space with ${\mathbb P}(V)=V_{i,i+1}$ and by $P_i,P_{i+1}$ a basis of $V$ with $k.P_i=p_i$, $k.P_{i+1}=p_{i+1}$. The action of $T$ on $V_{i,i+1}$ lifts to an action of $T$ on $V$ with characters $0$ on $P_i$ and $\chi_i$ on $P_{i+1}$. The action on ${\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)}}\simeq Sym^d V_{i,i+1}={\mathbb P}(Sym^d(V))$ is induced by the characters $0,\chi_i,\dots,d\chi_i$ on $P_i^d,P_i^{d-1}P_{i+1},\dots,P_{i+1}^d$. Through the above identifications, a point $Z\in {\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)},T}$ corresponds to the line $kP_i^{l_{i,Z}}P_{i+1}^{d-l_{i,Z}}\ensuremath{\subset} Sym^d(V)$. In particular, the universal quotient bundle $Q=Sym^d(V)/{\ensuremath \mathcal O}(-1)$ restricts to $Z$ with equivariant Chern class \begin{displaymath} c_1^TQ_{Z}=\sum_{0 \leq j\leq d, j\neq d-l_{i,Z}}j.\chi_i=(\frac{d(d+1)}{2} -(d-l_{i,Z}))\chi_i. \end{displaymath} If we call $c_1$ the tuple $(c_1^TQ_{Z})_{Z\in {{\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)},T}}}$, there is a constant $a=d+1$ such that all the coordinates of \begin{displaymath} ac_1-c_{i,i+1,\pi(d,k)}\in S^{{\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)},T}} \end{displaymath} are equal to a constant $b=\frac{d^2(d+1)}{2} \chi _i \in {\mathbb Z}\chi_i$, independent of $Z$. The equivariant Chow ring $A_T^*({\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)}} )\ensuremath{\subset} S^{{\mathbb H}_{i,i+1}^{T',H_{\pi(d,k)},T}}$ is the $S$-module generated by the powers $c_1^j$, $0\leq j\leq d$, which is also the module generated by the powers $c_{i,i+1,\pi(d,k)}^j$. \end{dem} Let now $\pi$ be any partition and call $d_j$ the number of parts of $\pi$ whose value is $j$. Then $H_{\pi}=\sum_{j>0} H_{\pi(d_j,j)}$. Consider the decomposition (proposition \ref{prop:Hii+1=produitDeProjectifs}) \begin{displaymath} {\mathbb H}_{i,i+1}^{T',H_\pi}\simeq \prod _{j>0}^{d_j>0} {\mathbb H}_{i,i+1}^{T',H_{\pi(d_j,j)}}. \end{displaymath} If we adopt the convention that \begin{eqnarray*} M_{T',i,i+1,H_{\pi}}&=& \bigotimes_{j>0}^{d_j>0}M_{T',i,i+1,H_{\pi(d_j,j)}}\subset S^{{\mathbb H}_{i,i+1}^{T',H_{\pi},T} } \end{eqnarray*} and if we denote by $n_1>n_2>\dots>n_s$ the integers such that $d_{n_i}>0$, the formula for the equivariant Chow ring of a product yields: \begin{prop} If $\{p_i,p_{i+1}\}\in LFix(T')$, $A_T^*({\mathbb H}_{i,i+1}^{T',H_\pi})\simeq M_{T',i,i+1,H_{\pi}} \ensuremath{\subset} S^{{\mathbb H}_{i,i+1}^{T',H_\pi,T}}$ where $ M_{T',i,i+1,H_\pi}$ is the submodule generated by the elements \begin{displaymath} g_{T',i,i+1,H_\pi,l_1,\dots,l_s}=\bigotimes_{j=1}^{j=s} c_{i,i+1,\pi(d_{n_j},n_j)}^{l_j},\ \ 0\leq l_j \leq d_{n_j}. \end{displaymath} \end{prop}
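As a sanity check, take $d=k=1$: then ${\mathbb H}_{i,i+1}^{T',H_{\pi(1,1)}}\simeq V_{i,i+1}$ has two $T$-fixed points, given by $(l_{i,Z},l_{i+1,Z})=(1,0)$ and $(0,1)$, so that $c_{i,i+1,\pi(1,1)}=(\chi_i,-\chi_i)$ and \begin{displaymath} M_{T',i,i+1,H_{\pi(1,1)}}=S\cdot (1,1)+S\cdot (\chi_i,-\chi_i)=\{(f_1,f_2)\in S^2,\ f_1\equiv f_2\ (\chi_i)\}, \end{displaymath} which is the familiar congruence description of $A_T^*({\mathbb P}^1)$ and anticipates the descriptions of the next section. Let $\underline H=(H_1,\dots,H_r)$ be a $T'$-Hilbert multifunction.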
The decomposition \begin{displaymath} {\mathbb H}^{T',\underline H}\simeq \prod_{p_i\in PFix(T')}{\mathbb H}_i^{T',H_i}\prod_{\{p_i,p_{i+1}\} \in LFix(T')} {\mathbb H}_{i,i+1}^{T',H_i} \end{displaymath} yields the following formula for $A_T^*({\mathbb H}^{T',\underline H})$. \begin{prop} $A_T^*({\mathbb H}^{T',\underline H})\subset S^{{\mathbb H}^{T',\underline H,T}}$ identifies with the $S$-module \begin{displaymath} \bigotimes_{p_i\in PFix(T')}M_{T',i,H_i|}\bigotimes_{\{p_i,p_{i+1}\}\in LFix(T')}M_{T',i,i+1,H_i} . \end{displaymath} \end{prop} Since \begin{displaymath} {\mathbb H}^{d,T'}=\coprod_{\underline H\in {\cal{M}\cal{H}}(T'),\#\underline H=d}{\mathbb H}^{T',\underline H}, \end{displaymath} we obtain: \begin{prop} $A_T^*({\mathbb H}^{d,T'})\subset S^{{\mathbb H}^{d,T}}$ is isomorphic to \begin{displaymath} \bigoplus_{\underline H\in {\cal{M}\cal{H}}(T'),\#\underline H=d\ }(\bigotimes_{\ p_i\in PFix(T')}M_{T',i,H_i|}\bigotimes_{\{p_i,p_{i+1}\}\in LFix(T')}M_{T',i,i+1,H_i}\ ) \end{displaymath} \end{prop} Now by \cite{brion97:_equivariant_chow_groups}, theorem 3.3, \begin{displaymath} A_T^*({\mathbb H}^d)\simeq \bigcap_{T'\subset T}A_T^*({\mathbb H}^{d,T'}), \end{displaymath} which proves the theorem. \end{dem} \section{Description by congruences} \label{sec:descr-congr} In the previous section, $A_T^*({\mathbb H}^{d})$ has been described by a formula involving tensor products and intersections. The goal of this section is to give a simpler presentation. Explicitly, we will describe $A_T^*({\mathbb H}^{d})\ensuremath{\subset} S^{{\mathbb H}^{d,T}}$ as a set of tuples of elements of $S$ satisfying congruence relations. The possibility of reformulating the description of the previous section with congruence relations was suggested to me by Michel Brion. Let ${\cal X}$ be a smooth projective $T$-variety with a finite number of fixed points, $\pi:{\cal X}\ensuremath{\rightarrow} Spec\ k$ the structure morphism, and \begin{displaymath} \begin{array}{ccccc} f:A_T^*({\cal X})&\otimes _S& A_T^*({\cal X}) &\ensuremath{\rightarrow} & S=A_T^*(Spec\ k)\\ x&\otimes & y &\mapsto & \pi_*(x\cdot y). \end{array} \end{displaymath} Let $Q=Frac(S)$. According to the localisation theorem (\cite{brion97:_equivariant_chow_groups}, cor. 3.2.1), the morphism $i_T^*:A_T^*({\cal X}) \hookrightarrow S^{{\cal X}^T}$ becomes an isomorphism \begin{displaymath} i^*_{T,Q}:A_T^*({\cal X})_Q=A_T^*({\cal X})\otimes _S Q \ensuremath{\rightarrow} Q^{{\cal X}^T} \end{displaymath} after tensorisation with $Q$. \begin{prop} Let $\beta_i=i_{T,Q}^*(\overline \beta_i)$ be a set of generators of the $S$-module $i_T^*A_T^*({\cal X})\subset S^{{\cal X}^T}$ and $\alpha=i_{T,Q}^*(\overline \alpha) \in S^{{\cal X}^T}$. Then $\alpha \in i_{T}^*A_T^*({\cal X}) \Leftrightarrow \forall i, f_Q(\overline \alpha\otimes \overline \beta_i)\in S$. \end{prop} \begin{dem} According to lemma \ref{lm:appartenanceAUnReseauParProduitScalaire} applied with $M=i_T^*A_T^*({\cal X})$, it suffices to find bases $B_i, C_j \in A_T^*({\cal X})$ with $f(B_i\otimes C_j)=\delta_{ij}$. Let $\lambda:T'=k^*\subset T$ be a one-parameter subgroup with ${\cal X}^{T'}={\cal X}^T$. The Bialynicki-Birula cell associated with a point $p_i\in {\cal X}^{T'}$ is the set $\{x\in {\cal X},\ \lim_{t\ensuremath{\rightarrow} \infty}\lambda(t).x=p_i\}$. We denote by $b_i^+$ its closure and we let $B_i^+=b_i^+\ensuremath{\times}^T U\subset {\cal X}\ensuremath{\times}^T U$. It follows from the proof of lemma \ref{lm:ChowEquivariantDuProduit} that the elements $[B_i^+]\in A_T^*({\cal X})$ form an $S$-basis.
Consider similarly the cells $B_i^-$ defined by the one-parameter subgroup $\lambda\circ i$, where $i:k^*\ensuremath{\rightarrow} k^*,\ x\mapsto x^{-1}$. It is a property of the Bialynicki-Birula cells that one can order the points $p_i$ such that: \begin{eqnarray*} b_i^+ \cap b_j^- &\neq& \emptyset \Rightarrow p_j\leq p_i,\\ b_i^+ \cap b_i^- &=&p_i \ \ \ (\mathrm{transversal\ intersection}). \end{eqnarray*} It follows that \begin{eqnarray*} f([B_i^+]\otimes [B_j^-]) &\neq& 0 \Rightarrow p_j\leq p_i,\\ f([B_i^+] \otimes [B_i^-]) &=& 1. \end{eqnarray*} Up to relabelling, one may suppose $p_1<p_2<\dots <p_n$. The matrix $m_{ij}=f([B_i^+]\otimes [B_j^-])$ is a lower triangular unipotent matrix. In particular, there exists a triangular matrix $\lambda_{ij}$ such that $[C_j]=\sum_i \lambda_{ij}[B_i^-]$ verifies $f([B_i^+]\otimes [C_j])=\delta_{ij}$. \end{dem} \begin{lm}\label{lm:appartenanceAUnReseauParProduitScalaire} Let $Q=Frac(S)$, $M\subset S^n$ be a free $S$-module, $M_Q=M\otimes _S Q$, $f:M\otimes _S M\ensuremath{\rightarrow} S$ be $S$-linear, $f_Q:M_Q\otimes M_Q \ensuremath{\rightarrow} Q$ be the $Q$-linear map extending $f$, and $\beta_1,\dots,\beta_p$ be generators of $M$. Suppose that \begin{listecompacte} \item $M\ensuremath{\subset} S^n$ yields an isomorphism $M_Q\simeq Q^n$ after tensorisation with $Q$, \item there exist bases $(B_1,\dots,B_n)$, $(C_1,\dots,C_n)$ of $M$ such that $f(B_i\otimes C_j)=\delta_{ij}$. \end{listecompacte} Let $\alpha \in S^n$. Then $\alpha\in M \Leftrightarrow \forall i, f_Q(\alpha\otimes \beta_i)\in S$. \nolinebreak\vspace{\baselineskip} \hfill\rule{2mm}{2mm}\\ \end{lm} As a corollary, we get a description of $i_T^*A_T^*({\cal X})\ensuremath{\subset} S^{{\cal X}^T}$ in terms of congruences involving the equivariant Chern classes of the restrictions $T_{{\cal X},p}$ of the tangent bundle $T_{\cal X}$ to the fixed points. \begin{coro} Let $\beta_i=(\beta_{ip})_{p\in {\cal X}^T}$ be a set of generators of the $S$-module $i_T^*A_T^*({\cal X})\subset S^{{\cal X}^T}$ and $\alpha=(\alpha_p)\in S^{{\cal X}^T}$. Then the following conditions are equivalent. \begin{listecompacte} \item $\alpha \in i_T^*A_T^*({\cal X}) $ \item $\forall i,\ \sum_{p \in {\cal X}^T} (\alpha_p\beta_{ip} \prod_{q\neq p}c_{\dim {\cal X}}^T(T_{{\cal X},q}))\equiv 0\ (\prod_{p \in {\cal X}^T}c_{\dim {\cal X}}^T(T_{{\cal X},p}))$ \end{listecompacte} \end{coro} \begin{dem} Let us write $\beta_i=i_{T,Q}^*(\overline \beta_i)$, $\alpha=i_{T,Q}^*(\overline \alpha)$. By the integration formula of Edidin and Graham \cite{edidin_Graham98:formuleDeBott}, $f_Q(\overline \alpha\otimes \overline \beta_i)= \sum_{p \in {\cal X}^T} \frac{\alpha_p\beta_{ip}}{c_{\dim {\cal X}}^T(T_{{\cal X},p})}$. Thus, the corollary is nothing but the criterion of the last proposition. \end{dem} We can collect in a set \begin{math} G(T',\underline{H})\ensuremath{\subset} S^{{\mathbb H}^{T',\underline H,T}} \end{math} the generators of $i_T^*A_T^*({\mathbb H}^{T',\underline{H}}) \ensuremath{\subset} S^{{\mathbb H}^{T',\underline{H},T}}$ constructed in section \ref{sec:conclusion-proof}. Explicitly, $G(T',\underline H)$ contains the elements \begin{displaymath} g_{T',\underline H,\lambda_{ij},l_{ij}}=\bigotimes _{p_i\in PFix(T')}g_{T',i,H_i,\lambda_{i1},\dots,\lambda_{i,s_i}}\bigotimes_{\{p_i,p_{i+1}\}\in LFix(T')} g_{T',i,i+1,H_i,l_{i1},\dots,l_{i,t_i}}. \end{displaymath} These generators and the last corollary make it possible to obtain a description of $i_T^*A_T^*({\mathbb H}^{T',\underline{H}})$ via congruences.
To get a description of
\begin{displaymath}
A_T^*({\mathbb H}^{d})\simeq \bigcap_{T'\ensuremath{\subset} T\ }\bigoplus_{\ {\mathbb H}^{T',\underline{H}}\neq \emptyset} i_T^*A_T^*({\mathbb H}^{T',\underline{H}})
\end{displaymath}
we merely have to gather the congruence relations constructed for the various ${\mathbb H}^{T',\underline{H}}$. We finally obtain:
\begin{thm} \label{thr:description du Chow avec congruences} The ring $A_T^*({\mathbb H}^{d})\ensuremath{\subset} S^{{\mathbb H}^{d,T}}$ is the set of tuples $\alpha=(\alpha_{p})$ such that, for every one-dimensional subtorus $T'\ensuremath{\subset} T$, for every $\underline{H}\in {\cal{M}\cal{H}}(T')$ with ${\mathbb H}^{T',\underline{H}}\neq \emptyset$, and for every $g=(g_{p})\in G(T',\underline H)$, the congruence relation
\begin{displaymath}
\sum_{p \in {{\mathbb H}^{T',\underline{H},T}}} (\alpha_{p}g_{p} \prod_{q\neq p}^{q \in {{\mathbb H}^{T',\underline{H},T}}} c_{\dim {{\mathbb H}^{T',\underline{H}}}}^T(T_{{{\mathbb H}^{T',\underline{H}}},q}))\equiv 0\ (\prod_{p \in {{\mathbb H}^{T',\underline{H},T}}}c_{\dim {{\mathbb H}^{T',\underline{H}}}}^T(T_{{{\mathbb H}^{T',\underline{H}}},p}))
\end{displaymath}
holds.
\end{thm}
\begin{rem} The tangent space at a $T$-fixed point of ${\mathbb H}_i^{T',H}$ or ${\mathbb H}_{i,i+1}^{T',H}$ is known \cite{evain04:irreductibiliteDesHilbertGradues}. In particular, since ${\mathbb H}^{T',\underline{H}}$ is a product of terms isomorphic to ${\mathbb H}_i^{T',H}$ or ${\mathbb H}_{i,i+1}^{T',H}$, the equivariant Chern classes appearing in the theorem are explicitly computable (see the example in the next section).
\end{rem}
Let $S^+=\hat TS\subset S$. The description of the usual Chow ring now follows from \cite{brion97:_equivariant_chow_groups}, cor. 2.3.1.
\begin{thm} The ring $A^*({\mathbb H}^d)$ is the quotient of $A_T^*({\mathbb H}^d)\subset S^{{\mathbb H}^{d,T}}$ by the ideal $S^+A_T^*({\mathbb H}^{d})$ generated by the elements $(f,\dots,f)$, $f\in S^+$.
\end{thm}
\begin{rem} For $d=1$, ${\mathbb H}^d=X$ and one recovers that $A_T^*(X)$ is isomorphic to the space of continuous piecewise polynomial functions on the fan of $X$. The Betti numbers of ${\mathbb H}^d$ can be computed using the description of the last two theorems. In particular, one can check for small values of $d$ and explicit surfaces $X$ that the Betti numbers are those computed with G\"ottsche's formula.
\end{rem}
\section{An example}
\label{sec:an-example}
In this section, we compute the Chow ring of the Hilbert scheme ${\mathbb H}^3={\mathbb H}^3 {\mathbb P}^2$. First, we fix the notations: $T=k^*\ensuremath{\times} k^*=Spec\ k[t_1^{\pm 1},t_2^{\pm 1}]$ and ${\mathbb P}^2=Proj\ k[x_1,x_2,x_3]$. The torus $T$ acts on ${\mathbb P}^2$ and on itself. The symmetric group $S_3$ acts on ${\mathbb P}^2$. The actions of an element $(a,b)\in T$ and of $\sigma \in S_3$ are as follows.
\begin{eqnarray*}
(a,b).x_1^\alpha x_2 ^\beta x_3^\gamma=x_1^\alpha (ax_2) ^\beta (bx_3)^\gamma\\
(a,b).t_1^\alpha t_2^{\beta}=(at_1)^\alpha (bt_2)^{\beta}\\
\sigma.x_1^\alpha x_2 ^\beta x_3^\gamma=x_{\sigma(1)}^\alpha x_{\sigma(2)} ^\beta x_{\sigma(3)}^\gamma.
\end{eqnarray*}
The equivariant map $T\ensuremath{\rightarrow} {\mathbb P}^2$, $(a,b)\mapsto (1,a,b)$ identifies $t_1$ with $\frac{x_2}{x_1} $, and $t_2$ with $\frac{x_3}{x_1} $. We denote by $p_1=(1:0:0),p_2=(0:1:0),p_3=(0:0:1)$ the three toric points of ${\mathbb P}^2$.
The plane ${\mathbb P}^2$ is covered by the three affine planes $U_1=Spec\ k[t_1,t_2]= Spec\ R_1$, $U_2=Spec\ k[t_1^{-1},t_1^{-1}t_2]=Spec\ R_2$, $U_3=Spec\ k[t^{-1}_2, t_1t_2^{-1}]=Spec\ R_3$. Since $S_3$ acts on $T=\{x_1x_2x_3\neq 0\}$, it acts on $\hat{T}$ by $\sigma.\chi(t)=\chi(\sigma^{-1}t)$, and on $S=Sym(\hat{T}\otimes {\mathbb Q})$. If $T'\ensuremath{\subset} T$ and $H\in {\cal{H}}(T')$ is a $T'$-Hilbert function, let $\sigma.H\in {\cal{H}}(\sigma.T')$ be the Hilbert function defined by $(\sigma.H)(\chi)=H(\sigma^{-1}.\chi)$. If $\underline H=(H_1,H_2,H_3)\in {\cal{M}\cal{H}}(T')$ is a Hilbert multifunction, let $\sigma.\underline H\in {\cal{M}\cal{H}}(\sigma.T')$ be the Hilbert multifunction with $(\sigma.\underline H)_i=\sigma.H_j$, where $j$ is such that $\sigma.p_j=p_i$. To each subvariety ${\mathbb H}^{T',\underline H}\ensuremath{\subset} {\mathbb H}$, we have associated a set of congruence relations $R_i$. Explicitly, constants $u_i\in S$ and $d_i(q)\in S$ for $q\in {\mathbb H}^{T',\underline H,T}$ have been defined such that $s \in S^{{\mathbb H}^{T',\underline H,T}}$ satisfies $R_i$ if
\begin{displaymath}
\sum_{q\in {\mathbb H}^{T',\underline H,T}} d_i(q)s(q) \equiv 0(u_i).
\end{displaymath}
The subvariety $\sigma.{\mathbb H}^{T',\underline H}={\mathbb H}^{\sigma.T',\sigma.\underline H}$ is associated with the set of congruence relations $\sigma.R_i$, where by definition $s\in S^{{\mathbb H}^{\sigma.T',\sigma.\underline H,T}}$ satisfies $\sigma.R_i$ if:
\begin{displaymath}
\sum_{q\in {\mathbb H}^{\sigma.T',\sigma.\underline H,T}} d_i(\sigma^{-1}q)s(q) \equiv 0(\sigma.u_i).
\end{displaymath}
Summing up, there is an action of $S_3$ on the set of congruence relations. We will produce the set of relations up to this action. We list the possible $p\in {\mathbb H}^T$. Let $E_1=\{1,t_1,t_2\}\ensuremath{\subset} R_1$, $E_2=\{1,t_1,t_1^2\}\ensuremath{\subset} R_1$, $E_3= \{1,t_1\}\ensuremath{\subset} R_1$, $E_4= \{1\}\ensuremath{\subset} R_1$, $E_5=\{1\}\ensuremath{\subset} R_2$, $E_6=\{1\}\ensuremath{\subset} R_3$. The multistaircases
\begin{eqnarray*}
\underline E_A=(E_1,\emptyset,\emptyset)\\
\underline E_B=(E_2,\emptyset,\emptyset)\\
\underline E_C=(E_3,E_5,\emptyset)\\
\underline E_D=(E_3,\emptyset,E_6)\\
\underline E_E=(E_4,E_5,E_6)
\end{eqnarray*}
are associated with points $A,B,C,D,E \in {\mathbb H}^T$. Up to the action of $S_3$, these are the only points of ${\mathbb H}^{3,T}$. \des{multiescaliers} We recall the description of the tangent space at $p\in {\mathbb H}^T$, where $p$ is described by a multistaircase $(F_1,F_2,F_3)$ (\cite{evain04:irreductibiliteDesHilbertGradues}). The staircase $F_i$ is a set of monomials in $R_i=k[x,y]$, where $x,y$ are the toric coordinates around $p_i$. A cleft for $F_i$ is a monomial $m=x^ay^b \notin F_i$ with ($a=0$ or $x^{a-1}y^b\in F_i$) and ($b=0$ or $x^{a}y^{b-1}\in F_i$). We order the clefts of $F_i$ according to their $x$-coordinates: $c_1=y^{b_1},c_2=x^{a_2}y^{b_2},\dots,c_p=x^{a_p}$ with $a_1=0<a_2<\dots<a_p$. An $x$-cleft couple for $F_i$ is a couple $C=(c_k,m)$, where $c_k$ is a cleft ($k\neq p$), $m\in F_i$, and $mx^{a_{k+1}-a_k}\notin F_i$. The torus $T$ acts on the monomials $c_k$ and $m$ with characters $\chi_k$ and $\chi_m$. We let $\chi_C=\chi_m-\chi_k$. By symmetry, there is a notion of $y$-cleft couple for $F_i$. The set of cleft couples for $p$ is by definition the union of the ($x$ or $y$)-cleft couples for $F_1$, $F_2$, $F_3$.
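These combinatorics can be enumerated mechanically. The following Python sketch is a direct transcription of the definitions just stated (monomials $x^ay^b$ are encoded as exponent pairs $(a,b)$); it is included only as an illustration of the definitions, and makes no claim beyond them.
\begin{verbatim}
def clefts(F):
    # Clefts of a staircase F, encoded as a set of exponent pairs (a, b):
    # m not in F with (a == 0 or (a-1, b) in F) and (b == 0 or (a, b-1) in F).
    candidates = ({(a + 1, b) for (a, b) in F}
                  | {(a, b + 1) for (a, b) in F} | {(0, 0)})
    good = [m for m in candidates
            if m not in F
            and (m[0] == 0 or (m[0] - 1, m[1]) in F)
            and (m[1] == 0 or (m[0], m[1] - 1) in F)]
    return sorted(good)  # ordered by increasing x-coordinate

def x_cleft_couples(F):
    # Couples (c_k, m) with c_k a cleft (k != p), m in F,
    # and m * x^(a_{k+1} - a_k) not in F.
    cs = clefts(F)
    return [(c, m)
            for c, c_next in zip(cs, cs[1:])   # the last cleft is skipped
            for m in sorted(F)
            if (m[0] + c_next[0] - c[0], m[1]) not in F]

E1 = {(0, 0), (1, 0), (0, 1)}   # the staircase E_1 = {1, t_1, t_2}
print(clefts(E1))               # [(0, 2), (1, 1), (2, 0)]
print(x_cleft_couples(E1))
\end{verbatim}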
The vector space $T_p{\mathbb H}$ is in bijection with the formal sums $\sum \lambda_i C_i$, where $C_i$ is a cleft couple for $p$. Moreover, under this correspondence, the cleft couple $C$ is an eigenvector for the action of $T$ with character $\chi_C$. If $p\in {\mathbb H}^{T}$, and if $\underline H$ is the $T'$-Hilbert multifunction of the subscheme associated with $p$, we let ${\mathbb H}^{T',p}={\mathbb H}^{T',\underline H}$. The subvariety ${\mathbb H}^{T',p}\ensuremath{\subset} {\mathbb H}$ gives nontrivial congruences only if ${\mathbb H}^{T',p}$ is not a point, i.e., if $T_p{\mathbb H}^{T',p}\neq 0$. Using the above description of the tangent space, we find for each point $p$ a finite number of possible $T'$. The results are collected in the following array. Under each point $p$ are listed the couples $(a,b)$ such that $T_{ab}=\{(t^a,t^b),\ t\in k^*\}\ensuremath{\subset} T$ verifies ${\mathbb H}^{T_{ab},p}\neq \{p\}$. For each such $(a,b)$, the corresponding dimension $\dim {\mathbb H}^{T_{ab},p}$ is given.
\begin{displaymath}
\begin{array}{|c|c|c|c|c|}
\hline
A& B& C& D& E\\
\hline
\begin{array}{cc} a,b& dim\\ 1,0& 2\\ 0,1& 2\\ 2,1& 1\\ 1,2& 1 \end{array} & \begin{array}{cc} a,b& dim\\ 1,0& 1\\ 0,1& 3\\ 1,1& 1\\ 1,2& 1 \end{array} & \begin{array}{cc} a,b& dim\\ 1,0& 1\\ 0,1& 3\\ 1,1& 2 \end{array} & \begin{array}{cc} a,b& dim\\ 1,0& 2\\ 0,1& 2\\ 1,1& 2 \end{array} & \begin{array}{cc} a,b& dim\\ 1,0& 2\\ 0,1& 2\\ 1,1& 2 \end{array}\\
\hline
\end{array}
\end{displaymath}
For some $a,b,p$, $a',b',p'$, we have an identification ${\mathbb H}^{T_{ab},p}=\sigma.{\mathbb H}^{T_{a'b'},p'}$ ($\sigma\in S_3$). Explicitly, up to the action, we have ${\mathbb H}^{T_{10},A}={\mathbb H}^{T_{10},D}={\mathbb H}^{T_{01},A}$, ${\mathbb H}^{T_{01},B}={\mathbb H}^{T_{01},C}$, ${\mathbb H}^{T_{12},A}={\mathbb H}^{T_{12},B}={\mathbb H}^{T_{21},A}$, ${\mathbb H}^{T_{11},C}={\mathbb H}^{T_{11},D}$, ${\mathbb H}^{T_{01},D}={\mathbb H}^{T_{01},E}={\mathbb H}^{T_{10},E}={\mathbb H}^{T_{11},E}$. Thus, by symmetry, we only consider $(a,b,p)$ within the following list:
\begin{displaymath}
\{(0,1,A),(1,2,A),(1,0,B),(0,1,B),(1,1,B),(1,1,C),(1,0,C),(0,1,D)\}.
\end{displaymath}
For each of the above values of $(a,b,p)$, we construct the congruence relations associated with the variety ${\mathbb H}^{T_{ab},p}$. The results are summed up in the following array. \\ \epsfig{file=tableau.ps,height=220mm,width=170mm,angle=0} If $\sigma=(n_1,n_2)\in S_3$ is a permutation and $p\in {\mathbb H}^T$, we denote by $p_{n_1n_2}$ the element $\sigma.p$. We explain how to read the array, taking the second line as an example. The first three columns indicate that ${\mathbb H}^{T_{01},A}$ is isomorphic to ${\mathbb P}^1\ensuremath{\times} {\mathbb P}^1$ and contains the points $A,A_{13},D,D_{13}$. Four generators of $A_T^*({\mathbb H}^{T_{01},A})\ensuremath{\subset} S^{\{A,A_{13},D,D_{13}\}}$ have been constructed in section \ref{sec:conclusion-proof}, namely $A+A_{13}+D+D_{13}$,\dots,$t_2^2A+t_2^2A_{13}-t_2^2D-t_2^2D_{13}$. The coefficients of these expressions are written down in the fourth column. The top equivariant Chern classes $c_{top}^T(T_{A}{\mathbb H}^{T_{01},A})$, \dots, $c_{top}^T(T_{D_{13}}{\mathbb H}^{T_{01},A})$ are respectively $t_2^2$, \dots, $-t_2^2$, as indicated in the fifth column. We can construct congruence relations with these data following the procedure of section \ref{sec:descr-congr}: $A_T^*({\mathbb H}^{T_{01},A})\ensuremath{\subset} S^{\{A,A_{13},D,D_{13}\} }$ is the set of elements $aA+a_{13}A_{13}+dD+d_{13}D_{13}$ whose coefficients $a,\dots,d_{13}$ satisfy $a+a_{13}-d-d_{13}\equiv 0(t_2^2)$, $d-d_{13}\equiv 0(t_2)$ and $a-a_{13}\equiv 0(t_2)$.
This is the meaning of the last column. We gather the congruence relations constructed in the array, and we obtain:
\begin{thm}\label{thr:leCasHilbTroisP2} The equivariant Chow ring $A_T^*({\mathbb H}^3{\mathbb P}^2)\ensuremath{\subset} {\mathbb Q}[t_1,t_2]^{\{A,A_{12},\dots,E\}}$ is the set of linear combinations $aA+a_{12}A_{12}+\dots +eE$ satisfying the relations
\begin{listecompacte}
\item $a+a_{13}-d-d_{13}\equiv 0(t_2^2)$
\item $d-d_{13}\equiv 0(t_2)$
\item $a-a_{13}\equiv 0(t_2)$
\item $a-b\equiv 0(2t_1-t_2)$
\item $b-b_{13}\equiv 0(t_2)$
\item $-b+3c-3c_{12}+b_{12}\equiv 0(t_1^3)$
\item $-b+c+c_{12}-b_{12}\equiv 0(t_1^2)$
\item $3b-c+c_{12}-3b_{12}\equiv 0(t_1)$
\item $b-b_{23}\equiv 0(t_2-t_1)$
\item $ c-d+c_{23}-d_{23} \equiv 0((t_1-t_2)^2)$
\item $c+d-c_{23}-d_{23} \equiv 0(t_1-t_2)$
\item $c_{23}-d_{23} \equiv 0(t_1-t_2)$
\item $c-c_{13}\equiv 0(t_2)$
\item $d-2e+d_{12}\equiv 0(t_1^2)$
\item $d-d_{12}\equiv 0(t_1)$
\item all relations deduced from the above by the action of the symmetric group $S_3$.
\end{listecompacte}
The Chow ring $A^*({\mathbb H}^3{\mathbb P}^2)$ is the quotient of $A_T^*({\mathbb H}^3{\mathbb P}^2)$ by the ideal generated by the elements $fA+\dots +fE$, $f\in {\mathbb Q}[t_1,t_2]^+$.
\end{thm}
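Congruence conditions of this kind are easy to check by machine. The following SymPy sketch (not part of the original computation) tests relations 1--3 of the theorem for a given tuple of polynomials; the first test tuple is read off from the generator $t_2^2A+t_2^2A_{13}-t_2^2D-t_2^2D_{13}$ exhibited in the previous paragraph.
\begin{verbatim}
import sympy as sp

t1, t2 = sp.symbols('t1 t2')

def divisible(f, u):
    # True iff u divides f in Q[t1, t2]
    return sp.cancel(sp.expand(f) / u).is_polynomial(t1, t2)

def relations_1_to_3(a, a13, d, d13):
    # Relations 1-3, involving the coordinates at A, A_13, D, D_13
    return (divisible(a + a13 - d - d13, t2**2)
            and divisible(d - d13, t2)
            and divisible(a - a13, t2))

# Tuple from the generator t2^2*A + t2^2*A13 - t2^2*D - t2^2*D13:
print(relations_1_to_3(t2**2, t2**2, -t2**2, -t2**2))  # True
# A constant supported at A alone already fails relation 1:
print(relations_1_to_3(sp.Integer(1), 0, 0, 0))        # False
\end{verbatim}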
\section{Background}
\label{sec::background}
\subsection{Reinforcement Learning}
We consider the standard RL setup where an agent interacts with an environment $\mathcal{E}$ over a number of discrete timesteps, where the interaction is modeled as a Markov Decision Process (MDP). At each timestep \(t\), the agent observes a state \(s_t\) from a state space \(\mathcal{S}\), and performs an action \(a_t\) from an action space \(\mathcal{A}\) according to its policy \(\pi\), where \(\pi : \mathcal{S} \to P(\mathcal{A})\) maps states to probability distributions over actions. The agent then receives the next state \(s_{t+1}\in\mathcal{S}\) and a reward signal \(r_t\) from \(\mathcal{E}\). The process continues until a termination condition is met. The objective of the agent is to maximize the expected cumulative return $\mathbb{E}[R_t] = \mathbb{E}[\sum_{k=0}^{\infty} \gamma^{k} r_{t+k}]$ from \(\mathcal{E}\) for each timestep \(t\), where $\gamma\in (0, 1]$ is a discount factor.
\subsection{Hierarchical Reinforcement Learning}
HRL introduces the concept of `options' into the RL framework, where options are temporally extended actions. \cite{hrl} shows that an MDP combined with options becomes a Semi-Markov Decision Process (SMDP). Assume that there exists a set of options $\Omega$. HRL allows a `policy over options' $\pi_{\Omega}$ to determine an option for execution for a certain amount of time. Each option \(\omega\in\Omega\) consists of three components \((\mathcal{I}_\omega, \pi_\omega, \mathcal{\beta}_\omega)\), in which \(\mathcal{I}_\omega\subseteq \mathcal{S}\) is an initiation set, \(\pi_\omega\) is the policy followed under option \(\omega\), and \(\mathcal{\beta}_\omega:\mathcal{S}\to[0, 1]\) is a termination function. When an agent enters a state \(s\in \mathcal{I}_\omega\), option \(\omega\) can be adopted, and policy \(\pi_\omega\) is followed until the option terminates at a state \(s_k\) according to \(\beta_\omega(s_k)\). In episodic tasks, termination of an episode also terminates the current option. Our architecture is a special case of an SMDP. Section~\ref{sec::methodology} introduces our update rules for \(\pi_\Omega\) and \(\pi_\omega\). In this paper, we refer to a `policy over options' as a \textit{master policy}, and an `option' as a \textit{sub-policy}.
\section{Experimental Results}
\label{sec::experimental_results}
\subsection{Experimental Setup}
\label{sec:experimental_setup}
\subsubsection{Environments}
\label{subsubsed::environments}
We evaluate the proposed methodology on simple classic control tasks from the OpenAI Gym Benchmark Suite~\cite{openai_gym}, and on a number of challenging continuous control tasks from both the OpenAI Gym Benchmark Suite and the DeepMind Control Suite~\cite{deepmindcontrolsuite2018}, simulated by the MuJoCo~\cite{mujoco} physics engine. The challenging tasks include four continuous control tasks from the OpenAI Gym Benchmark Suite, and two tasks from the DeepMind Control Suite.
\subsubsection{Hyperparameters}
\label{subsubsed::hyperparameters}
\begin{table}[t]
\input{supplementary/tables/ours_hyperparams.tex}
\end{table}
\begin{table}[t]
\caption{Number of neurons $n_{units}$ per layer for $\pi_{\omega_{small}}$ \& $\pi_{\omega_{large}}$, $c_{\omega_{small}}$, $c_{\omega_{large}}$, and $\lambda$ for each robotic control task.}
\label{tab:n_units}
\centering
\renewcommand{\arraystretch}{1.1}
\small
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|cc|ccc}
\toprule
Environment & $n_{units}$ for $\pi_{\omega_{small}}$ & $n_{units}$ for $\pi_{\omega_{large}}$ & $c_{\omega_{small}}$ & $c_{\omega_{large}}$ & $\lambda$\\
\midrule
\textit{MountainCarContinuous-v0} & $8$ & $64$ & $1.0$ & $44.7$ & $1\mathrm{e}{-4}$\\
\textit{Swimmer-v3} & $8$ & $256$ & $1.0$ & $428.4$ & $1\mathrm{e}{-4}$\\
\textit{Ant-v3} & $64$ & $256$ & $1.0$ & $8.0$ & $1\mathrm{e}{-1}$ \\
\textit{FetchPickAndPlace-v1} & $32$ & $128$ & $1.0$ & $9.4$ & $2\mathrm{e}{-4}$\\
\textit{walker-stand} & $8$ & $64$ & $1.0$ & $18.1$ & $1\mathrm{e}{-2}$\\
\textit{finger-spin} & $8$ & $64$ & $1.0$ & $29.1$ & $1\mathrm{e}{-2}$\\
\bottomrule
\end{tabular}
}
\end{table}
In our experiments, the master policy $\pi_\Omega$ is implemented as a Deep Q-Network (DQN)~\cite{dqn} agent to discretely choose between the two sub-policies. On the other hand, the sub-policies $\pi_\omega$ are implemented as Soft Actor-Critic (SAC)~\cite{sac} agents for performing the continuous control tasks described above. The hyperparameters used for training are shown in Table~\ref{tab:ours_hyperparam}. Both $\pi_\Omega$ and $\pi_\omega$ are implemented as multilayer perceptrons (MLPs) with two hidden layers. We set the number of units $n_{units}$ per layer for $\pi_\Omega$ to 32 for all tasks, and determine $n_{units}$ for $\pi_\omega$ as follows. We first train a model with $n_{units}$ set to 512 as the criterion model, and then find the minimum $n_{units}$ for which the model can achieve $90\%$ of the performance of the criterion model. We use this value as $n_{units}$ for $\pi_{\omega_{large}}$. We then find $n_{units}$ for $\pi_{\omega_{small}}$, such that its value is less than or equal to $1/4$ of $n_{units}$ for $\pi_{\omega_{large}}$ and the performance of $\pi_{\omega_{small}}$ is around or below $1/3$ of the score achieved by the criterion model.
\begin{figure}[t]
\centering
\includegraphics[width=.95\linewidth]{supplementary/figures/diff_lambda.png}
\caption{Performance of the models trained with different $\lambda$. The scores are averaged from 5 different random seeds. Each model trained with a different random seed is evaluated over 200 episodes.}
\label{fig:diff_lambda}
\end{figure}
For the cost term, we adopt the inference FLOPs of \(\pi_\omega\) as \(c_\omega\), since the number of FLOPs executed by \(\pi_\omega\) is correlated with its energy consumption. We use the number of FLOPs of $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$ divided by the number of FLOPs of $\pi_{\omega_{small}}$ as their policy costs $c_{\omega_{small}}$ and $c_{\omega_{large}}$, respectively, such that $c_{\omega_{small}}$ is equal to one. With regard to $\lambda$, from Fig.~\ref{fig:diff_lambda}, we observe that $\lambda$ and the ratio of choosing $\pi_{\omega_{large}}$ are negatively correlated.
Even though performance declines along with the reduced usage rate of $\pi_{\omega_{large}}$, there is often a range of $\lambda$ which leads to a lower usage rate of $\pi_{\omega_{large}}$ and yet delivers performance comparable to that of a model with a high $\pi_{\omega_{large}}$ usage rate. We perform a hyperparameter search to find an appropriate $\lambda$, such that both $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$ are used alternately within an episode, while allowing the agent to obtain high scores. The hyperparameters used in the cost term are listed in Table~\ref{tab:n_units}.
\begin{figure}[t]
\centering
\includegraphics[width=.95\linewidth]{supplementary/figures/diff_nomegas.png}
\caption{Performance of models trained with different $n_\omega$. The scores are averaged from 5 different random seeds. Each model trained with a different random seed is evaluated over 200 episodes.}
\label{fig:diff_nomega}
\end{figure}
We also perform a hyperparameter search to determine $n_\omega$. It can be observed from Fig.~\ref{fig:diff_nomega} that there is no obvious correlation between $n_\omega$ and the performance. \textit{Swimmer-v3} performs well with smaller values of $n_\omega$, while \textit{walker-stand} performs well with larger values of $n_\omega$. On the other hand, \textit{Ant-v3} performs well with $n_\omega$ equal to around $10$. Therefore, the choice of $n_\omega$ is relatively non-straightforward. We select the value of $n_\omega$ based on two considerations: (1) $n_\omega$ should not be too small, or it will lead to increased master policy costs due to more frequent inferences of the master policy to decide which sub-policy to use next; (2) $n_\omega$ should not be too large, otherwise the model will not be able to perform flexible switching between sub-policies. As a result, we set $n_\omega$ to five for all of the experiments considered in this work as a compromise. However, please note that an adaptive scheme for the step size $n_\omega$ may potentially further enhance the overall performance, and is left as a future research direction. For \textit{FetchPickAndPlace-v1}, we train the model with hindsight experience replay (HER)~\cite{her} to improve the sample efficiency. For most of the results, the default training and evaluation lengths are set to 2.5M timesteps and 200 episodes, respectively. The agents are implemented based on the source code from Stable Baselines~\cite{stable-baselines} as well as RL Baselines Zoo~\cite{rl-zoo}, and are trained using five different random seeds.
\subsubsection{Baselines}
\label{subsubsed::baselines}
The baselines considered include two categories: (1) a typical RL method, and (2) distillation methods. \textbf{Typical RL method.} To study the performance drop and the cost reduction compared with standard RL methods, we train two policies of different sizes (i.e., numbers of DNN parameters), where the small one and the large one are denoted as $\pi_{S-only}$ and $\pi_{L-only}$ respectively. The sizes of $\pi_{S-only}$ and $\pi_{L-only}$ correspond to $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$ used in our method. Both $\pi_{S-only}$ and $\pi_{L-only}$ are trained independently from scratch as typical RL methods without the use of $\pi_{\Omega}$. \textbf{Distillation methods.} In order to study the effectiveness of cost reduction, we compare our methodology with a commonly used method in RL: policy distillation.
Two policy distillation approaches are considered in our experiments: Behavior Cloning (BC)~\cite{bc_limitation} and Generative Adversarial Imitation Learning (GAIL)~\cite{gail}. For these baselines, a costly policy (i.e., the large policy) serves as the teacher model that distills its policy to an economical policy (i.e., the small policy). In our experiments, the teacher network is set to $\pi_{{L-only}}$, while the configurations of the student networks are described in Section~\ref{sec:baseline}. Please note that these baselines require more training data than the typical RL method baselines and our methodology, since they need data samples from expert (i.e., $\pi_{{L-only}}$) trajectories for training their student networks.
\begin{figure}[t]
\centering
\hspace{0.2em}
\begin{subfigure}{.9\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/timeline_plots/timeline-swimmer-deterministic.pdf}
\end{subfigure}%
\newline
\begin{subfigure}{.9\linewidth}
\centering
\includegraphics[width=.6\linewidth]{figures/timeline_plots/timeline-swimmer-actions.pdf}
\end{subfigure}
\caption{ A timeline illustrating the sub-policies used under different circumstances in \textit{Swimmer-v3}, where the interleavedly plotted white and yellow dots along the timeline correspond to the sub-policies $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$, respectively. The master policy $\pi_{\Omega}$ selects $\pi_{\omega_{large}}$ when performing strokes, and employs $\pi_{\omega_{small}}$ to maintain or slightly adjust the posture of the swimmer between two strokes for drifting. The image at the bottom shows the actions conducted by the model. The transparent dots are the actions decided by the sub-policy that was not selected. The opaque and transparent dots reveal that the actions conducted by $\pi_{\omega_{large}}$ are more complicated than those of $\pi_{\omega_{small}}$.\\
\vspace{-1em} }
\label{fig:timeline_swimmer}
\end{figure}
\begin{figure}[t]
\begin{subfigure}{.6\linewidth}
\includegraphics[width=\linewidth]{figures/timeline_plots/timeline-mcar.pdf}
\caption{\textit{MountainCarContinuous-v0}}
\label{fig:timeline_mcar}
\end{subfigure}%
\begin{subfigure}{.4\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/timeline_plots/timeline-FetchPickAndPlace.pdf}
\caption{\textit{FetchPickAndPlace-v1}}
\label{fig:timeline_fpap}
\end{subfigure}%
\newline
\newline
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/timeline_plots/timeline-walker-stand.pdf}
\caption{\textit{Walker-stand}}
\label{fig:timeline_walker_stand}
\end{subfigure}%
\vspace{.5em}
\newline
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/timeline_plots/timeline_legend.pdf}
\end{subfigure}%
\caption{\subref{fig:timeline_mcar} The mountain car uses $\pi_{\omega_{large}}$ to adjust its acceleration from a negative value to a positive value, while using $\pi_{\omega_{small}}$ to maintain its acceleration. \subref{fig:timeline_fpap} The robotic arm first approaches the object using $\pi_{\omega_{small}}$, and then employs $\pi_{\omega_{large}}$ to move the object to the target location. \subref{fig:timeline_walker_stand} The walker first utilizes $\pi_{\omega_{large}}$ and $\pi_{\omega_{small}}$ alternately to stand up. After reaching an upright posture, the walker leverages $\pi_{\omega_{small}}$ to maintain it afterwards.
} \label{fig:timeline}
\end{figure}
\begin{figure*}[t]
\centering
\begin{subfigure}{.25\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/perf_vs_cost/swimmer_random.png}
\caption{\textit{Swimmer-v3}}
\end{subfigure}%
\begin{subfigure}{.25\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/perf_vs_cost/fetchpickandplace_random.png}
\caption{\textit{FetchPickAndPlace-v1}}
\end{subfigure}%
\begin{subfigure}{.25\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/perf_vs_cost/walkerstand_random.png}
\caption{\textit{walker-stand}}
\end{subfigure}%
\leavevmode\smash{\makebox[0pt]{\hspace{.22\linewidth}%
\rotatebox[origin=l]{90}{\hspace{2.9em}Performance (scaled)}%
}}\hspace{0pt plus 1filll}\null
\begin{center} Inference costs (scaled) \end{center}
\caption{Comparison of performance and cost. Each dot corresponds to a rollout of an episode. The \(y\)-axis is scaled so that the expert achieves 1 and a random policy achieves 0. The \(x\)-axis is also scaled such that only using $\pi_{\omega_{large}}$ throughout an episode corresponds to 1.}
\label{fig:perf_vs_cost}
\end{figure*}
\subsection{Qualitative Analysis of the Learned Behaviors}
\label{sec:analysis}
We first illustrate a number of motivating timeline plots to qualitatively demonstrate that a control task can be handled by different sub-policies $\pi_{\omega}$ under different circumstances. \textbf{\textit{Swimmer-v3}.} Fig.~\ref{fig:timeline_swimmer} illustrates the decisions of the master policy $\pi_{\Omega}$, where the interleavedly plotted white and yellow dots along the timeline correspond to the execution of the sub-policies $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$, respectively. In this task, a swimmer robot is expected to first perform a stroke and then maintain a proper posture so as to drift for a longer distance. It is observed that the model trained by our methodology tends to use $\pi_{\omega_{large}}$ while performing strokes and $\pi_{\omega_{small}}$ to maintain its posture between two strokes. One reason is that a successful stroke requires many delicate changes in each joint, while holding a proper posture for drifting merely requires a few joint changes. Delicate changes in posture within a small time interval are difficult for $\pi_{\omega_{small}}$, since its outputs tend to be smooth over temporally neighboring states. \textbf{\textit{MountainCarContinuous-v0}.} The objective of the car is to reach the flag at the top of the hill on the right-hand side. In order to reach the goal, the car has to accelerate forward and backward, and then stop accelerating at the top. Fig.~\ref{fig:timeline_mcar} shows that $\pi_{\omega_{large}}$ is used for adjusting the acceleration, and $\pi_{\omega_{small}}$ is only selected when acceleration is not required. \textbf{\textit{FetchPickAndPlace-v1}.} The goal of the robotic arm is to move the black object to a target position (i.e., the red ball in Fig.~\ref{fig:timeline_fpap}). In Fig.~\ref{fig:timeline_fpap}, it can be observed that the agent trained by our methodology learns to use $\pi_{\omega_{small}}$ to approach the object, and then switches to $\pi_{\omega_{large}}$ to fetch and move it to the target location. One rationale for this observation is that fetching and moving an object entails fine-grained control of the gripper. The need for fine-grained control inhibits $\pi_{\omega_{small}}$ from being selected by $\pi_{\Omega}$ to fetch and move objects.
In contrast, there is no need for fine-grained control when approaching objects. As a result, $\pi_{\omega_{small}}$ is mostly chosen when the arm is approaching the object, so as to reduce the costs. \textbf{\textit{Walker-stand}.} The goal of the walker is to stand up and maintain an upright torso. Fig.~\ref{fig:timeline_walker_stand} shows that $\pi_{\omega_{large}}$ is used in circumstances where the applied forces change quickly, while $\pi_{\omega_{small}}$ is used where they change slowly. After the walker reaches a balanced posture, it utilizes $\pi_{\omega_{small}}$ to maintain the posture afterwards. To summarize the above findings, $\pi_{\omega_{large}}$ is selected when fine-grained control (i.e., tweaking actions within a small time interval) is necessary, and $\pi_{\omega_{small}}$ is chosen otherwise.
\begin{table*}[t]
\centering
\caption{A summary of the performances of $\pi_{{S-only}}$, $\pi_{{L-only}}$, and our method (denoted as `\textit{Ours}') evaluated over 200 test episodes, along with the averaged percentages of $\pi_{\omega_{large}}$ being used by our method during an episode, as well as the averaged percentages of reduction in FLOPs when comparing \textit{Ours} (including the FLOPs from $\pi_{\Omega}$ and the sub-policies) against $\pi_{{L-only}}$.}
\resizebox{\linewidth}{!}{
\begin{tabular}{c|cc|ccc}
\toprule
Environment & $\pi_{{S-only}}$ & $\pi_{{L-only}}$ & \textbf{\textit{Ours}} & \textbf{\% using $\pi_{\omega_{large}}$} & \textbf{\% Total FLOPs reduction} \\ [0.5ex]
\midrule
\textit{MountainCarContinuous-v0} & $-11.6\pm0.1$ & $93.6\pm0.1$ & $93.5\pm0.1$ & $44.5\%\pm5.7\%$ & $49.0\%\pm5.3\%$ \\
\textit{Swimmer-v3} & $35.5\pm7.7$ & $84.1\pm18.0$ & $108.8\pm24.9$ & $54.9\%\pm9.5\%$ & $44.6\%\pm8.3\%$ \\
\textit{Ant-v3} & $1,690.4\pm1,244.3$ & $3,927.2\pm1,602.8$ & $3,564.8\pm1,548.6$ & $53.9\%\pm5.5\%$ & $39.3\%\pm7.5\%$ \\
\textit{FetchPickAndPlace-v1} & $0.351\pm0.477$ & $0.980\pm0.140$ & $0.935\pm0.255$ & $46.5\%\pm3.0\%$ & $46.4\%\pm2.8\%$ \\
\textit{walker-stand} & $330.0\pm12.2$ & $977.7\pm22.1$ & $967.2\pm16.4$ & $5.7\%\pm1.1\%$ & $82.3\%\pm0.9\%$ \\
\textit{finger-spin} & $32.9\pm36.9$ & $978.0\pm32.4$ & $871.2\pm24.0$ & $55.2\%\pm19.7\%$ & $37.5\%\pm17.8\%$ \\
\bottomrule
\end{tabular}
}
\label{tab:perf_best}
\end{table*}
\subsection{Performance and Cost Reduction}
\label{sec:cost_vs_perf}
In this section, we compare the performance and the cost of our method with the typical RL methods described in Section~\ref{sec:experimental_setup}. Table~\ref{tab:perf_best} summarizes the performances corresponding to $\pi_{{S-only}}$, $\pi_{{L-only}}$, and our method (denoted as `\textit{Ours}') in the second, third, and fourth columns, respectively. Table~\ref{tab:perf_best} also summarizes the averaged percentages of $\pi_{\omega_{large}}$ being used by our method during an episode, as well as the averaged percentages of reduction in FLOPs when comparing \textit{Ours} (including the FLOPs from the master policy $\pi_{\Omega}$ as well as the two sub-policies) against the $\pi_{{L-only}}$ baseline. It can be seen in Table~\ref{tab:perf_best} that the average performance of \textit{Ours} is comparable to that of $\pi_{L-only}$ and significantly higher than that of $\pi_{S-only}$. It can also be observed that our method does switch between $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$ to control the agent, and thus reduces the total cost required for solving the tasks.
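To make the cost accounting concrete, the back-of-the-envelope Python sketch below shows how a total-FLOPs reduction of the magnitude reported in Table~\ref{tab:perf_best} arises from the usage ratio of $\pi_{\omega_{large}}$. The per-inference FLOPs of the sub-policies are taken from the \textit{Swimmer-v3} entries reported elsewhere in this paper ($c_{\omega_{large}}\approx 428.4$ and FLOPs/Inf of $\pi_{L-only}\approx 137{,}219$); the master-policy FLOPs value is an assumed placeholder, and the formula is our simplified reading of the accounting rather than the paper's exact measurement procedure.
\begin{verbatim}
def total_flops_reduction(p_large, flops_small, flops_large,
                          flops_master, n_omega=5):
    # Average per-timestep FLOPs of the asymmetric agent: the selected
    # sub-policy runs every step; the master runs once every n_omega steps.
    ours = (p_large * flops_large
            + (1.0 - p_large) * flops_small
            + flops_master / n_omega)
    # Reduction relative to always running the large policy.
    return 1.0 - ours / flops_large

# Swimmer-v3-like numbers (flops_master = 2_000 is an assumption):
print(total_flops_reduction(p_large=0.549, flops_small=320,
                            flops_large=137_219, flops_master=2_000))
# ~0.447, in the same ballpark as the 44.6% reported in Table tab:perf_best
\end{verbatim}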
To take a closer look into the behavior of the agent within an episode, Fig.~\ref{fig:perf_vs_cost} illustrates the performances and costs of our methodology over 200 episodes during evaluation for three control tasks. Each dot plotted in Fig.~\ref{fig:perf_vs_cost} corresponds to the evaluation result of \textit{Ours} in an episode, where the cost of each dot is divided by the cost of $\pi_{{L-only}}$. The performance of each dot is also scaled such that $[0, 1]$ corresponds to the averaged performances of a random policy and $\pi_{{L-only}}$. Please note that the scaled costs of our methodology may exceed one, since the inference costs of $\pi_{\Omega}$ are considered in our statistics as well. Histograms corresponding to the performances and costs of the data points are provided on the right-hand side and the top side of each figure, respectively. For \textit{Swimmer-v3}, it is observed that our methodology is able to reduce about half of the FLOPs when compared against $\pi_{{L-only}}$. Although a few data points correspond to only half of the averaged performance of $\pi_{{L-only}}$, most of the data points are comparable or even superior to it. For \textit{FetchPickAndPlace-v1}, it can be observed that the dots are distributed evenly along the line $y=1.0$, which means that the agent can solve the task in the majority of episodes while the induced costs vary widely across episodes. This phenomenon is mainly caused by the widely varying starting positions in different episodes. When the object is close to the arm, the cost is near 1.0, since $\pi_{\omega_{large}}$ is used for the majority of the episode, as shown in Section~\ref{sec:analysis}. For \textit{walker-stand}, our method learns to use $\pi_{\omega_{large}}$ in the early stages to control the walker to stand up. After that, the agent only uses $\pi_{\omega_{small}}$ to slightly adjust its joints to maintain the posture of the walker. Therefore, a significant amount of inference costs can be saved in this task, causing the data points to concentrate in the top-left corner of the figure. These examples therefore validate that our cost-aware methodology is able to provide sufficient performances while reducing the inference costs required for completing the tasks.
\subsection{Analysis of the Performance and the FLOPs per Inference}
\label{sec:baseline}
\begin{table*}[t]
\caption{ An analysis of the performances and FLOPs per inference (denoted as FLOPs/Inf) for our method and the baselines. The network sizes of $\pi_{{fit}}$ and the student networks of the two policy distillation baselines are configured such that their FLOPs/Inf are approximately the same as the averaged FLOPs/Inf of \textit{Ours} (denoted as Avg-FLOPs/Inf). In \textit{MountainCarContinuous-v0}, $\pi_{{L-only}}$, \textit{Ours}, and $\pi_{{fit}}$ are trained for 100k timesteps. In the other control tasks, they are trained for 2M timesteps. BC and GAIL require additional expert trajectories generated by the trained model $\pi_{{L-only}}$, which consist of 25 trajectories with 50 state-action pairs each, as adopted in~\cite{gail}.
Note that the numerical results presented in this table correspond to the score of the best model selected from 5 training runs.}
\label{tab:il_baselines}
\footnotesize
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{c|cc|cc|cccc}
\toprule
Environment & $\pi_{{L-only}}$ & FLOPs/Inf & \textit{Ours} & Avg-FLOPs/Inf & $\pi_{{fit}}$ & GAIL & BC & FLOPs/Inf \\ [0.5ex]
\midrule
\textit{MountainCarContinuous-v0} & $93.6\pm0.1$ & 8,707 & $93.5\pm0.1$ & $4,440\pm177$ & $90.5\pm0.1$ & $-99.9\pm0.0$ & $93.3\pm0.1$ & 4,603 \\
\textit{Swimmer-v3} & $84.1\pm10.3$ & 137,219 & \textbf{$108.8\pm15.4$} & $76,019\pm9,122$ & $66.2\pm10.1$ & $63.2\pm9.8$ & $59.7\pm13.8$ & 76,763 \\
\textit{Ant-v3} & $3,927.2\pm524.0$ & 196,099 & \textbf{$3,564.8\pm724.7$} & $119,032\pm14,284$ & $2,553.0\pm511.7$ & $-15.6\pm101.0$ & $1,373.7\pm490.2$ & 119,451 \\
\textit{FetchPickAndPlace-v1} & $0.980\pm0.140$ & 42,755 & $0.935\pm0.247$ & $22,917\pm2,521$ & $0.920\pm0.271$ & $0.078\pm0.268$ & $0.153\pm0.360$ & 23,223 \\
\textit{walker-stand} & $977.7\pm20.2$ & 12,803 & \textbf{$967.2\pm16.3$} & $2,266\pm159$ & $819.5\pm14.5$ & $596.6\pm33.9$ & $159.1\pm22.6$ & 2,397 \\
\textit{finger-spin} & $978.0\pm33.0$ & 9,859 & \textbf{$871.2\pm28.5$} & $6,162\pm739$ & $848.1\pm27.0$ & $536.8\pm22.4$ & $7.6\pm19.5$ & 6,303 \\
\bottomrule
\end{tabular}
}
\end{center}
\end{table*}
\begin{table*}[t]
\caption{ Comparison of the proposed methodology with and without using the cost term $c_{\omega}$. }\label{tab:ablation_no_cost}
\centering
\tiny
\begin{tabular}{ *{5}{c} }
\toprule
& \multicolumn{2}{c}{\textbf{With the cost term $c_{\omega}$}} &\multicolumn{2}{c}{\textbf{Without the cost term $c_{\omega}$}} \\
\cmidrule(lr){2-3}\cmidrule(lr){4-5}
\raisebox{\dimexpr1.25\normalbaselineskip-.5\height}[0pt][0pt]{\begin{tabular}{@{}c@{}} Environment \end{tabular}} & Performance & \% using $\pi_{\omega_{large}}$ & Performance & \% using $\pi_{\omega_{large}}$ \\
\midrule
\textit{MountainCarContinuous-v0} & $35.5\pm48.9$ & $50.4\%\pm5.5\%$ & $66.3\pm40.6$ & $59.0\%\pm20.5\%$ \\
\textit{Swimmer-v3} & $98.9\pm23.2$ & $65.2\%\pm15.0\%$ & $71.5\pm33.3$ & $99.5\%\pm1.1\%$ \\
\textit{Ant-v3} & $2,558.8\pm1140.0$ & $47.8\%\pm15.0\%$ & $2,625.8\pm728.6$ & $80.9\%\pm39.2\%$ \\
\textit{FetchPickAndPlace-v1} & $0.822\pm0.103$ & $51.3\%\pm13.8\%$ & $0.785\pm0.175$ & $44.2\%\pm16.0\%$ \\
\textit{walker-stand} & $943.8\pm23.3$ & $19.7\%\pm9.8\%$ & $961.8\pm10.0$ & $100.0\%\pm0.0\%$\\
\textit{finger-spin} & $829.6\pm54.2$ & $38.6\%\pm14.9\%$ & $907.4\pm29.7$ & $100.0\%\pm0.0\%$\\
\bottomrule
\end{tabular}
\end{table*}
\input{supplementary/tables/without_sharing_buffer.tex}
We compare the performances of the proposed methodology and the baselines discussed in Section~\ref{subsubsed::baselines}, as well as their FLOPs per inference (denoted as FLOPs/Inf). The FLOPs/Inf for $\pi_{{L-only}}$, \textit{Ours}, and the student networks of the baselines, as well as their corresponding highest performances achieved, are summarized in Table~\ref{tab:il_baselines}. For a fair comparison, the sizes of the student networks of the distillation baselines are configured such that their FLOPs/Inf (the last column of Table~\ref{tab:il_baselines}) are approximately the same as the averaged FLOPs/Inf of \textit{Ours} (the Avg-FLOPs/Inf column in Table~\ref{tab:il_baselines}, including the FLOPs contributed by both the master policy $\pi_{\Omega}$ and the sub-policies).
As a reference, we additionally train a policy $\pi_{{fit}}$ using SAC from scratch based on the same DNN size as the student networks of the distillation baselines. Both distillation baselines employ the pre-trained $\pi_{{L-only}}$ as their teacher networks. The student networks are then trained using data sampled from the trajectories generated by the teacher networks, where 50 consecutive state-action pairs are sampled from each of the 25 generated trajectories, as adopted in~\cite{gail}. The results show that for the environments in Table~\ref{tab:il_baselines}, \textit{Ours} delivers comparable performances to the $\pi_{{L-only}}$ baseline and outperforms the distillation baselines, under similar levels of FLOPs/Inf. From the perspective of data samples used, the distillation baselines consume more data samples (including the data samples required for training both the teacher and the student networks) than those required by \textit{Ours}, which is trained from scratch without the need for data samples from a pre-trained teacher network. The relatively lower performances of the distillation baselines are probably due to the smaller sizes of their networks compared to their teacher networks $\pi_{{L-only}}$, since the performances delivered by $\pi_{{fit}}$ are also lower than the corresponding performances of \textit{Ours}. The results thus suggest that our method is able to reduce inference costs while maintaining sufficient performances.
\subsection{Ablation Study}
\label{sec:ablation}
\noindent\textbf{Effectiveness of the cost term.} We compare the evaluation results of our models trained with and without the cost term in Table~\ref{tab:ablation_no_cost}. When the cost term is removed, the main factor that affects the decisions of $\pi_{\Omega}$ is its belief in how well each sub-policy can perform. Since $\pi_{\omega_{large}}$ is able to obtain high scores on its own, it is observed that $\pi_{\Omega}$ prefers to select $\pi_{\omega_{large}}$. In contrast, incorporating the cost term decreases the percentage of using $\pi_{\omega_{large}}$ substantially, while still allowing our model to offer satisfying performances.
\noindent \textbf{Effectiveness of the shared experience replay buffer}. We compare the results of our models with and without the shared buffer across sub-policies $\pi_\omega$ in Table~\ref{tab:ablation_no_share_buffer}. For all tasks except \textit{finger-spin}, the scores of the models without a shared $\mathcal{Z_\omega}$ are lower than those with a shared $\mathcal{Z_\omega}$. The lower scores are due to the reduced data samples for each sub-policy, since the transitions are not shared across the replay buffers. We also observed that some of the models trained without a shared $\mathcal{Z_\omega}$ are prone to using one of the sub-policies for the majority of time, instead of using both interleavedly. We believe that this is caused by unbalanced training samples for the two sub-policies. Namely, the relatively worse sub-policy is less likely to obtain sufficient data samples to improve its performance. While this problem could be addressed by training algorithms with improved exploration such as \cite{a3c}, we simply share $\mathcal{Z_\omega}$ among $\pi_\omega$ to address this issue. The models trained with a shared $\mathcal{Z_\omega}$ have lower variances in the choice of the two sub-policies (i.e., the third column of Table~\ref{tab:ablation_no_share_buffer}), and can exhibit more stable behaviors for $\pi_\Omega$.
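For concreteness, a minimal sketch of such a shared buffer $\mathcal{Z_\omega}$ is given below. This is an illustrative simplification written for this discussion; the actual implementation follows the SAC replay buffer of Stable Baselines.
\begin{verbatim}
import random
from collections import deque

class SharedReplayBuffer:
    """A single buffer Z_omega written to by whichever sub-policy is
    active, so every transition is available to both SAC learners."""

    def __init__(self, capacity=1_000_000):
        self.storage = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.storage.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.storage, batch_size)

# Both sub-policies hold a reference to the same instance, e.g.:
#   buffer = SharedReplayBuffer()
#   small_learner.replay_buffer = buffer
#   large_learner.replay_buffer = buffer
\end{verbatim}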
\section{Conclusion}
\label{sec::conclusion}
We proposed a methodology for performing cost-aware control based on an asymmetric architecture. Our methodology uses a master policy to select between a large sub-policy network and a small sub-policy network. The master policy is trained to take inference costs into consideration, such that the two sub-policies are used alternately and cooperatively to complete the task. The proposed methodology is validated in a wide set of control environments, and the quantitative and qualitative results presented in this paper show that it provides sufficient performances while reducing the inference costs required. The comparison of the proposed methodology and the baseline methods indicates that the proposed methodology is able to deliver comparable performance to the $\pi_{{L-only}}$ baseline, while requiring less training data than the knowledge distillation baselines.
\section*{Acknowledgement}
This work was supported by the Ministry of Science and Technology (MOST) in Taiwan under grant nos. MOST 110-2636-E-007-010 (Young Scholar Fellowship Program) and MOST 110-2634-F-007-019. The authors acknowledge the financial support from MediaTek Inc., Taiwan. The authors would also like to acknowledge the donation of the GPUs from NVIDIA Corporation and NVIDIA AI Technology Center (NVAITC) used in this research work.
\section{Introduction}
\label{sec::introduction}
Recent works have combined reinforcement learning (RL) with the advances of deep neural networks (DNNs) to make breakthroughs in domains ranging from games~\cite{dqn, a3c, go} to robotic control~\cite{ddpg, sac, manipulate_robotic}. However, the inference phase of a DNN model is a computationally-intensive process~\cite{high_inference_cost, efficient_proc_dnn}, and is one of the major concerns when DNNs are applied to mobile robots, which are mostly battery-powered and have limited energy budgets. Although the energy consumption of DNNs can be alleviated by reducing their sizes for energy-limited platforms, smaller DNNs are usually not able to attain the same or comparable levels of performance as larger ones in complex scenarios. On the other hand, the performances of smaller DNNs may still be acceptable in some cases. For example, a small DNN unable to perform complex steering control is still sufficient to handle simple and straight roads. Motivated by this observation, we propose an asymmetric architecture that selects a small DNN to act when conditions are acceptable, while employing a large one when necessary. We implement this cost-efficient asymmetric architecture by leveraging the concept of hierarchical reinforcement learning (HRL)~\cite{hrl}, and it consists of a \textit{master policy} and two \textit{sub-policies}. The master policy is designed as a lightweight DNN for decision-making, which takes in a state as its input and learns to choose a sub-policy based on the input state. The two sub-policies are separately implemented as a large DNN and a small DNN. The former is designed to deal with complicated state-action mappings, while the latter is responsible for handling simple scenarios. Therefore, when complex action control is required, the master policy uses the former. Otherwise, the latter is selected. To achieve the objective of cost-aware control, we propose a loss function design such that the inference costs of executing the two sub-policies are taken into consideration by the master policy.
The master policy is required to learn to use the sub-policy with a small DNN as frequently as possible while maximizing and maintaining the agent's overall performance. Our principal contribution is an asymmetric RL architecture that reduces the deployment-time inference costs. To validate the proposed architecture, we perform a set of experiments on representative robotic control tasks from the OpenAI Gym Benchmark Suite~\cite{openai_gym} and the DeepMind Control Suite~\cite{deepmindcontrolsuite2018}. The results show that the master policy trained by our methodology is able to alternate between the two sub-policies to save inference costs in terms of floating-point operations (FLOPs) with little performance drop. We further provide an in-depth look into the behaviors of the trained master policies, and quantitatively and qualitatively discuss why the computational costs can be reduced. Finally, we offer a set of ablation analyses to validate the design decisions of our cost-aware methodology.
\section{Methodology}
\label{sec::methodology}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/architecture/work_flow.pdf}
\caption{An illustration of the workflow of our framework. The master policy $\pi_\Omega$ chooses a sub-policy $\pi_\omega \in \{\pi_{\omega_{small}}, \pi_{\omega_{large}}\}$, and uses it to interact with the environment $\mathcal{E}$ for $n_\omega$ timesteps. After this, $\pi_\Omega$ chooses another $\pi_\omega$, and the process repeats until the end of the episode. $\pi_\Omega$ and $\pi_\omega$ use different experience transitions to update their policies and have different replay buffers, while $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$ share the same replay buffer. We train $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$ with experience transitions $(s_t, a_t, r_t, s_{t+1})$ for $t=0, 1, 2, \dots$, and train $\pi_\Omega$ with transitions $(s_t, \omega_t, r_{\Omega_t}, s_{t+n_\omega})$ for $t=0, n_\omega, 2n_\omega, \dots$. The reward $r_{\Omega_t}=\sum_{i=t}^{t+n_\omega-1} r_i\allowbreak-\lambda n_\omega c_{\omega_t}$ of $\pi_\Omega$ is the sum of the rewards $r_i$ collected by $\pi_\omega$, penalized by the scaled cost $\lambda n_\omega c_{\omega_t}$ of $\pi_\omega$. Note that $\lambda$ is a scaling parameter. }
\label{fig:work_flow}
\end{figure}
\subsection{Problem Formulation}
\label{subsec::problem_formulation}
The main objective of this research is to develop a cost-aware strategy such that an agent trained by our methodology is able to deliver satisfying performance while reducing its overall inference costs. We formulate the problem as an SMDP, with the aim of training the master policy in the proposed framework to use the smaller sub-policy whenever the situation can be handled by it, and to employ the larger sub-policy when the agent requires complex control of its actions. The agent is expected to use the smaller sub-policy as often as possible to reduce its computational costs. In order to incorporate the consideration of inference costs into our cost-aware strategy, we further assume that each sub-policy is cost-bounded. The cost of a sub-policy is denoted as $c_{\omega}$, where $\omega$ represents the sub-policy used by the agent. The reward function is designed such that the agent is encouraged to select the lightweight sub-policy as frequently as possible to avoid being penalized.
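As a concrete illustration of this formulation (anticipating the reward definition detailed in the following subsections), the Python sketch below executes a chosen sub-policy for $n_\omega$ timesteps and forms the cost-penalized master reward $r_{\Omega}$. The function and variable names are illustrative, and the old-style Gym \texttt{step} API returning a 4-tuple is assumed.
\begin{verbatim}
def run_option(env, state, sub_policy, c_omega, n_omega=5, lam=1e-2):
    """Execute the selected sub-policy for n_omega steps and return
    r_Omega = sum_i r_i - lam * n_omega * c_omega."""
    option_return, done = 0.0, False
    for _ in range(n_omega):
        action = sub_policy(state)        # one sub-policy inference per step
        state, reward, done, _ = env.step(action)
        option_return += reward
        if done:                          # episode end also ends the option
            break
    r_master = option_return - lam * n_omega * c_omega
    return state, r_master, done
\end{verbatim}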
\subsection{Overview of the Cost-Aware Framework}
In order to address the problem formulated above, we employ an HRL framework consisting of a master policy $\pi_\Omega$ and two sub-policies $\pi_\omega$ of different DNN sizes, where $\omega \in \{\omega_{small}, \omega_{large}\}$ and the DNN size of $\omega_{large}$ is larger than that of $\omega_{small}$. We assume that each action taken by a sub-policy $\pi_\omega$ completes in a single timestep. At the beginning of a task, $\pi_\Omega$ first takes in the current state $s\in \mathcal{S}$ from $\mathcal{E}$ to determine which $\pi_\omega$ to use. The selected $\pi_\omega$ is then used to interact with $\mathcal{E}$ for $n_\omega$ timesteps, i.e., $\mathcal{\beta}_\omega\to1$ once the selected sub-policy $\omega$ has been used for $n_\omega$ timesteps. The value of $n_\omega$ is set to be a constant for the two sub-policies, i.e., \(n_{\omega_{large}}=n_{\omega_{small}}\). The process repeats until the end of the episode. The workflow of the proposed cost-aware hierarchical framework is illustrated in Fig.~\ref{fig:work_flow}. Please note that even though the overall system is formulated as an SMDP, the formulation for $\pi_{\Omega}$ is still a standard MDP problem of selecting between a set of two temporally extended actions (i.e., using either \(\pi_{\omega_{small}}\) or \(\pi_{\omega_{large}}\)), as described in Section 3 of \cite{hrl}. Therefore, at timestep $t$, the goal of $\pi_\Omega$ becomes maximizing \(R_{\Omega_t} = \sum_{i=0}^{\infty} \gamma^{i} r_{\Omega_{t+i\cdot n_\omega}}\), where \(r_{\Omega_t}=\sum_{j=t}^{t+n_\omega-1} r_j\) is the cumulative reward during the execution of \(\pi_\omega\). On the other hand, the update rule of \(\pi_\omega\) is the same as the intra-option policy gradient described in \cite{option_critic}. To deal with the data imbalance issue of the two sub-policies during the training phase as well as to improve data efficiency, our cost-aware framework uses an off-policy RL algorithm for \(\pi_\omega\) so as to allow \(\pi_{\omega_{small}}\) and \(\pi_{\omega_{large}}\) to share a common experience replay buffer.
\subsection{Cost-Aware Training}
We next describe the training methodology. In case no regularization is applied, $\pi_\Omega$ tends to choose $\pi_{\omega_{large}}$ due to its inherent advantage of being able to obtain more rewards on its own. As a result, we penalize $\pi_\Omega$ with \(c_\omega\) to encourage it to choose $\pi_{\omega_{small}}$, which has a lower \(c_\omega\). The reward for $\pi_\Omega$ at $t$ is thus modified to \(r_t - \lambda c_\omega\), where \(\lambda\) is a cost coefficient for scaling. The higher the value of \(\lambda\) is, the more likely \(\pi_\Omega\) will choose $\pi_{\omega_{small}}$. The experience transitions used to update $\pi_\Omega$ are therefore expressed as \((s_t, \omega_t, r_{\Omega_t}, s_{t+n_\omega})\) for \(t=0, n_\omega, 2n_\omega, \dots\), where \(r_{\Omega_t}=\sum_{i=t}^{t+n_\omega-1} r_i-\lambda n_\omega c_{\omega_t}\).
\section{Related Work}
\label{sec::related_work}
A number of knowledge distillation based methods have been proposed in the literature to reduce the inference costs of DRL agents at deployment time~\cite{distill_knowledge, fitnets, not_need_deep, multiplier_free_dnn}. These methods typically use a large teacher network to teach a small student network such that the latter is able to mimic the behaviors of the former.
In contrast, our asymmetric approach is based on the concept of HRL~\cite{hrl}, a framework consisting of a policy over sub-policies and a number of sub-policies for executing temporally extended actions to solve sub-tasks. Previous HRL works~\cite{snn_hrl, option_critic, hiro, feudal_hrl, lifelong, policy_sketches, deliberation_cost, multi_task_popart, adaptation_hrl} have concentrated on using temporal abstraction to deal with difficult long-horizon problems. As opposed to those prior works, our proposed method focuses on employing HRL to reduce the inference costs of an RL agent. Please note that the theme and objective of this paper is to apply HRL in a new direction towards a practical problem in robot deployment scenarios, not to propose a more general HRL strategy.
\appendix
\section{Additional Ablation Studies}
\subsection{Sub-Policies with and without Separated Replay Buffers}
\input{supplementary/tables/without_sharing_buffer.tex}
In this section, we validate the choice of using a shared experience replay buffer across sub-policies. We compare the evaluation results of our models with and without the shared buffer in Table~\ref{tab:ablation_no_share_buffer}. For all tasks except \textit{finger-spin}, the scores of the models with a shared buffer $\mathcal{Z_\omega}$ are higher than those without a shared $\mathcal{Z_\omega}$. The lower scores of the models trained without a shared $\mathcal{Z_\omega}$ are due to the reduced data samples for training each sub-policy, since the transitions are not shared across the replay buffers. We also observed that some of the models trained without a shared $\mathcal{Z_\omega}$ are prone to using one of the sub-policies for the majority of time, instead of using both interleavedly. We believe that this is caused by the unbalanced training samples for the two sub-policies. In other words, the relatively worse sub-policy is less likely to obtain sufficient data samples to improve its performance. In contrast, the models trained with a shared $\mathcal{Z_\omega}$ have lower variances in the choice of the two sub-policies (e.g., please refer to the second column of Table~\ref{tab:ablation_no_share_buffer}), and are able to exhibit more stable behaviors for $\pi_\Omega$.
\subsection{Comparison against an HRL Model with Two Large Sub-Policies}
\input{supplementary/tables/all_large.tex}
In Table~\ref{tab:all_large}, we compare the performances of our methodology, $\pi_{L-only}$ (trained with SAC), and the symmetric HRL models implemented with two $\pi_{\omega_{large}}$ (i.e., no $\pi_{\omega_{small}}$ is used). The objective of this analysis is to examine whether the benefits of our method come from the direct use of HRL. In other words, this analysis inspects whether HRL offers unfair advantages to our methodology over $\pi_{L-only}$ trained with SAC. For all tasks except \textit{Swimmer-v3}, it is observed that HRL offers little performance gain over the typical SAC method (i.e., $\pi_{L-only}$). In some cases, HRL even exhibits performance drops, which are probably due to instability during the training phase of HRL. For instance, in \textit{MountainCarContinuous-v0}, the model with two $\pi_{\omega_{large}}$ learns to complete the task in only two out of the five training runs, causing the relatively low average performance.
On the other hand, Table~\ref{tab:all_large} also reveals that the agent benefits from the use of HRL in the case of \textit{Swimmer-v3}, such that it is able to achieve higher scores than the agents based on $\pi_{L-only}$. Nevertheless, when compared with the HRL model with two $\pi_{\omega_{large}}$, our proposed asymmetric architecture with both $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$ is able to reduce the costs further while maintaining the performance in \textit{Swimmer-v3}.
\section{The Detailed Pseudo-Code of \\the Proposed Algorithm}
Algorithm~\ref{algo} summarizes the training procedure of our methodology. In lines $6$-$9$, \(\pi_\Omega\) decides which \(\pi_\omega\) to use at a constant interval $n_{\omega}$. In lines $10$-$11$, the selected \(\pi_\omega\) determines an action and interacts with $\mathcal{E}$. In line $12$, the reward for \(\pi_\Omega\) is derived by subtracting the weighted cost $\lambda c_\omega$ from $r_t$. Then, in lines $15$-$17$, the experience transitions collected by \(\pi_\Omega\) and \(\pi_\omega\) are stored into the replay buffers $\mathcal{Z}_\Omega$ and $\mathcal{Z}_\omega$, respectively. Finally, in lines $23$-$25$, both \(\pi_\Omega\) and \(\pi_\omega\) are updated using batches sampled from $\mathcal{Z}_\Omega$ and $\mathcal{Z}_\omega$, respectively. The entire training procedure continues until the end of the horizon $T_{max}$.
\section{Additional Background Material}
In this section, we provide additional background material. We first describe the basic concepts of deep Q-network (DQN)~\citep{dqn} and soft actor-critic (SAC)~\citep{sac}, which are used for training our master policy and sub-policies, respectively. We then briefly explain the concepts of hindsight experience replay (HER)~\citep{her} and Boltzmann exploration for DQN~\citep{boltzmann_done_right}, which are utilized in our experiments. Finally, we provide some additional information related to hierarchical reinforcement learning (HRL).
\subsection{Deep Q-Learning (DQN)}
DQN~\citep{dqn} is a model-free RL algorithm based on deep neural networks (DNNs) for estimating the Q-function over high-dimensional state spaces. DQN is parameterized by a set of network weights $\phi$, which can be updated by a variety of RL algorithms. Given a policy $\pi$ and state-action pairs $(s,a)$, DQN incrementally updates its set of parameters $\phi$ such that $Q(s,a, \phi)$ approximates the optimal Q-function $Q^{*}$. The parameters $\phi$ are trained to minimize the loss function $L(\phi)$ iteratively using samples $(s, a, r, s^{\prime})$ drawn from an experience replay buffer $\mathcal{Z}$. $L(\phi)$ is represented as the following:
\begin{equation}
\label{eq::q_loss}
L(\phi) = \mathbb{E}_{s,a,r,s^\prime \sim U(\mathcal{Z})}\big[(y - Q(s,a, \phi))^2\big],
\end{equation}
where $y = r + \gamma\max_{a^\prime} Q(s^\prime,a^\prime, \phi^{-})$, $r$ is the reward signal, $\gamma$ is the discount factor, $(s',a')$ is the next state-action pair, $U(\mathcal{Z})$ is a uniform distribution over $\mathcal{Z}$, and $\phi^{-}$ represents the parameters of a target network. The target network is the same as the online network parameterized by $\phi$, except that its parameters $\phi^{-}$ are updated by the online network periodically at constant intervals. Both the experience replay buffer and the target network dramatically enhance the stability of the learning process.
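A minimal PyTorch sketch of this loss is given below for illustration. The termination flag, which zeroes out the bootstrap target at episode ends, is a standard implementation detail not spelled out in Eq.~(\ref{eq::q_loss}).
\begin{verbatim}
import torch
import torch.nn.functional as F

def dqn_loss(online_net, target_net, batch, gamma=0.99):
    # batch: tensors s [B, dim], a [B], r [B], s_next [B, dim], done [B]
    s, a, r, s_next, done = batch
    q = online_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)  # Q(s,a;phi)
    with torch.no_grad():                                          # uses phi^-
        y = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    return F.mse_loss(q, y)                                        # (y - Q)^2
\end{verbatim}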
\subsection{Soft Actor-Critic (SAC)}
Soft actor-critic (SAC)~\citep{sac} is a deep RL algorithm which optimizes a stochastic policy in an off-policy manner. The key feature of SAC is the entropy regularization term in the loss function, which enables an agent to maximize the expected return while maintaining the stochasticity of its actions during the training phase. SAC learns a policy $\pi_\theta$ and two Q-functions $Q_{\phi1}$ and $Q_{\phi2}$ at the same time. The two Q-functions are used to reduce the overestimation bias arising from function approximation, as explained in the double Q-learning~\citep{double_q} paper. The target for the Q-functions is expressed as follows:
\begin{equation}
\resizebox{\linewidth}{!}{
$y(r, s', d) = r + \gamma(1-d)\bigg( \min_{i=1,2} Q_{\phi_{targ,i}}(s', a') - \alpha \log\pi_\theta(a'|s') \bigg), \quad a'\sim\pi_\theta(\cdot|s'), \label{eq::SAC_target}$
}
\end{equation}
where $d$ is the terminal signal of an episode, $\theta$ the parameters of the policy $\pi_{\theta}$, and $\alpha$ the entropy coefficient which controls the stochasticity of the policy. Based on Eq.~(\ref{eq::SAC_target}), the Q-functions can be optimized to minimize the loss function $L(\phi)$ in Eq.~(\ref{eq::q_loss}). The policy $\pi_{\theta}$ can be updated to maximize:
\begin{equation}
\mathbb{E}_{s\sim\mathcal{Z}}\bigg( \min_{i=1,2} Q_{\phi_i}(s, a_\theta(s)) - \alpha\log\pi_\theta(a_\theta(s)|s) \bigg),
\end{equation}
where $\mathcal{Z}$ represents the replay buffer, and $a_\theta(s)$ denotes a sample from $\pi_\theta(\cdot|s)$ which is differentiable with respect to the parameters $\theta$ of $\pi_{\theta}$ due to the use of the re-parameterization technique in SAC~\citep{sac}.

\subsection{Hindsight Experience Replay (HER)}
Consider an episode in an environment with sparse rewards, with a state sequence $s_1,...,s_T$ and a goal $g\ne s_1,...,s_T$. This experience is not able to help the agent learn how to achieve the goal $g$, since no informative reward is acquired throughout this episode. In order to make the agent learn in such an environment, a more carefully designed reward function would be required to guide the agent toward the goal. Instead of designing another reward function, HER~\citep{her} solves the above problem for an off-policy RL algorithm by replacing $g$ in the replay buffer with another pseudo goal, such that a large portion of the trajectories contain informative rewards which facilitate the learning of the agent. In addition, experience transitions with the original goal $g$ are still available to the agent, such that the agent also learns to reach the true goals in the environment. There exist a number of strategies for choosing the pseudo goals for HER. In this paper, we use the `\textit{future}' strategy~\citep{her} for all of the experiments using HER, i.e., \textit{FetchPush-v1}, \textit{FetchPickAndPlace-v1}, and \textit{FetchSlide-v1}, such that the pseudo goal is selected from the states achieved after the current timestep within the same episode.

\subsection{Boltzmann Exploration for DQN}
Boltzmann exploration is another action-selection strategy for DQN to explore its action space besides $\epsilon$-greedy. Boltzmann exploration applies a softmax function to the Q-values of the actions and takes the results as the probabilities of choosing each action. The higher the Q-value of an action, the more likely the action will be chosen. This approach gives more chance to sub-optimal actions than the $\epsilon$-greedy approach.
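As a concrete illustration, below is a minimal sketch of Boltzmann action selection; the temperature parameter \texttt{tau} is an assumption, as its value is not specified here.
\begin{verbatim}
import numpy as np

def boltzmann_action(q_values, tau=1.0, rng=None):
    """Sample an action with probability softmax(Q / tau)."""
    rng = rng if rng is not None else np.random.default_rng()
    z = np.asarray(q_values) / tau
    z = z - z.max()                  # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()  # probability of choosing each action
    return rng.choice(len(p), p=p)

# higher Q-values make an action more likely, but sub-optimal
# actions keep a non-zero probability of being selected
print(boltzmann_action([1.0, 2.0, 0.5], tau=0.5))
\end{verbatim}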
A drawback of Boltzmann exploration is that interpreting the softmaxed Q-values as the probabilities of choosing actions may not be the best choice to aid exploration, and may lead to sub-optimal behaviors of the model~\citep{boltzmann_done_right}. Instead of using Boltzmann exploration during the training phase, we use it during the evaluation phase to give $\pi_\Omega$ a chance to choose the relatively worse $\pi_\omega$. This might lead to sub-optimal performance; however, a slight performance drop is acceptable in exchange for the reduction of computational costs. We apply Boltzmann exploration to several tasks in our experiments, including \textit{BipedalWalker-v3}, \textit{FetchPush-v1}, \textit{FetchSlide-v1}, \textit{FetchPickAndPlace-v1}, \textit{Hopper-stand}, \textit{Fish-swim}, and \textit{Reacher-easy}, where $\pi_\Omega$ tends to use one of its $\pi_\omega$ for an entire episode without the use of Boltzmann exploration.

\section{Analyses of the Baselines with More Data Samples}
In Table~\ref{tab:baseline_more_samples}, we show the training results of GAIL and BC with different numbers of data samples generated by the expert policy (i.e., $\pi_{L-only}$). The training data consist of different numbers of trajectories, where each trajectory contains all the state-action pairs collected in an episode. The network architecture for training these baselines is provided in Section~\ref{network_structure}. With more data samples from the expert, GAIL and BC improve in their performances. Nevertheless, our model still outperforms these baselines in four out of six tasks under the same level of computational costs.
\input{supplementary/tables/baseline_more_sample.tex}

\section{Details of the Experimental Setup}
In this section, we provide details of our experimental setup, including the selection criteria of $c_{\omega}$ and $\lambda$, the network structures, as well as the hyperparameters used by our methodology and the baselines.

\subsection{Selection Criteria of the Policy Cost $c_{\omega}$ and the Coefficient $\lambda$}
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{supplementary/figures/diff_lambda.pdf} \caption{Performance of the models trained with different $\lambda$. The scores are averaged from 5 different random seeds. Each model trained with a different random seed is evaluated over 200 episodes.} \label{fig:diff_lambda} \end{figure}
We use the number of FLOPs of $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$ divided by the number of FLOPs of $\pi_{\omega_{small}}$ as their policy costs $c_{\omega_{small}}$ and $c_{\omega_{large}}$, respectively, such that $c_{\omega_{small}}$ is equal to one. With regard to $\lambda$, we observe from Fig.~\ref{fig:diff_lambda} that $\lambda$ and the ratio of choosing $\pi_{\omega_{large}}$ are negatively correlated. In addition, the performances decline along with the reduced usage rate of $\pi_{\omega_{large}}$. We notice that there is often a range of $\lambda$ (around the middle points in the figure) which allows us to develop candidate cost-efficient models that are potentially able to strike a balance between the performance and the usage rate of $\pi_{\omega_{large}}$.
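To make the role of $\lambda$ concrete, the following is a minimal sketch of the cost-penalized reward received by the master policy (line 12 of Algorithm~\ref{algo}); the numeric values below are hypothetical.
\begin{verbatim}
def master_reward(r_t, lam, c_omega):
    """Master policy reward: environment reward minus weighted cost."""
    return r_t - lam * c_omega

# policy costs are FLOPs ratios, so c_small = 1 by construction;
# c_large is a hypothetical FLOPs(large) / FLOPs(small) ratio
c_small, c_large = 1.0, 40.0
print(master_reward(r_t=1.0, lam=0.02, c_omega=c_large))
# increasing lam lowers the reward for selecting the large sub-policy,
# consistent with the negative correlation observed in the figure
\end{verbatim}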
We then perform a hyperparameter search to find an appropriate $\lambda$, such that both $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$ are used alternately within an episode, while allowing the agent to obtain high scores. Table~\ref{tab:policy_cost_coefficient} summarizes the values of $c_{\omega_{small}}$, $c_{\omega_{large}}$, and $\lambda$ used in each of the environments.

\subsection{Selection of the Master Policy Step Size $n_\omega$}
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{supplementary/figures/diff_nomegas.pdf} \caption{Performance of models trained with different $n_\omega$. The scores are averaged from 5 different random seeds. Each model trained with a different random seed is evaluated over 200 episodes.} \label{fig:diff_nomega} \end{figure}
It can be observed from Fig.~\ref{fig:diff_nomega} that there is no obvious correlation between $n_\omega$ and the performance. \textit{Swimmer-v3} performs well with smaller values of $n_\omega$. \textit{Ant-v3} performs well with $n_\omega$ equal to around $10$. On the other hand, \textit{walker-stand} performs well with larger values of $n_\omega$. Therefore, the choice of $n_\omega$ is not straightforward. We select the value of $n_\omega$ based on two considerations: (1) $n_\omega$ should not be too small, or it will lead to increased master policy costs due to more frequent inferences of the master policy to decide which sub-policy to use next; (2) $n_\omega$ should not be too large, otherwise the model will not be able to perform flexible switching between sub-policies. For instance, in the case of \textit{finger-spin}, a model with $n_\omega$ greater than $15$ uses $\pi_{\omega_{large}}$ throughout an episode, while the usage rate of $\pi_{\omega_{small}}$ becomes almost zero. As a result, we set $n_\omega$ to $5$ for all of the experiments in this work as a compromise. An adaptive scheme for the step size $n_\omega$ may potentially enhance the overall performance, and is left as a future research direction.

\subsection{Network Structure} \label{network_structure}
We implement both the master policy $\pi_\Omega$ and the sub-policies $\pi_\omega$ as multilayer perceptrons (MLPs) with two hidden layers of the same sizes. For all of the experiments, we choose the number of neurons per layer $n_{units}$ for $\pi_\Omega$ to be 32, such that the inference costs induced by $\pi_\Omega$ only account for a small portion of the overall costs, while giving $\pi_\Omega$ sufficient capability to assign task segments to different sub-policies. In order to reasonably determine the numbers of units per layer for $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$, we first train a configuration with 512 units per layer for all selected tasks as our criterion policy $\pi_{criterion}$, and then train models with different numbers of units per layer, where $n_{units}\in\{8, 32, 64, 128, 196, 256\}$. For each task, we set $n_{units}$ for $\pi_{\omega_{large}}$ such that its performance is above 90\% of the score of $\pi_{criterion}$. We adjust $n_{units}$ for $\pi_{\omega_{small}}$ such that its value is less than or equal to $1/4$ of $n_{units}$ for $\pi_{\omega_{large}}$, and the performance of $\pi_{\omega_{small}}$ is around or below $1/3$ of the score achieved by $\pi_{criterion}$. The exact values of $n_{units}$ for all the control tasks are listed in Table~\ref{tab:n_units}.
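For reference, the following is a rough sketch of the FLOPs-per-inference estimate that underlies the policy cost definition above. Counting each dense layer as one multiply and one add per weight is a common convention; the exact counting used in our implementation is an assumption here, and the observation/action dimensions are hypothetical.
\begin{verbatim}
def mlp_flops(n_in, n_units, n_out, n_hidden_layers=2):
    """Approximate FLOPs of one forward pass through an MLP
    with equal-width hidden layers."""
    dims = [n_in] + [n_units] * n_hidden_layers + [n_out]
    return sum(2 * a * b for a, b in zip(dims[:-1], dims[1:]))

# the ratio of a large (256 units) to a small (32 units)
# sub-policy then yields c_large, with c_small = 1
obs_dim, act_dim = 17, 6
print(mlp_flops(obs_dim, 256, act_dim) / mlp_flops(obs_dim, 32, act_dim))
\end{verbatim}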
\input{supplementary/tables/n_units_of_hidden.tex}
The network structures for performing the experiments of the baselines also consist of two hidden layers with the same number of units, except that the number of units for both layers is chosen such that the FLOPs/Inf (number of FLOPs per inference) is approximately the same as the averaged FLOPs/Inf of \textit{Ours} calculated from 200 test episodes, as discussed in Section~5.3 of the main manuscript. We list the values of $n_{units}$ used for training $\pi_{fit}$, BC~\citep{bc_limitation}, and GAIL~\citep{gail} in Table~\ref{tab:baseline_units}.
\input{supplementary/tables/baseline_units.tex}

\subsection{Hyperparameters for Training the Proposed Methodology and Baselines}
The hyperparameters used for training the proposed methodology are provided in Table~\ref{tab:ours_hyperparam}. On the other hand, the hyperparameters used for training the baseline methods are summarized in Table~\ref{tab:baseline_hyperparam}.

\section{Additional Experimental Results of the Proposed Methodology}
\subsection{Statistics of the Performance and Cost for the Proposed Methodology}
Following the discussions presented in Section~5.3 of the main manuscript, in this section, we provide the results of the other control tasks. The performances, the percentages of using $\pi_{\omega_{large}}$, the percentages of the total FLOPs reduction, as well as the performances of $\pi_{S-only}$ and $\pi_{L-only}$ are listed in Table~\ref{tab:extra_experiments}. For columns 2-6, the number reported in each entry is an average of the results from five different random seeds, where the result corresponding to each seed is averaged over 200 episodes. For columns 7-9, the best results of our proposed methodology are presented, which reveal that our methodology is able to balance the tradeoff between performance and computational costs. For all tasks except \textit{HalfCheetah-v3} and \textit{FetchSlide-v1}, Table~\ref{tab:extra_experiments} shows that our methodology results in only slight performance drops (when compared with $\pi_{{L-only}}$), while reducing a significant amount of computational costs for most of the tasks. For \textit{FetchSlide-v1}, the performance drop is primarily due to the reduction in the usage of $\pi_{\omega_{large}}$. For \textit{HalfCheetah-v3}, our model tends to use either $\pi_{\omega_{small}}$ or $\pi_{\omega_{large}}$ for an entire episode, despite the use of either the fine-tuned policy cost coefficient $\lambda$ or Boltzmann exploration during the evaluation phase. A potential reason is that the control complexity required by the model is approximately the same throughout an episode. Thus, it is difficult for $\pi_{\Omega}$ to learn to switch between $\pi_{\omega_{small}}$ and $\pi_{\omega_{large}}$, leading to the performance drop.
\vspace{0.5em}
The best models selected from the five training rounds, listed in the last three columns of Table~\ref{tab:extra_experiments}, further reveal the feasibility of training models that deliver performances comparable to those achieved by $\pi_{L-only}$, while reducing a significant amount of computational costs. We do not show the standard deviations for the entries in the last three columns of Table~\ref{tab:extra_experiments}. Instead, the distribution plots with regard to the performances and the computational costs are illustrated in Fig.~\ref{fig:sup_perf_vs_cost}.
For most of the tasks, the dots are concentrated on the upper part of the figures, indicating the stability of the performances of the models trained by our methodology. For the \textit{fish-swim} task, the model trained with the vanilla SAC has a high score variance for each episode, which inherently leads to a high variance in the experimental results of our methodology. Nevertheless, our best model still outperforms the best model trained with the vanilla SAC in the \textit{fish-swim} task in Table~\ref{tab:extra_experiments}.
\onecolumn
\begin{landscape}
\begin{table*}[t]
\parbox[t]{.65\textwidth}{ \input{supplementary/tables/ours_hyperparams.tex} }
\hfill
\parbox[t]{.45\textwidth}{ \input{supplementary/tables/baseline_hyperparams.tex} }
\input{supplementary/tables/extra_experiments.tex}
\end{table*}
\end{landscape}
\twocolumn
\begin{figure}[ht]
\noindent\makebox[\textwidth][c]{%
\resizebox{!}{.55\textheight}{
\begin{minipage}{\textwidth}
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{figures/perf_vs_cost/moutaincar_random.pdf} \caption{\textit{MountainCarContinuous-v0}} \end{subfigure}%
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{figures/perf_vs_cost/ant_random.pdf} \caption{\textit{Ant-v3}} \end{subfigure}%
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{figures/perf_vs_cost/finger_spin.pdf} \caption{\textit{finger-spin}} \end{subfigure}%
\newline
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{supplementary/figures/perf_vs_cost/perf_cost_BipedalWalker-v3.pdf} \caption{\textit{BipedalWalker-v3}} \end{subfigure}%
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{supplementary/figures/perf_vs_cost/perf_cost_Walker2d-v3.pdf} \caption{\textit{Walker2d-v3}} \end{subfigure}%
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{supplementary/figures/perf_vs_cost/perf_cost_FetchPush-v1.pdf} \caption{\textit{FetchPush-v1}} \end{subfigure}%
\newline
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{supplementary/figures/perf_vs_cost/perf_cost_FetchSlide-v1.pdf} \caption{\textit{FetchSlide-v1}} \end{subfigure}%
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{supplementary/figures/perf_vs_cost/perf_cost_cartpole-swingup.pdf} \caption{\textit{Cartpole-swingup}} \end{subfigure}%
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{supplementary/figures/perf_vs_cost/perf_cost_ball_in_cup-catch.pdf} \caption{\textit{Ball\_in\_cup-catch}} \end{subfigure}%
\newline
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{supplementary/figures/perf_vs_cost/perf_cost_hopper-stand.pdf} \caption{\textit{Hopper-stand}} \end{subfigure}%
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{supplementary/figures/perf_vs_cost/perf_cost_fish-swim.pdf} \caption{\textit{Fish-swim}} \end{subfigure}%
\begin{subfigure}{.333\textwidth} \centering \includegraphics[width=\linewidth]{supplementary/figures/perf_vs_cost/perf_cost_reacher-easy.pdf} \caption{\textit{Reacher-easy}} \end{subfigure}%
\newline
\leavevmode\smash{\makebox[0pt]{\hspace{0em}%
\rotatebox[origin=l]{90}{\hspace{33.2em} Performance (scaled)}%
}}\hspace{0pt plus 1filll}\null
\begin{center} Computational costs (scaled) \end{center}
\caption{Comparison of performance and cost.
Each dot in the plots corresponds to a rollout of an episode. The \(y\)-axis is the performance, scaled so that the expert achieves 1 and a random policy achieves 0. The \(x\)-axis is the computational cost, scaled so that using the large policy throughout an episode costs 1.} \label{fig:sup_perf_vs_cost}
\end{minipage}
}
}
\end{figure}
\clearpage

\section{Computing Infrastructure}
In this section, we provide the configuration of our computing infrastructure in Table~\ref{tab:infrastructure} for reference.

\section{Reproducibility}
We implemented the proposed methodology based on the code from Stable Baselines~\citep{stable-baselines} and RL Baselines Zoo~\citep{rl-zoo}, which provide high-quality implementations of RL algorithms and trained models. We modified the source code of DQN, SAC, and the training procedure to adapt them to our methodology. The source code for our experiments is well verified, and all the experiments in our paper are fully reproducible. Please refer to the following github repository for more detailed instructions: \textcolor{blue}{\href{https://github.com/anonymouscjc/Computational-Cost-Aware-Control-Using-Hierarchical-Reinforcement-Learning}{link}}.
\begin{table}[!tb]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{Specification of our computing infrastructure.}
\small
\resizebox{\linewidth}{!}{
\begin{tabular}{l|l}
\toprule
Component & Customized Machine \\
\midrule
Processor & 32 cores / 64 threads (3.0GHz, up to 4.2GHz) \\
Hard Disk Drive & 6TB SATA3 7200rpm \\
Solid-State Disk & 1TB PCIe Gen 3 NVMe \\
Graphics Card & NVIDIA GeForce$^{\circledR}$ RTX 2080Ti (two per instance) \\
Memory & 16GB DDR4 2400MHz (128GB in total) \\
\bottomrule
\end{tabular}
}
\label{tab:infrastructure}
\end{table}
\bibliographystyle{unsrtnat}
\section{Introduction}
According to Einstein \cite{Einstein1917}, all interactions of light with matter are coherent in media large and homogeneous enough that statistical thermodynamics and Huygens' construction are valid. Most spectroscopists have long ignored Einstein's work; thus, they dismissed Townes' ideas until masers and lasers worked. This skepticism persists in astrophysics, where it also rests on an article in which Menzel \cite{Menzel} confused radiance and irradiance and wrote: {\it The so-called ``stimulated'' emissions which have here been neglected should be included where strict accuracy is required. It is easily proved, however, that they are unimportant in the nebulae.} Consequently, a large number of articles published in the best journals, cited, for example, by Zheng \cite{Zheng} and Nilsson et al. \cite{Nilsson}, use Monte Carlo computations rather than the methods deduced from Einstein's theory to calculate the propagation of light in atomic hydrogen.

Quantum mechanics associates a particle with a linear ``Schr\"odinger wave'' $\Psi$, a scalar field of complex value whose Hermitian square at a point is proportional to the probability of finding the particle at that point. It is not known how to define a $\Psi$ wave for the photon, so a scalar function of the electromagnetic field is used in its place. Associating photons with this wave requires the definition of ``normal modes'', valid only in a limited optical system out of which the photons cannot be exported. Thus, W. E. Lamb \cite{WLamb} and W. E. Lamb, Schleich, Scully and Townes \cite{WLamb2} call the photon a pseudo-particle which must be used with a great care that many physicists do not exercise. In the absence of Einstein's theory, astrophysicists apply Monte Carlo calculations to photons in the study of light propagation in low-pressure gases.

Section \ref{coherence} shows that the use of Monte-Carlo computations in optics must be limited to the case of media so heterogeneous that the concept of a light ray cannot be used. Section \ref{Einstein} shows that the use of the absolute spectral radiance simplifies the use of Einstein's theory in the presence of spontaneous emission. Section \ref{Stromgren} improves a spectroscopic model introduced by Str\"omgren, in which many effects observed in laser technology apply. Section \ref{Applications} justifies several astrophysical theories that were rejected by authors whose spectroscopy did not take coherence into account, and suggests other applications. In conclusion, we suggest developing models that use optical coherence to simplify our view of the universe.

\section{Coherence and incoherence.}\label{coherence}
The theory of light propagation in the atmosphere separates coherent and incoherent scattering well: Huygens' construction shows the propagation of monochromatic waves in a homogeneous continuous medium, considering that each infinitesimal fragment of the medium located on a wave surface emits a monochromatic wave of the same frequency, coherent with the exciting wave. This construction also applies to the particles of a real medium if the finite density of particles in the vicinity of a wave surface is large enough. It must be assumed that the molecules constituting the medium emit a wave having a well-defined phase difference with the local incident wave. In a transparent medium, this phase difference is usually a delay of $\pi/2$.
The identity of the initial and scattered wavefronts allows the two waves to interfere into a single monochromatic, refracted wave. Einstein's theory \cite{Einstein1917} extends the theory of refraction to the emission or absorption of light by assuming that the complex amplitude of an incident wave is multiplied by a complex ``amplification'' coefficient, preserving the wave surfaces and respecting the laws of thermodynamics. This extension is not obvious because the energy exchanges with the molecules are quantized, so we must admit the existence of a process of (de-)coherence, either quantum or classical by Huygens' constructions. This process transfers the energy exchanges between all involved molecules and the few molecules that undergo a transition, and vice versa.

\medskip Huygens' theory is faulty if the scattering of light by certain molecules depends on a stochastic parameter. In the neighborhood of a critical point of a gas, this parameter is the density, which fluctuates, and the theory of refraction is no longer valid. Away from the critical point, most molecules are no longer subject to density fluctuations, and the coherent Rayleigh scattering (giving refraction) and incoherent Rayleigh scattering (blue of the sky, ...) become compatible. At low pressure, the fluctuations are mostly binary collisions, whose density is proportional to the square of the pressure: the incoherent scattering disappears at low pressure, for instance in the stratosphere. This conclusion, extended by Einstein's theory to all interactions, is exactly opposite to Menzel's statement.

\medskip While Monte-Carlo calculations explain well the result of complex interactions (such as neutron and uranium atom, or light and cloud droplets too inhomogeneous to form a rainbow), drawing the phase of the pilot wave of a photon at random negates the whole wave aspect of light: in particular, two photons whose pilot waves have opposite phases must cancel, not add, their effects. A Monte-Carlo calculation that would take all phases into account would be very complicated and is unnecessary, as Huygens' construction provides the result of a large number of interferences.

\section{Using Einstein's theory.}\label{Einstein}
The formula for the spectral radiance of a blackbody at temperature $T_P$, given by Planck in 1900, was adapted in 1911 by its author to the absolute spectral radiance $I_\nu$ at frequency $\nu$ (Planck \cite{Planck1911}) and approved by Einstein and Stern \cite{Einstein1913}:
\begin{eqnarray}
I_\nu=\frac{h\nu^3}{c^2}\left\{1+\frac{2}{\exp(h\nu/kT_P)-1}\right\}.
\end{eqnarray}
A small hole in the body lets out a beam having this spectral radiance, so that the formula defines the Planck temperature $T_P$ of any beam according to its absolute spectral radiance and its frequency. The diffraction- (and polarization-) limited modes of interest in astrophysics propagate in beams limited by the aperture of a telescope and the diffracted image of a distant point. The optical extent (Clausius invariant) of these beams is the square of the wavelength, $\lambda^2=c^2/\nu^2$. In these beams, an infinite, polarized sine wave defines a mode; each pulse of polarized natural light whose frequency spectrum $\Delta\nu$ corresponds to the pulse duration defines a dynamical mode. Thus, the absolute energy in a mode is obtained by multiplying the radiance $I_\nu$ by $\lambda^2/2=c^2/2\nu^2$. Corresponding to a degree of freedom, it is equivalent to $kT/2$ at high temperature, as required by thermodynamics.
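For later reference, the Planck formula above may be inverted algebraically to express the Planck temperature of a beam directly in terms of its absolute spectral radiance:
\begin{eqnarray}
T_P=\frac{h\nu}{k\,\ln\left(1+\frac{2}{c^2 I_\nu/h\nu^3-1}\right)}\; .
\end{eqnarray}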
Using the absolute radiance, Einstein's A coefficient is null, so that there is no problem with the introduction of A: the spontaneous emission results from the amplification of the zero-point field, whose phase is generally unknown.

\medskip Consider a transition between two nondegenerate levels of potential energies $E_1$ and $E_2>E_1$, whose respective populations are $N_1$ and $N_2$. Boltzmann's law defines a transition temperature $T_B$ such that $N_2/N_1 =\exp[(E_1-E_2)/kT_B]$. By convention, negative values of $T_B$ are accepted. In homogeneous media, it is agreed to treat refraction and change of radiance separately. With this convention, a ray is attenuated or amplified by the medium it crosses without its geometry being changed. Only the initial phase of a ray of unknown origin must be considered stochastic. If the medium is opaque, $T_P$ equals $T_B$; otherwise, $T_P$ tends to $T_B$. The algebraic value of the amplification coefficient may be computed from Einstein's coefficient $B$; its sign is positive if $T_B<0$ or if $T_B>T_P$.

\medskip Another advantage of the use of the absolute field is the correctness of the calculation of the energy exchanged with matter, which involves the variation of the square of the absolute field; this differs from the variation of the square of a relative field.

\section{Study of an astrophysical size model introduced by Str\"omgren.}\label{Stromgren}
\subsection{Str\"omgren's main results.}\label{results}
\begin{figure*} \vspace*{0cm} \includegraphics[width=10cm]{shell.png}% \caption{Comparison of the amplifications of rays crossing a Str\"omgren sphere: the path inside the infinitesimal shells is larger for ray b than for ray a, and null for ray c. Thus, as a function of the distance of the ray from the star, the total amplification increases, then falls, and therefore has at least one maximum.} \label{coq} \end{figure*}
Str\"omgren defined \cite{Stromgren} a model consisting of an extremely hot star immersed in a vast, low-density and initially cold hydrogen cloud. The ultraviolet light emitted by the star almost completely ionizes the hydrogen of a {\it Str\"omgren sphere}, which becomes transparent. Traces of atoms appearing in the outer regions of the sphere absorb energy by collisions and from light emitted by the star at their own eigenfrequencies. They dissipate this energy by radiating spontaneously into all directions.
\begin{figure*} \vspace*{0cm} \includegraphics[width=80mm]{sphere.png} \caption{Appearance of a Str\"omgren shell, supposing that the spontaneously emitted light is neither absorbed nor amplified.} \label{sphere} \end{figure*}
This energy dissipation lowers the temperature and causes a catastrophic increase in the density of atoms, so that the sphere is surrounded by a relatively thin {\it Str\"omgren shell} that radiates intensely.
\begin{figure*} \vspace*{0cm} \includegraphics[width=80mm]{modes.png} \caption{With a strong superradiance, a Str\"omgren shell appears as a pearl necklace.} \label{modes} \end{figure*}

\subsection{Superradiance.}\label{superradiance}
Figure \ref{coq} shows the amplification of a light ray by the Str\"omgren shell, which may be split into infinitesimal, concentric, spherical shells: for rays passing near the star, the path inside each crossed shell varies little. Farther out, this path grows faster and faster, up to a maximum amplification. Finally, the number of crossed infinitesimal shells decreases, and the amplification falls to zero.
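The geometric argument of Fig.~\ref{coq} is easy to check numerically. In the following sketch the shell radii and the gain coefficient are hypothetical; the amplification of a ray with impact parameter $\rho$ is taken as the exponential of the gain times the chord length inside the shell.
\begin{verbatim}
import numpy as np

R_in, R_out = 1.0, 1.2  # hypothetical inner/outer radii of the shell
alpha = 5.0             # hypothetical gain per unit length in the shell

def path_in_shell(rho):
    """Chord length of a straight ray of impact parameter rho
    inside the spherical shell."""
    outer = 2.0 * np.sqrt(np.clip(R_out**2 - rho**2, 0.0, None))
    inner = 2.0 * np.sqrt(np.clip(R_in**2 - rho**2, 0.0, None))
    return outer - inner

rho = np.linspace(0.0, R_out, 2001)
gain = np.exp(alpha * path_in_shell(rho))
print("maximum amplification at rho =", rho[np.argmax(gain)])
# the maximum occurs for rays grazing the inner radius R_in
\end{verbatim}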
Figure \ref{sphere} shows the appearance of the system, supposing that the spontaneously emitted light is neither absorbed nor amplified. Str\"omgren did not know of the intense, coherent interactions of light with matter. The plasma in the shell is similar to the plasma in gas lasers: the amplification factor is high, resulting in an intense superradiance, maximal for rays located at a distance $R$ from the center of the sphere; $R$ precisely defines the inner radius of the shell. By competition of modes, these rays emit most of the available energy; in a given direction, they generate a circular cylinder. If the superradiance is large, the competition of modes playing on these rays selects a system of an even number of orthogonal modes, as in a laser whose central modes have been extinguished, for example by an opaque disc (daisy modes). Thus, the strongly superradiant rays of a transition form a regularly punctuated circle (Fig.~\ref{modes}), and the atoms are quickly de-excited. With a lower superradiance, or with a mixture of several frequencies in a wide-band detector, the ring may appear continuous.

\subsection{Multiphotonic scattering}\label{multiphoton}
\medskip The rays emitted by a supernova have, at each frequency, the spectral radiance of a laser, so that multiphoton interactions and transitions between virtual levels are allowed. All frequencies may be involved in simple combinations of frequencies that result in resonance frequencies of hydrogen atoms. Thus, an important fraction of the energy emitted by the star is absorbed. All multiphoton absorptions and induced emissions form a parametric induced scattering that transfers the bulk of the energy of the radial rays emitted by the star to the superradiant rays. If the star is seen under a solid angle much smaller than the dots forming the ring, it is no longer visible.

\subsection{Frequency shifts.}\label{shifts}
A large coherent transfer of energy between two collinear light beams is difficult because, the wavelengths being generally different, the phase difference between the two beams changes, so that the transferred amplitudes cancel along the path. To preserve the coherence, the usual solutions are: the use of two indices of refraction of a crystal, the use of non-collinear broad light beams, or the use of light pulses. In ``Impulsive Stimulated Raman Scattering'' (ISRS) \cite{Ruhman}, the use of short laser pulses limits the phase shift between two light beams of slightly different frequencies, so that their coherence may be preserved: G. L. Lamb \cite{GLamb} wrote that the length of the light pulses used must be ``shorter than all relevant time constants''. One time constant is the collisional time, because collisions break the phases; the other relevant time constants are the periods of the involved Raman resonances, which set the periods of the beats between the exciting and the scattered beams.

\medskip In appendix A, a general description of ISRS is avoided because the studied ``Coherent Raman Effect on Incoherent Light'' (CREIL) may be obtained simply by replacing the coherent Rayleigh scattering which produces the refraction by a coherent Raman scattering. For a ``parametric effect'', that is, to avoid an excitation of matter so that matter plays the role of a catalyst, several light beams must be involved simultaneously. With the exception of the frequency shifts, the properties of the CREIL are those of refraction:
\begin{itemize}
\item The interactions are coherent, that is, the wave surfaces are preserved.
\item The interaction is linear versus the amplitude of each light beam, so that there is no amplitude threshold.
\item The entropy of the set of light beams increases through an exchange of energy which shifts their frequencies.
\item Locally, the frequency shift of a beam is proportional to the column density of active scattering molecules, and it depends on the temperatures and irradiances at all involved light frequencies.
\item Lamb's conditions must be met.
\end{itemize}

\medskip Using ordinary incoherent light made of pulses of around one nanosecond, the pressure of the gas and the Raman resonance frequency must be low, so that the effect is weak. With atomic hydrogen, the frequency of 1420 MHz of the hyperfine transition in the 1S state is too large; in the first excited state, the frequencies of 178 MHz in the 2S$_{1/2}$ state, 59 MHz in the 2P$_{1/2}$ state, and 24 MHz in the 2P$_{3/2}$ state are as large as allowed, which is very convenient.

\medskip The background radiation is always involved. As it is generally nearly isotropic, many beams are involved and their irradiance is large. Thus, for light, there is always a redshift component. These frequency shifts are easily observed in laboratories using picosecond laser pulses or, with longer pulses, in optical fibers. The frequency shift is roughly inversely proportional to the cube of the length of the pulses, so that astronomical paths are needed with ordinary light.

\medskip The density of atomic hydrogen is negligible in the Str\"omgren sphere except near the surface, where it grows roughly exponentially toward the surface. Thus, the intensity of the ``spontaneous emission'' is low in depth and high at the surface. While propagating in a medium that contains excited hydrogen atoms, the beam provides energy to the thermal radiation of high irradiance, and receives energy from the hot rays emitted by the star, whose irradiance is low. It is reasonable to assume that the balance is negative, so that, at the surface, the weak, deep emission is redshifted, while the stronger surface emission is at the laboratory wavelength $\lambda_0$ (Fig. \ref{caj} with $\lambda_1=\lambda_0$, the laboratory wavelength).

\medskip
\begin{figure*} \vspace*{0cm} \includegraphics[width=100mm]{spec_calc} \caption{Theoretical, qualitative spectrum of the light spontaneously emitted along a ray crossing a Str\"omgren sphere. Observed at the surface of the sphere (D=0), the maximum of radiance is at the laboratory wavelength ($\lambda_1=\lambda_0$). In the Str\"omgren shell, all wavelengths decrease and the scale of the spectrum is changed: $\lambda_1<\lambda_0$. The spectrum depends on the distance $\rho$ between the ray and the star.} \label{caj} \end{figure*}
In the shell, near the sphere, excited hydrogen remains, able to catalyze exchanges of energy. The energy emitted by the star, which propagates radially at speed $c$, is transferred to the tangent rays whose {\it radial component of speed} is low, so that the irradiance of the warm rays becomes very large: the cold spontaneous emission receives energy and the spectrum is shifted towards shorter wavelengths (Fig. \ref{caj} with $\lambda_1<\lambda_0$).

\section{Possible applications in astrophysics.}\label{Applications}
\subsection{The distorted Str\"omgren's sphere of supernova remnant SNR1987A}
\begin{figure*} \vspace*{0cm} \includegraphics[width=150mm]{calcul_mic.png} \caption{Spectrum of the spontaneous emission of the disk inside the necklace of SNR1987A, computed by Michael et al.
\cite{Michael}.} \label{camic} \end{figure*}
\begin{figure*} \vspace*{0cm} \includegraphics[width=150mm]{spectre.png} \caption{Spectrum recorded inside the ring of SNR1987A (Michael et al. \cite{Michael}).} \label{recs} \end{figure*}
The supernova remnant SNR1987A is surrounded by relatively dense clouds of hydrogen making an ``hourglass'' (Sugerman et al. \cite{Sugerman}). These clouds were detected shortly after the explosion by photon echoes. Burrows et al. \cite{Burrows} criticized an interpretation of the three rings of SNR1987A as the limbs of the hourglass because, without superradiance, assuming that the hourglass is a strangulated Str\"omgren sphere gives a distorted figure similar to figure \ref{sphere}, not a distorted figure \ref{modes} as observed. The superradiance generates the three ``pearl necklaces'' at the limbs of the hourglass. Evidently, it is necessary to take into account several lines of hydrogen and variations of the density of hydrogen, so that the monochromatic image of SNR1987A is complicated. Many images of nebulae (for instance the ``bubble nebula'' SNR0509) are intermediate between figures \ref{sphere} and \ref{modes}. Without filters, SN1006 shows a sphere; its hydrogen lines show only a very bright part of its limb.

\medskip The spectrum (Fig. \ref{recs}) emitted within the central ring of SNR1987A results from the superposition of spectra observed at different distances $\rho$ from the center of the disk, so that different spectra corresponding to different paths are added. Our spectrum (Fig. \ref{caj}) does not show a strong, truncated peak like the spectrum calculated by Michael et al. \cite{Michael} (Fig. \ref{camic}). Our computation is better than a Monte-Carlo one, but the starting point is the same: the redshift of the spectrum is assigned to the propagation of light in the hydrogen plasma rather than to an expansion of the universe, a Doppler effect of winds, etc. Probably Michael et al. did not insist on this point because of the hostility against any discussion of the origin of the ``cosmological'' redshifts.

\subsection{Overview of other possible applications.}
Many ``planetary nebulae'' show arcs of circles or ellipses, punctuated or not. The usual explanation is that the image of a very bright, distant star is distorted and multiplied by the gravitational lensing of an interposed, dark, heavy star. This explanation has been criticized because it requires a large number of alignments of stars with the Earth, and because it is difficult to justify a certain regularity of the punctuation. The phases of two contiguous dots are opposite, so that this feature can be tested by interference if the necklace is incomplete. Some, like the ``necklace nebula'', are so similar to SNR1987A that they appear to be images of Str\"omgren systems. The spectra could distinguish the two types of rings: the spectrum of the limb of a Str\"omgren sphere is a line spectrum, while the spectrum of a very distant, bright star is probably a continuous emission spectrum. Observed lines of many atoms have the shape of figure \ref{caj}; they may be generated by atoms heated in a hydrogen plasma.

\medskip The frequency shifts by the CREIL effect have many applications (Moret-Bailly \cite{MBIE,MB0507141,MBAIP06}):
\begin{itemize}
\item Increase in the frequency of the radio waves exchanged with the Pioneer 10 and 11 probes, resulting from a transfer of energy from the solar radiation where the solar wind is cooled enough to generate atoms.
This frequency shift is usually interpreted by a Doppler effect, as an anomalous acceleration of the probes.
\item The frequency shifts of the extreme-UV lines emitted by the Sun and observed by SOHO are usually attributed to a Doppler effect due to vertical speeds of the source. But the frequencies observed at the limb are not the laboratory frequencies. Assume that, at high pressure and temperature, hydrogen is in a state similar to a crystal, so that a CREIL effect is possible. The paths, and thus the frequency shifts, from the depth that emits a line are larger at the limb than at the center, so that the laboratory frequencies are preserved.
\item The spectrum of a neutron star heated to a very high temperature by the accretion of a cloud of hydrogen is very similar to the spectrum of a quasar, including Karlsson's periodicities. Thus the quasars may be in our galaxy or close to it, so that they are not enormous and do not move very fast.
\item High redshifts appear where hydrogen is atomic and excited.
\item It is necessary to re-examine the scales of distances deduced from Hubble's law, which assumes a redshift of the spectra proportional to the path of the light, whereas the example of SNR1987A shows that it is necessary to take other parameters into account: the density and state of hydrogen, the temperature of the studied rays, and the temperatures and radiances of the other rays.
\end{itemize}
Thus, an important amount of work appears necessary to squeeze the sponge of the maps of the galaxies. To get some reliable distances, one can use the dynamics of the galaxies to evaluate, without dark matter, their sizes and thus their distances.

\section{Conclusion}
The introduction of optical coherence in astrophysics provides new, efficient tools able to deepen our understanding of the universe:

\medskip The propagation of light in resonant, diluted gases is usually calculated by two methods: optical coherence (Einstein) or Monte Carlo calculations. The results are very different, so that Einstein's theory, largely verified by the success of gas lasers, must be chosen. The Monte Carlo calculations should be reserved for the propagation of light in opalescent media. The power of the tools developed in connection with lasers should be used to study the diluted gases present in interstellar space, with column densities much higher than those typically found in gas lasers. Many observations are easily explained by optical coherence, and papers whose conclusions were not convincing are validated by introducing coherence: the ``pearl necklaces'' and the multiple images of stars attributed to gravitational lensing arise from superradiance. The optical coherence explains the disappearance of supernova 1987A when its ``pearl necklace'' appeared. The superradiance validates the coincidence of the necklaces with the limbs of an ``hourglass'' observed by photon echoes: it sharpens and punctuates this limb. The shape of many spectral lines, broken at the shortest wavelength, is due to a spontaneous emission and redshift of the lines in a hydrogen plasma.

\medskip Unexpected results occur:
\begin{itemize}
\item The frequencies of the UV-X spectral lines of the Sun coincide with the laboratory frequencies assuming that the lines are not shifted by a Doppler effect, but by an ``Impulsive Stimulated Raman Scattering'' (ISRS): energy is exchanged between radiations propagating in hot, compressed hydrogen similar to a crystal.
\item The ``anomalous acceleration'' of Pioneer 10 and 11 results from the attribution to a Doppler effect of the blueshift of the carrier of the microwave signals.
This shift is due to an exchange of energy between the solar light and the microwaves.
\item The Hubble law can be explained by an exchange of energy between light and the microwave background through ISRS. This law does not provide a reliable distance scale, as the ISRS depends, in particular, on the density of excited atomic hydrogen, which works as a catalyst.
\end{itemize}
\section{Introduction}
Bound states of heavy quarks, in particular the charmonium state $J/\psi$ and its excitations as well as the heavier bottomonium states, are sensitive probes for deconfining features of the quark-gluon plasma (QGP) \cite{matsui}. Different excitations of these states are expected to dissolve at different temperatures in the QGP, giving rise to a characteristic sequential melting pattern \cite{mehr}. Recent lattice QCD calculations of thermal hadron correlation functions suggest that certain quarkonium states survive as bound states in the QGP well beyond the pseudo-critical temperature of the chiral crossover transition, $T_c=(154\pm 9)$~MeV \cite{bazavov}; the $J/\psi$ and its pseudo-scalar partner $\eta_c$ disappear at about $1.5 T_c$ \cite{ding}, while the heavier bottomonium ground states can survive even up to $2 T_c$ \cite{Petreczky,aarts}.

Light quark bound states, on the other hand, dissolve already at or close to the pseudo-critical temperature, $T_c$, reflecting the close relation between the chiral crossover and the deconfinement of light quark degrees of freedom. This leads to a sudden change in bulk thermodynamic observables and is even more apparent in the behavior of fluctuations of conserved charges, i.e. baryon number, electric charge or strangeness \cite{Koch, Ejiri}. The sudden change of ratios of different moments (cumulants) of net-charge fluctuations and their correlations in the transition region directly reflects the change of the degrees of freedom that carry the relevant conserved charges. The total number of hadronic degrees of freedom, i.e. the detailed hadronic mass spectrum, also influences bulk thermodynamics. For instance, the strong rise of the trace anomaly, $(\epsilon -3P)/T^4$, found in lattice QCD calculations may be indicative of contributions of yet unobserved hadron resonances \cite{Majumder}.

Recently it has been shown that the large set of fourth order cumulants of charge fluctuations and cross-correlations among fluctuations of conserved charges allows for a detailed analysis of the change from hadronic to partonic degrees of freedom in different charge sectors \cite{strange}. For instance, changes of the degrees of freedom in the strange meson and baryon sectors of hadronic matter can be analyzed separately by choosing appropriate combinations of charge fluctuation observables. This led to the conclusion that a description of strong interaction matter in terms of uncorrelated hadronic degrees of freedom breaks down for all strange hadrons in the chiral crossover region, i.e. at $T\lesssim160$~MeV \cite{strange}, which suggests that strangeness gets dissolved at or close to $T_c$. This finding has been confirmed by the analysis presented in \cite{Bellwied}.

A more intriguing question is what happens to the charmed sector of the hadronic medium at the QCD transition temperature. While it seems to be established that charmonium states, i.e. bound states with hidden charm, still exist in the QGP at temperatures well above $T_c$, this may not be the case for heavy-light mesons or baryons, i.e. open charm mesons ($D$, $D_s$) \cite{rapp,tolos} or charmed baryons ($\Lambda_c,\ \Sigma_c,\ \Xi_c,\ \Omega_c$). To address this question we calculate cumulants of net-charm fluctuations as well as correlations between moments of net-charm fluctuations and moments of net baryon number, electric charge or strangeness fluctuations.
Motivated by the approach outlined in Ref.~\cite{strange}, we analyze ratios of observables that may, at low temperature, be interpreted as contributions of open charm hadrons to the partial mesonic or baryonic pressure of strong interaction matter. We show that a description of net-charm fluctuations in terms of models of uncorrelated hadrons breaks down at temperatures close to the chiral crossover temperature. We furthermore show that at low temperatures the partial pressure calculated in the open charm sector is larger than expected from hadron resonance gas (HRG) model calculations based on all experimentally measured charmed resonances listed in the particle data tables \cite{PDG}. It does, however, agree well with an HRG based on charm resonances obtained from quark model \cite{Isgur,cQM,Ebertm,Ebert} and lattice QCD calculations \cite{Prelovsek,Moir,Edwards}. This points at the existence and thermodynamic importance of additional, experimentally so far not established, open charm hadrons.

\section{The charmed hadron resonance gas}
While light quark fluctuations can be quite well described by a hadron resonance gas \cite{hotQCDHRG} built up from experimentally measured resonances that are listed in the particle data tables \cite{PDG}, it is not at all obvious that this suffices in the case of the heavy open charm resonances. The particle data tables only list a few measured open charm resonances. Many more are predicted by relativistic quark model \cite{Isgur,cQM,Ebertm,Ebert} and lattice QCD \cite{Moir,Edwards} calculations. In fact, the large set of excited charmed mesons and baryons found in lattice QCD calculations closely resembles the excitation spectrum predicted in quark model calculations. It is expected that many new open flavor states will be detected in upcoming experiments at Jefferson Laboratory, FAIR and the LHC \cite{cQM,glueX,PANDA,LHCb}.

If these resonances are indeed part of the charmed hadron spectrum of QCD, they will be excited thermally and contribute to the thermodynamics of the charmed sector of a hadron resonance gas. They will show up as intermediate states in the hadronization process of a quark-gluon plasma formed in heavy ion collisions and influence the abundances of various particle species \cite{PBM}. Heavy-light bound states also play an important role in the break-up of quarkonium bound states. In lattice QCD calculations their contribution becomes visible in the analysis of the heavy quark potential, where they can help to explain the non-vanishing expectation value of the Polyakov loop at low temperatures \cite{Megias,Peter}.

In order to explore the significance of a potentially large additional set of open charm resonances in thermodynamic calculations at low temperature, we have constructed HRG models based on different sets of open charm resonances. In addition to the HRG model that is based on all experimentally observed charmed hadrons (PDG-HRG), we also construct an HRG model based on a set of charmed hadrons calculated in a quark model (QM-HRG), where we use the charmed meson \cite{Ebertm} and charmed baryon \cite{Ebert} spectra calculated by Ebert \etal\footnote{The thermodynamic considerations presented here are mainly sensitive to the number of additional hadrons included in the calculations and not to the precise values of their masses. Thus lattice QCD results on the charmed baryon spectra \cite{Edwards} also lead to similar conclusions.}.
One may wonder whether all the resonances calculated in a quark model exist, or whether they are stable and long-lived enough to contribute to, e.g., the pressure of charmed hadrons. However, as highly excited states with masses much larger than the ground state energy in a given quark flavor channel are strongly Boltzmann suppressed, they play no significant role in thermodynamics. For this reason we also need not consider multiply charmed baryons or open charm hybrid states that have been identified in lattice QCD calculations \cite{Moir,Edwards} but generally have masses more than (0.8-1)~GeV above those of the ground state resonances. We explore the impact of such heavy states by introducing different cut-offs for the maximum mass up to which open charm resonances are taken into account in the HRG model. For instance, QM-HRG-3 includes all charmed hadron resonances determined in quark model calculations that have masses less than $3$~GeV.

We calculate the open charm meson ($M_C(T,\vec{\mu})$) and baryon ($B_C(T,\vec{\mu})$) pressure in units of $T^4$, such that the total charm contribution to the pressure is written as $P_C(T,\vec{\mu})/T^4 = M_C(T,\vec{\mu}) + B_C(T,\vec{\mu})$. As the charmed states are all heavy compared to the scale of the temperatures relevant for the discussion of the thermodynamics in the vicinity of the QCD crossover transition, a Boltzmann approximation is appropriate for all charmed hadrons,
\begin{eqnarray}
M_C(T,\vec{\mu}) &=& {1\over {2\pi^2}} \sum_{i\in C{\rm -mesons}} g_i \left(\frac{m_i}{T}\right)^2 K_2({{m_i/T}}) \cosh \left( Q_i \hat{\mu}_Q + S_i\hat{\mu}_S + C_i \hat{\mu}_C \right) \; , \label{Cpressure} \\
B_C(T,\vec{\mu}) &=& {1\over {2\pi^2}} \sum_{i\in C{\rm -baryons}} g_i \left(\frac{m_i}{T}\right)^2 K_2({{m_i/T}}) \cosh \left( B_i\hat{\mu}_B + Q_i \hat{\mu}_Q +S_i\hat{\mu}_S+ C_i \hat{\mu}_C \right) \ . \nonumber
\end{eqnarray}
Here $\vec{\mu}=(\mu_B, \mu_Q, \mu_S, \mu_C)$, $\hat{\mu}\equiv \mu/T$, and the $g_i$ are the degeneracy factors of the different states with baryon number $B_i$, electric charge $Q_i$, strangeness $S_i$ and charm $C_i$.
\begin{figure}[!th]
\begin{center}
\includegraphics[scale=0.6]{Phrg}
\end{center}
\caption{Partial pressure of open charm mesons ($M_C$, bottom), baryons ($B_C$, middle) and the ratio $B_C/M_C$ (top) in a gas of uncorrelated hadrons, using all open charm resonances listed in the particle data table (PDG-HRG, dashed lines) \cite{PDG} and using additional charm resonances calculated in a relativistic quark model (QM-HRG, solid lines) \cite{Ebertm, Ebert}. Also shown are results from HRG model calculations where the open charm resonance spectrum is cut off at mass 3~GeV (QM-HRG-3) and 3.5~GeV (QM-HRG-3.5). At temperatures below 160~MeV the latter coincides with the complete QM-HRG model results to better than 1~\%. }
\label{fig:hadronsPDG}
\end{figure}
Results from calculations of open charm meson and baryon pressures using different HRG models are shown in Fig.~\ref{fig:hadronsPDG}. The influence of additional states predicted by the quark model is clearly visible already in the QCD crossover transition region. At $T_c$, differences between PDG-HRG (dashed lines) and QM-HRG (solid lines) in the baryon sector are as large as 40\%, while they are negligible in the meson sector. This reflects that the experimentally known meson spectrum is more complete than the baryon spectrum.
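As a numerical illustration of the Boltzmann-approximation formulas in Eq.~(\ref{Cpressure}), a minimal sketch is given below; the two-state meson list is purely illustrative, and the interface is an assumption rather than the code used for this work.
\begin{verbatim}
import numpy as np
from scipy.special import kn  # modified Bessel function K_n

def partial_pressure(states, T, mu_B=0.0, mu_Q=0.0, mu_S=0.0, mu_C=0.0):
    """P/T^4 of a list of states in the Boltzmann approximation.

    Each state is a tuple (mass, g, B, Q, S, C); the mass and the
    chemical potentials are in the same units as the temperature T."""
    p = 0.0
    for m, g, B, Q, S, C in states:
        x = m / T
        p += g / (2.0 * np.pi**2) * x**2 * kn(2, x) * np.cosh(
            (B * mu_B + Q * mu_Q + S * mu_S + C * mu_C) / T)
    return p

# illustrative two-state list: D^0 and D^+ ground states (masses in GeV)
D_mesons = [(1.865, 1.0, 0, 0, 0, 1), (1.870, 1.0, 0, 1, 0, 1)]
print(partial_pressure(D_mesons, T=0.16))  # contribution to M_C
\end{verbatim}
A mass cut-off such as the one defining QM-HRG-3 simply amounts to filtering the input list of states before the sum.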
In the open charm meson sector, the well-established excitations cover a mass range of about $700$~MeV above the ground state $D,\ D_s$-mesons. In the charmed baryon sector much less is known: for instance, experimentally well-known excitations of $\Xi_c$ range only up to $350$~MeV above the ground state, and in the doubly strange charmed baryon sector only two $\Omega_c$ states, separated by $100$~MeV, are well established. As a consequence of the limited knowledge of the charmed baryon spectrum compared to the open charm meson spectrum, the ratio of partial pressures in the baryon and meson sectors differs strongly between the PDG-HRG and the QM-HRG. This is shown in Fig.~\ref{fig:hadronsPDG}~(top). Significant differences between the QM-HRG-3 and PDG-HRG results also indicate that almost half of the enhanced contributions actually come from additional charmed baryons that are lighter than the heaviest PDG state. Similar conclusions can be drawn when analyzing partial pressures in the strange-charmed hadron sector or the electrically charged charmed hadron sectors.

\section{Calculation of charm fluctuations in (2+1)-flavor lattice QCD}
In order to detect changes in the relevant degrees of freedom that are the carriers of charm quantum numbers at low and high temperatures, as well as to study their properties, we calculate dimensionless generalized susceptibilities of conserved charges,
\beq
\chi_{klmn}^{BQSC} = \left. \frac{\partial^{(k+l+m+n)} [P(\hmu_B,\hmu_Q,\hmu_S,\hmu_C)/T^4]} {\partial \hmu_B^k \partial \hmu_Q^l \partial \hmu_S^m \partial \hmu_C^n} \right|_{\vec{\mu}=0} \ .
\label{eq:susc}
\eeq
Here $P$ denotes the total pressure of the system. In the following we also use the convention of dropping a superscript in $\chi_{klmn}^{BQSC}$ when the corresponding subscript is zero.

For our analysis of net-charm fluctuations we use gauge field configurations generated with the highly improved staggered quark (HISQ) action \cite{hisq}. The use of the HISQ action in the charm sector includes the so-called $\epsilon$-term and thus makes our calculations free of tree-level order $(am_c)^4$ discretization errors \cite{hisq}, where $m_c$ is the bare charm quark mass in units of the lattice spacing. These dynamical (2+1)-flavor QCD calculations have been carried out with a strange quark mass ($m_s$) that has been tuned to its physical value and light $(u,\ d)$ quarks with mass $m_l/m_s =1/20$. In the continuum limit, the latter corresponds to a light pseudo-scalar mass of about 160~MeV. The charm quark sector is treated within the quenched approximation, neglecting the effects of charm quark loops. Within the temperature range relevant for the present study, the quenched approximation for the charm quarks is very well justified. Various lattice QCD calculations using dynamical charm have confirmed that contributions of dynamical charm quarks to bulk thermodynamic quantities, including the gluonic part of the trace anomaly as well as the susceptibilities of light, strange and charm quarks, remain negligible even up to temperatures as high as 300 MeV \cite{STOUT211,MILC-C}. We note that these quantities directly probe the influence of virtual quark pairs on observables calculated at a fixed value of the temperature. Unlike in these cases, there is no simple observable known that would allow a direct calculation of the pressure at fixed temperature. This may be the reason for the differences seen in current calculations of the pressure \cite{STOUT211,MILC-C} using quenched or dynamical charm.
In this work, we only use observables that are of the former type and also do not require any multiplicative or additive renormalization. The line of constant physics for the charm quark has been determined at zero temperature by calculating the spin-averaged charmonium mass \cite{Yu}, $\frac{1}{4} ( m_{\eta_c} + 3 m_{J/\psi})$. For this purpose we used gauge field configurations generated by the hotQCD collaboration on lattices of size $32^4$ and $32^3\cdot48$ in the range of gauge couplings, $6.39\le \beta= 10/g^2 \le 7.15$ \cite{bazavov,hotQCDHRG}. On finite temperature lattices with temporal extent $N_\tau=8$, this covers the temperature range\footnote{At finite lattice spacing $f_K$ has been used to set the temperature scale \cite{hotQCDHRG}.} $156.8~{\rm MeV}\le T \le 330.2~{\rm MeV}$. On these lattices and for the slightly larger-than-physical light quark mass value used in our calculations, the transition temperature is $158(3)$~MeV, i.e. about $4$~MeV larger than the continuum extrapolated result at the physical values of the light and strange quark masses \cite{bazavov}. We consider this difference of about 3\% as the typical systematic error for all temperature values quoted for our analysis, which is not extrapolated to the physical point in the continuum limit. The line of constant physics for the charm quark sector is well parametrized by \begin{equation} m_ca = \frac{c_0 R(\beta) + c_2 R^3(\beta)}{1+d_2 R^2(\beta) } \; , \end{equation} with $R(\beta)$ denoting the two-loop $\beta$-function of massless 3-flavor QCD and $c_0=56.0$, $c_2 = 1.16\cdot 10^6$, $d_2=8.67\cdot 10^3$. On this line the charm quark mass varies by less than 5\%. The ratio of charm and strange quark masses, $m_c/m_s$, varies by about 10\%, with $m_c/m_s=12.42$ at $\beta=6.39$ and $m_c/m_s=11.28$ at $\beta=7.15$. For most of our calculations we use data sets on lattices of size $32^3\cdot 8$. A subset of these configurations has already been used for the analysis of strangeness fluctuations \cite{strange}. These data sets have been enlarged and now contain up to 16700 configurations at the lowest temperature, separated by 10 time units in rational hybrid Monte Carlo updates. Some additional calculations have been performed on coarser $24^3\cdot 6$ lattices, with fixed $m_c/m_s=12$, in order to check cut-off effects also in the charm quark sector. We summarize the statistics exploited in this calculation in Table~\ref{tab:stat}. We calculate all the moments of net charm fluctuations needed to construct up to fourth order cumulants that correlate net-charm fluctuations with net baryon number, electric charge and strangeness fluctuations. As the calculation of charm fluctuations is fast, we can afford to use on each gauge field configuration up to 6000 Gaussian distributed random source vectors for the inversion of the charmed fermion matrix. This leaves us with statistical errors that mainly arise from fluctuations in the light and strange quark sectors, where we have used 1500 random source vectors for the inversion of the corresponding fermion matrices.
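For illustration, the line-of-constant-physics parametrization above can be evaluated in a few lines. In the sketch below, the conventional two-loop form of $R(\beta)$ for massless 3-flavor QCD is assumed, since the text does not spell out its normalization:
\begin{verbatim}
import numpy as np

# Universal two-loop coefficients for massless 3-flavor QCD
b0 = 9.0 / (16.0 * np.pi**2)
b1 = 64.0 / (16.0 * np.pi**2)**2

def R(beta):
    """Assumed two-loop running factor, a*Lambda ~ R(beta), with beta = 10/g^2."""
    g2 = 10.0 / beta
    return (b0 * g2)**(-b1 / (2.0 * b0**2)) * np.exp(-1.0 / (2.0 * b0 * g2))

def mc_a(beta, c0=56.0, c2=1.16e6, d2=8.67e3):
    """Bare charm quark mass in lattice units along the line of constant physics."""
    r = R(beta)
    return (c0 * r + c2 * r**3) / (1.0 + d2 * r**2)

for beta in (6.39, 7.15):
    print(beta, mc_a(beta))
\end{verbatim}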
\begin{table} \begin{center} \begin{tabular}{|c|r||c|r|} \hline \multicolumn{2}{|c|}{$N_\tau =8$} & \multicolumn{2}{|c|}{$N_\tau =6$} \\ \hline T[MeV] & \# conf & T[MeV] & \# conf \\ \hline 156.8 & 16700 & & ~ \\ 162.0 & 9520 & 162.3 & 7820\\ 165.9 & 9000 & 166.7 & 3590\\ 168.6 & 6130 & 170.2 & 5140 \\ 173.5 & 5510 & & \\ 178.3 & 5500 & & \\ 184.8 & 5730 & & \\ 189.6 & 4930 & & \\ 196.0 & 6000 & & \\ 207.3 & 1800 & & \\ 237.1 & 1600 & & \\ 273.9 & 1600 & & \\ 330.2 & 1600 & &\\ \hline \end{tabular} \end{center} \caption{Number of configurations analyzed at different values of the temperature and on different size lattices.} \label{tab:stat} \end{table} \section{Partial pressure of open charm hadrons from fluctuations and correlations} Our analysis of higher order cumulants of net charm fluctuations and their correlations with net baryon number, electric charge and strangeness, closely follows the concepts developed for our analysis of strangeness fluctuations \cite{strange}. The large charm quark mass, $m_c\gg T$, however, leads to some simplifications. First of all, for temperatures a few times the QCD transition temperature, Boltzmann statistics is still a good approximation for a free charm quark gas. In the high temperature phase we can thus compare our results with cumulants derived from a free massive quark-antiquark gas in the Boltzmann approximation, \begin{equation} \frac{P_{c,free}(m_c/T,\vec{\mu}/T)}{T^4} = {3\over {\pi^2}} \left(\frac{m_c}{T}\right)^2 K_2({{m_c/T}}) \cosh \left( \frac{\hat{\mu}_B}{3} + \frac{2}{3} \hat{\mu}_Q + \hat{\mu}_C \right) \ , \label{fmc} \end{equation} where we used explicitly the quantum numbers of charm quarks. Another simplification occurs at low temperatures, where we expect a hadron resonance gas to provide a good description of cumulants of net charge fluctuations. At these temperatures, the pressure of the hadronic medium receives contributions from different open charm mesons and baryons. Using the fact that these hadrons carry integer conserved charges for baryon number ($|B|\le 1$), electric charge ($|Q|\le 2$), strangeness ($|S|\le 2$) and charm ($|C|\le 3$), we can separate the total open charm contribution to the pressure in terms of different mesonic ($M_C$) and baryonic ($B_{C,i}$ with $i\equiv |C|= 1,\ 2,\ 3$) sectors, \begin{equation} \frac{P_C(T,\vec{\mu})}{T^4} = M_C (T,\vec{\mu})+B_C (T,\vec{\mu}) = M_C (T,\vec{\mu})+\sum_{i=1}^3 B_{C,i}(T,\vec{\mu}) \; . \label{C-pressure} \end{equation} In this work, we also make use of analogous decompositions of the open charm pressure into partial pressures in different electric charge and strangeness sectors. In these cases, we decompose the corresponding partial pressures as, \begin{equation} \frac{P_{C,X}(T,\vec{\mu})}{T^4} = M_{C,|X|=1}(T,\vec{\mu}) +B_{C,|X|=1}(T,\vec{\mu}) + B_{C,|X|=2}(T,\vec{\mu})\; ,\; X=Q,\ S \; . \label{PCQ} \end{equation} Due to the large charm quark mass, the masses of charmed baryons with $|C|=2$ or $3$ are substantially larger than those of the $|C|= 1$ hadrons; e.g. $\Delta = m_{C=2}-m_{C=1} \simeq 1.2$~GeV. Even at $T\simeq 200$~MeV, i.e. well beyond the validity range of any HRG model, the contribution of a $|C|=2$ hadron to $P_C(T,\vec{\mu})/T^4$ thus is suppressed by a factor $\exp (-\Delta/T) \simeq 10^{-3}$ relative to that of a corresponding $|C|=1$ hadron. The latter thus will dominate the total partial charm pressure, $P_C(T,\vec{\mu})/T^4 \simeq M_C (T,\vec{\mu})+ B_{C,1}(T,\vec{\mu})$.
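The quoted suppression of the $|C|=2$ sector is a one-line check:
\begin{verbatim}
import numpy as np

Delta = 1.2    # GeV, mass gap between |C|=2 and |C|=1 baryons
T     = 0.200  # GeV
print(np.exp(-Delta / T))   # ~2.5e-3, i.e. the ~10^-3 suppression quoted above
\end{verbatim}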
Similarly, the baryon contributions to the charged and strange partial charm pressures will be dominated by $|C|=1$ baryons only. The dominance of the $|C|=1$ sector in all fluctuation observables involving open charm hadrons is immediately apparent from the temperature dependence of second and fourth order cumulants of net-charm fluctuations, $\chi_2^C$ and $\chi_4^C$, as well as the correlations between moments of net baryon number and charm fluctuations ($BC$-correlations). As long as the strong interaction medium can be described by a gas of uncorrelated hadrons, these observables have simple interpretations in terms of partial pressure contributions $M_C$ and $B_{C,i}$ evaluated at $\vec{\mu}=0$, \begin{eqnarray} \chi_n^C &=& M_C +B_{C,1} + 2^n B_{C,2} + 3^n B_{C,3}\simeq M_C+B_{C,1}\; , \nonumber \\ \chi_{mn}^{BC} &=& B_{C,1} + 2^n B_{C,2} + 3^n B_{C,3} \simeq B_{C,1} \; , \label{chin} \end{eqnarray} where $n,\ m >0$ and $n$ or $n+m$ are even, respectively. Here and in the following we often omit the arguments of the functions $M_C(T,0)$, $B_{C,i}(T,0)$. The quantity $(\chi_4^C-\chi_2^C)/12$ is an upper bound for the contribution to the pressure from the $|C|>1$ channels in the open charm sector. For all temperature values analyzed by us, we find that this quantity is less than 0.2\% of $\chi_2^C$. In fact, for temperatures $T\le 200$~MeV the difference vanishes within errors. This may easily be understood, since this difference is only sensitive to contributions of baryons with charm $|C|=2,\ 3$; i.e. $\chi_4^C-\chi_2^C = 12 B_{C,2} + 72 B_{C,3}$ in a gas of uncorrelated hadrons. We thus conclude that up to negligible corrections all cumulants of net-charm fluctuations, $\chi_n^C$, with $n>0$ and even, directly give the total open charm contribution to the pressure in an HRG, $P_C\equiv P_C(T,0) \simeq \chi_2^C$. Moreover, each of the off-diagonal $BC$-correlations, $\chi_{mn}^{BC}$, with $n+m > 0$ and even, approximates well the partial pressure of charmed baryons, $B_C\equiv B_C(T,0)\simeq \chi_{mn}^{BC}$. In Fig.~\ref{fig:BQC}~(right) we show lattice QCD data for $\chi_4^C/\chi_2^C$. In the crossover region this ratio is close to unity. This confirms that at low temperature the charm fluctuations $\chi_2^C$ and $\chi_4^C$ indeed are equally good representatives for the open charm partial pressure. \section{Melting of open charm hadrons} In order to determine the validity range of an uncorrelated hadron resonance gas model description of the open charm sector of QCD, without using details of the open charm hadron spectrum, we analyze ratios of cumulants of correlations between net charm fluctuations and net-baryon number fluctuations ($BC$-correlations) as well as cumulants of net charm fluctuations ($\chi_n^C$). As motivated in the previous section, a consequence of the dominance of the $|C|=1$ charmed baryon sector in thermodynamic considerations is that, to a good approximation, $BC$-correlations in the hadronic phase obey simple relations such as \begin{equation} \chi_{mn}^{BC} \simeq \chi_{11}^{BC} \;\; ,\;\; n+m > 2\; {\rm and~even}\ . \label{BC} \end{equation} \begin{figure}[!th] \begin{center} \includegraphics[scale=0.52]{baryon_reltn} \includegraphics[scale=0.52]{meson_reltn} \end{center} \caption{The left hand figure shows two ratios of fourth order baryon-charm ($BC$) correlations. In an uncorrelated hadron gas both ratios receive contributions only from charmed baryons.
Similarly, for the right hand figure, the ratio $\chi_4^C/\chi_2^C$ is dominated by open charm mesons, and $(\chi_2^C-\chi_{22}^{BC})/(\chi_4^C-\chi_{13}^{BC})$ receives contributions only from open charm mesons. The horizontal lines on the right hand side of both figures show the infinite temperature non-interacting charm quark gas limits of the respective quantities. The shaded region indicates the chiral crossover temperature at the physical pion mass in the continuum limit, $T_c=(154\pm 9)$~MeV, determined from the maximum of the chiral susceptibility \cite{bazavov}. Calculations have been performed on lattices of size $32^3\cdot 8$ (filled symbols) and $24^3\cdot 6$ (open symbols). } \label{fig:BQC} \end{figure} The ratio of any two of these susceptibilities, i.e. $\chi_{mn}^{BC}/\chi_{kl}^{BC}$, thus will be unity in a hadron resonance gas irrespective of its composition and the details of the baryon resonance spectrum. In Fig.~\ref{fig:BQC}~(left) we show the ratio $\chi_{13}^{BC}/\chi_{22}^{BC}$. It clearly suggests that above the crossover region, an uncorrelated gas of charmed baryons no longer provides an appropriate description of the $BC$-correlations. Also shown in this figure is the ratio $\chi_{11}^{BC}/\chi_{13}^{BC}$. It is consistent with unity for all temperatures because the relation $\chi_{1n}^{BC} = \chi_{11}^{BC}$ not only holds in a non-interacting charmed hadron gas (Eq.~\ref{BC}), but also is valid in an uncorrelated charmed quark gas, as is easily seen from Eq.~\ref{fmc}. Higher order derivatives with respect to baryon chemical potentials, on the other hand, distinguish between the hadronic and partonic phases. E.g., one finds that for $n$ being odd, $\chi_{n1}^{BC} / \chi_{11}^{BC}=1$ in a hadron gas and $3^{1-n}$ in an uncorrelated charm quark gas. Subtracting any of the $BC$-correlations from the quadratic or quartic charm fluctuations provides an approximation for the open charm meson pressure in a gas of uncorrelated hadrons. We thus expect, for instance, the relation \begin{equation} M_C = \chi_4^C - \chi_{13}^{BC} = \chi_2^C - \chi_{22}^{BC} \label{BC-meson} \end{equation} to hold at low temperatures. Their ratio thus should be unity at low temperatures as long as the HRG description is valid. Fig.~\ref{fig:BQC}~(right) shows the ratio of the two observables introduced in Eq.~\ref{BC-meson}. It is obvious from the figure that, also in the meson sector, an HRG model description breaks down in the crossover region at or close to $T_c$. The behavior seen in Fig.~\ref{fig:BQC} for correlations between net charm fluctuations and net baryon number fluctuations, in fact, is quite similar to the behavior seen in the strangeness sector ($BS$-correlations) \cite{strange} as well as in the light quark sector, which dominates the correlations between net electric charge and net baryon number ($BQ$-correlations) \cite{hotQCDHRG}. In Fig.~\ref{fig:BC_MC} we show a comparison of ratios of cumulants of such correlations. For the $BS$ and $BQ$ correlations with the lighter quarks we have two additional data points below $156$~MeV. In the charm sector we choose a ratio of cumulants involving higher order derivatives, as correlations involving only first order derivatives have large statistical errors. These ratios all should be unity in a gas of uncorrelated hadrons. It is apparent from Fig.~\ref{fig:BC_MC} that such a description breaks down for charge correlations involving light, strange, or charm quarks in or just above the chiral crossover region.
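The algebra behind Eqs.~\ref{chin} and \ref{BC-meson} is easily verified with a toy set of partial pressures; the numbers below are invented for illustration and merely respect the strong hierarchy $M_C,\,B_{C,1} \gg B_{C,2} \gg B_{C,3}$:
\begin{verbatim}
# Toy partial pressures (illustrative only)
M_C, B_C1, B_C2, B_C3 = 1.0e-3, 4.0e-4, 1.0e-9, 1.0e-12

def chi_C(n):          # net-charm cumulant in an uncorrelated hadron gas
    return M_C + B_C1 + 2**n * B_C2 + 3**n * B_C3

def chi_BC(m, n):      # baryon-charm correlation, m-th B and n-th C derivative
    return B_C1 + 2**n * B_C2 + 3**n * B_C3

print(chi_C(4) / chi_C(2))               # ~1: the |C|=1 sector dominates
print((chi_C(4) - chi_C(2)) / 12.0)      # = B_C2 + 6*B_C3 >= B_C2 + B_C3
print(chi_C(2) - chi_BC(2, 2))           # ~ M_C, cf. Eq. (BC-meson)
print(chi_C(4) - chi_BC(1, 3))           # ~ M_C as well
\end{verbatim}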
\begin{figure}[!th] \begin{center} \includegraphics[scale=0.52]{baryon_reltn_C_S_Q} \end{center} \caption{Ratios of baryon-electric charge ($BQ$), baryon-strangeness ($BS$) and baryon-charm ($BC$) correlations calculated on lattices of size $32^3\cdot 8$. In the case of $BQ$ and $BS$ correlations we show results from the (2+1)-flavor calculations where $B$ and $Q$ do not contain any charm contribution. These data are taken from Refs.~\cite{strange,freeze}. The shaded region shows the chiral crossover region as in Fig.~\protect\ref{fig:BQC}. Horizontal lines on the right side show corresponding results for an uncorrelated quark gas. It should be noted that this limiting value is not defined for $\chi_{31}^{BQ}/\chi_{11}^{BQ}$ since the denominator as well as the numerator vanishes in perturbation theory up to ${\cal O}(g^4)$.} \label{fig:BC_MC} \end{figure} \section{Abundance of open charm hadrons} We now turn to the analysis of ratios of charge correlations and fluctuations that are, in contrast to the ratios shown in Fig.~\ref{fig:BQC}, sensitive to some details of the open charm hadron spectrum. We construct partial pressure components for the electrically charged charmed mesons and the strange-charm mesons, $M_{QC}\simeq\chi_{13}^{QC}-\chi_{112}^{BQC}$ and $M_{SC}\simeq\chi_{13}^{SC}-\chi_{112}^{BSC}$, respectively. We also consider the partial pressure of all open charm mesons $M_C = \chi_4^C - \chi_{13}^{BC}$ as motivated in Eq.~\ref{BC-meson}. Using these observables we construct ratios with cumulants, which in an HRG receive contributions only from different charmed baryon sectors in the numerator, \begin{equation} R_{13}^{BC} = \frac{\chi_{13}^{BC}}{M_C} \;\; ,\;\; R_{13}^{QC} = \frac{\chi_{112}^{BQC}}{M_{QC}} \;\; ,\;\; R_{13}^{SC} = - \frac{\chi_{112}^{BSC}}{M_{SC}} \;\; . \label{ratios} \end{equation} In an HRG, the first ratio just gives the ratio of charmed baryon and meson pressure, $\left(R_{13}^{BC}\right)_{HRG} = B_C/M_C$. In the two other cases, the numerator is a weighted sum of partial charmed baryon pressures in charge sectors $|X|=1$ and $|X|=2$ with $X=Q$ and $S$, respectively. These ratios are shown in Fig.~\ref{fig:SC_QC}. \begin{figure}[!th] \begin{center} \includegraphics[scale=0.6]{bar_mes_ratio} \end{center} \caption{Thermodynamic contributions of all charmed baryons, $R_{13}^{BC}$ (top), all charged charmed baryons, $R_{13}^{QC}$ (middle) and all strange charmed baryons, $R_{13}^{SC}$ (bottom) relative to that of the corresponding charmed mesons (see Eq.~\ref{ratios}). The dashed lines (PDG-HRG) are predictions for an uncorrelated hadron gas using only the PDG states. The solid lines (QM-HRG) are similar HRG predictions including also the states predicted by the quark model of Refs.~\cite{Ebertm,Ebert}. The dotted lines (QM-HRG-3) are the same QM predictions, but only including states having masses $<3$ GeV. The shaded region shows the QCD crossover region as in Fig.~\ref{fig:BQC}. The horizontal lines on the right hand side denote the infinite temperature non-interacting charm quark gas limits for the respective quantities. The lattice QCD data have been obtained on lattices of size $32^3\cdot8$ (filled symbols) and $24^3\cdot6$ (open symbols). } \label{fig:SC_QC} \end{figure} HRG model predictions for these ratios strongly depend on the abundance of charmed baryons relative to open charm mesons. Shown in Fig.~\ref{fig:SC_QC} are results obtained from the PDG-HRG calculation (dashed lines) and the QM-HRG (solid lines).
Clearly in the temperature range of the QCD crossover transition, the lattice QCD data for these ratios are much above the PDG-HRG model results. In all the cases, the deviation from the PDG-HRG at $T=160$~MeV is 40\% or larger. As discussed in Sec.~2, this may not be too surprising as only a few charmed baryons have so far been listed in the particle data tables. The lattice QCD results instead show good agreement with an HRG constructed from open charm meson and baryon spectra calculated in a relativistic quark model \cite{Ebertm,Ebert}. The difference in PDG-HRG and QM-HRG model calculations mainly arises from the baryon sector (see Fig.~\ref{fig:hadronsPDG}). The observables shown in Fig.~\ref{fig:SC_QC} thus provide first-principles evidence for a substantial contribution of experimentally so far unobserved charmed baryons to the pressure of a hadron resonance gas\footnote{It should be obvious that this contribution to the pressure nonetheless is strongly suppressed relative to the contribution of the non-charmed sector in HRG models.}. This is also consistent with a large set of additional charmed baryon resonances that are predicted in lattice QCD calculations \cite{Edwards}. \section{Conclusions} We have calculated second and fourth order cumulants of net charm fluctuations and their correlations with fluctuations of other conserved charges, i.e. baryon number, electric charge and strangeness. Ratios of such cumulants indicate that a description of the thermodynamics of open charm degrees of freedom in terms of an uncorrelated charmed hadron gas is valid only up to temperatures close to the chiral crossover transition temperature. This suggests that open charm hadrons start to dissolve already close to the chiral crossover. Moreover, observables that are sensitive to the ratio of the partial open charm meson and baryon pressures as well as their counterparts in the electrically charged charm sector and the strange-charm sector suggest that a large number of so far experimentally not measured open charm hadrons will contribute to bulk thermodynamics close to the melting temperature. This should be taken into account when analyzing the hadronization of charmed hadrons in heavy ion collision experiments. So far our analysis has been performed by treating the charm quark sector in quenched approximation using fully dynamical (2+1)-flavor gauge field configurations as thermal heat bath. This, in fact, seems to be appropriate for the situation met in heavy ion collisions, where charm quarks are not generated thermally but are embedded into the thermal heat bath of light and strange quarks through hard collisions at early stages of the collision. We also do not expect that the cumulant ratios analyzed here will change significantly by treating also the charm sector dynamically. This, however, should be verified in future calculations. \section*{Acknowledgments} \noindent This work has been supported in part through contract DE-AC02-98CH10886 with the U.S. Department of Energy, through Scientific Discovery through Advanced Computing (SciDAC) program funded by U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Nuclear Physics, the BMBF under grant 05P12PBCTA, the DFG under grant GRK 881, EU under grant 283286 and the GSI BILAER grant. Numerical calculations have been performed using GPU-clusters at JLab, Bielefeld University, Paderborn University, and Indiana University. 
We acknowledge the support of Nvidia through the CUDA research center at Bielefeld University.
\section{Introduction} \label{sec:intro} The Standard Model with minimal Higgs content is not expected to be the ultimate theoretical structure responsible for electroweak symmetry breaking \cite{hhg,habertasi}. If the Standard Model is embedded in a more fundamental structure characterized by a much larger energy scale ({\it e.g.}, the Planck scale, which must appear in any theory of fundamental particles and interactions that includes gravity), the Higgs boson would tend to acquire mass of order the largest energy scale due to radiative corrections. Only by adjusting (\ie, ``fine-tuning'') the parameters of the Higgs potential ``unnaturally'' can one arrange a large hierarchy between the Planck scale and the scale of electroweak symmetry breaking \cite{thooft,suss}. The Standard Model provides no mechanism for this, but supersymmetric theories have the potential to address these issues. In a supersymmetric theory, the size of radiative corrections to scalar squared-masses is limited by the exact cancelation of quadratically divergent contributions from loops of particles and their supersymmetric partners. Since supersymmetry is not an exact symmetry at low energies, this cancelation must be incomplete, and the size of the radiative corrections to the Higgs mass is controlled by the extent of the supersymmetry breaking. The resolution of the naturalness and hierarchy problems requires that the scale of supersymmetry breaking should not exceed ${\cal O}$(1~TeV) \cite{susysol}. Such ``low-energy'' supersymmetric theories are especially interesting in that, to date, they provide the only theoretical framework in which the problems of naturalness and hierarchy can be resolved while retaining the Higgs bosons as truly elementary weakly coupled spin-0 particles. The Minimal Supersymmetric extension of the Standard Model (MSSM) contains the Standard Model particle spectrum and the corresponding supersymmetric partners \cite{susyrev,hehtasi}. In addition, the MSSM must possess two Higgs doublets in order to give masses to up and down type fermions in a manner consistent with supersymmetry (and to avoid gauge anomalies introduced by the fermionic superpartners of the Higgs bosons). In particular, the MSSM Higgs sector is a CP-conserving two-Higgs-doublet model, which can be parametrized at tree-level in terms of two Higgs sector parameters. This structure arises due to constraints imposed by supersymmetry that determine the Higgs quartic couplings in terms of electroweak gauge coupling constants. In section 2, I review the general structure of the (nonsupersymmetric) two-Higgs-doublet extension of the Standard Model. By imposing the constraints of supersymmetry on the quartic terms of the Higgs potential (and the Higgs-fermion interaction) one obtains the Higgs sector of the MSSM. The tree-level predictions of this model are briefly summarized in section 3. The inclusion of radiative corrections in the analysis of the MSSM Higgs sector can have profound implications. The most dramatic effect of the radiative corrections on the MSSM Higgs sector is the modification of the tree-level mass relations of the model. The leading one-loop radiative corrections to MSSM Higgs masses are described in section 4. These include the full set of one-loop leading logarithmic terms, and the leading third generation squark-mixing corrections. In section 5, the leading logarithms are resummed to all orders via the renormalization group technique. 
A simple analytic formula is exhibited which serves as an excellent approximation to the numerically integrated renormalization group equations. Numerical examples demonstrate that the Higgs masses computed in this approximation lie within 2 GeV of their actual values over a very large fraction of the supersymmetric parameter space. Finally, some implications of the radiatively-corrected Higgs sector are briefly explored in section 6. Certain technical details are relegated to the appendices. \section{The Two-Higgs Doublet Model} \label{sec:two} I begin with a brief review of the general (non-supersymmetric) two-Higgs doublet extension of the Standard Model \cite{hhgref}. Let $\Phi_1$ and $\Phi_2$ denote two complex $Y=1$, SU(2)$\ls{L}$ doublet scalar fields. The most general gauge invariant scalar potential is given by \vbox{% \begin{eqalignno} {\cal V}&=m_{11}^2\Phi_1^\dagger\Phi_1+m_{22}^2\Phi_2^\dagger\Phi_2 -[m_{12}^2\Phi_1^\dagger\Phi_2+{\rm h.c.}]\nonumber\\[6pt] &\quad +\ifmath{{\textstyle{1 \over 2}}}\lambda_1(\Phi_1^\dagger\Phi_1)^2 +\ifmath{{\textstyle{1 \over 2}}}\lambda_2(\Phi_2^\dagger\Phi_2)^2 +\lambda_3(\Phi_1^\dagger\Phi_1)(\Phi_2^\dagger\Phi_2) +\lambda_4(\Phi_1^\dagger\Phi_2)(\Phi_2^\dagger\Phi_1) \nonumber\\[6pt] &\quad +\left\{\ifmath{{\textstyle{1 \over 2}}}\lambda_5(\Phi_1^\dagger\Phi_2)^2 +\big[\lambda_6(\Phi_1^\dagger\Phi_1) +\lambda_7(\Phi_2^\dagger\Phi_2)\big] \Phi_1^\dagger\Phi_2+{\rm h.c.}\right\}\,. \label{pot} \end{eqalignno} } \noindent In most discussions of two-Higgs-doublet models, the terms proportional to $\lambda_6$ and $\lambda_7$ are absent. This can be achieved by imposing a discrete symmetry $\Phi_1\rightarrow -\Phi_1$ on the model. Such a symmetry would also require $m_{12}=0$ unless we allow a soft violation of this discrete symmetry by dimension-two terms.\footnote{% This latter requirement is sufficient to guarantee the absence of Higgs-mediated tree-level flavor changing neutral currents.} For the moment, I will refrain from setting any of the coefficients in eq.~(\ref{pot}) to zero. In principle, $m_{12}^2$, $\lambda_5$, $\lambda_6$ and $\lambda_7$ can be complex. However, for simplicity, I shall ignore the possibility of CP-violating effects in the Higgs sector by choosing all coefficients in eq.~(\ref{pot}) to be real. The scalar fields will develop non-zero vacuum expectation values if the mass matrix $m_{ij}^2$ has at least one negative eigenvalue. Imposing CP invariance and U(1)$\ls{\rm EM}$ gauge symmetry, the minimum of the potential is \begin{equation} \langle \Phi_1 \rangle={1\over\sqrt{2}} \left( \begin{array}{c} 0\\ v_1\end{array}\right), \qquad \langle \Phi_2\rangle= {1\over\sqrt{2}}\left(\begin{array}{c}0\\ v_2 \end{array}\right)\,,\label{potmin} \end{equation} where the $v_i$ are assumed to be real. It is convenient to introduce the following notation: \begin{equation} v^2\equiv v_1^2+v_2^2={4M_{\ss W}^2\over g^2}=(246~{\rm GeV})^2\,, \qquad\qquad\bar t\equiv\tan\beta\equiv{v_2\over v_1}\,.\label{tanbdef} \end{equation} Of the original eight scalar degrees of freedom, three Goldstone bosons ($G^\pm$ and $G^0$) are absorbed (``eaten'') by the $W^\pm$ and $Z$. The remaining five physical Higgs particles are: two CP-even scalars ($h^0$ and $H^0$, with $m_{\hl}\leq m_{\hh}$), one CP-odd scalar ($A^0$) and a charged Higgs pair ($H^{\pm}$). The mass parameters $m_{11}$ and $m_{22}$ can be eliminated by minimizing the scalar potential. 
The resulting squared masses for the CP-odd and charged Higgs states are \begin{eqalignno} m_{\ha}^2 &={m_{12}^2\over s_{\beta}c_{\beta}}-\ifmath{{\textstyle{1 \over 2}}} v^2\big(2\lambda_5+\lambda_6\bar t^{-1}+\lambda_7\bar t\big)\,,\nonumber\\[6pt] m_{H^{\pm}}^2 &=m_{A^0}^2+\ifmath{{\textstyle{1 \over 2}}} v^2(\lambda_5-\lambda_4)\,. \label{mamthree} \end{eqalignno} \vskip1pc \noindent The two CP-even Higgs states mix according to the following squared mass matrix: \vbox{% \begin{eqalignno} {\cal M}^2 &=m_{A^0}^2 \left( \begin{array}{cc} s_{\beta}^2& -s_{\beta}c_{\beta}\\ -s_{\beta}c_{\beta}& c_{\beta}^2 \end{array}\right) \nonumber\\[3pt] &+v^2 \left( \begin{array}{cc} \lambda_1c_{\beta}^2+2\lambda_6s_{\beta}c_{\beta}+\lambda_5s_{\beta}^2 &(\lambda_3+\lambda_4)s_{\beta}c_{\beta}+\lambda_6 c_{\beta}^2+\lambda_7s_{\beta}^2 \\[3pt] (\lambda_3+\lambda_4)s_{\beta}c_{\beta}+\lambda_6 c_{\beta}^2+\lambda_7s_{\beta}^2 &\lambda_2s_{\beta}^2+2\lambda_7s_{\beta}c_{\beta}+\lambda_5c_{\beta}^2 \end{array}\right) \,, \label{massmhh} \end{eqalignno} } \noindent where $s_\beta\equiv\sin\beta$ and $c_\beta\equiv\cos\beta$. The physical mass eigenstates are \begin{eqalignno} H^0 &=(\sqrt{2}{\rm Re\,}\Phi_1^0-v_1)\cos\alpha+ (\sqrt{2}{\rm Re\,}\Phi_2^0-v_2)\sin\alpha\,,\nonumber\\ h^0 &=-(\sqrt{2}{\rm Re\,}\Phi_1^0-v_1)\sin\alpha+ (\sqrt{2}{\rm Re\,}\Phi_2^0-v_2)\cos\alpha\,. \label{scalareigenstates} \end{eqalignno} The corresponding masses are \begin{equation} m^2_{H^0,h^0}=\ifmath{{\textstyle{1 \over 2}}}\left[{\cal M}_{11}^2+{\cal M}_{22}^2 \pm \sqrt{({\cal M}_{11}^2-{\cal M}_{22}^2)^2 +4({\cal M}_{12}^2)^2} \ \right]\,, \label{higgsmasses} \end{equation} and the mixing angle $\alpha$ is obtained from \begin{eqalignno} \sin 2\alpha &={2{\cal M}_{12}^2\over \sqrt{({\cal M}_{11}^2-{\cal M}_{22}^2)^2 +4({\cal M}_{12}^2)^2}}\ ,\nonumber\\ \cos 2\alpha &={{\cal M}_{11}^2-{\cal M}_{22}^2\over \sqrt{({\cal M}_{11}^2-{\cal M}_{22}^2)^2 +4({\cal M}_{12}^2)^2}}\ . \label{alphadef} \end{eqalignno} The phenomenology of the two-Higgs doublet model depends in detail on the various couplings of the Higgs bosons to gauge bosons, Higgs bosons and fermions. The Higgs couplings to gauge bosons follow from gauge invariance and are thus model independent. For example, the couplings of the two CP-even Higgs bosons to $W$ and $Z$ pairs are given in terms of the angles $\alpha$ and $\beta$ by \begin{eqalignno} g\ls{h^0 VV}&=g\ls{V} m\ls{V}\sin(\beta-\alpha) \nonumber \\[3pt] g\ls{H^0 VV}&=g\ls{V} m\ls{V}\cos(\beta-\alpha)\,,\label{vvcoup} \end{eqalignno} where \begin{equation} g\ls V\equiv\begin{cases} g,& $V=W\,$,\\ g/\cos\theta_W,& $V=Z\,$. \end{cases} \label{hix} \end{equation} There are no tree-level couplings of $A^0$ or $H^{\pm}$ to $VV$. Gauge invariance also determines the strength of the trilinear couplings of one gauge boson to two Higgs bosons. For example, \begin{eqalignno} g\ls{h^0A^0 Z}&={g\cos(\beta-\alpha)\over 2\cos\theta_W}\,,\nonumber \\[3pt] g\ls{H^0A^0 Z}&={-g\sin(\beta-\alpha)\over 2\cos\theta_W}\,. \label{hvcoup} \end{eqalignno} In the examples shown above, some of the couplings can be suppressed if either $\sin(\beta-\alpha)$ or $\cos(\beta-\alpha)$ is very small. Note that all the vector boson--Higgs boson couplings cannot vanish simultaneously. From the expressions above, we see that the following sum rules must hold separately for $V=W$ and $Z$: \begin{eqalignno} g_{H^0 V V}^2 + g_{h^0 V V}^2 &= g\ls{V}^2m\ls{V}^2\,,\nonumber \\[3pt] g_{h^0A^0 Z}^2+g_{H^0A^0 Z}^2&= {g^2\over 4\cos^2\theta_W}\,. 
\label{sumruletwo} \end{eqalignno} These results are a consequence of the tree-unitarity of the electroweak theory~\cite{ghw}. Moreover, if we focus on a given CP-even Higgs state, we note that its couplings to $VV$ and $A^0 V$ cannot be simultaneously suppressed, since eqs.~(\ref{vvcoup})--(\ref{hvcoup}) imply that \begin{equation} g^2_{h ZZ} + 4m^2_Z g^2_{hA^0Z} = {g^2m^2_Z\over \cos^2\theta_W}\,,\label{hxi} \end{equation} for $h=h^0$ or $H^0$. Similar considerations also hold for the coupling of $h^0$ and $H^0$ to $W^\pm H^\mp$. We can summarize the above results by noting that the coupling of $h^0$ and $H^0$ to vector boson pairs or vector--scalar boson final states is proportional to either $\sin(\beta-\alpha)$ or $\cos(\beta-\alpha)$ as indicated below \cite{hhg,hhgsusy}. \begin{equation} \renewcommand{\arraycolsep}{2cm} \let\us=\underline \begin{array}{ll} \us{\cos(\beta-\alpha)}& \us{\sin(\beta-\alpha)}\\[3pt] H^0 W^+W^-& h^0 W^+W^- \\ H^0 ZZ& h^0 ZZ \\ Z\ha h^0& Z\ha H^0 \\ W^\pm H^\mp h^0& W^\pm H^\mp H^0 \\ ZW^\pm H^\mp h^0& ZW^\pm H^\mp H^0 \\ \gamma W^\pm H^\mp h^0& \gamma W^\pm H^\mp H^0 \end{array} \label{littletable} \end{equation} Note in particular that {\it all} vertices in the theory that contain at least one vector boson and {\it exactly one} non-minimal Higgs boson state ($H^0$, $A^0$ or $H^{\pm}$) are proportional to $\cos(\beta-\alpha)$. The 3-point and 4-point Higgs self-couplings depend on the parameters of the two-Higgs-doublet potential [eq.~(\ref{pot})]. The Feynman rules for the trilinear Higgs vertices are listed in Appendix A. The Feynman rules for the 4-point Higgs vertices are rather tedious in the general two-Higgs-doublet model and will not be given here. The Higgs couplings to fermions are model dependent, although their form is often constrained by discrete symmetries that are imposed in order to avoid tree-level flavor changing neutral currents mediated by Higgs exchange \cite{gw}. An example of a model that respects this constraint is one in which one Higgs doublet (before symmetry breaking) couples exclusively to down-type fermions and the other Higgs doublet couples exclusively to up-type fermions. This is the pattern of couplings found in the MSSM. The results in this case are as follows. The couplings of the neutral Higgs bosons to $f\bar f$ relative to the Standard Model value, $gm_f/2M_{\ss W}$, are given by (using 3rd family notation) \begin{eqaligntwo} \label{qqcouplings} h^0 b\bar b:&~~~ -{\sin\alpha\over\cos\beta}=\sin(\beta-\alpha) -\tan\beta\cos(\beta-\alpha)\,,\nonumber\\[3pt] h^0 t\bar t:&~~~ \phantom{-}{\cos\alpha\over\sin\beta}=\sin(\beta-\alpha) +\cot\beta\cos(\beta-\alpha)\,,\nonumber\\[3pt] H^0 b\bar b:&~~~ \phantom{-}{\cos\alpha\over\cos\beta}=\cos(\beta-\alpha) +\tan\beta\sin(\beta-\alpha)\,,\nonumber\\[3pt] H^0 t\bar t:&~~~ \phantom{-}{\sin\alpha\over\sin\beta}=\cos(\beta-\alpha) -\cot\beta\sin(\beta-\alpha)\,,\nonumber\\[3pt] A^0 b \bar b:&~~~\phantom{-}\gamma_5\,{\tan\beta}\,,\nonumber\\[3pt] A^0 t \bar t:&~~~\phantom{-}\gamma_5\,{\cot\beta}\,, \end{eqaligntwo} (the $\gamma_5$ indicates a pseudoscalar coupling), and the charged Higgs boson coupling to fermion pairs (with all particles pointing into the vertex) is given by \begin{equation} g_{H^- t\bar b}={g\over{2\sqrt{2}M_{\ss W}}}\ [m_t\cot\beta\,(1+\gamma_5)+m_b\tan\beta\,(1-\gamma_5)]. \label{hpmqq} \end{equation} The pattern of couplings displayed above can be understood in the context of the {\it decoupling limit} of the two-Higgs-doublet model \cite{habernir,DECP}.
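Before turning to that limit, the sum rules and coupling identities above can be spot-checked numerically; the sketch below uses an arbitrary test point $(\beta,\alpha)$ and is purely illustrative:
\begin{verbatim}
import numpy as np

def factors(beta, alpha):
    """Tree-level coupling factors relative to the Standard Model Higgs."""
    return {
        "h0VV": np.sin(beta - alpha),
        "H0VV": np.cos(beta - alpha),
        "h0bb": -np.sin(alpha) / np.cos(beta),
    }

beta, alpha = np.arctan(10.0), -0.08      # arbitrary test point, tan(beta) = 10
f = factors(beta, alpha)

# Sum rule: the squared h0VV and H0VV factors add up to 1 exactly
print(f["h0VV"]**2 + f["H0VV"]**2)

# Identity from the fermion couplings: -sin(a)/cos(b) = sin(b-a) - tan(b)cos(b-a)
lhs = f["h0bb"]
rhs = np.sin(beta - alpha) - np.tan(beta) * np.cos(beta - alpha)
print(np.isclose(lhs, rhs))               # True
\end{verbatim}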
First, consider the Standard Model Higgs boson ($\phi^0$). At tree-level, the Higgs self-coupling is related to its mass. If $\lambda$ is the quartic Higgs self-interaction strength [see eq.~(\ref{potsm})], then $\lambda= m_{\phi^0}^2/v^2$. This means that one cannot take $m_{\phi^0}$ arbitrarily large without the attendant growth in $\lambda$. That is, the heavy Higgs limit in the Standard Model exhibits non-decoupling. In models of a non-minimal Higgs sector, the situation is more complex. In some models (with the Standard Model as one example), it is not possible to take any Higgs mass much larger than ${\cal O}(v)$ without finding at least one strong Higgs self-coupling. In other models, one finds that the non-minimal Higgs boson masses can be taken large at fixed Higgs self-couplings. Such behavior can arise in models that possess one (or more) off-diagonal squared-mass parameters in addition to the diagonal scalar squared-masses. In the limit where the off-diagonal squared-mass parameters are taken large [keeping the dimensionless Higgs self-couplings fixed and $\lsim{\cal O}(1)$], the heavy Higgs states decouple, while both light and heavy Higgs bosons remain weakly-coupled. In this decoupling limit, exactly one neutral CP-even Higgs scalar remains light, and its properties are precisely those of the (weakly-coupled) Standard Model Higgs boson. That is, $h^0\simeq\phi^0$, with $m_{\hl}\sim{\cal O}(M_{\ss Z})$, and all other non-minimal Higgs states are significantly heavier than $m_{\hl}$. Squared-mass splittings of the heavy Higgs states are of ${\cal O}(M_{\ss Z}^2)$, which means that all heavy Higgs states are approximately degenerate, with mass differences of order $M_{\ss Z}^2/m_{\ha}$ (here $m_{\ha}$ is approximately equal to the common heavy Higgs mass scale). In contrast, if the non-minimal Higgs sector is weakly coupled but far from the decoupling limit, then $h^0$ is not separated in mass from the other Higgs states. In this case, the properties\footnote{The basic property of the Higgs coupling strength proportional to mass is maintained. But, the precise coupling strength patterns of $h^0$ will differ from those of $\phi^0$ in the non-decoupling limit.} of $h^0$ differ significantly from those of $\phi^0$. Below, I exhibit the decoupling limit of the most general CP-even two-Higgs-doublet model \cite{DECP}. It is convenient to define four squared mass combinations: \begin{eqalignno} m\ls{L}^2\equiv&\ {\cal M}^2_{11}\cos^2\beta+{\cal M}^2_{22}\sin^2\beta +{\cal M}^2_{12}\sin2\beta\,,\nonumber \\[3pt] m\ls{D}^2\equiv&\ \left({\cal M}^2_{11}{\cal M}^2_{22}-{\cal M}^4_{12}\right)^{1/2}\,, \nonumber \\[3pt] m\ls{T}^2\equiv&\ {\cal M}^2_{11}+{\cal M}^2_{22}\,,\nonumber \\[3pt] m\ls{S}^2\equiv&\ m_{\ha}^2+m\ls{T}^2\,, \label{massdefs} \end{eqalignno} in terms of the elements of the neutral CP-even Higgs squared-mass matrix [eq.~(\ref{massmhh})]. In terms of the above quantities, \begin{equation} m^2_{H^0,h^0}=\ifmath{{\textstyle{1 \over 2}}}\left[m\ls{S}^2\pm\sqrt{m\ls{S}^4-4m_{\ha}^2m\ls{L}^2 -4m\ls{D}^4}\,\right]\,, \label{cpevenhiggsmasses} \end{equation} and \begin{equation} \cos^2(\beta-\alpha)= {m\ls{L}^2-m_{\hl}^2\over m_{\hh}^2-m_{\hl}^2}\,. \label{cosbmasq} \end{equation} In the decoupling limit, all the Higgs self-coupling constants $\lambda_i$ are held fixed such that $\lambda_i\lsim1$, while taking $m_{\ha}^2\gg\lambda_iv^2$.
Then ${\cal M}^2_{ij}\sim{\cal O}(v^2)$, and it follows that: \begin{equation} m_{\hl}\simeq m\ls{L}\,,\qquad\qquad m_{\hh}\simeq m_{\ha}\simeq m_{\hpm}\,, \label{approxmasses} \end{equation} and \begin{equation} \cos^2(\beta-\alpha)\simeq\, {m\ls{L}^2(m\ls{T}^2-m\ls{L}^2)-m\ls{D}^4\over m_{\ha}^4}\,. \label{approxcosbmasq} \end{equation} \vskip9pt\noindent Note that eq.~(\ref{approxcosbmasq}) implies that $\cos(\beta-\alpha)= {\cal O}(M_{\ss Z}^2/m_{\ha}^2)$ in the decoupling limit, which means that the $h^0$ couplings to Standard Model particles match precisely those of the Standard Model Higgs boson. These results are easily confirmed by considering the $\cos(\beta-\alpha)\to 0$ limit of eqs.~(\ref{vvcoup})--(\ref{qqcouplings}). Although no experimental evidence for the Higgs boson yet exists, there are some experimental as well as theoretical constraints on the parameters of the two-Higgs doublet model. Experimental limits on the charged and neutral Higgs masses have been obtained at LEP. For the charged Higgs boson, $m_{\hpm}>44$~GeV \cite{LEPHIGGS}. This is the most model-independent bound and assumes only that the $H^{\pm}$ decays dominantly into $\tau^+\nu_\tau$, $c \bar s$ and $c\bar b$. The LEP limits on the masses of $h^0$ and $A^0$ are obtained by searching simultaneously for $e^+e^- \to h^0 f\bar f$ and $e^+e^- \to h^0 A^0$, which are mediated by $s$-channel $Z$-exchange \cite{janot}. The $ZZh^0$ and $Zh^0A^0$ couplings that govern these two production rates are proportional to $\sin(\beta-\alpha)$ and $\cos(\beta-\alpha)$, respectively. Thus, one can use the LEP data to deduce limits on $m_{\hl}$ and $m_{\ha}$ as a function of $\sin(\beta-\alpha)$. Stronger limits can be obtained in the MSSM where $\sin(\beta-\alpha)$ is determined by other model parameters. At present, taking into account data from LEP-1 and the most recent LEP-2 data (at $\sqrt{s}=161$ and 172~GeV), one can exclude the MSSM Higgs mass ranges: $m_{\hl}< 62.5$~GeV (independent of the value of $\tan\beta$) and $m_{\ha}< 62.5$~GeV (assuming $\tan\beta> 1$) \cite{ypan}. The experimental information on the parameter $\tan\beta$ is quite meager. For definiteness, let us assume that the Higgs-fermion couplings are specified as in eq.~(\ref{qqcouplings}). The Higgs coupling to top quarks is proportional to $gm_t/2M_{\ss W}$, and is therefore the strongest of all Higgs-fermion couplings. For $\tan\beta<1$, the Higgs couplings to top-quarks are further enhanced by a factor of $1/\tan\beta$. As a result, some experimental limits on $\tan\beta$ exist based on the non-observation of virtual effects involving the $H^-t\bar b$ coupling. Clearly, such limits depend both on $m_{\hpm}$ and $\tan\beta$. The most sensitive limits are obtained from the measurements of $B^0$-$\overline{B^0}$ mixing and the widths of $b\to s\gamma$ and $Z\to b\bar b$ \cite{grant}. For example, the process $b\to s\gamma$ can be significantly enhanced due to charged Higgs boson exchange. If there are no other competing non-Standard Model contributions (and this is a big {\it if}), then present data excludes charged Higgs masses less than about 250 GeV \cite{joanne} (independent of the value of $\tan\beta$). In some regions of $\tan\beta$, the limits on the charged Higgs mass can be even more severe. However, other virtual contributions may exist that can cancel the effects of the charged Higgs exchange. For example, in the MSSM, constraints on $\tan\beta$ and $m_{\hpm}$ are significantly weaker.
For $\tan\beta\gg 1$, the Higgs couplings to bottom-quarks are enhanced by a factor of $\tan\beta$. In this case, the measured rate for the inclusive decay of $B\to X+\tau\nu_\tau$ can be used to set an upper limit on $\tan\beta$ as a function of the charged Higgs mass. This is accomplished by setting a limit on the contribution of the {\it tree-level} charged Higgs exchange. Present data can be used to set a $2\sigma$ upper bound of $\tan\beta< 42(m_{\hpm}/M_{\ss W})$ \cite{ghn}. In the MSSM, this bound could be weakened due to one-loop QCD corrections mediated by the exchange of supersymmetric particles \cite{sola}. Theoretical considerations also lead to bounds on $\tan\beta$. The crudest bounds arise from unitarity constraints. If $\tan\beta$ becomes too small, then the Higgs coupling to top quarks becomes too strong. In this case, the tree-unitarity of processes involving the Higgs-top quark Yukawa coupling is violated. Perhaps this should not be regarded as a theoretical defect, although it does render any perturbative analysis unreliable. A rough lower bound advocated by Ref.~\cite{hewett}, $\tan\beta\gsim 0.3$, corresponds to a Higgs-top quark coupling in the perturbative region. A similar argument involving the Higgs-bottom quark coupling would yield $\tan\beta\lsim 120$. A more solid theoretical constraint is based on the requirement that Higgs--fermion Yukawa couplings remain finite when running from the electroweak scale to some large energy scale $\Lambda$. Above $\Lambda$, one assumes that new physics enters. The limits on $\tan\beta$ depend on $m_t$ and the choice of the high energy scale $\Lambda$. Using the renormalization group equations given in Appendix B, one integrates from the electroweak scale to $\Lambda$ (allowing for the possible existence of a supersymmetry-breaking scale, $M_{\ss Z}\leq M\ls{{\rm SUSY}}\leq \Lambda$), and determines the region of $\tan\beta$--$m_t$ parameter space in which the Higgs-fermion Yukawa couplings remain finite. This exercise has recently been carried out at two-loops in Ref.~\cite{schrempp}. Suppose that the low-energy theory at the electroweak scale is the MSSM, and that there is no additional new physics below the grand unification scale of $\Lambda=2\times 10^{16}$~GeV. Then, for $m_t=170$~GeV, the Higgs-fermion Yukawa couplings remain finite at all energy scales below $\Lambda$ if $1.5\lsim\tan\beta\lsim 65$. Note that this result is consistent with the scenario of radiative electroweak symmetry breaking in low-energy supersymmetry based on supergravity, which requires that $1\lsim\tan\beta\lsim m_t/m_b$. \section{The Higgs Sector of the MSSM at Tree Level} \label{sec:three} The Higgs sector of the MSSM is a CP-conserving two-Higgs-doublet model, with a Higgs potential whose dimension-four terms respect supersymmetry and with restricted Higgs-fermion couplings in which $\Phi_1$ couples exclusively to down-type fermions while $\Phi_2$ couples exclusively to up-type fermions \cite{hhgref}.
Using the notation of eq.~(\ref{pot}), the quartic couplings $\lambda_i$ are given by \begin{eqalignno}% \lambda_1 &=\lambda_2 = \ifmath{{\textstyle{1\over 4}}} (g^2+g'^2)\,, \nonumber\\ \lambda_3 &=\ifmath{{\textstyle{1\over 4}}} (g^2-g'^2)\,, \nonumber\\ \lambda_4 &=-\ifmath{{\textstyle{1 \over 2}}} g^2\,, \nonumber\\ \lambda_5 &=\lambda_6=\lambda_7=0\,.\label{bndfr} \end{eqalignno} Inserting these results into eqs.~(\ref{mamthree}) and (\ref{massmhh}), it follows that \vbox{% \begin{eqalignno}% m_{\ha}^2 &=m_{12}^2(\tan\beta+\cot\beta)\,,\nonumber\\ m_{\hpm}^2 &=m_{\ha}^2+M_{\ss W}^2\,, \label{susymhpm} \end{eqalignno} } \noindent and the tree-level neutral CP-even mass matrix is given by \begin{equation} {\cal M}_0^2 = \left( \begin{array}{ll} m_{\ha}^2 \sin^2\beta + m^2_Z \cos^2\beta& -(m_{\ha}^2+m^2_Z)\sin\beta\cos\beta\\ -(m_{\ha}^2+m^2_Z)\sin\beta\cos\beta& m_{\ha}^2\cos^2\beta+ m^2_Z \sin^2\beta\end{array}\right)\,.\label{kv} \end{equation} The eigenvalues of ${\cal M}_0^2$ are the squared masses of the two CP-even Higgs scalars \begin{equation} m^2_{H^0,h^0} = \ifmath{{\textstyle{1 \over 2}}} \left( m_{\ha}^2 + m^2_Z \pm \sqrt{(m_{\ha}^2+m^2_Z)^2 - 4m^2_Z m_{\ha}^2 \cos^2 2\beta} \; \right)\,,\label{kviii} \end{equation} and the diagonalizing angle is $\alpha$, with \begin{equation} \cos 2\alpha = -\cos 2\beta \left( {m_{\ha}^2-m^2_Z \over m^2_{H^0}-m^2_{h^0}}\right)\,,\qquad \sin 2\alpha = -\sin 2\beta \left( m^2_{H^0} + m^2_{h^0} \over m^2_{H^0}-m^2_{h^0} \right)\,.\label{kix} \end{equation} From these results, it is easy to obtain: \begin{equation} \cos^2(\beta-\alpha)={m_{\hl}^2(M_{\ss Z}^2-m_{\hl}^2)\over m_{\ha}^2(m_{\hh}^2-m_{\hl}^2)}\,. \label{cbmasq} \end{equation} Thus, in the MSSM, two parameters (conveniently chosen to be $m_{\ha}$ and $\tan\beta$) suffice to fix all other tree-level Higgs sector parameters. Consider the decoupling limit where $m_{\ha}\gg M_{\ss Z}$. Then, the above formulae yield \begin{eqalignno} m_{\hl}^2\simeq\ &M_{\ss Z}^2\cos^2 2\beta\,,\nonumber \\[3pt] m_{\hh}^2\simeq\ &m_{\ha}^2+M_{\ss Z}^2\sin^2 2\beta\,,\nonumber \\[3pt] m_{\hpm}^2=\ & m_{\ha}^2+M_{\ss W}^2\,,\nonumber \\[3pt] \cos^2(\beta-\alpha)\simeq\ &{M_{\ss Z}^4\sin^2 4\beta\over 4m_{\ha}^4}\,. \label{largema} \end{eqalignno} Two consequences are immediately apparent. First, $m_{\ha}\simeq m_{\hh} \simeq m_{\hpm}$, up to corrections of ${\cal O}(M_{\ss Z}^2/m_{\ha})$. Second, $\cos(\beta-\alpha)=0$ up to corrections of ${\cal O}(M_{\ss Z}^2/m_{\ha}^2)$. Of course, these results were expected based on the discussion of the decoupling limit in the general two-Higgs-doublet model given in section 2. Finally, a number of important mass inequalities can be derived from the expressions for the tree-level Higgs masses obtained above, \vbox{% \begin{eqalignno} m_{h^0} &\leq m_{\ha}\,, \nonumber \\ m_{h^0} &\leq m|\cos 2\beta | \leq m_Z \,, \qquad {\rm with}\ m \equiv {\rm min}(m_Z,m_{\ha})\,, \nonumber\\ m_{H^0} &\geq m_Z\,, \nonumber\\ m_{H^\pm} &\geq M_{\ss W}\,. \label{kx} \end{eqalignno} } \section{Radiative Corrections to the MSSM Higgs Masses} \subsection{Overview} The tree-level results of the previous section are modified when radiative corrections are incorporated. Naively, one might expect radiative corrections to have a minor effect on the phenomenological implications of the model. However, in the MSSM, some of the tree-level Higgs mass relations may be significantly changed at one-loop, with profound implications for the phenomenology.
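As a baseline for the radiative corrections discussed in this section, the tree-level relations of section 3 can be evaluated directly; the following sketch (masses in GeV, input point chosen purely for illustration) also exhibits the decoupling behavior of eq.~(\ref{largema}):
\begin{verbatim}
import numpy as np

MZ, MW = 91.19, 80.40   # GeV (illustrative inputs)

def tree_level_spectrum(mA, tan_beta):
    """Tree-level MSSM Higgs masses and CP-even mixing angle alpha."""
    b = np.arctan(tan_beta)
    s = mA**2 + MZ**2
    r = np.sqrt(s**2 - 4.0 * MZ**2 * mA**2 * np.cos(2*b)**2)
    mH, mh = np.sqrt(0.5 * (s + r)), np.sqrt(0.5 * (s - r))
    mHpm = np.sqrt(mA**2 + MW**2)
    cos2a = -np.cos(2*b) * (mA**2 - MZ**2) / (mH**2 - mh**2)
    sin2a = -np.sin(2*b) * (mA**2 + MZ**2) / (mH**2 - mh**2)
    alpha = 0.5 * np.arctan2(sin2a, cos2a)
    return mh, mH, mHpm, alpha

# Decoupling limit: mh -> MZ*|cos(2b)|, while mH ~ mHpm ~ mA
print(tree_level_spectrum(mA=300.0, tan_beta=10.0))
\end{verbatim}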
For example, consider the tree-level bound on the lightest CP-even Higgs boson of the MSSM: $m_{\hl}\leq M_{\ss Z}|\cos 2\beta|\leq M_{\ss Z}$. The LEP-2 collider (running at its projected maximum center-of-mass energy of 192~GeV, with an integrated luminosity of 150~${\rm pb}^{-1}$) will discover at least one Higgs boson of the MSSM if $m_{\hl}\leq M_{\ss Z}$ \cite{janot}. Thus, if the tree-level Higgs mass bound holds, then the absence of a Higgs discovery at LEP would rule out the MSSM. However, when radiative corrections are included, the light Higgs mass upper bound may be increased significantly. In the one-loop leading logarithmic approximation \cite{hhprl,early-veff} \begin{equation} \label{mhlapprox} m_{\hl}^2\lsim M_{\ss Z}^2\cos^2 2\beta+{3g^2 m_t^4\over 8\pi^2M_{\ss W}^2}\,\ln\left({M_{\tilde t_1}M_{\tilde t_2}\over m_t^2}\right)\,, \end{equation} where $M_{\tilde t_1}$, $M_{\tilde t_2}$ are the masses of the two top-squark mass eigenstates. Observe that the Higgs mass upper bound is very sensitive to the top mass and depends logarithmically on the top-squark masses. In addition, due to the increased upper bound for $m_{\hl}$, the non-observation of a Higgs boson at LEP-2 cannot rule out the MSSM. Although eq.~(\ref{mhlapprox}) provides a rough guide to the Higgs mass upper bound, it is not sufficiently precise for LEP-2 phenomenology, whose Higgs mass reach depends delicately on the MSSM parameters. In addition, in order to perform precision Higgs measurements and make comparisons with theory, more accurate results for the Higgs sector masses (and couplings) are required. The radiative corrections to the Higgs mass have been computed by a number of techniques, and using a variety of approximations such as effective potential \cite{early-veff,veff,berz,erz,carena} and diagrammatic methods \cite{hhprl,turski,brig,madiaz,1-loop,hempfhoang,completeoneloop}. Complete one-loop diagrammatic computations of the MSSM Higgs masses have been presented by a number of groups \cite{completeoneloop}; the resulting expressions are quite complex, and depend on all the parameters of the MSSM. (The dominant two-loop next-to-leading logarithmic results are also known~\cite{hempfhoang}.) Moreover, as noted above, the largest contribution to the one-loop radiative corrections is enhanced by a factor of $m_t^4$ and grows logarithmically with the top squark mass. Thus, higher order radiative corrections can be non-negligible for large top squark masses, in which case the large logarithms must be resummed. The renormalization group (RG) techniques for resumming the leading logarithms have been developed by a number of authors \cite{rge,2loopquiros,llog}. The computation of the RG-improved one-loop corrections requires numerical integration of a coupled set of RG equations~\cite{llog}. Although this program has been carried out in the literature, the procedure is unwieldy and not easily amenable to large-scale Monte-Carlo analyses. Recently, two groups have presented a simple analytic procedure for accurately approximating $m_{h^0}$. These methods can be easily implemented, and incorporate both the leading one-loop and two-loop effects and the RG-improvement. Also included are the leading effects at one loop of the supersymmetric thresholds (the most important effects of this type are squark mixing effects in the third generation). Details of the techniques can be found in Refs.~\cite{hhh} and \cite{carena}.
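For orientation, the unresummed bound of eq.~(\ref{mhlapprox}) is easy to evaluate; the sketch below uses illustrative inputs, and its output overshoots the RG-improved bounds quoted next, which is precisely why the resummation matters:
\begin{verbatim}
import numpy as np

MZ, v = 91.19, 246.0          # GeV; using g^2/M_W^2 = 4/v^2 at tree level

def mh_bound_1LL(tan_beta, mt=175.0, mstop1=1000.0, mstop2=1000.0):
    """One-loop leading-log upper bound on the light CP-even Higgs mass."""
    c2b = np.cos(2.0 * np.arctan(tan_beta))
    tree = MZ**2 * c2b**2
    loop = (3.0 * mt**4 / (2.0 * np.pi**2 * v**2)
            * np.log(mstop1 * mstop2 / mt**2))
    return np.sqrt(tree + loop)

print(mh_bound_1LL(tan_beta=30.0))   # ~128 GeV for 1 TeV stops, no mixing
\end{verbatim}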
Here, I simply quote two specific bounds, assuming $m_t=175$~GeV and $M_{\tilde t}\lsim 1$~TeV: $m_{\hl}\lsim 112$~GeV if top-squark mixing is negligible, while $m_{\hl}\lsim 125$~GeV if top-squark mixing is ``maximal''. Maximal mixing corresponds to an off-diagonal squark squared-mass that produces the largest value of $m_{\hl}$. This mixing leads to an extremely large splitting of top-squark mass eigenstates. The charged Higgs mass is also constrained in the MSSM. At tree level, $m_{\hpm}$ is given by eq.~(\ref{susymhpm}), which implies that charged Higgs bosons cannot be pair produced at LEP-2. Radiative corrections modify the tree-level prediction, but the corrections are typically smaller than the neutral Higgs mass corrections discussed above. Although $m_{\hpm}\geq M_{\ss W}$ is not a strict bound when one-loop corrections are included, the bound holds approximately over most of MSSM parameter space (and can be significantly violated only when $\tan\beta$ is well below 1, a region of parameter space that is theoretically disfavored). In the remainder of this section, I shall present formulae which exhibit the leading contributions to the one-loop corrected Higgs masses. Symbolically, \begin{eqnarray} m_{\hpm}^2 & = & \left(m_{\hpm}^2\right)_{\rm 1LL} +\left(\Delta m_{\hpm}^2\right)_{\rm mix}\,,\nonumber \\ {\cal M}^2 & = & {\cal M}^2_{\rm 1LL}+ \Delta{\cal M}^2_{\rm mix}\,, \label{oneloopmasses} \end{eqnarray} where the subscript {\sl 1LL} refers to the tree-level plus the one-loop leading logarithmic approximation to the full one-loop calculation, and the subscript {\sl mix} refers to the contributions arising from $\widetilde q_L$--$\widetilde q_R$ mixing effects of the third generation squarks. The CP-even Higgs mass-squared eigenvalues are then obtained by using eq.~(\ref{higgsmasses}) and the corresponding mixing angle, $\alpha$, is obtained from eq.~(\ref{alphadef}). In the simplest approximation, squark mixing effects are neglected and the supersymmetric spectrum is characterized by one scale, called $M\ls{{\rm SUSY}}$. We assume that $M\ls{{\rm SUSY}}$ is sufficiently large compared to $M_{\ss Z}$ such that logarithmically enhanced terms at one-loop dominate over the non-logarithmic terms.\footnote{If this condition does not hold, then the radiative corrections would constitute only a minor perturbation on the tree-level predictions.} In this case, the full one-loop corrections ({\it e.g.}, obtained by a diagrammatic computation) are well approximated by the one-loop leading logarithmic approximation. Next, we incorporate the effects of squark mixing, which constitute the largest potential source of non-logarithmic one-loop corrections. In particular, these contributions to the Higgs mass radiative corrections arise from the exchange of the third generation squarks. Now, the approximation is parameterized by four supersymmetric parameters: $M\ls{{\rm SUSY}}$ (a common supersymmetric particle mass) and the third generation squark mixing parameters $A_t$, $A_b$ and $\mu$. A more comprehensive set of formulae can be derived by treating the third generation squark sector more precisely, accounting for non-degenerate top and bottom squark masses.
This approximation is characterized by seven supersymmetric parameters---the three squark mixing parameters mentioned above, three soft-supersymmetry-breaking diagonal squark mass parameters, $M_Q$, $M_U$, and $M_D$, and a common supersymmetry mass parameter $M\ls{{\rm SUSY}}$ which characterizes the masses of the first two generations of squarks, the sleptons, the charginos, and the neutralinos. Given the one-loop Higgs mass at one of the levels of approximation described above, one must incorporate the RG-improvement if $M\ls{{\rm SUSY}}\gg M_{\ss Z}$. The simple analytic procedure of Ref.~\cite{hhh} is described in section 5, and some numerical results are presented there. Similar results have also been obtained by Carena and collaborators, where analytic approximations to the RG-improved radiatively corrected MSSM Higgs masses are also developed \cite{carena}. Although the approaches are somewhat different, the numerical results (in cases which have been compared) typically agree to within 1~GeV in the evaluation of Higgs masses. \subsection{One-Loop Leading Logarithmic Corrections to the MSSM Higgs Masses} The leading logarithmic expressions for Higgs masses can be computed from the one-loop renormalization group equations (RGEs) of the gauge and Higgs self-couplings, following Ref.~\cite{llog}. The method employs eqs.~(\ref{mamthree}) and (\ref{massmhh}), which are evaluated by treating the $\lambda_i$ as running parameters evaluated at the electroweak scale, $M\ls{{\rm weak}}$. In addition, we identify the $W$ and $Z$ masses by \vbox{% \begin{eqalignno}M_{\ss W}^2&=\ifmath{{\textstyle{1 \over 4}}} g^2(v_1^2+v_2^2)\,,\nonumber\\ M_{\ss Z}^2&=\ifmath{{\textstyle{1 \over 4}}} (g^2+g'^2)(v_1^2+v_2^2)\,,\label{vmasses} \end{eqalignno} } \noindent where the running gauge couplings are also evaluated at $M\ls{{\rm weak}}$. Of course, the gauge couplings $g$ and $g'$ are known from experimental measurements performed at the scale $M\ls{{\rm weak}}$. The $\lambda_i(M\ls{{\rm weak}}^2)$ are determined from supersymmetric boundary conditions at $M\ls{{\rm SUSY}}$ and RGE running down to $M\ls{{\rm weak}}$. That is, if supersymmetry were unbroken, then the $\lambda_i$ would be fixed according to eq.~(\ref{bndfr}). Since supersymmetry is broken, we regard eq.~(\ref{bndfr}) as boundary conditions for the running parameters, valid at (and above) the energy scale $M\ls{{\rm SUSY}}$. That is, we take \vbox{% \begin{eqalignno} \lambda_1(M\ls{{\rm SUSY}}^2)&=\lambda_2(M\ls{{\rm SUSY}}^2)=\ifmath{{\textstyle{1\over 4}}}[g^2(M\ls{{\rm SUSY}}^2) +g'^2(M\ls{{\rm SUSY}}^2)],\nonumber \\[6pt] \lambda_3(M\ls{{\rm SUSY}}^2)&=\ifmath{{\textstyle{1\over 4}}}\left[g^2(M\ls{{\rm SUSY}}^2)-g'^2(M\ls{{\rm SUSY}}^2)\right], \nonumber \\ [6pt] \lambda_4(M\ls{{\rm SUSY}}^2)&=-\ifmath{{\textstyle{1 \over 2}}} g^2(M\ls{{\rm SUSY}}^2),\nonumber \\[6pt] \lambda_5(M\ls{{\rm SUSY}}^2)&=\lambda_6(M\ls{{\rm SUSY}}^2)= \lambda_7(M\ls{{\rm SUSY}}^2)=0\,, \label{boundary} \end{eqalignno} } \noindent in accordance with the tree-level relations of the MSSM. At scales below $M\ls{{\rm SUSY}}$, the gauge and quartic couplings evolve according to the renormalization group equations (RGEs) of the non-supersymmetric two-Higgs-doublet model given in eqs.~(B.5)--(B.7).
These equations are of the form: \begin{equation} {dp_i\over dt} = \beta_i(p_1,p_2,\ldots)\qquad\mbox{with}~t\equiv\ln\,\mu^2 \,,\label{rgeqs} \end{equation} where $\mu$ is the energy scale, and the $p_i$ are the parameters of the theory ($p_i = g_j^2,\lambda_k,\ldots$). The relevant $\beta$-functions can be found in Appendix B. The boundary conditions together with the RGEs imply that, at the leading-log level, $\lambda_5$, $\lambda_6$ and $\lambda_7$ are zero at all energy scales. Solving the RGEs with the supersymmetric boundary conditions at $M\ls{{\rm SUSY}}$, one can determine the $\lambda_i$ at the weak scale. The resulting values for $\lambda_i(M\ls{{\rm weak}})$ are then inserted into eqs.~(\ref{mamthree}) and (\ref{massmhh}) to obtain the radiatively corrected Higgs masses. Having solved the one-loop RGEs, the Higgs masses thus obtained include the leading logarithmic radiative corrections summed to all orders in perturbation theory. The RGEs can be solved by numerical analysis on the computer. In order to derive the one-loop leading logarithmic corrections, it is sufficient to solve the RGEs iteratively. In first approximation, we can take the right hand side of eq.~(\ref{rgeqs}) to be independent of $\mu^2$. That is, we compute the $\beta_i$ by evaluating the parameters $p_i$ at the scale $\mu=M\ls{{\rm SUSY}}$. Then, integration of the RGEs is trivial, and we obtain \begin{equation} p_i(M\ls{{\rm weak}}^2)=p_i(M\ls{{\rm SUSY}}^2)-\beta_i\,\ln\left({M\ls{{\rm SUSY}}^2\overM\ls{{\rm weak}}^2} \right)\,.\label{oneloopllog} \end{equation} This result demonstrates that the first iteration corresponds to computing the one-loop radiative corrections in which only terms proportional to $\lnM\ls{{\rm SUSY}}^2$ are kept. It is straightforward to work out the one-loop leading logarithmic expressions for the $\lambda_i$ and the Higgs masses. First consider the charged Higgs mass. Since $\lambda_5(\mu^2)=0$ at all scales, we need only consider $\lambda_4$. Evaluating $\beta_{\lambda_4}$ at $\mu=M\ls{{\rm SUSY}}$, we compute \vbox{% \begin{eqalignno}% \lambda_4(M_{\ss W}^2)=-\ifmath{{\textstyle{1 \over 2}}} g^2 -&{1\over{32\pi^2}}\biggl[\bigl({\textstyle{ 4\over 3}}N_g+{\textstyle{1\over 6}}N_H-{\textstyle{10\over 3}} \bigr)g^4+5g^2g'^2 \nonumber\\[6pt] -&{{3g^4}\over{2m_W^2}}\left({{m_t^2}\over {s_{\beta}^2}}+{{m_b^2}\over{c_{\beta}^2}}\right) +{{3g^2m_t^2m_b^2}\over{s_{\beta}^2c_{\beta}^2m_W^4}}\Biggr] \ln\left({{M\ls{{\rm SUSY}}^2}\over{M_{\ss W}^2}}\right)\,. \label{lcuaunloop} \end{eqalignno} } \noindent The terms proportional to the number of generations $N_g=3$ and the number of Higgs doublets $N_H=2$ that remain in the low-energy effective theory at the scale $\mu=M_{\ss W}$ have their origin in the running of $g^2$ from $M\ls{{\rm SUSY}}$ down to $M_{\ss W}$. In deriving this expression, I have taken $M\ls{{\rm weak}}=M_{\ss W}$. This is a somewhat arbitrary decision, since another reasonable choice would yield a result that differs from eq.~(\ref{lcuaunloop}) by a non-leading logarithmic term. Comparisons with a more complete calculation show that one should choose $M\ls{{\rm weak}}=M_{\ss W}$ in computations involving the charged Higgs (and gauge) sector, and $M\ls{{\rm weak}}=M_{\ss Z}$ in computations involving the neutral sector. The above analysis also assumes that $m_t\sim {\cal O}(m_W)$. Although this is a good assumption, we can improve the above result somewhat by decoupling the $(t,b)$ weak doublet from the low-energy theory for scales below $m_t$. 
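Before turning to that refinement, the single-iteration solution of eq.~(\ref{rgeqs}) can be made concrete with a short numerical sketch. The following {\tt Python} fragment freezes the $\beta$-function at $M\ls{{\rm SUSY}}$ and applies eq.~(\ref{oneloopllog}); the input values for $g^2$ and $\beta_{\lambda_4}$ are merely assumed, representative numbers, so the sketch illustrates the bookkeeping of the method rather than a definitive evaluation.
\begin{verbatim}
import math

def leading_log_step(p_susy, beta, M_susy, M_weak):
    # One iteration of eq. (oneloopllog):
    #   p(M_weak^2) = p(M_susy^2) - beta * ln(M_susy^2/M_weak^2),
    # with beta evaluated (frozen) at mu = M_susy.
    return p_susy - beta * math.log(M_susy**2 / M_weak**2)

# Illustration with assumed inputs: run lambda_4 from its
# supersymmetric boundary value -g^2/2 [eq. (boundary)] down to M_W.
g2      = 0.43       # g^2 at M_susy (assumed value)
beta_l4 = -2.0e-3    # beta_{lambda_4} frozen at M_susy (assumed value)
print(leading_log_step(-0.5 * g2, beta_l4, M_susy=1000.0, M_weak=80.4))
\end{verbatim}
We now return to the treatment of the top-quark threshold.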
The terms in eq.~(\ref{lcuaunloop}) that are proportional to $m_t^2$ and/or $m_b^2$ arise from self-energy diagrams containing a $tb$ loop. Thus, such a term should not be present for $M_{\ss W}\leq \mu\leq m_t$. In addition, we recognize the term in eq.~(\ref{lcuaunloop}) proportional to the number of generations $N_g$ as arising from the contributions to the self-energy diagrams containing either quark or lepton loops (and their supersymmetric partners). To identify the contribution of the $tb$ loop to this term, simply write \vskip6pt \begin{equation} N_g=\ifmath{{\textstyle{1 \over 4}}} N_g(N_c+1)=\ifmath{{\textstyle{1 \over 4}}} N_c+\ifmath{{\textstyle{1 \over 4}}}[N_c(N_g-1)+N_g]\,, \label{ngee} \end{equation} \vskip6pt \noindent% where $N_c=3$ colors. Thus, we identify $\ifmath{{\textstyle{1 \over 4}}} N_c$ as the piece of the term proportional to $N_g$ that is due to the $tb$ loop. The rest of this term is then attributed to the lighter quarks and leptons. Finally, the remaining terms in eq.~(\ref{lcuaunloop}) are due to the contributions from the gauge and Higgs boson sector. The final result is \cite{madiaz} \begin{eqalignno} \lambda_4(M_{\ss W}^2) &=-\ifmath{{\textstyle{1 \over 2}}} g^2 -{N_c g^4\over{32\pi^2}}\left[{1\over 3} -{1\over{2m_W^2}}\biggl({{m_t^2}\over {s_{\beta}^2}}+{{m_b^2}\over{c_{\beta}^2}}\biggr) +{{m_t^2m_b^2}\over{s_{\beta}^2c_{\beta}^2m_W^4}}\right] \ln\left({{M\ls{{\rm SUSY}}^2}\over{m_t^2}}\right) \nonumber\\ [6pt] &\quad-{1\over 96\pi^2}\left\{\left[N_c(N_g-1)+N_g +\ifmath{{\textstyle{1 \over 2}}} N_H-10\right]g^4 +15g^2g'^2 \right\}\ln\left({M\ls{{\rm SUSY}}^2\overM_{\ss W}^2}\right)\, . \nonumber\\ \label{lambdaiv} \end{eqalignno} \pagebreak\noindent Inserting this result (and $\lambda_5=0$) into eq.~(\ref{mamthree}), we obtain the one-loop leading-logarithmic (1LL) formula for the charged Higgs mass \vspace*{6pt} \begin{eqalignno}% (m_{H^{\pm}}^2)_{\rm 1LL}&=m_A^2+m_W^2 +{{N_c g^2}\over{32\pi^2m_W^2}} \Bigg[{{2m_t^2m_b^2}\over{s_{\beta}^2c_{\beta}^2}}-m_W^2 \bigg({{m_t^2}\over{s_{\beta}^2}}+{{m_b^2}\over{c_{\beta}^2}}\bigg) +{\textstyle{2\over 3}}m_W^4\Bigg] \nonumber \\ [9pt] &\times\ln\left({{M\ls{{\rm SUSY}}^2}\over{m_t^2}}\right) +{{m_W^2}\over{48\pi^2}} \left\{\left[N_c(N_g-1)+N_g -9\right]g^2 +15g'^2\right\} \ln\left({{M\ls{{\rm SUSY}}^2}\over{m_W^2}}\right)\,. \nonumber\\ \label{llform} \end{eqalignno} \vskip6pt \noindent% Since this derivation makes use of the two-Higgs-doublet RGEs for the $\lambda_i$, there is an implicit assumption that the full two-doublet Higgs spectrum survives in the low-energy effective theory at $\mu=M_{\ss W}$. Thus, I have set $N_H=2$ in obtaining eq.~(\ref{llform}) above. It also means that $m_{\ha}$ cannot be much larger than $M_{\ss W}$.\footnote{If $m_{\ha}\sim {\cal O}(M\ls{{\rm SUSY}})$, then $H^\pm$, $H^0$ and $A^0$ would all have masses of order $M\ls{{\rm SUSY}}$, and the effective low-energy theory below $M\ls{{\rm SUSY}}$ would be that of the minimal Standard Model. For example, for $m_{\ha}=M\ls{{\rm SUSY}}$, the leading logarithmic corrections to the charged Higgs mass can be obtained from $m_{\hpm}^2=m_{\ha}^2+M_{\ss W}^2$ by treating $M_{\ss W}^2$ as a running parameter evaluated at $m_{\ha}$. Re-expressing $M_{\ss W}(m_{\ha})$ in terms of the physical $W$ mass yields the correct one-loop leading log correction to $m_{\hpm}^2$. 
For $M_{\ss Z}\leqm_{\ha}\leqM\ls{{\rm SUSY}}$, one can interpolate between the effective two-Higgs doublet model and the effective one-Higgs doublet model.} The leading logarithms of eq.~(\ref{llform}) can be resummed to all orders of perturbation theory by using the full RGE solution to $\lambda_4(M_{\ss W}^2)$ \vskip6pt \begin{equation} m_{\hpm}^2=m_{\ha}^2-\ifmath{{\textstyle{1 \over 2}}}\lambda_4(M_{\ss W}^2)(v_1^2+v_2^2)\,.\label{chiggsrge} \end{equation} \vspace*{1pc} Although the one-loop leading-log formula for $m_{\hpm}$ [eq.~(\ref{llform})] gives a useful indication as to the size of the radiative corrections, non-leading logarithmic contributions can also be important in certain regions of parameter space. A more complete set of radiative corrections can be found in the literature \cite{berz,turski,brig,madiaz,completeoneloop}. However, it should be emphasized that the radiative corrections to the charged Higgs mass are significant only for $\tan\beta<1$, a region of MSSM parameter space not favored in supersymmetric models. The computation of the neutral CP-even Higgs masses follows a similar procedure. The results of Ref.~\cite{llog} are summarized below. From eq.~(\ref{massmhh}), we see that we only need results for $\lambda_1$, $\lambda_2$ and $\widetilde\lambda_3\equiv\lambda_3+\lambda_4+\lambda_5$. (Recall that $\lambda_5=\lambda_6=\lambda_7=0$ at all energy scales.) By iterating the corresponding RGEs as before, we find \pagebreak \vbox{% \begin{eqalignno} \lambda_1(M_{\ss Z}^2)&=~~\ifmath{{\textstyle{1\over 4}}}[g^2+g'^2](M_{\ss Z}^2) +{g^4\over384\pi^2c_W^4}\Bigg[ P_t\ln\left({M\ls{{\rm SUSY}}^2\over m_t^2}\right) \nonumber\\ &\qquad+\bigg(12N_c{m_b^4\overM_{\ss Z}^4c_{\beta}^4}-6N_c{m_b^2\over\mzzc_{\beta}^2} +P_b+P_f+P_g+P_{2H} \bigg)\ln\left({M\ls{{\rm SUSY}}^2\over m_Z^2}\right)\Bigg]\,, \nonumber \\[3pt] \lambda_2(M_{\ss Z}^2)&=~~\ifmath{{\textstyle{1\over 4}}} [g^2+g'^2](M_{\ss Z}^2) +{g^4\over384\pi^2c_W^4}\Bigg[\bigg(P_b+P_f+P_g+P_{2H} \bigg)\ln\left({M\ls{{\rm SUSY}}^2\over m_Z^2}\right) \nonumber\\ &\qquad+\bigg(12N_c{m_t^4\overM_{\ss Z}^4s_{\beta}^4}-6N_c{m_t^2\over\mzzs_{\beta}^2} +P_t\bigg)\ln\left({M\ls{{\rm SUSY}}^2\over m_t^2}\right)\Bigg]\,, \nonumber \\[3pt] \widetilde\lambda_3(M_{\ss Z}^2)&=-\ifmath{{\textstyle{1\over 4}}}[g^2+g'^2](M_{\ss Z}^2) -{g^4\over384\pi^2c_W^4}\Bigg[\bigg(P_t-3N_c{m_t^2\over\mzzs_{\beta}^2} \bigg)\ln\left({M\ls{{\rm SUSY}}^2\over m_t^2}\right) \nonumber\\ &\qquad+\bigg(-3N_c{m_b^2\over\mzzc_{\beta}^2}+P_b+P_f+P_g'+P_{2H}' \bigg)\ln\left({M\ls{{\rm SUSY}}^2\over m_Z^2}\right)\Bigg]\,,\label{dlambda} \end{eqalignno} } \noindent where \begin{eqalignno} P_t~&\equiv~~N_c(1-4e_ts_W^2+8e_t^2s_W^4)\,, \nonumber\\[3pt] P_b~&\equiv~~N_c(1+4e_bs_W^2+8e_b^2s_W^4)\,, \nonumber\\[3pt] P_f~&\equiv~~ N_c(N_g-1)[2-4s_W^2+8(e_t^2+e_b^2)s_W^4] +N_g[2-4s_W^2+8s_W^4]\,, \nonumber\\[3pt] P_g~&\equiv-44+106s_W^2-62s_W^4\,, \nonumber \\[3pt] P_g'~&\equiv~~10+34s_W^2-26s_W^4\,, \nonumber\\[3pt] P_{2H}&\equiv -10+2s_W^2-2s_W^4\,, \nonumber\\ [3pt] P_{2H}'&\equiv~~8-22s_W^2+10s_W^4\,. \label{defpp} \end{eqalignno} \vskip6pt \noindent% In the above formulae, the electric charges of the quarks are $e_t = 2/3$, $e_b = -1/3$, and the subscripts $t, b, f, g$ and $2H$ indicate that these are the contributions from the top and bottom quarks, the other fermions (leptons and the first two generations of quarks), the gauge bosons and the Higgs doublets, and the corresponding supersymmetric partners, respectively. 
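Since the coefficients of eq.~(\ref{defpp}) enter repeatedly in what follows, it may be convenient to evaluate them by machine. The {\tt Python} routine below is a direct transcription of eq.~(\ref{defpp}); the input value of $\sin^2\theta_W$ in the example is merely representative.
\begin{verbatim}
def p_coefficients(sw2, Nc=3, Ng=3, et=2.0/3.0, eb=-1.0/3.0):
    # Evaluate the coefficients of eq. (defpp) for a given sin^2(theta_W).
    sw4 = sw2**2
    Pt   = Nc * (1 - 4*et*sw2 + 8*et**2*sw4)
    Pb   = Nc * (1 + 4*eb*sw2 + 8*eb**2*sw4)
    Pf   = (Nc*(Ng - 1)*(2 - 4*sw2 + 8*(et**2 + eb**2)*sw4)
            + Ng*(2 - 4*sw2 + 8*sw4))
    Pg   = -44 + 106*sw2 - 62*sw4
    Pgp  =  10 +  34*sw2 - 26*sw4
    P2H  = -10 +   2*sw2 -  2*sw4
    P2Hp =   8 -  22*sw2 + 10*sw4
    return dict(Pt=Pt, Pb=Pb, Pf=Pf, Pg=Pg,
                Pgp=Pgp, P2H=P2H, P2Hp=P2Hp)

print(p_coefficients(sw2=0.23))   # sin^2(theta_W) ~ 0.23 (assumed input)
\end{verbatim}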
As in the derivation of $\lambda_4(M_{\ss W}^2)$ above, we have improved our analysis by removing the effects of top-quark loops below $\mu=m_t$. This requires a careful treatment of the evolution of $g$ and $g'$ at scales below $\mu=m_t$. The correct procedure is somewhat subtle, since the full electroweak gauge symmetry is broken below top-quark threshold; for further details, see Ref.~\cite{llog}. However, the following pedestrian technique works: consider the RGE for $g^2+g'^2$ valid for $\mu<M\ls{{\rm SUSY}}$ \vskip6pt \begin{equation} {d\over dt}(g^2+g'^2)={1\over 96\pi^2}\left[\left(8g^4 +\ifmath{{\textstyle{40 \over 3}}} g'^4\right)N_g+(g^4+g'^4)N_H-44g^4\right]\,.\label{grge} \end{equation} \vskip6pt\noindent This equation is used to run $g^2+g'^2$, which appears in eq.~(\ref{boundary}), from $M\ls{{\rm SUSY}}$ down to $M_{\ss Z}$. As before, we identify the term proportional to $N_g$ as corresponding to the fermion loops. We can explicitly extract the $t$-quark contribution by noting that \vskip6pt \begin{eqalignno} N_g\left(8g^4+\ifmath{{\textstyle{40 \over 3}}} g'^4\right)&= {g^4N_g\overc_W^4} \left[\ifmath{{\textstyle{64 \over 3}}} s_W^4-16 s_W^2+8\right]\nonumber \\[3pt] &={g^4\overc_W^4}\biggl\{N_c[1+(N_g-1)](1-4e_t s_W^2+8e_t^2 s_W^4)\nonumber \\[3pt] &\qquad + N_c N_g(1+4e_b s_W^2+8e_b^2 s_W^4) +N_g(2-4 s_W^2+8s_W^4)\biggr\}\,, \nonumber\\ \label{pedest} \end{eqalignno} \vskip6pt\noindent where in the first line of the last expression, the term proportional to 1 corresponds to the $t$-quark contribution while the term proportional to $N_g-1$ accounts for the $u$ and $c$-quarks; the second line contains the contributions from the down-type quarks and leptons respectively. Thus, iterating to one-loop, \vskip6pt \begin{eqalignno}% [g^2+g'^2](M\ls{{\rm SUSY}}^2)&= [g^2+g'^2](M_{\ss Z}^2)+{g^4\over 96\pi^2 c_W^4} \Biggl[P_t\ln\left({M\ls{{\rm SUSY}}^2\overm_t^2}\right)\nonumber \\[3pt] &\quad+\left[P_b+P_f+(s_W^4+c_W^4)N_H-44c_W^4\right] \ln\left({M\ls{{\rm SUSY}}^2\overM_{\ss Z}^2} \right)\Biggr]\,.\label{gaugeiter} \end{eqalignno} \noindent\vskip6pt Again, we take $N_H=2$, since the low-energy effective theory between $M_{\ss Z}$ and $M\ls{{\rm SUSY}}$ consists of the full two-Higgs doublet model. Eq.~(\ref{gaugeiter}) was used in the derivation of eq.~(\ref{dlambda}). We now return to the computation of the one-loop leading log neutral CP-even Higgs squared-mass matrix. The final step is to insert the expressions obtained in eq.~(\ref{dlambda}) into eq.~(\ref{massmhh}). 
The resulting matrix elements for the mass-squared matrix to one-loop leading logarithmic accuracy are given by \vbox{% \begin{eqalignno} ({\cal M}_{11}^2)_{\rm 1LL}&=m_{\ha}^2s_{\beta}^2+m_Z^2c_{\beta}^2 +{g^2\mzzc_{\beta}^2\over96\pi^2c_W^2}\Bigg[ P_t~\ln\left({M\ls{{\rm SUSY}}^2\over m_t^2}\right) \nonumber\\ &\quad+\bigg(12N_c{m_b^4\overM_{\ss Z}^4c_{\beta}^4}-6N_c{m_b^2\over\mzzc_{\beta}^2} +P_b+P_f+P_g+P_{2H} \bigg)\ln\left({M\ls{{\rm SUSY}}^2\over m_Z^2}\right)\Bigg] \nonumber \\[3pt] ({\cal M}_{22}^2)_{\rm 1LL}&=m_{\ha}^2c_{\beta}^2+m_Z^2s_{\beta}^2 +{g^2\mzzs_{\beta}^2\over96\pi^2c_W^2}\Bigg[\bigg(P_b+P_f+P_g+P_{2H} \bigg)\ln\left({M\ls{{\rm SUSY}}^2\over m_Z^2}\right)\nonumber \\[3pt] &\quad+\bigg(12N_c{m_t^4\overM_{\ss Z}^4s_{\beta}^4}-6N_c{m_t^2\over\mzzs_{\beta}^2} +P_t\bigg)\ln\left({M\ls{{\rm SUSY}}^2\over m_t^2}\right)\Bigg] \nonumber\\[3pt] ({\cal M}_{12}^2)_{\rm 1LL}&=-s_{\beta}c_{\beta}\Biggl\{m_{\ha}^2+m_Z^2 +{g^2M_{ZZ}\over96\pi^2c_W^2}\Bigg[\bigg(P_t-3N_c{m_t^2\over\mzzs_{\beta}^2} \bigg)\ln\left({M\ls{{\rm SUSY}}^2\over m_t^2}\right)\nonumber \\[3pt] &\quad+\bigg(-3N_c{m_b^2\over\mzzc_{\beta}^2}+P_b+P_f+P_g'+P_{2H}' \bigg)\ln\left({M\ls{{\rm SUSY}}^2\over m_Z^2}\right)\Bigg]\Biggr\}\,, \label{mtophree} \end{eqalignno} } \noindent Diagonalizing this matrix [eq.~(\ref{mtophree})] yields the radiatively corrected CP-even Higgs masses and mixing angle $\alpha$. The analysis presented above assumes that $m_{\ha}$ is not much larger than ${\cal O}(M_{\ss Z})$ so that the Higgs sector of the low-energy effective theory contains the full two-Higgs-doublet spectrum. On the other hand, if $m_{\ha}\ggM_{\ss Z}$, then only $h^0$ remains in the low-energy theory. In this case, we must integrate out the heavy Higgs doublet, in which case one of the mass eigenvalues of ${\cal M}_0^2$ [eq.~(\ref{kv})] is much larger than the weak scale. In order to obtain the effective Lagrangian at $M\ls{{\rm weak}}$, we first have to run the various coupling constants to the threshold $m_{\ha}$. Then we diagonalize the Higgs mass matrix and express the Lagrangian in terms of the mass eigenstates. Notice that in this case the mass eigenstate $h^0$ is directly related to the field with the non-zero vacuum expectation value [\ie, $\beta(m_{\ha})=\alpha(m_{\ha})+\pi/2+{\cal O}(M_{\ss Z}^2/m_{\ha}^2)$]. Below $m_{\ha}$ only the Standard Model Higgs doublet $\phi\equivc_{\beta}\Phi_1+s_{\beta}\Phi_2$ remains. The scalar potential is \begin{equation} {\cal V}=m_{\phi}^2(\phi^{\dagger}\phi) +\ifmath{{\textstyle{1 \over 2}}}\lambda(\phi^{\dagger}\phi)^2 \,,\label{potsm} \end{equation} and the light CP-even Higgs mass is obtained using $m_{\hl}^2=\lambda v^2$. The RGE in the Standard Model for $\lambda$ is \cite{chengli,cmpp} \begin{equation} 16\pi^2\beta_{\lambda} = 6\lambda^2 +\ifmath{{\textstyle{3 \over 8}}} \left[2g^4+(g^2+g'^2)^2\right]-2\sum_i N_{c_i}h_{f_i}^4 -\lambda\biggl(\ifmath{{\textstyle{9 \over 2}}} g^2+\ifmath{{\textstyle{3 \over 2}}} g'^2-2\sum_i N_{c_i} h_{f_i}^2\biggr), \label{defbetl} \end{equation} \vskip12pt\noindent where the summation is over all fermions with $h_{f_i}=gm_{f_i}/(\sqrt{2}M_{\ss W})$. The RGEs for the gauge couplings are obtained from $\beta_{g^2}$ and $\beta_{g'^2}$ given in Appendix B by putting $N_H=1$. 
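As an aside, eq.~(\ref{defbetl}) is simple enough to transcribe directly. The following {\tt Python} sketch evaluates $\beta_\lambda$ with only the top-quark Yukawa coupling retained; all numerical inputs are assumed, representative weak-scale values, not results quoted in the text.
\begin{verbatim}
import math

def beta_lambda_sm(lam, g2, gp2, yukawas):
    # One-loop Standard Model beta function for lambda, eq. (defbetl).
    # `yukawas` is a list of (N_c, h_f) pairs, h_f = g*m_f/(sqrt(2)*M_W).
    s2 = sum(nc * h**2 for nc, h in yukawas)
    s4 = sum(nc * h**4 for nc, h in yukawas)
    rhs = (6.0*lam**2 + 0.375*(2.0*g2**2 + (g2 + gp2)**2) - 2.0*s4
           - lam*(4.5*g2 + 1.5*gp2 - 2.0*s2))
    return rhs / (16.0 * math.pi**2)

g2, gp2 = 0.43, 0.127                                 # assumed values
ht = math.sqrt(g2) * 175.0 / (math.sqrt(2.0) * 80.4)  # top Yukawa only
print(beta_lambda_sm(lam=0.5, g2=g2, gp2=gp2, yukawas=[(3, ht)]))
\end{verbatim}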
In addition, we require the boundary condition for $\lambda$ at $m_{\ha}$ \begin{eqalignno} \lambda(m_{\ha})&=\left[c_{\beta}^4\lambda_1+s_{\beta}^4\lambda_2+ 2s_{\beta}^2c_{\beta}^2(\lambda_3+\lambda_4+\lambda_5) +4c_{\beta}^3s_{\beta}\lambda_6+4\cbs_{\beta}^3\lambda_7\right](m_{\ha})\nonumber \\[3pt] &=\left[\ifmath{{\textstyle{1\over 4}}}(g^2+g'^2)c_{2\beta}^2\right](m_{\ha}) +{g^4\over384\pi^2c_W^4} \ln\left({M\ls{{\rm SUSY}}^2\overm_{\ha}^2}\right)\nonumber \\[3pt] &~\times\bigg[12N_c\bigg({m_t^4\overM_{\ss Z}^4}+{m_b^4\overM_{\ss Z}^4}\bigg) +6N_cc_{2\beta}\bigg({m_t^2\overM_{ZZ}}-{m_b^2\overM_{ZZ}}\bigg) \nonumber\\ &~+c_{2\beta}^2\big(P_t+P_b+P_f\big)+(s_{\beta}^4+c_{\beta}^4)(P_g+P_{2H}) -2s_{\beta}^2c_{\beta}^2(P_g'+P_{2H}')\bigg]\,, \label{lapprox} \end{eqalignno} \vskip12pt\noindent where $(g^2+g'^2)c_{2\beta}^2$ is to be evaluated at the scale $m_{\ha}$ as indicated. The RGE for $g^2+g'^2$ was given in eq.~(\ref{grge}); note that at scales below $m_A$ we must set $N_H=1$. Finally, we must deal with implicit scale dependence of $c_{2\beta}^2$. Since the fields $\Phi_i$ $(i=1,2)$ change with the scale, it follows that $\tan\beta$ scales like the ratio of the two Higgs doublet fields, \ie, \begin{equation} {1\over\tan^2\beta}{d\tan^2\beta\over dt}={\Phi_1^2\over\Phi_2^2}{d\over dt} \left({\Phi_2^2\over\Phi_1^2}\right) =\gamma_2-\gamma_1\,.\label{deftanb} \end{equation} Thus we arrive at the RGE for $\cos 2\beta$ in terms of the anomalous dimensions $\gamma_i$ given in eq.~(\ref{wavez}). Solving this equation iteratively to first order yields \begin{equation} c_{2\beta}^2(m_{\ha})=c_{2\beta}^2(M_{\ss Z}) +4\ctwobc_{\beta}^2s_{\beta}^2(\gamma_1-\gamma_2) \ln\left({m_{\ha}^2\overM_{ZZ}}\right)\,.\label{rgecb} \end{equation} The one loop leading log expression for $m_{\hl}^2 = \lambda(M_{\ss Z}) v^2$ can now be obtained by solving the RGEs above for $\lambda(M_{\ss Z})$ iteratively to first order using the boundary condition given in eq.~(\ref{lapprox}). The result is \begin{eqalignno} (m_{\hl}^2)_{\rm 1LL}&= \mzzc_{2\beta}^2(M_{\ss Z}) +{g^2m_Z^2\over96\pi^2c_W^2} \Bigg\{\bigg[12N_c{m_b^4\overM_{\ss Z}^4}-6N_cc_{2\beta}{m_b^2\overM_{ZZ}} +c_{2\beta}^2(P_b+P_f) \nonumber\\ &~~+\left(P_{g}+P_{2H})(s_{\beta}^4+c_{\beta}^4\right) -2s_{\beta}^2c_{\beta}^2\left(P_{g}'+P_{2H}'\right) \bigg]\ln\left({M\ls{{\rm SUSY}}^2\overM_{ZZ}} \right) \nonumber\\ &~~+\bigg[12N_c{m_t^4\overM_{\ss Z}^4}+ 6N_cc_{2\beta}{m_t^2\overM_{ZZ}}+c_{2\beta}^2P_t\bigg] \ln\left({M\ls{{\rm SUSY}}^2\over m_t ^2}\right) \nonumber\\ &~~-\bigg[\left(c_{\beta}^4+s_{\beta}^4\right)P_{2H}-2c_{\beta}^2s_{\beta}^2P_{2H}'-P_{1H}\bigg] \ln\left({m_{\ha}^2\overM_{ZZ}}\right)\Bigg\} \,, \label{mhltot} \end{eqalignno} where the term proportional to \begin{equation} P_{1H} \equiv -9c_{2\beta}^4+(1-2s_W^2+2s_W^4)c_{2\beta}^2\,, \label{defpps} \end{equation} corresponds to the Higgs boson contribution in the one-Higgs-doublet model. The term in eq.~(\ref{mhltot}) proportional to $\ln(m_{\ha}^2)$ accounts for the fact that there are two Higgs doublets present at a scale above $m_{\ha}$ but only one Higgs doublet below $m_{\ha}$. We can improve the above one-loop leading log formulae by reinterpreting the meaning of $M\ls{{\rm SUSY}}$. For example, all terms proportional to $\ln(M\ls{{\rm SUSY}}^2/m_t^2)$ arise from diagrams with loops involving the top quark and top-squarks. Explicit diagrammatic computations then show that we can reinterpret $M\ls{{\rm SUSY}}^2=M_{\tilde t_1}M_{\tilde t_2}$. 
Note that with this reinterpretation of $M\ls{{\rm SUSY}}^2$, the top quark and top squark loop contributions to the Higgs masses cancel exactly when $M_{\tilde t_1}=M_{\tilde t_2}=m_t$, as required in the supersymmetric limit. Likewise, in terms proportional to $P_b$ or powers of $m_b$ multiplied by $\ln(M\ls{{\rm SUSY}}^2/M_{\ss Z}^2)$, we may reinterpret $M\ls{{\rm SUSY}}^2=M_{\tilde b_1}M_{\tilde b_2}$. Terms proportional to $P_f\ln(M\ls{{\rm SUSY}}^2/M_{\ss Z}^2)$ come from loops of lighter quarks and leptons (and their supersymmetric partners) in an obvious way, and the corresponding $M\ls{{\rm SUSY}}^2$ can be reinterpreted accordingly. The remaining leading logarithmic terms arise from gauge and Higgs boson loops and their supersymmetric partners. The best we can do in the above formulae is to interpret $M\ls{{\rm SUSY}}$ as an average neutralino and chargino mass. To incorporate thresholds more precisely requires a more complicated version of eq.~(\ref{mtophree}), which can be easily derived from formulae given in Ref.~\cite{llog}. The explicit form of these threshold corrections can be found in Ref.~\cite{hhh}. However, the impact of these corrections is no more important than that of the non-leading logarithmic terms which have been discarded. The largest of the non-leading logarithmic terms is of ${\cal O}(g^2m_t^2)$, which can be identified from a full one-loop computation as being the subdominant term relative to the leading ${\cal O}(g^2m_t^4\ln M\ls{{\rm SUSY}}^2)$ term in ${\cal M}_{22}^2$. Thus, we can make a minor improvement on our computation of the one-loop leading-log CP-even Higgs squared mass matrix by taking \begin{equation} {\cal M}^2 = {\cal M}^2_{\rm 1LL} + {N_c g^2m_t^2\over48\pi^2s_{\beta}^2c_W^2}\left( \begin{array}{cc} 0&0\\0&1 \end{array}\right)\,, \label{nllogtrm} \end{equation} where ${\cal M}^2_{\rm 1LL}$ is the matrix whose elements are given in eq.~(\ref{mtophree}). One can check that this yields at most a 1~GeV shift in the computed Higgs masses. \subsection{Leading Squark Mixing Corrections to the MSSM Higgs Masses} In the case of multiple and widely separated supersymmetric particle thresholds and/or large squark mixing (which is most likely in the top squark sector), new non-leading logarithmic contributions to the scalar mass-squared matrix can become important. As shown in Ref.~\cite{llog}, such effects can be taken into account by modifying the boundary conditions of the $\lambda_i$ at the supersymmetry breaking scale [eq.~(\ref{boundary})], and by modifying the RGEs to account for multiple thresholds. In particular, we find that $\lambda_5$, $\lambda_6$ and $\lambda_7$ are no longer zero. If the new RGEs are solved iteratively to one loop, then the effects of the new boundary conditions are simply additive. In this section, we focus on the effects arising from the mass splittings and $\widetilde q_L$--$\widetilde q_R$ mixing in the third generation squark sector. The latter generates additional squared-mass shifts proportional to $m_t^4$ and thus can have a significant impact on the radiatively corrected Higgs masses \cite{erz}. First, we define our notation (we follow the conventions of Ref.~\cite{hehtasi}). In third family notation, the squark mass eigenstates are obtained by diagonalizing the following two $2\times 2$ matrices.
The top-squark squared-masses are eigenvalues of \begin{equation} \left(\begin{array}{cc} M_{Q}^2+m_t^2+t_L m_Z^2 & m_t X_t \\ m_t X_t & M_{U}^2+m_t^2+t_R m_Z^2 \end{array}\right) \,, \label{stopmatrix} \end{equation} where $X_t \equiv A_t-\mu\cot\beta$, $t_L\equiv ({1\over 2}-e_t\sin^2\theta_W)\cos2\beta$ and $t_R\equiv e_t\sin^2\theta_W\cos2\beta$. The bottom-squark squared-masses are eigenvalues of \begin{equation} \left(\begin{array}{cc} M_{Q}^2+m_b^2+b_L m_Z^2 & m_b X_b \\ m_b X_b & M_{D}^2+m_b^2+b_R m_Z^2 \end{array}\right) \,, \label{sbotmatrix} \end{equation} where $X_b \equiv A_b -\mu\tan\beta$, $b_L\equiv (-{1\over 2}-e_b\sin^2\theta_W)\cos2\beta$ and $b_R\equiv e_b\sin^2\theta_W\cos2\beta$. $M_{Q}$, $M_{U}$, $M_{D}$, $A_t$, and $A_b$ are soft-supersymmetry-breaking parameters, and $\mu$ is the supersymmetric Higgs mass parameter. We treat the squark mixing perturbatively, assuming that the off-diagonal mixing terms are small compared to the diagonal terms. At one-loop, the effect of the squark mixing is to introduce the shifts $\Delta {\calm}^2_{\rm mix}$ and $\left(\Delta m_{\hpm}^2\right)_{\rm mix}$. In order to keep the formulae simple, we take $M_Q=M_U=M_D=M\ls{{\rm SUSY}}$, where $M\ls{{\rm SUSY}}$ is assumed to be large compared to $M_{\ss Z}$. Thus, the radiatively corrected Higgs mass is determined by $m_{\ha}$, $\tan\beta$, $M\ls{{\rm SUSY}}$, $A_t$, $A_b$, and $\mu$. The more complex case of non-universal squark squared-masses (in which $M_Q$, $M_U$, and $M_D$ are unequal but still large compared to $M_{\ss Z}$) is treated in Ref.~\cite{hhh}. It is convenient to define \begin{eqnarray} X_t&\equiv&A_t-\mu\cot\beta\,,\qquad\qquad Y_t\equiv A_t+\mu\tan\beta\,,\nonumber \\ X_b&\equiv&A_b-\mu\tan\beta\,,\qquad\qquad Y_b\equiv A_b+\mu\cot\beta\,. \label{xdefs} \end{eqnarray} We assume that the mixing terms $m_t X_t$ and $m_b X_b$ are not too large.\footnote{Formally, the expressions given in eqs.~(\ref{fullcorr})--(\ref{deltamhpm}) are the results of an expansion in the variable $(M_1^2-M_2^2)/ (M_1^2+M_2^2)$, where $M_1^2$, $M_2^2$ are the squared-mass eigenvalues of the squark mass matrix. Thus, we demand that $m_t X_t/M\ls{{\rm SUSY}}^2\ll 1$.
For example, for $M\ls{{\rm SUSY}}=1$~TeV, values of $X_t/M\ls{{\rm SUSY}}\lsim 3$ should yield an acceptable approximation based on the formulae presented here.} Then, the elements of the CP-even Higgs squared-mass matrix are given by: \begin{equation} \label{fullcorr} {\calm}^2={\calm}^2_{\rm 1LL}+\Delta{\calm}^2_{\rm mix}\,, \end{equation} where ${\calm}^2_{\rm 1LL}$ has been given in eq.~(\ref{mtophree}), and \begin{eqnarray} \label{deltacalms} &&(\Delta{\calm}^2_{11})_{\rm mix} = {g^2N_c\over 32\pi^2M_{\ss W}^2M\ls{{\rm SUSY}}^2}\Biggl[ {4m_b^4A_b X_b\over\cos^2\beta}\left(1-{A_b X_b\over 12M\ls{{\rm SUSY}}^2}\right) -{m_t^4\mu^2 X_t^2\over 3M\ls{{\rm SUSY}}^2\sin^2\beta}\nonumber \\ &&\qquad -M_{\ss Z}^2 m_b^2A_b(X_b+\ifmath{{\textstyle{1 \over 3}}} A_b)-M_{\ss Z}^2 m_t^2\mu\cot\beta(X_t+\ifmath{{\textstyle{1 \over 3}}}\mu\cot\beta)\Biggr]\,, \nonumber \\ &&(\Delta{\calm}^2_{22})_{\rm mix} = {g^2N_c\over 32\pi^2M_{\ss W}^2M\ls{{\rm SUSY}}^2}\Biggl[ {4m_t^4A_t X_t\over\sin^2\beta}\left(1-{A_t X_t\over 12M\ls{{\rm SUSY}}^2}\right) -{m_b^4\mu^2 X_b^2\over 3M\ls{{\rm SUSY}}^2\cos^2\beta}\nonumber \\ &&\qquad -M_{\ss Z}^2 m_t^2A_t(X_t+\ifmath{{\textstyle{1 \over 3}}} A_t)-M_{\ss Z}^2 m_b^2\mu\tan\beta(X_b+\ifmath{{\textstyle{1 \over 3}}}\mu\tan\beta)\Biggr]\,, \nonumber \\ &&(\Delta{\calm}^2_{12})_{\rm mix} = {-g^2N_c\over 64\pi^2M_{\ss W}^2M\ls{{\rm SUSY}}^2} \Biggl[ {4m_t^4\mu X_t\over\sin^2\beta} \left(\!1-{A_t X_t\over 6M\ls{{\rm SUSY}}^2}\right) +{4m_b^4\mu X_b\over\cos^2\beta}\left(\!1-{A_b X_b\over 6M\ls{{\rm SUSY}}^2}\right)\nonumber \\ &&\qquad-M_{\ss Z}^2 m_t^2\cot\beta\left[X_t Y_t+\ifmath{{\textstyle{1 \over 3}}}(\mu^2+A_t^2)\right] -M_{\ss Z}^2 m_b^2\tan\beta\left[X_b Y_b+\ifmath{{\textstyle{1 \over 3}}}(\mu^2+A_b^2)\right]\Biggr]\,. \end{eqnarray} If $M_{\ss Z}\ll m_{\ha}\leq M\ls{{\rm SUSY}}$, a separate analysis is required. One finds that eq.~(\ref{mhltot}) is shifted by \begin{eqnarray} \label{deltamhs} &&(\Delta m_{\hl}^2)_{\rm mix}\!=\!{g^2 N_c\over 16\pi^2M_{\ss W}^2M\ls{{\rm SUSY}}^2} \Biggl\{2m_t^4 X_t^2\left(\!1-{X_t^2\over 12M\ls{{\rm SUSY}}^2}\right)+2m_b^4 X_b^2\left(\!1-{X_b^2\over 12M\ls{{\rm SUSY}}^2}\right) \nonumber \\ &&+\ifmath{{\textstyle{1 \over 2}}}M_{\ss Z}^2\cos2\beta\left[m_t^2 \left(X_t^2+\ifmath{{\textstyle{1 \over 3}}}(A_t^2\!-\!\mu^2\cot^2\beta)\right) \!-m_b^2\left(X_b^2+\ifmath{{\textstyle{1 \over 3}}}(A_b^2\!-\!\mu^2\tan^2\beta)\right)\right] \!\Biggr\}. \end{eqnarray} Squark mixing effects also lead to modifications of the charged Higgs squared-mass. One finds that the charged Higgs squared-mass obtained in eq.~(\ref{llform}) is shifted by \vbox{% \begin{eqnarray} \label{deltamhpm} &&(\Delta m_{\hpm}^2)_{\rm mix} = {N_cg^2\over192\pi^2M_{\ss W}^2M\ls{{\rm SUSY}}^2}\Biggl[ {2m_t^2M_{\ss W}^2(\mu^2-2A_t^2)\over\sin^2\beta} +{2m_b^2M_{\ss W}^2(\mu^2-2A_b^2)\over\cos^2\beta}\hspace*{1cm} \nonumber \\ &&\quad -3\mu^2\left({m_t^2\over\sin^2\beta} +{m_b^2\over\cos^2\beta}\right)^2 +{m_t^2m_b^2\over\sin^2\beta\cos^2\beta}\left(3(A_t+A_b)^2 -{(A_t A_b -\mu^2)^2\over M\ls{{\rm SUSY}}^2}\right)\Biggr]. \end{eqnarray} } \section{RG-Improvement and Numerical Results for the MSSM Higgs Masses} \label{sec:five} The RG-improved Higgs masses (in the absence of squark mixing) are computed by solving the set of coupled RGEs for the $\lambda_i(M\ls{{\rm weak}}^2)$, subject to the boundary conditions specified in eq.~(\ref{boundary}). Squark mixing effects are incorporated into the procedure by modifying the boundary conditions as described in Ref.~\cite{llog}.
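For orientation, the dominant mixing shift of eq.~(\ref{deltamhs}) can be evaluated with a few lines of {\tt Python}. The sketch below assumes $M_Z\ll m_{\ha}\leq M\ls{{\rm SUSY}}$ and degenerate soft masses; the default quark masses and coupling shown are illustrative inputs, and all masses are in GeV (so the return value is in GeV$^2$).
\begin{verbatim}
import math

def delta_mh2_mix(tanb, MS, At, Ab, mu, mt=175.0, mb=3.0,
                  mW=80.4, mZ=91.19, g2=0.43, Nc=3):
    # Stop/sbottom mixing shift of m_h^2, eq. (deltamhs), for
    # degenerate soft masses M_Q = M_U = M_D = MS.
    cotb = 1.0 / tanb
    Xt, Xb = At - mu*cotb, Ab - mu*tanb
    c2b = (1.0 - tanb**2) / (1.0 + tanb**2)     # cos(2*beta)
    pref = g2 * Nc / (16.0 * math.pi**2 * mW**2 * MS**2)
    top = 2.0*mt**4 * Xt**2 * (1.0 - Xt**2/(12.0*MS**2))
    bot = 2.0*mb**4 * Xb**2 * (1.0 - Xb**2/(12.0*MS**2))
    dterm = 0.5*mZ**2*c2b * (mt**2*(Xt**2 + (At**2 - mu**2*cotb**2)/3.0)
                             - mb**2*(Xb**2 + (Ab**2 - mu**2*tanb**2)/3.0))
    return pref * (top + bot + dterm)

# Near-maximal mixing, X_t ~ 2.45 M_SUSY (illustrative inputs only):
print(delta_mh2_mix(tanb=20.0, MS=1000.0, At=2400.0, Ab=0.0, mu=-1000.0))
\end{verbatim}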
Hempfling, Hoang and I \cite{hhh} found a simple analytic algorithm which reproduces quite accurately the results of the numerical integration of the RGEs. The procedure starts with the formulae of section 4. The Higgs masses take the form given symbolically in eq.~(\ref{oneloopmasses}). Then, \begin{equation} {\calm}^2_{\rm 1RG}\simeq\overline{{\calm}^2}_{\rm 1LL}+ \Delta\overline{{\calm}^2}_{\rm mix}\equiv {\calm}^2_{\rm 1LL}\left[m_t(\mu_t),m_b(\mu_b)\right]+ \Delta{\calm}^2_{\rm mix}\left[m_t(\mu_{\tilde t}),m_b(\mu_{\tilde b})\right]\,, \label{simplemixform} \end{equation} where \begin{equation} \label{scales} \mu_t\equiv\sqrt{m_tM\ls{{\rm SUSY}}}\,,\qquad \mu_b\equiv\sqrt{m_ZM\ls{{\rm SUSY}}}\,, \qquad \mu_{\tilde q}\equivM\ls{{\rm SUSY}}~~~(q=t,b)\,. \end{equation} That is, the numerically integrated RG-improved CP-even Higgs squared-mass matrix, ${\calm}^2_{\rm 1RG}$, is well approximated by replacing all occurrences of $m_t$ and $m_b$ in ${\calm}^2_{\rm 1LL}(m_t,m_b)$ and $\Delta{\calm}^2_{\rm mix}(m_t,m_b)$ by the corresponding running masses evaluated at the scales as indicated above.\footnote{In this section, an overline above a quantity will indicate that the replacement of $m_t$ and $m_b$ by the appropriate running mass has been made.} To implement the above algorithm, we need formulae for $m_b(\mu)$ and $m_t(\mu)$. First, consider $m_{\ha}={\cal O}(M_{\ss Z})$. In this case, at mass scales below $M\ls{{\rm SUSY}}$, the effective theory of the Higgs sector is that of a non-supersymmetric two-Higgs-doublet model. In this model, the quark mass is the product of the Higgs-quark Yukawa coupling ($h_q$) and the appropriate Higgs vacuum expectation value: \begin{eqnarray} m_b(\mu) & = & \frac{1}{\sqrt{2}}\,h_b(\mu)\,v_1(\mu)\,,\nonumber \\ m_t(\mu) & = & \frac{1}{\sqrt{2}}\,h_t(\mu)\,v_2(\mu)\,. \label{topmass} \end{eqnarray} At scales $\mu\leqM\ls{{\rm SUSY}}$, we employ the one-loop non-supersymmetric RGEs of the two-Higgs doublet model for $h_b$, $h_t$, and the vacuum expectation values $v_1$ and $v_2$ (see Appendix B). This yields \begin{eqnarray} &&\frac{\rm d}{\rm d\ln\mu^2}\,m_b^2 = \frac{1}{64\,\pi^2}\,\left[\,6 h_b^2+2 h_t^2-32 g_s^2 +\frac{4}{3} g^{\prime 2}\,\right]\,m_b^2\,, \nonumber \\ &&\frac{\rm d}{\rm d\ln\mu^2}\,m_t^2 = \frac{1}{64\,\pi^2}\,\left[\,6 h_t^2+2 h_b^2-32 g_s^2 -\frac{8}{3} g^{\prime 2}\,\right]\,m_t^2\,. \label{mtrge} \end{eqnarray} For $m_{\ha}={\cal O}(M\ls{{\rm SUSY}})$, the effective theory of the Higgs sector at mass scales below $M\ls{{\rm SUSY}}$ is that of the one-Higgs doublet Standard Model. In this case, we define $m_q(\mu)=h_q^{\rm SM}(\mu) v(\mu)/\sqrt{2}$, where $v(M_{\ss Z})\simeq 246$~GeV is the one-Higgs-doublet Standard Model vacuum expectation value. In this case eq.~(\ref{mtrge}) is modified by replacing $6h_t^2+2h_b^2$ with $6(h_t^{\rm SM})^2-6(h_b^{\rm SM})^2$ in the RGE for $m_t^2$ (and interchange $b$ and $t$ to obtain the RGE for $m_b^2$). To solve these equations, we also need the evolution equations of $g_s$, and $g^\prime$. But, an approximate solution is sufficient for our purposes. Since $g^\prime$ is small, we drop it. We do not neglect the $h_b$ dependence which may be significant if $\tan\beta$ is large. Then, we can iteratively solve eq.~(\ref{mtrge}) to one loop by ignoring the $\mu$ dependence of the right hand side. 
We find \begin{equation} m_t(\mu) = m_t(m_t)\times \begin{cases}1-{1\over\pi}\left[\alpha_s-{1\over 16} (\alpha_b+3\alpha_t)\right]\, \ln\left(\mu^2/m_t^2\right)\,, & $m_{\ha}\simeq{\cal O}(M_{\ss Z})\,,$\\ 1-{1\over\pi}\left[\alpha_s-{3\over 16} (\alpha_t^{\rm SM}-\alpha_b^{\rm SM})\right]\, \ln\left(\mu^2/m_t^2\right)\,, & $m_{\ha}\simeq{\cal O}(M\ls{{\rm SUSY}})\,,$ \end{cases} \label{mtrun} \end{equation} where $\alpha_t\equiv h_t^2/4\pi$, {\it etc.}, and all couplings on the right-hand side are evaluated at $m_t$. Similarly, \begin{equation} m_b(\mu) = m_b(M_{\ss Z})\times \begin{cases}1-{1\over\pi}\left[\alpha_s-{1\over 16} (\alpha_t+3\alpha_b)\right]\, \ln\!\left(\mu^2/M_{\ss Z}^2\right),&$m_{\ha}\simeq{\cal O}(M_{\ss Z}),$ \\ 1-{1\over\pi}\left[\alpha_s-{3\over 16} (\alpha_b^{\rm SM}\!-\alpha_t^{\rm SM})\right]\, \ln\!\left(\mu^2/M_{\ss Z}^2\right),&$m_{\ha}\simeq{\cal O}(M\ls{{\rm SUSY}}),$ \end{cases} \label{mbrun} \end{equation} For intermediate values of $m_{\ha}$, one may interpolate the above formulae between the two regions. Using eqs.~(\ref{mtrun}) and (\ref{mbrun}) in eq.~(\ref{simplemixform}), and diagonalizing the resulting squared-mass matrix, yields our approximation to the RG-improved one-loop neutral CP-even Higgs squared-masses. We may also apply our algorithm to the radiatively corrected charged Higgs mass. However, in contrast to the one-loop radiatively corrected neutral Higgs mass, there are no one-loop leading logarithmic corrections to $m_{\hpm}^2$ that are proportional to $m_t^4$. Thus, we expect that our charged Higgs mass approximation will not be quite as reliable as our neutral Higgs mass approximation. Let us now compare various computations of the one-loop corrected light CP-even Higgs mass. In the first set of examples, all squark mixing effects are ignored. First, we evaluate two expressions for the RG-unimproved one-loop Higgs mass---the one-loop leading log Higgs mass calculated from ${\calm}^2_{\rm 1LL}$ and from a simplified version of ${\calm}^2_{\rm 1LL}$ in which only the dominant terms proportional to $m_t^4$ are kept. In the latter case, we denote the neutral CP-even Higgs squared-mass matrix by ${\calm}^2_{\rm 1LT}\equiv {\calm}^2_0+\Delta{\calm}^2_{\rm 1LT}$, where \begin{equation} \Delta{\calm}^2_{\rm 1LT}\equiv {3g^2 m_t^4\over 8\pi^2 m_W^2 \sin^2\beta} \ln\left(M\ls{{\rm SUSY}}^2/m_t^2\right) \left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right)\,. \label{topapprox} \end{equation} In many analyses of ${\calm}^2_{\rm 1LT}$ and ${\calm}^2_{\rm 1LL}$ that have appeared previously in the literature, the Higgs mass radiative corrections were evaluated with the pole mass, $m_t$. Some have argued that one should take $m_t$ to be the running mass evaluated at $m_t$, although to one-loop accuracy, the two choices cannot be distinguished. Nevertheless, because the leading radiative effect is proportional to $m_t^4$, the choice of $m_t$ in the one-loop formulae is numerically significant, and can lead to differences as large as 10~GeV in the computed Higgs mass. In Ref.~\cite{hhh}, the choice of using $m_t(m_t)$ as opposed to $m_t^{\rm pole}$ (prior to RG-improvement) is justified by invoking information from a two-loop analysis. Thus, our numerical results for the light CP-even Higgs mass before RG-improvement are significantly lower (when $M\ls{{\rm SUSY}}$ is large) as compared to the original computations given in the literature, for fixed $m_t^{\rm pole}$. We have taken $m_t(m_t)=166.5$~GeV in all the numerical results exhibited below.
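The substitution rule of eqs.~(\ref{simplemixform})--(\ref{scales}) is easy to mechanize. A minimal {\tt Python} sketch is shown below, using the first case of eq.~(\ref{mtrun}); the values assumed for the couplings at $m_t$ are representative inputs, not values fixed in the text.
\begin{verbatim}
import math

def mt_running(mu, mt_mt=166.5, alpha_s=0.108, alpha_t=0.08, alpha_b=0.0):
    # Running top mass, first case of eq. (mtrun) [m_A ~ O(M_Z)];
    # the couplings on the right-hand side are frozen at m_t.
    return mt_mt * (1.0 - (alpha_s - (alpha_b + 3.0*alpha_t)/16.0)
                          / math.pi * math.log(mu**2 / mt_mt**2))

# RG improvement per eqs. (simplemixform)-(scales): use m_t(mu_t) with
# mu_t = sqrt(m_t * M_SUSY) in the leading-log matrix, and m_t(M_SUSY)
# in the squark-mixing shift.
MS = 1000.0
mu_t = math.sqrt(166.5 * MS)
print(mt_running(mu_t), mt_running(MS))
\end{verbatim}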
We then apply our algorithm for RG-improvement by replacing $m_t$ and $m_b$ by the appropriate running masses as specified in eqs.~(\ref{simplemixform})--(\ref{scales}). We now show examples for $m_{\ha}=1$~TeV and two choices of $\tan\beta$ in Fig.~\ref{hhhfig1} [$\tan\beta=20$] and Fig.~\ref{hhhfig2} [$\tan\beta=1.5$], and for $m_{\ha}=100$~GeV and $\tan\beta=20$ in Fig.~\ref{hhhfig3}.\footnote{For $m_{\ha}=100$~GeV and $\tan\beta=1.5$, the resulting light Higgs mass lies below experimental Higgs mass bounds obtained by the LEP collaborations \cite{LEPHIGGS}.} Each plot displays five predictions for $m_{\hl}$ based on the following methods for computing the Higgs squared-mass matrix: (i)~${\calm}^2_{\rm 1LT}$; (ii)~${\calm}^2_{\rm 1LL}$; (iii)~$\overline{{\calm}^2}_{\rm 1LT}$; (iv)~$\overline{{\calm}^2}_{\rm 1LL}$; and (v)~${\calm}^2_{\rm 1RG}$ [the overline notation is defined in the footnote below eq.~(\ref{scales})]. The following general features are noteworthy. First, we observe that over the region of $M\ls{{\rm SUSY}}$ shown, ${\calm}^2_{\rm 1RG}\simeq\overline{{\calm}^2}_{\rm 1LL}$. In fact, $m_{\hl}$ computed from $\overline{{\calm}^2}_{\rm 1LL}$ is within 1 GeV of the numerical RG-improved $m_{\hl}$ in all sensible regions of the parameter space ($1\leq\tan\beta\leq m_t/m_b$ and $m_t$, $m_{\ha}\leqM\ls{{\rm SUSY}} \leq 2$~TeV). For values of $M\ls{{\rm SUSY}}>2$~TeV, the Higgs masses obtained from $\overline{{\calm}^2}_{\rm 1LL}$ begin to deviate from the numerically integrated RG-improved result. Second, the difference between $m_{\hl}$ computed from ${\calm}^2_{\rm 1LL}$ and from ${\calm}^2_{1RG}$ is non-negligible for large values of $M\ls{{\rm SUSY}}$; neglecting RG-improvement can lead to an overestimate of $m_{\hl}$ which in some areas of parameter space can be as much as 10 GeV. Finally, note that while the simplest approximation of $m_{\hl}$ based on ${\calm}^2_{\rm 1LT}$ reflects the dominant radiative corrections, it yields the largest overestimate of the light Higgs boson mass. \begin{figure}[htb] \centerline{\psfig{file=hhhfig1.ps,width=10cm,angle=90}} \vskip1pc \fcaption{The radiatively corrected light CP-even Higgs mass is plotted as a function of $M\ls{{\rm SUSY}}$ for $\tan\beta=20$ and $m_{\ha}= 1$~TeV. The one-loop leading logarithmic computation [dashed line] is compared with the RG-improved result which was obtained by numerical analysis [solid line] and by using the simple analytic result given in eq.~(\protect\ref{simplemixform}) [dot-dashed line]. For comparison, the results obtained using the leading $m_t^4$ approximation of eq.~(\protect\ref{topapprox}) [higher dotted line], and its RG-improvement [lower dotted line] are also exhibited. $M\ls{{\rm SUSY}}$ characterizes the scale of supersymmetry breaking and can be regarded (approximately) as a common supersymmetric scalar mass; squark mixing effects are set to zero. The running top quark mass used in our numerical computations is $m_t(m_t)= 166.5$~GeV. All figures are taken from Ref.~\protect\cite{hhh}.} \label{hhhfig1} \end{figure} \begin{figure}[hp] \centerline{\psfig{file=hhhfig2.ps,width=10cm,angle=90}} \vskip1pc \fcaption{The radiatively corrected light CP-even Higgs mass is plotted as a function of $M\ls{{\rm SUSY}}$ for $\tan\beta=1.5$ and $m_{\ha}= 1$~TeV. 
See the caption to Fig.~\protect\ref{hhhfig1}.} \label{hhhfig2} \vspace*{1pc} \centerline{\psfig{file=hhhfig3.ps,width=10cm,angle=90}} \vskip1pc \fcaption{The radiatively corrected light CP-even Higgs mass is plotted as a function of $M\ls{{\rm SUSY}}$ for $\tan\beta=20$ and $m_{\ha}= 100$~GeV. See the caption to Fig.~\protect\ref{hhhfig1}.} \label{hhhfig3} \end{figure} We next consider some examples in which squark-mixing effects are included. As above, we compare the value of $m_{\hl}$ computed by different procedures. Prior to RG-improvement, we first compute $m_{\hl}$ by diagonalizing ${\calm}^2_{\rm 1LL}+\Delta{\calm}^2_{\rm mix}$. Next, we perform RG-improvement as in Ref.~\cite{llog}\ by numerically integrating the RGEs for the Higgs self-couplings and inserting the results into eq.~(\ref{massmhh}); the resulting CP-even scalar squared-mass matrix is denoted by ${\calm}^2_{\rm 1RG}$. Finally, we extract $m_{\hl}$ and compare it to the corresponding result obtained by diagonalizing $\overline{{\calm}^2}_{\rm 1LL}+ \Delta\overline{{\calm}^2}_{\rm mix}$ given by eq.~(\ref{simplemixform}). These comparisons are exhibited in a series of figures. First, we plot $m_{\hl}$ {\it vs.} $X_t/M\ls{{\rm SUSY}}$ for $M\ls{{\rm SUSY}}=m_{\ha}=-\mu=1$~TeV for two choices of $\tan\beta$ in Fig.~\ref{hhhfig4} [$\tan\beta=20$] and Fig.~\ref{hhhfig5} [$\tan\beta=1.5$]. Note that Fig.~\ref{hhhfig4} is of particular interest, since it allows one to read off the maximal values of $m_{\hl}$ as a function of $X_t$ for $M\ls{{\rm SUSY}}\leq 1$~TeV, which were quoted in section 4.1. The maximum value of the Higgs mass occurs for $|X_t|\simeq 2.4M\ls{{\rm SUSY}}$. The reader may worry that this value is too large in light of our perturbative treatment of the squark mixing. However, comparisons with exact diagrammatic computations confirm that these results are trustworthy at least up to the point where the curves reach their maxima. From a more practical point of view, such large values of the mixing are not very natural; they cause tremendous splitting in the top-squark mass eigenstates and are close to the region of parameter space where the SU(2)$\times$U(1) breaking minimum of the scalar potential becomes unstable relative to color and/or electromagnetic breaking vacua \cite{casas}. \begin{figure}[htbp] \centerline{\psfig{file=hhhfig4.ps,width=10cm,angle=90}} \vskip1pc \fcaption{The radiatively corrected light CP-even Higgs mass is plotted as a function of $X_t/M\ls{{\rm SUSY}}$, where $X_t\equiv A_t-\mu\cot\beta$, for $M\ls{{\rm SUSY}}=m_{\ha}=-\mu=1$~TeV and $\tan\beta=20$. See the caption to Fig.~\ref{hhhfig1}.} \label{hhhfig4} \vskip1pc \centerline{\psfig{file=hhhfig5.ps,width=10cm,angle=90}} \vskip1pc \fcaption{The radiatively corrected light CP-even Higgs mass is plotted as a function of $X_t/M\ls{{\rm SUSY}}$, where $X_t\equiv A_t-\mu\cot\beta$, for $M\ls{{\rm SUSY}}=m_{\ha}=-\mu=1$~TeV and $\tan\beta=1.5$. See the caption to Fig.~\ref{hhhfig1}.} \label{hhhfig5} \end{figure} \begin{figure}[htbp] \centerline{\psfig{file=hhhfig6.ps,width=10cm,angle=90}} \caption{The radiatively corrected, RG-improved light CP-even Higgs mass is plotted as a function of $X_t/M\ls{{\rm SUSY}}$, where $X_t\equiv A_t-\mu\cot\beta$, for $M\ls{{\rm SUSY}}=m_{\ha}=1$~TeV and two choices of $\tan\beta=1.5$ and 20. Three values of $\mu$ are plotted in each case: $-1$~TeV [dashed], 0 [solid] and 1~TeV [dotted]. 
Here, we have assumed that the diagonal squark squared-masses are degenerate: $M_Q=M_U=M_D=M\ls{{\rm SUSY}}$.} \label{hhhfig6} \vspace{1pc} \centerline{\psfig{file=hhhfig7.ps,width=10cm,angle=90}} \caption{The radiatively corrected, RG-improved light CP-even Higgs mass is plotted as a function of $X_t/M\ls{{\rm SUSY}}$ for $M\ls{{\rm SUSY}}=1$~TeV and $m_{\ha}=100$~GeV. See the caption to Fig.~\ref{hhhfig6}.} \label{hhhfig7} \end{figure} \begin{figure}[htbp] \centerline{\psfig{file=hhhfig8.ps,width=10cm,angle=90}} \caption{The radiatively corrected, RG-improved light CP-even Higgs mass is plotted as a function of $M\ls{{\rm SUSY}}$ for $X_t=2.4M\ls{{\rm SUSY}}$ for three choices of ($\tan\beta$, $m_{\ha}$)= (20,1), (1.5,1), and (1.5,0.1), where $m_{\ha}$ is specified in TeV units. The solid line depicts the numerically integrated result, and the dot-dashed line indicates the result obtained from eq.~(\ref{simplemixform}).} \label{hhhfig8} \vspace{1pc} \centerline{\psfig{file=hhhfig9.ps,width=10cm,angle=90}} \caption{The radiatively corrected, RG-improved light CP-even Higgs mass is plotted as a function of $\tan\beta$ for $M\ls{{\rm SUSY}}= 1$~TeV and $m_{\ha}= 250$~GeV, for two choices of $X_t=0$ and $X_t= 2.4M\ls{{\rm SUSY}}$. See the caption to Fig.~\ref{hhhfig8}.} \label{hhhfig9} \end{figure} In Figs.~\ref{hhhfig4} and \ref{hhhfig5}, $\mu=-1$~TeV, {\it i.e.}, as $X_t\equiv A_t-\mu\cot\beta$ varies, so does $A_t$. In fact, for $m_{\ha}\ggM_{\ss Z}$, the dominant one-loop radiative corrections to $m_{\hl}^2$ depend only on $X_t$ and $M\ls{{\rm SUSY}}$ [see eq.~(\ref{deltamhs})], so that for fixed $X_t$, the $\mu$ dependence of $m_{\hl}$ is quite weak. This is illustrated in Fig.~\ref{hhhfig6}. For values of $m_{\ha}\sim{\cal O}(M_{\ss Z})$, the $\mu$ dependence is slightly more pronounced (although less so for values of $\tan\beta\gg 1$) as illustrated in Fig.~\ref{hhhfig7}. We also display $m_{\hl}$ as a function of $M\ls{{\rm SUSY}}$ for a number of different parameter choices in Fig.~\ref{hhhfig8}. In Fig.~\ref{hhhfig9}, we exhibit the $\tan\beta$ dependence of $m_{\hl}$ for two different choices of $X_t$. Again, we notice that our approximate formula [eq.~(\ref{simplemixform})], which is depicted by the dot-dashed line, does remarkably well, and never differs from the numerically integrated RG-improved value (solid line) by more than 1.5~GeV for $M\ls{{\rm SUSY}}\leq 2$~TeV and $\tan\beta\geq 1$. In summary, when the algorithm given by eqs.~(\ref{simplemixform}) and (\ref{scales}) is applied to the leading log one-loop corrections plus the leading terms resulting from squark mixing, the full (numerically integrated) RG-improved value of $m_{\hl}$ is reproduced to within an accuracy of about 2~GeV (assuming that supersymmetric particle masses lie below 2 TeV). The methods described above also yield accurate results for the mass of the heavier CP-even Higgs boson, $m_{\hh}$. 
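The location of the maximum quoted above can be checked directly. The dominant $X_t$ dependence of eq.~(\ref{deltamhs}) is the factor $X_t^2\left(1-X_t^2/12M\ls{{\rm SUSY}}^2\right)$, which is maximized at $|X_t|=\sqrt{6}\,M\ls{{\rm SUSY}}\simeq 2.45\,M\ls{{\rm SUSY}}$, in accord with the numerically determined $|X_t|\simeq 2.4M\ls{{\rm SUSY}}$ (the small difference is presumably due to the subleading $M_{\ss Z}^2$ terms and the RG-improvement). A two-line check:
\begin{verbatim}
import math

# Dominant X_t dependence of eq. (deltamhs): f(x) = x^2 (1 - x^2/12),
# with x = X_t / M_SUSY.  Scan a grid for the maximum.
f = lambda x: x**2 * (1.0 - x**2 / 12.0)
x_max = max((i * 0.001 for i in range(4000)), key=f)
print(x_max, math.sqrt(6.0))   # both ~2.449
\end{verbatim}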
The approximation to the radiatively corrected charged Higgs mass is slightly less accurate only because the leading $m_t$ enhanced terms are not as dominant as in the neutral Higgs sector.\footnote{The approximation to the radiatively corrected charged Higgs mass can be improved by including sub-dominant terms not contained in the formulae given in this paper; see Ref.~\cite{madiaz} for further details.} \section{Implications of the Radiatively Corrected Higgs Sector} \label{sec:six} Using the results of sections 4 and 5, one can obtain the leading radiative corrections to the various Higgs couplings, and proceed to investigate Higgs phenomenology in detail. Here, I shall describe the procedure used to obtain the Higgs couplings and briefly indicate some of the consequences. To obtain radiatively corrected couplings which are accurate in the one-loop leading logarithmic approximation, it is sufficient to use the tree-level couplings in which the parameters are taken to be running parameters evaluated at the electroweak scale. First, I remind the reader that $\tan\beta$ and $m_{\ha}$ are input parameters. Next, we obtain the CP-even Higgs mixing angle $\alpha$ by diagonalizing the radiatively corrected CP-even Higgs mass matrix. With the angle $\alpha$ in hand one may compute, for example, $\cos(\beta-\alpha)$ and $\sin\alpha$. These results can be used to obtain the Higgs couplings to gauge bosons [eq.~(\ref{littletable})] and fermions [eq.~(\ref{qqcouplings})]. Finally, the Higgs self-couplings [see Appendix A] are obtained by making use of the $\lambda_i$ evaluated at the electroweak scale. The end result is a complete set of Higgs boson decay widths and branching ratios that include one-loop leading-log radiative corrections. The Higgs production cross-section in a two-Higgs-doublet model via the process $e^+e^-\to Z\to ZH^0(Zh^0)$ is suppressed by a factor $\cos^2(\beta-\alpha)$ [$\sin^2(\beta-\alpha)$] as compared to the corresponding cross-sections in the Standard Model. At tree-level, we know that the decoupling limit applies when $m_{\ha}\ggM_{\ss Z}$. In fact, the approach to decoupling is quite rapid as indicated in eq.~(\ref{largema}). For $m_{\ha}\gsim 2M_{\ss Z}$, the couplings of $h^0$ to vector bosons and to quarks and leptons are phenomenologically indistinguishable from those of the Standard Model Higgs boson. Including radiative corrections does not alter this basic behavior, although one finds that $\cos^2(\beta-\alpha)\to 0$ more slowly as the radiative corrections become more significant. When radiative corrections have been incorporated, new possibilities arise which did not exist at tree-level. One example is the possibility of the decay $h^0\rightarrowA^0\ha$, which is kinematically forbidden at tree-level but is allowed for some range of MSSM parameters \cite{berz,nirtwo}. We can obtain the complete one-loop leading-log expression for the $h^0A^0\ha$ coupling (assuming $m_{\ha} \lsim m_Z$) by inserting the one-loop leading-log formulae for the $\lambda_i$ into eq.~(\ref{defghaa}) \cite{nirtwo} \vbox{% \begin{eqalignno}% &{g_{h^0A^0\ha}\over g m_Z/2c_W}=-c_{2\beta}s_{\beta+\alpha}\left\{ 1+{g^2\over96\pi^2c_W^2}\left[P_t\ln \! 
\left({M\ls{{\rm SUSY}}^2\over m_t^2}\right)+(P_b+P_f)\ln\!\left({M\ls{{\rm SUSY}}^2\over M_{\ss Z}^2}\right) \right]\right\}\nonumber \\[3pt] &~~+{g^2N_c\over16\pi^2m_W^2m_Z^2}\left\{\left[ {\sa s_{\beta}^2\over c_{\beta}^3}(2m_b^4-m_b^2m_Z^2c_{\beta}^2) -{(\ca s_{\beta}^3-\sa c_{\beta}^3)\over2c_{\beta}^2}m_b^2m_Z^2\right] \ln\!\left({M\ls{{\rm SUSY}}^2\over m_Z^2}\right)\right.\nonumber \\[3pt] &~~-\left.\left[{\ca c_{\beta}^2\over s_{\beta}^3}(2m_t^4-m_t^2m_Z^2s_{\beta}^2) +{(\ca s_{\beta}^3-\sa c_{\beta}^3)\over2s_{\beta}^2}m_t^2m_Z^2\right] \ln\!\left({M\ls{{\rm SUSY}}^2\over m_t^2}\right)\right\}\nonumber \\[3pt] &~~-{g^2\over192\pi^2c_W^2}\left[s_{2\beta}c_{\beta+\alpha} (P_{2H}+P_g) -2(\ca s_{\beta}^3-\sa c_{\beta}^3)(P_{2H}'+P_g')\right]\ln\!\left({M\ls{{\rm SUSY}}^2\over m_Z^2}\right)\,.\nonumber\\ \label{ghaall} \end{eqalignno} } \noindent If kinematically allowed, $h^0\rightarrow A^0\ha$ would almost certainly be the dominant decay mode. However, the LEP experimental lower bound on $m_{\ha}$ now lies above $0.5(m_{\hl})_{\rm max}\simeq 62.5$~GeV. Thus, the region of parameter space where the decay $h^0\to A^0\ha$ is kinematically allowed is no longer viable. The possibility of measuring the $h^0A^0\ha$ couplings at a future $e^+e^-$ linear collider by detecting double Higgs production has been discussed in Ref.~\cite{djouadi}. Unfortunately, the prospects are poor due to low cross-sections and significant backgrounds. For the heavier Higgs states, there are many possible final state decay modes. The various branching ratios are complicated functions of the MSSM parameter space \cite{gbhs}. For example, a plot of the branching ratios of $H^0$, with the leading one-loop radiative corrections included, can be found in Ref.~\cite{gsw}. This plot indicates a rich phenomenology for heavy Higgs searches at future colliders. The precision measurements of Higgs masses and couplings will be one of the primary tasks of the LHC and future lepton-lepton colliders \cite{snowmass,gunreport}. Although the possibility of a light Higgs discovery at LEP still remains, the effects of the radiative corrections may be significant enough to push the Higgs boson above the LEP-2 discovery reach. In this case, the discovery of the Higgs boson will be the purview of the LHC. Of course, if low-energy supersymmetry exists, then the LHC will also uncover direct evidence for the supersymmetric particles. Then, a detailed examination of the Higgs sector, with precision measurements of the Higgs masses and couplings, will provide a critical test for the underlying supersymmetric structure. Unlocking the secrets of the Higgs bosons will help reveal the mechanism of electroweak symmetry breaking and the nature of the TeV scale physics that lies beyond the Standard Model. \vskip3pc \centerline{{\bf Acknowledgments}} \medskip I would like to express my deep appreciation to my collaborators Ralf Hempfling and Andre Hoang, whose contributions to the work on radiatively corrected Higgs masses were instrumental to the development of the material reported in this paper. I would also like to thank Marco D\'\i az, Abdel Djouadi, Yuval Grossman, Jack Gunion, Yossi Nir, Scott Thomas, and Peter Zerwas for many fruitful interactions. Finally, I gratefully acknowledge conversations with Marcela Carena, Mariano Quiros and Carlos Wagner and appreciate the opportunity provided by the 1995 LEP-2 workshop to revisit many of the issues discussed here. This work was supported in part by the U.S. Department of Energy.
\clearpage \makeatletter \@addtoreset{equation}{section} \def\theequation{\Alph{section}.\arabic{equation}} \makeatother
\section*{INTRODUCTION} The adsorption of electronegative atoms on metal surfaces is of paramount importance in surface science as well as electrochemistry.\cite{Magnussen_CR102,Tripkovic_FD140,Andryushechkin_SSR73,Zhu_JESC163} As an electronegative atom approaches a metal surface, charge is transferred and it becomes negatively charged. This interaction can be described classically by the method of images, where the adatom/image-charge pair can be seen as a dipole. As more adatoms accumulate on the surface, repulsive interactions are expected between them. Such interactions were confirmed for a variety of adatoms on metal surfaces \cite{Miller_JCP134,Loffreda_JCP108,Gava_PRB78,Ma_SS619,Peljhan_JPCC113,Inderwildi_JCP122,Gossenberger_SS631} and they typically scale as $\mu^2/R^3 \propto \mu^2\Theta^{\frac{3}{2}}$, where $\mu$ is the adatom-induced dipole, $R$ is the nearest-neighbor interadatom distance, and $\Theta$ is the surface coverage. However, in a few cases, notably for electronegative atoms on Mg(001) \cite{Francis_PRB87,Cheng_PRL113} and O on Al(111),\cite{Jacobsen_PRB52,Kiejna_PRB63,Poberznik_JPCC120} counterintuitive attractive interactions were identified. In our previous publication \cite{Poberznik_JPCC120} we explained that these surprising attractive lateral interactions are a consequence of the interplay between electrostatic and geometric effects and that there exists a critical height of adatoms above the surface, below which attractive interactions can emerge. Since this model---explained in the Supporting Information and henceforth referred to as the simple ionic model---requires only (i) sufficiently ionic bonding and (ii) a low height of the adatom above the surface, it stands to reason that it should be generally applicable, provided that the two requirements are met. To address this proposition, the adsorption of four different electronegative adatoms (N, O, F, and Cl) on 44 elemental metals, as indicated in Figure~\ref{fig:ChosenSystemsPeriodicTable}, is considered herein by means of density-functional-theory (DFT) calculations. \section*{TECHNICAL DETAILS} DFT calculations were performed with the {\tt PWscf} code from the {\tt Quantum ESPRESSO} distribution\cite{Giannozzi_JPCM29} and the {\tt PWTK} scripting environment,\cite{PWTK} using the generalized gradient approximation (GGA) of Perdew--Burke--Ernzerhof (PBE).\cite{Perdew_PRL77} We used the projector augmented wave (PAW) method\cite{Blochl_PRB50} with PAW potentials obtained from a pseudopotential library.\cite{DalCorso_CMS95,pseudos} Kohn--Sham orbitals were expanded in a plane wave basis set with a kinetic energy cutoff of 50 Ry (600 Ry for the charge density). Brillouin zone (BZ) integrations were performed with the special point technique,\cite{MonkhorstPack_PRB13} using a $12\times12\times1$ shifted $k$-mesh for \ensuremath{(1\times1)}\ surface cells (or equivalent for larger cells) and a Methfessel--Paxton smearing\cite{Methfessel_PRB40} of 0.02~Ry. Molecular graphics were produced by the {\tt XCrySDen} graphical package.\cite{Kokalj_JMGM17} \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{fig1.pdf} \caption[Investigated metallic surfaces and adsorbates]{ The investigated metals and adsorbates are highlighted in the periodic table. The lattice type of each investigated metal is indicated by the color coding; note that metals with exotic lattices were modeled as either bcc, fcc, hcp, or simple-cubic (sc) as explained in the text.
The side panel displays topviews of the considered surfaces for each lattice type.} \label{fig:ChosenSystemsPeriodicTable} \end{figure*} Most of the investigated metals crystallize in one of the following three lattice types: face-centered-cubic (fcc), hexagonal-close-packed (hcp), and body-centered-cubic (bcc). The exceptions are In and Sn, which crystallize in tetragonal lattices, as well as Hg and Bi, which crystallize in rhombohedral lattices. For these metals, the most stable of the fcc, hcp, bcc, and simple-cubic (sc) lattices was chosen as the representative model in order to simplify the calculations. Additionally, $\alpha$-Mn has a unique bcc lattice with 58 atoms in the unit cell;\cite{Bradley_PRSLA115} however, for simplicity we modeled it with a plain bcc lattice. The selected Bravais lattice type for each investigated metal is indicated along with the considered surfaces in Figure~\ref{fig:ChosenSystemsPeriodicTable}, i.e., (001) for hcp, (110) and (100) for bcc, (100) and (111) for fcc, and (100) for sc metals. In total, we considered 70 different surfaces. The adatoms predominantly adsorb to hollow sites, although for some cases they prefer top or bridge sites: these exceptions are listed in Table~\ref{tab:exceptions} in the Supporting Information. The adatom binding energy (\ensuremath{E_{\rm b}}), as defined by eq~\eqref{eq:Eb} in the Supporting Information, was calculated for \ensuremath{(1\times1)}\ and \ensuremath{(2\times2)}\ adatom overlayers, designated as \ensuremath{E^{\obo}_{\mathrm{b}}} and \ensuremath{E^{\tbt}_{\mathrm{b}}}, respectively. The difference between the two binding energies ($\ensuremath{\Delta_{\Eb}}$): \begin{equation} \label{eq:Del} \ensuremath{\Delta_{\Eb}} = \ensuremath{E^{\obo}_{\mathrm{b}}} - \ensuremath{E^{\tbt}_{\mathrm{b}}}, \end{equation} was used as the criterion to determine whether lateral interactions are attractive. To differentiate between attractive (or repulsive) and negligible lateral interactions, we arbitrarily adopt a threshold of 0.1~eV and define interactions to be attractive if $\ensuremath{\Delta_{\Eb}} < -0.1$~eV, negligible if $-0.1~\mathrm{eV} \leq \ensuremath{\Delta_{\Eb}} \leq 0.1~\mathrm{eV}$, and repulsive when $\ensuremath{\Delta_{\Eb}} > 0.1~\mathrm{eV}$. \section*{RESULTS AND DISCUSSION} The main result of this work is shown in Figure~\ref{fig:AttractiveInteractionsResults}, which schematically presents the type of lateral interactions for the N, O, F, and Cl adatoms on 70 different surfaces of 44 elemental metals. We find that lateral interactions can be classified into four different groups: (i) the expected repulsive interactions; (ii,iii) the case where the simple ionic model applies and the lateral interactions are either attractive or negligible; and (iv) the case where the conditions of the simple ionic model are met but surface reconstruction makes the low-coverage \ensuremath{(2\times2)}\ overlayer more stable than the high-coverage one. Note that some cases belong to more than one scenario; nevertheless, each specific case is described only by a single category. To this end the following order of precedence is adopted: (1) attractive interactions, (2) reconstruction, and (3) negligible or repulsive interactions. Reconstruction is characterized by metal atoms (ions) nearest to the adatom being substantially displaced toward the adatom, thus forming island-like structures on the surface. A typical example is shown in Figure~\ref{fig:ReconstructionExample}.
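The classification criterion of eq~\eqref{eq:Del} is trivially scriptable; a minimal {\tt Python} sketch, with hypothetical binding energies and assuming the usual convention that a more negative \ensuremath{E_{\rm b}}\ corresponds to stronger binding, reads:
\begin{verbatim}
def classify_lateral(Eb_1x1, Eb_2x2, thr=0.1):
    # Classify lateral interactions from eq. (Del):
    # Delta_Eb = Eb(1x1) - Eb(2x2), with the 0.1 eV threshold of the text.
    delta = Eb_1x1 - Eb_2x2
    if delta < -thr:
        return "attractive"
    if delta > thr:
        return "repulsive"
    return "negligible"

# Hypothetical binding energies (eV); more negative = stronger binding.
print(classify_lateral(-4.35, -4.10))   # -> "attractive"
\end{verbatim}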
In order to provide a quantitative measure of the extent of surface reconstruction, we defined the reconstruction quotient (\ensuremath{f_{\rm rec}}) as: \begin{equation} \label{eq:RQ} \ensuremath{f_{\rm rec}} = A_{\mathrm{R}}/A_{\ensuremath{(1\times1)}}, \end{equation} where $A_{\mathrm{R}}$ is the area of the reconstructed ``cell'' and $A_{\ensuremath{(1\times1)}}$ is the area of the \ensuremath{(1\times1)}\ unit-cell (for a schematic definition of these quantities, see Figure~\ref{fig:ReconDegree} in the Supporting Information). Because metal ions nearest to the adatom always respond to its presence (by either moving toward or away from it), we define the surface to be reconstructed only when \ensuremath{f_{\rm rec}}\ is significantly below 1; we arbitrarily set $\ensuremath{f_{\rm rec}} \leq 0.9$ as the criterion for reconstruction. \begin{figure*}[htb] \centering \includegraphics[width=0.94\textwidth]{fig2.pdf} \caption[Summary of lateral interactions between adatoms on metallic surfaces]{A summary of lateral interactions between adatoms on the investigated metal surfaces. Four scenarios were found: (i) repulsive interactions, (ii, iii) attractive or negligible interactions, and (iv) reconstruction. For the N adatom, all results are explicitly summarized, whereas for the other adatoms only differences with respect to the N adatom are shown. For bcc and fcc metals, the results for the two considered surfaces are shown as indicated by the legend.} \label{fig:AttractiveInteractionsResults} \end{figure*} \begin{figure}[ht] \centering \includegraphics[width=1.00\columnwidth]{fig3.pdf} \caption[Example of surface reconstruction]{An example of surface reconstruction for O on Na(100). For the \ensuremath{(2\times2)}\ overlayer, the Na atoms closest to the adatom move toward it so that Na$_4$O islands form. Such reconstruction is not possible for the \ensuremath{(1\times1)}\ overlayer due to symmetry. Such a reconstruction occurs for all metals labeled as ``REC'' in Figure~\ref{fig:AttractiveInteractionsResults}, though the extent of reconstruction can vary considerably.} \label{fig:ReconstructionExample} \end{figure} In addition to the aforementioned Figure~\ref{fig:AttractiveInteractionsResults}, which schematically summarizes the results on lateral interactions, \ensuremath{E_{\rm b}}\ values for each specific case are tabulated in Tables~\ref{tab:Eb-bcc-100}--\ref{tab:Eb-sc-100} and plotted along with \ensuremath{f_{\rm rec}}\ values in Figures~\ref{fig:Eb-and-ReconDegree-bcc-100}--\ref{fig:Eb-and-ReconDegree-hcp-001} in the Supporting Information. In accordance with previous studies,\cite{Miller_JCP134,Loffreda_JCP108,Gava_PRB78,Ma_SS619,Peljhan_JPCC113,Inderwildi_JCP122,Gossenberger_SS631} our results reveal that repulsion is the dominant case for electronegative adatoms on d-block metal surfaces, with a few exceptions, such as Fe(100), on which N and O adatoms display attractive interactions, and Hg(001), which displays either attractive, negligible, or repulsive interactions; negligible lateral interactions were also identified for some adatoms on group 3 and 4 d-block hcp metals. Additionally, reconstruction occurs on the (100) surfaces of several bcc d-block metals. Attractive or negligible lateral interactions are the dominant scenario on the surfaces of p-block metals. In particular, N, O, and F display such behavior on a large majority of the investigated p-block metal surfaces.
Exceptions are repulsive interactions for N on Al(111), Tl(001), and Pb(111); O on In(100) and Tl(001); and F on Bi(100). In contrast, Cl mainly displays repulsive lateral interactions on p-block metal surfaces, with the exceptions of In(100), Sn(100), and Pb(100), where lateral interactions are negligible. The third group comprises the s-block metals, where the dominant scenario is reconstruction, in particular for N and O, and to a lesser extent for F adatoms. Notable exceptions are Mg and Be, where lateral interactions are attractive and repulsive, respectively. In contrast, for Cl reconstruction occurs only on K(100) and Rb(100), whereas on other surfaces of s-block metals Cl generally displays either attractive or negligible lateral interactions, except on Li(110), Be, Na(110), and Mg, where the interactions are repulsive. Our results indicate that in some cases, such as N, O, and F on alkali metals, where the two conditions of the simple ionic model for attractive lateral interactions are met (ionic adatom--surface bonding and a low height of adatoms), reconstruction occurs instead. This implies that the simple ionic model cannot describe all the situations and needs to be extended, at least conceptually, so as to account for the possibility of reconstruction. To this end, we introduce two quantities, termed {\it unoccupied surface area} (\ensuremath{A_{\mathrm{uno}}}) and {\it area occupied by the anion} (\ensuremath{A_{\mathrm{a}}}), defined as: \begin{equation} \label{eq:ExcessArea} \ensuremath{A_{\mathrm{uno}}} = A_{\ensuremath{(1\times1)}} - \pi R^2_{\mathrm{c}} \quad \text{and} \quad \ensuremath{A_{\mathrm{a}}} = \pi R^2_{\mathrm{a}}, \end{equation} where $A_{\ensuremath{(1\times1)}}$ is the area of the \ensuremath{(1\times1)}\ surface cell, \ensuremath{R_{\mathrm{c}}}\ is the ionic radius of the metal cation, calculated as the average of the effective ionic radii for all coordination numbers of the metal in the lowest cationic oxidation state,\cite{Shannon_ACB26} and \ensuremath{R_{\mathrm{a}}}\ is the effective radius of the anion \cite{Shannon_ACB26} (for a graphical representation of the {\it unoccupied surface area}, see Figure~\ref{fig:VacantArea} in the Supporting Information). The comparison between \ensuremath{A_{\mathrm{uno}}}\ and \ensuremath{A_{\mathrm{a}}}\ is presented in Figure~\ref{fig:Aex}. This figure reveals that alkali metals, Ca, and Sr display the largest \ensuremath{A_{\mathrm{uno}}}, and reconstructions typically occur on their surfaces, in particular for N and O adatoms. Furthermore, it is also evident from the figure that \ensuremath{A_{\mathrm{a}}}\ of Cl$^-$ is much larger than \ensuremath{A_{\mathrm{a}}}\ of the other three adatoms, and for this reason reconstructions and attractive interactions are considerably less frequent for Cl adatoms (cf.~Figure~\ref{fig:AttractiveInteractionsResults}). The next relevant observation is that repulsive interactions usually appear when \ensuremath{A_{\mathrm{uno}}}\ is small, i.e., when $\ensuremath{A_{\mathrm{uno}}}\lesssim\ensuremath{A_{\mathrm{a}}}$. This is the case for transition metal surfaces, where repulsive interactions dominate. Finally, if neither $\ensuremath{A_{\mathrm{uno}}}\gg\ensuremath{A_{\mathrm{a}}}$ nor $\ensuremath{A_{\mathrm{uno}}}\lesssim\ensuremath{A_{\mathrm{a}}}$ applies, then the interactions are likely attractive or negligible. There are, of course, exceptions, because such a simple rule cannot encompass all cases.
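To illustrate how eq~\eqref{eq:ExcessArea} and the above rule of thumb can be applied in practice, consider the following minimal Python sketch. The anion radii are rounded, illustrative values (the actual analysis uses the tabulated effective ionic radii\cite{Shannon_ACB26}), and the factor quantifying ``much larger'' is an arbitrary assumption introduced here purely for illustration:

\begin{verbatim}
import math

# Rounded, illustrative anion radii in angstrom; placeholders only.
R_ANION = {"N": 1.46, "O": 1.40, "F": 1.33, "Cl": 1.81}

def unoccupied_area(a_cell, r_cation):
    """A_uno = A(1x1) - pi*Rc^2; a_cell is the (1x1) cell area in A^2."""
    return a_cell - math.pi * r_cation**2

def anion_area(adatom):
    """A_a = pi*Ra^2."""
    return math.pi * R_ANION[adatom]**2

def rule_of_thumb(a_uno, a_a, factor=2.0):
    """Heuristic from the text; 'factor' (quantifying 'much larger')
    is an arbitrary choice made for this illustration."""
    if a_uno > factor * a_a:
        return "reconstruction likely"
    if a_uno <= a_a:
        return "repulsive interactions likely"
    return "attractive or negligible interactions likely"

# Hypothetical open surface: square (1x1) cell of side 4.3 A, Rc = 1.3 A
a_uno = unoccupied_area(4.3**2, 1.3)
print(rule_of_thumb(a_uno, anion_area("O")))   # -> "reconstruction likely"
print(rule_of_thumb(a_uno, anion_area("Cl")))  # -> "attractive or negligible ..."
\end{verbatim}

Note how, for the same hypothetical surface, the much larger \ensuremath{A_{\mathrm{a}}}\ of Cl$^-$ shifts the prediction away from reconstruction, mirroring the trend discussed above.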
\begin{figure}[tb] \centering \includegraphics[width=1.0\columnwidth]{fig4.pdf} \caption[Results of unoccupied surface area analysis]{Unoccupied surface area (\ensuremath{A_{\mathrm{uno}}}) for (a) open surfaces [fcc(100), bcc(100), and sc(100)] and (b) close-packed surfaces [fcc(111), hcp(001), and bcc(110)]. The horizontal dashed lines indicate the \ensuremath{A_{\mathrm{a}}}\ of adsorbed adatoms, calculated as described in the text.} \label{fig:Aex} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=1.00\columnwidth]{fig5.pdf} \caption[Critical-height analysis]{Comparison between critical heights as predicted by the simple ionic model and DFT calculated adatom heights for (a) open surfaces [fcc(100), bcc(100), and sc(100)] and (b) close-packed surfaces [fcc(111), hcp(001), and bcc(110)]. Each datapoint is color-coded according to the type of interaction, as indicated by the legend at the top right.} \label{fig:critical-height} \end{figure} To further scrutinize the utility of the simple ionic model, let us compare the critical height above the surface---i.e., the height above which the simple ionic model predicts that lateral interactions between adatoms are repulsive---with the adatom heights obtained from DFT calculations (Figure~\ref{fig:critical-height}). Notably, there is not a single case of attractive lateral interactions with the adatoms located above the critical height. This observation is very reassuring and provides strong support for the validity of our explanation based on the simple ionic model, which differs from the explanations provided by Jacobsen et al.\cite{Jacobsen_PRB52} for O on Al(111) and by Cheng et al.\cite{Cheng_PRL113} for N, O, and F adatoms on Mg(001). Jacobsen et al.\cite{Jacobsen_PRB52} emphasized the role of Al p-states that open new possibilities for hybridization and consequently lead to stronger bonding configurations at high coverage, but this explanation is brought into question by the aforementioned attractive interactions of N, O, and F adatoms on Mg(001),\cite{Cheng_PRL113} which is an s-metal, as well as by the current findings---among the 24 identified cases of attractive interactions, 11 appear on non-p-metals (7 on s- and 4 on d-metals, cf.~Figure~\ref{fig:AttractiveInteractionsResults}). The attractive interactions on Mg are accompanied by an adsorption induced decrease of the work function, which is another anomaly, because an increase is typically expected for electronegative adatoms.\cite{Michaelides_PRL90} Cheng et al.\cite{Cheng_PRL113} attributed both anomalies to a highly polarizable electron spill-out in front of Mg(001), i.e., the vertical electron charge redistribution (a depletion of charge above the adatom) causes the decrease of the work function,\cite{Cheng_PRL113,Michaelides_PRL90} whereas the attractive interactions were explained by quantum mechanical screening, i.e., a lateral transfer of the spill-out electrons.\cite{Cheng_PRL113} In contrast, our explanation involves neither the metal p-states nor the highly polarizable electron spill-out, but instead explains the attraction by the simple ionic model---i.e., an interplay of electrostatic and geometric effects---requiring only unpolarizable point ions.
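In numerical terms, the critical height is simply the root of the energy difference between the \ensuremath{(1\times1)}\ and \ensuremath{(2\times2)}\ overlayers as a function of adatom height. The following Python sketch locates it by bisection; the two energy curves are placeholder functions standing in for the lattice sums of the simple ionic model (Supporting Information), so the printed number has no physical meaning:

\begin{verbatim}
def critical_height(e_1x1, e_2x2, h_lo, h_hi, tol=1e-4):
    """Bisect for h* where e_1x1(h*) = e_2x2(h*), assuming the (1x1)
    overlayer is more stable (lower energy) below h* and less stable above."""
    f = lambda h: e_1x1(h) - e_2x2(h)
    assert f(h_lo) < 0.0 < f(h_hi), "bracket must straddle the crossing"
    while h_hi - h_lo > tol:
        mid = 0.5 * (h_lo + h_hi)
        if f(mid) < 0.0:
            h_lo = mid
        else:
            h_hi = mid
    return 0.5 * (h_lo + h_hi)

# Placeholder energy curves (not the actual lattice sums):
e_1x1 = lambda h: 1.0 * h - 1.5
e_2x2 = lambda h: 0.5 * h - 1.0
print(critical_height(e_1x1, e_2x2, 0.2, 2.0))  # -> ~1.0
\end{verbatim}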
It is worth noting that on Mg(001) the attractive lateral interactions are indeed accompanied by an adsorption induced decrease of the work function; however, the latter is not required for attraction to emerge, as evidenced by Figure~\ref{fig:dWf-vs-dEb} in the Supporting Information, which shows the adsorption induced work function change for all currently identified ``attraction cases'' (whereas Figure~\ref{fig:dWf-all} shows work function changes for all considered overlayers). Among the 24 such cases, the work function decreases for only 10 of them. Turning back to Figure~\ref{fig:critical-height}, closer inspection for s- and p-block metals reveals that when the adatom height is below the critical height, the lateral interactions are either attractive or negligible, or the surface reconstructs. There are only a few exceptions, i.e., F on Bi(100), O on Tl(001), and N on Tl(001) and Pb(111). The situation is considerably different for transition metals, because in many cases the adatoms are below the critical height, yet the lateral interactions are repulsive. However, transition metals do not fulfill the first requirement of the simple ionic model, that is, the adatom--surface bonding is not sufficiently ionic, due to a significant covalent contribution.\cite{Peljhan_JPCC113,Baker_JACS130,Migani_JPCB110,Roman_PCCP16} Note that transition metals are rather electronegative, with work-function values typically above 4~eV\cite{Michaelson_JAP48} (see also Figure~\ref{fig:work-function} in the Supporting Information); exceptions are group-3 metals, which display lower work functions, but on their surfaces DFT usually does not predict the lateral interactions to be repulsive. Finally, let us focus in more detail on the cases denoted as ``reconstruction'', where the lower coverage \ensuremath{(2\times2)}\ adatom overlayer is more stable than the high coverage \ensuremath{(1\times1)}\ overlayer. Our analysis indeed reveals that the superior stability of the \ensuremath{(2\times2)}\ overlayer is by and large due to reconstruction, where the metal ions nearest to the adatom move laterally toward it, forming island-like structures on the surface (cf.~Figure~\ref{fig:ReconstructionExample}). For example, O on Na(100) displays a \ensuremath{\Delta_{\Eb}}\ of 1.8~eV. However, if the larger Cl$^-$ ion is adsorbed on Na(100), reconstruction is no longer viable and attractive interactions are found, with a \ensuremath{\Delta_{\Eb}}\ of $-0.2$~eV. The extent to which reconstruction stabilizes the \ensuremath{(2\times2)}\ overlayer of O on Na(100) was estimated by performing a constrained relaxation, where the lateral coordinates of the Na atoms in the topmost layer were constrained to their bulk positions. The resulting \ensuremath{\Delta_{\Eb}}\ decreases from 1.8~eV for the reconstructed structure to 0.2~eV for the constrained structure, implying that reconstruction stabilizes the \ensuremath{(2\times2)}\ overlayer by a considerable 1.6~eV. Notice, however, that even without reconstruction, the \ensuremath{(2\times2)}\ overlayer remains slightly more stable. The residual stability of the \ensuremath{(2\times2)}\ overlayer can be attributed to the large lattice constant of Na, which diminishes the magnitude of electrostatic stabilization (the effect is illustrated in Figure~\ref{fig:ionic_model_predicitions} of the Supporting Information).
Thus the lack of attractive interactions, even when the top layer is constrained, is likely a consequence of the diminished stabilization in combination with other effects not taken into account by the simple ionic model. \section*{CONCLUSION} To summarize, by performing DFT calculations of the adsorption of four different electronegative adatoms on 70 surfaces of 44 elemental metals, we showed that even something as conceptually simple as the adsorption of electronegative adatoms on metal surfaces can lead to unanticipated behavior. Understanding such interactions is important for heterogeneous catalysis and electrochemistry, as they may provide new insights into the initial stages of corrosion and passivation. We identified four possible scenarios for the lateral interactions between electronegative adatoms, some of them unexpected, and explained the reasons why they emerge. Lateral interactions can be: (i) repulsive (this is the expected scenario, but it prevails only on d-block metals); (ii, iii) attractive or negligible (this scenario is predominantly found for p-block metals and Mg; its origin is well explained by our simple ionic model, i.e., attraction is a consequence of predominantly ionic bonding and a low height of adatoms above the surface); or (iv) surface reconstruction of the lower coverage \ensuremath{(2\times2)}\ overlayer provides additional stabilization, making it more stable than the high-coverage \ensuremath{(1\times1)}\ overlayer; this case typically occurs on s-block metals. \section*{ACKNOWLEDGEMENT} This work has been supported by the Slovenian Research Agency (Grant No. P2-0393). \section{Description of the simple ionic model} The ``simple ionic model'' was derived in our previous publication,\cite{Poberznik_JPCC120} and here we briefly explain its essence. The model is based on the electronic structure analysis of the O/Al(111) system,\cite{Poberznik_JPCC120} which reveals that the O--Al bonding is ionic and, furthermore, that the excess electron charge on the adatoms comes mainly from the nearest neighbor metal atoms.\cite{Poberznik_JPCC120} The latter observation is exploited in the simple ionic model, where the adatom acquires the electron charge exclusively from the nearest neighbor metal atoms, such that each neighboring metal atom contributes proportionally, as schematically shown in Figure~\ref{fig:ionic-model-charge}. The simple ionic model therefore consists of an ionic bilayer of adatom-anions/metal-cations and can be described by $N$ ions in the unit-cell at positions $\{\bm{\tau}_i\}_{i=1}^N$ with charges $\{q_i\}_{i=1}^N$. The interaction energy is then obtained by summing the pairwise Coulomb interactions among the ions in the infinite adatom/metal bilayer, i.e.: \begin{align} \label{eq:Eint_general} E_{\rm int} =& \frac{1}{2}\sum_{\ensuremath{\mathbf{R}}=\bm{0}}^{\infty}\sum_{i,j}^{N} \frac{q_iq_j}{|\ensuremath{\mathbf{R}} + \bm{\tau}_j - \bm{\tau}_i|} (1-\delta_{i,j}\delta_{R,0}), \end{align} where \{\ensuremath{\mathbf{R}}\} are the lattice vectors of the two-dimensional lattice. The role of the $(1-\delta_{i,j}\delta_{R,0})$ term is to omit the interaction of an ion with itself ($i=j$ and $R=0$, where $R=|\ensuremath{\mathbf{R}}|$).
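For illustration, the lattice sum of eq~\eqref{eq:Eint_general} can be evaluated by brute force. The Python/NumPy sketch below is our own minimal implementation: it sums all pairs whose lattice vectors lie inside an (arbitrarily chosen) cutoff radius and adds the integral tail correction derived in the next paragraph; the geometry and charges in the example are hypothetical:

\begin{verbatim}
import itertools
import numpy as np

K_E = 14.3996  # e^2/(4*pi*eps0) in eV*angstrom

def e_int(tau, q, a1, a2, r_cut=60.0):
    """Interaction energy per unit cell of a 2D-periodic ion bilayer:
    direct sum over lattice vectors R = m*a1 + n*a2 with |R| < r_cut,
    plus the integral tail correction derived below in the text.
    tau: (N, 3) ionic positions (angstrom); q: (N,) charges (units of e,
    summing to zero); a1, a2: in-plane lattice vectors with z = 0."""
    tau, q = np.asarray(tau, float), np.asarray(q, float)
    a1, a2 = np.asarray(a1, float), np.asarray(a2, float)
    # enough lattice shells for square-ish cells (a conservative bound)
    n_max = int(r_cut / min(np.linalg.norm(a1), np.linalg.norm(a2))) + 1
    direct = 0.0
    for m, n in itertools.product(range(-n_max, n_max + 1), repeat=2):
        R = m * a1 + n * a2
        if np.linalg.norm(R) >= r_cut:
            continue
        for i in range(len(q)):
            for j in range(len(q)):
                if m == 0 and n == 0 and i == j:
                    continue  # the (1 - delta_ij*delta_R0) term
                direct += q[i] * q[j] / np.linalg.norm(R + tau[j] - tau[i])
    area = np.linalg.norm(np.cross(a1, a2))
    dz = tau[:, 2][:, None] - tau[:, 2][None, :]
    tail = -(2.0 * np.pi / area) * np.sum(np.outer(q, q)
                                          * np.sqrt(r_cut**2 + dz**2))
    return 0.5 * K_E * (direct + tail)

# Hypothetical (1x1) overlayer on a square lattice: cation at the origin,
# anion in the hollow site at a height of 1 angstrom.
a = 3.0
tau = [[0.0, 0.0, 0.0], [a / 2, a / 2, 1.0]]
print(e_int(tau, [1.0, -1.0], [a, 0.0, 0.0], [0.0, a, 0.0]))
\end{verbatim}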
The infinite lattice sum in two dimensions, $\sum_{\ensuremath{\mathbf{R}}\ne0}^\infty\left(\cdots\right)$, can be evaluated by explicitly calculating it within the cutoff radius \ensuremath{{R_{\rm cut}}}, whereas beyond \ensuremath{{R_{\rm cut}}}\ it is approximated by an integral, in particular: \begin{align} \label{eq:Eint_int} \nonumber E_{\rm int} \simeq& \ \frac{1}{2}\left(\sum_{\ensuremath{\mathbf{R}}=\bm{0}}^{|\ensuremath{\mathbf{R}}|<\ensuremath{{R_{\rm cut}}}}\sum_{i,j}^{N} \frac{q_iq_j}{|\ensuremath{\mathbf{R}} + \bm{\tau}_j - \bm{\tau}_i|} (1-\delta_{i,j}\delta_{R,0})\right.\\ \nonumber &\left. \quad\quad\quad +\ \frac{2\pi}{A}\int_{\ensuremath{{R_{\rm cut}}}}^\infty \sum_{i,j}^{N}\frac{Rq_iq_j}{\sqrt{R^2 + (z_j - z_i)^2}}{\rm d}R\right)\\ \nonumber =&\ \frac{1}{2}\left(\sum_{\ensuremath{\mathbf{R}}=\bm{0}}^{|\ensuremath{\mathbf{R}}|<\ensuremath{{R_{\rm cut}}}}\sum_{i,j}^{N} \frac{q_iq_j}{|\ensuremath{\mathbf{R}} + \bm{\tau}_j - \bm{\tau}_i|} (1-\delta_{i,j}\delta_{R,0})\right.\\ &\left. \quad\quad\quad -\ \frac{2\pi}{A}\sum_{i,j}^{N} q_iq_j\sqrt{R_{\rm cut}^2+ (z_j - z_i)^2}\right), \end{align} where $A$ is the area of the unit-cell and $z_i$ is the $\hat{z}$-coordinate of the atomic position $\bm{\tau}_i$, i.e., $\bm{\tau}_i = (x_i, y_i, z_i)$. \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{Figs/ionic-model-charge-distribution.pdf} \caption{Schematic of the electron charge distribution among adatoms and nearest neighbor metal atoms in the simple ionic model. (a) The charge on adatoms is set to \ensuremath{q_{\rm ad}}\ (labeled as ``$-$''), whereas the counter-charge is distributed proportionally to the nearest neighbor metal atoms, i.e., for an adatom with $n$ metal neighbors, each of them donates $\ensuremath{q_{\rm ad}}/n$ electrons to the adatom. If a metal atom has no adatom neighbors then it remains charge-neutral, but if it has $m$ adatom neighbors then it donates $\ensuremath{q_{\rm ad}}/n$ electrons to each, thus in total $m\ensuremath{q_{\rm ad}}/n$ to all of them. (b) Charge distribution for \ensuremath{(2\times2)}\ and \ensuremath{(1\times1)}\ adatom overlayers on a square lattice of metal atoms, which is compatible with bcc(100), fcc(100), and sc(100). Unit-cells are indicated in red.} \label{fig:ionic-model-charge} \end{figure} Note that, due to charge neutrality, the larger \ensuremath{{R_{\rm cut}}}\ is, the smaller the last sum ($-\frac{2\pi}{A}\sum_{i,j}^{N}\cdots$) becomes, owing to the cancellation between its terms. In particular, the last sum scales as $(2\pi/A)\mu_z^2R_{\rm cut}^{-1}$, where $\mu_z$ is the $\hat{z}$-component of the dipole of the ions in the unit-cell, $\mu_z = \sum_{i=1}^Nz_iq_i$. Hence: \begin{equation} \label{eq:charge-dipole} -\sum_{i,j}^{N} q_iq_j\sqrt{R_{\rm cut}^2+ (z_j - z_i)^2} \simeq \mu_z^2R_{\rm cut}^{-1}. \end{equation} \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{Figs/el_atoms_metallic_surfaces/primitive-model-predicitions.pdf} \caption{The prediction of the simple ionic model for the \ensuremath{(1\times1)}\ and \ensuremath{(2\times2)}\ adatom overlayers above a square lattice of metal atoms, which is compatible with the bcc(100), fcc(100), and sc(100) surfaces. Notice that below the critical height the high-coverage \ensuremath{(1\times1)}\ overlayer is more stable than the lower-coverage \ensuremath{(2\times2)}\ overlayer. (a) The dependence of the interaction energy and the critical height on the lattice parameter for $a=1.6$~\AA\ and $a=5.3$~\AA.
Notice that the \ensuremath{(1\times1)}\ over \ensuremath{(2\times2)}\ preference below the critical height decreases with increasing lattice parameter, while concomitantly the critical height becomes larger. (b) The dependence of the interaction energy on the charge of the adatom for $q=-0.5$ and $q=-1.0$. Because the energy depends quadratically on the charge, the \ensuremath{(1\times1)}\ over \ensuremath{(2\times2)}\ preference below the critical height decreases with a decrease in the charge magnitude.} \label{fig:ionic_model_predicitions} \end{figure} Figure~\ref{fig:ionic_model_predicitions} depicts the results of the simple ionic model for the \ensuremath{(1\times1)}\ and \ensuremath{(2\times2)}\ adatom overlayers over a square lattice of metal atoms, with the interaction energy shown as a function of adatom height above the surface. The figure illustrates the dependence of the interaction energy on the lattice parameter (Figure~\ref{fig:ionic_model_predicitions}a) and on the adatom charge (Figure~\ref{fig:ionic_model_predicitions}b). The most important result of the ionic model is that there exists a critical adatom height below which the high-coverage \ensuremath{(1\times1)}\ overlayer is more stable than the low coverage overlayers (currently only the \ensuremath{(2\times2)}\ overlayer is considered for low coverage, but in our previous work\cite{Poberznik_JPCC120} we considered even lower coverages). This effect is referred to as ``stabilization'' in the following. The critical height depends on the lattice parameter for geometric reasons, i.e., the larger the lattice parameter, the larger the critical height. Concomitantly, the extent of stabilization decreases as the lattice parameter increases (Figure~\ref{fig:ionic_model_predicitions}a). As for the dependence on the adatom charge, the stabilization increases with the magnitude of the adatom charge (Figure~\ref{fig:ionic_model_predicitions}b), because the energy depends quadratically on the charge. Some of these dependencies can be easily understood from eq~\eqref{eq:Eint_general}. \section{Definitions} \subsection{Binding energy} The DFT binding energies were calculated as: \begin{equation} \label{eq:Eb} \ensuremath{E_{\rm b}} = \Esub{\mathit{X}/slab} - \Esub{X} - \Esub{slab}, \end{equation} where $X$ stands for the adatom (either N, O, F, or Cl) and $\Esub{\mathit{X}/slab}$, $\Esub{X}$, and $\Esub{slab}$ are the total energies of the adatom-slab system, the standalone adatom, and the bare slab, respectively. The binding energies for bcc, fcc, and hcp adatom/metal systems are plotted in Figures~\ref{fig:Eb-and-ReconDegree-bcc-100} to~\ref{fig:Eb-and-ReconDegree-hcp-001} and tabulated in Tables~\ref{tab:Eb-bcc-100} to \ref{tab:Eb-hcp-001-hcpsite}. The \ensuremath{E_{\rm b}}\ values of sc (simple-cubic) systems are given in Table~\ref{tab:Eb-sc-100}, but they are not plotted, because only Bi is ``considered'' to crystallize in this lattice type. Note that Bi and some other investigated metals crystallize in more ``exotic'' lattices; however, in order to simplify the calculations, they were modeled by one of the bcc, fcc, hcp, or sc lattice types, as described in the main text. \begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{Figs/el_atoms_metallic_surfaces/fraction-of-ideal-area-scheme.pdf} \caption{A schematic definition of the reconstruction quotient, $\ensuremath{f_{\rm rec}} = A_{\mathrm{R}}/A_{\ensuremath{(1\times1)}}$.
$A_{\ensuremath{(1\times1)}}$ is the area of the \ensuremath{(1\times1)}\ unit-cell (left panel), whereas $A_{\mathrm{R}}$ is the area enclosed by the four metal atoms nearest to the adatom forming a \ensuremath{(2\times2)}\ overlayer (right panel). The reconstruction quotient was used to estimate the degree of reconstruction; in particular, we used $\ensuremath{f_{\rm rec}} \leq 0.9$ as the criterion for reconstruction.} \label{fig:ReconDegree} \end{figure} \subsection{Reconstruction quotient} In addition to attractive and repulsive lateral interactions, we also identified another possibility that can occur when the two conditions of the simple ionic model for attractive lateral interactions are met (ionic adatom--surface bonding and a low height of adatoms): surface reconstruction stabilizes the \ensuremath{(2\times2)}\ overlayer and makes it more stable than the \ensuremath{(1\times1)}\ overlayer. In order to quantify the extent of reconstruction for each adatom/metal pair, we defined the reconstruction quotient (\ensuremath{f_{\rm rec}}) by eq~\eqref{eq:RQ} in the main text. The way the reconstruction quotient is calculated is schematically illustrated in Figure~\ref{fig:ReconDegree}. We defined the surface to be reconstructed only when \ensuremath{f_{\rm rec}}\ is significantly below 1; we arbitrarily set $\ensuremath{f_{\rm rec}} \leq 0.9$ as the criterion for reconstruction. We should comment on how the reconstruction quotient was calculated for the close-packed bcc(110), fcc(111), and hcp(001) surfaces. In particular, on bcc(110) the adatoms were found in two distinct sites, i.e., three-fold and four-fold hollow sites (see Figure~\ref{fig:bcc-110-sites}), whereas on fcc(111) and hcp(001) both fcc- and hcp-hollow sites are three-fold coordinated. For all the three-fold hollow sites, \ensuremath{f_{\rm rec}}\ was estimated by considering the area spanned by the three metal cations nearest to the adatom, whereas for the bcc(110) four-fold hollow site the area spanned by the four nearest metal cations was taken into account. The obtained values of the reconstruction quotient are plotted on the right-hand side of Figures~\ref{fig:Eb-and-ReconDegree-bcc-100} to \ref{fig:Eb-and-ReconDegree-hcp-001}. The plots clearly show that the extent of reconstruction is the greatest for N and O adatoms on bcc(100) surfaces, where alkali metals stand out the most. In the case of fcc metals, reconstruction occurs for N and O on the surfaces of Ca and Sr. According to the \ensuremath{f_{\rm rec}}\ value, reconstruction also occurs for N on Sn(100); however, since the \ensuremath{(1\times1)}\ overlayer is still more stable, this case is labeled as ``attraction'' in the main text. \subsection{Unoccupied surface area} As an approximate {\it ad hoc} criterion for identifying the adatom/metal pairs for which reconstruction can be expected, we defined a quantity termed {\it unoccupied surface area}, whose calculation is schematically illustrated in Figure~\ref{fig:VacantArea}. \section{Stability of adsorption sites} The adatoms were mainly adsorbed in hollow sites; for fcc(111) predominantly the fcc-hollow site and for hcp(001) both fcc- and hcp-hollow sites were considered. However, in a few specific cases the hollow sites are either unstable or less stable than bridge or top sites. The cases where non-hollow sites (or also hcp sites for fcc(111)) were found to be the most stable are listed in Table~\ref{tab:exceptions}.
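In bookkeeping terms, identifying these exceptions simply amounts to selecting, for each adatom/metal pair, the site with the most negative \ensuremath{E_{\rm b}}; a minimal Python sketch with hypothetical values:

\begin{verbatim}
# Hypothetical per-site binding energies (eV) for one adatom/metal pair;
# the most negative Eb marks the most stable site.
sites = {"fcc-hollow": -3.10, "hcp-hollow": -3.05, "bridge": -2.95, "top": -3.20}

most_stable = min(sites, key=sites.get)
print(most_stable)                           # -> "top"
print(not most_stable.endswith("hollow"))    # -> True: a non-hollow exception
\end{verbatim}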
We begin our analysis by noticing that our results are in line with those reported by Zhu et al.\cite{Zhu_JESC163} for halogen adatoms (see the comparison in Figure~\ref{fig:Eb-current-v-Zhu}). Namely, both F and Cl prefer the top site on Al(111) at the lower coverage. The top site is also the preferred site for F on Ir(111) and Pt(111), whereas for Cl on Ir(111) both fcc and top sites display similar stabilities. Additionally, on Ca(111) and Sr(111) the hcp site is found to be more stable than the fcc site for O, F, and Cl, irrespective of the coverage. Note that most of these exceptions have been reported by other authors as well.\cite{Roman_PCCP16} As for bcc metals, the top site is preferred for F on W(110). However, we find that F prefers the hollow site on Mo(110) and not the top site as reported by Zhu et al.\cite{Zhu_JESC163} The site preference for F and Cl on hcp metals is also reproduced; the only difference is that, according to our calculations, Cl prefers the fcc site on Tc(001), whereas Zhu et al.\@{} reported that the hcp site is more stable. Both sets of calculations, however, show that the two sites have very similar stabilities. The calculated \ensuremath{E_{\rm b}}\ values for \ensuremath{(2\times2)}\ layers of F and Cl are compared to those reported by Zhu et al.\@{} in Figure~\ref{fig:Eb-current-v-Zhu}. For Cl the average difference between the two sets of \ensuremath{E_{\rm b}}\ values is $-0.10\pm0.09$~eV, whereas for F the average difference is $0.19\pm0.15$~eV. \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{Figs/bcc110-sites.pdf} \caption{Three- and four-fold hollow sites on the bcc(110) surfaces. As an example, the \ensuremath{(2\times2)}\ overlayer of O on Fe(110) displays three-fold, whereas the \ensuremath{(2\times2)}\ overlayer of N on Fe(110) displays four-fold coordination.} \label{fig:bcc-110-sites} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{Figs/el_atoms_metallic_surfaces/vacant_surface_area.pdf} \caption[Scheme of unoccupied surface area]{The {\it unoccupied} surface area is calculated by subtracting the area occupied by the metal cation from the area of the \ensuremath{(1\times1)}\ unit-cell. For the radius of the metal cation, \ensuremath{R_{\mathrm{c}}}, the average value of the cationic radii for all coordination numbers of the lowest cationic oxidation state is used. Radii were taken from Shannon and Prewitt.\cite{Shannon_ACB26}} \label{fig:VacantArea} \end{figure} In addition to the already documented site anomalies, we found that F prefers to adsorb on the top site of Os(001) and the bridge site of Ru(001), whereas the top site is the most stable for Cl on Fe(110), for Cl on Mn(110) at high coverage, and for F on Bi(100). For O and F on Al(100) and Fe(100), the bridge site is the more stable site at low coverage, whereas at high coverage the two sites generally display similar stabilities. For Cl on Al(100) and Fe(100), the bridge site is favored at both investigated coverages. Finally, as mentioned above, adatoms adsorbed in hollow sites of bcc(110) surfaces display two distinct configurations: one where the adatom is three-fold coordinated and another where it is four-fold coordinated. Adatoms adsorb predominantly in the three-fold hollow site; the exceptions are N on Li, Na, K, V, Cr, and Fe; O on Li and Na; and Cl on Li, Na, and Fe. An example of the three-fold and four-fold hollow sites on bcc(110) is shown in Figure~\ref{fig:bcc-110-sites}.
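Because \ensuremath{f_{\rm rec}}\ must be evaluated over either three or four nearest cations, depending on the hollow site, a vertex-count-agnostic polygon area is convenient in practice. The following minimal Python sketch (our own illustration, with made-up coordinates) uses the shoelace formula:

\begin{verbatim}
import numpy as np

def polygon_area(xy):
    """Area of a planar polygon from ordered vertices (shoelace formula)."""
    xy = np.asarray(xy, float)
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(np.roll(x, -1), y))

def f_rec(nearest_cations_xy, area_1x1):
    """Reconstruction quotient f_rec = A_R / A_(1x1); the vertices are the
    lateral (x, y) positions of the 3 or 4 cations nearest to the adatom,
    ordered around it."""
    return polygon_area(nearest_cations_xy) / area_1x1

# Hypothetical four-fold site on a (100) surface with a = 4.0 angstrom:
a = 4.0
ideal = [(0.0, 0.0), (a, 0.0), (a, a), (0.0, a)]
moved = [(0.5, 0.5), (3.5, 0.5), (3.5, 3.5), (0.5, 3.5)]
print(f_rec(ideal, a * a))  # 1.0    -> unreconstructed
print(f_rec(moved, a * a))  # 0.5625 -> reconstructed (f_rec <= 0.9)
\end{verbatim}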
\section{Adsorption induced work function changes} In some cases, such as for N, O, and F adatoms on Mg(001), the attractive lateral interactions between adatoms are accompanied by an adsorption induced decrease of the work function.\cite{Cheng_PRL113} It should be noted, however, that a decrease of the work function is not required for attraction to emerge, as evidenced by Figure~\ref{fig:dWf-vs-dEb}, which shows the adsorption induced work function change for all the currently identified ``attraction cases''. Among the 24 such identified cases, the work function decreases for only 10 of them. Figure~\ref{fig:dWf-all} plots the adsorption induced work function changes for all the considered adatom overlayers, and Figure~\ref{fig:work-function} shows the experimental work functions for either close-packed or polycrystalline surfaces of the 44 metals considered in this study.