\section{INTRODUCTION} The integration of powered two-wheelers (PTWs) into intelligent transport systems, as well as the development of PTW-specific innovative transport solutions, depends on understanding their mobility behavior and interaction with other road users. However, PTWs create peculiar traffic flow effects that cannot be reproduced by currently available models. \par The flow of vehicles can be modeled at different granularities. Macroscopic models study collective behavior, whereas microscopic models represent individual vehicle mobility. Mesoscopic models exhibit both macroscopic and microscopic behaviors, combining the aggregate-level modeling of the macroscopic approach with specific individual vehicle characteristics such as probabilistic lane changing and turning ratios. The choice of modeling approach depends on factors such as the required level of detail, accuracy and efficiency. Macroscopic modeling is an efficient and preferable approach for studying analytical flow properties, since it allows a closed-form relationship between flow variables to be established. \par Macroscopic traffic flow models most commonly apply the kinematic wave theory developed by Lighthill, Whitham and Richards (LWR) \cite{c6,c7}. In order to integrate traffic heterogeneity (vehicle and driver), the LWR model is extended to a multi-class flow model. The variation among vehicle classes is expressed in terms of maximum speed, perception difference with respect to area/space occupancy, and total/effective density. Multi-class LWR models are usually solved in Eulerian coordinates. In the Eulerian formulation, the evolution of flow properties such as density, flow and speed is evaluated at fixed points. 
However, recent studies show that the Lagrangian representation, which tracks the evolution of flow properties of a vehicle or platoon of vehicles, offers several advantages over the Eulerian representation, the main benefits being numerical accuracy \cite{c2} and the flexibility to easily incorporate traffic phenomena (e.g. capacity drop \cite{c15}) and vehicle characteristics.\par In Lagrangian systems, the LWR model is formulated in the $(N,t)$ coordinate system \cite{c2}. The cumulative vehicle count $(N)$ is found to be more suitable for certain traffic flow analyses \cite{c5,c10} and also makes it easier to establish a connection between follow-the-leader and LWR models \cite{c3}. For mixed traffic of cars and trucks, a Lagrangian formulation is given in \cite{c1}, which assigns each vehicle class a specific fundamental diagram, jam density and wave speed. However, the interaction between vehicle classes is disregarded. Similarly, in \cite{c4} another Lagrangian representation for the multi-class LWR model is proposed. Nonetheless, these models are intended to characterize mixed traffic of cars and trucks. Their discretization schemes fall short of correctly describing multi-class flows with characteristics different from mixed car-truck flow, for example mixed flow of cars and two-wheelers, and thus require further modification. \par To this end, in this paper, we propose a Lagrangian formulation for traffic flow consisting of cars and two-wheelers (PTWs). The derivation follows the Eulerian multi-class LWR model in \cite{c8}, where the fundamental diagram and its parameters are defined uniquely for each class and are also adapted to the traffic condition. We provide a discretization method applicable to solving any type of multi-class LWR model, including mixed flow of cars and PTWs. Moreover, we propose an approach to reproduce a follow-the-leader type behavior using the Lagrangian representation. 
In car traffic, there is an ordered type of flow where the $n^{th}$ vehicle (follower) follows the ${(n-1)}^{th}$ vehicle. PTWs usually do not respect such ordered flow. To accurately represent the abreast movement of two-wheelers in the Lagrangian representation, we introduce sub-lanes. We test the equivalence of the Lagrangian and Eulerian representations. From an application standpoint, the Lagrangian representation is convenient for analyzing vehicle-specific data such as trajectories and travel times. Using the spacing and speed data collected from probe vehicles together with the traffic flow model formulated in Lagrangian coordinates, the traffic state can be estimated accurately \cite{c11}. Moreover, in hybrid traffic flow models, the Lagrangian model is used in conjunction with Eulerian representations \cite{c13}. \par The rest of the paper is organized as follows. First, formulations for traffic flow consisting of PTWs and cars in the Eulerian and Lagrangian approaches are discussed. Thereafter, a discretization technique is presented. Following the numerical examples and discussion, we conclude with closing remarks. \par \section{Lagrangian Formulation of Multi-class LWR} We first introduce the Eulerian representation of the mixed flow of cars and PTWs, and then we show the transformation to Lagrangian coordinates. Multi-class LWR models distinguish the characteristics of each vehicle class. Different methods have been applied to accurately represent the distinctive features exhibited, depending on the vehicle types involved. In this study, our interest is in modeling mixed car and PTW flow. Thus, the model developed in \cite{c8} is used as a reference model. The model is based on free space distribution, wherein the differences in vehicle size, lateral and longitudinal gap acceptance and maximum speed are the factors that differentiate vehicle classes. 
The continuum equation, which holds for each class, is written as \begin{equation} \frac{\partial \rho_i(x,t)}{\partial t} + \frac{\partial q_i(x,t)}{\partial x}=0, \qquad \qquad i=1,2, \end{equation} where $\rho_i$ and $q_i$ denote the density and flow of class $i$, respectively, over space $x$ and time $t$. Class-specific flow, speed and density are related by the equation \begin{equation} \label{eq:MLWRq} q_i(x,t)=\rho_i(x,t) v_i(x,t),\qquad \qquad i=1,2. \end{equation} The speed $v_i$ for the individual vehicle class $i$ is a function of the densities of both classes and is derived based on the assumption that the flow of vehicles is dictated by the available free space \cite{c8}: \begin{equation}\label{eq:speed_fn} v_i=V_i(\rho_1,\rho_2)= v_i^f \left(\int_{r^c_i}^\infty f(l(\rho_1,\rho_2)) \, \mathrm{d}l\right) \end{equation} where $v_i^f, r^c_i$ and $f(l(\rho_1,\rho_2))$ stand for the maximum speed, critical lateral gap and the probability density function of the free space distribution, respectively. \par The Eulerian representation describes the evolution of the traffic state variables at a fixed point in space (Figure \ref{fig:euler}), whereas the Lagrangian view deals with the flow properties observed along vehicle trajectories (Figure \ref{fig:lag}). \par \begin{figure}[thpb] \centering \captionsetup{justification=centering} \begin{subfigure}{0.75\linewidth} \includegraphics[width=\textwidth]{figs/Eulerian_Diagram} \caption{Eulerian fixed frame} \label{fig:euler} \end{subfigure} \begin{subfigure}{0.75\linewidth} \includegraphics[width=\textwidth]{figs/Lagrangian_Diagram} \caption{Lagrangian moving frame} \label{fig:lag} \end{subfigure} \caption{A schematic of the Lagrangian and Eulerian approaches} \end{figure} The mathematical form of the conservation law in Lagrangian coordinates depends on the chosen coordinate system. Here, we take the $(n,t)$ coordinate system. Moreover, there are two methods that are used to represent multi-class flows in Lagrangian coordinates. 
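Before turning to the two methods, a speed function of the form of Eq. (\ref{eq:speed_fn}) can be sketched numerically. The sketch below is illustrative only and is not the model of \cite{c8}: it assumes an exponential free-space distribution, so that the integral becomes $P(\mathrm{gap} > r^c_i) = e^{-r^c_i/\bar{l}}$, and the vehicle lengths and the mean-gap formula are hypothetical stand-ins.

```python
import math

# Illustrative sketch of Eq. (3): v_i = v_i^f * P(free gap > r_i^c).
# Assumption (not from [c8]): free space is exponentially distributed,
# so P(gap > r) = exp(-r / lbar), with lbar the mean free space.
# Vehicle lengths len1 (PTW) and len2 (car) are hypothetical values.

def mean_free_space(rho1, rho2, len1=2.0, len2=5.0):
    """Average longitudinal free space per vehicle: the fraction of a
    metre of road not occupied by vehicles, shared among all vehicles."""
    total = rho1 + rho2
    if total <= 0.0:
        return math.inf
    occupied = rho1 * len1 + rho2 * len2
    return max(0.0, 1.0 - occupied) / total

def class_speed(rho1, rho2, v_free, r_crit, len1=2.0, len2=5.0):
    """Class speed: maximum speed times the probability that the free
    gap exceeds the critical gap r_crit (the shape of Eq. (3))."""
    lbar = mean_free_space(rho1, rho2, len1, len2)
    if lbar == 0.0:
        return 0.0          # jam occupancy: no free space left
    if math.isinf(lbar):
        return v_free       # empty road: free-flow speed
    return v_free * math.exp(-r_crit / lbar)
```

With this stand-in, the speed decreases monotonically with the density of either class and vanishes at full occupancy, reproducing the qualitative dependence on both densities that Eq. (\ref{eq:speed_fn}) encodes.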
In the first method, there are separate Lagrangian coordinates for each vehicle class (method 1). In contrast, in the second method (method 2) there is one Lagrangian reference frame that moves with a selected vehicle class. Thus, for the other vehicle classes the conservation equation is derived based on this Lagrangian reference frame. In situations where tracking of each vehicle is needed (e.g. for class-specific controls \cite{c9}), method 1 is suitable. Otherwise, method 2 is a computationally efficient approach, for instance to investigate the impact of PTWs on car flow, or vice versa. \subsection{Method 1} By taking spacing as a state variable, the conservation equation in the $(n,t)$ coordinate system is written as \cite{c2}: \begin{equation}\label{mt1_a} \frac{\partial s_i(n,t)}{\partial t} + \frac{\partial v_i(n,t)}{\partial n}=0 \qquad \qquad i=1,2 \end{equation} \begin{equation} s=\frac{-\partial x}{\partial n},~~\rho=\frac{-\partial n}{\partial x}=1/s \end{equation} where $s$ and $v$ denote, respectively, the average spacing and the speed associated with a group of vehicles labeled $n$. Vehicle groups are labeled in time order. The conservation equation applies to each vehicle class. Moreover, the grouping of vehicles and the labeling of vehicle groups are done separately for each vehicle class. This representation also assumes that vehicles in a group neither disband nor merge with another group. The class-specific speed-spacing fundamental relation has the following form: \begin{equation}\label{mt1_b} v_i=V(s_1,s_2) \end{equation} The speed-spacing fundamental diagram (FD) for PTWs and cars is given in Figure \ref{fig:fd}. As illustrated in the figure, the fundamental diagram for each class changes with the spacing/density of the other vehicle class. 
\begin{figure}[thpb] \captionsetup{justification=centering} \begin{subfigure}{0.45\linewidth} \includegraphics[width=\textwidth]{figs/car_speed} \caption{} \label{fig:fd1} \end{subfigure} \begin{subfigure}{0.45\linewidth} \includegraphics[width=\textwidth]{figs/ptw_speed} \caption{} \label{fig:fd2} \end{subfigure} \caption{Speed-spacing fundamental diagram (a) for cars (b) for PTWs, $V_{1,max}=V_{2,max}=20$ m/s } \label{fig:fd} \end{figure} \subsection{Method 2} In the above multi-class Lagrangian conservation equation, each vehicle class has its own labeling (cumulative vehicle count). \cite{c4} proposed an alternative formulation, where the Lagrangian coordinates move with a reference vehicle class and only vehicles of this class are counted. In other words, the evolution of the traffic state variables of the carrier vehicle class, and of the other vehicle classes carried inside it, is tracked. \par The motion of the reference (carrier) class is governed by: \begin{equation}\label{mt2_a} \frac{\partial s_r(n,t)}{\partial t} + \frac{\partial v_r(n,t)}{\partial n}=0. \end{equation} For the remaining vehicle classes: \begin{equation}\label{mt2_b} \frac{\partial (s_r/s_i)}{\partial t} + \frac{\partial \left((v_r-v_i)/s_i\right)}{\partial n}=0, \end{equation} or equivalently it can be formulated in non-conservative form \begin{equation*} \frac{\partial s_i}{\partial t} + \frac{s_i}{s_r} \frac{\partial v_i}{\partial n}- \frac{v_i-v_r}{s_r}\frac{\partial s_i}{\partial n}=0 \end{equation*} where the subscripts $r$ and $i$ refer to the reference vehicle class and the other vehicle classes, respectively.\par In the conservation equation given above, the traffic state variables are spacing ($s$) and speed ($v$). When density ($\rho$) is used instead of spacing, the equation takes the following conservative form. 
\begin{equation} \frac{\partial (1/\rho_1)}{\partial t} + \frac{\partial v_1}{\partial n}=0 \end{equation} \begin{equation} \frac{\partial (\rho_i/\rho_1)}{\partial t}+ \frac{\partial (\rho_i (v_1-v_i))}{\partial n}=0 \end{equation} where $\rho_1 > 0$ always. \section{Discretization scheme} We apply the following numerical scheme to find the solution of Eqs. (\ref{mt1_a})-(\ref{mt1_b}) (method 1). The $n$ domain is subdivided into $\Delta n$-sized clusters of vehicles (cells). An approximation of the average spacing $s$ over each cluster is updated at each time step $\Delta t$. Applying the Godunov scheme, the numerical solution of the conservation equation is approximated by \begin{equation} \label{eq:m1} s_i^{t+\Delta t}=s_i^{t} -\frac{\Delta t}{\Delta n}(V_{i+1/2}-V_{i-1/2}) \end{equation} where $V_{i+1/2}$ and $V_{i-1/2}$ are the fluxes (speeds) at the boundaries of cell $i$: \begin{equation*} V_{i+1/2}=V(s_{1,i},s_{2,i},...),~ V_{i-1/2}=V(s_{1,i-1},s_{2,i-1},...) \end{equation*} Therefore, Eq. (\ref{eq:m1}) becomes \begin{equation} s_i^{t+\Delta t}=s_i^{t} -\frac{\Delta t}{\Delta n}\left(V(s_{1,i},...)-V(s_{1,i-1},...)\right) \end{equation} which is similar to the direct difference approximation of the conservation equation. To obtain a stable solution, $\Delta t$ should be restricted by the Courant-Friedrichs-Lewy (CFL) condition, i.e. \begin{equation*} \Delta t \leq \frac{\Delta n}{\max(\lambda)} \end{equation*} where $\lambda$ is the wave speed. Following the definition of the flux (speed) at the boundary, the trajectory (location) $X$ \cite{c1} of each cluster can be updated using \begin{equation} X(i,t+\Delta t)= X(i,t) + \Delta t \cdot V(s_{1,i},s_{2,i},...) \end{equation} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figs/nt_double} \caption{n-t domain discretization, separate coordinates for each vehicle class} \label{fig:vc_disc} \end{figure} For each vehicle class, clusters do not overlap each other. 
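As a minimal illustrative sketch (not the authors' implementation), one Godunov step of Eq. (\ref{eq:m1}) together with the trajectory update, plus the cross-class spacing evaluation via the overlap integral described in the next paragraph, can be written as follows; the fundamental-diagram callable and all numeric inputs are hypothetical.

```python
import numpy as np

def godunov_step(s, X, V, dt, dn):
    """One Godunov update of the spacing plus the trajectory update
    for one vehicle class. Cluster 0 is the leading cluster.
    s : average spacing per cluster, X : cluster positions,
    V : speed-spacing fundamental diagram (array -> array)."""
    v = V(s)
    s_new = s.copy()
    # s_i^{t+dt} = s_i^t - dt/dn * (V(s_i) - V(s_{i-1})): the boundary
    # speed V_{i-1/2} is taken from the cluster ahead (upwind)
    s_new[1:] = s[1:] - dt / dn * (v[1:] - v[:-1])
    # leading cluster keeps its spacing (free downstream boundary)
    X_new = X + dt * v  # trajectory update
    return s_new, X_new

def vehicles_between(x_lo, x_hi, edges_c, s_c):
    """Integral of 1/s_c(x) over [x_lo, x_hi]: the number of class-c
    vehicles in that span, with s_c piecewise constant per cluster.
    edges_c : descending class-c cluster boundaries (len(s_c)+1)."""
    n = 0.0
    for k, spacing in enumerate(s_c):
        overlap = max(0.0, min(x_hi, edges_c[k]) - max(x_lo, edges_c[k + 1]))
        n += overlap / spacing
    return n

def cross_spacing(x_lo, x_hi, dn_j, s_j, edges_c, s_c):
    """Average spacing of class c seen inside a class-j cluster of
    size dn_j and spacing s_j spanning [x_lo, x_hi]."""
    n_c = vehicles_between(x_lo, x_hi, edges_c, s_c)
    return float("inf") if n_c == 0.0 else dn_j * s_j / n_c
```

The time step passed to `godunov_step` must respect the CFL restriction $\Delta t \leq \Delta n / \max(\lambda)$ for the chosen fundamental diagram.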
However, clusters of different vehicle classes may overlap or occupy the same position. For example, in Fig. \ref{fig:vc_disc} the first cluster of vehicle class $1$ overlaps with two clusters of the other vehicle class. To compute $V(s_{1,i},s_{2,i})$, we need to approximate the $s_{2,i}$ value in cluster $i$ of vehicle class $1$: \begin{equation*} s_{2,i}^{(1)}=\frac{\Delta n_1 s_{1,i}}{\int_{X(i)}^{X(i-1)}\frac{1}{s_2(x)}dx} \end{equation*} where $s_2(x)$ is a function describing the average spacing $s$ of class $2$ as a function of location $x$. For the general case, \begin{equation} s_{c,i}^{(j)}=\frac{\Delta n_j s_{j,i}}{\int_{X(i)}^{X(i-1)}\frac{1}{s_c(x)}dx} ~~ c=1,2,... \end{equation} where $j$ and $c$ denote, respectively, the vehicle class cluster $i$ belongs to and the other vehicle classes. For $c=j$, the integral reduces to $\Delta n_j$, thus $s_{j,i}^{(j)}=s_{j,i}$.\par The discretization method introduced above is for method 1, where each vehicle class is counted and grouped separately. However, there is an alternative representation (method 2) as shown in Eq. (7)-(8). In this case, vehicles of the reference class are clustered into $\Delta n$-sized groups. Then, the average spacing $s$ of each vehicle class over the clusters of the reference class is updated at each time step $\Delta t$.\par For the reference class ($r$) the average spacing is updated following Eq. (11), and the trajectory is updated according to Eq. (13). The average spacing of the remaining vehicle classes is updated according to: \begin{equation} {\left(\frac{s_{r,i}}{s_{c,i}}\right)}^{t+\Delta t}={\left(\frac{s_{r,i}}{s_{c,i}}\right)}^{t} -\frac{\Delta t}{\Delta n}(V_{c,i+1/2}-V_{c,i-1/2}) \end{equation} where $V_{c,i\pm 1/2}$ are the fluxes (speeds) at the cell boundaries. When the speed of the reference class is always higher than that of the remaining classes $(v_r > v_c)$, the direction of the fluxes is to the left. 
Thus, \begin{IEEEeqnarray}{rCl}\label{eq:Ldir_flux} V_{c,i+1/2}=\frac{v_{r,i}-v_{c,i}}{s_{c,i}} \IEEEyessubnumber\\ V_{c,i-1/2}=\frac{v_{r,i-1}-v_{c,i-1}}{s_{c,i-1}} \IEEEyessubnumber \end{IEEEeqnarray} On the other hand, if $(v_r < v_c)$, the direction of the fluxes is to the right (see Fig. \ref{fig:fluxdir}). This suggests that the fluxes should be defined as \begin{IEEEeqnarray}{rCl} \label{eq:Rdir_flux} V_{c,i+1/2}=\frac{v_{r,i+1}-v_{c,i+1}}{s_{c,i+1}} \IEEEyessubnumber\\ V_{c,i-1/2}=\frac{v_{r,i}-v_{c,i}}{s_{c,i}} \IEEEyessubnumber \end{IEEEeqnarray} \begin{figure}[!thpb] \centering \includegraphics[width=3in]{figs/flux_direction} \caption{Direction of fluxes through the edges of the cluster} \label{fig:fluxdir} \end{figure} However, the flux definition in Eq. (\ref{eq:Rdir_flux}) is restricted to the situation where the fluxes through the edges are non-zero, i.e. $v_{c,i+1}>v_{r,i}$ and $v_{c,i}>v_{r,i-1}$. \par \begin{figure}[!thpb] \captionsetup{justification=centering} \begin{subfigure}{0.45\linewidth} \includegraphics[width=\textwidth]{figs/speedv2maxgr.png} \caption{} \label{fig:speeda} \end{subfigure} \begin{subfigure}{0.45\linewidth} \includegraphics[width=\textwidth]{figs/speedv1maxgr.png} \caption{} \label{fig:speedb} \end{subfigure} \caption{Speed-density relation when (a) the free-flow speed of cars is greater than that of PTWs, (b) the free-flow speed of PTWs is greater than that of cars; density of PTWs $\rho_1=0.2$ veh/m} \end{figure} For traffic flow consisting of PTWs and cars, if the reference class is PTWs and PTWs have a higher free-flow speed than cars (Fig. \ref{fig:speedb}), the flux definition in Eq. (\ref{eq:Ldir_flux}) applies. Nonetheless, if the free-flow speed of cars is higher than that of PTWs (Fig. \ref{fig:speeda}), then whichever class is the reference class, both conditions, $v_r>v_c$ and $v_c>v_r$, occur in the domain. For this reason, we give a general definition for the fluxes which applies irrespective of the order of the speeds. 
\par If $v_{r,i}>v_{c,i}$, \begin{IEEEeqnarray}{lCr} V_{i+1/2}=\frac{v_{r,i}-v_{c,i}}{s_{c,i}} \IEEEyessubnumber\\ V_{i-1/2}=\frac{\max(0,v_{r,i-1}-v_{c,i-1})}{v_{r,i-1}-v_{c,i-1}}\frac{v_{r,i-1}-v_{c,i-1}}{s_{c,i-1}}\qquad \IEEEyessubnumber \end{IEEEeqnarray} If $v_{r,i}<v_{c,i}$, \begin{IEEEeqnarray}{lCr} V_{i+1/2}=\frac{\max(0,v_{c,i+1}-v_{r,i})}{v_{c,i+1}-v_{r,i}} \frac{v_{r,i}-v_{c,i+1}}{s_{c,i+1}} \IEEEyessubnumber\\ V_{i-1/2}=\frac{\max(0,v_{c,i}-v_{r,i-1})}{v_{c,i}-v_{r,i-1}} \frac{v_{r,i-1}-v_{c,i}}{s_{c,i}} \IEEEyessubnumber \end{IEEEeqnarray} \subsection{Follow-the-leader type model from Lagrangian representation} In the continuum flow model, $\Delta n$ can take any positive value. A follow-the-leader type flow is obtained when $\Delta n=1$. In the discretization scheme, grouping is done per vehicle class, which works well for traffic flows obeying lane discipline. However, when we have two-wheelers, which do not respect such an ordered flow, a special treatment is required. The reason is that, in the discretization, clusters of the same vehicle class are not allowed to overlap or occupy the same position. Consequently, the parallel movement of two-wheelers cannot be modeled properly. Thus, we integrate the abreast movement of two-wheelers by introducing sub-lanes. Accordingly, two-wheelers in a sub-lane adhere to the follow-the-leader principle. \par \begin{figure}[thpb] \centering \includegraphics[width=0.4\textwidth]{lagfig} \caption{View of vehicles in the Lagrangian framework when sub-lanes are introduced} \end{figure} \begin{equation}\label{eq:locupdate} X(i,t+\Delta t)= X(i,t) + \Delta t \cdot V \end{equation} The location of the vehicles is updated following Eq. (\ref{eq:locupdate}). Since the macroscopic speed is defined as a function of the free space between vehicles (refer to \cite{c8}), the lateral and longitudinal interaction between vehicle classes can be captured. 
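In the general flux definition above, the factor $\max(0,x)/x$ acts as the indicator $\mathbf{1}\{x>0\}$, switching the upwind side with the sign of the local relative speed. A sketch of this selection for one interior cluster $i$ (illustrative Python; array layout and boundary handling are assumptions):

```python
def fluxes(i, v_r, v_c, s_c):
    """Boundary fluxes (V_{i-1/2}, V_{i+1/2}) for cluster i of a
    non-reference class in method 2, following the general flux
    definition: max(0, x)/x is implemented as the indicator 1{x > 0}.
    Assumes i is an interior index (i-1 and i+1 both exist)."""
    ind = lambda x: 1.0 if x > 0.0 else 0.0
    if v_r[i] > v_c[i]:
        # reference frame faster: flux direction to the left
        V_plus = (v_r[i] - v_c[i]) / s_c[i]
        V_minus = ind(v_r[i-1] - v_c[i-1]) * (v_r[i-1] - v_c[i-1]) / s_c[i-1]
    else:
        # reference frame slower: flux direction to the right
        V_plus = ind(v_c[i+1] - v_r[i]) * (v_r[i] - v_c[i+1]) / s_c[i+1]
        V_minus = ind(v_c[i] - v_r[i-1]) * (v_r[i-1] - v_c[i]) / s_c[i]
    return V_minus, V_plus
```

In a spatially uniform state, both boundary fluxes coincide regardless of which class is faster, so the spacing ratio stays constant, as expected of a consistent scheme.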
For example, the speed of a PTW depends on the number of vehicles (cars and PTWs on the other sub-lanes) within the space between the leader and the follower PTWs, and on the longitudinal spacing; likewise for cars. With this approach, the moving behavior of each vehicle class can be analyzed at a fine-grained level. Further, additional vehicle (or vehicle class) specific rules can also be incorporated, making it a suitable and efficient solution for dealing with cooperative intelligent transport systems (C-ITS). \FloatBarrier \section{Numerical results and discussion} To test the validity and accuracy of the proposed discretization scheme, we compare the numerical results obtained with the Eulerian approach and the two Lagrangian methods. For the simulation experiment, the parameters in Table \ref{table:setting} are used. \begin{table}[h] \caption{Simulation settings} \label{table:setting} \begin{center} \begin{tabular}{l c} \hline \\ Maximum speed of cars & 15 m/s\\ Maximum speed of PTWs & 20 m/s\\ Vehicle cluster size & 7.5 vehicles\\ Time step & 0.125 s\\ Space step (Eulerian) & 10 m\\ Road length & 3000 m\\ Lane width & 3.5 m\\ Number of lanes & 1\\ Simulation time & 45 s\\ \hline \end{tabular} \end{center} \end{table} The Lax-Friedrichs discretization scheme is employed to solve the Eulerian conservation equations. We assume identical initial densities for the two vehicle classes, cars ($\rho_2$) and PTWs ($\rho_1$), where $\rho_1=\rho_2=0.15$ veh/m for $x \in [0, 1400m]$ and $\rho_1=\rho_2=0.3$ veh/m otherwise. \par The evolution of the initial density as described by the Eulerian and Lagrangian approaches is presented in Fig. \ref{fig:lageul}. For the Lagrangian approach we considered two cases by changing the reference class: Lag. 1 stands for the results when PTWs are the reference class, and Lag. 2 stands for the results when cars are the reference class. 
In this case, the fundamental diagram takes the shape in Fig. \ref{fig:speedb}.\par \begin{figure}[!htbp] \centering \begin{subfigure}{\linewidth} \includegraphics[width=\textwidth]{figs/LagvsEulerian_ptw} \caption{PTW density wave} \label{fig:lageul_ptw} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=\textwidth]{figs/LagvsEulerian_car} \caption{Car density wave} \label{fig:lageul_car} \end{subfigure} \caption{Lagrangian with PTWs as the reference class (Lag. 1) and cars as the reference class (Lag. 2) vs. Eulerian} \label{fig:lageul} \end{figure} \FloatBarrier The density waves of PTWs and cars at time $t=40$ s are depicted in Figs. \ref{fig:lageul_ptw} and \ref{fig:lageul_car}, respectively. As can be seen, the results are close to each other except for the differences at the upstream and downstream shock fronts. This demonstrates the validity of the proposed discretization scheme for the case where the slower vehicle class is the reference class (shown by Lag. 2).\par Furthermore, the comparison of the two Lagrangian methods is presented in Fig. \ref{fig:laglag}. The density waves for cars and PTWs shown in Figs. \ref{fig:laglag_car} and \ref{fig:laglag_ptw} illustrate that method 1 (Lag. 3) produces a more accurate result than method 2 (Lag. 1 and Lag. 2) of the Lagrangian approach. Specifically, at the high-to-low and low-to-high density transition points, numerical errors are observed for Lag. 1 and Lag. 2. \par \begin{figure}[!htbp] \centering \begin{subfigure}{\linewidth} \includegraphics[width=\textwidth]{figs/LagvsLag_ptw} \caption{PTW density wave} \label{fig:laglag_ptw} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=\textwidth]{figs/LagvsLag_car} \caption{Car density wave} \label{fig:laglag_car} \end{subfigure} \caption{Comparison between the two Lagrangian methods, (Lag. 1, Lag. 2) for method 2, (Lag. 
3) for method 1.} \label{fig:laglag} \end{figure} We also test the proposed numerical scheme, i.e. the definition of the fluxes at the boundary, for the method 2 Lagrangian representation. For this experiment, we consider the fundamental diagram in Fig. \ref{fig:speeda}, with a maximum speed of cars of $20$ m/s and a maximum speed of PTWs of $15$ m/s. The remaining simulation parameters and the initial density are identical to the previous experiments. The evolution of the car and PTW density waves is shown in Fig. \ref{fig:gen_flux}. According to the results obtained, the evolution is correctly described by the proposed scheme.\par \begin{figure}[!htbp] \centering \includegraphics[width=3.5in]{figs/slowreference} \caption{Density waves of PTWs (upper subplot) and cars (lower subplot); the numerical scheme defined for the general case of method 2 is applied} \label{fig:gen_flux} \end{figure} For the case $\Delta n=1$, the trajectories of the vehicles in the space-time plane are presented in Fig. \ref{fig:tra}. To track the interaction between vehicle classes in different traffic situations, a traffic light is located at $400$ m, which stays red for the period $t \in [0,40s]$. PTWs have two sub-groups (sub-lanes), and the clustering of each sub-group is done separately. As can be observed from the overlapping trajectories of PTWs, by introducing sub-lanes the side-by-side movement of PTWs in the same lane can be reproduced (see the trajectories of vehicles departing the queue). \begin{figure}[!htbp] \centering \includegraphics[width=3.5in]{figs/trajecotry} \caption{Trajectories of vehicles, two sub-lanes for PTWs} \label{fig:tra} \end{figure} \section{CONCLUSIONS} The Lagrangian formulation gives an accurate representation, permits the study of various traffic features and is applicable to current traffic state estimation schemes. Due to these benefits, the Lagrangian representation is preferred over the Eulerian one. 
In this paper, we formulate the multi-class LWR model for a traffic flow consisting of PTWs and cars in Lagrangian coordinates. \par We proposed a numerical scheme taking into account the peculiar features observed in mixed flow of cars and PTWs. The validity of the proposed method is checked through simulation experiments. According to the results, our numerical scheme produces valid results. Moreover, the simulation results show that the Lagrangian representation outperforms the Eulerian representation in terms of accuracy. The possibility of tracking the trajectory of each vehicle in the Lagrangian representation facilitates the investigation of different traffic phenomena for C-ITS applications. \FloatBarrier \section*{ACKNOWLEDGMENT} This work was funded by the French Government (National Research Agency, ANR) through the “Investments for the Future” Program reference \#ANR-11-LABX-0031-01. EURECOM acknowledges the support of its industrial members, namely, Orange, BMW Group, SAP, Monaco Telecom, Symantec, IABG.
\section{Introduction} The standard model of elementary particles is the most important achievement in modern physics, and it still defines the horizon of particle phenomenology~[1]. Supersymmetry ( SUSY )~[2-22] can be understood as one of the ways toward "beyond the standard model" physics, from the viewpoint of particle phenomenology. In such a supersymmetric approach, a theory has both fermionic and bosonic degrees of freedom, and they interact with each other in a supersymmetric manner. The dynamics of interacting boson gases quite often shows Bose-Einstein condensation ( BEC ) as a universal phenomenon. BEC was first found by Bose~[23] and Einstein~[24], and the theory of BEC of an interacting nonrelativistic boson gas was first constructed by Bogoliubov~[25]. The Bogoliubov theory has a very universal character, and it can be applied to various interacting boson systems. Moreover, the methods and concepts of BEC and superfluidity of boson gases are useful for examining/understanding an interacting fermion system, for example BCS ( Bardeen-Cooper-Schrieffer ) superconductivity~[26] or chiral condensations in the Nambu$-$Jona-Lasinio ( NJL ) model~[27,28] and quantum chromodynamics ( QCD )~[29], from the context of spontaneous symmetry breaking. On the other hand, a SUSY multiplet must be broken in a phenomenological model because we have not yet found any superpartner. Due to the ( perturbative ) nonrenormalization theorem, a SUSY breaking cannot take place in perturbation theory, and it should be realized in a nonperturbative manner, i.e. as a spontaneous SUSY breaking. Many modern particle theorists consider that a dynamical symmetry breaking is phenomenologically preferred for a SUSY breakdown~[3,4,5,7,9,10,12]. \vspace{3mm} The most important problem in modern particle physics revealed by experimental results is the origin of the masses, their hierarchy, and the flavor violations of particles. 
Recent experimental observations confirmed that neutrinos should have very tiny masses, and the seesaw mechanism is one of the candidates for providing an explanation of neutrino masses~[30-34]. Hence, it is an interesting issue to construct a SUSY model which shows a seesaw mechanism. In the ordinary seesaw mechanism, the neutrino has both a Dirac and a right-handed Majorana mass term. References~[35] and [36] discussed a generalization of the ordinary seesaw mechanism by adding a ( very tiny ) left-handed Majorana mass, and some interesting results were obtained. It is a well-known fact that the O'Raifeartaigh model breaks SUSY at tree level~[14]. Recently, the modified O'Raifeartaigh model has been examined in the context of meta-stable SUSY breaking~[10-13]. Reference~[11] gives the mass eigenvalues of the scalars and spinors: In fact, the eigenvalues take quite a similar structure to that of the generalized seesaw mechanism~[35,36]. The purpose of this paper is to examine the modified O'Raifeartaigh model in the context of the generalized seesaw mechanism of neutrinos. \vspace{3mm} This paper is organized as follows. In Sec. II, we investigate the generalized seesaw mechanism of the so-called modified O'Raifeartaigh model in the component field formalism. The component field formalism is suitable for employing several many-body-theoretical techniques, though it becomes lengthier than the superspace formalism. Meanwhile, we have to use the notion of component fields in some discussions also in the superspace formalism, especially if we wish to take into account BEC in the scalar sector of the theory. After introducing the modified O'Raifeartaigh model, we briefly discuss its symmetry properties and the classical minimum. We consider it possible that the solution of the classical minimum shows the generalized seesaw mass relation. Then, we employ many-body-theoretical techniques to take into account BEC in the scalar sector. 
With these preparations, the one-loop effective potential is calculated, and the stability around the classical minimum will be investigated. The possibility of SUSY breakdown around the classical minimum will also be examined. In a one-loop effective potential calculation, the loop expansion must converge rapidly enough, and thus the vacuum of a theory should have a semiclassical nature and should not be radically modified by possible quantum corrections. This must be the case in our calculation, and thus we consider the situation where quantum corrections around a classical minimum are small. For comparison with, and as a supplement to, the result of the component field formalism, a calculation of the one-loop effective potential in the superspace formalism is given in Sec. III. The summary and conclusion of this work are presented in Sec. IV. \vspace{3mm} We will follow the textbook of Wess and Bagger for the spinor algebra, gamma matrices and metric conventions throughout this paper~[2]. ( For example, the metric is $\eta^{\mu\nu}={\rm diag}(-1,1,1,1)$. ) \section{Component Field Formalism} \subsection{The Classical Solution} Our starting point is the following Lagrangian of the modified O'Raifeartaigh model~[10-13] of three chiral matter fields: \begin{eqnarray} {\cal L} &=& \Bigl( X^{\dagger}X + \Phi^{\dagger}_{+}\Phi_{+} + \Phi^{\dagger}_{-}\Phi_{-} \Bigr)\Big|_{\theta\theta\bar{\theta}\bar{\theta}} \nonumber \\ & & + \Bigl( fX + \frac{g}{2}X\Phi_{+}\Phi_{+} + m_{D}\Phi_{+}\Phi_{-} + m_{L}\Phi_{-}\Phi_{-} \Bigr)\Big|_{\theta\theta} \nonumber \\ & & + \Bigl( f^{\dagger}X^{\dagger} + \frac{g^{\dagger}}{2}X^{\dagger}\Phi^{\dagger}_{+}\Phi^{\dagger}_{+} + m^{\dagger}_{D}\Phi^{\dagger}_{+}\Phi^{\dagger}_{-} + m^{\dagger}_{L}\Phi^{\dagger}_{-}\Phi^{\dagger}_{-} \Bigr)\Big|_{\bar{\theta}\bar{\theta}}. \end{eqnarray} Here, $X$, $\Phi_{\pm}$ are chiral ( $+$; right, $-$; left ) superfields. 
We regard $\Phi_{\pm}$ as neutrino superfields; $m_{D}$ and $m_{L}$ denote a Dirac and a left-handed Majorana mass parameter, respectively. If $X$ takes a VEV that is very large compared with $m_{D}$ and $m_{L}$, then the theory may show a seesaw-type situation in the mass matrix eigenvalues of its fermion sector. The usual ( ordinary ) seesaw situation will be achieved by $m_{L}\to 0$. The mass dimensions of $f$ and $g$ are as follows: $f;[{\rm mass}]^{2}$, $g;[{\rm mass}]^{0}$. We consider the following global $U(1)_{V}$ ( gauge ) and $U(1)_{A}$ ( chiral ) transformations: \begin{eqnarray} & & U(1)_{V}: \, \Phi_{+} \to e^{i\alpha_{V}}\Phi_{+}, \quad \Phi_{-} \to e^{-i\alpha_{V}}\Phi_{-}, \quad U(1)_{A}: \, \Phi_{+} \to e^{i\alpha_{A}}\Phi_{+}, \quad \Phi_{-} \to e^{i\alpha_{A}}\Phi_{-}, \quad \alpha_{V}, \alpha_{A} \in {\bf R}. \end{eqnarray} The Majorana mass term of the mass parameter $m_{L}$ explicitly breaks both of these global symmetries. We can choose the charge of $X$ to keep the coupling term $\frac{g}{2}X\Phi_{+}\Phi_{+}$ invariant under these transformations as \begin{eqnarray} U(1)_{V}: \, X \to e^{-2i\alpha_{V}}X, \quad U(1)_{A}: \, X \to e^{-2i\alpha_{A}}X. \end{eqnarray} The terms $fX$ and $f^{\dagger}X^{\dagger}$ also explicitly break these global $U(1)$ symmetries. The Majorana mass term breaks the $U(1)_{R}$ symmetry under the following charge assignment of the superfields $X$ and $\Phi_{\pm}$~[10-13]: \begin{eqnarray} U(1)_{R}: \, \theta \to e^{-i\alpha_{R}}\theta, \quad \bar{\theta} \to e^{i\alpha_{R}}\bar{\theta}, \quad X \to e^{2i\alpha_{R}}X, \quad \Phi_{+} \to \Phi_{+}, \quad \Phi_{-} \to e^{2i\alpha_{R}}\Phi_{-}, \quad \alpha_{R} \in {\bf R}. \end{eqnarray} Another $R$-charge assignment is also possible: \begin{eqnarray} U(1)_{R}: \, X \to X, \quad \Phi_{+} \to e^{i\alpha_{R}}\Phi_{+}, \quad \Phi_{-} \to e^{i\alpha_{R}}\Phi_{-}. \end{eqnarray} In this case, the $R$-symmetry will be restored in the limit $f\to 0$. 
Therefore, the term $fX$ and the Majorana mass term are incompatible with respect to the $U(1)_{R}$ symmetry. As a result, there is no global $U(1)$ symmetry in our theory. The absence of an $R$-axion in the modified O'Raifeartaigh model is discussed in Refs.~[11,12] in the context of meta-stable SUSY breaking, and it is phenomenologically favorable. The ordinary O'Raifeartaigh model corresponds to the case $m_{L}=m^{\dagger}_{L}=0$; it has an $R$-symmetry, and SUSY is broken at the tree level~[12,14]. In the ordinary O'Raifeartaigh model, the classical solution becomes $\phi_{+}=\phi_{-}=0$ with $\phi_{X}=$ arbitrary, and SUSY is spontaneously broken in the vacuum. The one-loop effective potential of the O'Raifeartaigh model was calculated in Ref.~[15]. In that calculation, the degeneracy of vacua is lifted by the one-loop correction, and the origin of the potential becomes the only ground state. There is a ${\bf Z}_{2}$ symmetry under $\Phi_{\pm}\to -\Phi_{\pm}$ in (1). The scalar potential is obtained from the tree-level part of the Lagrangian by employing the Euler-Lagrange equations of the auxiliary fields of the chiral multiplets: \begin{eqnarray} V^{tree}[\phi_{\pm},\phi_{X}] &=& |F_{X}|^{2} + |F_{+}|^{2} + |F_{-}|^{2} = \Big|f+\frac{g}{2}\phi^{2}_{+}\Big|^{2} + \Big|g\phi_{X}\phi_{+}+m_{D}\phi_{-}\Big|^{2} + \Big|m_{D}\phi_{+}+2m_{L}\phi_{-} \Big|^{2}. \end{eqnarray} In the literature, the classical solution of $V^{tree}$ is given as follows~[??]: \begin{eqnarray} \phi^{classical}_{X} &=& \frac{m^{2}_{D}}{2gm_{L}}, \quad \phi^{classical}_{+} = \pm\sqrt{\frac{-2f}{g}}, \quad \phi^{classical}_{-} = \mp\frac{m_{D}}{2m_{L}}\sqrt{\frac{-2f}{g}}. \end{eqnarray} Here we have assumed that $m_{D}$, $m_{L}$, $g$ and $f$ are real valued to obtain the classical minimum. When these parameters are real, the classical solution shows a spontaneous ${\bf Z}_{2}$ symmetry breakdown. 
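As a quick numerical cross-check of (6) and (7), the classical solution indeed annihilates all three F-terms. The following minimal sketch uses hypothetical sample values $f=-0.5$, $g=0.3$, $m_{D}=1$, $m_{L}=0.01$, chosen only to satisfy $f<0$, $g>0$ and $m_{D}\gg m_{L}$:

```python
# Consistency check of the classical solution (7) against the tree-level
# potential (6).  The parameter values below are hypothetical samples.
import math

f, g, mD, mL = -0.5, 0.3, 1.0, 0.01

def v_tree(phiX, phip, phim):
    """Tree-level scalar potential (6) for real field values."""
    FX = f + 0.5 * g * phip**2          # F-term of X
    Fp = g * phiX * phip + mD * phim    # F-term of Phi_+
    Fm = mD * phip + 2.0 * mL * phim    # F-term of Phi_-
    return FX**2 + Fp**2 + Fm**2

# Classical solution (7), upper signs.
phiX_c = mD**2 / (2.0 * g * mL)
phip_c = math.sqrt(-2.0 * f / g)
phim_c = -mD / (2.0 * mL) * phip_c

print(v_tree(phiX_c, phip_c, phim_c))   # vanishes: SUSY unbroken at tree level
```

Away from the point (7) the potential is strictly positive, as the F-terms no longer cancel.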
All of the VEVs of the solution go to infinity in the limit $g\to 0$, and $g=0$ is a singular point for the solution. ( This expression of the classical minimum of our model has a similarity with the classical solution of a Ginzburg-Landau-type $\varphi^{4}$ model which describes the low-energy property of the Ising ferromagnet~[37]. Hence $V^{tree}$ seems to have a relation with the Ising ferromagnet. ) $V^{tree}$ vanishes at the classical minimum, and the ${\cal N}=1$ SUSY of this model is unbroken at the classical level~[11,12]. We will see quantum corrections to the classical solution through the following loop expansion calculation, though we mainly investigate a possibility of the generalized seesaw situation in the vicinity of the classical solution in this work. As mentioned above, the ordinary O'Raifeartaigh model gives $\langle\phi_{X}\rangle$=arbitrary at its classical solution with broken SUSY, and the one-loop correction gives the unique vacuum as the origin of the potential ( the origin of (6) gives a finite energy and is not supersymmetric ). By introducing the left-handed Majorana mass term in (1), $\phi^{classical}_{X}$ has obtained the explicit expression given in (7), and then we can discuss the strength of the VEV $\langle \phi_{X}\rangle$ which would give a right-handed Majorana mass parameter through some dynamics of our theory: If we set $m_{L}=0$ from the beginning of our model, we cannot obtain an explicit expression for $\langle\phi_{X}\rangle$ ( at least at the classical level ). This is crucial in the context of this work. The ordinary seesaw mechanism ( the case $m_{L}=0$ ) cannot be considered by the ordinary O'Raifeartaigh model. In this paper, we examine (i) when the classical solution can give the generalized seesaw situation, (ii) how stable and robust the vicinity of the classical solution in the one-loop potential is, and (iii) whether the vicinity of the classical solution in the one-loop potential breaks SUSY or not. 
Usually, SUSY is broken if there is an $R$-symmetry in a theory, while SUSY is kept if an $R$-symmetry is broken. \vspace{3mm} After eliminating the auxiliary fields of $X$ and $\Phi_{\pm}$ and performing the integrations over the Grassmann coordinates, one finds the expression of ${\cal L}$ in terms of component fields as follows: \begin{eqnarray} {\cal L} &=& -\partial_{\nu}\phi^{\dagger}_{X}\partial^{\nu}\phi_{X} -\partial_{\nu}\phi^{\dagger}_{+}\partial^{\nu}\phi_{+} -\partial_{\nu}\phi^{\dagger}_{-}\partial^{\nu}\phi_{-} -i\bar{\psi}_{X}\bar{\sigma}^{\nu}\partial_{\nu}\psi_{X} -i\bar{\psi}_{+}\bar{\sigma}^{\nu}\partial_{\nu}\psi_{+} -i\bar{\psi}_{-}\bar{\sigma}^{\nu}\partial_{\nu}\psi_{-} \nonumber \\ & & -|m_{D}|^{2}(|\phi_{+}|^{2}+|\phi_{-}|^{2}) -|g|^{2}|\phi_{X}|^{2}|\phi_{+}|^{2} - 4|m_{L}|^{2}|\phi_{-}|^{2} -\frac{|g|^{2}}{4}|\phi_{+}|^{4} -\frac{1}{2}(f^{\dagger}g\phi^{2}_{+}+fg^{\dagger}\phi^{\dagger 2}_{+}) -|f|^{2} \nonumber \\ & & -(g^{\dagger}m_{D}\phi^{\dagger}_{X}+2m_{L}m^{\dagger}_{D})\phi^{\dagger}_{+}\phi_{-} -(gm^{\dagger}_{D}\phi_{X}+2m^{\dagger}_{L}m_{D})\phi^{\dagger}_{-}\phi_{+} \nonumber \\ & & -\frac{g}{2}\phi_{X}\psi_{+}\psi_{+} -g\phi_{+}\psi_{X}\psi_{+} -m_{D}\psi_{+}\psi_{-} -m_{L}\psi_{-}\psi_{-} -\frac{g^{\dagger}}{2}\phi^{\dagger}_{X}\bar{\psi}_{+}\bar{\psi}_{+} -g^{\dagger}\phi^{\dagger}_{+}\bar{\psi}_{X}\bar{\psi}_{+} -m^{\dagger}_{D}\bar{\psi}_{+}\bar{\psi}_{-} -m^{\dagger}_{L}\bar{\psi}_{-}\bar{\psi}_{-}. \end{eqnarray} The phases of the mass parameters and of $\phi_{X}$ are defined as \begin{eqnarray} \phi_{X} = |\phi_{X}|e^{i\theta_{X}}, \quad m_{D} = |m_{D}|e^{i\theta_{D}}, \quad m_{L} = |m_{L}|e^{i\theta_{L}}, \quad \theta_{X}, \theta_{D}, \theta_{L} \in {\bf R}. \end{eqnarray} We can absorb only two of the phases $\theta_{X}$, $\theta_{D}$ and $\theta_{L}$ by a redefinition of fields. 
Hereafter, we set $m_{D}=m^{\dagger}_{D}$ and $m_{L}=m^{\dagger}_{L}$ by a field redefinition while keeping the phase degree of freedom of $\phi_{X}$, without loss of generality. Later, we will observe that the mass eigenvalues of scalars and spinors are functions of $\theta_{X}$. In principle, if we take into account the phase degrees of freedom of $\phi_{\pm}$ and $\phi_{X}$, the classical solution of $V^{tree}$ becomes ( $f^{\dagger}=f$, $g^{\dagger}=g$, $m^{\dagger}_{D}=m_{D}$, $m^{\dagger}_{L}=m_{L}$ are imposed ) \begin{eqnarray} |\phi_{+}| &=& \pm \sqrt{\frac{-2f}{g}}\sqrt{\cos(2\theta_{+})\pm\sqrt{\cos^{2}(2\theta_{+})-1}}, \nonumber \\ |\phi_{-}| &=& -\frac{m_{D}|\phi_{+}|}{2m_{L}}\Bigl[\cos(\theta_{+}-\theta_{-})\pm\sqrt{\cos^{2}(\theta_{+}-\theta_{-})-1}\Bigr], \nonumber \\ |\phi_{X}| &=& -\frac{m_{D}|\phi_{-}|}{g|\phi_{+}|}\Bigl[\cos(\theta_{X}+\theta_{+}-\theta_{-})\pm\sqrt{\cos^{2}(\theta_{X}+\theta_{+}-\theta_{-})-1}\Bigr], \qquad \phi_{\pm} = |\phi_{\pm}|e^{i\theta_{\pm}}. \end{eqnarray} However, the phase degrees of freedom of the scalars are chosen by the vanishing conditions of the square roots in (10), namely $\theta_{+}=0,\pi$, $\theta_{-}=0,\pi$, $\theta_{X}=0,\pi$ ( 16 solutions in total, all degenerate ). The solutions (7) at the classical level are special cases of them. On the contrary, we will show later that it is important to take into account the phase degree of freedom $\theta_{X}$, through our examination of the particle mass eigenvalues at the one-loop level. It is quite difficult to take into account the phase degrees of freedom of the scalar fields and of the mass parameters in a complete manner in our one-loop calculation ( and also possible renormalizations of them ), and thus we will use (7) frequently in our discussion. Due to the Hermiticity of our Lagrangian, we have obtained the quartic terms $|g|^{2}|\phi_{+}|^{4}/4$ and $|g|^{2}|\phi_{+}|^{2}|\phi_{X}|^{2}$ as positive ( more precisely, non-negative ) definite in (8). 
This fact guarantees the convergence of the functional integral over the variable $\phi_{+}$ in the Euclidean region. These quartic interactions are hard-core repulsive interactions at $|g|>0$, and they give stability to the scalar sector. \subsection{Bose-Einstein Condensation} From the examination at the tree level of our theory, we speculate that a BEC takes place in the scalar sector of the effective potential of (1) also at the one-loop level. To take the BEC into account in an appropriate manner, the scalar fields will be divided into the condensates and their fluctuation parts: \begin{eqnarray} \phi_{X} = \phi^{c}_{X} + \tilde{\phi}_{X}, \quad \phi_{+} = \phi^{c}_{+} + \tilde{\phi}_{+}, \quad \phi_{-} = \phi^{c}_{-} + \tilde{\phi}_{-}, \end{eqnarray} where the superscript $c$ indicates the condensation parts of the fields. We should mention that the classical solution and the Bose-Einstein condensates of $\phi_{X}$ and $\phi_{\pm}$ are different in principle~[38], \begin{eqnarray} \phi^{classical}_{X} \ne \phi^{c}_{X}, \quad \phi^{classical}_{\pm} \ne \phi^{c}_{\pm}, \end{eqnarray} because the latter include quantum corrections~[38]. We assume the condensates are space-time independent. Under the decomposition of (11), one finds \begin{eqnarray} & & |\phi_{\pm}|^{2} \to |\tilde{\phi}_{\pm}|^{2} + \phi^{c\dagger}_{\pm}\tilde{\phi}_{\pm} + \tilde{\phi}^{\dagger}_{\pm}\phi^{c}_{\pm} + |\phi^{c}_{\pm}|^{2}, \nonumber \\ & & \phi_{+}^{2} \to (\tilde{\phi}_{+})^{2} + 2\phi^{c}_{+}\tilde{\phi}_{+} + (\phi^{c}_{+})^{2}, \quad \phi_{+}^{\dagger 2} \to (\tilde{\phi}^{\dagger}_{+})^{2} + 2\phi^{c\dagger}_{+}\tilde{\phi}^{\dagger}_{+} + (\phi^{c\dagger}_{+})^{2}, \quad \cdots, \end{eqnarray} and so forth. Consequently, for example, the Dirac and Majorana mass terms of the scalar sector give terms linear in $\tilde{\phi}_{\pm}$. In fact, $m_{D}$ and $m_{L}$ play a role similar to the chemical potential of a nonrelativistic boson theory. 
The terms linear in the fluctuating fields ( and tadpole-type diagrams ) will be dropped from our Lagrangian. This ``variational'' condition corresponds to the Euler-Lagrange equations for the condensates~[37-42]. The quartic interactions of the scalars in ${\cal L}$ become \begin{eqnarray} \frac{|g|^{2}}{4}|\phi_{+}|^{4} &=& \frac{|g|^{2}}{4}\Bigl[ |\tilde{\phi}_{+}|^{4} + 4|\tilde{\phi}_{+}|^{2}|\phi^{c}_{+}|^{2} + |\phi^{c}_{+}|^{4} + 2 |\tilde{\phi}_{+}|^{2} (\tilde{\phi}_{+}\phi^{c\dagger}_{+} + \tilde{\phi}^{\dagger}_{+}\phi^{c}_{+}) + \tilde{\phi}_{+}\tilde{\phi}_{+}\phi^{c\dagger}_{+}\phi^{c\dagger}_{+} + \tilde{\phi}^{\dagger}_{+}\tilde{\phi}^{\dagger}_{+}\phi^{c}_{+}\phi^{c}_{+} \Bigr], \nonumber \\ |g|^{2}|\phi_{X}|^{2}|\phi_{+}|^{2} &=& |g|^{2}\Bigl[ |\phi^{c}_{X}|^{2}|\phi^{c}_{+}|^{2} + |\phi^{c}_{X}|^{2}|\tilde{\phi}_{+}|^{2} + |\phi^{c}_{+}|^{2}|\tilde{\phi}_{X}|^{2} + |\tilde{\phi}_{X}|^{2}|\tilde{\phi}_{+}|^{2} + \phi^{c\dagger}_{X}\phi^{c}_{+}\tilde{\phi}^{\dagger}_{+}\tilde{\phi}_{X} + \phi^{c\dagger}_{+}\phi^{c}_{X}\tilde{\phi}^{\dagger}_{X}\tilde{\phi}_{+} \nonumber \\ & & + \phi^{c\dagger}_{X}\phi^{c\dagger}_{+}\tilde{\phi}_{+}\tilde{\phi}_{X} + \phi^{c}_{+}\phi^{c}_{X}\tilde{\phi}^{\dagger}_{X}\tilde{\phi}^{\dagger}_{+} + \phi^{c\dagger}_{X}|\tilde{\phi}_{+}|^{2}\tilde{\phi}_{X} + \phi^{c}_{X}|\tilde{\phi}_{+}|^{2}\tilde{\phi}^{\dagger}_{X} + \phi^{c\dagger}_{+}|\tilde{\phi}_{X}|^{2}\tilde{\phi}_{+} + \phi^{c}_{+}|\tilde{\phi}_{X}|^{2}\tilde{\phi}^{\dagger}_{+} \Bigr]. \end{eqnarray} Here, we have dropped the terms linear in $\tilde{\phi}_{+}$ or $\tilde{\phi}_{X}$ and their Hermitian conjugates. 
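The bookkeeping of the first expansion in (14) can be spot-checked numerically: the full quartic equals the quadratic-and-higher terms kept in (14) plus the dropped constant and linear ( tadpole ) pieces. A minimal sketch with hypothetical sample complex values:

```python
# Spot check of the expansion (14) for (|g|^2/4)|phi_+|^4 under
# phi_+ = phi^c_+ + tilde-phi_+.  Sample values are hypothetical.
g = 0.3
pc = 1.2 - 0.7j          # condensate phi^c_+
pt = 0.4 + 0.9j          # fluctuation tilde-phi_+

full = (g**2 / 4.0) * abs(pc + pt) ** 4

# Terms kept in the first line of (14).
kept = (g**2 / 4.0) * (
    abs(pt) ** 4
    + 4.0 * abs(pt) ** 2 * abs(pc) ** 2
    + abs(pc) ** 4
    + 2.0 * abs(pt) ** 2 * (pt * pc.conjugate() + pt.conjugate() * pc).real
    + (pt * pt * pc.conjugate() ** 2 + pt.conjugate() ** 2 * pc * pc).real
)
# Dropped tadpole pieces, linear in the fluctuation.
dropped = (g**2 / 4.0) * (
    2.0 * abs(pc) ** 2 * (pt.conjugate() * pc + pc.conjugate() * pt).real
)

print(full, kept + dropped)   # the two agree
```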
We will employ the Hartree-Fock-Bogoliubov ( HFB ) approximation for the self-energies coming from the quartic and cubic interactions between fluctuations, by introducing the following vacuum expectation values: \begin{eqnarray} & & J_{1}(x) \equiv \langle\tilde{\phi}^{\dagger}_{+}(x)\tilde{\phi}_{+}(x) \rangle, \quad J_{2}(x) \equiv \langle\tilde{\phi}_{+}(x)\tilde{\phi}_{+}(x) \rangle, \quad J^{\dagger}_{2}(x) \equiv \langle\tilde{\phi}^{\dagger}_{+}(x)\tilde{\phi}^{\dagger}_{+}(x) \rangle, \quad K_{1}(x) \equiv \langle\tilde{\phi}^{\dagger}_{X}(x)\tilde{\phi}_{X}(x) \rangle, \nonumber \\ & & K_{2}(x) \equiv \langle\tilde{\phi}_{X}(x)\tilde{\phi}_{+}(x) \rangle, \quad K^{\dagger}_{2}(x) \equiv \langle\tilde{\phi}^{\dagger}_{+}(x)\tilde{\phi}^{\dagger}_{X}(x) \rangle, \quad K_{3}(x) \equiv \langle\tilde{\phi}^{\dagger}_{X}(x)\tilde{\phi}_{+}(x) \rangle, \quad K_{3}^{\dagger}(x) \equiv \langle\tilde{\phi}^{\dagger}_{+}(x)\tilde{\phi}_{X}(x) \rangle. \end{eqnarray} Here, $J_{1}=\langle \tilde{\phi}^{\dagger}_{+}\tilde{\phi}_{+} \rangle$ is a normal self-energy, while $J_{2}=\langle \tilde{\phi}_{+}\tilde{\phi}_{+}\rangle$ and $J^{\dagger}_{2} = \langle \tilde{\phi}^{\dagger}_{+}\tilde{\phi}^{\dagger}_{+}\rangle$ are anomalous self-energies, notions similar to those of the BCS-Nambu-Gor'kov theory of superconductivity~[26,43,44]. The anomalous self-energies indicate a breakdown of particle-number conservation in the scalar sector, and this is of course a phenomenon independent of the particle-number non-conservation caused by the Majorana mass term of the fermion sector. In the nonrelativistic theory of BEC, the anomalous self-energies are negative quantities. 
Therefore, one obtains \begin{eqnarray} |\tilde{\phi}_{+}|^{2}\tilde{\phi}_{+} &\to& 2J_{1} \tilde{\phi}_{+} + J_{2} \tilde{\phi}^{\dagger}_{+}, \quad |\tilde{\phi}_{+}|^{2}\tilde{\phi}^{\dagger}_{+} \to 2J_{1} \tilde{\phi}^{\dagger}_{+} + J^{\dagger}_{2} \tilde{\phi}_{+}, \quad |\tilde{\phi}_{+}|^{4} \to 4 J_{1} \tilde{\phi}^{\dagger}_{+}\tilde{\phi}_{+} + J^{\dagger}_{2} \tilde{\phi}_{+}\tilde{\phi}_{+} + J_{2} \tilde{\phi}^{\dagger}_{+}\tilde{\phi}^{\dagger}_{+}. \end{eqnarray} By the HFB approximation, the cubic interactions of fluctuations will also be dropped from our Lagrangian. Hence we get \begin{eqnarray} \frac{|g|^{2}}{4}|\phi_{+}|^{4} &\to& \frac{|g|^{2}}{4}\Bigg[ \bigl( 4J_{1} + 4|\phi^{c}_{+}|^{2} \bigr)\tilde{\phi}^{\dagger}_{+}\tilde{\phi}_{+} + \bigl( J^{\dagger}_{2} + (\phi^{c\dagger}_{+})^{2} \bigr)\tilde{\phi}_{+}\tilde{\phi}_{+} + \bigl( J_{2} + (\phi^{c}_{+})^{2} \bigr)\tilde{\phi}^{\dagger}_{+}\tilde{\phi}^{\dagger}_{+} + |\phi^{c}_{+}|^{4} \Bigg], \end{eqnarray} and \begin{eqnarray} |g|^{2}|\phi_{X}|^{2}|\phi_{+}|^{2} &\to& |g|^{2}\Bigg[ |\phi^{c}_{X}|^{2}|\phi^{c}_{+}|^{2} + ( |\phi^{c}_{X}|^{2} + K_{1} )|\tilde{\phi}_{+}|^{2} + ( |\phi^{c}_{+}|^{2} + J_{1} )|\tilde{\phi}_{X}|^{2} \nonumber \\ & & + ( \phi^{c\dagger}_{X}\phi^{c}_{+} + K_{3} ) \tilde{\phi}^{\dagger}_{+}\tilde{\phi}_{X} + ( \phi^{c\dagger}_{+}\phi^{c}_{X} + K^{\dagger}_{3} ) \tilde{\phi}^{\dagger}_{X}\tilde{\phi}_{+} + ( \phi^{c\dagger}_{X}\phi^{c\dagger}_{+} + K^{\dagger}_{2} ) \tilde{\phi}_{+}\tilde{\phi}_{X} + ( \phi^{c}_{+}\phi^{c}_{X} + K_{2} ) \tilde{\phi}^{\dagger}_{X}\tilde{\phi}^{\dagger}_{+} \Bigg]. \end{eqnarray} From the classical solution, we expect that $|g|^{2}|\phi^{c}_{+}|^{2}$ and $|g|^{2}|\phi^{c}_{X}|^{2}$ take values of $\sim{\cal O}(|g|^{0})$, and thus matrix elements given by polynomials of them are not small enough to be neglected in our Lagrangian. 
Meanwhile, we hope $0 < |g| \ll 1$ to be satisfied for the convergence of perturbative series/diagrams in terms of $|g|$. This condition could conflict with the seesaw condition $g\langle \phi_{X}\rangle\gg m_{D}\gg m_{L}$: a realization of the generalized seesaw situation is a non-trivial problem in our theory. In the next subsection, we will evaluate the one-loop effective potential of our theory. We hope the potential captures the essential features of the quantum dynamics of the system (1) even at the one-loop level. On the other hand, we simply drop $-\frac{g}{2}(\tilde{\phi}_{X}\psi_{+}\psi_{+}+\tilde{\phi}_{+}\psi_{X}\psi_{+})+({\rm h.c.})$, which would give a coupling between the scalar and spinor sectors in ${\cal L}$. \vspace{3mm} Now, we examine the variational condition, namely the vanishing condition of the terms linear in the fluctuating scalars. From (8), we obtain the linear terms as follows: \begin{eqnarray} & & -\tilde{\phi}_{+} \Bigl[ |m_{D}|^{2}\phi^{c\dagger}_{+} + f^{\dagger}g\phi^{c}_{+} + gm^{\dagger}_{D}\phi^{c}_{X}\phi^{c\dagger}_{-} + 2m^{\dagger}_{L}m_{D}\phi^{c\dagger}_{-} \nonumber \\ & & \qquad + |g|^{2} \Bigl( \frac{ |\phi^{c}_{+}|^{2} + J_{1} + 2|\phi^{c}_{X}|^{2} + 2K_{1} }{2}\phi^{c\dagger}_{+} + \frac{J^{\dagger}_{2} + 2K^{\dagger}_{3} }{2}\phi^{c}_{+} + K^{\dagger}_{2}\phi^{c}_{X} + K^{\dagger}_{3}\phi^{c\dagger}_{X} \Bigr) \Bigr] \nonumber \\ & & -\tilde{\phi}_{-} \Bigl[ (|m_{D}|^{2}+4|m_{L}|^{2})\phi^{c\dagger}_{-} + (g^{\dagger}m_{D}\phi^{c\dagger}_{X}+2m_{L}m^{\dagger}_{D})\phi^{c\dagger}_{+} \Bigr] \nonumber \\ & & -\tilde{\phi}_{X} \Bigl[ gm^{\dagger}_{D}\phi^{c}_{+}\phi^{c\dagger}_{-}+ |g|^{2} \Bigl( J_{1}\phi^{c\dagger}_{X} + K^{\dagger}_{2}\phi^{c}_{+} \Bigr) \Bigr] \nonumber \\ & & + {\rm h.c.} \end{eqnarray} They must vanish in our treatment of the BEC. All terms given above have mass dimension $[{\rm mass}]^{4}$. We will employ a kind of Popov approximation in our HFB theory~[38], i.e. 
$J_{2}=K_{2}=K_{3}=0$ ( all of the anomalous self-energies will be dropped ). At the classical solution (7), the vanishing condition of the coefficient function of $\tilde{\phi}_{X}$ gives \begin{eqnarray} J_{1} &=& J^{\dagger}_{1} = -\frac{2f}{g}. \end{eqnarray} This expression would be modified under a renormalization of the bare parameters as $J_{1}=-2f^{(ren)}/g^{(ren)}$. We confirm that the coefficient of $\tilde{\phi}_{-}$ vanishes identically at the classical solution (7). From the vanishing condition of the coefficient of $\tilde{\phi}_{+}$ at (7), one finds \begin{eqnarray} K_{1} &=& K^{\dagger}_{1} = g^{-2}\Bigl( \frac{m^{4}_{D}+2m_{L}m^{3}_{D}}{4m^{2}_{L}} -gf \Bigr). \end{eqnarray} This expression of $K_{1}$ takes a large value under the seesaw condition $m_{D}\gg m_{L}$. \vspace{3mm} Next, we will introduce several fields with the following definitions for the convenience of our discussion: \begin{eqnarray} & & \Psi \equiv ( \Psi_{X},\Psi_{M} )^{T}, \quad \Psi_{X} \equiv (\psi_{X},\bar{\psi}_{X})^{T}, \quad \Psi_{M} \equiv (\Psi_{MR},\Psi_{ML})^{T}, \quad \Psi_{MR} \equiv (\psi_{+},\bar{\psi}_{+})^{T}, \quad \Psi_{ML} \equiv (\psi_{-},\bar{\psi}_{-})^{T}, \nonumber \\ & & \overline{\Psi} \equiv ( \overline{\Psi}_{X}, \overline{\Psi}_{M} ), \quad \overline{\Psi}_{X} \equiv (-\psi_{X},-\bar{\psi}_{X}), \quad \overline{\Psi}_{M} \equiv (\overline{\Psi}_{MR},\overline{\Psi}_{ML}), \quad \overline{\Psi}_{MR} \equiv (-\psi_{+},-\bar{\psi}_{+}), \quad \overline{\Psi}_{ML} \equiv (-\psi_{-},-\bar{\psi}_{-}), \nonumber \\ & & \Pi \equiv (\Pi_{X},\Pi_{M})^{T}, \quad \Pi_{X} \equiv (\tilde{\phi}_{X},\tilde{\phi}^{\dagger}_{X})^{T}, \quad \Pi_{M} \equiv (\Pi_{MR},\Pi_{ML})^{T}, \quad \Pi_{MR} \equiv (\tilde{\phi}_{+},\tilde{\phi}^{\dagger}_{+})^{T}, \quad \Pi_{ML} \equiv (\tilde{\phi}_{-},\tilde{\phi}^{\dagger}_{-})^{T}. \end{eqnarray} Here, $\psi_{X}$ is a Majorana field, while $\psi_{+}$ and $\psi_{-}$ are right- and left-handed Majorana fields, respectively. 
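The determination of $J_{1}$ in (20) can be illustrated numerically: with the Popov approximation and real parameters, the coefficient of $\tilde{\phi}_{X}$ in (19) reduces to $gm_{D}\phi^{c}_{+}\phi^{c}_{-}+g^{2}J_{1}\phi^{c}_{X}$, which vanishes at the classical solution (7) precisely for $J_{1}=-2f/g$. A minimal sketch with hypothetical sample values:

```python
# Variational condition for the tilde-phi_X coefficient of (19), in the Popov
# approximation (J2 = K2 = K3 = 0) with real parameters and the classical
# solution (7).  The parameter values are hypothetical samples.
import math

f, g, mD, mL = -0.5, 0.3, 1.0, 0.01

phiX = mD**2 / (2.0 * g * mL)        # classical solution (7)
phip = math.sqrt(-2.0 * f / g)
phim = -mD / (2.0 * mL) * phip

J1 = -2.0 * f / g                    # the HFB mean field fixed by (20)

coeff = g * mD * phip * phim + g**2 * (J1 * phiX)
print(coeff)                         # vanishes only for J1 = -2f/g
```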
$T$ denotes the transposition of a matrix. The Lagrangian density is rewritten in the following form in terms of these fields: \begin{eqnarray} {\cal L} &=& -V^{tree}[\phi^{c}_{\pm},\phi^{c}_{X}] + \frac{1}{2}\Pi^{\dagger}\Omega^{B}\Pi + \frac{1}{2}\overline{\Psi}\Omega^{F}\Psi. \end{eqnarray} The matrices $\Omega^{B}$ and $\Omega^{F}$ are defined as follows: \begin{eqnarray} & & \Omega^{B} \equiv \left( \begin{array}{cc} \Omega^{B}_{XX} & \Omega^{B}_{XM} \\ \Omega^{B}_{MX} & \Omega^{B}_{MM} \end{array} \right), \quad \Omega^{B}_{MM} \equiv \left( \begin{array}{cc} \Omega^{B}_{++} & \Omega^{B}_{+-} \\ \Omega^{B}_{-+} & \Omega^{B}_{--} \end{array} \right), \quad \Omega^{B}_{XM} \equiv \bigl( \Omega^{B}_{X+},\Omega^{B}_{X-} \bigr), \quad \Omega^{B}_{MX} \equiv \left( \begin{array}{c} \Omega^{B}_{+X} \\ \Omega^{B}_{-X} \end{array} \right), \nonumber \\ & & \Omega^{F} \equiv \left( \begin{array}{cc} \Omega^{F}_{XX} & \Omega^{F}_{XM} \\ \Omega^{F}_{MX} & \Omega^{F}_{MM} \end{array} \right), \quad \Omega^{F}_{XX} \equiv i\ooalign{\hfil/\hfil\crcr$\partial$}, \quad \Omega^{F}_{XM} \equiv \bigl( -g\phi^{c}_{+}P_{+} -g^{\dagger}\phi^{c\dagger}_{+}P_{-}, 0 \bigr), \nonumber \\ & & \Omega^{F}_{MX} \equiv \left( \begin{array}{c} -g\phi^{c}_{+}P_{+} -g^{\dagger}\phi^{c\dagger}_{+}P_{-} \\ 0 \end{array} \right), \quad \Omega^{F}_{MM} \equiv \left( \begin{array}{cc} i\ooalign{\hfil/\hfil\crcr$\partial$} -g\phi^{c}_{X}P_{+} -g^{\dagger}\phi^{c\dagger}_{X}P_{-} & -m_{D} \\ -m_{D} & i\ooalign{\hfil/\hfil\crcr$\partial$} -2m_{L} \end{array} \right). \end{eqnarray} The definitions $\gamma^{5} \equiv \gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}$ and $P_{\pm} \equiv \frac{1\pm i\gamma^{5}}{2}$ have been used. The Hermiticity $(\Omega^{B})^{\dagger}=\Omega^{B}$ is satisfied. 
The entries of $\Omega^{B}$ are as follows: \begin{eqnarray} & & \Omega^{B}_{XX} \equiv \left( \begin{array}{cc} \Box - |g|^{2}( J_{1} + |\phi^{c}_{+}|^{2} ) & 0 \\ 0 & \Box - |g|^{2}( J_{1} + |\phi^{c}_{+}|^{2} ) \end{array} \right), \quad \Omega^{B}_{--} \equiv \left( \begin{array}{cc} \Box - m^{2}_{D} -4m^{2}_{L} & 0 \\ 0 & \Box - m^{2}_{D} -4m^{2}_{L} \end{array} \right), \nonumber \\ & & \Omega^{B}_{++} \equiv \left( \begin{array}{cc} \Box - m^{2}_{D} -|g|^{2}(J_{1}+K_{1}+ |\phi^{c}_{+}|^{2} + |\phi^{c}_{X}|^{2}) & -\frac{|g|^{2}}{4}( J_{2} + (\phi^{c}_{+})^{2})-\frac{fg^{\dagger}}{2} \\ -\frac{|g|^{2}}{4}( J^{\dagger}_{2} + (\phi^{c\dagger}_{+})^{2})-\frac{f^{\dagger}g}{2} & \Box - m^{2}_{D} -|g|^{2}(J_{1}+K_{1}+ |\phi^{c}_{+}|^{2} + |\phi^{c}_{X}|^{2}) \end{array} \right), \nonumber \\ & & \Omega^{B}_{X+} \equiv \left( \begin{array}{cc} -|g|^{2}( K^{\dagger}_{3} + \phi^{c\dagger}_{+}\phi^{c}_{X} ) & -g^{\dagger}m_{D}\phi^{c}_{-} - |g|^{2}( K_{2} + \phi^{c}_{+}\phi^{c}_{X} ) \\ -gm_{D}\phi^{c\dagger}_{-} - |g|^{2}( K^{\dagger}_{2} + \phi^{c\dagger}_{+}\phi^{c\dagger}_{X} ) & -|g|^{2}( K_{3} + \phi^{c\dagger}_{X}\phi^{c}_{+} ) \end{array} \right), \nonumber \\ & & \Omega^{B}_{X-} \equiv \left( \begin{array}{cc} -g^{\dagger}m_{D}\phi^{c\dagger}_{+} & 0 \\ 0 & -gm_{D}\phi^{c}_{+} \end{array} \right), \quad \Omega^{B}_{+-} \equiv \left( \begin{array}{cc} -m_{D}(g^{\dagger}\phi^{c\dagger}_{X}+2m_{L}) & 0 \\ 0 & -m_{D}(g\phi^{c}_{X}+2m_{L}) \end{array} \right). \end{eqnarray} All of the off-diagonal elements of $\Omega^{B}$ come from particle-number-non-conserving interactions and/or mean fields of ${\cal L}$ under the HFB approximation. The diagonalizations of $\Omega^{B}$ and $\Omega^{F}$ will give the ``quasiparticle'' excitation energy spectra of the scalar and spinor fields in terms of the bare parameters/fields. In particular, we are interested in whether the spinor $\psi_{X}$ becomes massive or not under the one-loop quantum correction. 
If there is no massless fermion, then there is no Nambu-Goldstone ( NG ) fermion, and the Nambu-Goldstone theorem implies the absence of spontaneous SUSY breaking in our theory~[9,12]. \subsection{The One-loop Effective Potential} In this subsection, we evaluate and examine the one-loop effective potential of our theory. The generating functional is obtained as follows: \begin{eqnarray} {\cal Z} &\equiv& \int {\cal D}\tilde{\phi}_{X}{\cal D}\tilde{\phi}^{\dagger}_{X}{\cal D}\tilde{\phi}_{+}{\cal D}\tilde{\phi}^{\dagger}_{+}{\cal D}\tilde{\phi}_{-}{\cal D}\tilde{\phi}^{\dagger}_{-} {\cal D}\psi_{X}{\cal D}\bar{\psi}_{X}{\cal D}\psi_{+}{\cal D}\bar{\psi}_{+}{\cal D}\psi_{-}{\cal D}\bar{\psi}_{-} \nonumber \\ & & \times \exp\Bigg[i\int d^{4}x \Bigl\{ -V^{tree}[\phi^{c}_{\pm},\phi^{c}_{X}] + \frac{1}{2}\Pi^{\dagger}\Omega^{B}\Pi + \frac{1}{2}\overline{\Psi}\Omega^{F}\Psi \Bigr\} + ({\rm source})\Bigg]. \end{eqnarray} The one-loop contribution to the effective potential is evaluated to be \begin{eqnarray} & & V^{(1)}[\phi^{c}_{\pm},\phi^{c}_{X}] = V^{B(1)} + V^{F(1)}, \nonumber \\ & & V^{B(1)} \equiv \frac{i}{2}\ln{\rm Det}\Omega^{B} = \frac{i}{2}\ln{\rm Det}\Omega^{B}_{MM} + \frac{i}{2}\ln{\rm Det}\Bigl(\Omega^{B}_{XX}-\Omega^{B}_{XM}\frac{1}{\Omega^{B}_{MM}}\Omega^{B}_{MX}\Bigr), \nonumber \\ & & V^{F(1)} \equiv -\frac{i}{2}\ln{\rm Det}\Omega^{F} = -\frac{i}{2}\ln{\rm Det}\Omega^{F}_{MM} -\frac{i}{2}\ln{\rm Det}\Bigl(\Omega^{F}_{XX}-\Omega^{F}_{XM}\frac{1}{\Omega^{F}_{MM}}\Omega^{F}_{MX}\Bigr). \end{eqnarray} The effective action is found to be \begin{eqnarray} \Gamma_{(compo)} &\equiv& -i\ln{\cal Z} = \int d^{4}x\Bigl(-V^{tree}[\phi^{c}_{\pm},\phi^{c}_{X}] -V^{(1)}[\phi^{c}_{\pm},\phi^{c}_{X}] \Bigr). 
\end{eqnarray} If we perform the path integration of only $\Pi_{M}$ and $\Psi_{M}$, the generating functional becomes \begin{eqnarray} {\cal Z} &=& \int {\cal D}\tilde{\phi}_{X}{\cal D}\tilde{\phi}^{\dagger}_{X}{\cal D}\psi_{X}{\cal D}\bar{\psi}_{X}\exp\Bigg[i\int d^{4}x \Bigl\{ -V^{tree}[\phi^{c}_{\pm},\phi^{c}_{X}] - V^{(1)}_{M} \nonumber \\ & & + \frac{1}{2}\Pi^{\dagger}_{X}\Bigl(\Omega^{B}_{XX}-\Omega^{B}_{XM}\frac{1}{\Omega^{B}_{MM}}\Omega^{B}_{MX}\Bigr)\Pi_{X} + \frac{1}{2}\overline{\Psi}_{X}\Bigl(\Omega^{F}_{XX}-\Omega^{F}_{XM}\frac{1}{\Omega^{F}_{MM}}\Omega^{F}_{MX}\Bigr)\Psi_{X} \Bigr\} \Bigg], \end{eqnarray} where \begin{eqnarray} V^{(1)}_{M} \equiv V^{B(1)}_{M} + V^{F(1)}_{M}, \quad V^{B(1)}_{M} \equiv \frac{i}{2}\ln{\rm Det}\Omega^{B}_{MM}, \quad V^{F(1)}_{M} \equiv -\frac{i}{2}\ln{\rm Det}\Omega^{F}_{MM}. \end{eqnarray} To obtain this expression of ${\cal Z}$, we can regard $\Pi_{X}$ and $\Psi_{X}$ as Grassmann-even and Grassmann-odd source fields in the Gaussian integrations of $\Pi_{M}$ and $\Psi_{M}$, respectively. \vspace{3mm} Let us examine the matrix $\Omega^{F}$. Since $\Omega^{F}_{XX}$ is the inverse propagator of a massless fermion, a perturbative expansion in terms of $(\Omega^{F}_{XX})^{-1}$ for handling ${\rm Tr}\ln\Omega^{F}$ suffers from infrared divergences, which indicates that such a perturbative expansion is an unsuitable method for our model. If the fermion $\psi_{X}$ remains massless at the one-loop level, the determinant of $\Omega^{F}$ has a zero at $p^{2}=0$ and must factorize as $\det\Omega^{F}(p)=(p^{2})^{2}(p^{2}+({\rm mass})^{2})^{4}$. However, the direct evaluation of $\det\Omega^{F}(p_{\nu}=0)$ from (24), i.e. at vanishing four-momentum, gives \begin{eqnarray} \det\Omega^{F}(p_{\nu}=0) &=& \bigl( 4|g|^{4}|\phi^{c}_{+}|^{4}m^{2}_{L} \bigr)^{2}. 
\end{eqnarray} Hence, there is no massless particle in the fermion sector at $\phi^{c}_{+}\ne 0$, and this fact indicates the absence of a spontaneous SUSY breakdown in our theory. Note that this holds globally, over the whole functional space of $\det\Omega^{F}$. The contribution of $\psi_{\pm}$ to the one-loop effective potential is obtained after the diagonalization of $\Omega^{F}_{MM}$ in the following form: \begin{eqnarray} V^{F(1)}_{M} = -i{\rm Tr}\ln(k_{0}-E^{F}_{+})^{2}(k_{0}+E^{F}_{+})^{2}(k_{0}-E^{F}_{-})^{2}(k_{0}+E^{F}_{-})^{2}. \end{eqnarray} The energy spectra $E^{F}_{\pm}$ become \begin{eqnarray} E^{F}_{\pm}(\mbox{\boldmath $k$}) &\equiv& \sqrt{ \mbox{\boldmath $k$}^{2} + M^{F2}_{\pm}}, \nonumber \\ M^{F}_{\pm} &\equiv& \sqrt{ m_{D}^{2} + \frac{|g|^{2}|\phi^{c}_{X}|^{2}}{2} + 2m_{L}^{2} \mp 2 \sqrt{ \Bigl( \frac{|g|^{2}|\phi^{c}_{X}|^{2}}{4}-m_{L}^{2}\Bigr)^{2} + m_{D}^{2}\Bigl(\frac{|g|^{2}|\phi^{c}_{X}|^{2}}{4}+m_{L}^{2}+|g||\phi^{c}_{X}|m_{L}\cos\theta_{X}\Bigr) } }. \end{eqnarray} Here, the phase of $\phi_{X}$ appears in $M^{F}_{\pm}$ given above. It was shown in Ref.~[36] that a one-loop potential is not degenerate with respect to the phase $\theta_{X}$ in a Nambu$-$Jona-Lasinio-type dynamical model of the generalized seesaw mechanism. In the case $|g|^{2}|\phi^{c}_{X}|^{2}\gg m^{2}_{D}\gg m^{2}_{L}$ ( satisfied under $m_{D}\gg m_{L}$ in (7) ), these mass spectra show the generalized seesaw mechanism~[35,36]: $M^{F}_{+}$ is light while $M^{F}_{-}$ is heavy. At $m_{D}\gg m_{L}$, they become \begin{eqnarray} M^{F}_{+} \sim \sqrt{2}m_{L}, \quad M^{F}_{-} \sim \sqrt{2m^{2}_{D}+2m^{2}_{L}+|g|^{2}|\phi^{c}_{X}|^{2}}. \end{eqnarray} Hence, in the generalized seesaw mechanism, the light spinor acquires its mass of ${\cal O}(m_{L})$, while $m_{D}$ and $|\phi_{X}|$ have quite minor contributions to it. 
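The closed form (32) can be checked numerically against a direct diagonalization: the masses $M^{F}_{\pm}$ are the singular values of the $2\times 2$ Majorana block $M=\bigl(\begin{smallmatrix} g\phi_{X} & m_{D} \\ m_{D} & 2m_{L}\end{smallmatrix}\bigr)$ read off from $\Omega^{F}_{MM}$ at zero momentum. A minimal sketch with hypothetical sample values obeying $|g|^{2}|\phi_{X}|^{2}\gg m^{2}_{D}\gg m^{2}_{L}$ and $\theta_{X}=\pi$:

```python
# Numerical illustration of the generalized seesaw spectra (32).
# Parameter values are hypothetical samples.
import cmath, math

g, mD, mL = 0.3, 1.0, 0.01
absX, thetaX = 1.0e3, math.pi
phiX = absX * cmath.exp(1j * thetaX)

M = [[g * phiX, mD], [mD, 2.0 * mL]]
# Singular values from the 2x2 Hermitian matrix M^dagger M (trace/det formula).
T = sum(abs(M[i][j]) ** 2 for i in range(2) for j in range(2))
D = abs(M[0][0] * M[1][1] - M[0][1] * M[1][0]) ** 2
disc = math.sqrt(T * T / 4.0 - D)
MFp, MFm = math.sqrt(T / 2.0 - disc), math.sqrt(T / 2.0 + disc)

# Closed form (32), with u = |g|^2 |phiX|^2 / 4.
u = (g * absX) ** 2 / 4.0
root = math.sqrt((u - mL**2) ** 2
                 + mD**2 * (u + mL**2 + g * absX * mL * math.cos(thetaX)))
MFp_f = math.sqrt(mD**2 + 2.0 * u + 2.0 * mL**2 - 2.0 * root)
MFm_f = math.sqrt(mD**2 + 2.0 * u + 2.0 * mL**2 + 2.0 * root)

print(MFp, MFp_f)   # light mass, far below m_D (seesaw suppression)
print(MFm, MFm_f)   # heavy mass, of order |g||phiX|
```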
By taking into account the result (31), we obtain the following mass formula of the fermion sector in terms of the bare quantities: \begin{eqnarray} \bigl( (M^{F}_{X})^{2}(M^{F}_{+})^{2}(M^{F}_{-})^{2} \bigr)^{2} &=& \bigl( 4|g|^{4}|\phi^{c}_{+}|^{4}m^{2}_{L} \bigr)^{2}. \end{eqnarray} The right-hand side becomes $256f^{4}g^{4}m^{4}_{L}$ at the classical solution, and can take a small value at $f\to 0$ or $m_{L}\to 0$. $m_{L}=0$ is the case of the ordinary O'Raifeartaigh model, and in that case $M^{F}_{X}=0$ takes place, which indicates a breakdown of SUSY. Hence, the expression of the mass of the $\psi_{X}$ field is found to be \begin{eqnarray} M^{F}_{X} &=& \frac{2|g|^{2}|\phi^{c}_{+}|^{2}m_{L}}{\sqrt{m^{4}_{D}+4|g|^{2}|\phi^{c}_{X}|^{2}m^{2}_{L}-4|g||\phi^{c}_{X}|m^{2}_{D}m_{L}\cos\theta_{X}}}. \end{eqnarray} Note that $M^{F}_{X}$ can become imaginary when the inside of the square root in the denominator of (36) takes a negative value. By inserting the classical solution (7), one obtains \begin{eqnarray} M^{F}_{X} &=& \frac{-4f|g|m_{L}}{m^{2}_{D}\sqrt{2(1-\cos\theta_{X})}}. \end{eqnarray} $M^{F}_{X}$ becomes very small and will behave as a pseudo-NG fermion when $f,|g|,m_{L}\ll m_{D}$, and it vanishes at $m_{L}=0$, while $(M^{F}_{X})^{2}$ is always a non-negative quantity. It is an interesting fact that $M^{F}_{X}$ diverges under $\theta_{X}\to 0$, namely, it is not well-defined in that limit. ( In the case of a non-SUSY dynamical model of the generalized seesaw mechanism, $\theta_{X}=\pi$ is chosen as the vacuum state~[36]. ) Therefore, a careful examination of $\theta_{X}$ is important ( crucial ) in our theory. As examined in Subsection A, $\theta_{X}$ of the classical solutions will take $0$ or $\pi$. $M^{F}_{X}\sim{\cal O}(fgm_{L}/m^{2}_{D})$ if we choose $\theta_{X}=\pi$. We assume there is no spontaneous Lorentz symmetry breaking in the fermion sector. 
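The consistency of (36) with (37) and with the determinant (30) can also be checked numerically: the full $3\times 3$ Majorana mass matrix of $(\psi_{X},\psi_{+},\psi_{-})$, read off from the mass terms of (8), has $|\det M|=2|g|^{2}|\phi^{c}_{+}|^{2}m_{L}$, and (36) reduces to (37) at the classical solution. A minimal sketch with hypothetical sample values and $\theta_{X}=\pi$:

```python
# Check of the determinant identity behind (30) and of (36) versus (37).
# Parameter values are hypothetical samples.
import cmath, math

f, g, mD, mL, thetaX = -0.5, 0.3, 1.0, 0.01, math.pi

absX = mD**2 / (2.0 * g * mL)            # |phi_X| at the classical solution (7)
phiX = absX * cmath.exp(1j * thetaX)
phip = math.sqrt(-2.0 * f / g)           # |phi_+| at (7)

# 3x3 Majorana mass matrix of (psi_X, psi_+, psi_-) from the mass terms of (8).
M = [[0.0,      g * phip, 0.0     ],
     [g * phip, g * phiX, mD      ],
     [0.0,      mD,       2.0 * mL]]

def det3(m):
    """Cofactor expansion of a 3x3 determinant."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

detM = abs(det3(M))                      # equals 2|g|^2|phi_+|^2 m_L

den = math.sqrt(mD**4 + 4.0 * (g * absX * mL) ** 2
                - 4.0 * g * absX * mD**2 * mL * math.cos(thetaX))
MX = 2.0 * g**2 * phip**2 * mL / den                              # eq. (36)
MX_cl = -4.0 * f * g * mL / (mD**2 * math.sqrt(2.0 * (1.0 - math.cos(thetaX))))  # eq. (37)

print(detM, 2.0 * g**2 * phip**2 * mL)   # both give 2|g|^2|phi_+|^2 m_L
print(MX, MX_cl)                         # (36) and (37) coincide at (7)
```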
As a result, the spectrum of the $\psi_{X}$ particle must take the following Lorentz-symmetric form: \begin{eqnarray} E^{F}_{X} &=& \sqrt{\mbox{\boldmath $k$}^{2}+M^{F2}_{X}}. \end{eqnarray} Therefore, we get the one-loop contribution of the fermion sector as \begin{eqnarray} V^{F(1)} = -\frac{i}{2}{\rm Tr}\ln(k_{0}-E^{F}_{X})^{2}(k_{0}+E^{F}_{X})^{2}(k_{0}-E^{F}_{+})^{2}(k_{0}+E^{F}_{+})^{2}(k_{0}-E^{F}_{-})^{2}(k_{0}+E^{F}_{-})^{2}. \end{eqnarray} We summarize the result of our analysis of the fermion sector: (i) Our model at $m_{L}=0$ corresponds to the ordinary O'Raifeartaigh model; it has an $R$-symmetry, SUSY is broken at the ground state, and we have confirmed that an NG fermion appears~[9,12]. (ii) Our model at $m_{L}\ne 0$, namely the modified O'Raifeartaigh model~[11,12], will give the generalized seesaw mechanism at the point (7), while $\psi_{X}$ has a finite mass and SUSY seems not to be broken. We now proceed to the boson sector of our theory. \vspace{3mm} In our treatment of the scalar sector, we first solve the secular equation $\det\Omega^{B}_{MM}=0$. Though the secular equation is quartic in the d'Alembertian $\Box$, fortunately, we can diagonalize $\Omega^{B}_{MM}$ analytically because the secular equation factorizes into a product of two quadratic equations in $\Box$. The result is \begin{eqnarray} \det\Omega^{B}_{MM} &=& (k_{0}-E^{B}_{M1+})(k_{0}+E^{B}_{M1+})(k_{0}-E^{B}_{M1-})(k_{0}+E^{B}_{M1-}) \nonumber \\ & & \times (k_{0}-E^{B}_{M2+})(k_{0}+E^{B}_{M2+})(k_{0}-E^{B}_{M2-})(k_{0}+E^{B}_{M2-}). \end{eqnarray} There is no degeneracy in the spectra obtained from $\det\Omega^{B}_{MM}=0$. 
Here, the energy eigenvalues become \begin{eqnarray} E^{B}_{M1\pm}(\mbox{\boldmath $k$}) &\equiv& \sqrt{ \mbox{\boldmath $k$}^{2} + (M^{B}_{M1\pm})^{2}}, \quad E^{B}_{M2\pm}(\mbox{\boldmath $k$}) \equiv \sqrt{ \mbox{\boldmath $k$}^{2} + (M^{B}_{M2\pm})^{2}}, \nonumber \\ M^{B}_{M1\pm} &\equiv& \sqrt{\frac{c_{2}+c_{3}-|c_{4}|}{2}\mp\frac{1}{2}\sqrt{\bigl( c_{2}-c_{3}+|c_{4}|\bigr)^{2} + 4|c_{1}|^{2}}}, \nonumber \\ M^{B}_{M2\pm} &\equiv& \sqrt{\frac{c_{2}+c_{3}+|c_{4}|}{2}\mp\frac{1}{2}\sqrt{\bigl( c_{2}-c_{3}-|c_{4}|\bigr)^{2} + 4|c_{1}|^{2}}}, \nonumber \\ c_{1} &\equiv& -m_{D}(g^{\dagger}\phi^{c\dagger}_{X}+2m_{L}), \quad c_{2} \equiv m^{2}_{D} +4m^{2}_{L}, \nonumber \\ c_{3} &\equiv& m^{2}_{D} +|g|^{2}(J_{1}+K_{1}+ |\phi^{c}_{+}|^{2} + |\phi^{c}_{X}|^{2}), \quad c_{4} \equiv -\frac{|g|^{2}}{4}( J_{2} + (\phi^{c}_{+})^{2})-\frac{fg^{\dagger}}{2}. \end{eqnarray} $|c_{1}|^{2}$ includes the phase $\theta_{X}$. If we insert the classical-solution expressions for $\phi_{+}$, $J_{1}$ and $K_{1}$ into $c_{3}$ and $c_{4}$, employing the Popov approximation $J_{2}=0$, we get \begin{eqnarray} M^{B}_{M1\pm} &=& M^{B}_{M2\pm} \nonumber \\ &=& \Bigg[ m_{D}^{2} + \frac{|g|^{2}|\phi^{c}_{X}|^{2}}{2} + 2m_{L}^{2} + \frac{m^{4}_{D}}{8m^{2}_{L}} + \frac{m^{3}_{D}}{4m_{L}} -\frac{5}{2}|g|f \nonumber \\ & & \quad \mp 2 \Bigl\{ \Bigl( \frac{|g|^{2}|\phi^{c}_{X}|^{2}}{4}-m_{L}^{2} + \frac{m^{4}_{D}}{16m^{2}_{L}} + \frac{m^{3}_{D}}{8m_{L}} -\frac{5}{4}|g|f \Bigr)^{2} + m_{D}^{2}\Bigl(\frac{|g|^{2}|\phi^{c}_{X}|^{2}}{4}+m_{L}^{2}+|g||\phi^{c}_{X}|m_{L}\cos\theta_{X}\Bigr) \Bigr\}^{1/2}\Bigg]^{1/2}. \end{eqnarray} Therefore, we conclude \begin{eqnarray} M^{F}_{\pm} &<& M^{B}_{M1\pm}, M^{B}_{M2\pm}, \end{eqnarray} at $f<0$, $g>0$. 
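The degeneracy $M^{B}_{M1\pm}=M^{B}_{M2\pm}$ in (42) follows from $c_{4}=0$ at the classical solution with $J_{2}=0$, which can be checked numerically from the definitions in (41). A minimal sketch with hypothetical sample values and $\theta_{X}=\pi$:

```python
# Check that c4 of (41) vanishes at the classical solution (7) with J2 = 0,
# so the two boson branches of (41) coincide, as stated in (42).
# Parameter values are hypothetical samples.
import cmath, math

f, g, mD, mL, thetaX = -0.5, 0.3, 1.0, 0.01, math.pi

absX = mD**2 / (2.0 * g * mL)                         # classical |phi_X|
phip2 = -2.0 * f / g                                  # |phi_+^c|^2 at (7)
J1 = -2.0 * f / g                                     # eq. (20)
K1 = ((mD**4 + 2.0 * mL * mD**3) / (4.0 * mL**2) - g * f) / g**2   # eq. (21)

c1 = -mD * (g * absX * cmath.exp(-1j * thetaX) + 2.0 * mL)
c2 = mD**2 + 4.0 * mL**2
c3 = mD**2 + g**2 * (J1 + K1 + phip2 + absX**2)
c4 = -(g**2 / 4.0) * phip2 - f * g / 2.0              # J2 = 0 (Popov)

def mB(s):
    """Light boson masses from (41); s = -1 picks M1+, s = +1 picks M2+."""
    inner = math.sqrt((c2 - c3 - s * abs(c4)) ** 2 + 4.0 * abs(c1) ** 2)
    return math.sqrt((c2 + c3 + s * abs(c4)) / 2.0 - inner / 2.0)

print(c4)              # vanishes at (7) with J2 = 0
print(mB(-1), mB(+1))  # hence M^B_{M1+} = M^B_{M2+}
```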
The mass eigenvalues $M^{B}_{M1+}$ and $M^{B}_{M2+}$ become tachyonic at \begin{eqnarray} M^{B}_{M1+}; \, c_{2}(c_{3}-|c_{4}|) < |c_{1}|^{2}, \quad M^{B}_{M2+}; \, c_{2}(c_{3}+|c_{4}|) < |c_{1}|^{2}, \end{eqnarray} and the appearance of a tachyon indicates an instability of the vacuum state~[12,45]. In our tachyon conditions (44), $c_{3}$ and $c_{4}$ include the HFB self-energies. At the classical solution (7) with $J_{2}=0$, the tachyon condition (44) becomes \begin{eqnarray} 2f - g\Bigl(J_{1}+K_{1}\Bigr) = 5f-\frac{1}{g}\Bigl(\frac{m^{4}_{D}}{4m^{2}_{L}}+\frac{m^{3}_{D}}{2m_{L}}\Bigr) > 0. \end{eqnarray} Here we have assumed $f$ and $g$ to be real. ( We recall that $f$, $J_{1}$, $K_{1}$, and $J_{2}$ have mass dimension $[Mass]^{2}$, while $g$ is dimensionless. ) Hence, the vicinity of the classical solution is stable if $f<0$ and $g>0$. Due to the HFB self-energies and $\phi^{c}_{+}$, there are several differences between $M^{F}_{\pm}$ and the mass eigenvalues obtained from $\Omega^{B}_{MM}$: the mass spectra of bosons and fermions are not symmetric ( namely, not supersymmetric ) in our theory. Since we regard the vacuum energy as the order parameter of SUSY breaking, we must examine the local/global structure of the one-loop effective potential to clarify whether the vacuum energy vanishes or not before concluding a breakdown of SUSY. \vspace{3mm} In the determinant $\det\{\Omega^{B}_{XX}-\Omega^{B}_{XM}(\Omega^{B}_{MM})^{-1}\Omega^{B}_{MX}\}$, we concentrate on the vicinity of the classical solution (7). At the point (7), with $J_{2}$ neglected, $\Omega^{B}_{++}$ in the expression (25) becomes diagonal. This allows us to evaluate the mass eigenvalue of $\phi_{X}$ analytically. 
Then we get $(M^{B}_{X\pm})^{2}$ in terms of bare parameters as follows: \begin{eqnarray} (M^{B}_{X\pm})^{2} &\equiv& \lim_{p^{2}\to 0} (|A|\pm |B|), \nonumber \\ A &\equiv& -a + \frac{|(|\alpha|^{2}+\beta^{2})b+c|d|^{2}-(\alpha de^{\dagger}+\alpha^{\dagger}d^{\dagger}e)|}{bc-|e|^{2}}, \quad B \equiv -\frac{|(\beta+\beta^{\dagger})(\alpha^{\dagger}b-de^{\dagger})|}{bc-|e|^{2}}, \nonumber \\ a &\equiv& - |g|^{2}( J_{1} + |\phi^{c}_{+}|^{2} ), \quad b \equiv \Box - m^{2}_{D} -4m^{2}_{L}, \nonumber \\ c &\equiv& \Box - m^{2}_{D} -|g|^{2}(J_{1}+K_{1}+ |\phi^{c}_{+}|^{2} + |\phi^{c}_{X}|^{2}), \quad d \equiv - g^{\dagger}m_{D}\phi^{c\dagger}_{+}, \quad e \equiv - m_{D}(g^{\dagger}\phi^{c\dagger}_{X}+2m_{L}), \nonumber \\ \alpha &\equiv& -|g|^{2}( K^{\dagger}_{3} + \phi^{c\dagger}_{+}\phi^{c}_{X} ), \quad \beta \equiv -g^{\dagger}m_{D}\phi^{c}_{-} - |g|^{2}( K_{2} + \phi^{c}_{+}\phi^{c}_{X} ). \end{eqnarray} These $a,b,c,d,e,\alpha,\beta$ are the matrix elements of $\Omega^{B}$ ( see (25) ). We have also set the four-momentum $p_{\nu}=0$ in the bosonic matrices. We now examine the stability conditions of $M^{B}_{X\pm}$. In particular, we are interested in their behavior in the vicinity of the classical solution under the seesaw condition $|g|^{2}|\phi^{c}_{X}|^{2}\gg m^{2}_{D}\gg m^{2}_{L}$. A direct evaluation from (46) with (7), (20), (21), employing the Popov approximation $J_{2}=K_{2}=K_{3}=0$, gives \begin{eqnarray} (M^{B}_{X+})^{2} &=& (M^{B}_{X-})^{2} \nonumber \\ &=& -\frac{1}{(m^{2}_{D}+4m^{2}_{L})(3gf+\frac{m^{4}_{D}}{4m^{2}_{L}}+\frac{m^{3}_{D}}{2m_{L}})} \nonumber \\ & & \qquad \times \Bigl[ 48g^{2}f^{2}m^{2}_{L} + 2g^{2}f^{2}m^{2}_{D} + 8gfm^{3}_{D}m_{L} + 6gfm^{4}_{D} + 3gf\frac{m^{5}_{D}}{m_{L}} + \frac{gf}{2}\frac{m^{6}_{D}}{m^{2}_{L}} \Bigr]. \end{eqnarray} Hence, if we take into account the seesaw condition $m_{D}\gg m_{L}$, we find that $g>0$ and $f<0$ is the stability condition for $(M^{B}_{X\pm})^{2}$. 
A rough estimate gives \begin{eqnarray} (M^{B}_{X\pm})^{2} &\sim& -2gf \sim {\cal O}(gf). \end{eqnarray} Taking into account (45) and (48), we conclude that the generalized seesaw mechanism can take place for $f<0$ and $g>0$ ( we have assumed $|gf|(m_{L}/m^{2}_{D})\ll 1$ ). It is worth noticing that $(M^{B}_{X\pm})^{2}\sim -m^{4}_{D}/8m^{2}_{L}$ ( negative ) if we set $K_{1}=0$; thus the HFB self-energy $K_{1}$ is important for the stability of the potential at the classical solution. Finally, one finds \begin{eqnarray} E^{B}_{X\pm} &=& \sqrt{\mbox{\boldmath $k$}^{2}+(M^{B}_{X\pm})^{2}}, \end{eqnarray} and we obtain the one-loop contribution of the scalar sector as follows: \begin{eqnarray} V^{B(1)} &=& \frac{i}{2}{\rm Tr} (k_{0}-E^{B}_{X+})(k_{0}+E^{B}_{X+})(k_{0}-E^{B}_{X-})(k_{0}+E^{B}_{X-}) \nonumber \\ & & \quad \times (k_{0}-E^{B}_{M1+})(k_{0}+E^{B}_{M1+})(k_{0}-E^{B}_{M1-})(k_{0}+E^{B}_{M1-}) \nonumber \\ & & \quad \times (k_{0}-E^{B}_{M2+})(k_{0}+E^{B}_{M2+})(k_{0}-E^{B}_{M2-})(k_{0}+E^{B}_{M2-}). \end{eqnarray} \vspace{3mm} It is well known that naive dimensional regularization, which is suitable for preserving gauge invariance in non-SUSY gauge theories, breaks SUSY through the regularization. Circumventing this problem is relatively easy in non-gauge models, while it is severe in SUSY gauge theories, where the method of "dimensional reduction" regularization seems more suitable~[18]. Since the Lagrangian considered here is not a gauge model, we employ a simple cutoff scheme to regularize the integrals. 
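The closed-form expression for $(M^{B}_{X\pm})^{2}$ above and the rough estimate $-2gf$ can be checked numerically; the sketch below is ours, with arbitrary placeholder values chosen only to satisfy the seesaw hierarchy $m_{D}\gg m_{L}$:

```python
import math

def msq_X(m_D, m_L, g, f):
    """(M^B_{X+-})^2 at the classical solution with the Popov
    approximation J2 = K2 = K3 = 0, transcribed from the closed form."""
    denom = (m_D**2 + 4 * m_L**2) * (3 * g * f + m_D**4 / (4 * m_L**2)
                                     + m_D**3 / (2 * m_L))
    brack = (48 * g**2 * f**2 * m_L**2 + 2 * g**2 * f**2 * m_D**2
             + 8 * g * f * m_D**3 * m_L + 6 * g * f * m_D**4
             + 3 * g * f * m_D**5 / m_L + 0.5 * g * f * m_D**6 / m_L**2)
    return -brack / denom
```

For $g>0$, $f<0$ and $m_{D}\gg m_{L}$ this returns a positive value close to $-2gf$, in accordance with the estimate.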
After performing the four-dimensional momentum integration, the one-loop contribution to the effective potential is obtained as follows: \begin{eqnarray} V^{(1)} &=& \frac{1}{16\pi^{2}}\Bigg[ \Lambda^{2}\Bigl\{M^{B2}_{X+}+M^{B2}_{X-}+M^{B2}_{M1+}+M^{B2}_{M1-}+M^{B2}_{M2+}+M^{B2}_{M2-}-2(M^{F2}_{X}+M^{F2}_{+}+M^{F2}_{-})\Bigr\} \nonumber \\ & & +\Lambda^{4}\ln\frac{(1+M^{B2}_{X+}/\Lambda^{2})(1+M^{B2}_{X-}/\Lambda^{2})(1+M^{B2}_{M1+}/\Lambda^{2})(1+M^{B2}_{M1-}/\Lambda^{2})(1+M^{B2}_{M2+}/\Lambda^{2})(1+M^{B2}_{M2-}/\Lambda^{2})} {(1+M^{F2}_{X}/\Lambda^{2})^{2}(1+M^{F2}_{+}/\Lambda^{2})^{2}(1+M^{F2}_{-}/\Lambda^{2})^{2}} \nonumber \\ & & -M^{B4}_{X+}\ln\Bigl(1+\frac{\Lambda^{2}}{M^{B2}_{X+}}\Bigr)-M^{B4}_{X-}\ln\Bigl(1+\frac{\Lambda^{2}}{M^{B2}_{X-}}\Bigr) +2M^{F4}_{X}\ln\Bigl(1+\frac{\Lambda^{2}}{M^{F2}_{X}}\Bigr) +2M^{F4}_{+}\ln\Bigl(1+\frac{\Lambda^{2}}{M^{F2}_{+}}\Bigr) + 2M^{F4}_{-}\ln\Bigl(1+\frac{\Lambda^{2}}{M^{F2}_{-}}\Bigr) \nonumber \\ & & -M^{B4}_{M1+}\ln\Bigl(1+\frac{\Lambda^{2}}{M^{B2}_{M1+}}\Bigr) -M^{B4}_{M1-}\ln\Bigl(1+\frac{\Lambda^{2}}{M^{B2}_{M1-}}\Bigr) -M^{B4}_{M2+}\ln\Bigl(1+\frac{\Lambda^{2}}{M^{B2}_{M2+}}\Bigr) -M^{B4}_{M2-}\ln\Bigl(1+\frac{\Lambda^{2}}{M^{B2}_{M2-}}\Bigr) \Bigg], \end{eqnarray} where $\Lambda$ denotes the four-momentum cutoff. 
Following the standard procedure for handling the effective potential, namely removing contributions that vanish at $\Lambda\to\infty$ ( there is no divergent constant, due to ${\cal N}=1$ SUSY ), one obtains \begin{eqnarray} V^{(1)} &=& \frac{1}{16\pi^{2}}\Bigg[ \bigl(M^{B}_{X+}\bigr)^{4}\ln\Bigl(\frac{M^{B}_{X+}}{\Lambda}\Bigr)^{2} +\bigl(M^{B}_{X-}\bigr)^{4}\ln\Bigl(\frac{M^{B}_{X-}}{\Lambda}\Bigr)^{2} - 2\bigl(M^{F}_{X}\bigr)^{4}\ln\Bigl(\frac{M^{F}_{X}}{\Lambda}\Bigr)^{2} -2\bigl(M^{F}_{+}\bigr)^{4}\ln\Bigl(\frac{M^{F}_{+}}{\Lambda}\Bigr)^{2} - 2\bigl(M^{F}_{-}\bigr)^{4}\ln\Bigl(\frac{M^{F}_{-}}{\Lambda}\Bigr)^{2} \nonumber \\ & & \quad +\bigl(M^{B}_{M1+}\bigr)^{4}\ln\Bigl(\frac{M^{B}_{M1+}}{\Lambda}\Bigr)^{2} +\bigl(M^{B}_{M1-}\bigr)^{4}\ln\Bigl(\frac{M^{B}_{M1-}}{\Lambda}\Bigr)^{2} +\bigl(M^{B}_{M2+}\bigr)^{4}\ln\Bigl(\frac{M^{B}_{M2+}}{\Lambda}\Bigr)^{2} +\bigl(M^{B}_{M2-}\bigr)^{4}\ln\Bigl(\frac{M^{B}_{M2-}}{\Lambda}\Bigr)^{2} \Bigg], \nonumber \\ & & ( \, M^{B}_{X\pm},M^{F}_{X},M^{B}_{M1\pm},M^{B}_{M2\pm},M^{F}_{\pm}\ll \Lambda \, ). \end{eqnarray} We have arrived at a generalization of the so-called SUSY Coleman-Weinberg potential discussed in Refs.~[11,12] ( see also Ref.~[46] ). In our $V^{(1)}$, the one-loop contribution of the $X$-field is also included. To obtain a one-loop potential that does not diverge in the negative-energy direction in the limit $|\phi^{c}_{X}|\to\infty$~[47], we should impose both \begin{eqnarray} (M^{F}_{\pm})^{2} < (M^{B}_{M1\pm})^{2}, (M^{B}_{M2\pm})^{2} \end{eqnarray} and \begin{eqnarray} (M^{F}_{X})^{2} < (M^{B}_{X\pm})^{2}. \end{eqnarray} We know from (43) that (53) is satisfied, while if \begin{eqnarray} & & \theta_{X} \sim \pi, \quad 1 > g > 0, \quad 0 > f(m_{L}/m^{2}_{D}) > -1, \end{eqnarray} then (54) is satisfied. We should choose the model parameters in accordance with these relations. The mass eigenvalues become degenerate when (7) is used for $\phi^{c}_{\pm}$ together with the Popov approximation. 
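The passage from the cutoff-regularized potential to the form above uses, term by term, the asymptotic identity $-M^{4}\ln(1+\Lambda^{2}/M^{2}) \to M^{4}\ln(M/\Lambda)^{2}$ for $M\ll\Lambda$. A quick numerical check ( the code and values below are purely illustrative ):

```python
import math

def term_cutoff(M, Lam):
    """A boson-type term as it appears in the cutoff-regularized
    potential: -M^4 ln(1 + Lambda^2/M^2)."""
    return -M**4 * math.log(1.0 + Lam**2 / M**2)

def term_renorm(M, Lam):
    """The same term after dropping pieces that vanish as
    Lambda -> infinity: M^4 ln(M/Lambda)^2."""
    return M**4 * math.log((M / Lam)**2)
```

The relative difference between the two forms is of order $M^{2}/\Lambda^{2}$ and is negligible for $M\ll\Lambda$.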
Therefore, we will denote them as follows: \begin{eqnarray} M^{B}_{X} \equiv M^{B}_{X+} = M^{B}_{X-}, \quad M^{B}_{+} \equiv M^{B}_{M1+} = M^{B}_{M2+}, \quad M^{B}_{-} \equiv M^{B}_{M1-} = M^{B}_{M2-}. \end{eqnarray} Since $m_{D},f\gg m_{L}$, we are especially interested in the situation \begin{eqnarray} M^{F}_{X} \ll M^{B}_{X} < M^{F}_{+} < M^{B}_{+} \ll M^{F}_{-} < M^{B}_{-}. \end{eqnarray} This ordering of the mass eigenvalues is the crucial result for our discussion hereafter. \vspace{3mm} Since our effective potential and mass eigenvalues have similarities with those of the Minimal Supersymmetric Standard Model ( MSSM )~[19,20], let us utilize some methods/results from it. In the MSSM, SUSY is explicitly broken by a vacuum energy and several soft mass parameters, whereas it is unbroken at the Lagrangian level in the starting point of our model. Thus the discussion of renormalization becomes simpler than in the MSSM. A renormalization-group-invariant treatment of our $V^{tree}+V^{(1)}$ is subtle, because the potential involves many different mass parameters/scales~[19,20,48-53]. Since our interest is in examining the possibility that the generalized seesaw mechanism is realized in the vicinity of the classical solution (7), taking into account the one-loop contribution (52), we concentrate on the VEV of $\phi_{X}$ under the situation (55). Unfortunately, it is difficult to find a global minimum of the potential $V^{tree}+V^{(1)}$ because it contains many parameters, $\phi^{c}_{\pm}$, $\phi^{c}_{X}$, $J_{1}$ and $K_{1}$, which should be determined variationally. 
For example, if we substitute the classical solution (7) for the condensate $\phi^{c}_{+}$ to reduce the number of variational parameters and then try to find a minimum with respect to the variation of $\phi^{c}_{X}$, the potential might give a non-vanishing vacuum energy ( hence SUSY would appear broken ), because this procedure corresponds to a restriction of the trial functions in the variation: reaching the true vacuum might then be difficult. In the usual renormalization prescription, a running coupling is used to remove the renormalization point from the theory and obtain a physical ( renormalization-group-invariant ) potential, though this procedure is difficult in our case. To make the problem tractable for our purpose, we will use the following definition of $V^{(1)}$~[50,53], taking into account the Appelquist-Carazzone decoupling theorem~[54]: \begin{eqnarray} V^{(1)} &=& \frac{1}{8\pi^{2}}\sum_{l}\Bigl[ \theta(\mu^{2}-(M^{B}_{l})^{2})(M^{B}_{l})^{4}\ln\frac{(M^{B}_{l})^{2}}{\mu^{2}} - \theta(\mu^{2}-(M^{F}_{l})^{2})(M^{F}_{l})^{4}\ln\frac{(M^{F}_{l})^{2}}{\mu^{2}} \Bigr], \nonumber \\ & & ( l = X, +, - ). \end{eqnarray} ( A quite clear example of the decoupling theorem can be found in Ref.~[20]. ) Here, $\theta(x)$ is the Heaviside step function, introduced to define mass thresholds inside the potential. We have changed the regularization method to $\overline{MS}$ ( modified minimal subtraction scheme ). $\mu\equiv e^{3/2}\bar{\mu}$, where $\bar{\mu}$ denotes the $\overline{MS}$ renormalization scale. Of course, $M^{B}_{l}$ and $M^{F}_{l}$ are functions of $\phi^{c}_{X}$. The logarithms appearing in the above equation must satisfy $|\ln(M^{2}/\mu^{2})| < 1$ ( $\mu$: a renormalization point ) to justify our loop expansion. We should find the situation where $V^{(1)}(\mu)=0$ and $\frac{d}{d\mu}(V^{tree}+V^{(1)})=0$ are simultaneously satisfied~[50]. 
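The threshold structure of this definition of $V^{(1)}$ can be made concrete in a short sketch ( ours; the mass values in the check are placeholders chosen only to respect the hierarchy $M^{F}_{X} \ll M^{B}_{X} < M^{F}_{+} < M^{B}_{+} \ll M^{F}_{-} < M^{B}_{-}$ ):

```python
import math

def theta(x):
    """Heaviside step function defining the mass thresholds."""
    return 1.0 if x > 0 else 0.0

def V1(mu, mB, mF):
    """One-loop MS-bar potential with Appelquist-Carazzone decoupling:
    each species l = 'X', '+', '-' contributes only when mu lies above
    its own mass threshold. mB, mF: dicts of boson/fermion masses."""
    V = 0.0
    for l in mB:
        V += theta(mu**2 - mB[l]**2) * mB[l]**4 * math.log(mB[l]**2 / mu**2)
        V -= theta(mu**2 - mF[l]**2) * mF[l]**4 * math.log(mF[l]**2 / mu**2)
    return V / (8.0 * math.pi**2)
```

Below all thresholds the potential vanishes identically, while in the window $M^{F}_{X} < \mu < M^{B}_{X}$ only the lightest fermion contributes, reproducing the single-term form used in the text and giving a positive contribution.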
It is a hard task to descend from the complete theory to the effective theory of the lowest region in (57) with parameters running with $\mu$. In such a top-down approach, as we know from (57), we have six decoupling scales in total until we arrive at the region $\mu^{2}<(M^{F}_{X})^{2}$ where all particles are decoupled, and then $V^{tree}$ alone gives the renormalization-group-invariant potential~[50,53]. Hence, we first wish to consider the case $(M^{F}_{X})^{2} < \mu^{2} < {\rm others}$. In this case, the one-loop contribution to the potential can be written down as follows: \begin{eqnarray} V^{(1)} &=& -\frac{1}{8\pi^{2}}(M^{F}_{X})^{4}\ln\frac{(M^{F}_{X})^{2}}{\mu^{2}}. \end{eqnarray} Here, we have simply assumed that the effect of the decoupled particles is already included through a renormalization of parameters. This $V^{(1)}$ gives a positive contribution to our one-loop potential. After substituting the classical solution (7) into $\phi^{c}_{\pm}$ of this $V^{(1)}$, choosing $\theta_{X}=\pi$, and taking the derivative of $V^{tree}+V^{(1)}$ with respect to $|\phi_{X}|$, we get $\langle\phi^{c}_{X}\rangle = m^{2}_{D}/(2gm_{L})$ from the stationarity condition: the effective field theory at this renormalization point/scale gives the same expression for the VEV of $\phi_{X}$ as its classical solution, and thus exhibits the generalized seesaw mechanism. Needless to say, we also obtain $\langle\phi^{c}_{X}\rangle = m^{2}_{D}/(2gm_{L})$ in the completely decoupled region $\mu^{2}<(M^{F}_{X})^{2}$, because $V^{(1)}=0$ there. Since the improved $V^{tree}$ is the "exact" potential ( satisfying the matching condition )~[50,53], an observer will find that the vacuum is supersymmetric at energy scales $\mu^{2}<(M^{F}_{X})^{2}$. 
We conclude that, for a reasonable choice of model parameters respecting the stability conditions in the vicinity of the classical solution (7) and the convergence of the loop expansion, (7) is certainly robust against quantum corrections and does not receive a radical modification; thus the generalized seesaw mechanism takes place. \section{Superspace Formalism} In this section we calculate the one-loop effective potential in the superfield formalism~[2,17,55], though we have to use the components of superfields at several points of the discussion, especially when considering the BEC. For example, it seems difficult to treat the HFB approximation of the quantum fluctuations of scalars in the superfield formalism, and thus self-energies such as $J_{1}, K_{1}, \cdots$ do not appear in our superspace formalism. This is a problem inherent in the superspace formalism, and as a result, the one-loop contribution of the superspace formalism differs from that of the component-field formalism. Moreover, it seems difficult to examine the generalized seesaw mechanism with our superfield form of the one-loop potential, because the mass eigenvalues of the fermion and boson sectors cannot be derived in a direct manner. Therefore the component-field formalism is better suited to describe the dynamics and physical properties of the scalar sector in the presence of the BEC. The purpose of this section is to make a comparison between the two formalisms; the examination of the generalized seesaw mechanism is beyond the scope of this section. We employ the background field method, the standard method of the superspace formalism~[17], to take into account the BEC in our model: \begin{eqnarray} & & X = X^{c} + \tilde{X}, \quad X^{c} \equiv \phi^{c}_{X} + \theta\theta F^{c}_{X}, \quad \tilde{X} \equiv \tilde{\phi}_{X} + \theta \psi_{X} + \theta\theta\tilde{F}_{X}, \nonumber \\ & & \Phi_{\pm} = \Phi^{c}_{\pm} + \tilde{\Phi}_{\pm}, 
\quad \Phi^{c}_{\pm} \equiv \phi^{c}_{\pm} + \theta\theta F^{c}_{\pm}, \quad \tilde{\Phi}_{\pm} \equiv \tilde{\phi}_{\pm} + \theta \psi_{\pm} + \theta\theta\tilde{F}_{\pm}. \end{eqnarray} As in the component-field formalism of the previous section, we again assume that the condensates $\phi^{c}_{X}$ and $\phi^{c}_{\pm}$ are independent of the spacetime coordinates. The Lagrangian is then converted into the following form: \begin{eqnarray} {\cal L} &=& {\cal L}^{c} + \widetilde{\cal L}, \nonumber \\ {\cal L}^{c} &\equiv& \Bigl( X^{c\dagger}X^{c} + \Phi^{c\dagger}_{+}\Phi^{c}_{+} + \Phi^{c\dagger}_{-}\Phi^{c}_{-} \Bigr)\Big|_{\theta\theta\bar{\theta}\bar{\theta}} + \Bigg[ \Bigl( fX^{c} + \frac{g}{2}X^{c}\Phi^{c}_{+}\Phi^{c}_{+} + m_{D}\Phi^{c}_{+}\Phi^{c}_{-} + m_{L}\Phi^{c}_{-}\Phi^{c}_{-} \Bigr)\Big|_{\theta\theta} + ({\rm h.c.}) \Bigg], \nonumber \\ \widetilde{\cal L} &\equiv& \Bigg[ \tilde{X}^{\dagger}\tilde{X} +\frac{1}{2}\Xi^{\dagger}{\cal M}\Xi -\frac{D^{2}}{4\Box} g\Phi^{c}_{+}\tilde{X}\tilde{\Phi}_{+} -\frac{\overline{D}^{2}}{4\Box} g^{\dagger}\Phi^{c\dagger}_{+}\tilde{X}^{\dagger}\tilde{\Phi}^{\dagger}_{+} \Bigg]\Bigg|_{\theta\theta\bar{\theta}\bar{\theta}}. \end{eqnarray} Here, we have used the relations $\delta^{2}(\bar{\theta})=-D^{2}/4\Box$ and $\delta^{2}(\theta)=-\overline{D}^{2}/4\Box$, valid under the integration of $d^{4}x$ inside the action functional of the theory, and have dropped terms linear in the fluctuating ${\it superfields}$ $\tilde{X}$, $\tilde{\Phi}_{\pm}$. 
We have introduced several matrix notations defined as follows: \begin{eqnarray} \Xi &\equiv& (\tilde{\Phi}_{+},\tilde{\Phi}_{-},\tilde{\Phi}^{\dagger}_{+},\tilde{\Phi}^{\dagger}_{-})^{T}, \nonumber \\ {\cal M} &\equiv& \left( \begin{array}{cc} -\frac{D^{2}\overline{D}^{2}}{16}\otimes\sigma_{0} & -\frac{D^{2}}{4}{\cal C}^{\dagger} \\ -\frac{\overline{D}^{2}}{4}{\cal C} & -\frac{\overline{D}^{2}D^{2}}{16}\otimes\sigma_{0} \end{array} \right)\delta^{8}(z-z'), \quad {\cal C} \equiv m_{D}\otimes\sigma_{1}+m_{L}\otimes\frac{1-\sigma_{3}}{2}+\frac{g}{2}X^{c}\otimes\frac{1+\sigma_{3}}{2}. \end{eqnarray} The sigma matrices $\sigma^{\nu}$ ( the definition: $\sigma^{0}=-1_{2\times 2}$, while $\sigma^{1},\sigma^{2},\sigma^{3}$ are the ordinary Pauli matrices ) act on the two-dimensional chirality space $(+,-)$. The chiral and antichiral delta functions are defined as \begin{eqnarray} \frac{\delta \Phi_{\pm}(z')}{\delta \Phi_{\pm}(z)} = -\frac{\overline{D}^{2}}{4}\delta^{8}(z-z'), \quad \frac{\delta \Phi^{\dagger}_{\pm}(z')}{\delta \Phi^{\dagger}_{\pm}(z)} = -\frac{D^{2}}{4}\delta^{8}(z-z'), \quad z \equiv (x,\theta,\bar{\theta}). \end{eqnarray} The generating functional will be written down in the following form: \begin{eqnarray} {\cal Z} &=& \int {\cal D}\tilde{X}{\cal D}\tilde{X}^{\dagger}{\cal D}\tilde{\Phi}_{+}{\cal D}\tilde{\Phi}^{\dagger}_{+}{\cal D}\tilde{\Phi}_{-}{\cal D}\tilde{\Phi}^{\dagger}_{-}\exp\Bigl[i\int d^{4}x{\cal L} + ({\rm sources}) \Bigr] \nonumber \\ &=& \int {\cal D}\tilde{X}{\cal D}\tilde{X}^{\dagger} \exp\Bigg(i \int d^{4}x \Bigl[ {\cal L}^{c} + \Bigl( \tilde{X}^{\dagger}\tilde{X} \Bigr)_{\theta^{2}\bar{\theta}^{2}} \Bigr] + \frac{i}{2}{\rm Tr}\ln{\cal M} + {\cal G} \Bigg), \nonumber \\ {\cal G} &\equiv& -\frac{i}{2}\int d^{8}z\int d^{8}z' \frac{1}{2}{\cal J}(z){\cal M}^{-1}\delta^{8}(z-z'){\cal J}^{\dagger}(z'). \end{eqnarray} Here, $d^{8}z \equiv d^{4}x d^{2}\theta d^{2}\bar{\theta}$. 
To obtain the final expression of ${\cal Z}$ in (64), we have neglected the contributions of (anti)chiral sources. The definition of ${\cal J}$ is \begin{eqnarray} {\cal J} &\equiv& \left( \begin{array}{cc} g\Phi^{c}_{+}\tilde{X}\otimes\frac{1+\sigma_{3}}{2} & 0 \\ 0 & g^{\dagger}\Phi^{c\dagger}_{+}\tilde{X}^{\dagger}\otimes\frac{1+\sigma_{3}}{2} \end{array} \right). \end{eqnarray} By inserting the components of $X^{c}$ and $\Phi^{c}_{\pm}$, we can confirm that ${\cal L}^{c}$ becomes \begin{eqnarray} {\cal L}^{c} &=& -V^{tree}[\phi^{c}_{\pm},\phi^{c}_{X}]. \end{eqnarray} Hence the tree-level potential is the same in both formalisms. Next, we divide ${\cal M}$ as follows: \begin{eqnarray} & & {\cal M} = {\cal M}_{0} - {\cal M}', \quad {\cal M}_{0} \equiv \left( \begin{array}{cc} -\frac{D^{2}\overline{D}^{2}}{16}\otimes\sigma_{0} & 0 \\ 0 & -\frac{\overline{D}^{2}D^{2}}{16}\otimes\sigma_{0} \end{array} \right), \nonumber \\ & & {\cal M}^{-1}_{0} = \frac{1}{\Box^{2}}\left( \begin{array}{cc} -\frac{D^{2}\overline{D}^{2}}{16}\otimes\sigma_{0} & 0 \\ 0 & -\frac{\overline{D}^{2}D^{2}}{16}\otimes\sigma_{0} \end{array} \right), \quad {\cal M}' \equiv \left( \begin{array}{cc} 0 & +\frac{D^{2}}{4}{\cal C}^{\dagger} \\ +\frac{\overline{D}^{2}}{4}{\cal C} & 0 \end{array} \right). \end{eqnarray} The one-loop effective action $\frac{i}{2}{\rm Tr}\ln{\cal M}$ is evaluated to be \begin{eqnarray} \Gamma^{(1)}_{(super)} &\equiv& \frac{i}{2}{\rm Tr}\ln{\cal M} = \frac{i}{2}\ln{\rm Det}{\cal M}_{0} + \frac{i}{2}{\rm Tr}\ln(1-{\cal M}^{-1}_{0}{\cal M}') = \frac{i}{2}{\rm Tr}\ln\Bigl( 1 -\frac{1}{\Box}{\cal M}'\Bigr)\Box{\cal M}^{-1}_{0} \nonumber \\ &=& \lim_{z'\to z}\frac{i}{2}{\rm tr}\int d^{8}z \ln\Bigl[ 1 -\frac{1}{\Box}{\cal C}^{\dagger}{\cal C} \Bigr]\frac{D^{2}\overline{D}^{2}}{16\Box}\delta^{8}(z-z'). \end{eqnarray} We have dropped $\frac{i}{2}\ln{\rm Det}{\cal M}^{-1}_{0}$ because it does not contribute to $\Gamma^{(1)}_{(super)}$. 
The relation ${\cal M}^{-1}_{0}{\cal M}^{-1}_{0}=\Box^{-1}{\cal M}^{-1}_{0}$ and the commutator $[{\cal M}^{-1}_{0},{\cal M}']=0$ have been used. From the following identity in superspace, \begin{eqnarray} \frac{D^{2}\overline{D}^{2}}{16}\delta^{2}(\theta-\theta')\delta^{2}(\bar{\theta}-\bar{\theta}')\Big|_{\theta=\theta',\bar{\theta}=\bar{\theta}'} = 1, \end{eqnarray} the effective potential is found to be \begin{eqnarray} V^{(1)}_{(super)} &\equiv& -\frac{\Gamma^{(1)}_{(super)}}{\int d^{4}x} = \frac{i}{2}{\rm tr}\int d^{2}\theta d^{2}\bar{\theta}\int\frac{d^{4}p}{(2\pi)^{4}}\frac{1}{p^{2}}\ln\Bigl(p^{2}+{\cal C}^{\dagger}{\cal C}\Bigr) \nonumber \\ &=& \frac{1}{2}{\rm tr}\Bigg[ \Lambda^{2}\ln\Bigl(1+\frac{{\cal C}^{\dagger}{\cal C}}{\Lambda^{2}}\Bigr) +{\cal C}^{\dagger}{\cal C}\ln\Bigl(1+\frac{\Lambda^{2}}{{\cal C}^{\dagger}{\cal C}}\Bigr)\Bigg]_{\theta^{2}\bar{\theta}^{2}} \nonumber \\ &=& \frac{|g|^{2}}{4}|F^{c}_{X}|^{2}\ln\Bigl(1 + \frac{\Lambda^{2}}{|M_{\cal C}|^{2}} \Bigr) - \Bigl[\frac{|g|^{2}}{4}\Bigr]^{2}|F^{c}_{X}|^{2}|\phi^{c}_{X}|^{2}\frac{\Lambda^{2}}{(\Lambda^{2}+|M_{\cal C}|^{2})|M_{\cal C}|^{2}} \nonumber \\ &\approx& |F^{c}_{X}|^{2}\frac{|g|^{2}}{4}\ln \frac{\Lambda^{2}}{|M_{\cal C}|^{2}}, \quad ( \, \Lambda \to \infty \, ), \nonumber \\ |M_{\cal C}|^{2} &=& m^{2}_{D} + m^{2}_{L} + \frac{|g|^{2}}{4}|\phi^{c}_{X}|^{2}. \end{eqnarray} Of course, the mass dimension of $V^{(1)}_{(super)}$ is $[{\rm mass}]^{4}$. Both $V^{tree}$ and $V^{(1)}_{(super)}$ vanish simultaneously at the classical vacuum (7), and SUSY is not broken. To make the calculation of ${\cal G}$ in (64) tractable, we approximate ${\cal M}^{-1}$ by replacing ${\cal C}\to m_{D}$. 
Then we get \begin{eqnarray} {\cal G} &=& -\frac{i}{4} \Bigg[ \int d^{8}z |g|^{2}|\Phi^{c}_{+}|^{2}\Bigl( \tilde{X}\frac{1}{\Box-m^{2}_{D}}\tilde{X}^{\dagger} + \tilde{X}^{\dagger}\frac{1}{\Box-m^{2}_{D}}\tilde{X}\Bigr) \nonumber \\ & & \quad + \int d^{6}z (g)^{2}(\Phi^{c}_{+})^{2}\Bigl( \tilde{X}\frac{m_{D}}{\Box-m^{2}_{D}}\tilde{X} \Bigr) + \int d^{6}\bar{z} (g^{\dagger})^{2}(\Phi^{c\dagger}_{+})^{2}\Bigl( \tilde{X}^{\dagger}\frac{m_{D}}{\Box-m^{2}_{D}}\tilde{X}^{\dagger} \Bigr) \Bigg]. \end{eqnarray} Obviously, ${\cal G}$ includes a K\"{a}hler potential and (anti)chiral superpotentials of the fluctuating $\tilde{X}$-field. From a consideration based on the Wick theorem, one finds that $(\Box-m^{2}_{D})^{-1}$ in the K\"{a}hler potential corresponds to the propagator $\langle T\tilde{\phi}_{+}\tilde{\phi}^{\dagger}_{+}\rangle$, while $m_{D}/(\Box-m^{2}_{D})$ in the chiral and antichiral superpotential parts of ${\cal G}$ comes from $\langle T\tilde{F}_{+}\tilde{\phi}_{+}\rangle$ and $\langle T\tilde{F}^{\dagger}_{+}\tilde{\phi}^{\dagger}_{+}\rangle$, respectively. ( An examination of the mass dimensions of these propagators is also helpful. ) Because $(\Phi^{c}_{+})^{2}\tilde{X}$ or $(1/(\Box -m^{2}_{D}))\tilde{X}$ are chiral superfields, $-\frac{D^{2}}{4\Box}$ can be inserted between them inside the integration $\int d^{8}z$. Therefore we get \begin{eqnarray} \int d^{8}z \tilde{X}^{\dagger}\tilde{X} -i{\cal G} &=& \frac{1}{2}\int d^{8}z (\tilde{X}^{\dagger},\tilde{X}){\cal M}_{X}\left( \begin{array}{c} \tilde{X} \\ \tilde{X}^{\dagger} \end{array} \right), \nonumber \\ {\cal M}_{X} &\equiv& \left( \begin{array}{cc} 1-\frac{g^{2}}{2}|\Phi^{c}_{+}|^{2}\frac{1}{\Box -m^{2}_{D}} & -\frac{g^{2}}{2}(\Phi^{c\dagger}_{+})^{2}(-\frac{\overline{D}^{2}}{4\Box})\frac{m_{D}}{\Box -m^{2}_{D}} \\ -\frac{g^{2}}{2}(\Phi^{c}_{+})^{2}(-\frac{D^{2}}{4\Box})\frac{m_{D}}{\Box -m^{2}_{D}} & 1-\frac{g^{2}}{2}|\Phi^{c}_{+}|^{2}\frac{1}{\Box -m^{2}_{D}} \end{array} \right). 
\end{eqnarray} Integration over ${\cal D}\tilde{X}{\cal D}\tilde{X}^{\dagger}$ gives ${\rm Det}^{-1}{\cal M}_{X}$, and this determinant is a polynomial in $F^{c}_{+}$ and $F^{c\dagger}_{+}$. Because $F^{c}_{+}=F^{c\dagger}_{+}=0$ at (7), the one-loop contribution of ${\rm Det}^{-1}{\cal M}_{X}$ also vanishes, and we conclude that SUSY is not broken at the classical solution (7). \section{Conclusion} In summary, we have examined the mass spectra of the scalars and spinors of the modified O'Raifeartaigh model through an evaluation of the one-loop effective potential in the component-field formalism, especially in the vicinity of the classical solution (7) of the model, in the context of the generalized seesaw mechanism. The BEC in the scalar sector has been considered, while the spinor sector has a mathematical similarity with the relativistic theory of superconductivity~[56,57]. Therefore, parts of our formulation resemble the theory of supersymmetric (color-)superconductivity~[58,59], though their intrinsic dynamics are quite different. We have emphasized that introducing a left-handed Majorana mass term ( namely, the modification of Refs.~[11,12] ) into the ordinary O'Raifeartaigh model makes it possible to examine the mass spectra relevant to the generalized seesaw mechanism of neutrinos. Our one-loop calculation of the effective potential in the component-field formalism indicates that SUSY is not broken in the theory, owing to the absence of an NG fermion, and we have confirmed that SUSY is not broken at the classical vacuum of the one-loop potential in the superfield formalism. \vspace{3mm} In this paper, we have discussed several VEVs of scalars. It is interesting to consider possible relations between the inflaton of cosmology, the scalar fields of (1), a ( generalized ) seesaw mechanism of neutrinos, and spontaneous SUSY breaking. 
The scalar field $\phi_{X}$ seems to play a special role in the determination of the local/global minima of the effective potential of (1), while its VEV determines the right-handed Majorana mass parameter of our theory. Thus a possible relation between $\phi_{X}$ and an inflaton is an interesting subject for further investigation. The lightest fermion, the $\psi_{X}$-field, might also be a candidate for dark matter. In the strong CP problem of QCD, the theta angle is the central issue behind the axion. An investigation of a relation between $\theta_{X}$ and the QCD theta angle ( and also the Peccei--Quinn mechanism~[60] ) is far beyond the scope of this paper, though it is also an interesting problem.
\section{Introduction} The extensive efforts over the years to reconcile thermodynamics with quantum mechanics~\cite{gemmerbook,mahlerbook,alicki1979quantum,kosloff1984quantum,kosloff2013quantum} have not yet fully resolved the fundamental question: What is truly quantum about quantum thermodynamics? Is there more to it than just restating traditional thermodynamic principles for quantized systems? Attempts to cope with this problem have revolved around possible quantum resources that may boost the thermodynamic performance of heat machines as compared to their classical counterparts. A prime contender for such a resource is quantum coherence~\cite{agarwal2001quantum,scully2003extracting,kozlov2006inducing,deffner2013information,dorfman2013photosynthetic,tscherbul2014long,uzdin2015quantum}. Intriguing schemes have predicted power and/or efficiency increase in quantum heat engines~\cite{scully2003extracting} due to sustainable (steady-state) coherence in non-thermal baths, as well as coherence-enhanced performance of photovoltaic solar heat converters~\cite{scully2010quantum,scully2011quantum,svidzinsky2011enhancing,svidzinsky2012enhancing,creatore2013efficient} or coherent effects in photosynthesis~\cite{dorfman2013photosynthetic,dijkstra2015coherent}. According to an interesting view~\cite{deffner2013information}, coherence in thermodynamics acts as an information reservoir or Maxwell's demon, i.e., as an extra resource that can tip the entropy balance and the division of input energy between heat and work in favor of the latter~\cite{szilard1929ueber,landauer1961irreversibility}. \par Here we explore the thermodynamics of multilevel systems with excited-state degeneracy, which is a prerequisite for the persistence of interlevel quantum coherence in steady state imposed by \emph{thermal baths}~\cite{agarwal2001quantum,kozlov2006inducing,tscherbul2014long,gelbwaser2014power}. 
Our goal is to address the question: To what extent is coherence an asset when such systems are employed as working media in heat machines? \par We show that the bounds on the efficiency and power output in the presence of coherence are general and common to all cycles, i.e., reciprocating cycles that consist of consecutive strokes and continuous cycles. Similar thermodynamic performance bounds are shared by multilevel systems whose excited states are degenerate and transition dipoles are perfectly aligned, and by multipartite Dicke systems~\cite{dicke1954coherence}. These general considerations (Sec.~\ref{sec_H_SB}) are followed (Secs.~\ref{sec_system}--\ref{sec_ss}) by a study of a quantum heat machine containing a working medium that has $N-1$ degenerate upper levels, whose transition dipoles to the ground state may not all be aligned. The system is constantly coupled to two spectrally distinct, hot and cold, thermal baths and is periodically modulated (Stark-shifted) by an external field. This modulation acts as a piston in the heat machine~\cite{gelbwaser2013minimal}. Our objective is to study the steady-state operation of such a heat machine, i.e., the limit cycle of its dissipative evolution, by deriving its heat currents, power, and efficiency in the usual regime of weak system-bath coupling~\cite{gemmerbook,mahlerbook,alicki1979quantum,kosloff1984quantum,kosloff2013quantum,carmichaelbook,breuerbook,gorini1976completely,lindblad1976generators} (Sec.~\ref{sec_heat_currents}). \par The present theory extends our previous study of heat machines whose working medium is a periodically modulated two-level system (TLS) that is continuously coupled to two (hot and cold) baths~\cite{gelbwaser2013minimal,alicki2014quantum,gelbwaser2014power} (see also Ref.~\cite{gelbwaser2015thermodynamics} for a recent review). 
The merit of such models is that they are amenable to a full quantum-mechanical analysis by the Floquet method~\cite{alicki2012periodically,kolar2012quantum,gelbwaser2013minimal}. Moreover, their continuous-cycle operation avoids the difficulty of ensuring compatibility with the laws of thermodynamics, in contrast to machines that are operated via reciprocating cycles that consist of four strokes (e.g., the Carnot- or the Otto cycle), where, alternately, only a hot or a cold bath is coupled to the working medium at any time~\cite{schwablbook}. The difficulty therein is to properly account for the highly nonadiabatic energy and entropy flows induced by frequent on- and off-switching of interactions with alternate heat baths in consecutive strokes. By contrast, continuous-cycle, periodically-driven, heat machines can in a straightforward manner be made compatible with the first and second laws of thermodynamics regardless of their nonadiabaticity~\cite{gelbwaser2013minimal,kosloff2013quantum}. \par Steady-state coherences between upper levels are shown here to arise as a consequence of the thermalization of the initial state under the condition of strict transition-dipole alignment. However, thermalization may then be partially blocked, i.e., be incomplete, by the mere presence of dark states. For this reason we introduce the notion of \emph{thermalization capability} of the initial state. We show that this capability should be maximized, by avoiding the initial population of dark states, in order to cause maximal power enhancement by the multilevel heat machine, as compared to its two-level counterpart. We argue that it is the thermalization capability rather than steady-state coherence that underlies the power-enhancement mechanism: The key resource of the multilevel heat machine is the thermalization of transitions that share a \emph{common} ground state, and are thereby correlated, whether coherently or incoherently. 
Similar principles govern heat machines based on a multiatom Dicke system, notwithstanding their different dynamics (Sec.~\ref{sec_dicke}). \par The heat currents and the power are analytically shown (Secs.~\ref{sec_heat_currents} and~\ref{sec_dicke}) to be both strongly boosted by the same enhancement factor with respect to a single two-level system. Consequently, the efficiency of the multilevel or multipartite heat machine equals that of its two-level counterpart and thus adheres to the Carnot efficiency bound, which is reached at zero power---the operating point at which the quantum heat engine is transformed into a quantum refrigerator~\cite{gelbwaser2013minimal}. The Curzon--Ahlborn limit~\cite{curzon1975efficiency} for the efficiency of classical heat engines at maximum power, however, can be surpassed for appropriately engineered bath spectra and carefully chosen temperatures~\cite{gelbwaser2013minimal}. \par In Sec.~\ref{sec_realizations} we discuss possible realizations of the pertinent models and their limitations. Notably, we point out that the $N$-atom Dicke system may be the most straightforward experimental implementation of the effects predicted in this work. In Sec.~\ref{sec_conclusions_and_outlook} the conclusions of this work are presented. \section{Thermodynamic r\^ole of coherences and dark states}\label{sec_H_SB} \subsection{Model} \par \begin{figure} \centering \includegraphics[width=\columnwidth]{system_nlevels_particles_sketch} \caption{(Color online) (a) Sketch of a heat machine. In a reciprocating-cycle implementation the cold and hot baths as well as the piston are alternately coupled to the working fluid during specific strokes, whereas in continuous-cycle operation the system interacts with both baths as well as the piston at all times. The sketch shows the heat machine operating as an engine. In the refrigeration mode all energy flows (arrows) have to be reversed. 
(b) A quantum heat machine based on an $N$-level system with ($N-1$)-fold excited-state degeneracy that constantly interacts with a cold (dashed blue) and a hot (solid red) heat bath. The transition frequency is periodically modulated to allow for power extraction (heat engine) or supply (refrigerator). (c) An analogous quantum heat machine based on an ensemble of two-level atoms.}\label{fig_system} \end{figure} \par We consider a generalized $V$-type $N$-level system consisting of $N-1$ degenerate excited states $\ket{1},\dots,\ket{N-1}$ and a common ground state $\ket{0}$. The transition frequency between the excited states and the ground state is denoted by $\omega_0$. This system is assumed to be coupled (alternately or constantly) to cold and hot heat baths and periodically driven (frequency-modulated) by a piston (see Fig.~\ref{fig_system}). Energy can flow from and to the baths via absorption and (stimulated and spontaneous) emission, respectively, of thermal bath quanta. \par The system-bath interaction Hamiltonian is assumed to be of a generalized dipole-coupling form~\cite{breuerbook} that is expressed in the rotating-wave approximation as~\cite{gelbwaser2014power} \begin{equation}\label{eq_H_SB} H_\mathrm{SB}=\sum_{j=1}^{N-1}\sum_{i\in\{\mathrm{c},\mathrm{h}\}}\left(\sigma_+^j\otimes\mathbf{d}_j\cdot\mathbf{B}_i+\sigma_-^j\otimes\mathbf{d}_j^*\cdot\mathbf{B}_i^\dagger\right), \end{equation} in terms of the excitation and de-excitation Pauli operators \begin{subequations}\label{eq_splus} \begin{align} \sigma_+^j&\mathrel{\mathop:}=\ketbra{j}{0}\\ \sigma_-^j&\mathrel{\mathop:}=\ketbra{0}{j}, \end{align} \end{subequations} the (possibly complex) dipole moments for the $j$th transition $\mathbf{d}_j$, and the coupling operators of the cold (hot) heat bath $\mathbf{B}_\mathrm{c}$ ($\mathbf{B}_\mathrm{h}$). 
We adopt the notation \begin{equation}\label{eq_def_alpha} \alpha_j\mathrel{\mathop:}=|\mathbf{d}_j|/|\mathbf{d}_1|, \end{equation} where the strength of the largest transition dipole $|\mathbf{d}_1|$ serves as reference (i.e., $\alpha_j\leq 1$). We further define the dipole alignment matrix \begin{equation}\label{eq_def_pij} \mathfrak{p}_{ij}e^{i\varphi_{ij}}\mathrel{\mathop:}=\frac{\mathbf{d}_{i}^*\cdot\mathbf{d}_{j}}{|\mathbf{d}_{i}||\mathbf{d}_{j}|}\stackrel{\mathbf{d}_{i},\mathbf{d}_{j}\in\mathbb{R}^3}{\equiv} \cos\measuredangle(\mathbf{d}_{i},\mathbf{d}_{j}), \end{equation} where $\mathfrak{p}_{ij}\in[0,1]$, which encodes the directional configuration of the $N-1$ dipoles. This matrix is Hermitian, as $\mathfrak{p}_{ij}=\mathfrak{p}_{ji}$ and $\varphi_{ij}=-\varphi_{ji}$. In what follows we will for brevity denote dipoles $\mathbf{d}_i$ and $\mathbf{d}_j$ with $\mathfrak{p}_{ij}=1$ as being parallel even if they differ by a phase ($\varphi_{ij}\neq 0$). \par Prior to discussing in detail a possible setup of such a heat machine, we infer from the system-bath interaction Hamiltonian~\eqref{eq_H_SB} some general results concerning the thermodynamic r\^ole of coherences in the working medium. \subsection{Collective-states basis} Owing to the degeneracy of the excited states of the system Hamiltonian \begin{equation}\label{eq_introduction_H_S} H_\mathrm{S}=\hbar\omega_0\sum_{j=1}^{N-1}\sigma_+^j\sigma_-^j, \end{equation} one is free to choose any (rotated) basis within this excited manifold, as the rotated states will still be energy eigenstates of the Hamiltonian~\eqref{eq_introduction_H_S}. 
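Although not needed for the derivation, the geometric quantities~\eqref{eq_def_alpha} and~\eqref{eq_def_pij} are straightforward to evaluate numerically. The following sketch (with arbitrarily chosen complex dipole vectors, purely for illustration) verifies that $\mathfrak{p}_{ij}\in[0,1]$ and that the alignment matrix is Hermitian:

```python
import numpy as np

# Illustrative (arbitrarily chosen) complex dipole vectors d_1, d_2, d_3;
# d_1 is the strongest transition and serves as the reference.
d = [np.array([1.0, 0.0, 0.0]),
     np.array([0.5j, 0.5, 0.0]),
     np.array([0.3, 0.0, 0.4j])]

norms = [np.linalg.norm(v) for v in d]
alpha = [n / norms[0] for n in norms]        # alpha_j = |d_j| / |d_1|

# Alignment matrix p_ij * exp(i phi_ij) = d_i^* . d_j / (|d_i| |d_j|);
# note that np.vdot conjugates its first argument.
M = np.array([[np.vdot(di, dj) / (ni * nj)
               for dj, nj in zip(d, norms)] for di, ni in zip(d, norms)])
p, phi = np.abs(M), np.angle(M)

assert np.all((0.0 <= p) & (p <= 1.0 + 1e-12))   # Cauchy-Schwarz: p_ij in [0, 1]
assert np.allclose(M, M.conj().T)                # Hermiticity: p_ij = p_ji, phi_ij = -phi_ji
assert np.allclose(np.diag(p), 1.0)              # each dipole is parallel to itself
```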
Assuming that all dipoles are parallel and (for simplicity) of equal strength and real, we then transform the system-bath Hamiltonian~\eqref{eq_H_SB} to the basis consisting of the ground state $\ket{0}$, the collective ``bright'' state~\cite{ficek2004simulating} \begin{equation}\label{eq_introduction_psib} \ket{\psi_\mathrm{b}}\mathrel{\mathop:}=\frac{1}{\sqrt{N-1}}\sum_{j=1}^{N-1}\ket{j}, \end{equation} and $N-2$ rotated excited states that are orthogonal to the latter. The system-bath interaction Hamiltonian~\eqref{eq_H_SB} then adopts the form \begin{equation}\label{eq_introduction_H_SB_parallel} H_\mathrm{SB}^\mathrm{parallel}=\sum_{i\in\{\mathrm{c},\mathrm{h}\}}\sqrt{N-1}\left(\ketbra{\psi_\mathrm{b}}{0}\otimes\mathbf{d}_1\cdot\mathbf{B}_i+\mathrm{H.c.}\right). \end{equation} Formally, this Hamiltonian describes a \emph{single two-level system} formed by $\ket{0}$ and $\ket{\psi_\mathrm{b}}$, whose interaction with the baths has a dipole moment \emph{enhanced} by a factor of $\sqrt{N-1}$, which is responsible for superradiance~\cite{dicke1954coherence,gross1982superradiance}. The remaining $N-2$ states, which are orthogonal to $\ket{0}$ and $\ket{\psi_\mathrm{b}}$, are not accessible by this Hamiltonian: They are \emph{dark states} with respect to the (dipolar) system-bath interaction. Consequently, if the $N$-level system with parallel dipoles is initially prepared in one of these dark states, it does not exchange heat with the baths, and its state remains invariant under the action of the system-bath interaction Hamiltonian. We therefore anticipate the result (elaborated further on) that the steady-state solution for parallel dipoles strongly depends on the overlap of the initial state $\rho(0)$ with these dark states, i.e., on the initial value $\ew{\Pi_\mathrm{d}}_{\rho(0)}$ of the dark-state projector. 
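The collective enhancement can be verified at the operator level. The sketch below (illustrative choice $N=5$; equal transition strengths; the common dipole vector is factored out of the coupling operator) confirms that $\ket{0}$ couples exclusively to the bright state~\eqref{eq_introduction_psib}, with amplitude $\sqrt{N-1}$, and that excited states orthogonal to it are dark:

```python
import numpy as np

# Sketch (illustrative N = 5): ground state |0> plus N-1 degenerate excited
# states with parallel, equal-strength, real dipoles.
N = 5
ket = lambda j: np.eye(N)[j]

S_plus = sum(np.outer(ket(j), ket(0)) for j in range(1, N))   # sum_j |j><0|
S_minus = S_plus.conj().T

# Bright state: equal-weight superposition of the excited states
psi_b = sum(ket(j) for j in range(1, N)) / np.sqrt(N - 1)

# |0> couples only to |psi_b>, with amplitude enhanced by sqrt(N-1)
assert np.allclose(S_plus @ ket(0), np.sqrt(N - 1) * psi_b)

# Any excited state orthogonal to |psi_b> is dark (annihilated by S_minus)
psi_d = (ket(1) - ket(2)) / np.sqrt(2)
assert abs(psi_d @ psi_b) < 1e-12
assert np.allclose(S_minus @ psi_d, 0.0)
```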
\par If, however, the initial state of the multilevel system is non-dark, then the rate of quanta exchange of the \emph{effective} TLS (formed by $\ket{\psi_\mathrm{b}}$ and $\ket{0}$) with the baths, $\gamma_\mathrm{b}$, is enhanced by a factor of $N-1$ compared to the spontaneous-emission rate $\gamma_1$ of a TLS consisting of the states $\ket{0}$ and $\ket{1}$. This can be deduced from the (dissipative) master-equation description for the reduced density matrix of the system, based on the interaction Hamiltonian~\eqref{eq_introduction_H_SB_parallel}: The spontaneous-emission (decay) rate scales with the square of the transition-dipole moment~\cite{carmichaelbook} and therefore $\gamma_\mathrm{b}=(N-1)\gamma_1$ with $\gamma_1\propto|\mathbf{d}_1|^2$ (Sec.~\ref{subsec_parallel}). \subsection{Power and efficiency bounds}\label{sec_introduction_efficiency} Let us consider the implications of the collective enhancement for the heat currents, i.e., the rate of energy exchange between the system and the baths, and the power of the heat machine. The heat currents must be proportional to the rates of absorption and emission of quanta from and to the baths, respectively~\cite{alicki1979quantum,kosloff1984quantum,boukobza2006thermodynamics}. This proportionality should be independent of the actual implementation of the heat machine, i.e., whether both baths are continuously coupled to the system or only one at a time in a reciprocating cycle. \par Due to the enhancement of the decay rate for parallel dipoles in a bright state, the ``cold'' ($J_\indexc$) and the ``hot'' ($J_\indexh$) heat currents $J_i=\dot{Q}_i$, $Q_i$ being the heat exchange with the $i$th bath, will be equally enhanced compared to their two-level counterparts, as \begin{subequations}\label{eq_introduction_currents_tls} \begin{align} J_\indexc&=(N-1)J_\indexc^\mathrm{TLS}\\ J_\indexh&=(N-1)J_\indexh^\mathrm{TLS}. 
\end{align} \end{subequations} The first law of thermodynamics (energy conservation)~\cite{alicki1979quantum,kosloff2013quantum,gelbwaser2013minimal} then implies that the power is enhanced by the same factor, \begin{equation}\label{eq_introduction_power} \dot{W}=-(J_\indexc+J_\indexh)=(N-1)\dot{W}^\mathrm{TLS}. \end{equation} Equations~\eqref{eq_introduction_currents_tls} and~\eqref{eq_introduction_power} imply that although an enhancement of the output power is expected, the efficiency of the multilevel heat engine, which is defined by the ratio of the extracted power to the invested ``hot'' current~\cite{schwablbook}, \begin{equation}\label{eq_introduction_eta} \eta\mathrel{\mathop:}=\frac{|\dot{W}|}{J_\indexh}=\frac{|\dot{W}^\mathrm{TLS}|}{J_\indexh^\mathrm{TLS}}\equiv 1-\frac{|J_\indexc^\mathrm{TLS}|}{J_\indexh^\mathrm{TLS}}, \end{equation} remains the \emph{same} as for a TLS-based heat engine where coherence is absent. Notably, the Carnot bound~\cite{schwablbook} is adhered to, by virtue of Eq.~\eqref{eq_introduction_power}, Eq.~\eqref{eq_introduction_eta} and the second-law condition~\cite{spohn1978entropy} \begin{equation}\label{eq_introduction_second_law} \frac{J_\indexc}{T_\indexc}+\frac{J_\indexh}{T_\indexh}\leq 0. \end{equation} \par The assumptions in the foregoing discussion were the degeneracy of the excited states as well as the dipolar-coupling form of $H_\mathrm{SB}$ between the system and both thermal baths. Any increase of the efficiency compared to Eq.~\eqref{eq_introduction_eta} would require different enhancement of $J_\indexc$ and $J_\indexh$, such that the enhancement $c_\mathrm{h}$ of the ``hot'' current $J_\indexh=c_\mathrm{h}J_\indexh^\mathrm{TLS}$ exceeds its ``cold'' current counterpart $c_\mathrm{c}$, i.e., $c_\mathrm{h}>c_\mathrm{c}$. This may only be possible for different coupling Hamiltonians between the system and the two thermal baths, $H_\mathrm{SB}^\mathrm{c}$ and $H_\mathrm{SB}^\mathrm{h}$, unlike Eq.~\eqref{eq_H_SB}. 
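As an elementary numerical illustration of Eqs.~\eqref{eq_introduction_currents_tls}--\eqref{eq_introduction_second_law} (with hypothetical, arbitrarily chosen values for the TLS currents and bath temperatures), a common enhancement factor leaves the efficiency, the Carnot bound, and the second law intact:

```python
# Hypothetical, illustrative numbers only (arbitrary units): TLS heat
# currents in the engine regime (heat flows in from the hot bath and out
# into the cold bath) and bath temperatures.
N = 5
J_h_tls, J_c_tls = 1.0, -0.7
T_h, T_c = 4.0, 1.0

J_h, J_c = (N - 1) * J_h_tls, (N - 1) * J_c_tls   # common enhancement factor N-1
W_dot = -(J_c + J_h)                               # first law: power balance
W_dot_tls = -(J_c_tls + J_h_tls)

eta = abs(W_dot) / J_h                             # efficiency of the multilevel machine
eta_tls = abs(W_dot_tls) / J_h_tls

assert abs(eta - eta_tls) < 1e-12                  # efficiency is unchanged
assert eta <= 1.0 - T_c / T_h                      # Carnot bound
assert J_c / T_c + J_h / T_h <= 0.0                # second law (Spohn inequality)
```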
However, the Carnot bound, which is a corollary of the first and second laws~\cite{schwablbook}, is upheld regardless of these assumptions since even if such enhancement can be achieved, the second law restricts the ratio of the current enhancement factors $c_\mathrm{c}$ and $c_\mathrm{h}$ to satisfy [cf.\ Eq.~\eqref{eq_introduction_second_law}] \begin{equation} c_\mathrm{c}\frac{J_\indexc^\mathrm{TLS}}{T_\indexc}+c_\mathrm{h}\frac{J_\indexh^\mathrm{TLS}}{T_\indexh}\stackrel{!}{\leq}0. \end{equation} \par \subsection{R\^ole of coherences} If all dipoles are parallel, coherences in the working fluid (medium) will build up in the bare (energy) basis according to the Hamiltonian~\eqref{eq_introduction_H_SB_parallel} once the system is in contact with heat baths at finite temperature, since the bright state $\ket{\psi_\mathrm{b}}$ [see Eq.~\eqref{eq_introduction_psib}] is a coherent superposition of the bare excited states. \emph{Initial} coherences may either suppress or enhance the machine performance: They can bring the heat machine to a complete standstill if they correspond to the set of dark states $\{\ket{\psi_\mathrm{d}^i}\}$ since the Hamiltonian~\eqref{eq_introduction_H_SB_parallel} then yields $H_\mathrm{SB}\ket{\psi_\mathrm{d}^i}=0$, so that the system is decoupled from the baths. Conversely, the Hamiltonian~\eqref{eq_introduction_H_SB_parallel} will enhance both the heat currents and the power by a factor of $N-1$ due to the superradiant (collective-decay) effect if either $\ket{\psi_\mathrm{b}}$ or the ground state $\ket{0}$ is initially prepared. Yet, even if the system initially has no coherence, it will evolve via the Hamiltonian~\eqref{eq_introduction_H_SB_parallel} to a thermal mixture of $\ket{0}$ and $\ket{\psi_\mathrm{b}}$ that has persistent coherences. 
Thus, steady-state coherences (embodied by the bright state) are formed in the bare energy basis as a result of thermalization, but the entire system does not thermalize, i.e., an $N$-level Gibbs state does not arise because of the availability of dark states, even if they are not populated (see Sec.~\ref{subsec_parallel}). \par The foregoing discussion of the thermodynamic r\^ole of coherences in the working medium holds for the special case of all transition-dipole vectors in the Hamiltonian~\eqref{eq_H_SB} being parallel. In what follows, however, we wish to explore the possibility of power enhancement for misaligned (non-parallel) dipoles, or any combination of aligned and misaligned dipoles. Since steady-state coherences correspond to (destructive or constructive) interference of the dipoles, they can only arise if at least some of the dipoles are aligned. Only then may dark states exist. Without the presence of such dark states, a thermal (Gibbs) steady state of the working fluid is expected. \par As we show in Sec.~\ref{sec_dicke}, these arguments also hold for an ensemble of $N$ two-level atoms in a suitable geometry, which realizes the Dicke model~\cite{dicke1954coherence,gross1982superradiance} (Sec.~\ref{sec_multiatom_realization}). In that case the effective system-bath interaction Hamiltonian cannot be written in the form of Eq.~\eqref{eq_introduction_H_SB_parallel}. Instead, the ensemble can then be mapped onto a collective spin-$N/2$ system and a multitude of dark states. The heat currents will no longer adhere to a simple form as in Eqs.~\eqref{eq_introduction_currents_tls}, but the efficiency of the Dicke heat machine will still be the same as that of the TLS-based machine. 
\section{Floquet expansion of the master equation in continuous cycles}\label{sec_system} In the remainder of this article these issues are investigated for a continuous-cycle heat machine, wherein the ($N-1$)-fold degenerate system is constantly coupled to the two thermal baths and to a periodically modulating ``piston'' that allows for work extraction (in the case of an engine) or supply (in the case of a refrigerator or heat pump), respectively (see Fig.~\ref{fig_system}). \par In our model the ``piston'' effects are described by a synchronous periodic modulation of all the upper states~\cite{gelbwaser2013minimal,alicki2014quantum,gelbwaser2014power}, \begin{equation}\label{eq_H_S} H_\mathrm{S}(t)=\hbar[\omega_0+\omega(t)]\sum_{j=1}^{N-1}\sigma_+^j\sigma_-^j, \end{equation} where $\omega(t+\frac{2\pi}{\Omega})=\omega(t)$, $\Omega$ being the modulation rate. Such a transition-energy modulation may for example be induced by a varying magnetic field (via the Zeeman effect) or by an alternating electric field (via the Stark effect). \par The master equation~\cite{carmichaelbook} for the reduced density operator $\rho$ in the interaction picture is \begin{equation}\label{eq_master_general} \dot\rho=\mathcal{L}\rho, \end{equation} where the Liouvillian superoperator $\mathcal{L}$ is of the Lindblad--Gorini--Kossakowski--Sudarshan (LGKS) form~\cite{gorini1976completely,lindblad1976generators}. As shown in~\cite{alicki2012periodically,gelbwaser2013minimal} this Liouvillian can be decomposed into ``sub-bath'' Liouvillians $\mathcal{L}_i^q$ describing the interaction of the system with the $i$th bath ($i\in\{\mathrm{c},\mathrm{h}\}$) evaluated at the $q$th harmonic sideband ($q\in\mathbb{Z}$) of the unperturbed transition frequency $\omega_0$, induced by the modulation rate $\Omega$. These ``sub-bath'' (sideband) contributions are a consequence of the Floquet theorem for the solution of linear differential equations with periodic coefficients. 
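To make these sideband weights concrete, consider the (assumed, illustrative) sinusoidal modulation $\omega(t)=\mu\Omega\cos(\Omega t)$ and identify the weight of the $q$th sideband with the squared Fourier amplitude of the accumulated phase factor $e^{-i\int_0^t\omega(s)\mathrm{d}s}$, as in the Floquet treatment of Refs.~\cite{alicki2012periodically,gelbwaser2013minimal}:

```python
import numpy as np

# Assumed illustrative modulation omega(t) = mu * Omega * cos(Omega * t);
# the weights P(q) are identified with the squared Fourier amplitudes of
# the accumulated phase factor exp(-i * integral_0^t omega(s) ds).
Omega, mu = 1.0, 0.8
M = 4096                                        # samples over one modulation period
t = np.linspace(0.0, 2 * np.pi / Omega, M, endpoint=False)

u = np.exp(-1j * mu * np.sin(Omega * t))        # exp(-i mu sin(Omega t))
P = np.abs(np.fft.fft(u) / M)**2                # harmonic weights P(q)

assert abs(P.sum() - 1.0) < 1e-10               # normalization sum_q P(q) = 1 (Parseval)
assert abs(P[1] - P[-1]) < 1e-10                # symmetric sidebands for this modulation
assert P[0] > P[1] > P[2]                       # weights decrease with |q| for small mu
```

For this particular modulation the weights reduce to squared Bessel functions, $P(q)=J_q^2(\mu)$, so that the normalization $\sum_q P(q)=1$ is the standard Bessel-sum identity.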
The master equation~\eqref{eq_master_general} then adopts the \emph{additive} form \begin{subequations}\label{eq_master_L} \begin{equation}\label{eq_master} \dot\rho=\sum_{q\in\mathbb{Z}}\sum_{i=\{\mathrm{c},\mathrm{h}\}}\mathcal{L}_i^q\rho \end{equation} with the ``sub-bath'' Liouvillians (generalizing Ref.~\cite{gelbwaser2014power}) \begin{widetext} \begin{multline}\label{eq_L} \mathcal{L}_i^q\rho=\frac{1}{2}P(q)G_i(\omega_0+q\Omega)\sum_{j=1}^{N-1}\left[\alpha_j^2\mathcal{D}\left(\sigma_-^j,\sigma_+^j\right)+\sum_{\substack{j^\prime\neq j}}\mathfrak{p}_{jj^\prime}e^{i\varphi_{jj^\prime}}\alpha_j\alpha_{j^\prime}\mathcal{D}\left(\sigma_-^j,\sigma_+^{j^\prime}\right)\right]+\\ +\frac{1}{2}P(q)G_i(-\omega_0-q\Omega)\sum_{j=1}^{N-1}\left[\alpha_j^2\mathcal{D}\left(\sigma_+^j,\sigma_-^j\right)+\sum_{\substack{j^\prime\neq j}}\mathfrak{p}_{jj^\prime}e^{-i\varphi_{jj^\prime}}\alpha_j\alpha_{j^\prime}\mathcal{D}\left(\sigma_+^j,\sigma_-^{j^\prime}\right)\right]. \end{multline} \end{widetext} \end{subequations} \par Here the dissipator is defined as $\mathcal{D}(a,b)\mathrel{\mathop:}= 2a\rho b-ba\rho-\rho ba$ for arbitrary system operators $a,b$. The terms $\mathcal{D}\left(\sigma_-^j,\sigma_+^j\right)$ and $\mathcal{D}\left(\sigma_+^j,\sigma_-^j\right)$ in Eq.~\eqref{eq_L} describe (spontaneous and stimulated) emission into and absorption from the bath, respectively, involving a single transition dipole $\mathbf{d}_j$ and weighted by $\alpha_j^2$ [cf.\ Eq.~\eqref{eq_def_alpha}]. By virtue of sharing a \emph{common} ground state $\ket{0}$, these contributions to the Liouvillian amount to an \emph{indirect population transfer} between the different excited states via this common ground state. 
\par By contrast, the cross-terms $\mathcal{D}\left(\sigma_-^j,\sigma_+^{j^\prime}\right)$ and $\mathcal{D}\left(\sigma_+^j,\sigma_-^{j^\prime}\right)$ in Eq.~\eqref{eq_L}, weighted by $\alpha_j\alpha_{j^\prime}$, describe \emph{correlated} absorption and emission involving two different transitions, i.e., quanta exchange between excited states $\ket{j}$ and $\ket{j^\prime}$ via the common ground state $\ket{0}$, $\ket{j}\rightarrow\ket{0}\rightarrow\ket{j^\prime}$. These bath-mediated interactions between the excited states---which result in dynamical coherences---are largest when the two dipole moments involved, $\mathbf{d}_j$ and $\mathbf{d}_{j^\prime}$, are parallel up to a phase factor ($\mathfrak{p}_{jj^\prime}=1$). For orthogonal dipole orientations ($\mathfrak{p}_{jj^\prime}=0\ \forall j\neq j^\prime$) these cross-correlations vanish and no coherences build up. In three-dimensional space, this completely orthogonal configuration can only be realized in a three- or four-level system. \par The rates of all decay and absorption processes are determined by the respective prefactors of the dissipators in the Liouvillian~\eqref{eq_L}. Here $P(q)$ are the Floquet coefficients~\cite{alicki2012periodically} determining the weight of the $q$th harmonic sideband (with normalization $\sum_{q\in\mathbb{Z}}P(q)=1$) and $G_i(\omega)$ is the response spectrum of the $i$th bath evaluated at frequency $\omega$. These spectra fulfill the Kubo--Martin--Schwinger (KMS) detailed-balance condition~\cite{breuerbook} \begin{equation}\label{eq_kms} G_i(-\omega)=e^{-\beta_i\hbar\omega}G_i(\omega). 
\end{equation} For a bosonic bath, the coupling strengths explicitly read \begin{equation} G_i(\omega)=\gamma_i(\omega)\left(\bar{n}_i(\omega)+1\right), \end{equation} with $\bar{n}_i(\omega)\mathrel{\mathop:}=(e^{\beta_i\hbar\omega}-1)^{-1}$ denoting the number of thermal quanta at inverse temperature $\beta_i=1/k_\mathrm{B} T_i$ and $\gamma_i(\omega)$ being the frequency-dependent transition rate induced by the $i$th bath. \section{Three-level system}\label{sec_ehrenfest} In Sec.~\ref{sec_system} we have introduced the full model, invoking couplings to two heat baths and periodic modulation of $N-1$ excited states. These ingredients are necessary for the operation of the system as a heat machine. However, all the ``sub-bath'' Liouvillians $\mathcal{L}_i^q$ [Eq.~\eqref{eq_L}] in the master equation~\eqref{eq_master} contain the \emph{same} dissipators $\mathcal{D}$ and only differ by their respective prefactors. Owing to this additive structure, in this section we first solve the master equation for a three-level system interacting with a single bath at inverse temperature $\beta=1/k_\mathrm{B} T$ and a static (unmodulated) transition frequency ($q=0$, $P(0)=1$), and analyze its steady-state solution. This solution will serve to obtain the ``global'' solution for two baths involving a modulated transition frequency by simple changes of the coefficients (i.e., the transition rates). The two-bath ``global'' solution allowing for the modulation-induced harmonic sidebands, which is required for the computation of the heat currents, will be presented in Sec.~\ref{sec_heat_currents}. \subsection{Steady-state solution for degenerate excited states}\label{sec_threelevels_degenerate} In order to obtain some insight into the dynamics, we transform the master equation~\eqref{eq_master} into a set of Ehrenfest equations of motion for operator expectation values. 
These equations can subsequently be cast into an inhomogeneous linear ordinary differential equation (ODE) system \begin{equation}\label{eq_ode} \dot{\mathbf{x}}=\mathcal{A}\mathbf{x}+\mathbf{b} \end{equation} for the density matrix elements. In the case $N=3$ this vector is \begin{equation} \mathbf{x}\mathrel{\mathop:}=(\rho_{21},\rho_{12},\rho_{00},\rho_{22})^T. \end{equation} The corresponding coefficient matrix $\mathcal{A}$ and the inhomogeneity $\mathbf{b}$ are presented in Appendix~\ref{app_threelevels}. \par This linear system does not describe the entire density matrix, as it does not contain the coherences $\rho_{01}$ and $\rho_{02}$ between the ground and the excited states. The reason is that those matrix elements are completely decoupled from $\mathbf{x}$ and obey the independent (homogeneous) differential equation \begin{equation}\label{eq_ode_y} \dot{\mathbf{y}}=\mathcal{B}\mathbf{y} \end{equation} with \begin{equation} \mathbf{y}\mathrel{\mathop:}=(\rho_{10},\rho_{01},\rho_{20},\rho_{02})^T \end{equation} and the coefficient matrix $\mathcal{B}$ given in Appendix~\ref{app_threelevels}. All eigenvalues of the matrix $\mathcal{B}$ are negative. Consequently, the coherences between the ground and excited states are damped out regardless of their initial values and the steady-state solution of Eq.~\eqref{eq_ode_y} has no coherences, i.e., $\mathbf{y}^\mathrm{ss}=\mathbf{0}$. \par Let us now revisit the ODE~\eqref{eq_ode} for $\mathbf{x}$. The uniqueness of its steady-state solution depends on the determinant of the coefficient matrix $\mathcal{A}$, \begin{multline}\label{eq_detA} \det(\mathcal{A})=4\left[\frac{1}{2}G(\omega_0)\right]^4\alpha^2\left(1+2e^{-\beta\hbar\omega_0}\right)\times\\\times\left(1+\alpha^2\right)^2\left(1-\mathfrak{p}^2\right). \end{multline} For aligned dipoles ($\mathfrak{p}\mathrel{\mathop:}=\mathfrak{p}_{12}=1$) we find $\det(\mathcal{A})=0$, corresponding to a singularity of the coefficient matrix~\eqref{eq_odes_a}. 
In this regime, multiple steady-state solutions of Eq.~\eqref{eq_ode} may exist (depending on the initial conditions), each satisfying the linear system of equations \begin{equation} \mathcal{A}\mathbf{x}^\mathrm{ss}=-\mathbf{b}. \end{equation} \par The general steady-state solution of the linear ODE~\eqref{eq_ode} (Eq.~\eqref{eq_ss_threelevels} in Appendix~\ref{app_threelevels}) is compatible with the one found in Refs.~\cite{agarwal2001quantum,ficek2002quantum}. While the steady state is diagonal and unique for non-aligned dipoles ($\mathfrak{p}\neq 1$), it depends on the initial conditions (via an integral of motion) and yields persistent coherences $\rho_{21}^\mathrm{ss}=\left(\rho_{21}^\mathrm{ss}\right)^*$ in the aligned case ($\mathfrak{p}=1$). The integral of motion agrees with a previous finding (involving a zero-temperature bath with~\cite{kozlov2006inducing} and without~\cite{ficek2002quantum} incoherent external driving, respectively). Such initial-condition dependent steady-state solutions have also been found for degenerate $\Lambda$-systems~\cite{berman2005spontaneously}. \par The time evolution of the system towards the steady state~\eqref{eq_ss_threelevels} is illustrated in Figs.~\ref{fig_timeevolution_populations} and~\ref{fig_timeevolution_coherences}, where we have numerically integrated the master equation~\eqref{eq_master} and compared its steady state to the analytic steady-state solution~\eqref{eq_ss_threelevels}. The time evolution of the ODE~\eqref{eq_ode} gives the same result. The dependence of the steady-state solution on the initial conditions for aligned dipoles ($\mathfrak{p}=1)$ can clearly be seen. Coherences between the excited states persist only in this regime and are just a transient effect for any misalignment. Notably, as seen from Eqs.~\eqref{eq_ss_threelevels}, the steady states for orthogonal dipoles (without transient coherences) and any other misalignment coincide. 
Consequently, dynamical correlations between the upper states [induced by the cross-terms in the Liouvillian~\eqref{eq_L}] are important at finite times but not in the long-time limit. In particular, this implies that coherences cannot play any \emph{thermodynamic} r\^ole in misaligned dipole transitions in the steady-state operation of the heat machine. \begin{figure} \centering \includegraphics[width=\columnwidth]{threelevels_timeevolution_populations_paper} \caption{(Color online) Time evolution of the populations (solid blue: $\rho_{22}$, dashed red: $\rho_{00}$) for aligned ($\mathfrak{p}=1$) and misaligned ($\mathfrak{p}=0.7$) dipoles and different initial conditions $\rho(0)=\proj{0}$ (left) and $\rho(0)=\proj{\psi}$ with $\ket\psi=\left(\ket{0}+\ket{1}-\ket{2}\right)/\sqrt{3}$ (right) obtained by numerically integrating the master equation~\eqref{eq_master}. The horizontal solid black lines are the analytic steady-state solution~\eqref{eq_ss_threelevels} of the ODE~\eqref{eq_ode}. Parameters: $e^{-\beta\hbar\omega_0}=\frac{1}{2}$, $\alpha=1$ and $G(\omega_0)=\gamma(\bar{n}+1)\equiv2\gamma$.}\label{fig_timeevolution_populations} \end{figure} \par \begin{figure} \centering \includegraphics[width=\columnwidth]{threelevels_timeevolution_coherences_paper} \caption{(Color online) Same as Fig.~\ref{fig_timeevolution_populations} for the coherence $\rho_{21}$. In the aligned case ($\mathfrak{p}=1$) the coherence persists in steady state, whereas it is merely a transient effect in the misaligned case ($\mathfrak{p}=0.7$).}\label{fig_timeevolution_coherences} \end{figure} \subsection{Non-degeneracy effects} The singularity of the coefficient matrix $\mathcal{A}$ for aligned dipoles only arises for perfectly degenerate levels. 
If a detuning $\Delta$ between the two excited states is introduced into the coefficient matrix~\eqref{eq_odes_a}, the determinant becomes proportional to $\left[(1+\alpha^2)^2(1-\mathfrak{p}^2)+\Delta^2/G^2(\omega_0)\right]$ in the Schr\"odinger picture [as a generalization of Eq.~\eqref{eq_detA}]. Amongst the solutions of $\det(\mathcal{A})=0$, only the case $\mathfrak{p}=1$ \emph{and} $\Delta=0$ lies within the domain of existence of $\mathfrak{p}$, which is $[0,1]$. In the interaction picture (in which the master equation~\eqref{eq_master} is presented) the coefficients then become time dependent through the phase factors $\mathfrak{p}e^{\pm i\varphi}\mapsto\mathfrak{p}e^{\pm i\varphi}e^{\pm i\Delta t}$, and the steady state is no longer determined by the determinant of the coefficient matrix. In the secular approximation, these time-dependent phase factors, and hence all cross-terms in the Liouvillian~\eqref{eq_L}, vanish upon coarse-graining the time evolution~\cite{breuerbook}. The steady-state solution is then given by the canonical (Gibbs) distribution (diagonal in the energy basis) $\rho_{ii}^\mathrm{ss}=e^{-\beta\hbar\omega_i}\rho_{00}^\mathrm{ss}$ for $1\leq i\leq N-1$. Hence, we must assume \emph{degenerate} levels in order to study possible implications of quantum coherences on the thermodynamic properties of a \emph{steady-state} quantum heat machine. \section{Steady-state solution: Dependence of thermalization on the dipole orientations}\label{sec_ss} In the previous section we have shown that in the simplest case ($N=3$), aligned dipoles are dynamically distinct from all other possible geometric configurations (cf.\ also Ref.~\cite{gelbwaser2014power}). In this section we show that this distinction arises from the existence of a dark state (anticipated in Sec.~\ref{sec_H_SB}), which hinders full thermalization under certain initial conditions if the dipoles are all aligned. 
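This dichotomy can be reproduced by a direct numerical integration of the master equation for $N=3$ (single bath, no modulation). The following minimal, self-contained sketch (illustrative parameters as in Figs.~\ref{fig_timeevolution_populations} and~\ref{fig_timeevolution_coherences}: $e^{-\beta\hbar\omega_0}=\frac{1}{2}$, $\alpha=1$, rates in units of $\gamma$) confirms the unique Gibbs-like steady state for $\mathfrak{p}=0.7$ and the initial-condition-dependent steady state with persistent coherence for $\mathfrak{p}=1$:

```python
import numpy as np

# Minimal sketch of the N = 3 master equation (single bath, no modulation,
# q = 0).  Illustrative parameters: exp(-beta*hbar*omega0) = 1/2, alpha = 1,
# so G(omega0) = 2 and G(-omega0) = 1 in units of the decay rate gamma.
G_em, G_abs = 2.0, 1.0

I3 = np.eye(3)
sm = [np.outer(I3[0], I3[1]), np.outer(I3[0], I3[2])]   # sigma_-^1 = |0><1|, sigma_-^2 = |0><2|
sp = [m.T for m in sm]

def D(a, b, rho):
    """Dissipator D(a, b) rho = 2 a rho b - b a rho - rho b a."""
    return 2 * a @ rho @ b - b @ a @ rho - rho @ b @ a

def liouvillian(rho, p):
    """Single-bath Liouvillian with alignment p = p_12 (alpha_1 = alpha_2 = 1)."""
    out = np.zeros_like(rho)
    for j in range(2):
        for k in range(2):
            w = 1.0 if j == k else p            # alignment weight of the cross-terms
            out += 0.5 * G_em * w * D(sm[j], sp[k], rho)    # emission
            out += 0.5 * G_abs * w * D(sp[j], sm[k], rho)   # absorption
    return out

def steady_state(rho0, p, dt=0.02, steps=4000):
    """Propagate to the long-time limit with fourth-order Runge-Kutta."""
    rho = rho0.astype(complex)
    for _ in range(steps):
        k1 = liouvillian(rho, p)
        k2 = liouvillian(rho + 0.5 * dt * k1, p)
        k3 = liouvillian(rho + 0.5 * dt * k2, p)
        k4 = liouvillian(rho + dt * k3, p)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

# Misaligned dipoles (p = 0.7), initial superposition (|0> + |1> - |2>)/sqrt(3):
# unique Gibbs-like steady state; the coherence rho_21 is only transient.
v = np.array([1.0, 1.0, -1.0]) / np.sqrt(3)
rho = steady_state(np.outer(v, v), p=0.7)
assert np.allclose(np.diag(rho).real, [0.5, 0.25, 0.25], atol=1e-5)
assert abs(rho[2, 1]) < 1e-5

# Aligned dipoles (p = 1), initial state |0> (no dark-state overlap):
# only the {|0>, bright} sector thermalizes; rho_21 persists in steady state.
rho = steady_state(np.diag([1.0, 0.0, 0.0]), p=1.0)
assert np.allclose(np.diag(rho).real, [2/3, 1/6, 1/6], atol=1e-5)
assert abs(rho[2, 1] - 1/6) < 1e-5
```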
We expect similar behavior for the general case of $N-1$ dipole transitions. In the following we investigate the steady-state solution for arbitrary dipole orientations, taking into account that as $N$ grows, so does the dipole configuration space. To this end we first consider the extreme, simple cases of (i) no dipoles being pairwise parallel and (ii) all dipoles being parallel. The general solution for arbitrary dipole orientations can then be derived in a transparent way from these extreme cases. \subsection{No pairwise parallel dipoles}\label{sec_ss_pneq1} If no dipoles are parallel, the master equation~\eqref{eq_master} (for a single bath at inverse temperature $\beta$ and without modulation, $q=0$) possesses a unique thermalized, diagonal steady state \begin{subequations}\label{eq_ss_nlevels_pneq1} \begin{align} \rho_{ii}^\mathrm{ss}&=\frac{1}{N-1+e^{\beta\hbar\omega_0}}\equiv e^{-\beta\hbar\omega_0}\rho_{00}^\mathrm{ss}\quad i\in\{1,\dots,N-1\}\\ \rho_{00}^\mathrm{ss}&=\frac{1}{1+(N-1)e^{-\beta\hbar\omega_0}}. \end{align} \end{subequations} The straightforward proof is given in Appendix~\ref{app_proof_pneq1}. Here too, the steady state is independent of the actual orientation of the dipoles (as long as they are not aligned)---but the time evolution leading to this steady state depends on their orientation. \par Owing to their degeneracy, any superposition of excited states is again an energy eigenstate. This means that the density matrix~\eqref{eq_ss_nlevels_pneq1} is not only diagonal in the ``bare'' basis $\{\ket{0},\ket{1},\dots,\ket{N-1}\}$, but also in any basis $\{\ket{0},\ket{1^\prime},\dots,\ket{(N-1)^\prime}\}$ formed by the ground state and the rotated excited states. \subsection{Parallel dipoles}\label{subsec_parallel} Let us now consider the opposite case of all dipoles being parallel (which has been previously discussed in Sec.~\ref{sec_H_SB}). 
The steady-state solution in this case may not be unique due to the singularity of the coefficient matrix, which is linked to the existence of dark and bright states. \par When all dipoles are aligned with the reference dipole $\mathbf{d}_{1}$, i.e., $\mathbf{d}_{j}=\alpha_je^{i\varphi_{1j}}\mathbf{d}_{1}$, the interaction Hamiltonian~\eqref{eq_H_SB} between the system and the bath can be recast into the generic form [cf.\ also Eq.~\eqref{eq_introduction_H_SB_parallel}] \begin{equation}\label{eq_H_SB_parallel} \left.H_\mathrm{SB}\right|_{\mathfrak{p}_{ij}=1}=\sqrt{\sum_{j=1}^{N-1}\alpha_j^2}\left(\bar{\sigma}_+\otimes \mathbf{d}_{1}\cdot\mathbf{B} + \bar{\sigma}_-\otimes \mathbf{d}_{1}^*\cdot\mathbf{B}^\dagger\right). \end{equation} This operator is reported in the basis spanned by the following states: (i) the ground state $\ket{0}$, (ii) the bright state [see also Ref.~\cite{ficek2004simulating} and Eq.~\eqref{eq_introduction_psib}] \begin{equation}\label{eq_psib} \ket{\psi_\mathrm{b}}\mathrel{\mathop:}=\frac{1}{\sqrt{\sum_{j=1}^{N-1}\alpha_j^2}}\sum_{j=1}^{N-1}\alpha_je^{i\varphi_{1j}}\ket{j}, \end{equation} and (iii) $N-2$ dark states $\left\{\ket{\psi_\mathrm{d}^j}\right\}$, obtained by the Gram--Schmidt orthogonalization procedure. \par The Hamiltonian~\eqref{eq_H_SB_parallel} \emph{formally} looks like the coupling Hamiltonian of a single two-level system interacting with the environment. However, this operator acts on an $N$-dimensional Hilbert space, even if only two levels (the ground and the bright states) are explicitly involved, via the Pauli excitation and de-excitation operators [cf.\ also Eq.~\eqref{eq_introduction_H_SB_parallel}] \begin{subequations}\label{eq_pauli_bright_states} \begin{align} \bar{\sigma}_+&\mathrel{\mathop:}=\ketbra{\psi_\mathrm{b}}{0}\\ \bar{\sigma}_-&\mathrel{\mathop:}=\ketbra{0}{\psi_\mathrm{b}}. 
\end{align} \end{subequations} The associated transition-dipole vector is scaled by a factor of $\sqrt{\sum_{j=1}^{N-1}\alpha_j^2}$, which for equal transition strengths ($\alpha_j=1\,\forall j$) amounts to $\sqrt{N-1}$. The diagonal steady-state solution of the master equation following from the interaction Hamiltonian~\eqref{eq_H_SB_parallel} has the form (cf.\ Appendix~\ref{app_proof_p1}) \begin{subequations}\label{eq_ss_nlevels_p1} \begin{align} \rho_\mathrm{bb}^\mathrm{ss}&=\frac{1}{1+e^{\beta\hbar\omega_0}}\Big[1-\ew{\Pi_\mathrm{d}}_{\rho(0)}\Big]\equiv e^{-\beta\hbar\omega_0}\rho_{00}^\mathrm{ss}\\ \rho_{00}^\mathrm{ss}&=\frac{1}{1+e^{-\beta\hbar\omega_0}}\Big[1-\ew{\Pi_\mathrm{d}}_{\rho(0)}\Big]\\ \rho_{kk}^\mathrm{ss}&=\bkew{\phi_k}{\rho(0)}{\phi_k}\quad\text{ for }k=1,\dots,N-2. \end{align} \end{subequations} Here we have defined the projector \begin{equation} \Pi_\mathrm{d}\mathrel{\mathop:}=\sum_{j=1}^{N-2}\proj{\psi_\mathrm{d}^j}\equiv\mathbbm{1}-\proj{\psi_\mathrm{b}}-\proj{0} \end{equation} onto the dark-state subspace, the dark-state populations $\rho_{kk}^\mathrm{ss}$, and the spectral decomposition of the part of the initial state lying within the dark-state subspace, \begin{equation} \rho_\mathrm{d}(0)=\sum_{i,j=1}^{N-2}q_{ij}(0)\ketbra{\psi_\mathrm{d}^i}{\psi_\mathrm{d}^j}=\sum_{k=1}^{N-2}p_k(0)\proj{\phi_k}. \end{equation} This density matrix is not normalized to unity but to $\operatorname{Tr}\rho_{\mathrm{d}}(0)=\ew{\Pi_\mathrm{d}}_{\rho(0)}\equiv 1-\bkew{\psi_\mathrm{b}}{\rho(0)}{\psi_\mathrm{b}}-\bkew{0}{\rho(0)}{0}$. The states $\ket{\phi_k}$ associated to the dark-state populations $\rho_{kk}^\mathrm{ss}$ are thus the eigenvectors of $\rho_\mathrm{d}(0)$. If $\rho_\mathrm{d}(0)=0$, any basis of the dark subspace may be chosen instead. \par The dark-state populations $\rho_{kk}^\mathrm{ss}$ in the steady-state solution~\eqref{eq_ss_nlevels_p1} are the same as their initial values, since these states do not interact with the environment. 
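As a consistency check, the two limiting steady states, Eqs.~\eqref{eq_ss_nlevels_pneq1} and~\eqref{eq_ss_nlevels_p1}, can be evaluated numerically. The following minimal Python sketch (our own illustration, not part of the derivation; \texttt{x} stands for $\beta\hbar\omega_0$ and \texttt{Pd} for the conserved dark-subspace weight $\ew{\Pi_\mathrm{d}}_{\rho(0)}$) verifies the normalization and the Boltzmann ratios of the populations:

```python
import math

def ss_no_parallel(N, x):
    """Steady state for no pairwise parallel dipoles, x = beta*hbar*omega0."""
    rho00 = 1.0 / (1.0 + (N - 1) * math.exp(-x))
    rho_exc = [math.exp(-x) * rho00] * (N - 1)   # N-1 degenerate excited states
    return rho00, rho_exc

def ss_all_parallel(x, Pd):
    """Ground/bright steady-state populations for fully aligned dipoles;
    Pd is the initial dark-subspace population (conserved by the dynamics)."""
    rho00 = (1.0 - Pd) / (1.0 + math.exp(-x))
    rhobb = (1.0 - Pd) / (1.0 + math.exp(x))
    return rho00, rhobb

N, x, Pd = 5, 0.7, 0.3
rho00, rho_exc = ss_no_parallel(N, x)
assert abs(rho00 + sum(rho_exc) - 1.0) < 1e-12          # trace one
assert abs(rho_exc[0] / rho00 - math.exp(-x)) < 1e-12   # Gibbs ratio

r00, rbb = ss_all_parallel(x, Pd)
assert abs(r00 + rbb - (1.0 - Pd)) < 1e-12              # thermalized sector
assert abs(rbb / r00 - math.exp(-x)) < 1e-12            # Boltzmann factor
```

The second check makes the ``partial thermalization'' explicit: the bright--ground sector carries only the weight $1-\ew{\Pi_\mathrm{d}}_{\rho(0)}$, while the remainder stays frozen in the dark subspace.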
The bright- and ground-state components (which do interact with the environment) eventually thermalize, their Boltzmann factor $e^{-\beta\hbar\omega_0}$ being determined by the bath temperature. However, the steady-state thermalization is only partial if the initial state has an overlap with the dark-state subspace. We therefore refer to the factor $1-\ew{\Pi_\mathrm{d}}_{\rho(0)}$ as the \emph{thermalization capability} of the initial state $\rho(0)$. \par The foregoing discussion has shown that whilst the interpretation of the initial-condition-dependent terms in the non-diagonal density matrix~\eqref{eq_ss_threelevels} may be obscure, their physical meaning is clearly revealed in the diagonal steady-state form~\eqref{eq_ss_nlevels_p1}, which has a distinct non-Gibbs character. \subsection{Arbitrary dipole orientations} Based on the two limiting cases considered above, we expect the dynamics to be different for aligned (parallel) and misaligned transition dipoles. \par We therefore group all aligned transition dipoles (up to a phase factor) into ``domains'': The first $n_1$ aligned dipoles are grouped in domain $1$, the next $n_2$ parallel ones are grouped in domain $2$ and so forth. Altogether, there are $p$ ``domains'', which contain \begin{equation} N_p\mathrel{\mathop:}=\sum_{j=1}^{p} n_j \end{equation} transition dipoles. The remaining $N-1-N_p$ transition dipoles are non-parallel to any other dipole (see Fig.~\ref{fig_dipoles}). \par \begin{figure} \centering \includegraphics[width=\columnwidth]{dipoles_paper_effective} \caption{(Color online) A possible configuration exhibiting a domain structure of parallel dipole transitions. 
Each group (domain) of aligned transitions (upper arrows) is equivalent to a single (enhanced) dipole transition (lower arrows).}\label{fig_dipoles} \end{figure} \par \begin{figure} \centering \includegraphics[width=\columnwidth]{dipoles_effective_level_scheme_paper} \caption{(Color online) Effective $N_\mathrm{eff}$-level scheme corresponding to the $N$-level system in Fig.~\ref{fig_dipoles}. The transitions associated with a bright state (i.e., with a domain of parallel dipoles) are enhanced compared to their bare-state counterparts. Within this effective system no transition dipoles are parallel to each other.}\label{fig_dipoles_effective_level_scheme} \end{figure} \par As detailed in Appendix~\ref{app_H_SB_arbitrary_geometry}, the $N$-level system with arbitrary dipoles can be mapped onto an $N_\mathrm{eff}$-level system with \emph{non-parallel dipoles}, where \begin{equation}\label{eq_neff} N_\mathrm{eff}\mathrel{\mathop:}= p+N-N_p. \end{equation} Upon introducing the new coupling constants \begin{equation} A_m\mathrel{\mathop:}= \begin{cases} \sqrt{\sum_{j=N_{m-1}+1}^{N_m}\alpha_{j}^2} & \text{ for } m\leq p \\ \alpha_{N_p+(m-p)} & \text{ for } p+1\leq m\leq N_\mathrm{eff}-1 \end{cases} , \end{equation} the interaction Hamiltonian adopts the form \begin{equation}\label{eq_H_SB_arbitrary} H_\mathrm{SB}=\sum_{m=1}^{N_\mathrm{eff}-1} A_m|\mathbf{d}_1|\left(\tilde{\sigma}_+^m\otimes \mathbf{e}_m\cdot\mathbf{B} + \tilde{\sigma}_-^m\otimes \mathbf{e}_m^*\cdot\mathbf{B}^\dagger\right). \end{equation} This is simply the Hamiltonian~\eqref{eq_H_SB} for $N_\mathrm{eff}$ levels with modified coupling strengths and \emph{no} pairwise parallel dipoles.
The new Pauli operators $\tilde{\sigma}_+^m$ are [cf.\ Eqs.~\eqref{eq_splus} and~\eqref{eq_splusbar}] \begin{equation} \tilde{\sigma}_+^m\mathrel{\mathop:}= \begin{cases} \bar{\sigma}_+^m\equiv\ketbra{\psi_\mathrm{b}^m}{0}&m\leq p\\ &\\ \parbox[t]{0.35\columnwidth}{$\sigma_+^{N_p+(m-p)}$\\\mbox{$\equiv\ketbra{N_p+(m-p)}{0}$}}&p+1\leq m\leq N_\mathrm{eff}-1 \end{cases} . \end{equation} \par The mapping from $N$ levels with arbitrary transition dipoles to $N_\mathrm{eff}$ non-aligned dipoles (cf.\ Figs.~\ref{fig_dipoles} and~\ref{fig_dipoles_effective_level_scheme}) now assigns a weight $A_j$ to each transition, according to the number of \emph{thermalization pathways} forming each domain of parallel dipoles. Clearly, the completely misaligned ($N_\mathrm{eff}=N$) and aligned ($N_\mathrm{eff}=2$) cases discussed in Secs.~\ref{sec_ss_pneq1} and~\ref{subsec_parallel} above are included in this general scheme. \par The Liouvillian associated with the Hamiltonian~\eqref{eq_H_SB_arbitrary} is thus the same as for the non-aligned system obtained upon replacing $N\mapsto N_\mathrm{eff}$ and $\alpha_j\mapsto A_j$. The steady-state solution of the master equation then reads \begin{subequations}\label{eq_ss_nlevels_general} \begin{align} \rho_{00}^\mathrm{ss}&=\frac{1}{1+(N_\mathrm{eff}-1)e^{-\beta\hbar\omega_0}}\left[1-\ew{\Pi_\mathrm{d}}_{\rho(0)}\right]\label{eq_ss_nlevels_general_ground_state}\\ \rho_{ii}^\mathrm{ss}&= \begin{cases} e^{-\beta\hbar\omega_0}\rho_{00}^\mathrm{ss}&i=1,\dots,N_\mathrm{eff}-1\\ \bkew{\phi_i}{\rho(0)}{\phi_i}&i=N_\mathrm{eff}+1,\dots,N \end{cases} . \end{align} \end{subequations} Here again the $\{\ket{\phi_i}\}$ are the eigenvectors of $\rho_{\mathrm{d}}(0)$, i.e., the part of the initial density matrix lying within any of the $p$ dark-state subspaces.
The projector $\Pi_\mathrm{d}$ onto the union of all dark-state subspaces now has the form \begin{align} \Pi_\mathrm{d}&=\sum_{j=1}^{p}\sum_{k=1}^{n_j-1}\proj{\psi_\mathrm{d}^{j,k}}\nonumber\\ &\equiv\mathbbm{1}-\sum_{j=1}^{p}\proj{\psi_\mathrm{b}^{j}}-\sum_{j=N_p+1}^{N-1}\proj{j}-\proj{0}. \end{align} The proof of Eqs.~\eqref{eq_ss_nlevels_general} is analogous to the respective proofs of the steady-state solutions~\eqref{eq_ss_nlevels_pneq1} and~\eqref{eq_ss_nlevels_p1} for non-aligned and aligned dipoles. \par Whilst the steady-state density matrix~\eqref{eq_ss_nlevels_general} is diagonal in the new basis associated with the Hamiltonian~\eqref{eq_H_SB_arbitrary} (and hence does not contain any coherences), off-diagonal elements do appear in ${\tilde\rho}^\mathrm{ss}$, the representation of the density matrix $\rho^\mathrm{ss}$ in the energy (bare-state) basis [cf.\ also Eq.~\eqref{eq_ss_threelevels}]. \par It is important to note that the coherences with the largest modulus, $\sum_{i,j=1 (i\neq j)}^{N-1}|{\tilde\rho}_{ij}^\mathrm{ss}|$, arise for a \emph{dark initial state}: The exchange between the excited states [described by Eqs.~\eqref{eq_master_L}] implies that for any non-dark initial state the ground state is at least partly (thermally) populated in steady state [cf.\ Eq.~\eqref{eq_ss_nlevels_general_ground_state}], and this ground-state population inevitably reduces the excited-state populations and hence the coherences between excited states. Equivalently, the steady state is completely population-inverted if and only if the initial state is dark. Yet, dark states are a severe hindrance to the operation of a quantum heat machine. Hence, the magnitude of steady-state coherences cannot serve as a criterion for possible power enhancement in a multilevel quantum heat engine, as shown below in detail.
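Both the effective-level mapping and the general steady state~\eqref{eq_ss_nlevels_general} can be checked numerically. In the following sketch (our own illustration; unit dipole strengths within the domains are assumed, and the function names are ours) we verify $N_\mathrm{eff}=p+N-N_p$, the sum rule $\sum_m A_m^2=\sum_j\alpha_j^2$, and the conservation of the dark-subspace weight in the steady state:

```python
import math

def effective_mapping(domains, singles):
    """Map domain sizes [n_1..n_p] of parallel dipoles (unit strengths
    assumed within each domain) plus a list of unpaired dipole strengths
    onto (N, N_eff, A)."""
    p = len(domains)
    Np = sum(domains)
    N = 1 + Np + len(singles)                 # ground state + all excited states
    N_eff = p + N - Np
    A = [math.sqrt(n) for n in domains] + list(singles)
    return N, N_eff, A

def ss_general(N_eff, x, Pd):
    """General steady state; x = beta*hbar*omega0, Pd = initial dark weight."""
    rho00 = (1.0 - Pd) / (1.0 + (N_eff - 1) * math.exp(-x))
    rho_exc = [math.exp(-x) * rho00] * (N_eff - 1)
    return rho00, rho_exc

N, N_eff, A = effective_mapping(domains=[3, 2], singles=[1.0, 1.0])
assert (N, N_eff) == (8, 5)                        # p=2, N_p=5, N_eff = p+N-N_p
assert abs(sum(a * a for a in A) - (N - 1)) < 1e-12  # equals sum_j alpha_j^2

x, Pd = 1.2, 0.25
rho00, rho_exc = ss_general(N_eff, x, Pd)
assert abs(rho00 + sum(rho_exc) + Pd - 1.0) < 1e-12  # dark weight conserved
```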
\section{Heat currents and power}\label{sec_heat_currents} \subsection{Heat currents} The general steady-state solution~\eqref{eq_ss_nlevels_general} of the master equation~\eqref{eq_master} is the key ingredient for computing the heat flows (currents) between the system and the cold and hot baths. Upon invoking the dynamical version of the second law of thermodynamics and the von~Neumann entropy, the heat current associated with the $i$th bath is found to be~\cite{alicki2012periodically,gelbwaser2013minimal,kosloff2013quantum} \begin{equation} J_i=\sum_{q\in\mathbb{Z}}J_i^q \end{equation} in terms of the harmonic sideband contributions \begin{equation}\label{eq_J} J_i^q\mathrel{\mathop:}=-\frac{1}{\beta_i}\operatorname{Tr}\left[(\mathcal{L}_i^q\rho^\mathrm{ss})\ln(\rho_i^q)\right]. \end{equation} Here $\rho^\mathrm{ss}$ is the \emph{global} steady-state solution of the master equation~\eqref{eq_master} satisfying \begin{equation} \mathcal{L}\rho^\mathrm{ss}=0, \end{equation} and $\rho_i^q$ is the \emph{local} solution for the system interacting with a single ($i$th) bath, evaluated at the $q$th harmonic sideband, i.e., the steady-state solution of the Liouvillian~\eqref{eq_L} \begin{equation} \mathcal{L}_i^q\rho_i^q=0. \end{equation} The heat currents~\eqref{eq_J} are based on a global two-bath solution, as required to avoid violation of the second law~\cite{levy2014local}. \par The steady-state solution of the entire master equation~\eqref{eq_master} can be obtained from Eq.~\eqref{eq_ss_nlevels_general} upon adapting the transition rates to a two-bath situation.
Owing to the KMS relation~\eqref{eq_kms}, this amounts to replacing the real inverse temperature $\beta$ by an \emph{effective} inverse temperature $\beta_\mathrm{eff}$, which is defined by the Boltzmann factor \begin{equation}\label{eq_betaeff} e^{-\beta_\mathrm{eff}\hbar\omega_0}\mathrel{\mathop:}=\frac{\sum_{q\in\mathbb{Z}}\sum_{i\in\{\mathrm{c},\mathrm{h}\}} P(q)G_i(-\omega_0-q\Omega)}{\sum_{q\in\mathbb{Z}}\sum_{i\in\{\mathrm{c},\mathrm{h}\}} P(q)G_i(\omega_0+q\Omega)}. \end{equation} This effective temperature can be controlled (modified) by engineering the modulation type [and thus the Floquet coefficients $P(q)$] or the cold- and hot-bath response spectra [$G_\indexc(\omega_0+q\Omega)$ and $G_\indexh(\omega_0+q\Omega)$] at the corresponding harmonic sidebands $q\in\mathbb{Z}$, respectively. Likewise, the local steady-state solution $\rho_i^q$ is obtained from Eq.~\eqref{eq_ss_nlevels_general} upon replacing \begin{subequations} \begin{align} \beta&\mapsto\beta_i\\ \omega_0&\mapsto\omega_0+q\Omega. \end{align} \end{subequations} \par Inserting $\rho^\mathrm{ss}$ and $\rho_i^q$ into expression~\eqref{eq_J}, we find the heat currents, whose explicit form is presented in Appendix~\ref{app_heat_currents_power}. Their comparison to the previously found result for a TLS~\cite{gelbwaser2013minimal} yields the heat current ratio \begin{equation}\label{eq_heat_currents_general_tls} \frac{J_i}{J_i^\mathrm{TLS}}=\left(\sum_{j=1}^{N-1}\alpha_j^2\right)\Big[1-\ew{\Pi_\mathrm{d}}_{\rho(0)}\Big]\frac{1+e^{-\beta_\mathrm{eff}\hbar\omega_0}}{1+(N_\mathrm{eff}-1)e^{-\beta_\mathrm{eff}\hbar\omega_0}}. \end{equation} Here the first factor on the right-hand side is the enhancement that stems from the $N-1$ available thermalization pathways. It amounts to $N-1$ if all transitions are of the same strength ($\alpha_j=1\,\forall j$). 
The second factor measures the thermalization capability of the initial state (i.e., the overlap of the initial state with the non-dark states, viz.\ the part of the initial state amenable to thermalization). This factor is only relevant for the case of at least two dipoles being parallel (for other configurations there are no dark states). The last factor strongly depends on the modulation type and the bath properties via the inverse effective temperature $\beta_\mathrm{eff}$ defined in Eq.~\eqref{eq_betaeff}, but also on the effective number of misaligned dipoles $N_\mathrm{eff}$ [cf.\ Eq.~\eqref{eq_neff}]. This last factor becomes unity for $N_\mathrm{eff}=2$. Such an \emph{effective} two-level system is obtained for an $N$-level system with all transition dipoles being aligned [cf.\ Eqs.~\eqref{eq_ss_nlevels_p1}]. \par Remarkably, for any number of misaligned dipoles the product of the last two factors in Eq.~\eqref{eq_heat_currents_general_tls} is determined by the ratio of the respective ground-state populations, yielding \begin{equation}\label{eq_heat_currents_general_tls_rho00} \frac{J_i}{J_i^\mathrm{TLS}}=\left(\sum_{j=1}^{N-1}\alpha_j^2\right)\frac{\rho_{00}^\mathrm{ss}}{\rho_{00}^\mathrm{TLS}}\leq N-1. \end{equation} \par It follows from Eq.~\eqref{eq_heat_currents_general_tls_rho00} that the largest possible enhancement factor is bounded by \begin{equation} \sum_{j=1}^{N-1}\alpha_j^2\leq N-1, \end{equation} the equality sign corresponding to all transitions having the same strength. This maximal enhancement corresponds to the combined heat currents from $N-1$ \emph{independent} two-level heat machines. \par The most advantageous initial condition is a state orthogonal to the dark subspace, e.g., $\rho(0)=\proj{0}$. In that case, as is apparent from Fig.~\ref{fig_currents_nlevels}, at low effective temperatures all dipole orientations yield the same (maximum) enhancement by a factor of $N-1$ (assuming all dipoles are equally strong). 
This can be understood from the population ratio in Eq.~\eqref{eq_heat_currents_general_tls_rho00}: At low temperatures the ground-state population is close to one, independent of the dipole configuration. \par In the opposite case of high effective temperatures, configurations with more parallel transitions are more favorable. This can again be explained by the ground-state population ratio in Eq.~\eqref{eq_heat_currents_general_tls_rho00}: Assuming full thermalization, the fewer the available levels, the larger the ground-state population. Hence, for $N$ levels the smallest $N_\mathrm{eff}$ is the most beneficial at such temperatures. For the $10$-level system shown in Fig.~\ref{fig_currents_nlevels}, the ground-state population in the high-temperature limit ($\beta_\mathrm{eff}\rightarrow0$) is [according to Eqs.~\eqref{eq_ss_nlevels_general}] $\rho_{00}^\mathrm{ss}=1/N_\mathrm{eff}$, compared with $\rho_{00}^\mathrm{TLS}=1/2$, so that Eq.~\eqref{eq_heat_currents_general_tls_rho00} yields $\lim_{\beta_\mathrm{eff}\rightarrow0}J_i/J_i^\mathrm{TLS}=9\times2/N_\mathrm{eff}$. \par \begin{figure} \centering \includegraphics[width=\columnwidth]{currents_power_nlevels_equal_dipoles} \caption{(Color online) Heat current ratio $J_i/J_i^\mathrm{TLS}$ [Eq.~\eqref{eq_heat_currents_general_tls}] for a ten-level system, assuming no initial overlap with any dark states, for (i) $N_\mathrm{eff}=N=10$ (no parallel dipoles), (ii) $N_\mathrm{eff}=6$ (some dipoles parallel) and (iii) $N_\mathrm{eff}=2$ (all nine dipoles parallel). Alignment is only beneficial at high effective temperatures (small $\beta_\mathrm{eff}\hbar\omega_0$). If all dipoles are parallel, the amplification is independent of the bath temperatures. Owing to Eq.~\eqref{eq_power_general_tls}, the same holds for the power ratio.}\label{fig_currents_nlevels} \end{figure} \par \subsection{Power and efficiency} The first law (energy conservation) $\dot{W}=-(J_\indexh+J_\indexc)$ relates the heat currents to the power.
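The two temperature limits of the heat-current ratio~\eqref{eq_heat_currents_general_tls} discussed above are easily reproduced numerically; a minimal sketch (our own illustration, assuming equal dipole strengths and no initial dark overlap):

```python
import math

def ratio(N, N_eff, x, Pd=0.0):
    """Heat-current ratio J_i/J_i^TLS for equal dipole strengths
    (sum_j alpha_j^2 = N-1); x = beta_eff*hbar*omega0, Pd = dark weight."""
    return (N - 1) * (1 - Pd) * (1 + math.exp(-x)) / (1 + (N_eff - 1) * math.exp(-x))

N = 10
for N_eff in (10, 6, 2):
    # low effective temperature (x -> infinity): all configurations give N-1 = 9
    assert abs(ratio(N, N_eff, x=50.0) - 9.0) < 1e-9
    # high effective temperature (x -> 0): ratio -> 9 * 2 / N_eff
    assert abs(ratio(N, N_eff, x=1e-9) - 9.0 * 2 / N_eff) < 1e-6
```

For $N_\mathrm{eff}=2$ both limits coincide, reproducing the temperature-independent enhancement of the fully aligned configuration.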
Whilst the explicit form of the power is again reported in Appendix~\ref{app_heat_currents_power}, we here compare it to its TLS counterpart, \begin{equation}\label{eq_power_general_tls} \frac{\dot{W}}{\dot{W}^\mathrm{TLS}}=\left(\sum_{j=1}^{N-1}\alpha_j^2\right)\frac{\rho_{00}^\mathrm{ss}}{\rho_{00}^\mathrm{TLS}}\equiv\frac{J_i}{J_i^\mathrm{TLS}}\leq N-1. \end{equation} This is the same ratio as in Eq.~\eqref{eq_heat_currents_general_tls_rho00} for the heat currents and therefore Fig.~\ref{fig_currents_nlevels} also holds for the power ratio. We have thus come to a central conclusion: An $N$-level quantum heat machine can \emph{never} outperform $N-1$ independent two-level heat machines. \par In contrast to the power, the efficiency \begin{equation} \eta=\frac{-\dot{W}}{J_\indexh} \end{equation} (when the heat machine acts as an engine) and the coefficient of performance \begin{equation} \mathrm{COP}=\frac{J_\indexc}{\dot{W}} \end{equation} (when the heat machine acts as a refrigerator), respectively, are \emph{not} affected by the presence of degenerate upper states, regardless of the dipole orientation. Notably, the Carnot bound is not violated, based on the results previously obtained for the TLS heat machine~\cite{gelbwaser2013minimal}, as discussed in Sec.~\ref{sec_introduction_efficiency}. \subsection{Power-dependence on modulation rate} The result~\eqref{eq_power_general} for the power output of the periodically-driven continuous multilevel quantum heat machine only differs by the extra factor $N_\mathrm{eff}-1$ in the denominator from its counterpart for a two-level machine discussed in Ref.~\cite{gelbwaser2013minimal}. It was shown there that depending on the modulation frequency $\Omega$ the heat machine can be operated either as an engine ($\dot{W}<0$ for $\Omega<\Omega_\mathrm{crit}$) or as a refrigerator/heat pump ($\dot{W}>0$ for $\Omega>\Omega_\mathrm{crit}$), with $\Omega_\mathrm{crit}$ being some critical frequency. 
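For Bose--Einstein bath occupations the critical frequency can be located by bisection. The following sketch (our own illustration, with $\hbar=1$ and assuming only the two sidebands $\omega_0\pm\Omega$ contribute and the bath spectra do not overlap) solves $n_\mathrm{h}(\omega_0+\Omega_\mathrm{crit})=n_\mathrm{c}(\omega_0-\Omega_\mathrm{crit})$:

```python
import math

def n_bose(omega, beta):
    """Bose-Einstein occupation (hbar = 1)."""
    return 1.0 / math.expm1(beta * omega)

def omega_crit(omega0, beta_c, beta_h, tol=1e-12):
    """Bisect for n_h(omega0+Omega) = n_c(omega0-Omega); valid for T_h > T_c,
    where the difference is strictly decreasing in Omega."""
    f = lambda W: n_bose(omega0 + W, beta_h) - n_bose(omega0 - W, beta_c)
    lo, hi = 0.0, omega0 * (1 - 1e-9)        # f(lo) > 0, f(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

w0, bc, bh = 1.0, 2.0, 0.5                   # T_h > T_c
Wc = omega_crit(w0, bc, bh)
assert n_bose(w0, bh) > n_bose(w0, bc)       # engine regime at resonance
assert abs(n_bose(w0 + Wc, bh) - n_bose(w0 - Wc, bc)) < 1e-9
```

Since $n(\omega)$ depends only on $\beta\omega$, the condition reduces to $\beta_\mathrm{h}(\omega_0+\Omega_\mathrm{crit})=\beta_\mathrm{c}(\omega_0-\Omega_\mathrm{crit})$, i.e., $\Omega_\mathrm{crit}=\omega_0(\beta_\mathrm{c}-\beta_\mathrm{h})/(\beta_\mathrm{c}+\beta_\mathrm{h})$, which the bisection reproduces ($\Omega_\mathrm{crit}=0.6\,\omega_0$ for the chosen parameters).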
\par This behavior can be understood in terms of the frequency dependence of the bath populations $n_\mathrm{c}(\omega)$ and $n_\mathrm{h}(\omega)$. At the TLS resonance frequency, they fulfill $n_\mathrm{h}(\omega_0)>n_\mathrm{c}(\omega_0)$. The periodic driving introduces harmonic sidebands that shift the frequencies at which the TLS couples to the baths [cf.\ also the Liouvillian~\eqref{eq_L}]. When only the two sidebands $\omega_0\pm\Omega$ contribute and the cold and hot bath spectra do not overlap, the ``natural'' direction of the heat flows is reversed provided that $n_\mathrm{h}(\omega_0+\Omega)<n_\mathrm{c}(\omega_0-\Omega)$~\cite{kolar2012quantum}. This explains the occurrence of a ``critical'' frequency $\Omega_\mathrm{crit}$, defined by $n_\mathrm{h}(\omega_0+\Omega_\mathrm{crit})=n_\mathrm{c}(\omega_0-\Omega_\mathrm{crit})$, at which the heat flows (and hence also the power) vanish. This is the point at which Carnot efficiency is reached~\cite{gelbwaser2013minimal}. Similar scenarios occur, for example, for ultracold atoms in cavities~\cite{ritsch2013cold}, where, depending on the frequency of the cavity-pump laser, the atoms are either cooled or heated. \subsection{Power-dependence on initial conditions for aligned dipoles} As we have seen, the power enhancement of the $N$-level heat machine compared to a single two-level heat machine strongly depends on the initial conditions if some of the transition-dipole vectors are aligned [Eq.~\eqref{eq_power_general}]. We here illustrate this dependence for a three-level ($N=3$) system, which allows for a simple graphical representation, and specify its advantageous initial conditions. Once the ground-state population $\rho_{00}(0)$ is fixed, the advantageous states, amenable to full thermalization, are those having the largest possible modulus of the coherence and the appropriate phase.
These ``optimal'' coherences and populations must satisfy (cf.\ Sec.~\ref{sec_threelevels_degenerate}) \begin{subequations} \begin{align} \rho_{21}(0)&=\frac{\alpha e^{i\varphi}}{1+\alpha^2}\left[1-\rho_{00}(0)\right]\label{eq_initial_state_coherence_rho00_fixed}\\ \rho_{11}(0)&=\frac{1}{1+\alpha^2}\left[1-\rho_{00}(0)\right]\\ \rho_{22}(0)&=\frac{\alpha^2}{1+\alpha^2}\left[1-\rho_{00}(0)\right]. \end{align} \end{subequations} These conditions may be satisfied by a pure state $\rho(0)=\proj{\psi(0)}$, where \begin{equation} \ket{\psi(0)}=\sqrt{\rho_{00}(0)}\ket{0}+\sqrt{1-\rho_{00}(0)}e^{i\Phi}\ket{\psi_\mathrm{b}}, \end{equation} with an arbitrary phase $\Phi$. Alternatively, these conditions hold (since the coherences between the ground and excited states do not matter) for the incoherent mixture \begin{equation} \rho(0)=\rho_{00}(0)\proj{0}+\left[1-\rho_{00}(0)\right]\proj{\psi_\mathrm{b}} \end{equation} in the new basis containing the bright state [cf.\ Eq.~\eqref{eq_psib}] \begin{equation} \ket{\psi_\mathrm{b}}=\frac{1}{\sqrt{1+\alpha^2}}\left(\ket{1}+\alpha e^{i\varphi}\ket{2}\right). \end{equation} Any deviation from this ``optimal'' initial condition results in a reduced thermalization capability and therefore in a reduced power output. The power-enhancement factor for two equal parallel dipoles ($\alpha=1$ and $\varphi=0$) is then \begin{equation}\label{eq_power_general_tls_parallel_three_level_system} \frac{\dot{W}}{\dot{W}^\mathrm{TLS}}=2\Big[1-\ew{\Pi_\mathrm{d}}_{\rho(0)}\Big]. \end{equation} Figure~\ref{fig_power_enhancement} clearly reveals that the initial state has to be carefully prepared in order to exhibit the maximal possible power boost.
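These optimality conditions can be verified directly. The sketch below (our own check, with arbitrarily chosen parameters) builds the pure state $\ket{\psi(0)}$ in the basis $\{\ket{0},\ket{1},\ket{2}\}$ and confirms Eq.~\eqref{eq_initial_state_coherence_rho00_fixed} and the vanishing dark-state overlap:

```python
import numpy as np

alpha, phi, rho00, Phi = 1.3, 0.4, 0.2, 0.9   # arbitrary test parameters

# bright and dark superpositions of the two degenerate excited states
b = np.array([1.0, alpha * np.exp(1j * phi)]) / np.sqrt(1 + alpha**2)
d = np.array([alpha * np.exp(-1j * phi), -1.0]) / np.sqrt(1 + alpha**2)

# optimal pure state |psi(0)> = sqrt(rho00)|0> + sqrt(1-rho00) e^{i Phi}|psi_b>
psi = np.concatenate(([np.sqrt(rho00)], np.sqrt(1 - rho00) * np.exp(1j * Phi) * b))
rho = np.outer(psi, psi.conj())

# populations/coherence of the excited-state block match the optimal values
assert np.isclose(rho[2, 1], alpha * np.exp(1j * phi) * (1 - rho00) / (1 + alpha**2))
assert np.isclose(rho[1, 1], (1 - rho00) / (1 + alpha**2))
assert np.isclose(rho[2, 2], alpha**2 * (1 - rho00) / (1 + alpha**2))

# no dark-state overlap: full thermalization capability, hence maximal boost
dark = np.concatenate(([0.0], d))
assert np.isclose(dark.conj() @ rho @ dark, 0.0)
```

Note that the overall phase $\Phi$ of the bright component drops out of all four checks, in line with the statement that ground--excited coherences do not matter.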
\par \begin{figure} \centering \includegraphics[width=\columnwidth]{power_enhancement} \caption{(Color online) Power enhancement factor~\eqref{eq_power_general_tls_parallel_three_level_system} for a three-level system with respect to a TLS as a function of the ground-state population and the coherence for parallel equal dipole moments. The optimal amplification (by a factor of two) is realized on the upper edge of the colored area, i.e., for the maximally allowed coherence (and correct phase) once $\rho_{00}$ is fixed [Eq.~\eqref{eq_initial_state_coherence_rho00_fixed}]. The lower left shaded triangle corresponds to a reduction of the power relative to a TLS, up to complete suppression in its lower left corner (the point corresponding to the dark state, which has the maximally allowed value for the modulus of the coherence but the opposite phase). For anti-parallel ($\varphi=\pi$) dipoles the diagram would be vertically flipped around the $\operatorname{Re}\rho_{21}=0$ axis.}\label{fig_power_enhancement} \end{figure} \par \section{Dicke-system performance}\label{sec_dicke} \par \begin{figure} \centering \includegraphics[width=\columnwidth]{dicke} \caption{(Color online) The Dicke system exemplified for $N=2$. The two atoms interacting with the same environment (left plot) are mapped onto a non-degenerate four-level system consisting of collective states (middle plot). The two states carrying one excitation are superradiant ($\ket{1}$) and subradiant ($\ket{1^\prime}$), respectively. If the dipole-dipole interaction between the atoms vanishes ($\Omega_{12}=0$) and their collective decay rates are maximized ($\Gamma_{12}=\gamma$), the system is mapped onto an effective three-level system with equidistant levels (equivalent to a spin-$1$ system plus one dark state) (right plot).}\label{fig_dicke} \end{figure} \par Let us now consider an ensemble of $N$ two-level atoms interacting with the same bath. 
The bath mediates the dipole-dipole interaction as well as non-local cooperative photon exchange, whereby a photon emitted by one atom can be re-absorbed by another~\cite{lehmberg1970radiation,lehmberg1970radiation2}. Quite generally, the dipole-dipole interaction renders the $j$-excitation states non-degenerate~\cite{lehmberg1970radiation2,agarwalbook,ficek1986cooperative}. Whilst some of these collective states give rise to an enhanced decay rate, others decay much more slowly than their single-atom counterparts (see Fig.~\ref{fig_dicke}), conforming to ``superradiance'' and ``subradiance'', respectively~\cite{agarwalbook,agarwal1970master,lehmberg1970radiation,lehmberg1970radiation2,ficek1986cooperative,prasad2000polarium,scully2006directed,akkermans2008photon,scully2009super,lin2012superradiance,svidzinsky2013quantum}. Hence, in general $N$-atom systems do not obey the Dicke model~\cite{dicke1954coherence}. However, by choosing an appropriate geometry (elaborated on in Sec.~\ref{sec_multiatom_realization}), the dipole-dipole interaction can be made to vanish, while at the same time the subradiant states become completely dark (see Fig.~\ref{fig_dicke}). Only such a configuration realizes the Dicke system: The $N$ two-level atoms (equivalent to spin-$1/2$ particles) can then be mapped onto a collective spin-$N/2$ system~\cite{breuerbook}, whose Hilbert space is spanned by $N+1$ (symmetrized) collective states $\ket{j}$ containing $j$ excitations with eigenenergy $j\hbar\omega_0$ (where $\omega_0$ denotes the TLS transition frequency) and by $2^N-(N+1)$ dark states, which are decoupled from the dynamics (by contrast, in the fully aligned multilevel case, $N-2$ of the $N$ states are dark). This mapping~\cite{breuerbook,lehmberg1970radiation2} is sketched in Fig.~\ref{fig_dicke} for two atoms.
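The state counting behind this mapping follows from the angular-momentum decomposition of $N$ spin-$1/2$ particles. A short numerical check (our own illustration, using the standard multiplicity formula $\binom{N}{k}-\binom{N}{k-1}$ with $k=N/2-j$) confirms that exactly $N+1$ states form the maximal-spin (Dicke) ladder, while the remaining $2^N-(N+1)$ states lie outside it:

```python
from math import comb

def multiplicities(N):
    """Multiplicity of total spin j in N spin-1/2 particles; keys are 2j."""
    mult = {}
    k = 0
    while N / 2 - k >= 0:
        j2 = N - 2 * k                        # 2j
        mult[j2] = comb(N, k) - (comb(N, k - 1) if k >= 1 else 0)
        k += 1
    return mult

for N in range(2, 9):
    mult = multiplicities(N)
    total = sum(m * (j2 + 1) for j2, m in mult.items())
    assert total == 2**N                      # full Hilbert-space dimension
    assert mult[N] == 1                       # a single spin-N/2 (Dicke) ladder
    dark = 2**N - (N + 1)                     # states outside the Dicke ladder
    assert dark == total - (N + 1)
```

For $N=2$ this reproduces the situation of Fig.~\ref{fig_dicke}: a spin-$1$ triplet plus one singlet (dark) state.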
\par The dissipative dynamics of the Dicke system is described by the Liouvillian superoperator (given here, for brevity, for a single bath and without modulation; see also Ref.~\cite{breuerbook}) \begin{multline}\label{eq_L_dicke} \mathcal{L}\rho=N\frac{1}{2}G(\omega_0)\left(2A\rho A^\dagger-A^\dagger A\rho-\rho A^\dagger A\right) \\ +N\frac{1}{2}G(\omega_0)e^{-\beta\hbar\omega_0}\left(2A^\dagger\rho A-AA^\dagger\rho-\rho AA^\dagger\right), \end{multline} with \begin{equation} A\mathrel{\mathop:}=\frac{1}{\sqrt{N}}\sum_{i=1}^N \sigma_-^i\equiv\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}\sqrt{(j+1)(N-j)}\,\ketbra{j}{j+1} \end{equation} and the individual Pauli operators $\sigma_-^i\mathrel{\mathop:}=\ketbra{g_i}{e_i}$ for the $i$th two-level atom. The jump operator $\propto \sqrt{N\gamma}A$ describes the emission of one quantum accompanied by de-excitations ($j\rightarrow j-1$) of the non-dark excited states $\ket{j>0}$. The individual emission rates from $\ket{j}$ to $\ket{j-1}$ ($j\in(0,N]$) evaluate to $\gamma_j=\gamma j(N-j+1)$~\cite{breuerbook}. The steady-state solution of the master equation with the Liouvillian~\eqref{eq_L_dicke} is then---in analogy to Eqs.~\eqref{eq_ss_nlevels_general}---the diagonal, partially thermalized state \begin{subequations}\label{eq_ss_dicke} \begin{align} \rho_{00}^\mathrm{ss}&=\frac{1}{\sum_{j=0}^N e^{-j\beta\hbar\omega_0}}\Big[1-\ew{\Pi_\mathrm{d}}_{\rho(0)}\Big]\\ \rho_{jj}^\mathrm{ss}&= \begin{cases} e^{-j\beta\hbar\omega_0}\rho_{00}^\mathrm{ss}&j=1,\dots,N\\ \bkew{\phi_j}{\rho(0)}{\phi_j}&j=N+1,\dots,2^N-1, \end{cases} \end{align} \end{subequations} where we again resort to the projector $\Pi_\mathrm{d}$ onto the dark subspace. Here the states $\ket{\phi_j}$ again denote the eigenstates of the dark part of the initial density matrix. The major difference between Eqs.~\eqref{eq_ss_dicke} and Eqs.~\eqref{eq_ss_nlevels_general} lies in the Boltzmann factors, which now differ according to the number of excitations in each state $\ket{j}$.
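A quick numerical check (our own sketch) confirms that the collective operator $A$ reproduces the quoted emission rates $\gamma_j=\gamma j(N-j+1)$ and that the steady state~\eqref{eq_ss_dicke} conserves the dark-subspace weight:

```python
import math
import numpy as np

def collective_lowering(N):
    """Matrix of A in the Dicke basis |j>, j = 0..N excitations."""
    A = np.zeros((N + 1, N + 1))
    for j in range(1, N + 1):
        A[j - 1, j] = math.sqrt(j * (N - j + 1) / N)
    return A

N, gamma = 4, 1.0
A = collective_lowering(N)
# jump operator sqrt(N*gamma)*A yields the rates gamma_j = gamma*j*(N-j+1)
for j in range(1, N + 1):
    rate = N * gamma * A[j - 1, j] ** 2
    assert abs(rate - gamma * j * (N - j + 1)) < 1e-12

# partially thermalized Dicke steady state, Eqs. (eq_ss_dicke)
x, Pd = 0.8, 0.1          # x = beta*hbar*omega0, Pd = initial dark weight
Z = sum(math.exp(-j * x) for j in range(N + 1))
rho = [(1 - Pd) * math.exp(-j * x) / Z for j in range(N + 1)]
assert abs(sum(rho) + Pd - 1.0) < 1e-12     # dark weight Pd is conserved
```

The rates peak near half filling ($j\approx N/2$), which is the familiar superradiant enhancement of the middle of the Dicke ladder.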
As in Eqs.~\eqref{eq_ss_nlevels_general}, coherences between the bare atomic states explicitly appear when transforming the solution~\eqref{eq_ss_dicke} to the $N$-atom product basis. \par The same derivation as before for the multilevel case yields the cold and hot heat currents presented in Appendix~\ref{app_heat_currents_dicke}. The ratio of these heat currents~\eqref{eq_dicke_heat_currents} and the associated power to their counterparts generated by $N$ \emph{independent} two-level heat machines [Eq.~\eqref{eq_heat_currents_general} with $N=N_\mathrm{eff}=2$] reads \begin{equation}\label{eq_heat_power_ratio_dicke} \frac{J_i}{N J_i^\mathrm{TLS}}\equiv\frac{\dot{W}}{N {\dot{W}}^\mathrm{TLS}}=\left[\sum_{j=0}^{N-1} e^{-j\beta_\mathrm{eff}\hbar\omega_0}\right]\frac{\rho_{00}^\mathrm{ss}}{\rho_{00}^\mathrm{TLS}}, \end{equation} where \begin{equation} \frac{\rho_{00}^\mathrm{ss}}{\rho_{00}^\mathrm{TLS}}=\frac{1+e^{-\beta_\mathrm{eff}\hbar\omega_0}}{\sum_{j=0}^N e^{-j\beta_\mathrm{eff}\hbar\omega_0}}\Big[1-\ew{\Pi_\mathrm{d}}_{\rho(0)}\Big]. \end{equation} \par At low effective temperatures, $\beta_\mathrm{eff}\rightarrow\infty$, at most a single excitation is present in the Dicke system. In this limit, the Dicke ladder for $N-1$ atoms may be mapped onto a degenerate $N$-level system with parallel transition-dipole vectors (which corresponds to an effective two-level system with transition-dipole strength enhanced by $\sqrt{N-1}$). Now, from Fig.~\ref{fig_currents_nlevels} we know that in the low-temperature regime the aligned and non-aligned multilevel systems perform similarly, the latter corresponding to $N-1$ independent TLS. This behavior is obtainable also from Eq.~\eqref{eq_heat_power_ratio_dicke}, \begin{equation} \lim_{\beta_\mathrm{eff}\rightarrow\infty}\frac{\dot{W}}{N {\dot{W}}^\mathrm{TLS}}=\Big[1-\ew{\Pi_\mathrm{d}}_{\rho(0)}\Big]. 
\end{equation} Hence, at low effective temperatures the $N$-atom Dicke heat machine performs as well as (but not better than) $N$ independent two-level atoms, provided it is amenable to complete thermalization. \par In the opposite limit of high effective temperatures, $\beta_\mathrm{eff}\rightarrow 0$, the cooperative Dicke system yields a power boost, \begin{equation} \lim_{\beta_\mathrm{eff}\rightarrow0}\frac{\dot{W}}{N {\dot{W}}^\mathrm{TLS}}=\frac{2N}{N+1}\Big[1-\ew{\Pi_\mathrm{d}}_{\rho(0)}\Big]. \end{equation} This ratio has the following meaning: The denominator represents the thermal equipartition of all non-dark state populations, $\Big[1-\ew{\Pi_\mathrm{d}}_{\rho(0)}\big]/(N+1)$, whereas $N$ is the bright-state power enhancement and the factor of $2$ stems from the high-temperature TLS ground-state population $1/2$. \par Hence, the $N$-atom Dicke heat engine can give at most twice the power of its counterpart consisting of $N$ independent TLS. The full temperature dependence of the power boost for various ensemble sizes is shown in Fig.~\ref{fig_currents_dicke}. \par \begin{figure} \centering \includegraphics[width=\columnwidth]{currents_dicke} \caption{(Color online) Heat currents and power output [Eq.~\eqref{eq_heat_power_ratio_dicke}] for the $N$-atom Dicke model compared to $N$ independent two-level-atom heat machines for different particle numbers. Here we have assumed an optimal initial condition orthogonal to the dark subspace. The maximum enhancement factor is $2$.}\label{fig_currents_dicke} \end{figure} \par \par As a consequence of Eq.~\eqref{eq_heat_power_ratio_dicke}, i.e., the identical boost of the two heat currents, the efficiency of the Dicke heat engine and the COP of the Dicke refrigerator, respectively, are the same as their two-level counterparts, \begin{subequations} \begin{align} \eta^\mathrm{Dicke}&=\eta^\mathrm{TLS}\\ \mathrm{COP}^\mathrm{Dicke}&=\mathrm{COP}^\mathrm{TLS}. 
\end{align} \end{subequations} \par Our results on power enhancement in $N$-level systems as well as for $N$ atoms conforming to the Dicke model are summarized in Table~\ref{table}. \par \begin{table} \centering \begin{tabular}{|c|c|c|c|} \hline & \makecell{maximal power \\ relative to TLS \\ (low $T_\mathrm{eff}$)} & \makecell{maximal power \\ relative to TLS \\ (high $T_\mathrm{eff}$)} & \makecell{initial state \\ for maximal \\ power} \\ \hline \makecell{$N$-level \\ (non-aligned \\ dipoles)} & $N-1$ & $(N-1)\frac{2}{N}$ & any\\ \hline \makecell{$N$-level \\ (fully aligned \\ dipoles)} & $N-1$ & $N-1$ & non-dark \\ \hline \makecell{$N$-atom \\ Dicke} & $N$ & $\frac{2N^2}{N+1}$ & non-dark \\ \hline \end{tabular} \caption{Summary of the (maximal) power enhancement factors and their respective dependencies on the effective temperature [cf.\ Figs.~\ref{fig_currents_nlevels} and~\ref{fig_currents_dicke}] and the initial condition for the multilevel heat machine and the Dicke machine (compared to a single TLS). The Dicke heat machine shows the best performance.}\label{table} \end{table} \section{Realization considerations}\label{sec_realizations} \subsection{Misaligned dipoles} Degenerate excited states with non-parallel dipole-transition vectors to the ground state are ubiquitous in atomic systems without hyperfine splitting. For example, the $2p$ manifold of hydrogen consists of the three degenerate states $\ket{n=2,l=1,m=\{0,\pm1\}}$. Their corresponding dipole transition vectors to the ground state are all of equal strength, but orthogonal to each other. This orthogonality, however, does not restrict thermalization (although it may not produce the maximal boost at high temperatures, cf.\ Fig.~\ref{fig_currents_nlevels}), since in steady state an orthogonal configuration performs exactly like any other non-aligned configuration.
\subsection{Aligned dipoles in multilevel atoms}\label{subsec_realizations_parallel} Multilevel degeneracy combined with perfect alignment (parallelism) of the transition dipoles, which is a prerequisite for coherence effects, is much harder to realize, since it cannot occur in bare atomic systems due to selection rules~\cite{ficek2002quantum}. It is, in general, also not possible to engineer dressed states within the excited manifold that have this property and at the same time adhere to our model: Combining degenerate states with orthogonal dipole moments (as in the $2p$ manifold of hydrogen) creates effective states with non-orthogonal but also non-parallel dipoles~\cite{ficek2002quantum}. \par A possible realization of alignment may be to superpose a decaying and a non-decaying (metastable) state. Such linear combinations of decaying and metastable states, however, do not possess a definite parity. This means that the dipole operator $\mathbf{D}$ has permanent (static) dipole moments $\bkew{e_i}{\mathbf{D}}{e_i}$, as e.g., formed via the linear Stark effect in hydrogen (see Fig.~\ref{fig_linear_stark_effect}). In the derivation of the master equation~\eqref{eq_master}, however, we have assumed that the ground and excited states have a definite parity and therefore the atom has no permanent dipole moment. \par \begin{figure} \centering \includegraphics[width=\columnwidth]{linear_stark_effect} \caption{(Color online) Linear Stark effect in hydrogen. The metastable state $\ket{2,0,0}$ is mixed with $\ket{2,1,0}$ to form the dressed states $(\ket{2,0,0}\pm\ket{2,1,0})/\sqrt{2}$ by a static electric field~\cite{schwablbookqm1}; for a weak field these states are quasi-degenerate. 
Their transition dipoles (red arrows) are anti-parallel, but since $\ket{2,0,0}$ and $\ket{2,1,0}$ have opposite parity, a hydrogen atom in a static electric field behaves as if it possesses a permanent dipole moment~\cite{schwablbookqm1}.}\label{fig_linear_stark_effect} \end{figure} \par \par Nevertheless, a permanent dipole moment $\bkew{e_i}{\mathbf{D}}{e_i}$ is not a hindrance to the treatment of system-bath coupling for a broad class of thermal baths~\cite{breuerbook}: For dipolar system-bath coupling [$H_\mathrm{SB}=\mathbf{D}\cdot\tilde{\mathbf{B}}$ as in Eq.~\eqref{eq_H_SB}], the static part of the dipole operator in the interaction picture does not contribute to the Liouvillian part of the master equation as long as \begin{equation} \lim_{\omega\rightarrow0}G(\omega)\equiv\lim_{\omega\rightarrow0}\gamma(\omega) (n(\omega)+1)=0, \end{equation} which is the case for bosonic baths provided $\gamma(\omega)\propto\omega^\alpha$ with $\alpha>1$. This $\omega=0$ (static) component of the dipole operator, which is absent in the Liouvillian, represents the permanent dipole moment, since (given here for a TLS for brevity) \begin{equation} \mathbf{D}(t)=\mathbf{d}\sigma_+ e^{i\omega_0t}+\mathbf{d}^*\sigma_- e^{-i\omega_0t}+\bkew{e}{\mathbf{D}}{e}\sigma_+\sigma_- \end{equation} with the transition dipole matrix element $\mathbf{d}\mathrel{\mathop:}=\bkew{e}{\mathbf{D}}{g}$. \par Yet, even though Stark-shifted states may realize the aligned-dipoles model (on time scales much shorter than their inverse splitting $1/\Delta$), the predicted power boost is countered by the fact that in an equal superposition of decaying and metastable states only half of the superposed state has dipolar interaction with the bath~\cite{gelbwaser2014power}. Hence, the decay rate is reduced compared to that of the decaying state to $\gamma_\mathrm{eff}=\frac{1}{2}\gamma$. Consequently, the power boost and the reduction in the effective spontaneous emission rate exactly cancel each other. 
Still, one could argue that there is a power boost relative to a TLS decaying at rate $\gamma_\mathrm{eff}$. \par It has been suggested that one may create an effective $V$-type system with aligned dipoles out of a $\Lambda$-system with orthogonal transitions by applying a strong laser field~\cite{ficek2002quantum}. The resulting $V$-system is non-degenerate, but the energy mismatch can be tuned to be very small, such that one can observe the effects discussed here on long time scales. However, as for the Stark effect discussed above, the decay rate of the dressed state is smaller than that of the bare excited state, precluding a power boost. \par In view of these difficulties, the strict alignment requirement should be relaxed in realistic experimental situations. Even though for quasi-aligned dipole transition vectors no dark states exist, strongly subradiant (metastable) states may still arise. The non-thermalization of these ``quasi-dark'' states should therefore be experimentally observable on appropriate time scales. \par A promising alternative is to make use of the plethora of vibrational and rotational levels in molecules as discussed in the supplemental material of Ref.~\cite{tscherbul2014long}. \subsection{The Dicke system}\label{sec_multiatom_realization} In view of the conceptual difficulties in finding a realization of parallel dipoles in a degenerate multilevel system that is capable of a power boost compared to a TLS, we shall revisit the Dicke system discussed in Sec.~\ref{sec_dicke}. As we have seen, this multipartite setup exhibits similar thermodynamic properties to a degenerate $N$-level system of parallel dipoles and may produce a power boost under suitable initial conditions. \par Traditionally, the superradiant Dicke model was thought to be realizable by a dense atomic ensemble confined well within the cubed emission wavelength~\cite{breuerbook,gross1982superradiance}.
Yet, in general, such ensembles do not obey the Dicke model, since the dipole-dipole interaction as well as the cooperative decay rates strongly vary, depending on the spatial symmetry and spacings of the atomic ensemble~\cite{lehmberg1970radiation,kurizki1985quantum,kurizki1987theory,mazets2007multiatom,petrosyan2002scalable,scully2009collective}. The remedy is to realize the Dicke model in field-confining structures such as photonic bandgap structures~\cite{kurizki1990two,li2008fabrication}, cavities~\cite{kurizki1996resonant} (see also~\cite{binder2015quantacell}), and waveguides~\cite{shahmoon2011strongly,shahmoon2013nonradiative,shahmoon2013dispersion,shahmoon2014nonlinear}. \par \begin{figure} \centering \includegraphics[width=\columnwidth]{dicke_realization} \caption{(Color online) A realization of the Dicke model by means of two-level atoms confined at interatomic distance $\lambda$ next to or within one-dimensional waveguides. See text for details.}\label{fig_dicke_realization} \end{figure} \par In particular, consider the realistic setup (Fig.~\ref{fig_dicke_realization})~\cite{mitsch2014quantum} of a periodic $1$d lattice of atoms (each tightly confined by the lattice potential) that are positioned at equal distances $d$ within or next to a photonic waveguide. Alternatively, we may employ a chain of superconducting qubits coupled to a microwave coplanar waveguide~\cite{vanloo2013photon}. Field confinement can give rise to a giantly enhanced density of modes at the lower cutoff frequency of the waveguide~\cite{kleppner1981inhibited,shahmoon2013nonradiative}. Then, atoms whose resonance frequency $\omega_0$ is just above $\omega_\mathrm{cutoff}$ are predominantly coupled to the $1$d axial-mode continuum, whereas transverse-mode (free-space) effects are negligible in comparison, $\gamma_{1\mathrm{d}}\gg\gamma_\mathrm{free}$~\cite{kleppner1981inhibited,shahmoon2013nonradiative}. 
In such an effective $1$d photonic ``bath'', the cooperative decay rates $\Gamma_{ij}$ and resonant dipole-dipole interactions $\Omega_{ij}$ of atoms $i$ and $j$ are~\cite{vanloo2013photon,lalumiere2013input} \begin{subequations}\label{eq_dicke_realization_rates_general} \begin{align} \Gamma_{ij}&=\gamma_{1\mathrm{d}}\cos\left(kd\right)\\ \Omega_{ij}&=\frac{\gamma_{1\mathrm{d}}}{2}\sin\left(kd\right), \end{align} \end{subequations} with the wavenumber $k=2\pi/\lambda=\omega_0/c$. The oscillatory behavior of Eqs.~\eqref{eq_dicke_realization_rates_general} stands in stark contrast to the decay with distance of their counterparts in free space~\cite{lehmberg1970radiation} and allows us to choose the interatomic distance $d=\lambda$ such that \begin{subequations} \begin{align} \Gamma_{ij}&=\gamma_{1\mathrm{d}}\label{eq_dicke_realization_rates}\\ \Omega_{ij}&=0. \end{align} \end{subequations} In this geometry, the atomic lattice realizes the Dicke system~\cite{dicke1954coherence,agarwalbook,gross1982superradiance}, i.e., diagonalization of the $N$-atom Liouvillian operator with the interatomic decay rates~\eqref{eq_dicke_realization_rates} leads to the Liouvillian~\eqref{eq_L_dicke} involving a single decay channel, whose collective steady-state solution is the partial thermal state~\eqref{eq_ss_dicke}. By coupling the atoms to two similar waveguides kept at different temperatures (see Fig.~\ref{fig_dicke_realization}), we may operate the setup as the heat machine depicted in Fig.~\ref{fig_system}a. \par The high-temperature regime, where the Dicke model may outperform the fully thermalized model corresponds to temperatures $T_\mathrm{eff}\gtrsim \hbar\omega_0/(5k_\mathrm{B})$ (see Fig.~\ref{fig_currents_dicke}). For optical transitions ($\omega_0\sim10^{15}$\,Hz) these are temperatures of the order of $10^3$\,K. By contrast, for microwave transitions ($\omega_0\sim10^9$\,Hz) the high-temperature regime is already realized at $T_\mathrm{eff}\gtrsim10^{-3}$\,K. 
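The oscillatory distance dependence of the cooperative rates and the choice of the Dicke point $d=\lambda$ can be sketched numerically (our own illustration of Eqs.~\eqref{eq_dicke_realization_rates_general}; $\gamma_{1\mathrm{d}}$ and the wavelength are set to unity as illustrative unit choices):

```python
import math

# Cooperative decay rate and resonant dipole-dipole shift for two atoms
# coupled to a 1d waveguide continuum, as functions of the interatomic
# distance d (illustrative units: gamma_1d = wavelength = 1).
def cooperative_rates(d, wavelength=1.0, gamma_1d=1.0):
    k = 2.0 * math.pi / wavelength
    Gamma_ij = gamma_1d * math.cos(k * d)        # cooperative decay rate
    Omega_ij = 0.5 * gamma_1d * math.sin(k * d)  # dipole-dipole interaction
    return Gamma_ij, Omega_ij

# At the Dicke point d = lambda the cross-decay rate is maximal and the
# dipole-dipole shift vanishes, realizing the single-channel Dicke model:
Gamma, Omega = cooperative_rates(d=1.0)
print(Gamma, Omega)

# At d = lambda/4 the roles are reversed (no cross decay, maximal shift):
print(cooperative_rates(d=0.25))
```

This oscillatory behavior (in contrast to the monotonic decay with distance in free space) is what makes the lattice geometry with $d=\lambda$ a faithful Dicke realization.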
\section{Conclusions}\label{sec_conclusions_and_outlook} The present analysis sheds new light on multilevel or multipartite heat machines and their degeneracy as a thermodynamic resource that can boost the heat currents and the power output of heat engines. By contrast, the efficiency of such machines was shown to be unaltered compared to a TLS-based machine, which adheres to the Carnot bound. \par We can summarize our findings regarding the r\^ole of initial and steady-state coherences in the power enhancement of degenerate steady-state multilevel quantum heat engines as follows: \begin{enumerate} \item \emph{Steady-state coherences are a consequence of thermalization if at least two transition dipoles are parallel.} Such coherences arise in the bare energy basis but not in the rotated basis containing dark states. \item \emph{Neither initial nor steady-state coherences necessarily imply power enhancement compared to a single TLS and, vice versa, power enhancement does not always entail initial or steady-state coherences.} The relevant factor for power enhancement is the thermalization capability (Sec.~\ref{subsec_parallel}) of the initial state, not the amount of steady-state coherence. As an extreme example, even if all three dipoles of a four-level system are perpendicular to each other (the largest orthogonal system possible in three-dimensional space) the power is enhanced---without coherence between the excited states at any time and regardless of the initial condition. The decisive ingredient is the \emph{common ground state} of the dipole transitions. It is generally true that aligned dipole configurations yield a higher power boost than non-aligned ones in the high-temperature limit---if the initial state is properly chosen. Even then, we may not attribute this power boost (compared to non-aligned configurations) to steady-state coherences but rather to the presence of dark states, which reduce the number of available levels amenable to thermalization.
The key factor for enhancement is the \emph{ground-state population} of this reduced $N_\mathrm{eff}$-level system. The fewer levels that can thermalize, the more thermally populated the ground state becomes. Yet, any population in the ground state necessarily \emph{decreases} the maximally permitted coherences within the excited-states manifold. \item \emph{Power reduction implies the existence of steady-state coherences.} Power reduction is associated with dark states being initially populated. The largest possible steady-state coherences correspond to fully excited (population-inverted) states, which can only occur if the initial state is dark. \end{enumerate} \par Overall, we found that an $N$-level heat machine cannot yield higher power than $N-1$ independent two-level heat machines. Whilst at low temperatures both machines perform equally well, the power enhancement in the $N$-level machine decreases at higher temperatures, unless all dipoles are parallel. Taking advantage of the full cooperativity of the excited states (occurring only for parallel transition dipoles), however, requires carefully chosen (non-dark) initial conditions (Sec.~\ref{sec_ss}) and its realization encounters conceptual difficulties (Sec.~\ref{subsec_realizations_parallel}). \par Similar conclusions were found to apply in $N$-atom ensembles that, under specially-chosen conditions, obey the Dicke model (Sec.~\ref{sec_multiatom_realization}). Remarkably, the $N$-atom Dicke model yields at best doubly-enhanced power compared to $N$ independent atoms, and only at high temperatures (Sec.~\ref{sec_dicke}). Nevertheless, doubling the power output of a heat machine by cooperative effects will be an important achievement. \par To conclude, our findings elucidate the possible impact of steady-state coherence or entanglement in the working medium on the performance of heat machines. The fact that the heat baths are assumed to be thermal allows us to uphold the Carnot bound.
The possibility of exceeding the Carnot bound by virtue of harnessing \emph{non-thermal} baths in the heat machine, including baths consisting of coherently-superposed~\cite{scully2003extracting,abah2014efficiency,turkpence2015quantum} or superradiant~\cite{hardal2015superradiant} atomic systems, as well as squeezed baths~\cite{rossnagel2014nanoscale} is outside the scope of the present discussion, and so are effects related to piston quantization~\cite{quan2012validity,boukobza2013breaking,gelbwaser2013work,gelbwaser2014heat,gelbwaser2015work} or to strong system-bath coupling~\cite{nieuwenhuizen2002statistical,gallego2014thermal,gelbwaser2015strongly} that require further clarification. Notwithstanding these open issues, the present results help delineate the limited (but significant) part that quantum coherence or entanglement may play in quantum thermodynamics. \begin{acknowledgments} We would like to thank Laurin Ostermann for helpful discussions. This work has been supported by the BSF, ISF, AERI, MOST, and CONACYT. \end{acknowledgments}
\section{Introduction} Einstein's theory of relativity predicts that light rays get deflected by the curvature of spacetime, an effect known as gravitational lensing. The deflection of light has been extensively studied in the literature in various astrophysical systems, in both the weak and the strong gravitational lensing limit \cite{weak1,weak2,strong1,strong2,strong3,virbhadra1,virbhadra2,virbhadra3,sereno1,sereno2,sereno3}. A very important contribution has recently been made by Gibbons and Werner, who demonstrated the importance of global topology for the deflection of light using the optical geometry and the famous GBT \cite{gibbons1}. Furthermore, they computed the deflection angle from the Schwarzschild black hole by considering a domain outside of the light ray, in contrast to the standard method, where the lensing effect is related to the mass enclosed within a given region of space. More recently, Werner has extended this method and computed the deflection angle by a Kerr black hole \cite{werner}. Amazingly, this method was shown to be very suitable for calculating the deflection angle in spacetimes with topological defects, including cosmic strings and global monopoles \cite{kimet1,kimet2,kimet3}. Quantum effects on the deflection of light by a quantum improved Schwarzschild black hole, as well as gravitational effects due to a cosmic string in Schwarzschild spacetime, have also been studied \cite{kimet4,ao}. In this paper, we will use the Gibbons-Werner (GW) method to obtain the deflection angle in a quantum improved Kerr black hole geometry with a cosmic string. Topological defects are associated with a number of quantum and gravitational effects, including early structure formation from cosmic string loops, which has been studied by Benjamin et al. in Ref. \cite{benjamin}, and light deflection by cosmic strings and global monopoles in Refs. \cite{kimet2,kimet3}. Recently, Hackmann et al.
\cite{kerr1} investigated the deflection of light by a Kerr black hole pierced by a cosmic string. In this paper, using similar arguments, we shall introduce a quantum improved Kerr black hole metric pierced by a static and infinitely long cosmic string lying along the $z$--axis to calculate the deflection angle in the weak approximation limit. Various modifications of the deflection of light have been studied in the context of quantum gravity effects \cite{bohr}, non-linear electrodynamics \cite{novello}, and alternative gravity theories \cite{sumanta1,sumanta2}; in particular, deflection in the strong limit by Eddington-inspired Born-Infeld black holes \cite{Born-Infeld1,Born-Infeld2}. Classically, a black hole has a horizon and also a singularity; however, various attempts have been made to remove singularities from black holes, using nonlinear electrodynamics, modified gravities or the effects of quantum gravity. In this paper, we wish to extend the Kerr solution by taking quantum effects into account in the deflection of light in the spacetime of a quantum improved Kerr black hole recently found by Torres \cite{torres}. The main ingredient of this metric is a running Newton's constant $G=G(k)$ \cite{qcsh0}. Further, let us now briefly review the GBT, which connects the geometry of a surface with its topology. Consider an oriented surface domain $(D,\chi,g)$ with Euler characteristic $\chi$ and Riemannian metric $g$, from which one computes the Gaussian curvature $K$. Then the Gauss-Bonnet theorem states \begin{equation} \int\int_D K \rm d S + \int_{\partial D} \kappa \rm d t +\sum_i \alpha_i = 2\pi \chi(D), \label{gb} \end{equation} where $\kappa$ is the geodesic curvature for $\partial D:\{t\}\rightarrow D$ and $\alpha_i$ is the exterior angle at the $i^{th}$ vertex.
Following this approach, globally symmetric lenses are treated as Riemannian manifolds whose geodesics are the spatial light rays. In optical geometry, we calculate the Gaussian optical curvature $K$ to find the asymptotic bending angle, which can be calculated as follows: \begin{equation} \hat{\alpha}=-\int \int_{D_\infty} K \mathrm{d}S. \end{equation} Note that this equation is an exact result for the deflection angle. In this equation, we integrate over an infinite region of the surface $D_\infty$ which is bounded by the light ray. By assumption, one can use the above relation only for asymptotically Euclidean optical metrics. Therefore it will be interesting to see the form of the deflection angle in the case of non-asymptotically Euclidean metrics. One such interesting metric is obtained by considering the Kerr black hole pierced by a cosmic string. We calculate the deflection angle by taking a straight line as the zeroth-order approximation of the light ray. With this method the deflection angle $\hat{\alpha}$ can easily be found, but only in leading order terms. The main difficulty here is that the optical geometry takes the form of a special type of Finsler manifold, known as a Randers manifold. Then we shall use Naz{\i}m's method to construct a Riemannian manifold osculating the Randers manifold. This paper is organized as follows. In Section II, we first recall the quantum improved rotating black hole solution; after that we introduce a cosmic string in this spacetime, with the corresponding quantum improved Kerr-Randers-Cosmic string optical metric. In Section III, we calculate the quantum improved Gaussian optical curvature, which is used to calculate the deflection angle by applying the GBT to the quantum improved optical geometry. In Section IV, we study the geodesic equations in the spacetime of the quantum improved Kerr black hole with a cosmic string. In particular, we calculate the deflection angle to leading order. In Section V, we elaborate on our results.
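As a consistency illustration of the curvature-integral formula above (our own numerical sketch, not part of the original analysis): for the Schwarzschild optical metric the leading-order Gaussian curvature is $K\simeq -2GM/r^3$ with area element $\mathrm{d}S\simeq r\,\mathrm{d}r\,\mathrm{d}\varphi$, and integrating over the domain outside the undeflected ray $r=b/\sin\varphi$ recovers the familiar result $\hat{\alpha}=4GM/b$. We use geometric units $G=c=1$; the values of $M$ and $b$ are illustrative.

```python
import math

# Leading-order Schwarzschild check of alpha = -\iint K dS over the domain
# outside the undeflected ray r = b/sin(phi). Geometric units G = c = 1;
# M and b are illustrative values (our own sketch).
M, b = 1.0, 10.0

def inner_radial(phi):
    # \int_{b/sin(phi)}^{infty} K r dr with K = -2M/r^3 equals -2M sin(phi)/b
    return -2.0 * M * math.sin(phi) / b

# Midpoint rule for the remaining phi-integral over (0, pi):
n = 200000
alpha = -sum(inner_radial((i + 0.5) * math.pi / n) for i in range(n)) * math.pi / n

print(alpha)   # close to 4M/b = 0.4
```

The same double integral, with the quantum improved curvature and osculating metric derived below, is what produces the full deflection angle of this paper.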
\section{Quantum improved Kerr-Randers-Cosmic string optical metric} Recently a non-singular solution, also known as a quantum improved rotating black hole, was found by Torres \cite{torres,qcsh0}. The basic idea is to use a running Newton's constant $G=G(k)$ with a position-dependent scale parameter, i.e. $k=k(r)$, in the asymptotic safety approach. The quantum improved rotating black hole metric was found to be \begin{eqnarray}\label{1}\notag \mathrm{d}s^{2}&=& - \frac{\Delta_{\tilde{\omega}}}{\Sigma}\left(\mathrm{d}t-a \sin^2 \theta \mathrm{d}\varphi \right)^2+\frac{\Sigma}{\Delta_{\tilde{\omega}}}\mathrm{d}r^2+\Sigma \mathrm{d}\theta^2\\ &+& \frac{\sin^2 \theta}{\Sigma}\left(a \mathrm{d}t-(r^2+a^2)\mathrm{d}\varphi \right)^2, \label{m1} \end{eqnarray} where \begin{equation} \Delta_{\tilde{\omega}}=r^2+a^2-2G(r) M r, \end{equation} \begin{equation} \Sigma=r^2+a^2 \cos^2 \theta, \end{equation} \begin{equation} G(r)=\frac{G_{0}r^{3}}{r^{3}+\tilde{\omega} G_{0}\left(r+\gamma G_{0}M\right)}. \end{equation} Note that $k_{obs}$ denotes a typical observational scale such that $G_0=G(k_{obs})$. The quantum effects are encoded in the parameter \begin{equation} \tilde{\omega}=\frac{167 \,\hbar}{30 \pi}. \end{equation} Let us now introduce a cosmic string piercing this Kerr black hole solution by means of the following coordinate transformation \cite{kerr0,kerr1,kerr2} \begin{equation} \mathrm{d} \varphi \to \beta \, \mathrm{d} \varphi, \end{equation} where the cosmic string parameter is given as $\beta =1-4 \mu$, with $\mu$ being the energy density of the cosmic string. In this case, the spacetime (\ref{m1}) can be written as \begin{eqnarray}\notag \mathrm{d}s^{2}&=& - \frac{\Delta_{\tilde{\omega}}}{\Sigma}\left(\mathrm{d}t-a \beta \sin^2 \theta \mathrm{d}\varphi \right)^2+\frac{\Sigma}{\Delta_{\tilde{\omega}}}\mathrm{d}r^2+\Sigma \mathrm{d}\theta^2\\ &+& \frac{\sin^2 \theta}{\Sigma}\left(a \mathrm{d}t-(r^2+a^2)\beta \mathrm{d}\varphi \right)^2.
\label{metric} \end{eqnarray} Note that the cosmic string parameter belongs to the interval $0<\beta <1$, while the deficit angle is given by $\delta=2\pi (1- \beta)$. In what follows we will use the metric (\ref{metric}) to find the deflection angle of light. To do so, let us find the corresponding Finsler metric for our improved Kerr metric with a cosmic string. A Finsler metric $F$ on a manifold $\mathcal{M}$, with $x\in \mathcal{M}$ and $X\in T_x \mathcal{M}$, is characterized by the Hessian \begin{equation} g_{ij}(x,X)=\frac{1}{2}\frac{\partial^{2}F^{2}(x,X)}{\partial X^{i}\partial X^{j}}.\label{10-3} \end{equation} Moreover, a Randers metric has the form \begin{equation} F(x, X)=\sqrt{a_{ij}(x)X^{i}X^{j}}+b_{i}(x)X^{i},\label{11-3} \end{equation} subject to the condition $a^{ij}b_{i}b_{j}<1$, with $a_{ij}$ being a Riemannian metric and $b_{i}$ a one-form. Our quantum improved Kerr metric with a cosmic string can then be written in the following stationary form \cite{gibbons2} \begin{equation}\label{13-3} \mathrm{d}s^2=V^2\left[-\left(\mathrm{d}t-b_i \mathrm{d}x^i \right)^2+a_{ij}\mathrm{d}x^i \mathrm{d}x^j\right], \end{equation} where $V$ should be properly chosen. If we consider null geodesics, $\mathrm{d}s^2=0$, the last metric can be written in the form \eqref{11-3}, where \begin{widetext} \begin{eqnarray} a_{ij}(x)\mathrm{d}x^i \mathrm{d}x^j&=&\frac{\Sigma^4}{\Delta_{\tilde{\omega}}-a^2 \sin^2 \theta}\left( \frac{\mathrm{d}r^2}{\Delta_{\tilde{\omega}}}+\mathrm{d}\theta^2+\frac{\Delta_{\tilde{\omega}} \sin^2\theta \beta^2 } {\Delta_{\tilde{\omega}}-a^2 \sin^2\theta }\mathrm{d}\varphi^2 \right),\\ b_{i}(x)\mathrm{d}x^i &=& -\frac{2 a M G(r) r \beta \sin^2 \theta}{\Delta_{\tilde{\omega}}-a^2 \sin^2 \theta}\mathrm{d}\varphi.
\end{eqnarray} Hence, in the equatorial plane $\theta=\pi/2$, we find the corresponding quantum improved Kerr-Randers-String optical metric given by \begin{equation}\label{16-3} F\left(r,\varphi,\frac{\mathrm{d}r}{\mathrm{d}t},\frac{\mathrm{d}\varphi}{\mathrm{d}t}\right)=\sqrt{\frac{r^4 \beta^2 \Delta_{\tilde{\omega}} }{(\Delta_{\tilde{\omega}}-a^2)^2}\left(\frac{\mathrm{d}\varphi}{\mathrm{d}t}\right)^2+\frac{r^4 }{\Delta_{\tilde{\omega}}(\Delta_{\tilde{\omega}}-a^2)}\left(\frac{\mathrm{d}r}{\mathrm{d}t}\right)^2}-\frac{2M G(r)\beta ar}{\Delta_{\tilde{\omega}}-a^2}\frac{\mathrm{d}\varphi}{\mathrm{d}t}. \end{equation} \end{widetext} One can immediately notice that $F$ describes the propagation of light once we set $\mathrm{d}s^2=0$, which implies that $\mathrm{d}t=F(x,\mathrm{d}x)$. Fermat's principle in general relativity tells us that light rays $\gamma$ are chosen such that the following condition is satisfied \begin{equation} 0=\delta\,\int\limits_{\gamma}\mathrm{d}t=\delta\,\int\limits_{\gamma_F}F(x, \dot{x})\mathrm{d}t, \end{equation} where $\gamma_F$ is a geodesic of our Kerr-Randers-Cosmic string optical metric $F$. The key idea here is to construct a Riemannian manifold $(\mathcal{M},\bar{g})$ which osculates the Randers manifold $(\mathcal{M}, F)$ using the so-called Naz{\i}m's method \cite{nazim}. One way to do this is to choose a vector field $\bar{X}$ tangent to the geodesic $\gamma_{F}$, such that $\bar{X}(\gamma_{F})=\dot{x}$, with the Hessian \begin{equation} \bar{g}_{ij}(x)=g_{ij}(x,\bar{X}(x)).\label{17-3} \end{equation} Remarkably, one can check in Ref. \cite{werner} that the geodesic $\gamma_{F}$ of the Randers manifold is also a geodesic $\gamma_{\bar{g}}$ of $(\mathcal{M},\bar{g})$, i.e. $\gamma_{F}=\gamma_{\bar{g}}$. Thus, one can use the osculating Riemannian manifold $(\mathcal{M},\bar{g})$ to compute the deflection angle of light.
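For orientation, the running coupling $G(r)$ entering $\Delta_{\tilde{\omega}}$ can be sketched numerically. This is a minimal illustration of our own in geometric units with $G_0=\hbar=M=1$; the numerical value of the parameter $\gamma$ is an illustrative choice, not one fixed by the text.

```python
import math

# Minimal numerical sketch of the running Newton coupling G(r).
# Geometric units with G0 = hbar = M = 1; gamma = 9/4 is an ILLUSTRATIVE
# choice only (the text leaves gamma unspecified).
G0, M, gamma = 1.0, 1.0, 9.0 / 4.0
omega_tilde = 167.0 / (30.0 * math.pi)   # tilde-omega = 167 hbar / (30 pi)

def G_running(r):
    return G0 * r**3 / (r**3 + omega_tilde * G0 * (r + gamma * G0 * M))

def Delta(r, a=0.3):
    # Delta_omega entering the optical metric (a is an illustrative spin)
    return r**2 + a**2 - 2.0 * G_running(r) * M * r

# The coupling switches off near the core (regularizing the geometry)
# and approaches the classical value G0 far away:
print(G_running(1e-3), G_running(1e6))
```

The limits $G(r)\to 0$ as $r\to 0$ and $G(r)\to G_0$ as $r\to\infty$ are what make the solution non-singular at the center while recovering the classical Kerr geometry asymptotically.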
Let us choose the undeflected light ray as $r(\varphi)=b/\sin\varphi$, with $b$ being the impact parameter, approximated as the minimal distance of the light ray to the cosmic string piercing the black hole. Near the light ray, we choose our vector field as follows \begin{eqnarray} \nonumber\label{vec} \bar{X}^{r}&=&-\cos\varphi+\mathcal{O}(M,a), \\ \bar{X}^{\varphi}&=&\frac{\sin^{2}\varphi}{b}+\mathcal{O}(M,a). \end{eqnarray} In the next section we shall proceed to apply the GBT to the osculating optical geometry $(\mathcal{M},\bar{g})$ and compute the deflection angle. \bigskip \section{Quantum improved Gaussian optical curvature and quantum improved deflection angle} Let us choose a non-singular domain $(D_{R},\bar{g})$ over the osculating Riemannian manifold $(\mathcal{M},\bar{g})$, bounded by a circular curve $C_{R}$ and the geodesic $\gamma_{\bar{g}}$, such that $\partial D_{R}=\gamma_{\bar{g}}\cup C_{R}$ (see Fig. 1). Then the GBT can be stated as follows (cf. \citep{werner}) \begin{equation}\label{19-4} \iint\limits_{D_{R}}K\,\mathrm{d}S+\oint\limits_{\partial D_{R}}\kappa\,\mathrm{d}t+\sum_{i}\theta_{i}=2\pi\chi(D_{R}), \end{equation} where $K$ is our quantum improved Gaussian curvature, $\kappa=|\nabla_{\dot{\gamma}}\dot{\gamma}|$ is the geodesic curvature, and $\theta_{i}$ are the exterior angles. In the limit $R\to \infty$, the sum of the exterior jump angles at $S$ and $O$ gives $\theta_{O}+\theta_{S}\to \pi$. The Euler characteristic is $\chi(D_{R})=1$, since $D_{R}$ is non-singular and simply connected. The GB theorem (\ref{19-4}) can now be written as \begin{equation} \label{gb2} \iint\limits_{D_{R}}K\,\mathrm{d}S+\oint\limits_{\partial D_{R}}\kappa\,\mathrm{d}t=2\pi\chi(D_{R})-(\theta_{O}+\theta_{S})=\pi. \end{equation} \begin{figure}[h!]
\includegraphics[width=0.47\textwidth]{Kerr2.png} \caption{\small \textit {Deflection of light by a quantum improved Kerr black hole pierced by a cosmic string which is perpendicular to the equatorial plane $(r,\varphi)$. Due to the conical topology the deflection angle of light is $4 \pi \mu$, while the total deflection angle is $\hat{\alpha}$; the impact parameter $b$ is the minimal radial distance of the light ray from the cosmic string. Note that the vector field $\bar{X}(r,\varphi)$ is tangent to the geodesic. } } \end{figure} Note that since $\gamma_{\bar{g}}$ is a geodesic, one is left with $\kappa(\gamma_{\bar{g}})=0$. Hence we must calculate $\kappa(C_{R})\mathrm{d}t$, where $\kappa(C_{R})=|\nabla_{\dot{C}_{R}}\dot{C}_{R}|$. Since $R$ is chosen such that $C_{R}:= r(\varphi)=R=\text{const}$, only the radial component remains to be discussed, \begin{equation} \left(\nabla_{\dot{C}_{R}}\dot{C}_{R}\right)^{r}=\bar{\Gamma}^{r}_{\varphi \varphi}\left(\dot{C}_{R}^{\varphi}\right)^{2}. \end{equation} Next, if we recall the unit speed condition $\bar{g}_{\varphi \varphi}\,\dot{C}_{R}^{\varphi}\dot{C}_{R}^{\varphi}=1$ and the Christoffel symbol $\bar{\Gamma}^{r}_{\varphi \varphi}$, and choose $r(\varphi)=R=\text{const}$, then for the geodesic curvature we find $\kappa(C_{R})\to R^{-1}$. Note that this result is the same as for the Kerr black hole without a cosmic string; however, recalling Eq. (\ref{16-3}) it follows that \begin{eqnarray} \mathrm{d}t&=&\left[\sqrt{\frac{R^4 \beta^2 \Delta_{\tilde{\omega}} }{(\Delta_{\tilde{\omega}}-a^2)^2}}-\frac{2aM G(R)\beta R}{\Delta_{\tilde{\omega}}-a^2}\right]\mathrm{d}\varphi. \end{eqnarray} This, of course, is a result of the global conical topology due to the presence of the cosmic string in our spacetime.
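The limit $\kappa(C_R)\,\mathrm{d}t/\mathrm{d}\varphi\to\beta$ for $R\to\infty$ can be checked numerically. The following is our own sketch with illustrative parameter values in geometric units ($G_0=\hbar=M=1$, $a=0.3$, $\beta=0.9$; the value of $\gamma$ in the running coupling is likewise an illustrative assumption):

```python
import math

# Numerical check that kappa(C_R) dt/dphi -> beta as R -> infinity,
# illustrating the non-asymptotically-Euclidean (conical) optical geometry.
# Illustrative parameters in geometric units; gamma = 9/4 is an assumption.
G0, M, a, beta = 1.0, 1.0, 0.3, 0.9
omega_tilde = 167.0 / (30.0 * math.pi)

def G_running(r, gamma=9.0 / 4.0):
    return G0 * r**3 / (r**3 + omega_tilde * G0 * (r + gamma * G0 * M))

def kappa_dt_over_dphi(R):
    Delta = R**2 + a**2 - 2.0 * G_running(R) * M * R
    dt_dphi = (math.sqrt(R**4 * beta**2 * Delta) / (Delta - a**2)
               - 2.0 * a * M * G_running(R) * beta * R / (Delta - a**2))
    return dt_dphi / R           # kappa(C_R) -> 1/R for large R

# Approaches beta = 0.9 rather than 1 as R grows:
print(kappa_dt_over_dphi(1e4), kappa_dt_over_dphi(1e8))
```

The residual deviation from $\beta$ decays as $\sim M/R$, confirming that the deficit-angle factor $\beta$, not unity, controls the boundary term of the GBT.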
As a result we have \begin{eqnarray}\notag & &\lim_{R \to \infty} \kappa(C_{R})\mathrm{d}t\\ \notag &=&\lim_{R \to \infty}\left[\sqrt{\frac{R^2 \beta^2 \Delta_{\tilde{\omega}} }{(\Delta_{\tilde{\omega}}-a^2)^2}}-\frac{2aM G(R)\beta }{\Delta_{\tilde{\omega}}-a^2}\right]\mathrm{d}\varphi \\ &\to & \beta \,\mathrm{d} \varphi. \end{eqnarray} The last result shows that, due to the cosmic string, our quantum improved optical geometry differs from the Kerr optical geometry in the sense that it cannot be viewed as asymptotically Euclidean. In other words, we have $\kappa(C_{R})\mathrm{d}t/\mathrm{d}\varphi=\beta\neq 1$, which reduces to the asymptotically Euclidean case if and only if we let $\beta \to 1$. From the GB theorem (\ref{gb2}), one finds that \begin{eqnarray}\notag \iint\limits_{D_{R}}K\,\mathrm{d}S & + & \oint\limits_{C_{R}}\kappa\,\mathrm{d}t\overset{{R\to \infty}}{=}\iint\limits_{ D_{\infty}}K\,\mathrm{d}S \\ &+& \beta \int\limits_{0}^{\pi+ \hat{\alpha}}\mathrm{d}\varphi, \end{eqnarray} where we integrate over a domain $D_\infty$ outside the light ray $\gamma_{\bar{g}}$. Moreover, $\hat{\alpha}$ is the deflection angle of light to be calculated. After some algebraic manipulation the deflection angle is found to be \begin{eqnarray}\label{alp} \hat{\alpha} &\simeq & 4 \pi \mu - \frac{1}{1-4 \mu} \int\limits_{0}^{\pi}\int\limits_{\frac{b}{\sin \varphi}}^{\infty}K\,\sqrt{\det \bar{g}}\,\mathrm{d}r\,\mathrm{d}\varphi. \end{eqnarray} In what follows we shall compute the quantum improved Gaussian optical curvature $K$. But first, we must find the corresponding metric components $\bar{g}$ of the osculating Riemannian geometry. Making use of the Hessian (\ref{10-3}) and Eq.
(\ref{vec}), we find the following relations \begin{widetext} \begin{eqnarray} \bar{g}_{rr}&=&\frac{-2 a \beta^3 Mr^3 \sin^6 \varphi G_0+\left(r^2 \beta^2 \sin^4 \varphi+\cos^2 \varphi \, b^2 \right)^{3/2}(4r G_0 M+r^2+G_0\tilde{\omega})}{\left(r^2 \beta^2 \sin^4 \varphi+\cos^2 \varphi \,b^2 \right)^{3/2} (G_0 \tilde{\omega}+r^2)},\\\notag \bar{g}_{\varphi \varphi}&=&\frac{r^2 \beta^2 }{(G_0\tilde{\omega}+r^2)\left(r^2 \beta^2 \sin^4 \varphi+\cos^2 \varphi \,b^2 \right)^{5/2}}\\\notag &\times & \Big[\Big((Mr G_0+\tilde{\omega}G_0+r^2)r^2 \beta^2 \sin^4\varphi +\cos^2\varphi b^2 (MrG_0+r^2/2+\tilde{\omega}G_0/2)\Big)2 \left(r^2 \beta^2 \sin^4 \varphi+\cos^2 \varphi b^2 \right)^{3/2}\\\notag &-&\Big(r \beta \sin^2 \varphi (G_0\tilde{\omega}+r^2)\left(r^2 \beta^2 \sin^4 \varphi+\cos^2 \varphi b^2 \right)^{1/2}+aMG_0 (4 \beta^2 \sin^4 \varphi r^2+6 \cos^2 \varphi b^2 )\Big)\\ & \times & r \beta \sin^2 \varphi \left(r^2 \beta^2 \sin^4 \varphi+\cos^2 \varphi b^2 \right)\Big],\\ \bar{g}_{r\varphi}&=&\frac{2 a \beta MG_0 r \cos^3\varphi}{(G_0\tilde{\omega}+r^2)\left(\frac{r^2 \beta^2 \sin^4 \varphi+\cos^2 \varphi \,b^2 }{b^2}\right)^{3/2}}, \end{eqnarray} with the determinant given as \begin{equation} \det \bar{g}= r^2 \beta^2 +\frac{6 M \beta^2 r^3 G_0}{G_0 \tilde{\omega}+r^2}-\frac{6 a \beta^3 M G_0 \sin^2\varphi r^3 }{\sqrt{r^2 \beta^2 \sin^4 \varphi+\cos^2 \varphi \,b^2 } \left(G_0 \tilde{\omega}+r^2\right)}+\mathcal{O}(a^2,M^2). \end{equation} The quantum improved Gaussian optical curvature is defined as \begin{eqnarray}\label{25} K=\frac{\bar{R}_{r\varphi r\varphi}}{\det \bar{g}}=\frac{1}{\sqrt{\det \bar{g}}}\left[\frac{\partial}{\partial \varphi}\left(\frac{\sqrt{\det \bar{g}}}{\bar{g}_{rr}}\,\bar{\Gamma}^{\varphi}_{rr}\right)-\frac{\partial}{\partial r}\left(\frac{\sqrt{\det \bar{g}}}{\bar{g}_{rr}}\,\bar{\Gamma}^{\varphi}_{r\varphi}\right)\right]. 
\end{eqnarray} Using the above relations, we find \begin{eqnarray} K&=&-\frac{2 M G_0}{r^3 }+\frac{12 G_0^2 M \tilde{\omega}}{r^5 }+\frac{6 a M G_0 f(r,\varphi,\tilde{\omega})}{r^9 }, \end{eqnarray} where \begin{eqnarray}\notag f(r,\varphi,\tilde{\omega})&=& \frac{\sin^2 \varphi(r^2-3 G_0 \tilde{\omega})\beta }{\left(r^2 \beta^2 \sin^4 \varphi+\cos^2 \varphi \, b^2 \right)^{7/2}}\Big[r^8 \beta^6 (r^2-\frac{5 G_0 \tilde{\omega}}{3}) \sin^{12} \varphi-\frac{b^2 r^4 \beta^2 (G_0 \tilde{\omega}+r^2)^2 \sin^{10} \varphi}{2}\\\notag &+&\frac{\cos^2\varphi \beta^2}{2} \left[(5\beta^2-9)r^4-142\tilde{\omega}G_0 r^2 (\beta^2+\frac{27}{22})- G_0^2 \tilde{\omega}^2 (\beta^2+9)\right] b^2 r^4 \sin^8 \varphi\\\notag &+& 4 b^3 \beta^2 \cos^2\varphi (r^2+\frac{G_0 \tilde{\omega}}{2})(G_0\tilde{\omega}+r^2) r^3 \sin^7 \varphi+2 b^2 \cos^2\varphi (b^2-\frac{5 \beta^2 \cos^2\varphi r^2}{2})(G_0\tilde{\omega}+r^2)^2 r^2 \sin^6 \varphi\\\notag &+& 8 b^3 \cos^4\varphi \beta^2 (r^2+\frac{G_0 \tilde{\omega}}{2})(G_0\tilde{\omega}+r^2) r^3 \sin^5 \varphi-3 \beta^6 b^6 G_0 \tilde{\omega} \cos^6 \varphi (r^2-\frac{G_0 \tilde{\omega}}{3})\\\notag &+&\cos^4\varphi \beta^2 \left[(\beta^2-\frac{11}{2})r^4+\frac{41}{3} G_0 \tilde{\omega} r^2 (\beta^2-\frac{33}{41})+2G_0^2 \tilde{\omega}^2(\beta^2-\frac{11}{4})\right]b^4 r^2 \sin^4 \varphi\\\notag &+&5 b^4 \cos^6\varphi (G_0\tilde{\omega}+r^2)^2 r^2 \sin^2 \varphi- 2 b^5 \cos^6\varphi (G_0\tilde{\omega}+r^2) (3G_0\tilde{\omega}+r^2)r \sin \varphi \\ &-& b^5 \cos^4\varphi (G_0\tilde{\omega}+r^2) (3G_0\tilde{\omega}+r^2)r \sin^3 \varphi\Big]. \end{eqnarray} Going back to Eq.
(\ref{alp}) and substituting the quantum improved Gaussian optical curvature, the deflection angle can be calculated from the integral \begin{eqnarray} \hat{\alpha} &\simeq & 4 \pi \mu-\frac{1}{1-4 \mu}\int\limits_{0}^{\pi}\int\limits_{\frac{b}{\sin \varphi}}^{\infty}\left(-\frac{2 M G_0}{r^3 }+\frac{12 G_0^2 M \tilde{\omega}}{r^5 }+\frac{6 a M G_0 f(r,\varphi,\tilde{\omega})}{r^9}\right)\,\sqrt{\det \bar{g}}\,\mathrm{d}r\,\mathrm{d}\varphi. \end{eqnarray} Integrating and using a Taylor series expansion in $\eta$ and $\hbar$, the deflection angle is found to be \begin{equation} \label{alp2} \hat{\alpha}\simeq 4 \pi \mu +\frac{4 G_0 M}{b}+\frac{16 M G_0\mu}{b}-\frac{1336 G_0^2 M\hbar }{45 \pi b^3 }-\frac{5344 G_0^2 M \hbar \mu}{45 \pi b^3}\pm \frac{4 G_0 M a}{b^2}. \end{equation} We note that in the last equation the positive sign holds for retrograde and the negative sign for prograde light rays. Furthermore, we recover the Kerr black hole as a limiting case of our result by setting $\mu=0$, while by setting $M=0$ we find the cosmic string deflection angle. \bigskip \section{Geodesics equations} Let us check our result for the deflection angle found in the last section by studying the geodesic equations. Applying the variational principle \begin{equation} \delta \int \mathcal{L} \,\mathrm{d}s=0, \end{equation} to the quantum improved Kerr spacetime metric with a cosmic string, one finds the following Lagrangian \begin{eqnarray}\label{geo1} \mathcal{L}&=& - \frac{\Delta_{\tilde{\omega}}}{2\Sigma}\left(\dot{t}-a \sin^2 \theta \dot{\varphi} \right)^2+\frac{\Sigma}{2 \Delta_{\tilde{\omega}}}\dot{r}^2+\frac{\Sigma \dot{\theta}^2}{2}+\frac{\sin^2 \theta}{2 \Sigma}\left[a \dot{t}-(r^2(s)+a^2)\dot{\varphi} \right]^2, \end{eqnarray} where \begin{eqnarray} \Delta_{\tilde{\omega}}&=& r^2(s)+a^2-2G(r(s)) M r(s),\\ \Sigma &=& r^2(s)+a^2 \cos^2 \theta,\\ G(r)&=& \frac{G_{0}r^{3}(s)}{r^{3}(s)+\tilde{\omega} G_{0}\left[r(s)+\gamma G_{0}M\right]}.
\end{eqnarray} As before, we shall be interested in studying the deflection of planar photons by considering $\theta =\pi/2$. Let us consider the following two constants of motion $l$ and $\gamma$, defined as \cite{Boyer} \begin{eqnarray}\label{44}\notag p_{\varphi}&=&\frac{\partial \mathcal{L}}{\partial \dot{\varphi}}={\frac { \left( -a\beta\,\dot{\varphi}+{\it \dot{t}} \right) a\beta}{r^2(s) } \left[ r^2(s)+{a}^{2}-{\frac {2{\it G_0}\, r^4(s) M}{ r^3(s)+\tilde{\omega}\,{\it G_0}\, \left( r \left( s \right) +\gamma\,{\it G_0}\,M \right) }} \right] }\\ &-& {\frac {\beta \left[ a{\it \dot{t}}-\beta\, \dot{\varphi}\left( {a}^{2}+ r^2(s) \right) \right] \left[ {a}^{2}+r^2(s) \right] }{ r^2(s)}}=l,\\\notag p_{t}&=&\frac{\partial \mathcal{L}}{\partial \dot{t}}={\frac {-a\beta\,\dot{\varphi}+{\it \dot{t}}}{ r^2(s)} \left[ {a}^{2}+ r^2(s)-{ \frac {2{\it G_0}\, r^4(s) M}{ r^3(s)+\tilde{\omega}\,{\it G_0}\, \left( r \left( s \right) +\gamma\,{\it G_0}\,M \right) }} \right] }\\ &-& {\frac { \left[ a{\it \dot{t}}-\beta\,\dot{\varphi} \left( {a}^{2}+ r^2(s) \right) \right] a}{ r^2(s)}}=-\gamma. \end{eqnarray} Hereafter it is convenient to introduce a new variable $u$, related to the radial coordinate by the coordinate transformation $r=1/u(\varphi)$. This gives the identity \begin{equation}\label{iden} \frac{\dot{r}}{\dot{\varphi}}=\frac{\mathrm{d}r}{\mathrm{d}\varphi}=-\frac{1}{u^2}\frac{\mathrm{d}u}{\mathrm{d}\varphi}. \end{equation} Without loss of generality one can choose $\gamma=1$ \cite{Boyer}. Using Eqs. (\ref{geo1})-(\ref{iden}), together with the fact that $u=u_{max}=1/r_{min}=1/b$ \cite{lorio}, leads to $l=\beta b$.
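The coordinate-change identity (\ref{iden}) can be verified symbolically; the following is an illustrative sketch (Python with sympy, not part of the derivation):

```python
import sympy as sp

# Coordinate transformation r = 1/u(varphi), as in Eq. (iden)
phi = sp.symbols('varphi')
u = sp.Function('u')(phi)
r = 1 / u

# dr/dvarphi should equal -(1/u^2) du/dvarphi
lhs = sp.diff(r, phi)
rhs = -sp.diff(u, phi) / u**2
assert sp.simplify(lhs - rhs) == 0
```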
Then one finds the following differential equation, \begin{eqnarray}\label{33} \frac{a^2 \beta^2}{2}-\frac{\beta^2 \Xi^2(u)}{2\zeta^2(u)}+\frac{G_0 Ma^2 \beta^2}{u^2 \Upsilon(u)}-\frac{2 G_0 Ma \beta^2 \Xi(u)}{u^2 \zeta(u) \Upsilon(u)}+\frac{G_0 M \Xi^2(u)\beta^2}{u^2 \zeta^2(u) \Upsilon(u)}+\frac{\left(\frac{\mathrm{d}u}{\mathrm{d}\varphi}\right)^2}{2 u^6 \left(u^2+\frac{1}{u^6}-\frac{2 G_0M}{u^4 \Upsilon(u)}\right)}+\frac{\beta^2}{2 u^2}=0 \end{eqnarray} where \begin{eqnarray} \Xi(u) &=& \frac{\beta \left[G_0^2 a^2 \gamma M \tilde{\omega}u^5 +G_0^2 \gamma M \tilde{\omega} u^3+G_0 a^2 \tilde{\omega} u^4 +2 G_0 Ma^2 u^3 -2G_0 Ma u^3 b +G_0 \tilde{\omega}u^3+a^2 u^2 +1 \right]}{u^5},\\ \zeta(u) &=& \frac{\beta \left[bMu^3 G_0 \gamma \tilde{\omega} +b u^2 G_0 \tilde{\omega}+2 MG_0 (a-b)u+b\right]}{u^3},\\ \Upsilon(u) &=& G_0^2 \tilde{\omega}\gamma M+\frac{\tilde{\omega}G_0}{u}+\frac{1}{u^3}. \end{eqnarray} One can use a perturbation method to solve Eq. \eqref{33}. At leading order, the solution of the differential equation \eqref{33} can be expressed as \cite{Boyer} \begin{equation} \Delta \varphi =\pi+\hat{\alpha}, \end{equation} where $\hat{\alpha}$ is the deflection angle. The deflection angle can be calculated as (see for example \cite{weinberg}) \begin{equation} \hat{\alpha}=2|\varphi(u_{max})-\varphi_{\infty}|-\pi, \end{equation} where \begin{equation} \varphi= \int_0 ^{1/b} A(u) \mathrm{d}u. \end{equation} Note that in the last equation $A(u)$ is obtained by a Taylor series expansion in $\mu$, $M$, and $\hbar$, and is given by \begin{eqnarray} A(u)=\frac{\left(4\mu+1\right)\left[ -167\,\hbar \,G_0^2 M{b}^{3}{u}^{5}+30\,{\it G_0}\,M\pi \,{b}^{ 3}{u}^{3}+334\,\hbar \,{{\it G_0}}^{2}Ma{u}^{3}+30\,{b}^{3}{u}^{2}\pi - 60\,Mau{\it G_0}\,\pi -30\,b\pi \right]}{30\,\sqrt {-{b}^{2}{u}^{2}+1} \left( {b}^{2}{u}^{2}-1 \right) \pi }.
\end{eqnarray} The deflection angle in the weak deflection limit approximation is found to be \begin{equation} \hat{\alpha}\simeq 4 \pi \mu +\frac{4 G_0 M}{b}+\frac{16 M G_0\mu}{b}-\frac{1336 G_0^2 M\hbar }{45 \pi b^3 }-\frac{5344 G_0^2 M \hbar \mu}{45 \pi b^3}\pm \frac{4 G_0 M a}{b^2}. \end{equation} This result is in complete agreement with Eq. (\ref{alp2}) found by the GB method. As expected, this result clearly shows that the standard Kerr solution \cite{Boyer} is modified due to the conical topology. It is interesting to note that the two approaches do not fully agree at third order, for mixed terms such as $Ma\mu$ or $Ma\hbar$. In particular, using the geodesic approach we find the following result up to third order terms \begin{equation} \hat{\alpha}_{geodesics}\simeq 4 \pi \mu +\frac{4 G_0 M}{b}+\frac{16 M G_0\mu}{b}-\frac{1336 G_0^2 M\hbar }{45 \pi b^3 }-\frac{5344 G_0^2 M \hbar \mu}{45 \pi b^3}\pm \frac{4 G_0 M a}{b^2}\pm \frac{668 G_0^2 a M\hbar}{15 \pi b^4}\pm \frac{16 M a G_0\mu}{b^2}, \end{equation} whereas the GB method gives \begin{equation} \hat{\alpha}_{GB}\simeq 4 \pi \mu +\frac{4 G_0 M}{b}+\frac{16 M G_0\mu}{b}-\frac{1336 G_0^2 M\hbar }{45 \pi b^3 }-\frac{5344 G_0^2 M \hbar \mu}{45 \pi b^3}\pm \frac{4 G_0 M a}{b^2}\pm \frac{1336 G_0^2 a M\hbar}{15 \pi b^4}\pm \frac{48 G_0 M a \mu}{5 b^2}. \end{equation} As one can see, the agreement between the GB method and the geodesics approach breaks down for the last two terms, which can also be viewed as second order terms in the mass, i.e., $\mathcal{O}(M^2)$, since we assume $M\sim a$. However, as noted in Ref. \cite{kimet3}, this is not a surprising result, since one needs to make a suitable choice of the vector field used in constructing the osculating Riemannian geometry. \bigskip \end{widetext} \section{Conclusion} In this paper, we have extended the GB method to non-asymptotically flat spacetimes, such as the Kerr black hole spacetime pierced by a static cosmic string.
We have computed the quantum improved deflection angle of light in the weak approximation limit in the spacetime of a quantum improved Kerr black hole with a cosmic string. In the first case, we used the GB method, introducing the quantum improved Kerr-Randers optical metric with a cosmic string and the corresponding osculating Riemannian metrics. In the second case, we used the geodesic equations and found the same result in leading order terms. We have also shown that the deflection angle increases due to the presence of the cosmic string. However, the main advantage of the GB method over the standard method used in computing light deflection is of a conceptual nature. As we have shown, one can find an exact result at leading order in the quantum effects by integrating over a domain \textit{outside} of the light ray, which is a remarkable result. Finally, it will be interesting to see whether one can find an exact result for the deflection angle, up to third order terms, using the GB method. In principle, such a thing is possible, say, by choosing an appropriate vector field for constructing the osculating Riemannian geometry.\\ \bigskip \section*{Acknowledgements} This work was supported by the Chilean FONDECYT Grant No. 3170035 (A\"{O}).
\section{Introduction} We study the problem of information elicitation without verification (``peer prediction''). This challenging problem arises across a diverse range of multi-agent systems, in which participants are asked to respond to an information task, and where there is no external input available against which to score reports. Examples include completing surveys about the features of new products, providing feedback on the quality of food or the ambience in a restaurant, sharing emotions when watching video content, and peer assessment of assignments in Massive Open Online Courses (MOOCs). The challenge is to provide incentives for participants to choose to invest effort in forming an opinion (a ``signal'') about a task, and to make truthful reports about their signals. In the absence of inputs other than the reports of participants, peer-prediction mechanisms make payments to one agent based on the reports of others, and seek to align incentives by leveraging correlation between reports (i.e., peers are rewarded for making reports that are, in some sense, predictive of the reports of others). Some domains have binary signals, for example ``was a restaurant noisy or not?'', and ``is an image violent or not?''. We are also interested in domains with non-binary signals, for example: \begin{itemize} \item {\em Image labeling.} Signals could correspond to answers to questions such as ``Is the animal in the picture a dog, a cat or a beaver'', or ``Is the emotion expressed joyful, happy, sad or angry.'' These signals are categorical, potentially with some structure: `joyful' is closer to `happy' than `sad', for example. \item {\em Counting objects.} There could be many possible signals, representing answers to questions such as (``are there 0, 1-5, 6-10, 11-100, or $>$100 people in the picture''?). The signals are ordered. \item {\em Peer assessment in MOOCs}. 
Multiple students evaluate their peers' submissions to an open-response question using a grading rubric. For example, an essay may be evaluated for clarity, reasoning, and relevance, with the grade for reasoning ranging from 1 (``wild flights of fancy throughout''), through 3 (``each argument is well motivated and logically defended.'') \end{itemize} We do not mean to take an absolute position that external ``ground truth'' inputs are never available in these applications. We do however believe it important to understand the extent to which such systems can operate using only participant reports. The design of peer-prediction mechanisms assumes the ability to make payments to agents, and that an agent's utility is linear-increasing with payment and does not depend on signal reports other than through payment. Peer prediction precludes, for example, that an agent may prefer to misreport the quality of a restaurant because she is interested in driving more business to the restaurant.\footnote{The payments need not be monetary; one could for example issue points to agents, these points conveying some value (e.g., redeemable for awards, or conveying status). On a MOOC platform, the payments could correspond to scores assigned as part of a student's overall grade in the class. What is needed is a linear relationship between payment (of whatever form) and utility, and expected-utility maximizers.} The challenge of peer prediction is timely. For example, Google launched {\em Google Local Guides} in November 2015. This provides participants with points for contributing star ratings and descriptions about locations. The current design rewards quantity but not quality and it will be interesting to see whether this attracts useful reports. After 200 contributions, participants receive a 1 TB upgrade of Drive storage (currently valued at \$9.99/month.) 
We are interested in {\em minimal} peer-prediction mechanisms, which require only signal reports from participants.\footnote{While more complicated designs have been proposed (e.g.~\cite{Prelec2004,RBTS-Witkowski2012,radanovic-subjective-aaai15}), in which participants are also asked to report their beliefs about the signals that others will report, we believe that peer-prediction mechanisms that require only signal reports are more likely to be adopted in practice. It is cumbersome to design user interfaces for reporting beliefs, and people are notoriously bad at reasoning about probabilities.} A basic desirable property is that truthful reporting of signals is a strict, correlated equilibrium of the game induced by the peer-prediction mechanism.\footnote{It has been more common to refer to the equilibrium concept in peer-prediction as a Bayes-Nash equilibrium. But as pointed out by Jens Witkowski, there is no agent-specific, private information about payoffs (utility is linear in payment). In a correlated equilibrium, agents get signals and a strategy is a mapping from signals to actions. An action is a best response for a given signal if, conditioned on the signal, it maximizes an agent's expected utility. This equilibrium concept fits peer prediction: each agent receives a signal from the environment, signals are correlated, and strategies map signals into reported signals.} For many years, an Achilles heel of peer prediction has been the existence of additional equilibria that payoff-dominate truthful behavior and reveal no useful information~\cite{Jurca2009,DasguptaGhosh13,Radanovic-sensing2015}. An uninformative equilibrium is one in which reports do not depend on the signals received by agents. Indeed, the equilibria of peer-prediction mechanisms must always include an uninformative, mixed Nash equilibrium~\cite{waggoner14}.
Moreover, with binary signals, a single task, and two agents, \citeN{jurca-faltings2005} show that an incentive-compatible, minimal peer-prediction mechanism will always have an uninformative equilibrium with a higher payoff than truthful reporting. Because of this, a valid concern has been that peer prediction could have the unintended effect that agents who would otherwise be truthful now adopt strategic misreporting behavior in order to maximize their payments. In this light, a result due to~\citeN{DasguptaGhosh13} is of interest: if agents are each asked to respond to multiple, independent tasks (with some overlap between assigned tasks), then in the case of binary signals there is a mechanism that addresses the problem of multiple equilibria. The binary-signal, multi-task mechanism is {\em strongly truthful}, meaning that truthful reporting yields a higher expected payment than any other strategy (and is tied in payoff only with strategies that report permutations of signals, which in the binary case means $1\rightarrow 2, 2\rightarrow 1$). We introduce a new, slightly weaker incentive property of {\em informed truthfulness}: no strategy profile provides more expected payment than truthful reporting, and the truthful equilibrium is strictly better than any uninformed strategy (where agent reports are signal-independent, and avoid the effort of obtaining a signal). Informed truthfulness is responsive to what we consider to be the two main concerns of practical peer prediction design: \smallskip (a) Agents should have strict incentives to exert effort toward acquiring an informative signal, and (b) Agents should have no incentive to misreport this information. \smallskip Relative to strong truthfulness, the relaxation to informed truthfulness is that there may be other informed strategies that match the expected payment of truthful reporting. 
Even so, informed truthfulness retains the property of strong truthfulness that there can be no other behavior strictly better than truthful reporting. The binary-signal, multi-task mechanism of Dasgupta and Ghosh is constructed from the simple building block of a \emph{score matrix}, with a score of `1' for agreement and `0' otherwise. Some tasks are designated as bonus tasks, without the knowledge of participants. The payment on a bonus task is 1 in the case of agreement with another agent. There is also a penalty of -1 if the agent's report on another (non-bonus) task agrees with the report of another agent on a third (non-bonus) task. In this way, the mechanism rewards agents when their reports on a shared (bonus) task agree more than would be expected based on their overall report frequencies. Dasgupta and Ghosh remark that extending beyond two signals ``is one of the most immediate and challenging directions for further work.'' Our main results are as follows: \begin{itemize} \item We study the {\em multi-signal extension of the Dasgupta-Ghosh mechanism} ($\textsc{MSDG}$), and show that $\textsc{MSDG}$ is strongly truthful for domains that are {\em categorical}, where receiving one signal reduces an agent's belief that other agents will receive any other signal. We also show that (i) this categorical condition is tight for $\textsc{MSDG}$ for agent-symmetric signal distributions, and (ii) the peer grade distributions on a large MOOC platform do not satisfy the categorical property. \item We generalize $\textsc{MSDG}$, obtaining the {\em Correlated Agreement (CA) mechanism}. This provides informed truthfulness in general domains, including domains in which the $\textsc{MSDG}$ mechanism is neither informed- nor strongly-truthful. The $\textsc{CA}$ mechanism requires the designer to know the correlation structure of signals, but not the full signal distribution.
We further characterize domains where the $\textsc{CA}$ mechanism is strongly truthful, and show that no mechanism with similar structure and information requirements can do better. \item For settings with a large number of tasks, we present a \emph{detail-free $\textsc{CA}$ mechanism}, in which the designer estimates the statistics of the correlation structure from agent reports. This mechanism is informed truthful in the limit where the number of tasks is large (handling the concern that reports affect estimation and thus scores), and we provide a convergence rate analysis for $\epsilon$-informed truthfulness with high probability. \end{itemize} We believe that these are the first results on strong or informed truthfulness in domains with non-binary signals without requiring a large population for their incentive properties (compare with~\cite{Radanovic-sensing2015,Kamble2015,RFJ2016}). The robust incentives of the multi-task $\textsc{MSDG}$ and $\textsc{CA}$ mechanisms hold for as few as two agents and three tasks, whereas these previous papers crucially rely on being able to learn statistics of the distribution from multiple reports. Even if given the true underlying signal distribution, the mechanisms in these earlier papers would still need to use a large population, with the payment rule based on statistics estimated from reports, as this is critical for incentive alignment in these papers. Our analysis framework also provides a dramatic simplification of the techniques used by \citeN{DasguptaGhosh13}. In a recent working paper,~\citeN{kong2016} show that a number of peer prediction mechanisms that provide variations on strong-truthfulness can be derived within a single information-theoretic framework, with scores determined based on the information they provide relative to reports in the population (leveraging a measure of mutual information between the joint distribution on signal reports and the product of marginal distributions on signal reports).
Earlier mechanisms correspond to particular information measures. Their results use different technical tools, and also include a different, multi-signal generalization of \citeN{DasguptaGhosh13} that is independent of our results, outside of the family of mechanisms that we consider in Section~\ref{sec:52a}, and provides strong truthfulness in the limit of a large number of tasks.\footnote{While they do not state or show that the mechanism does not need a large number of tasks in any special case, the techniques employed can also be used to design a mechanism that is a linear transform of our $\textsc{CA}$ mechanism, and thus informed truthful with a known signal correlation structure and a finite number of tasks (personal communication).} \subsection{Related Work} The theory of peer prediction has developed rapidly in recent years. We focus on minimal peer-prediction mechanisms. Beginning with the seminal work of \citeN{MRZ2005}, a sequence of results relax knowledge requirements on the part of the designer~\cite{RBTS-Witkowski2012,jurca2011}, or generalize, e.g. to handle continuous signal domains~\cite{radanovic-faltings14}. Simple output-agreement, where a positive payment is received if and only if two agents make the same report (as used in the {\em ESP game}~\cite{vonAhn2004}), has also received some theoretical attention~\cite{waggoner14,Jain_acm_trans_econ_compu}. Early peer prediction mechanisms had uninformative equilibria that gave better payoff than honesty. \citeN{Jurca2009} show how to remove uninformative, pure-strategy Nash equilibria through a clever three-peer design. \citeN{ksl} show how to design strong truthful, minimal, single-task mechanisms with a known model when there are reports from a large number of agents. In addition to \citeN{DasguptaGhosh13} and~\citeN{kong2016}, several recent papers have tackled the problem of uninformative equilibria. 
\citeN{Radanovic-sensing2015} establish strong truthfulness amongst symmetric strategies in a large-market limit where both the number of tasks and the number of agents assigned to each task grow without bound. \citeN{RFJ2016} provide complementary theoretical results, giving a mechanism in which truthfulness is the equilibrium with highest payoff, based on a population that is large enough to estimate statistical properties of the report distribution. They require a self-predicting condition that limits the correlation between differing signals. Each agent need only be assigned a single task. \citeN{Kamble2015} describe a mechanism where truthfulness has higher payoff than uninformed strategies, providing an asymptotic analysis as the number of tasks grows without bound. The use of learning is crucial in these papers. In particular, they must use statistics estimated from reports to design the payment rule in order to align incentives. This is a key distinction from our work.\footnote{\citeN{cai-stat-estimate15} work in a different model, showing how to achieve optimal statistical estimation from data provided by self-interested participants. These authors do not consider misreports and their mechanism is not informed- (or strongly-) truthful and is vulnerable to collusion. Their model is interesting, though, in that it adopts a richer, non-binary effort model.} \citeN{Witkowski2013} first introduced the combination of learning and peer prediction, coupling the estimation of the signal prior together with the shadowing mechanism.
Although there is disagreement in the experimental literature about whether equilibrium selection is a problem in practice, there is compelling evidence that it matters~\cite{gao-ec14-trick-or-treat}; see~\citeN{faltings-hcomp14} for a study where uninformed equilibria did not appear to be a problem.\footnote{One difference is that this later study was in a many-signal domain, making it harder for agents to coordinate on an uninformative strategy.} \citeN{shnayder-ijcai16} use replicator dynamics as a model of agent learning to argue that equilibrium selection is indeed important, and that truthfulness is significantly more stable under mechanisms that ensure it has higher payoff than other strategies. Orthogonal to concerns about equilibrium selection, \citeN{Gao2016} point out a modeling limitation---when agents can coordinate on some other, unintended source of signal, then this strategy may be better than truthful reporting. They suggest randomly checking a fraction of reports against ground truth as an alternative way to encourage effort. We discuss this in Section~\ref{subsec:signal-models}. Turning to online peer assessment for MOOCs, research has primarily focused on evaluating students' skill at assessment and compensating for grader bias~\cite{Piech2013}, as well as helping students self-adjust for bias and provide better feedback~\cite{Kulkarni2013}. Other studies, such as the {\em Mechanical TA}~\cite{Wright-KLB2015}, focus on reducing TA workload in high-stakes peer grading. A recent paper~\cite{wu-las2015} outlines an approach to peer assessment that relies on students flagging overly harsh feedback for instructor review. We are not aware of any systematic studies of peer prediction in the context of MOOCs, though \citeN{RFJ2016} present experimental results from an on-campus experiment. \section{Model} \label{sec:model} We consider two agents, 1 and 2, which are perhaps members of a larger population.
Let $k\in M=\{1,\ldots,m\}$ index a task from a universe of $m\geq 3$ tasks to which one or both of these agents are assigned, with both agents assigned to at least one task. Each agent receives a signal when investing effort on an assigned task. The effort model that we adopt is binary: either an agent invests no effort and does not receive an informed signal, or an agent invests effort and incurs a cost and receives a signal. Let $S_1,S_2$ denote random variables for the signals to agents 1 and 2 on some task. The signals have a finite domain, with $i,j\in \{1,\ldots,n\}$ indexing a realized signal to agents 1 and 2, respectively. Each task is {\em ex ante} identical, meaning that pairs of signals are i.i.d. for each task. Let $P(S_1{=}i,S_2{=}j)$ denote the joint probability distribution on signals, with marginal probabilities $P(S_1{=}i)$ and $P(S_2{=}j)$ on the signals of agents 1 and 2, respectively. We assume exchangeability, so that the identity of agents does not matter in defining the signal distribution. The signal distribution is common knowledge to agents.\footnote{We assume common knowledge and symmetric signal models for simplicity of exposition. Our mechanisms do not require full information about the signal distribution, only the correlation structure of signals, and can tolerate some user heterogeneity, as described further in Section~\ref{sec:extensions}.} We assume that the signal distribution satisfies \emph{stochastic relevance}, so that for all $s'\neq s''$, there exists at least one signal $s$ such that \begin{align} P(S_1{=}s|S_2{=}s') \ne P(S_1{=}s|S_2{=}s''), \end{align} and symmetrically, for agent 1's signal affecting the posterior on agent 2's. If two signals are not stochastically relevant, they can be combined into one signal. Our constructions and analysis will make heavy use of the following matrix, which encodes the correlation structure of signals.
\begin{definition}[Delta matrix] The {\em Delta matrix} $\Delta$ is an $n\times n$ matrix, with entry $(i,j)$ defined as \begin{align} \Delta_{ij} &= P(S_1{=}i, S_2{=}j) - P(S_1{=}i) P(S_2{=}j). \end{align} \end{definition} The Delta matrix describes the correlation (positive or negative) between different realized signal values. For example, if $\Delta_{1,2}=P(S_1{=}1,S_2{=}2)-P(S_1{=}1)P(S_2{=}2)=P(S_1{=}1)(P(S_2{=}2|S_1{=}1)-P(S_2{=}2))>0$, then $P(S_2{=}2|S_1{=}1)>P(S_2{=}2)$, so signal 2 is positively correlated with signal 1 (and by exchangeability, similarly for the effect of 1 on 2). If a particular signal value increases the probability that the other agent will receive the same signal then $P(S_1{=}i,S_2{=}i)>P(S_1{=}i)P(S_2{=}i)$, and if this holds for all signals the Delta matrix has a positive diagonal. Because the entries in a row $i$ of joint distribution $P(S_1{=}i,S_2{=}j)$ and a row of product distribution $P(S_1{=}i)P(S_2{=}j)$ both sum to $P(S_1{=}i)$, each row in the $\Delta$ matrix sums to $0$ as the difference of the two. The same holds for columns. The $\textsc{CA}$ mechanism will depend on the sign structure of the $\Delta$ matrix, without knowledge of the specific values. We will use a sign operator $\operatorname{Sign}(x)$, with value 1 if $x>0$, 0 otherwise.\footnote{Note that this differs from the standard $\operatorname{sign}$ operator, which has value -1 for negative inputs.} \begin{example} If the signal distribution is \begin{align*} P(S_1,S_2) &= \begin{bmatrix} 0.4 & 0.15 \\ 0.15 & 0.3 \end{bmatrix} \end{align*} with marginal distribution $P(S) = [0.55; 0.45]$, we have \[ \Delta = \begin{bmatrix} 0.4 & 0.15 \\ 0.15 & 0.3 \end{bmatrix} - \begin{bmatrix} 0.55 \\ 0.45 \end{bmatrix}\cdot \begin{bmatrix} 0.55 & 0.45 \end{bmatrix} \approx \begin{bmatrix} 0.1 & -0.1 \\ -0.1 & 0.1 \end{bmatrix}, \text{ and } \operatorname{Sign}(\Delta) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
\] \end{example} An agent's {\em strategy} defines, for every signal it may receive and each task it is assigned, the signal it will report. We allow for mixed strategies, so that an agent's strategy defines a distribution over signals. Let $R_1$ and $R_2$ denote random variables for the {\em reports} by agents 1 and 2, respectively, on some task. Let matrices $F$ and $G$ denote the mixed strategies of agents 1 and 2, respectively, with $F_{ir} = P(R_1{=}r|S_1{=}i)$ and $G_{jr} = P(R_2{=}r|S_2{=}j)$ to denote the probability of making report $r$ given signal $i$ is observed (signal $j$ for agent 2). Let $r^k_1\in \{1,\ldots,n\}$ and $r^k_2\in \{1,\ldots,n\}$ refer to the realized report by agent 1 and 2, respectively, on task $k$ (if assigned). \begin{definition}[Permutation strategy] A {\em permutation strategy} is a deterministic strategy in which an agent adopts a bijection between signals and reports, that is, $F$ (or $G$ for agent 2) is a permutation matrix. \end{definition} \begin{definition}[Informed and uninformed strategies] An {\em informed strategy} has $F_{ir} \ne F_{jr}$ for some $i \ne j$, some $r\in \{1,\ldots,n\}$ (and similarly for $G$ for agent 2). An {\em uninformed strategy} has the same report distribution for all signals. \end{definition} Permutation strategies are merely relabelings of the signals; in particular, truthfulness (denoted $\mathds{I}$ below) is a permutation strategy. Note also that by definition, deterministic uninformed strategies are those that give the same report for all signals. Each agent is assigned to two or more tasks, and the agents overlap on at least one task. Let $M_b\subseteq M$ denote a non-empty set of ``bonus tasks'', a subset of the tasks to which both agents are assigned. Let $M_1\subseteq M\setminus M_b$ and $M_2\subseteq M\setminus M_b$, with $M_1\cap M_2=\emptyset$ denote non-empty sets of tasks to which agents 1 and 2 are assigned, respectively.
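As a numerical illustration of the Delta matrix example above, the following sketch (Python with numpy; the joint distribution is the one from the example) computes $\Delta$ and $\operatorname{Sign}(\Delta)$:

```python
import numpy as np

# Joint signal distribution P(S1=i, S2=j) from the example
P = np.array([[0.40, 0.15],
              [0.15, 0.30]])
p1 = P.sum(axis=1)                # marginal of S1: [0.55, 0.45]
p2 = P.sum(axis=0)                # marginal of S2: [0.55, 0.45]
Delta = P - np.outer(p1, p2)      # Delta_ij = P(i,j) - P(i)P(j)
Sign = (Delta > 0).astype(int)    # Sign(x) = 1 if x > 0, else 0

# Each row and column of Delta sums to zero, as noted above
assert np.allclose(Delta.sum(axis=0), 0)
assert np.allclose(Delta.sum(axis=1), 0)
```

Here $\Delta \approx [[0.0975, -0.0975], [-0.0975, 0.0975]]$ and $\operatorname{Sign}(\Delta)$ is the identity, matching the example; the sign structure is the only part of the model the $\textsc{CA}$ mechanism uses.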
These will form the ``penalty tasks.'' For example, if both agents are assigned to each of three tasks, $A, B$ and $C$, then we could choose $M_b=\{A\}$, $M_1=\{B\}$ and $M_2=\{C\}$. We assume that tasks are {\em a priori} identical, so that there is nothing to distinguish two tasks other than their signals. In particular, agents have no information about which tasks are shared, or which are designated bonus or penalty. This can be achieved by choosing $M_b, M_1$ and $M_2$ randomly after task assignment. This can also be motivated in largely anonymous settings, such as peer assessment and crowdsourcing. A {\em multi-task peer-prediction mechanism} defines a total payment to each agent based on the reports made across all tasks. The mechanisms that we study assign a total payment to an agent based on the sum of payments for each bonus task, but where the payment for a bonus task is adjusted downwards by the consideration of its report on a penalty task and that of another agent on a different penalty task. For the mechanisms we consider in this paper, it is without loss of generality for each agent to adopt a uniform strategy across each assigned task. Changing a strategy from task to task is equivalent in terms of expected payment to adopting a linear combination over these strategies, given that tasks are presented in a random order, and given that tasks are equivalent, conditioned on signal. This result relies on the random order of tasks as presented to each agent, preventing coordination. Tasks will be indexed as $1, \ldots, k, \ldots, m$ from the first agent's point of view. The second agent will see them reshuffled using a permutation $\pi$ chosen uniformly at random: $\pi(1), \ldots, \pi(m)$. Let $\vec{F}$ be the first agent's strategy vector, with $F_k$ the first agent's strategy on task $k$. Fix the second agent's vector of strategies $\vec{G}$. Let $J_{ij}=P(S_1{=}i,S_2{=}j)$ denote the joint signal distribution. 
Then, for a broad class of mechanisms, it is without loss of generality to focus on agents having a single per-task strategy applied to all tasks. Let $K$, $K'$, $K''$ be random variables corresponding to a task id, each uniformly distributed over $1,\ldots,m$. We say that $\mathcal{M}$ is a \emph{linear} mechanism if its expected score function is a linear function of $\mathrm{Pr}(R^K_1=r_1, R^K_2=r_2)$ and $\mathrm{Pr}(R^{K'}_1=r_1,R^{K''}_2=r_2|K'\ne K'')$, for all report pairs $r_1,r_2$. For example, the $\textsc{MSDG}$ mechanism we describe later has expected score \begin{align} \label{eq:2} \mathrm{Pr}(R^K_1&=R^K_2) - \mathrm{Pr}(R^{K'}_1=R^{K''}_2|K'\ne K'') \\ &= \sum_{r=1}^n \mathrm{Pr}(R^K_1=r,R^K_2=r) - \mathrm{Pr}(R^{K'}_1=r,R^{K''}_2=r|K' \ne K''), \end{align} which fits this condition. The multi-task mechanism we define below is also linear. The expectation is with respect to the signal model, agent strategies, the random task order, and any randomization in the scoring mechanism itself. \begin{lemma} Let $\mathcal{M}$ be a linear mechanism. Let $\vec{F}$ be a vector of strategies. Then for any $\vec{G}$, $\bar{F}=\text{mean}(\vec{F})$ will have the same expected score as $\vec{F}$. \end{lemma} \begin{proof} We prove equivalence of the expected value of $\mathrm{Pr}(R^K_1=r_1, R^K_2=r_2)$ and $\mathrm{Pr}(R^{K'}_1=r_1,R^{K''}_2=r_2|K' \ne K'')$ for all $r_1,r_2$, and equivalence for any $\mathcal{M}$ follows by linearity. Fix $r_1,r_2$. We first show that $\mathrm{Pr}(R^K_1=r_1, R^K_2=r_2)$ has the same expected value for $\vec{F}$ and $\bar{F}$. 
\begingroup \allowdisplaybreaks \begin{align} \mathrm{Pr}(R^K_1&=r_1, R^K_2=r_2) \\ &=\frac{1}{m} \sum_{k=1}^m \mathrm{Pr}(R^k_1=r_1, R^k_2=r_2) \\ &= \frac{1}{m} \sum_{k=1}^m \sum_{i=1}^n \sum_{j=1}^n \mathrm{Pr}(S^k_1=i,S^k_2=j)\mathrm{Pr}(R^k_1=r_1|S^k_1=i)\mathrm{Pr}(R^k_2=r_2|S^k_2=j)\\ &= \frac{1}{m} \sum_{k=1}^m\sum_{i=1}^n \sum_{j=1}^n J_{ij} F^k_{ir_1} G^{\pi(k)}_{jr_2},\\ &\text{Taking the expectation over $\pi$, we get} \notag \\ &= \frac{1}{m!} \sum_{\pi} \frac{1}{m}\sum_{k=1}^m \sum_{i=1}^n \sum_{j=1}^n J_{ij} F^k_{ir_1} G^{\pi(k)}_{jr_2} \\ \intertext{where the sum is over all $m!$ possible permutations of the tasks. By symmetry, we know that each element of $G$ will be used for task $k$ with equal probability $1/m$:} &= \frac{1}{m} \sum_{\ell} \frac{1}{m}\sum_{k=1}^m \sum_{i=1}^n \sum_{j=1}^n J_{ij} F^k_{ir_1} G^\ell_{jr_2} \label{key-line} \\ & \text{and reordering the sums, we get:} \notag \\ &= \frac{1}{m} \sum_{\ell}\sum_{i=1}^n \sum_{j=1}^n J_{ij} G^\ell_{jr_2} \frac{1}{m}\sum_{k=1}^m F^k_{ir_1}. \\ &\text{Using the definition of $\bar{F}$ as the mean of $\vec{F}$,} \notag \\ &= \frac{1}{m} \sum_{\ell} \sum_{i=1}^n \sum_{j=1}^n J_{ij} G^\ell_{jr_2} \bar{F}_{ir_1} \\ &= \mathrm{Pr}(R^K_1=r_1,R^K_2=r_2|\text{using $\bar{F}$ instead of $\vec{F}$}) \end{align} \endgroup The same argument works for $\mathrm{Pr}(R^{K'}_1=r_1, R^{K''}_2=r_2|K' \ne K'')$, substituting $\mathrm{Pr}(S_1=i)\mathrm{Pr}(S_2=j)$ for $J_{ij}$. The key to the proof is the random permutation of task order in line~\ref{key-line}, which prevents coordination between the per-task strategies of the two agents. \end{proof} Given this uniformity, we write $E(F,G)$ to denote the expected payment to an agent for any bonus task. The expectation is taken with respect to both the signal distribution and any randomization in agent strategies. Let $\mathds{I}$ denote the truthful reporting strategy, which corresponds to the identity matrix. 
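The lemma can also be checked numerically on a small instance. The sketch below assumes a hypothetical joint distribution $J$ and randomly drawn row-stochastic per-task strategies; it averages the joint report distribution over all task permutations and confirms it is unchanged when every per-task strategy of agent 1 is replaced by the mean strategy $\bar{F}$:

```python
import itertools
import math

import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 3                                 # signals, tasks (hypothetical sizes)

J = np.array([[0.4, 0.15],
              [0.15, 0.3]])                 # joint signal distribution J_ij

def rand_strategy():
    """A random mixed strategy: a row-stochastic n-by-n matrix."""
    M = rng.random((n, n))
    return M / M.sum(axis=1, keepdims=True)

F = [rand_strategy() for _ in range(m)]     # agent 1: one strategy per task
G = [rand_strategy() for _ in range(m)]     # agent 2: one strategy per task

def report_dist(Fs):
    """Joint report distribution on a uniformly random shared task, averaged
    over the uniform random permutation pi of agent 2's task order."""
    total = np.zeros((n, n))
    for perm in itertools.permutations(range(m)):
        for k in range(m):
            total += Fs[k].T @ J @ G[perm[k]]
    return total / (math.factorial(m) * m)

Fbar = sum(F) / m                           # the mean strategy of agent 1
per_task = report_dist(F)
averaged = report_dist([Fbar] * m)
assert np.allclose(per_task, averaged)      # the lemma, on this instance
```

This mirrors the derivation above: averaging over the permutation $\pi$ decouples agent 1's per-task strategies from agent 2's, so only the mean $\bar{F}$ matters.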
\begin{definition}[Strictly Proper] A multi-task peer-prediction mechanism is {\em proper} if and only if truthful strategies form a correlated equilibrium, so that $E(\mathds{I},\mathds{I}) \ge E(F,\mathds{I}),$ for all strategies $F\neq \mathds{I}$, and similarly when reversing the roles of agents 1 and 2. For {\em strict properness}, the inequality must be strict. \end{definition} This insists that the expected payment on a bonus task is (strictly) higher when reporting truthfully than when using any other strategy, given that the other agent is truthful. \begin{definition}[Strongly-truthful] A multi-task peer-prediction mechanism is {\em strongly-truthful} if and only if for all strategies $F, G$ we have $E(\mathds{I}, \mathds{I}) \ge E(F,G),$ and equality may only occur when $F$ and $G$ are both the same permutation strategy. \end{definition} In words, strong-truthfulness requires that both agents being truthful has strictly greater expected payment than any other strategy profile, unless both agents play the same permutation strategy, in which case equality is allowed.\footnote{Permutation strategies seem unlikely to be a practical concern, since permutation strategies require coordination and provide no benefit over being truthful.} From the definition, it follows that any strongly-truthful mechanism is strictly proper. \begin{definition}[Informed-truthful] A multi-task peer-prediction mechanism is {\em informed-truthful} if and only if for all strategies $F, G$, $E(\mathds{I}, \mathds{I}) \ge E(F,G),$ and equality may only occur when both $F$ and $G$ are informed strategies. \end{definition} In words, informed-truthfulness requires that the truthful strategy profile has strictly higher expected payment than any profile in which one or both agents play an uninformed strategy, and weakly greater expected payment than all other strategy profiles. It follows that any informed-truthful mechanism is proper. 
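For small $n$, these definitions can be checked by brute force over deterministic strategies (a restriction justified by Lemma~\ref{lem:opt-is-deterministic} below). The following sketch tests informed-truthfulness for a given Delta matrix and score matrix; the helper `is_informed_truthful` and the example values are illustrative, not part of any mechanism:

```python
import itertools

import numpy as np

def expected_score(Delta, S, f, g):
    """E(F, G) for deterministic strategies f, g (tuples: signal -> report)."""
    n = Delta.shape[0]
    return sum(Delta[i, j] * S[f[i], g[j]] for i in range(n) for j in range(n))

def is_informed_truthful(Delta, S, tol=1e-12):
    """Brute-force check of informed-truthfulness over all deterministic
    strategy profiles (constant strategies are the uninformed ones)."""
    n = Delta.shape[0]
    truthful = tuple(range(n))
    e_truth = expected_score(Delta, S, truthful, truthful)
    for f in itertools.product(range(n), repeat=n):
        for g in itertools.product(range(n), repeat=n):
            e = expected_score(Delta, S, f, g)
            if e > e_truth + tol:
                return False      # some profile beats truth-telling
            uninformed = len(set(f)) == 1 or len(set(g)) == 1
            if uninformed and e > e_truth - tol:
                return False      # an uninformed profile ties truth-telling
    return True

# Delta matrix from the earlier two-signal example; identity score matrix.
Delta = np.array([[0.0975, -0.0975],
                  [-0.0975, 0.0975]])
S = np.eye(2)
assert is_informed_truthful(Delta, S)
```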
Although weaker than strong-truthfulness, informed truthfulness is responsive to the primary, practical concern in peer-prediction applications: avoiding equilibria where agents achieve the same (or greater) payment as a truthful informed agent but without putting in the effort of forming a careful opinion about the task. For example, it would be undesirable for agents to be able to do just as well or better by reporting the same signal all the time. Once agents exert effort and observe a signal, it is reasonable to expect them to make truthful reports as long as this is an equilibrium and there is no other equilibrium with higher expected payment. Informed-truthful peer-prediction mechanisms provide this guarantee.\footnote{For simplicity of presentation, we do not model the cost of effort explicitly, but it is a straightforward extension to handle the cost of effort as suggested in previous work~\cite{DasguptaGhosh13}. In our proposed mechanisms, an agent that does not exert effort receives an expected payment of zero, while the expected payment for agents that exert effort and play the truthful equilibrium is strictly positive. With knowledge of the maximum possible cost of effort, scaling the payments appropriately incentivizes effort.} \section{Multi-Task Peer-Prediction Mechanisms} \label{sec:multi-task-peer} We define a class of multi-task peer-prediction mechanisms that is parametrized by a {\em score matrix}, $S: \{1,\ldots,n\}\times\{1,\ldots,n\}\to\mathbb{R}$, that maps a pair of reports into a score, the same score for both agents. This class of mechanisms extends the binary-signal multi-task mechanism due to~\citeN{DasguptaGhosh13} in a natural way. \begin{definition}[Multi-task mechanisms] These mechanisms are parametrized by score matrix $S$. \begin{enumerate} \item Assign each agent to two or more tasks, with at least one task in common, and at least three tasks total. 
\item Let $r^k_1$ denote the report received from agent 1 on task $k$ (and similarly for agent 2). Designate one or more tasks assigned to both agents as bonus tasks (set $M_b$). Partition the remaining tasks into penalty tasks $M_1$ and $M_2$, where $|M_1|>0$ and $|M_2|>0$, and $M_1$ tasks have a report from agent 1 and $M_2$ a report from agent 2. \item For each bonus task $k \in M_b$, pick a random $\ell \in M_1$ and $\ell' \in M_2$. The payment to both agent 1 and agent 2 for task $k$ is $S(r^k_1,r^k_2)- S(r^{\ell}_1, r^{\ell'}_2).$ \item The total payment to an agent is the sum of the payments across all bonus tasks.\footnote{A variation with the same expected payoff and the same incentive analysis is to compute the expectation of the scores on all pairs of penalty tasks, rather than sampling. We adopt the simpler design for ease of exposition. This alternate design would reduce score variance if there are many non-bonus tasks, and may be preferable in practice.} \end{enumerate} \end{definition} As discussed above, it is important that agents do not know which tasks will become bonus tasks and which become penalty tasks. 
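The scoring rule above can be sketched in a few lines. The task names, report values, and the helper `bonus_payment` below are hypothetical, chosen only to illustrate the bonus-minus-penalty structure:

```python
import random

def bonus_payment(S, r1, r2, bonus, M1, M2, rng=random):
    """Payment to each agent: for every bonus task k, the score of the report
    pair on k, minus the score of reports on randomly drawn penalty tasks."""
    total = 0.0
    for k in bonus:
        l1 = rng.choice(M1)   # penalty task contributing agent 1's report
        l2 = rng.choice(M2)   # penalty task contributing agent 2's report
        total += S[r1[k]][r2[k]] - S[r1[l1]][r2[l2]]
    return total

# Identity score matrix over binary reports, hypothetical tasks and reports.
S = [[1, 0],
     [0, 1]]
r1 = {'A': 0, 'B': 1}         # agent 1's reports per task
r2 = {'B': 1, 'C': 1}         # agent 2's reports per task
pay = bonus_payment(S, r1, r2, bonus=['B'], M1=['A'], M2=['C'])
assert pay == 1.0             # S(1,1) - S(0,1) = 1 - 0
```

With singleton penalty sets the draw is deterministic, which makes the example easy to follow; in general the penalty tasks are sampled as in step 3 above.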
The expected payment on a bonus task for strategies $F,G$ is \allowdisplaybreaks \begin{align} E(F,G) &= \sum_{i=1}^n\sum_{j=1}^n P(S_1{=}i, S_2{=}j) \sum_{r_1=1}^n \sum_{r_2=1}^n P(R_1{=}r_1|S_1{=}i)P(R_2{=}r_2|S_2{=}j) S(r_1,r_2) \notag\\ &\hspace{-2em} - \sum_{i=1}^n\sum_{j=1}^n P(S_1{=}i) P(S_2{=}j) \sum_{r_1=1}^n \sum_{r_2=1}^n P(R_1{=}r_1|S_1{=}i)P(R_2{=}r_2|S_2{=}j)S(r_1,r_2)\notag\\ &=\sum_{i=1}^n\sum_{j=1}^n \Delta_{ij} \sum_{r_1=1}^n\sum_{r_2=1}^n S(r_1,r_2)F_{ir_1}G_{jr_2}.\label{eq:expected-score-delta-gen} \end{align} The expected payment can also be written succinctly as $E(F,G) = \mathrm{tr}(F^\top \Delta G S^\top).$ In words, the expected payment on a bonus task is the sum, over all pairs of possible signals, of the product of the correlation (negative or positive) for the signal pair and the (expected) score given the signal pair and agent strategies. For intuition, note that for the identity score matrix which pays \$1 in the case of matching reports and \$0 otherwise, agents are incentivized to give matching reports for signal pairs with positive correlation and non-matching reports for signals with negative correlation. Now consider a general score matrix $S$, and suppose that all agents always report 1. They always get $S(1,1)$ and the expected value $E(F,G)$ is a multiple of the sum of entries in the $\Delta$ matrix, which is exactly zero. Because individual rows and columns of $\Delta$ also sum to zero, this also holds whenever a single agent uses an uninformed strategy. In comparison, truthful behavior provides payment $E(\mathds{I},\mathds{I})=\sum_{ij}\Delta_{ij}S(i,j)$, and will be positive if the score matrix is bigger where signals are positively correlated than where they are not. While agent strategies in our model can be randomized, the linearity of the expected payments allows us to restrict our attention to deterministic strategies. 
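Both identities in this paragraph, the trace form of the expected payment and the zero payment under an uninformed strategy, can be verified numerically. The sketch below uses a randomly generated symmetric joint distribution and an arbitrary score matrix, all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

def expected_payment(Delta, S, F, G):
    """Double-sum form of the expected bonus-task payment E(F, G)."""
    return sum(Delta[i, j] * F[i, r1] * G[j, r2] * S[r1, r2]
               for i in range(n) for j in range(n)
               for r1 in range(n) for r2 in range(n))

# Random symmetric joint distribution and its Delta matrix.
J = rng.random((n, n))
J = (J + J.T) / 2
J /= J.sum()
Delta = J - np.outer(J.sum(axis=1), J.sum(axis=0))

S = rng.random((n, n))                     # arbitrary score matrix
F = np.eye(n)[rng.integers(n, size=n)]     # random deterministic strategies
G = np.eye(n)[rng.integers(n, size=n)]

# The trace form agrees with the double sum.
assert np.isclose(expected_payment(Delta, S, F, G),
                  np.trace(F.T @ Delta @ G @ S.T))

# Any uninformed strategy (a constant report) earns exactly zero,
# because the columns of Delta sum to zero.
F_unif = np.tile(np.eye(n)[0], (n, 1))     # always report signal 0
assert np.isclose(expected_payment(Delta, S, F_unif, G), 0.0)
```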
\begin{lemma} \label{lem:opt-is-deterministic} For any world model and any score matrix $S$, there exists a deterministic, optimal joint strategy for a multi-task mechanism. \end{lemma} \begin{proof} The proof relies on solutions to convex optimization problems being extremal. The game value can be written $V = \max_F \max_G h(F,G)$, where \vspace{-0.3cm} \[h(F,G) = \sum_{i=1}^n\sum_{j=1}^n \Delta_{ij} \sum_{r_1=1}^n\sum_{r_2=1}^n S(r_1,r_2)F_{ir_1}G_{jr_2}~.\] Note that $h$ is linear in both $F$ and $G$ separately. Now letting $V(F) = \max_G h(F,G)$ be the value for the $G$ player for a fixed $F$, we have $V = \max_F V(F)$ by definition. As $h(F,\cdot)$ is linear, and the strategy space for $G$, all row-stochastic matrices, is convex, there exists a maximizer at an extreme point. These extreme points are exactly the deterministic strategies, and thus for all $F$ there exists an optimal $G=G^{\text{opt}}$ which is deterministic. Considering the maximization over $F$, we see that $V(F) = \max_G h(F,G)$ is a pointwise supremum over a set of linear functions, and is thus convex. $V$ is therefore optimized by an extreme point, some deterministic $F=F^{\text{opt}}$, and for that $F^{\text{opt}}$ there exists a corresponding deterministic $G^{\text{opt}}$ by the above. \end{proof} Lemma~\ref{lem:opt-is-deterministic} has several consequences: \begin{itemize} \item It is without loss of generality to focus on deterministic strategies when establishing strongly truthful or informed truthful properties of a mechanism. \item There is a deterministic, perhaps asymmetric, equilibrium, because the optimal solution that maximizes $E(F,G)$ is also an equilibrium. \item It is without loss of generality to consider deterministic deviations when checking whether or not truthful play is an equilibrium. \end{itemize} We will henceforth assume deterministic strategies. 
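The lemma can be illustrated numerically: the maximum of the bilinear objective $h$ over deterministic strategy profiles is never exceeded by sampled mixed (row-stochastic) strategy pairs. In the sketch below the Delta matrix is taken from the earlier two-signal example, and the score matrix is an arbitrary random choice:

```python
import itertools

import numpy as np

rng = np.random.default_rng(2)
n = 2
Delta = np.array([[0.0975, -0.0975],
                  [-0.0975, 0.0975]])   # Delta matrix from the earlier example
S = rng.random((n, n))                  # arbitrary (hypothetical) score matrix

def h(F, G):
    """Bilinear objective: expected payment for strategy matrices F, G."""
    return np.trace(F.T @ Delta @ G @ S.T)

# Maximum of h over all deterministic strategy profiles.
det = [np.eye(n)[list(f)] for f in itertools.product(range(n), repeat=n)]
best_det = max(h(F, G) for F in det for G in det)

# No sampled pair of mixed (row-stochastic) strategies does better:
# h is bilinear, so any mixed pair is a convex combination of
# deterministic values and cannot exceed the deterministic maximum.
for _ in range(1000):
    F = rng.random((n, n))
    F /= F.sum(axis=1, keepdims=True)
    G = rng.random((n, n))
    G /= G.sum(axis=1, keepdims=True)
    assert h(F, G) <= best_det + 1e-9
```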
By a slight abuse of notation, let $F_i\in \{1,\ldots,n\}$ and $G_j\in \{1,\ldots,n\}$ denote the reported signals by agent 1 for signal $i$ and agent 2 for signal $j$, respectively. The expected score then simplifies to \begin{align} E(F,G)&=\sum_{i=1}^n\sum_{j=1}^n \Delta_{ij} S(F_i,G_j). \label{eq:expected-score-delta-determ} \end{align} We can think of deterministic strategies as mapping signal pairs to reported signal pairs. Strategy profile $(F,G)$ picks out a report pair (and thus score) for each signal pair $i, j$ with its corresponding $\Delta_{ij}$. That is, strategies $F$ and $G$ map signals to reports, and the score matrix $S$ maps reports to scores, so together they map signals to scores, and we then dot those scores with $\Delta$. \section{The Dasgupta-Ghosh Mechanism} We first study the natural extension of the~\citeN{DasguptaGhosh13} mechanism from binary to multiple signals. This multi-task mechanism uses as the score matrix $S$ the identity matrix (`1' for agreement, `0' for disagreement). \begin{definition}[The Multi-Signal Dasgupta-Ghosh mechanism ($\textsc{MSDG}$)] This is a multi-task mechanism with score matrix $S(i,j)=1$ if $i=j$, 0 otherwise. \end{definition} \begin{example} Suppose agent 1 is assigned to tasks $\{A,B\}$ and agent 2 to tasks $\{B,C,D\}$, so that $M_b=\{B\}, M_1=\{A\}$ and $M_2=\{C,D\}$. Now, if the reports on $B$ are both 1, and the reports on $A, C$, and $D$ were $0,0$, and $1$, respectively, the expected payment to each agent for bonus task $B$ is $1 - (1\cdot 0.5 + 0 \cdot 0.5) = 0.5$. In contrast, if both agents use an uninformed coordinating strategy and always report 1, the expected score for both is $1 - (1\cdot 0.5 + 1 \cdot 0.5)=0$. \end{example} The expected payment in the $\textsc{MSDG}$ mechanism on a bonus task is \begin{align} E(F,G) = \sum_{i,j} \Delta_{ij} \mathds{1}_{[F_i=G_j]},\label{eq:expected-score-delta} \end{align} where $\mathds{1}_{[x=y]}$ is 1 if $x=y$, 0 otherwise. 
An equivalent expression is $\mathrm{tr}(F^\top \Delta G)$. \begin{definition}[Categorical model] A world model is {\em categorical} if, when an agent sees a signal, all other signals become less likely than their prior probability; i.e., $P(S_2=j|S_1=i) < P(S_2=j)$, for all $i$, for all $j\neq i$ (and analogously for agent 2). This implies positive correlation for identical signals: $P(S_2=i|S_1=i) > P(S_2=i)$. \end{definition} Two equivalent definitions of categorical are that the Delta matrix has positive diagonal and negative off-diagonal elements, or that $\operatorname{Sign}(\Delta)=\mathds{I}$. \begin{theorem} \label{thm:mech-is-strong truthful} If the world is categorical, then the $\textsc{MSDG}$ mechanism is strongly truthful and strictly proper. Conversely, if the Delta matrix $\Delta$ is symmetric and the world is not categorical, then the $\textsc{MSDG}$ mechanism is not strongly truthful. \end{theorem} \begin{proof} First, we show that truthfulness maximizes expected payment. We have $E(F,G) = \sum_{i,j} \Delta_{ij} \mathds{1}_{[F_i=G_j]}$. The truthful strategy corresponds to the identity matrix $\mathds{I}$, and results in a payment equal to the trace of $\Delta$: $E(\mathds{I},\mathds{I}) = \mathrm{tr}(\Delta) = \sum_{i} \Delta_{ii}$. By the categorical assumption, $\Delta$ has positive diagonal and negative off-diagonal elements, so this is the sum of all the positive elements of $\Delta$. Because $\mathds{1}_{[F_i=G_j]} \le 1$, this is the maximum possible payment for any pair of strategies. To show strong truthfulness, first consider an asymmetric joint strategy, with $F \ne G$. Then there exists $i$ s.t. $F_i \ne G_i$, reducing the expected payment by at least $\Delta_{ii} > 0$. Now consider symmetric, non-permutation strategies $F=G$. Then there exist $i \ne j$ with $F_i = F_j$. The expected payment will then include $\Delta_{ij} < 0$. This shows that truthfulness and symmetric permutation strategies are the only optimal strategy profiles. 
Strict properness follows from strong truthfulness. For the tightness of the categorical assumption, first consider a symmetric $\Delta$ with positive off-diagonal elements $\Delta_{ij}$ and $\Delta_{ji}$. Then agents can benefit by both ``merging'' signals $i$ and $j$. Let $\bar{F}$ be the strategy that is truthful on all signals other than $j$, and reports $i$ when the signal is $j$. Then $ E(\bar{F},\bar{F}) = \Delta_{ij} + \Delta_{ji} + \mathrm{tr}(\Delta) > E(\mathds{I},\mathds{I}) = \mathrm{tr}(\Delta)$, so $\textsc{MSDG}$ is not strongly truthful. Now consider a $\Delta$ where one of the on-diagonal entries is negative, say $\Delta_{ii}<0$. Then, because all rows and columns of $\Delta$ must add to 0, there must be a $j$ such that $\Delta_{ij} > 0$, and this reduces to the previous case where ``merging'' $i$ and $j$ is useful. \end{proof} For binary signals (`1' and `2'), any positively correlated model, such that $\Delta_{1,1}>0$ and $\Delta_{2,2}>0$, is categorical, and thus we obtain a substantially simpler proof of the main result in Dasgupta and Ghosh~\shortcite{DasguptaGhosh13}. \subsection{Discussion: Applicability of the $\textsc{MSDG}$ mechanism} Which world models are categorical? One example is a noisy observation model, where each agent observes the ``true'' signal $t$ with probability $q$ greater than $1/n$, and otherwise makes a mistake uniformly at random, receiving any signal $s \ne t$ with probability $(1-q)/(n-1)$. Such a model makes sense for classification tasks in which the classes are fairly distinct. For example, we would expect a categorical model for a question such as ``Does the animal in this photo swim, fly, or walk?'' On the other hand, a classification problem such as the ImageNet challenge~\cite{imagenet-ILSVRC15}, with 1000 nuanced and often similar image labels, is unlikely to be categorical. 
For example, if ``Ape'' and ``Monkey'' are possible labels, one agent seeing ``Ape'' is likely to increase the probability that another says ``Monkey'', when compared to the prior for ``Monkey'' in a generic set of photos. The categorical property is also unlikely to hold when signals have a natural order, which we dub \emph{ordinal} worlds. \begin{example} \label{example:non-categorical} If two evaluators grade essays on a scale from one to five, when one decides that an essay should get a particular grade, e.g. one, this may increase the likelihood that their peer decides on that or an adjacent grade, e.g. one or two. In this case, the sign of the delta matrix would be \begin{equation} \label{eq:ordinal-delta} \operatorname{Sign}(\Delta) = {\small \begin{bmatrix} 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ \end{bmatrix}. } \end{equation} Under the $\textsc{MSDG}$ mechanism, evaluators increase their expected payoff by agreeing to always report one whenever they thought the score was either one or two, and doing a similar ``merge'' for other pairs of reports. We will return to this example below. \end{example} The categorical condition is a stronger requirement than previously proposed properties in the literature, such as those assumed in the analyses of the \citeN{jurca2011} and \citeN{RFJ2016} ``1/prior'' mechanism and the \citeN{RBTS-Witkowski2012} shadowing mechanism. 
The 1/prior mechanism requires the self-predicting property \[\mathrm{Pr}(S_2=j|S_1=i) < \mathrm{Pr}(S_2=j|S_1=j),\] whereas the categorical property insists on an upper bound of $\mathrm{Pr}(S_2=j)$, which is tighter than $\mathrm{Pr}(S_2=j|S_1=j)$ in the typical case where the model has positive correlation. The shadowing mechanism requires \[\mathrm{Pr}(S_2=i|S_1=j) - \mathrm{Pr}(S_2=i) < \mathrm{Pr}(S_2=j|S_1=j) - \mathrm{Pr}(S_2=j),\] which says that the likelihood of signal $S_2=i$ cannot go up ``too much'' given signal $S_1=j$, whereas the categorical property requires the stronger condition that $\mathrm{Pr}(S_2=i|S_1=j) - \mathrm{Pr}(S_2=i) < 0$. To see how often the categorical condition holds in practice, we look at the correlation structure in a dataset from a large MOOC provider, focusing on 104 questions with over 100 submissions each, for a total of 325,523 assessments from 17 courses. Each assessment consists of a numerical score, which we examine, and an optional comment, which we do not study here. As an example, one assessment task for a writing assignment asks how well the student presented their ideas, with options ``Not much of a style at all'', ``Communicative style'', and ``Strong, flowing writing style'', and a paragraph of detailed explanation for each. These correspond to 0, 1, and 2 points on this rubric element.\footnote{While we only see student reports, we take as an assumption that these reasonably approximate the true world model. 
As MOOCs develop along with valuable credentials based on their peer-assessed work, we believe it will nevertheless become increasingly important to provide explicit credit mechanisms for peer assessment.} \begin{figure}[t] \begin{center} \includegraphics[width=0.4\textwidth]{figures/categorical_breakdown.pdf} \hfill \includegraphics[width=0.38\textwidth]{figures/avg_deltas.pdf} \includegraphics[width=0.1\textwidth]{figures/delta_colorbar.pdf} \caption{ Left: MOOC peer assessment is an ordinal domain, with most models with three or more signals not categorical. Right: Averaged $\Delta$ matrices, grouped by the number of signals in a domain. The positive diagonals show that users tend to agree on their assessments. For models of size 4 and 5, the ordinal nature of peer assessment is clear (e.g., an assessment of 2/5 is positively correlated with an assessment of 3/5). } \label{fig:model-breakdowns} \vspace{-0.5cm} \end{center} \end{figure} We estimate $\Delta$ matrices on each of the 104 questions from the assessments. We can think about each question as corresponding to a different signal distribution, and assessing a particular student's response to the question as an information task that is performed by several peers. The questions in our data set had five or fewer rubric options (signals), with three being most common (Figure~\ref{fig:model-breakdowns}L). This analysis confirms that the categorical condition only holds for about one third of our three-signal models and for none of the larger models (Figure~\ref{fig:model-breakdowns}L). We also computed the average $\Delta$ matrix for each model size, as visualized in Figure~\ref{fig:model-breakdowns}R. The bands of positive correlation around the diagonal are typical of what we refer to as an ordinal rather than categorical domain. \section{Handling the General Case} In this section, we present a mechanism that is informed-truthful for general domains. 
We then discuss when it is strongly-truthful, give a version of it requiring no domain knowledge, and discuss other considerations. \subsection{The Correlated Agreement Mechanism} Based on the intuition given in Section~\ref{sec:multi-task-peer}, and the success of \textsc{MSDG}\ for categorical domains, it seems promising to base the construction of a mechanism on the correlation structure of the signals, and in particular, directly on $\Delta$ itself. This is precisely our approach. In fact, we will see that essentially the simplest possible mechanism following this prescription is informed-truthful for \emph{all} domains. \begin{definition}[$\textsc{CA}$ mechanism] The {\em Correlated Agreement ($\textsc{CA}$) mechanism} is a multi-task mechanism with score matrix $S = \operatorname{Sign}(\Delta)$. \end{definition} \begin{theorem} \label{thm:01-informed-truthful} The $\textsc{CA}$ mechanism is informed-truthful and proper for all worlds. \end{theorem} \begin{proof} The truthful strategy profile $(\mathds{I},\mathds{I})$ has weakly higher payment than any other pair $F,G$: \[E(\mathds{I},\mathds{I}) = \sum_{i,j} \Delta_{i,j} S(i,j) = \sum_{i,j: \Delta_{ij}>0} \Delta_{i,j} \ge \sum_{i,j} \Delta_{i,j} S(F_i,G_j) = E(F,G),\] where the inequality follows from the fact that $S(i,j) \in \{0,1\}$. The truthful score is positive, while any uninformed strategy has score zero. Consider an uninformed strategy $F$, with $F_i=r$ for all $i$. Then, for any $G$, \[E(F,G) = \sum_{i} \sum_j \Delta_{i,j} S(r,G_j) = \sum_j S(r,G_j) \sum_i \Delta_{i,j} = \sum_j S(r,G_j) \cdot 0 = 0, \] where the next-to-last equality follows because rows and columns of $\Delta$ sum to zero. \end{proof} While informed-truthful, the \textsc{CA}\ mechanism is not always strictly proper. As discussed at the end of Section~\ref{sec:model}, we do not find this problematic; let us revisit this point. The peer prediction literature makes a distinction between proper and strictly proper, and insists on the latter. 
This comes from two motivations: (i) properness is trivial in standard models: one can simply pay the same amount all the time and this would be proper (since truthful reporting would be as good as anything else); and (ii) strict properness provides incentives to bother to acquire a useful signal or belief before making a report. Neither (i) nor (ii) is a critique of the $\textsc{CA}$ mechanism: for (i), paying a fixed amount does not give informed truthfulness, and for (ii), the mechanism provides strict incentives to invest effort in acquiring a signal. \begin{example} Continuing with Example~\ref{example:non-categorical}, we can see why $\textsc{CA}$ is not manipulable. $\textsc{CA}$ considers signals that are positively correlated on bonus tasks (and thus have a positive entry in $\Delta$) to be matching, so there is no need for agents to misreport to ensure matching. In simple cases, e.g. if only the two signals 1 and 2 are positively correlated, they are ``merged,'' and reports of one treated equivalently to the other. In cases such as Equation~\ref{eq:ordinal-delta}, the correlation structure is more complex, and the result is not simply merging. \end{example} \subsection{Strong Truthfulness of the $\textsc{CA}$ Mechanism} \label{sec:52a} The $\textsc{CA}$ mechanism is always informed truthful. In this section we characterize when it is also strongly truthful (and thus strictly proper), and show that it is maximal in this sense across a large class of mechanisms. \begin{definition}[Clustered signals] \label{def:clus-sig} A signal distribution has {\em clustered signals} when there exist at least two identical rows or columns in $\operatorname{Sign}(\Delta)$. \end{definition} \begin{figure}[t] \begin{center} \includegraphics[width=0.4\textwidth]{figures/clus-sig} \hfill \includegraphics[width=0.4\textwidth]{figures/no-clus-sig} \caption{ The blue and red nodes represent signals of agent 1 and 2, respectively. 
An edge between two signals represents that there is positive correlation between those signals. Left: A signal distribution for an image classification task with clustered signals. Right: A signal distribution for a MOOC peer assessment task or object counting task with ordinal signals and without clustered signals. } \label{fig:clus-sig} \end{center} \end{figure} Equivalently, two signals $i$ and $i'$ of an agent are clustered if $i$ is positively correlated with the same set of the other agent's signals as $i'$. \begin{example} See Figure~\ref{fig:clus-sig}. The first example corresponds to an image classification task where there are categories such as ``Monkey'', ``Ape'', ``Leopard'', ``Cheetah'' etc. The signals ``Monkey'' and ``Ape'' are clustered: for each agent, seeing one is positively correlated with the other agent having one of the two, and negatively correlated with the other possible signals. The second example concerns models with ordinal signals, such as peer assessment or counting objects. In this example there are no clustered signals for either agent. For example, signal 1 is positively correlated with signals 1 and 2, while signal 2 is positively correlated with signals 1, 2, and 3. \end{example} \begin{lemma} \label{lem:clus-sig} If $\Delta_{ij} \neq 0$, $\forall i,j$, then there exists a joint strategy in which at least one agent uses a non-permutation strategy and which matches the expected score of truthful reporting if and only if there are clustered signals. \end{lemma} \begin{proof} Suppose there are clustered signals, so that there exist $i\neq i'$ such that $\operatorname{Sign}(\Delta_{i,\cdot}) = \operatorname{Sign}(\Delta_{i',\cdot})$. Then if agent 2 is truthful, agent 1's expected score is the same for being truthful or for reporting $i'$ whenever she receives either $i$ or $i'$. Formally, consider the strategies $G = \mathds{I}$ and $F$ formed by replacing the $i$-th row in $\mathds{I}$ by the $i'$-th row. 
Observe that $S(i, j) = S(F_i, G_j)$ as the $i$-th and $i'$-th rows in $S$ are identical. Hence, $E(F,G) = E(\mathds{I}, \mathds{I})$. The same argument holds for clustered signals for agent 2. If the world does not have clustered signals, any agent using a non-permutation strategy leads to a lower expected score than being truthful. Suppose $F$ is a non-permutation strategy, such that $E(F,G) = E(\mathds{I}, \mathds{I})$ for some $G$. Then there exist signals $i\neq i'$ such that $F_i=F_{i'}=r$, for some $r$. No clustered signals implies that $\exists j$ such that $\operatorname{Sign}(\Delta_{i,j}) \neq \operatorname{Sign}(\Delta_{i',j})$. Let $G_j = j'$, for some $j'$. Without loss of generality assume that $\Delta_{i,j} > 0$; then we get $\Delta_{i',j} < 0$ as $\Delta_{i',j} \neq 0$. The score for signal pair $(S_1 = i,S_2 = j)$ is $S(r,j')$ and for $(S_1 = i',S_2 = j)$ is also $S(r,j')$. Either $S(r,j') = 1$ or $S(r,j') = 0$. In both cases the strategy profile $F,G$ will lead to a strictly smaller expected score compared to that of the truthful strategy, since $\Delta_{i,j} > 0$ and $\Delta_{i',j} < 0$. Similarly, we can show that if the second agent uses a non-permutation strategy, that also leads to strictly lower expected scores for both agents. \end{proof} We now give a condition under which there are asymmetric permutation strategy profiles that give the same expected score as truthful reporting. \begin{definition}[Paired permutations] A signal distribution has {\em paired permutations} if there exist distinct permutation matrices $P, Q$ \,s.t. $P \cdot \operatorname{Sign}(\Delta) = \operatorname{Sign}(\Delta) \cdot Q$. \end{definition} \begin{lemma} \label{lem:paired-perm} If $\Delta_{ij} \neq 0$, $\forall i,j$, then there exist asymmetric permutation strategy profiles with the same expected score under the $\textsc{CA}$ mechanism as truthful reporting if and only if the signal distribution has paired permutations. 
\end{lemma} \begin{proof} First we show that if the world has paired permutations then there exist asymmetric permutation strategy profiles that have the same expected score as truthful strategies. Consider $F = P$ and $G = Q$. From the paired permutations condition it follows that $S(i,j) = S(F_i, G_j)$, $\forall i,j$, since $S(F_i, G_j)$ is the $(i,j)$-th entry of the matrix $F\cdot S \cdot G^\top$ which is equal to $S$. Therefore, $E[F,G] = E[\mathds{I}, \mathds{I}]$. To prove the other direction, let $F$ and $G$ be the permutation strategies of agent 1 and 2, respectively, with $F \neq G$. If the world does not have paired permutations, then $F \cdot S \cdot G^\top \neq S$. Let $\hat{S} = F \cdot S \cdot G^\top$. The expected score for $F,G$ is \[ E[F,G] = \sum_{i,j} \Delta_{i,j} \cdot \hat{S}(i,j) \,, \] and the expected score for truthful strategies is \[ E[\mathds{I},\mathds{I}] = \sum_{i,j} \Delta_{i,j} \cdot S(i,j) \,. \] Combining the facts that $E[\mathds{I}, \mathds{I}] \geq E[F,G]$; $\Delta_{ij} \neq 0$, $\forall i,j$; and $\hat{S}$ differs from $S$ by at least one entry, $E[F,G]$ will be strictly less than $E[\mathds{I},\mathds{I}]$. \end{proof} Lemma~\ref{lem:clus-sig} shows that when the world has clustered signals, the $\textsc{CA}$ mechanism cannot differentiate between individual signals in a cluster, and is not strongly truthful. Similarly, Lemma~\ref{lem:paired-perm} shows that under paired permutations this mechanism is not able to distinguish whether an agent is reporting the true signals or a particular permutation of the signals. In domains without clustered signals and paired permutations, all strategies (except symmetric permutations) lead to a strictly lower score than truthful strategies, and hence, the $\textsc{CA}$ mechanism is strongly truthful. The $\textsc{CA}$ mechanism is informed truthful, but not strongly truthful, for the image classification example in Figure~\ref{fig:clus-sig} as there are clustered signals in the model.
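Both parts of Lemma~\ref{lem:clus-sig} can be checked mechanically on small examples. A minimal Python sketch (the $\Delta$ matrix here is hypothetical, with $\operatorname{Sign}$ mapping positive entries to 1 and negative entries to 0, as in the $\textsc{CA}$ scoring matrix):

```python
import numpy as np

# Hypothetical 3-signal Delta matrix (rows and columns sum to zero); signals
# 0 and 1 are clustered: their rows in Sign(Delta) are identical.
delta = np.array([[ 0.10,  0.05, -0.15],
                  [ 0.05,  0.10, -0.15],
                  [-0.15, -0.15,  0.30]])

S = (delta > 0).astype(int)          # CA scoring matrix S* = Sign(Delta)

def expected_score(F, G):
    """E(F,G) = sum_ij Delta_ij * S(F_i, G_j) for deterministic strategies."""
    n = delta.shape[0]
    return sum(delta[i, j] * S[F[i], G[j]] for i in range(n) for j in range(n))

# Clustered signals for agent 1 = duplicate rows of Sign(Delta); the column
# check for agent 2 is symmetric.
clustered = any(np.array_equal(S[i], S[k])
                for i in range(len(S)) for k in range(i + 1, len(S)))

truthful = expected_score([0, 1, 2], [0, 1, 2])
merged   = expected_score([1, 1, 2], [0, 1, 2])   # agent 1 merges signals 0 and 1
print(clustered, truthful == merged)              # True True
```

As the lemma predicts, the non-permutation strategy that merges the two clustered signals ties the truthful expected score.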
For the peer assessment example, it is strongly truthful because there are no clustered signals and a further analysis reveals that there are no paired permutations. A natural question is whether we can do better by somehow `separating' clustered signals from each other, and `distinguishing' permuted signals from true signals, by giving different scores to different signal pairs, while retaining the property that the designer only needs to know $\operatorname{Sign}(\Delta)$. Specifically, can we do better if we allow the score for each signal pair $(S_1 = i, S_2 = j)$ to depend on $i,j$ in addition to $\operatorname{Sign}(\Delta_{ij})$? We show that this extension does not add any additional power over the $\textsc{CA}$ mechanism in terms of strong truthfulness. \begin{theorem} \label{thm:maximally-strong-truthful} If $\Delta_{ij} \neq 0$, $\forall i,j$, then $\textsc{CA}$ is maximally strong truthful amongst multi-task mechanisms that only use knowledge of the correlation structure of signals, i.e.\ mechanisms that decide $S(i,j)$ using $\operatorname{Sign}(\Delta_{ij})$ and index $(i,j)$. \end{theorem} \begin{proof} We first show that the $\textsc{CA}$ mechanism is strongly truthful if the signal distribution has neither clustered signals nor paired permutations. This follows directly from Lemmas~\ref{lem:clus-sig} and~\ref{lem:paired-perm}, as strategy profiles in which any agent uses a non-permutation strategy or both agents use an asymmetric permutation strategy lead to a strictly lower expected score than truthful strategies. Next we show maximality by proving that if a signal distribution has either clustered signals or paired permutations then there do not exist any strongly truthful multi-task mechanisms that only use the correlation structure of signals. We prove this by contradiction. Suppose there exists a strongly truthful mechanism for the given signal distribution which computes the scoring matrix using the correlation structure of signals.
Let the scoring matrix for the signal distribution be $S$. If the signal distribution has clustered signals then at least two rows or columns in $\operatorname{Sign}(\Delta)$ are identical. Suppose that there exist $i \neq i'$, such that the $i$-th and $i'$-th row in $\operatorname{Sign}(\Delta)$ are identical. We will construct another delta matrix $\Delta'$ representing a signal distribution that has clustered signals, for which this mechanism cannot be simultaneously strongly truthful. Let $\Delta'$ be computed by exchanging rows $i$ and $i'$ of $\Delta$. Clearly, $\Delta'$ has clustered signals. Now, the scoring matrix for both $\Delta$ and $\Delta'$ is the same, since the sign structure is the same for both. Let $G = \mathds{I}$ and $F$ be computed by exchanging rows $i$ and $i'$ of $\mathds{I}$. Strong truthfulness for $\Delta$ implies that \begin{equation} \label{eqn:strong-delta} E_{\Delta}[\mathds{I},\mathds{I}] > E_{\Delta}[F,G] \,. \end{equation} However, observe that $E_{\Delta}[\mathds{I},\mathds{I}] = E_{\Delta'}[F,G]$ and $E_{\Delta'}[\mathds{I},\mathds{I}] = E_{\Delta}[F,G]$. Strong truthfulness for $\Delta'$ implies that \begin{equation} \label{eqn:strong-delta'} E_{\Delta'}[\mathds{I},\mathds{I}] > E_{\Delta'}[F,G] \implies E_{\Delta}[\mathds{I},\mathds{I}] < E_{\Delta}[F,G] \,. \end{equation} Equations~\ref{eqn:strong-delta} and~\ref{eqn:strong-delta'} lead to a contradiction, implying that the above mechanism cannot be strongly truthful. Similarly, if the $j$-th and $j'$-th column of $\operatorname{Sign}(\Delta)$ are identical for some $j \neq j'$, there exists another delta matrix $\Delta'$ formed by exchanging columns $j$ and $j'$ of $\Delta$. A similar contradiction can be reached using strong truthfulness on $\Delta$ and $\Delta'$.
The interesting case is when the signal distribution satisfies paired permutations, i.e.\ there exist permutation matrices $P\neq Q$ such that $P\cdot S \cdot Q^\top = S$. Consider $\Delta' = (P^{-1}) \cdot \Delta \cdot (Q^{-1})^\top$, $F = P$, and $G = Q$. We need to argue that $\Delta'$ represents a correct signal distribution and that it has paired permutations. To see this, observe that exchanging the columns or rows of a delta matrix leads to a valid delta matrix, and pre-multiplying or post-multiplying a matrix with permutation matrices only exchanges rows or columns, respectively. Observe that the sign structure of $\Delta'$ is the same as the sign structure of $\Delta$ since $S = (P^{-1}) \cdot S \cdot (Q^{-1})^\top$, and therefore, the scoring matrix for both $\Delta$ and $\Delta'$ is the same. Due to this, $\Delta'$ has paired permutations. Strong truthfulness for $\Delta$ implies that \begin{equation} \label{eqn:strong-delta-2} E_{\Delta}[\mathds{I},\mathds{I}] > E_{\Delta}[F,G] \,. \end{equation} However, again observe that $E_{\Delta}[\mathds{I},\mathds{I}] = E_{\Delta'}[F,G]$ and $E_{\Delta'}[\mathds{I},\mathds{I}] = E_{\Delta}[F,G]$. Strong truthfulness for $\Delta'$ implies that \begin{equation} \label{eqn:strong-delta'-2} E_{\Delta'}[\mathds{I},\mathds{I}] > E_{\Delta'}[F,G] \implies E_{\Delta}[\mathds{I},\mathds{I}] < E_{\Delta}[F,G] \,. \end{equation} Equations~\ref{eqn:strong-delta-2} and~\ref{eqn:strong-delta'-2} lead to a contradiction, implying that the above mechanism cannot be strongly truthful. Therefore, if the signal distribution has either clustered signals or paired permutations there exists no strongly truthful scoring mechanism that assigns scores based on the correlation structure of $\Delta$. \end{proof} This result shows that if a multi-task mechanism only relies on the correlation structure and is strongly truthful in some world model then the $\textsc{CA}$ mechanism will also be strongly truthful in that world model.
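For small $n$, the paired-permutations condition itself can be decided by exhaustive search over pairs of permutation matrices. A sketch (the categorical-style $\Delta$ below is hypothetical; its sign structure is the identity pattern, which forces $P = Q$ and hence admits no paired permutations):

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    """Permutation matrix with a 1 in row i, column p[i]."""
    P = np.zeros((len(p), len(p)), dtype=int)
    P[np.arange(len(p)), list(p)] = 1
    return P

def has_paired_permutations(delta):
    """Exhaustively look for distinct P, Q with P.Sign(Delta) = Sign(Delta).Q."""
    S = (delta > 0).astype(int)
    n = len(S)
    for p in permutations(range(n)):
        for q in permutations(range(n)):
            if p != q and np.array_equal(perm_matrix(p) @ S, S @ perm_matrix(q)):
                return True
    return False

# Hypothetical categorical-style world: Sign(Delta) is the identity pattern,
# so P.Sign(Delta) = Sign(Delta).Q forces P = Q, and no paired permutations
# exist (consistent with CA being strongly truthful in categorical domains).
delta = np.array([[ 0.2, -0.1, -0.1],
                  [-0.1,  0.2, -0.1],
                  [-0.1, -0.1,  0.2]])
print(has_paired_permutations(delta))   # False
```

The double loop over permutations is factorial in $n$, which is fine for the small signal spaces typical of peer assessment or image labeling.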
Therefore, even if one uses $2\cdot n^2$ parameters in the design of scoring matrices from $\operatorname{Sign}(\Delta)$, one can only be strongly truthful in the worlds where the $\textsc{CA}$ mechanism, which uses only 2 parameters, is strongly truthful. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figures/informed_strong_breakdown.pdf} \caption{Number of MOOC peer assessment models with clustered signals ($\textsc{CA}$ is informed truthful) and without clustered signals ($\textsc{CA}$ is strongly truthful up to paired permutations). \label{fig:informed-strong-breakdown}} \end{figure} A remaining question is whether strongly truthful mechanisms can be designed when the score matrix can depend on the exact value of the $\Delta$ matrix. We answer this question negatively. \begin{theorem} \label{lem:strong-truthful-impossibility} There exist symmetric signal distributions such that no multi-task mechanism is strongly truthful. \end{theorem} \begin{proof} Let $n=3$, and consider any symmetric $\Delta$ matrix of the form: \[ \Delta = \begin{bmatrix} x & y & -(x+y) \\ y & x & -(x+y) \\ -(x+y) & -(x+y) & 2(x+y) \end{bmatrix} \,, \] for some $0 < y < x \leq 0.5$, and let \[ S = \begin{bmatrix} a & b & e \\ c & d & f \\ g & h & i \end{bmatrix} \,, \] for some $a,b,c,d,e,f,g,h,i$ which can be selected using complete knowledge of $\Delta$. We will consider three strategy profiles $(F^1,G^1), (F^2,G^2), (F^3,G^3)$, with \[ F^1 = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad \qquad G^1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \;, \] \[ F^2 = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad \qquad G^2 = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \,, \] and \[ F^3 = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad \qquad G^3 = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \,.
\] Using the strong truthfulness condition $E[\mathds{I}, \mathds{I}] > E[F^1, G^1]$, we get \begin{eqnarray} \label{eqn:strong-f1-g1} \qquad ax + by + cy + dx \;&>&\; cx + dy + ay + bx \qquad \nonumber \\ \qquad (a+d)(x-y)\;&>&\; (c+b)(x-y) \qquad \qquad \nonumber\\ \qquad a+d\;&>&\; c+b \qquad \qquad\qquad \;, \end{eqnarray} where the last inequality follows due to the fact that $x>y$. Using the strong truthfulness condition $E[\mathds{I}, \mathds{I}] > E[F^2, G^2]$, we get \begin{eqnarray} \label{eqn:strong-f2-g2} \qquad by + cy > -dx + a(2y + x) +(g+e-f-h)(-x-y) \qquad \, \end{eqnarray} and again using the strong truthfulness condition $E[\mathds{I}, \mathds{I}] > E[F^3, G^3]$, we get \begin{eqnarray} \label{eqn:strong-f3-g3} \qquad by + cy > -ax + d(2y + x) +(f+h-g-e)(-x-y) \qquad \end{eqnarray} Now, multiplying equation~\ref{eqn:strong-f1-g1} by $y$ and combining it with equation~\ref{eqn:strong-f2-g2}, we get \begin{eqnarray} \qquad -dx + a(2y + x) +(g+e-f-h)(-x-y) \;\;<\;\; by + cy \;\;<\;\; ay+ dy \qquad \nonumber \\ \qquad \implies -dx + a(2y + x) +(g+e-f-h)(-x-y) \;\;< \;\; ay+ dy \qquad \nonumber\\ \qquad \implies a(x+y) \;\;< \;\; d(x+y) + (f+h-g-e)(-x-y) \qquad \label{eqn:combine-f1-g1-f2-g2} \end{eqnarray} Similarly, multiplying equation~\ref{eqn:strong-f1-g1} by $y$ and combining it with equation~\ref{eqn:strong-f3-g3}, we get \begin{eqnarray} \qquad -ax + d(2y + x) +(f+h-g-e)(-x-y) \;\;<\;\; by + cy \;\;<\;\; ay+ dy \qquad \nonumber \\ \qquad \implies -ax + d(2y + x) +(f+h-g-e)(-x-y) \;\;< \;\; ay+ dy \qquad \nonumber\\ \qquad \implies d(x+y) + (f+h-g-e)(-x-y) \;\;< \;\; a(x+y)\qquad \label{eqn:combine-f1-g1-f3-g3} \end{eqnarray} Equations~\ref{eqn:combine-f1-g1-f2-g2} and~\ref{eqn:combine-f1-g1-f3-g3} lead to a contradiction, implying that there do not exist any $a,b,c,d,e,f,g,h,i$ that can satisfy these inequalities simultaneously. Therefore, for matrices of the above form there do not exist any strongly truthful scoring matrices.
\end{proof} Figure~\ref{fig:informed-strong-breakdown} evaluates the sign structure of the $\Delta$ matrix for the 104 MOOC questions described earlier. The $\textsc{CA}$ mechanism is strongly truthful up to paired permutations when signals are not clustered, and thus in roughly half of the worlds. \subsection{Detail-Free Implementation of the $\textsc{CA}$ Mechanism} So far we have assumed that the $\textsc{CA}$ mechanism has access to the sign structure of $\Delta$. In practice, the signs may be unknown, or partially known (e.g. the designer may know or assume that the diagonal of $\Delta$ is positive, but be uncertain about other signs). The $\textsc{CA}$ mechanism can be made detail-free in a straightforward way by estimating correlation and thus the score matrix from reports; it remains informed truthful if the number of tasks is large (even allowing for the new concern that reports affect the estimation of the distribution and thus the choice of score matrix.) \begin{definition}[The $\textsc{CA}$ Detail-Free Mechanism (CA-DF)] As usual, we state the mechanism for two agents for notational simplicity: \begin{enumerate} \item Each agent completes $m$ tasks, providing $m$ pairs of reports. \item Randomly split the tasks into sets $A$ and $B$ of equal size. \item Let $T^A, T^B$ be the empirical joint distributions of reports on the bonus tasks in $A$ and $B$, with $T^A(i,j)$ the observed frequency of signals $i,j$. Also, let $T^A_M, T^B_M$ be the empirical marginal distribution of reports computed on the penalty tasks in $A$ and $B$, respectively, with $T^A_M(i)$ the observed frequency of signal $i$. Note that we only take one sample per task to ensure the independence of samples. \item Compute the empirical estimate of the Delta matrix, based on reports rather than signals: $\Gamma^A_{ij} = T^A(i,j)-T^A_M(i)T^A_M(j)$, and similarly for $\Gamma^B$. 
\item Define score matrices, \emph{swapping task sets}: $S^A=\operatorname{Sign}(\Gamma^B)$, $S^B=\operatorname{Sign}(\Gamma^A)$. Note that $S^A$ does not depend on the reports on tasks in $A$. \item Apply the $\textsc{CA}$ mechanism separately to tasks in set $A$ and set $B$, using score matrix $S^A$ and $S^B$ for tasks in set $A$ and $B$, respectively. \end{enumerate} \end{definition} \begin{lemma} \label{lem:detailfree-1} For all strategies $F,G$ and all score matrices $S \in \{0,1\}^{n\times n}$, $E({S^\ast},\mathds{I},\mathds{I}) \geq E(S,F,G)$ in the multi-task mechanism, where $E(S,F,G)$ is the expected score of the mechanism with a fixed score matrix $S$. \end{lemma} \begin{proof} The expected score for arbitrary score matrix and strategies is: \[E(S,F,G) = \sum_{i=1}^n\sum_{j=1}^n\Delta_{ij} S(F_i,G_j)\] The expected score for truthful reporting with ${S^\ast}$ is \begin{align*} E({S^\ast},\mathds{I},\mathds{I}) = \sum_{i=1}^n\sum_{j=1}^n\Delta_{ij} \operatorname{Sign}(\Delta)_{ij} = \sum_{i,j: \Delta_{ij}>0} \Delta_{ij} \ge \sum_{i=1}^n\sum_{j=1}^n\Delta_{ij} S(F_i, G_j), \end{align*} where the inequality follows because $S$ is a 0/1 matrix. \end{proof} The lemma gives the main intuition for why $\textsc{CA}$-DF is informed truthful for large $m$: even if agents could set the score matrix completely independently of their strategies, the ``truthful'' score matrix ${S^\ast}$ is the one that maximizes payoffs. To get a precise result, the following theorem shows that a score matrix ``close'' to ${S^\ast}$ will be chosen with high enough probability. \begin{theorem}[Mechanism $\textsc{CA}$-DF is ($\epsilon,\delta$)-informed truthful] \label{thm:01DF-detail-free} Let $\epsilon>0$ and $\delta>0$ be parameters. 
Then there exists a number of tasks $m=O(n^{3}\log(1/\delta)/\epsilon^2)$ (for $n$ signals), such that with probability at least $1-\delta$, there is no strategy profile with expected score more than $\epsilon$ above truthful reporting, and any uninformed strategy has expected score strictly less than truthful. Formally, with probability at least $1-\delta$, $E(F,G) \le E(\mathds{I},\mathds{I})+\epsilon,$ for all strategy pairs $F,G$; for any uninformed strategy $F_0$ (equivalently $G_0$), $E(F_0,G) < E(\mathds{I},\mathds{I})$. \end{theorem} \begin{proof} Let $H^A$ and $H^B$ be the (unobserved) joint signal frequencies, which are a sample from the true joint distribution. Let $M^A$ and $M^B$ be the (unobserved) marginal signal frequencies, which are a sample from the true marginal distribution. Finally, let $\DeltaSup{A}$ and $\DeltaSup{B}$ be the corresponding empirical Delta matrices. Fixing strategies $F,G$, $S^A$ is a function of $H^B$ and $M^B$, and independent of $H^A$ and $M^A$. This means that we can write the expected score for tasks in $A$ as \begin{align} E(S^A,F,G) = \sum_{i=1}^n\sum_{j=1}^n\Delta_{ij}S^A(F_i,G_j). \end{align} By Lemma~\ref{lem:detailfree-1}, we know that $E({S^\ast},\mathds{I},\mathds{I}) \ge E(S,F,G)$ for all $S,F,G$, and will show that once $m$ is large enough, being truthful gets close to this score with high probability. We have \begin{align} |E(S^A,\mathds{I},\mathds{I}) - E({S^\ast},\mathds{I},\mathds{I})| &= |E(\operatorname{Sign}(\DeltaSup{B}),\mathds{I},\mathds{I}) - E(\operatorname{Sign}(\Delta),\mathds{I},\mathds{I})| \\ &= |\sum_{i=1}^n\sum_{j=1}^n\Delta_{ij} (\operatorname{Sign}(\DeltaSup{B})_{ij} - \operatorname{Sign}(\Delta)_{ij})| ~. \end{align} Therefore, for some accuracy $\epsilon$ and confidence $\delta$, with $m = O(n^3\log(1/\delta)/\epsilon^2)$, we want \begin{align} \label{eq:target} &|\sum_{i=1}^n\sum_{j=1}^n\Delta_{ij} (\operatorname{Sign}(\DeltaSup{B})_{ij} - \operatorname{Sign}(\Delta)_{ij})| \le \epsilon~.
\end{align} Observe that \begin{align} |\sum_{i,j}\Delta_{ij} (\operatorname{Sign}(\DeltaSup{B})_{ij} - \operatorname{Sign}(\Delta)_{ij})| &\le \sum_{i,j}| \Delta_{ij} (\operatorname{Sign}(\DeltaSup{B})_{ij} - \operatorname{Sign}(\Delta)_{ij})| \\ &\le \sum_{i,j} | \Delta_{ij} - \DeltaSup{B}_{ij}|~. \end{align} Therefore, it is sufficient to learn $\DeltaSup{B}$ such that \begin{align} \label{eq:learnt} &\sum_{i=1}^n\sum_{j=1}^n| \Delta_{ij} - \DeltaSup{B}_{ij}| \le \epsilon~. \end{align} We now use a standard result (see e.g.~\cite{devroye2001combinatorial}, Theorems 2.2 and 3.1), that any distribution over finite domain $\Omega$ is learnable within L1 distance $d$ in $O(|\Omega|/d^2)$ samples with high probability, specifically $1-\delta$ with an additional $\log(1/\delta)$ factor. Using this result we can learn the joint signal distribution of the agents using $O(9n^2/\epsilon^2)$ samples with accuracy $\epsilon/3$. We can also learn the marginal distribution of agents' signals using $O(9n^3/\epsilon^2)$ samples from the true marginal distribution with accuracy $\epsilon/3n$. With high probability, after these many samples from each of these distributions, we have \begin{align} \sum_{i=1}^n\sum_{j=1}^n| P_{ij} - H^{B}_{ij}| &\le \frac{\epsilon}{3} \label{eq:learnt-joint} \\ \sum_{i=1}^n| P_{i} - M^B_{i}| &\le \frac{\epsilon}{3n} \label{eq:learnt-marginal} ~. 
\end{align} Now, \begin{align} \sum_{i,j} | \Delta_{ij} - \DeltaSup{B}_{ij}| &= \sum_{i,j} |P_{ij} - H^B_{ij} - (P_i P_j - M^B_i M^B_j)|\\ & \le \sum_{i,j} |P_{ij} - H^B_{ij}| + \sum_{i,j}|P_i P_j - M^B_i M^B_j| \quad \text{(Triangle Ineq.)}\\ & \le \frac{\epsilon}{3} + \sum_{i,j}|P_i P_j - M^B_i \left(P_j \pm \frac{\epsilon}{3n} \right) | \qquad \text{(Using Eq.~\ref{eq:learnt-joint} \& ~\ref{eq:learnt-marginal} )}\\ & = \frac{\epsilon}{3} + \sum_{i,j}|P_i P_j - M^B_i P_j \pm M^B_i \frac{\epsilon}{3n} | \\ & \le \frac{\epsilon}{3} + \sum_{i,j}| \left(P_i - M^B_i \right) P_j| + \sum_{i,j} M^B_i \frac{\epsilon}{3n} \quad \text{(Triangle Ineq.)} \\ & = \frac{\epsilon}{3} + \sum_{i,j} P_j | P_i - M^B_i | + \sum_{i,j} M^B_i \frac{\epsilon}{3n} \\ & = \frac{\epsilon}{3} + \sum_{i,j} P_j | P_i - M^B_i | + \sum_{j} \frac{\epsilon}{3n} \\ & \leq \frac{\epsilon}{3} + \sum_{j=1}^n \sum_{i=1}^n | P_i - M^B_i | + n \frac{\epsilon}{3n} \qquad \qquad \qquad \text{(} |P_j|\leq 1 \text{)} \\ & \leq \frac{\epsilon}{3} + \sum_{j=1}^n \frac{\epsilon}{3n} + \frac{\epsilon}{3} \qquad \qquad\qquad\qquad\qquad\quad\text{(Using Eq.~\ref{eq:learnt-marginal})}\\ & = \epsilon ~. \end{align} We now conclude \begin{align} |E(S^A,\mathds{I},\mathds{I}) - E({S^\ast},\mathds{I},\mathds{I})| \quad &\leq \quad \sum_{i=1}^n\sum_{j=1}^n | \Delta_{ij} - \DeltaSup{B}_{ij}| \quad \leq \epsilon~, \end{align} which implies $E(S^A,\mathds{I},\mathds{I}) + \epsilon \ge E(S,F,G)$ for all $S,F,G$. Finally, note that the expected value of uninformed strategies is 0, because $E(S,F^0,G) = 0$ for any uninformed $F^0$, regardless of score matrix, while $\epsilon$ can always be set small enough to ensure that being truthful has positive expected payoff. \end{proof} \subsection{Agent heterogeneity} \label{sec:extensions} The $\textsc{CA}$ mechanism only uses the signs of the entries of $\Delta$ to compute scores, not the exact values.
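As a small illustration, two hypothetical worlds with different correlation magnitudes but identical sign structure induce exactly the same score matrix:

```python
import numpy as np

# Two hypothetical worlds with different correlation magnitudes but identical
# sign structure: the CA score matrix Sign(Delta) is the same for both, so
# agents are scored by the same rule in either world.
sign01 = lambda d: (d > 0).astype(int)

delta_a = np.array([[ 0.20, -0.10, -0.10],
                    [-0.10,  0.20, -0.10],
                    [-0.10, -0.10,  0.20]])
delta_b = np.array([[ 0.30, -0.25, -0.05],
                    [-0.25,  0.28, -0.03],
                    [-0.05, -0.03,  0.08]])

same_scores = np.array_equal(sign01(delta_a), sign01(delta_b))
print(same_scores)                      # True
```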
This means that the results can handle some variability across agent ``sensing technology,'' as long as the sign structure of the $\Delta$ matrix is uniform across all pairwise matchings of peers. In the binary signal case, this reduces to agents having positive correlation between their signals, giving exactly the heterogeneity results in \citeN{DasguptaGhosh13}. Moreover, the agents themselves do not need to know the detailed signal model to know how to act; as long as they believe that the scoring mechanism is using the correct correlation structure, they can be confident in investing effort and simply report their signals truthfully. \subsection{Unintended Signals} \label{subsec:signal-models} Finally, we discuss a seemingly pervasive problem in peer prediction: in practice, tasks may have many distinctive attributes on which agents may base their reports, in addition to the intended signal, and yet all models in the literature assume away the possibility that agents can choose to acquire such unintended signals. For example, in online peer assessment where students are asked to evaluate the quality of student assignments, students could instead base their assessments on the length of an essay or the average number of syllables per word. In an image categorization system, users could base their reports on the color of the top-left pixel, or the number of kittens present (!), rather than on the features they are asked to evaluate. Alternative assessments can benefit agents in two ways: they may require less effort, and they may result in higher expected scores via more favorable Delta matrices.\footnote{This issue is related to the perennial problem of spurious correlations in classification and regression.} We can characterize when this kind of manipulation cannot be beneficial to agents in the $\textsc{CA}$ mechanism. The idea is that the amount of correlation coupled with variability across tasks should be large enough for the intended signal. 
Let $\eta$ represent a particular {\em task evaluation strategy}, which may involve acquiring different signals from the task than intended. Let $\DeltaSup{\eta}$ be the corresponding $\Delta$ matrix that would be designed if this was the signal distribution. This is defined on a domain of signals that may be distinct from that in the designed mechanism. In comparison, let $\eta^\ast$ define the task evaluation strategy intended by the designer (i.e., acquiring signals consistent with the mechanism's message space), coupled with truthful reporting. The expected payment from this behavior is $\sum_{ij: \DeltaSup{\eta^\ast}_{ij}>0} \DeltaSup{\eta^\ast}_{ij}.$ The maximal expected score for an alternate task evaluation strategy $\eta$ may require a strategy remapping signal pairs in the signal space associated with $\eta$ to signal pairs in the intended mechanism (e.g., if the signal space under $\eta$ is different than that provided by the mechanism's message space). The expected payment is bounded above by $\sum_{ij: \DeltaSup{\eta}_{ij}>0} \DeltaSup{\eta}_{ij}.$ Therefore, if the expected score for the intended $\eta^\ast$ is higher than the maximum possible score for other $\eta$, there will be no reason to deviate. \section{Conclusion} We study the design of peer prediction mechanisms that leverage signal reports on multiple tasks to ensure informed truthfulness, where truthful reporting is the joint strategy with highest payoff across all joint strategies, and strictly higher payoff than all uninformed strategies (i.e., those that do not depend on signals or require effort). We introduce the $\textsc{CA}$ mechanism, which is informed-truthful in general multi-signal domains. The mechanism reduces to the~\citeN{DasguptaGhosh13} mechanism in binary domains, is strongly truthful in categorical domains, and maximally strongly truthful among a broad class of multi-task mechanisms. 
We also present a detail-free version of the mechanism that works without knowledge of the signal distribution while retaining $\epsilon$-informed truthfulness. Interesting directions for future work include: (i) adopting a non-binary model of effort, and (ii) combining learning with models of agent heterogeneity.
\section{Introduction} In this article we apply a method of quantum many-body theory called the coupled cluster method (CCM) \cite{ccm1,ccm2,ccm5,ccm12,ccm15,ccm20,ccm26,ccm27,ccm32,ccm35} to study strongly interacting quantum spin systems. The CCM is not restricted, in principle, by the spatial dimensionality of the problem or by the presence of competition between bonds, i.e., in frustrated quantum spin systems. An important advance in the accuracy of the method for a localized approximation scheme called the LSUB$m$ scheme has been afforded by the use of ``high-order'' CCM via computer-algebraic implementations \cite{ccm12,ccm15,ccm20,ccm26}. This computer code developed by DJJ Farnell and J Schulenburg \cite{code} is very flexible in terms of the range of underlying crystallographic lattices, values for spin quantum number, and types of Hamiltonian that may be studied. A common first task in the practical application of the CCM is to rotate the local spin axes of the (often classical) model state so that (notationally only) the spins all appear to point in the downwards $z$-direction. Although the Hamiltonian is changed by transforming these local spin axes, these rotations are unitary and so they do not affect the energy eigenvalues or expectation values. Furthermore, we note that hitherto only coplanar model states have been used because they do not lead to complex numbers in the new Hamiltonian. By contrast, three-dimensional (3D) non-coplanar model states can lead to imaginary terms in the new Hamiltonian after rotation and so are more difficult to treat computationally. In principle, however, macroscopic quantities predicted by the CCM (such as the ground-state energy and order parameter) should still be real even though the Hamiltonian is now complex because again these transformations of local spin axes are unitary.
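This invariance of the spectrum under local spin-axis rotations, even when the rotated Hamiltonian acquires complex matrix elements, can be checked numerically on a single Heisenberg bond. The following minimal sketch is our illustration (not part of the high-order CCM code of Ref. \cite{code}); it rotates the local axes of one spin about a generic non-coplanar axis:

```python
import numpy as np

# Spin-1/2 operators in the basis (|up>, |down>)
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
I2 = np.eye(2)

def spin_rotation(theta, n):
    """exp(-i theta n.s) for spin-1/2: cos(t/2) I - 2i sin(t/2) n.s"""
    ns = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(theta / 2) * np.eye(2) - 2j * np.sin(theta / 2) * ns

# Single Heisenberg bond H = s1 . s2 (a real matrix)
H = sum(np.kron(s, s) for s in (sx, sy, sz))

# Rotate only spin 2, about a generic non-coplanar axis
n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
U = np.kron(I2, spin_rotation(0.7, n))
H_rot = U.conj().T @ H @ U

assert np.abs(H_rot.imag).max() > 1e-3    # rotated Hamiltonian is complex
assert np.allclose(np.linalg.eigvalsh(H_rot), np.linalg.eigvalsh(H))
print(np.linalg.eigvalsh(H_rot).round(6))  # same spectrum: -0.75 and 0.25 (x3)
```

The rotated Hamiltonian contains imaginary terms, yet its eigenvalues remain those of the original bond (the singlet at $-3/4$ and the triplet at $1/4$), which is the property exploited for 3D model states.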
Here we explain how we may use such 3D model states for the CCM and how we may solve for the (now possibly complex) CCM correlation coefficients. As an illustration of the method, we consider the spin-half triangular-lattice Heisenberg antiferromagnet in a magnetic field \cite{LhuiMi,hon1999,CGHP,squareTriangleED,HSR04}. As is well-known, the response of quantum magnetic systems to an external field is revealed by their magnetization curves. The magnetic processes of quantum antiferromagnets are discussed, e.g., in Refs. \cite{hon1999,squareTriangleED,HSR04,nishi,chub,alicea,oshi,SchuRi,jump,ono,kagome_pl,schnalle,schroeder,fortune}, and the interested reader is referred to these sources for more details. The Hamiltonian that we will use here is given by \begin{equation} H = \sum_{\langle i,j\rangle} {\bf s}_i ~ \cdot ~ {\bf s}_j - \lambda \sum_i s_i^z ~~ , \label{heisenberg} \end{equation} where the index $i$ runs over all lattice sites on the triangular lattice. The expression $\langle i,j\rangle$ indicates a sum over all nearest-neighbor pairs, although each pair is counted once and once only. The strength of the applied external magnetic field is given by $\lambda$. There are two ground states classically (shown in Fig. \ref{model_states}): a set of coplanar states and a single non-coplanar state. The quantum system is discussed in Refs. \cite{hon1999,HSR04,nishi,chub,alicea,chub94,trumper00,ono,squareTriangleED}. Although on the classical level both cases (coplanar and non-coplanar) are energetically equivalent\cite{kawamura,chub,zhito,cabra}, previous results of approximate methods indicate that thermal or quantum fluctuations ought to favor the planar configuration \cite{kawamura,chub,zhito,cabra}. Previous results of the CCM \cite{farnell} indicate that a plateau state occurs for $1.37 \lesssim \lambda \lesssim 2.15$. (Note that we compare new results presented in this article for the non-coplanar states to those earlier results of Ref. \cite{farnell}.)
These results for the plateau are in excellent agreement with experimental results for the magnetic compound Ba$_3$CoSb$_2$O$_9$ (a spin-half triangular-lattice antiferromagnet), which demonstrates a spin plateau that agrees quantitatively with results of the CCM and exact diagonalizations \cite{shirata}. Furthermore, CCM results indicate that a similar plateau occurs over the range $2.82 \lesssim \lambda \lesssim 3.70$ for the spin-one triangular-lattice antiferromagnet, and this theoretical result has subsequently been established experimentally for the compound Ba$_3$NiSb$_2$O$_9$ (a spin-one triangular-lattice antiferromagnet) \cite{richter}. The main goal of our paper is to explain how the CCM can be used with 3D model states. Firstly we present a brief description of the CCM formalism and its application via computational methods to the subject of quantum spin models with 3D model states. As an illustration of our method, we describe the application of the method to the spin-half Heisenberg model for the triangular lattice at zero temperature in the presence of an external magnetic field. We present our results and then discuss the conclusions of this research. \section{The Coupled Cluster Method (CCM)} As the CCM has been discussed extensively elsewhere (see Refs. \cite{ccm1,ccm2,ccm5,ccm12,ccm15,ccm20,ccm26,ccm27,ccm32,ccm35}), only a brief overview of the method is presented here. Note however that the solution of the CCM equations for the case of 3D model states, which has not been attempted before, is presented in an Appendix. In this case, CCM correlation coefficients may be complex, and so extensive changes to the basic computer code that implements the CCM to high orders of approximation are necessary, again as described in the Appendix.
We begin the brief overview of the CCM method by presenting the ground-state Schr\"odinger equations, which are given by \begin{equation} H |\Psi\rangle = E_g |\Psi\rangle \;; \;\;\; \langle\tilde{\Psi}| H = E_g \langle\tilde{\Psi}| \;. \label{eq1} \end{equation} The bra and ket states are given by \begin{eqnarray} |\Psi\rangle = {\rm e}^S |\Phi\rangle \; &;& \;\;\; S=\sum_{I \neq 0} {\cal S}_I C_I^{+} \nonumber \; , \\ \langle\tilde{\Psi}| = \langle\Phi| \tilde{S} {\rm e}^{-S} \; &;& \;\;\; \tilde{S} =1 + \sum_{I \neq 0} \tilde{{\cal S}}_I C_I^{-} \; . \label{eq2} \end{eqnarray} We use model states (denoted $|\Phi\rangle$ in the ket state and $\langle\Phi|$ in the bra state) as reference states for the CCM; those used here are shown in Fig. \ref{model_states}. \begin{figure} \epsfxsize=12cm \centerline{\epsffile{model_states.eps}} \caption{Classical ground states (also the CCM model states): I to III are coplanar, whereas IV is non-coplanar with spins at an angle $\theta$ to the plane perpendicular to the external field.} \label{model_states} \end{figure} We note that the CCM ket- and bra-state equations are given by \begin{eqnarray} \langle\Phi|C_I^{-} {\rm e}^{-S} H {\rm e}^S|\Phi\rangle &=& 0 , \;\; \forall I \neq 0 \;\; ; \label{ket_state_eqn} \\ \langle\Phi|\tilde{S} {\rm e}^{-S} [H,C_I^{+}] {\rm e}^S|\Phi\rangle &=& 0 , \;\; \forall I \neq 0 \;\; , \label{bra_state_eqn} \end{eqnarray} and that the method by which Eqs. (\ref{ket_state_eqn}) and (\ref{bra_state_eqn}) are solved has been discussed extensively elsewhere \cite{ccm1,ccm2,ccm5,ccm12,ccm15,ccm20,ccm26,ccm27,ccm32,ccm35} and so is not given here. The ground-state energy is given by \begin{equation} E_g = \langle \Phi | e^{-S} H e^S | \Phi \rangle ~~ . \end{equation} This equation is a function of the ket-state correlation coefficients only. We differentiate between the model states that are coplanar and the single model state that is non-coplanar (or ``3D'').
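The structure of the ket-state equation (\ref{ket_state_eqn}) and the energy expression can be made concrete on a two-site toy problem. The sketch below is our illustration (not the high-order code of Ref. \cite{code}); it treats the two-site spin-half Heisenberg antiferromagnet after the usual sublattice rotation, with the single ket-state configuration $S = a\, s_1^+ s_2^+$, for which solving the ket-state equation gives $a = 1$ and recovers the exact singlet energy $E_g = -3/4$:

```python
import numpy as np

# Two-site spin-1/2 Heisenberg antiferromagnet after rotating the spin axes on
# site 2, so the Neel state |down,down> is the model state |Phi>.  Rotated
# Hamiltonian: H = -(1/2)(s1+ s2+ + s1- s2-) - s1z s2z.
I4, I2 = np.eye(4), np.eye(2)
sp = np.array([[0., 1.], [0., 0.]])                 # s^+
sm, sz = sp.T, np.diag([0.5, -0.5])

S1p, S2p = np.kron(sp, I2), np.kron(I2, sp)
S1m, S2m = np.kron(sm, I2), np.kron(I2, sm)
S1z, S2z = np.kron(sz, I2), np.kron(I2, sz)

H = -0.5 * (S1p @ S2p + S1m @ S2m) - S1z @ S2z
phi = np.zeros(4); phi[3] = 1.0                     # |down,down>
Cp = S1p @ S2p                                      # the only ket-state configuration

def e_sim(a):
    """Similarity-transformed Hamiltonian e^{-S} H e^{S}; S^2 = 0, so e^{+-S} = 1 +- S."""
    S = a * Cp
    return (I4 - S) @ H @ (I4 + S)

# Ket-state equation <Phi| C^- e^{-S} H e^{S} |Phi> = 0 reduces to (a^2 - 1)/2 = 0
a = 1.0
residual = (Cp @ phi) @ (e_sim(a) @ phi)
E_g = phi @ e_sim(a) @ phi
print(residual, E_g)                                # 0.0 -0.75 (exact singlet energy)
```

For this tiny system the single-configuration CCM is exact; in the full calculation the same equations are solved for many (possibly complex) coefficients ${\cal S}_I$ at a given LSUB$m$ level.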
The analysis for the coplanar model states is given in Ref. \cite{farnell} and we refer the interested reader to this publication for more details. The non-coplanar model state IV has spins that make an angle $\theta$ to the plane perpendicular to the external field, as is also shown in Fig.~\ref{model_states}. We rotate the local spin axes of the spins such that all spins appear to point downwards for all four model states I, II, III, and IV in Fig.~\ref{model_states}, which yields four new Hamiltonians. The Hamiltonian for model state I of Fig.~\ref{model_states}, after rotation of the local spin axes, is given by: \begin{eqnarray} H &=& \sum_{\langle i_A \rightarrow i_B \rangle} \biggl \{ - \frac 14 (1+{\rm cos}(2\alpha)) ( s_{i_A}^+ s_{i_B}^+ + s_{i_A}^- s_{i_B}^- ) \nonumber \\ & & ~~~~~~~ + \frac 14 (1-{\rm cos}(2\alpha)) ( s_{i_A}^+ s_{i_B}^- + s_{i_A}^- s_{i_B}^+ ) \nonumber \\ & & ~~~~~~~ - {\rm cos}(2\alpha) s_{i_A}^z s_{i_B}^z + \frac 12 {\rm sin}(2\alpha) ( s_{i_A}^z s_{i_B}^+ + s_{i_A}^z s_{i_B}^- ) \nonumber \\ & & ~~~~~~~ - \frac 12 {\rm sin}(2\alpha) ( s_{i_A}^+ s_{i_B}^z + s_{i_A}^- s_{i_B}^z ) \biggr \} \nonumber \\ &+& \sum_{\langle i_{B,C} \rightarrow i_{C,A} \rangle} \biggl \{ - \frac 14 (1+{\rm sin}(\alpha)) ( s_{i_{B,C}}^+ s_{i_{C,A}}^+ + s_{i_{B,C}}^- s_{i_{C,A}}^- ) \nonumber \\ & & ~~~~~~~ + \frac 14 (1-{\rm sin}(\alpha)) ( s_{i_{B,C}}^+ s_{i_{C,A}}^- + s_{i_{B,C}}^- s_{i_{C,A}}^+ ) \nonumber \\ & & ~~~~~~~ - {\rm sin}(\alpha) s_{i_{B,C}}^z s_{i_{C,A}}^z + \frac 12 {\rm cos}(\alpha) ( s_{i_{B,C}}^z s_{i_{C,A}}^+ + s_{i_{B,C}}^z s_{i_{C,A}}^- ) \nonumber \\ & & ~~~~~~~ - \frac 12 {\rm cos}(\alpha) ( s_{i_{B,C}}^+ s_{i_{C,A}}^z + s_{i_{B,C}}^- s_{i_{C,A}}^z ) \biggr \} \nonumber \\ & & \nonumber \\ &-& \lambda \sum_{i_C} s_{i_C}^z + \lambda {\rm sin}(\alpha) \left(\sum_{i_A} 
s_{i_A}^z + \sum_{i_B} s_{i_B}^z\right) \nonumber \\ &-& \frac {\lambda}2 {\rm cos}(\alpha) \sum_{i_A} (s_{i_A}^+ + s_{i_A}^-) + \frac {\lambda}2 {\rm cos}(\alpha) \sum_{i_B} (s_{i_B}^+ + s_{i_B}^-)\;,\label{rotH2} \end{eqnarray} where the sum $\langle i_A \rightarrow i_B \rangle$ goes from sublattice $A$ to sublattice $B$ (and with directionality). Note that $\langle i_{B,C} \rightarrow i_{C,A} \rangle$ indicates a sum that goes from sublattice $B$ to sublattice $C$ and sublattice $C$ to sublattice $A$, respectively (and with directionality). The Hamiltonian for model state III, Fig.~\ref{model_states} after rotation of the local spin axes is given by: \begin{eqnarray} H &=& \sum_{\langle i_C \rightarrow i_{A,B} \rangle} \biggl \{ \frac 14 (-1+{\rm cos}(\alpha-\beta)) ( s_{{i_C}}^+ s_{{i_{A,B}}}^+ + s_{{i_C}}^- s_{{i_{A,B}}}^- ) \nonumber \\ & & ~~~~~~~ + \frac 14 (1+{\rm cos}(\alpha-\beta)) ( s_{{i_C}}^+ s_{{i_{A,B}}}^- + s_{{i_C}}^- s_{{i_{A,B}}}^+ ) \nonumber \\ & & ~~~~~~~ + {\rm cos}(\alpha-\beta) s_{{i_C}}^z s_{{i_{A,B}}}^z \nonumber \\ & & ~~~~~~~ + \frac 12 {\rm sin}(\alpha-\beta) ( s_{{i_C}}^+ s_{{i_{A,B}}}^z + s_{{i_C}}^- s_{{i_{A,B}}}^z ) \nonumber \\ & & ~~~~~~~ - \frac 12 {\rm sin}(\alpha-\beta) ( s_{{i_C}}^z s_{{i_{A,B}}}^+ + s_{{i_C}}^z s_{{i_{A,B}}}^- ) \biggr \} \nonumber \\ &+& \sum_{\langle i_A , i_B \rangle} \biggl \{ \frac 12 ( s_{i_A}^+ s_{i_B}^- + s_{i_A}^- s_{i_B}^+ ) + s_{i_A}^z s_{i_B}^z \biggr \} \nonumber \\ &+& \lambda {\rm sin}(\alpha)\left(\sum_{i_A} s_{i_A}^z + \sum_{i_B} s_{i_B}^z \right) + \lambda {\rm sin}(\beta) \sum_{i_C} s_{i_C}^z \nonumber \\ &+& \frac {\lambda}2 {\rm cos}(\alpha) \left\{ \sum_{i_A} (s_{i_A}^+ + s_{i_A}^-) + \sum_{i_B} (s_{i_B}^+ + s_{i_B}^-) \right\} \nonumber \\ &+& \frac {\lambda}2 {\rm cos}(\beta) \sum_{i_C} (s_{i_C}^+ + s_{i_C}^-) \; , \label{rotH3} \end{eqnarray} where the sum $\langle i_C \rightarrow i_{A,B} \rangle$ goes from sublattice $C$ to sublattices $A$ and $B$ (with directionality) and $\langle 
i_A , i_B \rangle$ goes over each bond connecting the $A$ and $B$ sublattices, but counting each one once only (and without directionality). We note that we have three sites in the unit cell for all of the model states used for the triangular-lattice antiferromagnet. The Hamiltonian for model state II, Fig.~\ref{model_states}, is a limiting case of Eqs. (\ref{rotH2}) and (\ref{rotH3}). The Hamiltonian for the non-coplanar model state IV after rotation of the local spin axes is given by: \begin{eqnarray} H &=& \sum_{\langle i \rightarrow j \rangle} \biggl \{ (\sin^2 (\theta) - \frac 12 \cos^2 (\theta)) s_{i}^z s_{j}^z \nonumber \\ & & ~~~~~~~ + \frac 14 \left( \frac 12 \sin^2 (\theta) - \cos^2 (\theta) - \frac 12 \right) (s_{i}^+ s_{j}^+ + s_{i}^- s_{j}^-) \nonumber \\ & & ~~~~~~~ + \frac 14 \left(\cos^2 (\theta) - \frac 12 \sin^2 (\theta) - \frac 12 \right) (s_{i}^+ s_{j}^- + s_{i}^- s_{j}^+) \nonumber \\ & & ~~~~~~~ + \frac {\sqrt{3}}4 {\rm cos}(\theta) \big(s_{i}^z \{ s_{j}^+ + s_{j}^- \} - \{ s_{i}^+ + s_{i}^- \} s_{j}^z\big) \biggr \} \nonumber \\ &+& \lambda \sum_{i} {\rm sin}(\theta) s_{i}^z \nonumber \\ &+& {\rm i} \sum_{\langle i \rightarrow j \rangle} \biggl \{ \frac {\sqrt{3}}4 {\rm sin}(\theta) (s_{i}^+ s_{j}^- - s_{i}^- s_{j}^+) \nonumber \\ & & ~~~~~~~ + \frac 34 {\rm sin}(\theta) {\rm cos}(\theta) \big(s_{i}^z \{ s_{j}^+ - s_{j}^- \} + \{ s_{i}^+ - s_{i}^- \} s_{j}^z\big) \biggr \} \nonumber \\ & & \nonumber \\ &+& {\rm i} \frac {\lambda}2 \sum_{i} {\rm cos}(\theta) (s_{i}^+ - s_{i}^-) \;, \label{rotH4} \end{eqnarray} where $\langle i \rightarrow j \rangle$ are those ``directional'' nearest-neighbor bonds on the triangular lattice going from the $A$ sublattice to the $B$ sublattice, the $B$ sublattice to the $C$ sublattice, and the $C$ sublattice to the $A$ sublattice (in those directions only and not reversed). We see that this Hamiltonian now contains ``real and imaginary components''. 
Henceforth, we shall take the expression ``real and imaginary components'' to mean that the rotated Hamiltonian contains explicit factors involving the imaginary number $i \equiv \sqrt{-1}$ and other explicit factors that do not involve this imaginary number. We note that $\theta$ is the angle that the spins make to the plane perpendicular to the applied external field. Full details of the rotations used in the derivation of this Hamiltonian are presented in Appendix B. The manner in which the CCM bra- and ket-state equations may be solved when correlation coefficients are allowed to be complex is discussed in the Appendix, although we note that the problem essentially reduces to a doubling of the number of CCM equations to be solved (i.e., for the real and imaginary components separately). We use the LSUB$m$ approximation (in which clusters of $m$ contiguous sites limited to $m$ spin flips are included in $S$ and $\tilde S$), and we treat the canting angles as free parameters in the CCM calculation. These angles are found by direct minimization of the CCM ground-state energy, carried out computationally at a given level of LSUB$m$ approximation for each fixed value of $\lambda$. Only one such angle was needed for model states I and IV, whereas two such angles were needed for model state III. Values of $\lambda$ were varied incrementally and the minimization of the energy with respect to the canting angles repeated. The CCM calculations are costly in terms of computing time because we needed to minimize the ground-state energy with respect to these angles at each value of $\lambda$, and because we have twice as many equations to solve for the 3D model state (for the real and imaginary components separately) ``as normal'' for the CCM. 
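The doubling of equations for complex correlation coefficients can be illustrated on a toy problem (a hypothetical sketch, not the actual CCM equations): a single complex equation $f(z)=0$ is solved as the pair of real equations $({\rm Re}\,f, {\rm Im}\,f)=(0,0)$ in the real unknowns $(x,y)$, with $z = x + {\rm i}y$.

```python
import numpy as np

def solve_complex_as_real(f, z0, tol=1e-12, max_iter=50):
    """Solve f(z) = 0 by Newton's method on the doubled real system
    (Re f, Im f) = (0, 0) in the unknowns (x, y), z = x + i y."""
    v = np.array([z0.real, z0.imag], dtype=float)
    h = 1e-7
    for _ in range(max_iter):
        z = v[0] + 1j * v[1]
        F = np.array([f(z).real, f(z).imag])   # doubled residual vector
        if np.linalg.norm(F) < tol:
            break
        J = np.empty((2, 2))                   # 2x2 real Jacobian by finite differences
        for j in range(2):
            w = v.copy()
            w[j] += h
            Fp = np.array([f(w[0] + 1j * w[1]).real, f(w[0] + 1j * w[1]).imag])
            J[:, j] = (Fp - F) / h
        v = v - np.linalg.solve(J, F)
    return v[0] + 1j * v[1]

# Toy equation: z^2 = 1 + 2i, started from z0 = 1 + i.
root = solve_complex_as_real(lambda z: z**2 - (1 + 2j), 1 + 1j)
print(abs(root**2 - (1 + 2j)) < 1e-8)  # True
```

The CCM case is the same idea at scale: every complex correlation coefficient contributes two real unknowns, hence twice as many coupled equations.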
For this reason, only results up to the LSUB6 level of approximation are quoted in these initial tests, although we find that the ground-state energies are highly converged for all model states even at this relatively low level of approximation. In order to investigate the magnetization process in antiferromagnets, we consider the total lattice magnetization $M$ along the direction of the magnetic field. The important Hellmann-Feynman theorem is obeyed by the CCM, and so we may obtain the lattice magnetization by evaluating $M = -\partial (E_g/N) / \partial \lambda$, which is carried out computationally in this paper. The method by which other expectation values may be found is also discussed in the Appendix. \section{Results} As mentioned above, we present CCM results for model states I, II, III, and IV shown in Fig.~\ref{model_states}. The computational effort of the CCM calculations presented here for the non-coplanar model state IV to very high orders is very great, and so only results up to LSUB6 are presented here for this model state in these initial studies. Results for the coplanar model states I to III from Ref. \cite{farnell} up to LSUB6 are also presented here for purposes of comparison, although we note that higher orders of approximation than LSUB6 were carried out in Ref. \cite{farnell} for these states. The results for the ground-state energy are shown in Fig.~\ref{triangle_energies}. We note that only the lowest-energy results among the coplanar model states I to III from Ref. \cite{farnell} are shown as a function of $\lambda$ in Fig.~\ref{triangle_energies}. Thus, results of model state I only are presented for small values of the applied magnetic field strength $\lambda$, and results of model state III only are presented for higher values of $\lambda$ near to $\lambda_s$. The results of both model states coincide in the intermediate regime. 
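The numerical evaluation of $M = -\partial (E_g/N)/\partial \lambda$ via the Hellmann-Feynman theorem amounts to a finite-difference differentiation of the computed energy curve. A minimal sketch (using a synthetic quadratic energy curve chosen so that the classical line $M=\lambda/9$ is recovered, in place of actual CCM LSUB$m$ data):

```python
import numpy as np

# Synthetic stand-in for a computed energy-per-site curve E(lambda); real data
# would come from the CCM LSUBm calculations.  The quadratic below is chosen
# so that the classical line M = lambda/9 is recovered.
lam = np.linspace(0.0, 3.0, 301)
E = -0.5 - lam**2 / 18.0

# Lattice magnetization via the Hellmann-Feynman theorem,
# M = -d(E_g/N)/d(lambda), by centered finite differences.
M = -np.gradient(E, lam)

print(np.allclose(M, lam / 9.0, atol=1e-3))  # True
```

Any quantity obtained this way inherits the convergence (and real-valuedness) of the underlying energy curve.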
Again, these LSUB$m$ series of results are found to converge rapidly with increasing levels of LSUB$m$ approximation over all values of the external field parameter $\lambda$. Results for model state IV are also shown. Firstly, we note that the imaginary component of the ground-state energy is found to sum to zero, as expected and required. Furthermore, the results for the ground-state energies for model state IV are much higher in value than their coplanar counterparts at identical levels of LSUB$m$ approximation. Note that LSUB6 was the highest level of approximation possible for this model state in these initial tests, and so we limit all presented results (for all model states) to this level of approximation in order to allow a direct and unbiased comparison. However, we see that results for the ground-state energy are highly converged even at the LSUB6 level of approximation (by comparing results of LSUB6 to LSUB5 and LSUB4 in Fig.~\ref{triangle_energies}), and so these results are adequate to establish that the ground-state energies are indeed lower for the coplanar case. As noted previously \cite{kawamura,chub,zhito,cabra}, these results provide clear and strong evidence that the ground state of this system is coplanar. This case has therefore been an excellent first test of the new CCM based on a non-coplanar ``three-dimensional'' model state. \begin{figure} \epsfxsize=13cm \centerline{\epsffile{energies.eps}} \caption{ The ground-state energy per site, $E_g/N$. Energies for the coplanar states I-III from Ref. \cite{farnell} are lower than those of the non-coplanar state IV for all $\lambda$. } \label{triangle_energies} \end{figure} The results for the total lattice magnetization for model states I to III from Ref. \cite{farnell} are shown in Fig.~\ref{triangle_magnetization}. The LSUB$m$ results are again seen to converge rapidly for increasing $m$. However, there is a radical departure from the classical straight-line behavior (i.e. 
$M_{\rm{Classical}}=\frac{1}{9}\lambda$) in this case. These previous CCM results for the coplanar states accurately detect the plateau in the $M$ versus $\lambda$ curve at $M=\frac 16$. Indeed, these previous results of the CCM \cite{farnell} for the coplanar model states carried out to the LSUB8 level of approximation indicate that the width of this plateau is given by $1.37 \lesssim \lambda \lesssim 2.15$. (Note that the plateau corresponds to the ``straight'' part of the $E_g(\lambda)$ curve shown in Fig.~\ref{triangle_energies}.) By contrast, the new results from model state IV show no such spin plateau at any level of LSUB$m$ attempted (e.g., LSUB4, LSUB5 and LSUB6 shown in Fig.~\ref{triangle_magnetization}). Note that this plateau has been observed for this system by other approximate methods and in experiment \cite{shirata}. We note again that the lattice magnetization must be real-valued because we find values for this quantity by taking the first derivative of the ground-state energy, which itself is found to be real-valued (and not complex) for all values of $\lambda$. All of these results are excellent corroborating evidence that the ground state is not of the type shown by model state IV (i.e., non-coplanar), but is rather of the coplanar type shown by model states I to III. Again, this is an excellent first check of the method for 3D model states. \begin{figure} \epsfxsize=13cm \centerline{\epsffile{lattice_mag.eps}} \caption{The lattice magnetization, $M$. Results for model state IV do not detect the spin plateau at $M=1/6$.} \label{triangle_magnetization} \end{figure} \section{Conclusions} In this article we described how the coupled cluster method (CCM) may be applied to study the behavior of quantum magnets with the aid of non-coplanar ``3D'' model states. A (slightly) modified method for solving the CCM equations by ``direct iteration'' with complex CCM correlation coefficients is presented in an Appendix. 
We employed new computer code to find the ground-state energy and the total lattice magnetization of the spin-half triangular-lattice antiferromagnet in the presence of external magnetic fields for both coplanar and non-coplanar model states (which are the classical ground states). It was found that the ground-state energy was real-valued for the non-coplanar model state, as expected and required. Results up to LSUB6 were possible only in these initial tests for the non-coplanar model state due to the increased computational complexity of the problem. However, these results were clearly highly converged even at this level of approximation. In agreement with previous results of other methods, the coplanar states were found to have lower energy. Furthermore, the spin plateau at $M=\frac 16$ known to occur in this system was detected by the coplanar model states, although it was not detected by the non-coplanar model state at any level of approximation. We conclude that the spin-half triangular-lattice antiferromagnet in an external magnetic field has provided an excellent first test of the new CCM based on a non-coplanar ``three-dimensional'' model state.\\ \pagebreak
\section{Introduction} In this paper, we mainly consider the linear system \begin{equation}\label{eq:1.1} Ax=b, \end{equation} where the matrix $A$ is a complex symmetric matrix. Linear systems of the kind (\ref{eq:1.1}) arise in various physical applications, for example, in modeling electromagnetic waves under the assumption of time-harmonic variation in the electromagnetic fields, and in Helmholtz equations with a complex shift (see, e.g., \cite{DLHD,YAE,MFA}). Moreover, the complex-valued linear system (\ref{eq:1.1}) can be directly generated in the field of lattice quantum chromodynamics (QCD), where a model of the interactions of fermions (or quarks) on a lattice is given in terms of a complex-valued gauge field that directly leads to the linear system (\ref{eq:1.1}) (see \cite{JSI,JBM}). In addition, this complex symmetric linear system also arises in the centered-difference discretization of the $R_{22}$-Pad\'{e} approximations in the time integration of parabolic partial differential equations (\cite{OAK}) and in direct frequency domain analysis of an $n$-degree-of-freedom ($n$-DOF) linear system (\cite{AFF}). Further examples of scientific applications are given in \cite{ZZB}. Therefore, research on the numerical solution of the linear system (\ref{eq:1.1}) is greatly needed. Next, for convenience, let $M_{n}(\mathbb{C})$ denote the set of $n\times n$ complex matrices and let $A$ be a nonsingular matrix in $M_{n}(\mathbb{C})$; $\lambda_{1}$ and $\lambda_{n}$ are the largest and smallest eigenvalues of $A^{\ast} A$, respectively. Recently, a \emph{complex symmetric positive definite} (CSPD) matrix arising from the linear system (\ref{eq:1.1}) in Gaussian elimination without pivoting was first studied by Higham in \cite{HN}; such matrices are called \emph{\textbf{Higham matrices}} in \cite{A.K}. 
Subsequently, the paper \cite{A.K} introduced a broader class of complex matrices---\emph{\textbf{generalized Higham matrices}} (sometimes also called accretive-dissipative matrices (see \cite{ML})); i.e., for any $A\in M_{n}(\mathbb{C})$, if its Hermitian decomposition\footnote{It is also called the Toeplitz decomposition (see \cite{ML}).} (see \cite{NJ}) $$ A=B+\mathrm{i}C $$ satisfies $$ B=B^{\ast}>0, ~~~~ C=C^{\ast}>0, $$ where $B^{\ast}$ is the conjugate transpose of $B$, then the matrix $A$ is said to be a generalized Higham matrix, denoted by $A \in M_{n}^{++}(\mathbb{C})$. Here, $\geq$ denotes the Loewner partial order of Hermitian matrices; i.e., we write $B\geq C$ if the matrix $B-C$ is positive semidefinite, and similarly $B>C$ means that $B-C$ is positive definite. In addition, a related class of matrices defined by $$ A=B+\mathrm{i}C, ~~~~B=B^{\ast}>0,~~~~C=C^{\ast}<0, $$ will be denoted by $M_{n}^{+-}(\mathbb{C})$ as in \cite{A.K}. This paper is a continuation of \cite{A.K}, and both originate from Higham's paper \cite{HN}. As is well known, for the complex symmetric linear system (\ref{eq:1.1}), the growth factor $\rho_{n}(A)$ in Gaussian elimination for $A$ is defined by \begin{equation}\label{eq:1.2} \rho_{n}(A)=\frac{\max_{i,j,k}|a^{(k)}_{ij}|}{\max_{i,j}|a_{ij}|}, \end{equation} where $A\triangleq(a_{ij})$, $A^{(k)}\triangleq(a_{ij}^{(k)})$, and $A^{(k)}$ is the matrix obtained through the application of the first $k$ steps of Gaussian elimination to $A$ (see, e.g., \cite{A.K,GW}). In particular, $A^{(n-1)}$ is the upper triangular matrix resulting from the $LU$ factorization of $A$. 
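The growth factor (\ref{eq:1.2}) is straightforward to measure numerically by running Gaussian elimination without pivoting and recording the largest entry of every intermediate matrix. The sketch below is illustrative only; the random construction of $B$ and $C$ is one arbitrary way to generate Higham matrices, and the observed values can be compared against the bounds discussed in this paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def growth_factor(A):
    """Growth factor of Gaussian elimination without pivoting, Eq. (1.2):
    the largest entry over all intermediate matrices A^(k), divided by
    the largest entry of A itself."""
    denom = np.abs(A).max()
    W = A.astype(complex)          # astype copies, so A itself is untouched
    biggest = denom
    n = W.shape[0]
    for k in range(n - 1):
        # Schur-complement update of the active trailing submatrix.
        W[k+1:, k+1:] = W[k+1:, k+1:] - np.outer(W[k+1:, k], W[k, k+1:]) / W[k, k]
        biggest = max(biggest, np.abs(W[k+1:, k+1:]).max())
    return biggest / denom

def random_higham(n):
    """One arbitrary way to generate a Higham matrix A = B + iC with
    B, C real symmetric positive definite (illustrative only)."""
    X = rng.standard_normal((n, n))
    Y = rng.standard_normal((n, n))
    B = X @ X.T + n * np.eye(n)
    C = Y @ Y.T + n * np.eye(n)
    return B + 1j * C

rhos = [growth_factor(random_higham(20)) for _ in range(50)]
print(max(rhos))  # observed growth, to be compared with the bounds discussed here
```

For such samples the observed growth stays well within the proven bound $\rho_n(A) < 3$ for Higham matrices.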
Obviously, for the matrix $A$ in the linear system (\ref{eq:1.1}), if one is able to prove a satisfactory a priori bound for $\rho_{n}(A)$, then it is safe not to pivot in computing the $LU$ factorization of the matrix $A$ (or to choose diagonal pivots based on other considerations such as sparsity preservation) (see \cite{A.K,NJ}). For any Higham matrix $A$, the bound on the growth factor in Gaussian elimination \begin{equation}\label{eq:1.4} {\rho _n}(A) \le 2 \end{equation} was first conjectured by Higham (see \cite{NJ}, p.~210). An incorrect proof was given in \cite{HN}, but Ikramov et al. \cite{A.K} subsequently showed that \begin{equation}\label{eq:1.5} {\rho _n}\left( A \right) < 3 \end{equation} for any Higham matrix $A$. In addition, if the class of Higham matrices is extended by allowing $B$ and $C$ to be arbitrary Hermitian positive definite matrices, then \begin{equation} \label{eq:1.6} {\rho _n}\left( A \right) < 3\sqrt{2}. \end{equation} Moreover, Ikramov et al. noted that the above bound (\ref{eq:1.6}) remains true when $B$ or $C$ or both are negative (rather than positive) definite (see \cite{A.K}). For a very restricted subset of Higham matrices, i.e., when $B=I_{n}$ and $C$ is real, symmetric and positive definite, the authors in \cite{I.K} proved the better bound $${\rho _n}(A) \le \frac{{1 + \sqrt {17} }}{4} \approx 1.28078 \cdots. $$ In addition, A. George and K. D. Ikramov (\cite{AK}) assumed that $B$ and $C$, both positive definite, satisfy the inequality \[ C \le \alpha B,\;\;\;\alpha \ge 0, \] and established a bound for the growth factor ${\rho _n}(A)$ that has the limit $1$ as $\alpha \to 0$. Recently, Lin \cite{ML13} proved that if $A$ is a generalized Higham matrix, then the growth factor for such $A$ in Gaussian elimination is less than 4; in particular, when $A$ is a Higham matrix, the growth factor is less than $2\sqrt{2}$. 
However, as the authors of \cite{A.K} pointed out, in extensive numerical experiments with Higham matrices they never observed growth bigger than Higham's conjectured bound of $2$. Therefore, they believed that the bound (\ref{eq:1.4}) is correct and regarded the proof of this bound as an open problem (see also Problem 10.12 in \cite{NJ}). In this work, we continue studying this open problem and give a new result, $$ 0\leq\frac{{4\kappa }}{{{{\left( {1 + \kappa } \right)}^2}}} \le {\rho _n}\left( A \right) \le \frac{{2\left( {1 + {\kappa ^2}} \right)}}{{{{\left( {1 + \kappa } \right)}^2}}} \leq 2, $$ for any generalized Higham matrix $A$, where $\kappa\in [1,+\infty)$ is the maximum of the condition numbers of $B$ and $C$. This directly yields Higham's bound (\ref{eq:1.4}) for any Higham matrix $A$, which settles the open problem. Here, for a nonsingular matrix $A$, its condition number is denoted by $\kappa (A)\triangleq \sqrt {\frac{{{\lambda _{\max }}\left( {{A^ * }A} \right)}}{{{\lambda _{\min }}\left( {{A^ * }A} \right)}}} $, i.e., the ratio of the largest and smallest singular values of $A$. This paper is organized as follows. In Section 2, we prove new bounds on the growth factor in terms of the condition number. In Section 3, some figures and numerical examples are given to illustrate our results. \setcounter{equation}{0} \renewcommand{\theequation}{2.\arabic{equation}} \section{Main results}\label{sec2} In this section, let $A\in M_{n}(\mathbb{C})$ be an $n\times n$ nonsingular matrix partitioned as \begin{equation}\label{eq:1.3} A \triangleq\left( {\begin{array}{*{20}{c}} {{A_{11}}}&{{A_{12}}}\\ {{A_{21}}}&{{A_{22}}} \end{array}} \right) = \left( {\begin{array}{*{20}{c}} {{B_{11}}}&{{B_{12}}}\\ {{B_{21}}}&{{B_{22}}} \end{array}} \right) + \mathrm{i}\left( {\begin{array}{*{20}{c}} {{C_{11}}}&{{C_{12}}}\\ {{C_{21}}}&{{C_{22}}} \end{array}} \right). \end{equation} 
If $A_{11}$ is invertible, then the Schur complement of $A_{11}$ in $A$ is denoted by $A/{A_{11}} = {A_{22}}- {A_{21}}A_{11}^{-1}A_{12}$ (see \cite{W.Z}). \begin{lemma}(\cite{I.K}).\label{lem:2.1} Let $A$ be a CSPD matrix. Then $A$ is nonsingular, and any principal submatrix of $A$ and any Schur complement in $A$ are also CSPD matrices. \end{lemma} Obviously, Lemma \ref{lem:2.1} shows that being a CSPD matrix is a hereditary property of the active submatrices in Gaussian elimination. \begin{lemma}(\cite{I.K}).\label{lem:2.2} The largest element of a CSPD matrix $A$ lies on its main diagonal. \end{lemma} Thus, for any CSPD matrix $A$, the definition (\ref{eq:1.2}) can be replaced by $$ {\rho _n}\left( A \right) = \frac{{{{\max }_{j,k}}|a_{jj}^{(k)}|}}{{{{\max }_j}|{a_{jj}}|}}, $$ which greatly simplifies the analysis of bounds on the growth factor for a CSPD matrix $A$. \begin{lemma}(\cite{H.J}).\label{lem:31} If $B$ is a nonzero $n\times n$ positive definite matrix having eigenvalues $\lambda_{1}\geq\lambda_{2}\geq \cdots \geq \lambda_{n}$, then for all orthogonal vectors $x,y\in \mathbb{C}^{n}$ (where $x^{\ast}$ denotes the conjugate transpose of $x$), the following inequality holds: \begin{equation}\label{eq:3.1} |{x^ * }By{|^2} \le {\left( {\frac{{{\lambda _1} - {\lambda _n}}}{{{\lambda _1} + {\lambda _n}}}} \right)^2}\left( {{x^ * }Bx} \right)\left( {{y^ * }By} \right). \end{equation} \end{lemma} \begin{lemma}(\cite{FZ}).\label{Lem:3.1} Let $B$ be as in Lemma \ref{lem:31}. Then for any $n\times p$ matrix $X$ satisfying ${X^ *}X = {I_p}$, where $X^{\ast}$ is the conjugate transpose of the matrix $X$, we have \begin{equation}\label{eq:3.2} {X^ * }{B^{ - 1}}X \le \frac{{{{\left( {{\lambda _1} + {\lambda _n}} \right)}^2}}}{{4{\lambda _1}{\lambda _n}}}{\left( {{X^ * }{B}X} \right)^{ - 1}}. \end{equation} \end{lemma} By Lemma \ref{Lem:3.1}, it is easy to obtain the following lemma. 
\begin{lemma}(\cite{FZ}).\label{lem:3.1} Let ${B} = \left( {\begin{array}{*{20}{c}} {B_{11}}&{B_{12}} \\ {B_{21}}&{B_{22}} \end{array}} \right)$ be an $n\times n$ Hermitian positive definite matrix, where $B_{22}$ is any $k\times k$ principal submatrix of $B$ ($k>0$). Then \begin{equation}\label{eq:3.3} {B_{21}}B_{11}^{-1}B_{12}\le \left({\frac{{{1 - \kappa ({B})}}}{{{1 + \kappa ({B})}}}} \right)^2{B_{22}}, \end{equation} where $\kappa ({B})$ is the condition number of ${B}$. \end{lemma} \begin{theorem}(\cite{H.J}).\label{th:3.2} Let $B$ be a Hermitian positive definite matrix. Then $\lambda_{n-t+i}(B)\leq\lambda_{i}(B_{t})\leq\lambda_{i}(B)$ ($i=1,2,\cdots,t$), where $B_{t}=B(i_{1},\cdots,i_{t})$ is a $t\times t$ principal submatrix of $B$. \end{theorem} \begin{corollary}(\cite{Wang}).\label{cor:3.2} Let $B$ be a Hermitian positive definite matrix, partitioned as in Lemma \ref{lem:3.1}. Then $\kappa(B)>\kappa(B_{11})$. \end{corollary} \begin{lemma}(\cite{ML}). Let $A=B+\mathrm{i}C$, $B=B^{*}$, $C=C^{*}$, be partitioned as in (\ref{eq:1.3}). If $B_{11}$, $C_{11}$ are invertible, then \begin{equation}\label{eq:3.4} A/A_{11}=B/B_{11}+\mathrm{i}(C/C_{11})+X(B_{11}^{-1}-\mathrm{i}C_{11}^{-1})^{-1}X^{*}, \end{equation} where $X=B_{21}B_{11}^{-1}-C_{21}C_{11}^{-1}$. \end{lemma} \begin{corollary}(\cite{ML}).\label{cor:3.3} Let $A=B+\mathrm{i}C$, $B=B^{*}$, $C=C^{*}$ be a generalized Higham matrix partitioned as in (\ref{eq:1.3}). If $A/A_{11}=R+\mathrm{i}S$ is its Hermitian decomposition, then $R\geq B/B_{11}$ and $S\geq C/C_{11}.$ \end{corollary} Next, we give our main result. \begin{theorem}\label{th:3.3} Let $A$ be a generalized Higham matrix. Then \begin{equation}\label{eq:3.5} \frac{{4\kappa }}{{{{\left( {1 + \kappa } \right)}^2}}} \le {\rho _n}\left( A \right) \le \frac{{2\left( {1 + {\kappa ^2}} \right)}}{{{{\left( {1 + \kappa } \right)}^2}}}, \end{equation} where $\kappa$ is the maximum of the condition numbers of $B$ and $C$. \end{theorem} \textbf{Proof}. 
Fix the number $k\in\{1,2,\cdots,n-1\}$ and $j$, where $j\geq k+1$. Denote by $A_{k}$ the leading principal submatrix of order $k$ in $A$. We consider the $(k+1)\times (k+1)$ matrix $$ A_{kj} = \left( {\begin{array}{*{20}c} {A_{k}} & {\alpha} \\ {\beta^{T}} & {a_{jj}} \\ \end{array}} \right)=B_{kj}+\mathrm{i}C_{kj}, $$ where $$ \alpha^{T} = \left( {\begin{array}{*{20}c} {a_{1j}},a_{2j},\cdots,a_{kj} \\ \end{array}} \right)~~ \mathrm{and}~~ \beta^{T} = \left( {\begin{array}{*{20}c} {a_{j1}},a_{j2},\cdots,a_{jk} \\ \end{array}} \right), $$ $$ B_{kj} = \left( {\begin{array}{*{20}c} {B_{k}} & {b} \\ {b^{*}} & {b_{jj}} \\ \end{array}} \right)~~ \mathrm{and}~~ C_{kj} = \left( {\begin{array}{*{20}c} {C_{k}} & {c} \\ {c^{*}} & {c_{jj}} \\ \end{array}} \right). $$ Note that $A_{kj}$, $B_{kj}$ and $C_{kj}$ are principal submatrices of order $k+1$ of $A$, $B$ and $C$, respectively. It is easy to see that $a^{(k)}_{jj}$ can be obtained by performing block Gaussian elimination in $A_{kj}$; i.e., $$ a_{jj}^{(k)} = {a_{jj}} - {\beta^T}A_k^{ - 1}\alpha. $$ Write $a_{jj}^{(k)} = \mu + \mathrm{i}\nu$, with $\mu ,\nu \in \mathbb{R}$. Since both $B_{kj}$ and $C_{kj}$ are Hermitian positive definite, according to Lemma \ref{lem:3.1} we have $$ {b ^*}B_k^{- 1}b \le \left( {\frac{1-\kappa(B_{kj})}{1+\kappa(B_{kj})}} \right)^2b_{jj}~~\mathrm{ and}~~{c ^*}C_k^{ - 1}c \le \left( {\frac{1-\kappa(C_{kj})}{1+\kappa(C_{kj})}} \right)^2c_{jj}. 
$$ Next, by Corollary \ref{cor:3.3} and the fact that $f(x)=\frac{4x}{(1+x)^{2}}$ is decreasing in $x\in[1,+\infty)$, we get \begin{equation}\label{eq:3.6} \begin{array}{lll} |a_{jj}^{(k)}|=|\mu+\mathrm{i}\nu|&\geq&|B_{kj}/B_{k}+\mathrm{i}C_{kj}/C_{k}|\\ &=&|(b_{jj}-b^{*}B_{k}^{-1}b)+\mathrm{i}(c_{jj}-c^{*}C_{k}^{-1}c)|\\ &\geq& |\frac{4\kappa(B_{kj})}{(1+\kappa(B_{kj}))^{2}}b_{jj}+\mathrm{i}\frac{4\kappa(C_{kj})}{(1+\kappa(C_{kj}))^{2}}c_{jj}|\\ &\geq&\frac{4\kappa_{kj}}{(1+\kappa_{kj})^{2}}|b_{jj}+\mathrm{i}c_{jj}|\\ &=&\frac{4\kappa_{kj}}{(1+\kappa_{kj})^{2}}|{a_{jj}}|, \end{array} \end{equation} where $\kappa_{kj}=\max\{\kappa(B_{kj}), \kappa(C_{kj})\}$. Since $a_{jj}^{(k)} = {a_{jj}} - {\beta^T}A_k^{ - 1}\alpha=b_{jj}+\mathrm{i}c_{jj}-\beta^{T}A^{-1}_{k}\alpha$, $B_{kj}/B_{k}=b_{jj}-b^{\ast}B_{k}^{-1}b$ and $C_{kj}/C_{k}=c_{jj}-c^{\ast}C_{k}^{-1}c$, by Corollary \ref{cor:3.3} we have $$b_{jj}-Re(\beta^{T}A^{-1}_{k}\alpha)\geq b_{jj}-b^{\ast}B_{k}^{-1}b, ~~~~ c_{jj}-Im(\beta^{T}A^{-1}_{k}\alpha)\geq c_{jj}-c^{\ast}C_{k}^{-1}c.$$ Noting that $g(x)=(\frac{1-x}{1+x})^{2}$ is increasing in $x\in[1,+\infty)$, one has $$ Re(\beta^{T}A_{k}^{-1}\alpha)\leq b^{*}B^{-1}_{k}b\leq \left( {\frac{1-\kappa(B_{kj})}{1+\kappa(B_{kj})}} \right)^2b_{jj} $$ and $$ Im(\beta^{T}A_{k}^{-1}\alpha)\leq c^{*}C^{-1}_{k}c\leq \left( {\frac{1-\kappa(C_{kj})}{1+\kappa(C_{kj})}} \right)^2c_{jj}, $$ and hence $$ |\beta^{T}A_{k}^{-1}\alpha|\leq \left( {\frac{1-\kappa_{kj}}{1+\kappa_{kj}}} \right)^2|a_{jj}|. $$ Thus, we obtain \begin{equation}\label{eq:3.7} \begin{array}{lll} |a_{jj}^{(k)}|&=& |{a_{jj}} - {\beta^T}A_k^{ - 1}\alpha|\\ &\leq& |{a_{jj}}| + |{\beta^T}A_k^{ - 1}\alpha|\\ &\leq& |{a_{jj}}| +\frac{(1-\kappa_{kj})^{2}}{(1+\kappa_{kj})^{2}}|a_{jj}|\\ &=&\frac{2(1+\kappa_{kj}^{2})}{(1+\kappa_{kj})^{2}}|{a_{jj}}|. 
\end{array} \end{equation} According to the above inequalities (\ref{eq:3.6}) and (\ref{eq:3.7}), the following inequality is immediate: $$ \frac{4\kappa_{kj}}{(1+\kappa_{kj})^{2}}\leq\rho_{n}(A)\leq\frac{2(1+\kappa_{kj}^{2})}{(1+\kappa_{kj})^{2}}. $$ Note that $f\left( x \right) = \frac{{4x}}{{{{\left( {1+ x} \right)}^2}}}$ is decreasing in $x\in[1,+\infty)$ and $g\left( x \right) = \frac{{2\left( {1 + {x^2}} \right)}}{{{{\left( {1 + x} \right)}^2}}}$ is increasing in $x\in[1,+\infty)$ (see Fig. 1). By Corollary \ref{cor:3.2} we have $\kappa \geq \kappa_{kj}$, and hence \begin{equation}\label{eq:3.8} \frac{{4\kappa }}{{{{\left( {1 + \kappa } \right)}^2}}} \le {\rho _n}\left( A \right) \le \frac{{2\left( {1 + {\kappa ^2}} \right)}}{{{{\left( {1 + \kappa } \right)}^2}}}. \end{equation} The proof is completed. $\Box$ \begin{figure}[htbp]\label{fig1} \includegraphics[width=2.45in,height=2.7in]{paper1.eps} \includegraphics[width=2.45in,height=2.7in]{cond2.eps} \caption{\small Left: the curve of $f(\kappa)=\frac{4\kappa}{(1+\kappa)^2}$, which decreases with $\kappa$; Right: the curve of $g(\kappa)=\frac{2(1+\kappa ^2)}{(1+\kappa )^2}$, which increases with $\kappa$.} \end{figure} \begin{corollary}\label{cor:3:4} If $A$ is an $n\times n$ generalized Higham matrix, then \begin{equation}\label{eq:38} 0\leq\rho_{n}(A)\leq 2. \end{equation} \end{corollary} \textbf{Proof}. Since $\frac{{4\kappa }}{{{{\left({1 + \kappa } \right)}^2}}} \le {\rho _n}\left( A \right) \le \frac{{2\left( {1 + {\kappa ^2}} \right)}}{{{{\left( {1 + \kappa } \right)}^2}}}$, taking the limits of both bounds as $\kappa \to \infty$ gives $$ \mathop {\lim }\limits_{\kappa \to \infty } \frac{{4\kappa }}{{{{\left( {1 + \kappa } \right)}^2}}} = 0~~ \mathrm{and}~~ \mathop {\lim }\limits_{\kappa \to \infty } \frac{{2(1 + {\kappa ^2})}}{{{{\left( {1 + \kappa } \right)}^2}}} = 2. $$ Therefore, the result (\ref{eq:38}) holds. $\Box$ \textbf{Remark 2.1}. 
Obviously, the above result (\ref{eq:38}) also holds for any Higham matrix, and hence Higham's conjecture (see (\ref{eq:1.4})) is correct, which solves this open problem. In addition, since $\kappa\geq 1>0$, we have $(\kappa+1)^{2}=\kappa^{2}+2\kappa+1>\kappa^{2}+1$, i.e., $\frac{2(\kappa^{2}+1)}{(\kappa+1)^{2}}<2$. Thus, in fact, $\rho_{n}(A)< 2$; see the following numerical experiment. \setcounter{equation}{0} \renewcommand{\theequation}{3.\arabic{equation}} \section{Numerical experiments}\label{sec3} In this section, a numerical example is described. The goal of the experiment is to examine the effectiveness of our result. We consider the complex symmetric system of linear equations (\ref{eq:1.1}) arising in the centered-difference discretization of the $R_{22}$-Pad\'{e} approximations in the time integration of parabolic partial differential equations; for further details see \cite{ZZB}. For convenience, the complex symmetric coefficient matrix (see \cite{OAK}) may be written as $$ A=(K+\frac{3-\sqrt{3}}{\tau}I)+\mathrm{i}(K+\frac{3+\sqrt{3}}{\tau}I),\ \mathrm{i}=\sqrt{-1}, $$ where $I$ is the identity matrix, $\tau$ is the time step-size, and $K$ is the five-point centered-difference matrix approximating the negative Laplacian operator $L=-\Delta$ with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square $[0,1]\times[0,1]$ with mesh-size $h=\frac{1}{m+1}$. In our tests, we take $\tau= h$. The matrix $K\in\mathbb{R}^{n\times n}$ possesses the tensor-product form $K=I\otimes V_{m}+V_{m}\otimes I$, with $V_{m}=h^{-2}\mathrm{tridiag}(-1,2,-1)\in\mathbb{R}^{m\times m}$. Hence, $K$ is an $n\times n$ block tridiagonal matrix, with $n=m^{2}$. 
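For moderate $m$, the construction of $K$ and the resulting upper bound $L = \frac{2(1+\kappa^2)}{(1+\kappa)^2}$ can be reproduced with a short script (a hypothetical Python analogue of the MATLAB computation below; for these small sizes the exact 2-norm condition number is used in place of \texttt{condest}):

```python
import numpy as np

def bound_L(m):
    """Upper bound g(kappa) = 2(1 + kappa^2)/(1 + kappa)^2 for the test matrix
    A = (K + (3 - sqrt(3))/tau I) + i (K + (3 + sqrt(3))/tau I), tau = h."""
    h = 1.0 / (m + 1)
    # V_m = tridiag(-1, 2, -1)/h^2 and K = I (x) V_m + V_m (x) I, n = m^2.
    V = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    K = np.kron(np.eye(m), V) + np.kron(V, np.eye(m))
    B = K + (3.0 - np.sqrt(3.0)) / h * np.eye(m * m)
    C = K + (3.0 + np.sqrt(3.0)) / h * np.eye(m * m)
    kappa = max(np.linalg.cond(B), np.linalg.cond(C))
    return 2.0 * (1.0 + kappa**2) / (1.0 + kappa)**2

print(bound_L(10))  # strictly below 2, approaching 2 as m grows
```

The bound stays strictly below $2$ for every $m$, consistent with Remark 2.1.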
Denote $$ B=K+\frac{3-\sqrt{3}}{\tau}I ~~\mathrm{and}~~ C=K+\frac{3+\sqrt{3}}{\tau}I, $$ and apply the MATLAB 2011b function \texttt{condest} to compute the condition numbers of $B$ and $C$: $$t_1=\mathtt{condest}(B), ~~t_2=\mathtt{condest}(C).$$ Let $\kappa=\max\{t_{1},t_{2}\}$ and $L=\frac{2(1+\kappa^{2})}{(1+\kappa)^{2}}$. The numerical results are reported in Table 1. \begin{table}[htbp] \begin{center} \begin{tabular*}{12.1cm}{p{45pt}p{70pt}p{70pt}p{70pt}p{55pt}} \multicolumn{5}{p{390pt}}{\small Table 1}\\ \multicolumn{5}{p{390pt}}{\small The change of the growth factor bound $L$ as the condition number increases.}\\ \hline Size ($m$)&$t_{1}$&$t_{2}$&$\kappa$&$L$\\ \hline 700& 4.4239e+003& 1.1861e+003& 4.4239e+003& 1.9991 \\ 800& 5.0548e+003& 1.3552e+003& 5.0548e+003& 1.9992\\ 900& 5.6858e+003& 1.5242e+003& 5.6858e+003& 1.9993 \\ 1000& 6.3167e+003 & 1.6933e+003& 6.3167e+003& 1.9994 \\ 1100& 6.9477e+003 & 1.8623e+003& 6.9477e+003& 1.9994 \\ 1200& 7.5786e+003 & 2.0314e+003& 7.5786e+003& 1.9995 \\ 1300& 8.2095e+003 & 2.2005e+003& 8.2095e+003& 1.9995 \\ 1400& 8.8405e+003 & 2.3695e+003& 8.8405e+003& 1.9995 \\ \hline \end{tabular*} \label{tab1} \end{center} \end{table} Obviously, the results of this experiment conform to our theoretical analysis.\\ \textbf{Acknowledgements}. \emph{The authors sincerely thank Prof. N.J. Higham for bringing \cite{NJ} to our attention and for friendly discussions on this topic, and Dr. M.-H. Lin (University of Waterloo), who offered useful materials on the manuscript, which led to a substantial improvement of this paper.} {\small
\section{Introduction} The construction of quantum field theory models is inherently related to the Lorentz invariant partition of classical fields into the positive and the negative frequency parts $$ u(x) = u^+(x) + u^-(x), \quad x \in \mathbb{R}^{1,3}. $$ The quantization procedure itself is then based on the commutators of the energy-momentum tensor components $T^{0\mu}$ with the positive and the negative frequency field operators $u^\pm(\bm{k}), \bm{k} \in \mathbb{R}^3$, with the field equations being used to eliminate the redundant component of the quantum fields. The calculation of the $n$-point Green functions $\langle u(x_1) \ldots u(x_n) \rangle$, the functional derivatives of the generating functional, is well known to suffer from loop divergences in both the UV and the IR domains of momentum space. A way to rule out the divergences is to separate the contributions of different scales, which can be formally cast in the form \cite{Altaisky2010PRD} $$ u(x) = \int u_a(x) \frac{da}{a}, $$ where the ``scale component'' $u_a(x)$ is not yet well defined. The best known way to separate the scales is the renormalization group technique \cite{GL1954,Wilson1973}; a less known one is the wavelet transform in quantum field theory \cite{AK2013}.
The consideration would be straightforward for Euclidean quantum field theory, where the projection of an arbitrary function $u(x) \in L^2(\mathbb{R}^d)$ onto the scale $a$ is given by the convolution \begin{equation} u_a(b) := \int_{\mathbb{R}^d} \frac{1}{a^d} \bar{g} \left(\frac{x-b}{a}\right) u(x) d^d x, \end{equation} so that the function $u(x)$ can be reconstructed from the set of its {\sl wavelet coefficients} $\{ u_a(b)\}$ by the {\sl inverse wavelet transform} \cite{Daub10} \begin{equation} u(x) = \frac{1}{C_g} \int_{\mathbb{R}_+ \times \mathbb{R}^d} \frac{1}{a^d} g\left(\frac{x-b}{a}\right) u_{a}(b) \frac{dad^db}{a} \label{iwt} \end{equation} The analyzing function $g(x)$, satisfying the rather loose admissibility condition $ \int \frac{|\tilde{g}(ak)|^2}{a} da = C_g < \infty, $ is usually referred to as a basic wavelet. The continuous wavelet transform (CWT) is a feasible alternative to the usual Fourier transform \begin{equation} u(x) = \int e^{\imath k x} \tilde{u}(k) \frac{d^dk}{(2\pi)^d} \label{FT} \end{equation} because field values are not accessible by any measurement exactly at a given point $x$: to localize a particle in an interval $\Delta x$ the measuring device requires a momentum transfer of order $\Delta p\!\sim\!\hbar/\Delta x$. If $\Delta x$ is too small, the field $u(x)$ at a fixed point $x$ has no experimentally verifiable meaning. At the same time, the establishing of canonical commutation relations between the field operators is essentially based on the Fourier transform \eqref{FT}. It is intuitively clear that the commutator $[u_{a_1}(b_1),u_{a_2}(b_2)]$ is a function of $\frac{a_1}{a_2}$, which vanishes if $\log \frac{a_1}{a_2}$ is significantly different from zero. This fact is well known in radiophysics: if a field (system) is localized in a region of size $a_1$ centered at the point $b_1$, it may be detected by another field with significant probability only when $a_1$ and $a_2$ are of the same order.
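As an added numerical illustration of the admissibility condition above (not from the original text), take the Mexican-hat wavelet with Fourier image $\tilde g(k)=k^2e^{-k^2/2}$ (the normalization is a convention): then $C_g=\int_0^\infty|\tilde g(ak)|^2\,da/a=1/2$ for every $k\neq 0$, confirming both finiteness and independence of $k$.

```python
import numpy as np

def g_hat(k):
    """Fourier image of the Mexican-hat wavelet (up to a normalization convention)."""
    return k**2 * np.exp(-k**2 / 2.0)

def admissibility_constant(k, n=40001):
    """C_g = int_0^inf |g_hat(a*k)|^2 da/a, computed in the variable u = ln(a),
    where the measure da/a becomes du (trapezoidal rule on a wide u-grid)."""
    u = np.linspace(-12.0, 12.0, n)
    vals = g_hat(np.exp(u) * k) ** 2
    h = u[1] - u[0]
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

# analytically: int_0^inf (ak)^4 exp(-(ak)^2) da/a = Gamma(2)/2 = 1/2 for any k != 0
```

Both $k=1$ and $k=3$ return the same value $1/2$, which is exactly the scale-independence that makes $g$ admissible.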
If the window width $a_2$ is too narrow or too wide in comparison with $a_1$, the probability of detection is low. In the remainder of this paper we present the derivation of the canonical commutation relations between the field operators describing a massive scalar field, which depend on both position and resolution in $\mathbb{R}^{1,3}$ Minkowski space. \section{Continuous wavelet transform} \subsection{Basics of the continuous wavelet transform} Let $\mathcal{H}$ be a Hilbert space of states for a quantum field $|\phi\rangle$. Let $G$ be a locally compact Lie group acting transitively on $\mathcal{H}$, with $d\mu(\nu),\nu\in G$ being a left-invariant measure on $G$. Then, similarly to the representation of a vector $|\phi\rangle$ in a Hilbert space of states $\mathcal{H}$ as a linear combination of the eigenvectors of the momentum operator, $ |\phi\rangle=\int |p\rangle dp \langle p |\phi\rangle,$ any $|\phi\rangle \in \mathcal{H}$ can be decomposed with respect to a representation $U(\nu)$ of $G$ in $\mathcal{H}$ \cite{Carey1976,DM1976}: \begin{equation} |\phi\rangle= \frac{1}{C_g}\int_G U(\nu)|g\rangle d\mu(\nu)\langle g|U^*(\nu)|\phi\rangle, \label{gwl} \end{equation} where $|g\rangle \in \mathcal{H}$ is referred to as an admissible vector, or {\em basic wavelet}, satisfying the admissibility condition $$ C_g = \frac{1}{\| g \|^2} \int_G |\langle g| U(\nu)|g \rangle |^2 d\mu(\nu) <\infty. $$ The coefficients $\langle g|U^*(\nu)|\phi\rangle$ are referred to as wavelet coefficients. If the group $G$ is abelian, the wavelet transform \eqref{gwl} with $G:x'=x+b'$ coincides with the Fourier transform. \subsection{Euclidean space} The next case after the abelian group is the group of affine transformations of the Euclidean space $\mathbb{R}^d$ \begin{align}\nonumber G: x' = a R(\theta)x + b, \\ x,b \in \mathbb{R}^d, a \in \mathbb{R}_+, \theta \in SO(d), \label{ag1} \end{align} where $R(\theta)$ is the rotation matrix.
We define the unitary representation of the affine transform \eqref{ag1} with respect to the basic wavelet $g(x)$ as follows: \begin{equation} U(a,b,\theta) g(x) = \frac{1}{a^d} g \left(R^{-1}(\theta)\frac{x-b}{a} \right). \end{equation} (We use the $L^1$ norm \cite{Chui1992,HM1996} instead of the usual $L^2$ norm to keep the physical dimension of wavelet coefficients equal to the dimension of the original fields.) Thus the wavelet coefficients of the function $u(x) \in L^2(\mathbb{R}^d)$ with respect to the basic wavelet $g(x)$ in Euclidean space $\mathbb{R}^d$ can be written as \begin{equation} u_{a,\theta}(b) = \int_{\mathbb{R}^d} \frac{1}{a^d} \overline{g \left(R^{-1}(\theta)\frac{x-b}{a} \right) }u(x) d^dx. \label{dwtrd} \end{equation} The wavelet coefficients \eqref{dwtrd} represent the result of the measurement of the function $u(x)$ at the point $b$ at the scale $a$ with an aperture function $g$ rotated by the angle(s) $\theta$ \cite{PhysRevLett.64.745}. The function $u(x)$ can be reconstructed from its wavelet coefficients \eqref{dwtrd} using the formula \eqref{gwl}: \begin{widetext} \begin{equation} u(x) = \frac{1}{C_g} \int \frac{1}{a^d} g\left(R^{-1}(\theta)\frac{x-b}{a}\right) u_{a\theta}(b) \frac{dad^db}{a} d\mu(\theta) \label{iwt} \end{equation} \end{widetext} The normalization constant $C_g$ is readily evaluated using the Fourier transform: \begin{align*} C_g &= \int_0^\infty |\tilde g(aR^{-1}(\theta)k)|^2\frac{da}{a} d\mu(\theta)\\ &= \int |\tilde g(k)|^2 \frac{d^dk}{|k|^d}<\infty. \end{align*} For isotropic wavelets $$ C_g = \int_0^\infty |\tilde g(ak)|^2\frac{da}{a} = \int |\tilde g(k)|^2 \frac{d^dk}{S_{d}|k|^d}, $$ where $S_d = \frac{2 \pi^{d/2}}{\Gamma(d/2)}$ is the area of the unit sphere in $\mathbb{R}^d$. It is helpful to rewrite the continuous wavelet transform in the Fourier form: \begin{eqnarray*} u(x) &=& \frac{1}{C_g} \int_0^\infty \frac{da}{a} \int \dk{k}{d} e^{\imath k x} \tilde g(ak) \tilde u_a(k), \\ \tilde u_a(k) &=& \overline{\tilde g(ak)}\tilde u(k) .
\end{eqnarray*} The wavelet function $\tilde{g}(ak)$ works as a {\em band-pass filter}, which injects a part of the energy carried by the $k$-mode of the function $u(x)$ into the ``detector'' of scale $a$, depending on how much the product $|ak|$ differs from unity. Indeed, taking the plane wave $\phi(x) = (2\pi)^{-d} \exp(\imath k_0 x)$ as an example of a free particle with momentum $k_0$, so that $\hat{P} \phi(x) = k_0 \phi(x), \hat{P} = -\imath \partial_x$, we get $$ \tilde{\phi}(k) = \delta^d(k-k_0), \quad \tilde{\phi}_a(k) = \bar{\tilde{g}}(ak) \delta^d(k-k_0) $$ and hence \begin{equation} \phi_a(b) = e^{\imath k_0 b} \bar{\tilde{g}}(ak_0). \end{equation} The partial momentum per octave is $k_0 \frac{|\tilde g(ak_0)|^2}{C_g}$, so that the integral over all possible scales is $k_0$. Such a separation is, however, impossible in Minkowski space $\mathbb{R}^{1,3}$ in the space-time coordinates $(t,x,y,z)$. \subsection{Minkowski space} To construct the wavelet transform in Minkowski space it is convenient to turn from the space-time coordinates $x^\mu=(t,x,y,z)$ to the {\em light-cone coordinates}: \begin{equation}x^\mu=(x_+,x_-,y,z), x_\pm = \frac{t \pm x}{\sqrt{2}},\bm{x}_\perp = (y,z). \label{lcc} \end{equation} This is the so-called infinite momentum frame. The advantage of the coordinates \eqref{lcc} for calculations in quantum field theory is a significant simplification of the vacuum structure \cite{ChangMa1969,KS1970}. The metric in the light-cone coordinates becomes $$ g_{\mu\nu} = \begin{pmatrix} 0 & 1 & 0 & 0 \cr 1 & 0 & 0 & 0 \cr 0 & 0 &-1 & 0 \cr 0 & 0 & 0 & -1 \end{pmatrix}. $$ The rotation matrix -- comprising the Lorentz boosts in the $x$ direction and the rotations in the $(y,z)$ plane -- has a block-diagonal form $$ M(\eta,\phi) = \begin{pmatrix} e^{\eta} & 0 & 0 & 0 \cr 0 & e^{-\eta} & 0 & 0 \cr 0 & 0 & \cos\phi & \sin\phi \cr 0 & 0 &-\sin\phi & \cos\phi \end{pmatrix}, $$ so that $M^{-1}(\eta,\phi)=M(-\eta,-\phi)$.
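A quick numerical check (an added illustration, not from the paper) confirms that this $M(\eta,\phi)$ satisfies $M^{-1}(\eta,\phi)=M(-\eta,-\phi)$ and preserves the light-cone metric, $M^{T}gM=g$.

```python
import numpy as np

def M(eta, phi):
    """Lorentz boost (rapidity eta) plus transverse rotation (angle phi)
    in light-cone coordinates (x+, x-, y, z)."""
    out = np.zeros((4, 4))
    out[0, 0], out[1, 1] = np.exp(eta), np.exp(-eta)
    out[2, 2] = out[3, 3] = np.cos(phi)
    out[2, 3], out[3, 2] = np.sin(phi), -np.sin(phi)
    return out

# light-cone metric g_{mu nu}: ds^2 = 2 dx+ dx- - dy^2 - dz^2
g = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0, 0.0],
              [0.0, 0.0, 0.0, -1.0]])
```

Metric invariance holds because the boost rescales $x_+$ and $x_-$ by reciprocal factors, leaving the off-diagonal block $2\,dx_+dx_-$ unchanged, while the transverse rotation is orthogonal.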
Hyperbolic rotation in the ($t,x$) plane is determined by the hyperbolic rotation angle -- the rapidity $\eta$. The rotations in the transverse plane, not affected by the Lorentz contraction, are determined by the rotation angle $\phi$. The Poincar\'e group can be extended by the scale transformations $x'=a x$ to the affine group $$ x' = a M(\eta,\phi) x + b, $$ with the representation written in the same form as that of the wavelet transform in Euclidean space $\mathbb{R}^d$, viz: $$ U(a,b,\eta,\phi)u(x) = \frac{1}{a^4}u\left(M^{-1}(\eta,\phi)\frac{x-b}{a} \right), $$ defined in the $L^1$ norm in accordance with \cite{HM1996,AK2013}. So, we have a straightforward generalization of the definition of the wavelet coefficients of a function $f(x)\in L^2(\mathbb{R}^{1,3})$ with respect to the basic wavelet $g$ \cite{AK2013iv,AK2013} \begin{widetext} \begin{align} W_{a,b,\eta,\phi}[f] = \int dx_+ dx_- d^2 \bm{x}_\perp \frac{1}{a^4} \overline{ g\left(M^{-1}(\eta,\phi)\frac{x-b}{a} \right)} f(x_+,x_-,\bm{x}_\perp). \label{dwtm} \end{align} \end{widetext} The difference from the calculations in Euclidean space $\mathbb{R}^4$ is that the basic wavelet $g(\cdot)$ cannot be defined globally on $\mathbb{R}^{1,3}$. Instead, it should be defined in four separate domains that cannot be connected by Lorentz rotations: \begin{align*} A_1: k_+ >0, k_-<0; & A_2: k_+ <0, k_- >0;\\ A_3: k_+ >0, k_- >0;& A_4: k_+ <0, k_-<0 , \end{align*} where $k$ is the wave vector, $k_\pm = \frac{\omega \pm k_x}{\sqrt{2}}$. Four separate wavelets should be defined in these four domains \cite{PG2011,PG2012}: \begin{equation} g_i(x) = \int_{A_i} e^{\imath k x} \tilde{g}(k) \dk{k}{4}, \quad i=\overline{1,4}. \end{equation} We assume the following definition of the Fourier transform in light-cone coordinates: \begin{align*}\nonumber f(x_+,x_-,\bm{x}_\perp) &= \int e^{\imath k_- x_+ + \imath k_+ x_- -\imath \bm{k}_\perp \bm{x}_\perp} \times \\ &\quad\times \tilde{f} (k_-, k_+,\bm{k}_\perp)\frac{dk_+dk_-d^2\bm{k}_\perp}{(2\pi)^4}.
\end{align*} Substituting the Fourier images into the definition \eqref{dwtm} we get \begin{align}\nonumber W_{ab\eta\phi}^i = \int_{A_i} e^{\imath k_- b_+ + \imath k_+ b_- -\imath \bm{k}_\perp \bm{b}_\perp} \tilde{f} (k_-, k_+,\bm{k}_\perp) \\ \overline{\tilde{g}}(a e^\eta k_-, a e^{-\eta} k_+, a R^{-1}(\phi) \bm{k}_\perp) \frac{dk_+dk_-d^2\bm{k}_\perp}{(2\pi)^4}. \end{align} Similarly to the $\mathbb{R}^d$ case, the reconstruction formula is \cite{AK2013iv}: \begin{widetext} \begin{align*}\nonumber f(x) &= \sum_{i=1}^4 \frac{1}{C_{g_i}} \int_{-\infty}^\infty d\eta \int_0^{2\pi} d\phi \int_0^\infty \frac{da}{a} \int_{M^4_1} db_+ db_- d^2\bm{b}_\perp \frac{1}{a^4} g_i \left( M^{-1}(\eta,\phi)\frac{x-b}{a} \right) W^i_{ab\eta\phi} \\ &= \sum_{i=1}^4 \frac{1}{C_{g_i}} \int_{-\infty}^\infty d\eta \int_0^{2\pi} d\phi \int_0^\infty \frac{da}{a} \int_{A_i} \frac{dk_+dk_-d^2\bm{k}_\perp}{(2\pi)^4} e^{\imath k_- x_+ + \imath k_+ x_- -\imath \bm{k}_\perp \bm{x}_\perp} \times \\ &\quad\times \tilde{W}_{a\eta\phi}(k) \tilde{g}(ak_- e^\eta, a k_+ e^{-\eta},aR^{-1}(\phi)\bm{k}_\perp) \end{align*} \end{widetext} \section{Quantization} As in standard quantum field theory in Minkowski space, we use the mass-shell delta function to get rid of the redundant degrees of freedom \cite{Bsh1980}. Let us consider the massive scalar field in $\mathbb{R}^{1,3}$ Minkowski space \begin{equation} u(x) = \int e^{\imath k x} 2 \pi \delta(k^2-m^2) \tilde{u}(k_-,k_+,\bm{k}_\perp) \frac{d^4k}{(2\pi)^4} \label{uft} \end{equation} The Lorentz invariant scalar product and the invariant volume in $k$-space are \begin{align*} kx \equiv k_0 x_0 - \bm{k}\bm{x} = k_- x_+ + k_+x_- - \bm{k}_\perp \bm{x}_\perp \\ \frac{d^4k}{(2\pi)^4} = \frac{dk_0 dk_x dk_y dk_z}{(2\pi)^4} = \frac{dk_-dk_+ d^2\bm{k}_\perp}{(2\pi)^4}.
\end{align*} For a massive scalar field, because of the mass-shell delta function $\delta(2k_+k_- -\bm{k}^2 - m^2)$, only the two domains $A_3$ and $A_4$, for which $k_+k_-$ is positive, contribute to the decomposition of $u(x)$. The integration over the $k_-$ variable with the mass-shell delta function gives $$ k_- = \frac{\bm{k}_\perp^2+m^2}{2k_+}. $$ After the substitution of the integration variable $k\to-k$ in the integration over $A_4$, the decomposition of $u(x)$ takes the form \begin{align} \nonumber u(x) &= \int \Big[ e^{\imath k x} \tilde{u}\left( \frac{\bm{k}_\perp^2+m^2}{2k_+},k_+,\bm{k}_\perp \right) + \\ \nonumber &\quad+ e^{-\imath k x} \tilde{u}\left( -\frac{\bm{k}_\perp^2+m^2}{2k_+},-k_+,-\bm{k}_\perp \right)\Big] \times \\ \nonumber &\quad\times \theta(k_+)\frac{dk_+ d^2\bm{k}_\perp}{2k_+(2\pi)^3} \\ \nonumber &\equiv \int \left[e^{\imath k x} \tilde u^+(k) + e^{-\imath k x} \tilde u^-(k) \right] \times \\ &\quad\times \theta(k_+) \frac{dk_+ d^2\bm{k}_\perp}{2 k_+(2\pi)^3} \label{pnd} \end{align} Both $\tilde u^+(k)$ and $\tilde u^-(k)$ are defined on the half-space $k_+>0$ in $\mathbb{R}^3$ and can be decomposed into scale components by the continuous wavelet transform in Euclidean space. The straightforward way to quantize the fields in the light-cone representation is to use the formal analogy between the decomposition \eqref{pnd} and the positive/negative frequency decomposition in the usual coordinates ($t,\bm{x}$) in the equal-time quantization scheme \begin{equation}\left. \left\{u(t,\bm{x}), \frac{\partial L}{\partial \dot{u}(t,\bm{y})} \right\}\right|_{t=0} = \imath \delta^3 (\bm{x}-\bm{y}), \label{pb} \end{equation} where the curly brackets stand for the Poisson brackets, replaced by the commutator (anti-commutator) for Bose (Fermi) quantum fields.
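The light-cone kinematics used above is easy to sanity-check numerically. The sketch below (an added illustration; the function names are ours) verifies that $k\cdot x=k_0x_0-\bm k\cdot\bm x$ equals $k_-x_+ + k_+x_- - \bm k_\perp\cdot\bm x_\perp$, and that $k_-=(\bm k_\perp^2+m^2)/(2k_+)$ puts $k$ on the mass shell $2k_+k_--\bm k_\perp^2=m^2$.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def to_lightcone(v):
    """(v0, vx, vy, vz) -> (v+, v-, vy, vz) with v_pm = (v0 +/- vx)/sqrt(2)."""
    v0, vx, vy, vz = v
    return np.array([(v0 + vx) / SQRT2, (v0 - vx) / SQRT2, vy, vz])

def cartesian_product(k, x):
    """k.x = k0 x0 - k.x in the usual coordinates."""
    return k[0]*x[0] - k[1]*x[1] - k[2]*x[2] - k[3]*x[3]

def lightcone_product(k, x):
    """k.x = k- x+ + k+ x- - k_perp . x_perp, components ordered (v+, v-, vy, vz)."""
    return k[1]*x[0] + k[0]*x[1] - k[2]*x[2] - k[3]*x[3]

def on_shell_kminus(kplus, kperp, m):
    """k- fixed by the mass-shell delta function."""
    return (np.dot(kperp, kperp) + m**2) / (2.0 * kplus)
```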
Using the Lagrangian \begin{equation} L = \frac{\partial u}{\partial x_+} \frac{\partial u}{\partial x_-} - \frac{1}{2}(\partial_\perp u)^2 - \frac{m^2}{2}u^2, \label{lpm} \end{equation} we can infer that the variable $x_+ = \frac{ t+x}{\sqrt{2}}$ can be considered as ``time'' on the light cone \cite{LB1980}. In analogy with the common case, the Poisson bracket can then be cast in the form \begin{equation} \left\{ u(x_+=0,x_-,x_\perp), \left. \frac{\partial u}{\partial y_-}\right|_{y_+=0} \right\}=\imath \delta^3(x-y) \label{pblc} \end{equation} Substituting the decomposition \eqref{pnd} into the bracket \eqref{pblc} and changing the bracket to a commutator, one gets \begin{equation} \left[ \tilde u^-(k),\tilde u^+(q) \right] = 2k_+ (2\pi)^3 \delta^3(k-q). \label{cr1} \end{equation} The latter equation differs from the standard commutation relation in that the energy ($k_0$) is replaced by the momentum ($k_+$); the role of the energy is played by $k_-$ in the light-cone coordinates. Substituting the inverse wavelet transform \begin{equation} \tilde u^\pm(k_+,\bm{k}_\perp) = \frac{1}{C_g}\int_0^\infty \tilde g(ak) \tilde u^\pm_a(k) \frac{da}{a}, \label{fwt} \end{equation} where $\tilde u^\pm_a(k) = \overline{\tilde g}(ak)\tilde u^\pm(k), k \equiv (k_+,\bm{k}_\perp)$, into the equality \eqref{cr1}, and assuming an isotropic basic wavelet $g(\cdot)$ for simplicity, we derive the commutation relations for the scale components \begin{widetext} \begin{equation} \left[ \tilde u^-_{a_1}(k),\tilde u^+_{a_2}(q) \right] = 16\pi^3 C_g a_1 \delta(a_1-a_2) k_+ \delta(k_+-q_+) \delta^2(\bm{k}_\perp-\bm{q}_\perp).
\label{crs} \end{equation} \end{widetext} The commutation relation \eqref{crs} meets the general form of the wavelet transform of the canonical commutation relations in Minkowski space, eq. (18) of \cite{AK2013}, \begin{align*} [u^-_{ia\eta}(k),u^+_{ja'\eta'}(k') ] &= a \delta(a-a') \delta (\eta-\eta')\times \\ &\quad\times \delta_{ij} C_{g_i} [u^-(k),u^+(k')], \end{align*} defined on the four Lorentz-invariant domains $A_i,i=\overline{1,4}$. However, being defined on $k \in \mathbb{R}_+ \times \mathbb{R}^2$, it is easier to use in practical calculations. Introducing a state $\Phi_p$ with the momentum $p$ we get \begin{align*} P^n \tilde u^+(k) \Phi_p &= (p^n+k^n) \tilde u^+(k) \Phi_p, \\ P^n \tilde u^-(k) \Phi_p &= (p^n-k^n) \tilde u^-(k) \Phi_p. \end{align*} In the latter equations $\tilde u^\pm(k)$ can be subjected to the wavelet transform, so that $\tilde u^\pm(k)$ is expressed by \eqref{fwt} with $k$ having only 3 independent components. In this way we can construct the {\em multiscale} Fock space of states \begin{widetext} \begin{equation} \Phi = \sum_{j,s} \int F^{(\cdots j \cdots)}_s(a_1,k_1,\ldots, a_s,k_s) \tilde u_{j_1 a_1}^+(k_1) \ldots \tilde u_{j_s a_s}^+(k_s) \frac{da_1dk_{1+} d^2\bm{k}_{1\perp}}{a_1C_g16k_{1+}\pi^3} \ldots \frac{da_s dk_{s+} d^2\bm{k}_{s\perp}}{a_sC_g16k_{s+}\pi^3} \Phi_0, \end{equation} \end{widetext} where $k_i = (k_{i+},k_{i\perp})$ are three-dimensional vectors, $j$ denotes all other indices of the quantum states, and $\Phi_0$ is the vacuum state: $u^-_i(x) \Phi_0 = 0$. \section{Conclusions} To conclude, we have developed a quantization scheme suitable for applications in the quantum theory of fields $u_a(x)$, which explicitly depend on both the position $x$ and the scale (resolution) $a$.
It is not surprising that such fields can form a prospective framework for analytic calculations in quantum chromodynamics, where most approved results are obtained either numerically, by lattice simulations \cite{LQCD}, or analytically, with the perturbation expansion being corrected by renormalization group methods \cite{CSS2004}. In the latter case the obtained results, viz., process amplitudes, parton distribution functions, nucleon form factors, tacitly depend on some formal scale parameter $\Lambda$, which is either a cutoff momentum or a renormalization scale. From the functional analysis point of view, this may suggest the use of a space of functions which explicitly depend on both the position and the resolution. Being operator-valued functions, they certainly require commutation relations. The use of light-cone coordinates enables this construction. The quantization of the massive scalar field was chosen as a simple example. Perhaps the same technique can be used in general problems of quantum field theory, when the wavelet transform is used to construct divergence-free Green functions \cite{Federbush1995,Altaisky2010PRD,BP2013}. \section*{Acknowledgement} The work was supported in part by RFBR projects 13-07-00409, 14-02-00739 and by the Ministry of Education and Science of the Russian Federation in the framework of the Increase Competitiveness Program of MISiS.
\section{Introduction} Since the late 20th century (see, e.g., \cite{voorhoeve,kazarnovski,khovanski}) it has been known that many of the quantitative results relating algebraic sets and polyhedral geometry can be extended to more general analytic functions, including exponential sums. Here, we show that the recent estimates on the distance between amoebae and Archimedean tropical varieties from \cite{aknr} admit such an extension. Metric estimates for amoebae of polynomials are useful for coarse approximation of solution sets of polynomial systems, as a step toward finer approximation via, say, homotopy methods (see, e.g., \cite{aggr,hauenstein}). Polynomial systems are ubiquitous in numerous applications, and via a logarithmic change of variables, are clearly equivalent to systems of exponential sums with integer frequencies. Exponential sums with real frequencies are important in Signal Processing, Model Theory, and $3$-manifold invariants (see Remark \ref{rem:apps} below).\footnote{Lest there be any confusion, let us immediately clarify that we do {\em not} consider terms of the form $e^{p(x)}$ with $p$ a polynomial of degree $\geq\!2$. The latter type of exponential sums are of great importance in analytic number theory and the study of zeta functions. } \begin{dfn} \label{dfn:amoebaarchtrop} We use the abbreviations $[N]\!:=\!\{1,\ldots,N\}$, $w\!:=\!(w_1,\ldots,w_n)$, \linebreak $z\!:=\!(z_1,\ldots,z_n)$, $w\cdot z\!:=\!w_1z_1+\cdots+w_nz_n$, and $\mathbb{C}^*\!:=\!\mathbb{C}\setminus\{0\}$. We also let $\Re(z)$ denote the vector whose $i\thth$ coordinate is the real part of $z_i$, and $\Re(S)\!:=\!\{\Re(z)\; | \; z\!\in\!S\}$ for any subset $S\!\subseteq\!\C^n$. Henceforth, we let $A\!:=\!\{a_1,\ldots,a_t\}\!\subset\!\R^n$ have cardinality $t\!\geq\!2$, $b_j\!\in\!\mathbb{C}$ for all $j\!\in\![t]$, and set $g(z)\!:=\!\sum^t_{j=1}e^{a_j\cdot z+b_j}$. We call $g$ an {\em $n$-variate exponential $t$-sum} and call $A$ the {\em spectrum} of $g$. 
We also call the $a_j$ the {\em frequencies} of $g$ and define their {\em minimal spacing} to be $\delta(g)\!:=\!\min_{p\neq q}|a_p-a_q|$, where $|\cdot|$ denotes the standard $L^2$-norm on $\R^n$. Finally, let $Z(g)$ denote the zero set of $g$ in $\C^n$, and define the {\em (Archimedean) tropical variety of $g$} to be\\ \mbox{}\hfill $\mathrm{Trop}(g)\!:=\!\Re\left(\left\{z\!\in\!\C^n\; : \; \max_j\left|e^{a_j\cdot z+b_j}\right| \text{ is attained for at least two distinct } j\right\}\right)$.\hfill $\diamond$ \end{dfn} \noindent Note that while we restrict to real frequencies for our exponential sums, we allow complex coefficients. $\mathrm{Trop}(g)$ also admits an equivalent (and quite tractable) definition as the dual of a\linebreak \scalebox{.89}[1]{polyhedral subdivision of $A$ depending on the real parts of the $b_j$ (see Thm.\ \ref{thm:cxity2} and Prop.\ \ref{prop:slopes} below).} \begin{ex} When $n\!=\!1$ and $g(z)\!:=\!e^{\sqrt{2}z_1}+e^{\log(3)+\pi \sqrt{-1}}$, we see that $Z(g)$ is a countable,\linebreak \scalebox{.93}[1]{discrete, and unbounded subset of the vertical line $\left\{z_1\!\in\!\mathbb{C}\; \left| \; \Re(z_1)\!=\!\frac{\log 3} {\sqrt{2}}\right.\right\}$. So $\Re(Z(g))\!=\!\left\{\frac{\log 3} {\sqrt{2}}\right\}$. $\diamond$} \end{ex} \begin{ex} When $g(z)\!:=\!e^{a_1z_1+b_1}+e^{a_2z_1+b_2}$ for some distinct $a_1,a_2\!\in\!\mathbb{R}$ (and any $b_1,b_2\!\in\!\mathbb{C}$) it is easily checked that $\mathrm{Trop}(g)\!=\!\Re(Z(g))\!=\!\left\{\frac{\Re(b_1-b_2)}{a_2-a_1} \right\}$. More generally, for any $n$-variate exponential $2$-sum $g$, $\mathrm{Trop}(g)$ and $\Re(Z(g))$ are the same affine hyperplane. However, the univariate exponential $3$-sum $g(z_1)\!:=\!(e^{z_1}+1)^2$ gives us $\mathrm{Trop}(g)\!=\!\{\pm \log 2\}$, which is neither contained in, nor has the same number of points, as $\Re(Z(g))\!=\!\{0\}$.
$\diamond$ \end{ex} When $A\!\subset\!\Z^n$, $\Re(Z(g))$ is the image of the complex zero set of the polynomial $\sum^t_{j=1} e^{b_j} x^{a_j}$ under the coordinate-wise log-absolute value map, i.e., an {\em amoeba} \cite{gkz94}. Piecewise\linebreak linear approximations for amoebae date back to work of Viro \cite{virologpaper} and, in the univariate\linebreak case, Ostrowski \cite{ostrowski}. More recently, Alessandrini has associated piecewise linear\linebreak approximations to log-limit sets of semi-algebraic sets and definable sets in an $o$-minimal structure \cite{alessandrini}. However, other than Definition \ref{dfn:amoebaarchtrop} here, we are unaware of any earlier\linebreak formulation of such approximations for real parts of complex zero sets of $n$-variate\linebreak exponential sums. Our first main results are simple and explicit bounds for how well $\mathrm{Trop}(g)$ approximates $\Re(Z(g))$, in arbitrary dimension. \begin{dfn} Given any subsets $R,S\!\subseteq\!\R^n$, their {\em Hausdorff distance} is\\ \mbox{}\hfill $\Delta(R,S)\!:=\! \max\left\{\sup\limits_{r\in R}{} \inf\limits_{\substack{\mbox{}\\ s\in S}}|r-s|, \sup\limits_{s\in S}{} \inf\limits_{\substack{\mbox{}\\ r\in R}} |r-s|\right\}$. \hfill $\diamond$ \end{dfn} \begin{thm} \label{thm:finally} For any $n$-variate exponential $t$-sum $g(z)\!:=\!\sum^t_{j=1} e^{a_j\cdot z +b_j}$ with $a_j\!\in\!\R^n$ and $b_j\!\in\!\mathbb{C}$ for all $j$, let $d$ be the dimension of the smallest affine subspace containing $a_1,\ldots,a_t$, and set $\delta(g)\!:=\!\min_{p\neq q} |a_p-a_q|$. Then $t\!\geq\!d+1$ and\\ \mbox{}\hspace{1cm}(0) If $t\!=\!d+1$ then $\mathrm{Trop}(g)\!\subseteq\!\Re(Z(g))$ (and thus $\sup\limits_{\text{\scalebox{.7}[1]{$w\in\mathrm{Trop}(g)$}}} \inf\limits_{\substack{\mbox{}\\ \text{\scalebox{.7}[1]{$r\in\Re(Z(g))$}}}} |r-w|\!=\!0$).
\vspace{-.2cm} \noindent \mbox{}\hspace{1cm}(1) For $t\!\geq\!2$ we have:\\ \mbox{}\hspace{1.6cm}(a) $\displaystyle{\sup \limits_{\text{\scalebox{.7}[1]{$r\in\Re(Z(g))$}}} \inf \limits_{\substack{\mbox{} \\ \text{\scalebox{.7}[1]{$w\in\mathrm{Trop}(g)$}}}} |r-w|\leq \log(t-1)/\delta(g)}$\\ \mbox{}\hspace{1.6cm}(b) $\Delta(\Re(Z(g)),\mathrm{Trop}(g))\leq \frac{\sqrt{ed}t^2(2t-3)\log 3}{\delta(g)}$. \\ \mbox{}\hspace{1cm}(2) Defining the $n$-variate exponential $t$-sum $g_{t,n}(z)\!:=\!(e^{\delta z_1}+1)^{t-n}+e^{\delta z_2} +\cdots+e^{\delta z_n}$,\\ \mbox{}\hspace{1.7cm}we have $\Delta\!\left(\Re(Z(g_{t,n})),\mathrm{Trop}(g_{t,n})\right)\geq \log(t-n)/\delta$ for $t\!\geq\!n+1$ and $\delta\!>\!0$. \end{thm} \noindent We prove Theorem \ref{thm:finally} in Section \ref{sec:finally}. Fundamental results on the geometric and topological structure of $\Re(Z(g))$ have been derived in recent decades by Favorov and Silipo \cite{favorov,silipo}. However, we are unaware of any earlier explicit bounds for the distance between $\Re(Z(g))$ and $\mathrm{Trop}(g)$ when $A\!\not\subset\!\Z^n$. \noindent \begin{minipage}[b]{0.5\linewidth} \vspace{0pt} \begin{ex} When $g$ is the $2$-variate exponential $7$-sum $\sum^6_{j=0}\binom{7}{j}e^{\cos(2\pi j/7)z_1 +\sin(2\pi j/7)z_2}$, Assertion (1) of Theorem \ref{thm:finally} tells us that every point of $\Re(Z(g))$ lies within distance\\ \mbox{$\log(6)/\sqrt{(1-\cos(2\pi/7))^2+\sin(2\pi/7)^2} \!<\!2.065$} of some point of $\mathrm{Trop}(g)$. To the right, $\mathrm{Trop}(g)$ is drawn as the black piecewise linear curve, along with the \scalebox{.9}[1]{stated neighborhood of $\mathrm{Trop}(g)$ containing $\Re(Z(g))$.} \end{ex} \end{minipage} \begin{minipage}[b]{0.4\linewidth} \vspace{0pt} \mbox{\epsfig{file=rex.eps,height=1.6in,clip=}\hspace{-.5cm} \epsfig{file=rexblowup.eps,height=1.6in,clip=}} \end{minipage} \noindent {\em The magnified view reveals that $\mathrm{Trop}(g)$ has exactly $3$ vertices.
$\diamond$} \scalebox{.98}[1]{The special case $A\!\subset\!\Z^n$ of Theorem \ref{thm:finally} was known earlier, with a bound independent of $n$:}\linebreak \scalebox{.94}[1]{Our $\mathrm{Trop}(g)$ agrees with the older definition of (Archimedean) tropical variety for the polynomial}\linebreak $f(x)\!:=\!\sum^t_{j=1} e^{b_j} x^{a_j}$, and the simpler bound $\Delta(\mathrm{Amoeba}(f),\mathrm{Trop}(f))\!\leq\!(2t-3)\log(t-1)$ holds \cite{aknr}. Earlier metric results for the special case $A\!\subset\!\mathbb{Z}$ date back to work of Ostrowski on Graeffe iteration \cite{ostrowski}. Viro and Mikhalkin touched upon the special case $A\!\subset\!\mathbb{Z}^2$ in \cite{virologpaper} and \cite[Lemma 8.5, pg.\ 360]{mikhalkin}. We derive our distance bounds by using a projection trick arising from the study of random convex sets (see \cite{gpv} and Section \ref{sec:ball} below) to reduce to the $d\!=\!1$ case. The $d\!=\!1$ case then follows from specially tailored extensions of existing results for the polynomial case (see Section \ref{sec:back} below). This approach results in succinct proofs for our bounds. However, it is not yet clear if the dependence on $d$ is actually necessary or just an artifact of our techniques. A consequence of our approach is a refinement of an earlier estimate of Wilder (see \cite{wilder}, \cite{voorhoeve}, and Section \ref{sub:rouche} below) on the number of roots of univariate exponential sums in infinite horizontal strips of $\mathbb{C}$: Theorem \ref{thm:rec} (see Section \ref{sub:rouche}) allows us to estimate the number of roots in certain axis-parallel rectangles in $\mathbb{C}$. A very special case of Theorem \ref{thm:rec} is the fact that {\em all} the roots of $g$ are confined to an explicit union of infinite {\em vertical} strips explicitly determined by $\mathrm{Trop}(g)$. 
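In the univariate case, $\mathrm{Trop}(g)$ is concretely computable: it is the set of breakpoints of the upper envelope of the linear functions $a_jx+\Re(b_j)$. The sketch below (an added Python illustration; the function name is ours) recovers $\mathrm{Trop}\left((e^{z_1}+1)^2\right)=\{\pm\log 2\}$ from the earlier example.

```python
import numpy as np
from itertools import combinations

def trop_univariate(a, re_b, tol=1e-9):
    """Breakpoints of the upper envelope of the lines a_j*x + Re(b_j):
    the Archimedean tropical variety of g(z) = sum_j exp(a_j z + b_j)."""
    pts = []
    for p, q in combinations(range(len(a)), 2):
        if a[p] == a[q]:
            continue
        x = (re_b[q] - re_b[p]) / (a[p] - a[q])   # where line p meets line q
        vals = np.array(a) * x + np.array(re_b)
        # keep x only if the two crossing lines actually attain the maximum there
        if vals[p] >= vals.max() - tol:
            pts.append(x)
    return sorted(set(np.round(pts, 9)))

# g(z) = (e^z + 1)^2 = e^{2z} + 2 e^z + 1: frequencies {2, 1, 0}, Re(b_j) = {0, log 2, 0}
trop = trop_univariate([2.0, 1.0, 0.0], [0.0, np.log(2.0), 0.0])
```

Here $\Re(Z(g))=\{0\}$ (the roots of $e^{z_1}=-1$ are purely imaginary), so the distance from $\Re(Z(g))$ to $\mathrm{Trop}(g)$ is exactly $\log 2=\log(t-1)/\delta(g)$, matching the bound in Assertion (1a) of Theorem \ref{thm:finally}.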
In what follows, the {\em open $\varepsilon$-neighborhood} of a subset $X\!\subseteq\!\mathbb{R}$ is simply $\{x'\!\in\!\mathbb{R}\; : \; |x-x'|\!<\!\varepsilon\text{ for some } x\!\in\!X\}$. \begin{cor} \label{cor:strip} Suppose $g$ is any univariate $t$-sum with real spectrum and $W$ is the open\linebreak $\frac{\log 3}{\delta(g)}$-neighborhood of $\mathrm{Trop}(g)$. Then all the complex roots of $g$ lie in $W\times \mathbb{R}$. In particular,\linebreak $\sup\limits_{\text{\scalebox{.7}[1]{$r\in\Re(Z(g))$}}} \inf\limits_{\substack{\mbox{} \\ \text{\scalebox{.7}[1]{$w\in\mathrm{Trop}(g)$}}}} |r-w|\leq\frac{\log 3}{\delta(g)}$ in the univariate case. \qed \end{cor} \noindent Unlike the distribution of roots of $g$ in horizontal strips, where there is a kind of equidistribution (see, e.g., \cite{voorhoeve,galligo} and Section \ref{sec:back} below), Corollary \ref{cor:strip} tells us that the roots of $g$ cluster only within certain deterministically predictable vertical strips. Our next main results concern the complexity of deciding whether a given point lies in the real part of the complex zero set of a given exponential sum, and whether checking membership in a neighborhood of a tropical variety instead is more efficient. \subsection{On the Computational Complexity of $\pmb{\Re(Z(g))}$ and $\pmb{\mathrm{Trop}(g)}$} \label{sub:cxity} We have tried to balance generality and computational tractability in the family of functions at the heart of our paper. In particular, the use of arbitrary real inputs causes certain geometric and algorithmic subtleties. We will see below that these difficulties are ameliorated by replacing exact queries with approximate queries. \begin{rem} \label{rem:apps} ``Polynomials'' with real exponents --- sometimes called {\em posinomials} --- occur naturally in many applications. 
For example, the problem of finding the directions of a set of unknown signals, using a radar antenna built from a set of specially spaced sensors, can easily be converted to an instance of root-finding in the univariate case \cite{forsythe,hwang}. Approximating roots in the higher-dimensional case is the fundamental computational problem of Geometric Programming \cite{duffin,chiang,boyd}. Pathologies with the phases of complex roots can be avoided through a simple exponential change of variables, so this is one reason that exponential sums are more natural than posinomials. Among other applications, exponential sums occur in the calculation of $3$-manifold invariants (see, e.g., \cite[Appendix A]{mcmullen} and \cite{hadari}), and have been studied from the point of view of Model Theory and Diophantine Geometry (see, e.g., \cite{wilkie,zilberolder, zilber}). $\diamond$ \end{rem} To precisely compare the computational complexity of $\Re(Z(g))$ and $\mathrm{Trop}(g)$ we will first need to fix a suitable model of computation: We will deal mainly with the {\em BSS model over $\mathbb{R}$} \cite{bcss}. This model naturally augments the classical {\em Turing machine} \cite{papa,arora,sipser2} by allowing field operations and comparisons over $\mathbb{R}$ in unit time. We are in fact forced to move beyond the Turing model since our exponential sums involve arbitrary real numbers, and the Turing model only allows finite bit strings as inputs. We refer the reader to \cite{bcss} for further background. We are also forced to move from {\em exact} equality and membership questions to questions allowing a margin of uncertainty. One reason is that exact arithmetic involving exponential sums still presents difficulties, even for computational models allowing field operations and comparisons over $\mathbb{R}$. 
\begin{prop} \label{prop:dec} The problem of determining, for an input $(z_1,z_2)\!\in\!\mathbb{R}^2$, whether $z_1\!=\!e^{z_2}$, is undecidable\footnote{\cite{poonen} provides an excellent survey on undecidability, in the classical Turing model, geared toward non-experts in complexity theory.} in the BSS model over $\mathbb{R}$, i.e., there is no algorithm terminating in finite time for all inputs. \end{prop} \noindent (We were unable to find a precise statement of Proposition \ref{prop:dec} in the literature, so we provide a proof at the end of this section.) Note that when the input is restricted, deciding whether $z_1\!=\!e^{z_2}$ can be tractable (and even trivially so). For instance, a famous result of Lindemann \cite{lindemann} tells us that $e^{z_2}$ is transcendental if $z_2\!\in\!\mathbb{C}$ is nonzero and algebraic. Proposition \ref{prop:dec} may be surprising in light of there being efficient iterations for approximating the exponential function \cite{borwein,ahrendt}. Determining which questions are tractable for expressions involving exponentials has in fact been an important impetus behind parts of Computational Algebra, Model Theory, and Diophantine Geometry in recent decades (see, e.g., \cite{richardson,wilkie,zilberolder,habegger,scanlon}). As for the complexity of $\Re(Z(g))$, deciding membership turns out to be provably hard, already for the simplest bivariate exponential $3$-sums. \begin{thm} \label{thm:mem} \scalebox{.92}[1]{Determining, for arbitrary input $r_1,r_2\!\in\!\mathbb{R}$ whether $(r_1,r_2)\!\in\!\Re\!\left(Z\left(1-e^{z_1}-e^{z_2}\right)\right)$}\linebreak is undecidable in the BSS model over $\mathbb{R}$. \end{thm} \noindent (We prove Theorem \ref{thm:mem} at the end of this section.) The intractability asserted in Theorem \ref{thm:mem} can be thought of as an amplification of the ${\mathbf{NP}}$-hardness of deciding amoeba membership when $A\!\subset\!\mathbb{Z}$ \cite[Thm.\ 1.9]{aknr}. 
(See also \cite{plaisted} for an important precursor.) However, just as in Proposition \ref{prop:dec}, there are special cases of the membership problem from Theorem \ref{thm:mem} that are perfectly tractable. For instance, when $e^{r_1},e^{r_2}\!\in\!\mathbb{Q}$, deciding whether $(r_1,r_2)\!\in\!\Re\!\left(Z\left(1-e^{z_1}-e^{z_2}\right) \right)$ is in fact doable --- even on a classical Turing machine --- in polynomial-time (see, e.g., \cite{theobald,wolfsos} and \cite[Thm.\ 1.9]{aknr}). More to the point, Theorem \ref{thm:mem} above is yet another motivation for approximating $\Re(Z(g))$, and our final main result shows that membership queries (and even distance queries) involving $\mathrm{Trop}(g)$ are quite tractable in the BSS model over $\mathbb{R}$. We refer the reader to \cite{grunbaum,ziegler,triang} for further background on polyhedral geometry and subdivisions. \begin{dfn} For any $n$-variate exponential $t$-sum $g$, let $\Sigma(\mathrm{Trop}(g))$ denote the polyhedral complex whose cells are exactly the (possibly improper) faces of the closures of the connected components of $\R^n\!\setminus\!\mathrm{Trop}(g)$. $\diamond$ \end{dfn} \begin{thm} \label{thm:cxity2} \scalebox{.97}[1]{Suppose $n$ is fixed. Then there is a polynomial-time algorithm that, for any}\linebreak input $w\!\in\!\R^n$ and $n$-variate exponential $t$-sum $g$, outputs the closure --- described as an\linebreak \scalebox{.98}[1]{explicit intersection of $O(t^2)$ half-spaces --- of the unique cell $\sigma_w$ of $\Sigma(\mathrm{Trop}(g))$ containing $w$.} \end{thm} \noindent We prove Theorem \ref{thm:cxity2} in Section \ref{sec:cxity}. An analogue of Theorem \ref{thm:cxity2}, for the classical Turing model (assuming $A\!\subset\!\Z^n$ and $w\!\in\!\Q^n$) appears in \cite[Thm.\ 1.5]{aggr}. Extending to $A\!\subset\!\R^n$ and real coefficients, and using the BSS model over $\mathbb{R}$, in fact conceptually simplifies the underlying algorithm and helps us avoid certain Diophantine subtleties. 
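To make the point-location of Theorem \ref{thm:cxity2} concrete, consider the simplest situation: if the index $j_\star$ maximizing $a_j\cdot w+\Re(b_j)$ at the query point $w$ is unique, then $w$ lies in a full-dimensional complement component, and the closure of $\sigma_w$ is cut out by just $t-1$ half-spaces $(a_{j_\star}-a_j)\cdot x\geq \Re(b_j)-\Re(b_{j_\star})$. The pure-Python sketch below (the function name is ours) handles only this generic case; the actual algorithm of Section \ref{sec:cxity} must also handle the lower-dimensional cells of $\Sigma(\mathrm{Trop}(g))$, whence the $O(t^2)$ bound:

```python
def cell_of(w, terms):
    """Closure of the full-dimensional cell of the complement of Trop(g)
    containing w, assuming the term maximizing a_j . w + Re(b_j) is unique.

    terms: list of pairs (a_j, beta_j), with a_j an n-tuple of frequencies
    and beta_j = Re(b_j).  Returns half-spaces as pairs (c, d), each meaning
    c . x >= d, namely (a_star - a_j) . x >= beta_j - beta_star for j != star.
    """
    dot = lambda a, x: sum(p * q for p, q in zip(a, x))
    vals = [dot(a, w) + beta for a, beta in terms]
    star = max(range(len(terms)), key=vals.__getitem__)
    a_star, beta_star = terms[star]
    return [(tuple(s - c for s, c in zip(a_star, a)), beta - beta_star)
            for j, (a, beta) in enumerate(terms) if j != star]
```

For instance, for $g(z)=1-e^{z_1}-e^{z_2}$ and $w=(2,0)$ the dominating term is $-e^{z_1}$, and the two returned half-spaces are $x_1\geq 0$ and $x_1-x_2\geq 0$.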
By applying the standard formula for point-hyperplane distance, and the well-known\linebreak efficient algorithms for approximating square-roots (see, e.g., \cite{borwein}), Theorem \ref{thm:cxity2}\linebreak implies that we can also efficiently check membership in any $\varepsilon$-neighborhood about $\mathrm{Trop}(g)$. This means, thanks to Theorem \ref{thm:finally}, that membership in a neighborhood of $\mathrm{Trop}(g)$ is a tractable and potentially useful relaxation of the problem of deciding membership in $\Re(Z(g))$. For completeness, we now prove Proposition \ref{prop:dec} and Theorem \ref{thm:mem}. \medskip \noindent {\bf Proof of Proposition \ref{prop:dec}:} The key is to consider the shape of the space of inputs $\mathcal{I}$ that lead to a ``Yes'' answer in a putative BSS machine deciding membership in the curve in $\mathbb{R}^2$ defined by $y\!=\!e^x$. In particular, \cite[Thm.\ 1, Pg.\ 52]{bcss} tells us that any set of inputs leading to a ``Yes'' answer in a BSS machine over $\mathbb{R}$ must be a countable union of semi-algebraic sets. So if $\mathcal{I}$ is indeed decidable relative to this model then $\mathcal{I}$ must contain a bounded connected neighborhood $W$ of a real algebraic curve (since $\mathcal{I}$ has infinite length). Since $\mathcal{I}$ is the graph of $e^x$, $W$ extends by analytic continuation to the graph of an entire algebraic function. But this is impossible: One simple way to see this is that an entire algebraic function must have\linebreak \scalebox{.95}[1]{polynomial growth order. However, the function $e^x$ clearly has non-polynomial growth order. \qed} \medskip \noindent {\bf Proof of Theorem \ref{thm:mem}:} Similar to our last argument, one can easily show that $\mathcal{I}\!:=\!\Re(Z(1-e^{z_1}-e^{z_2}))$ being decidable by a BSS machine over $\mathbb{R}$ implies that a neighborhood $W$ of the boundary of $\mathcal{I}$ must be real algebraic. 
(We may in fact assume that $W$ is the part of the boundary that lies in the curve defined by $y\!=\!\log(1-e^x)$.) So, via analytic continuation to $U\!:=\!\mathbb{C}\setminus\{(2k+1)\sqrt{-1}\pi\; | \; k\!\in\!\mathbb{Z}\}$, it suffices to show that $\log(1-e^x)$ is not an algebraic function that is analytic on $U$. But this is easy since an algebraic function can only have finitely many branch points, whereas $\log(1-e^x)$ has infinitely many. (Moreover, each branch point of $\log(1-e^x)$ has infinite monodromy whereas algebraic functions can only have branch points with finite monodromy.) \qed \section{Tropically Extending Classical Polynomials Root Bounds to Exponential Sums} \label{sec:back} \subsection{Basics on Roots of Univariate Exponential Sums} Let $\#S$ denote the cardinality of a set $S$. It is worth noting that although $\#\mathrm{Trop}(g)$ and our bounds for $\Delta(\Re(Z(g)),\mathrm{Trop}(g))$ are independent of the maximal distance between frequencies $D\!:=\!\max_{p,q}|a_p-a_q|$, the\linebreak cardinality $\#\Re(Z(g))$ can certainly depend on $D$, and even be infinite for $n\!=\!1$. \begin{ex} For any integer $D\!\geq\!2$, $g(z_1)\!:=\!e^{Dz_1}+e^{z_1}+1$ satisfies $\#\mathrm{Trop}(g)\!=\!1$ but $\#\Re(Z(g))\!=\!\lceil D/2\rceil$. The latter cardinality is easily computed by observing that the non-real roots of the trinomial $f(x_1)\!:=\!x^D_1+x_1+1$ occur in conjugate pairs, and at most $2$ roots of $f$ can have the same norm. (The latter fact is a very special case of \cite[Prop.\ 4.3]{tdw}.) 
$\diamond$ \end{ex} \begin{ex} Considering the decimal expansion of $\sqrt{2}$, and the local continuity of the roots of $e^{Dz_1}+e^{z_1}+1$ as a function of $D\!\in\!\mathbb{R}$, it is not hard to show that $X\!:=\!\Re\!\left(Z\!\left(e^{\sqrt{2}z_1}+e^{z_1}+1\right)\right)$ is in fact countably infinite, and Corollary \ref{cor:extremecauchy} below tells us that $X$ is also contained in the open interval $\left(-\frac{\log 2}{\sqrt{2}-1},\frac{\log 2}{\sqrt{2}-1}\right)$. $\diamond$ \end{ex} To derive our main results we will need the following variant of the Newton polytope, specially suited for studying real parts of roots of exponential sums. \begin{dfn} Let $\mathrm{Conv}(S)$ denote the convex hull of a subset $S\!\subseteq\!\R^n$, i.e., the smallest convex set containing $S$. Given any $n$-variate exponential $t$-sum $g(z)\!=\!\sum^t_{j=1} e^{a_j\cdot z+b_j}$ with real frequencies $a_j$, we then define its {\em Archimedean Newton polytope} to be $\mathrm{ArchNewt}(g)$\linebreak $:=\!\mathrm{Conv}\!\left(\{(a_j,-\Re(b_j))\}_{j\in [t]}\right)$. We also call any face of a polytope $P\!\subset\!\mathbb{R}^{n+1}$ having an outer-normal vector with {\em negative} last coordinate a {\em lower} face. $\diamond$ \end{dfn} \begin{prop} \label{prop:slopes} For any $n$-variate exponential $t$-sum $g$ with real spectrum we have\linebreak $\mathrm{Trop}(g)\!=\!\{w\; | \; (w,-1) \text{ is an outer normal of a positive-dimensional face of } \mathrm{ArchNewt}(g)\}$.\linebreak \scalebox{.95}[1]{Furthermore, when $n\!=\!1$, $\mathrm{Trop}(g)$ is also the set of slopes of the lower edges of $\mathrm{ArchNewt}(g)$. \qed} \end{prop} \noindent We refer the reader to \cite{aknr} for further background on the polynomial cases of $\mathrm{ArchNewt}$ and $\mathrm{Trop}$. A key trick we will use is relating the points of $\mathrm{Trop}(g)$ to (vertical) half-planes of $\mathbb{C}$ where certain terms of the univariate exponential sum $g$ dominate certain sub-summands of $g$. 
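Proposition \ref{prop:slopes} also yields a simple procedure in the univariate case: sort the points $(a_j,-\Re(b_j))$, compute their lower convex hull, and read off the slopes of its edges. A minimal pure-Python sketch (the helper name is ours), using a standard monotone-chain hull sweep:

```python
def trop_univariate(points):
    """Trop(g) for univariate g, per Proposition prop:slopes: the slopes of
    the lower edges of the convex hull of the points (a_j, -Re(b_j)).

    points: list of pairs (a_j, -Re(b_j)), with the frequencies a_j distinct.
    """
    pts = sorted(points)
    hull = []  # lower convex hull, built left to right
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] if it lies on or above the segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) <= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    return [(y2 - y1) / (x2 - x1)
            for (x1, y1), (x2, y2) in zip(hull, hull[1:])]
```

For example, $g(z_1)=e^{Dz_1}+e^{z_1}+1$ yields the collinear points $(0,0)$, $(1,0)$, $(D,0)$, one lower edge of slope $0$, and hence $\mathrm{Trop}(g)=\{0\}$, matching the first example above.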
\begin{prop} \label{prop:lop} Suppose $g(z_1)\!:=\!\sum^t_{j=1} e^{a_jz_1+b_j}$ satisfies $a_1\!<\cdots<\!a_t$ and $b_j\!\in\!\mathbb{C}$ for all $j$. Suppose further that $w\!\in\!\mathrm{Trop}(g)$, $\ell$ is the unique index such that $(a_\ell,-\Re(b_\ell))$ is the right-hand vertex of the lower edge of $\mathrm{ArchNewt}(g)$ of slope $w$, and let $\delta_\ell\!:=\!\!\!\!\!\!\!\!\min\limits_{\text{\scalebox{.7}[1]{ $p,q\in[\ell] \& p\neq q$}}} \!\!\!\!\!|a_p-a_q|$.\linebreak \vspace{-.4cm} \noindent Then for any $N\!\in\!\mathbb{N}$ and $z_1\!\in\!\left[w+\frac{\log(N+1)}{\delta_\ell},\infty\right)\times \mathbb{R}$ we have $\left|\sum\limits^{\ell-1}_{j=1} e^{a_jz_1+b_j}\right|\!<\! \frac{1}{N}\left| e^{a_\ell z_1+b_\ell}\right|$. \end{prop} \noindent {\bf Proof:} First note that $2\!\leq\!\ell\!\leq\!t$ by construction. Let $\beta_j\!:=\!\Re(b_j)$, $r\!:=\!\Re(z_1)$, and note that \begin{eqnarray*} \left|\sum^{\ell-1}_{j=1} e^{a_jz_1+b_j}\right| & \leq & \sum^{\ell-1}_{j=1} \left|e^{a_jz_1+b_j}\right| = \sum^{\ell-1}_{j=1} e^{a_jr+\beta_j} = \sum^{\ell-1}_{j=1} e^{a_j(r-w)+a_jw+\beta_j} \end{eqnarray*} Now, since $a_{j+1}-a_j\!\geq\!\delta_\ell$ for all $j\!\in\!\{1, \ldots,\ell-1\}$, we obtain $a_j\!\leq\!a_\ell-(\ell-j)\delta_\ell$. So for $r\!>\!w$ we have $\displaystyle{\left|\sum^{\ell-1}_{j=1} e^{a_jz_1+b_j}\right| \leq \sum^{\ell-1}_{j=1} e^{(a_\ell-(\ell-j)\delta_\ell)(r-w)+a_jw+\beta_j}\\ \leq \sum^{\ell-1}_{j=1} e^{(a_\ell-(\ell-j)\delta_\ell)(r-w) +a_\ell w+\beta_\ell}}$, where the last inequality follows from Definition \ref{dfn:amoebaarchtrop}. 
So then \begin{eqnarray*} \left|\sum^{\ell-1}_{j=1} e^{a_jz_1+b_j}\right| &\leq & e^{(a_\ell-(\ell-1)\delta_\ell)(r-w)+a_\ell w+\beta_\ell} \sum^{\ell-1}_{j=1} e^{(j-1)\delta_\ell (r-w)} \\ & = & e^{(a_\ell-(\ell-1)\delta_\ell)(r-w)+a_\ell w+\beta_\ell} \left(\frac{e^{(\ell-1)\delta_\ell(r-w)}-1} {e^{\delta_\ell(r-w)}-1}\right)\\ & < & e^{(a_\ell-(\ell-1)\delta_\ell)(r-w)+a_\ell w+\beta_\ell} \left(\frac{e^{(\ell-1)\delta_\ell(r-w)}} {e^{\delta_\ell(r-w)}-1}\right) = \frac{e^{a_\ell r+\beta_\ell}} {e^{\delta_\ell(r-w)}-1} \end{eqnarray*} So to prove our desired inequality, it clearly suffices to enforce $e^{\delta_\ell(r-w)}-1\!\geq\!N$. The last inequality clearly holds for all $r\!\geq\!w+\frac{\log (N+1)}{\delta_\ell}$, so we are done. \qed \medskip It is then easy to prove that the largest (resp.\ smallest) point of $\Re(Z(g))$ can't be too much larger (resp.\ smaller) than the largest (resp.\ smallest) point of $\mathrm{Trop}(g)$. Put another way, we can give an explicit vertical strip containing all the complex roots of $g$. \begin{cor} \label{cor:extremecauchy} Suppose $g$ is a univariate exponential $t$-sum with real spectrum and minimal spacing $\delta(g)$, and $w_{\mathrm{min}}$ (resp.\ $\wp$) is $\min \mathrm{Trop}(g)$ (resp.\ $\max \mathrm{Trop}(g)$). Then $\Re(Z(g))$ is contained in the open interval $\left(w_{\mathrm{min}}-\frac{\log 2}{\delta(g)},\wp+\frac{\log 2}{\delta(g)} \right)$. \end{cor} \noindent The $\log 2$ in Corollary \ref{cor:extremecauchy} cannot be replaced by any smaller constant: For\linebreak $g(z_1)\!=\!e^{(t-1)z_1}-e^{(t-2)z_1}-\cdots -e^{z_1}-1$ we have $\delta(g)\!=\!1$, $\mathrm{Trop}(g)\!=\!\{0\}$, and it is easily checked that $\Re(Z(g))$ contains points approaching $\log 2$ as $t\longrightarrow \infty$. 
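Both Corollary \ref{cor:extremecauchy} and the near-tightness of the $\log 2$ are easy to check numerically: since the frequencies in the example above are integers, the substitution $x=e^{z_1}$ turns $g$ into the polynomial $x^{t-1}-x^{t-2}-\cdots-1$, and $\Re(Z(g))$ becomes the set of log-norms of its roots. A self-contained pure-Python sketch (a Durand--Kerner iteration; the helper name is ours):

```python
import cmath
import math

def poly_roots(coeffs, iters=400):
    """All complex roots of a monic polynomial, via Durand-Kerner iteration.
    coeffs[k] is the coefficient of x^k, with coeffs[-1] == 1."""
    n = len(coeffs) - 1
    def p(x):
        v = 0j
        for c in reversed(coeffs):  # Horner evaluation from the leading term
            v = v * x + c
        return v
    zs = [(0.4 + 0.9j) ** k for k in range(n)]  # standard distinct seeds
    for _ in range(iters):
        for i in range(n):
            d = 1
            for j in range(n):
                if j != i:
                    d *= zs[i] - zs[j]
            zs[i] -= p(zs[i]) / d
    return zs

t = 12  # here delta(g) = 1 and Trop(g) = {0}
coeffs = [-1.0] * (t - 1) + [1.0]  # x^{t-1} - x^{t-2} - ... - 1
logs = [math.log(abs(z)) for z in poly_roots(coeffs)]
```

Numerically, every computed log-norm indeed lies in the open interval $(-\log 2,\log 2)$, and the largest one is already within $10^{-2}$ of $\log 2$ at $t=12$.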
While the polynomial analogue of Corollary \ref{cor:extremecauchy} goes back to work of Cauchy, Birkhoff, and Fujiwara pre-dating 1916 (see \cite[pp.\ 243--249, particularly bound 8.1.11 on pg.\ 247]{rs} and \cite{fujiwara} for further background), we were unable to find an explicit bound for exponential sums like Corollary \ref{cor:extremecauchy} in the literature. So we supply a proof below. \medskip \noindent {\bf Proof of Corollary \ref{cor:extremecauchy}:} Replacing $z_1$ by its negative, it clearly suffices to prove\linebreak $\Re(Z(g))\!\subset\!\left(-\infty,\wp+\frac{\log 2}{\delta(g)}\right)$. Writing $g(z_1)\!=\!\sum^t_{j=1}e^{a_jz_1+b_j}$ with $a_1\!<\cdots<\!a_t$, let $\zeta$\linebreak denote any root of $g$, $r\!:=\!\Re(\zeta)$, and $\beta_j\!:=\!\Re(b_j)$ for all $j$. Since we must have\linebreak $\sum^{t-1}_{j=1}e^{a_j\zeta+b_j}\!=\!-e^{a_t\zeta+b_t}$, taking absolute values implies that $\left|\sum^{t-1}_{j=1}e^{a_j\zeta+b_j}\right| \!=\!\left|e^{a_t\zeta+b_t}\right|$.\linebreak \scalebox{.92}[1]{However, this equality is contradicted by Proposition \ref{prop:lop} for $\Re(z_1)\!\geq\!\wp+ \frac{\log 2}{\delta(g)}$. So we are done. \qed } \medskip Another simple consequence of our term domination trick (Proposition \ref{prop:lop} above) is that we can give explicit vertical strips in $\mathbb{C}$ free of roots of $g$. \begin{cor} \label{cor:gap} Suppose $g(z_1)\!:=\!\sum^t_{j=1} e^{a_jz_1+b_j}$ satisfies $a_1\!<\cdots<\!a_t$, $b_j\!\in\!\mathbb{C}$ for all $j$, and that $w_1$ and $w_2$ are {\em consecutive} points of $\mathrm{Trop}(g)$ satisfying $w_2\!\geq\!w_1+\frac{2\log 3}{\delta(g)}$. Let $\ell$ be the unique index such that $(a_\ell,-\Re(b_\ell))$ is the vertex of $\mathrm{ArchNewt}(g)$ incident to lower edges of slopes $w_1$ and $w_2$. Then the vertical strip $\left[w_1+\frac{\log 3}{\delta(g)},w_2-\frac{\log 3}{\delta(g)}\right] \times \mathbb{R}$ contains {\em no} roots of $g$. 
\end{cor} \noindent {\bf Proof:} By Proposition \ref{prop:lop}, we have $\left|\sum^{\ell-1}_{j=1} e^{a_jz_1+b_j}\right|\!<\!\frac{1}{2} \left|e^{a_\ell z_1+b_\ell}\right|$ for all $z_1\!\in\!\left[w_1+\frac{\log 3} {\delta(g)},\infty\right)\times \mathbb{R}$ and (employing the change of variables $z_1\mapsto -z_1$) $\left|\sum^t_{j=\ell+1} e^{a_jz_1+b_j}\right|\!<\!\frac{1}{2} \left|e^{a_\ell z_1+b_\ell}\right|$ for all\linebreak \scalebox{.93}[1]{$z_1\!\in\!\left(-\infty,w_2-\frac{\log 3}{\delta(g)} \right]\times \mathbb{R}$. So we obtain $\left|\sum_{j\neq \ell} e^{a_jz_1+b_j}\right|\!<\! \left|e^{a_\ell z_1+b_\ell} \right|$ in the stated vertical strip, and this}\linebreak \scalebox{1}[1]{inequality clearly contradicts the existence of a root of $g$ in $\left[w_1+\frac{\log 3}{\delta(g)},w_2 -\frac{\log 3}{\delta(g)}\right]\times \mathbb{R}$. \qed} \begin{rem} Corollary \ref{cor:strip} from the introduction follows immediately from Corollaries \ref{cor:extremecauchy} and \ref{cor:gap}. $\diamond$ \end{rem} \medskip Let us now recall a result of Wilder \cite{wilder} (later significantly refined by Voorhoeve \cite{voorhoeve}) that tightly estimates the number of roots of exponential sums in infinite horizontal strips of $\mathbb{C}$. Let $\Im(\alpha)$ denote the imaginary part of $\alpha\!\in\!\mathbb{C}$ and let $\langle x \rangle \!:=\!\min_{u\in\mathbb{Z}}|x-u|$ be the distance of $x$ to the nearest integer. \begin{wvt} \cite[Thm.\ 5.3]{voorhoeve} For any univariate exponential $t$-sum $g$ with real frequencies $a_1\!<\cdots<\!a_t$ and $u\!\leq\!v$ let $H_{u,v}$ denote the number of roots of $g$, counting multiplicity, in the infinite horizontal strip $\left\{z_1\!\in\!\mathbb{C}\; | \; \Im(z_1)\!\in\![u,v] \right\}$. Then\\ \mbox{}\hfill $\displaystyle{\left|H_{u,v}-\frac{v-u}{2\pi}(a_t-a_1)\right|\leq t-1 -\sum^t_{j=2} \left\langle \frac{(v-u)(a_j-a_{j-1})}{2\pi}\right\rangle}$. 
\hfill \qed \end{wvt} \noindent We will ultimately refine the Wilder-Voorhoeve Theorem into a {\em localized} deviation bound (Theorem \ref{thm:rec} below) counting the roots of $g$ in special axis-parallel rectangles in $\mathbb{C}$. For this, we will need to look more closely at the variation of the argument of $g$ on certain vertical and horizontal segments. \subsection{Winding Numbers and Density of Roots in Rectangles and Vertical Strips} \label{sub:rouche} To count roots of exponential sums in rectangles, it will be useful to observe a basic fact on winding numbers for {\em non}-closed curves. \begin{prop} \label{prop:rouche} Suppose $I\!\subset\!\mathbb{C}$ is any compact line segment and $g$ and $h$ are functions\linebreak \scalebox{.9}[1]{analytic on a neighborhood of $I$ with $|h(z)|<|g(z)|$ for all $z\!\in\!I$. Then $\left|\Im\left(\int_I\frac{g'+h'}{g+h}dz-\int_I\frac{g'}{g}dz\right) \right|<\pi$.} \end{prop} \noindent {\bf Proof:} The quantity $V_1\!:=\!\Im\left(\int_I\frac{g'}{g}dz\right)$ (resp.\ $V_2\!:=\!\Im\left(\int_I\frac{g'+h'}{g+h}dz\right)$) is nothing more than the variation of the argument of $g$ (resp.\ $g+h$) along the segment $I$. Since $I$ is compact, $|g|$ and $|g+h|$ are bounded away from $0$ on $I$ by construction. So we can lift the paths $g(I)$ and $(g+h)(I)$ (in $\C^*$) to the universal covering space induced by the extended logarithm function. Clearly then, $V_1$ (resp.\ $V_2$) is simply a difference of values of $\Im(\mathrm{Log}(g))$ (resp.\linebreak $\Im(\mathrm{Log}(g+h))$), evaluated at the endpoints of $I$, where different branches of $\mathrm{Log}$ may be used at each endpoint. In particular, at any fixed endpoint $z$, our assumptions on $|g|$ and $|h|$ clearly imply that $g(z)+h(z)$ and $g(z)$ both lie in the open half-plane $\{v\!\in\!\mathbb{R}^2 \; | \; v\cdot g(z)\!>\!0\}$ (identifying $\mathbb{C}$ with $\mathbb{R}^2$). 
So $|\Im(\mathrm{Log}(g(z)+h(z))) -\Im(\mathrm{Log}(g(z)))|\!<\!\frac{\pi}{2}$ at the two endpoints of $I$, and thus $|V_1-V_2|\!<\!\frac{\pi}{2}+\frac{\pi}{2}\!=\!\pi$. \qed \medskip Re-examining Corollary \ref{cor:strip} from the last section, one quickly sees that the vertical strips in $\mathbb{C}$ containing the roots of a univariate exponential sum $g$ correspond exactly to clusters of ``closely spaced'' consecutive points of $\mathrm{Trop}(g)$. These clusters of points in $\mathrm{Trop}(g)$ in turn correspond to certain sub-summands of $g$. In particular, sets of consecutive ``large'' (resp.\ ``small'') points of $\mathrm{Trop}(g)$ correspond to sums of ``high'' (resp.\ ``low'') order terms of $g$. Our next step will then be to relate the roots of a high (or low) order summand of $g$ to an {\em explicit portion} of the roots of $g$. \begin{lemma} \label{lemma:vert} Let $g(z_1)\!:=\!\sum^t_{j=1} e^{a_jz_1+b_j}$ with $a_1\!<\cdots<\!a_t$ and $b_j\!\in\!\mathbb{C}$ for all $j$, $u\!\leq\!v$, and let $w_{\mathrm{min}}$ (resp.\ $\wp$) be $\min\mathrm{Trop}(g)$ (resp.\ $\max\mathrm{Trop}(g)$). Also let $w_1$ and $w_2$ be {\em consecutive} points of $\mathrm{Trop}(g)$ satisfying $w_{\mathrm{min}}\!<\!w_1\!<\!w_2\!<\!\wp$ and let $\ell$ be the unique index such that $(a_\ell,-\Re(b_\ell))$ is the vertex of $\mathrm{ArchNewt}(g)$ incident to lower edges of slopes $w_1$ and $w_2$ (so $2\!\leq\!\ell\! \leq\!t-1$). Finally, assume $w_2-w_1\!\geq\!\frac{2\log 3}{\delta(g)}$, and let $R^1_{u,v}$ and $R^2_{u,v}$ respectively denote the number of roots of $g$, counting multiplicity, in the rectangles $\left(w_{\mathrm{min}}-\frac{\log 2}{\delta(g)},w_1+\frac{\log 3}{\delta(g)}\right) \times[u,v]$ and $\left(w_2-\frac{\log 3}{\delta(g)},\wp+\frac{\log 2} {\delta(g)}\right)\times[u,v]$. 
Then\\ \mbox{}\hfill $\displaystyle{\left|R^1_{u,v}-\frac{v-u}{2\pi}(a_\ell-a_1) \right|\!\leq\!\varepsilon_1+1}$ \ \ and \ \ $\displaystyle{\left|R^2_{u,v}-\frac{v-u}{2\pi}(a_t-a_\ell) \right|\!\leq\!\varepsilon_2+1}$, \hfill \mbox{}\\ where $\varepsilon_1,\varepsilon_2\!\geq\!0$ and $\varepsilon_1+\varepsilon_2\!\leq\!t-1-\sum^t_{j=2} \left\langle \frac{(v-u)(a_j-a_{j-1})}{2\pi}\right\rangle$. \end{lemma} \noindent When $\mathrm{Trop}(g)$ has two adjacent points sufficiently far apart (as detailed above), Lemma \ref{lemma:vert} thus refines the Wilder-Voorhoeve Theorem. Lemma \ref{lemma:vert} also considerably generalizes an earlier root count for the polynomial case presented in \cite[Lemma 2.8]{aknr}: Rephrased in terms of the notation above, the older root count from \cite[Lemma 2.8] {aknr} becomes the equalities $R^1_{0,2\pi}\!=\!a_\ell-a_1$ and $R^2_{0,2\pi}\!=\!a_t-a_\ell$ for the special case $A\!\subset\!\mathbb{Z}$. \medskip \noindent {\bf Proof of Lemma \ref{lemma:vert}:} By symmetry (with respect to replacing $z_1$ by $-z_1$) it clearly suffices to prove the estimate for $R^2_{u,v}$. Since $g$ is analytic, the Argument Principle (see, e.g., \cite{ahlfors}) tells us that\\ \mbox{}\hfill $\displaystyle{R^2_{u,v}=\frac{1}{2\pi\sqrt{-1}} \int_{I_-\cup I_+\cup J_-\cup J_+} \frac{g'}{g}dz}$\hfill\mbox{}\\ where $I_-$ (resp.\ $I_+$, $J_-$, $J_+$) is the oriented line segment from \\ \mbox{}\hfill $\left(w_2-\frac{\log 3}{\delta(g)},v\right)$ (resp.\ $\left(\wp+\frac{\log 2}{\delta(g)},u\right)$, $\left(w_2-\frac{\log 3}{\delta(g)},u\right)$, $\left(\wp+\frac{\log 2}{\delta(g)},v\right)$)\hfill\mbox{}\\ to\\ \mbox{}\hfill $\left(w_2-\frac{\log 3}{\delta(g)},u\right)$ (resp.\ $\left(\wp+\frac{\log 2}{\delta(g)},v\right)$, $\left(\wp+\frac{\log 2}{\delta(g)},u\right)$, $\left(w_2-\frac{\log 3}{\delta(g)},v\right)$),\hfill\mbox{}\\ assuming no root of $g$ lies on $I_-\cup I_+\cup J_-\cup J_+$. 
By Corollaries \ref{cor:extremecauchy} and \ref{cor:gap}, there can be no roots of $g$ on $I_-\cup I_+$. So let us assume temporarily that there are no roots of $g$ on $J_-\cup J_+$. Since $w_2-\frac{\log 3}{\delta(g)}\!\geq\!w_1+\frac{\log 3}{\delta(g)}$ by assumption, Proposition \ref{prop:lop} tells us that\\ \mbox{}\hfill $\frac{1}{2}\left|e^{a_\ell \left(w_2-\frac{\log 3}{\delta(g)}+\sqrt{-1} v \right) +b_\ell}\right|\!>\! \left|\sum\limits^{\ell-1}_{j=1}e^{a_j\left(w_2-\frac{\log 3}{\delta(g)} +\sqrt{-1} v \right)+b_j} \right|$ \hfill\mbox{}\\ and, by symmetry and another application of Proposition \ref{prop:lop},\\ \mbox{}\hfill $\frac{1}{2}\left|e^{a_\ell \left(w_2-\frac{\log 3}{\delta(g)} +\sqrt{-1} v \right) +b_\ell}\right|\!>\! \left|\sum\limits^t_{j=\ell+1}e^{a_j\left(w_2-\frac{\log 3}{\delta(g)} +\sqrt{-1} v \right)+b_j} \right|$. \hfill\mbox{}\\ So we can apply Proposition \ref{prop:rouche} and deduce that $\left|\Im\left(\int_{I_-}\frac{g'}{g}dz- \int_{I_-}\frac{(e^{a_\ell z+b_\ell})'}{e^{a_\ell z+b_\ell}}dz\right) \right|\!<\!\pi$. So then, since the last integral easily evaluates to $a_\ell(u-v)\sqrt{-1}$, we clearly obtain $\displaystyle{\left|\left(\frac{1}{2\pi\sqrt{-1}}\int_{I_-}\frac{g'}{g}dz \right) -\frac{a_\ell(u-v)}{2\pi}\right|\!<\!\frac{1}{2}}$. An almost identical argument\linebreak (applying Propositions \ref{prop:lop} and \ref{prop:rouche} again, but with the term $e^{a_tz+b_t}$ dominating instead) then also yields $\displaystyle{\left|\left(\frac{1}{2\pi\sqrt{-1}}\int_{I_+}\frac{g'}{g}dz \right) -\frac{a_t(v-u)}{2\pi} \right|\!<\!\frac{1}{2}}$. 
So now we need only prove sufficiently sharp estimates on $\frac{1}{2\pi\sqrt{-1}}\int_{J_\pm}\frac{g'}{g}dz$: \begin{eqnarray*} \left|\int_{J_-\cup J_+}\Im\!\left(\frac{g'}{g}\right)dz\right| & =& \left|\int^{\wp+\frac{\log 2}{\delta(g)}}_{w_2-\frac{\log 3}{\delta(g)}} \Im\!\left(\frac{g'\!\left(z+u\sqrt{-1}\right)}{g\!\left(z+u\sqrt{-1}\right)} -\frac{g'\!\left(z+v\sqrt{-1}\right)}{g\!\left(z+v\sqrt{-1}\right)}\right)dz \right|\\ & \leq & \int^{\wp+\frac{\log 2}{\delta(g)}}_{w_2-\frac{\log 3}{\delta(g)}} \left|\Im\left(\frac{g'\!\left(z+u\sqrt{-1}\right)}{g\!\left(z+u\sqrt{-1} \right)}-\frac{g'\!\left(z+v\sqrt{-1}\right)}{g\!\left(z+v\sqrt{-1}\right)} \right)\right|dz\\ & =: & K\!\left(w_2-\frac{\log 3}{\delta(g)},\wp+\frac{\log 2}{\delta(g)};u,v; g\right). \end{eqnarray*} A quantity closely related to $K\!\left(x_1,x_2;u,v;g\right)$ was, quite fortunately, already studied in Voorhoeve's 1977 Ph.D. thesis: In our notation, the proof of \cite[Thm.\ 5.3]{voorhoeve} immediately yields $\lim \limits_{x\rightarrow \infty} K(-x,x;u,v;g)\!=\!t-1-\sum^t_{j=2} \left\langle \frac{(v-u)(a_j-a_{j-1})}{2\pi}\right\rangle$. In particular, by the additivity of integration, the nonnegativity of the underlying integrands, and taking\linebreak $\varepsilon_1\!:=\!K\!\left(w_{\mathrm{min}}-\frac{\log 2}{\delta(g)}, w_1+\frac{\log 3}{\delta(g)};u,v;g\right)$ and $\varepsilon_2\!:=\!K\!\left(w_2-\frac{\log 3}{\delta(g)},\wp+\frac{\log 2}{ \delta(g)};u,v;g\right)$, we\linebreak obtain $\displaystyle{\left|\int_{J_-\cup J_+}\Im\!\left(\frac{g'}{g}\right)dz \right|\leq\varepsilon_2}$, with $\varepsilon_1,\varepsilon_2\!\geq\!0$ and $\varepsilon_1+\varepsilon_2\!\leq\! t-1-\sum^t_{j=2}\left\langle \frac{(v-u)(a_j-a_{j-1})}{2\pi}\right\rangle$. Adding terms and errors, we then clearly obtain $\displaystyle{\left|R^2_{u,v}-\frac{v-u}{2\pi}(a_t-a_\ell) \right|\!<\!\varepsilon_2+1}$, in the special case where no roots of $g$ lie on $J_-\cup J_+$. 
To address the case where a root of $g$ lies on $J_-\cup J_+$, note that the analyticity of $g$ implies that the roots of $g$ are a discrete subset of $\mathbb{C}$. So we can find arbitrarily small $\eta\!>\!0$ with the boundary of the slightly stretched rectangle $\left(w_2-\frac{\log 3}{\delta(g)},\wp+\frac{\log 2} {\delta(g)}\right)\times [u-\eta,v+\eta]$ not intersecting any roots of $g$. So then, by the special case of our lemma already proved, $\displaystyle{\left|R^2_{u-\eta,v+\eta}-\frac{v-u+2\eta}{2\pi}(a_t-a_\ell) \right|\!<\!\varepsilon'_2+1}$, with $\varepsilon'_1,\varepsilon'_2\!\geq\!0$ and $\varepsilon'_1+\varepsilon'_2\!\leq\!t-1 -\sum^t_{j=2}\left\langle \frac{(v-u+2\eta)(a_j-a_{j-1})}{2\pi}\right\rangle$. Clearly, $R^2_{u-\eta,v+\eta}\!=\!R^2_{u,v}$ for $\eta$ sufficiently small, and the limit of the preceding estimate for $R^2_{u-\eta,v+\eta}$ tends to the estimate stated in our lemma. So we are done. \qed \medskip We at last arrive at our strongest refinement of the Wilder-Voorhoeve Theorem. \begin{thm} \label{thm:rec} Suppose $g(z_1)\!=\!\sum^t_{j=1}e^{a_jz_1+b_j}$, $a_1\!<\cdots<\!a_t$, and $C$ is any connected component of the open $\frac{\log 3}{\delta(g)}$-neighborhood of $\mathrm{Trop}(g)$. Also let $w_{\mathrm{min}}(C)$ (resp.\ $\wp(C)$) be $\min(\mathrm{Trop}(g)\cap C)$ (resp.\ $\max(\mathrm{Trop}(g)\cap C)$) and let $i$ (resp.\ $j$) be the unique index such that $(a_i,-\Re(b_i))$ is the left-most (resp.\ right-most) vertex of the lower edge of $\mathrm{ArchNewt}(g)$ of slope $w_{\mathrm{min}}(C)$ (resp.\ $\wp(C)$). Finally, let $R_{C,u,v}$ denote the number of roots of $g$, counting multiplicity, in the rectangle $C\times [u,v]$. 
Then\\ \mbox{}\hfill $\displaystyle{\left|R_{C,u,v}-\frac{v-u}{2\pi}(a_j-a_i) \right|\leq \varepsilon_C+1}$,\hfill\mbox{}\\ where $\varepsilon_C\!\geq\!0$ and the sum of $\varepsilon_C$ over all such connected components $C$ is no greater than $t-1-\sum^t_{j=2} \left\langle \frac{(v-u)(a_j-a_{j-1})}{2\pi}\right\rangle$. \end{thm} \noindent Note that Lemma \ref{lemma:vert} is essentially the special case of Theorem \ref{thm:rec} where $C$ is the leftmost or rightmost connected component specified above. Note also that a special case of Theorem \ref{thm:rec} implies that the fraction of roots of $g$ lying in $C\times \mathbb{R}$ (i.e., the ratio $\lim\limits_{v-u\rightarrow \infty} \frac{R_{C,u,v}}{H_{u,v}}$, using the notation from our statement of the Wilder-Voorhoeve Theorem) is exactly $\frac{a_j-a_i}{a_t-a_1}$. This density of roots localized to a vertical strip can also be interpreted as the average value of the function $1$, evaluated at all roots of $g$ in $C\times \mathbb{R}$. Soprunova has studied the average value of general analytic functions $h$, evaluated at the roots (in a sufficiently large vertical strip) of an exponential sum \cite{soprunova}. Theorem \ref{thm:rec} thus refines the notion of the ``average value of $1$ over the roots of $g$ in $\mathbb{C}$'' in a different direction. \medskip \noindent {\bf Proof of Theorem \ref{thm:rec}:} The argument is almost identical to the proof of Lemma \ref{lemma:vert}, save for the horizontal endpoints of the rectangle and the dominating terms in the application of Proposition \ref{prop:lop} being slightly different. \qed \medskip A consequence of our development so far, particularly Corollary \ref{cor:strip}, is that every point of $\Re(Z(g))$ is close to some point of $\mathrm{Trop}(g)$. We now show that every point of $\mathrm{Trop}(g)$ is close to some point of $\Re(Z(g))$. 
The key trick is to break $\mathrm{Trop}(g)$ into clusters of closely spaced points, and use the fact that every connected component $C$ (from Theorem \ref{thm:rec}) contains at least one real part of a complex root of $g$. \begin{thm} \label{thm:uni} Suppose $g$ is any univariate exponential $t$-sum with real spectrum and $t\!\geq\!2$. Let $s$ be the maximum cardinality of $\mathrm{Trop}(g)\cap C$ for any connected component $C$ of the open $\frac{\log 3} {\delta(g)}$-neighborhood of $\mathrm{Trop}(g)$. (So $1\!\leq\!s\!\leq\!t-1$ in particular.) Then for any $v\!\in\!\mathrm{Trop}(g)$ there is a root $z\!\in\!\mathbb{C}$ of $g$ with $|\Re(z)-v|\!\leq\!\frac{(2s-1)\log 3}{\delta(g)}$. \end{thm} \noindent {\bf Proof:} For convenience, for the next two paragraphs we will allow negative indices $i$ for $\sigma_i\!\in\!\mathrm{Trop}(g)$ (but we will continue to assume $\sigma_i$ is increasing in $i$). Let us define $R$ to be the largest $j$ with $v,\sigma_1,\ldots,\sigma_j$ being consecutive points of $\mathrm{Trop}(g)$ in increasing order, $\sigma_1-v\!\leq\!\frac{2\log 3}{\delta(g)}$, and $\sigma_{i+1}-\sigma_i\!\leq\!\frac{2\log 3}{\delta(g)}$ for all $i\!\in\![j-1]$. (We set $R\!=\!0$\linebreak \scalebox{.94}[1]{should no such $j$ exist.) Similarly, let us define $L$ to be the largest $j$ with $v,\sigma_{-1},\ldots,\sigma_{-j}\!\in\!\mathrm{Trop}(g)$}\linebreak being consecutive points of $\mathrm{Trop}(g)$ in decreasing order, $v-\sigma_{-1}\!\leq\!\frac{2\log 3}{\delta(g)}$, and $\sigma_{-i}-\sigma_{-i-1}$\linebreak $\leq\!\frac{2\log 3}{\delta(g)}$ for all $i\!\in\![j-1]$. (We set $L\!=\!0$ should no such $j$ exist.) Note that $L+R+1\!\leq\!s$. By Theorem \ref{thm:rec} there must then be at least one point of $\Re(Z(g))$ in the interval\linebreak $\left[v-(2L+1)\frac{\log 3}{\delta(g)},v+(2R+1)\frac{\log 3}{\delta(g)} \right]$. So there must be a point of $\Re(Z(g))$ within distance $(2\max\{L,R\}+1)\frac{\log 3}{\delta(g)}$ of $v$. 
Since $2L+2,2R+2\!\leq\!2s$, we are done. \qed \medskip At this point, we are almost ready to prove our main theorems. The remaining fact we need is a generalization of Corollary \ref{cor:strip} to arbitrary dimension. \subsection{A Quick Distance Bound in Arbitrary Dimension} Having proved an upper bound for the largest point of $\Re(Z(g))$, one may wonder if there is a {\em lower bound} for the largest point of $\Re(Z(g))$. Montel proved (in different notation) the univariate polynomial analogue of the assertion that the largest points of $\Re(Z(g))$ and $\mathrm{Trop}(g)$ differ by no more than $\log(t-1)$ \cite{montel}. One can in fact guarantee that {\em every} point of $\Re(Z(g))$ is close to some point of $\mathrm{Trop}(g)$, and in arbitrary dimension. \begin{lemma} \label{lemma:martin} For any $n$-variate exponential $t$-sum $g$ with real spectrum and $t\!\geq\!2$ we have $\displaystyle{\sup \limits_{\text{\scalebox{.7}[1]{$r\in\Re(Z(g))$}}} \inf \limits_{\substack{\mbox{} \\ \text{\scalebox{.7}[1]{$w\in\mathrm{Trop}(g)$}}}} |r-w|\leq \log(t-1)/\delta(g)}$. \end{lemma} \noindent {\bf Proof:} Let $z\!\in\!Z(g)$ and assume without loss of generality that\\ \mbox{}\hfill $\left|e^{a_1\cdot z+b_1}\right|\!\geq\! \left|e^{a_2\cdot z+b_2}\right| \!\geq \cdots \geq\!\left|e^{a_t\cdot z+b_t}\right|$. \hfill \mbox{}\\ Since $g(z)\!=\!0$ implies that $\left|e^{a_1\cdot z+b_1}\right|\!=\!\left|e^{a_2\cdot z+b_2}+ \cdots+e^{a_t\cdot z+b_t}\right|$, the Triangle Inequality\linebreak immediately implies that $\left|e^{a_1\cdot z+b_1}\right|\!\leq\!(t-1)\left|e^{a_2\cdot z+b_2} \right|$. 
Taking logarithms, and letting $\rho\!:=\!\Re(z)$ and $\beta_i\!:=\!\Re(b_i)$ for all $i$, we then obtain \begin{eqnarray} \label{eqn:3} & a_1\cdot \rho+\beta_1\geq \cdots \geq a_t\cdot \rho +\beta_t& \text{ and} \end{eqnarray} \begin{eqnarray} \label{eqn:4} & a_1\cdot \rho +\beta_1\leq \log(t-1)+a_2\cdot \rho +\beta_2& \end{eqnarray} For each $i\!\in\!\{2,\ldots,t\}$ let us then define $\eta_i$ to be the shortest vector such that \\ \mbox{}\hfill $a_1\cdot (\rho+\eta_i)+\beta_1 =a_i\cdot (\rho+\eta_i)+\beta_i$. \hfill\mbox{}\\ Note that $\eta_i\!=\!\lambda_i(a_i-a_1)$ for some nonnegative $\lambda_i$ since we are trying to affect the dot-product $\eta_i\cdot(a_1-a_i)$. In particular, $\lambda_i\!=\!\frac{(a_1-a_i)\cdot \rho+\beta_1-\beta_i}{|a_1-a_i|^2}$ so that $|\eta_i|\!=\!\frac{(a_1-a_i)\cdot \rho + \beta_1-\beta_i} {|a_1-a_i|}$. (Indeed, Inequality (\ref{eqn:3}) implies that $(a_1-a_i)\cdot \rho+\beta_1-\beta_i\!\geq\!0$.) Inequality (\ref{eqn:4}) implies that $(a_1-a_2)\cdot \rho+\beta_1-\beta_2\!\leq\!\log(t-1)$. We thus obtain\linebreak $|\eta_2|\!\leq\!\frac{\log(t-1)} {|a_1-a_2|}\!\leq\!\frac{\log(t-1)}{\delta(g)}$. So let $i_0\!\in\!\{2,\ldots,t\}$ be any $i$ minimizing $|\eta_i|$. We of course have $|\eta_{i_0}|\!\leq\!\log(t-1)/\delta(g)$, and by the definition of $\eta_{i_0}$ we have\\ \mbox{}\hfill $a_1\cdot (\rho+\eta_{i_0})+\beta_1\!=\!a_{i_0}\cdot (\rho+\eta_{i_0}) +\beta_{i_0}$.\hfill\mbox{}\\ Moreover, the fact that $\eta_{i_0}$ is the shortest among the $\eta_i$ implies that\\ \mbox{}\hfill $a_1\cdot (\rho+\eta_{i_0})+\beta_1\!\geq\!a_i\cdot (\rho+ \eta_{i_0})+\beta_i$\hfill\mbox{}\\ for all $i$. Otherwise, we would have $a_1\cdot (\rho+\eta_{i_0})+\beta_1\!<\!a_i\cdot (\rho+\eta_{i_0})+\beta_i$ and $a_1\cdot \rho+\beta_1\!\geq\!a_i\cdot \rho+\beta_i$ (the latter following from Inequality (\ref{eqn:3})). 
Taking a convex linear combination of the last two inequalities, it is then clear that there must be a $\mu\!\in\![0,1)$ such that\\ \mbox{}\hfill $a_1\cdot (\rho+\mu\eta_{i_0})+\beta_1\!=\!a_i\cdot (\rho+\mu\eta_{i_0})+\beta_i$.\hfill\mbox{}\\ Thus, by the definition of $\eta_i$, we would obtain $|\eta_i| \!\leq\!\mu|\eta_{i_0}|\!<\!|\eta_{i_0}|$ --- a contradiction. We thus have the following: (i) $a_1\cdot (\rho+\eta_{i_0})-(-\beta_1)\!=\! a_{i_0}\cdot (\rho+\eta_{i_0})-(-\beta_{i_0})$,\linebreak (ii) $a_1\cdot (\rho+\eta_{i_0})-(-\beta_1)\!\geq\! a_i\cdot (\rho+\eta_{i_0})-(-\beta_i)$ for all $i$, and (iii) $|\eta_{i_0}|\!\leq\!\log (t-1)/\delta(g)$. Together, these inequalities imply that $\rho+\eta_{i_0}\!\in\!\mathrm{Trop}(g)$. In other words, we've found a point in $\mathrm{Trop}(g)$ sufficiently near $\rho$ to prove our desired upper bound. \qed \section{Small Ball Probability} \label{sec:ball} Let $G_{n,k}$ be the Grassmannian of $k$-dimensional subspaces of $\R^n$, equipped with its unique rotation-invariant Haar probability measure $\mu_{n,k}$. The following ``small ball probability'' estimate holds. \begin{lemma} \label{lemma:proj} \cite[Lemma 3.2]{gpv} \noindent Let $1\!\leq\!k\!\leq n-1$, $x\!\in\!\mathbb R^{n}$, and $\varepsilon\!\leq\!\frac{1}{\sqrt{e}}$. Then \begin{equation*} \mu_{n,k} \left( \left\{ F\!\in\!G_{n,k} \; \left| \; |P_F(x)|\!\leq\!\varepsilon \sqrt{\frac{k}{n}}|x|\right.\right\}\right) \leq \left( \sqrt{e} \varepsilon\right)^{k}, \end{equation*} where $P_F$ is the surjective orthogonal projection mapping $\R^n$ onto $F$. \qed \end{lemma} \noindent An important precursor, in the context of bounding distortion under more general Euclidean embeddings, appears in \cite{matousek}. A simple consequence of the preceding metric result is the following fact on the existence of projections mapping a high-dimensional point set onto a lower-dimensional subspace in a way that preserves the minimal spacing as much as possible. 
\begin{prop} \label{prop:proj} Let $\gamma\!>\!0$ and $x_{1}, \ldots , x_{N}\!\in\!\R^n$ be such that $|x_{i}- x_{j}|\!\geq\!\gamma$ for all distinct $i,j$. Then, following the notation of Lemma \ref{lemma:proj}, there exists $F\!\in\!G_{n,k}$ such that\linebreak $\displaystyle{|P_F(x_{i}) - P_F(x_{j}) | \geq \sqrt{\frac{k}{en}} \frac{\gamma}{N^{2/k}}}$ for all distinct $i,j$. \end{prop} \noindent {\bf Proof:} Let $z_{\{i,j\}}:= x_{i}- x_{j}$. Then our assumption becomes $|z_{\{i,j\}}|\!\geq\!\gamma$ for all distinct $i,j$ and there are no more than $N(N-1)/2$ such pairs $\{i,j\}$. By Lemma \ref{lemma:proj} we have, for any fixed $\{i,j\}$, that $|P_F(z_{\{i,j\}})|\!\geq\!\varepsilon \sqrt{\frac{k}{n}} |z_{\{i,j\}}|$ with probability at least $1-\left( \sqrt{e} \varepsilon\right)^{k}$. So if $\varepsilon < \frac{1}{\sqrt{e}} \left( \frac{2}{ N(N-1)} \right)^{1/k}$, the union bound for probabilities implies that, for all distinct $i,j$, we have $|P_F(z_{\{i,j\}})|\!\geq\!\varepsilon \sqrt{\frac{k}{n}} |z_{\{i,j\}}|\!\geq\!\varepsilon \gamma \sqrt{\frac{k}{n}}$ (and thus $|P_F(x_{i}) - P_F(x_{j})|\!\geq\!\varepsilon \gamma \sqrt{\frac{k}{n}}$) with probability at least $1-\frac{N(N-1)}{2} (\sqrt{e} \varepsilon)^{k}$. Since this lower bound is positive by construction, we can conclude by choosing $\varepsilon\!:=\!\frac{1}{\sqrt{e} N^{2/k}}$. \qed \section{Proof of Theorem \ref{thm:finally}} \label{sec:finally} The assertion that $t\!\geq\!d+1$ is easy since any $d$-dimensional polytope always has at least $d+1$ vertices. So we now focus on the rest of the theorem. We prove Assertion 1(b) last. In what follows, for any real $n\times n$ matrix $M$ and $z\!\in\!\R^n$, we assume that $z$ is a column vector when we write $Mz$. Also, for any subset $S\!\subseteq\!\R^n$, the notation $MS\!:=\!\{Mz\; | \; z\!\in\!S\}$ is understood. The following simple functoriality properties of $\mathrm{Trop}(g)$ and $\Re(Z(g))$ will prove useful. 
\begin{prop} \label{prop:rescale} Suppose $g_1$ and $g_2$ are $n$-variate exponential $t$-sums, $\alpha\!\in\!\C^*$, $a\!\in\!\R^n$, $\beta\!:=\!(\beta_1,\ldots,\beta_n)\!\in\!\C^n$, and $g_2$ satisfies the identity $g_2(z)\!=\!\alpha e^{a\cdot z} g_1(z_1+\beta_1,\ldots,z_n+\beta_n)$. Then $\Re(Z(g_2))\!=\!\Re(Z(g_1))-\Re(\beta)$ and $\mathrm{Trop}(g_2)\!=\!\mathrm{Trop}(g_1)-\Re(\beta)$. Also, if $M\!\in\!\mathbb{R}^{n\times n}$ and we\linebreak \scalebox{.91}[1]{instead have the identity $g_2(z)\!=\!g_1(Mz)$, then $M\Re(Z(g_2))\!=\!\Re(Z(g_1))$ and $M\mathrm{Trop}(g_2)\!=\!\mathrm{Trop}(g_1)$. \qed} \end{prop} \subsection{Proof of Assertion (0)} First note that, thanks to Proposition \ref{prop:rescale}, an invertible linear change of variables allows us to reduce to the special case $A\!=\!\{\mathbf{O},e_1,\ldots,e_n\}$, where $\mathbf{O}$ and $\{e_1,\ldots,e_n\}$ are respectively the origin and standard basis vectors of $\R^n$. But this special case is well known: One can either prove it directly, or appeal to earlier work of Rullg\aa{}rd on the spines of amoebae (see, e.g., the remark following Theorem 8 on Page 33, and Theorem 12 on Page 36, of \cite{rullgardthesis}). In fact, observing that our change of variables can be turned into an isotopy (by the connectivity of $\mathbb{GL}^+_n(\mathbb{R})$), we can further assert that $\mathrm{Trop}(g)$ is a deformation retract of $\Re(Z(g))$ in this case. \qed \subsection{Proof of Assertion 1(a)} \label{sub:1} \scalebox{.97}[1]{This is simply Lemma \ref{lemma:martin}, which was proved in Section \ref{sec:back}. \qed} \subsection{Proof of Assertion (2)} \label{sub:ass2} The special case $\delta\!=\!1$ follows immediately from Assertion (2) of Theorem 1.5 of \cite{aknr} (after setting $x_i\!=\!e^{z_i}$ in the notation there). Proposition \ref{prop:rescale} tells us that scaling the spectrum of $g$ by a factor of $\delta$ scales $\Re(Z(g))$ and $\mathrm{Trop}(g)$\linebreak each by a factor of $1/\delta$. 
So we are done. \qed \subsection{Proof of Assertion 1(b)} \label{sub:big} First note that the Hausdorff distance in question is invariant under rotation in $\R^n$. So we may in fact assume that $g$ involves just the variables $z_1,\ldots,z_d$ and thus assume $d\!=\!n$. By the $k\!=\!1$ case of Proposition \ref{prop:proj} we deduce that there exists a unit vector $\theta\!\in\!\R^n$ such that \begin{gather}\label{proj} \min_{i\neq j}|a_i\cdot \theta - a_j\cdot \theta | \geq \frac{\delta(g)}{\sqrt{en} t^2} \end{gather} Now let $v\!\in\!\mathrm{Trop}(g)$ and write $v\!=\!v_{\theta}\theta + v_{\theta}^{\bot}$ for some $v^\bot_\theta$ perpendicular to $\theta$. Also let $u_\theta\!\in\!\mathbb{C}$ and $u\!\in\!\C^n$ satisfy $u\!=\!u_{\theta}\theta + v_{\theta}^{\bot}$. For $z_1\!\in\!\mathbb{C}$ define the univariate exponential $t$-sum $\tilde{g}(z_1)= \sum_{j=1}^{t} e^{(a_j\cdot (z_1\theta+ v_{\theta}^{\bot})) + b_j}$. By Inequality (\ref{proj}) we see that $\delta\!\left(\tilde{g}\right)\!\geq\!\frac{\delta(g)} {\sqrt{en} t^2}$. We also see that $\tilde{g}(u_{\theta})\!=\!g(u)$ and $\tilde{g}(v_{\theta})\!=\!g(v)$. By Theorem \ref{thm:uni} there exists a value for $u_{\theta}$ such that $0=\tilde{g}(u_{\theta})=g(u)$ and $|\Re(u_{\theta}) - v_{\theta}| \leq \frac{(2t-3)\log 3}{\delta\!\left(\tilde{g}\right)} \leq \frac{ \sqrt{en}t^2(2t-3)\log 3}{\delta(g)}$. \noindent So $|\Re(u)- v|\!=\!|(\Re(u_{\theta})-v_{\theta})\theta|\!\leq\! \frac{\sqrt{en}t^2 (2t-3)\log 3}{\delta(g)}\!=\!\frac{\sqrt{ed}t^2 (2t-3)\log 3}{\delta(g)}$ since we've already reduced to the case $d\!=\!n$. \qed \section{Proving Theorem \ref{thm:cxity2}} \label{sec:cxity} We will need some supporting results on linear programming before starting our proof. \begin{dfn} Given any matrix $M\!\in\!\mathbb{R}^{N\times n}$ with $i\thth$ row $m_i$, and $c\!:=\!(c_1,\ldots,c_N)\!\in\!\mathbb{R}^N$, the notation $Mx\!\leq\!c$ means that $m_1\cdot x\!\leq\!c_1,\ldots,m_N\cdot x\!\leq\!c_N$ all hold. 
These inequalities are called {\em constraints}, and the set of all $x\!\in\!\mathbb{R}^n$ satisfying $Mx\!\leq\!c$ is called the {\em feasible region} of $Mx\!\leq\!c$. We also call a constraint {\em active} if and only if it holds with equality. Finally, we call a constraint {\em redundant} if and only if the corresponding row of $M$ and corresponding entry of $c$ can be deleted without affecting the feasible region of $Mx\!\leq\!c$. $\diamond$ \end{dfn} \begin{lemma} \label{lemma:red} Suppose $n$ is fixed. Then, given any $c\!\in\!\mathbb{R}^N$ and $M\!\in\!\mathbb{R}^{N\times n}$, we can, in time polynomial in $N$, find a submatrix $M'$ of $M$, and a subvector $c'$ of $c$, such that the feasible regions of $Mx\!\leq\!c$ and $M'x\!\leq\!c'$ are equal, and $M'x\!\leq\!c'$ has no redundant constraints. Furthermore, in time polynomial in $N$, we can also enumerate all maximal sets of active constraints defining vertices of the feasible region of $Mx\!\leq\!c$. \qed \end{lemma} \noindent Note that we are using the BSS model over $\mathbb{R}$ in the preceding lemma. In particular, we are only counting field operations and comparisons over $\mathbb{R}$ (and these are the only operations needed). We refer the reader to the excellent texts \cite{schrijver,grotschel,gritzmann} for further background and a more leisurely exposition on linear programming. \medskip \noindent {\bf Proof of Theorem \ref{thm:cxity2}:} Let $w\!\in\!\R^n$ be our input query point. Using $O(t\log t)$ comparisons, we can isolate all indices such that $\max_j|e^{a_j\cdot w+b_j}|$ is attained, so let $j_0$ be any such index. Taking logarithms, we then obtain, say, $J$ equations of the form $a_j\cdot w+\Re(b_j)\!=\!a_{j_0}\cdot w+\Re(b_{j_0})$ and $K$ inequalities of the form $a_j\cdot w+\Re(b_j)\!>\!a_{j_0}\cdot w+\Re(b_{j_0})$ or $a_j\cdot w+\Re(b_j)\!<\!a_{j_0}\cdot w+\Re(b_{j_0})$. 
Thanks to Lemma \ref{lemma:red}, we can determine the exact cell of $\mathrm{Trop}(g)$ containing $w$ if $J\!\geq\!2$. Otherwise, we obtain the unique cell of $\R^n\!\setminus\!\mathrm{Trop}(g)$ with relative interior containing $w$. Note also that an $(n-1)$-dimensional face of either kind of cell must be the dual of an edge of $\mathrm{ArchNewt}(g)$. Since every edge has exactly $2$ vertices, there are at most $t(t-1)/2$ such $(n-1)$-dimensional faces, and thus $\sigma_w$ is the intersection of at most $t(t-1)/2$ half-spaces. So we are done. \qed \section*{Acknowledgements} We are grateful to Timo de Wolff for pointing out Silipo's work \cite{silipo}. We also thank Pascal Koiran, Gregorio Malajovich, Ji\v{r}\'{\i} Matou\v{s}ek, and Klaus Meer for useful discussions. \bibliographystyle{acm}
\section{Introduction} \setlength\arraycolsep{2pt} Due to the increasing popularity of wireless devices in recent years, the radio spectrum has become an extremely scarce resource. By contrast, 90 percent of the existing licensed spectrum remains idle, and its usage varies geographically and temporally, as reported by the Federal Communications Commission (FCC) \cite{fcc2002}. This indicates that the fixed frequency regulation policy conflicts drastically with the high demand for frequency resources. Cognitive radio (CR) is one of the most promising technologies for remedying this inflexible frequency regulation policy \cite{Haykin05, Zhao2007a} and has received considerable attention. In cognitive radio networks, secondary (unlicensed) users first reliably sense the primary (licensed) channel and then opportunistically access it without causing harmful interference to primary users \cite{fcc2008}. By doing this, the spectrum utilization of existing wireless communication networks can be tremendously improved. The FCC has issued a Notice of Proposed Rule Making to allow unlicensed CR devices to operate in unused channels \cite{fcc2003}. The IEEE has also formed the 802.22 working group to develop the standard for wireless regional area networks (WRAN), which will operate on unused VHF/UHF TV bands based on cognitive radio technology. Both of these activities will significantly change the current wireless communication landscape. As mentioned above, secondary users need to opportunistically access the unused licensed channel while causing negligible interference to primary users. As a result, detecting the presence of primary users is a fundamental and critical task in cognitive radio networks. 
Although the detection of the presence of signals is a classical problem in signal processing, sensing the presence of primary users in a complicated communication environment, especially a CR-based network, is still a challenging problem from a practical perspective. This is mainly due to the following two limiting factors: First, it is very difficult, if not impossible, for the secondary user to obtain the prior information about the signal characteristics of the primary user that most traditional detection techniques require. Second, CR devices should be capable of sensing the very weak signals transmitted by primary users. For instance, the standard released by the FCC requires that spectrum sensing algorithms reliably detect transmitted TV signals at a signal-to-noise ratio (SNR) as low as $-18$\,dB \cite{fcc2008}. Thus far, there are mainly four types of spectrum sensing methods: energy detection \cite{Urkowitz1967, Digham2007}, matched filtering (coherent detection) \cite{Hwang09}, feature detection \cite{Danda94} and eigenvalue-based detection \cite{Zeng09,zeng08sp,RuiZhang10com}. Among them, energy detection is optimal if the secondary user only knows the local noise power \cite{kay98}. Matched-filtering based coherent detection is optimal for maximizing the detection probability, but it requires explicit knowledge of the transmitted signal pattern (e.g., pilot, training sequence, etc.) of the primary user. Feature detection, often referred to as cyclostationary detection, exploits the periodicity in the modulation scheme, which, however, is difficult to determine in certain scenarios. 
By constructing decision variables from the eigenvalues of the sampled covariance matrix to detect the presence of the primary user, the eigenvalue-based sensing methods presented in \cite{Zeng09,zeng08sp,RuiZhang10com} do not need to estimate the noise power and hence are more practical in most CR networks. Recently, several new spectrum sensing schemes incorporating system-level design parameters have been introduced, such as throughput maximization \cite{Liang08, Quan09, Shen09} and cooperative sensing using multiple nodes \cite{Ganesan08, Ganesan07I, Ganesan07II, Jun08}. Nevertheless, the aforementioned four types of sensing techniques are still treated as a basic component in these new schemes. In this paper, we study a blind spectrum sensing method based on information theoretic criteria (ITC), an approach originally developed for model selection by Akaike \cite{Akike73, Akaike74}, Schwartz \cite{Schwartz78} and Rissanen \cite{Riss78}. Applying information theoretic criteria to spectrum sensing was first introduced in \cite{Zayen09, Zayen08, Majed07, Zayen10}. This work provides a more intensive study of the ITC sensing algorithm and its performance. The main contributions of this paper are as follows: \begin{itemize} \item First of all, to make the information theoretic criteria applicable, a new over-determined channel model is constructed by introducing multiple antennas or over-sampling at the secondary user. \item Then, a simplified information theoretic criteria (SITC) sensing algorithm, which only involves the computation of two decision values, is presented. Compared to the original information theoretic criteria (OITC) sensing algorithm in \cite{Zayen09}, SITC is much less complex and yet has almost no performance loss. Simulation results also demonstrate that the proposed SITC based spectrum sensing outperforms the eigenvalue based sensing algorithm in \cite{Zeng09} and achieves almost the same performance as \cite{zeng08sp}. 
The proposed sensing algorithm also enables a more tractable analytical study on the detection performance. \item Applying recent advances in random matrix theory, we then derive closed-form expressions for both the probability of false alarm and the probability of detection, which approximate the actual simulation results very well. \item Finally, based on the insight derived from the analytical study, we further present a generalized information theoretic criteria (GITC) sensing algorithm. By involving an adjustable threshold, the proposed GITC can provide a flexible tradeoff between the probability of detection and the probability of false alarm in order to meet different system requirements. \end{itemize} The rest of the paper is organized as follows. In Section II, the preliminary on the information theoretic criteria is provided. The proposed over-determined system model is presented in Section III. Section IV gives the proposed SITC sensing algorithm and the theoretical analysis of its detection performance, followed by the GITC sensing algorithm in Section V. Extensive simulation results are illustrated in Section VI. Finally, Section VII offers some concluding remarks. \emph{Notations}: ${\cal E}[\cdot]$ denotes expectation over the random variables within the brackets. ${\rm Tr}({\bf A})$ stands for the trace of matrix ${\bf A}$. Superscripts $(\cdot)^T$ and $(\cdot)^\dagger$ denote transpose and conjugate transpose. \section{Preliminary on the Information Theoretic Criteria} Information theoretic criteria are an approach originally developed for model selection by Akaike \cite{Akike73, Akaike74}, Schwartz \cite{Schwartz78} and Rissanen \cite{Riss78}. Two criteria have been widely used: the Akaike information criterion (AIC) and the minimum description length (MDL) criterion. One of the most important applications of information theoretic criteria is to estimate the number of source signals in array signal processing \cite{Wax85}. 
Consider a system model described as \begin{equation} \label{eqn:AICMDLmodel} {\bm x}={\bf A}{{\bm s}}+{\bm{\mu}}, \end{equation} where $\bm x$ is the $p\times 1$ complex observation vector, $\bf A$ is a $p\times q$ ($p>q$) complex system matrix, $\bm s$ denotes the $q\times1$ complex source modulated signal vector and $\bm{\mu}$ is the additive complex white Gaussian noise vector with per-entry variance $\sigma^2$. It is noted that the parameters $q$, $\bf A$ and $\sigma^2$ are all unknown. The resulting cost functions of AIC and MDL have the following form \cite{Wax85}: \begin{equation} \label{eqn:AICsolve} {{\rm AIC} (k)}={-2\log \left (\frac{\prod^p_{i=k+1}{l^{1/(p-k)}_i}}{{\frac{1}{p-k}}\sum^p_{i=k+1}{l_i}}\right )^{N(p-k)}}+2k(2p-k)+2, \end{equation} \begin{equation} \label{eqn:MDLsolve} {{\rm MDL} (k)}={-\log \left (\frac{\prod^p_{i=k+1}{l^{1/(p-k)}_i}}{{\frac{1}{p-k}}\sum^p_{i=k+1}{l_i}}\right )^{N(p-k)}} +\left({\frac{1}{2}}k(2p-k)+\frac{1}{2}\right)\log N, \end{equation} where $N$ signifies the number of observations and $l_i$ denotes the ${i}$-th eigenvalue of the sampled covariance matrix in decreasing order. The number of source signals is estimated by minimizing \eqref{eqn:AICsolve} or \eqref{eqn:MDLsolve}. That is, \begin{equation} \label{eqn:AICestimate} {\hat k_{\rm AIC}}=\mathrm{arg} \min_{j=0,1,\ldots,p-1} {\rm AIC}(j), \end{equation} \begin{equation} \label{eqn:MDLestimate} {\hat k_{\rm MDL}}=\mathrm{arg} \min_{j=0,1,\ldots,p-1} {\rm MDL}(j). \end{equation} \section{System Model} We consider a multipath fading channel model and assume that there is only one primary user in the cognitive radio network. Let $x(t)$ be the continuous-time baseband signal received at the secondary user's receiver. 
Spectrum sensing can be formulated as a binary hypothesis test between the following two hypotheses \begin{equation} \label{eqn:H0con} {{\cal H}_0:} \quad x(t)=\mu(t), \end{equation} \begin{equation} \label{eqn:H1con} {{\cal H}_1:} \quad x(t)=\int^{T}_0 {h(\ell)s(t-\ell)}{d\ell}+\mu(t), \end{equation} where $s(t)$ denotes the signal transmitted by the primary user, $h(t)$ is the continuous channel response between the primary transmitter and the secondary receiver, $\mu(t)$ denotes the additive white noise, and the parameter $T$ signifies the duration of the channel response. The channel response is also assumed to remain invariant during each observation. To obtain the discrete representation, we assume that the received signal is sampled at rate $f_s$, which is equal to the reciprocal of the baseband symbol duration $T_0$. For notational simplicity, we define $x(n)=x(nT_0)$, $s(n)=s(nT_0)$ and $\mu(n)=\mu(nT_0)$. Hence, the corresponding received signal samples under the two hypotheses are described as: \begin{equation} \label{eqn:H0dis} {{\cal H}_0:} \quad x(n)=\mu(n), \end{equation} \begin{equation} \label{eqn:H1dis} {{\cal H}_1:} \quad x(n)=\sum^{L-1}_{i=0}{h(i)s(n-i)}+\mu(n), \end{equation} where $h(i)$ ($0 \leqslant i \leqslant L-1$) denotes the discrete channel response of $h(t)$ and $L$ denotes the order of the discrete channel ($L$ taps). Let each observation consist of $M$ received signal samples. 
Then \eqref{eqn:H0dis} and \eqref{eqn:H1dis} can be rewritten in matrix form as: \begin{equation} \label{eqn:H0mat} {{\cal H}_0:} \quad {\bm x}_i=\bm \mu_i, \end{equation} \begin{equation} \label{eqn:H1mat} {{\cal H}_1:} \quad {\bm x}_i={\bf H}{\bm s}_i+\bm \mu_i, \end{equation} where $\bf{H}$ is an $M\times(L+M-1)$ circular channel matrix defined as \begin{equation} \label{eqn:ChannH} {\bf H}=\left[ \begin{array}{ccccccccc} h(L-1) & h(L-2) & \ldots & h(0)\\ & h(L-1) & h(L-2) & \ldots & h(0)\\ & & \ddots & & \ddots \\ & & & & & h(L-1) & h(L-2) & \ldots & h(0) \end{array} \right],\nonumber \end{equation} $\bm{x}_i$, $\bm{s}_i$, and $\bm{\mu}_i$ are the $M\times1$ observation vector, $(L+M-1) \times1$ source signal vector and $M\times1$ noise vector, respectively, and are defined as \begin{equation} \label{eqn:Xn} {{\bm x}_i}={[x(iM-M+1),x(iM-M+2),\ldots,x(iM)]^T}, \end{equation} \begin{equation} \label{eqn:Sn} {{\bm s}_i}={[s(iM-M-L+2),s(iM-M-L+3),\ldots,s(iM)]^T}, \end{equation} \begin{equation} \label{eqn:Nn} {{\bm \mu}_i}={[\mu(iM-M+1),\mu(iM-M+2),\ldots,\mu(iM)]^T}. \end{equation} Now, comparing \eqref{eqn:H1mat} with the array signal processing model \eqref{eqn:AICMDLmodel}, we find that a major difference is that the matrix $\bf H$ in our considered system model is under-determined, i.e., it has more columns than rows. Therefore, the information theoretic criteria are not directly applicable here \cite{Wax85}. To construct an over-determined channel matrix $\bf{H}$ as in \eqref{eqn:AICMDLmodel}, one needs to enlarge the observation space. Obviously, simply increasing the observation window $M$ does not work. Here we propose to expand the observation space using one of the following two methods. One is to increase the spatial dimensionality by employing multiple receive antennas at the secondary user, and the other is to increase the time dimensionality by over-sampling the received signals. 
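The dimension counts above can be sketched in a few lines of Python. This is an illustration only: the sizes $L$, $M$, $K$ and the random taps are hypothetical, and the stacked multi-antenna matrix below orders its rows antenna-by-antenna, which differs from the interleaved ordering of \eqref{eqn:newChannH} by a row permutation (the permutation does not affect whether the system is over-determined):

```python
import numpy as np

# Hypothetical sizes: L channel taps, observation window M, K antennas.
L, M, K = 4, 6, 2
rng = np.random.default_rng(0)

def conv_channel_matrix(h, M):
    """M x (M+L-1) convolution matrix: row r holds h(L-1),...,h(0), shifted by r."""
    Lh = len(h)
    H = np.zeros((M, M + Lh - 1))
    for r in range(M):
        H[r, r:r + Lh] = h[::-1]
    return H

# Single antenna: under-determined (more columns than rows).
h = rng.standard_normal(L)
H1 = conv_channel_matrix(h, M)
assert H1.shape == (M, M + L - 1) and H1.shape[0] < H1.shape[1]

# K antennas: stack the per-antenna convolution matrices -> MK x (M+L-1),
# which is over-determined as soon as K > (L + M - 1) / M.
Hk = np.vstack([conv_channel_matrix(rng.standard_normal(L), M) for _ in range(K)])
assert K > (L + M - 1) / M and Hk.shape[0] > Hk.shape[1]
print(H1.shape, Hk.shape)
```

For $L=4$ and $M=6$ the single-antenna matrix is $6\times 9$, while stacking $K=2$ antennas gives a $12\times 9$ matrix, satisfying the over-determined condition since $2 > 9/6$.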
It turns out that the two methods are similar to each other. Hence we shall focus on the multiple-antenna approach hereafter. The difference for the over-sampling method will be discussed at the end of this section. Specifically, suppose that the detector at the secondary user is equipped with $K$ antennas. Redefine \eqref{eqn:Xn} and \eqref{eqn:Nn} as \begin{equation} \label{eqn:newXn} {{\bm x}_i}=[x^i_1(1),x^i_2(1),\ldots,x^i_K(1),x^i_1(2),\ldots,x^i_K(2),\ldots,x^i_1(M),\ldots,x^i_K(M)]^T, \end{equation} \begin{equation} \label{eqn:newNn} {{\bm \mu}_i}=[\mu^i_1(1),\mu^i_2(1),\ldots,\mu^i_K(1),\mu^i_1(2),\ldots,\mu^i_K(2),\ldots,\mu^i_1(M),\ldots,\mu^i_K(M)]^T, \end{equation} where ${\bm x}^i_k=[x^i_k(1),x^i_k(2),\ldots,x^i_k(M)]^T$ represents the $M\times1$ observation vector at the $k$-th antenna at the $i$-th observation as in \eqref{eqn:Xn}, and ${\bm \mu}^i_k=[\mu^i_k(1),\mu^i_k(2),\ldots,\mu^i_k(M)]^T$ is the corresponding noise vector at the $k$-th antenna at the $i$-th observation as in \eqref{eqn:Nn}. Then, the new channel matrix $\bf{H}$ becomes an $MK \times (M+L-1$) matrix: \begin{equation} \label{eqn:newChannH} {\bf H}=\left[ \begin{array}{ccccccccc} h_1(L-1) & h_1(L-2) & \ldots & h_1(0)\\ \vdots & & \vdots\\ h_K(L-1) & h_K(L-2) & \ldots & h_K(0)\\ & h_1(L-1) & h_1(L-2) & \ldots & h_1(0)\\ & \vdots & \vdots\\ & h_K(L-1)& h_K(L-2) & \ldots & h_K(0)\\ & & \ddots & \ddots \\ & & & & & h_1(L-1)& h_1(L-2)& \ldots & h_1(0)\\ & & & & & \vdots & \vdots\\ & & & & & h_K(L-1)& h_K(L-2)& \ldots & h_K(0) \end{array} \right]. \end{equation} Here, $h_{k}(i)$, for $i=0, \ldots, L-1$, denotes the $i$-th channel tap observed at the $k$-th antenna. To ensure that $\bf{H}$ is now an over-determined matrix (more rows than columns), we need to have \begin{equation} \label{eqn:conK} {K>\frac{L+M-1}{M}}, \end{equation} or, alternatively, \begin{equation} \label{eqn:conM} {M>\frac{L-1}{K-1}}. 
\end{equation} Furthermore, we assume that the noise samples from different antennas are independent with zero mean and ${\cal E}({\bm \mu}_i{\bm \mu}^{\dagger}_i)=\sigma^2{\bf I}_{MK}$. Then the system model with multiple antennas exactly satisfies the over-determined condition specified in \cite{Wax85}. For ease of presentation, we define $p=MK$ and $q=L+M-1$ in \eqref{eqn:H1mat}. As mentioned earlier, the second approach to construct the over-determined channel model is for the secondary user to over-sample the received signals. Suppose that the over-sampling factor is given by $K$. That is, the received baseband signal is sampled $K$ times in one symbol. Then a similar system model as in \eqref{eqn:newXn}, \eqref{eqn:newNn} and \eqref{eqn:newChannH} can be obtained, except that ${\bm x}_i$ and ${\bm \mu}_i$ should be replaced with \begin{equation} \label{eqn:OvernewXn} {{\bm x}_i}={[x(iMK-MK+1),x(iMK-MK+2),\ldots,x(iMK)]^T}, \end{equation} \begin{equation} \label{eqn:OvernewNn} {{\bm \mu}_i}={[\mu(iMK-MK+1),\mu(iMK-MK+2),\ldots,\mu(iMK)]^T}, \end{equation} and $h_{k}(i)$, for $i=0, \ldots, L-1$, becomes the $k$-th over-sampling point of the $i$-th channel tap. It can be verified that the $h_{k}(i)$'s are different for different $k$ \cite{Tse2005}. The major difference between the over-sampling approach and the multiple-antenna approach is that the over-sampled noise samples in \eqref{eqn:OvernewNn} are correlated, which contradicts the assumption of independent noise samples. Nevertheless, the pre-whitening technique can be used to whiten the correlated noise based on the known correlation matrix. The details are given in Appendix~\ref{prof_Whiten}. Before leaving this section, it is noted that, though the proposed over-determined model is based on the assumption that there is only one primary user in the cognitive network, it is also applicable to the scenario where there exist multiple primary users. 
An alternative approach to construct the over-determined model in the presence of multiple primary users is to use the cooperative sensing technique as in \cite{Zayen10} by employing multiple detectors. \section{Simplified Information Theoretic Criteria Sensing Algorithm and Performance Analysis} Since the binary hypothesis test in spectrum sensing is equivalent to a special case of the source number estimation problem, the information theoretic criteria method can be directly applied to conduct spectrum sensing, as first proposed in \cite{Zayen09, Zayen08, Majed07, Zayen10}. The basic idea is that when the primary user is absent, the received signal ${\bm x}_i$ consists of white noise samples only. Therefore, the number of source signals estimated via information theoretic criteria (AIC or MDL) should be zero. Otherwise, when the primary user is present, the number of source signals must be larger than zero. Hence, by comparing the estimated number of source signals with zero, the presence of the primary user can be detected. It is noted that estimating the number of sources by using \eqref{eqn:AICestimate} and \eqref{eqn:MDLestimate} needs very little prior information about the primary user. In particular, it requires neither channel state information, synchronization, nor knowledge of the pilot design and modulation strategy. Moreover, it does not need an estimate of the noise power. Hence we argue that the information theoretic criteria method is a blind spectrum sensing approach similar to \cite{Zeng09,zeng08sp,RuiZhang10com}, and it is robust and suitable for practical applications. However, it is known that signal detection is much easier than signal estimation. Therefore, using the estimation method to conduct the detection as in \cite{Zayen09, Zayen08, Majed07, Zayen10} may lead to unnecessary computational overhead. Meanwhile, it makes it difficult to carry out an analytical study on the detection performance. 
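The detection idea just described — estimate the number of sources with an information theoretic criterion and declare the primary user present whenever the estimate exceeds zero — can be sketched as follows. This is an illustrative Python simulation, not the paper's simulation setup: the rank-one signal model and the sizes $p=8$, $N=500$ are arbitrary choices, and the MDL cost follows \eqref{eqn:MDLsolve}:

```python
import numpy as np

rng = np.random.default_rng(1)
p, N = 8, 500          # p: observation dimension (MK), N: number of observations

def mdl(k, l, N):
    """MDL cost for model order k, given eigenvalues l in decreasing order."""
    tail = l[k:]
    m = len(tail)
    # log of (geometric mean / arithmetic mean) of the p-k smallest eigenvalues
    log_ratio = np.mean(np.log(tail)) - np.log(np.mean(tail))
    return -N * m * log_ratio + (0.5 * k * (2 * p - k) + 0.5) * np.log(N)

def itc_detect(X, N):
    """Estimate the source number by minimizing MDL(k); declare H1 iff it is > 0."""
    R = (X @ X.conj().T) / N                     # sampled covariance matrix
    l = np.sort(np.linalg.eigvalsh(R))[::-1]     # eigenvalues, decreasing
    k_hat = min(range(p), key=lambda k: mdl(k, l, N))
    return k_hat > 0

# H0: complex white Gaussian noise only.
noise = (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N))) / np.sqrt(2)
# H1: a single (rank-one) primary-user signal buried in the same noise.
h = rng.standard_normal((p, 1))
s = (rng.standard_normal((1, N)) + 1j * rng.standard_normal((1, N))) / np.sqrt(2)
signal = h @ s + noise

print(itc_detect(noise, N), itc_detect(signal, N))
```

Under ${\cal H}_0$ the eigenvalues are nearly equal, so the sphericity term is small and the growing penalty makes $k=0$ the minimizer; under ${\cal H}_1$ the dominant eigenvalue inflates ${\rm MDL}(0)$ and the minimizer moves to $k\geq 1$.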
In this section, we propose a simplified ITC (SITC) algorithm to conduct spectrum sensing. It significantly reduces the computational complexity while incurring almost no performance loss, as will be illustrated in Section V. It also enables a more tractable analytical study of the detection performance. \subsection{Simplified ITC sensing algorithm} Before presenting the simplified ITC sensing algorithm in detail, we have the following lemma. \textbf{Lemma 1}: If there is a value $\hat k (>0)$ which minimizes the AIC metric in \eqref{eqn:AICsolve} (MDL metric in \eqref{eqn:MDLsolve}), then $\rm {AIC}(0)>\rm {AIC}(1)$ ($\rm {MDL}(0)>\rm {MDL}(1)$) with high probability. \begin{proof} Please refer to Appendix~\ref{prof_lemma1}. \end{proof} The outline of the proposed simplified sensing algorithm is as follows. \vspace{0.4cm} \hrule \hrule \vspace{0.2cm} \textbf{Algorithm 1: SITC sensing algorithm} \vspace{0.2cm} \hrule \vspace{0.3cm} ~~~Step 1. Compute the sampled covariance matrix of the received signals, i.e., ${{\bf R}_x}={\frac{1}{N}\sum^N_{i=1}{{{\bm x}_i}}{{\bm x}_i}^{\dagger}},$ where the ${\bm x}_i$'s are the received vectors as described in \eqref{eqn:Xn} or \eqref{eqn:OvernewXn} and $N$ denotes the number of observations. ~~~Step 2. Obtain the eigenvalues of ${\bf R}_x$ through eigenvalue decomposition, and denote them as $\{l_1,l_2,\ldots,l_p\}$ with $l_1\geq l_2\geq\ldots\geq l_p$. ~~~Step 3. Calculate the decision values $\rm {AIC}(0)$ and $\rm {AIC}(1)$ ($\rm {MDL}(0)$ and $\rm {MDL}(1)$) according to \eqref{eqn:AICsolve} (\eqref{eqn:MDLsolve}). Then the detection decision metric is \begin{equation} \label{eqn:SITC1} {\cal T}_{\rm {SITC-AIC}}({\bf L}_x) :\rm {AIC}(0) \mathop \gtrless^{{\cal H}_1}_ {{\cal H}_0} \rm {AIC}(1) \end{equation} if AIC is adopted, or \begin{equation} \label{eqn:SITC2} {\cal T}_{\rm {SITC-MDL}}({\bf L}_x) :\rm {MDL}(0) \mathop \gtrless^{{\cal H}_1}_ {{\cal H}_0} \rm {MDL}(1)
\end{equation} if MDL is adopted, where ${\bf L}_x$ denotes the set of eigenvalues $\{l_i,i=1,2,\ldots,p\}$. \vspace{0.2cm} \hrule \vspace{0.4cm} Note that in the OITC sensing algorithm \cite{Zayen09}, one needs to search for the exact value of $\hat k$ from $0$ to $p-1$ that minimizes the AIC in \eqref{eqn:AICsolve} or the MDL in \eqref{eqn:MDLsolve}. In the proposed SITC algorithm, only the two decision values at $k=0$ and $k=1$ need to be computed and compared. Thus, the computational complexity is significantly reduced. In the next subsection, based on the proposed SITC algorithm, we present analytical results on the detection performance. Since, by Lemma 1, the SITC algorithm achieves almost the same performance as the OITC algorithm, our analytical results are also applicable for evaluating the performance of the OITC algorithm. \subsection{Performance Analysis} Since spectrum sensing is a binary hypothesis test, the performance measures we focus on are the probability of detection $P_d$ (the probability of identifying the signal when the primary user is present) and the probability of false alarm $P_f$ (the probability of declaring a signal when the primary user is absent). As no threshold value is involved in the ITC sensing algorithm, $P_d$ is not directly related to $P_f$; the two probabilities are therefore presented separately. For ease of presentation, we take the AIC criterion as an example throughout this section. The extension to the MDL criterion is straightforward unless mentioned otherwise. \subsubsection{Probability of false alarm} According to the sensing steps in Algorithm 1, a false alarm occurs when $\rm {AIC}(0)$ is larger than $\rm {AIC}(1)$ under hypothesis ${\cal H}_0$. The probability of false alarm can be expressed as \begin{equation} \label{eqn:PfAIC} P_{f-AIC} = {\rm Pr} \big( {\rm {AIC}}(0) > {\rm {AIC}}(1)| {\cal H}_0 \big).
\end{equation} Since the primary user is absent, the received signal ${\bm x}_i$ contains only noise. The sampled covariance matrix ${\bf R}_x$ in Algorithm 1 thus becomes ${\bf R}_{\mu}$, defined as \begin{equation} \label{eqn:Rmu} {{\bf R}_\mu}={\frac{1}{N}\sum^N_{i=1}{{{\bm \mu}_i}}{{\bm \mu}_i}^{\dagger}}. \end{equation} Hence, the eigenvalues in \eqref{eqn:AICsolve} become the eigenvalues of the sampled noise covariance matrix ${\bf R}_\mu$ in \eqref{eqn:Rmu}, which is a Wishart random matrix \cite{Johnstone01}. By applying recent advances on the eigenvalue distribution of Wishart matrices, a closed-form expression for the probability of false alarm can be obtained. \textbf{Proposition 1}: The probability of false alarm of the proposed spectrum sensing algorithm can be approximated as: \begin{eqnarray} \label{eqn:PfOfAIC} {P_f} \approx {F_2}{\left(\frac{pN-(\sqrt{N}+\sqrt{p})^2}{(\sqrt{N}+\sqrt{p})(\frac{1}{\sqrt{N}}+\frac{1}{\sqrt{p}})^{\frac{1}{3}}}\right)} -{F_2}{\left(\frac{(p-\alpha_1)N-(\sqrt{N}+\sqrt{p})^2}{(\sqrt{N}+\sqrt{p})(\frac{1}{\sqrt{N}}+\frac{1}{\sqrt{p}})^{\frac{1}{3}}}\right)}\nonumber\\ +{F_2}{\left(\frac{(p-\alpha_2)N-(\sqrt{N}+\sqrt{p})^2}{(\sqrt{N}+\sqrt{p})(\frac{1}{\sqrt{N}}+\frac{1}{\sqrt{p}})^{\frac{1}{3}}}\right)} -{F_2}{\left(\frac{-(\sqrt{N}+\sqrt{p})^2}{(\sqrt{N}+\sqrt{p})(\frac{1}{\sqrt{N}}+\frac{1}{\sqrt{p}})^{\frac{1}{3}}}\right)}, \end{eqnarray} where $F_2(\cdot)$ is the cumulative distribution function (CDF) of the Tracy-Widom distribution of order two \cite{Johnstone01}, and $\alpha_1$ and $\alpha_2$ with $\alpha_1<\alpha_2$ are the two real roots of the function in \eqref{eqn:Append-A-funfA} if AIC is applied, or \eqref{eqn:Append-A-funfAMDL} if MDL is applied. \begin{proof} Recalling the definition in \eqref{eqn:PfAIC}, computing the probability of false alarm amounts to computing \begin{equation} \label{eqn:Append-A-pfAIC} {P_{f-AIC}}= {{\rm Pr}({\rm AIC}(0)-{\rm AIC}(1)>0|{\cal H}_0)}.
\end{equation} According to the cost function of AIC defined in \eqref{eqn:AICsolve}, we have \begin{equation} \label{eqn:Append-A-AIC0-1} {{\rm AIC}(0)-{\rm AIC}(1)}={-2\log{\left[\frac{\prod^p_{i=1}{l_i^{1/p}}}{{\frac{1}{p}}{\sum^p_{i=1}{l_i}}}\right]^{pN}}} {+2\log{\left[\frac{\prod^p_{i=2}{l_i^{1/{p-1}}}}{{\frac{1}{p-1}}{\sum^p_{i=2}{l_i}}}\right ]^{(p-1)N}}-(4p-2)}.\nonumber \end{equation} Then we can rewrite \eqref{eqn:Append-A-pfAIC} as \begin{equation} \label{eqn:Append-A-pfAIC2} {P_{f-AIC}} = {\rm Pr} \left( {\log}\left[\frac{({\frac{1}{p}}{\sum^p_{i=1}}{l_i})^p} {({{\frac{1}{p-1}}{\sum^p_{i=2}{l_i}}})^{p-1}l_1} \right] > \frac{4p-2}{2N} \bigg|{\cal H}_0\right ). \end{equation} Note that the average of the eigenvalues of the sampled covariance matrix, $\frac{1}{p}\sum^p_{i=1}{l_i}$, equals ${\frac{1}{pN}}{\rm Tr}\left({\sum^N_{i=1}{{{\bm x}_i}{{{\bm x}_i}}^{\dagger}}}\right)$. Under hypothesis ${\cal H}_0$, where the received vector contains only noise samples, ${\frac{1}{pN}}{\rm Tr}\left({\sum^N_{i=1}{{{\bm x}_i}{{{\bm x}_i}}^{\dagger}}}\right)$ is an unbiased estimate of the white noise variance. Therefore, when $N$ is sufficiently large, we have \begin{equation} \label{eqn:Append-A-estnoise} {\frac{1}{p}\sum^p_{i=1}{l_i}}\approx {\sigma^2}. \end{equation} Substituting \eqref{eqn:Append-A-estnoise} into \eqref{eqn:Append-A-pfAIC2} yields: \begin{equation} \label{eqn:Append-A-pfAIC3} {P_{f-AIC}} \approx {\rm Pr}{\left[ {\frac{(\sigma^2)^p}{(\frac{p}{p-1}\sigma^2-\frac{l_1}{p-1})^{p-1}l_1}}>\exp{\left( \frac{2p-1}{N}\right)} \bigg|{\cal H}_0\right]}. \end{equation} From \eqref{eqn:Append-A-pfAIC3}, it is seen that the probability of false alarm depends only on the largest eigenvalue of the sampled noise covariance matrix ${\bf R}_\mu$. Since ${\bf R}_\mu$ is a Wishart random matrix, its largest eigenvalue $l_1$, after proper centering and scaling, follows the Tracy-Widom distribution of order two \cite{Johnstone01}.
To apply this result, we rewrite \eqref{eqn:Append-A-pfAIC3} as \begin{eqnarray} \label{eqn:Append-A-pfAIC4} {P_{f-AIC}}\approx {\rm Pr} \left[{\frac{l_1}{\sigma^2}}{\left(p-\frac{l_1}{\sigma^2}\right)^{p-1}}< \frac{(p-1)^{p-1}}{\exp \left(\frac{2p-1}{N} \right)} \bigg|{\cal H}_0\right] \nonumber \\ = {\rm Pr}\left[x^p-px^{p-1}+ \frac{(p-1)^{p-1}}{\exp\left(\frac{2p-1}{N}\right)} >0 \bigg|{\cal H}_0\right], \end{eqnarray} where $x \triangleq p-\frac{l_1}{\sigma^2}$. Define the function \begin{equation}\label{eqn:Append-A-funfA} f(x) \triangleq x^p-px^{p-1}+\frac{(p-1)^{p-1}}{\exp\big(\frac{2p-1}{N}\big)}. \end{equation} We next find the real roots of this function. Differentiating $f(x)$ and setting the derivative to zero, we obtain \begin{equation}\label{eqn:Append-A-diffefA} {\frac{df(x)}{dx}}=px^{p-1}-p(p-1)x^{p-2}=px^{p-2}[x-(p-1)]=0.\nonumber \end{equation} Clearly, $f(x)$ has two stationary points, $x=p-1$ and $x=0$. In the following, the cases of even and odd $p$ are considered separately. When $p$ is even, the function $f(x)$ decreases monotonically over $(-\infty, p-1)$ and increases monotonically over $(p-1,\infty)$. Meanwhile, we can verify that \begin{equation}\label{eqn:Append-A-fAmin} f(p-1)=(p-1)^p-p(p-1)^{p-1}+\frac{(p-1)^{p-1}}{\exp\big(\frac{2p-1}{N}\big)}<0 \end{equation} and \begin{equation}\label{eqn:Append-A-fA0p} f(0)=f(p)=\frac{(p-1)^{p-1}}{\exp\big(\frac{2p-1}{N}\big)}>0. \end{equation} So $f(x)$ must have two real roots within $(0,p)$, lying on either side of $p-1$. Let $\alpha_1$ and $\alpha_2$, with $\alpha_1<\alpha_2$, denote these two roots; then \eqref{eqn:Append-A-pfAIC4} can be converted into the equivalent form: \begin{equation}\label{eqn:Append-A-pfAIC66} {P_{f-AIC}}\approx {\rm Pr}\left[x<\alpha_1 |{\cal H}_0\right]+{\rm Pr}\left[\alpha_2<x |{\cal H}_0\right].
\end{equation} When $p$ is odd, $f(x)$ decreases monotonically over $(0,p-1)$, while it increases monotonically over both $(-\infty,0)$ and $(p-1,\infty)$. From the facts that $f(-\infty)<0$, $f(0)>0$, $f(p-1)<0$ and $f(p)>0$, we conclude that $f(x)$ has three real roots, denoted as $\alpha_0$, $\alpha_1$ and $\alpha_2$, with $\alpha_0<0$ and $0<\alpha_1<\alpha_2$. Then \eqref{eqn:Append-A-pfAIC4} can be rewritten as: \begin{equation}\label{eqn:Append-A-pfAIC77} {P_{f-AIC}}\approx {\rm Pr}\left[\alpha_0<x<\alpha_1 |{\cal H}_0\right]+{\rm Pr}\left[\alpha_2<x |{\cal H}_0\right]. \end{equation} However, when $N$ is large enough, the largest eigenvalue of the sampled noise covariance matrix, $l_1$, is only slightly larger than the true noise variance $\sigma^2$. Hence, by its definition, $x$ can reasonably be restricted to $(0,p)$. Therefore, both \eqref{eqn:Append-A-pfAIC66} and \eqref{eqn:Append-A-pfAIC77} can be written in the unified form \begin{equation} {P_{f-AIC}}\approx {\rm Pr}\left[0<x<\alpha_1 |{\cal H}_0\right]+{\rm Pr}\left[\alpha_2<x<p |{\cal H}_0\right].
\nonumber \end{equation} In other words, \begin{equation}\label{eqn:Append-A-pfAIC7} {P_{f-AIC}} \approx {\rm Pr}\left[p-\alpha_1<\frac{l_1}{\sigma^2}<p \bigg|{\cal H}_0\right]+{\rm Pr}\left[0<\frac{l_1}{\sigma^2}<p-\alpha_2 \bigg|{\cal H}_0\right].\nonumber \end{equation} Applying the distribution of the largest eigenvalue of a Wishart matrix from random matrix theory \cite{Johnstone01}, the centered and scaled variable $N\frac{l_1}{\sigma^2}$ follows the Tracy-Widom distribution of order two, i.e., \begin{equation}\label{eqn:Append-A-eigdis} \frac{N\frac{l_1}{\sigma^2}-(\sqrt{N}+\sqrt{p})^2}{(\sqrt{N}+\sqrt{p})\big(\frac{1}{\sqrt{N}}+\frac{1}{\sqrt{p}}\big)^{\frac{1}{3}}}\backsim W_2\backsim F_2.\nonumber \end{equation} Here, $W_2$ and $F_2$ denote the probability density function (PDF) and cumulative distribution function (CDF) of the Tracy-Widom distribution of order two, respectively. Therefore, the probability of false alarm of AIC follows as \eqref{eqn:PfOfAIC}. Similarly, when the MDL criterion is applied, we only need to modify the step in \eqref{eqn:Append-A-pfAIC4} as \begin{equation}\label{eqn:Append-A-pfMDL1} {P_{f-MDL}} \approx {\rm Pr}\left[x^p-px^{p-1}+ \frac{(p-1)^{p-1}}{\exp\left(\frac{(p-0.5)\log{N}}{N}\right)} >0 \bigg|{\cal H}_0\right]\nonumber \end{equation} and redefine the function $f(x)$ in \eqref{eqn:Append-A-funfA} as \begin{equation}\label{eqn:Append-A-funfAMDL} f(x)\triangleq x^p-px^{p-1}+\frac{(p-1)^{p-1}}{\exp\big(\frac{(p-0.5)\log{N}}{N}\big)}. \end{equation} \end{proof} From Proposition 1, it is found that the probability of false alarm is independent of the noise variance $\sigma^2$. Therefore, the proposed SITC sensing algorithm is robust in practical applications. It is also noted that $P_f$ depends on the product of $M$ and $K$, i.e., $p=MK$, rather than on the individual values of $M$ and $K$.
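The roots $\alpha_1$ and $\alpha_2$ in Proposition 1 have no closed form, but they are easy to obtain numerically from the polynomial in \eqref{eqn:Append-A-funfA}. A minimal sketch (the parameter values $p=8$, $N=1000$ are illustrative only):

```python
import numpy as np

def false_alarm_roots(p, N, criterion="AIC"):
    """Real roots in (0, p) of f(x) = x^p - p x^(p-1) + (p-1)^(p-1)/gamma,
    where gamma is the AIC or MDL term from Proposition 1."""
    if criterion == "AIC":
        gamma = np.exp((2 * p - 1) / N)
    else:  # MDL
        gamma = np.exp((p - 0.5) * np.log(N) / N)
    coeffs = np.zeros(p + 1)
    coeffs[0] = 1.0                        # x^p
    coeffs[1] = -p                         # -p x^(p-1)
    coeffs[-1] = (p - 1) ** (p - 1) / gamma
    r = np.roots(coeffs)
    r = r.real[np.abs(r.imag) < 1e-6]      # keep the real roots
    return np.sort(r[(r > 0) & (r < p)])   # [alpha_1, alpha_2], around p-1

alpha = false_alarm_roots(p=8, N=1000)     # two roots straddling p-1 = 7
```

The two returned roots can then be plugged directly into \eqref{eqn:PfOfAIC}.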
\subsubsection{Probability of detection} When the primary user is present, a detection also occurs when $\rm {AIC}(0) > \rm {AIC}(1)$. The probability of detection is thus \begin{equation} \label{eqn:PdAIC} P_{d-AIC} = {\rm Pr} \big( {\rm {AIC}}(0) > {\rm {AIC}}(1) | {\cal H}_1 \big). \end{equation} Since under ${\cal H}_1$ the received vector ${\bm x}_i$ contains the signals transmitted by the primary user, the sampled covariance matrix ${\bf R}_x$ can be written as \begin{equation} \label{eqn:Rxn} {{\bf R}_x}={\frac{1}{N}} \sum^N_{i=1} {({\bf H}{\bm s}_i+{\bm \mu}_i)({\bf H}{\bm s}_i+{\bm \mu}_i)^{\dagger}}. \end{equation} Note that ${\bf R}_x$ is no longer a Wishart matrix. The exact distribution of its eigenvalues is difficult to find, and hence so is $P_d$. In the following, we resort to deriving a closed-form expression for the conditional probability of detection given the channel matrix $\bf H$. The average probability of detection can then be obtained using a hybrid analytical-simulation approach. \textbf{Proposition 2}: Let ${\bf R}_s$ denote the covariance matrix of ${\bm s}_i$ given in \eqref{eqn:Sn} and $\{\delta_1, \delta_2, \ldots, \delta_p\}$ be the eigenvalues of the matrix ${\bf H}{\bf R}_s{\bf H}^\dagger$ (with $\delta_1 \geqslant \delta_2 \geqslant \ldots \geqslant \delta_p$).
Then there exists a value $\rho$, with $\delta_p \leqslant \rho \leqslant \delta_1$, such that the probability of detection given $\bf H$ can be approximated as $P_{d|H}\approx Q(\rho)$, where the function $Q(\cdot)$ is \begin{eqnarray} \label{eqn:Qfun} {Q(\delta)}={F_2}{\left(\frac{pN-(\sqrt{N}+\sqrt{p})^2}{(\sqrt{N}+\sqrt{p})(\frac{1}{\sqrt{N}}+\frac{1}{\sqrt{p}})^{\frac{1}{3}}}\right)} -{F_2}{\left(\frac{(\frac{(p-\pi_1)\epsilon-\delta}{\sigma^2})N-(\sqrt{N}+\sqrt{p})^2}{(\sqrt{N}+\sqrt{p})(\frac{1}{\sqrt{N}}+\frac{1}{\sqrt{p}})^{\frac{1}{3}}}\right)}\nonumber \\ +{F_2}{\left(\frac{(\frac{(p-\pi_2)\epsilon-\delta}{\sigma^2})N-(\sqrt{N}+\sqrt{p})^2}{(\sqrt{N}+\sqrt{p})(\frac{1}{\sqrt{N}}+\frac{1}{\sqrt{p}})^{\frac{1}{3}}}\right)} -{F_2}{\left(\frac{-(\sqrt{N}+\sqrt{p})^2}{(\sqrt{N}+\sqrt{p})(\frac{1}{\sqrt{N}}+\frac{1}{\sqrt{p}})^{\frac{1}{3}}}\right)}, \end{eqnarray} where $\epsilon={\frac{1}{p}{\rm Tr}(\bf{H}\bf{R}_s\bf{H}^\dagger)+\sigma^2}$ and $\pi_1$, $\pi_2$ (with $\pi_1<\pi_2$) denote the two real roots of the function in \eqref{eqn:Append-c-jfun} for AIC or \eqref{eqn:Append-c-jfun2} for MDL. Furthermore, upper and lower bounds can be obtained as $Q(\delta_p)\leqslant P_{d|H} \leqslant Q(\delta_1)$. \begin{proof} Please refer to Appendix~\ref{prof_pro3}. \end{proof} From Proposition 2, we find that $P_d$ is related not only to $N$ and $p$, but also to $\frac {\epsilon}{\sigma^2}$, which reflects the ratio of the received primary-signal strength to the noise variance. The exact value of $\rho\in [\delta_p,\delta_1]$ in Proposition 2 is difficult to determine analytically, since it depends on both the channel response $\bf H$ and the covariance matrix of the source signal ${\bf R}_s$. In practice, we can simply choose $\rho_{AIC}=\frac{1}{2}(\delta_p+\delta_1)$ and $\rho_{MDL}=\frac{3}{4}(\delta_p+\delta_1)$.
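The channel-dependent quantities in Proposition 2 ($\delta_i$, $\epsilon$ and the practical choices of $\rho$) are straightforward to compute once $\bf H$ and ${\bf R}_s$ are given. A sketch with an illustrative random channel and ${\bf R}_s={\bf I}$ (both are assumptions made only for this example):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, sigma2 = 10, 6, 0.5                 # p = MK, q = L + M - 1 (illustrative)
H = (rng.normal(size=(p, q)) + 1j * rng.normal(size=(p, q))) / np.sqrt(2)
Rs = np.eye(q)                            # source covariance (assumed white here)

HRH = H @ Rs @ H.conj().T
delta = np.sort(np.linalg.eigvalsh(HRH).real)[::-1]  # delta_1 >= ... >= delta_p
eps = np.trace(HRH).real / p + sigma2                # epsilon in Proposition 2
rho_aic = 0.5 * (delta[-1] + delta[0])               # practical rho for AIC
rho_mdl = 0.75 * (delta[-1] + delta[0])              # practical rho for MDL
```

Since the trace equals the sum of the eigenvalues, $\epsilon$ coincides with the mean of the $\delta_i$'s plus $\sigma^2$, which serves as a quick consistency check.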
It will be demonstrated later in Section VI that the analytical $P_{d|H}$ based on this choice of $\rho$ approximates the Monte Carlo results very well in most cases. \section{Generalized information theoretic criteria sensing algorithm} As mentioned in the previous section, the probability of detection and the probability of false alarm of the proposed simplified ITC sensing algorithm are not directly related to each other, as the algorithm does not involve any threshold (the same holds for the original ITC algorithm in \cite{Zayen09}). According to the analytical results given in \eqref{eqn:PfOfAIC} and \eqref{eqn:Qfun}, to satisfy different system requirements, a proper set of values for the parameters $M$, $K$ and $N$ in model \eqref{eqn:H1mat} would have to be chosen, which is inconvenient for practical applications. In this section, based on the analytical discussion in Section IV, we propose a generalized information theoretic criteria (GITC) sensing algorithm which provides a flexible tradeoff between $P_d$ and $P_f$ according to different system design requirements. From the expressions given in \eqref{eqn:Append-A-pfAIC2} and \eqref{eqn:Append-c-pmfull}, we find that the sensing decision of the SITC algorithm is actually based on the decision variable ${\log}\left[\frac{({\frac{1}{p}}{\sum^p_{i=1}}{l_i})^p}{({{\frac{1}{p-1}}{\sum^p_{i=2}{l_i}}})^{p-1}l_1} \right]$. Thus, we generalize the decision rule as \begin{equation} \label{eqn:GITC1} {\cal T}_{\rm {GITC}}({\bf L}_x) : \frac{({\frac{1}{p}}{\sum^p_{i=1}}{l_i})^p} {({{\frac{1}{p-1}}{\sum^p_{i=2}{l_i}}})^{p-1}l_1} \mathop \gtrless^{{\cal H}_1}_ {{\cal H}_0}\gamma, \end{equation} where $\gamma$ is a pre-set threshold. It is seen that if we set $\gamma=\exp{(\frac{2p-1}{N})}$, the decision rule in \eqref{eqn:GITC1} reduces to the AIC-based SITC sensing algorithm presented in Algorithm 1. If we set $\gamma=\exp{(\frac{(p-0.5)\log{N}}{N})}$, the algorithm becomes the MDL-based SITC sensing algorithm.
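The GITC decision rule needs only the eigenvalues of the sampled covariance matrix. A minimal numerical sketch (the synthetic rank-one primary signal, dimensions and seed are purely illustrative):

```python
import numpy as np

def gitc_statistic(x):
    """Eigenvalue decision variable of the GITC rule:
    decide H1 when it exceeds the threshold gamma."""
    p, N = x.shape
    R = x @ x.conj().T / N
    l = np.sort(np.linalg.eigvalsh(R).real)[::-1]
    return (l.mean() ** p) / ((l[1:].mean() ** (p - 1)) * l[0])

rng = np.random.default_rng(2)
p, N = 8, 2000
gamma_aic = np.exp((2 * p - 1) / N)       # recovers the AIC-based SITC rule

t_h0 = gitc_statistic(rng.normal(size=(p, N)))          # noise only
h = rng.normal(size=(p, 1))
s = rng.normal(size=(1, N))
t_h1 = gitc_statistic(rng.normal(size=(p, N)) + h @ s)  # signal present
```

Raising or lowering `gamma` trades $P_f$ against $P_d$, which is exactly the flexibility the GITC algorithm is designed to provide.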
Furthermore, the analytical results obtained in Section IV are applicable to the GITC sensing algorithm. The only change needed is to replace $\alpha_1$ and $\alpha_2$ in \eqref{eqn:PfOfAIC} (or $\pi_1$ and $\pi_2$ in \eqref{eqn:Qfun}) by the two real roots of the following function: \begin{equation}\label{eqn:GITC2} f(x)\triangleq x^p-px^{p-1}+\frac{(p-1)^{p-1}}{\gamma}. \end{equation} Thus, the outline of the proposed GITC sensing algorithm can be summarized as follows. \vspace{0.4cm} \hrule \hrule \vspace{0.2cm} \textbf{Algorithm 2: GITC sensing algorithm} \vspace{0.2cm} \hrule \vspace{0.3cm} ~~~Step 1 and Step 2: the same as in Algorithm 1 in Section IV. ~~~Step 3: According to the system requirement on $P_f$, choose a proper threshold $\gamma$ based on \eqref{eqn:PfOfAIC} and \eqref{eqn:GITC2}. ~~~Step 4: Make the decision based on \eqref{eqn:GITC1}. \vspace{0.2cm} \hrule \vspace{0.4cm} According to the decision variable in \eqref{eqn:GITC1}, the proposed GITC sensing algorithm is also an eigenvalue-based method similar to \cite{Zeng09,zeng08sp,RuiZhang10com}. The advantage of the proposed GITC over the algorithms in \cite{zeng08sp,RuiZhang10com} is that the explicit decision threshold $\gamma$ can be obtained analytically, according to the system requirement on $P_f$, before the actual deployment. \section{Simulation results and discussions} In this section, we present numerical examples to demonstrate the effectiveness of the proposed sensing schemes and to confirm the theoretical analysis. \subsection{Comparison between simulation and analytical results for both SITC and OITC} In the first set of examples, we compare the simulation results with the analytical results given in Proposition 1 and Proposition 2. The comparison between SITC and OITC is also presented. In the simulations, the channel taps are generated as zero-mean complex Gaussian random variables.
All the results are averaged over 1000 Monte Carlo realizations. For each realization, a random channel, random noise and random BPSK-modulated inputs are generated. We define the SNR as the ratio of the average received signal power to the average noise power: \begin{equation} \label{eqn:SNR} {SNR}=\frac {{\cal E}[\|{\bm x}_i-{\bm \mu}_i\|^2]}{{\cal E}[\|{\bm \mu}_i\|^2]}. \end{equation} The comparison of simulation and analytical results for $P_f$ is presented in TABLE~\ref{TABLE:ChangeMKL} and TABLE~\ref{TABLE:ChangeN}. According to Proposition 1, $P_f$ is independent of the noise variance and thus remains constant over different SNRs. Hence, we average the measured values over different SNRs to obtain the simulated $P_f$ and compare it with the analytical $P_f$. From TABLE~\ref{TABLE:ChangeMKL}, we first observe that SITC and OITC perform almost identically. It is also seen that, for AIC, the analytical results are slightly larger than the simulation results, especially when $p=MK$ is small. Nevertheless, the analytical approximation is accurate enough to evaluate the performance of the proposed sensing scheme. It is also found that $P_{f-AIC}$ gradually decreases as $p=MK$ increases, while $P_{f-MDL}$ remains zero in both the simulation and analytical results. We conclude that the MDL method has excellent false alarm performance. From TABLE~\ref{TABLE:ChangeN}, we find that the probability of false alarm increases very slowly as $N$ increases. In fact, our simulations show that $P_{f-AIC}$ is still below $0.1$ even when $N=10^{15}$ at $M=5$ and $K=4$. Figs.~\ref{fig:Theoretic_Pd}-\ref{fig:ChangeN} show $P_d$ for different system parameters. In Fig.~\ref{fig:Theoretic_Pd}, we first compare the detection performance obtained by simulation between SITC and OITC. It is seen that the proposed SITC sensing algorithm does not lead to any performance loss compared to the OITC algorithm.
Then, comparing the semi-analytical results obtained from Proposition 2 with the simulation results, one can observe a very good match between them, especially for the MDL method. Thus, Proposition 2 is validated. Fig.~\ref{fig:ChangeK}, Fig.~\ref{fig:ChangeM} and Fig.~\ref{fig:ChangeN} present the simulated $P_d$ for varying $K$ (at $M=5, N=10000$), $M$ (at $K=4, N=10000$) and $N$ (at $K=4, M=5$), respectively. It is found that the performance improves as any of these parameters increases. \subsection{Comparison between SITC and other sensing algorithms} Thus far, a number of efficient sensing algorithms have been proposed in the literature, each requiring distinct prior information. In this subsection, for a fair comparison, we only choose the eigenvalue-based methods proposed in \cite{Zeng09,zeng08sp,RuiZhang10com} and the energy detection method, since they all need little prior information. It should be mentioned that the proposed SITC-AIC and SITC-MDL algorithms are equivalent to the GITC algorithm provided in Section V with $\gamma_{\rm AIC}=\exp{(\frac{(2p-1)}{N})}$ and $\gamma_{\rm MDL}=\exp{(\frac{(p-0.5)\log{N}}{N})}$. Therefore, we omit the performance comparison with the GITC. In the simulations, we fix the channel order $L=10$ as in \cite{Zeng09} and choose $N=10000$, $K=4$ and $M=5$. Fig.~\ref{fig:ComparisonWithLiang} shows the comparison with the energy detection (ED) method (with perfect estimation of the noise covariance) and the four eigenvalue-based methods, namely, the maximum-minimum eigenvalue detection (EV-MME) and energy with minimum eigenvalue detection (EV-EME) \cite{Zeng09}, the blindly combined energy detection (EV-BCED) \cite{zeng08sp}, and the arithmetic-to-geometric mean (EV-AGM) method \cite{RuiZhang10com}. We see that, under almost the same $P_f$, energy detection performs the best, followed by the EV-AGM method, the proposed SITC-AIC method and the EV-BCED method, and then the EV-MME and EV-EME methods.
Among the proposed SITC-AIC and the four eigenvalue-based methods, SITC-AIC achieves almost the same performance as the EV-BCED method, and both outperform EV-MME and EV-EME while being slightly inferior to the EV-AGM method. Though the proposed SITC-MDL method performs the worst in terms of $P_d$, it is the best among all the considered schemes in terms of $P_f$. The comparison with energy detection under noise uncertainty is presented in Fig.~\ref{fig:ComparisonWithED}, where ``ED-x dB'' means that the noise uncertainty in energy detection is x dB, as defined in \cite{Zeng09}. It is observed that, although the proposed method performs worse than the energy detection method with accurate noise covariance estimation, it significantly outperforms energy detection in both $P_d$ and $P_f$ when there is some noise uncertainty. This clearly demonstrates the robustness of the information theoretic criteria based blind sensing algorithm. \subsection{Performance of the GITC algorithm} Results for the GITC sensing algorithm at different threshold values are shown in Fig.~\ref{fig:ThresholdCase}. Suppose we are required to choose thresholds such that $P_f=0.1$, $P_f=0.05$ and $P_f=0.01$. According to Proposition 1 and the discussion in Section V, we choose the three thresholds $\gamma=1.0372$, $\gamma=1.0393$ and $\gamma=1.0429$ (note that, since the analytical results are slightly larger than the simulation results, the thresholds are chosen such that the theoretical $P_f$ is larger than the required $P_f$ by about 0.02). From the plots, it is found that the $P_f$ requirements are satisfied very well. One can also see that the probability of false alarm is very sensitive to the threshold. Hence, the GITC sensing algorithm is flexible for system designs with different requirements. \section{Conclusions} In this paper, we have provided an extensive study of the information theoretic criteria based blind spectrum sensing method.
Building on prior work, we first proposed the simplified ITC (SITC) sensing algorithm. This algorithm significantly reduces the computational complexity without losing any detection performance compared with the existing ITC based sensing algorithm. Moreover, it enables a more tractable analytical study of the detection performance. Thereafter, applying recent advances in random matrix theory, we derived closed-form expressions for both the probability of false alarm and the probability of detection which tightly approximate the actual simulation results. We further generalized the SITC sensing algorithm to an eigenvalue-based sensing algorithm which strikes a balance between the probabilities of detection and false alarm through an adjustable threshold. Simulation results demonstrate that the proposed blind sensing algorithm outperforms the existing eigenvalue-based sensing algorithms in certain scenarios. \appendices \section{Whitening the over-sampled noises} \label{prof_Whiten} At the secondary receiver, the received continuous signal is usually filtered by a low-pass filter. Therefore, the noise $\mu(t)$ in \eqref{eqn:H0con} and \eqref{eqn:H1con} is correlated. We assume that the white noise before the filter is ${\hat \mu}(t)$ and that the impulse response of the low-pass filter is $g(t)$, which is known at the secondary receiver. In the following, we only consider the real-valued case, since in communication systems a complex-valued signal is simply the combination of two orthogonal real-valued signals. As is well known, $\mu(t)$ can be expressed in terms of ${\hat \mu}(t)$ and $g(t)$ as \begin{equation}\label{eqn:Append1-1}\nonumber \mu(t)=g(t)\otimes {\hat \mu}(t) =\int^{t_{max}}_0 {g(\ell){\hat \mu}(t-\ell)}{d\ell}, \end{equation} where $(0,t_{max})$ represents the time span of $g(t)$ and $\otimes$ denotes the convolution operator.
Thus, the autocorrelation function of $\mu(t)$, denoted by $\phi_{\mu}(\tau)$, can be expressed as \begin{equation}\label{eqn:Append1-2}\nonumber \phi_{\mu}(\tau)=\phi_{g}(\tau)\otimes\phi_{{\hat \mu}}(\tau), \end{equation} where $\phi_{g}(\tau)$ and $\phi_{{\hat \mu}}(\tau)$ are the autocorrelation functions of $g(t)$ and ${\hat \mu}(t)$, respectively. Note that $\phi_{{\hat \mu}}(\tau)$ equals $\sigma^2\delta(\tau)$ since ${\hat \mu}(t)$ is white (here the variance of ${\hat\mu}(t)$ is assumed to be $\sigma^2$). Therefore, \begin{equation}\label{eqn:Append1-3}\nonumber \phi_{\mu}(\tau)=\sigma^2\phi_{g}(\tau)=\sigma^2\int^{t_{max}}_0 {g(\ell)g(\tau-\ell)}{d\ell},~~0\leq\tau\leq2t_{max}. \end{equation} Thus, if the received signal is over-sampled at rate $Kf_s$, where $f_s$ is the reciprocal of the baseband symbol duration $T_0$ and $K$ is the over-sampling factor, the covariance matrix of the noise vector ${\bm \mu}_i$ given in \eqref{eqn:OvernewNn} becomes \begin{equation}\label{eqn:Append1-4}\nonumber {\bf R}_{\mu}=\sigma^2{\bf Q}, \end{equation} with $\bf Q$ having entries $q_{i,j}=\phi_{g}(|i-j|\frac{T_0}{K})$. Note that $\bf Q$ is a positive definite symmetric matrix. It can be decomposed as ${\bf Q}={\tilde {\bf Q}}^2$, where $\tilde {\bf Q}$ is also a positive definite symmetric matrix. Hence, to obtain independent noise samples in the over-sampling scheme, we can pre-whiten the over-sampled noise samples ${\bm \mu}_i$ as \begin{equation}\label{eqn:Append1-5}\nonumber {\tilde {\bm \mu}_i}={\tilde {\bf Q}}^{-1}{\bm \mu}_i. \end{equation} Then, the covariance matrix of $\tilde {\bm \mu}_i$ becomes \begin{equation}\label{eqn:Append1-6}\nonumber {\bf R}_{{\tilde{\mu}}_i}={\tilde {\bf Q}}^{-1}{\bf R}_{\mu}{\tilde {\bf Q}}^{-1}=\sigma^2{\bf I}_p. \end{equation} The noise samples are now whitened.
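The whitening steps above can be sketched numerically. We assume an illustrative exponentially decaying filter autocorrelation for $\bf Q$ (the actual $\phi_g$ depends on the receive filter in use), and construct the symmetric square root $\tilde{\bf Q}$ through the eigendecomposition of $\bf Q$:

```python
import numpy as np

rng = np.random.default_rng(3)
p, N, sigma2 = 12, 50000, 2.0

# Q with entries q_ij = phi_g(|i-j| T0/K); an exponential decay is assumed here.
idx = np.arange(p)
Q = 0.6 ** np.abs(idx[:, None] - idx[None, :])

# Symmetric square root: Q = Qt @ Qt, via the eigendecomposition of Q.
w, V = np.linalg.eigh(Q)
Qt = V @ np.diag(np.sqrt(w)) @ V.T
Qt_inv = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# Correlated noise vectors mu_i with covariance sigma^2 Q, then pre-whitened.
mu = np.linalg.cholesky(sigma2 * Q) @ rng.normal(size=(p, N))
mu_w = Qt_inv @ mu
R_w = mu_w @ mu_w.T / N          # should approach sigma^2 * I_p
```

The sample covariance of the whitened vectors converges to $\sigma^2{\bf I}_p$ as $N$ grows, mirroring \eqref{eqn:Append1-6}.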
It is noted that $\tilde {\bf Q}$ depends only on the low-pass filter and the over-sampling factor $K$, and is independent of the signal and noise. Therefore, the pre-whitening process can be applied blindly. \section{Proof of Lemma 1} \label{prof_lemma1} We prove the lemma in two parts. First, it has been shown in \cite{Xu95, Fishler02} that most of the estimation errors of AIC and MDL occur close to the true number of sources. According to this finding, under hypothesis ${\cal H}_0$ (the true number of source signals is zero), if there exists ${\hat k}>0$ minimizing \eqref{eqn:AICsolve} or \eqref{eqn:MDLsolve}, then ${\hat k}=1$ with high probability. Hence, Lemma 1 holds in the false alarm case. Next, we prove that Lemma 1 holds under hypothesis ${\cal H}_1$. Since the primary user is present, the eigenvalues $l_i$ of the sampled covariance matrix are distinct at least for $i=1,2,\ldots,q$ (here $q$ is the true source number). For $i=q+1,\ldots,p$, the eigenvalues are actually estimates of the noise variance $\sigma^2$; they are approximately equal to each other when $N$ is large enough. From the expressions of AIC and MDL, the second terms in \eqref{eqn:AICsolve} and \eqref{eqn:MDLsolve} are monotonically increasing functions of $k$. For the cost function in \eqref{eqn:AICsolve} or \eqref{eqn:MDLsolve} to attain its minimum at ${\hat k}\in [1,p-1]$, the first terms in \eqref{eqn:AICsolve} and \eqref{eqn:MDLsolve} must be monotonically decreasing for $k=0,1,\ldots,\hat k$. We next prove this statement. We focus on the AIC criterion; the extension to MDL is straightforward.
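Before the formal argument, the claimed monotonicity of the first (log-likelihood) term is easy to verify numerically; an illustrative sketch with arbitrary positive eigenvalues (the values and seed are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(4)
p, N = 10, 5000
l = np.sort(rng.uniform(0.5, 5.0, size=p))[::-1]   # any ordered positive eigenvalues

def f_aic(k):
    """First term of AIC: -2N(p-k) log(geometric mean / arithmetic mean)."""
    tail = l[k:]
    return -2 * N * (p - k) * (np.mean(np.log(tail)) - np.log(np.mean(tail)))

vals = [f_aic(k) for k in range(p)]                # should be non-increasing in k
```

The last value is zero, since a single remaining eigenvalue has equal geometric and arithmetic means.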
Supposing $k'\in[2,{\hat k}]$ and \begin{equation}\label{eqn:Append2-1}\nonumber f_{\rm AIC}(k)={-2\log \left (\frac{\prod^p_{i=k+1}{l^{1/(p-k)}_i}}{{\frac{1}{p-k}}\sum^p_{i=k+1}{l_i}}\right )^{(p-k)N}}, \end{equation} we have \begin{equation}\label{eqn:Append2-2}\nonumber f_{\rm AIC}(k'-1)-f_{\rm AIC}(k')=2N\log \frac{\left(\frac{1}{p-k'+1}\sum^p_{i=k'}l_i \right)^{p-k'+1}} {\left(\frac{1}{p-k'}\sum^p_{i=k'+1}l_i\right)^{p-k'}l_{k'}}. \end{equation} Since \begin{equation}\label{eqn:Append2-3}\nonumber \begin{split} &\left(\frac{1}{p-k'+1}\sum^p_{i=k'}l_i \right)^{p-k'+1} \\ &=\left(\frac{1}{p-k'}\frac{p-k'}{p-k'+1}\sum^p_{i=k'+1}l_i+\frac{1}{p-k'+1}l_{k'}\right)^{p-k'+1}\\ &\geq \left[\left(\frac{1}{p-k'}\sum^p_{i=k'+1}l_i\right)^{\frac{p-k'}{p-k'+1}}\right. \left.l_{k'}^{\frac{1}{p-k'+1}}\right]^{p-k'+1}\\ &=\left(\frac{1}{p-k'}\sum^p_{i=k'+1}l_i\right)^{p-k'}l_{k'} \end{split} \end{equation} (here, the weighted arithmetic-mean geometric-mean inequality $a_1x_1+a_2x_2\geq x_1^{a_1}x_2^{a_2}$ with $a_1+a_2=1$ is applied), we conclude that \begin{equation}\label{eqn:Append2-4}\nonumber \frac{\left(\frac{1}{p-k'+1}\sum^p_{i=k'}l_i \right)^{p-k'+1}} {\left(\frac{1}{p-k'}\sum^p_{i=k'+1}l_i\right)^{p-k'}l_{k'}} \geq 1. \end{equation} It further means \begin{equation}\label{eqn:Append2-5}\nonumber f_{\rm AIC}(k'-1)-f_{\rm AIC}(k')\geq 0, \end{equation} i.e., $f_{\rm AIC}(k)$ is a monotonically non-increasing function. Hence, we have \begin{equation}\label{eqn:Append2-6}\nonumber \lim_{N\rightarrow \infty}\frac{{\rm AIC}(0)-{\rm AIC}(1)}{N}= 2\log \frac{\left(\frac{1}{p}\sum^p_{i=1}l_i \right)^{p}} {\left(\frac{1}{p-1}\sum^p_{i=2}l_i\right)^{p-1}l_1} + \lim_{N\rightarrow \infty}\frac{-4p+2}{N}>0. \end{equation} If $N$ is finite but large enough, we claim that Lemma 1 holds with high probability. The high probability is also supported by the fact that, under ${\cal H}_1$, the largest eigenvalue $l_1$ is typically much larger than the other eigenvalues.
Therefore, $2N\log \frac{\left(\frac{1}{p}\sum_{i=1}^p{l_i} \right)^{p}} {\left(\frac{1}{p-1}\sum_{i=2}^p{l_i}\right)^{p-1}l_1}$ is large enough to make Lemma 1 hold at hypothesis ${\cal H}_1$. Thus, we complete the proof of Lemma 1. \section{Proof of Proposition 2} \label{prof_pro3} We first derive the probability of misdetection $P_m$ (the probability of missing the presence of the primary user at hypothesis ${\cal H}_1$), and then obtain the probability of detection $P_d$ as $1-P_m$. Without loss of generality, the following derivation is also based on AIC. According to \eqref{eqn:PdAIC}, we have \begin{equation}\label{eqn:Append-c-Pm} {P_{m-AIC|H}} = {\rm Pr}[{\rm AIC}(0)-{\rm AIC}(1)<0|{\cal H}_1].\nonumber \end{equation} Similar to the process described in the proof of Proposition 1, we can rewrite $P_{m-AIC|H}$ as \begin{equation} \label{eqn:Append-c-pmfull} {P_{m-AIC|H}} = {\rm Pr} \left( {\log}\left[\frac{({\frac{1}{p}}{\sum^p_{i=1}}{l_i})^p} {({{\frac{1}{p-1}}{\sum^p_{i=2}{l_i}}})^{p-1}l_1} \right] < \frac{4p-2}{2N} \bigg|{\cal H}_1\right ), \end{equation} where $\{l_1,l_2,\ldots,l_p\}$ are the eigenvalues, arranged in decreasing order, of the sampled covariance matrix ${\bf R}_x$ in \eqref{eqn:Rxn}.
When the number of observations $N$ is large enough, we obtain the approximation \begin{equation}\label{eqn:Append-c-expassume} \frac{1}{N}\sum^N_{i=1}{{\bm x}_i{\bm x}^{\dagger}_i} \approx {\cal E}\left({{\bm x}_i{\bm x}^{\dagger}_i}\right) = {\bf H}{\bf R}_s{\bf H^{\dagger}}+\sigma^2{{\bf I}_p}.\nonumber \end{equation} Thus \begin{equation}\label{eqn:Append-c-lisumassme} \frac{1}{p}\sum^p_{i=1}{l_i} \approx {\frac{1}{p}}{\rm Tr}\left({\bf H}{\bf R}_s{\bf H^{\dagger}}\right)+\sigma^2.\nonumber \end{equation} Hence, \eqref{eqn:Append-c-pmfull} becomes \begin{eqnarray}\label{eqn:Append-c-pm3} {P_{m-AIC|H}} \approx {\rm Pr}\left[{\frac{l_1}{\epsilon}}{\left(p-\frac{l_1}{\epsilon}\right)^{p-1}} >\frac{(p-1)^{p-1}}{\exp\big(\frac{2p-1}{N}\big)} \bigg|{\cal H}_1\right] \nonumber\\ = {\rm Pr}\left[y^p-py^{p-1}+\frac{(p-1)^{p-1}}{\exp\left(\frac{2p-1}{N}\right)}<0 \bigg|{\cal H}_1\right], \end{eqnarray} where $\epsilon={\frac{1}{p}}{\rm Tr}\big({\bf H}{\bf R}_s{\bf H}^{\dagger}\big)+\sigma^2$ and $y \triangleq p-\frac{l_1}{\epsilon}$. Let $\pi_1$ and $\pi_2$ (with $\pi_1<\pi_2$) be the two real roots within $(0,p)$ of the function \begin{equation}\label{eqn:Append-c-jfun} g(y)=y^p-py^{p-1}+\frac{(p-1)^{p-1}}{\exp\big(\frac{2p-1}{N}\big)}. \end{equation} As described in the proof of Proposition 1, the probability of misdetection follows as \begin{equation}\label{eqn:Append-c-pm4} {P_{m-AIC|H}} \approx {\rm Pr}{\left[\pi_1<y<\pi_2 |{\cal H}_1\right]},\nonumber \end{equation} i.e., \begin{equation}\label{eqn:Append-c-pm5} {P_{m-AIC|H}} \approx {\rm Pr}{\left[(p-\pi_2)\epsilon<l_1<(p-\pi_1)\epsilon |{\cal H}_1\right]}. \end{equation} Note that $l_1$ is the largest eigenvalue of the sampled covariance matrix ${\bf R}_x$.
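The two roots $\pi_1$ and $\pi_2$ can be found numerically. Since $g'(y)=py^{p-2}\big(y-(p-1)\big)$, the function decreases on $(0,p-1)$ and increases on $(p-1,p)$, so exactly two real roots bracket the minimum at $y=p-1$; the sketch below (illustrative values of $p$ and $N$, not from the paper) confirms this:

```python
import numpy as np

def g_roots(p, N):
    """Real roots in (0, p) of g(y) = y^p - p*y^(p-1) + (p-1)^(p-1)/exp((2p-1)/N)."""
    c = (p - 1) ** (p - 1) / np.exp((2 * p - 1) / N)
    coeffs = np.zeros(p + 1)
    coeffs[0] = 1.0      # y^p
    coeffs[1] = -p       # -p * y^(p-1)
    coeffs[-1] = c       # constant term; all intermediate coefficients vanish
    r = np.roots(coeffs)
    real = np.sort(r[np.abs(r.imag) < 1e-6].real)
    return [x for x in real if 0.0 < x < p]

p, N = 6, 1000
pi1, pi2 = g_roots(p, N)
# g decreases on (0, p-1) and increases on (p-1, p): exactly two roots
# bracketing the minimum at y = p - 1
assert 0.0 < pi1 < p - 1 < pi2 < p
```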
Given the channel matrix, ${\bf R}_x$ can be approximated as \begin{equation}\label{eqn:Append-c-approxiamtion} {\bf R}_x \approx {\frac{1}{N}}\left[{\bf H}{\sum^N_{i=1}{{\bm s}_i{{\bm s}_i}^{\dagger}}}{\bf H}^{\dagger}\right]+{\frac{1}{N}}{\sum^N_{i=1}{{\bm \mu}_i{{\bm \mu}_i}^{\dagger}}} \approx {\bf H}{{\bf R}_s}{\bf H}^{\dagger}+{\frac{1}{N}}{\sum^N_{i=1}{{\bm \mu}_i{{\bm \mu}_i}^{\dagger}}},\nonumber \end{equation} when $N$ is large enough. Let $\{\delta_1, \delta_2,\ldots, \delta_p \}$ and $\{\chi_1,\chi_2,\ldots,\chi_p \}$ be the eigenvalues, arranged in decreasing order, of ${\bf H}{{\bf R}_s}{\bf H}^{\dagger}$ and ${\frac{1}{N}}{\sum^N_{i=1}{{\bm \mu}_i{{\bm \mu}_i}^{\dagger}}}$ respectively. Applying Weyl's inequality \cite{Bhatia97}, the largest eigenvalue of ${\bf R}_x$, $l_1$, satisfies \begin{equation}\label{eqn:Append-c-weyl} \chi_1+\delta_p \leqslant l_1 \leqslant \chi_1+\delta_1.\nonumber \end{equation} Equivalently, $\chi_1$ satisfies \begin{equation}\label{eqn:Append-c-chi1} l_1-\delta_1 \leqslant \chi_1 \leqslant l_1-\delta_p. \end{equation} Therefore, there must exist a constant $\rho$ satisfying $\delta_p \leqslant \rho \leqslant \delta_1$ such that $\chi_1=l_1-\rho$. Then \eqref{eqn:Append-c-pm5} is rewritten as \begin{equation}\label{eqn:Append-c-pm6} {P_{m-AIC|H}} \approx {\rm Pr}{\left[(p-\pi_2)\epsilon-\rho<\chi_1<(p-\pi_1)\epsilon-\rho |{\cal H}_1\right]},\nonumber \end{equation} i.e., \begin{equation}\label{eqn:Append-c-pd1} {P_{d-AIC|H}} \approx {\rm Pr}{\left[\frac{(p-\pi_1)\epsilon-\rho}{\sigma^2}<\frac{\chi_1}{\sigma^2}<p \bigg|{\cal H}_1\right]}+{\rm Pr}{\left[0<\frac{\chi_1}{\sigma^2}<\frac{(p-\pi_2)\epsilon-\rho}{\sigma^2} \bigg|{\cal H}_1\right]},\nonumber \end{equation} where we use a constraint for $\frac{\chi_1}{\sigma^2}$ similar to that in the proof of Proposition 1.
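The two-sided Weyl bound used above can be checked numerically. In the sketch below, random symmetric positive semi-definite matrices stand in for ${\bf H}{\bf R}_s{\bf H}^{\dagger}$ and the averaged noise outer products (the matrices are hypothetical, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5
# PSD stand-ins for H R_s H^dagger ("signal") and the noise sample term
A = rng.standard_normal((p, p)); A = A @ A.T
B = rng.standard_normal((p, p)); B = B @ B.T

delta = np.sort(np.linalg.eigvalsh(A))[::-1]   # delta_1 >= ... >= delta_p
chi1 = np.linalg.eigvalsh(B).max()             # chi_1, largest "noise" eigenvalue
l1 = np.linalg.eigvalsh(A + B).max()           # l_1, largest eigenvalue of the sum

# Weyl's inequality: chi_1 + delta_p <= l_1 <= chi_1 + delta_1,
# equivalently l_1 - delta_1 <= chi_1 <= l_1 - delta_p
assert chi1 + delta[-1] <= l1 + 1e-9
assert l1 <= chi1 + delta[0] + 1e-9
```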
Since $\chi_1$ converges to the Tracy-Widom distribution of order two, we conclude \begin{equation}\label{eqn:Append-c-Pd} P_{d-AIC|H}\approx Q(\rho),\nonumber \end{equation} where $Q(\cdot)$ is defined in Proposition 2. Simultaneously, based on \eqref{eqn:Append-c-pm5}, the upper and lower bounds for $P_{m-AIC|H}$ are \begin{equation}\label{eqn:Append-c-bounds} 1-Q(\delta_1) \leqslant P_{m-AIC|H} \leqslant 1-Q(\delta_p).\nonumber \end{equation} Therefore, the upper and lower bounds of $P_{d-AIC|H}$ can be obtained straightforwardly as \begin{equation}\label{eqn:Append-c-boundsofPd} Q(\delta_p) \leqslant P_{d-AIC|H} \leqslant Q(\delta_1).\nonumber \end{equation} The proof for the MDL criterion is the same, except that the function $g(y)$ in \eqref{eqn:Append-c-jfun} is redefined as \begin{equation}\label{eqn:Append-c-jfun2} g(y)=y^p-py^{p-1}+\frac{(p-1)^{p-1}}{\exp\big(\frac{(p-0.5)\log{N}}{N}\big)}. \end{equation} Proposition 2 is thus proved. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Accreting neutron stars in low mass X-ray binaries (LMXB) are among the most luminous compact X-ray sources in the Milky Way. A number of them have luminosities exceeding $\sim {\rm few}\times 10^{38}$ erg/s and presumably accrete matter at a level close to the critical Eddington accretion rate. In the bright state these sources have rather soft X-ray spectra, indicating that their X-ray emission is predominantly formed in optically thick media. Similar to black holes, at lower luminosities, $\log(L_{\rm X})\la 36.5-37$, neutron stars undergo a transition to the hard spectral state \citep[e.g.][]{barret01}. The energy spectra in this state point to a low optical depth in the emission region, indicating a significant change in the geometry of the accretion flow. In the soft spectral state, the commonly accepted picture of accretion at not too extreme values of the accretion rate has two main ingredients -- the accretion disk (AD) and the boundary layer (BL). While in the disk matter rotates with nearly Keplerian velocities, in the boundary layer it decelerates down to the spin frequency of the neutron star and settles onto its surface. For typical neutron star spin frequencies ($\la500-700$ Hz) comparable amounts of energy are released in these two regions \citep{ss86,sibg00}. This picture is based on rather obvious qualitative expectations as well as more sophisticated theoretical considerations and numerical modeling \citep{ss86,kluzniak,inogamov99,sibg00}. It has, however, received little direct observational confirmation. Due to the similarity of the spectra of the accretion disk and the boundary layer, the total spectrum has a smooth curved shape, which is difficult to decompose into separate spectral components \citep{mitsuda84,white88,disalvo01,done02}.
This has made the application of physically motivated spectral models to the description of the observed spectra of luminous neutron stars difficult, in spite of the very significant increase in the sensitivity of X-ray instruments. Not surprisingly, the best fit parameters derived from the data of different instruments and, correspondingly, the inferred values of the physically meaningful quantities are often in contradiction with each other. This ambiguity can be resolved if the spectral information is analysed together with timing data. Early results of \citet{mitsuda84} and \citet{mitsuda86} suggested that the boundary layer and accretion disk may have different patterns of spectral variability. Based on the TENMA data, they studied the difference between the spectra averaged at different intensity levels -- an approach that restricted the range of accessible time scales to $\ga 10^3$ sec. \cite{gilfanov03} and \cite{mikej05} have exploited the technique of Fourier frequency resolved spectroscopy \citep{freq_res99} to study the spectral variability of luminous LMXBs in a broad range of time scales, including the kHz QPOs. Their findings are reviewed and discussed below. \begin{figure} \includegraphics[width=0.5\textwidth]{freqres_gx340.ps} \caption{Average and frequency resolved spectra of GX340+0 on the horizontal branch of the color-color diagram. The solid lines show the Comptonization spectrum with parameters similar to those given in Section~\ref{sec:bl_spectrum}. \label{fig:freqres_gx340}} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{lags_gx340.ps} \caption{Phase lags in GX340+0 on the horizontal branch as functions of energy ({\em upper panel}) and Fourier frequency ({\em lower panel}). The energy dependent phase lags were computed in the 1--32 Hz frequency range, the frequency dependent lags are between the 3--6.5 keV and 6.5--13 keV energy bands. The phase is normalized to the 0--1 interval. 
\label{fig:lags_gx340}} \end{figure} \section{Fourier-frequency resolved spectroscopy of luminous LMXBs} \label{sec:freqres_theory} \subsection{The method} As defined in \citet{freq_res99}, the Fourier frequency resolved spectrum is the energy dependent rms amplitude in a selected frequency range, expressed in absolute (as opposed to fractional) units. A similar approach was used by \citet{mendez0614} to study the energy spectrum of kHz oscillations in 4U0614+09. One of its advantages over the fractional rms--vs.--energy dependence is the possibility of using conventional (i.e. response folded) spectral approximations to describe the energy dependence of aperiodic variability. Although the interpretation of the frequency resolved spectra is not always straightforward, several applications of this technique to the variability of black hole binaries gave meaningful results \citep[e.g.][]{freq_res99, gilfanov_cygx1}. We will use the following example as an illustration. Let us consider a two-component spectrum, consisting of a constant and a variable component. The variable component changes its normalization but not its spectral shape. In this case the shape of the frequency resolved spectrum would not depend on the Fourier frequency and would be identical to the spectrum of the variable component. Importantly, the X-ray flux in all energy channels will vary coherently and with zero time/phase lag between different energies. The presence of a significant phase lag and/or a Fourier frequency dependence of the frequency resolved spectra would indicate that a more complex pattern of spectral variability is taking place.
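The example above can be sketched numerically. In the following illustration (synthetic light curves with hypothetical channel values $S_0(E)$ and $S(E)$, not real PCA data), the frequency resolved spectrum recovers the shape of the variable component $S(E)$ regardless of the constant component:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8192
f_t = np.cumsum(rng.standard_normal(n))      # common variable signal f(t)
S0 = np.array([40.0, 30.0, 20.0, 10.0])      # constant spectral component S_0(E)
S = np.array([1.0, 2.0, 4.0, 8.0])           # spectrum of the variable component S(E)
rates = S0[:, None] + S[:, None] * f_t[None, :]   # F(E, t), one row per channel

freqs = np.fft.rfftfreq(n, d=1.0)
band = (freqs > 0.01) & (freqs < 0.1)        # selected frequency range

rms = np.empty(len(S))
for ch in range(len(S)):
    X = np.fft.rfft(rates[ch] - rates[ch].mean())
    rms[ch] = np.sqrt(2.0 * np.sum(np.abs(X[band]) ** 2)) / n  # absolute rms

# the frequency resolved spectrum is proportional to S(E), not to S_0(E)
assert np.allclose(rms / rms[0], S / S[0])
```

Whatever normalization convention is chosen for the rms, it cancels in the channel-to-channel ratios, so the shape of $S(E)$ is recovered exactly.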
With few exceptions \citep{dieters00}, the phase lag between light curves in different energy bands in luminous LMXBs is usually small, $\Delta\phi\la{\rm few}\times 10^{-2}$, the coherence is consistent with unity \citep[e.g.][]{vaughan94, vaughan99, dieters00}, and the behavior of the fractional rms-vs-energy dependence is similar at different Fourier frequencies \citep{vdk86, vdk00}. This suggests that Fourier frequency resolved spectra can be interpreted in a straightforward and model-independent manner. \subsection{Results} For the case study we use archival data of PCA observations of the Z-source GX340+0 on the horizontal branch of the color-color diagram. \citet{gilfanov03} conducted a similar study of the atoll source 4U1608-52 and arrived at similar conclusions. Our choice was defined by the requirement that the PCA configuration combine sufficient energy resolution (a large number of energy channels) with good timing resolution and a large total exposure time. The Fourier frequency resolved spectra in several frequency bands corresponding to the band limited continuum noise component and the $\sim 25$ Hz QPO are shown in Fig.~\ref{fig:freqres_gx340} along with the conventional spectrum of the source averaged over the same data. The figure clearly demonstrates that the shape of the spectra depends on the Fourier frequency at low frequencies and becomes independent of the frequency at $f\ga 0.5$ Hz. Another conclusion from the data presented in Fig.~\ref{fig:freqres_gx340}, important for the following discussion, is that all frequency resolved spectra are significantly harder than the average source spectrum. The phase lags as functions of energy and Fourier frequency are shown in Fig.~\ref{fig:lags_gx340}. No statistically significant phase lags were detected, with an upper limit of $\Delta\phi\sim 10^{-2}$, where the phase $\phi$ is normalized to the interval 0--1 (as opposed to $0-2\pi$).
\subsection{Interpretation} We show below that the independence of the frequency resolved spectra of the Fourier frequency and the smallness of the phase lags require a particularly simple form of the spectral variability. The constancy of the spectral shape with Fourier frequency implies that the power spectrum $P(E,\omega)$ can be represented as a product of two functions, one of which depends on the energy and the other on the frequency only. For convenience we write $P(E,\omega)$ in the form: \begin{equation} P(E,\omega)=S^2(E)\times f^2(\omega) \label{eq:pds} \end{equation} where the non-negative functions $S(E)$ and $f(\omega)$ can be directly determined from the frequency resolved spectra. The Fourier image of the light curve $F(E,t)$ is: \begin{equation} \hat F(E,\omega)=S(E)\times f(\omega)\times e^{i \phi(E,\omega)} \end{equation} In the general case the complex argument $\phi(E,\omega)$ can depend both on the Fourier frequency $\omega$ and the energy $E$. If the phase lags between different energies are negligibly small, $\phi$ depends on the Fourier frequency only and the Fourier image of $F(E,t)$ is: \begin{equation} \hat F(E,\omega)=S(E)\times f(\omega)\times e^{i \phi(\omega)} \label{eq:ft} \end{equation} The light curve $F(E,t)$ can be computed via the inverse Fourier transform of $\hat F(E,\omega)$: \begin{eqnarray} F(E,t)=\int d\omega \hat F(E,\omega) e^{i\omega t}=\nonumber\\ =S(E)\times \int d\omega f(\omega) e^{i \phi(\omega)} e^{i\omega t}= \\ =S(E)\times f(t)\nonumber \end{eqnarray} An arbitrary function of energy can obviously be added to the above expression: \begin{equation} F(E,t)=S_0(E)+ f(t)\times S(E) \label{eq:lc} \end{equation} Thus, the light curves at different energies are related by a linear transformation. From Eq.~(\ref{eq:lc}) it follows that the coherence of the signals in any two energy bands is exactly unity. This prediction is in good agreement with observations (Fig.~\ref{fig:coh_gx340}).
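The unity-coherence prediction is straightforward to verify numerically. The sketch below (synthetic light curves with hypothetical band normalizations, coherence estimated from segment-averaged periodograms) checks it for two bands linearly related as in Eq.~(\ref{eq:lc}):

```python
import numpy as np

rng = np.random.default_rng(2)
n, nseg = 8192, 16
f_t = np.cumsum(rng.standard_normal(n))    # common variable signal f(t)
a = 10.0 + 1.0 * f_t                       # soft band: S_0 = 10, S = 1
b = 3.0 + 0.5 * f_t                        # hard band: S_0 = 3,  S = 0.5

m = n // nseg
Sxx = np.zeros(m // 2 + 1)
Syy = np.zeros(m // 2 + 1)
Sxy = np.zeros(m // 2 + 1, dtype=complex)
for i in range(nseg):                      # average periodograms over segments
    x = a[i * m:(i + 1) * m]
    y = b[i * m:(i + 1) * m]
    X = np.fft.rfft(x - x.mean())
    Y = np.fft.rfft(y - y.mean())
    Sxx += np.abs(X) ** 2
    Syy += np.abs(Y) ** 2
    Sxy += X * np.conj(Y)

coh = np.abs(Sxy[1:]) ** 2 / (Sxx[1:] * Syy[1:])   # skip the zero-frequency bin
assert np.allclose(coh, 1.0)   # linearly related light curves -> unity coherence
```

Real data would of course add Poisson noise, which biases the raw coherence below unity; the noiseless case isolates the consequence of Eq.~(\ref{eq:lc}) itself.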
\begin{figure} \includegraphics[width=0.5\textwidth]{coherence_gx340.ps} \caption{GX340+0: Coherence between the light curves in the 3--6.5 and 6.5--13 keV energy bands as a function of frequency. No correction for the dead time effects has been made. \label{fig:coh_gx340}} \end{figure} \section{Boundary layer and accretion disk emission} \label{sec:disk_bl} As follows from Eq.~(\ref{eq:lc}), two components can be distinguished in the GX340+0 emission. The term $S_0(E)$ in Eq.~(\ref{eq:lc}) is the constant (non-variable) part of the source emission spectrum and $f(t)$ represents flux variations of the second, variable component.\footnote {The possibility of small variations of the spectral parameters (e.g. temperature, optical depth, etc.) is discussed by \citet{gilfanov03} and is shown to be inconsistent with observations. } The spectrum $S(E)$ of the variable component does not change in the course of flux variations and equals the frequency resolved spectrum, i.e. can be directly determined from observations. There are two major components of accretion onto a slowly rotating weakly magnetized neutron star -- (i) the Keplerian accretion disk and (ii) the boundary or spreading layer near the surface of the neutron star, in which the accreting matter decelerates to the spin frequency of the star and spreads over its surface \citep{ss86, kluzniak, inogamov99, popham01}. These two geometrically distinct regions give comparable contributions to the observed X-ray emission \citep{sibg00}. Recalling Eq.~(\ref{eq:lc}), it is plausible to assume that the variable part of the X-ray emission is associated with one of these components. In order to check this assumption and to identify the variable component, we consider below the theoretical expectations for the disk and boundary layer spectra and compare them with the observed frequency resolved spectra. At sufficiently high values of $\dot{M}$ both the BL and the disk are optically thick, as confirmed by the softness of the LMXB spectra.
Simple arguments, taking into account the difference in the emitting areas, suggest that the spectrum of the boundary layer should be harder than that of the accretion disk \citep[e.g.][]{mitsuda84,greb}. Due to the complexity of the boundary/spreading layer problem, the theory has not advanced significantly beyond this qualitative statement -- no models capable of directly predicting its spectrum exist yet. Significantly better progress has been achieved in modeling the spectra of accretion disks \citep{ss73, shimura_takahara95, ross96}. Relatively simple models of the multicolor disk type, which account for the effects of Compton scattering via a simple color-to-effective temperature ratio, have turned out to be successful in describing the accretion disk spectra observed in the high state of black hole systems \citep[e.g.][]{grad, gierlinski97}. \begin{figure} \includegraphics[width=\columnwidth]{grad_bl_gx340_shaded_hist.ps} \caption{The average and frequency resolved spectra of GX340+0 (horizontal branch). The shaded area shows the plausible range of the boundary layer spectra calculated as described in Section~\ref{sec:disk_bl}. The dashed (blue) histogram shows the best fit accretion disk spectrum \citep{gilfanov03}. The upper solid (red) histogram shows the boundary layer spectrum computed as the difference between the (observed) total and (predicted) accretion disk spectrum. The lower solid histogram is the same but scaled to the total energy flux of the frequency resolved spectrum. \label{fig:disk_bl}} \end{figure} For this reason we chose to use a model of the disk emission as the starting point. The BL spectrum is computed as the difference between the (observed) total spectrum and the (predicted) disk spectrum. To estimate the plausible range of the BL spectra we investigate the parameter space of the accretion disk model. For the latter we adopt the general relativistic accretion disk model by \citet{grad} (the ``grad'' model in XSPEC).
The parameters of the model are: the source distance $D$, the mass of the central object $M_{\rm NS}$, the disk inclination angle $i$, the mass accretion rate $\dot{M}$ and the color-to-effective temperature ratio $f=T_{\rm col}/T_{\rm eff}$. With this approach we can predict the disk and boundary layer spectra based on the observed X-ray flux and spectrum and very generic system parameters, such as the neutron star spin frequency, the source distance, etc. The procedure is described in detail in \citet{gilfanov03}. The obtained range of the BL spectra is shown in Fig.~\ref{fig:disk_bl} as the shaded area. The similarity of the predicted BL spectrum and the observed frequency resolved spectrum is obvious. On the other hand, the disk spectrum is significantly softer and is inconsistent with the frequency resolved spectrum. \citet{gilfanov03} conducted a similar investigation of the atoll source 4U1608-52, which has a $\sim 10$ times lower luminosity. They considered frequency resolved spectra at different frequencies, including the kHz QPOs, and showed that the source demonstrates behavior identical to that of GX340+0. Based on these results we conclude that the X-ray variability on $\sim$ second -- millisecond time scales is related to variations of the luminosity of the boundary layer. The shape of its spectrum remains nearly constant in the course of these variations and equals the frequency resolved spectrum, i.e. can be directly obtained from the observations. This can be used to separate the boundary layer and accretion disk contributions to the total spectrum and permits a quantitative check of the predictions of the accretion disk and boundary layer models. It also opens the possibility of measuring the relative contributions of these two components of the accretion flow to the total observed X-ray emission.
\section{Boundary layer spectrum} \label{sec:bl_spectrum} Based on the assumption that the frequency resolved spectra in luminous LMXBs adequately represent the boundary layer spectrum, we compare several LMXBs -- the atoll sources 4U1608-52 and 4U1820-30 and the Z-sources Cyg X-2 and GX 17+2. Their frequency resolved spectra at frequencies $f\ga$ few Hz are shown in Fig.~\ref{freq_spectra}. As before, for Z-sources we used only data on the horizontal branch of the color-color diagram, where the amplitude of variability at these frequencies is maximal. To facilitate the comparison, the normalizations of all spectra were adjusted to match that of GX340+0. The similarity of the spectra is remarkable, especially considering the significant difference in the average spectra and a factor of $\sim 10-20$ spread in the luminosity between atoll and Z-sources ($\sim 0.1 \dot{M}_{\rm Edd}$ and $\sim \dot{M}_{\rm Edd}$ respectively). The fact that the boundary layer spectrum does not depend on the luminosity lends support to the theoretical predictions by \citet{inogamov99} that the boundary layer is radiation pressure supported, i.e. radiates at the local Eddington flux limit. In this model the luminosity of the spreading layer on the surface of the neutron star changes due to variations of its area, rather than of the surface emissivity per unit area. If this picture is correct, the parameters of the BL emission can be used to determine the value of the Eddington flux limit on the surface of the neutron star. As the Eddington flux limit is uniquely determined by the neutron star surface gravity and the atmospheric chemical composition, the neutron star mass and radius can be constrained \citep{mikej05}. The similarity of the shape of the BL spectrum in different sources (Fig.~\ref{freq_spectra}) indicates that there is no significant spread in the values of the mass and radius among LMXBs, in particular, that the surface gravity in atoll and Z-sources is similar.
It also shows that there are no significant differences caused by variations in the atmospheric chemical abundances between sources. In particular, we did not find a statistically significant difference between the ultra-compact binary 4U1820-30 and the other sources. The shape of the frequency resolved ($\approx$ boundary layer) spectrum can be adequately described by saturated Comptonization. For the sake of comparison with other results and for a convenient parameterization of the BL spectrum we used the Comptonization model of Titarchuk (1994). The best fit parameters of the model, fitted in the 3--20 keV range simultaneously to all five spectra shown in Fig.~\ref{freq_spectra}, are: the temperature of seed photons $kT_s=1.5\pm 0.1$ keV, the temperature of electrons $kT_e=3.3\pm0.4$ keV and the optical depth $\tau=5\pm1$ for slab geometry. The best fit model is shown by the thick dotted line in Fig.~\ref{freq_spectra}. The temperature of the black body spectrum describing the shape of the cutoff in the observed spectrum at energies $>$13 keV is $kT_{\rm bb}=2.4\pm0.1$ keV (thin dashed line in Fig.~\ref{freq_spectra}). The fact that the kHz QPOs show the same behavior as other components of the aperiodic variability indicates that they have the same origin, i.e. are caused by variations of the luminosity of the boundary layer. Although the kHz ``clock'' can be in the disk or due to its interaction with the neutron star, the actual modulation of the X-ray flux occurs on the neutron star surface. \subsection{BL spectrum on the normal branch} Further along the Z-track of GX340+0, on the normal and flaring branches, the fractional rms of the X-ray variability decreases significantly, by a factor of $\sim 5-10$. Nevertheless, the statistics are sufficient to place meaningful constraints on the first half of the normal branch.
The data indicate that the behavior of the frequency resolved spectra does not change its character -- at sufficiently high frequencies, $f\ga 1$ Hz, their shape does not depend upon the Fourier frequency and is significantly harder than the average spectrum and the expected spectrum of the accretion disk. Therefore, we can conclude that the frequency resolved spectra are representative of the spectrum of the boundary layer. A fit to the frequency resolved spectrum with the Comptonization model requires infinitely large values of the Comptonization parameter. Correspondingly, it can be described by a Wien or blackbody spectrum (they are close to each other in the $E\ga 3$ keV range) with a best fit temperature of $kT\approx 2.4$ keV. The frequency resolved spectra ($\approx$ boundary layer spectra) of GX340+0 on the normal and horizontal branches are compared in Fig.~\ref{fig:bl_alongz}. Thus, with the increase of the mass accretion rate up to a value close to the critical Eddington rate, the boundary layer spectrum in the 3--20 keV energy range approaches a Wien spectrum. \begin{figure} \includegraphics[width=\columnwidth]{fre_res_spectra.ps} \caption{Fourier-frequency resolved spectra ($\approx$ boundary layer spectra) of 5 Z- and atoll sources \citep[from][]{mikej05}. For 4U1608-52 the frequency resolved spectrum of the lower kHz QPO is shown. All spectra were corrected for the interstellar absorption. The thick short-dashed line shows the best fit Comptonization model with $kT_s=1.5$ keV, $kT_e=3.3$ keV, $\tau=5$. The thin long-dashed line shows the blackbody spectrum with temperature $kT_{\rm bb}=2.4$ keV.} \label{freq_spectra} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth, clip]{bl_nhcorr_alongz.ps} \caption{The absorption corrected frequency resolved spectra of the QPO ($\approx$ boundary layer emission) in GX340+0 on the horizontal branch (lower $\dot{M}$) and the upper half of the normal branch (higher $\dot{M}$). 
The horizontal branch data are the same as in Fig.~\ref{fig:freqres_gx340}--\ref{fig:disk_bl}. The solid line shows a Wien spectrum with $kT=2.4$ keV. \label{fig:bl_alongz}} \end{figure} It would be interesting to follow up on these results and to consider the change of the BL spectrum from the horizontal to the normal branch in other Z-sources. Unfortunately, in the other four Z-sources from our sample the variability level on the normal branch is insufficient to obtain frequency resolved spectra with a reasonable signal-to-noise ratio. \section{Nature of the Z-track} Knowledge of the shape of the boundary layer spectrum allows us to resolve the degeneracy caused by the similarity of the accretion disk and boundary layer spectra, which hindered many previous LMXB studies. As demonstrated by \citet{gilfanov03}, the spectra of atoll and Z-sources can be adequately described by the sum of the (renormalized) frequency resolved spectrum, representing the boundary layer component, and of the accretion disk emission (Fig.~\ref{fig:disk_bl}). The spectrum of the latter is well described by the general relativistic accretion disk model. The best fit values of the mass accretion rate are consistent with those inferred from the observed X-ray flux and the accretion efficiency appropriate for a 1.4$M_{\sun}$ neutron star with a spin frequency of $\sim 500$ Hz. The agreement is especially remarkable, as the luminosity and mass accretion rate in atoll and Z-sources differ by a factor of $\sim 10$. We further exploit this approach and study the behavior of Z-sources in the color-color diagram in an attempt to relate the motion along the Z-track to changes of the physically meaningful parameters. We consider spectra integrated over 128-sec time intervals. As above, these spectra are fitted with a model consisting of the boundary layer and accretion disk components.
The shape of the boundary layer spectrum was approximated by the $comptt$ model with the parameters from Section~\ref{sec:bl_spectrum}. For the accretion disk spectrum we adopt the multicolor disk model (the $diskbb$ model in XSPEC) or the general relativistic accretion disk model ($grad$). The model adequately describes the observed spectra on the normal and horizontal branches. Further details of the analysis method, limitations of the model and a discussion of the results are presented in \citet{mikej05}. \begin{figure} \includegraphics[width=\columnwidth]{hc_bl.ps} \caption{The boundary layer contribution to the total X-ray emission of Z-sources as a function of the position on the Z-diagram.} \label{blcontr} \end{figure} \subsection{BL fraction} The dependences of the BL contribution to the total X-ray emission on the position on the Z-track are plotted in Fig.~\ref{blcontr}. The coordinate along the Z-track was defined to be proportional to the hard color, with the reference points $S_Z=1,2$ corresponding to the turning points of the ``Z'', as is commonly used for such plots. Statistical uncertainties in the values of the BL fraction are small and can be neglected, as confirmed by the dispersion of the points in Fig.~\ref{blcontr}. More important are the systematic ones, associated with the imprecise knowledge of the shape of the BL spectrum and its possible variations along the Z-track. As demonstrated in \citet{mikej05}, their amplitude does not exceed $\sim 0.05-0.1$ in the units of Fig.~\ref{blcontr}. Fig.~\ref{blcontr} suggests that the boundary layer fraction decreases along the Z-track and is smaller on the normal branch than on the horizontal branch. As discussed in \citet{mikej05}, this conclusion is rather robust, as long as the assumption regarding the constancy of the boundary layer spectrum is approximately correct.
As the variability at $f\ga 1$ Hz is primarily associated with the boundary layer emission, the decrease of the boundary layer fraction along the Z-track also explains the well-known decrease of the level of aperiodic and quasi-periodic variability. Although no simple physical interpretation of the observed behavior can be offered, we mention several possibilities. One of these is that the general structure of the accretion flow does not change significantly and $\sim 50\%$ of the energy is always released on, or very close to, the neutron star surface. The apparent decrease of the boundary layer fraction on the normal branch is then a result of its geometrical obscuration by, for example, the geometrically thickened accretion disk. An alternative possibility is that at high values of the mass accretion rate, $\dot{M}\sim\dot{M}_{\rm Edd}$, a significant modification of the accretion flow structure occurs and its division into two geometrically distinct parts -- the boundary layer and the accretion disk -- becomes inapplicable. Namely, due to non-negligible pressure effects the deceleration of the orbital motion of the accreting matter from the Keplerian frequency to the neutron star spin frequency would take place in a geometrically extended region with a radial extent of $\Delta R\sim R_{\rm NS}$. In this case, the observed decrease of the boundary layer fraction could reflect an actual decrease of the fraction of the energy released on the neutron star surface, with the rest of the energy being released in the extended transition region. \begin{figure} \includegraphics[width=\columnwidth]{ccd_model.eps} \caption{The horizontal and normal branches in the color-color diagram of several Z-sources (Cyg X-2, GX340+0, Sco X-1, GX 5-1 and GX 17+2). Overlaid on the data are the tracks predicted by the model (Section~\ref{sec:zdiagram}). 
The two thick dashed lines show the evolution of the colors with the change of the accretion rate in the disk for two different values of the BL fraction: 44\% (upper) and zero (lower). The values of the mass accretion rate $\dot{M}$ of the disk component are marked in units of $10^{18}$ g/s. The thick solid line shows the Z-track with the transition at $\dot{M}\sim 2\cdot 10^{18}$ g/s.} \label{z_model} \end{figure} \subsection{Shape of Z-diagram} \label{sec:zdiagram} Motivated by these results, we use the two-component spectral model to explain the shape of the Z-track on the color-color diagram. For this purpose, the accretion disk component is modeled by the general relativistic accretion disk model of Ebisawa et al. (1991) (the {\em grad} model in XSPEC), which explicitly includes the dependence on the mass accretion rate. Using this spectral model we can calculate the position in the color-color diagram as a function of the mass accretion rate and the boundary layer fraction. This is an attempt to understand general tendencies of the Z-diagram, rather than to construct its precise quantitative description. The results are presented in Fig.~\ref{z_model}. The two dashed curves in the figure show the evolution of the spectral colors with increasing $\dot{M}$ for two values of the boundary layer to disk flux ratio, $F_{\rm BL}/F_{\rm disk}=0.8$ (corresponding to a BL fraction of $\approx 44\%$) and zero. The evolution of colors corresponding to a change of the BL contribution from $F_{\rm BL}/F_{\rm disk}=0.8$ to zero at $\dot{M}\sim2\times 10^{18}$ g/s is shown by the thick solid line. The general shape of the Z-track can be reproduced in the model as a result of the variation of two parameters -- the mass accretion rate and the BL fraction. The mass accretion rate increases along the Z-track. The Z-shape of the track is defined by the variation of the BL fraction, which decreases along the normal branch from the value of $\sim 50\%$ expected in the ``standard'' theories to a small number close to zero at the end of the normal branch. 
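How such a color-color track arises can be mimicked with a deliberately simplified model: a Wien BL spectrum with $kT=2.4$ keV and a crude multicolor-disk sum in place of the {\em grad} model. The energy bands, the $T_{\rm in}\propto\dot{M}^{1/4}$ scaling, and the accretion-rate units below are illustrative assumptions, not the values used in the actual calculation:

```python
import numpy as np

E = np.linspace(1.0, 20.0, 400)                      # energy grid, keV

def wien(E, kT):
    return E**3 * np.exp(-E / kT)

def multicolor_disk(E, T_in):
    # Crude multicolor disk: sum of blackbody annuli with T(r) = T_in * (r/r_in)^-0.75.
    r = np.linspace(1.0, 50.0, 200)
    T = T_in * r**-0.75
    return (E[None, :]**3 / np.expm1(E[None, :] / T[:, None]) * r[:, None]).sum(axis=0)

def colors(spec):
    # Illustrative soft/hard color definitions (band flux ratios).
    band = lambda lo, hi: spec[(E >= lo) & (E < hi)].sum()
    return band(3, 5) / band(1.5, 3), band(7, 20) / band(5, 7)

bl = wien(E, 2.4)
bl /= bl.sum()

track = []
for mdot in np.linspace(0.5, 3.0, 6):                # arbitrary accretion-rate units
    T_in = 1.5 * mdot**0.25                          # assumed T_in ~ Mdot^(1/4), keV
    disk = multicolor_disk(E, T_in)
    disk /= disk.sum()
    for f_bl in (0.44, 0.0):                         # the two BL fractions of the figure
        spec = f_bl * bl + (1 - f_bl) * disk
        track.append((mdot, f_bl, *colors(spec)))
```

Plotting hard against soft color for the two `f_bl` branches, joined at the transition accretion rate, reproduces the qualitative construction described above.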
The exact value of $\dot{M}$ corresponding to the transition from the horizontal to the normal branch depends on the disk model parameters -- the binary system inclination, the mass of the neutron star and the spectral hardening factor. For our choice of parameters, it equals $\dot{M}\sim 2\cdot10^{18}$ g/s, i.e. it is of the order of the Eddington critical value for a $1.4M_\odot$ neutron star. \section{Summary} \begin{enumerate} \item The X-ray variability in luminous LMXBs on short timescales, $f\ga 1$ Hz, is caused by variations of the luminosity of the boundary layer. The accretion disk emission is significantly less variable at these frequencies. The BL spectrum remains nearly constant in the course of luminosity variations and its shape equals the frequency-resolved spectrum, i.e. it can be directly derived from the timing data (Fig.~\ref{fig:disk_bl}). \item In the investigated range of the mass accretion rate, $\dot{M}\sim (0.1-1)\dot{M}_{\rm Edd}$, the boundary layer spectrum depends weakly on $\dot{M}$. Its shape is remarkably similar in atoll and Z-sources (Fig.~\ref{freq_spectra}), despite an order of magnitude difference in the mass accretion rate. The data indicate that in the limit of high $\dot{M}\sim\dot{M}_{\rm Edd}$, the boundary layer spectrum can be described by a Wien spectrum with $kT\approx 2.4$ keV (Fig.~\ref{fig:bl_alongz}). At lower values of $\dot{M}$ the spectra are better described by a model of saturated Comptonization with an electron temperature of $\sim 2-4$ keV and a Comptonization parameter $y\sim 1$. The weak dependence of the BL spectrum on the global value of $\dot{M}$ lends support to the theoretical suggestion by \citet{inogamov99} that the boundary layer is radiation pressure supported. \item The kHz QPOs appear to have the same origin as the aperiodic and quasiperiodic variability at lower frequencies. 
The msec flux modulations originate on the surface of the neutron star, although the kHz ``clock'' might reside in the disk or be determined by the disk -- neutron star interaction. \item We attempt to relate the motion of Z-sources along the Z-track to changes in the values of physically meaningful parameters. Our results suggest that the contribution of the boundary layer component to the observed emission decreases along the Z-track from the conventional value of $\sim 50\%$ on the horizontal branch to a rather small number at the end of the normal branch (Figs.~\ref{blcontr},~\ref{z_model}). The main difference between our approach and previous attempts is the a priori knowledge of the shape of the boundary layer spectrum. This allowed us to avoid the ambiguity of the spectral decomposition into boundary layer and disk components. \end{enumerate} \begin{acknowledgements} This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. \end{acknowledgements}
\section{Related Work} In this section, we briefly discuss prior work on uncertainty modeling and research on uncertainty visualization. We then describe the importance of modeling and communicating uncertainty in text data. \subsection{Uncertainty Modeling} Uncertainty, error, ambiguity, and other representation issues have been modeled in various ways across research communities. Specifically for linguistic processing and visual analysis, it is imperative to capture such issues for every task at hand. For example, for the task of linguistic annotation, modeling ambiguities and inter-annotator agreement is of utmost importance, as described by Beck et al.~\cite{beck2020representation}. For other tasks, such as sentiment analysis, Bayesian deep learning can be utilized to characterize and model uncertainty~\cite{xiao2019quantifying}. For the task of multi-labeled text classification, Chen et al.~\cite{chen2020uncertainty} quantify uncertainty in the transformation step of the pipeline, deploying experiments to measure both \textit{aleatoric} and \textit{epistemic} uncertainty (two different natures of uncertainty). Other approaches focus on communicating uncertainty, for example through visualization: Collins et al.~\cite{collins2007visualization} model the uncertainty of text data during visualization using lattice graphs, a graph-based visualization that exposes the multiple possible outputs that would otherwise be hidden from users. While there are many uncertainty modeling approaches, in the context of this paper, we focus on the overall visual text analysis pipeline, including uncertainty visualization, as described in the next subsection. \subsection{Uncertainty Visualization} According to Zuk and Carpendale, uncertainty is a fundamental part of any analytic or reasoning process \cite{zuk2007visualization}. Identifying and communicating uncertainty is critical for many analytical tasks. 
Prior work has addressed the importance of identifying, quantifying, and visualizing uncertainty (e.g., \cite{ hullman2019authors, kay2016ish}). The complexity of visualizing uncertainty is a known challenge \cite{greis2017designing}. Researchers in visualization have investigated various techniques for conveying uncertainty through interaction, animation, and sonification \cite{tse2016we}. Some examples include Value Suppressing Uncertainty Palettes (VSUPs), which adjust the visual channel allocated to uncertainty based on the level of uncertainty \cite{correll2018value}, and Hypothetical Outcome Plots (HOPs), which animate a finite set of individual draws \cite{hullman2015hypothetical}. \subsection{The Importance of Uncertainty in Text Data} While previous work explores various methods for visualizing uncertainty in general, due to the complexity and ambiguity involved in text data, traditional methods might not always apply to the variety of issues that appear and propagate through the text analysis pipeline. Recent work in text analytics suggests that text visualization needs more careful consideration of uncertainty, its sources, and potential ways to visualize it \cite{hofman2020visualizing}. The uncertainty in text visualization has many origins. First, we need to consider that text is an imperfect representation of human thoughts; therefore, encoding thoughts in text by nature produces artifacts. Another issue is that people might have different understandings and interpretations of any given text. Hence, one single text input can result in multiple interpretations, which affect not only the receiver but also the annotators. 
Uncertainty can hamper judgment in any decision-making scenario \cite{tversky1974judgment}, but the impact of uncertainty is exacerbated in domains where text data is utilized for important decisions, such as civic decision-making~\cite{Baumer2022OfCourse}, or in humanities research where historians and archivists analyze the text data in newspaper archives to answer fundamental questions about society~\cite{handler2022clioquery}. One open challenge in the text visual analytics domain is how to identify sources of errors and artifacts in the text visualization pipeline and how to design techniques and embed them in various stages of the pipeline to communicate uncertainty to various actors such as annotators and end users. Due to the inherent complexity of uncertainty visualization, another open challenge is how to quantify, model, and visualize the uncertainty to ensure users can understand and interpret it correctly. \section{Uncertainties in Visual Text Analytics Pipelines}\label{sec:uncertainty} Uncertainty in text visualization stems from many origins. \autoref{fig:teaser} visually summarizes the six sources of uncertainty we identified in the three stages of the visual text analysis pipeline: labeling, modeling, and analysis. In the following, we describe the different types of uncertainty in detail. \subsection{Semantic Uncertainty} At the start of the pipeline, uncertainty can be introduced by the text \emph{producer}, that is, by the way in which a person types a text or adds a text document to a collection. The errors can be related to the text itself or to the metadata, such as the date of the text production or the text attribution (author). Transcription from a manuscript or from an oral source can always contain transcription errors or misspelled names that will propagate errors, and hence uncertainty, down the pipeline. 
Note that we discuss here a type of uncertainty that originates with the producer and is not caused by textual (e.g., misspellings) or metadata errors. \begin{table}[h] \renewcommand{\arraystretch}{1.3} \begin{tabular}{p{8cm} } \cellcolor{darkblue}\textbf{Semantic Uncertainty }\\ \cellcolor{lightblue} The uncertainty caused by the producer's mental linguistic model and translation of thought to text. This uncertainty is also caused by the expression of the speaker's feeling that something is not known or certain, often conveyed by means of specific linguistic markers in the text. \end{tabular} \end{table} \noindent Text is an imperfect representation of human thoughts; therefore, encoding thoughts in text by nature produces artifacts. \autoref{fig:teaser} shows that the first type of uncertainty concerns the origin of what is communicated through language. This uncertainty refers to the fact that expressions of language are thoughts, events, and experiences recontextualized through language in the written medium. In the transition from the producer through language to the reader, information may or may not be lost. We refer to this as \textit{semantic uncertainty} on the part of the producer. This type of uncertainty cannot always be detected in the text, as the producer uses their own linguistic model, which can differ from others. There are cases where the semantic uncertainty can be observed from the text, with the speaker using language to express ``\textit{doubt as to the likelihood or truth of what she or he is saying}''~\cite{simaki2020annotating} about their opinions, facts, or ideas. If producers pay attention to this problem, they often indicate it through expressions such as \textit{may}, \textit{not sure}, \textit{might}, \textit{could}. 
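Such lexical markers can be flagged automatically. A minimal sketch follows; the marker list is a small illustrative sample, not a validated hedge lexicon:

```python
import re

# Illustrative (not exhaustive) lexical markers of semantic uncertainty.
HEDGE_MARKERS = ["may", "might", "could", "not sure", "perhaps", "possibly"]

def hedging_markers(text):
    """Return the uncertainty markers found in `text` (case-insensitive)."""
    found = []
    for marker in HEDGE_MARKERS:
        if re.search(r"\b" + re.escape(marker) + r"\b", text, re.IGNORECASE):
            found.append(marker)
    return found

sentences = [
    "I am not sure how to get there.",
    "We might go to the restaurant.",
    "The meeting starts at noon.",
]
flagged = {s: hedging_markers(s) for s in sentences}
```

Such a surface-level scan only catches the cases where the producer explicitly signals doubt; semantic uncertainty without lexical markers remains undetectable, as discussed above.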
\\ \noindent \textbf{Examples}: \textit{``I am not sure how to get there.'', ``We might go to the restaurant.'', ``We have enough time, haven't we?''} In these examples, the producers use specific linguistic items (\textit{not sure}, \textit{might}) and devices (the tag question \emph{haven't we?}) to express their doubt/uncertainty about the particular way to get to a location, the possibility of going to a restaurant, or whether there is enough time to do something. If no such expressions of uncertainty are used, then one may not be able to identify the semantic uncertainty of the text. \noindent \textbf{Challenges}: The main challenge is to produce accurate and precise representations in the pipeline of the refined semantic relations expressed in the texts, as well as to identify the uncertainty in the content and how it affects the overall meaning of the text. \subsection{Comprehension Uncertainty} The second source of uncertainty comes from data capturing and annotation performed on the text. This step can include entity recognition and more general annotations such as sentiment analysis, introduced by \textit{annotators}. This enrichment adds higher-level semantics, but also possible errors or misinterpretations that might change the result of the annotation and introduce uncertainty. \begin{table}[h] \renewcommand{\arraystretch}{1.2} \begin{tabular}{p{8cm} } \cellcolor{darkgreen}\textbf{Comprehension Uncertainty}\\ \cellcolor{lightgreen} The uncertainty caused during the data collection and annotation process due to technical challenges, perceptual differences, and/or other limitations. \end{tabular} \end{table} \noindent People do not respond in the same way to a given text. Hence, one single text input can result in multiple interpretations. This brings us to the second type of uncertainty that we identified, namely, \emph{comprehension uncertainty}. 
This uncertainty can be caused during (i) the data capturing and collection process, and (ii) the data annotation process. In (i), the uncertainty is caused by issues related to the representativeness of various text genres/types/contents/topics and their balance in the data set, the noise that irrelevant content creates in the data set, and various biases in the data extraction and collection process. In (ii), multiple factors contribute to the uncertainty, as human annotators come to the task with different linguistic understandings and perceptions of the text. One single text input can result in multiple interpretations, and hence, different decisions by annotators. Phenomena such as polysemy and ambiguity, and vague or generic annotation guidelines, can further increase the uncertainty in the annotation process. The annotators' different perceptual systems also play an important role in their final decisions. All these factors can lead to a high level of disagreement between the annotators, which creates uncertainty about the reliability of the annotated data and adds more uncertainty to the NLP pipeline.\\ \noindent \textbf{Examples}: Consider a case where the annotator is asked to identify the sense that is addressed or discussed in each sentence, e.g., taste, touch, or smell. The annotator is given the sentence \textit{``The food is too soft for me.''} It depends on the annotator's comprehension of the sentence whether to choose \textit{smell}, \textit{touch} (texture), or \textit{taste} as the sense associated with this sentence. This case concerns the comprehension uncertainty caused by (ii).\\ \noindent \textbf{Challenges}: Apart from the various difficulties that may be faced during the data capturing part, a basic issue that needs to be considered is the design of a solid and well-crafted data collection and annotation protocol, as well as a thorough control (possibly automatic) before finalizing the annotated data set. 
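Annotator disagreement of the kind described above can be quantified directly, for instance with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with invented labels for the sense-annotation example:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[l] * freq_b[l] for l in set(labels_a) | set(labels_b)) / n**2
    return (observed - expected) / (1 - expected)

# Two annotators labeling the sense addressed in ten sentences (invented data).
ann1 = ["taste", "touch", "taste", "smell", "touch",
        "taste", "smell", "touch", "taste", "touch"]
ann2 = ["taste", "taste", "taste", "smell", "touch",
        "touch", "smell", "touch", "taste", "smell"]
kappa = cohens_kappa(ann1, ann2)
```

A low kappa is a direct, reportable signal that the guidelines or the task itself leave too much room for diverging comprehension.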
If this is not adequately addressed, less reliable data is produced, and as a result, more uncertainty is added to the pipeline.\\ \subsection{Encoding Uncertainty} Text encoding in itself can also cause uncertainty. Some complex formats have been designed to encode text in a rich yet faithful fashion, e.g., through the Text Encoding Initiative (TEI)~\cite{ide1995text}. However, very few projects use these sophisticated mechanisms. More often than not, visual text analysis projects resort to simpler encodings that can lead to information loss or generate ambiguity that causes uncertainty. Even the TEI guidelines, although very rich and well documented, cannot avoid unexpected variations in their interpretation. \begin{table}[h!] \renewcommand{\arraystretch}{1.2} \begin{tabular}{p{8cm} } \cellcolor{darkorange}\textbf{Encoding Uncertainty}\\ \cellcolor{lightorange} The uncertainty caused by the data mapping to a data structure, which could lead to a lossy representation of the input. \end{tabular} \end{table} \noindent The TEI guidelines~\cite{sperberg1994guidelines}, designed to define best practices for encoding textual sources with a rich vocabulary of annotations, mention several mechanisms for encoding uncertainty, such as ``\textit{levels of certainty}'' and ``precision'' in the chapter ``Certainty, Precision, and Responsibility,'' and encodings for text segments such as ``\textit{unclear}'' and ``\textit{gap}'' for the transcription of text or speech. All of these textual or linguistic uncertainties are idiosyncratic and intrinsic to our languages, texts, and speech structures. The TEI also allows encoding alternative interpretations for the same text segment (using the $<choice>$ element), as well as marking visible errors ($<sic>$) and possible corrections ($<corr>$). 
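The $<choice>$/$<sic>$/$<corr>$ mechanism can be consumed programmatically. A minimal sketch with Python's standard \texttt{ElementTree}; the fragment is an invented example, and TEI namespaces are omitted for brevity:

```python
import xml.etree.ElementTree as ET

# Invented TEI-like fragment: a visible error with a proposed correction,
# and a segment the transcriber could not read with certainty.
fragment = """
<p>He was born in <choice><sic>Pari</sic><corr>Paris</corr></choice>
   in <unclear>1748</unclear>.</p>
"""

root = ET.fromstring(fragment)

def read(elem, prefer_corr=True):
    """Flatten the fragment, picking <corr> over <sic> and noting <unclear> spans."""
    parts, uncertain = [], []
    def walk(e):
        if e.tag == "choice":
            pick = e.find("corr" if prefer_corr else "sic")
            parts.append(pick.text)
            parts.append(e.tail or "")
            return
        if e.tag == "unclear":
            uncertain.append(e.text)   # keep track of low-confidence spans
        if e.text:
            parts.append(e.text)
        for child in e:
            walk(child)
        if e is not elem and e.tail:
            parts.append(e.tail)
    walk(elem)
    return "".join(parts), uncertain

text, uncertain = read(root)
```

A pipeline that flattens the markup this way at least retains the list of uncertain spans, instead of silently discarding the encoded uncertainty as most systems do.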
These annotations can become very rich and are currently not supported consistently by visualization systems; they are mostly ignored.\\ \noindent The use of external resources to enrich the text is also a source of uncertainty. For example, when named entities are recognized in a text (e.g., a name or a place name), many NLP systems try to \emph{resolve} them, keeping a dictionary of mentioned persons, looking them up in popular databases (e.g., Wikipedia), or trying to find the location of a named place. These enrichments also lead to errors and consequently to uncertainty. For example, the exact address of a person in the 18th century might not be resolved accurately by a modern geocoding service when the street name has changed; the address is then resolved as a city instead of a precise block location.\\ \noindent \textbf{Example}: \textit{One-hot-vector} encoding of the sentence: ``\textit{This is an example sentence that contains the word example.}''\\ \vspace{-1em} \begin{itemize}[nosep] \item this: $<1,0,0,0,0,0,0,0,0> $ \item is: $<0,1,0,0,0,0,0,0,0> $ \item an: $<0,0,1,0,0,0,0,0,0> $ \item example: $<0,0,0,2,0,0,0,0,0> $ \item ... \end{itemize} \noindent Using a \textit{one-hot-vector} (here, word-count) encoding as illustrated above is an efficient way to gather statistical information about the text, but it leads to the loss of the word contexts, as the order is not preserved. Such an encoding cannot be reverted: the original sentence cannot be reconstructed from the vectors. \\ \noindent \textbf{Challenges}: The main challenge for encoding is to avoid loss of information if this information is usable in the processing. However, some loss is unavoidable, since the text data needs to be structured in a form that can be processed. Measuring the information loss during this step could enable efficient communication of encoding uncertainty; this is an interesting area for future research. 
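The count-vector encoding above can be reproduced in a few lines, and the sketch also shows concretely why it cannot be reverted: a reordered (meaningless) sentence maps to exactly the same vector.

```python
from collections import Counter

def count_vector(sentence, vocabulary):
    """Map a sentence to a vector of word counts over a fixed vocabulary."""
    counts = Counter(sentence.lower().rstrip(".").split())
    return [counts[word] for word in vocabulary]

sentence = "This is an example sentence that contains the word example."
vocab = ["this", "is", "an", "example", "sentence",
         "that", "contains", "the", "word"]

vec = count_vector(sentence, vocab)   # [1, 1, 1, 2, 1, 1, 1, 1, 1]

# Word order is lost: a scrambled sentence produces the identical vector.
scrambled = "example the word sentence that contains is an this example."
assert count_vector(scrambled, vocab) == vec
```

Since many distinct sentences share one vector, the mapping is many-to-one, which is precisely the information loss that encoding uncertainty describes.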
\subsection{Transformation Uncertainty} The encoding typically represents the data as embedding vectors in a high-dimensional space. These get transformed through NLP models that either exclusively consider the internal data from the pipeline or additionally rely on external resources, such as language modeling or externalized expert knowledge and feedback. The NLP models introduce transformation uncertainty into the pipeline. Such transformations often rely on design decisions by the \textit{computational linguistics experts}. \begin{table}[h] \renewcommand{\arraystretch}{1.2} \begin{tabular}{p{8cm} } \cellcolor{darkred}\textbf{Transformation Uncertainty}\\ \cellcolor{lightred} Uncertainty that is introduced through computations, for example, through language modeling or injection of expert knowledge and feedback. \end{tabular} \end{table} \noindent After encoding and possible enrichment, the text is often transformed to be easier to analyze. Most search engines transform sentences or documents into high-dimensional vectors using language modeling approaches, such as \textit{word2vec}, \textit{doc2vec}, \textit{BERT}, or \textit{GPT-3}. These vectors allow finding similar documents fast, but they also abstract away the text and turn it into a representation that humans cannot interpret directly. These transformations are complex, can generate artifacts and errors, and lead to uncertainty.\\ \noindent \textbf{Example}: ``\textit{This sentence is about \textbf{cats}}.'' and ``\textit{This sentence is not about \textbf{cats}, but about dogs}.'' If we consider the two sentences above and a model that maps them to different topics~\cite{el2019lingvis}, transformation uncertainty can arise if the word \textit{cats} in the second sentence causes it to be partly considered as belonging to the topic cats, ignoring the negation. 
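The negation problem in this example can be made concrete with the simplest possible transformation, a bag-of-words vector compared by cosine similarity (a deliberately crude stand-in for the embedding models named above; richer models mitigate but do not eliminate the effect):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words counts, ignoring case and basic punctuation."""
    return Counter(text.lower().replace(".", "").replace(",", "").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) | set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm

s1 = bow("This sentence is about cats.")
s2 = bow("This sentence is not about cats, but about dogs.")

# High similarity, although the two sentences make opposite claims:
sim = cosine(s1, s2)
```

Under this transformation the two sentences land close together, so a topic model built on such vectors would partly assign the second sentence to the topic \textit{cats}, exactly the artifact described above.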
This example is trivial, but more nuanced issues arise in many NLP pipelines.\\ \noindent \textbf{Challenges}: Choosing the appropriate algorithm to model and transform text is crucial for visual text analysis. However, most linguistic models are not fine-tuned to the exact problems they are applied to. Hence, a major challenge is to configure the appropriate processing steps for a given text input and to allow for the capturing of uncertainties in each processing step. \subsection{Representation Uncertainty} The output of the transformation step needs to be presented to a human by means of visualization. This is a potential source of uncertainty. For example, a 2D visualization of high-dimensional vectors that are trained to represent documents is prone to dimensionality reduction artifacts, namely, distortions. This can cause uncertainty about the actual distances between vectors (documents) in the original space. These uncertainties are usually introduced by the \textit{visualization experts}. \begin{table}[h] \renewcommand{\arraystretch}{1.2} \begin{tabular}{p{8cm} } \cellcolor{darkpink}\textbf{Representation Uncertainty}\\ \cellcolor{lightpink} In order to visualize the high-dimensional vectors resulting from the transformation step of the pipeline, one needs to use projection techniques to create a 2D/3D representation of these vectors. Such a process generates artifacts and errors in the distances, which cause representation uncertainty. \end{tabular} \end{table} \noindent The output of the transformation step is often a high-dimensional vector that embeds certain information about words, sentences, or documents within itself. Let us assume that in our example pipeline, the transformation step outputs vectors that represent documents. The distances between the document vectors capture the relatedness of the corresponding documents. 
Therefore, visual observation of the document vectors can be a valuable tool for topic, genre, or document analysis in general. In order to visualize such vectors, one has to reduce the dimensionality of the vectors to two or three dimensions using projection techniques such as t-SNE \cite{tsne} or UMAP \cite{umap}. Projecting high-dimensional data to lower dimensions is inherently lossy and can cause distortions. Hence, the resulting visualization of the low-dimensional projection may be unfaithful to the original distances between the document vectors. Therefore, the 2D/3D representation may carry errors and artifacts that result in \textit{representation uncertainty}.\\ In general, when two vectors are close, their corresponding documents are thought to be similar. However, the low-dimensional representation may include seemingly coherent clusters that do not exist in the original data (false neighbors) or, conversely, miss existing clusters of the original vectors due to topological artifacts of the projection (missing neighbors). \\ A visualization should at least inform the users of these possible artifacts, preferably indicate the areas where they occur and, if possible, allow resolving them. The lack of communication regarding representation uncertainty may hinder the trust of the user. Currently, few visualization systems inform users of possible artifacts, and almost none provide techniques to overcome them, especially for many data points. Addressing that problem is essential for text visualization in particular, but also for multidimensional visualization in general. 
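False and missing neighbors can be quantified by comparing k-nearest-neighbor sets before and after projection. A minimal sketch follows; the document vectors are random stand-ins, and the "projection" (dropping dimensions) is a deliberately naive placeholder for t-SNE or UMAP:

```python
import numpy as np

def neighbor_sets(points, k):
    """Index sets of the k nearest Euclidean neighbors of every point."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude each point from its own neighborhood
    return [set(np.argsort(row)[:k]) for row in d]

def neighborhood_artifacts(high_d, low_d, k=10):
    """Per-point counts of false neighbors (new in 2D) and missing neighbors (lost in 2D)."""
    hi, lo = neighbor_sets(high_d, k), neighbor_sets(low_d, k)
    false_n = [len(l - h) for h, l in zip(hi, lo)]
    missing_n = [len(h - l) for h, l in zip(hi, lo)]
    return false_n, missing_n

rng = np.random.default_rng(1)
docs_50d = rng.normal(size=(200, 50))    # stand-in document vectors
proj_2d = docs_50d[:, :2]                # naive "projection": keep 2 of 50 dimensions

false_n, missing_n = neighborhood_artifacts(docs_50d, proj_2d)
# Nonzero counts flag points whose 2D neighborhood misrepresents the original distances,
# and could drive, e.g., a per-point color overlay in the scatterplot.
```

Per-point counts like these are exactly the kind of signal a visualization could use to indicate the areas where projection artifacts occur.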
Aupetit~\cite{topo_aupetit,interactive_JDF} has proposed a few techniques for small amounts of data and Martins~\cite{MARTINS201426} for larger amounts, but they are targeted towards expert users.\\ \noindent \textbf{Example}: The artifacts of false neighbors and missing neighbors in the 2D representation can be considered as examples of contributing factors to the representation uncertainty.\\ \noindent \textbf{Challenges}: The most significant challenge is to resolve errors and artifacts generated by this part of the pipeline. However, this is not always achievable due to the lossy nature of the dimensionality reduction methods. Therefore, another interesting task is quantifying errors and artifacts and visualizing them in the representation step. The key here is to create an intuitive yet expressive visualization to communicate and thereby reduce the representation uncertainty. \subsection{Interpretation Uncertainty} At the last stage of the pipeline, the receiver or \textit{analyst} inspecting the visualization interprets the data through the lens of the visual design; interpretation uncertainty is due to the mindset of the \textit{analyst} (user). \begin{table}[ht] \renewcommand{\arraystretch}{1.2} \begin{tabular}{p{8cm} } \cellcolor{darkpurple}\textbf{Interpretation Uncertainty}\\ \cellcolor{lightpurple} The uncertainty caused by the analyst's interpretation of the text visualization at the last stage of the pipeline. \\ \end{tabular} \end{table} \noindent The representation stage of the pipeline provides the analyst with a visualization of the input text. The analyst has their own linguistic model that they use to interpret the text data. This model is unique to each user (analyst). Besides this, the user has a personal interpretation of the observed visualization that can differ from other humans' interpretations. 
This can be due to different interpretations of long and short distances, and also of what counts as scattered and what counts as a coherent cluster. Similar uncertainties can occur when the analyst interprets the colors in the visualization. Both the linguistic model and the visual interpretation of the analyst contribute to the uncertainty generated at the last stage of the pipeline.\\ \noindent \textbf{Examples}: An analyst may interpret a set of points associated with text as a \textit{coherent cluster} or, conversely, interpret it as a \textit{scattered point set}.\\ \noindent \textbf{Challenges}: Text data is ambiguous by nature, and therefore there is no single canonical data representation or universally accepted interpretation \cite{Poesio2005Reliability,Pavlick2019Inherent}. As a result, the human who reads the visualization and the multiple representations of the text can misinterpret them. A challenge for the user of the text visualization is to come up with a suitable qualitative or quantitative metric supporting the interpretation of the visualization and thereby reach a more reliable interpretation. Furthermore, the questions that the analyst seeks to answer using the text visualization should be designed such that they can be answered mostly by quantitative measures and numbers, to minimize the effects of personal interpretations. \section{Discussion \& Conclusion} In this paper, we presented a detailed description of the uncertainty surrounding each step of the visual text analysis pipeline. We did, however, restrict ourselves to the uncertainty of each step in isolation, and the impact of each of the identified uncertainties on later processing steps remains unclear. The steps of the text visualization pipeline can be more intertwined in reality; for instance, the visualization expert and the analyst may revisit the data and output of previous steps of the pipeline. 
Hence, one of the main challenges, and an opportunity for future work, is to characterize the interaction between different steps of the pipeline and the propagation of uncertainty. One plausible assumption is that the propagation \textit{amplifies} previous uncertainties; however, it can also \textit{nullify} errors and artifacts, hiding possible issues that could lead to harmful rationalizations when interpreting the final results~\cite{SeEl2022Beware}. \\ As previous work suggests \cite{chen2020uncertainty}, the nature of uncertainty can be characterized as \textit{aleatoric} or \textit{epistemic}, and uncertainty in each step of the pipeline can have either of these natures. The first challenge is to identify the nature of the uncertainty in each step. If we can identify the sources of uncertainty and their effects throughout the pipeline, the next major challenge is to (visually) communicate their impact. Specifically, exposing all inner workings of the analysis steps and visualizing the errors per step directly most likely leads to visual and cognitive overload. Possible solutions might be to design in-situ and on-demand explanations through \textit{co-adaptive analytics}~\cite{sperrle2021co} by learning from user interactions during analysis and teaching users the potential impacts of uncertainty~\cite{sperrle2020learning}. \acknowledgments{ We would like to thank the organizers of Dagstuhl Seminar 22191 \emph{``Visual Text Analytics''}\footnote{\url{https://www.dagstuhl.de/en/program/calendar/semhp/?semnr=22191}}. This work has benefited substantially from the discussions held during this seminar.} \bibliographystyle{abbrv-doi}
\section{} Studying photometric time series in the frequency domain can serve as a means of detecting rotational modulations (e.g. \citealt{reinhold:2013, neilsen:2013}), measuring asteroseismic modes (e.g. \citealt{chaplin:2014}) and even detecting short-period transiting planets \citep{sanchis:2013}. To our knowledge, there is no prior archive of \emph{Kepler}\ power spectra, and so we present one here to aid the community in searching for such effects. We downloaded the long-cadence (LC) DR25 \emph{Kepler}\ PDC photometric time series for every KIC appearing in the \citet{mathur:2017} catalog (196,845), of which we were able to acquire the photometry for 196,791 from MAST. Photometric points with an error flag other than zero were discarded, as were any data occurring in Q0. Missing cadences were then filled using a spline interpolation to form a continuous time series for each star. We then removed 3\,$\sigma$ outliers (where $\sigma$ is defined as 1.4286 multiplied by the median absolute deviation of the spline fit residuals) against an 11-point moving median and again replaced missing points using spline interpolation. Next, on each quarter independently, we computed a periodogram as the square magnitude of the discrete Fourier transform of the photometric data (without any weighting applied). To increase the sensitivity of the periodograms, at the expense of some resolution, we used Welch's method \citep{smith:1999}, where the partitions were set to 1000 points in length, overlapping by 500 points each. We further applied a smoothing window to reduce ripple in the frequency domain, in our case the Nuttall window, and finally took the logarithm of the resulting powers. Each of these 2,594,616 Fourier transforms is made available at Columbia's Academic Commons \dataset[(DOI: 10.7916/D8RR3FCW)]{https://doi.org/10.7916/D8RR3FCW}. The maximum frequency considered corresponds to a periodicity of twice the LC cadence, or 24.47\,cycles per day. 
The lowest frequency was 0.049 cycles per day (corresponding to a periodicity of 20.4\,days), and this also defined the frequency step-size used, giving a total of 500 points per periodogram evenly spaced in frequency. In order to investigate the possibility of global instrumental artifacts, we elected to combine periodograms together. To do this, we first normalize the spectra to a common scaling, owing to the very large dynamic range in the observed powers. The normalization was chosen such that the median power from 1.5 to 1.0 hours, the highest frequency range, was set to unity. We then computed the median and upper/lower 1\,$\sigma$ quantiles of the combined periodograms at each frequency. Channel-averaged power spectra are observed to universally peak at the lowest frequencies (see Figure~\ref{fig:1}), as expected due to long-term trends dominating. Moving to higher frequencies, the power decreases for all channels until around 3.5\,hours, after which a slight and broad excess is observed, centered at around 2\,hours. Close to this broad and very common bump, a small but sharp excess power is also observed for the majority of the channels at $83.0 \pm 2.5$\,minutes (2.8 cadences), leading to a $\sim2$\% increase in the median power. These two features may be related: our best estimate for the peak of the broad bump is 136\,mins, beginning at 190\,mins, so if symmetric in periodicity the feature would end at 82\,minutes, precisely where the sharp feature is observed. The 82\,minute feature corresponds to a frequency of $\sim 200$\,$\mu$Hz, which is approximately the peak asteroseismic frequency of a $\log g \sim 3$ evolved star \citep{brown:1991} and thus may represent a potential source of contamination for \emph{Kepler}\ asteroseismology. We also note that quarter-averaged periodograms tend to be self-similar every four quarters (i.e. one full rotation).
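As an illustration, the MAD-based clipping step described above can be sketched in a few lines of standard-library Python. This is our own minimal sketch, not the pipeline code: the function names are hypothetical, and for brevity it only flags outliers, whereas the actual pipeline also re-interpolates over the clipped points with a spline.

```python
import statistics

def moving_median(x, window=11):
    """Median filter with edge truncation (11-point window, as in the text)."""
    half = window // 2
    return [statistics.median(x[max(0, i - half):i + half + 1])
            for i in range(len(x))]

def clip_outliers(flux, window=11, nsigma=3.0):
    """Keep points within nsigma of the moving median, where sigma is the
    robust estimate 1.4826 * MAD of the residuals."""
    resid = [f - m for f, m in zip(flux, moving_median(flux, window))]
    med = statistics.median(resid)
    sigma = 1.4826 * statistics.median(abs(r - med) for r in resid)
    return [abs(r - med) <= nsigma * sigma for r in resid]
```

A single strong spike against an otherwise flat light curve is flagged, while all other points survive the cut.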
\begin{figure*} \begin{center} \includegraphics[width=16.0cm,angle=0,clip=true]{figure.pdf} \caption{\emph{ Black numbered panels show each of \emph{Kepler}'s 84 channels with a channel-averaged periodogram (across all quarters) depicted in each box. The continuous lines in each periodogram represent the $\pm 2$\,$\sigma$ quantiles around the median, in 0.5\,$\sigma$ steps. Corner panels, numbered ``Q'', depict the quarter-averaged periodograms. Full versions of each figure and the data behind each are made available at \dataset[(DOI: 10.7916/D8RR3FCW)]{https://doi.org/10.7916/D8RR3FCW}. }} \label{fig:1} \end{center} \end{figure*} \acknowledgments DMK is supported by the Alfred P. Sloan Foundation.
\section{Mesoscopic Model } Let us arrange $N$ base pairs (\textit{bps}) on a circle with radius $R$ such that $2 \pi R / N \sim 3.4$ \AA, as depicted in Fig.~\ref{fig:1}. When all \textit{bps} centers of mass lie on the circumference, which represents the molecule backbone, the system is in the ground state. Let $\textbf{r}_i$ be the inter-strand fluctuation of the \textit{i}-th base pair ($i=1,\dots,N$) with respect to the ground state. Hence, we define the vector $\textbf{t}_i$: \begin{eqnarray} \bigl({t }_i \bigr)_{x} =\, |\textbf{r}_i| \cos\phi_i \cos\theta_i ; \, \, \, \, \bigl({t }_i\bigr)_{y} =\,(R + |\textbf{r}_i|\sin\theta_i) \cos\phi_i ; \, \, \, \, \bigl({t }_i\bigr)_{z} =\,(R + |\textbf{r}_i|) \sin\phi_i \,. \label{eq:004} \end{eqnarray} The ground state is recovered once all \textit{bps} fluctuations vanish; hence, $|\textbf{t}_i| =\,R ,\,\, \forall i$. The polar angle, $\theta_i =\, (i - 1) 2\pi / h + \theta_S$, measures the twisting of the $i-$ \emph{bp} around the molecule backbone, with $h=\, N / Tw$ being the number of \textit{bps} per helix turn and $Tw$ the twist number accounting for the coiling of the individual strands around the helical axis \cite{calla}. The azimuthal angle, $\phi_i =\, (i-1){{2 \pi} / N} + \phi_S $, defines the bending between adjacent \emph{bps} along the stack. As the polynucleotide chain has a direction due to the chemistry of the intra-strand bonds, a distribution of values for the twist ($\theta_S$) and the bending ($\phi_S $) of the first \emph{bp} in the sequence is weighted in the computation. The fluctuational orbits defined by $i=\,1$ and $i=\,N+1$ overlap, consistently with the closure condition holding for the DNA ring. \begin{figure} \includegraphics[height=7.5cm,width=12.5cm,angle=-90]{f1.ps} \caption{\label{fig:1}(Color online) (a) Helicoidal model for circular DNA with bending planes. The blue filled circles are the centers of mass of the base pairs stacked along the molecule backbone with a rise distance of $3.4$ \AA.
In the ground state all \emph{bps} lie on the circumference with radius $R$. The red-shaded areas are spanned by the fluctuational vectors whose amplitude is measured by $|r_i|$ for the $i-$ \emph{bp}. The azimuthal angle $\phi_i$ measures the bending of the $i-$ \emph{bp} plane with respect to the $(x',y')$ plane, $\textbf{x'}$ being normal to the sheet plane. (b) Local reference system for the $i-$ \emph{bp}. $\theta_i$ is the twist around the molecule backbone. The z-axis is tangent to the ground state circle.} \end{figure} \section{Space-Time Mapping} My previous path integral analyses of DNA \cite{io09,io10,io11a,io11b,io12,io13a,io13b,io14a,io14b} were based on the ansatz that the \emph{bps} displacements could be treated as one-dimensional paths $x(\tau_i)$, $|\textbf{r}_i| \rightarrow \, x(\tau_i)$, with the imaginary time $\tau_i \in [0, \beta]$ and $\beta$ being the inverse temperature. Here I introduce a more general (albeit more CPU-time consuming) space-time mapping technique, which does not pin a base pair to a specific $\tau_i$, thus avoiding the somewhat arbitrary partition of the $\beta$ length into $N$ intervals: \begin{eqnarray} |\textbf{r}_i| \rightarrow x_i(\tau) ; \, \, \, \, |\textbf{t}_i| \rightarrow \eta_i(\tau) ; \, \, \, \, \, \tau \in [0 \,, \beta ] \,. \label{eq:005} \end{eqnarray} The paths $x_i(\tau)$ are expanded in Fourier series, the constraint $x_{i=\,1}(\tau)=\,x_{i=\,N+1}(\tau)$ for a DNA loop is implemented and, for the model in Fig.~\ref{fig:1}, the \textit{bps} fluctuations in the path integral formalism are given by: \begin{eqnarray} & & \eta_i(\tau)=\, \bigl[R^2 + x_i(\tau)^2 + 2R |x_i(\tau)|f(\theta_i,\phi_i) \bigr]^{1/2} \, \nonumber \\ & &f(\theta_i,\phi_i)=\,\sin\theta_i \cos^2 \phi_i + \sin^2 \phi_i \, . \, \label{eq:006} \end{eqnarray} This new mapping technique leaves $\tau$ as an integration variable and permits setting a realistic rise distance between adjacent nucleotides along the stack.
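As a concrete numerical illustration, the fluctuation amplitude defined by the mapping above, together with the twist and bending angles of the mesoscopic model, can be sketched as follows. This is a minimal stand-alone sketch (function names are ours, not part of the original computation); the parameter values in the check below are arbitrary.

```python
import math

def twist_bend_angles(i, N, h, theta_S=0.0, phi_S=0.0):
    """Twist theta_i and bending phi_i for the i-th base pair (i = 1..N)."""
    theta = (i - 1) * 2.0 * math.pi / h + theta_S
    phi = (i - 1) * 2.0 * math.pi / N + phi_S
    return theta, phi

def eta(x, theta, phi, R):
    """Fluctuation amplitude: sqrt(R^2 + x^2 + 2 R |x| f(theta, phi)),
    with f = sin(theta) cos(phi)^2 + sin(phi)^2."""
    f = math.sin(theta) * math.cos(phi) ** 2 + math.sin(phi) ** 2
    return math.sqrt(R * R + x * x + 2.0 * R * abs(x) * f)
```

In the ground state ($x = 0$) the amplitude reduces to $R$ for any angles, consistent with all \textit{bps} lying on the circle.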
Written in terms of the $\eta_i(\tau)$, the Hamiltonian contains: \textit{i)} a Morse potential, $V_M[\eta_i(\tau)]$, describing the effective hydrogen bond interaction between \textit{bps} mates; \textit{ii)} a solvent term, $V_{sol}[\eta_i(\tau)]$, accounting for hydrogen bond recombination with the counterions dissolved in water; \textit{iii)} a (two-particle) stacking potential, $V_{S}\bigl[ \eta _i(\tau), \eta _{i-1}(\tau) \bigr]$, between adjacent bases along the strand. The potential parameters are widely discussed in Refs.~\cite{io11a,io11b,io12}. Accordingly, the classical action $A[\eta_{i} ]$ is a $\tau$-integral of the base pair \textit{kinetic plus potential} energies \cite{io14a} and the partition function $Z_N$ for a sequence with $N$ \textit{bps} reads: \begin{eqnarray} & &Z_N =\,\prod_{i=1}^{N} \oint {D}x_{i} \sum_{\theta_S, \phi_S} \exp\Bigl\{- \beta A[\eta _i] \Bigr\}\,\, \nonumber \\ & &A[\eta_{i} ]=\, \sum_{i=1}^{N} \int_{0}^{\beta} d\tau \biggl[ \frac{\mu}{2} \dot{\eta }_i^2(\tau) + V_M[\eta_i(\tau)] + \,V_{sol}[\eta_i(\tau)] + V_{S}\bigl[ \eta _i(\tau), \eta _{i-1}(\tau) \bigr] \biggr] \, , \, \label{eq:7} \end{eqnarray} where $\mu=\,300 \, amu$ is the \textit{bp} reduced mass. The measure ${D}x_{i}$ is a multiple integral over the path Fourier coefficients, while $\oint$ indicates that the paths $x_i(\tau)$ are closed trajectories. The integration over the two-particle potential greatly increases the computational time with respect to the previous method \cite{io14b}, but it offers a more realistic model for the base stacking in sequences of any length. \section{Free Energy of Minicircles} There has been widespread interest in the properties of small sequences of DNA following measurements of high cyclization probabilities, which have pointed to an intrinsic flexibility for fragments of $\sim 100$ \textit{bps} or less \cite{cloutier}.
It has been suggested that the formation of local disruptions in the helix of DNA loops may be the mechanism which permits the release of the torsional stress and energetically favors the stability of bent molecules. Certainly, the breaking of some \textit{bps} and the opening of fluctuational bubbles change the average pitch of the helix, that is, the number of \textit{bps} per helix turn. \begin{figure} \includegraphics[height=7.0cm,width=12.5cm,angle=-90]{f2.ps} \caption{\label{fig:2}(Color online) Free energy per base pair for three loops with $N=\,18, \,66, \,86$. The free energies are computed, at room temperature, as a function of the number of \textit{bps} per helix turn.} \end{figure} In the theoretical framework synthesized by Eq.~(\ref{eq:7}), I compute the free energy ($F=\,-\beta^{-1}\ln Z_N$) of three heterogeneous loops with different lengths, $N=\,18, \,66, \,86$, but similar content of AT-\textit{bps} and GC-\textit{bps}, $\sim 50 \%$ each. The rise distance is pinned to the experimental value. The $66-$ and $86-$ \textit{bps} loops have been prepared as described in Ref.~\cite{volo08}, whereas the $N=\,18$ \textit{bps} loop is a toy sequence introduced here for comparison. For each loop, we simulate a broad set of values for the twist $Tw$ and, for any $Tw$ (i.e., $h$), $Z_N$ is calculated by summing over an ensemble of fluctuational paths representing a large number of molecule configurations, about $10^7$ paths for each base pair in Eq.~(\ref{eq:7}). Hence, the free energy is obtained as a function of the helical repeat $h$. The computational time for a simulation, e.g. for the $N=\,66$ sequence, is about eight days on a workstation (Intel Xeon E5-1620 v2, 3.7GHz processor). The room temperature results for $F/N$ are plotted in Fig.~\ref{fig:2}.
While the shortest loop shows a free energy minimum also at $h \sim 7$, the minima for all loops are remarkably found for $h$ in the range $\sim (10 - 12)$, in line with the well-known values of the helical pitch in DNA sequences \cite{calla}. Even more interestingly, the $66-$ and $86-$ minicircles also show free energy minima at larger $h$ due to a spontaneous unwinding of the complementary strands. This $Tw$ reduction is consistent with the observations \cite{volo08} of \textit{bps} disruptions occurring in small loops as a consequence of the strong rotational deformations of the helix. The bending stress decreases by increasing the radius of the loop. Then, for an ensemble of molecules with $N$ \textit{bps}, the free energy minimization evaluates the thermodynamically most stable values of the helical repeat, that is, an average property of the molecule ensemble. Furthermore, the path integral method can also determine the probabilities for the formation of fluctuational bubbles and select those base pairs along the DNA sequence for which hydrogen bond breaking is more likely to occur. By tuning the system temperature and the input parameters which control the counterion concentration in the solvent, we thus obtain a general and reliable computational scheme for the modeling of heterogeneous DNA loops in various ambient conditions.
\section*{Abstract} Public health surveillance systems often fail to detect emerging infectious diseases, particularly in resource-limited settings. By integrating relevant clinical and internet-source data, we can close critical gaps in coverage and accelerate outbreak detection. Here, we present a multivariate algorithm that uses freely available online data to provide early warning of emerging influenza epidemics in the US. We evaluated 240 candidate predictors and found that the most predictive combination does \textit{not} include surveillance or electronic health records data, but instead consists of eight Google search and Wikipedia pageview time series reflecting changing levels of interest in influenza-related topics. In cross-validation on 2010-2016 data, this algorithm sounds alarms an average of 16.4 weeks prior to influenza activity reaching the Centers for Disease Control and Prevention (CDC) threshold for declaring the start of the season. In an out-of-sample test on data from the rapidly-emerging fall wave of the 2009 H1N1 pandemic, it recognized the threat five weeks in advance of this surveillance threshold. Simpler algorithms, including fixed week-of-the-year triggers, lag the optimized alarms by only a few weeks when detecting seasonal influenza, but fail to provide early warning in the 2009 pandemic scenario. This demonstrates a robust method for designing next generation outbreak detection algorithms. By combining scan statistics with machine learning, it identifies tractable combinations of data sources (from among thousands of candidates) that can provide early warning of emerging infectious disease threats worldwide. \section*{Author summary} Early detection of infectious disease outbreaks enables targeted interventions that prevent transmission and mitigate disease burden. However, we lack rapid surveillance systems for many global threats.
This paper introduces a hierarchical statistical method for evaluating diverse data sources and incorporating them into powerful outbreak detection algorithms. We apply the method to design a next generation early warning system for influenza epidemics in the US. By monitoring online Google and Wikipedia search activity for information relating to influenza symptoms and treatment, our algorithm can detect the emergence of seasonal influenza months before the official start of the season. \linenumbers \section*{Introduction} Emerging and re-emerging human viruses threaten global health and security. Early warning is vital to preventing and containing outbreaks. However, viruses often emerge unexpectedly in populations that lack the resources to detect and control their spread. The silent Mexican origin of the 2009 pandemic \cite{Mexico2009, Fraser2009Pandemic}, the unprecedented 2014-2015 expansion of Ebola out of Guinea \cite{Baize2014Ebola}, and the rapid spread of Zika throughout the Americas in 2016-2017 \cite{Zhang2017zika} highlighted critical shortcomings and the potential for life-saving improvements in global disease surveillance. Traditionally, public health agencies have relied on slow, sparse and biased data extracted during local outbreak responses or collected via voluntary reporting by healthcare providers. The 21st-century explosion of health-related internet data--for example, disease-related Google searches, Tweets, and Wikipedia term visits--and the proliferation of pathogen molecular data and electronic health records have introduced a diversity of real-time, high-dimensional, and inexpensive data sources that may ultimately be integrated into or even replace traditional surveillance systems. In building `nextgen' surveillance systems, we face the interdependent challenges of identifying combinations of data sources that can improve early warning and developing powerful statistical methods to fully exploit them.
Engineers have designed anomaly detection methods for statistical process control (SPC)---including the Shewhart \cite{Shewhart1931}, cumulative sum (CUSUM) \cite{page1954cusum, Lorden1971cusum}, and exponential weighted moving average (EWMA) methods \cite{Roberts1959ewma}---to achieve real-time detection of small but meaningful deviations in manufacturing processes from single or multiple input data streams. When the focal process is \textit{in-control}, these methods assume that the inputs are independent and identically distributed random variables with distributions that can be estimated from historical data. Anomalous events can thus be detected by scanning real-time data for gross deviations from these baseline distributions. Biosurveillance systems similarly seek to detect changes in the incidence of an event (e.g., infections) as early and accurately as possible, often based on case \textit{count} data. By adjusting SPC methods to account for autocorrelations, researchers have developed algorithms that can detect the emergence or re-emergence of infectious diseases \cite{Fricker2013}. Such methods have been applied to influenza \cite{Cowling2006flu, Griffin2009flu, Boyle2011influenza, Pervaiz2012flu, Mathes2017}, Ross River disease \cite{Watkins2008ross, Pelecanos2010ross}, hand-foot-and-mouth disease \cite{Li2011handfoot, Zhang2013handfoot, Lai2017handfoot}, respiratory tract infections \cite{wieland2007respiratory, Spanos2012respiratory, Mathes2017}, meningitis \cite{karami2017meningitis}, and tuberculosis outbreaks \cite{Kammerer2013tuberculosis}. These models exploit a variety of public health data sources, including syndromic surveillance, case count and laboratory test data. While they achieve high sensitivity and precision, alarms typically sound once an outbreak has begun to grow exponentially and thus do not provide ample early warning. 
For annual influenza, CUSUM-derived detection methods applied to Google Flu Trends data sound alarms an average of two weeks prior to the official start of the influenza season \cite{Pervaiz2012flu}. The Early Aberration Reporting System (EARS) \cite{Hutwagner2003ears} was launched by the CDC in the 2000s to provide national, state, and local health departments with several CUSUM-derived methods to facilitate syndromic surveillance. The BioSense surveillance system \cite{Bradley2005biosense} implements methods derived from EARS to achieve early detection of possible biological terrorism attacks and other events of public health concern on a national level. Two other surveillance systems, ESSENCE and NYCDOHMH \cite{Bravata2004ESSENCE, Shmueli2010NYCDOHMH}, maintained by the United States Department of Defense and the New York City Department of Health and Mental Hygiene, respectively, implement EWMA-based methods for outbreak monitoring. Most of these systems are univariate (i.e., they analyze a single input data source) and consider only public health surveillance data collected during local outbreak responses or via voluntary reporting by healthcare providers. The time lag between infection and reporting can be days to weeks. Thus, the earliest warning possible for an emerging outbreak may be well after cases begin rising. Over the last decade, public health agencies and researchers have begun to explore a variety of `nextgen' disease-related data sources that might improve the spatiotemporal resolution of surveillance. Electronic health records (EHR) systems like athenahealth can provide near real-time access to millions of patient records, nationally, and have been shown to correlate strongly with influenza activity \cite{Santillana2016athena}. Participatory surveillance systems like Flu Near You, which asks volunteers to submit brief weekly health reports, also provide a near real-time view of ILI activity \cite{Chunara2013fny}.
However, such data sources may be geographically, demographically or socioeconomically biased, depending on the profiles of participating healthcare facilities or volunteers \cite{Brownstein2017FNYbias}. Internet-source data such as Google Trends \cite{Ginsberg2009gft}, Wikipedia page views \cite{McIver2014wiki, Hickmann2015wiki}, and Twitter feeds \cite{Broniatowski2013twitter} exhibit correlations with disease prevalence, and have been harnessed for seasonal influenza nowcasting and forecasting. However, they have not yet been fully evaluated for early outbreak detection, and may be sensitive to sociological perturbations, including media events and behavioral contagion \cite{Chan2011, Chunara2012}. Here, we introduce a hierarchical method for building early and accurate outbreak warning systems that couples a multivariate version of the EWMA model with a forward feature selection algorithm (MEWMA-FFS). The method can evaluate thousands of data sources and identify small combinations that maximize the timeliness and sensitivity of alarms while achieving a given level of precision. It can be applied to any infectious disease threat, provided sufficient data exist for the candidate predictors. For novel threats, the candidates may include a wide variety of proxies that are expected to produce dynamics resembling the focal threat (e.g., data on closely related pathogens, other geographic regions, or even social responses to non-disease events). To demonstrate the approach, we design a multivariate early warning system for seasonal influenza using eight years of historical data (2009-2017) and hundreds of predictors, including traditional surveillance, internet-source, and EHR data. The optimal combination of input data includes six Google and two Wikipedia time series reflecting online searches for information relating to the symptoms, biology and treatment of influenza.
By monitoring these data, the algorithm is expected to detect the emergence of seasonal influenza an average of $16.4$ weeks (with a standard deviation of $3.3$ weeks) in advance of the Centers for Disease Control and Prevention (CDC) threshold for the onset of the season. In out-of-sample validation, the model detected the fall wave of the 2009 H1N1 pandemic and the 2016-2017 influenza season five and fourteen weeks prior to this threshold, respectively. \section*{Materials and methods} \subsection*{Early detection model} The MEWMA model is derived from a method described in \cite{Joner2008mewma}. We define one time series as the \textit{gold standard}, and one value in the range of the gold standard as the event threshold. Events (outbreaks) correspond to periods when observations in the gold standard cross and remain above the event threshold. We project the timing of events in the gold standard time series onto the candidate time series (predictors). We assume that the data falling outside the event periods follow a multivariate normal distribution $\boldsymbol{F}$ (the null distribution) with a mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$ that can be estimated from baseline (non-outbreak) data with equations [\ref{mu}] and [\ref{cov}]: \begin{equation} \boldsymbol{\mu} = \mathbb{E}(\boldsymbol{X_T} | y_{\boldsymbol{T}} < \varepsilon) \label{mu} \end{equation} \begin{equation} \boldsymbol{\Sigma} = \text{Cov}(\boldsymbol{X_T} | y_{\boldsymbol{T}} < \varepsilon) \label{cov} \end{equation} Here, $\varepsilon$ is the value of the threshold defining outbreak events. $\boldsymbol{T}$ are all time points at which observations in the gold standard $y$ are below the event threshold $\varepsilon$. $\boldsymbol{X}_{\boldsymbol{T}}$ is a matrix of observations from the candidate time series at time points $\boldsymbol{T}$.
At each time $t$, MEWMA calculates \begin{equation} \textbf{S}_t= \begin{cases} \max[\textbf{0}, \lambda(\textbf{X}_t - \boldsymbol{\mu})+(1-\lambda)\textbf{S}_{t-1}], & \text{for}\ t>0 \\ \textbf{0}, & \text{for}\ t=0 \end{cases} \end{equation} where $\boldsymbol{X}_t$ is a vector of current observations from the candidate time series; $\lambda$ is the smoothing parameter $(0 < \lambda < 1)$; $\boldsymbol{S}_t$ is a weighted average of the current observation, standardized around $\boldsymbol{\mu}$, and the previous $\boldsymbol{S}$ statistic. The multivariate EWMA test statistic $\boldsymbol{E}_t$ is then calculated as \begin{equation} \boldsymbol{E}_t = \boldsymbol{S}_t^T \boldsymbol{\Sigma}^{-1}_{\boldsymbol{S}_{\infty}} \boldsymbol{S}_t \end{equation} \begin{equation} \boldsymbol{\Sigma}_{\boldsymbol{S}_{\infty}} = \frac{\lambda}{2 - \lambda} \boldsymbol{\Sigma} \label{covS} \end{equation} The MEWMA signals whenever $\boldsymbol{E}_t$ exceeds a predetermined threshold $h$; that is, whenever the observation at time $t$ deviates significantly from the baseline distribution. \subsection*{Performance measurement} Given that our objective is to detect emerging outbreaks early and accurately, we evaluate data based on the timing of alarms relative to the start of events. Only alarms within detection windows are considered true positive alarms. Specifically, we calculate the performance of a candidate system (combination of predictors) as \begin{equation} P(\boldsymbol{X}, \lambda, h; y) = \frac{1}{N} \sum_{n=1}^{N}(1 - \frac{\Delta T_n} {T_{w}}), \end{equation} where $N$ is the total number of events in the gold standard, $T_{w}$ is the length of the detection window (e.g., sixteen weeks surrounding the start of an event) and $\Delta T_n$ is the time between the start of the detection window and the first alarm for event $n$. If no alarm sounds during the detection window for event $n$, then $\Delta T_n = T_w$. Performance values range from zero to one.
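As a concrete illustration, the MEWMA recursion and the timeliness score $P$ above can be sketched as follows. This is our own simplified sketch, not the study code: for brevity it assumes a diagonal baseline covariance (independent predictors), whereas the model uses the full matrix $\boldsymbol{\Sigma}$.

```python
def mewma(X, mu, var, lam=0.3, h=10.0):
    """One-sided MEWMA with a diagonal covariance (simplifying assumption).
    S_t = max(0, lam*(x_t - mu) + (1 - lam)*S_{t-1}) elementwise;
    E_t = S_t' Sigma_Sinf^{-1} S_t with Sigma_Sinf = lam/(2 - lam) * Sigma.
    Returns a list of (E_t, alarm) pairs."""
    d = len(mu)
    S = [0.0] * d
    scale = lam / (2.0 - lam)
    out = []
    for x in X:
        S = [max(0.0, lam * (x[j] - mu[j]) + (1.0 - lam) * S[j])
             for j in range(d)]
        E = sum(S[j] ** 2 / (scale * var[j]) for j in range(d))
        out.append((E, E > h))
    return out

def performance(alarm_times, window_starts, T_w):
    """P = (1/N) * sum_n (1 - dT_n / T_w); dT_n is the delay from the start
    of event n's detection window to the first alarm in it (T_w if none)."""
    total = 0.0
    for start in window_starts:
        hits = [t for t in alarm_times if start <= t < start + T_w]
        dT = (min(hits) - start) if hits else T_w
        total += 1.0 - dT / T_w
    return total / len(window_starts)
```

A sustained upward shift in the inputs drives the test statistic above $h$ within a few steps, while baseline observations leave it at zero; an alarm exactly halfway through a detection window scores 0.5.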
A perfect score of one indicates that alarms consistently sound during the first week of the detection window; 0.5 indicates that alarms occur, on average, right at the start of events; lower values indicate delayed alarms, triggered weeks after the event has begun. \subsection*{Parameter optimization} When implementing MEWMA-FFS, we must estimate the smoothing parameter $\lambda$ and the threshold $h$. The parameter pair $(\lambda, h)$ should maximize the performance of the model while minimizing the number of false positive alarms triggered outside detection windows for actual events. To constrain the number of false positive alarms, we specify the Average Time between False Signals (ATFS) during the training process. This parameter is the expected number of time steps between signals during non-outbreak periods and is given by \begin{equation} ATFS \triangleq \mathbb{E}(t^{**} - t^{*} | \tau_s = \infty), \end{equation} where $t^{*}$ denotes the time an initial alarm is triggered; $t^{**}$ is the next time an alarm sounds; $\tau_s$ is the first day of an event, with $\tau_s = \infty$ indicating that an event never occurs. The value of ATFS can be estimated using simulations. We first generate samples from the null distribution (data outside event periods), then use the MEWMA procedure described in Eqs.~[\ref{mu}]--[\ref{covS}] to trigger alarms, and finally use the spacing between these false alarms to estimate ATFS \cite{Fricker2013}. To calculate the optimal parameter pair, we begin by fixing a value of ATFS ($\varphi$). Given a set of time series $\boldsymbol{X}$, this constrains the possible choices for parameter pairs $(\lambda, h)$ to a curve $\Gamma(\varphi; \boldsymbol{X})$.
The overarching optimization goal is given by \begin{equation} \boldsymbol{X}^*, \lambda^*, h^* = \arg \max_{\{\boldsymbol{X}\subset\boldsymbol{\Omega}:|\boldsymbol{X}|=k, (\lambda, h) \in \Gamma(\varphi; \boldsymbol{X}) \}} P(\boldsymbol{X}, \lambda, h; y) \end{equation} where $\boldsymbol{X}^*$ is the optimal combination of time series; $\boldsymbol{\Omega}$ is the set of all candidate time series; $k$ is the pre-determined number of time series in the optimization; $\lambda^*$ and $h^*$ are the optimal parameter pair. To evaluate parameter pairs $(\lambda, h)$ on the curve $\Gamma(\varphi; \boldsymbol{X})$, we consider values of $\lambda$ between zero and one with a step size of 0.1. Since ATFS is monotonically increasing in $h$, this allows us to efficiently find the corresponding approximate value of $h$ using the secant method \cite{Allen1998} with a tolerance of 0.5 and a maximum of 100 iterations. We plug each resulting parameter pair into the MEWMA model and measure in-sample performance. The parameter pair maximizing the in-sample performance is chosen for out-of-sample prediction. \subsection*{Forward feature selection} To choose the optimal combinations of time series for early warning, we implement a stepwise forward feature selection algorithm in combination with MEWMA. We begin with no predictors and test the model performance (in terms of the average timing of early detection) when adding each of the possible candidate predictors on its own. We select the time series that most improves model performance as the first predictor. We then repeat the following until we reach a target number of predictors or the model performance levels off: (1) evaluate each \emph{remaining} candidate predictor in combination with predictors already selected for the system and (2) select the candidate that most improves model performance for inclusion in the system.
Formally, \begin{equation} \boldsymbol{X}_0 := \emptyset\ \text{and}\ \boldsymbol{X}_{i+1} := \boldsymbol{X}_i \cup \Big\{ \arg \max_{x \in \boldsymbol{\Omega} \backslash \boldsymbol{X}_i } P(\boldsymbol{X}_i \cup {\{x\}} , \lambda, h; y) \Big\} \end{equation} where $\boldsymbol{X}_i$ is the set of selected candidate time series at step $i$; $\boldsymbol{\Omega}$ is the set of all candidate time series; $P(\boldsymbol{X}_i \cup {\{x\}} , \lambda, h; y)$ is the performance metric; $y$ is the gold standard; $\lambda$ is the smoothing parameter, and $h$ is the threshold for the test statistic. \subsection*{Optimizing early detection of influenza outbreaks in the US} We demonstrate the MEWMA-FFS framework by designing an early detection system for influenza in the US using 2010-2016 data. Using national-scale ILINet data as the gold standard (described under \emph{Data} below), outbreak events (influenza outbreaks) are defined as ILINet surpassing a specified threshold for at least three weeks. Candidate predictors are selected to detect the onset of influenza outbreaks as early as possible within a specified number of weeks leading up to and following the start of each event. When selecting candidate predictors, all time series are evaluated using six-fold cross-validation. For each fold, one of the six influenza seasons is held out for testing and the other five are used for training. The candidate model is evaluated by the timing of the alarm relative to the actual start of the event, averaged across the six out-of-sample predictions. To constrain false positives, we set the target ATFS to 20 weeks and then choose optimization parameter pairs $(\lambda, h)$ by running 1000 simulations. To further reduce the stochasticity of simulation, each optimization experiment is repeated 40 times and the optimal combination of predictors is determined by the median of their ranks.
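The greedy selection loop itself is compact; in this stand-alone sketch, `score` stands in for the cross-validated performance $P$ of a candidate combination (the actual evaluation involves the full MEWMA machinery described above).

```python
def forward_select(candidates, score, k):
    """Greedy forward feature selection: repeatedly add the candidate that
    maximizes score(selected + [c]) until k predictors are chosen."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda c: score(selected + [c]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a toy score that rewards two target predictors (and mildly penalizes set size), the loop recovers exactly those two.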
After building the early detection systems (i.e., selecting optimal combinations of predictors via MEWMA-FFS), we perform two additional rounds of model evaluation. Since the gold standard and predictor data overlap for only six influenza seasons (2010-2016), we use these data twice: first, as described above, we use six-fold cross-validation (one season held out) to select optimal combinations of predictors for each model; second, we use three-fold cross-validation (two seasons held out) to compare the performance of different optimized models. We report the timing of alarms relative to the official start of each event, the proportion of events detected (recall), and the percentage of true alarms over all alarms (precision) across the three folds. In preliminary analysis, we found that the length of training data does significantly impact model performance (Fig.~\ref{ts_length_performance}). Finally, following model construction and comparison on 2010-2016 data, we further evaluate the performance of the best models in comparison to simpler alternatives using true test data from the 2016-2017 influenza season and the fall wave of the 2009 H1N1 influenza pandemic. Since we do not reset $\boldsymbol{S}_t$ to zero following alarms, systems tend to signal repeatedly until the observations return to baseline. Therefore, we track only the timing of the first alarm during continuous clusters of alarms. MEWMA without resetting saves on computation during FFS optimization, as it allows us to reference a single set of stored null-distribution calculations when testing for alarms. That is, if $\boldsymbol{F}$ is the null distribution for all candidate time series, we can compute and save the mean vector $\boldsymbol{\mu}$, covariance matrix $\boldsymbol{\Sigma}$, and $\boldsymbol{S}_t$ statistic with $\boldsymbol{X}_t$, the vector of observations from all candidate time series at time $t$.
Given a subset $\boldsymbol{U}$ of candidate time series, the test statistic $E_t$ can be computed by using the pre-computed $\boldsymbol{S}_t$ and $\boldsymbol{\Sigma}$ directly. \subsection*{Choosing an event threshold and detection window} To speed up the optimization experiments, we tune the event threshold $\varepsilon$ and length of detection window $T_{w}$. We run optimization experiments using eleven ILINet time series across a range of values for $\varepsilon$ and $T_{w}$ (\nameref{threshold_detectionWindow_performance_comparison}). We constrain $T_{w}$ so that the start of the window does not precede the lowest observation in the onset of a given outbreak. As in our primary analysis, predictors are selected using six-fold cross-validation and compared via a secondary round of three-fold cross-validation. We consider ILINet event thresholds ranging from 1\% to 2\% and detection windows ranging from 4 to 20 weeks surrounding the onset of an event, and find that the combination of $\varepsilon=1.25\%$ and $T_{w}=16$ maximizes timeliness, precision and recall (\nameref{threshold_detectionWindow_performance_comparison}). \subsection*{Assessing the trade-off between run-time and performance} To evaluate the impact of the ATFS on model performance, we run optimization experiments across ATFS values ranging from 5 to 150. In each experiment, predictors are selected and evaluated through cross-validation as described above. For each ATFS value, we run 40 replicates and record their compute time on the Olympus High Performance Compute Cluster~\cite{olympus}. \subsection*{Sensitivity analysis} To evaluate the impact of the training period duration, we run five optimization experiments following the procedures described above, while varying the length of the training time series from 12 years to 4 years: 2004-2016, 2006-2016, 2008-2016, 2010-2016, 2012-2016.
To evaluate the importance of including recent data, we run a series of optimization experiments with variable time gaps between the end of a four-year training period and the beginning of a one-year testing period (\nameref{ts_gap_dataVis}). \subsection*{Alternative models} We compare our optimized early detection algorithms with three simpler alternatives. All three models were fit via three-fold cross-validation on 2010-2016 ILINet data, with two seasons held out in each round. When computing performance, we follow the methods described above for the MEWMA-FFS model: We consider only the first alarm in each cluster and assume the same objective function, event threshold, detection window, and ATFS. \emph{Week-based trigger}: The model triggers alarms in the same week of every year. Week 34 maximizes the cross-validated performance. \emph{Rise-based trigger}: The model triggers alarms as soon as reported ILINet values increase for $n$ consecutive weeks. We considered $n$ ranging from 2 to 20 weeks and determined that $n=4$ maximizes the cross-validated performance. \emph{Univariate-ILINet US}: We fit the MEWMA-FFS model using national level ILINet data as the sole predictor. \subsection*{Data} The method evaluates candidate data sources based on their ability to detect events in a designated \emph{gold standard} data source. Throughout this study, we use CDC national-scale ILINet data as the gold standard and consider the following five categories of candidate data: (a) ILINet; (b) NREVSS; (c) Google Trends; (d) Wikipedia access logs; (e) athenahealth EHR. ILINet: The CDC compiles information on the weekly number of patient visits to healthcare providers for influenza-like illness through the US Outpatient Influenza-like Illness Surveillance Network (ILINet). Current and historical ILINet data are freely available on FLUVIEW \cite{fluview}.
We use the weekly percentage of ILI patient visits to healthcare providers on both national and Health and Human Services (HHS) scales (which are weighted by state population). The national scale time series serves as our gold standard data, and both national and HHS data are considered as candidate data sources during optimization from 07/03/2009 through 02/06/2017. NREVSS: Approximately 100 public health and over 300 clinical laboratories in the US participate in virologic surveillance for influenza through either the US World Health Organization (WHO) Collaborating Laboratories System or the National Respiratory and Enteric Virus Surveillance System (NREVSS). All participating labs issue weekly reports providing the total number of respiratory specimens tested and the percent positive for influenza. These data are publicly available on FLUVIEW \cite{fluview}. Our optimization considers both national and HHS scale time series of the weekly percentage of specimens positive for influenza from 07/03/2009 through 02/06/2017. GT: Google Correlate~\cite{googleCorrelate} and Google Trends~\cite{googleTrends} are freely available tools developed by Google that enable users to (1) find search terms correlated with user-provided time series and (2) obtain search frequency time series corresponding to user-provided search terms, respectively. We first applied Google Correlate to national scale ILINet data between 01/04/2004 and 5/16/2009 and retrieved the top 100 matches (Table~\nameref{GT_search_terms}). We then applied Google Trends to each of the top 100 search terms to obtain search frequency time series for 07/03/2009 through 02/06/2017. These serve as candidate data sources in our optimization. Wikipedia: Wikipedia is widely used as an online reference (nearly 506 million visitors per month) \cite{McIver2014wiki}.
Researchers have demonstrated a correlation between US ILINet and time series of access frequencies for English-language Wikipedia articles relating to influenza \cite{McIver2014wiki, Hickmann2015wiki}. Using the Delphi Epidata API \cite{Farrow2016thesis}, we obtained the normalized weekly number of hits for each of 53 influenza-related Wikipedia pages listed in \cite{Hickmann2015wiki} from 07/03/2009 through 02/06/2017 (\nameref{wiki_articles}). Athena: athenahealth provides cloud-based services for healthcare providers and manages large volumes of electronic health records data. In collaboration with athenahealth, we obtained the following daily data for approximately 71,939 healthcare providers across the US from 07/03/2010 to 02/06/2016: the total number of patient visits, the number of influenza vaccine visits, the number of visits billed with an influenza diagnosis code on the claim, the number of ILI visits, the number of visits in which an influenza test was ordered, the number of visits with an influenza test result, the number of visits with a positive influenza test, and the number of visits with a flu-related prescription. We generated 77 time series in total (seven variables across eleven geographic scales: national plus ten HHS regions), each aggregated by week: (1) ILIVisit---the weekly count of ILI visits; (2) ILI\%---the ratio of the number of ILI visits to the total number of visits; (3) FluVaccine---the weekly count of visits with an influenza vaccine; (4) FluVisit---the weekly count of visits billed with an influenza diagnosis code on the claim; (5) Positive\%---the ratio of the number of visits with a positive influenza test result to the number of visits with an influenza test; (6) FluResult---the number of patient visits with an influenza test result; (7) FluRX---the number of patient visits with a flu-related prescription.
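As an illustration of how such ratio series are formed, the following minimal sketch (with hypothetical field names, not the actual athenahealth schema) aggregates daily visit counts into a weekly ILI\% series per region:

```python
from collections import defaultdict

def weekly_ili_percent(daily_records):
    """Aggregate daily provider-level counts into weekly ILI% per region.

    `daily_records` is an iterable of dicts with hypothetical keys:
    'week' (a week label), 'region' (e.g. 'US' or an HHS region),
    'ili_visits', and 'total_visits'. Returns {(region, week): ILI%}."""
    ili = defaultdict(int)
    total = defaultdict(int)
    for r in daily_records:
        key = (r["region"], r["week"])
        ili[key] += r["ili_visits"]
        total[key] += r["total_visits"]
    # ILI% = 100 * (sum of ILI visits) / (sum of all visits) per week
    return {k: 100.0 * ili[k] / total[k] for k in total if total[k] > 0}
```

The count-based series (e.g., FluVisit, FluRX) follow the same weekly aggregation without the final ratio.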
\section*{Results} \subsection*{Early detection from single data sources} We first fit the early detection model to each of the 240 candidate time series individually and assess their ability to anticipate when ILINet will cross a threshold of 1.25\%. Performance indicates the average timing of alarms based on six out-of-sample tests, on a scale from zero to one corresponding to alarms ranging from eight weeks after to eight weeks before ILINet reaches the 1.25\% threshold. The expected performance is highly variable across data sources (Fig~\ref{single_performance_threshold125}), with ILINet and Google Trends data generally providing earlier warning than laboratory, EHR and Wikipedia data. The Google Trends time series for \emph{human temperature} provides the best balance of timeliness, precision and recall (Fig~\ref{systems_and_alarms_threshold125}(A)~and~\nameref{alarms_threshold125_supplement}), with an average advanced warning of 14 weeks prior to the CDC's 2\% threshold for the onset of the influenza season \cite{cdcflu}. National scale ILINet data triggers alarms an average of 11.7 weeks prior to the 2\% threshold (Fig~\ref{systems_and_alarms_threshold125}). Several data sources failed to detect any of the seasons, including Wikipedia page views relating to non-seasonal influenza viruses and athenahealth counts of positive influenza tests in HHS regions 8 and 9. \begin{figure}[!ht] \centering \includegraphics[width=110mm]{single_performance_threshold125.pdf} \caption{{\bf Early detection by single data sources, summarized by category.} For each of the 240 candidate predictors, we fit a univariate detection model and measured performance by averaging early warning across six-fold cross-validation (2010-2016). Emergence events for optimization are defined by an ILINet threshold of 1.25\%. The expected performance is highly variable, ranging from 0 to 0.77.
A value of one means that the system consistently sounded alarms a full eight weeks prior to ILINet reaching the 1.25\% event threshold; a value of 0.5 indicates that, on average, alarms sound at the time the threshold is reached; lower values indicate delayed alarms.} \label{single_performance_threshold125} \end{figure} \subsection*{Early detection from multiple data sources} We selected optimal combinations of predictors from within each class of data. For CDC ILINet, we considered 11 candidate predictors and found that the optimized system included three time series: ILINet HHS region 7 (Iowa, Kansas, Missouri and Nebraska), ILINet HHS region 5 (Illinois, Indiana, Ohio, Michigan, Minnesota and Wisconsin), and ILINet US (Fig~\ref{data_selection_curve_threshold125}). Across all replicates, HHS region 7 was selected as the most informative predictor, which alone outperforms the optimized system using multiple NREVSS data sources (Fig.~\ref{data_selection_curve_threshold125}). HHS region 9 and US were not selected in every replicate, and only marginally elevate the performance of HHS region 7. Comparing the optimized internet-source systems (Google Trends and Wikipedia) to the optimized EHR (athenahealth) system, we find that the best combination of Google Trends time series---\emph{human temperature}, \emph{normal body temperature}, \emph{break a fever}, \emph{fever cough}, \emph{flu treatments}, \emph{thermoscan}, \emph{ear thermometer}---outperforms the others (Fig~\ref{data_selection_curve_threshold125} and \ref{systems_and_alarms_threshold125}(A)). \begin{figure*}[ht] \centering \includegraphics[width=\columnwidth]{data_selection_curve_threshold125.pdf} \caption{{\bf Performance curves for early detection systems.} Systems were optimized within each data category (ILINet, NREVSS, Google Trends, Wikipedia, and athenahealth) and across all data categories, including and excluding Google Trends.
Performance is the average advanced warning within the 16-week detection window surrounding the week when ILINet reaches the event threshold of 1.25\%. Performance equal to one indicates that a model consistently signals eight weeks ahead of the event threshold, while zero indicates failure to signal within the detection window. Early detection improves as forward selection sequentially adds the most informative remaining data source until reaching a maximum performance. For the optimal system, the first six predictors are Google Trends sources and the remaining two are Wikipedia sources; for the optimal system excluding Google Trends, the top sources are from Wikipedia, athenahealth, Wikipedia and ILINet, in that order. } \label{data_selection_curve_threshold125} \end{figure*} Across the three-fold out-of-sample tests, the ILINet system detected all six influenza outbreaks with an average advanced warning of 12.7 weeks prior to the CDC's season onset threshold, while the Google Trends system detected 83.3\% of outbreaks (five out of six), with an average advanced warning of 16.4 weeks (excluding the missed outbreak) prior to the official threshold (Fig~\ref{systems_and_alarms_threshold125}(A)~and~\nameref{alarms_threshold125_supplement}). The other systems each detected four to six of the six test seasons (not always the same seasons), with average advanced warning ranging from 9.5 to 14.2 weeks (Fig~\ref{systems_and_alarms_threshold125}(A)~and~\nameref{alarms_threshold125_supplement}). Individual ILINet time series generally provide earlier warning than individual EHR and Wikipedia time series. However, performance reverses for optimized multivariate models, with the best ILINet algorithm underperforming both the EHR and Wikipedia algorithms (Fig~\ref{systems_and_alarms_threshold125}(A)~and~\nameref{alarms_threshold125_supplement}). To build multi-category early detection systems, we applied the optimization method to the ``winners'' of the previous experiments.
That is, we considered the 26 predictors shown on the first five plots of Fig~\ref{data_selection_curve_threshold125}. The best model includes eight predictors. The top six are all Google Trends: \emph{human temperature}, \emph{normal body temperature}, \emph{break a fever}, \emph{fever cough}, \emph{flu treatments}, \emph{thermoscan}; the remaining two are Wikipedia: \emph{orthomyxoviridae} and \emph{shivering}, which improve the performance of the system only marginally (Fig~\ref{data_selection_curve_threshold125}). None of the ILINet, NREVSS, or EHR time series made the cut. The combined system achieves comparable early warning to the optimized Google Trends system while detecting a higher proportion of events with fewer false alarms (Fig~\ref{systems_and_alarms_threshold125}). Furthermore, it sounds alarms earlier than all three alternative models in four out of six seasons. In 2012-2013 all models provide similar early warning; in 2015-2016, the \textit{week-trigger} and \textit{rise-trigger} algorithms signal two and three weeks ahead of our optimized algorithm, respectively (Fig~\ref{systems_and_alarms_threshold125}(B)). The optimized algorithm also produces fewer false alarms than the rise-trigger algorithm and detects a higher proportion of influenza seasons than the week-trigger algorithm (Fig~\ref{systems_and_alarms_threshold125}(B)). The MEWMA model using only ILINet data typically lags all other models in signalling events. When we exclude Google Trends candidates from optimization, the method selects Wikipedia pageviews of \textit{flu season} as the most informative predictor, followed by a combination of EHR, Wikipedia and ILINet time series (Fig~\ref{data_selection_curve_threshold125}). Expected performance declines slightly without Google Trends data.
In three-fold out-of-sample evaluation, the six influenza seasons are detected an average of 14.8 weeks prior to the CDC's 2\% threshold without missing any events (Fig~\ref{systems_and_alarms_threshold125}). \begin{figure*}[!ht] \centering \includegraphics[width=130mm]{systems_and_alarms_threshold125.pdf} \caption{{\bf Performance of optimized US influenza detection algorithms in three-fold cross validation (2010-2016).} (A) Distribution of system performance over six influenza outbreaks across 40 replicates, in terms of the timing of true alarms relative to the official onset of influenza seasons (excluding missed seasons), the proportion of alarms indicating actual events (precision), and the proportion of events detected (recall). (B) Timing of alarms relative to the official onset of each influenza season. Using the US ILINet time series (blue curves) as a historical \textit{gold standard}, the detection models were trained to sound alarms as early as possible in the sixteen weeks surrounding the week when ILINet reaches 1.25\%. The bar plot (panel 1) shows the advanced warning provided by out-of-sample alarms in terms of weeks in advance of the CDC's 2\% ILINet threshold for declaring the onset of the influenza season. Bars not shown indicate missed events. In the lower time series plots, dashed green lines indicate the CDC's seasonal influenza threshold of 2\%; numbers indicate the corresponding week of the year; short red lines indicate the timing of the alarms given by the optimized model.} \label{systems_and_alarms_threshold125} \end{figure*} \subsection*{Out-of-sample detection of the 2009 H1N1 pandemic and 2016-2017 influenza season} We further validated our algorithms using held-out ILINet data from two different epidemics. For the 2016-2017 influenza season, the optimized algorithm signaled the start of the 2016-2017 season 14 weeks prior to ILINet reaching the CDC's 2\% threshold, which outperforms the univariate ILINet model.
However, the week-trigger and rise-trigger algorithms beat the optimized algorithm by two weeks. For the atypical fall wave of transmission during the 2009 H1N1 pandemic, these two models failed to signal the emerging threat. It emerged much earlier in the year than seasonal influenza (thus tripping up the week-trigger algorithm) and at a higher epidemic growth rate (thus outpacing the rise-trigger algorithm) \cite{cdc09pandemic}. The optimal system was able to detect the fall wave five weeks prior to ILINet reaching the 2\% threshold (Fig~\ref{final_detection_threshold125}). The univariate ILINet model again lags the best model by several weeks in out-of-sample testing. This suggests that our optimized multivariate models are more robust for detecting anomalous influenza threats than the simpler alternatives. \begin{figure*}[!ht] \centering \includegraphics[width=110mm]{final_detection_0910_threshold125_alter.pdf} \caption{{\bf Early detection of the 2009 H1N1 pandemic (out-of-sample).} The optimized model was trained on 2010-2016 ILINet data, and then tested on US ILINet reports (blue curve) during the fall wave of the 2009 H1N1 pandemic. It triggered an alarm (triangle) five weeks prior to ILINet reaching the official epidemic threshold of 2\% (dashed lines). Red markers indicate the timing of alarms triggered by the optimized and baseline models.} \label{final_detection_threshold125} \end{figure*} \subsection*{Sensitivity to training period} When we varied the length of the training period from four to twelve years, we selected overlapping sets of optimal predictors, with all five systems including ILINet data for HHS regions 6 and 7 (Table~\nameref{diff_time_window}). The systems detected similar proportions of events. However, the precision (the proportion of true alarms to all alarms) appears to increase with the length of the training period while, surprisingly, the alarms tend to sound later (Fig~\ref{ts_length_performance}).
We also found system performance to be fairly insensitive to the gap between the training and testing periods (\nameref{ts_lagged_performance}), suggesting robust performance with only periodic system updates. \begin{figure*}[!ht] \centering \includegraphics[width=60mm]{ts_length_performance_alter.pdf} \caption{{\bf Duration of training period impacts early detection.} Graphs compare the performance of five systems optimized using continuous training data ranging in length from four to twelve years (each ending in 2016), evaluated via cross-validation on 2012-2016 data. Alarm timeliness (top) unexpectedly declines as the training period increases (maximum likelihood linear regression, P=0.019), while the proportion of true alarms (middle) improves (maximum likelihood linear regression, P=0.000256). Training period does not significantly impact recall (not shown). } \label{ts_length_performance} \end{figure*} \section*{Discussion} This MEWMA-FFS framework is designed to build robust early outbreak detection systems that harness a variety of traditional and next generation data sources. For seasonal influenza in the US, we identified a combination of freely available internet-source data that robustly detects the start of the season an average of $16.4$ (SD $3.3$) weeks in advance of the national surveillance threshold (ILINet reaching $2\%$). This is five weeks earlier than previously published early detection algorithms based on ILINet and Google data \cite{Cowling2006flu, Pervaiz2012flu}. In a retrospective out-of-sample attempt to detect the fall wave of the 2009 H1N1 influenza pandemic, the optimized multivariate algorithm provided the earliest warning among the competing models. However, it sounded an alarm only five weeks prior to ILINet reaching the national $2\%$ threshold. The shorter lead time may stem from the anomalously rapid growth of the 2009 pandemic. 
Across the six influenza seasons between 2010 and 2017, ILINet took an average of 9.4 weeks to increase from $1.25\%$ to $2\%$, with a minimum of six weeks in seasons 2012-2013 and 2014-2015; in the fall of 2009, this transpired in a single week (week 34). Public health surveillance data (e.g., ILINet and NREVSS) can detect emerging influenza seasons on their own, but a combination of eight Google query and Wikipedia pageview time series provided earlier warning across all eight epidemics tested. Although we cannot definitively explain the performance of internet data, we note that 59\% of flu-related Wikipedia English pageviews come from countries outside the US, including the United Kingdom, Canada, and India \cite{McIver2014wiki}. Perhaps earlier influenza seasons elsewhere provide advanced warning of imminent transmission in the US. The utility of Google and Wikipedia data may also stem from their large and diverse user bases and their immediate use following symptom onset, relative to the delay in seeking clinical care~\cite{Ginsberg2009gft}. NREVSS is among the most costly and time-lagged data sources; it performs poorest when considered individually and is never selected for inclusion in combined early detection systems. However, NREVSS provides critical spatiotemporal data for detecting and tracking novel viruses, including pandemic and antiviral resistant influenza, and informing annual vaccine strain selections. Thus, we speculate that NREVSS might rank among the most important sources when designing systems for virus-specific influenza nowcasting and forecasting objectives. We emphasize that these algorithms are not designed to forecast epidemics, but rather to detect unexpected increases in disease-related activity that may signal an emerging outbreak~\cite{Fricker2013}. Early warning provides public health agencies valuable lead time for investigating and responding to a new threat.
For seasonal and pandemic influenza, such models can expedite targeted public health messaging, surge preparations, school closures, vaccine development, and antiviral campaigns. Influenza forecasting models potentially provide more information about impending epidemics, including the week of onset, the duration of the season, the overall burden, and the timing and magnitude of the epidemic peak~\cite{shaman2013, brooks2015, ertme2018}. However, they are typically not optimized for early warning or for detecting outbreaks that are anomalous in either the timing or pace of expansion. Our conclusions may not be readily applied to influenza detection outside the US or to other infectious diseases. However, the general framework could be similarly deployed to address such challenges. Even for seasonal influenza in the US, our results pertain only to early detection of seasonal influenza activity as estimated from ILINet, and stem from only six seasons of historical data. If we changed the optimization target (i.e., gold standard data) to an EHR or regional ILINet source, the resulting systems and their performance may differ considerably. Furthermore, as alternative data and longer time series become available, the optimal systems could potentially improve. Early detection systems should therefore be regularly reevaluated and tailored to the specific objectives and geopolitical jurisdictions of public health stakeholders, and our optimization framework can facilitate easy and comprehensive updates. This approach requires domain knowledge in the selection of candidate data sources. Next generation \emph{proxy} data should be relevant to the focal disease and population, such as symptom- or drug-related search data. Climate and environmental factors may prove predictive for directly transmitted and vector-borne diseases, and may be a promising direction for enhancing the early detection systems developed here.
This \emph{black box} approach can select data sources with spurious or misleading relationships to the gold standard data. Thus, it may be prudent to screen out, both before and after optimization, data sources that are unlikely to correlate reliably with the target of early detection. We implemented this MEWMA-FFS framework as a user-friendly app in the Biosurveillance Ecosystem (BSVE) built by the US Defense Threat Reduction Agency (DTRA) \cite{bsve}. Military bioanalysts can now use it to evaluate and integrate diverse data sources into targeted early detection systems for a wide range of infectious diseases worldwide. The versatility of this \textit{plug-and-play} method stems from two assumptions: (1) it simply scans for deviations from underlying distributions rather than modeling a complex epidemiological process, and (2) it does not require seasonality, just historical precedents with which to train the model. We can now more easily harness the growing volumes of health-related data to improve the timeliness and accuracy of outbreak surveillance and thereby improve global health. \section*{Supporting information} \paragraph*{S1 Fig.} \label{threshold_detectionWindow_performance_comparison} {\bf Comparison of system performances with different pairs of event threshold $\varepsilon$ and detection window $T_w$ in three-fold cross validation (2010-2016).} Distribution of average system performance over six influenza seasons across 40 replicates, in terms of the timing of true alarms (excluding missed seasons), the proportion of alarms indicating actual events (precision), and the proportion of events detected (recall).
\begin{figure*}[!hp] \centering \includegraphics[width=120mm]{threshold_detectionWindow_performance_comparison.pdf} \end{figure*} \paragraph*{S2 Fig.} \label{alarms_threshold125_supplement} {\bf Out-of-sample detection of US influenza seasons by single source and single category early warning systems.} Using US ILINet time series (blue curves) as a historical gold standard, the detection models were optimized to sound alarms as early as possible in the sixteen weeks surrounding the week ILINet reaches the 1.25\% optimization threshold. The bar plot (panel 1) shows the alarm timing for each influenza season from 2010-2016 relative to the official ILINet threshold of 2\%. Bars not shown indicate missed events in early detection, while positive values indicate alarms triggered prior to the official start of each influenza season. In panel 2, horizontal green dashed lines represent the threshold of 2\%, while vertical green dashed lines indicate the onset of influenza seasons according to the threshold of 2\%; numbers indicate the corresponding week of the year; short red lines show alarm timings from the optimized model. \begin{figure*}[!ht] \centering \includegraphics[width=120mm]{alarms_threshold125_supplement.pdf} \end{figure*} \paragraph*{S3 Fig.} \label{ATFS_performance_tradeoff} {\bf Trade-offs between timeliness and precision, recall, and running time.} Each system was optimized using different values of ATFS. The three plots show the trade-off between alarm timings and the proportion of alarms indicating actual events (precision), the proportion of events detected (recall), and the running time of each optimization with 40 repeats running in parallel, respectively. Each run selected different combinations of predictors (\nameref{ATFS_tradeoff_table}) and detected influenza emergence an average of 11-14 weeks prior to the official onset of influenza seasons. There is a weak trade-off between timeliness and precision and minimal trade-off between timeliness and recall.
The precision is always below 0.9 while recall is equal to one for most values of ATFS. This is because we consider the timing of only the first alarm in a cluster; the ATFS is expected to impact the total number of alarms but not necessarily the number of alarm clusters~\cite{Fricker2013}. Meanwhile, a larger value of ATFS requires a longer running time for optimization. An optimization experiment with ATFS set to 50 (the value that maximizes timeliness and precision) requires twice the run time of an experiment using ATFS 20; however, the gain is only one additional week of early warning. Thus, it is valuable to balance performance and compute time when setting ATFS for optimization. \begin{figure*}[!ht] \centering \includegraphics[width=\columnwidth]{ATFS_performance_tradeoff.pdf} \end{figure*} \paragraph*{S4 Fig.} \label{ts_gap_dataVis} {\bf Diagram of training and testing periods used in sensitivity analysis.} \begin{figure*}[!hp] \centering \includegraphics[width=90mm]{ts_gap_dataVis.pdf} \end{figure*} \clearpage \paragraph*{S5 Fig.} \label{ts_lagged_performance} {\bf Sensitivity to the training period.} Each of five systems was optimized using training and testing periods diagrammed in \nameref{ts_gap_dataVis}. The three graphs show performance in terms of alarm timing (top), the proportion of alarms that correspond to actual events (middle), and the proportion of events detected (bottom). The gap between training and testing periods does not appear to significantly impact performance. \begin{figure*}[!ht] \centering \includegraphics[width=60mm]{ts_lagged_performance.pdf} \end{figure*} \paragraph*{S1 Table.} \label{ATFS_tradeoff_table} {\bf Time series selected for early detection systems across different values of ATFS.} Time series are listed in order of selection, assuming an ILINet threshold of 1.25\% for optimization.
\begin{table}[!hp] \begin{adjustwidth}{-0.75in}{0in} \centering \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline \multicolumn{10}{|c|}{\bf Value of ATFS}\\ \hline 5 & 10 & 20 & 30 & 40 & 50 & 80 & 100 & 120 & 150\\ \hline HHS 7 & HHS 7 & HHS 7 & HHS 7 & HHS 7 & US & HHS 7 & US & US & US\\ & HHS 6 & HHS 5 & US & US & HHS 4 & HHS 6 & HHS 4 & HHS 9 & HHS 4\\ & HHS 2 & US & HHS 6 & HHS 10 & HHS 10 & HHS 4 & HHS 6 & HHS 6 & HHS 6\\ & & & & HHS 4 & HHS 6 & US & HHS 7 & HHS 5 & HHS 7\\ & & & & HHS 6 & & & & HHS 1 & \\ & & & & HHS 8 & & & & & \\ \hline \end{tabular} \end{adjustwidth} \end{table} \clearpage \paragraph*{S2 Table.} \label{diff_time_window} {\bf Data sources selected for early detection systems across variable length training periods.} Time series are listed in order of selection, assuming an ILINet event threshold of 1.25\% \begin{table}[!hp] \centering \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{5}{|c|}{\bf Model Training Period} \\ \hline 2004-2016 & 2006-2016 & 2008-2016 & 2010-2016 & 2012-2016\\ \hline HHS 5 & US & HHS 7 & HHS 7 & HHS 3\\ HHS 7 & HHS 6 & HHS 1 & HHS 9 & HHS 7\\ HHS 9 & HHS 7 & HHS 6 & HHS 6 & HHS 10\\ HHS 6 & HHS 9 & & & HHS 2\\ HHS 8 & HHS 8 & & & HHS 6\\ & HHS 2 & & & US\\ \hline \end{tabular} \end{table} \clearpage \paragraph*{S3 Table.} \label{GT_search_terms} {\bf Candidate Google Trends data sources for early detection of seasonal influenza.} Optimization experiments evaluated 100 time series based on each of these search terms. \begin{table}[!hp] \begin{adjustwidth}{-1.5in}{0in} \centering \begin{tabular}{|l|l|l|l|} \hline \multicolumn{4}{|c|}{\bf Google Search Terms} \\[3pt] \hline influenza type a & \shortstack[l]{how long is the\\flu contagious} & signs of flu & pneumonia \\[3pt] exposed to flu & low body & early flu symptoms & flu report \\[3pt] symptoms of flu & get over the flu & how long does flu last & flu headache \\[3pt] flu duration & treating flu & normal body temperature & flu cough\\[3pt] flu contagious & flu vs. 
cold & get rid of the flu & flu last \\[3pt] \shortstack[l]{incubation period\\for flu} & flu coughing & break a fever & flu contagious period \\[3pt] flu fever & having the flu & type a influenza & ear thermometer \\[3pt] treat the flu & treatment for flu & i have the flu & \shortstack[l]{how to get rid\\of the flu} \\[3pt] how to treat the flu & human temperature & after the flu & flu how long \\[3pt] signs of the flu & dangerous fever & when you have the flu & symptoms of bronchitis \\[3pt] \shortstack[l]{influenza incubation\\period} & cold versus flu & flu in children & \shortstack[l]{what to do if\\you have the flu}\\[3pt] over the counter flu & the flu & taking temperature & cold and flu \\[3pt] how long is the flu & remedies for flu & if you have the flu & \shortstack[l]{over the counter\\flu medicine} \\[3pt] symptoms of the flu & contagious flu & how long flu & flu type \\[3pt] flu recovery & \shortstack[l]{how long does\\the flu last} & flu germs & treating the flu \\[3pt] flu and fever & flu lasts & \shortstack[l]{incubation period\\for the flu} & do i have the flu \\[3pt] flu medicine & have the flu & cold vs. 
flu & flu care \\[3pt] flu or cold & oscillococcinum & flu and cold & how long contagious \\[3pt] is flu contagious & \shortstack[l]{how long is\\flu contagious} & thermoscan & fight the flu \\[3pt] how long does the flu & flu treatments & flu complications & reduce a fever \\[3pt] cold symptoms & how to reduce a fever & upper respiratory & fever dangerous \\[3pt] treat flu & influenza symptoms & high fever & cure the flu \\[3pt] is the flu contagious & cold vs flu & flu children & medicine for flu \\[3pt] flu treatment & braun thermoscan & the flu virus & flu length \\[3pt] flu vs cold & fever cough & how to treat flu & cure flu \\[3pt] \hline \end{tabular} \end{adjustwidth} \end{table} \clearpage \paragraph*{S4 Table.} \label{wiki_articles} {\bf Candidate Wikipedia data sources for early detection of seasonal influenza.} Optimization experiments evaluated 53 time series based on access frequency for each of these Wikipedia articles. \begin{table}[!hp] \begin{adjustwidth}{-1in}{0in} \centering \begin{tabular}{|l|l|l|l|} \hline \multicolumn{4}{|c|}{\bf Wikipedia Articles} \\[3pt] \hline Antiviral drugs & Gastroenteritis & \shortstack[l]{Influenza A virus\\subtype H5N1} & Influenza-like illness \\[3pt] Avian influenza & Headache & \shortstack[l]{Influenza A virus\\subtype H7N2} & Influenzavirus A \\[3pt] Canine influenza & \shortstack[l]{Hemagglutinin\\(influenza)} & \shortstack[l]{Influenza A virus\\subtype H7N3} & Influenzavirus C \\[3pt] Cat flu & Human flu & \shortstack[l]{Influenza A virus\\subtype H7N7} & Malaise \\[3pt] Common cold & Influenza A virus & \shortstack[l]{Influenza A virus\\subtype H9N2} & Nasal congestion \\[3pt] Chills & Influenza & \shortstack[l]{Influenza A virus\\subtype H7N9} & Myalgia \\[3pt] Cough & \shortstack[l]{Influenza A virus\\subtype H1N1} & \shortstack[l]{Influenza A virus\\subtype H10N7} & Nausea \\[3pt] Equine influenza & \shortstack[l]{Influenza A virus\\subtype H1N2} & Influenza B virus & Neuraminidase inhibitor \\[3pt] 
Fatigue (medical) & \shortstack[l]{Influenza A virus\\subtype H2N2} & Influenza pandemic & Orthomyxoviridae \\[3pt] Fever & \shortstack[l]{Influenza A virus\\subtype H3N8} & Influenza prevention & Oseltamivir\\[3pt] Flu season & \shortstack[l]{Influenza A virus\\subtype H3N2} & Influenza vaccine & Paracetamol\\[3pt] Rhinorrhea & Rimantadine & Shivering & Sore throat \\[3pt] Swine influenza & Viral neuraminidase & Viral pneumonia & Vomiting \\[3pt] Zanamivir & & &\\[3pt] \hline \end{tabular} \end{adjustwidth} \end{table} \section*{Acknowledgments} We thank athenahealth, Inc. for providing Electronic Health Records data. Funding was provided by the US Department of Defense, Defense Threat Reduction Agency, contract HDTRA-14-C-0114, and the US National Institute of General Medical Sciences Models of Infectious Disease Agent Study Grant U01GM087719. \nolinenumbers \clearpage
\section{Introduction} \label{sec:intro} In this paper we point out the effect of magnetostrictions in the interior of a neutron star on its rotation frequency. The exact composition of the interior of a neutron star is still an ongoing debate, since it critically depends on the equations of state for the different constituents of the interior used in calculations (for reviews see, e.\,g., \cite{baym18}, \cite{lattimer07}, or \cite{jiang19}, and references therein). Further progress is expected from the Neutron Star Interior Composition Explorer (NICER) (\cite{bogdanov19a, bogdanov19b}) installed on the International Space Station, which is devoted to the study of neutron stars through soft X-ray timing. To illustrate the magnetostrictive effect on the rotation frequency of a neutron star, and to be specific, we resort to a simplified model for the interior, presented by \cite{ruderman98}. In this model the neutron star is composed of different regions with different properties. The interior is assumed to be in a liquid state, with neutrons that exhibit superfluid properties and protons that form a type-II superconductor with superconducting flux lines carrying magnetic moments (\cite{srinivasan90}). The crust of the star is assumed to be solid, and the outermost shell of the crust is formed by completely ionized iron atoms which carry nuclear magnetic moments, and a sea of relativistic electrons which carry electronic magnetic moments (\cite{chamel08}). Usually, a magnetic effect on the mass density of the neutron star, which determines the inertia tensor, is not taken into account in the literature. Here we show that such an effect exists, and that this affects the rotational frequency of the star. It is worthwhile mentioning that in general the rotation frequency of the neutron star itself is not a constant. There are physical processes which accelerate the rotation (spin-up) and physical processes which retard the rotation (spin-down).
An example of a spin-down process is the emission of electromagnetic waves by which the star loses energy. The radiation originates in the magnetosphere of the neutron star, and is magnetic dipole radiation, because in general the magnetic field axis of the neutron star is not aligned with its rotation axis. \section{Magnetostriction in the Neutron Star} \label{sec:magnetostriction} The magnetic moments of the particles in the various regions of the star discussed in Section \ref{sec:intro} produce magnetizations in those regions. Due to the magnetoelastic interactions of the magnetization with the matter, magnetostrictive strains arise which change the mass density of the star and, thus, the inertia tensor of the neutron star. From this it follows that there is a magnetic effect on the rotation frequency of the star. In this Section we want to sketch how the calculation of the magnetostriction in the different inner regions of the neutron star would have to proceed, and which parameter inputs are needed. The underlying theory is the micromagnetic theory described in \cite{kronmueller03}. The basic quantity entering the calculation of the magnetostriction is the so-called quasiplastic strain tensor $\boldsymbol{\epsilon}^{Q}$. An expression for the components of this tensor in the isotropic liquid interior of the star is given in \cite{hubert98}, \begin{equation} \epsilon^{Q}_{ik} = \frac{1}{2} \lambda_s (\gamma_i \gamma_k - \frac{1}{2} \delta_{ik}). \label{eq1} \end{equation} Here $\lambda_s$ is a constant which characterizes the magnetostrictive properties of the isotropic liquid. The $\gamma_{i}$ are the direction cosines of the magnetization, which is assumed to be homogeneous in the liquid part, i.e., not position-dependent. The quantity $\delta_{ik}$ represents the usual Kronecker symbol.
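As a concrete illustration, eq. (1) is straightforward to evaluate numerically. The following Python/NumPy sketch uses a purely illustrative value of $\lambda_s$; as discussed below, the actual magnetostriction constants of neutron-star matter are unknown.

```python
import numpy as np

def quasiplastic_strain_isotropic(gamma, lambda_s):
    """Quasiplastic strain tensor of eq. (1) for the isotropic liquid interior.

    gamma    : direction cosines of the (homogeneous) magnetization
    lambda_s : isotropic magnetostriction constant -- illustrative only,
               since its value for neutron-star matter is unknown
    """
    gamma = np.asarray(gamma, dtype=float)
    gamma = gamma / np.linalg.norm(gamma)  # enforce unit direction cosines
    return 0.5 * lambda_s * (np.outer(gamma, gamma) - 0.5 * np.eye(3))

# magnetization along the z-axis, with an arbitrary lambda_s
eps_q = quasiplastic_strain_isotropic([0.0, 0.0, 1.0], lambda_s=1e-5)
```

For magnetization along $z$ this gives diagonal components $(-\lambda_s/4, -\lambda_s/4, \lambda_s/4)$, i.e. a strain that differs between the magnetization axis and the transverse directions.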
An expression for the components of the tensor in the anisotropic solid crust of the star is also given in \cite{hubert98}, \begin{eqnarray} \epsilon^{Q}_{ik} &=& \frac{3}{2} \lambda_{100} (\gamma_i^2-\frac{1}{3}) \quad \hbox{for} \quad i=k=1,2,3 \nonumber \\ \epsilon^{Q}_{ik} &=& \frac{3}{2} \lambda_{111} \gamma_i \gamma_k \quad \hbox{for} \quad i \neq k. \label{eq 2} \end{eqnarray} The quantities $\lambda_{100}$ and $\lambda_{111}$ are magnetostriction constants which correspond to the fractional changes of the length of the sample upon saturation in [100]- direction and in [111]-direction, respectively. The magnetostrictive strains of the two regions add up to a total strain. In the following the magnetostrictions in the liquid and in the solid region of the neutron star are calculated separately. \subsection{Isotropic liquid interior} In the liquid part the elastic (magnetostrictive) strain $\boldsymbol{\epsilon}^{el}$ is determined by minimizing the elastic potential $\Phi^{el}$ given in \cite{kronmueller03} with respect to $\boldsymbol{\epsilon}^{el}$, \begin{equation} \Phi^{el} = -\frac{1}{2} \int (\boldsymbol{\epsilon}^{Q}\cdot \cdot ~\boldsymbol{C} \cdot \cdot ~\boldsymbol{\epsilon}^{Q}- \boldsymbol{\epsilon}^{el} \cdot \cdot ~\boldsymbol{C} \cdot \cdot ~\boldsymbol{\epsilon}^{el}) d^3 \boldsymbol{r}\,, \label{eq3} \end{equation} In (3) the symbol $\cdot \cdot$ denotes the tensor product. This potential includes the quasiplastic strain tensor $\boldsymbol{\epsilon}^{Q}$, the tensor $\boldsymbol{C}$ of elastic constants, as well as the tensor $\boldsymbol{\epsilon}^{el}$. The magnetization does not depend on the position, and therefore the integrand does not depend on position either. Thus the integral is just the integrand times the volume of the liquid region. To calculate the value of the integral one has to insert the respective components of the tensor of elastic constants for the isotropic liquid and the volume of this region. 
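Since the integrand in eq. (3) is position-independent, the integral reduces to the integrand times the volume of the region. For an isotropic medium the double contraction $\boldsymbol{\epsilon}\cdot\cdot~\boldsymbol{C}\cdot\cdot~\boldsymbol{\epsilon}$ reduces to the Lam\'e form $\lambda\,({\rm tr}\,\boldsymbol{\epsilon})^2 + 2\mu\,\boldsymbol{\epsilon}\!:\!\boldsymbol{\epsilon}$. The sketch below assumes this parameterization; the Lam\'e constants and the volume are placeholders, since the elastic constants of the liquid interior are not known.

```python
import numpy as np

def double_contraction_isotropic(eps, lam, mu):
    """eps .. C .. eps for an isotropic elastic tensor (Lame parameters lam, mu)."""
    return lam * np.trace(eps) ** 2 + 2.0 * mu * np.tensordot(eps, eps)

def elastic_potential(eps_q, eps_el, lam, mu, volume):
    """Eq. (3): position-independent integrand times the volume of the region."""
    return -0.5 * volume * (double_contraction_isotropic(eps_q, lam, mu)
                            - double_contraction_isotropic(eps_el, lam, mu))
```

Minimizing this expression over $\boldsymbol{\epsilon}^{el}$ (e.g. with a generic numerical optimizer) would then yield the magnetostrictive strain for a given $\boldsymbol{\epsilon}^{Q}$.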
Furthermore, one has to insert a value for the magnetization in the interior part of the neutron star arising from the superconducting flux tubes. For different neutron stars the magnetizations and the volumes of the various regions may be very different. This is a consequence of the fact that the magnetic fields of neutron stars lie in the range from $10^8$ Gauss up to $10^{15}$ Gauss \cite{reisenegger01}. \subsection{Crust} In the solid outermost shell of the crust the calculation of the magnetostriction follows a similar, though distinct, line. As shown in \cite{pethick98}, the elastic deformation energy, which represents $\Phi^{el}$, cannot be represented by strains; rather, the fundamental variable is the displacement field $\boldsymbol{u}$ of the atoms. From $\boldsymbol{u}$ the strain tensor $\boldsymbol{\epsilon}$ can be calculated via \begin{equation} \boldsymbol{\epsilon} = \frac{1}{2} \left(\nabla \boldsymbol{u} + \nabla \boldsymbol{u}^t \right), \end{equation} where the superscript $t$ denotes transposition. Inversely, the displacement field $\boldsymbol{u}$ can also be calculated with this equation for given strain tensors $\boldsymbol{\epsilon}$. From a given quasiplastic strain tensor $\boldsymbol{\epsilon}^Q$ the corresponding quasiplastic displacement field $\boldsymbol{u}^Q$ can be calculated, and for a given elastic strain tensor $\boldsymbol{\epsilon}^{el}$ the corresponding elastic displacement field $\boldsymbol{u}^{el}$ can be calculated. The elastic potential $\Phi^{el}$ given by eq. (3) for the liquid region describes the elastic deformation energy of the system. The first part is the energy resulting from the quasiplastic strains $\boldsymbol{\epsilon}^Q$; the second part is the energy resulting from the elastic strains $\boldsymbol{\epsilon}^{el}$. The elastic potential for the solid shell is the counterpart to the elastic potential of the liquid region.
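Eq. (4) can be discretized directly when the displacement field is sampled on a uniform Cartesian grid. A minimal finite-difference sketch (the grid and spacing are illustrative assumptions):

```python
import numpy as np

def strain_from_displacement(u, dx):
    """Eq. (4): eps = (grad u + (grad u)^T) / 2 via central finite differences.

    u  : displacement field on a uniform grid, shape (3, nx, ny, nz)
    dx : grid spacing
    Returns eps with shape (3, 3, nx, ny, nz).
    """
    grad = np.stack([np.stack([np.gradient(u[i], dx, axis=j) for j in range(3)])
                     for i in range(3)])  # grad[i, j] = d u_i / d x_j
    return 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))

# uniform stretch along x: u_x = a * x, so eps_xx = a everywhere
n, dx, a = 5, 1.0, 2.0e-6
u = np.zeros((3, n, n, n))
u[0] = a * (np.arange(n) * dx)[:, None, None]
eps = strain_from_displacement(u, dx)
```

The resulting tensor field is symmetric by construction, as eq. (4) requires.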
The two parts of the elastic potential $\Phi^{el}$ for the solid part again describe the elastic deformation energy, the first part the energy related to the quasiplastic displacement field $\boldsymbol{u}^Q$, the second part the energy related to the elastic displacement field $\boldsymbol{u}^{el}$. The quasiplastic displacement field $\boldsymbol{u}^Q$ is calculated via eq. (4) from the quasiplastic strain field $\boldsymbol{\epsilon}^Q$, and the elastic displacement field $\boldsymbol{u}^{el}$ is determined by minimizing the following elastic potential $\Phi^{el}$ of the nonisotropic shell of the crust with respect to $\boldsymbol{u}^{el}$. For the elastic deformation energy in the outermost shell of the neutron star we can use eq. 11 of \cite{pethick98}. In this way we obtain \begin{eqnarray} \Phi^{el} &=& \ - \frac{1}{2} \int \Bigg\{ - \frac{B}{2} \left( \frac{\partial u_{el,x}}{\partial x} + \frac{\partial u_{el,y}}{\partial y} \right)^2 + \frac{C}{2} \left[\left(\frac{\partial u_{el,x}}{\partial x} - \frac{\partial u_{el,y}}{\partial y} \right)^2 + \left( \frac{\partial u_{el,x}}{\partial x} + \frac{\partial u_{el,y}}{\partial y} \right)^2 \right] \nonumber \\ &+& \frac{K_3}{2} \left(\frac{\partial^2 u}{\partial z^2} \right)^2 + B^\prime \left(\frac{\partial u_{el,x}}{\partial x} + \frac{\partial u_{el,y}}{\partial y} \right) \left( \frac{\partial u}{\partial z} \right)^2 +\frac{B^{\prime\prime}}{2} \left( \frac{\partial u}{\partial z} \right)^4 \nonumber \\[1.7ex] &~& \hskip 2 truecm \hbox{plus the same sum of terms with~} \boldsymbol{u}^Q \hbox{~instead of~}\boldsymbol{u}^{el} \Bigg\} \; d^3\boldsymbol{r} \, . \end{eqnarray} \section{ {\bf Input necessary to calculate the size of the magnetic effect}} \label{sec:estimate} The magnetostriction changes the mass density of the star. The inertia tensor of the spherical star may be calculated from the mass densities in the various regions. 
To do this, one has to start from the initial mass density, which then is changed by the magnetostrictive strains. Because the rotation frequency of the star depends on the inertia tensor and on the angular momentum of the matter of the neutron star, the change of the initial mass density due to the magnetostrictive strains results in a magnetic effect on the rotational frequency. We now discuss the various inputs which are needed to calculate the size of the magnetic effect. To start with, one must insert values for the magnetizations in the various regions of the neutron star, because these magnetizations determine the quasiplastic strains $\boldsymbol{\epsilon}^Q$ and the quasiplastic displacement field $\boldsymbol{u}^Q$ which enter equations (3) and (5). In the envelope of the solid crust there are completely ionized Fe atoms which carry a nuclear magnetic moment, and a sea of relativistic electrons which carry an electronic magnetic moment, $m_{e}$, respectively. Because the nuclear moment is much smaller than the electronic moment, one has to take into account only the electronic moments. The magnetization then is $M_{el} = Z m_{e} \varrho$, where $\varrho$ is the density of Fe atoms and $Z$ is the total number of electrons in the Fe atom, given by the nuclear charge number $Z =26$. Inserting a density $\varrho$ of $10^4$ Fe atoms per cm$^3$, this yields a magnetization of about $242 \times 10^{-14}$ A/m. The magnetization of the liquid superconducting proton region with flux tubes is $M=m_{\rm tube} den_{\rm tube}$, where $m_{\rm tube}$ is the magnetic moment of a single tube and $den_{\rm tube}$ is the density (number of tubes per area) of the flux tubes. The change of the total magnetic moment $m$ of the sample by the appearance of $n$ tubes is then $\Delta m = n \Phi_0/\mu_0$, with the elementary flux quantum $\Phi_0 = 2.07 \times 10^{-7}$ Gauss\,cm$^2$ and with $\mu_0 = 4 \pi \times 10^{-7}$ Vs/Am.
From this the dipole moment of the flux tube can be calculated. The area density of the flux tubes is (cf. \cite{ruderman98}) $den_{\rm tube} \approx 10^4 /(P(s)\, \hbox{cm}^2)$, where $P(s)$ is the period of the vortex rotation in units of seconds. An anonymous referee points out to us that our estimates of the magnetizations in the various parts of the neutron star are based on the assumption that there is a complete alignment of the respective magnetic moments. In fact, the magnetic moments related to the flux lines of the type-II superconductor in the liquid interior of the neutron star may deviate from a complete alignment because of the rotation of the star. The magnetic moments of the relativistic electrons in the solid outer crust of the neutron star also may not be completely aligned because of thermal disorder. Furthermore, the referee notes that in neutron stars there may be a contribution to the magnetization generated by macroscopic currents, which is correct. In our theory only magnetizations appear, independent of the physical processes which generate them. The constants which appear in the elastic deformation energy of the non-isotropic solid envelope of the crust (which appear in eq. (5)) are given in \cite{pethick98}; the values of the tensor of elastic constants for the liquid isotropic region of the interior of the star are as yet unknown. Even if one inserted reasonable values for them, a very big problem for the calculation of the magnetic effect remains, because the magnetostriction constant $\lambda_s$ of the isotropic liquid region is not known, and also the magnetostriction constants $\lambda_{100}$ and $\lambda_{111}$ of the non-isotropic solid envelope of the crust are completely unknown.
It is true that for some transition metals and for some intermetallic compounds the values of these constants are known (see Table 2.3 of \cite{kronmueller03}) at temperatures of 4.2 K and at room temperature, and for the densities which these materials have at the surface of the Earth. But in the liquid region and in the solid envelope of the crust of the neutron star the temperatures and the densities are drastically higher, so that it does not make sense to insert the values given in that table of \cite{kronmueller03}. Therefore, a final calculation of the magnitude of the magnetic effect on the rotation frequency of the neutron star must be postponed to the future, when more information, either theoretical or experimental, on these constants has become available. But the main message of this paper remains that it is important to point out the existence of such a magnetic effect. \section{Summary}\label{sec:summary} In the present paper we have pointed out a magnetic effect on the rotation frequency of a neutron star. To the best of our knowledge this magnetic effect has not been taken into account in the literature so far. It results from the magnetoelastic interaction of the magnetizations in the various regions of the neutron star with matter. This interaction generates magnetostrictive strains, which change the mass density of the star and, thus, the inertia tensor. Because the rotation frequency of the star depends on the inertia tensor and on the angular momentum of the star material, this produces a change in the rotational frequency of the neutron star. All this is an interesting combination of the knowledge of the astrophysical properties of the neutron star with the knowledge of how magnetostriction is produced in a magnetic material.
\cite{easson77} proposed an anisotropic stress tensor for the type-II proton superconductor in neutron stars, and the question arises whether this is equivalent to the magnetostriction effects discussed in the present paper. Note, however, that the magnetostriction generates an anisotropic strain of the material by the magnetoelastic interaction of the magnetization with the matter, i.e., in the present paper the magnetostrictive strains are investigated, rather than the stress produced by the magnetostriction. All quantities appearing in the magnetostriction formalism are strains and not stresses. Therefore the discussion by \cite{easson77} is not equivalent to the discussion in the present paper. Furthermore, the anisotropic stress is predicted only for the liquid interior of the neutron star, whereas in the present paper both the liquid interior and the solid crust envelope are taken into account. The question also arises whether the magnetic effect on the rotation frequency could be detected by observing the change of the rotation frequency in time. Such a change due to a time-dependence of the magnetic effect would appear when the magnetizations, and hence the magnetostrictions, in the various parts of the neutron star themselves vary. In the liquid inner part of the neutron star this could happen as a result of the rotation of the star. In the solid outer crust of the star the change could appear because the charged particles are accelerated or retarded due to the spin-up and spin-down processes. However, the change of the magnetic effect on the rotation frequency caused by these changes of the magnetizations, and hence of the magnetostrictions, in the various parts of the star could hardly be distinguished from the changes of the rotation frequency caused by the above-mentioned spin-up and spin-down processes. \acknowledgements We thank Malvin Ruderman and Chris Pethick for helpful correspondence.
\section{Introduction} Feedback from active galactic nuclei (AGNs) now appears to be a crucial process in galaxy formation and evolution. It is well-known that the tight correlation between nuclear black hole mass ($M_{\rm BH}$) and bulge stellar velocity dispersion (i.e., the $M_{\rm BH} - \sigma_*$ relation; \citealt{ferrarese2000}; \citealt{gebhardt2000}; \citealt{tremaine2002}) is compelling evidence for a close connection between the evolution of supermassive black holes and their host galaxies. AGN feedback is the most likely explanation for this relation (e.g., \citealt{silk1998}; \citealt{fabian1999}). \citet{hopkins2006} also indicates the importance of feedback in the evolutionary model for starbursts, AGN activity and spheroidal galaxies. This feedback can terminate star formation in the host galaxy and halt gas accretion onto the nuclear black hole, in the form of radiation, winds, jets and outflows. The various emission lines from the narrow-line regions (NLRs) of AGNs are ideally suited to study the central regions of AGNs, as well as the interaction between the central engine and its host galaxy. Unlike the broad-line regions (BLRs), the NLR is spatially resolvable, at least for nearby galaxies. It is generally believed that the narrow emission lines are produced by clouds illuminated by the central AGN, and the kinematics of the NLR clouds are mainly dominated by the gravitational potential of the bulge (e.g., \citealt{whittle1992}; \citealt{nelson1996}). Since the NLR connects to various factors such as the energy input from the central engine, the structure of the AGN, radio jets, and star formation, it bears on a number of key questions of the AGN phenomenon. [\mbox{O\,{\sc iii}}]\ $\lambda$$\lambda$4959,5007 emission lines are commonly used to study the properties of NLRs.
It is usually the strongest narrow line of AGNs in the optical band and cleanly isolated from other emission and absorption features in the optical spectrum. The line profile of the [\mbox{O\,{\sc iii}}]\ doublets in low-redshift AGNs is usually asymmetric. In most cases, there is a sharper fall-off to the red than to the blue, and the redshift of [\mbox{O\,{\sc iii}}]\ is negative compared to the systemic velocity derived from different indicators, such as the low-ionization lines ([\mbox{N\,{\sc ii}}], [\mbox{S\,{\sc ii}}]), or stellar absorption lines (\citealt{heckman1981} and subsequent studies). This asymmetric feature of the [\mbox{O\,{\sc iii}}]\ line has been suggested as an indicator of outflows in Seyfert 1s and 2s (\citealt{heckman1981}; \citealt{whittle1988}; \citealt{colbert1996}; \citealt{crenshaw2010}), Narrow Line Seyfert 1 galaxies (\citealt{bian2005}; \citealt{komossa2008}), type 1 quasars (\citealt{heckman1984}; \citealt{boroson2005}) and narrow line radio galaxies (\citealt{holt2008}). It is now believed that a broad blue wing in addition to the main narrow component is ubiquitous in [\mbox{O\,{\sc iii}}]\ emission. Many previous studies suggest this blue wing is produced by AGN outflows; these studies also attempt to correlate line parameters, such as the [\mbox{O\,{\sc iii}}]\ blueshift and/or the [\mbox{O\,{\sc iii}}]\ line width, with the physical properties of AGNs (\citealt{zamanov2002}; \citealt{aoki2005}; \citealt{boroson2005}; \citealt{bian2005}; \citealt{komossa2008}). Based on homogeneous samples of radio-quiet Seyfert 1 galaxies and QSOs selected from SDSS, \citet{zhang2011} find that the blueshift of [\mbox{O\,{\sc iii}}]\ has only weak correlations with fundamental AGN parameters, such as the nuclear continuum luminosity at 5100\AA\ ($L_{5100}$), black hole mass ($M_{\rm BH}$), and the Eddington ratio ($L_{\rm bol}/L_{\rm Edd}$).
\citet{alexander2012} noted that, statistically, the width and luminosity of the blue wing increase with [\mbox{O\,{\sc iii}}]\ luminosity but are independent of radio loudness, indicating that the outflows are driven by AGN radiation rather than relativistic jets. \citet{zhang2008} found that Seyfert 1s have lower [\mbox{N\,{\sc ii}}]/{H$\alpha$}\ ratios than Seyfert 2s and the location of Seyfert 1s on the BPT diagram varies with extinction of broad lines, suggesting that the inner dense NLR is obscured by the dusty torus. The inner dense NLR might be the place where the [\mbox{O\,{\sc iii}}]\ blue wing originates (\citealt{zhang2013}). \citet{stern2013} revisit the location of type 1 and type 2 AGNs on the BPT diagrams, finding a result similar to that of \citet{zhang2008}---type 1 AGNs are offset to lower [\mbox{S\,{\sc ii}}]/{H$\alpha$}\ and [\mbox{N\,{\sc ii}}]/{H$\alpha$}\ ratios. However, they conclude that this offset between type 1 and type 2 AGNs is a selection effect rather than dust extinction. In this paper, we will explore the asymmetric behavior of the [\mbox{O\,{\sc iii}}]~$\lambda$$\lambda$4959,5007 lines in more detail, studying the origin of the [\mbox{O\,{\sc iii}}]\ asymmetry. In Section 2, we describe the sample selection and data analysis. We show the results in Section 3. In Section 4, the origin of the broad wing is discussed. Our conclusions are given in Section 5. Throughout this paper, a cosmology with {\slshape H}$_0$ = 70 $\rm{km~s}^{-1}$\ Mpc$^{-1}$, $\Omega_{\rm m} = 0.3$, and $\Omega_\Lambda = 0.7$ is adopted. \section{Sample and Data Analysis} \label{sect:sample} \subsection{The Sample} We begin with the galaxy sample of SDSS (\citealt{york2000}) seventh data release (DR7; \citealt{abazajian2009}) and select type 2 active galaxies based on the widely used BPT diagram (\citealt{baldwin1981}). The SDSS DR7 spectroscopic galaxy catalog contains $\sim$ 930,000 spectra taken through $3\arcsec$ diameter fibers in the primary redshift range $0 \la {\rm z} \la 0.3$.
Flux and equivalent width (EQW) of narrow emission lines (e.g., {H$\alpha$}, [\mbox{N\,{\sc ii}}]\ $\lambda$6583, [\mbox{O\,{\sc iii}}]\ $\lambda$5007, {H$\beta$}) as well as line indices D4000, $\rm H\delta_A$ and stellar mass have been publicly available since 2008 in the MPA/JHU catalog \footnote{The raw data files of this catalogue can be downloaded from http://www.mpa-garching.mpg.de/SDSS/DR7/}. The criteria used to select the parent type 2 AGN sample used in our analysis are the following: \begin{enumerate} \item { Redshift between $0.01 \leq z \leq 0.3$ and specPrimary $=$ 1. The lower redshift limit of 0.01 is applied to avoid the influence of peculiar velocity. SpecPrimary $=$ 1 deletes repeat observations from the sample.} \item { $\log([\mbox{O\,{\sc iii}}]/{\rm H}\beta) > 0.61/[\log([\mbox{N\,{\sc ii}}]/{\rm H}\alpha) - 0.47] + 1.19$ (the solid curve in Figure 2 from \citealt{kauffmann2003}), or $\log([\mbox{N\,{\sc ii}}]/{\rm H}\alpha) \geq 0.47$. For those objects with {H$\alpha$}, [\mbox{N\,{\sc ii}}], [\mbox{O\,{\sc iii}}], and {H$\beta$}\ emission lines detected with signal-to-noise ratio (S/N) $>$ 3, we separate type 2 AGNs from other sources using the emission line ratio diagnostics. } \item { The EQW of [\mbox{O\,{\sc iii}}]\ $\lambda$5007 emission line is smaller than $-$5 (negative EQW means emission) and the median S/N per pixel in the rest-frame wavelength range 4880-4920\AA\ and 5030-5070\AA\ (the continuum around [\mbox{O\,{\sc iii}}]) greater than 15. The high spectral quality requirement around [\mbox{O\,{\sc iii}}]\ region ensures reliable analysis of the line profile. } \end{enumerate} We refer to this sample hereafter as ``parent sample''. It contains 9,389 type 2 AGNs. Other parameters which would be used in this paper like stellar mass ($M_*$), absorption line indices (D4000 and $\rm H\delta_A$) and stellar velocity dispersion ($V_{\rm disp}$) are also provided in the MPA/JHU catalog. 
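Criterion (ii) by itself can be sketched as a vectorized cut (illustrative only; the redshift, EQW and S/N requirements of the other criteria would be applied separately):

```python
import numpy as np

def is_type2_agn(log_nii_ha, log_oiii_hb):
    """Criterion (ii): above the quoted demarcation curve, or on its
    right-hand branch where log([NII]/Halpha) >= 0.47."""
    x = np.asarray(log_nii_ha, dtype=float)
    y = np.asarray(log_oiii_hb, dtype=float)
    right_branch = x >= 0.47
    with np.errstate(divide="ignore"):  # the curve diverges at x = 0.47
        curve = 0.61 / (x - 0.47) + 1.19
    return right_branch | (~right_branch & (y > curve))

# one AGN-like point, one star-forming-like point, one on the right branch
flags = is_type2_agn([0.0, -0.5, 0.6], [1.0, -0.5, 0.0])
```

The cut operates on the logarithmic line ratios directly, so fluxes would first be converted to $\log([\mbox{N\,{\sc ii}}]/{\rm H}\alpha)$ and $\log([\mbox{O\,{\sc iii}}]/{\rm H}\beta)$.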
\subsection{Fitting the Stellar Continuum} The aim of this study is to use the [\mbox{O\,{\sc iii}}]\ emission line to probe outflows in the NLR. In order to get pure emission line spectra, we need to model the stellar continuum of each galaxy. As described in \citet{tremonti2004} and \citet{brinchmann2004}, the continua and absorption lines of each galaxy are fitted by a stellar population model. The basic assumption is that any galaxy star formation history can be approximated by a sum of discrete bursts. The library of template spectra is composed of single stellar population (SSP) models generated using a preliminary version of the population synthesis code of \citet{charlot2013}, including models of 10 different ages (0.005, 0.025, 0.1, 0.2, 0.6, 0.9, 1.4, 2.5, 5, and 10 Gyr) and four metallicities (0.004, 0.008, 0.017, and 0.04). For each metallicity, the ten template spectra with different ages are convolved to match the measured stellar velocity dispersion of the SDSS galaxy, and the best-fitting model spectrum is constructed from a non-negative linear combination of the ten template spectra, with dust attenuation modeled as an additional free parameter. The metallicity which yields the minimum $\chi^2$ is selected as the final best fit. The fitting results can be found on the SDSS-MPA Web site. \subsection{Fitting the Emission Lines} After subtracting the stellar continuum model, we use the following simple method to fit the [\mbox{O\,{\sc iii}}]\ $\lambda\lambda4959,5007$ emission lines. First, we use only one Gaussian to model each [\mbox{O\,{\sc iii}}]\ line (hereafter the single-Gaussian model); [\mbox{O\,{\sc iii}}]\ $\lambda4959$ is forced to have the same profile and shift as [\mbox{O\,{\sc iii}}]\ $\lambda5007$. We use the galaxy redshift from the SDSS pipeline to define the rest frame.
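A minimal sketch of such a tied single-Gaussian doublet fit is given below. The two lines share one velocity shift and one width; the fixed 1:3 flux ratio of [\mbox{O\,{\sc iii}}]\ $\lambda4959$ to $\lambda5007$ is our added assumption (the text ties only profile and shift), and the data here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

OIII_4959, OIII_5007 = 4958.91, 5006.84  # rest-frame air wavelengths (Angstrom)

def oiii_doublet(wave, amp, center5007, sigma):
    """Single-Gaussian doublet model: 4959 tied to 5007 in profile and
    fractional shift; the 1:3 flux ratio is an assumption for this sketch."""
    shift = center5007 / OIII_5007  # common fractional (velocity) shift
    gauss = lambda c: np.exp(-0.5 * ((wave - c) / sigma) ** 2)
    return amp * gauss(center5007) + (amp / 3.0) * gauss(OIII_4959 * shift)

# recover known parameters from a noiseless synthetic spectrum
wave = np.linspace(4900.0, 5100.0, 400)
truth = (10.0, 5008.0, 3.0)
flux = oiii_doublet(wave, *truth)
popt, _ = curve_fit(oiii_doublet, wave, flux, p0=(5.0, 5006.84, 2.0))
```

The double-Gaussian variant would simply add a second tied (core plus wing) pair of components per line.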
We also decompose each [\mbox{O\,{\sc iii}}]\ line into two Gaussians (hereafter the double-Gaussian model), a narrow core ($[\mbox{O\,{\sc iii}}]_{4959}^{\rm NC}\ \mbox{and}\ [\mbox{O\,{\sc iii}}]_{5007}^{\rm NC}$) and a broad wing ($[\mbox{O\,{\sc iii}}]_{4959}^{\rm BW}\ \mbox{and}\ [\mbox{O\,{\sc iii}}]_{5007}^{\rm BW}$). Each component of [\mbox{O\,{\sc iii}}]\ $\lambda4959$ is tied to the corresponding component of [\mbox{O\,{\sc iii}}]\ $\lambda5007$ in the same way as in the single-Gaussian model. The line center is limited to the range 4980-5050 \AA. We compare the reduced $\chi^2$ of the single-Gaussian and double-Gaussian models, and use the F-test \citep[chap. 12.1]{lupton1993} to calculate how significantly the fit is improved by the double-Gaussian model. Figure \ref{figF-test} shows the probability level ($\sigma_{\rm P}$) that the double-Gaussian model can improve the fit of emission lines, as a function of the improvement of $\chi^2$, which is defined as $(\chi^2_{\rm one}-\chi^2_{\rm two})/\chi^2_{\rm one}$. $\chi^2_{\rm one}$ is the reduced $\chi^2$ of the single-Gaussian model and $\chi^2_{\rm two}$ is the reduced $\chi^2$ of the double-Gaussian model. We select the 1,630 sources above the horizontal dashed line as our sample for studying outflows. These galaxies require two Gaussians at a significance greater than 8$\sigma$, with $\chi^2$ improvement greater than $\sim$65\%. \begin{figure} \includegraphics[angle=90,width=\textwidth]{ms1623fig1.ps} \caption{The probability level ($\sigma_{\rm P}$) of the improvement of the fit of emission lines.
We select the sources above the horizontal dashed line as our sample.} \label{figF-test} \end{figure} \section{Results} \label{sect:results} With the double-Gaussian model fitting of the 1,630 sources in the sample, we find that the velocity shifts of the core component, $V_{\rm off}^{\rm core} = (\lambda_{\rm core} - \lambda_0)/\lambda_0 \times c$, follow a Gaussian distribution over the range $-200 \sim 200$ $\rm{km~s}^{-1}$, with a median value of 8 $\rm{km~s}^{-1}$\ and 68\% of the total probability contained within the range $-37 \sim 53$ $\rm{km~s}^{-1}$. Here $\lambda_{\rm core}$ is the central wavelength of the core component, $\lambda_0 = 5006.84$ \AA\ is the rest-frame line center of [\mbox{O\,{\sc iii}}]\ in air, and $c$ is the speed of light. The distribution of velocity shifts of the wing component deviates strongly from a Gaussian, with a median shift of $-$72 $\rm{km~s}^{-1}$. We note that the pipeline redshift is determined from both the emission lines and the continuum. If the emission lines are blueshifted, then the pipeline redshifts tend to be underestimated. Here we re-determine the systemic redshift from the continuum and absorption lines: starting from the stellar continuum model given in section 2.2, we iteratively increase/decrease the systemic velocity by 5 $\rm{km~s}^{-1}$\ and re-calculate the $\chi^2$ value; the absorption-line redshift is the one with the lowest $\chi^2$ value. In the following sections, we use this absorption-line redshift to define the rest frame. Figure \ref{figexample} shows one example of the two-component fit: the black line is the observed emission-line spectrum, the broad wing and the narrow core are shown in blue and red, respectively, and the best-fit model is over-plotted in green. \begin{figure} \includegraphics[angle=0,width=\textwidth]{ms1623fig2.ps} \caption{ An example of the double-Gaussian fit. For each [\mbox{O\,{\sc iii}}]\ emission line, two Gaussian components are used.
The red represents the core component, and the blue one is the underlying broad wing. The best-fit model is shown in green. } \label{figexample} \end{figure} \subsection{Correlation between Wing and Core Flux} In Figure \ref{figc_w}, we show the correlation between the fluxes of the wing ($F_{\rm wing}$) and core ($F_{\rm core}$) of the [\mbox{O\,{\sc iii}}]\ $\lambda5007$ emission line for both the type 1 (red dots) and type 2 (black dots) AGN samples. The type 1 AGN sample contains 383 objects from \citet{zhang2011} with a redshift range of $0.01\leq z \leq 0.3$. The green line, ${\rm \log}F_{\rm wing} = (0.792\pm0.070) ~ {\rm \log}F_{\rm core} - (3.112\pm1.016)$, is the best linear least-squares fit for type 1 AGNs, with a Spearman rank-order correlation coefficient ($r_{\rm S}$) of 0.663, while the blue line, ${\rm \log}F_{\rm wing} = (0.724\pm0.035) ~ {\rm \log}F_{\rm core} - (3.964\pm0.496)$, is the fit for type 2 AGNs, with $r_{\rm S} = 0.701$. \citet{zhang2011} found that on average the core component comprises 54\% of the total emission, which is consistent with the 52\% contribution of the core component in our type 2 sample. A detailed explanation of this strong correlation between the fluxes of the core and wing components in both type 1 and type 2 AGNs is beyond the scope of the current paper; we will build a model to understand this correlation in a forthcoming paper. \begin{figure} \includegraphics[angle=90,width=\textwidth]{ms1623fig3.ps} \caption{ Correlation between the fluxes of the broad wing and narrow core of the [\mbox{O\,{\sc iii}}]\ $\lambda$5007 emission line in type 1 ($red$) and type 2 ($black$) AGNs. The Spearman rank-order correlation coefficients ($r_{\rm s}$) are 0.663 (0.701) for type 1 (type 2) AGNs. The best linear least-squares fits are shown: the green line for type 1s, and the blue long-dashed line for type 2s. The gray dashed line is the x = y line.
} \label{figc_w} \end{figure} \subsection{How $V_{\rm off}^{\rm wing}$ Depends on the Properties of Galaxies} Since a blue asymmetry of the [\mbox{O\,{\sc iii}}]\ profile generally suggests the existence of an outflow, we explore whether the shift of the broad wing is connected with the strength of AGN activity and star formation, which are the primary drivers of the outflow. We estimate the bolometric luminosity of the AGN as $L_{\rm bol} \approx 600\ \ensuremath{L_{\mathrm{[O {\tiny III}]}}}$ (\citealt{kauffmann2009}). \ensuremath{L_{\mathrm{[O {\tiny III}]}}}\ is the total luminosity of the wing and core, corrected for dust extinction using the Balmer decrement; the correlations in this section are independent of which [\mbox{O\,{\sc iii}}]\ luminosity we use, $\ensuremath{L_{\mathrm{[O {\tiny III}]}}}^{\rm wing}$, $\ensuremath{L_{\mathrm{[O {\tiny III}]}}}^{\rm core}$, or $\ensuremath{L_{\mathrm{[O {\tiny III}]}}}^{\rm wing} + \ensuremath{L_{\mathrm{[O {\tiny III}]}}}^{\rm core}$. The Eddington ratio is defined as $\lambda = L_{\rm bol}/L_{\rm Edd}$, where $L_{\rm Edd} \equiv 1.26~\times~10^{38}~ (M_{\rm BH}/M_\odot)~\rm{erg~s}^{-1}$. The mass of the central black hole is estimated from the well-known $M_{\rm BH} - \sigma_*$ relation (\citealt{ferrarese2000}; \citealt{gebhardt2000}) of the form $\log({M_{\rm BH}}/{M_\odot}) = 8.13 + 4.02 \log({\sigma_*}/{200})$ (\citealt{tremaine2002}), where $\sigma_*$ is the stellar velocity dispersion. Figure \ref{figproperty} shows the velocity shift of the wing component ($V_{\rm off}^{\rm wing}$) as a function of $L_{\rm bol}$, $\lambda$, the mass of the supermassive black hole ($M_{\rm BH}$), D4000, $\rm H\delta_A$ and stellar mass ($M_*$). The stellar mass ($M_*$) and absorption line indices (D4000 and $\rm H\delta_A$) are provided in the MPA/JHU catalog. The red lines show the median values.
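The derived quantities used in this subsection ($M_{\rm BH}$, $L_{\rm bol}$, and the Eddington ratio) follow directly from the quoted relations; a minimal Python sketch (the function names and example inputs are ours, not from the text):

```python
import math

def mbh_from_sigma(sigma_star):
    """Black hole mass in solar masses from the M_BH-sigma_* relation of
    Tremaine et al. (2002): log(M_BH/M_sun) = 8.13 + 4.02 log(sigma_*/200),
    with sigma_* in km/s."""
    return 10.0 ** (8.13 + 4.02 * math.log10(sigma_star / 200.0))

def eddington_ratio(l_oiii, sigma_star):
    """Eddington ratio lambda = L_bol / L_Edd, taking L_bol ~ 600 L_[OIII]
    and L_Edd = 1.26e38 (M_BH/M_sun) erg/s; l_oiii is in erg/s."""
    l_bol = 600.0 * l_oiii
    l_edd = 1.26e38 * mbh_from_sigma(sigma_star)
    return l_bol / l_edd
```

For instance, a galaxy with $\sigma_* = 200$ $\rm{km~s}^{-1}$\ has $M_{\rm BH} = 10^{8.13}\,M_\odot$ by construction of the relation.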
The correlation results are listed in Table \ref{tab-property}, where $r_{\rm S}$ is the Spearman rank-order correlation coefficient and $P_{\rm null}$ is the probability for the null hypothesis of no correlation. The high significance (i.e., the small values of $P_{\rm null}$) is due to the large number of sources in our sample. In summary, we have not found any distinct correlation between $V_{\rm off}^{\rm wing}$ and galaxy properties. Outflows are driven by both AGN and star formation activity. However, the contribution of AGN and star formation activity to the outflow varies from object to object. This leads to the lack of correlation in Figure \ref{figproperty}. The result is consistent with those of \citet{komossa2008} and \citet{zhang2011}. \begin{figure*} \begin{center} \includegraphics[angle=90,width=\textwidth]{ms1623fig4.ps} \end{center} \caption{ Velocity shifts of the wing component relative to the system velocity $V_{\rm off}^{\rm wing}$ versus $L_{\rm bol}$, $\lambda$, $M_{\rm BH}$, D4000, $\rm H\delta_A$ and $M_*$. Diamonds mark the position of the median value in each bin. } \label{figproperty} \end{figure*} \setcounter{table}{0} \begin{table*} \begin{minipage}[c]{\textwidth} \caption{Correlation Results between $V_{\rm off}^{\rm wing}$ and Galaxy Properties} \label{tab-property} \small \begin{center} \begin{tabular}{lcccccc} \\ \hline\hline correlation & $L_{\rm bol}$ & $\lambda$ & $M_{\rm BH}$ & D4000 & $\rm H\delta_A$ & $M_*$ \\ \hline\noalign{\smallskip} $r_{\rm S}$ & -0.184 & 0.010 & -0.083 & 0.097 & -0.204 & -0.083 \\ $P_{\rm null}$ & 8.9e-13 & 6.8e-1 & 1.3e-3 & 1.9e-4 & 1.9e-15 & 1.3e-3 \\ \hline\hline \end{tabular} \end{center} \end{minipage} \end{table*} \subsection{Comparison with Type 1 AGNs} In the standard AGN unified scheme, type 1 and type 2 AGNs are intrinsically the same objects, differing only in the obscuration by the dusty torus. The radiation originating from the accretion disc will push material out mostly along the rotation axis of the disc.
If the blue wing of [\mbox{O\,{\sc iii}}]\ is triggered by outflows, we should observe a higher velocity in type 1 than in type 2 AGNs due to the inclination effect. In order to avoid any evolution effect and make a fair comparison between type 1 and type 2 AGNs, we construct twin subsamples from the type 1 (\citealt{zhang2011}) and type 2 AGNs by matching their redshifts with a tolerance of $\Delta z$ = 0.004, so that the type 1 and type 2 subsamples have essentially the same redshift distribution. See the histograms in Figure \ref{figz}. The black, blue and red lines show the redshift distributions for the type 1 AGNs, the type 2 AGNs, and the matched twin sample, respectively. Through this redshift match, each subsample contains 264 objects. Figures \ref{figvoff}--\ref{figsig} show the distributions of the velocity offset and line width of the [\mbox{O\,{\sc iii}}]\ wing and core components for the twin subsamples; the median values are shown by vertical dashed lines, black for type 1 and blue for type 2 AGNs. \begin{figure} \includegraphics[angle=90,width=\textwidth]{ms1623fig5.ps} \caption{Redshift distributions of the type 1 ($black$) and type 2 ($blue$) AGN samples. The red histogram shows the redshift distribution of the twin sample.} \label{figz} \end{figure} In Fig. \ref{figvoff}, we show the distributions of the velocity offset relative to the system velocity, $V_{\rm off}^{\rm core}$ and $V_{\rm off}^{\rm wing}$. The system velocity is derived from the [\mbox{S\,{\sc ii}}]\ emission line for the type 1 sample and from stellar absorption lines for the type 2 sample. The median values of $V_{\rm off}^{\rm core}$ are $-11$ $\rm{km~s}^{-1}$\ and 6 $\rm{km~s}^{-1}$\ for the type 1 and type 2 subsamples, and the median values of $V_{\rm off}^{\rm wing}$ are $-162$ $\rm{km~s}^{-1}$\ and $-97$ $\rm{km~s}^{-1}$\ for the type 1 and type 2 subsamples, respectively.
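The twin-subsample construction can be sketched as a nearest-redshift matching; the text specifies only the tolerance $\Delta z = 0.004$, so the particular greedy scheme below (and all names in it) is our assumption:

```python
def match_by_redshift(z_type1, z_type2, tol=0.004):
    """Greedily pair each type 1 AGN with the nearest unused type 2 AGN
    within |dz| <= tol, returning a list of index pairs (i, j).
    The greedy strategy is our assumption; only the tolerance is from the text."""
    used = set()
    pairs = []
    for i, z1 in enumerate(z_type1):
        best_j, best_dz = None, tol
        for j, z2 in enumerate(z_type2):
            if j in used:
                continue
            dz = abs(z1 - z2)
            if dz <= best_dz:
                best_j, best_dz = j, dz
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs
```

By construction the two matched subsamples then have nearly identical redshift distributions, as in Figure \ref{figz}.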
\begin{figure} \includegraphics[angle=90,width=\textwidth]{ms1623fig6.ps} \caption{ Distributions of the velocity offset relative to the system redshift for the type 1 ($black$) and type 2 ($blue$) twin samples. The left panel shows the core component and the right panel the wing component. The median values are marked by a black vertical dashed line for type 1s and a blue vertical long-dashed line for type 2s. } \label{figvoff} \end{figure} In Fig. \ref{figsig}, we show the distributions of the line widths of the wing ($\sigma_{\rm wing}$) and core ($\sigma_{\rm core}$) components. The median values of $\sigma_{\rm core}$ are 138 $\rm{km~s}^{-1}$\ and 131 $\rm{km~s}^{-1}$\ for the type 1 and type 2 subsamples, and the median values of $\sigma_{\rm wing}$ are 393 $\rm{km~s}^{-1}$\ and 370 $\rm{km~s}^{-1}$\ for the type 1 and type 2 subsamples, respectively. There is essentially no difference in $\sigma_{\rm core}$ and $\sigma_{\rm wing}$ between type 1 and type 2 AGNs. \begin{figure} \includegraphics[angle=90,width=\textwidth]{ms1623fig7.ps} \caption{ Distributions of the line width $\sigma$ for the type 1 ($black$) and type 2 ($blue$) twin samples. The left panel shows the core component and the right panel the wing component. The median values are marked by a black vertical dashed line for type 1s and a blue vertical long-dashed line for type 2s. } \label{figsig} \end{figure} \section{Origin of the Broad Wing} In this section, we discuss the origin of the broad wing of [\mbox{O\,{\sc iii}}]\ $\lambda\lambda4959,5007$, including its location and physical mechanism, based on the observational results in section 3. \begin{enumerate} \item { \textsl{\textbf{Location}}. We derive the black hole mass ($M_{\rm BH}$) from the $M_{\rm BH}-\sigma_*$ relation.
At the same time, if we assume the region which generates the wing component is still dominated by the potential of the central super-massive black hole, we can estimate the location of the region ($R_{\rm wing}$) where the wing comes from as $R_{\rm wing} = G \frac{M_{\rm BH}}{f \Delta V^2}$, where $f$ is a scaling factor with a value of 3.85 (\citealt{collin2006}), $\Delta V$ is the emission line width (we set $\Delta V = \sigma_{\rm wing}$), and $G = 6.67384 \times 10^{-11}\ {\rm m}^3\ {\rm kg}^{-1}\ {\rm s}^{-2}$ is the gravitational constant. Finally, we get $R_{\rm wing}$ with a median value of ten pc for both type 1 and type 2 subsamples. We stress that the value of $R_{\rm wing}$ we derive should be a lower limit since the region where the blue wing originates is not virialized. } \item { \textsl{\textbf{Physical mechanism}}. In section 3.3, we find the wing component has a median of $V_{\rm off}^{\rm wing} = -162\ (-97)$ $\rm{km~s}^{-1}$\ for the type 1 (type 2) subsample. If we assume the velocity offset of the wing component originates from outflows which blow out in a direction perpendicular to the accretion disk, we would expect $V_{\rm off}^{\rm wing}$(type 2) = $V_{\rm off}^{\rm wing}$(type 1) $\times \cos \theta$, where $\theta$ is the opening angle of the torus, and $V_{\rm off}^{\rm wing}$(type 1) and $V_{\rm off}^{\rm wing}$(type 2) are the median velocity offsets of the wing component for type 1 and type 2 AGNs, respectively. Applying $V_{\rm off}^{\rm wing}$(type 1) = $-$162 $\rm{km~s}^{-1}$\ and $V_{\rm off}^{\rm wing}$(type 2) = $-$97 $\rm{km~s}^{-1}$, we get $\theta \sim 50 \degr$, consistent with results in the literature (e.g., \citealt{netzer1987}, \citealt{krolik1994}).} \end{enumerate} In addition, we have not found any distinct correlation between $V_{\rm off}^{\rm wing}$ and galaxy properties (D4000, $\rm H\delta_A$, stellar mass), as well as the physical properties of AGNs (bolometric luminosity, the Eddington ratio, black hole mass).
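The two numerical estimates above reduce to one-line formulas; a minimal Python sketch, where the constants and function names are our own choices:

```python
import math

G = 6.67384e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
PC = 3.0857e16           # m

def r_wing_pc(mbh_msun, sigma_wing_kms, f=3.85):
    """Virial-style size estimate R = G M_BH / (f dV^2), in parsecs,
    with dV = sigma_wing (in km/s) and the scale factor f of Collin et al."""
    dv = sigma_wing_kms * 1e3  # convert to m/s
    return G * mbh_msun * M_SUN / (f * dv * dv) / PC

def torus_opening_angle_deg(v_type1, v_type2):
    """Opening angle theta from V(type 2) = V(type 1) cos(theta),
    with both velocities in the same units."""
    return math.degrees(math.acos(v_type2 / v_type1))
```

With the quoted medians, `torus_opening_angle_deg(-162, -97)` evaluates to roughly 53 degrees, of the same order as the $\theta \sim 50 \degr$ quoted above.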
The result is consistent with several previous studies (e.g., \citealt{komossa2008}; \citealt{zhang2011}). If we accept a scenario in which the low-velocity gas in the core component is dominated by the gravity of the bulge, while the wing is more strongly influenced by the outflowing clouds of the active nucleus (e.g., \citealt{zamanov2002}; \citealt{greene2005}), the terminal outflow velocity would depend on the origin of the outflow on the one hand, and on the deceleration mechanism on the other. \citet{komossa2008} discussed several possibilities to explain the acceleration and entrainment of the NLR outflow, including radiation pressure, entrainment in radio jets, thermal winds, and high Eddington ratio. Thus the launching velocity of the outflow clouds is determined by different acceleration mechanisms and/or different stages of the AGN activity. Meanwhile, the NLR outflow is decelerated by the ISM of the host galaxy. A denser ISM results in more efficient deceleration, implying a lower velocity (\citealt{zhang2011}). In any case, both the acceleration mechanism and the column density of the NLR clouds can lead to different terminal velocities, thereby explaining the lack of correlation between the observed $V_{\rm off}^{\rm wing}$ and the physical properties of AGNs. \section{Conclusion} We select a type 2 AGN sample from SDSS DR7. In this sample, two Gaussian components are required to model the [\mbox{O\,{\sc iii}}]\ $\lambda$5007 emission line, a broad wing plus a narrow core. We measure the velocity shift (relative to the absorption lines), line width and flux of both components. Combining our type 2 AGN sample with a type 1 sample from \citet{zhang2011}, we find that: \begin{enumerate} \item {there is a tight correlation between the fluxes of the wing and core components in both the type 1 and type 2 samples.
In both samples, the flux of the wing component is roughly equal to that of the core component.} \item {in the unification scheme of AGNs, type 1 and type 2 AGNs are intrinsically the same; their different appearance arises because we observe them from different directions, with a dusty torus blocking the continuum source and broad-line region in type 2 AGNs. The difference in the velocity shift of the broad wing between type 1 and type 2 AGNs is consistent with a picture in which the broad wing originates from outflows that blow out in a direction perpendicular to the accretion disk with a certain opening angle.} \item {the velocity shift of the wing component has only weak, if any, correlations with the physical properties of AGNs (bolometric luminosity, the Eddington ratio and the mass of the supermassive black hole) and the host galaxies (D4000, $\rm H\delta_A$ or stellar mass). We suggest the lack of correlation arises because the outflow is driven by both AGN and star formation activity, whose relative contributions vary from object to object. Future IFU surveys such as MaNGA (Mapping Nearby Galaxies at APO) will help us determine whether AGN activity or star formation is the primary driver of the outflow in a given object.} \end{enumerate} \normalem \begin{acknowledgements} Zhixin Peng thanks Jing Wang for useful discussions. The research is supported by the National Natural Science Foundation of China (NSFC; Grant Nos. 11273015, 11133001 and 11003007), the National Basic Research Program (973 program No. 2013CB834905), and the Specialized Research Fund for the Doctoral Program of Higher Education (20100091110009). Funding for the creation and distribution of the SDSS Archive has been provided by the Alfred P.
Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the US Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is \url{http://www.sdss.org}. The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, Princeton University, the United States Naval Observatory and the University of Washington. \end{acknowledgements} \bibliographystyle{raa}
\section{Introduction} The defining property of hierarchical spin-glasses is that the Hamiltonian may be defined recursively in terms of the Hamiltonians of the two sub-systems formed by dividing the spins into two equally sized groups: \begin{equation} \label{eq:HM} H_{n}(s) = H_{n-1}(s_L) + H_{n-1}(s_R) + \epsilon_{n}(s) \,. \end{equation} Here $s$ denotes all the Ising spins $s_1, ..., s_{N}$, with $N=2^n$ constrained to be a power of 2, $s_L$ denotes the first half of the spins, $s_1, ..., s_{2^{n-1}}$, and $s_R$ the second half, $s_{2^{n-1}+1}, ..., s_{2^n}$. The $L,R$ notation stands for ``left'' and ``right'', and corresponds to the fact that the recursion structure is naturally associated with a balanced binary tree. The two sub-systems are coupled together through an interaction term $\epsilon_n(s)$ which specifies the model. Dyson was the first to study models of this form, introducing a ferromagnetic model known as the Dyson Hierarchical Model (DHM) \cite{dyson1969existence}, for which the interaction term is \begin{equation*} \epsilon_n(s) = - J \, C^{n} \left(\frac{1}{2^{n}} \sum_{i=1}^{2^{n}} s_i \right)^2 \,. \end{equation*} Here $J>0$ is a ferromagnetic coupling, and $C$ controls how the interaction strength scales with system size. At each level in the recursion, the two sub-systems are coupled together via the square of the magnetization of the combined system. The hierarchical structure greatly aids the analysis of this model, and in particular leads to the key result that the Wilsonian Renormalization Group (RG) equations are exact when applied to this model. Due to the exactness of the RG equations, the DHM and other hierarchical models are useful toy models for studying and further developing the RG. In particular, it has proven especially difficult to develop an RG theory for spin-glasses, particularly for non-mean-field systems.
As a result, spin-glass versions of hierarchical models have been introduced and used to develop a theory of renormalization for these systems which might extend to other, non-hierarchical spin-glasses.\footnote{For an excellent review of this subject, see \cite{castellana2013renormalization}.} For example, the Hierarchical Random Energy Model (HREM) \cite{castellana2010hierarchical, castellana2011real} is defined by taking the couplings $\epsilon_n(s)$ to be independent and identically distributed random variables (with no dependence on the spin-configuration). Another well-studied example is the Hierarchical Edwards-Anderson (HEA) model \cite{franz2009overlap, castellana2010renormalization, castellana2011renormalization, castellana2011real}, for which the interaction term is \begin{equation*} \epsilon_{n}(s) = - \frac{C^{2n}}{2^n} \sum_{i < j = 1}^{2^{n}} J_{ij} s_i s_j \,, \end{equation*} where the couplings are standard normal random variables, i.e. $J_{ij} \sim \mathcal{N}(0,1)$. In this work, a new hierarchical Ising spin model is introduced for which the coupling term $\epsilon_n(s)$ is simply the parity of the combined spins from the two sub-systems. The motivation for considering such an interaction is to develop a toy model of a spin system which is capable of exhibiting geometric frustration and large degeneracies, and which nonetheless exhibits a high degree of analytic and computational tractability. However, more work is needed to determine whether this model exhibits a proper spin-glass phase, and therefore we will refer to it as a spin system, as opposed to a spin-glass. This paper is organized as follows. In Section~\ref{sec:model} we introduce the model and derive its key recursion relations. In Section~\ref{sec:complexity} we discuss the computational complexity of the model, and show that $O(N)$ algorithms exist for computing both the partition function and the ground state. 
In Section~\ref{sec:widthsymmetric} we consider a special case of the model in which the couplings are width-symmetric (i.e., they are uniform within a given level of the hierarchy). When the couplings are furthermore taken to be equal across all levels of the hierarchy, the model is shown to exhibit a thermal phase transition. Finally, in Section~\ref{sec:discussion} we conclude with a discussion. The details of the $O(N)$ ground state algorithm are provided in Appendix~\ref{app:algorithm}, and Appendix~\ref{app:subparity} contains the derivation of the recursion relation for sub-parities (defined below). Lastly, a first step towards studying the model in the presence of disorder is taken in Appendix~\ref{sec:disorder}, where we show how automatic differentiation may be used to exactly compute thermodynamic quantities for finite system size. We have released the code used for some of the numerical analyses done in this work here: \url{https://github.com/gshartnett/hierarchical}. \section{The Hierarchical Parity Model \label{sec:model}} Like all hierarchical models, the hierarchical parity model may be defined recursively. At each step in the recursion, the Hamiltonian of the system is the sum of the left and right sub-system Hamiltonians, plus an interaction term that couples them together. It is convenient to work in terms of the associated balanced binary tree, depicted in Fig.~\ref{fig:binary_tree}. The coordinates of the nodes are $(k,p)$, with $k=0,...,n$ the ``height'' coordinate, measured from the leaf nodes with $k=0$ to the root node with $k=n$, and $p = 1, ..., 2^{n-k}$ the ``width'' coordinate. Using these coordinates, the defining recursion relation for the model is given by: \begin{equation} \label{eq:Hrecursive} H_{k,p} := H_{k-1, 2p-1} + H_{k-1, 2p} - J_{k, p} s_{[(p-1) 2^k + 1 : p 2^k]} \,. \end{equation} Here the notation $s_{[a:b]}$ denotes the product of spins $a$ through $b$, i.e.
$s_{[a:b]} := \prod_{i=a}^b s_i$, and $J_{k,p}$ are arbitrary couplings. The Hamiltonian of the root node corresponds to the full system, $H := H_{n,1}$, and the Hamiltonians of the leaf nodes are simply $H_{0,p} := - J_{0,p} s_p$. At each level in the recursion, two distinct spin-systems are coupled together through their overall parity. The Hamiltonian may also be defined non-recursively as the sum of a parity interaction associated with each node: \begin{equation} \label{eq:modeldefinition} H = -\sum_{k=0}^n \sum_{p=1}^{2^{n-k}} J_{k,p} s_{[(p-1) 2^k+1: p \, 2^k]} \,. \end{equation} To clarify the notation, the full Hamiltonian for $n=3$ is: \begin{align} H = &- J_{3,1} s_1 s_2 s_3 s_4 s_5 s_6 s_7 s_8 - J_{2,1} s_1 s_2 s_3 s_4 - J_{2,2} s_5 s_6 s_7 s_8 \nonumber \\ &- J_{1,1} s_1 s_2 - J_{1,2} s_3 s_4 - J_{1,3} s_5 s_6 - J_{1,4} s_7 s_8 \nonumber \\ &- J_{0,1} s_1 - J_{0,2} s_2 - J_{0,3} s_3 - J_{0,4} s_4 - J_{0,5} s_5 - J_{0,6} s_6 - J_{0,7} s_7 - J_{0,8} s_8 \,. \end{align} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{binary_tree.png} \caption{The binary tree corresponding to a system of $N=2^3=8$ spins, with an arbitrary spin configuration shown below. The nodes are given the coordinates $(k,p)$, with $k$ denoting the height of the tree (measured from the leaf nodes) and $p$ the width coordinate.} \label{fig:binary_tree} \end{figure} The recursive property also extends to the partition function.
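Before turning to the partition function, note that Eq.~\ref{eq:modeldefinition} translates directly into code; a minimal Python sketch, where the storage convention \texttt{J[k][p-1]} $= J_{k,p}$ is our own choice:

```python
def energy(s, J):
    """Energy of a spin configuration s (a list of +/-1 values of length
    N = 2^n) under the non-recursive definition of the Hamiltonian, with
    couplings stored as J[k][p-1] = J_{k,p} for k = 0..n."""
    n = len(J) - 1
    e = 0.0
    for k in range(n + 1):
        block = 2 ** k           # each parity term acts on a block of 2^k spins
        for p in range(2 ** (n - k)):
            parity = 1
            for i in range(p * block, (p + 1) * block):
                parity *= s[i]
            e -= J[k][p] * parity
    return e
```

For $n=1$, `energy([1, -1], [[1.0, 2.0], [0.5]])` reproduces the three terms $-J_{0,1}s_1 - J_{0,2}s_2 - J_{1,1}s_1 s_2$ by hand.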
Letting $Z_{k,p}$ denote the partition function associated with the $(k,p)$ sub-system with Hamiltonian $H_{k,p}$, \begin{equation} Z_{k,p} := \sum_{\{ s_{(p-1) 2^k + 1}, ..., s_{p 2^k} \}} e^{-\beta H_{k,p}} \,, \end{equation} it can be shown that \begin{align} \label{eq:partitionfunction} \ln Z_{k,p} &= \ln Z_{k-1, 2p-1} + \ln Z_{k-1, 2p} \\ &+ \ln \Big[\cosh( \beta J_{k,p}) + P_{k-1,2p-1} P_{k-1, 2p} \sinh(\beta J_{k,p}) \Big] \nonumber \,, \end{align} where \begin{align} P_{k,p} := \beta^{-1} \partial_{J_{k, p}} \ln Z_{k, p} = \langle s_{[(p-1)2^k + 1: p 2^k]} \rangle_{k,p} \end{align} is the expectation value of the parity of the spins associated with the sub-system $H_{k,p}$.\footnote{Note that the Boltzmann factor appearing in the thermal average is the one associated to the sub-system, $e^{-\beta H_{k,p}}$, and not the Boltzmann factor of the full system, $e^{-\beta H}$.} This expectation value can also be shown to satisfy a recursion relation: \begin{equation} \label{eq:parityrecursion} P_{k,p} = \frac{\sinh(\beta J_{k,p}) + \cosh(\beta J_{k,p}) P_{k-1, 2p-1} P_{k-1, 2p}}{\cosh(\beta J_{k,p}) + \sinh(\beta J_{k,p}) P_{k-1, 2p-1} P_{k-1, 2p}} \,. \end{equation} The initial conditions of the recursions are that ${Z_{0,p} = 2\cosh(\beta J_{0, p})}$, and ${P_{0,p} = \tanh(\beta J_{0,p})}$. $P_{k,p}$ is the thermal expectation value of the parity, which itself takes the values $\pm 1$. Therefore, Eq.~\ref{eq:parityrecursion} may be restated as a recursion relation for the parity probability distribution, rather than for the expectation value. Let \begin{equation} \mathbb{P}_{k,p}(P) := \sum_{\{ s_{(p-1) 2^k + 1}, ..., s_{p 2^k} \}} \frac{e^{-\beta H_{k,p}}}{Z_{k,p}} \delta_{P, s_{[(p-1)2^k + 1: p 2^k]}} \end{equation} be the probability mass function over the parity, with $P \in \{-1, 1\}$.
This relates to the expectation value via ${P_{k,p} = \mathbb{P}_{k,p}(1) - \mathbb{P}_{k,p}(-1) = 2 \mathbb{P}_{k,p}(1) - 1}$ (since ${\mathbb{P}_{k,p}(-1) = 1 - \mathbb{P}_{k,p}(1)}$). This distribution can be shown to satisfy the recursion: \begin{equation} \mathbb{P}_{k,p}(P) = \left(\frac{Z_{k-1,2p-1} Z_{k-1,2p}}{Z_{k,p}}\right) e^{\beta J_{k,p} P} \sum_{P_L, P_R \in \{-1,1\}} \mathbb{P}_{k-1,2p-1}(P_L) \mathbb{P}_{k-1,2p}(P_R) \delta_{P, P_L P_R} \,, \end{equation} where $P_{L,R}$ denote the parities of the left and right sub-systems. In fact, exact recursion relations can be defined for many more correlators. $P_{k,p}$ is the expectation value of the parity of all $2^k$ spins in the $(k,p)$ sub-system, and it is also interesting to consider the parity of subsets of these $2^k$ spins, also within the $(k,p)$ sub-system. Therefore, introduce the notation $P_{k,p}^{k',p'}$ to denote the parity of the $2^{k'}$ block of spins $s_{[(p'-1)2^{k'}+ 1: p' 2^{k'}]}$ within the $(k,p)$ sub-system, i.e. \begin{equation} P_{k,p}^{k',p'} := \beta^{-1} \partial_{J_{k',p'}} \ln Z_{k,p} \,. \end{equation} Note that for this expression to be sensible, $(k', p')$ must be the coordinates of a descendant node of the $(k,p)$ node. To give some examples, for $k'=k-1, p'=2p-1$, this corresponds to the expectation value of the parity of the left descendant spins. For $k'=k-2, p'=4p-3$, this corresponds to the left-left descendant spins, and so on. Finally, $k'=1$, $p' = 1$ corresponds to $\langle s_1 s_2 \rangle_{k,p}$. The recursion relation for $P_{k,p}^{k',p'}$ can be compactly written in terms of a path on the binary tree, at the expense of some additional notation. Let $\bm{n}$ denote the binary tree coordinates, and let $\bm{n}_0 = (k,p)$ denote the node of the sub-system in question, and $\bm{n}_L = (k',p')$ a child node of $\bm{n}_0$ corresponding to the spins of interest.
Let $\bm{n}_a$, $a=0,...,L$ denote the sequence of nodes corresponding to the unique length-$L$ path $\mathcal{P}(\bm{n}_0, \bm{n}_L)$ in the tree from the sub-system node $\bm{n}_0$ to the destination node $\bm{n}_L$, with $L = k - k'$. Then, the recursion relation is: \begin{align} \label{eq:subPrecursion} P_{\bm{n}_0}^{\bm{n}_L} = \beta^{-1} \partial_{J_{\bm{n}_L}} \sum_{\bm{n} \in \mathcal{P}(\bm{n}_0, \bm{n}_L)} & \ln \Big[ \cosh(\beta J_{\bm{n}}) + P_{\text{Left}(\bm{n})} P_{\text{Right}(\bm{n})} \sinh(\beta J_{\bm{n}}) \Big] \,. \end{align} Here, Left$(\bm{n})$, Right$(\bm{n})$ denote the left or right descendants of node $\bm{n}$. The expression in the sum depends implicitly on $J_{\bm{n}_L}$ via the left or right parity for all nodes in the path except the final one, in which case the dependence is explicit. It is important to note that $P_{k,p}^{k',p'}$ can only represent the parity of collections of spins which comprise all of the descendants of a particular node in the binary tree; therefore, many (but not all) of the correlation functions of the system are governed by exact recursion relations, making this model very amenable to RG analysis. Notably, the nature of the parity interactions allows these relations to be derived for \textit{arbitrary} couplings. A key source of physical intuition for the model is that it is capable of exhibiting geometric frustration at multiple scales. Consider an arbitrary $H_{k,p}$ sub-system: the top-level interaction encourages the parity of the $2^k$ sub-system spins to be aligned with the sign of the $J_{k,p}$ coupling. Similarly, the descendant interactions encourage the parities of the two $2^{k-1}$ sub-system spins to be aligned with the signs of their respective couplings, $J_{k-1,2p-1}$, $J_{k-1,2p}$. If these are not consistent with one another, i.e.
if \begin{equation} \label{eq:geometricfrustration} \text{sign}(J_{k,p}) \neq \text{sign}(J_{k-1,2p-1} J_{k-1,2p}) \,, \end{equation} then the system will exhibit a form of geometric frustration at the scale of $2^{k-1}$ spins. Moreover, there is a separate frustration condition for each of the $N-1$ $(k,p)$ sub-systems with $k = 1, ..., n$. Therefore, the system can exhibit frustration on multiple scales, provided that at least some of the couplings are anti-ferromagnetic (negative). Lastly, an important comment is in order regarding the single-spin couplings, $J_{0,i}$. If these are absent for a system with $N$ spins, then the system can be shown to be equivalent to multiple copies of $N/2$-spin systems. Concretely, if $s_1, ..., s_N$ are the spins of the original system with $J_{0,i} = 0$ $\forall i$, then by introducing the composite spin variables $S_j := s_{2j-1} s_{2j}$, with $j=1,...,N/2$, a new system with half the number of spins is formed which will have single-spin couplings given by the two-body couplings of the original system. For example, the two-body interaction term $J_{1,1} s_1 s_2$ in the original system will become $J'_{0,1} S_1$ in the new system, with $J'_{0,1} = J_{1,1}$. Moreover, each state in the new system with $N/2$ spins will occur with degeneracy $2^{N/2}$ in the original system with $N$ spins - as required by the fact that there are $2^N$ states in total. Thus, throughout this work we will assume that the single-spin couplings are not all zero, since otherwise the system could be effectively reduced to a simpler system of fewer spins. A consequence of the above is that the single-spin couplings should be thought of as a necessary ingredient in the model definition, and not as an external applied field. This stands in contrast to more standard spin-glass models, such as the Sherrington-Kirkpatrick model or the Edwards-Anderson model. 
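For reference, the recursions Eq.~\ref{eq:partitionfunction} and Eq.~\ref{eq:parityrecursion} can be evaluated by a single sweep from the leaves to the root; a minimal Python sketch, where the storage convention \texttt{J[k][p-1]} $= J_{k,p}$ is our own choice:

```python
import math

def log_z(J, beta):
    """ln Z of the full system via the partition function and parity
    recursions, swept from the leaves to the root. Couplings are stored
    as J[k][p-1] = J_{k,p}; the sweep costs O(N) operations."""
    # Initial conditions: Z_{0,p} = 2 cosh(beta J_{0,p}), P_{0,p} = tanh(beta J_{0,p})
    lz = [math.log(2.0 * math.cosh(beta * j)) for j in J[0]]
    par = [math.tanh(beta * j) for j in J[0]]
    for k in range(1, len(J)):
        lz_next, par_next = [], []
        for p, j in enumerate(J[k]):
            pl, pr = par[2 * p], par[2 * p + 1]   # left/right child parities
            c, s = math.cosh(beta * j), math.sinh(beta * j)
            lz_next.append(lz[2 * p] + lz[2 * p + 1] + math.log(c + pl * pr * s))
            par_next.append((s + c * pl * pr) / (c + s * pl * pr))
        lz, par = lz_next, par_next
    return lz[0]
```

For small $n$ the result can be checked against a brute-force sum over all $2^N$ configurations.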
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{factor_graph.png} \caption{The factor graph for the case $n=3$, corresponding to $N = 2^3 = 8$ spins. The square nodes represent interaction terms, and the circles represent the spin variables.} \label{fig:factorgraph} \end{figure} \section{Computational Tractability \label{sec:complexity}} The hierarchical structure is significant enough to render the model solvable, in the sense that many quantities of interest can be computed in $O(N)$ steps. \begin{prop} \label{prop:partition} The partition function is computable in $O(N)$ steps. \end{prop} \begin{proof} By construction. Associate each index pair $(k,p)$ to a node in a balanced binary tree. The recursion relation Eq.~\ref{eq:partitionfunction} allows for the partition function at a given node to be computed, provided the partition function and parity operators of the children nodes are also known. Similarly, Eq.~\ref{eq:parityrecursion} can be used to compute the parity operators of the children nodes in terms of the parities of their children. Each computation takes $O(1)$ steps, and so by carrying out the computation at each of the $2N-1$ nodes in order of increasing $k$ (i.e. from the leaves to the root), the entire partition function may be computed in $O(N)$ steps. \end{proof} A similar result holds for the problem of computing the ground state (lowest energy configuration) and its degeneracy. \begin{prop} \label{prop:groundstate} The ground state and its degeneracy are computable in $O(N)$ steps. \end{prop} \begin{proof} By construction. A general description of the algorithm is provided here; Appendix \ref{app:algorithm} contains a more formal pseudo-code description. Note that if the ground state is degenerate, only a single ground state will be returned by this procedure. 
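The leaf-to-root sweep in Proposition~\ref{prop:partition} can be sketched as follows. This is an illustrative reconstruction, assuming the node-level recursions $Z_{k,p} = Z_L Z_R [\cosh(\beta J_{k,p}) + P_L P_R \sinh(\beta J_{k,p})]$ and $P_{k,p} = [\sinh(\beta J_{k,p}) + P_L P_R \cosh(\beta J_{k,p})]/[\cosh(\beta J_{k,p}) + P_L P_R \sinh(\beta J_{k,p})]$, with leaf values $Z_{0,p} = 2\cosh(\beta J_{0,p})$, $P_{0,p} = \tanh(\beta J_{0,p})$ (consistent with the width-symmetric relations quoted later); the result is checked against brute-force enumeration.

```python
import itertools
import math
import random

def energy(spins, J):
    # Assumed Hamiltonian: H = -sum_{k,p} J_{k,p} * (parity of block (k,p))
    return -sum(Jkp * math.prod(spins[(p - 1) * 2**k : p * 2**k])
                for (k, p), Jkp in J.items())

def hierarchical_Z(n, J, beta):
    # Leaves: Z_{0,p} = 2 cosh(beta J_{0,p}),  P_{0,p} = tanh(beta J_{0,p})
    Z = [2 * math.cosh(beta * J[(0, p)]) for p in range(1, 2**n + 1)]
    P = [math.tanh(beta * J[(0, p)]) for p in range(1, 2**n + 1)]
    for k in range(1, n + 1):            # sweep from leaves to root
        Znew, Pnew = [], []
        for p in range(1, 2**(n - k) + 1):
            c, s = math.cosh(beta * J[(k, p)]), math.sinh(beta * J[(k, p)])
            PL, PR = P[2 * p - 2], P[2 * p - 1]   # children of node (k,p)
            Znew.append(Z[2 * p - 2] * Z[2 * p - 1] * (c + PL * PR * s))
            Pnew.append((s + PL * PR * c) / (c + PL * PR * s))
        Z, P = Znew, Pnew
    return Z[0], P[0]

random.seed(0)
n, beta = 3, 0.8
J = {(k, p): random.uniform(-1, 1)
     for k in range(n + 1) for p in range(1, 2**(n - k) + 1)}

Z_fast, P_fast = hierarchical_Z(n, J, beta)
states = list(itertools.product([-1, 1], repeat=2**n))
Z_brute = sum(math.exp(-beta * energy(s, J)) for s in states)
P_brute = sum(math.prod(s) * math.exp(-beta * energy(s, J))
              for s in states) / Z_brute
assert abs(Z_fast - Z_brute) < 1e-9 * Z_brute
assert abs(P_fast - P_brute) < 1e-9
```

The sweep visits each of the $2N-1$ nodes once with $O(1)$ work per node, in contrast to the $2^N$-term brute-force sum.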
The basic idea is that the hierarchical structure of the model allows for the ground state at each node in the binary tree to be computed in terms of the ground states of the left and right children nodes. First, it will be useful to separate the states according to their parity, $\pm 1$, which controls the sign of the interaction between the left and right sub-systems. Let $\bm{s}_{k,p}^{(0)\pm}$ indicate the lowest energy state of the $(k,p)$ sub-system with parity $\pm 1$. Note that these are the \textit{lowest} energy states, meaning that at least one but possibly both will be a ground state of the sub-system. In the event that there are multiple lowest energy states at each level, these will just correspond to a single representative state for each parity. For a given node $(k,p)$ in the binary tree the lowest energy parity $-1$ state will be one, or both, of $\bm{s}_{k-1,2p-1}^{(0)-} \parallel \bm{s}_{k-1,2p}^{(0)+}$, $\bm{s}_{k-1,2p-1}^{(0)+} \parallel \bm{s}_{k-1,2p}^{(0)-}$. Here $\parallel$ just means the concatenation of the left and right states. Similarly, the lowest energy parity $+1$ state will be one, or both, of $\bm{s}_{k-1,2p-1}^{(0)-} \parallel \bm{s}_{k-1,2p}^{(0)-}$, $\bm{s}_{k-1,2p-1}^{(0)+} \parallel \bm{s}_{k-1,2p}^{(0)+}$. By comparing the energies of these states, the lowest energy state of each parity may be found, together with its degeneracy. Therefore, the $\pm$ lowest-energy state of the full system may be computed recursively by keeping track of the $\pm$ lowest-energy state of each sub-system. The degeneracies may be similarly computed. This procedure is formalized in Algorithm~\ref{alg:groundstate}. Since there are $2N-1$ nodes to visit, and since each node requires $O(1)$ operations, this algorithm runs in $O(N)$ steps. \end{proof} The above results establish that the hierarchical parity model is computationally tractable for arbitrary couplings $J_{k,p}$. 
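The parity-sector recursion in the proof of Proposition~\ref{prop:groundstate} can be sketched in a few lines. This is an illustrative reconstruction (not the formal pseudo-code of Appendix~\ref{app:algorithm}), again assuming $H = -\sum_{k,p} J_{k,p} \times$ (block parity); each node stores the lowest energy and degeneracy of its even and odd parity sectors, and the key point is that the sector minimum of a parent is always built from sector minima of its children, since the energies are additive.

```python
import itertools
import math
import random

def ground_state_info(n, J):
    # best[p][parity] = (lowest energy, degeneracy) of that parity sector
    best = [{s: (-J[(0, p)] * s, 1) for s in (1, -1)}
            for p in range(1, 2**n + 1)]
    for k in range(1, n + 1):
        new = []
        for p in range(1, 2**(n - k) + 1):
            L, R = best[2 * p - 2], best[2 * p - 1]
            sector = {}
            for par in (1, -1):
                cands = {}
                for sl in (1, -1):          # child parities with sl*sr = par
                    sr = par * sl
                    E = L[sl][0] + R[sr][0] - J[(k, p)] * par
                    cands[E] = cands.get(E, 0) + L[sl][1] * R[sr][1]
                Emin = min(cands)
                sector[par] = (Emin, cands[Emin])
            new.append(sector)
        best = new
    (Ep, dp), (Em, dm) = best[0][1], best[0][-1]
    if Ep != Em:
        return min((Ep, dp), (Em, dm))
    return Ep, dp + dm                      # both parity sectors tie

random.seed(1)
n = 3
# +-1 couplings make degenerate, frustrated ground states likely
J = {(k, p): random.choice([-1.0, 1.0])
     for k in range(n + 1) for p in range(1, 2**(n - k) + 1)}

def energy(spins):
    return -sum(Jkp * math.prod(spins[(p - 1) * 2**k : p * 2**k])
                for (k, p), Jkp in J.items())

Es = [energy(s) for s in itertools.product([-1, 1], repeat=2**n)]
E0, deg0 = min(Es), Es.count(min(Es))
assert ground_state_info(n, J) == (E0, deg0)
```

Note that tracking both parity sectors is essential: as the uniform anti-ferromagnetic example later in the text shows, a parent's ground state can combine a child ground state with the other child's sector minimum even when the latter is only a first excited state globally.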
For general spin systems, computing the partition function or ground state takes an exponential number of steps, and therefore the hierarchical structure of the model allows for exponential speed-ups. The notion of a factor graph is often useful in the analysis of the computational tractability of spin systems and graphical models more generally. The factor graph is a bipartite graph with two types of nodes, variable (spin) nodes and factor (interaction) nodes. The factor graph for this model is depicted in Fig.~\ref{fig:factorgraph} for the case of $N = 2^3 = 8$ spins. When the factor graph is a tree (and thus has no loops), belief propagation may be used to efficiently solve a number of computational problems, including the calculation of marginal distributions of a single spin, the sampling of the Boltzmann distribution, and the calculation of the partition function \cite{mezard2009information}. However, as can be seen by direct inspection of Fig.~\ref{fig:factorgraph}, the factor graph in this case is not a tree, and therefore these results do not apply. Thus, Propositions~\ref{prop:partition} and \ref{prop:groundstate} are not merely consequences of well-known results of belief propagation. \section{Width-Symmetric Model \label{sec:widthsymmetric}} The above results demonstrate that the model is computationally tractable for arbitrary couplings. In this section, we will consider a special case which admits an analytically tractable large-$N$ limit. The width-symmetric model is obtained by restricting the couplings to be independent of the width index $p$, while allowing them to scale with the height index $k$: \begin{equation} J_{k,p} = J_k = 2^{k \sigma} J \,. \end{equation} Here $J$ is the bond strength parameter, with $J > 0$ corresponding to ferromagnetic interactions and $J <0$ to anti-ferromagnetic interactions. The scaling of the couplings with height in the binary tree is controlled by $\sigma$. 
The nature of the large-$N$ limit is determined by $\sigma$. If $\sigma < 0$, then the parity coupling between sub-systems will vanish in this limit, resulting in a free model. The case $\sigma = 0$ corresponds to the uniform model, where all couplings are equal. For $0 < \sigma < 1$ the couplings scale as a fractional power of the volume (which is $2^k$ for a level-$k$ sub-system). For other hierarchical models, such as the HEA or HREM, this is known as the non-mean-field regime. The existence of a tractable non-mean-field regime is one of the main motivations for studying hierarchical spin-glasses, as it is more physically relevant to real-world systems than the mean-field regime offered by more widely studied models such as the Sherrington-Kirkpatrick model \cite{sherrington1975solvable}. In contrast, for $\sigma = 1$ the couplings scale linearly with the volume of the system, corresponding to a mean-field system. Finally, for $\sigma > 1$ the couplings scale faster than linearly with the system volume. It is important to note that this mean-field terminology does not carry over directly to the present model, due to the non-local nature of the interactions. In particular, the existence of a phase transition will be demonstrated for $\sigma = 0$, and the ``non-mean-field" case $0 < \sigma < 1$ will turn out to be trivial. Furthermore, the ``mean-field" case $\sigma = 1$ does not even admit a well-defined thermodynamic limit. The model may be analyzed for different choices of the scaling parameter $\sigma$: the cases $\sigma < 0$, $\sigma = 0$, and $0 < \sigma < 1$ will be separately considered below. 
First, however, it is worth noting that as the couplings no longer depend on the ``width'' index $p$, the parity recursion relation Eq.~\ref{eq:parityrecursion} simplifies to \begin{equation} \label{eq:parityrecursion2} P_{k} = \frac{\sinh (\beta J_k) + P_{k-1}^2 \cosh (\beta J_k)}{\cosh (\beta J_k) + P_{k-1}^2 \sinh (\beta J_k)} \,, \end{equation} where $P_{k,p} = P_k$ for all $p$. The large-$N$ limit may be understood through the analysis of the fixed points of this recursion relation, which are denoted as $P_{\infty} := \lim_{k \rightarrow \infty} P_{k,1}$. The relation for the width-symmetric partition function $Z_k = Z_{k,p}$ also simplifies, allowing for the free energy density ${f_n := -\beta^{-1} 2^{-n} \ln Z_{n}}$ to be solved for in terms of a geometrically-weighted sum: \begin{equation} \label{eq:freeenergyrecursion} f_n = f_0 - \beta^{-1} \sum_{k=1}^{n} 2^{-k} \ln \left[ \cosh(\beta J_k) + P_{k-1}^2 \sinh(\beta J_k) \right] \,, \end{equation} where ${f_0 = - \beta^{-1} \ln \left[ 2 \cosh(\beta J) \right]}$. This series converges, and thus the large-$N$ limit is well-defined, if and only if $\sigma < 1$. \subsection{Free Model} We first consider the case $\sigma < 0$. This model can be seen to be free because the coupling between sub-systems vanishes in the large-$N$ limit. The parity expectation value can also be shown to be trivial in this case; the recursion relation becomes $P_{\infty} = P_{\infty}^2$, and so either $P_{\infty} = 0$ or $P_{\infty} = 1$. Expanding around these fixed points reveals that the former is stable, whereas the latter is unstable. \subsection{Uniform Model} Setting $\sigma = 0$ corresponds to a uniform model, where the couplings are all the same. As illustrated in Fig.~\ref{fig:factorgraph}, the interaction terms in the Hamiltonian can be associated with the nodes in a balanced binary tree (the black squares), and as such there are $2N-1$ such terms. 
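The behavior of the recursion Eq.~\ref{eq:parityrecursion2} in the different $\sigma$ regimes can be checked by direct iteration. The sketch below is illustrative; it assumes the leaf value $P_0 = \tanh(\beta J)$ (consistent with $f_0$ above) and uses $J_k = 2^{k\sigma} J$.

```python
import math

def parity_sequence(beta_J, sigma, kmax):
    # Iterate the width-symmetric parity recursion with J_k = 2^(k*sigma) * J,
    # starting from the assumed leaf value P_0 = tanh(beta*J).
    P = math.tanh(beta_J)
    for k in range(1, kmax + 1):
        x = beta_J * 2**(k * sigma)
        c, s = math.cosh(x), math.sinh(x)
        P = (s + P * P * c) / (c + P * P * s)
    return P

# sigma < 0: the coupling decays with height, so P_k -> 0 (free model)
assert abs(parity_sequence(1.0, -1.0, 200)) < 1e-6

# sigma = 0, beta*J = 2 above the critical coupling ln(2)/2: the parity locks
assert parity_sequence(2.0, 0.0, 200) > 0.999

# sigma = 0, beta*J = 0.2 below critical: P converges to a nontrivial
# fixed point of the recursion, strictly between 0 and 1
P = parity_sequence(0.2, 0.0, 400)
c, s = math.cosh(0.2), math.sinh(0.2)
assert 0 < P < 1 and abs(P - (s + P * P * c) / (c + P * P * s)) < 1e-9
```

The three asserts preview the three regimes analyzed below: the free model, the parity-locked uniform model, and the high-temperature uniform model with a nontrivial fixed point.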
For $\sigma = 0$, each term contributes with equal strength to the overall Hamiltonian: $N$ interactions involve a single spin, $N/2$ interactions involve 2 spins, $N/4$ interactions involve 4 spins, and so on, with just a single term involving the full system parity. \subsubsection{The Parity Recursion Relation} The fixed points of the parity recursion relation Eq.~\ref{eq:parityrecursion2} are given by the roots of a cubic polynomial, and are: $P_{\infty}^{(0)} = 1$ and \begin{equation} P_{\infty}^{(\pm)} = \frac{\text{coth}(\beta J) -1}{2} \pm \frac{\text{sgn}(\beta J)}{2} \sqrt{ (\text{coth}(\beta J) - 3) (\text{coth}(\beta J) + 1)} \,. \end{equation} The three roots become degenerate at $\beta = \beta_c$, with \begin{equation} \beta_c = \frac{\text{coth}^{-1}(3)}{J} = \frac{\ln 2}{2 J} \approx \frac{0.3466}{J} \,. \end{equation} It is convenient to consider the ferromagnetic and anti-ferromagnetic cases separately. For the ferromagnetic case with $J > 0$, the $P^{(\pm)}_{\infty}$ roots are complex for $\beta > \beta_c$, and they are real for $\beta < \beta_c$. However, in this case $P_{\infty}^{(+)}$ is greater than 1 and thus represents an unphysical solution for all temperatures, since the parity expectation value must lie in the interval $[-1,1]$. Moreover, the $P^{(0)}$ fixed-point is stable for $\beta > \beta_c$ and unstable for $\beta < \beta_c$, whereas the $P^{(\pm)}$ fixed-points are stable for $\beta < \beta_c$ and unstable for $\beta > \beta_c$. We therefore conclude that \begin{equation} P_{\infty} = \begin{cases} P_{\infty}^{(-)} &\mbox{if } \beta \le \beta_c \\ P_{\infty}^{(0)} & \mbox{if } \beta \ge \beta_c \end{cases} \,. 
\end{equation} Just above the critical temperature the correlation function behaves as \begin{equation} P_{\infty}^{(-)} = 1 - 2 \sqrt{2(\beta_c - \beta) J} + 4 (\beta_c - \beta) J + \mathcal{O}\left( (\beta_c - \beta)^{3/2} \right) \,, \end{equation} and therefore the parity expectation value is non-analytic around the critical temperature $\beta = \beta_c$, indicating a phase transition. In contrast, for the anti-ferromagnetic case with ${J < 0}$, all 3 fixed-points are real for all temperatures - although $P_{\infty}^{(+)} < -1$ and once again represents an unphysical solution. The $P^{(0)}$ fixed-point is always unstable, and the $P^{(\pm)}$ fixed-points are always stable. Therefore, the physically realized solution is $P_{\infty} = P_{\infty}^{(-)}$ and there is no non-analytic behavior. In order to understand the convergence of $P_{n,1}$ to the fixed point $P_{\infty}$, in Fig.~\ref{fig:Pkplot_Jboth} $P_{n,1}$ is depicted for $n=0,1,2,...,10$ (corresponding to $N=1, 2, 4, ..., 1024$) for both the anti-ferromagnetic and ferromagnetic cases. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{Pkplot_Jboth.pdf} \caption{The parity expectation value $P_{n}$ as a function of inverse temperature in the uniform model, for both the anti-ferromagnetic $(J < 0)$ and ferromagnetic $(J>0)$ models. To emphasize the relation between the two cases, the anti-ferromagnetic curve has been plotted for negative $\beta J$ values and the ferromagnetic curve has been plotted for positive $\beta J$ values. The thick dashed line shows the analytic result for $P_{\infty}$, and the vertical dashed line marks the phase transition.} \label{fig:Pkplot_Jboth} \end{figure} To summarize, the uniform ($\sigma = 0$) width-symmetric model exhibits a phase transition in the ferromagnetic case where $J > 0$. The order parameter is the expectation value of the parity of the full system, ${P_{\infty} = \lim_{n \rightarrow \infty} \langle s_1 ... s_{2^n} \rangle}$. 
This quantity is 1 in the low-temperature parity-locked phase, and lies in the interval $(0,1)$ in the high-temperature phase.\footnote{Technically, it is more accurate to call $1 - P_{\infty}$ the order parameter. However, this terminology seems counter-intuitive since it suggests that the parity-locked phase with $P_{\infty} = 1$ is ``disordered".} Importantly, there is no geometric frustration in this case as all the couplings are positive. In contrast, there is no phase transition in the anti-ferromagnetic model, and the parity is not locked for any finite temperature, although it does tend toward $-1$ as $\beta \rightarrow \infty$. The anti-ferromagnetic model does exhibit geometric frustration, leading to an extensive ground state degeneracy (see below). Unfortunately, although this model allows for many relations to be worked out, it does not seem possible to obtain a closed-form expression for the free energy density. This is because the sum in Eq.~\ref{eq:freeenergyrecursion} depends on $P_k$ for all $k$, and not just on the fixed points. The nature of the phase transition can be further probed by studying the fixed points of the sub-parities via Eq.~\ref{eq:subPrecursion}. Let $P_{\infty}^{(L)} := \lim_{n \rightarrow \infty} P_{n,1}^{k,p}$ denote the large-$N$ limit of the sub-parities in the uniform coupling case, with $L = n - k$ denoting the length of the path from the root node $(n,1)$ to the destination node $(k,p)$. As shown in the SI, the ferromagnetic parity-locked phase exhibits $P_{\infty}^{(L)} = P_{\infty} = 1$. All the sub-parities are 1, which means that every single spin is pointing up. Additionally, just above the transition temperature the parities obey $P_{\infty}^{(L)} \sim 1 - c_L \sqrt{(\beta_c - \beta)J}$, for some positive constant $c_L$. Thus, as the temperature is raised above the critical temperature the sub-parities become un-frozen at every scale. 
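The location of the critical point and the square-root behavior of $P_{\infty}^{(-)}$ quoted above can be verified numerically; the degeneracy of the three roots at $\beta_c$ follows from the exact identity $\tanh(\ln 2 / 2) = 1/3$. A minimal check, using the fixed-point formulas as given:

```python
import math

def coth(x):
    return math.cosh(x) / math.sinh(x)

J = 1.0
beta_c = math.log(2) / (2 * J)
# At the critical point the discriminant (coth - 3)(coth + 1) vanishes:
assert abs(coth(beta_c * J) - 3.0) < 1e-12

def P_minus(beta):
    # The P_infty^(-) root in the ferromagnetic case (sgn(beta*J) = +1)
    c = coth(beta * J)
    return (c - 1) / 2 - 0.5 * math.sqrt((c - 3) * (c + 1))

# Square-root approach to the locked value: 1 - P ~ 2 sqrt(2 (beta_c-beta) J)
for eps in (1e-4, 1e-6):
    ratio = (1 - P_minus(beta_c - eps)) / (2 * math.sqrt(2 * eps * J))
    assert abs(ratio - 1) < 0.05
```

The ratio tends to 1 as $\beta \rightarrow \beta_c^-$, confirming the leading $\sqrt{\beta_c - \beta}$ non-analyticity that signals the transition.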
\subsubsection{The Ground State Degeneracy} In Appendix~\ref{app:algorithm} an $O(N)$ algorithm for computing the ground state, energy, and degeneracy for a system of arbitrary couplings is presented. The uniform model represents a special case of the more general problem for which the algorithm may be simplified to a set of recursion relations, which we now derive. First, we note that the ferromagnetic model has a unique ground state corresponding to the all up configuration $s_i = 1$ $\forall i$. To calculate the ground state degeneracy of the anti-ferromagnetic model, first consider the system with $n=1$, corresponding to $N=2$ spins. The configurations $\{\downarrow\downarrow, \uparrow\downarrow, \downarrow\uparrow\}$ each have energy $-1$, while the configuration $\uparrow\uparrow$ has energy $3$ (setting $J=-1$ for convenience).\footnote{For brevity here we use $\downarrow$ to correspond to the $s=-1$ spin down state and $\uparrow$ to correspond to the $s=1$ spin up state.} So, the ground state degeneracy for level $n=1$ is $d_1 = 3$. Also, note that the first ground state, $\downarrow\downarrow$, has even parity, whereas the other two ground states have odd parity. Candidate ground states at level $n=2$ can be formed by concatenating pairs of these states. There are five even parity configurations: four are formed by joining two odd parity configurations, such as $(\uparrow\downarrow)(\downarrow\uparrow)$, and one is formed from two copies of the sole even parity configuration, $(\downarrow\downarrow)(\downarrow\downarrow)$. Each of these has energy $-1$. There are also four odd parity configurations of the form $(\downarrow\downarrow)(\uparrow \downarrow)$. These have energy $-3$, and so the ground state degeneracy at level $n=2$ is $d_2 = 4$. 
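The counts above can be confirmed by exhaustive enumeration on small systems. The sketch below assumes the Hamiltonian $H = -\sum_{k,p} J_{k,p} \times$ (block parity), which with $J_{k,p} = -1$ reproduces the energies quoted in the text (e.g. $-1$ for $\downarrow\downarrow$ and $3$ for $\uparrow\uparrow$ at $n=1$).

```python
import itertools
import math

def ground_degeneracy(n):
    # Uniform anti-ferromagnetic model: J_{k,p} = -1 on every node, so the
    # (assumed) H = -sum J_{k,p} * (block parity) = +sum of block parities.
    energies = []
    for s in itertools.product([-1, 1], repeat=2**n):
        E = sum(math.prod(s[(p - 1) * 2**k : p * 2**k])
                for k in range(n + 1) for p in range(1, 2**(n - k) + 1))
        energies.append(E)
    E0 = min(energies)
    return E0, energies.count(E0)

assert ground_degeneracy(1) == (-1, 3)   # d_1 = 3
assert ground_degeneracy(2) == (-3, 4)   # d_2 = 4
assert ground_degeneracy(3) == (-5, 56)  # d_3 = 16 even + 40 odd = 56
```

The $n=3$ value anticipates the count derived next in the text, including the subtle odd-parity contribution built from first excited states.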
In considering the ground state degeneracy at level $n=3$, even parity configurations can be formed by joining the ground states at level $n=2$; these will have energy $-3-3+1 = -5$ (two copies of a $-3$ energy state plus a parity penalty term of $+1$). At first glance this might seem to exhaust the ground states at this level, but because the energy cost of flipping the overall parity happens to be the same as the gap between the ground and first excited states of the previous level system, odd parity ground states at level $n=3$ can be formed by joining an odd parity $n=2$ ground state with an even parity $n=2$ first excited state, for example $(\downarrow\downarrow\uparrow\downarrow)(\downarrow\downarrow\downarrow\downarrow)$. These states will also have energy $-3-1-1 = -5$. The ground state degeneracy at level $n=3$ is therefore $d_3 = 56$, consisting of 16 even parity states plus 40 odd parity states. A simple set of recursion relations that captures this pattern is: \begin{equation} d_{n}^- = 2 \, d_{n-1}^- \, d_{n-1}^+ \,, \qquad d_n^+ = \begin{cases} (d_{n-1}^-)^2 & n \text{ odd} \\ (d_{n-1}^-)^2 + (d_{n-1}^+)^2 & n \text{ even} \end{cases} \,, \qquad d_n = \begin{cases} d_n^- + d_n^+ & n \text{ odd} \\ d_n^- & n \text{ even} \end{cases} \,. \end{equation} Here $d_n$ is the ground state degeneracy and $d_n^{\pm}$ are the degeneracies of the lowest odd/even parity states, respectively, which may or may not be themselves ground states. The recursion begins with all terms equal to 1 for $n=0$. The large-$N$ behavior of the degeneracy $d_n$ can be numerically inferred to be \begin{equation} d_n \sim A \, b^N \,, \end{equation} with $A$ an overall constant, and $b \approx 1.6234$. This corresponds to an extensive ground state degeneracy. \subsection{Parity-Locked Model} Lastly, for $0 < \sigma < 1$ the fixed point equation, Eq.~\ref{eq:parityrecursion}, becomes: \begin{equation} P_{\infty} = \text{sgn}(J) \,. 
\end{equation} As a result, the model is always in the ``parity-locked phase'', and there is no phase transition. \section{Discussion \label{sec:discussion}} In this work we have introduced a hierarchical parity model. Like all hierarchical models, the recursive property greatly facilitates an RG analysis, and in particular exact RG recursion relations may be derived for the parity of the full system, as well as for certain subsets of spins. Moreover, these relations hold in the presence of \textit{arbitrary} couplings. The model also admits $O(N)$ algorithms for computing the ground state and the partition function. There are other well-known computationally tractable examples of spin models with arbitrary couplings. For example, one important result is that 2-spin Ising models defined over planar lattices are solvable in polynomial time \cite{barahona1982computational}. This result can be extended to graphs of fixed genus $g$ (planar graphs have genus $g=0$) \cite{regge2000combinatorial, galluccio2000new}. Importantly, these results are for general couplings. If the couplings are homogeneous and ferromagnetic, then the partition function of the Ising model defined over a general graph $G$ can be recognized as the Tutte polynomial $T_G(x,y)$, evaluated at a particular point in $\mathbb{R}^2$ \cite{welsh1990computational}. Computing this is {\#P-hard} in general, except for a few special isolated points as well as all points lying on the hyperbola $H_1: (x-1)(y-1) = 1$, in which case it is computable in polynomial time. Notably, all of these examples involve models with 2-spin interaction terms, whereas Eq.~\ref{eq:modeldefinition} is characterized by interaction terms involving $2^k$ spins, with $k=0,...,n$. This work thus expands the set of Ising spin system geometries known to remain tractable even in the presence of frustration. 
In order to study the thermodynamic, large-$N$ limit of the model it is convenient to restrict the couplings to only depend on the height of the binary tree. For $J_{k,p} = 2^{k\sigma} J$, the model is trivially free for $\sigma < 0$, and ill-defined for $\sigma \ge 1$. The case $\sigma = 0$ corresponds to a uniform model where all the couplings are equal, and this model was shown to exhibit a thermal phase transition. The low-temperature phase is ``parity-locked''. Just above the transition, the parity correlators exhibit non-analyticity as they melt and become unfrozen. Finally, the case $0 < \sigma < 1$, which would correspond to a non-mean-field model in a hierarchical model with local interactions, turns out to be trivial in that the parities are always locked and there is no evidence of a phase transition. Lastly, one of the key motivations for studying hierarchical models such as this one is to better understand the Renormalization Group for complex disordered systems. As a first step towards this goal, in Appendix~\ref{sec:disorder} we utilize the recursion relation for the partition function and the ground state algorithm to compute key thermodynamic quantities of the model at finite system size. However, more work is needed to properly investigate whether the disordered model exhibits a spin-glass transition. \subsection*{Data Availability} Data sharing not applicable to this article as no datasets were generated or analyzed during the current study. \subsection*{Acknowledgments} This work grew out of an earlier collaboration with Masoud Mohseni, and I wish to acknowledge useful discussions with him throughout this project. I would also like to thank Edward Parker, Federico Ricci-Tersenghi, and two anonymous referees for their useful feedback on earlier versions of this manuscript.
\section{Introduction} The dynamic response of pure Ising systems to time dependent magnetic fields is currently being studied intensively \cite{rmp}. In particular, the response of Ising systems to pulsed fields has recently been investigated \cite{pos,pep1,pep2}. The pulse can be either ``positive'' or ``negative''. At temperatures $T$ below the critical temperature $T_c$ of the corresponding static case (without any external field), the majority of the spins orient themselves along a particular direction giving rise to the prevalent order. In the following, we denote by positive (or negative) pulse an external field pulse applied along (or opposite to) the direction of the existing order. The effects of a positive pulse can be analyzed by appropriately extending the finite size scaling technique to this finite time window case \cite{pos}, and it does not involve any new transition or introduce any new thermodynamic scale into the problem. The negative field pulse, on the other hand, induces a new dynamic ``magnetization-reversal'' transition, involving completely new length and time scales \cite{pep1,pep2}. In fact, we believe, the spontaneously occurring dynamic symmetry-breaking transition in Ising models under (high frequency) external oscillating fields \cite{rmp,rik} actually occurs during this ``negative'' pulse period, and not during the ``positive'' pulse period (positive and negative being defined relative to the instantaneous order existing in the system), and the universality classes of these two transitions are identical. We report here the results of an investigation on the nature of the characteristic length and time scales involved in this dynamic magnetization-reversal transition in an Ising model under the negative pulsed field. In the absence of any symmetry breaking field, for temperatures below the critical temperature of the corresponding static case ($T < T_c$), there are two equivalent free energy minima with average magnetizations $+m_0$ and $-m_0$. 
If in the ordered state the equilibrium magnetization is $+m_0$ (say) and a very weak pulse is applied in the direction opposite to the existing order, then temporarily during the pulse period the free energy minimum with magnetization $-m_0$ will be brought down compared to that with $+m_0$. If this asymmetry is made permanent, then any non-vanishing field strength responsible for the asymmetry would eventually induce a transition from $+m_0$ to $-m_0$ (even in the limit of vanishing field strength). Instead, if the field is applied in the form of a pulse, the asymmetry in the free energy wells is removed after a finite period of time. In that case, the point of interest lies in the combination of the pulse height or strength ($h_p$) and its width or duration ($\Delta t$) that can give rise to the transition from $+m_0$ to $-m_0$. We call this a magnetization-reversal transition. A crucial point about the transition is that the system need not attain its final equilibrium magnetization $-m_0$ during the presence of the pulse; the combination of $h_p$ and $\Delta t$ should be such that the final equilibrium state is attained at any subsequent time, even a long time after the pulse is withdrawn (see Fig. 1). The ``phase boundary'', giving the minimal combination of $h_p$ and $\Delta t$ necessary for the transition, depends on the temperature. As $T \rightarrow T_c$, the magnetization reversal transition occurs at lower values of $h_p$ and/or $\Delta t$ and the transition disappears at $T \ge T_c$. In the present paper we argue that this dynamic transition corresponds to infinite time and length scales, all along the phase boundary in the $h_p-\Delta t$ plane at any temperature $T < T_c$. We show that the relaxation time $\tau$ and the correlation length $\xi$ both diverge as one approaches the phase boundary. 
In the mean field case, we show (using equations of motion linearized in the magnetization) that $$\tau \sim \ln \left( \frac{1}{m_w} \right)~~~~~~ {\rm and} ~~~~~~~~~ \xi \sim \sqrt{\ln \left( \frac{1}{m_w}\right)}$$ where $m_w$ is the ``order-parameter'' for the transition, given by the magnetization at the time of withdrawal of the pulse, starting from $m_0$, the equilibrium magnetization at the temperature $T (< T_c)$ (see Fig. 1). It may be noted that $m_w(T, h_p, \Delta t) = 0$ at the phase boundary of the magnetization-reversal transition. We also show that $\xi$ and $\tau$ grow sharply as one approaches the phase boundary in the Monte Carlo case as well, although the nature of the growth is different from the mean field case. We also study the shapes and sizes of the reversed spin domains as one approaches the spin-reversal transition phase boundary in the Monte Carlo case. We compare the observed growth in the relaxation time in this case with that predicted by nucleation theory. The Ising model in the presence of an external magnetic field is described by the Hamiltonian \begin{equation} H=- \frac{1}{2} \sum _{(ij)}J_{ij}S_{i}S_{j}-\sum _{i}h_iS_{i}, \end{equation} where \( S_{i} \) denotes the spin at the \( i \)th site, \( J_{ij} \) is the cooperative interaction between the spins at sites $i$ and $j$ and $(ij)$ denotes nearest-neighbour pairs. Here $h_i$ is the external field, allowed to be time dependent, and also site-dependent to allow investigation of separation-dependent correlations. The free energy of the system in the Bragg-Williams approximation is given by\cite{brout} \begin{equation} F=-\frac{1}{2}\sum_{(ij)} J_{ij} m_im_j -\sum_i h_i m_i +\sum_i \frac{T}{2}[\ln (1-m_i^{2})+ m_i\ln \left( \frac{1+m_i}{1-m_i}\right)- 2\ln 2], \end{equation} with $m_i = \langle S_i \rangle$, where $\langle ... \rangle$ denotes the thermal average. 
In the presence of a time and site-dependent field, the time dependent magnetization satisfies the Langevin equation \begin{equation} \frac{dm_i}{dt} = - \frac {\lambda}{T} \frac{\delta F} {\delta m_i} = \lambda \left[ \sum_j K_{ij} m_j(t) + \frac {h_i(t)}{T} - \frac {1}{2} \ln \left(\frac {1+m_i(t)}{1-m_i(t)} \right)\right], \end{equation} where $K_{ij} = J_{ij}/T$ and $\lambda$ is a constant. Differentiating with respect to the space and time dependent magnetic field $h_i(t)$ generates the space and time dependent susceptibility. After the differentiation we can set $h_i(t) = h(t)$ to obtain results for a pulsed field uniform in space. Then $m_i(t) \rightarrow m(t)$ gives \begin{equation} \frac{dm(t)}{dt}=\lambda \left[ K(0)m(t)+\frac{h(t)}{T}-\frac{1}{2}\ln \left( \frac{1+m(t)}{1-m(t)}\right) \right]. \label{eq:d} \end{equation} The resulting equation for the susceptibility, in the Fourier space, is \begin{equation} \frac{d\chi _{q}(t)}{dt}=\lambda \left[ K(q)-\frac{1}{1-m^{2}(t)}\right] \chi _{q}(t)+\frac{\lambda}{T} \delta (t-t'). \label{eq:e} \end{equation} Here, $K(q)$ is the Fourier transform of $K_{ij}$; for small $q$, \( K(q) \simeq K(0)(1-q^{2}) \); in the mean field theory $K(0) = T_c/T$. Using (\ref{eq:d}) and (\ref{eq:e}), we can write \begin{equation} \frac{d\chi _{q}(t)}{dm(t)}=\frac{\left[ K(q)-\frac{1}{1-m^{2}(t)}\right] \chi _{q}(t)}{K(0)m(t)-\frac{h_{p}}{T}-\frac{1}{2}\ln \left[ \frac{1+m(t)} {1-m(t)}\right] }. \end{equation} In the limit when \( m(t) \) is small, retaining up to the linear term in $m(t)$, \begin{equation} \frac{d\chi _{q}(t)}{dm(t)}=\frac{[K(q)-1]\chi _{q}(t)} {[K(0)-1]m(t)-\frac{h_{p}}{T}}. \label{eq:g} \end{equation} This equation can now be solved in the three different time zones (Fig. 
1): namely, in the equilibrium regime before the application of the pulse where $m = m_0$ (regime I), the (nonequilibrium) pulsed period regime, at the end of which $m = m_w$ (regime II), and the regime after the pulse is withdrawn (regime III) when the system eventually returns to equilibrium (with $m(t \rightarrow \infty) = -m_0$ if the transition occurs, or $= m_0$ if it does not). Hence in regimes II and III, we get the non-equilibrium susceptibility $\chi_q$ as a function of $m(t)$. The solution of (\ref{eq:d}) also gives the non-equilibrium magnetization $m(t)$, and hence we can also arrive at $\chi_q (t)$. Noting that $ \chi _{q}(t)=\chi _{q}^{s} $ when \( m(t)=m_{0} \), at the start of regime II, where \( m_{0} \) and \( \chi ^{s}_{q} \) are equilibrium values of the magnetization and susceptibility respectively, we can integrate (\ref{eq:g}) in that regime to obtain \begin{equation} \frac{\chi _{q}(t)}{\chi _{q}^{s}}=\left[ \frac{m(t)-\Gamma } {m_{0}-\Gamma }\right] ^{a_{q}}, \label{eq:h} \end{equation} where $$\Gamma =\frac{h_{p}/T}{K(0)-1} \eqno (8a)$$ and $$a_{q}=\frac{K(q)-1}{K(0)-1}. \eqno (8b)$$ Also integrating the linearized version of (\ref{eq:d}) in regime II, one gets \begin{equation} m(t)=\Gamma +(m_{0}-\Gamma )\exp [\lambda b(t-t_{0})], \end{equation} where \( b=K(0)-1 \). At the end of regime II, the value of magnetization is given by \begin{equation} m_w = m(t_0+\Delta t) = \Gamma + (m_0 - \Gamma) \exp(\lambda b \Delta t). \end{equation} Eqn. (\ref{eq:h}) can therefore be written as \begin{equation} \frac{\chi _{q}(t)}{\chi _{q}^{s}}=\exp (\lambda ba_{q}t)=\exp [(K(q)-1)\lambda t]. \end{equation} In regime III, however, \( h(t)=0 \) and the (initial) boundary condition is \( m(t_{0}+\Delta t)=m_{w} \). 
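The linearized regime-II solution $m(t) = \Gamma + (m_0 - \Gamma)\exp[\lambda b (t - t_0)]$ can be checked against a direct numerical integration of the linearized equation of motion. The parameter values below are illustrative choices (not those used later in the paper), with $m_0$ taken small so that the linearization is self-consistent and $t_0 = 0$.

```python
import math

# Illustrative parameters: lam = 1, T = 1, K(0) = T_c/T = 2, so b = K(0)-1 = 1
lam, K0, T, h_p = 1.0, 2.0, 1.0, 0.5
b = K0 - 1.0
Gamma = (h_p / T) / (K0 - 1.0)   # Eq. (8a)
m0 = 0.1                         # small, so the linear regime applies

# Euler integration of the linearized regime-II equation,
# dm/dt = lam * ( [K(0)-1] m - h_p/T ),  cf. the denominator of Eq. (7)
dt, t, m = 1e-5, 0.0, m0
while t < 1.0:
    m += dt * lam * (b * m - h_p / T)
    t += dt

m_analytic = Gamma + (m0 - Gamma) * math.exp(lam * b * t)  # Eq. (9), t_0 = 0
assert abs(m - m_analytic) < 1e-3
```

With $m_0 < \Gamma$ the magnetization decreases exponentially away from $m_0$, which is the mechanism by which the negative pulse drives $m(t)$ toward zero during regime II.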
Integrating (\ref{eq:g}) in this regime one gets \[ \frac{\chi _{q}(t)}{\chi _{q}(t_{0}+\Delta t)}=\left[ \frac{m(t)}{m_{w}}\right] ^{a_{q}}\] or \begin{equation} \chi _{q}(t)=\chi _{q}^{s}\exp [\lambda (K(q)-1)(t_{0}+\Delta t)]\left[ \frac{m(t)}{m_{w}}\right] ^{a_{q}}, \end{equation} where use has been made of eqn. (\ref{eq:h}). Concentrating on the dominating \( q \)-dependence of the susceptibility, one can write \begin{equation} \chi _{q}(t)\sim \chi _{q}^{s}\exp [-q^{2}\xi ^{2}], \end{equation} where the correlation length \( \xi \) is defined as \begin{equation} \xi \equiv \xi (m_{w})=\left[ \frac{\ln (1/m_{w})}{1-T/T_{c}}\right] ^{\frac{1}{2}}. \label{eq:m} \end{equation} This is one of the principal results of this paper, and it shows that the characteristic length $\xi$ diverges as the order parameter $m_w$ goes to zero. Consider now the \( t \) dependence arising in \( \chi _{q=0}(t) \) through the factor \( m(t)^{a_{q}} \). Solving (\ref{eq:d}) in regime III yields \begin{equation} m(t)=m_{w}\exp [\lambda b\{t-(t_{0}+\Delta t)\}], \label{eq:n} \end{equation} which shows that a long time is required to attain moderate values of \( m(t) \) starting from low values of \( m_{w} \). In particular, starting from time \( t=t_{0}+\Delta t \), the time taken by the system to reach the final equilibrium value is defined as the relaxation time \( \tau \) of the system. Therefore from (\ref{eq:n}) we can write \begin{equation} \tau =\frac{1}{\lambda }\left( \frac{T}{T_{c}-T}\right) \ln \left( \frac{m_{0}}{m_{w}}\right). \label{eq:o} \end{equation} The growth of the time scale occurs in \( \chi _{q=0}(t) \) too through the \( m(t) \) dependence : $$\chi _{q=0}(t)\sim \left[ \frac{m(t)}{m_{w}}\right] ^{a_{q=0}}\sim \exp [\lambda b\{t-(t_{0}+\Delta t)\}].$$ Eqn. (\ref{eq:m}) and (\ref{eq:o}) can be used to establish a relationship between \( \tau \) and \( \xi \) : \begin{equation} \tau \sim \ln \left( \frac{1}{m_{w}}\right) \sim \frac{T}{T_{c}}\xi ^{2}. 
\label{eq:p} \end{equation} This corresponds to critical slowing down, with the characteristic time diverging with the characteristic length, with dynamical critical exponent \[ z=2.\] The above results are obtained in the linearized limit of the mean field eqns. of motion (\ref{eq:d}) and (\ref{eq:e}). We also measured the relaxation time $\tau$ by solving the full dynamical equation (\ref{eq:d}) numerically and computing the time required by $m(t)$ to reach the final equilibrium value $\pm m_0$, to an accuracy of $O(10^{-4})$, from the time of withdrawal of the pulse (in regime III). Fig. 2 shows that this $\tau$ indeed diverges as one approaches the phase boundary, where $m_w = 0$. In fact, the numerical results are observed to fit very well with the analytic result (\ref{eq:p}) (shown by the solid line in Fig. 2). The divergence of both the time and length scales was also investigated at low temperatures by employing Monte Carlo methods. Simulations \cite{pep2} on a square lattice of typical size $L=200$ with periodic boundary conditions indicated an exponential growth of the time scale: \begin{equation} \tau \sim \exp[-c(T) \mid m_w \mid], \label{eq:q} \end{equation} where $c(T)$ is a constant depending on temperature only. Further, finite size scaling of the order parameter relationship \begin{equation} m_w \sim~ \mid h_p - h_p^c \mid^\beta \end{equation} is consistent\cite{pep2} with $\beta = 0.90 \pm 0.02 $ and with a correlation length divergence with $\nu = 1.5 \pm 0.3$. (Here $h_p^c$ is the critical value of the pulse field $h_p$, making $m_w=0$ at the end of regime II.) These results compare qualitatively with the divergence of scales at the transition point predicted by the mean field treatment. However, the growths of the time and length scales are quantitatively different in nature from the mean field case, because at low temperatures droplet growth is a dominant mechanism.
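The three-regime protocol and the slowing down of the regime-III relaxation can be illustrated with a minimal numerical sketch. The kinetics below assume a standard Glauber-type mean-field form, $\lambda^{-1}\,dm/dt=-m+\tanh[(m+h(t))/T]$ with $T_c=1$ in these units; this particular form and all parameter values are illustrative assumptions, since eq. (\ref{eq:d}) is not reproduced in this excerpt:

```python
from math import tanh

def evolve(m, h, T, lam=1.0, dt=0.01, t_total=50.0):
    """Euler-integrate the assumed mean-field kinetics
    dm/dt = lam * (-m + tanh((m + h)/T)) for a time t_total."""
    for _ in range(int(t_total / dt)):
        m += dt * lam * (-m + tanh((m + h) / T))
    return m

def pulse_experiment(T, h_p, dt_pulse):
    """Regimes I-III: equilibrate at h = 0 (regime I), apply a field -h_p
    opposing the magnetization for a duration dt_pulse (regime II), then
    relax freely at h = 0 (regime III).  Returns (m0, m_w, m_final)."""
    m0 = evolve(0.5, 0.0, T)                      # regime I: m -> +m0
    m_w = evolve(m0, -h_p, T, t_total=dt_pulse)   # regime II: pulsed period
    m_final = evolve(m_w, 0.0, T, t_total=200.0)  # regime III: free relaxation
    return m0, m_w, m_final

T = 0.5                                 # below T_c = 1 in these units
m0, m_w, m_flip = pulse_experiment(T, h_p=2.0, dt_pulse=5.0)   # strong pulse
_, _, m_stay = pulse_experiment(T, h_p=0.1, dt_pulse=0.5)      # weak pulse
print(m0, m_w, m_flip, m_stay)
```

For the strong pulse, $m_w$ is driven well below zero and the system relaxes to $-m_0$ (the transition occurs); for the weak pulse it returns to $+m_0$. Tuning $h_p$ towards the boundary value $h_p^c$ drives $m_w\rightarrow0$, and the regime-III relaxation visibly slows down, as in eq. (\ref{eq:o}).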
The growth of droplets of size $l$ is associated with an activation energy\cite{pep2} $E(l) = -2h_pl^d + \sigma l^{d-1}$, where $\sigma$ is the surface tension. Maximising $E(l)$ gives a critical droplet size $l^{*}=\sigma (d-1)/(2dh_p)$, so that the barrier height scales as $E(l^{*})\propto h_p^{1-d}$; using this together with (18) at small $m_w$ gives a characteristic time \begin{equation} \tau ~~\sim ~~\exp\left[\frac{1}{T} h_p^{1-d}\right] ~~\sim ~~\exp[-c_1(T) \mid m_w \mid^{1/\beta} (h_p^c)^{d-2}]. \end{equation} Since $\beta$ is close to unity, this is consistent with the observed relation (\ref{eq:q}). The typical size of a cluster or domain of reversed spins provides a qualitative idea about the correlation length of the system. In order to study the growth of the typical reversed-spin domain size, we define a pseudo-correlation length $\tilde \xi $ as follows: \begin{equation} \tilde \xi^2 = \frac{\sum_s R_s^2 s^2 n_s}{\sum_s s^2 n_s}, \end{equation} where $n_s$ is the number of domains or clusters of size $s$ and the radius of gyration $R_s$ is defined as $ R_s^2 = \sum_{i=1}^s \mid r_i - r_0 \mid^2/s $, where $r_i$ is the position vector of the $i$th spin of the cluster and $ r_0 = \sum^s_{i=1} (r_i/s)$ is defined as the centre of mass of the particular cluster. The pseudo-correlation length $\tilde \xi$ is observed to grow to the order of the system size as one approaches the phase boundary (Fig. 3), thereby providing further indication of the growth of a length scale. It should be noted that, as in the static transition in the pure Ising system, the length $\tilde \xi$ is distinct from the correlation length \cite{robin}. In the linear limit of the mean field dynamics, it has been possible to show the divergence of both the length and time scales at the magnetization-reversal transition phase boundary. Sharp growth of these scales has also been observed in the Monte Carlo case, studied in two dimensions.
Here, we looked at the size distribution of the clusters or domains of reversed spins whose average size was observed to grow at the phase boundary of the transition. AM would like to thank A. Dutta for useful discussions. BKC is grateful to the INSA-Royal Society Exchange Program for supporting his visit to the Department of Physics, University of Oxford, UK, where part of the work was done.
\section{Introduction} \label{introduction} Radio galaxies appear to come in two fundamentally different types, as encapsulated in the Fanaroff and Riley classification scheme \citep{fr74}. FR\,I sources have bright cores and edge-darkened lobes, while FR\,II sources are edge-brightened with hotspots at the end. These morphological differences suggest that the interactions between the radio jets and their environments are very different in the two classes, and that their evolutionary tracks may also be quite distinct. FR\,II sources are powerful radio sources with fairly homogeneous morphologies. They contain highly relativistic jets extending from the central AGN to very bright hotspots surrounded by low surface brightness lobes. Dynamical models for FR\,II sources are quite successful, indicating that the lobes expand in a self-similar way \citep[][hereafter KA97]{falle91,ka97}. Based on this, a range of radio emission models have been developed, and these models allow FR\,II sources to be tracked through the power-linear size (P-D) diagram \citep{kda97,brw99, mk02}. FR\,I sources are much more common than FR\,IIs \citep{parma02}, but they are also more complex and have only one common feature: no hotspot at the outer end of the jet. About half of the FR\,I sources show a \textit{fat double} morphology similar to FR\,II lobes, while the rest inflate turbulent lobes after passing through a so-called brightening point, with plumes or tails at the end \citep{ol89, ow91, parma02}. Modeling FR\,I sources is difficult, as it is hard to describe all types of FR\,I sources with a single model. \citet{bick94} tried to model the FR\,I sources by relativistic conservation laws, and \citet{lb02a} studied 3C\,31 in detail. \citet[][hereafter W09]{wang09} adopted the mixing-layer structure from \citet[][hereafter CR91]{cr91} and built an analytical model which could explain the observational behavior of typical tailed FR\,I sources. 
Generally speaking, FR\,IIs are more powerful than FR\,Is, with a transition radio luminosity around $P_{178\textrm{MHz}}\sim10^{25}$W\,Hz$^{-1}$\,sr$^{-1}$. A transition luminosity also exists in the optical band \citep{ol94}. The value of the transition luminosity is not precise, as it also depends on the properties of the host galaxies \citep{lo96}. The origin of the \textit{FR\,I/II dichotomy} has been discussed extensively in the literature. Studies of compact steep-spectrum sources (CSS) suggested that these objects are typically young and may generally evolve into large-scale radio sources \citep{fanti95}. Among these CSS sources, FR\,Is and FR\,IIs may have different progenitors due to different powers and environments \citep{alexander00, kunert05}. However, the transition from FR\,IIs to FR\,Is can also possibly take place at a later stage of the jet evolution under certain circumstances \citep{falle91,bicknell95,kb07}. The instabilities of jets have been studied in a number of numerical simulations, which find that the jet instability evolution and the large-scale jet morphology are mainly determined by the jet Lorentz factor \citep{perucho04b, perucho05} and the ambient density profile \citep{rossi08, meliani08}. However, these numerical simulations concentrate on the earlier stages of jet evolution, when the lobe structures around the central relativistic jets have not yet developed fully and differ from those at later evolutionary stages. Meanwhile, the initial setup of the simulations (e.g. the radial resolution) may also affect the simulation results \citep{perucho04b}. Therefore, analytical models describing the evolution of jet stability are desirable for studying jets at the late stage of their evolution.
Taking the idea of the relativistic mixing layer model developed by W09, we can investigate more precisely how the central jets of FR\,II sources are eroded by the entrainment due to the interactions with their surrounding lobes. More specifically, we describe the entrainment process by embedding the W09 mixing-layer model for FR\,I jets in a simple, self-similar model of an FR\,II radio lobe, and monitor how the central jet is gradually eroded by the growing turbulent shear layer at the interface between the jet and the lobe. The goal of this paper is to study whether entrainment can play an important role in determining the maximum size of an FR\,II jet at the late stage of its evolution. Our basic model for a central jet and shear layer embedded in an FR\,II radio lobe is developed in Section \ref{model_development}. The maximum size of an FR\,II jet is calculated and discussed in Section \ref{maximum_size}. We also argue that the resulting \textit{dead} FR\,IIs will ultimately re-emerge as FR\,I radio sources. A brief sketch of this transition process is provided in Section \ref{model_transition}. Finally, in Section \ref{conclusion}, we summarise our conclusions and outline future work suggested by our model. \section{Model development} \label{model_development} \begin{figure} \includegraphics[width=0.5\textwidth]{unified} \caption{Sketch of the evolution of a radio outflow. At $t_{0}$, the young outflow shows an FR\,II morphology. At $t_{1}$, the outflow is still in the FR\,II phase, while the shear layer has already grown. At $t_{2}$, the hotspot vanishes and the outflow will transform into the FR\,I stage after this age.} \label{cartoon} \end{figure} We describe FR\,II objects by embedding a highly relativistic central jet inside a surrounding radio-emitting lobe. Although the lobe density is thought to be very low, we assume that a turbulent shear layer may nevertheless form at the jet-lobe interface. This shear layer will entrain and mix material from both regions.
This entrainment will gradually erode the central jet. More specifically, once all of the highly relativistic material in the central jet has been mixed with the lobe material in the shear layer, the central jet is completely destroyed. Previous work suggests that the hotspot is a very compact, high-pressure region that gives rise to strong radio emission \citep{scheck02}. The highly relativistic central jet may play an important role in energising the hotspot, as a large amount of energy is injected into a small area. Meanwhile, the bulk velocity of the material in the shear layer is not as fast as that in the central jet. Although the shear layer is still supersonic and can form weak shocks and working surfaces, it is not powerful or concentrated enough to support a hotspot. Thus, we assume that as the central jet is gradually eroded, the hotspot weakens. When the central jet is totally destroyed at a certain stage of FR\,II evolution, the hotspot vanishes at the same time. This is in agreement with observations which indicate that the hotspot luminosity decreases with the linear size of the FR\,II source \citep{pm03}. When the hotspot vanishes, the object will cease to be a \textit{proper} FR\,II and will most likely resemble a lobed FR\,I. A sketch of this process is shown in Figure \ref{cartoon}. Most analytical FR\,II models suggest the lobe is formed by the particles injected from the central jet through the hotspot \citep{falle91}. Although numerical simulations show that there is efficient mixing between the lobe and the shocked ambient medium \citep[e.g.][]{scheck02}, these simulations are not long enough and only represent a relatively early stage of lobe evolution. X-ray inverse Compton measurements of FR\,II radio lobes show that the strength of the magnetic field in the lobe is close to the equipartition value, which suggests that FR\,II lobes do not contain an energetically dominant proton population \citep{ks05, croston05}.
Meanwhile, the lobe internal pressures are in good agreement with the environmental pressures, which suggests that there is no need for substantial mixing to provide the required pressures \citep{Belsole07}. This evidence indicates that the interactions between FR\,II lobes and their environments on large scales are not significant. Therefore, as our model here is based on previous analytical models and we are only considering FR\,IIs at a late stage of their evolution, we neglect the mixing and assume all the lobe material comes from the central relativistic jet. As the lobe occupies a much bigger space, the density in the lobe is much lower than that of the central relativistic jet. The AGN active time is thought to be around a few $10^{8}$\,yr, and the maximum size of FR\,II objects is observed to be a few Mpc. In order to decide whether the interaction between the jet and the lobe can ever be a significant factor in the evolution of an FR\,II, we therefore need to consider whether entrainment could conceivably destroy the central jet on this time and/or length scale. The mixing-layer model for FR\,I objects developed by W09 describes the interaction between a laminar jet and its environment. It also predicts the position where the laminar jet disappears. We can therefore apply the same model to the FR\,II case and study the interaction between a jet and its lobe in the relativistic limit. This allows us to place interesting limits on the maximum sizes of FR\,II sources due to entrainment. In this section, we will first outline the analytical FR\,II lobe model and the W09 mixing-layer model, and then present the results with typical values for the parameters. \subsection{The self-similar model for FR\,II lobes} KA97 and \citet{kda97} have established a successful model for FR\,II radio sources describing their dynamics and evolution. In this section, we summarise the important features of the model.
KA97 follow the basic dynamical picture proposed by \citet{scheuer74} and \citet{falle91}, assuming that the laminar jets will end in strong shocks where the electrons are accelerated. The electrons pass through the shocks and subsequently inflate a lobe with a uniformly distributed pressure. The jet is in pressure-equilibrium with the lobe, and KA97 showed that the lobe then expands in a self-similar way. The evolution of the lobe size is determined by a balance of the ram pressure of the lobe material and that of the medium surrounding the host galaxy, which is pushed aside by the jet. The density distribution outside the core radius, $a_{0}$, is approximated by a power-law, $\rho(x)=\rho_{0}a_{0}^{\alpha}x^{-\alpha}$, where $x$ is the radial distance from the central AGN and $\rho_{0}$ is the density at the core radius, $a_{0}$. KA97 suggested that, for typical radio galaxies, $\rho_{0}=7.2\times10^{-22}$\,kg\,m$^{-3}$ at $a_{0}=2$\,kpc. These values may vary for different sources, but we will later show that the precise numbers here are not important in our model. The exponent $0<\alpha\le2$ is constrained by both theory and observations. X-ray observations find that the exponents for most clusters are close to 1.5 \citep{vikhlinin06, croston08}, so we adopt $\alpha=1.5$ for the moment and will discuss the effects of adopting different values later in Section 3.2.1. Having set the density profile above, we can express the length of the lobe by: \begin{equation} L_{j}=c_{1}(Q_{0}t^{3}/\Lambda)^{\frac{1}{5-\alpha}}, \end{equation} where $Q_{0}$ is the jet power, $\Lambda=\rho_{0}a_{0}^{\alpha}$, $t$ is the jet age and $c_{1}$ is a constant given by equation (25) in KA97. The pressure of the lobe also evolves with the jet age and can be written as: \begin{equation} p_{c}=\frac{18c_{1}^{2-\alpha}}{(\Gamma_{x}+1)(5-\alpha)^{2}4R_{T}^{2}}\Lambda^{\frac{3}{5-\alpha}}Q_{0}^{\frac{2-\alpha}{5-\alpha}}t^{\frac{-4-\alpha}{5-\alpha}}.
\end{equation} $\Gamma_{x}$ is the adiabatic index of the external medium, which is set to 5/3 here. $R_{T}$ is the axial ratio, which typically lies between 1.3 and 6, with an average value of 2 \citep{lw84}. For simplicity, we adopt this value initially and will discuss the effect of adopting different values in Section 3.2.2. As the jet grows in a self-similar way, we can express the volume of the lobe by $V=\pi L_{j}^{3}/(4R_{T}^{2})$. The particles injected into the jet are believed to be highly relativistic, so the rest mass injected into the lobe is given by $m_{0}=Q_{0}t/(c^{2}\gamma_{j})$, where $\gamma_{j}$ is the Lorentz factor of the injected particles. With the expressions for $V$ and $m_{0}$, the density in the lobe is given by: \begin{equation} \rho_{c}=\frac{m_{0}}{V}=\frac{4R_{T}^{2}}{\pi c^{2}\gamma_{j} c_{1}^{3}}\Lambda^{\frac{3}{5-\alpha}}Q_{0}^{\frac{2-\alpha}{5-\alpha}}t^{\frac{-4-\alpha}{5-\alpha}}. \end{equation} We refer the reader to KA97 for a detailed derivation and explanation of the equations above. \subsection{Entrainment and the mixing-layer model} W09 model FR\,I sources with a mixing-layer structure in which a laminar jet interacts with its environment by forming a turbulent mixing layer at the interface between the two regions. This growing shear layer continuously entrains and mixes material from the jet and its environment, until finally the laminar core has been completely eroded and disappears. The structure of the different layers is then determined by using relativistic fluid mechanics and applying the relativistic conservation laws of mass, momentum and energy. In this paper, we borrow this basic picture to estimate under what conditions the central jet of an FR\,II object may disappear. We assume that an FR\,II object evolves following the KA97 model. Its central jet is therefore embedded inside the lobe and is presumably subject to entrainment from the lobe.
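As a quick numerical check on the scalings in eqs. (1) and (3), the sketch below evaluates the lobe length and lobe density for the typical values quoted above ($\rho_0=7.2\times10^{-22}$\,kg\,m$^{-3}$ at $a_0=2$\,kpc, $\alpha=1.5$, $R_T=2$). The constant $c_1$ (eq. 25 of KA97) is not reproduced here and is assumed to be of order unity, and the jet power and Lorentz factor are illustrative assumptions, so the numbers are order-of-magnitude only:

```python
from math import pi

KPC = 3.086e19   # m
YR = 3.156e7     # s
C = 2.998e8      # m s^-1

alpha = 1.5
rho0, a0 = 7.2e-22, 2.0 * KPC      # kg m^-3 at the core radius (KA97 values)
Lam = rho0 * a0**alpha             # Lambda = rho_0 a_0^alpha
c1 = 1.0                           # KA97 eq. (25) constant, assumed ~1
Q0 = 1.3e39                        # jet power in W (illustrative)
R_T, gamma_j = 2.0, 10.0

def lobe_length_kpc(t_yr):
    """Eq. (1): L_j = c1 (Q0 t^3 / Lambda)^(1/(5 - alpha)), in kpc."""
    t = t_yr * YR
    return c1 * (Q0 * t**3 / Lam) ** (1.0 / (5.0 - alpha)) / KPC

def lobe_density(t_yr):
    """Eq. (3): rho_c = 4 R_T^2 Lambda^(3/(5-alpha)) Q0^((2-alpha)/(5-alpha))
    t^((-4-alpha)/(5-alpha)) / (pi c^2 gamma_j c1^3), in kg m^-3."""
    t = t_yr * YR
    return (4.0 * R_T**2 / (pi * C**2 * gamma_j * c1**3)
            * Lam ** (3.0 / (5.0 - alpha))
            * Q0 ** ((2.0 - alpha) / (5.0 - alpha))
            * t ** ((-4.0 - alpha) / (5.0 - alpha)))

print(lobe_length_kpc(1e7), lobe_density(1e7))
```

With these numbers the lobe reaches a few tens of kpc after $10^7$\,yr, and the lobe density comes out several orders of magnitude below the ambient $\rho_0$, consistent with the statement above that the lobe is much more tenuous than the central jet and its surroundings.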
The interaction of the jet with the lobe, and its subsequent evolution, are described by the relativistic mixing-layer model from W09. It is important to note that in the case of FR\,II type objects, the central jets are not in direct contact with the environment, in contrast to the model for FR\,Is presented in W09. Unlike the external medium, the properties of the material inside the lobe, e.g. the pressure and the density, are assumed to have uniform distributions. Thus we adopt the \textit{constant environment} case of the W09 FR\,I model with $p_{e}=p_{c}$ and $\rho=\rho_{c}$, where $p_{c}$ and $\rho_{c}$ are given by Equations (2) and (3) respectively. For this simplified case, the three relativistic conservation laws are re-written as: {\setlength\arraycolsep{1pt} \begin{eqnarray} \frac{\mathscr{R}_{j}\Gamma_{j}}{\Gamma_{j}-1}&&\gamma_{j}\beta_{j}(r_{0}^{2}-r_{j}^{2}(x))=\nonumber\\ &&\frac{\mathscr{R}_{s}(x)\Gamma_{s}}{\Gamma_{s}-1}\gamma_{s}\beta_{s}(r_{s}^{2}(x)-r_{j}^{2}(x))-F(x),\\ \frac{(\mathscr{R}_{j}+1)\Gamma_{j}}{\Gamma_{j}-1}&&\gamma_{j}^{2}\beta_{j}^{2}(r_{0}^{2}-r_{j}^{2}(x))=\nonumber\\ &&\frac{(\mathscr{R}_{s}(x)+1)\Gamma_{s}}{\Gamma_{s}-1}\gamma_{s}^{2}\beta_{s}^{2}(r_{s}^{2}(x)-r_{j}^{2}(x)),\\ \frac{(\mathscr{R}_{j}+1)\Gamma_{j}}{\Gamma_{j}-1}&&\gamma_{j}^{2}\beta_{j}(r_{0}^{2}-r_{j}^{2}(x))=\nonumber\\ &&\frac{(\mathscr{R}_{s}(x)+1)\Gamma_{s}}{\Gamma_{s}-1}\gamma_{s}^{2}\beta_{s}(r_{s}^{2}(x)-r_{j}^{2}(x))-F(x). \end{eqnarray} } Based on the equations above, the radius of the central jet, $r_{j}$, can be expressed as a function of the distance from the central AGN, $x$: \begin{equation} r_{j}(x)^{2}=r_{0}^{2}-\frac{F(x)(\Gamma_{j}-1)}{\Gamma_{j}\gamma_{j}^2\beta_{j}(\frac{\beta_{j}}{\beta_{s}}-1)(\mathscr{R}_{j}+1)}, \end{equation} where $\Gamma_{j}=\Gamma_{s}=4/3$ are the adiabatic indices of the material inside the central jet and the shear layer.
$r_{0}$ is the initial radius of the jet at the brightening point, which we assume is a constant equal to 100\,pc throughout the lifetime of the jet. $\mathscr{R}_{j}$ is defined as the ratio between rest mass energy and non-relativistic enthalpy for jet material. In principle, as jet pressure decreases with jet age, the value of $\mathscr{R}_{j}$ should increase. However, this value may vary for different sources and it is hard to estimate from observations. W09 obtained $\mathscr{R}_{j}=13.4$ by applying this entrainment model to 3C\,31, which is an old FR\,I source. Since we are discussing FR\,II sources at the late stage of their evolution, we assume a common value of $\mathscr{R}_{j}=10$ for simplicity in our calculations. $F(x)=cg(x)/[\pi p(x)]$ is defined in W09, where $g(x)=\int_{S}\rho\bm{v_{\rm ent}}\bm{\cdot n}dS$ is the mass entrainment function ($\bm{n}$ is the normal direction of the unit surface $dS$). Taking Equations (2) and (3), we find that $F(x)$ is given by: \begin{equation} F(x)=\frac{8R_{T}^{4}(\Gamma_{x}+1)(5-\alpha)^{2}}{\pi^{2}c\gamma_{j}c_{1}^{5-\alpha}}\int_{S}\bm{v_{\rm ent}}\bm{\cdot n}dS. \end{equation} As discussed in W09, entrainment is mainly due to turbulent motions, so the entrainment velocity $\bm{v_{ent}}$ is closely related to the sound speed, $C_{c}$, in the lobe. This is a constant throughout the jet lifetime, as the lobe is undergoing adiabatic expansion. We set $v_{ent}=\eta C_{c}$, where $\eta$ is the entrainment efficiency. KA97 and CR91 defined and used an entrainment efficiency in much the same way and argued for an upper limit of $\eta<0.26$ in their non-relativistic mixing layer model. Our model is built under different conditions, and we choose a default value of $\eta=0.5$. We discuss this issue in more detail in Section 3.2.3. As $\int_{S}dS$ is also a function of $r_{j}(x)$, we find that $r_{j}(x)$ is a function of $\gamma_{j}$ only and can be solved for numerically.
Interestingly, this shows that $L_{max}$, the maximum distance that an FR\,II object can reach (where $r_{j}(L_{max})=0$), depends only on the Lorentz factor, $\gamma_{j}$. $\beta=v/c$ and $\gamma=(1-\beta^{2})^{-0.5}$ are measures of the bulk velocity. $\beta_{j}$ and $\beta_{s}$ refer to the velocities in the central jet and the shear layer defined by W09 respectively. Analyses of some typical FR\,I sources indicate that bulk velocities are $\beta\approx$ 0.8 -- 0.9 where the jets first brighten abruptly, and that the jets decelerate rapidly to speeds of $\beta \approx$ 0.1 -- 0.4 where recollimation takes place \citep[e.g.][]{lb02a, cl04, canvin05, lcbh06}. Both analytical models and numerical simulations suggest that the jet velocity has a transverse structure. However, it has a restricted range and does not evolve significantly with $x$ \citep{lb02b}, so it is reasonable to use a quasi-one-dimensional analysis and adopt velocity values averaged across the jet cross-section in each region (W09). In this paper, we adopt $\beta_{s}=0.3$ as a typical value. The value of $\beta_{j}$, which is a key parameter determining the maximum length of an FR\,II object, will be discussed in the following section. \section{The maximum size of FR\,II jets} \label{maximum_size} \subsection{Results} \label{result} \begin{figure} \includegraphics[width=0.5\textwidth]{gamma_l} \caption{The maximum length of an FR\,II jet as a function of $\gamma_{j}$, with $\alpha=1.5$ and $R_{T}=2$.} \label{gamma_l} \end{figure} The model discussed in the last section contains various parameters, and most of them have been fixed based on previous observations or theoretical work. We first concentrate on the maximum length that an FR\,II jet can reach for a given jet bulk velocity. Figure \ref{gamma_l} shows the distribution of $L_{max}$ as a function of $\gamma_{j}$.
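The steepness of this dependence can be made explicit with a deliberately simplified, closed-form version of the calculation. If the entrainment surface is approximated as a cylinder of radius $r_0$ (so that $F(x)$ in eq. (8) grows linearly with $x$), with $v_{ent}=\eta C_c$, $C_c=c/\sqrt{3}$ for a relativistic lobe plasma, $\Gamma_j=4/3$ and $c_1=1$, then eq. (7) can be inverted for $r_j(L_{max})=0$ by hand, giving $L_{max}\propto\gamma_j^{3}\beta_j(\beta_j/\beta_s-1)$. These are simplifying assumptions, and the absolute normalization is not expected to match Figure \ref{gamma_l}; the point is that the ratio $L_{max}(\gamma_j=15)/L_{max}(\gamma_j=2)$ comes out at roughly 600, comparable to the quoted jump from $\sim5$\,kpc to $\sim3000$\,kpc:

```python
from math import sqrt, pi

def beta(gamma):
    """Bulk speed in units of c for Lorentz factor gamma."""
    return sqrt(1.0 - 1.0 / gamma**2)

# Parameters fixed in the text; the cylindrical surface and c1 = 1 are assumptions.
r0 = 0.1                   # initial jet radius in kpc (100 pc)
beta_s = 0.3               # shear-layer bulk speed
R_script_j = 10.0          # rest-mass-to-enthalpy ratio of the jet material
R_T, alpha, eta, c1 = 2.0, 1.5, 0.5, 1.0
Gamma_x = 5.0 / 3.0
C_c = 1.0 / sqrt(3.0)      # lobe sound speed in units of c

def L_max(gamma_j):
    """Toy closed form for the erosion length where r_j(L_max) = 0, in kpc.
    F(x) = K x, with K from eq. (8) and a cylindrical entrainment surface."""
    b = beta(gamma_j)
    K = (16.0 * R_T**4 * (Gamma_x + 1.0) * (5.0 - alpha)**2
         * r0 * eta * C_c / (pi * gamma_j * c1**(5.0 - alpha)))
    return 4.0 * r0**2 * gamma_j**2 * b * (b / beta_s - 1.0) \
        * (R_script_j + 1.0) / K

print(beta(2), beta(15))      # ~0.866 and ~0.998, as quoted in the text
print(L_max(2), L_max(15))    # toy values; only the ratio is meaningful
```

In this toy form $K\propto\eta$, so doubling the entrainment efficiency halves $L_{max}$, in line with the qualitative discussion of $\eta$ in Section 3.2.3.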
The diagram suggests that, if the particles in the central jet are slow, $L_{max}$ is small, and the FR\,II structure will be easily destroyed by entrainment at an early stage of its evolution. For example, an FR\,II with $\gamma_{j}=2$ ($\beta_{j}\sim0.87$) can survive only to a maximum length of $\sim5$ kpc. As $\beta_{j}$ increases, the jet can survive longer and the maximum jet length increases. An FR\,II with $\gamma_{j}=15$ ($\beta_{j}\sim0.998$) can reach as far as 3000\,kpc. This upper limit is sufficient to cover almost all the FR\,IIs currently observed. \begin{figure} \includegraphics[width=0.5\textwidth]{l_z} \caption{The distributions of 3CRR and 7CRS sources on the linear size-redshift plane. The plus signs are 3CRR sources and the square signs are 7CRS sources.} \label{l_z} \end{figure} Relativistic beaming has been observed for many radio sources, and it is widely accepted that powerful FR\,IIs have a very high bulk velocity. Direct measurement of jet bulk velocity is difficult, but several attempts have been made. \citet{hough02} observed the parsec-scale regions of 25 quasars (with FR\,II morphology) in the 3CRR sample and estimated their bulk velocities to fall in the range of $\gamma_{j}\approx5-10$. X-ray observations by \citet{sambruna04} require $\gamma_{j}\sim10$. \citet{hardcastle06} applied a beamed inverse-Compton model to a sample of X-ray jets and showed that the required bulk velocity on parsec scales could be as high as $\gamma_{j}\ge15$. \citet{begelman08} obtained similar results implying $\gamma_{j}\geq2$ and possibly as high as $\sim50$. \citet{jorstad08} studied a sample of radio-loud galaxies and stated that the mean $\gamma_{j}$ is about 5 for radio galaxies, about 13 for BL Lacs, and about 20 for quasars. However, all these estimates of the bulk velocities are quite rough and model dependent.
For example, although \citet{hough02} measured the apparent velocities for individual sources, their orientation angles were not well constrained. \citet{hardcastle06} used a fixed angle for all their sources. Considering that the apparent velocities of all sources are superluminal, the orientation angle plays an important role in calculating the real bulk velocity. The uncertainty in the orientation angles, together with the uncertainty in the apparent velocity measurements, implies large uncertainties in the calculated real jet velocities and real jet sizes. We therefore cannot convincingly test our model by comparing predictions with observations of individual systems. Nevertheless, we can check for the overall consistency in a statistical sense here. Observational samples of FR\,IIs (e.g. 3CRR, 6CE and 7CRS) show that most objects have sizes between 10\,kpc and 1000\,kpc. These size scales correspond to $\gamma_{j}\gtrsim2-10$ based on our model. The largest object in the 3CRR sample has a length of $\sim4000$\,kpc, which corresponds to $\gamma_{j}\gtrsim17$. All the Lorentz factors required are consistent with the observational results described above. \citet{mh09} analysed a complete sample of FR\,IIs and suggested that although the jet bulk speed on parsec scales can be as high as $\gamma>10$, it might be much smaller on kpc scales. Their fitted model gave a Lorentz factor around $1.18-1.49$, corresponding to a speed of $0.53c-0.74c$. This is also consistent with some earlier work implying that the large-scale jets only have moderately relativistic bulk speeds \citep[e.g.][]{wa97,hardcastle99,al04}. However, this deceleration phenomenon can be naturally explained by our model here: the speeds observed at kpc scales may refer to the region containing both the central jet and the mixing layer.
As the bulk velocity in the layer can be $0.3c$ or even lower, it is possible for us to observe a moderately relativistic velocity when the boundary layer dominates the jet on kpc scales. We can also check the maximum age of an FR\,II jet. With the observationally constrained Lorentz factors, our model predicts that most FR\,IIs can have a maximum size between 10 kpc and a few 1000 kpc. \citet{oc98} observed young and powerful Compact Symmetric Objects (which are believed to be the progenitors of radio galaxies), and obtained a hotspot advance speed of $\sim0.03c-0.3c$ at a scale of $\sim100$\,pc. Deceleration of the head may occur, as a slower speed of $\sim0.02c$ was obtained for the kpc-scale jet of Cygnus A \citep{cb96}. Therefore, we can estimate a maximum age range from several $10^{5}$\,yr up to $10^{8}$\,yr, which indicates that an FR\,II jet can be destroyed at any time between these time scales, but it can hardly survive beyond $10^{8}$ yr. This range of maximum age agrees well with other analytical work, e.g. \citet{bird08}, who found an average jet age of $1.2\times10^{7}$ yr. The most interesting conclusion from our model is that, as most parameters of the model are restricted to a small range by the observations, the maximum length of an FR\,II object is mainly determined by the bulk velocity of the central jet. Jets with a similar bulk velocity will have a similar $L_{max}$. Here we would like to investigate two complete samples, 3CRR and 7CRS, to see if their source distributions support this idea. It is hard to obtain accurate jet properties for individual sources, but as complete samples, we can assume the overall distributions of the jet properties (e.g. jet size, age and environment density) represent the average values at each redshift. \citet{wk08} find there is a strong relation between the redshift and FR\,II environment density ($\Lambda\propto(1+z)^{5.8}$).
If $L_{max}$ is affected by the environment, we should observe a strong relation between the average jet size and the redshift. Figure \ref{l_z} shows that the overall jet size decreases only slightly, by a factor of $\sim3$, from the local universe to $z=2$. This small decrease may be largely due to a selection effect: the sources are fainter when they get older and larger, so at high redshift, the sources easily fall below the flux limit before they grow to a fairly large size. The observations are thus not in conflict with our model's prediction, but better-determined jet properties or larger complete samples are desired in future work to provide better support for our model. Our model also indicates that $L_{max}$ does not depend directly on the jet power. Although the jet power is a function of $\gamma_{j}$, it is also determined by the injection rate of rest mass. Again, we employ the 3CRR and 7CRS samples. As discussed in the last paragraph, we assume that at each redshift, 3CRR and 7CRS sources have similar age and environment density distributions, as they are both complete samples. Therefore, as the 7CRS sample has a much lower flux limit, the sources in the 7CRS sample are generally less powerful than those in the 3CRR sample. In Figure \ref{l_z}, we cannot see a significant difference between the size distributions from the two samples, which is in agreement with our prediction. However, current observational samples are small, with poor statistics; larger samples with lower flux limits need to be considered in future work. \begin{figure} \includegraphics[width=0.5\textwidth]{different_alpha} \caption{The same diagram as Fig. \ref{gamma_l}, but with different values of $\alpha$ ($R_{T}=2$).
The dotted, solid, dashed and dash-dotted lines refer to $\alpha=1.9, 1.5, 0.7, 0.0$ respectively.} \label{different_alpha} \end{figure} \subsection{Dependencies on parameters} In the previous section, we discussed how the maximum length of an FR\,II object depends on the bulk velocity of the central jet. However, $L_{max}$ may also depend on other parameters which we set to be constants in Section 2. Some of the parameters have not been well constrained by observations (e.g. $r_{0}$ and $\mathscr{R}_{j}$), so we leave them as constants in this paper. However, $\alpha$ and $R_{T}$ are well constrained and studied for radio jet evolution. Meanwhile, the value of $\eta$ has been discussed extensively in previous work. Therefore, we will focus on these three parameters and discuss how their values could affect the calculated $L_{max}$. \subsubsection{The power-law index of environment density distribution, $\alpha$} \citet{falle91} has shown that for $\alpha>2$, a jet shock cannot form. X-ray observations confirm that the value of $\alpha$ should be between 0 and 2, but the values for individual objects may vary significantly. In this context, we also need to consider the core radius, $a_{0}$, inside which we take the jet to be surrounded by a constant density environment ($\alpha$ = 0). \citet{croston08} find that some FR\,I sources have fairly flat environments up to 100 kpc. Although this may not apply to FR\,IIs, it is still important to investigate how sensitive our results are to the adopted value of $\alpha$. In order to answer this question, we calculate $L_{max}$ as a function of $\gamma_{j}$ for different values of $\alpha$ and plot the results in Figure \ref{different_alpha}. From this, we find that $\alpha$ has only a very small effect on the relation between $L_{max}$ and $\gamma_{j}$. If the jet is located in a flatter environment, it will have a slightly bigger maximum length, and vice versa.
This result is in line with our assumption that the maximum length of an FR\,II does not depend directly on its environment properties. Currently, there are not many sources with well-determined environment density profiles, but in the future it will be well worth checking whether a strong relation between $\alpha$ and average jet size exists, since this would clearly contradict our model. \begin{figure} \includegraphics[width=0.5\textwidth]{different_rt} \caption{The same diagram as Fig. \ref{gamma_l}, but with different values of $R_{T}$ ($\alpha=1.5$). The dotted, solid, dashed and dash-dotted lines refer to $R_{T}=1.3, 2.0, 4.0, 6.0$ respectively.} \label{different_rt} \end{figure} \subsubsection{The axial ratio, $R_{T}$} We set $R_{T}=2$ as a constant in Section 2, but it may also take different values for different objects and lead to different $L_{max}$ as well. Our evolutionary model is self-similar, so we assume $R_{T}$ is constant throughout the jet lifetime. However, this may not be true in detail, especially during the late stages of FR\,II evolution. Some simulations suggest that $R_{T}$ changes with time \citep{krause05}, and there is also strong observational evidence that larger sources have larger $R_{T}$ \citep{mullin08}. It is therefore also important to check how sensitive $L_{max}$ is to a variable $R_{T}$. Figure \ref{different_rt} shows the $L_{max}-\gamma_{j}$ diagram for different values of $R_{T}$. It shows that a fatter jet should have a larger maximum length. As $R_{T}$ is used for calculating the properties inside the lobe, which are directly associated with the entrainment process, it is reasonable that it has a larger influence on $L_{max}$ than $\alpha$ does. However, the minimum and maximum suggested values of $R_{T}$ only change the final $L_{max}$ by a factor of around 3, so our assumption of constant $R_{T}$ should still be a tolerable approximation. 
\subsubsection{The entrainment efficiency, $\eta$} The entrainment efficiency is another important parameter constraining the maximum size of an FR\,II jet, but it is difficult to obtain from either theoretical analysis or observations. CR91 discussed the entrainment efficiency in the context of their model and showed that it is limited either by the ability of the environment/jet to supply material for the mixing layer, or by the maximum possible growth rate of the mixing layer itself. They defined three different regimes, each associated with a particular upper limit on the entrainment efficiency. In our model, the environment is actually the cocoon and $\rho_{c}\ll\rho_{j}$, so the appropriate regime is the \textit{environment-limited regime}. As the system is in pressure equilibrium, we must then have a sound speed in the cocoon (environment) much larger than that in the jet. Therefore, we can only set a common upper limit of $\eta<1$. The actual value of $\eta$ for individual sources is more difficult to generalise, as a number of instabilities (e.g. Kelvin-Helmholtz, current-driven and so on) together with jet properties can play important roles in setting the value of $\eta$. As a result, in our actual calculations in this paper, we have simply assumed a typical value of $\eta=0.5$. Different values of $\eta$ in the allowed range will result in a wide range of $L_{max}$ values, so we do not plot an $L_{max}-\gamma_{j}$ diagram here. However, it is easy to see that a lower (higher) entrainment efficiency will lead to a slower (faster) destruction of the central jet and to a larger (smaller) maximum jet size. For example, if $\eta=0.9$, a jet needs to have $\gamma_{j}>12$ to reach a maximum size of 1000\,kpc, but for $\eta=0.1$, the same $L_{max}$ can be reached with only $\gamma_{j}>5$. \subsection{Comparison with previous work} KA97 also discussed the stability of jets based on the mixing layer model of CR91. 
Given that the W09 model used here is also based on CR91, it is interesting to compare our results to those obtained by KA97. KA97 found that in an environment with $\alpha=0$, the jet can easily be destroyed and can only be stable up to 2.6 kpc. With $\gamma_{j}=2$ (KA97's default assumption), our model predicts a slightly larger $L_{max}$ of around 5 kpc for this $\alpha$. However, KA97 also argued that $L_{max}$ will increase with $\alpha$, i.e. jets in environments with steeper density gradients can grow to larger sizes before becoming unstable. This contrasts with our model, in which the value of $\alpha$ plays only a minor role in setting $L_{max}$, with $\gamma_{j}$ being the key parameter instead. There are two main differences between the model of KA97 and that of this paper. First, KA97 adopt the \textit{mixing-layer-limited regime} of the CR91 model, and obtain an upper limit of $\eta<0.26$ for the entrainment efficiency. However, as we discussed in the last section, the \textit{environment-limited regime} should be more appropriate here. Second, the original CR91 model applied by KA97 was designed for non-relativistic cases, whereas the W09 model is based on the relativistic conservation laws. In the non-relativistic case, the energy and momentum are mainly determined by the density of the lobe, which strongly depends on lobe volume and $\alpha$. However, in the relativistic case, the momentum and energy are dominated by the relativistic component, so the Lorentz factor is the key parameter. A number of numerical simulations have studied jet instabilities in more detail, taking into account factors other than the Lorentz factor considered in our model. For example, \citet{mizuno07} suggested that the distribution of the magnetic field is crucial for determining the Kelvin-Helmholtz stabilisation. 
Moreover, \citet{rossi08} and \citet{meliani08} found that the environment/jet density contrast is important in determining the instability evolution and entrainment properties. However, all these simulations only represent the early stage of jet evolution. At this stage, the lobe is still small and not well established. Meanwhile, the mixing between the environment and the lobe may be significant, and the lobe density is much higher than at the later stages of jet evolution considered in this paper. Therefore the properties of the environment play a more important role. \citet{perucho04b} performed long-term simulations of relativistic jets, and considered a slightly overdense environment and lobe, which is closer to our case here. They also found that the jet Lorentz factor is an important parameter deciding the nonlinear stability of the jets. With the same thermodynamical properties, jets with smaller Lorentz factors start to mix and transfer momentum to the environment at an earlier stage. However, as they tried various models with different values of the thermodynamical properties, they claimed that a number of other parameters (e.g. jet-to-ambient enthalpy ratio, temperature and jet internal energy) also affect the jet stability and long-term morphology. Although in our model here we only consider the jet Lorentz factor as the key parameter, it is worth reiterating one particular simplification we make, which is that we attribute all forms of jet instability evolution to the growth of the shear layer between the central jet and the lobe. In reality, the evolution and growth rate of the shear layer must surely be a function of several other physical parameters and may therefore vary in time and between sources \citep{perucho05}. 
We make this simplification here purely to keep the model analytically tractable, although with sufficient data it may become possible to reconstruct the dependence of instability evolution on various jet parameters from observations in the future. \section{The evolution from FR\,II into FR\,I sources} \label{model_transition} The existence of a maximum size for FR\,II sources due to the erosion of their laminar jets raises an obvious question: what happens to an FR\,II object that reaches this limit? \citet{bick94} suggested that the FR\,I/II transition is due to the deceleration from relativistic to subrelativistic flow caused by entrainment. As the death of FR\,IIs in our model here is also due to entrainment, we argue that FR\,IIs reaching their maximum sizes are likely to evolve into FR\,Is. In this section, we outline a simple but plausible scenario for the transition of a radio galaxy with an FR\,II morphology into one with an FR\,I morphology. The basic idea is sketched in Figure \ref{cartoon}. When a stable radio outflow is born at time $t_{0}$, it exhibits an FR\,II structure with a laminar flow embedded inside a lobe and a hotspot at the end. At stage $t_{1}$, when the jet length $L_{j}$ is smaller than the maximum length $L_{max}$, the outflow grows with age, following the KA97 picture. Meanwhile, however, the central jet continuously suffers entrainment from the lobe, and the structure of the outflow can be described by our model here. The outflow evolves with an FR\,II morphology until it reaches $L_{max}$ at age $t_{max}$. At this time, the central jet is totally eroded, and the hotspot vanishes. The detailed evolution of the radio outflow at this stage is described in Section 2. When the outflow evolves to an age of $t_{2}$, where $t_{2}>t_{max}$, the shear layer dominates the end region of the jet. 
A weaker shock and the lobe structure may still exist, with plasma injected into the lobe after the shock from the end of the jet. The expected structure at this stage is reminiscent of a typical lobed FR\,I source. As the outflow becomes even older, the energy from the shear layer can hardly support the lobe structure or the working surface of the shock at the end of the jet, so the plasma will form a turbulent tail, with the lobe disappearing either because it is refilled from the environment or because it simply runs out of energy. At the end of this evolutionary stage, we will observe a naked tailed jet like 3C\,31. The jet is in direct contact with the environment, and a mixing shear layer is formed. At the same time, the laminar part may shrink again as the density of the environment is higher than that of the lobe. Please note that we are not claiming that our transition scenario here is the only way to generate FR\,Is. The precursors of FR\,Is may also include weak CSS sources \citep{pm07}, jets hitting dense environments \citep{meliani08}, and weak FR\,IIs reaching pressure equilibrium with their environment inside the core region \citep{kb07}. These scenarios do not conflict with our work here, as we are considering whether FR\,IIs can evolve into FR\,Is at the late stages of their evolution. Our transition scenario is complementary to the work above, providing a new plausible way for powerful FR\,IIs to develop into FR\,Is later in their lives. More studies are still needed in order to fully understand the FR\,I/II dichotomy. \section{Conclusion} \label{conclusion} We have embedded a mixing-layer model originally developed for modelling FR\,I jets into a self-similar model for FR\,II radio lobes to study the effect of entrainment on the central jets in FR\,II objects. We find that, for reasonable parameters, the growing mixing layer between the central jet and the radio lobe could play an important role during the evolution of FR\,II objects. 
The maximum length that a jet can reach is determined mainly by the bulk velocity of the particles in the central jet, $\beta_{j}$, and is not directly related to the environment or the jet power. We find a maximum length of $\sim$1000 kpc for $\beta_{j}=0.997$, assuming an entrainment efficiency of $\eta=0.5$, an environment index of $\alpha=1.5$ and a lobe axial ratio of $R_{T}=2$. If the jet is located in a flatter environment with a smaller $\alpha$, or the jet is fatter with a smaller $R_{T}$, its maximum length will be larger. We have also sketched the likely evolution of FR\,II sources after they reach their maximum size. Once the hotspots are extinguished, such sources will initially look like lobed FR\,I objects. However, ultimately their lobes will stop being fed from the jet, become turbulent and be refilled by the environment. At this point they will emerge as classic, 3C\,31-like FR\,I sources. This simple scenario suggests a new evolutionary connection between FR\,I and FR\,II sources and may help to shed new light on the FR\,I/II dichotomy. The FR\,I/II transition process suggested by our model may not depend on the environment directly, but there is some observational evidence showing that FR\,Is and FR\,IIs may inhabit different environments \citep{pp88}. Thus we cannot totally rule out the influence of the environment. There might be relations between environment properties and model-dependent parameters (e.g. $\gamma_{j}$), so that the environment can affect $L_{max}$ \textit{indirectly}. This is an interesting open question for future work, once we have more observational data on jet environments. In closing, we stress that the picture we have developed here -- especially that of the evolution beyond $t_{max}$ -- is still basically a toy model. In future work, we plan to model the evolution of the jet from FR\,II to lobed FR\,I to tailed FR\,I in more detail. 
Our final goal is to build a unified model for all types of radio galaxies and track how they evolve and morph into each other across the P-D diagram. \section*{Acknowledgments} We thank Prof. Robert Laing for helpful comments. JHC acknowledges funding from the South-East Physics Network (SEPNet). \label{lastpage}
\section{Introduction} The problem of finding a covariant expression for the distribution and conservation of gravitational energy-momentum in General Relativity dates to the 1910s. Einstein took the requirement that the gravitational field equations alone entail energy-momentum conservation as a criterion for finding his field equations in his process of discovery (\cite{EinsteinEntwurfGerman,NortonField,Janssen,JanssenRenn}); ironically, it was widely concluded that the final theory lacked any local conservation law for energy-momentum. The equation $\nabla_{\mu} T^{\mu\nu} =0$ for material stress-energy, though a consequence of Einstein's equations, is a balance equation, not a conservation equation, because the covariant divergence of a rank 2 tensor (with any index placement and density weight) cannot be written as a coordinate divergence. A coordinate divergence is required for integral conservation laws (\cite{Anderson}). Gravitational energy-momentum has been reviewed on several occasions (\cite{TolmanEnergy,SchrodingerSTS,FletcherConservation,TrautmanConserve,CattaneoConsHP,CattaneoConsMilano,Davis,GoldbergReview,Carmeli,SzabadosReview}). While there is no difficulty in writing down quantities satisfying local conservation laws (in the sense of a coordinate divergence), there seem to be too many expressions without the anticipated interconnections. More specifically, it has been expected that there ought to be a (10- or 16-component) tensor, geometric object, or other suitably covariant expression that describes the local distribution of gravitational energy-momentum, and yet evidently there is not one. Pseudotensorial answers go back to Einstein's work in 1916 (\cite{EinsteinFoundationGerman}), while objections to them from Schr\"{o}dinger and from Bauer appeared in 1918 (\cite{SchrodingerEnergy,BauerEnergy,Pauli,Cattani}). 
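The obstruction noted above can be made explicit by the standard identity relating the covariant divergence of a symmetric rank 2 tensor to a coordinate divergence (with $\Gamma^{\nu}_{\mu\lambda}$ the Levi-Civita connection of $g_{\mu\nu}$):
\begin{equation*}
  \sqrt{-g}\,\nabla_{\mu} T^{\mu\nu}
  \;=\;
  \partial_{\mu}\!\left(\sqrt{-g}\, T^{\mu\nu}\right)
  \;+\;
  \sqrt{-g}\,\Gamma^{\nu}_{\mu\lambda}\, T^{\mu\lambda} .
\end{equation*}
Thus $\nabla_{\mu} T^{\mu\nu}=0$ leaves $\partial_{\mu}(\sqrt{-g}\,T^{\mu\nu}) = -\sqrt{-g}\,\Gamma^{\nu}_{\mu\lambda} T^{\mu\lambda}$; the connection term acts as a source/sink that blocks the application of Gauss's theorem and hence blocks integral conservation laws.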
Later developments included the introduction of additional background structures, such as a flat background metric (\cite{Rosen1,RosenAnn,Graiff,Bonazzola}), an orthonormal tetrad (\cite{MollerAnnals}), or a flat connection (\cite{SorkinFlux,FatibeneFrancaviglia}). While the introduction of such further structures has achieved tensorial form with respect to coordinate transformations, this result has always come at the cost of introducing a new sort of gauge dependence, because the choice of specific background metric, tetrad, or connection lacks physical meaning and yet affects the results. The introduction of additional structures appears simply to move the lump in the carpet, not to flatten it out. Though new background structures continue to be introduced, the inductive lesson only gets stronger that the gauge dependence problem is not resolvable in such a fashion (\cite{SzabadosReview}). In this respect it is unclear that much has been gained beyond the original dependence of pseudotensors on coordinates found in the 1910s. The solution to the problem of gauge dependence, briefly, is to take \emph{all possible auxiliary structures of a given type together}. Thus, for example, the collection of all flat background metrics does not depend on the choice of any particular background metric. Changing the flat background metric from one specific example to another merely leads to another member of the same collection. Looking for some finite-component expression that is covariant under a change of the background metric, though traditional, is a mistake. Similar remarks hold for tetrads, connections, and even coordinate systems. Indeed the cases of background metrics, background connections, and coordinate systems seem closely related, while the tetrad case differs and so will not be discussed much here. Its introduction of a gratuitous local Lorentz group is a major disadvantage, and it is in fact not required for spinors, as will appear below. 
Some authors, especially those who emphasize how different General Relativity is from other field theories rather than how similar it is, have tried to make the best out of the apparent non-existence of gauge-invariant gravitational energy localization. Thus the question has been rejected as inappropriate, as shown by the equivalence principle: ``[a]nybody who looks for a magic formula for `local gravitational energy-momentum' is looking for the right answer to the wrong question.'' (\cite[p. 467]{MTW}) However, this is an \emph{ad hoc} move. Noether's theorems do not care about the equivalence principle; they simply give results in any coordinate system (\cite{BradingConserve}). Rather than criticizing the results of Noether's theorem in terms of preconceived notions of invariance and then mysteriously invoking a principle irrelevant to Noether's theorem to reduce the puzzlement over the lack of an invariant energy complex, it is preferable to learn from the results of Noether's theorem that there is a broader notion of invariance suited to the existence of infinitely many distinct conserved energies. There is no reason to expect the components of a pseudotensor to transform into each other once the vast multitude of gravitational energy-momenta is recognized. Most issues discussed here are considered in more detail in a forthcoming paper (\cite{EnergyGravity}). \section{Infinite-Component Covariant Density in Terms of All Flat Backgrounds} Using a flat background metric tensor $\eta_{\mu\nu}$ allows one to describe gravitational energy in a tensorial way, independent of the choice of coordinates. Let $u$ represent bosonic matter fields; spinors will be considered below. 
One can write down a gravitational energy-momentum tensor $t^{\mu\nu}[g_{\alpha\beta}, \eta_{\rho\sigma}]$ such that the total energy-momentum complex $ (\sqrt{-g} T^{\mu\nu}[g_{\alpha\beta}, u] + \sqrt{-g} t^{\mu\nu}[g_{\alpha\beta}, \eta_{\rho\sigma}])$ satisfies covariant conservation \begin{eqnarray} \partial_{\mu} (\sqrt{-g} T^{\mu\nu} + \sqrt{-g} t^{\mu\nu}) = 0 \end{eqnarray} with respect to the flat metric's torsion-free covariant derivative $ \partial_{\mu}.$ When General Relativity is formulated with a background metric, the action has two invariances, one under changes of coordinates and one under gauge transformations. The latter transformations alter the mathematical relationship between $g_{\mu\nu}$ and $\eta_{\mu\nu}.$ For this reason $t^{\mu\nu}$ is tensorial with respect to coordinate transformations, but gauge-variant under gauge transformations (\cite{Grishchuk}). Whereas finite one-parameter coordinate transformations can be written as \begin{eqnarray} g_{\sigma\rho} \rightarrow e^{\pounds_{\xi}} g_{\sigma\rho}, u \rightarrow e^{\pounds_{\xi}} u, \eta_{\mu\nu} \rightarrow e^{\pounds_{\xi}} \eta_{\mu\nu}, \label{coord} \end{eqnarray} gauge transformations are written as \begin{eqnarray} g_{\sigma\rho} \rightarrow e^{\pounds_{\xi}} g_{\sigma\rho}, u \rightarrow e^{\pounds_{\xi}} u, \eta_{\mu\nu} \rightarrow \eta_{\mu\nu}, \label{gauge} \end{eqnarray} which leave the flat metric alone. Different and equally appropriate choices of background metric give different localizations, but correspond to the same physical situation. Thus the achievement of tensorial energy-momentum localization has been purely formal; like a lump in the carpet, the gauge dependence has merely been shifted, not ironed out. 
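To first order in $\xi^{\mu}$, the finite transformations (\ref{coord}) and (\ref{gauge}) reduce to the standard Lie-derivative expressions, and the sole difference between them lies in whether $\eta_{\mu\nu}$ is dragged along:
\begin{align*}
  \text{coordinate:}\quad
    &\delta g_{\sigma\rho} = \pounds_{\xi} g_{\sigma\rho}, \quad
     \delta u = \pounds_{\xi} u, \quad
     \delta \eta_{\mu\nu} = \pounds_{\xi} \eta_{\mu\nu};\\
  \text{gauge:}\quad
    &\delta g_{\sigma\rho} = \pounds_{\xi} g_{\sigma\rho}, \quad
     \delta u = \pounds_{\xi} u, \quad
     \delta \eta_{\mu\nu} = 0,
\end{align*}
where, for the metric,
\begin{equation*}
  \pounds_{\xi} g_{\sigma\rho}
  = \xi^{\alpha}\partial_{\alpha} g_{\sigma\rho}
  + g_{\alpha\rho}\,\partial_{\sigma}\xi^{\alpha}
  + g_{\sigma\alpha}\,\partial_{\rho}\xi^{\alpha}.
\end{equation*}
Because $t^{\mu\nu}$ depends on the relation between $g_{\mu\nu}$ and $\eta_{\mu\nu}$, it is invariant under the first family but not the second.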
One can avoid dependence on the choice of any particular background metric $\eta_{\mu\nu}$ by collecting all of them together in a set $\{ (\forall \eta_{\rho\sigma}) \eta_{\rho\sigma} \}.$ Every flat metric yields a covariant conservation law: \begin{eqnarray} \{ (\forall \eta_{\rho\sigma}) \; \partial_{\mu}(T^{\mu\nu} \sqrt{-g} + t^{\mu\nu} \sqrt{-g}) =0 \}, \end{eqnarray} each conserved using the appropriate flat covariant derivative. Using a mere flat connection is analogous, but then there are angular momentum problems (\cite{ChangNesterChen}, \emph{c.f.} \cite{GoldbergConservation}). This is an infinite-component gauge-invariant localization of gravitational energy. Gravitational energy is localized, but there are far more energies than one naively expected. There is an apparently universal tacit assumption that there ought to be just one gravitational energy-momentum (with 10 or perhaps 16 components). This assumption of uniqueness is especially clear in treatments by Goldberg (\cite{GoldbergReview}), Faddeev (\cite{FaddeevEnergy}) and Szabados (\cite[section 3.1.3]{SzabadosReview}). Faddeev writes, ``The energy of the gravitational field is not localized, i.e., a uniquely defined energy density does not exist.'' (\cite{FaddeevEnergy}) While stated with special clarity in some cases, the assumption of uniqueness is implicit almost everywhere in the literature in the expectation that a pseudotensorial expression (perhaps Einstein's) in one coordinate system ought ideally to be related by a transformation law to that pseudotensor in another coordinate system in order to have the intended physical meaning of representing gravitational energy-momentum density. This expectation of uniqueness makes sense if, as in other theories, there is only one energy in General Relativity. 
It has been known at least since 1958 due to Bergmann and Komar, however, that there are \emph{infinitely many} gravitational energies, and that any coordinate basis or vector field generates one (\cite{BergmannConservation,KomarConservation}). Some of them might be zero; for example, a vector field derived by index-raising from an exact covector has vanishing Komar energy density. (The resulting Komar energies are unsatisfactory (\cite{PetrovKatz2}), so there is reason to expect the energies to depend on more than just a single vector field and the metric.) Some of the energies might plausibly be regarded as faces of a single energy, such as if a Lorentz or affine transformation relates them. But the point remains that there are a great many \emph{different} gravitational energy-momenta, uncountably infinitely many, far more than one naively expected. The question must be asked: why can't they all be real? In fact there is no reason that they cannot all be real. Thus there is no reason whatsoever to expect distinct conserved quantities to behave mathematically as though they were just faces of one (finite-component) conserved quantity; the paradox dissolves. If there were a finite-component gauge-invariant localization, then it would represent under different gauges only different faces of the same entity. One question not addressed here pertains to the uniqueness of the gravitational energy-momentum (pseudo)tensor, given the variety of candidates available. It seems reasonable to require a candidate to be suitably related to Noether's theorem and to require correct values of integrated quantities in some basic contexts. There might remain some nonuniqueness due to the possibility of adding quantities with identically vanishing divergence. A good candidate is due to Joseph Katz, Ji\v{r}\'{i} Bi\v{c}\'{a}k and Donald Lynden-Bell (\cite{KatzBicakLB,KatzEnergy,PetrovChapter}). 
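The vanishing of the Komar density for index-raised exact covectors, noted above, follows directly from the antisymmetric form of the Komar superpotential. In standard notation (suppressing the conventional $1/(8\pi)$ normalization), the Komar current generated by a vector field $\xi^{\nu}$ is
\begin{equation*}
  \mathcal{J}^{\mu} \;\propto\; \partial_{\nu}\!\left(\sqrt{-g}\;\nabla^{[\mu}\xi^{\nu]}\right),
\end{equation*}
which is identically conserved by the antisymmetry of $\nabla^{[\mu}\xi^{\nu]}$. For $\xi_{\nu} = \partial_{\nu} f$ one has
\begin{equation*}
  \nabla^{[\mu}\xi^{\nu]} \;=\; \nabla^{[\mu}\nabla^{\nu]} f \;=\; 0
\end{equation*}
because second covariant derivatives of a scalar commute for a torsion-free connection, so the Komar density vanishes identically.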
Or perhaps the appropriate form depends on the boundary conditions (\cite{NesterQuasiPseudo,NesterQuasi}). \section{Spinors as Almost Geometric Objects} Given the most common ways of treating spinor fields, it is not obvious how gravitational energy localization in the form proposed here would work. M{\o}ller's orthonormal tetrad formalism was motivated in part by its supposed necessity to accommodate spinor fields (\cite{MollerAnnals}). The local Lorentz group introduced in the tetrad formalism seems quite unhelpful for localizing gravitational energy, however, even if one accepts all the tetrads at once. Whereas the background metrics or background connections are closely related to the coordinate transformation freedom that is already present and ineliminable from the manifold, the local $O(3,1)$ group apparently bears no such relation. Fortunately it is not the case that a tetrad is necessary for spinors, contrary to widely held opinion. The tetrad formalism and local Lorentz group follow only if one insists on a linear coordinate transformation law for spinors as opposed to a nonlinear one (\cite[p. 234]{GatesGrisaruRocekSiegel} \cite{OPspinor}). It is possible to include spinor fields almost like tensors in the Ogievetsky-Polubarinov-Bilyalov formalism (\cite{OPspinor,OP,BilyalovSpinors}). The spinor and the metric together form a nonlinear geometric object $\langle g_{\mu\nu}, \psi \rangle$ (\cite{OPspinor,OP,BilyalovSpinors}) (up to a sign for the spinor part), with mild restrictions on the admissible coordinates to distinguish the time coordinate from the spatial coordinates. (The inequalities restricting the coordinates serve the same purpose as Bilyalov's matrix $T$ that interchanges two coordinates (\cite{BilyalovConservation}) to get time listed first. The possibility of the field dependence of the admissible coordinates is typically not entertained when one defines a manifold as having all possible coordinate systems.) 
The nonlinearity is due to the fact that the new components of the spinor depend not only (linearly) on the old spinor components, but also on the metric (\cite{OPspinor}). By suitably weighting the spinor and exploiting conformal invariance, one could make the weighted spinor depend only on the conformal part of the metric. \section{Localization in Terms of Pseudotensor in All Coordinate Systems} The use of a background metric or connection has the virtue that it manifestly has every sort of invariance that one would expect---both tensoriality under coordinate transformations and covariance under gauge transformations. It is initially somewhat less clear what one should expect in a formalism with no background metric. Fortunately one can gauge-fix the formalism above with a flat background metric or connection to find out. I will ignore global issues by pretending that all coordinate charts are defined everywhere. One convenient gauge fixing takes the bimetric formalism above and dispenses with the flat background metric tensors by choosing Cartesian coordinates for each flat metric separately. Thus each flat metric tensor $\eta_{\mu\nu}$ in the set $\{ (\forall \eta_{\rho\sigma}) \; \eta_{\rho\sigma} \}$ is downgraded to a \emph{matrix} $\eta_{MN}=diag(-1,1,1,1) $ and its resulting connection is downgraded to a three-index entity with only vanishing components, which can be ignored. Now the former coordinate freedom (\ref{coord}) is destroyed, but the former gauge freedom (\ref{gauge}) is formally converted into coordinate freedom (which has no effect on the numerical matrix $\eta_{MN}$). The new coordinate freedom is still gauge freedom in the sense of Dirac-Bergmann constrained dynamics. In a chart one has one's favorite pseudotensor $ t^{\mu\nu}[g_{\mu\nu}, \eta_{MN}],$ where the expression $g_{\mu\nu}$ now means the coordinate components of the curved metric. 
Using Einstein's field equations, the total energy-momentum complex is conserved in the sense of having vanishing \emph{coordinate} divergence \begin{equation} \frac{\partial}{ \partial x^{\mu} } (\sqrt{-g} T^{\mu\nu} + \sqrt{-g} t^{\mu\nu}[g_{\mu\nu}, \eta_{MN}]) =0 \end{equation} in every coordinate system. The gauge-invariant infinite-component gravitational energy-momentum distribution is just a certain pseudotensor in \emph{every} coordinate system. The curved metric thus appears in all possible coordinate systems. This expression for the localization of gravitational energies has infinitely many components in a nontrivial sense: each coordinate system picks out a distinct conserved energy. The distinctness depends on the fact that the expression $t^{\mu\nu}$ is not a tensor or other geometric object (\cite{BergmannConservation,Anderson}). The components of a tensor or any geometric object with respect to all coordinate systems give infinitely many faces of the same entity, but here we have infinitely many distinct entities, each appearing in its own adapted coordinate system. A long time ago Tolman proposed that having a pseudotensorial conservation law in every coordinate system is good enough, and forms an alternative way to be covariant (\cite{TolmanEnergy,Tolman}). He did not address the standard objections, however. It is now clear that Tolman's proposal was correct as far as it went, but it needed to be supplemented with Bergmann's derivation of infinitely many different conservation laws from different coordinate bases. \section{Objections to Pseudotensors Wrongly Assume Uniqueness of Energy} Having developed the covariant construction of localized energy-momenta, one can now easily resolve some standard objections to pseudotensors, which already appeared in Pauli's review (\cite{Pauli}) and have reappeared in countless places since then. 
For example, it is noted with disappointment that a given pseudotensor (at least one without second derivatives) can be made to vanish at any point or along any worldline by a suitable choice of coordinates. With the tacit assumption that gravitational energy-momentum is unique, one then concludes that there is no real fact of the matter pertaining to the density of gravitational energy-momentum at that point or along that worldline. But the point or worldline was arbitrary, so there is no fact of the matter about gravitational energy-momentum localization in general. Sometimes it is held that the situation improves somewhat when symmetries yield Killing vectors, as in the case of spherical symmetry (\cite[p. 603]{MTW}). It is now clear how this objection goes astray: the components of a given pseudotensor with respect to different coordinate systems in fact pick out \emph{different energies}, some but not all of which vanish at the arbitrarily chosen point or along the arbitrarily chosen worldline. The fact that some energies vanish there but others don't is a bit unfamiliar, but it is in no way paradoxical on reflection. Given long disappointment with gravitational energy localization, many authors have turned to seeking quasilocalization, in which the energy in some volume is specified, rather than the energy density at a point. Quasilocal energy is generally expected to be unique. That expectation is undermined, however, by the multitude of local energy densities pointed out by Bergmann (\cite{BergmannConservation}). Pseudotensors are related to quasilocal methods (\cite{NesterQuasiPseudo,NesterQuasi}). It is sometimes expected that a good quasilocal mass (energy) should vanish in flat spacetime, though that criterion does not hold for every proposed definition (\cite{Bergqvist}). Likewise positive definiteness is sometimes expected, though not always achieved (\cite{Bergqvist,SzabadosReview}). 
Local gravitational energy-momentum expressions do not reliably vanish in Minkowski space-time for all gauges either; instead they vanish in some coordinate systems/gauges (\cite{PetrovChapter}) but not others. If this result seems problematic, the resolution, again, is to notice that different coordinate systems/gauges pick out different energies. It is a bit surprising that some of them fail to vanish even in Minkowski space-time, but it is not absurd. Minkowski space-time is perhaps unusual in that \emph{there exists} an energy-momentum density that vanishes everywhere. Concerning Bauer's objection that flat spacetime in unimodular spherical coordinates has nonzero Einstein pseudotensor energy density (\cite{BauerEnergy,Pauli}), the fact that the same pseudotensorial expression in different coordinate systems picks out different energies removes the paradox. The fact that the total energy in these spherical coordinates diverges (\cite[p. 176]{Pauli}) is not terribly surprising, given that spherical coordinates have marvelously strong coordinate effects. Another traditional objection, this one due to Schr\"{o}dinger, calls attention to the vanishing of an Einstein pseudotensor (outside the Schwarzschild radius) for the Schwarzschild space-time in nearly Cartesian coordinates with the unimodular condition $\sqrt{-g}=1$ (\cite{SchrodingerEnergy,Pauli}). Part of the worry presumably is that a vanishing Einstein pseudotensor suggests that no gravitational energy is present, but intuitively surely there is some present. Once again the existence of many distinct energy densities is helpful to recognize. Possibly one would expect the \emph{total} mass-energy to come out ``right'' in this context, but various localizations are known to exist, in some cases with the energy all in some small region, in others not (\cite{PetrovPoint,PetrovChapter}). 
If Schr\"{o}dinger had shown that \emph{all} the gravitational energy densities vanished outside the Schwarzschild radius, such a result might be worrisome, but no such thing was shown. That his particular energy vanishes is an interesting feature of gravitational energy as defined by the Einstein pseudotensor and his coordinate system, but it is no real objection. It is analogous to concluding that the electromagnetic field vanishes because one can choose a gauge with $A_0 = 0.$ \section{Equivalence of All Conservation Laws to Einstein's Equations} In a typical field theory, one achieves energy-momentum conservation by noting that every field present in the equations of motion either has Euler-Lagrange equations or has generalized Killing vector fields in the sense of vanishing Lie derivative (\cite{TrautmanUspekhi}). In General Relativity as typically formulated (without a background metric or connection), every field present has Euler-Lagrange equations; there are no non-variational fields. One might then expect the energy-momentum of matter and gravity together to be conserved using both the gravitational field equations and the matter field equations. A distinctive feature of General Relativity is that, because of gravitational gauge invariance (see, \emph{e.g.}, \cite{SliBimGRG}), conservation follows using the gravitational field equations alone, without using the matter equations (\cite{Anderson}). The collection of all of the pseudotensorial conservation laws---a specific pseudotensor in all coordinates---is in fact \emph{equivalent} to Einstein's equations (\cite{Anderson,EnergyGravity}), so the reverse entailment also holds. This fact sheds light on those approaches that aim to derive Einstein's field equations using the conservation laws as premises or lemmas (\cite{EinsteinEntwurfGerman,Deser,SliBimGRG}).
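This textbook mechanism can be illustrated symbolically. The following sympy sketch checks, for a free scalar field in two-dimensional Minkowski spacetime (a toy model chosen purely for this illustration, not the paper's formalism), that the divergence of the canonical stress-energy tensor equals the Euler-Lagrange expression times the field gradient, so conservation holds precisely on-shell:

```python
# Symbolic check: for a free scalar field in 2D flat spacetime, the
# canonical stress-energy tensor T^{mu nu} = d^mu(phi) d^nu(phi) - eta^{mu nu} L
# satisfies d_mu T^{mu nu} = d^nu(phi) * Box(phi), so conservation
# holds exactly when the Euler-Lagrange equation Box(phi) = 0 holds.
import sympy as sp

t, x = sp.symbols('t x')
X = [t, x]
eta = sp.diag(1, -1)                       # Minkowski metric, signature (+,-)
phi = sp.Function('phi')(t, x)

def d(f, m):                               # lowered-index partial derivative
    return sp.diff(f, X[m])

def up(f, m):                              # raise the index (diagonal metric)
    return eta[m, m] * d(f, m)

# Lagrangian density and canonical stress-energy tensor
L = sp.Rational(1, 2) * sum(up(phi, m) * d(phi, m) for m in range(2))
T = [[up(phi, m) * up(phi, n) - eta[m, n] * L for n in range(2)]
     for m in range(2)]

box = sum(up(d(phi, m), m) for m in range(2))   # Euler-Lagrange expression

# off-shell identity: div T^{mu nu} - d^nu(phi) * Box(phi) = 0
for n in range(2):
    divT = sum(d(T[m][n], m) for m in range(2))
    assert sp.simplify(divT - up(phi, n) * box) == 0
```

The identity verified here is exactly the sense in which matter equations of motion deliver conservation in an ordinary field theory, the situation with which the paper contrasts General Relativity.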
\section{Angular Momentum Localization} For angular momentum, one introduces the coordinates $x^{\mu}$ and a symmetric choice of total energy-momentum complex $ \sqrt{-g} T^{\mu\nu} + \sqrt{-g} t^{\mu\nu}$ so that \begin{equation} \mathcal{M}^{\mu\nu\alpha}\equiv \sqrt{-g}( T^{\mu\nu} + t^{\mu\nu}) x^{\alpha} - \sqrt{-g}( T^{\mu\alpha} + t^{\mu\alpha}) x^{\nu} \end{equation} satisfies the conservation law $ \frac{ \partial}{\partial x^{\mu}} \mathcal{M}^{\mu\nu\alpha}=0 $ in all coordinates. By parity of reasoning with the above, the collection of these angular momentum densities in \emph{every} coordinate system is an appropriate covariant infinite-component object. Thus angular momentum achieves a gauge-invariant localization in the same way as energy-momentum. \section{Conceptual Benefits of Energy Localization and Conservation} If one is aware of the uses to which the supposed lack of an energy conservation law in General Relativity has been put by now, then the benefits of even a formal local energy conservation law become evident. The received view that there is no gauge-invariant and hence physically meaningful local conservation law for energy-momentum in General Relativity tends to inspire (though not strictly entail) a variety of unwarranted conclusions. Some have criticized or rejected General Relativity (or Big Bang cosmology in particular) as having mystical tendencies on account of its supposed lack of conservation laws, while others have appealed to General Relativity for certain purposes for the same reason. Elsewhere I discuss six such examples (\cite{EnergyGravity}). The best known is due to Tryon, to the effect that the only meaningful energy conservation law for closed spaces is a global one with zero energy; thus it seems that energy conservation poses no objection to the spontaneous origin of universes (\cite{Tryon}).
Finding gauge-invariant and hence physically meaningful local conservation laws therefore contributes to scientific rationality by resolving a conceptual problem (\cite{LaudanProgress}). \bigskip
\section*{Introduction} GX 3+1 is one of the brightest, persistent Galactic bulge X-ray sources. The source has occasionally produced thermonuclear X-ray bursts when its X-ray flux was low (see \cite{Makish,Sun}). Based on X-ray spectral and temporal variability studies with EXOSAT, it has been classified as an atoll source. From this and the observed type I X-ray bursts the source is almost certainly a low mass binary containing a neutron star \cite{HVK}. \cite{Lew87} reported the detection of low frequency noise (LFN) and QPO with frequencies around 8 Hz in some data intervals using EXOSAT observations. Subsequent analysis reinterpreted the power spectra of GX 3+1 in terms of so-called very low frequency noise (VLFN) and a ``peaked'' high frequency noise (HFN) component, which then led to the atoll classification \cite{HVK}. To date, kilohertz quasiperiodic oscillations (kHz QPO) have been detected with RXTE in 13 LMXB (see \cite{vdk97} for a recent review). So far there is no strong consensus on the mechanism or mechanisms which produce the kHz QPO in the persistent X-ray flux from LMXB, but considerable interest has been focused on various beat-frequency interpretations of the kind first proposed to explain the twin QPO peaks observed in 4U 1728-34 \cite{Stroh96}. The millisecond timescale of the variability indicates that whatever process is at work is almost assuredly occurring within a few tens of km of the neutron star surface. Here we present results from an analysis of a short RXTE observation of GX 3+1 to search for high frequency variability. \begin{figure}[b!] \centerline{\epsfig{file=gx3_lc_2_15kev_umdfig.eps,height=2.5in,width=5.5in}} \vspace{10pt} \caption{Lightcurve from GX 3+1 measured with the PCA in the 2 - 15 keV range. The data show the countrates sampled in 8 s time bins.
Notice the presence of significant non-Poissonian variations on timescales $> 10$ s.} \label{fig1} \end{figure} \section*{Data Summary} The observations were conducted with RXTE on October 9, 1996 UT. We obtained 2 ksec of data in four 125 $\mu$s (1/8192 s) single-bit data modes. These proportional counter array (PCA) data modes covered photon energies from 2 - 5, 5 - 9, 9 - 14, and 14 - 90 keV. We also obtained so-called Standard modes which provide 1/8 s lightcurves across the full PCA bandpass as well as 16 s spectral accumulations. The PCA lightcurve from our observations in the 2 - 15 keV range is shown in figure 1. The data show the countrates measured in 8 s intervals. The presence of strong low-frequency variability on timescales longer than about 10 s is clearly evident in this lightcurve. This type of strong LFN from GX 3+1 was also reported by \cite{Lew87}. \begin{figure}[b!] \centerline{\epsfig{file=gx3_10hzqpo_umd.ps,height=2.0in,width=5.5in}} \vspace{10pt} \caption{Power spectrum of GX 3+1 in the 2 - 15 keV range measured with the PCA. The broad feature between 3 and 30 Hz is the peaked HFN component. The solid line is the best fitting model described in the text.} \label{fig2} \end{figure} \begin{figure}[b!] \centerline{\epsfig{file=gx3_hfpsd_umd.eps,height=2.0in,width=5.5in}} \vspace{10pt} \caption{High frequency power spectrum from GX 3+1. There is no significant detection of kHz QPO during this observation. The upper limit on the amplitude of kHz QPO in the 900 Hz region is about 1\% (rms).} \end{figure} \section*{Power Spectral Analysis} We used the single bit data in the 2 - 15 keV range to calculate FFT power spectra. We calculated power spectra for individual 128 s intervals and then averaged the individual spectra. To further improve the power spectral statistics we also averaged again in frequency space. Figure 2 shows the power spectrum from 0.5 to 400 Hz calculated in this way.
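The segmenting-and-averaging procedure just described can be sketched in a few lines of numpy. The light curve below is simulated Poisson noise rather than PCA data, and the count rate, segment length, and Leahy normalization are illustrative assumptions of this sketch:

```python
# Minimal sketch of the averaged power spectrum computation described in
# the text: split the light curve into equal segments, FFT each segment,
# normalize, and average the resulting power spectra.  The real analysis
# used 128 s segments of 2 - 15 keV PCA single-bit data; here we use
# shorter simulated segments for speed.
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0 / 8192.0            # single-bit mode time resolution
seg_len = 8192               # bins per segment (1 s here, 128 s in the text)
n_seg = 16
rate = 400.0                 # illustrative mean count rate (counts/s)

counts = rng.poisson(rate * dt, size=n_seg * seg_len).astype(float)

powers = []
for seg in counts.reshape(n_seg, seg_len):
    ft = np.fft.rfft(seg)
    # Leahy normalization: pure Poisson noise averages to a power of 2
    powers.append(2.0 * np.abs(ft[1:]) ** 2 / seg.sum())
avg_power = np.mean(powers, axis=0)
freqs = np.fft.rfftfreq(seg_len, d=dt)[1:]
```

Averaging over segments (and optionally over adjacent frequency bins, as in the text) reduces the variance of each power estimate, which is what makes weak broad features such as the peaked HFN measurable.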
The broad feature centered around 10 Hz has been referred to as ``peaked'' high frequency noise (HFN \cite{HVK}) and also QPO \cite{Lew87}. To model this noise component and estimate its amplitude we fit the power spectrum to a model consisting of a power law and a Gaussian. This model provides a marginally acceptable fit to the data, although it does not fit the shape of the feature very well at the lower frequencies. From this fit we derive an rms amplitude for the peaked HFN of about 4\%, which is similar to values measured previously with EXOSAT \cite{Lew87}. We also fit a power law model to the power spectrum for frequencies below 2 Hz and find a good fit with $P \propto \nu^{-1.23}$, where $P$ and $\nu$ are the power spectral amplitude and the frequency, respectively. The amplitude (rms) in the 2 - 15 keV band of this LFN is 4.2\% (integrated from 0.001 to 4 Hz). We also investigated the dependence of rms amplitude on photon energy and found that the amplitude of the LFN increases with photon energy in a way similar to that reported by \cite{Lew87}. The power spectrum in the 2 - 15 keV range, emphasizing the high frequency portion, is shown in figure 3. We did not find any significant QPO in the kHz range. There is a modest enhancement in the vicinity of 900 Hz, but it is not significant enough to claim a detection. The inferred rms amplitude assuming that all power in that frequency bin were signal power is about 1\%. \section*{Discussion} Based on the presence of strong LFN and the peaked HFN the source was almost certainly in the so-called ``banana'' state during these observations. Whether the source is in an upper or lower banana state is difficult to discern from these short observations. The lack of kHz QPO is interesting, since other atoll sources with strong kHz QPO in the island states, such as 4U 1728-34, do not show kHz QPO (or it is very much weaker) when they are in the banana state.
Indeed, recent observations of 4U 1728-34 in the upper banana state did not show kHz QPO down to rms amplitudes $< 1\%$ \cite{Stroh97}. Similar behavior has been reported for 4U 1820-30, which only shows kHz QPO in a rather narrow range of source intensities \cite{Smale}. These nondetections at high inferred mass accretion rates are an important clue as to the mechanism which produces the kHz QPO. In some models the optical depth to X-ray photons in the vicinity of the inner edge of the accretion disk is an important parameter in determining the frequency and strength of kHz variations \cite{MLS}. For example, it is likely that the optical depth in the inner accretion disk region increases with mass accretion rate as sources move from the banana to upper banana branches in the color - color diagram. One possibility is that any kHz variability is scattered to much lower amplitudes by the increasing optical depth.
\section{Introduction} A linear matrix polynomial $\sA$ (of dimension $k$, in $n$ variables) is a symmetric $k\times k$-matrix whose entries are affine linear polynomials over $\R$, in the variables $\ul X =(X_1,\ldots,X_n)$. Equivalently, it is a linear polynomial in $\ul X$ with coefficients $A_i$ from $\Sym_{k}(\R)$, the space of real symmetric $k\times k$-matrices: $$\sA(\ul X) = A_0 + X_1\cdot A_1 + \ldots + X_n\cdot A_n.$$ For a linear matrix polynomial $\sA$, the set $$\sS(\sA)=\left\{ x\in\R^n\mid \sA(x)\succeq 0\right\}$$ is called a \textit{spectrahedron} or an \textit{LMI set}. Here, $\succeq 0$ denotes positive semidefiniteness. A spectrahedron is thus a generalization of a polyhedron, which one would obtain by using a diagonal matrix polynomial $\sA$. By using non-diagonal matrices, one can have infinitely many linear inequalities defining $\sS(\sA)$: one inequality $y^t\sA(\ul X)y\geq 0$ for every $y\in\R^k$. One can also see spectrahedra as intersections of the cone of positive semidefinite matrices with an affine linear subspace of $\Sym_{k}(\R)$, where the affine subspace is parametrized by $x_1,\ldots, x_n$ (at least if $A_1,\ldots, A_n$ are linearly independent). So the cone of positive semidefinite symmetric $k\times k$-matrices is the standard model of a spectrahedron. Spectrahedra are always convex, semialgebraic and closed, even basic closed semialgebraic, i.e. defined by finitely many simultaneous polynomial inequalities. They are also \textit{rigidly convex}, a condition that was first introduced by Helton and Vinnikov \cite{MR2292953}. The authors show that rigid convexity is also sufficient for a two-dimensional set to be a spectrahedron. Lewis, Parrilo and Ramana \cite{MR2146191} then observed that this proves the Lax conjecture. The question whether every rigidly convex set is a spectrahedron is open in higher dimensions. Also the facial structure of spectrahedra is well known, see for example Ramana and Goldman \cite{MR1342934}.
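Numerically, membership in a spectrahedron amounts to checking the smallest eigenvalue of $\sA(x)$. A minimal numpy sketch, using a standard $2\times 2$ LMI description of the unit disk as an illustrative choice of matrices:

```python
# Sketch: membership in a spectrahedron S(A) = {x : A(x) >= 0}, checked
# via the smallest eigenvalue of A(x) = A0 + x1*A1 + ... + xn*An.
# The matrices below give a standard LMI description of the unit disk:
# A(x) = [[1 - x1, x2], [x2, 1 + x1]], det = 1 - x1^2 - x2^2.
import numpy as np

A0 = np.array([[1.0, 0.0], [0.0, 1.0]])
A1 = np.array([[-1.0, 0.0], [0.0, 1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def in_spectrahedron(x, tol=1e-9):
    Ax = A0 + x[0] * A1 + x[1] * A2
    return np.linalg.eigvalsh(Ax)[0] >= -tol   # smallest eigenvalue

assert in_spectrahedron((0.3, 0.4))        # interior point of the disk
assert in_spectrahedron((0.6, 0.8))        # boundary point: 0.36 + 0.64 = 1
assert not in_spectrahedron((0.9, 0.9))    # outside the disk
```

This is only a pointwise test; the semidefinite programming algorithms cited below optimize over such sets without enumerating points.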
Ramana and Goldman show that the faces of a spectrahedron are parametrized by subspaces of $\R^k$, and that all faces are exposed; see also Section \ref{convsec} below. Spectrahedra are of great importance in polynomial optimization. They occur as sets of feasible solutions in semidefinite optimization problems, which are generalizations of linear optimization problems. There exist efficient numerical algorithms to solve such problems; see Boyd, El Ghaoui, Feron and Balakrishnan \cite{MR1342934} and Vandenberghe and Boyd \cite{MR1379041} for more information. Images of spectrahedra under linear projections are still useful for optimization. They are of the form $$\left\{x\in\R^n\mid\exists y\in\R^m\ \sA(x,y)\succeq 0\right\} ,$$ for some linear matrix polynomial $\sA$ in $n+m$ variables. Such sets are called \textit{semidefinite representable sets}, and they have recently gained a lot of attention. Semidefinite representable sets are always convex and semialgebraic, but \textit{no other} necessary condition is known so far. Helton and Nie \cite{HeltonNieNecSuffSDP} conjecture that \textit{every} convex semialgebraic set is semidefinite representable. So far, the following facts are known: (i) Every spectrahedron is semidefinite representable. Projections of semidefinite representable sets are semidefinite representable. (ii) Finite intersections of semidefinite representable sets are semidefinite representable. (iii) For certain semialgebraic sets $S$, Lasserre's method from \cite{LasserreConvSets} allows one to explicitly construct a semidefinite representation, i.e. a spectrahedron that projects to $S$. The method works for \textit{basic closed} semialgebraic sets, i.e. sets defined by finitely many simultaneous polynomial inequalities, and involves sums of squares representations of linear polynomials. Helton and Nie \cite{HeltonNieSDPrepr} have used this method to prove semidefinite representability under certain curvature conditions on the defining inequalities of a set.
However, the Lasserre method can only work if all faces of the convex set are exposed, see Netzer, Plaumann and Schweighofer \cite{NePlSch}. So there are basic closed semialgebraic convex sets for which the method fails. (iv) The convex hull of a finite union of semidefinite representable sets is again semidefinite representable. This is Helton and Nie \cite{HeltonNieNecSuffSDP}, see also \cite{NeSi}. So one can apply the Lasserre method locally, at least for compact convex sets. Helton and Nie \cite{HeltonNieNecSuffSDP} use this to prove additional curvature results. These seem to be the most important facts on semidefinite representable sets so far. In particular there is a complete lack of results on the semidefinite representability of non-closed semialgebraic sets. In this work we start examining such sets. We show that the relative interior of a semidefinite representable set is always semidefinite representable. The main result is then Theorem \ref{mainthm} below. It states that we can remove all faces of a semidefinite representable set, except those that are parametrized by another semidefinite representable set, and again obtain a semidefinite representable set. This result allows us to produce many new examples. We start with some helpful results on convex sets and semidefinite matrices. \section{Lemmas on convex sets and positive semidefinite matrices}\label{convsec} In this section we state some easy (and probably well known) facts about convex sets and matrices. They will be used in Section \ref{open} below. \begin{LemmaDef} \label{eins}Let $S\subseteq \R^n$ be convex. The \textit{relative interior} $\relint(S)$ of $S$ is the subset of $S$ that forms the interior of $S$ in the affine hull of $S$. So a point $x\in S$ belongs to $\relint(S)$ if and only if for all points $y\in S$ there is some $\ep >0$ such that $x+\ep(x-y)\in S$.
If $z\in\relint(S)$ then another point $x\in S$ belongs to $\relint(S)$ if and only if there is some $\ep>0$ such that $x+\ep(x-z)\in S$. One has $S\subseteq \overline{\relint(S)}$. \end{LemmaDef} \begin{proof}This is an easy exercise.\end{proof} \begin{Lemma}\label{dense} Let $S\subseteq \R^n$ be a convex set and let $T$ be a convex subset of $S$ which is dense in $S$. Then $T$ contains the relative interior $\relint(S)$ of $S$. \end{Lemma} \begin{proof} Without loss of generality assume that $S$ and therefore also $T$ have nonempty interior in $\R^n$. Now assume for contradiction that there is some $x\in\interior(S)$ that does not belong to $T$. Then by separation of disjoint convex sets, we find an affine linear polynomial $0\neq\ell\in\R[\ul X]$ with $\ell(x)\leq 0$ and $\ell\geq 0$ on $T$. Since $T$ has nonempty interior there is some $y\in T$ with $\ell(y)>0$. Since $T\subseteq S$ and $x\in \interior(S)$ we find some $\ep>0$ such that $y':=x+\ep(x-y)\in S$. Since $\ell(y')<0$ and $\ell\geq 0$ on $\overline{T}$, this contradicts $S\subseteq \overline{T}$. \end{proof} \begin{Cor}\label{intproj} Let $S\subseteq \R^{m}$ be convex and let $\ph\colon\R^m\rightarrow \R^n$ be a linear map. Then $$\ph(\relint(S))=\relint(\ph(S)).$$ \end{Cor} \begin{proof} The inclusion "$\subseteq$" is clear. For "$\supseteq$" notice that since $\relint(S)$ is convex and dense in $S$, $\ph(\relint(S))$ is a convex and dense subset of $\ph(S)$. So the claim follows from Lemma \ref{dense}. \end{proof} \begin{Def} Let $S\subseteq \R^n$ be a convex set. A \textit{face of $S$} is a nonempty convex subset $F\subseteq S$ with the following property: for any $x,y\in S$ and $ \la\in (0,1)$, if $\la x +(1-\la)y\in F$ then $x,y\in F$. A face $F$ of $S$ is \textit{exposed}, if either $F=S$ or there is a supporting hyperplane $H$ of $S$ in $\R^n$ such that $S\cap H=F$.
This is equivalent to the existence of an affine linear polynomial $\ell\in\R[\ul X]$ with $\ell\geq 0$ on $S$ and $S\cap \{\ell=0\} =F$. \end{Def} \begin{Lemma} For every point $x\in S$ there is a unique face $F_x$ of $S$ that contains $x$ in its relative interior. $F_x$ consists precisely of the points $y\in S$ for which there is some $\ep>0$ such that $x+\ep(x-y)\in S$. \end{Lemma} \begin{proof} Again an easy exercise. \end{proof} If $S\subseteq\R^n$ is a spectrahedron, defined by the $k$-dimensional linear matrix inequality $\sA(\ul X)\succeq 0$, then every face of $S$ is of the form $$F_U=\{x\in S\mid U\subseteq \ker \sA(x)\}$$ for some subspace $U$ of $\R^k,$ and one has $F_x=F_{\ker \sA(x)}$ for all $x\in S$; every face of $S$ is exposed (see \cite{MR1342934} and also \cite{NePlSch}). We now turn to matrices. The next Proposition will be crucial for the results in Section \ref{open}. \begin{Prop} \label{prop}Let $A\in\Sym_k(\R)$ and $B\in \R^{m\times k}$. Let $I_m$ denote the identity matrix of dimension $m$. Then the following are equivalent: \begin{itemize} \item[(i)] there is some $\lambda\in\R$ such that $\left(\begin{array}{c|c} A& B^t \\\hline B & \lambda \cdot I_m\end{array}\right)\succeq 0$ \item[(ii)] $A\succeq 0$ and $\ker A\subseteq \ker B$ \end{itemize} \end{Prop} \begin{proof} By Theorem 1 in Albert \cite{MR0245582}, (i) is equivalent to the existence of some $\la$ such that $$A\succeq 0,\ B=BA^{\dagger}A, \ \la\cdot I_m -BA^{\dagger}B^t\succeq 0,$$ where $A^{\dagger}$ denotes the Moore-Penrose pseudoinverse matrix of $A$. By Theorem 9.17 in Ahlbrandt and Peterson \cite{MR1423802}, the condition $B=BA^{\dagger}A$ is equivalent to $\ker A\subseteq\ker B$. Finally, one can always choose $\la$ big enough to ensure $\la\cdot I_m -BA^{\dagger}B^t\succeq 0$, which proves the Proposition.
\end{proof} \section{Non-closed semidefinite representable sets}\label{open} All of the existing results on semidefinite representations of sets concern \textit{closed} sets. Our goal in this section is to start examining non-closed sets. The following easy result states that we can always remove faces of semidefinite representable sets, and still obtain semidefinite representability. It does not use the results from Section \ref{convsec} yet. \begin{Prop}\label{faceoff} If $S$ is semidefinite representable and $F$ is a face of $S$, then $F$ and $S\setminus F$ are semidefinite representable. \end{Prop} \begin{proof} First assume that $S$ is a spectrahedron, defined by the linear matrix polynomial $\sA$. Then $F$ is an exposed face of $S$ (by \cite{MR1342934}, Corollary 1), which means that there is an affine linear polynomial $\ell\in\R[\underline{X}]$ such that $\ell\geq 0$ on $S$ and $\{\ell=0\}\cap S=F.$ So we have $$ F=\left\{x\in\R^n\mid \sA(x)\succeq 0 \wedge \ell(x)=0\right\}$$ and $$S\setminus F=\left\{x\in \R^n\mid \sA(x)\succeq 0 \wedge \exists \lambda \left(\begin{array}{cc}\lambda & 1 \\1 & \ell(x)\end{array}\right)\succeq 0 \right\}.$$ This shows that $F$ is even a spectrahedron and $S\setminus F$ is semidefinite representable. Now let $S$ be semidefinite representable and let $\widetilde{S}\subseteq \R^{n+m}$ be a spectrahedron such that $S$ is the image of $\widetilde{S}$ with respect to the projection $\pr\colon \R^{n+m}\rightarrow \R^n$. Then $\widetilde{F}:=\pr^{-1}(F)\cap \widetilde{S}$ is a face of $\widetilde{S}$. Since $\widetilde{F}$ projects onto $F$ and $\widetilde{S}\setminus \widetilde{F}$ projects onto $S\setminus F$, both sets are semidefinite representable. \end{proof} For a semidefinite representable set with only finitely many faces, i.e. for a polyhedron, we thus know that its interior is again semidefinite representable. 
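The $2\times 2$ certificate used in the proof of Proposition \ref{faceoff} encodes the strict inequality $\ell(x)>0$, and this can be sanity-checked numerically. A minimal numpy sketch; the finite grid of trial values for $\lambda$ is an assumption of this illustration, since the actual condition quantifies over all $\lambda\in\R$:

```python
# Sketch of the 2x2 block from the proof of Proposition "faceoff":
# [[lam, 1], [1, l]] >= 0 for some lam  iff  l > 0
# (then any lam >= 1/l works; for l <= 0 the determinant is negative).
import numpy as np

def strict_pos_via_lmi(l, lams=(0.5, 1.0, 10.0, 1e4)):
    # crude existential check over a finite list of candidate lambdas
    for lam in lams:
        M = np.array([[lam, 1.0], [1.0, l]])
        if np.linalg.eigvalsh(M)[0] >= 0.0:
            return True
    return False

assert strict_pos_via_lmi(2.0)        # l > 0: some lam certifies it
assert strict_pos_via_lmi(1e-3)       # small positive l needs a large lam
assert not strict_pos_via_lmi(0.0)    # l = 0: det = -1 for every lam
assert not strict_pos_via_lmi(-1.0)   # l < 0: never PSD
```

This is exactly how existential quantification over auxiliary variables lets semidefinite representations express strict inequalities, and hence non-closed sets.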
But this result is true in general: \begin{Prop}\label{sdpint} If $S$ is semidefinite representable, then $\relint(S)$ is also semidefinite representable. \end{Prop} \begin{proof}First assume that $S$ is a spectrahedron, defined by the matrix polynomial $\sA(\ul X)=A_0 +X_1A_1 +\ldots + X_n A_n.$ Fix a point $z\in \relint(S)$. By Lemma \ref{eins}, $\relint(S)$ has the following description: $$\relint(S)=\left\{ x\in S\mid \exists \ep>0\ x+\ep(x-z)\in S\right\}.$$ For $\ep>0$ we have $\sA(x+\ep(x-z))\succeq 0$ if and only if $\frac{1}{1+\ep}\cdot \sA(x+\ep(x-z))\succeq 0$, and \begin{align*} \frac{1}{1+\ep}\cdot \sA(x+\ep(x-z)) &= \left( \frac{1}{1+\ep} \right)\cdot A_0 +x_1A_1 +\cdots +x_nA_n \\ &\quad -\left(\frac{\ep}{1+\ep}\right)\cdot \left( z_1A_1+\cdots +z_nA_n\right).\end{align*} Making the transformation $\de:=\frac{1}{1+\ep}$ and writing $B:= -(z_1A_1+\cdots +z_nA_n)$ we find $$ \relint(S)=\left\{x\in \R^n\mid \exists \delta\in (0,1)\quad \de A_0 + x_1A_1+\cdots + x_nA_n + (1-\de)B\succeq 0\right\}.$$ Since the condition $\de\in (0,1)$ can be translated into $$\exists\la \ \left(\begin{array}{cc}\la & 1 \\1 & \delta\end{array}\right) \succeq 0 \wedge \left(\begin{array}{cc}\la & 1 \\1 & 1- \delta\end{array}\right)\succeq 0,$$ this is clearly a semidefinite representation of $\relint(S)$. Now let $S$ be semidefinite representable and suppose $\widetilde{S}\subseteq \R^{n+m}$ is a spectrahedron that projects to $S$. Then $\relint(\widetilde{S})$ projects onto $\relint(S)$, by Corollary \ref{intproj}. Since we already know that $\relint(\widetilde{S})$ is semidefinite representable, this proves the claim. \end{proof} \begin{Rem}We also have some quantitative information in this last result. Assume that $S\subseteq\R^n$ is semidefinite representable and $\widetilde{S}\subseteq\R^{n+m}$ is a spectrahedron that projects to $S$. 
If $\widetilde{S}$ is defined by a $k$-dimensional linear matrix polynomial, then $\relint(S)$ is the image of a spectrahedron in $\R^{n+m+2},$ defined by a linear matrix polynomial of dimension $k+4$. This is clear from the proof of Proposition \ref{sdpint}. \end{Rem} \begin{Rem} We could also try to quantify over the element $z$ in the proof of Proposition \ref{sdpint}, instead of using only one fixed $z$ from $\relint(S)$. This would allow us to be more sophisticated in removing faces of $S$. However, the approach from the proof does not seem to work in that case. It relies on the fact that we consider $z$ as a fixed parameter. Otherwise we cannot get rid of the product $(1+\ep)x$ by dividing by $1+\ep$. However, we can still prove something better, using a different method. This is our main result, Theorem \ref{mainthm} below. \end{Rem} By now we have shown that we can remove \textit{finitely many faces} or \textit{all faces} of codimension $\geq 1$ from a semidefinite representable set, and obtain a semidefinite representable set. But with the results from the previous section we can prove more. We start with spectrahedra (recall the notations from Section \ref{convsec}): \begin{Prop}\label{sub}Let $S$ be defined by the $k$-dimensional linear matrix polynomial $\sA(\ul X)$. Then for every subspace $W$ of $\R^k$, the set $$\left\{x\in S\mid \ker \sA(x)\subseteq W\right\}=S\setminus \bigcup_{U \nsubseteq W}F_U$$ is semidefinite representable. \end{Prop} \begin{proof} Choose an $m\times k$-matrix $B$ with $\ker B=W$. By Proposition \ref{prop} we find $$\left\{x\in S\mid \ker \sA(x)\subseteq W\right\}= \left\{ x\in\R^n\mid \exists \la\ \left(\begin{array}{c|c}\sA(x) & B^t \\\hline B & \la\cdot I_m\end{array}\right)\succeq 0 \right\},$$ which is a semidefinite representation. \end{proof} \begin{Rem} If $S$ has nonempty interior, then the linear matrix polynomial $\sA(\ul X)$ can be chosen such that $\sA(\ul X)\succ 0$ defines $\interior(S)$, see \cite{MR2292953}.
Then $$\interior(S)=\{ x\in S\mid \ker \sA(x)\subseteq \{0\} \}$$ is semidefinite representable by Proposition \ref{sub}. This is another way to prove Proposition \ref{sdpint}. \end{Rem} \begin{Ex} \label{ex1}Let $D_2$ be the unit disk in $\R^2,$ defined by the linear matrix polynomial $$\sA(X_1,X_2):=\left(\begin{array}{cc}1-X_1 & X_2 \\X_2 & 1+X_1\end{array}\right),$$ as above. The faces of $D_2$ are $D_2$ itself and the points on the boundary of $D_2$. For $(x_1,x_2)\in D_2$ we have $$\ker \sA(x_1,x_2)=\left\{\begin{array}{ll}\{0\} & \mbox{ if } x_1^2+x_2^2<1 \\\R\cdot (x_2,x_1-1) & \mbox{ if } x_1^2+x_2^2=1, x_1\neq 1\\ \R\cdot (1,0) & \mbox{ if } (x_1,x_2)=(1,0)\end{array}\right.$$ So one checks that for any one-dimensional subspace $W$ of $\R^2$, the set $$\left\{ (x_1,x_2)\in S\mid \ker \sA(x_1,x_2)\subseteq W\right\}$$ is the open unit disk together with one point on the boundary. Since the convex hull of a finite union of semidefinite representable sets is again semidefinite representable (by \cite{HeltonNieNecSuffSDP}, Theorem 2.2), we obtain that the open unit disk together with finitely many points on the boundary is semidefinite representable. By Proposition \ref{faceoff}, also $D_2$ with finitely many points on the boundary removed is semidefinite representable. \end{Ex} So Propositions \ref{faceoff}, \ref{sdpint} and \ref{sub} tell us that we can either remove finitely many faces or "almost all" of the faces of a spectrahedron and obtain a semidefinite representable set. But we would also like to do something in between, for example remove a semi-arc from the boundary of the disk. This leads to our main result: For a convex set $S$ and $z\in S$ we denote by $\mathcal{F}(z,S)$ the set of all faces of $S$ that contain $z$. In particular always $S\in \mathcal{F}(z,S)$. For a set $T\subseteq S$ we denote by $(T\looparrowleft S)$ the union of the interiors of all faces of $S$ that are touched by $T$, i.e. 
$$(T\looparrowleft S) :=\bigcup_{z\in T}\ \bigcup_{ F\in \mathcal{F}(z,S)}\relint(F).$$ \begin{Thm} \label{mainthm}Let $T\subseteq S\subseteq \R^n$ be semidefinite representable sets. Then $(T\looparrowleft S)$ is also semidefinite representable. \end{Thm} \begin{proof} First assume that $S$ is a spectrahedron. Let $\sA(\ul X)$ be a $k$-dimensional symmetric linear matrix polynomial defining $S$. For any $z\in T$ we have \begin{align*}\bigcup_{F\in \mathcal{F}(z,S) } \relint(F) & = \left\{ x\in S\mid z\in F_x\right\} \\ &= \left\{ x\in \R^n\mid \sA(x)\succeq 0, \ker \sA(x)\subseteq \ker \sA(z)\right\}. \end{align*} So by Proposition \ref{prop} we have $$(T\looparrowleft S)=\left\{ x\in\R^n\mid \exists z\in T\ \exists \la \ \left(\begin{array}{c|c}\sA(x) & \sA(z) \\\hline \sA(z) & \la\cdot I_k\end{array}\right)\succeq 0\right\},$$ which is a semidefinite representation. Now let $S$ be semidefinite representable. So there is a spectrahedron $\widetilde{S}$ in some $\R^{n+m}$ that projects onto $S$ via the projection map $\pr\colon \R^{n+m}\rightarrow \R^n$. Define $$\widetilde{T}:=\pr^{-1}(T)\cap \widetilde{S}=\left\{ (x,y)\in \R^{n+m}\mid (x,y)\in \widetilde{S} , x \in T\right\},$$ which is clearly a semidefinite representable subset of $\widetilde{S}.$ We now know that $(\widetilde{T}\looparrowleft \widetilde{S})$ is semidefinite representable, so we finish the proof by showing $$ \pr\left((\widetilde{T}\looparrowleft \widetilde{S})\right) = (T \looparrowleft S).$$ For "$\subseteq$" let $(x,y)\in (\widetilde{T}\looparrowleft \widetilde{S})$ be given. We have to show $x\in (T\looparrowleft S)$. There is some $(v,w)\in \widetilde{T}$ and some face $\widetilde{F}\in \mathcal{F}((v,w),\widetilde{S})$ such that $(x,y)\in\relint(\widetilde{F}).$ So there is some $\ep>0$ such that $(x,y)+\ep\left((x,y)-(v,w)\right)\in \widetilde{F}.$ So $x+\ep(x-v)\in\pr(\widetilde{F})\subseteq S.$ This implies $v\in F_x$, so $F_x\in\mathcal{F}(v,S)$ and clearly $x\in\relint(F_x)$. 
Since $v\in T$ this proves $x\in (T\looparrowleft S)$. For "$\supseteq$" let $F$ be a face of $S$ that contains some element from $T$. Then $\widetilde{F}:=\pr^{-1}(F) \cap \widetilde{S}$ is a face of $\widetilde{S}$ that contains some element from $\widetilde{T}$. By Corollary \ref{intproj} we find $$\pr\left(\relint(\widetilde{F})\right)= \relint\left( \pr(\widetilde{F})\right)= \relint(F),$$ which proves the desired inclusion. \end{proof} \begin{Rems} \begin{itemize} \item[(0)] One has $(S\looparrowleft S)=S$ and $(\emptyset\looparrowleft S)=\emptyset$ for any convex set $S$. Clearly $T\subseteq T'\subseteq S$ implies $(T\looparrowleft S)\subseteq (T'\looparrowleft S).$ \item[(i)] For a point $x\in \relint(S)$ one has $(\{x\}\looparrowleft S)= \relint(S).$ So Theorem \ref{mainthm} generalizes Proposition \ref{sdpint} from above. \item[(ii)] $(T\looparrowleft S)$ always contains $T$, and also $\relint(S)$ as long as $T\neq \emptyset$. \item[(iii)] The semidefinite representation of $(T\looparrowleft S)$ is explicitly given in the proof of Theorem \ref{mainthm}. So one checks, for example, that it preserves rational coefficients from a semidefinite representation of $T$ and $S$. \end{itemize} \end{Rems} \begin{Ex} Let $D_2$ be the unit disk in $\R^2$. We find that we can remove any arc in the boundary of $D_2$ (and therefore any semialgebraic subset of the boundary) and obtain a semidefinite representable set. This is implied by Theorem \ref{mainthm}. For any arc in the boundary of $D_2$ one simply has to provide a semidefinite representable subset $T$ of $D_2$ that touches the boundary of $D_2$ precisely in the points that do not belong to the given arc. This is always possible, as one easily checks. \end{Ex} \begin{Ex}Consider the following subset $S$ of $\R^2$: $$S=D_2\cup \left([-1,1]\times [0,1]\right).$$ $S$ is not a spectrahedron, since it is not even basic closed semialgebraic (and has a non-exposed face).
But it is semidefinite representable, which for example follows from Theorem 2.2 in Helton and Nie \cite{HeltonNieNecSuffSDP}. Now consider the subset $T$ of $S$ defined by $$T=\{(x,y)\in S\mid \vert x \vert -1 \leq y \leq 0 \}.$$ Then $(T\looparrowleft S)$ consists of $\interior(S)$ together with the point $ (0,-1)$ and the set $\{-1,1\}\times [0,1).$ Since $S$ and $T$ are semidefinite representable, so is $(T\looparrowleft S)$. \end{Ex}
\section{Introduction} Plurality voting is a popular tool for collective decision-making in many domains, including both human societies and multiagent systems. Under this voting rule, each voter is supposed to vote for her most favorite candidate (or abstain); the winner is then the candidate that receives the highest number of votes. If several candidates have the highest score, the winner is chosen among them using a {\em tie-breaking rule}; popular tie-breaking rules include the {\em lexicographic rule}, which imposes a fixed priority order over the candidates; the {\em random candidate rule}, which picks one of the tied candidates uniformly at random; and the {\em random voter rule}, which picks the winner among the tied candidates according to the preferences of a randomly chosen voter. In practice, voters are often {\em strategic}, i.e., they may vote non-truthfully if they can benefit from doing so. In that case, an election can be viewed as a game, where the voters are the players, and each player's space of actions includes voting for any candidate or abstaining. For deterministic rules (such as Plurality with lexicographic tie-breaking), the behavior of strategic voters is determined by their preference ordering, i.e., a ranking of the candidates, whereas for randomized rules a common approach is to specify utility functions for the voters; i.e., the voters are assumed to maximize their {\em expected utility} under the lottery induced by tie-breaking. The outcome of the election can then be identified with a pure Nash equilibrium (PNE) of the resulting game. However, for the Plurality voting game with $3$ or more voters, this approach fails to provide a useful prediction of voting behavior: for each candidate $c$ there is a PNE where $c$ is the unique winner, irrespective of the voters' preferences. Indeed, if there are at least $3$ voters, the situation where all of them vote for $c$ is a PNE, as no voter can unilaterally change the election outcome. 
However, such equilibria may disappear if we use a more refined model of voters' preferences that captures additional aspects of their decision-making. For instance, in practice, if a voter feels that her vote is unlikely to have any effect on the election outcome, she may decide to abstain from the election. Also, voters may be averse to lying about their preferences, in which case they can be expected to vote for their top candidate unless there is a clear strategic reason to vote for someone else. By taking into account these aspects of voters' preferences, we obtain a more faithful model of their behavior. The problem of characterizing and computing the equilibria of Plurality voting, both for ``lazy'' voters (i.e., ones who prefer to abstain when they are not pivotal) and for ``truth-biased'' voters (ones who prefer to vote truthfully when they are not pivotal), has recently received a considerable amount of attention. However, it is difficult to compare the existing results, since they rely on different tie-breaking rules. In particular, \cite{des-elk:c:eq}, who study lazy voters, use the random candidate tie-breaking rule, and~\cite{obr-mar-tho:c:truth-biased} consider truth-biased voters and the lexicographic tie-breaking rule. Thus, it is not clear whether the differences between the results in these papers can be attributed to voters' secondary preferences or to the tie-breaking rule. The primary goal of our paper is to tease out the effects of different features of these models, by systematically considering various combinations of secondary preferences and tie-breaking rules. 
We consider two types of secondary preferences (lazy voters and truth-biased voters) and three tie-breaking rules (the lexicographic rule, the random voter rule, and the random candidate rule); while two of these combinations have been studied earlier by Desmedt and Elkind~\cite{des-elk:c:eq} and Obraztsova et al.~\cite{obr-mar-tho:c:truth-biased}, to the best of our knowledge, the remaining four possibilities have not been considered before. For each of the new scenarios, we characterize the set of PNE for the resulting game; in doing so, we also fill in a gap in the characterization of Desmedt and Elkind for lazy voters and random candidate tie-breaking. We then consider the problems of deciding whether a given game admits a PNE and whether a given candidate can be a co-winner/unique winner in some PNE of a given game. For all settings we consider, we determine the computational complexity of each of these problems, classifying them as either polynomial-time solvable or NP-complete. We use our characterization results to analyze the impact of various features of our models on the election outcomes. Finally, we extend our results to the setting where some of the voters may be {\em principled}, i.e., are guaranteed to vote truthfully. \smallskip \noindent{\bf Related Work\quad} Equilibria of Plurality voting have been investigated by a number of researchers, starting with~\cite{far:b:voting}. However, most of the earlier works either consider solution concepts other than pure Nash equilibria, such as iterative elimination of dominated strategies~\cite{mou:j:dominance,dhi-loc:j:dominance}, or assume that voters have incomplete information about each others' preferences~\cite{mye-web:j:voting}. Both types of secondary preferences (lazy voters and truth-biased voters) appear in the social choice literature, see, respectively, \cite{bat:j:abstentions,bor:j:costly,sin-ian:j:costly} and \cite{dut-sen:j:nash,lom-yos:j:nash}. 
In computational social choice, truth-biased voters have been considered by Meir et al.~\cite{mei-pol:c:convergence} in the context of dynamics of Plurality voting; subsequently, Plurality elections with truth-biased voters have been investigated empirically by Thompson et al.~\cite{tho-lev-ley:c:empirical} and theoretically by Obraztsova et al.~\cite{obr-mar-tho:c:truth-biased}. To the best of our knowledge, the only paper to study computational aspects of Plurality voting with lazy voters is that of Desmedt and Elkind~\cite{des-elk:c:eq}. Our approach to tie-breaking is well-grounded in existing works. Lexicographic tie-breaking is standard in the computational social choice literature. The random candidate rule has been discussed by Desmedt and Elkind~\cite{des-elk:c:eq}, and, more recently, by Obraztsova, Elkind and Hazon~\cite{obr-elk-haz:c:ties} and Obraztsova and Elkind~\cite{obr-elk:c:ties2}. The random voter rule is used to break ties under the Schulze method~\cite{sch:j:schulze}; complexity of manipulation under this tie-breaking rule has been studied by Aziz et al.~\cite{azi-gas-mat:c:random-voter}. \section{Preliminaries} \noindent For any positive integer $t$, we denote the set $\{1, \dots, t\}$ by $[t]$. We consider elections with a set of {\em voters} $N=[n]$ and a set of {\em alternatives}, or {\em candidates}, $C = \{c_1, \dots, c_m\}$. Each voter is associated with a {\em preference order}, i.e., a strict linear order over $C$; we denote the preference order of voter $i$ by $\succ_i$. The list $(\succ_1,\dots, \succ_n)$ is called a {\em preference profile}. For each $i\in N$, we set $a_i$ to be the top choice of voter $i$, and let ${{\mathbf{a}}}=(a_1, \dots, a_n)$. Given two disjoint sets of candidates $X$, $Y$ and a preference order $\succ$, we write $X \succ Y$ if in $\succ$ all candidates from $X$ are ranked above all candidates from $Y$.
We also assume that each voter $i\in N$ is endowed with a {\em utility function} $u_i:C\to{\mathbb N}$; $u_i(c_j)$ is the utility derived by voter $i$ if $c_j$ is the unique election winner. We require that $u_i(c)\neq u_i(c')$ for all $i\in N$ and all $c, c'\in C$ such that $c\neq c'$. The vector ${{\mathbf{u}}}=(u_1,\dots, u_n)$ is called the {\em utility profile}. Voters' preference orders and utility functions are assumed to be consistent, i.e., for each $i\in N$ and every pair of candidates $c, c'\in C$ we have $c\succ_i c'$ if and only if $u_i(c)>u_i(c')$; when this is the case, we will also say that $\succ_i$ is {\em induced} by $u_i$. Sometimes, instead of specifying preference orders explicitly, we will specify the utility functions only, and assume that voters' preference orders are induced by their utility functions; on other occasions, it will be convenient to reason in terms of preference orders. A {\em lottery} over $C$ is a vector ${{\mathbf{p}}} = (p_1, \dots, p_m)$ with $p_j\ge 0$ for all $j\in[m]$ and $\sum_{j\in[m]} p_j=1$. The value $p_j$ is the probability assigned to candidate $c_j$. The {\em expected utility} of a voter $i\in N$ from a lottery ${{\mathbf{p}}}$ is given by $\sum_{j\in[m]}u_i(c_j)p_j$. In this paper we consider Plurality elections. In such elections each voter $i\in N$ submits a {\em vote}, or {\em ballot}, $b_i\in C\cup\{\bot\}$; if $b_i=\bot$, voter $i$ is said to {\em abstain}. The list of all votes ${{\mathbf{b}}}=(b_1, \dots, b_n)$ is also called a {\em ballot vector}. We say that a ballot vector is {\em trivial} if $b_i=\bot$ for all $i\in N$. Given a ballot vector ${{\mathbf{b}}}$ and a ballot $b'$, we write $({{\mathbf{b}}}_{-i}, b')$ to denote the ballot vector obtained from ${{\mathbf{b}}}$ by replacing $b_i$ with $b'$. The {\em score} of an alternative $c_j$ in an election with ballot vector ${{\mathbf{b}}}$ is given by ${{\mathrm{sc}}}(c_j, {{\mathbf{b}}}) = |\{i\in N\mid b_i = c_j\}|$. 
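The consistency condition between utilities and preference orders, and the expected-utility computation, are straightforward to make concrete. The following is a minimal sketch; the dictionary encoding of $u_i$ and the function names are illustrative choices, not part of the paper's formal model:

```python
from fractions import Fraction

def induced_order(u):
    """Preference order induced by a utility function u: C -> N (best first).

    Well-defined because utilities are required to be pairwise distinct."""
    return sorted(u, key=u.get, reverse=True)

def expected_utility(u, p):
    """Expected utility sum_j u(c_j) * p_j of a voter under lottery p."""
    return sum(u[c] * p[c] for c in p)
```

For instance, a voter with $u_i(c_1)=3$, $u_i(c_2)=1$, $u_i(c_3)=2$ has the induced order $c_1 \succ_i c_3 \succ_i c_2$, and her expected utility under the uniform lottery over $C$ is $2$.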
Given a ballot vector ${{\mathbf{b}}}$, we set $M({{\mathbf{b}}})=\max_{c\in C}{{\mathrm{sc}}}(c, {{\mathbf{b}}})$ and let $W({{\mathbf{b}}}) = \{c\in C\mid {{\mathrm{sc}}}(c,{{\mathbf{b}}}) = M({{\mathbf{b}}})\}$, $H({{\mathbf{b}}}) = \{c\in C\mid {{\mathrm{sc}}}(c,{{\mathbf{b}}}) = M({{\mathbf{b}}})-1\}$, $H'({{\mathbf{b}}}) = \{c\in C\mid {{\mathrm{sc}}}(c,{{\mathbf{b}}}) = M({{\mathbf{b}}})-2\}$. The set $W({{\mathbf{b}}})$ is called the {\em winning set}. Note that if ${{\mathbf{b}}}$ is trivial then $W({{\mathbf{b}}})=C$. If $|W({{\mathbf{b}}})|=1$ then the unique candidate in $W({{\mathbf{b}}})$ is declared to be the winner. Otherwise, the winner is selected from $W({{\mathbf{b}}})$ according to one of the following tie-breaking rules. \begin{itemize} \item [(1)] Under the {\em lexicographic rule $R^L$}, the winner is the candidate $c_j\in W({{\mathbf{b}}})$ such that $j\le k$ for all $c_k\in W({{\mathbf{b}}})$. \item [(2)] Under the {\em random candidate rule $R^C$}, the winner is chosen from $W({{\mathbf{b}}})$ uniformly at random. \item [(3)] Under the {\em random voter rule $R^V$}, we select a voter from $N$ uniformly at random; if she has voted for a candidate in $W({{\mathbf{b}}})$, we output this candidate; otherwise we ask this voter to report her most preferred candidate in $W({{\mathbf{b}}})$, and output the answer. This additional elicitation step may appear difficult to implement in practice; fortunately, we can show that, in equilibrium, it is almost never necessary. \end{itemize} Thus, the outcome of an election is a lottery over~$C$; however, for $R^L$ this lottery is degenerate, i.e., it always assigns the entire probability mass to a single candidate. For each $X\in\{L, C, V\}$ and each ballot vector~${{\mathbf{b}}}$, let ${{\mathbf{p}}}^X({{\mathbf{b}}})$ denote the lottery that corresponds to applying $R^X$ to the set $W({{\mathbf{b}}})$.
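The winning set and the three lotteries $\mathbf{p}^L(\mathbf{b})$, $\mathbf{p}^C(\mathbf{b})$, $\mathbf{p}^V(\mathbf{b})$ can be computed directly from these definitions. The Python sketch below is illustrative only: encoding $\bot$ as None and passing preference orders as best-first lists are implementation choices, and exact rational arithmetic is used to avoid rounding issues.

```python
from fractions import Fraction

def winning_set(ballots, candidates):
    """W(b): candidates with maximal Plurality score; abstention is None."""
    sc = {c: sum(1 for b in ballots if b == c) for c in candidates}
    top = max(sc.values())
    return [c for c in candidates if sc[c] == top]

def lottery(rule, ballots, prefs, candidates):
    """p^X(b) for X in {'L', 'C', 'V'}; prefs[i] lists candidates, best first."""
    W = winning_set(ballots, candidates)
    p = {c: Fraction(0) for c in candidates}
    if rule == 'L':                  # lowest-indexed tied candidate wins
        p[min(W, key=candidates.index)] = Fraction(1)
    elif rule == 'C':                # uniform over the winning set
        for c in W:
            p[c] = Fraction(1, len(W))
    elif rule == 'V':                # a random voter's ballot if it is in W,
        for b, pref in zip(ballots, prefs):  # else her favourite in W
            pick = b if b in W else min(W, key=pref.index)
            p[pick] += Fraction(1, len(ballots))
    return p
```

Note that the trivial ballot vector needs no special case: all scores are $0$, so $W(\mathbf{b})=C$, as stipulated above.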
Note also that for every $c_j\in C$ it holds that if $p^C_j({{\mathbf{b}}})\neq 0$ then $p^C_j({{\mathbf{b}}})\ge \frac{1}{m}$; similarly, if $p^V_j({{\mathbf{b}}})\neq 0$ then $p^V_j({{\mathbf{b}}})\ge \frac{1}{n}$. In what follows, we consider {\em lazy} voters, who prefer to abstain when their vote has no effect on the election outcome, and {\em truth-biased} voters, who never abstain, but prefer to vote truthfully when their vote has no effect on the election outcome. Formally, pick $\varepsilon<\min\{\frac{1}{m}, \frac{1}{n}\}$, and consider a utility profile ${{\mathbf{u}}}$ and a tie-breaking rule $R^X\in\{R^C, R^V, R^L\}$. Then \begin{itemize} \item if voter $i$ is {\em lazy}, her utility in an election with ballot vector ${{\mathbf{b}}}$ under tie-breaking rule $R^X$ is given by $$ U_i({{\mathbf{b}}})= \begin{cases} \sum_{j\in [m]}p^X_j({{\mathbf{b}}})u_i(c_j) & \text{if $b_i\in C$},\\ \sum_{j\in [m]}p^X_j({{\mathbf{b}}})u_i(c_j)+\varepsilon & \text{if $b_i=\bot$}. \end{cases} $$ \item if voter $i$ is {\em truth-biased}, her utility in an election with ballot vector ${{\mathbf{b}}}$ under tie-breaking rule $R^X$ is given by $$ U_i({{\mathbf{b}}})= \begin{cases} \sum_{j\in [m]}p^X_j({{\mathbf{b}}})u_i(c_j) &\text{if $b_i\in C\setminus\{a_i\}$},\\ \sum_{j\in [m]}p^X_j({{\mathbf{b}}})u_i(c_j)+\varepsilon &\text{if $b_i=a_i$},\\ -\infty&\text{if $b_i=\bot$}. \end{cases} $$ \end{itemize} We consider settings where all voters are of the same type, i.e., either all voters are lazy or all voters are truth-biased; we refer to these settings as {\em lazy} or {\em truth-biased}, respectively, and denote the former by ${{\mathcal{L}}}$ and the latter by ${{\mathcal{T}}}$. In what follows, we consider all possible combinations of settings (${{\mathcal{L}}}$, ${{\mathcal{T}}}$) and tie-breaking rules ($R^L$, $R^C$, $R^V$). 
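Under these definitions, a voter's overall utility is her expected utility from the tie-breaking lottery plus a small bonus $\varepsilon$ for her preferred default action (abstaining when lazy, voting truthfully when truth-biased). A minimal sketch, taking the lottery $\mathbf{p}^X(\mathbf{b})$ as a precomputed dictionary; the function names and the encoding of $\bot$ as None are illustrative:

```python
def lazy_utility(i, ballots, p, u, eps):
    """U_i(b) for a lazy voter: expected utility, plus eps if she abstains."""
    exp_u = sum(u[c] * p[c] for c in p)
    return exp_u + (eps if ballots[i] is None else 0.0)

def truth_biased_utility(i, ballots, p, u, top, eps):
    """U_i(b) for a truth-biased voter: eps bonus for voting for her true
    top choice a_i (passed as `top`); abstaining yields utility -infinity."""
    if ballots[i] is None:
        return float('-inf')
    exp_u = sum(u[c] * p[c] for c in p)
    return exp_u + (eps if ballots[i] == top else 0.0)
```

Here `eps` must satisfy $\varepsilon<\min\{\frac{1}{m},\frac{1}{n}\}$, which guarantees that the bonus can never outweigh a change in the tie-breaking lottery.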
A combination of a setting ${{\mathcal{S}}}\in\{{{\mathcal{L}}}, {{\mathcal{T}}}\}$, a tie-breaking rule $R\in\{R^L, R^C, R^V\}$ and a utility profile ${{\mathbf{u}}}$ induces a strategic game, which we will denote by $({{\mathcal{S}}}, R, {{\mathbf{u}}})$: in this game, the players are the voters, the action space of each player is $C\cup\{\bot\}$, and the players' utilities $U_1, \dots, U_n$ for a vector of actions ${{\mathbf{b}}}$ are computed based on the setting and the tie-breaking rule as described above. We say that a ballot vector ${{\mathbf{b}}}$ is a {\em pure Nash equilibrium (PNE)} of the game $({{\mathcal{S}}}, R, {{\mathbf{u}}})$ if $U_i({{\mathbf{b}}})\ge U_i({{\mathbf{b}}}_{-i},b')$ for every voter $i\in N$ and every $b'\in C\cup\{\bot\}$. For each setting ${{\mathcal{S}}}\in\{{{\mathcal{L}}}, {{\mathcal{T}}}\}$ and each tie-breaking rule $R\in\{R^L, R^C,R^V\}$, we define three algorithmic problems, which we call $({{\mathcal{S}}}, R)$-{\sc ExistNE}, $({{\mathcal{S}}}, R)$-{\sc TieNE}, and $({{\mathcal{S}}}, R)$-{\sc SingleNE}. In each of these problems, we are given a candidate set $C$, $|C|=m$, a voter set $N$, $|N|=n$, and a utility vector ${{\mathbf{u}}}=(u_1, \dots, u_n)$, where each $u_i$ is represented by $m$ numbers $u_i(c_1), \dots, u_i(c_m)$; these numbers are positive integers given in binary. In $({{\mathcal{S}}}, R)$-{\sc TieNE} and $({{\mathcal{S}}}, R)$-{\sc SingleNE} we are also given the name of a target candidate $c_p\in C$. In $({{\mathcal{S}}}, R)$-{\sc ExistNE} we ask if $({{\mathcal{S}}}, R, {{\mathbf{u}}})$ has a PNE. In $({{\mathcal{S}}}, R)$-{\sc TieNE} we ask if $({{\mathcal{S}}}, R, {{\mathbf{u}}})$ has a PNE ${{\mathbf{b}}}$ with $|W({{\mathbf{b}}})|>1$ and $c_p\in W({{\mathbf{b}}})$. In $({{\mathcal{S}}}, R)$-{\sc SingleNE} we ask if $({{\mathcal{S}}}, R, {{\mathbf{u}}})$ has a PNE ${{\mathbf{b}}}$ with $W({{\mathbf{b}}})=\{c_p\}$. 
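Verifying that a given ballot vector is a PNE amounts to checking all $n(m+1)$ unilateral deviations, a polynomial-time test; this is the certificate check behind membership of these problems in NP. A brute-force sketch for the simplest combination, lazy voters with lexicographic tie-breaking (the function names and the float `eps` are illustrative):

```python
def lex_winner(ballots, candidates):
    """Plurality winner under R^L; abstention is encoded as None."""
    sc = {c: sum(1 for b in ballots if b == c) for c in candidates}
    best = max(sc.values())
    return next(c for c in candidates if sc[c] == best)  # lowest index wins

def is_pne_lazy_lex(ballots, utils, candidates, eps=1e-3):
    """Check U_i(b) >= U_i(b_{-i}, b') for every voter i and ballot b'."""
    def U(i, b):
        return utils[i][lex_winner(b, candidates)] + (eps if b[i] is None else 0.0)
    for i in range(len(ballots)):
        for dev in list(candidates) + [None]:
            if U(i, ballots[:i] + [dev] + ballots[i + 1:]) > U(i, ballots):
                return False
    return True
```

For example, with two lazy voters who both rank $c_2$ above $c_1$, the vector in which one voter votes for $c_2$ and the other abstains is a PNE, while the vector in which both vote for $c_2$ is not: either voter could abstain, leave $c_2$ the winner, and gain $\varepsilon$.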
Each of these problems is obviously in NP, as we can simply guess an appropriate ballot vector ${{\mathbf{b}}}$ and check that it is a PNE. We omit some of the proofs due to space constraints; these proofs can be found in the supplementary material. \section{Lazy Voters}\label{sec:lazy} \noindent In this section, we study PNE in Plurality games with lazy voters. The case where the tie-breaking rule is $R^C$ has been analyzed in detail by Desmedt and Elkind~\cite{des-elk:c:eq}, albeit for a slightly different model; we complement their results by considering $R^L$ and~$R^V$. We start by extending a result of Desmedt and Elkind to all three tie-breaking rules considered in this paper. \begin{proposition}\label{prop:lazy-basic} For every $R\in\{R^L,R^C, R^V\}$ and every utility profile ${{\mathbf{u}}}$, if a ballot vector ${{\mathbf{b}}}$ is a PNE of $({{\mathcal{L}}},R,{{\mathbf{u}}})$ then for every voter $i\in N$ either $b_i=\bot$ or $b_i\in W({{\mathbf{b}}})$. Further, if $|W({{\mathbf{b}}})|=1$, then there exists exactly one voter $i\in N$ with $b_i\neq\bot$. \end{proposition} \begin{proof} Suppose that $b_i\not\in W({{\mathbf{b}}})$ for some voter $i\in N$. Then if $i$ changes her vote to $\bot$, the set $W({{\mathbf{b}}})$ will not change, so $i$'s utility would improve by $\varepsilon$, a contradiction with ${{\mathbf{b}}}$ being a PNE of $({{\mathcal{L}}},R,{{\mathbf{u}}})$. Similarly, suppose that $|W({{\mathbf{b}}})|=1$ and there are two voters $i,i'\in N$ with $b_i\neq\bot$, $b_{i'}\neq\bot$. It has to be the case that $b_i=b_{i'}=c_j$, where $W({{\mathbf{b}}})=\{c_j\}$: by the first part of the proposition, no voter votes for a candidate outside of $W({{\mathbf{b}}})$. But then if voter $i$ changes her vote to $\bot$, $c_j$ will remain the election winner, so $i$'s utility would improve by $\varepsilon$, a contradiction.
\end{proof} \iffalse Desmedt and Elkind show that under $R^C$ checking whether a given utility profile admits a PNE is NP-hard, and we show that this hardness result also applies to the case where the ties are broken according to the preferences of the random voters. This motivates us to consider the complexity of this problem for commonly studied restricted preference domains, such as single-peaked and single-crossing preferences. We provide polynomial-time algorithms for $({{\mathcal{L}}}, R^C)$-{\sc ExistNE} and $({{\mathcal{L}}}, R^C)$-{\sc ChanceNE} under single-peaked and single-crossing $m$-decreasing preferences. \fi \smallskip \noindent{\bf Lexicographic Tie-breaking\quad} The scenario where voters are lazy and ties are broken lexicographically turns out to be fairly easy to analyze. \begin{theorem}\label{thm:lazy-lex-char} For any utility profile ${{\mathbf{u}}}$ the game $G = ({{\mathcal{L}}}, R^L, {{\mathbf{u}}})$ has the following properties: \begin{enumerate} \item If ${{\mathbf{b}}}$ is a PNE of $G$ then $|W({{\mathbf{b}}})|\in\{1,m\}$. Moreover, $|W({{\mathbf{b}}})|=m$ if and only if ${{\mathbf{b}}}$ is the trivial ballot and all voters rank $c_1$ first. \item If ${{\mathbf{b}}}$ is a PNE of $G$ then there exists at most one voter $i$ with $b_i\neq \bot$. \item $G$ admits a PNE if and only if all voters rank $c_1$ first (in which case $c_1$ is the unique PNE winner) or there exists a candidate $c_j$ with $j>1$ such that (i) ${{\mathrm{sc}}}(c_j,{{\mathbf{a}}})>0$ and (ii) for every $k<j$ it holds that all voters prefer $c_j$ to $c_k$. If such a candidate exists, he is unique, and wins in all PNE of $G$. \end{enumerate} \end{theorem} \iffalse \begin{proof} Fix a utility profile ${{\mathbf{u}}}$ and a ballot ${{\mathbf{b}}}$ such that ${{\mathbf{b}}}$ is a PNE of $G = ({{\mathcal{L}}}, R^L, {{\mathbf{u}}})$. To prove the first claim, suppose first that $1<|W({{\mathbf{b}}})|$ and ${{\mathbf{b}}}$ is not trivial. 
Then there are two candidates $c_j, c_k\in W({{\mathbf{b}}})$, $j<k$, such that ${{\mathrm{sc}}}(c_j, {{\mathbf{b}}})>0$ and ${{\mathrm{sc}}}(c_k,{{\mathbf{b}}})>0$. Hence, there exists at least one voter who votes for $c_k$. However, the election outcome will not change if this voter abstains, a contradiction with ${{\mathbf{b}}}$ being a PNE of $G$. Now, suppose that ${{\mathbf{b}}}$ is trivial. In this case $W({{\mathbf{b}}})=C$ and $c_1$ wins. If any voter prefers some other candidate $c$ to $c_1$, she can improve her utility by voting for $c$, as this will change the election outcome to $c$. On the other hand, if all voters rank $c_1$ first, the trivial ballot is clearly a PNE. The second claim follows from our first claim and Proposition~\ref{prop:lazy-basic}. To prove the third claim, suppose that there exists a candidate $c_j$, $j > 1$, satisfying conditions (i) and (ii). Consider a ballot vector ${{\mathbf{b}}}$ where $b_i=c_j$ for some voter $i$ with $a_i=c_j$ (the existence of such voter is guaranteed by condition (i)) and $b_{i'}=\bot$ for all $i'\in N\setminus\{i\}$. Voter $i$ cannot benefit from voting for another candidate or abstaining, as this will change the election outcome to one she likes less than the current outcome. Any other voter can only change the election outcome if she votes for a candidate $c_k$ with $k<j$. But then condition (ii) implies that no voter wants the election outcome to change in this way. Conversely, suppose that ${{\mathbf{b}}}$ is a PNE. We have argued that either ${{\mathbf{b}}}$ is trivial or $b_i=c_j$ for some $i\in N$ and some $c_j\in C$ and $b_{i'}=\bot$ for all $i'\in N\setminus\{i\}$. In the latter case, if $c_j\neq a_i$, voter $i$ can improve her utility by voting for $a_i$. Moreover, if $j=1$, voter $i$ can improve her utility by abstaining, as $c_1$ would remain the election winner in this case. 
Finally, if there exists a candidate $c_k$ with $k<j$ such that some voter $i'$ prefers $c_k$ to $c_j$, then $i'$ can change the election outcome to $c_k$ by voting for $c_k$. It remains to show that conditions (i) and (ii) can be satisfied by at most one candidate. To see this, note that if both $c_j$ and $c_k$ satisfy condition (i) and $j<k$, then $c_k$ violates condition (ii), as the voter who ranks $c_j$ first clearly prefers $c_j$ to $c_k$. \end{proof} \fi The following corollary is directly implied by Theorem~\ref{thm:lazy-lex-char}. \begin{corollary}\label{cor:lazy-lex-easy} $({{\mathcal{L}}}, R^L)$-{\sc ExistNE}, $({{\mathcal{L}}}, R^L)$-{\sc SingleNE} and $({{\mathcal{L}}}, R^L)$-{\sc TieNE} are in~{\em P}. \end{corollary} \begin{remark}\label{rem:lazy-lex} {\em The reader may observe that, counterintuitively, while the lexicographic tie-breaking rule appears to favor $c_1$, it is impossible for $c_1$ to win the election unless he is ranked first by all voters. In contrast, $c_2$ wins the election as long as he is ranked first by at least one voter and no voter prefers $c_1$ to $c_2$. In general, the lexicographic tie-breaking rule favors lower-numbered candidates with the exception of $c_1$. As for $c_1$, his presence mostly has a destabilizing effect: if some, but not all voters rank $c_1$ first, no PNE exists. This phenomenon is an artifact of our treatment of the trivial ballot vector: it disappears if we assume (as Desmedt and Elkind do) that when ${{\mathbf{b}}}=(\bot,\dots,\bot)$ the election is declared invalid and the utility of each voter is $-\infty$: under this assumption $c_1$ is the unique possible equilibrium winner whenever he is ranked first by at least one voter. } \end{remark} \iffalse \begin{proposition} \textbf{Lazy voters, lexicographic tie-breaking rule}. In strong Nash Equilibrium, at most one alternative $a$ can win the elections; $a$ must be a winner in pure Nash Equilibria. 
Also, any other alternative $b \neq a$ must be ranked higher than $a$ by at most one agent. \end{proposition} \begin{proposition} A winner in a strong Nash Equilibrium if the voters are lazy is also a winner in Nash Equilibrium if the voters are truth-biased. \end{proposition} \fi \smallskip \noindent{\bf Randomized Tie-breaking\quad } We will now consider $R^C$ and $R^V$. \cite{des-elk:c:eq} characterize utility profiles that admit a PNE for lazy voters and $R^C$. However, there is a small difference between our model and that of Desmedt and Elkind: while we assume that the trivial ballot vector results in a tie among all candidates, Desmedt and Elkind assume that in this case the election is canceled and each voter's utility is $-\infty$. Further, the results of Desmedt and Elkind implicitly assume that the number of voters $n$ exceeds the number of candidates $m$; if this is not the case, Theorem~2 in their paper is incorrect (see Remark~\ref{rem:corr}). Thus, we will now provide a full characterization of utility profiles ${{\mathbf{u}}}$ such that $({{\mathcal{L}}}, R^C,{{\mathbf{u}}})$ admits a PNE, and describe the corresponding equilibrium ballot profiles. Our characterization result remains essentially unchanged if we replace $R^C$ with $R^V$: for almost all utility profiles ${{\mathbf{u}}}$ and ballot vectors ${{\mathbf{b}}}$ it holds that ${{\mathbf{b}}}$ is a PNE of $({{\mathcal{L}}}, R^C,{{\mathbf{u}}})$ if and only if it is a PNE of $({{\mathcal{L}}}, R^V,{{\mathbf{u}}})$; the only exception is the case of full consensus (all voters rank the same candidate first). \begin{theorem}\label{thm:char-rand} Let ${{\mathbf{u}}}=(u_1, \dots, u_n)$ be a utility profile over $C$, $|C|=m$, and let $R\in\{R^C, R^V\}$. 
The game $G = ({{\mathcal{L}}}, R, {{\mathbf{u}}})$ admits a PNE if and only if one of the following conditions holds: \begin{itemize} \item[(1)] all voters rank some candidate $c_j$ first; \item[(2)] each candidate is ranked first by at most one voter, and, moreover, $\frac{1}{n}\sum_{i\in N}u_\ell(a_i)\ge \max_{i\in N\setminus\{\ell\}}u_\ell(a_i)$ for each $\ell\in N$. \item[(3)] there exists a set of candidates $X = \{c_{\ell_1}, \dots, c_{\ell_k}\}$ with $2\le k \le \min(n/2,m)$ and a partition of the voters into $k$ groups $N_1, \dots, N_k$ of size ${n}/{k}$ each such that for each $j\in[k]$ and each $i\in N_j$ we have $c_{\ell_j}\succ_i c$ for all $c\in X\setminus\{c_{\ell_j}\}$, and, moreover, $\frac{1}{k}\sum_{c\in X}u_i(c)\ge \max_{c\in X\setminus\{c_{\ell_j}\}}u_i(c)$. \end{itemize} Further, if condition (1) holds for some $c_j\in C$, then if $R=R^C$ then for each $i\in N$ the game $G$ has a PNE where $i$ votes for $c_j$ and all other voters abstain, whereas if $R=R^V$ the game $G$ has a PNE where all voters abstain; if condition (2) holds, then $G$ has a PNE where each voter votes for her top candidate; and if condition (3) holds for some set $X$, then $G$ has a PNE where each voter votes for her favorite candidate in $X$. The game $G$ has no other PNE. \end{theorem} \iffalse \begin{proof} It is easy to see that any of the conditions (1)--(3) is sufficient for the existence of PNE, with ballot vectors described in the statement of the theorem witnessing this. We will now show that satisfying at least one of these conditions is necessary for the existence of a PNE, and that no other ballot vector is a PNE. Fix a tie-breaking rule $R\in\{R^C, R^V\}$, a utility profile ${{\mathbf{u}}}$, and suppose that a ballot vector ${{\mathbf{b}}}$ is a PNE of $({{\mathcal{L}}}, R,{{\mathbf{u}}})$. We will argue that ${{\mathbf{u}}}$ satisfies one of the conditions (1)--(3). Suppose first that $W({{\mathbf{b}}})=\{c_j\}$ for some $c_j\in C$. 
By Proposition~\ref{prop:lazy-basic} there exists a voter $i\in N$ with $b_i=c_j$, and $b_{i'}=\bot$ for all $i'\in N\setminus\{i\}$. It has to be the case that $a_i=c_j$: otherwise voter $i$ can make $a_i$ the unique winner by changing her vote to $a_i$, thus increasing her utility. Now, suppose that $a_{i'}\neq c_j$ for some $i'\in N\setminus\{i\}$. If voter $i'$ changes her ballot to $c_\ell = a_{i'}$, the new winning set is $\{c_j, c_\ell\}$. Now, if $R=R^C$, the overall utility of $i'$ is given by $\frac12(u_{i'}(c_\ell)+u_{i'}(c_j))$, and if $R=R^V$, the overall utility of $i'$ is given by $\lambda u_{i'}(c_\ell)+(1-\lambda)u_{i'}(c_j)$, where $\lambda\ge \frac{1}{n}$ (this is because voter $i'$ herself ranks $c_\ell$ above $c_j$). In both cases, $i'$ can increase her utility by voting $c_\ell$, a contradiction. Hence, it has to be the case that all voters rank $c_j$ first, i.e., condition~(1) is satisfied. Now, suppose that $|W({{\mathbf{b}}})|> 1$. We will argue that in this case either all voters abstain or no voter abstains. Indeed, suppose that $b_i=\bot$, $b_{\ell}\neq \bot$ for some $i,\ell\in N$, i.e., each candidate in $W({{\mathbf{b}}})$ receives at least one vote. If, instead of abstaining, $i$ votes for her most preferred candidate in $W({{\mathbf{b}}})$, this candidate becomes the unique election winner. In contrast, under ${{\mathbf{b}}}$ $i$'s least preferred candidate in $W({{\mathbf{b}}})$ wins with positive probability: this is immediate for $R=R^C$, and for $R=R^V$ this holds because for every $c_j\in W({{\mathbf{b}}})$ there exists a voter $i'$ with $b_{i'}=c_j$, and $c_j$ wins whenever ties are broken according to the preferences of voter ${i'}$. Thus, $i$ can improve her utility by changing her vote, a contradiction. Hence, if $|W({{\mathbf{b}}})|=k$ and ${{\mathbf{b}}}$ is not trivial, each candidate in $W({{\mathbf{b}}})$ receives exactly $n/k$ votes. 
In particular, if $|W({{\mathbf{b}}})|=n$ and ${{\mathbf{b}}}$ is not trivial, each candidate in $W({{\mathbf{b}}})$ receives exactly one vote. We will argue that in this case condition (2) is satisfied. We will first prove that $b_i=a_i$ for all $i\in N$. Indeed, suppose that $b_i\neq a_i$ for some $i\in N$, and consider the ballot vector ${{\mathbf{b}}}'=({{\mathbf{b}}}_{-i}, a_i)$. If $a_i\in W({{\mathbf{b}}})$, then $W({{\mathbf{b}}}')=\{a_i\}$, whereas under ${{\mathbf{b}}}$ voter $i$'s least preferred candidate in $W({{\mathbf{b}}})$ wins with positive probability. If $a_i\not\in W({{\mathbf{b}}})$, we have $W({{\mathbf{b}}}')=(W({{\mathbf{b}}})\setminus\{b_i\})\cup\{a_i\}$, so $U_i({{\mathbf{b}}}')=U_i({{\mathbf{b}}})+\frac{1}{n}(u_i(a_i)-u_i(b_i))>U_i({{\mathbf{b}}})$. In both cases $i$ can increase her overall utility by voting for $a_i$, a contradiction. Hence, we have $W({{\mathbf{b}}})=\{a_i\mid i\in N\}$. Thus, under both $R^C$ and $R^V$ the outcome of this election is a lottery that assigns equal probability to all candidates in $W({{\mathbf{b}}})$. Now, if any voter prefers her second most preferred candidate in $W({{\mathbf{b}}})$ to this lottery, she can vote for that candidate, making him the unique election winner, a contradiction with ${{\mathbf{b}}}$ being a PNE. Thus, in this case condition (2) is satisfied. Now, suppose that ${{\mathbf{b}}}$ is not trivial and $|W({{\mathbf{b}}})|=k<n$. We have argued that each candidate in $W({{\mathbf{b}}})$ receives exactly $n/k$ votes. This means that $k$ divides $n$, so in particular $k\le n/2$ and each candidate in $W({{\mathbf{b}}})$ receives at least two votes. Under both of our tie-breaking rules, each candidate in $W({{\mathbf{b}}})$ wins with probability $1/k$. Consider a voter $i$. She can make any candidate in $W({{\mathbf{b}}})\setminus\{b_i\}$ the unique election winner by voting for him. 
Since ${{\mathbf{b}}}$ is a PNE, no voter wants to change the election outcome in this way; this implies, in particular, that each voter votes for her favorite candidate in $W({{\mathbf{b}}})$. Thus, in this case condition (3) is satisfied with $X=W({{\mathbf{b}}})$; the voters are partitioned into groups according to their votes in ${{\mathbf{b}}}$. It remains to consider the case where ${{\mathbf{b}}}$ is the trivial ballot vector. When $R=R^C$, ${{\mathbf{b}}}$ cannot be a PNE: under ${{\mathbf{b}}}$ the outcome is a uniform lottery over $C$, and every voter would rather vote for her favorite candidate in order to make him the unique winner. When $R=R^V$, the outcome is a lottery that assigns a positive probability to each candidate in $A=\{a_i\mid i\in N\}$. If $|A|>1$, ${{\mathbf{b}}}$ is not a PNE: each voter would prefer to vote for her favorite candidate in order to make him the unique winner. However, if $A$ is a singleton, i.e., all voters rank some candidate $c_j$ first, the trivial ballot vector is a PNE: after all voters abstain, $R^V$ picks a random voter, and this voter selects $c_j$. \end{proof} \fi \begin{remark}\label{rem:corr} {\em Desmedt and Elkind claim (Theorems 1 and 2) that for $R^C$ and lazy voters a PNE exists if and only if the utility profile satisfies either condition (1) or condition (3) with constraint $k\le n/2$ removed. To see why this is incorrect, consider a $2$-voter election over the candidate set $C=\{x,y, z\}$, where voters' utility functions are consistent with preference orders $x\succ y\succ z$ and $x\succ z\succ y$, respectively. According to Desmedt and Elkind, the ballot vector $(y, z)$ is a PNE of the corresponding game. This is obviously not true: each of the voters would prefer to change her vote to $x$. Note, however, that the two characterizations differ only when $m\ge n$, and in practice the number of voters usually exceeds the number of candidates. 
} \end{remark} Desmedt and Elkind show that checking condition (3) of Theorem~\ref{thm:char-rand} is NP-hard; in their proof $n>m$, and the proof does not depend on how the trivial ballot is handled. Further, their proof shows that checking whether a given candidate belongs to some such set $X$ is also NP-hard. On the other hand, Theorem~\ref{thm:char-rand} shows that PNE with singleton winning sets only arise if some candidate is unanimously ranked first, and this condition is easy to check. We summarize these observations as follows. \begin{corollary}\label{cor:lazy-rand-hard} For $R\in\{R^C, R^V\}$, the problems $({{\mathcal{L}}}, R)$-{\sc ExistNE} and $({{\mathcal{L}}}, R)$-{\sc TieNE} are {\em NP}-complete, whereas $({{\mathcal{L}}}, R)$-{\sc SingleNE} is in {\em P}. \end{corollary} \iffalse The characterization provided by Theorem~\ref{thm:char-rand} is quite complicated. We will now define a family of utility functions for which it can be simplified considerably. We say that a utility function $u$ over a candidate set $C$, $|C|=m$, is {\em $m$-decreasing} if for every pair of candidates $c, c'\in C$ it holds that $u(c)>u(c')$ implies $u(c)\ge m \cdot u(c')$. If a voter's utility function $u$ is $m$-decreasing, then for any subset of candidates $X\subseteq C$ of size at least $2$ this voter prefers the uniform lottery over $X$ to her second most preferred candidate in $X$ being the unique winner. Indeed, if she ranks the candidates in $X$ as $c\succ c'\succ\dots$ then under the uniform lottery over $X$ her utility is at least $\frac{1}{|X|}u(c)$ and if $c'$ wins, her utility is $u(c')\le \frac{1}{m}u(c)\le \frac{1}{|X|}u(c)$. This observation allows us to simplify Theorem~\ref{thm:char-rand} for $m$-decreasing utilities as follows. \begin{proposition}\label{prop:char-exp} Let ${{\mathbf{u}}}=(u_1, \dots, u_n)$ be a utility profile over $C$, $|C|=m$, where each $u_i$, $i\in N$, is $m$-decreasing, and let $R\in\{R^C,R^V\}$. 
Then the game $G = ({{\mathcal{L}}}, R, {{\mathbf{u}}})$ admits a PNE if and only if one of the following conditions holds: \begin{itemize} \item[(1)] all voters rank some candidate $c_j\in C$ first; \item[(2)] each candidate is ranked first by at most one voter; \item[(3)] there exists a set of candidates $X = \{c_{\ell_1}, \dots, c_{\ell_k}\}$ with $2\le k \le \min(n/2,m)$ and a partition of the voters into $k$ groups $N_1, \dots, N_k$ of size ${n}/{k}$ each such that for each $j\in[k]$ and each $i\in N_j$ we have $c_{\ell_j}\succ_i c$ for all $c\in X\setminus\{c_{\ell_j}\}$. \end{itemize} \end{proposition} Besides being easier to work with, $m$-decreasing utilities capture an interesting class of voters' preferences: if a voter's utility function is of this form, she finds all candidates very different. We remark that the hardness proof for {\sc ExistNE} given by \cite{des-elk:c:eq} goes through for utility functions in this class. \fi \section{Truth-biased Voters}\label{sec:truth} \noindent For truth-biased voters, our exposition follows the same pattern as for lazy voters: we present some general observations, followed by a quick summary of the results for lexicographic tie-breaking, and conclude by analyzing randomized tie-breaking. The following result is similar in spirit to Proposition~\ref{prop:lazy-basic}. \begin{proposition}\label{prop:truth-basic} For every $R\in\{R^L,R^C, R^V\}$ and every utility profile ${{\mathbf{u}}}$, if a ballot vector ${{\mathbf{b}}}$ is a PNE of $({{\mathcal{T}}},R,{{\mathbf{u}}})$ then for every voter $i\in N$ either $b_i=a_i$, or $b_i\in W({{\mathbf{b}}})$. \end{proposition} \begin{proof} Consider a voter $i\in N$ such that $a_i\neq b_i$ and $b_i\not\in W({{\mathbf{b}}})$. Suppose $a_i\not\in W({{\mathbf{b}}})$. Then, if $i$ changes her vote to $a_i$, the new winning set is either $W({{\mathbf{b}}})$ or $W({{\mathbf{b}}})\cup\{a_i\}$. In either case, $i$'s utility increases at least by $\varepsilon$, a contradiction. 
Suppose now that $a_i\in W({{\mathbf{b}}})$. This means that either $W({{\mathbf{b}}}) = \{a_i\}$ or $a_i$ is in a tie with other candidates under ${{\mathbf{b}}}$. Then, if $i$ votes for $a_i$, the new winning set is just $\{a_i\}$, so $i$'s utility increases by at least $\varepsilon$, a contradiction again. \end{proof} \smallskip \noindent{\bf Lexicographic Tie-breaking\quad} Obraztsova et al.~\cite{obr-mar-tho:c:truth-biased} characterize the PNE of the game $({{\mathcal{T}}}, R^L, {{\mathbf{u}}})$. As their characterization is quite complex, we will not reproduce it here. However, for the purposes of comparison with the lazy voters model, we will use the following description of {\em truthful} equilibria given by Obraztsova et al. \begin{proposition}[Obraztsova et al., Theorem 1]\label{prop:truth-lex} Consider a utility profile ${{\mathbf{u}}}$, let ${{\mathbf{a}}}$ be the respective truthful ballot vector, and let $j=\min\{r\mid c_r\in W({{\mathbf{a}}})\}$. Then ${{\mathbf{a}}}$ is a PNE of $({{\mathcal{T}}}, R^L, {{\mathbf{u}}})$ if and only if neither of the following conditions holds: \begin{itemize} \item[(1)] $|W({{\mathbf{a}}})|>1$, and there exists a candidate $c_k\in W({{\mathbf{a}}})$ and a voter $i$ such that $a_i\neq c_k$ and $c_k\succ_i c_j$. \item[(2)] $H({{\mathbf{a}}})\neq\emptyset$, and there exists a candidate $c_k\in H({{\mathbf{a}}})$ and a voter $i$ such that $a_i\neq c_k$, $c_k\succ_i c_j$, and $k<j$. \end{itemize} \end{proposition} We will also state a crucial property of non-truthful PNE, identified by Obraztsova et al. For this, we first need the following definition. \begin{definition} \label{def:threshold} Consider a ballot vector ${{\mathbf{b}}}$, where candidate $c_j$ is the winner under $R^L$. 
A candidate $c_k\neq c_j$ is called a {\em threshold candidate with respect to ${{\mathbf{b}}}$} if either (1) $k< j$ and ${{\mathrm{sc}}}(c_k,{{\mathbf{b}}})={{\mathrm{sc}}}(c_j,{{\mathbf{b}}})-1$ or (2) $k> j$ and ${{\mathrm{sc}}}(c_k,{{\mathbf{b}}})={{\mathrm{sc}}}(c_j,{{\mathbf{b}}})$. We denote the set of threshold candidates with respect to ${{\mathbf{b}}}$ by $T({{\mathbf{b}}})$. \end{definition} That is, a threshold candidate is someone who could win the election if he had one additional vote. A feature of all non-truthful PNE is that there must exist at least one threshold candidate. The intuition for this is that, since voters who are not pivotal prefer to vote truthfully, in any PNE that arises under strategic voting, the winner receives just enough votes so as to beat the required threshold (as set by the threshold candidate) and not any more. \begin{lemma}[Obraztsova et al., Lemma 2] \label{lem:threshold} Consider a utility profile ${{\mathbf{u}}}$, let ${{\mathbf{a}}}$ be the respective truthful ballot vector, and let ${{\mathbf{b}}}\neq{{\mathbf{a}}}$ be a non-truthful PNE of $({{\mathcal{T}}}, R^L, {{\mathbf{u}}})$. Then $T({{\mathbf{b}}})\neq\emptyset$. Further, ${{\mathrm{sc}}}(c_k, {{\mathbf{b}}})={{\mathrm{sc}}}(c_k, {{\mathbf{a}}})$ for every $c_k\in T({{\mathbf{b}}})$, i.e., all voters whose top choice is $c_k$ vote for $c_k$. \end{lemma} The existence of a threshold candidate is an important observation about the structure of non-truthful PNE, and we will use it repeatedly in the sequel. We note that the winner in ${{\mathbf{a}}}$ need not necessarily be a threshold candidate in a non-truthful PNE ${{\mathbf{b}}}$. Obraztsova et al.~show that, given a candidate $c_p\in C$ and a score $s$, it is computationally hard to decide whether the game $({{\mathcal{T}}}, R^L, {{\mathbf{u}}})$ has a PNE ${{\mathbf{b}}}$ where $c_p$ wins with a score of $s$. 
This problem may appear to be ``harder'' than $({{\mathcal{T}}}, R^L)$-{\sc TieNE} or $({{\mathcal{T}}}, R^L)$-{\sc SingleNE}, as one needs to ensure that $c_p$ obtains a specific score; on the other hand, it does not distinguish between $c_p$ being the unique top-scorer or being tied with other candidates and winning due to tie-breaking. We now complement this hardness result by showing that all three problems we consider are NP-hard for ${{\mathcal{T}}}$ and $R^L$. \iffalse To this end, we provide a reduction from the {\sc MaxIntersect} problem, which has recently been shown to be NP-hard \cite{cli-pop:j:subset}. We first define this problem formally. \begin{definition} An instance of {\sc MaxIntersect} is a tuple $({{\mathcal{E}}}, {{\mathcal{A}}}_1, \dots, {{\mathcal{A}}}_k, q)$, where ${{\mathcal{E}}} = \{e_1,\dots,e_n\}$ is a finite set of elements, each ${{\mathcal{A}}}_i$, $i\in[k]$, is a collection of $m$ subsets of ${{\mathcal{E}}}$, and $q$ is a positive integer. It is a ``yes''-instance if there exist sets $A_1, \dots, A_k$ such that $A_i\in{{\mathcal{A}}}_i$ for $i\in [k]$ and $|\cap_{i\in [k]}A_i|\ge q$, and a ``no''-instance otherwise. \end{definition} We are now ready to present our hardness results. \fi \begin{theorem}\label{thm:truth-lex-hard} $({{\mathcal{T}}}, R^L)$-{\sc SingleNE}, $({{\mathcal{T}}}, R^L)$-{\sc ExistNE}, and $({{\mathcal{T}}}, R^L)$-{\sc TieNE} are {\em NP}-complete. \end{theorem} The proof is by a reduction from {\sc Maximum $k$-Subset Intersection (MSI)} (see the supplementary material). Surprisingly, the complexity of MSI was very recently posed as an open problem by Clifford and Popa~\cite{cli-pop:j:subset}; subsequently, MSI was shown to be hard under Cook reductions by Xavier~\cite{xav:j:subset}. Here we first establish NP-hardness of MSI under Karp reductions, which may be of independent interest, and then show NP-hardness of our problems by constructing reductions from MSI. 
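Since membership in NP is immediate, all three problems can in principle be decided by enumerating the $m^n$ ballot vectors, which is feasible for very small elections. The following Python sketch is illustrative only and not part of the formal development: the function names and the encoding of truth bias as a small additive bonus $\varepsilon$ for voting truthfully are ours. It brute-forces the PNE of the truth-biased Plurality game under lexicographic tie-breaking and confirms the four-voter example discussed later in the Comparison section.

```python
from itertools import product

def winner_lex(ballot, m):
    # Plurality winner under R^L: among the top-scoring candidates,
    # the one with the smallest index wins.
    scores = [0] * m
    for b in ballot:
        scores[b] += 1
    return scores.index(max(scores))

def utility(i, ballot, u, truthful, eps):
    # Truth-biased utility: utility of the winner, plus a small
    # bonus eps when voter i casts her truthful vote a_i.
    w = winner_lex(ballot, len(u[0]))
    return u[i][w] + (eps if ballot[i] == truthful[i] else 0.0)

def pne(u, eps=0.01):
    # Enumerate all m^n ballot vectors and keep those from which no
    # voter has a profitable unilateral deviation. Exponential time,
    # consistent with the NP-hardness results above.
    n, m = len(u), len(u[0])
    truthful = [max(range(m), key=lambda c: u[i][c]) for i in range(n)]
    result = []
    for ballot in product(range(m), repeat=n):
        stable = all(
            utility(i, ballot, u, truthful, eps) >=
            utility(i, ballot[:i] + (d,) + ballot[i + 1:], u, truthful, eps)
            for i in range(n) for d in range(m))
        if stable:
            result.append(ballot)
    return result

# Four-voter example from the Comparison section: one voter with
# c2 > c3 > c1 and three voters with c3 > c2 > c1 (candidates are
# 0-indexed here, so c1 -> 0, c2 -> 1, c3 -> 2).
u = [[0, 2, 1], [0, 1, 2], [0, 1, 2], [0, 1, 2]]
print(pne(u))  # -> [(1, 2, 2, 2)]: the unique PNE (c2, c3, c3, c3), winner c3
```

The search confirms that the truthful vector $(c_2, c_3, c_3, c_3)$, with winner $c_3$, is the unique PNE of this instance under ${{\mathcal{T}}}$ and $R^L$.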
\iffalse \begin{proof} We will first establish the NP-completeness of $({{\mathcal{T}}}, R^L)$-{\sc SingleNE}, and then show how to modify the proof for the other two problems. It is trivial to show that $({{\mathcal{T}}}, R^L)$-{\sc SingleNE} is in NP, so we focus on showing that it is NP-hard. Consider an instance $I$ of the {\sc MaxIntersect} problem. We have $|{{\mathcal{A}}}_1|=\ldots=|{{\mathcal{A}}}_k|=m$, and we can assume that $m>n-q+2k$ (if this is not the case, we can add several empty sets to each ${{\mathcal{A}}}_i$). We can also assume that $n\geq q$, as otherwise $I$ is clearly a ``no''-instance. For each $i\in[k]$, let ${{\mathcal{A}}}_i=\{S^i_1, \dots, S^i_m\}$. Finally, we can assume that for every $e \in {{\mathcal{E}}}$ there exist indices $i,j$ such that $e \notin S^i_j$. We now construct an instance of our problem. We set $P=\{p_1, \dots, p_k\}$ and let $C={{\mathcal{E}}}\cup P \cup \{w_1, w_2\}$, with ties broken according to $e_1>\ldots>e_n>p_1>\ldots>p_k>w_1>w_2$. We set $w_2$ to be the target winning candidate, i.e., $c_p:= w_2$. Finally, we set $\delta = \frac{1}{6(n+k)}$. We will now describe the voters' preferences and their utility functions (while the utility functions play no role in this proof, we will use the same construction in the NP-hardness proof for randomized tie-breaking (Corollary~\ref{cor:truth-rand-hard}), where they do matter). The voters are split into five blocks as follows. \begin{itemize} \item {\bf Block 1:} For every $i\in [k]$ and every $j\in [m]$, we construct a voter $v_{ij}$ who ranks the candidates as $p_i \succ {{\mathcal{E}}} \setminus S^i_j \succ P \setminus \{p_i\} \succ w_2 \succ S^i_j \succ w_1$. We let $u_{ij}$ denote the utility function of $v_{ij}$, which we construct as follows. We set $u_{ij}(p_i) = 1, u_{ij}(w_2) = \frac{1}{2}, u_{ij}(w_1) = \frac{1}{4}$.
Further, $v_{ij}$ assigns utility of $1-j\delta$ to her $j$-th most preferred candidate in ${{\mathcal{E}}} \setminus S^i_j$ and utility of $1-(j+n)\delta$ to her $j$-th most preferred candidate in $P \setminus \{p_i\}$. Note that $|{{\mathcal{E}}} \setminus S^i_j|<n$, so these numbers are strictly between $1$ and $1/2$ and they are consistent with the ranking of voter $v_{ij}$. Finally, $v_{ij}$ assigns utility of $1/2 - j\delta$ to her $j$-th most preferred candidate in $S^i_j$; these numbers are strictly between $1/2$ and $1/4$. \item {\bf Block 2:} We set $s = m+2$, and we add $s-1$ voters whose preferences are of the form $w_1 \succ w_2 \succ P \succ {{\mathcal{E}}}$. \item {\bf Block 3:} We add $s-2k-(n-q)$ voters with preferences of the form $w_2 \succ w_1 \succ P \succ {{\mathcal{E}}}$. \item {\bf Block 4:} For every $e_j \in {{\mathcal{E}}}$, we add $s-2$ voters with preferences of the form $e_j \succ P \succ w_2 \succ w_1 \succ {{\mathcal{E}}} \setminus \{e_j\}$. \item {\bf Block 5:} For every $i\in[k]$ we add a voter with preferences of the form $p_i \succ P \setminus \{p_i\} \succ w_2 \succ {{\mathcal{E}}} \succ w_1$. \end{itemize} Each of the voters in blocks 2--5 assigns utility of $1-\delta(j-1)$ to the $j$-th candidate in her ranking. Let $I\rq{}$ be the constructed instance. We want to establish that $I$ is a ``yes''-instance of the {\sc MaxIntersect} problem if and only if $I\rq{}$ is a ``yes''-instance of $({{\mathcal{T}}}, R^L)$-{\sc SingleNE}. Suppose first that $I\rq{}$ is a ``yes''-instance of our problem. Then there exists a PNE ${{\mathbf{b}}}$ with $W({{\mathbf{b}}})=\{w_2\}$. We will first establish some properties of ${{\mathbf{b}}}$. Let ${{\mathbf{a}}}$ denote the truthful ballot for $I\rq{}$. 
We have ${{\mathrm{sc}}}(p_i,{{\mathbf{a}}})=s-1$ for every $p_i \in P$, ${{\mathrm{sc}}}(w_2,{{\mathbf{a}}})=s-2k-(n-q)$, ${{\mathrm{sc}}}(w_1,{{\mathbf{a}}})=s-1$ and ${{\mathrm{sc}}}(e_j,{{\mathbf{a}}})=s-2$ for every $e_j \in {{\mathcal{E}}}$. It follows that $w_2$ is not among the winners in ${{\mathbf{a}}}$. We will now argue that $T({{\mathbf{b}}})=\{w_1\}$. We know by Lemma~\ref{lem:threshold} that $T({{\mathbf{b}}})\neq\emptyset$. Since $w_2$ is the winner at ${{\mathbf{b}}}$, we have $w_2\not\in T({{\mathbf{b}}})$. Also, it is easy to see that $P\cap T({{\mathbf{b}}})=\emptyset$. Indeed, suppose that $p\in T({{\mathbf{b}}})$ for some $p\in P$. All voters in Block 4 prefer $p$ to $w_2$. By Proposition \ref{prop:truth-basic}, in ${{\mathbf{b}}}$ these voters vote either for their top choice or for $w_2$. But if $p\in T({{\mathbf{b}}})$, each of these voters would prefer to change her vote to $p$, a contradiction with ${{\mathbf{b}}}$ being a PNE. A similar argument shows that ${{\mathcal{E}}}\cap T({{\mathbf{b}}})=\emptyset$. Indeed, we assumed that for every $e_{\ell} \in {{\mathcal{E}}}$ there exists a pair $i,j$ such that $e_{\ell} \notin S^i_j$. Then voter $v_{ij}$ from Block 1 prefers $e_{\ell}$ to $w_2$, but $e_\ell$ is not her top choice. By Proposition \ref{prop:truth-basic}, in ${{\mathbf{b}}}$ voter $v_{ij}$ votes for her top choice or for $w_2$, but if $e_\ell\in T({{\mathbf{b}}})$, she would prefer to change her vote to $e_\ell$, a contradiction with ${{\mathbf{b}}}$ being a PNE. As we have ruled out all candidates except for $w_1$, it follows that $T({{\mathbf{b}}})=\{w_1\}$, and hence, by Lemma \ref{lem:threshold}, ${{\mathrm{sc}}}(w_1, {{\mathbf{b}}})=s-1$. Then, by the tie-breaking rule, we have ${{\mathrm{sc}}}(w_2, {{\mathbf{b}}})=s$. Thus, in ${{\mathbf{b}}}$ candidate $w_2$ receives exactly $2k + n-q$ non-truthful votes, in addition to the votes of his own supporters. 
We also know that the voters from Block 3 keep voting for $w_2$ in ${{\mathbf{b}}}$, and, by Lemma \ref{lem:threshold}, the voters from Block 2 keep voting for $w_1$ in ${{\mathbf{b}}}$. Hence $w_2$ receives the extra $2k + n-q$ votes in ${{\mathbf{b}}}$ from Blocks 1, 4 and 5. It has to be the case that ${{\mathrm{sc}}}(p_i,{{\mathbf{b}}})\leq s-3$ for every $p_i \in P$. Indeed, we have ${{\mathrm{sc}}}(c',{{\mathbf{b}}})\le s-2$ for all $c'\in {{\mathcal{E}}} \cup P$ since $T({{\mathbf{b}}})=\{w_1\}$. Further, if ${{\mathrm{sc}}}(p_i,{{\mathbf{b}}})= s-2$ for some $p_i\in P$, the ballot vector ${{\mathbf{b}}}$ would not be a PNE, as some voters from Blocks 1, 4, and 5 vote for $w_2$, but all of them prefer $p_i$ to $w_2$. Thus, in total, we must have at least $2k$ voters from Blocks 1 and 5 who vote for $w_2$ in ${{\mathbf{b}}}$, and for every $p_i\in P$ there are at least two such voters whose truthful top choice is $p_i$. This means that for every $i\in[k]$, there is at least one voter $v_{ij}$ who has deviated to $w_2$. Now, for each ${{\mathcal{A}}}_i$, $i\in [k]$, we pick a set $A_i\in{{\mathcal{A}}}_i$ such that $A_i=S^i_j$ for some $j\in [m]$ for which $v_{ij}$ votes for $w_2$ in ${{\mathbf{b}}}$. We will now argue that $|\cap_{i\in [k]} A_i|\ge q$. To see this, let ${{\mathcal{E}}}'=\{e\in{{\mathcal{E}}}\mid{{\mathrm{sc}}}(e, {{\mathbf{b}}})=s-2\}$. Note that in ${{\mathbf{b}}}$ there are at most $n-q$ voters in Block 4 who vote for $w_2$, so $|{{\mathcal{E}}}'|\ge q$. To complete the proof, we will argue that for each $e\in{{\mathcal{E}}}'$ we have $e\in A_i$ for all $i\in [k]$. Indeed, fix some $e\in{{\mathcal{E}}}'$ and some $i\in [k]$. By our choice of $A_i$, there exists a voter $v_{ij}$ who votes for $w_2$ such that $S^i_j=A_i$. Suppose that $v_{ij}$ prefers $e$ to $w_2$. If she changes her vote to $e$, then $e$ becomes the new winner, a contradiction with ${{\mathbf{b}}}$ being a PNE.
Thus, it has to be the case that $e\in S^i_j=A_i$, which is what we wanted to prove. Hence, if we have obtained a ``yes''-instance for $({{\mathcal{T}}}, R^L)$-{\sc SingleNE}, the original instance of {\sc MaxIntersect} was also a ``yes''-instance. For the converse direction, consider a collection of sets $A_1\in{{\mathcal{A}}}_1, \dots, A_k\in{{\mathcal{A}}}_k$ with $|\cap_{i\in [k]} A_i|\ge q$. Let ${{\mathcal{E}}}'=\cap_{i\in [k]} A_i$. Note that for each $i\in[k]$ there is an index $j_i$ such that $A_i=S^i_{j_i}$. We will now construct a set of voters $N'$ as follows. We first set $N'=\{v_{ij_i}\mid i\in[k]\}$. Then we add to $N'$ all voters in Block 5. Further, for each $e\not\in {{\mathcal{E}}}'$, we add to $N'$ one voter who ranks $e$ first. Observe that at this point we have $|N'|\le 2k+(n-q)$. If $|N'|<2k+(n-q)$, we pick $n-q+2k-|N'|$ additional voters from Block 4, and add them to $N'$. Now, consider a ballot vector ${{\mathbf{b}}}$ where the voters in $N'$ vote in favor of $w_2$, whereas everyone else votes truthfully. We have ${{\mathrm{sc}}}(w_2, {{\mathbf{b}}})=s$, ${{\mathrm{sc}}}(w_1, {{\mathbf{b}}})=s-1$, ${{\mathrm{sc}}}(p_i, {{\mathbf{b}}})=s-3$ for all $i\in[k]$, ${{\mathrm{sc}}}(e, {{\mathbf{b}}})\le s-3$ for all $e\in {{\mathcal{E}}}\setminus{{\mathcal{E}}}'$, ${{\mathrm{sc}}}(e, {{\mathbf{b}}})= s-2$ for all $e\in{{\mathcal{E}}}'$. Moreover, all non-truthful voters rank $w_2$ above $w_1$ as well as above all candidates in ${{\mathcal{E}}}'$. Thus, ${{\mathbf{b}}}$ is a PNE. Finally, we comment on the hardness of the problems $({{\mathcal{T}}}, R^L)$-{\sc ExistNE} and $({{\mathcal{T}}}, R^L)$-{\sc TieNE}. For $({{\mathcal{T}}}, R^L)$-{\sc ExistNE}, we can use exactly the same reduction, as it can be shown that $w_1$ is the only possible threshold candidate and hence only $w_2$ can be a winner in a PNE. For $({{\mathcal{T}}}, R^L)$-{\sc TieNE}, we make a small modification to the reduction above. 
Specifically, it suffices to switch the tie-breaking order between $w_1$ and $w_2$, and also add one more voter to Block 2 in favor of $w_1$. We omit the details to avoid repetition. \end{proof} \fi \smallskip \noindent{\bf Randomized Tie-breaking\ } It turns out that for truth-biased voters the tie-breaking rules $R^C$ and $R^V$ induce identical behavior by the voters; unlike for lazy voters, this holds even if all voters rank the same candidate first. For clarity, we present our characterization result for randomized tie-breaking in three parts. We start by considering PNE with winning sets of size at least $2$; the analysis for this case turns out to be very similar to that for lazy voters. \begin{theorem}\label{thm:truth-rand-ties} Let ${{\mathbf{u}}}=(u_1, \dots, u_n)$ be a utility profile over $C$, $|C|=m$, and let $R\in\{R^C, R^V\}$. The game $G = ({{\mathcal{T}}}, R, {{\mathbf{u}}})$ admits a PNE with a winning set of size at least $2$ if and only if one of the following conditions holds: \begin{itemize} \item[(1)] each candidate is ranked first by at most one voter, and, moreover, $\frac{1}{n}\sum_{i\in N}u_\ell(a_i)\ge \max_{i\in N\setminus\{\ell\}}u_\ell(a_i)$ for each $\ell\in N$. \item[(2)] there exists a set of candidates $X = \{c_{\ell_1}, \dots, c_{\ell_k}\}$ with $2\le k \le \min(n/2,m)$ and a partitioning of the voters into $k$ groups $N_1, \dots, N_k$, of size ${n}/{k}$ each, such that for each $j\in[k]$ and each $i\in N_j$, we have $c_{\ell_j}\succ_i c$ for all $c\in X\setminus\{c_{\ell_j}\}$, and, moreover, $\frac{1}{k}\sum_{c\in X}u_i(c)\ge \max_{c\in X\setminus\{c_{\ell_j}\}}u_i(c)$. \end{itemize} Further, if condition (1) holds, then $G$ has a PNE where each voter votes for her top candidate, and if condition (2) holds for some $X$, then $G$ has a PNE where each voter votes for her favorite candidate in $X$. The game $G$ has no other PNE. 
\end{theorem} \iffalse \begin{proof} It is clear that if one of the conditions (1)--(2) is satisfied then the game admits a PNE of the form described in the statement of the theorem. For the converse direction, fix a tie-breaking rule $R\in\{R^C, R^V\}$ and a utility profile ${{\mathbf{u}}}$, and suppose that a ballot vector ${{\mathbf{b}}}$ is a PNE of $({{\mathcal{T}}}, R,{{\mathbf{u}}})$ with $|W({{\mathbf{b}}})|\ge 2$. We will argue that ${{\mathbf{u}}}$ satisfies one of the conditions (1)--(2). If $|W({{\mathbf{b}}})|=n$, each candidate in $W({{\mathbf{b}}})$ receives exactly one vote. As argued in the proof of Theorem~\ref{thm:char-rand}, this means that each voter votes for her favorite candidate, and prefers the uniform lottery over $A=\{a_i\mid i\in N\}$ to her second most preferred candidate in $A$ being the unique winner, i.e., condition (1) holds. Now, suppose that $|W({{\mathbf{b}}})|<n$. We claim that $b_i\in W({{\mathbf{b}}})$ for all $i\in N$. Indeed, suppose that $b_i\not\in W({{\mathbf{b}}})$ for some $i\in N$. Let $c_j$ be voter $i$'s most preferred candidate in $W({{\mathbf{b}}})$. If $i$ changes her vote to $c_j$, $c_j$ becomes the unique winner, whereas when she votes $b_i$, the outcome is a lottery over $W({{\mathbf{b}}})$ where candidates other than $c_j$ have a positive chance of winning. Thus, $i$ can profitably deviate, a contradiction. Thus, there exists a $k\ge 2$ such that each candidate in $W({{\mathbf{b}}})$ receives $n/k$ votes. An argument similar to the one in the proof of Theorem~\ref{thm:char-rand} shows that condition (2) must be satisfied. \end{proof} \fi The case where the winning set is a singleton is surprisingly complicated. We will first characterize utility profiles that admit a truthful PNE with this property. \begin{theorem}\label{thm:truth-rand-single1} Let ${{\mathbf{u}}}=(u_1, \dots, u_n)$ be a utility profile over $C$, let $R\in\{R^C, R^V\}$, and suppose that $W({{\mathbf{a}}})=\{c_j\}$ for some $c_j\in C$. 
Then ${{\mathbf{a}}}$ is a PNE of the game $G = ({{\mathcal{T}}}, R, {{\mathbf{u}}})$ if and only if for every $i\in N$ and every $c_k\in H({{\mathbf{a}}})\setminus\{a_i\}$, it holds that $c_j\succ_i c_k$. \end{theorem} \iffalse \begin{proof} Consider the ballot vector ${{\mathbf{a}}}$ and a voter $i\in N$. Clearly, if $a_i=c_j$, voter $i$ cannot improve her utility by deviating. Otherwise, the only way $i$ can change the election outcome is by changing her vote to some $c_k\in H({{\mathbf{a}}})\setminus\{a_i\}$, in which case the outcome is a lottery over $\{c_j,c_k\}$ where both of these candidates has a positive chance of winning. The condition of the theorem says that no voter wants to change the election outcome in this way. \end{proof} \fi Finally, we consider elections that have non-truthful equilibria with singleton winning sets. \begin{theorem}\label{thm:truth-rand-single2} Let ${{\mathbf{u}}}=(u_1, \dots, u_n)$ be a utility profile over $C$, let $R\in\{R^C, R^V\}$, and consider a ballot vector ${{\mathbf{b}}}$ with $W({{\mathbf{b}}})=\{c_j\}$ for some $c_j\in C$ and $b_r\neq a_r$ for some $r\in N$. Then ${{\mathbf{b}}}$ is a PNE of the game $G = ({{\mathcal{T}}}, R, {{\mathbf{u}}})$ if and only if all of the following conditions hold: \begin{itemize} \item[(1)] $b_i\in\{a_i, c_j\}$ for all $i\in N$; \item[(2)] $H({{\mathbf{b}}})\neq\emptyset$; \item[(3)] $c_j\succ_i c_k$ for all $i\in N$ and all $c_k\in H({{\mathbf{b}}})\setminus\{b_i\}$; \item[(4)] for every candidate $c_\ell\in H'({{\mathbf{b}}})$ and each voter $i\in N$ with $b_i=c_j$, $i$ prefers $c_j$ to the lottery where a candidate is chosen from $H({{\mathbf{b}}})\cup\{c_j, c_\ell\}$ according to $R$. \end{itemize} \end{theorem} \iffalse \begin{proof} Suppose that a ballot profile ${{\mathbf{b}}}$ satisfies conditions (1)--(4) of the theorem, and consider a voter $i\in N$. If $b_i=a_i=c_j$, the current outcome is optimal for $i$. 
If $b_i=a_i\neq c_j$, the only way that voter $i$ can change the election outcome is by voting for a candidate $c_k\in H({{\mathbf{b}}})\setminus\{a_i\}$, in which case the winner will be chosen from $\{c_j,c_k\}$ according to $R$. By condition (3), voter $i$ does not benefit from this change. By Proposition~\ref{prop:truth-basic}, the only remaining possibility is that $b_i=c_j\neq a_i$. Then $i$ can change the election outcome by (a) voting for a candidate $c_k\in H({{\mathbf{b}}})$; (b) voting for a candidate $c_\ell\in H'({{\mathbf{b}}})$; or (c) voting for a candidate in $C\setminus (H({{\mathbf{b}}})\cup H'({{\mathbf{b}}})\cup\{c_j\})$. In case (a) $c_k$ becomes the unique winner, so by condition (3) this change is not profitable to $i$. In case (b) the outcome is a tie among the candidates in $H({{\mathbf{b}}})\cup\{c_j, c_\ell\}$, so by condition (4) voter $i$ cannot profit from this change. Finally, in case (c) the outcome is a tie among the candidates in $H({{\mathbf{b}}})\cup\{c_j\}$, and by condition (3), $i$ prefers the current outcome to this one. Thus, a ballot vector satisfying conditions (1)--(4) is indeed a PNE. Conversely, suppose that ${{\mathbf{b}}}$ is a PNE of $({{\mathcal{T}}}, R, {{\mathbf{u}}})$ for some $R\in\{R^C, R^V\}$ and some utility profile ${{\mathbf{u}}}$, where $b_r\neq a_r$ for some $r\in N$. It follows from Proposition~\ref{prop:truth-basic} that ${{\mathbf{b}}}$ satisfies condition (1). If condition (2) is violated, voter $r$ can increase her utility by $\varepsilon$, by changing her vote to $a_r$, as $c_j$ would remain the unique election winner in this case. If condition (3) is violated for some $i\in N$ and some $c_k\in H({{\mathbf{b}}})$, voter $i$ can profitably deviate by changing her vote to $c_k$; if $b_i=c_j$, $c_k$ would then become the unique election winner, and if $b_i\neq c_j$, the outcome will be a tie between $c_j$ and $c_k$, so under $R$ each of them will win with positive probability. 
Similarly, if condition (4) is violated for some $i\in N$ and some $c_\ell\in H'({{\mathbf{b}}})$, voter $i$ can profitably deviate by changing her vote to $c_\ell$, so that the outcome becomes a tie among $H({{\mathbf{b}}})\cup\{c_j, c_\ell\}$. This concludes the proof. \end{proof} \fi We now consider the complexity of {\sc ExistNE}, {\sc TieNE}, and {\sc SingleNE} for truth-biased voters and randomized tie-breaking. The reader may observe that the characterization of PNE with ties in Theorem~\ref{thm:truth-rand-ties} is essentially identical to the one in Theorem~\ref{thm:char-rand}. As a consequence, we immediately obtain that $({{\mathcal{T}}}, R^C)$-{\sc TieNE} and $({{\mathcal{T}}}, R^V)$-{\sc TieNE} are NP-hard. For {\sc ExistNE} and {\sc SingleNE}, a simple modification of the proof of Theorem~\ref{thm:truth-lex-hard} shows that these problems remain hard under randomized tie-breaking. These observations are summarized in the following corollary. \begin{corollary}\label{cor:truth-rand-hard} For $R\in\{R^C, R^V\}$, $({{\mathcal{T}}}, R)$-{\sc SingleNE}, $({{\mathcal{T}}}, R)$-{\sc TieNE}, and $({{\mathcal{T}}}, R)$-{\sc ExistNE} are {\em NP}-complete. \end{corollary} \iffalse \begin{proof} Let $R\in\{R^C, R^V\}$. For $({{\mathcal{T}}}, R)$-{\sc TieNE}, as mentioned above, our claim follows from Theorem \ref{thm:char-rand} and its implications, as discussed in Section \ref{sec:lazy-cand}. For $({{\mathcal{T}}}, R)$-{\sc SingleNE} with $R\in\{R^C, R^V\}$, we can use the same reduction from {\sc MaxIntersect} as in Theorem \ref{thm:truth-lex-hard}. The only change is the analysis in the last paragraph of the proof of Theorem \ref{thm:truth-lex-hard}, due to the different tie-breaking rule. In particular, consider again a PNE ${{\mathbf{b}}}$ with winner $w_2$. By the analysis of Theorem \ref{thm:truth-lex-hard}, the set of candidates ${{\mathcal{E}}}'=\{e\in{{\mathcal{E}}}\mid{{\mathrm{sc}}}(e, {{\mathbf{b}}})=s-2\}$ contains at least $q$ elements.
Consider a candidate $e\in{{\mathcal{E}}}'$. Suppose that among the voters from Block 1 who deviated to $w_2$, there exists a voter $v_{ij}$ who prefers $e$ to $w_2$. Her utility in ${{\mathbf{b}}}$ is $1/2$. Suppose that she deviates to $e$ instead. In this case the score of $w_1$, $w_2$, and $e$ becomes $s-1$. Therefore, the new winning set is $\{w_1,w_2,e\}$. Given that $u_{ij}(w_1) = 1/4$ and $u_{ij}(e)> 3/4$, the utility of voter $v_{ij}$ becomes more than $1/2$, contradicting the fact that ${{\mathbf{b}}}$ is a PNE. Thus, for any voter $v_{ij}$ in Block 1 who deviated to $w_2$, and for any candidate $e\in{{\mathcal{E}}}'$ it holds that $v_{ij}$ prefers $w_2$ to $e$, i.e., $e\in S^i_j$. Thus, $e\in A_i$ for each $i\in [k]$, where the sets $A_1, \dots, A_k$ are defined in the proof of Theorem \ref{thm:truth-lex-hard}. As this holds for every $e\in{{\mathcal{E}}}'$ and $|{{\mathcal{E}}}'|\ge q$, if we have obtained a ``yes''-instance for $({{\mathcal{T}}}, R)$-{\sc SingleNE}, we started with a ``yes''-instance of {\sc MaxIntersect}. Finally, regarding $({{\mathcal{T}}}, R)$-{\sc ExistNE}, a simple modification of the same reduction can yield the desired result; we omit the details. \end{proof} \fi \section{Comparison}\label{sec:comparison} \noindent We are finally in a position to compare the different models considered in this paper. \noindent{\bf Tie-breaking rules\ } We have demonstrated that in equilibrium the two randomized tie-breaking rules ($R^C$ and $R^V$) induce very similar voter behavior, and identical election outcomes, both for lazy and for truth-biased voters. This is quite remarkable, since under truthful voting these tie-breaking rules can result in very different lotteries. In contrast, there is a substantial difference between the randomized rules and the lexicographic rule. For instance, when voters are lazy, {\sc ExistNE} is NP-hard for $R^C$ and $R^V$, but polynomial-time solvable for $R^L$. 
Further, the lexicographic rule is, by definition, not anonymous, and Theorem~\ref{thm:lazy-lex-char} demonstrates that candidates with smaller indices have a substantial advantage. For truth-biased voters the impact of tie-breaking rules is less clear: while we have obtained NP-hardness results for all three rules, it appears that, in contrast with lazy voters, for truth-biased voters randomized tie-breaking induces ``simpler'' PNE than lexicographic tie-breaking. \noindent{\bf Lazy vs. truth-biased voters\ } Under lexicographic tie-breaking, the sets of equilibria induced by the two types of secondary preferences are incomparable: there exists a utility profile ${{\mathbf{u}}}$ such that the sets of candidates who can win in PNE of $({{\mathcal{L}}}, R^L, {{\mathbf{u}}})$ and $({{\mathcal{T}}},R^L, {{\mathbf{u}}})$ are disjoint. \begin{example} {\em Let $C=\{c_1,c_2, c_3\}$, and consider a $4$-voter election with one vote of the form $c_2\succ c_3\succ c_1$, and three votes of the form $c_3\succ c_2\succ c_1$. The only PNE of $({{\mathcal{L}}}, R^L, {{\mathbf{u}}})$ is $(c_2, \bot, \bot, \bot)$, where $c_2$ wins, whereas the only PNE of $({{\mathcal{T}}}, R^L, {{\mathbf{u}}})$ is $(c_2, c_3, c_3, c_3)$, where $c_3$ wins. } \end{example} For randomized tie-breaking, the situation is more interesting. For concreteness, let us focus on $R^C$. Note first that the utility profiles for which there exist PNE with winning sets of size $2$ or more are the same for both voter types. Further, if $({{\mathcal{L}}}, R^C, {{\mathbf{u}}})$ has a PNE ${{\mathbf{b}}}$ with $|W({{\mathbf{b}}})|=1$ (which happens only if there is a unanimous winner), then ${{\mathbf{b}}}$ is also a PNE of $({{\mathcal{T}}}, R^C, {{\mathbf{u}}})$. However, $({{\mathcal{T}}}, R^C, {{\mathbf{u}}})$ may have additional PNE, including some non-truthful ones. 
In particular, for truth-biased voters, the presence of a strong candidate is sufficient for stability: Proposition~\ref{prop:truth-lex} implies that if there exists a $c\in C$ such that ${{\mathrm{sc}}}(c, {{\mathbf{a}}})\ge {{\mathrm{sc}}}(c',{{\mathbf{a}}})+2$ for all $c'\in C\setminus\{c\}$, then for any $R\in\{R^L, R^C, R^V\}$ the ballot vector ${{\mathbf{a}}}$ is a PNE of $({{\mathcal{T}}}, R, {{\mathbf{u}}})$ with $W({{\mathbf{a}}})=\{c\}$. \noindent{\bf Existence of PNE\ } One can argue that, when the number of voters is large relative to the number of candidates, under reasonable probabilistic models of elections, the existence of a strong candidate (as defined in the previous paragraph) is exceedingly likely (we omit the formal statement of this result and its proof due to space constraints), so elections with truth-biased voters typically admit stable outcomes; this is corroborated by the experimental results of \cite{tho-lev-ley:c:empirical}. In contrast, for lazy voters stability is more difficult to achieve, unless there is a candidate that is unanimously ranked first: under randomized tie-breaking rules, there needs to be a very precise balance among the candidates that end up being in $W({{\mathbf{b}}})$, and under $R^L$ the eventual winner has to Pareto-dominate all candidates that lexicographically precede him. Either of these conditions appears to be quite difficult to satisfy in a large election. \noindent{\bf Quality of PNE\ } In all of our models, a candidate ranked last by all voters cannot be elected, in contrast to the basic game-theoretic model for Plurality voting. However, not all non-desirable outcomes are eliminated: under $R^V$ and $R^C$ both lazy voters and truth-biased voters can still elect a Pareto-dominated candidate with non-zero probability in PNE. This has been shown for lazy voters and $R^C$ by~\cite{des-elk:c:eq} (Example 1), and the same example works for truth-biased voters and for $R^V$. 
A similar construction shows that a Pareto-dominated candidate may win under $R^L$ when voters are truth-biased. \iffalse For completeness, we describe this example below. \begin{example} Let $C=\{c_1, c_2, c_3\}$, $n=4$. Suppose that all voters rank $c_1$ first, the first two voters prefer $c_2$ to $c_3$, and the remaining two voters prefer $c_3$ to $c_2$. Then for every utility vector ${{\mathbf{u}}}$ consistent with these preferences, every ${{\mathcal{S}}}\in\{{{\mathcal{L}}},{{\mathcal{T}}}\}$ and every $R\in\{R^V, R^C\}$ it holds that ${{\mathbf{b}}}=(c_2, c_2, c_3, c_3)$ is a Nash equilibrium of $({{\mathcal{S}}}, R, {{\mathbf{u}}})$. \end{example} A similar construction shows that a Pareto-dominated candidate may win under $R^L$ when voters are truth-biased. \begin{example} Let $C=\{c_1, c_2, c_3, c_4\}$, $n=4$. Suppose that voter $1$'s preference order is $c_1\succ c_3\succ c_4\succ c_2$, voter $2$'s preference order is $c_2\succ c_3\succ c_4\succ c_1$, and the last two voters' preference orders are $c_3\succ c_4\succ c_1\succ c_2$. Then for every utility vector ${{\mathbf{u}}}$ consistent with these preferences it holds that ${{\mathbf{b}}}=(c_1, c_2, c_4, c_4)$ is a Nash equilibrium of $({{\mathcal{T}}}, R^L, {{\mathbf{u}}})$. \end{example} \fi In contrast, lazy voters cannot elect a Pareto-dominated candidate under $R^L$: Theorem~\ref{thm:lazy-lex-char} shows that the winner has to be ranked first by some voter. \iffalse However, even in this setting the winner can be almost Pareto-dominated, i.e., ranked below another candidate (in fact, ranked last) by all but one voter. \begin{example} Consider an election with $|C|\ge 3$, where voter $1$ ranks $c_2$ first and all other voters rank $c_3$ first and $c_2$ last. Then for every utility vector ${{\mathbf{u}}}$ consistent with these preferences it holds that ${{\mathbf{b}}}=(c_2, \bot, \dots, \bot)$ is a Nash equilibrium of $({{\mathcal{L}}}, R^L, {{\mathbf{u}}})$. 
\end{example} \fi We can also measure the quality of PNE by analyzing the Price of Anarchy (PoA) in both models. The study of PoA in the context of voting has been recently initiated by Branzei et al.~\cite{bcmp_2013}. The additive version of PoA, which was considered by Branzei et al., is defined as the worst-case difference between the score of the winner under truthful voting and the truthful score of a PNE winner. It turns out that PoA can be quite high, both for lazy and truth-biased voters. To illustrate this, we provide in the supplementary material two examples showing that under lexicographic tie-breaking ${{\mathrm{PoA}}} = \Omega (n)$ in both models. Similar results can be established for randomized tie-breaking as well. Even though the ${{\mathrm{PoA}}}$ results are not encouraging, this is only a worst-case analysis and we expect PNE to have a better performance on average. For the truth-biased model, this is also supported by the experimental evaluation of Thompson et al.~\cite{tho-lev-ley:c:empirical}, who showed that in the truth-biased model most PNE identified in their simulations had good social welfare properties. Formalizing this observation, i.e., providing average-case analysis of the quality of PNE in voting games, is a promising topic for future work. \section{Extension: Principled Voters}\label{sec:principled} \noindent The results of this paper can be extended to the setting where some of the voters are {\em principled}, i.e., always vote truthfully (and never abstain). Due to space constraints, we relegate the formal statements of our results for this extended model to the supplementary material. Briefly, the presence of principled voters has the strongest effect on lazy voters and lexicographic tie-breaking, whereas for other settings the effect is less pronounced. All computational problems that were easy in the standard model remain easy in the extended model (and, obviously, all hard problems remain hard). 
Finally, in the presence of principled voters the random candidate tie-breaking rule is no longer equivalent to the random voter tie-breaking rule. \iffalse \subsection{Principled + Lazy Voters, Lexicographic Tie-breaking}\label{sec:principled-lazy-lex} \noindent We have argued that in elections where all voters are lazy and the tie-breaking rule is $R^L$, there is at most one voter who does not abstain and all PNE have the same winner. However, in the presence of principled voters, this is no longer true; indeed, there are elections where {\em every} candidate can win in a PNE. \begin{example}\label{ex:principled-lazy-lex} Consider an election over a candidate set $C=\{c_1, \dots, c_m\}$, $m>1$, where there are two principled voters who both vote for $c_m$, and two lazy voters who both rank $c_m$ last. Then the ballot vector where both lazy voters abstain is a PNE (with winner $c_m$). Moreover, for every $j\in[m-1]$ the ballot vector where both lazy voters vote for $c_j$ is a PNE as well (with winner $c_j$). \end{example} Nevertheless, given an election with principled and lazy voters, we can characterize the set of candidates who can win in a PNE of the respective game. \begin{proposition} Let ${{\mathbf{u}}}$ be the lazy voters' utility profile over $C$ and let ${{\mathbf{a}}}^P$ be the principled voters' ballot vector. Let $j=\min\{k\mid c_k\in W({{\mathbf{a}}}^P)\}$, and let $H^+({{\mathbf{a}}}^P)=\{c_k\in H({{\mathbf{a}}}^P)\mid k<j\}$. Then the game $G=({{\mathcal{L}}}, R^L, {{\mathbf{u}}},{{\mathbf{a}}}^P)$ has the following properties. \begin{itemize} \item[(1)] If ${{\mathbf{b}}}$ is a PNE of $G$ then there is at most one candidate $c\in C$ such that $b_i=c$ for some $i\in N$; further, if $b_i=c$ for some $c\in C$, $i\in N$, then $c$ is the winner in ${{\mathbf{b}}}+{{\mathbf{a}}}^P$. \item[(2)] $G$ has a PNE where $c_j$ wins if and only if $(\bot, \dots,\bot)$ is a PNE of $G$. 
\item[(3)] If $k>j$ then $G$ has a PNE where $c_k$ wins if and only if there are at least $M({{\mathbf{a}}}^P)+1-{{\mathrm{sc}}}(c_k, {{\mathbf{a}}}^P)$ lazy voters who prefer $c_k$ to all candidates in $(W({{\mathbf{a}}}^P)\cup H^+({{\mathbf{a}}}^P))\setminus\{c_k\}$. \item[(4)] If $k<j$ then $G$ has a PNE where $c_k$ wins if and only if there are at least $M({{\mathbf{a}}}^P)-{{\mathrm{sc}}}(c_k, {{\mathbf{a}}}^P)$ lazy voters who prefer $c_k$ to all candidates in $(W({{\mathbf{a}}}^P)\cup H^+({{\mathbf{a}}}^P))\setminus\{c_k\}$. \end{itemize} \end{proposition} \begin{corollary} The problems $({{\mathcal{L}}}, R^L)$-{\sc ExistNE$^P$}, $({{\mathcal{L}}}, R^L)$-{\sc TieNE$^P$}, and $({{\mathcal{L}}}, R^L)$-{\sc SingleNE$^P$} are in {\em P}. \end{corollary} \subsection{Principled + Lazy Voters, Randomized Tie-breaking}\label{sec:principled-lazy-rand} \noindent We will now consider the effect of the presence of principled voters on lazy voters under randomized tie-breaking. We show that single-winner PNE in this setting may have a more complicated structure than single-winner PNE in the absence of principled voters. On the other hand, PNE where several candidates are tied for winning are very similar to those that arise when no principled voters are present. We first consider the random candidate tie-breaking rule. \begin{proposition}\label{prop:principled-lazy-rand-unique} Let ${{\mathbf{u}}}=(u_1, \dots, u_n)$ be the lazy voters' utility profile over $C$, $|C|=m$, and let ${{\mathbf{a}}}^P=(a_{n+1}, \dots, a_{n+s})$ be the principled voters' ballot profile. 
The game $G = ({{\mathcal{L}}}, R^C, {{\mathbf{u}}}, {{\mathbf{a}}}^P)$ admits a PNE ${{\mathbf{b}}}$ with $W({{\mathbf{b}}}+{{\mathbf{a}}}^P)=\{c_j\}$ for some $c_j\in C$ if and only if one of the following conditions holds: \begin{itemize} \item[(1)] $W({{\mathbf{a}}}^P)=\{c_j\}$, $H({{\mathbf{a}}}^P)=\emptyset$; \item[(2)] $|V_j|\ge M({{\mathbf{a}}}^P)+1-{{\mathrm{sc}}}(c_j, {{\mathbf{a}}}^P)$, where $V_j$ is the set that consists of all voters $i\in N$ such that (a) $u_i(c_j)>u_i(c_k)$ for all $c_k\in W({{\mathbf{a}}}^P)$ and (b) for each $c_\ell\in H({{\mathbf{a}}}^P)$ it holds that $$ u_i(c_j)\ge \frac{1}{|W({{\mathbf{a}}}^P)|+1}\sum_{c\in W({{\mathbf{a}}}^P)\cup\{c_\ell\}}u_i(c). $$ \end{itemize} Moreover, if condition (1) holds then $G$ has a PNE where all lazy voters abstain, and if condition (2) holds then $G$ has a PNE where exactly $M({{\mathbf{a}}}^P)+1-{{\mathrm{sc}}}(c_j, {{\mathbf{a}}}^P)$ lazy voters vote for $c_j$, while the remaining lazy voters abstain. The game $G$ has no other PNE with winning set $\{c_j\}$. \end{proposition} \begin{corollary}\label{cor:principled-lazy-singleNE} The problem $({{\mathcal{L}}}, R^C)$-{\sc SingleNE$^P$} is in {\em P}. \end{corollary} \begin{proposition}\label{prop:principled-lazy-rand-tie} Let ${{\mathbf{u}}}=(u_1, \dots, u_n)$ be the lazy voters' utility profile over a candidate set $C$, $|C|=m$, and let ${{\mathbf{a}}}^P=(a_{n+1}, \dots, a_{n+s})$ be the principled voters' ballot profile. Then the game $G = ({{\mathcal{L}}}, R^C, {{\mathbf{u}}}, {{\mathbf{a}}}^P)$ admits a PNE ${{\mathbf{b}}}$ with $|W({{\mathbf{b}}}+{{\mathbf{a}}}^P)|>1$ if and only if one of the following conditions holds: \begin{itemize} \item[(1)] each candidate is ranked first by at most one voter in $N\cup P$ and $\frac{1}{n+s}\sum_{i\in N\cup P}u_\ell(a_i)\ge \max_{i\in (N\cup P)\setminus\{\ell\}}u_\ell(a_i)$ for all $\ell\in N$.
\item[(2)] there exists a set of candidates $X = \{c_{\ell_1}, \dots, c_{\ell_k}\}$ with $k\ge 2$, a positive integer $n'\le n$ with $n'/k\ge 2$ such that for each $c\not\in X$ we have ${{\mathrm{sc}}}(c, {{\mathbf{a}}}^P)<n'/k$, and a partition of the lazy voters into $k$ groups $N_1, \dots, N_k$ (some of which may be empty) such that \begin{itemize} \item[(a)] for each $j\in [k]$ we have $|N_j|+{{\mathrm{sc}}}(c_{\ell_j}, {{\mathbf{a}}}^P)= n'/k$; \item[(b)] for each $j\in[k]$ and each $i\in N_j$ we have $c_{\ell_j}\succ_i c$ for all $c\in X\setminus\{c_{\ell_j}\}$; \item[(c)] for each $j\in[k]$ and each $i\in N_j$ we have $\frac{1}{k}\sum_{c\in X}u_i(c)\ge \max_{c\in X\setminus\{c_{\ell_j}\}}u_i(c)$; \item[(d)] for each $j\in[k]$, each $i\in N_j$, and each $c'\in C\setminus X$ with ${{\mathrm{sc}}}(c', {{\mathbf{a}}}^P)=n'/k-1$ we have $\frac{1}{k}\sum_{c\in X}u_i(c)\ge \frac{1}{k}\sum_{c\in (X\cup\{c'\})\setminus\{c_{\ell_j}\}}u_i(c)$. \end{itemize} \end{itemize} Moreover, if condition (1) holds then $G$ has a PNE where each lazy voter votes for her top candidate, and if condition (2) holds, then $G$ has a PNE where each lazy voter votes for her top candidate in $X$. The game $G$ has no other PNE with two or more winners. \end{proposition} The following corollary is a direct consequence of Corollary~\ref{cor:lazy-rand-hard} and the fact that the model with no principled voters is a special case of the model with principled voters. \begin{corollary} The problems $({{\mathcal{L}}}, R^C)$-{\sc TieNE$^P$} and $({{\mathcal{L}}}, R^C)$-{\sc ExistNE$^P$} are {\em NP}-complete. \end{corollary} The reader may have noticed that Proposition~\ref{prop:principled-lazy-rand-unique}, Corollary~\ref{cor:principled-lazy-singleNE}, and Proposition~\ref{prop:principled-lazy-rand-tie} are stated for $R^C$, but not for $R^V$. The reason for this is that in the presence of principled voters the tie-breaking rules $R^C$ and $R^V$ are no longer equivalent. 
\begin{example}\label{ex:rc-neq-rv} Consider an election over the candidate set $C=\{c_1, c_2, c_3\}$, where there are two lazy voters whose utility function is given by $u(c_1)=20$, $u(c_2)=4$, $u(c_3)=1$, two lazy voters whose utility function is given by $u'(c_2)=20$, $u'(c_1)=4$, $u'(c_3)=1$, and one principled voter who ranks the candidates as $c_3\succ c_1\succ c_2$. It is easy to see that both for $R^C$ and for $R^V$ the resulting game has a PNE where two lazy voters vote for $c_1$, two lazy voters vote for $c_2$, and the principled voter votes for $c_3$. Under $R^C$ candidates $c_1$ and $c_2$ are equally likely to win in this PNE. However, under $R^V$ candidate $c_1$ wins with probability $3/5$ and candidate $c_2$ wins with probability $2/5$. \end{example} Nevertheless, all results in this section can be extended to random voter tie-breaking, by replacing the uniform lotteries over the winning sets in Propositions~\ref{prop:principled-lazy-rand-unique} and~\ref{prop:principled-lazy-rand-tie} by lotteries that correspond to choosing an element of the winning set according to the preferences of a random voter (who may be principled or lazy). While the inequalities that one needs to verify become more cumbersome, the complexity of the respective computational problems remains the same. In particular, we obtain the following corollary. \begin{corollary} The problems $({{\mathcal{L}}}, R^V)$-{\sc TieNE$^P$} and $({{\mathcal{L}}}, R^V)$-{\sc ExistNE$^P$} are {\em NP}-complete, whereas $({{\mathcal{L}}}, R^V)$-{\sc SingleNE$^P$} is in~{\em P}. \end{corollary} \subsection{Principled + Truth-biased Voters} Principled and truth-biased voters are quite similar in their behavior; therefore, adding principled voters to the setting of Section~\ref{sec:truth} results in fewer changes than adding them to the setting of Section~\ref{sec:lazy}.
To illustrate this point, we will now show how to extend Proposition~\ref{prop:truth-lex} to settings where principled voters may be present. \begin{proposition}\label{prop:principled-truth-lex} Let ${{\mathbf{u}}}$ be the utility profile of truth-biased voters, let ${{\mathbf{a}}}$ be their truthful ballot vector, and let ${{\mathbf{a}}}^P$ be the ballot vector of principled voters. Let $j=\min\{r\mid c_r\in W({{\mathbf{a}}}+{{\mathbf{a}}}^P)\}$. Then ${{\mathbf{a}}}$ is a PNE of $({{\mathcal{T}}}, R^L, {{\mathbf{u}}}, {{\mathbf{a}}}^P)$ if and only if neither of the following conditions holds: \begin{itemize} \item[(1)] $|W({{\mathbf{a}}}+{{\mathbf{a}}}^P)|>1$, and there exists a candidate $c_k\in W({{\mathbf{a}}}+{{\mathbf{a}}}^P)$ and a voter $i\in N$ such that $a_i\neq c_k$ and $c_k\succ_i c_j$. \item[(2)] $H({{\mathbf{a}}}+{{\mathbf{a}}}^P)\neq\emptyset$, and there exists a candidate $c_k\in H({{\mathbf{a}}}+{{\mathbf{a}}}^P)$ and a voter $i\in N$ such that $a_i\neq c_k$, $c_k\succ_i c_j$, and $k<j$. \end{itemize} \end{proposition} All other claims in Section~\ref{sec:truth} can be modified in a similar way: essentially, we replace $W({{\mathbf{b}}})$, $H({{\mathbf{b}}})$ and $H'({{\mathbf{b}}})$ with $W({{\mathbf{b}}}+{{\mathbf{a}}}^P)$, $H({{\mathbf{b}}}+{{\mathbf{a}}}^P)$ and $H'({{\mathbf{b}}}+{{\mathbf{a}}}^P)$, but when considering the voters' incentives to change their votes, we limit our attention to truth-biased voters. Of course, we have to take into account the number of votes cast by principled voters in favor of each candidate in the winning set, and distinguish between $R^V$ and $R^C$ (as in Sections~\ref{sec:principled-lazy-lex} and~\ref{sec:principled-lazy-rand}). Finally, it is immediate that all hardness results established in Section~\ref{sec:truth} remain true in the presence of principled voters.
\fi \section{Conclusions and Future Work} \noindent We have characterized PNE of Plurality voting for several combinations of secondary preferences and tie-breaking rules. Our complexity results are summarized in Table~\ref{tbl:summary}. A promising direction for future work is to investigate more general classes of tie-breaking rules. It is also interesting to consider the complexity of various refinements of Nash equilibria for our models, such as strong Nash equilibria (for which an analysis for ${{\mathcal{T}}}$ and $R^L$ can be found in the work of Obraztsova et al.~\cite{obr-mar-tho:c:truth-biased}), or subgame-perfect Nash equilibria for settings where voters submit their ballots one by one; see \cite{des-elk:c:eq} and \cite{xia-con:c:spne} for some results about such equilibria. \begin{table}[ht] {\small \begin{tabular}{|r|c|c|c|} \hline & {\sc SingleNE} & {\sc TieNE} & {\sc ExistNE} \\ \hline $({{\mathcal{L}}}, R^L)$ & P (Cor.~\ref{cor:lazy-lex-easy}) & P (Cor.~\ref{cor:lazy-lex-easy}) & P (Cor.~\ref{cor:lazy-lex-easy}) \\ \hline $({{\mathcal{L}}}, R^C)$ & P (Cor.~\ref{cor:lazy-rand-hard}) & NPc (Cor.~\ref{cor:lazy-rand-hard}) & NPc (Cor.~\ref{cor:lazy-rand-hard}) \\ \hline $({{\mathcal{L}}}, R^V)$ & P (Cor.~\ref{cor:lazy-rand-hard}) & NPc (Cor.~\ref{cor:lazy-rand-hard}) & NPc (Cor.~\ref{cor:lazy-rand-hard}) \\ \hline $({{\mathcal{T}}}, R^L)$ & NPc (Thm.~\ref{thm:truth-lex-hard}) & NPc (Thm.~\ref{thm:truth-lex-hard}) & NPc (Thm.~\ref{thm:truth-lex-hard}) \\ \hline $({{\mathcal{T}}}, R^C)$ & NPc (Cor.~\ref{cor:truth-rand-hard}) & NPc (Cor.~\ref{cor:truth-rand-hard}) & NPc (Cor.~\ref{cor:truth-rand-hard}) \\ \hline $({{\mathcal{T}}}, R^V)$ & NPc (Cor.~\ref{cor:truth-rand-hard}) & NPc (Cor.~\ref{cor:truth-rand-hard}) & NPc (Cor.~\ref{cor:truth-rand-hard}) \\ \hline \end{tabular} } \caption{\label{tbl:summary}Complexity results: P stands for ``polynomial-time solvable'', NPc stands for ``NP-complete''.} \end{table} \newpage
\section{Introduction} Parity-time (PT) symmetry is one of the most significant symmetries of open systems with gain and loss. Various open systems can be described by non-Hermitian Schr\"{o}dinger equations. If a non-Hermitian Hamiltonian and its eigenstates respect PT symmetry, the spectrum is entirely real \cite{1998Bender}. PT-symmetric open systems can be realized in classical \cite{2010Ruter} and quantum \cite{2017Xiao} optical setups and can be applied, for example, to lasing \cite{2014Hodaei} and sensing \cite{2016Liu}. Topological phenomena in PT-symmetric open systems are also intensively studied \cite{2015Yuce,2017Weimann,2018Kawabata,2020Kawasaki}. However, the non-Hermitian Schr\"{o}dinger equation is only an approximation to the time evolution of open quantum systems and can describe only their short-time dynamics. Instead, the time evolution of density operators should be considered to describe the long-time dynamics of open quantum systems. If the system-environment coupling is sufficiently weak, the time evolution of open quantum systems is well captured by the Markovian quantum master equation \cite{2002Breuer} \(\displaystyle i\frac{d\rho}{dt}=\hat{\mathcal{L}}[\rho]\). Since the superoperator \(\hat{\mathcal{L}}\), called the Lindbladian, is non-Hermitian, we can introduce PT symmetry to general open quantum systems \cite{2012Prosen}. Topological phenomena of Markovian open quantum systems have been studied from the dynamical perspective \cite{2020Lieu,2022Kawasaki} with the help of non-Hermitian topological phases \cite{2019Kawabata}. Nevertheless, the topological phases of PT-symmetric Markovian open quantum systems remain unclear. In this work, we investigate a PT-symmetric topological phase of open quantum systems described by the Markovian quantum master equation. We consider the Kitaev chain, a one-dimensional topological superconductor, subject to dissipation.
We show that the system respects PT symmetry and that the entire bulk spectrum of the Lindbladian \(\hat{\mathcal{L}}\) has a common imaginary part. We also show that the edge modes break PT symmetry, and one of them must have a zero eigenvalue in a wide parameter region. \section{Dissipative topological superconductors with PT symmetry} \subsection{Model and formalism} \label{subsec:model} We consider a topological superconductor coupled to environments. If memory effects are negligible, the time evolution is given by the Markovian quantum master equation in Lindblad form \cite{2002Breuer}, \begin{equation} i\frac{d\rho}{dt}=\hat{\mathcal{L}}[\rho]=[\mathcal{H},\rho]+i\sum_{\mu}(2L_{\mu}\rho L_{\mu}^{\dagger}-\{L_{\mu}^{\dagger}L_{\mu},\rho\}), \label{eq:lindblad} \end{equation} where \(\rho\) is the density operator of the system, \(\mathcal{H}\) is the Hamiltonian of the system, and the jump operator \(L_{\mu}\) describes a dissipation process of the system. The superoperator \(\hat{\mathcal{L}}\) generates the time evolution, and we call it the Lindbladian. In this work, we consider the Kitaev chain whose Hamiltonian is given as \begin{equation} \mathcal{H}=\frac{it_0}{2}\sum_j(w_{j,\alpha}w_{j,\beta}-w_{j,\beta}w_{j,\alpha})+\frac{it_1}{2}\sum_j(w_{j,\beta}w_{j+1,\alpha}-w_{j+1,\alpha}w_{j,\beta}),~t_0,t_1\geq0 \label{eq:Kitaev} \end{equation} where \(w_{j,\alpha}\) and \(w_{j,\beta}\) are Hermitian Majorana operators satisfying \(\{w_{j,s},w_{j',s'}\}=2\delta_{j,j'}\delta_{s,s'}\). They are related to the fermionic operators \(a, a^{\dagger}\) as \(w_{j,\alpha}=a_j+a_j^{\dagger},~w_{j,\beta}=i(a_j^{\dagger}-a_j)\). We focus on one-body dissipation proportional to the Majorana operators: \begin{equation} L_{j}=\gamma w_{j,\alpha},\quad\gamma>0.
\label{eq:jump_op} \end{equation} Since the Hamiltonian is non-interacting and the jump operators are linear in the Majorana operators, the Lindbladian is also non-interacting, and we can employ third quantization \cite{2008Prosen,2021Barthel}. The set of operators of the \(n\)-fermion system forms a Hilbert space with the Hilbert-Schmidt inner product \(\bbraket{A|B}=\mathrm{tr}[A^{\dagger}B]\), and we can regard the operator \(A\) as a vector \(\bket{A}\). We choose the basis as \(\bket{P_{\bm{p}}}=\bket{w_{1,\alpha}^{p_{1,\alpha}}w_{1,\beta}^{p_{1,\beta}}\cdots w_{n,\beta}^{p_{n,\beta}}}\) (\(p_{j,s}\in\{0,1\}\), and we use the notation \(\bm{o}\coloneqq(o_{1,\alpha},o_{1,\beta},\cdots,o_{n,\beta})\), where \(o\) is an arbitrary object such as a number, operator, or superoperator, throughout the paper); then the fermionic superoperators \begin{equation} \hat{c}_{j,s}\bket{P_{\bm{p}}}=\delta_{p_{j,s},1}\bket{w_{j,s} P_{\bm{p}}},~\hat{c}_{j,s}^{\dagger}\bket{P_{\bm{p}}}=\delta_{p_{j,s},0}\bket{w_{j,s} P_{\bm{p}}} \label{eq:3rdquant_basis} \end{equation} can be defined. They satisfy the fermionic anticommutation relations \(\{\hat{c}_{j,s},\hat{c}_{j',s'}^{\dagger}\}=\delta_{j,j'}\delta_{s,s'},~\{\hat{c}_{j,s},\hat{c}_{j',s'}\}=0\). Since, under the non-interacting assumption, the Lindbladian changes the basis \(\bket{P_{\bm{p}}}\) only as \(\bket{w_{j,s} w_{j',s'} P_{\bm{p}}},~\bket{P_{\bm{p}}w_{j,s} w_{j',s'}},\) and \(\bket{w_{j,s} P_{\bm{p}} w_{j',s'}}\), we can rewrite the Lindbladian as a quadratic form of the fermionic superoperators \(\hat{c},~\hat{c}^{\dagger}\). After some calculation, one can show that the Lindbladian preserves the fermion parity \(\Pi=(-1)^{N}\) (\(N\) is the particle number \(N=\sum_j a_{j}^{\dagger} a_{j}\)) even though the Lindbladian does not preserve the particle number \(N\). Then the non-interacting Lindbladian can be block-diagonalized into even and odd fermion-parity sectors.
We focus on the even-parity sector because proper quantum states have even fermion parity \(\Pi\rho\Pi=\rho\) (that is, they are written as the linear combination of the operators of the form \(\ket{\mathrm{even}}\bra{\mathrm{even}}\) and \(\ket{\mathrm{odd}}\bra{\mathrm{odd}}\), where \(\Pi\ket{\mathrm{even}}=\ket{\mathrm{even}}\) and \(\Pi\ket{\mathrm{odd}}=-\ket{\mathrm{odd}}\)). We obtain a simplified form of the Lindbladian of this model for the even-parity sector as \begin{equation} \hat{\mathcal{L}}=4\hat{\bm{c}}^{\dagger}Z\hat{\bm{c}}. \label{eq:3rdquant} \end{equation} The non-Hermitian matrix \(Z\) is defined by \begin{equation} Z\coloneqq H-i\mathrm{Re}M, \label{eq:Z_def} \end{equation} where \(H\) and \(M\) are Hermitian matrices defined by the coefficients of the \(\mathcal{H}\) and \(L_{\mu}\): \begin{gather} \mathcal{H}=\bm{w}^TH\bm{w},~M=\sum_{\mu}\bm{l}_{\mu}\bm{l}_{\mu}^{\dagger},~L_{\mu}=\bm{l}_{\mu}^{T}\bm{w}=\sum_{j,s} l_{\mu,j,s}w_{j,s}. \label{eq:M_def} \end{gather} Thus, we regard the Lindbladian as a non-Hermitian non-interacting Hamiltonian with the first quantized Hamiltonian \(4Z\). By diagonalizing \(Z\), the Lindbladian is also diagonalized as \begin{equation} \hat{\mathcal{L}}=\sum_{j=1}^{2n}4\lambda_j \hat{b}'_j\hat{b}_j, \label{eq:lind_diag} \end{equation} where \(Z=\sum_{j=1}^{2n}\lambda_j \bm{\psi_j}\bm{\chi_j}^{\dagger}\) is the eigendecomposition of \(Z\) and \(\hat{b}'_j\coloneqq \hat{\bm{c}}^{\dagger}\bm{\psi_j},~\hat{b}_j\coloneqq \bm{\chi_j}^{\dagger}\hat{\bm{c}}\) are creation and annihilation superoperators of the eigenmodes satisfying generalized canonical anticommutation relations \(\{\hat{b}_j,\hat{b}'_k\}=\delta_{j,k},~\{\hat{b}_j,\hat{b}_k\}=\{\hat{b}'_j,\hat{b}'_k\}=0\). Non-interacting Lindbladians always have a steady state \(\bket{\mathrm{NESS}}\) such that \(\hat{b}_j\bket{\mathrm{NESS}}=0\) for all \(j\). 
Then an eigenoperator of the Lindbladian is constructed as \(\prod_{j=1}^{2n}\hat{b}'_j{}^{\nu_j}\bket{\mathrm{NESS}}~(\nu_j\in\{0,1\})\), and its eigenvalue is given as \(4\sum_{j=1}^{2n}\lambda_j\nu_j\). In particular, an eigenoperator with even fermion parity and a zero eigenvalue corresponds to a steady state of the system \cite{2021Barthel}. Under the periodic boundary conditions, \(Z\) in our model is given in the momentum space as \begin{equation} Z(k)=\frac{i}{2}\begin{pmatrix} -2\gamma^2 & -t(k)^* \\ t(k) & 0 \end{pmatrix},~t(k)\coloneqq t_0+t_1 e^{ik}. \label{eq:Z_kit} \end{equation} \subsection{PT symmetry and the topological invariant} Owing to Eq.\ \eqref{eq:3rdquant}, we can investigate some properties of the Lindbladian by investigating the matrix \(Z\) instead. In particular, we investigate the topological properties of the dissipative Kitaev chain in this work. To this end, we employ the topological classification of the non-Hermitian matrix \(Z\) \cite{2019Kawabata,2020Lieu,2022Kawasaki}. \(Z\) has all symmetries in AZ\(^{\dagger}\) class, and it belongs to class BDI\(^{\dagger}\): \begin{align} \mathrm{TRS}^{\dagger}:~&\mathcal{T} Z^T(-k)\mathcal{T}^{-1}=Z(k),~\mathcal{T}=\sigma_z, \label{eq:trs} \\ \mathrm{PHS}^{\dagger}:~&Z^*(-k)=-Z(k), \label{eq:phs} \\ \mathrm{CS}:~&\Gamma Z^{\dagger}(k)\Gamma^{-1}=-Z(k),~\Gamma=\sigma_z, \label{eq:cs} \end{align} where \(\sigma_z\) is one of the Pauli matrices \(\sigma_z=\begin{pmatrix}1&0 \\ 0&-1\end{pmatrix}\). Moreover, the traceless part of \(Z\) has PT symmetry as \begin{equation} (\mathcal{PT})\left[Z(k)+\frac{i\gamma^2}{2}I\right]^*(\mathcal{PT})^{-1}=Z(k)+\frac{i\gamma^2}{2}I,~\mathcal{PT}=\sigma_x, \label{eq:pt} \end{equation} where \(\sigma_x=\begin{pmatrix}0&1\\1&0\end{pmatrix}\) and \(I\) is the identity operator. 
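The symmetry relations above can be verified directly for the Bloch matrix \(Z(k)\) of Eq.\ \eqref{eq:Z_kit}; a minimal numerical sketch (the parameter values are our illustrative choice):

```python
import numpy as np

t0, t1, g2 = 1.0, 3.0, 0.3   # illustrative parameters; g2 stands for gamma^2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def Z(k):
    # Bloch matrix Z(k) = (i/2) [[-2 gamma^2, -t(k)*], [t(k), 0]]
    t = t0 + t1 * np.exp(1j * k)
    return 0.5j * np.array([[-2 * g2, -np.conj(t)], [t, 0]])

for k in np.linspace(-np.pi, np.pi, 9):
    assert np.allclose(sz @ Z(-k).T @ sz, Z(k))           # TRS^dagger
    assert np.allclose(np.conj(Z(-k)), -Z(k))             # PHS^dagger
    assert np.allclose(sz @ Z(k).conj().T @ sz, -Z(k))    # chiral symmetry
    Ztl = Z(k) + 0.5j * g2 * np.eye(2)                    # traceless part
    assert np.allclose(sx @ np.conj(Ztl) @ sx, Ztl)       # PT symmetry
print("all symmetry checks passed")
```

For these parameters \(|t(k)|^2>\gamma^4\) for all \(k\), and one can further check that both eigenvalues of \(Z(k)\) share the common imaginary part \(-\gamma^2/2\).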
Note that some symmetries such as PT symmetry for Lindbladians are defined for the traceless part \cite{2012Prosen,2022Kawasaki}, as Lindbladians do not have eigenvalues with a positive imaginary part. PT symmetry in Eq.\ \eqref{eq:pt} ensures that the eigenvalues of \(Z\) are of the form \(\nu-i\gamma^2/2~(\nu\in\mathbb{R})\) or \(\{\nu-i\gamma^2/2,\nu^*-i\gamma^2/2\}~(\nu\in\mathbb{C})\). The eigenvalues take the former form if the corresponding eigenvector of \(Z\), \(\bm{\psi}\), is the same as \(\mathcal{PTK}\bm{\psi}\) up to a phase factor, where \(\mathcal{K}\) is the complex conjugation operation. On the other hand, the eigenvalues take the latter form if $\bm{\psi}$ and \(\mathcal{PTK}\bm{\psi}\) are linearly independent. We say that the eigenvectors do not break PT symmetry if the corresponding eigenvalues take the former form, while the eigenvectors break PT symmetry and form a pair \(\{\bm{\psi},\mathcal{PTK}\bm{\psi}\}\) if the corresponding eigenvalues take the latter form. Typically, eigenvalues of the former form become the latter form as we increase the non-Hermiticity of a PT-symmetric matrix. If \(Z\) does not have PT-symmetry-breaking eigenvectors, in other words, if the imaginary parts of all eigenvalues are \(-\gamma^2/2\), the system belongs to the PT-symmetry-unbroken phase. If some eigenvectors break PT symmetry, {\it i.e.}, the imaginary parts of some eigenvalues differ from \(-\gamma^2/2\), the system belongs to the PT-symmetry-broken phase. We call the transition from the PT-unbroken phase to the broken phase PT symmetry breaking. The eigenvalues of \(Z(k)\) in Eq.\ \eqref{eq:Z_kit} are obtained as \begin{equation} \lambda_{\pm}=\pm\frac{1}{2}\sqrt{|t(k)|^2-\gamma^4}-\frac{i\gamma^2}{2}. \label{eq:dispersion} \end{equation} We show the eigenvalues in Fig.\ \ref{fig:winding} (a). \begin{figure}[tb] \centering \includegraphics[width=0.95\columnwidth]{winding_v1.pdf} \caption{(a) The spectrum of \(Z\) with periodic boundary conditions.
Red and blue lines correspond to the cases of \(t_1=3t_0,\gamma^2=0.3t_0\) (PT symmetry unbroken) and \(t_1=0.5t_0,\gamma^2=0.8t_0\) (PT symmetry broken), respectively. (b) The phase diagram of the topological invariant in Eq.\ \eqref{eq:winding} when \(\gamma^2=0.2t_0\). In the yellow region, the real line gap closes due to PT symmetry breaking, and the winding number in Eq.\ (\ref{eq:winding}) is ill-defined.} \label{fig:winding} \end{figure} The imaginary parts of all eigenvalues equal \(-\gamma^2/2\) if \(|t(k)|^2>\gamma^4\) for all \(k\) [red lines of Fig.\ \ref{fig:winding} (a)]. No eigenvectors break PT symmetry, and the real line gap opens in this region. PT symmetry breaking occurs and the line gap closes if there exists \(k\) such that \(|t(k)|^2\leq\gamma^4\) [blue vertical line of Fig.\ \ref{fig:winding} (a)]. The left and right eigenvectors of $Z(k)$ are written as \begin{equation} \bm{\chi_{\pm}}^{\dagger}=\frac{1}{N_{\pm}}(\pm\sqrt{|t(k)|^2-\gamma^4}-i\gamma^2,-it(k)^*),~\bm{\phi_{\pm}}=\frac{1}{N_{\pm}}\binom{\pm\sqrt{|t(k)|^2-\gamma^4}-i\gamma^2}{it(k)}, \label{eq:eigenvector} \end{equation} where \(N_{\pm}\) is the normalization constant, \(N_{\pm}^2=2\sqrt{|t(k)|^2-\gamma^4}(\sqrt{|t(k)|^2-\gamma^4}\mp i\gamma^2)\). Now we focus on the topological properties of the Lindbladian \(\hat{\mathcal{L}}\) in Eq.\ (\ref{eq:3rdquant}). According to the topological classification of non-Hermitian matrices \cite{2019Kawabata}, matrices in class BDI\(^{\dagger}\) in one spatial dimension have a \(\mathbb{Z}\)-valued topological invariant. We can calculate this topological invariant as the winding number, \begin{equation} w=\frac{1}{2\pi i}\int_{\mathrm{BZ}}q^{-1}\frac{dq}{dk}dk, \label{eq:winding} \end{equation} where \(q\) is defined through the matrix \(Q\): \begin{equation} Q=\begin{pmatrix}0&q \\ q^{\dagger}&0\end{pmatrix},~Q\coloneqq I-(\bm{\phi_-}\bm{\chi_-}^{\dagger}+\bm{\chi_-}\bm{\phi_-}^{\dagger}).
\label{eq:q_def} \end{equation} Inserting the eigenvectors into \(Q\), we find that the winding number is \(0\) if \(t_0>t_1+\gamma^2\) and \(-1\) if \(t_1>t_0+\gamma^2\). The phase diagram of the topological invariant is shown in Fig.\ \ref{fig:winding} (b). \subsection{PT symmetry breaking edge modes} In this subsection we study the topological edge modes of the Lindbladian in Eq.\ \eqref{eq:3rdquant}. We numerically diagonalize \(Z\) with open boundary conditions in a parameter region where no bulk mode breaks PT symmetry; the spectra are shown in Fig.\ \ref{fig:eig}. \begin{figure}[tb] \centering \includegraphics[width=0.95\columnwidth]{eig_kitaev_v1.pdf} \caption{(a) The spectra of \(Z\) with open boundary conditions. The red dots and blue crosses correspond to the cases of \(t_1=3t_0,\gamma^2=0.3t_0\) and \(t_1=1.5t_0,\gamma^2=0.2t_0\), respectively. (b) Spatial configuration of edge modes in the case of \(t_1=3t_0,\gamma^2=0.3t_0\). We set \(n=500\). If the index is odd (even), the flavor of the corresponding position is \(\alpha~(\beta)\). (b1) Edge states near the left boundary. (b2) Edge states near the right boundary.} \label{fig:eig} \end{figure} We confirm that two edge modes appear in Fig.\ \ref{fig:eig} (a), as predicted by the bulk-edge correspondence with the topological invariant calculated in the previous subsection. No bulk mode breaks PT symmetry, and the corresponding eigenvalues share the same imaginary part \(-\gamma^2/2\), as shown in Fig.\ \ref{fig:eig} (a). In contrast, the two edge modes break PT symmetry, since each of them localizes near one of the boundaries, as shown in Fig.\ \ref{fig:eig} (b). Their eigenvalues are \(0\) and \(-i\gamma^2\) in both cases. We can show the existence of these edge modes analytically. 
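Before turning to the analytic argument, the numerical diagonalization can be reproduced with a short sketch. The matrix below is reconstructed from the bulk and boundary eigenvalue equations written out in the following paragraph, with the assumed basis ordering \((A_1,B_1,\dots,A_n,B_n)^T\); the parameter values are illustrative and \texttt{g2} stands for \(\gamma^2\).

```python
import numpy as np

def Z_open(n, t0, t1, g2):
    """Open-boundary Z = (i/2) M, where M encodes the recursion
    (2/i) lam A_j = -t1 B_{j-1} - 2 g2 A_j + t0 B_j,
    (2/i) lam B_j = -t0 A_j + t1 A_{j+1}  (g2 plays the role of gamma^2)."""
    M = np.zeros((2 * n, 2 * n), dtype=complex)
    for j in range(n):
        a, b = 2 * j, 2 * j + 1      # indices of A_{j+1} and B_{j+1}
        M[a, a] = -2 * g2            # dissipation acts on flavor alpha only
        M[a, b] = t0
        if j > 0:
            M[a, b - 2] = -t1        # coupling of A_j to B_{j-1}
        M[b, a] = -t0
        if j < n - 1:
            M[b, a + 2] = t1         # coupling of B_j to A_{j+1}
    return 0.5j * M

t0, g2, n = 1.0, 0.3, 60
lam_topo = np.linalg.eigvals(Z_open(n, t0, t1=3.0, g2=g2))   # w = -1 phase
lam_triv = np.linalg.eigvals(Z_open(n, t0, t1=0.5, g2=g2))   # w = 0 phase
# topological phase: PT-breaking edge eigenvalues near 0 and -i*g2
print(min(abs(lam_topo)), min(abs(lam_topo + 1j * g2)))
```

In the topological phase the two printed numbers are close to zero (the edge modes at \(0\) and \(-i\gamma^2\)), while in the trivial phase no eigenvalue approaches zero.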
If the eigenvector of \(Z\) is written as \((A_1,B_1,\cdots,A_n,B_n)^T\), the eigenvalue equation of \(Z\) is recast as \begin{align} -t_1B_{j-1}-2\gamma^2 A_j+t_0B_j &= \frac{2}{i}\lambda A_j, \label{eq:eigen_bulk1} \\ -t_0A_j+t_1A_{j+1} &= \frac{2}{i}\lambda B_j \label{eq:eigen_bulk2} \end{align} for the bulk and \begin{align} -2\gamma^2 A_1+t_0B_1 &= \frac{2}{i}\lambda A_1, \label{eq:eigen_edge1} \\ -t_0A_n &= \frac{2}{i}\lambda B_n \label{eq:eigen_edge2} \end{align} for the boundaries. We note that the above equations reduce to the eigenvalue equation of the Kitaev chain $\mathcal{H}$ in Eq.\ (\ref{eq:Kitaev}) when \(\gamma=0\). If \(t_1>t_0\), a zero-energy edge mode of the Kitaev chain $\mathcal{H}$ satisfying the above equations in the limit of $n \rightarrow \infty$ with \(A_j=0,~B_j/B_{j-1}=t_1/t_0\) is also a solution of the corresponding eigenvalue equation of \(Z\) with \(\lambda=0\). This edge mode localizes near $j=n$, as shown in the blue curve of Fig.\ \ref{fig:eig} (b2). Since the dissipation acts only on the flavor \(\alpha\), the edge mode is unaffected by the dissipation and its eigenvalue remains at zero. Conversely, the other zero-energy edge mode of the Kitaev chain $\mathcal{H}$ satisfying Eqs.\ \eqref{eq:eigen_bulk1} - \eqref{eq:eigen_edge2} in the limit of $n \rightarrow \infty$ with \(A_j/A_{j+1}=t_1/t_0,~B_j=0\) is most sensitive to the dissipation, which leads to the largest (in magnitude) imaginary part of the eigenvalue, \(\lambda=-i\gamma^2\). The latter edge mode localizes near the other boundary $j=1$, as shown in the red curve of Fig.\ \ref{fig:eig} (b1). The latter edge mode with \(\lambda=-i\gamma^2\) is also obtained by applying \(\mathcal{PTK}\) to the former edge mode because they form a pair due to PT symmetry breaking. We also confirm that no edge modes appear in the parameter region with \(w=0\), as predicted by the bulk-edge correspondence. Finally, we mention the steady state of the system. 
Since the jump operators in this model are Hermitian, the infinite-temperature state \(\rho_{\mathrm{inf}}\propto I\) is a steady state of the time evolution: \(\hat{\mathcal{L}}[\rho_{\mathrm{inf}}]=0\). Although \(Z\) has a zero eigenvalue, this does not lead to multiple steady states (see also Refs.\ \cite{2019Caspel,2021Barthel}). We recall that the spectrum of \(\hat{\mathcal{L}}\) is expressed in terms of the spectrum of \(Z\) as \(4\sum_{j=1}^{2n}\lambda_j\nu_j\) and the eigenoperators are given as \(\prod_{j=1}^{2n}\hat{b}'_j{}^{\nu_j}\bket{\mathrm{NESS}}\). Since \(\hat{b}'\) changes the fermion parity, the eigenoperators with \(\sum_j\nu_j\) even have even fermion parity and correspond to the quantum states. In particular, it follows from these formulae that degenerate zero eigenvalues of \(Z\) are essential to have additional steady states of the system. \section{Conclusions} We have studied the topological phase of the dissipative Kitaev chain in this work. We have shown that the dissipative Kitaev chain retains PT symmetry and that the bulk spectrum of the Lindbladian can possess a common imaginary part due to PT symmetry. Imposing open boundary conditions on the system, we have clarified that the edge modes break PT symmetry and have imaginary parts of the eigenvalues different from those of the bulk modes. We have discussed that the eigenvalues of the edge modes must be \(0\) and \(-i\gamma^2\), while a steady edge mode is impossible in this model. A future direction of this work is the systematic construction of steady edge states by utilizing PT symmetry. This work sheds light on manipulating steady topological edge states by engineering dissipation. We thank Yasuhiro\ Asano and Kousuke\ Yakubo for helpful discussions. M.\ K.\ was supported by JST SPRING (Grant No.\ JPMJSP2119). This work was also supported by KAKENHI (Grants No.\ 20H01828, No.\ JP21H01005, No.\ JP22H01140, and No.\ 22K03463).
\section{Background} Electroweak theory couples the baryon (B) and the lepton (L) numbers to the Chern-Simons number of the weak gauge field through the axial anomaly. At temperatures higher than the electroweak phase transition, the rate of Chern-Simons number fluctuations -- the sphaleron rate -- is nonzero, whereas at lower temperatures it is exponentially suppressed and, when the Higgs field expectation value $v \gg T$, the rate is negligible. In electroweak baryogenesis scenarios \cite{Kuzmin:1985mm} the baryon number of the Universe is generated during the electroweak phase transition. However, this scenario does not work in the Standard Model: it requires a strongly first-order phase transition, whereas the Standard Model has a smooth crossover \cite{Kajantie:1996mn}. Further, the CP violation in the Standard Model is not sufficient to drive baryon number generation. Nevertheless, the sphaleron rate during the electroweak crossover in the Standard Model is relevant for some leptogenesis scenarios: in these scenarios lepton asymmetry is converted into baryon asymmetry through sphaleron transitions. If the lepton asymmetry is generated just before or during the electroweak phase transition, how the sphaleron rate shuts off has an effect on the generated baryon number. The sphaleron rate has been studied in the broken phase before, but either with unphysical Higgs masses \cite{Moore:1998swa,Moore:2000jw,Tang:1996qx} or not very deeply in the broken phase \cite{Moore:1998swa}. In the electroweak theory, the gauge field vacua are labeled by the Chern-Simons number \begin{equation} n_{CS} = \int d^3 x \ j_{CS}^0 = -\frac{g^2}{64 \pi}\int d^3x \ \epsilon^{ijk} \textrm{Tr}\left(A_i F_{jk} + i \frac{g}{3} A_i A_j A_k \right). 
\end{equation} The Chern-Simons current $j_{CS}^{\mu}$ is in turn related through the axial anomaly to the baryon- and lepton-number currents \begin{equation} \partial_{\mu} (j_B^{\mu}+j_L^{\mu}) = n_g \left(\frac{g^2}{16 \pi^2}\epsilon^{\alpha \beta \mu \nu} F^a_{\alpha \beta} F^a_{\mu \nu} \right), \end{equation} by \begin{equation}\label{baryoncurrent} \partial_{\mu} \ j_B^{\mu} = n_g \ \partial_{\mu} \ j_{CS}^{\mu}, \end{equation} where the U(1) part of the theory is omitted. Transitions between vacua are possible by surmounting the potential barrier through sphaleron transitions. The sphaleron rate is strongly suppressed at low temperatures, where the potential barrier is high. At temperatures above the EWPT, though, transitions among vacua are made possible by thermal fluctuations because there is no longer any potential barrier. Each transition changes $n_{CS}$ by one unit and therefore changes the baryon number by $n_g$ $=$ 3 units, \begin{displaymath} B(t_f)-B(t_i) = n_g \ [n_{CS}(t_f)-n_{CS}(t_i)], \end{displaymath} thus providing a source of baryogenesis. In previous works, the sphaleron rate has been studied in the energy range of the electroweak phase transition either in the symmetric phase with lattice simulations \cite{Ambjorn:1990pu} and semiclassical methods \cite{Philipsen:1995sg}, or in the broken phase with both perturbative calculations \cite{Burnier:2005hp} and on the lattice \cite{Moore:1998swa,Krasnitz:1993mt}. In this work we unify these two pictures and find the overall behavior of the sphaleron rate from the symmetric phase to the broken one, passing through the electroweak crossover. Our results are compared to analytical estimates both in the broken and symmetric phases \cite{Burnier:2005hp}. 
\section{Theory on the lattice} The thermodynamics of the 4-dimensional electroweak theory is studied in 3 dimensions by means of dimensional reduction \cite{Kajantie:1995dw}, a perturbative technique that gives the correspondence between 4D and 3D parameters. The result is an SU(2) effective theory with the Higgs field $\phi$ and gauge field $A_{\mu}$ ($F_{ij}$) \begin{equation} L = \frac{1}{4} F^a_{ij} F^a_{ij} + (D_i\phi)^{\dagger}(D_i\phi)+m_3^2 \phi^\dagger \phi + \lambda_3 (\phi^\dagger \phi)^2, \end{equation} and 3D effective parameters $g_3^2$, $\lambda_3$ and $m_3^2$. B\"odeker showed \cite{Bodeker:1998hm} that at leading order in $\log(1/g)$ the time evolution of this effective SU(2) Higgs model is governed by Langevin dynamics. The latter, though, is very slow on the lattice and can be substituted by any other dissipative procedure, e.g.\ heat bath. One heat-bath sweep through the lattice corresponds to the real-time step \cite{Moore:2000jw} \begin{equation} \Delta t = \frac{a^2 \ \sigma_{el}}{4}, \end{equation} where \begin{equation}\label{Moore3.9} \sigma^{-1}_{\textrm{el}} = \frac{3}{m^2_D}\gamma , \quad \textrm{with} \quad \gamma= \frac{N g^2 T}{4 \pi} \left[\ln\frac{m_D}{\gamma}+3.041 \right] \end{equation} is the non-abelian color conductivity, which quantifies the current response to infrared external fields, $N$ is the dimension of the SU(N) gauge group, and $m_D$ is the Debye mass, determining the length scale $l_D$ $\sim$ $1/m_D$ $\sim$ $1/gT$. We made use of a 32$^3$ lattice, with $\beta_G \equiv \frac{4}{g_3^2 a} =$ 9, where $g_3$ is the 3D gauge coupling and $a$ the lattice spacing. In real-time simulations, for each mass and temperature pair, we computed 4 trajectories for every 1000 initial configurations. \section{Methods} \vspace*{-3 mm} In the symmetric phase we make use of canonical MC simulations and approach the broken phase. At very low temperatures, the rate is highly suppressed and canonical methods no longer work. 
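Since Eq.\ (\ref{Moore3.9}) defines $\gamma$ only implicitly, it has to be solved numerically; a simple fixed-point iteration suffices, as the following sketch shows. The parameter choices $g^2=0.4$ and $m_D=0.6\,T$ are illustrative assumptions of this sketch, not the values used in our runs.

```python
import math

def color_conductivity(g2, T, mD, N=2):
    """Solve gamma = N g^2 T/(4 pi) [ln(m_D/gamma) + 3.041] by fixed-point
    iteration and return (gamma, sigma_el) with sigma_el = m_D^2/(3 gamma)."""
    pref = N * g2 * T / (4.0 * math.pi)
    gamma = pref                      # any O(pref) starting guess converges here
    for _ in range(200):              # contraction factor ~ pref/gamma << 1
        gamma = pref * (math.log(mD / gamma) + 3.041)
    return gamma, mD ** 2 / (3.0 * gamma)

# Illustrative numbers in units of T: SU(2) (N = 2), g^2 = 0.4, m_D = 0.6 T
gamma, sigma_el = color_conductivity(g2=0.4, T=1.0, mD=0.6)
dt_per_sweep = lambda a: a ** 2 * sigma_el / 4.0  # real-time step of one sweep
```

One heat-bath sweep then advances real time by `dt_per_sweep(a)` for lattice spacing `a`, as in the equation above.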
We need multicanonical methods, which calculate a weight function that compensates for the high potential barrier between the vacua, thus allowing transitions. The exact value of the sphaleron rate \begin{equation} \Gamma \equiv \lim_{t\rightarrow \infty} \frac{\langle(n_{CS}(t)-n_{CS}(0))^2\rangle}{V \ t} \end{equation} is obtained, in the broken phase, through a method similar to the one used in \cite{Moore:1998swa,Moore:2000jw}. \begin{description} \item[]\textbf{a.} First we fix the order parameter ($n^*_{CS}$ $=$ $1/2$ in our case), which separates one vacuum from the neighbouring one. \item[]\textbf{b.} We calculate the probability for $n_{CS}$ to be in the small interval $n^*_{CS} \pm \epsilon/2$. This can be achieved only with multicanonical methods, as the probability $P_{\epsilon}$ of being on top of the barrier is extremely small. \item[]\textbf{c.} Then the probability $P_{\epsilon}$ is transformed into a flux by multiplying it with $\langle dn_{CS} / dt \rangle / \epsilon$. This is calculated by taking initial configurations in the interval $\epsilon$, performing real-time simulations and keeping track of the $n_{CS}$ value after some time $dt$. \item[]\textbf{d.} Finally, we calculate the \textsl{dynamical prefactor} \begin{equation} \textbf{d} = \sum_{sample} \frac{\delta}{\# \ \textrm{crossings}} \end{equation} which is a measure of the fraction of the crossings that lead to a permanent change in $n_{CS}$. $\delta$ is 0 for configurations that return to the initial vacuum and 1 if the initial and final vacua are different. The initial configurations are chosen to be in $n^*_{CS}$ $\pm$ $\epsilon / 2$ and the real-time evolution is performed forward and backward in time. \item[]\textbf{e.} The sphaleron rate is then \begin{equation} \Gamma \equiv \frac{P(\mid n_{CS}-n^*_{CS} \mid < \epsilon/2) } {\epsilon \ P(n_{CS} < n^*_{CS})} \left\langle \mid \frac{dn_{CS}}{dt} \mid \right\rangle \times \textbf{d}. 
\end{equation} \end{description} \section{Results} \begin{figure} \vspace*{-8 mm} \centerline{ \includegraphics[width=.43\linewidth]{ncsboth_proc.ps} \includegraphics[width=.43\linewidth]{traj_m115t142eqm_proc.ps} \\ } \caption[a]{\small{Left: The Chern-Simons number evolution below the critical temperature ($m_H =$ 115 GeV, $T =$~142 GeV) in canonical and multicanonical simulations. The transition rate in this plot is not related to the real-time rate, but shows the efficiency of the probability distribution measurement. Right: A set of heat-bath trajectories originating from the same configuration. Gluing together any two of these produces a trajectory, which corresponds to a sphaleron transition if the two end-points are in different minima.} } \label{csfig} \end{figure} Figure \ref{csfig} (left) shows the efficiency of the multicanonical method at low temperatures. For the Higgs mass of 115 GeV and the temperature of 142 GeV, we see that in the canonical simulation no transitions happen, while in the multicanonical run we have a random walk in the adjusted potential, where we have compensated for the statistical suppression by the weight function $W$. This can also be seen in the probability distributions these simulations produce (Figure \ref{plog_both}) for the same $T$ $=$ 142 GeV. \begin{figure} \vspace*{-4 mm} \centerline{ \epsfig{file=plog_can_poster.ps,width=0.42\linewidth,clip=} \epsfig{file=plog_mul_poster.ps,width=0.42\linewidth,clip=} \\ } \caption[a]{\small{The probability distributions of Chern-Simons number in the deep broken phase in canonical (left) and multicanonical simulations (right).}} \label{plog_both} \end{figure} The multicanonical weight function $W$ thus permits sampling with constant probability and provides the conversion between the multicanonical and the physical probability, \begin{equation} P_{muca} \varpropto \exp[W] \ P_{can}. 
\end{equation} Figure \ref{csfig} (right) shows several real-time heat-bath trajectories from the same initial configuration. Each trajectory crosses the least-probable interval $\epsilon$ at the top of the barrier a different number of times, and ends either back in the initial vacuum or in the adjacent one. In Figure \ref{phisq} we show the Higgs field expectation value $\langle\phi^2\rangle$ for both masses (115 GeV, left, and 160 GeV, right) as a function of temperature. We notice a perfect match between the canonical and multicanonical results and a smooth transition from the symmetric to the broken phase. The sphaleron rate $\Gamma / T^4$ is shown in Figure \ref{sphrate} for $m_H =$ 115 GeV and 160 GeV, with the theoretical curves obtained separately for the broken and symmetric phases, through perturbative calculations in \cite{Burnier:2005hp}. \begin{figure}[!h] \vspace*{-7.5 mm} \begin{center} \includegraphics[width=.495\linewidth]{phi2_m115_proc.ps} \includegraphics[width=.495\linewidth]{phi2_m160_proc.ps} \caption{\small{The Higgs expectation value $\langle\phi^2\rangle$ for Higgs masses of 115 GeV (left) and 160 GeV (right) as a function of temperature. The high-temperature canonical and low-temperature multicanonical results match beautifully in the transition region.}} \label{phisq} \end{center} \end{figure} \begin{figure}[!h] \vspace*{-5 mm} \begin{center} \includegraphics[width=.58\linewidth]{rate115_last.ps}\\ \includegraphics[width=.58\linewidth]{rate160_last1.ps} \caption{\small{The sphaleron rate for a Higgs mass of 115 GeV (above) and 160 GeV (below). The high-temperature canonical and low-temperature multicanonical results again match very well in the transition region. 
Also shown are previous high-temperature estimates (top, horizontal line) and perturbative calculations in the low-temperature phase (bottom, wide band) from \cite{Burnier:2005hp}.}} \label{sphrate} \end{center} \end{figure} \section{Conclusion} We improved the previous estimates for the sphaleron rate and determined its behaviour from the symmetric to the broken phase, through the electroweak crossover. Our results are in agreement with previous estimates in the symmetric phase. In the broken phase we notice that the slope of our curve is the same as that of the analytic one \cite{Burnier:2005hp}. We however note a discrepancy of up to two orders of magnitude in the size of the rate, although part of the shift in temperature may be explained in terms of renormalization constants. Even though the Standard Model has too weak a source of CP violation in the quark sector, baryogenesis might still be viable through lepton-number-violating processes. The sphaleron rate plays an important role in leptogenesis, as the conversion of lepton to baryon number depends on it, and it is therefore important to know its size rather accurately. \acknowledgments This work is supported by the Academy of Finland grants 1134018 and 114371. M.D. also acknowledges support from the Magnus Ehrnrooth foundation. The computations have been performed at the Finnish IT center for Science (CSC).
\section*{\sffamily \Large INTRODUCTION} Among the {\em molecular descriptors} which provide information on molecular constitution, {\em topological indices} have several assets, such as: (1) easy calculation with very low computer time (cpu) requirements; (2) diversity of possibilities to choose from in order to match properties of the data set; (3) high correlation ability with chemical and physical properties or biological activities. Numerous topological indices have found applications in theoretical chemistry, especially in QSPR/QSAR research. Among all topological indices, some of the most investigated are the descriptors based on the valences of atoms in molecules, the so-called degree--based topological indices. Among the degree--based topological indices, a class of {\em geometric--arithmetic} topological indices \cite{fu-gr-vu} may be defined as $$GA_{gen}(G)=\sum_{uv \in E(G)} \frac{2\sqrt{Q_u Q_v}}{Q_u+Q_v}\,,$$ where $Q_u$ is some quantity that can be associated in a unique manner with vertex $u$ of graph $G$. The first member of this class was considered by Vuki\v cevi\' c and Furtula \cite{vu-fu} in 2009 by setting $Q_u$ to be the vertex degree $d(u)$. One year later Fath-Tabar et al. \cite{fa-th} introduced the second such index by setting $Q_u$ to be the number $n_u$ of vertices of $G$ lying closer to vertex $u$ than to vertex $v$ for the edge $uv$ of a graph $G$. The edge variant was studied by Zhou et al. \cite{zh-gu} in 2009 and led to the third geometric--arithmetic index, with $Q_u$ being the number $m_u$ of edges of $G$ lying closer to vertex $u$ than to vertex $v$ for the edge $uv$ of a graph $G$. The fourth member of this class was considered by Ghorbani et al. \cite{gh-kh} in 2010 by setting $Q_u$ to be the eccentricity of vertex $u$, denoted by $\varepsilon_u$, and finally the fifth geometric--arithmetic index was defined in 2011 by Graovac et al. 
\cite{gr_gh_ho} by letting $Q_u$ be the sum of the degrees of all vertices adjacent to vertex $u$. Besides that, the edge and total versions of geometric--arithmetic indices were considered \cite{mah-1,mah-2}, and recently Wilczek \cite{wi} defined nine new geometric--arithmetic indices. Some mathematical properties of the first four geometric--arithmetic indices were obtained in \cite{das-1,das-2,das-3,das-4,das-5,das-6, ro-1,ro-2}. It was also shown that the first three geometric--arithmetic indices possess relatively good descriptive as well as predictive capabilities with respect to some selected properties of octanes and benzenoid hydrocarbons \cite{das-4, vu-fu}. After the introduction of the fifth geometric--arithmetic index it has been calculated for many different families of (molecular) graphs, but no correlations with physico-chemical properties of molecules or with other topological indices were known. The aim of the present paper is to fill this gap. The fifth geometric--arithmetic index is compared with some other distance-based topological indices and physico-chemical properties. As it is best correlated with the atom--bond connectivity index, which is used for predicting the heat of formation of certain hydrocarbons, the connection between the fifth geometric--arithmetic index and the heat of formation is established. \section*{\sffamily \Large THE FIFTH GEOMETRIC-ARITHMETIC INDEX AND SOME OTHER DISTANCE-BASED TOPOLOGICAL INDICES} \label{sec:indices} A \textit{graph} $G$ is an ordered pair $G = (V, E)$ of a set $V$ of \textit{vertices} (also called nodes or points) together with a set $E$ of \textit{edges}, which are $2$-element subsets of $V$ (more information about basic concepts in graph theory can be found in the book by West \cite{west}). If we represent the atoms of a molecule by vertices and its bonds by edges, we obtain a \textit{molecular graph}. The graphs considered in this paper are all finite and connected. 
The {\em degree} $d(u)$ of a vertex $u \in V(G)$ is the number of edges incident to vertex $u$. The fifth geometric--arithmetic index is defined as $$ GA_5(G)=\sum_{uv \in E(G)} \frac{2\sqrt{ S_u S_v}}{S_u+S_v}\,,$$ where $\displaystyle S_u=\sum_{uv \in E(G)} d(v)$. As an example, we calculate the fifth geometric--arithmetic index for 1-methylnaphthalene (see Figure \ref{fig1}). \begin{figure} \begin{center} \begin{tikzpicture} \draw[thick] (0,0) -- (0.86,0.5) -- (0.86,1.5) -- (0,2) -- (-0.86,1.5) -- (-0.86,0.5) -- (0,0); \draw[thick] (0.86,0.5) -- (1.72,0) -- (2.58,0.5) -- (2.58,1.5) -- (1.72, 2) -- (0.86,1.5); \draw[thick] (1.72, 2) -- (1.72, 3); \end{tikzpicture} \caption{Molecular graph $G$ of 1-methylnaphthalene.} \label{fig1} \end{center} \end{figure} First we denote the molecular graph of 1-methylnaphthalene by $G$. Then \begin{align*} \text{GA}_5 (G) & = \frac{2\sqrt{4 \cdot 4}}{4+4} + 4\frac{2\sqrt{4 \cdot 5}}{4+5} + \frac{2\sqrt{5 \cdot 8}}{5+8} + \frac{2\sqrt{6 \cdot 8}}{6+8} + \frac{2\sqrt{5 \cdot 6}}{5+6} + 2\frac{2\sqrt{5 \cdot 7}}{5+7} + \frac{2\sqrt{7 \cdot 8}}{7+8} + \frac{2\sqrt{3 \cdot 6}}{3+6} \\ & \approx 11.8465. \end{align*} The fifth geometric--arithmetic index was first computed for nanostar dendrimers \cite{gr_gh_ho}, followed by the circumcoronene series \cite{fa3}, zig-zag polyhex nanotubes and nanostars \cite{fa2}, the $TURC4C8(S)$ nanotube \cite{fa-5}, armchair polyhex nanotubes \cite{fa1} and polyaromatic hydrocarbons \cite{fa-4}. Besides that, this index was computed for the naphthalenic nanosheet $[4n,2m]$ \cite{am}, fan and wheel molecular graphs \cite{gao}, bridge graphs and carbon nanocones \cite{ga_wa} and just recently for para-line graphs of convex polytopes \cite{fo}. Probably the most studied degree--based topological indices are the Zagreb indices, which were introduced almost fifty years ago by Gutman and Trinajsti\v c \cite{tr_gu}. 
The {\em first Zagreb index} is defined as $$ZM_1(G)=\sum_{v \in V(G) } d^2(v)$$ and the {\em second Zagreb index} equals $$ZM_2(G)=\sum_{uv \in E(G) } d(u)d(v)\,.$$ Zagreb indices can also be calculated by using the {\em valence vertex degree} $d^v(u)$ (i.e. the number of valence electrons of $u$ minus the number of hydrogen atoms attached to $u$), resulting in the first and the second valence Zagreb indices $$ZM_1^v(G)=\sum_{u \in V(G) } (d^v(u))^2 \quad \text{and} \quad ZM_2^v(G)=\sum_{uw \in E(G) } d^v(u)\, d^v(w)\,.$$ Some other important degree--based topological indices are the {\em Randi\'c connectivity index} \cite{randic} $$\chi (G)=\sum_{uv \in E(G)} \frac{1}{\sqrt{d(u)d(v)}}\,,$$ and the {\em Pogliani index} \cite{pogliani} $$Dz(G)=\sum_{u \in V(G) } d^Z(u)\,,$$ where $d^Z(u)$ is the $Z$-delta number of $u$, defined as the quotient of the number of valence electrons and the principal quantum number of vertex $u$. Further we have the {\em atom--bond connectivity index} \cite{es_to_ro_gu} $$ABC(G)=\sum_{uv \in E(G)} \sqrt{\frac{d(u)+d(v)-2}{d(u)d(v)}}\,,$$ the {\em ramification index} \cite{ar-pe} $$Ram(G) = \sum_{v \in V(G), \, d(v)\geq 3} (d(v)-2)\,,$$ the {\em Narumi simple index} \cite{narumi} $$Snar(G) = \prod _{v \in V(G)} d(v)\,,$$ the {\em total structure connectivity index} \cite{ne_we_se} $$Xt(G) =\prod_{v \in V(G)} \frac{1}{\sqrt{d(v)}}\,, $$ and the {\em quadratic index} \cite{balaban}, also called the normalized quadratic index, $$Q(G)=\frac{1}{2}\sum_{g} ((g^2-g)\,^gF+2)=3-2\vert V(G) \vert+\frac{ZM_1(G)}{2}\,,$$ where $g$ runs over the different vertex degree values and $^g F$ is the vertex degree count. \section*{\sffamily \Large COMPUTATIONAL DETAILS} In the present section we give the algorithm which we use to compute the fifth geometric--arithmetic index. The algorithm contains two special functions, i.e. {\tt CalculateDegree} and {\tt CalculateS}. Let $G$ be a graph given by an adjacency matrix with vertices $1,2,\ldots,n$. 
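As a small illustration (not part of the data analysis of this paper), the first two Zagreb indices and the Randi\'c connectivity index of the path $P_4$ -- the carbon skeleton of $n$-butane -- can be computed directly from an edge list:

```python
import math

def degree_indices(edges):
    """First Zagreb, second Zagreb and Randic indices from an edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    zm1 = sum(d * d for d in deg.values())                       # ZM1
    zm2 = sum(deg[u] * deg[v] for u, v in edges)                 # ZM2
    chi = sum(1.0 / math.sqrt(deg[u] * deg[v]) for u, v in edges)  # Randic
    return zm1, zm2, chi

# Path P4 (degrees 1, 2, 2, 1): ZM1 = 10, ZM2 = 8, chi ~ 1.9142
print(degree_indices([(1, 2), (2, 3), (3, 4)]))
```

For $P_4$ the three sums are $1+4+4+1=10$, $1\cdot 2+2\cdot 2+2\cdot 1=8$, and $2/\sqrt{2}+1/2=\sqrt{2}+1/2\approx 1.9142$, in agreement with the definitions above.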
The function {\tt CalculateDegree} computes the degree of a given vertex. The function {\tt CalculateS} returns the sum of the degrees of all neighbors of a given vertex. Note that these two special functions are easily implemented and both have time complexity $O(n^2)$. \medskip \begin{small} \begin{algorithm}[H]\label{alg:edini} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \DontPrintSemicolon \Input{\! Graph $G$ with vertices $1,2,\ldots,n$.} \Output{$\text{GA}_5(G)$} \SetKwFunction{CD}{CalculateDegree} \SetKwFunction{CS}{CalculateS} $X \leftarrow 0$\; \For{\bf{each} $u \in V(G)$}{ $d_u \leftarrow \CD(u)$\; } \For{\bf{each} $u \in V(G)$}{ $S_u \leftarrow \CS(u)$\; } \For{\bf{each} $uv \in E(G)$}{ $X \leftarrow X + \frac{2 \sqrt{S_u S_v}}{S_u + S_v}$\; } $\text{GA}_5(G) \leftarrow X$ \; \caption{The fifth geometric--arithmetic index.} \end{algorithm} \end{small} \medskip The fifth geometric--arithmetic indices for the polyaromatic hydrocarbon molecules are collected in Tables \ref{tab:GA5+ABC} and \ref{tab:GA5+ABC2}, and for the alkane series in Table \ref{tab:GA5+H}. Since the data for the atom--bond connectivity index of the polyaromatic hydrocarbons is not available, we compute these values with a simple algorithm. The algorithm for calculating the atom--bond connectivity index is quite similar to the algorithm for calculating the fifth geometric--arithmetic index. For completeness, we give it anyway. As before, let $G$ be a graph given by an adjacency matrix with vertices $1,2,\ldots,n$, and let the function {\tt CalculateDegree} compute the degree of a given vertex. Let us mention that Algorithm \ref{alg:ABC} also has time complexity $O(n^2)$. 
\begin{small} \begin{algorithm}[H]\label{alg:ABC} \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \DontPrintSemicolon \Input{Graph $G$ with vertices $1,2,\ldots,n$.} \Output{$\text{ABC}(G)$} \SetKwFunction{CD}{CalculateDegree} $X \leftarrow 0$\; \For{\bf{each} $u \in V(G)$}{ $d_u \leftarrow \CD(u)$\; } \For{\bf{each} $uv \in E(G)$}{ $X \leftarrow X + \sqrt{\frac{d_u + d_v - 2}{d_u d_v}}$\; } $\text{ABC}(G) \leftarrow X$ \; \caption{The atom--bond connectivity index.} \end{algorithm} \end{small} \medskip The atom--bond connectivity indices for the polyaromatic hydrocarbons are collected in Tables \ref{tab:GA5+ABC} and \ref{tab:GA5+ABC2}. \section*{\sffamily \Large RESULTS AND DISCUSSION} A benchmark data set for the octane isomers and the polyaromatic hydrocarbons is available at {\rm www.moleculardescriptors.eu}. For the set of 18 octane isomers we have compared the fifth geometric--arithmetic index with 16 physico-chemical properties and then further with the degree--based topological indices -- unfortunately, no significant correlation could be established. Next we perform the regression analysis on the set of 82 polyaromatic hydrocarbons. The benchmark data set enables the analysis of three physico-chemical properties (melting and boiling point, octanol-water partition coefficient); we could not establish any correlation between the fifth geometric--arithmetic index and these properties. Further we consider degree--based indices, and it turns out that the best correlation is the one between the fifth geometric--arithmetic index and the atom--bond connectivity index (see Figure \ref{fig-ga5+ABC}). More precisely, the regression statistics for these two indices are: multiple $R$ is 0.9997, $R^2$ is 0.9994, adjusted $R^2$ is 0.9994, and the standard error is 0.1621. The good correlation probably follows from the fact that the formulas for computing the fifth geometric--arithmetic index and the atom--bond connectivity index are quite similar. 
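Both algorithms are easy to realize in practice. As an illustrative sketch (using an edge list rather than the adjacency matrix of the pseudocode), the following Python code reproduces the value $GA_5 \approx 11.8465$ from the worked example for 1-methylnaphthalene and the corresponding $ABC$ value:

```python
import math

def ga5(edges):
    """Fifth geometric-arithmetic index (cf. Algorithm 1) from an edge list."""
    deg, nbr = {}, {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1; deg[v] = deg.get(v, 0) + 1
        nbr.setdefault(u, []).append(v); nbr.setdefault(v, []).append(u)
    S = {u: sum(deg[w] for w in nbr[u]) for u in deg}  # sums of neighbor degrees
    return sum(2 * math.sqrt(S[u] * S[v]) / (S[u] + S[v]) for u, v in edges)

def abc(edges):
    """Atom-bond connectivity index (cf. Algorithm 2) from an edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1; deg[v] = deg.get(v, 0) + 1
    return sum(math.sqrt((deg[u] + deg[v] - 2) / (deg[u] * deg[v]))
               for u, v in edges)

# 1-methylnaphthalene (Figure 1): ring atoms 1-10, methyl carbon 11
mnaph = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1),
         (2, 7), (7, 8), (8, 9), (9, 10), (10, 3), (10, 11)]
print(round(ga5(mnaph), 4))   # 11.8465, as in the worked example
```

The vertex numbering of `mnaph` is one arbitrary labeling of the graph in Figure 1; any labeling with the same adjacencies gives the same values.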
\begin{center} \begin{figure}[h!] \centering \includegraphics[width=0.9\columnwidth,keepaspectratio=true]{graf_GA5-ABC.eps} \caption{\label{fig-ga5+ABC} The correlation between the fifth geometric--arithmetic index and the atom--bond connectivity index.} \end{figure} \end{center} The second best correlation is the one between the fifth geometric--arithmetic index and the first valence Zagreb index. In this case, the regression statistics are as follows: multiple $R$ is 0.9984, $R^2$ is 0.9967, adjusted $R^2$ is 0.9967, and the standard error is 0.3730. The correlation between the fifth geometric--arithmetic index and the Randi\'c connectivity index is also very good. More precisely, multiple $R$ is 0.9982, $R^2$ is 0.9964, adjusted $R^2$ is 0.9963, and the standard error is 0.3928. The correlation of the fifth geometric--arithmetic index is also very high with the following indices: the Narumi simple index, the Pogliani index, the first and the second Zagreb index, the second valence Zagreb index, and the Harary index. For these indices the regression statistics are: multiple $R$ is greater than 0.993, $R^2$ is greater than 0.987, adjusted $R^2$ is greater than 0.987, and the standard error is at most 0.724. The quadratic index, the ramification index, and the total structure connectivity index also have a good correlation with the fifth geometric--arithmetic index. In this case the standard error is greater than 1, $R^2$ is between 0.94 and 0.98, and multiple $R$ and adjusted $R^2$ are both between 0.87 and 0.96. From this it follows that the correlation is still good, but relatively poor with respect to the other indices. In the case of the Schultz index and the Gutman index the correlation is weaker. The regression statistics between the fifth geometric--arithmetic index and the degree--based topological indices mentioned here are all collected in Table \ref{tab:TheRegressionStatistics}. 
As we can see, the fifth geometric--arithmetic index correlates best with the atom--bond connectivity index. The atom--bond connectivity index was introduced in 1998 by Estrada et al.\ \cite{es_to_ro_gu} as a tool to describe the heat of formation $\Delta H$ of alkanes, since it yields a good quantitative structure--property relationship (QSPR) model. We have therefore used the available data \cite{texas} and checked whether there is any correlation between the fifth geometric--arithmetic index and the heat of formation of certain polyaromatic hydrocarbons. The data gathered in Table \ref{tab:GA5+H} result in a linear regression with multiple $R$ equal to 0.971 and $R^2$ equal to 0.942, which is similar to the correlation obtained by Gutman and Furtula \cite{gu-fu}, where the heat of formation of some polyaromatic hydrocarbons was compared with the first geometric--arithmetic index and the established correlation coefficient was 0.972. Since the seminal paper \cite{es_to_ro_gu} on the atom--bond connectivity index models the heat of formation $\Delta H$ of alkanes, our next aim is to compare $\Delta H$ of alkanes with their fifth geometric--arithmetic index. The data in Table \ref{tab:GA5+H} are taken from {\em www.webbook.nist.gov} and result in a linear regression with multiple $R=0.9999$ and $R^2=0.9998$. The good correlation is due to a linear relation between the fifth geometric--arithmetic index and the atom--bond connectivity index of the alkane series. 
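This linear relation can be checked numerically with a short sketch: for a path the vertex values $S_u$ are $2,3,4,\dots,4,3,2$, so both indices are easily evaluated edge by edge (the rounded constants $0.7071$ and $0.0431$ are those of the closed formulae for $GA_5(P_n)$ and $ABC(P_n)$).

```python
import math

def ga5_path(n):
    """GA5 of the path P_n (n >= 5), using its S-values 2, 3, 4, ..., 4, 3, 2."""
    S = [2, 3] + [4] * (n - 4) + [3, 2]
    return sum(2 * math.sqrt(S[i] * S[i + 1]) / (S[i] + S[i + 1])
               for i in range(n - 1))

def abc_path(n):
    """ABC of the path P_n: each of its n - 1 edges contributes sqrt(1/2)."""
    return (n - 1) * math.sqrt(2) / 2

# the rounded linear relation holds to three decimals for moderate n
for n in range(5, 30):
    assert abs(abc_path(n) - (0.7071 * ga5_path(n) + 0.0431)) < 1e-3
```

Since both indices grow linearly in $n$ for paths, the rounding error of the slope $0.7071 \approx 1/\sqrt{2}$ accumulates slowly; for very long chains the exact constants should be used instead.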
The molecular graph of an alkane $C_nH_{2n+2}$ is a path $P_n$ on $n$ vertices, so for $n\geq 5$ a straightforward calculation yields $$\begin{array}{rcl} GA_5(P_n) & = & 2(\frac{\sqrt{2\cdot 3}}{5}+\frac{\sqrt{3\cdot 4}}{7}+(n-5)\frac{\sqrt{4\cdot 4}}{8}+\frac{\sqrt{3\cdot 4}}{7}+ \frac{\sqrt{2\cdot 3}}{5}) \\ &=&4\frac{\sqrt{6}}{5}+8\frac{\sqrt{3}}{7}+n-5\\ ABC(P_n)&=&(n-1)\frac{\sqrt{2}}{2} \end{array}$$ which gives $$ABC(P_n)=0.7071\, GA_5(P_n)+0.0431\,.$$ \section*{\sffamily \Large CONCLUSIONS} The fifth geometric--arithmetic index belongs to the family of degree--based topological indices. The index is relatively new, and although its mathematical properties and closed formulae for some families of chemical graphs have been derived, no relation between the fifth geometric--arithmetic index and physico-chemical properties or other (degree--based) topological indices has been proven so far. In this paper we considered three types of molecules. In the case of octane isomers no significant results could be shown. For the polyaromatic hydrocarbons and the alkane series, a very good correlation and a linear relation, respectively, between the fifth geometric--arithmetic index and the atom--bond connectivity index are established. As a consequence the fifth geometric--arithmetic index is related to the heat of formation. Our data set consisted of 18 polyaromatic hydrocarbons and 19 members of the alkane series, which is not enough for a credible QSPR analysis; this problem could be taken up in the future by the chemical community, since Algorithm \ref{alg:edini} enables the calculation of the fifth geometric--arithmetic index. \subsection*{\sffamily \large ACKNOWLEDGMENTS} \noindent The authors Matev\v z \v Crepnjak and Petra \v Zigert Pleter\v sek acknowledge the financial support from the Slovenian Research Agency, research core funding No. P1-0403 and No. P1-0297, J1-9109, respectively. \clearpage
\section{Introduction} Let $A$ be an abelian variety over a number field $\ensuremath{\mathrm{E}}\subset \ensuremath{\mathbb{C}}$ and $\overline{\ensuremath{\mathrm{E}}}$ an algebraic closure of $\ensuremath{\mathrm{E}}$. For $v$ a place of $\ensuremath{\mathrm{E}}$ dividing a prime $p$ where $A$ has good reduction and $\ell\neq p$ a prime, the action of $\mathrm{Gal}(\overline{\ensuremath{\mathrm{E}}}/\ensuremath{\mathrm{E}})$ on the $\ell$-adic cohomology $\ensuremath{\mathrm{H}}^1_{\ensuremath{\mathrm{\acute{e}t}}}(A_{\overline{\ensuremath{\mathrm{E}}}},\ensuremath{\mathbb{Q}}_\ell)$ is unramified, and the characteristic polynomial $P_{v,\ell}(t)$ of a geometric Frobenius $\mathrm{Frob}_v\in \mathrm{Gal}(\overline{\ensuremath{\mathrm{E}}}/\ensuremath{\mathrm{E}})$ has coefficients in $\mathbb Z,$ and is independent of $\ell.$ The aim of this paper is to prove a refinement of this statement for the image of $\mathrm{Frob}_v$ in the {\em Mumford-Tate} group of $A.$ Recall that the Mumford--Tate group $\bf G$ of $A$ is a reductive group over $\ensuremath{\mathbb{Q}}$, defined as the Tannakian group of the $\ensuremath{\mathbb{Q}}$-Hodge structure given by the Betti cohomology $V_B:=\ensuremath{\mathrm{H}}^1_B(A(\ensuremath{\mathbb{C}}),\ensuremath{\mathbb{Q}}).$ It may also be defined as the stabilizer in $\mathbf{GL}(V_B)$ of all Hodge cycles on $A.$ A fundamental result of Deligne \cite{De1} asserts that there exists a finite extension $\ensuremath{\mathrm{E}}'/\ensuremath{\mathrm{E}}$ in $\overline{\ensuremath{\mathrm{E}}}$ such that for any prime $\ell,$ the action of $\mathrm{Gal}(\overline{\ensuremath{\mathrm{E}}}/\ensuremath{\mathrm{E}}')$ on $\ensuremath{\mathrm{H}}^1_\ensuremath{\mathrm{\acute{e}t}}(A_{\overline{\ensuremath{\mathrm{E}}}},\ensuremath{\mathbb{Q}}_{\ell})$ is induced by a representation $$\rho^{\ensuremath{\mathbf{G}}}_\ell:\mathrm{Gal}(\overline{\ensuremath{\mathrm{E}}}/\ensuremath{\mathrm{E}}')\rightarrow 
\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell).$$ It is not hard to see that for any finite extension $\ensuremath{\mathrm{E}}'/\ensuremath{\mathrm{E}}$, if $\rho^{\ensuremath{\mathbf{G}}}_\ell$ exists for one $\ell,$ then it exists for all $\ell.$ Moreover there is a minimal such extension $\ensuremath{\mathrm{E}}'.$ The existence of $\rho^{\ensuremath{\mathbf{G}}}_{\ell}$ is in fact predicted by the (in general still unproved) Hodge conjecture for $A.$ Upon replacing $\ensuremath{\mathrm{E}}$ by $\ensuremath{\mathrm{E}}'$, we assume there is a map $\rho^{\ensuremath{\mathbf{G}}}_\ell:\mathrm{Gal}(\overline{\ensuremath{\mathrm{E}}}/\ensuremath{\mathrm{E}})\rightarrow \ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell)$. For any reductive group $\ensuremath{\mathbf{H}}$ over $\ensuremath{\mathbb{Q}}$ we write $\mathrm{Conj}_{\ensuremath{\mathbf{H}}}$ for the variety of semisimple conjugacy classes of $\ensuremath{\mathbf{H}}$ and $\chi_{\ensuremath{\mathbf{H}}}:\ensuremath{\mathbf{H}}\rightarrow \mathrm{Conj}_{\ensuremath{\mathbf{H}}}$ for the natural projection map. We thus obtain a well-defined element $$\gamma_\ell = \gamma_{\ell}(v) : = \chi_{\ensuremath{\mathbf{G}}}(\rho_\ell^{\ensuremath{\mathbf{G}}}(\mathrm{Frob}_v))\in\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell),$$ the conjugacy class of $\ell$-adic Frobenius at $v$. Our main theorem is the following. \begin{introthm}\label{introthm: main} Let $p>2$ and $v|p$ a prime of $\ensuremath{\mathrm{E}}$ where $A$ has good reduction. 
Then there exists $\gamma\in \mathrm{Conj}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}})$ such that $$\gamma=\gamma_\ell\in\mathrm{Conj}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell), \ \forall\ell\neq p.$$ \end{introthm} Since $P_{v,\ell}(t)$ is independent of $\ell,$ the image of $\gamma_\ell$ in $\mathrm{Conj}_{\ensuremath{\mathbf{G}}\ensuremath{\mathbf{L}}(V)}(\ensuremath{\mathbb{Q}}_\ell)$ is defined over $ \ensuremath{\mathbb{Q}}$ and independent of $\ell$. However, in general the map $\mathrm{Conj}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}})\rightarrow \mathrm{Conj}_{\ensuremath{\mathbf{G}}\ensuremath{\mathbf{L}}(V)}(\ensuremath{\mathbb{Q}})$ is not injective, so the theorem gives more information than the $\ell$-independence of $P_{v,\ell}(t)$. An analogue of the above theorem for any algebraic variety (or more generally motive) over a number field was conjectured by Serre in \cite[12.6]{Serre}, but in general one does not even know the analogue of Deligne's theorem on the existence of $\rho^{\ensuremath{\mathbf{G}}}_{\ell}.$ Previously proved cases of our theorem include a result of Noot who showed a version of this theorem where $\mathrm{Conj}_{\ensuremath{\mathbf{G}}}$ is replaced by a certain quotient $\mathrm{Conj}_{\ensuremath{\mathbf{G}}_{A}}'$ and under the additional assumption that the Frobenius element $\gamma_\ell$ is weakly neat \cite{Noot}. 
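The point about $\mathrm{Conj}_{\ensuremath{\mathbf{G}}\ensuremath{\mathbf{L}}(V)}$ can be made concrete: for $\mathbf{GL}_n$, the map $\chi_{\mathbf{GL}_n}$ records exactly the characteristic polynomial, which is a conjugation invariant. A toy numerical illustration of this invariance follows; the integer matrices are hypothetical and stand in for no actual Frobenius.

```python
def mat_mul(A, B):
    # product of two 2x2 matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def charpoly2(M):
    # characteristic polynomial t^2 - tr(M)*t + det(M), as a coefficient tuple
    (a, b), (c, d) = M
    return (1, -(a + d), a * d - b * c)

M = [[1, 2], [3, 4]]        # hypothetical integral matrix
P = [[1, 1], [0, 1]]        # det P = 1, so its inverse is also integral
P_inv = [[1, -1], [0, 1]]
M_conj = mat_mul(mat_mul(P_inv, M), P)
assert charpoly2(M_conj) == charpoly2(M)  # conjugation invariance
```

For a proper subgroup $\ensuremath{\mathbf{G}}\subset \mathbf{GL}(V)$, a class in $\mathrm{Conj}_{\ensuremath{\mathbf{G}}}$ can carry strictly more information than the characteristic polynomial, which is exactly the additional content of the theorem.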
More recently, one of us \cite{Ki3} proved Theorem \ref{introthm: main} when the base change $\ensuremath{\mathbf{G}}\otimes_{\ensuremath{\mathbb{Q}}}\ensuremath{\mathbb{Q}}_p$ is unramified, at least for some $\ensuremath{\mathrm{E}}'.$ Noot's argument uses the $\ell$-independence of $P_{v,\ell}(t),$ together with group theoretic arguments to analyze the map $\mathrm{Conj}_{\ensuremath{\mathbf{G}}}\rightarrow \mathrm{Conj}_{\ensuremath{\mathbf{G}}\ensuremath{\mathbf{L}}(V)}.$ The result of \cite{Ki3} is proved by showing that, on the Shimura variety associated to $\ensuremath{\mathbf{G}},$ the isogeny class corresponding to $A$ contains a point which admits a CM lift. It does not seem possible to extend either method to prove Theorem \ref{introthm: main}. Our proof makes use of families of abelian varieties with Mumford--Tate group contained in $\ensuremath{\mathbf{G}}$, and especially the structure of their mod $p$ reductions. These families are parameterized by a Shimura variety $\mathrm{Sh}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}}, X)$ associated to $\ensuremath{\mathbf{G}},$ and defined over a number field (its reflex field) $\ensuremath{\mathbf{E}}\subset \ensuremath{\mathbb{C}}$, which is contained in $\ensuremath{\mathrm{E}}$. We take $\ensuremath{\mathrm{K}} = \ensuremath{\mathrm{K}}_p\ensuremath{\mathrm{K}}^p$ with $\ensuremath{\mathrm{K}}_p \subset \ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_p)$ a parahoric subgroup and $\ensuremath{\mathrm{K}}^p \subset \ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f^p)$ a compact open subgroup. Let $w$ be the restriction of $v$ to $\ensuremath{\mathbf{E}}.$ Write $\ensuremath{\mathbf{E}}_w$ for the completion of $\ensuremath{\mathbf{E}}$ at $w,$ $\O_{\ensuremath{\mathbf{E}}_w}$ for the ring of integers of $\ensuremath{\mathbf{E}}_w$ and $\kappa(w)$ for its residue field.
Under some mild conditions we show that $\mathrm{Sh}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ has an integral model $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ over $\O_{\ensuremath{\mathbf{E}}_w},$ which is smoothly equivalent to a ``local model", defined as the closure of an orbit of $\ensuremath{\mathbf{G}}$ acting on a certain Grassmannian. This extends the results of the first author and Pappas \cite{KP}, which were restricted to the case when $\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Q}}_p}$ was a tamely ramified group. For each prime $\ell \neq p,$ $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ is equipped with a $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell)$-torsor $\ensuremath{\mathbb{L}}_{\ell}.$ In particular, for any finite extension $\kappa/\kappa(w)$ and $x \in \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)(\kappa),$ the $q = |\kappa|$-Frobenius acting on the geometric fiber of $\ensuremath{\mathbb{L}}_{\ell}$ at $x,$ gives rise to an element $\gamma_{x,\ell}\in\mathrm{Conj}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell).$ We say $x$ has the property ($\ell$-ind), or the \textit{$\ell$-independence property}, if there exists an element $\gamma\in \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}})$ such that $$\gamma=\gamma_{x,\ell}\in\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell), \forall \ell\neq p.$$ Now suppose that $(\ensuremath{\mathbf{G}},X)$ satisfies the conditions needed to guarantee the existence of $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ (cf. Theorem \ref{thm: integral models abelian type}); the general case of Theorem \ref{introthm: main} is eventually reduced to this one. 
Then for a suitable choice of $\ensuremath{\mathrm{K}}$, our abelian variety $A$ corresponds to a point $\tilde{x}_A\in \mathrm{Sh}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)(\ensuremath{\mathrm{E}})$ and its mod $v$ reduction is a point $x_{A}$ of the special fiber $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}} := \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)\otimes_{\ensuremath{\mathcal{O}}_{\ensuremath{\mathbf{E}}_w}}\kappa(w).$ Moreover there is an equality $\gamma_\ell(v)=\gamma_{x_A,\ell}$ as elements of $\mathrm{Conj}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell)$. Thus in order to show Theorem \ref{introthm: main}, it suffices to prove \begin{equation}\tag{$\dagger$} \text{If $\kappa/\kappa(w)$ is finite, and $x\in \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}(\kappa),$ then $x$ satisfies ($\ell$-ind). } \end{equation} For the rest of the introduction we assume $p>2$. By considering $A$ as a point on a larger Shimura variety related to a group of the form $\mathrm{Res}_{\ensuremath{\mathrm{F}}/\ensuremath{\mathbb{Q}}}\ensuremath{\mathbf{G}}$ where $\ensuremath{\mathrm{F}}$ is a suitably chosen totally real field, one can show that Theorem \ref{introthm: main} follows from the following special case of $(\dagger)$. \begin{introthm}\label{introthm: l indep full}Let $(\ensuremath{\mathbf{G}},X)$ be a Shimura datum of Hodge type and assume $\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Q}}_p}$ is quasi-split, $\ensuremath{\mathrm{K}}_p$ is a very special parahoric and the triple $(\ensuremath{\mathbf{G}},X,\ensuremath{\mathrm{K}}_p)$ is acceptable. Then for any $\kappa/\kappa(w)$ finite and $x\in \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}(\kappa),$ $x$ satisfies ($\ell$-ind). \end{introthm} The condition of acceptability of the triple $(\ensuremath{\mathbf{G}},X,\ensuremath{\mathrm{K}}_p)$ is a technical one, and we refer the reader to \S\ref{sec: acceptable triple} for the definition. 
As a first step towards Theorem \ref{introthm: l indep full}, we show the following theorem, which guarantees that under the assumptions of Theorem \ref{introthm: l indep full}, ($\ell$-ind) holds on a dense, Zariski open subset of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$. \begin{introthm}\label{introthm: can lift for Shimura var} Assume $(\ensuremath{\mathbf{G}},X)$ is of Hodge type and the triple $(\ensuremath{\mathbf{G}},X,\ensuremath{\mathrm{K}}_p)$ is acceptable. Then \begin{enumerate} \item Any closed point $x$ lying in the $\mu$-ordinary locus $ \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},[b]_{\mu}}\subset \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$ admits a lifting to a special point $\widetilde{x}\in\mathrm{Sh}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X).$ \item If in addition $\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Q}}_p}$ is quasi-split and $\ensuremath{\mathrm{K}}_p$ is very special, then $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},[b]_{\mu}}$ is Zariski open and dense in $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$. \end{enumerate} \end{introthm} The lifting constructed in (1) is the analogue in our setting of the Serre--Tate canonical lift and had been considered for Shimura varieties with good reduction in previous work of Moonen \cite{Mo} and of Shankar and the second author \cite{SZ}. For these points, the Frobenius lifts to an automorphism of the associated CM abelian variety, and we obtain the desired element $\gamma\in \mathrm{Conj}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}})$ by considering the induced action on Betti cohomology.
To prove Theorem \ref{introthm: l indep full}, one considers a smooth curve $\ensuremath{\mathcal{C}}$ with a map $\pi:\ensuremath{\mathcal{C}} \rightarrow \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}.$ Using a theorem of Laurent Lafforgue \cite[Th\'eor\`eme VII.6]{Laf} on the existence of compatible local systems on smooth curves, we show that if the property ($\ell$-ind) holds for a dense open subset of points on $\ensuremath{\mathcal{C}}$ then it holds for all points of $\ensuremath{\mathcal{C}}.$ Our results on the structure of the integral models $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ imply that $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$ is equipped with a certain combinatorially described stratification, the Kottwitz-Rapoport stratification. The stratum of maximal dimension is the smooth locus of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}.$ A theorem of Poonen \cite{Poonen} shows that $\pi$ can be chosen so that its image intersects $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},[b]_{\mu}}$ and any point $x$ of the maximal stratum. The $\mu$-ordinary case explained above then implies that any such $x$ satisfies ($\ell$-ind). We now argue by induction on the codimension of the strata; for a closed point $x$ in some stratum of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}},$ we show that $\pi$ can be chosen so that its image contains $x,$ and also meets some higher dimensional stratum. In fact, using general arguments with ampleness, it is not hard to construct a $\pi$ whose image contains any closed point $x\in \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}},$ and meets the $\mu$-ordinary locus. This would appear to avoid the induction on strata above. However, this argument would only allow us to prove the $\ell$-independence result for some power of the Frobenius. 
To prove Theorem \ref{introthm: l indep full} in full, one needs the existence of a $y \in \ensuremath{\mathcal{C}},$ with $\pi(y) = x,$ such that $\pi$ induces an isomorphism of residue fields $\kappa(x) \simeq \kappa(y).$ To construct such curves, we first construct a sequence of smooth curves which are {\em subschemes} of the local model associated to $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X),$ using the explicit group theoretic description of this local model. These are then pulled back to $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ via the local model diagram. We remark that the assumption that $\ensuremath{\mathrm{K}}_p$ is very special is key to our argument, as this not only guarantees the density of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},[b]_{\mu}}$, but also that the Kottwitz--Rapoport stratification on the local model has a particularly simple description (cf. \S\ref{subsec: KR stratification very special}) which is used in the construction of $\pi$. The induction argument would also be unnecessary if one could show a conjecture of Deligne \cite[Conjecture 1.2.10]{De3} on the existence of compatible local systems on a normal variety. For smooth schemes Deligne's conjecture has been proved by Drinfeld \cite{Drinfeld}, but the special fiber $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$ is not smooth, so Drinfeld's theorem does not suffice for our purposes. We now explain the organization of the paper. In \S 2-5 we construct the integral models of the Shimura varieties we will need. These are then used to prove Theorem \ref{introthm: main} in \S 6,7. As explained above, there are two main results we need about these integral models: the local model diagram, which relates them to an orbit closure on a Grassmannian, and an analogue of Serre--Tate theory at $\mu$-ordinary points. The properties of these local models are established in \S 3. 
In particular, we show that a suitable Hodge embedding induces a closed immersion on local models (cf. Proposition \ref{prop: local model embedding main}) which generalizes \cite[Proposition 2.3.6]{KP}. In \S 4 we review the deformation theory of $p$-divisible groups equipped with a collection of crystalline tensors following \cite{KP}, and show the existence of canonical deformations for $\mu$-ordinary $p$-divisible groups. The latter uses a generalization to general parahorics of a result of Wortmann on $\mu$-ordinary $\sigma$-conjugacy classes, which is proved in \S 2. We combine the previous results to construct the required integral models in \S 5, first in some special Hodge type cases, then in general following \cite[\S4.4-6]{KP}. A key input for the general case is the notion of $R$-smoothness, introduced in \S2, which allows us to extend the twisting construction of \cite[\S4.4]{KP} beyond the tamely ramified case. In \S 6, we prove Theorem \ref{introthm: l indep full} following the strategy outlined above and in \S 7 we prove Theorem \ref{introthm: main} using Theorem \ref{introthm: l indep full}. Finally we remark that for technical reasons related to the level structure on $A$, we actually work with Shimura stacks (i.e. Shimura varieties where the level structure is not neat) in \S 5-7. \textit{Acknowledgments:} M.K. was supported by NSF grant DMS-1902158. R.Z. was supported by NSF grant DMS-1638352 through membership of the Institute for Advanced Study. \section{Group theoretic results}\label{sec: group theoretic} \subsection{$\sigma$-straight elements} \subsubsection{} Let $F$ be a non-archimedean local field with ring of integers $\ensuremath{\mathcal{O}}_F$. We fix a uniformizer $\varpi_F\in\ensuremath{\mathcal{O}}_F$ and we let $k_F$ denote the residue field of $\ensuremath{\mathcal{O}}_F$. 
We let $\ensuremath{\breve{F}}$ denote the completion of the maximal unramified extension of $F$ and $\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}$ its ring of integers, and we fix $\overline{F}$ an algebraic closure of $F$. We let $k$ be the residue field of $\ensuremath{\mathcal{O}}_{\breve F}$, which is an algebraic closure of $k_F$. We write $\Gamma$ for the absolute Galois group $\mathrm{Gal}(\overline{F}/F)$ of $F$ and $I$ for the inertia subgroup, which is identified with $\text{Gal}(\overline{\ensuremath{\breve{F}}}/\ensuremath{\breve{F}})$. We let $\sigma$ denote the Frobenius element of $\text{Aut}(\ensuremath{\breve{F}} /F)$. Let $S$ be a scheme. If $X$ is a scheme over $S$ and $S'\rightarrow S$ is a morphism of schemes, then we write $X_{S'}$ for the base change of $X$ along $S'\rightarrow S$. \subsubsection{}Let $G$ be a reductive group over $F$. Let $S$ be a maximal $\ensuremath{\breve{F}}$-split torus of $G$ defined over $F$ and $T$ its centralizer (cf. \cite[1.10]{Ti1} for the existence of $S$). By Steinberg's Theorem, $G$ is quasi-split over $\ensuremath{\breve{F}}$ and $T$ is a maximal torus of $G$. We let $\ensuremath{\mathcal{B}}(G,F)$ (resp. $\ensuremath{\mathcal{B}}(G,\ensuremath{\breve{F}})$) denote the (extended) Bruhat--Tits building of $G$ over $F$ (resp. $\ensuremath{\breve{F}}$). Let $\mathfrak{a}$ denote a $\sigma$-invariant alcove in the apartment $V:=\ensuremath{\mathcal{A}}(G,S,\ensuremath{\breve{F}})$ over $\ensuremath{\breve{F}}$ associated to $S$; we write $\ensuremath{\mathcal{I}}$ for the corresponding Iwahori group scheme over $\ensuremath{\mathcal{O}}_F$. The relative Weyl group $W_0$ and the Iwahori Weyl group $W$ are defined as \begin{equation}\label{eqn: exact sequence Iwahori Weyl}W_0=N(\ensuremath{\breve{F}})/T(\ensuremath{\breve{F}}),\ \ \ W=N(\ensuremath{\breve{F}})/\mathcal{T}_0(\mathcal{O}_{\ensuremath{\breve{F}}}),\end{equation} where $N$ is the normalizer of $T$ and $\mathcal{T}_0$ is the connected N\'eron model for $T$.
These are related by an exact sequence \[\xymatrix{0\ar[r]& X_*(T)_I\ar[r]& W\ar[r]& W_0\ar[r]& 0.}\] For an element $\lambda\in X_*(T)_I$ we write $t_\lambda$ for the corresponding element in $W$; such elements will be called translation elements. We will sometimes write $W_G$ or $W_{G_{\ensuremath{\breve{F}}}}$ for $W$ if we want to specify the group that we are working with. \subsubsection{}We also fix a special vertex $\ensuremath{\mathfrak{s}}$ lying in the closure of $\ensuremath{\mathfrak{a}}$. Such a vertex induces a splitting of the exact sequence (\ref{eqn: exact sequence Iwahori Weyl}) and gives an identification \begin{equation} \label{eqn: id apartment}V\cong X_*(T)_{I}\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{R}}.\end{equation} Let $\mathrm{Aff}(V)$ denote the group of affine transformations of $V$. Then we have an identification $\mathrm{Aff}(V)\cong V\rtimes \ensuremath{\mathrm{GL}}(V).$ The Frobenius $\sigma$ acts on $V$ via affine transformations and we write $\varsigma\in \ensuremath{\mathrm{GL}}(V)$ for the linear part of this action. The identification (\ref{eqn: id apartment}) also determines a dominant chamber $C_+\subset X_*(T)_{I}\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{R}}$; namely by taking the one containing $\ensuremath{\mathfrak{a}}$, and we write $B$ for the corresponding Borel subgroup defined over $\ensuremath{\breve{F}}$. We write $\sigma_0$ for the automorphism of $X_*(T)_{I}\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{R}}$ defined by $\sigma_0:=w_0\circ\varsigma$ where $w_0\in W_0$ is the unique element such that $w_0\circ \varsigma (C_+)=C_+$. We call this the $L$-action on $X_*(T)_I\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{R}}$; by definition it preserves $C_+$. \subsubsection{}\label{subsubsec:bruhatorder} Let $\mathbb{S}$ denote the set of simple reflections in the walls of $\mathfrak{a}$. 
We let $W_a$ denote the affine Weyl group; it is the subgroup of $W$ generated by the reflections in $\mathbb{S}$. Then $(W_a,\ensuremath{\mathbb{S}})$ has the structure of a Coxeter group and hence a notion of length and Bruhat order. The Iwahori Weyl group and the affine Weyl group are related via the following exact sequence \begin{equation}\label{eqn: exact sequence affine Weyl}\xymatrix{0\ar[r]& W_a\ar[r]&W\ar[r]& \pi_1(G)_I\ar[r]& 0.}\end{equation} The choice of $\mathfrak{a}$ induces a splitting of this exact sequence and $\pi_1(G)_I$ can be identified with the subgroup $\Omega\subset W$ consisting of elements which preserve $\ensuremath{\mathfrak{a}}$. The length function $\ell$ and the Bruhat order $\leq$ extend to $W$ via this choice of splitting, and $\Omega$ is identified with the set of length $0$ elements. We let $\widetilde{\kappa}_G(w)$ denote the image of $w \in W$ in $\pi_1(G)_I$ and $\kappa_G(w)$ its projection to $\pi_1(G)_\Gamma$. For $w\in W$, there is an integer $n$ such that $\sigma^n$ acts trivially on $W$ and $w\sigma(w)\dotsc\sigma^{n-1}(w)=t_\lambda$ for some $\lambda\in X_*(T)_I$. We define the (non-dominant) Newton cocharacter $\nu_w\in X_*(T)_{I,\ensuremath{\mathbb{Q}}}\cong X_*(T)^I_{\ensuremath{\mathbb{Q}}}$ to be $\frac{1}{n}\lambda$, which is easily seen to be independent of $n$. We let $\overline{\nu}_w\in X_*(T)^{I,+}_{\ensuremath{\mathbb{Q}}}$ be the dominant representative of $\nu_w$. \subsubsection{}\label{sec: affine Weyl group} Let $T_{\mathrm{sc}}$ denote the preimage of $T$ in the simply connected covering $G_{\mathrm{sc}}$ of the derived group of $G$. Then $W_a$ is the Iwahori Weyl group for $G_{\mathrm{sc}}$ and we have the following exact sequence \[\xymatrix{0\ar[r]& X_*(T_{\mathrm{sc}})_I\ar[r]& W_a\ar[r] &W_0\ar[r]& 0.}\]Since the action of $I$ permutes the set of absolute coroots, $X_*(T_{\mathrm{sc}})_I$ is torsion free and there is an inclusion $X_*(T_{\mathrm{sc}})_I\hookrightarrow X_*(T)_I$.
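The length function and the identity $w\sigma(w)\dotsm\sigma^{n-1}(w)=t_\lambda$ can be made explicit in the simplest case. The sketch below works in the affine Weyl group of type $A_1$ (the infinite dihedral group, generated by two simple reflections subject only to $s_i^2=e$) and takes $\sigma$ to act trivially; both choices are simplifying assumptions made purely for illustration.

```python
def reduce_word(word):
    # In the infinite dihedral group the only relations are s_i^2 = e,
    # so greedy cancellation of adjacent equal letters produces the
    # unique reduced word.
    stack = []
    for s in word:
        if stack and stack[-1] == s:
            stack.pop()
        else:
            stack.append(s)
    return stack

def length(word):
    return len(reduce_word(word))

# The translation by the simple coroot corresponds to t = s1*s0; its
# powers are the translations t_{m*alpha^vee}, and l(t^m) = m*l(t) = 2m.
t = [1, 0]
assert all(length(t * m) == 2 * m for m in range(8))

# A simple reflection behaves differently: l(s0^2) = 0, not 2*l(s0).
assert length([0] * 2) == 0
```

With $\sigma$ trivial, the condition $\ell(w\sigma(w)\dotsm\sigma^{n-1}(w))=n\ell(w)$ specializes to $\ell(w^n)=n\ell(w)$: the translation elements above satisfy it, while a simple reflection does not.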
By \cite{HaRa}, there exists a reduced root system $\Sigma$ such that $$W_a \simeq Q^\vee(\Sigma)\rtimes W_0,$$ where $Q^\vee(\Sigma)$ denotes the coroot lattice of $\Sigma$, and $W_0$ is identified with the Weyl group $W(\Sigma)$ of $\Sigma$. The roots of $\Sigma$ are proportional to the roots of the relative root system for $G_{\ensuremath{\breve{F}}}$; however the root systems themselves may not be proportional. As explained in \cite[p.~7]{HaRa}, we may consider elements of $\Sigma$ as functions on $X_*(T)_I\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{R}}$, and we write $\langle\ ,\ \rangle$ for the induced pairing between $X_*(T)_I\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{R}}$ and the root lattice associated to $\Sigma$. We let $\rho$ denote the half sum of all positive roots in $\Sigma$. Then for any $\lambda\in X_*(T)_I$ we have the equality \begin{equation}\label{eqn: length of translation element}\ell(t_\lambda)=\langle\overline{\lambda},2\rho\rangle, \end{equation} where $\overline{\lambda}\in W_0\cdot\lambda$ is the dominant representative of $\lambda$, i.e.\ the image of $\overline{\lambda}$ in $X_*(T)_I\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{R}}$ lies in $C_+$. \subsubsection{} We say that an element $w\in W$ is $\sigma$-\textit{straight} if for any $n\in \ensuremath{\mathbb{N}}$, $$\ell(w\sigma(w)\dotsc\sigma^{n-1}(w))=n\ell(w).$$ It is straightforward to check that this is equivalent to the condition $\ell(w)=\langle \overline{\nu}_w,2\rho\rangle$. In this paper, we are particularly interested in translation elements $t_{\mu'}$ which are also $\sigma$-straight; the key property we will need is that the cocharacter $\mu'$ is then central for some Levi subgroup of $G$ defined over $F$. For any $v\in X_*(T)_I\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{R}}$, we let $\Phi_{v,0}$ be the set of relative roots $\alpha$ for $G_{\ensuremath{\breve{F}}}$ such that $\langle v,\alpha\rangle=0$.
We may then associate to $v$ the semi-standard Levi subgroup $M_v\subset G_{\ensuremath{\breve{F}}}$ generated by $T$ and the root subgroups $U_\alpha$ corresponding to $\alpha\in\Phi_{v,0}$. If in addition $v$ is fixed by $\varsigma$, then $M_v$ is defined over $F$. We say $\lambda\in X_*(T)_I$ is central in $G$ if it pairs with any relative root (equivalently any root in $\Sigma$) to give $0$. \begin{lemma}\label{lemma: cochar central}Let $\mu'\in X_*(T)_I$ such that $t_{\mu'}$ is a $\sigma$-straight element and let $M:=M_{\nu_{t_{\mu'}}}$ be the semi-standard Levi subgroup of $G$ associated to the Newton cocharacter $\nu_{t_{\mu'}}$. Then $M$ is defined over $F$ and ${\mu'}$ is central in $M$. \end{lemma} \begin{proof}For any $\lambda\in X_*(T)_I$, and for sufficiently divisible $n$ we have $$n\nu_{\sigma(t_\lambda)}=\sigma(t_\lambda)\dotsc\sigma^n(t_\lambda)=t_{\lambda}^{-1}n\nu_{t_\lambda} t_\lambda=n\nu_{t_{\lambda}}.$$ Note that $\sigma(t_{\lambda})=t_{\varsigma(\lambda)}$; it follows that $\nu_{\sigma(t_{\lambda})}=\varsigma(\nu_{t_{\lambda}})$ and hence $\nu_{t_{\lambda}}$ is fixed by $\varsigma$. Therefore $M$ is defined over $F$. We let $u\in W_0$ be such that $u(\nu_{t_{\mu'}})=\overline{\nu}_{t_{\mu'}}$. For a sufficiently divisible $n$, we have $$\ell(t_{\mu'})=\langle\overline{\nu}_{t_{\mu'}},2\rho\rangle=\frac{1}{n}\sum_{i=0}^{n-1}\langle u\varsigma^i(\mu'),2\rho\rangle$$ where the first equality follows from the $\sigma$-straightness of $t_{\mu'}$. Now $\langle u\varsigma^i(\mu'),2\rho\rangle\leq \ell(t_{\mu'})$ with equality if and only if $u\varsigma^i(\mu')$ is dominant. Therefore $u\varsigma^i(\mu')$ is dominant for all $i$ and hence $\varsigma^i(\mu')$ is contained in the translate $C'$ of the dominant chamber $C_+$ by $u^{-1}$ for all $i$. Now $M$ corresponds to a sub-root system $\Sigma_M$ of $\Sigma$ consisting of the roots $\alpha\in \Sigma$ such that $\langle \nu_{t_{\mu'}},\alpha\rangle=0$. 
Then $\Sigma_M$ is also the reduced root system associated to the affine Weyl group for $M$ as in \S\ref{sec: affine Weyl group}. We must show that for all $\alpha\in \Sigma_M$, we have $\langle{\mu'},\alpha\rangle=0$. Let $\alpha\in \Sigma_M$ be a root. Since $\varsigma^i(\mu')$ is contained in a single Weyl chamber for all $i$, the pairings $\langle \varsigma^i(\mu'),\alpha\rangle$ have the same sign for all $i$. Without loss of generality, assume $\langle \varsigma^i(\mu'),\alpha\rangle\geq 0$ for all $i.$ Then we have \begin{equation} \begin{split}0&=\langle \nu_{t_{\mu'}},\alpha\rangle =\frac{1}{n}\sum_{i=0}^{n-1}\langle\varsigma^i(\mu'),\alpha\rangle. \end{split} \end{equation} Since all the terms in the sum are non-negative, they must be $0$. Hence $\mu'$ is central in $M$. \end{proof} \subsubsection{}Now let $\{\mu\}$ be a geometric conjugacy class of cocharacters of $G$. Let $\mu\in X_*(T)_I$ denote the image of a dominant (with respect to the choice of Borel $B$ defined above) representative $\widetilde{\mu}\in X_*(T)$ of $\{\mu\}$. \begin{lemma} Let $w\in W_0$ be such that for $\mu':=w({\mu})$, $t_{\mu'}$ is a $\sigma$-straight element. Let $\widetilde{\lambda}:=w(\widetilde{\mu})\in X_*(T)$. Then $\widetilde{\lambda}$ is central in $M:=M_{\nu_{t_{\mu'}}}$. Here, we consider $W_0$ as a subgroup of the absolute Weyl group for $G$. \end{lemma} \begin{proof} Let $w(C_+)\subset X_*(T)_I\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{R}}$ be the translate of the dominant chamber by $w$. Then $w(C_+)$ determines a chamber $C_M$ for $M$ (it is the unique chamber for $M$ such that $w(C_+)\subset C_M$) and $\mu'\in C_M$. The chamber $C_M$ determines an ordering of the root system $\Sigma_M$. Let ${\alpha}$ be a positive root for $\Sigma_M$ and $\widetilde{\alpha}\in X^*(T)$ an (absolute) root lifting ${\alpha}$; such a lift exists by the construction of $\Sigma$, see e.g.\ \cite[VI, 2.1]{Bourbaki}.
We let $(\ ,\ ):X_*(T)\times X^*(T)\rightarrow \ensuremath{\mathbb{Z}}$ denote the natural pairing. Let $K/\ensuremath{\breve{F}}$ be a finite Galois extension over which $T$ splits. We have by definition of $\Sigma_M$ $$0=\langle \mu',{\alpha}\rangle=c\sum_{\tau\in\mathrm{Gal}(K/\ensuremath{\breve{F}})}( \widetilde{\lambda},\tau(\widetilde{\alpha}))$$for some positive $c\in \mathbb{R}$, where the first equality follows since $\mu'$ is central in $M$. For any $\tau\in \mathrm{Gal}(K/\ensuremath{\breve{F}})$, $C_M$ is preserved by $\tau$ and hence $\tau(\widetilde{\alpha})$ is a positive root for $M$. Therefore $(\widetilde{\lambda},\tau(\widetilde{\alpha}))\geq 0$, and hence $(\widetilde{\lambda},\tau(\widetilde{\alpha}))=0$ for all $\tau$. Applying this to every relative root $\alpha$ for $M$, we see that $\widetilde\lambda$ is central in $M$. \end{proof} \subsection{$\mu$-ordinary $\sigma$-conjugacy classes} \subsubsection{}\label{sec: mu admissible set}Let $\{\mu\}$ be a geometric conjugacy class of cocharacters of $G$; we let $\widetilde{\mu}\in X_*(T)$ and $\mu\in X_*(T)_I$ as above. The $\mu$-admissible set is defined to be $$\ensuremath{\mathrm{Adm}}(\{\mu\})=\{w\in W|w\leq t_{x({\mu})} \text{ for some }x\in W_0\}.$$ It has a unique minimal element denoted $\tau_{\{\mu\}},$ which is also the unique element of $\ensuremath{\mathrm{Adm}}(\{\mu\})\cap\Omega$. For $b\in G({\ensuremath{\breve{F}}})$, we let $[b]$ denote the set $\{g^{-1}b\sigma(g)|g\in G({\ensuremath{\breve{F}}})\}$, the $\sigma$-conjugacy class of $b$. The set of $\sigma$-conjugacy classes $B(G)$ has been classified by Kottwitz in \cite{Ko2} and \cite{Ko1}. 
For $b\in G({\ensuremath{\breve{F}}})$, we let $\nu_b:\ensuremath{\mathbb{D}}\rightarrow G_{\ensuremath{\breve{F}}}$ denote its Newton cocharacter and $$\overline{\nu}_b\in X_*(T)_{I,\ensuremath{\mathbb{Q}}}^+\cong X_*(T)_{\ensuremath{\mathbb{Q}}}^{I,+}$$ the dominant representative for $\nu_b$; it is known that $\overline{\nu}_b$ is invariant under the action of $\sigma_0$. We let $\widetilde{\kappa}_G:G(\ensuremath{\breve{F}})\rightarrow \pi_1(G)_I$ denote the Kottwitz homomorphism and we write $$\kappa_G:G({\ensuremath{\breve{F}}})\rightarrow \pi_1(G)_\Gamma$$ for the composition of $\widetilde{\kappa}_G$ and the projection map $\pi_1(G)_I\rightarrow \pi_1(G)_\Gamma$. This induces a well-defined map $B(G)\rightarrow \pi_1(G)_\Gamma$, also denoted $\kappa_G$. Then there is an injective map \begin{equation}\label{eqn: Kottwitz classification}B(G)\xrightarrow{(\kappa_G,b\mapsto \overline{\nu}_b)}\pi_1(G)_\Gamma\times (X_*(T)_{\ensuremath{\mathbb{Q}}}^{I,+})^{\sigma_0}.\end{equation} \subsubsection{} There is a more explicit description of this map using $W$. For $w\in W$, its $\sigma$-conjugacy class is the set $\{u^{-1}w\sigma(u)\mid u\in W\}$. We let $B(W,\sigma)$ denote the set of $\sigma$-conjugacy classes in $W$. For $w\in W$, we let $\dot{w}\in N(\ensuremath{\breve{F}})$ denote a lift of $w$. Then to $w\in W$, we associate the $\sigma$-conjugacy class of $\dot{w}$; by Lang's theorem this does not depend on the choice of lift $\dot{w}$. We write $$\Psi:B(W,\sigma)\rightarrow B(G)$$ for the map induced by $w\mapsto [\dot{w}]$.
By \cite[Theorem 3.7]{He1}, $\Psi$ is surjective and we have a commutative diagram \begin{equation} \label{eqn: B(G) Iwahori Weyl} \xymatrix{B(W, \sigma) \ar@{->>}[rr]^{\Psi} \ar[dr]_{(\overline\nu,\kappa_G)} & & B(G) \ar@{^{(}->}[ld]^{(\overline\nu,\kappa_G)} \\ & (X_*(T)^{I,+}_ \ensuremath{\mathbb{Q}}) \times \pi_1(G)_\Gamma &}.\end{equation} The map $\Psi$ is not injective in general; however, it is proved in \cite[Theorem 3.7]{He1} that its restriction to the set of $\sigma$-straight $\sigma$-conjugacy classes is a bijection. Here, a $\sigma$-conjugacy class in $W$ is said to be $\sigma$-straight if it contains a $\sigma$-straight element. \subsubsection{} Note that there is a partial order on the set $X_*(T)_\ensuremath{\mathbb{Q}}^+$; for $\lambda,\lambda'\in X_*(T)_\ensuremath{\mathbb{Q}}^+$, we write $\lambda\leq\lambda'$ if $\lambda'-\lambda$ is a non-negative rational linear combination of positive coroots. For $\{\mu\}$ as above, we write $\mu^\natural$ for the common image of the elements of $\{\mu\}$ in $\pi_1(G)_\Gamma$ and we define $$\mu^\diamond=\frac{1}{N}\sum_{i=1}^N\sigma_0^i(\mu)\in X_*(T)^+_{I,\ensuremath{\mathbb{Q}}}\cong X_*(T)^{I,+}_\ensuremath{\mathbb{Q}},$$ where $N$ is the order of $\sigma_0$ acting on $X_*(T)_I\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{Q}}$. We set $$B(G,\{\mu\})=\{[b]\in B(G):\kappa_G(b)=\mu^\natural,\overline{\nu}_b\leq\mu^\diamond\}.$$ Note that for $[b_1],[b_2]\in B(G,\{\mu\})$ such that $\overline{\nu}_{[b_1]}=\overline{\nu}_{[b_2]}$, we have $[b_1]=[b_2]$: indeed, $[b_1]$ and $[b_2]$ have common image $\mu^\natural$ under $\kappa_G$, and the map (\ref{eqn: Kottwitz classification}) is injective. \begin{definition}\label{def: mu ordinary} Suppose there exists a class $[b]\in B(G,\{\mu\})$ such that $\overline{\nu}_{[b]}=\mu^\diamond$ (such a class is necessarily unique if it exists, by the above remark). We write $[b]_{\mu}$ for this class; it is called the $\mu$-ordinary $\sigma$-conjugacy class.
\end{definition} \begin{remark} It is shown in \cite[Theorem 1.1]{HeNie2} that $B(G,\{\mu\})$ always contains a maximal element with respect to the partial order $\leq$. When $G$ is quasi-split, this maximal class is just $[b]_{\mu}$. However, if $G$ is not quasi-split, there may be no $[b]\in B(G,\{\mu\})$ such that $\overline{\nu}_{[b]}=\mu^\diamond$. \end{remark} \begin{lemma}\label{lemma: ord class rep by straight elt}Assume there exists $[b]_{\mu}\in B(G,\{\mu\})$ with $\overline{\nu}_{[b]_{\mu}}=\mu^\diamond$. Then there exists $\mu'\in W_0\cdot{\mu}$ with $t_{\mu'}$ $\sigma$-straight such that $\dot{t}_{\mu'}\in [b]_{\mu}.$ \end{lemma} \begin{proof}Since $[b]_{\mu}\in B(G,\{\mu\})$, there exists a $\sigma$-straight element $w\in \ensuremath{\mathrm{Adm}}(\{\mu\})$ such that $\dot{w}\in [b]_{\mu}$ by \cite[Theorem 4.1]{He3}. The commutativity of diagram (\ref{eqn: B(G) Iwahori Weyl}) implies that $\overline{\nu}_w=\mu^\diamond$. Since $w$ is $\sigma$-straight, we have $$\ell(w)=\langle\overline{\nu}_w,2\rho\rangle=\langle\mu^\diamond,2\rho\rangle=\langle{\mu},2\rho\rangle=\ell({t_\mu}),$$ where the third equality holds since $\rho$ is $\sigma_0$-invariant, and the final equality uses (\ref{eqn: length of translation element}) and the fact that $\mu$ is dominant. Since $w\in\ensuremath{\mathrm{Adm}}(\{\mu\})$, we have $\ell(w)\leq \ell(t_{{\mu}})$, with equality if and only if $w=t_{\mu'}$ for some $\mu'\in W_0\cdot {\mu}$; hence $w$ is such a translation element. \end{proof} \subsubsection{} Now let $G'$ be another reductive group over $F$ and $f: G\rightarrow G'$ a group scheme morphism which induces an isogeny $G_{\ensuremath{\mathrm{der}}}\rightarrow G'_{\ensuremath{\mathrm{der}}}$. We write $\{\mu'\}$ for the $G'$-conjugacy class of cocharacters induced by $\{\mu\}$. We have the following relationship between $\mu$-ordinary $\sigma$-conjugacy classes for $G$ and $G'$.
\begin{lemma}\label{lemma: mu-ordinary class change of groups}\begin{enumerate}\item There exists $[b]_{\mu}\in B(G,\{\mu\})$ with $\overline{\nu}_{[b]_{\mu}}=\mu^\diamond$ if and only if there exists $[b']_{\mu'}\in B(G',\{\mu'\})$ with $\overline{\nu}_{[b']_{\mu'}}=\mu'^\diamond$. \item Let $[b]\in B(G,\{\mu\})$ and $[b']:=[f(b)]\in B(G',\{\mu'\})$. Then $[b]=[b]_{\mu}$ if and only if $[b']= [b']_{\mu'}$. \end{enumerate} \end{lemma} \begin{proof} (1) Note that we have a commutative diagram \[\xymatrix{ B(G)\ar[r]\ar[d]&(X_*(T)^{I,+}_ \ensuremath{\mathbb{Q}}) \times \pi_1(G)_\Gamma\ar[d]\\ B(G')\ar[r]& (X_*(T')^{I,+}_ \ensuremath{\mathbb{Q}}) \times \pi_1(G')_\Gamma}\] where $T'$ is the centralizer of a maximal $\ensuremath{\breve{F}}$-split torus of $G'$ containing $f(T)$. Thus one direction of (1) is clear. For the converse, suppose there exists $[b']_{\mu'}\in B(G',\{\mu'\})$. Note that by assumption, there is an identification of relative Weyl groups for $G$ and $G'$. Then by Lemma \ref{lemma: ord class rep by straight elt}, there exists $w_0\in W_0$ such that $t_{w_0(\mu')}$ is a $\sigma$-straight element of the Iwahori Weyl group for $G'$ and $\dot{t}_{w_0(\mu')}\in [b']_{\mu'}$. Then it is easy to check that $t_{w_0(\mu)}$ is a $\sigma$-straight element of the Iwahori Weyl group for $G$ and that $\overline{\nu}_{t_{w_0(\mu)}}=\mu^\diamond$. It follows that $[\dot{t}_{w_0(\mu)}]=[b]_{\mu}\in B(G,\{\mu\})$. (2) One direction is clear. Suppose then that $[b']=[b']_{\mu'}.$ It follows that $\overline{\nu}_{[b]}=\mu^\diamond +\alpha$ for some $\alpha\in X_*(\ker(G\rightarrow G'))^I$. But $[b]\in B(G,\{\mu\})$, and hence $\mu^\diamond-\overline{\nu}_{[b]}$ is a non-negative rational linear combination of positive coroots. Thus $\alpha=0$ and $[b]=[b]_{\mu}$.
\end{proof} \subsection{Parahoric group schemes}\label{sec: parahoric group schemes} \subsubsection{} Recall the extended Bruhat--Tits buildings $\ensuremath{\mathcal{B}}(G,F)$ and $\ensuremath{\mathcal{B}}(G,\ensuremath{\breve{F}})$ associated to $G$. For a non-empty bounded subset $\Xi\subset \ensuremath{\mathcal{B}}(G,F)$ which is contained in an apartment, we let $G(F)_{\Xi}$ (resp. $G(\ensuremath{\breve{F}})_\Xi$) denote the subgroup of $G(F)$ (resp. $G(\ensuremath{\breve{F}})$) which fixes $\Xi$ pointwise. By the main result of \cite{BT2}, there exists a smooth affine group scheme $\widetilde{\ensuremath{\mathcal{G}}}_{\Xi}$ over $\ensuremath{\mathcal{O}}_F$ with generic fiber $G$ which is uniquely characterized by the property $\widetilde{\ensuremath{\mathcal{G}}}_{\Xi}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})=G(\ensuremath{\breve{F}})_\Xi$. As in \cite[\S 1.1.2]{KP}, we will call such a group scheme the Bruhat--Tits stabilizer scheme associated to $\Xi$. If $\Xi=\{x\}$ is a point, we write $G(F)_x$ (resp. $\widetilde{\ensuremath{\mathcal{G}}}_x$) for $G(F)_{\{x\}}$ (resp. $\widetilde{\ensuremath{\mathcal{G}}}_{\{x\}}$). For $\Xi\subset \ensuremath{\mathcal{B}}(G,F)$, we write $\ensuremath{\mathcal{G}}_\Xi$ for the ``connected stabilizer'' of $\Xi$ (cf. \cite[\S4]{BT2}). We are mainly interested in the cases where $\Xi$ is a point $x$ or an open facet $\ensuremath{\mathfrak{f}}$. In these cases, $\ensuremath{\mathcal{G}}_x$ (resp. $\ensuremath{\mathcal{G}}_{\ensuremath{\mathfrak{f}}}$) is the parahoric group scheme associated to $x$ (resp. $\ensuremath{\mathfrak{f}}$).
By \cite{HaRa}, $\ensuremath{\mathcal{G}}_{\Xi}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})= \widetilde\ensuremath{\mathcal{G}}_{\Xi}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\cap\ker\widetilde{\kappa}_G.$ It follows that $\ensuremath{\mathcal{G}}_\Xi(\ensuremath{\mathcal{O}}_F)=\widetilde{\ensuremath{\mathcal{G}}}_\Xi(\ensuremath{\mathcal{O}}_F)\cap\ker \widetilde{\kappa}_G$. If $\ensuremath{\mathfrak{f}}$ is a facet of $\ensuremath{\mathcal{B}}(G,F)$, we say $x\in \ensuremath{\mathfrak{f}}$ is generic if every element of $G(F)$ which fixes $x$ also fixes $\ensuremath{\mathfrak{f}}$ pointwise. The set of generic points in $\ensuremath{\mathfrak{f}}$ is an open dense subset of $\ensuremath{\mathfrak{f}}$, and for any generic point $x\in\ensuremath{\mathfrak{f}}$, we have $\widetilde{\ensuremath{\mathcal{G}}}_{x}=\widetilde{\ensuremath{\mathcal{G}}}_\ensuremath{\mathfrak{f}}$ and $\ensuremath{\mathcal{G}}_x=\ensuremath{\mathcal{G}}_{\ensuremath{\mathfrak{f}}}$. We may also consider the corresponding objects over $\ensuremath{\breve{F}}$, and we use the same notation in this case. When it is understood which point of $\ensuremath{\mathcal{B}}(G,F)$ or $\ensuremath{\mathcal{B}}(G,\ensuremath{\breve{F}})$ we are referring to, we simply write $\widetilde{\ensuremath{\mathcal{G}}}$ and $\ensuremath{\mathcal{G}}$ for the corresponding group schemes. A parahoric group scheme $\ensuremath{\mathcal{G}}$ is said to be a \emph{connected} parahoric if there exists $x\in \ensuremath{\mathcal{B}}(G,F)$ such that $\widetilde{\ensuremath{\mathcal{G}}}_x=\ensuremath{\mathcal{G}}_x=\ensuremath{\mathcal{G}}$; if such a point exists, it is necessarily a generic point in the facet containing it. Let $G'$ be another connected reductive group and assume there is an identification $G_{\mathrm{ad}}\cong G'_{\mathrm{ad}}$ between their respective adjoint groups.
Then there are surjective maps of buildings $\ensuremath{\mathcal{B}}(G,F)\rightarrow \ensuremath{\mathcal{B}}(G_{\ensuremath{\mathrm{ad}}},F)$ and $\ensuremath{\mathcal{B}}(G',F)\rightarrow \ensuremath{\mathcal{B}}(G'_{\ensuremath{\mathrm{ad}}},F)$ which are equivariant for $G(F)$ and $G'(F)$ respectively. If $\ensuremath{\mathcal{G}}=\ensuremath{\mathcal{G}}_x$ is a parahoric group scheme for $G$ corresponding to $x\in \ensuremath{\mathcal{B}}(G,F)$, then $\ensuremath{\mathcal{G}}$ determines a parahoric group scheme $\ensuremath{\mathcal{G}}'=\ensuremath{\mathcal{G}}'_{x'}$ for $G'$, where $x'\in \ensuremath{\mathcal{B}}(G',F)$ lies in the preimage of the image of $x$ in $\ensuremath{\mathcal{B}}(G_{\ensuremath{\mathrm{ad}}},F)$. \subsubsection{} Now let $J\subset \mathbb{S}$ be a subset and write $W_J$ for the subgroup of $W$ generated by $J$. If $W_J$ is finite, $J$ corresponds to a parahoric group scheme $\ensuremath{\mathcal{G}}$ over $\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}$; such parahorics are called \emph{standard} (with respect to $\ensuremath{\mathfrak{a}}$). We let $W^J$ (resp.\ $^JW$) denote the set of minimal length representatives of the cosets $W/W_J$ (resp.\ $W_J\backslash W$). We recall the Iwahori decomposition: the map $w\mapsto \dot{w}$ induces a bijection $$W_J\backslash W / W_J \cong \ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\backslash G({\ensuremath{\breve{F}}})/\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}).$$ We now assume $J$ is $\sigma$-stable; in this case $\ensuremath{\mathcal{G}}$ is defined over $\ensuremath{\mathcal{O}}_F$ and is a parahoric group scheme for $G$. For the rest of \S\ref{sec: parahoric group schemes}, we fix a geometric conjugacy class of cocharacters $\{\mu\}$ of $G$ and assume the existence of $[b]_{\mu}\in B(G,\{\mu\})$.
We define $\ensuremath{\mathrm{Adm}}(\{\mu\})_J$ to be the image of $\ensuremath{\mathrm{Adm}}(\{\mu\})$ in $W_J\backslash W/W_J.$ We sometimes write $\ensuremath{\mathrm{Adm}}_G(\{\mu\})_J$ if we want to specify the group $G$ we are working with. The following is the key group-theoretic result that we need in order to prove the existence of canonical liftings in \S\ref{sec: canonical liftings}. \begin{prop}\label{prop: F-crystal basis} Let $b\in\left( \bigcup_{w\in\ensuremath{\mathrm{Adm}}(\{\mu\})_J}\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\dot{w}\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\right)\cap [b]_{\mu}$. Then \begin{enumerate}\item $b\in \mathcal{G}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\dot{t}_{\mu'}\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ for some $\sigma$-straight element $t_{\mu'}$. \item There exists $g\in \ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ such that $g^{-1}b\sigma(g)=\dot{t}_{\mu'}$. \end{enumerate} \end{prop} \begin{proof}By \cite[Theorem 6.1 (b)]{HR}, there exists $h\in \ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ such that $h^{-1}b\sigma(h)\in \mathcal{I}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\dot{w}\mathcal{I}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ for some $w\in {^J}W$. Thus the image of $w$ in $W_J\backslash W/W_J$ lies in $\ensuremath{\mathrm{Adm}}(\{\mu\})_J$, and hence $w\in{^J}W\cap\ensuremath{\mathrm{Adm}}(\{\mu\})$ by \cite[Theorem 6.1]{He3}. Thus upon replacing $b$ by $h^{-1}b\sigma(h)$, we may assume $b\in\mathcal{I}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\dot{w}\mathcal{I}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$.
By \cite[Theorem 4.1]{HZ}, there exists a $\sigma$-straight element $x\leq w$ such that $[b]_{\mu}\cap\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\dot{x}\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\neq\emptyset$ (the theorem in {\em loc.~cit.} proves the non-emptiness of the affine Deligne--Lusztig variety $X_x(b)$, which is equivalent to this statement). By \cite[Theorem 3.5]{He1}, $\dot{x}\in [b]_{\mu}$, and by the same argument as in Lemma \ref{lemma: ord class rep by straight elt} we have $x=t_{\mu'}$ for some $\mu'\in W_0\cdot{\mu}$. Since $w\in \ensuremath{\mathrm{Adm}}(\{\mu\})$, we have $\ell(w)\leq\ell(t_{\mu})=\ell(x)$; combined with $x\leq w$, this gives $w=x=t_{\mu'}$. This proves (1). For (2), the above argument shows that we may assume $b\in \ensuremath{\mathcal{I}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\dot{t}_{\mu'}\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ for $t_{\mu'}$ a $\sigma$-straight element. By \cite[Proposition 4.5]{He1}, there exists $i\in\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ such that $i^{-1}b\sigma(i)=\dot{t}_{\mu'}$; the result follows. \end{proof} \begin{remark} This result is a generalization to general parahorics of \cite[Proposition 2.5]{SZ}, which is due to Wortmann. In the case when $\ensuremath{\mathcal{G}}$ is a hyperspecial parahoric, this result is the group-theoretic analogue of the fact that there is exactly one isomorphism class of ordinary $F$-crystals over $\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}$. \end{remark} \subsection{N\'eron models of tori} \subsubsection{} For later applications to constructing integral models for Shimura varieties, we will need some results concerning N\'eron models of tori and their consequences for Bruhat--Tits group schemes. Let $T$ be a torus over a local field $F$; recall we have defined $\ensuremath{\mathcal{T}}_0$ to be the connected N\'eron model of $T$.
We let $\ensuremath{\mathcal{T}}$ (resp. $\ensuremath{\mathcal{T}}_{\mathrm{ft}}$) denote the lft N\'eron model (resp. finite type N\'eron model) for $T$. Then we have $\ensuremath{\mathcal{T}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})=T(\ensuremath{\breve{F}})$, and $\ensuremath{\mathcal{T}}_{\mathrm{ft}}$ is characterized by the condition $\ensuremath{\mathcal{T}}_{\mathrm{ft}}( \ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})=\{t\in T(\ensuremath{\breve{F}})\mid\widetilde\kappa_T(t)\in X_*(T)_{I,\mathrm{tors}}\}$, where $X_*(T)_{I,\mathrm{tors}}$ is the torsion subgroup of $X_*(T)_I$. Alternatively, by \cite[n$^\circ$1]{Ra} the connected components of the special fiber of $\ensuremath{\mathcal{T}}$ are parameterized by $X_*(T)_I$, and $\ensuremath{\mathcal{T}}_{\mathrm{ft}}$ is the unique smooth subgroup scheme of $\ensuremath{\mathcal{T}}$ whose special fiber is given by the set of connected components corresponding to the torsion subgroup $X_*(T)_{I,\mathrm{tors}}$ of $X_*(T)_{I}$. \subsubsection{}Let $\widetilde{F}/F$ be a finite Galois extension over which $T$ splits and let $\ensuremath{\mathcal{T}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}}$ denote the lft N\'eron model of $T_{\widetilde{F}}$.\footnote{We are abusing notation here since $\ensuremath{\mathcal{T}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}}$ is not necessarily the base change to $\ensuremath{\mathcal{O}}_{\widetilde{F}}$ of the N\'eron model $\ensuremath{\mathcal{T}}$ of $T$ over $\ensuremath{\mathcal{O}}_{F}$.} By \cite[\S7.6, Proposition 6]{BLR}, $\mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{T}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}}$ is the lft N\'eron model over $\ensuremath{\mathcal{O}}_{F}$ for $\mathrm{Res}_{\widetilde{F}/F}T_{\widetilde{F}}$.
There is a natural map $T\rightarrow \mathrm{Res}_{\widetilde{F}/F}T_{\widetilde{F}}$ and we define $\ensuremath{\mathcal{T}}^c$ to be the Zariski closure of $T$ inside $\mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_{F}}\ensuremath{\mathcal{T}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}}$. As in \cite[\S4.4.8]{BT2}, $\ensuremath{\mathcal{T}}^c$ does not depend on the choice of Galois splitting field of $T$. \begin{definition} We say a torus $T$ is \textit{$R$-smooth} if $\ensuremath{\mathcal{T}}^c$ is smooth. \end{definition} Since $\ensuremath{\mathcal{T}}^c$ satisfies the N\'eron mapping property (cf. \cite[Proof of Theorem 4.2]{Ed}), we have $\ensuremath{\mathcal{T}}\cong\ensuremath{\mathcal{T}}^c$ if $T$ is $R$-smooth. We can similarly define a notion of $R$-smoothness for tori over $\ensuremath{\breve{F}}$. Using the compatibility of N\'eron models with base change along $\ensuremath{\mathcal{O}}_F\rightarrow \ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}$, it is easy to see that a torus $T$ over $F$ is $R$-smooth if and only if $T_{\ensuremath{\breve{F}}}$ is $R$-smooth. The main property concerning $R$-smooth tori that we need is the following. \begin{lemma}\label{lem: R smooth torus property} Suppose we have a closed immersion $f:T_1\rightarrow T_2$ between tori, where $T_1$ is $R$-smooth. Then $f$ extends to a closed immersion $\ensuremath{\mathcal{T}}_1\rightarrow \ensuremath{\mathcal{T}}_2$ of lft N\'eron models. \end{lemma} \begin{proof} Let $\widetilde{F}$ be a finite Galois splitting field for $T_1$ and $T_2$. Then since $T_{1,\widetilde{F}}$ and $T_{2,\widetilde{F}}$ are just products of multiplicative group schemes, the map $T_{1,\widetilde{F}}\rightarrow T_{2, \widetilde{F}}$ extends to a closed immersion of lft N\'eron models $\ensuremath{\mathcal{T}}_{1,\ensuremath{\mathcal{O}}_{\widetilde{F}}}\rightarrow \ensuremath{\mathcal{T}}_{2,\ensuremath{\mathcal{O}}_{\widetilde{F}}}$ over $\ensuremath{\mathcal{O}}_{\widetilde{F}}$.
We obtain a diagram \[\xymatrix{\ensuremath{\mathcal{T}}_1\ar[r]^f\ar[d]_g&\ensuremath{\mathcal{T}}_2\ar[d]^h\\ \mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{T}}_{1,\ensuremath{\mathcal{O}}_{\widetilde{F}}}\ar[r]^i&\mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{T}}_{2,\ensuremath{\mathcal{O}}_{\widetilde{F}}}}\] where $i$ is a closed immersion since it is obtained by applying restriction of scalars to a closed immersion, and $g$ is a closed immersion since $T_1$ is $R$-smooth. It follows that $h\circ f=i\circ g$ is a closed immersion, and hence $f$ is a closed immersion since $h$ is separated. \end{proof} \subsubsection{}The proof of \cite[Theorem 4.2]{Ed} shows that if $T$ splits over a tamely ramified extension of $F$, then $T$ is $R$-smooth. In addition, the main examples of $R$-smooth tori that we will consider are given by the following Proposition. \begin{prop}\label{prop: examples of R smooth torus}\begin{enumerate}\item Let $K/F$ be a finite extension and $S$ an $R$-smooth torus over $K$. Then $T:=\mathrm{Res}_{K/F}S$ is $R$-smooth. \item Suppose we have tori $T_1,T_2$ and $T_3$ such that $T_1$ and $T_2$ are $R$-smooth, together with group scheme morphisms $f:T_1\rightarrow T_3$ and $g:T_2\rightarrow T_3$ satisfying the following properties: \begin{enumerate}[label=(\roman*)]\item $f$ is surjective and induces a smooth map $f:\ensuremath{\mathcal{T}}_1\rightarrow\ensuremath{\mathcal{T}}_3$ on lft N\'eron models. \item $g$ is a closed immersion. \end{enumerate} Then the connected component $T$ of the identity of the fiber product $T_1\times_{T_3}T_2$ is an $R$-smooth torus. \end{enumerate} \end{prop} \begin{proof}(1) Let $\widetilde{F}$ be a finite Galois splitting field of $T$, which necessarily contains $K$. For any $F$-morphism $\tau:K\rightarrow \widetilde{F}$, the base change of $S$ along $\tau$ is split.
Since $S$ is $R$-smooth, it follows that we have a closed immersion of $\ensuremath{\mathcal{O}}_K$-group schemes $$\ensuremath{\mathcal{S}}\rightarrow\mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_K}\ensuremath{\mathcal{S}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}},$$ where $\ensuremath{\mathcal{S}}$ (resp. $\ensuremath{\mathcal{S}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}}$) is the lft N\'eron model for $S$ (resp. $S_{\widetilde{F}}$). Applying $\mathrm{Res}_{\ensuremath{\mathcal{O}}_K/\ensuremath{\mathcal{O}}_F}$ we obtain a closed immersion $$\mathrm{Res}_{\ensuremath{\mathcal{O}}_{K}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{S}}\rightarrow\mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{S}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}}.$$ Taking the product over all $\tau:K\rightarrow \widetilde{F}$ we obtain a closed immersion $$\mathrm{Res}_{\ensuremath{\mathcal{O}}_{K}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{S}}\rightarrow\prod_{\tau:K\rightarrow \widetilde{F}}\mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{S}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}}\cong\mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{T}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}}.$$ Since $\mathrm{Res}_{\ensuremath{\mathcal{O}}_K/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{S}}$ is the lft N\'eron model $\ensuremath{\mathcal{T}}$ for $T$, it follows that $\ensuremath{\mathcal{T}}$ is the closure of its generic fiber inside $\mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{T}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}}$ and hence $T$ is $R$-smooth. (2) We may assume $F=\ensuremath{\breve{F}}$. 
We let $\ensuremath{\mathcal{T}}''$ denote the fiber product $\ensuremath{\mathcal{T}}_1\times_{\ensuremath{\mathcal{T}}_3}\ensuremath{\mathcal{T}}_2$, where the $\ensuremath{\mathcal{T}}_i$ are the lft N\'eron models for $T_i$. Then condition (i) implies that the map $\ensuremath{\mathcal{T}}''\rightarrow \ensuremath{\mathcal{T}}_2$ is smooth, and hence $\ensuremath{\mathcal{T}}''$ is smooth over $\ensuremath{\mathcal{O}}_F$. We let $\ensuremath{\mathcal{T}}'\subset \ensuremath{\mathcal{T}}''$ denote the connected component of the identity; then $\ensuremath{\mathcal{T}}'$ is a smooth scheme over $\ensuremath{\mathcal{O}}_F$. Moreover $\ensuremath{\mathcal{T}}'$ satisfies the N\'eron mapping property for $T$; it follows that $\ensuremath{\mathcal{T}}'$ is isomorphic to the lft N\'eron model $\ensuremath{\mathcal{T}}$ for $T$. Let ${\widetilde{F}}$ denote a finite Galois splitting field for $T_1$ (and hence also for $T$); we obtain a commutative diagram: \[\xymatrix{\ensuremath{\mathcal{T}}\ar[r]\ar[d]&\ensuremath{\mathcal{T}}_1\ar[d]\\ \mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{T}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}}\ar[r]&\mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{T}}_{1,\ensuremath{\mathcal{O}}_{\widetilde{F}}}}\] Condition (ii) and the $R$-smoothness of $T_2$ imply that the natural map $\ensuremath{\mathcal{T}}\rightarrow \ensuremath{\mathcal{T}}_1$ is a closed immersion. By the $R$-smoothness of $T_1$, the map $\ensuremath{\mathcal{T}}_1\rightarrow\mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{T}}_{1,\ensuremath{\mathcal{O}}_{\widetilde{F}}}$ is a closed immersion.
It follows that $\ensuremath{\mathcal{T}}\rightarrow\mathrm{Res}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{T}}_{\ensuremath{\mathcal{O}}_{\widetilde{F}}}$ is a closed immersion and hence $T$ is $R$-smooth. \end{proof} \begin{cor}\label{cor: extension of torus is R smooth} Let $T_1= \prod_{i=1}^s\mathrm{Res}_{K_i/F}S_{1,i}$ and $T_3 = \prod_{i=1}^s\mathrm{Res}_{K_i/F}S_{3,i}$, where $K_i$ is a finite extension of $F$ and $S_{1,i}, S_{3,i}$ are $K_i$-tori which split over a tamely ramified extension of $K_{i}$, and let $T_2$ be an $F$-torus which splits over a tamely ramified extension of $F$. Suppose we are given a group scheme morphism $f:T_1\rightarrow T_3$ which arises from a product of surjective maps $S_{1,i}\rightarrow S_{3,i}$ over $K_i$, and $g:T_2\rightarrow T_3$ a group scheme morphism which is a closed immersion. Then the connected component $T$ of the identity of the fiber product $T_1\times_{T_3}T_2$ is an $R$-smooth torus. \end{cor} \begin{proof} By Proposition \ref{prop: examples of R smooth torus} (1) and \cite[Theorem 4.2]{Ed}, $T_1$ and $T_2$ are $R$-smooth tori. By part (2) of Proposition \ref{prop: examples of R smooth torus}, it suffices to show that $f:T_1\rightarrow T_3$ induces a smooth map $\ensuremath{\mathcal{T}}_1\rightarrow \ensuremath{\mathcal{T}}_3$ on lft N\'eron models over $\ensuremath{\mathcal{O}}_F$. For this it suffices to consider the case $s=1$; we thus drop the index $i$ from the notation. We first reduce to the case where $\ker (f)$ is connected. Let $D:=\ker(f)$ and let $D^\circ$ denote the connected component of the identity of $D$. We may write $f=\mathrm{Res}_{K/F}h$, where $h:S_1\rightarrow S_3$ is the given surjection; then $D=\mathrm{Res}_{K/F} \ker h$ and $D^\circ=\mathrm{Res}_{K/F}(\ker h)^\circ$, where $(\ker h)^\circ$ is the connected component of the identity of $\ker h$.
The quotient $S_3':=S_1/(\ker h)^\circ$ is a torus equipped with an isogeny $S_3'\rightarrow S_3$ and we have an exact sequence \[\xymatrix{0\ar[r] &(\ker h)^\circ\ar[r] &S_1\ar[r] &S_3'\ar[r] &0.}\] Setting $T_3':=\mathrm{Res}_{K/F}S_3'$, we obtain an exact sequence \[\xymatrix{0\ar[r] &D^\circ\ar[r] &T_1\ar[r]^{f'} &T_3'\ar[r] &0.}\] We define $T_2'$ to be the connected component of the identity of $T_2\times_{T_3}T_3'$. Then we may identify $T$ with the connected component of the identity of $T_{1}\times_{T_{3}'}T_2'$. Since $T_2'\rightarrow T_3'$ is a closed immersion, we may replace $T_2$ and $T_3$ by $T_2'$ and $T_3'$ respectively, and hence assume that $\ker f$ is connected. By properties of Weil restriction, it is enough to show that the map $\ensuremath{\mathcal{S}}_{1}\rightarrow \ensuremath{\mathcal{S}}_3$ on lft N\'eron models over $\ensuremath{\mathcal{O}}_K$, obtained from $S_1 \rightarrow S_3$ over $K$, is smooth. We are thus reduced to showing that a surjective map $T\rightarrow T'$ between $F$-tori which split over tamely ramified extensions of $F$ and whose kernel is connected induces a smooth map $\ensuremath{\mathcal{T}}\rightarrow \ensuremath{\mathcal{T}}'$ between lft N\'eron models. This now follows from the same argument as \cite[Theorem 6.1 (5)$\Rightarrow$(6)]{Ed}, using the fact that $\ker(T\rightarrow T')$ is a torus. \end{proof} \subsubsection{} The previous results have the following consequences for Bruhat--Tits group schemes. Let $G$ be a reductive group over $F$ and $\widetilde{\ensuremath{\mathcal{G}}}$ a Bruhat--Tits stabilizer scheme corresponding to $x\in \ensuremath{\mathcal{B}}(G,F)$ which is generic in the facet containing it. Let $\beta:G\hookrightarrow G'$ be a closed immersion of reductive groups over $F$, which induces an isomorphism on derived groups.
As in \cite[\S1.1.3]{KP}, $x$ determines a point $x'\in \ensuremath{\mathcal{B}}(G',F)$ and we write $\widetilde{\ensuremath{\mathcal{G}}}'$ for the corresponding Bruhat--Tits stabilizer scheme of $G'$; then $\beta$ extends to a group scheme homomorphism $\beta:\widetilde{\ensuremath{\mathcal{G}}}\rightarrow \widetilde{\ensuremath{\mathcal{G}}}'$. \begin{prop}\label{prop: closed immersion of BT schemes} Assume that the centralizer of any maximal $\ensuremath{\breve{F}}$-split torus in $G$ is an $R$-smooth torus. Then $\beta:\widetilde{\ensuremath{\mathcal{G}}}\rightarrow \widetilde{\ensuremath{\mathcal{G}}}'$ is a closed immersion. \end{prop} \begin{proof} Since the construction of Bruhat--Tits stabilizer schemes is compatible with unramified base extensions, it is enough to prove the result in the case $F=\ensuremath{\breve{F}}$. We let $S$ be a maximal $\ensuremath{\breve{F}}$-split torus in $G$ such that $x$ lies in $\ensuremath{\mathcal{A}}(G,S,\ensuremath{\breve{F}})$. Let $T$ be the centralizer of $S$ which by assumption is an $R$-smooth torus. Let $S'$ be a maximal split torus of $G'$ such that $S'\cap G=S$ and we let $T'$ denote the centralizer of $S'$. By the construction of Bruhat--Tits stabilizer schemes in \cite[\S4.6]{BT2}, the Zariski closure of $T$ (resp. $T'$) inside $\widetilde{\ensuremath{\mathcal{G}}}$ (resp. $\widetilde{\ensuremath{\mathcal{G}}}'$) can be identified with the finite type N\'eron model $\ensuremath{\mathcal{T}}_{\mathrm{ft}}$ (resp. $\ensuremath{\mathcal{T}}'_{\mathrm{ft}}$). We claim that the natural map $T\rightarrow T'$ extends to a closed immersion \begin{equation}\label{eqn: closed immersion tori}\ensuremath{\mathcal{T}}_{\mathrm{ft}}\rightarrow \ensuremath{\mathcal{T}}_{\mathrm{ft}}'\end{equation} between finite type N\'eron models. Assuming this, we can prove the proposition. 
For any relative root $\alpha$, the map $G\rightarrow G'$ induces an isomorphism between the corresponding root subgroups $U_\alpha$ and $U'_{\alpha}$. If we let $\ensuremath{\mathcal{U}}_{\alpha}$ and $\ensuremath{\mathcal{U}}_{\alpha}'$ denote the corresponding schematic closures, then by the construction of $\widetilde{\ensuremath{\mathcal{G}}}$ and $\widetilde{\ensuremath{\mathcal{G}}}'$ in \cite[\S4.6]{BT2}, the map $G\rightarrow G'$ also induces an isomorphism $\ensuremath{\mathcal{U}}_{\alpha}\rightarrow\ensuremath{\mathcal{U}}_{\alpha}'$. Thus by \cite[Theorem 2.2.3]{BT2} the schematic closure $\widehat{\ensuremath{\mathcal{G}}}$ of $G$ in $\widetilde{\ensuremath{\mathcal{G}}}'$ contains the smooth big open cell $$\prod_{\alpha}\ensuremath{\mathcal{U}}_{-\alpha}\times{\ensuremath{\mathcal{T}}}_{\mathrm{ft}}\times\prod_{\alpha}\ensuremath{\mathcal{U}}_\alpha,$$ where $\alpha$ runs over the positive relative roots; hence by \cite[Corollary 2.2.5]{BT2}, $\widehat{\ensuremath{\mathcal{G}}}$ is smooth. Since $\widehat{\ensuremath{\mathcal{G}}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})=G(\ensuremath{\breve{F}})\cap\widetilde{\ensuremath{\mathcal{G}}}'(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$, it follows that $\widehat{\ensuremath{\mathcal{G}}}\cong \widetilde{\ensuremath{\mathcal{G}}}$, and hence we obtain a closed immersion $\widetilde{\ensuremath{\mathcal{G}}}\hookrightarrow \widetilde{\ensuremath{\mathcal{G}}}'$ as desired. It remains to show the existence of the closed immersion (\ref{eqn: closed immersion tori}). By Lemma \ref{lem: R smooth torus property}, we have a closed immersion $\ensuremath{\mathcal{T}}\rightarrow \ensuremath{\mathcal{T}}'$ of lft N\'eron models. We let $\phi:X_*(T)_I\rightarrow X_*(T')_I$ denote the induced morphism between the targets of the Kottwitz homomorphisms.
Then it is easy to see that $$\phi^{-1}(X_*(T')_{I,\mathrm{tors}})=X_*(T)_{I,\mathrm{tors}}.$$ As the finite type N\'eron models $\ensuremath{\mathcal{T}}_{\mathrm{ft}}$ and $\ensuremath{\mathcal{T}}'_{\mathrm{ft}}$ correspond to the subschemes of $\ensuremath{\mathcal{T}}$ and $\ensuremath{\mathcal{T}}'$ whose special fibers are given by the connected components parameterized by $X_*(T)_{I,\mathrm{tors}}$ and $X_*(T')_{I,\mathrm{tors}}$ respectively, it follows that $\ensuremath{\mathcal{T}}\rightarrow \ensuremath{\mathcal{T}}'$ induces a closed immersion $\ensuremath{\mathcal{T}}_{\mathrm{ft}}\rightarrow \ensuremath{\mathcal{T}}'_{\mathrm{ft}}$ as desired. \end{proof} \begin{remark} As all maximal $\ensuremath{\breve{F}}$-split tori are $\ensuremath{\breve{F}}$-conjugate, the centralizer of any maximal $\ensuremath{\breve{F}}$-split torus is $R$-smooth if there exists one such centralizer which is $R$-smooth. \end{remark} \subsubsection{}Now let $\beta:G\rightarrow G'$ be a central extension between reductive groups with kernel $Z$ and $\ensuremath{\mathcal{G}}$ the parahoric group scheme associated to some $x\in \ensuremath{\mathcal{B}}(G,F)$. We let $\ensuremath{\mathcal{G}}'$ denote the parahoric of $G'$ corresponding to $\ensuremath{\mathcal{G}}$; then as above, $\beta$ extends to a group scheme homomorphism $\ensuremath{\mathcal{G}}\rightarrow \ensuremath{\mathcal{G}}'$. \begin{prop}\label{prop: exact sequence parahorics}Assume $Z$ is an $R$-smooth torus. Then the Zariski closure $\widetilde{\ensuremath{\mathcal{Z}}}$ of $Z$ inside $\ensuremath{\mathcal{G}}$ is smooth and there is an (fppf) exact sequence \begin{equation}\label{eqn: exact sequence of parahorics}\xymatrix{0\ar[r]&\widetilde{\ensuremath{\mathcal{Z}}}\ar[r]&\ensuremath{\mathcal{G}}\ar[r]^\beta&\ensuremath{\mathcal{G}}'\ar[r]&0}\end{equation}of group schemes over $\ensuremath{\mathcal{O}}_F$.
\end{prop} \begin{proof}As in Proposition \ref{prop: closed immersion of BT schemes}, it suffices to prove the Proposition when $F=\ensuremath{\breve{F}}$. Let $S$ be a maximal $\ensuremath{\breve{F}}$-split torus of $G$ such that $x$ lies in $\ensuremath{\mathcal{A}}(G,S,\ensuremath{\breve{F}})$. Let $T$ be the centralizer of $S$ and we let $T'$ be the corresponding maximal torus of $G'$. Assume there exists an fppf exact sequence \begin{equation}\label{eqn: exact sequence of tori}\xymatrix{1\ar[r]&\widetilde\ensuremath{\mathcal{Z}}\ar[r]&\ensuremath{\mathcal{T}}_{0}\ar[r]&\ensuremath{\mathcal{T}}'_{0}\ar[r]&1}\end{equation}where $\ensuremath{\mathcal{T}}_{0}$ and $\ensuremath{\mathcal{T}}'_0$ are the connected N\'eron models of $T$ and $T'$ respectively. Then we may argue as in \cite[Proposition 1.1.4]{KP} to obtain the desired exact sequence (\ref{eqn: exact sequence of parahorics}). It remains to exhibit the exact sequence (\ref{eqn: exact sequence of tori}); we follow the argument of \cite[Lemma 6.7]{PR}. By assumption we obtain a closed immersion between lft N\'eron models $\ensuremath{\mathcal{Z}}\rightarrow \ensuremath{\mathcal{T}}$. We let $\widetilde{\ensuremath{\mathcal{Z}}}'$ denote the subgroup scheme of $\ensuremath{\mathcal{Z}}$ with generic fiber $Z,$ and special fiber corresponding to the connected components of the special fiber of $\ensuremath{\mathcal{Z}}$ parameterized by $\ker(X_*(Z)_I\rightarrow X_*(T)_I)$. Then $\widetilde{\ensuremath{\mathcal{Z}}}'$ is smooth and we have a closed immersion $\widetilde{\ensuremath{\mathcal{Z}}}'\rightarrow \ensuremath{\mathcal{T}}_0$. It follows that $\widetilde{\ensuremath{\mathcal{Z}}}'$ coincides with $\widetilde\ensuremath{\mathcal{Z}}$ and we obtain a closed immersion $\widetilde\ensuremath{\mathcal{Z}}\rightarrow \ensuremath{\mathcal{T}}_0$. 
As in \cite[Lemma 6.7]{PR} we have an exact sequence: \[\xymatrix{1\ar[r]&\widetilde{\ensuremath{\mathcal{Z}}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\ar[r]&\ensuremath{\mathcal{T}}_0(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\ar[r]&\ensuremath{\mathcal{T}}'_0(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\ar[r]&1}\] The quotient $\ensuremath{\mathcal{T}}_0/\widetilde\ensuremath{\mathcal{Z}}$ is a smooth affine commutative group scheme with the same generic fiber as $\ensuremath{\mathcal{T}}_0'$ and with the same $\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}$-points; hence by \cite[Proposition 1.7.6]{BT2} we have $\ensuremath{\mathcal{T}}_0'\cong\ensuremath{\mathcal{T}}_0/\widetilde{\ensuremath{\mathcal{Z}}}$. The result follows. \end{proof} \section{Local models of Shimura varieties} In this section we assume $F$ is a finite extension of $\ensuremath{\mathbb{Q}}_p$ with residue field $k_F$. \subsection{Local models for Weil-restricted groups}\label{sec: Local models for weil restricted groups} \subsubsection{}\label{sec: affine grassmannian}Let $K_0/F$ be a finite unramified extension. Let $P(u)\in\ensuremath{\mathcal{O}}_{K_0}[u]$ be a monic polynomial and $\underline{\ensuremath{\mathcal{G}}}$ a smooth affine group scheme over $\ensuremath{\mathcal{O}}_{K_0}[u]$. We consider the functor $\mathrm{Fl}^{P(u)}_{\underline{\ensuremath{\mathcal{G}}},0}$ on $\ensuremath{\mathcal{O}}_{K_0}$-algebras $R$ given by $$\mathrm{Fl}^{P(u)}_{\underline{\ensuremath{\mathcal{G}}},0}(R)=\{\text{iso. classes of pairs }(\ensuremath{\mathcal{E}},\beta)\},$$ where $\ensuremath{\mathcal{E}}$ is a $\underline{\ensuremath{\mathcal{G}}}$-torsor over $R[u]$ and $\beta:\ensuremath{\mathcal{E}}|_{R[u][1/P(u)]}\xrightarrow{\sim} \ensuremath{\mathcal{E}}^0$ is an isomorphism of $\underline{\ensuremath{\mathcal{G}}}$-torsors, where $\ensuremath{\mathcal{E}}^0$ denotes the trivial $\underline{\ensuremath{\mathcal{G}}}$-torsor. 
We then define the mixed characteristic affine Grassmannian $$\mathrm{Fl}^{P(u)}_{\underline{\ensuremath{\mathcal{G}}}}:=\mathrm{Res}_{\ensuremath{\mathcal{O}}_{K_0}/\ensuremath{\mathcal{O}}_{F}}\mathrm{Fl}^{P(u)}_{\underline{\ensuremath{\mathcal{G}}},0}.$$ By embedding $\underline\ensuremath{\mathcal{G}}$ into a general linear group, one deduces as in \cite[Proposition 4.1.4]{Levin} that $\mathrm{Fl}^{P(u)}_{\underline{\ensuremath{\mathcal{G}}}}$ is representable by an ind-scheme over $\ensuremath{\mathcal{O}}_F$. \subsubsection{}\label{sec: local model triple} Let $(G,\{\mu\},\ensuremath{\mathcal{G}})$ be a local model triple over $F$ in the sense of \cite[\S2.1]{HPR}. Thus:
\begin{itemize}
\item $G$ is a reductive group over $F$;
\item $\{\mu\}$ is a geometric conjugacy class of minuscule cocharacters of $G$;
\item $\ensuremath{\mathcal{G}}=\ensuremath{\mathcal{G}}_x$ for some $x\in \ensuremath{\mathcal{B}}(G,F)$ which is generic in the facet containing it.
\end{itemize}
In addition, we will often make the following assumption:
\begin{itemize}
\item[$(*)$] $G$ is isomorphic to $\prod_{i=1}^r\mathrm{Res}_{K_i/F}H_i$, where $K_i/F$ is a finite extension and $H_i$ is a reductive group over $K_i$ which splits over a tamely ramified extension of $K_i$.
\end{itemize}
When $r=1$, we simply write $G=\mathrm{Res}_{K/F}H$. If $p\geq 5$, then any adjoint group satisfies $(*)$. Using this fact, one can define local models for any group $G$ when $p\geq 5$ (see \cite[Remark 4.2.3]{Levin}), although we will not need the construction at this level of generality. \subsubsection{}\label{sec: Levin group schemes}Let $(G,\{\mu\},\ensuremath{\mathcal{G}})$ be a triple with $G\cong \mathrm{Res}_{K/F}H$ as above. By \cite[p.~172]{Prasad}, there is an identification of buildings $\ensuremath{\mathcal{B}}(G,F)\cong \ensuremath{\mathcal{B}}(H,K)$. Therefore we may identify the set of parahoric subgroups of $G(F)$ with the set of parahoric subgroups of $H(K)$; see \cite[\S4.2]{HaRi} for example.
Thus, there is a parahoric group scheme $\ensuremath{\mathcal{H}}$ over $\ensuremath{\mathcal{O}}_K$ such that $\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_F)$ is identified with $\ensuremath{\mathcal{H}}(\ensuremath{\mathcal{O}}_K)$ as subgroups of $G(F)\cong H(K)$. By \cite[Proposition 4.7]{HaRi}, we have $\ensuremath{\mathcal{G}}\cong \mathrm{Res}_{\ensuremath{\mathcal{O}}_K/\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{H}}$. If we consider $x$ as a point in $\ensuremath{\mathcal{B}}(H,K)$, then $\ensuremath{\mathcal{H}}$ is the parahoric group scheme of $H$ associated to $x$. Let $K_0$ denote the maximal unramified extension of $F$ contained in $K$ and write $\ensuremath{\mathcal{O}}_{K_0}$ (resp. $k_0$) for its ring of integers (resp. residue field). We let $\ensuremath{\mathcal{O}}_{K_0}[u^\pm]$ denote the ring $\ensuremath{\mathcal{O}}_{K_0}[u,u^{-1}]$. We fix a uniformizer $\varpi$ of $K$ and we write $Q(u)\in \ensuremath{\mathcal{O}}_{K_0}[u]$ for the Eisenstein polynomial which is the minimal polynomial for $\varpi$ over $K_0$. Then \cite[\S3,4]{Levin} constructs a smooth affine group scheme $\underline{\ensuremath{\mathcal{H}}}$ over $\ensuremath{\mathcal{O}}_{K_0}[u]$ which specializes to $\ensuremath{\mathcal{H}}$ under the map $\ensuremath{\mathcal{O}}_{K_0}[u]\rightarrow \ensuremath{\mathcal{O}}_{K},$ $u\mapsto \varpi$ and such that $$\underline{H}:=\underline{\ensuremath{\mathcal{H}}}|_{\ensuremath{\mathcal{O}}_{K_0}[u^\pm]}$$ is a reductive group scheme. Applying the construction of \S\ref{sec: affine grassmannian} we obtain the ind-scheme $\mathrm{Fl}_{\underline{\ensuremath{\mathcal{H}}}}^{Q(u)}$ over $\ensuremath{\mathcal{O}}_F$ which is ind-projective by \cite[Theorem 4.2.11]{Levin}. 
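For example, if $K/K_0$ is totally ramified of degree $e$ and the uniformizer can be chosen so that $\varpi^e=\varpi_0$ for a uniformizer $\varpi_0$ of $K_0$, then $Q(u)=u^e-\varpi_0$. In general, since $Q(u)$ is Eisenstein, we have $$Q(u)\equiv u^e \pmod{\ensuremath{\mathfrak{m}}_{K_0}},$$ where $\ensuremath{\mathfrak{m}}_{K_0}$ denotes the maximal ideal of $\ensuremath{\mathcal{O}}_{K_0}$; thus over the special fiber, inverting $Q(u)$ amounts to inverting $u$, which is the reason the special fiber of $\mathrm{Fl}^{Q(u)}_{\underline{\ensuremath{\mathcal{H}}}}$ is a partial affine flag variety in the variable $u$.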
\subsubsection{}\label{sec: Levin local models}For a $K_0$-algebra $R$, the completion $\widehat{R[u]}$ of $R[u]$ along $Q(u)$ contains the completion of $K_0[u]$ along $Q(u)$. The latter ring may be identified with $K[[t]]$ by a map sending $t$ to $Q(u)$ and inducing the identity on residue fields. Then $\widehat{R[u]}$ may be identified with $(K\otimes_{K_0} R)[[t]]$ by sending $t$ to $Q(u)$. This induces an isomorphism between the generic fiber of $\mathrm{Fl}_{\underline{\ensuremath{\mathcal{H}}},0}^{Q(u)}$ and the affine Grassmannian $\mathrm{Gr}_{\mathrm{Res}_{K/K_0}H}$ (cf. \cite[Corollary 3.5]{HaRi}), and hence an isomorphism between the generic fiber of $\mathrm{Fl}^{Q(u)}_{\underline{\ensuremath{\mathcal{H}}}}$ and $\mathrm{Gr}_{\mathrm{Res}_{K/F}H}\cong\mathrm{Gr}_{G}$ (recall that this is the fpqc sheaf associated to the functor on $F$-algebras $R$ given by $R\mapsto G(R((t)))/G(R[[t]])$). The special fiber of $\mathrm{Fl}_{\underline{\ensuremath{\mathcal{H}}}}^{Q(u)}$ can be identified with the partial affine flag variety $\mathrm{Res}_{k_0/k_F}\mathcal{FL}_{\underline{\ensuremath{\mathcal{H}}}_{k_0[[t]]}}$; here $\mathcal{FL}_{\underline{\ensuremath{\mathcal{H}}}_{k_0[[t]]}}$ is the fpqc sheaf associated to the functor $$R\mapsto \underline{\ensuremath{\mathcal{H}}}_{k_0[[t]]}(R((t)))/\underline{\ensuremath{\mathcal{H}}}_{k_0[[t]]}(R[[t]])$$ on $k_0$-algebras. A representative $\mu$ of $\{\mu\}$ over $\overline{F}$ determines an element of $G(\overline{F}((t)))$ and hence a point $e_\mu := \mu(t)\in \mathrm{Gr}_G(\overline{F})$. The Schubert variety $S_\mu$ is then defined to be the closure of the $G(\overline{F}[[t]])$-orbit of $e_\mu$ in $\mathrm{Gr}_G$. The conjugacy class $\{\mu\}$ has a minimal field of definition $E$ known as the (local) reflex field, and the Schubert variety $S_\mu\subset \mathrm{Gr}_G$ is defined over $E$.
The local model $\ensuremath{\mathrm{M}}_{\ensuremath{\mathcal{G}},\{\mu\}}^{\mathrm{loc}}$ is defined to be the Zariski closure of $S_\mu$ in $\mathrm{Fl}^{Q(u)}_{\underline{\ensuremath{\mathcal{H}}}}\otimes_{\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{O}}_E$. \subsubsection{} In general, if $G\cong\prod_{i=1}^r\mathrm{Res}_{K_i/F}H_i$ as in (*), we define $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}$ to be the product $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}:=\prod_{i=1}^r\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}_i,\{\mu_i\}}\otimes_{\ensuremath{\mathcal{O}}_{E_i}}\ensuremath{\mathcal{O}}_E$. Here the parahoric $\ensuremath{\mathcal{G}}_i$ of $\mathrm{Res}_{K_i/F}H_i$ is determined by $\ensuremath{\mathcal{G}}\cong\prod_{i=1}^r\ensuremath{\mathcal{G}}_i$, $\{\mu_i\}$ is the $\mathrm{Res}_{K_i/F}H_i$ factor of the $G$-conjugacy class $\{\mu\}$, and $E_i$ (resp. $E$) is the field of definition of $\{\mu_i\}$ (resp. $\{\mu\}$). The following theorem follows immediately from \cite[Theorem 4.2.7]{Levin}. \begin{thm}\label{thm: Levin} Suppose $G$ satisfies $(*)$ and that $p$ does not divide the order of the algebraic fundamental group $\pi_1(G_{\mathrm{der}})$ of the derived group $G_{\mathrm{der}}$ of $G$. Then the scheme $\ensuremath{\mathrm{M}}_{\ensuremath{\mathcal{G}},\{\mu\}}^{\mathrm{loc}}$ is normal with reduced special fiber. Moreover each geometric irreducible component of $\ensuremath{\mathrm{M}}_{\ensuremath{\mathcal{G}},\{\mu\}}^{\mathrm{loc}}\otimes_{\ensuremath{\mathcal{O}}_E} k$ is normal and Cohen--Macaulay. \end{thm}\qed \begin{remark} (1) Note that the input for the constructions in this subsection is a parahoric group scheme $\mathcal{H}$ over $\ensuremath{\mathcal{O}}_K$ and a finite extension $K/F$. 
When $K=F$, the group scheme $\underline{\ensuremath{\mathcal{H}}}$ and the mixed characteristic affine Grassmannian $\mathrm{Fl}^{u-\varpi}_{\underline{\ensuremath{\mathcal{H}}}}$ agree with those constructed by Pappas--Zhu \cite{PZ}. (2) Using the argument in \cite[Proposition 4.2.4, Remark 4.2.5]{Levin}, one can show that the local model $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}$ depends only on $\ensuremath{\mathcal{G}}$ and $\{\mu\}$ and not on the choice of extension $K$ or the uniformizer $\varpi$. \end{remark} \subsubsection{} We may identify the geometric special fiber of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}$ with a certain union of Schubert varieties corresponding to the $\mu$-admissible set $\ensuremath{\mathrm{Adm}}(\{\mu\})_J$ defined in \S\ref{sec: mu admissible set}; we explain this in the remainder of \S\ref{sec: Local models for weil restricted groups}. To do this, we first explain the relationship between the Iwahori Weyl group of $G$ and a certain reductive group over $k_F[[u]]$. Let $S$ denote a maximal $\breve{K}$-split torus of $H$ defined over $K$ such that $x$ lies in a $\sigma_K$-invariant facet in the apartment $\ensuremath{\mathcal{A}}(H,S,\breve{K})$ corresponding to $S$ (here $\sigma_K$ denotes the Frobenius element of $\mathrm{Aut}(\ensuremath{\breve{K}}/K)$). Then the construction in \cite[Proposition 3.1.2]{Levin} provides us with a maximal $\mathcal{O}_{\breve{K}_0}[u^\pm]$-split torus $\underline{S}$ of $\underline{H}$ defined over $\ensuremath{\mathcal{O}}_{{K}_0}[u^\pm]$ which extends $S$. The choice of $\underline{S}$ gives us an identification of apartments \begin{equation}\label{eqn: identificaton of apartments} \ensuremath{\mathcal{A}}(\underline{H}_{\kappa((u))},\underline{S}_{\kappa((u))},\kappa((u)))\cong \ensuremath{\mathcal{A}}(H,S,\breve{K})\end{equation} for $\kappa=\breve{K}_0,k$.
Moreover there is an identification of Iwahori Weyl groups \begin{equation}\label{eqn: id Iwahori Weyl groups} W_{\underline{H}_{\kappa((u))}}\cong W_{H_{\ensuremath{\breve{K}}}} \end{equation} for $\underline{H}_{\kappa((u))}$ and $H_{\breve{K}}$ such that the identification (\ref{eqn: identificaton of apartments}) is equivariant for the actions of these groups on the respective apartments. We let $$x_{\kappa((u))}\in \ensuremath{\mathcal{A}}(\underline{H}_{\kappa((u))},\underline{S}_{\kappa((u))},\kappa((u)))$$ be the point corresponding to $x$ under the identification (\ref{eqn: identificaton of apartments}). Then the group scheme $\underline{\ensuremath{\mathcal{H}}}/\ensuremath{\mathcal{O}}_{K_0}[u]$ has the property that its specialization to $\kappa[[u]]$ is isomorphic to the parahoric group scheme corresponding to $x_{\kappa((u))}$. \subsubsection{}\label{sec: identification of Iwahori Weyl group}Let $\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}$ denote the group scheme $\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}:=\mathrm{Res}_{k_0[[u]]/k_F[[u]]}\underline{\ensuremath{\mathcal{H}}}_{k_0[[u]]}$ and we write $\underline{G}_{k_F((u))}$ for its generic fiber. We let $\underline{\ensuremath{\mathcal{G}}}_{k[[u]]}$ (resp. $\underline{G}_{k((u))}$) denote the base change of $\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}$ (resp. $\underline{G}_{k_F((u))}$) to $k[[u]]$ (resp. $k((u))$). Then by construction, the special fiber of $\mathrm{Fl}_{\underline{\ensuremath{\mathcal{H}}}}^{Q(u)}$ is identified with the usual partial affine flag variety associated to $\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}$; here we use \cite[Corollary 3.6 and Lemma 3.7]{HaRi} for the identification $\mathrm{Res}_{k_0/k_F}\mathcal{FL}_{\underline\ensuremath{\mathcal{H}}_{k_0[[u]]}}\cong \mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}}$. 
The isomorphism (\ref{eqn: id Iwahori Weyl groups}) induces an isomorphism of Iwahori Weyl groups \begin{equation}\label{eqn: id of Iwahori Weyl groups 2}W_G\cong W_{\underline{G}_{k_F((u))}}.\end{equation}Indeed we have identifications $$W_G\cong \prod_{\psi: K_0\rightarrow \ensuremath{\breve{F}}} W_H,\ \ \ W_{\underline{G}_{k_F((u))}}\cong \prod_{\psi:k_0\rightarrow k}W_{\underline{H}_{k_F((u))}},$$ where $\psi$ runs over $F$-embeddings (resp. $k_F$-embeddings). Identifying each embedding $k_0\rightarrow k$ with its unique lift $K_0\rightarrow \ensuremath{\breve{F}}$ and using (\ref{eqn: id Iwahori Weyl groups}), we obtain the identification (\ref{eqn: id of Iwahori Weyl groups 2}). Similarly, we obtain an identification of apartments \begin{equation} \label{eqn: id of apartments} \ensuremath{\mathcal{A}}(\underline{G}_{k_F((u))},\underline{S}_{k((u))}',k((u)))\cong \ensuremath{\mathcal{A}}(G,S',\ensuremath{\breve{F}}). \end{equation} Here $S'$ is the maximal $\ensuremath{\breve{F}}$-split torus of $G$ determined by the maximal $\ensuremath{\breve{K}}$-split torus of $H$ as in \cite[\S4.2]{HaRi}, and $\underline{S}_{k((u))}'$ is the maximal $k((u))$-split torus of $\underline{G}_{k_F((u))}$ obtained from the maximal $\ensuremath{\mathcal{O}}_{\ensuremath{\breve{K}}_0}[u^\pm]$-split torus $\underline{S}$ of $\underline{H}$. Moreover the identification (\ref{eqn: id of apartments}) is compatible with the action of the Iwahori Weyl groups under the identification (\ref{eqn: id of Iwahori Weyl groups 2}). \subsubsection{}\label{subsubsec:fnfielddefns}We fix a $\sigma$-invariant alcove $\ensuremath{\mathfrak{a}}\subset \ensuremath{\mathcal{A}}(G,S',\ensuremath{\breve{F}})$ whose closure contains $x$. This determines a set of simple reflections $\ensuremath{\mathbb{S}}$ for $W_G$ and the parahoric $\ensuremath{\mathcal{G}}$ is a standard parahoric for this choice of alcove; hence it corresponds to a $\sigma$-stable subset $J\subset\ensuremath{\mathbb{S}}$.
We let $\underline{\ensuremath{\mathfrak{a}}}$ denote the alcove in $\ensuremath{\mathcal{A}}(\underline{G}_{k_F((u))},\underline{S}_{k((u))}',k((u)))$ corresponding to $\ensuremath{\mathfrak{a}}$ and $\underline{\ensuremath{\mathbb{S}}}$ the set of simple reflections in the walls of $\underline\ensuremath{\mathfrak{a}}$. There is an identification $\ensuremath{\mathbb{S}}\cong\underline\ensuremath{\mathbb{S}}$ and we let $\underline{J}\subset\underline\ensuremath{\mathbb{S}}$ be the subset corresponding to $J\subset\ensuremath{\mathbb{S}}$; then $\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}$ is the standard parahoric group scheme for $\underline{G}_{k_{F}((u))}$ associated to $\underline{J}$. Writing $W_J$ (resp. $W_{\underline{J}}$) for the finite group generated by the reflections in $J$ (resp. $\underline{J}$), we obtain an identification $W_J\cong W_{\underline{J}}$, and an identification \begin{equation}\label{eqn: id Iwahori double coset}W_J\backslash W_G/W_J\cong W_{\underline{J}}\backslash W_{\underline{G}_{k_F((u))}}/W_{\underline{J}}.\end{equation} In particular we may consider $\ensuremath{\mathrm{Adm}}(\{\mu\})_J$ as a subset of $W_{\underline{J}}\backslash W_{\underline{G}_{k_F((u))}}/W_{\underline{J}}$. For an element $w\in W_{G}$, we write $\underline{{w}}\in W_{\underline{G}_{k_F((u))}}$ for the corresponding element and $\underline{\dot{w}}\in \underline{G}_{k_F((u))}(k((u)))$ for a lift of $\underline{w}$. We let $\underline{S}_w$ denote the closure of the $\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}(k[[u]])$-orbit of $\underline{\dot{w}}$ considered as a point of the partial affine flag variety $\mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}}\otimes_{k_F}k$ for $\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}$.
\subsubsection{}If $G\cong\prod_{i=1}^r\mathrm{Res}_{K_i/F}H_i$, we may define $\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}:=\prod_{i=1}^r\underline{\ensuremath{\mathcal{G}}}_{i,k_F[[u]]}$, where the $\underline{\ensuremath{\mathcal{G}}}_{i,k_F[[u]]}$ are the $k_F[[u]]$-group schemes constructed in the previous paragraphs using the groups $\mathrm{Res}_{K_i/F}H_i$. We let $\underline{G}_{i,k_F((u))}$ denote the generic fiber of $\underline{\ensuremath{\mathcal{G}}}_{i,k_F[[u]]}$ and we define $\underline{G}_{k_F((u))}:=\prod_{i=1}^r\underline{G}_{i,k_F((u))}$. Since the constructions of Iwahori Weyl groups and apartments are compatible with products, the above discussion extends to this case. In particular, we have an identification of double cosets for the Iwahori Weyl group (\ref{eqn: id Iwahori double coset}), and for $w\in W_J\backslash W_G/W_J$ we have the associated Schubert variety $\underline{S}_w$ in $\mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}}\otimes_{k_F}k$. Applying \cite[Proposition 4.3.2]{Levin} to each of the factors $\mathrm{Res}_{K_i/F}H_i$, we obtain the following theorem. \begin{thm}\label{thm: special fiber of local models and admissible set} Let $G\cong\prod_{i=1}^r\mathrm{Res}_{K_i/F}H_i$ and assume that $p\nmid|\pi_1(G_{\mathrm{der}})|$. We have an identification $$\mathrm{M}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\otimes_{\ensuremath{\mathcal{O}}_E}k\cong \bigcup_{w\in\ensuremath{\mathrm{Adm}}(\{\mu\})_J}\underline{S}_w$$ as closed subschemes of $\mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}}\otimes_{k_F}k$. \end{thm}\qed \subsection{Embedding local models}\label{sec: lattice chains} \subsubsection{}\label{sec: Pappas-Zhu lattices chains} We recall the construction of certain lattice chains of $\ensuremath{\mathcal{O}}_{K_0}[u]$-modules from \cite[\S5.2.1]{PZ}.
Let $\underline{W}=\ensuremath{\mathcal{O}}_{K_0}[u]^{n}$ and $W=\underline{W}\otimes_{\ensuremath{\mathcal{O}}_{K_0}[u],u\mapsto 0}\ensuremath{\mathcal{O}}_{K_0} \cong \ensuremath{\mathcal{O}}_{K_0}^n$. Write ${W}=\oplus_{i=0}^rV_i$ for some $r$ and direct summands $V_i$ of $W$, and let $U_i=\oplus_{j\geq i}V_j$, which forms a flag of subspaces of ${W}$; we write $P\subset \ensuremath{\mathrm{GL}}({W})$ for the corresponding parabolic. For $i=0,\dotsc,r-1$ we let $\underline{W}_i\subset \underline{W}$ denote the inverse image of $U_i$ under $\underline{W}\rightarrow{W}$; the sequence $\underline{W}_i$ satisfies $$u\underline{W}\subset \underline{W}_{r-1}\subset\dotsc\subset\underline W_0= \underline{W}.$$ We extend the sequence to $\mathbb{Z}$ by letting $\underline{W}_{i+kr}=u^k\underline{W}_{i}$ and we write $\underline{W}_{\bullet}$ for the resulting chain indexed by $\mathbb{Z}$. As in \cite[\S5.2.1]{PZ}, the dilatation $\ensuremath{\mathrm{GL}}(\underline{W}_\bullet)$ of $\ensuremath{\mathrm{GL}}(\underline{W})$ along $P$ can be identified with the closed subscheme of $\prod_{i=0}^{r-1}\ensuremath{\mathrm{GL}}(\underline{W}_i)$ consisting of tuples which respect the maps $\underline{W}_i\rightarrow \underline{W}_{i+1}$. Let $\mathcal{GL}$ be the parahoric group scheme over $\ensuremath{\mathcal{O}}_K$ of $\ensuremath{\mathrm{GL}}_n(K)$ corresponding to the stabilizer of the lattice chain $\underline{W}_i\otimes_{\ensuremath{\mathcal{O}}_{K_0}[u],u\mapsto \varpi}\ensuremath{\mathcal{O}}_K$ in $K^n$. Then $\ensuremath{\mathrm{GL}}(\underline{W}_\bullet)$ is isomorphic to the $\ensuremath{\mathcal{O}}_{K_0}[u]$-group scheme $\underline{\mathcal{GL}}$ associated to $\mathcal{GL}$ and the extension $K/F$ in \S\ref{sec: Levin group schemes}.
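For instance, take $n=2$, $r=2$ and $W=V_0\oplus V_1\oplus V_2$ with $V_0=\ensuremath{\mathcal{O}}_{K_0}e_1$, $V_1=\ensuremath{\mathcal{O}}_{K_0}e_2$ and $V_2=0$. Then $\underline{W}_0=\underline{W}$ and $\underline{W}_1=u\ensuremath{\mathcal{O}}_{K_0}[u]e_1\oplus\ensuremath{\mathcal{O}}_{K_0}[u]e_2$, so the chain is $$u\underline{W}\subset\underline{W}_1\subset\underline{W}_0=\underline{W},$$ which specializes under $u\mapsto\varpi$ to the standard Iwahori lattice chain $$\varpi\ensuremath{\mathcal{O}}_K^2\subset\varpi\ensuremath{\mathcal{O}}_Ke_1\oplus\ensuremath{\mathcal{O}}_Ke_2\subset\ensuremath{\mathcal{O}}_K^2$$ in $K^2$; in this case $\mathcal{GL}$ is an Iwahori group scheme of $\ensuremath{\mathrm{GL}}_2(K)$.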
Since every parahoric of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_n(K)$ arises in this way, this gives an explicit description of the $\ensuremath{\mathcal{O}}_{K_0}[u]$-group scheme $\underline{\mathcal{GL}}$ attached to any parahoric of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_n(K)$. \subsubsection{}\label{sec: new Hodge embedding}Let $(G,\{\mu\},\ensuremath{\mathcal{G}})$ be a local model triple as in \S\ref{sec: local model triple} with $G\cong \mathrm{Res}_{K/F}H$. Let $\rho:G\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V)$ be a faithful minuscule representation, where $V$ is a finite-dimensional vector space over $F$, such that $\rho\circ\mu$ is conjugate to a standard (i.e. having weights $0,-1$) minuscule coweight and such that $G$ contains the scalars. We will show that we may replace $\rho$ by a different faithful minuscule representation $\rho':G\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$ such that $\rho'$ induces a closed immersion of local models $$\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_{W},\{\rho'\circ\mu\}}\otimes_{\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{O}}_E$$ where $\mathcal{GL}_{W}$ is a certain parahoric group scheme of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$. Base changing $\rho$ to $K$, we obtain a map $H\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K)$ given by composing $$\rho_K:G_K\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K)$$ with the diagonal map $H\rightarrow G_K$. Let $W$ denote the underlying $F$-vector space corresponding to $V_K$.
We consider the composition $$\rho':G=\mathrm{Res}_{K/F}H\xrightarrow{\rho_1} \mathrm{Res}_{K/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K)\xrightarrow{\rho_2} \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$$where $\rho_1$ is obtained by applying restriction of scalars to the map $H\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K)$, and $\rho_2$ is induced by the restriction of structure functor from $K$-vector spaces to $F$-vector spaces. \subsubsection{}\label{sec: map of buildings}Since $H$ splits over a tame extension of $K$ and $H\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K)$ is a minuscule representation, it follows from \cite[\S1.2]{KP} that there exists an $H(K)$-equivariant toral embedding of buildings \begin{equation}\label{eqn: embedding of buildings over K}\ensuremath{\mathcal{B}}(H,K)\rightarrow \ensuremath{\mathcal{B}}(\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K),K).\end{equation} There are canonical identifications of $\ensuremath{\mathcal{B}}(G,F)$ (resp. $\ensuremath{\mathcal{B}}(\mathrm{Res}_{K/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K),F)$) with $\ensuremath{\mathcal{B}}(H,K)$ (resp.
$\ensuremath{\mathcal{B}}(\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K),K)$); we thus obtain a $G(F)$-equivariant toral embedding of buildings \begin{equation} \label{eqn: embedding of buildings} \ensuremath{\mathcal{B}}(G,F)\rightarrow \ensuremath{\mathcal{B}}(\mathrm{Res}_{K/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K),F).\end{equation} Similarly, restriction of structure induces a $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K)$-equivariant map of buildings $$\ensuremath{\mathcal{B}}(\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K),K)\cong \ensuremath{\mathcal{B}}(\mathrm{Res}_{K/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K),F)\rightarrow \ensuremath{\mathcal{B}}(\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W),F).$$ Let $y$ (resp.~$z$) denote the image of $x$ in $\ensuremath{\mathcal{B}}(\mathrm{Res}_{K/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K),F)$ (resp.~$\ensuremath{\mathcal{B}}(\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W),F)$). We write $\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_{K/F}$ (resp. $\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_W$) for the parahoric group schemes over $\ensuremath{\mathcal{O}}_F$ for $\mathrm{Res}_{K/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K)$ (resp. $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$) corresponding to $y$ (resp. $z$), and we write $\{\mu_{K/F}\}$ and $\{\mu_W\}$ for the respective conjugacy classes of cocharacters of $\mathrm{Res}_{K/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K)$ and $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$ induced by $\{\mu\}$. If we write $\mathcal{GL}$ for the parahoric $\ensuremath{\mathcal{O}}_K$-group scheme of $\ensuremath{\mathrm{GL}}(V_K)$ associated to $y$, then $\mathcal{GL}_{K/F}:=\mathrm{Res}_{\ensuremath{\mathcal{O}}_{K}/\ensuremath{\mathcal{O}}_F}\mathcal{GL}$.
The natural map of group schemes $\widetilde{\ensuremath{\mathcal{G}}}\rightarrow \mathcal{GL}_{K/F}$ is a closed immersion since this map is obtained by Weil restriction of a closed immersion between $\ensuremath{\mathcal{O}}_K$-group schemes as in \cite[Proposition 1.3.3]{KP}. We will need the following lemma. \begin{lemma}\label{lem: closed immersion of group scheme GL} Let $K$ be a non-archimedean local field (possibly of equal characteristic) and $K'/K$ a finite (not necessarily separable) extension. Let $V$ be a vector space over $K'$ and let $W$ denote $V$ considered as a vector space over $K$. Let $\mathcal{GL}$ be a parahoric group scheme of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V)$ corresponding to the stabilizer of an $\ensuremath{\mathcal{O}}_{K'}$-lattice chain $\{\Lambda_i\}_{i=1,\dotsc, r}$ in $V$. We write $\{\Lambda_{W,i}\}_{i=1,\dotsc,r}$ for the associated $\ensuremath{\mathcal{O}}_K$-lattice chain of $W$ and we let $\mathcal{GL}_W$ denote the parahoric group scheme of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$ stabilizing $\{\Lambda_{W,i}\}_{i=1,\dotsc,r}$. Then the natural closed immersion $\mathrm{Res}_{K'/K}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V)\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$ extends to a closed immersion of $\ensuremath{\mathcal{O}}_K$-group schemes $$\mathrm{Res}_{\ensuremath{\mathcal{O}}_{K'}/\ensuremath{\mathcal{O}}_K}\mathcal{GL}\hookrightarrow \mathcal{GL}_W.$$ \end{lemma} \begin{proof}The group scheme $\mathcal{GL}$ is the schematic closure of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V)$, embedded diagonally via $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V)\rightarrow \prod_{i=1}^r \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V)$, in $\prod_{i=1}^r\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\Lambda_i)$.
Similarly $\mathcal{GL}_W$ is the schematic closure of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$, embedded diagonally via $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)\rightarrow \prod_{i=1}^r\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$, in $\prod_{i=1}^r\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\Lambda_{W,i})$. Thus we have a commutative diagram of $\ensuremath{\mathcal{O}}_{K}$-schemes \[\xymatrix{\mathrm{Res}_{\ensuremath{\mathcal{O}}_{K'}/\ensuremath{\mathcal{O}}_K}\mathcal{GL} \ar[r]\ar[d] & \mathcal{GL}_W\ar[d]\\ \prod_{i=1}^r\mathrm{Res}_{\ensuremath{\mathcal{O}}_{K'}/\ensuremath{\mathcal{O}}_K}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\Lambda_i)\ar[r] &\prod_{i=1}^r\ensuremath{\mathrm{GL}}(\Lambda_{W,i})} \] where the vertical arrows are closed immersions. It therefore suffices to show the bottom arrow is a closed immersion, and hence we reduce to proving the lemma when $r=1$, i.e. when $\mathcal{GL}$ is the stabilizer $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\Lambda)$ of a single $\ensuremath{\mathcal{O}}_{K'}$-lattice $\Lambda\subset V$. This case can be proved, for example, by explicitly writing down the equations for the morphism: an $R$-point of $\mathrm{Res}_{\ensuremath{\mathcal{O}}_{K'}/\ensuremath{\mathcal{O}}_K}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\Lambda)$ is an $R$-linear automorphism of $\Lambda\otimes_{\ensuremath{\mathcal{O}}_K}R$ (with $\Lambda$ viewed as an $\ensuremath{\mathcal{O}}_K$-lattice in $W$) which commutes with the $\ensuremath{\mathcal{O}}_{K'}$-action, and commuting with this action is a closed condition on $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\Lambda_W)$. \end{proof} By Lemma \ref{lem: closed immersion of group scheme GL}, the map $\mathcal{GL}_{K/F}\rightarrow \mathcal{GL}_W$ is a closed immersion. Composing with $\widetilde{\ensuremath{\mathcal{G}}}\rightarrow \mathcal{GL}_{K/F}$ we obtain a closed immersion of $\ensuremath{\mathcal{O}}_F$-group schemes $\widetilde{\ensuremath{\mathcal{G}}}\rightarrow \mathcal{GL}_{W}$ extending $\rho'$. \subsubsection{}\label{subsec: Grassmannian}By our assumption on $\rho\circ\mu$, $\mu_W$ is conjugate to a standard minuscule coweight $$a\mapsto \mathrm{diag}(1^{(n-d)},(a^{-1})^{(d)})$$of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$, where $n=\dim_F W$. The generic fiber of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\mu_W\}}$ is the Grassmannian $\mathrm{Gr}(d,n)$ of $d$-dimensional subspaces of $W$.
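For example, when $d=1$ the coweight above is $a\mapsto \mathrm{diag}(1^{(n-1)},a^{-1})$, and the generic fiber of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\mu_W\}}$ is $\mathrm{Gr}(1,n)\cong\mathbb{P}^{n-1}$, the projective space of lines in $W$.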
We let $X_\mu$ denote the generic fiber of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}$; it can be identified with the $E$-variety $G/P_\mu$, where $P_\mu$ is the parabolic subgroup of $G$ corresponding to $\mu$. Then the representation $\rho':G\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$ induces a closed immersion \begin{equation}\label{eqn: embedding local models generic fiber}X_\mu\rightarrow \mathrm{Gr}(d,n)\otimes_{\ensuremath{\mathcal{O}}_F} E.\end{equation} \begin{prop}\label{prop: local model embedding main}The map (\ref{eqn: embedding local models generic fiber}) extends to a closed immersion of local models \begin{equation}\label{eqn: local model embedding}\rho'^{\mathrm{loc}}:\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\rightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{ \mu_W\}}\otimes_{\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{O}}_E.\end{equation} \end{prop} \begin{proof} Recall $\rho'$ factors as $\rho_2\circ\rho_1$; it suffices to show there are closed immersions $$\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_{K/F},\{\mu_{K/F}\}}\otimes_{\ensuremath{\mathcal{O}}_{E'}}\ensuremath{\mathcal{O}}_E\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_W,\{\mu_W\}}\otimes_{\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{O}}_E$$ where the first map is induced by $\rho_1$ and the second map is induced by $\rho_2$. 
Here, $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_{K/F},\{\mu_{K/F}\}}$ is the local model attached to the $\ensuremath{\mathcal{O}}_K$-group scheme $\mathcal{GL}$ and the extension $K/F$ as in \S\ref{sec: Levin group schemes}, and $E'$ is the local reflex field for the $\mathrm{Res}_{K/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K)$-conjugacy class of cocharacters $\{\mu_{K/F}\}$. Step (1): $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_{K/F},\{\mu_{K/F}\}}\otimes_{\ensuremath{\mathcal{O}}_{E'}}\ensuremath{\mathcal{O}}_E$. As in \cite[Proposition 2.3.7]{KP}, it follows from descent that it suffices to show that such a closed immersion exists upon base change to $\ensuremath{\breve{E}}$. Thus we need to show that there exists a closed immersion $$\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}_{\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}},\{\mu\}}\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_{K/F,\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}},\{\mu_{K/F}\}}\otimes_{\ensuremath{\mathcal{O}}_{\breve{E}'}}\ensuremath{\mathcal{O}}_{\breve{E}}$$ where $\ensuremath{\mathcal{G}}_{\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}}$ (resp. $\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_{K/F,\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}}$) denotes the corresponding parahoric group schemes for $G_{\ensuremath{\breve{F}}}$ (resp. $\mathrm{Res}_{K/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K)\otimes_F \ensuremath{\breve{F}}$) and these are the analogues of the local models defined over $\ensuremath{\breve{F}}$. 
We have isomorphisms $$G_{\ensuremath{\breve{F}}}\cong \prod_{\tau:{K_0}\rightarrow \ensuremath{\breve{F}}}\mathrm{Res}_{\ensuremath{\breve{K}}/\ensuremath{\breve{K}}_0}H_{\ensuremath{\breve{K}}},\ \ \ \mathrm{Res}_{K/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K)\otimes_F \ensuremath{\breve{F}}\cong \prod_{\tau:{K_0}\rightarrow \ensuremath{\breve{F}}}\mathrm{Res}_{\ensuremath{\breve{K}}/\ensuremath{\breve{K}}_0}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_{\ensuremath{\breve{K}}})$$ and the embedding $\rho_{1,\ensuremath{\breve{F}}}$ is given by the product embedding; it suffices to consider each factor separately. Thus upon relabeling we may assume $G_{\ensuremath{\breve{F}}}\cong \mathrm{Res}_{\ensuremath{\breve{K}}/\ensuremath{\breve{K}}_0}H_{\ensuremath{\breve{K}}}$ and that $\rho_1$ is induced by restriction of scalars from an embedding $$\phi:H_{\ensuremath{\breve{K}}}\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_{\ensuremath{\breve{K}}}).$$ For notational simplicity, we write $\underline{\mathcal{H}}$ for the $\ensuremath{\mathcal{O}}_{\breve K_0}[u]$-group scheme associated to $\ensuremath{\mathcal{H}}_{\breve K}$. The same proof as \cite[Proposition 8.1]{PZ} shows that it suffices to exhibit a lattice chain $\underline{V}_{\bullet}$ in $\ensuremath{\mathcal{O}}_{\ensuremath{\breve{K}}_0}[u]^{n}$ satisfying the following two conditions: \begin{itemize} \item $\phi$ extends to a homomorphism of $\ensuremath{\mathcal{O}}_{\ensuremath{\breve{K}}_0}[u]$-group schemes $$\phi_{\ensuremath{\mathcal{O}}_{\ensuremath{\breve{K}}_0}[u]}:\underline{\ensuremath{\mathcal{H}}}\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\underline{V}_{\bullet}).$$ \item The homomorphism $$\underline{\mathcal{H}}_{k[[u]]}:=\underline{\ensuremath{\mathcal{H}}}\otimes_{\ensuremath{\mathcal{O}}_{\breve K_0}[u]}k[[u]]\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\underline{V}_{\bullet}\otimes_{\ensuremath{\mathcal{O}}_{\breve K_0}[u]}k[[u]])$$ is a locally closed immersion, and the Zariski closure of $\underline{H}_{k((u))}:=\underline{H}\otimes_{\ensuremath{\mathcal{O}}_{\breve K_0}[u]} k((u))$ in $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\underline{V}_{\bullet}\otimes_{\ensuremath{\mathcal{O}}_{\breve K_0}[u]} k[[u]])$ is a smooth group scheme $\ensuremath{\mathcal{P}}'$ whose identity component may be identified with $\underline{\ensuremath{\mathcal{H}}}_{k[[u]]}$. \end{itemize} Indeed, under these assumptions, the proof in \cite[Proposition 8.1]{PZ} shows that extending torsors along $\phi_{\ensuremath{\mathcal{O}}_{\breve K_0}[u]}$ gives a morphism $\mathrm{Fl}_{\underline{\ensuremath{\mathcal{H}}}}^{Q(u)}\rightarrow\mathrm{Fl}_{\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\underline{V}_{\bullet})}^{Q(u)}$ which restricts to a closed immersion $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}_{\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}},\{\mu\}}\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_{K/F,\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}},\{\mu_{K/F}\}}\otimes_{\ensuremath{\mathcal{O}}_{\breve{E}'}}\ensuremath{\mathcal{O}}_{\breve{E}}$. The construction of the map $\phi_{\ensuremath{\mathcal{O}}_{\ensuremath{\breve{K}}_0}[u]}$ follows, with some minor modifications, from the same argument as \cite[Proposition 2.3.7]{KP}; as in the construction of the group scheme $\underline{H}$ in \cite{Levin}, the key point is to realize the tame descent over $\ensuremath{\mathcal{O}}_{\ensuremath{\breve{K}}_0}[u^\pm]$ as opposed to $\ensuremath{\mathcal{O}}_{\ensuremath{\breve{K}}}[u^\pm]$ in \cite{KP}.
We briefly sketch their argument, pointing out what modifications are needed in our situation. Let $\widetilde{\breve{K}}/\breve K$ be a splitting field for $H_{\breve{K}}$, which we may assume is finite, tamely ramified and Galois. We let $\widetilde{e}:=[\widetilde{\breve{K}}:\breve K]$ and fix a uniformizer $\widetilde{\varpi}$ of $\widetilde{\breve{K}}$. The action of $\mathrm{Gal}(\widetilde{\breve{K}}/\breve{K})$ extends to an action on $\ensuremath{\mathcal{O}}_{\breve{K}_0}[w^\pm]/\ensuremath{\mathcal{O}}_{\breve{K}_0}[u^\pm]$, where $w^{\widetilde{e}}=u$. Using the argument in \cite[Proposition 2.3.7, Step 1]{KP}, we obtain a representation $$\phi_{\ensuremath{\mathcal{O}}_{{\breve K}_0}[u^\pm]}:\underline{\ensuremath{\mathcal{H}}}_{\ensuremath{\mathcal{O}}_{{\breve K}_0}[u^\pm]}\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_n(\ensuremath{\mathcal{O}}_{\breve{K}_0}[u^\pm])$$ which extends $\phi$ under the map $u\mapsto \varpi$; this is constructed by descending along the cover $\ensuremath{\mathcal{O}}_{\breve{K}_0}[w^\pm]/\ensuremath{\mathcal{O}}_{\breve{K}_0}[u^\pm]$. (In {\em loc.~cit.}, they apply the argument to the cover $\ensuremath{\mathcal{O}}_{\breve{K}}[w^\pm]/\ensuremath{\mathcal{O}}_{\breve{K}}[u^\pm]$ to obtain a representation over $\ensuremath{\mathcal{O}}_{\breve{K}}[u^\pm]$.) Here, the specialization of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_{n}(\ensuremath{\mathcal{O}}_{\breve{K}_0}[u^\pm])$ along $u\mapsto \varpi$ is identified with $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_{\breve{K}})$ via a suitable choice of basis for $V_{\breve{K}}$. The construction of $\underline{V}_{\bullet}$ then proceeds in the same way as \cite[Proposition 2.3.7, Step 1]{KP}. We write $T$ for the diagonal torus of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_n$; then the basis of $V_{\breve{K}}$ is chosen so that $y\in \ensuremath{\mathcal{A}}(\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_n,T,\breve{K})$.
Using the identification of apartments \begin{equation}\label{eqn: id of apartment GLn} \ensuremath{\mathcal{A}}(\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_n,T,\ensuremath{\breve{K}})\cong \ensuremath{\mathcal{A}}(\underline{\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}}_{n,\ensuremath{\breve{K}}_0((u))},\underline{T}_{\ensuremath{\breve{K}}_0((u))},\ensuremath{\breve{K}}_0((u))) \end{equation} we obtain a lattice chain $\underline{N}_\bullet$ of $\breve{K}_0[[u]]$-modules in $\breve{K}_0((u))^n$ corresponding to the image of $y$ in $\ensuremath{\mathcal{A}}(\underline{\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}}_{n,\ensuremath{\breve{K}}_0((u))},\underline{T}_{\ensuremath{\breve{K}}_0((u))},\ensuremath{\breve{K}}_0((u)))$. If we now define $\underline{V}_\bullet:=\underline{N}_{\bullet}\cap \ensuremath{\mathcal{O}}_{\breve{K}_0}[u^\pm]^n$, then $\phi_{\ensuremath{\mathcal{O}}_{{\breve K}_0}[u^\pm]}$ extends to a map $\phi_{\ensuremath{\mathcal{O}}_{{\breve K}_0}[u]}:\underline{\ensuremath{\mathcal{H}}}\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\underline{V}_\bullet)$ satisfying the required conditions. Step (2): $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_{K/F},\{\mu_{K/F}\}}\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_W,\{\mu_W\}}\otimes_{\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{O}}_{E'}$. Since $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$ is a split $F$-group, the local model $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_W,\{\mu_W\}}$ is naturally a subscheme of $\ensuremath{\mathrm{Fl}}_{\ensuremath{\underline{\mathcal{GL}}}_{W}}^{v - \varpi_F}$.
Here, $\ensuremath{\underline{\mathcal{GL}}}_{W}$ is an $\ensuremath{\mathcal{O}}_F[v]$-group scheme and $\ensuremath{\mathrm{Fl}}_{\ensuremath{\underline{\mathcal{GL}}}_{W}}^{v - \varpi_F}$ is defined by applying \S\ref{sec: Levin group schemes} with $K=F.$ We first show there exists a map $\ensuremath{\mathrm{Fl}}^{Q(u)}_{\underline{\mathcal{GL}}}\rightarrow \ensuremath{\mathrm{Fl}}_{\ensuremath{\underline{\mathcal{GL}}}_{W}}^{v-\varpi_F}$; here $\underline{\mathcal{GL}}$ is the $\ensuremath{\mathcal{O}}_{K_0}[u]$-group scheme associated to the $\ensuremath{\mathcal{O}}_K$-group scheme $\mathcal{GL}$ and the extension $K/F$ as in \S\ref{sec: Levin group schemes}. Let $W_0$ denote the underlying $K_0$-vector space of $V_K$. Denote by $\ensuremath{\mathcal{GL}}_{W_0}$ the parahoric group scheme over $\ensuremath{\mathcal{O}}_{K_0}$ corresponding to the image of $y$ under the map of buildings $$ \ensuremath{\mathcal{B}}(\ensuremath{\mathrm{GL}}(V_K),K) = \ensuremath{\mathcal{B}}(\mathrm{Res}_{K/K_0}\ensuremath{\mathrm{GL}}(V_K),K_0)\rightarrow \ensuremath{\mathcal{B}}(\ensuremath{\mathrm{GL}}(W_0),K_0). $$ We first define a map $\ensuremath{\mathrm{Fl}}^{Q(u)}_{\ensuremath{\underline{\mathcal{GL}}},0} \rightarrow \ensuremath{\mathrm{Fl}}^{v-\varpi_F}_{\ensuremath{\underline{\mathcal{GL}}}_{W_0},0}.$ (This amounts to constructing the map above in the special case when $F = K_0$.)
Define the map $$r:\ensuremath{\mathcal{O}}_{K_0}[v]\rightarrow \ensuremath{\mathcal{O}}_{K_0}[u],\ \ v\mapsto Q(u)+\varpi_F,$$ which lifts the inclusion $\ensuremath{\mathcal{O}}_{K_0} \rightarrow \ensuremath{\mathcal{O}}_K$ under the specializations $v \mapsto \varpi_F$ and $u \mapsto \varpi.$ Let $\ensuremath{\underline{\mathcal{GL}}}_{K/K_0}$ denote the group scheme given by Weil restriction of $\ensuremath{\underline{\mathcal{GL}}}$ along $r$; then the base change of $\ensuremath{\underline{\mathcal{GL}}}_{K/K_0}$ along $\ensuremath{\mathcal{O}}_{K_0}[v]\rightarrow\ensuremath{\mathcal{O}}_{K_0},\ v\mapsto \varpi_F$ is identified with $\ensuremath{\mathcal{GL}}_{K/K_0} := \ensuremath{\mathrm{Res}}_{\ensuremath{\mathcal{O}}_K/\ensuremath{\mathcal{O}}_{K_0}}\ensuremath{\mathcal{GL}}.$ We begin by constructing a map $$i:\ensuremath{\underline{\mathcal{GL}}}_{K/K_0}\rightarrow \underline{\mathcal{GL}}_{W_0}$$ extending the map of $\ensuremath{\mathcal{O}}_{K_0}$-schemes $\ensuremath{\mathcal{GL}}_{K/K_0} \rightarrow \mathcal{GL}_{W_0}$ under the specialization $v\mapsto\varpi_F$, such that the base change to $k[[v]]$ $$i_{k[[v]]}: \ensuremath{\underline{\mathcal{GL}}}_{K/K_0,k[[v]]}\rightarrow \underline{\mathcal{GL}}_{W_0,k[[v]]}$$ is a closed immersion. To construct $i,$ let $\underline{W}_{\bullet}$ denote the lattice chain of $\ensuremath{\mathcal{O}}_{K_0}[u]$-modules associated to $\mathcal{GL}$ via the construction in \S\ref{sec: Pappas-Zhu lattices chains}; then $\underline{\mathcal{GL}}$ may be identified with the automorphism group of $\underline{W}_\bullet$. We may view $\underline{W}_{\bullet},$ via $r,$ as a lattice chain of $\ensuremath{\mathcal{O}}_{K_0}[v]$-modules $\underline{W}_{0,\bullet}.$ Then we may identify $\underline{\mathcal{GL}}_{W_0}$ with the automorphism group of $\underline{W}_{0,\bullet}$.
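As an illustration of the map $r$, suppose for example that $K/F$ is totally ramified, so that $K_0=F$, and that the uniformizer $\varpi$ is chosen with Eisenstein polynomial $Q(u)=u^e-\varpi_F$, where $e=[K:F]$. Then $$r(v)=Q(u)+\varpi_F=u^e,$$ so $r$ indeed lifts the inclusion $\ensuremath{\mathcal{O}}_{K_0}\rightarrow\ensuremath{\mathcal{O}}_K$ under $v\mapsto\varpi_F$ and $u\mapsto\varpi$, it takes $v-\varpi_F$ to $Q(u)$, and modulo $\varpi_F$ it reduces to the map $v\mapsto u^e$.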
Since any $\ensuremath{\mathcal{O}}_{K_0}[u]$-automorphism of $\underline{W}_{\bullet}$ gives an $\ensuremath{\mathcal{O}}_{K_0}[v]$-automorphism of $\underline{W}_{0,\bullet}$, we obtain a natural map of $\ensuremath{\mathcal{O}}_{K_0}[v]$-group schemes $i:\ensuremath{\underline{\mathcal{GL}}}_{K/K_0}\rightarrow \underline{\mathcal{GL}}_{W_0}$ as desired. The base change $i_{k[[v]]}:\underline{\mathcal{GL}}_{K/K_0,k[[v]]}\rightarrow \underline{\mathcal{GL}}_{W_0,k[[v]]}$ is induced by restriction of structure from $k[[u]]$-lattices to $k[[v]]$-lattices under the map $v\mapsto u^e$, where $e=[K:K_0]$. Therefore it is a closed immersion by Lemma \ref{lem: closed immersion of group scheme GL}. By \cite[Corollary 3.6]{HaRi}, the Weil restriction of torsors along $r$ induces an isomorphism $$\mathrm{Fl}^{Q(u)}_{\underline{\mathcal{GL}},0}\xrightarrow{\sim}\mathrm{Fl}^{v-\varpi_F}_{\ensuremath{\underline{\mathcal{GL}}}_{K/K_0},0}.$$ Combining this isomorphism with the map given by extending torsors along $i,$ we obtain the required map $$\iota_0:\mathrm{Fl}^{Q(u)}_{\underline{\mathcal{GL}},0}\cong\mathrm{Fl}_{\ensuremath{\underline{\mathcal{GL}}}_{K/K_0},0}^{v-\varpi_F}\rightarrow \mathrm{Fl}^{v-\varpi_F}_{\underline{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}}_{W_0},0}.$$ Now applying $\mathrm{Res}_{\ensuremath{\mathcal{O}}_{K_0}/\ensuremath{\mathcal{O}}_F}$ we obtain a map $$\iota:\mathrm{Fl}_{\underline{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}}}^{Q(u)}\rightarrow \mathrm{Res}_{\ensuremath{\mathcal{O}}_{K_0}/\ensuremath{\mathcal{O}}_{F}}\mathrm{Fl}^{v-\varpi_F}_{\underline{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}}_{W_0}}.$$ A standard argument (cf. \cite[Theorem 1.4]{PR}) shows that $\iota\otimes_{\ensuremath{\mathcal{O}}_F} k$ is a locally closed immersion. Since the domain of this map is ind-projective it follows that $\iota\otimes_{\ensuremath{\mathcal{O}}_F} k$ is a closed immersion.
We compose $\iota$ with the map $$\iota':\ensuremath{\mathrm{Res}}_{\ensuremath{\mathcal{O}}_{K_0}/\ensuremath{\mathcal{O}}_{F}}\ensuremath{\mathrm{Fl}}^{v-\varpi_F}_{\ensuremath{\underline{\mathcal{GL}}}_{W_0}} \rightarrow \ensuremath{\mathrm{Fl}}^{v-\varpi_F}_{\ensuremath{\underline{\mathcal{GL}}}_{W}}$$ induced by the embedding $\ensuremath{\mathrm{Res}}_{K_0/F} \ensuremath{\mathrm{GL}}(W_0) \rightarrow \ensuremath{\mathrm{GL}}(W).$ As in \cite[Proof of Proposition 8.1]{PZ}, $\iota'\otimes_{\ensuremath{\mathcal{O}}_{F}} k$ is a closed immersion, since $\mathrm{Res}_{K_0/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W_0)$ is an unramified group and the embedding $\mathrm{Res}_{K_0/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W_0)\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$ is minuscule. It follows that the composite map $\iota'\circ\iota$ is a closed immersion on special fibers. Restricting to the local models we obtain a map \begin{equation} \label{eqn: map of local models GL} \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_{K/F},\{\mu_{K/F}\}}\rightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_W,\{\mu_W\}}\otimes_{\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{O}}_{E'} \end{equation} which is a closed immersion on special fibers. An argument involving Nakayama's Lemma as in \cite[Proposition 8.1]{PZ} shows that (\ref{eqn: map of local models GL}) itself is a closed immersion. It remains to check the statement regarding the generic fiber. 
This follows from the definition of local models in \S\ref{sec: Levin local models}, and the fact that the map $r$ takes $v - \varpi_F$ to $Q(u).$ \end{proof} \subsubsection{}\label{sec: embedding of local models product}More generally if $G\cong\prod_{i=1}^r\mathrm{Res}_{K_i/F}H_i$ as in (*) and $\rho:G\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V)$ is a faithful representation such that $\rho\circ\mu$ is conjugate to a standard minuscule coweight and $G$ contains the scalars, we let $W_i$ denote the underlying $F$-vector space of $V\otimes_FK_i$. Then as before we obtain a new faithful minuscule representation given by the composition $$\rho':G\cong \prod_{i=1}^r\mathrm{Res}_{K_i/F}H_i\rightarrow \prod_{i=1}^r\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W_i)\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W),$$ where the first map is induced from a product of maps $\rho_i':\mathrm{Res}_{K_i/F}H_i\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W_i)$ and $W:=\prod_{i=1}^rW_i$. We let $\mathcal{GL}_{W_i}$ denote the parahoric for $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W_i)$ as constructed in \S\ref{sec: map of buildings}; this determines a parahoric $\mathcal{GL}_W$ of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$ given by the stabilizer of the lattice chain in $W$ formed by all possible products of the lattice chains in $W_i$ corresponding to $\mathcal{GL}_{W_i}$. We let $\mu_{W_i}$ denote the $i^{\mathrm{th}}$ component of the $\prod_{i=1}^r\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W_i)$-conjugacy class of cocharacters induced by $\{\mu\}$.
By \cite[Proposition 2.3.7]{KP}, there is a closed immersion \begin{equation}\label{eqn: embedding local models GLn product} \prod_{i=1}^r\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_{W_i},\{\mu_{W_i}\}}\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\rho'\circ\mu\}}. \end{equation} Applying Proposition \ref{prop: local model embedding main} to each factor and composing with (\ref{eqn: embedding local models GLn product}), we obtain the following. \begin{prop}\label{prop: local model embedding product} There is a closed immersion \begin{equation}\label{eqn: local model embedding product}\rho'^{\mathrm{loc}}:\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\rightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\rho'\circ\mu\}}\otimes_{\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{O}}_E\end{equation} extending the natural map on generic fibers. \end{prop}\qed \subsection{Local models and the admissible set} \subsubsection{}We keep the notation of the previous subsection. We now give a more explicit description of the closed immersion $$\rho'^{\mathrm{loc}}\otimes_{\ensuremath{\mathcal{O}}_E} k:\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\otimes_{\ensuremath{\mathcal{O}}_E}k\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\mu_W\}}\otimes_{\ensuremath{\mathcal{O}}_F}k$$ constructed in Proposition \ref{prop: local model embedding product} on the level of $k$-points. We first consider the case $G\cong\mathrm{Res}_{K/F}H$ with $K$, $H$ as above. Let $\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}$ denote the $k_F[[u]]$-group scheme defined in \S \ref{sec: identification of Iwahori Weyl group} and $\underline{\ensuremath{\mathcal{G}}}_{k[[u]]}$ its base change to $k[[u]]$.
We may identify $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}(k)$ with the union \begin{equation} \label{eqn: points of local model} \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}(k)=\bigcup_{w\in\ensuremath{\mathrm{Adm}}(\{\mu\})_J}S_w(k)\subset \underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}(k((u)))/\underline{\ensuremath{\mathcal{G}}}_{k_F[[u]]}(k[[u]]).\end{equation} For notational convenience we write $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W$ for the group scheme $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(W)$. We also write $\underline{\mathcal{GL}}_W$ for the $\ensuremath{\mathcal{O}}_F[v]$-group scheme associated to $\mathcal{GL}_W$ in \cite{PZ}, and we let $\underline{\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}}_W$ denote its base change to $\ensuremath{\mathcal{O}}_F[v^\pm]$. Then similarly to (\ref{eqn: points of local model}), we may identify $$\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\mu_W\}}(k)\subset \underline{\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}}_W(k((v)))/\underline{\mathcal{GL}}_W(k[[v]])$$ with the union of the Schubert varieties indexed by $\ensuremath{\mathrm{Adm}}_{\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W}(\{\mu_W\})_{J'}$. Here $J'$ is a subset of the set of simple reflections for the Iwahori Weyl group of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W$ corresponding to the parahoric $\mathcal{GL}_W$.
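For example (a standard special case, recalled only for illustration), for $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_2$ with Iwahori level structure and $\mu=(1,0)$, the admissible set is $\ensuremath{\mathrm{Adm}}(\{\mu\})=\{t^{(1,0)},t^{(0,1)},\tau\}$, where $\tau$ is the unique length zero element with $\tau\leq t^{(1,0)}$ in the Bruhat order; correspondingly, the special fiber of the local model is the union of two copies of $\ensuremath{\mathbb{P}}^1$ meeting at a point, with one Schubert cell for each admissible element.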
On the other hand, the discussion in \cite[\S3.4]{Z} shows that there is an embedding $$\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\mu_W\}}(k)\subset \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W(\ensuremath{\breve{F}})/\mathcal{GL}_W(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}).$$ Note that the convention in {\em loc.~cit.} is that $g\in \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W(\ensuremath{\breve{F}})/\mathcal{GL}_W(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ corresponds to the filtration induced by $\{g\varpi\Lambda_i\}_{i\in\ensuremath{\mathbb{Z}}}$, where $\{\Lambda_i\}_{i\in\ensuremath{\mathbb{Z}}}$ are the constituent lattices of the lattice chain corresponding to $\mathcal{GL}_W$. Thus we may consider $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}(k)$ as a subset of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W(\ensuremath{\breve{F}})/\mathcal{GL}_W(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$. Now the embedding $\rho':G\rightarrow \ensuremath{\mathrm{GL}}_W$ may be extended to a morphism $\rho':\mathcal{G}\rightarrow\mathcal{GL}_W$; hence we obtain a map \begin{equation}\label{eqn: map of mixed char flag variety}H(\ensuremath{\breve{K}})/\ensuremath{\mathcal{H}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{K}}})\cong G(\ensuremath{\breve{F}})/\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W(\ensuremath{\breve{F}})/\mathcal{GL}_W(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}}).\end{equation} If $\ensuremath{\mathcal{G}}$ is a connected parahoric, i.e. $\widetilde{\ensuremath{\mathcal{G}}}=\ensuremath{\mathcal{G}}$, this map is an injection. The following proposition is the analogue of \cite[Proposition 3.4]{Z} in our setting. \begin{prop}\label{prop: mu admissible mixed char}Assume $\ensuremath{\mathcal{G}}$ is a connected parahoric.
Let $g\in G(\ensuremath{\breve{F}})$ with $$g\in \ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\dot{w}\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$$ for some $w\in W_J\backslash W/W_J$. Then the image of $\rho'(g)$ in $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W(\ensuremath{\breve{F}})/\mathcal{GL}_W(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ lies in $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}(k)$ if and only if $w\in\ensuremath{\mathrm{Adm}}(\{\mu\})_J$. \end{prop} \begin{proof}By the construction of the map $$\rho'^{\mathrm{loc}}:\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\otimes_{\ensuremath{\mathcal{O}}_E}k\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\mu_W\}}\otimes_{\ensuremath{\mathcal{O}}_F}k$$ in Proposition \ref{prop: local model embedding main}, the map on the special fiber of local models is given by the composition $$\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\otimes_{\ensuremath{\mathcal{O}}_E} k\xrightarrow{\rho_1^{\mathrm{loc}}} \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_{K/F},\{\mu_{K/F}\}}\otimes_{\ensuremath{\mathcal{O}}_{E'}}k\xrightarrow{\rho_2^{\mathrm{loc}}} \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_W,\{\mu_W\}}\otimes_{\ensuremath{\mathcal{O}}_F}k.$$ We let $\underline{W}_{\bullet}$ denote the $\ensuremath{\mathcal{O}}_{K_0}[u]$-lattice chain constructed in the proof of Proposition \ref{prop: local model embedding main} Step (2), and we write $\underline{W}_{\bullet,k[[u]]}$ for the lattice chain given by base change to $k[[u]]$. 
We let $\underline{\mathcal{GL}}_{k[[u]]}$ denote the stabilizer of the $k[[u]]$-lattice chain $\prod_{\psi:k_0\rightarrow k}\underline{W}_{\bullet,k[[u]]}$, and $\underline{\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}}_{k((u))}$ its generic fiber. Then we may identify $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\otimes_{\ensuremath{\mathcal{O}}_E} k$ (resp. $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_{K/F},\{\mu_{K/F}\}}\otimes_{\mathcal{O}_{E'}}k$) with a closed subscheme of $$\mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}_{k[[u]]}}\ (\text{resp. } \mathcal{FL}_{\underline{\mathcal{GL}}_{k[[u]]}})$$ and the map $\rho_1^{\mathrm{loc}}$ is induced by extending torsors along a morphism $$\underline{\ensuremath{\mathcal{G}}}_{k[[u]]}\rightarrow \underline{\mathcal{GL}}_{k[[u]]}.$$ Recall $e:=[K:K_0]$ and we let $k[[v]]\rightarrow k[[u]]$ denote the map sending $v$ to $u^e$. We write $\underline{\mathcal{GL}}_{W,k[[v]]}$ for the base change to $k[[v]]$ of the $\ensuremath{\mathcal{O}}_{K_0}[v]$-group $\underline{\mathcal{GL}}_{W}$. Then $\underline{\mathcal{GL}}_{W,k[[v]]}$ is identified with the stabilizer of $\prod_{\psi:k_0\rightarrow k}\underline{W}_{\bullet,k[[u]]}$ as $k[[v]]$-modules; here we take all possible products of lattices in the lattice chain. There is a natural map of $k[[v]]$-group schemes \begin{equation}\label{eqn: map of GLn group schemes}\mathrm{Res}_{k[[u]]/k[[v]]}\underline{\mathcal{GL}}_{k[[u]]}\rightarrow \underline{\mathcal{GL}}_{W,k[[v]]}\end{equation} induced by the forgetful functor from $k[[u]]$-modules to $k[[v]]$-modules. Then we may identify $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\mu_W\}}\otimes_{\ensuremath{\mathcal{O}}_F}k$ with a closed subscheme of $\mathcal{FL}_{\underline{\mathcal{GL}}_{W,k[[v]]}}$ and the map $\rho_2^{\mathrm{loc}}$ is given by extending torsors along (\ref{eqn: map of GLn group schemes}). 
The map $\rho'^{\mathrm{loc}}$ on $k$-points is then given by the injection \begin{equation}\label{eqn: embedding of flag varieties}\underline{G}_{k((u))}(k((u)))/\underline{\ensuremath{\mathcal{G}}}_{k[[u]]}(k[[u]])\hookrightarrow \underline{\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}}_{W,k((v))}(k((v)))/\underline{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}}_{W,k[[v]]}(k[[v]]). \end{equation} We have a commutative diagram of maps of apartments \footnotesize{\begin{equation}\label{eqn: commutative diagram of apartments} \xymatrix{\ensuremath{\mathcal{A}}(G,S',\ensuremath{\breve{F}})\ar[d]^{\rotatebox{270}{$\cong$}}_{(\ref{eqn: id of apartments})}\ar@{^{(}->}[r]& \ensuremath{\mathcal{A}}(\mathrm{Res}_{K/F}\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V_K),T',\ensuremath{\breve{F}})\ar[d]^{\rotatebox{270}{$\cong$}}\ar@{^{(}->}[r]& \ensuremath{\mathcal{A}}(\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W,T'_W,\ensuremath{\breve{F}})\ar[d]^{\rotatebox{270}{$\cong$}}\\ \ensuremath{\mathcal{A}}(\underline{G}_{k((u))},\underline{S}',k((u)))\ar@{^{(}->}[r] & \ensuremath{\mathcal{A}}(\underline{\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}}_{k((u))},\underline{T}_{k((u))}',k((u))) \ar@{^{(}->}[r]& \ensuremath{\mathcal{A}}(\underline{\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}}_{W,k((v))},\underline{T}'_{k((v))},k((v)))}.\end{equation}} \normalsize Here the tori $T'$ and $\underline{T}'_{k((u))}$ are defined as follows. Let $\underline{\Lambda}$ denote the $\ensuremath{\mathcal{O}}_{\breve{K}_0}[u^\pm]$-module corresponding to the base change to $\ensuremath{\mathcal{O}}_{\breve K_0}$ of the common generic fiber of $\underline{W}_{\bullet}$. The torus $\underline{T}'\subset \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(\underline{\Lambda})$ is the maximal split torus determined by a suitable choice of basis $\underline{b}$ of $\underline{\Lambda}$; cf. \cite[Proof of Proposition 2.3.7]{KP}. Then $T'$ (resp.
$\underline{T}'_{k((u))}$) is the base change of $\underline{T}'$ to $\breve{K}$ (resp. $k((u))$). The existence of the left square follows from the construction of the basis $\underline{b}$; cf. \cite[\S3.3]{Z}. The tori $T'_W$ and $\underline{T}'_{k((v))}$ are determined by $T'$, $\underline{T}'_{k((u))}$ and the choice of uniformizers $\varpi$, $u$ of $\ensuremath{\breve{K}}$ and $k((u))$ respectively. The commutativity of the right square then follows from the explicit description of the apartments in terms of lattice chains. We may also identify Iwahori Weyl groups for the groups in the top row with the respective Iwahori Weyl group in the bottom row, and the vertical isomorphisms are compatible with the action of the Iwahori Weyl groups. Moreover the horizontal maps induce morphisms of Iwahori Weyl groups and they are equivariant for the actions of these groups on the apartment. We now argue as in \cite[Proposition 3.4]{Z}. Since $\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ maps to $\mathcal{GL}_W(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$, we may assume $g=g_1\dot{w}$ with $g_1\in\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$. There is a $\ensuremath{\mathcal{G}}\otimes_{\ensuremath{\mathcal{O}}_{F}}\ensuremath{\mathcal{O}}_E$-action on $\mathrm{M}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}$. Over the special fiber this action coincides with the one given by left multiplication by $\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ on $$\mathrm{M}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}(k)\subset\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\mu_W\}}(k)\subset \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W(\ensuremath{\breve{F}})/\mathcal{GL}_{W}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}});$$ note that the action of $\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ necessarily factors through $\ensuremath{\mathcal{G}}(k)$ since $\rho'\circ\mu$ is minuscule.
Thus upon modifying $g$ by $g_1$ on the left, we may assume that $g=\dot{w}$. Using the commutativity of the diagram (\ref{eqn: commutative diagram of apartments}) and the fact that this diagram is equivariant for the action of Iwahori Weyl groups, it follows that the image of $g$ in $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\mu_W\}}(k)\subset \underline{\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}}_{W,k((v))}(k((v)))/\underline{\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}}_{k[[v]]}(k[[v]])$ is given by the image of $\underline{\dot{w}}\in \underline{G}_{k((u))}(k((u)))$, where $\underline{\dot{w}}$ is a lift of the element $\underline{w}\in {W}_{\underline{G}_{k((u))}}$ corresponding to $w$ under the isomorphism (\ref{eqn: id of Iwahori Weyl groups 2}). It follows from Theorem \ref{thm: special fiber of local models and admissible set} that $g$ gives a point in $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}(k)$ if and only if $w\in\ensuremath{\mathrm{Adm}}(\{\mu\})_J$. \end{proof} \subsubsection{}We now let $G\cong\prod_{i=1}^r\mathrm{Res}_{K_i/F}H_i$ as in (*), and $\rho:G\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V)$ a faithful representation as in \S\ref{sec: embedding of local models product}. As before we write $W$ for the $F$-vector space underlying $\prod_{i=1}^rV_{K_i}$ and $\rho'^{\mathrm{loc}}:\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\rho'\circ\mu\}}\otimes_{\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{O}}_E$ the closed immersion of local models constructed in Proposition \ref{prop: local model embedding product}. 
This factors as $$\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\hookrightarrow\prod_{i=1}^r\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_{W_i},\{\mu_{W_i}\}}\otimes_{\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{O}}_E\hookrightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\rho'\circ\mu\}}\otimes_{\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{O}}_E.$$ As before, we may identify $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\mathcal{GL}_W,\{\rho'\circ\mu\}}(k)$ with a subset of $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W({\ensuremath{\breve{F}}})/\mathcal{GL}_W(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$. Using the fact that the embedding $G(\ensuremath{\breve{F}})\hookrightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W(\ensuremath{\breve{F}})$ factors through $\prod_{i=1}^r\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_{W_i}(\ensuremath{\breve{F}})$ and applying Proposition \ref{prop: mu admissible mixed char}, we obtain the following. \begin{prop} Let $G\cong\prod_{i=1}^r\mathrm{Res}_{K_i/F}H_i$ and assume $\ensuremath{\mathcal{G}}$ is a connected parahoric. Let $g\in G(\ensuremath{\breve{F}})$ with $$g\in \ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\dot{w}\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$$ for some $w\in W_J\backslash W/W_J$. Then the image of $\rho'(g)$ in $\ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}_W(\ensuremath{\breve{F}})/\mathcal{GL}_W(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ lies in $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}(k)$ if and only if $w\in\ensuremath{\mathrm{Adm}}(\{\mu\})_J$, where $J\subset \ensuremath{\mathbb{S}}$ is the set of simple reflections corresponding to $\ensuremath{\mathcal{G}}$.
\end{prop}\qed \subsection{More general local models}\label{sec: embedding into Grassmannian} \subsubsection{} In this subsection we extend the construction of local models to certain triples $(G,\ensuremath{\mathcal{G}},\{\mu\})$ with the condition (*) relaxed. This is necessary for the later applications to Shimura varieties because groups of the form $\mathrm{Res}_{K/F}H$ rarely arise as the group at $p$ of a Shimura datum of Hodge type. Let $G$ be a reductive group over $F$ and $\{\mu\}$ a conjugacy class of minuscule cocharacters for $G$. Let $\rho:G\rightarrow \ensuremath{\mathrm{GSp}}(V)$ be a faithful symplectic representation, where $V$ is a $2n$-dimensional vector space over $F$ equipped with a perfect alternating bilinear form $\Psi$. We assume that $\rho\circ\mu$ is conjugate to the standard minuscule coweight $a\mapsto \mathrm{diag}(1^{(n)},(a^{-1})^{(n)})$ and that $G$ contains the scalars. We call such an embedding a local Hodge embedding. \begin{definition}\label{def: embedding into GL(V)}The pair $(G,\{\mu\})$ is said to be \emph{regular} if the following three conditions are satisfied. \begin{enumerate} \item $G$ is a subgroup of a reductive group $G'\cong \prod_{i=1}^r\mathrm{Res}_{K_i/F}H_i$ as in (*) such that the inclusion $G\subset G'$ induces an isomorphism $G_{\mathrm{der}}\cong \mathrm{G}'_{\mathrm{der}}$. \item There exists a local Hodge embedding $\rho:G\rightarrow \ensuremath{\mathrm{GSp}}(V)$ such that $\rho$ extends to a closed immersion $\rho:G'\rightarrow \ensuremath{\mathrm{G}}\ensuremath{\mathrm{L}}(V).$ \item The centralizer $T$ of a maximal $\ensuremath{\breve{F}}$-split torus of $G$ is $R$-smooth. \end{enumerate} We say a local model triple $(G,\{\mu\},\ensuremath{\mathcal{G}})$ is regular if the associated pair $(G,\{\mu\})$ is regular. \end{definition} \begin{remark} \begin{enumerate} \item For later applications, all Shimura varieties that we work with can be related to one whose associated local model triple is regular. 
Therefore, this assumption will not appear in our final result. \item By Proposition \ref{prop: closed immersion of BT schemes}, condition (3) implies that the inclusion $G\subset G'$ induces a closed immersion $\widetilde{\ensuremath{\mathcal{G}}}\rightarrow \widetilde{\ensuremath{\mathcal{G}}}'$, where $\widetilde{\ensuremath{\mathcal{G}}}'$ is the Bruhat--Tits stabilizer scheme for $G'$ corresponding to $\widetilde\ensuremath{\mathcal{G}}$.\end{enumerate} \end{remark} \subsubsection{}\label{sec: admissible set for modified local model}Let $(G,\{\mu\},\ensuremath{\mathcal{G}})$ be a regular triple and $G'\cong \prod_{i=1}^r\mathrm{Res}_{K_i/F}H_i$ as in Definition \ref{def: embedding into GL(V)}. Since $G$ and $G'$ have the same derived group, the parahoric $\ensuremath{\mathcal{G}}$ determines a parahoric group scheme $\ensuremath{\mathcal{G}}'$ of $G'$. We define a local model for $G$ by setting $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}:=\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}',\{\mu'\}}$, where $\{\mu'\}$ is the $G'$-conjugacy class of cocharacters induced by $\{\mu\}$. If we let $P_\mu\subset G$ denote the parabolic subgroup corresponding to some representative $\mu$ of $\{\mu\}$, and $P'_{\mu'}\subset G'$ the corresponding parabolic of $G'$, then there is a canonical identification $$X_\mu:=G/P_\mu\cong G'/P'_{\mu'},$$ since parabolic subgroups contain the radical and the flag variety therefore depends only on the common derived group; this justifies the definition.
It is possible to prove that the definition of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}$ does not depend on the choice of $G',$ but we will not need this, and will always consider the definition via a choice of auxiliary group $G'.$ We choose a $\sigma$-invariant alcove $\ensuremath{\mathfrak{a}}\subset\ensuremath{\mathcal{B}}(G,\breve F)$ as in \S\ref{subsubsec:fnfielddefns}; this determines a set of simple reflections $\ensuremath{\mathbb{S}}$ for the Iwahori Weyl group $W$ and we let $J\subset \ensuremath{\mathbb{S}}$ be the subset corresponding to the parahoric $\ensuremath{\mathcal{G}}$. There is a natural $G(\ensuremath{\breve{F}})$-equivariant map of buildings $\ensuremath{\mathcal{B}}(G,\ensuremath{\breve{F}})\rightarrow \ensuremath{\mathcal{B}}(G',\ensuremath{\breve{F}})$ and the alcove $\ensuremath{\mathfrak{a}}$ determines an alcove $\ensuremath{\mathfrak{a}}'\subset\ensuremath{\mathcal{B}}(G',\ensuremath{\breve{F}})$. We let $W', \ensuremath{\mathbb{S}}'$ denote the corresponding objects for $G'$. By construction, there is a canonical identification $\ensuremath{\mathbb{S}}\cong\ensuremath{\mathbb{S}}'$ and we let $J'\subset \ensuremath{\mathbb{S}}'$ denote the subset corresponding to $J$. Then $J'$ corresponds to the parahoric $\ensuremath{\mathcal{G}}'$ of $G'$. The special fiber of the local model has a stratification naturally indexed by the $\mu'$-admissible set $\ensuremath{\mathrm{Adm}}(\{\mu'\})_{J'}$ of $G'$. Moreover, the natural map $G\rightarrow G'$ induces a map $W\rightarrow W'$ between Iwahori Weyl groups and by \cite[Lemma 3.6]{HaRi2}, this induces a bijection $$\ensuremath{\mathrm{Adm}}_G(\{\mu\})_J\cong\ensuremath{\mathrm{Adm}}_{G'}(\{\mu'\})_{J'}.$$ We may thus consider the strata as being indexed by $\ensuremath{\mathrm{Adm}}(\{\mu\})_J$.
\subsubsection{}\label{sec: first new Hodge embedding}Let $\rho:G\rightarrow \mathrm{GSp}(V)$ be a local Hodge embedding as in Definition \ref{def: embedding into GL(V)} (2) and $\rho:G'\rightarrow \ensuremath{\mathrm{GL}}(V)$ its extension to $G'$. Let $\rho':G'\rightarrow \ensuremath{\mathrm{GL}}(W)$ be the embedding obtained from $\rho$ via the construction in \S\ref{sec: embedding of local models product}; we write $2n':=\dim_F W$. Recall that $W = \prod_{i=1}^r W_i,$ with $W_i = V\otimes_F K_i,$ viewed as an $F$-vector space. We may equip $W_i$ with the alternating bilinear form given by $$\Psi_i:W_i\times W_i\xrightarrow{ \Psi\otimes_{F}K_i}K_i\xrightarrow{\rm tr} F,$$ where ${\rm tr}:K_i\rightarrow F$ is the trace map. We then define an alternating bilinear form $\Psi'$ on $W$ by setting $\Psi':=\sum\Psi_i$. It is easy to check that the induced map $G\rightarrow \ensuremath{\mathrm{GL}}(W)$ factors through $\ensuremath{\mathrm{GSp}}(W)$ and we write $\rho^H$ for the induced map $G\rightarrow \mathrm{GSp}(W)$. There is a canonical equivariant toral embedding of buildings \begin{equation}\label{eqn: embedding of buildings}\ensuremath{\mathcal{B}}(\ensuremath{\mathrm{GSp}}(W),F)\rightarrow \ensuremath{\mathcal{B}}(\ensuremath{\mathrm{GL}}(W),F);\end{equation} see e.g. \cite[\S2.3.2]{KP}. Arguing as in \cite[Lemma 2.3.3]{KP}, we may choose the embedding (\ref{eqn: embedding of buildings}) such that the composition $\ensuremath{\mathcal{B}}(G,F)\rightarrow \ensuremath{\mathcal{B}}(\ensuremath{\mathrm{GL}}(W),F)$ factors through $\ensuremath{\mathcal{B}}(\ensuremath{\mathrm{GSp}}(W),F)$. We write $\mathcal{GSP}$ (resp. $\mathcal{GL}_W$) for the parahoric group scheme of $\ensuremath{\mathrm{GSp}}(W)$ (resp. $\ensuremath{\mathrm{GL}}(W)$) corresponding to the image of $x$.
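For the reader's convenience, we spell out why $\Psi'$ is a perfect alternating form; the only input is that each $K_i/F$ is separable (automatic in characteristic zero), so that the trace pairing $(x,y)\mapsto{\rm tr}(xy)$ on $K_i$ is nondegenerate. Writing $\Psi_{K_i}:=\Psi\otimes_FK_i$, for $w,w'\in W_i$ we have $$\Psi_i(w,w)={\rm tr}\big(\Psi_{K_i}(w,w)\big)={\rm tr}(0)=0,\qquad \Psi_i(w,w')={\rm tr}\big(-\Psi_{K_i}(w',w)\big)=-\Psi_i(w',w),$$ so $\Psi_i$ is alternating; it is perfect because the induced map $W_i\rightarrow \mathrm{Hom}_F(W_i,F)$ is the composite of the isomorphism $W_i\cong\mathrm{Hom}_{K_i}(W_i,K_i)$ defined by the perfect form $\Psi_{K_i}$ with the injection $f\mapsto{\rm tr}\circ f$, hence an isomorphism by dimension count. Since the $W_i$ are pairwise orthogonal for $\Psi'=\sum_i\Psi_i$, the same properties hold for $\Psi'$.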
The local model $\ensuremath{\mathrm{M}}_{\mathcal{GSP},\{\rho^H\circ\mu\}}^{\mathrm{loc}}$ agrees with the one studied by G\"ortz in \cite{Go2}; its generic fiber is the Lagrangian Grassmannian $\mathrm{LGr}(W),$ which parameterizes $n'$-dimensional isotropic subspaces of $W$. The natural map $$X_\mu\rightarrow \mathrm{Gr}(n',2n')\otimes_F E$$ factors through $\mathrm{LGr}(W)\otimes_F E$. \subsubsection{}Arguing as in \cite[\S2.3.15]{KP}, one can further modify $\rho^H$ so that $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}$ maps into a smooth Grassmannian. The following corollary follows immediately from Proposition \ref{prop: local model embedding main}, using the existence of the closed immersion $$\ensuremath{\mathrm{M}}_{\mathcal{GSP},\{\rho^H\circ\mu\}}^{\mathrm{loc}}\rightarrow \ensuremath{\mathrm{M}}_{\mathcal{GL}_W,\{\rho'\circ\mu'\}}^{\mathrm{loc}};$$ cf. \cite[\S2.3.4]{KP}. \begin{cor}\label{cor: embedding of local model into Grassmannian 1} Let $(G,\ensuremath{\mathcal{G}},\{\mu\})$ be a regular triple. Then there exists a good local Hodge embedding $G\rightarrow \mathrm{GSp}(W')$. \qed \end{cor} \begin{definition}\label{def: good embedding} Let $(G,\{\mu\},\ensuremath{\mathcal{G}})$ be a regular triple and let $G\subset G'$ be as in Definition \ref{def: embedding into GL(V)} (1). Let $W$ be an $F$-vector space and $\Lambda\subset W$ an $\ensuremath{\mathcal{O}}_F$-lattice. We say that a faithful representation $\varrho:G\rightarrow \mathrm{GL}(W)$ is \emph{good} with respect to $\Lambda$ if the following two conditions are satisfied. \begin{enumerate} \item $\varrho$ extends to a closed immersion $\widetilde{\ensuremath{\mathcal{G}}}' \hookrightarrow \ensuremath{\mathcal{GL}}_{W} := \ensuremath{\mathrm{GL}}(\Lambda)$.
\item There is a closed immersion of local models $$\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}\hookrightarrow \mathrm{Gr}(\Lambda)\otimes_{\ensuremath{\mathcal{O}}_F}\ensuremath{\mathcal{O}}_E$$ which extends the natural map on the generic fiber, where $\mathrm{Gr}(\Lambda)$ is the Grassmannian of rank $d$ local direct summands $\mathcal{F}\subset \Lambda$. Here $d$ is such that $\varrho\circ\mu$ is conjugate to the standard minuscule coweight $a\mapsto(1^{(n-d)},(a^{-1})^{(d)})$, where $n=\dim_FW$. \end{enumerate} A representation $\varrho:G\rightarrow \ensuremath{\mathrm{GL}}(W)$ is said to be \emph{good} if there exists an $\ensuremath{\mathcal{O}}_F$-lattice $\Lambda\subset W$ with respect to which $\varrho$ is good, and we say that a local Hodge embedding $\rho:G\rightarrow \mathrm{GSp}(W)$ is good if the induced representation $G\rightarrow \ensuremath{\mathrm{GL}}(W)$ is good. \end{definition} \begin{cor}\label{cor: embedding of local model into Grassmannian} Let $(G,\ensuremath{\mathcal{G}},\{\mu\})$ be a regular triple and $\rho^H:G\rightarrow \mathrm{GSp}(W)$ a Hodge embedding as constructed in \S\ref{sec: first new Hodge embedding}. Then we may find a new Hodge embedding $\rho'':G\rightarrow \ensuremath{\mathrm{GSp}}(W')$ such that $\rho''$ is good. \qed \end{cor} \subsubsection{}Let $(G,\{\mu\},\ensuremath{\mathcal{G}})$ be a regular local model triple and $\rho'':G\rightarrow \ensuremath{\mathrm{GSp}}(W')$ a good Hodge embedding. We let $\Lambda\subset W'$ be a lattice with respect to which $\rho''$ is good. As explained in \cite[\S3.6]{Z}, we may identify the $k$-points of $\mathrm{Gr}(\Lambda)$ with a subset of $\ensuremath{\mathrm{GL}}_{W'}(\ensuremath{\breve{F}})/\mathcal{GL}_{W'}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$, where $\ensuremath{\mathcal{G}}\ensuremath{\mathcal{L}}_{W'}:=\ensuremath{\mathrm{GL}}(\Lambda)$. The following corollary can be deduced easily from Proposition \ref{prop: mu admissible mixed char}.
\begin{cor}\label{cor: mu admissible mixed char}Assume the parahoric $\ensuremath{\mathcal{G}}$ is connected. Let $g\in G(\ensuremath{\breve{F}})$ with $$g\in \ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})\dot{w}\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$$ for some $w\in W_J\backslash W/W_J$. Then the image of $\rho''(g)$ in $\ensuremath{\mathrm{GL}}_{W'}(\ensuremath{\breve{F}})/\mathcal{GL}_{W'}(\ensuremath{\mathcal{O}}_{\ensuremath{\breve{F}}})$ lies in $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu\}}(k)$ if and only if $w\in \ensuremath{\mathrm{Adm}}(\{\mu\})_J$.\qed \end{cor} \section{Deformation theory of $p$-divisible groups}\label{sec: deformation theory} \subsection{The versal deformation space with tensors}\label{sec: adapted liftings}\subsubsection{}We recall the deformation theory of $p$-divisible groups equipped with a collection of crystalline tensors following \cite[\S3]{KP}. As most of the arguments of {\em loc.~cit.} go through unchanged in our setting, we discuss in detail only those points which do not. In this section, we assume $p>2$ and we work over the base field $\ensuremath{\mathbb{Q}}_p$ so that $\ensuremath{\breve{\mathbb{Q}}_p}=W(k)[\frac{1}{p}]$, where $W(k)$ denotes the Witt vectors of $k$. For any ring $R$ and an $R$-module $M$, we let $M^\otimes$ denote the direct sum of all $R$-modules obtained from $M$ by taking duals, tensor products, symmetric and exterior products. If $R$ is a complete local ring with residue field of positive characteristic and $\ensuremath{\mathscr{G}}$ is a $p$-divisible group over $R$, we write $\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}})$ for its (contravariant) Dieudonn\'e crystal. 
\subsubsection{}\label{subsec: deformation space with crystalline tensors} Let $\ensuremath{\mathscr{G}}_0$ be a $p$-divisible group over $k$ and set $\ensuremath{\mathbb{D}}:=\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_0)(\ensuremath{\breve{\mathbb{Z}}_p})$. We write $\varphi$ for the Frobenius on $\ensuremath{\mathbb{D}}$. Let $(s_{\alpha,0})\subset\ensuremath{\mathbb{D}}^\otimes$ be a collection of $\varphi$-invariant tensors whose images in $\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_0)(k)^\otimes$ lie in $\mathrm{Fil}^0$. We assume that there exists a $\ensuremath{\mathbb{Z}}_p$-module $U$ and an isomorphism \begin{equation} \label{eqn: triv Dieudonne} U\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\breve{\mathbb{Z}}_p}\cong\ensuremath{\mathbb{D}}\end{equation} such that $s_{\alpha,0}\in U^\otimes$. Write $\widetilde{\ensuremath{\mathcal{G}}}\subset \ensuremath{\mathrm{GL}}(U)$ for the pointwise stabilizer of $\{s_{\alpha,0}\}_{\alpha}$ so that $\widetilde{\ensuremath{\mathcal{G}}}_{\ensuremath{\breve{\mathbb{Z}}_p}}$ can be identified with the stabilizer of $s_{\alpha,0}$ in $\ensuremath{\mathrm{GL}}(\ensuremath{\mathbb{D}})$. We assume that the generic fiber $G:=\widetilde{\ensuremath{\mathcal{G}}}\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\mathbb{Q}}_p$ is a reductive group, and that $\widetilde{\ensuremath{\mathcal{G}}}=\widetilde{\ensuremath{\mathcal{G}}}_x$ for some $x\in \ensuremath{\mathcal{B}}(G,\ensuremath{\mathbb{Q}}_p)$ which is generic in its facet. We write $\ensuremath{\mathcal{G}}$ for the parahoric group scheme corresponding to $x$. Let $P\subset \ensuremath{\mathrm{GL}}(\ensuremath{\mathbb{D}})$ be a parabolic subgroup lifting the parabolic $P_0$ corresponding to the filtration on $\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_0)(k)$.
Write $\ensuremath{\mathrm{M}}^{\mathrm{loc}}=\ensuremath{\mathrm{GL}}(\ensuremath{\mathbb{D}})/P$ and let $\mathrm{Spf}\,A=\widehat{\ensuremath{\mathrm{M}}}^{\mathrm{loc}}$ denote the completion of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}$ at the identity; then $A$ is isomorphic to a power series ring over $\ensuremath{\breve{\mathbb{Z}}_p}$. Let $K'/\ensuremath{\breve{\mathbb{Q}}_p}$ be a finite extension and $y:A\rightarrow K'$ a continuous map such that $\ensuremath{s_{\alpha,0}}\in \mathrm{Fil}^0\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}K'$ for the filtration induced by $y$ on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}K'$. By \cite[Lemma 1.4.5]{Ki2}, the filtration corresponding to $y$ is induced by a $G$-valued cocharacter $\mu_y$. Let $G.y$ be the orbit of $y$ in $\ensuremath{\mathrm{M}}^{\mathrm{loc}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}K'$, which is defined over a finite extension $\ensuremath{\breve{E}}/\ensuremath{\breve{\mathbb{Q}}_p}$, and we write $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}}$ for the closure of this orbit in $\ensuremath{\mathrm{M}}^{\mathrm{loc}}$. \subsubsection{} Let $R$ be a complete local ring with maximal ideal $\mathfrak{m}$ and residue field $k$. We let $W(R)$ denote the Witt vectors of $R$. Recall from \cite{Zi1} that we have a subring $$\widehat{W}(R)=W(k)\oplus\mathbb{W}(\mathfrak{m})\subset W(R),$$ where $\mathbb{W}(\mathfrak{m})\subset W(R)$ consists of Witt vectors $(w_i)_{i\geq1}$ with $w_i\in\mathfrak{m}$ and $w_i\rightarrow 0$ in the $\mathfrak{m}$-adic topology. The Frobenius of $W(R)$ induces a map $\varphi:\widehat{W}(R)\rightarrow \widehat{W}(R)$, and we write $I_R$ for the kernel of the projection $\widehat{W}(R)\rightarrow R$. We recall the following definition, which is \cite[Definition 4.6]{Z} in the case that $G$ splits over a tamely ramified extension of $\ensuremath{\mathbb{Q}}_p$.
\begin{definition}\label{def: G-adapted} Let $K/\ensuremath{\breve{\mathbb{Q}}_p}$ be a finite extension. Let $\ensuremath{\mathscr{G}}$ be a $p$-divisible group over $\ensuremath{\mathcal{O}}_K$ whose special fiber is isomorphic to $\ensuremath{\mathscr{G}}_0$. We say $\ensuremath{\mathscr{G}}$ is $(\widetilde{\ensuremath{\mathcal{G}}},\mu_y)$-adapted if the tensors $\ensuremath{s_{\alpha,0}}$ extend to Frobenius invariant tensors $\widetilde{s}_\alpha\in\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}})(\ensuremath{\widehat{W}}(\ensuremath{\mathcal{O}}_K))^\otimes$ such that the following two conditions hold: \begin{enumerate} \item There is an isomorphism $\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}})(\ensuremath{\widehat{W}}(\ensuremath{\mathcal{O}}_K))\cong \ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\widehat{W}}(\ensuremath{\mathcal{O}}_K)$ taking $\widetilde{s}_\alpha$ to $\ensuremath{s_{\alpha,0}}$. \item Under the canonical identification $$\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}})(\ensuremath{\mathcal{O}}_K)\otimes_{\ensuremath{\mathcal{O}}_K}K\cong \ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}K$$ given by \cite[Lemma 3.1.17]{KP}, the filtration on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}K$ is induced by a $G$-valued cocharacter conjugate to $\mu_y$. \end{enumerate} \end{definition} \subsubsection{} Consider the local model triple $(G,\{\mu_y^{-1}\},\ensuremath{\mathcal{G}})$. We assume in addition that the following conditions are satisfied: \begin{equation}\label{eqn: assumption 2}\text{ The pair $(G,\{\mu_y^{-1}\})$ is regular and $p\nmid|\pi_1(G_{\ensuremath{\mathrm{der}}})|$}. 
\end{equation} \begin{equation}\label{eqn: assumption} \text{The embedding $G\subset \ensuremath{\mathrm{GL}}(U_{\ensuremath{\mathbb{Q}}_p})$ is good with respect to $U$.} \end{equation} \begin{equation}\label{eqn: assumption 4} \text{$G\subset \ensuremath{\mathrm{GL}}(U_{\ensuremath{\mathbb{Q}}_p})$ contains the scalars.} \end{equation} Under these assumptions, Corollary \ref{cor: embedding of local model into Grassmannian} implies that the definition of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}}$ above agrees with the local model $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_y^{-1}\}}\otimes_{\ensuremath{\mathcal{O}}_E}\ensuremath{\mathcal{O}}_{\breve E}$, cf. \cite[Example 3.3]{Z} regarding the sign convention for cocharacters defining local models. We write $\widehat{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}}\cong \mathrm{Spf}A_{\widetilde{\ensuremath{\mathcal{G}}}}$ for the completion of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}}$ at the identity element. By Theorem \ref{thm: Levin}, $A_{\widetilde{\ensuremath{\mathcal{G}}}}$ is normal and we have a natural surjective map $A\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\mathcal{O}}_{\breve E}\rightarrow A_{\widetilde{\ensuremath{\mathcal{G}}}}$ corresponding to the closed immersion $\widehat{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}}\subset \widehat{\ensuremath{\mathrm{M}}}^{\mathrm{loc}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\mathcal{O}}_{\breve E}$. \subsubsection{} We now apply the construction in \cite[3.2]{KP}; the following is essentially \cite[Proposition 3.2.17]{KP}. 
\begin{prop}\label{prop: versal deformation space tensors} There exists a versal $p$-divisible group $\ensuremath{\mathscr{G}}_A$ over $\mathrm{Spf}\, A\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\mathcal{O}}_{\ensuremath{\breve{E}}}$ deforming $\ensuremath{\mathscr{G}}_0$ such that for any $K/\ensuremath{\breve{\mathbb{Q}}_p}$ finite, a map $\varpi:A\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\mathcal{O}}_{\ensuremath{\breve{E}}}\rightarrow K$ factors through $A_{\widetilde{\ensuremath{\mathcal{G}}}}$ if and only if the $p$-divisible group $\ensuremath{\mathscr{G}}_{\varpi}$ given by the base change of $\ensuremath{\mathscr{G}}_A$ along $\varpi$ is $(\widetilde{\ensuremath{\mathcal{G}}},\mu_y)$-adapted. \end{prop} \begin{proof}Under our assumptions and using \cite[Proposition 10.3]{Anschutz} in place of \cite[Proposition 1.4.3]{KP}, we find that the conditions (3.2.2)--(3.2.4) of \cite{KP} are satisfied; we may thus apply the construction in \cite[\S3.2]{KP} to obtain $\ensuremath{\mathscr{G}}_A$. By construction, the base change $\ensuremath{\mathscr{G}}_{A_{\tilde{\ensuremath{\mathcal{G}}}}}:=\ensuremath{\mathscr{G}}_A\otimes_{A\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\mathcal{O}}_{\breve E}}{A_{\tilde{\ensuremath{\mathcal{G}}}}}$ is equipped with Frobenius invariant tensors $s_{\alpha,0,A_{\widetilde{\ensuremath{\mathcal{G}}}}}\in \ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_{A_{\tilde{\ensuremath{\mathcal{G}}}}})(\widehat{W}(A_{\tilde{\ensuremath{\mathcal{G}}}}))^\otimes$. It is then clear that for $\varpi:A_{\widetilde{\ensuremath{\mathcal{G}}}}\rightarrow K$, the tensors $s_{\alpha,0}$ extend to $$\widetilde{s}_\alpha\in \ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_{\varpi})(\widehat{W}(\ensuremath{\mathcal{O}}_K))^\otimes$$ so that Definition \ref{def: G-adapted} (1) is satisfied. Indeed the tensors $\widetilde{s}_\alpha$ are obtained from $s_{\alpha,0,A_{\widetilde{\ensuremath{\mathcal{G}}}}}$ via base change.
The argument in \cite[Proposition 4.7]{Z} shows that condition (2) is also satisfied, so that $\ensuremath{\mathscr{G}}_{\varpi}$ is $(\widetilde\ensuremath{\mathcal{G}},\mu_y)$-adapted. The converse is \cite[Proposition 3.2.17]{KP}. \end{proof} \subsection{Deformations with \'etale tensors} \subsubsection{} Let $K/\ensuremath{\breve{\mathbb{Q}}_p}$ be a finite extension and $\ensuremath{\mathscr{G}}$ a $p$-divisible group over $\ensuremath{\mathcal{O}}_K$ with special fiber $\ensuremath{\mathscr{G}}_0$. We write $T_p\ensuremath{\mathscr{G}}$ for the $p$-adic Tate module of $\ensuremath{\mathscr{G}}$ and $T_p\ensuremath{\mathscr{G}}^\vee$ for its linear dual. We let $s_{\alpha,\ensuremath{\mathrm{\acute{e}t}}}\in T_p\ensuremath{\mathscr{G}}^{\vee \otimes}$ be a collection of tensors whose stabilizer $\widetilde{\ensuremath{\mathcal{G}}}$ has reductive generic fiber $G$ and satisfies $\widetilde{\ensuremath{\mathcal{G}}}=\widetilde{\ensuremath{\mathcal{G}}}_x$ for some $x\in \ensuremath{\mathcal{B}}(G,\ensuremath{\mathbb{Q}}_p)$ which is generic in the facet containing it. We write $\ensuremath{\mathbb{D}}:=\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_0)(\ensuremath{\breve{\mathbb{Z}}_p})$ and we let $$s_{\alpha,0}\in D_{\mathrm{cris}}(T_p\ensuremath{\mathscr{G}}^\vee)^\otimes \simeq \ensuremath{\mathbb{D}}^\otimes\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\breve{\mathbb{Q}}_p}$$ denote the image of $s_{\alpha,\ensuremath{\mathrm{\acute{e}t}}}$ under the $p$-adic comparison isomorphism. \begin{prop}\label{prop: trivialization of Dieudonne}\begin{enumerate}\item We have $s_{\alpha,0}\in \ensuremath{\mathbb{D}}^\otimes.
$ Moreover, the $s_{\alpha,0}$ extend canonically to tensors $\widetilde{s}_\alpha\in\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}})(\widehat{W}(\ensuremath{\mathcal{O}}_K))^\otimes$ and there exists an isomorphism \begin{equation} \label{eqn: trivialization of display} T_p\ensuremath{\mathscr{G}}^\vee\otimes_{\ensuremath{\mathbb{Z}}_p}\widehat{W}(\ensuremath{\mathcal{O}}_K)\cong\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}})(\widehat{W}(\ensuremath{\mathcal{O}}_K)) \end{equation} taking $s_{\alpha,0}$ to $\widetilde{s}_\alpha$. \item There exists a $G$-valued cocharacter $\mu_y$ such that \begin{enumerate}[label=(\roman*)] \item Under the canonical isomorphism $$\gamma:\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}} K\cong \ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}})(\ensuremath{\mathcal{O}}_K)\otimes_{\ensuremath{\mathcal{O}}_K} K,$$ the filtration is induced by a $G$-valued cocharacter conjugate to $\mu_y$. \item The filtration on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}} K$ induced by $\mu_y$ lifts the filtration on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}k$. \end{enumerate} Here we consider $G_{\ensuremath{\breve{\mathbb{Q}}_p}}\subset \ensuremath{\mathrm{GL}}(\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\breve{\mathbb{Q}}_p})$ via base change of (\ref{eqn: trivialization of display}) to $\ensuremath{\breve{\mathbb{Q}}_p}$. \end{enumerate} \end{prop} \begin{proof} The argument is the same as \cite[Proposition 3.3.8, Corollary 3.3.10]{KP}, again using \cite[Proposition 10.3]{Anschutz} in place of \cite[Proposition 1.4.3]{KP}.
\end{proof} \subsubsection{}The isomorphism (\ref{eqn: trivialization of display}) induces an isomorphism $$T_p\ensuremath{\mathscr{G}}^\vee\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\breve{\mathbb{Z}}_p}\cong \ensuremath{\mathbb{D}}$$ taking $s_{\alpha,\ensuremath{\mathrm{\acute{e}t}}}$ to $s_{\alpha,0}$, which we now fix. Taking $T_p\ensuremath{\mathscr{G}}^\vee$ to be $U$, we place ourselves in the setting of \S\ref{subsec: deformation space with crystalline tensors}. It follows that we have a notion of $(\widetilde{\ensuremath{\mathcal{G}}},\mu_y)$-adapted lifting, where $\mu_y$ is as in Proposition \ref{prop: trivialization of Dieudonne}. Moreover, it follows from the same proposition that $\ensuremath{\mathscr{G}}$ itself is a $(\widetilde{\ensuremath{\mathcal{G}}},\mu_y)$-adapted lifting. The next proposition then follows immediately from Proposition \ref{prop: trivialization of Dieudonne} and the definition of $(\widetilde{\ensuremath{\mathcal{G}}},\mu_y)$-adapted liftings. \begin{prop}[{\cite[Proposition 3.3.13]{KP}}]\label{prop: G-adapted etale} Let $K'/\ensuremath{\breve{\mathbb{Q}}_p}$ be a finite extension and let ${\ensuremath{\mathscr{G}}}'$ be a deformation of $\ensuremath{\mathscr{G}}_0$ to $\ensuremath{\mathcal{O}}_{K'}$ such that: \begin{enumerate} \item The filtration on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}} K'$ corresponding to $\ensuremath{\mathscr{G}}'$ is induced by a $G$-valued cocharacter conjugate to $\mu_y$. \item The tensors $s_{\alpha,0}\in\ensuremath{\mathbb{D}}^\otimes$ correspond to tensors $s_{\alpha,\ensuremath{\mathrm{\acute{e}t}}}\in T_p\ensuremath{\mathscr{G}}'^{\vee\otimes}$ under the $p$-adic comparison isomorphism. \end{enumerate} Then ${\ensuremath{\mathscr{G}}}'$ is a $(\widetilde{\ensuremath{\mathcal{G}}},\mu_y)$-adapted lifting.
\end{prop}\qed \subsection{Canonical liftings for $\mu$-ordinary $p$-divisible groups}\label{sec: M-adapted} \subsubsection{}\label{sec: mu ordinary pdiv gp} We return to the setting of \S\ref{sec: adapted liftings}. Thus $\ensuremath{\mathscr{G}}_0$ is a $p$-divisible group over $k$ equipped with $s_{\alpha,0}\in \ensuremath{\mathbb{D}}^\otimes$. We fix a $\ensuremath{\breve{\mathbb{Z}}_p}$-linear isomorphism \begin{equation} \label{eqn: trivialization Dieudonne} U\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\breve{\mathbb{Z}}_p}\cong\ensuremath{\mathbb{D}} \end{equation} as in (\ref{eqn: triv Dieudonne}) so that $s_{\alpha,0}\in U^\otimes$. In \S\ref{sec: M-adapted}, we will assume in addition to (\ref{eqn: assumption 2})--(\ref{eqn: assumption 4}), that $\ensuremath{\mathcal{G}}$ is a connected parahoric so that $\ensuremath{\mathcal{G}}=\widetilde{\ensuremath{\mathcal{G}}}$. Since the $s_{\alpha,0}$ are $\varphi$-invariant, the Frobenius is given by $b\sigma$ for an element $b\in G(\ensuremath{\breve{\mathbb{Q}}_p})$, and modifying (\ref{eqn: trivialization Dieudonne}) by an element $h\in \ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p})$ modifies $b$ by $b\mapsto h^{-1}b\sigma(h)$. Therefore $b$ is well-defined up to $\sigma$-conjugation by an element of $\ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p})$ and in particular we obtain a well-defined class $[b]\in B(G)$. We choose a maximal $\ensuremath{\breve{\mathbb{Q}}_p}$-split torus $S$ of $G$ defined over $\ensuremath{\mathbb{Q}}_p$ such that $x\in \ensuremath{\mathcal{A}}(G,S,\ensuremath{\breve{\mathbb{Q}}_p})$ and we let $T$ denote its centralizer.
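For completeness, we record the elementary computation behind the transformation rule $b\mapsto h^{-1}b\sigma(h)$ above. Write $\iota$ for the isomorphism (\ref{eqn: trivialization Dieudonne}), so that the Frobenius $\varphi$ corresponds to $b\sigma=\iota^{-1}\circ\varphi\circ\iota$ on $U\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\breve{\mathbb{Q}}_p}$. Replacing $\iota$ by $\iota\circ h$ with $h\in\ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p})$ gives $$(\iota\circ h)^{-1}\circ\varphi\circ(\iota\circ h)=h^{-1}\circ(b\sigma)\circ h=h^{-1}b\,\sigma(h)\,\sigma,$$ where the last equality uses the semilinearity relation $\sigma\circ h=\sigma(h)\circ\sigma$.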
We fix a $\sigma$-stable alcove $\ensuremath{\mathfrak{a}}\subset \ensuremath{\mathcal{A}}(G,S,\ensuremath{\breve{\mathbb{Q}}_p})$ such that $x$ lies in the closure of $\ensuremath{\mathfrak{a}}$; thus $\ensuremath{\mathcal{G}}$ corresponds to a subset $J\subset \ensuremath{\mathbb{S}}$ of the set of simple reflections of $W$ determined by $\ensuremath{\mathfrak{a}}$. We follow the notation of \S\ref{sec: group theoretic} and let $\widetilde{\mu}\in X_*(T)$ denote the dominant (with respect to a choice of Borel defined over $\ensuremath{\breve{\mathbb{Q}}_p}$) representative of the conjugacy class $\{\mu_y\}$; we write ${\mu}$ for its image in $X_*(T)_I$. We have a closed immersion of local models $$\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_y^{-1}\}}\hookrightarrow \mathrm{Gr}(U)\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\mathcal{O}}_E,$$ where $\mathrm{Gr}(U)$ classifies submodules of $U$ of rank $\mathrm{dim}_k\mathrm{Fil}^0\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}} k.$ By definition, the filtration on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}k$ corresponds to an element of $\mathrm{Gr}(U)(k)$ which lies in $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_y^{-1}\}}(k)$. This filtration is by definition the kernel of $\varphi$; thus its preimage in $\ensuremath{\mathbb{D}}$ is given by $$\{v\in\ensuremath{\mathbb{D}}|b\sigma(v)\in p\ensuremath{\mathbb{D}}\}.$$ This is just the $\ensuremath{\breve{\mathbb{Z}}_p}$-lattice $\sigma^{-1}(b^{-1})p\ensuremath{\mathbb{D}}$. 
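Let us spell out the last identification. Since $\sigma$ is a Frobenius-semilinear automorphism of $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\breve{\mathbb{Q}}_p}$ preserving both $\ensuremath{\mathbb{D}}$ and $p\ensuremath{\mathbb{D}}$, we have $$b\sigma(v)\in p\ensuremath{\mathbb{D}}\iff \sigma(v)\in b^{-1}p\ensuremath{\mathbb{D}}\iff v\in\sigma^{-1}(b^{-1}p\ensuremath{\mathbb{D}})=\sigma^{-1}(b^{-1})\,p\ensuremath{\mathbb{D}}.$$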
It follows from Corollary \ref{cor: mu admissible mixed char} that $\sigma^{-1}(b^{-1})\in \ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p})\dot{w}\ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p})$ for some element $w\in\ensuremath{\mathrm{Adm}}(\{\mu_y^{-1}\})_J$, and hence that $b\in\ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p})\sigma(\dot{u})\ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p})$ for some $u\in \ensuremath{\mathrm{Adm}}(\{\mu_y\})_J$ (one may take $u=w^{-1}$: inverting the double coset and applying $\sigma$, which preserves $\ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p})$ since $\ensuremath{\mathcal{G}}$ is defined over $\ensuremath{\mathbb{Z}}_p$, gives $b\in\ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p})\sigma(\dot{w}^{-1})\ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p})$, and inversion preserves the Bruhat order and carries $\ensuremath{\mathrm{Adm}}(\{\mu_y^{-1}\})_J$ onto $\ensuremath{\mathrm{Adm}}(\{\mu_y\})_J$). In particular we have $[\sigma^{-1}(b)]\in B(G,\{\mu_y\})$ by \cite[Theorem 1.1]{He3}. \subsubsection{}Now assume the existence of $[b]_{\mu}\in B(G,\{\mu_y\})$ as in Definition \ref{def: mu ordinary}, and that $\sigma^{-1}(b)\in [b]_{\mu}$. We will construct a $(\ensuremath{\mathcal{G}},\mu_y)$-adapted (recall $\widetilde{\ensuremath{\mathcal{G}}}=\ensuremath{\mathcal{G}}$) deformation of $\ensuremath{\mathscr{G}}_0$ which will be the analogue of the Serre--Tate canonical lifting in this context. By Proposition \ref{prop: F-crystal basis} applied to $\sigma^{-1}(b)$, there exists an element $h\in \ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p})$ such that $h^{-1}b\sigma(h)=\sigma(\dot{t}_{\mu'})$ for some $\mu'\in W_0\cdot\mu$ with $t_{\mu'}$ $\sigma$-straight. Upon modifying the isomorphism (\ref{eqn: trivialization Dieudonne}), we may assume $b=\sigma(\dot t_{\mu'})$; we fix this choice of (\ref{eqn: trivialization Dieudonne}) from now on. Let $M$ be the semistandard Levi subgroup of $G$ corresponding to $\nu_{t_{\mu'}}=\nu_{\sigma(t_{\mu'})}$; then $t_{\mu'}$ is central in $W_M$ by Lemma \ref{lemma: cochar central}. Let $w\in W_0$ be such that $w\cdot{\mu}=\mu'$ and write $\widetilde{\lambda}:=w\cdot\widetilde{\mu}$; then by Lemma \ref{lemma: cochar central}, $\widetilde{\lambda}$ is central in $M$.
Let $$\ensuremath{\mathcal{M}}(\ensuremath{\breve{\mathbb{Z}}_p}):=M(\ensuremath{\breve{\mathbb{Q}}_p})\cap\ensuremath{\mathcal{G}}(\ensuremath{\breve{\mathbb{Z}}_p});$$ it is the $\ensuremath{\breve{\mathbb{Z}}_p}$-points of a parahoric group scheme $\ensuremath{\mathcal{M}}$ of $M$ defined over $\ensuremath{\mathbb{Z}}_p$. Since $\ensuremath{\mathcal{G}}$ is a connected parahoric and $\pi_1(M)_I\rightarrow \pi_1(G)_I$ has torsion-free kernel, it follows that $\ensuremath{\mathcal{M}}$ is a connected parahoric. \begin{lemma} Let $K$ be the field of definition of $\widetilde{\lambda}$. The filtration induced by $\widetilde{\lambda}$ on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}} K$ specializes to $\mathrm{Fil}^0\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}} k$. \end{lemma} \begin{proof} Let $G\subset G'$ where $G'$ is as in Definition \ref{def: embedding into GL(V)} and let $\ensuremath{\mathcal{G}}'$ be the corresponding parahoric. The cocharacter $\widetilde{\lambda}$ determines a $K$-point $s_{\widetilde{\lambda}^{-1}}$ of $\ensuremath{\mathrm{Gr}}_{G'}$ which lies in $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_y^{-1}\}}$ (cf. \cite[Example 3.3]{Z} for the sign convention) and whose image in $\ensuremath{\mathrm{M}}^{\mathrm{loc}}=\ensuremath{\mathrm{Gr}}(U)\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\breve{\mathbb{Z}}_p}$ corresponds to the filtration induced by $\widetilde{\lambda}$. The geometric special fiber of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_y^{-1}\}}$ is a closed subscheme of $\mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}'_{k[[u]]}}$ where $\underline{\ensuremath{\mathcal{G}}}'_{k[[u]]}$ is a $k[[u]]$-group scheme associated to $\ensuremath{\mathcal{G}}'$ as in \S\ref{sec: Local models for weil restricted groups}. 
By \cite[Proposition 4.2.8]{Levin}, $s_{\widetilde{\lambda}^{-1}}$ extends to an $\ensuremath{\mathcal{O}}_{\ensuremath{\breve{K}}}$-point of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_y^{-1}\}}$ whose special fiber is the point $\dot{\underline{t}}_{\mu'}^{-1}$. Here $\underline{t}_{\mu'}^{-1}$ is the element of the Iwahori Weyl group for $\underline{G}'_{k((u))}:=\underline{\ensuremath{\mathcal{G}}}'_{k[[u]]}\otimes_{k[[u]]}k((u))$ corresponding to $t_{\mu'}^{-1}$ under the identification of Iwahori Weyl groups (\ref{eqn: id of Iwahori Weyl groups 2}). By construction of the embedding $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}}(k)\hookrightarrow \ensuremath{\mathrm{GL}}_U(\ensuremath{\breve{\mathbb{Q}}_p})/\ensuremath{\mathrm{GL}}_{U}(\ensuremath{\breve{\mathbb{Z}}_p})$ in \S\ref{sec: embedding into Grassmannian} (cf. proof of Proposition \ref{prop: mu admissible mixed char}), the filtration on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}k$ corresponding to the image of $\dot{\underline{t}}_{\mu'}^{-1}$ in $\ensuremath{\mathrm{M}}^{\mathrm{loc}}(k)$ is given by the reduction mod $p$ of $\dot{t}_{\mu'}^{-1}p\ensuremath{\mathbb{D}}=\sigma^{-1}(b^{-1})p\ensuremath{\mathbb{D}}$. The lemma follows. \end{proof} \subsubsection{} We extend the tensors $s_{\alpha,0}\in U^\otimes$ to tensors $t_{\beta,0}\in U^\otimes$ whose stabilizer is $\ensuremath{\mathcal{M}}$. Viewed in $\ensuremath{\mathbb{D}} \simeq U\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\breve{\mathbb{Z}}_p},$ the $t_{\beta,0}$ are $\varphi$-invariant as $b=\sigma(\dot t_{\mu'}) \in M(\ensuremath{\breve{\mathbb{Q}}_p}).$ Since $\widetilde\lambda$ is an $M$-valued cocharacter, we may apply the construction in \S\ref{sec: adapted liftings} to $M$ and the tensors $t_{\beta,0}$. In particular we have a notion of $(\mathcal{M},\widetilde\lambda)$-adapted liftings of $\ensuremath{\mathscr{G}}_0$.
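The $\varphi$-invariance asserted above is the following direct verification: the $t_{\beta,0}$ lie in $U^\otimes$ and are therefore fixed by $\sigma$, while their stabilizer contains $M$, so
\begin{equation*}
\varphi(t_{\beta,0})=b\sigma(t_{\beta,0})=b\cdot t_{\beta,0}=t_{\beta,0}, \qquad b=\sigma(\dot t_{\mu'})\in M(\ensuremath{\breve{\mathbb{Q}}_p}).
\end{equation*}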
It is clear from the definition that any $(\mathcal{M},\widetilde\lambda)$-adapted lifting is also a $(\ensuremath{\mathcal{G}},\mu_y)$-adapted lifting. \subsubsection{} \label{sec: M-adapted 2} Recall the $\sigma$-centralizer group $$J_b(\ensuremath{\mathbb{Q}}_p):=\{g\in G(\ensuremath{\breve{\mathbb{Q}}_p})|g^{-1}b\sigma(g)=b\}.$$ There is an action of $J_b(\ensuremath{\mathbb{Q}}_p)$ on $\ensuremath{\mathscr{G}}_0$ in the isogeny category. Since $\nu_{g^{-1}b\sigma(g)}=g^{-1}\nu_bg$ for any $g\in G(\ensuremath{\breve{\mathbb{Q}}_p})$, it follows that for $b=\sigma(\dot{t}_{\mu'})$, we have $J_b(\ensuremath{\mathbb{Q}}_p)\subset M(\ensuremath{\breve{\mathbb{Q}}_p})$. \begin{thm}\label{thm: can-lift}Let $K/\ensuremath{\breve{\mathbb{Q}}_p}$ be an extension over which $\widetilde{\lambda}$ is defined and suppose $\widetilde{\ensuremath{\mathcal{G}}}=\ensuremath{\mathcal{G}}$. There exists a $(\ensuremath{\mathcal{G}},\mu_y)$-adapted lifting ${\ensuremath{\mathscr{G}}}$ to $\ensuremath{\mathcal{O}}_K$ such that the action of $J_b(\ensuremath{\mathbb{Q}}_p)$ on $\ensuremath{\mathscr{G}}_0$ lifts to ${\ensuremath{\mathscr{G}}}$ in the isogeny category. \end{thm} \begin{proof}Suppose there exists an $(\ensuremath{\mathcal{M}},\widetilde{\lambda})$-adapted lifting $\ensuremath{\mathscr{G}}$ of $\ensuremath{\mathscr{G}}_0$; from the above discussion, we have that $\ensuremath{\mathscr{G}}$ is also a $(\ensuremath{\mathcal{G}},\mu_y)$-adapted lifting. By Definition \ref{def: G-adapted} (2), the filtration on the weakly admissible filtered $\varphi$-module associated to $T_p{\ensuremath{\mathscr{G}}}^\vee$ is induced by an $M$-valued cocharacter conjugate to $\widetilde{\lambda}$, hence by $\widetilde{\lambda}$ itself since it is central in $M$. 
Since $J_b(\ensuremath{\mathbb{Q}}_p)\subset M(\ensuremath{\breve{\mathbb{Q}}_p})$, the action of $J_b(\ensuremath{\mathbb{Q}}_p)$ respects the filtration and hence lifts to an action on $\ensuremath{\mathscr{G}}$ in the isogeny category. It suffices to show the existence of an $(\ensuremath{\mathcal{M}},\widetilde{\lambda})$-adapted lifting. This follows from the same argument as \cite[Proposition 4.9]{Z}. \end{proof} \section{Integral models of Shimura varieties and canonical liftings}\label{sec: integral models for Shimura + canonical lifts} \subsection{Integral models } \label{subsec: integral models Hodge type} \subsubsection{}\label{subsec: integral models Hodge type preamble}For the rest of this paper we fix an algebraic closure $\overline{\ensuremath{\mathbb{Q}}}$, and for each place $v$ of $\ensuremath{\mathbb{Q}}$ (including $v=\infty$) an algebraic closure $\overline{\ensuremath{\mathbb{Q}}}_v$ together with an embedding $i_v:\overline{\ensuremath{\mathbb{Q}}}\rightarrow \overline{\ensuremath{\mathbb{Q}}}_v$ (here $\overline{\ensuremath{\mathbb{Q}}}_\infty\cong \ensuremath{\mathbb{C}}$). Let $\ensuremath{\mathbf{G}}$ be a reductive group over $\ensuremath{\mathbb{Q}}$ and $X$ a $\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{R}}}$-conjugacy class of homomorphisms $$h:\mathbb{S}:=\text{Res}_{\mathbb{C}/\R}\mathbb{G}_m\rightarrow \ensuremath{\mathbf{G}}_\mathbb{R}$$ such that $(\ensuremath{\mathbf{G}},X)$ is a Shimura datum in the sense of \cite{De}. Let $c$ be complex conjugation. Then $\ensuremath{\mathbb{S}}(\ensuremath{\mathbb{C}})=(\ensuremath{\mathbb{C}}\otimes_{\R}\ensuremath{\mathbb{C}})^\times\cong \ensuremath{\mathbb{C}}^\times \times c^*(\ensuremath{\mathbb{C}}^\times)$ and we write $\mu_h$ for the cocharacter given by $$\ensuremath{\mathbb{C}}^\times\rightarrow \ensuremath{\mathbb{C}}^\times\times c^*(\ensuremath{\mathbb{C}}^\times)\xrightarrow h \ensuremath{\mathbf{G}}(\ensuremath{\mathbb{C}}).$$ We set $w_h:=\mu_h^{-1}\mu_h^{c-1}$. 
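With the usual convention (assumed here) that the unlabelled first map is $z\mapsto (z,1)$, and writing ${}^c\mu_h$ for the conjugate cocharacter $z\mapsto h_{\ensuremath{\mathbb{C}}}(1,z)$, so that $\mu_h^{c-1}$ denotes $({}^c\mu_h)^{-1}$, we have
\begin{equation*}
h_{\ensuremath{\mathbb{C}}}(z,z)=h_{\ensuremath{\mathbb{C}}}(z,1)\,h_{\ensuremath{\mathbb{C}}}(1,z)=\mu_h(z)\,{}^c\mu_h(z);
\end{equation*}
thus $w_h$ is the inverse of the restriction of $h$ to the diagonal $\mathbb{G}_{m,\R}\subset\ensuremath{\mathbb{S}}$, i.e.\ the inverse of the weight homomorphism.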
Let $\ensuremath{\mathbb{A}}_f$ denote the ring of finite adeles and $\ensuremath{\mathbb{A}}_f^p$ the subring of $\ensuremath{\mathbb{A}}_f$ with trivial $p$-component. Let $\ensuremath{\mathrm{K}}_p\subset \ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_p)$ and $\ensuremath{\mathrm{K}}^p\subset\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f^p)$ be compact open subgroups and write $\ensuremath{\mathrm{K}}:=\ensuremath{\mathrm{K}}_p\ensuremath{\mathrm{K}}^p$. Then \begin{equation}\label{eqn: points of Shimura var}\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathbb{C}}}=\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}})\backslash X\times \ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f)/\ensuremath{\mathrm{K}}\end{equation}can be identified with the complex points of a smooth algebraic stack over $\ensuremath{\mathbb{C}}$. The theory of canonical models implies that $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathbb{C}}}$ has a model $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ over the reflex field $\ensuremath{\mathbf{E}}\subset \ensuremath{\mathbb{C}}$, which is defined to be the field of definition of the conjugacy class $\{\mu_h\}$. We may consider $\ensuremath{\mathbf{E}}$ as a subfield of $\overline{\ensuremath{\mathbb{Q}}}$ via the embedding $i_\infty:\overline{\ensuremath{\mathbb{Q}}}\hookrightarrow \ensuremath{\mathbb{C}}$ and we write $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbf{E}}}$ for the ring of integers of $\ensuremath{\mathbf{E}}$. If $\ensuremath{\mathrm{K}}^p$ is sufficiently small (it suffices that $\ensuremath{\mathrm{K}}^p$ be neat), then $\mathrm{Sh}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ is an algebraic variety.
We also define $$\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}},X):=\lim_{\leftarrow\ensuremath{\mathrm{K}}^p}\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_p\ensuremath{\mathrm{K}}^p}(\ensuremath{\mathbf{G}},X)$$ $$ \ensuremath{\mathrm{Sh}}(\ensuremath{\mathbf{G}},X):=\lim_{\leftarrow\ensuremath{\mathrm{K}}}\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X);$$ these are pro-varieties equipped with actions of $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f^p)$ and $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f)$ respectively. \subsubsection{}We now assume that there is an embedding of Shimura data $$\iota:(\ensuremath{\mathbf{G}},X)\rightarrow (\mathbf{GSp}(V),S^{\pm}).$$Here $\mathbf{GSp}(V)$ is the group of symplectic similitudes of a $\ensuremath{\mathbb{Q}}$-vector space $V$ equipped with a perfect alternating bilinear form $\Psi$, and $S^\pm$ is the Siegel double space. Fix a prime $p>2$ and let $v$ be the prime of $\ensuremath{\mathbf{E}}$ above $p$ induced by the embedding $i_p:\overline{\ensuremath{\mathbb{Q}}}\rightarrow \overline{\ensuremath{\mathbb{Q}}}_p$. We let $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbf{E}}_{(v)}}$ denote the localization of $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbf{E}}}$ at $v$, and we write $E$ for the completion of $\ensuremath{\mathbf{E}}$ at $v$. We let $k_E$ denote the residue field at $v$ and we fix an algebraic closure $k$ of $k_E$. Set $G:=\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Q}}_p}$. We let $\widetilde{\ensuremath{\mathcal{G}}}:=\widetilde{\ensuremath{\mathcal{G}}}_x$ for some $x\in \ensuremath{\mathcal{B}}(G,\ensuremath{\mathbb{Q}}_p)$ which is generic in the facet containing it and we write $\ensuremath{\mathcal{G}}$ for the associated parahoric group scheme. For the rest of \S\ref{subsec: integral models Hodge type}, we make the following assumption.
\begin{equation}\label{eqn: Shim ass 1}\text{$(G,\{\mu_h\})$ is regular and $p\nmid |\pi_1(G_{\ensuremath{\mathrm{der}}})|$}.\end{equation} Then arguing as in \cite[2.3.15]{KP} (cf. Corollary \ref{cor: embedding of local model into Grassmannian}), upon replacing $\iota$ by another Hodge embedding, we may assume that the local Hodge embedding $\iota_{\ensuremath{\mathbb{Q}}_p}:G\rightarrow \mathrm{GSp}(V_{\ensuremath{\mathbb{Q}}_p})$ is a good embedding. In this case, we say that $\iota$ itself is a good Hodge embedding. \subsubsection{} \label{subsub: integral model Hodge type construction} We set $\widetilde{\ensuremath{\mathrm{K}}}_p:=\widetilde{\ensuremath{\mathcal{G}}}(\ensuremath{\mathbb{Z}}_p)$ and $\ensuremath{\mathrm{K}}_p:=\ensuremath{\mathcal{G}}(\ensuremath{\mathbb{Z}}_p)$, and we let $\widetilde{\ensuremath{\mathrm{K}}}:=\widetilde{\ensuremath{\mathrm{K}}}_p\ensuremath{\mathrm{K}}^p$ and $\ensuremath{\mathrm{K}}:=\ensuremath{\mathrm{K}}_p\ensuremath{\mathrm{K}}^p$. Let $\iota:(\ensuremath{\mathbf{G}},X)\rightarrow (\mathbf{GSp}(V),S^{\pm})$ be a good embedding and let $V_{\ensuremath{\mathbb{Z}}_p}\subset V_{\ensuremath{\mathbb{Q}}_p}$ be a $\ensuremath{\mathbb{Z}}_p$-lattice with $V_{\ensuremath{\mathbb{Z}}_p}\subset V_{\ensuremath{\mathbb{Z}}_p}^\vee$ and such that $G\rightarrow \ensuremath{\mathrm{GL}}(V_{\ensuremath{\mathbb{Q}}_p})$ is good with respect to $V_{\ensuremath{\mathbb{Z}}_p}$. Let $V_{\ensuremath{\mathbb{Z}}_{(p)}}=V_{\ensuremath{\mathbb{Z}}_p}\cap V$. We write $G_{\ensuremath{\mathbb{Z}}_{(p)}}$ for the Zariski closure of $\ensuremath{\mathbf{G}}$ in $\ensuremath{\mathrm{GL}}(V_{\ensuremath{\mathbb{Z}}_{(p)}})$; then $G_{\ensuremath{\mathbb{Z}}_{(p)}}\otimes_{\ensuremath{\mathbb{Z}}_{(p)}}\ensuremath{\mathbb{Z}}_p\cong \widetilde \ensuremath{\mathcal{G}}$. 
Let $\ensuremath{\mathrm{K}}'=\ensuremath{\mathrm{K}}_p'\ensuremath{\mathrm{K}}'^p$ where $\ensuremath{\mathrm{K}}'_p$ is the stabilizer in $\ensuremath{\mathrm{GSp}}(V_{\ensuremath{\mathbb{Q}}_p})$ of the lattice $V_{\ensuremath{\mathbb{Z}}_p}$ and $\ensuremath{\mathrm{K}}'^p \subset \mathbf{GSp}(\ensuremath{\mathbb{A}}_f^p)$ is a compact open subgroup. The choice of $V_{\ensuremath{\mathbb{Z}}_{(p)}}$ gives rise to an interpretation of $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}'}(\mathbf{GSp},S^\pm)$ as a moduli stack of abelian varieties up to prime-to-$p$ isogeny and hence an integral model $\mathscr{S}_{\ensuremath{\mathrm{K}}'}(\mathbf{GSp},S^\pm)$ over $\ensuremath{\mathbb{Z}}_{(p)}$, see \cite[\S 4]{KP} and \cite[\S6]{Z}. Assume that $\ensuremath{\mathrm{K}}^p$ is a neat compact open subgroup. By \cite[Lemma 2.1.2]{Ki2}, we can choose $\ensuremath{\mathrm{K}}'^p$ such that $\iota$ induces a closed immersion: $$\ensuremath{\mathrm{Sh}}_{\widetilde{\ensuremath{\mathrm{K}}}}(\ensuremath{\mathbf{G}},X)\hookrightarrow \ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}'}(\mathbf{GSp},S^\pm)\otimes_{\ensuremath{\mathbb{Q}}}\ensuremath{\mathbf{E}}.$$ Let $\mathscr{S}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)^-$ be the Zariski closure of $\ensuremath{\mathrm{Sh}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ inside $\mathscr{S}_{\ensuremath{\mathrm{K}}'}(\mathbf{GSp},S^\pm)\otimes_{\ensuremath{\mathbb{Z}}_{(p)}}\ensuremath{\mathcal{O}}_{\ensuremath{\mathbf{E}}_{(v)}}$, and let $\mathscr{S}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ be the normalization of $\mathscr{S}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)^-$.
We also define the pro-scheme $$\ensuremath{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}},X):=\lim_{\leftarrow\ensuremath{\mathrm{K}}^p}\ensuremath{\mathscr{S}}_{\widetilde{\ensuremath{\mathrm{K}}}_p\ensuremath{\mathrm{K}}^p}(\ensuremath{\mathbf{G}},X).$$ The $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f^p)$-action on $\mathrm{Sh}_{\widetilde{\ensuremath{\mathrm{K}}}_p}(\ensuremath{\mathbf{G}},X)$ extends to $\ensuremath{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}},X)$. Hence we may define $\ensuremath{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}_p\ensuremath{\mathrm{K}}^p}(\ensuremath{\mathbf{G}},X)$ for a general (not necessarily neat) compact open subgroup $\ensuremath{\mathrm{K}}^p\subset\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f^p)$ as the quotient stack $\ensuremath{\mathscr{S}}_{{\widetilde\ensuremath{\mathrm{K}}_p}}(\ensuremath{\mathbf{G}},X)/\ensuremath{\mathrm{K}}^p$. Alternatively, we may take a compact open subgroup $\ensuremath{\mathrm{K}}_1^p\subset \ensuremath{\mathrm{K}}^p$ which is neat and normal in $\ensuremath{\mathrm{K}}^p$, and define $\ensuremath{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ as the quotient of $\ensuremath{\mathscr{S}}_{{\widetilde\ensuremath{\mathrm{K}}_p}\ensuremath{\mathrm{K}}^p_1}(\ensuremath{\mathbf{G}},X)$ under the action of the finite group $\ensuremath{\mathrm{K}}^p/\ensuremath{\mathrm{K}}_1^p$. \subsubsection{}\label{subsubsec:hodgecycles} In order to understand the local structure of $\ensuremath{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$, we need to introduce Hodge cycles. By \cite[Proposition 1.3.2]{Ki2}, the subgroup $G_{\ensuremath{\mathbb{Z}}_{(p)}}$ is the stabilizer of a collection of tensors $s_\alpha\in V_{\ensuremath{\mathbb{Z}}_{(p)}}^\otimes$.
Let $h:\mathcal{A}\rightarrow \mathscr{S}_{{\widetilde\ensuremath{\mathrm{K}}}}(\ensuremath{\mathbf{G}},X)$ denote the pullback of the universal abelian scheme on $\mathscr{S}_{\ensuremath{\mathrm{K}}'}(\mathbf{GSp},S^\pm)$ and let $V_{\mathrm{B}}:=R^1h_{\mathrm{an},*}\ensuremath{\mathbb{Z}}_{(p)}$, where $h_{\mathrm{an}}$ is the map of complex analytic spaces associated to $h$. Since the tensors $s_\alpha$ are $\ensuremath{\mathbf{G}}$-invariant, they give rise to sections $s_{\alpha,\ensuremath{\mathrm{B}}}\in V_{\mathrm{B}}^\otimes$. We also let $\mathcal{V}=R^1h_*\Omega^\bullet$ be the relative de Rham cohomology of $\mathcal{A}$. Using the de Rham isomorphism, the $s_{\alpha,\mathrm{B}}$ give rise to a collection of Hodge cycles $s_{\alpha,\mathrm{dR}}\in \mathcal{V}_\ensuremath{\mathbb{C}}^\otimes$, where $\mathcal{V}_\ensuremath{\mathbb{C}}$ is the complex analytic vector bundle associated to $\mathcal{V}$. By \cite[Corollary 2.2.2]{Ki2}, these tensors are defined over $\ensuremath{\mathbf{E}}$. Similarly for a finite prime $\ell\neq p$, we let $\mathcal{V}_\ell = \mathcal{V}_\ell(\ensuremath{\mathcal{A}}) =R^1h_{\mathrm{\acute{e}t}*}\ensuremath{\mathbb{Q}}_\ell$ and $\mathcal{V}_p = \mathcal{V}_p(\ensuremath{\mathcal{A}}) = R^1h_{\eta,\mathrm{\acute{e}t}*}\ensuremath{\mathbb{Z}}_p$ where $h_\eta$ is the generic fibre of $h$. Using the \'etale-Betti comparison isomorphism, we obtain tensors $s_{\alpha,\ell}\in \mathcal{V}^\otimes_\ell$ and $s_{\alpha,p}\in\mathcal{V}_p^\otimes$. For $T$ an $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbf{E}}_{(v)}}$-scheme (resp. $\mathbf{E}$-scheme, resp. $\mathbb{C}$-scheme), $*=\ell$ or $\mathrm{dR}$ (resp. $*=p$, resp. $*=\mathrm{B}$) and $x\in \mathscr{S}_{{\widetilde\ensuremath{\mathrm{K}}}}(\ensuremath{\mathbf{G}},X)(T)$, we write $\mathcal{A}_x$ for the pullback of $\mathcal{A}$ to $x$ and $s_{\alpha,*,x}$ for the pullback of $s_{\alpha,*}$ to $x$.
For $T$ an $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbf{E}}_{(v)}}$-scheme, an element $x\in\mathscr{S}_{{\widetilde\ensuremath{\mathrm{K}}}}(\ensuremath{\mathbf{G}},X)(T)$ corresponds to a triple $(\mathcal{A}_x,\lambda,\epsilon_{\ensuremath{\mathrm{K}}'}^p)$, where $\lambda$ is a weak polarization (cf. \cite[\S6.3]{Z}) and $\epsilon^p_{\ensuremath{\mathrm{K}}'}$ is a section of the \'etale sheaf $\underline{\mathrm{Isom}}_{\lambda,\psi}(\widehat{V}(\mathcal{A}_x),V_{\ensuremath{\mathbb{A}}_f^p})/\ensuremath{\mathrm{K}}'^p$; here $$\widehat{V}(\ensuremath{\mathcal{A}}_x) =\varprojlim_{p\nmid n}\ensuremath{\mathcal{A}}_x[n] $$ is the adelic prime to $p$ Tate module of $\ensuremath{\mathcal{A}}_x.$ As in \cite[\S3.4.2]{Ki2}, $\epsilon^p_{\ensuremath{\mathrm{K}}'}$ can be promoted to a section $$\epsilon_{\ensuremath{\mathrm{K}}}^p\in\Gamma(T,\underline{\mathrm{Isom}}_{\lambda,\psi}(\widehat{V}(\mathcal{A}_x),V_{\ensuremath{\mathbb{A}}_f^p})/\ensuremath{\mathrm{K}}^p)$$ which takes $s_{\alpha,\ell,x}$ to $s_{\alpha}$ for $\ell\neq p.$ \subsubsection{}\label{subsec: integral models formal nbd} Recall that $k$ is an algebraic closure of $k_E$ and $\ensuremath{\breve{\mathbb{Q}}_p}=W(k)[1/p]$. Let $\overline{x}\in\mathscr{S}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)(k)$ and $\widetilde{x}\in\mathscr{S}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)(\ensuremath{\mathcal{O}}_K)$ a point lifting $\overline{x}$, where $K/\ensuremath{\breve{\mathbb{Q}}_p}$ is a finite extension. Let $\ensuremath{\mathscr{G}}_{\widetilde{x}}$ denote the $p$-divisible group associated to $\mathcal{A}_{\widetilde{x}}$ and $\ensuremath{\mathscr{G}}_{\ensuremath{{\overline{x}}}}$ its special fiber; we let $\ensuremath{\mathbb{D}}:=\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_{\ensuremath{{\overline{x}}}})(\ensuremath{\breve{\mathbb{Z}}_p})$. 
Then $T_p\ensuremath{\mathscr{G}}_{\widetilde{x}}^\vee$ is identified with $\ensuremath{\mathrm{H}}^1_{\mathrm{\acute{e}t}}(\mathcal{A}_{\widetilde{x},\overline{K}},\ensuremath{\mathbb{Z}}_p)$ and we obtain $\mathrm{Gal}(\overline{K}/K)$-invariant tensors $s_{\alpha,p,\widetilde{x}}\in T_p\ensuremath{\mathscr{G}}^{\vee\otimes}_{\widetilde x}$ whose stabilizer can be identified with $\widetilde{\ensuremath{\mathcal{G}}}$. Let $s_{\alpha,0,\widetilde{x}}\in\ensuremath{\mathbb{D}}[\frac{1}{p}]^\otimes$ denote the tensors corresponding to $s_{\alpha,p,\widetilde{x}}$ via the $p$-adic comparison isomorphism. By \cite[Proposition 1.3.7]{KMS}, the $s_{\alpha,0,\widetilde{x}}$ are independent of the choice of lifting $\widetilde{x}\in\ensuremath{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)(\ensuremath{\mathcal{O}}_K)$. We may therefore denote them by $s_{\alpha,0,\overline{x}}$. By Proposition \ref{prop: trivialization of Dieudonne}, we have $s_{\alpha,0,\overline{x}}\in\ensuremath{\mathbb{D}}^\otimes$ and there is a $\ensuremath{\breve{\mathbb{Z}}_p}$-linear bijection \begin{equation}\label{eqn: trivialization Dieudonne rational}V^\vee_{\ensuremath{\mathbb{Z}}_p}\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\breve{\mathbb{Z}}_p}\cong T_p\ensuremath{\mathscr{G}}_{\widetilde{x}}^\vee\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\breve{\mathbb{Z}}_p}\cong\ensuremath{\mathbb{D}}\end{equation} taking $s_{\alpha}$ to $s_{\alpha,0,\overline{x}}$. The filtration on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}K$ corresponding to $\ensuremath{\mathscr{G}}_{\widetilde{x}}$ is induced by a $G$-valued cocharacter conjugate to $\mu_h^{-1}$.
By a result of Blasius and Wintenberger \cite{Bl}, $s_{\alpha,\mathrm{dR},\widetilde{x}}\in \widetilde{x}^*(\ensuremath{\mathcal{V}})^\otimes \cong\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_{\widetilde{x}})(\ensuremath{\mathcal{O}}_K)^\otimes$ corresponds to $s_{\alpha,p,\widetilde{x}}$ via the $p$-adic comparison isomorphism. Hence $s_{\alpha,\mathrm{dR},\widetilde{x}}$ may be identified with the image of the elements $\widetilde{s}_\alpha\in \ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_{\widetilde{x}})(\widehat{W}( \ensuremath{\mathcal{O}}_K))^\otimes$ of Proposition \ref{prop: trivialization of Dieudonne} inside $\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_{\widetilde{x}})( \ensuremath{\mathcal{O}}_K)^\otimes$. The same Proposition implies that there is an $\ensuremath{\mathcal{O}}_K$-linear bijection $$\ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_{\widetilde{x}})(\ensuremath{\mathcal{O}}_K)\cong\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\mathcal{O}}_K$$ taking $s_{\alpha,\mathrm{dR},\widetilde{x}}$ to $s_{\alpha,0,\overline{x}}$ and which lifts the identity over $k$. Thus there is a $G$-valued cocharacter $\mu_y$ which is $G$-conjugate to $\mu_h^{-1}$ and which induces a filtration on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\mathcal{O}}_K$ lifting the filtration on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}k$. We may therefore define the notion of $(\widetilde{\ensuremath{\mathcal{G}}},\mu_y)$-adapted liftings as in \S\ref{sec: deformation theory} and it follows from Proposition \ref{prop: trivialization of Dieudonne} that $\ensuremath{\mathscr{G}}_{\widetilde{x}}$ is a $(\widetilde{\ensuremath{\mathcal{G}}},\mu_y)$-adapted lifting. \subsubsection{}Note that $G\subset \ensuremath{\mathrm{GL}}(V_{\ensuremath{\mathbb{Q}}_p})$ contains the scalars since it contains the image of $w_h$. 
It follows that under our assumptions, conditions (\ref{eqn: assumption 2})--(\ref{eqn: assumption 4}) are satisfied. We let $P\subset \ensuremath{\mathrm{GL}}(\ensuremath{\mathbb{D}})$ be a parabolic subgroup lifting $P_0$ as in \S\ref{sec: adapted liftings}. We obtain formal local models $\widehat{\ensuremath{\mathrm{M}}}^{\mathrm{loc}}=\mathrm{Spf}A$ and $\widehat{\ensuremath{\mathrm{M}}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}}=\mathrm{Spf}A_{\widetilde{\ensuremath{\mathcal{G}}}}\cong \widehat{\ensuremath{\mathrm{M}}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}$, and the filtration corresponding to $\mu_y$ is given by a point $y:A_{\widetilde{\ensuremath{\mathcal{G}}}}\rightarrow \ensuremath{\mathcal{O}}_K$. \begin{prop}\label{prop: formal nbd Shimura}Assume $\ensuremath{\mathrm{K}}^p$ is neat. Let $\widehat{U}_{\overline{x}}$ be the completion of $\mathscr{S}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)^-$ at the image of $\overline{x}$. \begin{enumerate} \item $\widehat{U}_{\overline{x}}$ can be identified with a closed subspace of $\mathrm{Spf}A\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\mathcal{O}}_{\breve E}$ containing $\mathrm{Spf}A_{\widetilde{\ensuremath{\mathcal{G}}}}$. \item A deformation $\ensuremath{\mathscr{G}}$ of $\ensuremath{\mathscr{G}}_{\ensuremath{{\overline{x}}}}$ corresponds to a point on the irreducible component of $\widehat{U}_{\overline{x}}$ containing $\widetilde{x}$ if and only if $\ensuremath{\mathscr{G}}$ is $(\widetilde{\ensuremath{\mathcal{G}}},\mu_y)$-adapted. \item Let $\overline{x}'\in\mathscr{S}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)(k)$ be a point whose image in $\mathscr{S}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)^-(k)$ coincides with that of $\overline{x}$. Then $s_{\alpha,0,\overline{x}'}=s_{\alpha,0,\overline{x}}\in\ensuremath{\mathbb{D}}^\otimes$ if and only if $\overline{x}=\overline{x}'$.
\end{enumerate} \end{prop} \begin{proof} Since the conditions (\ref{eqn: assumption 2})--(\ref{eqn: assumption 4}) are satisfied, we may apply the construction of Proposition \ref{prop: versal deformation space tensors}; this allows us to view $\mathrm{Spf}A$ as a versal deformation space for $\ensuremath{\mathscr{G}}_{\overline{x}}$ and hence we obtain a map $\Theta:\widehat{U}_{\overline{x}}\rightarrow \mathrm{Spf}A\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\mathcal{O}}_{\ensuremath{\breve{E}}}$ such that the universal $p$-divisible group over $\mathrm{Spf}A\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\mathcal{O}}_{\ensuremath{\breve{E}}}$ pulls back to the one over $\widehat{U}_{\overline{x}}$ arising from the universal abelian scheme over $\widehat U_{\overline{x}}$. The map $\Theta$ is a closed immersion by the Serre--Tate theorem. Let $Z\subset \widehat{U}_{\overline{x}}$ denote the irreducible component of $\widehat U_{\overline{x}}$ containing $\widetilde{x}$. Let $K'$ be a finite extension of $\breve{E}$ and let $\widetilde{x}'\in Z(K')$. Then the tensors $s_{\alpha,p,\widetilde{x}'}$ correspond to $s_{\alpha,0,\overline{x}}$ under the $p$-adic comparison isomorphism. Moreover the filtration on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}K'$ corresponding to $\ensuremath{\mathscr{G}}_{\widetilde{x}'}$ is induced by a $G$-valued cocharacter conjugate to $\mu_h^{-1}$, and hence conjugate to $\mu_y$. By Proposition \ref{prop: G-adapted etale}, $\ensuremath{\mathscr{G}}_{\widetilde{x}'}$ is a $(\widetilde\ensuremath{\mathcal{G}},\mu_y)$-adapted deformation of $\ensuremath{\mathscr{G}}_{\overline{x}}$ and hence $\widetilde{x}'$ corresponds to a point of $\mathrm{Spf}A_{\widetilde{\ensuremath{\mathcal{G}}}}$. Since this is true for any $\widetilde x'$, it follows that $\Theta|_{Z}$ factors through $\mathrm{Spf}A_{\widetilde{\ensuremath{\mathcal{G}}}}$. 
Since $Z$ and $\mathrm{Spf}A_{\widetilde{\ensuremath{\mathcal{G}}}}$ have the same dimension, it follows that $Z\cong \mathrm{Spf}A_{\widetilde{\ensuremath{\mathcal{G}}}}$. We thus obtain (1) and (2). One direction of (3) is clear. For the other direction, let $\widetilde{x}'\in\ensuremath{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)(\ensuremath{\mathcal{O}}_{K'})$ be a lift of $\overline{x}'$. Then by Proposition \ref{prop: trivialization of Dieudonne}, $s_{\alpha,0,\overline{x}'}$ arises from the specialization of tensors $\widetilde s_{\alpha} \in \ensuremath{\mathbb{D}}(\ensuremath{\mathscr{G}}_{\widetilde{x}'})(\widehat{W}(\ensuremath{\mathcal{O}}_{K'}))$. By assumption, we have $s_{\alpha,0,\overline{x}'}=s_{\alpha,0,\overline{x}}$. It follows that $\ensuremath{\mathscr{G}}_{\widetilde{x}'}$ corresponds to a $(\widetilde\ensuremath{\mathcal{G}},\mu_y)$-adapted lifting and hence to a point of $\mathrm{Spf}A_{\widetilde{\ensuremath{\mathcal{G}}}}$. By what we have seen, $\widetilde{x}'$ corresponds to a point in the same irreducible component $Z\subset \widehat U_{\overline{x}}$ containing $\widetilde{x}$ and hence $\overline{x}=\overline{x}'$. \end{proof} \subsubsection{} The above description of the local structure of $\ensuremath{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ may be globalized as follows. \begin{thm}\label{thm: local model diagram} \begin{enumerate} \item $\mathscr{S}_{{\widetilde\ensuremath{\mathrm{K}}_p}}(\ensuremath{\mathbf{G}},X)$ is an $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbf{E}}_{(v)}}$-flat, $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f^p)$-equivariant extension of $\ensuremath{\mathrm{Sh}}_{{\widetilde\ensuremath{\mathrm{K}}_p}}(\ensuremath{\mathbf{G}},X)$. \item Assume $\ensuremath{\mathrm{K}}^p$ is neat. Let $\widehat{U}_{\overline{x}}$ be the completion of $\mathscr{S}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ at some $k$-point $\overline{x}$.
Then there exists a point $\overline z\in \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}(k)$ such that $\widehat{U}_{\overline{x}}$ is isomorphic to the completion of ${\ensuremath{\mathrm{M}}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}$ at $\overline{z}$. \item $\mathscr{S}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ fits in a local model diagram: \[\xymatrix{ &\widetilde{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathcal{O}}_E}\ar[dr]^q\ar[dl]_\pi&\\ \mathscr{S}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathcal{O}}_E} & &\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}}\] where $\pi$ is a $\widetilde{\ensuremath{\mathcal{G}}}$-torsor and $q$ is smooth of relative dimension $\dim G$. \end{enumerate} \end{thm} \begin{proof} (1) is clear and (2) follows from Proposition \ref{prop: formal nbd Shimura}. For (3), we first assume $\ensuremath{\mathrm{K}}^p$ is neat. Recall that we have the vector bundle $\ensuremath{\mathcal{V}}$ over $\ensuremath{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ corresponding to the de Rham cohomology of the universal abelian variety over $\ensuremath{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$. Its generic fiber $\ensuremath{\mathcal{V}}_{\ensuremath{\mathbf{E}}}$ is equipped with tensors $s_{\alpha,\mathrm{dR}}\in \ensuremath{\mathcal{V}}_{\ensuremath{\mathbf{E}}}^\otimes$ and these extend to $\ensuremath{\mathcal{V}}$ by the same argument as \cite[Proposition 4.2.6]{KP}.
Moreover the argument of {\em loc.~cit.} also shows that the scheme classifying isomorphisms $f:V_{\ensuremath{\mathcal{O}}_{\ensuremath{\mathbf{E}}_{(v)}}}^\vee\cong \ensuremath{\mathcal{V}}$ which take $s_{\alpha}$ to $s_{\alpha,\mathrm{dR}}$ is a $\widetilde\ensuremath{\mathcal{G}}$-torsor $\widetilde{\ensuremath{\mathscr{S}}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$. Let $(x,f)$ be an $S$-point of $\widetilde{\ensuremath{\mathscr{S}}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathcal{O}}_E}$. The map $q$ is defined by sending $(x,f)$ to the inverse image $f^{-1}(\ensuremath{\mathcal{F}})\subset V_{\ensuremath{\mathcal{O}}_{\ensuremath{\mathbf{E}}_{(v)}}}^\vee\otimes_{\ensuremath{\mathcal{O}}_{\ensuremath{\mathbf{E}}_{(v)}}}\ensuremath{\mathcal{O}}_S$ of the Hodge filtration $\ensuremath{\mathcal{F}}\subset\ensuremath{\mathcal{V}}_x$. This gives us a map $\widetilde{\ensuremath{\mathscr{S}}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathcal{O}}_E}\rightarrow\mathrm{Gr}(V_{\ensuremath{\mathbb{Z}}_p}^\vee)\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\mathcal{O}}_E$ which factors through $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}$ by the argument of \cite[Theorem 4.2.7]{KP}, which also shows that $q$ is smooth. Now for a general (not necessarily neat) $\ensuremath{\mathrm{K}}^p$, we let $\ensuremath{\mathrm{K}}^p_1\subset \ensuremath{\mathrm{K}}^p$ be a neat compact open subgroup which is normal in ${\ensuremath{\mathrm{K}}^p}$.
The action of $\ensuremath{\mathrm{K}}^p/\ensuremath{\mathrm{K}}_1^p$ on $\ensuremath{\mathscr{S}}_{{\widetilde\ensuremath{\mathrm{K}}_p}\ensuremath{\mathrm{K}}_1^p}(\ensuremath{\mathbf{G}},X)$ naturally extends to $\widetilde{\ensuremath{\mathscr{S}}}_{{\widetilde\ensuremath{\mathrm{K}}_p}\ensuremath{\mathrm{K}}_1^p}(\ensuremath{\mathbf{G}},X)$, and the map $$q_1:\widetilde{\ensuremath{\mathscr{S}}}_{{\widetilde\ensuremath{\mathrm{K}}_p}\ensuremath{\mathrm{K}}_1^p}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathcal{O}}_E}\rightarrow \ensuremath{\mathrm{M}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}^{\mathrm{loc}}$$ is compatible with this action. We thus obtain a diagram of stacks \begin{equation} \label{eqn: local model diagram} \xymatrix{ & \widetilde{\ensuremath{\mathscr{S}}}_{{\widetilde\ensuremath{\mathrm{K}}_p}\ensuremath{\mathrm{K}}_1^p}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathcal{O}}_E}\ar[r]^{\widetilde{p}}\ar[dl]_{\pi_1} & \widetilde{\ensuremath{\mathscr{S}}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathcal{O}}_E}\ar[dr]^q \ar[dl]_\pi& \\ \ensuremath{\mathscr{S}}_{{\widetilde\ensuremath{\mathrm{K}}_p}\ensuremath{\mathrm{K}}_1^p}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathcal{O}}_E}\ar[r]^p& \ensuremath{\mathscr{S}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathcal{O}}_E}& & \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}} \end{equation}as desired.\end{proof} \subsubsection{} We now use the above to study integral models for parahoric level structure. Let $\ensuremath{\mathbf{G}}_{\mathrm{sc}}$ denote the simply connected cover of $\ensuremath{\mathbf{G}}_{ \mathrm{der}}$ and we set $\ensuremath{\mathbf{C}}:=\ker(\ensuremath{\mathbf{G}}_{\mathrm{sc}}\rightarrow \ensuremath{\mathbf{G}}_{\mathrm{der}})$. 
For $c\in \ensuremath{\mathrm{H}}^1(\ensuremath{\mathbb{Q}},\ensuremath{\mathbf{C}})$ and $\ell$ a finite prime, we write $c_\ell$ for the image of $c$ in $\ensuremath{\mathrm{H}}^1(\ensuremath{\mathbb{Q}}_\ell,\ensuremath{\mathbf{C}})$. We introduce the following assumption. \begin{equation} \label{ass; parahoric reduction assumption} \text{If $c\in\ensuremath{\mathrm{H}}^1(\ensuremath{\mathbb{Q}},\ensuremath{\mathbf{C}})$ satisfies $c_\ell=0$ for all $\ell\neq p$, then $c_p=0$.} \end{equation} There is a natural finite map of Shimura varieties $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)\rightarrow \ensuremath{\mathrm{Sh}}_{\widetilde\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ and we define the integral model for parahoric level $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ to be the normalization of $\ensuremath{\mathscr{S}}_{\widetilde{\ensuremath{\mathrm{K}}}}(\ensuremath{\mathbf{G}},X)$ inside $\mathrm{Sh}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$. We similarly write $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}},X)$ for the inverse limit over the prime-to-$p$ levels. The discussion in \cite[\S4.3]{KP} extends verbatim to the current situation and we obtain the following proposition; cf. \cite[Proposition 4.3.7, Corollary 4.3.9]{KP}. \begin{prop}\label{prop: integral models for parahoric Hodge type} Assume (\ref{ass; parahoric reduction assumption}) is satisfied. \begin{enumerate} \item The covering $\ensuremath{\mathscr{S}}_{{\ensuremath{\mathrm{K}}}}(\ensuremath{\mathbf{G}},X)\rightarrow \ensuremath{\mathscr{S}}_{\widetilde{\ensuremath{\mathrm{K}}}}(\ensuremath{\mathbf{G}},X)$ is \'etale, and for $\ensuremath{\mathrm{K}}^p$ sufficiently small, this covering splits over an unramified extension.
\item The geometrically connected components of $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ are defined over the maximal extension $\ensuremath{\mathbf{E}}^p$ of $\ensuremath{\mathbf{E}}$ unramified at all primes above $p$. \end{enumerate} \end{prop}\qed \subsection{Integral models for Shimura varieties of abelian type}\label{sec: integral models abelian type} We now use the previous results to construct integral models for Shimura varieties of abelian type. In particular, this will allow us to construct integral models for general Hodge type Shimura varieties without the assumptions in \S\ref{subsec: integral models Hodge type}. This last case is all that is needed for our main application on $\ell$-independence. However, since the general abelian type case is no more difficult, we also include this case for completeness. As many of the arguments are exactly the same as in \cite[\S 4]{KP}, in what follows we will refer to relevant statements in \cite{KP} if the argument in {\em loc.~cit.} carries over directly and only give details for those points which do not. \subsubsection{}\label{subsec: integral model abelian type} We keep the notation of \S\ref{subsec: integral models Hodge type}, so that $(\ensuremath{\mathbf{G}},X)$ is a Shimura datum of Hodge type and we set $G=\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Q}}_p}$. Assume that $(\ensuremath{\mathbf{G}},X)$ satisfies the following conditions. \begin{itemize}\item The pair $(G,\{\mu_h\})$ is regular and $p\nmid|\pi_1(G_{\ensuremath{\mathrm{der}}})|$. \item $\ensuremath{\mathbf{G}}$ satisfies (\ref{ass; parahoric reduction assumption}). \item The center $Z$ of $G$ is an $R$-smooth torus.
\end{itemize} As before, we let $\ensuremath{\mathcal{G}}=\ensuremath{\mathcal{G}}_x$ be a parahoric group scheme corresponding to a point $x\in \ensuremath{\mathcal{B}}(G,\ensuremath{\mathbb{Q}}_p)$ which is generic in the facet containing it. Let $(\ensuremath{\mathbf{G}}_2,X_2)$ be a Shimura datum which is equipped with a central isogeny $\alpha:\ensuremath{\mathbf{G}}_{\mathrm{der}}\rightarrow \ensuremath{\mathbf{G}}_{2,\mathrm{der}}$ inducing an isomorphism $(\ensuremath{\mathbf{G}}_{\mathrm{ad}},X_{\mathrm{ad}})\cong (\ensuremath{\mathbf{G}}_{2,\mathrm{ad}},X_{2,\mathrm{ad}})$. The parahoric $\ensuremath{\mathcal{G}}$ determines a parahoric $\ensuremath{\mathcal{G}}_2$ of $G_2:=\ensuremath{\mathbf{G}}_2\otimes_{\ensuremath{\mathbb{Q}}}\ensuremath{\mathbb{Q}}_p$ and we set $\ensuremath{\mathrm{K}}_{2,p}:=\ensuremath{\mathcal{G}}_2(\ensuremath{\mathbb{Z}}_p)$. We write $\ensuremath{\mathbf{E}}_2$ for the reflex field of $(\ensuremath{\mathbf{G}}_2,X_2)$ and we let $\ensuremath{\mathbf{E}}':=\ensuremath{\mathbf{E}}.\ensuremath{\mathbf{E}}_2.$ Our choice of embedding $i_p$ induces a place $v'$ (resp. $v_2$) of $\ensuremath{\mathbf{E}}'$ (resp. $\ensuremath{\mathbf{E}}_2$) and we set $E':=\ensuremath{\mathbf{E}}'_{v'}$ and $E_2:=\ensuremath{\mathbf{E}}_{2,v_2}$ to be the completions. Fix a connected component $X^+\subset X$. 
By real approximation, upon modifying the isomorphism $\ensuremath{\mathbf{G}}_{\mathrm{ad}}\cong \ensuremath{\mathbf{G}}_{2,\mathrm{ad}}$ by an element of $\ensuremath{\mathbf{G}}_{\ensuremath{\mathrm{ad}}}(\ensuremath{\mathbb{Q}})$, we may assume that the image of $X_2\subset X_{2,\mathrm{ad}}$ contains the image of $X^+.$ We write $X_2^+$ for $X^+$ viewed as a subset of $X_2.$ We denote by $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}}, X)^+ \subset \ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}}, X)$ and $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2, X_2)^+ \subset \ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2, X_2)$ the geometrically connected components corresponding to $X^+$ and $X_2^+$. These are defined over extensions of $\ensuremath{\mathbf{E}}$ and $\ensuremath{\mathbf{E}}'$ respectively, which are unramified at primes above $p$. The identification $X_2^+ \simeq X^+$ induces a finite map \begin{equation}\label{eqn:mapconncomps} \ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}}, X)^+ \rightarrow \ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2, X_2)^+. \end{equation} Let $x_{\mathrm{ad}}$ be the image of $x$ in $\ensuremath{\mathcal{B}}(G_{\mathrm{ad}},\ensuremath{\mathbb{Q}}_p)$ and denote by $\ensuremath{\mathcal{G}}_{\mathrm{ad}}$ the parahoric model of $G_{\mathrm{ad}}$ corresponding to $x_{\mathrm{ad}}.$ We then have the following generalization of \cite[Corollary 4.6.18]{KP}.
\begin{prop}\label{prop: auxiliary SV construction}Under the assumptions above, there is a $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{A}}^p_f)$-equivariant extension of $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2, X_2)$ to an $\ensuremath{\mathcal{O}}_{E'}$-scheme with $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{A}}^p_f)$-action $\mathscr{S}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)$ such that \begin{enumerate} \item For any discrete valuation ring $R$ of mixed characteristic, the map $$\mathscr{S}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)(R)\rightarrow\mathscr{S}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)(R[\frac{1}{p}])$$ is a bijection. \item The map (\ref{eqn:mapconncomps}) induces a finite map of $\ensuremath{\mathcal{O}}_{E'^{\mathrm{ur}}}$-schemes $$\mathscr{S}_{\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}}, X)^+ \rightarrow \mathscr{S}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2, X_2)^+, $$ where $\mathscr{S}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2, X_2)^+$ denotes the closure of $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2, X_2)^+$ in the $\ensuremath{\mathcal{O}}_{E'^{\mathrm{ur}}}$-scheme $\mathscr{S}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2, X_2)_{\ensuremath{\mathcal{O}}_{E'^{\mathrm{ur}}}},$ and similarly for $\mathscr{S}_{\ensuremath{\mathrm{K}}_{p}}(\ensuremath{\mathbf{G}}, X)^+.$ \item If $\widetilde{\ensuremath{\mathcal{G}}}=\ensuremath{\mathcal{G}},$ then there exists a diagram \begin{equation}\label{eqn: local model diagram abelian type}\xymatrix{ &\widetilde{\mathscr{S}}^{\mathrm{ad}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)\ar[dr]^q\ar[dl]_\pi&\\ \mathscr{S}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2) & &\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}}\end{equation} where $\pi$ is a
$\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{A}}_f^p)$-equivariant ${\ensuremath{\mathcal{G}}}_{\mathrm{ad}}$-torsor and $q$ is smooth of relative dimension $\dim \ensuremath{\mathbf{G}}_{\mathrm{ad}}$ and $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{A}}_f^p)$-equivariant, when $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}$ is equipped with the trivial $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{A}}_f^p)$-action. \end{enumerate} \end{prop} \begin{proof} This can be deduced from Theorem \ref{thm: local model diagram}, as in \cite[\S\S 4.4--4.6]{KP}. We explain only how the assumption of $R$-smoothness of $Z$ is used. Let $G_{\ensuremath{\mathbb{Z}}_{(p)}}$ (resp. $G_{\mathrm{ad},\ensuremath{\mathbb{Z}}_{(p)}}$) denote the $\ensuremath{\mathbb{Z}}_{(p)}$-model of $\ensuremath{\mathbf{G}}$ (resp. $\ensuremath{\mathbf{G}}_{\mathrm{ad}}$) associated to $\ensuremath{\mathcal{G}}$ (resp. $\ensuremath{\mathcal{G}}_{\mathrm{ad}}$). Let $\ensuremath{\mathbf{Z}}$ denote the center of $\ensuremath{\mathbf{G}}$ and $Z_{\ensuremath{\mathbb{Z}}_{(p)}}$ the closure of $\ensuremath{\mathbf{Z}}$ in $G_{\ensuremath{\mathbb{Z}}_{(p)}}.$ By Proposition \ref{prop: exact sequence parahorics}, the assumption of $R$-smoothness on $Z=\ensuremath{\mathbf{Z}}_{\ensuremath{\mathbb{Q}}_p}$ and descent implies that the natural map $G_{\ensuremath{\mathbb{Z}}_{(p)}}/Z_{\ensuremath{\mathbb{Z}}_{(p)}}\rightarrow G_{\mathrm{ad},\ensuremath{\mathbb{Z}}_{(p)}}$ is an isomorphism. This gives us the analogue of \cite[Lemma 4.6.2(2)]{KP}, and allows us to carry out the constructions of \S 4.6 of {\em loc.~cit.} \end{proof} Let $\ensuremath{\mathrm{K}}_2^p\subset \ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{A}}_f^p)$ be a compact open subgroup, and write $\ensuremath{\mathrm{K}}_2:=\ensuremath{\mathrm{K}}_{2,p}\ensuremath{\mathrm{K}}_2^p\subset \ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{A}}_f)$.
Taking the quotient of the diagram (\ref{eqn: local model diagram abelian type}) by $\ensuremath{\mathrm{K}}_2^p$, we obtain $$q:\widetilde{\mathscr{S}}^{\mathrm{ad}}_{\ensuremath{\mathrm{K}}_{2}}(\ensuremath{\mathbf{G}}_2,X_2) \rightarrow \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}},$$ a smooth morphism of $\ensuremath{\mathcal{O}}_{E'}$-stacks of relative dimension $\dim \ensuremath{\mathbf{G}}_{\mathrm{ad}}$. \subsubsection{}\label{subsubsec: construction of integral model} We recall some features of the construction in Proposition \ref{prop: auxiliary SV construction} which will be needed later. As in \cite[\S4.5.6]{KP}, we set $$\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}):=\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f)/\ensuremath{\mathbf{Z}}(\ensuremath{\mathbb{Q}})^-*_{\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}})_+/\ensuremath{\mathbf{Z}}(\ensuremath{\mathbb{Q}})}\ensuremath{\mathbf{G}}_{\mathrm{ad}}(\ensuremath{\mathbb{Q}})^+$$ $$\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Z}}_{(p)}}):=\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f^p)/\ensuremath{\mathbf{Z}}(\ensuremath{\mathbb{Z}}_{(p)})^-*_{\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Z}}_{(p)})_+/\ensuremath{\mathbf{Z}}(\ensuremath{\mathbb{Z}}_{(p)})}\ensuremath{\mathbf{G}}_{\mathrm{ad}}(\ensuremath{\mathbb{Z}}_{(p)})^+,$$ and as in \cite[\S4.6.3]{KP}, we set $$\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}})^\circ:=\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}})^-/\ensuremath{\mathbf{Z}}(\ensuremath{\mathbb{Q}})^-*_{\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}})/\ensuremath{\mathbf{Z}}(\ensuremath{\mathbb{Q}})}\ensuremath{\mathbf{G}}_{\mathrm{ad}}(\ensuremath{\mathbb{Z}}_{(p)})^+$$
$$\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Z}}_{(p)}})^\circ:=\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Z}}_{(p)})^-/\ensuremath{\mathbf{Z}}(\ensuremath{\mathbb{Q}})^-*_{\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Z}}_{(p)})_+/\ensuremath{\mathbf{Z}}(\ensuremath{\mathbb{Z}}_{(p)})}\ensuremath{\mathbf{G}}_{\mathrm{ad}}(\ensuremath{\mathbb{Z}}_{(p)})^+.$$ We refer to \emph{loc.~cit.} for an explanation of this notation. We obtain an $\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}})$-action (resp. $\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Z}}_{(p)}})$-action) on $\ensuremath{\mathrm{Sh}}(\ensuremath{\mathbf{G}},X)$ (resp. $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}},X)$). The assumption that the center of $G$ is an $R$-smooth torus implies that the $\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Z}}_{(p)}})$-action on $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}},X)$ extends to an $\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Z}}_{(p)}})$-action on $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}},X)$. As in \cite[Lemma 4.6.10]{KP}, the natural map \begin{equation}\label{eqn: injection A groups}\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Z}}_{(p)}})^\circ\backslash\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{2,\ensuremath{\mathbb{Z}}_{(p)}})\rightarrow \ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}})^\circ\backslash\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_2)/\ensuremath{\mathrm{K}}_{2,p}\end{equation} is an injection, and we fix $J\subset \ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{Q}}_p)$ a set of coset representatives for the image of (\ref{eqn: injection A groups}).
Then $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)$ is constructed as \begin{equation}\label{eqn: abelian type construction}\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)=\left[[\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_p}(\ensuremath{\mathbf{G}},X)^+\times\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{2,\ensuremath{\mathbb{Z}}_{(p)}})]/\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Z}}_{(p)}})^\circ\right]^{|J|}. \end{equation} \subsubsection{} Let $H$ be a simple, adjoint, reductive group over $\mathbb R,$ which is of classical type, and is associated to a Hermitian symmetric domain; in particular $H(\mathbb R)$ is not compact. Thus $H$ is of type $A, B, C, D^{\mathbb R}, D^{\mathbb H}$ in the classification of \cite[1.3.9]{De2}, with the type $A$ case including unitary groups of any signature $U(p,q)$ with $p,q \neq 0.$ We set $H^\sharp = H_{\mathrm{sc}},$ the simply connected cover of $H,$ unless $H$ is of type $D^{\mathbb H},$ in which case we set $H^\sharp$ equal to the image of $H_{\mathrm{sc}}$ in the representation corresponding to the standard representation of the orthogonal group. Now let $\ensuremath{\mathrm{F}}$ be a totally real field, and $\ensuremath{\mathbf{H}}$ a simple, adjoint reductive group of classical type over $\ensuremath{\mathrm{F}}.$ Assume that \begin{itemize} \item for every embedding $\sigma: \ensuremath{\mathrm{F}} \hookrightarrow \mathbb R,$ $\ensuremath{\mathbf{H}}\otimes_{\sigma,\ensuremath{\mathrm{F}}} \mathbb R$ is either compact or associated to a Hermitian symmetric domain.
\item $\ensuremath{\mathbf{H}}\otimes_{\sigma,\ensuremath{\mathrm{F}}} \mathbb R$ is non-compact for some $\sigma.$ \item If $\ensuremath{\mathbf{H}}$ is of type $D,$ then for those $\sigma$ such that $\ensuremath{\mathbf{H}}\otimes_{\sigma,\ensuremath{\mathrm{F}}} \mathbb R$ is non-compact, the associated Hermitian symmetric domain does not depend on $\sigma.$ That is, it is always of type $D^{\mathbb R}$ or always of type $D^{\mathbb H}.$ \end{itemize} We define $\ensuremath{\mathbf{H}}^{\sharp}$ to be $\ensuremath{\mathbf{H}}_{\mathrm{sc}}$ unless $\ensuremath{\mathbf{H}}$ is of type $D,$ in which case we define $\ensuremath{\mathbf{H}}^{\sharp}$ to be the unique quotient of $\ensuremath{\mathbf{H}}_{\mathrm{sc}}$ such that $\ensuremath{\mathbf{H}}^{\sharp}\otimes_{\sigma,\ensuremath{\mathrm{F}}} \mathbb R = (\ensuremath{\mathbf{H}}\otimes_{\sigma,\ensuremath{\mathrm{F}}} \mathbb R)^{\sharp}$ whenever $\ensuremath{\mathbf{H}}\otimes_{\sigma,\ensuremath{\mathrm{F}}} \mathbb R$ is non-compact. Now suppose $\ensuremath{\mathbf{H}}$ is a reductive group over $\ensuremath{\mathrm{F}},$ with $\ensuremath{\mathbf{H}}^{\ensuremath{\mathrm{ad}}} = \prod_{i=1}^s \ensuremath{\mathbf{H}}_i$ where each $\ensuremath{\mathbf{H}}_i$ is a simple, adjoint reductive group of classical type over $\ensuremath{\mathrm{F}}$ satisfying the three conditions above. Then we set $\ensuremath{\mathbf{H}}^{\sharp} = \prod_{i=1}^s \ensuremath{\mathbf{H}}_i^{\sharp}.$ Now let $(\ensuremath{\mathbf{H}},Y)$ be a Shimura datum such that $(\ensuremath{\mathbf{H}}_{\mathrm{ad}},Y_{\mathrm{ad}})$ is of abelian type. Recall \cite{De2} that in this case the three conditions above are satisfied, so $\ensuremath{\mathbf{H}}^{\sharp}$ is well defined\footnote{In \cite[4.6.21]{KP} it is incorrectly asserted that $\ensuremath{\mathbf{H}}^{\sharp}$ is defined for any $(H,Y)$ with $H$ of classical type; however, $H$ may not satisfy the third condition above.
This is however satisfied if $(\ensuremath{\mathbf{H}}_{\mathrm{ad}},Y_{\mathrm{ad}})$ is of abelian type.}, and $(\ensuremath{\mathbf{H}},Y)$ is of abelian type if and only if $\ensuremath{\mathbf{H}}_{\mathrm{der}}$ is a quotient of $\ensuremath{\mathbf{H}}^\sharp$. \subsubsection{} Proposition \ref{prop: auxiliary SV construction} shows that we can construct good integral models for Shimura data $(\ensuremath{\mathbf{G}}_2,X_2)$ of abelian type provided we can relate them to a Shimura datum $(\ensuremath{\mathbf{G}},X)$ of Hodge type satisfying good properties. Those $(\ensuremath{\mathbf{G}}_2,X_2)$ for which we can do this are essentially the following. \begin{definition}Let $(\ensuremath{\mathbf{G}}_2,X_2)$ be a Shimura datum. We say that $(\ensuremath{\mathbf{G}}_2,X_2)$ is \emph{acceptable} if it is of abelian type and there is an isomorphism $G_{2,\mathrm{ad}}:=\ensuremath{\mathbf{G}}_{2,\ensuremath{\mathrm{ad}},\ensuremath{\mathbb{Q}}_p}\cong\prod_{i=1}^r\mathrm{Res}_{F_i/\ensuremath{\mathbb{Q}}_p}H_i$ where $F_i/\ensuremath{\mathbb{Q}}_p$ is a finite extension and $H_i$ is a reductive group over $F_i$ which splits over a tamely ramified extension of $F_i$. \end{definition} The following proposition is the analogue of \cite[Lemma 4.6.22]{KP}, and is the key input that will allow us to deduce the existence of good integral models for acceptable Shimura data. \begin{prop}\label{lemma: auxiliary Hodge type datum}Let $(\ensuremath{\mathbf{G}}_2,X_2)$ be an acceptable Shimura datum. Then there exists a Shimura datum $(\ensuremath{\mathbf{G}},X)$ of Hodge type together with a central isogeny $\ensuremath{\mathbf{G}}_{\mathrm{der}}\rightarrow \ensuremath{\mathbf{G}}_{2,\mathrm{der}}$ which induces an isomorphism $(\ensuremath{\mathbf{G}}_{\mathrm{ad}},X_{\mathrm{ad}})\cong (\ensuremath{\mathbf{G}}_{2,\mathrm{ad}},X_{2,\mathrm{ad}})$. Moreover, $(\ensuremath{\mathbf{G}},X)$ may be chosen to satisfy the following conditions.
\begin{enumerate} \item $\pi_1(G_{\mathrm{der}})$ is a 2-group and is trivial if $(\ensuremath{\mathbf{G}}_{2,\mathrm{ad}},X_{2,\mathrm{ad}})$ has no factors of type $D^{\ensuremath{\mathbb{H}}}$. Moreover $\ensuremath{\mathbf{G}}$ satisfies assumption (\ref{ass; parahoric reduction assumption}). \item Any prime $v_2|p$ of $\ensuremath{\mathbf{E}}_2$ splits in the composite $\ensuremath{\mathbf{E}}':=\ensuremath{\mathbf{E}}.\ensuremath{\mathbf{E}}_2$. \item The center $Z$ of $G$ is an $R$-smooth torus over $\ensuremath{\mathbb{Q}}_p$. \item $X_*(G_{\mathrm{ab}})_I$ is torsion free. \item The pair $(G,\{\mu_h\})$ is regular and $p\nmid|\pi_1(G_{\ensuremath{\mathrm{der}}})|$. \end{enumerate} \end{prop} \begin{proof}We follow the proof of \cite[Lemma 4.6.22]{KP}. Let $\ensuremath{\mathbf{G}}_{2,\mathrm{ad}}\cong \prod_{j=1}^s\mathrm{Res}_{\ensuremath{\mathrm{F}}_j/\ensuremath{\mathbb{Q}}}\ensuremath{\mathbf{H}}_j$, where $\ensuremath{\mathrm{F}}_j$ is a totally real field and $\ensuremath{\mathbf{H}}_j$ is an absolutely simple $\ensuremath{\mathrm{F}}_j$-group. By \cite[2.3.10]{De2}, we may choose $(\ensuremath{\mathbf{G}},X)$ a Shimura datum of Hodge type with $\ensuremath{\mathbf{G}}_{\mathrm{der}}\cong \ensuremath{\mathbf{G}}_{2,\mathrm{ad}}^{\sharp},$ and such that the central isogeny $\ensuremath{\mathbf{G}}_{\mathrm{der}}\rightarrow \ensuremath{\mathbf{G}}_{2,\mathrm{der}}$ induces an isomorphism of Shimura data $(\ensuremath{\mathbf{G}}_{\mathrm{ad}},X_{\mathrm{ad}})\cong(\ensuremath{\mathbf{G}}_{2,\mathrm{ad}},X_{2,\mathrm{ad}})$. Then $\ensuremath{\mathbf{G}}_{\mathrm{der}}$ has the form $\prod_{j=1}^s\mathrm{Res}_{\ensuremath{\mathrm{F}}_j/\ensuremath{\mathbb{Q}}}\ensuremath{\mathbf{H}}_j^\sharp.$ As in \cite[Lemma 4.6.22]{KP}, it follows that $(\ensuremath{\mathbf{G}},X)$ satisfies (1). 
In the course of constructing $(\ensuremath{\mathbf{G}},X)$ satisfying the other conditions, we will keep track of a certain group $\ensuremath{\mathbf{G}}'$ containing $\ensuremath{\mathbf{G}}$ such that the Hodge embedding $(\ensuremath{\mathbf{G}},X)\rightarrow (\mathbf{GSp}(V),S^\pm)$ extends to a representation $\ensuremath{\mathbf{G}}'\rightarrow \mathbf{GL}(V)$; this will be needed in the verification of (5). We now explain how to choose $(\ensuremath{\mathbf{G}},X)$ satisfying (2). We first assume $s=1$ so that $\ensuremath{\mathbf{G}}_{2,\mathrm{ad}}\cong \mathrm{Res}_{\ensuremath{\mathrm{F}}/\ensuremath{\mathbb{Q}}}\ensuremath{\mathbf{H}}.$ Let $\ensuremath{\mathfrak{p}}_1,\dotsc,\ensuremath{\mathfrak{p}}_d$ denote the primes of $\ensuremath{\mathrm{F}}$ above $p$ and write $F_i$ for the completion of $\ensuremath{\mathrm{F}}$ at $\ensuremath{\mathfrak{p}}_i$. Then $\ensuremath{\mathbf{G}}_{2,\ensuremath{\mathrm{ad}},\ensuremath{\mathbb{Q}}_p}\cong\prod_{i=1}^d \mathrm{Res}_{F_i/\ensuremath{\mathbb{Q}}_p}\ensuremath{\mathbf{H}}_{F_i}$, and our assumptions imply that $H_i:=\ensuremath{\mathbf{H}}_{F_i}$ splits over a tamely ramified extension of $F_i$. We choose a quadratic imaginary extension $\ensuremath{\mathrm{K}}/\ensuremath{\mathrm{F}}$ such that all primes of $\ensuremath{\mathrm{F}}$ above $p$ split in $\ensuremath{\mathrm{K}}$. We fix a set $T$ of embeddings $\ensuremath{\mathrm{K}}\rightarrow \ensuremath{\mathbb{C}}$ satisfying the same conditions as in \cite[\S4.6.22]{KP}. The construction of \cite[Proposition 2.3.10]{De2} then gives a Shimura datum $(\ensuremath{\mathbf{G}},X)$ of Hodge type such that any prime $v_2|p$ of $\ensuremath{\mathbf{E}}_2$ splits in $\ensuremath{\mathbf{E}}'$.
Moreover $(\ensuremath{\mathbf{G}},X)$ is constructed as a subgroup of a group $\ensuremath{\mathbf{G}}'$ with $\ensuremath{\mathbf{G}}_{\mathrm{der}}\simeq \ensuremath{\mathbf{G}}'_{\mathrm{der}},$ $\ensuremath{\mathbf{G}}'\cong \mathrm{Res}_{\ensuremath{\mathrm{F}}/\ensuremath{\mathbb{Q}}}\ensuremath{\mathbf{H}}'$ and such that the Hodge embedding $(\ensuremath{\mathbf{G}},X)\rightarrow (\mathbf{GSp}(V),S^\pm)$ extends to a representation $\ensuremath{\mathbf{G}}'\rightarrow \mathbf{GL}(V)$. The group $\ensuremath{\mathbf{G}}'$ splits over the composite of $\ensuremath{\mathrm{K}}$ and the splitting field of $\ensuremath{\mathbf{G}}.$ It follows that $\ensuremath{\mathbf{G}}'_{\ensuremath{\mathbb{Q}}_p}\cong\prod_{i=1}^d \mathrm{Res}_{F_i/\ensuremath{\mathbb{Q}}_p}H_i'$ where $H_i'$ splits over a tamely ramified extension of $F_i$. In general for $s>1$, we apply the above to each of the individual factors. We now show that we can arrange so that (3) is satisfied. Let $\ensuremath{\mathbf{G}}\subset \ensuremath{\mathbf{G}}'$ as above and set $G':=\ensuremath{\mathbf{G}}'_{\ensuremath{\mathbb{Q}}_p}$. Let $T'$ denote the centralizer of a maximal $\ensuremath{\breve{\mathbb{Q}}_p}$-split torus in $G'$ defined over $\ensuremath{\mathbb{Q}}_p$ and let $T:=G\cap T'$ which is a maximal torus of $G$. Then $T'\cong \prod_{i=1}^r\mathrm{Res}_{F_i/\ensuremath{\mathbb{Q}}_p}S'_i$ where $F_i/\ensuremath{\mathbb{Q}}_p$ is finite and $S'_i$ is a torus over $F_i$ which splits over a tamely ramified extension. By construction of $\ensuremath{\mathbf{G}}$ in \cite[Proposition 2.3.10]{De2}, for $i=1,\dotsc,r$ there are induced tori $S''_i$ over $F_i$ which split over a tamely ramified extension and maps $S_i'\rightarrow S''_i$ which induce a map $T'\rightarrow T'':=\prod_{i=1}^r\mathrm{Res}_{F_i/\ensuremath{\mathbb{Q}}_p}S''_i$ such that $T$ is the identity component of the pullback $T'\times_{T''}\ensuremath{\mathbb{G}}_m$. 
Here $\ensuremath{\mathbb{G}}_m\rightarrow T''$ is the diagonal map. Thus $T$ arises from the construction in Corollary \ref{cor: extension of torus is R smooth} and hence is $R$-smooth. Arguing as in \cite[Proof of Prop 2.2.4]{Ki2}, we may choose a maximal torus $\ensuremath{\mathbf{T}}$ of $\ensuremath{\mathbf{G}}$ such that $\ensuremath{\mathbf{T}}_{\ensuremath{\mathbb{Q}}_p}$ is $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_p)$-conjugate to $T$, and there exists $h\in X$ such that $h$ factors through $\ensuremath{\mathbf{T}}_{\ensuremath{\mathbb{R}}}$. In fact, we may choose $\ensuremath{\mathbf{T}}$ to be given by $\ensuremath{\mathbf{T}}'\cap \ensuremath{\mathbf{G}}$, where $\ensuremath{\mathbf{T}}'\subset \ensuremath{\mathbf{G}}'$ is a torus such that $\ensuremath{\mathbf{T}}'_{\ensuremath{\mathbb{Q}}_p}$ is $\ensuremath{\mathbf{G}}'(\ensuremath{\mathbb{Q}}_p)$-conjugate to $T'$. We set $\ensuremath{\mathbf{G}}_1:=(\ensuremath{\mathbf{G}}\times\ensuremath{\mathbf{T}})/{\ensuremath{\mathbf{Z}}}$ and $\ensuremath{\mathbf{G}}_1':=(\ensuremath{\mathbf{G}}'\times\ensuremath{\mathbf{T}}')/{\ensuremath{\mathbf{Z}}'}$, where $\ensuremath{\mathbf{Z}}$ and $\ensuremath{\mathbf{Z}}'$ are the centers of $\ensuremath{\mathbf{G}}$ and $\ensuremath{\mathbf{G}}'$ respectively. Then the center $Z_1$ of $G_1:=\ensuremath{\mathbf{G}}_{1,\ensuremath{\mathbb{Q}}_p}$ is isomorphic to $T$ and hence an $R$-smooth torus. We let $X_1$ denote the conjugacy class of Deligne homomorphisms for $\ensuremath{\mathbf{G}}_1$ determined by $h\times 1$ for $h\in X$. As in \cite[Lemma 4.6.22]{KP}, we let $W$ denote the $\ensuremath{\mathbf{G}}_1$-representation $\mathrm{Hom}_{\ensuremath{\mathbf{Z}}}(V,V),$ and we may equip $W$ with an alternating form such that there is a Hodge embedding $(\ensuremath{\mathbf{G}}_1,X_1)\rightarrow (\mathbf{GSp}(W),S_1^\pm)$. By construction, this extends to a homomorphism $\ensuremath{\mathbf{G}}_1'\rightarrow \mathbf{GL}(W)$. 
Moreover, if we let $Z=\ensuremath{\mathbf{Z}}_{\ensuremath{\mathbb{Q}}_p}$ and take $T_1:=(T\times T)/Z\subset G_1$, which is the centralizer of a maximal $\ensuremath{\breve{\mathbb{Q}}_p}$-split torus in $G_1$, then $T_1$ also arises from the construction in Corollary \ref{cor: extension of torus is R smooth}; it is the identity component of the pullback $(T'\times T')/Z'\times_{T''} \ensuremath{\mathbb{G}}_m$ where $Z':=\ensuremath{\mathbf{Z}}'_{\ensuremath{\mathbb{Q}}_p}$. Thus $T_1$ is $R$-smooth. This observation will be needed below to ensure that (5) is satisfied. Upon replacing $(\ensuremath{\mathbf{G}},X)$ by $(\ensuremath{\mathbf{G}}_1,X_1)$ we may assume $(\ensuremath{\mathbf{G}},X)$ satisfies (3). To show that we can arrange for (4) and (5) to be satisfied, we may apply the same construction as in \cite[Lemma 4.6.22]{KP} to $(\ensuremath{\mathbf{G}},X)$. This gives a Shimura datum $(\ensuremath{\mathbf{G}}_1,X_1)$ of Hodge type with $X_*(G_{1,\mathrm{ab}})_I$ torsion free, i.e. condition (4) is satisfied. An argument similar to the one in the previous paragraph shows that the Hodge embedding $(\ensuremath{\mathbf{G}}_1,X_1)\rightarrow (\mathbf{GSp}(V),S^\pm)$ extends to an embedding $\ensuremath{\mathbf{G}}_1'\rightarrow \mathbf{GL}(V)$ for a suitable $\ensuremath{\mathbf{G}}_1'$ of the form $\prod_{j=1}^s\mathrm{Res}_{\ensuremath{\mathrm{F}}_j/\ensuremath{\mathbb{Q}}}\ensuremath{\mathbf{H}}'_j$. Moreover, the explicit description of $\ensuremath{\mathbf{G}}_1$ shows that both the center $Z_1$ of $G_1=\ensuremath{\mathbf{G}}_{1,\ensuremath{\mathbb{Q}}_p}$ and the centralizer $T$ of a maximal $\ensuremath{\breve{\mathbb{Q}}_p}$-split torus in $G_1$ arise from the construction in Corollary \ref{cor: extension of torus is R smooth}. It follows that $(G_1,\{\mu_{h_1}\})$ is regular. Since we have assumed $p>2$, condition (1) implies $p\nmid|\pi_1(G_{\ensuremath{\mathrm{der}}})|$ and hence condition (5) is satisfied. 
\end{proof} \subsubsection{}\label{sec: acceptable triple}For later applications to constructing canonical liftings, we introduce the following additional condition on the parahoric. \begin{definition} Let $(\ensuremath{\mathbf{G}}_2,X_2)$ be an acceptable Shimura datum and $\ensuremath{\mathcal{G}}_2$ a parahoric group scheme for $G_2=\ensuremath{\mathbf{G}}_{2,\ensuremath{\mathbb{Q}}_p}$. We say the triple $(\ensuremath{\mathbf{G}}_2,X_2,\ensuremath{\mathcal{G}}_2)$ is \textit{acceptable} if we can choose a Shimura datum as in Proposition \ref{lemma: auxiliary Hodge type datum} such that the corresponding parahoric $\ensuremath{\mathcal{G}}$ of $G=\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Q}}_p}$ is connected. \end{definition} \begin{cor}\label{cor: acceptable triple}Let $(\ensuremath{\mathbf{G}}_2,X_2)$ be an acceptable Shimura datum and $\ensuremath{\mathcal{G}}_2$ any parahoric group scheme of $G_2$. Assume $\ensuremath{\mathbf{G}}_{\ensuremath{\mathrm{ad}}}$ has no factors of type $D^\ensuremath{\mathbb{H}}$. Then the triple $(\ensuremath{\mathbf{G}}_2,X_2,\ensuremath{\mathcal{G}}_2)$ is acceptable. \end{cor} \begin{proof}Let $(\ensuremath{\mathbf{G}},X)$ be as in Proposition \ref{lemma: auxiliary Hodge type datum} and $\ensuremath{\mathcal{G}}$ the corresponding parahoric group scheme of $G$. Since $\pi_1(G_{\mathrm{der}})$ is trivial, we have $\pi_1(G)\cong X_*(G_{\mathrm{ab}})$. Thus $\pi_1(G)_I\cong X_*(G_{\mathrm{ab}})_I$ is torsion free and hence the Kottwitz map $\widetilde\kappa_G$ is trivial on $\widetilde{\ensuremath{\mathcal{G}}}$. It follows that $\ensuremath{\mathcal{G}}$ is a connected parahoric. \end{proof} \begin{remark} The assumption of acceptability on the triple above is what is needed to construct canonical liftings in \S\ref{sec: canonical liftings}. 
We remark that it is possible for a triple $(\ensuremath{\mathbf{G}}_2,X_2,\ensuremath{\mathcal{G}}_2)$ to be acceptable even if $\ensuremath{\mathbf{G}}_{2,\ensuremath{\mathrm{ad}}}$ has factors of type $D^{\ensuremath{\mathbb{H}}}$, cf. Proposition \ref{prop: reduction to acceptable case}; thus it is a more general notion than just excluding $D^{\ensuremath{\mathbb{H}}}$ factors. \end{remark} \subsubsection{} Proposition \ref{lemma: auxiliary Hodge type datum} shows that if $(\ensuremath{\mathbf{G}}_2,X_2)$ is acceptable, it can be related to a Hodge type Shimura datum $(\ensuremath{\mathbf{G}},X)$ satisfying the assumptions in \S\ref{subsec: integral model abelian type}. We thus obtain the following theorem; the argument is the same as \cite[Theorem 4.6.23]{KP}. \begin{thm}\label{thm: integral models abelian type}Let $(\ensuremath{\mathbf{G}}_2,X_2)$ be an acceptable Shimura datum. Let $\ensuremath{\mathcal{G}}_2$ be a parahoric group scheme of $G_2$ and set $\ensuremath{\mathrm{K}}_{2,p}:=\ensuremath{\mathcal{G}}_2(\ensuremath{\mathbb{Z}}_p)$. Then there exists a Shimura datum of Hodge type $(\ensuremath{\mathbf{G}},X)$ such that the conditions of Proposition \ref{prop: auxiliary SV construction} are satisfied and such that all primes $v_2|p$ of $\ensuremath{\mathbf{E}}_2$ split completely in $\ensuremath{\mathbf{E}}'=\ensuremath{\mathbf{E}}.\ensuremath{\mathbf{E}}_2$. In particular for any prime $v_2|p$ of $\ensuremath{\mathbf{E}}_2$, we obtain a $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{A}}_f^p)$-equivariant $\ensuremath{\mathcal{O}}_{E_2}$-scheme $\mathscr{S}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)$ with the following properties. 
\begin{enumerate}\item $\mathscr{S}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)$ is \'etale locally isomorphic to $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}$, where $\ensuremath{\mathcal{G}}$ is the parahoric group scheme of $G$ corresponding to $\ensuremath{\mathcal{G}}_2$. \item For any discrete valuation ring $R$ of mixed characteristic the map $$\mathscr{S}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)(R)\rightarrow\mathscr{S}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)(R[\frac{1}{p}])$$ is a bijection. \item If the triple $(\ensuremath{\mathbf{G}}_2,X_2,\ensuremath{\mathcal{G}}_2)$ is acceptable, then $(\ensuremath{\mathbf{G}},X)$ can be chosen so that for any compact open subgroup $\ensuremath{\mathrm{K}}_2=\ensuremath{\mathrm{K}}_{2,p}\ensuremath{\mathrm{K}}_2^p\subset \ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{A}}_f)$, there exists a diagram of $\ensuremath{\mathcal{O}}_{E_2}$-stacks \[\xymatrix{ &\widetilde{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)\ar[dr]^q\ar[dl]_\pi&\\ \mathscr{S}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2) & &\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}}\] where $\mathscr{S}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2):=\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)/\ensuremath{\mathrm{K}}_2^p$, $\pi$ is a $\ensuremath{\mathcal{G}}_{\mathrm{ad}}$-torsor and the map $q$ is smooth of relative dimension $\dim \ensuremath{\mathbf{G}}_{\mathrm{ad}}$. In particular, such a diagram exists if $\ensuremath{\mathbf{G}}_2$ has no factors of type $D^\ensuremath{\mathbb{H}}$. \end{enumerate} \end{thm} \qed \begin{remark}\begin{enumerate} \item If $p>2$, then every abelian type Shimura datum $(\ensuremath{\mathbf{G}}_2,X_2)$ is acceptable. 
Thus this Theorem essentially completes the construction of integral models for abelian type Shimura varieties with parahoric level over primes $p>3$. Moreover for $p=3$, only the case when $G_{2,\ensuremath{\mathrm{ad}}}$ has a factor of type $D_4$ needs to be excluded. \item The local model diagram in Theorem \ref{thm: integral models abelian type} (3), is a weaker form of the diagram postulated in \cite{HR}. However, for our applications, the important property is that $\widetilde{\ensuremath{\mathscr{S}}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)\rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)$ is a torsor for a \textit{connected} smooth $\ensuremath{\mathcal{O}}_{E_2}$-group scheme.\end{enumerate} \end{remark} \subsection{$\mu$-ordinary locus and canonical liftings}\label{sec: canonical liftings} \subsubsection{}\label{sec: canonical liftings 1} We keep the notation of \S\ref{sec: integral models abelian type}. We let $(\ensuremath{\mathbf{G}}_2,X_2)$ be an acceptable Shimura datum and $\ensuremath{\mathrm{K}}_{2,p}=\ensuremath{\mathcal{G}}_2(\ensuremath{\mathbb{Z}}_p)$ where $\ensuremath{\mathcal{G}}_2$ is a parahoric group scheme of $G_2:=\ensuremath{\mathbf{G}}_{2,\ensuremath{\mathbb{Q}}_p}$. Then by Theorem \ref{thm: integral models abelian type}, we may construct an integral model $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)/\ensuremath{\mathcal{O}}_{E_2}$ for $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)$ from an auxiliary Shimura datum $(\ensuremath{\mathbf{G}},X)$ of Hodge type as in the conclusion of Proposition \ref{lemma: auxiliary Hodge type datum} equipped with a good Hodge embedding $\iota:(\ensuremath{\mathbf{G}},X)\rightarrow (\ensuremath{\mathrm{GSp}}(V),S^\pm)$. In particular $(\ensuremath{\mathbf{G}},X)$ satisfies the conditions in \S\ref{subsec: integral model abelian type}. 
We fix such a $(\ensuremath{\mathbf{G}},X)$ and $\iota$ for the rest of this section. We assume that $(\ensuremath{\mathbf{G}}_2,X_2)$ is of Hodge type and we fix a Hodge embedding $\iota_2:(\ensuremath{\mathbf{G}}_2,X_2)\rightarrow (\mathbf{GSp}(V_2),S_2^\pm)$. By the main theorem of \cite{Landvogt}, there is a $G_2(\ensuremath{\mathbb{Q}}_p^{\mathrm{ur}})$-equivariant embedding of buildings $\ensuremath{\mathcal{B}}(G_2,\ensuremath{\mathbb{Q}}_p^{\mathrm{ur}})\rightarrow \ensuremath{\mathcal{B}}(\mathrm{GSp}(V_{2,\ensuremath{\mathbb{Q}}_p}),\ensuremath{\mathbb{Q}}_p^{\mathrm{ur}})$. Upon replacing $\iota_2$ with a new Hodge embedding, we may assume that there is a $\ensuremath{\mathbb{Z}}_p$-lattice $V_{2,\ensuremath{\mathbb{Z}}_p}\subset V_{2,\ensuremath{\mathbb{Q}}_p}$ with $V_{2,\ensuremath{\mathbb{Z}}_p}\subset V_{2,\ensuremath{\mathbb{Z}}_p}^\vee$ such that $G_2\rightarrow \mathrm{GSp}(V_{2,\ensuremath{\mathbb{Q}}_p})$ extends to a morphism of Bruhat--Tits stabilizer schemes $\widetilde{\ensuremath{\mathcal{G}}}_2\rightarrow \mathcal{GSP}$, where $\mathcal{GSP}$ is the stabilizer group scheme of $V_{2,\ensuremath{\mathbb{Z}}_p}$ (cf. \cite[Proposition 1.7.6]{BT2}). We set $\ensuremath{\mathrm{K}}'_{2,p}:=\mathcal{GSP}(\ensuremath{\mathbb{Z}}_p)$ and we let $\ensuremath{\mathrm{K}}_2'^p\subset \mathbf{GSp}(V_{2,\ensuremath{\mathbb{A}}_f^p})$ be a compact open subgroup containing $\ensuremath{\mathrm{K}}_2^p$. \begin{prop}\label{prop: morphism integral models}There is a map of $\ensuremath{\mathcal{O}}_{E_2}$-stacks \begin{equation}\label{eqn: Hodge type SV map}\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)\rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2'}(\mathbf{GSp}(V_2),S_2^\pm)_{\ensuremath{\mathcal{O}}_{E_2}}\end{equation} extending the natural map on the generic fiber. 
\end{prop} \begin{proof} Let $\ensuremath{\mathbf{Z}}$ denote the center of $\ensuremath{\mathbf{G}}$ and write $\ensuremath{\mathbf{Z}}^{\mathrm{sp}}$ for the connected component of the identity of the kernel of the multiplier homomorphism $c:\mathbf{GSp}(V)\rightarrow \ensuremath{\mathbb{G}}_m$ restricted to $\ensuremath{\mathbf{Z}}$. We define a subgroup $\ensuremath{\mathbf{G}}_3\subset \mathbf{GL}(V)\times\mathbf{GL}(V_2)$ generated by $\ensuremath{\mathbf{Z}}^{\mathrm{sp}}\times 1$, the image of $\ensuremath{\mathbf{G}}_{\ensuremath{\mathrm{der}}}$ under the product of $\iota$ and $\ensuremath{\mathbf{G}}_{\ensuremath{\mathrm{der}}}\rightarrow\ensuremath{\mathbf{G}}_{2,\ensuremath{\mathrm{der}}}\xrightarrow{\iota_2} \mathbf{GSp}(V_2),$ and the diagonal torus $\ensuremath{\mathbb{G}}_m\subset \mathbf{GL}(V)\times\mathbf{GL}(V_2)$. Set $V_3=V\oplus V_2$, which we may equip with a perfect alternating bilinear form induced from $V$ and $V_2$. As in \cite[\S4.3]{Zhang}, there is a conjugacy class of Deligne homomorphisms $X_3$ for $\ensuremath{\mathbf{G}}_3$ such that $(\ensuremath{\mathbf{G}}_3,X_3)$ is a Shimura datum, and there are natural morphisms of Shimura data \[\xymatrix{(\ensuremath{\mathbf{G}},X)&(\ensuremath{\mathbf{G}}_3,X_3) \ar[r]\ar[l]& (\ensuremath{\mathbf{G}}_2, X_2) \ar[r] & (\mathbf{GSp}(V_2),S_2^\pm)}.\] Moreover, using the explicit description of $\ensuremath{\mathbf{G}}_3$ and our assumption on $\iota_2$ above, one checks that the Hodge embedding $\iota_3: (\ensuremath{\mathbf{G}}_3,X_3)\rightarrow (\mathbf{GSp}(V_3),S_3^\pm)$ is a good Hodge embedding. We can now conclude the proof by applying the arguments of \cite{Zhang}. 
More precisely, when $\ensuremath{\mathbf{G}}_2$ is tamely ramified the result follows from \cite[Proposition~5.4]{Zhang}, but the same arguments work since we have constructed integral models in a more general situation: Let $\ensuremath{\mathcal{G}}$ and $\ensuremath{\mathcal{G}}_3$ denote the parahoric group schemes of $G=\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Q}}_p}$ and $G_3=\ensuremath{\mathbf{G}}_{3,\ensuremath{\mathbb{Q}}_p}$ corresponding to $\ensuremath{\mathcal{G}}_2$, and set $\ensuremath{\mathrm{K}}_p=\ensuremath{\mathcal{G}}(\ensuremath{\mathbb{Z}}_p)$, $\ensuremath{\mathrm{K}}_{3,p}=\ensuremath{\mathcal{G}}_3(\ensuremath{\mathbb{Z}}_p)$. Arguing as in \cite[Theorem 4.6]{Zhang}, we obtain maps on connected components \begin{multline} \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{p}}(\ensuremath{\mathbf{G}},X)_{\ensuremath{\mathcal{O}}_{E_2^{\mathrm{ur}}}}^+ \simeq \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{3,p}}(\ensuremath{\mathbf{G}}_3,X_3)_{\ensuremath{\mathcal{O}}_{E_2^{\mathrm{ur}}}}^+ \\ \rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)_{\ensuremath{\mathcal{O}}_{E_2^{\mathrm{ur}}}}^+ \rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}'_{2,p}}(\mathbf{GSp}(V_2),S_2^\pm)_{\ensuremath{\mathcal{O}}_{E_2^{\mathrm{ur}}}}^+. \end{multline} We may then apply the argument of \cite[Proposition 5.4]{Zhang}, noting that the diagram (5.3.1) of \emph{loc. cit.} exists in our setting. \end{proof} \subsubsection{}\label{subsec; mu ord locs}Let $h:\ensuremath{\mathcal{A}}^2\rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)$ denote the pullback of the universal abelian variety along (\ref{eqn: Hodge type SV map}). Let $s_{\alpha}\in V_2^\otimes$ be a collection of tensors whose stabilizer is $\ensuremath{\mathbf{G}}_2$. 
Then as in \S\ref{subsubsec:hodgecycles}, these give rise to tensors $s_{\alpha,B}\in V_B:=R^1h_{\mathrm{an*}}\ensuremath{\mathbb{Q}}$, $s_{\alpha,\ell}\in\ensuremath{\mathcal{V}}_\ell(\ensuremath{\mathcal{A}}^2):=R^1h_{\mathrm{\acute{e}t*}}\ensuremath{\mathbb{Q}}_\ell$ for all $\ell\neq p$ and $s_{\alpha,p}\in\ensuremath{\mathcal{V}}_p(\ensuremath{\mathcal{A}}^2):=R^1h_{\eta,\mathrm{\acute{e}t*}}\ensuremath{\mathbb{Q}}_p$. For any $\ensuremath{\mathcal{O}}_{E_2}$-scheme $T$ and $x\in \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)(T)$, we write $\ensuremath{\mathcal{A}}^2_{x}$ for the pullback of $\ensuremath{\mathcal{A}}^2$ to $x$. For $K/\ensuremath{\breve{\mathbb{Q}}_p}$ finite and $\widetilde{x}\in \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)(\ensuremath{\mathcal{O}}_K)$ with special fiber $\overline{x}$, we let $s_{\alpha,0,\widetilde{x}}\in \ensuremath{\mathbb{D}}(\ensuremath{\mathcal{A}}^2_{\overline{x}}[p^\infty])[1/p]^\otimes$ denote the images of $s_{\alpha,p,\widetilde{x}}$ under the $p$-adic comparison isomorphism. As in \S\ref{subsec: integral models formal nbd}, these tensors depend only on $\overline{x}$ and not on $\widetilde{x}$; we thus write $s_{\alpha,0,\overline{x}}$ for these tensors. Note that \cite[Proposition 1.3.7]{KMS} applies here since the morphism $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)\rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2'}(\mathbf{GSp}(V_2),S_2^\pm)_{\ensuremath{\mathcal{O}}_{E_2}}$ factors through the normalization of its scheme theoretic image, and all objects are pulled back from this normalization. \subsubsection{}Let $\ensuremath{{\overline{x}}}\in \mathscr{S}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)(k)$, and set $\ensuremath{\mathbb{D}}:=\ensuremath{\mathbb{D}}(\ensuremath{\mathcal{A}}^2_{\overline{x}}[p^\infty])$. 
We fix an isomorphism $$V^\vee_{2,\ensuremath{\mathbb{Z}}_p}\otimes_{\ensuremath{\mathbb{Z}}_p}\ensuremath{\breve{\mathbb{Q}}_p}\cong\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\breve{\mathbb{Q}}_p},$$ taking $s_{\alpha}$ to $ s_{\alpha,0,\overline{x}}$; such an isomorphism exists by Steinberg's theorem (cf. \cite[1.3.8]{KMS}). Then the Frobenius on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\breve{\mathbb{Q}}_p}$ is given by $b\sigma$ for some $b\in G_2(\ensuremath{\breve{\mathbb{Q}}_p})$. By \cite[Lemma 1.3.9]{KMS}, we have $[b]\in B(G_2,\{\mu_2\})$ where $\{\mu_2\}=\{\mu_{h_2}^{-1}\}$. We write $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_2}$ (resp. $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_{2,p}}$) for the special fiber of $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)$ (resp. $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)$) over the residue field $k_{E_2}$ of $\ensuremath{\mathcal{O}}_{E_2}$. The map $\ensuremath{\mathcal{S}}_{{\ensuremath{\mathrm{K}}}_2}(k)\rightarrow B(G_2,\{\mu_2\})$ sending $\overline{x}$ to the $\sigma$-conjugacy class $[b]$ of the associated element $b$ induces the Newton stratification of $\ensuremath{\mathcal{S}}_{{\ensuremath{\mathrm{K}}_2},k}:=\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_2}\otimes_{k_{E_2}}k$. For $[b]\in B(G_2,\{\mu_2\})$, we write $\ensuremath{\mathcal{S}}_{{\ensuremath{\mathrm{K}}_2},[b]}\subset\ensuremath{\mathcal{S}}_{{\ensuremath{\mathrm{K}}_2},k}$ for the stratum corresponding to $[b]$; if $\ensuremath{\mathrm{K}}_2^p$ is neat, it is a locally closed subscheme of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_2,k}$. 
Similarly, we write $ \ensuremath{\mathcal{S}}_{{\ensuremath{\mathrm{K}}_{2,p}},[b]} = \underset{\leftarrow}{\lim}_{\ensuremath{\mathrm{K}}_2^p} \ensuremath{\mathcal{S}}_{{\ensuremath{\mathrm{K}}_{2,p}\ensuremath{\mathrm{K}}_2^p},[b]};$ such a definition makes sense since $\ensuremath{\mathcal{S}}_{{\ensuremath{\mathrm{K}}_{2}},[b]}$ is compatible with the prime to $p$ level. For the rest of \S\ref{sec: canonical liftings} we assume the existence of the class $[b]_{\mu_2}\in B(G_2,\{\mu_2\})$ as in Definition \ref{def: mu ordinary}. \begin{definition} We define the \textit{$\mu_2$-ordinary locus} of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_2,k}$ to be $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_2,[b]_{\mu_2}}$. \end{definition} \subsubsection{}We say that a parahoric subgroup $\ensuremath{\mathrm{K}}_{2,p}=\ensuremath{\mathcal{G}}_2(\ensuremath{\mathbb{Z}}_p)$ is \textit{very special} if $\ensuremath{\mathcal{G}}_2(\ensuremath{\breve{\mathbb{Z}}_p})$ is a special parahoric subgroup of $G_2(\ensuremath{\breve{\mathbb{Q}}_p})$. Note that such a parahoric exists if and only if $G_2$ is quasi-split (cf. \cite[Lemma 6.1]{Zhu2}). The following is deduced easily from \cite[Corollary 1.3.16]{KMS}. \begin{thm}\label{thm: density} Assume $G_2$ is quasi-split, $\ensuremath{\mathrm{K}}_{2,p}=\ensuremath{\mathcal{G}}_2(\ensuremath{\mathbb{Z}}_p)$ is a very special parahoric subgroup and $\ensuremath{\mathrm{K}}_2^p$ is neat. Then \begin{enumerate} \item $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_2}$ is normal. \item The $\mu_2$-ordinary locus $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_{2},[b]_{\mu_2}}$ is Zariski open and dense in $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_2,k}$. \end{enumerate} \end{thm} \begin{proof} To show (1), it suffices by Theorem \ref{thm: integral models abelian type} to show that the special fiber of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}},\{\mu_h\}}$ is normal. 
For this, it suffices by Theorem \ref{thm: Levin} to show that the special fiber is integral. This follows from the argument in \cite[Corollary 9.4]{PZ}, noting that as in {\em loc.~cit.} the $\mu$-admissible set $\ensuremath{\mathrm{Adm}}(\{\mu\})_J$ has a single extremal element when $J\subset \ensuremath{\mathbb{S}}$ corresponds to a very special standard parahoric of $G(\ensuremath{\breve{\mathbb{Q}}_p})$. (2) follows from (1) by \cite[Corollary 1.3.16]{KMS}. \end{proof} \subsubsection{}\label{subsec: construction of I groups} Let $\ensuremath{{\overline{x}}}\in \mathscr{S}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)(k)$. Define $\mathrm{Aut}_\ensuremath{\mathbb{Q}}(\mathcal{A}^2_{\ensuremath{{\overline{x}}}})$ to be the $\ensuremath{\mathbb{Q}}$-group whose points in a $\ensuremath{\mathbb{Q}}$-algebra $R$ are given by $$\mathrm{Aut}_{\ensuremath{\mathbb{Q}}}(\mathcal{A}^2_{\ensuremath{{\overline{x}}}})(R)=(\mathrm{End}(\mathcal{A}^2_{\ensuremath{{\overline{x}}}})\otimes_{\ensuremath{\mathbb{Z}}} R)^\times.$$ By functoriality, $\mathrm{Aut}_{\ensuremath{\mathbb{Q}}}(\mathcal{A}^2_{\ensuremath{{\overline{x}}}})$ acts on $T_\ell\mathcal{A}^2_{\ensuremath{{\overline{x}}}}\otimes_{\ensuremath{\mathbb{Z}}_\ell}\ensuremath{\mathbb{Q}}_\ell$ for $\ell\neq p$ and on $\ensuremath{\mathbb{D}}\otimes_{\ensuremath{\breve{\mathbb{Z}}_p}}\ensuremath{\breve{\mathbb{Q}}_p}$, and we write $I_{\ensuremath{{\overline{x}}}}$ for the closed subgroup of $\mathrm{Aut}_{\ensuremath{\mathbb{Q}}}(\mathcal{A}^2_{\ensuremath{{\overline{x}}}})$ consisting of automorphisms which preserve $s_{\alpha,\ell,\ensuremath{{\overline{x}}}}$ and $s_{\alpha,0,\ensuremath{{\overline{x}}}}$. There is a canonical inclusion $I_{\ensuremath{{\overline{x}}}}\otimes_\ensuremath{\mathbb{Q}}\ensuremath{\mathbb{Q}}_p\subset J_b$, where $J_b$ is the $\sigma$-centralizer group for $b\in G_2(\ensuremath{\breve{\mathbb{Q}}_p})$. The goal of the rest of this section is to prove the following theorem. 
\begin{thm}\label{thm: can lift for Shimura var} Assume the triple $(\ensuremath{\mathbf{G}}_2,X_2,\ensuremath{\mathcal{G}}_2)$ is acceptable. Let $\ensuremath{{\overline{x}}}\in \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_2,[b]_{\mu_2}}(k)$. Then $\ensuremath{{\overline{x}}}$ admits a lifting to a special point $\widetilde{x}\in\mathscr{S}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)(K)$ for some $K/\ensuremath{\breve{\mathbb{Q}}_p}$ finite such that the action of $I_{\ensuremath{{\overline{x}}}}(\ensuremath{\mathbb{Q}})$ on $\mathcal{A}^2_{\ensuremath{{\overline{x}}}}$ lifts to an action (in the isogeny category) on $\mathcal{A}^2_{\widetilde{x}}$. \end{thm} \begin{remark}\label{rem: independence of model} The statement of the Theorem and all the constructions above implicitly depend on the choice of auxiliary Shimura datum $(\ensuremath{\mathbf{G}},X)$ and the choice of Hodge embeddings $\iota$ and $\iota_2$. It is possible to show that they are independent of the choices, but we will not consider this and always work with a fixed choice of $(\ensuremath{\mathbf{G}},X)$ and $\iota,\iota_2$. \end{remark} \subsubsection{}Note that $(\ensuremath{\mathbf{G}},X,\ensuremath{\mathcal{G}})$ is also an acceptable triple with $(\ensuremath{\mathbf{G}},X)$ Hodge type. Theorem \ref{thm: can lift for Shimura var} will be reduced to the following special case. \begin{prop}\label{prop: Canonical lift good case}Assume $(\ensuremath{\mathbf{G}}_2,X_2,\ensuremath{\mathcal{G}}_2)=(\ensuremath{\mathbf{G}},X,\ensuremath{\mathcal{G}})$ and the Hodge embeddings $\iota$ and $\iota_2$ coincide. Then Theorem \ref{thm: can lift for Shimura var} holds. \end{prop} \begin{proof}Under these assumptions, we have $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)=\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)$ and the integral model is constructed as in \S\ref{subsub: integral model Hodge type construction}. 
Moreover $\ensuremath{\mathcal{G}}_2$ is a connected parahoric. Since the definition of $I_{\overline{x}}$ is independent of the prime to $p$ level, it suffices to consider the case of neat $\ensuremath{\mathrm{K}}_2^p$. Applying the construction in \S\ref{sec: M-adapted}, we obtain a parahoric model $\mathcal{M}$ of a Levi subgroup $M \subset G_2,$ and an $M$-valued cocharacter $\widetilde\lambda$ lying in the $G_2$-conjugacy class of $\mu_2$ and such that $\widetilde{\lambda}$ is central in $M$. Let $\ensuremath{\mathscr{G}}$ be the $(\mathcal{M},\widetilde{\lambda})$-adapted deformation to $\ensuremath{\mathcal{O}}_K$ constructed in Theorem \ref{thm: can-lift}. By Proposition \ref{prop: formal nbd Shimura}, $\ensuremath{\mathscr{G}}$ corresponds to a point $\widetilde{x}\in \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)(\ensuremath{\mathcal{O}}_K)$ lifting $\overline{x}$ and hence to an abelian variety $\mathcal{A}^2_{\widetilde{x}}$ over $K$. By Theorem \ref{thm: can-lift}, the action of $J_b(\ensuremath{\mathbb{Q}}_p)$ on $\ensuremath{\mathscr{G}}_{\ensuremath{{\overline{x}}}}$ lifts to $\ensuremath{\mathscr{G}}$. Since $I_{\ensuremath{{\overline{x}}}}(\ensuremath{\mathbb{Q}})\subset J_b(\ensuremath{\mathbb{Q}}_p)$, by the Serre--Tate theorem, the action of $I_{\ensuremath{{\overline{x}}}}$ lifts to $\mathcal{A}^2_{\widetilde{x}}$ in the isogeny category. We now show $\widetilde{x}$ is a special point. Since $I_{\ensuremath{{\overline{x}}}}$ fixes the tensors $s_{\alpha,0,\ensuremath{{\overline{x}}}}$, it also fixes $s_{\alpha,p,\widetilde{x}}$, and hence it fixes $s_{\alpha,B}$. Thus we may consider $I_\ensuremath{{\overline{x}}}$ as a subgroup of $\mathbf{G}_2$. By \cite[Theorem 6]{KMS}, the absolute rank of $I_{\ensuremath{{\overline{x}}}}$ is equal to the absolute rank of $\ensuremath{\mathbf{G}}_2$. 
Let $\ensuremath{\mathbf{T}}$ be a maximal torus of $I_{\ensuremath{{\overline{x}}}}$, which is therefore a maximal torus of $\ensuremath{\mathbf{G}}_2$. The Mumford--Tate group of $\mathcal{A}^2_{\widetilde{x}}$ is a subgroup of $\ensuremath{\mathbf{G}}_2$ which commutes with $\ensuremath{\mathbf{T}}$, and hence must be contained in $\ensuremath{\mathbf{T}}$. Therefore $\widetilde{x}$ is a special point. \end{proof} \subsubsection{} To prove Theorem \ref{thm: can lift for Shimura var} in general, we make use of the following auxiliary construction. For notational convenience, we write $(\ensuremath{\mathbf{G}}_1,X_1)$ for $(\ensuremath{\mathbf{G}},X)$ and $\iota_1:(\ensuremath{\mathbf{G}}_1,X_1)\rightarrow (\mathbf{GSp}(V_1),S_1^\pm)$ for the good Hodge embedding $\iota$. We define $\ensuremath{\mathbf{G}}_3$ to be the identity component of $(\ensuremath{\mathbf{G}}_1\times_{\ensuremath{\mathbf{G}}_{2,\mathrm{ad}}}\ensuremath{\mathbf{G}}_2)\times_{\ensuremath{\mathbb{G}}_m\times\ensuremath{\mathbb{G}}_m}\ensuremath{\mathbb{G}}_m$, where $\ensuremath{\mathbf{G}}_1\times_{\ensuremath{\mathbf{G}}_{2,\mathrm{ad}}}\ensuremath{\mathbf{G}}_2\rightarrow \ensuremath{\mathbb{G}}_m\times\ensuremath{\mathbb{G}}_m$ is induced by composing with the multiplier homomorphisms $c_{1}:\mathbf{GSp}(V_1)\rightarrow\ensuremath{\mathbb{G}}_m$, $c_{2}:\mathbf{GSp}(V_2)\rightarrow\ensuremath{\mathbb{G}}_m$, and $\ensuremath{\mathbb{G}}_m\rightarrow\ensuremath{\mathbb{G}}_m\times\ensuremath{\mathbb{G}}_m$ is the diagonal embedding. Let $h_1\in X_1$ and $h_2\in X_2$ be elements which have the same image in $X_{2,\mathrm{ad}}$; such a pair exists by our choice of $\ensuremath{\mathbf{G}}_{1,\ensuremath{\mathrm{ad}}}\cong \ensuremath{\mathbf{G}}_{2,\ensuremath{\mathrm{ad}}}$ (cf. \S\ref{subsec: integral model abelian type}). 
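For later reference, we display the definition of $\ensuremath{\mathbf{G}}_3$ just given:
\[\ensuremath{\mathbf{G}}_3=\Bigl(\bigl(\ensuremath{\mathbf{G}}_1\times_{\ensuremath{\mathbf{G}}_{2,\mathrm{ad}}}\ensuremath{\mathbf{G}}_2\bigr)\times_{\ensuremath{\mathbb{G}}_m\times\ensuremath{\mathbb{G}}_m}\ensuremath{\mathbb{G}}_m\Bigr)^{\circ},\]
where the first map to $\ensuremath{\mathbb{G}}_m\times\ensuremath{\mathbb{G}}_m$ is induced by the multipliers $c_1$ and $c_2$, and the second is the diagonal embedding.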
Then $h_1\times h_2$ factors through $\ensuremath{\mathbf{G}}_3$ and determines a $\ensuremath{\mathbf{G}}_{3,\ensuremath{\mathbb{R}}}$-conjugacy class of Deligne homomorphisms $X_3$ such that $(\ensuremath{\mathbf{G}}_3,X_3)$ is a Shimura datum. There are natural morphisms of Shimura data \[\xymatrix{(\ensuremath{\mathbf{G}}_1,X_1)&(\ensuremath{\mathbf{G}}_3,X_3)\ar[r]\ar[l]&(\ensuremath{\mathbf{G}}_2,X_2).}\] For $i=1,2,3$, let $\ensuremath{\mathbf{E}}_i$ denote the reflex field of $(\ensuremath{\mathbf{G}}_i,X_i)$; then we have $\ensuremath{\mathbf{E}}_3\subset\ensuremath{\mathbf{E}}':=\ensuremath{\mathbf{E}}_1\ensuremath{\mathbf{E}}_2$. We let $v_i$ (resp. $v'$) denote the place of $\ensuremath{\mathbf{E}}_i$ (resp. $\ensuremath{\mathbf{E}}'$) induced by the embedding $i_p$ and we let $E_i$ (resp. $E'$) denote the completion. By construction, we have $E'=E_2$. Set $G_i:=\ensuremath{\mathbf{G}}_{i,\ensuremath{\mathbb{Q}}_p}$, and let $\ensuremath{\mathcal{G}}_1$ (resp. $\ensuremath{\mathcal{G}}_3$) denote the parahoric subgroup of $G_1$ (resp. $G_3$) determined by $\ensuremath{\mathcal{G}}_2$. For $i=1,2,3$, we set $\ensuremath{\mathrm{K}}_{i,p}:=\ensuremath{\mathcal{G}}_i(\ensuremath{\mathbb{Z}}_p)$ and we fix compact open subgroups $\ensuremath{\mathrm{K}}_i^p\subset\ensuremath{\mathbf{G}}_i(\ensuremath{\mathbb{A}}_f^p)$ such that $\ensuremath{\mathrm{K}}_3^p$ maps to $\ensuremath{\mathrm{K}}_1^p$ and $\ensuremath{\mathrm{K}}_2^p$. We set $\ensuremath{\mathrm{K}}_i:=\ensuremath{\mathrm{K}}_{i,p}\ensuremath{\mathrm{K}}_i^p$. \subsubsection{} Let $\ensuremath{\mathbf{H}}$ denote the subgroup of $ \mathbf{GSp}(V_1)\times\mathbf{GSp}(V_2)$ consisting of elements $(g_1,g_2)$ such that $c_{1}(g_1)=c_{2}(g_2)$. 
Then the natural map $\ensuremath{\mathbf{G}}_3\rightarrow \mathbf{GSp}(V_1)\times\mathbf{GSp}(V_2)$ factors through $\ensuremath{\mathbf{H}}$ and we let $S'$ denote the $\ensuremath{\mathbf{H}}_{\ensuremath{\mathbb{R}}}$-conjugacy class of homomorphisms $\ensuremath{\mathbb{S}}\rightarrow \ensuremath{\mathbf{H}}_{\ensuremath{\mathbb{R}}}$ induced by $X_3$. Set $V_3:=V_1\oplus V_2$. We equip $ V_3$ with a perfect alternating bilinear form given by the sum of the forms on $V_1$ and $V_2$. Then there are natural morphisms of Shimura data $(\ensuremath{\mathbf{H}},S')\rightarrow (\mathbf{GSp}(V_i),S_i^\pm)$ for $i=1,2,3$. Recall we have fixed a $\ensuremath{\mathbb{Z}}_p$-lattice $V_{2,\ensuremath{\mathbb{Z}}_p}\subset V_{2,\ensuremath{\mathbb{Q}}_p}$; we let $V_{1,\ensuremath{\mathbb{Z}}_p}\subset V_{1,\ensuremath{\mathbb{Q}}_p}$ be a $\ensuremath{\mathbb{Z}}_p$-lattice such that $\iota_1$ is good with respect to $V_{1,\ensuremath{\mathbb{Z}}_p}$. We set $V_{3,\ensuremath{\mathbb{Z}}_p}:=V_{1,\ensuremath{\mathbb{Z}}_p}\oplus V_{2,\ensuremath{\mathbb{Z}}_p}\subset V_{3,\ensuremath{\mathbb{Q}}_p}$. For $i=1,2,3$, we let $\ensuremath{\mathrm{K}}'_{i,p}$ denote the stabilizer of $V_{i,\ensuremath{\mathbb{Z}}_p}$ inside $\mathbf{GSp}(V_{i,\ensuremath{\mathbb{Q}}_p})$ and let $\ensuremath{\mathrm{H}}_p$ denote the stabilizer of $V_{3,\ensuremath{\mathbb{Z}}_p}$ inside $\ensuremath{\mathbf{H}}(\ensuremath{\mathbb{Q}}_p)$. We also fix compact open subgroups $\ensuremath{\mathrm{K}}_i'^p\subset \mathbf{GSp}(V_{i,\ensuremath{\mathbb{A}}_f^p})$ containing the image of $\ensuremath{\mathrm{K}}_i^p$ for $i=1,2,3$, $\ensuremath{\mathrm{H}}^p\subset \ensuremath{\mathbf{H}}(\ensuremath{\mathbb{A}}_f^p)$ containing the image of $\ensuremath{\mathrm{K}}_3^p$, and we set $\ensuremath{\mathrm{K}}_i'=\ensuremath{\mathrm{K}}'_{i,p}\ensuremath{\mathrm{K}}_i'^p$, $\ensuremath{\mathrm{H}}=\ensuremath{\mathrm{H}}_p\ensuremath{\mathrm{H}}^p$. 
The Shimura variety $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{H}}}(\ensuremath{\mathbf{H}},S')$ has a moduli interpretation as pairs of tuples $(A^i,\lambda_i,\epsilon^p_i)$, $i=1,2$, where $A^i$ is an abelian variety up to prime to $p$ isogeny, $\lambda_i$ is a weak polarization and $\epsilon^p_i$ is a prime to $p$ level structure; hence it extends to an integral model $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{H}}}(\ensuremath{\mathbf{H}},S')$ over $\ensuremath{\mathbb{Z}}_{(p)}$. \begin{prop} There is a commutative diagram of $\ensuremath{\mathcal{O}}_{E'}$-stacks \begin{equation}\label{eqn: product SV diagram}\xymatrix{\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_1}(\ensuremath{\mathbf{G}}_1,X_1)_{\ensuremath{\mathcal{O}}_{E'}}\ar[d]^{i_1}&\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_3}(\ensuremath{\mathbf{G}}_3,X_3)_{\ensuremath{\mathcal{O}}_{E'}}\ar[r]^{j_2}\ar[d]^{i_3}\ar[l]_{j_1}&\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)_{\ensuremath{\mathcal{O}}_{E'}}\ar[d]^{i_2}\\ \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_1'}(\mathbf{GSp}(V_1),S_1^\pm)_{\ensuremath{\mathcal{O}}_{E'}}&\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{H}}}(\ensuremath{\mathbf{H}},S')_{\ensuremath{\mathcal{O}}_{E'}}\ar[r]\ar[l]&\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2'}(\mathbf{GSp}(V_2),S_2^\pm)_{\ensuremath{\mathcal{O}}_{E'}}}.\end{equation} \end{prop} \begin{proof}It suffices to consider the case of neat prime to $p$ level structure so that we may assume all objects are schemes. The existence of the bottom row follows from the moduli interpretations of the integral models. The morphisms in the top row can be constructed using the same argument as \cite[Proposition 5.4]{Zhang}, noting that all the models are constructed via $(\ensuremath{\mathbf{G}}_1,X_1)$. 
The morphism $i_1$ exists by construction of $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_1}(\ensuremath{\mathbf{G}}_1,X_1)_{\ensuremath{\mathcal{O}}_{E'}}$. The morphism $i_2$ is constructed in Proposition \ref{prop: morphism integral models} and $i_3$ can be constructed in the same way. The commutativity then follows from the commutativity on the generic fiber. \end{proof} \subsubsection{} Composing $i_3$ with the natural map $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{H}}}(\ensuremath{\mathbf{H}},S')_{\ensuremath{\mathcal{O}}_{E'}}\rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_3'}(\mathbf{GSp}(V_3),S_3^\pm)_{\ensuremath{\mathcal{O}}_{E'}}$, we obtain a map $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_3}(\ensuremath{\mathbf{G}}_3,X_3)_{\ensuremath{\mathcal{O}}_{E'}}\rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_3'}(\mathbf{GSp}(V_3),S_3^\pm)_{\ensuremath{\mathcal{O}}_{E'}}$. Therefore we may apply the constructions of \S\ref{subsec; mu ord locs} to $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_3}(\ensuremath{\mathbf{G}}_3,X_3)_{\ensuremath{\mathcal{O}}_{E'}}$. For $i=1,2,3$, let $\ensuremath{\mathcal{A}}^{i}\rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_i}(\ensuremath{\mathbf{G}}_i,X_i)_{\ensuremath{\mathcal{O}}_{E'}}$ denote the pullback of the universal abelian variety along $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_i}(\ensuremath{\mathbf{G}}_i,X_i)_{\ensuremath{\mathcal{O}}_{E'}}\rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_i'}(\mathbf{GSp}(V_i),S_i^\pm)_{\ensuremath{\mathcal{O}}_{E'}}$.
For $i=3$, this map factors through $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{H}}}(\ensuremath{\mathbf{H}},S')_{\ensuremath{\mathcal{O}}_{E'}}$ and there is an identification \begin{equation} \label{eqn: product abelian variety} \ensuremath{\mathcal{A}}^3\cong j_1^*\ensuremath{\mathcal{A}}^1\times j_2^*\ensuremath{\mathcal{A}}^2.\end{equation} Let $\overline{x}_3\in\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_3}(\ensuremath{\mathbf{G}}_3,X_3)(k)$ and write $\overline{x}_1\in\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_1}(\ensuremath{\mathbf{G}}_1,X_1)(k)$, $\overline{x}_2\in\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)(k)$ for the images of $\overline{x}_3$ under $j_1$ and $j_2$. The isomorphism (\ref{eqn: product abelian variety}) implies that we have an isomorphism $\ensuremath{\mathcal{A}}^3_{\overline{x}_3}\cong \ensuremath{\mathcal{A}}^1_{\overline{x}_1}\times\ensuremath{\mathcal{A}}^2_{\overline{x}_2}$. For $i=1,2,3$, we let $I_{\overline{x}_i}\subset \mathrm{Aut}_{\ensuremath{\mathbb{Q}}}(\ensuremath{\mathcal{A}}^i_{\overline{x}_i})$ denote the groups constructed in the same way as \S\ref{subsec: construction of I groups}. \begin{prop}\label{prop: relation between I groups} There are natural exact sequences:\[\xymatrix{0\ar[r] & \ensuremath{\mathbf{C}}_1 \ar[r]&I_{\overline{x}_3}\ar[r]&I_{\overline{x}_1}\ar[r]&0}\] \[\xymatrix{0\ar[r] & \ensuremath{\mathbf{C}}_2 \ar[r]&I_{\overline{x}_3}\ar[r]&I_{\overline{x}_2}\ar[r]&0}\] where $\ensuremath{\mathbf{C}}_1$ (resp. $\ensuremath{\mathbf{C}}_2$) is the kernel of the map $f:\ensuremath{\mathbf{G}}_3\rightarrow \ensuremath{\mathbf{G}}_1$ (resp. $g:\ensuremath{\mathbf{G}}_3\rightarrow \ensuremath{\mathbf{G}}_2$).
\end{prop} \begin{proof} Since $\ensuremath{\mathbf{G}}_3\subset\ensuremath{\mathbf{H}}$, we may assume that the set of tensors defining $\ensuremath{\mathbf{G}}_3\subset \mathbf{GL}(V_3)$ includes tensors corresponding to the projections of $V_{3,\ensuremath{\mathbb{Z}}_{(p)}}$ onto the direct summands $V_{i,\ensuremath{\mathbb{Z}}_{(p)}}\subset V_{3,\ensuremath{\mathbb{Z}}_{(p)}}$ for $i=1,2$. It follows that $I_{\overline{x}_3}$ respects the product decomposition $\ensuremath{\mathcal{A}}^3_{\overline{x}_3}\cong \ensuremath{\mathcal{A}}^1_{\overline{x}_1}\times\ensuremath{\mathcal{A}}^2_{\overline{x}_2}$ and hence we obtain a natural map $I_{\overline{x}_3}\rightarrow \mathrm{Aut}_{\ensuremath{\mathbb{Q}}}(\ensuremath{\mathcal{A}}^1_{\overline{x}_1})$. Similarly, by considering the pullback to $V_{3}$ of tensors defining $\ensuremath{\mathbf{G}}_{1}$, one can show that $I_{\overline{x}_3}\rightarrow \mathrm{Aut}_{\ensuremath{\mathbb{Q}}}(\ensuremath{\mathcal{A}}^1_{\overline{x}_1})$ factors through $I_{\overline{x}_1}$. We obtain a natural map $I_{\overline{x}_3}\rightarrow I_{\overline{x}_1}$. Let $\widetilde{x}_3\in\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_3}(\ensuremath{\mathbf{G}}_3,X_3)(\ensuremath{\mathcal{O}}_K)$ denote a lift of $\overline{x}_3$. Since $\ensuremath{\mathbf{C}}_1$ lies in the center of $\ensuremath{\mathbf{G}}_3$, we have natural maps $$\ensuremath{\mathbf{C}}_1\rightarrow \mathrm{Aut}_{\ensuremath{\mathbb{Q}}}(\ensuremath{\mathcal{A}}^3_{\widetilde{x}_3}\otimes_K\overline{K})\rightarrow \mathrm{Aut}_{\ensuremath{\mathbb{Q}}}(\ensuremath{\mathcal{A}}^3_{\overline{x}_3,k})$$ whose image lies in $I_{\overline{x}_3}$. We thus obtain a sequence $\ensuremath{\mathbf{C}}_1\rightarrow I_{\overline{x}_3}\rightarrow I_{\overline{x}_1}$ and it suffices to check the exactness upon base changing to $\ensuremath{\mathbb{Q}}_{\ell}$ for some prime $\ell\neq p$. 
By \cite[Theorem 6]{KMS}, there is a semisimple element $\gamma_{\ell}\in \ensuremath{\mathbf{G}}_3(\ensuremath{\mathbb{Q}}_{\ell})$ such that the natural inclusion $I_{\overline{x}_3}\otimes_{\ensuremath{\mathbb{Q}}}\ensuremath{\mathbb{Q}}_\ell\subset \ensuremath{\mathbf{G}}_{3,\ensuremath{\mathbb{Q}}_\ell}$ (resp. $I_{\overline{x}_1}\otimes_{\ensuremath{\mathbb{Q}}}\ensuremath{\mathbb{Q}}_\ell\subset \ensuremath{\mathbf{G}}_{1,\ensuremath{\mathbb{Q}}_\ell}$) identifies $I_{\overline{x}_3}\otimes_{\ensuremath{\mathbb{Q}}}\ensuremath{\mathbb{Q}}_\ell$ (resp. $I_{\overline{x}_1}\otimes_{\ensuremath{\mathbb{Q}}}\ensuremath{\mathbb{Q}}_\ell$) with the centralizer of $\gamma_{\ell}$ in $\ensuremath{\mathbf{G}}_{3,\ensuremath{\mathbb{Q}}_\ell}$ (resp. $f(\gamma_\ell)$ in $\ensuremath{\mathbf{G}}_{1,\ensuremath{\mathbb{Q}}_\ell}$). We thus obtain the first exact sequence; the argument for $I_{\overline{x}_2}$ is analogous. \end{proof} \subsubsection{} We can now prove the general case of Theorem \ref{thm: can lift for Shimura var}. \begin{proof}[Proof of Theorem \ref{thm: can lift for Shimura var}] It suffices to consider the case of neat prime to $p$ level structure. For $i=1,2,3$, we write $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_i}$ for the special fiber of the integral model $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{i}}(\ensuremath{\mathbf{G}}_i,X_i)$. Let $\overline{x}_2\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_2,[b]_{\mu_2}}(k)$. We first assume that $\overline{x}_2=j_2(\overline{x}_3)$ for some $\overline{x}_3\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_3}(k)$; by Lemma \ref{lemma: mu-ordinary class change of groups} we have $\overline{x}_3\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_3,[b]_{\mu_3}}(k)$. Let $\overline{x}_1\in \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_1,[b]_{\mu_1}}(k)$ denote the image of $\overline{x}_3$.
By Proposition \ref{prop: Canonical lift good case}, there exists $K/\ensuremath{\breve{\mathbb{Q}}_p}$ finite and $\widetilde{x}_1\in\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}_1}(\ensuremath{\mathbf{G}}_1,X_1)(K)$ lifting $\overline{x}_1$ such that the action of $I_{\overline{x}_1}(\ensuremath{\mathbb{Q}})$ lifts to $\ensuremath{\mathcal{A}}^1_{\widetilde{x}_1}$. Then we may consider $I_{\overline{x}_1}$ as a subgroup of $\ensuremath{\mathbf{G}}_1$ and we let $\ensuremath{\mathbf{T}}_1$ denote the connected component of the center of $I_{\overline{x}_1}$. The Mumford--Tate group of $\ensuremath{\mathcal{A}}^1_{\widetilde{x}_1}$ is a connected subgroup of $\ensuremath{\mathbf{G}}_1$ which commutes with $I_{\overline{x}_1},$ hence is contained in $\ensuremath{\mathbf{T}}_1,$ as $I_{\overline{x}_1}$ and $\ensuremath{\mathbf{G}}_1$ have the same rank. Let $\ensuremath{\mathbf{T}}_3\subset\ensuremath{\mathbf{G}}_3$ denote the identity component of the preimage of $\ensuremath{\mathbf{T}}_1$ in $\ensuremath{\mathbf{G}}_3$ and $\ensuremath{\mathbf{T}}_2$ the image of $\ensuremath{\mathbf{T}}_3$ in $\ensuremath{\mathbf{G}}_2$. By construction, the morphisms of integral models $$\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_1}(\ensuremath{\mathbf{G}}_1,X_1)_{\ensuremath{\mathcal{O}}_{E'}}\leftarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_3}(\ensuremath{\mathbf{G}}_3,X_3)_{\ensuremath{\mathcal{O}}_{E'}}\rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)_{\ensuremath{\mathcal{O}}_{E'}}$$ induce isomorphisms of the completions at geometric points in the special fiber. Thus let $\widetilde{x}_3$ (resp. $\widetilde{x}_2$) denote the point lifting $\overline{x}_3$ (resp. $\overline{x}_2$) corresponding to $\widetilde{x}_1$. Then the Mumford--Tate group for $\ensuremath{\mathcal{A}}^3_{\widetilde{x}_3}$ (resp. $\ensuremath{\mathcal{A}}^2_{\widetilde{x}_2}$) is contained in $\ensuremath{\mathbf{T}}_3$ (resp. 
$\ensuremath{\mathbf{T}}_2$). It follows from Proposition \ref{prop: relation between I groups} that $I_{\overline{x}_3}$ (resp. $I_{\overline{x}_2}$) is contained in the centralizer of $\ensuremath{\mathbf{T}}_3$ in $\ensuremath{\mathbf{G}}_3$ (resp. $\ensuremath{\mathbf{T}}_2$ in $\ensuremath{\mathbf{G}}_2$), and hence the action of $I_{\overline{x}_2}(\ensuremath{\mathbb{Q}})$ lifts to an action on $\ensuremath{\mathcal{A}}^2_{\widetilde{x}_2}$. Now let $\overline{x}_2\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_2,[b]_{\mu_2}}(k)$ be any point. It suffices to prove the result with $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)$ in place of $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2),$ and with $\overline{x}_2$ replaced by a lift to a point of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_{2,p},[b]_{\mu_2}}(k),$ which we will again denote by $\overline{x}_2$. Recall that $J\subset \ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{Q}}_p)$ is a set of coset representatives for the image of (\ref{eqn: injection A groups}). Then by the construction of $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)$ via $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{1,p}}(\ensuremath{\mathbf{G}}_1,X_1)$ in \S\ref{subsubsec: construction of integral model}, there exists $j\in J$ such that $\overline{x}_2\in [\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{1,p}}(\ensuremath{\mathbf{G}}_1,X_1)^+\times\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{2,\ensuremath{\mathbb{Z}}_{(p)}})j]/\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{1,\ensuremath{\mathbb{Z}}_{(p)}})^\circ$.
We let $\overline{x}_2'\in [\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{1,p}}(\ensuremath{\mathbf{G}}_1,X_1)^+\times\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{2,\ensuremath{\mathbb{Z}}_{(p)}})]/\ensuremath{\mathscr{A}}(\ensuremath{\mathbf{G}}_{1,\ensuremath{\mathbb{Z}}_{(p)}})^\circ$ be the point corresponding to $\overline{x}_2$ under the isomorphism induced by $j$. Then upon modifying $\overline{x}_2$ by an element of $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{A}}_f^p)$, which only changes the abelian variety $\ensuremath{\mathcal{A}}^2_{\overline{x}_2}$ up to prime to $p$ isogeny, we may assume $\overline{x}_2'=j_2(\overline{x}_3')$ for some $\overline{x}_3'\in \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{3,p}}(\ensuremath{\mathbf{G}}_3,X_3)(k)$. Let $\widetilde{x}_2'\in\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)(\ensuremath{\mathcal{O}}_K)$ be a lift of $\overline{x}_2',$ for some finite extension $K/\ensuremath{\breve{\mathbb{Q}}_p}$. By construction, corresponding to the element $j,$ there is (after possibly increasing $K$) a point $\widetilde{x}_2 \in \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_{2,p}}(\ensuremath{\mathbf{G}}_2,X_2)(\ensuremath{\mathcal{O}}_K)$ lifting $\overline{x}_2,$ and a $p$-power quasi-isogeny $\ensuremath{\mathcal{A}}^2_{\widetilde{x}_2}\rightarrow \ensuremath{\mathcal{A}}^2_{\widetilde{x}_2'}$ taking $s_{\alpha,0,\overline{x}_2}$ to $s_{\alpha,0,\overline{x}_2'}$ (resp. $s_{\alpha,\ell,\overline{x}_2}$ to $s_{\alpha,\ell,\overline{x}_2'}$ for $\ell\neq p$).
By considering the reduction of this quasi-isogeny one sees that $\overline{x}'_2\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}_{2,p},[b]_{\mu_2}}(k),$ and one also obtains an induced isomorphism $I_{\overline{x}_2}\cong I_{\overline{x}_2'}.$ From what we saw above, it follows that we may choose $\widetilde{x}_2'$ such that the action of $I_{\overline{x}_2'}$ lifts to $\ensuremath{\mathcal{A}}^2_{\widetilde{x}'_2}$. Then the action of $I_{\overline{x}_2}\cong I_{\overline{x}_2'}$ lifts to $\ensuremath{\mathcal{A}}^2_{\widetilde{x}_2}$. \end{proof} \subsubsection{}\label{sec: indep mu ordinary}We will use the above to deduce properties of the conjugacy class of Frobenius as in \cite[\S2.3]{Ki3}. Assume $\ensuremath{{\overline{x}}}\in \mathcal{S}_{\ensuremath{\mathrm{K}}_2,[b]_{\mu_2}}(k)$ arises from an $\ensuremath{\mathbb{F}}_{q}$-point $x\in\mathscr{S}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)(\ensuremath{\mathbb{F}}_q)$, where $\ensuremath{\mathbb{F}}_q$ is a finite extension of $k_{E_2}$. For $\ell \neq p$ a prime, let $\gamma_\ell$ denote the geometric $q$-Frobenius in $\mathrm{Gal}(\overline{\mathbb{F}}_q/\ensuremath{\mathbb{F}}_{q})$ acting on the dual of the $\ell$-adic Tate module $T_\ell\mathcal{A}_\ensuremath{{\overline{x}}}^{2\vee}$. Since the tensors $s_{\alpha,\ell,\overline{x}}\in T_\ell\ensuremath{\mathcal{A}}_{\ensuremath{{\overline{x}}}}^{2,\otimes}$ are Galois-invariant, we may consider $\gamma_\ell$ as an element of $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{Q}}_\ell)$ via the level structure $V_{\ensuremath{\mathbb{Q}}_\ell}\cong T_\ell\ensuremath{\mathcal{A}}_{\overline{x}}^{2}\otimes_{\ensuremath{\mathbb{Z}}_\ell}\ensuremath{\mathbb{Q}}_\ell$. \begin{cor}\label{cor: l indep mu ordinary}Assume $(\ensuremath{\mathbf{G}}_2,X_2,\ensuremath{\mathcal{G}}_2)$ is an acceptable triple of Hodge type.
Suppose $\overline x\in\mathcal{S}_{\ensuremath{\mathrm{K}}_2,[b]_{\mu_2}}(k)$ arises from $x\in\mathscr{S}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)(\ensuremath{\mathbb{F}}_q)$. There exists an element $\gamma_0\in \ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{Q}})$ such that \begin{enumerate}\item For $\ell\neq p$, $\gamma_0$ is conjugate to $\gamma_\ell$ in $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{Q}}_\ell)$. \item $\gamma_0$ is elliptic in $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{R}})$. \end{enumerate} \end{cor} \begin{proof} The proof is the same as in \cite[Corollary 2.3.1]{Ki3}. Since $\mathcal{A}^2_{x}$ is defined over $\mathbb{F}_q$, the $q$-Frobenius $\gamma$ lies in $I_{\ensuremath{{\overline{x}}}}(\ensuremath{\mathbb{Q}})$. Let $\widetilde{x}\in\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}_2}(\ensuremath{\mathbf{G}}_2,X_2)(K)$ denote the lifting constructed in Theorem \ref{thm: can lift for Shimura var}. Then by considering the action of $I_{\overline{x}}(\ensuremath{\mathbb{Q}})$ on the Betti cohomology of $\ensuremath{\mathcal{A}}^2_{\widetilde{x}}$, we may consider $I_{\overline{x}}(\ensuremath{\mathbb{Q}})$ as a subgroup of $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{Q}})$. Defining $\gamma_0$ to be the image of $\gamma$ inside $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{Q}})$, we have that $\gamma_0$ is conjugate to $\gamma_\ell$ in $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{Q}}_\ell)$ by the Betti-\'etale comparison isomorphism. If $\ensuremath{\mathbf{T}}$ is any torus in $I_{\overline{x}}$ containing $\gamma_0$, the positivity of the Rosati involution implies that $\ensuremath{\mathbf{T}}(\ensuremath{\mathbb{R}})/w_{h_2}(\ensuremath{\mathbb{R}}^\times)$ is compact. Hence $\gamma_0\in \ensuremath{\mathbf{T}}(\ensuremath{\mathbb{Q}})$ is elliptic in $\ensuremath{\mathbf{G}}_2(\ensuremath{\mathbb{R}})$.
\end{proof} \section{Independence of $\ell$ for Shimura varieties}\label{sec: l indep for Shimura var} \subsection{Frobenius conjugacy classes}\label{sec: Frob conj classes} \subsubsection{} We apply the results of the previous section to deduce an $\ell$-independence result for the conjugacy class of Frobenius at all points on the special fiber of Shimura varieties. We keep the notation of the previous section but now $(\ensuremath{\mathbf{G}},X)$ will be an acceptable Shimura datum of Hodge type. As before we let $\ensuremath{\mathcal{G}}$ be a parahoric group scheme of $G=\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Q}}_p}$ and set $\ensuremath{\mathrm{K}}_p=\ensuremath{\mathcal{G}}(\ensuremath{\mathbb{Z}}_p)$. Then we have the integral model $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ over $\ensuremath{\mathcal{O}}_E$ constructed from a fixed auxiliary Hodge type Shimura datum $(\ensuremath{\mathbf{G}}_1,X_1)$ as in Proposition \ref{lemma: auxiliary Hodge type datum} and a good Hodge embedding $\iota_1$. The auxiliary Shimura datum $(\ensuremath{\mathbf{G}}_1,X_1)$ plays a minor role in what follows. Let $p>2$ and $\ell\neq p$ be primes and suppose that in addition the compact open subgroup $\ensuremath{\mathrm{K}}\subset \ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f)$ is of the form $\ensuremath{\mathrm{K}}_\ell\ensuremath{\mathrm{K}}^\ell$. 
We let $\widetilde{\ensuremath{\mathbb{L}}}_\ell$ denote the $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell)$-local system on $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ arising from the pro-\'etale covering $$\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}^\ell}(\ensuremath{\mathbf{G}},X):=\lim_{\underset{\ensuremath{\mathrm{K}}_\ell'\subset \ensuremath{\mathrm{K}}_\ell}\leftarrow}\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}'_\ell\ensuremath{\mathrm{K}}^\ell}(\ensuremath{\mathbf{G}},X)\rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$$ and we let $\ensuremath{\mathbb{L}}_\ell$ denote the induced local system on the special fiber $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$ over $k_E$. If $\iota:(\ensuremath{\mathbf{G}},X)\rightarrow (\mathbf{GSp}(V),S^\pm)$ is a Hodge embedding as in \S\ref{sec: canonical liftings 1}, then we have an identification \begin{equation}\label{eqn: id of local systems}\ensuremath{\mathbb{L}}_\ell=\underline{\mathrm{Isom}}_{(s_\alpha,s_{\alpha,\ell})}(V_{\ensuremath{\mathbb{Q}}_\ell},\ensuremath{\mathcal{V}}_\ell^\vee)\end{equation} where the scheme classifies $\ensuremath{\mathbb{Q}}_\ell$-linear isomorphisms taking $s_{\alpha}$ to $s_{\alpha,\ell}$; here the notation is as in \S\ref{subsec; mu ord locs}. \subsubsection{}Let $y\in \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbb{F}}_q)$ and write $\overline{y}$ for the induced geometric point of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$. We let $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^0$ denote the connected component of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$ containing $y$ and fix a geometric point $\overline{x}\in \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^0(k)$.
Over $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^0$, the $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell)$-local system $\ensuremath{\mathbb{L}}_{\ell}$ corresponds to a homomorphism $$\rho^0_{\ell}:\pi_1(\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^0,\overline{x})\rightarrow \ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_{\ell}).$$ We have a map $$\text{Gal}(\overline{\ensuremath{\mathbb{F}}}_q/\ensuremath{\mathbb{F}}_{q})\rightarrow \pi_1(\mathcal{S}^0_{\ensuremath{\mathrm{K}}},\overline{y})\xrightarrow{\sim} \pi_1(\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^0,\overline{x}),$$ where the isomorphism $\pi_1(\mathcal{S}^0_{\ensuremath{\mathrm{K}}},\overline{y})\xrightarrow{\sim}\pi_1(\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^0,\overline{x})$ is well-defined up to conjugation. We thus obtain a well-defined conjugacy class in $\pi_1(\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^0,\overline{x})$ corresponding to the image of the geometric $q$-Frobenius, and we write $\mathrm{Frob}_y$ for a representative of this conjugacy class. \subsubsection{}For any reductive group $H$ over a field $F$ of characteristic $0$, we write $\mathrm{Conj}_{H}$ for the variety of semisimple conjugacy classes in $H$. Explicitly, if $H=\mathrm{Spec}\ R$, then we have $\mathrm{Conj}_{H}\cong \mathrm{Spec}\ R^{H}$, where $H$ acts on $R$ via conjugation. The set $\ensuremath{\mathrm{Conj}}_{H}(\overline{F})$ can be identified with the set of semisimple $H(\overline{F})$-conjugacy classes in $H(\overline{F})$. We write $\chi_H:H\rightarrow \mathrm{Conj}_{H}$ for the projection map. For example, if $H=\ensuremath{\mathrm{GL}}_n$, then $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathrm{GL}}_n}$ is the variety $\ensuremath{\mathbb{A}}^{n-1}_F\times\ensuremath{\mathbb{G}}_{m,F}$ and the map $\chi_{\ensuremath{\mathrm{GL}}_n}$ takes an element of $\ensuremath{\mathrm{GL}}_n$ to its associated characteristic polynomial.
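To make the first nontrivial case of this example explicit (a standard computation, recorded here only for orientation): for $H=\ensuremath{\mathrm{GL}}_2$ one checks that $R^{H}=F[\mathrm{tr},{\det}^{\pm1}]$, so that $$\chi_{\ensuremath{\mathrm{GL}}_2}(\gamma)=(\mathrm{tr}\,\gamma,\det\gamma)\in\ensuremath{\mathbb{A}}^1_F\times\ensuremath{\mathbb{G}}_{m,F},$$ and a semisimple $\gamma$ with eigenvalues $a,b\in\overline{F}^\times$ maps to $(a+b,ab)$. Note that $\chi_{\ensuremath{\mathrm{GL}}_2}$ only remembers the semisimplification: the unipotent matrix $\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$ and the identity both map to $(2,1)$.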
In our setting, we thus obtain, for each prime $\ell\neq p$, a well-defined element $\gamma_{y,\ell}\in\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell)$ corresponding to $\chi_{\ensuremath{\mathbf{G}}}(\rho_\ell^0(\mathrm{Frob}_y))$. Our main theorem concerning the $\ell$-independence property of Shimura varieties is the following. \begin{thm}\label{thm: l indep full} Let $p>2$. Assume $G=\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Q}}_p}$ is quasi-split, $\ensuremath{\mathcal{G}}$ is a very special parahoric group scheme and that $(\ensuremath{\mathbf{G}},X,\ensuremath{\mathcal{G}})$ is an acceptable triple of Hodge type. Let $y\in \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbb{F}}_{q})$ where $\ensuremath{\mathbb{F}}_q/k_E$ is a finite extension. Then there exists an element $\gamma_0\in \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}})$ such that $\gamma_0=\gamma_{y,\ell}\in\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell)$ for all $\ell\neq p$. \end{thm} \begin{remark} Unlike in Corollary \ref{cor: l indep mu ordinary}, it is not always possible to lift $\gamma_0$ to an element of $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}})$. \end{remark} The rest of \S\ref{sec: l indep for Shimura var} will be devoted to the proof of Theorem \ref{thm: l indep full}. \newpage \subsection{Explicit curves in the special fiber of local models} \subsubsection{} We begin by recalling the local model diagram and certain properties of the Kottwitz--Rapoport stratification.
By Theorem \ref{thm: integral models abelian type} (3), there exists a diagram of stacks \begin{equation}\label{eqn: local model diagram scheme} \xymatrix{ & \widetilde{\ensuremath{\mathscr{S}}}^{\mathrm{ad}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)\ar[dr]^{q} \ar[dl]_{\pi}& \\ \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)& & \ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}_1,\{\mu_{h_1}\}}} \end{equation} where $\pi:\widetilde{\ensuremath{\mathscr{S}}}^{\mathrm{ad}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)\rightarrow \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$ is a $\ensuremath{\mathcal{G}}_{\mathrm{ad}}$-torsor. Here $\ensuremath{\mathcal{G}}_{\ensuremath{\mathrm{ad}}}$ is the parahoric group scheme of $G_{1,\ensuremath{\mathrm{ad}}}\cong G_{\ensuremath{\mathrm{ad}}}$ corresponding to $\ensuremath{\mathcal{G}}$. Let $\ensuremath{\mathcal{M}}$ denote the special fiber of $\ensuremath{\mathrm{M}}^{\mathrm{loc}}_{\ensuremath{\mathcal{G}}_1,\{\mu_{h_1}\}}$; it is a scheme over $k_E$. Recall that the local model is defined using a group $G'\cong\prod_{i=1}^r\mathrm{Res}_{F_i/\ensuremath{\mathbb{Q}}_p}H$ such that there exists a central extension $G'_{\mathrm{der}}\rightarrow G_{\mathrm{der}}$, and the parahoric group scheme $\ensuremath{\mathcal{G}}'$ of $G'$ is determined by $\ensuremath{\mathcal{G}}$; then the geometric special fiber $\ensuremath{\mathcal{M}}_{k}$ has a stratification indexed by $\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$. Here we consider $\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}\subset W'_{J'}\backslash W'/W'_{J'}$, where $W'$ is the Iwahori Weyl group for $G'$ and $J'\subset\ensuremath{\mathbb{S}}'$ is the subset of simple reflections for $G'$ determined by $\ensuremath{\mathcal{G}}'$. We write $\ensuremath{\mathcal{M}}_k^w$ for the stratum corresponding to $w\in\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$.
It follows formally from the existence of the diagram (\ref{eqn: local model diagram scheme}) that $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},k}$ admits a stratification by $\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$. This is known as the Kottwitz--Rapoport stratification and we write $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},k}^w$ for the stratum corresponding to $w\in\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$. From the definition of this stratification, for $\overline{x}\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},k}^w(k)$ the complete local ring of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},k}^w$ at $\overline x$ is identified with the complete local ring of $\ensuremath{\mathcal{M}}_k^w$ at a point $\overline x'\in\ensuremath{\mathcal{M}}_k^w(k)$. The closure relations for this stratification are given by the Bruhat order on $W'_{J'}\backslash W'/W'_{J'}$. \subsubsection{}\label{subsec: KR stratification very special}For the rest of \S\ref{sec: l indep for Shimura var}, we assume $(\ensuremath{\mathbf{G}},X,\ensuremath{\mathcal{G}})$ satisfies the assumptions in Theorem \ref{thm: l indep full}. In this case, $\ensuremath{\mathcal{M}}_k$ and $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},k}$ are normal schemes; cf. Theorem \ref{thm: density}. We let $\ensuremath{\mathfrak{s}}\in \ensuremath{\mathcal{B}}(G,\ensuremath{\breve{\mathbb{Q}}_p})$ denote the special vertex associated to $\ensuremath{\mathcal{G}}$. This determines a special vertex $\ensuremath{\mathfrak{s}}'\in \ensuremath{\mathcal{B}}(G',\ensuremath{\breve{\mathbb{Q}}_p})$. In this case the set $\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$ has the following alternative description. Let $S'$ denote a maximal $\ensuremath{\breve{\mathbb{Q}}_p}$-split torus of $G'$ defined over $\ensuremath{\mathbb{Q}}_p$ such that $\ensuremath{\mathfrak{s}}'\in\ensuremath{\mathcal{A}}(G',S',\ensuremath{\breve{\mathbb{Q}}_p})$ and $T'$ the centralizer of $S'$.
Fix a Borel subgroup of $G'$ defined over $\ensuremath{\mathbb{Q}}_p$ and assume that we have identified $X_*(T')_I\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{R}}$ with $\ensuremath{\mathcal{A}}(G',S',\ensuremath{\breve{\mathbb{Q}}_p})$ via the choice of special vertex $\ensuremath{\mathfrak{s}}'$. We may consider $\mu$ as an element of $X_*(T')_I$. For $\lambda,\lambda'\in X_*(T')_I^+$, we write $\lambda\curlyeqprec\lambda'$ if $\lambda'-\lambda$ is a non-negative \textit{integral} linear combination of positive coroots in the reduced root system $\Sigma'$ associated to $G'$; we write $\lambda\prec \lambda'$ if in addition $\lambda\neq\lambda'$. Then there is an identification $$W'_{J'}\backslash W'/W'_{J'}\cong X_*(T')_I^+,$$ and the ordering $\curlyeqprec$ agrees with the Bruhat order on $W'_{J'}\backslash W'/W'_{J'}$ under this identification (cf. \cite{Lu}). It follows that we have an identification $$\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}=\{t_\lambda|\lambda\in X_*(T')_I^+,\ \lambda\curlyeqprec {\mu}\}.$$ We will write $\ensuremath{\mathcal{M}}_k^\lambda$ (resp. $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},k}^\lambda$) for the strata $\ensuremath{\mathcal{M}}_k^{t_\lambda}$ (resp. $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},k}^{t_\lambda}$). \subsubsection{}For notational simplicity, we will use $\underline{\ensuremath{\mathcal{G}}}$ to denote the group $\underline{\ensuremath{\mathcal{G}}}'_{\ensuremath{\mathbb{F}}_p[[t]]}$ defined in \S\ref{sec: identification of Iwahori Weyl group}. Its generic fiber will be denoted $\underline{G}$ and the Iwahori Weyl group $W_{\underline{G}}$ may be identified with the Iwahori Weyl group for $G'$. As in Theorem \ref{thm: special fiber of local models and admissible set}, we may identify $\ensuremath{\mathcal{M}}_k$ with a union of Schubert varieties corresponding to $\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$ in $\mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}}$.
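The following toy example, which does not arise from the Shimura data considered in this paper, illustrates the above description of $\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$. Take $G'=\ensuremath{\mathrm{GL}}_3$ with $\ensuremath{\mathcal{G}}'$ hyperspecial, so that $I$ acts trivially and $X_*(T')_I^+=\{\lambda\in\ensuremath{\mathbb{Z}}^3\mid \lambda_1\geq\lambda_2\geq\lambda_3\}$. For $\mu=(2,0,0)$, the only dominant cocharacters $\lambda\curlyeqprec\mu$ are $\mu$ itself and $(1,1,0)$, since $$\mu-(1,1,0)=(1,-1,0)=\alpha_1^\vee$$ is a positive coroot, so that $$\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}=\{t_{(2,0,0)},t_{(1,1,0)}\},$$ and the corresponding stratification consists of the open stratum $\ensuremath{\mathcal{M}}_k^{(2,0,0)}$ together with $\ensuremath{\mathcal{M}}_k^{(1,1,0)}$, which lies in its closure.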
The stratum $\ensuremath{\mathcal{M}}_k^\lambda$ may be identified with the $\underline{\ensuremath{\mathcal{G}}}(k[[t]])$-orbit of the element $\underline{\dot{t}}_\lambda$, considered as an element of $\mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}}$, and by the above discussion the closure relations between the strata are given by the partial ordering $\curlyeqprec$. Since $t_\mu\in\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$ is the unique maximal element, it follows that $\ensuremath{\mathcal{M}}^{{\mu}}_k$ is contained in the smooth locus of $\ensuremath{\mathcal{M}}_k$ and hence $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},k}^{{\mu}}$ is contained in the smooth locus of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},k}$. The strata $\ensuremath{\mathcal{M}}_k^\lambda$ and $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},k}^\lambda$ are both defined over the field of definition of $\lambda\in W'_{J'}\backslash W'/W'_{J'}$. In other words, if $n$ is the smallest positive integer such that $\sigma^n(\lambda)=\lambda$, then $\ensuremath{\mathcal{M}}^\lambda_k$ and $\ensuremath{\mathcal{S}}^\lambda_{\ensuremath{\mathrm{K}},k}$ are both defined over $\ensuremath{\mathbb{F}}_{p^n}$; we write $\ensuremath{\mathcal{M}}^\lambda$ and $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^\lambda$ for the models over $\ensuremath{\mathbb{F}}_{p^n}$. \subsubsection{} The key geometric property of the Kottwitz--Rapoport stratification on $\ensuremath{\mathcal{M}}_k$ that we will need is the following. \begin{prop}\label{prop: map from curve} Let $y\in\ensuremath{\mathcal{M}}^\lambda(\ensuremath{\mathbb{F}}_{q})$ with $\lambda\in \ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$ and $\lambda \neq {\mu}$.
There exists a smooth, geometrically connected curve $C$ over $\ensuremath{\mathbb{F}}_{q}$ and a map $\phi:C\rightarrow \ensuremath{\mathcal{M}}_{\ensuremath{\mathbb{F}}_q}$ such that \begin{enumerate}[label=(\roman*)] \item There exists $y'\in C(\ensuremath{\mathbb{F}}_{q})$ such that $\phi(y')=y$. \item $\phi^{-1}(\ensuremath{\mathcal{M}}_k^{\lambda'})$ is open and dense in $C$ for some $\lambda'\in\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$ with $\lambda\prec\lambda'.$ \end{enumerate} \end{prop} \begin{remark} Using an ampleness argument, it is easy to show that such a map always exists if we replace $\ensuremath{\mathbb{F}}_{q}$ by its algebraic closure $k$. The key property is that for $\ensuremath{\mathcal{M}}$, this map exists without extending the residue field. By \cite[\S6]{Drinfeld}, there are normal and Cohen--Macaulay schemes where this property fails. \end{remark} \begin{proof}[Proof of Proposition \ref{prop: map from curve}] The statement depends only on $\ensuremath{\mathcal{G}}'$ and not on $\ensuremath{\mathcal{G}},$ so we may assume (for notational simplicity) that $G = G'.$ We first show using the $\underline{\ensuremath{\mathcal{G}}}$-action on $\ensuremath{\mathcal{M}}$ that it suffices to consider the case $$y=\underline{\dot{t}}_\lambda\in \underline{G}(k((t)))/\underline{\ensuremath{\mathcal{G}}}(k[[t]]).$$ Let $\sigma_q$ denote the $q$-Frobenius; then since $y\in \ensuremath{\mathcal{M}}^\lambda(\ensuremath{\mathbb{F}}_{q})$, we have $\sigma_q(\lambda)=\lambda$. Therefore we may choose the lift $\underline{\dot{t}}_\lambda\in\underline{G}(\ensuremath{\mathbb{F}}_{q}((t)))$ so that $\underline{\dot{t}}_\lambda\in \ensuremath{\mathcal{M}}^\lambda(\ensuremath{\mathbb{F}}_q)$. By Lemma \ref{lemma: rational G orbit} below, there exists $g\in \underline{\ensuremath{\mathcal{G}}}(\ensuremath{\mathbb{F}}_q[[t]])$ such that $g\underline{\dot{t}}_\lambda=y$ in $\mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}}$. 
Therefore if $C$ satisfies the conditions (i) and (ii) for the point $\underline{\dot{t}}_\lambda$, $gC$ satisfies (i) and (ii) for the point $y$. It therefore suffices to prove the case $y=\underline{\dot{t}}_\lambda$; we make this assumption from now on. Now since $\lambda\prec{\mu}$, by Stembridge's Lemma \cite[Lemma 2.3]{Ra1}, there exists a positive root $\alpha\in \Sigma$ such that $\lambda+\alpha^\vee\curlyeqprec{\mu}$. Since $\lambda,{\mu}\in X_*(T)_I^{\sigma_q}$, it follows that $$\lambda+\sigma^i_q(\alpha^\vee)\curlyeqprec {\mu}$$ for all $i$. If $\{\alpha, \sigma_q(\alpha),\dotsc,\sigma_q^{m-1}(\alpha)\}$ denotes the orbit of $\alpha$ under $\sigma_q$, it follows that $$\lambda':=\lambda+\sum_{i=0}^{m-1}\sigma_q^i(\alpha^\vee)\curlyeqprec{\mu},$$ and hence $\lambda'\in\ensuremath{\mathrm{Adm}}_{G}(\{\mu\})_{J}$. Now $\alpha$ determines a relative root $\widetilde{\alpha}$ of $\underline{G}$ over $\ensuremath{\mathbb{F}}_q((t))$ which we always take to be the long root; then $\widetilde{\alpha}$ is either divisible or non-divisible. We let $U_{\widetilde{\alpha}}$ denote the relative root subgroup corresponding to $\widetilde{\alpha}$ and $\ensuremath{\underline{G}}_{\widetilde{\alpha}}$ the simply connected covering of the (semi-simple) group generated by $U_{\widetilde{\alpha}}$ and $U_{-\widetilde{\alpha}}$; it is a reductive group over $\ensuremath{\mathbb{F}}_{q}((t))$. We will identify $U_{\widetilde{\alpha}}$ with the corresponding unipotent subgroup of $\ensuremath{\underline{G}}_{\widetilde{\alpha}}$. 
The parahoric $\underline{\ensuremath{\mathcal{G}}}$ determines a parahoric model $\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}$ of $\underline{G}_{\widetilde \alpha}$ and there is a closed immersion $$\iota_{\widetilde{\alpha}}:\mathcal{FL}_{\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}}\rightarrow \mathcal{FL}_{\ensuremath{\underline{\mathcal{G}}},\ensuremath{\mathbb{F}}_q}$$ defined over $\ensuremath{\mathbb{F}}_q$, where $\mathcal{FL}_{\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}}$ is the affine flag variety associated to $\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}$. We write $\ensuremath{\mathcal{U}}_{\widetilde{\alpha}}$ (resp. $ \ensuremath{\mathcal{U}}_{-\widetilde{\alpha}} $) for the group schemes over $\ensuremath{\mathbb{F}}_q[[t]]$ corresponding to $U_{\widetilde{\alpha}}(\ensuremath{\mathbb{F}}_q((t)))\cap \underline{\ensuremath{\mathcal{G}}}(\ensuremath{\mathbb{F}}_q[[t]]) $ (resp. $U_{-\widetilde{\alpha}}(\ensuremath{\mathbb{F}}_q((t)))\cap \underline{\ensuremath{\mathcal{G}}}(\ensuremath{\mathbb{F}}_q[[t]])$). Then we claim that for each positive $\alpha$, there exists a morphism $$f:\ensuremath{\mathbb{A}}^1_{\ensuremath{\mathbb{F}}_q}\rightarrow \mathcal{FL}_{\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}}$$ defined over $\ensuremath{\mathbb{F}}_q$ satisfying the following two conditions \begin{enumerate}[label=(\roman*')] \item $f(0)=\dot{e}$, where $\dot{e}$ is the base point in $\mathcal{FL}_{\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}}$. 
\item $f(\ensuremath{\mathbb{A}}^1_{\ensuremath{\mathbb{F}}_q}\backslash\{0\})\subset L^+\ensuremath{\mathcal{U}}_{\widetilde{\alpha}}\underline{\dot{t}}_{\alpha^\vee}L^+\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}/L^+\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}\cup L^+\ensuremath{\mathcal{U}}_{\widetilde{\alpha}/2}\underline{\dot{t}}_{\alpha^\vee}L^+\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}/L^+\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}$. \end{enumerate} Here the second term in the union in (ii') is to be read as empty if $\widetilde \alpha$ is not divisible. Assuming the claim, we may prove the proposition as follows. We consider the morphism $$\phi:\ensuremath{\mathbb{A}}_{\ensuremath{\mathbb{F}}_q}^1\rightarrow \mathcal{FL}_{\ensuremath{\underline{\mathcal{G}}}},\ \ \ x\mapsto \underline{\dot{t}}_{{\lambda}}(\iota_{\widetilde{\alpha}}\circ f)(x);$$ in other words, we translate the composition $\iota_{\widetilde{\alpha}}\circ f$ by $\underline{\dot{t}}_\lambda$. Then condition (i) follows from (i') and condition (ii) follows from (ii') using the fact that $\lambda$ is dominant. It remains to prove the existence of $f$ satisfying (i') and (ii'). We will construct $f$ explicitly using a presentation of the group $\ensuremath{\underline{G}}_{\widetilde{\alpha}}$; it turns out that by \cite[\S4.1.4]{BT2} there are essentially three distinct cases to consider, which we now describe.
If $\widetilde{\alpha}$ is a non-divisible root then there is an identification $$\ensuremath{\underline{G}}_{\widetilde{\alpha}}\cong \mathrm{Res}_{K/\ensuremath{\mathbb{F}}_q((t))}\ensuremath{\mathrm{SL}}_2$$ where $K$ is some finite separable extension of $\ensuremath{\mathbb{F}}_q((t))$ and the parahoric $\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}$ is characterized by the property $$\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}(k[[t]])=\ensuremath{\mathrm{SL}}_2(\ensuremath{\mathcal{O}}_K\otimes_{\ensuremath{\mathbb{F}}_q[[t]]}k[[t]]).$$ If $\widetilde{\alpha}/2$ is also a relative root, then there is an identification $$\ensuremath{\underline{G}}_{\widetilde{\alpha}}\cong \mathrm{Res}_{K/\ensuremath{\mathbb{F}}_q((t))}\ensuremath{\mathrm{SU}}_3$$where $K/\ensuremath{\mathbb{F}}_{q}((t))$ is finite separable and $\ensuremath{\mathrm{SU}}_3$ is the special unitary group associated to a hermitian space over a (separable)\footnote{Since we have assumed $p>2$, this is automatic.} quadratic extension $K'/K.$ We recall the presentation of the $K$-group $\ensuremath{\mathrm{SU}}_3$ in \cite[Example 1.15]{Ti1}. We let $\tau\in \mathrm{Gal}(K'/K)$ denote the non-trivial element and we consider the hermitian form on $K'^3$ given by $$\langle(x_{-1},x_0,x_1),(y_{-1},y_0,y_1)\rangle=\tau(x_{-1})y_1+\tau(x_0)y_0+\tau(x_1)y_{-1}.$$ The group $\ensuremath{\mathrm{SU}}_3$ is the special unitary group attached to this form. For $i=-1,1$ and $c,d\in K'$ such that $\tau(c)c+d+\tau(d)=0$, we define $$u_{i}(c,d)=I_3+(g_{rs})$$ where $I_3$ is the identity matrix and $(g_{rs})$ is the matrix with entries $g_{-i,0}=-\tau(c)$, $g_{0,i}=c$, $g_{-i,i}=d$ and $g_{rs}=0$ otherwise. 
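For orientation, we write out these elements explicitly; with rows and columns indexed by $-1,0,1$ as above, we have $$u_{1}(c,d)=\begin{pmatrix}1&-\tau(c)&d\\0&1&c\\0&0&1\end{pmatrix},\qquad u_{-1}(c,d)=\begin{pmatrix}1&0&0\\c&1&0\\d&-\tau(c)&1\end{pmatrix},$$ and a direct computation shows that such a matrix preserves the hermitian form above precisely when $\tau(c)c+d+\tau(d)=0$.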
The root subgroups are then given by $$U_{\pm\widetilde{\alpha}/2}(K)=\{u_{\pm 1}(c,d)|c,d\in K', \tau(c)c+\tau(d)+d=0\}$$ $$U_{\pm\widetilde{\alpha}}(K)=\{u_{\pm 1}(0,d)|d\in K', \tau(d)+d=0 \}.$$ Then we may consider the parahoric $$\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}(\ensuremath{\mathbb{F}}_q[[t]])= \ensuremath{\mathrm{SU}}_3(K)\cap \ensuremath{\mathrm{GL}}_3(\ensuremath{\mathcal{O}}_{K'});$$ we call this the standard parahoric. When $K'/K$ is unramified this is the only very special parahoric (up to conjugacy). When $K'/K$ is ramified, there is another conjugacy class of very special parahorics in addition to the standard parahoric, which we shall call the non-standard parahoric. We let $u$ be a uniformizer of $K'$ and we define $s\in \ensuremath{\mathrm{GL}}_3(K')$ to be the element $\mathrm{diag}(1,1,u)$. Then the non-standard parahoric $\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}$ is given by $$\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}(\ensuremath{\mathbb{F}}_q[[t]])=\ensuremath{\mathrm{SU}}_3(K)\cap s\ensuremath{\mathrm{GL}}_3(\ensuremath{\mathcal{O}}_{K'})s^{-1}.$$ We label the cases as follows. Case (1): $\widetilde{\alpha}$ is non-divisible, $\ensuremath{\underline{G}}_{\widetilde{\alpha}}\cong \mathrm{Res}_{K/\ensuremath{\mathbb{F}}_q((t))}\ensuremath{\mathrm{SL}}_2$ and $\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}(\ensuremath{\mathbb{F}}_q[[t]])=\ensuremath{\mathrm{SL}}_2(\ensuremath{\mathcal{O}}_K)$. Case (2): $\widetilde{\alpha}$ is divisible, $ \ensuremath{\underline{G}}_{\widetilde{\alpha}}\cong \mathrm{Res}_{K/\ensuremath{\mathbb{F}}_q((t))}\ensuremath{\mathrm{SU}}_3$ and $\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}$ is the standard parahoric.
Case (3): $\widetilde{\alpha}$ is divisible, $ \ensuremath{\underline{G}}_{\widetilde{\alpha}}\cong \mathrm{Res}_{K/\ensuremath{\mathbb{F}}_q((t))}\ensuremath{\mathrm{SU}}_3$ with $K'/K$ ramified and $\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}$ is the non-standard parahoric. We now proceed with the construction of $f$ in each of the three cases. Case (1). In this case the isomorphism $\ensuremath{\underline{G}}_{\widetilde{\alpha}}\cong\mathrm{Res}_{K/\ensuremath{\mathbb{F}}_{q}((t))}\ensuremath{\mathrm{SL}}_2$ induces identifications $$u_{\pm\widetilde{\alpha}}:\mathrm{Res}_{K/\ensuremath{\mathbb{F}}_{q}((t))}\ensuremath{\mathbb{G}}_a\xrightarrow{\sim}U_{\pm\widetilde{\alpha}}.$$ Let $u$ be a uniformizer of $K$; then we may define a map $$f:\ensuremath{\mathbb{A}}^1_{\ensuremath{\mathbb{F}}_{q}}\rightarrow \mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}_{\widetilde{\alpha}}},\ \ \ \ x\mapsto u_{-\widetilde{\alpha}}(u^{-1}x).$$ Clearly (i') is satisfied, and a simple calculation in $\ensuremath{\mathrm{SL}}_2$ shows that for $0\neq x$, we have $$u_{-\widetilde{\alpha}}(u^{-1}x)\in u_{\widetilde{\alpha}}(ux^{-1})\underline{\dot{t}}_{\alpha^\vee}L^+\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}$$ so that (ii') also holds. Case (2). Recall in this case, the parahoric $\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}$ is characterized by $\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}(\ensuremath{\mathbb{F}}_q[[t]])=\ensuremath{\mathrm{SU}}_3(K)\cap \ensuremath{\mathrm{GL}}_3(\ensuremath{\mathcal{O}}_{K'})$. 
We fix a uniformizer $u$ of $K'$ such that $\tau(u)=-u$ and define $$f:\ensuremath{\mathbb{A}}_{\ensuremath{\mathbb{F}}_q}^1\rightarrow \mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}_{\widetilde{\alpha}}}, \ \ x\mapsto u_{-1}(0,u^{-1}x).$$ A calculation using the presentation recalled above shows that for $x\neq0$, we have $$u_{-1}(0,u^{-1}x)\in u_1(0,ux^{-1})\underline{\dot{t}}_{\alpha^\vee}L^+\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}};$$ as in Case (1), it follows that (i') and (ii') are satisfied. Case (3). Recall $K'/K$ is ramified and $\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}(\ensuremath{\mathbb{F}}_q[[t]])=\ensuremath{\mathrm{SU}}_3(K)\cap s\ensuremath{\mathrm{GL}}_3(\ensuremath{\mathcal{O}}_{K'})s^{-1}$. We consider the map $$f:\ensuremath{\mathbb{A}}_{\ensuremath{\mathbb{F}}_q}^1\rightarrow \mathcal{FL}_{\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}}}, \ \ x\mapsto u_{-1}(x,-\frac{x^2}{2}).$$ A calculation using the presentation above shows that for $x\neq 0$, we have $$u_{-1}(x,-\frac{x^2}{2})\in u_1(2x^{-1},-2x^{-2})\underline{\dot{t}}_{\alpha^\vee}L^+\ensuremath{\underline{\mathcal{G}}}_{\widetilde{\alpha}};$$ as in the previous two cases it follows that (i') and (ii') are satisfied. \end{proof} \begin{lemma}\label{lemma: rational G orbit} Let $y\in \ensuremath{\mathcal{M}}^{\lambda}(\ensuremath{\mathbb{F}}_{q})$ and assume $\underline{\dot{t}}_\lambda\in\underline{G}(\ensuremath{\mathbb{F}}_q((t)))$. Then there exists $g\in \underline{\ensuremath{\mathcal{G}}}(\ensuremath{\mathbb{F}}_{q}[[t]])$ such that $g\underline{\dot{t}}_\lambda L^+\underline{\ensuremath{\mathcal{G}}}=y$ in $\mathcal{FL}_{\underline{\ensuremath{\mathcal{G}}}}$. \end{lemma} \begin{proof} By definition, there exists $h\in \ensuremath{\underline{\mathcal{G}}}(k[[t]])$ such that $h\underline{\dot{t}}_\lambda=y$.
We consider the subgroup $$\ensuremath{\underline{\mathcal{G}}}(k[[t]])\cap \underline{\dot{t}}_\lambda\ensuremath{\underline{\mathcal{G}}}(k[[t]])\underline{\dot{t}}_\lambda^{-1}\subset \ensuremath{\underline{G}}(k((t)));$$ it is the intersection of the kernel of the Kottwitz homomorphism $\widetilde{\kappa}_{\underline{G}}$ and the stabilizer of a bounded subset of the building $\ensuremath{\mathcal{B}}(\underline{G},k((t)))$. Thus by \cite[Prop. 3 and Remark 4]{HaRa}, it arises as the $k$-points of a smooth connected group scheme $\underline{\ensuremath{\mathcal{K}}}_{\lambda}$ defined over $\ensuremath{\mathbb{F}}_{q}[[t]]$. The element $h$ is defined up to right multiplication by $\underline{\ensuremath{\mathcal{K}}}_\lambda(k[[t]])$; hence, since $\sigma_q(y)=y$, we have $\sigma_q(h)=hk$ for some $k\in \underline{\ensuremath{\mathcal{K}}}_\lambda(k[[t]])$. By Lang's theorem applied to $\underline{\ensuremath{\mathcal{K}}}_\lambda$, there exists $k_1\in\underline{\ensuremath{\mathcal{K}}}_\lambda(k[[t]])$ such that $g:=hk_1$ is fixed by $\sigma_q$, and we have $g\underline{\dot{t}}_\lambda=y$ in $\mathcal{FL}_{\ensuremath{\underline{\mathcal{G}}}}$. \end{proof} \subsubsection{}Using Theorem \ref{eqn: local model diagram scheme}, we may deduce the following result about the local structure of the Shimura stack $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$. \begin{cor}\label{cor: map from smooth stack to Shimura var} Let $x\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^\lambda(\ensuremath{\mathbb{F}}_{q})$ with $\lambda\in \ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$ and $\lambda \neq {\mu}$. There exists a smooth, geometrically connected curve $C'$ over $\ensuremath{\mathbb{F}}_{q}$ and a map $\phi':C'\rightarrow \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},\ensuremath{\mathbb{F}}_q}$ such that \begin{enumerate}[label=(\roman*)] \item There exists $x'\in C'(\ensuremath{\mathbb{F}}_{q})$ such that $\phi'(x')=x$.
\item $ \phi'^{-1}(\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},k}^{\lambda'})\subset C'$ is an open dense subscheme for some $\lambda'\in\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$ with $\lambda\prec\lambda'$. \end{enumerate} \end{cor} \begin{proof} We write \[\xymatrix{ & \widetilde{\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}}\ar[dr]^{q_{k_E}}\ar[dl]_{\pi_{k_E}}& \\ \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}} & & \ensuremath{\mathcal{M}}}\] for the special fiber of (\ref{eqn: local model diagram scheme}). Since $\pi_{k_E}$ is a torsor for the smooth connected group scheme $\ensuremath{\mathcal{G}}_{\mathrm{ad},k_E}$, the point $x$ lifts to a point $\widetilde{x}\in\widetilde{\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}}(\ensuremath{\mathbb{F}}_q)$ and we write $y$ for its image in $\ensuremath{\mathcal{M}}(\ensuremath{\mathbb{F}}_q)$. By definition of the stratification on $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$, we have $y\in\ensuremath{\mathcal{M}}^\lambda(\ensuremath{\mathbb{F}}_q)$. We apply Proposition \ref{prop: map from curve} to $y$ to obtain a map $\phi:C\rightarrow \ensuremath{\mathcal{M}}_{\ensuremath{\mathbb{F}}_q}$ satisfying (i) and (ii) in Proposition \ref{prop: map from curve} for some $\lambda'\in\ensuremath{\mathrm{Adm}}_{G'}(\{\mu\})_{J'}$ with $\lambda\prec\lambda'$; we let $y'\in C(\ensuremath{\mathbb{F}}_q)$ be a point mapping to $y$. Consider the pullback $\widetilde{\ensuremath{\mathcal{S}}}_{\ensuremath{\mathrm{K}},\ensuremath{\mathbb{F}}_q}{\times}_{\ensuremath{\mathcal{M}}_{\ensuremath{\mathbb{F}}_q}} C$, which is a smooth stack over $\ensuremath{\mathbb{F}}_q$.
By \cite[Th\'eor\`eme 6.3]{LMB}, there exists a smooth scheme $Y/\ensuremath{\mathbb{F}}_q$ and a smooth map $Y\rightarrow \widetilde{\ensuremath{\mathcal{S}}}_{\ensuremath{\mathrm{K}},\ensuremath{\mathbb{F}}_q}{\times}_{\ensuremath{\mathcal{M}}_{\ensuremath{\mathbb{F}}_q}} C$ defined over $\ensuremath{\mathbb{F}}_q$ such that $\widetilde{x}$ lies in the image of a point $\widetilde{y}\in Y(\ensuremath{\mathbb{F}}_q)$. Now let $Y^{\lambda'}$ denote the preimage of $\ensuremath{\mathcal{M}}^{\lambda'}$ in $Y$; by the assumption on $C$, it is a dense open subscheme of $Y$. By \cite[Theorem 1.1]{Poonen}, there exists a smooth geometrically connected curve $C'\subset Y$ such that $\widetilde{y}\in C'(\ensuremath{\mathbb{F}}_q)$ and $C'\cap Y^{\lambda'}\neq\emptyset$ so that the preimage of $Y^{\lambda'}$ in $C'$ is open and dense. We write $\phi':C'\rightarrow \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},\ensuremath{\mathbb{F}}_q}$ for the composition $$C'\rightarrow Y\rightarrow \widetilde{\ensuremath{\mathcal{S}}}_{\ensuremath{\mathrm{K}},\ensuremath{\mathbb{F}}_q}{\times}_{\ensuremath{\mathcal{M}}_{\ensuremath{\mathbb{F}}_q}} C\rightarrow \widetilde{\ensuremath{\mathcal{S}}}_{\ensuremath{\mathrm{K}},\ensuremath{\mathbb{F}}_q}\rightarrow \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},\ensuremath{\mathbb{F}}_q} .$$ Then setting $x'=\widetilde{y}\in C'(\ensuremath{\mathbb{F}}_q)$, we have $ \phi'(x')=x$, so (i) is satisfied, and property (ii) follows by the construction. \end{proof} \subsection{Compatible local systems and $\ell$-independence} \subsubsection{}We recall the theory of compatible local systems. Let $X$ be a normal scheme over $\ensuremath{\mathbb{F}}_q$ where $q$ is a power of $p$ and let $\ensuremath{\mathcal{L}}_\ell$ be a $\overline{\ensuremath{\mathbb{Q}}}_\ell$-local system (lisse sheaf) on $X$. 
For $x\in X(\ensuremath{\mathbb{F}}_{q^n})$, we write $\mathrm{Frob}_x$ for the local Frobenius automorphism acting on the stalk $\mathcal{L}_{\ell,\overline{x}}$ of $\mathcal{L}_\ell$ at a geometric point $\overline{x}$ lying over $x$. Suppose that for every closed point $x\in X(\ensuremath{\mathbb{F}}_{q^n})$, the characteristic polynomial $\det(1-\mathrm{Frob}_xt|\mathcal{L}_{\ell,\overline{x}})$ has coefficients in a number field $E\subset \overline{\ensuremath{\mathbb{Q}}}_\ell$ (this is conjectured to be the case if $\ensuremath{\mathcal{L}}_{\ell}$ has determinant of finite order). Let $\ell'$ be a prime not equal to $p$ or $\ell$. A $\overline{\ensuremath{\mathbb{Q}}}_{\ell'}$-local system $\mathcal{K}_{\ell'}$ is said to be a \emph{compatible local system} for $\mathcal{L}_\ell$ if there is some possibly larger number field $E'$ and embeddings $E' \subset \overline{\ensuremath{\mathbb{Q}}}_\ell, E'\subset\overline{\ensuremath{\mathbb{Q}}}_{\ell'}$ such that for every closed point $x\in X(\ensuremath{\mathbb{F}}_{q^n})$, the characteristic polynomials $\det(1-\mathrm{Frob}_xt|\mathcal{L}_{\ell,\overline{x}})$ and $\det(1-\mathrm{Frob}_xt|\mathcal{K}_{\ell',\overline{x}})$ have coefficients in $E'$ and there is an equality $$\det(1-\mathrm{Frob}_xt|\mathcal{L}_{\ell,\overline{x}})=\det(1-\mathrm{Frob}_xt|\mathcal{K}_{\ell',\overline{x}})\in E'[t].$$ The existence of compatible local systems over smooth curves is due to Lafforgue \cite[Th\'eor\`eme VII.6]{Laf}, and the case of smooth schemes is due to Drinfeld \cite[Theorem 1.1]{Drinfeld}. \subsubsection{} We now continue with the notations of \S\ref{sec: Frob conj classes}. For the rest of this section, it will be convenient to fix a Hodge embedding $\iota: (\ensuremath{\mathbf{G}},X) \rightarrow (\ensuremath{\mathbf{GSp}}(V),S^\pm)$ as in \S\ref{sec: canonical liftings 1}.
The element $\gamma_{y,\ell}\in\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell)$ arises as an element of $\mathrm{Conj}_{\ensuremath{\mathbf{G}}}(\overline{\ensuremath{\mathbb{Q}}})$. Indeed, the image of $\gamma_{y,\ell}$ in $\mathrm{Conj}_{{\ensuremath{\mathbf{GL}}(V)}}(\ensuremath{\mathbb{Q}}_\ell)$ under the map induced by $\iota$ lies in $\mathrm{Conj}_{\ensuremath{\mathbf{GL}}(V)}(\ensuremath{\mathbb{Q}})$ since it corresponds to the action of Frobenius on the $\ell$-adic Tate module of an abelian variety. Since $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}\rightarrow \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{GL}}(V)}$ is a finite map, $\gamma_{y,\ell}\in \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\overline{\ensuremath{\mathbb{Q}}})$. Similarly, if $\ell'\nmid p\ell$ is another prime, $\gamma_{y,\ell'}$ arises as an element of $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\overline{\ensuremath{\mathbb{Q}}})$. We let $F$ be a finite extension of $\ensuremath{\mathbb{Q}}$ such that $\gamma_{y,\ell},\gamma_{y,\ell'}\in\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(F)$; such an extension exists since $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}$ is a $\ensuremath{\mathbb{Q}}$-variety. Let $\lambda,\lambda'$ be the two places of $F$ induced by the fixed embeddings $i_\ell:\overline{\ensuremath{\mathbb{Q}}}\rightarrow\overline{\ensuremath{\mathbb{Q}}}_\ell$ and $i_{\ell'}:\overline{\ensuremath{\mathbb{Q}}}\rightarrow\overline{\ensuremath{\mathbb{Q}}}_{\ell'}$. We take $\vartheta:\ensuremath{\mathbf{G}}_F\rightarrow \mathbf{GL_n}_F$ to be a representation over $F$; then the $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell)$-local system $\ensuremath{\mathbb{L}}_\ell$ induces an $F_\lambda$-adic local system $\ensuremath{\mathcal{L}}_\ell$ over $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$.
Similarly we obtain an $F_{\lambda'}$-adic local system $\ensuremath{\mathcal{L}}_{\ell'}$. \begin{lemma}\label{lemma: Frob l adic unit} For any closed point $x\in \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbb{F}}_{q})$, the eigenvalues of $\mathrm{Frob}_x$ acting on $\mathcal{L}_{\ell,\overline{x}}$ are $\ell$-adic units. \end{lemma} \begin{proof} It suffices to prove this for a single faithful representation of $\ensuremath{\mathbf{G}}$. For the representation $\iota:\ensuremath{\mathbf{G}}\rightarrow \mathbf{GL}(V)$, the action of $\mathrm{Frob}_x$ on $\mathcal{L}_{\ell,\overline{x}}$ corresponds to the action of Frobenius on the $\ell$-adic Tate module of an abelian variety and hence its eigenvalues are all $\ell$-adic units. \end{proof} \subsubsection{}We let $\vartheta(\gamma_{y,\ell})\in\ensuremath{\mathrm{Conj}}_{\mathbf{GL_n}}(F)\subset\ensuremath{\mathrm{Conj}}_{\mathbf{GL_n}}(F_\lambda)$ denote the image of the conjugacy class of $\mathrm{Frob}_y$ under $\vartheta$ and we similarly define $\vartheta(\gamma_{y,\ell'})\in\ensuremath{\mathrm{Conj}}_{\mathbf{GL_n}}(F)\subset\ensuremath{\mathrm{Conj}}_{\mathbf{GL_n}}(F_{\lambda'})$. \begin{prop}\label{prop: indep in GL} $\vartheta(\gamma_{y,\ell})=\vartheta(\gamma_{y,\ell'})$ in $\mathrm{Conj}_{\mathbf{GL_n}}(F)$. \end{prop} \begin{proof}Note that if $y\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},[b]_{\mu}}(\ensuremath{\mathbb{F}}_{q})$, where $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},[b]_{\mu}}$ denotes the $\mu$-ordinary locus of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}$, then the result follows from Corollary \ref{cor: l indep mu ordinary}. The proof then proceeds in two steps. We first prove the result for $y\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^{{\mu}}(\ensuremath{\mathbb{F}}_{q})$ using the result for the $\mu$-ordinary locus. 
We then deduce the result for general $y$ by descending induction on the strata $\lambda$ for which $y\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^\lambda(\ensuremath{\mathbb{F}}_{q})$. Step (1): Let $y\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^{{\mu}}(\ensuremath{\mathbb{F}}_{q})$. Recall that $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^{{\mu}}$ is a smooth algebraic stack over $k_E$ and that $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},[b]_{\mu}}\cap \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^\mu$ is a dense and open substack of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^\mu$ (in fact one can show $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},[b]_{\mu}}\subset \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^\mu$). Using the same argument as in the proof of Corollary \ref{cor: map from smooth stack to Shimura var} (i.e. applying \cite[Th\'eor\`eme 6.3]{LMB} and \cite[Theorem 1.1]{Poonen}), we may find a smooth geometrically connected curve $C$ over $\ensuremath{\mathbb{F}}_q$ and a map $\psi:C\rightarrow \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},\ensuremath{\mathbb{F}}_q}^\mu$ defined over $\ensuremath{\mathbb{F}}_q$ such that there exists a point $y'\in C(\ensuremath{\mathbb{F}}_q)$ with $\psi(y')=y$ and such that the preimage $C_{[b]_{\mu}}$ of $\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},[b]_{\mu}}$ in $C$ is open and dense. We write $\ensuremath{\mathcal{L}}_{\ell}^{C}$ (resp. $\ensuremath{\mathcal{L}}_{\ell'}^{C}$) for the pullback $\psi^*\ensuremath{\mathcal{L}}_\ell$ of $\ensuremath{\mathcal{L}}_\ell$ (resp. $\psi^*\ensuremath{\mathcal{L}}_{\ell'}$ of $\ensuremath{\mathcal{L}}_{\ell'}$) to ${C}$. By Lemma \ref{lemma: Frob l adic unit}, $\ensuremath{\mathcal{L}}^C_{\ell}$ satisfies the conditions in Chin's refinement of Lafforgue's Theorem \cite[Theorem 4.6]{Chin}.
Thus there exists a $\overline{\ensuremath{\mathbb{Q}}}_{\ell'}$-local system $\ensuremath{\mathcal{K}}^{C}_{\ell'}$ over $C$ which is compatible for $\ensuremath{\mathcal{L}}^{C}_{\ell}$. Upon possibly enlarging $F$, we have that for any closed point $x\in C(\ensuremath{\mathbb{F}}_{q^s})$, $$\det(1-\mathrm{Frob}_xt|\ensuremath{\mathcal{L}}_{\ell,\bar x}^{C})=\det(1-\mathrm{Frob}_xt|\ensuremath{\mathcal{K}}_{\ell',\bar x}^{C}) \in F[t].$$ Hence, by Corollary \ref{cor: l indep mu ordinary}, for any closed point $x\in C_{[b]_{\mu}}(\ensuremath{\mathbb{F}}_{q^s})$, we have $$\det(1-\mathrm{Frob}_xt|\ensuremath{\mathcal{L}}_{\ell',\bar x}^{C})=\det(1-\mathrm{Frob}_xt|\ensuremath{\mathcal{L}}_{\ell,\bar x}^{C}) =\det(1-\mathrm{Frob}_xt|\ensuremath{\mathcal{K}}_{\ell', \bar x}^{C}).$$ Therefore, by the Chebotarev density theorem, the semisimplifications of $\ensuremath{\mathcal{K}}^{C}_{\ell'}$ and $\ensuremath{\mathcal{L}}^{C}_{\ell'}$ are isomorphic, and hence $$\vartheta(\gamma_{y,\ell})=\det(1-\mathrm{Frob}_yt|\ensuremath{\mathcal{L}}_{\ell,\bar y}^{C})=\det(1-\mathrm{Frob}_yt|\ensuremath{\mathcal{L}}_{\ell',\bar y}^{C})=\vartheta(\gamma_{y,\ell'})$$ which is what we wanted to show. Step (2): Let $y\in\ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}}}^\lambda(\ensuremath{\mathbb{F}}_{q})$. We proceed by descending induction on $\lambda$; by Step (1) we know the result for the maximal element $\lambda={\mu}$. Thus suppose the result is true for all $\lambda'\succ\lambda$. Let $\phi:C\rightarrow \ensuremath{\mathcal{S}}_{\ensuremath{\mathrm{K}},\ensuremath{\mathbb{F}}_q}$ be a map as in Corollary \ref{cor: map from smooth stack to Shimura var} where $C$ is a smooth geometrically connected curve over $\ensuremath{\mathbb{F}}_q$. We write $\ensuremath{\mathcal{L}}^{C}_{\ell}$ (resp. $\ensuremath{\mathcal{L}}^C_{\ell'}$) for the local system $\phi^*\ensuremath{\mathcal{L}}_\ell$ (resp. $\phi^*\ensuremath{\mathcal{L}}_{\ell'}$) on $C$.
We let $\ensuremath{\mathcal{K}}_{\ell'}^C$ be a compatible $\overline{\ensuremath{\mathbb{Q}}}_{\ell'}$-local system for $\ensuremath{\mathcal{L}}_\ell^C$, which exists as above. We let $U\subset C$ denote the open subscheme $$U:=\phi^{-1}(\bigcup_{\lambda\prec\lambda'}\ensuremath{\mathcal{S}}^{\lambda'}_{\ensuremath{\mathrm{K}},\ensuremath{\mathbb{F}}_q}).$$ By property (ii) in Corollary \ref{cor: map from smooth stack to Shimura var}, $U$ is a non-empty dense open subscheme of $C$. Applying the induction hypothesis we see that for all $x\in U(\ensuremath{\mathbb{F}}_{q^s})$, we have $$\det(1-\mathrm{Frob}_xt|\ensuremath{\mathcal{L}}_{\ell',\bar x}^C)=\det(1-\mathrm{Frob}_xt|\ensuremath{\mathcal{K}}_{\ell',\bar x}^C).$$ Arguing as in Step (1) we find that $$\vartheta(\gamma_{y,\ell})=\det(1-\mathrm{Frob}_yt|\ensuremath{\mathcal{L}}_{\ell,\bar y}^C)=\det(1-\mathrm{Frob}_yt|\ensuremath{\mathcal{L}}_{\ell',\bar y}^C)=\vartheta(\gamma_{y,\ell'}).$$ This completes the proof of the Proposition. \end{proof} \subsubsection{}We may now prove Theorem \ref{thm: l indep full}. \begin{proof}[Proof of Theorem \ref{thm: l indep full}] For all $\ell,\ell' \neq p,$ and $\vartheta$ as above, we have $\vartheta(\gamma_{y,\ell})=\vartheta(\gamma_{y,\ell'})$ by Proposition \ref{prop: indep in GL}. This implies that $\gamma_{y,\ell}=\gamma_{y,\ell'}\in\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\overline{\ensuremath{\mathbb{Q}}}),$ by a result of Steinberg \cite[6.6]{Steinberg:regular}. Hence, there exists $\gamma_y\in \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\overline{\ensuremath{\mathbb{Q}}})$ such that $\gamma_{y}=\gamma_{y,\ell}$ for all $\ell\neq p$. It suffices to show $\gamma_y$ is defined over $\ensuremath{\mathbb{Q}}$. Since $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}$ is a $\ensuremath{\mathbb{Q}}$-variety, the residue field of the point $\gamma_y$ is a finite extension $F/\ensuremath{\mathbb{Q}}$.
Since $\gamma_y\in \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell)$ for all $\ell\neq p$, each such prime $\ell$ has a prime of $F$ of degree one above it; hence the Chebotarev density theorem implies $\gamma_y\in \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}})$. Indeed, let $F'/\ensuremath{\mathbb{Q}}$ be the Galois closure of $F.$ Then for every prime $\ell\neq p$, there exists a prime $l$ of $F'$ above $\ell$ such that the Frobenius $\mathrm{Frob}_{l}$ lies in $\mathrm{Gal}(F'/F) \subset \mathrm{Gal}(F'/\ensuremath{\mathbb{Q}}).$ It follows that $\mathrm{Gal}(F'/F)$ intersects every conjugacy class of $\mathrm{Gal}(F'/\ensuremath{\mathbb{Q}})$ and hence these groups are equal. \end{proof} \begin{remark} The proof of Theorem \ref{thm: l indep full} uses Theorem \ref{thm: can lift for Shimura var} and hence depends on a choice of Hodge embedding $\iota$ for $(\ensuremath{\mathbf{G}},X)$. The statement of Theorem \ref{thm: l indep full} itself does not depend on such a choice since the local system $\widetilde\ensuremath{\mathbb{L}}_\ell$ is intrinsic to $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{G}},X)$. The Hodge embedding is used to deduce properties of $\widetilde\ensuremath{\mathbb{L}}_{\ell}$ via the isomorphism (\ref{eqn: id of local systems}). \end{remark} \section{Conjugacy class of Frobenius for abelian varieties} \subsection{Mumford--Tate groups}\label{subsec:Mumford-Tate groups} \subsubsection{}Let $A$ be an abelian variety over a number field $\ensuremath{\mathrm{E}}$. Recall we have fixed an embedding $i_\infty:\overline{\ensuremath{\mathbb{Q}}}\rightarrow\ensuremath{\mathbb{C}}$; using this we may consider $\ensuremath{\mathrm{E}}$ as a subfield of $\ensuremath{\mathbb{C}}$.
We write $V_B$ for the Betti cohomology $\ensuremath{\mathrm{H}}^1_B(A(\ensuremath{\mathbb{C}}),\ensuremath{\mathbb{Q}})$, which is equipped with a Hodge structure of type $((0,-1),(-1,0))$. This Hodge structure is induced by a morphism $$h:\mathbb{S}:=\text{Res}_{\ensuremath{\mathbb{C}}/\ensuremath{\mathbb{R}}}\mathbb{G}_m\rightarrow \ensuremath{\mathrm{GL}}(V_B).$$ We write $$\mu:\ensuremath{\mathbb{C}}^\times\xrightarrow{z\mapsto (z,1)}\ensuremath{\mathbb{C}}^\times\times c^*(\ensuremath{\mathbb{C}}^\times)\xrightarrow{h} {\ensuremath{\mathrm{GL}}}(V_B\otimes\ensuremath{\mathbb{C}})$$ for the Hodge cocharacter. \begin{definition} The Mumford--Tate group $\ensuremath{\mathbf{G}}$ of $A$ is the smallest algebraic subgroup of $\ensuremath{\mathrm{GL}}(V_B)$ defined over $\ensuremath{\mathbb{Q}}$ such that $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{C}})$ contains the image of $\mu$. \end{definition} The group $\ensuremath{\mathbf{G}}$ can also be characterized as the algebraic subgroup of $\ensuremath{\mathrm{GL}}(V_B)$ that stabilizes all Hodge cycles; it is known that $\ensuremath{\mathbf{G}}$ is a reductive group. We remark that $\ensuremath{\mathbf{G}}$ depends on the embedding $\ensuremath{\mathrm{E}}\hookrightarrow \ensuremath{\mathbb{C}}$; indeed, different embeddings may give rise to inner forms of $\ensuremath{\mathbf{G}}$. \subsubsection{} For a prime number $\ell$, we write $T_\ell A$ for the Tate module of $A$.
The action of the absolute Galois group $\Gamma_{\ensuremath{\mathrm{E}}}:=\mathrm{Gal}(\overline{\ensuremath{\mathrm{E}}}/\ensuremath{\mathrm{E}})$ on $T_\ell A^\vee$ gives rise to a representation $\rho_\ell:\Gamma_{\ensuremath{\mathrm{E}}}\rightarrow \mathbf{GL}(T_\ell A^\vee)$ and the Betti--\'etale comparison gives us a canonical isomorphism $$\ensuremath{\mathrm{H}}^1_B(A(\ensuremath{\mathbb{C}}),\ensuremath{\mathbb{Q}})\otimes_{\ensuremath{\mathbb{Q}}}\ensuremath{\mathbb{Q}}_\ell\cong T_\ell A^\vee\otimes_{\ensuremath{\mathbb{Z}}_\ell}\ensuremath{\mathbb{Q}}_\ell.$$ Deligne's theorem that Hodge cycles are absolutely Hodge \cite{De1} implies that upon replacing $\ensuremath{\mathrm{E}}$ by a finite extension, the map $\rho_\ell$ factors through $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell)$; see \cite[Remarque 1.9]{Noot}. In fact this condition does not depend on $\ell.$ \begin{lemma}\label{indepoflfac} $\rho_\ell$ factors through $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell)$ for some prime $\ell$ if and only if it factors through $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell)$ for all primes $\ell.$ \end{lemma} \begin{proof} The subgroup $\ensuremath{\mathbf{G}} \subset \ensuremath{\mathrm{GL}}(V_B)$ is the stabilizer of a collection of Hodge cycles $(s_{\alpha})_{\alpha}.$ We consider the $\ell$-adic components $(s_{\alpha,\ell})_{\ell}$, as in \S\ref{subsubsec:hodgecycles}. For $\sigma \in \Gamma_{\ensuremath{\mathrm{E}}}$, $(\sigma(s_{\alpha,\ell}))_{\ell}$ is again a Hodge cycle, by Deligne's theorem \cite[Theorem 2.11]{De1}. In particular, if $(\sigma(s_{\alpha,\ell}))_{\ell}$ and $(s_{\alpha,\ell})_{\ell}$ have equal components at some prime $\ell,$ then they are equal.
\end{proof} The lemma shows that the condition that $\Gamma_{\ensuremath{\mathrm{E}}}$ fixes $(s_{\alpha,\ell})_{\alpha}$ pointwise does not depend on $\ell.$ This condition is equivalent to asking that $\Gamma_{\ensuremath{\mathrm{E}}}$ maps to $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell).$ \subsubsection{}\label{subsubsec:notation} We replace $\ensuremath{\mathrm{E}}$ by the smallest extension such that $\Gamma_{\ensuremath{\mathrm{E}}}$ maps to $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell),$ and we write $\rho_\ell^{\ensuremath{\mathbf{G}}}$ for the induced map $\Gamma_{\ensuremath{\mathrm{E}}}\rightarrow \ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell)$ and $\iota_\ell$ for the inclusion $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell)\rightarrow \mathbf{GL}(T_\ell A^\vee)$. Let $v$ be a prime of $\ensuremath{\mathrm{E}}$ lying above a prime $p$ such that $A$ has good reduction at $v.$ Upon modifying the embedding $i_p:\overline{\ensuremath{\mathbb{Q}}}\rightarrow \overline{\ensuremath{\mathbb{Q}}}_p$ fixed in \S\ref{subsec: integral models Hodge type preamble}, we may assume that $v$ is induced by $i_p$. We write $E = \ensuremath{\mathrm{E}}_v,$ and we let $\ensuremath{\mathbb{F}}_q$ denote the residue field of $\ensuremath{\mathrm{E}}$ at $v.$ For $\ell\neq p$ a prime, the criterion of N\'eron--Ogg--Shafarevich implies that the representation $\rho_\ell$ is unramified at $v$. Let $\ensuremath{\mathrm{Fr}}_v$ be a geometric Frobenius element at $v$; we write $\gamma_\ell(v)=\chi_{\ensuremath{\mathbf{G}}}(\rho_\ell^\ensuremath{\mathbf{G}}(\ensuremath{\mathrm{Fr}}_v))\in \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell)$ for the conjugacy class of $\rho_\ell^\ensuremath{\mathbf{G}}(\mathrm{Fr}_v)$, which depends only on $v$ and not on the choice of Frobenius element.
We write $P_{v,\ell}(t)$ for the characteristic polynomial of $\ensuremath{\mathrm{Fr}}_v$ acting on $T_\ell A^\vee$, which has coefficients in $\ensuremath{\mathbb{Z}}$ and is independent of $\ell$. \subsubsection{}\label{subsubsec: restriction of scalars} We will make use of the following auxiliary construction. Let $\ensuremath{\mathrm{F}}/\ensuremath{\mathbb{Q}}$ be a totally real field, and let $\ensuremath{\mathbf{H}}':=\text{Res}_{\ensuremath{\mathrm{F}}/\ensuremath{\mathbb{Q}}}\ensuremath{\mathbf{G}}$. There is a canonical inclusion $\ensuremath{\mathbf{G}}\hookrightarrow \ensuremath{\mathbf{H}}'$. We let $(V,\psi)$ be the symplectic space corresponding to $\ensuremath{\mathrm{H}}_1(A(\ensuremath{\mathbb{C}}),\ensuremath{\mathbb{Q}})$, where $\psi$ is a Riemann form for $A$ and $\ensuremath{\mathbf{G}}\rightarrow \mathbf{GSp}(V)$ is the natural map. We let $W$ denote the symplectic space over $\ensuremath{\mathbb{Q}}$ whose underlying vector space is $V\otimes_{\ensuremath{\mathbb{Q}}}\ensuremath{\mathrm{F}}$ and whose alternating form $\psi'$ is given by the composition $$W\times W\xrightarrow{\psi\otimes_{\ensuremath{\mathbb{Q}}}\ensuremath{\mathrm{F}}}\ensuremath{\mathrm{F}}\xrightarrow{\text{Tr}_{\ensuremath{\mathrm{F}}/\ensuremath{\mathbb{Q}}}}\ensuremath{\mathbb{Q}}.$$ Let $c_{\ensuremath{\mathbf{G}}}:\ensuremath{\mathbf{G}}\rightarrow \mathbb{G}_m$ denote the restriction of the multiplier homomorphism $c:\mathbf{GSp}(V)\rightarrow \mathbb{G}_m$ to $\ensuremath{\mathbf{G}}$. We form the fiber product $$\xymatrix{\ensuremath{\mathbf{H}}'' \ar[rr]\ar[d] && \mathbb{G}_m \ar[d]_{\Delta}\\ \ensuremath{\mathbf{H}}' \ar[rr]^{\!\!\!\!\!\!\mathrm{Res}_{\ensuremath{\mathrm{F}}/\ensuremath{\mathbb{Q}}}c_{\ensuremath{\mathbf{G}}}} && \mathrm{Res}_{\ensuremath{\mathrm{F}}/\ensuremath{\mathbb{Q}}}\mathbb{G}_m }$$ where the map $\Delta$ is the diagonal map, and we let $\ensuremath{\mathbf{H}}$ denote the neutral connected component of $\ensuremath{\mathbf{H}}''$.
Thus $\ensuremath{\mathbf{H}}$ is a connected reductive group over $\ensuremath{\mathbb{Q}}$. The inclusion $\ensuremath{\mathbf{G}}\hookrightarrow \ensuremath{\mathbf{H}}'$ factors through $\ensuremath{\mathbf{H}}$, and we let $h'$ denote the composition $$\mathbb{S}\xrightarrow{h} \ensuremath{\mathbf{G}}_{\R}\rightarrow \ensuremath{\mathbf{H}}_{\R}.$$ Write $X$ for the $\ensuremath{\mathbf{G}}(\R)$-conjugacy class of $h$ and $X_{\ensuremath{\mathbf{H}}}$ for the $\ensuremath{\mathbf{H}}(\R)$-conjugacy class of $h'$. Consider the composition $$\iota':\ensuremath{\mathbf{H}}'\xrightarrow{\mathrm{Res}_{\ensuremath{\mathrm{F}}/\ensuremath{\mathbb{Q}}}\iota}\mathrm{Res}_{\ensuremath{\mathrm{F}}/\ensuremath{\mathbb{Q}}}\mathbf{GSp}(V)\xrightarrow{f} \mathbf{GL}(W)$$ where $f$ is induced by the forgetful functor from $\ensuremath{\mathrm{F}}$-vector spaces to $\ensuremath{\mathbb{Q}}$-vector spaces. It is easy to see that the restriction of $\iota'$ to $\ensuremath{\mathbf{H}}$ factors through $\mathbf{GSp}(W)$, and we also denote by $\iota'$ the induced map. We write $S'^\pm$ for the Siegel half space corresponding to $W$. One checks easily that $(\ensuremath{\mathbf{G}}, X)$ and $(\ensuremath{\mathbf{H}},X_{\ensuremath{\mathbf{H}}})$ are Shimura data, and that we have embeddings of Shimura data $$ (\ensuremath{\mathbf{G}}, X) \hookrightarrow (\ensuremath{\mathbf{H}}, X_{\ensuremath{\mathbf{H}}}) \hookrightarrow (\mathbf{GSp}(W), S'^{\pm}).$$ \subsubsection{}The following lemma will be used to show that, in order to establish the $\ell$-independence of $\gamma_\ell(v)$ in $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}$, it suffices to establish it in $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{H}}}$.
\begin{lemma}\label{lemma: conj injective} The natural inclusion $\ensuremath{\mathbf{G}}\rightarrow \ensuremath{\mathbf{H}}$ induces a $\mathrm{Gal}(\overline{\ensuremath{\mathbb{Q}}}/\ensuremath{\mathbb{Q}})$-equivariant injection $$\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\overline{\ensuremath{\mathbb{Q}}})\rightarrow \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{H}}}(\overline{\ensuremath{\mathbb{Q}}}).$$ \end{lemma} \begin{proof}Let $h,h'\in \ensuremath{\mathbf{G}}(\overline{\ensuremath{\mathbb{Q}}})$ be such that $g^{-1}hg=h'$ for some $g\in \ensuremath{\mathbf{H}}(\overline{\ensuremath{\mathbb{Q}}})$. We consider $\ensuremath{\mathbf{H}}$ as a subgroup of $\ensuremath{\mathbf{H}}'$. Then under the identification $$\ensuremath{\mathbf{H}}'_{\overline{\ensuremath{\mathbb{Q}}}}\cong \prod_{\iota:\ensuremath{\mathrm{F}}\rightarrow\overline{\ensuremath{\mathbb{Q}}}}\ensuremath{\mathbf{G}}_{\overline{\ensuremath{\mathbb{Q}}}},$$ $h,h'$ correspond to the elements $(h,\dotsc,h),(h',\dotsc,h')$ respectively, and we write $g=(g_1,\dotsc,g_n)$. Then $g^{-1}hg=h'$ implies $g_1^{-1}hg_1=h'$. Thus $h$ and $h'$ have the same image in $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\overline{\ensuremath{\mathbb{Q}}})$. The $\mathrm{Gal}(\overline{\ensuremath{\mathbb{Q}}}/\ensuremath{\mathbb{Q}})$-equivariance follows from the fact that $\ensuremath{\mathbf{G}}\rightarrow \ensuremath{\mathbf{H}}$ is defined over $\ensuremath{\mathbb{Q}}$. \end{proof} \subsection{The main theorem} \label{sec: main thm} We now prove our main theorem (cf. Theorem \ref{introthm: main}). We need the following preliminary result.
\begin{lemma}\label{lem:elementlevelstr} Let $G$ be a connected reductive group over $\ensuremath{\mathbb{Q}}_p.$ If $g \in G(\ensuremath{\mathbb{Q}}_p)$ lies in some compact open subgroup of $G(\ensuremath{\mathbb{Q}}_p),$ then there exists a finite extension $F/\ensuremath{\mathbb{Q}}_p$ over which $G$ splits and such that $g$ lies in the parahoric subgroup of $G(F)$ associated to a very special vertex in the building $\ensuremath{\mathcal{B}}(G,F).$ \end{lemma} \begin{proof} Write $g=g_sg_u$ for the Jordan decomposition of $g$, so that $g_s$ is semisimple and $g_u$ is unipotent. Since $g$ lies in a compact open subgroup of $G(\ensuremath{\mathbb{Q}}_p)$, $g$ is power bounded, and hence $g_s$ and $g_u$ are power bounded. Let $T\subset G$ be a maximal torus defined over $\ensuremath{\mathbb{Q}}_p$ such that $g_s\in T(\ensuremath{\mathbb{Q}}_p)$. We will take $F$ to be the splitting field of $T$. Since $g_s\in T(F)$ is power bounded, it is contained in $\ensuremath{\mathcal{T}}_{F,0}(\ensuremath{\mathcal{O}}_F)$, where $\ensuremath{\mathcal{T}}_{F,0}$ is the connected N\'eron model for the base change $T_F$. If we let $\ensuremath{\mathcal{A}}(G,T,F)\subset \ensuremath{\mathcal{B}}(G,F)$ be the apartment corresponding to $T_F$, then $g_s$ acts trivially on $\ensuremath{\mathcal{A}}(G,T,F)$. Now $g_u\in U(F)$, where $U$ is the unipotent radical of some Borel subgroup $B$ of $G_F$ containing $T$. Let $\ensuremath{\mathfrak{s}}\in \ensuremath{\mathcal{A}}(G,T,F)$ be any special vertex; we use this vertex to identify $\ensuremath{\mathcal{A}}(G,T,F)$ with $X_*(T)\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{R}}$. Since each affine root subgroup of $G_F$ fixes a half apartment in $\ensuremath{\mathcal{A}}(G,T,F)$, there exists a sufficiently dominant (with respect to the choice of Borel $B$) very special vertex $\ensuremath{\mathfrak{s}}'$ which is fixed by $g_u$. It follows that $\ensuremath{\mathfrak{s}}'$ is fixed by $g$.
We write $\widetilde{\ensuremath{\mathcal{G}}}$ for the Bruhat--Tits stabilizer scheme over $\ensuremath{\mathcal{O}}_F$ corresponding to $\ensuremath{\mathfrak{s}}'$; by the above discussion we have $g\in\widetilde{\ensuremath{\mathcal{G}}}(\ensuremath{\mathcal{O}}_F)$. Since $G$ is split over $F$, $\widetilde{\ensuremath{\mathcal{G}}}$ is equal to the parahoric group scheme $\ensuremath{\mathcal{G}}$ associated to $\ensuremath{\mathfrak{s}}'$. \end{proof} \subsubsection{} We now return to the assumptions and notation of \S \ref{subsec:Mumford-Tate groups}. Thus we have an abelian variety $A/\ensuremath{\mathrm{E}}$ such that $\rho_\ell:\Gamma_{\ensuremath{\mathrm{E}}}\rightarrow \ensuremath{\mathrm{GL}}(T_\ell A^\vee)$ factors through $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{Q}}_\ell)$ for all $\ell$. Recall that $E=\ensuremath{\mathrm{E}}_v$ and that $\ensuremath{\mathbb{F}}_q$ is its residue field. The map $i_p:\overline{\ensuremath{\mathbb{Q}}}\rightarrow \overline{\ensuremath{\mathbb{Q}}}_p$ determines an inclusion \begin{equation}\label{eqn: decomposition group}\mathrm{Gal}(\overline{E}/E)\rightarrow \mathrm{Gal}(\overline{\ensuremath{\mathrm{E}}}/\ensuremath{\mathrm{E}}).\end{equation} We let $\widetilde{\sigma}_q\in \Gamma_{\ensuremath{\mathrm{E}}}$ be the image under (\ref{eqn: decomposition group}) of a lift of the geometric Frobenius in $\mathrm{Gal}(\overline{E}/E)$. \begin{prop}\label{prop: reduction to acceptable case}Let $p>2$.
There exists a totally real field $\ensuremath{\mathrm{F}}$ such that if $(\ensuremath{\mathbf{H}},X_{\ensuremath{\mathbf{H}}})$ denotes the Shimura datum of Hodge type coming from the construction in \S\ref{subsubsec: restriction of scalars}, then the group $H:=\ensuremath{\mathbf{H}}_{\ensuremath{\mathbb{Q}}_p}$ is quasi-split and there exists a very special parahoric group scheme $\ensuremath{\mathcal{H}}$ for $H$ such that \begin{enumerate} \item The image of $\rho_p^{\ensuremath{\mathbf{G}}}(\widetilde{\sigma}_q)$ in $H(\ensuremath{\mathbb{Q}}_p)$ lies in $\ensuremath{\mathcal{H}}(\ensuremath{\mathbb{Z}}_p)$. \item The triple $(\ensuremath{\mathbf{H}},X_{\ensuremath{\mathbf{H}}},\ensuremath{\mathcal{H}})$ is acceptable. \end{enumerate} \end{prop} \begin{proof}Let $G=\ensuremath{\mathbf{G}}_{\ensuremath{\mathbb{Q}}_p}$. By Lemma \ref{lem:elementlevelstr} applied to the element $\rho_p^{\ensuremath{\mathbf{G}}}(\widetilde{\sigma}_q)\in G(\ensuremath{\mathbb{Q}}_p)$, there exists a finite extension $F/\ensuremath{\mathbb{Q}}_p$ such that $G_F$ is split and there exists a very special parahoric $\ensuremath{\mathcal{G}}$ of $G_F$ such that the image of $\rho_p^{\ensuremath{\mathbf{G}}}(\widetilde{\sigma}_q)$ in $G(F)$ lies in $\ensuremath{\mathcal{G}}(\ensuremath{\mathcal{O}}_F)$. We let $\ensuremath{\mathrm{F}}$ be a totally real field such that $\ensuremath{\mathrm{F}}_w\cong F$ for all places $w|p$ of $\ensuremath{\mathrm{F}}$.
By construction $\ensuremath{\mathbf{H}}\subset\ensuremath{\mathbf{H}}'=\mathrm{Res}_{\ensuremath{\mathrm{F}}/\ensuremath{\mathbb{Q}}}\ensuremath{\mathbf{G}}$ and we have an isomorphism\begin{align*} H':=\ensuremath{\mathbf{H}}'_{\ensuremath{\mathbb{Q}}_p}\cong\prod_{w|p}\mathrm{Res}_{\ensuremath{\mathrm{F}}_w/\ensuremath{\mathbb{Q}}_p}\ensuremath{\mathbf{G}}_{\ensuremath{\mathrm{F}}_w}\cong \prod_{w|p}\mathrm{Res}_{F/\ensuremath{\mathbb{Q}}_p}G_F. \end{align*} We let $\ensuremath{\mathcal{H}}'$ denote the parahoric group scheme of $H'$ corresponding to $\prod_{w|p}\ensuremath{\mathcal{G}}$. Then $\ensuremath{\mathcal{H}}'(\ensuremath{\mathbb{Z}}_p)\cap H(\ensuremath{\mathbb{Q}}_p)$ arises as the $\ensuremath{\mathbb{Z}}_p$-points of a parahoric group scheme $\ensuremath{\mathcal{H}}$ for $H:=\ensuremath{\mathbf{H}}_{\ensuremath{\mathbb{Q}}_p}$. By construction $H'$ is quasi-split, since it is the restriction of scalars of a split group, and hence $H$ is quasi-split. Since $G(\ensuremath{\mathbb{Q}}_p)\subset H(\ensuremath{\mathbb{Q}}_p)$, the image of $\rho_p^{\ensuremath{\mathbf{G}}}(\widetilde{\sigma}_q)$ in $H(\ensuremath{\mathbb{Q}}_p)$ lies in $\ensuremath{\mathcal{H}}(\ensuremath{\mathbb{Z}}_p)$, so that (1) is satisfied. To show that (2) is satisfied, we let $(\ensuremath{\mathbf{H}}_1,X_1)$ be an auxiliary Shimura datum of Hodge type as constructed in Proposition \ref{lemma: auxiliary Hodge type datum}, so that there is a central extension $\ensuremath{\mathbf{H}}_{1\mathrm{der}}\rightarrow \ensuremath{\mathbf{H}}_{\mathrm{der}}$, and we write $H_1:=\ensuremath{\mathbf{H}}_{1,\ensuremath{\mathbb{Q}}_p}$. The parahoric $\ensuremath{\mathcal{H}}$ of $H$ determines a very special parahoric group scheme $\ensuremath{\mathcal{H}}_1$ of $H_1$. It suffices to show that $\ensuremath{\mathcal{H}}_1$ is a connected parahoric.
Note that there is an isomorphism $H_{\ensuremath{\mathrm{ad}}}\cong H_{1,\ensuremath{\mathrm{ad}}}\cong \prod_{i=1}^r\mathrm{Res}_{F_i/\ensuremath{\mathbb{Q}}_p}G_i$ where $G_i$ is a \emph{split} reductive group over $F_i$. It follows that any parahoric of $H_{\ensuremath{\mathrm{ad}}}$ is connected. There is a natural map $\widetilde{\ensuremath{\mathcal{H}}}_1\rightarrow \widetilde{\ensuremath{\mathcal{H}}}_{\ensuremath{\mathrm{ad}}} $ and a commutative diagram \[\xymatrix{\widetilde{\ensuremath{\mathcal{H}}}_1(\ensuremath{\breve{\mathbb{Z}}_p}) \ar[r]\ar[d]_{\widetilde\kappa_{H_1}} & \widetilde{\ensuremath{\mathcal{H}}}_{\ensuremath{\mathrm{ad}}}(\ensuremath{\breve{\mathbb{Z}}_p})\ar[d]^{\widetilde\kappa_{H_{\ensuremath{\mathrm{ad}}}}}\\ \pi_1(H_1)_I \ar[r]&\pi_1(H_{\ensuremath{\mathrm{ad}}})_I.}\] Therefore $\widetilde{\ensuremath{\mathcal{H}}}_1(\ensuremath{\breve{\mathbb{Z}}_p})$ maps to $\ker(\pi_1(H_1)_I\rightarrow\pi_1(H_{\ensuremath{\mathrm{ad}}})_I)$ and it suffices to show this group is torsion free. We have a commutative diagram with exact rows. \[\xymatrix{ &\pi_1(H_{1\ensuremath{\mathrm{der}}})_I\ar[r] \ar[d]&\pi_1(H_1)_I\ar[r]\ar[d]& X_*(H_{1\ensuremath{\mathrm{ab}}})_I\ar[r]\ar[d]& 0\\ 0 \ar[r]& \pi_1(H_{\ensuremath{\mathrm{ad}}})_I \ar[r]^\sim&\pi_1(H_{\ensuremath{\mathrm{ad}}})_I\ar[r] &\{1\}\ar[r]& 0 }\] Since $\pi_1(H_{1,\ensuremath{\mathrm{der}}})\rightarrow \pi_1(H_{\ensuremath{\mathrm{ad}}})$ is injective and these are induced modules, it follows that $\pi_1(H_{1,\ensuremath{\mathrm{der}}})_I\rightarrow \pi_1(H_{\ensuremath{\mathrm{ad}}})_I$ is injective. By construction, $X_*(H_{1\mathrm{ab}})_I$ is torsion free, and hence so is $\mathrm{ker}(\pi_1(H_1)_I\rightarrow \pi_1(H_{\ensuremath{\mathrm{ad}}})_I)$ by the snake Lemma. \end{proof} \begin{thm}\label{thm: l indep for abelian var}Let $p>2$ be a prime and $v|p$ a place of $\ensuremath{\mathrm{E}}$ where $A$ has good reduction. 
Then there exists an element $\gamma\in \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}})$ such that for all $\ell\neq p$, we have $\gamma=\gamma_\ell(v)$ in $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell)$. \end{thm} \begin{remark} As remarked above, the group $\ensuremath{\mathbf{G}}$ depends on the embedding $\ensuremath{\mathrm{E}}\hookrightarrow \ensuremath{\mathbb{C}}$ up to inner automorphism. However, this does not change the $\ensuremath{\mathbb{Q}}$-variety $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}$, and it can be checked that the statement of the theorem can be made independent of the choice of embedding. \end{remark} \begin{proof}[Proof of \ref{thm: l indep for abelian var}] We may assume that $\ensuremath{\mathbf{G}}$ is not a torus, since if $\ensuremath{\mathbf{G}}$ is a torus then $A$ has complex multiplication and the result is a theorem of Shimura--Taniyama. We choose a totally real field $\ensuremath{\mathrm{F}}$ as in Proposition \ref{prop: reduction to acceptable case} and let $(\ensuremath{\mathbf{H}},X_{\ensuremath{\mathbf{H}}})$ be the associated Shimura datum of Hodge type arising from the construction in \S\ref{subsubsec: restriction of scalars}. By construction, there is a very special parahoric $\ensuremath{\mathcal{H}}$ of $\ensuremath{\mathbf{H}}_{\ensuremath{\mathbb{Q}}_p}$ such that the image of $\rho_p^{\ensuremath{\mathbf{G}}}(\widetilde{\sigma}_q)$ inside $\ensuremath{\mathbf{H}}(\ensuremath{\mathbb{Q}}_p)$ lies in $\ensuremath{\mathrm{K}}_p:=\ensuremath{\mathcal{H}}(\ensuremath{\mathbb{Z}}_p)$.
Hence, there exists a finite extension $\ensuremath{\mathrm{E}}'$ of $\ensuremath{\mathrm{E}}$ such that $\rho^{\ensuremath{\mathbf{G}}}_p|_{\Gamma_{\ensuremath{\mathrm{E}}'}}$ factors through $\ensuremath{\mathrm{K}}_p,$ and such that there is a prime $v'|v$ of $\ensuremath{\mathrm{E}}'$ such that $\ensuremath{\mathrm{E}}'_{v'}$ has residue field $\ensuremath{\mathbb{F}}_q.$ We may thus replace $\ensuremath{\mathrm{E}}$ by $\ensuremath{\mathrm{E}}',$ without changing the statement of the theorem, and assume that the image of $\rho^{\ensuremath{\mathbf{G}}}_p$ in $\ensuremath{\mathbf{H}}(\ensuremath{\mathbb{Q}}_p)$ factors through $\ensuremath{\mathrm{K}}_p.$ Now let $(s_{\alpha,\ell})_{\ell \neq p}\in \widehat{V}^p(A)^\otimes$ denote the $\ell$-adic realizations of the absolute Hodge cycles for $A$. By our assumption on $\ensuremath{\mathrm{E}}$, the representation $\rho^p:\Gamma_{\ensuremath{\mathrm{E}}} \rightarrow \ensuremath{\mathrm{GL}}(\widehat{V}^p(A))$ factors through $\ensuremath{\mathbf{G}}(\ensuremath{\mathbb{A}}_f^p) \subset \ensuremath{\mathbf{H}}(\ensuremath{\mathbb{A}}_f^p),$ and hence through a compact open subgroup $\ensuremath{\mathrm{K}}^p\subset \ensuremath{\mathbf{H}}(\ensuremath{\mathbb{A}}_f^p)$. Write $\ensuremath{\mathrm{K}}:=\ensuremath{\mathrm{K}}_p\ensuremath{\mathrm{K}}^p$. We now define a point of $\ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{H}},X_{\ensuremath{\mathbf{H}}})$ using the Hodge embedding $\iota':(\ensuremath{\mathbf{H}},X_{\ensuremath{\mathbf{H}}})\rightarrow (\mathbf{GSp}(W),S'^{\pm})$. Consider the abelian variety up to isogeny $A^{\ensuremath{\mathrm{F}}} = A\otimes_{\ensuremath{\mathbb{Q}}} \ensuremath{\mathrm{F}},$ equipped with the isomorphism $\varepsilon: \widehat{V}(A^{\ensuremath{\mathrm{F}}}) \simeq V\otimes_{\ensuremath{\mathbb{Q}}} \ensuremath{\mathbb{A}}_f \otimes_{\ensuremath{\mathbb{Q}}}\ensuremath{\mathrm{F}}$ induced by the identity on $V$.
Since $\rho_p^{\ensuremath{\mathbf{G}}}$ and $\rho^p$ act via $\ensuremath{\mathrm{K}}$, the $\ensuremath{\mathrm{K}}$-orbit of $\varepsilon$ is $\Gamma_\ensuremath{\mathrm{E}}$-invariant. Thus, the triple $(A^{\ensuremath{\mathrm{F}}},\lambda\otimes \ensuremath{\mathrm{F}}, \varepsilon)$ defines a point $\widetilde{x}_A \in \ensuremath{\mathrm{Sh}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{H}},X_{\ensuremath{\mathbf{H}}})(\ensuremath{\mathrm{E}}).$ (Note that, since $\psi$ is $\ensuremath{\mathbf{H}}$-invariant up to scalars, $\lambda$ is defined over $\ensuremath{\mathrm{E}}$ as a weak polarization.) By our choice of $\ensuremath{\mathrm{F}}$, the triple $(\ensuremath{\mathbf{H}},X_{\ensuremath{\mathbf{H}}},\ensuremath{\mathcal{H}})$ satisfies the assumptions of Theorem \ref{thm: l indep full}. Thus we may apply it to the reduction $x_A \in \ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{H}},X_{\ensuremath{\mathbf{H}}})(\ensuremath{\mathbb{F}}_q)$, where $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{H}},X_{\ensuremath{\mathbf{H}}})$ is the integral model constructed from a choice of auxiliary Hodge type Shimura datum. This implies that there exists $\gamma\in \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{H}}}(\ensuremath{\mathbb{Q}})$ such that for all $\ell\neq p$, we have $\gamma=\gamma_\ell(v)$ in $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{H}}}(\ensuremath{\mathbb{Q}}_\ell).$ By Lemma \ref{lemma: conj injective}, it follows that $\gamma\in \ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}})$ and $\gamma=\gamma_\ell(v)$ in $\ensuremath{\mathrm{Conj}}_{\ensuremath{\mathbf{G}}}(\ensuremath{\mathbb{Q}}_\ell).$ \end{proof} \begin{remark}In the proof of Theorem \ref{thm: l indep for abelian var}, we used an integral model $\ensuremath{\mathscr{S}}_{\ensuremath{\mathrm{K}}}(\ensuremath{\mathbf{H}},X_{\ensuremath{\mathbf{H}}})$ which depends on the choice of an auxiliary Shimura datum of Hodge type.
As mentioned in Remark \ref{rem: independence of model}, such a model should be independent of choices. In any case, all we use is that such a model exists which satisfies the extension property in Theorem \ref{thm: integral models abelian type} (2) and the conclusion of Theorem \ref{thm: l indep full}. \end{remark} \bibliographystyle{amsalpha}
\section{Introduction} The purpose of these notes is to provide a new proof of Mattila, Melnikov, and Verdera's theorem. The exposition is self-contained, relying only on a knowledge of basic real analysis. \begin{thm}\label{thm1}\cite{MMV} An Ahlfors-David regular measure $\mu$ whose associated Cauchy transform operator is bounded in $L^2(\mu)$ is uniformly rectifiable. \end{thm} The precise statement of this theorem is given in Section \ref{statementsec}. The scheme employed to prove Theorem \ref{thm1} in these notes is quite different from that in \cite{MMV}, and relies upon a characterization of \textit{reflectionless measures}. In this regard, one may compare the proof to that of Mattila's theorem \cite{Mat95b}: \emph{Suppose that $\mu$ is a finite Borel measure satisfying $\liminf_{r\rightarrow 0} \frac{\mu(B(z,r))}{r} \in (0,\infty)$ for $\mu$-a.e. $z\in \mathbb{C}$. If the Cauchy transform of $\mu$ exists $\mu$-a.e. in the sense of principal value, then $\mu$ is rectifiable}. Mattila's proof of this theorem uses a characterization of \textit{symmetric measures}; the reader may consult Chapter 14 of the book \cite{Mat95} for more information. Subsequently, Mattila's theorem was generalized to the case of singular integrals in higher dimensions by Mattila and Preiss in \cite{MP95}. Finding the analogous generalization of the proof we carry out here would answer a longstanding problem of David and Semmes \cite{DS}. Very recently, Nazarov, Tolsa, and Volberg \cite{NTV12} completed the solution of the problem of David and Semmes in the case of singular integral operators of co-dimension 1. They proved that if $\mu$ is a $d$-dimensional Ahlfors-David regular measure in $\mathbb{R}^{d+1}$, then the boundedness of the $d$-dimensional Riesz transform in $L^2(\mu)$ implies that one of the criteria for uniform rectifiability given in \cite{DS} is satisfied. See \cite{NTV12} for more details, and for further references and history about this problem.
Throughout this paper, we shall only consider Ahlfors-David regular measures. For closely related results without this assumption, see the paper of L\'{e}ger \cite{Leg}. \section{Notation} We shall adopt the following notation: \begin{itemize} \item $B(z,r)$ denotes the open disc centred at $z$ with radius $r>0$. \item For a square $Q$, we write $z_Q$ for the centre of $Q$, and $\ell(Q)$ for the side-length of $Q$. \item We shall denote by $\mathcal{D}$ the standard lattice of dyadic squares in the complex plane. A dyadic square is any square of the form $[k2^{j}, (k+1)2^{j})\times [\ell 2^{j}, (\ell+1)2^{j})$ for $j, k$ and $\ell$ in $\mathbb{Z}$. \item We define the Lipschitz norm of a function $f$ by $$\|f\|_{\operatorname{Lip}} = \sup_{z, \xi\in \mathbb{C}, z\neq \xi} \frac{|f(z) - f(\xi)|}{|z-\xi|}.$$ \item We denote by $\operatorname{Lip}_0(\mathbb{C})$ the space of compactly supported functions with finite Lipschitz norm. The continuous functions with compact support are denoted by $C_0(\mathbb{C})$. \item For $f:\mathbb{C}\rightarrow\mathbb{C}$, we set $\|f\|_{\infty} = \sup_{z\in \mathbb{C}}|f(z)|.$ In particular, note that we are taking the \textit{pointwise everywhere} supremum here. \item The closure of a set $E$ is denoted by $\overline{E}$. \item The support of a measure $\mu$ is denoted by $\operatorname{supp}(\mu)$. \item For a line $L$, we denote by $\mathcal{H}^1_{L}$ the one-dimensional Hausdorff measure restricted to $L$. If $L=\mathbb{R}$, we instead write $m_1$. \item We will denote by $C$ and $c$ various positive absolute constants. These constants may change from line to line within an intermediate argument. The constant $C$ is thought of as large (at the very least greater than $1$), while $c$ is thought of as small (certainly smaller than $1$). We shall usually make any dependence of a constant on a parameter explicit, unless it is clear from the context what the dependencies are.
\end{itemize} \section{The precise statement of Theorem \ref{thm1}}\label{statementsec} \subsection{The Cauchy transform of a measure $\mu$} Let $K(z)=\tfrac{1}{z}$ for $z\in \mathbb{C}\backslash \{0\}$. For a measure $\nu$, the \textit{Cauchy transform} of $\nu$ is formally defined by $$\mathcal{C}(\nu)(z) = \int_{\mathbb{C}}K(z-\xi) d\nu(\xi) \text{ for }z\in \mathbb{C}. $$ In general, the singularity in the kernel is too strong to expect the integral to converge absolutely on $\operatorname{supp}(\nu)$. It is therefore usual to introduce a regularized Cauchy kernel. For $\delta>0$, define $$K_{\delta}(z) = \frac{\bar{z}}{\max(\delta, |z|)^2}. $$ Then the $\delta$-regularized Cauchy transform of $\nu$ is defined by $$\mathcal{C}_{\delta}(\nu)(z) = \int_{\mathbb{C}}K_{\delta}(z-\xi) d\nu(\xi), \text{ for }z\in \mathbb{C}. $$ Before we continue, let us introduce a very natural condition to place upon $\mu$. A measure $\mu$ is called $C_0$-\emph{nice} if $\mu(B(z,r))\leq C_0 r$ for any disc $B(z,r)\subset \mathbb{C}$. If $\mu$ is a $C_0$-nice measure, then for any $f\in L^2(\mu)$ and $z\in \mathbb{C}$, we have that $\mathcal{C}_{\mu, \delta}(f)(z):=\mathcal{C}_{\delta}(f\mu)(z)$ is bounded in absolute value in terms of $\delta$, $C_0$, and $\|f\|_{L^2(\mu)}$. To see this, we shall need an elementary tail estimate, which we shall refer to quite frequently in what follows: \begin{lem}\label{tailest} Suppose that $\mu$ is a $C_0$-nice measure. For every $\varepsilon>0$ and $r>0$, we have $$\int\limits_{\mathbb{C}\backslash B(0,r)} \frac{1}{|\xi|^{1+\varepsilon}}d\mu(\xi)\leq \frac{C_0(1+\varepsilon)}{\varepsilon} r^{-\varepsilon}. $$ \end{lem} The proof of Lemma \ref{tailest} is a standard exercise, and is left to the reader; one route is to write $\tfrac{1}{|\xi|^{1+\varepsilon}} = (1+\varepsilon)\int_{|\xi|}^{\infty}\tfrac{ds}{s^{2+\varepsilon}}$, apply Fubini's theorem, and use the bound $\mu(B(0,s))\leq C_0 s$. With this lemma in hand, we return to our claim that $\mathcal{C}_{\mu, \delta}(f)$ is bounded.
First apply the Cauchy-Schwarz inequality to estimate $$|\mathcal{C}_{\mu, \delta}(f)(z)|\leq \Bigl(\int_{\mathbb{C}}|K_{\delta}(z-\xi)|^2d\mu(\xi)\Bigr)^{1/2}\|f\|_{L^2(\mu)}.$$ But now $\int_{\mathbb{C}}|K_{\delta}(z-\xi)|^2d\mu(\xi) \leq \int_{B(z,\delta)}\frac{|\xi-z|^2}{\delta^4}d\mu(\xi) + \int_{\mathbb{C}\backslash B(z,\delta)}\frac{1}{|\xi-z|^2}d\mu(\xi). $ The first term on the right hand side of this inequality is at most $\tfrac{\mu(B(z,\delta))}{\delta^2}\leq \tfrac{C_0}{\delta}$, and the second term is no greater than $\tfrac{2C_0}{\delta}$ by Lemma \ref{tailest}. We therefore see that $|\mathcal{C}_{\mu, \delta}(f)(z)|\leq \bigl(\tfrac{3C_0}{\delta}\bigr)^{1/2}\|f\|_{L^2(\mu)}$. In particular, we have $\mathcal{C}_{\mu, \delta}(f)\in L^2_{\text{loc}}(\mu)$. One conclusion of this discussion is that for any nice measure $\mu$, it makes sense to ask if $\mathcal{C}_{\mu,\delta}$ is a bounded operator from $L^2(\mu)$ to $L^2(\mu)$. \begin{defn} We say that $\mu$ is $C_0$-\emph{good} if it is $C_0$-nice and $$\sup_{\delta>0}\|\mathcal{C}_{\mu, \delta}\|_{L^2(\mu)\rightarrow L^2(\mu)}\leq C_0. $$ By definition, the Cauchy transform operator associated with $\mu$ is bounded in $L^2(\mu)$ if $\mu$ is good. \end{defn} The two-dimensional Lebesgue measure restricted to the unit disc is good. However, this measure is not supported on a $1$-rectifiable set, and so such measures should be ruled out in a statement such as Theorem \ref{thm1}. To this end, we shall deal with Ahlfors-David (AD) regular measures. \begin{defn}\label{ADreg} A nice measure $\mu$ is called AD-regular, with regularity constant $c_0>0$, if $\mu(B(z,r))\geq c_0 r$ for any disc $B(z,r)\subset \mathbb{C}$ with $z\in \operatorname{supp}(\mu)$.
\end{defn} \subsection{Uniform rectifiability} A set $E\subset \mathbb{C}$ is called \emph{uniformly rectifiable} if there exists $M>0$ such that for any dyadic square $Q\in \mathcal{D}$, there exists a Lipschitz mapping $F:[0,1]\rightarrow \mathbb{C}$ with $\|F\|_{\operatorname{Lip}}\leq M\ell(Q)$ and $E\cap Q \subset F([0,1])$. We can alternatively say that $E$ is uniformly rectifiable if there exists $M>0$ such that for any dyadic square $Q\in \mathcal{D}$, there is a rectifiable curve containing $E\cap Q$ of length no greater than $M\ell(Q)$. A measure $\mu$ is uniformly rectifiable if the set $E=\operatorname{supp}(\mu)$ is uniformly rectifiable. We may now restate Theorem \ref{thm1} in a more precise way. \begin{thm} A good AD-regular measure $\mu$ is uniformly rectifiable. \end{thm} \section{Making sense of the Cauchy transform on $\operatorname{supp}(\mu)$} \label{weaklims} The definition of a good measure does not immediately provide us with a workable definition of the Cauchy transform on the support of $\mu$. In this section, we rectify this matter by defining an operator $\mathcal{C}_{\mu}$ as a weak limit of the operators $\mathcal{C}_{\mu, \delta}$ as $\delta\rightarrow 0$. This idea goes back to Mattila and Verdera \cite{MV}. We fix a $C_0$-good measure $\mu$. Note that if $f \in \operatorname{Lip}_0(\mathbb{C})$, then $f$ is bounded in absolute value by $\|f\|_{\operatorname{Lip}}\cdot \text{diam}(\operatorname{supp}(f))$. Fix $f,g\in \operatorname{Lip}_0(\mathbb{C})$. Then for any $\delta>0$, we may write $$\langle \mathcal{C}_{\mu,\delta}(f),g\rangle_{\mu} = \frac{1}{2}\iint\limits_{\mathbb{C}\times\mathbb{C}} K_{\delta}(z-\xi)\bigl[f(\xi)g(z)-f(z)g(\xi)\bigr]d\mu(z)d\mu(\xi). $$ Let $H(z,\xi) = \tfrac{1}{2}\bigl[f(\xi)g(z)-f(z)g(\xi)\bigr]$. It will be useful to denote by $I_{\delta}(f,g)$ the expression $$I_{\delta}(f,g) =I_{\delta, \mu}(f,g)= \iint\limits_{\mathbb{C}\times\mathbb{C}} K_{\delta}(z-\xi) H(z,\xi)d\mu(\xi)d\mu(z).
$$ Now, note that if $S = \operatorname{supp}(f)\cup\operatorname{supp}(g)$, it is clear that $\operatorname{supp}(H)\subset S\times S$. In addition, $H$ is Lipschitz in $\mathbb{C}^2$ with Lipschitz norm no greater than $\tfrac{1}{\sqrt{2}}\bigl(\|f\|_{\infty}\|g\|_{\operatorname{Lip}}+\|g\|_{\infty}\|f\|_{\operatorname{Lip}}\bigr)$. To see this, first observe that $|H(z,\xi) - H(\omega, \xi)|\leq \tfrac{1}{2} \bigl(\|f\|_{\infty}\|g\|_{\operatorname{Lip}}+\|g\|_{\infty}\|f\|_{\operatorname{Lip}}\bigr)|z-\omega| $, whenever $z, \omega, \xi\in \mathbb{C}$. By using this inequality twice, we see that $$|H(z_1,z_2) - H(\xi_1, \xi_2)|\leq \tfrac{1}{2}\bigl(\|f\|_{\infty}\|g\|_{\operatorname{Lip}}+\|g\|_{\infty}\|f\|_{\operatorname{Lip}}\bigr)[|z_1-\xi_1|+|z_2-\xi_2|], $$ and the claim follows since $|z_1-\xi_1|+|z_2-\xi_2|\leq \sqrt{2}\sqrt{|z_1-\xi_1|^2+|z_2-\xi_2|^2}$. Since $H(z,z)=0$, this Lipschitz bound immediately yields $$|H(z,\xi)|\leq \tfrac{1}{\sqrt{2}}\bigl(\|f\|_{\infty}\|g\|_{\operatorname{Lip}}+\|g\|_{\infty}\|f\|_{\operatorname{Lip}}\bigr)|z-\xi| \text{ for any }z\neq \xi.$$ As a result of this bound on the absolute value of $H$, there exists a constant $C(f,g)>0$ such that $$|K_{\delta}(z-\xi)||H(z,\xi)|\leq C(f,g)\chi_{S\times S}(z,\xi).$$ On the other hand, since $\mu$ is a nice measure, the set $\{(z,\xi)\in \mathbb{C}\times\mathbb{C}: \, z=\xi\}$ is $\mu\times\mu$ null, and so for $\mu\times\mu$ almost every $(z, \xi)$, the limit as $\delta\rightarrow0$ of $K_{\delta}(z-\xi)$ is equal to $K(z-\xi)$. As a result, the Dominated Convergence Theorem applies to yield $$\lim_{\delta\rightarrow 0}I_{\delta}(f,g) = \iint\limits_{\mathbb{C}\times\mathbb{C}} K(z-\xi)H(z,\xi)d\mu(z)d\mu(\xi). $$ This limit will be denoted by $I(f,g)=I_{\mu}(f,g)$.
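We note in passing why the symmetrization identity with which this section began holds. Since $f$ and $g$ are compactly supported and $K_{\delta}$ is bounded, the double integral converges absolutely, and the antisymmetry $K_{\delta}(\xi-z)=-K_{\delta}(z-\xi)$ together with Fubini's theorem gives $$\langle \mathcal{C}_{\mu,\delta}(f),g\rangle_{\mu} = \iint\limits_{\mathbb{C}\times\mathbb{C}} K_{\delta}(z-\xi)f(\xi)g(z)\,d\mu(\xi)d\mu(z) = -\iint\limits_{\mathbb{C}\times\mathbb{C}} K_{\delta}(z-\xi)f(z)g(\xi)\,d\mu(\xi)d\mu(z),$$ where the second equality is obtained by interchanging the roles of $z$ and $\xi$. Averaging the two expressions yields $\langle \mathcal{C}_{\mu,\delta}(f),g\rangle_{\mu}=I_{\delta}(f,g)$.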
Moreover, there is a quantitative estimate on the speed of convergence: $$|I(f,g) - I_{\delta}(f,g)|\leq \iint\limits_{\substack{ (z,\xi)\in S\times S:\\ |z-\xi|<\delta}}C(f,g) d\mu(z)d\mu(\xi) \leq C(f,g)\delta\mu(S). $$ Since $\mu$ is $C_0$-nice, $\mu(S)$ can be bounded in terms of the diameters of the supports of $f$ and $g$, and we see that $|I(f,g) - I_{\delta}(f,g)|\leq C(f,g)\delta$. We have now justified the existence of an operator $\mathcal{C}_{\mu}$ acting from the space of compactly supported Lipschitz functions to its dual with respect to the standard pairing $\langle f,g\rangle_{\mu}=\int_{\mathbb{C}}fgd\mu$. Since $\mu$ is $C_0$-good, for any $\delta>0$ we have \begin{equation}\label{Idelt} |I_{\delta}(f,g)|\leq C_0\|f\|_{L^2(\mu)}\|g\|_{L^2(\mu)},\text{ for any } f,g \in L^2(\mu), \end{equation} and this inequality allows us to extend the definition of $I(f,g)$ to the case when $f$ and $g$ are $L^2(\mu)$ functions. To do this, we first pick functions $f$ and $g$ in $L^2(\mu)$. Let $\varepsilon>0$. Using the density of $\operatorname{Lip}_0(\mathbb{C})$ in $L^2(\mu)$, we write $f=f_1+f_2$ and $g=g_1+g_2$, where $f_1$ and $g_1$ are compactly supported Lipschitz functions, and the norms of $f_2$ and $g_2$ in $L^2(\mu)$ are as small as we wish (say, less than $\varepsilon$). We know that $I_{\delta}(f_1,g_1)\rightarrow I(f_1,g_1)$ as $\delta\rightarrow 0$. Consequently, for each $\varepsilon>0$, $I_{\delta}(f,g)$ can be written as a sum of two terms, the first of which (namely $I_{\delta}(f_1,g_1)$) has a finite limit, and the second term (which is $I_{\delta}(f_1,g_2)+I_{\delta}(f_2,g_1)+I_{\delta}(f_2,g_2)$) has absolute value no greater than $ C_0\varepsilon(3\varepsilon+ \|f\|_{L^2(\mu)}+\|g\|_{L^2(\mu)})$. It follows that the limit as $\delta\rightarrow 0$ of $I_{\delta}(f,g)$ exists. We define this limit to be $I(f,g)=I_{\mu}(f,g)$. From (\ref{Idelt}), we see that $|I(f,g)|\leq C_0\|f\|_{L^2(\mu)}\|g\|_{L^2(\mu)}$. 
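Let us make the penultimate step of this argument explicit. By bilinearity, $$I_{\delta}(f,g)=I_{\delta}(f_1,g_1)+I_{\delta}(f_1,g_2)+I_{\delta}(f_2,g_1)+I_{\delta}(f_2,g_2), $$ and by (\ref{Idelt}) the sum of the last three terms is at most $$C_0\bigl(\|f_1\|_{L^2(\mu)}\varepsilon+\varepsilon\|g_1\|_{L^2(\mu)}+\varepsilon^2\bigr)\leq C_0\varepsilon\bigl(3\varepsilon+\|f\|_{L^2(\mu)}+\|g\|_{L^2(\mu)}\bigr) $$ in absolute value, since $\|f_1\|_{L^2(\mu)}\leq \|f\|_{L^2(\mu)}+\varepsilon$ and $\|g_1\|_{L^2(\mu)}\leq \|g\|_{L^2(\mu)}+\varepsilon$. As $I_{\delta}(f_1,g_1)$ converges when $\delta\rightarrow 0$, we conclude that $$\limsup_{\delta',\delta''\rightarrow 0}|I_{\delta'}(f,g)-I_{\delta''}(f,g)|\leq 2C_0\varepsilon\bigl(3\varepsilon+\|f\|_{L^2(\mu)}+\|g\|_{L^2(\mu)}\bigr), $$ and letting $\varepsilon\rightarrow 0$ verifies the Cauchy criterion for the existence of $\lim_{\delta\rightarrow 0}I_{\delta}(f,g)$.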
Therefore we may apply the Riesz representation theorem to deduce the existence of a (unique) bounded linear operator $\mathcal{C}_{\mu}:L^2(\mu)\rightarrow L^2(\mu)$ such that $$\langle \mathcal{C}_{\mu}(f), g\rangle_{\mu} = I(f,g) \text{ for all }f,g\in L^2(\mu). $$ Having defined an operator $\mathcal{C}_{\mu}$ for any good measure $\mu$, we now want to see what weak continuity properties this operator has. \begin{defn} We say that the sequence $\mu_k$ tends to $\mu$ weakly if, for any $\varphi\in C_0(\mathbb{C})$, $$\int_{\mathbb{C}}\varphi \,d\mu_k \rightarrow \int_{\mathbb{C}}\varphi \,d\mu \text{ as } k\rightarrow \infty. $$ \end{defn} We now recall a standard weak compactness result, which can be found in Chapter 1 of \cite{Mat95} (or any standard text in real analysis). \begin{lem}\label{bookconv} Let $\{\mu_k\}_k$ be a sequence of measures. Suppose that for each compact set $E\subset \mathbb{C}$, $\sup_k \mu_k(E)<\infty$. Then there exists a subsequence $\{\mu_{k_j}\}_{k_j}$ and a measure $\mu$ such that $\mu_{k_j}$ converges to $\mu$ weakly. \end{lem} An immediate consequence of this lemma is that any sequence $\{\mu_k\}_k$ of $C_0$-nice measures has a subsequence that converges weakly to a measure $\mu$. The next lemma shows that the various regularity properties of measures that we consider are inherited by weak limits. \begin{lem}\label{wlnicegood} Suppose that $\mu_k$ converges to $\mu$ weakly. If each measure $\mu_k$ is $C_0$-good with $AD$ regularity constant $c_0$, then the limit measure $\mu$ is also $C_0$-good with $AD$ regularity constant $c_0$. \end{lem} \begin{proof} We shall first check that $\mu$ is AD regular. Let $x\in \text{supp}(\mu)$, $r>0$, and choose $\varepsilon\in (0,r/2)$. Consider a smooth non-negative function $f$, supported in the disc $B(x,\varepsilon)$, with $f\equiv 1$ on $B(x, \tfrac{\varepsilon}{2})$. Then $\int_{\mathbb{C}} fd\mu>0$. Hence, for all sufficiently large $k$, $\int_{\mathbb{C}} f d\mu_k>0$. 
For all such $k$, $B(x,\varepsilon)\cap \operatorname{supp}(\mu_k)\neq \varnothing$, and so there exists $x_k\in B(x,\varepsilon)$ satisfying $\mu_k(B(x_k, r-2\varepsilon))\geq c_0 (r-2\varepsilon)$. As a result, $\mu_k(B(x,r-\varepsilon))\geq c_0(r-2\varepsilon)$. Now let $\varphi\in C_0(\mathbb{C})$ be nonnegative and supported in $B(x,r)$, satisfying $\|\varphi\|_{\infty}\leq 1$ and $\varphi \equiv 1$ on $B(x, r-\varepsilon)$. Then $$\mu(B(x,r))\geq \int_{\mathbb{C}} \varphi d\mu = \lim_{k\rightarrow \infty}\int_{\mathbb{C}} \varphi d\mu_k \geq c_0(r-2\varepsilon). $$ Letting $\varepsilon\rightarrow 0$, we arrive at the desired AD regularity. The property that $\mu$ is $C_0$-nice is easier and left to the reader (it also follows from standard lower-semicontinuity properties of the weak limit). It remains to show that $\mu$ is $C_0$-good. Fix $f,g\in \operatorname{Lip}_0(\mathbb{C})$ and define $H$ and $S$ as before. Note that $K_{\delta}(z-\xi)H(z,\xi)$ is a Lipschitz function in $\mathbb{C}^2$, and has support contained in $S\times S$. Let $U\supset S$ be an open set with $\mu(U)\leq \mu(S)+1$. The (complex valued) Stone-Weierstrass theorem for a locally compact space tells us that the algebra of finite linear combinations of products of functions in $C_0(U)$ is dense in $C_0(U\times U)$ (with respect to the uniform norm in $\mathbb{C}^2$). Let $\varepsilon>0$. There are functions $\varphi_1, \dots, \varphi_n$ and $\psi_1,\dots, \psi_n$, all belonging to $C_0(U)$, such that $|K_{\delta}(z-\xi)H(z,\xi)-\sum_{j=1}^n\varphi_j(z)\psi_j(\xi)|<\varepsilon$ for any $(z,\xi)\in U\times U$. For each $j=1,\dots,n$, we have $$\lim_{k\rightarrow \infty} \iint\limits_{\mathbb{C}\times\mathbb{C}} \varphi_j(z)\psi_j(\xi)d\mu_k(z)d\mu_k(\xi) = \iint\limits_{\mathbb{C}\times\mathbb{C}} \varphi_j(z)\psi_j(\xi)d\mu(z)d\mu(\xi). 
$$ It therefore follows that $$\limsup_{k\rightarrow \infty}| I_{\delta, \mu_k}(f,g) - I_{\delta, \mu}(f,g)|\leq \varepsilon (\limsup_{k\rightarrow \infty}\mu_k(U)^2+\mu(U)^2).$$ On the other hand, $\mu_k$ is $C_0$-nice, and so $\mu_k(U)\leq C(f,g)$. Since $\varepsilon>0$ was arbitrary, we conclude that $I_{\delta, \mu_k}(f,g) \rightarrow I_{\delta, \mu}(f,g)$ as $k\rightarrow \infty$. As a result of this convergence, we have that $$|I_{\delta, \mu}(f,g)|\leq C_0\liminf_{k\rightarrow \infty}\bigl(\|f\|_{L^2(\mu_k)}\|g\|_{L^2(\mu_k)}\bigl). $$ But since both $|f|^2$ and $|g|^2$ are in $C_0(\mathbb{C})$, the right hand side of this inequality equals $\|f\|_{L^2(\mu)}\|g\|_{L^2(\mu)}$. We now wish to appeal to the density of $\operatorname{Lip}_0(\mathbb{C})$ in $L^2(\mu)$ to extend this inequality to all $f,g\in L^2(\mu)$. Let $R>0$. As $\mu$ is $C_0$-nice, we saw in Section \ref{statementsec} that $\mathcal{C}_{\mu,\delta}: L^2(\mu)\rightarrow L^2(B(0,R),\mu)$. But then, since the space of Lipschitz functions compactly supported in $B(0,R)$ is dense in $L^2(B(0,R),\mu)$, we conclude that $||\mathcal{C}_{\mu,\delta}||_{L^2(\mu)\rightarrow L^2(B(0,R),\mu)}\leq C_0$. Finally, taking the limit as $R\rightarrow \infty$, the monotone convergence theorem guarantees that $\|\mathcal{C}_{\mu, \delta}\|_{L^2(\mu)\rightarrow L^2(\mu)}\leq C_0$, and hence $\mu$ is $C_0$-good.\end{proof} The proof of the next lemma is left as an exercise. \begin{lem}\label{ADsupp} Suppose that $\mu_k$ is a sequence of $c_0$ $AD$-regular measures converging weakly to a measure $\mu$. If $z_k\in \operatorname{supp}(\mu_k)$ with $z_k\rightarrow z$ as $k\rightarrow \infty$, then $z\in \operatorname{supp}(\mu)$. \end{lem} Our last task is to check that the bilinear form $I_{\mu_k}$ has nice weak convergence properties. For this, let $f,g\in \operatorname{Lip}_0(\mathbb{C})$. 
For $\delta>0$, we write \begin{equation}\begin{split}\nonumber|I_{\mu_k}(f,g) - I_{\mu}(f,g)| \leq & \, |I_{\mu_k}(f,g) -I_{\delta, \mu_k}(f,g)|+|I_{\delta, \mu_k}(f,g)-I_{\delta, \mu}(f,g)| \\ &+ |I_{\delta, \mu}(f,g)- I_{\mu}(f,g)|. \end{split}\end{equation} The first and third terms are bounded by $C(f,g)\delta$. The second term converges to $0$ as $k\rightarrow \infty$. Therefore $$\limsup_{k\rightarrow \infty}|I_{\mu_k}(f,g) - I_{\mu}(f,g)| \leq C(f,g)\delta. $$ But $\delta>0$ was arbitrary, and so $I_{\mu_k}(f,g)$ converges to $I_{\mu}(f,g)$ as $k\rightarrow \infty$. \section{Riesz Systems}\label{rieszsyssec} Throughout this section we fix a $C_0$-nice measure $\mu$. A system of functions $\psi_Q$ $(Q\in \mathcal{D})$ is called a $C$-Riesz system if $\psi_Q \in L^2(\mu)$ for each $Q$, and \begin{equation}\label{Rieszsys}\Bigl|\Bigl| \sum_{Q\in \mathcal{D}} a_Q \psi_Q \Bigl|\Bigl|^2_{L^2(\mu)} \leq C\sum_{Q\in \mathcal{D}} |a_Q|^2, \end{equation} for every sequence $\{a_Q\}_{Q\in \mathcal{D}}$. By a simple duality argument, we see that if $\psi_Q$ is a $C$-Riesz system, then $$\sum_{Q\in \mathcal{D}} \bigl|\langle f, \psi_Q\rangle_{\mu}\bigl|^2 \leq C \|f\|^2_{L^2(\mu)}, \text{ for any }f\in L^2(\mu). $$ Suppose now that with each square $Q\in \mathcal{D}$, we associate a set $\Psi_Q$ of $L^2(\mu)$ functions. We say that $\Psi_Q$ $(Q\in \mathcal{D})$ is a $C$-Riesz family if, for any choice of functions $\psi_Q \in \Psi_Q$, the system $\psi_Q$ forms a $C$-Riesz system. We now introduce a particularly useful Riesz family. Suppose that $\mu$ is a $C_0$-nice measure. Fix $A>1$, and define \begin{equation}\begin{split}\nonumber\Psi^{\mu}_{Q,A}\! \!= \!\!\Bigl\{ \psi: \operatorname{supp}(\psi)\!\subset B(z_Q, A\ell(Q)),& \,\|\psi\|_{\operatorname{Lip}}\leq \ell(Q)^{-3/2},\int_{\mathbb{C}}\! \psi \,d\mu=0\Bigl\}. \end{split}\end{equation} \begin{lem} For any $A>1$, $\Psi^{\mu}_{Q,A}$ is a $C$-Riesz family, with constant $C=C(C_0, A)$. 
\end{lem} \begin{proof}For each $Q\in \mathcal{D}$, pick a function $\psi_Q \in \Psi^{\mu}_{Q,A}$. Then we have $$\|\psi_Q\|_{\infty} \leq \|\psi_{Q}\|_{\operatorname{Lip}}\cdot \text{diam}(\operatorname{supp}(\psi_Q))\leq \ell(Q)^{-3/2} \cdot 2A\ell(Q)\leq CA\ell(Q)^{-1/2}, $$ and $$\|\psi_Q\|_{L^2(\mu)}^2\leq ||\psi_Q||_{L^{\infty}}^2\mu(B(z_Q,A\ell(Q)))\leq CA^3.$$ Now, if $Q', Q''\in \mathcal{D}$ with $\ell(Q')\leq \ell(Q'')$, then $\langle \psi_{Q'}, \psi_{Q''}\rangle_{\mu} =0$ provided that $B(z_{Q'}, A\ell(Q'))\cap B(z_{Q''}, A\ell(Q''))=\varnothing.$ If $B(z_{Q'}, A\ell(Q'))$ intersects $B(z_{Q''}, A\ell(Q''))$, we instead have the bound $$|\langle \psi_{Q'}, \psi_{Q''}\rangle_{\mu}|\leq CA^3 \Bigl(\frac{\ell(Q')}{\ell(Q'')}\Bigl)^{3/2}. $$ Indeed, note that $\|\psi_{Q'}\|_{L^1(\mu)}\leq ||\psi_{Q'}||_{L^{\infty}}\mu(B(z_{Q'}, A\ell(Q')))\leq CA^2\ell(Q')^{1/2}$, while the oscillation of $\psi_{Q''}$ on the set $B(z_{Q'}, A\ell(Q'))$ (which contains the support of $\psi_{Q'}$) is no greater than $\tfrac{A\ell(Q')}{\ell(Q'')^{3/2}}$. By multiplying these two estimates we arrive at the desired bound on the absolute value of the inner product. Consider a sequence $\{a_Q\}_Q \in \ell^2(\mathcal{D})$. Then $$\Bigl|\Bigl| \sum_{Q\in \mathcal{D}} a_Q \psi_Q \Bigl|\Bigl|^2_{L^2(\mu)} \leq 2\sum_{\substack{Q', Q''\in \mathcal{D} :\\ \ell(Q')\leq \ell(Q'')}}|a_{Q'}||a_{Q''}||\langle \psi_{Q'}, \psi_{Q''}\rangle_{\mu}|. $$ Inserting our bounds on the inner products into this sum, we see that we need to bound the sum $$CA^3\sum_{\substack{\ell(Q')\leq \ell(Q''), \\ B(z_{Q'}, A\ell(Q'))\cap B(z_{Q''}, A\ell(Q''))\neq \varnothing}}\!\!\!|a_{Q'}\|a_{Q''}|\Bigl(\frac{\ell(Q')}{\ell(Q'')}\Bigl)^{3/2}. $$ (Since all sums involving squares will be taken over the lattice $\mathcal{D}$, we will not write this explicitly from now on.) 
Using Cauchy's inequality, we estimate $$|a_{Q'}||a_{Q''}|\Bigl(\frac{\ell(Q')}{\ell(Q'')}\Bigl)^{3/2}\leq \frac{|a_{Q'}|^2}{2}\Bigl(\frac{\ell(Q')}{\ell(Q'')}\Bigl)^{1/2}+ \frac{|a_{Q''}|^2}{2}\Bigl(\frac{\ell(Q')}{\ell(Q'')}\Bigl)^{5/2}. $$ It therefore suffices to estimate two double sums: $$ I = \sum_{Q'} |a_{Q'}|^2 \sum_{\substack{Q'':\ell(Q')\leq \ell(Q''), \\ B(z_{Q'}, A\ell(Q'))\cap B(z_{Q''}, A\ell(Q''))\neq \varnothing}} \Bigl(\frac{\ell(Q')}{\ell(Q'')}\Bigl)^{1/2}, $$ and $$ II = \sum_{Q''}|a_{Q''}|^2 \sum_{\substack{Q':\ell(Q')\leq \ell(Q''), \\ B(z_{Q'}, A\ell(Q'))\cap B(z_{Q''}, A\ell(Q''))\neq \varnothing}} \Bigl(\frac{\ell(Q')}{\ell(Q'')}\Bigl)^{5/2}. $$ For each dyadic length $\ell$ no smaller than $\ell(Q')$, there are at most $CA^2$ squares $Q''$ of side length $\ell$ for which $B(z_{Q''}, A\ell)$ has non-empty intersection with $B(z_{Q'}, A\ell(Q'))$. Hence $$I\leq \sum_{Q'} |a_{Q'}|^2 \sum_{k\geq 0} CA^2 2^{-k/2}\leq CA^2 \sum_Q|a_Q|^2.$$ Concerning $II$, all the relevant squares $Q'$ in the inner sum are contained in the disc $B(z_{Q''}, 3A\ell(Q''))$. Therefore, at scale $\ell$ there are at most $CA^2 \bigl(\frac{\ell(Q'')}{\ell}\bigl)^{2}$ squares $Q'$ of side length $\ell$ that can contribute to the inner sum. As a result, $$II \leq CA^2 \sum_{Q''}|a_{Q''}|^2\sum_{k\geq 0} 2^{-k/2}\leq CA^2\sum_Q|a_Q|^2. $$ Combining our bounds, we see that $\Psi^{\mu}_{Q,A}$ is a Riesz family, with Riesz constant $C(C_0)A^5$.\end{proof} \section{Bad squares and uniform rectifiability} In this section we identify a local property of the support of a measure, which ensures that the measure is uniformly rectifiable. The mathematics in this section is largely due to David, Jones, and Semmes, see \cite{DS} Chapter 2.1, and is simpler than Jones' geometric Traveling Salesman theory \cite{Jon}, which was used in \cite{MMV}. Fix a $C_0$-nice measure $\mu$, which is AD-regular with regularity constant $c_0$. Set $E=\operatorname{supp}(\mu)$. 
\subsection{The construction of a Lipschitz mapping} We will begin by constructing a certain graph. For our purposes, a \textit{graph} $\Gamma=(\mathcal{N}, \mathcal{E})$ is a set of points $\mathcal{N}$ (the vertices), endowed with a collection of line segments $\mathcal{E}$ (the edges) where each segment has its end-points at vertices. A \textit{connected component} of the graph is a maximal subset of vertices that can be connected through the edges. For example, the graph depicted in Figure 1 below has two connected components. The \textit{distance between connected components} of a graph is measured as the distance between the relevant sets of vertices. Therefore, the distance between the components of the graph depicted in Figure 1 is the distance between the vertices labeled $p$ and $q$. \begin{figure}[h]\label{comppic} \centering \includegraphics[height = 30mm]{CauchyComps2} \caption[A graph.]{An example of a graph consisting of two connected components.} \end{figure} \begin{defn}For a graph $\Gamma=(\mathcal{N}, \mathcal{E})$, and a square $Q$, we define $\Gamma_Q$ to be the subgraph with vertex set $\mathcal{N}\cap 7Q$, endowed with the edges from $\mathcal{E}$ connecting those vertices in $7Q$.\end{defn} Let $\tau \in (0,1)$. Fix $P\in \mathcal{D}$ (this square is to be considered as the viewing window in the definition of uniform rectifiability). Choose a (small) dyadic fraction $\ell_0$ with $\ell_0<\ell(P)$. We shall construct a graph adapted to $P$ inductively. Set $\mathcal{N}$ to be a maximal $\tau \ell_0$ separated subset of $E$. Note that $\mathcal{N}$ forms a $\tau\ell_0$ net of $E$. \textbf{\emph{The base step.}} For each square $Q\in \mathcal{D}$ with $\ell(Q)=\ell_0$ and $3Q\cap \mathcal{N}\neq\varnothing$, fix a point which lies in $3Q\cap\mathcal{N}$. Then join together every point of $\mathcal{N} \cap 3Q$ to this fixed point by line segments, as illustrated in the figure below. 
In $3Q$, there are at most $C\tau^{-2}$ points of $\mathcal{N}$, and so the total length of the line segments in $3Q$ is at most $C\tau^{-2}\ell_0$. \begin{figure}[h]\label{basesteppic} \centering \includegraphics{CauchyGraphBase} \caption[A graph.]{The base step in the construction applied in $3Q$.} \end{figure} We thereby form the graph $\Gamma_{\ell_0}(\ell_0)$ comprised of the vertex set $\mathcal{N}$, and the set of line segments $\mathcal{E}_{\ell_0}(\ell_0)$ obtained by carrying out the above procedure for all squares $Q\in \mathcal{D}$ with $\ell(Q)=\ell_0$. This is the base step of the construction. \textbf{\emph{The inductive step.}} Let $\ell$ be a dyadic fraction no smaller than $\ell_0$. Suppose that we have constructed the graph $\Gamma_{\ell_0}(\ell) = (\mathcal{N}, \mathcal{E}_{\ell_0}(\ell))$. The graph $\Gamma_{\ell_0}(2\ell)$ is set to be the pair $(\mathcal{N}, \mathcal{E}_{\ell_0}(2\ell))$, where $\mathcal{E}_{\ell_0}(2\ell)$ is obtained by taking the union of $\mathcal{E}_{\ell_0}(\ell)$ with the collection of line segments obtained by performing the following algorithm: \textit{For every square $Q\in \mathcal{D}$ with $\ell(Q)=2\ell$, consider the graph $\Gamma = (\Gamma_{\ell_0}(\ell))_Q$. If $\Gamma$ has at least two components that intersect $3Q$, then for each such component, choose a vertex that lies in its intersection with $\mathcal{N}\cap 3Q$. Fix a point in $3Q\cap \mathcal{N}$, and join each of the chosen points to this fixed point with an edge}. \begin{figure}[h]\label{inductpic} \centering \includegraphics{CauchyGraphInductAlg} \caption[The induction algorithm.]{The induction algorithm applied to a square $Q$. The grey edges indicate the edges of $\Gamma_{\ell_0}(\ell)$ not included in the subgraph $\Gamma=(\Gamma_{\ell_0}(\ell))_Q$. The dashed lines indicate the edges added by applying the algorithm. Note that in this case the graph $\Gamma$ has seven components, four of which intersect $3Q$. 
The fixed point in $3Q\cap \mathcal{N}$ is denoted by $a$.} \end{figure} We carry out the inductive procedure for $\ell=\ell_0, \dots, \tfrac{\ell(P)}{2}$, and thereby obtain the graph $\Gamma_{\ell_0}(\ell(P))$. To continue our analysis, first note the following elementary fact: \begin{lem}\label{dyadicfact} Let $Q\in \mathcal{D}$ with $\ell(Q)=2\ell$. For any two points $z_1, z_2\in 4Q$ with $|z_1-z_2|<\ell$, there is a dyadic square $Q'$ of sidelength $\ell$, such that $7Q'\subset 7Q$, and $z_1,z_2\in 3Q'$.\end{lem} \begin{proof} Pick the square $Q'$ to be the dyadic square of side length $\ell$ containing $z_1$. Then $\text{dist}(Q', \mathbb{C}\backslash 3Q') =\ell$, so $z_2\in 3Q'$. Since $\ell(Q)=2\ell$, we have that $4Q$ is a union of dyadic squares of side-length $\ell$. Therefore $Q'$ is contained in $4Q$. As the square annulus $7Q\backslash 4Q$ is of width $\tfrac{3}{2}\ell(Q)=3\ell$, we conclude that $7Q'\subset 7Q$.\end{proof} We shall use this lemma (or rather a weaker statement with $4Q$ replaced by $3Q$) to deduce the following statement. \begin{cla}\label{compsep} For each $\ell\geq \ell_0$, and $Q\in \mathcal{D}$ with $\ell(Q)=2\ell$, any two connected components of $(\Gamma_{\ell_0}(\ell))_Q$ which intersect $3Q$ are $\ell$-separated in $3Q$. \end{cla} \begin{proof}First suppose $\ell=\ell_0$. Let $z_1,z_2\in 3Q$ with $|z_1-z_2|< \ell_0$. Choose $Q'$ as in Lemma \ref{dyadicfact}. We have that $z_1,z_2\in 3Q'$, and $3Q'\subset 7Q$. But then $z_1$ and $z_2$ must have been joined when the base step rule was applied to the square $Q'$, and so they lie in the same component of $(\Gamma_{\ell_0}(\ell))_Q$. Now suppose that $\ell>\ell_0$, and $z_1,z _2\in3Q$ with $|z_1-z_2|<\ell$. Again, let $Q'$ be the square of Lemma \ref{dyadicfact}. The induction step applied at level $\tfrac{\ell}{2}$ to the square $Q'$ ensures that $z_1$ and $z_2$ are joined by edges in $\mathcal{E}_{\ell_0}(\ell)$ that are contained in $7Q'\subset 7Q$. 
Therefore, $z_1$ and $z_2$ lie in the same component of $(\Gamma_{\ell_0}(\ell))_Q$.\end{proof} \begin{cla}\label{countcla} There exists a constant $C>0$, such that for each $\ell\geq \ell_0$, and for every $Q\in \mathcal{D}$ with $\ell(Q)=2\ell$, the iterative procedure applied to $Q$ increases the length of $\Gamma_{\ell_0}(2\ell)$ by at most $C\ell$.\end{cla} \begin{proof} From Claim \ref{compsep}, we see that the graph $(\Gamma_{\ell_0}(\ell))_Q$ can have at most $C$ components which have non-empty intersection with $3Q$. Consequently, the application of the inductive procedure can generate at most $C$ new edges, each of which has length no greater than $\sqrt{2}\cdot 6\ell$. The claim follows. \end{proof} \textbf{\emph{Adapting the graph to $P$.}} We begin with another observation about the induction algorithm. Note that any two vertices in $3P\cap\mathcal{N}$ can be joined by edges in $\mathcal{E}_{\ell_0}(\ell(P))$ that are contained in $7P$. Thus, the graph $(\Gamma_{\ell_0}(\ell(P)))_P$ has only one connected component which intersects $3P$, and we denote this component by $\Gamma$. Let us denote by $L = L(\ell_0)$ the total length of $\Gamma$. By Euler's theorem, there is a walk through $\Gamma$ which visits each vertex of $\Gamma$ at least once, and travels along each edge at most twice. By a suitable parametrization of this walk, we arrive at the following lemma: \begin{lem}\label{walklem} There exists $F:[0,1]\rightarrow \mathbb{C}$, with $\|F\|_{\operatorname{Lip}}\leq 2L$, and such that $F([0,1])\supset \mathcal{N}\cap 3P$. \end{lem} If we have a suitable control over $L(\ell_0)$ independently of $\ell_0$, then $E\cap P$ is contained in the image of a Lipschitz function: \begin{lem}\label{lipfun} Suppose that there exists $M>0$ such that $L(\ell_0)\leq M\ell(P)$ for every $\ell_0>0$. Then there exists $F:[0,1]\rightarrow \mathbb{C}$, such that $\|F\|_{\operatorname{Lip}}\leq 2M\ell(P)$, and $E\cap P \subset F([0,1])$. 
\end{lem} \begin{proof} Let $\ell_0 = 2^{-k}$. Let $F_k$ denote the function of Lemma \ref{walklem}. Then $F_k([0,1])$ is a $\tau 2^{-k}$-net of $E\cap P$. By appealing to the Arzela-Ascoli theorem, we see that there is a subsequence of the $F_k$ (which we again denote by $F_k$), converging uniformly to some limit function $F$. The function $F$ is Lipschitz continuous with Lipschitz norm no greater than $2M\ell(P)$. Now, for any $x\in E\cap P$, there exists a sequence $\{x_k\}_k$ where $x_k\in F_k([0,1])$, and $|x-x_k|<\tau 2^{-k}$. Take $t_k\in [0,1]$ with $F_k(t_k)=x_k$. There is a convergent subsequence of $\{t_k\}_k$ which converges to some $t\in [0,1]$. But then $F(t)=x$, and the proof is complete. \end{proof} We shall now estimate $L(\ell_0)$ in terms of the total side length of squares where the induction step has been carried out. Note that only the base and inductive steps applied to the dyadic squares $Q$ contained in $7P$ can contribute to the length. We shall first estimate the contribution to the length by the base step. \begin{cla}\label{basesteplength} The contribution to the length of $\Gamma$ from the base step is no greater than $ C\tau^{-2}\ell(P)$. \end{cla} \begin{proof}Let $N$ denote the number of dyadic sub-squares of $7P$ with side length $\ell_0$ where the base step has been carried out. For any such square $Q$, we must have $3Q\cap \operatorname{supp}(\mu) \neq \varnothing$. From the AD-regularity of $\mu$, it follows that $\mu(4Q)\geq c_0\ell(Q) = c_0\ell_0$. Hence $$c_0\ell_0 N \leq \sum_{Q\in \mathcal{D}:\,Q\subset 7P , \,\ell(Q) = \ell_0}\mu(4Q) \leq C\mu(C P)\leq C\ell(P),$$ and therefore $N\leq C\tfrac{\ell(P)}{\ell_0}$. 
Consequently, the contribution to the length of $\Gamma$ from the base step is no greater than $C\tau^{-2}\ell_0 \tfrac{\ell(P)}{\ell_0}$, as required.\end{proof} We now denote by $\mathcal{Q}(P,\ell_0)$ the collection of dyadic squares $Q\in \mathcal{D}$ such that $ \ell(Q)\in[\ell_0, \ell(P)]$, $Q\subset 7P$, and the inductive step has been carried out non-vacuously in $Q$ at scale $\tfrac{\ell(Q)}{2}$. Claim \ref{countcla} guarantees that for each $Q\in \mathcal{Q}(P, \ell_0)$, an application of the inductive procedure increases the length $L(\ell_0)$ by no more than $C\ell(Q)$. Combining this observation with Claim \ref{basesteplength}, we infer the following bound: \begin{equation}\label{lbound}L(\ell_0) \leq C\tau^{-2}\ell(P) + C\sum_{Q\in \mathcal{Q}(P,\ell_0)} \ell(Q). \end{equation} \subsection{Bad squares} Given the construction above, we would like to find a convenient way of identifying whether a square has been used in the inductive procedure at some scale. Since we don't want these squares to occur very often, we call them \textit{bad squares}. \begin{defn} We say that $Q\in \mathcal{D}$ is a $(\mu)$-bad square if there exist $\zeta, \xi \in B(z_Q, 10\ell(Q))\cap \operatorname{supp}(\mu)$, such that $|\zeta-\xi|\geq \ell(Q)/2$, and there exists $z\in[\zeta,\xi]$ such that $B(z, \tau\ell(Q))\cap E=\varnothing$. \end{defn} We now justify the use of this definition: \begin{lem} Suppose that $\tau< \tfrac{1}{16}$. Suppose that the inductive algorithm has been applied to $Q\in \mathcal{D}$. Then $Q$ is a bad square. \end{lem} \begin{proof} If the inductive algorithm has been applied, then there is a graph $\Gamma=(\mathcal{N},\mathcal{E})$\footnote{In the notation of the previous section, $\Gamma = \Gamma_{\ell_0}\bigl(\tfrac{\ell(Q)}{2}\bigl)$, for some $\ell_0\leq \tfrac{\ell(Q)}{2}$.}, with the following properties: \begin{enumerate} \item The set $\mathcal{N}$ forms a $\tfrac{\tau\ell(Q)}{2}$ net of $E$. 
\item For every dyadic square $Q'$ with $\ell(Q')<\ell(Q)$ and $7Q'\subset 7Q$, we have that if $z_1,z_2\in 3Q'\cap \mathcal{N}$, then $z_1$ and $z_2$ lie in the same component of $\Gamma_Q$. \item The connected components of $\Gamma_Q$ that intersect $3Q$ are at least $\tfrac{\ell(Q)}{2}$ separated in $3Q$. \end{enumerate} (In fact, property (2) implies property (3), as was seen in Claim \ref{compsep}). By assumption, there exist two points $\zeta$ and $\xi$ in $3Q\cap \mathcal{N}$ that lie in different components of $\Gamma_Q$. Then $|\zeta-\xi|\geq\tfrac{\ell(Q)}{2}$. Consider the line segment $[\zeta,\xi]$. Cover this segment with overlapping discs of radius $\tau\ell(Q)$, such that the centre of each disc lies in the line segment $[\zeta, \xi]$ (see Figure 4). \begin{figure}[t]\label{BadSquarePic} \centering \includegraphics[width=120mm]{badsquarediscs} \caption[The discs.]{The intersecting discs of radius $\tau\ell(Q)$, along with their concentric doubles. The cloud of points represents those points of $\mathcal{N}$.} \end{figure} Suppose that every disc has positive $\mu$ measure. If $\tau< \tfrac{1}{16}$, the concentric double of each disc is contained in $4Q$. Furthermore, in the concentric double of each disc, there must be a point from $\mathcal{N}$. We therefore form a chain of points in $\mathcal{N}\cap 4Q$, with every consecutive pair of points in the chain separated by a distance of at most $8\tau\ell(Q)<\ell(Q)/2$. Furthermore, the first point in the chain is within a distance of $\ell(Q)/2$ of $\zeta$, and the last point in the chain is no further than $\ell(Q)/2$ from $\xi$. Therefore, Lemma \ref{dyadicfact} ensures that each consecutive pair of points in the chain is contained in the concentric triple of some dyadic square $Q'$ with $\ell(Q')<\ell(Q)$ and $7Q'\subset 7Q$. But then property (2) yields that each such pair lies in the same component of $\Gamma_Q$. As a result, $\zeta$ and $\xi$ also lie in the same component of $\Gamma_Q$. 
From this contradiction, we see that one of the discs of radius $\tau\ell(Q)$ has zero measure, which implies that $Q$ is a bad square. \end{proof} Now let $\mathcal{B}^{\mu}$ denote the set of those squares $Q\in \mathcal{D}$ that are bad. To prove Theorem \ref{thm1}, it suffices to prove the following proposition: \begin{prop} \label{carlreduction} Suppose that $\mu$ is a $C_0$-good measure with AD regularity constant $c_0$. There is a constant $C=C(A,C_0,c_0)>0$ such that for each $P\in \mathcal{D}$, \begin{equation}\label{carl}\sum_{Q\in \mathcal{B}^{\mu}, \,Q\subset P} \ell(Q) \leq C\ell(P). \end{equation} \end{prop} Let us see how Theorem \ref{thm1} follows from this proposition. Fix $P\in \mathcal{D}$, and construct $\Gamma_{\ell_0}(\ell(P))$ for $\ell_0<\ell(P)$. From Proposition \ref{carlreduction}, the bound (\ref{lbound}) for the length $L(\ell_0)$ is no more than $M\ell(P)$, where $M$ can be chosen to depend on $A,c_0, C_0$, and $\tau$ (in particular, $M$ can be chosen independently of $P$). But now Lemma \ref{lipfun} yields the existence of a function $F:[0,1]\rightarrow \mathbb{C}$ with Lipschitz norm no greater than $M\ell(P)$, such that $E\cap P\subset F([0,1])$. This is the required uniform rectifiability. The condition (\ref{carl}) is very well known in harmonic analysis, and a family of squares $\mathcal{B}^{\mu}$ satisfying (\ref{carl}) is often referred to as a \textit{Carleson family}. The best constant $C>0$ such that (\ref{carl}) holds for all $P\in \mathcal{D}$ is called the \textit{Carleson norm} of $\mathcal{B}^{\mu}$. \section{Bad squares and the Riesz family $\{\Psi^{\mu}_{Q,A}\}_Q$} Fix a $C_0$-good measure $\mu$ with AD-regularity constant $c_0>0$. Choose $A'>1$, with $A'\geq A$. Recall the Riesz family $\Psi^{\mu}_{Q,A}$ introduced in Section 5. 
For each $Q\in \mathcal{D}$, we define $$\Theta_{A,A'}(Q)=\Theta_{A,A'}^{\mu}(Q) = \inf_{F\supset B(z_Q, A'\ell(Q))} \sup_{\psi\in\Psi^{\mu}_{Q,A}} \ell(Q)^{-1/2}|\langle \mathcal{C}_{\mu}(\chi_F), \psi\rangle_{\mu}|. $$ Consider a fixed $P\in \mathcal{D}$. Then for each $Q\subset P$ there exists a function $\psi_Q\in\Psi^{\mu}_{Q,A}$ such that $\Theta_{A,A'}(Q)^2\ell(Q) \leq 2 |\langle \mathcal{C}_{\mu}(\chi_{B(z_P,2 A'\ell(P))}), \psi_Q\rangle_{\mu}|^2 $ (note here that $B(z_Q, A'\ell(Q))\subset B(z_P, 2A'\ell(P))$ whenever $Q\subset P$). Hence $$\sum_{Q\subset P} \Theta_{A,A'}(Q)^2\ell(Q) \leq 2\sum_{Q\subset P} |\langle \mathcal{C}_{\mu}(\chi_{B(z_P, 2A'\ell(P))}), \psi_Q\rangle_{\mu}|^2. $$ Since $\psi_Q$ ($Q\in \mathcal{D}$) forms a $C(C_0,A)$-Riesz system, the right hand side of this inequality is bounded by $C(C_0,A)\|\mathcal{C}_{\mu}(\chi_{B(z_P, 2A'\ell(P))})\|_{L^2(\mu)}^2$. As $\mu$ is $C_0$-good, this quantity is in turn bounded by $C(C_0,A)\mu(B(z_P, 2A'\ell(P)))$, which is at most $ C(C_0,A,A')\ell(P)$. Therefore $$\sum_{Q\subset P}\Theta_{A,A'}(Q)^2\ell(Q)\leq C(C_0, A,A')\ell(P). $$ As an immediate corollary of this discussion, we arrive at the following result: \begin{lem} Let $\gamma>0$. Consider the set $\mathcal{F}_{\gamma}$ of dyadic squares $Q$ satisfying $\Theta_{A,A'}(Q)>\gamma$. Then $\mathcal{F}_{\gamma}$ is a Carleson family, with Carleson norm bounded by $C(C_0,A,A')\gamma^{-2}$. \end{lem} In order to prove Proposition \ref{carlreduction} (from which Theorem \ref{thm1} follows), it therefore suffices to prove the following proposition: \begin{prop}\label{badlower} Suppose $\mu$ is a $C_0$-good measure with AD regularity constant $c_0>0$. There exist constants $A,A'>1$, and $\gamma>0$, such that for any square $Q\in \mathcal{B}^{\mu}$, $$\Theta^{\mu}_{A,A'}(Q)\geq \gamma. $$ \end{prop} We end this section with a simple remark about scaling. 
\begin{rem}[Scaling Remark]\label{scalerem} Fix a square $Q$, a function $\psi\in \Psi_{Q,A}^{\mu}$, and a compact set $F\subset \mathbb{C}$. For $z_0\in \mathbb{C}$, set $\widetilde{\mu}(\cdot) = \tfrac{1}{\ell(Q)} \mu(\ell(Q)\cdot +z_0)$, $\widetilde\psi(\cdot) = \ell(Q)^{1/2}\psi(\ell(Q)\cdot + z_0)$ and $\widetilde{F} = \tfrac{1}{\ell(Q)}(F-z_0)$. Then $||\widetilde{\psi}||_{\operatorname{Lip}}\leq 1$, $\operatorname{supp}(\widetilde\psi)\subset B(\tfrac{z_{Q}-z_0}{\ell(Q)}, A)$, and $$\langle \mathcal{C}_{\widetilde{\mu}}(\chi_{\widetilde{F}}), \widetilde\psi\rangle_{\widetilde\mu} = \ell(Q)^{-1/2}\langle \mathcal{C}_{\mu}(\chi_F), \psi\rangle_{\mu}.$$ \end{rem} \section{Reflectionless measures}\label{reflmeasintro} In this section, we explore what happens if Proposition \ref{badlower} fails. To do this, we shall need a workable definition of the Cauchy transform operator of a good measure acting on the constant function $1$. Suppose that $\nu$ is a $C_0$-good measure with $0\not\in \operatorname{supp}(\nu)$. \subsection{The function $\widetilde{\mathcal{C}}_{\nu}(1)$} Let us begin with an elementary lemma. \begin{lem}\label{l1awayfromsupp} Suppose that $\sigma$ is a $C_0$-nice measure with $0\not\in\operatorname{supp}(\sigma)$. Let $z\in \mathbb{C}$ with $z\not\in\text{supp}(\sigma)$. Set $d_0 = \text{dist}(\{0,z\}, \operatorname{supp}(\sigma))$. Then $$\int_{\mathbb{C}}\Bigl|\frac{1}{z-\xi}+\frac{1}{\xi}\Bigl| d\sigma(\xi) \leq \frac{C(C_0)|z|}{d_0}. $$ \end{lem} \begin{proof} Note the estimate $$\int_{\mathbb{C}} \Bigl|\frac{1}{z-\xi} + \frac{1}{\xi}\Bigl|d\sigma(\xi) \leq \frac{2}{d_0} \sigma(B(0, 2|z|)) +2\int_{\mathbb{C}\backslash B(0, 2|z|)} \frac{|z|}{|\xi|^2}d\sigma(\xi). $$ The first term on the right hand side has size no greater than $\tfrac{2C_0|z|}{d_0}$. 
Since the domain of integration in the second term can be replaced by $\mathbb{C}\backslash B(0, \max(d_0,2|z|))$, Lemma \ref{tailest} guarantees that the second integral is bounded by $\tfrac{C|z|}{\max(2|z|,d_0)}$. \end{proof} For $z\not\in \operatorname{supp}(\nu)$, define \begin{equation}\label{C1offsupp}\widetilde{\mathcal{C}}_{\nu}(1)(z) = \int_{\mathbb{C}}\Bigl[\frac{1}{z-\xi} + \frac{1}{\xi}\Bigr]d\nu(\xi) = \int_{\mathbb{C}}[K(z-\xi)-K(-\xi)]d\nu(\xi). \end{equation} Lemma \ref{l1awayfromsupp} guarantees that this integral converges absolutely. To extend the definition to the support of $\nu$, we shall follow a rather standard path. We shall initially define $\widetilde{\mathcal{C}}_{\nu}(1)$ as a distribution, before showing it is a well-defined function $\nu$-almost everywhere. Recall from Section \ref{weaklims} how we interpret $\mathcal{C}_{\nu}$ as a bounded operator in $L^2(\nu)$. Fix $\psi\in \operatorname{Lip}_0(\mathbb{C})$. Choose $\varphi\in \operatorname{Lip}_0(\mathbb{C})$ satisfying $\varphi\equiv 1$ on a neighbourhood of the support of $\psi$. Then define \begin{equation}\begin{split}\label{C1defn}\langle\widetilde{\mathcal{C}}_{\nu}(1),&\psi\rangle_{\nu} = \langle\mathcal{C}_{\nu}(\varphi),\psi\rangle_{\nu}- \mathcal{C}_{\nu}(\varphi)(0)\cdot\!\int_{\mathbb{C}}\psi d\nu\\ &+\int_{\mathbb{C}}\psi(z)\!\!\int_{\mathbb{C}}(1-\varphi(\xi))\bigl[K(z-\xi)\!-\!K(-\xi)\bigr] d\nu(\xi)d\nu(z). \end{split}\end{equation} Note that Lemma \ref{l1awayfromsupp}, applied with $\sigma = |1-\varphi|\cdot\nu$, yields that $$\sup_{z\in \operatorname{supp}(\psi)}\int_{\mathbb{C}}|(1-\varphi(\xi))|\cdot |K(z-\xi)-K(-\xi)|d\nu(\xi)<\infty. $$ Therefore the inner product in (\ref{C1defn}) is well-defined. We now claim that this inner product is independent of the choice of $\varphi$. 
To see this, let $\varphi_1$ and $\varphi_2$ be two compactly supported Lipschitz continuous functions that are both identically equal to $1$ on some neighbourhood of $\operatorname{supp}(\psi)$. If $z\in \operatorname{supp}(\psi)$, then \begin{equation}\begin{split}\nonumber\int_{\mathbb{C}}&(1-\varphi_1(\xi))\Bigl[\frac{1}{z-\xi}+\frac{1}{\xi}\Bigr]d\nu(\xi)-\int_{\mathbb{C}}(1-\varphi_2(\xi))\Bigl[\frac{1}{z-\xi}+\frac{1}{\xi}\Bigr]d\nu(\xi)\\ & = \int_{\mathbb{C}}(\varphi_2(\xi)-\varphi_1(\xi))\Bigl[\frac{1}{z-\xi}+\frac{1}{\xi}\Bigr]d\nu(\xi)\\ & =\int_{\mathbb{C}}(\varphi_2(\xi)-\varphi_1(\xi))K(z-\xi)d\nu(\xi)+\mathcal{C}_{\nu}(\varphi_1)(0) - \mathcal{C}_{\nu}(\varphi_2)(0). \end{split}\end{equation} (All integrals in this chain of equalities converge absolutely.) Now consider $$\int_{\mathbb{C}}\psi(z)\int_{\mathbb{C}}(\varphi_2(\xi)-\varphi_1(\xi))K(z-\xi)d\nu(\xi)d\nu(z). $$ As a result of the anti-symmetry of $K$, this equals $$\frac{1}{2}\int_{\mathbb{C}}\int_{\mathbb{C}}K(z-\xi)\bigl[\psi(z)[\varphi_2(\xi)-\varphi_1(\xi)]- \psi(\xi)[\varphi_2(z)-\varphi_1(z)]\bigr]d\nu(\xi)d\nu(z). $$ However, as we saw in Section \ref{weaklims}, $K(z-\xi)(\psi(z)\varphi_j(\xi) - \psi(\xi)\varphi_j(z))\in L^1(\nu\times\nu)$ for each $j=1,2$. Hence, by using the linearity of the integral, and applying Fubini's theorem, we see that the last line equals $I_{\nu}(\varphi_2,\psi)-I_{\nu}(\varphi_1,\psi)$. By definition, this is equal to $\langle \mathcal{C}_{\nu}(\varphi_2),\psi\rangle_{\nu} - \langle \mathcal{C}_{\nu}(\varphi_1),\psi\rangle_{\nu}$. The claim follows. We have seen that $\widetilde{\mathcal{C}}_{\nu}(1)$ is well-defined as a distribution. For any bounded open set $U\subset\mathbb{C}$, if we choose $\varphi$ to be identically equal to $1$ on a neighbourhood of $U$, then $\widetilde{\mathcal{C}}_{\nu}(1)\in L^2(U, \nu)$. 
Since Lipschitz functions with compact support are dense in $L^2(U, \nu)$, we find that $\widetilde{\mathcal{C}}_{\nu}(1)$ is well-defined $\nu$-almost everywhere. Finally, we note that the smoothness of the function $\varphi$ is not essential. If $\psi\in \operatorname{Lip}_0(\mathbb{C})$, let $U$ be a bounded open set containing $\operatorname{supp}(\psi)$. Then it is readily seen that $\langle\widetilde{\mathcal{C}}_{\nu}(1),\psi\rangle_{\nu}$ equals \begin{equation}\begin{split}\nonumber \langle\mathcal{C}_{\nu}(\chi_U),\psi\rangle_{\nu}- \mathcal{C}_{\nu}(\chi_U)(0)\cdot\langle 1,\psi\rangle_{\nu} +\Bigl\langle\int_{\mathbb{C}\backslash U}\!\! [K(\cdot-\xi)+K(\xi)]d\nu(\xi),\psi\Bigr\rangle_{\nu}. \end{split}\end{equation} \subsection{The weak continuity of $\widetilde{\mathcal{C}}_{\nu}(1)$} We shall introduce two more sets of functions. $\Phi^{\nu}_{A}$ will denote those functions $\psi$ with $\|\psi\|_{\operatorname{Lip}}\leq 1$, that satisfy $\int_{\mathbb{C}} \psi \,d\nu =0$ and $\operatorname{supp}(\psi)\subset B(0,A)$. We define $\Phi^{\nu}$ to be the set of compactly supported functions $\psi$ with $\|\psi\|_{\operatorname{Lip}}\leq 1$, satisfying $\int_{\mathbb{C}} \psi \,d\nu =0$. We start with another standard estimate. \begin{lem}\label{l1meanzero} Suppose that $\nu$ is a $C_0$-nice measure. For $R>0$, suppose that $\psi\in \Phi_{R}^{\nu}$. Then $\|\psi\|_{L^1(\nu)}\leq C(C_0)R^2$. \end{lem} \begin{proof} Simply note that $$\int_{B(0,R)}|\psi|d\nu = \int_{B(0,R)} \Bigl|\psi-\frac{1}{\nu(B(0,R))}\int_{B(0,R)} \psi d\nu\Bigr| d\nu. $$ This quantity is no greater than $\text{osc}_{B(0,R)}(\psi)\, \nu(B(0,R))$, which is less than or equal to $2R\cdot C_0R = 2C_0R^2$. \end{proof} Our next lemma concerns a weak continuity property of $\widetilde{\mathcal{C}}_{\nu}(1)$. \begin{lem}\label{weakcontcau1} Let $\nu_k$ be a sequence of $C_0$-good measures, with $0\not\in \operatorname{supp}(\nu_k)$. 
Suppose that $\nu_k$ converge weakly to $\nu$ (and so $\nu$ is $C_0$-good), with $0\not\in \operatorname{supp}(\nu)$. Fix non-negative sequences $\widetilde{\gamma}_k$ and $\widetilde{A}_k$, satisfying $\widetilde{\gamma}_k\rightarrow 0$, and $\widetilde{A}_k\rightarrow \widetilde{A}\in (0,\infty]$. If $|\langle \widetilde{\mathcal{C}}_{\nu_k}(1), \psi\rangle_{\nu_k}| \leq \widetilde{\gamma}_k$ for all $\psi\in \Phi^{\nu_k}_{\widetilde{A}_k}$, then $$|\langle \widetilde{\mathcal{C}}_{\nu}(1), \psi\rangle_{\nu}| =0 \text{ for all }\psi\in \Phi^{\nu}_{\widetilde{A}}.$$ (Here $\Phi^{\nu}_{\widetilde{A}}= \Phi^{\nu}$ if $\widetilde{A}=\infty$.) \end{lem} \begin{proof} If $\nu(B(0,\widetilde{A}))=0$, then there is nothing to prove, so assume that $\nu(B(0,\widetilde{A}))>0$. Let $\varepsilon>0$. Pick $\psi\in \Phi^{\nu}_{\widetilde{A}}$. Then there exists $R\in (0,\infty)$ such that $\operatorname{supp}(\psi)\subset B(0,R)\subset B(0, \widetilde{A}_k)$ for all sufficiently large $k$, and $\nu(B(0,R))>0$. Fix $\rho\in \operatorname{Lip}_0$ with $\operatorname{supp}(\rho)\subset B(0,R)$, such that $\int_{\mathbb{C}}\rho d\nu=c_{\rho}>0$. Define $$\psi_k = \psi- b_k \rho, \text{ with }b_k = \frac{1}{\int_{\mathbb{C}}\rho\, d\nu_k} \int_{\mathbb{C}}\psi d\nu_k. $$ Note that $\psi_k$ is supported in $B(0,R)$, and has $\nu_k$-mean zero. Since $b_k\rightarrow 0$ (indeed, $\int_{\mathbb{C}}\psi \,d\nu_k\rightarrow\int_{\mathbb{C}}\psi \,d\nu=0$, while $\int_{\mathbb{C}}\rho\, d\nu_k\rightarrow c_{\rho}>0$), we have that $\|\psi_k\|_{\operatorname{Lip}} \leq 2$ for all sufficiently large $k$. Therefore, for these $k$, we have $|\langle \widetilde{\mathcal{C}}_{\nu_k}(1), \psi_k\rangle_{\nu_k}| \leq 2\widetilde{\gamma}_k$. 
Now pick $\varphi\in \operatorname{Lip}_0$ with $\varphi \equiv 1$ on $B(0,2R)$ and $0\leq \varphi\leq 1$ on $\mathbb{C}$, such that both $$|\langle \mathcal{C}_{\nu_k}(\varphi), \psi_k\rangle_{\nu_k} - \langle \widetilde{\mathcal{C}}_{\nu_k}(1), \psi_k\rangle_{\nu_k}|<\varepsilon,$$ for all sufficiently large $k$, and $$|\langle \mathcal{C}_{\nu}(\varphi), \psi\rangle_{\nu} - \langle \widetilde{\mathcal{C}}_{\nu}(1), \psi\rangle_{\nu}|<\varepsilon. $$ To see that such a choice is possible, note that if $\varphi \equiv 1$ on $B(0,R')$ for $R'>2R$, then $|\langle \mathcal{C}_{\nu_k}(\varphi), \psi_k\rangle_{\nu_k} - \langle \widetilde{\mathcal{C}}_{\nu_k}(1), \psi_k\rangle_{\nu_k}|$ is bounded by $$\int_{B(0,R)}|\psi_k(z)|\int_{\mathbb{C}}|1-\varphi(\xi)| \Bigl|\frac{1}{z-\xi} + \frac{1}{\xi}\Bigr|d\nu_k(\xi)d\nu_k(z), $$ (recall here that $\psi_k$ has $\nu_k$-mean zero). For any $z\in B(0,R)$, note that $\text{dist}(z, \operatorname{supp}(1-\varphi))\geq \tfrac{R'}{2}$, and so by applying Lemma \ref{l1awayfromsupp}, we see that the above quantity is no greater than $C\|\psi_k\|_{L^1(\nu_k)}\tfrac{R}{R'}$. Applying Lemma \ref{l1meanzero}, we see that $|\langle \mathcal{C}_{\nu_k}(\varphi), \psi_k\rangle_{\nu_k} - \langle \widetilde{\mathcal{C}}_{\nu_k}(1), \psi_k\rangle_{\nu_k}|\leq C\tfrac{R^3}{R'}$, which can be made smaller than $\varepsilon$ provided that $R'$ is chosen sufficiently large. The same reasoning shows that $|\langle \mathcal{C}_{\nu}(\varphi), \psi\rangle_{\nu} - \langle \widetilde{\mathcal{C}}_{\nu}(1), \psi\rangle_{\nu}|<\varepsilon$ provided $R'$ is chosen suitably. On the other hand, as $\nu_k$ is $C_0$-good, we have $ |\langle \mathcal{C}_{\nu_k}(\varphi), \rho \rangle_{\nu_k}|\leq C_0\|\varphi\|_{L^2(\nu_k)}\|\rho\|_{L^2(\nu_k)} $. Since $\varphi$ and $\rho$ are compactly supported Lipschitz functions, the right hand side of this inequality converges to $C_0\|\varphi\|_{L^2(\nu)}\|\rho\|_{L^2(\nu)} $, and so it is bounded independently of $k$. 
Bringing together these observations, we see that $\langle \mathcal{C}_{\nu_k}(\varphi), \psi_k\rangle_{\nu_k}$ converges to $\langle \mathcal{C}_{\nu}(\varphi), \psi\rangle_{\nu}$ as $k\rightarrow \infty$. But since $|\langle \widetilde{\mathcal{C}}_{\nu_k}(1), \psi_k\rangle_{\nu_k}| \leq 2\widetilde{\gamma}_k$ for $k$ large enough, we deduce from the triangle inequality that $|\langle \widetilde{\mathcal{C}}_{\nu}(1), \psi\rangle_{\nu}| \leq 4\varepsilon$. Since $\varepsilon>0$ was arbitrary, the lemma follows. \end{proof} Let us now suppose that Proposition \ref{badlower} is false. Fix $A\geq 100$. For each $k \in \mathbb{N}$, $k\geq 2A$, there is a $C_0$-good measure $\mu_k$ with AD-regularity constant $c_0>0$, a square $Q_k\in \mathcal{B}^{\mu_k}$, and a set $E_k\supset B(z_{Q_k}, k\ell(Q_k))$ such that \begin{equation}\label{gammaksmall}| \langle \mathcal{C}_{\mu_k}(\chi_{E_k}), \psi \rangle_{\mu_k}|\leq \frac{1}{k}, \text{ for all }\psi\in \Psi_{Q_k,A}^{\mu_k}. \end{equation} In addition, by the scale invariance of the condition (\ref{gammaksmall}) (see Remark \ref{scalerem}), we may dilate and translate the square $Q_k$ so that it has side length $1$, and so that there are $\zeta_k, \xi_k \in B(z_{Q_k}, 10)\cap\operatorname{supp}(\mu_k)$ with $|\zeta_k-\xi_k|\geq 1/2$, such that $0\in[\zeta_k,\xi_k]$ and $B(0, \tau)\cap \operatorname{supp}(\mu_k)=\varnothing$. Note that the translated and dilated square is not necessarily dyadic. By passing to a subsequence if necessary, we may assume that $\mu_k$ converge weakly to a measure $\mu^{(A)}$ (using the uniform niceness of the $\mu_k$). This limit measure is $C_0$-good, with AD-regularity constant $c_0$, and $0\not\in \operatorname{supp}(\mu^{(A)})$. 
Furthermore, it is routine to check that $\mu^{(A)}$ satisfies the following property (recall Lemma \ref{ADsupp}): \begin{equation}\label{badmuA}\begin{split}\text{There exist }&\xi, \, \zeta \in \overline{B(0,20)}\cap \operatorname{supp}(\mu^{(A)}),\text{ with } |\xi-\zeta|\geq \frac{1}{2},\\&\text{ such that }0\in [\zeta,\xi]\text{ and }B(0,\tau)\cap \operatorname{supp}(\mu^{(A)})=\varnothing.\end{split}\end{equation} Now, for each $k$ we have that $B(0, \tfrac{A}{2}) \subset B(z_{Q_k}, A)$ and $E_k\supset B(0, \tfrac{k}{2}) \supset B(0,A)$. We claim that $$|\langle \widetilde{\mathcal{C}}_{\mu_k}(1), \psi\rangle_{\mu_k}| \leq \frac{1}{k} +\frac{CA^3}{k}, \text{ for all }\psi\in \Phi_{\tfrac{A}{2}}^{\mu_k}. $$ To see this, note that for any $\psi\in \Phi_{\tfrac{A}{2}}^{\mu_k}$, $\langle \widetilde{\mathcal{C}}_{\mu_k}(1), \psi\rangle_{\mu_k}$ is equal to $$ \langle \mathcal{C}_{\mu_k}(\chi_{E_k}), \psi\rangle_{\mu_k} + \int_{B(0,\tfrac{A}{2})}\psi(z)\int_{\mathbb{C}\backslash E_k} \Bigl(\frac{1}{z-\xi} + \frac{1}{\xi}\Bigr) d\mu_k(\xi)d\mu_k(z). $$ The first term is smaller than $\tfrac{1}{k}$ in absolute value. To bound the second term, note that for any $z\in B(0, \tfrac{A}{2})$, $\text{dist}(z, \mathbb{C}\backslash E_k)\geq \tfrac{k}{2}-\tfrac{A}{2}\geq \tfrac{k}{4}$ (recall that $k\geq 2A$), so Lemma \ref{l1awayfromsupp} yields that this second term is no larger than $\tfrac{CA}{k}\|\psi\|_{L^1(\mu_k)}$, and applying Lemma \ref{l1meanzero} yields the required estimate. We now apply Lemma \ref{weakcontcau1} with $\nu_k = \mu_k$, $\widetilde{\gamma}_k = \tfrac{1}{k} +\tfrac{CA^3}{k}$, and $\widetilde{A}_k = \tfrac{A}{2}$. Our conclusion is that $|\langle \widetilde{\mathcal{C}}_{\mu^{(A)}}(1), \psi\rangle_{\mu^{(A)}}| = 0, $ for all $\psi\in \Phi_{\tfrac{A}{2}}^{\mu^{(A)}}$. We now set $A=k$, for $k>100$. The above argument yields a measure $\mu^{(k)}$ satisfying $|\langle \widetilde{\mathcal{C}}_{\mu^{(k)}}(1), \psi\rangle_{\mu^{(k)}}| = 0, $ for all $\psi\in \Phi_{\tfrac{k}{2}}^{\mu^{(k)}}$. 
We now pass to a subsequence of $\{\mu^{(k)}\}_k$ so that $\mu^{(k)}\rightarrow \mu$ weakly as $k\rightarrow \infty$. The measure $\mu$ is $C_0$-good with AD-regularity constant $c_0$, and satisfies the property (\ref{badmuA}) with $\mu$ replacing $\mu^{(A)}$. By applying Lemma \ref{weakcontcau1} with $\nu_k = \mu^{(k)}$, $\nu=\mu$, $\widetilde{A}_k = \tfrac{k}{2}$, and $\widetilde{\gamma}_k=0$, we arrive at the following result: \begin{lem}\label{absurd} Suppose that Proposition \ref{badlower} fails. Then there exists a $C_0$-good measure $\mu$ with AD-regularity constant $c_0$, such that \begin{equation}\label{reflectionless} |\langle \widetilde{\mathcal{C}}_{\mu}(1), \psi\rangle_{\mu}| = 0, \text{ for all }\psi\in \Phi^{\mu}, \end{equation} and there exist $\xi, \, \zeta \in \overline{B(0,20)}\cap \operatorname{supp}(\mu)$, with $|\xi-\zeta|\geq \frac{1}{2},$ such that $0\in [\zeta,\xi]$ and $B(0,\tau)\cap \operatorname{supp}(\mu)=\varnothing.$ \end{lem} We call any measure $\mu$ that satisfies (\ref{reflectionless}) a \textit{reflectionless measure}. It turns out that there are not many good AD-regular reflectionless measures. \begin{prop} \label{reflectioncharac} Suppose that $\mu$ is a non-trivial reflectionless good AD-regular measure. Then $\mu = c\mathcal{H}^1_{L}$ for a line $L$, and a positive constant $c>0$. \end{prop} Note that Proposition \ref{reflectioncharac} contradicts the existence of the measure $\mu$ in Lemma \ref{absurd}. Therefore, once Proposition \ref{reflectioncharac} is proved, we will have established Proposition \ref{badlower}, and Theorem \ref{thm1} will follow. Hence it remains to prove the proposition. It is at this stage where the precise structure of the Cauchy transform is used. \section{The Cauchy transform of a reflectionless good measure $\mu$ is constant in each component of $\mathbb{C}\backslash \operatorname{supp}(\mu)$} Our goal is now to prove Proposition \ref{reflectioncharac}. 
Suppose that $\mu$ is a reflectionless $C_0$-good measure. We may assume that $0\not\in \operatorname{supp}(\mu)$. All constants in this section may depend on $C_0$ without explicit mention. Since $\widetilde{\mathcal{C}}_{\mu}(1)$ is a well-defined function $\mu$-almost everywhere and satisfies (\ref{reflectionless}), we conclude that it is constant $\mu$-almost everywhere in $\mathbb{C}$, say with value $\varkappa\in \mathbb{C}$ (indeed, a locally integrable function that annihilates every compactly supported Lipschitz function of $\mu$-mean zero must be constant $\mu$-almost everywhere). \begin{lem} \label{refconstsupp} Suppose that $\mu$ is a $C_0$-good reflectionless measure, and $0\not\in \operatorname{supp}(\mu)$. Then there exists $\varkappa\in \mathbb{C}$ such that $\widetilde{\mathcal{C}}_{\mu}(1)=\varkappa$ $\mu$-almost everywhere. \end{lem} Our considerations up to now have been quite general, but now our hand is forced to use the magic of the complex plane. The main difficulty is to obtain some information about the values of $\widetilde{\mathcal{C}}_{\mu}(1)$ away from the support of $\mu$ in terms of the constant value $\varkappa$. \subsection{The resolvent identity} \begin{lem}\label{resolvlem} For every $z\not\in\operatorname{supp}(\mu)$, $$[\widetilde{\mathcal{C}}_{\mu}(1)(z)]^2 = 2\varkappa\cdot \widetilde{\mathcal{C}}_{\mu}(1)(z). $$ \end{lem} An immediate consequence of Lemma \ref{resolvlem} is that either $\widetilde{\mathcal{C}}_{\mu}(1)(z)=2\varkappa$ or $\widetilde{\mathcal{C}}_{\mu}(1)(z)=0$ for any $z\not\in \operatorname{supp}(\mu)$. Since $\widetilde{\mathcal{C}}_{\mu}(1)$ is a continuous function away from $\operatorname{supp}(\mu)$, it follows that $\widetilde{\mathcal{C}}_{\mu}(1)$ is constant in each connected component of $\mathbb{C}\backslash \operatorname{supp}(\mu)$. A variant of Lemma \ref{resolvlem}, where the Cauchy transform is considered in the sense of principal value, has previously appeared in work of Melnikov, Poltoratski, and Volberg; see Theorem 2.2 of \cite{MPV}. We shall modify the proof from \cite{MPV} in order to prove Lemma \ref{resolvlem}. 
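It may help to first isolate the elementary algebra on which the proof rests; the following verification is routine, and we record it only for the reader's convenience. For three distinct points $z,\xi,\omega\in\mathbb{C}$, one has $$\frac{1}{\xi-\omega}\Bigl(\frac{1}{z-\xi}-\frac{1}{z-\omega}\Bigr)=\frac{1}{(z-\xi)(z-\omega)}, \qquad \frac{1}{\xi-\omega}\Bigl(\frac{1}{\xi}-\frac{1}{\omega}\Bigr)=-\frac{1}{\xi\omega}. $$ Writing $a=\frac{1}{z-\xi}$, $b=\frac{1}{\xi}$, $c=\frac{1}{z-\omega}$, $d=\frac{1}{\omega}$ and $u=\frac{1}{\xi-\omega}$, the difference between the two sides of the regularized resolvent identity (\ref{resolve}) below equals $$\bigl[(a+b)(u+d)+(c+d)(b-u)\bigr]-(a+b)(c+d) = \bigl[u(a-c)-ac\bigr]+\bigl[u(b-d)+bd\bigr], $$ which vanishes by the two formulas above. 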
We shall first provide a formal, but not rigorous, proof of this lemma. Indeed, note the following regularized version of the resolvent identity: for any three distinct points $z,\xi, \omega\in \mathbb{C}$, \begin{equation}\label{resolve}\begin{split}\Bigl[\frac{1}{z-\xi}&+\frac{1}{\xi}\Bigr]\cdot\Bigl[\frac{1}{\xi-\omega}+\frac{1}{\omega}\Bigr] +\Bigl[ \frac{1}{z-\omega}+\frac{1}{\omega}\Bigr]\cdot\Bigl[\frac{1}{\omega-\xi}+\frac{1}{\xi}\Bigr] \\ &=\Bigl[ \frac{1}{z-\xi}+\frac{1}{\xi}\Bigr]\cdot\Bigl[ \frac{1}{z-\omega}+\frac{1}{\omega}\Bigr]. \end{split}\end{equation} Integrating both sides of this equality with respect to $d\mu(\xi)d\mu(\omega)$, we (only formally!) arrive at $2\widetilde{\mathcal{C}}_{\mu}(\widetilde{\mathcal{C}}_{\mu}(1))(z) = [\widetilde{\mathcal{C}}_{\mu}(1)(z)]^2$. Once this is established, Lemma \ref{refconstsupp} completes the proof. The proof that follows is a careful justification of this integration. \begin{proof}We shall define $\widetilde{\mathcal{C}}_{\mu, \delta}(\varphi)(\omega) = \int_{\mathbb{C}}\bigl[K_{\delta}(\omega - \xi) +\tfrac{1}{\xi}\bigr] \varphi(\xi)d\mu(\xi)$ for $\varphi\in \operatorname{Lip}_0(\mathbb{C})$. In particular, as $\delta$ tends to $0$, $\widetilde{\mathcal{C}}_{\mu, \delta}(\varphi)$ converges to $\widetilde{\mathcal{C}}_{\mu}(\varphi)=\mathcal{C}_{\mu}(\varphi) - \mathcal{C}_{\mu}(\varphi)(0)$ weakly in $L^2(\mu)$. Set $d_0 = \text{dist}(\{z,0\}, \operatorname{supp}(\mu))$. Since $d_0>0$, Lemma \ref{l1awayfromsupp} tells us that $[K(z-\cdot)+K(\cdot)]\in L^1(\mu)$. For $N>0$, define a bump function $\varphi_N\in \operatorname{Lip}_0(\mathbb{C})$, satisfying $\varphi_N\equiv 1$ on $B(0, N)$, and $\operatorname{supp}(\varphi_N)\subset B(0, 2N)$. Consider the identity (\ref{resolve}), and multiply both sides by $\varphi_N(\xi)\varphi_N(\omega)$. 
After integration against $d\mu(\xi)d\mu(\omega)$, the right hand side of this equality becomes $$\int_{\mathbb{C}}\Bigl[\frac{1}{z-\xi} + \frac{1}{\xi}\Bigr]\varphi_N(\xi)d\mu(\xi)\int_{\mathbb{C}}\Bigl[\frac{1}{z-\omega} + \frac{1}{\omega}\Bigr]\varphi_N(\omega)d\mu(\omega). $$ But since $\displaystyle\bigl[K(z-\cdot)+K(\cdot)\bigr]\in L^1(\mu)$, the dominated convergence theorem ensures that as $N\rightarrow \infty$, this expression converges to $[\widetilde{\mathcal{C}}_{\mu}(1)(z)]^2$. Now, let $\delta>0$, and note that $$\frac{1}{\xi-\omega} = K_{\delta}(\xi-\omega)+ \chi_{B(0,\delta)}(\xi-\omega)\cdot\Bigl[\frac{1}{\xi-\omega}-\frac{\overline{\xi-\omega}}{\delta^2}\Bigr]. $$ Consider the integral \begin{equation}\begin{split}\nonumber\int_{\mathbb{C}}\int_{\mathbb{C}} \chi_{B(0,\delta)}(\xi-\omega)&\Bigl[\frac{1}{\xi-\omega}-\frac{\overline{\xi-\omega}}{\delta^2}\Bigr]\cdot\Bigl[\frac{1}{z-\xi}+\frac{1}{\xi}-\frac{1}{z-\omega} - \frac{1}{\omega}\Bigr]\\ & \varphi_N(\xi)\varphi_N(\omega)d\mu(\xi)d\mu(\omega). \end{split}\end{equation} Note that $\bigl|\frac{1}{z-\xi}+\frac{1}{\xi}-\frac{1}{z-\omega} - \frac{1}{\omega}\bigr|\leq \frac{2}{d_0^2}|\xi-\omega|$ for $\xi, \omega\in \operatorname{supp}(\mu)$, and so this integral is bounded in absolute value by a constant multiple of $\int_{\mathbb{C}}\varphi_N(\xi) \mu(B(\xi, \delta))d\mu(\xi)$, which is bounded by $C\delta N.$ This converges to zero as $\delta\rightarrow 0$. Making reference to (\ref{resolve}), we have thus far shown that \begin{equation}\begin{split}\nonumber\lim_{N\rightarrow\infty}\lim_{\delta\rightarrow 0}&\int_{\mathbb{C}}\int_{\mathbb{C}}\varphi_N(\xi)\varphi_N(\omega)\Bigl\{\Bigl[\frac{1}{z-\xi}+\frac{1}{\xi}\Bigr]\Bigl[K_{\delta}(\xi-\omega)+\frac{1}{\omega}\Bigr] \\ &+\Bigl[ \frac{1}{z-\omega}+\frac{1}{\omega}\Bigr]\Bigl[K_{\delta}(\omega-\xi)+\frac{1}{\xi}\Bigr]\Bigr\}d\mu(\xi)d\mu(\omega)= [\widetilde{\mathcal{C}}_{\mu}(1)(z)]^2. 
\end{split}\end{equation} By Fubini's theorem, and the weak convergence of $\widetilde{\mathcal{C}}_{\mu, \delta}(\varphi)$ to $\widetilde{\mathcal{C}}_{\mu}(\varphi)$, the left hand side of this equality is equal to twice the following limit: \begin{equation}\begin{split}\lim_{N\rightarrow \infty}\Bigl[ \int_{\mathbb{C}} \varphi_N(\xi)\Bigl[\frac{1}{z-\xi} + \frac{1}{\xi}\Bigr]\widetilde{\mathcal{C}}_{\mu}(\varphi_N)(\xi) d\mu(\xi)\Bigr]. \end{split}\end{equation} Therefore, to prove the lemma, it suffices to show that this limit equals $\varkappa \widetilde{\mathcal{C}}_{\mu}(1)(z)$. To do this, let $\alpha\in (\tfrac{1}{2},1)$. First consider $$I_N = \int_{B(0, N^{\alpha})} \varphi_N(\xi)\Bigl|\frac{1}{z-\xi} + \frac{1}{\xi}\Bigr|\cdot|\widetilde{\mathcal{C}}_{\mu}(\varphi_N)(\xi) -\widetilde{\mathcal{C}}_{\mu}(1)(\xi)|d\mu(\xi). $$ Note that, for $|\xi|\leq N^{\alpha}$, we have $$|\widetilde{\mathcal{C}}_{\mu}(\varphi_N)(\xi) -\widetilde{\mathcal{C}}_{\mu}(1)(\xi)|\leq \int_{\mathbb{C}}|1-\varphi_N(\omega)|\Bigl|\frac{1}{\xi-\omega}+\frac{1}{\omega}\Bigr|d\mu(\omega). $$ Applying Lemma \ref{l1awayfromsupp} yields an upper bound of $C\tfrac{|\xi|}{N}\leq CN^{\alpha-1}$ for the right hand side. But as $[K(z-\cdot)+K(\cdot)]\in L^1(\mu)$, we conclude that $I_N\rightarrow 0$ as $N\rightarrow \infty$. Next, note that $$\int_{B(0, N^{\alpha})} \!\!\varphi_N(\xi)\Bigl[\frac{1}{z-\xi} + \frac{1}{\xi}\Bigr]\widetilde{\mathcal{C}}_{\mu}(1)(\xi) d\mu(\xi)\! = \!\varkappa\! \int_{B(0, N^{\alpha})} \!\!\varphi_N(\xi)\Bigl[\frac{1}{z-\xi} + \frac{1}{\xi}\Bigr] d\mu(\xi), $$ which converges to $\varkappa\cdot \widetilde{\mathcal{C}}_{\mu}(1)(z)$ as $N\rightarrow \infty$. To complete the proof of the lemma, it now remains to show that $$\lim_{N\rightarrow \infty}\int_{B(0,2N)\backslash B(0, N^{\alpha})}|\widetilde{\mathcal{C}}_{\mu}(\varphi_N)(\xi)|\cdot \Bigl|\frac{1}{z-\xi}+\frac{1}{\xi}\Bigr|d\mu(\xi) =0. 
$$ To do this, first note that $|\widetilde{\mathcal{C}}_{\mu}(\varphi_N)(\xi)|\leq |\mathcal{C}_{\mu}(\varphi_N)(\xi)| + C\log \tfrac{N}{d_0}$ (this merely uses the $C_0$-niceness of $\mu$). On the other hand, for sufficiently large $N$, $\bigl|\frac{1}{z-\xi}+\frac{1}{\xi}\bigr|\leq \frac{8|z|}{N^{2\alpha}}$ for $|\xi|\geq N^{\alpha}$. Therefore, there is a constant $C=C(C_0,d_0)>0$ such that \begin{equation}\begin{split}\nonumber&\int_{B(0,2N)\backslash B(0, N^{\alpha})}|\widetilde{\mathcal{C}}_{\mu}(\varphi_N)(\xi)|\cdot \Bigl|\frac{1}{z-\xi}+\frac{1}{\xi}\Bigr|d\mu(\xi)\\ &\;\;\;\leq \frac{C|z|\log N}{N^{2\alpha}}\mu(B(0,2N))+ \frac{C|z|}{N^{2\alpha}}\int_{ B(0,2N)}|\mathcal{C}_{\mu}(\varphi_N)(\xi)|d\mu(\xi). \end{split}\end{equation} Finally, since $\|\mathcal{C}_{\mu}(\varphi_N)\|_{L^2(\mu)} \leq C\sqrt{\mu(B(0, 2N))}$, and $\mu(B(0,2N))\leq CN$, we estimate the right hand side here by a constant multiple of $\frac{|z|N\log N}{N^{2\alpha}}$, which tends to zero as $N\rightarrow \infty$ (here we use that $\alpha>\tfrac{1}{2}$).\end{proof} \section{The proof of Proposition \ref{reflectioncharac}} In this section we conclude our analysis by proving Proposition \ref{reflectioncharac}. To do this, we shall use the notion of a tangent measure, which was developed by Preiss \cite{P87}. Suppose that $\nu$ is a Borel measure on $\mathbb{C}$. The measure $\nu_{z,\lambda}(E) = \tfrac{\nu(\lambda E+z)}{\lambda}$ (defined for Borel sets $E\subset\mathbb{C}$) is called a $\lambda$-blowup of $\nu$ at $z$. A tangent measure of $\nu$ at $z$ is any measure that can be obtained as a weak limit of a sequence of $\lambda$-blowups of $\nu$ at $z$ with $\lambda \rightarrow 0$. Now suppose that $\mu$ is a nontrivial $C_0$-good measure with AD-regularity constant $c_0$. Then any $\lambda$-blowup of $\mu$ at $z\in \operatorname{supp}(\mu)$ will again have these properties ($C_0$-goodness, and $c_0$-AD regularity). Therefore, both properties are inherited by any tangent measure of $\mu$. 
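For instance, the density bounds pass to blow-ups by a one-line change of variables: for any $w\in \mathbb{C}$ and $r>0$, $$\mu_{z,\lambda}(B(w,r)) = \frac{\mu(B(z+\lambda w, \lambda r))}{\lambda}\leq \frac{C_0\lambda r}{\lambda} = C_0 r, $$ while if $w\in \operatorname{supp}(\mu_{z,\lambda}) = \tfrac{1}{\lambda}(\operatorname{supp}(\mu)-z)$, that is, if $z+\lambda w\in \operatorname{supp}(\mu)$, then the same change of variables gives $\mu_{z,\lambda}(B(w,r))\geq c_0 r$. The $L^2$ bound for the Cauchy transform is preserved under the analogous rescaling of test functions (compare Remark \ref{scalerem}). 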
In particular, every tangent measure of $\mu$ at $z\in \operatorname{supp}(\mu)$ is non-trivial, provided that $\mu$ is non-trivial. Lastly, we remark that if $\mu$ is reflectionless, then any tangent measure of $\mu$ is also reflectionless. This follows from a simple application of Lemma \ref{weakcontcau1}. In what follows, it will often be notationally convenient to translate a point on $\operatorname{supp}(\mu)$ to the origin. Whenever this is the case, the definition of $\widetilde{\mathcal{C}}_{\mu}(1)$ in (\ref{C1offsupp}) is translated with the support of the measure, and becomes \begin{equation}\label{z0defn}\widetilde{\mathcal{C}}_{\mu}(1)(z) = \int_{\mathbb{C}}[K(z-\xi)-K(z_0-\xi)]d\mu(\xi),\end{equation} for some $z_0\not\in\operatorname{supp}(\mu)$. If $\mu$ is reflectionless, then $\widetilde{\mathcal{C}}_{\mu}(1)$ is constant in each component of $\mathbb{C}\backslash \operatorname{supp}(\mu)$, and takes one of two values in $\mathbb{C}\backslash \operatorname{supp}(\mu)$. \subsection{Step 1} Suppose that $\operatorname{supp}(\mu)\subset L$, for some line $L$. Then by translation and rotation we may as well assume that $L=\mathbb{R}$. If the support is not the whole line, then there exists an interval $(x, x')$ disjoint from the support of $\mu$, with either $x$ or $x'$ in the support of $\mu$. By applying the rotation $z\mapsto -z$ if necessary, we may assume that $x'\in \operatorname{supp}(\mu)$. Denote by $\tilde\mu$ a non-zero tangent measure of $\mu$ at $x'$. Then $\tilde\mu$ has support contained in the ray $[0,\infty)$, and $0\in \operatorname{supp}(\tilde\mu)$. Since $\tilde\mu$ is reflectionless, we may apply Lemma \ref{resolvlem} to deduce that $\widetilde{\mathcal{C}}_{\tilde\mu}(1)(-t)$ is constant for $t>0$ (the points $-t$, $t>0$, all lie in the connected set $\mathbb{C}\backslash[0,\infty)$). Differentiating this function with respect to $t$, we arrive at $\int_{[0,\infty)}\frac{1}{(t+y)^2}d\tilde\mu(y).$ This integral is strictly positive as $\tilde\mu$ is not identically zero, whereas the derivative of a constant function must vanish. 
From this contradiction we see that $\operatorname{supp}(\mu)=\mathbb{R}$. Consequently, we have that $d\mu(t) = h(t) dt$, where $c_0\leq h(t)\leq C_0$. Now let $y>0$ and consider, for $x\in \mathbb{R}$, $$\widetilde{\mathcal{C}}_{\mu}(1)(x-yi) - \widetilde{\mathcal{C}}_{\mu}(1)(x+yi) = 2i\int_{\mathbb{R}}\frac{y}{(x-t)^2+y^2}h(t) dt. $$ The expression on the left hand side is constant in $x\in \mathbb{R}$ and $y>0$, since the upper and lower half planes are connected components of $\mathbb{C}\backslash \operatorname{supp}(\mu)$. On the other hand, the integral on the right hand side is a constant multiple of the harmonic extension of $h$ to $\mathbb{R}^2_+$. The Poisson kernel is an approximate identity, and so by letting $y\rightarrow0^{+}$ we conclude that $h$ is a constant. Therefore $\mu = c m_1,$ with $c>0$. \subsection{Step 2} We now turn to the general case. We first introduce some notation. For $z\in \mathbb{C}$ and a unit vector $e$, $H_{z, e}$ denotes the closed half space with $z$ on its boundary, and with inner unit normal $e$. With $\alpha\in (0,1)$, we denote $C_{z,e}(\alpha) = \{\xi\in \mathbb{C}: \langle \xi-z , e\rangle > \alpha |\xi-z|\}$, where $\langle \cdot, \cdot \rangle$ is the standard inner product in $\mathbb{R}^2$. \begin{lem}\label{nothingotherside} Suppose that $z\not\in \operatorname{supp}(\mu)$. Let $\tilde{z}$ be a closest point in $\operatorname{supp}(\mu)$ to $z$, and set $e = \tfrac{\tilde{z}-z}{|\tilde{z}-z|}$. For each $\alpha\in (0,1)$, there is a radius $r_{\alpha}>0$ such that $B(\tilde z, r_{\alpha})\cap C_{\tilde z, e}(\alpha)$ is disjoint from $\operatorname{supp}(\mu)$. \end{lem} \begin{proof} \begin{figure}[t] \centering \includegraphics[width = 108mm]{CauchyCones} \caption{The set-up for Lemma \ref{nothingotherside}}\label{ballpic} \end{figure} We may suppose that $z=-r i$ for some $r>0$ and $\tilde{z}=0$ (and so $e=i$). 
We shall examine the imaginary part of the Cauchy transform evaluated at $-ti$ for $t\in (0,\tfrac{r}{2})$: $$\Im [\widetilde{\mathcal{C}}_{\mu}(1)(-ti)] = \int_{\mathbb{C}}\Bigl[\frac{\Im(\xi)+t}{|\xi+it|^2}-\frac{\Im(\xi-z_0)}{|\xi-z_0|^2}\Bigr]d\mu(\xi). $$ Lemma \ref{resolvlem} guarantees that $\Im [\widetilde{\mathcal{C}}_{\mu}(1)(-ti)] = \Im[\widetilde{\mathcal{C}}_{\mu}(1)(z)]$ for any $t\in(0,r)$, since $-ti$ and $z$ then lie in the same connected component of $\mathbb{C}\backslash\operatorname{supp}(\mu)$. In particular, it is bounded independently of $t$. Making reference to Figure 5, we let $R>3r$, and define three regions: $I = \mathbb{C}\backslash B(0,R)$, $II = \bigl\{\xi\in B(0,R) : \Im(\xi)<-t\bigr\}$, and $III = B(0,R)\backslash II.$ Set $d_0=\text{dist}(z_0, \operatorname{supp}(\mu)).$ First note that if $R>3|z_0|$, and $\xi \in \mathbb{C}\backslash B(0,R)$, then $$\Bigl| \frac{\Im(\xi)+t}{|\xi+it|^2}-\frac{\Im(\xi-z_0)}{|\xi-z_0|^2}\Bigr|\leq \frac{C}{|\xi|^2}. $$ Therefore, \begin{equation}\begin{split}\nonumber \int_{|\xi|\geq R}&\Bigl|\frac{\Im(\xi)+t}{|\xi+it|^2} - \frac{\Im(\xi-z_0)}{|\xi-z_0|^2}\Bigr|d\mu(\xi)+ \int_{B(0,R)}\Bigl|\frac{\Im(\xi-z_0)}{|\xi-z_0|^2}\Bigr|d\mu(\xi)\\ &\leq \int_{|\xi|\geq R} \frac{C}{|\xi|^2}d\mu(\xi)+ \int_{B(0, R)}\frac{1}{|\xi-z_0|}d\mu(\xi). \end{split}\end{equation} The right hand side of this inequality is finite and independent of $t$. Next, note that if $\xi\in II\cap \operatorname{supp}(\mu)$, then $|\xi+it|^2\geq -(\Im(\xi)+t)r$, provided that $|\Im(\xi)|<\tfrac{r}{2}$ and $t<\tfrac{r}{2}$. To see this, note that $|\xi-z|\geq r$, and so by elementary geometry, $|\xi+it|^2\geq r^2 - (r+(\Im (\xi)+t))^2$. This is at least $-(\Im(\xi)+t)r$ under our assumptions on $\xi$ and $t$. Therefore, if $t<\tfrac{r}{2}$, then $$\int_{II}\frac{\Im(\xi)+t}{|\xi+it|^2}d\mu(\xi)\geq -\int_{II\cap B(0, \tfrac{r}{2})}\frac{1}{r}d\mu(\xi) - \Bigl|\int_{II\backslash B(0,\tfrac{r}{2})}\frac{\Im(\xi)+t}{|\xi+it|^2}d\mu(\xi)\Bigr|. 
$$ Both terms on the right hand side are bounded in absolute value by $C\tfrac{\mu(B(0,R))}{r}\leq \tfrac{CR}{r}$ (recall that $B(z,r)\cap\operatorname{supp}(\mu)=\varnothing$). Note that the integral on the left hand side is at most zero. Our conclusion thus far is that there is a constant $\Delta$, depending on $C_0,d_0, R,r$ and $\Im(\widetilde{\mathcal{C}}_{\mu}(1)(z))$, such that for any $t<\tfrac{r}{2}$, \begin{equation}\label{IIIbound}\Bigl|\int_{III}\frac{\Im(\xi)+t}{|\xi+it|^2}d\mu(\xi)\Bigr|\leq \Delta. \end{equation} Note that the integrand in this integral is non-negative for any $\xi\in III$. Suppose now that the statement of the lemma is false. Then there exists $\alpha\in(0,1)$, along with a sequence $z_j\in C_{0,e}(\alpha)\cap \operatorname{supp}(\mu)$ with $z_j\rightarrow 0$ as $j\rightarrow \infty$. By passing to a subsequence, we may assume that $|z_j|\leq\tfrac{R}{2}$ for each $j$, and also that the balls $B_j = B(z_j, \tfrac{\alpha}{2}|z_j|)$ are pairwise disjoint. Each ball $B_j\subset III$, and provided that $t\leq \tfrac{\alpha}{2}|z_j|$, we have $$\frac{\Im(\xi)+t}{|\xi+ti|^2}\geq \frac{\alpha|z_j|}{8|z_j|^2} = \frac{\alpha}{8|z_j|}, \text{ for any }\xi \in B_j. $$ As a result, we see that $$\int_{III}\frac{\Im(\xi)+t}{|\xi+it|^2}d\mu(\xi)\geq \sum_{j: \, t\leq\alpha|z_j|/2} \int_{B_j}\frac{\Im(\xi)+t}{|\xi+ti|^2}d\mu(\xi) \geq \sum_{j: \, t\leq\alpha|z_j|/2}\mu(B_j)\frac{\alpha}{8|z_j|}. $$ But $\mu(B_j)\geq \tfrac{c_0\alpha}{2}|z_j|$, and so the previous integral over $III$ has size at least $\tfrac{c_0\alpha^2}{16}\cdot\text{card}\{j: t\leq \tfrac{\alpha}{2}|z_j|\}$. However, if $t$ is sufficiently small, then this quantity may be made larger than the constant $\Delta$ appearing in (\ref{IIIbound}). This is absurd. \end{proof} We now pause to prove a simple convergence lemma. 
\begin{lem}\label{outsideconv}Suppose that $\nu_k$ is a sequence of $C_0$-nice measures with AD-regularity constant $c_0$ that converges to $\nu$ weakly as $k\rightarrow \infty$ (and so $\nu$ is $C_0$-nice with AD-regularity constant $c_0$). If $z_0\not\in\operatorname{supp}(\nu)$, then for any $z\not\in \operatorname{supp}(\nu)$, $\widetilde{\mathcal{C}}_{\nu_k}(1)(z)$ is well-defined (as in (\ref{z0defn})) for sufficiently large $k$, and $\widetilde{\mathcal{C}}_{\nu_k}(1)(z)\rightarrow \widetilde{\mathcal{C}}_{\nu}(1)(z)$ as $k\rightarrow \infty$.\end{lem} \begin{proof} First note that there exists $r>0$ such that $\nu(B(z_0,r))=0=\nu(B(z,r))$. But then, by the AD regularity of each $\nu_k$, we must have that $\nu_k(B(z_0, \tfrac{r}{2}))=0=\nu_k(B(z,\tfrac{r}{2}))$ for sufficiently large $k$, and hence $\widetilde{\mathcal{C}}_{\nu_k}(1)(z)$ is well-defined. Let $N>0$, and choose $\varphi_N \in \operatorname{Lip}_0(\mathbb{C})$ satisfying $\varphi_N\equiv 1$ on $B(0,N)$ and $0\leq \varphi_N\leq 1$ in $\mathbb{C}$. For large enough $k$, $|\widetilde{\mathcal{C}}_{\nu}(1)(z) -\widetilde{\mathcal{C}}_{\nu_k}(1)(z)|$ is no greater than the sum of $|\int_{\mathbb{C}}[K(z-\xi)-K(z_0-\xi)]\varphi_N(\xi) d(\nu-\nu_k)(\xi)|$ and $|\int_{\mathbb{C}}[K(z-\xi)-K(z_0-\xi)][1-\varphi_N(\xi)] d(\nu-\nu_k)(\xi)|$. The first of these two terms tends to zero as $k\rightarrow\infty$, while the second has size at most $\tfrac{C|z-z_0|}{N}$ (for sufficiently large $N$) due to Lemma \ref{l1awayfromsupp}. Letting $k\rightarrow\infty$ and then $N\rightarrow\infty$ establishes the required convergence.\end{proof} \begin{lem}\label{halfspacelem} Suppose that $z\not\in \operatorname{supp}(\mu)$, and $\tilde z$ is a closest point on the support of $\mu$ to $z$. Let $e= \tfrac{\tilde{z}-z}{|\tilde{z}-z|}$. Then $\operatorname{supp}(\mu)\subset H_{\tilde{z},e}$. \end{lem} \begin{proof} Write $e=e^{i\theta}$. By translation, we may assume that $\tilde{z}=0$. 
To prove the lemma, it suffices to show that $B(-\rho e, \rho)\cap \operatorname{supp}(\mu)=\varnothing$ for all $\rho>0$. Fix $t_0$ small enough to ensure that $te\not\in\operatorname{supp}(\mu)$ for any $0<t\leq t_0$. The existence of $t_0>0$ is guaranteed by Lemma \ref{nothingotherside}. Now set $\sigma = \widetilde{\mathcal{C}}_{\mu}(1)(z)-\widetilde{\mathcal{C}}_{\mu}(1)(t_0e)$. Notice that the value of $\sigma$ is independent of the choice of $z_0\not\in \operatorname{supp}(\mu)$ in (\ref{z0defn}), so we shall fix $z_0=t_0e$. Now, let $\mu^{\star}$ denote a tangent measure to $\mu$ at $0$. On account of Lemma \ref{nothingotherside}, the support of $\mu^{\star}$ is contained in the line $L$ through $0$ perpendicular to $e$. By Step 1, $\mu^{\star} = c^{\star}\mathcal{H}^1_{L}$ with $c^{\star}\in [c_0,C_0]$. As a result, for any $y>0$, we have that \begin{equation}\label{secondjump}\widetilde{\mathcal{C}}_{\mu^{\star}}(1)(\!-ye)\! -\!\widetilde{\mathcal{C}}_{\mu^{\star}}(1)(ye) \!= \! \int_{\mathbf{R}}\! \frac{-2e^{-i\theta}y}{t^2+y^2} c^{\star} dm_1(t)\!=\!-2\pi c^{\star} e^{-i\theta}. \end{equation} We claim that $\widetilde{\mathcal{C}}_{\mu^{\star}}(1)(-ye) - \widetilde{\mathcal{C}}_{\mu^{\star}}(1)(ye) = \sigma$. To see this, note that for $\lambda>0$ small enough so that $y\lambda\leq t_0$, we have $\widetilde{\mathcal{C}}_{\mu_{0,\lambda}}(1)(-ye)-\widetilde{\mathcal{C}}_{\mu_{0,\lambda}}(1)(ye) = \widetilde{\mathcal{C}}_{\mu}(1)(-\lambda ye)-\widetilde{\mathcal{C}}_{\mu}(1)(\lambda ye)$. But this equals $\sigma$ because $-\lambda ye$ and $\lambda ye$ lie in the same connected components of $\mathbb{C}\backslash\operatorname{supp}(\mu)$ as $z$ and $t_0 e$ respectively. Since $\mu^{\star}$ is a weak limit of measures $\mu_{0,\lambda_k}$ for some sequence $\lambda_k\rightarrow 0$, applying Lemma \ref{outsideconv} proves the claim. 
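For the reader's convenience, the value on the right hand side of (\ref{secondjump}) can be checked directly; the computation below is our own rendering, assuming the Cauchy kernel $K(w)=1/w$ and the parametrization of $L$ by $\xi = ite^{i\theta}$, $t\in\mathbf{R}$ (so that $\xi^2 = -t^2e^{2i\theta}$):

```latex
% Difference of kernels at the two base points -ye and ye, e = e^{i\theta}:
\begin{align*}
K(-ye-\xi)-K(ye-\xi)
 &= \frac{1}{-ye-\xi}-\frac{1}{ye-\xi}
  = \frac{2ye}{\xi^{2}-y^{2}e^{2}}
  = \frac{-2e^{-i\theta}y}{t^{2}+y^{2}}.
\end{align*}
```

Integrating against $c^{\star}\,dm_1(t)$ and using $\int_{\mathbf{R}}\frac{y}{t^2+y^2}\,dt = \pi$ for $y>0$ recovers the value $-2\pi c^{\star} e^{-i\theta}$.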
Consequently, we have that $\sigma$ determines the direction of tangency from $z$ to $\operatorname{supp}(\mu)$ (the angle $\theta$). The right hand side of (\ref{secondjump}) is non-zero, and so $t_0e$ lies in a different component of $\mathbb{C}\backslash\operatorname{supp}(\mu)$ to $z$. As there are only two possible values that $\widetilde{\mathcal{C}}_{\mu}(1)$ can take in $\mathbb{C}\backslash\operatorname{supp}(\mu)$, $\sigma$ is determined by $\widetilde{\mathcal{C}}_{\mu}(1)(z)$. Since $\widetilde{\mathcal{C}}_{\mu}(1)$ is constant in each connected component of $\mathbb{C}\backslash \operatorname{supp}(\mu)$, the direction of tangency from any point in the connected component of $\mathbb{C}\backslash \operatorname{supp}(\mu)$ containing $z$ to $\operatorname{supp}(\mu)$ is the same. Finally, set \begin{equation}\begin{split}\nonumber\mathcal{I} = \bigl\{\rho>0: \{-t e: t\in (0,\rho]\}&\text{ lies in the same connected component}\\ &\text{ of }\mathbb{C}\backslash\operatorname{supp}(\mu)\text{ as }z\bigr\}.\end{split}\end{equation} We claim that if $\rho\in \mathcal{I}$, then $B(-\rho e, \rho)\cap \operatorname{supp}(\mu)=\varnothing$. Indeed, otherwise there is a point $\zeta\neq 0$ which is a closest point in $\operatorname{supp}(\mu)$ to $-\rho e$. But then it follows that $e=\tfrac{\zeta+\rho e}{|\zeta+\rho e|}$. Given that $\{-te: t\in (0, \rho]\}\cap \operatorname{supp}(\mu)=\varnothing$, this is a contradiction. From this claim, we see that if $\rho \in \mathcal{I}$, then $(0,2\rho)\subset\mathcal{I}$. Since $|z|\in \mathcal{I}$, it follows that $\mathcal{I}=(0,\infty)$, so $B(-\rho e, \rho)\cap \operatorname{supp}(\mu)=\varnothing$ for any $\rho>0$. \end{proof} \begin{proof}[Proof of Proposition \ref{reflectioncharac}] An immediate corollary of Lemma \ref{halfspacelem} is the following statement: For each $z\not\in \operatorname{supp}(\mu)$, there is a half space with $z$ on its boundary which does not intersect $\operatorname{supp}(\mu)$.
Now, suppose that there are three points $z, \xi,\zeta \in \operatorname{supp}(\mu)$ which are not collinear. Then they form a nondegenerate triangle. Since $\mu$ is AD-regular, there is a point $\omega$ in the interior of this triangle that lies outside the support of $\mu$. But then there is a half space, with $\omega$ on its boundary, which is disjoint from $\operatorname{supp}(\mu)$. This half space must contain at least one of the points $z$, $\xi$ or $\zeta$. This is absurd. \end{proof}
\section{Introduction} The topological entropy and the metric entropy of dynamical systems are celebrated invariants under topological and metric conjugacy, respectively. For continuous dynamics on compact spaces, they are related by the variational principle: the topological entropy is the supremum of the metric entropies over all invariant probability measures. A classical problem in ergodic theory is to determine whether this supremum is attained by invariant measures of maximal entropy. The number of such maximizing measures is another interesting question. The Lyapunov exponents are also important quantities which measure the complexity of dynamics. They are defined almost everywhere with respect to any invariant probability measure. By Oseledets' theorem, for an ergodic invariant measure $\mu$ of a diffeomorphism $f\in{\rm Diff}^1(M)$ of a manifold $M$ of dimension $d$, there are $d$ numbers $ \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_d$ such that for $\mu$-a.e. $x \in M$ and for any $v \in T_xM \setminus \{0\}$, we have $\lim_{n \rightarrow \infty} \frac{1}{n} \log \|Df_x^n (v)\| = \lambda_i$ for some $1 \leq i \leq d.$ The $d$ numbers $\lambda_i$ are the \emph{Lyapunov exponents} of $(f, \mu).$ A measure $\mu$ is called \emph{hyperbolic} if all its Lyapunov exponents are non-zero. The entropy and the Lyapunov exponents of smooth diffeomorphisms are related by the celebrated Ruelle inequality and Pesin formula: ``the entropy is at most the sum of the positive Lyapunov exponents, and equality is equivalent to smoothness of the measure along unstable manifolds''. For surface diffeomorphisms, using Ruelle's inequality it is not difficult to see that any measure of non-zero entropy is hyperbolic. Partially hyperbolic dynamics constitutes a successful branch of dynamics beyond uniformly hyperbolic systems (see the next section for definitions).
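The surface claim can be spelled out; the following short derivation is our own rendering of the standard argument. If $d=2$ and $h_\mu(f)>0$, apply Ruelle's inequality to $f$ and to $f^{-1}$ (whose Lyapunov exponents are $-\lambda_2\leq-\lambda_1$):

```latex
% Ruelle's inequality applied to f and to its inverse:
\begin{align*}
0 < h_\mu(f) &\leq \lambda_2^{+} && \text{(Ruelle for } f\text{)},\\
0 < h_\mu(f) = h_\mu(f^{-1}) &\leq (-\lambda_1)^{+} + (-\lambda_2)^{+} && \text{(Ruelle for } f^{-1}\text{)}.
\end{align*}
```

The first line gives $\lambda_2 \geq h_\mu(f) > 0$, so $(-\lambda_2)^{+}=0$, and the second line then forces $\lambda_1 \leq -h_\mu(f) < 0$; hence $\mu$ is hyperbolic.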
For partially hyperbolic diffeomorphisms, the non-hyperbolicity of invariant measures comes from a vanishing exponent along the central bundle. One interesting problem in smooth ergodic theory is to verify the abundance of partially hyperbolic dynamics whose natural measures (for instance, volume or measures of maximal entropy) have non-vanishing central exponents. One approach to this problem is to find mechanisms to remove zero central exponents (see \cite{GT} for a survey). Another is to study the possible rigidity properties of systems with non-hyperbolic natural measures. A deep analysis of the Lyapunov exponents, going back to Furstenberg \cite{F} for stationary sequences of matrices, extended to more general linear cocycles by Ledrappier \cite{L} and generalized to the context of non-linear cocycles by Avila and Viana \cite{AV}, gives an invariance principle for invariant measures with vanishing central exponents. In the spirit of the works of Ledrappier and Avila-Viana, vanishing exponents in the central direction reveal a ``deterministic behavior of the invariant measure along the central foliation''. See Subsection \ref{invariance1} in the Preliminaries for a more precise interpretation. The invariance principle was also noticed by Baxendale \cite{Bax} (see also a neat proof in the circle action case in the work of Deroin, Kleptsyn and Navas \cite{DKN}, and the result of Crauel \cite{cra}). In this paper we work with the notion of entropy along expanding foliations \cite{Y} and give a simple criterion for the invariance principle. In particular, we obtain a simpler proof of the invariance principle, which was formulated in terms of the Lyapunov exponents in the previously known results. We emphasize that in the proofs of the invariance principle by Ledrappier and Avila-Viana a notion of entropy along the central foliation (Kullback information, see \cite{L}) is implicit.
As a byproduct we obtain a rigidity result for partially hyperbolic diffeomorphisms with one dimensional compact central foliation. We consider high entropy ergodic measures of partially hyperbolic dynamics with one dimensional compact central leaves and prove ``strong hyperbolicity'' of such measures for typical dynamics. By strong hyperbolicity we mean that all high entropy ergodic measures have center exponent uniformly bounded away from zero. So, our result is a rigidity statement for partially hyperbolic diffeomorphisms with one dimensional center bundle (see the precise setting in what follows): if there are high entropy ergodic measures which are weakly hyperbolic (with central Lyapunov exponents converging to zero) then in fact the dynamics is conjugate to an isometric extension of an Anosov homeomorphism. In particular, our result sheds light on some conjectures in smooth ergodic theory, especially in the quest for non-hyperbolic ergodic measures related to a conjecture of D\'{i}az-Gorodetski in \cite{DG} (see Section \ref{quest} for more details). We thank S. Crovisier for the comments on the relation of our work with the iterated function systems setting and discussions on the Ledrappier-Young results. \section{Statement of results} Throughout this paper we will work with partially hyperbolic diffeomorphisms. A diffeomorphism $f: M \rightarrow M$ is \emph{partially hyperbolic} if there is a $Tf$-invariant splitting of the tangent bundle $TM = E^s\oplus E^c \oplus E^u$ such that, for some suitable Riemannian metric, for all $x\in M$ and all unit vectors $v^\s\in E^\s_x$ ($\s= s, c, u$) we have: $$\|T_xfv^s\| < \|T_xfv^c\| < \|T_xfv^u\|. $$ Furthermore, $f$ satisfies $\|Tf|_{E^s}\| < 1$ and $\|Tf^{-1}|_{E^u}\| < 1$. For partially hyperbolic diffeomorphisms, it is a well-known fact that there are foliations ${\mathcal F}^\s$ tangent to the distributions $E^\s$ for $\s=s,u$. The leaf of ${\mathcal F}^\s$ containing $x$ will be called ${\mathcal F}^\s(x)$, for $\s=s,u$.
\par In general the central bundle $E^c$ may not be tangent to an invariant foliation. However, whenever such a foliation exists we denote it by $\mathcal{F}^c.$ Our first main result is a criterion, in terms of entropy, for the so-called invariance principle (see Subsections \ref{invariance1} and \ref{invariance2}) for cocycles over {\it Anosov homeomorphisms} (see the Preliminaries section). The building block of the proof uses arguments similar to those of Ledrappier \cite{L} and Ledrappier-Young \cite{LY}. Our proof depends on the analysis of the partial entropy of the measures along expanding foliations (see \cite{L1}, \cite{LY2}, where this notion was first introduced, and \cite{Y} for a recent generalization). In particular, our approach permits us to give an interpretation of the proof of the invariance principle obtained by Avila-Viana \cite{AV} in the case of cocycles on fiber bundles with compact fibers over an Anosov homeomorphism, without using deformation of cocycles. Let $f: M \rightarrow M$ be a partially hyperbolic diffeomorphism satisfying the following conditions (see the Preliminaries, Section \ref{preliminaries}, for the definitions): \begin{itemize} \item H1. $f$ is dynamically coherent with all center leaves compact, \item H2. $f$ admits global holonomies, \item H3. $f_c$ is a transitive topological Anosov homeomorphism, where $f_c$ is the induced dynamics satisfying $f_c \circ \pi = \pi \circ f$ and $\pi : M \rightarrow M/\mathcal{F}^c$ is the natural projection to the space of central leaves. \end{itemize} A large class of partially hyperbolic systems, the fibered partially hyperbolic systems, satisfies (H1) and (H2), and all known examples satisfy (H3). In particular, it is shown in \cite{HP} that, on any 3-dimensional nilmanifold different from the torus, every partially hyperbolic diffeomorphism satisfies (H1), (H2) and (H3). Denote by $h (\mu, \mathcal{F}^u)$ the ``entropy along the unstable foliation of $f$'' (see Subsection \ref{entropyfoliation} for the details).
For an $f$-invariant measure $\mu,$ let $\nu = \pi_*(\mu)$ and let $\{\mu^u_x\}, \{\nu^u_{\pi(x)}\}$ denote respectively the conditional measures of $\mu$ and $\nu$ along suitable measurable partitions subordinated to the unstable foliations of $f$ and $f_c.$ We say $\mu \in Gibb^u_{\nu}(f)$ if $\pi_*(\mu^u_x) = \nu^u_{\pi(x)}$ for $\mu$-almost every $x \in M.$ This property is equivalent to the so-called $u$-invariance of $\{\mu^c_x\}_{x \in M}$ (the conditional measures of $\mu$ along the central foliation) under unstable holonomies. See Proposition \ref{gibbs-uinvariant} for the details. We prove the following main theorem: \begin{main} \label{u-invariantprinciple} (Entropy criterion for $u$-invariance) Let $f$ be a $C^2$ partially hyperbolic diffeomorphism satisfying H1, H2 and H3. Let $\mu$ be an $f$-invariant probability measure and $\nu := \pi_* \mu.$ Then $h_{\mu} (f, \mathcal{F}^u) \leq h_{\nu} (f_c)$, and equality occurs if and only if $\mu \in Gibb^u_{\nu} (f).$ \end{main} Observe that in the above theorem we do not assume any hypothesis on the measures $\mu$ and $\nu$ to obtain the inequality. As a corollary we obtain the following invariance principle. \begin{corollary} \label{exp-invariantprinciple} Let $\mu$ and $f$ be as in Theorem \ref{u-invariantprinciple}. If all the central Lyapunov exponents of $\mu$ are non-positive almost everywhere, then $\mu$ is $u$-invariant. \end{corollary} Finally, let us give another corollary of our main theorem, which will be used in Section \ref{proofhighentropy}. \begin{corollary} \label{onedimensional} Let $\mu$ and $f$ be as in Theorem \ref{u-invariantprinciple}. If the central foliation is one dimensional then $\mu \in Gibb^u_{\nu}(f)$ if and only if $h_{\mu}(f) = h (\mu, \mathcal{F}^u).$ \end{corollary} We also add another corollary, a bit more technical but useful in the development of results relying on the invariance principle.
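To fix ideas, here is a sanity check of Theorem \ref{u-invariantprinciple} in the simplest situation (our own illustration, with the obvious identifications assumed): take $f = A\times \mathrm{id}_{S^1}$ on $\mathbb{T}^2\times S^1$ with $A$ Anosov, so that $f_c = A$, and take $\mu = \nu\times\mathrm{Leb}$ for an $A$-invariant probability $\nu$. The unstable plaques of $f$ are horizontal, of the form $P\times\{\theta\}$ with $P$ an unstable plaque of $A$, and disintegrating the product measure gives

```latex
% Conditional measures of the product measure on horizontal unstable plaques:
\mu^u_{(x,\theta)} = \nu^u_x \times \delta_\theta,
\qquad\text{hence}\qquad
\pi_*\bigl(\mu^u_{(x,\theta)}\bigr) = \nu^u_x = \nu^u_{\pi(x,\theta)},
```

so $\mu\in Gibb^u_{\nu}(f)$, consistent with the equality case $h_{\mu}(f,\mathcal{F}^u) = h_{\nu}(f_c)$ of the theorem.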
Let $M, N$ be compact manifolds and $$ f: M \times N \rightarrow M \times N, \quad (x, \theta) \mapsto (A(x), f_{x}(\theta)), $$ where $A$ is an Anosov diffeomorphism. Fix any $A$-invariant probability measure $\nu.$ \begin{corollary} \label{invariancelimit} Let $f_n$ be maps as above (over the same $A$) and let $\mu_n$ be $f_n$-invariant measures such that $f_n \rightarrow f$ and $\mu_n \rightarrow \mu.$ If $\mu_n \in Gibb^u_{\nu}(f_n)$ then $\mu \in Gibb^u_{\nu}(f).$ \end{corollary} We thank an anonymous referee for pointing out this corollary of our main result. \subsection{Rigidity of high entropy measures in the partially hyperbolic setting} Let $M$ be a smooth manifold. Denote by $\operatorname{SPH}_1(M)$ the set of $C^2$ partially hyperbolic diffeomorphisms $f$ on $M$ with {\bf one-dimensional} central bundle satisfying hypotheses (H1), (H2) and (H3), plus the accessibility property. We remark that if $M$ is a closed orientable 3-manifold and $f$ is partially hyperbolic with compact center manifolds, and $E^s, E^c$ and $E^u$ are orientable, then $f_c$ is conjugate to a hyperbolic toral automorphism (see Theorem 3 in \cite{HHTU}). A special class of partially hyperbolic diffeomorphisms in $\operatorname{SPH}_1(M)$ are those of rotation type. We say $f\in\operatorname{SPH}_1(M)$ is of \emph{rotation type} if there exists a continuous action of $\mathbb{S}^1$ on $M$ by isometries which commutes with $f;$ that is, $\rho_{\theta} : M \rightarrow M, \theta \in \mathbb{S}^1,$ with $f \circ \rho_{\theta} =\rho_{\theta} \circ f.$ Any rotation type partially hyperbolic diffeomorphism admits a unique measure of maximal entropy, and its center Lyapunov exponent vanishes almost everywhere. This is the non-generic case (in $\operatorname{SPH}_1(M)$) where the unique measure of maximal entropy is non-hyperbolic (see Theorem \ref{dichotomy}).
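A model case of the rotation type phenomenon (our own illustration; note that it is not accessible, hence lies just outside $\operatorname{SPH}_1(M)$) is $f = A\times R_\alpha$ on $\mathbb{T}^2\times S^1$, with $A$ a hyperbolic automorphism and $R_\alpha$ an irrational rotation. The circle action $\rho_\theta(x,t) = (x, t+\theta)$ is isometric and commutes with $f$. Since $Df|_{E^c} = \mathrm{id}$,

```latex
% The center derivative is the identity, so the center exponent vanishes:
\lambda^c(\mu) = \lim_{n\to\infty}\frac{1}{n}\log\bigl\|Df^n|_{E^c}\bigr\| = 0
\quad\text{for every invariant measure }\mu,
```

and, since the Bernoulli system $(A,\mu_A)$ is disjoint from the zero entropy rotation $(R_\alpha,\mathrm{Leb})$, the unique measure of maximal entropy is $\mu_A\times\mathrm{Leb}$, which is non-hyperbolic.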
It has been shown in \cite{HHTU} that every $f\in \operatorname{SPH}_1(M)$ which is not of rotation type admits only finitely many (strictly more than one) ergodic measures of maximal entropy $\mu_1^+,\cdots, \mu_{k(+)}^+$ and $\mu_1^-,\cdots, \mu_{k(-)}^-$, where $\mu_i^+$ has positive center exponent for $1\leq i \leq k(+)$, and $\mu_i^-$ has negative center exponent for $1\leq i \leq k(-)$. \begin{theorem} \label{dichotomy} \cite{HHTU} Suppose $f\in \operatorname{SPH}_1(M)$. Then $f$ admits finitely many ergodic measures of maximal entropy, and there are two possibilities: \begin{enumerate} \item $f$ is of rotation type and has a unique entropy maximizing measure $\mu$. The central Lyapunov exponent $\lambda_c(\mu)$ vanishes and $(f, \mu)$ is isomorphic to a Bernoulli shift. \item $f$ has more than one ergodic entropy maximizing measure, all of which have non-vanishing central Lyapunov exponent: some of these measures have positive central exponent and some have negative central exponent. For any such measure $\mu$, $(f, \mu)$ is a finite extension of a Bernoulli shift. \end{enumerate} \end{theorem} A more precise description of the ergodic maximal measures in the latter situation of the last theorem was obtained in \cite{UVY}. The complement of the set of rotation type diffeomorphisms is $C^1$-open and $C^{\infty}$-dense, and on it we prove that high entropy ergodic measures are hyperbolic. \begin{main}\label{main.nonunihyp} Suppose $f\in\operatorname{SPH}_1(M)$ is not of rotation type. Then there are $\varepsilon >0$ and $\lambda_0>0$ such that for every ergodic invariant probability measure $\mu$ of $f$ with entropy larger than $h_{top}(f)-\varepsilon$, its center exponent satisfies $|\lambda^c(\mu)|>\lambda_0$. \end{main} Theorem~\ref{main.nonunihyp} is a corollary of the following main result.
\begin{main}\label{main.converging} Let $f\in \operatorname{SPH}_1 (M)$ be not of rotation type, and let $\{\mu_n\}_{n=1}^\infty$ be a sequence of ergodic probability measures of $f$ such that $\lim_{n\to \infty} h_{\mu_n}(f)=h_{top}(f)$. Suppose $\mu_n$ converges to $\mu$ in the weak-$*$ topology and all $\mu_n$ have non-positive center exponent. Then $\mu$ is a convex combination of $\mu^-_1,\cdots, \mu^-_{k(-)}$. \end{main} \section{Quest for (non-)hyperbolic measures} \label{quest} We would like to mention that Theorem \ref{main.nonunihyp} sheds light on some questions and conjectures in smooth ergodic theory. \begin{conjecture} (D\'{i}az-Gorodetski \cite{DG}) In ${\rm Diff}^r(M), r \geq 1,$ there exists an open and dense subset $\mathcal{U} \subset {\rm Diff}^r(M)$ such that every $f \in \mathcal{U}$ is either uniformly hyperbolic or has an ergodic non-hyperbolic invariant measure. \end{conjecture} By our result, for a $C^1$-open and $C^{\infty}$-dense set of partially hyperbolic dynamics with one dimensional compact central leaves (forming a circle bundle), one cannot look for non-hyperbolic ergodic measures among measures of large entropy. Let us mention that by a result of Bochi, Bonatti and D\'{i}az \cite{BBD}, there exists an open and dense subset $\mathcal{U} \subset \operatorname{SPH}_1(M) \cap RT(M)$ such that any $f \in \mathcal{U}$ has an ergodic measure with positive entropy and zero central Lyapunov exponent. Here $RT(M)$ is the set of $C^1$-robustly transitive diffeomorphisms. By our theorem, these non-hyperbolic ergodic measures cannot have high entropy. J. Buzzi [Section 1.5, \cite{Buz}] posed questions about the abundance of hyperbolicity (of measures) for typical partially hyperbolic dynamical systems with one dimensional central bundle. Our result gives a partial answer to his questions too. We would also like to recall a recent result of D\'{i}az-Gelfert-Rams \cite{DGR}.
They study transitive step skew-product maps modeled over a complete shift whose fiber maps are circle maps. They focus on non-hyperbolic measures (with zero fiber exponent) and prove that such measures are approximated in the weak-$*$ topology and in entropy by hyperbolic measures. In the proof of their Theorem 3, they consider three cases for the variational principle, where the first case is $h_{top} = \sup_{\mu \in \mathcal{M}_{erg, 0}} h_{\mu}$, with $\mathcal{M}_{erg, 0}$ the subset of ergodic measures of the step skew product with vanishing fiber exponent. By our result this first case does not occur. We will not give the rigorous proof of this fact here; it will appear elsewhere. This gives more accurate information for their study. Still, finding the value $\sup_{\mu \in \mathcal{M}_{erg, 0}} h_{\mu}$ is interesting. We would like to mention that in the general setting of partially hyperbolic diffeomorphisms with one dimensional central bundle, it is not clear whether high entropy ergodic measures inherit the hyperbolicity of the ergodic measures of maximal entropy (whenever all of the ergodic measures of maximal entropy are hyperbolic). For $C^{1+ \alpha}$ diffeomorphisms in the homotopy class of Anosov diffeomorphisms of $\mathbb{T}^3$, an argument similar to Theorem 5.1 in \cite{ures} shows that high entropy measures are hyperbolic. \subsection{Less regularity} Although throughout this article we always assume the diffeomorphisms to be $C^2$, Theorems~\ref{main.nonunihyp} and~\ref{main.converging} also hold for $C^{1+\alpha}$ diffeomorphisms. In fact, the only place where the $C^2$ hypothesis is used is where we invoke the Lipschitzness of the unstable holonomy inside center-unstable plaques (see \cite{LY}) to conclude that when the center Lyapunov exponents are non-positive, the entropy of a measure is equal to its entropy along the unstable foliation (see the proof of Theorem \ref{main.converging}).
More precisely, the $C^2$ hypothesis is used to obtain Lipschitz holonomy of the Pesin unstable lamination inside the center-unstable set (see \cite[Section 4.2]{LY}). For $f\in {\rm Diff}^{1+\alpha} (M)$, it has been shown in \cite{Brown} that the strong unstable foliation restricted to each center-unstable leaf is Lipschitz, and then one may repeat the proof of \cite{LY}. \section*{Acknowledgments} A.T. was in a research period at Universit\'{e} Paris-Sud (thanks to the support of FAPESP-Brasil: 2014/23485-2 and CNPq-Brasil) and would like to thank the Laboratoire de Topologie for its hospitality and, in particular, Sylvain Crovisier and J\'{e}r\^{o}me Buzzi for many useful conversations. J.Y. was partially supported by CNPq, FAPERJ, and PRONEX. \section{Preliminaries} \label{preliminaries} A partially hyperbolic diffeomorphism is called {\it accessible} if any two points in the manifold can be joined by a path consisting of finitely many arcs, each tangent to either $E^s$ or $E^u.$ In general it is not true that there is a foliation tangent to $E^c$. Indeed, there may be no foliation tangent to $E^c$ even if $\dim E^c =1$ (see \cite{HHU}). We shall say that $f$ is \emph{dynamically coherent} if there exist invariant foliations ${\mathcal F}^{c\s}$ tangent to $E^{c\s}=E^c \oplus E^\s$ for $\s=s,u$. Note that by taking the intersection of these foliations we obtain an invariant foliation ${\mathcal F}^c$ tangent to $E^c$ that subfoliates ${\mathcal F}^{c\s}$ for $\s=s,u$.
Observe that ${\mathcal F}^{\s}$ also subfoliates ${\mathcal F}^{c\s}$ for $\s \in \{s, u\}.$ For any dynamically coherent $f$ and any two points $x, y$ with $y \in \mathcal{F}^u(x),$ there are a neighbourhood $U_x$ of $x$ in $\mathcal{F}^c(x)$ and a homeomorphism onto its image $H^u_{x, y} : U_x \rightarrow \mathcal{F}^c(y)$ such that $H^u_{x, y}(x) =y$ and $H^u_{x, y}(z) \in \mathcal{F}^u(z) \cap \mathcal{F}^c_{loc} (y).$ Similarly, one may define local stable holonomies $H^s_{x, y}$ for $y \in \mathcal{F}^s(x).$ We say $f$ admits global unstable holonomy if for any $y \in \mathcal{F}^u(x)$ the holonomy is defined globally, $H^u_{x, y} : \mathcal{F}^c(x) \rightarrow \mathcal{F}^c(y).$ The notion of global stable holonomy is defined similarly, and $f$ admits global holonomies when it admits both global stable and global unstable holonomies. There are many robust examples of partially hyperbolic diffeomorphisms which are dynamically coherent. The simplest construction goes as follows. Start with a hyperbolic toral automorphism $A : \mathbb{T}^2\to \mathbb{T}^2$, and then let $f_0\in {\rm Diff}(\mathbb{T}^2\times S^1)$ be the trivial skew product $$f_0((x,\theta))= (A(x),\theta). $$ Then $f_0$ is a partially hyperbolic diffeomorphism, and so is every $f$ in a $C^1$ neighborhood of $f_0$. Moreover, it follows from general results in \cite{HPS} that such $f$ is indeed dynamically coherent and admits global holonomies. A generalization of the above examples is given by the fibered partially hyperbolic systems (see Avila-Viana-Wilkinson and Hirsch-Pugh-Shub \cite{HPS}). They are examples of dynamically coherent dynamics admitting global holonomies.
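For concreteness, one may take for $A$ the automorphism induced by the matrix below (a standard choice; the explicit numbers are our own illustration). Its eigendirections, lifted to $\mathbb{T}^2\times S^1$, give the bundles $E^u$ and $E^s$, while $E^c$ is the $S^1$ direction:

```latex
% Eigenvalues of the induced hyperbolic automorphism (trace 3, determinant 1):
A=\begin{pmatrix}2&1\\1&1\end{pmatrix},\qquad
\lambda^{u}=\frac{3+\sqrt{5}}{2}>1>\lambda^{s}=\frac{3-\sqrt{5}}{2}>0,
```

so that $\|Tf_0|_{E^s}\| = \lambda^{s} < 1$, $\|Tf_0^{-1}|_{E^u}\| = (\lambda^{u})^{-1} < 1$ and $Tf_0|_{E^c} = \mathrm{id}$, verifying partial hyperbolicity of $f_0$ directly.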
A partially hyperbolic diffeomorphism $f: M \rightarrow M$ is a fibered partially hyperbolic system if there exists a continuous fiber bundle $\pi : M \rightarrow B$ with fibers modeled on a compact manifold such that $\pi^{-1} (b)$ is a center leaf of $f$ for any $b \in B.$ Let $M_c:= M / \mathcal{F}^c$ be the quotient space and denote by $f_c$ the induced dynamics, i.e., $f_c \circ \pi = \pi \circ f$, where $\pi: M \rightarrow M_c$ is the natural projection. Then $f_c$ is a {\it topological Anosov homeomorphism} (see \cite{Vi} and Section 2.2 of \cite{YV}). We assume that $f_c$ is transitive (hypothesis H3 in Section 2), which is the case for all known examples. A transitive topological Anosov homeomorphism shares many properties with Anosov diffeomorphisms. For instance, it has a pair of topological foliations which play the same role as the stable/unstable foliations, there exist Markov partitions (for Markov partitions in the topological Anosov setting see \cite{Hi}), and its unique measure of maximal entropy can be obtained by the Margulis method. We give a more precise description in what follows. An Anosov homeomorphism of $M_c$, by definition, admits two invariant topological foliations ${\mathcal W}^s$ and ${\mathcal W}^u$ with dynamical properties similar to those in the diffeomorphism case.
The leaves are topological submanifolds and $${\mathcal W}^s(\xi) = \bigcup_{n\geq 0} f_c^{-n} {\mathcal W}^s_{\epsilon}(f_c^n(\xi)), \quad {\mathcal W}^u(\xi) = \bigcup_{n\geq 0} f_c^{n} {\mathcal W}^u_{\epsilon}(f_c^{-n}(\xi)),$$ where $${\mathcal W}^s_{\epsilon}(\xi) = \{ \eta \in M_c: d_c(f_c^n (\xi),f_c^n (\eta) ) \leq \epsilon \text{ for all } n\geq 0\}, $$ $$ {\mathcal W}^u_{\epsilon}(\xi) = \{\eta \in M_c: d_c(f_c^{-n} (\xi),f_c^{-n} (\eta) ) \leq \epsilon \text{ for all } n\geq 0\}, $$ and $d_c$ is a distance on $M_c:$ $$ d_c(\xi,\eta) := \sup_{x \in \xi} \inf_{y \in \eta} d(x, y) + \sup_{y \in \eta} \inf_{x \in \xi} d(x, y) $$ for any $\xi, \eta \in M_c.$ As we will assume that $f_c$ is transitive throughout this article, the quotient space $M_c$ is homotopic to a $(d-1)$-dimensional torus ${\mathbb T}^{d-1}$. We use ${\mathcal F}^{\s}$, $\s=s,c,u$, to denote the invariant foliations of $f$, and ${\mathcal W}^i$, $i=s,u$, to denote the stable and unstable foliations of $f_c$. In summary: $f_c$ is a transitive topological Anosov homeomorphism; it admits a Markov partition and a unique measure of maximal entropy $\nu$. \subsection{Nonlinear cocycles and the Invariance Principle} \label{invariance1} Any $f: M \rightarrow M$ satisfying (H1), (H2) and (H3) can be considered as a smooth (nonlinear) cocycle over $f_c: M_c \rightarrow M_c$ with global holonomies.
We denote by $M_{x_c}$ the fiber $\pi^{-1}(x_c)$, which is a center leaf, and by $f_{x_c} : M_{x_c} \rightarrow M_{f_c(x_c)}$ the restriction of $f$ to $M_{x_c}.$ By hypothesis we have two families of stable and unstable holonomies: for any $y_c \in W^u(x_c)$, the map $H^u_{x_c , y_c} : M_{x_c} \rightarrow M_{y_c}$ satisfies: \begin{itemize} \item $H^u_{y_c, z_c} \circ H^u_{x_c, y_c} = H^u_{x_c, z_c}$ and $H^u_{x_c, x_c} = id$, \item $f_{y_c} \circ H^u_{x_c, y_c} = H^u_{f_c(x_c), f_c(y_c)} \circ f_{x_c}$. \end{itemize} A crucial feature of the above holonomy maps is that they are Lipschitz when $f$ is $C^2.$ The consequences of this regularity are explored in the work of Ledrappier-Young \cite{LY} and are used in our paper. If the central foliation is one-dimensional then the Lipschitz property holds under lower regularity, for instance for $f \in C^{1+ \alpha}.$ See \cite{PSW,Brown} for other technical conditions guaranteeing the Lipschitz property of stable/unstable holonomies inside center-stable/center-unstable manifolds. Any $f$-invariant measure $\mu$ projects down to $\nu:= \pi_* \mu$, which is invariant by $f_c.$ By Rokhlin's disintegration theorem \cite{R}, there exists a system of conditional measures $\{\mu^c_{x_c}\}$ such that $$\mu = \int \mu^c_{x_c} d \nu(x_c).$$ We seek a criterion which implies the invariance of the conditional measures under the $u$-holonomy (or $s$-holonomy), i.e. \begin{equation} \label{determinism} \mu^c_{y_c} = (H^u_{x_c, y_c})_{*} \mu^c_{x_c} \end{equation} for any $y_c \in W^u(f_c, x_c)$, for $x_c, y_c$ belonging to a full $\nu$-measure subset of $M_c.$ Similarly we define the $s$-invariance of the conditional measures. This invariance of the conditional measures under holonomies is called the invariance principle by Avila and Viana \cite{AV}.
The simple but fundamental observation is that instead of verifying the $u$-invariance of $\{\mu^c_{x_c}\}$ we may equivalently verify the $c$-invariance of the conditional measures on local unstable plaques $\{ \mu^u_{x} \}$. This can be rewritten as $$\pi_* \mu^u_{x} = \nu^u_{\pi(x)}.$$ We write $\mu \in Gibb^u_{\nu} (f)$ for any measure satisfying the above condition. See Section \ref{proofIP} for more precise definitions. The choice of local unstable plaques is based on the fact that the quotient dynamics $f_c$ admits a natural partition (a Markov partition), and by means of such a partition we are able to define a measurable partition subordinated to the unstable foliation. The atoms of such a measurable partition are called unstable plaques. The above observation about the equivalence between $u$-invariance and $c$-invariance, together with other abstract measure theoretical results, is proved in Section \ref{measurabletoolbox}. \subsection{Invariance principle with vanishing exponents} \label{invariance2} Applying Corollary \ref{exp-invariantprinciple} to $f$ and $f^{-1}$, one concludes that if the central Lyapunov exponents vanish then $\mu$ is both $u$- and $s$-invariant. That means there are two systems of conditional measures, one of them $s$-invariant and the other $u$-invariant. Observe that up to now the conditional measures are measurable objects: they are defined on a full measure subset and vary measurably. If we suppose that $\nu:= \pi_* \mu$ has {\bf local product structure} (as defined in the measurable toolbox, Section \ref{measurabletoolbox}) then one can prove that there exists a system of conditional measures defined {\bf everywhere}, and the conditional measures depend continuously on the base point. This last passage makes a subtle use of a Hopf-type argument; see Avila-Viana's invariance principle (Theorem D and Proposition 4.8 in \cite{AV}).
So we would like to remark that our theorem, combined with the Hopf-type argument given by Avila-Viana, yields a different and shorter proof of the invariance principle. Moreover, our approach makes the role of entropy much clearer. We emphasize that we prove and apply our result in the case of partially hyperbolic diffeomorphisms (as we see in Section \ref{proofhighentropy}). Avila-Viana proved the $u$-invariance principle in a more general setting and, in the same spirit as Ledrappier's result, for abstract sub-$\sigma$-algebras of the base space; see Theorem B in \cite{AV}. \section{A measurable toolbox} \label{measurabletoolbox} In this section we develop an abstract measurable toolbox which deals with disintegrations of measures and their properties, mostly related to invariance with respect to the holonomy of foliations. {\bf The notations are similar to the dynamical ones, but no dynamics is assumed here.} Let $(A:=[0, 1]^{c+u}, \mu^{cu})$ be the unit cube equipped with a probability measure, and let $\mathcal{F}^c , \mathcal{F}^u$ be a pair of transversal foliations of $A$ with compact leaves of dimensions $c$ and $u$ respectively. We assume the following topological product structure: there exists a continuous bijection $Q(\cdot , \cdot) : \mathcal{F}^u(x_0) \times \mathcal{F}^c(x_0) \rightarrow [0, 1]^{c+u}$ such that $Q(x, y) = \mathcal{F}^c(x) \cap \mathcal{F}^u(y).$ \begin{definition} \label{def:mensuravel} We say that a partition $\mathcal P$ is measurable (or countably generated) with respect to $\mu$ if there exist a measurable family $\{A_i\}_{i \in \mathbb N}$ and a measurable set $F$ of full measure such that if $B \in \mathcal P$, then there exists a sequence $\{B_i\}$, where $B_i \in \{A_i, A_i^c \}$, such that $B \cap F = \bigcap_i B_i \cap F$. \end{definition} Let $\mathcal P$ be a measurable partition of a compact metric space $M$ and $\mu$ a Borel probability measure.
Then, by Rokhlin's theorem \cite{R}, there exists a disintegration by conditional probabilities for $\mu$. The foliations $\mathcal{F}^{u,c}$ will be considered as measurable partitions and we denote by $\{\mu_{x}^c\}$ and $\{\mu^u_{x}\}$ the systems of conditional probability measures along $\mathcal{F}^c$ and $\mathcal{F}^u:$ $$\mu^{cu} = \int_{A} \mu^u_{x} d\mu^{cu} (x) = \int_{A} \mu^{c}_x d \mu^{cu}(x)$$ where $\mu_{x}^{u}$ (resp. $\mu^c_x$) is a probability measure depending only on the leaf $\mathcal{F}^{u} (x)$ (resp. $\mathcal{F}^c(x)$). Another equivalent way to write the disintegration equation (along $\mathcal{F}^c$) above is to consider the quotient space $A/ \mathcal{F}^{c}$ equipped with the quotient measure $\tilde{\mu}^{cu}:= \pi_*(\mu^{cu})$ where $\pi: A \rightarrow A/\mathcal{F}^{c}$ is the canonical projection. We can write $$ \mu^{cu} = \int_{A/\mathcal{F}^c} \mu_{P}^c d \tilde{\mu}^{cu} (P) $$ where $\mu^c_{P}$ is the conditional probability measure on a typical leaf of $\mathcal{F}^c$. By definition, for any integrable function $\phi: A\to \mathbb{R}$, we have $$\int_{A} \phi d \mu^{cu} = \int_{A/\mathcal{F}^c} \int_{P} \phi(x) d \mu_{P}^c (x) d \tilde{\mu}^{cu}(P). $$ The product structure of the pair of foliations above permits us to define holonomy maps $H^u$ and $H^c$ respectively between leaves of $\mathcal{F}^c$ and $\mathcal{F}^u.$ We say a system of disintegration $\mu^c$ is \emph{$u-$invariant} if $\mu_y^{c} = (H^u_{x, y})_{*} \mu_x^c$ for $x,y$ belonging to a full $\mu^{cu}$-measure subset, where $H_{x, y}^u$ is the $u-$holonomy map between $\mathcal{F}^c(x)$ and $\mathcal{F}^c(y)$ induced by the foliation $\mathcal{F}^u$.
Similarly we define \emph{$c-$invariance} of $\{\mu_{y}^u\}$ by $\mu_{y}^u = (H^c_{x, y})_{*} \mu^u_{x}.$ \begin{lemma} \label{product} If $\{\mu_{x}^c\}$ is $u-$invariant then $\{\mu_x^u\}$ is $c-$invariant and $\mu^{cu} = Q_* (\mu^u_{x_0} \times \mu^c_{x_0})$ for any typical point $x_0.$ \end{lemma} \begin{proof} By the definition of conditional measures and $u-$invariance of $\mu^c$ we have $$ \int \phi d \mu^{cu} = \int_{\mathcal{F}^u(x_0)} \int_{\mathcal{F}^c(x_0)} \phi \circ H^u_{x_0, z}(x) d \mu^c_{x_0}(x) d \tilde{\mu} (z) $$ for any continuous function $\phi$, where $\tilde{\mu}$ is the quotient measure on the quotient space identified with $\mathcal{F}^u(x_0).$ Using Fubini's theorem and the fact that $H^u_{x_0, z}(x) = H^c_{x_0, x} (z)$ for any $x \in \mathcal{F}^c(x_0) , z \in \mathcal{F}^u(x_0)$ we obtain $$ \int \phi d \mu^{cu} = \int_{\mathcal{F}^c(x_0)} \int_{\mathcal{F}^u(x_0)} \phi \circ H^c_{x_0, x}(z) d \tilde{\mu}(z) d \mu^c_{x_0}(x). $$ By essential uniqueness of disintegration, the above equality shows that the system of conditional measures $\{ \mu^u\}$ satisfies $\mu^u_{x} = (H^c_{x_0, x})_* \tilde{\mu}$, which implies the $c-$invariance. In particular $\mu^u_{x_0} = \tilde{\mu}$, so the first displayed equality reads $\int \phi d\mu^{cu} = \int \phi \circ Q \; d(\mu^u_{x_0} \times \mu^c_{x_0})$, since $H^u_{x_0, z}(x) = \mathcal{F}^c(z) \cap \mathcal{F}^u(x) = Q(z,x)$; this is exactly the last claim of the lemma. \end{proof} \subsection{Disintegration along three (coherent) foliations} \label{x0} Now we consider the unit cube $K = [0, 1]^{s+c+u}$ equipped with a probability measure $\mu$ and three transverse foliations $\mathcal{F}^{s}, \mathcal{F}^{c}, \mathcal{F}^{u}$ (we call them stable, central and unstable foliations) and assume the following coherence property: there exist two more foliations $\mathcal{F}^{cu}$ (center-unstable foliation) and $\mathcal{F}^{cs}$ (center-stable foliation) which are subfoliated respectively by $\mathcal{F}^{c}, \mathcal{F}^{u}$ and by $\mathcal{F}^{c}, \mathcal{F}^{s}.$ Moreover, inside any leaf of $\mathcal{F}^{cs}$ the leaves of $\mathcal{F}^{c}$ and $\mathcal{F}^{s}$ have product structure.
Similarly we assume product structure inside leaves of $\mathcal{F}^{cu}.$ We have two holonomy maps, called unstable holonomy and stable holonomy. The unstable holonomy is defined between any two central leaves inside a center-unstable leaf, and similarly we define the stable holonomy. Observe that in this section we are not assuming any dynamical property for these foliations; we just use the names that will be used later. We make some useful geometrical identifications of the quotient spaces. Fix a point $x_0 \in K$ from now on. By the coherence hypothesis the quotient spaces $K/\mathcal{F}^s$ and $K/\mathcal{F}^u$ may be identified with $\mathcal{F}^{cu}_{x_0}$ and $\mathcal{F}^{cs}_{x_0}$, respectively. The quotient by the central foliation $K /\mathcal{F}^c$ is a compact metric space and it admits two topological foliations $W^u$ and $W^s.$ Let $\pi: K \to \tilde{K}=K/ \mathcal{F}^c$ be the canonical projection and $\nu:= \pi_* (\mu)$. We have the following two important properties: \begin{itemize} \item $\pi(\mathcal{F}^{cu} (x)) = W^u(\pi(x))$; \item $\pi(\mathcal{F}^{cs} (x)) = W^s(\pi(x))$.
\end{itemize} So we may identify $K/ \mathcal{F}^{cu}$ and $K/ \mathcal{F}^{cs}$ respectively with $W^s(\pi (x_0))$ and $W^u(\pi(x_0)).$ We also may identify $\tilde{K} / W^u$ and $\tilde{K} / W^s$ respectively with $W^s(\pi(x_0))$ and $W^u(\pi(x_0)).$ With a slight abuse of notation, for $t \in W^s(\pi (x_0))$ we denote by $\mathcal{F}_{t}^{cu}$ the center-unstable plaque corresponding to $t.$ In what follows we study conditional measures along various foliations; by $$\{\mu_{x}^{*}\}, * \in \{s, c, u, cs, cu\}$$ we mean the conditional probabilities along leaves of the foliation $\mathcal{F}^{*}.$ We also may disintegrate $\nu$ along the foliations $W^u$ and $W^s$, obtaining two additional systems of conditional measures $\nu_{\pi(x)}^u$ and $\nu_{\pi(x)}^s.$ \begin{lemma} \label{quotient} $\pi_* (\mu_{x}^{cu}) = \nu^u_{\pi(x)}$ for $\mu$ almost every $x.$ \end{lemma} \begin{proof} By definition $\mu = \displaystyle{\int} \mu^{cu} d \tilde{\mu}$ where $\tilde{\mu}$ is the probability on the quotient $K / \mathcal{F}^{cu}$, which is identified with $W^s(\pi(x_0)).$ We also have $\nu = \displaystyle{\int} \nu^u d \tilde{\nu}$ where the quotient measure $\tilde{\nu}$ is defined on $\tilde{K} / W^u = W^s(\pi(x_0)).$ It is not difficult to see that $\tilde{\nu} = \tilde{\mu}.$ Indeed, taking any measurable subset $S \subset W^s(\pi(x_0))$ we have: $$ \tilde{\mu} (S) = \mu( \bigcup_{t \in S} \mathcal{F}^{cu}(t)) $$ and $$ \tilde{\nu} (S) = \nu(\bigcup_{t \in S} W^u(t)) = \mu( \pi^{-1} (\bigcup_{t \in S} W^u(t)) ) = \mu( \bigcup_{t \in S} \mathcal{F}^{cu}(t)). $$ So, by definition \begin{align*} \int_{\tilde{K}/ W^u } \nu^u d \tilde{\nu} = \nu =& \pi_*\mu = \pi_* (\displaystyle{\int}_{K/ \mathcal{F}^{cu}}\mu^{cu} d \tilde{\mu} ) \\=& \int_{K/ \mathcal{F}^{cu}} \pi_{*} \mu^{cu} d \tilde{\mu} = \int_{\tilde{K}/ W^u } \pi_{*} \mu^{cu} d \tilde{\nu} \end{align*} By essential uniqueness of disintegration we conclude the proof of the lemma.
\end{proof} \begin{proposition} \label{gibbs-uinvariant} $\nu_{\pi(x)}^u = \pi_* (\mu_{x}^u)$ holds for $\mu$ almost every $x$ if and only if $\mu^c$ is $u-$invariant. \end{proposition} By the coherence hypothesis we may speak of the $H^u$ holonomy between two central leaves, and consequently $\{\mu^c\}$ being $u-$invariant makes sense. \begin{proof} Let us prove the ``if'' part: observe that by essential uniqueness of disintegration the family of conditional measures $\mu_{x}^u$ almost everywhere coincides with the disintegration of $\mu^{cu}_x$ along $\mathcal{F}^u.$ The same statement holds for $\mu^c$; that is, the disintegration of $\mu^{cu}_x$ along the central plaques coincides with $\mu^c_x.$ Observe that for any center-unstable plaque $\mathcal{F}^{cu}(x)$, the quotient by central plaques can be identified with the unstable plaque $\mathcal{F}^u(x).$ By Lemma \ref{product} and invariance of $\mu^c$ by $u-$holonomy we conclude that $\mu^u$ is invariant by $\mathcal{F}^c$ holonomy. This yields that for any $D \subset \mathcal{F}^u(x)$ we have $$\mu^{cu}_{x} (\mathcal{F}^c(D)) = \int_{\mathcal{F}^u(x)} \mu^u_z (H_{x,z}^c (D)) d \tilde{\mu}^{cu} (z) = \mu_{x}^u(D)$$ where $ \mathcal{F}^c(D) = \bigcup_{z \in D} \mathcal{F}^c(z)$, $H_{x, z}^c$ is the central holonomy map between two unstable leaves and $\tilde{\mu}^{cu}$ is the probability on the quotient space $\mathcal{F}^{cu}/\mathcal{F}^u$. Observe that $$ \mu^{cu}_x(\mathcal{F}^c(D)) = (\pi_* \mu^{cu}_x) (\pi(D)) = \nu^u_{\pi(x)} (\pi(D)), $$ where the last equality comes from Lemma \ref{quotient}. Comparing the above two equations we conclude that $\nu^u_{\pi(x)} = \pi_*(\mu^u_x)$. To prove the ``only if'' part, just observe that $\nu^u_{\pi(x)} = \pi_* \mu^u_x$ implies that $\mu^u$ is $c-$invariant, and by Lemma \ref{product} we conclude that $\mu^c$ is $u-$invariant. \end{proof} \section{Proof of the entropy criterion for the invariance principle} \label{proofIP} Throughout this section we prove Theorem \ref{u-invariantprinciple}.
Recall that $\mu$ denotes an invariant probability measure of $f$ and that $\pi_*(\mu)=\nu$. Fix a Markov partition $\{A^i_c\}$ of $f_c$. Denote $A^i=\pi^{-1}(A^i_c)$; then ${\mathcal A}=\{A^{i}\}$ is a partition of the manifold $M$. We may assume that the boundary of each element of this partition has zero $\mu-$measure. Fix $i=1,\dots,k$; for any $x\in A^i$, denote by $W^u_{loc}(\pi(x))$ the unstable plaque contained inside $A^i_{c}$, and by ${\mathcal F}^u_{loc}(x)$ (unstable plaque) the connected component of the unstable leaf ${\mathcal F}^u(x)$ which intersects $A^i$ and contains $x$. In the proof we use four measurable partitions: \begin{itemize} \item The central foliation $\mathcal{F}^c$ is a foliation by compact leaves and so it is a measurable partition. The conditional measures of $\mu$ along this partition are denoted by $\{\mu^c_x\};$ \item $\xi_c^u=\{W^u_{loc}(\pi(x)); \pi(x) \in M_c\}$ is a measurable partition of $M_c$ by unstable plaques of $f_c$. We may disintegrate $\nu = \pi_* \mu$ along this partition and the conditional measures are denoted by $\{\nu^{u}_{\pi(x)}\}$; \item $ \pi^{-1}(\xi_c^u)$ is a measurable partition of $M$ by $\mathcal{F}_{loc}^{cu}$ (center-unstable) plaques. The corresponding conditional measures of $\mu$ are denoted by $\{\mu^{cu}_x\}$; \item and $\xi^u=\{{\mathcal F}^u_{loc}(x); x\in M\}$ is a measurable partition of $M$ by unstable plaques of $f$, and $\{\mu^u_x\}$ stands for the system of conditional measures of $\mu$. \end{itemize} Considering the conditional measures of $\mu$ along the different measurable partitions introduced above, we define a new category of measures which we call ``$u-$Gibbs states relative to the measure $\nu$'', or just $Gibb^u_{\nu}$ states.
\begin{definition}\label{d.u state} We say $\mu$ is a \emph{$Gibbs^u_{\nu}$-state} if $\pi_* \mu = \nu$ and for $\mu-$almost every $x\in M$, $$\pi_*\mu^u_x=\nu^u_{\pi(x)}.$$ \end{definition} We denote by $\operatorname{Gibb}_{\nu}^u(f)$ the set of Gibbs$^u_{\nu}$-states of $f$. Observe that by Proposition \ref{gibbs-uinvariant} all measures in $\operatorname{Gibb}_{\nu}^u(f)$ have $u-$invariant disintegration along the central foliation. Recall that a measurable partition $\eta$ for a map $f$ is \emph{increasing} if $f\eta \preceq \eta$. Then the three partitions $\xi_c^u$, $\pi^{-1}(\xi_c^u)$ and $\xi^u$ are all increasing. It is easy to see that $\xi^u$ is finer than $\pi^{-1}(\xi_c^u).$ \subsection{Partial entropy along expanding foliations} \label{entropyfoliation} In this section we recall the general definition of partial entropy along an expanding foliation (see \cite{L1}, \cite{LY2} and \cite{Y}). Let $f$ be a diffeomorphism; we say a foliation ${\mathcal F}$ is \emph{$f$-expanding} if: \begin{itemize} \item ${\mathcal F}$ is invariant; \item $f$ is expanding along ${\mathcal F}$. \end{itemize} \begin{remark} By the unstable manifold theorem, the unstable foliation ${\mathcal F}^u$ is an expanding foliation of $f$. It is worth observing that, although $f_c$ is only a topological Anosov homeomorphism, we may still consider ${\mathcal W}^u$ as an \emph{expanding foliation} of $f_c$. Indeed, we can use a conjugacy to identify it with a linear Anosov diffeomorphism $A_0$, and the unstable foliation is preserved by the conjugacy.
\end{remark} For any invariant probability measure $\mu$ of $f$, we say a measurable partition $\xi$ is \emph{$\mu$-adapted} (subordinate) to $\mathcal{F}$ if the following conditions are satisfied: \begin{itemize} \item there is $r_0>0$ such that $\xi(x)\subset B^{{\mathcal F}}_{r_0}(x)$ for $\mu$ almost every $x$, where $B^{{\mathcal F}}_{r_0}(x)\subset {\mathcal F}(x)$ is a ball of ${\mathcal F}(x)$ with radius $r_0$; \item $\xi(x)$ contains an open neighborhood of $x$ inside ${\mathcal F}(x)$; \item $\xi$ is increasing, that is, for $\mu$ almost every $x$, $\xi(x)\subset f(\xi(f^{-1}(x)))$. \end{itemize} Then the \emph{$\mu$ partial entropy along the expanding foliation ${\mathcal F}$} is defined by $$h_{\mu}(f,{\mathcal F})=H_{\mu}(f^{-1} \xi \mid \xi).$$ \begin{remark} It is easy to check that $\xi_c^u$ is $\nu$-adapted to the foliation ${\mathcal W}^u$ and $\xi^u$ is $\mu$-adapted to ${\mathcal F}^u$. Then, by the definition, \begin{equation}\label{eq.partialentropy} h_\nu(f_c,{\mathcal W}^u)=H_\nu(f_c^{-1} \xi_c^u\mid \xi_c^u) \text{ and } h_\mu(f,{\mathcal F}^u)= H_\mu(f^{-1}\xi^u\mid \xi^u). \end{equation} \end{remark} \subsection{Proof of Theorem A} We use the notations introduced at the beginning of this section: $\{A^i\}$ is a partition of $M$ into finitely many domains. Each $A^i$ is partitioned into stable, unstable and central plaques with the coherence property. We use the abstract results obtained in the measurable toolbox (Section \ref{measurabletoolbox}) for each $A^i.$ For each $A^i$ fix $x_i \in A^i$, which plays the role of $x_0$ in Subsection \ref{x0}. To simplify notation we use $\mathcal{F}^{cu}_{loc}(t)$ to denote the atom of the partition $\pi^{-1}(\xi_c^u)$ containing $t \in A^i$.
By definition, \begin{align} \label{entropy} h_{\mu} (f, \mathcal{F}^u) &= \int_M -\log \mu_{z}^u ( f^{-1} \xi^u (z)) d \mu(z) \\ & = \sum_i \int_{W^s_{loc}(\pi(x_i))} \int_{\mathcal{F}^{cu}_{loc}(t)} - \log \mu_{z}^u (f^{-1} \xi^u (z)) d \mu_{t}^{cu} (z) d \tilde{\mu} (t) \\ &= \sum_i \int_{W^s_{loc}(\pi(x_i))} \int_{\mathcal{F}^{cu}_{loc}(t)} - \log \mu_{z}^u (f^{-1} \xi^u (z)) d \mu_{t}^{cu} (z) d \tilde{\nu} (t) \end{align} where the sum above is over all $A^i$ and $f^{-1} \xi^u (x)$ stands for the element of the partition $f^{-1} \xi^u$ which contains $x.$ The second equality comes from the disintegration $$ \mu= \int \mu^{cu} d \tilde{\mu}. $$ For the third equality we identify the quotient of $A^i$ by the center-unstable plaques with the stable plaque of $\pi(x_i)$ and recall that the quotient measure $\tilde{\mu}$ can be identified with the quotient measure $\tilde{\nu}$, where $\displaystyle{\nu = \int \nu^u d \tilde{\nu}}$ (see the proof of Lemma \ref{quotient}). Now observe that $f_c^{-1} (\xi^u_{c})$ induces a partition on each element of $\xi^u_{c}.$ Taking preimages under $\pi$ we conclude that each $\mathcal{F}_{loc}^{cu}(t)$ is partitioned into finitely many subsets $\mathcal{F}_{loc}^{cu}(t) = \bigcup_{j} B_j$ where, for each $j,$ $\pi(B_j)$ is an atom of $f_c^{-1} (\xi^u_{c}).$ \begin{figure} \includegraphics[scale=0.3]{claim.pdf} \centering \caption{The partition of the center-unstable plaque $\mathcal{F}^{cu}_{loc}(t)$ into the sets $B_j$.} \end{figure} \begin{Claim} For any $t \in W^s(\pi(x_i))$ we have: $$ \int_{\mathcal{F}_{loc}^{cu}(t)} - \log \mu_{z}^u (f^{-1} \xi^u (z)) d \mu_{t}^{cu} (z) \leq \sum_j - \nu_t^u (\pi(B_j)) \log (\nu_t^u (\pi(B_j))). $$ \end{Claim} To prove the above claim (see the figure), first, by Lemma \ref{quotient}, we obtain that for each $j$ $$\nu^u_{t} (\pi(B_j)) = \mu_t^{cu} (B_j) = \int_{\mathcal{F}^{c}(t)} \mu^u_{\theta} (B_j) d \tilde{\mu}_{t}^{cu} (\theta), $$ where $\tilde{\mu}_t^{cu}$ is the measure on $\mathcal{F}^{cu}_{loc}(t)/\xi^u\approx \mathcal{F}^c(t)$.
Take $g(x)= -x \log x$ for $x > 0$ and apply Jensen's inequality to obtain: \begin{align} \label{cimabaixo} \int_{\mathcal{F}^{c}(t)} g (\mu_{\theta}^u (B_j)) d \tilde{\mu}_{t}^{cu} (\theta) \leq g (\int_{\mathcal{F}^{c}(t)} \mu^u_{\theta} (B_j) d \tilde{\mu}_{t}^{cu} (\theta) ) = - \mu_t^{cu} (B_j) \log (\mu_t^{cu}(B_j)). \end{align} Now, \begin{align*} \int_{\mathcal{F}^{cu}(t)} - \log \mu_{z}^u (f^{-1} \xi^u (z)) d \mu_{t}^{cu} (z) & =\sum_j \int_{\mathcal{F}^{cu}(t)} - \mathcal{X}_{B_j}(z)\log \mu_{z}^u (f^{-1} \xi^u (z)) d \mu_{t}^{cu} (z)\\ & = \sum_j \int_{\mathcal{F}^{c}(t)} \int_{\mathcal{F}^u_{loc}(\theta)} -\mathcal{X}_{B_j}(z) \log \mu_{z}^u (f^{-1} \xi^u (z)) d\mu^u_{\theta}(z) d \tilde{\mu}_{t}^{cu} (\theta) \\ & = \sum_j \int_{\mathcal{F}^{c}(t)} -\mu^u_\theta(B_j) \log \mu_{\theta}^u (B_j) d \tilde{\mu}_{t}^{cu} (\theta) \\ & = \sum_j \int_{\mathcal{F}^{c}(t)} g( \mu^u_{\theta} (B_j) ) d \tilde{\mu}_{t}^{cu} (\theta) \overset{\text{by }(\ref{cimabaixo})}{\leq} \sum_j -\mu_t^{cu} (B_j) \log (\mu_t^{cu} (B_j)) \\ & = \sum_j -\nu_t^u (\pi(B_j)) \log (\nu_t^u (\pi(B_j))) \end{align*} and the proof of the claim is complete. Now, integrating both sides of the inequality in the claim with respect to $\tilde{\nu}$, using \eqref{entropy} and summing over all $A^i$, we obtain $$h_{\mu} (f,\mathcal{F}^u) \leq H_\nu(f_c^{-1}\xi^u_c\mid \xi^u_c) = h_{\nu} (f_c).$$ Indeed, $h_{\nu} (f_c) = H_{\nu} (f_c^{-1} \xi^u_c | \xi^u_c)$ and $$ H_{\nu} (f_c^{-1} \xi^u_c | \xi^u_c) = \sum_i \int_{\mathcal{W}^s_{\operatorname{loc}} (\pi(x_i))} \sum_{j} - \nu^u_t (\pi(B_j)) \log (\nu^u_t(\pi(B_j))) d \tilde{\nu}(t).$$ Observe that the above sum is over all $A^i_c.$ When $h_{\nu} (f_c) = h_{\mu} (f,\mathcal{F}^u)$, we must have equality in Jensen's inequality. Hence, we have shown that $\pi_*(\mu^u_{x})=\nu^u_{\pi(x)}$ restricted to the $\sigma$-algebra generated by $f^{-1}\xi^u$.
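To make the equality case explicit (a routine argument for the strictly concave function $g(x)=-x\log x$): equality in \eqref{cimabaixo} forces the integrand $\theta \mapsto \mu^u_{\theta}(B_j)$ to be $\tilde{\mu}_{t}^{cu}$-almost everywhere equal to its average, that is, $$ \mu^u_{\theta}(B_j) = \mu_t^{cu}(B_j) = \nu^u_t(\pi(B_j)) \qquad \text{for } \tilde{\mu}_{t}^{cu}\text{-almost every } \theta \text{ and every } j. $$ Since $\pi^{-1}(\pi(B_j)) \cap \mathcal{F}^u_{loc}(\theta) = B_j \cap \mathcal{F}^u_{loc}(\theta)$, the left-hand side is $\pi_*(\mu^u_{\theta})(\pi(B_j))$, and the sets $\pi(B_j)$ are precisely the atoms of $f_c^{-1}\xi^u_c$; this is the asserted agreement of $\pi_*(\mu^u_x)$ and $\nu^u_{\pi(x)}$ on the atoms of $f^{-1}\xi^u$.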
Because $$h_\mu(f^n,\mathcal{F}^u)=nh_\mu(f,\mathcal{F}^u)=nh_{\nu}(f_c) = h_{\nu} (f_c^n),$$ applying a similar argument as above, one can show that $\pi_*(\mu^u_{x})=\nu^u_{\pi(x)}$ restricted to the $\sigma$-algebra generated by ${\mathcal B}_0=\{f^{-n}\xi^u\}_{n\in \mathbb{N}}$. Observe that ${\mathcal B}_0$ generates the Borel $\sigma$-algebra of every $\xi^u(x)$; hence $\pi_*(\mu^u_x)=\nu^u_{\pi(x)}$, as claimed. Now that we have proved Theorem \ref{u-invariantprinciple}, let us show how to conclude the proof of Corollary \ref{exp-invariantprinciple}. \begin{proof} (of Corollary \ref{exp-invariantprinciple}) By Ledrappier-Young \cite{LY} we have $$h_{\mu}(f) = h_{\mu} (f, \mathcal{F}^u).$$ Indeed, the authors in \cite{LY2} define the notion of entropy $h_i$ along the $i$-th unstable manifolds $W^i$ for $1 \leq i \leq u.$ Here $$ W^i(x) = \{y \in M, \limsup_{n \rightarrow \infty} \frac{1}{n} \log d(f^{-n}(x), f^{-n}(y)) \leq - \lambda_i \} $$ and $\lambda_1 > \lambda_2 > \cdots > \lambda_u$ are the positive Lyapunov exponents of $\mu.$ In particular, in Theorem (C'), they proved that $h_u = h_{\mu}(f).$ See item (iii) after Theorem (C') in \cite{LY2}. Here, as the central Lyapunov exponents are non-positive, we conclude that $W^u$ coincides with the unstable foliation $\mathcal{F}^u.$ Again using \cite{LY2} we see that $h_u = h_{\mu} (f, \mathcal{F}^u)$, which yields $h_{\mu}(f) = h_{\mu} (f, \mathcal{F}^u).$ As $f_c$ is a factor of $f$ we have that $h_{\nu} (f_c) \leq h_{\mu} (f)$; together with the inequality $h_{\mu}(f, \mathcal{F}^u) \leq h_{\nu}(f_c)$ proved above, this implies $$h_{\mu} (f, \mathcal{F}^u) = h_{\nu} (f_c).$$ Now Theorem \ref{u-invariantprinciple} implies that $\mu$ is $u-$invariant. \end{proof} Now, let us give the proof of Corollary \ref{onedimensional}.
\begin{proof} (of Corollary \ref{onedimensional}) If the central foliation is one dimensional then $h_{\mu}(f) = h_{\nu}(f_c).$ Indeed, by the Ledrappier-Walters variational principle \cite{LW}, \begin{equation} \label{LW} \sup_{\hat{\mu}: \pi_ * \hat{\mu} = \nu} h_{\hat{\mu}} (f) = h_{\nu}(f_c) + \int_{M^c} h(f, \pi^{-1} (y)) d\nu(y). \end{equation} Since each $\pi^{-1} (y)$ is a circle and its iterates have uniformly bounded length, we have that $h(f, \pi^{-1} (y)) =0$; that is, the fibers do not contribute to the entropy. (Indeed, as $f$ maps circle fibers homeomorphically to circle fibers, preserving or reversing the cyclic order, an $(n,\varepsilon)$-separated subset of $\pi^{-1}(y)$ has cardinality at most $nL/\varepsilon$, where $L$ bounds the lengths of the circles $f^j(\pi^{-1}(y))$; this grows subexponentially.) Hence, by the above equality and the well-known fact that $h_{\mu} (f) \geq h_{\nu} (f_c)$, we conclude that $h_{\mu}(f) = h_{\nu}(f_c)$, and the corollary is immediate from Theorem \ref{u-invariantprinciple}. \end{proof} Finally, let us prove Corollary \ref{invariancelimit}. By Theorem \ref{u-invariantprinciple}, $\mu_n \in Gibb^u_{\nu}(f_n)$ implies that $h_{\mu_n}(f_n , \mathcal{F}^u_{n}) = h_{\nu}(A)$, where $\mathcal{F}^u_n$ represents the unstable foliation of $f_n.$ By the upper semi-continuity property (proved in \cite{Y}) we conclude that $$\limsup_{n \rightarrow \infty} h_{\mu_n} (f_n , \mathcal{F}^u_{n}) \leq h_{\mu} (f , \mathcal{F}^u), $$ which implies $$ h_{\nu}(A) \leq h_{\mu} (f , \mathcal{F}^u). $$ Again using Theorem \ref{u-invariantprinciple}, we have $\mu \in Gibb^u_{\nu}(f).$ \section{Proof of Rigidity of high entropy measures} \label{proofhighentropy} In this section we prove Theorems \ref{main.nonunihyp} and \ref{main.converging}. Let us recall some facts about measures of maximal entropy in our context. As $f_c$ is a transitive Anosov homeomorphism it admits a unique measure of maximal entropy. From now on, $\nu$ denotes the unique measure of maximal entropy of $f_c$ and $\nu^u_{\pi(x)}$ denotes the conditional measure on $W^u_{loc}(\pi(x))$ of $\nu$ corresponding to the measurable partition $\xi_c^u$.
Denote by $$H^s_{\pi(x),\pi(y)}: W^u_{loc}(\pi(x))\to W^u_{loc}(\pi(y))$$ the holonomy map in each Markov component induced by the stable foliation ${\mathcal W}^s$. The following result is classical (for instance, by means of the Margulis construction of measures of maximal entropy): the measure of maximal entropy for $f_c$ has local product structure. \begin{lemma} For $\nu$ almost every pair of points $\pi(x), \pi(y) \in A^i_c$ ($i=1,\dots, k$), $$(H^s_{\pi(x),\pi(y)})_*(\nu^u_{\pi(x)})=\nu^u_{\pi(y)}.$$ A similar statement holds for the disintegration of $\nu$ along stable plaques. \end{lemma} In particular, fixing $ p_i \in A^i_c$, $\nu\mid A^i_c$ can be written as \begin{equation}\label{eq.productforquotient} \nu\mid A^i_c= \int_{W^s(p_i)} (H^s_{p_i, q})_*(\nu^u_{p_i})d\nu^{s}(q), \end{equation} where $\nu^s$ is the quotient measure on the quotient space $A^i_c/\xi_c^u\cong W^s_{loc}(p_i)$. \subsection{Some properties of Gibbs measures} The next proposition is formulated for measures in $\operatorname{Gibb}_{\nu}^u(f)$ such that $\nu$ is the maximal entropy measure of $f_c.$ Although the proposition holds for all invariant measures $\nu$, we formulate it in the case where $\nu$ has local product structure, which is sufficient for our purposes. \begin{proposition}\label{p.Gibbs u state} Let $f$ be as in Theorem (B) and $\nu$ be the measure of maximal entropy for $f_c$; then \begin{itemize} \item[(a)] $\operatorname{Gibb}_{\nu}^u(f)$ is a compact convex set in the weak-* topology and the extreme points are ergodic; \item[(b)] For each $\mu\in \operatorname{Gibb}_{\nu}^u(f)$, almost every ergodic component of $\mu$ belongs to $\operatorname{Gibb}_{\nu}^u(f)$.
\end{itemize} \end{proposition} \begin{proof} First consider a coordinate system on $A^i_c$, $$\Phi^i_c: [0,1]^s\times [0,1]^u=\mathcal{I}_c \to A^i_c$$ such that for any $a_c=(a_1,a_2)\in \mathcal{I}_c$ and $x_c=\Phi^i_c(a_c)$: \begin{itemize} \item[(i)] $\Phi^i_c(a_1\times [0,1]^u)=W^u_{loc}(x_c)$; \item[(ii)] $\Phi^i_c([0,1]^s\times a_2)=W^s_{loc}(x_c)$. \end{itemize} In these coordinates, the $\Phi^i_c$-image of every horizontal plane is a stable plaque, and the $\Phi^i_c$-image of every vertical plane is an unstable plaque. Then by \eqref{eq.productforquotient}, the disintegrations of $\nu_i=(\Phi^i_c)^{-1}_*(\nu\mid A^i_c)$ along the foliation $\{a_1\times [0,1]^u\}_{a_1\in [0,1]^s}$ are all the same, and we denote them by $\nu^u_i$. In the following, we also need a coordinate system on $A^i$. We take each $A^i_c\subset M_c$ with small diameter, so that the central bundle is trivial over $A^i_c$. Then we can take a continuous coordinate system on $A^i$, $$\Phi^i: [0,1]^s\times [0,1]^u \times S^1=\mathcal{I} \to A^i$$ such that for any $a=(a_1,a_2,a_3)\in \mathcal{I}$ and $x=\Phi^i(a)$: \begin{itemize} \item[(i)] $\Phi^i(a_1\times [0,1]^u\times a_3)={\mathcal F}^u_{loc}(x)$; \item[(ii)] $\Phi^i(a_1\times [0,1]^u\times S^1)={\mathcal F}^{cu}_{loc}(x)$; \item[(iii)] $\Phi^i([0,1]^s\times a_2\times S^1)={\mathcal F}^{cs}_{loc}(x)$; \item[(iv)] $\Phi^i(a_1\times a_2 \times S^1)={\mathcal F}^c(x)$. \end{itemize} Of course, $\mathcal{F}^s$ and $\mathcal{F}^u$ are not necessarily jointly integrable. If we denote by $\pi_{3}: \mathcal{I}\to \mathcal{I}_c$ the projection $\pi_3(a_1\times a_2\times a_3)=a_1\times a_2$, then we have \begin{equation}\label{eq.commute} \Phi^i_c\circ \pi_3=\pi\circ \Phi^i.
\end{equation} From the definition of $\operatorname{Gibb}^u_\nu$ and \eqref{eq.commute}, the disintegrations of $\mu_i=(\Phi^i)^{-1}_*(\mu\mid A^i)$ along the foliation $$\{a_1\times [0,1]^u\times a_3\}_{a_1\in [0,1]^s,a_3\in S^1}$$ are equal to $(\Phi^i)^{-1}_*(\mu^u_{\cdot})$; they are all the same and coincide with $\nu^u_i$. We first prove that $\operatorname{Gibb}^u_\nu(f)$ is compact. Let $\mu_n\in\operatorname{Gibb}^u_\nu$, and $\mu_n\overset{\text{weak}*}{\to} \mu$. We are going to show that $\mu$ also belongs to $\operatorname{Gibb}^u_\nu$. By the coordinates above, it suffices to show that the disintegration of the measure $\mu_i$ along the foliation $$\{a_1\times [0,1]^u\times a_3\}_{a_1\in [0,1]^s,a_3\in S^1}$$ equals $\nu^u_i$. This is obvious, because each measure $\mu_{n,i}=(\Phi^i)^{-1}_*(\mu_n\mid A^i)$ can be written as a product $$d\mu_{n,i}((a_1,a_2,a_3))=d\nu^u_i(a_2) d\tilde{\mu}_{n,i}(a_1\times a_3), $$ where $\tilde{\mu}_{n,i}$ is a probability measure on the space $[0,1]^s\times 0\times S^1$. Then its limit $\mu_i$ can be written in the same manner. Now we are going to prove the second part. Let $\mu\in \operatorname{Gibb}^u_\nu$ and write the ergodic decomposition of $\mu$ as $$\mu= \int_{M / \xi_{erg}} \mu_P d\tilde{\mu}(P),$$ where $\xi_{erg}$ is the measurable partition of $M$ into ergodic components of $\mu.$ Now, we recall the crucial fact that $\xi^u$ is finer than $\xi_{erg}$; see \cite{LY} (Section 6.2) and Proposition 2.6 in \cite{LS}. For $x\in M$, denote by $\xi^u (x)$ and $P= \xi_{erg}(x)$ the elements of the partitions $\xi^u$ and $\xi_{erg}$ which contain $x$, respectively. Then, by the essential uniqueness of the disintegration, for $\mu$ almost every $x$, the disintegration of $\mu$ along the partition $\xi^u$ on the element $\xi^u (x)$, denoted by $\mu^u_x$, coincides with the disintegration of the ergodic component $\mu_{P}$ along the partition $\xi^u$.
This means that, for $\tilde{\mu}$ almost every $P$, the disintegration of $\mu_{P}$ along the partition $\xi^u$ equals $\mu^u_{x}$, with $\pi_*(\mu^u_x)=\nu^u_{\pi(x)}$, for $\mu_{P}$ almost every $x$; hence $\mu_{P}$ belongs to $\operatorname{Gibb}^u_\nu$. \end{proof} \begin{proposition}\label{p.invariant principle} Let $\omega$ be an ergodic maximal measure of $f$ with non-positive (resp. non-negative) center exponent; then $\pi_*(\omega)=\nu$ and $\omega\in \operatorname{Gibb}^u_{\nu}(f)$ (resp. $\omega\in \operatorname{Gibb}^s_\nu(f)$). Moreover, if $\omega\in \operatorname{Gibb}^u_\nu(f)$, then $\omega$ is $u-$invariant. \end{proposition} \begin{proof} To prove the first part of the proposition, take $\omega$ an ergodic measure of maximal entropy with non-positive center exponent. By the Ledrappier-Walters variational principle and one dimensionality of the central foliation, $\pi_*(\omega)$ is the measure of maximal entropy for $f_c$, that is, $\pi_*(\omega) = \nu.$ By the Rokhlin disintegration theorem, there is a system of conditional probability measures along the center foliation. By the invariance principle (Corollary \ref{exp-invariantprinciple}) $\{\omega_x\}$ is $u-$invariant, which is the same as saying $\omega \in Gibb^u_{\nu}$ by Proposition \ref{gibbs-uinvariant}. The second part of the proposition is immediate from Proposition \ref{gibbs-uinvariant}. \end{proof} \begin{corollary} If $\omega \in \operatorname{Gibb}^u_\nu(f) \cap \operatorname{Gibb}^s_\nu(f)$ then $f$ is of rotation type, and there is a family of conditional measures $\omega^c$ along the center foliation such that $\omega^c$ varies continuously with respect to the center leaves and is invariant under the stable and unstable holonomies $H^{*}, *=s,u$. \end{corollary} \begin{proof} By the above proposition $\omega$ is both a $u-$ and an $s-$state. As the quotient measure $\pi_{*}(\omega) = \nu$ has local product structure, the corollary is immediate from the invariance principle (see \cite{AV}).
\end{proof} We need a key property of the partial entropy along expanding foliations, which is the following upper semi-continuity result: \begin{proposition}[\cite{Y}]\label{p.uppersemicontinuous} Let ${\mathcal F}$ be an expanding foliation of $f$, and $\mu_n$ be a sequence of invariant probability measures of $f$. Suppose $\mu_n$ converge to $\mu_0$ in the weak-* topology; then $$\limsup_{n \rightarrow \infty} h_{\mu_n}(f,{\mathcal F})\leq h_{\mu_0}(f,{\mathcal F}).$$ \end{proposition} In the following, we sketch the idea of the proof of the above proposition when $f\in \operatorname{SPH}_1$, since in this case the discussion is much simpler. \begin{proof}[Sketch of proof] We need to show \begin{equation}\label{eq.semicontinuous} \limsup_{n \rightarrow \infty} H_{\mu_n}(f^{-1}\xi^u\mid \xi^u)\leq H_{\mu_0}(f^{-1}\xi^u\mid \xi^u). \end{equation} Fix a point $x_i\in \pi^{-1}(p_i)$ for each $i=1,\dots, k$. Consider a sequence of finite partitions ${\mathcal C}_{i,1}\leq{\mathcal C}_{i,2}\leq \dots$ on $${\mathcal F}^{cs}_{loc}(x_i)=\pi^{-1}(W^s_{loc}(p_i)),$$ such that \begin{itemize} \item[(A)] $\operatorname{diam}({\mathcal C}_{i,t})\to 0$; \item[(B)] For any $i,t$ and any element $C$ of ${\mathcal C}_{i,t}$, $\mu_n(\cup_{x\in \partial C}\xi^u(x))=0$ for every $n$. \end{itemize} Then for every $t>0$, there are two finite partitions $\tilde{{\mathcal C}}_{t}$ and $\overline{{\mathcal C}}_t$: $$\tilde{{\mathcal C}}_{t}=\{\cup_{x\in C} \xi^u(x); C \text{ is an element of ${\mathcal C}_{i,t}$ for some $1\leq i \leq k$}\};$$ and $$ \overline{{\mathcal C}}_t=\{\tilde{C}\cap \pi^{-1}(P), \text{ where $\tilde{C}$ is an element of $\tilde{{\mathcal C}}_t$ and $P$ is an element of $f^{-1}(\xi^u_c)$}\}. $$ Then $\tilde{{\mathcal C}}_t\leq \overline{{\mathcal C}}_t$ and both sequences of partitions are increasing.
Moreover, we have \begin{itemize} \item[(i)] $\tilde{{\mathcal C}}_t \nearrow \xi^u$; \item[(ii)] $\overline{{\mathcal C}}_t\nearrow f^{-1}\xi^u$; \item[(iii)] $\mu_n(\partial(\tilde{{\mathcal C}}_t))=0$ and $\mu_n(\partial(\overline{{\mathcal C}}_t))=0$ for every $t,n$. \end{itemize} We claim that for each $n\in \mathbb{N}$, \begin{equation}\label{eq.finteapproach} H_{\mu_n}(\overline{{\mathcal C}}_t\mid \tilde{{\mathcal C}}_t)\searrow H_{\mu_n}(f^{-1}(\xi^u)\mid\xi^u). \end{equation} To prove this claim, first, by (i): $$H_{\mu_n}(\overline{{\mathcal C}}_1 \mid \tilde{{\mathcal C}}_t)\searrow H_{\mu_n}(\overline{{\mathcal C}}_1\mid\xi^u).$$ It follows that $$H_{\mu_n}(\overline{{\mathcal C}}_1\vee\tilde{{\mathcal C}}_t \mid \tilde{{\mathcal C}}_t)\searrow H_{\mu_n}(\overline{{\mathcal C}}_1\vee \xi^u\mid\xi^u).$$ From the construction of the partitions $\tilde{{\mathcal C}}_t$ and $\overline{{\mathcal C}}_t$, it is easy to see that $\overline{{\mathcal C}}_1\vee \tilde{{\mathcal C}}_t=\overline{{\mathcal C}}_t$ and $\overline{{\mathcal C}}_1\vee \xi^u=f^{-1}(\xi^u)$, and the proof of the claim follows immediately. Now we continue the proof of \eqref{eq.semicontinuous}. By the above claim, for any $\varepsilon>0$, there is $T$ sufficiently large such that $H_{\mu_0}(\overline{{\mathcal C}}_T\mid \tilde{{\mathcal C}}_T)<H_{\mu_0}(f^{-1}(\xi^u)\mid \xi^u)+\varepsilon$.
Because both partitions $\overline{{\mathcal C}}_T$ and $\tilde{{\mathcal C}}_T$ are finite and their boundaries have zero measure for every $\mu_n$, we have that \begin{equation*} \begin{aligned} H_{\mu_0}(\overline{{\mathcal C}}_T\mid \tilde{{\mathcal C}}_T)&=H_{\mu_0}(\overline{{\mathcal C}}_T)-H_{\mu_0}(\tilde{{\mathcal C}}_T)\\ &=\lim H_{\mu_n}(\overline{{\mathcal C}}_T)-\lim H_{\mu_n}(\tilde{{\mathcal C}}_T)\\ &=\lim H_{\mu_n}(\overline{{\mathcal C}}_T\mid \tilde{{\mathcal C}}_T).\\ \end{aligned} \end{equation*} Again by the above claim, we obtain \begin{equation*} \begin{aligned} \limsup H_{\mu_n}(f^{-1}(\xi^u)\mid \xi^u)&\leq \lim H_{\mu_n}(\overline{{\mathcal C}}_T\mid \tilde{{\mathcal C}}_T)\\ &=H_{\mu_0}(\overline{{\mathcal C}}_T\mid \tilde{{\mathcal C}}_T)\\ &\leq H_{\mu_0}(f^{-1}(\xi^u)\mid \xi^u)+\varepsilon.\\ \end{aligned} \end{equation*} Since we can take $\varepsilon$ arbitrarily small, this finishes the proof of \eqref{eq.semicontinuous}. \end{proof} \subsection{Proof of Theorems \ref{main.nonunihyp} and \ref{main.converging}} Let us first prove Theorem~\ref{main.converging}. As the center exponents of $\mu_n$ are non-positive, the Pesin unstable lamination coincides with the unstable foliation. Then we have $$h_{\mu_n}(f)=h_{\mu_n}(f,{\mathcal F}^u),$$ which was proved in Ledrappier-Young \cite[Corollary 5.3]{LY} under the assumption that $f$ is $C^2$. See the proof of Corollary \ref{exp-invariantprinciple} for more details. By our assumption and Proposition~\ref{p.uppersemicontinuous}, $$h_\mu(f,{\mathcal F}^u)\geq \limsup_{n\to \infty}h_{\mu_n}(f,{\mathcal F}^u).$$ But, by the variational principle, it is clear that $h_\mu(f,{\mathcal F}^u)\leq h_\mu(f)\leq h_{top}(f)$. Hence, we have the equality $h_\mu(f,{\mathcal F}^u)=h_{top}(f) = h_{\nu} (f_c)$. Then, as a corollary of Theorem \ref{u-invariantprinciple}, $\mu\in \operatorname{Gibb}^u_\nu$.
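For the reader's convenience, the entropy bookkeeping just carried out can be assembled into a single chain (using the assumption, as in the argument above, that $h_{\mu_n}(f)\to h_{top}(f)$): $$ h_{top}(f)=\limsup_{n\to\infty} h_{\mu_n}(f)=\limsup_{n\to\infty} h_{\mu_n}(f,{\mathcal F}^u)\leq h_{\mu}(f,{\mathcal F}^u)\leq h_{\mu}(f)\leq h_{top}(f)=h_{\nu}(f_c). $$ Hence all the intermediate quantities coincide; in particular $h_{\mu}(f,{\mathcal F}^u)=h_{\nu}(f_c)$, which is exactly the entropy hypothesis of Theorem \ref{u-invariantprinciple}.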
Now it is sufficient to prove the following lemma. \begin{lemma} If $f$ is not of rotation type, then $\operatorname{Gibb}^u_\nu(f) = V^{-}$, where $V^{-}$ is the set of invariant probabilities that are convex combinations of $\mu^-_1,\dots, \mu^-_{k(-)}$. \end{lemma} \begin{proof} By Proposition~\ref{p.invariant principle}, $\mu^-_i\in \operatorname{Gibb}^u_\nu(f)$ for each $1\leq i \leq k(-)$ and consequently $$V^-\subset \operatorname{Gibb}^u_\nu(f).$$ Now let $\mu$ belong to $\operatorname{Gibb}^u_\nu(f)$; by Proposition~\ref{p.Gibbs u state} (b), we may assume that it is ergodic. As $\mu$ projects to $\nu$, which is a maximal measure for $f_c$, we have that $\mu$ is a maximal measure. Then, by Theorem~\ref{dichotomy}, the center exponent of $\mu$, $\lambda^c(\mu)$, cannot vanish. We claim that $\lambda^c(\mu)<0$. Suppose by contradiction that $\lambda^c(\mu)>0$; then by Proposition~\ref{p.Gibbs u state}, $\mu\in \operatorname{Gibb}^s_\nu(f)$, which contradicts Proposition~\ref{p.invariant principle}. Therefore $\mu$ is a maximal measure with negative center exponent; applying Theorem~\ref{dichotomy} again, $\mu=\mu^-_i$ for some $1\leq i \leq k(-)$. Thus $$\operatorname{Gibb}^u_\nu(f)\subset V^-$$ and the proof is complete. \end{proof} So we have proved Theorem \ref{main.converging}, and the proof of Theorem \ref{main.nonunihyp} is a simple corollary. Indeed, suppose by contradiction that there is a sequence of ergodic measures $\mu_n$ such that $h_{\mu_n} \rightarrow h_{top}(f)$; without loss of generality we assume that $\lambda^c(\mu_n) \leq 0$ and $\lambda^c(\mu_n)\to 0$. Let $\mu$ be an accumulation point of $\mu_n.$ By a continuity argument, $$\lambda^c(\mu_n) \rightarrow \lambda^c(\mu):= \int_{M} \log \|Df|_{E^c(x)}\| \, d \mu(x).$$ By Theorem~\ref{main.converging}, $\mu$ is a convex combination of $\mu^{-}_1, \cdots, \mu^{-}_{k(-)}$, so $$|\lambda^c (\mu)|\geq \min \{ |\lambda^c(\mu^{-}_1)|, \cdots, |\lambda^c ( \mu^{-}_{k(-)})| \}>0, $$ which contradicts $\lambda^c(\mu_n)\to 0$.
\section{Introduction} Game theory has found many applications in multi-agent engineering problems, wherein each agent can be modelled as an independent, selfish decision maker that tries to optimize its individual, but coupled, cost function. These include wireless communication networks \cite{faawb06}, \cite{lh10}, \cite{ch12}, optical networks \cite{lp12}, \cite{lp06}, smart-grid and PEV charging \cite{mwjsl10}, \cite{sg16c}, \cite{hi13}, noncooperative flow control \cite{ysm11}, \cite{ab05} and multi-agent formation problems \cite{lqs14}. The relevant equilibrium sought is the Nash equilibrium (NE), whereby no agent has an incentive to unilaterally change its action. The objective is to design either continuous-time or discrete-time, distributed learning schemes that converge to the NE under reasonable assumptions on the game properties and agent knowledge. Most works focus on algorithms for agents that either have no dynamics or have single-integrator dynamics, and disturbances are not explicitly considered \cite{FKB12}. There are many scenarios in which the game or the agents are subject to disturbances, noise or uncertainties. Examples are demand-side management in smart-grids, with changes in the energy consumption demand, \cite{mwjsl10}, feedback control for PEV charging load allocation, \cite{hi13}, or power control for optical-signal-to-noise ratio (OSNR) in the presence of pilot tones, \cite{tp06}. Yet there have been relatively few works on Nash equilibrium seeking in such settings. In \cite{hi13}, a time-varying pricing function that affects the cost functions of each agent is considered, but only robustness to the time-varying component is investigated. Another good motivating example is the case of a group of mobile robots in a sensor network, similar to the examples in \cite{sjs12}. Each agent in the network has a goal related to its global position.
However, it must also consider its position relative to the other agents in the network in order to maintain communication with its neighbours. This can easily be formulated as a game played by the robots, which can be modelled as higher-order agents. In addition, each robot may be subject to a disturbance, e.g., wind or a slope in the terrain. It is important that these robots be able to reject this deterministic disturbance and still converge to the NE. A similar problem without disturbances was presented in \cite{zm13}; however, there the state space is discretized and the game is treated as a finite-action game, which ignores the dynamics of the individual agents. Motivated by the above, in this paper our focus is to extend these results to games wherein the agents are modelled as (multi)-integrator systems subject to external deterministic disturbances. This is related to NE seeking with noisy feedback, on which there has been recent work. A dual-averaging algorithm with noisy gradients is considered in \cite{ms17}. A discrete-time extremum seeking algorithm with noisy cost measurements for agents modelled as single and double integrators and kinematic unicycles is investigated in \cite{sjs12}. In both of these papers, the noise is stochastic in nature rather than a deterministic disturbance as considered here. Separately, NE seeking in the special class of aggregative games for Euler-Lagrange systems has been recently investigated in \cite{dl19}, which is similar to our work in the dynamic nature of the agents involved, but does not consider disturbances. Our work is related to the literature on disturbance rejection and tracking in multi-agent systems, \cite{bdp15}, \cite{dpj14}, \cite{zd13}, \cite{xwhj16}, \cite{xlh17}. Most output regulation problems in multi-agent systems can be viewed as specific cases of game theoretical problems.
The synchronization problem, for example, can be regarded as a special game where each agent's cost function is quadratic and corresponds to the sum of the squared distances to all of its neighbours. Our work is also related to distributed optimization, where a group of agents cooperatively minimize a global cost function, the sum of the agents' individual cost functions. Optimization schemes that reject disturbances have been discussed for single-integrator systems \cite{wyh14}, systems with unit relative degree \cite{whj16} and systems with double-integrator dynamics \cite{twy18}. In \cite{CortesSICON2016}, the robustness of a continuous-time distributed optimization algorithm is analyzed in the presence of additively persistent noise on agents' communication and computation, over a directed communication graph. Key differences from a game setup are the cooperative nature of the problem and the fact that usually each agent's cost is decoupled from the others' variables. Exploiting summability leads to a set of parallel decoupled optimization problems, one for each agent and its own cost function. Even when the overall cost is not separable, due to its summable structure one can extend the problem to an augmented space of estimates, where it becomes separable and convex. In a game context, an agent's cost is inherently \emph{coupled} to the others' decisions, on which it does not have control, and convexity is only partial. \textit{Contributions. } Motivated by the above, in this paper we consider how to design Nash equilibrium seeking dynamics that simultaneously reject exogenous disturbances. We consider single- and double-integrator agents, i.e., agents that behave as continuous-time dynamical systems that integrate their respective inputs, in a partial-information setting, i.e., networked regimes where agents may only access the states of their neighbours. We also discuss extensions to multi-integrator agents.
Unlike multi-agent set stabilization problems with disturbance rejection, herein the stabilization goal is the a priori unknown Nash equilibrium of the game, which has to be reached irrespective of disturbances. In all cases, we make standard assumptions that guarantee existence and uniqueness of the NE of the game. Due to the partial-information setting, we build on the disturbance-free results in \cite{gp18}. Each player keeps track of an estimate of the others' decisions as in \cite{gp18}, and the problem can be seen as one of multi-agent agreement with disturbance rejection. The agreement subspace is the estimate-consensus subspace at the Nash equilibrium, irrespective of the disturbance. The proposed agent learning dynamics has two components: a gradient-play with estimate-consensus component (that drives each player's dynamics towards minimizing its own cost function) and a dynamic internal-model component, which effectively implements a reduced-order observer of the disturbance. Unlike typical multi-agent agreement, \cite{bdp15}, \cite{dpj14}, \cite{xwhj16}, we cannot use individual passivity of each agent. Rather, our proofs rely on combining input-to-state stability with the design of a reduced-order observer for the disturbance, under strong monotonicity of the pseudo-gradient and Lipschitz continuity of the extended pseudo-gradient. The resulting agent dynamics are locally distributed, with coupling introduced only through the communication graph. The paper is organized as follows. In Section \ref{sec:background}, we give the necessary background on nonlinear systems, graph theory and noncooperative game theory. In Section \ref{sec:problem}, we formulate the NE seeking problem for dynamic agents with disturbance rejection. In Section \ref{sec:SingleIntegrator}, we give our results for NE seeking dynamics with disturbance rejection for single-integrator agents.
In Section \ref{sec:second_integrator}, we formulate a NE seeking algorithm for double-integrator agents and discuss extensions to multi-integrator agents. In Section \ref{sec:simulations}, we compare by simulation their performance with those of a standard gradient-play dynamics and an augmented gradient-play dynamics with estimate consensus (partial information setting), and give conclusions in Section \ref{sec:conclusions}. A short version of this work appeared in \cite{AR_LP_CDC2018}, where only single-integrators are treated. \emph{Notations.} Let $\mathbb{R}$, $\mathbb{R}_{\geq 0}$ denote the set of real and non-negative real numbers, $\mathbb{C}$ and $\mathbb{C}^-$ the set of complex numbers and complex numbers with negative real part. Given $x, y \!\in \!\mathbb{R}^n$, $x^Ty$ denotes the inner product of $x$ and $y$. Let $\|\cdot\|\!:\!\mathbb{R}^n \!\rightarrow \!\mathbb{R}_{\geq 0}$ denote the Euclidean norm and $\|\cdot\|\!:\!\mathbb{R}^{m\times n} \!\rightarrow \!\mathbb{R}_{\geq 0}$ denote the induced matrix norm. $\col(x_1,\dots,x_N)$ denotes $[x_1^T,\dots,x_N^T]^T$. Given matrices $A_1,\dots,A_N$, $\blkdiag(A_1,\dots,A_N)$ denotes the block diagonal matrix with $A_i$ on the diagonal. $I_n$ denotes the $n\!\times \! n$ identity matrix. $\boldsymbol{1}_n$ denotes the $n\!\times \!1$ all ones vector. $A\otimes B$ denotes the Kronecker product of matrices $A$ and $B$. \vspace{-0.25cm} \section{Background}\label{sec:background} \subsection{Input to State Stability} In this work, we model the dynamics of each agent as a continuous time dynamical system. We first introduce some background from \cite{hk02}. Consider a nonlinear system, \vspace{-0.23cm}\begin{align} \label{eq:NLSys} \dot x = f(x,u) \end{align} where $\dot x:=\frac{dx(t)}{dt}$, $f\!:\!\mathbb{R}^n \!\times \!\mathbb{R}^m \!\rightarrow \!\mathbb{R}^n$ is locally Lipschitz in $x$ and $u$ and the input $u(t)$ is a piecewise continuous, bounded function. 
\begin{defn} \label{defn:ISS} System (\ref{eq:NLSys}) is input-to-state stable (ISS) if there exist $\beta \in \mathcal{KL}$ and $\gamma \in \mathcal{K}$ such that for any initial state $x(t_0)$ and any bounded input $u(t)$, the solution $x(t)$ satisfies \vspace{-0.25cm}\begin{align*} \|x(t)\| \leq \beta(\|x(t_0)\|,t-t_0)+\gamma\Big(\sup_{t_0\leq\tau\leq t} \|u(\tau)\|\Big), \, \,\forall t\geq t_0 \end{align*} \end{defn} \begin{thm} \label{thm:ISSLyapunov} (Theorem 4.19, \cite{hk02}) Let $V(x)$ be a continuously differentiable function such that \vspace{-0.23cm}\begin{align*} \alpha_1(\|x\|)&\leq V(x)\leq\alpha_2(\|x\|)\\ \frac{\partial V}{\partial x}f(x,u) &\leq - W(x),\ \forall \|x\| \geq \rho(\|u\|) > 0 \end{align*} $\forall x \!\in \!\mathbb{R}^n,\ u \!\in \!\mathbb{R}^m$, where $\alpha_1,\alpha_2 \!\in \!\mathcal{K}_\infty, \!\rho \! \in \! \mathcal{K}$, and $W(x)$ is positive definite. Then system (\ref{eq:NLSys}) is ISS with $\gamma \!=\! \alpha_1^{-1}\! \circ \! \alpha_2 \! \circ \!\rho$. \end{thm} Consider now the cascade of two systems \vspace{-0.28cm}\begin{align} \label{eq:NLSysCascade1} \dot x_1 &= f_1(x_1,x_2)\\ \label{eq:NLSysCascade2} \dot x_2 &= f_2(x_2) \end{align} with $f_1 \!: \!\mathbb{R}^{n_1} \!\times \!\mathbb{R}^{n_2} \!\rightarrow \!\mathbb{R}^{n_1}$, $f_2 \!: \!\mathbb{R}^{n_2} \!\rightarrow \!\mathbb{R}^{n_2}$ locally Lipschitz. \begin{lemma} \label{lemma:ISSCascade} (Lemma 4.7, \cite{hk02}) If the system (\ref{eq:NLSysCascade1}) with $x_2$ as an input is ISS and the origin of (\ref{eq:NLSysCascade2}) is globally uniformly asymptotically stable, then the origin of the cascade system (\ref{eq:NLSysCascade1}) and (\ref{eq:NLSysCascade2}) is globally uniformly asymptotically stable. \end{lemma} \subsection{Graph Theory} In this paper, we consider NE seeking for dynamic agents with communication over networks with fixed (static) topology. The communication protocol relies on graph theory. The following is from \cite{gr01}.
An undirected graph $G$ is a pair $G = (\mathcal{I},E)$, where $\mathcal{I} = \{1,\dots,N\}$ is the vertex set and $E \subset \mathcal{I}\times\mathcal{I}$ is the edge set. Since $G$ is an undirected graph, for all $i,j \in \mathcal{I}$, if $(i,j)\in E$ then $(j,i)\in E$. Let $\mathcal{N}_i \subset \mathcal{I}$ denote the set of neighbours of player $i$. The adjacency matrix $\textbf{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$ of the graph $G$ is defined such that $a_{ij}=1$ if $(j,i)\in E$ and $a_{ij}=0$ otherwise. For an undirected graph, $a_{ij}=a_{ji}$. $G$ is connected if any two agents are connected by a path. The Laplacian matrix $L = [l_{ij}] \in \mathbb{R}^{N \times N}$ of the graph $G$ is defined as $l_{ii} = \sum_{j\neq i} a_{ij}=|\mathcal{N}_i|$ and $l_{ij} = -a_{ij}$, for $i\neq j$. For an undirected and connected graph, $L$ is symmetric positive semidefinite and has a simple zero eigenvalue, with $L\textbf{1}_N = 0$ and $0<\lambda_2(L)\leq \ldots \leq \lambda_N(L)$. Furthermore, for any vector $y \in \mathbb{R}^N$ satisfying $\textbf{1}_N^Ty=0$, $\lambda_2(L)\|y\|^2\leq y^TLy \leq \lambda_N(L)\|y\|^2$. \subsection{Game Theory} Consider a set of players, $\mathcal{I} = \{1,\dots,N\}$. Each player $i\in \mathcal{I}$ controls its own action $x_i \in \Omega_i \subset \mathbb{R}^{n_i}$. The overall action set of the players is $\Omega = \Omega_1 \times \dots \times \Omega_N \subset \mathbb{R}^n$, where $n = \sum_{i\in \mathcal{I}} n_i$. Let $x = (x_i,x_{-i})\in \Omega$ denote the overall action profile of all players, where $x_{-i} \in \Omega_{-i} = \Omega_1 \times \dots \times \Omega_{i-1} \times \Omega_{i+1} \times \dots \times \Omega_N\subset \mathbb{R}^{n_{-i}}$ denotes the action profile of all players except player $i$. Let $J_i:\Omega \rightarrow \mathbb{R}$ be the cost function of player $i$. Each player tries to minimize its own cost function over its action. Denote the game $\mathcal{G}(\mathcal{I},J_i,\Omega_i)$.
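The spectral properties of the Laplacian stated above can be checked numerically; a minimal sketch (a path graph on four vertices, chosen purely for illustration):

```python
import numpy as np

# Undirected, connected path graph 1-2-3-4.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A            # Laplacian L = D - A

eigs = np.sort(np.linalg.eigvalsh(L))     # 0 = lambda_1 < lambda_2 <= ...
lam2, lamN = eigs[1], eigs[-1]

# Rayleigh-quotient bounds for y orthogonal to the all-ones vector.
rng = np.random.default_rng(0)
y = rng.standard_normal(4)
y -= y.mean()                              # enforce 1^T y = 0
q = y @ L @ y
```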
\begin{defn} \label{def:NE} Given a game $\mathcal{G}(\mathcal{I},J_i,\Omega_i)$, an action profile $x^* = (x_i^*,x_{-i}^*)\in \Omega$ is a Nash Equilibrium (NE) of $\mathcal{G}$ if \begin{align*} J_i(x_i^*,x_{-i}^*)\leq J_i(x_i,x_{-i}^*) \quad \forall i \in \mathcal{I},\ \forall x_i \in \Omega_i \end{align*} At a Nash Equilibrium no player can unilaterally decrease its cost, and thus has no incentive to switch strategies (actions) on its own. \end{defn} \begin{asmp} \label{asmp:Jsmooth} For each $ i \in \mathcal{I}$, let $\Omega_i=\mathbb{R}^{n_i}$ and let the cost function $J_i:\Omega \rightarrow \mathbb{R}$ be $\mathcal{C}^1$ in its arguments and convex in $x_i$. \end{asmp} Under Assumption \ref{asmp:Jsmooth}, any NE satisfies \vspace{-0.26cm}\begin{align} \label{eq:NashInner} \nabla_{i} J_i(x^*_i,x^*_{-i})=0,\quad \forall i \in \mathcal{I} \end{align} where $\nabla_i J_i(x_i,x_{-i}) = \frac{\partial}{\partial x_i} J_i(x_i,x_{-i}) \in \mathbb{R}^{n_i}$ is the partial gradient of player $i$'s cost function, with respect to its own action. We denote the set of all NE in the game by \vspace{-0.25cm} \begin{align} \Gamma_{NE} = \big \{x \in \mathbb{R}^n| \nabla_i J_i(x_i,x_{-i}) = 0,\, \forall i \in \mathcal{I}&\big \} \end{align} Let $\!F(x)\!\! =\!\! \col(\!\nabla_1 J_1(x),\dots,\!\nabla_N J_N(x))$ denote the pseudo-gradient, i.e., the stacked vector of all partial gradients, so (\ref{eq:NashInner}) is \vspace{-0.5cm} \begin{align*} F(x^*) = 0 \end{align*} \begin{asmp} \label{asmp:PseudoGrad} The pseudo-gradient $F:\Omega \rightarrow \mathbb{R}^n$ is strongly monotone, $(x-x')^T(F(x)-F(x'))\geq\mu\|x-x'\|^2$, $\forall x,x' \in \mathbb{R}^n$, for some $\mu>0$, and Lipschitz continuous, $\|F(x)-F(x')\| \leq \theta \|x-x'\|$, for some $\theta >0$. \end{asmp} Under Assumptions \ref{asmp:Jsmooth} and \ref{asmp:PseudoGrad}, by Theorem 3 in \cite{sfpp14}, the game has a unique NE.
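As a concrete check of these conditions, consider a hypothetical two-player quadratic game (an illustrative example, not from the paper): $J_1(x)=x_1^2-x_1x_2-x_1$ and $J_2(x)=x_2^2+x_1x_2$. Its pseudo-gradient is affine, $F(x)=Mx+b$, so the strong-monotonicity and Lipschitz constants of Assumption \ref{asmp:PseudoGrad} follow from the spectrum of $M$, and the unique NE solves $F(x^*)=0$:

```python
import numpy as np

# F(x) = [2*x1 - x2 - 1,  x1 + 2*x2] = M x + b  (illustrative game)
M = np.array([[2.0, -1.0],
              [1.0,  2.0]])
b = np.array([-1.0, 0.0])

x_star = np.linalg.solve(M, -b)                 # unique NE: F(x*) = 0
mu = np.linalg.eigvalsh((M + M.T) / 2).min()    # strong-monotonicity constant
theta = np.linalg.norm(M, 2)                    # Lipschitz constant of F
```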
\vspace{-0.25cm} \subsection{Full-Information Gradient Dynamics} \label{sec:GradDyn} In the rest of this paper, we assume that each agent updates its action in a continuous manner; therefore $x_i=x_i(t)$. For simplicity of notation, we drop the explicit dependence on time. In a game with perfect information, i.e., a complete communication graph, a gradient-based NE seeking algorithm (gradient-play) can be used for the action update, given by \vspace{-0.26cm}\begin{align}\label{eq:GradDyn} \Sigma_i: \quad \dot x_i = -\nabla_i J_i(x_i,x_{-i}),\quad \forall i \in \mathcal{I} \end{align} We call $\Sigma_i$ the \emph{agent learning dynamics}, and note that it requires full information of the others' decisions, $x_{-i}$. The game can be visualized as an interconnection between all agents' learning dynamics, $\Sigma_i$, $i \!\in \!\mathcal{I}$, represented as in Fig. \ref{fig:GameNoDyn}, where $\Sigma_{-i}$ denotes the other agents' learning dynamics (except $i$), and $s_{-i}$ is the information received by agent $i$ from the others $\Sigma_{-i}$ in continuous-time. Hence, in the full information setting, $s_{-i} =x_{-i}$. \begin{figure}[h!] 
\vspace{-0.26cm} \centering \begin{tikzpicture}[auto, node distance=1cm] \node [input, name=input] {}; \node [block, right = of input, minimum width = 1 cm, minimum height = 0.75 cm] (player) {$ \Sigma_i $}; \node [block, below right = -0.75 cm and 1 cm of player, minimum height = 2.5 cm, minimum width = 1 cm] (game) {$ \mathcal{G} $}; \node [block, below = of player, minimum width = 1 cm, minimum height = 0.75 cm] (others) {$\Sigma_{-i}$}; \path (player.south) -- (player.south west) coordinate[pos=0.5] (a1); \path (others.north) -- (others.north west) coordinate[pos=0.5] (b1); \path (player.south) -- (player.south east) coordinate[pos=0.5] (a2); \path (others.north) -- (others.north east) coordinate[pos=0.5] (b2); \draw [->] (a2) -- node [name=si] {$s_i$}(b2); \draw [->] (b1) -- node [name=so] {$s_{-i}$}(a1); \draw [->] (player) -- node [name=xi] {$x_i$}(player-|game.west); \draw [->] (others) -- node [name=xo] {$x_{-i}$}(others-|game.west); \end{tikzpicture} \caption{Game as an interconnection between agents' dynamics} \label{fig:GameNoDyn} \end{figure} With \eqref{eq:GradDyn}, the overall dynamics of all players, $\Sigma = (\Sigma_i,\Sigma_{-i})$ is $\Sigma: \,\,\, \dot x = -F(x)$. Note that $\Sigma$ can be viewed as a feedback interconnection between a bank of integrators with the pseudo-gradient map $F$. Under Assumption \ref{asmp:Jsmooth}, the solutions of (\ref{eq:GradDyn}) exist and are unique for any initial condition, $x(0)$. Under Assumption \ref{asmp:PseudoGrad}, the unique Nash Equilibrium of the game is globally asymptotically stable for the interconnected $\Sigma$, with $\Sigma_i$ as in \eqref{eq:GradDyn}, (cf. \cite{Flam02} or Lemma 1, \cite{gp18}). \subsection{Partial-Information Gradient Dynamics} \label{sec:GradDynPartial} Often only partial information is available to each agent, i.e., from the neighbours of each agent. 
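The globally asymptotically stable full-information gradient-play $\dot x = -F(x)$ described above can be sketched numerically before turning to the partial-information case. The quadratic game below is an illustrative assumption (affine pseudo-gradient $F(x)=Mx+b$ with strongly monotone $F$), discretized by forward Euler:

```python
import numpy as np

# Assumed quadratic game: F(x) = M x + b, symmetric part of M positive
# definite, so the unique NE is x* = -M^{-1} b.
M = np.array([[2.0, -1.0],
              [1.0,  2.0]])
b = np.array([-1.0, 0.0])
x_star = np.linalg.solve(M, -b)

# Forward-Euler discretization of the gradient-play  dx/dt = -F(x).
x = np.array([10.0, -7.0])
dt = 0.01
for _ in range(5000):
    x = x - dt * (M @ x + b)
```

The iteration contracts because the eigenvalues of $I-\mathrm{d}t\,M$ lie strictly inside the unit circle for this step size, so the iterate settles at the NE.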
In this case, a modified algorithm must be used, where agent $i$ uses estimates, $\textbf{x}^i$, which it shares with its neighbours, and evaluates its gradient using these estimates instead of the others' actions. Referring to Fig. \ref{fig:GameNoDyn}, in this case, $s_{-i} = \{\textbf{x}^j|j \in \mathcal{N}_i\}$. The following is from \cite{gp18}. Consider a game with information exchanged over a network, with static communication graph $G_c$ and Laplacian $L$. \begin{asmp} \label{asmp:GraphConnected} The undirected graph $G_c$ is connected. \end{asmp} Consider the following agent learning dynamics \vspace{-0.26cm}\begin{align}\label{eq:GradDynPartialInfo} \Sigma_i: \begin{cases}\dot{\textbf{x}}_{-i}^i = -\mathcal{S}_i \sum_{j \in \mathcal{N}_i} (\textbf x^i - \textbf x^j)\\ \dot x_i = -\nabla_i J_i(x_i,\textbf{x}_{-i}^i) - \mathcal R_i \sum_{j \in \mathcal{N}_i} (\textbf x^i - \textbf x^j), \, \forall i \in \mathcal{I} \end{cases}\hspace{-0.26cm} \end{align} where $\textbf{x}_{-i}^i$ are agent $i$'s estimates of the others' actions. Based on local communication with its neighbours, $\mathcal{N}_i$, each agent $i$ computes estimates of all other agents' actions, $\textbf{x}_{-i}^i\!\!=\!\!\col(\textbf{x}_{1}^i,\dots,\textbf{x}_{i-1}^i,\textbf{x}_{i+1}^i,\dots,\textbf{x}_{N}^i) \! \in \! \mathbb{R}^{n_{-i}}$ and uses these estimates to evaluate its gradient, $\nabla_i J_i(x_i,\textbf{x}_{-i}^i)$. Then, $\textbf{x}^i\!=\!\col(\textbf{x}_{1}^i,\dots,\textbf{x}_{i-1}^i,x_i,\textbf{x}_{i+1}^i,\dots,\textbf{x}_{N}^i) \! \in \! \mathbb{R}^n$ and $\textbf{x}\!=\!\col(\textbf{x}^1,\dots,\textbf{x}^N) \!\in\! \mathbb{R}^{Nn}$. 
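The stacking in \eqref{eq:GradDynPartialInfo} relies on the selection matrices $\mathcal{R}_i,\mathcal{S}_i$ (defined next in \eqref{eq:RS}), which split $\textbf{x}^i$ into the own action $x_i$ and the estimates $\textbf{x}^i_{-i}$. A numerical sketch (the dimensions $n_1=1$, $n_2=2$, $n_3=1$ and the helper `selection_matrices` are illustrative assumptions):

```python
import numpy as np

def selection_matrices(dims, i):
    """Build R_i and S_i for action dimensions `dims`, player index i
    (0-based): R_i extracts x_i from x^i, S_i extracts x_{-i}^i."""
    n = sum(dims)
    lo = sum(dims[:i])
    hi = lo + dims[i]
    R = np.zeros((dims[i], n))
    R[:, lo:hi] = np.eye(dims[i])
    S = np.delete(np.eye(n), range(lo, hi), axis=0)
    return R, S

dims = [1, 2, 1]                              # N = 3 players, n = 4
R2, S2 = selection_matrices(dims, 1)          # player 2's matrices
xvec = np.array([10.0, 20.0, 30.0, 40.0])     # a stacked x^i
```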
The matrices $\mathcal{R}_i$ and $\mathcal{S}_i$ are defined as follows \vspace{-0.26cm}\begin{align} \label{eq:RS} \begin{split} \mathcal R_i &:= \begin{bmatrix} 0_{n_i\times n_{<i}} & I_{n_i} & 0_{n_i\times n_{>i}} \end{bmatrix}\\ \mathcal{S}_i &:= \begin{bmatrix} I_{n_{<i}} & 0_{n_{<i} \times n_i} & 0_{n_{<i} \times n_{>i}}\\ 0_{n_{>i} \times n_{<i}} & 0_{n_{>i} \times n_i} & I_{n_{>i}} \end{bmatrix} \end{split} \end{align} for action and estimate selection, where $n_{<i}:=\sum_{j<i\ j,i\in \mathcal{I}}n_j$ and $n_{>i}:=\sum_{j>i\ j,i\in \mathcal{I}}n_j$. Then $x_i = \mathcal{R}_i \textbf{x}^i$, $\textbf{x}_{-i}^i = \mathcal{S}_i \textbf{x}^i$, and $\textbf{x}^i = \mathcal{R}_i^T x_i+\mathcal{S}_i^T \textbf{x}^i_{-i}$. Note that, \vspace{-0.26cm}\begin{align}\label{eq:r_i} \mathcal{R}_i^T\mathcal{R}_i+\mathcal{S}_i^T\mathcal{S}_i = I_{n}, \qquad \qquad \forall i \in \mathcal{I} \end{align} The vector of stacked partial gradients $\nabla_i J_i(x_i,\textbf x_{-i}^i)$ in \eqref{eq:GradDynPartialInfo}, computed based on estimates, is denoted as \vspace{-0.26cm}\begin{align} \label{eq:ExtPseudo} \textbf F(\textbf{x}) = \col(\nabla_1 J_1(x_1,\textbf{x}_{-1}^1),\dots,\nabla_N J_N(x_N,\textbf{x}_{-N}^N)) \end{align} and is called the extended pseudo-gradient. Note that $\textbf F$ satisfies $\textbf{F}(\textbf{1}_N \otimes x) = F(x) $ for any $x$, hence \vspace{-0.26cm}\begin{align} \label{eq:ExtPseudoZero} \textbf{F}(\textbf{1}_N \otimes x^*) = 0 \end{align} \begin{asmp} \label{asmp:ExtendedPseudoGrad} $\textbf F$ is Lipschitz continuous, $\|\textbf{F}(\textbf{x})-\textbf{F}(\textbf{x}')\| \leq \theta \|\textbf{x}-\textbf{x}'\|$, for all $\textbf{x},\textbf{x}' \in \mathbb{R}^{Nn}$, for some $\theta >0$. \end{asmp} Under Assumptions \ref{asmp:Jsmooth}--\ref{asmp:ExtendedPseudoGrad}, if $\mu(\lambda_2(L)-\theta)>\theta^2$, the unique NE, $x=x^*$, is globally asymptotically stable for all networked interconnected $\Sigma_i$, \eqref{eq:GradDynPartialInfo}, (cf. 
Theorem 1, \cite{gp18}). \vspace{-0.26cm} \section{Problem Formulation} \label{sec:problem} In this paper, we consider the problem of NE seeking for multi-integrator agents in the presence of additive disturbance signals. The dynamics of each agent can be modelled by the following linear system, of order $r_i\geq1$ \vspace{-0.26cm}\begin{align} \label{eq:multiIntegrator} x_i^{(r_i)} = u_i+d_i, \quad \forall i \in \mathcal{I} \end{align} where $x_i^{(r_i)}:=\frac{d^{r_i}x_i(t)}{dt^{r_i}}$. Each agent has a cost function $J_i(x_i,x_{-i})$ that it seeks to minimize. Agent $i$ is affected by a disturbance $d_i$, which can be modelled as being generated by \vspace{-0.26cm}\begin{align} \label{eq:Case1a_w} \mathcal{D}_i:&\begin{cases} \dot w_i = S_iw_i, \quad w_i(0) \in \mathcal{W}_i, \, \, \, \forall i \in \mathcal{I}\\ d_i = D_iw_i \end{cases} \end{align} where $w_i \in\mathbb{R}^{q_i}$, $d_i \in \mathbb{R}^{n_i}$. We assume that $\mathcal{W}_i$ is a compact subset of $\mathbb{R}^{q_i}$ and that $\mathcal{D}_i$ is marginally (neutrally) stable and observable, \cite{ib90}. Let $\mathcal{W}= \mathcal{W}_1 \times \dots \times \mathcal{W}_N$. This setting is motivated by cases where the agents in the game have inherent dynamics. This occurs, for example, in the case of a game played between a network of velocity-actuated (single-integrator) or force-actuated (double-integrator) robots whose costs depend upon their position only. The disturbances are the result of a deterministic effect from the physical nature of the systems, e.g., wind pushing the mobile robots. These disturbances have known form (e.g., constant) but unknown parameters (e.g., strength). Therefore, we assume that each agent knows its $S_i$ and $D_i$ only, but has no knowledge of the initial condition $w_i(0)\in\mathcal{W}_i$ or the resulting solution $w_i(t)$. These are standard assumptions in the output regulation literature. 
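For instance, a sinusoidal disturbance of known frequency but unknown amplitude and phase fits \eqref{eq:Case1a_w} with a marginally stable, observable pair $(S_i,D_i)$. A minimal sketch (all numbers illustrative; the closed-form matrix exponential is specific to this rotation-type $S_i$):

```python
import numpy as np

omega = 2.0                                   # known disturbance frequency
S = np.array([[0.0, omega], [-omega, 0.0]])   # eigenvalues +/- i*omega
D = np.array([[1.0, 0.0]])
w0 = np.array([1.0, 0.0])                     # unknown to the agents in practice

# exp(S t) is a planar rotation, so d(t) = D exp(S t) w0 = cos(omega t) here.
def disturbance(t):
    c, s = np.cos(omega * t), np.sin(omega * t)
    return float(D @ np.array([[c, s], [-s, c]]) @ w0)

obs = np.vstack([D, D @ S])                   # observability matrix of (S, D)
```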
Now the problem becomes one of finding control inputs, $u_i$, that minimize the cost function $J_i(x_i,x_{-i})$ while simultaneously rejecting the disturbances, i.e., designing dynamics $\Sigma_i$, under which the NE $x^*$ is asymptotically stable for the closed-loop irrespective of disturbances (Fig. \ref{fig:Case1NoDyn}). We consider separately the single-integrator agents (Section \ref{sec:SingleIntegrator}) and double-integrator agents (Section \ref{sec:second_integrator}), and indicate how to extend the results to multi-integrator agents. \begin{figure}[h!] \centering \begin{tikzpicture}[auto, node distance=1cm] \node [input, name=input] {}; \node [block, right = of input, minimum width = 1 cm, minimum height = 0.75 cm] (player) {$ \Sigma_i $}; \node [input, name=dist, above= 0.5 cm of player] {}; \node [block, below right = -0.75 cm and 1 cm of player, minimum height = 2.5 cm, minimum width = 1 cm] (game) {$ \mathcal{G} $}; \node [block, below = of player, minimum width = 1 cm, minimum height = 0.75 cm] (others) {$\Sigma_{-i}$}; \node [input, name=dist2, below= 0.5 cm of others] {}; \path (player.south) -- (player.south west) coordinate[pos=0.5] (a1); \path (others.north) -- (others.north west) coordinate[pos=0.5] (b1); \path (player.south) -- (player.south east) coordinate[pos=0.5] (a2); \path (others.north) -- (others.north east) coordinate[pos=0.5] (b2); \draw [->] (dist) -- node [pos=-0.2] {$d_i$} (player); \draw [->] (dist2) -- node [pos=-0.2] {$d_{-i}$} (others); \draw [->] (a2) -- node [name=si] {$s_i$}(b2); \draw [->] (b1) -- node [name=so] {$s_{-i}$}(a1); \draw [->] (player) -- node [name=xi] {$x_i$}(player-|game.west); \draw [->] (others) -- node [name=xo] {$x_{-i}$}(others-|game.west); \end{tikzpicture} \caption{Game with disturbances on the dynamics of each agent} \label{fig:Case1NoDyn} \end{figure} In each case we consider a partial-decision information setting, under local knowledge and communication over a graph $G_c$. 
We will show that if each player uses gradient-play dynamics combined with an internal-model correction term that implements a reduced-order observer for $w_i$, \cite{ai95}, (and a consensus-based dynamics), then every solution of the stacked dynamics of all agents stays bounded and converges to the NE, $x^*$, irrespective of disturbances $w \in \mathcal{W}$. \section{NE Seeking for Single-Integrator Agents} \label{sec:SingleIntegrator} In this section, we consider a game $\mathcal{G}$ where each agent is modelled as \vspace{-0.26cm}\begin{align} \label{eq:Case1a} \dot x_i = u_i+d_i, \quad \forall i \in \mathcal{I} \end{align} where $d_i$ is generated by \eqref{eq:Case1a_w}, as in the example of a network of velocity-actuated robots, and has a cost $J_i(x_i,x_{-i})$, which it seeks to minimize while rejecting disturbances. We consider that each agent has partial (networked) information from its neighbours over the graph $G_c$. Under Assumptions \ref{asmp:Jsmooth} and \ref{asmp:PseudoGrad}, the game has a unique NE. Inspired by \eqref{eq:GradDynPartialInfo}, our proposed $u_i$ is dynamic and is generated by \vspace{-0.2cm}\begin{align} \label{eq:AgentFullInfoInput} \begin{split} \dot{\textbf{x}}_{-i}^i &= -\mathcal{S}_i \sum_{j \in \mathcal{N}_i} (\textbf x^i - \textbf x^j)\\ \dot \xi_i &= S_i(K_ix_i+\xi_i) +K_i \nabla_i J_i(x_i,\textbf x_{-i}^i)\\ &\quad +K_i \mathcal R_i \sum_{j \in \mathcal{N}_i} (\textbf x^i - \textbf x^j)\\ u_i &= -\nabla_i J_i(x_i,\textbf{x}^i_{-i})- \mathcal R_i \sum_{j \in \mathcal{N}_i} (\textbf x^i - \textbf x^j)\\ &\quad-D_i(K_ix_i+\xi_i), \quad \quad \quad \quad \quad \forall i \in \mathcal{I} \end{split} \end{align} where $K_i$ is chosen such that $\sigma(S_i-K_iD_i)\subset\mathbb{C}^-$. 
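The simplest closed-loop instance of these dynamics can be sketched numerically: two scalar players over a connected two-node graph, with constant disturbances ($S_i=0$, $D_i=1$, and $K_i=1$ so that $S_i-K_iD_i=-1$ is Hurwitz). The quadratic costs and all numerical values below are illustrative assumptions, not from the paper; the sketch checks that the actions reach the NE, the estimates reach consensus, and $K_ix_i+\xi_i$ recovers $d_i$:

```python
import numpy as np

# Assumed game: J1 = x1^2 - 0.1*x1*x2 - x1,  J2 = x2^2 + 0.1*x1*x2,
# so F(x) = M x + b with M = [[2,-0.1],[0.1,2]], b = [-1,0].
M = np.array([[2.0, -0.1], [0.1, 2.0]])
b = np.array([-1.0, 0.0])
x_star = np.linalg.solve(M, -b)          # unique NE

d = np.array([0.5, -0.3])                # constant disturbances (S_i=0, D_i=1)
K = 1.0                                  # makes S_i - K_i*D_i = -1 Hurwitz

x = np.zeros(2)                          # actions
e = np.zeros(2)                          # e[0]: agent 1's estimate of x2,
                                         # e[1]: agent 2's estimate of x1
xi = np.zeros(2)                         # internal-model states
dt = 0.01
for _ in range(4000):
    g = np.array([2*x[0] - 0.1*e[0] - 1.0,    # grad_1 J1 at (x1, est. of x2)
                  0.1*e[1] + 2*x[1]])         # grad_2 J2 at (est. of x1, x2)
    c = np.array([x[0] - e[1], x[1] - e[0]])  # action-slot consensus errors
    dx = -g - c - (K*x + xi) + d              # action dynamics
    de = np.array([x[1] - e[0], x[0] - e[1]]) # estimate-slot consensus
    dxi = K*(g + c)                           # S_i = 0 drops the S(Kx+xi) term
    x, e, xi = x + dt*dx, e + dt*de, xi + dt*dxi
```

In the coordinate $\rho_i = d_i-(K_ix_i+\xi_i)$ this sketch reduces to $\dot\rho_i=-K_i\rho_i$, matching the cascade structure used in the proof.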
Note that \eqref{eq:AgentFullInfoInput} has a gradient-play term (evaluated at estimates) as well as a dynamic component $\dot{\xi}_i$ to reject disturbances, combined with a dynamic Laplacian-based estimate-consensus component $\dot{\textbf{x}}_{-i}^i $, which in steady-state should bring all $\textbf{x}^i$ to the consensus subspace, $\textbf{x}^i=\textbf{x}^j$. This leads to $\Sigma_i$ given by, \vspace{-0.26cm}\begin{align} \label{eq:AgentPartialInfo} \Sigma_i:\begin{cases} \dot x_i &= -\nabla_i J_i(x_i,\textbf x_{-i}^i) - \mathcal R_i \sum_{j \in \mathcal{N}_i} (\textbf x^i - \textbf x^j)\\ &\quad -D_i(K_ix_i+\xi_i) + d_i \\ \dot{\textbf{x}}_{-i}^i &= -\mathcal{S}_i \sum_{j \in \mathcal{N}_i} (\textbf x^i - \textbf x^j)\\ \dot \xi_i &= S_i(K_ix_i+\xi_i) +K_i \nabla_i J_i(x_i,\textbf x_{-i}^i)\\ &\quad +K_i \mathcal R_i \sum_{j \in \mathcal{N}_i} (\textbf x^i - \textbf x^j),\quad \quad \forall i \in \mathcal{I} \end{cases} \end{align} Compared to \eqref{eq:GradDynPartialInfo}, \eqref{eq:AgentPartialInfo} has an extra component, $\xi_i$, that acts as an internal model for the disturbance. The following result shows convergence to the NE irrespective of disturbances. \begin{thm} \label{thm:SingleIntPartial} Consider a game $\mathcal{G}(\mathcal{I},J_i,\Omega_i)$ with partial information over a graph $G_c$ with Laplacian $L$ and agent learning dynamics $\Sigma_i$, \eqref{eq:AgentPartialInfo}, where the disturbance $d_i$ is as in \eqref{eq:Case1a_w}. Under Assumptions \ref{asmp:Jsmooth}, \ref{asmp:PseudoGrad}, \ref{asmp:GraphConnected} and \ref{asmp:ExtendedPseudoGrad}, if $\mu(\lambda_2(L)-\theta)>\theta^2$, then $\bar{\textbf x} = \textbf 1_N \otimes x^*$, where $x^*$ is the unique NE, is globally asymptotically stable for all networked interconnected $\Sigma_i$s, \eqref{eq:AgentPartialInfo}, for all $w \in \mathcal{W}$. Moreover, all players' estimates converge globally to $\bar{\textbf x} = \textbf 1_N \otimes x^*$, for all $w \in \mathcal{W}$. 
\end{thm} \begin{proof} The idea of the proof is to express all agents' interconnected dynamics as a closed-loop dynamical system for which the NE is shown to be globally asymptotically stable irrespective of disturbances. To show stability we use a suitable change of coordinates to put the system in cascade form. Then we exploit ISS properties induced by strong monotonicity of the pseudo-gradient and Lipschitz continuity of the extended pseudo-gradient. In stacked form, using $\textbf{F}$, \eqref{eq:ExtPseudo}, \eqref{eq:Case1a_w}, all interconnected $\Sigma_i$, \eqref{eq:AgentPartialInfo}, of all agents $i \in \mathcal{I}$, can be written as a closed-loop system \vspace{-0.26cm} \begin{align} \label{eq:sigma_part_distb} &\quad \dot w = Sw \notag\\ \Sigma:&\begin{cases} \dot x = -\textbf{F}(\textbf{x}) - \mathcal R \textbf{L} \textbf{x}-D(Kx+\xi) + Dw\\ \mathcal{S} \dot{\textbf{x}} = -\mathcal{S} \textbf{L} \textbf{x}\\ \dot \xi = S(Kx+\xi) +K \textbf{F}(\textbf{x})+K \mathcal R \textbf{L} \textbf{x} \end{cases} \end{align} with $\mathcal R \!\!=\! \!\blkdiag\!(\mathcal R_1,\dots\mathcal R_N)$, $\!\mathcal S \!= \!\blkdiag\!(\mathcal S_1\dots\!\mathcal S_N)$, $\!\textbf{L} \!\!= \!L \otimes I_n $, $\!\textbf{x}\!=\col(\textbf{x}^1,\dots\textbf{x}^N) $, $\!\col(\textbf{x}_{-1}^1,\dots,\textbf{x}_{-N}^N\!)\!=\!\mathcal{S}\textbf{x}\!$ by $\textbf{x}_{-i}^i \!=\! \mathcal{S}_i \textbf{x}^i$. Consider the coordinate transformation $\xi \mapsto \rho \!:=\! w\!-\!(Kx \!+ \!\xi)$, so that $ \dot \rho \!=\!(S\!-\!KD)\rho $. Note that from $\textbf{x}^i = \mathcal{R}_i^T x_i+\mathcal{S}_i^T \textbf{x}^i_{-i}$ it follows that $\textbf{x} = \mathcal R^Tx+\mathcal{S}^T\mathcal{S}\textbf{x}$. Using $\mathcal{R}^T\mathcal{R}\!+\!\mathcal{S}^T\mathcal{S} \!=\! 
I_{Nn}$, from \eqref{eq:r_i}, and the previous relations, it follows that in the new coordinates, the stacked-form dynamics \eqref{eq:sigma_part_distb} are given as, \vspace{-0.26cm}\begin{align} \dot w &= Sw \nonumber \\ \begin{split} \label{eq:StackedDynamicsDist} \dot{\textbf{x}} &= -\mathcal{R}^T\textbf{F}(\textbf{x}) - \textbf{L} \textbf{x}+\mathcal{R}^TD\rho\\ \dot \rho &= (S-KD)\rho \end{split} \end{align} We note that \eqref{eq:StackedDynamicsDist} is in cascade form from $\rho$ to $\textbf{x}$. By shifting the coordinates $\textbf{x} \mapsto \tilde{\textbf{x}} := \textbf x - \bar{\textbf{x}}$, where $\bar{\textbf{x}} = \textbf{1}_N \otimes x^*$, the dynamics of the $(\tilde {\textbf{x}},\rho)$ subsystem become \vspace{-0.26cm}\begin{align} \label{eq:CascadeSystem2} \begin{split} \dot{\tilde{\textbf{x}}} &= -\mathcal R^T \textbf{F}(\tilde{\textbf{x}}+\bar{\textbf{x}})-\textbf{L}(\tilde{\textbf{x}}+\bar{\textbf{x}})+\mathcal R^TD\rho\\ \dot \rho &= (S-KD)\rho \end{split} \end{align} Note that \eqref{eq:CascadeSystem2} is again in cascade form, with the $\rho$-subsystem generating the external input for the $\tilde{\textbf{x}}$-subsystem. Consider $V(\tilde{\textbf{x}}) = \frac{1}{2} \|\tilde{\textbf{x}}\|^2$. Then, along solutions of the $\tilde{\textbf{x}}$-subsystem in \eqref{eq:CascadeSystem2}, using $\textbf{L}\!\bar{\textbf{x}}= \textbf{0}_{Nn}$, it holds that \vspace{-0.2cm} \begin{align}\label{bau_0} \dot V &= -\tilde{\textbf{x}}^T \big ( \mathcal R^T\textbf{F}(\tilde{\textbf{x}}+\bar{\textbf{x}}) + \textbf{L}\tilde{\textbf{x}} \! - \mathcal R^TD\rho \big ) \end{align} Decompose $\mathbb{R}^{Nn}$ as $\mathbb{R}^{Nn}\!=\!C^n \!\oplus \!E^n$, where $C^n \!=\! \{\textbf{1}_N \otimes \!x\! |\ x \!\in\! \mathbb{R}^n\}$ is the consensus subspace, and $E^n$ is its orthogonal complement. Any $\textbf{x}\!\in \!\mathbb{R}^{Nn}$ can be written as $\textbf{x}\!=\!
\textbf{x}^\bot \!+\!\textbf{x}^{\|}$, where $\textbf{x}^{\|} \!= \!P_C \textbf{x} \!\in \!C^n$, $\textbf{x}^\bot \! = \! P_E \textbf{x} \! \in \! E^n$, for $P_C \!=\! \frac{1}{N} \textbf{1}_N \! \otimes \!\textbf{1}_N^T \!\otimes \! I_n$, $ P_E \!=\! I_{Nn} \!-\!\frac{1}{N}\textbf{1}_N \!\otimes \! \textbf{1}_N^T \!\otimes \!I_n $. Thus, ${\textbf{x}}^{\|} \!=\! \textbf{1}_N \!\otimes \!x$, for some $x \! \in \! \mathbb{R}^n$, and $ \tilde{\textbf{x}} \!=\!\textbf x \! -\! \bar{\textbf{x}} \!=\! \tilde{\textbf{x}}^\bot \!+\!\tilde{\textbf{x}}^{\|}$, where $\tilde{\textbf{x}}^{\|}\! =\! \textbf{1}_N \!\otimes \! (x\!-\!x^*)$, $\tilde{\textbf{x}}^\bot \!=\!{\textbf{x}}^\bot$. Using $\textbf{F}(\bar{\textbf{x}}) \!=\!\textbf{0}_{n}$ by (\ref{eq:ExtPseudoZero}), from \eqref{bau_0} we get \vspace{-0.2cm} \begin{align}\label{bau_00} \begin{split} \dot V &= -(\tilde{\textbf{x}}^\bot+\tilde{\textbf{x}}^{\|})^T \mathcal R^T[\textbf{F}(\tilde{\textbf{x}}+\bar{\textbf{x}}) - \textbf{F}(\bar{\textbf{x}})]\\ & -(\tilde{\textbf{x}}^\bot\!+\!\tilde{\textbf{x}}^{\|})^T\textbf{L}(\tilde{\textbf{x}}^\bot \!+\! \tilde{\textbf{x}}^{\|})\!+\!(\tilde{\textbf{x}}^\bot\!+\!\tilde{\textbf{x}}^{\|})^T\mathcal R^TD\rho \end{split} \end{align} Note that $\textbf{L}\tilde{\textbf{x}}^{\|} \!= \!\textbf{0}_{Nn}$ and $\lambda_2(L)\|\tilde{\textbf{x}}^\bot\|^2 \!\leq \! (\tilde{\textbf{x}}^\bot)^T \textbf{L} \tilde{\textbf{x}}^\bot$, $\forall \tilde{\textbf{x}}^\bot \! \in \! E^n$ by properties of the Laplacian under Assumption \ref{asmp:GraphConnected}. Adding and subtracting $\! \textbf{F}(\tilde{\textbf{x}}^{\|}\!+\!\bar{\textbf{x}}) $ in \eqref{bau_00}, with $\textbf{F}(\tilde{\textbf{x}}^{\|}\!+\!\bar{\textbf{x}}) \!=\! \textbf{F}(\textbf{1}_N \otimes x) \!=\! F(x)$, $\textbf{F}(\bar{\textbf{x}}) \!= \!\textbf{F}(\textbf{1}_N \otimes x^*) \!=\! F(x^*)$, and using $\mathcal R(\tilde{\textbf{x}}^{\|})\!
=\!x-x^*$, yields \vspace{-0.25cm} \begin{align*} \begin{split} \dot V &\leq -(\tilde{\textbf{x}}^\bot)^T\mathcal R^T[ \textbf{F}(\tilde{\textbf{x}}^\bot+\tilde{\textbf{x}}^{\|}+\bar{\textbf{x}}) - \textbf{F}(\tilde{\textbf{x}}^{\|}+\bar{\textbf{x}})]\\ &\quad-(\tilde{\textbf{x}}^\bot)^T\mathcal R^T[F(x) - F(x^*)]-\lambda_2(L)\|\tilde{\textbf{x}}^\bot\|^2\\ &\quad -(x-x^*)^T[ \textbf{F}(\tilde{\textbf{x}}^\bot+\tilde{\textbf{x}}^{\|}+\bar{\textbf{x}}) - \textbf{F}(\tilde{\textbf{x}}^{\|}+\bar{\textbf{x}})]\\ &\quad -(x-x^*)^T[F(x) - F(x^*)]+(\tilde{\textbf{x}}^\bot+\tilde{\textbf{x}}^{\|})^T\mathcal R^TD\rho \end{split} \end{align*} Using $\|\textbf{F}(\!\tilde{\textbf{x}}^\bot\!+\!\tilde{\textbf{x}}^{\|}\!+\!\bar{\textbf{x}})\!-\!\textbf{F}(\!\tilde{\textbf{x}}^\|\!+\!\bar{\textbf {x}})\| \!\leq \!\theta \| \!\tilde{\textbf{x}}^\bot\|$ by Assumption \ref{asmp:ExtendedPseudoGrad}, $\|\mathcal R \!\tilde{\textbf{x}}^\bot\| \! \leq \! \|\mathcal R\| \|\tilde{\textbf{x}}^\bot\|$, $\|F(x)-F(x^*)\|\! \leq \!\bar \theta \|x-x^*\|\! \leq \!\theta \|x-x^*\|$, $(x-x^*)^T[F(x)-F(x^*)] \! 
\geq \!\mu \|x-x^*\|^2$ by Assumption \ref{asmp:PseudoGrad}, yields \begin{align*} \begin{split} \dot V &\leq \theta \|\tilde{\textbf{x}}^\bot \|^2 + \theta\|\tilde{\textbf{x}}^\bot\|\|x-x^*\|-\lambda_2(L)\|\tilde{\textbf{x}}^\bot\|^2\\ &\quad+\theta\|x-x^*\|\|\tilde{\textbf{x}}^\bot\|-\mu\|x-x^*\|^2+(\tilde{\textbf{x}}^\bot+\tilde{\textbf{x}}^\|)^T\mathcal R^TD\rho \end{split} \end{align*} Using $\|x-x^*\| = \frac{1}{\sqrt{N}}\|\tilde{\textbf{x}}^\|\|$, we can write \vspace{-0.26cm} \begin{align*} \begin{split} \dot V &\leq -\begin{bmatrix} \|\tilde{\textbf{x}}^\|\| & \|\tilde{\textbf{x}}^\bot\| \end{bmatrix} \begin{bmatrix} \frac{1}{N} \mu & -\frac{1}{\sqrt{N}} \theta \\ -\frac{1}{\sqrt{N}} \theta & \lambda_2(L)-\theta \end{bmatrix}\begin{bmatrix} \|\tilde{\textbf{x}}^\|\| \\ \|\tilde{\textbf{x}}^\bot\| \end{bmatrix}\\ &\quad +\|\tilde{\textbf{x}}^\bot+\tilde{\textbf{x}}^\|\|\|\mathcal R^TD\|\|\rho\| \end{split} \end{align*} Then, given any $a>0$, for any $\| \tilde{\textbf{x}}^\bot+\tilde{\textbf{x}}^\| \| \geq \frac{\|\mathcal R^TD\|}{a}\|\rho\|$, \vspace{-0.26cm} \begin{align*} \begin{split} \dot V &\leq -\begin{bmatrix} \|\tilde{\textbf{x}}^\|\| & \|\tilde{\textbf{x}}^\bot\| \end{bmatrix} \begin{bmatrix} \frac{1}{N} \mu & -\frac{1}{\sqrt{N}} \theta \\ -\frac{1}{\sqrt{N}} \theta & \lambda_2(L)-\theta \end{bmatrix}\begin{bmatrix} \|\tilde{\textbf{x}}^\|\| \\ \|\tilde{\textbf{x}}^\bot\| \end{bmatrix}\\ &\quad +a\|\tilde{\textbf{x}}^\bot+\tilde{\textbf{x}}^\|\|^2 \end{split} \end{align*} Note that $\|\tilde{\textbf{x}}^\bot+\tilde{\textbf{x}}^{\|}\|^2=\|\tilde{\textbf{x}}^\bot\|^2+\|\tilde{\textbf{x}}^{\|}\|^2 = \|\tilde{\textbf{x}}\|^2 $, so that, for any $\| \tilde{\textbf{x}}^\bot+\tilde{\textbf{x}}^\| \| \geq \frac{\|\mathcal R^TD\|}{a}\|\rho\|$ we can write, \vspace{-0.2cm} \begin{align}\label{bau_2} \dot V \!\leq \!-\!\big [ \|\tilde{\textbf{x}}^\|\| \, \, \|\tilde{\textbf{x}}^\bot\| \big ] \!\!\begin{bmatrix} \frac{1}{N} \mu - a &\! 
-\frac{1}{\sqrt{N}} \theta \\ -\frac{1}{\sqrt{N}} \theta &\! \lambda_2(L)-\theta - a \end{bmatrix}\!\!\begin{bmatrix} \|\tilde{\textbf{x}}^\|\| \\ \|\tilde{\textbf{x}}^\bot\| \end{bmatrix} \end{align} For the $\tilde{\textbf{x}}$-subsystem in \eqref{eq:CascadeSystem2} to be ISS, we need the matrix on the right-hand side to be positive definite. This holds for any $a\!>\!0$ such that $a\!<\!\frac{1}{N} \mu$, $a\!<\!\lambda_2(L) \!-\! \theta$, and $(\frac{1}{N} \mu \!-\!a)(\lambda_2(L) \!-\! \theta \!-\! a) \!-\! \frac{1}{N} \theta^2 \!>\! 0$. Since $\mu(\lambda_2(L)-\theta)>\theta^2$, the intersection of the above inequalities is guaranteed to be nonempty and the matrix is positive definite for any such $a$. Then, for any such $a$, $\dot V(\tilde{\textbf{x}}) \leq -W(\tilde{\textbf{x}}),\, \forall \|\tilde{\textbf{x}}\| \geq \frac{\|\mathcal R^TD\|}{a}\|\rho\|$, where $W(\tilde{\textbf{x}})$ is a positive definite function, hence the $\tilde{\textbf{x}}$-subsystem in \eqref{eq:CascadeSystem2} is ISS with respect to $\rho$ by Theorem \ref{thm:ISSLyapunov}. Since $\!\dot \rho \!=\! (S\!-\!KD)\rho$ is asymptotically stable by (\ref{eq:AgentPartialInfo}), it follows that the origin of (\ref{eq:CascadeSystem2}) is asymptotically stable by Lemma \ref{lemma:ISSCascade}, hence $( \mathbf{1}_N \! \otimes x^*,0)$ is asymptotically stable for (\ref{eq:StackedDynamicsDist}), for any $w \in \mathcal{W}$. \end{proof} \begin{remark} Local results follow if Assumption \ref{asmp:ExtendedPseudoGrad} holds only locally around $\mathbf{x}^*=\mathbf{1}_N \otimes x^*$. We note that the class of quadratic games satisfies Assumption \ref{asmp:ExtendedPseudoGrad} globally. 
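For instance (an illustrative computation with generic data, not tied to the examples of Section \ref{sec:simulations}): in a quadratic game with costs $J_i(x_i,x_{-i}) = \frac{1}{2}x^TQ^ix + (q^i)^Tx$, $Q^i$ symmetric, each partial gradient is affine, hence so is the extended pseudo-gradient evaluated at the estimates,
\begin{align*}
\textbf{F}(\textbf{x}) = B\textbf{x}+b_0, \qquad \|\textbf{F}(\textbf{x}_1)-\textbf{F}(\textbf{x}_2)\| \leq \|B\|\,\|\textbf{x}_1-\textbf{x}_2\|, \quad \forall \textbf{x}_1, \textbf{x}_2 \in \mathbb{R}^{Nn},
\end{align*}
where block row $i$ of $B$ collects the coefficients of $\nabla_i J_i$ applied to $\textbf{x}^i$; thus the global Lipschitz bound of Assumption \ref{asmp:ExtendedPseudoGrad} holds with $\theta = \|B\|$.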
\end{remark} \begin{remark} In the special case of full-information, there is no need for estimates and the agent (closed-loop) learning dynamics $\Sigma_i$, \eqref{eq:AgentPartialInfo}, reduce to, \vspace{-0.2cm}\begin{align} \label{eq:AgentFullInfo} \Sigma_i: \begin{cases} \dot x_i = -\nabla_i J_i(x_i,x_{-i})-D_i(K_ix_i+\xi_i)+d_i, \\ \dot \xi_i = S_i(K_ix_i+\xi_i)+K_i\nabla_i J_i(x_i,x_{-i}), \end{cases} \end{align} The convergence result of Theorem \ref{thm:SingleIntPartial} holds without the need for Assumptions \ref{asmp:GraphConnected} and \ref{asmp:ExtendedPseudoGrad}. \end{remark} \section{NE Seeking For Double Integrators} \label{sec:second_integrator} In this section, we consider NE seeking for double-integrator agents with disturbances. Our motivation is two-fold. Firstly, the agents in the game might have some sort of inherent dynamics, such as double-integrator robots playing a game wherein the cost functions are functions of their positions. Each agent, therefore, cannot directly update its action, $x_i$, via choice of input $u_i$ and must take into account its inherent dynamics. Secondly, we may want to consider higher-order dynamics for learning, as done extensively in the optimization literature, e.g., the heavy-ball method. Consider that each agent is modelled as a double integrator \begin{align} \label{eq:DoubleIntegratorDist} \begin{cases} \dot x_i = v_i\\ \dot v_i = u_i+d_i \end{cases} \end{align} where $x_i, v_i, u_i,d_i\in \mathbb{R}^{n_i}$, with $d_i$ generated by \eqref{eq:Case1a_w}. Each agent minimizes its cost function $J_i(x_i,x_{-i})$, with the constraint that its steady-state velocity is zero. This setting is motivated, for example, by a network of mobile, force-actuated robots whose costs depend only on their positions. At steady state, necessarily, their velocities must be zero. This requirement can be seen as the result of a quadratic penalty term, $J_{v_i}(v_i)\!=\! \frac{1}{2}\|v_i\|^2$, on the velocity of each agent.
Thus, the overall cost function for each agent is given by $\bar J_i(x_i,x_{-i},v_i) \!=\! J_i(x_i,x_{-i}) \!+\! \frac{1}{2}\|v_i\|^2$ and the resulting NE is \vspace{-0.15cm} \begin{align} \label{eq:doubleNESet} \!\!\!\!\Gamma_{NE}\!= \big \{\!(x,\!v\!) \!\in \! \mathbb{R}^n \!\!\times \! \!\mathbb{R}^n | \!\nabla_i J_i(x_i,x_{-i}) \!=\! 0,v_i\!=\!0, \forall i \! \in\! \mathcal{I}\big \} \end{align} Under Assumptions \ref{asmp:Jsmooth} and \ref{asmp:PseudoGrad}, $x^*$ is unique. By \eqref{eq:doubleNESet}, $(x^*,v^*)$ is such that \vspace{-0.2cm} \begin{align} \label{eq:PsudoGradZeroDouble} F(x^*) &= 0, \quad v^* = 0 \end{align} We consider partial-information learning dynamics under which the NE of the game is reached in the presence of additive disturbances. We propose the following dynamic feedback \vspace{-0.2cm} \begin{align} \label{eq:doubleIntLearningDist} \begin{split} \dot{\boldsymbol{\gamma}}_{-i}^i \!\!\!&= -\mathcal{S}_i\sum_{j \in \mathcal{N}_i}(\boldsymbol{\gamma}^i - \boldsymbol{\gamma}^j)\\ \!\dot \xi_i \!\!\!&= S_i(K_iv_i+\xi_i)+K_i \nabla_i J_i(x_i+b_i v_i,\boldsymbol{\gamma}_{-i}^i)\\ &\quad+K_i \Big(\frac{1}{b_i}v_i + \mathcal{R}_i\sum_{j \in \mathcal{N}_i}(\boldsymbol{\gamma}^i - \boldsymbol{\gamma}^j)\Big)\\ \!\!u_i \! \!&=\! -\! \nabla_iJ_i(x_i + b_iv_i,\boldsymbol{\gamma}_{-i}^i)- \frac{1}{b_i}v_i\\ &\quad \!-\! \mathcal{R}_i \!\sum_{j \in \mathcal{N}_i}\!(\boldsymbol{\gamma}^i \! -\!\! \boldsymbol{\gamma}^j)\!-\!D_i(K_iv_i\!+\!\xi_i)\! \end{split} \end{align} where $K_i$ is such that $\sigma(S_i\!-\!K_iD_i) \! \!\subset \!\mathbb{C}^-$, $b_i\!>\!0$, and $\boldsymbol{\gamma}^i$ is agent $i$'s estimate variable. Note that $\nabla_iJ_i$ is evaluated at a predicted point, $x_i+b_iv_i$. We denote by $\boldsymbol{\gamma}_j^i$ agent $i$'s estimate of $x_j+b_jv_j$. \eqref{eq:doubleIntLearningDist} uses an internal model and a Laplacian-based consensus scheme similar to \eqref{eq:AgentFullInfoInput} in Section \ref{sec:SingleIntegrator}.
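As a concrete illustration of the internal-model pair (the following matrices are generic textbook choices, not data from the later examples): for a scalar sinusoidal disturbance of known frequency $\omega>0$ but unknown amplitude and phase, one may take
\begin{align*}
S_i = \begin{bmatrix} 0 & \omega \\ -\omega & 0 \end{bmatrix}, \qquad D_i = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad K_i = \begin{bmatrix} 2\omega \\ 0 \end{bmatrix},
\end{align*}
for which $\det\big(\lambda I - (S_i-K_iD_i)\big) = \lambda^2+2\omega\lambda+\omega^2$, so $\sigma(S_i-K_iD_i)=\{-\omega,-\omega\} \subset \mathbb{C}^-$. A constant disturbance corresponds to $S_i=0$, $D_i=1$ and any scalar $K_i>0$, in which case $\xi_i$ reduces to integral action.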
However, instead of each agent estimating the others' actions, $x_{-i}$, they estimate the predicted actions $\{x_j + b_jv_j|j\in \mathcal{I},j\neq i\}$ used to evaluate the gradient at the predicted point. Each agent will then share these estimates as well as their own prediction with their neighbours. Therefore, each agent $i$ computes $\boldsymbol{\gamma}_{-i}^i=\col(\boldsymbol{\gamma}_{1}^i,\dots,\boldsymbol{\gamma}_{i-1}^i,\boldsymbol{\gamma}_{i+1}^i,\dots,\boldsymbol{\gamma}_{N}^i) \in \mathbb{R}^{n_{-i}}$ and uses these estimates when evaluating its gradient, $\nabla_i J_i(x_i+b_iv_i,\boldsymbol{\gamma}_{-i}^i)$. Intuitively, each agent makes a prediction on the future state of the game, $\!x_i\!+\!b_iv_i$, based on the current actions and velocities, and evaluates its gradient with respect to $x_i$ at this point. We denote $\boldsymbol{\gamma}^i=\col(\boldsymbol{\gamma}_{1}^i,\dots,\boldsymbol{\gamma}_{i-1}^i,x_i+b_iv_i,\boldsymbol{\gamma}_{i+1}^i,\dots,\boldsymbol{\gamma}_{N}^i) \in \mathbb{R}^n$ and $\boldsymbol{\gamma}=\col(\boldsymbol{\gamma}^1,\dots,\boldsymbol{\gamma}^N) \in \mathbb{R}^{Nn}$. \begin{remark}\label{rem:rem1} From an agent perspective, the intuition behind \eqref{eq:doubleIntLearningDist} is that each agent evaluates its partial-gradient at a predicted future point, $x_i+b_iv_i$, obtained as a first-order prediction from the current action and velocity of each agent, with a negative feedback on its velocity. This can be viewed as resulting from the quadratic penalty term associated with the velocity of each agent. In addition, consider the disturbance-free case and recall that gradient-play is a method that works well for single-integrators, i.e., systems with unit relative degree. By creating a fictitious output $\boldsymbol{\gamma}^i_i\!:\!=\!x_i\!+\!b_iv_i$ we decrease the relative degree of each agent to $\{1,\dots,1\}$. 
This creates a hyperplane $x+\mathcal{B}v -x^* = 0$, where $\mathcal{B}=\blkdiag(b_1I_{n_1},\dots,b_NI_{n_N})$, on which the pseudo-gradient map is zero. The pseudo-gradient feedback makes this hyperplane attractive for the double-integrator system. The feedback stabilizes $v=0$ and renders this hyperplane invariant, thereby stabilizing $x = x^*$. \end{remark} \begin{remark}\label{rem:rem2} We note that \eqref{eq:doubleIntLearningDist} is similar to a passivity-based group coordination design, e.g., \cite{ma07}. Indeed, the inner-loop feedback $u_i = -\frac{1}{b_i}v_i$ renders the agent dynamics passive with $\boldsymbol{\gamma}^i_i = x_i+b_iv_i$ as output. However, we stress that the feedback $\nabla_iJ_i(x_i+b_iv_i,\boldsymbol{\gamma}_{-i}^i)$ is not necessarily the proper gradient of any function, as required in \cite{ma07}. Therefore, individually, each agent is not a passive system when the feedback is added, due to coupling to the others' actions via the cost function. This precludes using a passivity approach as in \cite{ma07}. Rather, here we use a combined ISS approach to deal with both the disturbance and the higher-order stabilization. \end{remark} The choice of feedback \eqref{eq:doubleIntLearningDist} yields learning (closed-loop) dynamics given by \vspace{-0.27cm} \begin{align} \label{eq:doubleIntDynDistPart} \!\Sigma_i\!:\!\begin{cases}\! \dot{\boldsymbol{\gamma}}_{-i}^i \!\!\!&= -\mathcal{S}_i\sum_{j \in \mathcal{N}_i}(\boldsymbol{\gamma}^i - \boldsymbol{\gamma}^j),\quad \quad \quad \quad \forall i \in \mathcal{I}\\ \!\dot x_i \!\!\!&=v_i\\ \!\dot v_i \!\!\!&=\!-\! \nabla_iJ_i(x_i + b_iv_i,\boldsymbol{\gamma}_{-i}^i)- \frac{1}{b_i}v_i\\ &\quad \!-\! \mathcal{R}_i \!\sum_{j \in \mathcal{N}_i}\!(\boldsymbol{\gamma}^i \! -\!\!
\boldsymbol{\gamma}^j)\!-\!D_i(K_iv_i\!+\!\xi_i)\!+\!d_i\\ \!\dot \xi_i \!\!\!&= S_i(K_iv_i+\xi_i)+K_i \nabla_i J_i(x_i+b_i v_i,\boldsymbol{\gamma}_{-i}^i)\\ &\quad+K_i \Big(\frac{1}{b_i}v_i + \mathcal{R}_i\sum_{j \in \mathcal{N}_i}(\boldsymbol{\gamma}^i - \boldsymbol{\gamma}^j)\Big)\\ \end{cases} \end{align} \begin{thm}\label{thm:second_partInfo_w_Disturb} Consider a game $\mathcal{G}(\mathcal{I},J_i,\mathbb{R}^{n_i})$ with partial information over a graph $G_c$ with Laplacian $L$ and learning dynamics $\Sigma_i$, \eqref{eq:doubleIntDynDistPart}, where disturbance $d_i$ is generated by \eqref{eq:Case1a_w}. Under Assumptions \ref{asmp:Jsmooth}-\ref{asmp:ExtendedPseudoGrad}, if $\mu(\lambda_2(L)\!-\!\theta)\!>\!\theta^2$ then $\textbf 1_N \otimes x^*$, where $x^*$ is the unique NE, is globally asymptotically stable for all networked interconnected $\Sigma_i$, for all $w \in \mathcal{W}$. Moreover, each player's estimates converge globally to the NE value, $\bar{\boldsymbol{\gamma}} = \textbf 1_N \otimes x^*$. \end{thm} \begin{proof} The idea of the proof is similar to that of Theorem \ref{thm:SingleIntPartial}. We use a change of coordinates to express the closed-loop dynamics in cascade form and use ISS arguments to show stability of the NE for the overall cascade system, irrespective of disturbance. The difference lies in the fact that \eqref{eq:doubleIntDynDistPart} has extra terms due to the higher-order dynamics $\dot v_i$ that must be incorporated into the cascade. The stacked dynamics of \eqref{eq:doubleIntDynDistPart} are given by \vspace{-0.3cm} \begin{align} \label{eq:doubleIntDynFullDistPart} \begin{split} &\quad\dot w \!=\! Sw\\ \Sigma:&\begin{cases} \mathcal{S}\dot{\boldsymbol{\gamma}} \!
= \!-\mathcal{S}\textbf{L}\boldsymbol{\gamma}\\ \dot x \!=\!v\\ \dot v \!= \!-\mathcal{B}^{-1}v\!-\!\textbf{F}(\boldsymbol{\gamma})\!-\!\mathcal{R}\textbf{L}\boldsymbol{\gamma}\!-\!D(Kv\!+\!\xi)\!+\!Dw\\ \dot \xi \!= \!S(Kv+\xi)+K(\textbf{F}(\boldsymbol{\gamma})+\mathcal{B}^{-1}v+\mathcal{R}\textbf{L}\boldsymbol{\gamma}) \end{cases} \end{split} \end{align} Note that $\mathcal{R} \boldsymbol{\gamma} = [\mathcal{R}_i \boldsymbol{\gamma}^i]_{i\in \mathcal{I}} = [x_i + b_i v_i]_{i \in \mathcal{I}} = x + \mathcal{B} v $. Let the coordinate transformation $x \mapsto \mathcal{R}\boldsymbol{\gamma} \!: = \! x \!+\! \mathcal{B} v$. Then, \vspace{-0.25cm} \begin{align*} \mathcal{R}\dot{\boldsymbol{\gamma}} &= -\mathcal{B}\textbf{F}(\boldsymbol{\gamma})-\mathcal{B}\mathcal{R}\textbf{L}\boldsymbol{\gamma}-\mathcal{B}D(Kv+\xi-w). \end{align*} Combining this with the second equation in \eqref{eq:doubleIntDynFullDistPart}, by using the properties of $\mathcal{R}$ and $\mathcal{S}$, $\mathcal{R}^T\mathcal{R}+ \mathcal{S}^T\mathcal{S} = I$, yields that \vspace{-0.25cm} $$ \!\!\dot{\boldsymbol{\gamma}} \!=\! -\mathcal{R}^T\mathcal{B}\textbf{F}(\!\boldsymbol{\gamma}\!)\!-\!(\mathcal{R}^T\mathcal{B}\mathcal{R}\!+\!\mathcal{S}^T\mathcal{S})\textbf{L}\boldsymbol{\gamma} $$ Let $\xi \mapsto \rho \! := \! w-(K v\!+\!\xi)$, so that $\dot \rho =(S-KD)\rho$. Consider also $\boldsymbol{\gamma} \mapsto \tilde{\boldsymbol{\gamma}} := \boldsymbol{\gamma}-\bar{\boldsymbol{\gamma}}$. Then, in the new coordinates, using $\textbf{L}\bar{\boldsymbol{\gamma}} = 0$, the dynamics of the $(\tilde{\boldsymbol{\gamma}},v,\rho)$ are given by \vspace{-0.2cm} \begin{align} \label{eq:doubleIntDynStackedNew2bPartialDist} \dot v &= -\mathcal{B}^{-1} v - \textbf{F}(\tilde{\boldsymbol{\gamma}}+\bar{\boldsymbol{\gamma}}) - \mathcal{R}\textbf{L}\tilde{\boldsymbol{\gamma}}+D\rho\\ \label{eq:doubleIntDynStackedNew1bPartialDist} \dot{\tilde{\boldsymbol{\gamma}}}\! &=\! 
-\mathcal{R}^T\mathcal{B}\textbf{F}(\tilde{\boldsymbol{\gamma}}\!+\!\bar{\boldsymbol{\gamma}})\!-\!(\mathcal{R}^T\mathcal{B}\mathcal{R}\!+\!\mathcal{S}^T\mathcal{S})\textbf{L}\tilde{\boldsymbol{\gamma}}\!+\!\mathcal{R}^T\mathcal{B}D\rho\\ \label{eq:doubleIntDynStackedNew3bPartialDist} \dot \rho &= (S-KD)\rho \end{align} Note that (\ref{eq:doubleIntDynStackedNew2bPartialDist})-(\ref{eq:doubleIntDynStackedNew3bPartialDist}) is in cascade form with subsystem $(\tilde{\boldsymbol{\gamma}}, \rho)$ (\ref{eq:doubleIntDynStackedNew1bPartialDist}, \ref{eq:doubleIntDynStackedNew3bPartialDist}) generating the external input for \eqref{eq:doubleIntDynStackedNew2bPartialDist}. In turn, (\ref{eq:doubleIntDynStackedNew1bPartialDist}, \ref{eq:doubleIntDynStackedNew3bPartialDist}) is in cascade form with (\ref{eq:doubleIntDynStackedNew3bPartialDist}) generating input $\rho$ for (\ref{eq:doubleIntDynStackedNew1bPartialDist}). We show first that (\ref{eq:doubleIntDynStackedNew1bPartialDist}) is ISS with respect to $\rho$. Consider \vspace{-0.25cm} \begin{align}\label{V_doubleInt_gamma} V(\tilde{\boldsymbol{\gamma}}) = \frac{1}{2} \tilde{\boldsymbol{\gamma}}^T \mathcal{R}^T\mathcal{B}^{-1}\mathcal{R}\tilde{\boldsymbol{\gamma}}+\frac{1}{2}\tilde{\boldsymbol{\gamma}}^T \mathcal{S}^T \mathcal{S}\tilde{\boldsymbol{\gamma}} \end{align} which is positive definite. Taking the time-derivative of $V$ \eqref{V_doubleInt_gamma} along the solutions of \eqref{eq:doubleIntDynStackedNew1bPartialDist}, using $\textbf{F}(\bar{\boldsymbol{\gamma}})=0$, cf. (\ref{eq:ExtPseudoZero}), and properties of $\mathcal{R}$, $\mathcal{S}$, e.g. $\mathcal{R}\mathcal{S}^T=0$, $\mathcal{R}\mathcal{R}^T=I$, yields \vspace{-0.25cm} \begin{align*} \begin{split} \dot V \!&=\!
-\tilde{\boldsymbol{\gamma}}^T\mathcal{R}^T\mathcal{B}^{-1}(\mathcal{B}\textbf{F}(\!\tilde{\boldsymbol{\gamma}}\!+\!\bar{\boldsymbol{\gamma}})\!+\!\mathcal{B}\mathcal{R}\textbf{L}\tilde{\boldsymbol{\gamma}}\!-\!\mathcal{B}D\rho)-\tilde{\boldsymbol{\gamma}}^T\mathcal{S}^T\mathcal{S}\textbf{L}\tilde{\boldsymbol{\gamma}}\\ \!&=-\tilde{\boldsymbol{\gamma}}^T(\mathcal{R}^T\textbf{F}(\tilde{\boldsymbol{\gamma}}+\bar{\boldsymbol{\gamma}}) + \textbf{L}\tilde{\boldsymbol{\gamma}}-\mathcal{R}^TD\rho) \end{split} \end{align*} which is similar to \eqref{bau_0}. Then, following an argument as in the proof of Theorem \ref{thm:SingleIntPartial}, it follows that the $\tilde{\boldsymbol{\gamma}}$ subsystem of \eqref{eq:doubleIntDynStackedNew1bPartialDist} is ISS with input $\rho$. Since the origin of the $\rho$ subsystem is globally asymptotically stable, then the origin of the $(\tilde{\boldsymbol{\gamma}},\rho)$ subsystem is globally asymptotically stable (cf. Lemma \ref{lemma:ISSCascade}). Now consider the $v$-subsystem \eqref{eq:doubleIntDynStackedNew2bPartialDist} with input $(\tilde{\boldsymbol{\gamma}},\rho)$ and $ V_2(v) = \frac{1}{2}\|v\|^2 $. Along \eqref{eq:doubleIntDynStackedNew2bPartialDist}, using Assumption \ref{asmp:ExtendedPseudoGrad}, \vspace{-0.2cm}% \begin{align*} \dot V_2 &= -v^T\mathcal{B}^{-1}v-v^T(\textbf{F}(\tilde{\boldsymbol{\gamma}}+\bar{\boldsymbol{\gamma}})+\mathcal{R}\textbf{L}\tilde{\boldsymbol{\gamma}}-D\rho)\\ &\!\leq \!-\frac{1}{b_m}\|v\|^2\!+\!\|v\|\|\textbf{F}(\!\tilde{\boldsymbol{\gamma}}\!+\!\bar{\boldsymbol{\gamma}})\| +\|v\|\|\mathcal{R}\textbf{L}\tilde{\boldsymbol{\gamma}}\|+\|v\|\|D\|\|\rho\| \\ &\leq -\frac{1}{b_m}\|v\|^2 + \beta\|v\|(\|\rho\|+\|\tilde{\boldsymbol{\gamma}}\|) \end{align*} where $b_m \!=\! \max_{i\in \mathcal{I}} b_i$, $\beta \! = \! \max\{\|\mathcal{R}\textbf{L}\|\!+\!\theta,\|D\|\}$.
Thus $ \dot V_2 \leq -\frac{1}{b_m}\|v\|^2 + \beta\|v\| \sqrt{2(\|\rho\|^2+\| \tilde{\boldsymbol{\gamma}}\|^2)} $ or, \vspace{-0.26cm} \begin{align*} \dot V_2 \leq -\frac{1}{b_m}\|v\|^2 + \sqrt{2}\beta\|v\|\|\tilde{u}\|. \end{align*} where $\tilde{u}:=\col(\tilde{\boldsymbol{\gamma}},\rho)$. Hence, $ \dot V_2 \! \leq \!-\big(\frac{1}{b_m} \!-\! b \big)\|v\|^2,\ \forall \|v\| \!\geq \!\frac{\sqrt{2}\beta}{b} \!\|\tilde{u}\| $, for any $0\!<\!b\!<\!\frac{1}{b_m}$. Therefore, by Theorem \ref{thm:ISSLyapunov}, \eqref{eq:doubleIntDynStackedNew2bPartialDist} is ISS with respect to $\tilde{u} = \col(\tilde{\boldsymbol{\gamma}},\rho)$. Since the origin of the $(\tilde{\boldsymbol{\gamma}},\rho)$ subsystem is globally asymptotically stable, by Lemma \ref{lemma:ISSCascade}, the origin of \eqref{eq:doubleIntDynStackedNew1bPartialDist}-\eqref{eq:doubleIntDynStackedNew3bPartialDist} is globally asymptotically stable, hence $(x^*,0)$ is globally asymptotically stable for \eqref{eq:doubleIntDynFullDistPart} for all $w \in \mathcal{W}$. \end{proof} \begin{remark} In the full information case, there is no need for estimates and \eqref{eq:doubleIntDynDistPart} reduces to \vspace{-0.22cm} \begin{align}\label{eq:doubleIntDynDist} \!\Sigma_i\!\!:\!\! \begin{cases} \!\!\dot x_i \!\! \!\!\!&= \!v_i\\ \!\!\dot v_i \!\!\!\!\!&= \!-\!\nabla_iJ_i(x_i\!\!+\!\!b_iv_i,\!x_{-i}\!\!+\!\!b_{-i}v_{-i}) \!-\! \!\frac{1}{b_i}v_i \!-\!\!D_i(K_iv_i\!+\!\!\xi_i)\!+\!\!d_i\\ \!\!\dot \xi_i \!\!\!\!\!&= \!S_i(K_iv_i\!+\!\xi_i) \!+\! K_i\big(\nabla_iJ_i(x_i\!+\!b_iv_i,\!x_{-i}\!+\!\!b_{-i}v_{-i})\!+\! \frac{1}{b_i}v_i\!\big) \end{cases} \end{align} The convergence results hold without the need for Assumptions \ref{asmp:GraphConnected} and \ref{asmp:ExtendedPseudoGrad}.
Furthermore, the disturbance-free, higher-order learning dynamics generated by \eqref{eq:doubleIntDynDist} are \vspace{-0.23cm} $$\ddot x + \mathcal{B}^{-1} \dot x + F(x + \mathcal{B} \dot x) =0$$ which resembles the heavy-ball with friction dynamics used in optimization, \cite{Attouch2000,ALvarez2000}. \end{remark} \begin{remark} The results from this section can easily be extended to multi-integrator agents. Consider that each agent is modelled as a $\!r_i^{th}\!$ order integrator, $r_i \! \geq \!2$, \vspace{-0.26cm}\begin{align*} \dot x_i &= C_i v_i\\ \dot v_i &= A_iv_i+B_i(u_i+d_i), \quad \forall i \in \mathcal{I} \end{align*} where $ A_i \!=\! \begin{bmatrix} \!0_{n_i(r_i-2)\!\times \!n_i} \!&\! I_{n_i(r_i-2)} \\ \!0_{n_i\!\times \! n_i} \!&\! \!0_{n_i\! \times \! n_i(r_i-2)} \end{bmatrix}$, $ B_i \!=\! \begin{bmatrix} \!0_{n_i(r_i-2)\!\times \!n_i} \\ \! I_{n_i} \! \end{bmatrix}$, $C_i \!=\! \begin{bmatrix} \! I_{n_i} \! \!& \!0_{n_i \! \times \! n_i(r_i-2)} \!\end{bmatrix}$, $v_i = \col(v_i^1,\dots,v_i^{r_i-1})$, and has a cost function $J_i(x_i, x_{-i})$. In this case, $ \gamma_i \!:= \!x_i \!+\! \begin{bmatrix} c_i^T \!\otimes \!I_{n_i} \!&\! I_{n_i} \end{bmatrix} \!v_i$ where $c_i^T \!=\! \begin{bmatrix} c_{i,1} \!&\! \dots \! & \!c_{i,(r_i-2)} \end{bmatrix}$, $c_{i,k}$ are the coefficients of any $\!(r_i\!-\!1)^{\!th}\!$ order Hurwitz polynomial with $c_{i,0} \!\!=\! 1$, $c_{i, (r_i-1)} \!\!=\!1$, and $u_i \!:=\! -\nabla_i J_i(\gamma_i,\boldsymbol{\gamma}^i_{-i}) \!-\! \begin{bmatrix} I_{n_i} \!& \! c_i^T \!\otimes \!I_{n_i} \end{bmatrix} \! v_i$. When $r_i \!=\!2$, this feedback reduces to the one for the second-order integrator with $b_i=1$. Then a dynamic learning scheme similar to \eqref{eq:doubleIntDynDistPart} can be developed, by appropriately augmenting with a reduced-order observer for the disturbance, and consensus-dynamics for the estimates $\boldsymbol{\gamma}^i_{-i}$.
The resulting agent learning dynamics are given as \begin{align} \label{eq:multiIntDyn} \Sigma_i: \begin{cases} \dot{\boldsymbol{\gamma}}_{-i}^i &= -\mathcal{S}_i\sum_{j \in \mathcal{N}_i}(\boldsymbol{\gamma}^i - \boldsymbol{\gamma}^j)\\ \dot x_i &= C_iv_i\\ \dot v_i &= A_iv_i - B_i\Big(\nabla_iJ_i(\gamma_i,\boldsymbol{\gamma}_{-i}^i)+\begin{bmatrix} I_{n_i} \!& \! c_i^T \!\otimes \!I_{n_i} \end{bmatrix} \! v_i\\ &\quad+\mathcal{R}_i\sum_{j \in \mathcal{N}_i}(\boldsymbol{\gamma}^i - \boldsymbol{\gamma}^j)-D_iw\\ &\quad+D_i(K_iv_i^{r_i-1}+\xi_i)\Big)\\ \dot \xi_i &= S_i(K_iv_i^{r_i-1}+\xi_i)+K_i\Big(\nabla_i J_i(\boldsymbol{\gamma}_i^i,\boldsymbol{\gamma}_{-i}^i)\\ &\quad+\mathcal{R}_i\sum_{j \in \mathcal{N}_i}(\boldsymbol{\gamma}^i - \boldsymbol{\gamma}^j)+\begin{bmatrix} I_{n_i} \!& \! c_i^T \!\otimes \!I_{n_i} \end{bmatrix} \! v_i\Big) \end{cases} \end{align} which for $r_i=2$ reduces to \eqref{eq:doubleIntDynDistPart}. \begin{thm} \label{thm:MultiPartialDist} Consider a game $\mathcal{G}(\mathcal{I},J_i,\Omega_i)$ with partial information communicated over a graph $G_c$ with Laplacian $L$ and agent dynamics given by $\Sigma_i$, \eqref{eq:multiIntDyn}. Under Assumptions \ref{asmp:Jsmooth}, \ref{asmp:PseudoGrad}, \ref{asmp:GraphConnected} and \ref{asmp:ExtendedPseudoGrad}, if $\mu(\lambda_2(L)-\theta)>\theta^2$ then the unique NE, $x=x^*$, is globally asymptotically stable for \eqref{eq:multiIntDyn} for all $w \in \mathcal{W}$. Moreover, each player's estimates converge globally to the NE values, $\bar{\textbf x} = \textbf 1_N \otimes x^*$, for all $w \in \mathcal{W}$. \end{thm} \begin{proof} Similar to Theorem \ref{thm:second_partInfo_w_Disturb}. \end{proof} \end{remark} \section{Numerical Examples}\label{sec:simulations} In this section we consider two application scenarios: an optical network OSNR game and a sensor network game.
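Before presenting the two applications, the mechanism of the full-information dynamics can be checked numerically on a small example. The sketch below uses hypothetical data (a two-player scalar quadratic game with constant disturbances, so that $S_i=0$, $D_i=1$ and the internal model reduces to integral action) and forward-Euler integrates dynamics of the form \eqref{eq:AgentFullInfo}, verifying convergence to the NE despite the disturbance offsets.

```python
import numpy as np

# Hypothetical 2-player scalar quadratic game (illustrative data only):
#   J1 = x1^2 + x1*x2 - x1,   J2 = x2^2 + x1*x2 + 2*x2
# Pseudo-gradient F(x) = A x + b with A = [[2,1],[1,2]] (strongly monotone).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 2.0])
F = lambda x: A @ x + b
x_star = np.linalg.solve(A, -b)            # unique NE: (4/3, -5/3)

d = np.array([0.5, -0.3])                  # constant disturbances (S_i = 0, D_i = 1)
K = 1.0                                    # sigma(S_i - K_i D_i) = {-K} in C^-

x = np.zeros(2)                            # actions
xi = np.zeros(2)                           # internal-model states
dt, steps = 1e-3, 50_000                   # forward-Euler discretization
for _ in range(steps):
    g = F(x)
    x_new = x + dt * (-g - (K * x + xi) + d)   # gradient-play + compensation
    xi = xi + dt * (K * g)                     # S_i = 0: integral action
    x = x_new

assert np.allclose(x, x_star, atol=1e-3)        # NE reached despite disturbances
assert np.allclose(xi, d - K * x_star, atol=1e-3)  # internal model absorbs d_i
```

At steady state $\dot\xi_i=0$ forces $\nabla_iJ_i=0$ (the NE condition), while $\dot x_i=0$ forces $K_ix_i+\xi_i=d_i$, so the internal-model state absorbs the disturbance exactly.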
In both examples, our algorithms are compared with the full and partial-information gradient-play in the presence of disturbances. \vspace{-0.3cm} \subsection{OSNR Game} Consider an optical signal-to-noise ratio (OSNR) model for wavelength-division multiplexing (WDM) links \cite{lp12}, where 10 channels, $\mathcal{I} = \{1,\dots,10\}$, are transmitted over an optically amplified link. We consider each channel as an agent and denote each agent's transmitting power by $x_i$ and the noise power of each channel by $n_i^0$. Each agent attempts to maximize the OSNR on its channel by adjusting its transmission power. Each agent has a cost function as in \cite{pp09}, given by \vspace{-0.2cm} $$ \!\!J_i(x_i,x_{-i}) \!=\! a_ix_i\!+\!\frac{1}{P^0\!-\!\sum_{j \in \mathcal{I}}\!x_j} \!-\!b_i\!\ln\!\Big(1\!+\!c_i \!\frac{x_i}{n_i^0\!+\!\!\sum_{ j \neq i }\!\Gamma_{ij} x_j}\Big) $$ where $a_i>0$ is a pricing parameter, $P^0$ is the total power target of the link, $b_i>0$, and $\Gamma = [\Gamma_{ij}]$ is the link system matrix, with parameters as in \cite{sp16}. Each channel (agent) has dynamics (\ref{eq:Case1a}), where the disturbance is generated due to the pilot-tones used for network tracing and monitoring, \cite{tp06}, which take the form of a sinusoidal signal with a unique frequency assigned to each channel and unknown modulation. Thus $d_i = P^0[1+m_i\sin(2\pi f_it)]$, where $m_i = 0.1i$ (unknown modulation index) and frequency $f_i = 10i$ kHz, $i \in \mathcal{I}$. \begin{figure}[h!] \vspace{-0.26cm} \centering \includegraphics[trim=0cm 0cm 0cm 0cm,width=2.2in]{figure4b} \caption{Gradient-play dynamics \eqref{eq:GradDyn} subject to disturbances}\label{fig:4b} \vspace{-0.26cm} \end{figure} \begin{figure}[h!] \vspace{-0.26cm} \centering \includegraphics[trim=0cm 0cm 0cm 0cm,width=2.2in]{figure4c} \vspace{-0.26cm} \caption{Agent dynamics $\Sigma_i$ \eqref{eq:AgentFullInfo} subject to disturbances}\label{fig:4c} \end{figure} \begin{figure}[h!]
\vspace{-0.26cm} \centering \includegraphics[ width =4.5cm]{GraphPlotOSNR} \caption{Random communication graph, $G_c$, $\lambda_2 = 2.6158$} \label{fig:GraphOSNR} \end{figure} First, we consider that each agent has full information about the others' actions and we compare the results of agent dynamics, (\ref{eq:AgentFullInfo}), with a standard gradient-play scheme \eqref{eq:GradDyn}. As seen in Fig. \ref{fig:4b} and Fig. \ref{fig:4c}, the gradient-play dynamics \eqref{eq:GradDyn} do not reject the disturbances (sustained fluctuations in the OSNR values), while the agent dynamics (\ref{eq:AgentFullInfo}) successfully reject them and converge to the NE found in \cite{sp16}. Next, assume each agent has partial information over a random graph, $G_c$, Fig. \ref{fig:GraphOSNR}. The results of dynamics (\ref{eq:AgentPartialInfo}) are plotted in Fig. \ref{fig:5c}, while those of the Laplacian-based gradient dynamics \eqref{eq:GradDynPartialInfo} are shown in Fig. \ref{fig:5b}, with a similar comparison. \begin{figure}[h!] \vspace{-0.26cm} \centering \includegraphics[trim=0cm 0cm 0cm 0cm,width=2.2in]{figure5b} \caption{Laplacian-based dynamics \eqref{eq:GradDynPartialInfo} over $\!G_c\!$}\label{fig:5b} \vspace{-0.26cm} \end{figure} \begin{figure}[h!] \vspace{-0.26cm} \centering \includegraphics[trim=0cm 0cm 0cm 0cm,width=2.2in]{figure5c} \caption{Agent dynamics $\!\Sigma_i\!$ \eqref{eq:AgentPartialInfo} subject to disturbances over $\!G_c\!$}\label{fig:5c} \vspace{-0.26cm} \end{figure} \vspace{-0.26cm} \subsection{Sensor Networks} Our next example is similar to the one investigated in \cite{sjs12}. However, our algorithm uses a continuous-time gradient-play-inspired feedback instead of the discrete-time extremum-seeking algorithm used in \cite{sjs12}. It is also important to note that while \cite{sjs12} considers noisy feedbacks, it does not consider disturbance rejection as we have posed it here. Consider a group of five mobile robots in the plane in a sensor network.
Each agent has a cost function that depends on all robots' positions, $(x_i,x_{-i})$, \vspace{-0.25cm} \begin{align} J_i(x_i,x_{-i}) = x_i^Tx_i+x_i^Tr_i+\sum_{j \in \mathcal{I}} \|x_i-x_j\|^2 \end{align} where $r_1 \!=\! \col(2,-2)$, $r_2 \!=\! \col(-2,-2)$, $r_3 \!=\! \col(-4,2)$, $r_4\! =\! \col(2,-4)$, and $r_5 \! =\! \col(3,3)$. We consider two types of robots, velocity-actuated and force-actuated, and in each case we consider both the full-information setting and the partial-information setting with communication over a random graph, $G_c$ (Fig. \ref{fig:Gc1}). \begin{figure}[h!] \vspace{-0.26cm} \centering \includegraphics[trim=0cm 0cm 0cm 0cm,width=1.5in]{SensorGraph} \caption{Random Communication Graph, $G_c$}\label{fig:Gc1} \end{figure} \subsubsection{Velocity-Actuated Robots} Consider that each agent in the network is a velocity-actuated robot with dynamics given by (\ref{eq:Case1a}), where $d_i = \col(0.5,0)$ is a constant disturbance. \begin{figure}[h!] \vspace{-0.3cm} \centering \includegraphics[trim=0cm 0cm 0cm 0cm,width=2.8in]{singleintegrator} \caption{Comparison of \eqref{eq:AgentFullInfo} and \eqref{eq:GradDyn} for single-integrator agents}\label{fig:1} \end{figure} We consider first that each agent has full information about the others' actions and compare our algorithm \eqref{eq:AgentFullInfo} to the standard gradient-play \eqref{eq:GradDyn}. In Fig. \ref{fig:1} solid lines depict the gradient-play results in the disturbance-free case. In the presence of disturbances, as seen in Fig. \ref{fig:1}, \eqref{eq:AgentFullInfo} (dashed lines) converges to the same NE values, while the standard gradient-play (dotted lines) does not. \begin{figure}[h!]
\vspace{-0.3cm} \centering \includegraphics[trim=0cm 0cm 0cm 0cm,width=2.8in]{singleintegratorPartial} \caption{Comparison of $\!$ \eqref{eq:AgentPartialInfo} $\&$ \eqref{eq:GradDynPartialInfo} for single-integrator agents}\label{fig:2} \vspace{-0.3cm} \end{figure} Next, consider that each agent has only partial information, communicated over a graph, $G_c$. We compare our algorithm \eqref{eq:AgentPartialInfo} to the Laplacian-based gradient dynamics \eqref{eq:GradDynPartialInfo} in Fig. \ref{fig:2}, where solid lines depict the \eqref{eq:GradDynPartialInfo} results in the disturbance-free case. In the presence of disturbances, Fig. \ref{fig:2} shows that \eqref{eq:AgentPartialInfo} (dashed lines) converges to the same NE values as found in the full-information case, Fig. \ref{fig:1}, while \eqref{eq:GradDynPartialInfo} (dotted lines) does not. \subsubsection{Force-Actuated Robots} Consider that each agent is modelled as a double integrator, \eqref{eq:DoubleIntegratorDist}, where $d_i = \col(0.5,0)$. The corresponding results are shown in Fig. \ref{fig:3} (full-information case) and Fig. \ref{fig:4} (partial information over $G_c$), where dashed lines correspond to \eqref{eq:doubleIntDynDist} and \eqref{eq:doubleIntDynDistPart}, respectively, while dotted lines correspond to the disturbance-free learning algorithm. \begin{figure}[h!] \vspace{-0.3cm} \centering \includegraphics[trim=0cm 0cm 0cm 0cm,width=2.8in]{doubleintegrator} \caption{Results of \eqref{eq:doubleIntDynDist} for double-integrator agents}\label{fig:3} \vspace{-0.3cm} \end{figure} \begin{figure}[h!]
\vspace{-0.3cm} \centering \includegraphics[trim=0cm 0cm 0cm 0cm,width=2.8in]{doubleintegratorPartial} \caption{Results of \eqref{eq:doubleIntDynDistPart} for double-integrator agents}\label{fig:4} \vspace{-0.3cm} \end{figure} \begin{remark} Although we did not specifically investigate systems with noisy feedbacks, it is possible to show that due to their ISS properties, the dynamics \eqref{eq:AgentPartialInfo} and \eqref{eq:doubleIntDynDistPart} have a certain amount of robustness to feedback noise, such as the type investigated in \cite{sjs12} and \cite{ms17}. The ISS property implies that for any bounded feedback noise, the steady-state solution will remain in a neighbourhood of the NE. \end{remark} \vspace{-0.3cm} \section{Conclusions}\label{sec:conclusions} We considered Nash equilibrium seeking schemes for (multi)-integrator agents subject to external disturbances. We addressed the case of full information on the others' decisions, as well as the case where agents have partial-decision information, based on local observation and communication. In both cases, we proposed new continuous-time dynamic schemes that converge to the Nash equilibrium, irrespective of the disturbance. Besides a gradient-play component, the proposed agent dynamics have a dynamic internal-model component, and, in the case of partial-information, a consensus component that drives agents to reach the decision-estimate consensus subspace. \bibliographystyle{IEEETran}
\section{Introduction}\label{sec:intro} Accurate measurements of the rest-frame UV luminosity function (LF) are crucial for studying the evolution of galaxies at high redshift and reconstructing the physics and timeline of cosmic reionization. In recent years, significant progress has been achieved in measuring the LF out to $z\sim8$ and beyond based on images taken with the Hubble Space Telescope in the deep legacy fields, the Hubble Frontier Fields and through parallel programs \citep[e.g.,][]{Bouwens2007,Bouwens2011,Bradley2012,Oesch2012,Robertson2013,McLure2013,Schenker2013,Schmidt2014,Bradley2014,Bouwens2014,Finkelstein2014,Zitrin2015,Coe2015}. From many of these surveys it appears that the LF at $z<6$ is well fit by a \citet{Schechter1976} function with a power-law slope at faint luminosities and an exponential drop at the bright end, where it is expected that feedback reduces star formation in the most massive galaxies \citep{Somerville2012} and dust extinction may reduce the UV flux of galaxies \citep{Cai2014}. The evolution of the LF is expected to be driven by these processes and by the evolution of the underlying halo mass function. It has not yet been established which processes dominate the evolution, or whether there are significant changes in the physical conditions of galaxies forming at high redshifts. Recent studies by \citet{Bowler2014b,Bowler2014a} and \citet{Finkelstein2014} claimed an over-abundance of galaxies at the bright end of the $z\geq 6$ LF when compared to the fit of a \citet{Schechter1976} function, although \citet{Bouwens2014} found no evidence for a departure from a Schechter-like form at $z\sim 4-8$, largely analyzing the same data sets. An over-abundance of bright galaxies may also be apparent in smaller surveys \citep[e.g.,][]{Ono2012,Hathi2012,Finkelstein2013}. If the departure from an exponential cutoff is confirmed by future observations, this may be an indication of the changing astrophysical conditions of high-redshift galaxies.
However, another possible explanation is that the LF intrinsically retains a Schechter form and the over-abundance of bright galaxies is caused by gravitational lensing magnification bias, which has been predicted to be significant for galaxies at $z\geq 8$ \citep{Wyithe2011}. While it has long been recognized that the gravitational lensing effect can be exploited in order to probe intrinsically faint galaxies, in particular behind massive clusters of galaxies at moderate redshift \citep[e.g.,][]{Franx1997,Ellis2001,Schmidt2014a,Bowler2014a,Zitrin2015,Atek2015,Coe2015}, the effect in blank fields is much less well appreciated. In fact, gravitational lensing affects all lines of sight, as the trajectory of every photon in the universe is perturbed by the inhomogeneous foreground mass distribution. Though the effect is generally not as strong as in the fields of massive clusters of galaxies, even so-called blank field surveys are affected by gravitational lensing (weak, intermediate, or strong). In practice, owing to the lensing effect, flux-limited surveys include sources that should be below the sample threshold, but have been magnified into the sample. Furthermore, gravitational lensing changes the relation between observed solid angle and cosmic volume with respect to that expected for a perfectly homogeneous universe. At fixed detector field-of-view the intrinsic solid angle observed is smaller for magnification $\mu>1$ and vice versa. This phenomenon is called {\em magnification bias} \citep[e.g.,][]{Turner1984,Wyithe2000,Wyithe2011} and it can change the shape of the observed LF. Thus, it needs to be accounted for in order to derive accurate intrinsic LFs from flux-limited samples. The main aim of this paper is to improve the estimation of the intrinsic UV LF at high redshift by developing a formalism that takes into account the magnification bias.
Our new formalism improves on previous work in several ways: we extend the analytic strong lensing model of \citet{Wyithe2011} to include the redshift evolution of the deflector population, and we develop a technique to treat the intermediate lensing regime and introduce a framework to include weak lensing effects, neither of which have been systematically accounted for in any previous estimates of the LF. Furthermore, by providing probability distribution functions for the magnification of each dropout and empty field, our formalism can be directly included in any Bayesian LF parameter estimation, thus allowing for a rigorous derivation of the related uncertainties. We present two applications of our formalism. The first application is the interpretation of the $z\sim8$ dropouts found by the \emph{Brightest of Reionizing Galaxies Survey}\footnote{\url{http://borg.physics.ucsb.edu}}, \citep[hereafter BoRG,][]{Trenti2011}. After estimating the fraction of sources in BoRG that are multiply imaged and presenting one strongly-lensed candidate and three candidate systems with magnification $\mu>1.4$, we use the extended sample presented by \citet{Schmidt2014} to derive the LF including the effects of magnification bias. For this we extend the Bayesian formalism introduced by \citet{Schmidt2014} by including a term describing the likelihood for magnification of high-redshift sources for each field, and marginalize over the range of possible magnifications. The second application of our formalism is a set of predictions for the modification of the LF at $8<z\leq 16$, where JWST will detect dropouts \citep{JWST_SSR}, by using a variety of possible LFs based on theoretical models \citep[][]{Munoz2012,Behroozi2015} and extrapolations of lower redshift data \citep{Bouwens2014,Finkelstein2014}. With our formalism we can give a quantitative assessment of how magnification bias will affect future surveys. The paper is organized as follows. 
In Section~\ref{sec:data} we briefly describe the BoRG survey and the data used in this paper. In Section~\ref{sec:theory} we introduce the relevant theoretical background for gravitational lensing and magnification bias. In Section~\ref{sec:strong} we develop a semi-analytic framework, based on that in \citet{Wyithe2011} to study the magnification bias due to strong and intermediate gravitational lensing. In Section~\ref{sec:weak} we use the reconstruction of lines-of-sight in cosmological simulation data to investigate weak lensing. The Bayesian inference for the determination of the intrinsic LF is introduced in Section~\ref{sec:LF} and presented in more detail in Appendix~\ref{app:bayes}. The results are presented and discussed in Section~\ref{sec:results}. A brief summary is given in Section~\ref{sec:conc}. All magnitudes are AB magnitudes and a standard concordance cosmology with $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$, and $h = 0.7$ is assumed. The Millennium Simulation uses a cosmology with $\Omega_m = 0.25$, $\Omega_\Lambda = 0.75$, and $h = 0.73$, which is used to estimate the weak lensing magnification. We assume the difference between these two cosmologies is negligible for our purposes. \section{Data}\label{sec:data} This paper estimates the $z\sim8$ LF using 38 bright Lyman Break galaxies selected from the BoRG survey and 59 fainter dropouts taken from deep legacy fields (in HUDF09 and the WFC3/IR wide area Early Release Science). The BoRG survey is described briefly in Section~\ref{sec:data-borg}, but we refer to \citet{Trenti2011,Trenti2012,Bradley2012} and \citet{Schmidt2014} for further details. The deep legacy data are described by \citet{Bouwens2011}. Additionally, we used data of galaxies with spectroscopically determined velocity dispersions to estimate the velocity dispersion of the foreground BoRG galaxies (described in Section~\ref{sec:data-lens}). 
In Section~\ref{sec:data-millsim} we give an overview of the simulated data used in the analysis of weak lensing. \subsection{The BoRG Survey}\label{sec:data-borg} The ongoing BoRG survey is a pure-parallel imaging program with the HST WFC3. The current survey covers $\sim350$ arcmin$^2$ divided into 71 independent fields located randomly on the sky. This reduces cosmic variance below the level of statistical noise \citep{Trenti2008,Bradley2012}. The photometry is in the visual and near-infrared, primarily using the four HST WFC3 filters F606W, F098M, F125W, and F160W (commonly referred to as the V-, Y-, J-, and H-bands respectively). The $z\sim8$ BoRG survey consisted mainly of HST programs GO/PAR 11700 and GO/PAR 12572 (PI: Trenti) and includes a small additional number of coordinated parallels from COS-GTO. The 53 core BoRG fields are complemented by other archival data, including 8 fields from GO/PAR 11702 \citep[PI: Yan,][]{Yan2011} and 10 COS-GTO fields, which used the F600LP-band instead of the F606W-band. The BoRG survey is the largest current survey of Y-band dropouts by solid angle. The $z\sim8$ galaxy candidates were identified as Y-band dropouts; full details of the selection criteria used to find dropouts are described in \citet{Schmidt2014}. The BoRG survey detected 38 Lyman break galaxy (LBG) candidates at $z\sim8$ with S/N $>5$ in the J-band, of which 10 have S/N $>8$ \citep{Bradley2012,Schmidt2014}. We use the $5\sigma$ sample of objects in this work. Throughout this work we will assume that 42\% of the selected BoRG dropouts are contaminants \citep[usually $z\sim2$ interlopers, see e.g.,][]{Hayes2012,Bouwens2013}. This is the fiducial contamination fraction for the BoRG sample and was shown to be robust in the estimation of the LF by \citet{Bradley2012,Schmidt2014}.
By definition we cannot determine which specific sources are contaminants without further photometry and spectroscopy, but our rigorous Bayesian method to determine the LF allows us to accurately estimate the LF parameters while accounting for the presence of random contaminants \citep{Schmidt2014}. \subsection{Massive Foreground Galaxies Acting as Deflectors}\label{sec:data-lens} In Section~\ref{sec:strong-identify} we estimate the velocity dispersions of strong lens candidates in the BoRG fields by comparing their photometry with that of similar early-type galaxies which have both HST photometry and spectroscopically determined velocity dispersions \citep{Treu2005,Belli2014,Belli2014a}. We divided the galaxy samples into three large redshift bins in order to account for the position of the 4000\AA\ break in the filters at higher redshifts. In the range $z<1$ we used a sample of 165 spheroidal galaxies from \citet{Treu2005} with photometry from the Great Observatories Origins Deep Survey North \citep[GOODS-N,][]{Bundy2005}. For $z>1$ we used a sample of 66 massive quiescent galaxies, presented by \citet{Belli2014,Belli2014a}, which were selected from HST photometric catalogs of objects in the COSMOS, GOODS and Extended Groth Strip (EGS) fields \citep{Grogin2011,Koekemoer2011,Windhorst2011}. We used an aperture correction to rescale observed velocity dispersions, $\sigma_\textrm{obs}$, to $\sigma_e$, the velocity dispersion within one effective radius, $R_e$.
We follow \citet{Belli2014} and use the model of \citet{VandeSande2013}, which proposes a constant rescaling: \begin{equation} \label{eqn:strong-identify_sigmaeff-highz} \sigma_e = 1.05\sigma_\textrm{obs} \end{equation} For galaxies at $z<1$ (the \citet{Treu2005} sample), we use the model of \citet{Cappellari2006}: \begin{equation} \label{eqn:strong-identify_sigmaeff-lowz} \sigma_e = \left(\frac{R_e}{R}\right)^{-0.066}\sigma_\textrm{obs} \end{equation} where the slit size, $R$, is the $1\arcsec$ aperture on Keck DEIMOS \citep{Treu2005}. The reference photometry differs between the individual samples. As listed in Table~\ref{tab:strong-mag-sigma}, we use HST F606W for galaxies at $z<0.5$, HST F850LP from \citet{Treu2005} (converted to F098M through linear interpolation) for galaxies at $0.5<z<1.0$, and HST F160W for galaxies at $z>1$. \subsection{The Millennium Simulation}\label{sec:data-millsim} In Section~\ref{sec:weak} we describe our method to generate weak lensing probability density functions (PDFs) by reconstructing simulation data along the line-of-sight to $z\sim8$. Due to the very high redshift of our sources, it was necessary to use simulation data containing halos out to redshifts above 5. We used 24 $1.4 \times 1.4$ square degree simulated lightcones built by \citet{Henriques2012} from the Millennium Simulation \citep{Springel2005}, which contain halos out to $z\sim12$. While the Millennium Simulation contains halos from very high redshift, it has a box length of only 500 Mpc $h^{-1}$. The comoving distance in the universe to $z=1$ is 2390 Mpc $h^{-1}$, so it is necessary to build lightcones with the galaxies correctly distributed in comoving volumes (see \citet{Blaizot2005} and \citet{Kitzbichler2007} for a thorough discussion of generating mock lightcones).
These lightcones were generated using the semi-analytical galaxy formation model of \citet{Guo2011}, and photometric properties were calculated using the stellar population synthesis code of \citet{Maraston2005}, which can be applied at high redshift. \section{Theoretical Background}\label{sec:theory} In this section we summarize the relevant theory for the galaxy LF, strong and weak gravitational lensing, and magnification bias. \subsection{Galaxy Luminosity Function}\label{sec:theory-LF} When a simply parametrized form is needed, we describe the LF by a Schechter function \citep{Schechter1976}: \begin{equation} \label{eqn:theory_LF-sch} \Psi(L) = \frac{\Psi^\star}{L^\star}\left( \frac{L}{L^\star} \right)^\alpha \exp\left(-\frac{L}{L^\star}\right) \end{equation} where $L^\star$ marks the characteristic break in the LF, $\Psi^\star$ is the characteristic density at that luminosity, and $\alpha$ is the power-law slope of the faint end. \subsection{Strong Lensing}\label{sec:theory-strong} If the line-of-sight to a background source is closely aligned with a massive foreground object, e.g. a cluster or a single massive galaxy, gravitational lensing can produce multiple observed images of the source \citep{Schneider1992,SaasFee}. Multiple imaging signifies the regime of strong gravitational lensing. \subsubsection{Singular Isothermal Sphere}\label{sec:theory-strong-sis} Strong gravitational lenses are commonly modeled as Singular Isothermal Spheres (SIS), a model that provides a convenient analytic form for the mass profiles of massive galaxies \citep[e.g.][and references therein]{Treu2010}.
The scale of image separation is characterized by the {\em Einstein radius} of the lens: \begin{equation} \label{eqn:theory-strong_ER-SIS} \theta_\textrm{ER}(\sigma, z) = 4\pi \frac{D_{ls}}{D_s} \left(\frac{\sigma}{c}\right)^2 \end{equation} where $D_{ls}$ and $D_s$ are the angular diameter distances between the lens and source, and from the observer to the source respectively, $\sigma$ is the velocity dispersion of the lens galaxy, and $c$ is the speed of light. Velocity dispersion is the most important property for determining the strength of a strong gravitational lens as it scales with the mass of the dark matter in the system \citep{Turner1984,SaasFee,Treu2010}. The magnification, $\mu$, due to an SIS lens is given by: \begin{equation} \label{eqn:theory-strong_mu} \mu = \frac{|\theta|}{|\theta| - \theta_\textrm{ER}} \end{equation} where $\theta$ is the distance between the lens and the source in the image plane. An SIS lens can produce two images, with the brighter one having magnification $\mu >2$, or one image with magnification $\mu <2$. The case of multiple imaging is referred to here as strong lensing. In this paper we refer to images with $1.4<\mu<2$ as intermediate lensing. \subsubsection{Multiple Image Optical Depth}\label{sec:theory-strong-optdep} The optical depth $\tau_m$ is the cross-section for a galaxy at redshift $z_S$ to be multiply imaged (i.e. strongly lensed) by a foreground galaxy at $z_L$: it is the fraction of the sky covered by the Einstein radii of all intervening deflectors at redshifts $z_L$. Following standard practice and assuming SIS deflectors, \citet{Wyithe2011} defines it as: \begin{equation} \label{eqn:theory-strong_optdep-wyithe} \tau_{m} = \int_0^{z_S} dz_L \int d\sigma \; \Phi(\sigma,z_L) \; (1+z_L)^3 \; c \frac{dt}{dz_L} \pi D_L^2 \; \theta_\textrm{ER}^2(\sigma,z_L) \end{equation} where $\Phi(\sigma,z_L)$ is the velocity dispersion function of the deflectors, $D_L$ is the angular diameter distance to $z_L$, and $t$ is time. 
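The SIS relations above are straightforward to evaluate numerically. The following is a minimal illustrative sketch (not from the paper's analysis pipeline): it takes the distance ratio $D_{ls}/D_s$ as a fixed input rather than computing angular diameter distances from a cosmology, and returns the magnifications of the one or two SIS images for a source at angular offset $\beta$ from the lens.

```python
import math

C_KMS = 2.998e5  # speed of light [km/s]

def einstein_radius(sigma_kms, dls_over_ds):
    """SIS Einstein radius in radians:
    theta_ER = 4*pi*(D_ls/D_s)*(sigma/c)^2."""
    return 4.0 * math.pi * dls_over_ds * (sigma_kms / C_KMS) ** 2

def sis_magnifications(beta, theta_er):
    """Image magnifications of an SIS lens for a source at angular
    offset beta from the lens center.  For beta < theta_ER two images
    form and the brighter one always has mu > 2; for beta >= theta_ER
    there is a single image with mu <= 2 (mu -> 1 far from the lens)."""
    if beta >= theta_er:                 # single-image regime
        return [beta / (beta - theta_er)]
    mu_bright = 1.0 + theta_er / beta    # image at theta = beta + theta_ER
    mu_faint = theta_er / beta - 1.0     # image at theta = theta_ER - beta
    return [mu_bright, mu_faint]

# Example: a sigma = 250 km/s deflector with D_ls/D_s = 0.5
theta_er = einstein_radius(250.0, 0.5)
arcsec = math.degrees(theta_er) * 3600.0   # roughly 0.9 arcsec here
mus = sis_magnifications(0.5 * theta_er, theta_er)  # source inside theta_ER
```

For a source at half the Einstein radius, the two images have magnifications $3$ and $1$, consistent with the statement that the brighter image of a multiply-imaged SIS source always has $\mu>2$.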
Without the magnification bias, the optical depth gives the probability of a high-redshift source being multiply imaged. \subsection{Weak Lensing}\label{sec:theory-weak} Weak gravitational lensing is the deflection of light that causes the magnification and distortion of an observed source, but without producing multiple images. There are no empty lines-of-sight in the universe, so all light traveling to us has been deflected by some amount by intervening mass \citep{Hilbert2007}. While it is impossible to determine the exact effect on individual observed sources, it can be characterized statistically, and it is important to quantify this effect for our high-redshift sources. The lens equation can be constructed for an arbitrary number of lens planes due to an ensemble of deflectors along the line-of-sight \citep{Hilbert2009,McCully2014}. The magnification of a source in a multiplane system is a function of the total convergence and total shear experienced. \citet{Hilbert2009} showed that, to first order, the total convergence and shear are the sums of the individual contributions from each object along the line-of-sight: \begin{equation} \label{eqn:theory-weak_mu} \mu = \frac{1}{(1-\sum_i\kappa_i)^2 - |\sum_i\gamma_i|^2} \end{equation} The convergence, $\kappa_i$, and shear, $\gamma_i$, of each object are determined by the lens model. \subsection{Magnification Bias} \label{sec:theory-magbias} The gravitational lensing of a source with luminosity $L$ in a solid angle $\Omega$ of sky has two effects: the observed luminosity is magnified by a factor $\mu$, and sources are distributed over a magnified solid angle $\mu \Omega$. In a flux-limited sample, intrinsically low luminosity sources can be magnified above the survey limit, while the number density of sources can decrease for a given observed solid angle.
Since the faint end of the LF of high-redshift LBGs is so steep, in regions around large low-redshift deflectors we may observe an excess of intrinsically faint high-redshift sources. These effects are known as the {\em magnification bias} and will affect our inferences about the population and LF of high-redshift galaxies. If it were possible to observe all galaxies in the universe without the magnification bias, the probability of a high-redshift galaxy being strongly lensed would be given purely by the optical depth, $\tau_m$ (Section~\ref{sec:theory-strong-optdep}). However, the magnification of more numerous intrinsically faint sources into our surveys implies that we do not observe the true population of galaxies as a function of luminosity. The magnification bias increases the probability that a sample of observed high-redshift sources has been gravitationally lensed. The magnification bias for sources with observed luminosities above $L_\textrm{lim}$ in a flux-limited sample is given by: \begin{equation} \label{eqn:theory-magbias} B = \frac{\int_{\mu_\textrm{min}}^{\mu_\textrm{max}} d\mu \, p(\mu) N\left(>\frac{L_\textrm{lim}}{\mu}\right)}{N(>L_\textrm{lim})} \end{equation} assuming that each source could be magnified between $\mu_\textrm{min}$ and $\mu_\textrm{max}$, where $p(\mu)$ is the probability distribution for the magnification of a source and $N(>L_\textrm{lim})$ is the integrated galaxy LF \citep{Wyithe2011}. The true probability of a high-redshift source being multiply imaged is $B\tau_m$. Therefore, using $B$ it is possible to find the fraction of galaxies at a given redshift in a flux-limited sample that are multiply imaged: \begin{equation} \label{eqn:theory-magbias_fraclens} F_\textrm{mult} = \frac{B\tau_m}{B\tau_m + B'(1-\tau_m)} \end{equation} We assume that $B'$, the bias for galaxies that are not multiply imaged, is close to unity.
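The expressions for $B$ and $F_\textrm{mult}$ above can be evaluated by simple quadrature. The sketch below is illustrative only: it works in units of $L^\star$, uses a toy uniform $p(\mu)$ and basic trapezoidal/rectangle integration, and its parameter values are placeholders rather than the paper's fitted ones.

```python
import math

def schechter(L, phi_star=1.0, L_star=1.0, alpha=-1.87):
    """Schechter LF Psi(L) in units where phi_star = L_star = 1."""
    x = L / L_star
    return (phi_star / L_star) * x ** alpha * math.exp(-x)

def n_brighter(L_lim, n_steps=2000, L_max=20.0, **kw):
    """N(>L_lim): trapezoidal integral of the LF over ln L
    (dL = L dlnL), truncated at L_max."""
    lo, hi = math.log(L_lim), math.log(L_max)
    h = (hi - lo) / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        L = math.exp(lo + i * h)
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * L * schechter(L, **kw)
    return total * h

def magnification_bias(L_lim, p_mu, mu_grid):
    """B: boost in counts above L_lim.  p_mu is a (normalized) pdf
    sampled on the uniformly spaced mu_grid; a rectangle sum is used."""
    dmu = mu_grid[1] - mu_grid[0]
    num = sum(p * n_brighter(L_lim / mu) for p, mu in zip(p_mu, mu_grid)) * dmu
    return num / n_brighter(L_lim)

def multiply_imaged_fraction(B, tau, B_prime=1.0):
    """F_mult: fraction of a flux-limited sample that is multiply
    imaged, given optical depth tau and bias B (with B' ~ 1)."""
    return B * tau / (B * tau + B_prime * (1.0 - tau))
```

With a steep faint-end slope, $N(>L_\textrm{lim}/\mu)$ grows quickly as $\mu$ increases, so $B>1$ and the multiply-imaged fraction exceeds the bare optical depth, as described in the text.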
If the survey limit is brighter than the characteristic apparent magnitude of the observed sample, the magnification bias is expected to be large, as a large fraction of the observed sources are likely to be intrinsically fainter sources magnified above the detection threshold of the survey. We can compute the gravitationally lensed LF, including strong and weak gravitational lensing: \begin{eqnarray} \label{eqn:theory-magbias_mod-LF} \Psi_\textrm{mod}(L) &=& (1-\tau_m)\frac{1}{\mu_\textrm{demag}}\Psi\left(\frac{L}{\mu_\textrm{demag}}\right) \nonumber \\ &+& \; \tau_m \int_0^{\infty} d\mu \; \frac{1}{\mu} p(\mu) \Psi\left(\frac{L}{\mu}\right) \end{eqnarray} where $\mu_\textrm{demag} < 1$ is introduced such that the mean magnification over the entire sky is unity \citep{Pei1995,Wyithe2011} and $p(\mu)$ is the full probability density for the magnification of a high-redshift source, as above. For a Schechter LF, the gravitationally lensed LF is predicted to exhibit a `kick' in the bright end \citep[e.g.,][]{Wyithe2011} due to a pile-up of brightened galaxies, whereas at the faint end the magnification of flux is balanced by
the loss of number density \citep[for faint-end slope $\alpha \sim -2$,][]{Blandford1992}, so there is no distortion, even if many strongly lensed faint sources are observed. \section{Strong and Intermediate Lensing}\label{sec:strong} In this section we compute the probability that the $z\sim8$ dropouts are affected by strong and intermediate lensing. First, in Section~\ref{sec:strong-zevol} we compute the strong lensing optical depth and the probability that a $z\sim8$ source is multiply imaged by foreground massive elliptical galaxy deflectors. We account for evolution of the deflector population based on the observed stellar mass function. In Section~\ref{sec:strong-identify} we describe our method to identify sources in the intermediate lensing regime ($1.4 <\mu < 2$). In order to identify these sources, we estimate the lensing strength of massive foreground galaxies based on HST photometry and an empirical calibration of the \citet{Faber1976} relation. A candidate strongly lensed dropout in the BoRG fields was presented by \citet{Barone-Nugent2013a}; in this paper, one more candidate multiply-imaged dropout ($\mu > 2$) is found, and three dropouts may experience significant intermediate magnification. We detail their properties in Table~\ref{tab:strong-strongish}. \subsection{Strong Lensing by an Evolving Deflector Population} \label{sec:strong-zevol} In order to compute the strong lensing optical depth and multiple image probability, we follow \citet{Wyithe2011} and use a simple SIS lensing model (see Section~\ref{sec:theory-strong-sis}) with a flat cosmology. Strong lenses are assumed to be uniformly distributed in the universe, and we can calculate the probability of encountering a strong lens along the line-of-sight to a high-redshift source, i.e. the lensing optical depth (see Section~\ref{sec:theory-strong-optdep}).
By considering the number of galaxies observed above a certain flux limit we can calculate the magnification bias factor, $B$, from \Eq{eqn:theory-magbias}, assuming a Schechter luminosity function (\Eq{eqn:theory_LF-sch}). For these calculations we use the $z\sim8$ LF inferred by \citet{Schmidt2014}, with a characteristic magnitude of $M^\star = -20.15^{+0.29}_{-0.38}$, faint-end slope of $\alpha = -1.87^{+0.26}_{-0.26}$, and number density of $\log_{10} \Psi^\star [\textrm{Mpc}^{-3}] = -3.24^{+0.25}_{-0.24}$. We marginalize over the entire MCMC chain for each of the Schechter parameters. In their calculation of the optical depth \citet{Wyithe2011} used the local velocity dispersion function as measured by SDSS \citep{Choi2007}. As most strong lenses occur at $z \simlt 1.5$ \citep{Fassnacht2004,Treu2010}, \citet{Wyithe2011} assumed that the velocity dispersion function does not evolve with redshift for massive galaxies. This is consistent with studies of the velocity dispersion function out to $z \sim 1$ \citep[e.g.,][]{Chae2010,Bezanson2012}. However, significant galaxy growth and evolution are observed at $z>1$ as structure forms \citep{VandeSande2013,Belli2014}, and we can improve the accuracy of the model by allowing the parameters of the velocity dispersion function for massive ellipticals to evolve with redshift. Introducing redshift evolution is expected to reduce the optical depth \citep{Barkana1999}. The dashed blue line in the left panel of Figure~\ref{fig:strong-zevol_optdep-diff-vdfevol} shows the probability that the source has been multiply imaged as a function of lens redshift for a source at $z\sim8$, calculated using \Eq{eqn:theory-strong_optdep-wyithe}. The distribution is strongly peaked at $z_L \sim 1$, but there is a significant probability that $z_L > 1.5$. Only 48\% of the contribution to the optical depth for strong lensing occurs at $z_L < 1.5$. We find that 90\% of lensing occurs within a lens redshift of $z_L \simlt 3.5$.
Therefore, in order to account for most of the optical depth we need to find the form of the velocity dispersion function out to $z\sim3$--$4$, where the galaxy population is significantly different from recent times \citep{Bundy2005,Muzzin2013,VandeSande2014}. Several studies have investigated the evolution of the velocity dispersion function out to $z \sim 1.5$ \citep[e.g.,][]{Chae2010,Bezanson2011,Bezanson2012,Bezanson2013}. These works are consistent with no evolution, but have large uncertainties. Measurements of velocity dispersion beyond $z > 2$ are very difficult, as the brightest emission lines fall within near-IR atmospheric absorption regions \citep{Kriek2006,Belli2014a}. Therefore, we estimate the evolution of the velocity dispersion function at high redshift based on the evolution of the stellar mass function, a related quantity that has been well measured at $z>2$. We convert the stellar mass function into the velocity dispersion function by means of the well-known correlation between stellar velocity dispersion ($\sigma$) and stellar mass ($M_\textrm{stell}$) taken from \citet{Auger2010}: $\log(\sigma[\text{km s$^{-1}$}]) = p\overline{M} - 11p + q$, where $p=0.24\pm0.02$, $q=2.34\pm0.01$ and $\overline{M} = \log{(M_\textrm{stell}/M_{\odot})}$. This relation was derived for massive lens galaxies with high velocity dispersions, which give the strongest contribution to the optical depth as $\tau \sim \sigma^4$. High-redshift galaxies are observed to have higher velocity dispersions at fixed mass than in the local universe \citep[e.g.,][]{VandeSande2013,Belli2014,Bezanson2015}. Thus the stellar mass-velocity dispersion relation is expected to evolve with redshift. Following \citet{VandeSande2013} we expect evolution of the form $(\sigma/\sigma_0) \propto (1+z)^\beta$, where $\sigma_0$ is the expected velocity dispersion at $z\sim0$.
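The mass-to-dispersion mapping above, combined with the $(1+z)^\beta$ evolution, is compact enough to write down directly. A minimal sketch using the central values quoted in the text, with uncertainties and intrinsic scatter ignored:

```python
def sigma_from_mass(log_mstar, z=0.0, p=0.24, q=2.34, beta=0.20):
    """Velocity dispersion [km/s] predicted from stellar mass via the
    Auger et al. (2010) relation, log(sigma) = p*(Mbar - 11) + q with
    Mbar = log10(M_stell/M_sun), evolved as (sigma/sigma_0) ~ (1+z)^beta.
    Central parameter values only; scatter and errors are ignored."""
    sigma_0 = 10.0 ** (p * (log_mstar - 11.0) + q)
    return sigma_0 * (1.0 + z) ** beta
```

At $\overline{M}=11$ and $z=0$ this returns $10^{2.34}\approx219$ km s$^{-1}$, close to the characteristic $\sigma^*$ quoted later, and the dispersion at fixed mass increases mildly with redshift, as the text describes.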
In Figure~\ref{fig:strong-zevol_evol-sigma} we plot publicly available data from \citet{VanderWel2008,vanDokkum2009,Newman2010,Toft2012,Bezanson2013,VandeSande2013,Belli2014,Belli2014a} and fit a relation of this form for all galaxies with estimated stellar masses between $10.8 < \log(M_\textrm{stell}/M_\odot) < 12.0$ and measured velocity dispersion $\sigma > 200$ km s$^{-1}$, as this is the region where the \citet{Auger2010} relation was derived. We find $\beta = 0.20 \pm 0.07$. Our result is lower than that of \citet{VandeSande2013} because we use the \citet{Auger2010} stellar mass-velocity dispersion relation for massive lens galaxies as $\sigma_0$, whereas \citet{VandeSande2013} compare to a dynamical mass-velocity dispersion relation. As demonstrated in \citet{VandeSande2013}, $M_\textrm{stell}/M_\textrm{dyn}$ increases with redshift, which reduces the evolution we find compared to that in \citet{VandeSande2013}. If we consider the same galaxy sample and fit both our relation derived from stellar masses and the \citet{VandeSande2013} dynamical mass relation, and include the evolution in $M_\textrm{stell}/M_\textrm{dyn}$, our results are consistent. We note that because the optical depth depends on velocity dispersion to the fourth power, the form of the velocity dispersion function at $z>2$ is the greatest source of uncertainty in the calculation of optical depths. \begin{figure}[!t] \includegraphics[width=0.49\textwidth]{sigma_zevol.pdf} \caption{Redshift evolution of massive galaxy velocity dispersion, relative to the velocity dispersion estimated from inferred stellar masses via the \citet{Auger2010} relation. We find evolution of the form $(\sigma/\sigma_0) \propto (1+z)^{0.20 \pm 0.07}$, where $\sigma_0$ is the velocity dispersion estimated using the stellar mass-velocity dispersion relation from \citet{Auger2010}.
We plot the mean linear fit (black line) and the $1\sigma$ confidence region (gray shaded region).} \label{fig:strong-zevol_evol-sigma} \end{figure} The stellar mass function can be described by a Schechter function \citep[e.g.,][]{Muzzin2013}: \begin{equation} \label{eqn:strong-zevol_SMF} \Phi_S(\overline{M}) = (\ln{10})\;\Phi^*_S \;10^{\left(\overline{M}-\overline{M}_S^*\right)(1+\alpha_S)} \; \exp\left[{-10^{\overline{M}-\overline{M}^*_S}}\right] \end{equation} The characteristic stellar mass is given by $\overline{M}_S^* = \log{(M^*_\textrm{stell}/M_{\odot})}$, $\Phi^*_S$ is the characteristic density normalization, and $\alpha_S$ is the low-mass-end slope. In order to model the redshift evolution of the stellar mass function, we use publicly available data on quiescent galaxies at $z \leq 4$ from the COSMOS/UltraVISTA Survey \citep{Muzzin2013}. They derive the best-fit single Schechter function parameters for the stellar mass function as a function of redshift. Their stellar mass function parameters for quiescent galaxies, allowing for evolution of $\alpha_S$, are plotted as a function of redshift in Figure~\ref{fig:strong-zevol_evol-smf}. We assume the redshift evolution $X = X_0(1+z)^a$, where $X$ represents the stellar mass function Schechter parameters and $X_0$ represents the values at $z=0$. We use a Bayesian MCMC linear fitting method to fit this functional form to the data, and plot the mean and one standard deviation confidence fits in Figure~\ref{fig:strong-zevol_evol-smf}. There is significant evolution in $\Phi^*_S$. However, there is also large uncertainty in the evolution of $\Phi^*_S$ due to the spread of the data. We ignore evolution in the low-mass-end slope, since the lensing effect is dominated by the most massive galaxies. We also ignore evolution in $\overline{M}_S^*$, for which the evolution appears non-negligible but it has little effect on \Eq{eqn:strong-zevol_SMF}.
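A numerical sketch of \Eq{eqn:strong-zevol_SMF} and the assumed power-law parameter evolution (function and variable names are ours, for illustration only):

```python
import math

LN10 = math.log(10.0)

def schechter_smf(Mbar, phi_star, Mbar_star, alpha):
    """Schechter stellar mass function Phi_S(Mbar), per dex per Mpc^3,
    with Mbar = log10(M_stell / M_sun)."""
    x = 10.0 ** (Mbar - Mbar_star)
    return LN10 * phi_star * x ** (1.0 + alpha) * math.exp(-x)

def evolved(param0, a, z):
    """Power-law redshift evolution X = X0 (1+z)^a assumed for the
    Schechter parameters (only the normalisation is evolved in the text)."""
    return param0 * (1.0 + z) ** a
```

At the characteristic mass ($\overline{M} = \overline{M}_S^*$) the function reduces to $(\ln 10)\,\Phi_S^*\,e^{-1}$, a quick sanity check on any implementation.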
The redshift-dependent velocity dispersion function obtained in this way becomes \begin{equation} \label{eqn:strong-zevol_vdf-evol} \Phi(\sigma,z) = p^{-1} \; \frac{\Phi_S^*(z) }{\sigma (1+z)^\beta} \;\left(\frac{\sigma}{\sigma^*}\right)^{ p^{-1} (1+\alpha_S)} \; \exp\left[{-\left(\frac{\sigma}{\sigma^*}\right)^{ p^{-1} }}\right] \end{equation} with $p=0.24\pm0.02$, $\beta = 0.20\pm0.07$ (obtained from the evolution of velocity dispersion in Figure~\ref{fig:strong-zevol_evol-sigma}), $\Phi_S^*(z) = (3.75 \pm 2.99) \times 10^{-3} (1+z)^{-2.46\pm0.53}$ Mpc$^{-3}$, $\alpha_S=-0.54\pm0.32$ and $\sigma^* = 216\pm18$ km s$^{-1}$. This was derived using the stellar mass-velocity dispersion relation above \citep{Auger2010}, including the scatter in the relation. At $z=0$, recent well-measured velocity dispersion functions \citep[e.g.,][]{Sheth2003,Choi2007} are within the uncertainties of this redshift-evolving relation, showing that our inferred evolution is consistent with direct measurements where they overlap. \begin{figure}[!t] \includegraphics[width=0.49\textwidth]{evol_SMF_freealpha_log.pdf} \caption{Redshift evolution of the best-fit single Schechter function parameters from \citet{Muzzin2013} for the stellar mass function of quiescent galaxies, allowing for evolution of $\alpha_S$. Fits of the form $X_0(1+z)^a$ are plotted: the solid lines show the mean fit, dotted lines show the $1\sigma$ error on the data.
Only $\Phi^\star_S$ shows significant evolution with redshift.} \label{fig:strong-zevol_evol-smf} \end{figure} \begin{figure*} \includegraphics[width=0.49\textwidth]{optdepth_paper_scattersigma_dtau_err.pdf} \includegraphics[width=0.49\textwidth]{optdepth_paper_scattersigma_tau.pdf} \caption{{\bf (Left)} Contribution to the optical depth for a source at $z\sim8$ to be multiply imaged as a function of the lens redshift, $z_L$ (solid black line), calculated using \Eq{eqn:theory-strong_optdep-wyithe}, including the evolution of the deflector population with redshift (Section~\ref{sec:strong-zevol}); for comparison we plot the contribution for a constant comoving density of lens galaxies \citep[dashed blue line,][]{Wyithe2011}. {\bf (Right)} Optical depth for multiple imaging as a function of source redshift, including evolution of the deflector population (solid black line). The gray shaded regions show the 1$\sigma$ uncertainty bounds on the optical depth and its distribution, given the uncertainties in velocity dispersion and stellar mass evolution described in the text. The optical depth without redshift evolution of lens galaxies is also plotted for comparison \citep[dashed blue line,][]{Wyithe2011}.} \label{fig:strong-zevol_optdep-diff-vdfevol} \end{figure*} Using this redshift-dependent velocity dispersion function we compute the optical depth for strong lensing, and the distribution of the optical depth with lens redshift. In the left panel of Figure~\ref{fig:strong-zevol_optdep-diff-vdfevol}, now using the redshift-evolving deflector population from \Eq{eqn:theory-strong_optdep-wyithe} and \Eq{eqn:strong-zevol_vdf-evol}, we see that the majority of the contribution to the optical depth is from lens galaxies at $z\simlt1.5$, which agrees with current observations of lensed high-redshift dropouts \citep{Barone-Nugent2013a,Schmidt2014a,Atek2015}.
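The step from \Eq{eqn:strong-zevol_SMF} to \Eq{eqn:strong-zevol_vdf-evol} is a change of variables; a sketch of the $z=0$ bookkeeping, using the \citet{Auger2010} relation above:

```latex
% With \log\sigma = p(\overline{M}-11)+q, one has
%   \overline{M}-\overline{M}^*_S = p^{-1}\log(\sigma/\sigma^*)
% and the Jacobian d\overline{M}/d\sigma = 1/(p\,\sigma\ln 10), so
\begin{equation*}
\Phi(\sigma) = \Phi_S(\overline{M})\,\frac{d\overline{M}}{d\sigma}
= \frac{p^{-1}\,\Phi_S^*}{\sigma}
  \left(\frac{\sigma}{\sigma^*}\right)^{p^{-1}(1+\alpha_S)}
  \exp\!\left[-\left(\frac{\sigma}{\sigma^*}\right)^{p^{-1}}\right],
\end{equation*}
% the \ln 10 of the Schechter form cancelling against the Jacobian.
% Promoting \Phi_S^* \to \Phi_S^*(z) and applying the fitted (1+z)^{\beta}
% velocity-dispersion evolution then yields the redshift-dependent form.
```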
In the right panel of Figure~\ref{fig:strong-zevol_optdep-diff-vdfevol} we plot the optical depth as a function of source redshift and find that including the redshift evolution of the deflector population reduces the optical depth at high redshift compared with the work in \citet{Wyithe2011}, as expected from theoretical predictions \citep{Barkana1999}, and it appears to start to flatten by $z_S\sim10$. Our estimated optical depth at $z<8$ is in good agreement with values derived by an independent method by \citet{Barone-Nugent2015}, and consistent with \citet{Wyithe2011} for $z \simlt 8$. We note the optical depths presented in \citet{Barone-Nugent2015} are marginally higher than the results of this paper, but we can recover their optical depth using a steeper evolution of $\sigma(z)$. It is clear that the uncertainty in the evolution of velocity dispersion, which is the best indicator of the mass of lens galaxies, provides the largest uncertainty in determining the optical depth. Finally, we compute the probability that high-redshift galaxies in a flux-limited sample have been multiply imaged. This is shown in Figure~\ref{fig:strong-zevol_flens} as a function of limiting magnitude for each of the BoRG fields. As expected, the probability that a source in each field is multiply imaged, $F_\textrm{mult}$ (\Eq{eqn:theory-magbias_fraclens}), increases with the survey limiting magnitude, owing to the magnification bias. We estimate that 3--15\% of observed sources brighter than $M^\star$ have been strongly lensed; this is consistent with the results of \citet{Barone-Nugent2015}, who use an independent method to infer the lensed fraction.
\begin{figure}[!t] \includegraphics[width=0.49\textwidth]{flens_borg_magab_vdfevol_all_Mstar_auger.pdf} \caption{Multiply-imaged fraction (see \Eq{eqn:theory-magbias_fraclens}) for $z\sim 8$ sources brighter than the J-band limiting magnitude in each of the BoRG fields, as a function of the UV characteristic magnitude, $M^\star$, including the evolution of the deflector population (Section~\ref{sec:strong-zevol}). The probability of a high-redshift source being multiply imaged increases as the survey magnitude limit becomes brighter than $M^\star$. We expect very few intrinsically bright sources, so any bright source has a high likelihood of being significantly magnified according to the magnification bias. We have used the full MCMC chain for $M^\star$ from \citet{Schmidt2014} and plot the mean value with errorbars of one standard deviation. The optical depth, i.e., the probability of multiple imaging without including the magnification bias factor, $B$ (Section~\ref{sec:theory-magbias}), is plotted as the green dashed line.} \label{fig:strong-zevol_flens} \end{figure} \subsection{Identifying Significantly Magnified Sources}\label{sec:strong-identify} Whilst all the fields are subject to weak lensing, it is necessary to establish which of the individual sources experience multiple imaging ($\mu > 2$), or are close enough to a deflector to experience an intermediate magnification ($1.4 < \mu < 2$). We expect strong lensing events to be rare, but possible given the size of the BoRG survey. Among the BoRG sources, \citet{Barone-Nugent2013a} presented a candidate strongly-lensed system in borg\_0440-5244 \citep[for naming conventions see][]{Bradley2012}. The candidate appears to be lensed by a foreground group with an Einstein radius of $1.49\arcsec$, corresponding to a velocity dispersion of $\sim 300$ km s$^{-1}$, producing a magnification of $3.7\pm0.2$ of the dropout.
In this Section we describe a method to identify other potentially lensed sources in the catalogs and illustrate how to account for them systematically when estimating the LF. For computational speed, we considered as potential deflectors only $z<3$ objects within 18 arcseconds of the $z \sim 8$ dropouts in each field (the typical Einstein radius is of order 1-2 arcseconds for massive galaxies). The key quantity that we need to estimate the lensing strength is the velocity dispersion \citep{Turner1984,Treu2010}. Thus for every galaxy sufficiently close to a dropout, we estimate its velocity dispersion by comparing its photometry with that of samples of similar objects with spectroscopically-determined velocity dispersions. We selected galaxy samples with HST photometry in bands used in BoRG in order to estimate velocity dispersion based on our own photometry. As a comparison sample, we used data from \citet{Treu2005} and \citet{Belli2014,Belli2014a}, as described in Section~\ref{sec:data-lens}. As described in Section~\ref{sec:strong-zevol}, the velocity dispersion-stellar mass relation is believed to evolve weakly with redshift since $z\sim2$ \citep[e.g.,][]{VandeSande2013,Belli2014a,Bezanson2015}, and galaxies are intrinsically brighter at higher redshift due to younger stellar populations \citep[e.g.][]{Treu2005}. We account for this by fitting an evolving \citet{Faber1976} relation to the comparison sample of the form $L \propto \sigma^4 (1+z)^\beta$. In practice, we bin the data in redshift and fit a function of the form $\log \sigma = -0.1m + a\log(1+z) + b$ using Bayesian MCMC estimation, where $\sigma$ is the velocity dispersion in km s$^{-1}$, $m$ is the apparent magnitude in a given band, $z$ is the galaxy redshift, and $a$ and $b$ are constants. We restrict our fit to galaxies with a measured velocity dispersion of at least 200 km s$^{-1}$, where samples are less affected by incompleteness and selection effects.
We present the estimated parameters in Table~\ref{tab:strong-mag-sigma} and fits to the data are shown in Figure~\ref{fig:strong-identify-FJ}. \begin{table}[h] \centering{ \caption[ ]{Correlation between velocity dispersion, redshift and apparent magnitude} \label{tab:strong-mag-sigma} \begin{tabular}[c]{clcc} \hline \hline Redshift & Band ($m$) & a & b \\ \hline $z<0.5$ & F606W & $2.26\pm0.79$ & $4.08\pm0.12$ \\ $0.5 < z < 1.0$ & F098M & $0.93\pm0.13$ & $4.20\pm0.03$ \\ $z>1.0$ & F160W & $1.02\pm0.15$ & $4.12\pm0.05$ \\ \hline \multicolumn{4}{l}{\textsc{Note.} -- Fits of the form $\log \sigma = -0.1m + a\log(1+z) + b$} \end{tabular}} \end{table} \begin{figure*}[t] \centering{ \includegraphics[trim = 0mm 4mm 0mm 20mm, clip, width=0.9\textwidth]{FaberJackson_magsigma_zbins_line_highsigma_werrs.pdf} \caption{Evolving Faber-Jackson relation for massive galaxies with redshift. Data for $z<1$ are from \citet{Treu2005} (red and green circles), data for $z>1$ are from \citet{Belli2014} (blue triangles) and \citet{Belli2014a} (blue crosses). Red points indicate apparent magnitude in the F606W band ($z<0.5$), green points have magnitudes in the F098M band ($0.5<z<1$), and blue points are data with magnitudes in the F160W band ($z>1$). Only galaxies with $\sigma > 200$ km s$^{-1}$ were used in the fitting. The slope of the relation in velocity dispersion and magnitude is fixed at the \citet{Faber1976} result of $L \propto \sigma^4$. We fit the evolution with redshift, which changes the intercept of the line on the velocity dispersion axis. The uncertainty in magnitude is 0.1 mag, which is a fiducial value given the fitting procedures. The black dashed lines show the mean fit for the mean redshift of objects in each plotted bin.
The fitting parameters are given in Table~\ref{tab:strong-mag-sigma}.} \label{fig:strong-identify-FJ}} \end{figure*} The posterior probability distribution function of the Einstein radius for each object is found using \Eq{eqn:theory-strong_ER-SIS}, sampling over the full MCMC chain for the velocity dispersion. The redshifts of the objects were determined using the Bayesian Photometric Redshifts (BPZ) code, using a flat prior and the default parameters and templates \citep{Benitez2004,Coe2006}. All photometric redshifts for relevant foreground galaxies are well-fit by BPZ and have uncertainties in photometric redshift $<15\%$. The PDF for magnification, $p(\mu)$, is found by computing the magnification, $\mu$ (\Eq{eqn:theory-strong_mu}), at the position of the dropout given the distribution of Einstein radii found for each foreground object using the distribution for its velocity dispersion, $\sigma_\textrm{inf}$, estimated from the fits in Table~\ref{tab:strong-mag-sigma}. The greatest source of error in this procedure is the magnitude-velocity dispersion-redshift relation: uncertainties in magnitude and redshift determination have small effects on the magnification PDFs in comparison to the uncertainty in velocity dispersion. When the mean magnification produced by such a foreground object exceeds $\mu = 1.4$ we use the magnification PDF derived from the above procedure and treat the dropout as described in Section~\ref{sec:LF-combine} in our calculations of the LF. Using this method, we find that one of the dropouts \citep[borg\_0436-5259\_1233, presented in][]{Bradley2012} has a magnification probability distribution consistent with strong lensing. This dropout is shown in the top left panel of Figure~\ref{fig:strong-postage} and its estimated lensing properties are given in Table~\ref{tab:strong-strongish}.
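To illustrate the pipeline from photometry to magnification, a minimal numerical sketch follows. The bin coefficients are those quoted in Table~\ref{tab:strong-mag-sigma}; the distance ratio $D_{LS}/D_S$ is left as an input rather than computed from a cosmology, and the singular isothermal sphere (SIS) expressions are the standard ones that we assume correspond to \Eq{eqn:theory-strong_ER-SIS} and \Eq{eqn:theory-strong_mu}:

```python
import math

# (z_lo, z_hi, a, b) per redshift bin; coefficients from the table above.
FJ_BINS = [
    (0.0, 0.5, 2.26, 4.08),   # z < 0.5, F606W
    (0.5, 1.0, 0.93, 4.20),   # 0.5 < z < 1.0, F098M
    (1.0, 99.0, 1.02, 4.12),  # z > 1.0, F160W
]

def sigma_from_photometry(m, z):
    """Velocity dispersion (km/s) from apparent magnitude and redshift
    via log10(sigma) = -0.1 m + a log10(1+z) + b."""
    for zlo, zhi, a, b in FJ_BINS:
        if zlo <= z < zhi:
            return 10.0 ** (-0.1 * m + a * math.log10(1.0 + z) + b)
    raise ValueError("redshift outside fitted range")

def einstein_radius_sis(sigma, dls_over_ds):
    """Standard SIS Einstein radius in arcsec; D_LS/D_S is supplied
    directly (a cosmology calculation in the full analysis)."""
    c_kms = 299792.458
    theta_rad = 4.0 * math.pi * (sigma / c_kms) ** 2 * dls_over_ds
    return math.degrees(theta_rad) * 3600.0

def mu_sis(theta, theta_er):
    """Magnification of the brighter SIS image at position theta."""
    return theta / (theta - theta_er) if theta > theta_er else float("inf")
```

For example, a $\sim$300 km s$^{-1}$ deflector with $D_{LS}/D_S \approx 0.5$ yields an Einstein radius of order an arcsecond, in line with the scales quoted above.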
The dropout appears to be magnified by a large galaxy at $z\sim 1.5$ with estimated velocity dispersion $294\pm47$ km s$^{-1}$ (estimated from photometry via the empirical relation presented in Table~\ref{tab:strong-mag-sigma}). We estimate its magnification to be $\mu = 2.05 \pm 0.52$; the large uncertainty is due to the uncertainty in the relationship between apparent magnitude and velocity dispersion. If the dropout is indeed strongly lensed the counter image would be almost directly behind the center of the lens galaxy, and would be demagnified according to \Eq{eqn:theory-strong_mu}, unfortunately making it impossible to detect. The dropout is very faint ($m_\textrm{J} = 27.1 \pm 0.2$) and no significant elongation is detected in any of the observed bands, but this dropout would be an excellent object for further investigation. Three of the dropouts (borg\_1301+0000\_160, borg\_1408+5503\_980 and borg\_2155-4411\_341) experience mean magnification $> 1.4$. Postage stamps of these dropouts are shown in Figure~\ref{fig:strong-postage} and their lensing properties are presented in Table~\ref{tab:strong-strongish}. As described in Section~\ref{sec:data-borg}, the fiducial BoRG contamination fraction is 42\% \citep{Bradley2012,Schmidt2014}, meaning that some of the sources presented here may be lower redshift interlopers \citep[e.g.,][]{Hayes2012,Bouwens2013}. Without further photometry and/or spectroscopy we cannot identify which of the sources are interlopers, but we note that the photometric redshift PDFs for these four sources (obtained from BPZ) all have strong peaks at $z\sim8$, suggesting a higher probability than the average (58\%) for these particular objects to be true $z\sim8$ sources. Interestingly, borg\_1301+0000\_160 is the brightest dropout in the survey, with $m_J = 25.5\pm0.2$, and appears tangentially elongated in the J-band image (top right panel of Figure~\ref{fig:strong-postage}).
This object is also a very interesting target for further imaging and spectroscopic follow-up. We note that our method assigns a significantly lower velocity dispersion to the potential strong lens (borg\_0440-5244\_647) than the one estimated by \citet{Barone-Nugent2013a} in their presentation of this object. They estimated the velocity dispersion of the deflector to be $\sigma \sim 300$ km s$^{-1}$, whereas our method estimates a mean velocity dispersion of $\sim 170 \pm 33$ km s$^{-1}$. This is likely because our method does not account for lensing by groups and clusters, while \citet{Barone-Nugent2013a} suggest that this dropout is lensed by a group of at least two objects at $z\sim1.8$, of which borg\_0440-5244\_647 is the largest. They estimated velocity dispersions of the deflector galaxies by using an abundance matching relation between mass and luminosity, derived from \citet{Cooray2005}, and measuring the angular size of the lensing objects. However, when using a redshift-dependent \citet{Faber1976} relation \citep{Barone-Nugent2015} similar to ours (Table~\ref{tab:strong-mag-sigma}) they estimate the velocity dispersion of this single galaxy to be $\sim 180 \pm 46$ km s$^{-1}$ (private communication), which agrees with our result. Neglecting group-scale lensing is a potential limitation of our method, which may underestimate magnification in a few cases. However, the impact on the overall LF inference is negligible since the phenomenon is so rare. \begin{figure*}[t] \centering{ \includegraphics[width=0.8\textwidth]{postage_stamps_jband.pdf} \caption{The four BoRG dropouts (from top left to bottom right: borg\_0436-5259\_1233, borg\_1301+0000\_160, borg\_1408+5503 and borg\_2155-4411\_341) with significant magnification probabilities, shown in the F125W band with a Gaussian smoothing radius of 1 in $8 \arcsec$ boxes. The solid red lines outline the dropouts with a $0.3 \arcsec$ radius.
The dashed green lines outline the potential foreground deflectors, with radius corresponding to the Einstein radius of an SIS deflector lensing a source at $z=8$. The candidate strong lens system (borg\_0436-5259\_1233) is shown in the top left panel, and has an estimated magnification of $\mu = 2.05 \pm 0.52$. Interestingly, borg\_1301+0000\_160 (top right) is the brightest dropout in the BoRG survey. The parameters for all of these objects are given in Table~\ref{tab:strong-strongish}.} \label{fig:strong-postage}} \end{figure*} \begin{table*}[!t] \centering{ \caption[ ]{Strong and intermediate lensing parameters derived by estimating velocity dispersions of bright foreground galaxies close to $z\sim8$ dropouts} \label{tab:strong-strongish} \begin{tabular}[c]{lcccccccc} \hline \hline Field & Dropout ID & $J_\textrm{125} \,^\textrm{a}$ & Foreground ID & $z_f$ & Separation ($\arcsec$) & $\sigma_\textrm{inf}$(km s$^{-1}$) & $\theta_\textrm{ER}$ ($\arcsec$) & $\mu$ \\ \hline borg\_0436-5259 & 1233$^\textrm{b,c}$ & $27.1\pm0.2$ & 1191 & $1.52\pm0.03$ & 2.79 & $294\pm47$ & $1.32\pm0.40$ & $2.05\pm0.52$ \\ borg\_1301+0000 & 160$^\textrm{d}$ & $25.5\pm0.2$ & 144 & $1.14\pm0.15$ & 1.99 & $184\pm31$ & $0.60\pm0.20$ & $1.47\pm0.30$ \\ borg\_1408+5503 & 980$^\textrm{c}$ & $27.0\pm0.2$ & 959 & $0.40\pm0.06$ & 3.11 & $193\pm69$ & $1.01\pm0.70$ & $1.54\pm0.62$ \\ borg\_2155-4411 & 341$^\textrm{c}$ & $26.6\pm0.2$ & 244 & $0.74\pm0.11$ & 2.27 & $216\pm22$ & $0.97\pm0.20$ & $1.80\pm0.33$ \\ \hline \multicolumn{9}{l}{\textsc{Note.} -- $^\textrm{a}$ Total (AUTOMAG) apparent magnitude in the J-band of the dropout \citep{Bradley2012}.} \\ \multicolumn{9}{l}{ $^\textrm{b}$ Strongly-lensed candidate. $^\textrm{c}$ 5$\sigma$ source. $^\textrm{d}$ 8$\sigma$ source.} \end{tabular}} \end{table*} \newpage \section{Weak Lensing}\label{sec:weak} In this section we discuss the methods used to find the PDFs for magnification of a source at $z\sim8$ by all intervening matter. 
We used the Pangloss code\footnote{\url{http://github.com/drphilmarshall/Pangloss}} developed by \citet{Collett2013} that generates lensing parameters for reconstructed lines-of-sight. We describe the production of magnification PDFs from simulation data from the Millennium Simulation \citep{Springel2005} in Section~\ref{sec:weak-pangloss}, and in Section~\ref{sec:weak-borg} we present the BoRG field weak magnification PDFs. Our PDFs agree well with other theoretical work at lower redshifts \citep{Hilbert2007,Hilbert2009,Greene2013}. \subsection{Estimating Magnification from Simulation Catalogs}\label{sec:weak-pangloss} The weak lensing reconstruction model developed by \citet{Collett2013} takes simulation halo catalogs and places halos in a three-dimensional grid, with each halo contributing convergence $\kappa_i$ and shear $\gamma_i$ along a line-of-sight to a source at a given redshift. Halos are modeled as truncated NFW profiles \citep{Baltz2009}: \begin{equation} \label{eqn:weak_NFW_trunc} \rho(r) = \frac{\rho_\textrm{NFW}(r)}{1 + \left(\frac{r}{r_t}\right)^2} \end{equation} where we used the truncation radius $r_t = 5r_{200}$, shown to be robust by \citet{Collett2013}; $r_{200}$ is the radius at which the mass density falls to 200 times the critical mass density of the universe. The convergence and shear derived from this profile are given in \citet{Baltz2009}. Magnification due to all intervening deflectors along a line-of-sight is given by \Eq{eqn:theory-weak_mu}. We built PDFs for all lensing parameters by sampling over $10^3$ lines-of-sight. As described in Section~\ref{sec:data-millsim} we used lightcones built from the Millennium Simulation \citep{Henriques2012,Springel2005}. The simulated catalogs provide a list of halos with associated galaxies, but they do not include other dark structure, clumped in filaments and absent in voids.
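A sketch of the profile in \Eq{eqn:weak_NFW_trunc}; the standard NFW form for $\rho_\textrm{NFW}$, with scale density $\rho_s$ and scale radius $r_s$, is an assumption of this illustration:

```python
def rho_nfw(r, rho_s, r_s):
    """Standard NFW density profile with scale density and radius."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def rho_truncated(r, rho_s, r_s, r200):
    """Truncated NFW profile of Baltz et al. (2009) as used in the text,
    with truncation radius r_t = 5 r200; the truncation term smoothly
    suppresses the density beyond r_t."""
    r_t = 5.0 * r200
    return rho_nfw(r, rho_s, r_s) / (1.0 + (r / r_t) ** 2)
```

By construction the truncated profile equals half the NFW value at $r = r_t$ and falls off as $r^{-5}$ well beyond it, keeping the total mass finite.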
This missing matter will affect the overall density of the universe so it is necessary to take this into account when estimating $\kappa$ and $\mu$. We account for this by subtracting convergence from redshift slices so that the mean convergence along all lines-of-sight in the catalogs to a given redshift equals zero, and the mean magnification is unity, as they should be. Following work by \citet{Suyu2010} and \citet{Greene2013}, we compare lines-of-sight in the BoRG fields with simulation data based on relative density of objects. We define the overdensity parameter \begin{equation} \label{eqn:density} \xi = \frac{n_i}{n_\textrm{tot}} \end{equation} where $n_i$ is the number of objects per unit area in each lightcone (or real field) and $n_\textrm{tot}$ is the total number of objects divided by the total survey area. Given that the simulation catalogs are $\sim 500 \times$ larger than the total BoRG survey area we expect them to give representative results. We then calculate the number of objects per square arcsecond brighter than $m = 24$ in the J-band in each of the BoRG fields compared to the total number of objects above this flux limit in the whole survey. Similarly we calculate the overdensity of objects above the same limit in the simulated lightcones. \citet{Henriques2012} include mock photometry based on stellar population synthesis codes by \citet{Maraston2005} which include J-band magnitudes. As shown in Figure~\ref{fig:weak-pangloss-millsim_overdensity}, the distribution of overdensities for the observed data is within the range of that for simulated data. Finally, to generate magnification PDFs for a given BoRG field, we combine the magnifications from all simulation lines-of-sight which are within $\pm 2\%$ in overdensity of the observed value. \begin{figure}[t] \includegraphics[width=0.49\textwidth]{overdensity_compare.pdf} \caption{Comparison of the overdensity of lines-of-sight in the Millennium Simulation and the BoRG fields. 
$\xi = n_i/n_\textrm{tot}$ where $n_i$ is the number of objects per unit area above a certain flux limit in each lightcone (or real BoRG field) and $n_\textrm{tot}$ is the total number of objects above the same flux limit divided by the total survey area. We use a flux limit of $m<24$ in F125W (J-band).} \label{fig:weak-pangloss-millsim_overdensity} \end{figure} In Figure~\ref{fig:weak-pangloss-compare_Hilbert} we plot the magnification PDFs for a source at various redshifts over all lines-of-sight. As the source redshift increases, the peak of the distribution shifts to lower magnification, but the high-magnification tail becomes more important, such that the mean magnification over all lines-of-sight remains unity. Our results match those of \citet{Hilbert2007} well for $z<6$. It is clear that there is little change in the distribution between $z_S=6$ and $z_S=8$, as there are negligible numbers of large halos at $z>5$. \begin{figure}[t] \includegraphics[width=0.49\textwidth]{PofMu_z_henriques_compare_Hilbert2007.pdf} \caption{Probability distribution function for magnification for four values of source redshift. The dashed line marks the mean magnification of the universe. These results compare well with \citet{Hilbert2007}. Due to the lack of significant mass between $z \sim 6$ and $z \sim 8$ there is little change in the distributions of magnification for sources at those redshifts, as the total convergence does not change much.} \label{fig:weak-pangloss-compare_Hilbert} \end{figure} In Figure~\ref{fig:weak-pangloss-z8} we plot the magnification PDFs for a variety of overdensities. The more overdense lines-of-sight produce a higher mean magnification, as expected, but also have a greater variance than the distributions for underdense lines-of-sight. This agrees well with the estimates at lower redshift by \citet{Greene2013}.
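The overdensity matching can be sketched as follows; whether the $\pm2\%$ tolerance is relative or absolute is our reading of the text, and it is implemented here as relative:

```python
def overdensity(n_field, area_field, n_tot, area_tot):
    """xi = n_i / n_tot: source counts per unit area in one field (or
    lightcone) relative to the survey-wide mean density."""
    return (n_field / area_field) / (n_tot / area_tot)

def matching_lightcones(xi_obs, xi_cones, tol=0.02):
    """Indices of simulated lightcones whose overdensity lies within
    +/- tol (relative, 2% in the text) of the observed field's value;
    their magnifications are then pooled into the field's p(mu)."""
    return [i for i, xi in enumerate(xi_cones)
            if abs(xi - xi_obs) <= tol * xi_obs]
```

A field at exactly the survey mean density has $\xi = 1$ and is matched to near-mean-density lightcones.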
\begin{figure}[t] \includegraphics[width=0.49\textwidth]{PofMu_z=8.pdf} \caption{Probability distribution function for magnification for a range of values of overdensities for a source at $z=8$. More overdense lines-of-sight are skewed towards higher magnification, with a broad distribution. More underdense lines-of-sight are skewed towards lower magnification, with a narrower distribution due to the deficit of intervening mass.} \label{fig:weak-pangloss-z8} \end{figure} \subsection{BoRG Weak Lensing Magnification PDFs}\label{sec:weak-borg} The kernel density estimates \citep{rosenblatt1956,parzen1962} fit to the magnification PDFs for all the BoRG fields are shown in Figure~\ref{fig:weak-borg_allpdfs}. As expected, the BoRG fields do not have significant over- or underdensities, but are rather typical of blank fields at $z\sim8$, as shown in Figure~\ref{fig:weak-pangloss-compare_Hilbert}. \begin{figure}[t] \includegraphics[width=0.49\textwidth]{PofMu_borg_all.pdf} \caption{Probability distribution function for magnification for all of the BoRG fields, with a source at $z\sim8$. The lines are kernel density estimations to the distributions. It is clear there is little range in overdensity for the BoRG fields.} \label{fig:weak-borg_allpdfs} \end{figure} There is significant motivation for the magnification PDFs to take a log-normal form. The 3D matter density distribution of the universe is well-described by a log-normal random field \citep{Coles1991}, and weak lensing probability distributions arise directly from the mass distribution. However, when accounting for the magnification bias in individual fields to infer the LF from the dropout sample (see Section~\ref{sec:LF}) it was necessary to express the magnification distributions in a form that could easily convolve analytically with a Gaussian distribution (for more details see Appendix~\ref{app:bayes}). 
For this we used a Bayesian MCMC approach to fit the distributions of magnification for each field as a linear sum of Gaussian functions. \section{Recomputing the LF}\label{sec:LF} In this section we outline the method of estimating the $z\sim8$ LF from the BoRG high-redshift candidates, taking the magnification bias into account. Following \citet{Schmidt2014}, who did not account for the magnification bias when estimating the BoRG $z\sim 8 $ LF, we use the Bayesian inference method devised by \citet{Kelly2008}, which is described in Section~\ref{sec:LF-bayesian} and in Appendix~\ref{app:bayes}. In Section~\ref{sec:LF-lensing} we describe in more detail how we take into account the weak and intermediate lensing magnification. \subsection{Bayesian Estimation of the LF} \label{sec:LF-bayesian} As in \citet{Schmidt2014}, we assume that the intrinsic luminosity function is modeled by the Schechter function in \Eq{eqn:theory_LF-sch}. In order to facilitate comparison with our previous work we use the sample of 38 BoRG Y-band dropouts and 59 additional fainter dropouts from the Hubble Ultra-Deep Field (HUDF) and Early Release Science (ERS) programs \citep{Bouwens2011}. 
Bayesian statistics allows us to express the posterior probability that the LF is fit by a Schechter function with parameters $\theta = (\alpha, L^\star, \Psi^\star)$ given the observed luminosity $L_\textrm{J,obs}$ of the dropouts in the J-band, and the non-detections in the V-band ($I_V = 0$), as the product of the prior on the Schechter parameters and the likelihood: \begin{equation} \label{eqn:LF-bayesian_bayes} p(\theta \,|\, L_\textrm{J,obs}, I_V =0) \propto p(\theta) \times p(L_\textrm{J,obs}, I_V = 0 \,|\, \theta) \end{equation} This posterior probability can be expressed (see Appendix~\ref{app:bayes} for full details) as: \begin{eqnarray} \label{eqn:LF-bayesian_bayes-margpost} p(\theta &\,|\,& L_\textrm{J,obs}, \; I_\textrm{V}=0) \propto \; p(\theta) \times C^{N_z}_{(1-f)n} \times C^{\frac{f}{1-f} N_z}_{f n} \nonumber \\ &\times& \; \prod_{l}^\mathcal{C} \left[1-\frac{ A_l}{\overline{\mu}_l A_\textrm{sky}}\; p(I=1\,|\,\theta) \right]^{\frac{N_z-(1-f_l)c_{l}}{1-f_l}} \nonumber \\ &\times& \; \prod_i^{n} p(L_{\textrm{J,obs},i}\,|\,\theta) \end{eqnarray} where the products run over the $\mathcal{C}$ fields, indexed by $l$, and the $n$ $z\sim8$ candidates, indexed by $i$. Here $N_z$ is the number of high-$z$ dropouts in the surveyed comoving cosmological volume, $A_l$ is the area of the individual $\mathcal{C}$ fields in \citet{Schmidt2014}, which each contain $c_l$ high redshift candidates ($n = \sum_l^\mathcal{C} c_l$). Each candidate has an assumed contamination fraction of $f_l$. We use a fiducial value for the contamination of 42\% for the BoRG sample; the contamination fractions for the HUDF/ERS samples are included in the selection function (see Appendix~\ref{app:bayes}), as described in \citet{Oesch2012,Bradley2012} and \citet{Schmidt2014}.
Changing the contamination value in the range $f=0-0.60$ affects the characteristic magnitude and the number density of the LF by less than their estimated $1\sigma$ uncertainties, and the change in the faint-end slope is comparable to its $1\sigma$ uncertainty \citep{Bradley2012,Schmidt2014}. The Bayesian framework allows us to accurately estimate the LF parameters accounting for contamination. $A_\textrm{sky}$ is the area of the full sky. The $C^a_b$ factors are binomial coefficients, which correctly model the source counts. We assume uniform priors on $\alpha$, $\log_{10}L^\star$ and $\log_{10}N_z$. $p(I=1\,|\,\theta)$ is the probability distribution of an object making it into the dropout sample based on the photometric selection described in \citet{Schmidt2014}. $p(L_{\textrm{J,obs},i}\,|\,\theta)$ is the likelihood function for the observed J-band luminosity of the $i$th object in the sample. The last term includes marginalization over the magnification PDF: \begin{eqnarray} \label{eqn:LF-bayesian_bayes-Lobs-like2} p(L_\textrm{J,obs}\,|\,\theta) = \int \int& p(\mu)\; p(L_\textrm{J,obs}\,|\,\mu L_\textrm{J,true}) \nonumber \\ \times &p(L_\textrm{J,true}\,|\,\theta) \; dL_\textrm{J,true} \; d\mu \end{eqnarray} In Appendix~\ref{app:bayes} we give the expanded expression of the posterior distribution from \Eq{eqn:LF-bayesian_bayes-margpost} used when performing the LF parameter inference and describe the derivation and motivation for \Eq{eqn:LF-bayesian_bayes-Lobs-like2}. We refer to Appendix~\ref{app:bayes} and \citet{Schmidt2014} for further details. \subsection{Including the Lensing Corrections}\label{sec:LF-lensing} \subsubsection{Analytic Form for Magnification PDFs}\label{sec:LF-bayesian-pofmu} In order to make integration of \Eq{eqn:LF-bayesian_bayes-Lobs-like2} computationally feasible we require a simple analytic form for $p(\mu)$ that will convolve simply with a Gaussian distribution (see Appendix~\ref{app:bayes}).
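A brute-force numerical sketch of the double integral in \Eq{eqn:LF-bayesian_bayes-Lobs-like2}, with a Gaussian luminosity measurement error of width `sigma_phot` standing in for $p(L_\textrm{J,obs}\,|\,\mu L_\textrm{J,true})$; the grids and the error model are our illustrative assumptions:

```python
import math

def likelihood_obs(L_obs, p_mu, p_true, mu_grid, L_grid, sigma_phot):
    """p(L_obs | theta) = int int p(mu) p(L_obs | mu*L_true)
    p(L_true | theta) dL_true dmu, evaluated on uniform grids.
    p_mu and p_true are callables returning probability densities."""
    dmu = mu_grid[1] - mu_grid[0]
    dL = L_grid[1] - L_grid[0]
    total = 0.0
    for mu in mu_grid:
        for L_true in L_grid:
            # Gaussian measurement error centred on the magnified luminosity
            err = math.exp(-0.5 * ((L_obs - mu * L_true) / sigma_phot) ** 2)
            err /= sigma_phot * math.sqrt(2.0 * math.pi)
            total += p_mu(mu) * err * p_true(L_true) * dL * dmu
    return total
```

In the full analysis this double integral is evaluated analytically by expressing $p(\mu)$ as a sum of Gaussians, which is what motivates the fits of the next subsection.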
As described in Section~\ref{sec:weak-borg}, the weak lensing magnification PDF is well-fit by a log-normal distribution. However, this cannot be convolved analytically with a Gaussian. Therefore, we fit the magnification PDFs from all regimes as a linear combination of Gaussian functions with different means and standard deviations. The weak lensing magnification PDFs (see Section~\ref{sec:weak-borg}) are well-fit by a combination of three Gaussian functions, as are the intermediate lensing PDFs (see Section~\ref{sec:strong-identify}). \subsubsection{Combining Lensing Regimes}\label{sec:LF-combine} All of the fields have weak lensing magnification PDFs based on their overdensity (see Section~\ref{sec:weak}), but we have also identified one strongly-lensed candidate and three dropouts close to large foreground galaxies that produce an intermediate magnification PDF (see Section~\ref{sec:strong-identify}). To account for the magnification bias, we need to use the correct magnification PDF for each field. When a strong or intermediate lens appears to be present, we split the field into two parts for the calculation of the posterior: within a circle of radius $10 \; \theta_\textrm{ER}$ containing the dropout and the deflector, we use the strong or intermediate lens magnification PDF; for the remainder of the field we use the weak lensing magnification PDF. Whilst the total flux across the sky is conserved, local over- or underdensities that produce magnification not only magnify fluxes, but also increase areas. Hence, the individual BoRG fields we observe have been magnified (or demagnified) from their true sizes. We account for this in the posterior probability \Eq{eqn:LF-bayesian_bayes-margpost} by dividing the measured area of each field by the mean magnification in that field, $\overline{\mu}_l$, from the magnification PDFs. For weak lensing magnification PDFs $\overline{\mu}_l \sim 1$.
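The three-Gaussian fit to a log-normal magnification PDF described above can be sketched as follows (Python). The log-normal width and the initial guesses are illustrative assumptions, not the fitted BoRG values.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import lognorm

def gauss_mix3(mu, *p):
    """Sum of three Gaussians; p = (beta1, m1, s1, ..., beta3, m3, s3)."""
    total = np.zeros_like(mu)
    for beta, m, s in zip(p[0::3], p[1::3], p[2::3]):
        s = abs(s)  # keep widths positive during the fit
        total += beta * np.exp(-0.5 * ((mu - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return total

# target: a log-normal weak-lensing p(mu) peaked near mu = 1 (assumed width)
mu = np.linspace(0.7, 1.4, 400)
target = lognorm.pdf(mu, s=0.05, scale=1.0)

# initial guesses: three overlapping components straddling mu = 1
p0 = [0.3, 0.95, 0.04, 0.4, 1.00, 0.04, 0.3, 1.05, 0.04]
popt, _ = curve_fit(gauss_mix3, mu, target, p0=p0, maxfev=20000)
residual = np.max(np.abs(gauss_mix3(mu, *popt) - target))
```

The residual of the mixture fit is a small fraction of the peak density, which is what makes the analytic convolution of the next subsection accurate in practice.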
For the intermediate lensing case $1.4 < \overline{\mu}_l < 2$ due to our selection process. As magnification is most important for the bright-end of the LF, and negligible at the faint end, for simplicity and without loss of precision, we adopt $\mu=1$ for the 59 fainter dropouts \citep{Bouwens2011}. Additionally, one of the BoRG fields (borg\_1815\_3244) is centered on the Galactic plane and is dominated by stars. We discard this field in our calculation of the LF. \section{Results}\label{sec:results} Using the framework described in Section~\ref{sec:LF} to account for the magnification bias, we present our estimation of the $z\sim8$ galaxy LF based on our sample of 97 $z\sim8$ LBGs (described in Section~\ref{sec:data}). First, in Section~\ref{sec:occur} we compare our estimates of strong and intermediate lensing probabilities with the actual observations. Then, in Section~\ref{sec:results-LF} we carry out the inference of the $z\sim8$ LF. Finally, in Section~\ref{sec:results-LF-z}, we use our semi-analytical model of strong lensing optical depths described in Section~\ref{sec:strong} to predict the form of observed LFs at $z \geq 8$. \subsection{Strong and Intermediate Lensing Events in the BoRG Survey}\label{sec:results-borg-strong} \label{sec:occur} The simple SIS strong lensing model described in Section~\ref{sec:strong-zevol} predicts the probability of $z\sim8$ sources in the BoRG survey being multiply imaged to be $\sim 3-15\%$, increasing as the field limiting magnitude becomes brighter than $M^\star$. The majority of the BoRG fields have a multiple-image probability for high-redshift sources of $< 10\%$ (see Figure~\ref{fig:strong-zevol_flens}). We predict that 1-2 of the 38 BoRG Y-band dropouts may be strongly lensed. One candidate strong lens system in BoRG was presented by \citet{Barone-Nugent2013a}; a rigorous search for strong lenses in all 71 BoRG fields as part of this work revealed one more candidate.
Additionally, this search revealed three candidate intermediate lens systems, with $\mu>1.4$. These candidates are presented in Figure~\ref{fig:strong-postage} and Table~\ref{tab:strong-strongish}. Whilst strong lensing creates larger magnification, the probability of encountering a strong lens along the line-of-sight is low: as shown in Figure~\ref{fig:strong-zevol_optdep-diff-vdfevol} the optical depth is roughly $\tau\approx 0.31\%$ for a source at $z=8$. The optical depth for intermediate lensing is much higher: for an object to experience intermediate lensing it must be within $3.5\theta_\textrm{ER}$ of the foreground deflector, resulting in $\tau\approx 4\%$ for a source at $z=8$. Thus, intermediate lensing offers an additional boost to the flux of high-redshift galaxies, and must be correctly accounted for in estimations of the LF. \subsection{Inference of the Intrinsic $z\sim8$ LF}\label{sec:results-LF} We estimate the $z\sim8$ LF from the sample of 97 LBGs described in Section~\ref{sec:data} (including the 38 S/N$_\textrm{J} > 5$ objects from the BoRG survey), accounting for the effects of magnification bias. We sample the posterior distribution function for the Schechter function parameters with an MCMC chain of 40\,000 steps. The results of the estimated LF are shown in Figure~\ref{fig:results_LF-borg}, and the correlations between the Schechter function parameters and their PDFs are shown in Figure~\ref{fig:results_LF-borg-params}. We plot the results of \citet{Schmidt2014} for comparison in both figures. We see a small deviation from the uncorrected LF of $\sim0.15$ mag at the limit of the brightest BoRG source, and there is negligible difference between the LFs at $M > -21$. The Schechter function parameters for the new LF are within the uncertainties of the estimation by \citet{Schmidt2014}, though we find a slightly fainter value of $M^\star$ and higher value of $\Psi^\star$ than \citet{Schmidt2014}.
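The posterior sampling can be reproduced with a simple random-walk Metropolis algorithm. In the sketch below (Python), a Gaussian toy log-posterior stands in for the full marginalized posterior of \Eq{eqn:LF-bayesian_bayes-margpost}; the step sizes, seed, and the uncorrelated-Gaussian form are assumptions made purely for illustration.

```python
import numpy as np

def metropolis(log_post, theta0, step, n_steps, rng):
    """Random-walk Metropolis sampler returning the full chain."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

# toy stand-in for the marginalized posterior: an uncorrelated Gaussian
# centred on the inferred Schechter parameters (alpha, M*, log10 Psi*)
centre = np.array([-1.72, -19.85, -3.00])
width = np.array([0.30, 0.33, 0.27])

def log_post(theta):
    return -0.5 * np.sum(((theta - centre) / width) ** 2)

rng = np.random.default_rng(42)
chain = metropolis(log_post, centre, 0.5 * width, 40_000, rng)
```

After discarding a burn-in segment, the posterior medians and 68\% intervals follow directly from percentiles of the chain.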
This is expected because of the slight deviation at the bright end of the LF, and there is a strong correlation between these parameters, as shown in Figure~\ref{fig:results_LF-borg-params}. It is clear that magnification bias is not a significant effect at this redshift and the luminosity range of the BoRG sources. This also demonstrates that although we predict $3-15\%$ of the BoRG sources are strongly lensed this does not affect the LF within the survey limits, as predicted in Section~\ref{sec:theory-magbias}. Our results are in good agreement with those of \citet{Fialkov2015}, who use an independent semi-analytic method to show the effect of magnification bias is small below $M>-21.5$. \citet{Fialkov2015} predict that if the brightest observed galaxy has absolute magnitude $M_\textsc{uv} = - 24.5$, lying in the significantly distorted tail of the magnified LF \citep{Wyithe2011}, there is a $\sim13.3\%$ discrepancy in the normalization of a Schechter LF at $z\sim8$ for sampled galaxies with $\mu_\textrm{max} = 2$ (i.e. only weak and intermediate lensing effects) compared to the intrinsic LF. Whilst this upper limit is several orders of magnitude brighter than currently observed, this demonstrates that it will be important to include the effects of magnification bias from weak and intermediate lensing in surveys that find extremely bright galaxies. Table~\ref{tab:results} summarizes the estimated Schechter function parameters for this LF in comparison with other recent LF estimates from the literature. We find that our fit parameters are in good agreement with the recent literature, demonstrating that magnification bias is not affecting current $z\sim8$ LF observations. Note that our results have significantly smaller error bars than those of \citet{Finkelstein2014}, because their sample contains only 3 $z\sim 8$ galaxies brighter than $M = -21$, making their fit less well-constrained at the bright end.
Our results show that magnification bias does not affect current estimates of the LF at $z \simlt 8$ and therefore cannot explain the apparent flattening of the bright-end of the LF recently observed by \citet{Bowler2014b,Bowler2014a} and \citet{Finkelstein2014} at $z\sim 7 - 8$. \citet{Bowler2014b,Bowler2014a} accounted for strong lensing of bright sources, but they still find a deviation of $\sim0.4$ mag from a Schechter fit at $M=-22$. We predict a lensed fraction of $\sim 3-15 \%$ for bright galaxies (Figure~\ref{fig:strong-zevol_flens}) from the BoRG survey, which is essentially free of cosmic variance \citep{Trenti2008}. Therefore, provided that cosmic variance and contamination by lower redshift interlopers \citep{Hayes2012,Bradley2012,Bouwens2013,Schmidt2014} were correctly accounted for in the work of \citet{Bowler2014b,Bowler2014a} and \citet{Finkelstein2014}, we expect the magnification bias to be negligible in the bright-end of these LFs. This lends credence to the interpretation that these observations may be the result of the changing intrinsic properties of galaxies at $z\simgt 7$, possibly due to changing dust fractions \citep{Cai2014} and/or feedback processes \citep{Somerville2012}. \begin{figure} \includegraphics[width=0.49\textwidth]{LF-int_strong-3gauss-N40000.pdf} \caption{The intrinsic $z\sim8$ LF, which is well-described by a \citet{Schechter1976} function, including the magnification bias due to weak and intermediate lensing in all BoRG fields (solid black line). We plot the LF without the treatment of the magnification bias \citep{Schmidt2014} for comparison (dashed red line). The lines correspond to the median values of the MCMC samples and the shaded regions correspond to the 68\% confidence region of the samples. The LF estimated here is virtually indistinguishable from that of \citet{Schmidt2014}, demonstrating that magnification bias is not a significant effect at $z\sim8$.
The Schechter parameters for this LF are given in Table~\ref{tab:results} along with literature values. The binned data from BoRG12 \citep{Bradley2012} and the faint HUDF/ERS candidates \citep{Bouwens2011} are also plotted as blue and green points respectively. The inverted green triangle denotes the brightest BoRG dropout. We note that the LF is estimated from the unbinned data.} \label{fig:results_LF-borg} \end{figure} \begin{figure*} \includegraphics[width=0.98\textwidth]{LF-multiD.pdf} \caption{The correlations between the $z\sim8$ LF Schechter function parameters ($\alpha$, $M^\star$ and $\Psi^\star$) estimated from the BoRG dropouts including treatment of magnification bias (black), compared to the parameters obtained without the treatment of magnification bias \citep[red,][]{Schmidt2014} with $1\sigma$ and $2\sigma$ confidence contours. There is a clear correlation between all three parameters. The top panels show the marginalized PDFs for each parameter.} \label{fig:results_LF-borg-params} \end{figure*} \begin{table*} \centering{ \caption[ ]{Comparison of $z\sim8$ Schechter LF Parameters} \label{tab:results} \begin{tabular}[c]{llll} \hline \hline Reference & $M^\star$ & $\alpha$ & $\log_{10} \Psi^\star$ [Mpc$^{-3}$] \\ \hline This work & $-19.85^{+0.30}_{-0.35}$ & $-1.72^{+0.30}_{-0.29}$ & $-3.00^{+0.23}_{-0.31}$ \\ \citet{Finkelstein2014} & $-20.89^{+0.74}_{-1.08}$ & $-2.36^{+0.54}_{-0.40}$ & $-4.14^{+0.65}_{-1.01}$ \\ \citet{Bouwens2014} & $-20.63\pm0.36$ & $-2.02\pm0.23$ & $-3.68\pm0.32$ \\ \citet{Schmidt2014} $5\sigma$ & $-20.15^{+0.29}_{-0.38}$ & $-1.87^{+0.26}_{-0.26}$ & $-3.24^{+0.25}_{-0.34}$ \\ \citet{Schmidt2014} $8\sigma$ & $-20.40^{+0.39}_{-0.55}$ & $-2.08^{+0.30}_{-0.29}$ & $-3.51^{+0.36}_{-0.52}$ \\ \citet{McLure2013} & $-20.12^{+0.37}_{-0.48}$ & $-2.02^{+0.22}_{-0.23}$ & $-3.35^{+0.28}_{-0.47} $ \\ \citet{Schenker2013} & $-20.44^{+0.47}_{-0.35}$ & $-1.94^{+0.21}_{-0.24}$ & $-3.50^{+0.35}_{-0.32} $ \\ \hline \end{tabular}} \end{table*}
\subsection{Predictions for $z>8$ and Future Surveys}\label{sec:results-LF-z} There is clear evolution in the LF for $z<8$ \citep[e.g.][]{Bouwens2007,VanderBurg2010,Bouwens2014,Bowler2014b,Finkelstein2014}, and this is expected to continue to higher redshifts. However, the processes which drive this evolution are not well understood: the evolution is thought to follow hierarchical structure formation and the evolution of the halo mass function \citep{Vale2004}, but there are also important quenching processes that may reduce star formation in massive galaxies \citep{SaasFee,Somerville2012}, and changes in the amount of dust present in galaxies will affect the attenuation of flux. Thus there are a multitude of theoretical models for the evolution of the LF. The gravitationally lensed LF (\Eq{eqn:theory-magbias_mod-LF}) exhibits a significant `kick' in the bright-end tail for $M \simlt -22$ at $z\sim8$. This is just beyond the brightest BoRG objects, so it is unlikely that the BoRG survey observes the regime of magnification bias at the bright-end. This is in agreement with theoretical studies by \citet{Wyithe2011} and \citet{Fialkov2015}. However, in upcoming wide-area surveys magnification bias presents a useful tool to test LF evolution models because it allows us to probe the bright end, where there are large theoretical uncertainties and the evolution is expected to be fast \citep{Bowler2014b}. In order to explore the range of possible scenarios, in Figure~\ref{fig:results_LF-zevol} we plot the predicted intrinsic (dashed lines) and observed (solid lines) LFs for a range of redshifts, comparing a variety of evolution models. We assume these models are the intrinsic LFs at a given redshift and use \Eq{eqn:theory-magbias_mod-LF} to estimate the observed LF. We plot the BoRG $z\sim8$ LF \citep{Schmidt2014} for comparison. Additionally, we mark the comoving volumes and magnitude ranges accessible to future high-redshift surveys.
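The bright-end `kick' of the magnified LF can be illustrated with a simplified magnification average. The Python sketch below uses one common simplified form, $\Psi_\textrm{obs}(M) = \int p(\mu)\,\mu^{-1}\,\Psi(M + 2.5\log_{10}\mu)\,d\mu$, rather than the paper's \Eq{eqn:theory-magbias_mod-LF} (which is not reproduced in this section); the Schechter parameters and the toy $p(\mu)$ with a rare strong-lensing tail are illustrative assumptions.

```python
import numpy as np

trapz = getattr(np, "trapezoid", np.trapz)  # NumPy 2.x renamed trapz

def schechter_mag(M, M_star=-20.15, alpha=-1.87, log_psi=-3.24):
    """Schechter LF in magnitudes [Mpc^-3 mag^-1] (illustrative parameters)."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10.0) * 10.0 ** log_psi * x ** (alpha + 1) * np.exp(-x)

def observed_lf(M, mu, p_mu):
    """Magnification-averaged LF: a source observed at M had intrinsic
    magnitude M + 2.5 log10(mu); 1/mu accounts for solid-angle dilution."""
    intrinsic = schechter_mag(M[None, :] + 2.5 * np.log10(mu)[:, None])
    return trapz(intrinsic * (p_mu / mu)[:, None], mu, axis=0)

# toy p(mu): a narrow weak-lensing core plus a rare high-magnification tail
mu = np.concatenate([np.linspace(0.9, 1.1, 200), np.linspace(1.1, 30.0, 200)])
p_mu = np.exp(-0.5 * ((mu - 1.0) / 0.03) ** 2)
p_mu[mu > 2.0] += 2e-4                   # strong-lensing tail (assumed weight)
p_mu /= trapz(p_mu, mu)

M = np.linspace(-24.0, -18.0, 100)
boost = observed_lf(M, mu, p_mu) / schechter_mag(M)
```

Because the intrinsic LF falls exponentially beyond $M^\star$, even a tiny high-$\mu$ tail dominates the observed counts at the bright end, while the faint end is essentially unchanged.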
The top left panel shows the LF model from \citet{Bouwens2014} which is an extrapolation from observations at $z < 10$. The top right panel shows the LF model from \citet{Finkelstein2014} which is an extrapolation from observations at $4 < z < 8$. The bottom left panel shows the model developed by \citet{Munoz2012} which follows the evolution of the halo mass function, and includes dust attenuation. The bottom right panel is a model from \citet{Behroozi2015} constructed from a comparison of the specific star formation rate to the specific halo mass accretion rate, and including dust models from \citet{Charlot2000}. The four models have significantly different behaviors at the bright end. While the \citet{Bouwens2014} model has by construction a bright end that is very similar to that measured at lower redshifts, the \citet{Munoz2012} model has a very shallow bright end, and the \citet{Finkelstein2014} and \citet{Behroozi2015} models are in-between. As a result, the effects of magnification bias (which are stronger for the steeper LF) are very different: negligible in the \citet{Munoz2012} case and appreciable in the three other cases. However, the bright end of the \citet{Munoz2012} model is the easiest one to test observationally, within reach of a James Webb Space Telescope medium depth, medium width survey \citep[e.g. JWST MD;][]{Windhorst2006}. Except in the case of a very shallow bright end, we do not expect the magnification bias to be significant in our upcoming BoRG $z\sim 9,10$ survey (HST Cycle 22, PI Trenti). In all cases, it is clear that surveys covering $>100$ deg$^2$, e.g. Euclid and WFIRST, should find many bright $z > 8$ LBGs. We expect the observed high-redshift galaxy samples will be dominated by magnification bias in these surveys. We predict almost all $z\sim8$ sources in Euclid will have been strongly lensed.
The framework developed in this work will be crucial for determining the intrinsic luminosity of high-redshift sources found in such surveys. Our results confirm the suggestion by \citet{Wyithe2011} that magnification bias will be important to probe the bright end of the LF at high redshift. However, we find that the magnitude of the effect is less pronounced than in that study, owing mostly to our accounting for the redshift evolution of the deflector population. \begin{figure*}[!h] \centering{ \includegraphics[width=0.49\textwidth]{LF-Bouwens2014.pdf} \includegraphics[width=0.49\textwidth]{LF-Finkelstein2014.pdf} \includegraphics[width=0.49\textwidth]{LF-Munoz2012.pdf} \includegraphics[width=0.49\textwidth]{LF-Behroozi2015.pdf} } \caption{Predicted observed LFs for $z\geq 8$ redshifts. For $z=8$ we use the Schechter LF from \citet{Schmidt2014}, plotted as a thick black line. The white band indicates the error on the Schechter function parameters, and the thin black line is the extrapolation of the LF beyond the observational limit. We show the regions of magnitude and volume observable by current and future surveys: the total BoRG survey including the $z\sim 8$ survey described in Section~\ref{sec:data-borg} and the upcoming BoRG $z\sim 9,10$ survey (HST Cycle 22, PI: Trenti); the James Webb Space Telescope Medium Deep (JWST MD) survey \citep{Windhorst2006}; the Wide-Field Imaging Surveyor for High-Redshift Ultra-Deep Field (WISH UDF, http://wishmission.org/en/doc.html); the Wide-Field Infrared Survey Telescope High Latitude Survey (WFIRST HLS) \citep{Spergel2013} and the Euclid Wide Survey (WS) \citep{Laureijs2011}. As explained in the text, BoRG does not survey enough area to observe the rarest bright sources which are most affected by magnification bias, but future wide-field surveys will be dominated by this effect. {\bf (Top Left)} For $z>8$ we use the LF model from \citet{Bouwens2014} which is an extrapolation from $z\sim10$.
{\bf (Top Right)} For $z>8$ we extrapolate the evolution of the Schechter function parameters over $4<z<8$ from \citet{Finkelstein2014}. {\bf (Bottom Left)} For $z\geq 8$ we use the LF model from \citet{Munoz2012} which is based on the evolution of the halo mass function. These do not exhibit the sharp cut-off at the bright-end and are not affected by magnification bias. {\bf (Bottom Right)} For $z>8$ we use the LF evolution model from \citet{Behroozi2015}. The dashed lines indicate the intrinsic LFs; the solid lines are the observed LFs including the magnification bias calculated using \Eq{eqn:theory-magbias_mod-LF}.} \label{fig:results_LF-zevol} \end{figure*} \section{Summary and Conclusion}\label{sec:conc} We have introduced a systematic way to account for the magnification bias in estimations of high-redshift LFs. The method involves estimating the probability density function for weak lensing magnification along a given line-of-sight by comparison with results from the reconstruction of simulated halo data, and by estimating the strong and intermediate lensing magnification PDF of dropouts due to massive deflector galaxies in close proximity to the dropout. We applied this method to estimate the $z\sim8$ LF from the 38 BoRG Y-band dropouts and 59 fainter dropouts from \citet{Bouwens2011}. Our main results are summarized as follows: \begin{enumerate}[(a)] \item The probability of a BoRG $z\sim8$ dropout being multiply imaged is $\sim 3-15$\%, increasing with limiting magnitude. This is consistent with finding two strongly-lensed dropouts in the BoRG survey: the candidate system presented in \citet{Barone-Nugent2013a}, and the additional strongly-lensed candidate dropout in this paper. We also find three dropouts which may experience significant magnification without multiple imaging, consistent with our expectations.
\item We extended the Bayesian formalism for the estimation of the LF parameters presented by \citet{Schmidt2014} to account for the magnification bias. This involves marginalizing over the magnification PDFs for strong and weak lensing effects. The inferred Schechter function parameters are: \begin{itemize} \item[] $M^\star = -19.85^{+0.30}_{-0.35}$, \item[] $\alpha = -1.72^{+0.30}_{-0.29}$, \item[] $\log_{10} \Psi^\star [\textrm{Mpc}^{-3}] = -3.00^{+0.23}_{-0.31}$, \end{itemize} These values do not differ significantly from estimates not accounting for the magnification bias. \item Thus magnification bias cannot be an explanation for the apparent flattening of the bright-end of the LF recently observed by \citet{Bowler2014b,Bowler2014a} and \citet{Finkelstein2014}. \item The $z\sim8$ LF appears significantly magnified for extremely bright galaxies ($M_\textsc{uv} < -22$). Though current surveys have not observed such rare, luminous galaxies, future wide-field surveys will probe this region. For surveys $> 100$ deg$^2$, e.g. WFIRST, Euclid, we predict that samples of $z\gtrsim8$ galaxies will be dominated by magnification bias. \item Magnification bias will be a useful tool to distinguish between high-redshift LF evolution models. In particular it could help determine whether the LF transitions from a Schechter form to a power-law form at high redshift, indicating significant changes in the astrophysical properties of those galaxies. \end{enumerate} \acknowledgments We thank Joey Mu\~{n}oz for useful discussions and providing his LF evolution model; Peter Behroozi for providing his LF evolution model; Sirio Belli for providing photometry of the galaxies described in \citet{Belli2014,Belli2014a}; and Stefan Hilbert for useful comments regarding the weak lensing simulations. This work was supported by the HST BoRG grants GO-11700, 12572, and 12905. 
This paper is based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute. This work made use of the freely available Pangloss code, written by Tom Collett and Phil Marshall. The Millennium Simulation databases used in this paper are publicly available through the German Astrophysical Virtual Observatory. \begin{appendix} \section{Bayesian Framework for Estimating the Luminosity Function}\label{app:bayes} We use Bayesian statistics to relate the prior probability that the $z \sim 8$ dropouts are galaxies drawn from an LF with Schechter parameters $\theta = (\alpha, \; L^\star, \; \Psi^\star)$ to the posterior probability of these parameters given the candidates' observed luminosities in the J-band and their non-detection in the V-band. The posterior probability is given by: \begin{equation} \label{eqn:bayes} p(\theta \,| \,L_\textrm{J,obs}, I_V =0) \propto p(\theta) \times p(L_\textrm{J,obs}, I_V = 0 \,|\, \theta) \end{equation} where the last term is the likelihood and $p(\theta)$ is the prior on the LF parameters. We will assume uniform priors on $\alpha$ and $\log_{10} L^\star$. We can expand the expression for the posterior: \begin{eqnarray} \label{eqn:bayes-post} p(\theta \,|\, L_\textrm{J,obs},I_\textrm{V}=0) \propto \; p(\theta)&&\; C^{N_z}_{n_z} \; \prod_{l}^\mathcal{C} \left[1- A_l/A_\textrm{sky}\; p(I=1|\theta) \right]^{N_z-c_{lz}} \times \; \prod_i^{n_z} p(L_{\textrm{J,obs},i}\,|\,\theta) \nonumber \\ \times&& \; C^{N_c}_{n_c} \; \prod_{l}^\mathcal{C} \left[1- A_l/A_\textrm{sky}\; p(I=1|\theta) \right]^{N_c-c_{lc}} \times \; \prod_i^{n_c} p(L_{\textrm{J,obs},i}\,|\,\theta) \end{eqnarray} where the $C^a_b$ terms are binomial coefficients which correctly model the distribution of source counts. $N_z$ and $N_c$ are the number of high-redshift sources given the intrinsic LF and the number of potential contaminants in the Universe, respectively.
We will assume a uniform prior on $\log_{10} N_z$. In the observed sample the numbers of high-redshift sources and contaminants are given by $n_z$ and $n_c$, respectively. The total number of galaxies in the observed sample, $n_t$, is given by their sum. We take the product over $\mathcal{C}$ individual observed fields, where $c_l$ represents the number of galaxies in the $l$'th field, with $n_t$ also given by the sum of $c_l$ over all of the fields. The fraction of the sky covered by the $l$'th field is given by $A_l/A_\textrm{sky}$. The contamination fraction in each field, $f_l$, is set at the fiducial value of 42\% \citep{Schmidt2014,Bradley2012} for the BoRG sources; the contamination fraction for the fainter HUDF/ERS sources \citep{Bouwens2011} is included in the selection function (see below). The last term in \Eq{eqn:bayes-post} is the likelihood for the $i$'th object in the sample. In \citet{Schmidt2014} this was expressed as: \begin{eqnarray} \label{eqn:bayes-likelihood-kasper} p(L_\textrm{J,obs}\,|\,\theta) &=& \; \int_0^\infty p(L_\textrm{J,obs} \,|\, L_\textrm{J,true}) \; p(L_\textrm{J,true} \,|\, \theta )\; dL_\textrm{J,true} \\ &=& \; \int_0^\infty \mathcal{N}(L_\textrm{J,obs} \,|\, L_\textrm{J,true}, \, \delta L_\textrm{J,field})\; \textrm{gamma}(L_\textrm{J,true} \,|\, \alpha,L^\star) \; dL_\textrm{J,true} \; \nonumber \end{eqnarray} where we use $p(L\,|\,\theta)\propto \frac{\Psi(L, \theta)}{\Psi^{\star}}$ (see Equation~(1) of \citet{Kelly2008}). The function gamma$(L_\textrm{J,true} \,|\, \alpha,L^\star)$ is related to the Schechter LF (\Eq{eqn:theory_LF-sch}) as $\textrm{gamma}(L \,|\, \alpha,L^\star) = \frac{\Psi(L)}{\Psi^\star\Gamma(\alpha+1)}$.
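The relation $\textrm{gamma}(L \,|\, \alpha,L^\star) = \Psi(L)/[\Psi^\star\Gamma(\alpha+1)]$ quoted above states that the normalized Schechter form is exactly a gamma-distribution density with shape $\alpha+1$ and scale $L^\star$. This can be checked numerically (Python sketch; the slope $\alpha=-0.5$ is chosen so the density is normalizable without a luminosity cut, unlike the inferred $\alpha\approx-1.7$, and all values are illustrative):

```python
import numpy as np
from scipy.special import gamma as Gamma
from scipy.stats import gamma as gamma_dist

def schechter(L, alpha, L_star, psi_star=1.0):
    """Schechter LF: Psi(L) = (Psi*/L*) (L/L*)^alpha exp(-L/L*)."""
    return (psi_star / L_star) * (L / L_star) ** alpha * np.exp(-L / L_star)

# gamma(L | alpha, L*) = Psi(L) / (Psi* Gamma(alpha+1)) should equal the
# gamma-distribution pdf with shape alpha + 1 and scale L*
alpha, L_star = -0.5, 2.0
L = np.linspace(0.05, 20.0, 500)
lhs = schechter(L, alpha, L_star) / Gamma(alpha + 1.0)
rhs = gamma_dist.pdf(L, a=alpha + 1.0, scale=L_star)
```

The two curves agree to machine precision, which is why the likelihood integrals over $L_\textrm{J,true}$ can be written in terms of standard gamma-function identities.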
\begin{equation} \label{eqn:bayes-errgauss} \mathcal{N}(L_\textrm{J,obs} \,|\, L_\textrm{J,true}, \, \delta L_\textrm{J,field}) = \frac{1}{\delta L_\textrm{J,field}\sqrt{2\pi}} \exp\left[-\frac{(L_\textrm{J,obs}-L_\textrm{J,true})^2}{2\, \delta L_\textrm{J,field}^2} \right] \end{equation} is the likelihood of the observed luminosity given the true luminosity, assuming a Gaussian measurement error, with $\delta L_\textrm{J,field}$ being the median photometric error in the J-band in the given field. In order to include the effects of the magnification bias, we must integrate over the nuisance parameter $L_\textrm{J,mag}$, which represents the luminosity of an object in the J-band, magnified above its true luminosity. Including this, \Eq{eqn:bayes-likelihood-kasper} becomes: \begin{equation} \label{eqn:bayes-Lobs-like} p(L_\textrm{J,obs}\,|\,\theta) = \int \int p(L_\textrm{J,obs}\,|\,L_\textrm{J,mag})\; p(L_\textrm{J,mag}\,|\,L_\textrm{J,true})\; p(L_\textrm{J,true}\,|\,\theta) \; dL_\textrm{J,true} \; dL_\textrm{J,mag} \end{equation} where $p(L_\textrm{J,obs}\,|\,L_\textrm{J,mag})$ is now the term with Gaussian measurement errors similar to \Eq{eqn:bayes-errgauss}, given that we make observations of magnified luminosities: \begin{equation} \label{eqn:bayes-errgauss2} \mathcal{N}(L_\textrm{J,obs} \,|\, L_\textrm{J,mag}, \, \delta L_\textrm{J,field}) = \frac{1}{\delta L_\textrm{J,field}\sqrt{2\pi}} \exp\left[-\frac{(L_\textrm{J,obs}-L_\textrm{J,mag})^2}{2\, \delta L_\textrm{J,field}^2} \right] \end{equation} To find the probability of a magnified luminosity given the true luminosity, $p(L_\textrm{J,mag}\,|\,L_\textrm{J,true})$, we must integrate over the full magnification probability density: \begin{eqnarray} \label{eqn:bayes-LmagLtrue} p(L_\textrm{J,mag}\,|\,L_\textrm{J,true}) &=& \int p(L_\textrm{J,mag} \,|\, \mu, L_\textrm{J,true} )\; p(\mu) \; d\mu \end{eqnarray} We can marginalize over $L_\textrm{J,mag}$ in the first part of \Eq{eqn:bayes-Lobs-like}: \begin{eqnarray}
\label{eqn:bayes-f} p(L_\textrm{J,obs} \,|\, L_\textrm{J,true}) &=& \int p(L_\textrm{J,obs}\,|\,L_\textrm{J,mag}) \; p(L_\textrm{J,mag}\,|\,L_\textrm{J,true})\; dL_\textrm{J,mag} \nonumber \\ &=& \int \int p(L_\textrm{J,mag}\,|\,\mu, L_\textrm{J,true} )\; p(\mu)\; \mathcal{N}(L_\textrm{J,obs}\,|\,L_\textrm{J,mag},\,\delta L_\textrm{J,field})\; d\mu \; dL_\textrm{J,mag} \nonumber \\ &=& \int \int \delta(L_\textrm{J,mag} - \mu L_\textrm{J,true} )\; p(\mu) \; \mathcal{N}(L_\textrm{J,obs}\,|\,L_\textrm{J,mag},\,\delta L_\textrm{J,field})\; d\mu \; dL_\textrm{J,mag} \nonumber \\ &=& \int p(\mu) \; \mathcal{N}(L_\textrm{J,obs}\,|\,\mu L_\textrm{J,true}, \, \delta L_\textrm{J,field}) \; d\mu \nonumber \\ &=& \int p(\mu) \frac{1}{\delta L_\textrm{J,field}\sqrt{2\pi}} \exp \left[-\frac{\left(L_\textrm{J,obs} - \mu L_\textrm{J,true} \right)^2 }{2\, \delta L^2_\textrm{J,field}} \right] \; d\mu \end{eqnarray} Here we have used the Dirac delta function $\delta(L_\textrm{J,mag} - \mu L_\textrm{J,true} )$ to map true luminosities to magnified luminosities.
To make computation of \Eq{eqn:bayes-post} feasible, we wish to integrate \Eq{eqn:bayes-f} analytically and evaluate the remaining $\mu$ integral in closed form. We therefore fit the magnification PDFs as a normalized linear combination of Gaussian terms with coefficients $\beta_i$, centered on $\overline{\mu}_\textrm{i,mag}$, with standard deviations $\sigma_\textrm{i,mag}$: \begin{equation} \label{eqn:bayes-pmu-Gauss} p(\mu) = \; \sum_i^n \beta_i \frac{1}{\sigma_\textrm{i,mag}\sqrt{2\pi}} \exp \left[-\frac{(\mu - \overline{\mu}_\textrm{i,mag})^2}{2\, \sigma^2_\textrm{i,mag}}\right] \end{equation} \Eq{eqn:bayes-f} can then be integrated analytically: \begin{equation} \label{eqn:bayes-f-Gauss} p(L_\textrm{J,obs} \,|\, L_\textrm{J,true}) = \; \sum_i^n \frac{\beta_i}{\sqrt{2\pi}} \frac{1}{\sqrt{\sigma^2_\textrm{i,mag} L^2_\textrm{J,true} + \delta L^2_\textrm{J,field}}} \exp \left[-\frac{(L_\textrm{J,obs} - \overline{\mu}_\textrm{i,mag} L_\textrm{J,true})^2}{2 \; (\sigma^2_\textrm{i,mag} L^2_\textrm{J,true} + \delta L^2_\textrm{J,field})} \right] \end{equation} As solid angle is also magnified in gravitational lensing, we must divide the measured field area $A_l$ by the average magnification in each field, $\overline{\mu}_\textrm{l}$. If $\overline{\mu}_\textrm{l} > 1$ the fields we observe appear larger than their true sizes.
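The closed form of \Eq{eqn:bayes-f-Gauss} can be checked against direct numerical integration of \Eq{eqn:bayes-f}: each mixture component of $p(\mu)$ convolves with the Gaussian measurement error into a Gaussian of variance $\sigma^2_\textrm{i,mag} L^2_\textrm{J,true} + \delta L^2_\textrm{J,field}$. The Python sketch below uses arbitrary test values for the mixture parameters:

```python
import numpy as np

trapz = getattr(np, "trapezoid", np.trapz)  # NumPy 2.x renamed trapz

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def p_obs_closed(L_obs, L_true, dL, betas, mus, sigs):
    """Closed form: each component is a Gaussian centred on mu_i L_true
    with variance sigma_i^2 L_true^2 + dL^2."""
    var = sigs ** 2 * L_true ** 2 + dL ** 2
    return np.sum(betas * np.exp(-0.5 * (L_obs - mus * L_true) ** 2 / var)
                  / np.sqrt(2.0 * np.pi * var))

def p_obs_numeric(L_obs, L_true, dL, betas, mus, sigs):
    """Direct integral of p(mu) N(L_obs | mu L_true, dL) over mu."""
    mu = np.linspace(mus.min() - 8 * sigs.max(), mus.max() + 8 * sigs.max(), 20001)
    p_mu = sum(b * gauss(mu, m, s) for b, m, s in zip(betas, mus, sigs))
    return trapz(p_mu * gauss(L_obs, mu * L_true, dL), mu)

betas = np.array([0.5, 0.3, 0.2])            # arbitrary test mixture
mus = np.array([1.0, 1.1, 1.3])
sigs = np.array([0.05, 0.10, 0.20])
closed = p_obs_closed(2.4, 2.0, 0.3, betas, mus, sigs)
numeric = p_obs_numeric(2.4, 2.0, 0.3, betas, mus, sigs)
```

The agreement between the two evaluations is what makes the likelihood in the posterior cheap enough to sample with MCMC.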
We can therefore express \Eq{eqn:bayes-post} as (see \citet{Schmidt2014} for details): \begin{eqnarray}\label{eqn:bayes-margpost} p(\theta \,|\, L_\textrm{J,obs},I_\textrm{V}=0) \propto \; &p&(\theta)\; \times \; C^{N_z}_{(1-f)n_t} C^{\frac{f}{1-f} N_z}_{f n_t} \; \nonumber \\ &\times& \; \prod_{l}^\mathcal{C} \left[1- \frac{A_l}{\overline{\mu}_l A_\textrm{sky}}\; \int_0^\infty \int_0^\infty dL_\textrm{J,true,$l$} \, dL_\textrm{J,obs,$l$} \; \mathcal{S}(L_\textrm{J,obs,l}) \; \mathcal{F}(L_\textrm{J,obs,$l$}, L_\textrm{J,true,$l$}) \right]^{\frac{1}{1-f_l}(N_z-(1-f_l)c_{l})} \nonumber \\ &\times& \; \prod_i^{n_t} \int_0^\infty \mathcal{F}(L_\textrm{J,obs,$i$}, L_\textrm{J,true,$i$}) \;dL_\textrm{J,true,$i$} \end{eqnarray} Here we have defined $\mathcal{F}(L_\textrm{J,obs}, L_\textrm{J,true}) = p(L_\textrm{J,obs} \,|\, L_\textrm{J,true})\; \textrm{gamma}(L_\textrm{J,true} \,|\, \alpha ,L^\star) $ and included the selection function $\mathcal{S}(L_\textrm{J,obs})$. The selection function estimates the completeness of the source selection and has been obtained for each individual BoRG field, as explained in \citet{Oesch2012,Bradley2012} and \citet{Schmidt2014}. Thus, \Eq{eqn:bayes-margpost} is the posterior probability distribution for a sample of $n_t$ binomially distributed objects, assumed to have an intrinsic Schechter LF of the form shown in \Eq{eqn:theory_LF-sch}. The observed luminosity of each object is related to its true luminosity via a magnification PDF and an assumed Gaussian error distribution. \end{appendix} \bibliographystyle{apj}
\section{INTRODUCTION} The detection of faint sources in the far-IR can be greatly affected by the amount and structure of the background radiation. The main source of background radiation in the far-IR is the smooth component of the Galactic emission, known as cirrus emission. The amount of emission manifests itself as photon noise whose fluctuations follow Poisson statistics. In addition, any brightness fluctuation at scales below the beam size could cause confusion with real point sources. The cirrus emission was discovered by the Infrared Astronomy Satellite (\textit{IRAS}) \cite{low84}, and is thought to be due to radiatively heated interstellar dust in irregular clouds with a wide range of spatial scales. The cirrus emission peaks at far-IR wavelengths but was detected in all four \textit{IRAS} bands at 12, 25, 60, and 100 $\mu$m (Helou \& Beichman 1990, hereafter HB90). The brightness of cirrus emission depends upon the Galactic latitude and is significant for wavelengths longer than 60 $\mu$m. The cirrus emission, which is the main source of background radiation in the far-IR, causes an uncertainty in the determination of source fluxes because its brightness varies from place to place. The accurate determination of observational detection limits therefore requires a knowledge of the cirrus emission as a function of position on the sky. The other important factor affecting source detection is the source confusion, which mainly depends upon the telescope beam size and the source distribution itself. The effects resulting from a combination of the sky confusion and the source confusion will be discussed in depth in a forthcoming paper \shortcite{jeong04c}, and we concentrate on the effect of sky confusion in the present paper. There have been realistic estimations of the sky confusion from observational data from \textit{IRAS} and the Infrared Space Observatory (\textit{ISO}) (Gautier et al. 1992; HB90; Herbstmeier et al. 1998; Kiss et al. 2001).
However, the resolution of the data from \textit{IRAS} and \textit{ISO} is not sufficient for application to the larger missions planned for the future. Many valuable data in the far-IR wavelength range will become available within or around this decade from a multitude of IR space projects such as \textit{Spitzer} \cite{gall03}, \textit{ASTRO-F} \cite{mura98,shib00,naka01,pearson04}, the Herschel Space Observatory (\textit{HSO}) \cite{pilb03,poglit03} and the Space Infrared Telescope for Cosmology and Astrophysics (\textit{SPICA}) \cite{naka04}. Since these instruments will observe the sky with high sensitivity and high angular resolution, it is necessary to understand the factors determining their detection limits. The purpose of the present paper is to investigate the effects of cirrus emission on the detection of faint point sources in highly sensitive future infrared observations. Based on the measured power spectrum and the spectral energy distribution models of the dust emission over the entire sky, we generate dust maps with higher spatial resolution in various relevant wavelength bands by extrapolating the power spectrum to small scales. This paper is organized as follows. In Section \ref{sec:sky_conf}, we briefly describe the sky confusion noise due to sky brightness fluctuations. In Section \ref{sec:gen_dmap}, the high angular resolution realization of Galactic dust emission in various IR bands is presented. Based upon the specifications of each IR mission, we estimate the sky confusion noise by using a simple fluctuation analysis in Section \ref{sec:stat_analy_scn}. We compare the detection limits estimated from the fluctuation analysis with the results of photometry on realistically simulated data in Section \ref{sec:scn_phot}. Our conclusions are summarised in Section \ref{sec:summary}.
\section{CONFUSION DUE TO SKY FLUCTUATION}\label{sec:sky_conf} \begin{figure} \centering \centerline{ \psfig{figure=fig-01.ps,height=2.5cm} } \caption{Schematic outline of the reference aperture configuration for two symmetrically placed circular apertures (Gautier et al. 1992).} \label{fig_ref_aper} \end{figure} Measuring the brightness of sources involves subtracting the sky background derived from a well-defined reference. Fluctuations in the surface brightness of extended structure on scales similar to the resolution of the telescope and instrument beam can produce spurious events that are easily mistaken for genuine point sources. This is because source detection is usually accomplished simply from the difference in signal between the on-source position and some background position. Therefore the sky confusion noise due to the sky brightness fluctuations, $N(\theta)$, is defined as (HB90; Gautier et al. 1992): \begin{equation} N(\theta) = \Omega \sqrt{S(\theta)}, \label{eqn_noise} \end{equation} where $\Omega$ is the solid angle of the measuring aperture, $\theta$ is the angular separation between the target and reference sky positions, and $S(\theta)$ is the second-order structure function, defined as \cite{gautier92}: \begin{equation} S(\theta) = \left\langle \left | I(x) - \frac{I(x - \theta) + I(x + \theta)}{2} \right |^2 \right\rangle _x , \label{eqn_strno} \end{equation} where $I$ is the sky brightness, $x$ is the location of the target, and $\langle~ \rangle$ represents the average taken over the whole map. For the configuration of two symmetrically placed reference apertures, see Fig. \ref{fig_ref_aper}. Although the zodiacal emission is the main background source at the short-wavelength end of the far-IR range in low ecliptic latitude regions, it does not contribute to the measured fluctuations because the zodiacal light is generally smooth on scales smaller than the typical resolution of IR observations \cite{reach95,kelsall98}.
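The structure function of equation (\ref{eqn_strno}) translates directly into code. The following minimal one-dimensional sketch (pure Python; the test profiles and the unit solid angle are illustrative, not taken from the analysis of this paper) evaluates $S(\theta)$ and $N(\theta)$:

```python
import math

def structure_function(I, sep):
    """Second-order structure function S(theta) of eq. (2):
    average of |I(x) - (I(x - theta) + I(x + theta))/2|^2 over x."""
    diffs = [(I[x] - 0.5 * (I[x - sep] + I[x + sep])) ** 2
             for x in range(sep, len(I) - sep)]
    return sum(diffs) / len(diffs)

def sky_confusion_noise(I, sep, omega):
    """Sky confusion noise N(theta) = Omega * sqrt(S(theta)) of eq. (1)."""
    return omega * math.sqrt(structure_function(I, sep))

# A linear brightness gradient is removed exactly by the two symmetric
# reference apertures, so it contributes no confusion noise:
ramp = [2 * x + 3 for x in range(100)]
print(sky_confusion_noise(ramp, sep=5, omega=1.0))  # -> 0.0
```

Note that any linear background gradient cancels exactly in the symmetric two-aperture configuration, so only curvature in the background contributes to $S(\theta)$.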
From the analysis of the \textit{ISO} data, \'{A}brah\'{a}m et al. \shortcite{abra97} searched for brightness fluctuations in the zodiacal light at 25 $\mu$m in 5 fields of $\sim$~0.5$^\circ$~$\times$~0.5$^\circ$ at low, intermediate, and high ecliptic latitudes. They estimated an upper limit to the fluctuations of 0.2 per cent of the total brightness level for an aperture of 3$^\prime$ diameter. This amount of fluctuation would not cause any significant noise. Therefore, the sky confusion noise is mainly related to the spatial properties of the cirrus. In many cases, the power spectrum of the dust emission can be expressed as a simple power law. Using the \textit{IRAS} data at 100 $\mu$m, Gautier et al. \shortcite{gautier92} computed the power spectrum $P$ of the spatial fluctuations of cirrus emission as a function of spatial frequency $k$, for angles between 4$^\prime$ and 400$^\prime$: \begin{equation} P = P_0 \left( \frac{k}{k_0} \right)^{\alpha} = P_0 \left( \frac{d_0}{d} \right)^{\alpha}, \label{eqn_ps} \end{equation} where $d$ represents the angular scale corresponding to the angular frequency ($k = \frac{2\pi}{d}$). The subscript 0 on $k$ and $d$ denotes a reference scale, $P_0$ is the power at $k=k_0$, and $\alpha$ is the index of the power spectrum. Since the second-order structure function is proportional to the power spectrum representing the spatial structure of cirrus, the sky confusion noise $N$ on a scale $d$ corresponding to the width of the measurement aperture scales as: \begin{equation} N \propto \left( \frac{d}{d_0} \right)^{1- \frac{\alpha}{2}} \cdot P_0^{\frac{1}{2}}. \label{eqn_rel_strps} \end{equation} HB90 extended the work of Gautier et al. \shortcite{gautier92} at $\lambda = 100~\mu$m to estimate the sky confusion at all wavelengths, using the empirical relationship $P_0 \propto \langle I_0 \rangle^3$ and $\alpha = -3$ from Gautier et al. \shortcite{gautier92}.
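The exponents of the approximation that follows can be recovered from equation (\ref{eqn_rel_strps}) combined with the empirical relations $P_0 \propto \langle I_0 \rangle^3$ and $\alpha = -3$, together with the usual assumption that the aperture scale is diffraction limited, $d \propto \lambda/D_t$. A small bookkeeping sketch (pure Python):

```python
alpha = -3.0                      # cirrus power-spectrum index (Gautier et al. 1992)

# eq. (4): N ~ d^(1 - alpha/2) * P0^(1/2)
d_exponent = 1.0 - alpha / 2.0    # exponent of the aperture scale d
p0_exponent = 0.5                 # exponent of P0

# empirical relation P0 ~ <I0>^3  =>  N ~ <I0>^(3 * 1/2)
brightness_exponent = 3.0 * p0_exponent

# diffraction-limited beam: d ~ lambda / D_t, so the d-exponent carries
# over to lambda (positive) and to D_t (negative)
print(d_exponent)           # 2.5: gives lambda^2.5 and D_t^-2.5
print(brightness_exponent)  # 1.5: gives <I>^1.5
```

These are exactly the three exponents appearing in the HB90 approximation below.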
They found an approximation for the cirrus confusion noise as follows (hereafter the HB90 formula): \begin{equation} N = \zeta \left( \frac{\lambda}{100 ~\mu \rm m} \right)^{2.5} \left( \frac{D_t}{1 ~\rm m} \right)^{-2.5}\left( \frac{\langle I_{\lambda} \rangle}{1 ~\rm MJy\, sr^{-1}} \right)^{1.5} {\rm mJy}, \label{eqn_strn_hb} \end{equation} where $\zeta$ is a constant, $\lambda$ the wavelength of the measurement, $D_t$ the diameter of the telescope, and $\langle I_{\lambda} \rangle$ the mean brightness at the observation wavelength. They set the constant $\zeta$ to 0.3. This indicates that the sky confusion depends upon both the variation of the surface brightness in the background structure and the resolution of the telescope. Consequently, the noise becomes less significant for larger aperture sizes. \section{GENERATION OF CIRRUS MAP}\label{sec:gen_dmap} In order to investigate the sky confusion for present and upcoming infrared space missions at high resolution, we need information on the behavior of cirrus emission at very small scales. Since the observationally available data have rather low resolution, we need to add a high-resolution component. In this section, we describe the method of extending the low-resolution data to high resolution. For the observational low-resolution data, we used the all-sky 100 $\mu$m dust map generated from the \textit{IRAS} and \textit{COBE} data by Schlegel, Finkbeiner \& Davis (1998; hereafter SFD98). \subsection{Fluctuations at Higher Spatial Resolution} \subsubsection{Measured Power Spectrum} \begin{figure} \centering \centerline{ \psfig{figure=fig-02.ps,height=6.5cm}} \caption{Measured power spectrum of dust emission in the dust map of SFD98 (Schlegel, Finkbeiner \& Davis 1998). The four curves represent four patches selected in the Northern and the Southern Galactic sky at $b = |50|^{\circ}$.} \label{fig_mps_SFD98} \end{figure} Fig.
\ref{fig_mps_SFD98} shows the measured power spectrum in the dust maps of SFD98 at a Galactic latitude of $b = |50|$ degrees. These power spectra are well fitted by power laws of index $-2.9$. However, the power drops at higher frequencies corresponding to the map resolution of $\sim$ 6.1 arcmin. This breakdown of the power spectrum is due to the large beam size of the \textit{IRAS} map. Although we can recover the small-scale fluctuation by the deconvolution of a point spread function (PSF), there is clearly some limitation. We need to generate a dust map including the contributions from small-scale fluctuations in order to carry out studies for present and planned future missions with high resolution ($<$~1 arcmin). We obtain such a high-resolution map by adding small-scale structure of cirrus emission to the low-resolution map of SFD98, assuming that the small-scale fluctuations also follow the estimated power spectrum with the same power-law index, as described above. \subsubsection{Small Scale of Fluctuations}\label{subsec:sim_de} The power, $P(k)$, is defined as the variance of the amplitude of the fluctuations: \begin{equation} P(k) \equiv \langle\mid\delta_k\mid^2 \rangle = \frac{1}{V}\int \xi(x) \frac{\sin(kx)}{kx} 4\pi x^2 dx, \label{eqn_powvar} \end{equation} where $\delta_k$ is the perturbation field, $\langle\mid\delta_k\mid^2\rangle$ is the variance of the fluctuation and $\xi(x)$ is the correlation function of the brightness field. We assume that the distribution of fluctuations is approximated by a random Gaussian process in which the Fourier components $\delta_k$ have random phases, so that the statistical properties of the distribution are fully described by the power spectrum $\mid \delta_k \mid^2$ \cite{peebles80}.
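A random-phase realization with a prescribed power-law spectrum can be generated by drawing a Gaussian-distributed amplitude and a random phase for each Fourier mode. A minimal one-dimensional sketch (pure Python; the grid size, normalization and seed are illustrative, not those used for the maps of this paper):

```python
import math, random

def power_law_field(n, alpha=-3.0, seed=1):
    """1-D random Gaussian field whose power spectrum follows P(k) ~ k^alpha.
    Each mode gets a Rayleigh-distributed amplitude (from two Gaussian
    draws) and a uniformly random phase."""
    rng = random.Random(seed)
    amp, phase = {}, {}
    for k in range(1, n // 2 + 1):          # k = 0 (the mean) is left out
        sigma = math.sqrt(0.5 * k ** alpha)
        amp[k] = math.hypot(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
        phase[k] = rng.uniform(0.0, 2.0 * math.pi)
    return [sum(amp[k] * math.cos(2.0 * math.pi * k * x / n + phase[k])
                for k in amp)
            for x in range(n)]

field = power_law_field(256)
mean = sum(field) / len(field)
print(abs(mean) < 1e-9)   # True: the k = 0 mode was excluded
```

Because the $k=0$ mode is excluded, the realization has zero mean; in our construction the mean level is supplied by the low-resolution SFD98 data.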
In this case, we can set each fluctuation within a finite grid in the frequency domain by drawing the amplitude of each mode from a random Gaussian process, considering the realization of a sample volume embedded within a larger finite volume \cite{gp90,park94,peacock99}. We assign Fourier amplitudes randomly within the above distribution in the finite volume and assign phases randomly between 0 and 2$\pi$. Since the field used in this simulation is small ($<$ 10 degrees), we can take the small-angle approximation and treat the patch of sky as flat \cite{white99}. In the flat-sky approximation, we obtain the power spectrum and generate a patch of the dust map in Cartesian coordinates. \begin{figure*} \centering \centerline{ \psfig{figure=fig-03a.ps, height=5cm} \psfig{figure=fig-03b.ps, height=5cm} \psfig{figure=fig-03c.ps, height=5cm} } \centerline{ \psfig{figure=fig-03d.ps, height=8cm} } \caption{Simulated dust emission map (upper panels) and profiles of the map (lower panel). The upper-left panel shows the simulated image assuming a power spectrum with a power index of $-3$. The upper-middle panel and the upper-right panel show only large-scale fluctuations and small-scale fluctuations, respectively. The lower panel shows the one-dimensional profile for a selected part of the upper-left and the upper-middle panels.} \label{fig_dmap_all} \end{figure*} We generate a realistic distribution of the Galactic emission in the following manner. The information on the large-scale structure is obtained from the low-resolution all-sky map of SFD98. We add the simulated small-scale structure to these basic data in the Fourier domain, where the power spectrum of the small-scale structure follows that of the large-scale structure. Fig. \ref{fig_dmap_all} shows our simulated emission map including small-scale fluctuations. The upper-left panel of Fig. \ref{fig_dmap_all} shows the simulated dust emission image corresponding to a power spectrum with $\alpha = -3$.
The upper-middle panel includes only the emission above the resolution of the dust map by SFD98, $\sim$ 6.1 arcmin (large-scale emission), while the upper-right panel shows the emission below the resolution of the SFD98 dust map (separated in the Fourier domain, i.e., small-scale emission). The lower panel shows the profiles for selected areas of the two images (the upper-left and upper-middle panels). We find in this simulation that the emission including the high-resolution, small-scale component (from the resolution of the SFD98 dust map down to a resolution of 4 arcsec) follows the trend of the large-scale emission (above the resolution of the SFD98 dust map). \begin{figure*} \centering \centerline{ \psfig{figure=fig-04a.ps, height=6cm} \psfig{figure=fig-04b.ps, height=6cm} } \centerline{ \psfig{figure=fig-04c.ps, height=9cm} } \caption{Patch of the SFD98 dust map and the regenerated patch (upper panels), and the estimated power spectrum (lower panel). The upper-left panel is a patch of the SFD98 dust map at a Galactic latitude of 50 degrees and the upper-right panel is the regenerated patch based upon the patch from the SFD98 dust map. The dashed and solid lines in the lower panel show the estimated power spectra of the upper-left and the upper-right panels, respectively. Note that the Nyquist frequency in the power spectrum of the upper-right panel is 7.5 arcmin$^{-1}$, but we only plot to $\sim$ 0.5 arcmin$^{-1}$. The dotted line shows the fit to the power spectrum below the spatial cutoff frequency.} \label{fig_dmap_gb50} \end{figure*} We obtain a patch of the dust map including small-scale fluctuations by summing the large-scale component of the SFD98 map and the small-scale component of the simulated emission in the Fourier domain. In this scheme of Fourier power spectrum analysis, the cutoff spatial frequency of the dust map by SFD98 is set to the Nyquist limit, i.e. half of the spatial frequency corresponding to the resolution of the SFD98 dust map.
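The summation of the two components in the Fourier domain can be sketched in one dimension as follows (pure Python with a plain DFT; the cutoff mode, spectral index and the anchoring of the extrapolated power at the cutoff are illustrative simplifications of the two-dimensional procedure):

```python
import cmath, math, random

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(c):
    n = len(c)
    return [sum(c[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def hybrid_map(low_res, k_cut, alpha=-3.0, seed=7):
    """Keep the measured Fourier modes with k <= k_cut and substitute
    extrapolated power-law modes with random phases for k > k_cut."""
    rng = random.Random(seed)
    n = len(low_res)
    coeff = dft(low_res)
    p_cut = abs(coeff[k_cut]) ** 2        # anchor extrapolation at the cutoff
    for k in range(k_cut + 1, n // 2 + 1):
        amp = math.sqrt(p_cut * (k / k_cut) ** alpha)
        if 2 * k == n:
            coeff[k] = amp                # Nyquist mode must be real
        else:
            coeff[k] = amp * cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi))
            coeff[n - k] = coeff[k].conjugate()   # Hermitian symmetry
    return [z.real for z in idft(coeff)]

# a smooth "low-resolution" profile with power at the cutoff mode k = 4
low_res = [3.0 + math.cos(2.0 * math.pi * 4 * j / 64) for j in range(64)]
hybrid = hybrid_map(low_res, k_cut=4)
print(abs(sum(hybrid) / len(hybrid) - 3.0) < 1e-6)  # True: mean (k = 0) preserved
```

Enforcing Hermitian symmetry keeps the hybrid map real, and all modes at or below the cutoff, including the mean, pass through unchanged.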
We use the power spectrum fitted below the Nyquist sampling limit in order to extend the power spectrum to higher spatial frequencies. Typically, the 2D power spectrum of an SFD98 dust map patch shows a cross along the spatial frequencies of the $x$ and $y$ axes when the centre of the map in the spatial domain is taken as spatial frequency 0. This cross is caused by the Fast Fourier Transform (FFT) algorithm, which makes an ``infinite pavement'' with the image prior to computing the Fourier transform \cite{miv02}. In order to preserve the information of the emission at the edges, we directly use the power at the spatial frequencies of the $x$ and $y$ axes, and extrapolate the power at other spatial frequencies (above the cutoff spatial frequency) according to the estimated power spectrum. In Fig. \ref{fig_dmap_gb50}, we show a patch of the dust map by SFD98 at a Galactic latitude of 50 degrees (upper left), a patch regenerated by extending the power spectrum (upper right) and the estimated power spectrum (lower panel). \subsection{Dust Emission at Other Wavelengths} Assuming that the spatial structure of the dust emission is independent of wavelength, we can obtain the dust map at wavelengths other than 100 $\mu$m by applying an appropriate model for the Spectral Energy Distribution (SED). Since the dust particles are small ($<$ 0.25 $\mu$m) compared with far-IR wavelengths, the opacity does not depend upon the details of the particle size distribution, but on the nature of the emitting material itself. In the far-IR, the opacity $\kappa_{\nu}$ generally follows a power law: \begin{equation} \kappa_{\nu} \propto \nu^{\beta} \label{eqn_em_law} \end{equation} with frequency $\nu$. The SED may be approximated by one-component or two-component models \cite{schl98,fink99}.
The dust temperature map is constructed from the \textit{COBE} Diffuse Infrared Background Experiment (\textit{DIRBE}) 100 $\mu$m and 240 $\mu$m data \cite{bogg92}, which was designed to search for the cosmic IR background radiation. For a one-component model, the emission $I_\nu$ at frequency $\nu$ can be expressed as \begin{equation} I_\nu = K_{100}^{-1}(\beta, T)\, I_{100} \, \frac{\nu^{\beta} B_{\nu} (T)}{\nu_0^{\beta} B_{\nu_0} (T)}, \label{eqn_one_em_nu} \end{equation} where $B_{\nu}(T)$ is the Planck function at temperature $T$, $I_{100}$ is the \textit{DIRBE}-calibrated 100 $\mu$m map, and $K_{100}^{-1}(\beta, T)$ is the colour correction factor for the \textit{DIRBE} 100 $\mu$m filter when observing a $\nu^{\beta} B_{\nu}(T)$ spectrum (\textit{DIRBE} Explanatory Supplement 1995). Although the generated temperature maps have relatively low resolution (1.3$^\circ$) compared with our simulated dust map patch, we interpolate them to small grid sizes ($<$ 10 arcsec). Taking the emissivity model with $\beta = 2$ \cite{dl84}, we can obtain the dust temperature from the \textit{DIRBE} 100 $\mu$m/240 $\mu$m emission ratio. \begin{figure} \centering \centerline{ \psfig{figure=fig-05.ps,height=6.5cm}} \caption{Comparison between the one-component dust model and the two-component dust model for one small patch. The dust emission of the two-component model in the wavelength range from 120 $\mu$m to 200 $\mu$m is slightly higher than that of the one-component model due to the dominant contribution by carbon grains.} \label{fig_dust_model} \end{figure} Based upon laboratory measurements, a multicomponent model for interstellar dust has been constructed by Pollack et al. \shortcite{poll94}. In order to resolve the inconsistency of the $\nu^2$ emissivity model with the 100 $-$ 2100 GHz (3000 $-$ 143 $\mu$m) emission, Finkbeiner et al.
\shortcite{fink99} used a two-component model, in which diverse grain species dominate the emission at different frequencies, in order to fit the data of the \textit{COBE} Far Infrared Absolute Spectrophotometer (FIRAS). Assuming that each component of the dust has a power-law emissivity over the FIRAS range, Finkbeiner et al. \shortcite{fink99} constructed the emission $I_\nu$ in the multicomponent model: \begin{equation} I_\nu = \frac{\sum_i ~f_i ~Q_i(\nu) ~B_{\nu}(T_i)}{\sum_i ~f_i ~Q_i(\nu_0) ~B_{\nu_0}(T_i) ~K_{100}(\beta_i, T_i)} \, I_{100}, \label{eqn_mul_em_nu} \end{equation} where $f_i$ is a normalization factor for the $i$-th grain component, $T_i$ is the temperature of component $i$, $K_{100}$ is the \textit{DIRBE} colour-correction factor and $I_{100}$ is the SFD98 100 $\mu$m flux in the \textit{DIRBE} filter. The emission efficiency $Q_i(\nu)$ is the ratio of the emission cross section to the geometrical cross section of the grain component $i$. In order to obtain the temperature of each component, we further need the effective absorption opacity defined by \begin{equation} \kappa_i^* = {\int_0^\infty \kappa_i^{\rm abs} J_{\rm ISRF} (\nu) d\nu\over \int_0^\infty J_{\rm ISRF} (\nu) d\nu}, \end{equation} where $\kappa_i^{\rm abs}$ is the absorption opacity of the $i$-th component, and $J_{\rm ISRF}$ is the mean intensity of the interstellar radiation field. Finkbeiner et al. \shortcite{fink99} assumed that the normalization factors do not vary with location and that the optical properties of the dust grains are independent of grain size. The emission efficiency factor $Q_i$ in the far-IR is further assumed to follow a power law with different indices ($\beta$) for the different dust species. In the present work, we adopted the `best-fitting' two-component model of Finkbeiner et al.
\shortcite{fink99}: $\beta_1 = 1.67$, $\beta_2 = 2.70$, $f_1=0.0363$, $f_2=0.9637$, and $q_1/q_2 =13.0$, where $q_i=\kappa_i^{\rm abs}(\nu_0)/\kappa_i^*$ represents the ratio of the far-IR emission cross section to the UV/optical absorption cross section. The reference frequency $\nu_0$ corresponds to a wavelength of 100 $\mu$m. If we further assume that the interstellar radiation field has a constant spectrum, the temperature of each component can be uniquely determined from the far-IR spectrum represented by the \textit{DIRBE} 100 $\mu$m/240 $\mu$m ratio. The two-component model fits all the FIRAS data over the entire high-latitude sky to an accuracy of $\sim$ 15 per cent. In Fig. \ref{fig_dust_model}, we compare the dust emission for the one-component and two-component dust models [see Schlegel et al. \shortcite{schl98}; Finkbeiner et al. \shortcite{fink99}]. The two-component model agrees better with the FIRAS data at wavelengths longer than 100 $\mu$m, where the dust emission estimated from the one-component model is significantly lower than the estimate from the two-component model. In both models, the contribution of small grains, which produces an excess below 100 $\mu$m, is not considered. Since there is no significant difference between the models below 100 $\mu$m, while the dust emission of the two-component model is slightly higher than that of the one-component model at wavelengths from 120 to 200 $\mu$m, we use the two-component model in our calculations. Through a PSF convolution at each wavelength and a wavelength integration over a 5 $\mu$m wavelength grid, we obtain the high-resolution dust map in the other bands. \section{FLUCTUATION ANALYSIS FOR SKY CONFUSION NOISE}\label{sec:stat_analy_scn} The parameters affecting the sky confusion noise depend mainly upon the mean brightness, the spatial structure of the cirrus, and the observing wavelength, as seen in equation (\ref{eqn_strn_hb}).
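Equation (\ref{eqn_strn_hb}) is simple to evaluate; the following sketch (pure Python) reproduces the entries of Table \ref{tab_const_HB} from the apertures and wavelengths of Table \ref{tab_inst_para}, with the mean brightness fixed at 1 MJy~sr$^{-1}$ as in that table:

```python
def hb90_noise(lam_um, d_m, i_mean=1.0, zeta=0.3):
    """HB90 cirrus confusion noise estimate, eq. (5), in mJy.
    lam_um: wavelength [um]; d_m: telescope aperture [m];
    i_mean: mean brightness [MJy/sr]."""
    return zeta * (lam_um / 100.0) ** 2.5 * d_m ** -2.5 * i_mean ** 1.5

# (aperture [m], SW and LW wavelengths [um]) as listed in Table 1;
# SPICA shares the Herschel aperture, so its entries are identical
missions = {
    "ISO":      (0.60, 90, 170),
    "Spitzer":  (0.85, 70, 160),
    "ASTRO-F":  (0.67, 75, 140),
    "Herschel": (3.50, 70, 160),
}
for name, (d, sw, lw) in missions.items():
    print(name, round(hb90_noise(sw, d), 4), round(hb90_noise(lw, d), 4))
```

The strong $D_t^{-2.5}$ dependence is what drives the tiny predicted values for the 3.5 m \textit{Herschel} and \textit{SPICA} telescopes.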
In Table \ref{tab_inst_para}, we list the basic instrumental parameters of present and future IR space missions: the aperture of the telescope, the Full Width at Half Maximum (FWHM) of the beam profile and the pixel size of each detector. For comparison with previous studies \cite{herb98,kiss01}, we include the specifications for \textit{ISO}. We select a short wavelength band (SW) and a long wavelength band (LW) for each mission. \begin {table*} \centering \caption {Instrumental parameters for various space missions.} \label{tab_inst_para} \vspace{5pt} \begin{tabular}{@{}cccccccc} \hline\vspace{-5pt} \\ & Aperture & Wavelength \span\omit & FWHM $^a$\span\omit & Pixel size\span\omit \vspace{5pt} \\ & (meter) & ($\mu$m)\span\omit & (arcsec)\span\omit & (arcsec)\span\omit \vspace{5pt} \\ Space Mission & & SW & LW & SW & LW & SW & LW \vspace{5pt} \\\hline \vspace{-10pt} \\ \textit{ISO} $^b$ & 0.6 & 90 & 170 & 31.8 & 60 & 46 & 92 \vspace{5pt} \\ \textit{Spitzer} $^c$ & 0.85 & 70 & 160 & 16.7 & 35.2 & 9.84 & 16 \vspace{5pt} \\ \textit{ASTRO-F} $^d$ & 0.67 & 75 & 140 & 23 & 44 & 26.8 & 44.2 \vspace{5pt} \\ \textit{Herschel} $^e$ & 3.5 & 70 & 160 & 4.3 & 9.7 & 3.2 & 6.4 \vspace{5pt} \\ \textit{SPICA} & 3.5 & 70 & 160 & 4.3 & 9.7 & 1.8 & 3.6 \vspace{5pt} \\ \hline \end{tabular} \medskip \begin{flushleft} {\em $^a$} FWHM of the diffraction pattern. \\ {\em $^b$} Two ISOPHOT filters (C1\_90 in the SW band and C2\_170 in the LW band). \\ {\em $^c$} MIPS bands for the \textit{Spitzer} mission. \\ {\em $^d$} \textit{ASTRO-F/FIS} (Far Infrared Surveyor) has a WIDE-S band in the SW and a WIDE-L band in the LW. \\ {\em $^e$} PACS has a `blue' array at short wavelengths (60-85$\mu$m or 85-130$\mu$m) and a `red' array at long wavelengths (130-210$\mu$m). \end{flushleft} \end{table*} In order to examine the dependence of the sky confusion noise on the instrumental parameters, we list the sky confusion noise $N$ estimated from the HB90 formula for each mission considered in this work in Table \ref{tab_const_HB}.
As the aperture of the telescope becomes larger or the wavelength becomes shorter, the sky confusion noise $N$ becomes correspondingly smaller. In Section \ref{sec:gen_dmap}, we obtained the dust maps extended to high spatial resolution over a wide spectral range. With these simulated dust maps, we estimate the sky confusion noise for the various space mission projects. \begin {table} \centering \caption {Sky confusion noise estimated from the HB90 formula for each space mission. The instrumental parameters for each mission are given in Table \ref{tab_inst_para}. The mean brightness here is fixed to 1 MJy~sr$^{-1}$.} \label{tab_const_HB} \vspace{5pt} \begin{tabular}{@{}ccc} \hline\vspace{-5pt} \\ & $N$ (mJy) \span\omit \vspace{5pt} \\ Space Mission & SW & LW \vspace{5pt} \\\hline \vspace{-10pt} \\ \textit{ISO} & 0.83 & 4.05 \vspace{5pt} \\ \textit{Spitzer} & 0.18 & 1.46 \vspace{5pt} \\ \textit{ASTRO-F} & 0.40 & 1.89 \vspace{5pt} \\ \textit{Herschel} & 0.0054 & 0.042 \vspace{5pt} \\ \textit{SPICA} & 0.0054 & 0.042 \vspace{5pt} \\ \hline \end{tabular} \end{table} \subsection{Selected Regions} \begin{figure} \centering \centerline{ \psfig{figure=fig-06a.ps, height=3.5cm} \psfig{figure=fig-06b.ps, height=3.5cm} } \centerline{ \psfig{figure=fig-06c.ps, height=3.5cm} \psfig{figure=fig-06d.ps, height=3.5cm} } \caption{PSF-convolved patch of the dust map for each space mission: \textit{ISO} (upper-left), \textit{ASTRO-F} (upper-right), \textit{Spitzer} (lower-left), and \textit{Herschel/SPICA} (lower-right).} \label{fig_psfcv_dmap} \end{figure} \begin {table} \begin{minipage}{80mm} \centering \caption {Properties of the selected regions. The Galactic longitude of all patches is 0$^\circ$.
I$_0$ is the mean sky brightness, $\alpha$ is the power index of the power spectrum, and P$_0$ is the power estimated at 0.01 arcmin$^{-1}$ and 100 $\mu$m.} \vspace{10pt} \label{tab_prop_patch} \begin{tabular}{@{}cccccc} \hline \vspace{-10pt} \\ & I$_0$ $^a$\span\omit \span\omit & $\alpha$ $^b$ & $\log$ P$_0$ $^c$ \vspace{5pt} \\ & (MJy~sr$^{-1}$)\span\omit \span\omit & & (Jy$^2$~sr$^{-1}$) \vspace{5pt} \\ Region & 70$\mu$m & 100$\mu$m & 160$\mu$m & & \vspace{5pt} \\\hline \vspace{-10pt} \\ $b$=10$^{\circ}$ & 5.4 & 24.4 & 53.9 & -3.45$\pm$0.11 & 9.00$\pm$0.17 \vspace{5pt} \\ $b$=17$^{\circ}$ & 3.5 & 18.6 & 45.3 & -3.50$\pm$0.16 & 9.05$\pm$0.24 \vspace{5pt} \\ $b$=22$^{\circ}$ & 3.5 & 15.3 & 34.1 & -3.54$\pm$0.15 & 8.48$\pm$0.22 \vspace{5pt} \\ $b$=28$^{\circ}$ & 2.2 & 8.9 & 24.7 & -3.50$\pm$0.15 & 7.74$\pm$0.21 \vspace{5pt} \\ $b$=36$^{\circ}$ & 1.2 & 6.0 & 14.4 & -3.80$\pm$0.10 & 7.41$\pm$0.15 \vspace{5pt} \\ $b$=45$^{\circ}$ & 0.6 & 2.8 & 6.2 & -3.13$\pm$0.12 & 6.39$\pm$0.18 \vspace{5pt} \\ $b$=59$^{\circ}$ & 0.3 & 1.4 & 2.9 & -2.99$\pm$0.09 & 6.00$\pm$0.13 \vspace{5pt} \\ $b$=70$^{\circ}$ & 0.2 & 1.2 & 2.6 & -3.20$\pm$0.10 & 6.27$\pm$0.15 \vspace{5pt} \\ $b$=84$^{\circ}$ & 0.1 & 0.8 & 1.8 & -2.87$\pm$0.09 & 5.77$\pm$0.14 \vspace{5pt} \\ $b$=90$^{\circ}$ & 0.1 & 0.5 & 1.4 & -2.87$\pm$0.08 & 5.66$\pm$0.12 \vspace{5pt} \\ \hline \end{tabular} \end{minipage} \end{table} We generate the PSF-convolved patches of the dust map as a function of increasing Galactic latitude (decreasing sky brightness) from 0.3 MJy~sr$^{-1}$ to 25 MJy~sr$^{-1}$ at 100 $\mu$m, at a resolution of 1 arcsec, using the method explained in Section \ref{sec:gen_dmap}. The size of the simulated image is 1.3$^\circ$ $\times$ 1.3$^\circ$. For the PSF, we used an ideal circular-aperture Airy pattern corresponding to the aperture size of each telescope. In Fig. \ref{fig_psfcv_dmap}, we show the PSF-convolved small patches of the dust map (900$\arcsec$ $\times$ 900$\arcsec$) for each space mission.
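For an ideal circular aperture, the FWHM of the Airy diffraction pattern is approximately $1.03\,\lambda/D$. The sketch below (pure Python; the 1.03 prefactor is an approximation, and the FWHMs quoted in Table \ref{tab_inst_para} include instrument-specific effects) checks the SW-band beam sizes to within $\sim$5 per cent:

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def airy_fwhm_arcsec(lam_um, d_m):
    """Approximate FWHM of the Airy diffraction pattern (~1.03 lambda/D)."""
    return 1.03 * (lam_um * 1e-6) / d_m * RAD_TO_ARCSEC

# SW-band (wavelength [um], aperture [m], quoted FWHM [arcsec]) from Table 1;
# SPICA uses the same telescope as Herschel
quoted = {"ISO": (90, 0.60, 31.8), "Spitzer": (70, 0.85, 16.7),
          "ASTRO-F": (75, 0.67, 23.0), "Herschel": (70, 3.50, 4.3)}
for name, (lam, d, fwhm) in quoted.items():
    print(name, round(airy_fwhm_arcsec(lam, d), 1), "vs quoted", fwhm)
```

Only rough agreement is expected here, since the tabulated beams are not all strictly diffraction limited.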
As the aperture of the telescope becomes larger, smaller structures become visible. Since the cirrus emission generally depends upon the Galactic latitude, we select the patches as a function of the Galactic latitude. We list the properties of the selected regions at a Galactic longitude of 0$^\circ$, out of 50 patches, in Table \ref{tab_prop_patch}. The estimated power spectrum in Table \ref{tab_prop_patch} differs from patch to patch. In order to reflect the large-scale structure of the dust map and reduce the discrepancies of the power spectrum between adjacent patches, we use a large area around each patch ($\sim$ 2.5$^\circ$ $\times$ 2.5$^\circ$) in the measurement of the power spectrum. \subsection{Estimation of Sky Confusion Noise} \subsubsection{Contribution of Instrumental Noise}\label{subsec:inst_noise} In order to estimate the sky confusion noise, the structure function of the cirrus emission patch, obtained by measuring the sky brightness fluctuations, is widely used \cite{gautier92,herb98,kiss01}. The size of the measuring aperture is set to the FWHM of each beam profile if the detector pixel size is smaller than the FWHM of the beam profile. Since the sky confusion noise and the instrumental noise are statistically independent \cite{herb98,kiss01}, the measured noise $N_{\rm meas}$ is \begin{equation} N_{\rm meas}^2 = N^2 + \eta \cdot \sigma_{\rm inst}^2, \label{eqn_stat_strno} \end{equation} where $N$ is the sky confusion noise corresponding to 1$\sigma$, $\sigma_{\rm inst}$ is the instrumental noise, and $\eta$ is the contribution factor of the instrumental noise. The contribution factor $\eta$ is determined by the size of the measurement aperture and the separation (see equation \ref{eqn_strno} and Fig. \ref{fig_ref_aper}). \subsubsection{Comparison with Other Results} We estimate the sky confusion noise from the patches of the simulated sky map. In Fig.
\ref{fig_frac_em}, we plot the fractional area as a function of sky brightness over the whole sky to visualize the sky brightness distribution. \begin{figure} \centering \centerline{ \psfig{figure=fig-07.ps, height=6cm} } \caption{The fractional area as a function of sky brightness over the whole sky. Note that most of the sky has a brightness below 1 MJy~sr$^{-1}$ (SW) and 15 MJy~sr$^{-1}$ (LW). The contribution at the highest mean brightness comes from regions near the Galactic centre.} \label{fig_frac_em} \end{figure} Since we consider the sky confusion caused solely by the emission from cirrus structures, we do not include any contribution from the instrumental noise. In order to determine the dependence of the sky confusion noise on the separation, we estimated the sky confusion noise for a given mean brightness of the sky patch for each space mission (\textit{ISO}, \textit{Spitzer}, \textit{ASTRO-F}, and \textit{Herschel/SPICA}) by systematically varying the value of $s$ from 2 to 7, using equation (\ref{eqn_strno}), where the parameter $s$ is related to the separation through $\theta = sD$. Generally, a larger separation causes larger sky confusion noise because we may be estimating the fluctuations from different structures. In practical photometry, large separations are generally used, i.e., $\theta = sD$ with $s>2$ in the configuration of Fig. \ref{fig_ref_aper} \cite{kiss01,laur03}. As a reference, we take the estimate of the sky confusion noise with $s = 2.5$ for the comparison of the measured sky confusion with the photometric results given in Section \ref{sec:scn_phot}. In the source detection, the background estimation parameter plays the same role as the separation parameter. We found the optimal value for the background estimation parameter through the photometry (see Section \ref{sec:sub_source_det} for a detailed explanation).
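When instrumental noise is present, equation (\ref{eqn_stat_strno}) amounts to a quadrature subtraction before quoting the sky confusion noise. A minimal sketch (pure Python; the noise values and $\eta = 1$ are illustrative):

```python
import math

def sky_confusion(n_meas, sigma_inst, eta=1.0):
    """Recover the sky confusion noise N from the measured noise via
    eq. (10): N_meas^2 = N^2 + eta * sigma_inst^2."""
    n_sq = n_meas ** 2 - eta * sigma_inst ** 2
    if n_sq < 0.0:
        raise ValueError("instrumental noise dominates the measurement")
    return math.sqrt(n_sq)

# round trip: combine a 4 mJy sky confusion with 3 mJy instrumental noise
n_meas = math.hypot(4.0, 3.0)          # 5 mJy measured (eta = 1)
print(sky_confusion(n_meas, 3.0))      # -> 4.0
```

In our fluctuation analysis $\sigma_{\rm inst}$ is set to zero, so the measured noise is the sky confusion noise itself.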
In Figs \ref{fig_strn_iso} -- \ref{fig_strn_spica}, we present our estimates of the sky confusion noise for the \textit{ISO}, \textit{Spitzer}, \textit{ASTRO-F} and \textit{Herschel/SPICA} space missions, compared with the sky confusion noise predicted by the HB90 formula. For the \textit{ISO} results, the sky confusion noise with $s=2.5$ is overestimated by the HB90 formula for the dark fields, but underestimated for the bright fields (see Fig. \ref{fig_strn_iso}). With larger separations, e.g., $s=7$, the estimated confusion noise approaches the HB90 formula, although it is still overestimated for the dark fields. The same tendency is seen in other studies of the sky confusion noise measured from \textit{ISO} observations \cite{herb98,kiss01}. The measured sky confusion noise for the \textit{Spitzer} and \textit{Herschel/SPICA} missions is much lower than the predictions of HB90 except for the dark fields (see Figs \ref{fig_strn_sirtf} and \ref{fig_strn_spica}). For comparison with the empirical relation between $P_0$ and $I_0$ of Gautier et al. \shortcite{gautier92}, we present our estimated $P_0$ in Fig. \ref{fig_rel_p0b0}. The lower $P_0$ given by that relation in bright fields and the higher $P_0$ in dark fields could cause an underestimation of the sky confusion noise in the bright fields and an overestimation in the dark fields. Such inconsistencies also appear in other regions of the sky. By fitting our estimates of $P_0$, we obtained a new relation between $P_0$ and $I_0$. The HB90 formula assumed a wavelength dependence only through the beam size. However, although the cirrus structure is generally preserved at other wavelengths, the empirical relation should be scaled according to the variation of the cirrus brightness with wavelength, i.e., the cirrus spectral energy distribution.
Therefore, in order to apply our empirical formula to other wavelength bands, we need an additional correction. For this correction, we used the ratio of the mean brightness at the two wavelengths, e.g., $I_{160\mu\rm m}$/$I_{100\mu\rm m}$ $\sim$ 2 (see Table \ref{tab_prop_patch}). For comparison with the sky confusion noise estimated for the \textit{ISO} mission, we plot the HB90 formula with our empirical relation applied (see the thick dotted line in Fig. \ref{fig_strn_iso}). Although our formula resolves the discrepancies to some extent, disagreements remain, especially with the results for the higher resolution missions. The HB90 formula was obtained from the analysis of the low resolution \textit{IRAS} data at 100 $\mu$m and assumed a constant power index for the cirrus power spectrum. In the case of the high resolution missions, since the sky confusion becomes sensitive to the local structure rather than the large scale structure, the calculation of the sky confusion strongly depends upon the power spectrum estimated for each patch and the power at the scale length corresponding to the resolution of the detector. Therefore, we should carefully consider the combination of the resolution and the power spectrum of the cirrus in the estimation of the sky confusion noise. In addition, the larger discrepancy in the bright regions for the \textit{ASTRO-F} mission compared to the prediction from \textit{ISO} observations can be explained by the increase in spatial resolution, although the aperture sizes of the two telescopes are similar (see the specifications of the two space missions in Table \ref{tab_inst_para}). We conclude that the sky confusion level predicted by the \textit{IRAS} data, from which the HB90 formula is derived, is significantly overestimated in the case of the higher resolution missions.
\begin{figure*} \centering \centerline{ \psfig{figure=fig-08a.ps, height=6.4cm} \psfig{figure=fig-08b.ps, height=6.4cm} } \caption{Estimated sky confusion noise for the \textit{ISO} mission. Upper and lower panels show the sky confusion noise at 90 $\mu$m and 170 $\mu$m, respectively. The dotted line shows the sky confusion noise predicted by HB90 (Helou \& Beichman 1990). The symbols show the estimated sky confusion noise obtained by averaging 5 patches with similar mean brightness. For comparison, we plot the estimated sky confusion noise for the larger separation of $s=7$. The circled symbol shows the sky confusion noise after correcting for the contribution from the CFIRB. The thick dotted line is the HB90 formula with our empirical relation applied.} \label{fig_strn_iso} \end{figure*} \begin{figure*} \centering \centerline{ \psfig{figure=fig-09a.ps, height=6.4cm} \psfig{figure=fig-09b.ps, height=6.4cm} } \caption{Estimated sky confusion noise for the \textit{ASTRO-F} mission. Left and right panels show the sky confusion noise in the WIDE-S band (75 $\mu$m) and WIDE-L band (140 $\mu$m), respectively. The symbols and lines are the same as in Fig. \ref{fig_strn_iso}.} \label{fig_strn_fis} \end{figure*} \begin{figure*} \centering \centerline{ \psfig{figure=fig-10a.ps, height=6.4cm} \psfig{figure=fig-10b.ps, height=6.4cm} } \caption{Estimated sky confusion noise for the \textit{Spitzer} mission. Left and right panels show the sky confusion noise for the MIPS 70 $\mu$m and 160 $\mu$m bands, respectively. The symbols and lines are the same as in Fig. \ref{fig_strn_iso}.} \label{fig_strn_sirtf} \end{figure*} \begin{figure*} \centering \centerline{ \psfig{figure=fig-11a.ps, height=6.4cm} \psfig{figure=fig-11b.ps, height=6.4cm} } \caption{Estimated sky confusion noise for the \textit{Herschel} and \textit{SPICA} missions. Left and right panels show the sky confusion noise at 70 $\mu$m and 160 $\mu$m, respectively. The symbols and lines are the same as in Fig.
\ref{fig_strn_iso}.} \label{fig_strn_spica} \end{figure*} \begin{figure} \centering \centerline{ \psfig{figure=fig-12.ps, height=6.5cm} } \caption{The relation between $P_0$ and $B_0^3$. The dotted line is the result from Gautier et al. (1992), the symbols are our estimated $P_0$, and the dashed line is the fit to our result. Values of $P_0$ expected from Gautier et al. (1992) are higher than those measured from our patches in bright fields.} \label{fig_rel_p0b0} \end{figure} Generally, the most important component superimposed on the extragalactic background in the far-IR is the cirrus emission. However, at high spatial frequencies the Cosmic Far-IR Background (CFIRB) fluctuations may become dominant \cite{schl98,gui97,juvela00}. Therefore, any estimation of the sky confusion noise using observational data in the dark fields should take into account the fluctuations due to the CFIRB. By fitting the sky confusion noise over the mean sky brightness, Kiss et al. \shortcite{kiss01} obtained CFIRB fluctuations of 7 $\pm$ 2 mJy at 90 $\mu$m and 15 $\pm$ 4 mJy at 170 $\mu$m. After correcting for the contribution of the CFIRB in the estimation of the sky confusion noise, we obtained results similar to those of Kiss et al. \shortcite{kiss01} in the dark fields (see the circled symbol with arrow in Fig. \ref{fig_strn_iso} at the mean brightness of $\sim$ 1.5 MJy~sr$^{-1}$). Since the CFIRB fluctuations strongly depend upon the extragalactic source count model, we will discuss this issue in greater detail in our forthcoming paper [Jeong et al. 2004c \shortcite{jeong04c}, in preparation]. \subsubsection{Sky Confusion Noise for Various Separations}\label{subsec:skyconf_sep} Kiss et al.
\shortcite{kiss01} analyzed the dependence of the sky confusion noise on separation using a simple power-law expression for the \textit{ISO} observational data: \begin{equation} N(q \theta_{\rm min}) = N(\theta_{\rm min}) \times q^{\gamma}, \label{eqn_dep_sep} \end{equation} where $q>1$ and $\gamma$ is a constant for a specific map. We obtained $\gamma$ for all patches and show it as a function of mean brightness for each mission in Fig. \ref{fig_dep_sep}. As the sky becomes brighter, $\gamma$ becomes larger due to the prominent structure of the cirrus emission. Kiss et al. \shortcite{kiss01} obtained a much lower $\gamma$ in dark regions, but their values of $\gamma$ in other regions are similar to our results. This result can be explained by two possible effects: one is that the cirrus structure observed by \textit{ISO} is blurred by the instrumental noise in most of the dark regions, and the other is that many extragalactic point sources below the detection limit, i.e. the CFIRB fluctuations, can mask the cirrus structure. If we only consider the component due to the cirrus in the dark fields, the values of $\gamma$ in the dark regions obtained by Kiss et al. \shortcite{kiss01} are similar to our results. In most of the bright regions, the scatter of $\gamma$ shows a similar trend, probably caused by the relatively large differences in the spatial structure of each region. At the same mean brightness, the values of $\gamma$ in the SW band are larger than those in the LW band because spatial structures are more prominent in the SW band. In addition, since we use simulated data, the variation of $\gamma$ at the two wavelengths has a similar shape. For the \textit{Herschel} and \textit{SPICA} missions, our estimates show that $\gamma$ increases slowly and the error decreases compared with the other missions, because of their much higher resolution.
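Extracting $\gamma$ from equation (\ref{eqn_dep_sep}) amounts to a linear fit in log-log space, since $\log N(q\theta_{\rm min}) = \log N(\theta_{\rm min}) + \gamma \log q$. A minimal sketch on synthetic data (the function name and the numbers are illustrative, not our pipeline):

```python
import numpy as np

def fit_gamma(q, noise):
    """Least-squares slope of log N(q*theta_min) versus log q.

    Under N(q*theta_min) = N(theta_min) * q**gamma, the slope of
    the log-log relation is gamma and the intercept recovers
    N(theta_min).
    """
    gamma, log_n0 = np.polyfit(np.log(q), np.log(noise), 1)
    return gamma, np.exp(log_n0)

# Synthetic example: gamma = 1.3, N(theta_min) = 5 mJy, 5% scatter.
rng = np.random.default_rng(1)
q = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
noise = 5.0 * q ** 1.3 * rng.normal(1.0, 0.05, q.size)
gamma, n_min = fit_gamma(q, noise)
print(gamma, n_min)
```

With the small multiplicative scatter used here, the fitted slope recovers the input $\gamma$ to within a few per cent.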
\begin{figure*} \centering \centerline{ \psfig{figure=fig-13a.ps, height=5.5cm} \psfig{figure=fig-13b.ps, height=5.5cm} } \centerline{ \psfig{figure=fig-13c.ps, height=5.5cm} \psfig{figure=fig-13d.ps, height=5.5cm} } \caption{Dependence of the sky confusion noise on separation for \textit{ISO}, \textit{ASTRO-F}, \textit{Spitzer}, and \textit{Herschel}/\textit{SPICA}. The dotted and dashed lines are fits to our estimates for the SW and LW bands, respectively. In the brighter regions, $\gamma$ has higher values than in the dark fields.} \label{fig_dep_sep} \end{figure*} \subsubsection{Effect of Power Index $\alpha$} In this study, we assume that the structure of the cirrus is independent of wavelength. However, recent papers have reported enhanced dust emissivity in some medium-to-high density clouds in the LW bands of the far-IR due to the presence of a cold dust component (T $\leq$ 15K) \cite{cam01,delbur03,step03}. This result implies that the cirrus structure can differ in the LW band. Kiss et al. \shortcite{kiss03} suggested that the power index of the power spectrum also depends upon both wavelength and surface brightness, due to the coexistence of dust components with various temperatures within the same field and cold extended emission features (usually, $-4.0 < \alpha < -2.0$). Using the assumption that the sky confusion noise is proportional to the scale length (see equation \ref{eqn_rel_strps}), we can estimate the sky confusion for different power indices. The ratio $\psi$ of the sky confusion noise with the power index $\alpha + \epsilon$ to that with the power index $\alpha$ can be defined as \begin{equation} \psi = \frac{N(\alpha + \epsilon)}{N(\alpha)}, \label{eqn_scn_index} \end{equation} where $\epsilon$ is the contribution to the power index from any other structure in the power spectrum.
In this calculation, we fix the power at the scale length of the resolution limit of the map ($\sim$ 6.1 arcmin) and the wavelength at 100 $\mu$m, under the assumption that the power on this scale is not affected by the extra components proposed by Kiss et al. \shortcite{kiss03}. Table \ref{tab_scaled_strn} lists the ratio of the sky confusion noise for different power indices for each space mission, covering the range of power indices reported for the cirrus emission. Since the fluctuation at smaller scales is more sensitive to the power index, the sky confusion noise depends much more strongly upon the power index for the space missions with higher resolutions. As seen in Table \ref{tab_prop_patch}, our estimated power indices in the bright regions ($\alpha$ $>$ 3.3) are somewhat higher than those in low density regions ($\alpha$ $<$ 2.8). From recent \textit{Spitzer} observations, Ingalls et al. \shortcite{ingalls04} obtained a power index of -3.5 at 70 $\mu$m in the Gum Nebula. Therefore, provided the variation in the power index is not too large, it does not severely affect the final sensitivity values. \begin {table} \centering \caption {Ratio $\psi$ of the sky confusion noise for the different power indices.} \label{tab_scaled_strn} \vspace{5pt} \begin{tabular}{@{}ccccc} \hline\vspace{-5pt} \\ & $\epsilon$~$^a$ = -1.0 \span\omit & $\epsilon$ = 1.0 \span\omit \vspace{5pt} \\ Space Mission & SW & LW & SW & LW \vspace{5pt} \\\hline \vspace{-10pt} \\ \textit{ISO} & 0.13 & 0.19 & 1.7 & 1.2 \vspace{5pt} \\ \textit{Spitzer} & 0.083 & 0.12 & 2.8 & 1.9 \vspace{5pt} \\ \textit{ASTRO-F} & 0.10 & 0.13 & 2.2 & 1.8 \vspace{5pt} \\ \textit{Herschel} & 0.041 & 0.061 & 5.6 & 3.8 \vspace{5pt} \\ \textit{SPICA} & 0.041 & 0.061 & 5.6 & 3.8 \vspace{5pt} \\ \hline \end{tabular} \medskip \begin{flushleft} {\em $^a$} contribution $\epsilon$ to the power index of the power spectrum.
\end{flushleft} \end{table} \section{PHOTOMETRIC MEASUREMENTS OF SKY CONFUSION NOISE}\label{sec:scn_phot} In Section \ref{sec:stat_analy_scn}, we estimated the sky confusion noise by the fluctuation analysis. The sky confusion noise should affect the source detection efficiency, causing a deterioration in the detection limit. In this section, we obtain the measured sky confusion noise by carrying out photometry on realistically simulated data. \subsection{Source Distribution}\label{sec:source_dist} The distribution of sources per unit area on the sky can be described as a function of the flux density and depends upon both the spatial distribution of the sources and their luminosity function. For simplicity, we assume the number of sources whose flux is greater than $F$, $n(>F)$, is a power-law function of $F$, \begin{equation} n(>F) = n_0 (> F_0) \left({F\over F_0}\right)^{-\omega}, \label{eqn_sdist} \end{equation} for $F_{\rm min} < F < F_{\rm max}$, where $n_0$ and $F_0$ are normalization constants for the number of sources and for the flux, respectively, and $F_{\rm min}$ and $F_{\rm max}$ are the minimum and maximum fluxes in the source distribution. The source confusion caused by the overlapping of adjacent sources mainly depends upon the source distribution and the beam profile \cite{cond74,fran89}. Source confusion becomes important as the observation sensitivity increases, since there are usually more faint sources than brighter ones. Currently favoured source count models require strong evolution in order to fit the \textit{ISO} data from mid- to far-IR, the SCUBA data at sub-mm wavelengths, and the Cosmic Infrared Background (CIRB) at 170 $\mu$m \cite{oliver97,smail97,kawara98,hugh98,aussel99,puget99,esf00,serjeant00,lagache00,mat00,scott02}. In our study, we use a simple source distribution for the purpose of investigating only the effect of the sky confusion.
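Drawing fluxes consistent with the cumulative counts of equation (\ref{eqn_sdist}) can be done by inverse-transform sampling of the truncated power law between $F_{\rm min}$ and $F_{\rm max}$. A minimal sketch (the function name and parameter values are illustrative, not the ones used for the simulated images):

```python
import numpy as np

def sample_fluxes(n, f_min, f_max, omega, rng=None):
    """Draw n fluxes whose cumulative counts n(>F) scale as F**-omega.

    Inverse-transform sampling of the power law truncated between
    f_min and f_max (omega > 0): F = (a + u*(b - a))**(-1/omega)
    with u uniform on [0, 1), a = f_min**-omega, b = f_max**-omega.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    u = rng.uniform(size=n)
    a, b = f_min ** -omega, f_max ** -omega
    return (a + u * (b - a)) ** (-1.0 / omega)

# With omega = 1.0 the counts brighter than F should roughly halve
# when F doubles (exactly so far from the upper cut-off).
fluxes = sample_fluxes(100_000, f_min=0.1, f_max=100.0, omega=1.0)
print((fluxes > 1.0).sum() / (fluxes > 2.0).sum())  # ~2
```

A two-slope distribution like the one adopted below can be built by drawing from two such samplers on either side of the boundary flux, with the normalizations matched at the break.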
We will discuss the source confusion with more realistic source count models in a forthcoming paper. In order to avoid contributions from source confusion itself, we assume a rather sparse distribution of sources. However, the estimate of the detection limit becomes rather uncertain if there are too few sources. Therefore, we have employed a model for $n(F)$ utilizing a distribution with two slopes, $\omega$ = 1.0 for the bright flux region and $\omega$ = 0.3 for the faint flux region (see Fig. \ref{fig_source_dist}), in order to derive an accurate value for the sky confusion limits without the source confusion effect. Since the sky confusion noise in the SW bands is much lower than that in the LW bands, we set different normalization constants and minimum flux values $F_{\rm min}$, i.e., $F_{\rm min}$ = 0.001 mJy and $n_0 (> F_0)$ = 3 in the SW band, and $F_{\rm min}$ = 0.1 mJy and $n_0 (> F_0)$ = 10 in the LW band, where $F_0$ is set to be 100 mJy (see Fig. \ref{fig_source_dist}). \begin{figure} \centering \centerline{ \psfig{figure=fig-14.ps, height=5cm} } \caption{Source distribution in the SW band and LW band. We use different slopes ($\omega$ = 1.0 and $\omega$ = 0.3) for the power-law source distribution above and below the boundary flux of 10 mJy in order to reduce the effect of the source confusion.} \label{fig_source_dist} \end{figure} \subsection{Source Detection}\label{sec:sub_source_det} We generate images including point sources convolved with the beam profile of each mission, using the source distribution described in Section \ref{sec:source_dist}. Fig. \ref{fig_sim_img} shows the simulated images for the various missions considered. As the detector pixel and the beam profile become smaller, more sources and smaller structures in the cirrus emission appear.
\begin{figure*} \centering \centerline{ \psfig{figure=fig-15a.ps, height=6cm} \psfig{figure=fig-15b.ps, height=6cm} } \centerline{ \psfig{figure=fig-15c.ps, height=6cm} \psfig{figure=fig-15d.ps, height=6cm} } \caption{Simulated images including point sources in the LW band for the \textit{ISO} (upper-left), \textit{ASTRO-F} (upper-right), \textit{Spitzer} (lower-left), and \textit{Herschel}/\textit{SPICA} (lower-right) missions. The mean brightness of the cirrus background is 2 MJy~sr$^{-1}$ at 160 $\mu$m.} \label{fig_sim_img} \end{figure*} We carried out aperture photometry on the simulated images using the SExtractor software \textit{v}2.2.2 \cite{bert96}. Several parameters must be fixed to perform the photometry; the most influential are the size of the background mesh used to estimate the background level and the threshold for the source detection. In order to optimise the reliability of the detected sources and reduce the rate of false detections, we made trials varying these two parameters. Finally, we set the size of the background mesh to be 2.5 times the measuring aperture, and the detection threshold to 4$\sigma$. The final detection limit is determined by the minimum flux of the detected point sources. We found that the detection limits determined from the 4$\sigma$ criterion are consistent with 4 times the sky confusion noise measured from the fluctuation analysis. Note that our sky confusion noise estimated from the fluctuation analysis is a 1$\sigma$ fluctuation. In Fig. \ref{fig_strn_phot}, we compare the detection limit obtained by photometry with the sky confusion noise for each mission. For the \textit{ISO} and \textit{ASTRO-F} missions, the photometry gives relatively higher detection limits than the estimates from the fluctuation analysis. This trend results from the larger detector pixel size compared to the FWHM of the beam profile.
The large detector pixel size of the \textit{ISO} mission significantly degraded the performance of the detection of point sources (e.g., the left panels in Fig. \ref{fig_strn_phot}). \begin{figure*} \centering \centerline{ \psfig{figure=fig-16a.ps, height=5.5cm} \psfig{figure=fig-16b.ps, height=5.5cm} } \centerline{ \psfig{figure=fig-16c.ps, height=5.5cm} \psfig{figure=fig-16d.ps, height=5.5cm} } \caption{Estimated detection limit by photometry. The figures show the detection limit and 4 times the sky confusion noise estimated from the fluctuation analysis for the \textit{ISO} and \textit{ASTRO-F} missions (left) and the \textit{Spitzer}, \textit{Herschel} and \textit{SPICA} missions (right). Upper and lower panels show the results for the SW band and LW band, respectively.} \label{fig_strn_phot} \end{figure*} \section{SUMMARY AND DISCUSSION}\label{sec:summary} Based on the observed 100 $\mu$m dust map and models of the dust spectrum, we generated high resolution background maps at wavelengths ranging from 50 to 200 $\mu$m. Using these simulated cirrus maps, we estimated the sky confusion noise for various IR space missions such as \textit{ISO}, \textit{Spitzer}, \textit{ASTRO-F}, \textit{Herschel} and \textit{SPICA}. Since we have observational results only from \textit{ISO}, we compared the results of our simulation with the \textit{ISO} data. We found that the sky confusion noise estimated with our simulated maps is consistent with the \textit{ISO} results. However, in the case of the \textit{ISO} observations, the sky confusion noise in the dark fields is more weakly dependent upon the beam separation parameter than in the bright fields. We conclude that this is because the instrumental noise dominates in the dark regions or, alternatively, the CFIRB fluctuation is more important.
We also found that the sky confusion predicted from the \textit{IRAS} data is significantly overestimated in the case of the large aperture telescopes, except for the dark fields. We have confirmed our results through a realistic simulation. We performed photometry on simulated images including point sources with a sparse source distribution in order to avoid the effects of confusion due to crowded point sources. The detection limits obtained from the photometric analysis agree with the sky confusion noise estimated using the fluctuation analysis, except for \textit{ISO} and \textit{ASTRO-F}. The discrepancies for these missions are due to the large detector pixel size compared to the FWHM of the beam. The mean brightness of the cirrus emission usually decreases with increasing Galactic latitude \cite{boul88}. In order to estimate the detection limits as a function of Galactic latitude, we derived a simple formula for each wavelength band. Because the cirrus emission is extremely strong near the Galactic centre, we excluded the Galactic latitudes $|b|<10^\circ$. Fig. \ref{fig_detlim_gb} shows the detection limits as a function of Galactic latitude. The detection limits for all missions appear to saturate beyond $b \sim 30^\circ$. \begin{figure} \centering \centerline{ \psfig{figure=fig-17.ps, width=8.5cm} } \vspace{25pt} \caption{Detection limits due to the Galactic cirrus as a function of Galactic latitude. The two lines plotted for each mission are for the SW band (lower line) and the LW band (upper line).} \label{fig_detlim_gb} \end{figure} Fig. \ref{fig_detlim_scn} summarises the final detection limits for point sources in mean and low sky brightness regions due to the Galactic cirrus. In addition, we also plot the currently estimated 5$\sigma$ detection limits for sources for each mission. These detection limits take into account only the instrumental noise. The instrumental noise for the \textit{ASTRO-F} mission is explained in detail in Jeong et al.
(2003; 2004a; 2004b). The integration time is 500 sec for the \textit{Spitzer} mission (\textit{Spitzer} Observer's Manual\footnote{Further information can be found at the following url: \it{http://ssc.spitzer.caltech.edu/mips/sens.html}}) and 1 hour for the \textit{Herschel} mission \cite{pilb03}. As shown in Fig. \ref{fig_detlim_scn}, the sky confusion almost reaches the detection limit in the LW band for the \textit{ASTRO-F} and \textit{Spitzer} missions. Although the sky confusion does not severely affect the detection limits of the \textit{Herschel} mission, it can affect the detection limit of \textit{SPICA}, because it will have a large aperture telescope cooled to very low temperatures in order to achieve exceptional sensitivity in the far-IR (see Nakagawa 2004 for detailed information on the \textit{SPICA} mission). \begin{figure} \centering \centerline{ \psfig{figure=fig-18.ps, width=8.5cm} } \vspace{25pt} \caption{Detection limits due to Galactic cirrus at mean and low sky brightness in each band. The mean sky brightness in the SW and LW bands is set to 1 MJy~sr$^{-1}$ and 15 MJy~sr$^{-1}$, respectively. The lower value of each detection limit corresponds to the detection limit at low sky brightness, usually at high Galactic latitudes. The symbols show the 5$\sigma$ sensitivity for the \textit{ASTRO-F}, \textit{Spitzer} and \textit{Herschel} missions without confusion, and the error bars correspond to the 1$\sigma$ sensitivity.} \label{fig_detlim_scn} \end{figure} \section*{Acknowledgment} This work was financially supported in part by the KOSEF Grant R14-2002-058-01000-0. Chris Pearson acknowledges a European Union Fellowship to Japan. We thank Kyung Sook Jeong for careful reading of the manuscript and fruitful suggestions.
\section{Experience with prototypes at future CTA sites} Atmospheric monitoring devices, such as the All-sky Cameras (ASC), the Sun/Moon Photometers and the FRAM robotic telescopes, have been gradually deployed since 2015 on the future sites of the Cherenkov Telescope Array (CTA) -- the CTA South site near Cerro Armazones, Chile and the CTA North site on La Palma, Canary Islands, Spain \cite{CTA}. This serves two purposes: to characterize the conditions at the sites in preparation for the CTA operational phase (see \cite{sitechar} for results), and to test the operation of the devices in realistic conditions, improve their reliability, develop maintenance procedures, and learn how to process the data obtained in the extremely clear atmosphere of the sites. The prototype operation at CTA South was interrupted by an intrusion in 12/2018, when all the solar power equipment was stolen and any remaining hardware had to be temporarily moved to storage. The devices will be installed again at a safer location and the operation will resume in 09/2019; the operation at CTA North is ongoing. The uptime of the devices at both sites is summarized in Fig~\ref{fig:uptime}, which indicates, for every day, whether at least some data have been taken by the given instrument. The causes of the observed downtimes and their impact on the future operation of the instruments within CTA are discussed for each device individually. \subsection{All-sky Cameras} The All-sky Camera system \cite{ASC} consists of a G2-4000 CCD camera by Moravian Instruments, equipped with photometric Johnson and UV filters and a Sigma 4.5 mm f/2.8 EX DC fish-eye lens, connected to a data acquisition computer. Initially the computer was an Intel Atom-based single-board computer running the embedded Windows XP system and Matlab code for data acquisition.
Due to several failures of the computers running with the ASCs at the Pierre Auger Observatory and at CTA South, and the unavailability of replacements, the acquisition system was replaced in 2017 by a Raspberry Pi 3 single-board computer running Linux and Python code. The ASC at the CTA North site has been smoothly collecting the data used to determine the cloudiness at the site most of the time. Two water leaks into the ASC housing occurred, in April 2018 and April 2019. These issues were caused by a degraded sealant, and a better silicone sealant was identified as a replacement. The operation of the ASC at the CTA South site was more problematic due to the off-grid operation of the device, which depends on solar power. Around April 2016, insufficient power in the single installed battery was detected and another battery was added, which stabilized the power for a while. Later in 2016, the computer died and was replaced after a few months. Issues with power started to appear again in 2017 and eventually the charger failed in September 2017. The charger was replaced in July 2018 and the Raspberry Pi was installed at the same time. However, the batteries had reached the end of their lifespan and could not be replaced at the time, so only very limited operation has been possible since then. Overall, even the few failures caused long downtimes due to the remote operation and the essential unavailability of local staff with spare parts. This will clearly not be the case during the CTA Observatory operation. \subsection{FRAM telescopes} The FRAM \cite{FRAM} is a robotic telescope, using an off-the-shelf astronomical equatorial mount (Paramount MYT for CTA South and 10Micron GM2000HPS for CTA North) to guide a Zeiss 135/2 lens attached to a Moravian Instruments G4-16000 large-format CCD camera, for the purpose of determining atmospheric extinction using wide-field stellar photometry \cite{FRAM}.
A significant part of the downtime observed at CTA South was due to interruptions of the internet connection to the remote site, as in the early stages the operating software was not suitable for completely unsupervised operations -- this has been gradually improved over time to the point where autonomous operation is now possible for many days. Two major hardware incidents were recorded with the FRAM at CTA South. In 09/2017, the protective roof did not fully close, due to a leak in the hydraulic system, which became loose over time -- this seems to be due to ``settling'' of the new installation and will be prevented by checking any newly installed or re-hosed FRAM once, one month after installation. The cooling of the CCD camera stopped working in 06/2018 due to damage to the camera mainboard. The vendor determined humidity as the cause of the issue and suggested shortening the maintenance intervals of the camera desiccant to half a year to prevent future issues. Both failures could have been easily solved within a day if there had been technical staff and spare parts on site, as foreseen during CTA operations. The CTA North FRAM experienced only small interruptions, as the reliable connection to the site provided by the MAGIC collaboration has been very helpful. At least some downtime can be explained by bad weather, and the rest mainly by tweaking of the drivers for the previously unused 10Micron mount. However, determining whether bad weather is fully responsible for a whole night of downtime is difficult from the system logs -- upon discovering this issue, we started keeping a daily hand-written log of operations for better assessment of prototype reliability. The operation of the FRAM telescopes under extremely clear conditions has also aided the development of improved methods for the determination of the vertical aerosol optical depth (VAOD) from FRAM data.
The mostly low and almost constant amount of aerosols at both sites allows the detection of instrumental artifacts, such as a population of ``outliers'' with VAOD about 0.05 higher than the surrounding points, which were traced to a rare error occurring when the telescope moves during part of the exposure. In a similar vein, the clarity of the data helped to find the source of an unexpected correlation of the data with the phase of the Moon, as reported in \cite{FRAM2}. \subsection{Sun/Moon Photometer} The Sun/Moon photometer CE318-T is a commercial device which measures atmospheric extinction using the Sun or the Moon as a light source. During the time of its deployment at both sites, it has worked reliably, with the exception of an incident at CTA South in 10/2017 when it ran out of space in its internal storage to record data. This problem can be circumvented by connecting the device to a computer; however, this has proven to be problematic, because in case of problems with such a connection, data may be lost altogether. As storing data only within the device does not allow monitoring of its status, the CTA North Photometer has been endowed with a module for data connection over the cellular network, which however is yet to be connected to the network. \begin{figure} \centering \includegraphics[width=0.99\textwidth]{comp.pdf} \caption{\label{fig:framphot} Comparison of FRAM and Moon Photometer VAOD measurements taken within 15 minutes of each other. Cuts on calibration constant stability and Moon phase were applied to the CTA South Photometer data. In each panel, a linear fit is plotted.} \end{figure} Despite the excellent track record of operation, the actual availability of high-quality data from CTA South is limited by unexpected fluctuations in the calibration constant of the device. This value is expected to be stable, with a gradual decay over time, but it varies significantly during some periods, rendering the data occasionally nonphysical (negative VAODs occur).
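The cross-check of Fig.~\ref{fig:framphot} amounts to pairing measurements taken within 15 minutes of each other and fitting a line through the pairs. A minimal sketch on purely synthetic series (all timestamps, values and names below are hypothetical, not actual instrument data): for two consistent instruments sampling the same slowly varying VAOD, the fitted slope should come out close to one.

```python
import numpy as np

def match_within(t_a, v_a, t_b, v_b, window=15 * 60):
    """Pair each point of series A with the nearest point of series B
    taken within `window` seconds (all times in seconds)."""
    pairs = []
    for ta, va in zip(t_a, v_a):
        i = int(np.argmin(np.abs(t_b - ta)))
        if abs(t_b[i] - ta) <= window:
            pairs.append((va, v_b[i]))
    return np.array(pairs)

def vaod_true(t):
    # Hypothetical slowly varying "true" VAOD over a night.
    return 0.03 + 0.01 * np.sin(t / 3600.0)

rng = np.random.default_rng(3)
t_fram = np.arange(0.0, 6 * 3600.0, 600.0)    # every 10 min
t_phot = np.arange(300.0, 6 * 3600.0, 900.0)  # every 15 min, offset
v_fram = vaod_true(t_fram) + rng.normal(0, 0.001, t_fram.size)
v_phot = vaod_true(t_phot) + rng.normal(0, 0.001, t_phot.size)

pairs = match_within(t_fram, v_fram, t_phot, v_phot)
slope, intercept = np.polyfit(pairs[:, 0], pairs[:, 1], 1)
print(slope)
```

Systematic deviations of the slope from unity, as seen for the CTA South Photometer, then point to calibration problems in one of the instruments.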
It is not yet clear whether the affected data can be recovered (this will be discussed with the vendor). The difference in behavior between the two Photometers can be seen when they are compared to FRAM data (Fig.~\ref{fig:framphot}): the CTA South Photometer shows significant deviations even after stringent cuts are applied on the quality of the calibration and on the Moon phase (as the data quality is worst for small lunar illuminations). \section{Operation schemes for the future CTA Observatory} In parallel to the progress in hardware and data analysis, the concepts for the integration of these devices into the daily operations of the future CTA Observatory are being actively developed. The aerosol information provided by the FRAM telescopes in concert with the Raman LIDARs to be installed at the sites will be used for data analysis and corrections, both \textit{online} and \textit{offline}. The ASCs will be paired with Ceilometers to provide 3D information on clouds for the purpose of intelligent scheduling. \subsection{Atmospheric calibration} In order to achieve the required level of accuracy for the \textit{offline} analysis, contemporaneous and only marginally simplified aerosol profiles must be used to simulate the CTA's instrument response function (IRF). Only for the \textit{online} analysis may faster, though less accurate, algorithms be used which correct only the effective areas and energy biases (see e.g.~\cite{Fruck:2015}). Figure~\ref{fig:atmocorrections} outlines the procedure to obtain average instrument response functions over a time interval within which the systematic error due to simplifications of the profile remains acceptable: \begin{enumerate} \item Measurements of the full profile are taken with the Raman LIDAR \cite{Raman} before and after an observation block. Between these, the FRAM follows the field-of-view of the CTA array and measures the integral AOD. At the same time, the CTA telescopes register the trigger rates.
\item The individual integral AOD measurements of detected stars are combined within suitable patches to produce AOD maps (e.g. using a Voronoi tessellation technique~\cite{Janecek:2017}). A possible stratospheric aerosol contribution must be directly subtracted at this point. The CTC (Cherenkov transparency coefficient) is calculated from the telescope trigger rates and the optical throughput calibration~\cite{muons}, according to the prescriptions outlined in~\cite{CTC}. \item The AOD maps are interpolated in time. Interlaced Raman LIDAR measurements may serve to divide the interpolated AODs into vertical bins and to re-calibrate their integral. \item These ``aerosol extinction hypercubes'' (with altitude, wavelength and time as additional dimensions) are then split into a slow component (due to quasi-stable aerosol layers, like the boundary layer or sometimes even clouds), and a fast one, which changes throughout a science observation run. \item The expected shifts in CTA performance from a given extinction hypercube are continuously confronted with those of the currently used (or foreseen) MC-generated IRFs, and a series of systematic errors is calculated and confronted with the CTA requirements. \item In case one of these systematic errors exceeds the allowed limit, a new ``Good Time Interval'' (GTI) is communicated to CTA, together with a suitable start time and an averaged extinction hypercube, to be used for a new MC simulation of the atmosphere. The average extinction cube (AEC) contains the vertical profile in one dimension, the wavelength dependency in a second dimension, and the aerosol extinction coefficient as a result of both. The AEC must be quality checked and confronted with alternative procedures, like the CTC.
\end{enumerate} \begin{figure}[t] \centering \includegraphics[width=0.99\textwidth]{AtmoCorrections.pdf} \caption{\label{fig:atmocorrections} Scheme for the determination of new GTIs and new MC simulated IRFs.} \end{figure} The critical part of this procedure is a robust estimate of the systematic error. For this purpose, MC simulation studies are required to determine the impact on the angular and energy resolution and bias of aerosols and clouds at different altitudes, and of cirrus clouds covering only parts of the FOV of a Cherenkov camera, when a uniform coverage of the CTA cameras by aerosols has been assumed in the simulations. Previous studies of the typical morphology of clouds and evolution time scales at the CTA sites are needed as input for these simulations. \subsection{Intelligent target selection} For the intelligent target selection, a robust prediction of the environmental conditions across the full observable sky, for at least the duration of the next scheduling block (i.e. $\sim$20~min), is necessary, but the accuracy required of that prediction is less stringent. This leads to the following operation scheme: \begin{enumerate} \item The ASC takes a picture of the full sky and converts the result into an atmospheric extinction map (AEM). \item A cloud recognition program applies a threshold to the AEM and decomposes the result into ellipses~(see e.g.~\cite{adam}), yielding the ASC Clouds List (CL). \item The Ceilometer receives the full list of possible observation targets in the short-term scheduler and picks those which are close enough to a cloud in the CL to possibly be affected by it. After discarding those clouds not associated with any CTA observation target, the Reduced ASC Clouds List (RCL) is obtained. The Ceilometer points to the center of each cloud in the RCL and takes an extinction profile. The mean altitude of each cloud is added to the RCL.
\item A now-cast of wind speed and direction at the relevant cloud altitudes is obtained from Global Data Assimilation Systems~\cite{munar} and added to the RCL. Using the RCL and historical knowledge, each cloud in the RCL gets propagated in time for a duration corresponding to the next scheduling block, and a probability is calculated for each associated CTA observation target to get covered by the cloud. \item In case the probability lies above a certain threshold, the expected observation parameters for the next scheduling block (reduction of accessible energy range, degradation of energy and pointing resolution and effective areas) are calculated. \item The list of targets with associated cloud coverage probabilities and performance degradation parameters is returned to the CTA scheduler for evaluation, before the next scheduling block is executed. \end{enumerate} \section*{Acknowledgements} We gratefully acknowledge financial support from the agencies and organizations listed here: \href{http://www.cta-observatory.org/consortium\_acknowledgments}{www.cta-observatory.org/consortium\_acknowledgments}, in particular by Ministry of Education, Youth and Sports of the Czech Republic (MEYS) under the projects MEYS LM2015046, LTT17006 and EU/MEYS CZ.02.1.01/0.0/0.0/16\_013/0001403 and European Structural and Investment Fund and MEYS (Project CoGraDS -- CZ.02.1.01/0.0/0.0/15\_003/0000437). The installation of the devices on La Palma would not be possible without the support from the MAGIC Collaboration. This work was conducted in the context of the CTA Consortium.
\section{GRIBOV NOISE} In ref.\ \cite{Gr} Gribov showed that, for non-abelian gauge theory, the standard gauge-fixing conditions used for perturbative calculations do not fix the gauge fields uniquely. The existence of these {\em Gribov copies} does not affect the results from perturbation theory, but their elimination could play a crucial role for non-perturbative features of these theories. In lattice gauge theories gauge fixing is, in principle, not required. However, because of asymptotic freedom, the continuum limit is the weak-coupling limit, and a weak-coupling expansion requires gauge fixing. Thus, one is led to consider gauge-dependent quantities on the lattice as well. Unfortunately gauge fixing on the lattice is afflicted by the same problem of Gribov copies encountered in the continuum case \cite{MPR}. In order to get rid of Gribov copies the physical configuration space has to be identified with the so-called {\em fundamental modular region} $\Lambda$, which is defined (in the continuum) as the set of {\em absolute} minima of the functional \cite{STSF} \begin{equation} E_{A}[ g ] \equiv \frac{1}{2}\,\sum_{\mu\mbox{,}\,a}\, \int\,d^{4}x\, \left\{\,\left[\,A^{(g)}\,\right]^{a}_{\mu}(x)\,\right\}^{2} \,\mbox{.} \label{eq:Econt} \end{equation} Similarly, on the lattice, we can eliminate Gribov copies by looking for the absolute minimum of the functional ${\cal E}_{U}[ g ]$ ({\em minimal Landau gauge}) \cite{Z2} \begin{equation} {\cal E}_{U}[ g ] \equiv \frac{1}{8\,V} \sum_{\mu\mbox{,}\, x} \, \mathop{\rm Tr}\nolimits \, \left[ \, 1\!\!\!\bot \, - \, U_{\mu}^{(g)}(x) \, \right] \label{eq:Etomin} \;\mbox{.} \end{equation} Given the appearance of Gribov copies in numerical studies, we need to understand their influence ({\em Gribov noise}) on the evaluation of gauge-dependent quantities.
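The lattice functional ${\cal E}_{U}[g]$ above is straightforward to evaluate numerically. The sketch below (a small $L^4$ lattice, SU(2) links stored as explicit $2\times 2$ complex matrices, with array conventions of our own choosing) is only a minimal reference implementation, not the production code used in the simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2(shape=()):
    # random SU(2) matrices, parametrised as a0*1 + i*(a . sigma) with |a| = 1;
    # normal sampling of the 4-vector a gives the uniform (Haar) measure
    a = rng.normal(size=shape + (4,))
    a /= np.linalg.norm(a, axis=-1, keepdims=True)
    u = np.empty(shape + (2, 2), dtype=complex)
    u[..., 0, 0] = a[..., 0] + 1j * a[..., 3]
    u[..., 0, 1] = a[..., 2] + 1j * a[..., 1]
    u[..., 1, 0] = -a[..., 2] + 1j * a[..., 1]
    u[..., 1, 1] = a[..., 0] - 1j * a[..., 3]
    return u

def gauge_functional(U, g):
    # E_U[g] = 1/(2 d V) sum_{mu,x} Re Tr[ 1 - g(x) U_mu(x) g^dag(x+mu) ],
    # which reduces to the 1/(8V) normalisation of the text for d = 4
    dims = g.ndim - 2                      # number of lattice directions
    V = np.prod(g.shape[:dims])
    E = 0.0
    for mu in range(dims):
        g_fwd = np.roll(g, -1, axis=mu)    # g(x + mu), periodic boundaries
        Ug = np.einsum('...ab,...bc,...dc->...ad', g, U[mu], g_fwd.conj())
        E += np.sum(2.0 - np.einsum('...aa->...', Ug).real)
    return E / (2 * dims * V)
```

With unit links and a unit gauge transform the functional vanishes; a random gauge transform of the unit configuration gives a value of order one, bounded by 2.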
To this end we compare the results for gluon and ghost propagators using two different averages \cite{paper1}: the average considering only the absolute minima (denoted by ``am''), which should give us the result in the minimal Landau gauge; and the average considering only the first gauge-fixed gauge copy generated for each configuration (denoted by ``fc''). The latter average is the result that we would obtain if Gribov noise were not considered. \begin{figure}[htb] \psfig{figure=plot_50_lattice.ps,height=2.8in} \vspace{-1.5cm} \caption{\protect\small Plot of the three-momentum-space gluon propagator $D(k)$ (``fc''-data), as a function of the square of the lattice momentum $p^{2}(k)$. Data correspond to $V = 16^{3}$ ($\Box$), $V = 32^{3}$ ($\Diamond$) and $V = 64^{3}$ ($\ast$), at $\beta = 5.0$. Error bars are one standard deviation. Averages are taken over $40$ gauge-fixed configurations.} \label{fig:gluon3d} \vspace{-0.5cm} \end{figure} \normalsize Our data \cite[Table 2]{paper1} show the absence of Gribov noise for the gluon propagator. In fact, data corresponding to the minimal Landau gauge (absolute minima) are in complete agreement, within statistical errors, with those obtained in a generic Landau gauge (average ``fc''). This happens even at $\beta = 0$, where the number of Gribov copies is very large and Gribov noise, if present, is more easily detectable. On the contrary, a nonzero Gribov noise for the ghost propagator can be clearly observed \cite[Table 3]{paper1}. In particular, data corresponding to the absolute minima (average ``am'') are consistently smaller than or equal to the corresponding ``fc''-data. This effect is small but clearly detectable for the values of $\beta$ in the strong-coupling region. (This was not observed at $\beta = 2.7$. However, at this value of $\beta$ almost no Gribov copies were produced, even for a lattice volume $V = 16^4$, and therefore we cannot expect a difference between the two sets of data.)
This result can be qualitatively explained \cite{paper1}. As for the infrared behavior of these two propagators, the data for the ghost propagator show a pole ``between'' the zeroth-order perturbative behavior $p^{-2}$ --- valid at large momenta --- and the $p^{-4}$ singularity predicted in \cite{Z2}, but in agreement with the pole $p^{-2(1+s)}$ ($0 < s < 1$) recently obtained in ref.\ \cite{newpaper}. For the gluon propagator the data show, in the {\bf strong-coupling} regime, a propagator decreasing as the momentum goes to zero. This anomalous behavior, predicted in \cite{Gr,Z2,Z1}, is still observable at $\beta = 1.6$, if large volumes are considered \cite{paper2}. This result is also observable in the {\bf scaling region} in the three-dimensional case, and in the limit of large lattice volume (see Fig.\ \ref{fig:gluon3d}). Finally, the behavior of the zero three-momentum-space gluon propagator is strongly affected by the zero-momentum modes of the gluon field \cite[Fig.\ 2]{paper1}, as predicted in \cite{Mitr}. \section{FOURIER ACCELERATION} In order to minimize the functional ${\cal E}_{U}[ g ]$ defined in eq.\ \reff{eq:Etomin}, and to reduce {\em critical slowing-down}, we can use the Fourier accelerated algorithm \cite{Fourier,gfix123}. With this algorithm the update is given by $g^{(new)}(x) \equiv R(x) \, g^{(old)}(x)$, where \begin{equation} R(x) \, \propto \, 1\!\!\!\bot - {\widehat F}^{-1}\left[ \, \frac{\alpha}{p^{2}(k)} \, {\widehat F} \left( \nabla \cdot A^{\left( g \right)} \right) \right](x) \;\mbox{.} \end{equation} Here ${\widehat F}$ is a Fourier transform, $\alpha$ is a tuning parameter, $p^{2}(k)$ is the square of the lattice momentum, and $\nabla \cdot A$ is the lattice divergence of the gluon field $A_{\mu}$. However, this algorithm is difficult to implement on parallel machines, due to the use of the fast Fourier transform (FFT).
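In the abelian (linearized) limit, a single Fourier-accelerated step with the optimal tuning enforces the gauge condition exactly, which makes the role of the $p^{-2}(k)$ kernel transparent. A minimal numpy sketch (a toy model with our own choice of forward/backward lattice differences, not the non-abelian update above):

```python
import numpy as np

def lattice_p2(shape):
    # squared lattice momentum: p^2(k) = sum_mu 4 sin^2(pi k_mu / N_mu)
    grids = np.meshgrid(*[np.arange(n) for n in shape], indexing='ij')
    return sum(4.0 * np.sin(np.pi * k / n) ** 2 for k, n in zip(grids, shape))

def divergence(A):
    # backward lattice divergence: (div A)(x) = sum_mu [A_mu(x) - A_mu(x - mu)]
    return sum(a - np.roll(a, 1, axis=mu) for mu, a in enumerate(A))

def landau_step(A):
    # one Fourier-accelerated update: solve Delta(theta) = -div(A) exactly in
    # momentum space, then gauge-transform A_mu(x) -> A_mu(x) + theta(x+mu) - theta(x)
    shape = A[0].shape
    p2 = lattice_p2(shape)
    p2[(0,) * len(shape)] = 1.0                # protect the zero mode
    theta_k = np.fft.fftn(divergence(A)) / p2
    theta_k[(0,) * len(shape)] = 0.0           # the gauge condition leaves it free
    theta = np.fft.ifftn(theta_k).real
    return [a + np.roll(theta, -1, axis=mu) - theta for mu, a in enumerate(A)]
```

After one step the lattice divergence vanishes to machine precision; in the non-abelian case the same kernel only damps all Fourier modes of $\nabla \cdot A$ at a uniform rate, so the step must be iterated.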
Let us notice that $ {\widehat F}^{-1}\,p^{-2}(k)\, {\widehat F} \,=\,\left(\,- \Delta\,\right)^{-1} $, where $\Delta$ is the lattice Laplacian operator. Thus, the FFT can be avoided by inverting $\Delta$ using an algorithm that requires the same computational work (i.e.\ $V \log N$), such as a multigrid (MG) algorithm with W cycle and piecewise-constant interpolation. At the same time, using MG, we can reduce the computational work with a good initial guess for the solution, and we can choose the accuracy of the solution. (With FFT the accuracy is fixed by the precision used in the numerical code.) We note that the tuning parameter $\alpha$ is usually fixed with an accuracy of a few percent, and thus the inversion of $\Delta$ should not require a very high accuracy either. We started our simulations on an IBM RS-6000/340 workstation. We tested different types of multigrid cycles: $\gamma= 0$ (Gauss-Seidel update), $\gamma= 1$ (V cycle) and $\gamma= 2$ (W cycle). We find that MG with $\gamma = 2$, two relaxation sweeps on each grid, a minimum of two full multigrid sweeps for each inversion of $\Delta$, and an accuracy of $10^{-6}$, is equivalent to an FFT algorithm \cite{future}.
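The recursive cycle structure ($\gamma = 1$ gives a V cycle, $\gamma = 2$ a W cycle) can be sketched for a one-dimensional Poisson problem; the smoother, transfer operators and boundary conditions below are simplified choices for illustration, not those of our production code:

```python
import numpy as np

def relax(u, f, h, sweeps):
    # Gauss-Seidel sweeps for -u'' = f with zero Dirichlet boundaries
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def mg_cycle(u, f, h, gamma=2, nu=2):
    # gamma = 1: V cycle, gamma = 2: W cycle; nu smoothing sweeps per level
    n = len(u) - 1
    if n <= 2:
        return relax(u, f, h, 50)              # coarsest grid: relax to convergence
    u = relax(u, f, h, nu)                     # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]  # full weighting
    ec = np.zeros_like(rc)
    for _ in range(gamma):                     # recursive coarse-grid correction
        ec = mg_cycle(ec, rc, 2 * h, gamma, nu)
    e = np.zeros_like(u)
    e[::2] = ec                                # prolongation (linear interpolation)
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return relax(u + e, f, h, nu)              # post-smoothing
```

A few W cycles reduce the residual by many orders of magnitude, with cost per cycle proportional to the number of grid points.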
\begin{table}[htb] \begin{center} \vspace{-0.5cm} \begin{tabular*}{7.5cm}{cccc} \hline algorithm & $ V $ & GF-sweeps & CPU-time \\ \hline FFT & $ 4^4 $ & $13.6 \pm 0.3$ & $36 \pm 1$ \\ \hline MG & $ 4^4 $ & $13.7 \pm 0.3$ & $65 \pm 3$ \\ \hline \hline FFT & $ 8^4 $ & $16.1 \pm 0.3$ & $972 \pm 16$ \\ \hline MG & $ 8^4 $ & $16.1 \pm 0.3$ & $1381 \pm 49$ \\ \hline \hline FFT & $ 16^4 $ & $20.0 \pm 0.4$ & $25254 \pm 560$ \\ \hline MG & $ 16^4 $ & $19.9 \pm 0.4$ & $33797 \pm 1114$ \\ \hline \end{tabular*} \parbox{7.5cm}{ \vspace{0.2cm} \caption{\label{tab:accuracy} \vskip -0.8cm \hskip 1.25cm \protect\small : Comparison between the FFT and MG algorithms at $\beta = \infty$.} } \vspace{-1cm} \end{center} \end{table} \normalsize In Table \ref{tab:accuracy} we report results obtained at $\beta = \infty$ for the FFT algorithm, and for MG. Clearly, the two algorithms have a similar performance, showing a number of gauge-fixing sweeps increasing logarithmically with the lattice size $N$, and the CPU-time increasing as $N^4 \log N$. We also did a test at $\beta = 2.2$ (see Table \ref{tab:finiteb}). Again, MG is equivalent to the FFT algorithm. \begin{table}[htb] \begin{center} \vspace{-0.8cm} \begin{tabular*}{7.5cm}{cccc} \hline algorithm & $ V $ & GF-sweeps & CPU-time \\ \hline FFT & $ 8^4 $ & $333 \pm 27$ & $19998 \pm 1236$ \\ \hline MG & $ 8^4 $ & $312 \pm 25$ & $18722 \pm 1496$ \\ \hline \end{tabular*} \parbox{7.5cm}{ \vspace{0.2cm} \caption{\label{tab:finiteb} \vskip -0.8cm \hskip 1.25cm \protect\small : Comparison between the FFT and MG algorithms at $\beta = 2.2$.} } \vspace{-1cm} \end{center} \end{table} \normalsize In order to parallelize, the idea is to use as the coarsest grid for the multigrid algorithm a grid with volume equal to or larger than the number of nodes of the parallel machine. For example, for an APE100 computer with $8^3$ nodes we implemented MG with the coarsest grid $8^4$. 
Then, on the coarsest grid, we can use a Gauss-Seidel relaxation if its volume is small. Otherwise we can use a Conjugate Gradient algorithm to relax the solution. (This combination MG+CG has been used in the past to accelerate MG on vector machines \cite{vector}.) In this way, the computational work for the inversion of $\Delta$ still increases as $V \log N$, provided that we keep the size of the coarsest grid fixed. We tested this combination first on a workstation for a $16^4$ lattice with coarsest grid $8^4$ at $\beta = \infty$, performing two CG-sweeps when relaxing on the coarsest grid. We obtained $20.0 \pm 0.4$ for the GF-sweeps, and $46281 \pm 1732$ for the CPU-time. Therefore (see Table \ref{tab:accuracy}) the performance of this gauge-fixing algorithm is essentially equivalent to that of FFT and MG. Similarly, the performance at $\beta = 2.2$, for an $8^4$ lattice with coarsest grid $4^4$, is comparable to the performance of the FFT and MG algorithms (see Table \ref{tab:finiteb}): we obtained $314 \pm 25$ for the GF-sweeps, and $23599 \pm 1884$ for the CPU-time. Finally, we implemented the MG+CG algorithm on an APE100 computer, comparing its performance with a standard overrelaxation (OVE) and an unaccelerated local algorithm (the so-called Los Alamos algorithm, LOS) \cite{gfix123}. The number of gauge-fixing sweeps obtained, at $\beta = \infty$ and for lattice volume $16^4$, was $131 \pm 3$ for LOS, $34.8 \pm 0.5$ for OVE, and $16.4 \pm 0.1$ for MG+CG. Clearly the MG+CG algorithm is able to reduce the number of gauge-fixing sweeps compared to the two local algorithms. We plan to extend the tests on APE computers to larger lattice volumes.
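The coarsest-grid Conjugate Gradient relaxation can be sketched in a few lines; this matrix-free two-dimensional Dirichlet toy stands in for the lattice operator actually inverted on the APE machine:

```python
import numpy as np

def neg_laplacian(u):
    # matrix-free negative 2-D lattice Laplacian with zero Dirichlet boundaries;
    # symmetric positive definite, so CG applies
    a = 4.0 * u
    a[1:, :] -= u[:-1, :]
    a[:-1, :] -= u[1:, :]
    a[:, 1:] -= u[:, :-1]
    a[:, :-1] -= u[:, 1:]
    return a

def conjugate_gradient(f, tol=1e-10, max_iter=500):
    # solve (-Delta) u = f without ever forming the matrix explicitly
    u = np.zeros_like(f)
    r = f.copy()                     # residual of the zero initial guess
    p = r.copy()
    rr = np.sum(r * r)
    for _ in range(max_iter):
        Ap = neg_laplacian(p)
        alpha = rr / np.sum(p * Ap)
        u += alpha * p
        r -= alpha * Ap
        rr_new = np.sum(r * r)
        if rr_new < tol ** 2:
            break
        p = r + (rr_new / rr) * p    # new search direction, conjugate to the old
        rr = rr_new
    return u
```

Only applications of the operator and global inner products are needed, both of which parallelize naturally, which is the reason this relaxation is attractive on the coarsest grid.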
\section{Introduction} The relationship between apparent peak brightness and redshift of Type Ia supernovae (SNe Ia) depends on the cosmological model; this has provided the most direct evidence for the accelerated expansion of the Universe \citep{riess98,perl99}. Current SN Ia surveys such as SNLS \citep{snls06}, ESSENCE \citep{essence}, and GOODS-SN \citep{snhst} are contributing to current constraints on cosmological parameters, and SNe Ia will continue to be important for cosmological constraints in the next generation of surveys such as the JDEM candidates ADEPT, DESTINY \citep{destiny} and SNAP \citep{snap}. Type Ia supernovae are interpreted as the thermonuclear explosion of a white dwarf that has reached the Chandrasekhar mass and thus has become unstable, probably through accretion from a companion star (the single-degenerate scenario) or merging with another white dwarf (the double-degenerate scenario). However, no fully consistent model of a SN Ia explosion has yet been built. The natural scatter in SNe Ia peak luminosities covers roughly one magnitude; the rms scatter is 0.45 mag after excluding outliers. Empirical correlations based on light curve shape \citep{phil93} or intrinsic color \citep{vdb, tripp} allow reduction of the intrinsic scatter to about 0.13 mag \citep{snls06}, making them usable for cosmological measurements. However, it is not yet known how much of the residual scatter is correlated with physical parameters that could evolve with redshift, and thus bias the measurement of cosmological parameters. In order for SNe to be useful for constraining dark energy at the level expected in future SN satellite experiments, the evolution of luminosity at a given light-curve shape over the probed redshift range must be less than 1-2\% \citep{how07, sarkar08}.
There are several observational indications for a range of delay times between the birth of the progenitor system and the explosion of the SN Ia, leading some authors to envision the existence of at least two different populations of SN Ia populating slightly different regions of stretch-brightness parameter space (Sullivan et al. 2006). It is not yet known if they are described (at the percent level) by the same Phillips relation; if they correspond to different production channels there is no reason for the Phillips relations to be identical. In any case, the SN population may not be a one-dimensional family, but may depend on other parameters (at an as-yet unknown level) such as metallicity and delay time. This would result in a source of scatter in the SN Ia Hubble diagram that could be reduced with measurements of those additional parameters. Moreover, if the Phillips relations are different and the relative numbers of SNe in the two putative populations evolve with redshift, the values of cosmological parameters derived from the Hubble relation will be biased if we do not determine population-dependent corrections. More generally, any extra parameter (age, metallicity, production channel...) in the Phillips relations that is correlated with redshift will bias cosmological measurements. It is therefore essential to assess the size of such effects. Measuring them by looking for correlations between Hubble diagram residuals and intrinsic properties such as delay time, progenitor metallicity, etc. could help correct for them (at least statistically if not event-by-event) in future surveys. The brightest supernova events only occur in actively star-forming galaxies (Hamuy et al. 1995, 1996a,b), suggesting prompt explosions, while under-luminous events are most often found in spirals and E/S0 galaxies, whose old stellar populations would suggest delayed explosions \citep{how01}. Mannucci et al. 
(2006) and \cite{sb05} have proposed a two-component model for SNe Ia, and several authors \citep{sul06, man05} have shown that the supernova rate can be expressed as a sum of a term proportional to the total mass of the galaxy and a term proportional to the recent star formation rate. In the Mannucci et al. model, some of the supernovae would explode several Gyr after the birth of the progenitor system, while others would explode after only a fraction of a Gyr. Moreover, those two populations have different luminosities, the ``prompt'' component being brighter with broader light-curves (Sullivan et al. 2006). The prompt component will dominate at higher redshifts when the Universe's age (or the time since star formation began) was less than the lifetime of the longer-lived progenitors. Determining the relative numbers of supernovae in the two populations is the first step in understanding any possible bias these populations might cause. Given the 13\% scatter in the calibrated peak luminosities of supernovae, measuring systematic effects in the Phillips relation at the percent level will take samples of a few hundred supernovae \citep{sarkar08}. Hamuy et al. (2000) used 62 SN Ia host galaxies to study the impact of host morphology, magnitude and colors on the decline rate $\Delta m_{15}$, which allows one to estimate the SN peak luminosity. They found a correlation with age but not with metallicity (see also the erratum to their paper). However, their sample was very small and most of their estimates of age and metallicity were based on photometry only, without spectra of the host galaxies, and therefore their accuracy was limited. Moreover, although they investigated various environmental effects, their methodology was not sensitive to a second parameter in the Phillips relation, since they used the decline rate as a ``reddening-free and distance-free estimate of the SN peak brightness'', and thus assumed {\em a priori} the universality of the relation. Gallagher et al.
(2005) carried out a similar analysis with spectra of 57 SN Ia hosts, and put tentative constraints on the SN progenitor lifetime using an estimate of current-to-average star formation rate. They claimed to see hints of both a bimodal behavior and a lower limit on the progenitor lifetime. They admitted that their findings were rather inconclusive. \citet{sul06} used 100 SNe from the SNLS and broadband spectral energy distributions of the host galaxies to estimate stellar masses and star formation rates. They found a component proportional to the stellar mass, and a component proportional to the recent star formation rate, averaged over the last 0.5 Gyr. Our study improves on these earlier papers by using a larger sample of SNe, with spectra of their host galaxies from the Sloan Digital Sky Survey (SDSS; York et al. 2000). We use a sophisticated stellar population code called VESPA \citep{vespa} which allows us to determine the star formation history of the hosts. We also determine the star formation history of a large sample of normal galaxies from the SDSS as a control. Differences in those star formation histories will yield information on the SN progenitor lifetime: short lifetimes for instance should statistically enrich the host sample in galaxies with recent star formation. \section{Host galaxies and reference sample} We gathered a sample of about 1300 confirmed SNe Ia from IAU circulars\footnote{\tt http://www.cfa.harvard.edu/iau/cbat.html}, the CfA supernovae list\footnote{\tt http://cfa-www.harvard.edu/iau/lists/Supernovae.html} and the SDSS-SN public list of supernovae\footnote{\tt http://sdssdp47.fnal.gov/sdsssn/sdsssn.html}. We cross-referenced this list with the SDSS DR5 \citep{AM07} spectroscopic survey of galaxies: 256 galaxies with spectra were identified as SN Ia hosts, corresponding to 257 supernovae (one galaxy hosted two supernovae). The list of hosts used in this paper is available online\footnote{\tt http://sn.aubourg.net/hosts/}.
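The cross-referencing step amounts to a positional match between the SN list and the galaxy catalog. A nearest-neighbour sketch is given below; the coordinates, names and the 10 arcsec search radius are illustrative assumptions, not the actual matching criteria used:

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    # angular separation on the sky (inputs in degrees, output in arcsec),
    # via the haversine formula, which is numerically stable at small angles
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    s = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(s))) * 3600.0

def match_hosts(sne, galaxies, radius_arcsec=10.0):
    # brute-force nearest-neighbour match of SN positions to galaxy positions;
    # returns {sn_name: galaxy_name} for matches inside the search radius
    matches = {}
    for sn_name, ra, dec in sne:
        best = min(galaxies, key=lambda g: ang_sep_arcsec(ra, dec, g[1], g[2]))
        if ang_sep_arcsec(ra, dec, best[1], best[2]) <= radius_arcsec:
            matches[sn_name] = best[0]
    return matches
```

A production match would use a spatial index rather than the brute-force scan, but the acceptance criterion is the same.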
A large fraction of our sample comes from surveys like SDSS (104 SNe), LOSS and LOTOSS (49 SNe for the two surveys). The rest have various origins (Puckett\footnote{\tt http://www.cometwatch.com}: 17 SNe, Pollas (1994): 12 SNe, etc.). The detection efficiency of this sample is unknown, as it depends in detail on the way in which the SNe were found. To account for the selection function of SN Ia discovery, we also process a control sample of $10^5$ DR5 galaxy spectra, chosen randomly in the survey release, and weighted to reproduce the redshift distribution of the host sample --- this is the parameter which could most significantly bias delay time measurements. We plan to handle these effects more accurately with a full Monte Carlo treatment in a future paper. As a cross check, we did the same analysis, keeping only supernovae at $z<0.1$. The reduction in the sample size (from 257 to 190) increases the error, but does not change the result (see Table 2 below). Other effects will be discussed in Section 4. \begin{figure} \includegraphics[width=\columnwidth,height=8cm]{f2.pdf} \caption{Distribution of the number of stellar populations recovered by VESPA from the host sample (gray) and the control sample (white). The two distributions are very similar (see text).} \label{fig:vespabins} \end{figure} \section{Reconstructing the star formation and metallicity history of SN host galaxies} The spectrum of a galaxy is a superposition of spectra of single stellar populations which formed at a given age with a given metallicity. Since it is not possible to recover the star formation and metallicity history with infinite precision \citep[e.g.][]{Jimenez+04}, it is only sensible to attempt to recover the star formation and metallicity history with a certain time resolution. The VESPA algorithm \citep{vespa} does this, providing a detailed history only where the data warrant it.
Note that broad-band colors are not sufficient to determine the star formation histories of galaxies, as they suffer from significant age-metallicity degeneracies \citep[e.g.][]{Jimenez+04}. \begin{figure*} \includegraphics[width=2\columnwidth]{sfr-4plots.pdf} \caption{Type Ia supernova rate per stellar mass, unnormalized, vs. fraction of stellar mass formed in the four time intervals indicated in each panel. The dashed line is a fit to a dual-component model ${SNR} = \alpha \times M_* + \beta \times M_{\rm time\ range}$, for each time range. In the absence of a stellar population of a given age contributing to the supernova rate, the curve would be consistent with flat (i.e., $\beta$ consistent with zero). We find that $\beta/\alpha$ is nonzero at the five-sigma level for the time interval [0-180] Myr. Shaded areas are $\pm2\sigma$ fit values.} \label{fig:rate} \end{figure*} In brief, VESPA uses singular value decomposition to calculate the number of significant components in the spectrum of a given galaxy. VESPA then uses an algorithm to determine the best-fitting non-negative values of the star formation fractions. Extensive tests of the performance of VESPA on synthetic spectra as a function of wavelength coverage and signal-to-noise ratio can be found in \citet{vespa}, along with a study of age-metallicity degeneracy. To limit the search to a manageable number of parameters, and because currently available spectra never have the quality or spectral range to justify going beyond this choice, VESPA's finest resolution consists of 16 age bins, logarithmically spaced in lookback time between 0.002 and 14 Gyr. Specifically, the lower age limits of the 16 bins are: 0.002, 0.02, 0.03, 0.0462, 0.074, 0.115, 0.177, 0.275, 0.425, 0.6347, 1.02, 1.57, 2.44, 3.78, 5.84, and 9.04 Gyr. VESPA chooses the number of stellar populations to model depending on the quality of the data.
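The core fitting problem, recovering non-negative star formation fractions from a linear superposition of templates, can be sketched with a projected-gradient solver. VESPA itself uses singular value decomposition plus its own non-negative fitting algorithm, so the mock templates and the solver below are only a simplified stand-in for the same mathematical problem:

```python
import numpy as np

def nnls_projected_gradient(T, s, n_iter=5000):
    # minimise (1/2) || T w - s ||^2 subject to w >= 0 by projected gradient
    # descent; columns of T are template spectra, s is the observed spectrum
    step = 1.0 / np.linalg.norm(T, 2) ** 2     # 1 / Lipschitz constant of the gradient
    w = np.zeros(T.shape[1])
    for _ in range(n_iter):
        grad = T.T @ (T @ w - s)
        w = np.maximum(0.0, w - step * grad)   # gradient step, then projection
    return w
```

On noiseless mock data with full-rank templates the true weights, including exact zeros, are recovered; with noisy data the non-negativity constraint is what prevents the unphysical negative fractions an unconstrained least-squares fit would produce.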
The SDSS galaxy spectra typically allow between 7 and 10 age bins in both the SN host sample and the control sample, showing there are no significant differences in the two samples (Fig.~\ref{fig:vespabins}; see also Tojeiro et al. 2007), although there is non-zero star formation in only 3-5 of those bins. The relatively large number of bins implied by Fig.~\ref{fig:vespabins} is an indication that in most cases the star formation which is recovered is localised to narrow time intervals. The metallicity for each population is a free parameter, so there are as many metallicity values recovered as there are star formation fractions. Following Sullivan et al. (2006), we parametrize the total supernova rate with a two-component model, ${SNR} = \alpha M_* + \beta M_{recent}$, where $M_*$ is the total stellar mass and $M_{recent}$ is the mass formed in some time range that we can vary. $\alpha$ and $\beta$ reflect, respectively, the SNR per unit stellar mass of an old population of progenitors (roughly proportional to the total stellar mass), and the SNR per unit stellar mass of a young population of progenitors (proportional to the mass in recently formed stars). Because we do not have a calibration of the selection function of the supernova surveys we have used, we normalize with our control sample, representative of the whole SDSS spectroscopic catalog (our reference sample), and can determine $SNR$ only up to a proportionality constant. That is, we fit the data to a model \begin{equation} \frac{SNR}{M_*} = C\left(1 + \frac{\beta}{\alpha} \frac{M_{recent}}{M_*}\right), \end{equation} where $C$ is an unnormalized proportionality constant. Thus the quantity $\beta/\alpha$ quantifies the fraction of the supernova rate with progenitor stars that formed in the chosen time range. The constant $C$ is the product of the ``slow'' rate $A$ introduced in the Mannucci et al. 
two-component model, and an unknown global SN detection efficiency factor averaged over all SN surveys which contribute to our reference sample. We estimate the SNR per unit stellar mass, $SNR/M_*$, as a function of $\frac{M_{recent}}{M_*}$ as follows. For any unbiased sample of galaxies, the observed SNR per unit stellar mass is equal to the number of supernovae divided by the total stellar mass in the sample, multiplied by an efficiency factor $\epsilon$: $\frac{SNR}{M_*} = \epsilon \frac{N_{SN}}{M_{total}}$. We can divide the galaxies into bins of their star formation fraction in the recent time range, $\frac{M_{recent}}{M_*}$, as determined by the VESPA analysis. Within each bin we can therefore calculate $\frac{SNR}{M_*}$, by simply counting the number of SNe in each subset and dividing it by the total stellar mass in galaxies with this value of $\frac{M_{recent}}{M_*}$. The latter is calculated from the much larger control sample. We repeated this exercise for a variety of ages for the young component, also varying the dust model details in VESPA and the bin boundaries. We find a significant contribution to the SN rate from recent stellar populations (in the sense that $\beta/\alpha$ is significantly positive), where we vary the definition of ``recent'' between 74 and 180 Myr. The contribution of stars in the 180-250 Myr range is lower by at least a factor of five. The value of $\beta/\alpha$ is robust to setting the boundary anywhere between 74 and 180 Myr, and thus here we quote the most conservative value of 180 Myr, since starburst duration and internal degeneracies in VESPA could make age bins ``leak'' one into another. We illustrate this in Figure~\ref{fig:rate}. We have divided our 16 time bins into four broader bins, and asked for the correlation of the supernova rate with the fraction of star formation that occurred in each bin.
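The binning-and-fitting procedure above can be sketched on a synthetic control sample; all the numbers below (masses, recent-mass fractions, the constant $C$ and the input $\beta/\alpha$) are mock values chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# mock control sample: stellar masses and recent-mass fractions per galaxy
n_gal = 100_000
m_star = 10 ** rng.normal(10.5, 0.5, n_gal)          # stellar mass [M_sun]
x = rng.uniform(0.0, 0.01, n_gal)                    # M_recent / M_*
m_recent = x * m_star

# expected SN counts from the two-component model (inputs to be recovered)
C, ratio_in = 1e-13, 450.0
expected_sn = C * (m_star + ratio_in * m_recent)

# bin in x; per bin, compute the rate per unit mass and the
# mass-weighted mean fraction, for which the model is exactly linear
edges = np.linspace(0.0, 0.01, 11)
idx = np.digitize(x, edges) - 1
y = np.array([expected_sn[idx == b].sum() / m_star[idx == b].sum() for b in range(10)])
xw = np.array([m_recent[idx == b].sum() / m_star[idx == b].sum() for b in range(10)])

# straight-line fit y = a + b * xw; the ratio b / a estimates beta / alpha
b, a = np.polyfit(xw, y, 1)
```

Using the mass-weighted mean of $M_{recent}/M_*$ per bin makes the binned model exactly linear, so the input ratio is recovered without binning bias; with observed (Poisson) counts instead of expectations, the fit would carry the corresponding statistical error.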
The four panels show the un-normalized Type Ia supernova rate per stellar mass as a function of the fraction of stellar mass formed in each of these broad time bins: [0-180] Myr (top-left panel), [180-660] Myr (bottom-left panel), [0.66-2.44] Gyr (top-right panel), and [2.44-13] Gyr (bottom-right panel). The resulting best-fit value of $\beta/\alpha$ and the corresponding correlations are given in Table 1. For the youngest range, 0-180 Myr, we measure $\beta/\alpha = 454 \pm 78$, which is five-sigma evidence for a short-duration component. Since $\beta/\alpha$ in older bins is significantly lower, the positive signal we find in the most recent age range cannot be due to correlation (``leakage'') from older bins. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Age (Myr) & $\beta/\alpha$ & Error on $\beta/\alpha $& Significance ($\sigma$)\\ \hline 0-180 & 454 & 78 & 5.8 \\ 180-660 & 56 & 16 & 3.4 \\ 660-2440 & 18.4 & 3.5 & 5.2 \\ 2440-13700 & -3 & 1.0 & -- \\ \hline \end{tabular} \caption{Fits of supernova rate per unit stellar mass $\propto 1 + {\beta\over\alpha} \frac{M_{\rm range}}{M_*}$ for different ranges of star formation lookback time for the $\beta$ component.} \end{table} \begin{table} \begin{tabular}{|c|c|c|c|} \hline Age (Myr) & $\beta/\alpha$ & Error on $\beta/\alpha $& Significance ($\sigma$)\\ \hline 0-180 & 347 & 75 & 4.6 \\ 180-660 & 43 & 20 & 2.2 \\ 660-2440 & 46 & 8 & 5.8 \\ 2440-13700 & -3 & 1.0 & -- \\ \hline \end{tabular} \caption{Same as Table 1, considering only SNe with $z<0.1$.} \end{table} Our results should be robust against possible spectroscopic calibration errors: because we compare the host population to a control sample of spectra taken with the same telescope and instrument, and processed with the same spectroscopic pipeline and the same star formation history recovery algorithm (VESPA), systematic errors anywhere in the chain would be shared by the two samples.
The selection function of our SN sample is also irrelevant as long as it is not strongly dependent on host properties which are correlated with SFR; the effect of possible efficiency biases in the SN host sample will be discussed in more detail in \S 4 below. We do not consider supernovae without an SDSS host, but this is fully consistent with our approach of comparing to an SDSS reference population: our reference sample is, by definition, complete, and we compare, among that sample, hosts without SN to hosts with SN. A bias will be introduced in our results if there is a correlation between galaxy mass and specific star formation rate. Indeed, most massive galaxies in the present-day universe have little star formation; most of the star formation in the present-day universe is in low-mass galaxies (Heavens et al. 2004). We have tested this bias with a Monte-Carlo simulation, as follows. We generated an artificial SN sample in our control sample following the rule $ \alpha \times M_* + \beta \times M_{\rm recent}$. In the absence of correlations between recent star formation and mass in this reference sample, applying our method recovers the input $\beta/\alpha$ values. However, our reference sample has an anti-correlation between mass and recent star formation. With the input $\beta$ set equal to zero, we recover $\beta/\alpha = -3 \pm 2$, compatible with zero (dashed line in Fig.~\ref{fig:bias}). Using $\beta/\alpha = 700$ (dotted line in Fig.~\ref{fig:bias}), we recover $\beta/\alpha = 465$, $2/3$ of the simulated value (and close to our observed value). Therefore we conclude that the anti-correlation between galaxy mass and star formation rate causes us to {\em underestimate} the contribution of a prompt component to the supernova rate. \begin{figure} \includegraphics[width=\columnwidth]{bias-plot.pdf} \caption{Estimation of the possible bias that can be introduced by star formation today taking place preferentially in smaller mass galaxies.
Using our SDSS control sample, we have generated supernovae using $\beta/\alpha = 0$ and $\beta/\alpha = 700$. The recovered values show that for the case with $\beta/\alpha = 0$ (dashed line) we indeed find no prompt signature, while for the case $\beta/\alpha = 700$ we recover supernova activity at the $2/3$ level (dotted line) of the input value ($\beta/\alpha \simeq 465$); thus our conclusions are conservative. See the text for more details.} \label{fig:bias} \end{figure} The ratio $\beta/\alpha$ is compatible with previous estimates, for which the ``recent'' SFR is estimated in general from colors, broadband SED fitting, the core-collapse SN rate or the cosmic SFR. These results represent an average over half a gigayear, and are thus only a rough match to our results. We can roughly convert our mass estimate $M_{180}$ ($M_{recent}$ for the last 180 Myr) to a recent SFR through $M_{180} = 180\times 10^{6} \frac{SFR}{1 M_\odot \rm yr^{-1}} M_\odot$. With this, the Neill et al. (2006) values (a ``slow'' rate $A = 1.4 \pm 1.0 \times 10^{-14}$ SN $M_\odot^{-1}$ $\rm yr^{-1}$ and a ``prompt'' rate $B = 8.0 \pm 2.6 \times 10^{-4}$ SN $(M_\odot \rm yr^{-1})^{-1} yr^{-1}$) yield $\beta/\alpha \simeq 300$, the Sullivan et al. (2006) values ($5.3 \pm 1.0 \times 10^{-14}$ SN $M_\odot^{-1}$ $\rm yr^{-1}$ and $3.9 \times 10^{-4}$ SN $(M_\odot \rm yr^{-1})^{-1} yr^{-1}$) yield $\beta/\alpha \simeq 40$, and the two values quoted by Scannapieco \& Bildsten (2005) yield $\beta/\alpha \simeq 300$ and 150, respectively. However, since ``recent'' star formation is estimated differently in each case, and over a different recent time range, the figures are not expected to match closely. \section{Discussion and conclusion} Our result would be sensitive to any systematic effect enriching the SN host sample in blue galaxies (i.e., those with large $M_{recent}/M_*$) for reasons unrelated to SN physics (a bias in efficiency, or a bias in the monitored galaxy sample for targeted searches). 
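For reference, the conversion from published $(A, B)$ rate pairs to $\beta/\alpha$ amounts to dividing $B$ by $A$ times the 180 Myr bin duration, since with $M_{180} = T \times SFR$ one has $\beta T = B$. A minimal check using the rate values quoted above:

```python
T = 180e6  # yr: with M_180 = T x SFR, beta = B / T, so beta/alpha = B / (A T)

def beta_over_alpha(A, B):
    """A in SN/yr per Msun of stellar mass; B in SN/yr per (Msun/yr) of SFR."""
    return B / (A * T)

print(beta_over_alpha(1.4e-14, 8.0e-4))  # Neill et al. (2006): ~317, i.e. ~300
print(beta_over_alpha(5.3e-14, 3.9e-4))  # Sullivan et al. (2006): ~41, i.e. ~40
```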
Some targeted supernova searches have focused on star-forming galaxies (e.g., Richmond, Filippenko, \& Galisky 1998). In our sample, the main targeted search is the Lick Observatory Supernova Search (LOSS, Filippenko et al. 2001). There is no indication that this search is biased in this way, and our results change insignificantly if we remove those hosts. Blue galaxies are fainter, and one could naively expect to detect SNe more easily in these faint hosts, but with modern image differencing techniques, the effect should be minor. One also expects such a bias to occur in surveys that rely on spectroscopic typing of SNe, since the fainter, low-stretch SNe, which occur preferentially in bright galaxies, will be more difficult to characterize spectroscopically. Star-forming galaxies tend to produce brighter events. On the other hand, star-forming hosts tend to be dustier, making supernovae harder to detect or characterize. If such an efficiency effect had an impact on our analysis, in the sense of enhancing or simulating what we observe, we would expect the host galaxy sample to exhibit lower typical masses than the reference sample, or a lower dust content. VESPA yields estimates of the dust content and luminosity of the hosts. We see no significant difference in the luminosity and dust distributions between the host and control samples. Rather, the host stellar mass distribution is shifted towards slightly {\em higher} masses, as we would expect given that the SN rate should increase with mass. We are thus confident this effect does not significantly bias our results. Such an efficiency effect would also be more important at high redshift. As we noted above, restricting our sample to SNe occurring at $z<0.1$ does not change our findings. In our next paper, we plan to use the lightcurve characteristics in our analyses for those objects for which they are available. Following Mannucci et al. (2005, 2006) and Sullivan et al. 
(2006), we have shown that SNe Ia can occur through short-lived progenitors, hinting at a variety of stellar evolution paths with different lifetimes. We have given the first estimate of the lifetime of the ``fast'' component, by reconstructing the star formation history of SN host galaxies and finding an increased contribution to the SN Ia rate from stars evolving in less than 180 Myr. Such a short time delay strongly constrains the nature of possible progenitors. They must be stars that evolve fast enough, i.e. with masses above $\sim 3.5 M_\odot$, but must lie below the super-AGB mass threshold (about 8 $M_\odot$) above which one gets electron-capture supernovae \citep{poel07}. \citet{PS06} have also suggested that a significant fraction of binaries are twins (i.e., pairs of stars with essentially identical masses), and that such twin binaries could provide a short ($< 0.1$ Gyr) path to SNe Ia. Considering common envelope evolution phenomena, \citet{PS06} argue that such twin systems could yield double degenerate SNe Ia in a way that would be both fast and efficient (see also \citealt{hachisu}). Are there enough high-mass progenitors to account for the observed SN Ia rate? These progenitors must have masses between $3.5 M_\odot$ (in order to explode within 180 Myr) and $8 M_\odot$. Only a fraction $f_\beta$ of these stars will actually explode as SN Ia progenitors. We take into account five factors: the fraction of stars in binaries ($f_a$), the fraction of binaries both of whose components lie in the range 3.5 to 8 $M_\odot$ ($f_b$), the fraction of stars at a suitable separation for mass transfer ($f_c$), the fact that every binary yields a single explosion ($f_d$), and an overall efficiency ($\eta_\beta$, as not all possible progenitors may explode). Maoz (2007) has estimated the first four factors, and finds $f_a \in [2/3,1], f_b\in[1/6, 1/3], f_c\in[1/4,1/2]$ and $f_d=1/2$. 
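Multiplying the quoted factor ranges is a trivial check (the overall efficiency $\eta_\beta$ is left symbolic):

```python
# Maoz (2007) factor ranges quoted above, as (low, high) pairs
f_a = (2/3, 1.0)   # binary fraction
f_b = (1/6, 1/3)   # both components in 3.5-8 Msun
f_c = (1/4, 1/2)   # suitable separation for mass transfer
f_d = 1/2          # one explosion per binary

lo = f_a[0] * f_b[0] * f_c[0] * f_d   # ~0.014
hi = f_a[1] * f_b[1] * f_c[1] * f_d   # ~0.083
```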
Crudely multiplying these factors together gives the fraction of objects in the appropriate mass range that explode as prompt SNe Ia: $f_\beta\in[0.014, 0.083]\eta_\beta$. The fraction of stars which will explode as SN Ia progenitors, $f_\beta$, is then given by $f_\beta = \frac{N_B}{N_{3.5-8}}$, where $N_B$ is the total number of SNe Ia from the fast route, and $N_{3.5-8}$ is the total number of stars in the correct mass range. Our result only allows us to estimate the ratio of the two components. We can, however, use published values of $A$ in order to infer an absolute value of $B$ from our ratio---we call this value $B'$. Using $A=1.4 \pm 1.0 \times10^{-14}$ SN $\rm yr^{-1}\, M_\odot^{-1}$ as published by Neill et al. (2006) gives $B' = 6.3 \pm 4.7 \times 10^{-12}$ SN $\rm yr^{-1}\, M_\odot^{-1}$. We can now estimate $N_B = B' \times 180\,{\rm Myr} \times M_{180}$ and $N_{3.5-8} = 0.0157 \times M_{180}$, assuming a Salpeter IMF of the form $N(m) \propto m^{-2.35}$. This gives $f_\beta = 0.073 \pm 0.053$, within the values given by Maoz (2007). Using a value of $A=5.3 \pm 1.1 \times 10^{-14}$ from Sullivan et al. (2006) gives $B' = 2.41 \pm 0.65 \times 10^{-11}$ and $f_\beta = 0.28 \pm 0.07$. Similarly, $A=4.4^{+1.6}_{-1.4} \times 10^{-14}$ as published by Scannapieco \& Bildsten (2005) results in roughly $B' = 1.99 \pm 0.69 \times 10^{-11}$ and $f_\beta = 0.23 \pm 0.08$. With the slow $A$ rate of Neill et al. (2006) there is complete consistency with the theoretical expectations of SN Ia rates from Maoz (2007). However, with the higher $A$ rates of Sullivan et al. (2006) or Scannapieco \& Bildsten (2005), there is some tension with our results, which would require a high efficiency. It is hard to know how worried one should be about this: firstly, we are relying on external measurements of the delayed rate, and secondly, an excess of SN Ia explosions with respect to the predicted number of progenitors is observed in a large number of SN Ia studies (Maoz 2007). 
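The $f_\beta$ arithmetic can be reproduced directly. The Salpeter integration limits of 0.1-100 $M_\odot$ below are our assumption (they reproduce the quoted coefficient 0.0157), and the central rate values are those quoted in the text:

```python
# Number of 3.5-8 Msun stars per unit stellar mass formed, Salpeter IMF
# with assumed limits 0.1-100 Msun (not stated explicitly in the text).
a = 2.35
n_int = lambda m1, m2: (m1**(1 - a) - m2**(1 - a)) / (a - 1)   # star counts
m_int = lambda m1, m2: (m1**(2 - a) - m2**(2 - a)) / (a - 2)   # stellar mass
n_per_msun = n_int(3.5, 8.0) / m_int(0.1, 100.0)               # ~0.0157

# f_beta = N_B / N_{3.5-8} with the Neill et al. (2006) slow rate
A = 1.4e-14                               # SN/yr/Msun
B_prime = 454 * A                         # our beta/alpha times A
f_beta = (B_prime * 180e6) / n_per_msun   # ~0.073; M_180 cancels
```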
This tension can be alleviated in a number of ways---some of which are discussed in the above paper---but generally indicates that the efficiency of the mechanism which produces SN Ia explosions must be high. A crucial question for the use of SNe Ia as standard candles in cosmology is whether these different routes yield objects which are standardizable to high accuracy via the same empirical corrections. Current data find no evidence for a difference \citep{hamuy00,sul03,bronder07}, but the requirements for using SNe Ia as a Dark Energy probe are stringent, and it will be important to establish this point accurately. VESPA should be able to assist directly with the correction, first by determining the standardization for each route at the required percent-level accuracy, and then by allowing us to apply the correct standardization at least statistically, if not for individual supernovae. Furthermore, VESPA provides metallicity estimates for the hosts, which may correlate with peak luminosity, and thus allow us to reduce the scatter in the distance indicator. This will be the subject of a future study. We plan to expand our sample by obtaining spectra of a larger number of SN hosts, allowing us to deconvolve the delay time function. Future papers will address more quantitatively the long duration component, the downsizing bias, the metallicity effects (see also Prieto et al. 2007), and stellar evolution models compatible with our findings. EA acknowledges the importance of numerous discussions with the late Bohdan Paczy\'nski, to whom we dedicate this paper. RT is funded by the Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia under the reference PRAXIS SFRH/BD/16973/04. RJ and DNS acknowledge support from the NSF PIRE-0530095. This work was supported in part by DOE grant DE-FG02-07ER41514.
\section{Introduction} The presence of Galactic interstellar dust affects astronomical observations over a wide range of wavelengths. In the mid-infrared and far-infrared, Galactic dust emission contributes significantly to the total observed sky intensity. At optical and ultraviolet (UV) wavelengths, dust grains absorb and scatter starlight. Observations of interstellar dust emission/absorption can improve our understanding of the physical conditions and composition of the interstellar medium (ISM), an environment which plays a crucial role in Galactic evolution and star formation. Equally, or perhaps even more important to the practice of astronomy, however, is accurately accounting for dust as a foreground which reddens optical/UV observations of stars/galaxies and superimposes Galactic emission on low-frequency observations of the cosmic microwave background (CMB). Over the past decades, satellite observations have dramatically enhanced our knowledge about infrared emission from the ISM. The \textit{Infrared Astronomical Satellite} ({\it IRAS}), with its $\sim$4$'$ resolution, revolutionized the study of Galactic dust emission, first revealing the high-latitude ``infrared cirrus'' using 60$\mu$m and 100$\mu$m observations \citep{low84, wheelock94} and highlighting the importance of detailed dust mapping in the far-infrared/submillimeter as a key foreground for cosmology. Later, the Diffuse Infrared Background Experiment (DIRBE) aboard the {\it COBE}~satellite provided complementary full-sky measurements at ten infrared wavelengths from 1.25$\mu$m to 240$\mu$m, boasting a reliable zero point despite inferior $\sim$0.7$^{\circ}$ angular resolution \citep{boggess92}. {\it COBE}/FIRAS \citep{firas} also provided full-sky infrared dust spectra at $7^{\circ}$ resolution in 213 narrow frequency bins between 30 GHz and 2850 GHz. 
\citet[hereafter FDS99]{FDS99} used these FIRAS data to derive a globally best-fit model of dust emission applicable over a very broad range of frequencies. FDS99 showed that no model consisting of a single modified blackbody (MBB) could accurately match the FIRAS/DIRBE spectrum at both the Wien and Rayleigh-Jeans extremes. To fit the thermal dust spectrum between 100 and 3000 GHz, FDS99 therefore proposed an emission model consisting of two MBBs, each with a different temperature and emissivity power law index. Physically, these two components might represent distinct dust grain species within the ISM, or they might simply provide a convenient fitting function. By combining this best-fit two-component model with a custom reprocessing of DIRBE and {\it IRAS}~100$\mu$m data, FDS99 provided widely used foreground predictions with $6.1'$ FWHM, limited largely by their $1.3^{\circ}$ resolution DIRBE-based temperature correction. The {\it Planck}~2013 data release \citep{planck2013} represents an important opportunity to revisit foreground predictions in light of {\it Planck}'s superb, relatively artifact-free broadband data covering the entire sky and a wide range of frequencies. Towards this end, \cite{planckdust} has conducted a study modeling {\it Planck}~353 GHz, 545 GHz, 857 GHz and DIRBE/{\it IRAS}~100$\mu$m emission with a single-MBB spectrum. More recently, \cite{aniano} has applied the \cite{dl07} dust grain model to {\it Planck}, {\it IRAS}, and {\it WISE}~emission between 353 GHz and $12\mu$m. Here we investigate the FDS99 two-component dust emission model as an alternative parametrization of the 100-3000 GHz dust spectral energy distribution (SED) constructed from {\it Planck}~High Frequency Instrument (HFI), DIRBE and {\it IRAS}~data. In doing so, we obtain {\it Planck}-based maps of dust temperature and optical depth, both at $6.1'$ resolution. 
Because we employ a model that has been validated with FIRAS down to millimeter wavelengths and optimized for {\it Planck}, our derived parameters are useful in constructing high-resolution predictions of dust emission over a very broad range of wavelengths. This includes low frequencies (100-350 GHz), which \cite{planckdust} caution their model may not adequately fit, and also wavelengths near the peak of the dust SED, relevant to e.g. {\it AKARI}~140-160$\mu$m \citep{akari}. We also anticipate our derived optical depth map will serve as a valuable cross-check for extinction estimates based directly upon optical observations of stars \citep[e.g.][]{schlafly14} and as a baseline for next-generation dust extinction maps incorporating high-resolution, full-sky infrared data sets such as {\it WISE}~\citep{wright10, meisner14} and {\it AKARI}. In $\S$\ref{sec:data} we introduce the data used throughout this study. In $\S$\ref{sec:prepro} we describe our preprocessing of the {\it Planck}~maps to isolate thermal emission from Galactic dust. In $\S$\ref{sec:modeling} we explain the two-component emission model we apply to the {\it Planck}-based dust SED. In $\S$\ref{sec:bpcorr}, we discuss the details of predicting {\it Planck}~observations based on this dust model. In $\S$\ref{sec:global} we derive constraints on our model's global parameters in light of the {\it Planck}~HFI maps. In $\S$\ref{sec:fitting} we detail the Markov chain Monte Carlo (MCMC) method with which we have estimated the spatially varying parameters of our model. In $\S$\ref{sec:ebv} we calibrate our derived optical depth to reddening at optical wavelengths. In $\S$\ref{sec:em_compare} we compare our two-component thermal dust emission predictions to those of \cite{planckdust}. In $\S$\ref{sec:release} we present the full-sky maps of dust temperature and optical depth we have obtained, and conclude in $\S$\ref{sec:conclusion}. 
\section{Data} \label{sec:data} All {\it Planck}~data products utilized throughout this work are drawn from the {\it Planck}~2013 release \citep{planck2013}. Specifically, we have made use of all six of the zodiacal light corrected HFI intensity maps \citep[\texttt{R1.10\_nominal\_ZodiCorrected},][]{planckzodi}. Our full-resolution (6.1$'$ FWHM) SED fits neglect the two lowest HFI frequencies, 100 and 143 GHz, as these have FWHM of 9.66$'$ and 7.27$'$ respectively. To incorporate measurements on the Wien side of the dust emission spectrum, we include 100$\mu$m data in our SED fits. In particular, we use the \citet[henceforth SFD]{SFD} reprocessing of DIRBE/{\it IRAS}~100$\mu$m, which we will refer to as \verb|i100|, and at times by frequency as 3000 GHz. The \verb|i100| map has angular resolution of $6.1'$, and was constructed so as to contain only thermal emission from Galactic dust, with compact sources and zodiacal light removed, and its zero level tied to H\,\textsc{i}. We use the \verb|i100| map as is, without any custom modifications. In some of our FIR dust SED analyses which do not require high angular resolution, specifically those of $\S$\ref{sec:global}, $\S$\ref{sec:lores}, and $\S$\ref{sec:hier}, we also make use of the SFD reprocessings of DIRBE 140$\mu$m (2141 GHz) and 240$\mu$m (1250 GHz). \section{Preprocessing} \label{sec:prepro} The following subsections detail the processing steps we have applied to isolate Galactic dust emission in the {\it Planck}~maps in preparation for SED fitting. \subsection{CMB Anisotropy Removal} \label{sec:cmb} We first addressed the CMB anisotropies before performing any of the interpolation/smoothing described in $\S$\ref{sec:ptsrc}/$\S$\ref{sec:smth}. 
The CMB anisotropies are effectively imperceptible upon visual inspection of {\it Planck}~857 GHz, but can be perceived at a low level in {\it Planck}~545 GHz, and are prominent at 100-353 GHz relative to the Galactic emission we wish to characterize, especially at high latitudes. To remove the CMB anisotropies, we have subtracted the Spectral Matching Independent Component Analysis \citep[SMICA,][]{smica} model from each of the {\it Planck}~maps, applying appropriate unit conversions for the 545 and 857 GHz maps with native units of MJy/sr. Low-order corrections, particularly our removal of Solar dipole residuals, are discussed in $\S$\ref{sec:zp}. \subsection{Compact Sources} \label{sec:ptsrc} After subtracting the SMICA CMB model, we interpolate over compact sources, including both point sources and resolved galaxies. Removing compact sources at this stage is important as it prevents contamination of compact-source-free pixels in our downstream analyses which require smoothing of the {\it Planck}~maps. SFD carefully removed point sources and galaxies from the \verb|i100| map everywhere outside of $|b|$$<$$5^{\circ}$. We do not perform any further modifications of the \verb|i100| map to account for compact sources. To mask compact sources in the {\it Planck}~217-857 GHz maps, we use the SFD compact source mask. At 100, 143 GHz we use the compact source masks provided by the {\it Planck}~collaboration in the file \verb|HFI_Mask_PointSrc_2048_R1.10.fits|. Given our pixelization (see $\S$\ref{sec:pix}), 1.56\% of pixels are masked at 217-857 GHz (1.05\%, 1.02\% at 100, 143 GHz). \subsection{Smoothing} \label{sec:smth} For our full-resolution model, we wish to simultaneously fit \verb|i100| along with the four highest-frequency {\it Planck}~bands. To properly combine these maps, they must have the same point spread function (PSF). \verb|i100|, with its $6.1'$ symmetric Gaussian beam, has the lowest angular resolution of the relevant maps. 
To match PSFs, we have therefore smoothed each of the {\it Planck}~maps under consideration to \verb|i100| resolution by considering each native {\it Planck}~map to have a symmetric Gaussian beam and smoothing by the appropriate symmetric Gaussian such that the resulting map has a $6.1'$ FWHM. The FWHM values we assign to the native {\it Planck}~maps are taken from \cite{planckbeam}, and are listed in Table \ref{table:offs}. \subsection{Molecular Emission} \label{sec:mole} Because the FIRAS spectra consist of many narrow frequency bins, FDS99 were able to discard the relatively small number of frequency intervals contaminated by strong molecular line emission. Unfortunately, while the {\it Planck}~data considered in this study are of high angular resolution, the broad {\it Planck}~bandpasses do not allow us to adopt the same approach as FDS99 in dealing with line emission. Instead, we must subtract estimates of the molecular line contamination from each {\it Planck}~band in order to best isolate the thermal continuum we wish to characterize. The most prominent molecular line emission in the {\it Planck}~bands of interest arises from the three lowest CO rotational transitions: J=1$\rightarrow$0 at 115 GHz, J=2$\rightarrow$1 at 230 GHz and J=3$\rightarrow$2 at 345 GHz, respectively affecting the {\it Planck}~100, 217 and 353 GHz bands. The J=1$\rightarrow$0 line also imparts a signal upon {\it Planck}~143 GHz, but at a negligible level, $\sim$1000$\times$ fainter relative to the dust continuum than J=1$\rightarrow$0 at 100 GHz. More specifically, the ratio of J=1$\rightarrow$0 intensity to thermal dust emission in {\it Planck}~143 GHz is $\geq$0.001 for only $<$2\% of the sky. To correct for molecular emission, we employ the {\it Planck}~Type 3 CO data product, which boasts the highest S/N among the available full-sky CO maps based on the {\it Planck}~HFI and Low Frequency Instrument (LFI) data \citep{planckco}. 
The native angular resolution of the Type 3 CO map is 5.5$'$. We therefore begin by smoothing the raw Type 3 CO map to match the PSF of the smoothed {\it Planck}~intensity maps we wish to correct for molecular emission. We must apply the appropriate unit conversions to the Type 3 CO map before subtracting it from the {\it Planck}~intensity maps, which have native units of $K_{CMB}$ at the frequencies of interest. The Type 3 CO map is provided in units of K$_{RJ}$ km/s of J=1$\rightarrow$0 emission. To convert this quantity to $K_{CMB}$, we assume that all of the CO emission arises from the $^{12}$CO isotope, and derive the {\it Planck}-observed CO intensity in units of $K_{CMB}$ as follows: \begin{equation} I_{CO, \nu_i, N, N-1} = I_{3}F_{12CO, \nu_i, N, N-1} R_{N, N-1} \end{equation} where $I_{CO, \nu_i, N, N-1}$ is the intensity in $K_{CMB}$ in {\it Planck}~band $\nu_i$ due to the CO transition from J=$N$ to J=$(N$$-$1). $I_3$ represents the appropriately smoothed Type 3 CO amplitude in K$_{RJ}$ km/s of J=1$\rightarrow$0 emission. The $F_{12CO, \nu_i, N, N-1}$ are conversion factors between K$_{RJ}$ km/s and $K_{CMB}$ for particular band/transition pairs. The relevant values, calculated with the \textit{Unit Conversion and Colour Correction} software utilities (\verb|v1.2|), are: \noindent $F_{12CO, 100, 1, 0}$=1.478$\times$10$^{-5}$$K_{CMB}/(K_{RJ} \ km/s)$, \noindent $F_{12CO, 217, 2, 1}$=4.585$\times$$10^{-5}$$K_{CMB}/(K_{RJ} \ km/s)$, and \noindent $F_{12CO, 353, 3, 2}$=1.751$\times$$10^{-4}$$K_{CMB}/(K_{RJ} \ km/s)$. \noindent $R_{N, N-1}$ represents the line ratio of the transition from J=$N$ to J=$(N$$-$1) relative to the J=1$\rightarrow$0 line. Thus, $R_{1,0}$=1, and we further adopt $R_{2,1}$=0.595 and $R_{3,2}$=0.297 based on \cite{planckco}. These line ratios are assumed to be constant over the entire sky. 
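As a concrete example, the per-band CO contamination implied by the conversion factors and line ratios above, evaluated for an arbitrary (made-up) Type 3 amplitude of 10 K$_{RJ}$ km/s:

```python
# Conversion factors F (K_CMB per K_RJ km/s of J=1->0) and line ratios R
# for the affected Planck bands, as quoted in the text.
F = {100: 1.478e-5, 217: 4.585e-5, 353: 1.751e-4}
R = {100: 1.0, 217: 0.595, 353: 0.297}

I3 = 10.0  # smoothed Type 3 CO amplitude in K_RJ km/s (illustrative value)
I_CO = {nu: I3 * F[nu] * R[nu] for nu in F}   # contamination in K_CMB
```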
Formally, then, the CO contamination in band $\nu_i$ is given by: \begin{equation} \label{equ:molcorr} I_{CO, \nu_i} = \sum\limits_{N} I_{CO, \nu_i, N, N-1} \end{equation} It happens that, for each of the {\it Planck}~bands in which CO emission is non-negligible (100, 217 and 353 GHz), only a single $N$ contributes ($N$=1, $N$=2 and $N$=3, respectively). Unfortunately, the Type 3 CO map at $6.1'$ FWHM is rather noisy, and the vast majority of the sky has completely negligible CO emission. Thus, in order to avoid adding unnecessary noise outside of molecular cloud complexes and at high latitudes, we have zeroed out low-signal regions of the Type 3 CO map. We identify low-signal regions as those with $\mathcal{I}_3$$<$1 K$_{RJ}$ km/s, where $\mathcal{I}_3$ is the Type 3 CO map smoothed to 0.25$^{\circ}$ FWHM. As a result of this cut, 90\% of the sky remains unaffected by our CO correction, particularly the vast majority of the high Galactic latitude sky. \subsection{Zero Level} \label{sec:zp} Although we wish to isolate and model thermal emission from Galactic dust, the {\it Planck}~maps contain additional components on large angular scales. At each frequency, there can exist an overall, constant offset that must be subtracted to set the zero level of Galactic dust by removing the mean cosmic IR background \citep[CIB,][]{cibreview}, as well as any instrumental offset. Additionally, faint residuals of the Solar dipole remain at low frequencies. We will address these issues by separately solving two sub-problems: first, we set the absolute zero level of {\it Planck}~857 GHz relative to external data, and second we fit the 100-545 GHz offsets and low order corrections by correlating these {\it Planck}~bands against {\it Planck}~857 GHz. 
\subsubsection{Absolute Zero Level} \label{sec:zp_abs} In \cite{planckdust}, the absolute zero level of thermal dust emission was set by requiring that {\it Planck}~infrared emission tends to zero when H\,\textsc{i} is zero, assuming a linear correlation between these two measurements at low column density. However, this approach is less than completely satisfying in that there appear to be different slopes of {\it Planck}~857 GHz versus H\,\textsc{i} for different ranges of H\,\textsc{i} intensity. In particular, {\it Planck}~857 GHz appears to ``flatten out'' at very low H\,\textsc{i}, as shown in Figure 5 of \cite{planckdust}. More quantitatively, we have found using the LAB H\,\textsc{i} data \citep{lab} for $-72$$<$$v_{LSR}$$<$$+25$ km/s that the best-fit slope for H\,\textsc{i}$<$70 K km/s is a factor of $\sim$1.9 lower than the best-fit slope for 110 K km/s $<$H\,\textsc{i}$<$200 K km/s, and as a result the implied zero level offsets for {\it Planck}~857 GHz differ by $\sim$0.37 MJy/sr. \begin{figure} \begin{center} \epsfig{file=zp_857.eps, width=3.4in} \caption{\label{fig:fdsref} Scatter plot of FDS99-predicted 857 GHz thermal dust emission versus {\it Planck}~857 GHz observations, illustrating our absolute zero level determination described in $\S$\ref{sec:zp_abs}.} \end{center} \end{figure} \begin{figure} \begin{center} \epsfig{file=dipole_all_100.eps, width=3.4in} \caption{\label{fig:dip} Scatter plots of {\it Planck}~100, 143, 217, 353, and 545 GHz versus {\it Planck}~857 GHz. Left: before applying our best-fit zero level offsets and additional low-order corrections. Right, top four panels: {\it Planck}~143-545 GHz after correcting for each band's best-fit offset and residual Solar dipole. Bottom right: {\it Planck}~100 GHz after applying the spherical harmonic corrections of Equation \ref{equ:harm}. 
The dashed red line shows the best-fit linear relationship in all cases.} \end{center} \end{figure} \begin{figure*} \begin{center} \epsfig{file=harm_100.eps, width=6.5in} \caption{\label{fig:harm} Summary of low-order corrections at 100 GHz. Left: prior to our low-order corrections, a $\sim$17$\mu$K zero level offset is present and strong low-order problems reduce the linearity of the 100 GHz trend versus 857 GHz. Center: scatter plot of {\it Planck}~100 GHz versus 857 GHz after applying the best-fit offset and residual Solar dipole corrections derived with Equation \ref{equ:dip} to {\it Planck}~100 GHz. The correlation is strengthened, but remains far less tight than for 143-545 GHz (see right column of Figure \ref{fig:dip}, top four rows). Right: after applying the spherical harmonic corrections of Equation \ref{equ:harm} to {\it Planck}~100 GHz, the correlation versus 857 GHz is far more tightly linear than following the dipole correction.} \end{center} \end{figure*} Because of this ambiguity in the relationship between 857 GHz and H\,\textsc{i} emission, we decided to instead constrain the {\it Planck}~857 GHz zero level by comparison to the FDS99-predicted 857 GHz thermal dust emission. This renders our {\it Planck}~857 GHz absolute zero level tied indirectly to H\,\textsc{i} through the FDS99 100$\mu$m and 240$\mu$m zero levels. We perform a linear fit to the FDS99-predicted 857 GHz values as a function of {\it Planck}~857 GHz. For this purpose, we employ a version of the {\it Planck}~857 GHz map with zodiacal light and point sources removed and smoothed to 1$^{\circ}$ FWHM, which we will refer to as $\mathcal{I}_{857}$. We consider $\mathcal{I}_{857}$ to be the independent variable, as it has much higher S/N than the FDS99 prediction, henceforward referred to as $\mathcal{F}_{857}$. 
Note that $\mathcal{F}_{857}$ is not simply the FDS99 model evaluated at 857 GHz, but also incorporates the color correction factor of $\S$\ref{sec:bpcorr}, using the FDS99 temperature map to determine the dust spectrum shape. We rebin to $N_{side}$=64 and restrict to pixels with $\mathcal{I}_{857}$$<$2.15 MJy/sr. Since {\it Planck}~857 GHz smoothed to degree resolution has very high S/N, we can safely perform such a cut on $\mathcal{I}_{857}$. Figure \ref{fig:fdsref} shows a scatter plot of $\mathcal{I}_{857}$ versus $\mathcal{F}_{857}$, with a moving median and linear fit overplotted. The linear fit was performed with uniform weights and iterative outlier rejection. The best-fit linear model is given by $\mathcal{F}_{857}$=0.991$\mathcal{I}_{857}$$-$0.018 MJy/sr. It is encouraging that the slope is quite close to unity. It is also encouraging that our choice of {\it Planck}~857 GHz threshold at 2.15 MJy/sr is unimportant; any threshold value between 1.3 MJy/sr (28$^{th}$ percentile in $\mathcal{I}_{857}$) and 3.9 MJy/sr (61$^{st}$ percentile in $\mathcal{I}_{857}$) yields a zero level offset within 0.01 MJy/sr of our adopted value. The formal statistical error on the best-fit 857 GHz offset is quite small, $\sim$0.002 MJy/sr. The systematics likely to dominate the actual uncertainty on our FDS-based zero level are imperfections in the {\it Planck}/\verb|i100| zodiacal light models and the FDS99 temperature map. To quantify these systematic uncertainties, we split the sky into four quadrants, with boundaries at $b$=0$^{\circ}$ and $l$=0$^{\circ}$, $l$=180$^{\circ}$. We again restricted to $\mathcal{I}_{857}$$<$2.15 MJy/sr, and repeated the regression in each quadrant. The rms of the per-quadrant slopes was found to be 0.0188, while the rms of the per-quadrant offsets was 0.0586 MJy/sr. 
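A uniform-weight linear fit with iterative outlier rejection, of the kind used for Figure \ref{fig:fdsref}, can be sketched as follows. The data here are synthetic (the slope, offset, scatter, and outlier fraction are illustrative stand-ins), not the actual $\mathcal{I}_{857}$/$\mathcal{F}_{857}$ maps:

```python
import numpy as np

def fit_with_rejection(x, y, nsigma=3.0, niter=5):
    """Uniform-weight linear fit with iterative nsigma outlier rejection."""
    keep = np.ones(x.size, dtype=bool)
    for _ in range(niter):
        m, b = np.polyfit(x[keep], y[keep], 1)
        resid = y - (m * x + b)
        keep = np.abs(resid) < nsigma * resid[keep].std()
    return m, b

rng = np.random.default_rng(1)
x = rng.uniform(0.2, 2.15, 5000)                    # "I857" proxy, MJy/sr
y = 0.991 * x - 0.018 + rng.normal(0, 0.02, 5000)   # "F857" proxy + scatter
y[:50] += 1.0                                       # a few gross outliers
m, b = fit_with_rejection(x, y)
```

After the first iteration, the gross outliers fall well outside the clipping threshold and are rejected, so the recovered slope and offset converge to the clean values.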
Our adopted $\sim$0.06 MJy/sr zero level uncertainty is sufficiently large to be consistent with the possible error introduced by assuming no appreciable Solar dipole signal in the {\it Planck}~857 GHz map. If we allow for a dipole template in our FDS99 versus {\it Planck}~linear regression at 857 GHz, the best-fit dipole amplitude is only 0.02 MJy/sr. \begin{deluxetable*}{ccccccccc} \tabletypesize{\scriptsize} \tablecolumns{9} \tablewidth{0pc} \tablecaption{\label{table:offs} Input Map Properties \& Pre-processing} \tablehead{ \colhead{$\nu$ (GHz)} & \colhead{Instrument(s)} & \colhead{Offset ($K_{CMB}$)} & \colhead{Dipole ($K_{CMB}$)} & \colhead{$s_{857,\nu}$$\times$$u_{\nu}$} & \colhead{$\sigma_{s_{857,\nu}}$$\times$$u_{\nu}$} & \colhead{$n_{\nu}$ ($K_{CMB}$)} & \colhead{$c_{\nu}$} & \colhead{FWHM ($'$)} } \startdata 100 & {\it Planck}~HFI & 1.69$\times$10$^{-5}$$\pm$3.61$\times$10$^{-7}$ & $-$1.08$\times$10$^{-5}$ & 1.46$\times$10$^{-3}$ & 2.92$\times$10$^{-5}$ & 7.77$\times$10$^{-5}$ & 0.0054 & 9.66 \\ 143 & {\it Planck}~HFI & 3.58$\times$10$^{-5}$$\pm$7.58$\times$10$^{-7}$ & $-$1.08$\times$10$^{-5}$ & 4.68$\times$10$^{-3}$ & 9.37$\times$10$^{-5}$ & 3.25$\times$10$^{-5}$ & 0.0054 & 7.27 \\ 217 & {\it Planck}~HFI & 7.79$\times$10$^{-5}$$\pm$2.60$\times$10$^{-6}$ & $-$1.40$\times$10$^{-5}$ & 2.09$\times$10$^{-2}$ & 4.19$\times$10$^{-4}$ & 4.51$\times$10$^{-5}$ & 0.0054 & 5.01 \\ 353 & {\it Planck}~HFI & 2.76$\times$10$^{-4}$$\pm$1.95$\times$10$^{-5}$ & $-$3.08$\times$10$^{-5}$ & 9.32$\times$10$^{-2}$ & 1.86$\times$10$^{-3}$ & 1.51$\times$10$^{-4}$ & 0.012 & 4.86 \\ & & Offset (MJy/sr) & Dipole (MJy/sr) & $s_{857,\nu}$ & $\sigma_{s_{857,\nu}}$ & $n_{\nu}$ (MJy/sr) & & \\ \cline{3-7} \\ [-2ex] 545 & {\it Planck}~HFI & 7.27$\times$10$^{-2}$$\pm$1.99$\times$10$^{-2}$ & 1.63$\times$10$^{-2}$ & 3.31$\times$10$^{-1}$ & 6.62$\times$10$^{-3}$ & 0.046 & 0.10 & 4.84 \\ 857 & {\it Planck}~HFI & 1.82$\times$10$^{-2}$$\pm$6.02$\times$10$^{-2}$ & - & 1.0 & 2.0$\times$10$^{-2}$ 
& 0.046 & 0.10 & 4.63 \\ 1250 & DIRBE & 7.06$\times$10$^{-2}$$\pm$1.19$\times$10$^{-1}$ & - & 1.98 & 3.97$\times$10$^{-2}$ & 0.42 & 0.10 & 42 \\ 2141 & DIRBE & 1.04$\times$10$^{-1}$$\pm$1.54$\times$10$^{-1}$ & - & 2.56 & 5.12$\times$10$^{-2}$ & 0.79 & 0.10 & 42 \\ 3000 & DIRBE/{\it IRAS} & 0.0$\pm$4.3$\times$10$^{-2}$ & - & 1.27 & 2.53$\times$10$^{-2}$ & 0.06 & 0.10 & \ 6.1 \enddata \tablecomments{Column 1: Approximate band center frequency of each input map. Note that 1250 GHz and 2141 GHz refer to the SFD98 reprocessings of DIRBE 240$\mu$m and 140$\mu$m respectively. Column 2: Instrument(s) from which the input map at each frequency has been obtained. Column 3: Zero level offset subtracted from each raw input map. Column 4: Best-fit residual Solar dipole amplitude according to Equation \ref{equ:dip}. Column 5: Dimensionless correlation slope of each map relative to {\it Planck}~857 GHz. These are the correlation slopes used in the analysis of $\S$\ref{sec:global}, specifically Equation \ref{equ:chi2corr}. Column 6: Adopted uncertainty on the dimensionless correlation slopes relative to {\it Planck}~857 GHz, for use in Equation \ref{equ:chi2corr}. Column 7: $n_{\nu}$ represents the adopted per-pixel statistical noise level at full resolution, which contributes to the error budget of Equation \ref{equ:errorbudget}. Column 8: Multiplicative fractional uncertainty on each input map, for use in the error budget of Equation \ref{equ:errorbudget}. Column 9: Native angular resolution of each input map.} \end{deluxetable*} \subsubsection{Relative Zero Level} \label{sec:relzero} In the course of this study we use not only {\it Planck}~857 GHz, but also all of the remaining {\it Planck}~HFI bands, as well as \verb|i100|. To derive the zero level offsets that must be applied to each of the five lowest-frequency {\it Planck}~bands, we perform a regression versus the {\it Planck}~857 GHz map corrected for the best-fit absolute zero level offset from $\S$\ref{sec:zp_abs}. 
We assume no offset need be applied to \verb|i100|, which already has its zero level tied to H\,\textsc{i} by SFD. The need for additional low-order corrections beyond simple scalar offsets became evident upon inspecting the HFI maps at 100-545 GHz. In particular, we noticed the presence of a low-level dipole pattern, with an orientation consistent with that of the Solar dipole. Our strategy will be to simultaneously fit both this residual dipole and the zero-level offset amplitude for each band. To most precisely recover these amplitudes, it is necessary to have the highest available S/N in the independent variable of our regression. For this reason we have used {\it Planck}~857 GHz as a reference for the 100-545 GHz bands, as opposed to the FDS99 predictions or H\,\textsc{i} data. In doing so, we assume {\it Planck}~857 GHz contains no appreciable Solar dipole residual. We perform one regression per HFI band (other than 857 GHz) to simultaneously fit for the zero level offset, the slope relative to 857 GHz, and the residual dipole amplitude. For each 100-545 GHz HFI band, we restrict to regions of low column density (H\,\textsc{i} $<$ 200 K\,km\,s$^{-1}$ for $-72$$<$$v_{LSR}$$<$$+25$ km\,s$^{-1}$) and fit the following model: \begin{equation} \label{equ:dip} \mathcal{I}_{\nu_i, p} = m\mathcal{I}_{857, p} + b + d\mathcal{D}_{p} \end{equation} With $p$ denoting a single $N_{side}$=64 HEALPix pixel \citep{healpix} in the maps $\mathcal{I}_{857}$, $\mathcal{I}_{\nu_i}$, and $\mathcal{D}$. Here $\mathcal{I}_{857}$ is the {\it Planck}~857 GHz map with zodiacal emission, compact sources, and the constant offset of $\S$\ref{sec:zp_abs} removed, smoothed to $1^{\circ}$ resolution. $\mathcal{I}_{\nu_i}$ is the corresponding $1^{\circ}$ resolution {\it Planck}~HFI map with zodiacal emission, CMB anisotropies, and compact sources removed. In the context of Equation \ref{equ:dip}, $\nu_i \in$ \{100, 143, 217, 353, 545\} GHz. 
Note that $\mathcal{I}_{\nu_i}$ is always in the native units of the relevant {\it Planck}~band. $\mathcal{D}$ is a scaling of the Solar dipole pattern oriented toward $(l, b) = (263.99^{\circ}, 48.26^{\circ})$, with unit amplitude. Because $\sim$18,000 pixels satisfy the low H\,\textsc{i} cut, we have an overconstrained linear model with three parameters: $m$, $d$, and $b$. $m$ represents the best-fit slope of {\it Planck}~band $\nu_i$ versus {\it Planck}~857 GHz assuming they are linearly related. $d$ is the residual Solar dipole amplitude, and its best-fit value represents the scaling of the Solar dipole that makes the {\it Planck}~band $\nu_i$ versus 857 GHz correlation most tightly linear. $b$ represents the constant offset that must be subtracted from the band $\nu_i$ map to make its zero level consistent with that of the 857 GHz map. For each band $\nu_i$, we obtain estimates of $m$, $d$, and $b$ by performing a linear least squares fit with uniform weights and iterative outlier rejection. Figure \ref{fig:dip} shows scatter plots of the band $\nu_i$ versus 857 GHz correlation before (left) and after (right) correcting for the best-fit offset and residual dipole, for each $\nu_i \in$ \{143, 217, 353, 545\} GHz. Not only are the tightened correlations striking in these scatter plots, but the residual dipole subtractions appear very successful in the two-dimensional band $\nu_i$ maps themselves. Before performing thermal dust fits, we therefore subtract the best-fit $b$ and $d\mathcal{D}$ from each 143-545 GHz map. The best-fit offsets and residual dipole amplitudes are listed in Table \ref{table:offs}, along with other important per-band parameters, such as the fractional multiplicative calibration uncertainty $c_{\nu}$. We found that a dipole correction alone could not sufficiently rectify the {\it Planck}~100 GHz map (see Figure \ref{fig:harm}). 
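As an illustration of the Equation \ref{equ:dip} regression, the simultaneous offset/slope/dipole fit with uniform weights and iterative outlier rejection can be sketched in a few lines of NumPy. The $3\sigma$ clipping threshold, iteration count, and any synthetic inputs are assumptions for demonstration, not the exact settings of our fits:

```python
import numpy as np

def fit_offset_dipole(i857, i_nu, dipole, n_iter=5, clip=3.0):
    """Fit i_nu = m*i857 + b + d*dipole by linear least squares with
    iterative sigma-clipping of outlier pixels (a sketch of the
    Equation (dip) regression; clip and n_iter are assumptions)."""
    keep = np.ones(i857.size, dtype=bool)
    for _ in range(n_iter):
        # Design matrix for the three-parameter linear model
        A = np.column_stack([i857[keep], np.ones(keep.sum()), dipole[keep]])
        m, b, d = np.linalg.lstsq(A, i_nu[keep], rcond=None)[0]
        # Reject pixels with residuals beyond clip*sigma
        resid = i_nu - (m * i857 + b + d * dipole)
        sigma = resid[keep].std()
        keep = np.abs(resid) < clip * sigma
    return m, b, d
```

With $\sim$18,000 pixels and only three parameters, the problem is strongly overconstrained and the clipping serves mainly to discard pixels contaminated by residual compact sources or imperfect foreground subtraction.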
Therefore, for 100 GHz, we performed a modified version of the Equation \ref{equ:dip} fit, using the following model: \begin{equation} \label{equ:harm} \mathcal{I}_{100, p} = m\mathcal{I}_{857, p} + b + \sum_{l=1}^{4} \sum_{m=-l}^{l} a_{lm}Y_{l}^{m}(\theta_p, \phi_p) \end{equation} Where $Y_{l}^{m}$ are the real spherical harmonics, and the $a_{lm}$ are their corresponding real coefficients. The angle $\phi_p$ is taken simply to be $l_{gal, p}$ and $\theta_p$=$(90^{\circ}-b_{gal, p})$. Thus, we have replaced the Solar dipole term with a sum of 24 spherical harmonic templates, which, when multiplied by the best-fit $a_{lm}$ coefficients and subtracted from {\it Planck}~100 GHz, make the relation between 100 GHz and 857 GHz most tightly linear. Figure \ref{fig:harm} illustrates the improved correlation of 100 GHz vs. 857 GHz when including the spherical harmonic corrections relative to the dipole-only correction. The spherical harmonic decomposition of Equation \ref{equ:harm} did not improve the correlations at higher frequencies enough to warrant replacing the dipole-only correction in those cases. \section{Dust Emission Model} \label{sec:modeling} At sufficiently high frequencies, Galactic thermal dust emission can be adequately modeled as a single MBB with power-law emissivity \citep[e.g. SFD;][]{planckdust}. However, it has long been recognized, particularly in view of the FIRAS spectra, that the dust SED flattens toward the millimeter in a manner which is not consistent with a simple extrapolation of single-MBB models to low frequencies. In the diffuse ISM, \cite{reach95} found an improved fit to the FIRAS data using an empirically motivated superposition of two $\beta$=2 MBBs, one representing a `hot' grain population ($T$$\approx$16$-$21 K), the other a `cold' grain population ($T$$\approx$4$-$7 K).
FDS99 built a more physically motivated two-MBB model, in which different grain emission/absorption properties account for the differing temperatures of each population, and these temperatures are coupled by assuming thermal equilibrium with the same interstellar radiation field (ISRF). The primary FDS99 analysis considered the intrinsic grain properties of each species, for example the emissivity power law indices, to be constant over the sky, and performed a correlation slope analysis to constrain these parameters with FIRAS and DIRBE observations. FDS99 also constructed a DIRBE 240$\mu$m/100$\mu$m ratio to account for temperature variation at $\sim$1.3$^{\circ}$ resolution. In this work we seek to apply the FDS99 emission model to the {\it Planck}~data set, which offers a dramatic enhancement in angular resolution relative to the FIRAS spectra. The {\it Planck}~data thereby allow us to derive an improved temperature correction at near-{\it IRAS}~resolution ($\S$\ref{sec:mcmc}), re-evaluate the best-fit global dust properties ($\S$\ref{sec:global}, $\S$\ref{sec:hier}), and fit additional two-component model parameters as a function of position on the sky ($\S$\ref{sec:lores}). The shape of the two-component model spectrum we will consider is given by: \begin{equation} M_{\nu} \propto \Big[f_{1}q_{1}\Big(\frac{\nu}{\nu_{0}}\Big)^{\beta_1}B_{\nu}(T_1) + f_{2}q_{2}\Big(\frac{\nu}{\nu_0}\Big)^{\beta_2}B_{\nu}(T_2)\Big] \end{equation} Where $B_{\nu}$ is the Planck function, $T_1$ is the `cold' dust temperature, $T_2$ is the `hot' dust temperature, and $\beta_1$ and $\beta_2$ are the emissivity power-law indices of the cold and hot dust components respectively. $q_1$ represents the ratio of FIR emission cross section to optical absorption cross section for species 1, and similarly $q_2$ for species 2. $f_1$ and $f_2$ dictate the relative contributions of the two MBB components to the combined SED. 
Thus, $f_1$ and $f_2$ can be thought of as encoding the mass fraction of each species, although technically $f_1$ ($f_2$) is the optical absorption cross-section weighted mass fraction for species 1 (2). Following the convention of FDS99, we choose $\nu_0$=3000 GHz and take $f_2$=(1$-$$f_1$). Mathematically, this two-MBB model requires specification of seven parameters for every line of sight: $T_1$, $T_2$, $\beta_1$, $\beta_2$, $f_1$, $q_1$/$q_2$ and the normalization of $M_{\nu}$. However, under the assumption that the temperature of each species is determined by maintaining thermal equilibrium with the same ISRF, $T_1$=$T_1$($T_2$, $\beta_1$, $\beta_2$, $q_1/q_2$) is fully determined by these other parameters. $T_1$ is always related to $T_2$ via a simple power law, although the prefactor and exponent depend on the parameters $q_1/q_2$, $\beta_1$ and $\beta_2$ (see FDS99 Equation 14). These considerations still leave us with six potentially free parameters per line of sight. Unfortunately, fitting this many parameters per spatial pixel is not feasible for our full-resolution $6.1$$'$ fits, as these are constrained by only five broadband intensity measurements. Hence, as in FDS99, we deem certain parameters to be ``global'', i.e. spatially constant over the entire sky. In our full-resolution five-band fits, we designate $\beta_1$, $\beta_2$, $f_1$ and $q_1/q_2$ to be spatially constant. This same approach was employed by FDS99, and the globally best-fit values obtained by FDS99 for these parameters are listed in the first row of Table \ref{tab:global}. With these global parameters, FDS99 found $T_2$$\approx$$16.2$K, $T_1$$\approx$$9.4$K to be typical at high-latitude. In $\S$\ref{sec:global}, we discuss the best-fit global parameters favored by the {\it Planck}~HFI data; these are listed in the second row of Table \ref{tab:global}. 
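Assuming, as stated above, that equilibrium equates the power each species radiates per unit optical absorption cross section, and using $\int \nu^{\beta}B_{\nu}(T)\,d\nu \propto T^{4+\beta}\,\Gamma(4+\beta)\,\zeta(4+\beta)$, the coupling $T_1$($T_2$, $\beta_1$, $\beta_2$, $q_1/q_2$) can be evaluated directly. The following sketch (not FDS99's own code; the dimensionless integral is evaluated numerically) reproduces the $T_1$ values quoted in Table \ref{tab:global}:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
K = 1.380649e-23     # Boltzmann constant [J/K]
NU0 = 3000e9         # reference frequency nu_0 [Hz]

def _g(beta):
    """Numerically evaluate int_0^inf x^(3+beta)/(e^x - 1) dx,
    which equals Gamma(4+beta)*zeta(4+beta)."""
    x = np.linspace(1e-4, 60.0, 60001)
    y = x ** (3.0 + beta) / np.expm1(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1] - x[0])))

def t1_from_t2(t2, beta1=1.63, beta2=2.82, q1_over_q2=8.22):
    """Cold dust temperature T1 implied by T2 under thermal
    equilibrium with a common ISRF: the balance
    q1*(nu/nu0)^b1 emission power = q2*(nu/nu0)^b2 emission power,
    using int nu^b B_nu dnu = (2h/c^2)(kT/h)^(4+b) * _g(b)."""
    rhs = (1.0 / q1_over_q2) * NU0 ** (beta1 - beta2) \
        * (K * t2 / H) ** (4.0 + beta2) * _g(beta2)
    return (H / K) * (rhs / _g(beta1)) ** (1.0 / (4.0 + beta1))
```

Note that this yields the power-law relation between $T_1$ and $T_2$ described above: the exponent is $(4+\beta_2)/(4+\beta_1)$ and the prefactor absorbs $q_1/q_2$ and the $\Gamma\zeta$ factors.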
\begin{deluxetable*}{llrrrrrrrrrr} \tabletypesize{\scriptsize} \tablecolumns{12} \tablewidth{0pc} \tablecaption{\label{tab:global} Global Model Parameters} \tablehead{ \colhead{Number} & \colhead{Model} & \colhead{$f_1$} & \colhead{$q_1/q_2$} & \colhead{$\beta_1$} & \colhead{$\beta_2$} & \colhead{$T_2$} & \colhead{$T_1$} & \colhead{$n$} & \colhead{D.O.F.} & \colhead{$\chi^2$} & \colhead{$\chi^2_{\nu}$} } \startdata 1 & FDS99 best-fit & 0.0363 & 13.0 & 1.67 & 2.70 & 15.72 & 9.15 & 1.018 & 7 & 23.9 & 3.41 \\ 2 & FDS99 general & 0.0485 & 8.22 & 1.63 & 2.82 & 15.70 & 9.75 & 0.980 & 3 & 3.99 & 1.33 \\ 3 & single MBB & 0.0 & ... & ... & 1.59 & 19.63 & ... & 0.999 & 6 & 33.9 & 5.65 \\ [-2ex] \enddata \end{deluxetable*} Fixing the aforementioned four global parameters, our full-resolution, five-band fits have two remaining free parameters per line of sight: the hot dust temperature $T_2$, which determines the SED shape, and the normalization of $M_{\nu}$, which determines the SED amplitude. In the lower-resolution fits of $\S$\ref{sec:lores}, which include all HFI bands, we will allow $f_1$ to be a third free parameter, still holding $\beta_1$, $\beta_2$, and $q_1/q_2$ fixed. To calculate the optical depth in the context of this model, we assume optically thin conditions, meaning that $\tau_{\nu}$ = $M_{\nu}/S_{\nu}$, where $M_{\nu}$ is the appropriately scaled two-component model intensity and the source function is given by: \begin{equation} \label{eqn:source} S_{\nu} = \frac{f_1q_1(\nu/\nu_0)^{\beta_1}B_{\nu}(T_1) + f_2q_2(\nu/\nu_0)^{\beta_2}B_{\nu}(T_2)}{f_1q_1(\nu/\nu_0)^{\beta_1}+f_2q_2(\nu/\nu_0)^{\beta_2}} \end{equation} \section{Predicting the Observed SED} \label{sec:bpcorr} The thermal dust emission model of $\S$\ref{sec:modeling} predicts the flux density per solid angle $M_{\nu}$ in e.g. MJy/sr for any single frequency $\nu$. In practice, however, we wish to constrain our model using measurements in the broad {\it Planck}/DIRBE bandpasses, each with $\Delta\nu/\nu\sim0.3$.
Both the {\it Planck}~and DIRBE data products quote flux density per solid angle in MJy/sr under the `IRAS convention'. More precisely, each value reported in the {\it Planck}~maps gives the amplitude of a power-law spectrum with $\alpha$=$-1$, evaluated at the nominal band center frequency, such that this spectrum integrated against the transmission reproduces the bolometer-measured power. Because our model spectra do not conform to the $\alpha$=$-1$ convention, we have computed color correction factors to account for the MBB($T$, $\beta$) spectral shape and the transmission as a function of frequency: \begin{equation} \label{equ:bpcorr} b_{\nu_i}(T, \beta) = \frac{\int \nu^{\beta}B_{\nu}(T)\mathcal{T}_{\nu_i}(\nu) d\nu \bigg[\int (\nu_{i,c}/\nu)\mathcal{T}_{\nu_i}(\nu) d\nu\bigg]^{-1}}{\nu_{i,c}^{\beta}B_{\nu_{i,c}}(T)} \end{equation} Here $\nu_{i,c}$ is the nominal band center frequency of band $\nu_i$, with $\nu_{i,c} \in$ \{100, 143, 217, 353, 545, 857, 1249.1352, 2141.3747, 2997.92458\} GHz. $\mathcal{T}_{\nu_i}(\nu)$ represents the relative transmission as a function of frequency for band $\nu_i$. For the HFI maps, $\mathcal{T}_{\nu_i}(\nu)$ is given by the {\it Planck}~transmission curves provided in the file \verb|HFI_RIMO_R1.10.fits| \citep{planckresponse}. For \verb|i100| and DIRBE 140$\mu$m, 240$\mu$m, we have adopted the corresponding DIRBE transmission curves. The two-component model prediction in band $\nu_i$ under the IRAS convention, termed $\tilde{I}_{\nu_i}$, is then constructed as a linear combination of color-corrected MBB terms: \begin{equation} \label{equ:iras} \tilde{I}_{\nu_i} \propto \sum_{k=1}^{2} b_{\nu_i}(T_k, \beta_k) f_k q_k (\nu_{i,c}/\nu_0)^{\beta_k} B_{\nu_{i,c}}(T_k) \end{equation} The color correction of Equation \ref{equ:bpcorr} therefore allows us to predict $\tilde{I}_{\nu_i}$ by computing monochromatic flux densities at the central frequency $\nu_{i,c}$ and then multiplying by factors $b_{\nu_i}(T, \beta)$. 
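The color correction integral of Equation \ref{equ:bpcorr} is straightforward to evaluate numerically given a sampled transmission curve. The sketch below uses an idealized tophat bandpass as a stand-in for the real HFI/DIRBE response curves, and the Planck function is written up to constant factors, which cancel in the ratio:

```python
import numpy as np

H = 6.62607015e-34  # Planck constant [J s]
K = 1.380649e-23    # Boltzmann constant [J/K]

def _trapz(y, x):
    """Trapezoidal integration, kept explicit for portability."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def planck_bnu(nu, t):
    """Planck function B_nu(T) up to constant prefactors."""
    return nu ** 3 / np.expm1(H * nu / (K * t))

def color_correction(nu_c, nu, trans, t, beta):
    """Color correction b_nu(T, beta) of Equation (bpcorr) for a band
    centered at nu_c, given transmission trans sampled on grid nu."""
    num = _trapz(nu ** beta * planck_bnu(nu, t) * trans, nu)
    iras = _trapz((nu_c / nu) * trans, nu)  # alpha = -1 convention term
    return (num / iras) / (nu_c ** beta * planck_bnu(nu_c, t))
```

As a sanity check, the correction tends to unity as the bandpass becomes narrow, and is a few percent for a $\Delta\nu/\nu\sim0.3$ band at these temperatures.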
In practice, we interpolated the color corrections off of a set of precomputed, one-dimensional lookup tables each listing $b_{\nu_i}(T, \beta)$ for a single $\beta$ value as a function of $T$. We thus avoided the need to interpolate in both $\beta$ and $T$ by computing only a small set of one dimensional correction factors for the particular set of $\beta$ values of interest (e.g. $\beta$=1.67, 2.7, 1.63, 2.82 ..., see Table \ref{tab:global}). This color correction approach makes the MCMC sampling described in $\S$\ref{sec:mcmc} much more computationally efficient by circumventing the need to perform the integral in the numerator of Equation \ref{equ:bpcorr} on-the-fly for each proposed dust temperature. We have chosen to compute the color corrections on a per-MBB basis because this approach is very versatile; all possible two-component (and single-MBB) models are linear combinations of MBBs, so we can apply all of our color correction machinery even when we allow parameters other than temperature (e.g. $f_1$) to vary and thereby modify the dust spectrum shape. With these color corrections and the formalism established in $\S$\ref{sec:modeling} in hand, we can mathematically state the model we will use e.g. during MCMC sampling to predict the observed SED. The predicted observation in band $\nu_i$ is given by: \begin{equation} \label{eqn:inten} \tilde{I}_{\nu_i} = \frac{\sum\limits_{k=1}^{2} b_{\nu_i}(T_k, \beta_k) f_k q_k (\nu_{i,c}/\nu_0)^{\beta_k} B_{\nu_{i,c}}(T_k) u_{\nu_i}^{-1}}{\sum\limits_{k=1}^{2} b_{545}(T_k, \beta_k) f_k q_k (545 \textrm{GHz}/\nu_0)^{\beta_k} B_{545}(T_k)}\tilde{I}_{545} \end{equation} This equation is quite similar to Equation \ref{equ:iras}, but with two important differences. First, the normalization of $\tilde{I}_{\nu_i}$ is now specified by $\tilde{I}_{545}$, which represents the IRAS convention {\it Planck}~545 GHz intensity. 
The denominator serves to ensure that, for the case of $\nu_i$=545 GHz, $\tilde{I}_{545}$ is self-consistent. Second, each term in the numerator is multiplied by a unit conversion factor $u_{\nu_i}^{-1}$. This factor is necessary because some of the {\it Planck}~maps of interest have units of $K_{CMB}$ (100-353 GHz), while the remaining maps (545-3000 GHz) have units of MJy/sr. We have adopted the strategy of predicting each band in its native units, whether MJy/sr or $K_{CMB}$. For this reason, we always evaluate $B_{\nu_{i,c}}$ in Equation \ref{eqn:inten} in MJy/sr and let $u_{\nu_i}$=1 (dimensionless) for $\nu_i$$\ge$545 GHz. For $\nu_i$$\le$353 GHz, $u_{\nu_i}$ represents the conversion factor from $K_{CMB}$ to MJy/sr, given by \cite{planckresponse} Equation 32. \section{Global Model Parameters} \label{sec:global} \begin{figure*} \begin{center} \epsfig{file=dirbe_slopes.eps, width=6.5in} \caption{\label{fig:dirbe_slopes} Linear fits of SFD-reprocessed DIRBE 240$\mu$m (left), 140$\mu$m (center), and 100$\mu$m (right) as a function of {\it Planck}~857 GHz. The red lines illustrate the DIRBE correlation slopes used in our dust emission model optimization of $\S$\ref{sec:global}.} \end{center} \end{figure*} While we ultimately aim to obtain {\it Planck}-resolution maps of the spatially varying dust temperature and optical depth, we start by applying the machinery/formalism thus far developed to reassess the best-fit global two-component model parameters in light of the {\it Planck}~HFI data. FDS99 determined the best-fit values of the two-component model global parameters $\beta_1$, $\beta_2$, $q_1/q_2$ and $f_1$ via a correlation slope analysis incorporating DIRBE and FIRAS data. Here we seek to estimate these same global parameters via an analogous correlation slope analysis in which we swap the {\it Planck}~HFI maps for FIRAS at low frequencies, while still relying on DIRBE at higher frequencies. 
We also seek to determine via this correlation slope analysis whether or not the combination of {\it Planck}+DIRBE data favors two-component models over single-MBB models in the same way that the FIRAS+DIRBE data did in the FDS99 analysis. In the two-component model case, based on a spectrum of {\it Planck}~and DIRBE correlation slopes, we wish to obtain estimates for six free parameters: $\beta_1$, $\beta_2$, $q_1/q_2$, $f_1$, $T_2$ and the overall spectrum normalization $n$. The constraints we employ are the correlation slopes of each of the {\it Planck}~HFI bands, as well as DIRBE 100$\mu$m (3000 GHz), 140$\mu$m (2141 GHz) and 240$\mu$m (1250 GHz) relative to {\it Planck}~857 GHz, i.e. $dI_{\nu_i}/dI_{857}$. We will refer to the slope for band $\nu_i$ relative to {\it Planck}~857 GHz as $s_{857,\nu_i}$. The slopes for {\it Planck}~100-545 GHz are taken to be those derived from the relative zero level fits of $\S$\ref{sec:relzero}, and are illustrated by the dashed red lines in the right-hand column plots of Figure \ref{fig:dip}. The 857 GHz slope is unity by definition. At 1250, 2141 and 3000 GHz, we use the SFD-reprocessed DIRBE maps. For each DIRBE band, we determine $s_{857,\nu_i}$ by performing a linear fit to DIRBE as a function of {\it Planck}~857 GHz, after both have been zodiacal light subtracted and smoothed to $1^{\circ}$ FWHM, also restricting to the low H\,\textsc{i} mask of $\S$\ref{sec:relzero} (see Figure \ref{fig:dirbe_slopes}). Counting 857 GHz, we thus have nine correlation slope constraints for six free parameters. Including DIRBE 140$\mu$m and 240$\mu$m is critical in making the problem at hand sufficiently overconstrained, and also in providing information near the peak of the dust SED at $\sim$160$\mu$m, which is particularly sensitive to the presence of a single versus multiple MBB components.
We assume an uncertainty of 2\% on each of the $s_{857,\nu_i}$ and minimize the chi-squared given by: \begin{equation} \label{equ:chi2corr} \chi^2 = \sum_{i=0}^{8}\frac{\big[s_{857,\nu_{i}}-n\frac{\tilde{I}_{\nu_i}(\beta_1, \beta_2, f_1, q_1/q_2, T_2)}{\tilde{I}_{857}(\beta_1, \beta_2, f_1, q_1/q_2, T_2)}\big]^2}{\sigma_{s_{857,\nu_i}}^2} \end{equation} Where $\nu_i$$\in$\{100, 143, 217, 353, 545, 857, 1250, 2141, 3000\} GHz. Note that this formula encompasses the general two-component case; in the single-MBB case, we take $f_1$=0 and hence $q_1/q_2$, $\beta_1$ and $T_1$ are immaterial, but Equation \ref{equ:chi2corr} still applies. Note also that no `priors' are included to preferentially drag our results towards agreement with those of FDS99. The correlation slopes $s_{857, \nu}$ and their adopted uncertainties are listed in the fifth and sixth columns of Table \ref{table:offs}. The results of our chi-squared minimization are listed in Table \ref{tab:global}. First (model 1), we fix $\beta_1$, $\beta_2$, $q_1/q_2$ and $f_1$ to the best-fit values from the FDS99 analysis based on DIRBE+FIRAS. We then allow $n$ and $T_2$ to vary so as to best match our DIRBE+{\it Planck}~spectrum. This results in a reduced chi-squared of $\chi^2_{\nu}$=3.41. Reassuringly, $n$ is quite close to unity. It should be noted though that our best-fit $T_2$ is $\sim$0.5 K lower than that found by FDS99 for the same values of $\beta_1$, $\beta_2$, $q_1/q_2$ and $f_1$. Next (model 2), we consider the fully-general two-component model, allowing all six model parameters to vary. In this case, the reduced chi-squared of the best-fit parameters is $\chi^2_{\nu}$=1.33, signifying that our introduction of four additional free parameters is justified. The best-fit $\beta_1$ and $\beta_2$ are both consistent with the corresponding FDS99 values to within 5\%. $q_1/q_2$=8.22 represents a $\sim$40\% lower value than found by FDS99, while $f_1$=0.0485 represents a $\sim$34\% increase relative to FDS99.
Again, our best-fit high-latitude $T_2$ is $\sim$0.5 K lower than the typical value of $\langle T_2 \rangle$=16.2 K from FDS99. Lastly, we calculate the optimal single-MBB fit to the {\it Planck}+DIRBE correlation slope spectrum. The best-fit single MBB has $\beta$=1.59, $T$=19.63 K, and $\chi^2_{\nu}$=5.65, indicating a significantly worse fit to the data than our best-fit two-component model (model 2). Thus, our {\it Planck}+DIRBE correlation slope analysis has confirmed the main conclusion of FDS99 and others, e.g. \cite{reach95}, that the FIR/submm dust SED prefers two MBBs to just one, for the first time independently of FIRAS. Still, it is apparent that the $\chi^2_{\nu}$ improvement of two-component over single-MBB models found here is substantially less dramatic ($\Delta\chi^2_{\nu}$=4.32) than that found in FDS99 ($\Delta\chi^2_{\nu}$=29.2). This is likely attributable to the exquisite narrow-band frequency coverage of FIRAS, especially near the dust SED peak, which makes FIRAS a better suited data set than {\it Planck}~for a detailed analysis of the globally best-fit dust SED model. In $\S$\ref{sec:hier}, we confirm the basic conclusions of this section via an approach in which we allow the dust temperature to vary spatially. The analysis of $\S$7.5 also allows us to confirm the conclusions of this section while including a fully detailed uncertainty model; our assumption of 2\% per-band uncertainties on the correlation slopes is largely a statement that we seek a model accurate to 2\% from 100-3000 GHz, although the fact that our $\chi^2_{\nu}$ values are order unity suggests that the assumed uncertainties are not grossly over- or underestimated. \section{MCMC Fitting Procedure} \label{sec:fitting} The following subsections detail our procedure for constraining the two-component dust emission model parameters which are permitted to vary spatially.
We use the MCMC procedure described below to perform two types of fits: (1) full-resolution $6.1'$ fits, in which only the SED normalization and dust temperatures vary spatially, and (2) lower-resolution fits in which $f_1$ is also allowed to vary from one line of sight to another. \begin{figure} \begin{flushleft} \epsfig{file=sed.eps, width=3.3in} \caption{\label{fig:sed} Top: Summary of observed SEDs and best-fit thermal dust emission models for $\sim$13,000 $N_{side}$=2048 pixels with similar best-fit temperatures and optical depths (15.695 K$<$$T_2$$<$15.705 K, 2.3$\times$$10^{-5}$$<$$\tau_{545}$$<$2.5$\times$$10^{-5}$). This region of parameter space was arbitrarily chosen in order to obtain a large number of pixels within a narrow $T_2$ interval and small fractional range in $\tau_{545}$. Black points represent the average observed intensities after rescaling each pixel to $\tau_{545}$=2.4$\times$$10^{-5}$, while red error bars represent the typical per-pixel uncertainties at each frequency. For each pixel, the best-fit two-component model is derived via the MCMC procedure of $\S$\ref{sec:mcmc}, based on {\it Planck}~217-857 GHz and SFD 100$\mu$m at full $6.1'$ resolution. Note that the two lowest-frequency data points were not used to derive the average two-component fit shown (blue line), while the three lowest-frequency data points were not used to derive the average \cite{planckdust} single-MBB fit shown (cyan line). Bottom: Comparison of average data, average two-component model and average \cite{planckdust} single-MBB model after dividing out the average two-component model. Black error bars represent the uncertainty on the mean observed spectrum.
The two-component fit is consistent with the average data from 100-3000 GHz, whereas extrapolating the \cite{planckdust} model to 100-217 GHz yields predictions which are significantly low relative to the observed SED.} \end{flushleft} \end{figure} \begin{figure*} \begin{center} \epsfig{file=posteriors.eps, width=7.0in} \caption{\label{fig:post} Gridded posterior PDFs for three $N_{side}$=2048 HEALPix pixels, based on {\it Planck}~217-857 GHz and SFD 100$\mu$m at full $6.1'$ resolution. The colorscale is linear in $\log(P)$, with black corresponding to the maximum of $\log(P)$ and white representing $max[\log(P)]-5$. Light green crosses and ellipses mark the best-fit parameters and $1\sigma$ uncertainties based on our MCMC sampling of the posteriors. Our MCMC parameter and uncertainty estimates are in good agreement with those based on gridded posteriors. These three pixels are also representative in that we find the posterior distributions from Equation \ref{eqn:post} are in general extremely well-behaved, showing no multimodality or other pathological qualities. Left: Low S/N pixel at high latitude in the Galactic north. Center: High S/N pixel in the Polaris flare region. Right: Low S/N pixel at high latitude in the Galactic south.} \end{center} \end{figure*} \subsection{Pixelization} \label{sec:pix} For the purpose of fitting, we divide the sky into $\sim$50 million pixels of angular size $\sim$1.72$'$, defined by the HEALPix pixelization in Galactic coordinates, with $N_{side}$=2048. This pixelization is convenient because it is the format in which the {\it Planck}~HFI maps were released, and because it adequately samples the $6.1'$ FWHM maps under consideration in our full-resolution fits. Our procedure will fit the intensity measurements in each spatial pixel independently. 
\subsection{Sampling Parameters} \label{sec:samp} As discussed in $\S$\ref{sec:modeling}, our full-resolution fits consider the ``global'' parameters $f_1$, $q_1/q_2$, $\beta_1$, $\beta_2$ to be spatially constant. We employ the best-fit {\it Planck}+DIRBE global parameters of Table \ref{tab:global}, model 2. For each line of sight, only the dust spectrum normalization and dust temperatures are allowed to vary. In order to predict the dust SED for a given pixel, we are thus left with two remaining degrees of freedom, and must choose an appropriate set of two parameters to sample and thereby constrain via MCMC. To determine the SED normalization in each pixel, we draw samples in $\tilde{I}_{545}$, the `IRAS convention' intensity in the {\it Planck}~545 GHz bandpass, as defined in Equation \ref{eqn:inten}. With the four aforementioned global parameters fixed, the dust spectrum shape is determined entirely by the two dust temperatures, which are coupled. To constrain the dust temperatures, we sample in $T_2$, the hot dust temperature. For each sample in $T_2$, we compute the corresponding value of $T_1$, thereby fully specifying the SED shape. In principle, we could sample in either $T_1$ or $T_2$, but have chosen to sample in $T_2$ because emission from this component dominates in the relatively high frequency bands which most strongly constrain the dust temperatures. For the lower resolution fits described in $\S$\ref{sec:lores}, we sample in three parameters: $\tilde{I}_{545}$, $T_2$, and $f_1$. \subsection{Markov Chains} \label{sec:mcmc} In our full-resolution fits, we use an MCMC approach to constrain the parameters $\tilde{I}_{545}$ and $T_2$. For each pixel, we run a Metropolis-Hastings (MH) Markov chain sampling the posterior probability of the two parameters $\tilde{I}_{545}$ and $T_2$ given the observed 217-3000 GHz thermal dust SED.
More specifically, for each pixel, we are sampling the posterior given by: \begin{equation} \label{eqn:post} P(\tilde{I}_{545}, T_2|\mathbf{I}) \propto \mathcal{L}(\mathbf{I}|\tilde{I}_{545}, T_2)P(T_2)P(\tilde{I}_{545}) \end{equation} Here $\mathbf{I}$ denotes the vector of observed thermal dust intensities quoted under the `IRAS convention': $\mathbf{I}$ = ($I_{217}$, $I_{353}$, $I_{545}$, $I_{857}$, $I_{3000}$). The likelihood function is given by: \begin{equation} \label{equ:like} \mathcal{L}(\mathbf{I}|\tilde{I}_{545}, T_2) = \textrm{exp}\Big[-\frac{1}{2}(\mathbf{I}-\mathbf{\tilde{I}})^T\Sigma^{-1}(\mathbf{I}-\mathbf{\tilde{I}})\Big] \end{equation} Here $\mathbf{\tilde{I}}$ is the vector of predicted observations based on Equation \ref{eqn:inten} and the proposed values of $\tilde{I}_{545}$ and $T_2$: $\mathbf{\tilde{I}} = $($\tilde{I}_{217}$, $\tilde{I}_{353}$, $\tilde{I}_{545}$, $\tilde{I}_{857}$, $\tilde{I}_{3000}$). $\Sigma$ is the per-pixel covariance matrix constructed based on the uncertainties in the observed intensities: \begin{equation} \label{equ:covariance} \Sigma = \begin{pmatrix} \sigma^2_{217} & \ldots & \rho_{217,3000}\sigma_{217}\sigma_{3000} \\ \vdots & \ddots & \vdots \\ \rho_{3000,217}\sigma_{3000}\sigma_{217} & \ldots & \sigma^2_{3000} \end{pmatrix} \end{equation} For each pixel $p$ in band $\nu_i$, the variance of the measured value $I_{\nu_i}(p)$ is taken to be: \begin{multline} \label{equ:errorbudget} \sigma^2_{\nu_i}(p) = c^2_{\nu_i}I_{\nu_i}^2(p) + c^2_{\nu_i}\sigma_{CMB, \nu_i}^2 + (\delta O_{\nu_i})^2 \\ + n_{\nu_i}^2 + \sigma^2_{CO, \nu_i}(p) + \sigma^2_{CIBA, \nu_i} \end{multline} This error budget is modeled after \cite{planckdust} Equation B.1, but with some modifications and additions. The first term accounts for the multiplicative uncertainty on the input maps. Table \ref{table:offs} lists the multiplicative calibration uncertainty $c_{\nu}$ for each band. These values are taken from Table 11 of \cite{planckcalib}. 
The second term represents an uncertainty due to our subtraction of the \verb|SMICA| CMB model. The analogous term in \cite{planckdust} Equation B.1 is ($c_{\nu}$$\times$\verb|SMICA|($p$))$^2$, i.e. an uncertainty proportional to the CMB model amplitude in each pixel. Because this term's spatial dependence can imprint the CMB anisotropies on the derived parameters, we have chosen to replace \verb|SMICA|($p$) with a spatially constant, RMS value for the CMB amplitude, $\sigma_{CMB, \nu_i}$. $\delta O_{\nu_i}$ represents the uncertainty in the band $\nu_i$ zero level offset, and the values of $\delta O_{\nu_i}$ can be read off from the second column of Table \ref{table:offs}. $n_{\nu_i}$ represents the instrumental noise in band $\nu_i$. Because using per-pixel noise estimates based on the {\it Planck}~ \verb|ii_cov| parameter can imprint features of the survey pattern onto the derived parameters, we have adopted a conservative, spatially constant value of $n_{\nu_i}$ for each band. These values of $n_{\nu_i}$ are listed in Table \ref{table:offs}. The next term accounts for the uncertainty on the CO emission correction, taking $\sigma_{CO, \nu_i}(p)$=0.15$\times$$I_{CO, \nu_i}(p)$ (see $\S$\ref{sec:mole}, specifically Equation \ref{equ:molcorr}). Finally, we include a term to account for the RMS amplitude of the cosmic infrared background anisotropy (CIBA) in band $\nu_i$, $\sigma_{CIBA, \nu_i}$. The values for the CIBA RMS amplitudes are obtained by assuming a $T$=18.3 K, $\beta$=1.0 MBB spectrum for the CIB, with 857 GHz normalization from \cite{ciba}. The CIBA not only contributes to the per-band variance $\sigma^2_{\nu_i}$, but also to the inter-frequency covariances; this is why we have included the off-diagonal terms in the covariance matrix of Equation \ref{equ:covariance}. In our noise model, the CIBA is the only source of inter-frequency covariance. 
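As a concrete illustration, the per-band variance budget of Equation \ref{equ:errorbudget} can be sketched in a few lines of Python; the function and argument names below are ours, not part of any released pipeline, and representative per-band values must be taken from Table \ref{table:offs}.

```python
import numpy as np

def band_variance(I, c, sigma_cmb, delta_O, n_inst, I_co, sigma_ciba):
    """Per-pixel variance for one band, term by term as in the error
    budget: multiplicative calibration, CMB subtraction, zero-level
    offset, instrumental noise, CO correction, and CIBA contributions.
    All quantities in MJy/sr except the dimensionless calibration
    fraction c."""
    sigma_co = 0.15 * I_co  # CO correction uncertainty (Sec. on molecular emission)
    return (c**2 * I**2
            + c**2 * sigma_cmb**2
            + delta_O**2
            + n_inst**2
            + sigma_co**2
            + sigma_ciba**2)
```

Note that only the calibration and CO terms vary from pixel to pixel; the CMB, offset, noise, and CIBA terms are spatially constant per band by construction.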
Thus, the off-diagonal covariance matrix element between bands $\nu_i$ and $\nu_j$ is given by: \begin{equation} \Sigma_{ij} = \rho_{\nu_i, \nu_j}\sigma_{\nu_i}\sigma_{\nu_j} = \rho_{CIBA, \nu_i, \nu_j}\sigma_{CIBA, \nu_i}\sigma_{CIBA, \nu_j} \end{equation} with the values of $\rho_{CIBA, \nu_i, \nu_j}$ taken from \cite{covarciba}. The approach we have taken in accounting for the CIBA is similar to that of \cite{planckdust}, Appendix C, in that we treat the CIBA amplitude in each pixel as a Gaussian random draw. However, instead of performing a separate analysis to gauge the uncertainty on derived dust parameters due to the CIBA, we allow the CIBA covariance to propagate naturally into our uncertainties via the likelihood function. Still, our treatment of the CIBA is a major oversimplification; a more sophisticated approach that accounts for the detailed CIBA spatial structure, or even removes the CIBA by subtraction, would be preferable. We include the following prior on the hot dust temperature: \begin{equation} \label{equ:t2prior} P(T_2) = \mathcal{N}(T_2|\bar{T}_2, \sigma_{\bar{T}_2}) \end{equation} with $\bar{T}_2$ = 15.7 K and $\sigma_{\bar{T}_2}$ = 1.4 K. The $T_2$ prior mean is chosen based on the typical high-latitude $T_2$ value derived from the correlation slope analysis of $\S$\ref{sec:global}. We find, as desired, that this relatively broad $T_2$ prior has little influence on the derived temperatures, other than to regularize the rare pixels with one or more defective intensities which might otherwise yield unreasonable parameter estimates. In principle, there can also be an informative prior on $\tilde{I}_{545}$. However, we have chosen to assume a uniform prior on the SED normalization and, as a matter of notation, will omit $P(\tilde{I}_{545})$ henceforward. In practice we always perform computations using logarithms of the relevant probabilities. 
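The covariance assembly of Equation \ref{equ:covariance} and the log-posterior implied by Equations \ref{equ:like} and \ref{equ:t2prior} can be sketched as follows. This is a simplified illustration under our own naming conventions; constant normalization terms are dropped, since only posterior ratios enter the Metropolis-Hastings acceptance rule.

```python
import numpy as np

def build_covariance(sigma, sigma_ciba, rho_ciba):
    """Per-pixel covariance: off-diagonals from the CIBA alone (the only
    inter-frequency term in our noise model), diagonal from the full
    per-band error budget. `rho_ciba` is the CIBA correlation matrix."""
    Sigma = rho_ciba * np.outer(sigma_ciba, sigma_ciba)
    np.fill_diagonal(Sigma, sigma**2)
    return Sigma

def log_posterior(I_obs, I_model, Sigma, T2, T2_bar=15.7, T2_sig=1.4):
    """Gaussian log-likelihood plus the Gaussian T2 prior; the uniform
    I545 prior contributes only a constant and is omitted."""
    r = I_obs - I_model
    return (-0.5 * r @ np.linalg.solve(Sigma, r)
            - 0.5 * ((T2 - T2_bar) / T2_sig)**2)
```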
\begin{figure} \begin{center} \epsfig{file=tcomparison.eps, width=3.3in} \caption{\label{fig:comparison} Comparison of temperature maps based on FIR dust emission over a $10.5^{\circ}\times8.3^{\circ}$ region centered about $(l,b) = (111.6^{\circ}, 18.3^{\circ})$. Top: SFD temperature map based on DIRBE $100\mu$m and $240\mu$m, with $\sim$1.3$^{\circ}$ resolution. Center: $6.1'$ resolution two-component temperature based on {\it Planck}~217-857 GHz and SFD 100$\mu$m. Bottom: \cite{planckdust} single-MBB temperature map based on {\it Planck}~353-857 GHz and 100$\mu$m data, with $5.1'$ FWHM. Both temperature maps incorporating {\it Planck}~observations clearly show a major improvement in angular resolution relative to SFD.} \end{center} \end{figure} For each pixel, we initialize the Markov chain with parameters $\tilde{I}_{545}$ = $I_{545}$ and $T_2$ consistent with the FDS99 DIRBE 100$\mu$m/240$\mu$m ratio map $\mathscr{R}$. The initial proposal distribution is a two-dimensional normal distribution, with $\sigma_{T_2}$=0.25 K, $\sigma_{\tilde{I}_{545}}$=max(0.01$\times$$I_{545}$, 0.05 MJy/sr) and $\rho_{T_2, \tilde{I}_{545}}$=0. We run 5 iterations of burn-in, each consisting of 500 MH steps. After each burn-in iteration, we rescale the proposal distribution so as to ultimately attain an acceptance fraction $f_{acc}$ as close as possible to the optimal value $f_{opt}=0.234$. This is accomplished by multiplying the proposal distribution standard deviations by $f_{acc}$/$f_{opt}$. After burn-in, we estimate the parameters and their uncertainties by performing 10,000 sampling steps, with $T_{2, j}$ and $\tilde{I}_{545, j}$ denoting the proposed parameter values at the $j^{th}$ step since the end of burn-in. 
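The burn-in procedure with proposal rescaling can be sketched as below; `logpost` stands for the per-pixel log-posterior, and the helper is an illustrative reimplementation rather than our production code.

```python
import numpy as np

F_OPT = 0.234  # target acceptance fraction

def burn_in(logpost, theta0, prop_std, n_iter=5, n_steps=500, seed=0):
    """Metropolis-Hastings burn-in: after each of n_iter iterations of
    n_steps steps, multiply the proposal widths by f_acc / F_OPT so as
    to drive the acceptance fraction toward the optimal value."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    prop_std = np.asarray(prop_std, float)
    lp = logpost(theta)
    for _ in range(n_iter):
        n_acc = 0
        for _ in range(n_steps):
            prop = theta + prop_std * rng.standard_normal(theta.size)
            lp_prop = logpost(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
                n_acc += 1
        f_acc = n_acc / n_steps
        # rescale toward optimal acceptance; floor guards against f_acc = 0
        prop_std = prop_std * max(f_acc, 1.0 / n_steps) / F_OPT
    return theta, prop_std
```

After burn-in, the same proposal is used for the 10,000 sampling steps, and a second full pass reuses the estimated parameter covariance as the proposal shape.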
From these 10,000 samples, we compute estimates of each parameter's mean, $\langle T_2 \rangle$ = $\langle T_{2, j} \rangle$, $\langle \tilde{I}_{545} \rangle$ = $\langle \tilde{I}_{545, j} \rangle$, of each parameter's variance, $\sigma^2_{T_2}$ = $\langle T^2_{2, j} \rangle-\langle T_{2, j} \rangle ^2$, $\sigma^2_{\tilde{I}_{545}}$=$\langle \tilde{I}^2_{545, j} \rangle-\langle \tilde{I}_{545, j} \rangle ^2$ and of the covariance $\rho_{T_2, \tilde{I}_{545}}\sigma_{T_2}\sigma_{\tilde{I}_{545}}$=$\langle (T_{2,j}-\langle T_2 \rangle)(\tilde{I}_{545, j} - \langle \tilde{I}_{545} \rangle) \rangle$. After obtaining this initial estimate of the covariance matrix for each pixel, we re-run a second iteration of the entire MCMC procedure, starting from the first burn-in period. On this iteration, for each pixel, we begin with a proposal distribution that is a two-dimensional Gaussian with covariance equal to the first-pass covariance estimate. This gives each pixel's proposal distribution approximately the `right shape', whereas on the first pass we started by simply guessing the relative widths of the proposal distribution in $\tilde{I}_{545}$, $T_2$, and also assumed that $\rho_{T_2, \tilde{I}_{545}}$=0. Lastly, during post burn-in sampling, we also estimate the monochromatic two-component intensity at 545 GHz, $M_{545}$ = $\langle M_{545, j} \rangle$ = $\langle \tilde{I}_{545, j}/b_{545}(T_{2,j}, \beta_2) \rangle$, its variance, and the 545 GHz optical depth $\tau_{545}$=$\langle \tau_{545, j} \rangle$ = $\langle M_{545, j}/S_{545, j} \rangle$ and its variance. $\tau_{545}$ and $M_{545}$ are more readily useful than the sampling parameters themselves for translating our fit results into predictions of reddening ($\S$\ref{sec:ebv}) and thermal dust emission ($\S$\ref{sec:lofreq}), respectively. At high Galactic latitude, we find a typical $T_2$ uncertainty of 0.45 K, and typical $\tilde{I}_{545}$ fractional uncertainty of 13\%. 
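Computing the post-burn-in summary statistics from the stored samples is straightforward; the sketch below uses our own (hypothetical) function name and returns the quantities that seed the second-pass proposal distribution.

```python
import numpy as np

def summarize(T2_samps, I545_samps):
    """Post-burn-in summaries: per-parameter means and variances and
    the parameter covariance <(T2_j - <T2>)(I545_j - <I545>)>, which
    defines the second-pass Gaussian proposal shape."""
    mT, mI = T2_samps.mean(), I545_samps.mean()
    return {"T2": mT,
            "I545": mI,
            "var_T2": T2_samps.var(),
            "var_I545": I545_samps.var(),
            "cov": np.mean((T2_samps - mT) * (I545_samps - mI))}
```

Derived quantities such as $M_{545}$ and $\tau_{545}$ follow the same pattern: evaluate the transformation at each sample and take the mean and variance of the transformed chain.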
Figure \ref{fig:sed} illustrates the two-component model SED and the intensity measurements which constrain our fits, while Figure \ref{fig:post} shows example posterior PDFs for three pixels. Figure \ref{fig:comparison} shows a map of our derived hot dust temperature at full-resolution, for a patch of sky in the Polaris flare region. We validated the parameters and uncertainties recovered from our MCMC procedure by comparing with results based on finely gridded posterior calculations performed on a random subset of pixels. These comparisons verified that the proposal distribution rescaling and reshaping steps that we employ do improve the accuracy of the recovered parameters/uncertainties, and that the parameters/uncertainties ultimately derived are highly reliable. We can quantify the fidelity of our MCMC parameter estimates by noting that the RMS fractional discrepancy between MCMC and gridded posterior means is 0.25\% for $\tilde{I}_{545}$ and 0.07\% ($\sim$0.01 K) for $T_2$. Regarding the accuracy of our uncertainty estimates, we find RMS fractional discrepancies of 2.2\% for $\sigma_{\tilde{I}_{545}}$ and 2.4\% for $\sigma_{T_2}$. Aside from these small statistical scatters, we find no biases in our MCMC estimates of the parameters and their uncertainties. \subsection{Low-resolution Fits} \label{sec:lores} \begin{figure} \begin{center} \epsfig{file=f1_1deg.eps, width=3.3in} \caption{\label{fig:f1} $1^{\circ}$ FWHM full-sky map of $f_1$ derived from our low-resolution fits described in $\S$\ref{sec:lores}. Red coloring masks pixels with appreciable molecular emission, as defined in $\S$\ref{sec:mole}. Such pixels should not be trusted in this analysis, which is sensitive to the SED shape at low frequencies affected by CO line emission. Variations in $f_1$ along the ecliptic plane are spurious results of imperfect zodiacal light subtractions. 
However, interesting astrophysical variations of $f_1$ are evident, particularly the trend of increasing $f_1$ with decreasing absolute Galactic latitude, the relatively low $f_1$ values in the Polaris flare and R Coronae Australis regions, and the clouds with relatively high $f_1$ values near the north Galactic pole.} \end{center} \end{figure} As mentioned in $\S$\ref{sec:modeling}, the combination of high S/N and high angular resolution afforded by the {\it Planck}~HFI maps provides us with the opportunity to allow additional parameters of the two-component model, previously fixed by FDS99, to vary spatially. Specifically, we consider allowing $f_1$ to vary, while maintaining $\beta_1$, $\beta_2$, and $q_1/q_2$ spatially constant. In principle, we could alternatively introduce a third free parameter by permitting $\beta_1$, $\beta_2$ or $q_1/q_2$ to vary while holding $f_1$ fixed. However, a model in which $f_1$ varies continuously from one line of sight to another is the most natural three-parameter scenario, in that $f_1$ variation can be attributed to continuous changes in the dust species' mass fractions, whereas continuous variations in the other global parameters, which represent grain emission/absorption properties, seem less plausible. In order for our variable $f_1$ fits to remain sufficiently constrained following the introduction of a third free parameter, we enhance per-pixel S/N by smoothing the input maps to $1^{\circ}$ FWHM, and pixelize at $N_{side}=64$. To best constrain the model parameters in each pixel, we also include {\it Planck}~100 GHz and 143 GHz, and DIRBE 140$\mu$m and 240$\mu$m, all at $1^{\circ}$ resolution. 
We now run Markov chains sampling in all three of $f_1$, $\tilde{I}_{545}$ and $T_2$, with the posterior given by: \begin{equation} \label{eqn:f1post} P(\tilde{I}_{545}, T_2, f_1|\mathbf{I}) \propto \mathcal{L}(\mathbf{I}|\tilde{I}_{545}, T_2, f_1)P(T_2)P(f_1) \end{equation} The likelihood here is conceptually the same as that of Equation \ref{equ:like}, but now depends on $f_1$, which can vary from proposal to proposal within each individual pixel. The other difference is that $\mathbf{I}$ and $\mathbf{\tilde{I}}$ now include 100 GHz, 143 GHz, 140$\mu$m and 240$\mu$m, in addition to the five bands used for the full-resolution fits. The prior $P(T_2)$ from Equation \ref{equ:t2prior} remains unchanged. We adopt the following prior on $f_1$: \begin{equation} \label{equ:f1prior} P(f_1) = \mathcal{N}(f_1|\bar{f}_1, \sigma_{\bar{f}_1}) \end{equation} with $\bar{f}_1$=0.0485 (from Table \ref{tab:global}, model 2) and $\sigma_{\bar{f}_1}$=0.005. This is a fairly stringent prior, but we must restrict the fit from wandering with too much freedom, as we are attempting to constrain three parameters using an SED with only nine intensity measurements, several of which are quite noisy. Again, we have adopted a uniform prior on $\tilde{I}_{545}$, and, as mentioned previously, we have omitted it from Equation \ref{eqn:f1post} as a matter of notation. The resulting full-sky map of $f_1$ is shown in Figure \ref{fig:f1}. A general trend of increasing $f_1$ towards lower absolute Galactic latitudes is apparent. The other most salient features are the relatively low values of $f_1$ in the Polaris flare and R Coronae Australis regions, and the relatively high $f_1$ clouds near the north Galactic pole. 
\subsection{Global Parameters Revisited} \label{sec:hier} The posterior sampling framework thus far described also affords us an opportunity to evaluate the goodness-of-fit for competing dust SED models, and thereby cross-check the conclusions of our correlation slope analysis in $\S$\ref{sec:global}. The basic idea will be to continue evaluating the posterior of Equation \ref{eqn:post}, but at low resolution ($N_{side}$=64), including all HFI bands as well as DIRBE 100$\mu$m, 140$\mu$m and 240$\mu$m, and switching to a uniform prior on $T_2$. Under these circumstances, the chi-squared corresponding to the best-fit parameters for pixel $p$, termed $\chi^2_p$, is simply $-2 \times\log(P_{max})$. We will refer to the per-pixel chi-squared per degree of freedom as $\chi^2_{p, \nu}$. \begin{figure} \begin{center} \epsfig{file=chi2_dirbe.eps, width=3.3in} \caption{\label{fig:chi2_dirbe} Comparison of goodness-of-fit, $\chi^2_{\nu}$=$\langle \chi^2_{p, \nu} \rangle$, for various dust SED models, as described in $\S$\ref{sec:hier}. For single-MBB models with spatially constant $\beta$, we varied $\beta$ between 1 and 2 (horizontal axis), achieving reduced chi-squared $\chi^2_{\nu}$ shown by the black line, with $\beta$=1.57 providing the best single-MBB fit. Horizontal lines indicate $\chi^2_{\nu}$ for other dust emission models considered, including the FDS99 best-fit two-component model (Table \ref{tab:global}, model 1, red) and the \cite{planckdust} single-MBB model (green). The minimum $\chi^2_{\nu}$ is achieved with two-component `model 2' from Table \ref{tab:global} (magenta).} \end{center} \end{figure} Because we seek to compare the goodness-of-fit for various dust SED models in the diffuse ISM, we restrict to a set of $\sim$10,800 pixels ($\sim$22\% of the sky), with $|b|>30^{\circ}$ and $|\beta|>10^{\circ}$. We also avoid the SMICA inpainting mask, pixels with appreciable CO contamination, and compact sources. 
The goodness-of-fit `objective function' we employ to judge the quality of a particular dust SED model is $\langle \chi^2_{p, \nu} \rangle$, where the average is taken over the aforementioned set of $\sim$10,800 pixels. $\langle \chi_{p, \nu}^2 \rangle$ is also equivalent to the reduced chi-squared, $\chi^2_{\nu}$, when considering the total number of free parameters to be the number of pixels multiplied by the number of free parameters per pixel (and similarly for the total number of constraints), and taking $\chi^2$=$\sum\chi^2_{p}$. We calculate $\chi^2_{\nu}$ for various dust SED models, independently minimizing each $\chi^2_p$ by finding pixel $p$'s best-fitting dust temperature and normalization, then evaluating $\langle \chi_{p, \nu}^2 \rangle$. First, we consider single-MBB models with $\beta$ spatially constant (see the black line in Figure \ref{fig:chi2_dirbe}). $\beta$=1.57 yields the best fit, with $\chi^2_{\nu}$=2.51. This result is in excellent agreement with that of $\S$\ref{sec:global}, where we found the best-fit single-MBB model to have $\beta$=1.59. We also evaluated $\chi^2_{\nu}$ for single-MBB models in which $\beta$ varies spatially. In these cases, we adopted the 0.5$^{\circ}$ resolution $\beta$ map from \cite{planckdust}. We started by calculating $\chi^2_{\nu}$ using the \cite{planckdust} temperature map, finding $\chi^2_{\nu}$=4.68. Note that in this case no per-pixel chi-squared minimization was involved, as we simply evaluated $\chi^2_{p}$ for each pixel based on the fully-specified \cite{planckdust} emission model. Next, we tested a single-MBB model for which we adopted the \cite{planckdust} $\beta$ map, but allowed the per-pixel temperature and normalization to vary so as to minimize $\chi^2_p$. In this case, we found $\chi^2_{\nu}$=2.51, effectively identical to the value found for the spatially constant $\beta$=1.57 single-MBB model. 
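The per-pixel minimization underlying these $\chi^2_{\nu}$ comparisons can be sketched as follows, where `model_shape` is a hypothetical helper returning the unit-normalization model SED at the fit frequencies for a trial temperature; because the best-fit amplitude at fixed temperature is a closed-form weighted least-squares solution, only the temperature need be gridded.

```python
import numpy as np

def min_chi2(I_obs, sigma, model_shape, T_grid):
    """Per-pixel chi-squared minimization over dust temperature and
    normalization. For each trial temperature T, the optimal amplitude
    is analytic, so the search reduces to a 1D grid over T."""
    w = 1.0 / sigma**2
    best = np.inf
    for T in T_grid:
        s = model_shape(T)
        amp = np.sum(w * I_obs * s) / np.sum(w * s**2)  # optimal normalization
        best = min(best, float(np.sum(w * (I_obs - amp * s)**2)))
    return best
```

$\chi^2_{\nu}$ is then the average of the minimized $\chi^2_p$ divided by the per-pixel degrees of freedom, taken over the diffuse-sky pixel set.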
This is perhaps unsurprising, as the average $\beta$ value from \cite{planckdust} over the mask in question is $\langle\beta\rangle$=1.58. This result does suggest, however, that in diffuse regions the half-degree variations in $\beta$ are not materially improving the goodness-of-fit over the full frequency range 100-3000 GHz relative to a model with appropriately chosen spatially constant $\beta$. We move on to evaluate two-component models, first calculating $\chi^2_{\nu}$ with the FDS99 global parameters (Table \ref{tab:global}, model 1). We find $\chi^2_{\nu}$=2.33, a slight improvement relative to the best-fitting single-MBB models. Finally, we calculate $\chi^2_{\nu}$ for Table \ref{tab:global} model 2, the two-component model favored by our {\it Planck}+DIRBE correlation slopes. In this case, we achieve the best goodness-of-fit out of all the models we have tested, with $\chi^2_{\nu}$=2.11. Thus, our degree-resolution goodness-of-fit analysis has generally confirmed the conclusions of $\S$\ref{sec:global}. We find the single-MBB $\beta$ value favored by the combination of {\it Planck}~and DIRBE to be nearly identical here ($\beta$=1.57) versus in $\S$\ref{sec:global} ($\beta$=1.59). As in $\S$\ref{sec:global}, we also find that the {\it Planck}+FIRAS and {\it Planck}+DIRBE best-fit two-component models from Table \ref{tab:global} outperform single-MBB alternatives, though only by a relatively small margin in $\chi^2_{\nu}$. Still, because our present analysis has $\sim$75,500 degrees-of-freedom, $\Delta \chi^2_{\nu}$=0.4 formally corresponds to an enormously significant improvement in $\chi^2$. 
The agreement between our correlation slope analysis and the present goodness-of-fit analysis is especially encouraging for three main reasons: (1) in the present analysis, dust temperature has been allowed to vary on degree scales, whereas in $\S$\ref{sec:global} we assumed a single global dust temperature; (2) the present analysis employs a fully detailed, per-pixel uncertainty model; and (3) in the present analysis, our zero-level offsets factor into the dust temperature, whereas in $\S$\ref{sec:global} this was not the case. The two analyses thus agree in spite of their potential to be affected by rather different systematics. \section{Optical Reddening} \label{sec:ebv} While the temperature and optical depth maps thus far derived are useful for making thermal dust emission foreground predictions, estimating optical reddening/extinction is another important application of the $\tau_{545}$ map. Translating our two-component optical depth to reddening is especially valuable because our $T_2$ map has $\sim$13$\times$ better angular resolution than the SFD temperature correction, and thus there is reason to believe our two-component reddening estimates may be superior to those of SFD. However, as discussed in $\S$\ref{sec:replace}, we do not yet advocate for the wholesale replacement of SFD, and more detailed work is still necessary to determine/quantify the extent to which {\it Planck}-based dust maps might improve reddening estimates relative to SFD. 
\subsection{Reddening Calibration Procedure} \label{sec:calib_ebv} \begin{figure} \begin{center} \epsfig{file=calib_ebv.eps, width=3.3in} \caption{\label{fig:calib} Linear fit of $E(B-V)_{SSPP}$ as a function of two-component 545 GHz optical depth, illustrating our procedure for calibrating optical depth to reddening, as described in $\S$\ref{sec:calib_ebv}.} \end{center} \end{figure} We calibrate optical depth to reddening empirically rather than derive a relationship between $\tau_{545}$ and reddening by introducing additional assumptions about the dust grain physics and size distribution. To achieve this empirical calibration, we must adopt a set of calibrator objects for which true optical reddening is known. There are various possibilities at our disposal. \cite{planckdust} calibrated their radiance and $\tau_{353}$ maps to $E(B-V)$ using broadband Sloan Digital Sky Survey \citep[SDSS;][]{sdss} photometry for a set of $\sim$10$^5$ quasars. The SFD calibration was originally tied to a sample of 384 elliptical galaxies, but was later revised by \citet[hereafter SF11]{schlafly11} based on $\sim$260,000 stars with both spectroscopy and broadband photometry available from the SEGUE Stellar Parameter Pipeline \citep[SSPP,][]{sspp}. \begin{figure*} \begin{center} \epsfig{file=ebv_resid.eps, width=6.6in} \caption{\label{fig:resid} (top left) Residuals of $E(B-V)_{2comp}$ relative to $E(B-V)_{SSPP}$ as a function of $E(B-V)_{SFD}$. The grayscale represents the conditional probability within each $E(B-V)_{SFD}$ bin. The central black line shows the moving median. The upper and lower black lines represent the moving 75th and 25th percentiles respectively. (bottom left) Residuals of $E(B-V)_{2comp}$ relative to $E(B-V)_{SSPP}$ as a function of hot dust temperature $T_2$. (top right) Same as top left, but illustrating the residuals of $E(B-V)_{mbb}$, our calibration of the \cite{planckdust} $\tau_{353}$ to $E(B-V)_{SSPP}$. 
(bottom right) Same as bottom left, but showing the $E(B-V)_{mbb}$ residuals as a function of the single-MBB dust temperature from \cite{planckdust}. The temperature axes always range from the 0.4$^{th}$ percentile temperature value to the 99.6$^{th}$ percentile temperature value.} \end{center} \end{figure*} To calibrate our two-component optical depth to reddening, we make use of the stellar sample from SF11. Given a library of model stellar atmospheres, the spectral lines of these stars can be used to predict their intrinsic optical broadband colors. The `true' reddening is then simply the difference between the observed $g-r$ color and the $g-r$ color predicted from the spectral lines. Applying a color transformation then yields `true' $E(B-V)$ values for $\sim$260,000 lines of sight. Throughout our SSPP calibration analysis, we restrict to the $\sim$230,000 lines of sight with $|b|$$>$20$^{\circ}$ in order to avoid stars which may not lie behind the full dust column. In this section and $\S$8.2, we make absolute latitude cuts (in both $b$ and $\beta$) at $20^{\circ}$, to match the footprint of SF11 and adapt to the non-uniform distribution of SSPP stars on the sky. The calibration of two-component optical depth to $E(B-V)$ is performed as a linear regression of $E(B-V)_{SSPP}$ versus $\tau_{545}$. $\tau_{545}$ is considered to be the independent variable in this regression, as we ultimately wish to predict $E(B-V)$ as a function of optical depth, and $\tau_{545}$ has much higher S/N than the SSPP $E(B-V)$ estimates. This regression is illustrated in Figure \ref{fig:calib}. As expected, there is a strong linear correlation between $E(B-V)_{SSPP}$ and $\tau_{545}$. The conversion factor from $\tau_{545}$ to $E(B-V)$ is 2.62$\times$10$^{3}$. Reassuringly, the best-fit offset is close to zero, $\sim$2.6 mmag. 
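The calibration regression can be reproduced schematically as below; the function name is ours, and a fit of this form yields the conversion factor and near-zero offset quoted above when applied to the actual SSPP sample.

```python
import numpy as np

def calibrate(tau545, ebv_sspp):
    """Least-squares calibration of optical depth to reddening: regress
    E(B-V)_SSPP on tau_545, with tau_545 as the independent variable
    because it has much higher S/N than the per-star reddening
    estimates. Returns (slope, offset) for E(B-V) = slope*tau + offset."""
    A = np.column_stack([tau545, np.ones_like(tau545)])
    (slope, offset), *_ = np.linalg.lstsq(A, ebv_sspp, rcond=None)
    return slope, offset
```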
Figure \ref{fig:resid} shows the residuals of our $\tau_{545}$-based reddening predictions, $E(B-V)_{2comp}$, relative to the corresponding SF11 reddening measurements, $E(B-V)_{SSPP}$, as a function of SFD reddening, $E(B-V)_{SFD}$ (top left panel), and as a function of hot dust temperature (bottom left panel). For comparison, the right panels show analogous residual plots, but with respect to reddening predictions based on our calibration of the \cite{planckdust} 353 GHz optical depth to $E(B-V)_{SSPP}$, using the same regression procedure employed to calibrate $E(B-V)_{2comp}$. We refer to these reddening predictions based on the \cite{planckdust} single-MBB model and calibrated to the SF11 measurements as $E(B-V)_{mbb}$. All four residual plots in Figure \ref{fig:resid} show systematic problems at some level. The most striking systematic trend is the `bending' behavior of the reddening residuals versus $E(B-V)_{SFD}$ (top panels), with the median residual bottoming out near $-10$ mmag at $E(B-V)_{SFD}$$\approx$0.15 mag. This behavior is common to both $E(B-V)_{2comp}$ and $E(B-V)_{mbb}$, and in fact was first noted in the residuals of $E(B-V)_{SFD}$ itself relative to $E(B-V)_{SSPP}$ by SF11 (see their Figure 6). Such a bending behavior is troubling because it could indicate a nonlinearity common to many FIR reddening predictions based on column densities inferred from dust emission. Alternatively, because the SF11 stars are distributed over the sky in a highly non-uniform manner, the bend could arise from aliasing of discrepancies particular to certain sky regions (e.g. inner vs. outer Galaxy) on to the $E(B-V)_{SFD}$ axis. \begin{figure*} \begin{center} \epsfig{file=ebv_resid_ecl.eps, width=6.6in} \caption{\label{fig:resid_ecl} Same as Figure \ref{fig:resid}, but restricting to high ecliptic latitude, $|\beta|>20^{\circ}$. 
In both the top left and top right plots, the bending of the reddening residuals as a function of $E(B-V)_{SFD}$ seen in Figure \ref{fig:resid} has been eliminated. Further, the two-component reddening residual temperature dependence (bottom left) has been significantly reduced relative to the corresponding trend shown in Figure \ref{fig:resid}. For $E(B-V)_{SFD}$$\gtrsim$0.3 mag, the top row plots appear noisy because there are an insufficient number of remaining SSPP points of comparison.} \end{center} \end{figure*} The obvious culprit for any potential nonlinearity in FIR-based reddening estimates is a faulty temperature correction. For this reason, we have included the bottom panels of Figure \ref{fig:resid}, to check for the presence of a temperature dependence of the reddening residuals. Indeed, in both the two-component and single-MBB cases there exists some systematic dependence of the reddening residuals on temperature. For $T_{mbb}$$\gtrsim$19 K, the median residual is reasonably flat, but at lower temperatures (the lowest temperature $\sim$20\% of SSPP sight lines), the median shows trends at the $\sim$10 mmag level. On the other hand, the median residual in the two-component case trends downward with increasing $T_2$ over the entire $T_2$ range shown, with a peak-to-peak amplitude of $\sim$20 mmag. \subsection{Rectifying the Reddening Residuals} In this section we describe our attempts to eliminate the systematic problems in the two-component reddening residuals shown in the left column of Figure \ref{fig:resid}. We employed two main strategies: (1) recomputing the two-component $\tau_{545}$ by re-running our Markov chains after modifying the input maps and/or changing the particular two-component model parameters adopted and (2) making spatial cuts to isolate sky regions in which the residuals are especially pristine (or especially problematic). 
The following is a list of dust model modifications we tested, but which proved to have little impact on the reddening residual trends as a function of either $E(B-V)_{SFD}$ or $T_2$: \begin{itemize} \item Varying each of the global two-component model parameters $\beta_1$, $\beta_2$, $q_1/q_2$ and $f_1$ individually while holding the others fixed. \vspace{-3mm} \item Allowing $f_1$ to vary spatially as in the fits of $\S$\ref{sec:lores}. \vspace{-2mm} \item Changing the mean and/or variance of the $T_2$ prior. \vspace{-6mm} \item Varying multiple global parameters at a time e.g. both $f_1$ and $q_1/q_2$, restricting to regions of parameter space favored by our goodness-of-fit analyses described in $\S$\ref{sec:global} and $\S$\ref{sec:hier}. \end{itemize} We additionally investigated the following spatial cuts which did not resolve the dominant problems noted in the reddening residuals: \begin{itemize} \item Separating Celestial north and south. \vspace{-3mm} \item Separating Galactic north and south. \vspace{-3mm} \item Separating inner and outer Galaxy. \vspace{-3mm} \item Combining the above two sets of cuts i.e. separating the Galaxy into quadrants. \vspace{-3mm} \item Combining these spatial cuts with the dust model changes of the previous list. \end{itemize} However, we found that changing the zero level offsets of the input maps had a significant effect on the strength of the anticorrelation between median reddening residual and $T_2$. In particular, we experimented with perturbing the zero level offset of {\it Planck}~857 GHz while correspondingly changing the zero levels of the remaining {\it Planck}~maps based on the prescription of $\S$\ref{sec:relzero}. We also experimented with changing the zero level of SFD \verb|i100|, independent of the other zero levels. Unfortunately, completely flattening the reddening residual dependence on $T_2$ required unreasonably large zero level modifications. 
For example, flattening the $T_2$ residual required adding $\gtrsim$0.6 MJy/sr to the \verb|i100| map. Such an offset is implausible, being an order of magnitude larger than the nominal \verb|i100| zero level uncertainty quoted by SFD, and comparable to the entire 3000 GHz CIB monopole signal. Furthermore, we note that even these large zero level modifications had virtually no effect in eliminating the reddening residual `bend' versus $E(B-V)_{SFD}$. Thus, changing the zero level offsets showed hints of promise in rectifying the reddening residual temperature dependence, but could not by itself completely resolve the systematic trends in reddening residuals. The only solution we have been able to identify that both removes the `bend' vs. $E(B-V)_{SFD}$ and simultaneously reduces the temperature dependence of the reddening residuals is cutting out the ecliptic plane by restricting to $|\beta|>20^{\circ}$. In this case, we completely eliminated the bending behavior of the residual versus $E(B-V)_{SFD}$, and significantly reduced the $T_2$ dependence to a peak-to-peak amplitude of only $\sim$10 mmag (see Figure \ref{fig:resid_ecl}). Figure \ref{fig:resid_ecl} still includes the single-MBB plots (right column), to show that the bend versus $E(B-V)_{SFD}$ is eliminated by the $|\beta|$ cut, even for the single-MBB model. However, the single-MBB residuals still differ systematically from zero for $T$$\lesssim$$19$ K. Perhaps the improvements in the two-component reddening residuals after restricting to high ecliptic latitude should come as no surprise, given that the ecliptic plane is the most obvious systematic problem with our temperature map (see the full-sky results shown in Figure \ref{fig:results}). After cutting the ecliptic plane, we found that only small zero level perturbations were required to fully flatten the temperature residuals, while still maintaining flat residuals versus $E(B-V)_{SFD}$. 
The optimal offsets we found were $\pm$0.08 MJy/sr to \verb|i100| and 857 GHz respectively (see Figure \ref{fig:resid_offs}). These offsets are well within reason, given the nominal zero level uncertainties quoted in Table \ref{table:offs}. \begin{figure} \begin{center} \epsfig{file=resid_offs.eps, width=3.3in} \caption{\label{fig:resid_offs} Two-component reddening residuals after restricting to high ecliptic latitude ($|\beta|>20^{\circ}$) and perturbing the \texttt{i100} and 857 GHz zero levels by $+$0.08 MJy/sr and $-$0.08 MJy/sr respectively. The bending behavior as a function of $E(B-V)_{SFD}$ has been eliminated, and virtually no temperature dependence remains. For $E(B-V)_{SFD}$$\gtrsim$0.3 mag, the top plot appears noisy because there are an insufficient number of remaining SSPP points of comparison following our cut on ecliptic latitude.} \end{center} \end{figure} \section{Comparison of Emission Predictions} \label{sec:em_compare} \subsection{The 353-3000 GHz Frequency Range} \label{sec:hifreq} Here we compare our two-component emission predictions to those of the \cite{planckdust} single-MBB model in the 353-3000 GHz range. This frequency range represents the overlap between the recommended range of applicability for the \cite{planckdust} model and the 100-3000 GHz frequency range of our two-component model. Since we have used input maps that are very similar to those of \cite{planckdust}, and since our model and the \cite{planckdust} model both fit the data well in this frequency range, good agreement between our two-component predictions and those of the \cite{planckdust} single-MBB model is to be expected. \begin{figure} [ht] \begin{center} \epsfig{file=compare_hifreq.eps, width=3.2in} \caption{\label{fig:compare_hifreq} Scatter plots of \cite{planckdust} single-MBB predictions (vertical axes) versus our two-component predictions (horizontal axes), rebinning to $N_{side}$=64 and restricting to the diffuse regions of $\S$\ref{sec:hier}. 
The lines of best fit are shown in blue, and red lines represent perfect agreement between the two predictions. Note that a per-band offset has been applied to the \cite{planckdust} predictions to account for the differing zero level offsets used in building the two models. After accounting for the different zero levels, the best fit offsets between predictions are consistent with zero to within the uncertainties quoted in Table \ref{table:offs}. The slopes are also very nearly unity, to within $\leq$1.7\%.} \end{center} \end{figure} We compare the emission models in this frequency range by using each model in turn to predict the observed {\it Planck}~353, 545, and 857 GHz maps, as well as the 3000 GHz DIRBE/{\it IRAS}~map. We rebin to $N_{side}$=64 and restrict to the diffuse sky regions of our mask from $\S$\ref{sec:hier}. We summarize this comparison by producing a per-band scatter plot of the \cite{planckdust} prediction versus the two-component prediction, and performing a linear regression between these two quantities. Before plotting and performing these regressions, we adjusted the \cite{planckdust} predictions to account for the differing zero level offsets used in this work and in \cite{planckdust}. For instance, at 3000 GHz, \cite{planckdust} added 0.17 MJy/sr to the SFD98 zero level, whereas we made no such modification; therefore, for the sake of comparison, we subtracted 0.17 MJy/sr from the \cite{planckdust} predictions before plotting and performing the 3000 GHz regression. The slopes obtained from these linear fits indicate very good agreement between the single-MBB and two-component models, with values between 0.983 and 1.015 (agreement at the $\leq$1.7\% level). The offsets are also consistent with zero to within the uncertainties quoted in Table \ref{table:offs}. We do not find evidence that our two-component model provides emission predictions in the 353-3000 GHz range which are superior to those of \cite{planckdust}. 
From 353-3000 GHz and in diffuse sky regions, the main difference between emission predictions from these two models will be overall offsets due to differing input map zero levels. \subsection{The 100-217 GHz Frequency Range} \label{sec:lofreq} FDS99 originally performed their FIRAS+DIRBE dust SED analysis for the sake of accurately forecasting low-frequency CMB foregrounds. Recently, Galactic CMB foregrounds, especially in the 100-150 GHz frequency range, have become a focal point of cosmology owing to the \cite{bicep2} $B$-mode polarization results. Here we show that our two-component foreground predictions remain accurate on average to within $2.2$\% from 100-217 GHz, and we quantify the benefit of using our two-component emission predictions in this frequency range relative to extrapolating the \cite{planckdust} single-MBB model. \begin{figure*} \begin{center} \epsfig{file=compare_lofreq.eps, width=6.8in} \caption{\label{fig:lofreq} Comparison between low-frequency thermal dust emission predictions from our best-fit two-component model (Table \ref{tab:global}, model 2) and those based on extrapolation of the \cite{planckdust} model. The top row shows scatter plots of the \cite{planckdust} predictions versus observed {\it Planck}~100 GHz (left), {\it Planck}~143 GHz (center) and {\it Planck}~217 GHz (right). The bottom row shows scatter plots of the corresponding two-component predictions versus {\it Planck}~observations. In all cases, the blue line indicates the best-fit linear relationship, while the red line represents a perfect match between predictions and observations. The lines of best fit illustrate that the single-MBB model systematically underpredicts emission (in the multiplicative sense) by 18.8\%, 12.6\% and 7.9\% at 100, 143 and 217 GHz respectively. On the other hand, by the same metric, the two-component model predictions at 100-217 GHz are always accurate to within $\le$2.2\%. 
The two-component fit results shown are based on 217-3000 GHz observations, meaning that the 100 GHz and 143 GHz predictions are truly extrapolations, while the 217 GHz agreement is enforced by the fitting process itself to some extent.} \end{center} \end{figure*} To assess the accuracy of low-frequency emission predictions, we compare the observed {\it Planck}~HFI map at each of 100, 143, 217 GHz to the corresponding single-MBB and two-component predictions, with all maps smoothed to $1^{\circ}$ FWHM and binned down to $N_{side}$=$64$. We restrict to the same set of pixels used for the goodness-of-fit analysis of $\S$\ref{sec:hier}, with $|b|>30^{\circ}$ and $|\beta|>10^{\circ}$, also avoiding molecular emission, the SMICA inpainting mask, and compact sources. We then perform a linear fit between the {\it Planck}~observed emission and the predicted emission at each frequency and for each emission model. For these fits, we consider the predicted emission to be the independent variable, since it has higher S/N than the observations, especially at 100 and 143 GHz. We also assign pixel weights proportional to the predicted emission, so that the best-fit lines faithfully capture the linear trend exhibited without being biased by the large number of very low S/N pixels with minimal emission. Scatter plots between the predicted and observed emission are shown in Figure \ref{fig:lofreq}. The best-fit lines are overplotted and their equations are given in the top left corner of each subplot. In both the single-MBB and two-component cases, all of the best fit offsets are within the uncertainties quoted in Table \ref{table:offs}. On the other hand, the top row of Figure \ref{fig:lofreq} shows that the \cite{planckdust} single-MBB extrapolations yield slopes substantially different from unity: 1.079 at 217 GHz, 1.126 at 143 GHz, and 1.188 at 100 GHz. The fact that the slopes are larger than unity indicates that the \cite{planckdust} extrapolations are systematically low. 
The systematic underprediction evidently becomes gradually more pronounced as lower frequencies are considered, with a 7.9\% underprediction at 217 GHz, a 12.6\% underprediction at 143 GHz and an 18.8\% underprediction at 100 GHz. A deficit in single-MBB predictions relative to the observed {\it Planck}~100-217 GHz emission was also noted in \cite{planckdust2011}, e.g. their Figure 7. For the case of the two-component model, we perform full-resolution 217-3000 GHz fits using the {\it Planck}+DIRBE favored global parameters (Table \ref{tab:global}, model 2), then smooth to $1^{\circ}$ FWHM and bin down to $N_{side}=64$ before predicting the 100-217 GHz emission. The bottom row of Figure \ref{fig:lofreq} shows that each of the best-fit lines is very similar to the corresponding red line which represents a perfect match between predicted and observed emission. More quantitatively, the two-component slopes are all within 2.2\% of unity: 0.978 at 217 GHz, 0.986 at 143 GHz and 1.022 at 100 GHz. We note that at 217 GHz, the good agreement is in some sense predetermined by the fact that {\it Planck}~217 GHz has been included in our two-component MCMC fits. On the other hand, the 143 and 100 GHz predictions are based on extrapolation. We conclude from these predicted versus observed emission comparisons that our two-component model outperforms extrapolation of the \cite{planckdust} single-MBB model at predicting Galactic thermal dust emission in diffuse regions from 100-217 GHz. It should be reiterated, once again, that \cite{planckdust} did not intend for their single-MBB model to be extrapolated to frequencies below 350 GHz (see their $\S$7.2.1), whereas we optimized our two-component model to be valid over the entire 100-3000 GHz frequency range. Our two-component model thus represents the first {\it Planck}~based thermal dust emission model valid over the entire 100-3000 GHz frequency range. 
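The weighted per-band fits described above can be sketched as follows. This is a minimal NumPy illustration on synthetic pixel values (the array sizes and noise level are hypothetical, not drawn from the actual maps), showing how a systematically low prediction appears as a best-fit slope above unity when the prediction is treated as the independent variable with weights proportional to the predicted emission.

```python
import numpy as np

def fit_prediction_vs_observation(predicted, observed):
    """Weighted linear fit of observed emission against predicted emission.

    The prediction is treated as the independent variable (it has higher
    S/N than the observations), with pixel weights proportional to the
    predicted emission so that the many low-S/N pixels with minimal
    emission do not dominate the fit.
    """
    slope, offset = np.polyfit(predicted, observed, 1, w=predicted)
    return slope, offset

# Synthetic pixel values: an extrapolation that is systematically ~7.9% low
# shows up as a best-fit slope of ~1.079 (observed vs. predicted).
rng = np.random.default_rng(0)
pred = rng.uniform(0.01, 1.0, size=5000)            # hypothetical MJy/sr
obs = 1.079 * pred + rng.normal(0.0, 0.005, size=pred.size)
slope, offset = fit_prediction_vs_observation(pred, obs)
```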
\section{Data Release} \label{sec:release} We are releasing a set of $N_{side}$=2048 HEALPix maps in Galactic coordinates which summarize the results of our full-resolution two-component dust fits. Low-resolution renderings of our full-sky dust temperature and optical depth maps are shown in Figure \ref{fig:results}. Our data release also includes software utilities for obtaining emission and reddening predictions from our {\it Planck}-based two-component fits. Refer to the data release documentation and FITS file headers for further details.\footnote{\texttt{http://faun.rc.fas.harvard.edu/ameisner/planckdust}} \begin{figure*} \begin{center} \epsfig{file=results.eps, width=7.0in} \caption{\label{fig:results} (top) Hot dust temperature derived from our full-resolution two-component model fits of {\it Planck}~217-857 GHz and SFD 100$\mu$m, downbinned to 27.5$'$ resolution. (bottom) Corresponding full-sky map of best-fit two-component 545 GHz optical depth.} \end{center} \end{figure*} \section{Conclusions} \label{sec:conclusion} \subsection{Single-MBB versus Two-Component Emission} A major aim of this work has been to determine whether the FDS99 two-component dust emission model remains favored over single-MBB models when swapping in the {\it Planck}~HFI maps for FIRAS at frequencies below 1250 GHz. We compared dust SED models in two ways: (1) by fitting a 100-3000 GHz spectrum composed of per-band correlation slopes versus {\it Planck}~857 GHz, and (2) by finding the best-fit dust temperature and optical depth per line-of-sight, with each pixel's SED comprised of 100-3000 GHz {\it Planck}+DIRBE data, and comparing the average goodness-of-fit under various emission models. 
In both the correlation slope analysis of $\S$\ref{sec:global} and the goodness-of-fit analysis of $\S$\ref{sec:hier}, we found that the best-fit {\it Planck}+DIRBE two-component model (Table \ref{tab:global}, model 2) outperformed the best-fit single-MBB model, but by a lesser margin in $\chi^2_{\nu}$ than found by FDS99 using FIRAS+DIRBE. Specifically, our best-fit {\it Planck}+DIRBE two-component model yielded an improvement of $\Delta \chi^2_{\nu}$=3.41 ($\S$\ref{sec:global}) and $\Delta \chi^2_{\nu}$=0.4 ($\S$\ref{sec:hier}). This represents a far less dramatic contrast in $\chi^2_{\nu}$ than found by the FDS99 correlation slope analysis, $\Delta \chi^2_{\nu}$=29.2. Perhaps a relative lack of discrimination amongst competing dust SED models when relying on {\it Planck}+DIRBE is to be expected, given that our constraints include only nine broad frequency channels, whereas FDS99 employed $>$200 narrow bands. Still, $\Delta \chi^2_{\nu}$=0.4 from $\S$\ref{sec:hier} is formally of enormous significance, given the $\sim$75,000 degrees of freedom in that analysis. Nevertheless, we have established that the two-component emission model remains viable in light of the {\it Planck}~HFI data, and that the FIR/submm dust SED's preference for two MBB components rather than just one is not simply an idiosyncrasy of the FIRAS spectra. Furthermore, we showed in $\S$\ref{sec:lofreq} that our 100-217 GHz two-component emission predictions are on average accurate to within 2.2\%, whereas extrapolating the \cite{planckdust} single-MBB model systematically underestimates low-frequency dust emission by 18.8\% at 100 GHz, 12.6\% at 143 GHz and 7.9\% at 217 GHz. We therefore recommend that those interested in thermal dust foregrounds in the 100-3000 GHz frequency range use our data release to predict unpolarized dust emission, at the very least in order to help determine the level at which the choice of dust emission model may influence their conclusions. 
\subsection{Towards a Replacement for SFD} \label{sec:replace} Because of the broad frequency coverage and high angular resolution afforded by the {\it Planck}~HFI full-sky maps, we initially speculated that a {\it Planck}~based extinction map might easily outperform SFD, the most commonly used optical reddening map. However, at this point in time, we do not yet recommend that the results presented in this work be considered a replacement for SFD in terms of optical extinction/reddening estimates. The CIBA remains a major imperfection that still requires further investigation. The CIB anisotropies are very evident in low-dust regions of our maps of optical depth and predicted dust emission. As described in $\S$\ref{sec:mcmc}, we have propagated the CIBA RMS amplitudes and inter-frequency covariances into our uncertainty estimates through the likelihood function in our MCMC procedure. However, this treatment falls far short of actually removing the spatial imprint of the CIBA on our derived parameters. The CIB anisotropies are more prominent in our optical depth map relative to that of SFD because of the lower-frequency {\it Planck}~maps we rely upon to achieve a high-resolution temperature correction. Imperfect zodiacal light (zodi) corrections represent a second major limitation of our results. The ecliptic plane's prominence in our full-sky temperature map (Figure \ref{fig:results}) suggests that the zodiacal light subtractions performed on the input maps are not ideal. Our comparisons of the FIR maps used in this study against H\,\textsc{i} emission bear out this notion, further revealing that the imperfect zodi corrections are not limited to \verb|i100|, but in fact are noticeable in all of the HFI \verb|R1.10_nominal_ZodiCorrected| maps as well. 
We deemed it infeasible to reconsider all of the {\it Planck}~zodi corrections in addition to the 3000 GHz zodi correction as a part of this study, especially considering that the forthcoming {\it Planck}~2014 release is expected to include a revised/improved zodi subtraction. Irrespective of the notable imperfections in our results, more detailed comparisons between our reddening estimates here and those of SFD are required to determine/quantify which map is superior in particular applications. One definitive improvement of our reddening estimates relative to those of SFD is our ability to quote reddening uncertainties, which results from the probabilistic framework of $\S$\ref{sec:mcmc}. The extinction estimates from this work can also be employed as an alternative to those of SFD, to gauge the impact of dust map choice in a specific end user's application. \par We thank the anonymous referee for helpful suggestions. We gratefully acknowledge support from the National Science Foundation Graduate Research Fellowship under Grant No. DGE1144152, and NASA grant NNX12AE08G. Based on observations obtained with Planck (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. This research made use of the NASA Astrophysics Data System (ADS) and the IDL Astronomy User's Library at Goddard.\footnote{Available at \texttt{http://idlastro.gsfc.nasa.gov}} \bibliographystyle{apj}
\section{Introduction}\vspace{-0.05in} Coherent integrated photonic neural networks (C-IPNNs) promise ultra-fast and ultra-low-energy linear multipliers for emerging artificial intelligence (AI) accelerators. C-IPNNs based on singular value decomposition (SVD), referred to as SC-IPNNs in this paper, factorize a linear multiplier into one diagonal and two unitary matrices, each of which can be implemented using an array of Mach--Zehnder interferometers (MZIs). During the training, the phase angles on each MZI ($\phi$ and $\theta$ in Fig. \ref{fig1}(a)) are adjusted using stochastic gradient descent to minimize the overall training loss. Nevertheless, SC-IPNNs suffer from a large footprint and high static power consumption. In particular, SC-IPNNs employ phase shifters (PSes)---implemented often using thermo-optic phase-shift mechanisms---where the phase change ($\Delta\phi$) is directly proportional to the tuning power consumption ($P$) and the PS length ($L$): $\Delta\phi\propto P\cdot L$~\cite{jacques2019optimization}. To maintain the phase shifts, the tuning power is consumed throughout inferencing and can range up to 25 mW/$\pi$ \cite{harris2014efficient}. In addition, the underlying PSes in the MZI devices in SC-IPNNs (see $\phi$ and $\theta$ in Fig. \ref{fig1}(a)) account for a significant portion of network footprint (e.g., up to $\approx$90\% of a single MZI footprint designed in \cite{shokraneh2020theoretical}). Moreover, it was shown that uncertainties in PSes, especially in those with high phase angles, can cause up to 70\% loss in SC-IPNN accuracy~\cite{banerjee2021optimizing}. A potential solution to the aforementioned problems is to prune the redundant PSes and minimize the phase angles in the network. Prior attempts at pruning SC-IPNNs use a software-only approach where a trained network is first pruned and the resultant sparse weight matrices are subsequently mapped to the MZI arrays using SVD. 
However, due to the complex mapping between the weights and the MZI arrays in SC-IPNNs (see Fig. \ref{fig1}(b)), sparse weight matrices often lead to non-sparse MZI phase settings (and vice versa). Consequently, software-only pruning is inefficient in SC-IPNNs and imposes significant accuracy losses. To enable efficient pruning in SC-IPNNs, we propose the first hardware-aware pruning technique for SC-IPNNs, called CHAMP. As we will show, in a representative SC-IPNN with 155,268 PSes, CHAMP can prune up to 74.86\% of PSes with no accuracy loss and up to 99.45\% of PSes with an accuracy loss of less than 5\%. These correspond to a 46.05\% and 98.23\% reduction in static power consumption, respectively. Additionally, if the redundant PSes are removed (rather than being power-gated), the resultant SC-IPNN demonstrates a significantly smaller footprint---which in turn will help reduce the dynamic power consumption (the analysis of which is beyond the scope of this paper)---and higher immunity to uncertainties in phase angles. \vspace{-0.2in} \begin{figure}[H] \centering \subfigure[A 4$\times$4 linear multiplier in an SC-IPNN]{ \includegraphics[width=.375\textwidth]{Figures/fig1a.pdf} }\hspace{-0.02in}% \subfigure[Bidirectional many-to-one mapping]{ \includegraphics[width=.315\textwidth]{Figures/fig1c.pdf} }\hspace{-0.04in}% \subfigure[Matrix sparsity versus PS sparsity]{ \includegraphics[width=.285\textwidth]{Figures/fig1b.pdf} } \vspace{-1em} \caption{(a) A linear multiplier implemented based on SVD and using MZI arrays (MZI dimensions are obtained from \cite{shokraneh2020theoretical}). (b) An illustration of the bidirectional many-to-one mapping between the elements of a 5$\times$5 unitary matrix and the mapped MZI array. The numbers in each cell of the unitary matrix denote the MZIs that affect the corresponding matrix element. 
(c) Histogram of the sparsity of PSes (percentage of zero phase angles) in the mapped MZI arrays for 3000 randomly generated weight matrices of different dimensions (1000 of each dimension) with sparsity $s_w$, where 80\%$\leq s_w \leq$100\%. The inset shows a similar plot for 95\%$\leq s_w \leq$100\%. } \vspace{-1.5em} \label{fig1} \end{figure} \section{CHAMP: Proposed Hardware-Aware Magnitude Pruning Framework}\vspace{-0.05in} In hardware-unaware pruning techniques, a fraction of the weights in each neural network layer---typically those with a smaller magnitude---are clamped to zero. Then, the network is retrained (i.e., fine-tuned) to recover the accuracy while ensuring that only the non-zero weights are updated. However, the mapping of the sparse weight matrices obtained using hardware-unaware pruning approaches to MZI arrays may not necessarily lead to sparse PSes. Fig. \ref{fig1}(c) shows that several randomly generated sparse weight matrices are \textit{not} mapped to sparse PSes (this is especially true for larger weight matrices). The discrepancy between the sparsity of the weight matrices and their corresponding PS mappings is due to the \textit{bidirectional many-to-one association (BMA)} between the elements of the weight matrix and the phase angles. Each element of the weight matrix of a linear layer in SC-IPNNs is mapped to multiple phase angles and each phase angle in an MZI array affects multiple matrix elements, as shown in Fig. \ref{fig1}(b). Prior work using hardware-unaware pruning showed that no more than 30\% (45\% for non-SVD-based C-IPNNs) of the phase angles can be pruned without a significant ($\approx$10\%) accuracy loss \cite{gu2020towards}. 
\par \begin{figure}[t] \centering \subfigure[Interlinking between one-shot and iterative pruning approaches in CHAMP]{ \includegraphics[width=.61\textwidth]{Figures/fig2a.pdf} }\hspace{-0.01in}% \subfigure[One-shot (OS) magnitude pruning]{ \includegraphics[width=.36\textwidth]{Figures/fig2b.pdf} } \vspace{-1em} \caption{(a) Block diagram highlighting the hybrid (one-shot and iterative) pruning approach in CHAMP. (b) Fine-tuned inferencing accuracy, PS sparsity, and mean phase angle for one-shot magnitude pruning for different $\alpha_{k}^{OS}$. The yellow rectangle shows the best-performing (high sparsity and low accuracy loss) one-shot-pruned model.} \vspace{-2em} \label{fig2} \end{figure} The key difference between the existing pruning methods and CHAMP lies in the training approach. Instead of mapping the software-trained weight matrices to phase angles, we use a \textit{photonic hardware-aware} approach where, during backpropagation, the phase angles (and not the weight matrix elements) are adjusted based on the computed gradients. Photonic hardware-aware software training offers more control on the phase angles during training and is essential for efficient SC-IPNN pruning. Fundamentally, in magnitude pruning, the weights (phase angles here) with magnitude smaller than a threshold are set to zero. Next, the pruned network is retrained to recover the lost accuracy while clamping the pruned phase angles to zero. A common approach to determine the threshold for the phase angles in an MZI array is to consider a factor---say $\alpha$---of the standard deviation of the non-zero phase angles in the array. The higher the value of $\alpha$, the more aggressive is the pruning. We also maintain a binary mask for each phase angle in the MZI layer; an element of the mask is 0 (1) if and only if the corresponding phase angle is zero (non-zero). 
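The thresholding and masking step just described can be sketched as follows; this is a minimal NumPy sketch on a hypothetical layer of phase angles, not the CHAMP implementation itself.

```python
import numpy as np

def magnitude_prune(phases, alpha):
    """Clamp to zero all phase angles whose magnitude falls below
    alpha * (standard deviation of the non-zero phase angles in the layer),
    and return the pruned angles together with the binary mask that keeps
    pruned angles frozen during fine-tuning."""
    threshold = alpha * phases[phases != 0.0].std()
    mask = (np.abs(phases) >= threshold).astype(float)
    return phases * mask, mask

# Hypothetical layer of tuned phase angles.
rng = np.random.default_rng(1)
layer = rng.normal(0.0, 0.3, size=10_000)
pruned, mask = magnitude_prune(layer, alpha=2.0)
sparsity = 100.0 * float((pruned == 0.0).mean())   # % of zeroed PSes
```

During fine-tuning, the computed gradient of each phase angle is multiplied elementwise by `mask`, so pruned phase shifters remain clamped at zero.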
During backpropagation, the computed gradient of each phase angle is multiplied with its corresponding mask element, thus ensuring that the zero phase angles are not perturbed. Magnitude pruning can be performed in a one-shot or iterative manner. In the one-shot approach, all phase angles below a threshold are pruned in a single step after which retraining (a.k.a. fine-tuning) is performed. In the iterative approach, $\alpha$ (and, in turn, the pruning rate) is increased over multiple steps with each step followed by fine-tuning. \par Fig. \ref{fig2}(a) shows a block diagram of the CHAMP framework. We use a hybrid approach where the faster one-shot (OS) pruning is used to quickly ramp up the sparsity of the PSes, and then the iterative (IT) approach is employed to increase the sparsity further while maintaining a high-enough model accuracy. The inputs to the OS approach include the trained SC-IPNN, the minimum acceptable inferencing accuracy ($acc_{min}^{OS}$), and a set of $K$ $\alpha$'s to determine the thresholds ($\alpha_{k}^{OS}$, $k=$~0, 1, $\dots$, $K-1$). The $K$ OS runs are mutually independent and can be executed in parallel. Out of the $K$ OS-pruned models, the best-performing one (with maximum sparsity and accuracy greater than $acc_{min}^{OS}$) is considered for IT pruning. The inputs in the IT approach include the initial $\alpha$ ($\alpha_{0}^{IT}$), the step-increment in $\alpha$ ($\Delta \alpha$), and the minimum acceptable accuracy ($acc_{min}^{IT}$). The $\alpha$ for the $i^{th}$ iteration ($i\geq$~1) is given by $\alpha_{i}^{IT}=\alpha_{i-1}^{IT}+\Delta \alpha$. The IT approach terminates when the inferencing accuracy becomes less than $acc_{min}^{IT}$; in this case, the checkpoint model saved in the previous iteration is returned as the sparse SC-IPNN. Also, in different pruning runs, we consider the same $\alpha$ ($\alpha_{k}^{OS}$ and $\alpha_{i}^{IT}$) for each SC-IPNN layer. However, the threshold phase angle (given by $\alpha\times$std. 
dev. of non-zero phase angles in a layer) can differ from layer to layer.\vspace{-0.05in} \section{Results and Discussion}\vspace{-0.05in} To demonstrate the performance of CHAMP, we consider a case study of a fully connected feedforward SC-IPNN with two hidden layers (with 256 and 100 neurons) and 155,268 PSes, implemented using the Clements design \cite{clements2016optimal}. We train the network on the MNIST dataset; each real-valued image is converted to a complex feature vector of length 64 using a method based on fast Fourier transform \cite{banerjee2020modeling}. The nominal inferencing accuracy of the unpruned network is 96.16\%. Fig. \ref{fig2}(b) shows the simulation results using the OS approach for different values of $\alpha_k^{OS}$. As expected, the overall sparsity of the PSes increases with $\alpha_k^{OS}$. As more PSes are pruned, the mean phase angle---averaged over the 155,268 PSes---and hence the static tuning power consumption decreases with increasing $\alpha_k^{OS}$. For smaller $\alpha_k^{OS}$, we observe that OS pruning provides considerable sparsity with minimal accuracy loss. In fact, with $\alpha_k^{OS}=$~2, we obtain 74.86\% PS sparsity (and a 46.05\% lower mean phase angle) with zero accuracy loss. We assume a maximum allowable accuracy loss of 5\% during pruning and therefore consider $acc_{min}^{OS}=$~91.16\%. Accordingly, the best performing OS model in our case is obtained using $\alpha_k^{OS}=$~2.4, where we achieve a sparsity of 83.77\% with a fine-tuned accuracy of 92.31\% (i.e., 3.85\% accuracy loss). Subsequently, we use this model as the starting point of the iterative pruning approach. Fig. \ref{fig3}(a) shows the simulation results for the IT approach with $\alpha_0^{IT}=$~2.4, $\Delta\alpha=$~0.2, and $acc_{min}^{IT}=$~91.16\% (5\% accuracy loss). We observe that with IT pruning, we can even achieve $\geq$~99\% sparsity with an accuracy loss less than 5\%. 
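The iterative loop described above can be summarized in the following sketch, where `prune_at`, `fine_tune`, and `accuracy` are hypothetical stand-ins for the actual SC-IPNN training machinery.

```python
def iterative_prune(model, alpha0, delta_alpha, acc_min,
                    prune_at, fine_tune, accuracy):
    """Iterative (IT) pruning: raise alpha stepwise, fine-tune after each
    pruning step, and stop once the fine-tuned accuracy drops below
    acc_min, returning the checkpoint saved in the previous iteration."""
    alpha, checkpoint = alpha0, model
    while True:
        candidate = fine_tune(prune_at(checkpoint, alpha))
        if accuracy(candidate) < acc_min:
            return checkpoint  # last model meeting the accuracy bound
        checkpoint = candidate
        alpha += delta_alpha

# Toy stand-ins for illustration only: "model" is just the alpha it was last
# pruned at, and accuracy falls linearly with alpha.
result = iterative_prune(
    model=0.0, alpha0=2.4, delta_alpha=0.2, acc_min=91.16,
    prune_at=lambda m, a: a, fine_tune=lambda m: m,
    accuracy=lambda m: 96.0 - m,
)
```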
The best-performing IT-pruned model is obtained with $\alpha_i^{IT}=$~6 where we achieve a 99.45\% PS sparsity (and a 98.23\% reduction in mean phase angles and static power consumption) and an accuracy of 91.57\% (4.59\% accuracy loss). Fig. \ref{fig3}(b) compares the histogram of the phase angles in the unpruned, best-performing OS-pruned, and best-performing IT-pruned models. \par \begin{figure}[t] \centering \subfigure[Iterative (IT) magnitude pruning]{ \includegraphics[width=.385\textwidth]{Figures/fig3a.pdf} }\hspace{-0.01in}% \subfigure[Histogram of phase angles]{ \includegraphics[width=.289\textwidth]{Figures/fig3c.pdf} }\hspace{-0.05in}% \subfigure[Accuracy under phase uncertainties]{ \includegraphics[width=.295\textwidth]{Figures/fig3b.pdf} } \vspace{-1em} \caption{(a) Fine-tuned inferencing accuracy, PS sparsity, and mean phase angle for iterative pruning for different $\alpha_{i}^{IT}$. Yellow rectangle: best-performing IT-pruned model. (b) Histogram of the tuned phase angles in unpruned, best-performing one-shot-pruned, and best-performing iterative-pruned models. (c) Accuracy of the unpruned, best-performing one-shot-pruned, and best-performing iterative-pruned models under random phase uncertainties.} \vspace{-2.5em} \label{fig3} \end{figure} We also characterize the performance of pruned models under random uncertainties in the phase angles, which is indeed critical for sparse models because even overparameterized SC-IPNNs are sensitive to such uncertainties \cite{banerjee2020modeling}. For each model, we consider 1000 Monte Carlo iterations. In each iteration, the uncertainties are sampled from a zero-mean Gaussian distribution with a standard deviation of $\sigma_{PS}\times\pi$. Fig. \ref{fig3}(c) shows the mean classification accuracy (averaged over the 1000 iterations) of the unpruned, best-performing OS-pruned, and best-performing IT-pruned models. 
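This Monte Carlo characterization can be sketched as follows; the accuracy model used here is a toy stand-in (a real evaluation would re-run inference with the perturbed phase angles).

```python
import numpy as np

def mean_accuracy_under_uncertainty(phases, sigma_ps, evaluate_accuracy,
                                    n_iter=1000, seed=0):
    """Mean classification accuracy over Monte Carlo draws of phase noise:
    each iteration perturbs every phase shifter by a sample from a
    zero-mean Gaussian with standard deviation sigma_ps * pi.
    `evaluate_accuracy` is a placeholder for a full forward pass."""
    rng = np.random.default_rng(seed)
    accs = [evaluate_accuracy(phases + rng.normal(0.0, sigma_ps * np.pi,
                                                  size=phases.shape))
            for _ in range(n_iter)]
    return float(np.mean(accs))

# Toy stand-in: accuracy degrades with total phase-perturbation energy.
phases = np.zeros(100)
acc = mean_accuracy_under_uncertainty(
    phases, sigma_ps=0.01,
    evaluate_accuracy=lambda p: max(0.0, 96.16 - np.sum((p - phases) ** 2)),
    n_iter=200,
)
```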
For the pruned models, we consider two cases: 1) the pruned PSes are power-gated and left in the network (solid lines), and 2) the pruned PSes are removed from the network (dashed lines). We observe that in the first case (power-gated PSes), the network is more susceptible to phase uncertainties because even small uncertainties in otherwise zero phase angles lead to a large relative deviation in the MZI operation. In contrast, removing pruned PSes reduces the number of uncertainty-susceptible components and leads to significantly higher accuracy (up to 74\%) under uncertainties. In addition, the resulting compact network leads to a lower optical loss, and thus lower dynamic power consumption. Therefore, in situations where hardware-level modifications are feasible, the pruned PSes can be removed to improve the SC-IPNN performance under phase-shift uncertainties.\vspace{-0.07in} \section{Conclusions}\vspace{-0.05in} We have presented CHAMP, the first photonic hardware-aware pruning technique for SC-IPNNs. CHAMP can prune a considerable fraction of redundant PSes and increase the network sparsity by 74.86\% with no accuracy loss, 98.57\% with a 1\% accuracy loss, and 99.45\% with a 5\% accuracy loss. Executed only once per SC-IPNN, CHAMP improves the power efficiency (by up to 98.23\%) and enhances the robustness of SC-IPNNs under random uncertainties in tuned phase angles due to fabrication-process variations and thermal crosstalk.\vspace{-0.07in} \bibliographystyle{unsrt}
\section{Introduction} \label{sec:kura} In \cite{DF1}, the authors studied Lagrangian intersection Floer homology of a pair of monotone Lagrangians in an open symplectic manifold, which is isomorphic to a divisor complement. At the heart of the construction of \cite{DF1}, there is a compactification of the moduli spaces of holomorphic discs and strips satisfying Lagrangian boundary condition. The main purpose of this sequel to \cite{DF1} is to show that this compactification, called the {\it RGW compactification}, admits a {\it Kuranishi structure}. The virtual count of the elements of these Kuranishi spaces is used in \cite{DF1} to define the desired Lagrangian Floer homology. In more detail, let $(X,\omega)$ be a symplectic manifold and $\mathcal D$ be a symplectic submanifold of $X$ with codimension $2$. Another probably non-essential\footnote{In \cite{DF1}, this assumption is made to simplify the arguments. However, the authors believe that this assumption can be removed at the expense of a more complicated analysis of holomorphic curves in a neighborhood of $\mathcal D$ in $X$.} assumption is that $\mathcal D$ and $\mathcal N_{\mathcal D}(X)$, the normal bundle of $\mathcal D$ in $X$, admit integrable complex structures compatible with the symplectic structure. The complex structure on $\mathcal N_{\mathcal D}(X)$ induces an integrable complex structure in a neighborhood of $\mathcal D$ and we extend this complex structure to an almost complex structure $J$ which is tamed by $\omega$. (See \cite[Subsection 3.3]{DF1} for more details on the choice of almost complex structures.) Let $L_0$ and $L_1$ be compact orientable and transversal Lagrangians in $X\setminus \mathcal D$. For any $\beta \in \Pi_2(X;L_i)$ with $\beta \cap \mathcal D = 0$, let $\mathcal M_{k+1}^{\rm reg}(L_i;\beta)$ be the moduli space of $J$-holomorphic discs of homology class $\beta$ with $k+1$ boundary marked points and Lagrangian boundary condition associated to $L_i$. 
In \cite{DF1}, we introduced the RGW compactification $\mathcal M_{k+1}^{\rm RGW}(L_i;\beta)$ of $\mathcal M_{k+1}^{\rm reg}(L_i;\beta)$. (See \cite[Section 3]{DF1} for the definition of this moduli space. Note that this compactification is different from the stable map compactification.) We also defined the RGW compactification $\mathcal M_{k_1,k_0}^{\rm RGW}(L_1,L_0;p,q;\beta)$ of the moduli space $\mathcal M_{k_1,k_0}^{\rm reg}(L_1,L_0;p,q;\beta)$ in a similar way in \cite[Section 3]{DF1}. \begin{thm-int}\label{Kura-exists} The moduli spaces $\mathcal M_{k+1}^{\rm RGW}(L_i;\beta)$ and $\mathcal M_{k_1,k_0}^{\rm RGW}(L_1,L_0;p,q;\beta)$ admit Kuranishi structures. \end{thm-int} A topological space with a Kuranishi structure is locally modeled by the vanishing locus of an equation defined on a manifold, or more generally an orbifold. These orbifolds and equations for different points need to satisfy some compatibility conditions. (See \cite[Definition A.1.1]{fooobook2} for a more precise definition of Kuranishi spaces.) Given a point of a space with a Kuranishi structure, the zeros of the corresponding equation might be cut down transversally. In that case, our space looks like an orbifold in a neighborhood of such a point. The main point is that such equations might not be transversal to zero and we might end up with a space which is not as regular as an orbifold. Nevertheless, a Kuranishi structure would be sufficient to have some of the interesting properties of smooth orbifolds. For example, it makes sense to talk about a space with a Kuranishi structure which has boundary and corners. In fact, the Kuranishi structures of Theorem \ref{Kura-exists} have boundary and corners. Any corner of this moduli space is given by appropriate fiber products of the spaces of the form $\mathcal M_{k+1}^{\rm RGW}(L_i;\beta)$ and $\mathcal M_{k_1,k_0}^{\rm RGW}(L_1,L_0;p,q;\beta)$. 
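To fix ideas, and following the conventions of \cite{fooobook2}, the local data of a Kuranishi structure at a point $p$ of a space $\mathcal X$ can be summarized as a chart \[ (V_p,\, E_p,\, \Gamma_p,\, s_p,\, \psi_p), \] where $V_p$ is a smooth manifold (possibly with boundary and corners), $\Gamma_p$ is a finite group acting on $V_p$, $E_p$ is a finite dimensional $\Gamma_p$-representation (the obstruction space), $s_p:V_p\to E_p$ is a $\Gamma_p$-equivariant map (the Kuranishi map), and $\psi_p$ is a homeomorphism from $s_p^{-1}(0)/\Gamma_p$ onto a neighborhood of $p$ in $\mathcal X$; charts at nearby points are related by coordinate changes satisfying the compatibility conditions mentioned above.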
\begin{thm-int}\label{Kura-comp} The Kuranishi structures on the moduli spaces $\mathcal M_{k+1}^{\rm RGW}(L_i;\beta)$ and $\mathcal M_{k_1,k_0}^{\rm RGW}(L_1,L_0;p,q;\beta)$ can be chosen such that they are compatible over the boundary and corners. These Kuranishi structures are also compatible with the forgetful maps of boundary marked points. \end{thm-int} Suppose the Lagrangians $L_0$, $L_1$ are {\it monotone in $X\backslash \mathcal D$}, namely, there is a positive constant $c$ such that for any $\beta\in \Pi_2(X;L_i):={\rm Im}(\pi_2(X,L_i)\to H_2(X,L_i;\Z))$ with $\beta \cap \mathcal D = 0$, we have: \[ \omega(\beta)=c\mu(\beta). \] Here $\mu:H_2(X,L_i;\Z) \to \Z$ is the Maslov index associated to $L_i$. The first part of Theorem \ref{Kura-comp} is the essential ingredient in the definition of Lagrangian Floer homology for the monotone pair $L_0$, $L_1$ and in verifying its independence from the auxiliary choices made in the construction. Given any of the boundary marked points, we may define a forgetful map from $\mathcal M_{k+1}^{\rm RGW}(L_i;\beta)$ to $\mathcal M_{k}^{\rm RGW}(L_i;\beta)$. Similarly, we can define forgetful maps for $\mathcal M_{k_1,k_0}^{\rm RGW}(L_1,L_0;p,q;\beta)$. The second part of Theorem \ref{Kura-comp} concerns compatibility with these maps, which is necessary in the definition of Floer homology when the minimal Maslov index of one of the Lagrangians is $2$. For a more precise version of Theorem \ref{Kura-comp}, see Theorem \ref{lema362rev}, Theorem \ref{comp-forg}, \cite[Lemma 3.70]{DF1} and \cite[Lemma 3.75]{DF1}. One of the novel features of the RGW compactification is that it has some strata which consist of obstructed objects by default. To make this point clearer, we make a comparison with the stable map compactification. In the stable map compactification of the moduli space of holomorphic discs, each stratum is described by a fiber product of the moduli spaces of holomorphic discs and strips.
If each of the moduli spaces appearing in such a fiber product consists of Fredholm regular elements and the fiber product is transversal, then the moduli space in a neighborhood of this stratum consists of regular objects and hence it is a smooth orbifold in this neighborhood. However, the situation in the case of the RGW compactification looks significantly different. There are strata of the compactification which belong to the singular locus of the moduli space, even if each element of the associated fiber product is Fredholm regular and the fiber product is cut down transversely. This subtlety is the main point where our treatment diverges from the proof of the analogues of Theorems \ref{Kura-exists} and \ref{Kura-comp} for the stable map compactification of the moduli spaces of holomorphic discs and strips. (See \cite{fooobook2,foootech,fooo:tech2-1,fooo:tech2-2} for such results in the context of the stable map compactification.) We resolve the above issue by introducing the notion of {\it inconsistent solutions} to the Cauchy-Riemann equation. Under the assumption of the previous paragraph, the space of inconsistent solutions forms a smooth manifold. Moreover, the elements of the moduli spaces $\mathcal M_{k+1}^{\rm RGW}(L_i;\beta)$ and $\mathcal M_{k_1,k_0}^{\rm RGW}(L_1,L_0;p,q;\beta)$ can be regarded as the zero sets of appropriate equations on the moduli space of inconsistent solutions. We treat these equations as extra terms for the Kuranishi maps. We believe this approach could also be useful for the analysis of relative Gromov-Witten theory in the symplectic category. We also believe that this idea, as well as some of the arguments provided in this paper, can be generalized to study various conjectures proposed in \cite[Section 6]{DF1}. The main steps of the construction of the Kuranishi structures required for the proof of Theorems \ref{Kura-exists} and \ref{Kura-comp} are parallel to the ones for the stable map compactification.
Throughout the paper, we point out relevant references for the corresponding results in the context of the stable map compactification. At the same time, we try to make our exposition as self-contained as possible. One of the exceptions is the exponential decay result of \cite{foooexp}, where the same arguments can be used to prove the exponential decay estimate that we need in this paper. \vspace{20pt}\noindent {\bf Outline of Contents.} In order to make the main ideas of the construction clearer, we devote the first part of the paper to the construction of a Kuranishi chart around a special point in the RGW compactification of moduli spaces of discs. This special point, described in Section \ref{subsec:gluing1}, belongs to a stratum of the moduli space which is always obstructed. Motivated by this example, we introduce the notion of inconsistent solutions in Section \ref{sub:statement}. The stratum of this special example is given by the fiber product of a moduli space of discs and two moduli spaces of spheres. In Sections \ref{sub:Fred} and \ref{sub:Obst}, we study the deformation theory of the elements of the moduli space within this stratum. A Kuranishi chart for each element of this stratum is constructed in Section \ref{sub:kuraconst}. The main analytical results required for the construction of the Kuranishi chart are verified in Section \ref{sub:proofmain}. In Section \ref{sub:kuraconst}, we explain how the method of the first part of the paper can be used to construct a Kuranishi chart around any point of the RGW moduli space. Section \ref{sub:kuracont} is devoted to showing that these Kuranishi charts are compatible with each other using appropriate coordinate changes. This completes the proof of Theorem \ref{Kura-exists}. We study compatibility of Kuranishi structures at boundary components and corners in Section \ref{subsub:statecoma}.
In order to verify the second part of Theorem \ref{Kura-comp}, compatibility of Kuranishi structures with forgetful maps is studied in Section \ref{subsub:compforget}. Since the case of strips is only notationally heavier, we focus on the moduli spaces of discs up to this point in the paper. In the final section, we turn our attention back to the moduli spaces of strips. Using the general theory of Kuranishi structures and the system of Kuranishi structures provided by Theorems \ref{Kura-exists} and \ref{Kura-comp}, we can construct a system of multi-valued perturbations on such moduli spaces. These perturbations allow us to make a virtual count of the elements of the moduli spaces of RGW strips and show that such counts are independent of auxiliary choices. Thus they provide the crucial ingredient for the construction of \cite[Section 4]{DF1}. \section{A Special Point of the Moduli Spaces of Discs} \label{subsec:gluing1} In the first half of the paper, we focus on the analysis of a special case. We hope that this allows the main features of our construction to stand out. The special case can be described as follows. Let $\Sigma$ be a surface with nodal singularities, which has three irreducible components $\Sigma_{\rm d}$, $\Sigma_{\rm s}$ and $\Sigma_{\rm D}$. The irreducible component $\Sigma_{\rm d}$ is a disc and the remaining ones are spheres. The components $\Sigma_{\rm d}$, $\Sigma_{\rm s}$ and $\Sigma_{\rm D}$ are respectively called the {\it disc component}, the {\it sphere component} and the {\it divisor component}. The divisor component $\Sigma_{\rm D}$ intersects $\Sigma_{\rm d}$ and $\Sigma_{\rm s}$ at the points $z_{\rm d}$ and $z_{\rm s}$, respectively. When we want to emphasize that we consider these points as elements of $\Sigma_{\rm D}$, we denote them by $z_{\rm D, d}$ and $z_{\rm D, s}$. There is no intersection between $\Sigma_{\rm d}$ and $\Sigma_{\rm s}$. We are given a $J$-holomorphic map $u : (\Sigma,\partial \Sigma) \to (X,L)$.
The restrictions of this map to $\Sigma_{\rm d}$, $\Sigma_{\rm s}$, $\Sigma_{\rm D}$ are denoted by $u_{\rm d}$, $u_{\rm s}$, $u_{\rm D}$. We assume that the image of $u_{\rm D}$ is contained in the divisor ${\mathcal D}$. The images of $\Sigma_{\rm d}$, $\Sigma_{\rm s}$ intersect $\mathcal D$ only at the points $z_{\rm d}$ and $z_{\rm s}$ with multiplicities $2$ and $3$, respectively. Following \cite[Section 3]{DF1}, we also associate a {\it level function} $\lambda$ that evaluates to $0$ at the components $\Sigma_{\rm d}$ and $\Sigma_{\rm s}$ and to $1$ at $\Sigma_{\rm D}$. We assume that there is one boundary marked point $z_0$ on $\Sigma_{\rm d}$. We also assume that the homology class $(u_{\rm D})_*([\Sigma_{\rm D}])$ satisfies the following identity (compare to \cite[Condition (3.12)]{DF1}): \[ 2 + 3 + c_1(\mathcal N_{\mathcal D}(X)) \cap (u_{\rm D})_*([\Sigma_{\rm D}]) = 0. \] This condition implies that there exists a meromorphic section $\frak s$ of $(u_{\rm D})^*\mathcal N_{\mathcal D}(X)$ such that $\frak s$ has a pole of order $2$ (resp.\ $3$) at $z_{\rm d}$ (resp.\ $z_{\rm s}$), and $\frak s$ has no other pole or zero. The choice of this section $\frak s$ is unique up to a multiplicative constant in $\C_*$. We fix one such section $\frak s$ and define: \begin{equation}\label{U-D} U_{\rm D}: \Sigma_{\rm D}\setminus \{z_{\rm d},z_{\rm s}\}\to \mathcal N_{\mathcal D}(X) \setminus {\mathcal D} = \R \times S\mathcal N_{\mathcal D}(X) \end{equation} where $U_{\rm D}(z)$, for $z\in \Sigma_{\rm D}\setminus \{z_{\rm d},z_{\rm s}\}$, is defined to be $(u_{\rm D}(z),\frak s(z))$. The Riemann surface $\Sigma$ and the {\it detailed ribbon tree} corresponding to $u$ are sketched in Figures \ref{FIgsec6-1}, \ref{Figuresec6-2}. These data define an element of $\mathcal M^{\rm RGW}_{1}(L;\beta)$ for an appropriate choice of $\beta \in H_2(X,L;\Z)$. See \cite[Section 3]{DF1} for the definitions of detailed ribbon trees and moduli spaces $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$.
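Let us spell out the elementary degree count behind the existence and uniqueness of $\frak s$; this is standard and included only for the reader's convenience. The displayed condition says that
\[
\deg\big((u_{\rm D})^*\mathcal N_{\mathcal D}(X)\big)
= c_1(\mathcal N_{\mathcal D}(X)) \cap (u_{\rm D})_*([\Sigma_{\rm D}]) = -(2+3) = -5 .
\]
A meromorphic section with a pole of order $2$ at $z_{\rm d}$, a pole of order $3$ at $z_{\rm s}$, and no other zero or pole has divisor of total degree $-5$, matching the degree of the line bundle, so such a section $\frak s$ exists on $\Sigma_{\rm D} \cong S^2$. Moreover, the ratio of two such sections is a holomorphic function on $\Sigma_{\rm D}$ without zeros or poles, hence a constant in $\C_*$. This explains the uniqueness statement above.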
Constructing a Kuranishi neighborhood for this element of $\mathcal M^{\rm RGW}_{1}(L;\beta)$ is the main goal of the first half of the paper. \begin{figure}[h] \centering \includegraphics[scale=0.5]{FIgsec61} \caption{Detailed tree of the element we study.} \label{FIgsec6-1} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.5]{Figuresec62} \caption{The source curves of $(u_{\rm d},u_{\rm D},u_{\rm s})$.} \label{Figuresec6-2} \end{figure} \section{Fredholm Theory of the Irreducible Components} \label{sub:Fred} In this section, we shall be concerned with the deformations of the restrictions of the map $u$ to the irreducible components $\Sigma_{\rm d}$, $\Sigma_{\rm s}$ and $\Sigma_{\rm D}$. We will see that the deformation theory of each irreducible component is governed by a Fredholm operator. Throughout this section, we use cylindrical coordinates both for the target and the source. There is a neighborhood $\frak U$ of the divisor ${\mathcal D}$ with a {\it partial $\C_*$-action} such that the complex structure $J$ on $\frak U$ is integrable. (See \cite[Subsection 3.2]{DF1} for the definition of partial $\C_*$-actions and \cite[Subsection 3.3]{DF1} for the existence of $\frak U$.) We may assume that the open set $\frak U$ is chosen such that its closure is diffeomorphic to: \begin{equation}\label{subset61} [0,\infty)_{\tau} \times S\mathcal N_{\mathcal D}(X) \end{equation} where $S\mathcal N_{\mathcal D}(X)$ is the unit $S^1$-bundle associated to the normal bundle $\mathcal N_{\mathcal D}(X)$ of $\mathcal D$ in $X$. We use $\tau$ to denote the standard coordinate on $[0,\infty)$. The 1-form $\theta:=-\frac{1}{r}d\tau\circ J$ determines a connection 1-form for the $S^1$-bundle $S\mathcal N_{\mathcal D}(X)$. Let $g'$ be a metric on $\mathcal D$ which is fixed for the rest of the paper.
We also fix a metric $g$ on $X \setminus {\mathcal D}$ such that its restriction to \eqref{subset61} is given by: \begin{equation}\label{g-cylinder-end} g|_{[0,\infty)_{\tau} \times S\mathcal N_{\mathcal D}(X)}=d\tau^2+\theta^2+g'. \end{equation} In particular, $g$ is invariant with respect to the partial $\C_* \cong (-\infty,\infty) \times S^1$-action on \eqref{subset61}, where $(-\infty,\infty)$ acts (partially) by translation along the factor $[0,\infty)_{\tau} $ and the action of $S^1$ is induced by the obvious circle action on $S\mathcal N_{\mathcal D}(X)$. We also fix another metric $g_{NC}$ on $X\setminus \mathcal D$ whose restriction to \eqref{subset61} has the following form: \begin{equation}\label{g-smooth} g_{NC}|_{[0,\infty)_{\tau} \times S\mathcal N_{\mathcal D}(X)}=e^{-2\tau}(d\tau^2+\theta^2)+g'. \end{equation} This non-cylindrical metric extends to $\mathcal D$ to give a smooth metric on $X$ which is also denoted by $g_{NC}$. \begin{rem} We do not make any assumption on compatibility of the metric $g'$ with the almost complex structure or the symplectic structure. \end{rem} The metric $g$ determines the decomposition: \begin{equation} \label{decom-tan-bdle} TX|_{\frak U}=\underbrace{\R\oplus \R}_\C \oplus \pi^*(T\mathcal D) \end{equation} where the first factor is given by the action of $(-\infty,\infty)\times \{1\} \subset (-\infty,\infty) \times S^1$, the second factor is given by the action of $ \{1\}\times S^1 \subset (-\infty,\infty) \times S^1$, and the third factor is given by the vectors orthogonal to the first two factors. Note that the last factor and the direct sum of the first two factors determine complex subspaces of $TX|_{\frak U}$. \subsection{The Disk Component} \label{subsub:disk} The surface $\Sigma_{\rm d}$ can be identified uniquely with the standard unit disc $D^2\subset \C$ such that $z_0$ and $z_{\rm d}$ are mapped to $1$ and $0$. 
The map $u_{\rm d} : D^2 \to X$ induces a map from $D^2 \setminus \{0\}$ to $X \setminus \mathcal D$, which we also denote by $u_{\rm d}$. We identify $D^2 \setminus \{0\}$ with $[0,\infty)_{r_1} \times S^1_{s_1}$ and denote the standard coordinates on the $[0,\infty)$ and $S^1$ factors by $r_1$ and $s_1$. Namely, the point $(r_1,s_1)\in [0,\infty) \times S^1$ is mapped to $\exp(-r_1-\sqrt{-1} s_1)$. (Here and in what follows, $S^1$ is identified with $\R/2\pi\Z$.) Since the multiplicity of the intersection of $u_{\rm d}$ and $\mathcal D$ at $z_{\rm d}$ is $2$, there exist $R_{\rm d}\in \R$ and $x_{\rm d} \in S\mathcal N_{\mathcal D}(X)$ such that: \begin{equation}\label{form62} d_{C^m}( u_{\rm d}(r_1, s_1),(2r_1+R_{\rm d},2 s_1+x_{\rm d})) \le C_m e^{-\delta_1r_1} \end{equation} for some $C_m,\delta_1 > 0$. The constant $\delta_1$ is independent of $m$ and in fact, we can pick it to be $1$. Here we regard $(2r_1+R_{\rm d},2 s_1+x_{\rm d})$ as an element of $[0,\infty)_{\tau}\times S\mathcal N_{\mathcal D}(X)$ using the partial action of $(-\infty,\infty)\times S^1$ on $[0,\infty)_{\tau}\times S\mathcal N_{\mathcal D}(X)$. The expression on the left hand side of \eqref{form62} is the $C^m$ distance between the following two maps from $[0,\infty)_{r_1} \times S^1_{s_1}$ to $X\setminus \mathcal D$: \[ (r_1, s_1) \mapsto u_{\rm d}(r_1, s_1) \hspace{1cm}(r_1, s_1) \mapsto (2r_1+R_{\rm d},2 s_1+x_{\rm d}) \] Note that there exists $R'_{\rm d}>0$ such that $u_{\rm d}(r_1, s_1)$ is an element of \eqref{subset61} for $r_1 > R'_{\rm d}$. The $C^m$ norm is defined with respect to the cylindrical metric on $D^2\setminus \{0\}$ and the metric $g$ on $X\setminus \mathcal D$. The inequalities in \eqref{form62} are immediate consequences of the fact that $u_{\rm d} : D^2 \setminus \{0\} \to X \setminus \mathcal D$ extends to a holomorphic map from $D^2$ to $X$, which intersects $\mathcal D$ with multiplicity $2$.
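The shape of the model map $(r_1,s_1)\mapsto(2r_1+R_{\rm d},2s_1+x_{\rm d})$ can be seen from the following heuristic computation, in which we suppress the directions tangent to $\mathcal D$ and work in a local holomorphic trivialization of $\mathcal N_{\mathcal D}(X)$ near $u_{\rm d}(z_{\rm d})$; the precise constants depend on conventions. A multiplicity $2$ intersection means that, in the fiber direction,
\[
u_{\rm d}(z) = a\, z^2\,\big(1+O(z)\big), \qquad a \in \C_* .
\]
Substituting $z=\exp(-r_1-\sqrt{-1}s_1)$, the cylindrical coordinate $\tau = -\log|\cdot|$ of the image satisfies
\[
\tau\big(u_{\rm d}(r_1,s_1)\big) = 2r_1 - \log|a| + O(e^{-r_1}),
\]
and similarly the angular coordinate is $2s_1$ plus a constant, up to an error of size $e^{-r_1}$. This is the source of the linear model $(2r_1+R_{\rm d},2s_1+x_{\rm d})$ in \eqref{form62} and of the fact that $\delta_1$ can be taken to be $1$.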
\begin{defn}\label{sections-V} We define $C^{\infty}(([0,\infty)\times S^1, \{0\}\times S^1);(u_{\rm d}^*TX,u_{\rm d}^*TL))$ to be the space of all smooth sections $V$ of $u_{\rm d}^*TX$ on the space $[0,\infty)\times S^1$ with the boundary condition: \[ V(0,s_1) \in T_{u_{\rm d}(0,s_1)}L. \] \end{defn} We extend each vector $v \in T_{u_{\rm d}(z_{\rm d})}{\mathcal D}$ to a vector field defined on a neighborhood of $u_{\rm d}(z_{\rm d})$ in ${\mathcal D}$. Let $\hat v$ be the horizontal lift of this vector field using the decomposition in \eqref{decom-tan-bdle} to a neighborhood of $u_{\rm d}(z_{\rm d})$ in $X$. We may assume that the map $v \mapsto \hat v$ is linear. Using \eqref{decom-tan-bdle}, we can also obtain a vector field $[\frak r_{\infty},\frak s_{\infty}]$ on $X\setminus \mathcal D$ for each $(\frak r_{\infty},\frak s_{\infty}) \in \R \times \R$. These vector fields are also $(-\infty,\infty) \times S^1$-invariant. \begin{defn}\label{defn6262} Let $C^{\infty}_0(([0,\infty)\times S^1, \{0\}\times S^1);(u_{\rm d}^*TX,u_{\rm d}^*TL))^+$ be the space of all triples $(V,(\frak r_{\infty},\frak s_{\infty}),v)$ such that $V \in C^{\infty}(([0,\infty)\times S^1, \{0\}\times S^1);(u_{\rm d}^*TX,u_{\rm d}^*TL))$, $(\frak r_{\infty},\frak s_{\infty}) \in \R \times \R$, $v \in T_{u_{\rm d}(z_{\rm d})}{\mathcal D}$, and \[ V - [\frak r_{\infty},\frak s_{\infty}] -\hat v \] has compact support. We define a weighted Sobolev norm on this vector space as follows: \begin{equation}\label{form6464} \aligned &\Vert(V,(\frak r_{\infty},\frak s_{\infty}),v) \Vert_{W^2_{m,\delta}}^2 \\ = &\Vert V\Vert_{L^2_{m}([0,R'_{\rm d}]\times S^1)}^2 \\ &+ \sum_{j=0}^m\int_{[R'_{\rm d},\infty)\times S^1} e^{\delta r_1} \vert \nabla^j(V - [\frak r_{\infty},\frak s_{\infty}] -\hat v)\vert^2 d r_1d s_1 \\ &+ \vert(\frak r_{\infty},\frak s_{\infty})\vert^2 + \vert v\vert^2. 
\endaligned \end{equation} Later we shall be concerned with the case that $\delta >0$ is a sufficiently small positive number and $m$ is a sufficiently large positive integer. We denote by \[ W^2_{m,\delta}(\Sigma_{\rm d} \setminus \{z_{\rm d}\};(u_{\rm d}^*TX,u_{\rm d}^*TL)) \] the completion of $C^{\infty}_0(([0,\infty)_{ r_1} \times S^1_{ s_1},\{0\}\times S^1_{s_1});(u_{\rm d}^*TX,u^*_{\rm d}TL))^+$ with respect to the norm $\Vert \cdot \Vert_{W^2_{m,\delta}}$. This completion is a Hilbert space and is independent of how we extend the vectors $v$ to $\hat v$. \end{defn} \begin{defn}\label{coker-weighted-sob} Let $C^{\infty}_0([0,\infty) \times S^1;u_{\rm d}^*TX \otimes \Lambda^{0,1})$ be the space of all smooth sections with compact supports, and define a weighted Sobolev norm on it by: \[ \Vert V\Vert_{L^2_{m,\delta}}^2= \Vert V\Vert_{L^2_{m}([0,R'_{\rm d}]\times S^1)}^2 +\sum_{j=0}^m\int_{[R'_{\rm d},\infty)\times S^1} e^{\delta r_1} \vert \nabla^j(V)\vert^2 d r_1d s_1. \] The completion of $C^{\infty}_0([0,\infty) \times S^1;u_{\rm d}^*TX \otimes \Lambda^{0,1})$ with respect to the norm $\Vert \cdot \Vert_{L^2_{m,\delta}}$ is denoted by \begin{equation}\label{hilb6566} L^2_{m,\delta}(\Sigma_{\rm d} \setminus \{z_{\rm d}\};u_{\rm d}^*TX \otimes \Lambda^{0,1}). \end{equation} \end{defn} Linearization of the Cauchy-Riemann equation at $u_{\rm d}$ gives a first order differential operator \[ \aligned D_{u_{\rm d}}\overline \partial : &C_0^{\infty}(([0,\infty) \times S^1,\{0\}\times S^1);(u_{\rm d}^*TX,u^*_{\rm d}TL)) \\&\to C^{\infty}_0([0,\infty) \times S^1;u_{\rm d}^*TX \otimes \Lambda^{0,1}). 
\endaligned \] \begin{lem}\label{lem6363} \begin{enumerate} \item The operator $D_{u_{\rm d}}\overline \partial$ induces a continuous linear map \begin{equation}\label{fredholmmap1} \aligned D_{u_{\rm d}}\overline \partial : &W^2_{m+1,\delta}(\Sigma_{\rm d} \setminus \{z_{\rm d}\};(u_{\rm d}^*TX,u_{\rm d}^*TL)) \\ &\to L^2_{m,\delta}(\Sigma_{\rm d} \setminus \{z_{\rm d}\};u_{\rm d}^*TX \otimes \Lambda^{0,1}). \endaligned \end{equation} In particular, for an element: \[(V,(\frak r_{\infty},\frak s_{\infty}),v) \in C^{\infty}_0(([0,\infty)\times S^1, \{0\}\times S^1);(u_{\rm d}^*TX,u_{\rm d}^*TL))^+\] we have $D_{u_{\rm d}}\overline \partial(V,(\frak r_{\infty},\frak s_{\infty}),v)=D_{u_{\rm d}}\overline \partial(V)$. \item \eqref{fredholmmap1} is a Fredholm operator. \item The index of the operator \eqref{fredholmmap1} is equal to the virtual dimension of the moduli space $\mathcal M^{\rm reg,d}_1(\beta_{\rm d};(2))$\footnote{See \cite[Definition 3.35]{DF1} for the definition of this moduli space. Here $(2)$ stands for the multiplicity number $2$.} which contains $u_{\rm d}$. \end{enumerate} \end{lem} \begin{proof} Part (1) is a consequence of \eqref{form62}. (We choose $\delta$ to be smaller than the constant $\delta_1$ in \eqref{form62}.) The differential operator $D_{u_{\rm d}}\overline \partial$ is asymptotic to an operator of the form \[ \frac{\partial}{\partial r_1} + P \] as $r_1$ goes to infinity. Furthermore, $P = J \partial/\partial s_1$ and the kernel of this operator can be identified with $\R \oplus \R \oplus T_{u_{\rm d}(z_{\rm d})}{\mathcal D}$. Part (2) is a consequence of this observation and general results about Fredholm operators on manifolds with cylindrical ends \cite{APS:I}. Part (3) is also standard. \end{proof} \subsection{The Sphere Component} \label{subsub:sphere} In this part, we study the linearization of the problem governing the map $u_{\rm s}$. This can be done similarly to the case of $u_{\rm d}$.
We take a compact subset $K_{\rm s}$ of $\Sigma_{\rm s} \setminus \{z_{\rm s}\}$ such that $u_{\rm s}(\Sigma_{\rm s} \setminus K_{\rm s})$ is contained in \eqref{subset61}. We may assume that $\Sigma_{\rm s} \setminus K_{\rm s}$ is a disk. We take a coordinate $( r_2,s_2) \in [0,\infty) \times S^1$ of $\Sigma_{\rm s} \setminus (K_{\rm s}\cup \{z_{\rm s}\})$ such that $( r_2,s_2)$ is identified with $\exp(- r_2-\sqrt{-1}s_2) \in D^2 \setminus \{0\} \cong \Sigma_{\rm s} \setminus (K_{\rm s} \cup \{z_{\rm s}\})$. In the same way as in \eqref{form62}, we have the following inequality: \begin{equation}\label{form62rev} d_{C^m}(u_{\rm s}( r_2,s_2),(3 r_2+R_{\rm s},3s_2+x_{\rm s})) \le C_m e^{-\delta_1 r_2}, \end{equation} for a constant $R_{\rm s}$ and $x_{\rm s} \in S\mathcal N_{\mathcal D}(X)$. \begin{defn}{\rm (Compare to \cite[Lemma 7.1.5]{fooobook2}.)}\label{defn64444} Let: \[ C^{\infty}_0(\Sigma_{\rm s} \setminus \{z_{\rm s}\};u_{\rm s}^*TX)^+ \] be the space of all triples $(V,(\frak r_{\infty},\frak s_{\infty}),v)$ such that $V\in C^{\infty}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};u_{\rm s}^*TX)$, $(\frak r_{\infty},\frak s_{\infty}) \in \R \times \R$, $v \in T_{u_{\rm s}(z_{\rm s})}{\mathcal D}$ and: \[ V - [\frak r_{\infty},\frak s_{\infty}] -\hat v \] is compactly supported.\footnote {Note that $(\frak r_{\infty},\frak s_{\infty})$ and $v$ determine vector fields $[\frak r_{\infty},\frak s_{\infty}]$ and $\hat v$ on \eqref{subset61} in the same way as in the last subsection.} Analogous to \eqref{form6464}, we define a Sobolev norm on this space as follows: \begin{equation}\label{form6464-2} \aligned &\Vert(V,(\frak r_{\infty},\frak s_{\infty}),v) \Vert_{W^2_{m,\delta}}^2 \\ = &\Vert V\Vert_{L^2_{m}(K_{\rm s})}^2 \\ &+ \sum_{j=0}^m\int_{[0,\infty)\times S^1} e^{\delta r_2} \vert \nabla^j(V - [\frak r_{\infty},\frak s_{\infty}] -\hat v)\vert^2 d r_2d s_2 \\ &+ \vert(\frak r_{\infty},\frak s_{\infty})\vert^2 + \vert v\vert^2.
\endaligned \end{equation} We shall be concerned with the case that $\delta$ is a sufficiently small positive number and $m$ is a sufficiently large positive integer. We denote by \[ W^2_{m,\delta}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};u_{\rm s}^*TX) \] the completion of $C^{\infty}_0(\Sigma_{\rm s} \setminus \{z_{\rm s}\};u_{\rm s}^*TX)^+$ with respect to the norm $\Vert \cdot \Vert_{W^2_{m,\delta}}$. This completion is a Hilbert space. We can also define the Hilbert space: \[ L^2_{m,\delta}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};u_{\rm s}^*TX \otimes \Lambda^{0,1}), \] in the same way as in \eqref{hilb6566}. \end{defn} \begin{lem} \begin{enumerate} \item The linearization of the Cauchy-Riemann equation at $u_{\rm s}$ defines a continuous linear map \begin{equation}\label{fredholmmap2ss} D_{u_{\rm s}}\overline \partial : W^2_{m+1,\delta}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};u_{\rm s}^*TX) \to L^2_{m,\delta}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};u_{\rm s}^*TX \otimes \Lambda^{0,1}). \end{equation} \item \eqref{fredholmmap2ss} is a Fredholm operator. \item The index of the operator \eqref{fredholmmap2ss} is equal to 4 plus the virtual dimension of the moduli space $\mathcal M^{\rm reg,s}(\beta_{\rm s};(3))$\footnote{ See \cite[Definition 3.36]{DF1}. $(3)$ stands for the multiplicity number 3.} which contains $u_{\rm s}$. \end{enumerate} \end{lem} The proof is similar to the proof of Lemma \ref{lem6363}. The number $4$, which appears in Item (3), is the dimension of the group of automorphisms of $(S^2,z_{\rm s})$. \subsection{The Divisor Component} \label{subsub:divisor} Finally, we analyze the deformation theory of $u_{\rm D}$. Note that $u_{\rm D}$ is a map to the K\"ahler manifold ${\mathcal D}$. We first describe a Fredholm theory for the deformation of $u_{\rm D}$ as a map to ${\mathcal D}$. This is a standard task in Gromov-Witten theory.
We have a Fredholm operator: \begin{equation} \label{CR-uD} D_{u_{\rm D}}\overline \partial : L^2_{m+1}(\Sigma_{\rm D};u_{\rm D}^*T{\mathcal D}) \to L^2_{m}(\Sigma_{\rm D};u_{\rm D}^*T{\mathcal D}\otimes \Lambda^{0,1}). \end{equation} To perform gluing analysis, we compare this Fredholm operator with another Fredholm operator associated to the map $U_{\rm D}$ in \eqref{U-D}. \begin{defn}\label{defn66666} As in the previous two subsections, we extend any $v_{\rm d} \in T_{u_{\rm d}(z_{\rm d})}\mathcal D$, $v_{\rm s} \in T_{u_{\rm s}(z_{\rm s})}\mathcal D$, to vector fields $\hat v_{\rm d}$, $\hat v_{\rm s}$ on open neighborhoods of the fibers of $\R_{\tau} \times S\mathcal N_{\mathcal D}(X)$ over $u_{\rm d}(z_{\rm d})$ and $u_{\rm s}(z_{\rm s})$. For any $(\frak r_{\infty},\frak s_{\infty}) \in \R \times \R$, we can also define a vector field $[\frak r_{\infty},\frak s_{\infty}]$ in a neighborhood of any of the points $u_{\rm d}(z_{\rm d})$ and $u_{\rm s}(z_{\rm s})$, as in the last two subsections. Let: \[ C^{\infty}_0(\Sigma_{\rm D} \setminus \{z_{\rm d}, z_{\rm s}\}; U_{\rm D}^*T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X) ))^+ \] be the space of all 5-tuples $(V,(\frak r_{\infty,\rm d},\frak s_{\infty,\rm d}), (\frak r_{\infty,\rm s},\frak s_{\infty,\rm s}),v_{\rm d},v_{\rm s})$ such that: \begin{enumerate} \item[(i)] The restriction of $V - [\frak r_{\infty,\rm d},\frak s_{\infty,\rm d}] -\hat v_{\rm d}$ to a punctured neighborhood of $z_{\rm d}$ in $\Sigma_{\rm D} \setminus \{z_{\rm d}, z_{\rm s}\}$ vanishes; \item[(ii)] The restriction of $V - [\frak r_{\infty,\rm s},\frak s_{\infty,\rm s}] -\hat v_{\rm s}$ to a punctured neighborhood of $z_{\rm s}$ in $\Sigma_{\rm D} \setminus \{z_{\rm d}, z_{\rm s}\}$ vanishes.
\end{enumerate} We define a weighted Sobolev norm on this space as follows: \begin{equation}\label{form64641rev} \aligned &\Vert(V,(\frak r_{\infty,\rm d},\frak s_{\infty,\rm d}), (\frak r_{\infty,\rm s},\frak s_{\infty,\rm s}),v_{\rm d},v_{\rm s}) \Vert_{W^2_{m,\delta}}^2 \\ = &\Vert V\Vert_{L^2_{m}(K_{\rm D})}^2 \\ &+ \sum_{j=0}^m\int_{[0,\infty)\times S^1} e^{\delta r_1} \vert \nabla^j(V - [\frak r_{\infty,\rm d},\frak s_{\infty,\rm d}] -\hat v_{\rm d})\vert^2 d r_1d s_1 \\ &+ \sum_{j=0}^m\int_{[0,\infty)\times S^1} e^{\delta r_2} \vert \nabla^j(V - [\frak r_{\infty,\rm s},\frak s_{\infty,\rm s}] -\hat v_{\rm s})\vert^2 d r_2d s_2 \\ &+ \vert(\frak r_{\infty,\rm d},\frak s_{\infty,\rm d})\vert^2 + \vert(\frak r_{\infty,\rm s},\frak s_{\infty,\rm s})\vert^2 +\vert v_{\rm d}\vert^2+\vert v_{\rm s}\vert^2. \endaligned \end{equation} In order to clarify the notation in \eqref{form64641rev}, the following comments are in order. We take a compact subset $K_{\rm D} \subset \Sigma_{\rm D} \setminus \{z_{\rm d}, z_{\rm s}\}$ such that $\Sigma_{\rm D} \setminus K_{\rm D}$ is the union of two discs. We fix coordinates $(r_1,s_1) \in [0,\infty) \times \R/2\pi\Z$ and $(r_2,s_2) \in [0,\infty) \times \R/2\pi\Z$ on the complement of the origins of these two discs. That is, we identify $[0,\infty) \times \R/2\pi\Z$ with $D^2\setminus \{0\}$ using $(r_i,s_i) \mapsto \exp(-(r_i+\sqrt{-1} s_i))$. We denote the completion of $C^{\infty}_0(\Sigma_{\rm D} \setminus \{z_{\rm d}, z_{\rm s}\}; U_{\rm D}^*T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X) ))^+$ with respect to the norm $\Vert\cdot \Vert_{W^2_{m+1,\delta}}$ by: $$ H_1:=W^2_{m+1,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\}; U_{\rm D}^*T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X))). $$ This completion is a Hilbert space.
We also define a Hilbert space \[ H_2:=L^2_{m,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\};U_{\rm D}^*T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X)) \otimes \Lambda^{0,1}), \] in the same way as in \eqref{hilb6566}. \end{defn} We have a short exact sequence of holomorphic bundles on $\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\}$ as follows: \[ 0 \to \underline {\C} \to U_{\rm D}^*T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X)) \to u_{\rm D}^*T\mathcal D \to 0. \] Here the first map is defined by the $\C_*$-action. This short exact sequence induces a diagram of the following form: \begin{equation} \label{diagram} \begin{CD} 0@>>>A_1@>>>H_1@>>>B_1@>>>0\\ @.@V{f}VV @V{g}VV@V{h}VV \\ 0@>>>A_2@>>>H_2@>>>B_2@>>>0\\ \end{CD} \end{equation} where we have: \[ A_1=W^2_{m+1,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\};\underline {\C}), \hspace{1cm} A_2=L^2_{m,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\};\Lambda^{0,1}), \] and \[ B_1=W^2_{m+1,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\};u_{\rm D}^*T\mathcal D), \hspace{.5cm} B_2=L^2_{m,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\};u_{\rm D}^*T\mathcal D \otimes \Lambda^{0,1}). \] The spaces $A_1$ and $B_1$ are defined similarly to $H_1$ in an obvious way. In the same way as in the proof of Lemma \ref{lem6363}, we can show that the linearization of the Cauchy-Riemann equation at $U_{\rm D}$ defines a continuous linear map: \begin{equation}\label{fredholmmap2ssrev} \aligned D_{U_{\rm D}}\overline \partial : &W^2_{m+1,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\};U_{\rm D}^*T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X))) \\ &\to L^2_{m,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\};U_{\rm D}^*T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X)) \otimes \Lambda^{0,1}),\endaligned \end{equation} which is the map $g$ in \eqref{diagram}.
The map $f$ is the standard Cauchy-Riemann operator and $h$ is the linearized Cauchy-Riemann operator associated to the map $u_{\rm D}$. The diagram in \eqref{diagram} commutes and each row of the diagram forms an exact sequence. \begin{lem} \begin{enumerate} \item The operator in \eqref{fredholmmap2ssrev} is Fredholm. \item The kernel and the cokernel of the operator $h$ in \eqref{diagram} can be identified with the kernel and the cokernel of $D_{u_{\rm D}}\overline\partial$ in \eqref{CR-uD}. Moreover, \eqref{diagram} induces a short exact sequence of the following form: \[ 0 \to \C\to {\rm Ker} D_{U_{\rm D}}\overline\partial \to {\rm Ker} D_{u_{\rm D}} \overline\partial \to 0. \] and an isomorphism \[ {\rm CoKer} D_{u_{\rm D}}\overline\partial \cong {\rm CoKer} D_{U_{\rm D}}\overline\partial. \] \end{enumerate} \end{lem} \begin{proof} The proof of the claim in (1) is similar to the proof of Lemma \ref{lem6363}. Identification of the kernels and cokernels of the operators $h$ and $D_{u_{\rm D}}\overline\partial$ is straightforward. Similarly, we can identify the kernel and cokernel of the operator $f$ with those of the Cauchy-Riemann operator associated to the trivial bundle on the sphere $\Sigma_{\rm D}$. The latter operator is surjective and its kernel is a copy of $\C$, consisting of the constant sections of the trivial bundle. We can use this observation and the properties of the diagram in \eqref{diagram} to obtain the remaining claims in part (2). \end{proof} \section{Stabilization of the Source Curves and the Obstruction Bundles} \label{sub:Obst} The operators $D_{u_{\rm d}}\overline \partial$, $D_{u_{\rm s}}\overline \partial$, $D_{u_{\rm D}}\overline \partial$ are not necessarily surjective. If these operators are not surjective, then the deformation theories of $u_{\rm d}, u_{\rm s}, u_{\rm D}$ are obstructed.
Following a general idea due to Kuranishi, we introduce obstruction bundles: \begin{defn}\label{defn6868} We consider linear subspaces $$ \aligned E_{\rm d} &\subset C^{\infty}(\Sigma_{\rm d} \setminus \{z_{\rm d}\};u_{\rm d}^*TX \otimes \Lambda^{0,1}), \\ E_{\rm s} &\subset C^{\infty}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};u_{\rm s}^*TX \otimes \Lambda^{0,1}), \\ E_{\rm D} &\subset C^{\infty}(\Sigma_{\rm D}\setminus \{z_{\rm d},z_{\rm s}\};u_{\rm D}^*T{\mathcal D} \otimes \Lambda^{0,1}) \endaligned $$ of finite dimensions, which have the following properties: \begin{enumerate} \item Elements of $E_{\rm d}$, $E_{\rm s}$, $E_{\rm D}$ have compact supports away from $z_{\rm d}$, $z_{\rm s}$, $\{z_{\rm d},z_{\rm s}\}$, respectively; \item $ {\rm Im} (D_{u_{\rm d}}\overline \partial) + E_{\rm d}= L^2_{m,\delta}(\Sigma_{\rm d} \setminus \{z_{\rm d}\};u_{\rm d}^*TX \otimes \Lambda^{0,1})$; \item ${\rm Im} (D_{u_{\rm s}}\overline \partial) + E_{\rm s} = L^2_{m,\delta}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};u_{\rm s}^*TX \otimes \Lambda^{0,1})$; \item ${\rm Im}(D_{u_{\rm D}}\overline \partial) + E_{\rm D} = L^2_{m}(\Sigma_{\rm D};u_{\rm D}^*T{\mathcal D}\otimes \Lambda^{0,1})$. \end{enumerate} We also require them to satisfy the mapping transversality condition of Definition \ref{defn6969}. \end{defn} \begin{defn}\label{defn6969} Let \[ {\mathcal{EV}}_{\rm d} : W^2_{m+1,\delta}(\Sigma_{\rm d} \setminus \{z_{\rm d}\};(u_{\rm d}^*TX,u_{\rm d}^*TL)) \to T_{u_{\rm d}(z_{\rm d})}\mathcal D \] be the continuous linear map that associates to a triple $(V,(\frak r_{\infty},\frak s_{\infty}),v)$ the vector $v$. The map \[ {\mathcal{EV}}_{\rm s} : W^2_{m+1,\delta}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};u_{\rm s}^*TX) \to T_{u_{\rm s}(z_{\rm s})}\mathcal D, \] is defined similarly. 
Finally, let: \[ {\mathcal{EV}}_{\rm D} := ({\mathcal{EV}}_{\rm D,d},{\mathcal{EV}}_{\rm D,s}) : L^2_{m+1}(\Sigma_{\rm D};u_{\rm D}^*T{\mathcal D}) \to T_{u_{\rm d}(z_{\rm d})}\mathcal D \oplus T_{u_{\rm s}(z_{\rm s})}\mathcal D \] be the map that associates to $V\in L^2_{m+1}(\Sigma_{\rm D};u_{\rm D}^*T{\mathcal D})$ the pair of vectors $(V(z_{\rm d}),V(z_{\rm s}))$. We say that $E_{\rm s}$, $E_{\rm d}$, $E_{\rm D}$ of the previous definition satisfy the {\it mapping transversality condition}, if the following map is surjective: \begin{equation}\label{form61616161} \aligned ({\mathcal{EV}}_{\rm d}+{\mathcal{EV}}_{\rm D,d},&{\mathcal{EV}}_{\rm s}+{\mathcal{EV}}_{\rm D,s}): \\ & (D_{u_{\rm d}}\overline \partial)^{-1}E_{\rm d} \oplus (D_{u_{\rm s}}\overline \partial)^{-1}E_{\rm s} \oplus (D_{u_{\rm D}}\overline \partial)^{-1}E_{\rm D} \\ &\to T_{u_{\rm d}(z_{\rm d})}\mathcal D \oplus T_{u_{\rm s}(z_{\rm s})}\mathcal D. \endaligned \end{equation} \end{defn} We shall use these obstruction spaces to define Kuranishi neighborhoods of the elements represented by $u_{\rm d}$, $u_{\rm s}$, $u_{\rm D}$, respectively. Note that at this stage we are studying the three irreducible components separately. The process of gluing them will be discussed in the next stage. The source curve $\Sigma_{\rm d}$ of $u_{\rm d}$ comes with one interior nodal point $z_{\rm d}$ and one boundary marked point $z_0$. The group of automorphisms of $\Sigma_{\rm d}$ preserving $z_{\rm d}$ and $z_0$ is trivial and hence $\Sigma_{\rm d}$ together with these marked points is stable. Moreover, this source curve does not have any deformation parameter. However, the source curve $\Sigma_{\rm s}$ of $u_{\rm s}$ comes with only one interior nodal point $z_{\rm s}$. Therefore, it is unstable and we add two extra marked points $w_{{\rm s},1}$, $w_{{\rm s},2}$ such that it becomes stable with no deformation parameter.
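This stability claim follows from an elementary count, which we record only for the reader's convenience. Identify $\Sigma_{\rm s}$ with $\C P^1$ so that $z_{\rm s} = \infty$. Then the automorphisms of $\Sigma_{\rm s}$ preserving $z_{\rm s}$ form the affine group
\[
{\rm Aut}(\Sigma_{\rm s},z_{\rm s}) \cong \{z \mapsto az + b \mid a \in \C_*,\ b \in \C\},
\]
which has complex dimension two. Requiring an automorphism to also fix the two auxiliary points $w_{{\rm s},1}$, $w_{{\rm s},2}$ reduces this group to the trivial one. Moreover, the moduli space of a genus zero curve with three special points is a single point, so no deformation parameter appears.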
Similarly, the source curve $\Sigma_{\rm D}$ of $u_{\rm D}$ comes with two interior nodal points $z_{\rm s}$, $z_{\rm d}$ and is unstable. We add one marked point $w_{\rm D}$ so that it becomes stable without any deformation parameter. We follow the method of \cite[appendix]{FO} to use transversal submanifolds for the purpose of killing the extra freedom of moving the auxiliary marked points. (See also \cite[Section 20]{foootech}, \cite[Subsection 9.3]{fooo:const1}.) Namely, we fix submanifolds $\mathcal N_{{\rm s},1}$, $\mathcal N_{{\rm s},2}$ and $\mathcal N_{\rm D}$ with the following properties: \begin{conds}\label{conds610} \begin{enumerate} \item $\mathcal N_{{\rm s},1}$, $\mathcal N_{{\rm s},2}$ are codimension $2$ smooth submanifolds of $X$, and $\mathcal N_{\rm D}$ is a codimension $2$ smooth submanifold of $\mathcal D$. \item For $i=1,2$, there exists an open neighborhood $\mathcal V_{\rm s}(w_{{\rm s},i})$ of $w_{{\rm s},i}$ such that $\mathcal V_{\rm s}(w_{{\rm s},i}) \cap u_{\rm s}^{-1}(\mathcal N_{{\rm s},i}) = \{w_{{\rm s},i}\}$ and $u_{\rm s}\vert_{\mathcal V_{\rm s}(w_{{\rm s},i})}$ is transversal to $\mathcal N_{{\rm s},i}$ at $w_{{\rm s},i}$. \item There exists an open neighborhood $\mathcal V_{\rm D}(w_{\rm D})$ of $w_{\rm D}$ such that $\mathcal V_{\rm D}(w_{\rm D}) \cap u^{-1}_{\rm D}(\mathcal N_{\rm D}) = \{w_{\rm D}\}$ and $u_{\rm D}\vert_{\mathcal V_{\rm D}(w_{\rm D})}$ is transversal to $\mathcal N_{\rm D}$ at $w_{\rm D}$. \end{enumerate} \end{conds} We now define (Kuranishi) neighborhoods of $u_{\rm d}$, $u_{\rm s}$, $u_{\rm D}$ as follows. Let $u'_{\rm d} : (\Sigma_{\rm d} \setminus \{z_{\rm d}\},\partial \Sigma_{\rm d}) \to (X \setminus \mathcal D,L)$ (resp. $u'_{\rm s} : \Sigma_{\rm s} \setminus \{z_{\rm s}\} \to X \setminus \mathcal D$, $u'_{\rm D} : \Sigma_{\rm D} \to \mathcal D$) be an $L^2_{m+1,loc}$ map such that: \begin{equation}\label{form617} \aligned d(u_{\rm d}(x),u'_{\rm d}(x)) \le \epsilon \quad &\text{(resp.
$d(u_{\rm s}(x),u'_{\rm s}(x)) \le \epsilon$},\\ &\qquad\text{ $d(u_{\rm D}(x),u'_{\rm D}(x)) \le \epsilon$.)} \endaligned \end{equation} for any $x \in {\rm Int}(\Sigma_{\rm d} \setminus \{z_{\rm d}\})$ (resp. $x \in \Sigma_{\rm s}\setminus \{z_{\rm s}\}$, $x \in \Sigma_{\rm D}$). Here $d$ is defined with respect to the metric $g$ on $X\setminus \mathcal D$ or the metric $g'$ on $\mathcal D$, introduced at the beginning of Section \ref{sub:Fred}. We wish to define: \begin{equation}\label{form618} \aligned &E_{\rm d}(u'_{\rm d}) \subset L^2_m(\Sigma_{\rm d} \setminus \{z_{\rm d}\};(u_{\rm d}')^*TX \otimes \Lambda^{0,1}) \\ &E_{\rm s}(u'_{\rm s}) \subset L^2_m(\Sigma_{\rm s} \setminus \{z_{\rm s}\};(u_{\rm s}')^*TX \otimes \Lambda^{0,1})\\ &E_{\rm D}(u'_{\rm D}) \subset L^2_m(\Sigma_{\rm D};(u_{\rm D}')^*T\mathcal D \otimes \Lambda^{0,1}) \endaligned \end{equation} which are finite dimensional subspaces consisting of elements with compact supports. Firstly we need to impose an additional constraint on $E_{\rm d}$, $E_{\rm s}$, $E_{\rm D}$. \begin{conds} If $x \in \Sigma_{\rm d}$ (resp. $x \in \Sigma_{\rm s}$, $x \in \Sigma_{\rm D}$) is in the support of an element of $E_{\rm d}$ (resp. $E_{\rm s}$, $E_{\rm D}$), then $u_{\rm d}$ (resp. $u_{\rm s}$, $u_{\rm D}$) is an immersion at $x$. \end{conds} This condition in particular implies that $E_{\rm d}$ (resp. $E_{\rm s}$, $E_{\rm D}$) is zero, if $u$ is constant on $\Sigma_{\rm d}$ (resp. $\Sigma_{\rm s}$, $\Sigma_{\rm D}$). Using the fact that $\Sigma$ has genus $0$, we can always take $E_{\rm d}$, $E_{\rm s}$, $E_{\rm D}$ satisfying this additional condition. We denote by ${\rm Supp} (E_{\rm d})$ (resp. ${\rm Supp} (E_{\rm s})$, ${\rm Supp} (E_{\rm D})$) the union of the supports of elements of $E_{\rm d}$ (resp. $E_{\rm s}$, $E_{\rm D}$). We define a map $I^{\rm t}_{\rm d} : {\rm Supp} (E_{\rm d}) \to \Sigma_{\rm d}$ (resp. 
$I^{\rm t}_{\rm s} : {\rm Supp} (E_{\rm s}) \to \Sigma_{\rm s}$, $I^{\rm t}_{\rm D} : {\rm Supp} (E_{\rm D}) \to \Sigma_{\rm D}$) as follows. (Here ${\rm t}$ stands for target.) For $x \in {\rm Supp} (E_{\rm d})$ (resp. $x \in {\rm Supp} (E_{\rm s})$, $x \in {\rm Supp} (E_{\rm D})$), the point $I^{\rm t}_{\rm d}(x)$ (resp. $I^{\rm t}_{\rm s}(x)$, $I^{\rm t}_{\rm D}(x)$) is given by the following conditions: \begin{conds}\label{cods612} \begin{enumerate} \item We require that the distance between $x$ and $I^{\rm t}_{\rm d}(x)$ (resp. $I^{\rm t}_{\rm s}(x)$, $I^{\rm t}_{\rm D}(x)$) is smaller than the constant $\epsilon$. We choose $\epsilon$ small enough such that \eqref{form617} and this condition imply: $$ \aligned d(u_{\rm d}(x),u'_{\rm d}(I^{\rm t}_{\rm d}(x))) \le o, \quad &\text{(resp. $d(u_{\rm s}(x),u'_{\rm s}(I^{\rm t}_{\rm s}(x))) \le o$},\\ &\qquad\text{ $d(u_{\rm D}(x),u'_{\rm D}(I^{\rm t}_{\rm D}(x))) \le o$.)} \endaligned $$ where $o$ is a constant smaller than the injectivity radii of $X \setminus \mathcal D$ and $\mathcal D$.\footnote{Note that $\Sigma_{\rm d}$, $\Sigma_{\rm s}$, $\Sigma_{\rm D}$ together with marked points are stable without automorphisms. In a more general case, some of the irreducible components might have automorphisms. In that case the analogues of the map $I^{\rm t}_{\rm d}$ are required to be equivariant with respect to the automorphisms of the source curve.} \item The condition in (1) implies that there exists a unique minimal geodesic $\gamma_{\rm d} : [0,1] \to X\setminus \mathcal D$ (resp. $\gamma_{\rm s} : [0,1] \to X\setminus \mathcal D$, $\gamma_{\rm D} : [0,1] \to \mathcal D$) joining $u_{\rm d}(x)$ to $u'_{\rm d}(I^{\rm t}_{\rm d}(x))$ (resp. 
$u_{\rm s}(x)$ to $u'_{\rm s}(I^{\rm t}_{\rm s}(x))$, $u_{\rm D}(x)$ to $u'_{\rm D}(I^{\rm t}_{\rm D}(x))$.)\footnote{ Here the geodesics are defined with respect to the metric $g$ on $X\setminus \mathcal D$ and the metric $g'$ on $\mathcal D$.} We require that the vector $(d\gamma_{\rm d}/dt)(0)$ (resp. $(d\gamma_{\rm s}/dt)(0)$, $(d\gamma_{\rm D}/dt)(0)$) is perpendicular to the image of $u_{\rm d}$ (resp. $u_{\rm s}$, $u_{\rm D}$) at $t=0$. \end{enumerate} \end{conds} We fix a unitary connection on $T (X\setminus \mathcal D)$, whose restriction to $\frak U$ is given by the direct sum of the trivial connection on $\underline \C$ and a unitary connection on $T\mathcal D$. In particular, this connection is invariant with respect to the partial $\C_*$-action. The parallel transport along the geodesics $\gamma_{\rm d}$, $\gamma_{\rm s}$ with respect to this unitary connection induces complex linear maps: \[ T_{u_{\rm d}(x)}X \to T_{u'_{\rm d}(I^{\rm t}_{\rm d}(x))}X, \quad T_{u_{\rm s}(x)}X \to T_{u'_{\rm s}(I^{\rm t}_{\rm s}(x))}X. \] We thus obtain bundle maps: \[ u_{\rm d}^*TX \to (u'_{\rm d}\circ I^{\rm t}_{\rm d})^*TX, \quad u_{\rm s}^*TX \to (u'_{\rm s}\circ I^{\rm t}_{\rm s})^*TX. \] By differentiating and projecting to the $(0,1)$ part, we also obtain bundle maps: \[ d^{0,1}I^{\rm t}_{\rm d} : \Lambda^{0,1} \to (I^{\rm t}_{\rm d})^* \Lambda^{0,1},\quad d^{0,1}I^{\rm t}_{\rm s} : \Lambda^{0,1} \to (I^{\rm t}_{\rm s})^*\Lambda^{0,1}. \] We may assume that these maps are isomorphisms by choosing $\epsilon$ to be small enough. Taking tensor product gives rise to the maps: $$ \aligned &u_{\rm d}^*TX\otimes \Lambda^{0,1} \to (u'_{\rm d}\circ I^{\rm t}_{\rm d})^*TX\otimes (I^{\rm t}_{\rm d})^*\Lambda^{0,1}, \\ &u_{\rm s}^*TX\otimes \Lambda^{0,1} \to (u'_{\rm s}\circ I^{\rm t}_{\rm s})^*TX\otimes (I^{\rm t}_{\rm s})^*\Lambda^{0,1}. 
\endaligned $$ which induce linear maps: \begin{equation}\label{PAL} \aligned \mathcal{PAL} : &L^2_m({\rm Supp}E_{\rm d};u_{\rm d}^*TX \otimes \Lambda^{0,1}) \to L^2_{m,loc}(\Sigma_{\rm d} \setminus \{z_{\rm d}\};(u_{\rm d}')^*TX \otimes \Lambda^{0,1}), \\ \mathcal{PAL} : &L^2_m({\rm Supp}E_{\rm s};u_{\rm s}^*TX \otimes \Lambda^{0,1}) \to L^2_{m,loc}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};(u_{\rm s}')^*TX \otimes \Lambda^{0,1}). \endaligned \end{equation} We now define \begin{equation}\label{newform620} E_{\rm d}(u'_{\rm d})= \mathcal{PAL}(E_{\rm d}),\quad E_{\rm s}(u'_{\rm s})=\mathcal{PAL}(E_{\rm s}). \end{equation} \begin{defn} We denote by $\mathcal U_{\rm d}$ (resp. $\mathcal U_{\rm s}$) the set of $L^2_{m+1,loc}$ maps $u'_{\rm d} : (\Sigma_{\rm d} \setminus \{z_{\rm d}\},\partial \Sigma_{\rm d}) \to (X \setminus \mathcal D,L)$ (resp. $u_{\rm s}' : \Sigma_{\rm s}\setminus \{z_{\rm s}\} \to X\setminus\mathcal D$) with the following properties: \begin{enumerate} \item The $C^2$-distance between $u_{\rm d}$ and $u'_{\rm d}$ (resp. $u_{\rm s}$ and $u'_{\rm s}$) is less than $\epsilon$. \item The equation \[ \overline \partial u_{\rm d}' \in E_{\rm d}(u'_{\rm d}),\quad \text{(resp. $\overline \partial u_{\rm s}' \in E_{\rm s}(u'_{\rm s})$)} \] is satisfied. \item There exists $p \in \mathcal D$ such that \[ \lim_{x \to z_{\rm d}} u_{\rm d}'(x) = p,\quad \text{(resp. $\lim_{x \to z_{\rm s}} u_{\rm s}'(x) = p$)}. \] \item In the latter case, $u_{\rm s}'(w_{{\rm s},1}) \in \mathcal N_{{\rm s},1}$ and $u_{\rm s}'(w_{{\rm s},2}) \in \mathcal N_{{\rm s},2}$. \end{enumerate} We define $\mathcal U^+_{\rm s}$ to be the set of maps $u_{\rm s}'$ satisfying (1), (2) and (3), but not necessarily (4). \end{defn} Note that standard regularity results imply that elements of $\mathcal U_{\rm d}$ and $\mathcal U_{\rm s}$ are smooth. 
In the same way as in the case of $u'_{\rm d}$, $u'_{\rm s}$, for $u'_{\rm D} : \Sigma_{\rm D} \to \mathcal D$ with $d(u_{\rm D}(x),u'_{\rm D}(x)) \le \epsilon$, we define: \begin{equation}\label{PAL-D} \mathcal{PAL} : L^2_m({\rm Supp}E_{\rm D};u_{\rm D}^*T\mathcal D \otimes \Lambda^{0,1})\to L^2_m(\Sigma_{\rm D};(u_{\rm D}')^*T\mathcal D \otimes \Lambda^{0,1}) \end{equation} using the map $I^{\rm t}_{\rm D}$ and parallel transport with respect to the chosen unitary connection on $T\mathcal D$. We also define: \begin{equation}\label{formula621} E_{\rm D}(u'_{\rm D})= \mathcal{PAL}(E_{\rm D}). \end{equation} \begin{rem} Since $\Sigma_{\rm d}$, $\Sigma_{\rm s}$ and $\Sigma_{\rm D}$ with the marked points are stable, we can use the identity map instead of $I_{\rm d}^{\rm t}$, $I_{\rm s}^{\rm t}$ and $I_{\rm D}^{\rm t}$. This is the approach used in \cite{FO,fooobook2} and many other places in the literature. We call our choice here the {\it target space parallel transportation}.\footnote{A similar method was used in \cite[page 250, Condition 4.3.27]{foooast}.} This method works better for our construction of Subsection \ref{subsub:existobst}. \end{rem} \begin{defn} We denote by $\mathcal U_{\rm D}$ the set of $L^2_m$ maps $u_{\rm D}' : \Sigma_{\rm D} \to \mathcal D$ with the following properties: \begin{enumerate} \item The $C^2$-distance between $u_{\rm D}$ and $u'_{\rm D}$ is less than $\epsilon$. \item The equation \begin{equation} \overline \partial u_{\rm D}' \in E_{\rm D}(u'_{\rm D}) \end{equation} is satisfied. \item $u_{\rm D}'(w_{{\rm D}}) \in \mathcal N_{{\rm D}}$. \end{enumerate} We define $\mathcal U^+_{\rm D}$ to be the set of maps $u_{\rm D}'$ satisfying (1) and (2), but not necessarily (3). 
\end{defn} We define maps: \[ {\rm ev}_{\rm d} : \mathcal U_{\rm d} \to \mathcal D, \quad{\rm ev}_{\rm s} : \mathcal U_{\rm s} \to \mathcal D, \quad({\rm ev}_{\rm D,d},{\rm ev}_{\rm D,s}) : \mathcal U_{\rm D} \to \mathcal D \times \mathcal D, \] by \[ \aligned &{\rm ev}_{\rm d}(u_{\rm d}') := u_{\rm d}'(z_{\rm d}),\quad {\rm ev}_{\rm s}(u_{\rm s}') := u_{\rm s}'(z_{\rm s}), \\ &{\rm ev}_{\rm D,d}(u_{\rm D}') := u_{\rm D}'(z_{\rm d}),\quad{\rm ev}_{\rm D,s}(u_{\rm D}') := u_{\rm D}'(z_{\rm s}). \endaligned \] We summarize their properties as follows. \begin{lem}\label{smooth-nbhd-comp} If $\epsilon$ is small enough, then we have: \begin{enumerate} \item $\mathcal U_{\rm d}$, $\mathcal U_{\rm D}$, $\mathcal U_{\rm s}$ are smooth manifolds. \item The maps ${\rm ev}_{\rm d}$, ${\rm ev}_{\rm D,d}$, ${\rm ev}_{\rm D,s}$, ${\rm ev}_{\rm s}$ are smooth. \item The fiber product \begin{equation}\label{form6210} \mathcal U_{\rm d} \,\,{}_{{\rm ev}_{\rm d}}\times_{{\rm ev}_{\rm D,d}} \mathcal U_{\rm D}\,\,{}_{{\rm ev}_{\rm D,s}}\times_{{\rm ev}_{\rm s}}\mathcal U_{\rm s} \end{equation} is transversal. \end{enumerate} \end{lem} \begin{proof} Part (1) is a consequence of the implicit function theorem using the assumptions in Definition \ref{defn6868}. Part (2) follows from the way we set up the Fredholm theory. Part (3) follows from the surjectivity of the map \eqref{form61616161}. \end{proof} The fiber product \eqref{form6210} describes a Kuranishi neighborhood of any element $[\Sigma,z_0,u]$ of the stratum of $\mathcal M^{\rm RGW}_1(L;\beta)$ consisting of objects with the combinatorial data given in Section \ref{subsec:gluing1}. Next, we carry out the gluing construction and construct a Kuranishi neighborhood of $[\Sigma,z_0,u]$ in the moduli space $\mathcal M^{\rm RGW}_1(L;\beta)$. Let $D^2$ be the unit disk in the complex plane and $D^2(r)$ denote $r\cdot D^2$.
We fix coordinate charts: \begin{equation}\label{newform624} \aligned &\varphi_{\rm d} : {\rm Int}(D^2) \to \Sigma_{\rm d}, \quad &\varphi_{\rm D,\rm d} : {\rm Int}(D^2) \to \Sigma_{\rm D}, \\ &\varphi_{\rm D,\rm s} : {\rm Int}(D^2) \to \Sigma_{\rm D}, \quad &\varphi_{\rm s} : {\rm Int}(D^2) \to \Sigma_{\rm s}, \endaligned \end{equation} which are bi-holomorphic maps onto the image and $\varphi_{\rm d}(0) = z_{\rm d}$, $\varphi_{\rm D,\rm d}(0) = z_{\rm D,\rm d}$, $\varphi_{\rm D,\rm s}(0) = z_{\rm D,\rm s}$, $\varphi_{\rm s}(0) = z_{\rm s}$. We assume that the marked points $w_{\rm D}$, $w_{{\rm s},i}$ do not belong to the image of the above coordinate charts. For $\sigma_1,\sigma_2 \in D^2\backslash \{0\}$, we form the disk $\Sigma(\sigma_1,\sigma_2)$ as follows. Consider the disjoint union: \begin{equation}\label{form622} \aligned (\Sigma_{\rm d} \setminus \varphi_{\rm d}(D^2(\vert\sigma_1\vert))) &\sqcup (\Sigma_{\rm D} \setminus (\varphi_{\rm D,\rm d}(D^2(\vert\sigma_1\vert)) \cup \varphi_{\rm D,\rm s}(D^2(\vert\sigma_2\vert))) \\ &\sqcup (\Sigma_{\rm s} \setminus \varphi_{\rm s}(D^2(\vert\sigma_2\vert))). \endaligned \end{equation} and define the equivalence relation $\sim$ on \eqref{form622} as follows: \begin{enumerate} \item[(gl-i)] If $z_1z_2 = \sigma_1$, $z_1,z_2 \in D^2$, then $\varphi_{\rm d}(z_1) \sim \varphi_{\rm D,\rm d}(z_2)$. \item[(gl-ii)] If $z_1z_2 = \sigma_2$, $z_1,z_2 \in D^2$, then $\varphi_{\rm s}(z_1) \sim \varphi_{\rm D,\rm s}(z_2)$. \end{enumerate} Then $\Sigma(\sigma_1,\sigma_2)$ is the quotient space of (\ref{form622}) by this equivalence relation. See Figure \ref{Figuresec6-3} below. The above definition can be extended to the case that $\sigma_1$ or $\sigma_2$ vanishes. 
For example, if $\sigma_2=0$, then \eqref{form622} is replaced with: \begin{equation} (\Sigma_{\rm d} \setminus \varphi_{\rm d}(D^2(\vert\sigma_1\vert))) \sqcup (\Sigma_{\rm D} \setminus \varphi_{\rm D,\rm d}(D^2(\vert\sigma_1\vert))) \sqcup \Sigma_{\rm s}, \end{equation} where we use the identification in {\rm (gl-i)}, and the identification in {\rm (gl-ii)} is replaced with $\varphi_{\rm s}(0) \sim \varphi_{\rm D,\rm s}(0)$. \begin{figure}[h] \centering \includegraphics[scale=0.6]{Figure63} \caption{$\Sigma(\sigma_1,\sigma_2)$} \label{Figuresec6-3} \end{figure} We also define: \begin{equation}\label{metaform626} \aligned \Sigma_{\rm d}(\sigma_1) &= \Sigma_{\rm d} \setminus \varphi_{\rm d}(D^2(\vert\sigma_1\vert)), \\ \Sigma_{\rm s}(\sigma_2) &= \Sigma_{\rm s} \setminus \varphi_{\rm s}(D^2(\vert\sigma_2\vert)), \\ \Sigma_{\rm D}(\sigma_1,\sigma_2) &=\Sigma_{\rm D} \setminus (\varphi_{\rm D,\rm d}(D^2(\vert\sigma_1\vert)) \cup \varphi_{\rm D,\rm s}(D^2(\vert\sigma_2\vert))). \endaligned \end{equation} By construction, there exist bi-holomorphic embeddings: \begin{equation} \aligned &I_{\rm d} : \Sigma_{\rm d}(\sigma_1) \to \Sigma(\sigma_1,\sigma_2), \quad I_{\rm s} : \Sigma_{\rm s}(\sigma_2) \to \Sigma(\sigma_1,\sigma_2), \\ &I_{\rm D} : \Sigma_{\rm D}(\sigma_1,\sigma_2) \to \Sigma(\sigma_1,\sigma_2). \endaligned \end{equation} Let $u'_{\rm d} : \Sigma_{\rm d}(\sigma_1) \to X \setminus \mathcal D$, $u'_{\rm s} : \Sigma_{\rm s}(\sigma_2) \to X \setminus \mathcal D$, $U'_{\rm D} : \Sigma_{\rm D}(\sigma_1,\sigma_2) \to \frak U\subset X \setminus \mathcal D$ be $L^2_{m+1}$ maps such that $u_{\rm d}'$, $u_{\rm s}'$, $u'_{\rm D}:=\pi \circ U'_{\rm D}$ are close to the restrictions of $u_{\rm d}$, $u_{\rm s}$, $u_{\rm D}$ in the same sense as in \eqref{form617}.
We define: \begin{equation}\label{Ed-s} \aligned &E_{\rm d}(u'_{\rm d}) \subset L^2_m(\Sigma_{\rm d}(\sigma_1);(u_{\rm d}')^*TX \otimes \Lambda^{0,1}) \\ &E_{\rm s}(u'_{\rm s}) \subset L^2_m(\Sigma_{\rm s}(\sigma_2);(u_{\rm s}')^*TX \otimes \Lambda^{0,1}) \endaligned \end{equation} similar to \eqref{newform620}, using target space parallel transportation. Next, we define: \begin{equation} E_{\rm D}( U'_{\rm D}) \subset L^2_m(\Sigma_{\rm D}(\sigma_1,\sigma_2);(U_{\rm D}')^*TX \otimes \Lambda^{0,1}) \end{equation} as follows. Since $u'_{\rm D}$ is close to $u_{\rm D}$, we can use the same construction as in \eqref{formula621} to define: \[ E'_{\rm D}(u'_{\rm D})\subset L^2_m(\Sigma_{\rm D}(\sigma_1,\sigma_2); (u_{\rm D}')^*T\mathcal D \otimes \Lambda^{0,1}). \] Then the decomposition in \eqref{decom-tan-bdle} allows us to define: \begin{equation}\label{ED} E_{\rm D}(U'_{\rm D}) \subset L^2_m(\Sigma_{\rm D}(\sigma_1,\sigma_2);(U_{\rm D}')^*TX \otimes \Lambda^{0,1}). \end{equation} By construction, we have isomorphisms \begin{equation}\label{form63000} \mathcal P_{\rm d} : E_{\rm d}(u'_{\rm d}) \to E_{\rm d}, \hspace{.5cm} \mathcal P_{\rm s} : E_{\rm s}(u'_{\rm s}) \to E_{\rm s}, \hspace{.5cm} \mathcal P_{\rm D} : E_{\rm D}(U'_{\rm D}) \to E_{\rm D}. \end{equation} Recall that we fixed a codimension 2 submanifold $\mathcal N_{\rm D} \subset \mathcal D$. We define $\widehat{\mathcal N}_{\rm D} \subset X$ to be its inverse image in the tubular neighborhood $\frak U$ of $\mathcal D$ in $X$ under the projection map $\pi$. In the following definition, $\epsilon$ is the same constant as in Lemma \ref{smooth-nbhd-comp}. We may make this constant smaller as we move through the paper whenever it is necessary. \begin{defn}\label{defnn614} We denote by $\mathcal U_0$ the set of all triples $(u',\sigma_1,\sigma_2)$ where $\sigma_1, \sigma_2 \in D^2(\epsilon)$.
In the case that $\sigma_1$ and $\sigma_2$ are non-zero, $(u',\sigma_1,\sigma_2)$ needs to satisfy the following properties: \begin{enumerate} \item $u' : \Sigma(\sigma_1,\sigma_2) \to X\backslash \mathcal D$ is a smooth map. \item Let: \[ u'_{\rm d} := u' \circ I_{\rm d},\quad u'_{\rm s} := u' \circ I_{\rm s},\quad U'_{\rm D} = u' \circ I_{\rm D}. \] Then the $C^2$ distance of $u'_{\rm d}$ (resp. $u'_{\rm s}$) from the restriction of $u_{\rm d}$ (resp. $u_{\rm s}$) to $\Sigma_{\rm d}(\sigma_1)$ (resp. $\Sigma_{\rm s}(\sigma_2)$) is less than $\epsilon$. The maps $U'_{\rm D}$ and $U_{\rm D}$ are also $C^2$-close to each other in the sense that the image of $U_{\rm D}'$ is contained in the open set $\frak U$ and there is a constant $r$ such that the $C^2$ distance of $U'_{\rm D}$ and ${\rm Dil}_r\circ U_{\rm D}$, restricted to $\Sigma_{\rm D}(\sigma_1,\sigma_2)$, is less than $\epsilon$.\footnote{ The $C^2$ distances in part (2) of this definition are defined with respect to the metric $g$ on $X\backslash \mathcal D$ and the metric on $\mathcal N_{\mathcal D}(X)\setminus \mathcal D$ which has the form in \eqref{g-cylinder-end}.} \item (Modified non-linear Cauchy-Riemann equation) $u'_{\rm d}$, $u'_{\rm s}$, $U'_{\rm D}$ satisfy the equations: \begin{equation}\label{eq630} \overline \partial u_{\rm d}' \in E_{\rm d}(u'_{\rm d}), \quad \overline \partial u_{\rm s}' \in E_{\rm s}(u'_{\rm s}), \quad \overline \partial U_{\rm D}' \in E_{\rm D}(U'_{\rm D}). \end{equation} \item (Transversal constraints) We also require: \begin{equation}\label{eq631} u'(w_{\rm D}) \in \widehat{\mathcal N}_{\rm D}, \quad u'(w_{\rm s,1}) \in \mathcal N_{{\rm s},1}, \quad u'(w_{\rm s,2}) \in {\mathcal N}_{{\rm s},2}. \end{equation} Here we use $I_{\rm D}$, $I_{\rm s}$ to regard $w_{\rm D}$, $w_{{\rm s},i}$ as elements of $\Sigma(\sigma_1,\sigma_2)$.
In the case that one of the constants $\sigma_1$ and $\sigma_2$ vanishes, the other one is also zero, and $u'$ is an element of the fiber product \eqref{form6210}. \end{enumerate} \end{defn} One might hope that the space $\mathcal U_0$ is cut down transversely by \eqref{eq630} and \eqref{eq631}, and hence it could be used to define a Kuranishi neighborhood of $[\Sigma,z_0,u]$ in $\mathcal M^{\rm RGW}_1(L;\beta)$. However, this naive expectation does not hold. Roughly speaking, if it did hold, then one would obtain a solution for any element of the fiber product \eqref{form6210} close to $[\Sigma,z_0,u]$ and any small values of $\sigma_1$, $\sigma_2$. On the other hand, as a consequence of \cite[Proposition 3.63]{DF1}, the stratum in \eqref{form6210} has real codimension $2$ in our case, which is a contradiction. Note that this is in contrast with the stable map compactification, where a fiber product of the form \eqref{form6210} has codimension $4$. To resolve this issue, we introduce a space $\mathcal U$ larger than $\mathcal U_0$ such that $\mathcal U$ is a smooth manifold and $\mathcal U_0$ is cut out from $\mathcal U$ by an equation of the following form: \begin{equation}\label{equation631} \sigma_1^2 = c\sigma_2^3. \end{equation} The space $\mathcal U$ is realized as the moduli space of {\it inconsistent solutions}, which will be defined in the next section. Note that the set of solutions of \eqref{equation631} has a singularity at the locus $\sigma_1 = \sigma_2 = 0$. \section{Inconsistent Solutions and the Main Analytical Result} \label{sub:statement} In this section, we discuss the main step where the construction of the Kuranishi chart in our situation differs from the case of the stable map compactification.
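Before giving the definition of inconsistent solutions, we record an elementary observation which may clarify the role of \eqref{equation631}; it is a heuristic and is not used in the proofs. For a fixed nonzero constant $c$ and a fixed branch of $c^{1/2}$, the solution set of $\sigma_1^2 = c\sigma_2^3$ admits the parametrization
\[
s \mapsto (\sigma_1,\sigma_2) = (c^{1/2}s^3,\ s^2),
\]
since $(c^{1/2}s^3)^2 = c\,(s^2)^3$. Hence this solution set is a complex curve, of real codimension $2$ in $D^2(\epsilon) \times D^2(\epsilon)$, with a cuspidal singularity at $\sigma_1 = \sigma_2 = 0$. This is consistent with the fact mentioned above that the stratum \eqref{form6210} has real codimension $2$.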
\begin{defn}\label{defn615inconsis} For $\sigma_1,\sigma_2\in D^2(\epsilon)$, an {\it inconsistent solution} is a 7-tuple $(u_{\rm d}',u_{\rm s}',U_{\rm D}',\sigma_1,\sigma_2,\rho_1,\rho_2)$ satisfying the following properties: \begin{enumerate} \item $u_{\rm d}' : \Sigma_{\rm d}(\sigma_1) \to X \setminus \mathcal D$, $u_{\rm s}' : \Sigma_{\rm s}(\sigma_2) \to X \setminus \mathcal D$, $U_{\rm D}' : \Sigma_{\rm D}(\sigma_1,\sigma_2) \to \mathcal N_{\mathcal D}(X)\setminus \mathcal D$. The $C^2$ distances of $u_{\rm d}'$, $u_{\rm s}'$ and $U_{\rm D}'$ from $u_{\rm d}$, $u_{\rm s}$ and $U_{\rm D}$ are less than $\epsilon$.\footnote{ Here we use the same convention as in Definition \ref{defnn614} to define the $C^2$-distances.} \item The following equations are satisfied: \begin{equation}\label{eq630-def} \overline \partial u_{\rm d}' \in E_{\rm d}(u'_{\rm d}), \quad\overline \partial u_{\rm s}' \in E_{\rm s}(u'_{\rm s}),\quad\overline \partial U_{\rm D}' \in E_{\rm D}( U'_{\rm D}). \end{equation} Here $E_{\rm d}(u'_{\rm d})$, $E_{\rm s}(u'_{\rm s})$ and $E_{\rm D}(U'_{\rm D})$ are defined as in \eqref{Ed-s} and \eqref{ED} using target space parallel transport. \item We require the following transversal constraints: \begin{equation}\label{eq631-p} \pi \circ U_{\rm D}'(w_{\rm D}) \in\mathcal N_{\rm D},\quad u_{\rm s}'(w_{\rm s,1}) \in \mathcal N_{{\rm s},1},\quad u_{\rm s}'(w_{\rm s,2}) \in {\mathcal N}_{{\rm s},2}. \end{equation} \item Let $z_1,z_2 \in D^2$. \begin{enumerate} \item If $z_1z_2 = \sigma_1$, then: \[ u_{\rm d}'(\varphi_{\rm d}(z_1)) =({\rm Dil}_{\rho_1} \circ U_{\rm D}')(\varphi_{\rm D,\rm d}(z_2)). \] In particular, we assume that the left hand side is contained in the open neighborhood $\frak U$ of $\mathcal D$. \item If $z_1z_2 = \sigma_2$, then: \[ u_{\rm s}'(\varphi_{\rm s}(z_1)) = ({\rm Dil}_{\rho_2} \circ U_{\rm D}')(\varphi_{\rm D,\rm s}(z_2)).
\] \end{enumerate} \end{enumerate} We say two inconsistent solutions $(u^{(j)}_{\rm s},u^{(j)}_{\rm d},U^{(j)}_{\rm D},\sigma^{(j)}_1,\sigma^{(j)}_2,\rho^{(j)}_1,\rho^{(j)}_2)$, $j=1,2$, are {\it equivalent} if the following holds: \begin{enumerate} \item[(i)] $u^{(1)}_{\rm d} =u^{(2)}_{\rm d}$, $u^{(1)}_{\rm s} =u^{(2)}_{\rm s}$, $\sigma^{(1)}_1 = \sigma^{(2)}_1$, $\sigma^{(1)}_2 = \sigma^{(2)}_2$. \item[(ii)] There exists a nonzero complex number $c$ such that: \[ U^{(2)}_{\rm D} = {\rm Dil}_{1/c} \circ U^{(1)}_{\rm D},\qquad \rho^{(2)}_1 = c\rho^{(1)}_1, \qquad\rho^{(2)}_2 = c\rho^{(1)}_2. \] \end{enumerate} We will write $\mathcal U$ for the set of all equivalence classes of inconsistent solutions. \end{defn} \begin{rem} In the above definition, we include the case that $\sigma_1$ or $\sigma_2$ is $0$ in the following way: \begin{enumerate} \item If $\sigma_1 = 0$ (resp. $\sigma_2 = 0$), then the condition (4) (a) (resp. (b)) is replaced by the condition that $u_{\rm d}'(\varphi_{\rm d}(0)) = \pi \circ U_{\rm D}'(\varphi_{\rm D,\rm d}(0))$ (resp. $u_{\rm s}'(\varphi_{\rm s}(0)) =\pi \circ U_{\rm D}'(\varphi_{\rm D,\rm s}(0))$); \item If $\sigma_1 = 0$ (resp. $\sigma_2 = 0$), then $\rho_1 = 0$ (resp. $\rho_2 = 0$). \end{enumerate} In the case that exactly one of $\sigma_1$ and $\sigma_2$ is zero, the source curve $\Sigma(\sigma_1,\sigma_2)$ has only one node. Such source curves do not appear in $\mathcal U_0$. However, there are elements of this form in $\mathcal U$.
\end{rem} Below we state our main analytic results about $\mathcal U$: \begin{prop}\label{prop617} If $\epsilon$ is small enough, then the moduli space $\mathcal U$ is a smooth manifold diffeomorphic to\footnote{See Remark \ref{rem619} for the definition of the smooth structure of $D^2(\epsilon)$.}: \begin{equation}\label{form633} (\mathcal U_{\rm d} \,\,{}_{{\rm ev}_{\rm d}}\times_{{\rm ev}_{\rm D,d}} \mathcal U_{\rm D} \,\,{}_{{\rm ev}_{\rm D,s}}\times_{{\rm ev}_{\rm s}}\mathcal U_{\rm s}) \times D^2(\epsilon) \times D^2(\epsilon). \end{equation} The diffeomorphism has the following properties: \begin{enumerate} \item This diffeomorphism identifies the projection to the factor $D^2(\epsilon) \times D^2(\epsilon)$ with: \[ [u'_{\rm d},u'_{\rm s},U'_{\rm D},\sigma_1,\sigma_2,\rho_1,\rho_2] \mapsto (\sigma_1,\sigma_2). \] \item There exist smooth functions $\hat\rho_i : \mathcal U \to \C$, $i=1,2$, such that any element $q$ of $\mathcal U$ has a representative whose $\rho_i$ component is equal to $\hat\rho_i(q)$. Moreover, there exists a homeomorphism: \begin{equation}\label{rho1rho2hito} \mathcal U_0 \cong \{ \frak y \in \mathcal U \mid \hat\rho_1(\frak y) = \hat\rho_2(\frak y)\}. \end{equation} This homeomorphism is given as follows. Let: \[ \frak y = [u'_{\rm d},u'_{\rm s},U'_{\rm D},\sigma_1,\sigma_2,\hat\rho_1,\hat\rho_2] \] be an element of the right hand side of \eqref{rho1rho2hito} with $c = \hat\rho_1 = \hat\rho_2$. Then we can glue the three maps $u'_{\rm d},u'_{\rm s},{\rm Dil}_{c}\circ U'_{\rm D}$ as in {\rm (gl-i),(gl-ii)} to obtain a map $u' : \Sigma(\sigma_1,\sigma_2) \to X$. This gives the desired element of the left hand side of \eqref{rho1rho2hito}.
\end{enumerate} \end{prop} \begin{rem} We can take our diffeomorphism so that its restriction to $(\mathcal U_{\rm d} \,\,{}_{{\rm ev}_{\rm d}}\times_{{\rm ev}_{\rm D,d}} \mathcal U_{\rm D} \,\,{}_{{\rm ev}_{\rm D,s}}\times_{{\rm ev}_{\rm s}}\mathcal U_{\rm s})\times \{(0,0)\}$ is the obvious one. We can also specify the choice of $\hat\rho_i$ in (2) above by requiring \begin{equation}\label{formula634} \hat\rho_1 = \sigma_1^2. \end{equation} From now on, we will take this choice unless otherwise mentioned explicitly. The proof we will give implies that: \[ \hat{\rho}_2(\frak y) = f(\frak y) \sigma_2^3, \] where $f$ is a nonzero smooth function. \end{rem} The next proposition is the exponential decay estimate similar to those in the case of the stable map compactification. (See \cite{foooexp} for the details of the proof of this exponential decay estimate in the case of the stable map compactification.) To state our exponential decay estimate, we need to introduce some notation. We define $T_i \in [0,\infty)$, $\theta_i \in \R/2\pi\Z$ by the formula: \begin{equation}\label{form635} \sigma_i = \exp(-(T_i+\sqrt{-1}\theta_i)). \end{equation} The exponential decay estimate is stated in terms of $T_i$ and $\theta_i$. Let $\xi = (u^{\xi}_{\rm d},u^{\xi}_{\rm D},u^{\xi}_{\rm s})$ be an element of the fiber product \eqref{form6210}.
The triple $(\xi,\sigma_1=\exp(-(T_1+\sqrt{-1}\theta_1)),\sigma_2=\exp(-(T_2+\sqrt{-1}\theta_2)))$ determines an element of $\mathcal U$, which is denoted by: \begin{equation}\label{form637777new} \aligned &\frak x(\xi,T_1,T_2,\theta_1,\theta_2)\\ &=(u^{\xi}_{\rm d}(T_1,T_2,\theta_1,\theta_2;\cdot),u^{\xi}_{\rm D}(T_1,T_2,\theta_1,\theta_2;\cdot),u^{\xi}_{\rm s}(T_1,T_2,\theta_1,\theta_2;\cdot),\\ &\qquad\sigma_1,\sigma_2,\rho_1(\xi,T_1,T_2,\theta_1,\theta_2), \rho_2(\xi,T_1,T_2,\theta_1,\theta_2)). \endaligned \end{equation} Here we fix the representative by requiring (\ref{formula634}), namely, $\rho_1(\xi,T_1,T_2,\theta_1,\theta_2) = \sigma_1^2$. Let $\frak R_2 \in [0,\infty)$, $\eta_2 \in \R/2\pi\Z$ be functions of $\xi$, $T_1,T_2,\theta_1,\theta_2$ given by $$ \rho_2(\xi,T_1,T_2,\theta_1,\theta_2) = \exp(-(\frak R_2+\sqrt{-1} \eta_2)). $$ \begin{prop}\label{prop618} \begin{enumerate} \item Let $u_\circ^{\xi}$ be one of $u^{\xi}_{\rm d}$, $u^{\xi}_{\rm s}$, $u^{\xi}_{\rm D}$. Then for any compact subset $K$ of $\Sigma_{\rm d} \setminus \{z_{\rm d}\}$ (resp. $\Sigma_{\rm s} \setminus \{z_{\rm s}\}$, $\Sigma_{\rm D} \setminus \{z_{\rm D,d},z_{\rm D,s}\}$) we have the following exponential decay estimates: \begin{equation} \left\Vert \frac{\partial^{m_1}}{\partial T_1^{m_1}}\frac{\partial^{m'_1}}{\partial \theta_1^{m'_1}} \frac{\partial^{m_2}}{\partial T_2^{m_2}} \frac{\partial^{m'_2}}{\partial \theta_2^{m'_2}}u_\circ^{\xi}\right\Vert_{L^2_{\ell}(K)} \le C \exp(-c\upsilon_1T_1 - c\upsilon_2T_2). \end{equation} Here $\upsilon_1 = 0$ if $m_1=m'_1 =0$. Otherwise $\upsilon_1 = 1$. Similarly, $\upsilon_2$ equals $0$ if $m_2=m'_2 =0$ and equals $1$ otherwise. Here $C,c$ are positive constants depending on $K$, $\ell$, $m_1,m'_1,m_2,m'_2$. The same estimate holds for the derivatives of $u_\circ^{\xi}$ with respect to $\xi$.
\item We also have the following estimates: \begin{equation} \aligned &\left\vert \frac{\partial^{m_1}}{\partial T_1^{m_1}} \frac{\partial^{m'_1}}{\partial \theta_1^{m'_1}}\frac{\partial^{m_2}}{\partial T_2^{m_2}} \frac{\partial^{m'_2}}{\partial \theta_2^{m'_2}}{(\frak R_2 - 3T_2)}\right\vert\le C \exp(-c\upsilon_1T_1 - c\upsilon_2T_2) \\ &\left\vert \frac{\partial^{m_1}}{\partial T_1^{m_1}} \frac{\partial^{m'_1}}{\partial \theta_1^{m'_1}}\frac{\partial^{m_2}}{\partial T_2^{m_2}} \frac{\partial^{m'_2}}{\partial \theta_2^{m'_2}}{(\eta_2 - 3\theta_2)}\right\vert\le C \exp(-c\upsilon_1T_1 - c\upsilon_2T_2). \endaligned \end{equation} Here $\upsilon_1$ and $\upsilon_2$ are defined as in the first part. The same estimate holds for the derivatives of $\frak R_2$, $\eta_2$ with respect to $\xi$. \end{enumerate} \end{prop} We will discuss the proofs of Propositions \ref{prop617} and \ref{prop618} in Section \ref{sub:proofmain}. \begin{rem}\label{rem619} We follow \cite[Subsection A1.4]{fooobook2}, \cite[Section 8]{foooexp}, \cite[Subsection 9.1]{fooo:const1} and use a smooth structure on $D^2$ different from the standard one, defined as follows. For $z \in D^2$, let $T,\theta$ be defined by the following identity: \[ z= \exp(-(T+\sqrt{-1}\theta)). \] We define a homeomorphism from a neighborhood of the origin in $D^2$ to $D^2$ as follows: \[ z\mapsto w = \frac{1}{T} \exp(-\sqrt{-1}\theta). \] We define a smooth structure on $D^2$, temporarily denoted by $D^2_{\text{new}}$, such that $z \mapsto w$ becomes a diffeomorphism from $D^2_{\text{new}}$ to $D^2$ with the standard smooth structure. This new smooth structure $D^2_{\text{new}}$ is used to define a smooth structure on the factors $D^2$ in \eqref{form633}. (We drop the term `new' from $D^2_{\text{new}}$ hereafter.) Proposition \ref{prop618} implies smoothness of various maps at the origin of $D^2$ with respect to the new smooth structure.
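To illustrate the mechanism in a model case (this example is ours and is not taken from the cited references), consider a function of a single gluing parameter of the form $g = e^{-cT}$ with $c>0$, which satisfies estimates of the type appearing in Proposition \ref{prop618}. In the coordinate $w$ we have $T = 1/\vert w \vert$, so \[ g(w) = \exp(-c/\vert w\vert), \] and every derivative of $g$ with respect to $w$, $\overline w$ is a finite sum of terms which are polynomials in $1/\vert w\vert$ and the (bounded) first derivatives of $\vert w \vert$, multiplied by $\exp(-c/\vert w\vert)$; all such terms tend to $0$ as $w \to 0$. Hence $g$ extends to a function which is smooth at the origin of $D^2$ with the new smooth structure, whereas in the coordinate $z$ it equals $\vert z \vert^c$, which for $0 < c < 1$ is not even differentiable at the origin.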
See for example \cite[Lemma 22.6]{foootech}, \cite[Subsection 8.2]{foooexp}, \cite[Section 10]{fooo:const1} for further discussions related to this point. \end{rem} \section{Kuranishi Charts: a Special Case} \label{sub:kuraconst} In this section we use Propositions \ref{prop617} and \ref{prop618} to obtain a Kuranishi chart at the point $[\Sigma,z_0,u] \in \mathcal M_1^{\rm RGW}(L;\beta)$. By definition, a Kuranishi chart of a point $p$ in a space $M$ consists of $(V_p,\Gamma_p, \mathcal E_p, \frak s_p,\psi_p)$ where $V_p$, the {\it Kuranishi neighborhood}, is a smooth manifold containing a distinguished point $\tilde p$; $\Gamma_p$, the {\it isotropy group}, is a finite group acting on $V_p$; $\mathcal E_p$, the {\it obstruction bundle}, is a vector bundle over $V_p$; and $\frak s_p$, the {\it Kuranishi map}, is a section of $\mathcal E_p$ over $V_p$. Moreover, the action of $\Gamma_p$ at $\tilde p$ is trivial and the action of this group on $V_p$ is lifted to $\mathcal E_p$. The section $\frak s_p$ is $\Gamma_p$-equivariant and vanishes at $\tilde p$. Finally, $\psi_p$ is a homeomorphism from $\frak s_p^{-1}(0)/\Gamma_p$ to a neighborhood of $p$ in $M$, which maps $\tilde p$ to $p$. In the present case, we define the Kuranishi neighborhood to be the manifold $\mathcal U$ in Proposition \ref{prop617}, and define the isotropy group to be the trivial one. The obstruction bundle $\mathcal E$ on $\mathcal U$ is a trivial bundle whose fiber is \begin{equation} E_{\rm d} \oplus E_{\rm D} \oplus E_{\rm s} \oplus \C.
\end{equation} The Kuranishi map $$ \frak s = (\frak s_{\rm d},\frak s_{\rm D},\frak s_{\rm s},\frak s_{\rho}) : \mathcal U \to E_{\rm d} \oplus E_{\rm D} \oplus E_{\rm s} \oplus \C $$ is defined by \begin{equation}\label{ob-map} \aligned & \frak s_{\rm d}(\xi,T_1,T_2,\theta_1,\theta_2) = \mathcal P_{\rm d}(\overline\partial u_{\rm d}^{\xi}(T_1,T_2,\theta_1,\theta_2;\cdot)) \\ &\frak s_{\rm D}(\xi,T_1,T_2,\theta_1,\theta_2) = \mathcal P_{\rm D}(\overline\partial u_{\rm D}^{\xi}(T_1,T_2,\theta_1,\theta_2;\cdot)) \\ & \frak s_{\rm s}(\xi,T_1,T_2,\theta_1,\theta_2) = \mathcal P_{\rm s}(\overline\partial u_{\rm s}^{\xi}(T_1,T_2,\theta_1,\theta_2;\cdot)) \\ &\frak s_{\rho}(\xi,T_1,T_2,\theta_1,\theta_2) = \sigma_1^2 - \hat\rho_2(\xi,T_1,T_2,\theta_1,\theta_2). \endaligned \end{equation} Here $\mathcal P_{\rm s}, \mathcal P_{\rm D}, \mathcal P_{\rm d}$ are as in \eqref{form63000}. The maps $u_{\rm s}^{\xi}$, $u_{\rm D}^{\xi}$, $u_{\rm d}^{\xi}$ are as in \eqref{form637777new}. By (\ref{eq630}), we have $\overline\partial u_{\rm s}^{\xi}(T_1,T_2,\theta_1,\theta_2;\cdot) \in E_{\rm s}( u_{\rm s}^{\xi}(T_1,T_2,\theta_1,\theta_2;\cdot))$. Since $E_{\rm s}( u_{\rm s}^{\xi}(T_1,T_2,\theta_1,\theta_2;\cdot))$ is in the domain of $\mathcal P_{\rm s}$, the third map is well-defined. Similarly, we can show that the first and the second maps are also well-defined. The last map is equal to $\hat\rho_1 - \hat\rho_2$, because of \eqref{formula634}. \begin{lem} The map $\frak s$ is smooth. \end{lem} \begin{proof} Proposition \ref{prop617} implies that $\frak s_{\rho}$ is smooth. Smoothness of the maps $\frak s_{\rm s},\frak s_{\rm d},\frak s_{\rm D}$ for non-zero values of $\sigma_1$ and $\sigma_2$ is a consequence of standard elliptic regularity. Smoothness for $\sigma_i=0$ follows from part (1) of Proposition \ref{prop618}.
For similar results in the context of the stable map compactification, see \cite[Lemma 22.6]{foootech}, \cite[Theorem 8.25]{foooexp}, \cite[Proposition 10.4]{fooo:const1}, \cite[Section 26]{foootech} and \cite[Section 12]{fooo:const1}. The first three references concern the $C^m$ property of the relevant maps whereas the last two discuss smoothness. \end{proof} We finally construct the parametrization map \[ \psi : \frak s^{-1}(0) \to \mathcal M_1^{\rm RGW}(L;\beta). \] Let $\frak x= [u'_{\rm d},u'_{\rm D},u'_{\rm s},\sigma_1,\sigma_2,\rho_1,\rho_2]\in \mathcal U$ be an element such that $\frak s(\frak x) = 0$. First, suppose that $\sigma_1$ and $\sigma_2$ are both non-zero. The equation $\frak s_{\rho}(\frak x) = 0$ implies that $\rho_1 = \rho_2$. Therefore, we can glue $u'_{\rm d},u'_{\rm D},u'_{\rm s}$, as in Proposition \ref{prop617} (2), to obtain $u' : \Sigma(\sigma_1,\sigma_2) \to X \setminus \mathcal D$. We use $\frak s_{\rm d}(\frak x) = \frak s_{\rm D}(\frak x) =\frak s_{\rm s}(\frak x) =0$ to conclude that $u'$ is $J$-holomorphic. We define $\psi(\frak x) \in \mathcal M_1^{\rm RGW}(L;\beta)$ to be the element determined by $u'$ and $z_0 \in \partial D_{\rm d} \subset \partial \Sigma(\sigma_1,\sigma_2)$. In the case that $\sigma_1=0$, $\rho_1$ vanishes by definition. The equation $\frak s(\frak x) = 0$ then implies that $\rho_2$ is also zero. We can also conclude from Definition \ref{defn615inconsis} that $\sigma_2=0$. Finally, the first three equations in \eqref{ob-map} imply that $\frak x$ determines an element of $\mathcal M_1^{\rm RGW}(L;\beta)$ in the stratum described in Section \ref{subsec:gluing1}. The case that $\sigma_2=0$ can be treated similarly. It is easy to see that $\psi$ is a homeomorphism onto a neighborhood of $[\Sigma,z_0,u]$ in $\mathcal M_1^{\rm RGW}(L;\beta)$.
Given Propositions \ref{prop617} and \ref{prop618}, we have thus proved the following result: \begin{prop} $(\mathcal U,\mathcal E,\frak s,\psi)$ provides a Kuranishi chart for the moduli space $\mathcal M_1^{\rm RGW}(L;\beta)$ at $[\Sigma,z_0,u]$. \end{prop} \section{Proof of the Main Analytical Result} \label{sub:proofmain} The purpose of this section is to prove Proposition \ref{prop617}. The proofs are similar to the arguments in \cite{foooexp}. However, there is one novel point, which is related to the fact that we need the notion of inconsistent solutions. In this section, we go through the construction of the required family of inconsistent solutions, emphasizing this novel point. Then the estimates claimed in Proposition \ref{prop618} can be proved in the same way as in \cite[Section 6]{foooexp}. Throughout this section, we use a different convention for our figures to sketch pseudo-holomorphic curves in $X$. In our figures in this section (e.g. Figure \ref{Figuresec6-4}), we regard the divisor $\mathcal D$ as a vertical line on the right. This is in contrast with our convention in Figure \ref{Figuresec6-3} and \cite{DF1}, where we regard the divisor as a horizontal line on the bottom. Our new convention is more consistent with the previous literature, especially \cite{foooexp}. \subsection{Cylindrical Coordinates} \label{subsub:cylindricalcoordinate} In \eqref{newform624}, we fixed coordinate charts on $\Sigma_{\rm d}$, $\Sigma_{\rm s}$, $\Sigma_{\rm D}$ near the nodal points, parametrized by the disc ${\rm Int} (D^2)$. In this section, it is convenient to use cylindrical coordinates on the domain of these coordinate charts.
Thus we modify the definition of the maps in \eqref{newform624} as follows: \[ \begin{array}{ccc} \varphi_{\rm d}:[0,\infty) \times S^1 \to \Sigma_{\rm d},&\hspace{1cm}& \varphi_{\rm D,d}:(-\infty,0] \times S^1 \to \Sigma_{\rm D},\\ \varphi_{\rm s}:[0,\infty) \times S^1 \to \Sigma_{\rm s},&\hspace{1cm}& \varphi_{\rm D,s}:(-\infty,0] \times S^1 \to \Sigma_{\rm D}, \end{array} \] where \[ \begin{array}{ccc} \varphi_{\rm d}(r_1',s_1'),&\hspace{1.5cm}& \varphi_{\rm D,d}(r_1'',s_1''),\\ \varphi_{\rm s}(r_2',s_2'),&\hspace{1.5cm}& \varphi_{\rm D,s}(r_2'',s_2''), \end{array} \] for $(r'_i,s'_i) \in [0,\infty) \times S^1$, $(r''_i,s''_i) \in (-\infty,0] \times S^1$, is defined to be what we denoted by \begin{equation}\label{form643} \begin{array}{cc} \varphi_{\rm d}(\exp(-(r'_1+\sqrt{-1} s'_1))),& \varphi_{\rm D,d}(\exp(r''_1+\sqrt{-1} s''_1)),\\ \varphi_{\rm s}(\exp(-(r'_2+\sqrt{-1} s'_2))),& \varphi_{\rm D,s}(\exp(r''_2+\sqrt{-1} s''_2)), \end{array} \end{equation} in Section \ref{sub:Obst}. The equations $z_1z_2 = \sigma_1$ or $z_1z_2 = \sigma_2$ appearing in (gl-i) and (gl-ii)\footnote{See the discussion about the construction of $\Sigma(\sigma_1,\sigma_2)$ around \eqref{form622}.} can be rewritten as: \begin{equation}\label{form644} \aligned r''_1 = r'_1 - 10T_1, \qquad s''_1 = s'_1 - \theta_1, \\ r''_2 = r'_2 - 10T_2, \qquad s''_2 = s'_2 - \theta_2, \endaligned \end{equation} where\footnote{We use the coefficient $10$ here to be consistent with \cite{foooexp}. Otherwise, they are not essential.} \begin{equation} \sigma_i = \exp(-(10T_i + \sqrt{-1}\theta_i)). \end{equation} We define \begin{equation}\label{r1r2s1s2} r_i = r'_i - 5T_i = r''_i + 5T_i, \quad s_i = s'_i - \theta_i/2 = s''_i + \theta_i/2. \end{equation} We also slightly change our convention for the polar coordinate of $\rho_i$ of Definition \ref{defn615inconsis} ($i=1,2$) and define $\frak R_i$, $\eta_i$ as follows: \[ \rho_i = \exp(-(10\frak R_i + \sqrt{-1}\eta_i)). 
\] See Figure \ref{Figuresec6-4} below and compare with \cite[(6.2) and (6.3)]{foooexp}\footnote{In \cite{foooexp}, the letter $\tau$ is used for the variables that we denote by $r_i$ here. In this paper, we use $\tau$ to denote the $\R$ factor appearing in the target space.}. \begin{figure}[h] \centering \includegraphics[scale=0.6]{Figure64} \caption{$r_i,r'_i,r''_i$} \label{Figuresec6-4} \end{figure} \subsection{Bump Functions} \label{subsub:Bump} For the purpose of constructing approximate solutions (pre-gluing) and for each step of the Newton iteration used to solve our variant of the non-linear Cauchy-Riemann equation, we use bump functions. Here we review the various bump functions that we need. We may use the maps $\varphi_{\rm d}$, $\varphi_{\rm s}$, $\varphi_{\rm D,d}$ and $\varphi_{\rm D,s}$ to regard the following spaces as subspaces of $\Sigma(\sigma_1,\sigma_2)$: $$ \aligned \mathcal X_{i,T_i}= [-1,1]_{r_i} \times S^1_{s_i} &= [5T_i-1,5T_i+1]_{r'_i} \times S^1_{s'_i} \\ &= [-5T_i-1,-5T_i+1]_{r''_i} \times S^1_{s''_i}, \endaligned $$ $$ \aligned \mathcal A_{i,T_i} = [-T_i-1,-T_i+1]_{r_i} \times S^1_{s_i} &= [4T_i-1,4T_i+1]_{r'_i} \times S^1_{s'_i} \\ &= [-6T_i-1,-6T_i+1]_{r''_i} \times S^1_{s''_i}, \endaligned $$ $$ \aligned \mathcal B_{i,T_i} = [T_i-1,T_i+1]_{r_i} \times S^1_{s_i} &= [6T_i-1,6T_i+1]_{r'_i} \times S^1_{s'_i} \\ &= [-4T_i-1,-4T_i+1]_{r''_i} \times S^1_{s''_i}. \endaligned $$ Using $\varphi_{\rm d}$ (resp. $\varphi_{\rm s}$), the spaces $\mathcal X_{1,T_1}$, $\mathcal A_{1,T_1}$, $\mathcal B_{1,T_1}$ (resp. $\mathcal X_{2,T_2}$, $\mathcal A_{2,T_2}$, $\mathcal B_{2,T_2}$) can be identified with subspaces of $\Sigma_{\rm d} \setminus \{z_{\rm d}\}$ (resp. $\Sigma_{\rm s} \setminus \{z_{\rm s}\}$).
Similarly, the map $\varphi_{\rm D,d}$ (resp. $\varphi_{\rm D,s}$) allows us to regard $\mathcal X_{1,T_1}$, $\mathcal A_{1,T_1}$, $\mathcal B_{1,T_1}$, $\mathcal X_{2,T_2}$, $\mathcal A_{2,T_2}$, $\mathcal B_{2,T_2}$ as subspaces of $\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\}$. (See Figure \ref{Figuresec6-5} below.) \begin{figure}[h] \centering \includegraphics[scale=0.6]{Figure65} \caption{$\mathcal X_{i,T_i}$, $\mathcal A_{i,T_i}$, $\mathcal B_{i,T_i}$} \label{Figuresec6-5} \end{figure} We fix a non-increasing smooth function $\chi : \R \to [0,1]$ such that $$ \chi(r)= \begin{cases} 1 &{r< -1} \\ 0 & {1 < r}, \end{cases} $$ and $\chi(-r) = 1 - \chi(r)$. We now define \begin{equation}\label{form646new} \aligned &\chi_{i,\mathcal X}^{\leftarrow}(r_i,s_i) = \chi(r_i), \qquad &\chi_{i,\mathcal X}^{\rightarrow}(r_i,s_i) = \chi(-r_i), \\ &\chi_{i,\mathcal A}^{\leftarrow}(r_i,s_i) = \chi(r_i+T_i), \qquad &\chi_{i,\mathcal A}^{\rightarrow}(r_i,s_i) = \chi(-(r_i+T_i)), \\ &\chi_{i,\mathcal B}^{\leftarrow}(r_i,s_i) = \chi(r_i-T_i), \qquad &\chi_{i,\mathcal B}^{\rightarrow}(r_i,s_i) = \chi(-(r_i-T_i)). \endaligned \end{equation} The functions $\chi_{1,\mathcal X}^{\leftarrow}$, $\chi_{1,\mathcal A}^{\leftarrow}$ and $\chi_{1,\mathcal B}^{\leftarrow}$ can be extended to smooth functions on $\Sigma_{\rm d}$ which are locally constant outside of the spaces $\mathcal X_{1,T_1}$, $\mathcal A_{1,T_1}$ and $\mathcal B_{1,T_1}$, respectively. We use the same notations to denote these extensions. Similarly, we can define functions $\chi_{2,\mathcal X}^{\leftarrow}$, $\chi_{2,\mathcal A}^{\leftarrow}$ and $\chi_{2,\mathcal B}^{\leftarrow}$ on $\Sigma_{\rm s}$. These functions can also be regarded as functions defined on $\Sigma(\sigma_1,\sigma_2)$ in the obvious way. We use $\chi_{i,\mathcal X}^{\rightarrow}$ (resp. $\chi_{i,\mathcal A}^{\rightarrow}$ and $\chi_{i,\mathcal B}^{\rightarrow}$), for $i=1,2$, to define a smooth function $\chi_{\mathcal X}^{\rightarrow}$ (resp.
$\chi_{\mathcal A}^{\rightarrow}$ and $\chi_{\mathcal B}^{\rightarrow}$) on $\Sigma(\sigma_1,\sigma_2)$ as follows. On the neck regions where the coordinate $(r_i,s_i)$, for $i=1$ or $2$, is defined, we set $\chi_{\mathcal X}^{\rightarrow}$ (resp. $\chi_{\mathcal A}^{\rightarrow}$, $\chi_{\mathcal B}^{\rightarrow}$) to be the function $\chi_{i,\mathcal X}^{\rightarrow}(r_i,s_i)$ (resp. $\chi_{i,\mathcal A}^{\rightarrow}(r_i,s_i)$ and $\chi_{i,\mathcal B}^{\rightarrow}(r_i,s_i)$) given in \eqref{form646new}. This function is defined to be locally constant on the complement of the above space. See Figures \ref{Figuresec6-6} and \ref{Figuresec6-7}. \begin{figure}[h] \centering \includegraphics[scale=0.6]{Figure66} \caption{$\chi_{2,\mathcal X}^{\leftarrow}$, $\chi_{2,\mathcal A}^{\leftarrow}$, $\chi_{2,\mathcal B}^{\leftarrow}$} \label{Figuresec6-6} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=1.2]{Figure67} \caption{$\chi_{\mathcal X}^{\rightarrow}$, $\chi_{\mathcal A}^{\rightarrow}$, $\chi_{\mathcal B}^{\rightarrow}$} \label{Figuresec6-7} \end{figure} Note that the supports of the first derivatives of $\chi_{i,\mathcal X}^{\leftarrow}$, $\chi_{i,\mathcal A}^{\leftarrow}$, $\chi_{i,\mathcal B}^{\leftarrow}$ are subsets of $\mathcal X_{i,T_i}$, $\mathcal A_{i,T_i}$, $\mathcal B_{i,T_i}$, respectively. The supports of the first derivatives of $\chi_{\mathcal X}^{\rightarrow}$, $\chi_{\mathcal A}^{\rightarrow}$, $\chi_{\mathcal B}^{\rightarrow}$ are subsets of $\mathcal X_{1,T_1} \cup \mathcal X_{2,T_2}$, $\mathcal A_{1,T_1} \cup \mathcal A_{2,T_2}$, $\mathcal B_{1,T_1} \cup \mathcal B_{2,T_2}$, respectively. \subsection{Weighted Sobolev Norms} \label{subsub:Wsobolev} In Section \ref{sub:Fred}, we defined weighted Sobolev norms on several function spaces on $\Sigma_{\rm d}$, $\Sigma_{\rm s}$, $\Sigma_{\rm D}$. Here we use weighted Sobolev norms to define a function space on $\Sigma(\sigma_1,\sigma_2)$.
Since $\Sigma(\sigma_1,\sigma_2)$ is compact and the weight functions that we will define are smooth, the resulting weighted Sobolev norm is equivalent to the usual Sobolev norm. In other words, the ratio between the two norms is bounded as long as we fix $\sigma_1,\sigma_2$. However, the ratio depends on $\sigma_1,\sigma_2$ and is unbounded as $\sigma_1,\sigma_2$ go to zero. Therefore, using weighted Sobolev norms is crucial to show that various estimates are independent of $\sigma_1,\sigma_2$. We decompose $\Sigma(\sigma_1,\sigma_2)$ as follows: \begin{align*} \Sigma(\sigma_1,\sigma_2)= &(\Sigma_{\rm d} \setminus {\rm Im}\varphi_{\rm d})\cup (\Sigma_{\rm s} \setminus {\rm Im}\varphi_{\rm s}) \cup (\Sigma_{\rm D} \setminus ({\rm Im}\varphi_{\rm D,d} \cup {\rm Im}\varphi_{\rm D,s}))\\ &\cup ([-5T_1,5T_1]_{r_1} \times S^1_{s_1}) \cup ([-5T_2,5T_2]_{r_2} \times S^1_{s_2}). \end{align*} Here we identify $[-5T_1,5T_1]_{r_1} \times S^1_{s_1}$ and $[-5T_2,5T_2]_{r_2} \times S^1_{s_2}$ with their images in $\Sigma(\sigma_1,\sigma_2)$. We also introduce the following notations for various subspaces of $\Sigma(\sigma_1,\sigma_2)$ (see Figures \ref{Figuresec6-8}, \ref{Figuresec6-9} and \ref{Figuresec6-10}): \begin{equation}\label{formu647} \aligned \Sigma_{\rm d}^-(\sigma_1,\sigma_2) &= \Sigma_{\rm d} \setminus {\rm Im}\varphi_{\rm d}, \\ \Sigma_{\rm d}^+(\sigma_1,\sigma_2) &= \Sigma_{\rm d} \setminus \varphi_{\rm d}(D^2(\vert \sigma_1\vert)), \\ \Sigma_{\rm s}^-(\sigma_1,\sigma_2) &= \Sigma_{\rm s} \setminus {\rm Im}\varphi_{\rm s}, \\ \Sigma_{\rm s}^+(\sigma_1,\sigma_2) &= \Sigma_{\rm s} \setminus \varphi_{\rm s}(D^2(\vert \sigma_2\vert)), \\ \Sigma_{\rm D}^-(\sigma_1,\sigma_2) &= \Sigma_{\rm D} \setminus ({\rm Im}\varphi_{\rm D,d}\cup {\rm Im}\varphi_{\rm D,s}), \\ \Sigma_{\rm D}^+(\sigma_1,\sigma_2) &= \Sigma_{\rm D} \setminus (\varphi_{\rm D,d}(D^2(\vert \sigma_1\vert)) \cup \varphi_{\rm D,s}(D^2(\vert \sigma_2\vert))).
\endaligned \end{equation} \begin{figure}[h] \includegraphics[scale=0.6]{Figure68} \caption{$\Sigma_{\rm d}^-(\sigma_1,\sigma_2)$, $\Sigma_{\rm s}^-(\sigma_1,\sigma_2)$, $\Sigma_{\rm D}^-(\sigma_1,\sigma_2)$} \label{Figuresec6-8} \end{figure} \begin{figure}[h] \includegraphics[scale=0.6]{Figure69} \caption{$\Sigma_{\rm d}^+(\sigma_1,\sigma_2)$, $\Sigma_{\rm s}^+(\sigma_1,\sigma_2)$ } \label{Figuresec6-9} \end{figure} \begin{figure}[h] \includegraphics[scale=0.6]{Figure610} \caption{$\Sigma_{\rm D}^+(\sigma_1,\sigma_2)$} \label{Figuresec6-10} \end{figure} Note that $\Sigma^+_{\rm d}(\sigma_1,\sigma_2)$, $\Sigma^+_{\rm s}(\sigma_1,\sigma_2) $ and $\Sigma^+_{\rm D}(\sigma_1,\sigma_2)$ are respectively equal to the spaces $\Sigma_{\rm d}(\sigma_1)$, $\Sigma_{\rm s}(\sigma_2)$ and $\Sigma_{\rm D}(\sigma_1,\sigma_2)$ defined in \eqref{metaform626}. \begin{defn} \label{formula648} Let $e^{\sigma_1,\sigma_2}_{\delta} : \Sigma(\sigma_1,\sigma_2) \to [0,\infty)$ be a smooth function satisfying the following properties (see Figure \ref{Figuresec6-11}): \begin{itemize} \item[(i)] If $x\in \Sigma^-_{\rm d}(\sigma_1,\sigma_2)\cup \Sigma^-_{\rm s}(\sigma_1,\sigma_2) \cup \Sigma^-_{\rm D}(\sigma_1,\sigma_2)$, then $e^{\sigma_1,\sigma_2}_{\delta}(x)= 1$; \item[(ii)] If $r_i \in [1-5T_i,-1]$, then $e^{\sigma_1,\sigma_2}_{\delta}(r_i,s_i)=e^{\delta (r_i+5T_i)}$; \item[(iii)] If $r_i \in [1,5T_i-1]$, then $e^{\sigma_1,\sigma_2}_{\delta}(r_i,s_i)=e^{\delta (-r_i+5T_i)}$; \item[(iv)] If $\vert \vert r_i\vert-5T_i\vert \le 1$, then $e^{\sigma_1,\sigma_2}_{\delta}(r_i,s_i)\in [1,10]$; \item[(v)] If $\vert r_i\vert \le 1$, then $e^{\sigma_1,\sigma_2}_{\delta}(r_i,s_i)\in [e^{5T_i\delta},10e^{5T_i\delta}]$. 
\end{itemize} \end{defn} \begin{figure}[h] \includegraphics[scale=0.8]{Figure611} \caption{$e^{\sigma_1,\sigma_2}_{\delta}$} \label{Figuresec6-11} \end{figure} We fix a smooth map $u' : \Sigma(\sigma_1,\sigma_2) \to X \setminus \mathcal D$ and assume that the diameters of: \begin{equation}\label{neck-img} u'([-5T_1,5T_1]_{r_1} \times S^1_{s_1})\qquad \text{and} \qquad u'([-5T_2,5T_2]_{r_2} \times S^1_{s_2}) \end{equation} with respect to the metric $g$ are less than a given positive real number $\kappa$. We require that the above sets are contained in $\frak U$, introduced in the beginning of Section \ref{subsec:gluing1}, where the partial $\C_*$-action is defined. Assuming $\kappa$ is small enough, to any: \[ V \in L^2_{m}(\Sigma(\sigma_1,\sigma_2);u^{\prime *}TX). \] we associate sections $\hat v_1$ and $\hat v_2$ of $u^{\prime *}TX$ over the subspaces $[-5T_1,5T_1]_{r_1} \times S^1_{s_1}$ and $[-5T_2,5T_2]_{r_2} \times S^1_{s_2}$ in the following way. Let $(0,0)_i \in [-5T_i,5T_i]_{r_i} \times S^1_{s_i}$ be the point whose $r_i,s_i$ coordinates are $0$. By choosing $m$ to be greater than $1$, the following vector is well-defined: \begin{equation}\label{defnvivi} v_i = V((0,0)_i) \in T_{u'((0,0)_i)}X. \end{equation} Suppose $v_i = v_{i,\R} + v_{i,S^1} + v_{i,\rm D}$ is the decomposition of this vector with respect to \eqref{decom-tan-bdle}. If $\kappa$ is small enough, we can assume that the distance between any two points of the projection of \eqref{neck-img} to $\mathcal D$ is less than the injectivity radius of $\mathcal D$. In particular, we can extend $v_{i,\rm D}$ to a vector field $\hat v_{i,\rm D}$ in a neighborhood of $(0,0)_i$ using parallel transport along geodesics based at $u'((0,0)_i)$ with respect to the unitary connection on $T\mathcal D$, which we fixed before. Then the vector $\hat v_i$ is defined to be: \begin{equation}\label{form651651newnew} \hat v_i = v_{i,\R} + v_{i,S^1} + \hat v_{i,\rm D}. 
\end{equation} Now we define \begin{equation}\label{form6464rev} \aligned &\Vert V \Vert_{W^2_{m,\delta}}^2 \\ = &\Vert V\Vert_{L^2_{m}((\Sigma_{\rm d} \setminus {\rm Im}\varphi_{\rm d}) \cup (\Sigma_{\rm s} \setminus {\rm Im}\varphi_{\rm s}) \cup (\Sigma_{\rm D} \setminus ({\rm Im}\varphi_{\rm D,d} \cup {\rm Im}\varphi_{\rm D,s}))}^2 \\ &+ \sum_{i=1}^2\sum_{j=0}^m\int_{[-5T_i,5T_i]_{r_i}\times S^1_{s_i}} e^{\sigma_1,\sigma_2}_{\delta}(r_i,s_i) \vert \nabla^j(V - \hat v_i)\vert^2 dr_id s_i \\ &+ \vert v_1 \vert^2 + \vert v_2 \vert^2. \endaligned \end{equation} We use the cylindrical metric on $\Sigma(\sigma_1,\sigma_2)$ and the metric $g$ on $X \setminus \mathcal D$ to define norms in the first and the second lines of the right hand side of \eqref{form6464rev}. This definition is analogous to \eqref{form6464}. The space of all $V$ as above with finite $\Vert\cdot\Vert_{W^2_{m,\delta}}$ norm which satisfies the boundary condition: \[ \hspace{3cm} V(z) \in T_{u'(z)}L \hspace{1cm} \forall z \in \partial \Sigma(\sigma_1,\sigma_2) \] forms a Hilbert space, which we denote by: \begin{equation}\label{fcspace652} W^2_{m,\delta}((\Sigma(\sigma_1,\sigma_2),\partial \Sigma(\sigma_1,\sigma_2)); (u^{\prime *}TX,u'\vert_{\partial}^*TL)). \end{equation} Next, let: \[ V \in L^2_{m}(\Sigma(\sigma_1,\sigma_2);u^{\prime *}TX\otimes \Lambda^{0,1}) \] and define: \begin{equation}\label{formula650} \Vert V \Vert_{L^2_{m,\delta}}^2 = \sum_{j=0}^m\int_{\Sigma(\sigma_1,\sigma_2)} e^{\sigma_1,\sigma_2}_{\delta}(z) \vert \nabla^jV(z)\vert^2 {\rm vol}_{\Sigma(\sigma_1,\sigma_2)}. \end{equation} We use the cylindrical metric on $\Sigma(\sigma_1,\sigma_2)$ and the metric $g$ on $X \setminus \mathcal D$ to define the norm and the volume element ${\rm vol}_{\Sigma(\sigma_1,\sigma_2)}$. The set of all such $V$ with $\Vert V \Vert_{L^2_{m,\delta}} < \infty$ forms a Hilbert space, which we denote by \begin{equation} L^2_{m,\delta}(\Sigma(\sigma_1,\sigma_2);u^{\prime *}TX\otimes \Lambda^{0,1}).
\end{equation} As a topological vector space, this is the same space as the standard space of Sobolev $L^2_m$ sections. However, the ratio between the above $L^2_{m,\delta}$ norm and the standard Sobolev $L^2_m$ norm is unbounded as $\sigma_1,\sigma_2$ go to $0$. Finally, we can use the above Sobolev spaces to define the linearization of the non-linear Cauchy-Riemann equation at $u'$, which is a Fredholm operator: \begin{equation}\label{newform651} \aligned D_{u'}\overline{\partial} : &W^2_{m+1,\delta}((\Sigma(\sigma_1,\sigma_2),\partial \Sigma(\sigma_1,\sigma_2)); (u^{\prime *}TX,u'\vert_{\partial}^*TL)) \\ &\to L^2_{m,\delta}(\Sigma(\sigma_1,\sigma_2);u^{\prime *}TX\otimes \Lambda^{0,1}). \endaligned \end{equation} \subsection{Pre-gluing} \label{subsub:preglue} Suppose $\xi = (u_{\rm d}^{\xi},u_{\rm D}^{\xi},u_{\rm s}^{\xi})$ is an element of the following space\footnote{Recall that $\Sigma_{\rm d}$ together with the marked points $z_0$ and $z_{\rm d}$ is already source stable and we did not need to introduce auxiliary marked points on this space. This is the reason that the first factor is $\mathcal U_{\rm d}$, rather than $\mathcal U_{\rm d}^+$.}: \begin{equation}\label{fib-prod-str} \mathcal U_{\rm d} \,\,{}_{{\rm ev}_{\rm d}}\times_{{\rm ev}_{\rm D,d}} \mathcal U^+_{\rm D} \,\,{}_{{\rm ev}_{\rm D,s}}\times_{{\rm ev}_{\rm s}}\mathcal U^+_{\rm s} \end{equation} In this subsection, for each choice of $\sigma_1$ and $\sigma_2$, we shall construct an approximate inconsistent solution and estimate the error of this approximate solution. By assumption, the pull-back bundle $(u_{\rm D}^{\xi})^*\mathcal N_{\mathcal D}(X)$ has a meromorphic section $\frak s^\xi$ which has poles of order 2 and 3 at $z_{\rm d}$ and $z_{\rm s}$, respectively. As in \eqref{U-D}, $\frak s^\xi$ gives rise to a map \begin{equation}\label{form653} U_{\rm D}^{\xi} : \Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\} \to \mathcal N_{\mathcal D}(X)\setminus \mathcal D.
\end{equation} A priori, the section $\frak s^\xi$ is well-defined only up to the action of $\C_*$; for each $\xi$ in \eqref{fib-prod-str}, we fix one such section so that $U_{\rm D}^{\xi}$ depends smoothly on $\xi$. Later we will pin down the choice of sections such that \eqref{formula634} is satisfied. Recall that a neighborhood of the zero section in $\mathcal N_{\mathcal D}(X)$ is identified with the neighborhood $\frak U$ of $\mathcal D$ in $X$. For now, we assume that the section $\frak s^\xi$ is chosen such that the image of $U_{\rm D}^{\xi}$ on the domain $\Sigma_{\rm D}^+(\sigma_1,\sigma_2)$ belongs to this neighborhood of the zero section of $\mathcal N_{\mathcal D}(X)$. Recall that $\Sigma_{\rm D}^+(\sigma_1,\sigma_2)$ is defined in \eqref{formu647}. Next, we shall glue the three maps $u^{\xi}_{\rm d}$, $u^{\xi}_{\rm s}$, $U_{\rm D}^{\xi}$ by a partition of unity. One should beware that the output of this construction is an approximate inconsistent solution. In particular, it will not be a globally well-defined map from $\Sigma(\sigma_1,\sigma_2)$ to $X$. In order to describe this process, we need to fix an {\it exponential} map. There is a map \begin{equation} \label{eq653} {\rm Exp}:T(X\setminus \mathcal D)\to\frak N(\Delta) \end{equation} with $\frak N(\Delta)$ being a neighborhood of the diagonal $\Delta$ in $(X\setminus \mathcal D)\times (X\setminus \mathcal D)$ such that: \begin{itemize} \item[(i)] For $p \in X\setminus \mathcal D$ and $V\in T_p(X\setminus \mathcal D)$, the first component of ${\rm Exp}(p,V)$ is $p$. \item[(ii)] ${\rm Exp}$ maps $(p,0)\in T_p(X\setminus \mathcal D)$ to $(p,p)\in (X\setminus \mathcal D)\times (X\setminus \mathcal D)$. Moreover, at the point $(p,0)$, the derivative of ${\rm Exp}$ in the fiber direction given by $T_p(X\setminus \mathcal D)\subset T(X\setminus \mathcal D)$ is equal to $(0,{\rm id})$ where ${\rm id}$ is the identity map from $T_p(X\setminus \mathcal D)$ to itself.
\item[(iii)] Recall that we defined partial $\C_*$ actions for a pair of an almost complex manifold $Y$ and a complex submanifold $D$ of (complex) codimension $1$ in \cite[Subsection 3.2]{DF1}. This notion can be generalized to the case of complex submanifolds of arbitrary codimension in an obvious way. For example, the derivative of the partial $\C_*$ action for the pair $(X,\mathcal D)$ determines a partial $\C_*$ action for the pair $(TX,T\mathcal D)$. Moreover, the product of two copies of partial $\C_*$ actions for the pair $(X,\mathcal D)$ induces a partial $\C_*$ action on $(X\times X,\mathcal D\times \mathcal D)$. We require that the map \eqref{eq653} is equivariant with respect to these two partial $\C_*$ actions. \item[(iv)] For a positive real number $\kappa$, let $D_{\kappa}TL$ denote the set of tangent vectors to $L$ whose norms are smaller than $\kappa$. There is $\kappa$ such that: \[{\rm Exp}(D_{\kappa}TL) \subset L \times L.\] \end{itemize} Let ${\rm exp}$ be the exponential map with respect to the metric $g$. The map $({\rm id},{\rm exp})$, defined on a neighborhood of the zero section of $T(X\setminus \mathcal D)$, satisfies (i)-(iii). We can modify this map and extend it to a map on $T(X\setminus \mathcal D)$ which satisfies (iv). We denote the inverse of \eqref{eq653} by $$ {\rm E} : \frak N(\Delta) \to T(X\setminus \mathcal D). $$ We now define $\rho_{i,(0)}^{\xi} \in \C_*$ ($i=1,2$) as follows. Consider the composition $$ u^{\xi}_{\rm d} \circ \varphi_{\rm d} : D^2 \to X\setminus \mathcal D. $$ We take a (holomorphic) trivialization $\Pi:\mathcal N_{\mathcal D}(X) \to \C$ of the normal bundle $\mathcal N_{\mathcal D}(X)$ in a neighborhood of $u_{\rm d}^{0}(z_{\rm d})$. Note that $u^{\xi}_{\rm d}(z_{\rm d}) \in \mathcal D$ is in a small neighborhood of $u_{\rm d}^{0}(z_{\rm d})$.
Therefore, $u^{\xi}_{\rm d} \circ \varphi_{\rm d}$ induces a holomorphic function $$ {\Pi} \circ u^{\xi}_{\rm d} \circ \varphi_{\rm d} : D^2(o) \to \C $$ for a small $o>0$. By assumption, ${\Pi} \circ u^{\xi}_{\rm d} \circ \varphi_{\rm d}$ has a zero of order $2$ at $z_{\rm d}$. We define $c^{\xi}_{\rm d} \in \C_*$ by \begin{equation}\label{shiki655} ({\Pi} \circ u^{\xi}_{\rm d} \circ \varphi_{\rm d})(z) = c^{\xi}_{\rm d}z^2 + f(z) z^3 \end{equation} where $f(z)$ is holomorphic at $0$. Using the trivialization $\Pi$, we may regard the meromorphic section $\frak s^\xi\circ \varphi_{\rm D,d}$ as a meromorphic {\it function} which has a pole of order $2$ at $z_{\rm d}$. In particular, there is a constant $c^{\xi}_{\rm D,d} \in \C_*$ such that $\Pi\circ \frak s^\xi \circ \varphi_{\rm D,d}: D^2(o) \setminus \{0\} \to \C$ has the following form: \begin{equation}\label{shiki656} (\Pi\circ \frak s^\xi \circ \varphi_{\rm D,d})(w) = c^{\xi}_{\rm D,d} w^{-2} + \frac{g(w)}{w}, \end{equation} where $g$ is holomorphic at $0$. We now define: \begin{equation} \rho_{1,(0)}^{\xi}(\sigma_1,\sigma_2) = \frac{c^{\xi}_{\rm d}\sigma_1^2}{c^{\xi}_{\rm D,d}}. \end{equation} Note that $\rho_{1,(0)}^{\xi}$ is independent of the choice of the trivialization of $\mathcal N_{\mathcal D}(X)$, because an alternative choice multiplies the numerator and the denominator of the right hand side by the same number. The constant $\rho_{1,(0)}^{\xi}$ has the property that if $zw = \sigma_1$, then: \begin{equation}\label{newnewform6555} (u^{\xi}_{\rm d} \circ \varphi_{\rm d})(z)\sim ({\rm Dil}_{\rho_{1,(0)}^{\xi}}\circ U^{\xi}_{\rm D} \circ \varphi_{{\rm D},{\rm d}})(w) \end{equation} where $\sim$ means that the lowest order terms of the two sides coincide. We define $\rho_{2,(0)}^{\xi}$ in a similar way using the behavior of $u^{\xi}_{\rm s}$ and $\frak s^{\xi}$ in a neighborhood of $z_{\rm s}$.
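For the reader's convenience, we include the elementary computation behind \eqref{newnewform6555}, using that, in the trivialization $\Pi$, the dilation ${\rm Dil}_{\rho}$ acts on the fibers of $\mathcal N_{\mathcal D}(X)$ by multiplication by $\rho$. By \eqref{shiki655} and \eqref{shiki656}, the lowest order terms of the two sides of \eqref{newnewform6555} are $c^{\xi}_{\rm d}z^2$ and $\rho_{1,(0)}^{\xi}\, c^{\xi}_{\rm D,d}\, w^{-2}$, respectively. Substituting $w = \sigma_1/z$, we obtain: \[ \rho_{1,(0)}^{\xi}\, c^{\xi}_{\rm D,d}\, w^{-2} = \frac{c^{\xi}_{\rm d}\sigma_1^2}{c^{\xi}_{\rm D,d}}\cdot c^{\xi}_{\rm D,d}\cdot \frac{z^2}{\sigma_1^2} = c^{\xi}_{\rm d} z^2. \] The same computation, with $zw = \sigma_2$ and the exponents $2$, $-2$ replaced by $3$, $-3$, gives the corresponding property for the constant $\rho_{2,(0)}^{\xi}$ defined below.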
Namely, we replace \eqref{shiki655} and \eqref{shiki656} by: \begin{equation}\label{shiki655rev} ({\Pi} \circ u^{\xi}_{\rm s} \circ \varphi_{\rm s})(z) = c^{\xi}_{\rm s}z^3 + f(z) z^4, \end{equation} \begin{equation}\label{shiki656rev} (\Pi\circ \frak s^\xi \circ \varphi_{\rm D,s})(w) = c^{\xi}_{\rm D,s} w^{-3} + \frac{g(w)}{w^2}, \end{equation} respectively and define: \begin{equation}\label{newold664} \rho_{2,(0)}^{\xi}(\sigma_1,\sigma_2) = \frac{c^{\xi}_{\rm s}\sigma_2^3}{c^{\xi}_{\rm D,s}}. \end{equation} \par Now we define a map $$ u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)} : \Sigma(\sigma_1,\sigma_2) \to X $$ as follows. Roughly speaking, $u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)} $ is obtained by gluing the three maps $u_{\rm d}^{\xi}$, $u_{\rm s}^{\xi}$, ${\rm Dil}_{\rho_{i,(0)}^{\xi}} \circ U_{\rm D}^{\xi}$, using bump functions $\chi_{i,\mathcal X}^{\leftarrow}$, $\chi_{\mathcal X}^{\rightarrow}$. From now on, we write $\rho_{i,(0)}^{\xi}$ instead of $\rho_{i,(0)}^{\xi}(\sigma_1,\sigma_2)$ when the dependence on $\sigma_i$ is clear. \begin{defn} \begin{enumerate} \item If $z \in \Sigma_{\rm d}^-(\sigma_1,\sigma_2)$, then: \[ u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)}(z) = u^{\prime, \xi,2}_{\sigma_1,\sigma_2,(0)}(z) = u_{\rm d}^{\xi}(z). \] \item If $z \in \Sigma_{\rm s}^-(\sigma_1,\sigma_2)$, then \[ u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)}(z) = u^{\prime, \xi,2}_{\sigma_1,\sigma_2,(0)}(z) = u_{\rm s}^{\xi}(z). \] \item If $z \in \Sigma_{\rm D}^-(\sigma_1,\sigma_2)$, then: \[ u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)}(z) = ({\rm Dil}_{\rho_{i,(0)}^{\xi}} \circ U_{\rm D}^{\xi})(z) \] for $i=1,2$. \item Suppose $z = (r_1,s_1) \in [-5T_1,5T_1]_{r_1} \times S^1_{s_1}$. We define \[ u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)}(z) \\ = {\rm Exp_2}\left(u_{\rm d}^{\xi}(z), \chi_{\mathcal X}^{\rightarrow}(z) {\rm E}(u_{\rm d}^{\xi}(z),({\rm Dil}_{\rho_{i,(0)}^{\xi}} \circ U_{\rm D}^{\xi})(z)) \right). 
\] Here ${\rm Exp_2}$ denotes the composition of ${\rm Exp}$ and the projection map from $(X\setminus \mathcal D)\times (X\setminus \mathcal D)$ to the second factor. \item Suppose $z = (r_2,s_2) \in [-5T_2,5T_2]_{r_2} \times S^1_{s_2}$. We define \[ u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)}(z) \\= {\rm Exp_2}\left(u_{\rm s}^{\xi}(z), \chi_{\mathcal X}^{\rightarrow}(z) {\rm E}(u_{\rm s}^{\xi}(z),({\rm Dil}_{\rho_{i,(0)}^{\xi}} \circ U_{\rm D}^{\xi})(z))\right). \] \end{enumerate} \end{defn} \begin{rem} In part (4), if $r_1$ is close to $-5T_1$, then the right hand side is $u^{\xi}_{\rm d}$, and if $r_1$ is close to $5T_1$ then the right hand side is ${\rm Dil}_{\rho_{i,(0)}^{\xi}} \circ U_{\rm D}^{\xi}$. A similar property holds for the definition in part (5). In particular, our definition is well-defined. \end{rem} \noindent{\bf (Step 0-3) (Error estimate)} \footnote{The enumeration of the steps of this paper is the same as in \cite[Section A1.4]{fooobook2} and \cite{foooexp}.} The next lemma provides an estimate of $\overline{\partial}u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)}$ modulo the obstruction space \[ (E_{\rm d} \oplus E_{\rm s} \oplus E_{\rm D})(u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)}). \] In the case that $\rho_{1,(0)}^{\xi} \ne \rho_{2,(0)}^{\xi}$, we need to restrict the domain in the following way to obtain an appropriate estimate. We put \begin{equation} \Sigma(\sigma_1,\sigma_2)_{i}^- = \begin{cases} \Sigma(\sigma_1,\sigma_2) \setminus ([-5T_2,5T_2]_{r_2} \times S^1_{s_2}) &\text{if $i=1$}, \\ \Sigma(\sigma_1,\sigma_2) \setminus ([-5T_1,5T_1]_{r_1} \times S^1_{s_1}) &\text{if $i=2$}. \end{cases} \end{equation} We consider the $L^2_{m,\delta}$ norm of the restriction of maps to $\Sigma(\sigma_1,\sigma_2)_{i}^-$ and denote it by $L^{2,i,-}_{m,\delta}$.
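Before stating the error estimate, we record, only for the reader's convenience, the elementary computation behind \eqref{newnewform6555}; it is not used elsewhere.

\begin{rem}
On the neck region the coordinates are identified by $zw = \sigma_1$, so at the level of lowest order terms, \eqref{shiki655} and \eqref{shiki656} give
\begin{equation*}
({\Pi} \circ u^{\xi}_{\rm d} \circ \varphi_{\rm d})(z) \sim c^{\xi}_{\rm d}z^2 = c^{\xi}_{\rm d}\sigma_1^2 w^{-2} = \frac{c^{\xi}_{\rm d}\sigma_1^2}{c^{\xi}_{\rm D,d}} \cdot c^{\xi}_{\rm D,d} w^{-2} \sim \rho_{1,(0)}^{\xi}\,(\Pi\circ \frak s^\xi \circ \varphi_{\rm D,d})(w).
\end{equation*}
The same computation with $zw = \sigma_2$, \eqref{shiki655rev} and \eqref{shiki656rev} yields the analogous property for $\rho_{2,(0)}^{\xi}$ defined in \eqref{newold664}.
\end{rem}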
\begin{lem}\label{lem623} There exist constants $\delta_1$, $C_m$ (for any integer $m$) and vectors $\frak e_{{\rm d},(0)}^{\xi} \in E_{\rm d}(u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)})$, $\frak e_{{\rm s},(0)}^{\xi} \in E_{\rm s}(u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)})$, $\frak e_{{\rm D},(0)}^{\xi,i} \in E_{\rm D}(u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)})$ such that $\delta_1$, $C_m$ are independent of $\sigma_1$, $\sigma_2$, $\xi$, and we have the following inequalities: \begin{enumerate} \item \[\Vert \overline{\partial}u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)} - \frak e_{{\rm d},(0)}^{\xi} - \frak e_{{\rm s},(0)}^{\xi} - \frak e_{{\rm D},(0)}^{\xi,1} \Vert_{L^{2,1,-}_{m,\delta}}\le C_m e^{-\delta_1T_1}.\] \item \[\Vert \overline{\partial}u^{\prime, \xi,2}_{\sigma_1,\sigma_2,(0)} - \frak e_{{\rm d},(0)}^{\xi} - \frak e_{{\rm s},(0)}^{\xi} - \frak e_{{\rm D},(0)}^{\xi,2} \Vert_{L^{2,2,-}_{m,\delta}}\le C_m e^{-\delta_1T_2}.\] \end{enumerate} \end{lem} We can be more specific about the value of the constant $\delta_1$ as in \eqref{form62} and \eqref{form62rev}. However, the actual choice does not matter for the details of our construction, so we do not give an exact value for this constant. \begin{proof} We define: \begin{equation}\label{form6662} \aligned \frak e_{{\rm d},(0)}^{\xi} &:= \overline{\partial} u^{\xi}_{\rm d} \in E_{\rm d}(u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)}), \\ \frak e_{{\rm s},(0)}^{\xi} &:= \overline{\partial} u^{\xi}_{\rm s} \in E_{\rm s}(u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)}), \\ \frak e_{{\rm D},(0)}^{\xi,i} &:= \overline{\partial} ({\rm Dil}_{\rho_{i,(0)}^{\xi}} \circ U^{\xi}_{\rm D}) \in E_{\rm D}(u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)}).
\endaligned \end{equation} Then by construction the support of $\overline{\partial}u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)} - \frak e_{{\rm d},(0)}^{\xi} - \frak e_{{\rm s},(0)}^{\xi} - \frak e_{{\rm D},(0)}^{\xi,1}$ is contained in $([-5T_1,5T_1]_{r_1} \times S^1_{s_1}) \cup ([-5T_2,5T_2]_{r_2} \times S^1_{s_2})$. Therefore, it suffices to estimate $\overline{\partial}u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)}$ on $[-5T_i,5T_i]_{r_i} \times S^1_{s_i}$. Below we discuss the case $i=1$. The other case is similar. Let $z = \varphi_{\rm d}(r'_1,s'_1)$ be the coordinate on $\Sigma_{\rm d}$ used to denote points in a neighborhood of $z_{\rm d}$ and $w = \varphi_{\rm D,d}(r''_1,s''_1)$ be the coordinate on $\Sigma_{\rm D}$ used to denote points in a neighborhood of $z_{\rm d}$. In order to obtain $\Sigma(\sigma_1,\sigma_2)$, the equation: $$ zw = \sigma_1 $$ is used to glue $\Sigma_{\rm d}$ and $\Sigma_{\rm D}$. Note that the supports of the derivatives of the bump functions $\chi_{1,\mathcal X}^{\leftarrow}$, $\chi_{\mathcal X}^{\rightarrow}$ are in $\mathcal X_{1,T_1}$. (Here we look at the restriction of the function $\chi_{\mathcal X}^{\rightarrow}$ to $\Sigma(\sigma_1,\sigma_2)_{i}^-$. Otherwise, part of the support of the derivative of this function is contained in $\mathcal X_{2,T_2}$.) Therefore, the support of $\overline{\partial}u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)}$ is contained in the same subspace. We first show that the maps $f_1:=u^{\xi}_{\rm d}$ and $f_2:={\rm Dil}_{\rho_{1,(0)}^{\xi}} \circ U^{\xi}_{\rm D}$, as maps from $\mathcal X_{1,T_1}$ to $\frak U\setminus \mathcal D\subset X\setminus \mathcal D$, are close to each other in the $C^m$ metric.
In fact, analogues of the inequalities in \eqref{form62} and \eqref{form62rev} show that there are constants $C_m'$ and $\delta_0$ independent of $\sigma_1$, $\sigma_2$ and $\xi$ such that: \begin{equation} \label{C1-C2-dis} d_{C^m}(f_1,f_2)\leq C_m'e^{- 5 \delta_0 T_1} \end{equation} where $d_{C^m}$ is computed with respect to the cylindrical metric $g$. To be a bit more detailed, this inequality holds because the leading terms of $f_1$ and $f_2$ agree with each other, and $f_1$ and $f_2$ are both holomorphic. Let $h_1,\,h_2:\mathcal X_{1,T_1} \to \frak U$ be maps such that their $C^0$ distance is less than or equal to a constant $\kappa$. If $\kappa$ is small enough, then the following map is well defined: \[ F(h_1,h_2)= {\rm Exp_2}\left(h_1,\chi_{\mathcal X}^{\rightarrow}\cdot {\rm E}(h_1,h_2)\right). \] Clearly there is a constant $K$ such that: \[ \Vert\overline \partial F(h_1,h_2)-\overline \partial F(h_1,h_2')\Vert_{L^2_m}\leq K \cdot d_{C^{m+1}}(h_2,h_2'). \] Since $F(f_1,f_1)=f_1$, the above inequality together with \eqref{C1-C2-dis} implies that there is a constant $C_m$ such that: \[ \Vert\overline{\partial}u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)}\Vert_{L^2_m(\mathcal X_{1,T_1})} \leq C_me^{- 5 \delta_0 T_1}. \] Therefore, if we pick $\delta$ and $\delta_1$ such that $\delta+\delta_1<5\delta_0$, then the desired inequality holds. \end{proof} \subsection{Why Inconsistent Solutions?} \label{subsub:linsol1} We already hinted at the necessity of inconsistent solutions at the end of Section \ref{sub:Obst}. In this section we elaborate on this point with an eye toward modifying the approximate solution of the previous section into an actual solution. We first sketch our approach for this modification, which is based on {\it Newton's iteration method}. Next, we explain the main point where the proof in the case of the RGW compactification diverges from the case of the stable map compactification.
The discussion of this subsection is informal, and the actual proof will be carried out in the next two subsections. Suppose $u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)}$ and $u^{\prime, \xi,2}_{\sigma_1,\sigma_2,(0)}$ are the approximate solutions of the previous subsection associated to the element $\xi = (u_{\rm d}^{\xi},u_{\rm D}^{\xi},u_{\rm s}^{\xi})$ of \eqref{fib-prod-str}. We assume that $\sigma_1$ and $\sigma_2$ are chosen such that $\rho_{1,(0)}^{\xi}(\sigma_1,\sigma_2) = \rho_{2,(0)}^{\xi}(\sigma_1,\sigma_2)$. In particular, $u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)}=u^{\prime, \xi,2}_{\sigma_1,\sigma_2,(0)}$ and we denote these maps by $u'$. Lemma \ref{lem623} gives the following estimate: $$ \Vert\overline\partial u'\Vert_{L^2_{m,\delta}(\Sigma(\sigma_1,\sigma_2))/E(u')} \le C e^{-c \delta_1 \min\{T_1,T_2\}}. $$ Here $E(u') = E_{\rm d}(u') \oplus E_{\rm s}(u') \oplus E_{\rm D}(u')$, and the norm on the left hand side is the induced norm on the quotient space. The next step would be to find \[ V \in W^2_{m+1,\delta}((\Sigma(\sigma_1,\sigma_2),\partial \Sigma(\sigma_1,\sigma_2)); (u^{\prime *}TX,u'\vert_{\partial}^*TL)) \] which satisfies the equation \begin{equation}\label{form670new} (D_{u'}\overline{\partial}) V + \overline\partial u' \equiv 0 \mod E(u') \end{equation} and $$ \Vert V\Vert_{W_{m+1,\delta}^2} \le C \Vert\overline\partial u'\Vert_{L^2_{m,\delta}(\Sigma(\sigma_1,\sigma_2))/E(u')}. $$ Then we could define our first modified approximate solution as follows: $$ u''(z)={\rm Exp}(u'(z),V(z)). $$ This modified solution would satisfy the following inequality: $$ \Vert\overline\partial u''\Vert_{L^2_{m,\delta}(\Sigma(\sigma_1,\sigma_2))/E(u'')} \le \mu \Vert\overline\partial u'\Vert_{L^2_{m,\delta}(\Sigma(\sigma_1,\sigma_2))/E(u')} $$ for a fixed $0<\mu<1$ if $\sigma_1, \sigma_2$ are sufficiently small (or equivalently, $T_1$, $T_2$ are sufficiently large).
We could then continue to obtain $u^{(i)}$ such that \[ \Vert\overline\partial u^{(i)}\Vert_{L^2_{m,\delta}(\Sigma(\sigma_1,\sigma_2))/E(u^{(i)})} \le \mu^{i} \Vert\overline\partial u'\Vert_{L^2_{m,\delta}(\Sigma(\sigma_1,\sigma_2))/E(u')} \] and for fixed constants $C$ and $c$, the $W^2_{m+1,\delta}$-distance between $u^{(i)}$ and $u^{(i+1)}$ is bounded by $C\mu^{i} e^{-c \delta_1 \min\{T_1,T_2\}}$. Then $u^{(i)}$ would converge to a map $u$, which would be the required solution of the equation: \begin{equation}\label{eq671} \overline \partial u \equiv 0 \mod E(u). \end{equation} This is the standard Newton's iteration method to solve a nonlinear equation using successive solutions to the linearized equation. However, the RGW compactification is singular at the starting point of our construction, the element $\xi$ of \eqref{fib-prod-str}. So we cannot expect the above Newton's iteration method to work without some adjustments. We remedy this by thickening the solution set of \eqref{eq671} to the set of inconsistent solutions. The main reason that we will work with this larger moduli space lies in the step where we find the solution $V$ of the equation \eqref{form670new}. To solve this equation, we need to find a right inverse to the following operator modulo $E(u')$: \[\aligned D_{u'}\overline{\partial} : &W^2_{m+1,\delta}((\Sigma(\sigma_1,\sigma_2),\partial \Sigma(\sigma_1,\sigma_2)); (u^{\prime *}TX,u'\vert_{\partial}^*TL)) \\ &\to L^2_{m,\delta}(\Sigma(\sigma_1,\sigma_2);u^{\prime *}TX\otimes \Lambda^{0,1})/E(u'). \endaligned \] The standard approach to construct this right inverse is to glue the right inverses of the linearized operators $D_{u_{\rm d}}\overline{\partial}$, $D_{u_{\rm s}}\overline{\partial}$ and $D_{U_{\rm D}}\overline{\partial}$. The linearized operator $D_{u'}\overline{\partial}$ over the cylinder $[-5T_i,5T_i]_{r_i} \times S^1_{s_i}$ is modeled by an operator of the form $$ \frac{\partial}{\partial r_i} + P_{r_i}.
$$ The relevant operators $P_{r_i}$ in our setup have non-trivial kernel and our gluing construction is of ``Bott-Morse'' type. As clarified by Mrowka's Mayer-Vietoris principle \cite{Mr}, to have a well-behaved gluing problem we need to assume certain `mapping transversality conditions'. To be more specific, the zero eigenspace of the operator $P_{r_i}$ can be identified with: \begin{equation}\label{form672} \aligned (\R\oplus \R)\oplus &T_{u_{\rm d}(z_{\rm d})}\mathcal D \qquad \text{if $i=1$}, \\ (\R\oplus \R)\oplus &T_{u_{\rm s}(z_{\rm s})}\mathcal D \qquad \text{if $i=2$}. \endaligned \end{equation} Here $\R \oplus \R$ is the tangent space to $\C_*$. The mapping transversality condition we introduced in Definition \ref{defn6969} concerns only the summand $T_{u_{\rm d}(z_{\rm d})}\mathcal D$. Therefore, it is {\it not} sufficient for the Mayer-Vietoris principle in our setup. However, working with inconsistent solutions allows us to enlarge the tangent spaces and obtain the required transversality condition. A byproduct of this enlargement is that we might end up with genuinely inconsistent solutions throughout Newton's iterations, even if the starting approximate solution satisfies $\rho_1 = \rho_2$. \subsection{Inconsistent Maps and Linearized Equations} \label{subsub:linsol2} In Section \ref{sub:statement}, the notion of holomorphic maps was extended to inconsistent solutions of the Cauchy-Riemann equation. It is also convenient to define generalizations of maps from $\Sigma(\sigma_1,\sigma_2)$ to $X\backslash \mathcal D$: \begin{defn}\label{defn625625} A 7-tuple $\frak u'=(u'_{\rm d},u'_{\rm s},U'_{\rm D},\sigma_1,\sigma_2,\rho_1,\rho_2)$ is an {\it inconsistent map} if it satisfies only parts (1) and (4) of Definition \ref{defn615inconsis}. In other words, we do not require that the 7-tuple satisfies the Cauchy-Riemann equation in \eqref{eq630-def} and the constraint in \eqref{eq631-p}.
We define equivalence of inconsistent maps in the same way as in Definition \ref{defn615inconsis}. \end{defn} An example of inconsistent maps can be constructed using the maps: \[ \hspace{3cm}u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)} : \Sigma(\sigma_1,\sigma_2) \to X\hspace{1cm} i=1,\,2 \] of Subsection \ref{subsub:preglue} which are associated to an element $\xi= (u_{\rm d}^{\xi},u_{\rm D}^{\xi},u_{\rm s}^{\xi})$ of \eqref{fib-prod-str}. We use these two maps to define: $$ \aligned u^{\xi,\prime}_{{\rm d},\sigma_1,\sigma_2,(0)} &:= u^{\prime,\xi,1}_{\sigma_1,\sigma_2,(0)}\vert_{\Sigma_{\rm d}^+(\sigma_1,\sigma_2)}\\ u^{\xi,\prime}_{{\rm s},\sigma_1,\sigma_2,(0)} &:= u^{\prime,\xi,2}_{\sigma_1,\sigma_2,(0)}\vert_{\Sigma_{\rm s}^+(\sigma_1,\sigma_2)} \\ U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(0)} &:= \begin{cases} {\rm Dil}_{1/\rho_{1,(0)}^{\xi}} \circ u^{\prime,\xi,1}_{\sigma_1,\sigma_2,(0)} &\text{on $\Sigma_{\rm d}^+(\sigma_1,\sigma_2) \cap \Sigma_{\rm D}^+(\sigma_1,\sigma_2)$} \\ {\rm Dil}_{1/\rho_{2,(0)}^{\xi}} \circ u^{\prime,\xi,2}_{\sigma_1,\sigma_2,(0)} &\text{on $\Sigma_{\rm s}^+(\sigma_1,\sigma_2) \cap \Sigma_{\rm D}^+(\sigma_1,\sigma_2)$} \\ {\rm Dil}_{1/\rho_{1,(0)}^{\xi}} \circ u^{\prime,\xi,1}_{\sigma_1,\sigma_2,(0)} &\text{on $\Sigma_{\rm D}^-(\sigma_1,\sigma_2)$} \end{cases} \endaligned $$ Note that ${\rm Dil}_{1/\rho_{1,(0)}^{\xi}} \circ u^{\prime,\xi,1}_{\sigma_1,\sigma_2,(0)} ={\rm Dil}_{1/\rho_{2,(0)}^{\xi}} \circ u^{\prime,\xi,2}_{\sigma_1,\sigma_2,(0)}$ on $\Sigma_{\rm D}^-(\sigma_1,\sigma_2)$. 
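The equality stated in the note above can be spelled out as follows; we include this only for the reader's convenience.

\begin{rem}
On $\Sigma_{\rm D}^-(\sigma_1,\sigma_2)$ we have $u^{\prime,\xi,i}_{\sigma_1,\sigma_2,(0)} = {\rm Dil}_{\rho_{i,(0)}^{\xi}} \circ U_{\rm D}^{\xi}$ for $i=1,2$, by part (3) of the definition of these maps. Hence
\begin{equation*}
{\rm Dil}_{1/\rho_{1,(0)}^{\xi}} \circ u^{\prime,\xi,1}_{\sigma_1,\sigma_2,(0)} = U_{\rm D}^{\xi} = {\rm Dil}_{1/\rho_{2,(0)}^{\xi}} \circ u^{\prime,\xi,2}_{\sigma_1,\sigma_2,(0)}
\end{equation*}
on this region. In particular, $U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}$ does not depend on which of the two expressions is used on $\Sigma_{\rm D}^-(\sigma_1,\sigma_2)$.
\end{rem}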
The following lemma is obvious from the construction: \begin{lem}\label{lem62611} The 7-tuple: \[\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)}:= (u^{\xi,\prime}_{{\rm d},\sigma_1,\sigma_2,(0)},u^{\xi,\prime}_{{\rm s},\sigma_1,\sigma_2,(0)},U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(0)},\sigma_1,\sigma_2, \rho_{1,(0)}^{\xi},\rho_{2,(0)}^{\xi})\] is an inconsistent map. \end{lem} The inconsistent map $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)}$ of Lemma \ref{lem62611} is the approximate solution at the 0-th step. In order to obtain an actual inconsistent solution, we keep modifying this approximate solution into better approximate solutions. To be a bit more detailed, we first use $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)}$ and our bump functions to obtain a triple $\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)}=(u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(0)},u^{\xi,\prime\prime}_{{\rm s},\sigma_1,\sigma_2,(0)},U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)})$ as follows: \begin{equation}\label{uprime20} u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(0)}:\Sigma_{\rm d} \setminus \{z_{\rm d}\} \to X \setminus \mathcal D,\hspace{1cm} u^{\xi,\prime\prime}_{{\rm s},\sigma_1,\sigma_2,(0)}:\Sigma_{\rm s} \setminus \{z_{\rm s}\} \to X \setminus \mathcal D, \end{equation} \begin{equation}\label{uprime21} U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}:\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\} \to \mathcal N_{\mathcal D}(X), \end{equation} which are close to $(u_{\rm d}^{\xi},u_{\rm s}^{\xi},U_{\rm D}^{\xi})$. In fact, the smaller the values of $\sigma_1$ and $\sigma_2$ are, the closer $\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)}$ is to $\xi$. Thus we can exploit this to conclude that an appropriate version of the Cauchy-Riemann operator associated to $\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)}$ has a right inverse. (See Lemma \ref{lem63631}.)
This allows us to find a modified inconsistent map $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(1)}$. We repeat the same process to construct a sequence of inconsistent maps $\{\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(i)}\}$ which are approximate solutions and converge to an inconsistent solution. This sequence of modified inconsistent solutions is constructed using Newton's iteration, and it also has some components of the ``alternating method''.\footnote{See, for example, \cite[Sublemma 8.6]{fconnsum}. The application of the alternating method to gluing analysis of this kind was initiated by Donaldson \cite{Don}, who applied the alternating method directly to a nonlinear equation.} In this method we solve the equation in various pieces and glue them together. In order to carry out the above plan, we need to introduce norms to quantify the distance between two inconsistent maps and to measure how good an approximate solution is. Such norms are given in the following definitions: \begin{defn}\label{tang-incon-map} Let $\frak u' = (u'_{{\rm d}},u'_{{\rm s}},U'_{{\rm D}},\sigma_1,\sigma_2,\rho_1, \rho_2) $ be an inconsistent map. We consider a triple ${\rm V} = (V_{\rm d},V_{\rm s},V_{\rm D})$ with $$ \aligned &V_{\rm d}\in L^2_{m+1}(\Sigma_{\rm d}^+(\sigma_1,\sigma_2),(u'_{\rm d})^*TX), \\ &V_{\rm s}\in L^2_{m+1}(\Sigma_{\rm s}^+(\sigma_1,\sigma_2),(u'_{\rm s})^*TX), \\ &V_{\rm D}\in L^2_{m+1}(\Sigma_{\rm D}^+(\sigma_1,\sigma_2),(U'_{\rm D})^*(T\mathcal N_{\mathcal D}(X))). \endaligned $$ We assume $V_{\rm d}(\frak z) \in T_{u'_{\rm d}(\frak z)}L$ if $\frak z \in \partial \Sigma_{\rm d}^+(\sigma_1,\sigma_2)$.
Moreover, we assume that there exist $(a_{\rm d},b_{\rm d}), (a_{\rm s},b_{\rm s})\in \R\oplus \R$ such that $$ \aligned V_{\rm d} - V_{\rm D} &= (a_{\rm d},b_{\rm d}) \qquad \text{on $[-5T_1,5T_1]_{r_1} \times S^1_{s_1}$} \\ V_{\rm s} - V_{\rm D} &= (a_{\rm s},b_{\rm s}) \qquad \text{on $[-5T_2,5T_2]_{r_2} \times S^1_{s_2}$} \endaligned $$ Here we regard elements of $\R\oplus \R$ as vector fields on the neighborhood $\frak U$ of $\mathcal D$ given by the $\C_*$ action. Define $v_i = V_{\rm D}((0,0)_i)$ where $(0,0)_i$ is the same as in \eqref{defnvivi}. We then define $\hat v_i$ in the same way as in \eqref{form651651newnew}. We now define $\Vert {\rm V}\Vert^2_{W^{2,\sim}_{m,\delta}}$ as follows: $$ \aligned &\Vert V_{{\rm d}} \Vert^2_{L^2_m(\Sigma_{\rm d}^-(\sigma_1,\sigma_2))} + \Vert V_{{\rm s}}\Vert^2_{L^2_m(\Sigma_{\rm s}^-(\sigma_1,\sigma_2))} + \Vert V_{{\rm D}}\Vert^2_{L^2_m(\Sigma_{\rm D}^-(\sigma_1,\sigma_2))} \\ &+ \sum_{j=0}^m \int_{[-5T_1,5T_1]_{r_1} \times S^1_{s_1}} e_{\delta}^{\sigma_1,\sigma_2} \left\vert \nabla^j (V_{\rm D} - \hat v_1)\right\vert^2 dr_1ds_1 \\ &+\sum_{j=0}^m \int_{[-5T_2,5T_2]_{r_2} \times S^1_{s_2}} e_{\delta}^{\sigma_1,\sigma_2} \left\vert \nabla^j (V_{\rm D} - \hat v_2)\right\vert^2 dr_2ds_2 \\ &+\vert v_1\vert^2+\vert v_2\vert^2. \endaligned $$ \par Let ${\rm V} = (V_{\rm d},V_{\rm s},V_{\rm D})$, ${\rm V}' = (V'_{\rm d},V'_{\rm s},V'_{\rm D})$ be as above. We say they are equivalent if $V_{\rm d} = V'_{\rm d}$, $V_{\rm s} = V'_{\rm s}$ and $V_{\rm D} - V'_{\rm D} \in \R\oplus \R$, where $\R\oplus \R$ is the set of vector fields generated by the $\C_*$ action. We put $$ \Vert {\rm V}\Vert^2_{W^{2}_{m,\delta}} = \inf \{\Vert {\rm V}'\Vert^2_{W^{2,\sim}_{m,\delta}} \mid \text{${\rm V}'$ is equivalent to ${\rm V}$}\}. $$ \end{defn} \begin{defn} For $j=1,2$, let $\frak u'_{(j)}$ be an inconsistent map.
We assume that there is a representative $(u'_{{\rm d},(j)},u'_{{\rm s},(j)},U'_{{\rm D},(j)},\sigma_1,\sigma_2,\rho_{1,(j)},\rho_{2,(j)})$ for $\frak u'_{(j)}$ such that $(u'_{{\rm d},(1)},u'_{{\rm s},(1)},U'_{{\rm D},(1)})$ is $C^0$-close to $(u'_{{\rm d},(2)},u'_{{\rm s},(2)},U'_{{\rm D},(2)})$. Define $V_{\rm d}$, $V_{\rm s}$, $V_{\rm D}$ by the following properties: \begin{equation} \label{vector-exp} \aligned &{\rm Exp}(u'_{{\rm d},(1)},V_{\rm d}) = u'_{{\rm d},(2)}, \\ &{\rm Exp}(u'_{{\rm s},(1)},V_{\rm s}) = u'_{{\rm s},(2)}, \\ &{\rm Exp}(U'_{{\rm D},(1)},V_{\rm D}) = U'_{{\rm D},(2)}. \endaligned \end{equation} Let ${\rm V} = (V_{\rm d},V_{\rm s},V_{\rm D})$, and define: \[ d_{W^{2}_{m,\delta}}(\frak u'_{(1)},\frak u'_{(2)})= \inf\{\Vert {\rm V}\Vert_{W^{2}_{m,\delta}}\}, \] where the infimum is taken over all representatives for $\frak u'_{(1)}$ and $\frak u'_{(2)}$ which are close enough to each other in the $C^0$ metric such that the vectors in \eqref{vector-exp} exist. Therefore, $d_{W^{2}_{m,\delta}}$ is a well defined distance between two equivalence classes of inconsistent maps. \end{defn} For any inconsistent map $\frak u'=(u'_{\rm d},u'_{\rm s},U'_{\rm D},\sigma_1,\sigma_2,\rho_1,\rho_2)$, we may use a parallel transport construction similar to the one in Definition \ref{defn615inconsis} to define obstruction spaces for $\frak u'$. That is to say, we define maps $\mathcal {PAL}$ as in \eqref{PAL}. Then the images of $E_{\rm d}$ and $E_{\rm s}$ with respect to these maps give rise to the obstruction spaces $E_{\rm d}(u'_{\rm d})$ and $E_{\rm s}(u'_{\rm s})$. Similarly, we define $E_{\rm D}(U'_{\rm D})$ by replacing $u'_{\rm D}$ with $\pi\circ U'_{\rm D}$ in \eqref{PAL-D} and using the decomposition \eqref{decom-tan-bdle}. We will write $E(\frak u')$ for the direct sum of the vector spaces $E_{\rm d}(u'_{\rm d})$, $E_{\rm s}(u'_{\rm s})$ and $E_{\rm D}(U'_{\rm D})$.
Note that $E_{\rm d}(u'_{\rm d})$, $E_{\rm s}(u'_{\rm s})$ and $E_{\rm D}(U'_{\rm D})$ are identified with $E_{\rm d}$, $E_{\rm s}$ and $E_{\rm D}$, respectively. Therefore, we drop $u'_{\rm d}$, $u'_{\rm s}$ and $U'_{\rm D}$ from our notation for these obstruction spaces when no confusion can arise. \begin{defn} Let $\frak u' =(u'_{{\rm d}},u'_{{\rm s}},U'_{{\rm D}},\sigma_1,\sigma_2,\rho_1,\rho_2)$ be an inconsistent map and $\frak e = (\frak e_{\rm d},\frak e_{\rm s},\frak e_{\rm D})\in E_{\rm d}\oplus E_{\rm s} \oplus E_{\rm D}$. Then we define $\Vert \overline \partial \frak u' - \frak e\Vert^2_{L^2_{m,\delta}}$ to be the following sum: $$ \aligned &\Vert \overline\partial u'_{{\rm d}} - \frak e_{\rm d}\Vert^2_{L^2_m(\Sigma_{\rm d}^-(\sigma_1,\sigma_2))} +\Vert \overline\partial u'_{{\rm s}} - \frak e_{\rm s}\Vert^2_{L^2_m(\Sigma_{\rm s}^-(\sigma_1,\sigma_2))} \\ &+\Vert \overline\partial U'_{{\rm D}} - \frak e_{\rm D}\Vert^2_{L^2_m(\Sigma_{\rm D}^-(\sigma_1,\sigma_2))}\\ &+\sum_{j=0}^m\int_{[-5T_1,5T_1]_{r_1} \times S^1_{s_1}} e_{\delta}^{\sigma_1,\sigma_2} \left\vert \nabla^j \overline\partial U'_{{\rm D}}\right\vert^2 dr_1ds_1\\ &+\sum_{j=0}^m\int_{[-5T_2,5T_2]_{r_2} \times S^1_{s_2}} e_{\delta}^{\sigma_1,\sigma_2} \left\vert \nabla^j \overline\partial U'_{{\rm D}}\right\vert^2 dr_2ds_2. \endaligned $$ \end{defn} We remark that the first three terms in the above definition are the Sobolev norms of $\overline \partial \frak u' - \frak e$ in the {\it thick part}. The fourth and the fifth terms are its weighted Sobolev norms in the neck region. Because of our choice of cylindrical metrics on $\frak U$, the partial $\C_*$-action induces isometries and preserves the almost complex structure. Therefore, the above sum is well-defined and only depends on the equivalence class of $\frak u'$. The modifications of our approximate solutions are performed by finding solutions to the linearization of the modified Cauchy-Riemann equations in \eqref{eq630-def}.
Since our equation has terms induced by the obstruction bundle, the linearized operator has an extra term in addition to $D_{u'}\overline{\partial}$. The equations in \eqref{eq630-def} can be regarded as an equation for an inconsistent map $\frak u' =(u'_{{\rm d}},u'_{{\rm s}},U'_{{\rm D}},\sigma_1,\sigma_2,\rho_1,\rho_2)$ and $(\frak e_{\rm d}, \frak e_{\rm s}, \frak e_{\rm D})\in E_{\rm d}\oplus E_{\rm s} \oplus E_{\rm D}$: \begin{equation}\label{eq630-rep} \overline \partial u_{\rm d}'- \frak e_{\rm d}=0, \quad \overline \partial u_{\rm s}' -\frak e_{\rm s}=0, \quad \overline \partial U_{\rm D}' -\frak e_{\rm D}=0. \end{equation} Suppose ${\rm V} = (V_{\rm d},V_{\rm s},V_{\rm D})$ is an element of the Hilbert space introduced in Definition \ref{tang-incon-map}. For each real number $\tau$ with $|\tau|<1$, let $\frak u^\tau$ be given by the triple $(u^\tau_{{\rm d}},u^\tau_{{\rm s}},U^\tau_{{\rm D}})$ defined as: \begin{equation} u^\tau_{{\rm d}}:={\rm Exp}(u'_{{\rm d}},\tau V_{\rm d}), \hspace{.6cm} u^\tau_{{\rm s}}:={\rm Exp}(u'_{{\rm s}},\tau V_{\rm s}), \hspace{.6cm} U^\tau_{{\rm D}}:={\rm Exp}(U'_{{\rm D}},\tau V_{\rm D}). \end{equation} We use parallel transport along minimal geodesics to obtain: \[ \mathcal{PAL}^\tau_{u'_{\rm d}} : L^2_{m,\delta}(\Sigma_{\rm d}^+(\sigma_1,\sigma_2); u_{\rm d}^{\prime *}TX\otimes \Lambda^{0,1})\xrightarrow{\cong} L^2_{m,\delta}(\Sigma_{\rm d}^+(\sigma_1,\sigma_2);u^{\tau *}_{\rm d} TX\otimes \Lambda^{0,1}) \] and maps $\mathcal{PAL}^\tau_{u'_{\rm s}}$ and $\mathcal{PAL}^\tau_{U'_{\rm D}}$.
Then for $\frak e= (\frak e_{\rm d}, \frak e_{\rm s}, \frak e_{\rm D})\in E_{\rm d}\oplus E_{\rm s} \oplus E_{\rm D}$, we define: \begin{equation} \label{der-ob-bun} \aligned (D_{u_{\rm d}'}E)(\frak e_{\rm d},V_{\rm d})= & \left.\frac{d}{d\tau}\right\vert_{\tau=0} ((\mathcal{PAL}^\tau_{u'_{\rm d}})^{-1}(\frak e_{\rm d})), \\ (D_{u_{\rm s}'}E)(\frak e_{\rm s},V_{\rm s})= & \left.\frac{d}{d\tau}\right\vert_{\tau=0} ((\mathcal{PAL}^\tau_{u'_{\rm s}})^{-1}(\frak e_{\rm s})), \\ (D_{U_{\rm D}'}E)(\frak e_{\rm D},V_{\rm D})=& \left.\frac{d}{d\tau}\right\vert_{\tau=0} ((\mathcal{PAL}^\tau_{U'_{\rm D}})^{-1}(\frak e_{\rm D})). \endaligned \end{equation} We also reserve the following notation for the triple given by the above vectors: \begin{equation} \label{der-ob-bun-triple} (D_{\frak u'}E)(\frak e,\rm V)= ((D_{u_{\rm d}'}E)(\frak e_{\rm d},V_{\rm d}), (D_{u_{\rm s}'}E)(\frak e_{\rm s},V_{\rm s}), (D_{U_{\rm D}'}E)(\frak e_{\rm D},V_{\rm D})). \end{equation} The linearizations of the Cauchy-Riemann equations in \eqref{eq630-rep} at $(\frak u',\frak e)$ evaluated at $\rm V$ as above and $\frak f\in E_{\rm d}\oplus E_{\rm s} \oplus E_{\rm D}$ have the following form: \begin{equation}\label{lin-eq630-rep} D_{\frak u'}\overline{\partial}({\rm V}) - (D_{\frak u'}E)(\frak e,{\rm V})-\frak f, \end{equation} where: \[ D_{\frak u'}\overline{\partial}({\rm V})=(D_{u_{\rm d}'}\overline{\partial}(V_{\rm d}), D_{u_{\rm s}'}\overline{\partial}(V_{\rm s}),D_{U_{\rm D}'}\overline{\partial}(V_{\rm D}) ). \] \subsection{Newton's Iteration} \label{subsub:newton} Now we are ready to carry out the strategy discussed in the previous subsection. In the following, we use the maps constructed in Subsection \ref{subsub:preglue}.
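The formula \eqref{lin-eq630-rep} arises from the following formal computation, which we include only for the reader's convenience.

\begin{rem}
Differentiating the pull-back of the first equation in \eqref{eq630-rep} along the path $\tau \mapsto (u^{\tau}_{\rm d}, \frak e_{\rm d} + \tau \frak f_{\rm d})$ gives
\begin{equation*}
\left.\frac{d}{d\tau}\right\vert_{\tau=0} (\mathcal{PAL}^\tau_{u'_{\rm d}})^{-1}\bigl(\overline \partial u^{\tau}_{\rm d} - (\frak e_{\rm d} + \tau\frak f_{\rm d})\bigr) = D_{u_{\rm d}'}\overline{\partial}(V_{\rm d}) - (D_{u_{\rm d}'}E)(\frak e_{\rm d},V_{\rm d}) - \frak f_{\rm d},
\end{equation*}
and similarly for the other two components.
\end{rem}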
\par\smallskip \noindent{\bf (Step 0-4) (Separating error terms into three parts)} \par We first fix notation for the error terms of our first approximation $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)}$: \begin{equation}\label{error-first-approx} \aligned {\rm Err}^{\xi}_{{\rm d},\sigma_1,\sigma_2,(0)} &= \chi_{1,\mathcal X}^{\leftarrow}(\overline \partial u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)}- \frak e_{{\rm d},(0)}^{\xi}),\\ {\rm Err}^{\xi}_{{\rm s},\sigma_1,\sigma_2,(0)} &= \chi_{2,\mathcal X}^{\leftarrow}(\overline \partial u^{\prime, \xi,2}_{\sigma_1,\sigma_2,(0)} - \frak e_{{\rm s},(0)}^{\xi}), \\ {\rm Err}^{\xi}_{{\rm D},\sigma_1,\sigma_2,(0)} &= \chi_{\mathcal X}^{\rightarrow}(\overline \partial u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)}- \frak e_{{\rm D},(0)}^{\xi,1}), \endaligned \end{equation} where $\frak e_{{\rm d},(0)}^{\xi}$, $\frak e_{{\rm s},(0)}^{\xi}$, $\frak e_{{\rm D},(0)}^{\xi,i}$ are defined in \eqref{form6662}. \par\smallskip \noindent{\bf (Step 1-1) (Approximate solution for linearization)} \par Next we define: \begin{equation}\label{new677770} \frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)}=(u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(0)}, u^{\xi,\prime\prime}_{{\rm s},\sigma_1,\sigma_2,(0)}, U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}) \end{equation} whose entries have the form given in \eqref{uprime20} and \eqref{uprime21}. Let: \begin{equation} \aligned &u^{\xi}_{\rm d}(z_{\rm d}) = p^{\xi}_{{\rm d},\sigma_1,\sigma_2,(0)} = p_{{\rm D,d},\sigma_1,\sigma_2,(0)}, \\ &u^{\xi}_{\rm s}(z_{\rm s}) = p^{\xi}_{{\rm s},\sigma_1,\sigma_2,(0)} = p_{{\rm D,s},\sigma_1,\sigma_2,(0)}. \endaligned\end{equation} We take $c_{\rm d}^{\xi}$, $c_{\rm s}^{\xi}$, $c_{\rm D,d}^{\xi}$, $c_{\rm D,s}^{\xi}$ as in (\ref{shiki655}), (\ref{shiki655rev}), (\ref{shiki656}), (\ref{shiki656rev}), respectively.
We regard $c_{\rm d}^{\xi} z^2$ as an element of the fiber of $\mathcal N_{\mathcal D}(X)$ at $p^{\xi}_{{\rm d},\sigma_1,\sigma_2,(0)}$ and hence as an element of $X \setminus \mathcal D$. We define: \begin{equation}\label{new682new} \aligned &u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(0)}(r_1,s_1) \\ &:= {\rm Exp}(c_{\rm d}^{\xi} z^2,\chi_{1,\mathcal B}^{\leftarrow}(r_1-T_1,s_1){\rm E}(c_{\rm d}^{\xi} z^2,u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)}(r_1,s_1))) \endaligned \end{equation} if $(r_1,s_1) \in [-5T_1,\infty)_{r_1} \times S^1_{s_1} \subset \Sigma_{\rm d}\setminus\{z_{\rm d}\}$. If $\frak z \in \Sigma_{\rm d}\setminus\{z_{\rm d}\}$ is an element in the complement of $[-5T_1,\infty)_{r_1} \times S^1_{s_1}$, then we define: $$ u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(0)}(\frak z) :=u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)}(\frak z). $$ This completes the definition of $u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(0)}$ as a map from $\Sigma_{\rm d}\setminus\{z_{\rm d}\}$ to $X\setminus \mathcal D$. Similarly, we define: \begin{equation}\label{new683new} \aligned &u^{\xi,\prime\prime}_{{\rm s},\sigma_1,\sigma_2,(0)}(r_2,s_2) \\ &:={\rm Exp}(c_{\rm s}^{\xi} z^3,\chi_{2,\mathcal B}^{\leftarrow}(r_2-T_2,s_2){\rm E}(c_{\rm s}^{\xi} z^3,u^{\prime, \xi,2}_{\sigma_1,\sigma_2,(0)}(r_2,s_2))) \endaligned \end{equation} if $(r_2,s_2) \in [-5T_2,\infty)_{r_2} \times S^1_{s_2} \subset \Sigma_{\rm s}\setminus\{z_{\rm s}\}$. Here we regard $c_{\rm s}^{\xi} z^3$ as an element of the fiber of $\mathcal N_{\mathcal D}(X)$ at $p^{\xi}_{{\rm s},\sigma_1,\sigma_2,(0)}$ and hence as an element of $X \setminus \mathcal D$. If $\frak z \in \Sigma_{\rm s}\setminus\{z_{\rm s}\}$ is an element in the complement of $[-5T_2,\infty)_{r_2} \times S^1_{s_2}$, then we define: \[ u^{\xi,\prime\prime}_{{\rm s},\sigma_1,\sigma_2,(0)}(\frak z) :=u^{\prime, \xi,2}_{\sigma_1,\sigma_2,(0)}(\frak z).
\] It is easy to see from the definitions that $u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(0)}$ and $u^{\xi,\prime\prime}_{{\rm s},\sigma_1,\sigma_2,(0)}$ satisfy Condition \ref{conds626}: \begin{conds}\label{conds626} We require that the map $u''_{\rm d} : (\Sigma_{\rm d}\setminus\{z_{\rm d}\},\partial \Sigma_{\rm d}) \to (X\setminus \mathcal D,L)$ (resp. $u''_{\rm s} : \Sigma_{\rm s}\setminus\{z_{\rm s}\} \to X\setminus \mathcal D$) satisfies the following conditions: \begin{enumerate} \item $u''_{\rm d}$ (resp. $u''_{\rm s}$) maps $[3T_1,\infty)_{r_1} \times S^1_{s_1}$ (resp. $[3T_2,\infty)_{r_2} \times S^1_{s_2}$) to $\frak U$. There exist $p_{\rm d}, p_{\rm s} \in \mathcal D$ such that the restriction of $\pi \circ u''_{\rm d}$ (resp. $\pi \circ u''_{\rm s}$) to $[3T_1,\infty)_{r_1} \times S^1_{s_1}$ (resp. $[3T_2,\infty)_{r_2} \times S^1_{s_2}$) is a constant map to $p_{\rm d}$ (resp. $p_{\rm s}$). \item After an appropriate trivialization of the normal bundle $\mathcal N_{\mathcal D}(X)$ at the points $p_{\rm d}, p_{\rm s}$, there exist $c_{\rm d}, c_{\rm s} \in \C_*$ such that the restriction of $u''_{\rm d} \circ \varphi_{\rm d}$ to $[3T_1,\infty)_{r_1} \times S^1_{s_1}$ (resp. $u''_{\rm s} \circ \varphi_{\rm s}$ to $[3T_2,\infty)_{r_2} \times S^1_{s_2}$) is \begin{equation}\label{form6733} (u''_{\rm d} \circ \varphi_{\rm d})(z)=c_{\rm d} z^2,\quad\text{(resp. $(u''_{\rm s} \circ \varphi_{\rm s})(z)= c_{\rm s} z^3$).} \end{equation} \end{enumerate} \end{conds} Next, we define the map $U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}:\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\} \to \mathcal N_{\mathcal D}(X)$. 
The trivializations of the fibers of $\mathcal N_{\mathcal D}(X)$ at the points $p^{\xi}_{{\rm d},\sigma_1,\sigma_2,(0)}$ and $p^{\xi}_{{\rm s},\sigma_1,\sigma_2,(0)}$ allow us to identify $c_{\rm D,d}^{\xi} w^{-2}$ and $c_{\rm D,s}^{\xi} w^{-3}$ with elements of $\mathcal N_{\mathcal D}(X)\setminus \mathcal D = \R_{\tau} \times S\mathcal N_{\mathcal D}(X)$. We define: \begin{equation}\label{new685new} \aligned &U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}(r_1,s_1) \\ &={\rm Exp}(c_{\rm D,d}^{\xi} w^{-2}, \chi_{\mathcal A}^{\rightarrow}(r_1+T_1,s_1)\cdot\\ &\qquad\qquad\qquad\qquad{\rm E}(c_{\rm D,d}^{\xi} w^{-2},(({\rm Dil}_{\rho_{1,(0)}^{\xi}})^{-1} \circ u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)})(r_1,s_1))) \endaligned \end{equation} if $(r_1,s_1) \in (-\infty,5T_1]_{r_1} \times S^1_{s_1} \subset\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\}$, and: \begin{equation}\label{new686new} \aligned &U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}(r_2,s_2) \\ &={\rm Exp}(c_{\rm D,s}^{\xi} w^{-3},\chi_{\mathcal A}^{\rightarrow}(r_2+T_2,s_2)\cdot\\ &\qquad\qquad\qquad\qquad{\rm E}(c_{\rm D,s}^{\xi} w^{-3},(({\rm Dil}_{\rho_{2,(0)}^{\xi}})^{-1} \circ u^{\prime, \xi,2}_{\sigma_1,\sigma_2,(0)})(r_2,s_2))) \endaligned \end{equation} if $(r_2,s_2) \in (-\infty,5T_2]_{r_2} \times S^1_{s_2} \subset \Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\}$. If $\frak z$ is an element of $\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\}$ that does not belong to the above cylinders, then we define: $$ U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}(\frak z) :=({\rm Dil}_{1/\rho_{1,(0)}^{\xi}} \circ u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)})(\frak z). $$ Note that we can equivalently use the term $({\rm Dil}_{1/\rho_{2,(0)}^{\xi}} \circ u^{\prime, \xi,2}_{\sigma_1,\sigma_2,(0)})(\frak z)$ on the right hand side of the above definition.
We remark that the `highest order' terms of the maps $({\rm Dil}_{\rho_{i,(0)}^{\xi}})^{-1} \circ u^{\prime, \xi,i}_{\sigma_1,\sigma_2,(0)}$ and $U_{\rm D}^{\xi}$ agree with each other on $[-5T_i,5T_i]_{r_i} \times S^1_{s_i}$. Similarly, $U_{\rm D}^{\xi}(\varphi_{\rm D,d}(w))$ (resp. $U_{\rm D}^{\xi}(\varphi_{\rm D,s}(w))$) and $c_{\rm D,d}^{\xi} w^{-2}$ (resp. $c_{\rm D,s}^{\xi} w^{-3}$) have the same highest order terms on $[-5T_i,5T_i]_{r_i} \times S^1_{s_i}$. It is easy to see from the definition that $U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}$ satisfies Condition \ref{conds627}: \begin{conds}\label{conds627} We require that the map $U''_{\rm D} : \Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\} \to \mathcal N_{\mathcal D}(X)$ satisfies the following conditions: \begin{enumerate} \item There exist $p_{\rm D,d}, p_{\rm D,s} \in \mathcal D$ such that the restriction of $\pi \circ U''_{\rm D}$ to $(-\infty,-3T_1]_{r_1} \times S^1_{s_1}$ (resp. $(-\infty,-3T_2]_{r_2} \times S^1_{s_2}$) is a constant map to $p_{\rm D,d}$ (resp. $p_{\rm D,s}$). \item There exist $c_{\rm D,d}, c_{\rm D,s} \in \C_*$ such that the restriction of $U''_{\rm D} \circ \varphi_{\rm D,d}$ to $(-\infty,-3T_1]_{r_1} \times S^1_{s_1}$ (resp. $U''_{\rm D} \circ \varphi_{\rm D,s}$ to $(-\infty,-3T_2]_{r_2} \times S^1_{s_2}$) is \begin{equation}\label{form6744} (U''_{\rm D} \circ \varphi_{\rm D,d})(w)= c_{\rm D,d} w^{-2},\quad\text{(resp. $(U''_{\rm D} \circ \varphi_{\rm D,s})(w)=c_{\rm D,s} w^{-3}$).} \end{equation} \end{enumerate} \end{conds} Let $\frak u''=(u''_{\rm d},u''_{\rm s},U''_{\rm D})$ be a triple of maps satisfying Conditions \ref{conds626}, \ref{conds627}. We also assume: \begin{equation}\label{form67554} p_{\rm d} = p_{\rm D,d},\qquad p_{\rm s} = p_{\rm D,s}.
\end{equation} \begin{defn}\label{defn62888} Let $W^{2 \sim}_{m,\delta}(\frak u'';TX)$ be the set of all $\bf V=({\bf V}_{\rm d},{\bf V}_{\rm s},{\bf V}_{\rm D})$ satisfying the following properties: \begin{enumerate} \item ${\bf V}_{\rm d} = (V_{\rm d},(\frak r_{\infty,\rm d},\frak s_{\infty,\rm d}),v_{\rm d})\in W^2_{m,\delta}(\Sigma_{\rm d} \setminus \{z_{\rm d}\};((u''_{\rm d})^*TX,(u_{\rm d}'')^*TL))$. (This function space is introduced in Definition \ref{defn6262}.) \item ${\bf V}_{\rm s} = (V_{\rm s},(\frak r_{\infty,\rm s},\frak s_{\infty,\rm s}),v_{\rm s})\in W^2_{m,\delta}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};(u''_{\rm s})^*TX)$. (This function space is introduced in Definition \ref{defn64444}.) \item ${\bf V}_{\rm D} = (V_{\rm D},(\frak r_{\infty,\rm D,d},\frak s_{\infty,\rm D,d}),(\frak r_{\infty,\rm D,s},\frak s_{\infty,\rm D,s}),v_{\rm D,d},v_{\rm D,s})\in W^2_{m,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\};(U_{\rm D}'')^*T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X))).$ (This function space is introduced in Definition \ref{defn66666}.) \item We assume \[ v_{\rm d} = v_{\rm D,d},\qquad v_{\rm s} = v_{\rm D,s}. \] \end{enumerate} The space $W^{2 \sim}_{m,\delta}(\frak u'';TX)$ is a linear subspace of finite codimension of the direct sum of the three Hilbert spaces defined in Definitions \ref{defn6262}, \ref{defn64444}, \ref{defn66666}. Therefore, it is also a Hilbert space. We regard $\R \oplus \R$ as the subspace of $W^2_{m,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\};(U_{\rm D}'')^*T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X)))$ given by constant sections with values in $\R\oplus \R\subset T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X))$. Thus $\R \oplus \R$ can also be regarded as a subspace of $W^{2 \sim}_{m,\delta}(\frak u'';TX)$. We define $W^{2}_{m,\delta}(\frak u'';TX)$ to be the quotient space of $W^{2 \sim}_{m,\delta}(\frak u'';TX)$ by this copy of $\R \oplus \R$.
\end{defn} \begin{rem} We do {\it not} assume $\frak r_{\infty,\rm d} = \frak r_{\infty,\rm D,d}$ or $\frak r_{\infty,\rm s} = \frak r_{\infty,\rm D,s}$. The fact that we might have $\frak r_{\infty,\rm d} \ne \frak r_{\infty,\rm D,d}$ or $\frak r_{\infty,\rm s} \ne \frak r_{\infty,\rm D,s}$ is related to the shift of $\rho_1$, $\rho_2$, which we mentioned in Subsection \ref{subsub:linsol1}. \end{rem} \begin{defn} Let $L^{2}_{m,\delta}(\frak u'';TX\otimes \Lambda^{0,1})$ be the direct sum of the three Hilbert spaces: \[ \aligned &L^2_{m,\delta}(\Sigma_{\rm d} \setminus \{z_{\rm d}\};(u''_{\rm d})^*TX \otimes \Lambda^{0,1}) \\ &\oplus L^2_{m,\delta}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};(u''_{\rm s})^*TX \otimes \Lambda^{0,1}) \\ &\oplus L^2_{m,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\};(U''_{\rm D})^*T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X)) \otimes \Lambda^{0,1}), \endaligned \] introduced in Definitions \ref{coker-weighted-sob}, \ref{defn64444}, \ref{defn66666}. The three operators (\ref{fredholmmap1}), (\ref{fredholmmap2ss}), (\ref{fredholmmap2ssrev}) together induce a Fredholm operator: \begin{equation}\label{form67575} D_{\frak u''}\overline\partial : W^{2}_{m+1,\delta}(\frak u'';TX)\to L^{2}_{m,\delta}(\frak u'';TX\otimes \Lambda^{0,1}). \end{equation} \end{defn} \begin{rem} If $u''_{\rm d},u''_{\rm s},U''_{\rm D}$ are $C^1$-close to $u_{\rm d}^{\xi}$, $u_{\rm s}^{\xi}$, $U_{\rm D}^{\xi}$, then the surjectivity of (\ref{fredholmmap1}), (\ref{fredholmmap2ss}), (\ref{fredholmmap2ssrev}) (for $u_{\rm d}^{\xi}$, $u_{\rm s}^{\xi}$, $U_{\rm D}^{\xi}$) modulo $E_{\rm d}(u_{\rm d}^{\xi}) \oplus E_{\rm s}(u_{\rm s}^{\xi}) \oplus E_{\rm D}(U_{\rm D}^{\xi})$ and the mapping transversality condition of Definition \ref{defn6969} imply that (\ref{form67575}) is surjective modulo the obstruction space $E_{\rm d}(u_{\rm d}'') \oplus E_{\rm s}(u_{\rm s}'') \oplus E_{\rm D}(U_{\rm D}'')$. (See also Lemma \ref{lem63631}.)
\end{rem} \begin{lem}\label{lem63030} The triple: \[ {\rm Err}^{\xi}_{\sigma_1,\sigma_2,(0)}:=({\rm Err}^{\xi}_{{\rm d},\sigma_1,\sigma_2,(0)}, {\rm Err}^{\xi}_{{\rm s},\sigma_1,\sigma_2,(0)}, {\rm Err}^{\xi}_{{\rm D},\sigma_1,\sigma_2,(0)}) \] determines an element of $L^{2}_{m,\delta}(\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)};TX\otimes \Lambda^{0,1})$. The terms above are defined in \eqref{error-first-approx}. \end{lem} \begin{proof} This follows from the fact that the map $u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(0)}$ (resp. $u^{\xi,\prime\prime}_{{\rm s},\sigma_1,\sigma_2,(0)}$, $U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}$) coincides with $u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)}$ (resp. $u^{\prime, \xi,2}_{\sigma_1,\sigma_2,(0)}$, $({\rm Dil}_{\rho_{1,(0)}^{\xi}})^{-1} \circ u^{\prime, \xi,1}_{\sigma_1,\sigma_2,(0)}$) on the support of ${\rm Err}^{\xi}_{{\rm d},\sigma_1,\sigma_2,(0)}$ (resp. ${\rm Err}^{\xi}_{{\rm s},\sigma_1,\sigma_2,(0)}$, ${\rm Err}^{\xi}_{{\rm D},\sigma_1,\sigma_2,(0)}$). \end{proof} \begin{lem}\label{lem63631} Let the linear operator \[L:W^{2}_{m+1,\delta}(\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)};TX)\oplus E_{\rm d}\oplus E_{\rm s}\oplus E_{\rm D}\to L^{2}_{m,\delta}(\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)};TX\otimes \Lambda^{0,1})\] be given as follows: \[ L({\bf V},\frak f)=D_{\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)}}\overline\partial({\bf V}) -(D_{\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)}}E)( \frak e_{(0)}^{\xi},{\bf V})-\frak f \] where the terms of $\frak e_{(0)}^{\xi}:= (\frak e_{{\rm d},(0)}^{\xi},\frak e_{{\rm s},(0)}^{\xi},\frak e_{{\rm D},(0)}^{\xi,1})$ are defined in \eqref{form6662}. The term $(D_{\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)}}E)(\frak e_{(0)}^{\xi},{\bf V})$ is defined similarly to the corresponding term in \eqref{lin-eq630-rep}.
If $\sigma_1$ and $\sigma_2$ are small enough, then there is a continuous operator \[Q:L^{2}_{m,\delta}(\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)};TX\otimes \Lambda^{0,1}) \to W^{2}_{m+1,\delta}(\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)};TX)\oplus E_{\rm d}\oplus E_{\rm s}\oplus E_{\rm D}\] which is a right inverse to $L$. Let $(\overline Q,Q_{\rm d},Q_{\rm s},Q_{\rm D})$ be the components of $Q$ with respect to the decomposition of the target of $Q$. There is also a constant $C$, independent of $\sigma_1$, $\sigma_2$ and $\xi$, such that for any $z\in L^{2}_{m,\delta}(\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)};TX\otimes \Lambda^{0,1})$: \begin{equation} \label{estimate} \Vert \overline Q(z) \Vert_{W^2_{m+1,\delta}}+|Q_{\rm d}(z)|+|Q_{\rm s}(z)|+|Q_{\rm D}(z)|\le C \Vert z\Vert_{L^{2}_{m,\delta}}. \end{equation} Moreover, we can make this choice of $Q$ unique by demanding that its image is $L^2$-orthogonal\footnote{ We use the $L^2$ norm on the target of $Q$ given by $W^2_{m,\delta}$ with $m=0$ and $\delta=0$.} to the subspace ${\rm ker}(L)$.\footnote{The last condition is similar to \cite[Definition 5.9]{foooexp}.} \end{lem} \begin{proof} Using Definition \ref{defn6868} (2), (3), (4) and Definition \ref{defn6969}, we can construct a continuous operator: \[ \aligned Q_0 = (Q_{0,\rm d},Q_{0,\rm s},Q_{0,\rm D}, Q_{0,E}): &L^2_{m}(\Sigma_{\rm d}\setminus\{z_{\rm d}\};u_{\rm d}^*TX\otimes \Lambda^{0,1}) \\ &\quad \oplus L^2_{m}(\Sigma_{\rm s}\setminus\{z_{\rm s}\};u_{\rm s}^*TX\otimes \Lambda^{0,1}) \\ &\quad \oplus L^2_{m}(\Sigma_{\rm D}\setminus\{z_{\rm d},z_{\rm s}\};U_{\rm D}^*TX\otimes \Lambda^{0,1}) \\ &\to W^2_{m+1,\delta}(\Sigma_{\rm d} \setminus \{z_{\rm d}\};(u_{\rm d}^*TX,u_{\rm d}^*TL)) \\ &\quad \oplus W^2_{m+1,\delta}(\Sigma_{\rm s} \setminus \{z_{\rm s}\};u_{\rm s}^*TX)\\ & \quad \oplus W^2_{m+1,\delta}(\Sigma_{\rm D} \setminus \{z_{\rm d},z_{\rm s}\};U_{\rm D}^*T(\R_{\tau} \times S\mathcal N_{\mathcal D}(X))) \\ &\qquad \oplus
E_{\rm d} \oplus E_{\rm s} \oplus E_{\rm D} \endaligned \] such that: \[ \mathcal{EV}_{0,\rm d}\circ Q_{0,\rm d} = \mathcal{EV}_{0,\rm d}\circ Q_{0,\rm D},\quad \mathcal{EV}_{0,\rm s}\circ Q_{0,\rm s} = \mathcal{EV}_{0,\rm s}\circ Q_{0,\rm D}, \] and: \[ (D_{u_{\rm d}}\overline \partial Q_{0,\rm d},D_{u_{\rm s}}\overline \partial Q_{0,\rm s}, D_{U_{\rm D}}\overline \partial Q_{0,\rm D})=Q_{0,E}. \] We use appropriate bump functions to obtain the operator: \[ Q_1 : L^{2}_{m,\delta}(\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)};TX\otimes \Lambda^{0,1}) \to W^{2}_{m+1,\delta}(\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)};TX)\oplus E_{\rm d}\oplus E_{\rm s}\oplus E_{\rm D}. \] (See \cite[(7.1.23)]{fooobook2} for more details. Note that \cite[(7.1.23)]{fooobook2} concerns the case of gluing two irreducible components. Here we glue three components. However, the construction is essentially the same.) In the same way as in \cite[Lemma 7.1.29]{fooobook2}, we can show: \begin{equation} \Vert (L \circ Q_1)(z) - z \Vert \le C e^{-c\delta_1 \min \{T_1,T_2\}} \Vert z \Vert ,\qquad \Vert Q_1(z) \Vert \le C' \Vert z \Vert. \end{equation} Here $c, C, C' > 0$ are constants independent of $\sigma_1$ and $\sigma_2$. Thus for $\sigma_1$ and $\sigma_2$ small enough, we may define: \[ Q_2 = \sum_{k=0}^{\infty} Q_1 \circ ({\rm id} - L \circ Q_1)^k. \] Then we have: \begin{equation} L \circ Q_2 ={\rm id}, \qquad \Vert Q_2(z) \Vert \le C'' \Vert z\Vert. \end{equation} (See \cite[(6.68)]{foooexp}. This formula is used there to estimate derivatives of the right inverse with respect to the gluing parameter.) The operator $Q_2$ has the required properties except the last one. To obtain the right inverse which also satisfies the last condition, we compose $Q_2$ with the projection onto the orthogonal complement of the finite dimensional space ${\rm ker}(L)$. \end{proof} Let $z:={\rm Err}^{\xi}_{\sigma_1,\sigma_2,(0)}$.
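Before applying the right inverse to this error term, it may be helpful to record the standard Neumann-series computation behind the construction of $Q_2$ in the proof above. Writing $R := {\rm id} - L \circ Q_1$, the first estimate in the proof gives $\Vert R \Vert \le C e^{-c\delta_1 \min \{T_1,T_2\}} < 1$ for $\sigma_1$, $\sigma_2$ small, so the geometric series $Q_2 = \sum_{k=0}^{\infty} Q_1 \circ R^k$ converges, and it telescopes: \[ L \circ Q_2 = \sum_{k=0}^{\infty} ({\rm id} - R) \circ R^k = \sum_{k=0}^{\infty} \left(R^k - R^{k+1}\right) = {\rm id}, \qquad \Vert Q_2 \Vert \le \frac{C'}{1 - C e^{-c\delta_1 \min \{T_1,T_2\}}}. \] This is the source of the constant $C''$, which is uniform in $\sigma_1$ and $\sigma_2$.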
By Lemma \ref{lem63030}, we know that $z$ belongs to the target of $L$. Therefore, $\overline Q(z)$ determines a triple as follows: \begin{equation*}\label{Vssx} {\bf V}_{\sigma_1,\sigma_2,(1)}^{\xi} = ({\bf V}^{\xi}_{\rm d,\sigma_1,\sigma_2,(1)}, {\bf V}^{\xi}_{\rm s,\sigma_1,\sigma_2,(1)},{\bf V}^{\xi}_{\rm D,\sigma_1,\sigma_2,(1)})\in W^2_{m+1,\delta}(\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)};TX). \end{equation*} Moreover, we set: \[ \Delta\frak e_{{\rm d},\sigma_1,\sigma_2,(1)}^{\xi}:=Q_{\rm d}(z),\hspace{.6cm} \Delta\frak e_{{\rm s},\sigma_1,\sigma_2,(1)}^{\xi}:=Q_{\rm s}(z),\hspace{.6cm} \Delta\frak e_{{\rm D},\sigma_1,\sigma_2,(1)}^{\xi}:=Q_{\rm D}(z). \] Lemmas \ref{lem623} and \ref{lem63631} imply that: \[ \Vert {\bf V}^{\xi}_{\sigma_1,\sigma_2,(1)}\Vert_{W^2_{m+1,\delta}}\le C e^{-c\delta_1 \min \{T_1,T_2\}} \] and \[ \vert \Delta\frak e_{{\rm d},\sigma_1,\sigma_2,(1)}^{\xi} \vert, \vert \Delta\frak e_{{\rm s},\sigma_1,\sigma_2,(1)}^{\xi} \vert, \vert \Delta\frak e_{{\rm D},\sigma_1,\sigma_2,(1)}^{\xi} \vert \le C e^{-c\delta_1 \min \{T_1,T_2\}}. \] In summary, we have obtained a solution of the linearized equation with appropriate decay properties. \par\smallskip \noindent{\bf (Step 1-2) (Gluing solutions)} \par In this step we will use ${\bf V}_{\sigma_1,\sigma_2,(1)}^{\xi}$ to obtain an improved approximate inconsistent solution.
Suppose the entries of ${\bf V}_{\sigma_1,\sigma_2,(1)}^{\xi}$ are given as follows: $$ \aligned &{\bf V}_{{\rm d},\sigma_1,\sigma_2,(1)}^{\xi} = (V^{\xi}_{{\rm d},\sigma_1,\sigma_2,(1)},(\frak r^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}, \frak s^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}),v^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}), \\ &{\bf V}_{{\rm s},\sigma_1,\sigma_2,(1)}^{\xi} = (V^{\xi}_{{\rm s},\sigma_1,\sigma_2,(1)},(\frak r^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}, \frak s^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}),v^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}), \\ &{\bf V}_{{\rm D},\sigma_1,\sigma_2,(1)}^{\xi} = (V^{\xi}_{{\rm D},\sigma_1,\sigma_2,(1)},(\frak r^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)}, \frak s^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)}), \\ &\qquad\qquad\qquad\qquad\qquad\qquad(\frak r^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)}, \frak s^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)}), \\ &\qquad\qquad\qquad\qquad\qquad\qquad v^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)}, v^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)}). \endaligned $$ We also have the following identities: \[ v^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}=v^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)}, \hspace{1cm}v^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)} =v^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)} \] by Definition \ref{defn62888} (4).
We also define: \begin{equation} \aligned \Delta\frak r^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)} &= \frak r^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)} - \frak r^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}, \\ \Delta \frak s^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)} &= \frak s^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)} - \frak s^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}, \endaligned \end{equation} \begin{equation} \aligned \Delta\frak r^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)} &= \frak r^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)} - \frak r^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}, \\ \Delta \frak s^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)} &= \frak s^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)} - \frak s^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}. \endaligned \end{equation} \begin{defn}\label{defn633} We define $u^{\xi,\prime}_{{\rm d},\sigma_1,\sigma_2,(1)} : \Sigma_{\rm d}^+(\sigma_1,\sigma_2) \to X \setminus \mathcal D$ as follows. \begin{enumerate} \item If $\frak z \in \Sigma_{\rm d}^-(\sigma_1,\sigma_2)$ then $$ u^{\xi,\prime}_{{\rm d},\sigma_1,\sigma_2,(1)}(\frak z) = {\rm Exp}(u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(0)}(\frak z),V^{\xi}_{{\rm d},\sigma_1,\sigma_2,(1)}) $$ \item If $\frak z = (r_1,s_1) \in [-5T_1,5T_1]_{r_1} \times S^1_{s_1}$ then $$ u^{\xi,\prime}_{{\rm d},\sigma_1,\sigma_2,(1)}(\frak z) = {\rm Exp}(u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(0)}(r_1,s_1),\diamondsuit) $$ where $$ \aligned \diamondsuit = &\chi_{1,\mathcal B}^{\leftarrow}(r_1,s_1) (V^{\xi}_{{\rm d},\sigma_1,\sigma_2,(1)} - (\frak r^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}) - \hat v^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}) \\ &+\chi_{\mathcal A}^{\rightarrow}(r_1,s_1) (V^{\xi}_{{\rm D,d},\sigma_1,\sigma_2,(1)} - (\frak r^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)}) - \hat v^{\xi}_{\infty,{\rm D},{\rm 
d},\sigma_1,\sigma_2,(1)}) \\ &+ (\frak r^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}) + \hat v^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}. \endaligned $$ Here and in Items (4), (6), (7), we extend $v^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}$ to $\hat v^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}$ in the same way as in Definition \ref{defn6262}. \end{enumerate} \par We define $u^{\xi,\prime}_{{\rm s},\sigma_1,\sigma_2,(1)} : \Sigma_{\rm s}^+(\sigma_1,\sigma_2) \to X \setminus \mathcal D$ as follows. \begin{enumerate} \item[(3)] If $\frak z \in \Sigma_{\rm s}^-(\sigma_1,\sigma_2)$ then $$ u^{\xi,\prime}_{{\rm s},\sigma_1,\sigma_2,(1)}(\frak z) = {\rm Exp}(u^{\xi,\prime\prime}_{{\rm s},\sigma_1,\sigma_2,(0)}(\frak z),V^{\xi}_{{\rm s},\sigma_1,\sigma_2,(1)}) $$ \item[(4)] If $\frak z = (r_2,s_2) \in [-5T_2,5T_2]_{r_2} \times S^1_{s_2}$ then $$ u^{\xi,\prime}_{{\rm s},\sigma_1,\sigma_2,(1)}(\frak z) = {\rm Exp}(u^{\xi,\prime\prime}_{{\rm s}, \sigma_1,\sigma_2,(0)}(r_2,s_2),\clubsuit) $$ where $$ \aligned \clubsuit = &\chi_{2,\mathcal B}^{\leftarrow}(r_2,s_2) (V^{\xi}_{{\rm s},\sigma_1,\sigma_2,(1)} - (\frak r^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}) - \hat v^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}) \\ &+\chi_{\mathcal A}^{\rightarrow}(r_2,s_2) (V^{\xi}_{{\rm D,s},\sigma_1,\sigma_2,(1)} - (\frak r^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)}) - \hat v^{\xi}_{\infty,{\rm D},{\rm s},\sigma_1,\sigma_2,(1)}) \\ &+ (\frak r^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}) + \hat v^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}. \endaligned $$ \end{enumerate} We next define $U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(1)} : \Sigma_{\rm D}^+(\sigma_1,\sigma_2) \to X \setminus \mathcal D$ as follows.
\begin{enumerate} \item[(5)] If $\frak z \in \Sigma_{\rm D}^-(\sigma_1,\sigma_2)$ then: $$ U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(1)}(\frak z) = {\rm Exp}(U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}(\frak z),V^{\xi}_{{\rm D},\sigma_1,\sigma_2,(1)}) $$ \item[(6)] If $\frak z = (r_1,s_1) \in [-5T_1,5T_1]_{r_1} \times S^1_{s_1}$ then: $$ U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(1)}(\frak z) = {\rm Exp}(U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}(r_1,s_1),\heartsuit) $$ where: $$ \aligned \heartsuit = &\chi_{1,\mathcal B}^{\leftarrow}(r_1,s_1) (V^{\xi}_{{\rm d},\sigma_1,\sigma_2,(1)} - (\frak r^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}) - \hat v^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}) \\ &+\chi_{\mathcal A}^{\rightarrow}(r_1,s_1) (V^{\xi}_{{\rm D,d},\sigma_1,\sigma_2,(1)} - (\frak r^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)}) - \hat v^{\xi}_{\infty,{\rm D},{\rm d},\sigma_1,\sigma_2,(1)}) \\ &+ (\frak r^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)}) + \hat v^{\xi}_{\infty,{\rm D},{\rm d},\sigma_1,\sigma_2,(1)}. \endaligned $$ We remark that: \begin{equation}\label{formdiahear} \heartsuit-\diamondsuit = (\Delta\frak r^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}, \Delta \frak s^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}).
\end{equation} \item[(7)] If $\frak z = (r_2,s_2) \in [-5T_2,5T_2]_{r_2} \times S^1_{s_2}$ then: $$ U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(1)}(\frak z) = {\rm Exp}(U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(0)}(r_2,s_2),\spadesuit) $$ where: $$ \aligned \spadesuit = &\chi_{2,\mathcal B}^{\leftarrow}(r_2,s_2) (V^{\xi}_{{\rm s},\sigma_1,\sigma_2,(1)} - (\frak r^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}) - \hat v^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}) \\ &+\chi_{\mathcal A}^{\rightarrow}(r_2,s_2) (V^{\xi}_{{\rm D,s},\sigma_1,\sigma_2,(1)} - (\frak r^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)}) - \hat v^{\xi}_{\infty,{\rm D},{\rm s},\sigma_1,\sigma_2,(1)}) \\ &+ (\frak r^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)},\frak s^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)}) + \hat v^{\xi}_{\infty,{\rm D},{\rm s},\sigma_1,\sigma_2,(1)}. \endaligned $$ We remark that: \begin{equation}\label{speclub} \spadesuit-\clubsuit = (\Delta\frak r^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}, \Delta \frak s^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}). \end{equation} \end{enumerate} Let: \begin{equation} \aligned \rho_{1,(1)}^{\xi,\Delta} &= \exp(-(\Delta\frak r^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)} +\sqrt{-1}\Delta \frak s^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)})) \in \C_* \\ \rho_{2,(1)}^{\xi,\Delta} &= \exp(-(\Delta\frak r^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)} +\sqrt{-1}\Delta \frak s^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)})) \in \C_* \endaligned \end{equation} and \begin{equation} \rho_{i,(1)}^{\xi} = \rho_{i,(0)}^{\xi}\rho_{i,(1)}^{\xi,\Delta} \in \C_*, \end{equation} for $i=1,2$.
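The multiplicative form of the update $\rho_{i,(1)}^{\xi} = \rho_{i,(0)}^{\xi}\rho_{i,(1)}^{\xi,\Delta}$ can be seen as follows. Assuming the cylindrical coordinates are related to the coordinate on $\C_*$ by $z = e^{-(r + \sqrt{-1} s)}$ (the sign convention implicit in the formulas above), a shift of the asymptotic coordinates by $(\Delta\frak r, \Delta\frak s)$ acts on $\C_*$ by multiplication: \[ e^{-((r + \Delta\frak r) + \sqrt{-1}(s + \Delta\frak s))} = e^{-(\Delta\frak r + \sqrt{-1}\Delta\frak s)} \cdot e^{-(r + \sqrt{-1} s)}. \] Hence the discrepancies \eqref{formdiahear} and \eqref{speclub} between the components of the glued map are absorbed by multiplying the gluing parameters by $\rho_{1,(1)}^{\xi,\Delta}$ and $\rho_{2,(1)}^{\xi,\Delta}$, respectively.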
Finally, we define: \begin{equation} \frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(1)}:= (u^{\xi,\prime}_{{\rm d},\sigma_1,\sigma_2,(1)},u^{\xi,\prime}_{{\rm s},\sigma_1,\sigma_2,(1)},U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(1)},\sigma_1,\sigma_2,\rho_{1,(1)}^{\xi}, \rho_{2,(1)}^{\xi}). \end{equation} \end{defn} \begin{lem} The 7-tuple $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(1)}$ is an inconsistent map in the sense of Definition \ref{defn625625}. \end{lem} \begin{proof} This is a consequence of Lemma \ref{lem62611} and (\ref{formdiahear}), (\ref{speclub}). \end{proof} \begin{rem} We remark that if we change ${\bf V}^{\xi}_{{\rm D},\sigma_1,\sigma_2,(1)}$ by an element of $\R \oplus \R$ (the tangent vector generated by the $\C_*$ action), then $V^{\xi}_{{\rm D},\sigma_1,\sigma_2,(1)}$, $(\frak r^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)}, \frak s^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)})$ and $(\frak r^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)}, \frak s^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)})$ change by the same amount. Therefore, $\diamondsuit$ and $\clubsuit$ do not change. On the other hand, $\heartsuit$ and $\spadesuit$ change by the same element in $\R \oplus \R$. This implies that the equivalence class of $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(1)}$ is independent of the choice of representative for ${\bf V}^{\xi}_{{\rm D},\sigma_1,\sigma_2,(1)}$. \end{rem} \par\smallskip \noindent{\bf (Step 1-3) (Error estimate)} \par The inconsistent map $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(1)}$ is our next approximate solution. Lemma \ref{lem636} quantifies to what extent this inconsistent map improves the previous approximate solution $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)}$. \begin{lem}\label{lem636} There is a constant $C$ with the following property: for any positive number $\mu$, there is a constant $\eta$ such that the following holds.
If $\sigma_1,\sigma_2$ are smaller than $\eta$, then there exists $\frak e_{\sigma_1,\sigma_2,(1)}^{\xi}= (\frak e_{{\rm d},\sigma_1,\sigma_2,(1)}^{\xi}, \frak e_{{\rm s},\sigma_1,\sigma_2,(1)}^{\xi}, \frak e_{{\rm D},\sigma_1,\sigma_2,(1)}^{\xi})$ with $\frak e_{{\rm d},\sigma_1,\sigma_2,(1)}^{\xi} \in E_{\rm d}$, $\frak e_{{\rm s},\sigma_1,\sigma_2,(1)}^{\xi} \in E_{\rm s}$, $\frak e_{{\rm D},\sigma_1,\sigma_2,(1)}^{\xi} \in E_{\rm D}$ such that the following holds: \begin{enumerate} \item \begin{equation} \Vert \overline \partial\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(1)} - \frak e_{\sigma_1,\sigma_2,(1)}^{\xi} \Vert_{L^2_{m,\delta}}\le\mu \Vert \overline \partial\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)} - \frak e_{(0)}^{\xi}\Vert_{L^2_{m,\delta}} \end{equation} where $\frak e_{(0)}^{\xi}= (\frak e_{{\rm d},(0)}^{\xi}, \frak e_{{\rm s},(0)}^{\xi}, \frak e_{{\rm D},(0)}^{\xi})$ is as in \eqref{form6662}. \item $$\Vert \frak e_{\sigma_1,\sigma_2,(1)}^{\xi} - \frak e_{(0)}^{\xi}\Vert \le \mu C.$$ The square of the left hand side, by definition, is the sum of the squares of the factors associated to ${\rm d}$, ${\rm s}$, ${\rm D}$.\footnote{ This estimate is provided for the first step of Newton's iteration. In the $\frak i$-th step, a similar estimate appears where $\mu C$ is replaced by $\mu^{\frak i}C$. It is important that $C$ is independent of $\frak i$.} \end{enumerate} \end{lem} We define: \begin{equation} \aligned \frak e_{\rm{d},\sigma_1,\sigma_2,(1)}^{\xi} &= \frak e_{\rm{d},\sigma_1,\sigma_2,(0)}^{\xi} +\Delta \frak e_{\rm{d},\sigma_1,\sigma_2,(1)}^{\xi} \\ \frak e_{\rm{s},\sigma_1,\sigma_2,(1)}^{\xi} &= \frak e_{\rm{s},\sigma_1,\sigma_2,(0)}^{\xi}+\Delta \frak e_{\rm{s},\sigma_1,\sigma_2,(1)}^{\xi} \\ \frak e_{\rm{D},\sigma_1,\sigma_2,(1)}^{\xi} &=\frak e_{\rm{D},\sigma_1,\sigma_2,(0)}^{\xi}+\Delta \frak e_{\rm{D},\sigma_1,\sigma_2,(1)}^{\xi}. \endaligned \end{equation} The proof of the estimates in the lemma is based on Lemma \ref{lem63631}.
That is to say, we use the estimate \eqref{estimate} of Lemma \ref{lem63631} and the fact that ${\bf V}_{\sigma_1,\sigma_2,(1)}^{\xi}$ is given by solving the linearized equation. The details of this estimate are similar to the proof of \cite[Proposition 5.17]{foooexp} and are omitted. In particular, to estimate the effect of the bump function appearing in Definition \ref{defn633} (2), (4), (6) and (7), we use the `drop of the weight' argument, which is explained in detail in \cite[right above Remark 5.21]{foooexp}. Lemma \ref{lem6399} below concerns the estimate of the difference between $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(1)}$ and $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)}$. \begin{lem}\label{lem6399} Let $\sigma_1$, $\sigma_2$ be small enough such that Lemma \ref{lem63631} holds. There is a fixed constant\footnote{It is important that we can take the same constant for all the steps of the inductive construction of Newton's iteration. The dependence of these constants on various choices is studied in detail in \cite{foooexp}. So we do not repeat it here.} $C$, independent of $\sigma_1$ and $\sigma_2$, such that: $$ d_{W^{2}_{m+1,\delta}}(\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(1)},\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)})\le C \Vert \overline \partial\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)} - \frak e_{\sigma_1,\sigma_2,(0)}^{\xi}\Vert_{L^2_{m,\delta}}. $$ \end{lem} \begin{proof} This is a consequence of Lemma \ref{lem63631} and the definitions.
\end{proof} \par\smallskip \noindent{\bf (Step 1-4) (Separating error terms into three parts)} \par We put \begin{equation} \aligned {\rm Err}^{\xi}_{{\rm d},\sigma_1,\sigma_2,(1)} &= \chi_{1,\mathcal X}^{\leftarrow} (\overline \partial u^{\xi,\prime}_{{\rm d},\sigma_1,\sigma_2,(1)}- \frak e_{{\rm d},\sigma_1,\sigma_2,(1)}^{\xi}),\\ {\rm Err}^{\xi}_{{\rm s},\sigma_1,\sigma_2,(1)} &= \chi_{2,\mathcal X}^{\leftarrow} (\overline \partial u^{\xi,\prime}_{{\rm s},\sigma_1,\sigma_2,(1)} - \frak e_{{\rm s},\sigma_1,\sigma_2,(1)}^{\xi}), \\ {\rm Err}^{\xi}_{{\rm D},\sigma_1,\sigma_2,(1)} &= \chi_{\mathcal X}^{\rightarrow}(\overline \partial U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(1)} - \frak e_{{\rm D},\sigma_1,\sigma_2,(1)}^{\xi}). \endaligned \end{equation} \par\smallskip \noindent{\bf (Step 2-1) (Approximate solution for linearization)} \par We will next define \begin{equation}\label{new67777} \frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(1)} = (u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(1)}, u^{\xi,\prime\prime}_{{\rm s},\sigma_1,\sigma_2,(1)}, U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(1)}) \end{equation} satisfying Conditions \ref{conds626}, \ref{conds627}. This step is essentially the same as (Step 1-1). We mention a few points where the two steps slightly differ. \par Let $v^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)} \in T_{p^{\xi}_{{\rm d},\sigma_1,\sigma_2,(0)}} \mathcal D$ and $v^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}\in T_{p^{\xi}_{{\rm s},\sigma_1,\sigma_2,(0)}} \mathcal D$ be the elements appearing at the beginning of (Step 1-2). We put \begin{equation} \aligned p^{\xi}_{{\rm d},\sigma_1,\sigma_2,(1)} &= {\rm Exp}(p^{\xi}_{{\rm d},\sigma_1,\sigma_2,(0)},v^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}) \\ p^{\xi}_{{\rm s},\sigma_1,\sigma_2,(1)} &= {\rm Exp}(p^{\xi}_{{\rm s},\sigma_1,\sigma_2,(0)},v^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}).
\endaligned \end{equation} We next define $$ \aligned c^{\xi}_{{\rm d},(1)} &= c^{\xi}_{{\rm d}} \exp(-(\frak r^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)}+ \sqrt{-1} \frak s^{\xi}_{\infty,{\rm d},\sigma_1,\sigma_2,(1)})) \\ c^{\xi}_{{\rm D,d},(1)} &= c^{\xi}_{{\rm D,d}} \exp(-(\frak r^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)}+ \sqrt{-1} \frak s^{\xi}_{\infty,{\rm D,d},\sigma_1,\sigma_2,(1)})) \\ c^{\xi}_{{\rm s},(1)} &= c^{\xi}_{{\rm s}} \exp(-(\frak r^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)}+ \sqrt{-1} \frak s^{\xi}_{\infty,{\rm s},\sigma_1,\sigma_2,(1)})) \\ c^{\xi}_{{\rm D,s},(1)} &= c^{\xi}_{{\rm D,s}} \exp(-(\frak r^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)}+ \sqrt{-1} \frak s^{\xi}_{\infty,{\rm D,s},\sigma_1,\sigma_2,(1)})). \endaligned $$ Then formulas similar to (\ref{shiki655}), (\ref{shiki656}), (\ref{shiki655rev}) and (\ref{shiki656rev}) hold. \par In (\ref{new682new}),(\ref{new683new}),(\ref{new685new}),(\ref{new686new}), we replace $c^{\xi}_{{\rm d}}$ with $c^{\xi}_{{\rm d},(1)}$ and so on. In these formulas, we also replace $(0)$ with $(1)$. We thus define $u^{\xi,\prime\prime}_{{\rm d},\sigma_1,\sigma_2,(1)}$, $u^{\xi,\prime\prime}_{{\rm s},\sigma_1,\sigma_2,(1)}$, $U^{\xi,\prime\prime}_{{\rm D},\sigma_1,\sigma_2,(1)}$ and $\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(1)}$. Then $({\rm Err}^{\xi}_{{\rm d},\sigma_1,\sigma_2,(1)}, {\rm Err}^{\xi}_{{\rm s},\sigma_1,\sigma_2,(1)}, {\rm Err}^{\xi}_{{\rm D},\sigma_1,\sigma_2,(1)})$ determines an element of the space $L^{2}_{k,\delta}(\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(1)};TX\otimes \Lambda^{0,1})$. (Lemma \ref{lem63030}.) We can then formulate an analogue of Lemma \ref{lem63631} where $\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(0)}$ is replaced by $\frak u^{\xi,\prime\prime}_{\sigma_1,\sigma_2,(1)}$. 
Using this lemma, we can obtain ${\rm V}_{\sigma_1,\sigma_2,(2)}^{\xi}$, $\Delta\frak e_{{\rm d},\sigma_1,\sigma_2,(2)}^{\xi}$, $\Delta\frak e_{{\rm s},\sigma_1,\sigma_2,(2)}^{\xi}$, $\Delta\frak e_{{\rm D},\sigma_1,\sigma_2,(2)}^{\xi}$. The counterpart of the estimate in \eqref{estimate} can be used to give appropriate bounds for these four terms. This completes (Step 2-1). (Step 2-2) and (Step 2-3) can be carried out in the same way as in (Step 1-2) and (Step 1-3). More generally, we can perform (Step $\frak i$-1), (Step $\frak i$-2) and (Step $\frak i$-3) in the case that $\vert \sigma_1\vert ,\vert \sigma_2\vert $ are smaller than a positive number $\epsilon_0$ and obtain a sequence of inconsistent maps: $$ \frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak i)}: = (u^{\xi,\prime}_{{\rm d},\sigma_1,\sigma_2,(\frak i)},u^{\xi,\prime}_{{\rm s},\sigma_1,\sigma_2,(\frak i)},U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(\frak i)},\sigma_1,\sigma_2,\rho_{1,(\frak i)}^{\xi}, \rho_{2,(\frak i)}^{\xi}), $$ and a triple: \[ \frak e_{\sigma_1,\sigma_2,(\frak i)}^{\xi}= (\frak e_{{\rm d},\sigma_1,\sigma_2,(\frak i)}^{\xi}, \frak e_{{\rm s},\sigma_1,\sigma_2, (\frak i)}^{\xi}, \frak e_{{\rm D},\sigma_1,\sigma_2,(\frak i)}^{\xi})\in E_{\rm d}\oplus E_{\rm s} \oplus E_{\rm D} \] such that \begin{equation}\label{form697} \aligned \Vert \overline \partial\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak i + 1)} - \frak e_{\sigma_1,\sigma_2,(\frak i + 1)}^{\xi} \Vert^2_{L^2_{m,\delta}} &\le \mu \Vert \overline \partial\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak i)} - \frak e_{\sigma_1,\sigma_2,(\frak i)}^{\xi}\Vert^2_{L^2_{m,\delta}} \\ &\le \mu^{\frak i+1} \Vert \overline \partial\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)} - \frak e_{\sigma_1,\sigma_2,(0)}^{\xi}\Vert^2_{L^2_{m,\delta}} \endaligned \end{equation} and \begin{equation}\label{form698} \Vert \frak e_{\sigma_1,\sigma_2,(\frak i+1)}^{\xi} - \frak e_{\sigma_1,\sigma_2,(\frak i)}^{\xi} \Vert \le \mu^{\frak i} C. 
\end{equation} Moreover, we have: \begin{equation}\label{form699} d_{W^{2}_{m+1,\delta}}(\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak i+1)},\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak i)}) \le C \Vert \overline \partial\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak i)} - \frak e_{\sigma_1,\sigma_2,(\frak i)}^{\xi}\Vert_{L^2_{m,\delta}}. \end{equation} We remark that the constants $\epsilon_0$ and $C$ may be taken to be independent of $\frak i$. However, these constants might depend on $m$, the exponent of the weighted Sobolev space $L^2_{m,\delta}$. The estimates in (\ref{form697}) and (\ref{form699}) imply that the sequence $\{\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak i)}\}_\frak i$ converges in $W^{2}_{m+1,\delta}$. We denote the limit by: \begin{equation}\label{constructedfamilyof} \aligned \frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\infty)}: = (u^{\xi,\prime}_{{\rm d},\sigma_1,\sigma_2,(\infty)}, &u^{\xi,\prime}_{{\rm s},\sigma_1,\sigma_2,(\infty)},U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(\infty)},\\ &\sigma_1,\sigma_2,\rho_{1,(\infty)}^{\xi}, \rho_{2,(\infty)}^{\xi}). \endaligned \end{equation} The estimate in (\ref{form698}) implies that the elements $\frak e_{{\rm d},\sigma_1,\sigma_2,(\frak i)}^{\xi} \in E_{\rm d}$, $\frak e_{{\rm s},\sigma_1,\sigma_2,(\frak i)}^{\xi} \in E_{\rm s}$, $\frak e_{{\rm D},\sigma_1,\sigma_2,(\frak i)}^{\xi} \in E_{\rm D}$ converge as $\frak i$ goes to infinity. We denote the limit by: $$ \frak e_{\sigma_1,\sigma_2,(\infty)}^{\xi} = (\frak e_{{\rm d},\sigma_1,\sigma_2,(\infty)}^{\xi}, \frak e_{{\rm s},\sigma_1,\sigma_2,(\infty)}^{\xi}, \frak e_{{\rm D},\sigma_1,\sigma_2,(\infty)}^{\xi}). $$ As a consequence of (\ref{form697}), we have: $$ \Vert \overline\partial\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\infty)} - \frak e_{\sigma_1,\sigma_2,(\infty)}^{\xi}\Vert_{L^2_{m,\delta}} = 0. $$ In other words, $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\infty)}$ satisfies (\ref{eq630}).
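\begin{rem}
The convergence statement above is a standard consequence of (\ref{form697}) and (\ref{form699}); we spell out the elementary estimate behind it. Put $E_0 = \Vert \overline \partial\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)} - \frak e_{\sigma_1,\sigma_2,(0)}^{\xi}\Vert_{L^2_{m,\delta}}$. Taking square roots in (\ref{form697}) and combining the result with (\ref{form699}), for all $\frak i$, $\frak j$ we obtain:
$$
\aligned
d_{W^{2}_{m+1,\delta}}(\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak i+\frak j)},\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak i)})
&\le \sum_{\frak k=\frak i}^{\frak i+\frak j-1} d_{W^{2}_{m+1,\delta}}(\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak k+1)},\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak k)}) \\
&\le C E_0 \sum_{\frak k=\frak i}^{\infty} \mu^{\frak k/2} = \frac{C\mu^{\frak i/2}}{1-\sqrt{\mu}}\,E_0.
\endaligned
$$
Since $0<\mu<1$, the sequence $\{\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\frak i)}\}_{\frak i}$ is Cauchy in $W^{2}_{m+1,\delta}$. Moreover, taking $\frak i = 0$ and letting $\frak j \to \infty$ shows that the limit lies within distance $C(1-\sqrt{\mu})^{-1}E_0$ of $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(0)}$.
\end{rem}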
Thus $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\infty)}$ satisfies the requirements of an inconsistent solution (Definition \ref{defn615inconsis}) except possibly (\ref{eq631}) (the transversal constraint). \subsection{Completion of the Proof} \label{subsub:glucomple} We are now in a position to complete the proof of Proposition \ref{prop617}. For any $\xi \in \mathcal U^+_{\rm d} \,\,{}_{{\rm ev}_{\rm d}}\times_{{\rm ev}^+_{\rm D,d}} \mathcal U^+_{\rm D} \,\,{}_{{\rm ev}_{\rm D,s}}\times_{{\rm ev}_{\rm s}}\mathcal U^+_{\rm s}$ and sufficiently small $\sigma_1,\sigma_2 \in \C$, we defined an inconsistent map $\frak u^{\xi,\prime}_{\sigma_1,\sigma_2,(\infty)}$ in (\ref{constructedfamilyof}). For each $(\sigma_1,\sigma_2)$ we define $$ {\rm EVw}_{\sigma_1,\sigma_2} : \mathcal U^+_{\rm d} \,\,{}_{{\rm ev}_{\rm d}}\times_{{\rm ev}^+_{\rm D,d}} \mathcal U^+_{\rm D} \,\,{}_{{\rm ev}_{\rm D,s}}\times_{{\rm ev}_{\rm s}}\mathcal U^+_{\rm s} \to \mathcal D^2 \times X $$ by: \begin{align*} {\rm EVw}_{\sigma_1,\sigma_2}&(\xi)=\\ &((\pi \circ U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(\infty)})(w_{{\rm D},1}), (\pi \circ U^{\xi,\prime}_{{\rm D},\sigma_1,\sigma_2,(\infty)})(w_{{\rm D},2}), u^{\xi,\prime}_{{\rm s},\sigma_1,\sigma_2,(\infty)}(w_{{\rm s}})). \end{align*} Here $w_{{\rm D},1}$, $w_{{\rm D},2}$, $w_{{\rm s}}$ are as in Condition \ref{conds610}. \begin{lem} ${\rm EVw}_{\sigma_1,\sigma_2}$ is transversal to $\mathcal N_{\rm D,1} \times \mathcal N_{\rm D,2} \times \mathcal N_{\rm s}$ for sufficiently small $\sigma_1,\sigma_2$. \end{lem} \begin{proof} The map ${\rm EVw}_{\sigma_1,\sigma_2}$ converges to ${\rm EVw}_{0,0}$ in the $C^1$ sense as $\sigma_1,\sigma_2 \to 0$. Moreover, ${\rm EVw}_{0,0}$ is transversal to $\mathcal N_{\rm D,1} \times \mathcal N_{\rm D,2} \times \mathcal N_{\rm s}$ by assumption (Definition \ref{defn6969}). The lemma follows from these observations.
\end{proof} By definition $$ \bigcup_{\sigma_1,\sigma_2} ({\rm EVw}_{\sigma_1,\sigma_2})^{-1}(\mathcal N_{\rm D,1} \times \mathcal N_{\rm D,2} \times \mathcal N_{\rm s}) \times \{(\sigma_1,\sigma_2)\} $$ can be identified with $\mathcal U$, the set of inconsistent solutions. We also have: $$ ({\rm EVw}_{0,0})^{-1}(\mathcal N_{\rm D,1} \times \mathcal N_{\rm D,2} \times \mathcal N_{\rm s}) \cong \mathcal U_{\rm d} \,\,{}_{{\rm ev}_{\rm d}}\times_{{\rm ev}_{\rm D,d}} \mathcal U_{\rm D} \,\,{}_{{\rm ev}_{\rm D,s}}\times_{{\rm ev}_{\rm s}}\mathcal U_{\rm s} $$ by definition. Proposition \ref{prop617} is a consequence of these facts. \par\smallskip Once Proposition \ref{prop617} is proved, the proof of Proposition \ref{prop618} is similar to the proof of \cite[Theorem 6.4]{foooexp}. We have written the proof of Proposition \ref{prop617} so that the construction of the inconsistent solutions is parallel to the gluing construction in \cite[Section 5]{foooexp}. Therefore, the arguments of \cite[Section 6]{foooexp} can be applied with almost no change to prove Proposition \ref{prop618}. This completes the construction of the Kuranishi chart at the point $[\Sigma,z_0,u]$. \section{Kuranishi Charts: the General Case} \label{sub:kuraconst} Up to this point, we have constructed a Kuranishi chart of the space $\mathcal M_1^{\rm RGW}(L;\beta)$ at the particular point $[\Sigma,z_0,u]$ described in Section \ref{subsec:gluing1}. In this section, we explain how this construction generalizes to an arbitrary point $\frak u$ of $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$. There is a DD-ribbon tree $\mathcal R = (R,c,\alpha,m,\lambda)$ such that $\frak u$ belongs to ${\mathcal M}^{0}(\mathcal R)\subset \mathcal M_{k+1}^{\rm RGW}(L;\beta)$. (See \cite[Subsection 3.6]{DF1}.) Let $((\Sigma_{v},\vec z_v,u_{v});v \in C^{\rm int}_0(\mathcal R))$ be a representative for $\frak u$.
In the case that $c(v)={\rm D}$, the image of $u_v$ is contained in $\mathcal D$, and we are given a meromorphic section $U_v$ of $u_v^*\mathcal N_{\mathcal D}(X)$. Recall that for each $i$, the set of all sections $\{U_v\}_{\lambda(v)=i}$ is well-defined up to an action of $\C_*$ \cite[Formula (3.38)]{DF1}. We first associate a combinatorial object to $\frak u$, which is called a {\it very detailed DD-ribbon tree} and is a refinement of the notion of detailed DD-ribbon trees defined in \cite[Subsection 3.6]{DF1}. Let $\hat R$ be the detailed tree associated to $\mathcal R$. Recall that each interior vertex $v$ of $\hat R$ corresponds to a possibly nodal Riemann surface $\Sigma_v$. (See, for example, \cite[Figure 9]{DF1}.) We refine the detailed DD-ribbon tree $\hat R$ further to the very detailed DD-ribbon tree $\check R$ so that each vertex of $\check R$ corresponds to an irreducible component of $\Sigma$. In more detail, for each $v \in C^{\rm int}_0(\hat R)$, we form a tree $\mathcal Q_v$ such that: \begin{enumerate} \item Each vertex corresponds to either an irreducible component of $\Sigma_{v}$ or a marked point on it. The latter corresponds to an edge of $\hat R$ which contains $v$. We call any such vertex an exterior vertex. \item There are two types of edges in $\mathcal Q_v$. An edge of the first type joins two vertices such that the corresponding irreducible components intersect. An edge of the second type is called an exterior edge and connects a vertex corresponding to a marked point to the vertex corresponding to the irreducible component containing the marked point. \end{enumerate} We replace each interior vertex $v$ of the detailed tree $\hat R$ with $\mathcal Q_v$ and identify the exterior edges of $\mathcal Q_v$ with the corresponding edges of $\hat R$ containing $v$. We thus obtain a tree $\check R$, called the very detailed DD-ribbon tree associated to $\frak u$, or the very detailed tree associated to $\frak u$ for short.
Figure \ref{surfaceforverydetail} sketches an element $\frak u$ of our moduli space. The associated detailed DD-ribbon tree $\hat R$ and the very detailed DD-ribbon tree $\check R$ are given in Figures \ref{treRRRR} and \ref{verydetailedtree}. \begin{figure}[h] \includegraphics[scale=0.6]{surfaceforverydetail} \caption{An element of the moduli space $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$} \label{surfaceforverydetail} \end{figure} \begin{figure}[h] \includegraphics[scale=0.6]{treRRRR} \caption{The detailed DD-ribbon tree $\hat R$} \label{treRRRR} \end{figure} \begin{figure}[h] \includegraphics[scale=0.6]{verydetailedtree} \caption{The very detailed DD-ribbon tree $\check R$} \label{verydetailedtree} \end{figure} We say an edge of $\check R$ is a {\it fine edge} if it does not correspond to an edge of the detailed DD-ribbon tree $\hat R$. In Figure \ref{verydetailedtree}, the fine edges are illustrated by narrow lines and the level $0$ edges by dotted lines. An edge of $\check R$ which is not fine is called a {\it thick edge}. We denote by $C^{\rm int}_{\rm fi}(\check R)$ and $C^{\rm int}_{\rm th}(\check R)$ the sets of all fine and thick edges of $\check R$, respectively. The level of a vertex of $\check R$ induced by a vertex of $\mathcal Q_v$ is defined to be $\lambda(v)$. We do not associate a multiplicity number to a fine edge. The homology class of a vertex is the homology class of the restriction of the map $u$ to the corresponding irreducible component. The color of an interior vertex $v$ of $\check R$, denoted by $c(v)$, is ${\rm D}$ if its level is positive. If this vertex has level $0$, then its color is either ${\rm s}$ or ${\rm d}$ depending on whether $\Sigma_v$ is a sphere or a disk. The notions of level shrinking and level $0$ edge shrinking for very detailed DD-ribbon trees can be defined as in the case of detailed DD-ribbon trees. We define a {\it fine edge shrinking} as follows. We remove a fine edge $e$ and identify the two vertices connected to each other by $e$.
For two very detailed DD-ribbon trees $\check R$, $\check R'$, we say $\check R' \le \check R$ if $\check R$ is obtained from $\check R'$ by a sequence of level shrinkings, level $0$ edge shrinkings and fine edge shrinkings. Note that there might be a fine edge joining two vertices of level $0$. We do {\it not} call any such edge a level $0$ edge. The level $0$ edges are limited to those joining vertices of color ${\rm d}$. We can stratify the moduli space $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ using very detailed DD-ribbon trees $\check R$. Namely, we define $\mathcal M^{\rm RGW}_{k+1}(L;\beta)(\check R)$ to be the subset of $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ consisting of elements $\frak u$ whose associated very detailed DD-ribbon tree is $\check R$. If $\check R' \le \check R$, then the closure of $\mathcal M^{\rm RGW}_{k+1}(L;\beta)(\check R)$ contains $\mathcal M^{\rm RGW}_{k+1}(L;\beta)(\check R')$. Let $\frak u=((\Sigma_{v},\vec z_v,u_{v});v \in C^{\rm int}_0(\check R))$ be an element of $\mathcal M^{\rm RGW}_{k+1}(L;\beta)(\check R)$ as above. For an interior vertex $v$ of $\check R$, the triple $\frak u_v = (\Sigma_{v},\vec z_v,u_{v})$ is stable by definition. If $(\Sigma_{v},\vec z_v)$ is not stable, then we may add auxiliary interior marked points $\vec w_{v}$ so that $(\Sigma_{v},\vec z_v\cup \vec w_v)$ is stable. We assume that $\vec w_v$ is chosen such that the following symmetry assumption holds. Let $\Gamma_{\frak u}$ denote the group of automorphisms of $\frak u$. Given $\gamma \in \Gamma_{\frak u}$, for each interior vertex $v$, there exists a vertex $\gamma(v)$ and a bi-holomorphic map $\gamma_v: (\Sigma_{v},\vec z_v) \to (\Sigma_{\gamma(v)},\vec z_{\gamma(v)})$ such that $u_{\gamma(v)}\circ \gamma_v = u_v$. We assume $\vec w_v$ is mapped to $\vec w_{\gamma(v)}$ via $\gamma_v$. Note that the case $\gamma(v) = v$ is also included. For each member $w_{v,i}$ of $\vec w_v$, we take a codimension 2 submanifold $\mathcal N_{v,i}$ of $X$ (resp.
$\mathcal D$) if $c(v) = {\rm d}$ or ${\rm s}$ (resp. if $c(v) = {\rm D}$). We assume that the same condition as Condition \ref{conds610} holds for these choices of {\it transversals}. If $\gamma \in \Gamma_{\frak u}$ and $\gamma_v(w_{v,i})=w_{v',i'}$, then we require that $\mathcal N_{v,i} = \mathcal N_{v',i'}$. In order to define Cauchy-Riemann operators, we introduce function spaces similar to those of Section \ref{sub:Fred}. For an interior vertex $v$ of $\check R$, if the color of $v$ is ${\rm d}$, ${\rm s}$ or ${\rm D}$, the Hilbert space $W^2_{m,\delta}(\frak u_v;T)$ is respectively defined as in Definition \ref{defn6262}, Definition \ref{defn64444} or Definition \ref{defn66666}. Here $T$ is a placeholder for the pull-back of the tangent bundle of $X$ (if $c(v)={\rm d}$, ${\rm s}$) or $\mathcal N_{\mathcal D}X \setminus \mathcal D$ (if $c(v)={\rm D}$). Similarly, for each $v$, we define the weighted Sobolev spaces $L^2_{m,\delta}(\frak u_v;T \otimes \Lambda^{0,1})$ as in Section \ref{sub:Fred}. \begin{rem} In the case that $c(v) = {\rm d}$, the space $\Sigma_{v}$ may have boundary nodes. In that case we take cylindrical coordinates on a neighborhood of each boundary node and use a cylindrical metric on this neighborhood. The approach here is very similar to the case of interior nodes, which is discussed in Section \ref{sub:Fred}. See \cite[Section 3]{foooexp} for the case of boundary nodes in the context of the stable map compactification. We also need to fix cylindrical coordinates for nodes corresponding to fine edges. In this case the target of the corresponding cylindrical end is contained in a compact subset of $X \setminus \mathcal D$ (for fine edges connecting level $0$ vertices) or $\mathcal N_{\mathcal D}X \setminus \mathcal D$ (for fine edges connecting positive level vertices). In the first case we use the metric $g$ given in Section \ref{sub:Fred}.
In the latter case, we use the metric on $\mathcal N_{\mathcal D}X \setminus \mathcal D$ with the form given in \eqref{g-cylinder-end}. \end{rem} Let $W^{2,\sim}_{m,\delta}(\frak u;T)$ be the subspace of the direct sum: \begin{equation}\label{form6101} \bigoplus_{v \in C^{\rm int}_0(\check R)} W^2_{m,\delta}(\frak u_v;T) \end{equation} consisting of elements $(V_{v};v \in C^{\rm int}_0(\check R))$ with the following properties. Let $e$ be an interior edge of $\check R$ joining $v_1$ and $v_2$. The source curve of the element $\frak u_{v_i}$ contains a nodal point $z_{v_i,e}$ corresponding to the edge $e$. Suppose $e$ is neither a level $0$ edge nor a fine edge. By definition $V_{v_i}$ has an asymptotic value $(\frak r_{v_i,e},\frak s_{v_i,e},v_{v_i,e})\in \R\oplus \R \oplus T_{p_{v_i,e}}\mathcal D$ where $p_{v_i,e}$ is the point of $\mathcal D$ such that $u_{v_i}(z_{v_i,e}) = p_{v_i,e}$ and $\R\oplus \R$ corresponds to the tangent space of the partial $\C_*$-action. (See Definition \ref{defn6262}.) We require: \[ v_{v_1,e} = v_{v_2,e}. \] This condition is the counterpart of part (4) of Definition \ref{defn62888}. In the case of a level $0$ edge (resp. a fine edge), the corresponding asymptotic values are tangent vectors of $L$ (resp. tangent vectors of $X$ or $\mathcal N_{\mathcal D}X \setminus \mathcal D$) and we require that these two tangent vectors agree with each other. (See \cite[Definition 3.4]{foooexp}.) Analogous to Definition \ref{defn62888}, there is an action of $(\R\oplus \R)^{|\lambda|}$ on $W^{2,\sim}_{m,\delta}(\frak u;T)$ with $|\lambda|$ being the number of levels of $\check R$. We define $W^{2}_{m,\delta}(\frak u;T)$ to be the quotient space with respect to this action. We also write $L^2_{m,\delta}(\frak u;T\otimes \Lambda^{0,1})$ for the direct sum of $L^2_{m,\delta}(\frak u_v;T \otimes \Lambda^{0,1})$ for $v \in C^{\rm int}_0(\check R)$.
The linearization of the Cauchy-Riemann equation associated to each vertex $v$ of the very detailed tree $\check R$ defines the linear operator: \[ D_{\frak u_v}\overline \partial:W^2_{m+1,\delta}(\frak u_v;T) \to L^2_{m,\delta}(\frak u_v;T \otimes \Lambda^{0,1}). \] The direct sum of these operators determines a Fredholm operator: \begin{equation}\label{form6103} D_{\frak u}\overline\partial : W^2_{m+1,\delta}(\frak u;T) \to L^2_{m,\delta}(\frak u;T \otimes \Lambda^{0,1}). \end{equation} In the case that this operator is not surjective, we need to introduce obstruction spaces as in Section \ref{sub:Obst}. For each interior vertex $v$, we fix a vector space $E_v$ such that the following conditions are satisfied: \begin{conds}\label{cond643} \begin{enumerate} \item $E_{v}$ is a finite dimensional subspace of $L^2_{m,\delta}(\Sigma_v\setminus \vec z_v;u_v^*T(X\setminus \mathcal D) \otimes \Lambda^{0,1})$ if $c(v)={\rm d}$ or ${\rm s}$, and is a finite dimensional subspace of $L^2_{m}(\Sigma_v;u_v^*T\mathcal D \otimes \Lambda^{0,1})$ if $c(v)={\rm D}$. Moreover, $E_{v}$ consists of smooth sections. Using the decomposition in \eqref{decom-tan-bdle}, we can also regard $E_v$ as a subspace of $L^2_{m,\delta}(\frak u_v;T \otimes \Lambda^{0,1})$. \item Elements of $E_v$ have compact support away from the nodal points and the boundary. \item If $u_{v}$ is a constant map, then $E_v$ is $0$. \item If $\gamma \in \Gamma_{\frak u}$, then $$ (\gamma_v)_* E_v = E_{\gamma(v)}. $$ Here $(\gamma_v)_* : L^2_{m,\delta}(\frak u_v;T \otimes \Lambda^{0,1}) \to L^2_{m,\delta}(\frak u_{\gamma(v)};T \otimes \Lambda^{0,1})$ is the map induced by $\gamma_v : \Sigma_v \to \Sigma_{\gamma(v)}$. (Recall that $u_{\gamma(v)} \circ \gamma_v = u_v$.) \item The operator $D_{\frak u}\overline\partial$ in \eqref{form6103} is transversal to: \begin{equation} E_0 = \bigoplus_{v \in C^{\rm int}_0(\check R)} E_v \subset L^2_{m,\delta}(\frak u;T \otimes \Lambda^{0,1}).
\end{equation} \end{enumerate} \end{conds} It is straightforward to see that there are obstruction spaces satisfying Condition \ref{cond643}. Since each operator $D_{\frak u_v}\overline \partial$ is Fredholm, we can fix a space $E_v$ satisfying part (1) such that the transversality required in part (5) holds. In the case that $u_v$ is constant, we can pick $E_v$ to be the trivial vector space because $\Sigma_v$ has genus $0$. (It is either a disk or a sphere.) Unique continuation implies that we can assume that the supports of the elements of $E_v$ are contained in a compact subset of $\Sigma_v$ away from the nodal points and boundary. By taking direct sums over the action of $\Gamma_{\frak u}$ if necessary, we may also assume that (4) holds. Using $E_0$, we can define a thickened moduli space which gives a Kuranishi neighborhood of $\frak u$ in the stratum of $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ which contains $\frak u$. (See Definition \ref{defn64545}.) In the upcoming sections, we give a systematic construction of the obstruction spaces $E_0$ which satisfy further compatibility assumptions. We next discuss the process of gluing the irreducible components of $\frak u$. We first need to explain how the deformation of source curves is parametrized. The mathematical content here is classical and we follow the approaches in \cite[Section 8]{foooexp} and \cite[Section 3]{fooo:const1}. For an interior vertex $v$, we consider $(\Sigma_v,\vec z_v \cup \vec w_v)$. This is a disk or a sphere with marked points, which is stable. We may regard it as an element of the moduli space $\mathcal M_{v}^{\rm source}$. The space $\mathcal M_{v}^{\rm source}$ is metrizable, and we fix a metric on it for later use.
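\begin{rem}
We record one standard way to carry out the enlargement `taking direct sums over the action of $\Gamma_{\frak u}$' mentioned above. This is only a sketch of one possible choice, not necessarily the precise construction used in \cite{foooexp}. Suppose the spaces $E^{\circ}_v$ satisfy parts (1), (2), (3) and (5) of Condition \ref{cond643}, and set
$$
E_v = \sum_{\gamma \in \Gamma_{\frak u}} (\gamma_{\gamma^{-1}(v)})_* E^{\circ}_{\gamma^{-1}(v)} \subset L^2_{m,\delta}(\frak u_v;T \otimes \Lambda^{0,1}).
$$
For $\delta \in \Gamma_{\frak u}$ we have $\delta_v \circ \gamma_{\gamma^{-1}(v)} = (\delta\gamma)_{(\delta\gamma)^{-1}(\delta(v))}$, so re-indexing the sum by $\gamma' = \delta\gamma$ gives
$$
(\delta_v)_* E_v = \sum_{\gamma' \in \Gamma_{\frak u}} (\gamma'_{(\gamma')^{-1}(\delta(v))})_* E^{\circ}_{(\gamma')^{-1}(\delta(v))} = E_{\delta(v)},
$$
which is part (4). Parts (1), (2) and (3) are clearly preserved under pushforward by the bi-holomorphic maps $\gamma_v$, and part (5) persists since $E_v \supset E^{\circ}_v$.
\end{rem}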
The moduli space $\mathcal M_{v}^{\rm source}$ comes with a universal family: \begin{equation}\label{sourceunifami} \pi : \mathcal {C}_{v}^{\rm source}\to \mathcal M_{v}^{\rm source} \end{equation} and $\#\vec z_v + \#\vec w_v$ sections which are in correspondence with the marked points. (See, for example, \cite[Section 2]{fooo:const1}.) For $\frak x \in \mathcal M_{v}^{\rm source}$, the fiber $\pi^{-1}(\frak x)$ together with the values of the sections at $\frak x$ determines a representative for $\frak x$, which we denote by $(\Sigma_{\frak x,v},\vec z_{\frak x,v} \cup \vec w_{\frak x,v})$. Since $\Sigma_v$ has no singularity, \eqref{sourceunifami} is a $C^\infty$-fiber bundle near the point $(\Sigma_v,\vec z_v \cup \vec w_v)$. We fix a neighborhood $\mathcal V_{v}^{{\rm source}}$ of $[\Sigma_{v},\vec z_{v} \cup \vec w_{v}]$ and a trivialization \begin{equation}\label{form10505} \phi_v : \mathcal V_{v}^{{\rm source}} \times \Sigma_v \to \mathcal {C}_{v}^{\rm source} \end{equation} of \eqref{sourceunifami} over this neighborhood. We assume that these trivializations are compatible with the automorphisms of $\frak u$. For $\frak x_{v} \in \mathcal V_{v}^{{\rm source}}$, we define a complex structure $j_{\frak x_v}$ on $\Sigma_v$ such that the restriction of the trivialization \eqref{form10505} to $\{\frak x_{v}\} \times \Sigma_v$ defines a bi-holomorphic map $(\Sigma_v,j_{\frak x_v}) \to \pi^{-1}(\frak x_{v})$. Let $e$ be an interior edge of $\check R$ containing $v$. There is a nodal point of $(\Sigma_v,\vec z_v)$ associated to $e$. Let $\frak s_{v,e}$ be the section of \eqref{sourceunifami} corresponding to this nodal point.
In the case of an interior node, an {\it analytic family of coordinates} at this nodal point is a holomorphic map \begin{equation}\label{form19009} \varphi_{v,e} : \mathcal V_{v}^{{\rm source}} \times {\rm Int}(D^2)\to \mathcal {C}_{v}^{\rm source} \end{equation} such that for each $\frak x\in \mathcal V_{v}^{{\rm source}}$, we have $\varphi_{v,e}(\frak x,0) = \frak s_{v,e}(\frak x)$ and the restriction of $\varphi_{v,e}$ to $\{\frak x\} \times {\rm Int}(D^2)$ determines a holomorphic coordinate for $\Sigma_{\frak x} = \pi^{-1}(\frak x)$ around $\varphi_{v,e}(\frak x,0) = \frak s_{v,e}(\frak x)$. Thus $\varphi_{v,e}$ commutes with the projection map to $\mathcal {M}_{v}^{\rm source}$ and $\varphi_{v,e}$ is a bi-holomorphic map onto an open subset of $\mathcal {C}_{v}^{\rm source}$. When $z_{v,e}$ is a boundary node, we replace $D^2$ by $D^2_+ = \{z \in D^2 \mid {\rm Im}\, z \ge 0\}$ and define the notion of analytic family of coordinates in a similar way. (See \cite[Section 3]{fooo:const1} for more details.) We require that the images of the maps $\varphi_{v,e}$ are disjoint and away from the images of the sections of $\mathcal {C}_{v}^{\rm source}$ corresponding to the auxiliary marked points $\vec w_{v}$. We also assume that the chosen analytic families are compatible with the automorphisms of $\frak u$. We use the analytic families of coordinates $\varphi_{v,e}$ to desingularize the nodal points as follows. Fix an element: \begin{equation}\label{vectorsigma} \vec\sigma = (\sigma_e ; e \in C^{\rm int}_{1}(\check R)) \in\prod_{e \in C^{\rm int}_{1}(\check R)}\mathcal V_{e}^{{\rm deform}}.
\end{equation} Here $\sigma_e \in D^2 =: \mathcal V_{e}^{{\rm deform}}$ if $z_{v,e}$ is an interior node, and $\sigma_e \in [0,1] =: \mathcal V_{e}^{{\rm deform}}$ if $z_{v,e}$ is a boundary node.\footnote{Note that $z_{v,e}$ is a boundary node if and only if $e$ is a level $0$ edge.} Let \begin{equation}\label{defnuniverse} \vec{\frak x} = (\frak x_v ; v \in C^{\rm int}_0(\check R))\in \prod_{v \in C^{\rm int}_0(\check R)}\mathcal V_{v}^{{\rm source}}. \end{equation} We put \begin{equation}\label{form610} \aligned \Sigma^+_{v}(\vec{\frak x},\vec \sigma) = \Sigma_{\frak x_v,v} &\setminus \bigcup_{(v,e) : \text{$e$ is not a level $0$ edge}} \varphi_{v,e}(\frak x_v,D^2(\vert \sigma_{e}\vert)) \\ &\setminus \bigcup_{(v,e) : \text{$e$ is a level $0$ edge}} \varphi_{v,e}(\frak x_v,D_+^2(\sigma_{e})) \\ \Sigma^-_{v}(\vec{\frak x},\vec \sigma) = \Sigma_{\frak x_v,v} &\setminus \bigcup_{(v,e) : \text{$e$ is not a level $0$ edge}} \varphi_{v,e}(\frak x_v,D^2(1)) \\ &\setminus \bigcup_{(v,e) : \text{$e$ is a level $0$ edge}} \varphi_{v,e}(\frak x_v,D_+^2(1)). \endaligned \end{equation} Recall that a fine edge $e$ connecting two level $0$ vertices is not a level $0$ edge by definition. The auxiliary marked points $\vec w_v$ determine a set of marked points on $\Sigma^-_{v}(\vec{\frak x},\vec \sigma)$, which is also denoted by $\vec w_v$. We define an equivalence relation $\sim$ on: \begin{equation}\label{form6107} \bigcup_{v \in C^{\rm int}_0(\check R)}\Sigma^+_{v}(\vec{\frak x},\vec \sigma) \end{equation} as follows. Let $e$ be an edge which is not a level $0$ edge and connects the vertices $v_1,v_2$. Suppose $z_1,z_2 \in {\rm Int}(D^2)$ with $z_1z_2 = \sigma_e$. Then: $$ \varphi_{v_1,e}(\frak x_{v_1},z_1) \sim \varphi_{v_2,e}(\frak x_{v_2},z_2). $$ Let $e$ be a level $0$ edge connecting the vertices $v_1,v_2$. Suppose $z_1,z_2 \in {\rm Int}(D_+^2)$ with $z_1z_2 =-\sigma_e$. Then: $$ \varphi_{v_1,e}(\frak x_{v_1},z_1) \sim \varphi_{v_2,e}(\frak x_{v_2},z_2).
$$ We divide the space (\ref{form6107}) by the equivalence relation $\sim$ and denote the quotient space by \begin{equation} \Sigma(\vec{\frak x},\vec \sigma). \end{equation} Let: $$ \aligned \sigma_e &= \exp(-(10T_e + \theta_e\sqrt{-1})) \qquad\qquad &\text{$e$ is not a level $0$ edge}, \\ \sigma_e &= \exp(-10T_e) \qquad\qquad &\text{$e$ is a level $0$ edge}. \endaligned $$ For each $e \in C^{\rm int}_1(\check R)$, there is a corresponding neck region in $\Sigma(\vec{\frak x},\vec \sigma)$. We define coordinates $r_e$, $s_e$ on this region as follows. Suppose $e$ is not a level $0$ edge. We choose $v_1,v_2$ so that $v_1$ and the root of $\check R$ (corresponding to the zero-th exterior marked point $z_0$ of $\frak u$) are in the same connected component of $\check R \setminus e$. Let: $$ \Sigma^+_{v_1}(\vec{\frak x},\vec \sigma) \setminus \Sigma^-_{v_1}(\vec{\frak x},\vec \sigma) = [-5T_e,5T_e]_{r_e} \times S^1_{s_e} $$ where $$ \hspace{3cm}(-5T_e,s_e) \in {\rm Closure}\,(\Sigma^-_{v_1}(\vec{\frak x},\vec \sigma))\hspace{1cm}\forall s_e\in S^1. $$ The coordinates $r_e$, $s_e$ are defined in the same way as in (\ref{form643}), (\ref{form644}). If $e$ is a level $0$ edge, then we take $v_1$, $v_2$ so that $v_1$ and the root of $\check R$ are in the same connected component of $\check R \setminus e$. Then $\Sigma^+_{v_1}(\vec{\frak x},\vec \sigma) \setminus \Sigma^-_{v_1}(\vec{\frak x},\vec \sigma)$ has a connected component corresponding to each edge which is incident to $v_1$. We identify the connected component corresponding to the edge $e$ with: \begin{equation}\label{rectangle} [-5T_e,5T_e]_{r_e} \times [0,\pi]_{s_e} \end{equation} where the point $\varphi_{v_1,e}(\frak x_{v_1},\exp(-(r_e+5T_e)-\sqrt{-1}s_e))$ is identified with $(r_e,s_e)$ in \eqref{rectangle}. Similarly, $\Sigma^+_{v_2}(\vec{\frak x},\vec \sigma) \setminus \Sigma^-_{v_2}(\vec{\frak x},\vec \sigma)$ has a connected component corresponding to each edge which is incident to $v_2$.
We identify the connected component corresponding to the edge $e$ with \eqref{rectangle} where the point $\varphi_{v_2,e}(\frak x_{v_2},\exp((r_e-5T_e)+\sqrt{-1}s_e))$ is identified with $(r_e,s_e)$. These identifications are compatible with the equivalence relation $\sim$. We thus have the decomposition: \begin{equation}\label{form6110} \aligned \Sigma(\vec{\frak x},\vec \sigma) = &\bigcup_{v\in C^{\rm int}_0(\check R)}\Sigma^-_{v}(\vec{\frak x},\vec \sigma)\\ &\cup \bigcup_{e\in C^{\rm int}_1(\check R), \text{$e$ is not a level $0$ edge}}[-5T_e,5T_e]_{r_e} \times S^1_{s_e} \\ &\cup \bigcup_{e\in C^{\rm int}_1(\check R), \text{$e$ is a level $0$ edge}}[-5T_e,5T_e]_{r_e} \times [0,\pi]_{s_e}. \endaligned \end{equation} This is the thick-thin decomposition, which is used frequently in various kinds of Gromov-Witten theory. The inclusion of $\vec w_v$ in $\Sigma^-_{v}(\vec{\frak x},\vec \sigma)$ induces a set of marked points in $\Sigma(\vec{\frak x},\vec \sigma)$, which is also denoted by $\vec w_v$. Now we introduce several thickened moduli spaces which are used in the definition of our Kuranishi structures. We first define the stratum corresponding to $\check R$: \begin{defn}\label{defn64545} Given a positive real number $\kappa$, the space $\widetilde{\mathcal U}(\frak u,{\check R},\kappa)$ consists of triples $(\vec{\frak x},u',U')$ with the following properties: \begin{enumerate} \item $\vec{\frak x} = (\frak x_v ; v \in C^{\rm int}_0(\check R))\in \prod_{v \in C^{\rm int}_0(\check R)}\mathcal V_{v}^{{\rm source}}$. For each interior vertex $v$, $\frak x_v$ belongs to the $\kappa$-neighborhood of the point of $\mathcal V_{v}^{{\rm source}}$ induced by $\frak u$. A representative $(\Sigma_{v},\vec z_{\frak x,v} \cup \vec w_{\frak x,v})$ of $\frak x_v$ is also given, where the irreducible component $\Sigma_{v}$ is equipped with a complex structure $j_{\frak x_v}$.
\item $u' : \Sigma \to X$ is a continuous map, whose restriction $u'_v$ to $\Sigma_v$ is smooth. If $c(v)={\rm D}$, then we require that $u'(\Sigma_v)\subset \mathcal D$. Moreover, a meromorphic section $U_v'$ of $(u_v')^*(\mathcal N_{\mathcal D}(X))$ is also fixed such that the data of the zeros and poles of $U_v'$ is determined by the multiplicity of the thick edges connected to $v$. If $c(v)={\rm d}$, then the restriction of $u'_v$ to the boundary of the disc $\Sigma_v$ is mapped to $L$. \item The $C^2$-distance\footnote{If $c(v)={\rm d}$ or ${\rm s}$, then the $C^2$-distance is defined using the metric $g$ on $X$, and if $c(v)={\rm D}$, then the $C^2$-distance is defined using the metric $g'$ on $\mathcal D$.} between $u'_v$ and $u_v$ is less than $\kappa$. If $c(v)={\rm D}$, then the $C^2$-distance\footnote{The $C^2$-distance is defined with respect to a metric which has the form given in \eqref{g-cylinder-end}. Note that the set of sections $\{U_v\}_v$ is defined up to the action of $\C_*^{|\lambda|}$, and here we mean that there is a representative for $\{U_v\}_v$ such that the distance between $U'_v$ and $U_v$ is less than $\kappa$.} between $U'_v$ and $U_v$ is less than $\kappa$. \item We require \begin{equation} \overline\partial_{j_{\frak x_v}} u'_v \in E_v(u'_v). \end{equation} Here $j_{\frak x_v}$ is the complex structure of $\Sigma_v$ corresponding to $\frak x_v$. (See the discussion preceding (\ref{form10505}).) Using the complex structure $j_{\frak x_v}$ on $\Sigma_v$, we may define the target space parallel transportation in the same way as in Section \ref{sub:Obst}, and obtain $E_v(u'_v)$ from $E_v$. If $c(v)={\rm D}$, then we also require: \begin{equation} \overline\partial_{j_{\frak x_v}} U'_v \in E_v(u'_v). \end{equation} Here we use \eqref{decom-tan-bdle} to regard $E_v(u'_v)$ as a subspace of $L^2_{m,\delta}(\Sigma_v;(U_v')^*T\mathcal N_{\mathcal D}(X) \otimes \Lambda^{0,1})$.
\item We require \begin{equation}\label{trans-cons} u'_v(w_{v,i}) \in \mathcal N_{v,i}. \end{equation} \item If $e$ is a fine edge connecting vertices $v_1$ and $v_2$ with color ${\rm d}$ or ${\rm s}$, then the values of $u_{v_1}'$ and $u_{v_2}'$ at the node $\Sigma_{v_1}\cap \Sigma_{v_2}$ are equal to each other. If $e$ is a fine edge connecting vertices $v_1$ and $v_2$ with color ${\rm D}$, then the values of $U_{v_1}'$ and $U_{v_2}'$ at the node $\Sigma_{v_1}\cap \Sigma_{v_2}$ are equal to each other. \end{enumerate} \par We define an equivalence relation $\sim$ on $\widetilde{\mathcal U}(\frak u, {\check R},\kappa)$ as follows. Let $\vert\lambda\vert$ be the number of levels of the DD-ribbon tree associated to $\frak u$. For $i=1,\dots,\vert\lambda\vert$, we take $a_i \in \C_*$. We define $(\vec{\frak x},u',U_{(0)}') \sim (\vec{\frak x},u',U_{(1)}')$ if \begin{equation} U_{(1),v}' = {\rm Dil}_{a_{\lambda(v)}}\circ U_{(0),v}'. \end{equation} We denote the quotient space with respect to this equivalence relation by $\widehat{\mathcal U}(\frak u,{\check R},\kappa)$. The group of automorphisms $\Gamma_{\frak u}$ acts on $\widehat{\mathcal U}(\frak u,{\check R},\kappa)$ in an obvious way. We write ${\mathcal U}(\frak u,{\check R},\kappa)$ for the quotient space $\widehat{\mathcal U}(\frak u,{\check R},\kappa)/\Gamma_{\frak u}$. \end{defn} The space ${\mathcal U}(\frak u,{\check R},\kappa)$ is a generalization of $\mathcal U_{\rm d} \,\,{}_{{\rm ev}_{\rm d}}\times_{{\rm ev}_{\rm D,d}} \mathcal U_{\rm D}\,\,{}_{{\rm ev}_{\rm D,s}}\times_{{\rm ev}_{\rm s}}\mathcal U_{\rm s}$ appearing in (\ref{form633}) and is a thickened version of a neighborhood of $\frak u$ in the stratum ${\mathcal M}^0(\mathcal R)$ of $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ defined in \cite[(3.40)]{DF1}. The following lemma is a consequence of Condition \ref{cond643} and the implicit function theorem.
\begin{lem} If $\kappa$ is small enough, then $\widehat{\mathcal U}(\frak u,{\check R},\kappa)$ is a smooth manifold and ${\mathcal U}(\frak u,{\check R},\kappa)$ is a smooth orbifold. \end{lem} \begin{rem}\label{TSD} Although it is not clear from the notation, the definition of ${\mathcal U}(\frak u,{\check R},\kappa)$ uses the choice of the additional marked points $w_{v,i}$, the set of transversals $\mathcal N_{v,i}$, the trivializations of the universal family $\phi_{v}$, and the analytic families of coordinates $\varphi_{v,e}$. We call $\Xi= (\vec w_v,(\mathcal N_{v,i}),(\phi_{v}),(\varphi_{v,e}),\kappa)$ a choice of {\it trivialization and stabilization data} ({\it TSD}) for $\frak u$. We define the size of $\Xi$ to be the sum of $\kappa$, the diameters of $\mathcal V_v^{\rm source}$ and the images of the maps $\varphi_{v,e}$. When we say $\Xi$ is {\it small enough}, we mean that the size of $\Xi$ is small enough. Given this definition, a more accurate notation for ${\mathcal U}(\frak u,{\check R},\kappa)$ would be ${\mathcal U}(\frak u,{\check R},\Xi)$. \end{rem} We next introduce the generalization of the space $\mathcal U_0$ in Definition \ref{defnn614}. \begin{defn}\label{defn647} Let $\Xi= (\vec w_v,(\mathcal N_{v,i}),(\phi_{v}),(\varphi_{v,e}),\kappa)$ be a TSD at the element $\frak u$ of $\mathcal M_{k+1}^{\rm RGW}(L,\beta)$. The space $\widehat{\mathcal U}_0(\frak u,\Xi)$ consists of $(\vec{\frak x},\vec{\sigma},u')$ with the following properties: \begin{enumerate} \item $\vec{\frak x} = (\frak x_v ; v \in C^{\rm int}_0(\check R))\in \prod_{v \in C^{\rm int}_0(\check R)}\mathcal V_{v}^{{\rm source}}$ and $\vec\sigma = (\sigma_e ; e \in C^{\rm int}_{1}(\check R)) \in \prod_{e \in C^{\rm int}_{1}(\check R)}\mathcal V_{e}^{{\rm deform}}$. Furthermore, for each interior vertex $v$, $\frak x_v $ belongs to the $\kappa$-neighborhood of the point of $\mathcal V_{v}^{{\rm source}}$ induced by $\frak u$. Similarly, for each $e$, we have $|\sigma_e|<\kappa$.
\item $u' : (\Sigma(\vec{\frak x},\vec{\sigma}),\partial(\Sigma(\vec{\frak x},\vec{\sigma}))) \to (X\backslash \mathcal D,L)$ is a continuous map and is smooth on each irreducible component. \item If $e$ is (resp. is not) a level $0$ edge, then the image of the restriction of $u'$ to $[-5T_e,5T_e]_{r_e} \times [0,\pi]_{s_e}$ (resp. $[-5T_e,5T_e]_{r_e} \times S^1_{s_e}$) has a diameter\footnote{The diameter is defined with respect to the metric $g_{NC}$.} less than $\kappa$. If $c(v)={\rm D}$, then the restriction of $u'$ to $\Sigma^+_{v}(\vec{\frak x},\vec \sigma)$ is included in the open neighborhood $\frak U$ of $\mathcal D$. \item If $c(v)={\rm d}$ or ${\rm s}$, then the $C^2$-distance between the restrictions of $u'$ and $u_v$ to $\Sigma^-_{v}(\vec{\frak x},\vec \sigma)$ is less than $\kappa$. If $c(v)={\rm D}$, then the previous part implies that the restriction of $u'$ to $\Sigma^-_{v}(\vec{\frak x},\vec \sigma)$ may be regarded as a map to $\mathcal N_{\mathcal D}(X)\setminus \mathcal D$. We also demand that the $C^2$-distance between this map and $U_v$ is less than $\kappa$.\footnote{ Here again we use the convention that the distance between an object and $\{U_v\}_{\lambda(v)>0}$, is defined to be the minimum of the relevant distance between that object and all representatives of $\{U_v\}_{\lambda(v)>0}$.} \item We require \begin{equation} \overline\partial_{j_{\vec{\frak x},\vec \sigma}} u' \in E_0(u'). \end{equation} Here $j_{\vec{\frak x},\vec \sigma}$ is the complex structure of $\Sigma(\vec{\frak x},\vec \sigma)$, and $E_0(u')$ is defined from $E_v$ by target space parallel transportation in the same way as in Section \ref{sub:Obst}. \item We require \begin{equation} u'(w_{v,i}) \in \mathcal N_{v,i}. \end{equation} \end{enumerate} The group of automorphisms $\Gamma_{\frak u}$ acts on $\widehat{\mathcal U}_0(\frak u,\Xi)$ in the obvious way.
We write ${\mathcal U}_0(\frak u,\Xi)$ for the quotient space $\widehat{\mathcal U}_0(\frak u,\Xi)/\Gamma_{\frak u}$. \end{defn} \begin{rem} The above definition needs to be slightly modified if some of the components of $\vec\sigma$ are zero. Let $e$ be an edge connecting a vertex of level $i$ to a vertex of level $i+1$ such that $\sigma_e=0$. If $e'$ is another edge that connects a vertex of level $i$ to a vertex of level $i+1$, then $\sigma_{e'}=0$. Next, we decompose $\check R$ into several blocks such that $\sigma_e = 0$ for the edges $e$ joining two different blocks and $\sigma_e \ne 0$ for an edge $e$ which is inside a block and is not a fine edge. In each block, we use Definition \ref{defn647} and join spaces associated to various blocks in the same way as in Definition \ref{defn64545}. We omit the details of this process because the actual space we use for the definition of our Kuranishi structure is not ${\mathcal U}_0(\frak u,\Xi)$ but ${\mathcal U}(\frak u,\Xi)$, introduced in Definition \ref{defn6488}. We can also define ${\mathcal U}_0(\frak u,\Xi)$ as a subspace of ${\mathcal U}(\frak u,\Xi)$. We presented Definition \ref{defn647} first because its geometric meaning is clearer. \end{rem} The space ${\mathcal U}_0(\frak u,\Xi)$ in general is singular (not an orbifold). We introduce the notion of inconsistent solutions to thicken ${\mathcal U}_0(\frak u,\Xi)$ into an orbifold. \begin{defn}\label{defn6488} Let $\Xi= (\vec w_v,(\mathcal N_{v,i}),(\phi_{v}),(\varphi_{v,e}),\kappa)$ be a TSD at the element $\frak u$ of $\mathcal M_{k+1}^{\rm RGW}(L,\beta)$.
We say $(\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i))$ is an {\it inconsistent solution near $\frak u$ with respect to $\Xi$} if it satisfies the following properties: \begin{enumerate} \item $\vec{\frak x} = (\frak x_v ; v \in C^{\rm int}_0(\check R))\in \prod_{v \in C^{\rm int}_0(\check R)}\mathcal V_{v}^{{\rm source}}$, $\vec\sigma = (\sigma_e ; e \in C^{\rm int}_{1}(\check R)) \in \prod_{e \in C^{\rm int}_{1}(\check R)}\mathcal V_{e}^{{\rm deform}}$ and $\rho_e \in \C$ for each edge $e \in C^{\rm int}_{\rm th}(\check R)$ that is not a level $0$ edge, and $\rho_i \in D^2$ for each level $i =1 ,\dots,\vert\lambda\vert$. Furthermore, for each interior vertex $v$, $\frak x_v $ belongs to the $\kappa$-neighborhood of the point of $\mathcal V_{v}^{{\rm source}}$ induced by $\frak u$. Similarly, for each $e$, we have $|\sigma_e|<\kappa$. \item If $c(v) = {\rm d}$ (resp. ${\rm s}$), then $u'_{v} : (\Sigma^+_{v}(\vec{\frak x},\vec \sigma),\partial \Sigma^+_{v}(\vec{\frak x},\vec \sigma)) \to (X\setminus\mathcal D,L)$ (resp. $u'_{v} : \Sigma^+_{v}(\vec{\frak x},\vec \sigma)\to X \setminus\mathcal D$) is a smooth map. \item If $c(v) = {\rm D}$, then $U'_{v} : \Sigma^+_{v}(\vec{\frak x},\vec \sigma) \to \mathcal N_{\mathcal D}X \setminus\mathcal D$ is a smooth map, and $u'_{v}=\pi\circ U'_{v}$. \item $\rho_e = 0$ if and only if $\sigma_e = 0$. \item Suppose $e$ is an edge connecting vertices $v_0$ and $v_1$ such that $\lambda(v_0)=0$ and $\lambda(v_1)\geq1$. Then we require: \begin{equation}\label{form621} u'_{v_0} = {\rm Dil}_{\rho_e} \circ U'_{v_1} \end{equation} on $[-5T_e,5T_e]_{r_e} \times S^1_{s_e} = \Sigma^+_{v_0}(\vec{\frak x},\vec \sigma)\cap \Sigma^+_{v_1}(\vec{\frak x},\vec \sigma)$ if $\sigma_e \ne 0$. In particular, we assume that the restriction of $u'_{v_0}$ to $[-5T_e,5T_e]_{r_e} \times S^1_{s_e}$ is contained in the open neighborhood $\frak U$ of $\mathcal D$.
If $\sigma_e = 0$, then the values of $u'_{v_0}$ and $\pi \circ U'_{v_1}$ at the nodal points corresponding to $e$ are equal to each other. \item Suppose $e$ is an edge connecting vertices $v_1$ and $v_2$ such that $\lambda(v_1)=i>0$ and $\lambda(v_2)\geq i+1$. We require \begin{equation}\label{form6222} U'_{v_1} = {\rm Dil}_{\rho_e} \circ U'_{v_2} \end{equation} on $[-5T_e,5T_e]_{r_e} \times S^1_{s_e} = \Sigma^+_{v_1}(\vec{\frak x},\vec \sigma)\cap \Sigma^+_{v_2}(\vec{\frak x},\vec \sigma)$ if $\sigma_e \ne 0$. If $\sigma_e = 0$, then the values of $U'_{v_1}$ and $\pi\circ U'_{v_2}$ at the nodal points corresponding to $e$ are equal. \item Suppose $e$ is a level $0$ edge connecting the vertices $v_1$ and $v_2$. If $\sigma_e \ne 0$, then we require: \begin{equation}\label{form623} u'_{v_1} = u'_{v_2} \end{equation} on $[-5T_e,5T_e]_{r_e} \times [0,\pi]_{s_e} = \Sigma^+_{v_1}(\vec{\frak x},\vec \sigma)\cap \Sigma^+_{v_2}(\vec{\frak x},\vec \sigma)$. If $\sigma_e =0$, then \eqref{form623} holds at the nodal point corresponding to $e$. \item Suppose $e$ is a fine edge connecting the vertices $v_1$ and $v_2$ with level zero (resp. with the same positive level). If $\sigma_e \ne 0$, then we require: \begin{equation}\label{form623rev} \hspace{1cm} u'_{v_1} = u'_{v_2} \quad {\rm (resp.} \,\,\, \quad U'_{v_1} = U'_{v_2}) \end{equation} on $[-5T_e,5T_e]_{r_e} \times S^1_{s_e}= \Sigma^+_{v_1}(\vec{\frak x},\vec \sigma)\cap \Sigma^+_{v_2}(\vec{\frak x},\vec \sigma)$. If $\sigma_e =0$, then \eqref{form623rev} holds at the nodal point corresponding to $e$. \item If $e$ is (resp. is not) a level $0$ edge, then the image of the restriction of $u_v'$ to $[-5T_e,5T_e]_{r_e} \times [0,\pi]_{s_e}$ (resp. $[-5T_e,5T_e]_{r_e} \times S^1_{s_e}$) has a diameter\footnote{The diameter is defined with respect to the metric $g_{NC}$.} less than $\kappa$.
\item If $c(v)={\rm d}$ or ${\rm s}$, then the $C^2$-distance between the restrictions of $u_v'$ and $u_v$ to $\Sigma^-_{v}(\vec{\frak x},\vec \sigma)$ is less than $\kappa$. If $c(v)={\rm D}$, then we demand that the $C^2$-distance between the restrictions of $U_v'$ and $U_v$ to $\Sigma^-_{v}(\vec{\frak x},\vec \sigma)$ is less than $\kappa$.\footnote{ Here again we use the convention that the distance between an object and $\{U_v\}_{\lambda(v)>0}$, is defined to be the minimum of the relevant distance between that object and all representatives of $\{U_v\}_{\lambda(v)>0}$.} \item If $c(v) = {\rm d}$ or $\rm s$, then we require: \begin{equation}\label{form6119} \overline\partial_{j_{\vec{\frak x},\vec \sigma}} u'_v \in E_v(u'_v). \end{equation} Here $j_{\vec{\frak x},\vec \sigma}$ is the complex structure of $\Sigma(\vec{\frak x},\vec \sigma)$, and $E_v(u'_v)$ is defined from $E_v$ by target space parallel transportation in the same way as in Section \ref{sub:Obst}. \item If $c(v) = {\rm D}$, then we require: \begin{equation}\label{form6120} \overline\partial_{j_{\vec{\frak x},\vec \sigma}} U'_v \in E_v(U'_v). \end{equation} Here $j_{\vec{\frak x},\vec \sigma}$ is the complex structure of $\Sigma(\vec{\frak x},\vec \sigma)$, and we use \eqref{decom-tan-bdle} to obtain $E_v(U'_v)$ from $E_v(u'_v)$ as a subspace of $L^2_{m,\delta}(\Sigma_v;(U_v')^*T\mathcal N_{\mathcal D}(X) \otimes \Lambda^{0,1})$. \item We have: \begin{equation}\label{form6121} u'_v(w_{v,i}) \in \mathcal N_{v,i}. \end{equation} \end{enumerate} We denote by $\widetilde{\mathcal U}(\frak u,\Xi)$ the set of all $(\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i))$ satisfying the above properties. We define an equivalence relation $\sim$ on it as follows. Let ${\bf x}_j =(\vec{\frak x}_{(j)},\vec{\sigma}_{(j)},(u'_{v,(j)}),(U'_{v,(j)}),(\rho_{e,(j)}),(\rho_{i,(j)}))$ be elements of $\widetilde{\mathcal U}(\frak u,\Xi)$ for $j=1,2$.
We say that ${\bf x}_1 \sim {\bf x}_2$ if there exists $a_i \in \C_*$ ($i=1,\dots,\vert\lambda\vert$) with the following properties. Let: $$ b_i = a_1 \cdots a_i \in \C_*. $$ \begin{enumerate} \item[(i)] $\vec{\frak x}_{(1)} = \vec{\frak x}_{(2)}$, $\vec{\sigma}_{(1)} = \vec{\sigma}_{(2)}$, $u'_{v,(1)} = u'_{v,(2)}$. \item[(ii)] $\rho_{i,(2)} = a_i \rho_{i,(1)}$. \item[(iii)] $U'_{v,(1)} = {\rm Dil}_{b_{\lambda(v)}}\circ U'_{v,(2)}$. \item[(iv)] Suppose $e$ is an edge connecting a vertex $v_0$ with $\lambda(v_0)=0$ to a vertex $v_1$ with $\lambda(v_1)\geq 1$. Then we require: $$ \rho_{e,(2)} =b_{\lambda(v_1)}\rho_{e,(1)}. $$ \item [(v)] Suppose $e$ is an edge connecting a vertex $v_1$ with $\lambda(v_1)\geq 1$ to a vertex $v_2$ with $\lambda(v_2)\geq 2$. Then we require: $$ \rho_{e,(2)} = a_{\lambda(v_1)+1}\cdots a_{\lambda(v_2)}\rho_{e,(1)}. $$ \end{enumerate} We denote by $\widehat{\mathcal U}(\frak u,\Xi)$ the quotient space $\widetilde{\mathcal U}(\frak u,\Xi)/\sim$. The group $\Gamma_{\frak u}$ acts on $\widehat{\mathcal U}(\frak u,\Xi)$ in an obvious way. We denote by ${\mathcal U}(\frak u,\Xi)$ the quotient space $\widehat{\mathcal U}(\frak u,\Xi)/\Gamma_{\frak u}$. We say an element of ${\mathcal U}(\frak u,\Xi)$ is an {\it inconsistent solution} near $\frak u$ with respect to $\Xi$. When it does not cause any confusion, the elements of $\widehat{\mathcal U}(\frak u,\Xi)$ or $\widetilde{\mathcal U}(\frak u,\Xi)$ are also called inconsistent solutions near $\frak u$ with respect to $\Xi$. \end{defn} \begin{rem} Initially it might seem that the complex numbers $\rho_i$ do not play any role in the definition of the elements of $\mathcal U(\frak u,\Xi)$. However, they will later make it slightly easier for us to define the obstruction maps. \end{rem} Our generalization of Proposition \ref{prop617} claims that ${\mathcal U}(\frak u,\Xi)$ is a smooth orbifold.
Before stating this result, we elaborate on the relationship between $\mathcal U(\frak u,\Xi)$ and $\mathcal U_0(\frak u,\Xi)$. \begin{defn} Let $(\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i))$ be an inconsistent solution near $\frak u$ with respect to $\Xi$. We say that it satisfies the {\it consistency equation} if for each edge $e$ connecting vertices $v_1$ and $v_2$ with $0\leq\lambda(v_1)< \lambda(v_2)$, we have: \begin{equation}\label{form6123} \rho_{e} = \rho_{\lambda(v_1)+1}\cdots \rho_{\lambda(v_2)}. \end{equation} It is easy to see that the consistency equation (\ref{form6123}) is independent of the choice of the representative with respect to the relations given by $\sim$ and the action of $\Gamma_{\frak u}$. \end{defn} \begin{lem}\label{lem650} The set of inconsistent solutions near $\frak u$ satisfying the consistency equation can be identified with ${\mathcal U}_0(\frak u,\Xi)$. \end{lem} \begin{proof} Let $(\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i))$ be an inconsistent solution near $\frak u$ satisfying the consistency equation. For simplicity of exposition, we consider the case that all the components of $\vec{\sigma}$ are nonzero.\footnote{ This is the case that we gave a detailed definition of ${\mathcal U}_0(\frak u,\Xi)$, after all. For other cases, this lemma can be used as the definition.} Define $\tau_i$ to be the product $\rho_1\cdot \rho_2 \cdots \rho_i$. For each vertex $v$ with $c(v) = D$, we also define: \[ U^{\frak m}_{v} = {\rm Dil}_{\tau_{\lambda(v)}} \circ U^{\prime}_{v}. \] Then the maps $U^{\frak m}_{v}$ for $c(v)={\rm D}$ and $u'_{v}$ for $c(v)={\rm d}$ or ${\rm s}$ are compatible on the overlaps and by gluing them together, we obtain an element of ${\mathcal U}_0(\frak u,\Xi)$. The reverse direction is clear. \end{proof} \begin{exm} We consider the case of the detailed DD-ribbon tree in Figure \ref{FIgsec6-1}. This tree has two edges (whose multiplicities are 2 and 3, respectively).
We denote them by $e_{\rm d}$ and $e_{\rm s}$, respectively. Two parameters $\rho_{\rm d}$ and $\rho_{\rm s}$ are associated to these edges. (In Section \ref{sub:proofmain}, $\rho_{\rm d}$ and $\rho_{\rm s}$ are denoted by $\rho_1$ and $\rho_2$, respectively.) The total number of levels is 1, so there is a parameter $\rho$ associated to this level. The consistency equation (\ref{form6123}) implies that: $$ \rho_{\rm d}=\rho =\rho_{\rm s}, $$ which is the same as the equation in \eqref{rho1rho2hito}. \end{exm} For any $\ell \le m-2$, we fix a $C^{\ell}$-structure on $\widehat{\mathcal U}(\frak u,\Xi)$ in the following way. For an interior vertex $v$ of $\check R$, let $\Sigma_v^-$ be the space $\Sigma_v^-(\vec{\frak x}, \vec{\sigma})$ in the case that $\vec{\sigma}=0$ and $\vec{\frak x}$ is induced by $\frak u$. The trivialization of the universal family also allows us to identify $\Sigma_v^-$ with $\Sigma_v^-(\vec{\frak x}, \vec{\sigma})$ for different choices of $\vec{\frak x}$, $\vec{\sigma}$. Define maps: \begin{align} {\rm Res}_v : \widetilde{\mathcal U}(\frak u,\Xi)&\to L^2_{m+1}(\Sigma_v^-,X\setminus \mathcal D) \qquad &\text{if $\lambda(v) = 0$}, \\ {\rm Res}_v : \widetilde{\mathcal U}(\frak u,\Xi) &\to L^2_{m+1}(\Sigma_v^-,\mathcal N_{\mathcal D}X \setminus \mathcal D) \qquad &\text{if $\lambda(v) > 0$},\label{lambda-v-p} \end{align} such that ${\rm Res}_v(\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i))$ is the restriction of $u'_v$ or $U'_v$ to $\Sigma_v^-(\vec{\frak x},\vec{\sigma}) \cong \Sigma_v^-$.
By unique continuation, ${\rm Res}_v$ and the obvious projection maps induce an embedding: \begin{equation}\label{ccc} \aligned \widetilde{\mathcal U}(\frak u,\Xi)\to &\prod_{v \in C^{\rm int}_0(\check R)}\mathcal V_{v}^{{\rm source}} \times \prod_{e \in C^{\rm int}_1(\check R)}\mathcal V_{e}^{{\rm deform}} \times (D^2)^{\vert\lambda\vert}\\ &\times\prod_{v \in C^{\rm int}_0(\check R), \lambda(v) = 0}L^2_{m+1} (\Sigma_v^-,X\setminus \mathcal D)\\ &\times\prod_{v \in C^{\rm int}_0(\check R), \lambda(v) > 0}L^2_{m+1} (\Sigma_v^-,\mathcal N_{\mathcal D}X\setminus \mathcal D) \endaligned \end{equation} We use this embedding to fix a $C^{\ell}$-structure on $\widetilde{\mathcal U}(\frak u,\Xi)$. The group $\C_*^{|\lambda|}$ acts freely on the target and the domain of \eqref{ccc}, and the above embedding is equivariant with respect to this action. We use the induced map at the level of the quotients to define a $C^{\ell}$-structure on $\widehat{\mathcal U}(\frak u,\Xi)$. Note that we can define a slice for $\widehat{\mathcal U}(\frak u,\Xi)$ using the following idea. For each $1\leq i \leq |\lambda|$, we fix an interior vertex $v_i$ with $\lambda(v_i)=i$ and a base point $x_i\in\Sigma_{v_i}^{-}$. We also trivialize the bundle $\mathcal N_{\mathcal D}(X)$ in a neighborhood of $U_{v_i}(x_i)$. Each element of $\widehat{\mathcal U}(\frak u,\Xi)$ has a unique representative $(\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i))$ such that $U_{v_i}'(x_i)=1\in \C$. Here we assume that $\kappa$ is small enough such that $U_{v_i}'(x_i)$ belongs to the neighborhood of $U_{v_i}(x_i)$ on which $\mathcal N_{\mathcal D}(X)$ is trivialized. \begin{prop}\label{prop652} The space $\widehat{\mathcal U}(\frak u,\Xi)$ is a $C^{\ell}$-manifold and ${\mathcal U}(\frak u,\Xi)$ is a $C^{\ell}$-orbifold.
There exists a $\Gamma_{\frak u}$-invariant open $C^{\ell}$-embedding for $\ell \le m-2$: \[ \Phi : \prod_{e \in C^{\rm int}_1(\check R)}\mathcal V_{e}^{{\rm deform}} \times \widehat{\mathcal U}(\frak u,{\check R},\Xi)\times D^2(\epsilon)^{\vert\lambda\vert} \to \widehat{\mathcal U}(\frak u,\Xi) \] with the following properties: \begin{enumerate} \item \[ \Phi(\vec{\sigma},\xi,(\rho_i))=[\vec{\frak x}, \vec{\sigma},(u'_{\vec{\sigma},\xi,v}), (U'_{\vec{\sigma},\xi,v}),(\rho_{e}(\vec{\sigma},\xi)),(\rho_i)] \] Namely, the gluing parameters $\vec{\sigma}$ are preserved by the map $\Phi$. Moreover, the deformation parameter $\vec{\frak x}$ is the same as the one for the source curve of $\xi$. \item For each edge $e \in C^{\rm int}_{\rm th}(\check R)$ that is not a level $0$ edge, there exists a nonzero smooth function $f_e$ such that: \[ \rho_{e}(\vec{\sigma},\xi)=f_e(\vec{\sigma},\xi) \sigma_e^{m(e)} \] where $m(e)$ is the multiplicity of the edge $e$. \item Let $\xi = (\vec{\frak x},u',U')\in \widehat{\mathcal U}(\frak u,{\check R},\Xi)$ and $\vec{\sigma}_0$ be the vector whose components $\sigma_e$ are all zero. Then we have: \[ \Phi(\vec{\sigma}_0,\xi,(\rho_i))=[\vec{\frak x},\vec{\sigma}_0,(u'_{\vec{\sigma}_0,\xi,v}), (U'_{\vec{\sigma}_0,\xi,v}),(\rho_{e}(\vec{\sigma}_0,\xi)),(\rho_i)], \] where $u'_{\vec{\sigma}_0,\xi,v}$ is the restriction $u'_v$ of $u'$, $U'_{\vec{\sigma}_0,\xi,v}$ is the restriction of $U^{\prime}_{v}$ and $\rho_{e}(\vec{\sigma}_0,\xi)=0$. \end{enumerate} \end{prop} The proof of Proposition \ref{prop652} is essentially the same as the proof of Proposition \ref{prop617}, and it is only notationally more involved. We next state a generalization of Proposition \ref{prop618}. For a thick edge $e$ which is not of level $0$, we define $T_e$, $\theta_e$, $\frak R_e$, $\eta_e$ using the following identities: \begin{equation}\label{form635rev} \aligned \sigma_e &= \exp(-(T_e+\sqrt{-1}\theta_e)), \\ \rho_e &= \exp(-(\frak R_e+\sqrt{-1} \eta_e)).
\endaligned \end{equation} If $e$ is a level $0$ edge, then we define $T_e$ using: \begin{equation}\label{form635rev-2} \sigma_e = \exp(-T_e). \end{equation} We may also define $T_e$ and $\theta_e$ for a fine edge as in \eqref{form635rev}. Using $\Phi$, we regard $\frak R_e$, $\eta_e$ as functions of $T_{e'}$, $\theta_{e'}$ and $\xi$. We again use the trivialization of the universal family to identify $\Sigma_v^-(\vec{\frak x}_0,\vec{\sigma})$ (see \eqref{form610}) for various choices of $\vec{\frak x}_0$, $\vec{\sigma}$. For the purpose of the next proposition, we also regard $u'_{\vec{\sigma},\xi,v}$, $U'_{\vec{\sigma},\xi,v}$ as maps \[ \aligned &u'_{\vec{\sigma},\xi,v} :\Sigma_v^-(\vec{\sigma}) \to X \setminus \mathcal D \\ &U'_{\vec{\sigma},\xi,v} :\Sigma_v^-(\vec{\sigma}) \to \mathcal N_{\mathcal D}X \setminus \mathcal D. \endaligned \] In particular, the domains of these maps are independent of $T_e$, $\theta_e$ and $\xi$. \begin{prop}\label{prop653} Let $\ell$ be an arbitrary positive integer and $k_e,k'_e$ be non-negative integers. Let $\upsilon_e = 0$ if $k_e = k'_e = 0$; otherwise we define $\upsilon_e = 1$. \begin{enumerate} \item We have the following exponential decay estimates: \begin{equation} \aligned &\left\Vert \prod_e \frac{\partial^{k_e}}{\partial^{k_e} T_e}\frac{\partial^{k'_e}} {\partial^{k'_e} \theta_e}u'_{\vec{\sigma},\xi,v} \right\Vert_{L^2_{\ell}(\Sigma_v^-(\vec{\sigma}))}\le C \exp(-c\sum \upsilon_eT_e),\\ &\left\Vert\prod_e \frac{\partial^{k_e}}{\partial^{k_e} T_e}\frac{\partial^{k'_e}} {\partial^{k'_e} \theta_e}U'_{\vec{\sigma},\xi,v} \right\Vert_{L^2_{\ell}(\Sigma_v^-(\vec{\sigma}))}\le C \exp(-c\sum \upsilon_eT_e). \endaligned \end{equation} Here $C,c$ are positive constants depending on $\ell$, $k_e$, $k_e'$. The same estimate holds for the $\xi$ derivatives of $u'_{\vec{\sigma},\xi,v}$, $U'_{\vec{\sigma},\xi,v}$.
\item For any thick edge $e_0$ which is not a level $0$ edge, we also have the following exponential decay estimates: \begin{equation} \aligned &\left\vert \prod_e \frac{\partial^{k_e}}{\partial^{k_e} T_e} \frac{\partial^{k'_e}}{\partial^{k'_e} \theta_e} {(\frak R_{e_0} - m({e_0})T_{e_0})} \right\vert \le C \exp(-c\sum \upsilon_eT_e) \\ &\left\vert \prod_e \frac{\partial^{k_e}}{\partial^{k_e} T_e} \frac{\partial^{k'_e}}{\partial^{k'_e} \theta_e} {(\eta_{e_0} - m({e_0})\theta_{e_0})} \right\vert \le C \exp(-c\sum \upsilon_eT_e). \endaligned \end{equation} Here $C,c$ are positive constants depending on $\ell$, $k_e$, $k_e'$. The same estimate holds for the $\xi$ derivatives of $\frak R_{e_0}$, $\eta_{e_0}$. \end{enumerate} \end{prop} As with Proposition \ref{prop618}, Proposition \ref{prop653} can be verified using the same argument as in \cite[Section 6]{foooexp}. We now use Propositions \ref{prop652} and \ref{prop653} to produce a Kuranishi chart at $\frak u$. Let $\frak y = (\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i))$ be a representative of an element of $\widehat{\mathcal U}(\frak u,\Xi)$. Recall that we fix vector spaces $E_v$ for each $\frak u$, and use target space parallel transportation to obtain the vector spaces $E_v(u'_{v})$ and $E_v(U'_{v})$. We define: \begin{equation}\label{form63131} \mathcal E_{0,\frak u}(\frak y) = \bigoplus_{v \in C_0^{\rm int}(\check R),\,\lambda(v) = 0} E_v(u'_{v}) \oplus \bigoplus_{v \in C_0^{\rm int}(\check R),\,\lambda(v) > 0} E_v(U'_{v}). \end{equation} Using Proposition \ref{prop653}, it is easy to see that (\ref{form63131}) defines a $\Gamma_{\frak u}$-equivariant $C^{\ell}$ vector bundle on $\widehat{\mathcal U}(\frak u,\Xi)$. We define the other part of the obstruction bundle as follows. Let $e$ be a thick edge which connects vertices $v_1$ and $v_2$ with $0\leq \lambda(v_1)<\lambda(v_2)$. We fix the trivial line bundle $\C_e$ on $\widetilde{\mathcal U}(\frak u,\Xi)$.
Let ${\bf x}_1 \sim {\bf x}_2$ and $a_i \in \C_*$ ($i=1,\dots,\vert\lambda\vert$) be as in Definition \ref{defn6488} (i)-(v). Then define an equivalence relation on $\C_e$ where $({\bf x}_1,V_1) \sim ({\bf x}_2,V_2)$ if: \[ V_2 = a_{\lambda(v_1)+1}\dots a_{\lambda(v_2)} V_1. \] We thus obtain a line bundle $\mathscr L_e$ on $\widehat{\mathcal U}(\frak u,\Xi)$. The group $\Gamma_{\frak u}$ acts on $\bigoplus_e\mathscr L_e$ in an obvious way. Our obstruction bundle $\mathcal E_{\frak u}$ on $\widehat{\mathcal U}(\frak u,\Xi)$ is defined to be: \begin{equation}\label{form1162} \mathcal E_{\frak u} = \mathcal E_{0,\frak u} \oplus \bigoplus_{e \in C^{\rm int}_{\rm th}(\check R),\, \lambda(e) > 0} \mathscr L_e. \end{equation} It induces an orbi-bundle on ${\mathcal U}(\frak u,\Xi)$. By an abuse of notation, this orbi-bundle is also denoted by $\mathcal E_{\frak u}$. Next, we define Kuranishi maps. If $v$ is a vertex with $\lambda(v) = 0$, then we define: \begin{equation}\label{form630} \frak s_{\frak u,v}(\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i))= \overline{\partial} u'_v \in E_v(u'_{v}). \end{equation} If $v$ is a vertex with $\lambda(v) >0$, then we define: \begin{equation}\label{form631} \frak s_{\frak u,v}(\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i)) = \overline{\partial} U'_v \in E_v(U'_{v}). \end{equation} If $e$ is a thick edge connecting vertices $v_1$ and $v_2$ with $0\leq \lambda(v_1)<\lambda(v_2)$, then we define: \begin{equation}\label{form632} \frak s_{\frak u,e}(\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i)) = \rho_{e} - \rho_{\lambda(v_1)+1}\cdots \rho_{\lambda(v_2)}. \end{equation} We define: \[ \frak s_{\frak u} = ((\frak s_{\frak u,v};v \in C^{\rm int}_0(\check R)), (\frak s_{\frak u,e};e \in C^{\rm int}_{\rm th}(\check R), \lambda(e) > 0)). \] It is easy to see that $\frak s_{\frak u}$ induces a $\Gamma_{\frak u}$-invariant section of $\mathcal E_{\frak u}$. 
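To illustrate \eqref{form632}, consider again the example of Figure \ref{FIgsec6-1}, where both edges $e_{\rm d}$ and $e_{\rm s}$ join a level $0$ vertex to the level $1$ vertex, and $\rho$ denotes the parameter of the unique level. In this case, the edge components of the Kuranishi map reduce to
\begin{equation*}
\frak s_{\frak u,e_{\rm d}} = \rho_{\rm d} - \rho, \qquad \frak s_{\frak u,e_{\rm s}} = \rho_{\rm s} - \rho,
\end{equation*}
so the common zero locus of these components is cut out exactly by the consistency equation $\rho_{\rm d}=\rho=\rho_{\rm s}$. In general, the vanishing of the components $\frak s_{\frak u,e}$ recovers \eqref{form6123}, and hence, by Lemma \ref{lem650}, identifies the corresponding inconsistent solutions with elements of ${\mathcal U}_0(\frak u,\Xi)$.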
Using Proposition \ref{prop653}, we can show that the section $\frak s_{\frak u}$ is smooth. Suppose $\frak s_{\frak u}(\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i)) = 0$. By (\ref{form632}) and Lemma \ref{lem650}, this zero induces an element $(\vec{\frak x},\vec{\sigma},u')$ of $\widehat{\mathcal U}_0(\frak u,\Xi)$. By (\ref{form630}) and (\ref{form631}) the map $u'$ is pseudo holomorphic. Therefore, $(\Sigma({\vec{\frak x}},\vec{\sigma}),u')$ and marked points on $\Sigma_{\vec{\frak x}}$ determine an element of $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$. This element does not change if we change $(\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i))$ by the $\Gamma_{\frak u}$-action. We thus obtain: \begin{equation}\label{paramatp} \psi _{\frak u}: \frak s_{\frak u}^{-1}(0)/\Gamma_{\frak u} \to \mathcal M_{k+1}^{\rm RGW}(L;\beta), \end{equation} which is a homeomorphism onto an open neighborhood of $\frak u$. We thus proved: \begin{thm}\label{pro65411} $\mathcal U_{\frak u} = (\mathcal U(\frak u),\mathcal E_{\frak u},\Gamma_{\frak u},\frak s_{\frak u},\psi_{\frak u})$ is a Kuranishi chart of the moduli space $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ at $\frak u$. \end{thm} \section{Construction of Kuranishi Structures} \label{sub:kuracont} So far, we constructed a Kuranishi chart at each point of $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$. In this section we construct a global Kuranishi structure. We follow arguments similar to those in \cite{foootech, fooo:const1}. However, our treatment differs at certain points, and we discuss the construction with emphasis on those differences.
\subsection{Compatible Trivialization and Stabilization Data} \label{subsub:Nplustri} Throughout this subsection, we fix: $$ \hspace{3cm}\frak u_{(j)} = ((\Sigma_{(j),v},\vec z_{(j),v},u_{(j),v});v \in C^{\rm int}_0(\check R_{(j)}))\hspace{1cm} j=1,\,2 $$ elements of $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$, each contained in the stratum corresponding to the very detailed DD-ribbon tree $\check R_{(j)} = (c_{(j)},\alpha_{(j)},m_{(j)},\lambda_{(j)})$. We denote the union of the irreducible components $\Sigma_{(j),v}$ by $\Sigma_{(j)}$. The maps $u_{(j)}$ are also defined similarly, and $\vec z_{(j)}$ is the set of boundary marked points of $\Sigma_{(j)}$. We use a similar convention several times in this section. We assume that $\frak u_{(2)}$ belongs to a small neighborhood of $\frak u_{(1)}$ in the RGW-topology. To be more precise, for $\frak u_{(j)}$, let $\Xi_{(j)}= (\vec w_{(j)},(\mathcal N_{(j),v,i}),(\phi_{(j),v}),(\varphi_{(j),v,e}),\kappa_{(j)})$ be a fixed TSD. We assume that $\frak u_{(2)}$ is represented by an element of the space ${\mathcal U}(\frak u_{(1)},\Xi_{(1)})$. This assumption implies that $\check R_{(2)}$ is obtained from $\check R_{(1)}$ by level shrinkings, level $0$ edge shrinkings and fine edge shrinkings. In particular, we may regard: \[ C^{\rm int}_1(\check R_{(2)})\subseteq C^{\rm int}_1(\check R_{(1)}). \] There also exists a surjective map $\pi : \check R_{(1)} \to \check R_{(2)}$ inducing: \[ \pi : C^{\rm int}_0(\check R_{(1)}) \to C^{\rm int}_0(\check R_{(2)}) \] such that the irreducible component corresponding to $v \in C^{\rm int}_0(\check R_{(2)})$ is obtained by gluing the irreducible components corresponding to $\hat v \in \pi^{-1}(v) \subset C^{\rm int}_0(\check R_{(1)})$.
There also exists a surjective map: \[ \nu : \{0,1,\dots,\vert\lambda_{(1)}\vert\} \to \{0,1,\dots,\vert\lambda_{(2)}\vert\} \] such that $i\le j$ implies $\nu(i) \le \nu(j)$, and $\lambda_{(2)}(\pi(\hat v)) =\nu(\lambda_{(1)}(\hat v))$ for inside vertices $\hat v$ of $\check R_{(1)}$. The maps $\pi$ and $\nu$ are the analogues of ${\rm treesh}$ and ${\rm levsh}$ in \cite[Lemma 5.23]{DF1} defined for detailed trees. To describe the coordinate change, it is convenient to start with the case where the TSDs $\Xi_{(1)}$ and $\Xi_{(2)}$ satisfy some compatibility conditions. In this subsection, we discuss these compatibility conditions, and in Subsection \ref{subsub:coordinatechage1}, we explain how a coordinate change can be constructed assuming these conditions. In Subsection \ref{subsub:coordinatechage2}, we consider the case where $\Xi_{(1)}$ and $\Xi_{(2)}$ are two (not necessarily compatible) TSDs associated to the same element of the moduli space. We combine the results of Subsections \ref{subsub:coordinatechage1} and \ref{subsub:coordinatechage2} in Subsection \ref{subsub:coordinatechage3} to define coordinate changes in the general case and verify the cocycle condition for these coordinate changes. The assumption that $\frak u_{(2)}$ belongs to a small neighborhood of $\frak u_{(1)}$ implies that we can find: $$\vec\sigma_0 = (\sigma_{0,e} ; e \in C^{\rm int}_1(\check R_{(1)})) \in \prod_{e \in C^{\rm int}_1(\check R_{(1)})}\mathcal V_{(1),e}^{{\rm deform}} $$ and $$ \vec{\frak x}_{0} = (\frak x_{0,v} ; v \in C^{\rm int}_0(\check R_{(1)})) \in \prod_{v \in C^{\rm int}_0(\check R_{(1)})}\mathcal V_{(1),v}^{{\rm source}} $$ such that the inconsistent map: \begin{equation}\label{incon-basis-change} (\Sigma_{(1)}(\vec{\frak x}_0,\vec \sigma_0),\vec z_{(1)}(\vec{\frak x}_0,\vec{\sigma}_0),u_{(1)} (\vec{\frak x}_0,\vec{\sigma}_0)) \end{equation} is isomorphic to $(\Sigma_{(2)},\vec z_{(2)},u_{(2)})$.
Although it is not clear from the notation, the map $u_{(1)}(\vec{\frak x}_0,\vec{\sigma}_0)$ in \eqref{incon-basis-change} depends on $\frak u_{(2)}$ and not just on $\vec{\frak x}_0$ and $\vec{\sigma}_0$. We assume that the additional marked points and transversals in $\Xi_{(2)}$ satisfy the following conditions: \begin{conds}\label{choi655} Since \eqref{incon-basis-change} is induced by an element of ${\mathcal U}(\frak u_{(1)},\Xi_{(1)})$, there is a set of marked points $\vec w_{(1)}(\vec{\frak x}_0,\vec \sigma_0)\subset \Sigma_{(1)}^{-}(\vec{\frak x}_0,\vec \sigma_0)$ determined by $\vec w_{(1)}$, $\vec{\frak x}_0$ and $\vec \sigma_0$. Then we require that the marked points $\vec w_{(2)}$ of $\Xi_{(2)}$ are chosen such that: \begin{equation}\label{6134form} (\Sigma_{(1)}(\vec{\frak x}_0,\vec \sigma_0),\vec z_{(1)}(\vec{\frak x}_0,\vec \sigma_0) \cup \vec w_{(1)}(\vec{\frak x}_0,\vec \sigma_0))\cong (\Sigma_{(2)},\vec z_{(2)}\cup \vec w_{(2)}). \end{equation} Furthermore, if $w_{(2),v,i}$ corresponds to $w_{(1),\hat v,\hat i}$, then we require:\footnote{ Here we use the correspondence between $\vec w_{(1)}$ and $\vec w_{(2)}$ given by the identification in \eqref{6134form} and the correspondence between the elements of $\vec w_{(1)}$ and $\vec w_{(1)}(\vec{\frak x}_0,\vec \sigma_0)$.} \begin{equation} \mathcal N_{(2),v,i} = \mathcal N_{(1),\hat v,\hat i}. \end{equation} \end{conds} Next, we impose some constraints on the choices of the maps $\phi_{(2),v}$ and $\varphi_{(2),v,e}$. Let $v$ be an interior vertex of $\check R_{(2)}$. We consider the moduli space $\mathcal M^{\rm source}_v$ of deformations of the irreducible component $(\Sigma_{(2),v},\vec z_{(2),v}\cup \vec w_{(2),v})$. We first fix a neighborhood $\mathcal V^{\rm source}_{(2),v}$ of $[\Sigma_{(2),v},\vec z_{(2),v}\cup \vec w_{(2),v}]$ in $\mathcal M^{\rm source}_v$ as follows. The Riemann surface $\Sigma_{(2),v}$ is obtained by gluing the spaces $\Sigma_{(1),\hat v}$ for $\hat v \in \pi^{-1}(v)$.
Here the complex structure on $\Sigma_{(1),\hat v}$ is given by $\frak x_{0,\hat v}$ and the gluing parameters are $\sigma_{0,\hat e}$ for edges $\hat e$ in $\pi^{-1}(v)$. There is a neighborhood $\mathcal U_{(1),\hat v}^{{\rm source}}$ of $\frak x_{0,\hat v}$ in $\mathcal V_{(1),\hat v}^{{\rm source}}$ and a neighborhood $\mathcal V_{(1),(2),\hat e}^{{\rm deform}}$ of $\sigma_{0,\hat e}\in \mathcal V_{(1),\hat e}^{{\rm deform}}$ such that the following map: \[ \prod_{\hat v \in C^{\rm int}_0(\check R_{(1)}),\, \pi(\hat v) = v} \mathcal U_{(1),\hat v}^{{\rm source}}\times \prod_{\hat e \in C^{\rm int}_1(\check R_{(1)}),\, \pi(\hat e) = v} \mathcal V_{(1),(2),\hat e}^{{\rm deform}}\to \mathcal M^{\rm source}_v \] is an isomorphism onto an open neighborhood of the point determined by $(\Sigma_{(2),v},\vec z_{(2),v}\cup \vec w_{(2),v})$. Therefore, we may define: \begin{equation}\label{form6143} \mathcal V_{(2),v}^{{\rm source}}:=\prod_{\hat v \in C^{\rm int}_0(\check R_{(1)}),\, \pi(\hat v) = v} \mathcal U_{(1),\hat v}^{{\rm source}}\times \prod_{\hat e \in C^{\rm int}_1(\check R_{(1)}),\, \pi(\hat e) = v} \mathcal V_{(1),(2),\hat e}^{{\rm deform}}. \end{equation} Let $\frak x_{2,v} = ((\frak x_{1,\hat v}),(\sigma_{1,\hat e}))$ be an element of \eqref{form6143}. Then $\Sigma_{(2),v}(\frak x_{2,v})$, the Riemann surface $\Sigma_{(2),v}$ with the complex structure induced by $\frak x_{2,v}$, has the following decomposition: \begin{align} \Sigma_{(2),v}(\frak x_{2,v})= &\coprod_{\hat v \in C^{\rm int}_0(\check R_{(1)}),\, \pi(\hat v) = v} \Sigma^-_{(1),\hat v}(\frak x_{1,\hat v}) \label{form614444pre}\\ &\cup \coprod_{e \in C^{\rm int}_1(\check R_{(2)}),\, v \in \partial e} D^2 \label{form614444pre-2}\\ &\cup \coprod_{\hat e \in C^{\rm int}_1(\check R_{(1)}),\, \pi(\hat e) = v} [-5T_{\hat e},5T_{\hat e}] \times S^1. \label{form614444pre-3} \end{align} The following comments about the above decomposition are in order.
In \eqref{form614444pre}, $\Sigma^-_{(1),\hat v}(\frak x_{1,\hat v})$ denotes the subspace of $\Sigma_{(1),\hat v}(\frak x_{1,\hat v})$ given by the complement of the discs $\varphi_{(1),\hat v,\hat e}(\frak x_{1,\hat v},D^2(1))$, where $\hat e$ runs among the edges of $\check R_{(1)}$ which are connected to $\hat v$. For each edge $e \in C^{\rm int}_1(\check R_{(2)})$ which is incident to $v\in C^{\rm int}_0(\check R_{(2)})$, there is a unique edge $\hat e\in C^{\rm int}_1(\check R_{(1)})$ which is mapped to $e$. In particular, one of the endpoints of $\hat e$, denoted by $\hat v$, is mapped to $v$. The disc corresponding to $e$ in \eqref{form614444pre-2} is given by the space $\varphi_{(1),\hat v,\hat e}(\frak x_{1,\hat v},D^2(1))$. Finally, if an edge $\hat e\in C^{\rm int}_1(\check R_{(1)})$ is mapped to a vertex $v\in C^{\rm int}_0(\check R_{(2)})$ by $\pi$, then the space in \eqref{form614444pre-3} is identified with the neck region associated to $\hat e$. In particular, the positive number $T_{\hat e}$ is determined by $\sigma_{1,\hat e}$. The union of the spaces in \eqref{form614444pre} and \eqref{form614444pre-2} is called the thick part of $\Sigma_{(2),v}(\frak x_{2,v})$, and the spaces in \eqref{form614444pre-3} form the thin part of $\Sigma_{(2),v}(\frak x_{2,v})$. The above decomposition can be used in an obvious way to define the map $\varphi_{(2),v,e}$ on $\mathcal V_{(2),v}^{{\rm source}}\times {\rm Int}(D^2)$.
We have the following decomposition of $ \Sigma_{(2),v}$ as a special case of the above decomposition applied to the point $((\frak x_{0,\hat v}),(\sigma_{0,\hat e}))$: \begin{align} \Sigma_{(2),v}=&\coprod_{\hat v \in C^{\rm int}_0(\check R_{(1)}),\, \pi(\hat v) = v} \Sigma^-_{(1),\hat v}(\frak x_{0,\hat v})\label{form614444}\\ &\cup \coprod_{e \in C^{\rm int}_1(\check R_{(2)}),\, v \in \partial e} D^2 \label{form614444-2} \\ &\cup \coprod_{\hat e \in C^{\rm int}_1(\check R_{(1)}),\, \pi(\hat e) = v} [-5T'_{\hat e},5T'_{\hat e}] \times S^1. \label{form614444-3} \end{align} The trivialization $\phi_{(2),v}$ that we intend to define is a family (parametrized by $\frak x_{2,v}$) of diffeomorphisms from $\Sigma_{(2),v}$ to $\Sigma_{(2),v}(\frak x_{2,v})$. The trivializations $\phi_{(1),\hat v}$ define diffeomorphisms between the subspaces in \eqref{form614444pre} and \eqref{form614444}. We then use the coordinates at nodal points, $\varphi_{(1),\hat v,\hat e}$, to extend them to a diffeomorphism from the union of the subspaces in \eqref{form614444} and \eqref{form614444-2} to the union of the subspaces in \eqref{form614444pre} and \eqref{form614444pre-2}. Finally, we extend this family of diffeomorphisms in an arbitrary way to the neck regions to complete the construction of $\phi_{(2),v}$. This construction of the maps $\varphi_{(2),v,e}$ and $\phi_{(2),v}$ is analogous to \cite[Sublemma 10.15]{fooo:const1}. \begin{conds}\label{choi655-2} We require that the maps $\phi_{(2),v}$ and $\varphi_{(2),v,e}$ of the TSD $\Xi_{(2)}$ are obtained from the TSD $\Xi_{(1)}$ as above. \end{conds} \begin{defn}\label{incon-map-near-u-gen} Let $\frak u$ be an element of $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ and $\Xi$ be a TSD at $\frak u$. An {\it inconsistent map near $\frak u$ with respect to $\Xi$} is an object similar to the ones in Definition \ref{defn6488} where we relax the Cauchy-Riemann equations in (4) and (5).
\end{defn} This definition is almost a straightforward generalization of Definition \ref{defn625625}, with the difference that we also include the transversal constraints in \eqref{eq631} as one of the requirements for an inconsistent map near $\frak u$. Let $\Xi_{(2)}$ satisfy Conditions \ref{choi655} and \ref{choi655-2}. For any: \begin{equation}\label{form136} \aligned \vec\sigma_{(2)} &= (\sigma_{2,e} ; e \in C^{\rm int}_1(\check R_{(2)})) \in \prod_{e \in C^{\rm int}_1(\check R_{(2)})}\mathcal V_{(2),e}^{{\rm deform}}\\ \vec{\frak x}_{(2)} &= (\frak x_{2,v} ; v \in C^{\rm int}_0(\check R_{(2)}))\in \prod_{v \in C^{\rm int}_0(\check R_{(2)})}\mathcal V_{(2),v}^{{\rm source}}, \endaligned \end{equation} satisfying Definition \ref{defn6488} (1) with respect to $\Xi_{(2)}$, there exist: \begin{equation} \aligned \vec\sigma_{(1)} &= (\sigma_{1,e} ; e \in C^{\rm int}_1(\check R_{(1)})) \in \prod_{e \in C^{\rm int}_1(\check R_{(1)})}\mathcal V_{(1),e}^{{\rm deform}}\\ \vec{\frak x}_{(1)} &= (\frak x_{1,v} ; v \in C^{\rm int}_0(\check R_{(1)}))\in \prod_{v \in C^{\rm int}_0(\check R_{(1)})}\mathcal V_{(1),v}^{{\rm source}} \endaligned \end{equation} satisfying Definition \ref{defn6488} (1) with respect to $\Xi_{(1)}$ such that: \begin{equation}\label{form6138} \aligned &(\Sigma_{(2)}(\vec{\frak x}_{(2)},\vec \sigma_{(2)}),\vec z_{(2)}(\vec{\frak x}_{(2)}, \vec \sigma_{(2)})\cup \vec w_{(2)}(\vec{\frak x}_{(2)},\vec \sigma_{(2)}))\cong\\ &\qquad (\Sigma_{(1)}(\vec{\frak x}_{(1)},\vec \sigma_{(1)}),\vec z_{(1)}(\vec{\frak x}_{(1)},\vec \sigma_{(1)}) \cup \vec w_{(1)}(\vec{\frak x}_{(1)},\vec \sigma_{(1)})). \endaligned \end{equation} Here $\vec{\frak x}_{(1)}$ and $\vec \sigma_{(1)}$ depend on $\vec{\frak x}_{(2)}$ and $\vec \sigma_{(2)}$. (See Figure \ref{Figuresec6p164}.)
\begin{figure}[h] \centering \includegraphics[scale=0.5]{Figuresec6p164} \caption{$\Sigma_{(1)}$, $\Sigma_{(2)}$ and $\Sigma_{(2)}(\vec {\frak x}_{(2)},{\vec \sigma}_{(2)})$.} \label{Figuresec6p164} \end{figure} Next, let $\frak y_{(2)} = (\vec{\frak x}_{(2)},\vec{\sigma}_{(2)},(u'_{(2),v}),(U'_{(2),v}),(\rho_{(2),e}),(\rho_{(2),i}))$ be an inconsistent map near $\frak u_{(2)}$ with respect to $\Xi_{(2)}$. Let $\vec{\frak x}_{(1)}$ and $\vec\sigma_{(1)}$ be chosen as in the previous paragraph. Let $\hat v$ be an interior vertex of $\check R_{(1)}$. The identification in (\ref{form6138}) and Condition \ref{choi655-2} imply: \begin{equation}\label{form614343bef} \Sigma^+_{(1),\hat v}(\vec{\frak x}_{(1)},\vec{\sigma}_{(1)})\subseteq \Sigma^+_{(2),\pi(\hat v)} (\vec{\frak x}_{(2)},\vec{\sigma}_{(2)}). \end{equation} We write $I_{\hat v}$ for this inclusion map. Define: \begin{equation}\label{form614343} \aligned U'_{(1),\hat v} &= \begin{cases} u'_{(2),\pi(\hat v)}\circ I_{\hat v} &\text{if $\lambda_{(2)}(\pi(\hat v)) = 0$} \\ U'_{(2),\pi(\hat v)}\circ I_{\hat v} &\text{if $\lambda_{(2)}(\pi(\hat v)) > 0$} \end{cases} \\ u'_{(1),\hat v} &=u'_{(2),\pi(\hat v)}\circ I_{\hat v}. \endaligned \end{equation} We also define: \begin{equation}\label{newform614154} \rho_{(1),e}= \begin{cases} \rho_{(2),e} &\text{if $e \in C^{\rm int}_{\rm th}(\check R_{(2)})\subset C^{\rm int}_{\rm th}(\check R_{(1)})$} \\ &\text{and $e$ is not a level $0$ edge,}\\ 1 &\text{otherwise}. \end{cases} \end{equation} We next define: \begin{equation} \rho_{(1),i} = \begin{cases} 1 \qquad &\text{if $ \nu(i-1) = \nu(i)$} \\ \rho_{(2),i} \qquad &\text{otherwise}. \end{cases} \end{equation} It is easy to check that $\frak y_{(1)} = (\vec{\frak x}_{(1)},\vec{\sigma}_{(1)},(u'_{(1),\hat v}),(U'_{(1),\hat v}),(\rho_{(1),e}),(\rho_{(1),i}))$ is an inconsistent map near $\frak u_{(1)}$ with respect to $\Xi_{(1)}$.
In fact, \eqref{form621}-\eqref{form623rev} for $\frak y_{(1)}$ follow from the corresponding identities for $\frak y_{(2)}$ and the definition. This discussion is summarized in the following lemma: \begin{lem}\label{lem65865777} Suppose $\frak u_{(1)}$ and $\frak u_{(2)}$ are as above and $\Xi_{(j)}$ is a TSD at $\frak u_{(j)}$ such that $\Xi_{(1)}$ and $\Xi_{(2)}$ satisfy Conditions \ref{choi655} and \ref{choi655-2}. Then an inconsistent map near $\frak u_{(2)}$ with respect to $\Xi_{(2)}$ can be regarded as an inconsistent map near $\frak u_{(1)}$ with respect to $\Xi_{(1)}$. \end{lem} \begin{rem} The equality (\ref{newform614154}) in the second case shows that the consistency equation is satisfied for such edges. \end{rem} Using the above argument, we can also verify the following lemma: \begin{lem} Suppose $\frak u_{(1)}$, $\frak u_{(2)}$, $\Xi_{(1)}$ and $\Xi_{(2)}$ are given as in Lemma \ref{lem65865777}. In particular, $\frak u_{(2)}$ can be regarded as an inconsistent solution: $$(\vec{\frak x}_{(0)},\vec{\sigma}_{(0)},(u'_{(0),\hat v}),(U'_{(0),\hat v}),(\rho_{(0),e}),(\rho_{(0),i}))$$ with respect to $\Xi_{(1)}$. An inconsistent map: $$\frak y_{(1)} = (\vec{\frak x}_{(1)},\vec{\sigma}_{(1)},(u'_{(1),\hat v}),(U'_{(1),\hat v}), (\rho_{(1),e}),(\rho_{(1),i}))$$ near $\frak u_{(1)}$ with respect to $\Xi_{(1)}$ may be regarded as an inconsistent map near $\frak u_{(2)}$ with respect to $\Xi_{(2)}$ if and only if the following conditions hold: \begin{itemize} \item[(i)] For each vertex $\hat v\in C^{\rm int}_0(\check R_{(1)})$, the distance between ${\frak x}_{(0),\hat v}$ and ${\frak x}_{(1),\hat v}$ is less than $\kappa_{(2)}$. (Recall that $\kappa_{(2)}$ is the size of $\Xi_{(2)}$.) \item[(ii)] For each edge $e\in C^{\rm int}_1(\check R_{(2)})\subset C^{\rm int}_1(\check R_{(1)})$, we have $|\sigma_{(1),e}|<\kappa_{(2)}$. \item[(iii)] If $e\in C^{\rm int}_1(\check R_{(2)})\subset C^{\rm int}_1(\check R_{(1)})$ is (resp.
is not) a level $0$ edge, then the image of the restriction of $u'_{(1),\hat v}$ to $[-5T_e,5T_e]_{r_e} \times [0,\pi]_{s_e}$ (resp. $[-5T_e,5T_e]_{r_e} \times S^1_{s_e}$) has diameter less than $\kappa_{(2)}$. \item[(iv)] If $c(\hat v)={\rm d}$ or ${\rm s}$, then the $C^2$-distance between the restrictions of $u'_{(0),\hat v}$ and $u'_{(1),\hat v}$ to $\Sigma^-_{\hat v}(\vec{\frak x},\vec \sigma)$ is less than $\kappa_{(2)}$. If $c(\hat v)={\rm D}$, then we demand that the $C^2$-distance between the restrictions of $U'_{(0),\hat v}$ and $U'_{(1),\hat v}$ to $\Sigma^-_{\hat v}(\vec{\frak x},\vec \sigma)$ is less than $\kappa_{(2)}$. \item[(v)] The consistency equation $$ \rho_{(1),e} = \rho_{(1),\lambda_{(1)}(\hat v_1)+1}\dots \rho_{(1),\lambda_{(1)}(\hat v_2)} $$ is satisfied for any edge $e \in C^{\rm int}_{\rm th}(\check R_{(1)})\setminus C^{\rm int}_{\rm th}(\check R_{(2)})$ which is not a level $0$ edge. \end{itemize} \end{lem} \subsection{The Choice of Obstruction Spaces} \label{subsub:chooseobst} In order to define an inconsistent {\it solution}, we need to fix obstruction spaces. For any inconsistent map $\frak u$ and for any other inconsistent map $\frak y$ which is close to $\frak u$, we explained how to make a choice of $\mathcal E_{0,\frak u}(\frak y)$ in Section \ref{sub:kuraconst}. We need to ensure that such choices can be made so that they satisfy some nice properties as we move $\frak u$. More precisely, we need to pick them so that they are {\it semi-continuous} with respect to $\frak u$. We will prove this property in the next subsection. In this subsection, we explain how we modify our choice of obstruction spaces. For $j=1,2$, let $\frak u_{(j)} = ((\Sigma_{(j),v},\vec z_{(j),v},u_{(j),v});v \in C^{\rm int}_0(\check R_{(j)}))$ be representatives of two elements of $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ in the strata corresponding to very detailed DD-ribbon trees $\check R_{(j)}$. We fix a TSD $\Xi_{(j)}$ at $\frak u_{(j)}$.
We do {\it not} assume that they are related as in Subsection \ref{subsub:Nplustri}. We also fix $E_{\frak u_{(1)},v} \subset L^2_{m,\delta}(\frak u_{(1),v})$ satisfying Condition \ref{cond643}. We wish to use $\{E_{\frak u_{(1)},v}\}$ to define obstruction spaces for an inconsistent map with respect to $\Xi_{(2)}$, under the assumption that $\frak u_{(2)}$ is close to $\frak u_{(1)}$. In particular, we assume that $\check R_{(2)}$ is obtained from $\check R_{(1)}$ by level shrinking, level $0$ edge shrinking and fine edge shrinking. As in the previous subsection, we may define a surjective map $\pi : \check R_{(1)} \to \check R_{(2)}$. Note that in Section \ref{sub:kuraconst}, we studied the case $\frak u_{(2)} = \frak u_{(1)}$. The following lemma is a straightforward consequence of the implicit function theorem (see \cite[Lemma 9.9]{fooo:const1}): \begin{lem}\label{lemma660} If $\frak u_{(2)}$ is close enough to $\frak u_{(1)}$ with respect to the $C^1$ distance, then for any $\hat v\in C_0^{\rm int}(\check R_{(1)})$, there exists a unique choice of $\vec w_{(2),(1),\hat v} \subset \Sigma_{(2),v}$ with $v$ being $\pi(\hat v)$ such that: \begin{enumerate} \item $(\Sigma_{(2),v},\vec z_{(2),v} \cup \coprod_{\pi(\hat v)=v} \vec w_{(2),(1),\hat v})$ is close to: $$\coprod_{\pi(\hat v)=v} (\Sigma_{(1),\hat v},\vec z^{\,\prime}_{(1),\hat v} \cup \vec w_{(1),\hat v})$$ in the moduli space of stable curves with marked points. Here $\vec z^{\,\prime}_{(1),\hat v}$ is the subset of $\vec z_{(1),\hat v}$ consisting of all marked points on $\Sigma_{(1),\hat v}$ and nodal points on $\Sigma_{(1),\hat v}$ which correspond to the edges $e$ incident to $\hat v$ with $\pi(e) \ne v$. \item $u_{(2),v}(w_{(2),(1),\hat v,i}) \in \mathcal N_{(1),\hat v, i}$. \end{enumerate} \end{lem} From now on, we assume that $\frak u_{(2)}$ is close enough to $\frak u_{(1)}$ such that the claim in Lemma \ref{lemma660} holds.
Furthermore, let an inconsistent map: $$ \frak y = (\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_{e}),(\rho_{i})) $$ with respect to $\Xi_{(2)} = (\vec w_{(2)},(\mathcal N_{(2),v,i}),(\phi_{(2),v}),(\varphi_{(2),v,e}),\kappa_{(2)})$ be fixed. Let $(\Sigma(\vec{\frak x},\vec{\sigma}),\vec z(\vec{\frak x},\vec{\sigma}) \cup \vec w(\vec{\frak x},\vec{\sigma}))$ denote the representative of $\vec{\frak x}$, given as a part of the data of $\frak y$. Let $\hat v$ be a vertex of $\check R_{(1)}$ and $v = \pi(\hat v)$. Lemma \ref{lemma660} allows us to find $w_{(2),(1),\hat v,i} \in \Sigma_{(2),v}$. If $\Xi_{(2)}$ is small enough, then we can regard $w_{(2),(1),\hat v,i}$ as an element of $\Sigma_{(2),v}^-$, and hence an element of $\Sigma_{(2),v}^-(\vec{\frak x},\vec{\sigma})$. This implies that if we replace $\vec w(\vec{\frak x},\vec{\sigma})$ with the points $w_{(2),(1),\hat v,i}$ to obtain: \begin{equation}\label{surgered-element} (\Sigma_v(\vec{\frak x},\vec{\sigma}),\vec z_v(\vec{\frak x},\vec{\sigma}) \cup \coprod_{\pi(\hat v)=v} \vec w_{(2),(1),\hat v}) \end{equation} then \eqref{surgered-element} is close to $\coprod_{\pi(\hat v)=v} (\Sigma_{(1),\hat v},\vec z^{\,\prime}_{(1),\hat v} \cup \vec w_{(1),\hat v})$. We use this fact and the target space parallel transportation in the same way as in Section \ref{sub:Obst} to obtain the following map for any $\hat v \in C^{\rm int}_0(\check R_{(1)})$: $$ \mathcal P_{\hat v} : E_{\frak u_{(1)},\hat v} \to L^2_{m,\delta}(\Sigma^-_{(2),v}(\vec{\frak x},\vec{\sigma})). $$ We then define for any $v\in \check R_{(2)}$: $$ E_{\frak u_{(2)},\frak u_{(1)},v}(u'_v) = \bigoplus_{\hat v,\, \pi(\hat v) = v} \mathcal P_{\hat v}(E_{\frak u_{(1)},\hat v}) $$ for $\lambda(v) = 0$. We also define $E_{\frak u_{(2)},\frak u_{(1)},v}(U'_v)$ for $\lambda(v) > 0$ by a similar formula.
Now we replace \eqref{form63131} by \begin{equation}\label{form6313122} \aligned \mathcal E_{0,\frak u_{(2)},\frak u_{(1)}}(\frak y)= &\bigoplus_{v \in C_0^{\rm int}(\check R_{(2)}),\,\lambda(v) = 0} E_{\frak u_{(2)},\frak u_{(1)},v}(u'_v) \\ &\oplus\bigoplus_{v \in C_0^{\rm int}(\check R_{(2)}),\,\lambda(v) > 0} E_{\frak u_{(2)},\frak u_{(1)},v}(U'_v). \endaligned \end{equation} \begin{lem}\label{lemn6661} Suppose $\Xi_{(1)}$ is small enough such that we can apply the construction of Section \ref{sub:kuraconst} to $\Xi_{(1)}$ and the vector spaces $\{E_{\frak u_{(1)},\hat v}\}$ to obtain a Kuranishi chart at $\frak u_{(1)}$. Then for $\Xi_{(2)}$ small enough, applying the construction of Section \ref{sub:kuraconst} to $\{E_{\frak u_{(2)},\frak u_{(1)},v}\}$ (instead of $\{E_{\frak u_{(2)},v}\}$) gives rise to a Kuranishi chart at $\frak u_{(2)}$. \end{lem} This is immediate from the construction of Section \ref{sub:kuraconst}. In fact, the choice of obstruction bundles we take here satisfies the `smoothness' condition. See Definition \ref{defn6888} or \cite[Definition 5.1 (2)]{foooexp}. (Smoothness here means smoothness with respect to $\frak y$.) For each $\frak p \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$, let $\Xi_{\frak p}=(\vec w_{\frak p}, (\mathcal N_{\frak p,v,i}), (\phi_{\frak p,v}),(\varphi_{\frak p,v,e}),\kappa_{\frak p})$ and $E_{\frak p,v}$ be a TSD and obstruction vector spaces at $\frak p$. We assume that $\Xi_{\frak p}$ is small enough such that the assumption of Lemma \ref{lemn6661} holds. Let $\frak U(\frak p)$ be a neighborhood of $\frak p$ in $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ determined by the TSD $\Xi_{\frak p}$. We also fix a compact neighborhood $\mathcal K(\frak p)$ of $\frak p$ which is a subset of $\frak U(\frak p)$. 
Compactness of $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ implies that we can find a finite subset \begin{equation}\label{formfrakJ} {\frak J} = \{\frak p_j : j=1,\dots,J\} \subset \mathcal M^{\rm RGW}_{k+1}(L;\beta) \end{equation} such that \begin{equation} \mathcal M^{\rm RGW}_{k+1}(L;\beta) \subseteq \bigcup_{j=1}^{J} {\rm Int}\,(\mathcal K(\frak p_j)). \end{equation} For $\frak u \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$, we define: \begin{equation} {\frak J}(\frak u) = \{ \frak p_j \mid \frak u \in \mathcal K(\frak p_j)\}. \end{equation} \begin{lem} \label{sum=direct-sum} Let $\check R_j$ be the very detailed tree associated to $\frak p_j$. We can perturb $\{E_{\frak p_j,v} \mid v \in C^{\rm int}_0(\check R_j)\}$ by an arbitrarily small amount so that the following holds. For any $\frak u \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$, the vector spaces $\mathcal E_{0,\frak u,\frak p_j}(\frak u)$ for $\frak p_j \in {\frak J}(\frak u)$ are transversal, i.e., the sum of $\mathcal E_{0,\frak u,\frak p_j}(\frak u)$ for $\frak p_j \in {\frak J}(\frak u)$ is the direct sum: \begin{equation} \bigoplus_{\frak p_j \in {\frak J}(\frak u)} \mathcal E_{0,\frak u,\frak p_j}(\frak u). \end{equation} \end{lem} \begin{proof} The proof is the same as the proof of the analogous statement in the case of the stable map compactification. See the proof of \cite[Lemma 11.7]{fooo:const1} in \cite[Subsection 11.4]{fooo:const1}. \end{proof} Now we define a Kuranishi chart at each point $\frak u \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$ as follows: \begin{defn}\label{Kura-chart} Let $\Xi_{\frak u} = (\vec w_{\frak u},(\mathcal N_{\frak u,v,i}), (\phi_{\frak u,v}),(\varphi_{\frak u,v,e}),\kappa_{\frak u})$ be a TSD, which is small enough such that the conclusion of Lemma \ref{lemn6661} holds for $\frak u_{(1)}=\frak p_j$, $\frak u_{(2)}=\frak u$, $\Xi_{(1)}=\Xi_{\frak p_j}$ and $\Xi_{(2)}=\Xi_{\frak u}$ with $\frak p_j$ being an arbitrary element in ${\frak J}(\frak u)$.
The Kuranishi neighborhood $\mathcal U(\frak u,\Xi_{\frak u})$ is the set of the equivalence classes of inconsistent maps $\frak y = (\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_e),(\rho_i))$ near $\frak u$ such that \begin{equation}\label{form6157} \aligned \overline\partial_{j_{\vec{\frak x},\vec \sigma}} u'_v &\in \bigoplus_{\frak p_j \in {\frak J}(\frak u)} \mathcal E_{0,\frak u,\frak p_j}(\frak y),\\ \overline\partial_{j_{\vec{\frak x},\vec \sigma}} U'_v &\in \bigoplus_{\frak p_j \in {\frak J}(\frak u)} \mathcal E_{0,\frak u,\frak p_j}(\frak y). \endaligned \end{equation} In other words, it is the set of inconsistent solutions (Definition \ref{defn6488}) where \eqref{form6119} and \eqref{form6120} are replaced with \eqref{form6157}. The obstruction bundle $\mathcal E_{\frak u}$ is defined in the same way as in \eqref{form1162}: \begin{equation}\label{form116200} \mathcal E_{0,\frak u}(\frak y)= \bigoplus_{\frak p_j \in {\frak J}(\frak u)} \mathcal E_{0,\frak u,\frak p_j}(\frak y), \end{equation} \begin{equation}\label{form1162rev} \mathcal E_{\frak u,\Xi_{\frak u}}(\frak y) = \mathcal E_{0,\frak u}(\frak y) \oplus \bigoplus_{e \in C^{\rm int}_1(\check R),\, \lambda(e) > 0} \mathscr L_e, \end{equation} where $\check R$ is the very detailed tree associated to $\frak u$. The Kuranishi map $\frak s_{\frak u}$ is defined in the same way as in (\ref{form630}), (\ref{form631}) and (\ref{form632}), and the parametrization map $\psi _{\frak u}$ is defined in the same way as in \eqref{paramatp}. \end{defn} \subsection{Construction of Coordinate Change I} \label{subsub:coordinatechage1} In this and the next subsections, we construct coordinate changes.
The next two lemmas state the semi-continuity of our obstruction spaces, a property that we alluded to at the beginning of the previous subsection.\footnote{Compare to \cite[Definition 5.1 (4)]{fooo:const1} or Definition \ref{defn684684}.} \begin{lem}\label{loem668} For any $\frak u_{(1)} \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$, there exists a neighborhood $U(\frak u_{(1)})$ of $\frak u_{(1)}$ in $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ such that for any $\frak u_{(2)} \in U(\frak u_{(1)})$: \begin{equation}\label{form6162} {\frak J}(\frak u_{(2)}) \subseteq {\frak J}(\frak u_{(1)}). \end{equation} \end{lem} \begin{proof} This is obvious because we pick the subspaces $\mathcal K(\frak p_j)$ to be closed. \end{proof} \begin{lem}\label{lem6666} Let $\frak u_{(2)} \in U(\frak u_{(1)})$. For $j=1,2$, we choose a TSD $\Xi_{(j)} = (\vec w_{(j)},(\mathcal N_{(j),v,i}), (\phi_{(j),v}),(\varphi_{(j),v,e}),\kappa_{(j)})$. We assume that $\frak u_{(1)}$, $\frak u_{(2)}$, $\Xi_{(1)}$ and $\Xi_{(2)}$ satisfy Conditions \ref{choi655} and \ref{choi655-2}. Let: $$ \frak y_{(2)} = (\vec{\frak x}_{(2)},\vec{\sigma}_{(2)},(u'_{(2),v}),(U'_{(2),v}),(\rho_{(2),e}),(\rho_{(2),i})) $$ be an inconsistent map near $\frak u_{(2)}$ with respect to $\Xi_{(2)}$ and $$ \frak y_{(1)} = (\vec{\frak x}_{(1)},\vec{\sigma}_{(1)},(u'_{(1),\hat v}),(U'_{(1),\hat v}), (\rho_{(1),e}),(\rho_{(1),i})) $$ be the inconsistent map near $ \frak u_{(1)}$ with respect to $\Xi_{(1)}$, constructed by Lemma \ref{lem65865777}. Let $\frak p_j \in {\frak J}(\frak u_{(2)})$. Then we have an isomorphism: \begin{equation}\label{formnew6163} \mathcal E_{0,\frak u_{(2)};\frak p_j}(\frak y_{(2)})\cong \mathcal E_{0,\frak u_{(1)};\frak p_j}(\frak y_{(1)}). \end{equation} \end{lem} \begin{proof} Let $I_{\hat v}$ be the inclusion map (\ref{form614343bef}). This is a holomorphic embedding. Then (\ref{form614343}) induces the required isomorphism.
The fact that the transversality constraint (\ref{form6121}) is preserved is an immediate consequence of (\ref{form614343}) and our choice of transversals $\mathcal N_{(j),v,i}$. \end{proof} \begin{lem} Suppose $\frak u_{(1)}$, $\frak u_{(2)}$, $\Xi_{(1)}$, $\Xi_{(2)}$, $\frak y_{(1)}$ and $\frak y_{(2)}$ are given as in Lemma \ref{lem6666}. If $\frak y_{(2)}$ is an element of $\widehat {\mathcal U}(\frak u_{(2)},\Xi_{(2)})$, then $\frak y_{(1)}$ is an element of $\widehat{\mathcal U}(\frak u_{(1)},\Xi_{(1)})$. \end{lem} \begin{proof} The isomorphism induced by $I_{\hat v}$ sends $\overline\partial u'_{(2),\hat v}$ (resp. $\overline\partial U'_{(2),\hat v}$) to $\overline\partial u'_{(1),\hat v}$ (resp. $\overline\partial U'_{(1),\hat v}$). This is a consequence of (\ref{form614343}). Therefore, if $\frak y_{(2)}$ satisfies (\ref{form6157}), then $\frak y_{(1)}$ satisfies (\ref{form6157}). \end{proof} We thus constructed a $\Gamma_{\frak u_{(2)}}$-invariant map: $$ \varphi_{\frak u_{(1)}\frak u_{(2)}} : \widehat{\mathcal U}(\frak u_{(2)},\Xi_{(2)}) \to \widehat{\mathcal U}(\frak u_{(1)},\Xi_{(1)}). $$ It is clear from the construction that the above map can be lifted to a map $\widetilde{\varphi}_{\frak u_{(1)}\frak u_{(2)}}$ from $\widetilde{\mathcal U}(\frak u_{(2)},\Xi_{(2)})$ to $\widetilde{\mathcal U}(\frak u_{(1)},\Xi_{(1)})$. \begin{lem}\label{lem668888} The maps $\varphi_{\frak u_{(1)}\frak u_{(2)}}$ and $\widetilde \varphi_{\frak u_{(1)}\frak u_{(2)}}$ are $C^{\ell}$ embeddings.
\end{lem} \begin{proof} It follows from the definition of $\widetilde \varphi_{\frak u_{(1)},\frak u_{(2)}}$ and the choices of $\Xi_{(j)}$ that the following two diagrams commute: \begin{equation}\label{dia6164} \begin{CD} \widetilde{\mathcal U}(\frak u_{(2)},\Xi_{(2)}) @>{F_1}>> \displaystyle{\prod_{v \in C^{\rm int}_0(\check R_{(2)})}\mathcal V_{(2),v}^{\rm source} \atop \qquad\qquad\quad \times \prod_{e \in C^{\rm int}_1(\check R{(2)})}\mathcal V_{(2),e}^{{\rm deform}} \times (D^2)^{\vert\lambda_{(2)}\vert}}\\ @VV{\widetilde{\varphi}_{\frak u_{(1)}\frak u_{(2)}}}V @VV{R_1}V \\ \widetilde{\mathcal U}(\frak u_{(1)},\Xi_{(1)}) @>{}>> \displaystyle{\prod_{v \in C^{\rm int}_0(\check R_{(1)})}\mathcal V_{(1),v}^{\rm source} \atop \qquad\qquad\quad \times \prod_{e \in C^{\rm int}_1(\check R{(1)})}\mathcal V_{(1),e}^{{\rm deform}} \times (D^2)^{\vert\lambda_{(1)}\vert}} \end{CD} \end{equation} \begin{equation}\label{dia6165} \begin{CD} \widetilde{\mathcal U}(\frak u_{(2)},\Xi_{(2)}) @>{F_2}>> \oalign{ $\displaystyle{\prod_{v \in C^{\rm int}_0(\check R_{(2)}), \lambda_{(2)}(v) = 0} L^2_{m+1}(\Sigma_{(2),v}^{-},X\setminus \mathcal D)}$\\ $\displaystyle{\times \prod_{v \in C^{\rm int}_0(\check R_{(2)}), \lambda_{(2)}(v) > 0} L^2_{m+1}(\Sigma_{(2),v}^-, \mathcal N_{\mathcal D}X\setminus \mathcal D)}$} \\ @VV{\widetilde \varphi_{\frak u_{(1)}\frak u_{(2)}}}V @VV{R_2}V \\ \widetilde{\mathcal U}(\frak u_{(1)},\Xi_{(1)}) @>{}>> \oalign{ $\displaystyle{\prod_{v \in C^{\rm int}_0(\check R_{(1)}), \lambda_{(1)}(v) = 0} L^2_{m+1}(\Sigma_{(1),v}^-,X\setminus \mathcal D)}$ \\ $\displaystyle{\times\prod_{v \in C^{\rm int}_0(\check R_{(1)}), \lambda_{(1)}(v) > 0}L^2_{m+1}(\Sigma_{(1),v}^-,\mathcal N_{\mathcal D}X\setminus \mathcal D)}$} \end{CD} \end{equation} Here the horizontal arrows $F_1$ and $F_2$ are as in \eqref{ccc}. The right vertical arrow $R_1$ of Diagram \eqref{dia6164} is obtained by requiring (\ref{form6138}) and is a smooth embedding. 
Diagram (\ref{dia6164}) commutes since $\widetilde \varphi_{\frak u_{(1)}\frak u_{(2)}}$ does not change the conformal structure of the source (marked) curves. The right vertical arrow $R_2$ of Diagram (\ref{dia6165}) is obtained by restriction of the domain and is a smooth map. Diagram (\ref{dia6165}) commutes because of Condition \ref{choi655-2}. Now the definitions of the $C^{\ell}$ structures on $\widetilde {\mathcal U}(\frak u_{(1)},\Xi_{(1)})$ and $\widetilde{\mathcal U}(\frak u_{(2)},\Xi_{(2)})$ imply that $\widetilde \varphi_{\frak u_{(1)}\frak u_{(2)}}$ is a $C^{\ell}$ map. Unique continuation implies that the differential of the map $(R_1\circ F_1,R_2\circ F_2)$ is injective. In particular, this implies that $\widetilde \varphi_{\frak u_{(1)}\frak u_{(2)}}$ is an embedding. A similar argument applies to the map $\varphi_{\frak u_{(1)}\frak u_{(2)}}$. \end{proof} We next define a bundle map $\overline \varphi_{\frak u_{(1)}\frak u_{(2)}} : \mathcal E_{\frak u_{(2)}} \to \mathcal E_{\frak u_{(1)}}$ which lifts $\varphi_{\frak u_{(1)}\frak u_{(2)}}$. Using (\ref{form6162}) and (\ref{formnew6163}), we obtain a linear embedding: \begin{equation}\label{form166633} \bigoplus_{\frak p_j \in {\frak J}(\frak u_{(2)})} \mathcal E_{0,\frak u_{(2)},\frak p_j}(\frak y_{(2)})\to \bigoplus_{\frak p_j \in {\frak J}(\frak u_{(1)})} \mathcal E_{0,\frak u_{(1)},\frak p_j}(\frak y_{(1)}) \end{equation} if $\frak y_{(1)} = \varphi_{\frak u_{(1)}\frak u_{(2)}}(\frak y_{(2)})$. The map: \begin{equation}\label{form166634} \bigoplus_{e \in C^{\rm int}_{\rm th}(\check R_{(2)}),\, \lambda(e) > 0} \mathscr L_e\to \bigoplus_{e \in C^{\rm int}_{\rm th}(\check R_{(1)}),\, \lambda(e) > 0} \mathscr L_e \end{equation} is defined as the identity on the factors $\mathscr L_e$ with $e \in C^{\rm int}_{\rm th}(\check R_{(2)}) \subset C^{\rm int}_{\rm th}(\check R_{(1)})$ and as zero on the other factors. The bundle map $\overline \varphi_{\frak u_{(1)}\frak u_{(2)}}$ is defined using \eqref{form166633} and \eqref{form166634}.
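In the simplest nontrivial case, the map \eqref{form166634} can be written down explicitly. The following snippet is a toy illustration only; the edge names $e_1$, $e_2$ are hypothetical and not taken from the text.

```latex
% Toy illustration (hypothetical edges $e_1$, $e_2$): suppose the
% positive-level thick interior edges satisfy
%   $\{e_1\} = C^{\rm int}_{\rm th}(\check R_{(2)}) \subset
%    C^{\rm int}_{\rm th}(\check R_{(1)}) = \{e_1, e_2\}$.
% Then the map in question is the inclusion of the common factor:
\[
  \mathscr L_{e_1} \longrightarrow \mathscr L_{e_1} \oplus \mathscr L_{e_2},
  \qquad
  x \longmapsto (x,0),
\]
% i.e.\ the identity on the factor indexed by the shared edge $e_1$
% and zero on the factor indexed by the shrunk edge $e_2$.
```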
Analogously to Lemma \ref{lem668888}, we can prove that $\overline \varphi_{\frak u_{(1)}\frak u_{(2)}}$ is $C^{\ell}$. \begin{lem} $$ \frak s_{(1)} \circ \varphi_{\frak u_{(1)}\frak u_{(2)}} = \overline \varphi_{\frak u_{(1)}\frak u_{(2)}} \circ \frak s_{(2)}. $$ \end{lem} \begin{proof} On the factor in \eqref{form166633} this is a consequence of the definitions of the map $\varphi_{\frak u_{(1)}\frak u_{(2)}}$ and \eqref{form166633}. Namely, it follows from \eqref{form614343} and the fact that $I_{\hat v}$ is bi-holomorphic. For the factor in \eqref{form166634}, this is a consequence of (\ref{newform614154}). \end{proof} Compatibility of the parametrization map with $\varphi_{\frak u_{(1)}\frak u_{(2)}}$ is also an immediate consequence of the definitions. We have thus proved: \begin{prop} Let $\frak u_{(1)}$, $\frak u_{(2)}$, $\Xi_{(1)}$ and $\Xi_{(2)}$ satisfy Conditions \ref{choi655} and \ref{choi655-2} and $\frak u_{(2)} \in U(\frak u_{(1)})$. Then the pair $(\varphi_{\frak u_{(1)}\frak u_{(2)}},\overline\varphi_{\frak u_{(1)}\frak u_{(2)}})$ is a coordinate change of Kuranishi charts. \end{prop} \subsection{Construction of Coordinate Change II} \label{subsub:coordinatechage2} Let $\frak u=((\Sigma_{v},\vec z_{v},u_{v});v \in C^{\rm int}_0(\check R)) \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$. We fix two TSDs: $$ \hspace{3cm} \Xi_{(j)} = (\vec w_{(j)},(\mathcal N_{(j),v,i}),(\phi_{(j),v}),(\varphi_{(j),v,e}),\kappa_{(j)}) \hspace{1cm} j=1,2 $$ at $\frak u$ such that we can use Definition \ref{Kura-chart} to form Kuranishi charts: $$ \mathcal U_{\frak u,\Xi_{(j)}} = (\mathcal U(\frak u,\Xi_{(j)}),\mathcal E_{\frak u,\Xi_{(j)}},\Gamma_{\frak u},\frak s_{\frak u,\Xi_{(j)}},\psi_{\frak u,\Xi_{(j)}}). $$ These Kuranishi charts depend on the choices of the subset $\{\frak p_j\}$ of the moduli space $ \mathcal M^{\rm RGW}_{k+1}(L;\beta)$, the TSDs $\{\Xi_{\frak p_j}\}$, the vector spaces $E_{\frak p_j,v}$ and the open sets $\mathcal K(\frak p_j)$.
We assume that these choices agree with each other for the above two charts. In this subsection, we will construct a coordinate change from $\mathcal U_{\frak u,\Xi_{(2)}}$ to $\mathcal U_{\frak u,\Xi_{(1)}}$.\footnote{In fact, these two coordinate charts are isomorphic after possibly shrinking $\mathcal U(\frak u,\Xi_{(j)})$ into appropriate open subspaces.} The TSD $\Xi_{(j)}$ determines the subspace $\Sigma_{(j),v}^-$ of $\Sigma_{v}$ for each interior vertex $v$ of $\check R$. We assume that $\Xi_{(2)}$ is small enough such that $\vec w_{(1)} \cap\Sigma_v$ is a subset of $\Sigma^-_{(2),v}$. We pick an inconsistent map with respect to $\Xi_{(2)}$ denoted by: $$ \frak y_{(2)} = (\vec{\frak x}_{(2)},\vec{\sigma}_{(2)},(u'_{(2),v}),(U'_{(2),v}),(\rho_{(2),e}),(\rho_{(2),i})) $$ Associated to $\frak y_{(2)}$, we have $\Sigma_{(2),v}^-(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)})$, which comes with marked points \[\vec z_{(2),v}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)}) \cup \vec w_{(2),v}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)}).\] Here the elements of $\vec z_{(2),v}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)})$ are in correspondence with the boundary marked points of $\Sigma_v$ and $\vec w_{(2),v}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)})$ are in correspondence with the additional marked points $\vec w_{(2),v}$ given by $\Xi_{(2)}$. We will write $\vec z_{(2)}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)})$ for the union of all boundary marked points of $\Sigma_{(2)}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)})$. The following lemma is the analogue of Lemma \ref{lemma660}: \begin{lem}\label{lem67111} There exists $\vec w_{(2);(1),v}(\frak y_{(2)}) \subset \Sigma^-_{(2),v}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)})$ such that: \begin{enumerate} \item $w_{(2);(1),v,i}(\frak y_{(2)})$ is close to $w_{(1),v,i}$. Here we identify $\Sigma^-_{(2),v}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)})$ and $\Sigma^-_{(2),v}$ using $\Xi_{(2)}$. 
\item $u'_{(2),v}(w_{(2);(1),v,i}(\frak y_{(2)})) \in \mathcal N_{(1),v,i}$. \end{enumerate} \end{lem} We define: \[ \vec w_{(2);(1)}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)}) = \bigcup_{v\in C_0^{\rm int}(\check R)} \vec w_{(2);(1),v}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)}). \] Then $(\Sigma_{(2)}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)}),\vec z_{(2)}(\frak y_{(2)}) \cup \vec w_{(2);(1)}(\frak y_{(2)}))$ is close to $(\Sigma,\vec z \cup \vec w_{(1)})$ in the moduli space of bordered nodal curves. Therefore, there exist $\vec{\frak x}_{(1)}, \vec\sigma_{(1)}$ such that: \begin{equation}\label{form6168} (\Sigma_{(2)}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)}),\vec z_{(2)}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)}) \cup \vec w_{(2);(1)}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)})) \cong \hspace{2cm} \end{equation} \[ \hspace{2cm}(\Sigma_{(1)}(\vec{\frak x}_{(1)},\vec{\sigma}_{(1)}),\vec z_{(1)}(\vec{\frak x}_{(1)},\vec{\sigma}_{(1)})\cup \vec w_{(1)}(\vec{\frak x}_{(1)},\vec{\sigma}_{(1)})). \] Here we use $\Xi_{(1)}$ to define the right hand side. Let $I$ be an isomorphism from the right hand side of \eqref{form6168} to the left hand side. Note that the choices of $\vec{\frak x}_{(1)}, \vec\sigma_{(1)}$ and $I$ are unique up to an element of $\Gamma_{\frak u}$. We consider decompositions: \begin{equation}\label{form614444prerev1} \aligned \Sigma_{(j)}(\vec{\frak x}_{(j)},\vec{\sigma}_{(j)})= &\coprod_{v \in C^{\rm int}_0(\check R)} \Sigma^-_{(j),v}(\vec{\frak x}_{(j)},\vec{\sigma}_{(j)})\\ &\cup \coprod_{e \in C^{\rm int}_1(\check R)} [-5T_{(j),e},5T_{(j),e}] \times S^1 \endaligned \end{equation} for $j=1,2$. In the above identity, we define $T_{(j),e}$ by requiring $e^{-10T_{(j),e}} = \vert\sigma_{(j),e}\vert$. Here, for simplicity, we assume that $\sigma_{(2),e}$ is non-zero for all interior edges $e$ of $\check R$. A similar discussion applies to the case that $\sigma_{(2),e}=0$ with minor modifications.
For example, in \eqref{form614444prerev1} we need to include two half cylinders for each $e$ with $\sigma_{(2),e}=0$. We also have: \begin{equation}\label{form614444prerev12} \aligned \Sigma_{(j)}(\vec{\frak x}_{(j)},\vec{\sigma}_{(j)})= \bigcup_{v \in C^{\rm int}_0(\check R)}\Sigma^+_{(j),v}(\vec{\frak x}_{(j)},\vec{\sigma}_{(j)}). \endaligned \end{equation} Although $I$ in \eqref{form6168} is an isomorphism, it does not respect the decompositions in \eqref{form614444prerev1} or \eqref{form614444prerev12} for $j=1,2$. This is because $\Xi_{(1)} \ne \Xi_{(2)}$.\footnote{Conditions \ref{choi655} and \ref{choi655-2} are used in Subsection \ref{subsub:coordinatechage1} to show the compatibility of the similar decompositions. We do not assume them here.} Nevertheless, one can easily prove: \begin{lem} If $\Xi_{(2)}$ is small enough, then $I$ can be chosen such that the following holds. Let $v \in C^{\rm int}_0(\check R)$ and $\frak z \in \Sigma^+_{(1),v}(\vec{\frak x}_{(1)},\vec{\sigma}_{(1)})$. Then at least one of the following conditions holds: \begin{enumerate} \item [(I)] $I(\frak z) \in \Sigma^+_{(2),v}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)})$. \item[(II)] There exists $e \in C^{\rm int}_1(\check R)$ with $\partial e = \{v,v'\}$ such that $I(\frak z) \in \Sigma^+_{(2),v',\frak x_{(2),v'}}$. \end{enumerate} \end{lem} This is a consequence of the fact that the decomposition (\ref{form614444prerev12}) is `mostly preserved' by $I$. Now we define $u'_{(1),v}$, $U'_{(1),v}$ as follows. If $\lambda(v)=0$, we have: \begin{equation}\label{form17166} u'_{(1),v}(\frak z) = \begin{cases} u'_{(2),v}(\frak z) \quad &\text{if (I) holds,} \\ U'_{(2),v'}(\frak z)\quad &\text{if (II) holds, $\lambda(v) < \lambda(v')$,} \\ u'_{(2),v'}(\frak z)\quad &\text{if (II) holds, $\lambda(v)=\lambda(v')$}.
\end{cases} \end{equation} and if $\lambda(v)>0$, we have: \begin{equation}\label{form17266} U'_{(1),v}(\frak z) = \begin{cases} U'_{(2),v}(\frak z) \quad &\text{if (I) holds,} \\ U'_{(2),v'}(\frak z)\quad &\text{if (II) holds, $\lambda(v)=\lambda(v')$,}\\ {\rm Dil}_{\rho_{{(2)},e}} \circ U'_{(2),v'}(\frak z)\quad &\text{if (II) holds, $\lambda(v) < \lambda(v')$,}\\ {\rm Dil}_{1/\rho_{{(2)},e}} \circ U'_{(2),v'}(\frak z)\quad &\text{if (II) holds, $0 <\lambda(v') < \lambda(v)$}, \\ {\rm Dil}_{1/\rho_{{(2)},e}} \circ u'_{(2),v'}(\frak z)\quad &\text{if (II) holds, $0 =\lambda(v') < \lambda(v)$.} \end{cases} \end{equation} Using the fact that $\frak y_{(2)}$ satisfies (\ref{form621}), (\ref{form6222}), (\ref{form623}), we can easily check that in the case that (I) and (II) are both satisfied, the right hand sides coincide. \par We also define $$ \rho_{(1),e} = \rho_{(2),e}, \qquad \rho_{(1),i} = \rho_{(2),i}. $$ \begin{lem}\label{lem676} The 6-tuple $$ \frak y_{(1)} = (\vec{\frak x}_{(1)},\vec{\sigma}_{(1)},(u'_{(1),v}),(U'_{(1),v}),(\rho_{(1),e}),(\rho_{(1),i})) $$ is an inconsistent solution near $\frak u$ with respect to $\Xi_{(1)}$. \end{lem} \begin{proof} Definition \ref{defn6488} (1), (2), (3) are obvious. (4)-(8), (11) and (12) follow from the definition of $\frak y_{(1)}$ and the corresponding conditions for $\frak y_{(2)}$. (9) and (10) hold after shrinking the size of $\Xi_{(2)}$ if necessary. (13) is a consequence of Lemma \ref{lem67111}. \end{proof} Thus, after shrinking the size of $\Xi_{(2)}$ if necessary, we may define: \begin{equation} \widetilde \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}:\widetilde {\mathcal U}(\frak u,\Xi_{(2)}) \to \widetilde{\mathcal U}(\frak u,\Xi_{(1)}) \end{equation} by: \begin{equation} \widetilde \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}(\frak y_{(2)}) = \frak y_{(1)}.
\end{equation} Similarly, we can define $\varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}:\widehat {\mathcal U}(\frak u,\Xi_{(2)}) \to \widehat{\mathcal U}(\frak u,\Xi_{(1)})$. \begin{lem} The maps $\widetilde \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$ and $\varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$ are $C^{\ell}$ diffeomorphisms into their images. \end{lem} \begin{proof} We cannot apply the same proof as in Lemma \ref{lem668888}. In fact, Diagram \eqref{dia6165} does not commute anymore because our TSDs $\Xi_{(1)}$ and $\Xi_{(2)}$ are not compatible in the sense of Conditions \ref{choi655} and \ref{choi655-2}. In order to resolve this issue, we need to modify the definition of the right vertical arrow in Diagram \eqref{dia6165}. Assuming $\Xi_{(2)}$ is small enough, we define a map: \[ \frak I_{v_0} : \prod_{v \in C^{\rm int}_0(\check R)}\mathcal V_{(2),v}^{\rm source} \times \prod_{e \in C^{\rm int}_1(\check R)}\mathcal V_{(2),e}^{{\rm deform}}\times \Sigma_{(1),v_0}^{-}\to \Sigma_{(2),v_0}^{-} \] for any interior vertex $v_0$ of $\check R$ as follows. Fix an element $(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)})$ of $\prod_{v \in C^{\rm int}_0(\check R)}\mathcal V_{(2),v}^{\rm source} \times\prod_{e \in C^{\rm int}_1(\check R)}\mathcal V_{(2),e}^{{\rm deform}}$, and let $(\vec{\frak x}_{(1)},\vec{\sigma}_{(1)})$ be the element of $\prod_{v \in C^{\rm int}_0(\check R)}\mathcal V_{(1),v}^{\rm source} \times\prod_{e \in C^{\rm int}_1(\check R)}\mathcal V_{(1),e}^{{\rm deform}}$ that satisfies \eqref{form6168}. By taking $\Xi_{(2)}$ small enough, we can form the following composition: \begin{equation}\label{form6174} \Sigma_{(1),v_0}^{-} \to\Sigma^-_{(1),v_0}(\vec{\frak x}_{(1)},\vec{\sigma}_{(1)})\to \Sigma^-_{(2),v_0}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)}) \to \Sigma^-_{(2),v_0} \end{equation} Here the first map is defined using $\Xi_{(1)}$, the second map is induced by the isomorphism \eqref{form6168}, and the last map is defined using $\Xi_{(2)}$.
For $\frak z\in \Sigma_{(1),v_0}^{-}$, we define $\frak I_{v_0}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)},\frak z)$ to be the image of $\frak z$ by the map \eqref{form6174}. It is clear that $\frak I_{v_0}$ is a smooth map. For a vertex $v_0$ with $\lambda(v_0)=0$, define: \[ \aligned \frak I_{v_0}^* : &\prod_{v \in C^{\rm int}_0(\check R)}\mathcal V_{(2),v}^{\rm source} \times\prod_{e \in C^{\rm int}_1(\check R)}\mathcal V_{(2),e}^{{\rm deform}}\\ &\times L^2_{m+\ell+1}(\Sigma_{(2),v_0}^-,X\setminus \mathcal D)\to L^2_{m+1}(\Sigma_{(1),v_0}^{-},X\setminus \mathcal D) \endaligned \] as follows: \[ \frak I_{v_0}^*(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)},u')(\frak z)=u'(\frak I_{v_0}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)},\frak z)). \] Note that we pick different Sobolev exponents for the Sobolev spaces on the domain and the target of $\frak I_{v_0}^*$. This allows us to obtain a $C^{\ell}$ map $\frak I_{v_0}^*$. Similarly, for a vertex $v_0$ with $\lambda(v_0)>0$, we can define a $C^{\ell}$ map: \[ \aligned \frak I_{v_0}^* : &\prod_{v \in C^{\rm int}_0(\check R)}\mathcal V_{(2),v}^{\rm source} \times\prod_{e \in C^{\rm int}_1(\check R)}\mathcal V_{(2),e}^{{\rm deform}}\\ &\times L^2_{m+\ell+1}(\Sigma_{(2),v_0}^-,\mathcal N_{\mathcal D}X\setminus \mathcal D)\to L^2_{m+1}(\Sigma_{(1),v_0}^{-},\mathcal N_{\mathcal D}X\setminus \mathcal D) \endaligned \] Now we replace Diagram \eqref{dia6165} with the following: \begin{equation}\label{dia6165rev} \xymatrix{ \widetilde{\mathcal U}(\frak u,\Xi_{(2)}) \ar[dd]^{\widetilde \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}}\ar[rr]& &\txt{$\displaystyle{\prod_{v \in C^{\rm int}_0(\check R)}\mathcal V_{(2),v}^{\rm source} \times\prod_{e \in C^{\rm int}_1(\check R)}\mathcal V_{(2),e}^{{\rm deform}}}$\\ $\displaystyle{\times \prod_{v \in C^{\rm int}_0(\check R), \atop \lambda(v) = 0}L^2_{m+\ell+1}(\Sigma_{(2),v}^{-},X\setminus \mathcal D)}$\\ $\displaystyle{\times \prod_{v \in C^{\rm int}_0(\check
R), \atop \lambda(v) > 0}L^2_{m+\ell+1}(\Sigma_{(2),v}^{-},\mathcal N_{\mathcal D}X\setminus \mathcal D)}$ }\ar[dd]^{\frak I_{v}^*}\\ &&\\ \widetilde{\mathcal U}(\frak u,\Xi_{(1)})\ar[rr]&&\txt{$\displaystyle{\prod_{v \in C^{\rm int}_0(\check R), \atop \lambda(v) = 0}L^2_{m+1}(\Sigma_{(1),v}^{-},X\setminus \mathcal D)}$\\ $\displaystyle{\times \prod_{v \in C^{\rm int}_0(\check R), \atop \lambda(v) > 0}L^2_{m+1}(\Sigma_{(1),v}^{-},\mathcal N_{\mathcal D}X\setminus \mathcal D)}$}\\ } \end{equation} Here the horizontal arrows are defined as in \eqref{ccc}. Commutativity of (\ref{dia6165rev}) is immediate from the definition. We can also form a diagram similar to Diagram \eqref{dia6164}, which is commutative for the same reason as in Lemma \ref{lem668888}. Commutativity of these two diagrams and the fact that $\frak I_{v}^*$ is $C^{\ell}$ imply that $\widetilde \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$ is also $C^{\ell}$. A similar argument applies to $\varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$. By exchanging the roles of $\Xi_{(1)}$ and $\Xi_{(2)}$, we can similarly obtain $C^{\ell}$ maps in different directions. To be more precise, we can define maps $\widetilde \varphi_{(\frak u,\Xi_{(2)})(\frak u,\Xi_{(1)}')}$ and $\varphi_{(\frak u,\Xi_{(2)})(\frak u,\Xi_{(1)}')}$ where $\Xi_{(1)}'$ is given by a small enough shrinking of $\Xi_{(1)}$. The compositions: \[ \widetilde \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}\circ \widetilde \varphi_{(\frak u,\Xi_{(2)})(\frak u,\Xi_{(1)}')}\hspace{1cm} \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})} \circ \varphi_{(\frak u,\Xi_{(2)})(\frak u,\Xi_{(1)}')} \] are equal to the identity map.
Moreover, the compositions \[ \widetilde \varphi_{(\frak u,\Xi_{(2)})(\frak u,\Xi_{(1)}')} \circ \widetilde \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}\hspace{1cm} \varphi_{(\frak u,\Xi_{(2)})(\frak u,\Xi_{(1)}')} \circ \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})} \] are also equal to the identity map, wherever they are defined. This implies that $\widetilde \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$ and $\varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$ are diffeomorphisms, after possibly shrinking $\Xi_{(2)}$. \end{proof} \begin{rem} By following an argument similar to the case of stable maps, one can show that $\widetilde \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$ and $\varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$ are $C^{\infty}$ using the above $C^{\ell}$ property for all values of $\ell$. We omit this argument and refer the reader to \cite[Section 12]{foooexp} for details of the proof. (See also Remark \ref{rem691new}.) \end{rem} We thus constructed a $C^{\ell}$ embedding $\varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$. One can easily define a lift $\overline \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$ of $ \varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$ and obtain an embedding of obstruction bundles. The compatibility of the Kuranishi maps and the parametrization maps with the maps $\varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$ and $\overline\varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}$ is immediate from the construction. In summary, we have the coordinate change: \[ \Phi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})} = (\varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})},\overline\varphi_{(\frak u,\Xi_{(1)})(\frak u,\Xi_{(2)})}) :\mathcal U_{\frak u,\Xi_{(2)}} \to \mathcal U_{\frak u,\Xi_{(1)}}. \] \subsection{Co-cycle Condition for Coordinate Changes} \label{subsub:coordinatechage3} For $j=1,\,2$, let $\frak u_{(j)} \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$, and $\Xi_{(j)}$ be a TSD at $\frak u_{(j)}$.
We assume that $\Xi_{(j)}$ is small enough such that we can form the Kuranishi chart $\mathcal U_{\frak u_{(j)},\Xi_{(j)}}$ as in Definition \ref{Kura-chart}. We also assume that $\frak u_{(2)}$ is sufficiently close to $\frak u_{(1)}$ in the sense that it belongs to the open subset of $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ determined by $\mathcal U_{\frak u_{(1)},\Xi_{(1)}}$. Therefore, we may use the constructions of Subsection \ref{subsub:Nplustri} to obtain a TSD $\Xi_{(2);(1)}$ at $\frak u_{(2)}$ which is compatible with $\Xi_{(1)}$, namely, it satisfies Conditions \ref{choi655} and \ref{choi655-2}. Finally, by shrinking $\Xi_{(2)}$, we may assume that we can define the coordinate change $\Phi_{(\frak u_{(2)},\Xi_{(2);(1)})(\frak u_{(2)},\Xi_{(2)})}$ following the construction of the previous subsection. Now we define: \begin{defn}\label{defn6766} We define the coordinate change: \[ \Phi_{(\frak u_{(1)},\Xi_{(1)})(\frak u_{(2)},\Xi_{(2)})} : \mathcal U_{\frak u_{(2)},\Xi_{(2)}} \to \mathcal U_{\frak u_{(1)},\Xi_{(1)}} \] as the composition \begin{equation} \Phi_{(\frak u_{(1)},\Xi_{(1)})(\frak u_{(2)},\Xi_{(2)})} = \Phi_{(\frak u_{(1)},\Xi_{(1)})(\frak u_{(2)},\Xi_{(2);(1)})} \circ \Phi_{(\frak u_{(2)},\Xi_{(2);(1)})(\frak u_{(2)},\Xi_{(2)})}. \end{equation} Here \[ \Phi_{(\frak u_{(1)},\Xi_{(1)})(\frak u_{(2)},\Xi_{(2);(1)})} : \mathcal U_{\frak u_{(2)},\Xi_{(2);(1)}} \to \mathcal U_{\frak u_{(1)},\Xi_{(1)}} \] is defined in Subsection \ref{subsub:coordinatechage1} and \[ \Phi_{(\frak u_{(2)},\Xi_{(2);(1)})(\frak u_{(2)},\Xi_{(2)})} : \mathcal U_{\frak u_{(2)},\Xi_{(2)}} \to\mathcal U_{\frak u_{(2)},\Xi_{(2);(1)}} \] is defined in Subsection \ref{subsub:coordinatechage2}. \end{defn} To complete the construction of the Kuranishi structure on $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$, we need to prove the next lemma.
\begin{lem} For $j=1,\,2,\,3$, let $\frak u_{(j)} \in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$, and $\Xi_{(j)}$ be a TSD at $\frak u_{(j)}$ such that we can use Definition \ref{defn6766} to define the coordinate changes $\Phi_{(\frak u_{(1)},\Xi_{(1)})(\frak u_{(2)},\Xi_{(2)})}$, $\Phi_{(\frak u_{(2)},\Xi_{(2)})(\frak u_{(3)},\Xi_{(3)})}$, $\Phi_{(\frak u_{(1)},\Xi_{(1)})(\frak u_{(3)},\Xi_{(3)})}$. Then we have: \begin{equation}\label{formula6177} \Phi_{(\frak u_{(1)},\Xi_{(1)})(\frak u_{(2)},\Xi_{(2)})} \circ \Phi_{(\frak u_{(2)},\Xi_{(2)})(\frak u_{(3)},\Xi_{(3)})}= \Phi_{(\frak u_{(1)},\Xi_{(1)})(\frak u_{(3)},\Xi_{(3)})}. \end{equation} \end{lem} \begin{proof} We use the constructions of Subsection \ref{subsub:Nplustri} to find TSDs $\Xi_{(3);(1)}$, $\Xi_{(3);(2)}$ at $\frak u_{(3)}$ such that the pairs $(\Xi_{(1)},\Xi_{(3);(1)})$ and $(\Xi_{(2)},\Xi_{(3);(2)})$ both satisfy Conditions \ref{choi655} and \ref{choi655-2}. We similarly choose the TSD $\Xi_{(2);(1)}$ at $\frak u_{(2)}$. We can easily check the following three formulas: \[ \aligned \Phi_{(\frak u_{(3)},\Xi_{(3);(1)})(\frak u_{(3)},\Xi_{(3);(2)})} \circ \Phi_{(\frak u_{(3)},\Xi_{(3);(2)})(\frak u_{(3)},\Xi_{(3)})} &=\Phi_{(\frak u_{(3)},\Xi_{(3);(1)})(\frak u_{(3)},\Xi_{(3)})} \\ \Phi_{(\frak u_{(1)},\Xi_{(1)})(\frak u_{(2)},\Xi_{(2);(1)})}\circ \Phi_{(\frak u_{(2)},\Xi_{(2);(1)})(\frak u_{(3)},\Xi_{(3);(1)})} &=\Phi_{(\frak u_{(1)},\Xi_{(1)})(\frak u_{(3)},\Xi_{(3);(1)})} \endaligned \] \[ \aligned &\Phi_{(\frak u_{(2)},\Xi_{(2);(1)})(\frak u_{(3)},\Xi_{(3);(1)})}\circ \Phi_{(\frak u_{(3)},\Xi_{(3);(1)})(\frak u_{(3)},\Xi_{(3);(2)})} \\ &= \Phi_{(\frak u_{(2)},\Xi_{(2);(1)})(\frak u_{(2)},\Xi_{(2)})}\circ \Phi_{(\frak u_{(2)},\Xi_{(2)})(\frak u_{(3)},\Xi_{(3);(2)})} \endaligned \] Then \eqref{formula6177} is a consequence of these three formulas and Definition \ref{defn6766}. See the diagram below.
In this diagram, the notation $\Phi_{(\frak u_{(3)},\Xi_{(3)})(\frak u_{(3)},\Xi_{(3);(2)})}$ is simplified to $\Phi_{\Xi_{(3)}\Xi_{(3);(2)}}$. Similar notations are used for the other coordinate changes. \tiny{ \begin{equation} \xymatrix{ \mathcal U_{\frak u_{(3)},\Xi_{(3)}} \ar[rr]_{\Phi_{\Xi_{(3);(2)}\Xi_{(3)}}} \ar@/^30pt/[rrrrr]^{\Phi_{\Xi_{(3);(1)}\Xi_{(3)}}} \ar@/_20pt/[rrdd]_{\Phi_{\Xi_{(2)}\Xi_{(3)}}} \ar@/_100pt/[rrrrrdddd]_{\Phi_{\Xi_{(1)}\Xi_{(3)}}} && \mathcal U_{\frak u_{(3)},\Xi_{(3);(2)}} \ar[rrr]_{\Phi_{\Xi_{(3);(1)}\Xi_{(3);(2)}}} \ar[dd]_{\Phi_{\Xi_{(2)}\Xi_{(3);(2)}}} &&& \mathcal U_{\frak u_{(3)},\Xi_{(3);(1)}} \ar[dd]_{\Phi_{\Xi_{(2);(1)}\Xi_{(3);(1)}}} \ar@/^30pt/[dddd]^{\Phi_{\Xi_{(1)}\Xi_{(3);(1)}}} \\ \\ && \mathcal U_{\frak u_{(2)},\Xi_{(2)}} \ar[rrr]_{\Phi_{\Xi_{(2);(1)}\Xi_{(2)}}} \ar@/_20pt/[rrrdd]_{\Phi_{\Xi_{(1)}\Xi_{(2)}}} &&& \mathcal U_{\frak u_{(2)},\Xi_{(2);(1)}} \ar[dd]_{\Phi_{\Xi_{(1)}\Xi_{(2);(1)}}} \\ \\ &&&&& \mathcal U_{\frak u_{(1)},\Xi_{(1)}} } \nonumber \end{equation} } \end{proof} This lemma completes the proof of the following result: \begin{thm}\label{sing-Kuranishi-str} The space $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ carries a Kuranishi structure. \end{thm} \section{Construction of a System of Kuranishi Structures} \label{sub:systemconst} \subsection{Statement} \label{subsub:statecoma} In Section \ref{sub:kuracont}, we completed the construction of a Kuranishi structure for each moduli space $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$. In this section, we study how these Kuranishi structures are related to each other at their boundaries and corners. More specifically, we prove the disk moduli version of \cite[Lemma 3.67]{DF1}, stated as Theorem \ref{lema362rev}. The notation $\hat\times_L$ is discussed in \cite[Subsection 3.7]{DF1}.
Recall also from \cite[Subsection 3.7]{DF1} that $\mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1)^{(1)}$ is the union of the strata of $\mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1)$ which are described by DD-ribbon trees with at least one positive level. The proof of \cite[Lemma 3.67]{DF1} is entirely similar to that of Theorem \ref{lema362rev}. So we focus only on the proof of Theorem \ref{lema362rev}. \begin{thm}\label{lema362rev} Suppose $E$ is a positive real number and $N$ is a positive integer. There is a system of Kuranishi structures on the moduli spaces $\{\mathcal M^{\rm RGW}_{k+1}(L;\beta)\}_{k,\beta}$ with $\omega\cap \beta\leq E$ and $k\leq N$ such that if $\beta_1 + \beta_2 = \beta$, $k_1+k_2 =k$, then the space $$ \mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1)\,\hat\times_L\, \mathcal M^{\rm RGW}_{k_2+1}(L;\beta_2) $$ is a codimension one stratum of $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$, and there exists a continuous map: \begin{equation}\label{form6179} \aligned \Pi : &\mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1) \,\hat\times_L\,\mathcal M^{\rm RGW}_{k_2+1}(L;\beta_2) \\ &\to\mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1)\,\times_L\, \mathcal M^{\rm RGW}_{k_2+1}(L;\beta_2) \endaligned \end{equation} with the following properties. \begin{enumerate} \item On the inverse image of the complement of $$ \aligned &\left(\mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1)^{(1)}\times_L \mathcal M^{\rm RGW}_{k_2+1}(L;\beta_2)\right)\\ &\cup\left(\mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1)\times_L \mathcal M^{\rm RGW}_{k_2+1}(L;\beta_2)^{(1)}\right) \endaligned $$ $\Pi$ is induced by an isomorphism of Kuranishi structures.
\item Let $$\frak p \in \mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1)\,\hat\times_L\, \mathcal M^{\rm RGW}_{k_2+1}(L;\beta_2)$$ and $$\Pi(\frak p) = \overline{\frak p}\in\mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1) \,\times_L\,\mathcal M^{\rm RGW}_{k_2+1}(L;\beta_2).$$ Let $\mathcal U_{\frak p} = (U_{\frak p},E_{\frak p},s_{\frak p},\psi_{\frak p})$ and $\mathcal U_{\overline{\frak p}} = (U_{\overline{\frak p}},E_{\overline{\frak p}}, s_{\overline{\frak p}},\psi_{\overline{\frak p}})$ be the Kuranishi neighborhoods of $\frak p$ and $\overline{\frak p}$ assigned by our Kuranishi structures. Let also $U_{\frak p} = V_{\frak p}/\Gamma_{\frak p}$ and $U_{\overline{\frak p}} = V_{\overline{\frak p}}/\Gamma_{\overline{\frak p}}$. Then we have: \begin{enumerate} \item There exists an injective homomorphism $\phi_{\frak p} : \Gamma_{\frak p} \to \Gamma_{\overline{\frak p}}$. \item There exists a $\Gamma_{\frak p}$-equivariant map $$ F_{\frak p} : V_{\frak p} \to V_{\overline{\frak p}} $$ that is a strata-wise smooth submersion. \item $E_{\frak p}$ is isomorphic to the pullback of $E_{\overline{\frak p}}$ by $F_{\frak p}$. In other words, there exists a fiberwise isomorphic lift $$ \tilde F_{\frak p} : E_{\frak p} \to E_{\overline{\frak p}} $$ of $F_{\frak p}$, which is $\Gamma_{\frak p}$-equivariant. \item $\tilde F_{\frak p} \circ s_{\frak p} = s_{\overline{\frak p}} \circ F_{\frak p}$. \item $\psi_{\overline{\frak p}} \circ F_{\frak p} = \Pi \circ \psi_{\frak p}$ on $s_{\frak p}^{-1}(0)$. \end{enumerate} \item $\tilde F_{\frak p}$ and $F_{\frak p}$ are compatible with the coordinate changes. \end{enumerate} \end{thm} For elaboration on item (3) of the above theorem, see the discussion in \cite[Subsection 3.7]{DF1}. \begin{rem} In order to prove Theorem \ref{lema362rev}, we need to slightly modify our choices of obstruction bundles used in the proof of Theorem \ref{sing-Kuranishi-str}.
\end{rem} \begin{rem} Theorem \ref{lema362rev} concerns only the behavior of Kuranishi structures at codimension one boundary components. In fact, there is a similar statement for the behavior of our Kuranishi structures at higher co-dimensional corners. This generalization to higher co-dimensional corners is the counterpart of \cite[Condition 16.1 XI,XII and Condition 21.7 X,XI]{fooo:tech2-2} in the context of the stable map compactification.\footnote{In \cite{foooast}, the corresponding statement is called corner compatibility conditions.} The main difference is that we again need to replace $\times_L$ with $\hat\times_L$. To work out the whole construction of simultaneous perturbations, we need the generalization of Theorem \ref{lema362rev} to the higher co-dimensional corners. In Subsection \ref{subsub:componentwise}, we will formulate a condition (Definition \ref{defn687nnew}) for the obstruction spaces, which implies the consistency of Kuranishi structures at the corners of arbitrary codimension. Since the proof and the statement for the case of corners are a straightforward generalization of the case of boundary (but cumbersome to write in detail), we focus on the case of codimension one boundary components. \end{rem} \subsection{Disk component-wise-ness of the Obstruction Bundle Data} \label{subsub:componentwise} A {\it disk splitting tree} is defined to be a very detailed DD-ribbon tree $\mathcal S$ such that the color of all vertices is ${\rm d}$. We say a detailed DD-ribbon tree $\check R$ belongs to a disk splitting tree $\mathcal S$ if $\mathcal S$ is obtained from $\check R$ by level shrinking and fine edge shrinking. In other words, geometrical objects with combinatorial type $\check R$ are limits of objects with type $\mathcal S$ such that no new disc bubble occurs. However, it is possible to have sphere bubbles. Let $\frak u \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$ and $\check R$ be the associated very detailed tree.
Suppose $\mathcal S$ is a disk splitting tree such that $\check R$ belongs to $\mathcal S$. Let also $\lambda$ be the level function assigned to $\check R$. For each interior vertex $\frak v$ of $\mathcal S$, let $\check R_{\frak v}$ be the subtree of $\check R$ given by the connected component of: \[ \check R\setminus \bigcup_{e\in C^{\rm int}_1({\check R}),\, \lambda(e) = 0}e \] which contains the vertex $\frak v$. Let $\overline{\mathcal S}$ be a disk splitting tree obtained from $\mathcal S$ by a sequence of shrinking of level $0$ edges \cite[Definition 3.57]{DF1}. Let $\pi : \mathcal S \to \overline{\mathcal S}$ be the associated contraction map. For each $\frak w \in C^{\rm int}_{0}(\overline{\mathcal S})$, let ${\check R}(\frak w)$ be the very detailed DD-ribbon tree defined as: \begin{equation}\label{bewfirn182} {\check R}(\frak w)= \bigcup_{\pi(\frak v) = \frak w}\check R_{\frak v} \cup \bigcup_{\substack{e\in C^{\rm int}_1({\check R}),\, \lambda(e) = 0,\\ \, \text{$\pi(e)$ is adjacent to $\frak w$}}} e. \end{equation} Clearly we have $C^{\rm int}_0({\check R}(\frak w)) \subseteq C^{\rm int}_0({\check R})$ and $C^{\rm int}_1({\check R}(\frak w)) \subseteq C^{\rm int}_1({\check R})$. The restriction of the quasi-order\footnote{See \cite[Subsection 3.4]{DF1} for the definition of a quasi-order.} of $C^{\rm int}_0({\check R})$ to the set $C^{\rm int}_0({\check R}(\frak w))$ determines\footnote{See \cite[Lemma 3.34]{DF1}.} a level function $\lambda_{\frak w}$ for ${\check R}(\frak w)$. The tree ${\check R}(\frak w)$ also inherits a multiplicity function, a homology class assigned to each interior vertex and a color function from $\check R$, which turn ${\check R}(\frak w)$ into a very detailed tree associated to a detailed DD-ribbon tree $\mathcal R(\frak w)$. 
There is a map \begin{equation}\label{pifrakw} \pi_{\frak w} : \{1,\dots,\vert\lambda\vert\} \to \{1,\dots,\vert\lambda_{\frak w}\vert\} \end{equation} such that $i \le j$ implies $\pi_{\frak w}(i) \le \pi_{\frak w}(j)$ and for any $v \in C^{\rm int}_0({\mathcal R}(\frak w)) \subseteq C^{\rm int}_0({\mathcal R})$ \begin{equation}\label{form618010} \lambda_{\frak w}(v) = \pi_{\frak w}(\lambda(v)). \end{equation} Let $\Sigma_{\frak u}$ be the source curve of $\frak u$, and $\Sigma_{\frak u,v}$ denote the irreducible component of $\Sigma_{\frak u}$ corresponding to an interior vertex $v$ of $\check R$. For any $\frak w\in C^{\rm int}_{0}(\overline{\mathcal S})$, we define $\Sigma_{\frak u,\frak w}$ to be the union of irreducible components $\Sigma_{\frak u,v}$ where $v \in C^{\rm int}_0({\check R}(\frak w))$. A boundary marked point of $\Sigma_{\frak u,\frak w}$ is either a boundary marked point of a disc component $\Sigma_{\frak u,v}$ in $\Sigma_{\frak u,\frak w}$ or a boundary nodal point of $\Sigma_{\frak u}$ which joins an irreducible component of $\Sigma_{\frak u,\frak w}$ to an irreducible component of $\Sigma_{\frak u}$ that is not in $\Sigma_{\frak u,\frak w}$. The $0$-th boundary marked point $z_{0,\frak w}$ of $\Sigma_{\frak u,\frak w}$ is defined as follows. If the 0-th boundary marked point $z_0$ of $\Sigma_{\frak u}$ is contained in $\Sigma_{\frak u,\frak w}$, then $z_{0,\frak w} = z_0$. If not, $z_{0,\frak w}$ is the boundary nodal point such that $z_0$ and $\Sigma_{\frak u,\frak w} \setminus \{z_{0,\frak w}\}$ are contained in different connected components of $\Sigma_{\frak u} \setminus \{z_{0,\frak w}\}$. The restriction of $u_{\frak u} : (\Sigma_{\frak u},\partial\Sigma_{\frak u}) \to (X,L)$ to $\Sigma_{\frak u,\frak w}$ defines a map $u_{\frak u,\frak w} : (\Sigma_{\frak u,\frak w},\partial\Sigma_{\frak u,\frak w}) \to (X,L)$.
The bordered nodal curve $\Sigma_{\frak u,\frak w}$ together with the boundary marked points described above, the choice of the $0$-th boundary marked point $z_{0,\frak w}$ and the map $u_{\frak u,\frak w}$ determines an element of the moduli space $\mathcal M_{k_{\frak w}+1}^{\rm RGW}(L;\beta(\frak w))$ where $\beta(\frak w) = \sum_{v \in C^{\rm int}_0(\check R(\frak w))} \alpha(v)$ and $k_{\frak w}+1$ is the number of the boundary marked points of $\Sigma_{\frak u,\frak w}$. We denote this element by $\frak u_{\frak w}$. Let $\Xi_{\frak u} = (\vec w_{\frak u},(\mathcal N_{\frak u,v}), (\phi_{\frak u,v}), (\varphi_{\frak u,v,e}))$ be a TSD for $\frak u$. This induces a TSD $\Xi_{\frak u_{\frak w}}$ for $\frak u_{\frak w}$ in an obvious way. Let \begin{equation}\label{florm680} \frak y = (\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_{e}),(\rho_{i})) \end{equation} be an inconsistent map with respect to $\Xi_{\frak u}$. Let $\mathcal S'$ be a disk splitting tree such that the very detailed tree of $\frak y$ belongs to $\mathcal S'$. We assume that $\overline {\mathcal S}$ is obtained from $\mathcal S'$ by a sequence of shrinking of level $0$ edges. Given $\frak w \in C^{\rm int}_{0}(\overline{\mathcal S})$, suppose that $\sigma_{e}=0$ for any level $0$ edge $e\in C^{\rm int}_1({\check R})$ that corresponds to an exterior edge of ${\check R}(\frak w)$. Then we can define an inconsistent map $\frak y({\frak w})$ with respect to $\Xi_{\frak u_{\frak w}}$ in the following way. Since $C^{\rm int}_0({\check R}(\frak w)) \subseteq C^{\rm int}_0({\check R})$, $C^{\rm int}_1({\check R}(\frak w)) \subseteq C^{\rm int}_1({\check R})$, the restriction of the data of $\frak y$ determines $\vec{\frak x}_{\frak w}$, $\vec{\sigma}_{\frak w}$, $(u'_{v,\frak w})$, $(U'_{v,\frak w})$ and $(\rho_{e,\frak w})$. We also define: $$ \rho_{\frak w,i} = \prod_{\hat i: \pi_{\frak w}(\hat i)=i} \rho_{\hat i} $$ where $\pi_{\frak w}$ is given in \eqref{pifrakw}.
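To illustrate the last formula with hypothetical values (not taken from the constructions above): suppose $\vert\lambda\vert = 3$ and $\vert\lambda_{\frak w}\vert = 2$, with $\pi_{\frak w}(1) = \pi_{\frak w}(2) = 1$ and $\pi_{\frak w}(3) = 2$, i.e.\ the first two levels of $\lambda$ lie over the first level of $\lambda_{\frak w}$. Then
$$
\rho_{\frak w,1} = \rho_{1}\rho_{2}, \qquad \rho_{\frak w,2} = \rho_{3},
$$
so the parameter attached to a level of $\lambda_{\frak w}$ is the product of the parameters of the levels of $\lambda$ lying over it.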
\begin{lem} The following element is an inconsistent map with respect to $\Xi_{\frak u_{\frak w}}$: \begin{equation}\label{form6182} \frak y(\frak w) = (\vec{\frak x}_{\frak w},\vec{\sigma}_{\frak w},(u'_{v,\frak w}),(U'_{v,\frak w}), (\rho_{e,\frak w}),(\rho_{\frak w,i})). \end{equation} \end{lem} Next, we shall formulate a condition on the obstruction spaces so that the resulting system of Kuranishi structures satisfies the claims in Theorem \ref{lema362rev}. For this purpose, we first introduce the notion of {\it obstruction bundle data}. \begin{defn}\label{defn684684} Suppose we are given vector spaces $\{E_{\frak u,\Xi}(\frak y)\}$ for any $\frak u\in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$, any small enough TSD $\Xi$ at $\frak u$, and any inconsistent map $\frak y$ with respect to $\Xi$. This data is called an {\it obstruction bundle data} for $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ if the following holds: \begin{enumerate} \item We have: \[ E_{\frak u,\Xi}(\frak y) = \bigoplus_{v \in C^{\rm int}_0(\check R)}E_{\frak u,\Xi,v}(\frak y) \] where $E_{\frak u,\Xi,v}(\frak y) \subset L^2_{m,\delta}(\Sigma_{\frak y,v},\Lambda^{0,1} \otimes T)$. \item $E_{\frak u,\Xi,v}(\frak y)$ is a finite dimensional subspace. The supports of its elements are subsets of $\Sigma^{-}_{\frak y,v}$ and are away from the boundary. \item $E_{\frak u,\Xi,v}(\frak y)$ is independent of $\Xi$ in the sense of Definition \ref{defn6ten86}. \item $E_{\frak u,\Xi,v}(\frak y)$ is semi-continuous with respect to $\frak u$ in the sense of Definition \ref{defn6ten86r3ev}. \item $E_{\frak u,\Xi,v}(\frak y)$ is smooth with respect to $\frak y$ in the sense of Definition \ref{defn6888}. \item The linearization of the Cauchy-Riemann equation is transversal to $E_{\frak u,\Xi,v}(\frak y)$ in the sense of Definition \ref{defn68899}. \item $E_{\frak u,\Xi,v}(\frak y)$ is invariant under the $\Gamma_{\frak u}$-action. (See \cite[Definition 5.1 (5)]{fooo:const1}.) 
\end{enumerate} \end{defn} Definition \ref{defn684684} is the RGW counterpart of \cite[Definition 5.1]{fooo:const1} for the stable map compactification. Before discussing the precise meaning of (3), (4), (5) and (6), we define {\it disk-component-wise-ness} of a system of obstruction bundle data. This is the analogue of \cite[Definition 4.2.2]{foooast} for the stable map compactification: \begin{defn}\label{defn687nnew} Suppose $E$ is a positive real number and $N$ is a positive integer. Suppose $\{E_{\frak u,\Xi}(\frak y)\}$ is a system of obstruction bundle data for the spaces $\{\mathcal M^{\rm RGW}_{k+1}(L;\beta)\}_{k,\beta}$ where $k=0,1,2,\dots,N$, $\beta \in H_2(X,L)$ and $\beta \cap [\mathcal D] = 0$ with $\omega\cap \beta\leq E$. We say this system is {\it disk-component-wise} if we always have the identification: \begin{equation}\label{form618383} E_{\frak u,\Xi}(\frak y)= \bigoplus_{\frak w \in C^{\rm int}_0(\overline {\mathcal S})}E_{\frak u_{\frak w},\Xi_{\frak w}}(\frak y(\frak w)) \end{equation} where $\overline {\mathcal S}$ is a detailed DD-ribbon tree as in the beginning of this subsection and $\frak y(\frak w)$ is as in \eqref{form6182}. \end{defn} \subsubsection*{Explanation of Definition \ref{defn684684} (3)} We pick two TSDs at $\frak u$ denoted by $\Xi_{(j)} = (\vec w_{(j)},(\mathcal N_{(j),v}), (\phi_{(j),v}), (\varphi_{(j),v,e}),\kappa_{(j)})$. If $\Xi_{(2)}$ is small enough compared to $\Xi_{(1)}$, then as in \eqref{form17166} and \eqref{form17266}, we can assign to any inconsistent map: \[ \frak y_{(2)}=(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)},(u'_{(2),v}),(U'_{(2),v}),(\rho_{(2),e}),(\rho_{(2),i})) \] with respect to $\Xi_{(2)}$ an inconsistent map \[ \frak y_{(1)}=(\vec{\frak x}_{(1)},\vec{\sigma}_{(1)},(u'_{(1),v}),(U'_{(1),v}),(\rho_{(1),e}),(\rho_{(1),i})) \] with respect to $\Xi_{(1)}$. 
In particular, there is a bi-holomorphic embedding: $$ I_{v;\Xi_{(2)}\Xi_{(1)}} : \Sigma_{(1),v}^{-}(\vec{\frak x}_{(1)},\vec{\sigma}_{(1)}) \to \Sigma_{(2),v}^{-}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)}) $$ as in \eqref{form6174} such that: $$ \aligned u'_{(2),v} \circ I_{v;\Xi_{(2)}\Xi_{(1)}} &= u'_{(1),v} \qquad \text{if $\lambda(v) = 0$,} \\ U'_{(2),v} \circ I_{v;\Xi_{(2)}\Xi_{(1)}} &= U'_{(1),v} \qquad \text{if $\lambda(v) > 0$.} \endaligned $$ It induces a map $$ \frak I_{v;\Xi_{(2)}\Xi_{(1)}} : L^2_{m}(\Sigma^-_{(2),v}(\vec{\frak x}_{(2)},\vec{\sigma}_{(2)}),T \otimes \Lambda^{0,1}) \to L^2_{m}(\Sigma^-_{(1),v}(\vec{\frak x}_{(1)},\vec{\sigma}_{(1)}),T \otimes \Lambda^{0,1}). $$ \begin{defn}\label{defn6ten86} We say the system $\{E_{\frak u,\Xi}(\frak y)\}$ is {\it independent of $\Xi$} if we always have: \begin{equation} \frak I_{v;\Xi_{(2)}\Xi_{(1)}}(E_{\frak u,\Xi_{(2)}}(\frak y_{(2)}))= E_{\frak u,\Xi_{(1)}}(\frak y_{(1)}). \end{equation} \end{defn} The choices of obstruction bundles that we made in the previous section have this property. In fact, this property was used in the proof of Lemma \ref{lem676}. \subsubsection*{Explanation of Definition \ref{defn684684} (4)} Let $\frak u_{(1)}\in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$ and $\Xi_{(1)}$ be a small enough TSD at $\frak u_{(1)}$. Let also $\frak u_{(2)} \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$ be in a neighborhood of $\frak u_{(1)}$ determined by $\Xi_{(1)}$ and $\Xi_{(2)}$ be a TSD at $\frak u_{(2)}$. We assume that $\Xi_{(1)}$, $\Xi_{(2)}$ satisfy Conditions \ref{choi655} and \ref{choi655-2}. Let $\check R_{(j)}$ be the very detailed tree associated to $\frak u_{(j)}$. Our assumption implies that there is a map $\pi:\check R_{(1)} \to \check R_{(2)}$. Let $\frak y_{(2)}$ be an inconsistent map with respect to $\Xi_{(2)}$. Lemma \ref{lem65865777} associates to it an inconsistent map $\frak y_{(1)}$ with respect to $\Xi_{(1)}$. 
In particular, for any $\hat v\in C^{\rm int}_0(\check R_{(1)})$ with $v:=\pi(\hat v)$, we have a bi-holomorphic isomorphism: $$ I_{\hat v}:\Sigma_{(1),\hat v}^{-} \to \Sigma_{(2),v}^{-} $$ such that $$ \aligned u'_{(2),v} \circ I_{\hat v} &= u'_{(1),\hat v} \qquad \text{if $\lambda(v) = 0$}, \\ U'_{(2),v} \circ I_{\hat v} &= U'_{(1),\hat v} \qquad \text{if $\lambda(v) > 0$}. \endaligned $$ It induces a map: $$ \frak I_{v;\frak y_{(1)}\frak y_{(2)}} : L^2_{m}(\Sigma^-_{(2),v},\Lambda^{0,1} \otimes T) \to \bigoplus_{\pi(\hat v)=v}L^2_{m}(\Sigma^-_{(1),\hat v},\Lambda^{0,1} \otimes T). $$ \begin{defn}\label{defn6ten86r3ev} We say that $\{E_{\frak u,\Xi}(\frak y)\}$ {\it is semi-continuous with respect to $\frak u$} if the following condition holds. If $\frak u_{(1)}$, $\frak u_{(2)}$, $\frak y_{(1)}$, $\frak y_{(2)}$, $\Xi_{(1)}$ and $\Xi_{(2)}$ are as above, then we have: \begin{equation} \frak I_{v;\frak y_{(1)}\frak y_{(2)}}(E_{\frak u_{(2)},\Xi_{(2)}}(\frak y_{(2)}))\subseteq E_{\frak u_{(1)},\Xi_{(1)}}(\frak y_{(1)}). \end{equation} \end{defn} Lemmas \ref{loem668} and \ref{lem6666} imply that our choices of obstruction bundles in Section \ref{sub:kuracont} satisfy the above property. \subsubsection*{Explanation of Definition \ref{defn684684} (5)} Let $\frak u \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$, $\check R$ be the very detailed tree associated to $\frak u$, and $\Xi$ be a choice of TSD at $\frak u$. Let also $\frak y= (\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_{e}),(\rho_{i}))$ be an inconsistent map with respect to $\Xi$. For $v \in C_{0}^{\rm int}(\check R)$, the TSD $\Xi$ determines an isomorphism $I_{\frak y,v} : \Sigma^-_{v}(\vec{\frak x},\vec{\sigma}) \to \Sigma_{v}^-(\vec{\frak x},\vec{\sigma_0}) $. Here $\vec{\sigma_0}$ is a vector with zero entries. If $\Xi$ is small enough, then $u_v \circ I_{\frak y,v}$ (resp. $U_v \circ I_{\frak y,v}$ in the case $c(v)={\rm D}$) is $C^2$-close to $u'_{\frak y,v}$ (resp. $U'_{\frak y,v}$). 
Therefore, we obtain: \[ \frak I_{\frak y,v} : L^2_{m}(\Sigma^-_{\frak y,v}(\vec{\frak x},\vec{\sigma_0}) ;\Lambda^{0,1} \otimes T) \to L^2_{m}(\Sigma_v^{-}(\vec{\frak x},\vec{\sigma}) ;\Lambda^{0,1} \otimes T). \] Let $\mathscr L^2_{m+\ell+1}(u;v)$ be a small neighborhood of $u_v\vert_{\Sigma_v^{-}}$ or $U_v\vert_{\Sigma_v^{-}}$ with respect to the $L^2_{m+\ell+1}$-norm. \begin{defn}\label{defn6888} We say $\{E_{\frak u,\Xi}(\frak y)\}$ is of class $C^{\ell}$ with respect to $\frak y$ if there exists a $C^{\ell}$ map: $$ \frak e_i : \prod_{e \in C^{\rm int}_1(\mathcal S)}\mathcal V_{e}^{{\rm deform}} \times \prod_{v \in C^{\rm int}_0(\mathcal S)}\mathcal V_{v}^{{\rm source}} \times \mathscr L^2_{m+\ell+1}(u;v)\to L^2_{m}(\Sigma_v^-;\Lambda^{0,1} \otimes T) $$ for $i=1$, $\dots$, $\dim (E_{\frak u,\Xi}(\frak y))$ with the following properties. For the inconsistent map $\frak y$ with respect to $\Xi$ and $v \in C_{0}^{\rm int}(\check R)$, let $\frak y(v) \in \mathscr L^2_{m+\ell+1}(u;v)$ be the map $u'_{\frak y,v} \circ I_{\frak y,v}$ or $U'_{\frak y,v} \circ I_{\frak y,v}$. Then the set of elements: $$ \frak I_{\frak y,v}\circ \frak e_i(\vec{\frak x},\vec{\sigma},\frak y(v)) $$ for $i=1$, $\dots$, $\dim (E_{\frak u,\Xi}(\frak y))$ forms a basis for $E_{\frak u,\Xi}(\frak y)$. \end{defn} This condition is essentially the analogue of \cite[Definition 8.7]{foooexp} in the context of the stable map compactification, and we refer the reader to the discussion there for a more detailed explanation. If this condition is satisfied, then the gluing analysis in the previous sections gives rise to $C^{\ell}$-Kuranishi charts and $C^{\ell}$-coordinate changes. The proof of the fact that the choices of obstruction data in the previous section and Subsection \ref{subsub:existobst} satisfy this condition is similar to \cite[Subsection 11.4]{foooexp} and hence is omitted. \begin{rem}\label{rem691new} We discussed the notion of $C^{\ell}$-obstruction data. 
There is also the notion of smooth obstruction data, which is slightly stronger. This is related to \cite[Definition 8.7]{foooexp} Item (3), and we do not discuss this point in this paper. This condition is necessary to construct smooth Kuranishi structures rather than $C^{\ell}$-Kuranishi structures. Kuranishi structures of class $C^{\ell}$ suffice for our purposes in \cite{DF1}. Using smooth Kuranishi structures would be essential to study the Bott-Morse case and/or to construct a filtered $A_{\infty}$-category based on the de Rham model. \end{rem} \subsubsection*{Explanation of Definition \ref{defn684684} (6)} We consider $\frak u \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$ and a TSD $\Xi$ at $\frak u$. A system $\{E_{\frak u,\Xi}(\frak y)\}$ determines the vector spaces $E_{\frak u,\Xi}(\frak u)$ in the case that $\frak y =\frak u$. \begin{defn}\label{defn68899} We say the linearization of the Cauchy-Riemann equation is transversal to $E_{\frak u,\Xi}(\frak u)$ if $L^2_{m,\delta}(\frak u;T \otimes \Lambda^{0,1})$ is generated by the image of the operator $D_{\frak u}\overline\partial$ in \eqref{form6103} and $E_{\frak u,\Xi}(\frak u)$. \end{defn} \subsubsection*{From Disk-component-wise-ness to Theorem \ref{lema362rev}} The construction of the last section implies that we can use an obstruction bundle data to construct a Kuranishi structure. The next lemma shows that to prove Theorem \ref{lema362rev} it suffices to find a system of obstruction bundle data which is disk-component-wise: \begin{lem}\label{lem684} If a system of obstruction bundle data is disk-component-wise, then the Kuranishi structures constructed in the last section on the moduli spaces $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ satisfy the claims in Theorem \ref{lema362rev}. \end{lem} \begin{proof} This is in fact a tautology. For the sake of completeness, we give the proof below. 
Let $\frak u \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$, $\check R$ be the very detailed DD-ribbon tree associated to $\frak u$, and $\mathcal S$ be the disk splitting tree such that $\check R$ belongs to $\mathcal S$. We assume that $\frak u$ is a boundary point, i.e., there are $k_1$, $k_2$, $\beta_1$ and $\beta_2$ such that $\frak u$ is contained in $\mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1)\,\hat\times_L\, \mathcal M^{\rm RGW}_{k_2+1}(L;\beta_2).$ In particular, the disk splitting tree $\overline{\mathcal S}$ in Figure \ref{Figuresimplegraph} is obtained from $\mathcal S$ by shrinking of level 0 edges. We also have a map $\pi : \mathcal S \to \overline{\mathcal S}$. The construction at the beginning of this subsection allows us to form $\frak u_{\frak w_1}\in \mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1)$ and $\frak u_{\frak w_2}\in \mathcal M^{\rm RGW}_{k_2+1}(L;\beta_2)$ from $\frak u$. Here $\frak w_1$, $\frak w_2$ are the two interior vertices of $\overline{\mathcal S}$. (See Figure \ref{Figuresimplegraph}.) The map $\Pi$ in (\ref{form6179}) is given by $\Pi(\frak u) = (\frak u_{\frak w_1},\frak u_{\frak w_2})$. Let $\overline{\frak u} = (\frak u_{\frak w_1},\frak u_{\frak w_2})$. \begin{figure}[h] \centering \includegraphics[scale=0.6]{Figuresimplegraph} \caption{$\overline{\mathcal S}$.} \label{Figuresimplegraph} \end{figure} A Kuranishi neighborhood of $\frak u$ in $\mathcal M^{\rm RGW}_{k_1+1} (L;\beta_1)\,\hat\times_L\,\mathcal M^{\rm RGW}_{k_2+1}(L;\beta_2)$ coincides with a Kuranishi neighborhood of $\frak u$ in a {\it normalized boundary} of $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$. It contains inconsistent solutions $\frak y = (\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_{e}),(\rho_{i}))$ with respect to $\Xi$ such that $\sigma_{e_0} = 0$. Here $e_0$ is the unique interior edge of level 0 of $\overline{\mathcal S}$. We may regard $e_0$ as an edge of ${\mathcal S}$ and $\check R$, too. We denote this set by $\partial_{e_0} \mathcal U(\frak u;\Xi)$. 
The TSD $\Xi$ induces the TSD $\Xi_j$ on $\frak u_{\frak w_j}$ for $j=1,2$. Then we obtain a Kuranishi neighborhood $\mathcal U(\frak u_{\frak w_j};\Xi_j)$ of $\frak u_{\frak w_j}$ in $\mathcal M^{\rm RGW}_{k_j+1}(L;\beta_j)$, for $j=1,2$. We can define evaluation maps ${\rm ev}_{j,i} : \mathcal U(\frak u_{\frak w_j};\Xi_j) \to L$ for $i=0,\dots,k_j$ and define \begin{equation}\label{form6186} \mathcal U(\frak u_{\frak w_1};\Xi_1) \,{}_{{\rm ev}_{1,i}}\times_{{\rm ev}_{2,0}} \, \mathcal U(\frak u_{\frak w_2};\Xi_2). \end{equation} Here $i$ is determined so that the edge $e_0$ is the $i$-th edge of $\frak w_1$. The fiber product \eqref{form6186} is a Kuranishi neighborhood of $(\frak u_{\frak w_1},\frak u_{\frak w_2})$ in the fiber product Kuranishi structure of $\mathcal M^{\rm RGW}_{k_1+1}(L;\beta_1) \,\times_L\, \mathcal M^{\rm RGW}_{k_2+1}(L;\beta_2)$. \par We next define a map $$ F_{\frak u} : \partial_{e_0} \mathcal U(\frak u;\Xi) \to \mathcal U(\frak u_{\frak w_1};\Xi_1) \,{}_{{\rm ev}_{1,i}}\times_{{\rm ev}_{2,0}} \, \mathcal U(\frak u_{\frak w_2};\Xi_2). $$ For $j=1,2$, let $\check R(\frak w_j)$ be the very detailed DD-ribbon tree associated to $\frak w_j$, defined in the beginning of this subsection. Given an inconsistent solution $\frak y\in \partial_{e_0} \mathcal U(\frak u;\Xi)$, we can define $\frak y_{(j)} = (\vec{\frak x}_{(j)},\vec{\sigma}_{(j)},(u'_{(j),v}), (U'_{(j),v}),(\rho_{(j),e}),(\rho_{(j),i}))$, an inconsistent solution with respect to $\Xi_{j}$, as in \eqref{form6182}. Identity \eqref{form618383} implies that $\frak y_{(j)}$ satisfies \eqref{form6119} and \eqref{form6120}, the thickened non-linear Cauchy-Riemann equations. Thus $\frak y_{(j)}$ is an inconsistent solution with respect to $\Xi_j$ for $j=1,2$. Since $\frak y$ is an inconsistent solution with $\sigma_{e_0}=0$, we also have (see Definition \ref{defn6488}, Item (10)): 
$$ {\rm ev}_{1,i}(\frak y_{(1)}) = {\rm ev}_{2,0}(\frak y_{(2)}). $$ We define $F_{\frak u}(\frak y)=(\frak y_{(1)},\frak y_{(2)})$. We have: \begin{equation}\label{inc-isot-gp} {\rm Aut}(\frak u) \subseteq {\rm Aut}(\frak u_{\frak w_1}) \times {\rm Aut}(\frak u_{\frak w_2}), \end{equation} because the restriction of any automorphism to the disk components is the identity map. Thus any $\gamma\in {\rm Aut}(\frak u)$ maps the source curves of $\frak u_{\frak w_1}$ and $\frak u_{\frak w_2}$ to themselves. Consequently, $\gamma$ induces $(\gamma_1,\gamma_2)\in {\rm Aut}(\frak u_{\frak w_1}) \times {\rm Aut}(\frak u_{\frak w_2})$ which determines $\gamma$ uniquely.\footnote{However, an arbitrary $(\gamma_1,\gamma_2)\in {\rm Aut}(\frak u_{\frak w_1}) \times {\rm Aut}(\frak u_{\frak w_2})$ does not necessarily determine an element of ${\rm Aut}(\frak u)$. For example, we could have two vertices $v_1$ and $v_2$ with the same positive level such that $v_i$ belongs to $C^{\rm int}_0(\check R(\frak w_i))$. Then there is $c_i\in \C_*$ such that $U_{v_i}\circ \gamma_i=c_i \cdot U_{\gamma_i(v_i)}$. In the case that $c_1\neq c_2$, we cannot produce an automorphism of $\frak u$ using $\gamma_1$, $\gamma_2$.} It is then easy to see that $F_{\frak u}$ is ${\rm Aut}(\frak u)$-invariant. By (\ref{form618383}) we have: $$ \mathcal E_{0,\frak u,\Xi}(\frak y) \cong \bigoplus_{j=1,2}\mathcal E_{0,\frak u_{\frak w_j},\Xi_j}(\frak y_{(j)}). $$ We also have: $$ \bigoplus_{e \in C^{\rm int}_{\rm th}(\check R), \lambda(e) > 0} \mathscr L_e\cong \bigoplus_{j=1,2}\bigoplus_{e \in C^{\rm int}_{\rm th}(\check R(\frak w_j)), \lambda(e) > 0} \mathscr L_e. $$ This is because the set of the edges of positive level of $\check R$ is the union of the sets of the edges of positive level of $\check R(\frak w_1)$ and $\check R(\frak w_2)$. 
Therefore, we obtain a bundle map: $$ \tilde F_{\frak u} : \mathcal E_{\frak u,\Xi} \to \mathcal E_{\frak u_{\frak w_1},\Xi_{1}} \oplus \mathcal E_{\frak u_{\frak w_2},\Xi_{2}} $$ which is a lift of $F_{\frak u}$. The bundle map $\tilde F_{\frak u}$ is an isomorphism on each fiber. Therefore, we have proved (a), (b) and (c) of Theorem \ref{lema362rev} (2). Parts (d), compatibility with Kuranishi maps, and (e), compatibility with the parametrization maps, are obvious from the construction. Item (3), compatibility with the coordinate changes, is also an immediate consequence of the definitions. It remains to prove that $F_{\frak u}$ is an isomorphism outside the strata of codimension 2. For this purpose, it suffices to consider the cases where $ \check R(\frak w_1)$ and $\check R(\frak w_2)$ have no vertex of positive level. Note that if we ignore the parameters $\rho_i$, then the map $F_{\frak u}$ is a bijection. In the present case where there is no vertex of positive level, there are no parameters $\rho_i$. This completes the proof of Lemma \ref{lem684}. \end{proof} \subsection{Existence of disk-component-wise Obstruction Bundle Data} \label{subsub:existobst} The main goal of this subsection is to prove: \begin{prop}\label{lem685} There exists a system of obstruction bundle data which is disk-component-wise. \end{prop} The proof is divided into five parts. In Parts 1--3 we define various objects (OBI)-(OBIII) and formulate certain conditions we require them to satisfy. We then show that we can use them to obtain a system of obstruction bundle data which is disk-component-wise (Part 4). Finally, in Part 5, we show the existence of objects satisfying the required conditions. \subsubsection*{Disk-component-wise Obstruction Bundle Data: Part 1} Suppose $E$ is a positive real number and $N$ is a positive integer. Let $\mathscr{TP}$ be the set of all pairs $(k,\beta)$ such that $\mathcal M_{k+1}^{\rm RGW}(L;\beta) \ne \emptyset$, $\omega\cap \beta\leq E$ and $k\leq N$. 
For $(k,\beta), (k',\beta') \in \mathscr{TP}$, we say $(k',\beta') < (k,\beta)$ if either $\beta'\cap \omega < \beta \cap \omega$, or $\beta'\cap \omega = \beta \cap \omega$ and $k' < k$. We also say $(k',\beta') \le (k,\beta)$ if $(k',\beta') < (k,\beta)$ or $(k',\beta') = (k,\beta)$. By the Gromov compactness theorem, for each $(k,\beta) \in \mathscr{TP}$ the set $\{ (k',\beta') \in \mathscr{TP} \mid (k',\beta') < (k,\beta)\}$ is finite. \par\medskip \noindent {\bf (OBI):} For $(k,\beta) \in \mathscr{TP}$, $\frak P(k+1,\beta)$ is a finite subset of ${\rm Int}(\mathcal M_{k+1}^{\rm RGW}(L;\beta))$, the interior of the moduli space $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$. To be more specific, the space ${\rm Int}(\mathcal M_{k+1}^{\rm RGW}(L;\beta))$ consists of the elements whose source curves have only one disc component. \par\smallskip Let $\frak p \in \frak P(k+1,\beta)$. We write $\Sigma_{\frak p}$ for the source curve of $\frak p$ and $u_{\frak p} : (\Sigma_{\frak p},\partial \Sigma_{\frak p}) \to (X,L)$ for the map part of $\frak p$. Let $\check R_{\frak p}$ be the very detailed tree describing the combinatorial type of $\frak p$. For $v \in C^{\rm int}_{0}(\check R_{\frak p})$, we denote the corresponding component of $\Sigma_{\frak p}$ by $\Sigma_{\frak p_v}$ and the restriction of $\frak p$ to $\Sigma_{\frak p_v}$ by $\frak p_{v}$. \par\smallskip \noindent {\bf (OBII):} For any $v \in C^{\rm int}_{0}(\check R_{\frak p})$, we take a finite dimensional subspace: $$ E_{\frak p_{v}} \subseteq \begin{cases} C^{\infty}(\Sigma_{\frak p_{v}};u^*_{\frak p_{v}}TX \otimes \Lambda^{0,1})&\text{if $c(v)={\rm d}$ or ${\rm s}$}, \\ C^{\infty}(\Sigma_{\frak p_{v}};u_{\frak p_{v}}^*T\mathcal D \otimes \Lambda^{0,1})&\text{if $c(v)={\rm D}$,} \end{cases} $$ whose support is away from the nodal and marked points and the boundary of $\Sigma_{\frak p_v}$. 
\par\smallskip We require: \begin{conds} The restriction of $u_{\frak p}$ to a neighborhood of ${\rm Supp}(E_{\frak p_{v}})$ is a smooth embedding. In particular, if $E_{\frak p_{v}}$ is nonzero, then $u_{{\frak p}_{v}}$ is non-constant. \end{conds} \subsubsection*{Disk-component-wise Obstruction Bundle Data: Part 2} We fix an element $\frak u=(\Sigma_{\frak u,v},z_{\frak u, v},u_{\frak u, v}; v\in C_0^{\rm int}(\check R_{\frak u}))$ of $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$, where $\check R_{\frak u}$ is the very detailed tree assigned to $\frak u$. There is a forgetful map from the moduli space $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ to the moduli space of stable discs $\mathcal M_{k+1}^{\rm d}$, which, for any $\frak u\in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$, first forgets all the data of $\frak u$ except the source curve $\Sigma_{\frak u}$ and then shrinks the unstable components. There is a metric space $\mathcal C_{k+1}^{\rm d}$, called the {\it universal family}, with a map $\pi:\mathcal C_{k+1}^{\rm d} \to \mathcal M_{k+1}^{\rm d}$ such that $\pi^{-1}(\zeta)$, for $\zeta\in \mathcal M_{k+1}^{\rm d}$, is a representative for $\zeta$. (See, for example, \cite[Section 2]{fooo:const1} or \cite[Subsection 5.1]{DF1}.) We pull back $\mathcal C_{k+1}^{\rm d}$ to $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ via the forgetful map to obtain the space $\mathcal C^{\rm RGW}_{k+1}(L;\beta)$ with the projection map $\pi_{\rm RGW}:\mathcal C^{\rm RGW}_{k+1}(L;\beta) \to \mathcal M^{\rm RGW}_{k+1}(L;\beta)$. The pull-back of the metric on $\mathcal C_{k+1}^{\rm d}$ to $\mathcal C^{\rm RGW}_{k+1}(L;\beta)$ defines a quasi-metric\footnote{A quasi-metric is a distance function which satisfies the reflexive property and the triangle inequality, but we allow two distinct points to have distance zero.} on $\mathcal C^{\rm RGW}_{k+1}(L;\beta)$. 
Here we obtain a quasi-metric because the forgetful map from $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ to $\mathcal M_{k+1}^{\rm d}$ is not injective. Note that this quasi-metric is in fact a metric on each fiber $\pi_{\rm RGW}^{-1}(\frak u)$. The fiber $\pi_{\rm RGW}^{-1}(\frak u)$ can be identified with a quotient of $\Sigma_{\frak u}$. Thus by pulling back the metric on each fiber $\pi_{\rm RGW}^{-1}(\frak u)$, we define a quasi-metric on the source curve $\Sigma_{\frak u}$ of $\frak u$. \begin{lem}\label{delta-k-b} For each $\beta$ and $k$, there is a positive constant $\delta(k,\beta)$ with the following property. If $\frak u\in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$ and $v\in C^{\rm int}_0(\check R_{\frak u})$ is a vertex such that $u_{\frak u,v}:\Sigma_{\frak u,v}\to X$ is a non-constant map, then there is $x\in \Sigma_{\frak u,v}$ such that the distance between $x$ and any nodal or boundary point of $\Sigma_{\frak u}$ is greater than $\delta(k,\beta)$. Moreover, if $x'\in \Sigma_{\frak u}$ is another point such that $u_{\frak u}(x)=u_{\frak u}(x')$, then the distance between $x$ and $x'$ is greater than $\delta(k,\beta)$. \end{lem} \begin{proof} Given any $\frak u$, there is a constant $\delta(k,\beta,\frak u)$ such that the lemma holds for any non-constant irreducible component $u_{\frak u,v}$ of $u_{\frak u}$. In fact, there is a neighborhood $\mathcal U(\frak u)$ such that the lemma holds for the constant $\delta(k,\beta,\frak u)$ and any $\frak u' \in \mathcal U(\frak u)$. Now we can conclude the lemma using compactness of $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$. \end{proof} In the following definition, $\epsilon(k',\beta')$ is a constant which shall be fixed later. \begin{defn}\label{def1014} A triple $(\frak u,\frak p,\phi)$ is said to be a {\it local approximation} to ${\frak u}$ if the following holds: \begin{enumerate} \item There is $(k',\beta')\le (k,\beta)$ such that $\frak p \in \frak P(k'+1,\beta')$. 
\item $\phi$ is a smooth embedding from a neighborhood of $\bigcup_v {\rm Supp}(E_{\frak p_{v}})$ to $\Sigma_{\frak u}$. If $x$ belongs to the image of $\phi$, then its distance to the nodal points in $\Sigma_{\frak u}$ is greater than $\delta(k',\beta')$. For each $v\in C_0^{\rm int}(\check R_{\frak p})$, there is $v'\in C_0^{\rm int}(\check R_{\frak u})$ such that $\phi$ maps ${\rm Supp}(E_{\frak p_{v}})$ to $\Sigma_{\frak u,v'}$. Furthermore, if $x'$ is another point in the source curve of $\frak u$ such that $u_{\frak u}(x)=u_{\frak u}(x')$, then the distance between $x$ and $x'$ is greater than $\delta(k',\beta')$. \item For each $v$, we require: $$ d_{C^2;{\rm Supp}(E_{\frak p_{v}})}(u_{\frak u} \circ \phi,u_{\frak p}) < \epsilon(k',\beta'). $$ \item $\phi$ satisfies the following point-wise inequality: $$ \vert \overline\partial \phi \vert < \vert \partial \phi \vert/100. $$ \end{enumerate} \end{defn} The next definition is similar to Condition \ref{cods612}. \begin{defn}\label{defn1015} Let $(\frak u,\frak p,\phi)$ be a local approximation to ${\frak u}$. We say a map $\hat\phi$ from a neighborhood of $\bigcup_v {\rm Supp}(E_{\frak p_{v}})$ to $\Sigma_{\frak u}$ is a {\it normalization} of $\phi$ if the following holds: \begin{enumerate} \item If $x$ belongs to the image of $\hat \phi$, then its distance to the nodal points in $\Sigma_{\frak u}$ is greater than $\frac{1}{3}\delta(k',\beta')$. Furthermore, if $x'$ is another point in the source curve of $\frak u$ such that $u_{\frak u}(x)=u_{\frak u}(x')$, then the distance between $x$ and $x'$ is greater than $\frac{1}{3}\delta(k',\beta')$. \item For each $v$, we require: $$ d_{C^2;{\rm Supp}(E_{\frak p_{v}})}(u_{\frak u} \circ \hat\phi,u_{\frak p}) < 2\epsilon(k',\beta'). $$ and: \[ d_{C^0;{\rm Supp}(E_{\frak p_{v}})}(\hat\phi,\phi) < \frac{\delta(k',\beta')}{3}. 
\] \item Let $z$ be a point in a neighborhood of ${\rm Supp}(E_{\frak p_{v}})$: \begin{enumerate} \item Suppose $z$ is in a component with color ${\rm d}$ or ${\rm s}$. We take the unique minimal geodesic $\gamma$ in $X\setminus \mathcal D$ (with respect to the metric $g$) which joins $u_{\frak p}(z)$ to $(u_{\frak u} \circ \hat\phi)(z)$. Then: $$ \frac{d\gamma}{dt}(0) \perp T_{u_{\frak p}(z)} u_{\frak p}(\Sigma_{\frak p}). $$ \item Suppose $z$ and $\hat\phi(z)$ are in a component with color ${\rm D}$. We take the unique minimal geodesic $\gamma$ in $\mathcal D$ (with respect to the metric $g'$) which joins $u_{\frak p}(z)$ to $(u_{\frak u} \circ \hat\phi)(z)$. Then: $$ \frac{d\gamma}{dt}(0) \perp T_{u_{\frak p}(z)} u_{\frak p}(\Sigma_{\frak p}). $$ \item Suppose $z$ is in a component with color ${\rm D}$ and $\hat\phi(z)$ is in a component with color ${\rm d}$ or ${\rm s}$. We take the unique minimal geodesic $\gamma$ in $\mathcal D$ (with respect to the metric $g'$) which joins $u_{\frak p}(z)$ to $(\pi\circ u_{\frak u} \circ \hat\phi)(z)$. Then: $$ \frac{d\gamma}{dt}(0) \perp T_{u_{\frak p}(z)} u_{\frak p}(\Sigma_{\frak p}). $$ \end{enumerate} \end{enumerate} \end{defn} \begin{lem} If the constant $\epsilon(k',\beta')$ is small enough, then for any local approximation $(\frak u,\frak p,\phi)$ to ${\frak u} \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$, there exists a normalization $\hat\phi$ of $\phi$, and any other normalization $\hat\psi$ of $\phi$ satisfies: \[ \hat\phi|_{\bigcup_v {\rm Supp}(E_{\frak p_{v}})} = \hat\psi|_{\bigcup_v {\rm Supp}(E_{\frak p_{v}})}. \] \end{lem} \begin{proof} This is a consequence of the implicit function theorem and compactness of $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$. \end{proof} From now on, we assume that the constant $\epsilon(k',\beta')$ satisfies the assumption of this lemma. \begin{defn} For $j=1,2$, suppose $(\frak u,\frak p_j,\phi_j)$ is a local approximation to $\frak u$. 
We say these two approximations are {\it equivalent} if $\frak p_1 =\frak p_2$ and $$ \hat\phi_1|_{\bigcup_v {\rm Supp}(E_{\frak p_{v}})} = \hat\phi_2|_{\bigcup_v {\rm Supp}(E_{\frak p_{v}})}. $$ This is obviously an equivalence relation. Each equivalence class is called a {\it quasi component} (of $\frak u$). See Figure \ref{quasi-comp} for a schematic picture of a quasi component. \end{defn} \begin{figure}[h] \centering \includegraphics[scale=0.6]{Figurecomponent} \caption{$[\frak u,\frak p,\phi]$ is a quasi component of $\frak u$} \label{quasi-comp} \end{figure} We next define obstruction spaces $E_{\frak u,{\bf p},\Xi}(\frak y)$ where ${\bf p}$ is a quasi component of $\frak u$ and ${\frak y}= (\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_{e}),(\rho_{i}))$ is an inconsistent map with respect to a TSD $\Xi$ at $\frak u$. The definition is similar to the corresponding definitions in Subsection \ref{subsub:chooseobst}. The TSD $\Xi$ induces a holomorphic embedding: \[ \psi_{\frak y,\frak u}: \bigcup_{v\in C^{\rm int}_0(\check R_{\frak u})}\Sigma_{\frak u,v}^-\to \Sigma(\vec{\frak x},\vec{\sigma}). \] By taking $\Xi$ to be small enough, we may assume the image of $\phi$ is contained in the domain of $\psi_{\frak y,\frak u}$. Define: $$ \phi_{\frak y,\frak p} = \psi_{\frak y,\frak u} \circ \phi. $$ Using the implicit function theorem, we can modify $\phi_{\frak y,\frak p}$ to obtain a map $\hat\phi_{\frak y;\frak p}$ from a neighborhood of $\bigcup_v {\rm Supp}(E_{\frak p_{v}})$ to $\Sigma_{\frak y}$ such that the analogue of Definition \ref{defn1015} is satisfied. This map is clearly independent of the representative of ${\bf p}$, and it is also independent of $\Xi$ if this TSD is sufficiently small. 
By replacing $I^{\rm t}_{\rm d}(x)$, $I^{\rm t}_{\rm s}(x)$ and $I^{\rm t}_{\rm D}(x)$ with $\hat\phi_{\frak y;\frak p}$ and using the vector spaces $E_{\frak p_{v}}$, we obtain: \begin{equation}\label{form6189} E_{\frak u,{\bf p},\Xi}(\frak y) \subset \bigoplus_{v \in C^{\rm int}_0(\check R_{\frak y})} L^2_{m}(\Sigma_{\frak y,v};T \otimes \Lambda^{0,1}) \end{equation} in the same way as in \eqref{newform620}. Here $\check R_{\frak y}$ is the very detailed tree describing the combinatorial type of $\frak y$. \subsubsection*{Disk-component-wise Obstruction Bundle Data: Part 3} Our obstruction bundle data $E_{\frak u,\Xi}(\frak y)$ is a direct sum of the spaces $E_{\frak u,{\bf p},\Xi}(\frak y)$ for an appropriate set of quasi components ${\bf p}$ of $\frak u$. Our next task is to find a way to choose this set of quasi components. \begin{defn} For ${\frak u} \in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$, we denote by $\mathscr{QC}(k,\beta)(\frak u)$ the set of all quasi components of ${\frak u}$. Let: $$ \mathscr{QC}(k,\beta) := \bigcup_{{\frak u} \in \mathcal M_{k+1}^{\rm RGW}(L;\beta)} \mathscr{QC}(k,\beta)(\frak u). $$ The map \begin{equation} \label{Pi} \Pi : \mathscr{QC}(k,\beta) \to \mathcal M_{k+1}^{\rm RGW}(L;\beta) \end{equation} is the obvious projection. \end{defn} \begin{lem} If the constant $\epsilon(k',\beta')$ is small enough, then $\mathscr{QC}(k,\beta)(\frak u)$ is a finite set. \end{lem} \begin{proof} By Gromov compactness, there are only finitely many $(k',\beta') \le (k,\beta)$ such that $\mathcal M^{\rm RGW}_{k'+1}(L;\beta') \ne \emptyset$. Let $(k',\beta')$ be such a pair and $\frak p$ be an element of the finite set $\frak P(k'+1,\beta')$. Assume $y$ is an element of the source curve of $\frak p$. There is a neighborhood $U_y$ of $y$ in the source curve of $\frak p$ such that if $\epsilon(k',\beta')$ is small enough, then the following holds. 
Let $[\frak u,\frak p,\phi]$ and $[\frak u,\frak p,\psi]$ be two quasi components of an element $\frak u\in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$ with $\hat \phi$ and $\hat \psi$ being the normalizations of $\phi$ and $\psi$. If $\hat \phi(y)\neq\hat \psi(y)$, then $\hat \phi|_{U_y}$ and $\hat \psi|_{U_y}$ have disjoint images. This implies that, given the element $\frak u$, there are only finitely many possibilities for the restriction of the normalization map to $U_y$. Therefore, there are finitely many quasi components of the form $[\frak u,\frak p,\phi]$ for $\frak u$. Finiteness of the sets $\frak P(k'+1,\beta')$ completes the proof. \end{proof} We next define a topology on $\mathscr{QC}(k,\beta)$. Let ${\frak u} \in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$, $\Xi$ be a TSD at $\frak u$ and $\mathcal U({\frak u},\Xi)$ be the associated set of inconsistent solutions. We construct a map $$ \frak I : \mathcal U({\frak u},\Xi)\cap\mathcal M_{k+1}^{\rm RGW}(L;\beta) \to \mathscr{QC}(k,\beta) $$ such that $\Pi \circ \frak I = {\rm id}$, assuming $\Xi$ is small enough. The TSD $\Xi$ induces a map: $$ \psi_{\frak u',\frak u}: \bigcup_{v\in C^{\rm int}_0(\check R_{\frak u})} \Sigma_{\frak u,v}^-\to \Sigma_{\frak u'} $$ for $\frak u' \in \mathcal U({\frak u},\Xi)$. Let $(\frak u,\frak p,\phi)$ be a local approximation to ${\frak u}$. If $\Xi$ is sufficiently small, then $(\frak u,\frak p,\psi_{\frak u',\frak u}\circ \phi)$ is a local approximation to ${\frak u}'$. Using the implicit function theorem, it is easy to see that the equivalence class of $(\frak u,\frak p,\psi_{\frak u',\frak u}\circ \phi)$ depends only on the equivalence class of $(\frak u,\frak p,\phi)$. We thus obtain the map $\frak I$. In a small neighborhood of $\frak u$, this map is independent of the choice of $\Xi$. The map $\frak I$ is also injective.
\begin{defn}\label{def1020} A neighborhood system of a quasi component ${\bf p} = [\frak u,\frak p,\phi]$ of $\frak u$ in $\mathscr{QC}(k,\beta)$ is given by the images of a neighborhood system of $\frak u$ in $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ under the map $\frak I$. \end{defn} It is straightforward to see from the above definition that: \begin{lem} $\mathscr{QC}(k,\beta)$ is Hausdorff and metrizable with respect to this topology. For each quasi component ${\bf p}$ of $\frak u$, there exists a neighborhood of ${\bf p}$ in $\mathscr{QC}(k,\beta)$ such that the restriction of $\Pi$ to this neighborhood is a homeomorphism onto an open subset. \end{lem} Let $\mathscr F$ be a subset of $\mathscr{QC}(k,\beta)$. For $\frak u \in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$, we define: $$ \mathscr F(\frak u) = \Pi^{-1}(\frak u) \cap \mathscr F. $$ This assignment maps each $\frak u$ to a finite set of quasi components of $\frak u$. Justified by this, we call $\mathscr F$ a {\it quasi component choice map}. \begin{defn} A quasi component choice map $\mathscr F$ is open (resp. closed) if it is an open (resp. closed) subset of $\mathscr{QC}(k,\beta)$ (with respect to the topology of Definition \ref{def1020}). We say $\mathscr F$ is proper if the restriction of $\Pi$ to $\mathscr F$ is proper. \end{defn} \par\smallskip \noindent {\bf (OBIII):} For each $(k,\beta) \in \mathscr{TP}$, we take quasi component choice maps $\mathscr F_{k,\beta}$ and $\mathscr F^{\circ}_{k,\beta}$. \par\medskip We are mainly concerned with the objects as in (OBIII) which satisfy the following condition: \begin{conds}\label{conds1023} The quasi component choice map $\mathscr F^{\circ}_{k,\beta}$ is open and is a subset of $\mathscr F_{k,\beta}$. The quasi component choice map $\mathscr F_{k,\beta}$ is proper. \end{conds} The next condition is related to the disk-component-wise-ness. Let ${\frak u} \in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$ and $\check R$ be the very detailed tree associated to $\frak u$.
Let $\check R$ belong to the disk splitting tree $\mathcal S$. Following the discussion at the beginning of Subsection \ref{subsub:componentwise}, we obtain an element $\frak u_{\frak w} \in \mathcal M_{k_{\frak w}+1}^{\rm RGW}(L;\beta_{\frak w})$ for each interior vertex $\frak w$ of $\mathcal S$. Define: \begin{equation}\label{form1012} \mathscr I_{\frak w} : \mathscr{QC}(k_{\frak w},\beta_{\frak w})(\frak u_{\frak w}) \to \mathscr{QC}(k,\beta)(\frak u) \end{equation} to be the map given by: $$ \mathscr I_{\frak w}([\frak u_{\frak w},\frak p,\phi]) = [\frak u,\frak p,\phi]. $$ The target of $\phi$ on the left hand side is $\Sigma_{\frak u_{\frak w}}$, the source curve of $\frak u_{\frak w}$, which is a subset of the source curve $\Sigma_{\frak u}$ of $\frak u$. So $\phi$ on the right hand side is regarded as a map to $\Sigma_{\frak u}$. It is clear that $\mathscr I_{\frak w}$ maps equivalent objects to equivalent ones and hence the above definition is well-defined. \begin{lem} The map $\mathscr I_{\frak w}$ is injective. If $\frak w \ne \frak w'$, then the image of $\mathscr I_{\frak w}$ is disjoint from the image of $\mathscr I_{\frak w'}$. \end{lem} \begin{proof} The injectivity is obvious from the definition. If $[\frak u,\frak p,\phi]$ is in the image of $\mathscr I_{\frak w}$, then the image of $\phi$ is contained in the component $\Sigma_{\frak u,\frak w}$ corresponding to $\frak w$. Therefore, for $\frak w \ne \frak w'$, the images of the maps $\mathscr I_{\frak w}$ and $\mathscr I_{\frak w'}$ are disjoint. \end{proof} \begin{conds}\label{conds1025} Let ${\frak u} \in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$ and $\check R$ and $\mathcal S$ be given as above.
Then we require: \begin{equation} \aligned \mathscr F_{k,\beta}({\frak u}) &= \bigcup_{\frak w} \mathscr I_{\frak w} \left( \mathscr F_{k_{\frak w},\beta_{\frak w}} ({\frak u}_{\frak w}) \right),\\ \mathscr F^{\circ}_{k,\beta}({\frak u}) &= \bigcup_{\frak w} \mathscr I_{\frak w} \left( \mathscr F^{\circ}_{k_{\frak w},\beta_{\frak w}}({\frak u}_{\frak w}) \right). \endaligned \end{equation} \end{conds} We need the following definition to state the next condition: \begin{defn} Let ${\frak u} \in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$ and $\frak y$ be an inconsistent map with respect to a TSD $\Xi$ at $\frak u$. We assume $\Xi$ is sufficiently small such that the vector spaces $E_{\frak u,{\bf p},\Xi}(\frak y)$ in \eqref{form6189} are well-defined. We define: \begin{equation}\label{form19223rev} \aligned E_{\frak u,\mathscr F,\Xi}(\frak y):= &\sum_{[\bf p] \in \mathscr F_{k,\beta}({\frak u})}E_{\frak u,{\bf p},\Xi}(\frak y) \subset \bigoplus_{v \in C^{\rm int}_0(\check R)} L^2_{m}(\Sigma_{\frak y,v} ;T \otimes \Lambda^{0,1}). \endaligned \end{equation} where $\sum$ denotes the sum of vector subspaces of a vector space. Similarly, we define: \begin{equation}\label{form19223revrev} \aligned E_{\frak u,\mathscr F^{\circ},\Xi}(\frak y):= &\sum_{[\bf p] \in \mathscr F^{\circ}_{k,\beta}({\frak u})}E_{\frak u,{\bf p},\Xi}(\frak y) \subset \bigoplus_{v \in C^{\rm int}_0(\check R)} L^2_{m}(\Sigma_{\frak y,v} ;T \otimes \Lambda^{0,1}). \endaligned \end{equation} Note that $E_{\frak u,\mathscr F^{\circ},\Xi}(\frak y) \subseteq E_{\frak u,\mathscr F,\Xi}(\frak y)$. \end{defn} \begin{conds}\label{conds1027} We require that the sum in \eqref{form19223rev} is a direct sum for $\frak y = \frak u$. Namely, \begin{equation}\label{form19223revrevrev} \aligned E_{\frak u,\mathscr F,\Xi}(\frak u)= &\bigoplus_{[\bf p] \in \mathscr F_{k,\beta}({\frak u})}E_{\frak u,{\bf p},\Xi}(\frak u).
\endaligned \end{equation} \end{conds} Note that the above condition implies that the sum in \eqref{form19223revrev} for $\frak y = \frak u$ is also a direct sum. \begin{defn}\label{defn68899rev} We say the linearization of the non-linear Cauchy-Riemann equation is {\it transversal} to $E_{\frak u,\mathscr F^\circ,\Xi}(\frak u)$ if the sum of the image of the operator $D_{\frak u}\overline\partial$ in (\ref{form6103}) and $E_{\frak u,\mathscr F^\circ,\Xi}(\frak u)$ is $L^2_{m,\delta}(\frak u;T \otimes \Lambda^{0,1})$. \end{defn} \begin{defn}\label{defn68899revref} Consider the operator: \[ \mathcal{EV}_{z_0} : W^2_{m+1,\delta}(\frak u;T) \to T_{u(z_0)}L \] given by evaluation at the point $z_0$, the 0-th boundary marked point of the source of $\frak u$. Recall that the Hilbert space $W^2_{m+1,\delta}(\frak u;T)$ is the domain of the operator $D_{\frak u}\overline\partial$ in \eqref{form6103}. We say $E_{\frak u,\mathscr F^\circ,\Xi}(\frak u)$ satisfies the {\it mapping transversality} property if the restriction: \[ \mathcal{EV}_{z_0}\vert_{D_{\frak u}\overline\partial^{-1}(E_{\frak u,\mathscr F^\circ,\Xi}(\frak u))} : D_{\frak u}\overline\partial^{-1}(E_{\frak u,\mathscr F^\circ,\Xi}(\frak u))\to T_{u(z_0)}L, \] is surjective. \end{defn} \begin{conds}\label{conds30} We require that the linearization of the non-linear Cauchy-Riemann equation is transversal to $E_{\frak u,\mathscr F^\circ,\Xi}(\frak u)$ and $E_{\frak u,\mathscr F^\circ,\Xi}(\frak u)$ satisfies the mapping transversality property. \end{conds} Let ${\rm Aut}(\frak u)$ be the group of automorphisms of $\frak u$. If $\gamma \in {\rm Aut}(\frak u)$ and $[\frak u,\frak p,\phi]$ is a quasi component of $\frak u$, then $[\frak u,\frak p,\gamma\circ\phi]$ is also a quasi component of $\frak u$.
Thus ${\rm Aut}(\frak u)$ acts on $\mathscr{QC}(k,\beta)(\frak u)$. \begin{conds}\label{conds31} We require that $\mathscr F_{k,\beta}({\frak u})$, $\mathscr F^{\circ}_{k,\beta}({\frak u})$ are invariant with respect to the action of ${\rm Aut}(\frak u)$. \end{conds} \subsubsection*{Disk-component-wise Obstruction Bundle Data: Part 4} Given the above objects satisfying the mentioned conditions, we can construct the desired obstruction bundle data: \begin{lem} Suppose we are given the objects in {\rm (OBI)}-{\rm (OBIII)}, which satisfy Conditions \ref{conds1023}, \ref{conds1025}, \ref{conds1027}, \ref{conds30}, \ref{conds31}. Then $\{E_{\frak u,\mathscr F,\Xi}(\frak y)\}$ is a disk-component-wise system of obstruction bundle data. \end{lem} \begin{proof} The system $\{E_{\frak u,\mathscr F,\Xi}(\frak y)\}$ satisfies Definition \ref{defn684684} (1), (2) and (4) as immediate consequences of the construction. Definition \ref{defn684684} (3) is a consequence of the properness of $\mathscr F_{k,\beta}$. (Compare to Lemma \ref{loem668}.) Definition \ref{defn684684} (5) is a consequence of Condition \ref{conds30}. Definition \ref{defn684684} (6) is a consequence of Condition \ref{conds31}. Disk-component-wise-ness is an immediate consequence of Condition \ref{conds1025}. \end{proof} \subsubsection*{Disk-component-wise Obstruction Bundle Data: Part 5} To complete the proof of Proposition \ref{lem685}, it suffices to prove the next lemma. \begin{lem}\label{eximanuyobj} There exist objects {\rm (OBI)}-{\rm (OBIII)} which satisfy Conditions \ref{conds1023}, \ref{conds1025}, \ref{conds1027}, \ref{conds30}, \ref{conds31}. \end{lem} \begin{proof} The proof is by induction on $(k,\beta)$ and is given in several steps. {\bf Step 1} ({\it The base of induction}): We assume that $(k,\beta)$ is minimal with respect to the order $<$. In this case, the moduli space $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ has no boundary. We follow a similar argument as in Subsection \ref{subsub:chooseobst}.
For each $\frak p \in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$, we fix a vector space $E_{\frak p, v}$ for $v \in C^0_{\rm int}(\check R_{\frak p})$ as in (OBII). We require that the linearization of the non-linear Cauchy-Riemann equation is transversal to \begin{equation}\label{form1017} \bigoplus_{v \in C^0_{\rm int}(\check R_{\frak p})} E_{\frak p, v} \end{equation} at $\frak p$ (Definition \ref{defn68899rev}) and \eqref{form1017} has the mapping transversality property at $\frak p$ (Definition \ref{defn68899revref}). Using Lemma \ref{delta-k-b}, we may assume that the distance of any point $x$ in the support of the elements of $E_{\frak p, v}$ to nodal points of $\Sigma_{\frak p}$ is at least $\delta(k,\beta)$. Moreover, if $x'$ is another point in the source curve of $\frak p$ such that $u_{\frak p}(x)=u_{\frak p}(x')$, then the distance between $x$ and $x'$ is greater than $\delta(k,\beta)$. For each $\frak p$, we also pick a TSD $\Xi_{\frak p}$ and a compact neighborhood $\mathcal K(\frak p)$ of $\frak p$ in $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$, which satisfy the following conditions. First, we require that the compact neighborhood $\mathcal K(\frak p)$ is included in the set of inconsistent maps determined by $\Xi_{\frak p}$. Thus for any $\frak u \in \mathcal K(\frak p)$, there is a holomorphic embedding $\phi_{\frak u,\frak p}$ from a neighborhood of $\bigcup_v {\rm Supp}(E_{\frak p_{v}})$ to $\Sigma_{\frak u}$, assuming that $\Xi_{\frak p}$ is small enough. We may also assume that we can normalize $\phi_{\frak u,\frak p}$ as in Definition \ref{defn1015} to obtain: \[ \hat\phi_{\frak u,\frak p} : \bigcup_v {\rm Supp}(E_{\frak p_{v}}) \to \Sigma_{\frak u}. \] We use $\hat\phi_{\frak u,\frak p}$ to transport $E_{\frak p, v}$ to the point $\frak u$ and obtain $E_{\frak p, v}(\frak u)$.
We may choose $\Xi_{\frak p}$ small enough such that the linearization of the non-linear Cauchy-Riemann equation is transversal to \begin{equation}\label{form1018} E_{\frak p}(\frak u) = \bigoplus_{v \in C^0_{\rm int}(\check R_{\frak p})} E_{\frak p,v}(\frak u) \end{equation} at $\frak u$ and \eqref{form1018} satisfies the mapping transversality property at $\frak u$, for any $\frak u \in \mathcal K(\frak p)$. Now we take a finite set $\frak P(k+1,\beta) \subset \mathcal M_{k+1}^{\rm RGW}(L;\beta)$ such that \begin{equation}\label{form1019} \bigcup_{\frak p \in \frak P(k+1,\beta)} {\rm Int}\, \mathcal K(\frak p) = \mathcal M_{k+1}^{\rm RGW}(L;\beta). \end{equation} We define: \begin{equation} \aligned \mathscr F_{k,\beta}(\frak u) &= \{[\frak u,\frak p,\phi_{\frak u,\frak p}] \mid \frak p \in \frak P(k+1,\beta),\,\, \frak u \in \mathcal K(\frak p)\},\\ \mathscr F^{\circ}_{k,\beta}(\frak u) &= \{[\frak u,\frak p,\phi_{\frak u,\frak p}] \mid \frak p \in \frak P(k+1,\beta),\,\, \frak u \in {\rm Int}\, \mathcal K(\frak p)\}. \endaligned \end{equation} Condition \ref{conds1023} is immediate from the definition. Condition \ref{conds1025} is void in this case. We can perturb $E_{\frak p,v}$ by an arbitrarily small amount in the $C^2$ topology so that Condition \ref{conds1027} holds. (See Lemma \ref{sum=direct-sum}.) Condition \ref{conds30} follows from the choice of $E_{\frak p,v}$ and \eqref{form1019}. We can take $E_{\frak p,v}$ to be invariant under the action of ${\rm Aut}(\frak p)$ and hence Condition \ref{conds31} holds. This completes the first step of the induction. Next, we suppose that the required objects in {\rm (OBI)}-{\rm (OBIII)} are defined for $(k',\beta')$ with $(k',\beta') < (k,\beta)$. We use Condition \ref{conds1025} to define $\mathscr F'_{k,\beta}(\frak u)$, $\mathscr F^{\prime \circ}_{k,\beta}(\frak u)$ for $\frak u \in \partial \mathcal M_{k+1}(L;\beta)$.
\vspace{3pt} {\bf Step 2:}{\it\, The set: \[ \bigcup_{\frak u \in \partial \mathcal M_{k+1}(L;\beta)} \mathscr F^{\prime \circ}_{k,\beta}(\frak u) \] is an open subset of $\Pi^{-1}(\partial \mathcal M_{k+1}(L;\beta))$, where $\Pi$ is the map in \eqref{Pi}.} \vspace{3pt} Let ${\bf p}_j \in \mathscr{QC}(k,\beta)$ with $\frak u_j = \Pi({\bf p}_j) \in \partial \mathcal M^{\rm RGW}_{k+1}(L;\beta)$. Suppose also $\lim_{j\to \infty} {\bf p}_j = {\bf p} \in \mathscr F^{\prime \circ}_{k,\beta}(\frak u)$ with $ \frak u=\lim_{j\to \infty} \frak u_j$. We need to show that ${\bf p}_j \in \mathscr F^{\prime \circ}_{k,\beta}(\frak u_j)$ for sufficiently large values of $j$. Let the combinatorial type of $\frak u$ be given by a very detailed DD-ribbon tree $\check R$ which belongs to the disk splitting tree $\mathcal S$. We may assume that the very detailed tree associated to $\frak u_j$ is independent of $j$ because there are finitely many very detailed trees obtained by level shrinking, level $0$ edge shrinking and fine edge shrinking. We denote this very detailed DD-ribbon tree by $\check R'$. We also assume that $\check R'$ belongs to the disk splitting tree $\mathcal S'$. Since $\mathcal S'$ is obtained from $\mathcal S$ by shrinking level $0$ edges, there is a standard shrinking map $\pi:\mathcal S\to \mathcal S'$. Note that $\mathcal S$ and $\mathcal S'$ have at least two interior vertices. By Condition \ref{conds1025}, there exist $\frak w \in C^0_{\rm int}(\mathcal S)$ and ${\bf p}_{\frak w} \in \mathscr F^{\circ}_{k_{\frak w},\beta_{\frak w}}(\frak u_{\frak w})$ such that $$ {\bf p} = \mathscr I_{\frak w} ({\bf p}_{\frak w}). $$ Let ${\frak w}' = \pi(\frak w)$, which is an interior vertex of $\mathcal S'$. We also define $ \mathcal S(\frak w'):=\pi^{-1}(\frak w')$, which is a subtree of $\mathcal S$.
Let ${\frak u}_{ \mathcal S(\frak w')}$ be an object of $\mathcal M^{\rm RGW}_{k_{\frak w'}+1}(L;\beta_{\frak w'})$ which is obtained from $\frak u$ and $\mathcal S(\frak w')$ in the same way as in the beginning of Subsection \ref{subsub:componentwise}. The convergence $\lim_{j\to \infty} \frak u_j = \frak u$ implies $$ \lim_{j\to \infty} \frak u_{j,{\frak w}'} = {\frak u}_{ \mathcal S(\frak w')} $$ by the definition of the RGW topology. Since ${\frak w}$ is a vertex of $\mathcal S(\frak w')$, there exists: $$ \mathscr I'_{\frak w} : \mathscr{QC}(k_{\frak w},\beta_{\frak w})(\frak u_{\frak w}) \to \mathscr{QC}(k_{\frak w'},\beta_{\frak w'})( {\frak u}_{ \mathcal S(\frak w')}). $$ We define $ {\bf p}_{\frak w'} = \mathscr I'_{\frak w}({\bf p}_{\frak w}). $ Using the definition of the topology of $\mathscr{QC}(k,\beta)$ and of $\mathscr I_{\frak w}$, it is easy to see that there exists ${\bf p}_{j,\frak w'} \in \mathscr{QC}(k_{\frak w'},\beta_{\frak w'})(\frak u_{j,{\frak w}'})$ such that $$ \lim_{j\to \infty}{\bf p}_{j,\frak w'} = {\bf p}_{\frak w'} $$ in $\mathscr{QC}(k_{\frak w'},\beta_{\frak w'})$ and $$ \mathscr I_{\frak w'} ({\bf p}_{j,\frak w'}) = {\bf p}_{j}. $$ Now, by the induction hypothesis: $$ {\bf p}_{j,\frak w'} \in \mathscr F^{\circ}_{k_{\frak w'},\beta_{\frak w'}}(\frak u_{j,{\frak w'}}) $$ for sufficiently large $j$. Condition \ref{conds1025} implies ${\bf p}_j \in \mathscr F^{\prime \circ}_{k,\beta}(\frak u_j)$ for large $j$, as required. \vspace{3pt} {\bf Step 3:}{\,\it The restriction of $\Pi$ to $$ \bigcup_{\frak u \in \partial \mathcal M_{k+1}(L;\beta)} \mathscr F^{\prime}_{k,\beta}(\frak u) $$ is a proper map to $\partial \mathcal M_{k+1}^{\rm RGW}(L;\beta)$.} \vspace{3pt} Let ${\bf p}_j \in \mathscr F'_{k,\beta}(\frak u_j)$ with $\frak u_j \in \partial \mathcal M^{\rm RGW}_{k+1}(L;\beta)$. Suppose $\lim_{j\to \infty} \frak u_j = \frak u \in \partial \mathcal M^{\rm RGW}_{k+1}(L;\beta)$.
It suffices to find a subsequence of ${\bf p}_j$ which converges to an element of $\mathscr F'_{k,\beta}(\frak u)$. Let the combinatorial type of $\frak u$ be given by a very detailed DD-ribbon tree $\check R$ which belongs to the disk splitting tree $\mathcal S$. After passing to a subsequence, we may assume that the very detailed tree associated to $\frak u_j$ is independent of $j$. We denote this tree by $\check R'$ which belongs to the disk splitting tree $\mathcal S'$. Let $\pi:\mathcal S\to \mathcal S'$ be defined as in the previous step. By Condition \ref{conds1025} and after passing to a subsequence, we may assume that there exist $\frak w \in C^{0}_{\rm int}(\mathcal S')$ and ${\bf p}_{j,\frak w} \in \mathscr F'_{k_{\frak w}, \beta_{\frak w}}(\frak u_{j,\frak w})$ such that $$ \mathscr I_{\frak w}({\bf p}_{j,\frak w}) = {\bf p}_j. $$ Let $\mathcal S_{\frak w}$ be the subtree $\pi^{-1}(\frak w)$ of $\mathcal S$. We obtain $\frak u_{\mathcal S_{\frak w}}$ from $\frak u$ in the same way as in the beginning of Subsection \ref{subsub:componentwise}. Convergence of $\frak u_j$ to $\frak u$ implies that: \[ \lim_{j \to \infty} \frak u_{j,\frak w} = \frak u_{\mathcal S_{\frak w}} \] by the definition of the RGW topology. Now we use the induction hypothesis to find a subsequence such that ${\bf p}_{j,\frak w} \in \mathscr F'_{k_{\frak w},\beta_{\frak w}}(\frak u_{j,\frak w})$ converges to $ {\bf p}_{\frak w} \in \mathscr F'_{k_{\frak w},\beta_{\frak w}}(\frak u_{\mathcal S_{\frak w}}). $ Therefore: $$ \lim_{j\to \infty} {\bf p}_{j} = \mathscr I_{\frak w}({\bf p}_{\frak w}) \in \mathscr F'_{k,\beta}(\frak u). $$ This completes the proof of this step. \vspace{3pt} {\bf Step 4:\,}({\it Extension to a neighborhood of the boundary}) In the previous steps, we defined $\mathscr F^{\prime}_{k,\beta}$ and $\mathscr F^{\prime\circ}_{k,\beta}$ on the boundary. Next, we extend these quasi component choice maps to a neighborhood of the boundary.
We fix $\rho > 0$ sufficiently small such that if $d(\frak u,\frak u') < 5\rho$, $\frak u' \in \partial \mathcal M_{k+1}^{\rm RGW}(L;\beta)$ and $[\frak u',\frak p,\phi] \in \mathscr F'_{k,\beta}(\frak u')$, then $[\frak u,\frak p,\psi_{\frak u,\frak u'}\circ \phi]$ is a quasi component. Then for $\frak u\in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$ with $d(\frak u,\partial \mathcal M_{k+1}^{\rm RGW}(L;\beta)) < 2\rho$, we define: \begin{enumerate} \item $\mathscr F^{\prime}_{k,\beta}(\frak u)$ is the set of $[\frak u,\frak p,\phi]$ such that there are $\frak u' \in \partial \mathcal M^{\rm RGW}_{k+1}(L;\beta)$ and $[\frak u',\frak p,\phi'] \in \mathscr F^{\prime}_{k,\beta}(\frak u')$ with the following properties: \begin{enumerate} \item $d(\frak u,\frak u') \le 2d(\frak u,\partial \mathcal M^{\rm RGW}_{k+1}(L;\beta))\le\rho$. \item $(\frak u,\frak p,\phi)$ is equivalent to $(\frak u,\frak p,\psi_{\frak u,\frak u'}\circ \phi')$. \end{enumerate} \item $\mathscr F^{\prime\circ}_{k,\beta}(\frak u)$ is the set of $[\frak u,\frak p,\phi]$ such that there are $\frak u' \in \partial \mathcal M^{\rm RGW}_{k+1}(L;\beta)$ and $[\frak u',\frak p,\phi'] \in \mathscr F^{\prime\circ}_{k,\beta}(\frak u')$ with the following properties: \begin{enumerate} \item $\frak u = \frak u'$ or $d(\frak u,\frak u') < 2d(\frak u,\partial \mathcal M^{\rm RGW}_{k+1}(L;\beta)) < \rho$. \item $(\frak u,\frak p,\phi)$ is equivalent to $(\frak u,\frak p,\psi_{\frak u,\frak u'}\circ \phi')$. \end{enumerate} \end{enumerate} We put $$ \mathscr F^{\prime\circ}_{k,\beta} = \bigcup_{\frak u} \mathscr F^{\prime\circ}_{k,\beta}(\frak u), \qquad \mathscr F^{\prime}_{k,\beta} = \bigcup_{\frak u} \mathscr F^{\prime}_{k,\beta}(\frak u). $$ It follows easily from Step 2 that $\mathscr F^{\prime\circ}_{k,\beta}$ is open. It follows easily from Step 3 that the restriction of $\Pi$ to $\mathscr F^{\prime}_{k,\beta}$ is proper.
Items (b) in the above definition imply that $\mathscr F^{\prime}_{k,\beta}(\frak u)$ and $\mathscr F^{\prime\circ}_{k,\beta}(\frak u)$ coincide with the previously defined spaces for $\frak u \in \partial \mathcal M^{\rm RGW}_{k+1}(L;\beta)$. Therefore, Condition \ref{conds1025} holds. Thus we have constructed objects for $(k,\beta)$ which satisfy all the required conditions except Condition \ref{conds30}. By taking a smaller value of $\rho$ if necessary, we can guarantee that Condition \ref{conds30} is also satisfied. \vspace{3pt} {\bf Step 5:\,}({\it Extension to the rest of the moduli space $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$}) The rest of the proof is similar to Step 1. For each $\frak p \in {\rm Int}(\mathcal M^{\rm RGW}_{k+1}(L;\beta))$ we choose $\Xi_{\frak p}$, $E_{\frak p,v}$ and $\mathcal K(\frak p)$ as in the first step of the induction. We take a finite set $\frak P(k+1,\beta)$ such that \begin{equation}\label{form1019rev} \bigcup_{\frak p \in \frak P(k+1,\beta)} {\rm Int}\, \mathcal K(\frak p) = \mathcal M^{\rm RGW}_{k+1}(L;\beta) \setminus B_{\rho}(\partial \mathcal M^{\rm RGW}_{k+1}(L;\beta)) \end{equation} holds instead of \eqref{form1019}. Now we define: \begin{equation} \aligned \mathscr F_{k,\beta}(\frak u) &= \mathscr F'_{k,\beta}(\frak u) \cup \{[\frak u,\frak p,\phi_{\frak u,\frak p}] \mid \frak p \in \frak P(k+1,\beta),\,\, \frak u \in \mathcal K(\frak p)\},\\ \mathscr F^{\circ}_{k,\beta}(\frak u) &= \mathscr F^{\prime\circ}_{k,\beta}(\frak u) \cup \{[\frak u,\frak p,\phi_{\frak u,\frak p}]\mid \frak p \in \frak P(k+1,\beta),\,\, \frak u \in {\rm Int}\, \mathcal K(\frak p)\}. \endaligned \end{equation} They satisfy all the required conditions including Condition \ref{conds30}. We have thus completed the inductive step. \end{proof} This verifies Proposition \ref{lem685} and hence Theorem \ref{lema362rev}.
This completes the construction of a system of Kuranishi structures on $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ which are compatible at the boundary components and corners. For the proof of the main theorems of \cite{DF1}, we also need to construct a system of Kuranishi structures on the moduli space of strips which are compatible at the boundary components and corners. See \cite[Lemma 3.67]{DF1} for the precise statement of this compatibility at the boundary. The proof in the case of strips is similar to the case of disks and we omit it here. \begin{rem} The proof in this subsection is different from the approach in \cite[Section 8]{fooo:const2}, where the case of the stable map compactification is treated. In this subsection, we use target space parallel transportation. On the other hand, in \cite[Section 8]{fooo:const2} extra marked points are added to $\frak p \in \frak P(k+1,\beta)$ and are used to fix a diffeomorphism between open subsets of the source domains. Both methods work in both situations. \end{rem} \section{Compatibility of Kuranishi Structures with Forgetful Maps} \label{subsub:compforget} So far, we have constructed a system of Kuranishi structures on the moduli spaces $\{\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)\}_{\omega\cap \beta<E}$ which are compatible over the boundary components and corners. If $L_0$ and $L_1$ are monotone Lagrangians in $X\backslash \mathcal D$ with minimal Maslov number $2$, then we also need compatibility of the Kuranishi structures of $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)$ with the forgetful maps of boundary marked points to define Lagrangian Floer homology. (See \cite[Section 4]{DF1}.) This compatibility requirement is discussed in this subsection. As in previous sections, we focus mainly on the analogous results for the case of discs. The proof in the case of the moduli space of strips is similar.
If the reader is only interested in the case of Lagrangian Floer homology for Lagrangians with minimal Maslov number greater than two, then this section can be skipped. This turns out to be the case for Lagrangians coming from Yang-Mills gauge theory \cite{DF-peoc}. \begin{defn}\label{defn61144} Let $\frak u \in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$. Let $\check R$ be the very detailed tree describing the combinatorial type of $\frak u$. We fix a TSD $\Xi$ at $\frak u$. Let $\frak y = (\vec{\frak x},\vec{\sigma},(u'_{v}),(U'_{v}),(\rho_{e}),(\rho_{i}))$ be an inconsistent map with respect to $\Xi$. \begin{enumerate} \item Remove all the edges $e$ of $\check R$ with $\sigma_e = 0$, and let $\check R_0$ be one of the connected components of the resulting graph. The union of all the spaces $\Sigma_{\frak y,v}$, where $v$ belongs to $\check R_0$, is called {\it an irreducible component of $\frak y$}. If it does not cause any confusion, the union of all the interior vertices $v$ of $\check R$ which belong to $\check R_0$ is also called an irreducible component. \item An irreducible component of $\frak y$ is called a {\it trivial component} if the following hold: \begin{enumerate} \item All the vertices in this component have color ${\rm d}$. \item All the homology classes assigned to the vertices in this component are $0$. \end{enumerate} \item We say $\frak y$ {\it preserves triviality} if for any interior vertex $v$ in a trivial component, the map $u'_{v}$ is constant. \end{enumerate} \end{defn} \begin{lem}\label{lem115555} Given any element $\frak u\in\mathcal M_{k+1}^{\rm RGW}(L;\beta)$, the Kuranishi neighborhood of $\frak u$, constructed in Subsection \ref{subsub:existobst}, is contained in the set of inconsistent maps which preserve triviality. \end{lem} \begin{proof} Suppose $\Xi$ is a small enough TSD such that we can form the obstruction bundle $E_{\frak u,\mathscr F,\Xi}(\frak y)$ over inconsistent maps $\frak y$ with respect to $\Xi$.
We assume that $\frak y$ is chosen such that it represents an element of $\widehat {\mathcal U}(\frak u,\Xi)$. If $[\frak u,\frak p,\phi]$ is a quasi component of $\frak u$, then the image of $\phi$ is disjoint from the components of $\frak u$ with trivial homology classes. Consequently, the restriction of the obstruction bundle $E_{\frak u,\mathscr F,\Xi}(\frak y)$ to any trivial component of $\frak y$ is trivial. Therefore, the restriction of $u_{\frak y}$ to any such component has trivial homology class and satisfies the Cauchy-Riemann equation with a trivial obstruction bundle, and hence it is a constant map. \end{proof} Suppose $\frak y$ is an inconsistent map with respect to $\Xi$. Let $\Xi'$ be another TSD at the same point $\frak u$. If $\Xi'$ is small enough, then we obtain a corresponding inconsistent map $\frak y'$ with respect to $\Xi'$. (See the discussion preceding Lemma \ref{lem676}.) It is clear that $\frak y$ preserves triviality if and only if $\frak y'$ preserves triviality. \par We form the forgetful map: \begin{equation}\label{formdeffgggg} \frak{fgg} : \mathcal M^{\rm RGW}_{k+1}(L;\beta) \to \mathcal M^{\rm RGW}_{1}(L;\beta) \end{equation} by forgetting the boundary marked points other than the $0$-th one. (See \cite[Definition 3.72 and Lemma 3.74]{DF1} for the definition of the forgetful maps of one boundary marked point. The forgetful map in \eqref{formdeffgggg} can be defined in a similar way.) \begin{lem}\label{lem166666} Let $\frak u \in \mathcal M^{\rm RGW}_{k+1}(L;\beta)$ and $\frak u' = \frak{fgg}(\frak u) \in \mathcal M_{1}^{\rm RGW}(L;\beta)$.
For any TSD $\Xi'= (\vec w_{\frak u'},(\mathcal N_{\frak u',v,i}),(\phi_{\frak u',v}), (\varphi_{\frak u',v,e}),\delta')$ at $\frak u'$, there is a TSD $\Xi=(\vec w_{\frak u},(\mathcal N_{\frak u,v,i}),(\phi_{\frak u,v}), (\varphi_{\frak u,v,e}),\delta)$ at $\frak u$ such that any inconsistent map $\frak y$ with respect to $\Xi$ which preserves triviality induces an inconsistent map $\frak y'$ with respect to $\Xi'$. \end{lem} \begin{proof} Let $\check R$ (resp. $\check R'$) be the very detailed DD-ribbon tree describing the combinatorial type of $\frak u$ (resp. $\frak u'$). By construction we observe that the vertices of $\check R$ with color $\rm s$ or $\rm D$ are in one to one correspondence with the vertices of the same color in $\check R'$. Also the set of vertices of $\check R'$ with color $\rm d$ is a subset of the vertices of $\check R$ with color $\rm d$. The difference $C^{\rm int}_{0}(\check R)\setminus C^{\rm int}_{0}(\check R')$ consists of vertices $v$ such that the map $u_{v}$ is constant.\footnote{This is not a necessary and sufficient condition.} In particular, for any such vertex $v$, the component $\Sigma_{\frak u,v}$ together with the marked points and nodal points is already source stable. Therefore, we can require that the additional marked points $\vec w_{{\frak u}}$ of the TSD $\Xi$ do not belong to such irreducible components of $\Sigma_{\frak u}$. Thus we may find marked points $\vec w_{\frak u}$ on $\Sigma_{\frak u}$ which are identified with $\vec w_{{\frak u'}}$ using the map $\Sigma_{\frak u} \to \Sigma_{\frak u'}$ which collapses the components associated to $C^{\rm int}_{0}(\check R)\setminus C^{\rm int}_{0}(\check R')$. We define $\vec w_{\frak u}$ to be the set of the additional marked points of $\Xi$. We also assume that the set of transversals of $\Xi$ is identified with that of $\Xi'$ in an obvious way.
We also require that the trivializations of the universal families of irreducible components associated to $\Xi$ and $\Xi'$ are related to each other as follows. For any sphere component we assume that the associated trivializations coincide with each other. For $v \in C^{\rm int}_{0}({\check R}') \subset C^{\rm int}_{0}(\check R)$ with level $0$, let $\mathcal M^{{\rm source}}_{\frak u,v}$, $\mathcal M^{{\rm source}}_{\frak u',v}$ be the corresponding moduli spaces of marked disks and $\mathcal{C}^{{\rm source}}_{\frak u,v} \to \mathcal M^{{\rm source}}_{\frak u,v}$, $\mathcal{C}^{{\rm source}}_{\frak u',v} \to \mathcal M^{{\rm source}}_{\frak u',v}$ be the universal families. We take a trivialization of $\mathcal{C}^{{\rm source}}_{\frak u,v}$ over a sufficiently small neighborhood $ \mathcal V_{\frak u,v}^{{\rm source}}$ of $\Sigma_{\frak u,v}$ so that the following diagram commutes: \begin{equation}\label{diadia} \begin{CD} \mathcal V_{\frak u,v}^{{\rm source}} \times \Sigma_v @>{\phi_{\frak u,v}}>> \mathcal{C}^{{\rm source}}_{\frak u,v} \\ @VV{}V @VV{}V \\ \mathcal V_{\frak u',v}^{{\rm source}} \times \Sigma_v @>{\phi_{{\frak u'},v}}>> \mathcal{C}^{{\rm source}}_{\frak u',v} \end{CD} \end{equation} where the vertical arrows are the obvious forgetful maps. For the trivializations of the universal families of disk components corresponding to $v \in C^{\rm int}_{0}(\check R) \setminus C^{\rm int}_{0}(\check R')$, which are part of $\Xi$, we make an arbitrary choice. For $v\in C^{\rm int}_{0}({\check R}')\subset C^{\rm int}_{0}(\check R)$ and an edge $e$ incident to $v$, we pick $\varphi_{\frak u,v,e}$ to be the analytic family induced by $\varphi_{\frak u',v,e}$. In the case that $v\in C^{\rm int}_{0}({\check R})\setminus C^{\rm int}_{0}(\check R')$, the corresponding component $\Sigma_{\frak u,v}$ has, in addition to boundary marked points, at most two boundary nodes.
If there are two boundary nodes inducing edges $e_+$ and $e_-$ incident to $v$, then we can identify $\Sigma_{\frak u,v}$ with the strip $[0,1]\times \R$ where the boundary node associated to $e_{\pm}$ is in correspondence with the point at $\pm \infty$ on the boundary of $[0,1]\times \R$. We fix one such identification and let $[0,1]\times [T,\infty)$ and $[0,1]\times (-\infty,-T]$, for a large value of $T$, induce the analytic families of coordinates $\varphi_{\frak u,v,e_{\pm}}$. In the case that there is only one such edge incident to $v$, we follow a similar strategy with the difference that we only need to use the half strip $[0,1]\times [T,\infty)$ to define the corresponding analytic family of coordinates. We also let $\delta=\delta'$. Now, let $$ \frak y = (\vec{\frak x}_{\frak y},\vec{\sigma}_{\frak y},(u'_{{\frak y},v}),(U'_{{\frak y},v}), (\rho_{{\frak y},e}),(\rho_{{\frak y},i})) $$ be an inconsistent map with respect to $\Xi$ which preserves triviality. We wish to define an inconsistent map $$ {\frak y'} = (\vec{\frak x}_{\frak y'},\vec{\sigma}_{{\frak y'}}, (u'_{{\frak y'},v}), (U'_{{\frak y'},v}),(\rho_{{\frak y'},e}),(\rho_{{\frak y'},i})) $$ with respect to $\Xi'$. It is clear from the definition of $\Xi$ that there are $\vec{\frak x}_{{\frak y'}},\vec{\sigma}_{{\frak y'}}$ such that: \begin{equation}\label{form6200} \Sigma_{\frak u}(\vec{\frak x}_{\frak y},\vec{\sigma}_{\frak y}) \cong \Sigma_{{\frak u'}}(\vec{\frak x}_{{\frak y'}},\vec{\sigma}_{{\frak y'}}). \end{equation} We take $\rho_{{\frak y'},e} = \rho_{{\frak y},e}$, $\rho_{{\frak y'},i} = \rho_{{\frak y},i}.$ Moreover, $U'_{{\frak y'},v} = U'_{{\frak y},v}$ and $u'_{{\frak y'},v} = u'_{{\frak y},v}$ if the color of $v$ is ${\rm s}$. We consider a disk component $\Sigma_{{\frak y'},v}$. There exists a unique irreducible component (in the sense of Definition \ref{defn61144}, where we use $\vec{\sigma}_{{\frak y'}}$) which contains this component.
We denote by $\Sigma^+_{\frak u',v}(\vec{\frak x}_{{\frak y'}},\vec{\sigma}_{{\frak y'}})$ the union of the disk components contained in this irreducible component.\footnote{See (\ref{form610}) for the meaning of the symbol $+$.} We take the irreducible components of $\frak y$ which correspond to it and define $\Sigma^+_{{\frak u},v}(\vec{\frak x}_{{\frak y}},\vec{\sigma}_{{\frak y}})$ in the same way. By (\ref{form6200}) we have an isomorphism \begin{equation}\label{form6201} \Sigma^+_{\frak u',v}(\vec{\frak x}_{{\frak y'}},\vec{\sigma}_{{\frak y'}}) \cong \Sigma^+_{{\frak u},v}(\vec{\frak x}_{{\frak y}},\vec{\sigma}_{{\frak y}}). \end{equation} The maps $u'_{\frak y,v}$ for various $v$ in this irreducible component induce a map \begin{equation}\label{form6202} (\Sigma^+_{{\frak u},v}(\vec{\frak x}_{{\frak y}},\vec{\sigma}_{{\frak y}}), \partial \Sigma^+_{{\frak u},v}(\vec{\frak x}_{{\frak y}},\vec{\sigma}_{{\frak y}})) \to (X,L). \end{equation} This map is smooth. (Since $\Sigma^+_{{\frak u},v}(\vec{\frak x}_{{\frak y}},\vec{\sigma}_{{\frak y}})$ is obtained by gluing along the components associated to the level $0$ edges, the maps $u'_{\frak y,v}$ are consistent on overlaps.) We use \eqref{form6201} and \eqref{form6202} to define $u'_{{\frak y'},v}$. Using the fact that $\frak y$ is an inconsistent map preserving triviality, it is easy to see that the maps $u'_{{\frak y'},v}$ for various $v$ are consistent at the nodal points corresponding to the level $0$ edges $e$ with $\sigma_{{\frak y'},e} = 0$, and ${\frak y'} = (\vec{\frak x}_{\frak y'},\vec{\sigma}_{{\frak y'}},(u'_{{\frak y'},v}),(U'_{{\frak y'},v}), (\rho_{{\frak y'},e}),(\rho_{{\frak y'},i}))$ is an inconsistent map with respect to $\Xi'$. \end{proof} \begin{rem} The notion of preserving triviality plays an important role in the proof. The other important point is that we do not put any obstruction bundle on the components where the maps are constant.
\end{rem} Let $\frak u$, $\frak y$, $\frak u'$ and $\frak y'$ be as in Lemma \ref{lem115555} and $\check R$, $\check R'$ be the very detailed DD-ribbon trees describing the combinatorial types of $\frak u$, $\frak u'$, respectively. We define: \[ \aligned L^2_{m,\delta,{\rm nontri}}(\frak y,{\frak u})= &\bigoplus_{v \in C^{\rm int}_{0}(\check R) ; c(v) = {\rm s}} L^2_{m,\delta}(\Sigma^+_{\frak y,v};(u'_{\frak y,v})^*TX \otimes \Lambda^{0,1}) \\ &\oplus \bigoplus_{v \in C^{\rm int}_{0}(\check R) ; c(v) = {\rm D}} L^2_{m,\delta}(\Sigma^+_{\frak y,v};(\pi\circ U'_{\frak y,v})^*T\mathcal D \otimes \Lambda^{0,1}) \\ &\oplus \bigoplus_{v \in C^{\rm int}_{0}(\check R) ; c(v) = {\rm d}, \atop\text{$u'_{\frak y,v}$ is not constant}} L^2_{m,\delta}(\Sigma^+_{\frak y,v};(u'_{\frak y,v})^*TX \otimes \Lambda^{0,1}) \endaligned \] $$ \aligned L^2_{m,\delta,{\rm nontri}} ({\frak y'},{\frak u'}) = &\bigoplus_{v \in C^{\rm int}_{0}(\check R') ; c(v) = {\rm s}} L^2_{m,\delta}(\Sigma^-_{{\frak y'},v};(u'_{{\frak y'},v})^*TX \otimes \Lambda^{0,1}) \\ &\oplus \bigoplus_{v \in C^{\rm int}_{0}(\check R') ; c(v) = {\rm D}} L^2_{m,\delta}(\Sigma^-_{{\frak y'},v};(\pi\circ U'_{{\frak y'},v})^*T\mathcal D \otimes \Lambda^{0,1}) \\ &\oplus \bigoplus_{v \in C^{\rm int}_{0}(\check R') ; c(v) = {\rm d}, \atop\text{$u'_{{\frak y'},v}$ is not constant}} L^2_{m,\delta}(\Sigma^-_{{\frak y'},v};(u'_{{\frak y'},v})^*TX \otimes \Lambda^{0,1}). \endaligned $$ There are canonical identifications between the components appearing in the above two formulas. Therefore, there exists a canonical map: \begin{equation} I_{{\frak y}\frak y'} : L^2_{m,\delta,{\rm nontri}}(\frak y',\frak u') \to L^2_{m,\delta,{\rm nontri}}(\frak y,{\frak u}). \end{equation} \begin{defn}\label{defn118} Let $\{E_{\frak u,\Xi}(\frak y)\}$ and $\{E_{\frak u',\Xi'}(\frak y')\}$ be obstruction bundle data for $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ and $\mathcal M_{1}^{\rm RGW}(L;\beta)$, respectively.
We say that they are {\it compatible with the forgetful map} if \begin{equation}\label{form19199} I_{{\frak y}\frak y'} (E_{\frak u',\Xi'}(\frak y'))=E_{\frak u,\Xi}(\frak y) \end{equation} when $\frak u'$, $\Xi'$, $\frak y'$ are related to $\frak u$, $\Xi$, $\frak y$ as in Lemma \ref{lem115555}. \end{defn} \begin{defn}\label{defn119} A system of obstruction bundle data for the moduli spaces $\{\mathcal M_{k+1}^{\rm RGW}(L;\beta)\}_{\omega\cap \beta\leq E}$ is said to be {\it compatible with the forgetful map} if Definition \ref{defn118} holds for each of the spaces $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ and $\mathcal M_{1}^{\rm RGW}(L;\beta)$ with $\omega\cap \beta\leq E$. \end{defn} Suppose $\{E_{\frak u,\Xi}(\frak y)\}$ is a system of obstruction bundle data for the moduli spaces $\{\mathcal M_{k+1}^{\rm RGW}(L;\beta)\}_{\omega\cap \beta\leq E}$ which is disk-component-wise and is compatible with the forgetful map. Let $\frak u$ be an element of $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ and $\frak u'=\frak {fgg}(\frak u)$. Suppose $\Xi$, $\Xi'$ are TSDs at $\frak u$, $\frak u'$ which are related to each other as in Lemma \ref{lem166666}. Using Lemmas \ref{lem115555} and \ref{lem166666} and the compatibility of the obstruction bundle data with the forgetful map, we can define a map: \[ F_{\frak u} : \widehat {\mathcal U}(\frak u;\Xi) \to \widehat {\mathcal U}(\frak u';\Xi'). \] In the process of forgetting boundary marked points and passing from $\frak u$ to $\frak u'$, we might only collapse disc components. Since the elements of $\Gamma_{\frak u}$ and $\Gamma_{\frak u'}$ act as the identity on disc components, the isotropy groups $\Gamma_{\frak u}$ and $\Gamma_{\frak u'}$ are isomorphic. The map $F_{\frak u}$ is also $\Gamma_{\frak u}$-equivariant.
It is straightforward to lift the map $F_{\frak u}$ to a $\Gamma_{\frak u}$-equivariant map: \[ \tilde F_{\frak u}:\mathcal E_{\frak u} \to \mathcal E_{\frak u'} \] such that: \[ \tilde F_{\frak u}\circ \frak s_{\frak u}=\frak s_{\frak u'} \circ F_{\frak u} \] and for any $\frak y\in \frak s_{\frak u}^{-1}(0)/\Gamma_{\frak u}$ we have: \[ \psi_{\frak u'}\circ F_{\frak u}(\frak y)=\frak {fgg} \circ \psi_{\frak u}(\frak y). \] The maps $F_{\frak u}$ and $\tilde F_{\frak u}$ are also compatible with coordinate changes. We can summarize this discussion as follows: \begin{thm}\label{comp-forg} Suppose a system of obstruction bundle data $\{E_{\frak u,\Xi}(\frak y)\}$ for the moduli spaces $\{\mathcal M_{k+1}^{\rm RGW}(L;\beta)\}_{\omega\cap \beta\leq E}$ is disk-component-wise and is compatible with the forgetful map. Then the resulting system of Kuranishi structures is compatible at the boundary components and corners (in the sense of Theorem \ref{lema362rev}) and compatible with the forgetful map (in the sense of \cite[Lemma 3.75]{DF1}). \end{thm} \begin{proof} Compatibility of the forgetful map with the Kuranishi structures of the moduli spaces $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ is equivalent to the existence of the maps $F_{\frak u}$ and $\tilde F_{\frak u}$ with the above properties. (See \cite[Lemma 3.75]{DF1} for more details.) We just need to point out that in \cite[Lemma 3.75]{DF1} we consider the map \[ \frak{fg}_j^\partial : \mathcal M_{k+1}^{\rm RGW}(L;\beta) \to \mathcal M_{k}^{\rm RGW}(L;\beta) \] given by forgetting the $j$-th marked point. The proof of a similar result for the map $\frak{fg}_j^\partial$ is essentially the same. Let $\frak u_1\in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$ and $\frak u_2=\frak{fg}_j^\partial(\frak u_1)$. Starting with a TSD $\Xi_2$ at $\frak u_2$, we can follow the proof of Lemma \ref{lem166666} to define a TSD $\Xi_1$ at $\frak u_1$, and form a map from $\widehat {\mathcal U}(\frak u_1,\Xi_1)$ to $\widehat {\mathcal U}(\frak u_2,\Xi_2)$.
The remaining properties can be verified in a similar way. \end{proof} \begin{rem} In general, one needs to be careful about the differentiability of $F_{\frak u}$. The strata-wise smoothness is easy to show by elliptic regularity. The issue of differentiability where the strata change is discussed in \cite[page 778]{fooobook2}. This issue is relevant to the application of \cite[Lemma 3.75]{DF1}, when we want to pull back a multi-valued perturbation by the forgetful map. There are two ways to resolve this issue. First, we can consider multi-sections which have exponential decay in the gluing parameter $T$. (We use $T,\theta$ where $\sigma = \exp(-(T+\theta\sqrt{-1}))$.) Even though the forgetful map $F_{\frak u}$ may not be smooth, the pull-back of a multi-section with exponential decay is a multi-section which is not only smooth but also has exponential decay. (See also \cite[page 778]{fooobook2}.) In our situation, discussed in the next subsection, we can use a simpler method to resolve this issue. For the purpose of this paper, we need to pull back a nowhere vanishing multi-section. Thus pulling back the multi-section in the $C^0$ sense is enough. (See the proof of Proposition \ref{prop12866}.) This is because we need differentiability of the multi-section only in a neighborhood of its zero set. \end{rem} To complete our construction of a system of Kuranishi structures which is compatible with the forgetful map, it remains to prove the following result: \begin{lem} There exists a system of obstruction bundle data which is disk-component-wise and is compatible with the forgetful map. \end{lem} \begin{proof} The proof is essentially the same as the proof of Proposition \ref{lem685}. As before, we construct the system of obstruction bundle data by induction on $\beta \cap [\omega]$. In each step of the induction, we first construct an obstruction bundle data on $\mathcal M^{\rm RGW}_{1}(L;\beta)$.
This system automatically induces an obstruction bundle data on $\mathcal M^{\rm RGW}_{k+1}(L;\beta)$ by requiring Condition \eqref{form19199}. To be more detailed, we fix a finite subset $\frak P(\beta)$ of $\mathcal M_{1}^{\rm RGW}(L;\beta)$ as in (OBI) and a vector space $E_{\frak p}$ for $\frak p \in \frak P(\beta)$ as in (OBII). We also fix spaces $\mathscr F_{\beta}$, $\mathscr F^{\circ}_{\beta}$ which fix a set of quasi-components for each $\frak u \in \mathcal M_{1}^{\rm RGW}(L;\beta)$. We require that these objects satisfy Conditions \ref{conds1023}, \ref{conds1025}, \ref{conds1027}, \ref{conds30}, \ref{conds31}. If $\frak u \in \mathcal M_{k+1}^{\rm RGW}(L;\beta)$, then we define $\mathscr F_{\beta}(\frak u)$, $\mathscr F^{\circ}_{\beta}(\frak u)$ to be $\mathscr F_{\beta}(\frak u')$, $\mathscr F^{\circ}_{\beta}(\frak u')$, where $\frak u'=\frak{fgg}(\frak u)$. Since the obstruction bundle data for $\mathcal M_{1}^{\rm RGW}(L;\beta)$ satisfies Conditions \ref{conds1023}, \ref{conds1025}, \ref{conds1027}, \ref{conds30}, \ref{conds31}, the induced obstruction bundle data for $\mathcal M_{k+1}^{\rm RGW}(L;\beta)$ satisfies the corresponding conditions. \end{proof} \section{Construction of a System of Multi-sections} \label{sub:multiconst} The purpose of this section is to construct the system of multi-valued perturbations used in \cite[Section 4]{DF1} to prove the main theorems there. As in \cite[Section 4]{DF1}, the proof in the case that the minimal Maslov numbers are greater than $2$ is simpler than the case that the minimal Maslov numbers could be $2$. In fact, only in the case of minimal Maslov number $2$ do we need to use the results of the last section and perturb the moduli spaces in a way that is compatible with the forgetful map. In this section, we focus on the case of strips rather than discs because the description of the boundary in the case of discs is different and the moduli spaces of holomorphic strips are used to prove the main results of \cite{DF1}.
Here we do {\it not} prove the existence of a system of transversal multi-sections in complete generality and restrict ourselves to the cases which suffice for the proof of our main theorems. In fact, we perturb $\mathcal M_{1}^{\rm RGW}(L_i;\beta)$ only in the case that the Maslov index of $\beta$ is $2$. In other words, we do not prove \cite[Proposition 4.7]{DF1} in the generality in which it is stated.\footnote{The construction of such a system of multi-sections in the general case can be carried out in the same way as in \cite[Section 20]{fooo:tech2-2}. In this paper we only perturb moduli spaces of virtual dimension $1$ or less. Because of the monotonicity assumption in \cite{DF1}, it suffices to study only such moduli spaces to prove the main results of \cite{DF1}.} Let $L_0,L_1 \subset X \setminus \mathcal D$ be a pair of compact Lagrangian submanifolds. We assume that they are monotone and that their intersection is transversal. For $p,q \in L_0 \cap L_1$, we defined $\Pi_2(X;L_1,L_0;p,q)$, the set of homology classes of strips asymptotic to $p$ and $q$, in \cite[Definition 2.1]{DF1}. Let $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)$ be the compactified moduli space of pseudo-holomorphic strips of the homology class $\beta \in \Pi_2(X;L_1,L_0;p,q)$ with $k_1$ and $k_0$ boundary marked points. See \cite[Sections 3 and 5]{DF1} for the definition of this compactification. \subsection{Lagrangians with Minimal Maslov Number greater than $2$} \label{subsub:constmulti} In this subsection, we prove the existence of a system of multi-valued perturbations that is used in \cite[Section 4]{DF1} in the case that the minimal Maslov numbers are greater than $2$. More precisely, we prove the following proposition: \begin{prop}\label{prop61111} Let $L_0,L_1 \subset X \setminus \mathcal D$ be a pair of compact Lagrangian submanifolds. We assume that they are monotone and that their minimal Maslov numbers are strictly greater than $2$. We assume that $L_0$ is transversal to $L_1$.
Let $E$ be a positive number. Then there exists a system of multi-valued perturbations $\{\frak s_{n}\}$ on the moduli spaces $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)$ of virtual dimension at most $1$ and $\omega(\beta) \le E$ such that: \begin{enumerate} \item The multi-section $\frak s_{n}$ is $C^0$ and is $C^1$ in a neighborhood of $\frak s_{n}^{-1}(0)$. The multi-sections $\{\frak s_{n}\}$ are transversal to $0$. The sequence of multi-sections $\{\frak s_n\}$ converges to the Kuranishi map in $C^0$. Moreover, this convergence is in $C^1$ in a neighborhood of the zero locus of the Kuranishi map. \item The multi-valued perturbations $\{\frak s_{n}\}$ are compatible with the description of the boundary given by \cite[Lemmata 3.67, 3.70]{DF1}. \item Suppose that the (virtual) dimension of $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)$ is not greater than $1$. Then the multi-section $\frak s_n$ does not vanish on the codimension $2$ stratum $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)^{(1)}$ described by \cite[Proposition 3.63]{DF1}. \end{enumerate} \end{prop} This proposition is a weaker version of \cite[Proposition 4.7]{DF1}. Note that we do not claim compatibility with the forgetful map (\cite[Proposition 4.7 (3)]{DF1}) and multi-valued perturbations are given only on the moduli spaces of virtual dimension at most $1$. We do {\it not} need to perturb the moduli spaces of pseudo-holomorphic disks $\mathcal M_{k+1}^{\rm RGW}(L_j;\beta)$ in Proposition \ref{prop61111}. \begin{proof} The proof is by induction on $\omega \cap \beta$ and $(k_0,k_1)$. In this inductive process we construct multi-valued perturbations for all moduli spaces with $\omega\cap \beta\leq E$ and $k_0+k_1\leq N$, for some constants $E$ and $N$. In particular, we may construct perturbations for moduli spaces with dimension greater than $1$. But conditions (1) and (3) hold only for moduli spaces with dimension at most $1$.
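Schematically, the induction is performed in the following order, where the symbol $\prec$ is ad hoc notation introduced here only for illustration: we order the moduli spaces first by energy and then by the total number of boundary marked points,
\[
(\beta';k_0',k_1') \prec (\beta;k_0,k_1)
\quad\Longleftrightarrow\quad
\omega\cap\beta' < \omega\cap\beta,
\ \text{ or }\
\bigl(\omega\cap\beta' = \omega\cap\beta
\ \text{and}\ k_0'+k_1' < k_0+k_1\bigr),
\]
and construct the perturbation for a given moduli space only after all moduli spaces strictly preceding it in this order have been treated.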
We assume that we have already constructed multi-valued perturbations for the moduli spaces of type $(\beta;k_0,k_1;p,q)$ such that $\omega \cap \beta < \omega \cap \alpha$, or $\omega \cap \beta=\omega \cap \alpha$ and $k_0+k_1<j_0+j_1$. We can use the induction hypothesis to define a continuous multi-section on $\partial\mathcal M^{\rm RGW}_{j_1,j_0}(L_1,L_0;p,q;\alpha)$. For example, part of the boundary of the moduli space $\mathcal M^{\rm RGW}_{j_1,j_0}(L_1,L_0;p,q;\alpha)$ is described by moduli spaces of the following form: \begin{equation}\label{bdry-type-2} \mathcal M^{\rm RGW}_{j_1',j_0}(L_1,L_0;p,q;\beta_1) \hat \times_{L_1} \mathcal M^{\rm RGW}_{j_1''+1}(L_1;\beta_2) \end{equation} where $j_1'+j_1''=j_1+1$, $\beta_i\cap \mathcal D=0$ and $\beta_1\#\beta_2=\alpha$. Assuming $\mathcal M^{\rm RGW}_{j_1''+1}(L_1;\beta_2)$ is non-empty, we have either $\omega\cap \beta_1< \omega\cap \alpha$, or $j_1'<j_1$ and $\omega\cap \beta_1\leq \omega\cap \alpha$. Therefore, the induction hypothesis implies that we have already fixed a multi-valued perturbation for $\mathcal M^{\rm RGW}_{j_1',j_0}(L_1,L_0;p,q;\beta_1)$. We use the fiber product\footnote{See \cite[Lemma-Definition 20.16]{fooo:tech2-2} for the definition of the fiber product of multi-valued perturbations.} of this perturbation and the trivial perturbation for $\mathcal M^{\rm RGW}_{j_1''+1}(L_1;\beta_2)$ to define a multi-valued perturbation for: \[ \mathcal M^{\rm RGW}_{j_1',j_0}(L_1,L_0;p,q;\beta_1) \times_{L_1} \mathcal M^{\rm RGW}_{j_1''+1}(L_1;\beta_2). \] Now we use the analogue of the map $\Pi$ in Theorem \ref{lema362rev} to pull back this perturbation to the space in \eqref{bdry-type-2}. More generally, we can use the already constructed perturbations for the moduli spaces of strips and trivial perturbations for the moduli spaces of discs to define a multi-valued perturbation on any boundary component and corner of $\mathcal M^{\rm RGW}_{j_1,j_0}(L_1,L_0;p,q;\alpha)$.
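Suppressing the boundary marked points, the boundary strata treated in this way are of two schematic shapes: disk bubbling off one of the Lagrangians, as in \eqref{bdry-type-2}, and strip breaking at an intersection point $r \in L_0 \cap L_1$,
\[
\mathcal M^{\rm RGW}(L_1,L_0;p,q;\beta_1)
 \,\hat\times_{L_j}\, \mathcal M^{\rm RGW}(L_j;\beta_2)
\qquad\text{and}\qquad
\mathcal M^{\rm RGW}(L_1,L_0;p,r;\beta_1)
 \,\hat\times\, \mathcal M^{\rm RGW}(L_1,L_0;r,q;\beta_2),
\]
in both cases with $\beta_1\#\beta_2=\alpha$. The first shape is handled by the fiber product construction just described, while in the second shape every strip factor is covered by the induction hypothesis.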
The induction hypothesis implies that these perturbations are compatible. In the case that the virtual dimension of $\mathcal M^{\rm RGW}_{j_1,j_0}(L_1,L_0;p,q;\alpha)$ is greater than $1$, we extend the chosen multi-valued perturbation on the boundary to a $C^0$ perturbation defined over the whole moduli space. In the case that the virtual dimension is at most $1$, we need to choose this extension such that conditions (1) and (3) of Proposition \ref{prop61111} are satisfied. To achieve this goal, we analyze the vanishing locus of the multi-valued perturbation over the boundary of $\mathcal M^{\rm RGW}_{j_1,j_0}(L_1,L_0;p,q;\alpha)$. Suppose that $\mathcal M^{\rm RGW}_{j_1,j_0}(L_1,L_0;p,q;\alpha)$ has virtual dimension not greater than $1$. On the stratum of the boundary where there is at least one disk bubble on which the map is non-constant, the assumption implies that the Maslov number of the disk bubble is at least $4$. This implies that there is at least one irreducible component which is a strip with homology class $\beta_1$ and is contained in a moduli space with negative virtual dimension. (See also the proof of \cite[Lemma 4.12]{DF1}.) Therefore, our multi-valued perturbation does not vanish on this boundary component. The rest of the proof is divided into two parts. We first consider the case where $\dim(\mathcal M^{\rm RGW}_{j_1,j_0}(L_1,L_0;p,q;\alpha))$ is non-positive. The part of the boundary corresponding to splitting into two or more strips has a strip component of negative virtual dimension. Therefore, our multi-valued perturbation does not vanish on this part of the boundary either. As a consequence, we can extend the perturbation in the $C^0$ sense to a neighborhood of the boundary such that it is still non-vanishing in this neighborhood. We approximate this perturbation by one which is $C^1$ outside a smaller neighborhood of the boundary.
Now we can extend this multi-section in a way which is transversal to $0$ on $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)$ and $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)^{(1)}$ using the existence theorem for multi-valued perturbations. (See, for example, \cite[Proposition 13.29]{fooo:tech2-1}.) If the virtual dimension is $0$, there exist finitely many zeros, none of which belongs to $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)^{(1)}$. If the virtual dimension is negative, the multi-valued perturbation does not vanish. This completes the proof in the case that $\dim(\mathcal M^{\rm RGW}_{j_1,j_0}(L_1,L_0;p,q;\alpha))\le 0$. Next, we consider the case that $\mathcal M^{\rm RGW}_{j_1,j_0}(L_1,L_0;p,q;\alpha)$ is $1$-dimensional. The constructed multi-valued perturbation on the boundary does not vanish except on the boundary components of the form \begin{equation}\label{form6197} \mathcal M^{\rm RGW}_{j'_1,j'_0}(L_1,L_0;p,r;\beta_1) \hat \times \mathcal M^{\rm RGW}_{j''_1,j''_0}(L_1,L_0;r,q;\beta_2) \end{equation} or \begin{equation}\label{form6198} \aligned &\mathcal M^{\rm RGW}_{j_1-1,j_0}(L_1,L_0;p,q;\alpha) \hat \times_{L_1} \mathcal M^{\rm RGW}_{3}(L_1;0), \\ &\mathcal M^{\rm RGW}_{j_1,j_0-1}(L_1,L_0;p,q;\alpha) \hat \times_{L_0} \mathcal M^{\rm RGW}_{3}(L_0;0). \endaligned \end{equation} In \eqref{form6197}, both factors have virtual dimension $0$. Therefore, we may assume that the zero sets of the multi-valued perturbations we have produced on those factors do not lie in the strata of codimension at least $2$. Since the multi-sections on the two factors have finitely many zeros and the map $\Pi$ in \cite[Lemma 3.67]{DF1}\footnote{See also Theorem \ref{lema362rev}.} is an isomorphism, this fiber product gives rise to finitely many points in the boundary. In the case of (\ref{form6198}), the first factor has virtual dimension $0$ and the multi-section there vanishes only at finitely many points.
The second factor is identified with $L_1$ or $L_0$. Therefore, the fiber product is identified with the first factor. In summary, the multi-valued perturbation has finitely many zeros on the boundary and item (3) holds. We fix Kuranishi charts at the finitely many zeros on the boundary. Since these are boundary points, we can easily extend our multi-valued perturbation to the interior of the chosen Kuranishi charts so that it is transversal to $0$. Now we extend the multi-valued perturbation further to a neighborhood of the boundary of $\mathcal M^{\rm RGW}_{j_1,j_0}(L_1,L_0;p,q;\alpha)$ in the $C^0$ sense such that the multi-valued perturbation does not vanish except on those finitely many charts. We may assume that the multi-valued perturbation is $C^1$ outside a smaller neighborhood of the boundary. We again use the existence theorem for multi-sections that are transversal to zero everywhere to complete the construction of the multi-valued perturbation on $\mathcal M^{\rm RGW}_{j_1,j_0}(L_1,L_0;p,q;\alpha)$. \end{proof} \begin{rem} This proof never uses the smoothness of the coordinate changes with respect to the gluing parameters $\sigma_{e}$ at $\sigma_{e}=0$. In most parts of the proof, we extend the multi-section at the boundary to the interior only in the $C^0$ sense. When we extend the multi-section near a point of the boundary where it vanishes, we fix a chart there and extend it on that chart. We use other charts to extend the multi-section in the $C^0$ sense to a neighborhood of the boundary. (Recall that we only need the differentiability of the multi-valued perturbation in a neighborhood of its vanishing set to define the virtual fundamental chain.) The key point here is that the multi-valued perturbation on the boundary has only isolated zeros. \end{rem} \begin{rem} Even though we do not perturb the moduli spaces of disks in the proof of Proposition \ref{prop61111}, we used the fact that these moduli spaces admit Kuranishi structures.
\end{rem} \begin{rem} Proposition \ref{prop61111} is one of the main inputs in the definition of Lagrangian Floer homology\footnote{We need (relative) spin structures to define orientations.} for the pair $L_0$, $L_1$. Similar results also play a key role in showing that Floer homology is invariant under Hamiltonian perturbations compactly supported on $X\setminus \mathcal D$. See \cite[Section 4]{DF1} for more details. \end{rem} \subsection{Lagrangians with Minimal Maslov Number 2} \label{subsub:constmulti2} Now we turn our attention to the case that the Maslov numbers of our Lagrangians in $X \setminus \mathcal D$ could be $2$. Let $L$ be a compact and monotone Lagrangian submanifold in $X \setminus \mathcal D$ such that its minimal Maslov number is $2$. Let the Maslov index of $\alpha \in H_2(X;L)$ be $2$ with $\alpha \cap \mathcal D = 0$, and form the moduli space $\mathcal M_1^{\rm RGW}(L;\alpha)$. This moduli space has a Kuranishi structure without boundary, of virtual dimension $n = \dim L$. For $p \in L$, we form the fiber product \[ \mathcal M_1^{\rm RGW}(L;\alpha;p) = \mathcal M_1^{\rm RGW}(L;\alpha) \times_L \{p\} \] of virtual dimension $0$. We fix a multi-valued perturbation on $\mathcal M_1^{\rm RGW}(L;\alpha)$ such that the induced multi-valued perturbation on $\mathcal M_1^{\rm RGW}(L;\alpha;p)$ is transversal to $0$. More generally, we can assume that this transversality holds when $p$ is an element of a given finite set. Thus the multi-section on $\mathcal M_1^{\rm RGW}(L;\alpha;p)$ has only finitely many zeros. The size of this zero set, counted with multiplicity and weight, is defined to be $\frak{PO}_L$. \begin{prop}\label{prop12866} Let $L_0$, $L_1$ be a pair of transversal compact monotone Lagrangians in $X \setminus \mathcal D$ such that their minimal Maslov numbers are $2$.
For a positive number $E$, there exists a system of multi-valued perturbations $\{\frak s_{n}\}$ on the moduli spaces $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)$ of virtual dimension $\le 1$ and $\omega \cap \beta \le E$ such that: \begin{enumerate} \item The multi-section $\frak s_{n}$ is $C^0$ and is $C^1$ in a neighborhood of $\frak s_{n}^{-1}(0)$. The multi-sections $\{\frak s_{n}\}$ are transversal to $0$. The sequence of multi-sections $\{\frak s_n\}$ converges to the Kuranishi map in $C^0$. Moreover, this convergence is in $C^1$ in a neighborhood of the zero locus of the Kuranishi map. \item The multi-valued perturbations $\{\frak s_{n}\}$ are compatible with the description of the boundary given by \cite[Lemmata 3.67, 3.70]{DF1}. \item The multi-valued perturbations $\{\frak s_{n}\}$ are compatible with the forgetful map of the marked points given by \cite[Lemma 3.75]{DF1}. \item Suppose that the (virtual) dimension of $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)$ is not greater than $1$. Then the multi-section $\frak s_n$ does not vanish on the codimension $2$ stratum $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)^{(1)}$ described by \cite[Proposition 3.63]{DF1}. \end{enumerate} \end{prop} Proposition \ref{prop12866} is a slightly simpler version of \cite[Proposition 4.7]{DF1}, where we claimed similar results for the more general case of moduli spaces $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)$ with dimension possibly greater than $1$. As is pointed out there, Proposition \ref{prop12866} suffices for our purposes in \cite[Section 4]{DF1} (including the proof of \cite[Lemma 4.17]{DF1}), and we content ourselves with the proof of this simpler result. \begin{proof} For $j=0,\,1$ and $\alpha\in \Pi_2(X;L_j)$ with Maslov index $2$, we fix a multi-valued perturbation on $\mathcal M_1^{\rm RGW}(L_j;\alpha)$ such that it induces a transversal multi-valued perturbation on $\mathcal M_1^{\rm RGW}(L_j;\alpha;p)$ for any $p \in L_0 \cap L_1$.
We extend these multi-valued perturbations to all the moduli spaces $\mathcal M_{k+1}(L_j;\alpha)$ in the $C^0$ sense such that they are compatible over the boundary, in a sense similar to Proposition \ref{prop61111}. We use these multi-valued perturbations and induction to define the required multi-valued perturbations on the moduli spaces $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)$. To be more detailed, we construct multi-valued perturbations on $\mathcal M^{\rm RGW}_{0,0}(L_1,L_0;p,q;\beta)$ by induction on $\omega\cap \beta$. The multi-valued perturbation on the general moduli space $\mathcal M^{\rm RGW}_{k_1,k_0}(L_1,L_0;p,q;\beta)$ is given by pulling back the one on $\mathcal M^{\rm RGW}_{0,0}(L_1,L_0;p,q;\beta)$. Here we use the consistency of the Kuranishi structures with the forgetful map. Suppose we have constructed the required multi-valued perturbations for $\beta$ with $\beta \cap \omega < \alpha \cap \omega$. We use the induction hypothesis and the constructed multi-valued perturbations for the moduli spaces of discs to define a perturbation on $\partial \mathcal M^{\rm RGW}_{0,0}(L_1,L_0;p,q;\alpha)$ in the same way as in Proposition \ref{prop61111}. We wish to analyze the zeros of the induced multi-section on the boundary of $\mathcal M^{\rm RGW}_{0,0}(L_1,L_0;p,q;\alpha)$. Compared to Proposition \ref{prop61111}, the new types of zeros are given by disc bubbles with Maslov index $2$. Such boundary components have the form \begin{equation}\label{case-I} \mathcal M^{\rm RGW}_{1,0}(L_1,L_0;p,q;\beta_0) \,\hat\times_{L_1}\, \mathcal M_1^{\rm RGW}(L_1;\beta_1) \end{equation} or \begin{equation}\label{case-II} \mathcal M^{\rm RGW}_{0,1}(L_1,L_0;p,q;\beta_0) \,\hat\times_{L_0}\, \mathcal M_1^{\rm RGW}(L_0;\beta_2), \end{equation} where $\beta_0 + \beta_1 = \alpha$ (resp. $\beta_0 + \beta_2 = \alpha$) and the Maslov index of $\beta_1$ (resp. $\beta_2$) is $2$. We focus on the boundary components of the form \eqref{case-I}. The other case is similar.
There are two cases to consider: \par\smallskip \noindent {\bf (Case 1)} ($\beta_0\neq 0$): The virtual dimension of $\mathcal M^{\rm RGW}_{1,0}(L_1,L_0;p,q;\beta_0)$ is \[ \dim(\mathcal M^{\rm RGW}_{0,0}(L_1,L_0;p,q;\alpha))-1. \] If the virtual dimension of $\mathcal M^{\rm RGW}_{0,0}(L_1,L_0;p,q;\alpha)$ is not greater than $0$, then the multi-section does not vanish on this component. To treat the case that the virtual dimension of $\mathcal M^{\rm RGW}_{0,0}(L_1,L_0;p,q;\alpha)$ is $1$, note that the multi-section on $\mathcal M^{\rm RGW}_{1,0}(L_1,L_0;p,q;\beta_0)$ is the pull-back of the multi-valued perturbation on $\mathcal M^{\rm RGW}_{0,0}(L_1,L_0;p,q;\beta_0)$. This latter moduli space has virtual dimension $-1$ and hence the multi-section does not vanish on it. Therefore, the multi-section does not have any zero on the moduli space $\mathcal M^{\rm RGW}_{1,0}(L_1,L_0;p,q;\beta_0)$, nor on \eqref{case-I}. \par\smallskip \noindent {\bf (Case 2)} ($\beta_0=0$): In this case, $p=q$ and $\alpha= 0 \# \beta_1$, where $\beta_1$ is a homology class in $\Pi_2(X;L_1)$ with Maslov index $2$. Therefore, the corresponding boundary component is identified with $\mathcal M_1^{\rm RGW}(L_1;\alpha;p)$, where $p \in L_0\cap L_1$. We defined a multi-valued perturbation on this moduli space such that its zero set is cut out transversely and consists of isolated points. Now we can proceed as in the proof of Proposition \ref{prop61111} to complete the construction of the multi-valued perturbations. \end{proof}
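The dimension bookkeeping behind Case 1 can be spelled out as follows (a sketch: since the disk class $\beta_1$ has Maslov index $2$, passing from $\alpha$ to $\beta_0$ drops the virtual dimension by $2$, while the extra boundary marked point raises it by $1$):
\[
\dim \mathcal M^{\rm RGW}_{1,0}(L_1,L_0;p,q;\beta_0)
 = \dim \mathcal M^{\rm RGW}_{0,0}(L_1,L_0;p,q;\beta_0) + 1
 = \bigl(\dim \mathcal M^{\rm RGW}_{0,0}(L_1,L_0;p,q;\alpha) - 2\bigr) + 1.
\]
In particular, when the total moduli space is $1$-dimensional, $\mathcal M^{\rm RGW}_{0,0}(L_1,L_0;p,q;\beta_0)$ has virtual dimension $-1$, which is what forces the pulled-back multi-section to be nowhere zero there.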
\section{Introduction} It is known that most stars are born within giant molecular clouds forming clusters \citep{Lad03}. Numerical simulations demonstrate that star formation occurs mainly along the patterns defined by the densest regions of the molecular clouds \citep{Bon03}. Thus, the hierarchical structure observed in some open clusters \citep[see, for example,][]{Lar95} is presumably a consequence of their formation in a medium with an underlying fractal structure. This fractality is considered a clear signature of the turbulent nature of the parent cloud \citep{Elm04}. In contrast, open clusters having central star concentrations with radial star density profiles likely reflect the dominant role of gravity, either on the primordial gas structure or as a result of a rapid evolution from a more structured state \citep{Lad03}. It is therefore important to study the distribution of stars because it may yield some information on the formation process and early evolution of open clusters. It is necessary, however, that this kind of analysis be done by measuring the cluster structure in an objective, quantitative, and systematic way. The application of the two-point correlation function by \citet{Lar95} to young stars in Taurus suggested that the distribution of stars on spatial scales larger than the binary regime exhibits a fractal pattern with a projected dimension of $D \sim 1.4$. This value is very similar to the average value $D \sim 1.5 \pm 0.2$ found by \citet{Sim97} for the Taurus, Ophiuchus, and Orion trapezium regions. More recent works find significantly smaller values for stars in Taurus, such as $D = 1.02 \pm 0.04$ \citep{Har02} and $D = 1.049 \pm 0.007$ \citep{Kra08}. The difference could be at least partly due to differences in the completeness of the sample \citep{San08}. 
\citet{Nak98} studied the clustering of stars in the Orion, Ophiuchus, Chamaeleon, Vela, and Lupus regions obtaining significant variations from region to region in the range $1.2 < D < 1.9$. The interpretation of these results requires some caution because it has been shown that a good power-law fit to the two-point correlation function does not necessarily mean that the stellar distribution is fractal \citep{Bat98}. \citet{Car04} developed a different method to quantify the structure of star clusters. Their method is based on the construction of the minimum spanning tree (MST) of the cluster and it has the important advantage of being able to distinguish between centrally concentrated and fractal-like structures. They concluded that the fractal dimension of the three-dimensional distribution of stars in Chamaeleon and IC~2391 is $D_f = 2.25 \pm 0.25$, whereas in Taurus $D_f = 1.9 \pm 0.2$ \citep[see also][]{Sch06}. These dimensions seem too small when compared with the average value of $D_f \simeq 2.6-2.7$ suggested for the structure of the interstellar medium \citep{San05,San07b}, although higher dimension values have been reported using this technique (MST) on clusters in other regions such as Serpens and Ophiuchus \citep{Sch08}. The rapid early evolution of star clusters may complicate the picture, because the parameters characterizing the cluster structure must only be taken as instantaneous values which might change significantly in a few Myr \citep{Bas08}. \citet{Sch06} applied the MST method to both observed and simulated clusters to argue that star clusters preferentially form with a clustered, fractal-like structure and gradually evolve to a more centrally concentrated state \citep[see also][]{Sch08}. In any case, some kind of relationship between the {\it initial} structure of the clusters and the properties of the turbulent medium where they were born is expected \citep{Bal07}. 
\citet{Sch08} find certain evidence that regions with relatively high Mach numbers form more hierarchically structured clusters, i.e. clusters with relatively small fractal dimensions. They estimated a Mach number of $M \simeq 5.8$ in the Ophiuchus region where the cluster L1688 is found, for which they reported a structure parameter compatible with $D_f \sim 2.5$. These results would be in agreement with simulations of turbulent fragmentation in molecular clouds \citep{Bal06}, but it has to be pointed out that other studies do not find such a correlation \citep{Sch06} and others directly contradict it \citep{Eno07}. \citet{Fed07} used numerical simulations of supersonic turbulence to show that, for the same Mach number ($M \simeq 5.5$), the fractal dimension of the medium can be very different, ranging from $D_f \simeq 2.3$ to $D_f \simeq 2.6$ depending on whether turbulence is driven by the usually adopted solenoidal forcing or by compressive forcing, respectively. In this work, we consider this subject by systematically analyzing the distribution of stars in a sample of open clusters with kinematical data available in the literature, spanning a representative range of ages and distances. The clusters are visible at optical wavelengths, possibly indicating that even the youngest ones have dispersed most of the gas and dust from which they were born. Obviously, these objects may present significant contamination by field stars projected along the line of sight. The MST technique tends to lose information on the degree of fractality as the number of contaminating field stars increases \citep{Bas09}. Moreover, and very importantly, the combination of data coming from different sources with different membership selection criteria might introduce undesired scatter as well as some bias in the final results. To overcome these problems, we decided to calculate the memberships by applying the same general, non-parametric method to all the clusters. 
In order to achieve a representative work sample (Section~\ref{membresias}), we first collect in Section~\ref{sample} as much data as possible on positions and proper motions of stars in open cluster regions. Using these data, we apply in Section~\ref{salson} the non-parametric method to assign cluster memberships. A comparison between these memberships and those obtained from the classical parametric method is done in Section~\ref{comparacion}. The distribution of the stars is then quantified in Section~\ref{resultados} by means of the MST technique, King profile fittings, and the correlation dimension if the distribution is fractal. The dependence of the cluster structure on its age is discussed in Section~\ref{correlaciones}. Finally, the main results are summarized in Section~\ref{conclusiones}. \section{Star cluster membership} \label{membresias} \subsection{The sample of clusters} \label{sample} We first used VizieR\footnote{http://vizier.u-strasbg.fr} \citep{Och00} to search for catalogs of open clusters containing both positions and proper motions available in machine-readable format. We required the data to be available for all the stars in the field and not only for the probable members according to each author's criteria. Then we checked the catalogs and rejected those that could generate some sort of bias. For example, catalogs containing data only for a specified region of the cluster or for a limited sample of stars were ruled out. In the end, we have a total of 16 open clusters which are listed in Table~\ref{cumulos}. 
This table also gives the logarithm of the cluster age in Myr ($\log (T)$) and the distance in pc ($D$), taken from the Webda database\footnote{http://www.univie.ac.at/webda}, as well as the number of stars having positions and proper motions in the original catalog ($N_d$), the number of stars selected as cluster members in Section~\ref{salson} ($N_s$), and the values calculated in Section~\ref{resultados} for the structure parameter ($Q$) and the core ($R_c$) and tidal ($R_t$) radii in pc. The last column in Table~\ref{cumulos} lists the references from which the data used in this work were taken. We have to mention that the clusters in this sample have been observed at optical wavelengths. They have little or no primordial interstellar gas in them and therefore they may be in a supervirialized state \citep{Goo04}, especially the youngest ones. \subsection{Non-parametric method} \label{salson} An initial step in any study on open clusters is the reliable identification of probable members. This is a complex problem that deserves to be addressed comprehensively. Several different methods for estimating membership probabilities may be used depending on whether one is dealing with positions, proper motions, radial velocities, multiband photometry, or a combination of them. However, it is commonly accepted that membership probabilities obtained from kinematical variables are more reliable than those derived from other kinds of physical variables. When working with proper motion data, the most often used method is the algorithm designed by \citet{San71}, based on the earlier model proposed by \citet{Vas58} for the proper motion distribution in the cluster vicinity. The method assumes that the two populations (cluster members and field stars) are distributed according to normal bivariate functions and then the observed distribution is a weighted mixture of these two underlying distributions. 
It can be proven that the classification and estimation problem derived from this model has a well-defined mathematical solution. Some problems may arise when applying the method of \citet{San71} if the two parent populations are very far from the mathematical functions on which the model is based \citep{Pla01}. In order to prevent this and other potential problems, \citet{Cab90} developed a more general, non-parametric method which makes no {\it a priori} assumptions about the cluster and field star distributions. Besides the proper motions, the method uses the spatial distribution of stars as a complementary and necessary source of information. Generally speaking, the method iteratively estimates the probability density function using kernel functions with smoothing parameters chosen such that the likelihood is maximum \citep[see details in][]{Cab90}. The only astronomical hypotheses remaining are that there are two populations (cluster and field) and that cluster members are more densely distributed than field stars (both in proper motions and in positions). An important distinction between the classical method of \citet{San71} and this one is that here the classification of the stars can be done according to three different probabilities: the probability derived from the position space $P(x,y)$, the probability derived from the proper motion space $P(\mu_x,\mu_y)$, and the joint probability $P(x,y,\mu_x,\mu_y)$. We applied this method to our sample of open clusters (Section~\ref{sample}) in a systematic and self-consistent way. We used exactly the same algorithm and the same selection criteria for cluster members: $P(x,y,\mu_x,\mu_y) \ge 0.5$ and $P(\mu_x,\mu_y) \ge 0.5$. This choice puts more weight on the kinematical variables than on the positional variables. If the algorithm did not find any cluster member (this happened in 5 of the 16 cases) then the joint probability criterion was relaxed to $P(x,y,\mu_x,\mu_y) \ge 0.4$, but no additional condition was needed to achieve convergence. 
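The core of this selection rule can be illustrated with a minimal sketch, not the actual iterative maximum-likelihood algorithm of \citet{Cab90}: kernel density estimates are built for tentative cluster and field samples, normalized membership probabilities are derived from them, and the double criterion above selects the members. All data and population splits below are synthetic, introduced only for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic data: broad field population and compact cluster, each with
# columns (x, y, mu_x, mu_y).
field = rng.normal(0.0, 5.0, (200, 4))
cluster = rng.normal([1.0, -1.0, 2.0, 0.5], 0.5, (100, 4))
stars = np.vstack([field, cluster])

def membership(sub_c, sub_f, pts):
    """Normalized probability of belonging to the cluster population."""
    dens_c = gaussian_kde(sub_c.T)(pts.T)
    dens_f = gaussian_kde(sub_f.T)(pts.T)
    return dens_c / (dens_c + dens_f)

# Joint probability P(x, y, mu_x, mu_y) and proper-motion-only P(mu_x, mu_y).
p_joint = membership(cluster, field, stars)
p_pm = membership(cluster[:, 2:], field[:, 2:], stars[:, 2:])

# Selection rule used in the text: both probabilities >= 0.5.
members = (p_joint >= 0.5) & (p_pm >= 0.5)
```

In the real method the cluster/field split is of course not known beforehand; it is obtained iteratively, together with the smoothing parameters, by maximizing the likelihood.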
It is worth noting that, given the iterative nature of this method, the final membership probabilities are in principle dependent upon the decision rule chosen. We have performed some tests by varying the selection criteria around the above values and, although there were changes in the membership assignments, the results and trends obtained on the spatial structure of the clusters (next sections) remained practically unaltered. One advantage of using this method is that the combination of position and proper motion distributions as membership criteria, along with the fact that it does not make any assumption on the underlying distributions, gives a higher degree of flexibility that makes it easier to see the underlying structure. Here we show, as illustrative examples, the results for two different open clusters. In Figures~\ref{ic2391} and \ref{ngc2194} \begin{figure}[th] \epsscale{.9} \plotone{f1.eps} \caption{Positions ({\it left}) in pc relative to the center and proper motions ({\it right}) in mas/yr for the stars in the region of the open cluster IC~2391. Red circles indicate field stars and blue circles cluster members according to the method applied in this work.} \label{ic2391} \end{figure} \begin{figure}[th] \epsscale{.9} \plotone{f2.eps} \caption{Same as Fig.~\ref{ic2391}, but for the open cluster NGC~2194.} \label{ngc2194} \end{figure} we can see both positions and proper motions for the stars in the region of the open clusters IC~2391 and NGC~2194, respectively. We also show, in Figures~\ref{pdfic2391} and \ref{pdfngc2194}, \begin{figure}[th] \epsscale{.9} \plotone{f3.eps} \caption{Probability density functions for the stars in the region of the open cluster IC~2391. The two upper panels show the projections in $x$ and $y$ of the probability densities in the position space. The two lower panels are the projections in $\mu_x$ and $\mu_y$ of the probability densities in the proper motion space. 
Red circles refer to field stars and the blue ones to cluster members.} \label{pdfic2391} \end{figure} \begin{figure}[th] \epsscale{.9} \plotone{f4.eps} \caption{Same as Fig.~\ref{pdfic2391}, but for the open cluster NGC~2194.} \label{pdfngc2194} \end{figure} the corresponding probability density functions for the same two clusters. We see that both populations (field stars and cluster members) have been successfully separated by the algorithm. The spatial distribution of stars in IC~2391 is more irregular than in NGC~2194, but this is difficult to appreciate directly from the positions because of the small number of members in IC~2391. However, the probability density functions allow a very easy visualization of the spatial structure. For example, two separate peaks are clearly visible in IC~2391 located at ($x$,$y$) positions close to (1,0) and (-0.5,0), and an additional weaker overdensity close to (0,-2). NGC~2194 exhibits a smoother distribution in the central region that becomes more irregular at the border of the cluster. For example, a small overdensity can be observed close to position (-5,5). Here we show the projected probability density functions; obviously, the three-dimensional display allows a better visualization of the cluster structure. \subsection{Comparison between the parametric and non-parametric methods} \label{comparacion} The methods for discriminating between cluster and field stars based on the proper motion distributions \citep{Vas58} use parametric Gaussian functions to represent the corresponding probability density functions (PDFs). Usually a circular Gaussian function is assumed for the PDF of the cluster whereas an elliptical one is adopted for the field. 
As mentioned in Section~\ref{salson}, this procedure may present problems if the underlying PDFs are far from being simple Gaussians, if the proper motion errors are anisotropic, or if the heteroscedastic distance between the two stellar populations is small \citep[see more detailed discussions in][]{Cab85,Cab90,Pla01,Bal04}. In this case, a suitable option is to apply a non-parametric discriminating method that determines the PDFs empirically, without a priori assumptions about the profile shapes. Additionally, even though the underlying PDFs may be well represented by Gaussians, if the cluster mean proper motion is very close to the maximum of the field distribution then the discriminating procedure becomes especially difficult. In fact, the discrimination becomes more difficult as the statistical distance between both populations decreases. To increase the statistical distance between cluster and field it becomes necessary to extend the dimension of the measurement space, and this is done by including the spatial coordinates in the non-parametric method used in this study \citep{Cab90}. In order to illustrate (and quantify) these arguments, let us compare the membership assignments obtained in this work (Section~\ref{salson}) with those obtained from the classical parametric method. We have used the algorithm proposed by \citet{Cab85}, which estimates the parameters with a simpler and more efficient procedure than that of \citet{San71}. Moreover, the algorithm first identifies outliers in the data in an objective way, i.e., in a distribution-free way not based on any previous parameter estimation. This is an important preliminary step because outliers make the distribution of field stars flatter than the actual one, modifying the final probabilities of cluster membership. In order to perform a better comparison we applied this parametric method to exactly the same data that we used for the non-parametric method. 
As representative examples, Figure~\ref{ejemplos} \begin{figure}[th] \epsscale{.9} \plotone{f5.eps} \caption{Probability density functions in the proper motion space (in mas/yr) for the stars in the region of (a) M~67 and (b) NGC~1513. Red and blue circles refer to field and cluster stars according to the non-parametric method, whereas thick black solid lines refer to the results of the parametric method.} \label{ejemplos} \end{figure} shows the resulting PDFs in the proper motion space for two different open clusters (for clarity, only the projection on the coordinate $\mu_x$ is shown). For the case of M~67 (Figure~\ref{ejemplos}a) the parametric model finds the position of the cluster centroid at $\mu_{x,c} = -0.54$ and $\mu_{y,c} = +0.43$ with $\sigma_c = 1.04$, and the field centroid at $\mu_{x,f} = +0.46$ and $\mu_{y,f} = +2.22$ with $\sigma_{x,f} = 4.71$ and $\sigma_{y,f} = 4.53$. Both parametric and non-parametric PDFs are similar to each other because the cluster and field PDFs are different enough to allow an adequate separation of both populations. In fact, 93.67\% of the stars in the field of M~67 were assigned to the same population (cluster member or field star) by both methods. For NGC~1513 (Figure~\ref{ejemplos}b) the parametric method finds the cluster centroid at $\mu_{x,c} = -0.34$ and $\mu_{y,c} = +0.53$ with $\sigma_c = 1.45$, and the field at $\mu_{x,f} = +0.41$ and $\mu_{y,f} = +0.24$ with $\sigma_{x,f} = 3.83$ and $\sigma_{y,f} = 4.06$. For this case the statistical difference between both Gaussian PDFs is small in comparison with M~67 so that, in principle, it is more difficult to disentangle both populations. The differences between the parametric and non-parametric PDFs are more evident, and only 71.4\% of the stars were assigned to the same class by both methods. 
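The separability suggested by these fitted parameters can be quantified with a closed-form statistical distance between the two Gaussians. A minimal sketch using the Bhattacharyya distance (the Chernoff distance evaluated at $s=1/2$), treating the dispersions quoted above as diagonal covariances, which is an assumption of this illustration:

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance (the Chernoff distance at s = 1/2) between two
    multivariate Gaussian distributions."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov = 0.5 * (np.asarray(cov1) + np.asarray(cov2))
    diff = mu2 - mu1
    mahal = 0.125 * diff @ np.linalg.solve(cov, diff)
    logdet = 0.5 * np.log(np.linalg.det(cov) /
                          np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return mahal + logdet

# Parametric fits quoted above (proper motions in mas/yr).
d_m67 = bhattacharyya([-0.54, 0.43], np.diag([1.04**2, 1.04**2]),
                      [0.46, 2.22], np.diag([4.71**2, 4.53**2]))
d_ngc1513 = bhattacharyya([-0.34, 0.53], np.diag([1.45**2, 1.45**2]),
                          [0.41, 0.24], np.diag([3.83**2, 4.06**2]))
```

With these inputs the distance for M~67 comes out larger than for NGC~1513, consistent with the easier separation of its two populations.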
The difference in class assignments arises from the different PDFs in the proper motion space, but it equally arises from the fact that the non-parametric method also uses information from the position space: a star relatively far from the proper motion centroid might be classified as a probable cluster member if it lies in a high density region of the corresponding spatial PDF. The statistical separation between any two populations can be described through the Chernoff probabilistic distance \citep{Che52}, which is a measure of the difference between two probability distributions. We have calculated the Chernoff distance between the two Gaussian PDFs obtained by the parametric method. This was done for all the clusters in the sample to quantify the differences between the two stellar populations (cluster and field) in the proper motion space. Figure~\ref{chernoff} \begin{figure}[th] \epsscale{.9} \plotone{f6.eps} \caption{Agreement in membership assignment between the parametric and non-parametric methods as a function of the Chernoff distance between the cluster and field parametric probability density functions.} \label{chernoff} \end{figure} shows the percentage of stars that obtained the same assignment (member or non-member) by both methods as a function of the Chernoff distance. The non-parametric method used in this work is robust in the sense that if cluster and field stars can be easily separated in the proper motion space then the results agree very well with those of the standard parametric method. For small Chernoff distances it is more difficult to disentangle both stellar populations from their proper motions alone. In this case, the non-parametric method has the advantage of using additional information from the star positions and is then able to provide a better discrimination. \section{Distribution of stars} \label{resultados} We start by using the minimum spanning tree (MST) technique to analyze the distribution of stars in the clusters. 
The MST is the set of straight lines (called edges) connecting a given set of points without closed loops, such that the total edge length is minimum. \citet{Car04} used this technique to study the distribution of stars in clusters, introducing the dimensionless parameter $Q$. In order to calculate $Q$ we first need to determine the normalized correlation length $\overline{s}$, i.e. the mean separation between stars divided by the overall radius of the cluster. Next, from the MST we determine the normalized mean edge length $\overline{m}$, i.e. the mean length of the branches of the tree divided by $(A/N)^{1/2}$, where $A$ is the cluster area and $N$ the total number of stars. To estimate the area (and from that the radius) we use the strategy suggested by \citet{Sch06}, which consists of using the area of the convex hull, i.e. the minimum-area convex polygon containing the whole set of data points. Neither of these parameters ($\overline{s}$ and $\overline{m}$) can by itself distinguish between a (relatively smooth) large-scale radial density gradient and a multiscale (fractal) subclustering. However, \citet{Car04} showed that the combination $Q=\overline{m}/\overline{s}$ not only is able to distinguish between radial clustering and fractal-type clustering but can also quantify them. We have generated two different sets of random three-dimensional distributions of points: one having a volume density of stars $n$ decreasing smoothly with the distance from the center $r$ as $n \propto r^{-\alpha}$ \citep{Car04}, and the other having fractal patterns according to a recipe that generates distributions with a well-defined fractal dimension $D_f$ \citep{San08}. These random simulations were repeated 50 times; the distributions were projected on random planes, and the parameter $Q$ was then calculated directly from the projected distributions. The overall results are shown in Figure~\ref{teoria}. 
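The computation of $Q$ just described can be sketched as follows, using the convex-hull area for $A$; the choice of the overall radius (here, the radius of a circle with the hull's area) is an assumption of this sketch:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist, squareform

def structure_parameter_q(xy):
    """Q = mbar / sbar of Cartwright & Whitworth, from projected positions."""
    xy = np.asarray(xy, float)
    n = len(xy)
    pairwise = pdist(xy)

    # Normalized mean MST edge length: divide by sqrt(A / N), with A the
    # convex-hull area (in 2-D, ConvexHull.volume is the enclosed area).
    edges = minimum_spanning_tree(squareform(pairwise)).data
    area = ConvexHull(xy).volume
    mbar = edges.mean() / np.sqrt(area / n)

    # Normalized correlation length: mean pairwise separation divided by an
    # overall cluster radius (here, a circle of equal area -- an assumption
    # of this sketch).
    sbar = pairwise.mean() / np.sqrt(area / np.pi)
    return mbar / sbar

rng = np.random.default_rng(1)
q_uniform = structure_parameter_q(rng.uniform(-1.0, 1.0, (500, 2)))
```

Subclustered sets give lower values and centrally concentrated ones higher values; note that the exact value obtained for a homogeneous distribution depends on the radius convention adopted.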
\begin{figure}[th] \epsscale{.9} \plotone{f7.eps} \caption{Mean values of the parameter $Q$ as a function of the fractal dimension $D_f$ for projected fractals (open squares, bottom axis), and as a function of the index $\alpha$ for projected radial profiles (open circles, top axis). The bars are the corresponding standard deviations. The solid horizontal line indicates the value $Q=0.785$ for which both results converge to the homogeneous distribution case ($D_f=3$ and $\alpha=0$).} \label{teoria} \end{figure} The value $Q=0.785$ (indicated as a horizontal line) separates radial clustering (open circles) from fractal clustering (open squares). Moreover, the value of $Q$ itself gives information about the value of $\alpha$ or $D_f$. We have to point out, however, that the uncertainties for the fractal distributions are too large to determine $D_f$ precisely from the $Q$ value. We applied this method to the sample of stellar clusters and the resulting $Q$ values are given in Table~\ref{cumulos}. Stars in clusters with $Q>0.80$ are distributed following radial clustering profiles. For a better characterization of this kind of structure we have fitted \citet{Kin62} profiles to the radial density distributions of the cluster members \citep[see][for a discussion on the applicability of this kind of fit to open clusters]{Hil98}. Before doing the fit we subtract from the cluster density function the maximum of the field density function, i.e. we perform the fit only for the stars in the cluster having probability densities above the maximum field density. Figure~\ref{perfiles} \begin{figure}[th] \epsscale{.9} \plotone{f8.eps} \caption{Radial density profiles for the members of the open clusters IC~2391 (solid circles) and NGC~2194 (open circles). 
Dashed curves are the King profiles fitted up to the maximum density of the field stars.} \label{perfiles} \end{figure} shows the results for the same two example clusters shown in the previous figures (IC~2391 and NGC~2194). We performed this fit for all the clusters in our sample, even for the ones that do not follow smooth profiles. From the best fits we obtained the core ($R_c$) and tidal ($R_t$) radii. Both radii are shown in Table~\ref{cumulos}. Clearly, this fit is unrealistic when the cluster exhibits a high degree of substructure but, even in this case, it allows us to estimate the cluster radius ($R_t$) in a homogeneous way for the 16 clusters in the sample. Eight of the clusters of our sample (IC~2391, M~34, NGC~581, NGC~1513, NGC~1647, NGC~1817, NGC~4103, and NGC~6530) have structure parameter values close to, or below, the threshold value $Q \simeq 0.80$. These clusters would follow fractal-like patterns but, as mentioned before, inferring the fractal dimension from the $Q$ value is quite uncertain. For these clusters, we chose to estimate the degree of clumpiness by calculating the correlation dimension ($D_c$). For this we use an algorithm that estimates $D_c$ in a reliable (precise and accurate) way \citep{San07a,San08}. The algorithm avoids the usual problems that arise at relatively large scales (boundary effects) and small scales (finite-data effects) by using objective and suitable criteria. Moreover, an uncertainty associated with each $D_c$ value is estimated using bootstrap techniques. The application of this algorithm to the eight clusters having fractal structure yields the results shown in Table~\ref{Dc_cumulos}. \section{Discussion} \label{correlaciones} Now we proceed to examine the dependence of $Q$ on the cluster age in order to compare it with the trend mentioned by other authors. 
This kind of dependence has been suggested not only for stellar clusters \citep{Sch06,Sch08} but also for the distribution of young stars in the Gould Belt \citep{San07a}, the distribution of young clusters in the solar neighborhood \citep{Fue06}, and the distribution of stars \citep{Bas09,Gie08,Ode08} and HII regions \citep{San08} in external galaxies. A slight positive trend is apparent when we plot $Q$ versus $\log(T)$, i.e. fractal clusters tend to be younger than clusters having radial density profiles. However, the statistical analysis indicates that there is no significant correlation between this structure parameter and cluster age, neither for the full sample nor for the fractal clusters and density profile clusters considered individually. From simple arguments one would expect that $Q$ increases with time {\it for each cluster}. Gravitationally unbound clusters will tend to nearly homogeneous distributions ($Q=0.79$, $D_c=2.0$) because of the dispersal of stars, whereas self-gravity will lead to more centrally peaked distributions in bound clusters. It could take several crossing times to reach an equilibrium state and/or to erase the original distribution \citep{Bon98,Goo04}, although it might take only one crossing time \citep{Bas09}. The typical crossing time in open clusters is of the order of $10^6$ years \citep{Lad03} but, assuming nearly the same typical velocity dispersion, the crossing time is roughly proportional to the cluster size. Let us consider the new variable $T/R_t$ (in yr/pc), which is proportional to time measured in crossing time units. In this case we do observe the correlation \begin{displaymath} Q = (0.07 \pm 0.03) \log (T/R_t) + (0.35 \pm 0.21)\ \ \ , \end{displaymath} which is significant at the 96\% confidence level. This result is shown in Figure~\ref{correla1b}. 
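Fits of this kind are ordinary least-squares regressions; a sketch on synthetic $(\log(T/R_t), Q)$ pairs (the actual cluster table is not reproduced here) showing how the slope, intercept, and quoted confidence level are obtained:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)

# Synthetic pairs with a weak positive trend, standing in for the 16
# clusters of the sample (the real table is not reproduced here).
x = rng.uniform(5.5, 8.5, 16)        # log(T / R_t)
q = 0.07 * x + 0.35 + rng.normal(0.0, 0.05, 16)

fit = linregress(x, q)               # slope, intercept, and their errors
confidence = 1.0 - fit.pvalue        # e.g. 0.96 for a "96% confidence level"
```

The slope and intercept uncertainties correspond to `fit.stderr` and `fit.intercept_stderr`, and the significance quoted in the text corresponds to one minus the two-sided $p$-value of the slope.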
\begin{figure}[th] \epsscale{.9} \plotone{f9.eps} \caption{Structure parameter $Q$ as a function of the logarithm of age divided by the tidal radius, which is nearly proportional to age in crossing time units. The dashed line at $Q=0.8$ roughly separates radial from fractal clustering, and the solid line is the best linear fit.} \label{correla1b} \end{figure} Previous detailed studies on the fractal properties of projected distributions of points \citep{San07a,San08} have shown that the uncertainty associated with $D_c$ depends on the number of available data points. Moreover, when the number of data points is too low ($N \lesssim 200$) a bias in the mean $D_c$ values is produced. We performed a similar analysis for the parameter $Q$ using the simulated fractals. We verified that the mean measured value of $Q$ tends to be overestimated if $N \lesssim 200$, and the bias increased as the fractal dimension (and therefore $Q$) decreased. For the extreme case studied here ($D_f = 2$), the maximum difference between the mean value of $Q$ for well-sampled point sets (namely $Q=0.576$, see Figure~\ref{teoria}) and fractals having $N \sim 200$ data points was $\Delta Q \simeq 0.06$. The important point here is that if this kind of bias is present in our results, then the correlation shown in Figure~\ref{correla1b} might be reinforced. The structure parameter is shown in Figure~\ref{QvsLogC} \begin{figure}[th] \epsscale{.9} \plotone{f10.eps} \caption{Structure parameter as a function of the concentration parameter of the King model. The dashed line ($Q=0.8$) roughly separates clusters with well-defined radial density profiles (open circles) and clusters with substructures (filled circles). The solid line is the best linear fit for the upper subsample.} \label{QvsLogC} \end{figure} as a function of the concentration parameter of the King model. Interestingly, the behaviors of the subsamples $Q > 0.8$ and $Q \leq 0.8$ are clearly differentiated. 
$Q$ correlates strongly with the concentration for clusters with well-defined radial density profiles, the best linear fit (solid line in Fig.~\ref{QvsLogC}) being \begin{displaymath} Q = -(0.66 \pm 0.20) \log (R_t/R_c) + (1.24 \pm 0.10)\ \ \ , \end{displaymath} with a confidence level greater than 98\%. In contrast, the fractal-like subsample does not show any correlation at all. As we have seen, there seems to be some evidence that young clusters tend to distribute their stars following fractal patterns whereas older clusters tend to exhibit centrally concentrated structures. But this is only an overall trend. Note, for example, that NGC~1513 and NGC~1647 both have $Q \sim 0.7$ with ages of $T \gtrsim 100$ Myr. The advantage of analyzing the clustering properties via the correlation dimension is that, apart from {\it directly} measuring the fractal dimension, the assignment of an associated uncertainty allows us to know the reliability of each measurement. The results of $D_c$ for the clusters having $Q\lesssim 0.8$ are shown in Table~\ref{Dc_cumulos}. The best linear fit between the fractal dimension and the age is: \begin{displaymath} D_c = (0.14 \pm 0.05) \log (T) + (0.77 \pm 0.39)\ \ \ , \end{displaymath} significant at a confidence level of 97\%. If we use $T/R_t$ instead of $T$ the fit becomes: \begin{displaymath} D_c = (0.11 \pm 0.04) \log (T/R_t) + (1.08 \pm 0.30)\ \ \ , \end{displaymath} significant at a level of 96\%. This last fit is shown in Figure~\ref{correla2b}, \begin{figure}[th] \epsscale{.9} \plotone{f11.eps} \caption{Calculated correlation dimension as a function of the logarithm of age divided by the tidal radius. The solid line is the best linear fit.} \label{correla2b} \end{figure} where we can see that the correlation is visually very good. The point farthest from the best-fit line is the cluster IC~2391, which has the smallest number of members ($N_s=62$) and the largest uncertainty in $D_c$ ($0.2$). 
If the result for this cluster is biased, then its fractal dimension should be higher than the value reported here and the correlation should be even stronger. An important aspect to be mentioned is that there exist stellar clusters as old as $\sim 100$ Myr that have not totally destroyed their clumpy substructure. This is a particularly meaningful result that gives some observational support to recent simulations of the dynamical evolution of young clusters \citep{Goo04}. We have already mentioned that converting from two-dimensional to three-dimensional fractal dimensions increases the associated uncertainties. However, it is interesting to note that, according to our previous works \citep{San07a,San08}, clusters with the smallest correlation dimensions ($D_c=1.74$) would have three-dimensional fractal dimensions around $D_f \sim 2.0$. This value is considerably smaller than the average value $D_f \simeq 2.6-2.7$ estimated for the interstellar medium in recent studies \citep{San05,San07b}. Perhaps the development of some kind of substructure in initially more homogeneous clusters, observed in some simulations, could explain this difference, although some coherence in the initial velocity dispersion would be necessary \citep{Goo04}. Another plausible explanation is that this difference is a consequence of a more clustered distribution of the densest gas at the smallest spatial scales in the molecular cloud complexes, according to a multifractal scenario for the interstellar medium \citep{Cha01,Tas07}. The problem is complex because it depends on: (a) the initial distribution of gas and dust in the parent cloud, (b) the way and degree in which this information is transferred to the new-born stars, and (c) how, and how fast, this initial star distribution evolves. Each one of these factors will depend to a greater or lesser extent on the involved physics and environmental variables. These points clearly require more investigation. 
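As a technical aside before concluding, the correlation-sum estimator behind the $D_c$ values discussed above can be sketched in simplified form; the boundary and finite-data corrections of the published algorithm, as well as the bootstrap uncertainties, are omitted here:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import linregress

def correlation_dimension(xy, n_radii=20):
    """Slope of log C(r) versus log r, where C(r) is the fraction of pairs
    separated by less than r, fitted over an intermediate range of scales."""
    d = np.sort(pdist(np.asarray(xy, float)))
    radii = np.geomspace(np.percentile(d, 1), np.percentile(d, 25), n_radii)
    c = np.searchsorted(d, radii) / d.size
    return linregress(np.log(radii), np.log(c)).slope

rng = np.random.default_rng(2)
# Homogeneous 2-D test set: the recovered slope should be close to (slightly
# below) 2, because boundary effects are not corrected in this sketch.
dc_uniform = correlation_dimension(rng.uniform(0.0, 1.0, (1000, 2)))
```

The uncorrected edge effects at large radii are exactly the kind of bias the algorithm of \citet{San07a} is designed to avoid.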
\section{Conclusions} \label{conclusiones} We have characterized quantitatively the distribution of stars in a relatively large sample of open clusters (a total of 16) spanning a wide range of ages. Membership probabilities were obtained by applying a non-parametric method that does not make any assumption on the underlying star distribution. This is a crucial point to avoid possible bias introduced by the cluster member selection process. We found evidence that stars in young clusters tend to be distributed following clustered, fractal-like patterns, whereas older clusters tend to exhibit radial star density profiles. This result supports the idea that stars in new-born clusters likely follow the fractal patterns of their parent molecular clouds, and that they eventually evolve toward more centrally concentrated structures \citep[see also][]{Sch06}. However, we have also obtained some other interesting results: (a) there exists a strong correlation between the structure parameter $Q$ and the concentration parameter of the King model $\log (R_t/R_c)$ for the clusters with well-defined radial density profiles, (b) clusters as old as $\sim 100$ Myr can exhibit a high degree of spatial substructure, and (c) there is a significant correlation between fractal dimension and age for the clusters with fractal distributions of stars. Additionally, we find that the smallest values of the corresponding three-dimensional fractal dimensions are $D_f \sim 2.0$, which is considerably smaller than the value $D_f \simeq 2.6-2.7$ estimated for the average interstellar gas distribution. If this is a general result, then some further explanation would be required. \acknowledgments We want to thank the referee for his/her comments which improved this paper. This research has made use of the VizieR database (operated at CDS, Strasbourg, France), the WEBDA database (operated at the Institute for Astronomy of the University of Vienna), and the NASA's Astrophysics Data System.
We acknowledge financial support from MICINN of Spain through grant AYA2007-64052 and from Consejer\'{\i}a de Educaci\'on y Ciencia (Junta de Andaluc\'{\i}a) through TIC-101. N.S. is supported by a post-doctoral JAE-Doc (CSIC) contract.
\section{Introduction} Solutions of certain partial differential equations (PDEs) exhibit self-similar dynamics in the asymptotic limit. A self-similar solution in the large-time asymptotic regime normally takes the form \begin{equation}\label{eq:structure} u(x, t) \sim \frac{A}{t^{\alpha}}\phi\left(B\frac{x}{t^{\beta}}\right), \quad\text{as}\,\,t\rightarrow\infty. \end{equation} This particular form of solution indicates that in the large-time asymptotic region the dynamics of the solution are controlled by two factors, the decay in the magnitude of $u$ and the spread of the spatial distribution of $u$, while the profile of the self-similar solution is described by the function $\phi$. The constants $A$ and $B$ in Eq. (\ref{eq:structure}) are usually related to some conservation laws of the PDE under study. In the literature, physicists were able to predict or determine the scaling exponents $\alpha$ and $\beta$ for critical phenomena by using the renormalization group (RG) approach for a variety of physical models in equilibrium statistical mechanics or quantum field theory \cite{GL54, G92, SO98, W71-I, W71-II}. In the early 1990s, Goldenfeld et al. developed a perturbative renormalization group method for PDEs and applied it to the study of a number of large-time asymptotic problems \cite{CGO91, GMOL90, GMO92, G92}. A slight twist of the original method was reported in \cite{C97, C01, MC03}. Around the same time, a nonperturbative RG approach was introduced by Bricmont et al. \cite{BK95, BKL94}, and was applied to the study of nonlinear dispersive and dissipative wave equations and a thermal-diffusive combustion system \cite{BPW94, BKX96, P02}. Also in the 1990s, Chen and Goldenfeld proposed a numerical RG (nRG) calculation for similarity solutions of the porous medium (Barenblatt) equation and traveling waves \cite{CG95}.
Their numerical procedure inspired mesh renormalization schemes for studying focusing problems arising in porous medium flow \cite{AABL01, BAA00}. Inspired by the numerical approach of Chen and Goldenfeld and the nonperturbative RG approach of Bricmont et al., Braga et al. introduced a class of nRG algorithms in a short conference paper \cite{BFI04} that allow one to systematically search for the critical exponents and the hidden decay in asymptotically self-similar dynamics through repeated scalings in time and space. In this paper, we carefully examine and validate the nRG algorithms of Braga et al. by comparing the numerical solutions of the nRG algorithms with either the exact or the asymptotic solutions of the model equations in the literature. We show that the self-similar dynamics captured by the nRG algorithms agree with the theoretical results for the scalar equations, as well as the systems of equations. Furthermore, a novel contribution of this paper is that we demonstrate that this nRG procedure can shed light on the behavior of self-similar solutions of certain physical models, such as the nonlinear diffusion-absorption model, that are difficult to analyze, both numerically and analytically. It is worth noting that a procedure introduced by Braga et al. for studying a nonlinear diffusion equation with periodic coefficients in \cite{BFMR03} shares a similar spirit with an nRG algorithm studied in this paper. Numerical procedures based on rescaling the solutions and the time and spatial variables have previously been developed and were used to study the solutions of PDEs that blow up in finite time \cite{BK88, FW03, LPSS86, RW00}. Such procedures exploit the known self-similar structure of the solutions under study to determine the appropriate rescalings.
The nRG algorithms in this paper, however, are unique in exploiting fixed points by generating successive iterations of a discrete RG transformation in space and time that drive the system towards a fixed point, and the current implementation of the algorithms in this paper is not suitable for studying blow-up problems. A numerical procedure that mimics the renormalization group theory to compute the spatial profile and blow-up time for self-similar behavior was proposed by Isaia \cite{bib:blow_up}. This version of the nRG algorithm, however, uses the Berger-Kohn time step \cite{BK88} that assumes the time decay exponent is known. Modification of the nRG algorithms presented in this paper for studying blow-up problems is currently under our investigation and will be reported in a separate paper. Finally, to aid the reader, we now outline the contents of the remainder of this paper. In Section \ref{sec:Burgers}, using the Burgers equation, we explain the fundamental idea of the nRG algorithm and outline the procedure of the algorithm. We show that the algorithm allows us to detect and determine the critical exponents in the self-similar solutions. In particular, we demonstrate the ability of the algorithm to determine the critical scaling exponents in time and space that render explicitly the distinct physical effects of the solutions of the Burgers equation, depending on the initial conditions. In Section \ref{sec:KdV}, we use the phenomena of dispersive shock waves of the Korteweg-de Vries equation to show that the algorithm can be used as a time integrator for investigating intermediate asymptotic behavior of solutions. In Section \ref{sec:diffusion_absorption}, we study a class of nonlinear diffusion-absorption models. A conjecture on the existence of a critical exponent of the nonlinear absorption term is proposed for problems with discontinuous diffusivities.
We also present a marginal case, for which the phenomenon of anomalous decay is observed, as a motivation for the next section. Finally, we present a modified nRG algorithm and illustrate the ability of the modified algorithm to detect and capture the hidden logarithmic decay through a nonlinear system of cubic autocatalytic chemical reaction equations in Section \ref{sec:cubic_autocatalytic}. \section{Scaling Transformation for Burgers Equation}\label{sec:Burgers} The nRG method studied in this paper is simply the integration of the PDE over a finite time-interval with fixed length followed by a rescaling. To explain this idea, we use the Burgers equation as an example to illustrate the scaling transformation procedure of nRG algorithms. We compare the asymptotic solutions obtained by the nRG algorithm with the exact solutions in the asymptotic region to demonstrate the robust nature of our algorithm. The Burgers equation with initial data at $t=1$ is written as \begin{equation}\label{eq:burgers} \begin{split} &u_t+uu_x=\nu u_{xx},\,\,t >1,\\ \text{I. C.\,\,:}\quad&u(x,1) = f(x), \end{split} \end{equation} where $\nu > 0$ is the viscosity. Let the time and space variables be scaled by powers of a fixed length $L>1$, \begin{equation}\label{eq:scale_tx} t = L\tilde{t},\quad x=L^{\beta}\tilde{x}, \end{equation} where $\beta> 0$, $\tilde{t}$ and $\tilde{x}$ are new variables. Suppose the solution of the initial value problem (IVP) (\ref{eq:burgers}), $u(x, t)$, is scaled by \begin{equation}\label{eq:scale_u} u_L(\tilde{x},\tilde{t}) = L^{\alpha}\,u(x,t) = L^{\alpha}\,u(L^{\beta}\tilde{x},L\tilde{t}), \end{equation} where $\alpha > 0$. This implies that \begin{equation}\label{eq:scale_u2} u(x, t) =L^{-\alpha}u_{L}(\tilde{x},\tilde{t}) = L^{-\alpha}u_{L}(L^{-\beta}x, L^{-1}t).
\end{equation} With the above scalings, each term in the Burgers equation is scaled as follows: \begin{equation}\label{eq:u_t} u_t = L^{-(\alpha+1)}\frac{\partial}{\partial \tilde{t}}\left(u_{L}(\tilde{x}, \tilde{t})\right),\,\, u_x = L^{-(\alpha+\beta)}\frac{\partial}{\partial \tilde{x}}\left(u_{L}(\tilde{x}, \tilde{t})\right),\,\, u_{xx} = L^{-(\alpha+2\beta)}\frac{\partial^{2}}{\partial \tilde{x}^2}\left(u_{L}(\tilde{x}, \tilde{t})\right). \end{equation} Substituting Eq. (\ref{eq:u_t}) into Eq. (\ref{eq:burgers}) yields \begin{equation}\label{eq:s_burgers} \begin{split} &L^{-(\alpha+1)}(u_L)_{\tilde{t}}+L^{-(2\alpha+\beta)}u_L(u_L)_{\tilde{x}}=\nu L^{-(\alpha+2\beta)} (u_L)_{\tilde{x}\tilde{x}},\,\,\tilde{t} >1,\\ \text{I. C.\,\,:}\quad&u_L(\tilde{x},1) = \tilde{f}(\tilde{x}), \end{split} \end{equation} We rewrite the above equation as \begin{equation}\label{eq:s2_burgers} \begin{split} &(u_L)_{\tilde{t}}+L^{-\alpha-\beta+1}u_L(u_L)_{\tilde{x}}=\nu L^{-2\beta+1} (u_L)_{\tilde{x}\tilde{x}},\,\,\tilde{t} >1,\\ \text{I. C.\,\,:}\quad&u_L(\tilde{x},1) = \tilde{f}(\tilde{x}). \end{split} \end{equation} The integration length for time is from $\tilde{t}=1$ to $\tilde{t}=L$, while the transformed initial condition is $\tilde{f}(\tilde{x})=L^{\alpha}u(L^{\beta}\tilde{x}, L)$. \subsection{Sequence of Scaling Transformations}\label{sec:seq} If we perform a sequence of scalings (iterations), then with a fixed $L>1$, and sequences of scaling exponents $\{\alpha_n\}$ and $\{\beta_n\}$, we can define a sequence of rescaled functions $\{u_n\}$ by rewriting Eq. (\ref{eq:scale_u}) (dropping $\tilde{}$ in $\tilde{x}$ and $\tilde{t}$ ) as \begin{equation}\label{eq:scale_un} u_n(x,t) = L^{\alpha_n}\,u_{n-1}(L^{\beta_n}x,Lt), \end{equation} with $u_0=u$ of the original IVP, Eq. (\ref{eq:burgers}). 
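Composing the recursion in Eq. (\ref{eq:scale_un}) simply multiplies the scale factors, so the exponents accumulate. A quick numerical sanity check of this composition, using a placeholder smooth profile in place of an actual Burgers solution (the profile and the exponent values are ours, chosen for illustration only):

```python
import math

L = 1.2  # fixed rescaling length

def u(x, t):
    """A placeholder smooth profile standing in for the Burgers solution."""
    return math.exp(-x * x / t) / math.sqrt(t)

def u_n(x, t, alphas, betas):
    """Eq. (scale_un) applied recursively:
    u_n(x, t) = L^{alpha_n} u_{n-1}(L^{beta_n} x, L t), with u_0 = u."""
    if not alphas:
        return u(x, t)
    return L ** alphas[-1] * u_n(L ** betas[-1] * x, L * t,
                                 alphas[:-1], betas[:-1])

# Composing n rescalings accumulates the exponents:
# u_n(x, t) = L^{n abar} u(L^{n bbar} x, L^n t).
alphas, betas = [0.3, 0.5, 0.7], [0.4, 0.5, 0.6]
abar, bbar = sum(alphas) / 3, sum(betas) / 3
lhs = u_n(0.7, 1.0, alphas, betas)
rhs = L ** (3 * abar) * u(L ** (3 * bbar) * 0.7, L ** 3 * 1.0)
```

The two sides agree to rounding error, which is exactly the cumulative-exponent identity used throughout this section.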
A simple calculation reveals that \begin{equation}\label{eq:u_seq} \begin{split} u_n(x,t)&=L^{(\alpha_1+\alpha_2+\alpha_3+\cdots+\alpha_n)}u(L^{(\beta_1+\beta_2+\beta_3+\cdots+\beta_n)}x, L^nt)\\ &=L^{n\bar{\alpha}_n}u(L^{n\bar{\beta}_n}x, L^nt), \end{split} \end{equation} where $\bar{\alpha}_n = \frac{1}{n}(\alpha_1+\alpha_2+\cdots+\alpha_n)$ and $\bar{\beta}_n = \frac{1}{n}(\beta_1+\beta_2+\cdots+\beta_n)$. Eq. (\ref{eq:u_seq}) shows how $u_n$ in the time interval $t \in [1, L]$ is related to $u$ in the time interval $t\in [L^{n}, L^{n+1}]$. Since at each iteration, the scaling of the PDE shown in Eq. (\ref{eq:s2_burgers}) is applied to the previous scaled equation, the solution of the $n^{th}$ iteration, $u_n(x,t)$, is the solution of the following scaled initial value problem \begin{equation}\label{eq:s_burgers_un} \begin{split} &(u_n)_{t}+L^{n(-\bar{\alpha}_n-\bar{\beta}_n+1)}u_n(u_n)_{x}=\nu L^{n(-2\bar{\beta}_n+1)} (u_n)_{xx},\,\,t >1,\\ \text{I. C.\,\,:}\quad&u_n(x,1) = f_n(x), \end{split} \end{equation} where $f_n(x)=L^{\alpha_n}u_{n-1}(L^{\beta_n}x, L)$, with $f_0=f$, the initial condition of the IVP (\ref{eq:burgers}). A simple observation for the scaled IVP (\ref{eq:s_burgers_un}) is that if $\bar{\alpha}_n\rightarrow1/2$ and $\bar{\beta}_n\rightarrow1/2$ as $n\rightarrow\infty$, the diffusion term has unscaled viscosity and the pre-factor of the quasilinear term on the left-hand side is of order 1 for sufficiently large $n$. In this case, for small viscosity, the long-time behavior of the solution is dominated by the quasilinear term. On the other hand, if $\bar{\alpha}_n\rightarrow 1$ and $\bar{\beta}_n\rightarrow 1/2$ as $n\rightarrow\infty$, the viscosity in the diffusion term remains unscaled, but the advection term on the left-hand side has $L$ to the power of negative $n/2$. With $L > 1$ and $n\rightarrow\infty$, the advection term eventually drops out of the equation, and the diffusion will be the dominant term.
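The two limiting regimes can be read off directly from the pre-factors of the scaled IVP (\ref{eq:s_burgers_un}); a minimal numerical check (iteration count chosen for illustration):

```python
import math

L = 1.2  # fixed rescaling length

def prefactors(abar, bbar, n):
    """Pre-factors of the advection and diffusion terms in the scaled
    IVP for u_n: L^{n(1 - abar - bbar)} multiplies u_n (u_n)_x, and
    L^{n(1 - 2 bbar)} multiplies nu (u_n)_xx."""
    return L ** (n * (1 - abar - bbar)), L ** (n * (1 - 2 * bbar))

# Case 1: abar, bbar -> 1/2 -- both pre-factors stay O(1), so for
# small viscosity the quasilinear term dominates.
k1, d1 = prefactors(0.5, 0.5, 200)
# Case 2: abar -> 1, bbar -> 1/2 -- the viscosity stays unscaled while
# the advection pre-factor decays like L^{-n/2} and drops out.
k2, d2 = prefactors(1.0, 0.5, 200)
```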
The numerical renormalization group calculations based on the algorithm in the next section confirm that for positive mass initial data, it is the former case, while for the zero mass initial data, it is the latter one. There is a third possibility, $\bar{\alpha}_n\rightarrow 0$ and $\bar{\beta}_n\rightarrow 1$ as $n\rightarrow\infty$, corresponding to the traveling wave solutions of the Burgers equation, that we do not discuss in this paper. \subsection{The nRG procedure}\label{sec:nrg} We describe the nRG procedure in Algorithm \ref{alg1:nrg}. \begin{algorithm}[t] \begin{algorithmic} \For {$n=0,1,2,\ldots,$ until convergence} \begin{enumerate} \item[1.] Start with the IVP (\ref{eq:burgers}) for $n=0$. Evolve $u_n$ from $t=1$ to $t=L$, using the IVP (\ref{eq:s_burgers_un}) for $n\ge 1$. \item[2.] Compute $\alpha_{n+1}$ by \begin{equation*} L^{\alpha_{n+1}} = \frac{||u_n(\cdot,1)||_{\infty}}{||u_n(\cdot,L)||_{\infty}} =\frac{||u(L^{n\bar{\beta}_n}\cdot,L^n)||_{\infty}}{||u(L^{n\bar{\beta}_n}\cdot,L^{n+1})||_{\infty}}. \end{equation*} \item[3.] Compute $\beta_{n+1}$ from an appropriate scaling relation between $\alpha_{n+1}$ and $\beta_{n+1}$. Normally, $\beta_{n+1}= g(\alpha_{n+1})$, for some function $g$. \item[4.] Compute $A_n=L^{n(\alpha_n-\bar{\alpha}_n)}$, $B_n=L^{n(\beta_n-\bar{\beta}_{n})}$, and $f_{n+1}(x)=L^{\alpha_{n+1}}u_n\left(L^{\beta_{n+1}}x,L\right)$, where $\bar{\alpha}_n = \frac{1}{n}(\alpha_1+\alpha_2+\cdots+\alpha_n)$ and $\bar{\beta}_{n} = \frac{1}{n}(\beta_1+\beta_2+\cdots+\beta_n)$. \end{enumerate} \EndFor \end{algorithmic} \caption{The nRG procedure for the Burgers equation} \label{alg1:nrg} \end{algorithm} Note that in Step 4 of Algorithm \ref{alg1:nrg}, the variable $A_n$ is defined, as we now explain, based on the assumption that for the solution of Eq.
(\ref{eq:burgers}), denoted $u(x, t)$, there exists a self-similar profile function $\phi$ such that \begin{equation}\label{eq:rg_similarity} u(x,t)\sim \frac{A}{t^{\alpha}}\phi\left(B\frac{x}{t^{\beta}}\right),\quad {\text{as}}\,\,\,t\rightarrow\infty, \end{equation} where $A$ and $B$ are constants and $\alpha$ and $\beta$ are the powers of decay and spreading with respect to time, respectively. After $n$ iterations, $t=L^{n}$, and hence Eq. (\ref{eq:rg_similarity}) becomes \begin{equation}\label{eq:rg_similarity_Ln} u(x,L^{n})\sim \frac{A}{L^{n\alpha}}\phi\left(B\frac{x}{L^{n\beta}}\right), \end{equation} which implies that for some large enough $n$ \begin{equation}\label{eq:rg_similarity_Ln_inv} L^{n\alpha}u(L^{n\beta}x,L^{n})\sim A\phi\left(Bx\right). \end{equation} Suppose that $\alpha_n\rightarrow\alpha$ and $\beta_n\rightarrow\beta$, as $n\rightarrow\infty$, Eq. (\ref{eq:rg_similarity_Ln_inv}) is equivalent to \begin{equation}\label{eq:rg_similarity_Ln_inv_n} L^{n\alpha_n}u(L^{n\beta_n}x,L^{n})\sim A\phi\left(Bx\right),\quad n\rightarrow\infty. \end{equation} From Eq. (\ref{eq:u_seq}), we have \begin{equation}\label{eq:u_seq_inv} L^{-n\bar{\alpha}_n}u_n(L^{-n\bar{\beta}_n}x, t) = u(x, L^{n}t). \end{equation} Letting $t=1$ in Eq. (\ref{eq:u_seq_inv}), Eqs. (\ref{eq:rg_similarity_Ln_inv_n}) and (\ref{eq:u_seq_inv}) together imply that \begin{equation}\label{eq:u_n_phi_1} L^{n(\alpha_n - \bar{\alpha}_n)}u_n(L^{n(\beta_n-\bar{\beta}_n)}x, 1) \sim A\phi(Bx). \end{equation} If we define $A_n=L^{n(\alpha_n-\bar{\alpha}_n)}$, $B_n=L^{n(\beta_n-\bar{\beta}_n)}$, we expect that \begin{equation}\label{eq:An_Bn} \lim_{n\rightarrow\infty}A_n=\lim_{n\rightarrow\infty}L^{n(\alpha_n-\bar{\alpha}_n)}\rightarrow A,\quad \lim_{n\rightarrow\infty}B_n=\lim_{n\rightarrow\infty}L^{n(\beta_n-\bar{\beta}_n)} \rightarrow B, \end{equation} provided \begin{equation}\label{eq:u_n_phi_2} u_n(L^{n(\beta_n-\bar{\beta}_n)}x, 1)\rightarrow \phi(Bx),\,\,\text{as}\,\,\,n\rightarrow\infty. 
\end{equation} \subsection{Numerical experiments}\label{sec:implement} We now describe the simple explicit algorithm used for solving the scaled PDE (\ref{eq:s_burgers_un}). Within each iteration of the nRG algorithm, we discretize the spatial derivatives by the centered difference scheme and use the first-order Euler method for our temporal discretization. If we denote \begin{equation}\label{eq:k_n} \kappa_n = L^{n(-\bar{\alpha}_n-\bar{\beta}_n+1)},\quad \nu_n=\nu L^{n(-2\bar{\beta}_n+1)},\quad v=u_n, \end{equation} at time $t^{j+1}=(j+1)\Delta t$, the fully discretized scaled equation of Eq. (\ref{eq:s_burgers_un}) at the $i^{th}$ grid point is \begin{equation}\label{eq:Euler-center} v_i^{j+1} = v_i^j -\Delta t\frac{\kappa_n}{2}\left(\frac{(v_{i+1}^j)^2-(v_{i-1}^j)^2}{2\Delta x_n}\right) + \Delta t \nu_n\left (\frac{v_{i-1}^j-2v_i^j+v_{i+1}^j}{\Delta x_n^2}\right ), \end{equation} where $\Delta x_n$ is the spatial grid size for the $n^{th}$ iteration. From Eq. (\ref{eq:scale_un}), if we denote the $i^{th}$ spatial node on a uniform mesh at the $n^{th}$ iteration as $(x_i)_n=i\Delta x_n$, where $i=0,\pm1,\pm2,\ldots$, we have \begin{equation}\label{eq:scaled_xi} u_n(i\Delta x_n,t)=L^{\alpha_n}\,u_{n-1}\left(L^{\beta_n}i\Delta x_{n},Lt\right)=L^{\alpha_n}\,u_{n-1}\left(i\Delta x_{n-1},Lt\right). \end{equation} This implies that \begin{equation}\label{eq:scaled_dx} \Delta x_n = L^{-\beta_n}\Delta x_{n-1}. \end{equation} Since $L>1$, if $\beta_n > 0$ for all $n$ then $\Delta x_n < \Delta x_{n-1} < \Delta x_{n-2}<\cdots<\Delta x_0$, and for sufficiently large $n$, $\Delta x_n\ll\Delta x_0$. If a uniform $\Delta t$ is used for all iterations, this could lead to numerical instability at later iterations for an explicit time integrator, such as the Euler method adopted in this paper. Conversely, if $\Delta t$ decreases accordingly to maintain the stability requirement, eventually the time integration becomes too costly for the nRG algorithm.
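For concreteness, one step of the update in Eq. (\ref{eq:Euler-center}) can be sketched as follows (a minimal illustration assuming a periodic grid; the demonstration data and parameter values are ours):

```python
def burgers_step(v, dt, dx, kappa_n, nu_n):
    """One explicit Euler step of the scaled Burgers equation,
    Eq. (Euler-center): centered differences for the flux (v^2)
    and for the diffusion term, with periodic boundaries."""
    m = len(v)
    out = []
    for i in range(m):
        vm, vp = v[i - 1], v[(i + 1) % m]
        flux = (vp * vp - vm * vm) / (2.0 * dx)
        diff = (vm - 2.0 * v[i] + vp) / (dx * dx)
        out.append(v[i] - dt * 0.5 * kappa_n * flux + dt * nu_n * diff)
    return out

# Demo: evolve a triangular hump for a few tiny steps.
v = [0.0] * 100
for i in range(41, 60):
    v[i] = 1.0 - abs(i - 50) / 10.0
m0 = sum(v)
for _ in range(20):
    v = burgers_step(v, 1e-4, 0.02, 1.0, 0.05)
```

Both difference operators telescope over a periodic grid, so the discrete mass $\sum_i v_i$ is conserved exactly, a useful sanity check on any implementation.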
A remedy for this situation is that, instead of scaling the mesh size, we keep the same $\Delta x$ at all times, i.e. $\Delta x_n=\Delta x_{n-1}= \cdots = \Delta x_0 = \Delta x$, through interpolation. To explain this idea, note that at the end of the first iteration ($n=0$), we are supposed to set the initial data at the $j^{th}$ node for the next iteration to be $f_1(x_j) = f_1(j\Delta x_1)=L^{\alpha_1}u_0(L^{\beta_1}j\Delta x_1, L)=L^{\alpha_1}u_0(j\Delta x_0, L)$, since $L^{\beta_1}\Delta x_1 = \Delta x_0$. If we choose $x_j$ on the new grid to be the same as that of the old grid, i.e. $j\Delta x_1 = j\Delta x_0$, we need the value $u_0(L^{\beta_1}j\Delta x_0, L)$. Since $u_0(j\Delta x_0, L)$ has been computed for each $j$, to obtain $u_0(L^{\beta_1}j\Delta x_0, L)$ we can simply linearly interpolate between $u_0(k\Delta x_0, L)$ and $u_0((k+1)\Delta x_0, L)$, where $k\Delta x_0< L^{\beta_1}j\Delta x_0< (k+1)\Delta x_0$ for some $k$. By repeating this process, we have $\Delta x=\Delta x_0=\Delta x_1=\Delta x_2=\cdots = \Delta x_n$. The above interpolation principle can be extended to quadratic or cubic interpolation, and to spline functions as well. This interpolation-resampling strategy was previously proposed in \cite{SO98} as a means to capture the consequences of space-time translational symmetry on a discrete lattice. \begin{note} If an interpolation scheme is used, the last step in Algorithm \ref{alg1:nrg} is modified by $L^{\alpha_{n+1}}u_{n}\left(L^{\beta_{n+1}}x,L\right) = \displaystyle\frac{1}{\max|\bar{u}_n\left(L^{\beta_{n+1}}x,L\right)|}\bar{u}_{n}\left(L^{\beta_{n+1}}x,L\right)$, where $\bar{u}_{n}\left(L^{\beta_{n+1}}x,L\right)$ is the interpolation of $u_n\left(L^{\beta_{n+1}}x,L\right)$. This normalization step is necessary to prevent the amplitude from diminishing due to the interpolation.
\end{note} \subsubsection{Positive initial mass}\label{sec:positive} The first initial condition we consider for our numerical experiments is the characteristic function \begin{equation}\label{eq:init} u(x,0)=\chi_{[-\ell,\ell]}(x) =\begin{cases} 1, & -\ell \le x \le \ell, \\ 0, &\text{else}. \end{cases} \end{equation} It is known that a conserved quantity of the Burgers equation is the total mass defined by \begin{equation}\label{eq:tm} M=\int_{-\infty}^{\infty}u(x,t)dx. \end{equation} For the characteristic function, the total mass is trivial to compute initially. Whitham \cite{whitham74} showed that a special asymptotic self-similar single-hump solution of the Burgers equation, for initial data possessing positive total mass, has the following explicit formula (page 104, Eq. (4.32)): \begin{equation}\label{eq:whitham} u(x,t) = \sqrt{\frac{2M}{t}} g(z,R), \end{equation} where $M>0$ is the total mass of the initial data, $z= x/\sqrt{2Mt}$ is the similarity variable, and $R=M/(2\nu)$ is the Reynolds number, where $\nu$ is the viscosity. The function $g(z,R)$ is \begin{equation}\label{eq:g} g(z,R) = \frac{(e^R-1)}{2\sqrt{R}}\frac{e^{-z^2R}}{\sqrt{\pi}+(e^R-1)\displaystyle\int_{z\sqrt{R}}^{\infty}e^{-\xi^2}d\xi}. \end{equation} Based on dimensional arguments, Whitham indicated that the similarity form of the above solution is \begin{equation}\label{eq:similarity} u(x,t) = \displaystyle\sqrt{\frac{\nu}{t}}\phi\left(\displaystyle\frac{x}{\sqrt{\nu t}};\displaystyle\frac{M}{\nu}\right). \end{equation} Comparing Eq. (\ref{eq:rg_similarity}) and Eq. (\ref{eq:similarity}), we expect that the sequences of exponents $\{\alpha_n\}$ and $\{\beta_n\}$ in the nRG calculation converge to $\alpha=1/2$ and $\beta=1/2$, respectively, for sufficiently large iteration numbers. Hence our nRG procedure starts by letting $\beta_n=1/2$ for all $n$. This means that the spatial variable is always scaled by $L^{-1/2}$ for the next iteration.
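On the fixed grid, this per-iteration rescaling is carried out by the interpolation-resampling step together with the amplitude normalization of the Note. A hedged sketch (our own illustration: it samples $u_n$ at the stretched points $L^{\beta}x_j$, following $f_{n+1}(x)=L^{\alpha_{n+1}}u_n(L^{\beta_{n+1}}x,L)$, and clamps sample points falling outside the grid to the boundary value, where the solution is assumed negligible):

```python
import math

def resample_and_normalize(u_end, x, L, beta):
    """Fixed-grid resampling: evaluate u_n(L^beta * x_j, L) on the SAME
    grid x by linear interpolation, then normalize the amplitude to one
    (the normalization of the Note).  Sample points outside the grid
    are clamped to the boundary value."""
    dx = x[1] - x[0]
    m = len(x)
    out = []
    for xj in x:
        s = (L ** beta * xj - x[0]) / dx   # fractional index of L^beta x_j
        k = min(max(int(math.floor(s)), 0), m - 2)
        th = min(max(s - k, 0.0), 1.0)
        out.append((1.0 - th) * u_end[k] + th * u_end[k + 1])
    amp = max(abs(v) for v in out)
    return [v / amp for v in out]

# Demo: contracting a Gaussian stand-in for u_n(., L) with beta = 1/2.
x = [-8.0 + 0.01 * i for i in range(1601)]
u_end = [math.exp(-xi * xi) for xi in x]
nxt = resample_and_normalize(u_end, x, 1.2, 0.5)
```

Resampling a Gaussian $e^{-x^2}$ this way yields (up to interpolation error) the narrower profile $e^{-(L^{1/2}x)^2} = e^{-1.2\,x^2}$, as expected from the contraction of the spatial variable.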
As we mentioned earlier, this assumption also corresponds to ensuring that the viscosity remains the same at all times, that is $\nu_n=\nu$. Comparing Eqs. (\ref{eq:u_seq}) and (\ref{eq:whitham}), we see that the term $\sqrt{t}\,u(\hat{z},t)$ is equivalent to $L^{n\bar{\alpha}_n}u(\hat{z}, L^n)$, where $\hat{z}=\sqrt{2 M} z = \frac{1}{\sqrt{t}} x = L^{-n/2} x$ with $t=L^n$. With the assumption $\beta_n=1/2$ for all $n$, we have $\bar{\beta}_n=1/2$, and $L^{-n\bar{\beta}_n} = \frac{1}{\sqrt{t}}$. Moreover, if $\{\alpha_n\}$ approaches $1/2$ (later we will show in our numerical experiments that this is in fact the case), then $\bar{\alpha}_n \approx 1/2$ for sufficiently large $n$. Hence $L^{n\bar{\alpha}_n} \approx \sqrt{t}$. This implies \begin{equation}\label{eq:2mg} u_n(L^{-n/2}x,1) = L^{n/2} u(x, t) = \sqrt{t}\, u(x,t)=\sqrt{2M}\,g(z,R), \end{equation} with $t=L^n$. Now plugging $z=\displaystyle\frac{\hat{z}}{\sqrt{2M}}$ into $g(z,R)$ to obtain $g(\hat{z},R)$, we have \begin{equation}\label{eq:2mg_hat} u_n(L^{-n/2}x,1) = \sqrt{2M}\,g(\hat{z},R). \end{equation} Using Eq. (\ref{eq:2mg_hat}), if we plot the normalized function of $\sqrt{2M}\,g(\hat{z}, R)$ versus $\hat{z}$ (so that the amplitude is one), this should be equivalent to plotting $u_n$ against $L^{-n/2}x$ at the time evolution $t=L^{n}$. We are now ready to compare the theoretical similarity profile derived by Whitham \cite{whitham74} (page 106, Figure 4.1) and the one obtained from our nRG procedure. Figure \ref{fig:positive_comp1} shows the comparison of nRG calculations and the theoretical asymptotic solutions. For each case (a), (b), and (c) of Figure \ref{fig:positive_comp1}, the initial mass is $M=1,\,1$, and $2$, respectively ($\ell =1/2, 1/2$ and 1, respectively in Eq. (\ref{eq:init})), while the diffusivity constant is $\nu=0.01,\, 0.05$, and $0.01$, respectively. The figure shows that the nRG calculations remarkably capture the theoretical predictions for various initial data and diffusivity constants.
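Whitham's profile in Eq. (\ref{eq:g}) is easy to evaluate numerically, since the integral in the denominator is a complementary error function, $\int_{z\sqrt{R}}^{\infty}e^{-\xi^2}d\xi = \frac{\sqrt{\pi}}{2}\,\mathrm{erfc}(z\sqrt{R})$. Moreover, since $u=\sqrt{2M/t}\,g$ and $x=\sqrt{2Mt}\,z$, conservation of the total mass $M$ forces $\int g\,dz = 1/2$ for every $R$, which provides a convenient check on any implementation (the quadrature below is our own illustration):

```python
import math

def g(z, R):
    """Whitham's similarity profile, Eq. (g), with the tail integral
    expressed through the complementary error function."""
    tail = 0.5 * math.sqrt(math.pi) * math.erfc(z * math.sqrt(R))
    return ((math.exp(R) - 1.0) * math.exp(-z * z * R) / (2.0 * math.sqrt(R))
            / (math.sqrt(math.pi) + (math.exp(R) - 1.0) * tail))

# Mass identity check for R = M/(2 nu) = 10 (e.g. M = 1, nu = 0.05):
R = 10.0
dz = 0.001
mass = dz * sum(g(-3.0 + k * dz, R) for k in range(6001))
```

The check works because $g = -\frac{1}{2R}\frac{d}{dz}\ln\bigl(\sqrt{\pi}+(e^R-1)\int_{z\sqrt{R}}^{\infty}e^{-\xi^2}d\xi\bigr)$, so the integral telescopes to $\frac{1}{2R}\ln e^{R} = \frac{1}{2}$.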
In particular, the left panel compares the final waveform of our RG calculations with the predicted theoretical profile, while the right panel indicates the convergence of $A_n$ to its theoretical value $A$. Note that, integrating Eq. (\ref{eq:u_n_phi_1}), and using Eqs. (\ref{eq:An_Bn}) and (\ref{eq:u_n_phi_2}), the theoretical value $A$ is given by \begin{equation} A = \displaystyle\frac{M}{\int_{\mathbb{R}}\phi(x)dx}, \end{equation} where $\phi(x)$ is the computed RG profile. The total number of iterations for all three cases is $n=500$. Within each iteration, the calculation was carried out in a periodic domain, $-8\le x\le 8$, with $\Delta x= 16/5000$, while the time integration is from $t=1$ to $t=L=1.2$ with $\Delta t=0.2/2000$. \begin{figure}[bhtp] \centering (a)\includegraphics[width=2.8in]{positive_M_1_nu_0_01-eps-converted-to.pdf} \includegraphics[width=2.8in]{A_n_nu001m1-eps-converted-to.pdf}\\ (b)\includegraphics[width=2.8in]{positive_M_1_nu_0_05-eps-converted-to.pdf} \includegraphics[width=2.8in]{A_n_nu005m1-eps-converted-to.pdf}\\ (c)\includegraphics[width=2.8in]{positive_M_2_nu_0_01-eps-converted-to.pdf} \includegraphics[width=2.8in]{A_n_nu001m2-eps-converted-to.pdf} \caption{Comparisons between the nRG calculations and the asymptotic self-similar solutions shown in \cite{whitham74}. The left panel shows the waveforms of the RG solutions at the final iteration $n=500$ and the theoretical predictions for various $M$ and $\nu$, while the right panel shows the convergence of $A_n$ to its theoretical value $A$. The initial mass and viscosity used for the comparisons are (a) $M=1$, $\nu=0.01$, (b) $M=1$, $\nu=0.05$, and (c) $M=2$, $\nu=0.01$. All numerical calculations use $\Delta x=0.0032$ and $\Delta t=0.0001$.} \label{fig:positive_comp1} \end{figure} Figure \ref{fig:positive_comp2} shows the calculations of $\alpha_n$ and $\kappa_n$.
The figure indicates that $\alpha_n\rightarrow\frac{1}{2}$ and that $\kappa_n \ne 0$ and is of $O(1)$. This suggests that in the asymptotic region the diffusion constant is unscaled. Since the diffusion constant is small and the coefficient of the advection is of order one in all three cases, the equation is advection-dominated in the asymptotic region. \begin{figure}[bhtp] \centering (a)\includegraphics[width=2.8in]{alpha_m_1_nu_001-eps-converted-to.pdf} \includegraphics[width=2.8in]{kappa_m_1_nu_001-eps-converted-to.pdf}\\ (b)\includegraphics[width=2.8in]{alpha_m_1_nu_005-eps-converted-to.pdf} \includegraphics[width=2.8in]{kappa_m_1_nu_005-eps-converted-to.pdf}\\ (c)\includegraphics[width=2.8in]{alpha_m_2_nu_001-eps-converted-to.pdf} \includegraphics[width=2.8in]{kappa_m_2_nu_001-eps-converted-to.pdf}\\ \caption{$\alpha_n$ (left) and $\kappa_n$ (right) versus $n$ (number of iterations). The initial mass and viscosity are the same as that in Figure \ref{fig:positive_comp1} (a), (b), and (c). The figure indicates that $\alpha_n\rightarrow\frac{1}{2}$ and $\kappa_n \ne 0$ and is of $O(1)$, as expected.} \label{fig:positive_comp2} \end{figure} \subsubsection{Zero initial mass} In this example, we consider the following initial condition for the Burgers equation \begin{equation}\label{eq:init_zero} u_0(x)=-\chi_{[-\ell, 0]}+\chi_{(0, \ell]}=\begin{cases} -1, & -\ell \le x \le 0, \\ 1, & 0 < x \le \ell, \\ 0, &\text{else}. \end{cases} \end{equation} The total mass of the above function is zero. For this initial condition, Whitham showed that for small diffusivity constants, the inviscid theory is adequate to explain the solutions of the Burgers equation for most of the range, except in the final decay period. The solutions exhibit the typical $N$-wave structure before the final decay.
However, as $t\rightarrow\infty$, in the final decay, for any fixed diffusivity constant, no matter how small, the solution of the Burgers equation is \begin{equation}\label{eq:diploe} u(x,t)\sim \frac{x}{t}\sqrt{\frac{a}{t}}e^{-x^2/(4\nu t)}, \end{equation} for some fixed $a$. \begin{figure}[hbtp] \centering (a)\includegraphics[width=2.8in]{zero_mass_n_400_nu_0_01-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{zero_mass_comp_nu_0_01-eps-converted-to.pdf} \caption{Numerical experiment with zero initial mass ($M=0$). The viscosity is $\nu=0.01$. (a) Waveform exhibiting $N$-wave structure computed by the nRG algorithm at the $400^{th}$ iteration. (b) Comparison between the asymptotic dipole-like solution and the nRG calculation at the $1500^{th}$ iteration.} \label{fig:zero_comp} \end{figure} Eq. (\ref{eq:diploe}) is the dipole solution of the heat equation, which means that the diffusion dominates the nonlinear term in the final decay, regardless of the magnitude of the diffusivity constant. To study the final decay of the solution by the nRG algorithm for this class of initial data, we let $\beta$ in the spatial scaling be fixed and equal to $1/2$. This ensures that the coefficient in front of the diffusion term remains unscaled at all times. We set the diffusivity constant $\nu=0.01$ and the parameter $L=1.2$. The total number of iterations is $n=1500$, and $\ell=1$ in the initial condition. Figure \ref{fig:zero_comp}(a) shows the snapshot of the nRG solution at the $400^{th}$ iteration. It clearly shows that the solution has the $N$-wave structure at this stage. In Figure \ref{fig:zero_comp}(b), the final profile of the nRG calculation at $n=1500$ is compared with the dipole solution of the heat equation. The fixed number $a$ in Eq. (\ref{eq:diploe}) is chosen to be $L^{2n}$. Recall that the scaled spatial variable in the nRG calculation is $\hat{z}=L^{-n(1/2)}x=x/\sqrt{t}$. Substituting $a=L^{2n}$ and $\hat{z}=L^{-n(1/2)}x=x/\sqrt{t}$ into Eq.
(\ref{eq:diploe}) and knowing that $L^{2n}=t^2$ yields the dipole solution as a function of the similarity variable $\hat{z}$ \begin{equation}\label{eq:dipole_scaled} u(\hat{z})=\hat{z}e^{-\hat{z}^{2}/(4\nu)}. \end{equation} The circles in Figure \ref{fig:zero_comp}(b) are the above dipole solution, with the amplitude being normalized to one, plotted against the similarity variable. The result of the nRG calculation at $n=1500$ (the solid line in Figure \ref{fig:zero_comp}(b)) correctly predicts the final decay. \begin{figure}[hbtp] \centering (a)\includegraphics[width=2.8in]{alpha_zero_mass-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{kappa_zero_mass-eps-converted-to.pdf} \caption{Scaled coefficients in the previous calculation (Figure \ref{fig:zero_comp}). (a) $\alpha_n$ and (b) $\kappa_n$ versus $n$ (number of iterations). } \label{fig:zero_mass_param} \end{figure} The convergence of the scaling parameters $\alpha_n$ and $\kappa_n$ is shown in Figure \ref{fig:zero_mass_param}(a) and (b), respectively. They indicate that $\alpha_n\rightarrow 1$, which results in $\kappa_n\rightarrow 0$, where $\kappa_n$ is the coefficient in front of the advection term of the scaled PDE. Such a convergence of $\kappa_n$ suggests that the diffusion term dominates the final decay, regardless of the magnitude of the diffusion constant, which is kept unscaled throughout the calculation. These figures show the mechanism of the solution transitioning from the $N$-wave structure to the dipole solution, as predicted in Whitham's analysis. The choice of the periodic domain, $\Delta x$, and $\Delta t$ is the same as that used in Figure \ref{fig:positive_comp1}. We remark that to prevent numerical artifacts from distorting the symmetry of the solution during the step of normalizing the amplitude, we apply the negative mirror image of the right hump for the left one at the end of each time evolution.
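The normalized dipole profile of Eq. (\ref{eq:dipole_scaled}) used for the comparison in Figure \ref{fig:zero_comp}(b) can be sketched as follows; its maximum sits at $\hat{z}=\sqrt{2\nu}$, which fixes the normalization:

```python
import math

nu = 0.01  # viscosity used in the zero-mass experiment

def dipole(zh):
    """Normalized dipole profile zh * exp(-zh^2 / (4 nu)) of Eq.
    (dipole_scaled), scaled so its maximum amplitude is one.  Setting
    the derivative to zero shows the maximum sits at zh = sqrt(2 nu)."""
    peak = math.sqrt(2.0 * nu) * math.exp(-0.5)
    return zh * math.exp(-zh * zh / (4.0 * nu)) / peak
```

The profile is odd in $\hat{z}$, so its total mass vanishes, consistent with the zero-mass initial data.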
\section{Korteweg-de Vries equation and dispersive shock waves}\label{sec:KdV} In this section, we illustrate that the nRG procedure is an efficient method for studying the asymptotic behavior of dispersive shock waves (DSWs). DSWs appear when dispersion dominates dissipation for step-like data; they have been seen in plasmas, fluids, superfluids and optics \cite{AB13a}. In a sequence of papers, Ablowitz et al. analyze interactions and asymptotics of DSWs for the Korteweg-de Vries (KdV) equation \cite{ABH09, AB13a, AB13b}. The KdV equation is chosen for their study because it is the leading-order asymptotic equation for weakly dispersive and weakly nonlinear systems. Consider the dimensionless form of the KdV equation \begin{equation}\label{eq:KdV} u_t+uu_x+\epsilon^2u_{xxx} = 0, \end{equation} with $u=u(x,t)$ going rapidly to the boundary conditions \begin{equation} \lim_{x\rightarrow -\infty} u(x, t)=0,\quad \lim_{x\rightarrow \infty} u(x, t)=-6c^2. \end{equation} Single-step initial data for the above problem evolve to a wedge-shaped envelope combining three basic regions: an exponential decay region on the right, the DSW region in the middle, and a region of oscillating tail on the left \cite{AB13a}. All three regions travel to the left along the trajectory $x=-12c^2 t$, while the DSW region expands at the order $O(t)$ \cite{AB13a, AB13b}. The amplitude of the DSWs saturates at $6c^2$. The KdV equation for the described problem can be posed as an initial-boundary-value problem (IBVP) on a truncated domain $-\ell\le x\le\ell$, where the initial and boundary data are prescribed as follows \begin{equation}\label{eq:KdV_ibvp} \begin{cases} u_t+uu_x+\epsilon^2u_{xxx}&= 0,\quad x\in[-\ell,\ell],\,t>0,\\ u(x,0)&=3(1-\tanh((x-x_0)/w)-2),\\ u(-\ell,t)&=3(1-\tanh((-\ell-x_0)/w)-2),\\ u(\ell,t)&=3(1-\tanh((\ell-x_0)/w)-2),\\ u_x(\ell,t)&=0. \end{cases} \end{equation} For example, if $w=1$ and $\ell-x_0=20$, we have $u(-\ell,t)\approx 0$ and $u(\ell,t)\approx-6$.
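The limiting boundary values quoted above are easy to verify numerically. The short Python sketch below evaluates the hyperbolic tangent data of Eq. (\ref{eq:KdV_ibvp}) at the two ends of the truncated domain; the function name and the choice $\ell=40$, $x_0=20$ are illustrative assumptions, chosen only so that $\ell-x_0=20$.

```python
import math

def u0(x, x0=20.0, w=1.0):
    # Initial/boundary profile u = 3*(1 - tanh((x - x0)/w) - 2) from Eq. (KdV_ibvp)
    return 3.0 * (1.0 - math.tanh((x - x0) / w) - 2.0)

# Hypothetical domain half-length l chosen so that l - x0 = 20:
l = 40.0
left, right = u0(-l), u0(l)   # expected: left ~ 0, right ~ -6
```

Since $\tanh$ saturates exponentially, the truncation error at the boundaries is far below double precision for this choice of $\ell-x_0$.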
The existence and uniqueness of the solution of the above IBVP is discussed in \cite{BSZ03}. The numerical scheme we adopt for solving Eq. (\ref{eq:KdV_ibvp}) is a non-oscillatory explicit finite-difference method \cite{wwh08}. The spatial and temporal discretization for the algorithm is \begin{equation}\label{eq:kdv_FD_algorithm} \begin{split} \frac{1}{2}\left(\frac{u^{n+1}_{j-1} - u^{n}_{j-1}}{\Delta t} +\frac{u^{n}_{j+1} - u^{n-1}_{j+1}}{\Delta t} \right) = & - \left(\frac{u^n_{j+1}+u^n_{j}+u^n_{j-1}}{3}\right)\frac{u^n_{j+1}-u^n_{j-1}}{\Delta x}\\ &-\frac{\epsilon^2}{2\Delta x^3}\left(u^n_{j+2}-2u^n_{j+1}+2u^n_{j-1}-u^n_{j-2}\right). \end{split} \end{equation} Applying von Neumann analysis to the above scheme yields the stability condition \begin{equation}\label{eq:stable-cond} \frac{\Delta t}{\Delta x} < \frac{2}{\underset{x, t}{\max}|u| + 4 \epsilon^2 / \Delta x}. \end{equation} Using the same scaling variables as in Eq. (\ref{eq:scale_tx}) and Eq. (\ref{eq:scale_u}), the scaled KdV equation is \begin{equation} u_t + L^{-\alpha-\beta+1}uu_x +\epsilon^2 L^{-3\beta +1}u_{xxx} = 0. \end{equation} Here we drop the subscript $L$ for $u$ and $\,\tilde{}$ for $t$ and $x$. Since the amplitude of DSWs saturates at $6c^2$, it is not necessary to scale the amplitude and hence we set $\alpha=0$ for our nRG calculations. Also, similar to the Burgers equation, we choose to retain the coefficient of the dispersion term unscaled. This results in $\beta=1/3$ at all times for our nRG calculations, which suggests that the DSW region expands at the order $O(t^{1/3})$ for the RG calculations. We choose $c^2=1$. With this set of parameters, Figures \ref{fig:KdV_step} and \ref{fig:KdV_step2} compare direct numerical simulations and nRG calculations. The initial condition is a single-step hyperbolic tangent profile.
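As a small illustration of the stability condition (\ref{eq:stable-cond}), the sketch below computes the largest admissible $\Delta t$ for given $\Delta x$, $\max|u|$, and $\epsilon^2$. The sample numbers (the saturated amplitude $6c^2=6$, $\Delta x=40/2000$, $\epsilon^2=0.1$) are taken from the experiments reported below; the function name is our own.

```python
def max_stable_dt(dx, umax, eps2):
    # Largest dt allowed by dt/dx < 2 / (max|u| + 4*eps^2/dx), Eq. (stable-cond)
    return 2.0 * dx / (umax + 4.0 * eps2 / dx)

# Illustrative numbers: saturated DSW amplitude 6, dx = 40/2000, eps^2 = 0.1
dt_bound = max_stable_dt(dx=0.02, umax=6.0, eps2=0.1)
```

Note that once the dispersive term dominates the denominator, refining $\Delta x$ tightens the bound roughly like $(\Delta x)^2$, which is the scaling used in the efficiency estimate below.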
The graphs show that the nRG procedure accurately captures the DSWs in a confined domain and the results are consistent with those in the literature \cite{AB13a, AB13b}. Moreover, a simple calculation below illustrates that the nRG procedure is more efficient than direct numerical calculation. Suppose that the final time for a direct simulation is $t=L^n$ and the number of solitons in the region of DSWs at the final time is $N_{s}$. Assume that the spatial grid-size required to resolve these solitons is $\Delta x$. For this $\Delta x$, the required temporal step-size is $\Delta t =O((\Delta x)^2)$, by the stability condition (\ref{eq:stable-cond}). Therefore the total number of time steps required for the simulation is $L^{n}/\Delta t \sim L^{n}/(\Delta x)^2$. Now for the nRG procedure, $t=L^n$ means the number of iterations is $n$. Since the dispersion coefficient is kept the same, the number of solitons after $n$ iterations is also $N_{s}$ for the nRG procedure. However, because of the spatial scaling, the spatial grid-size required for resolving the solitons is now $\Delta\tilde{x}=L^{-n\beta}\Delta x$. Hence the temporal step-size for the stability requirement is $\Delta\tilde{t} = O((\Delta\tilde{x})^2)$. The number of time steps for the nRG procedure to reach the final time is $n(L-1)/\Delta\tilde{t} \sim n(L-1)L^{2n\beta}/(\Delta x)^2$. Since $\beta=1/3$, the numerator of the above expression is $n(L-1)L^{2n/3}$, which is less than $L^n$ for large $n$. Our numerical experiments also confirm that direct numerical simulation is much more time consuming than the nRG calculation for long-time simulations. We further remark that if an implicit algorithm is used, for which $\Delta t$ can be chosen at the same order as $\Delta x$ \cite{SK09}, then the nRG procedure is even more advantageous compared with direct numerical simulation.
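The operation count above is simple bookkeeping and can be tabulated directly. The Python sketch below (illustrative only, with the common factor $1/(\Delta x)^2$ kept in both counts) confirms that the nRG step count $n(L-1)L^{2n/3}/(\Delta x)^2$ falls below the direct count $L^{n}/(\Delta x)^2$ once $n$ is moderately large.

```python
def direct_steps(L, n, dx):
    # Direct simulation to t = L**n with dt ~ dx^2
    return L**n / dx**2

def nrg_steps(L, n, dx):
    # n nRG iterations with beta = 1/3: n*(L-1)*L**(2n/3) time steps per dx^2
    return n * (L - 1.0) * L**(2.0 * n / 3.0) / dx**2

L, dx = 1.2, 0.02
ratio_small_n = nrg_steps(L, 10, dx) / direct_steps(L, 10, dx)  # > 1: no payoff yet
ratio_large_n = nrg_steps(L, 60, dx) / direct_steps(L, 60, dx)  # < 1: nRG is cheaper
```

The crossover depends on $L$; for $L=1.2$ the nRG count wins from a few tens of iterations onward, and the advantage grows geometrically thereafter.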
\begin{figure}[bhtp] \centering (a)\includegraphics[width=2.8in]{t-e-10-direct-t-1_2-to-8-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{t-e-10-nrg-t-1_2-to-8-eps-converted-to.pdf} \caption{(a) Direct numerical simulation. $t/ \epsilon=10$, where $t=1.2^8$. The spatial and temporal step sizes are $\Delta x=40/2000$ and $\Delta t=1.2^{8}/80000$, respectively. $x_0=20$ and $w=1$. (b) Numerical renormalization group calculation. $\beta=1/3$, $\alpha=0$, $L=1.2$, and $t/ \epsilon =10$. Eight iterations are performed ($n=8$, i.e. $t=1.2^{8}$). For both (a) and (b), the dashed line is the initial condition.} \label{fig:KdV_step} \end{figure} \begin{figure}[bhtp] \centering (a)\includegraphics[width=2.8in]{direct_simulation_t_1_2_to_20-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{ep2_0_1_t_1_2_to_20-eps-converted-to.pdf} \caption{(a) Direct numerical simulation. $x_0=350$, $w=1$, $\Delta x=1/80$, $\Delta t=(1.2)^{20}/32000000\approx 1.19805\times 10^{-6}$. $\epsilon^2=0.1$. (b) Numerical renormalization group calculation. $\beta=1/3$, $\alpha=0$, $L=1.2$, and $\epsilon^2=0.1$. Twenty iterations are performed ($n=20$, i.e. $t=1.2^{20}$), $x_0=150$, $w=1$, $\Delta x=1/80$, and $\Delta t=1\times 10^{-6}$. For both (a) and (b), the dashed line is the initial condition.} \label{fig:KdV_step2} \end{figure} \section{A modified diffusion-absorption model}\label{sec:diffusion_absorption} In this section we consider a one-dimensional modified diffusion-absorption model \begin{equation}\label{eq:diff-absorp} u_t = D(u_t) \partial_{xx}u^{m+1} -\lambda u^p, \end{equation} where $m\ge 0$ and $D(u_t)$ is the transport coefficient. Typically, in the literature the following cases are considered: \begin{enumerate} \item $D(u_t)=1$ and $m\ge1$. In this case, if $\lambda >0$ and $p\ge m+1$, this is a model of the nonlinear heat equation with absorption \cite{bib:HV87}, whereas if $\lambda > 0$ and $0<p<1$, this is slow diffusion combined with strong absorption \cite{bib:GSV99}.
\item $D(u_t)$ is a Heaviside function: \begin{equation}\label{eq:Heaviside} D(u_t) = \begin{cases} 1+\epsilon & \quad \text{for}\,\, u_t < 0 \\ 1 & \quad \text{for}\,\, u_t > 0. \end{cases} \end{equation} For this case, if $\lambda =0$ and $m=0$, this is the so-called Barenblatt equation \cite{bib:CW96, GMOL90, Barenblatt87}, whereas if $\lambda =0$ and $m=1$, it is called the modified porous-medium equation \cite{CGO91}. \end{enumerate} In this section, our investigation focuses on comparing the longtime solution behavior for the discontinuous $D(u_t)$ with that for the following non-constant continuous alternatives: \begin{enumerate} \item $D(u_t)$ is a smoothly transitioning profile connecting $1+\epsilon$ and 1, or \begin{equation}\label{eq:D(u_t)} D(u_t) = 1+ \epsilon\left(\frac{1}{2}\left(1+\tanh(-u_t/\sigma)\right)\right), \end{equation} where $D(u_t)\rightarrow 1+\epsilon$ as $u_t\rightarrow -\infty$ and $D(u_t)\rightarrow 1$ as $u_t\rightarrow\infty$. $\sigma > 0$ is a parameter that adjusts the width of the smooth transition of the hyperbolic tangent profile. \item $D(u_t)$ is a piecewise linear continuous function that connects the two constant states $1+\epsilon$ and $1$ by a straight line, or \begin{equation}\label{eq:D(u_t)2} D(u_t) = \begin{cases} 1+\epsilon &\quad \text{for}\,\,u_t < -\delta \\ 1-\displaystyle\frac{\epsilon}{2\delta}\left(u_t-\delta\right) & \quad \text{for}\,\,-\delta<u_t<\delta\\ 1& \quad \text{for}\,\,u_t > \delta, \end{cases} \end{equation} where $\delta$ plays a role similar to that of $\sigma$ in Eq. (\ref{eq:D(u_t)}). \end{enumerate} \subsection{Validation for the RG algorithm}\label{sec:barenblatt} The self-similar solution of Barenblatt's equation studied in the literature \cite{bib:CW96} is an ideal example to validate our nRG algorithm.
Consider Barenblatt's equation \begin{equation}\label{eq:Baren} \begin{split} &u_t = D(u_t) u_{xx},\\ &u(t=0,x) = u_0(x), \end{split} \end{equation} where $D(u_t)$ is the Heaviside function defined in Eq. (\ref{eq:Heaviside}). The asymptotic similarity form of Barenblatt's equation is \begin{equation}\label{eq:time_decay} u(x,t)\simeq \frac{A}{t^\alpha}\phi\Big(\frac{x}{2\sqrt{\kappa_{+} t}}\Big),\quad \kappa_{+} = \,\,\text{diffusivity in the regime where}\,\, \frac{\partial u}{\partial t} > 0 \end{equation} \cite{bib:CW96}, where $A$ is some pre-factor. For our numerical validation $\kappa_{+} =1$ and $\kappa_{-} =1+\epsilon$, as shown in Eq. (\ref{eq:Heaviside}). The parameter $\alpha$ is a function of the diffusivity ratio $\kappa_{-} / \kappa_{+}$. Cole and Wagner \cite{bib:CW96} showed that the asymptotic expansion of $\alpha$ in $\epsilon$ is \begin{equation} \alpha(\epsilon) = \alpha_0 + \epsilon\alpha_1+\epsilon^2\alpha_2+\cdots, \end{equation} where \begin{equation}\label{eq:perturbation} \alpha_0 = \frac{1}{2},\quad \alpha_1= \frac{1}{\sqrt{2\pi e}},\quad \text{and}\,\,\,\,\,\alpha_2 = -0.06354624. \end{equation} We remark that since the asymptotic similarity solution of the linear heat equation has the time decay rate $\alpha=\alpha_0=1/2$, the $\epsilon$ and $\epsilon^2$ terms in Eq. (\ref{eq:perturbation}) are sometimes called {\it the anomalous dimension} of the decay. Barenblatt's equation (\ref{eq:Baren}) is essentially nonlinear, since the diffusivity is a function of $u_t$. Suppose that the time and space variables and the amplitude of $u$ are scaled in the same way as in Eqs. (\ref{eq:scale_tx}) and (\ref{eq:scale_u}); then the scaled Barenblatt equation for the $n^{th}$ RG iteration becomes \begin{equation}\label{eq:scaled_Baren} \begin{split} &(u_n)_{t}=D\left(L^{-n(\bar{\alpha}_{n}+1)}(u_{n})_t\right) L^{n(-2\bar{\beta}_{n}+1)} (u_n)_{xx},\,\,t >1,\\ \text{I.
C.\,\,:}\quad&u_n(x,1) = f_n(x), \end{split} \end{equation} where $\bar{\alpha}_{n}$, $\bar{\beta}_{n}$ and $f_n(x)$ have been defined in Section \ref{sec:seq}. We solve the above initial-value problem by choosing the initial condition \begin{equation}\label{eq:IC_experiment} u_0(x, 1) = \begin{cases} \cos x,\quad &-\frac{\pi}{2} \le x\le \frac{\pi}{2},\\ 0\quad &x > \frac{\pi}{2}\,\,\text{or}\,\, x < -\frac{\pi}{2}. \end{cases} \end{equation} and discretizing Eq. (\ref{eq:scaled_Baren}) with the $2^{nd}$-order Crank-Nicolson scheme. The diffusivity at the $(k+1)^{th}$ time step in the $n^{th}$ RG iteration is linearized by $D(L^{-n(\bar{\alpha}_{n}+1)}(3u_n^{k}-4u_n^{k-1}+u_n^{k-2})/(2\Delta t))$. In our nRG calculation, the spatial scaling parameter $\beta=1/2$ is fixed, so that the magnitude of the diffusivity remains unscaled at all times. The time integration in each nRG iteration spans from $t=1$ to $t=L=2$, while the total number of iterations is 100. A periodic domain $x\in [-10, 10]$ with the temporal step $\Delta t = 0.05$ and the spatial step $\Delta x = \frac{20}{160}= 0.125$ is used for all numerical computations with different $\epsilon$-values. A cubic interpolation scheme is employed for the grid interpolation approach discussed in Section \ref{sec:implement}. Figure \ref{fig:validation} compares the time decay exponent $\alpha$ in Eq. (\ref{eq:time_decay}) captured by the nRG algorithm with the linear and quadratic perturbative values predicted by Eq. (\ref{eq:perturbation}) in the literature \cite{bib:CW96}. The actual values of $\alpha(\epsilon)$ used for the comparison, for the 20 $\epsilon$-values $\epsilon = -0.9, -0.8,\cdots, 0.9, 1.0$, are listed in Table \ref{tab:validation} in Appendix \ref{app:data1}. From both the figure and the table, we see that when $\epsilon$ is small, the $\alpha$-values given by the nRG algorithm and the perturbative formula almost coincide, since the formulas are derived under the assumption of small $\epsilon$.
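For reference, the quadratic truncation of Cole and Wagner's expansion (\ref{eq:perturbation}), which is the curve used in the comparison above, can be evaluated directly; the sketch below is a straightforward transcription (the function name is ours).

```python
import math

ALPHA2 = -0.06354624  # quadratic coefficient from Eq. (perturbation)

def alpha_perturbative(eps):
    # alpha(eps) ~ 1/2 + eps/sqrt(2*pi*e) + eps^2 * ALPHA2, truncated at O(eps^2)
    alpha1 = 1.0 / math.sqrt(2.0 * math.pi * math.e)
    return 0.5 + eps * alpha1 + eps * eps * ALPHA2
```

As expected from the asymmetry of the Heaviside jump, the linear term raises the decay exponent above $1/2$ for $\epsilon>0$ and lowers it for $\epsilon<0$.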
\begin{figure}[h] \centering \includegraphics[width=4.8in]{Barenblatt_validation-eps-converted-to.pdf} \caption{The time decay exponent $\alpha$ in Eq. (\ref{eq:time_decay}) captured by the nRG algorithm versus the perturbative values predicted by the linear and quadratic formulas, Eq. (\ref{eq:perturbation}), given in the literature \cite{bib:CW96}.} \label{fig:validation} \end{figure} \subsection{Non-constant continuous $D(u_t)$} Suppose now the Heaviside diffusivity function (\ref{eq:Heaviside}) is replaced by two types of continuous functions: (i) a smoothly transitioning function described in Eq. (\ref{eq:D(u_t)}) with $\epsilon=0.5$ and $\sigma=1, 0.5$ and 0.1, as shown in Figure \ref{fig:D(u_t)}(a), and (ii) a piecewise linear function described in Eq. (\ref{eq:D(u_t)2}) with $\epsilon=0.5$ and $\delta=1, 0.5$ and 0.1, as shown in Figure \ref{fig:D(u_t)}(b). We will demonstrate numerically the change in the behavior of the decay parameter $\alpha$ under such a replacement. Before our numerical experiments, it is worth pointing out that the Heaviside diffusivity function under the time-scaling is \begin{equation} D\left(L^{-n(\bar{\alpha}_{n}+1)}(u_{n})_t\right)=D((u_{n})_t) = \begin{cases} 1+\epsilon & \quad \text{for}\,\, (u_{n})_t < 0 \\ 1 & \quad \text{for}\,\, (u_{n})_t > 0, \end{cases} \end{equation} as $n\rightarrow\infty$; i.e., the value of the diffusivity depends only on the sign of $(u_{n})_t$. The diffusivity functions described in (i) and (ii), however, behave differently under the time-scaling: \begin{equation}\label{eq:D(Lu_t)} D\left(L^{-n(\bar{\alpha}_{n}+1)}(u_{n})_t\right)\longrightarrow D(0)=1+\frac{\epsilon}{2}, \quad \text{as}\,\,\, n\longrightarrow\infty, \quad \text{if}\,\,\, \bar{\alpha}_n > -1\,\,\,\text{and}\,\,\, |(u_{n})_t| < \infty.
\end{equation} In other words, the above back-of-the-envelope calculation suggests that if the diffusivity approaches a constant, the Barenblatt-like equation approaches the linear heat equation, and consequently the time decay parameter $\alpha$ approaches 1/2, i.e. $\bar{\alpha}_n \rightarrow 1/2$. We now repeat the same numerical experiment as in Section \ref{sec:barenblatt}, except that we replace the Heaviside diffusivity function by the continuous functions in Figure \ref{fig:D(u_t)}. Figure \ref{fig:tanh}(a) shows the initial profile and the computed asymptotic similarity forms. While the three continuous hyperbolic tangent diffusivity functions have different transition bandwidths, the computed final asymptotic similarity forms are visually indistinguishable after 200 nRG iterations. Figure \ref{fig:tanh}(b) compares the time decay parameter $\alpha$ in Eq. (\ref{eq:time_decay}) for the continuous and discontinuous diffusivity functions during the nRG calculations. As expected, the time decay parameter approaches 1/2 for the continuous functions. Figure \ref{fig:linear} shows calculations similar to those in Figure \ref{fig:tanh} for the continuous diffusivity function in Figure \ref{fig:D(u_t)}(b). The results in Figures \ref{fig:tanh} \& \ref{fig:linear} are extremely close, despite the two different types of continuity connecting the jump. Figure \ref{fig:comp_profile)}(a) compares the computed asymptotic similarity forms for the discontinuous Heaviside and the continuous hyperbolic tangent diffusivities. Figure \ref{fig:comp_profile)}(b) shows their diffusivity distributions at the end of the 200 nRG iterations. As conjectured in Eq. (\ref{eq:D(Lu_t)}), for the hyperbolic tangent function, the diffusivity distribution approaches the constant $1+\frac{\epsilon}{2}$ in the asymptotic regime.
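The two continuous replacements for the Heaviside diffusivity translate directly into code. The sketch below (our own helper functions, with the parameter values of Figure \ref{fig:D(u_t)} as defaults) also makes the limit $D(0)=1+\epsilon/2$ of Eq. (\ref{eq:D(Lu_t)}) explicit: both continuous profiles evaluate to $1+\epsilon/2$ at $u_t=0$.

```python
import math

def D_tanh(ut, eps=0.5, sigma=0.1):
    # Smoothly transitioning diffusivity, Eq. (D(u_t))
    return 1.0 + eps * 0.5 * (1.0 + math.tanh(-ut / sigma))

def D_linear(ut, eps=0.5, delta=0.1):
    # Piecewise linear diffusivity, Eq. (D(u_t)2)
    if ut < -delta:
        return 1.0 + eps
    if ut > delta:
        return 1.0
    return 1.0 - eps / (2.0 * delta) * (ut - delta)
```

The Heaviside function, by contrast, has no well-defined value at $u_t=0$, which is precisely why its scaled limit retains the sign dependence discussed above.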
\begin{figure}[h] \centering (a)\includegraphics[width=2.8in]{D_u_t_-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{D_u_t_2-eps-converted-to.pdf} \caption{(a) Smoothly transitioning $D(u_t)$ in Eq. (\ref{eq:D(u_t)}), where $\epsilon=0.5$ and $\sigma=1, 0.5$, and 0.1. (b) Piecewise linear $D(u_t)$ in Eq. (\ref{eq:D(u_t)2}), where $\epsilon=0.5$ and $\delta=1, 0.5$, and 0.1.} \label{fig:D(u_t)} \end{figure} \begin{figure}[h] \centering (a)\includegraphics[width=2.8in]{tanh_u-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{tanh_alpha-eps-converted-to.pdf} \caption{ (a) The initial profile and the computed asymptotic similarity forms for Eq. (\ref{eq:scaled_Baren}). Three smoothly transitioning $D(u_t)$ shown in Figure \ref{fig:D(u_t)}(a) are used for the calculations. The final asymptotic similarity forms computed after 200 nRG iterations are visually indistinguishable for the three different bandwidths of the hyperbolic tangent profiles. (b) The time decay exponent $\alpha$ captured by the nRG algorithm for the calculations in (a), compared with that of Barenblatt's equation, for which $D(u_t)$ is a Heaviside function. $\alpha$ approaches 1/2 for the continuous smooth $D(u_t)$ after only 20 iterations.} \label{fig:tanh} \end{figure} \begin{figure}[h] \centering (a)\includegraphics[width=2.8in]{linear_u-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{linear_alpha-eps-converted-to.pdf} \caption{The same calculation as Figure \ref{fig:tanh}, except the diffusivity is a continuous function that connects the two constant states with a straight line.} \label{fig:linear} \end{figure} \begin{figure}[h] \centering (a)\includegraphics[width=2.8in]{profiles_u-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{comp_nu-eps-converted-to.pdf} \caption{(a) A comparison of the computed asymptotic similarity forms between the discontinuous Heaviside function and the continuous hyperbolic tangent diffusivity function ($\sigma=0.1$).
(b) The diffusivity distributions at the end of the $200^{th}$ nRG iteration. The diffusivity distribution (dashed line) is a constant of $1+\frac{\epsilon}{2}$ for the hyperbolic tangent diffusivity function.} \label{fig:comp_profile)} \end{figure} \subsection{The modified diffusion-absorption model ($\lambda > 0$)} We now consider the full modified diffusion-absorption equation, Eq. (\ref{eq:diff-absorp}). It has been shown that for $D(u_t)\equiv \text{constant},\,\lambda>0,\, p>1+m,\, m\ge 0$ (the absorptive regime), two different regimes of time decay exist, where one is dominated by diffusion and the other by absorption \cite{SGKM95}. A critical point $p^{*}=p^{*}(m, d)$, where $d$ is the dimension of Eq. (\ref{eq:diff-absorp}), separates the two regimes. In the case $p > p^{*}$, the time decay parameter $\alpha$ becomes a constant function of $p$, indicating that for these absorptive exponents the model equation is an irrelevant perturbation to the diffusion equation; that is, the regime is diffusion dominated. On the other hand, for $1+m<p<p^{*}$, the absorptive time decay is given by \begin{equation}\label{eq:time_decay1} \alpha = \frac{1}{p-1}, \end{equation} which indicates that the time decay is strongly influenced by absorption and the longtime behavior of the solution does not correspond to the diffusion equation. Therefore, the model equation is a relevant perturbation to the diffusion equation, and the system is in the absorption-dominated regime. Moreover, for the marginal case, $p=p^{*}$, the absorptive time decay equals the diffusive time decay, or \begin{equation} \frac{1}{p-1} = \frac{d}{md+2}, \end{equation} and hence the critical value of $p$ is \begin{equation}\label{eq:time_decay2} p^{*}=m+\frac{2}{d} +1. \end{equation} Note that for $p>p^{*}$ the time decay $\alpha$ is a constant \begin{equation}\label{eq:time_decay3} \alpha_{c} = \frac{1}{p^{*}-1}.
\end{equation} Detailed results and their derivations for the model equation in the absorptive regime can be found in \cite{SGKM95}. For the rest of this section, we will consider the cases $m=0$ and $m=1$ with the normalized coefficient $\lambda =1$. We will investigate the time decay parameter $\alpha$, as a function of the absorptive exponent $p$, for $D(u_t)$ beyond a constant. In particular, we will illustrate the different behaviors of $\alpha$ versus $p$ between the discontinuous and continuous diffusivity functions. Similar to Eq. (\ref{eq:scaled_Baren}), the scaled Eq. (\ref{eq:diff-absorp}) for the $n^{th}$ RG iteration is \begin{equation}\label{eq:scaled_absorption} \begin{split} &(u_n)_{t}=D\left(L^{-n(\bar{\alpha}_{n}+1)}(u_{n})_t\right) L^{-n(\bar{\alpha}_n m +2 \bar{\beta}_n -1)}((u_n)^{m+1})_{xx}-L^{-n(\bar{\alpha}_n(p-1)-1)}(u_n)^{p},\,\,t >1,\\ \text{I. C.\,\,:}\quad&u_n(x,1) = f_n(x). \end{split} \end{equation} We consider two types of diffusivity functions in this section: (i) the discontinuous Heaviside function and (ii) the continuous hyperbolic tangent function. To incorporate the nonlinear diffusion ($\partial_{xx}u^2$ for the case $m=1$) and the absorption term, we will use the second-order explicit centered-difference discretization for the spatial derivative and Euler's scheme for the time evolution. The constraint on the ratio of the temporal and spatial step sizes, $\nu\frac{\Delta t}{(\Delta x)^2} \le 1/2$, where $\nu$ is the diffusivity, will be enforced to ensure stability. The nonlinear diffusivity $D(u_t)$ is linearized in the same way as before. \vskip 12pt \noindent {\bf Case I: $m=0$:} Using the nRG algorithm, we numerically study $\alpha$ versus $p$ for diffusivities beyond a constant function. In particular, we study a discontinuous $D(u_t)$ that is the Heaviside function described in Eq. (\ref{eq:Heaviside}) with $\epsilon=0.5,\, 0$ and $-0.5$. We also study the continuous hyperbolic tangent profile described in Eq.
(\ref{eq:D(u_t)}) with $\sigma=0.1$ and $\epsilon=0.5,\, 0$ and $-0.5$. We choose $2\le p \le 5$ with the increment $\Delta p=0.1$. We remark that, in order to study the effect of different diffusivities, it is desirable to keep the magnitude of the diffusivity unscaled throughout the RG iterations. From Eq. (\ref{eq:scaled_absorption}), if we choose $\beta_{n}=1/2$ for every $n$, the magnitude of the diffusivity remains unscaled in each nRG iteration. Figure \ref{fig:p_vs_alpha_m=0}(a) plots the computed time decay parameter $\alpha$ as a function of $p$ for the Heaviside diffusivity with three different jump $\epsilon$-values. The solid line shows the theoretically predicted $\alpha$ values, for which, by Eq. (\ref{eq:time_decay2}), the critical value is $p^{*}=3$. Hence for $p<3$, $\alpha$ obeys Eq. (\ref{eq:time_decay1}), and for $p\ge 3$, $\alpha = \frac{1}{2}$ by Eq. (\ref{eq:time_decay3}). The numerically computed $\alpha$ values corresponding to the constant diffusivity, $D(u_t)\equiv 1$ ($\epsilon=0$), are marked by the circles. The computed values agree with the theoretical prediction. The computed $\alpha$ values for the diffusivities that are the discontinuous Heaviside function with $\epsilon=0.5$ and $-0.5$ are marked by the squares and the triangles, respectively. The results suggest that the $\alpha$ values obey Eq. (\ref{eq:time_decay1}) as a function of $p$ until a critical $p$ value is reached. For $p$ larger than this critical value, the time decay parameter $\alpha$ is a constant. Moreover, different $\epsilon$ values give rise to different critical $p$ values. Figure \ref{fig:p_vs_alpha_m=0}(b) shows the results for the continuous hyperbolic tangent diffusivity. Unlike the Heaviside diffusivity, the time decay $\alpha$ values for continuous tangent profiles with different jumps $\epsilon$ obey the theoretical prediction for the case of constant diffusivities, namely $\alpha=\frac{1}{p-1}$.
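One explicit time step of the discretization described above (second-order centered differences for $\partial_{xx}u^{m+1}$, Euler in time, with the diffusivity frozen at a previous-step estimate of $u_t$) can be sketched as follows. The grid, the boundary treatment (values held fixed at the two end nodes), and the parameter choices in the example are illustrative assumptions, not the exact production setup.

```python
def euler_step(u, dt, dx, m=0, p=3, D=lambda ut: 1.0, ut_prev=None):
    # One explicit Euler step of u_t = D(u_t) (u^{m+1})_xx - u^p.
    # ut_prev holds the frozen (previous-step) estimate of du/dt at each node.
    if ut_prev is None:
        ut_prev = [0.0] * len(u)
    w = [v ** (m + 1) for v in u]        # u^{m+1}
    new = u[:]                           # end nodes kept fixed
    for j in range(1, len(u) - 1):
        lap = (w[j + 1] - 2.0 * w[j] + w[j - 1]) / dx**2
        new[j] = u[j] + dt * (D(ut_prev[j]) * lap - u[j] ** p)
    return new

# Illustrative step on a coarse bump; dt/dx^2 = 0.2 satisfies the constraint <= 1/2
v = euler_step([0.0, 0.5, 1.0, 0.5, 0.0], dt=0.05, dx=0.5)
```

Both the diffusion and the absorption term lower the peak of a positive bump, consistent with the decaying similarity forms studied in this section.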
This behavior is understandable, thanks to the back-of-the-envelope calculation in Eq. (\ref{eq:D(Lu_t)}). Finally, we note that for all our numerical computations, the time is integrated from $t=1$ to $t=L=2$ with $\Delta t=5\times10^{-4}$, while the computational domain is $-10\le x\le 10$ with $\Delta x=0.125$. The initial condition is described in Eq. (\ref{eq:IC_experiment}). For each simulation, the number of RG iterations is 200. \begin{figure}[h] \centering (a)\includegraphics[width=2.8in]{p_vs_alpha_m_0-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{p_vs_alpha_m_0_c-eps-converted-to.pdf} \caption{$m=0$. (a) $D(u_t)$ is the Heaviside function (\ref{eq:Heaviside}) with $\epsilon=0.5,\, 0$ and $-0.5$. The solid line is the theoretically predicted $\alpha$ values. (b) The same as (a), except $D(u_t)$ is the continuous hyperbolic tangent function (\ref{eq:D(u_t)}) with $\sigma=0.1$.} \label{fig:p_vs_alpha_m=0} \end{figure} \vskip 12pt \noindent {\bf Case II: $m=1$:} When $m\ne 0$, based on Eq. (\ref{eq:scaled_absorption}), the scaling factor for the diffusivity depends on both $\alpha_n$ and $\beta_n$. If an unscaled diffusivity is desired, the spatial scaling factor $\beta_n$ should be calculated by $\beta_n = (1-m \alpha_n)/2$, after $\alpha_n$ is computed at the end of each RG iteration. Figure \ref{fig:p_vs_alpha_m=1} shows that the nonlinear diffusion with absorption exhibits behavior similar to that of the linear case, shown in {\bf Case I}. \begin{figure}[h] \centering (a)\includegraphics[width=2.8in]{p_vs_alpha_m_1-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{p_vs_alpha_m_1_c-eps-converted-to.pdf} \caption{$m=1$. (a) $D(u_t)$ is the Heaviside function (\ref{eq:Heaviside}) with $\epsilon=0.5,\, 0$ and $-0.5$. The solid line is the theoretically predicted $\alpha$ values.
(b) The same as (a), except $D(u_t)$ is the continuous hyperbolic tangent function (\ref{eq:D(u_t)}) with $\sigma=0.1$.} \label{fig:p_vs_alpha_m=1} \end{figure} The results of the experiments in {\bf Cases I \& II} suggest the following conjecture: \begin{conjecture} For the diffusion-absorption model (\ref{eq:diff-absorp}) with $\lambda > 0$, the behavior of the time decay parameter $\alpha$ in the asymptotic regime for the discontinuous Heaviside diffusivity is similar to that for the constant diffusivity, and there exists a critical $p$ value for the Heaviside diffusivity with the jump $\epsilon$, i.e. $p^{*} = 1+\frac{1}{\alpha(\epsilon)}$, where $\alpha(\epsilon)$ depends on $\epsilon$ and is constant for $p\ge p^{*}$. \end{conjecture} \subsection{Marginal case: vanishing pre-factor} As we indicated in Section \ref{sec:Burgers}, if the self-similar solution has only power-law decay, then we can monitor the sequence of $A_n$, defined in Eq. (\ref{eq:An_Bn}), and expect that $A_n$ will converge to some constant $A\ne 0$. For the diffusion-absorption model (\ref{eq:diff-absorp}) with a constant diffusivity, $m=0$, and $\lambda > 0$, Bricmont and Kupiainen \cite{BKL94} show that the longtime self-similar solution for the marginal case, $p=3$, is of the form \begin{equation} u(x, t)\sim \left(\frac{\lambda t \log t}{2\sqrt{3}} \right)^{-\frac{1}{2}}\phi(x t^{-\frac{1}{2}}), \end{equation} where $\phi$ is a Gaussian distribution. This suggests that, using Algorithm 1, even though our nRG iterations could successfully capture the decay exponent $\alpha=\frac{1}{2}$ (as indicated in Figure \ref{fig:p_vs_alpha_m=0} for $\epsilon=0$) and $\beta=1/2$ for keeping the diffusivity unscaled, the sequence of the pre-factor $A_n$ should continue approaching zero, because \begin{equation} \lim_{n\rightarrow\infty} A_n \sim \left(\frac{\lambda \log t}{2\sqrt{3}} \right)^{-\frac{1}{2}} \rightarrow 0,\quad \text{as}\,\, t \rightarrow\infty.
\end{equation} Figure \ref{fig:prefactors_porous}(a) shows that for linear diffusion ($m=0$) with constant diffusivity, the computed pre-factor $A_n$ continues to approach zero after 50,000 nRG iterations for $p=3$, while $A_n$ quickly settles into a nonzero constant for $p=3.05$ and $p=2.95$, which deviate slightly from the critical value $p=3$. It is worth pointing out that for the nonlinear diffusion ($m=1$) with constant diffusivity, we observed exactly the same behavior for the marginal case of the critical value $p=4$. Qi and Liu \cite{QL04} showed that, for the diffusion-absorption model (\ref{eq:diff-absorp}) with a constant diffusivity, $m=1$, and $\lambda=1$, in the marginal case ($p=4$), the similarity solution is \begin{equation}\label{eq:sim_m=1} u(x, t) \sim (t \log t )^{-\frac{1}{3}} \phi\left(\frac{(\log t)^{1/6} x}{t^{1/3}}\right),\quad t\rightarrow\infty, \end{equation} for initial data $u_0$ that satisfy \begin{equation} \lim_{|x|\rightarrow\infty} \sup |x|^{k} u_0 < \infty, \quad k>1. \end{equation} This implies that for our compactly supported initial data, if the time decay factors captured by our nRG calculation approach the theoretical prediction, i.e. $\alpha\rightarrow 1/3$ and $\beta\rightarrow 1/3$, then we have \begin{equation}\label{eq:prefactor_m1} \lim_{n\rightarrow\infty}A_n \sim (\log t )^{-\frac{1}{3}} \rightarrow 0,\quad \text{as}\,\, t \rightarrow\infty. \end{equation} In Figure \ref{fig:decayfactors}, we show that the time decay factors captured by the nRG algorithm are consistent with the theoretical values predicted in Eq. (\ref{eq:sim_m=1}), and in Figure \ref{fig:prefactors_porous}(b) we show that the computed pre-factor $A_n$ continues to approach zero after 50,000 nRG iterations for $p=4$, as indicated in Eq. (\ref{eq:prefactor_m1}), while $A_n$ quickly settles into a nonzero constant for $p=4.05$ and $p=3.95$, which deviate slightly from the critical value $p=4$.
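The slow drain of the pre-factor can be previewed from the asymptotic formula itself. The Python sketch below evaluates $A_n\sim(\lambda\log t/(2\sqrt{3}))^{-1/2}$ for the $m=0$ marginal case with $t=L^{n}$, $L=2$, and $\lambda=1$; this is purely an evaluation of the formula, not the nRG iteration, and it shows that $A_n$ is still of order $10^{-2}$ after 50,000 iterations.

```python
import math

def prefactor_m0(n, L=2.0, lam=1.0):
    # Asymptotic pre-factor (lam * log t / (2*sqrt(3)))^(-1/2) with t = L**n
    log_t = n * math.log(L)
    return (lam * log_t / (2.0 * math.sqrt(3.0))) ** -0.5

A_values = [prefactor_m0(n) for n in (10, 100, 1000, 50000)]
```

The $1/\sqrt{\log t}$ decay is so slow that monitoring $A_n$ alone over any practical number of iterations can easily be mistaken for convergence to a small nonzero constant, which motivates the modified algorithm of the next section.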
\begin{figure}[h] \centering (a)\includegraphics[width=2.8in]{alpha_m1-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{beta_m1-eps-converted-to.pdf} \caption{Comparison of the time decay factors captured by the nRG algorithm with the theoretical prediction. (a) $\alpha\rightarrow 1/3$, (b) $\beta\rightarrow 1/3$.} \label{fig:decayfactors} \end{figure} We comment that the numerical experiments in Figure \ref{fig:prefactors_porous} use the compactly supported initial condition (\ref{eq:IC_experiment}) with the constant diffusivity $D\equiv 1$ and the normalized absorption coefficient $\lambda =1$. The parameters for the nRG iterations are $L=2$, $\Delta x=0.1$, and $\Delta t=10^{-4}$. Similar to the previous experiments, the explicit second-order discretization is applied to the spatial derivative and Euler's method is used for the time evolution. The peculiar phenomenon observed in Figure \ref{fig:prefactors_porous} motivates us to modify Algorithm 1 in the next section, in order to capture the hidden logarithmic time decay exponent illustrated in this section. \begin{figure}[h] \centering (a)\includegraphics[width=2.8in]{prefactorm0-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{prefactorm1-eps-converted-to.pdf} \caption{(a) Linear diffusion ($m=0$) with constant diffusivity. $A_n\rightarrow 0$ for the marginal case $p=3$. (b) Nonlinear diffusion ($m=1$) with constant diffusivity. $A_n\rightarrow 0$ for the marginal case $p=4$.} \label{fig:prefactors_porous} \end{figure} \section{Cubic autocatalytic chemical reaction system}\label{sec:cubic_autocatalytic} The nRG procedure described in Algorithm 1 assumes that the asymptotic solutions decay or expand at a rate obeying a power law. However, there are differential equations (or systems of differential equations) whose solutions decay at a rate other than a power law, such as the logarithmic decay discussed in the previous section.
For these solutions, the aforementioned Algorithm 1 is not sufficient to capture the correct decay in the asymptotic region. Nevertheless, the procedure can provide sufficient information that allows us to modify the current algorithm to capture the similarity solutions of those equations. To illustrate the modification, we consider the Cauchy problem of the chemical reaction system \begin{equation}\label{eq:chem-reaction} \begin{split} u_t=u_{xx} - u^{p}v^{q},\\ v_t=dv_{xx} + u^{p}v^{q}, \end{split} \end{equation} where $p+q=3, 1\le p, q\le 2,$ and $d>0$. This system arises as a model for cubic autocatalytic chemical reactions of the type \begin{equation} pK_1+qK_2\longrightarrow 3K_2 \end{equation} with isothermal reaction rate proportional to $u^{p}v^{q}$, where $u$ is the concentration of reactant $K_1$ and $v$ is the concentration of auto-catalyst $K_2$ \cite{LQ03}. The system is subject to the initial data $u(x, 0)=a_1(x)$ and $v(x,0)=a_2(x)$, where $a_1, a_2 \ge 0$ and $a_1, a_2 \in L^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R})$. The above system has also been applied to model thermal-diffusive combustion problems \cite{BKX96} and problems in mathematical biology \cite{FHMW02}. The similarity solutions of this system, for different values of $p$ and $q$, are investigated in the papers by Bricmont et al. \cite{BKX96} and Li and Qi \cite{LQ03}. For $p=1$, $q=2$, Bricmont et al. show that \begin{equation} \begin{split} & t^{1/2+E_{A}}u(\sqrt{t}x, t) \longrightarrow B\psi_{A}(x),\\ & t^{1/2}v(\sqrt{t}x, t) \longrightarrow A\phi_{d}(x), \end{split} \end{equation} as $t\rightarrow \infty$. Here the total mass $A=\int_{\mathbb{R}}\left(a_1(x) + a_2(x)\right)dx$ is conserved in time. $B$ depends continuously on $(a_1, a_2)$, and the extra decay power $E_{A}$ in time is due to the critical cubic nonlinearity of the system \cite{BKX96}. $\phi_{d}$ is the Gaussian \begin{equation}\label{eq:gaussian} \phi_{d}(x) = \frac{1}{\sqrt{4\pi d}}e^{-x^2/4d}.
\end{equation} Li and Qi extend the above result by considering the values $1< p, q <2$ and $p+q=3$. The nontrivial initial data $a_i\ge 0$, for $i=1, 2$, are the same as before, and the total mass $A$ defined as before is positive. Li and Qi show that \begin{equation}\label{eq:log_decay} \begin{split} & \sqrt{t}(\log t)^{1/(p-1)}u(\sqrt{t}x, t)\longrightarrow B\phi_1(x),\\ & \sqrt{t}v(\sqrt{t}x, t)\longrightarrow A\phi_d(x), \end{split} \end{equation} as $t\rightarrow\infty$, where \begin{equation}\label{eq:B} B=\left(\frac{4\pi d^{q/2}(p+q/d)^{1/2}}{(p-1)A^{q}} \right)^{1/(p-1)}, \end{equation} and $\phi_1$ is Eq. (\ref{eq:gaussian}) with $d=1$. The peculiar feature of the similarity solution (\ref{eq:log_decay}) is that the $u$-component contains two decays, the regular power-law decay and a logarithmic decay. We illustrate below that the nRG algorithm described in Section \ref{sec:nrg} is not sufficient to capture the second decay. However, the procedure provides a clue to the existence of the second decay, and allows us to design an nRG procedure that captures the similarity solutions in Eq. (\ref{eq:log_decay}). We start with the regular nRG procedure stated in Section \ref{sec:nrg}, i.e. the scaling for $t$ and $x$ is the same as that in Eq.
(\ref{eq:scale_tx}), which results in $u$ and $v$ being scaled by \begin{equation}\label{eq:sys_scaling} \begin{split} u_L(\tilde{x},\tilde{t}) &= L^{\alpha_1}\,u(x,t) = L^{\alpha_1}\,u(L^{\beta_1}\tilde{x},L\tilde{t}),\\ v_L(\tilde{x},\tilde{t}) &= L^{\alpha_2}\,v(x,t) = L^{\alpha_2}\,v(L^{\beta_2}\tilde{x},L\tilde{t}).\\ \end{split} \end{equation} These scalings result in a system of PDEs (dropping the subscript $L$ and the tildes) \begin{equation}\label{eq:sys-scaled} \begin{split} u_t & = L^{-2\beta_1+1}u_{xx}-L^{-p\alpha_1-q\alpha_2+\alpha_1+1}u^{p}v^{q},\\ v_t & = L^{-2\beta_2+1} d v_{xx}+L^{-p\alpha_1-q\alpha_2+\alpha_2+1}u^{p}v^{q}.\\ \end{split} \end{equation} Similar to the Burgers equation, we choose to keep the diffusion coefficients invariant. Thus $\beta_1=\beta_2=1/2$ at all times, whereas $\alpha_1$ and $\alpha_2$ are computed by step (2) in the nRG algorithm described in Section \ref{sec:nrg}. With this choice of $\beta_1$ and $\beta_2$, the scaled PDE at the $n^{th}$ iteration is \begin{equation}\label{eq:power_decay_pde} \begin{split} (u_n)_t &= (u_n)_{xx} - L^{n}\left(L^{-n\bar{\alpha}_{1, n}} \right)^{p-1}\left(L^{-n\bar{\alpha}_{2, n}}\right)^{q}(u_n)^{p}(v_n)^{q},\\ (v_n)_t &= d (v_n)_{xx} + L^{n}\left(L^{-n\bar{\alpha}_{1,n}} \right)^{p}\left(L^{-n\bar{\alpha}_{2, n}}\right)^{q-1}(u_n)^{p}(v_n)^{q}, \end{split} \end{equation} where $\bar{\alpha}_{1, n}$ and $\bar{\alpha}_{2, n}$ are defined as $\bar{\alpha}_n$ in Section \ref{sec:seq}. At this stage, based on the power-law hypothesis, we assume the scaling \begin{equation}\label{eq:similarities} \begin{split} u(x, L^{n}) \sim \frac{A_{u}}{L^{n/2}}\phi_{u}(\frac{x}{L^{n/2}}),\\ v(x, L^{n}) \sim \frac{A_{v}}{L^{n/2}}\phi_{v}(\frac{x}{L^{n/2}}), \end{split} \end{equation} where $A_{u}$ and $A_{v}$ are non-zero constants.
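In practice, each nRG iteration ends by evaluating the evolved solution at the stretched coordinates and amplifying it, $u_n(x,1)=L^{\alpha}u_{n-1}(L^{\beta}x,L)$, which on a fixed grid amounts to an interpolation. A minimal sketch of this rescaling step, assuming a uniform grid and linear interpolation (the function name is ours):

```python
import numpy as np

def rescale(x, u_end, L, alpha, beta=0.5):
    """Rescaling step of one nRG iteration on a fixed uniform grid x:
    returns L**alpha * u_end(L**beta * x), using linear interpolation.
    Stretched points L**beta * x that fall outside the grid are set to
    zero, consistent with compactly supported data."""
    return L**alpha * np.interp(L**beta * x, x, u_end, left=0.0, right=0.0)
```

With $\beta_1=\beta_2=1/2$, this is exactly the map generating the initial data of iteration $n$ from the terminal data of iteration $n-1$.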
The nRG iteration, based on the power-law decay assumption, in principle should show $A_{u, n}=L^{n(\alpha_{1, n} - \bar{\alpha}_{1, n})}\sim A_{u}$ and $A_{v, n}=L^{n(\alpha_{2, n} - \bar{\alpha}_{2, n})}\sim A_{v}$ (cf. Eq. (\ref{eq:An_Bn})), as $n\rightarrow\infty$. Unfortunately (or fortunately), this is not the case. The numerical experiment, in fact, shows that $A_{u, n}\rightarrow 0$, while $A_{v, n}\ne 0$ and converges to a constant proportional to the total mass $A$, as $n\rightarrow\infty$. Based on this result, we conjecture that the $v$-component follows the power-law decay, similar to the Burgers equation, while the $u$-component possesses a hidden decay that is not captured by assuming the power-law decay alone. \subsection{A numerical experiment}\label{sec:no-decay} We conduct an nRG experiment using the power-law scaling, Eq. (\ref{eq:similarities}), for the above chemical reaction problem with the parameters $p=q=1.5$, $d=0.75$, and $L=1.2$. The initial data are \begin{equation} u(x,0) = v(x,0) = \chi_{[-\ell,\ell]}(x) =\begin{cases} 1, & -\ell \le x \le \ell, \\ 0, &\text{else}. \end{cases} \end{equation} We choose $\ell = 0.5$ and the computational domain to be $[-10, 10]$. For these initial data, the total conserved mass is $A=2$. Figure \ref{fig:An_and_alpha}(a) is a plot of $\alpha_{1, n}$ and $\alpha_{2, n}$ versus $n$. From the figure, we expect that $\alpha_{1, n}$ and $\alpha_{2, n}$ both converge to 1/2 as $n\rightarrow\infty$, although the figure suggests that $\alpha_{1, n}$ may converge much more slowly than $\alpha_{2, n}$. Since $\beta_n=1/2$ for all $n$, from Eq. (\ref{eq:An_Bn}), the convergence of $\alpha_{1, n}$ and $\alpha_{2, n}$ leads to Eq. (\ref{eq:similarities}). Moreover, Figure \ref{fig:An_and_alpha}(b) shows the computed $A_{u, n}$ and $A_{v, n}$. As expected, $A_{u, n}$ (the dashed line) approaches 0 as $n\rightarrow\infty$, while $A_{v, n}$ approaches a constant $A_{v}$. Note that from Eqs.
(\ref{eq:log_decay}) and (\ref{eq:similarities}), for $t=L^n$, we have \begin{equation} A_{v, n} \phi_{v} \rightarrow A_{v} \phi_v = A\phi_d. \end{equation} Since $\phi_{v}= \sqrt{4\pi d}\, \phi_d$, we have $\sqrt{4\pi d}\,A_{v} = A$, or $A_{v}=\displaystyle\frac{1}{\sqrt{4\pi d}} A$. For $d=0.75$ and $A=2$, $A_{v}\approx 0.6515$; that is, $A_{v,n}\rightarrow A_{v}\approx 0.6515$, which is exactly what we observe in Figure \ref{fig:An_and_alpha}(b). Figure \ref{fig:Gaussian_profile} shows the comparison between the computed Gaussian similarity profile and the predicted theoretical profile in \cite{LQ03} at $n=3000$, after adjusting the amplitudes. Both components correctly match the prediction, even though the hidden logarithmic decay in $u$ is not captured by the nRG algorithm. Now let us turn our attention to $A_{u, n}$. The fact that $A_{u, n}\rightarrow 0$ as $n\rightarrow\infty$ indicates that there is a ``hidden'' decay factor that is not captured by the current nRG procedure. Taking a log-log plot of $A_{u, n}$ versus $n$, Figure \ref{fig:An_no_log_decay} shows that $\log A_{u, n} = (-2) (\log n) + \log C$, as $n\rightarrow\infty$. If we suppose that the hidden decay factor is related to $\log t$, then we may choose $C=A(\log L)^{-2}$, which results in $A_{u, n} = A(\log L^n)^{-2}$, and thus Eq. (\ref{eq:similarities}) becomes \begin{equation}\label{eq:log-decay-nrg} \begin{split} u(x, L^{n}) &\sim \frac{A}{L^{n/2}(\log L^n)^{2}}\phi_{u}(\frac{x}{L^{n/2}}),\\ v(x, L^{n}) &\sim \frac{A_{v}}{L^{n/2}}\phi_{v}(\frac{x}{L^{n/2}}). \end{split} \end{equation} Eq. (\ref{eq:log-decay-nrg}) evidently gives the similarity (asymptotic) solutions of the system of chemical-reaction equations for $t=L^{n}$ and $p=q=3/2$ (cf. Eq. (\ref{eq:log_decay})).
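The slope reading from the log-log plot can be automated with a least-squares fit. A small illustration (the data here are synthetic, generated with a known exponent and an arbitrary prefactor, purely to show the fitting step):

```python
import numpy as np

# Synthetic stand-in for the computed sequence A_{u,n}: a pure n^{-2}
# decay with an arbitrary prefactor C.  Fitting log A_{u,n} against
# log n recovers both the slope (-2) and the intercept log C.
n = np.arange(10, 2001)
C = 3.7                                  # arbitrary illustrative prefactor
A_un = C * n**-2.0
slope, intercept = np.polyfit(np.log(n), np.log(A_un), 1)
```

Applied to the actual $A_{u,n}$ sequence, a fitted slope stabilizing at $-2$ is the quantitative version of Figure \ref{fig:An_no_log_decay}.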
\begin{figure}[bhtp] \centering (a)\includegraphics[width=2.8in]{alpha_reaction-eps-converted-to.pdf} (b)\includegraphics[width=2.8in]{An_reaction-eps-converted-to.pdf} \caption{Computed scaling factors by the nRG procedure stated in Section \ref{sec:nrg} for the chemical reaction system. (a) $\alpha_{1, n}$ and $\alpha_{2, n}$ (b) $A_{u, n}$ and $A_{v, n}$.} \label{fig:An_and_alpha} \end{figure} \begin{figure}[bhtp] \centering (a) \includegraphics[width=2.8in]{f_u-eps-converted-to.pdf} (b) \includegraphics[width=2.8in]{f_v-eps-converted-to.pdf} \caption{Comparison between the computed Gaussian similarity profile by Algorithm 1 and the predicted theoretical profile in \cite{LQ03} at $n=3000$, after adjusting the amplitudes. (a) $u$-component, (b) $v$-component.} \label{fig:Gaussian_profile} \end{figure} \begin{figure}[bhtp] \centering \includegraphics[width=2.8in]{Aun_loglog-eps-converted-to.pdf} \caption{log$-$log plot of $A_{u, n}$ versus $n$. The dashed line is the straight line with slope $-2$. } \label{fig:An_no_log_decay} \end{figure} \section{Modified RG algorithm for logarithmic decay}\label{sec:AM2} The above experiment suggests that the component $v$ has the decay factor $\sqrt{t}$, but the component $u$ may have more than one decay factor. Without the asymptotic formula (\ref{eq:log_decay}), in principle, we do not know what the decay factors are. However, if we ``guess'' that one of them is also $\sqrt{t}$, based on Figures \ref{fig:An_and_alpha} and \ref{fig:An_no_log_decay}, and suppose that the other is related to $\log t$ with some unknown power $\gamma$, then the solution in the asymptotic region gives us an idea of how to compute the power $\gamma_n$ at the end of each iteration. Taking the hint from Eq.
(\ref{eq:log_decay}), at times $t$ and $Lt$, the ratio of the solutions is \begin{equation}\label{eq:modified-RG0} \frac{||u(x,t)||_{\infty}}{||u(x, Lt)||_{\infty}} = \frac{L^{1/2}t^{1/2}(\log Lt)^{\gamma}}{t^{1/2}(\log t)^{\gamma}}=L^{1/2}\left(\frac{(\log Lt)}{(\log t)}\right)^{\gamma}. \end{equation} To modify the nRG procedure for this case, we observe that at the end of the $(n-1)^{th}$ iteration ($t=L^{n-1}$), \begin{equation}\label{eq:modified-RG1} L^{1/2}\left(\frac{\log L^{n}}{\log L^{n-1}}\right)^{\gamma_{n}} = L^{1/2}\left(\frac{n}{n-1}\right)^{\gamma_{n}} = \frac{||u_{n-1}(\cdot,1)||_{\infty}}{||u_{n-1}(\cdot,L)||_{\infty}},\quad n > 1, \end{equation} following Eq. (\ref{eq:modified-RG0}). Note that for the case $n=1$, $\gamma_{1}$ is computed by the power-law scaling, \begin{equation}\label{eq:n=0-gamma} L^{\gamma_{1}} = \frac{||u_0(\cdot,1)||_{\infty}}{||u_0(\cdot,L)||_{\infty}}. \end{equation} Eqs. (\ref{eq:modified-RG1}) and (\ref{eq:n=0-gamma}) suggest that, for the $u$-component, the initial condition for the next iteration is set by \begin{equation}\label{eq:modified-RG3} u_{n}(x, 1)= L^{1/2}\left(\frac{n}{n-1}\right)^{\gamma_{n}}u_{n-1}(L^{1/2}x, L),\quad \text{for}\,\,n > 1, \end{equation} and \begin{equation}\label{eq:modified-RG4} u_{1}(x, 1)=L^{\gamma_{1}}u(L^{1/2}x, L),\quad \text{for}\,\, n=1. \end{equation} Here we have chosen $\beta_1=\beta_2=1/2$ in order to keep the diffusion coefficients unchanged. Note that from Eq. (\ref{eq:modified-RG3}), at the end of the $n^{th}$ iteration ($n > 1$), the iterative solutions $u_{n}$ and $v_{n}$ are related to the solutions of the PDEs by \begin{equation}\label{eq:u_v_sys} \begin{split} u_{n}(x,t)&=L^{\gamma_{1}+(n-1)/2}\,\displaystyle\overset{n}{\underset{k=2}{\Pi}}\left(\frac{k}{k-1}\right)^{\gamma_{k}}\,u(L^{n/2}x, L^{n}t),\\ v_{n}(x,t)&= L^{n\bar{\alpha}_{2, n}}v(L^{n/2}x, L^{n}t), \end{split} \end{equation} where $\bar{\alpha}_{2, n} = \left(\alpha_{2,1}+\cdots+\alpha_{2,n}\right)/n$. Eq.
(\ref{eq:u_v_sys}) implies that \begin{equation}\label{eq:u_v_sys_inv} \begin{split} u(x,t)&=L^{-\gamma_{1}-(n-1)/2}\,\displaystyle\overset{n}{\underset{k=2}{\Pi}}\left(\frac{k-1}{k}\right)^{\gamma_{k}}\,u_n(L^{-n/2}x, L^{-n}t),\\ v(x,t)&= L^{-n\bar{\alpha}_{2, n}}v_n(L^{-n/2}x, L^{-n}t). \end{split} \end{equation} Hence for the $n^{th}$ iteration ($n \ge 1$), the scaled system of PDEs for $u_n$ and $v_n$ is \begin{equation}\label{eq:un-sys} \begin{split} (u_{n})_t &= (u_{n})_{xx} - L^{n}\,L^{(-p+1)\gamma_{1}}(L^{-(n-1)/2})^{(p-1)}\,\left(\displaystyle\overset{n}{\underset{k=2}{\Pi}}\left(\frac{k-1}{k}\right)^{\gamma_{k}}\right)^{p-1}\, (L^{-n\bar{\alpha}_{2, n}})^{q}(u_n)^{p}(v_n)^{q},\\ (v_{n})_t &= d\,(v_{n})_{xx} + L^{n}\,L^{-p\gamma_{1}}(L^{-(n-1)/2})^{p}\,\left(\displaystyle\overset{n}{\underset{k=2}{\Pi}}\left(\frac{k-1}{k}\right)^{\gamma_{k}}\right)^{p}\, (L^{-n\bar{\alpha}_{2, n}})^{q-1}(u_n)^{p}(v_n)^{q}, \end{split} \end{equation} for $\beta_1=\beta_2=1/2$. For $n=0$, the unscaled equation (\ref{eq:chem-reaction}) is solved. Now, similar to the steps from Eq. (\ref{eq:rg_similarity_Ln}) to Eq. (\ref{eq:An_Bn}), we can define a quantity $A_{u,n}$ for the $u$-component, so that we can monitor $A_{u, n}$ for convergence. From the first equation in Eq. (\ref{eq:log-decay-nrg}) and the first equation in Eq. (\ref{eq:u_v_sys}), we have \begin{equation}\label{eq:An_sys} \begin{split} u(L^{n/2}x, L^{n}) & \sim L^{-n/2}(\log L^{n})^{-\gamma}A\phi(x),\\ u_n(x,1) & =L^{\gamma_{1}+(n-1)/2}\,\displaystyle\overset{n}{\underset{k=2}{\Pi}}\left(\frac{k}{k-1}\right)^{\gamma_{k}}\,u(L^{n/2}x, L^n). \end{split} \end{equation} If we let $A_{*}=A(\log L)^{-\gamma}$ and assume that $\gamma_n\rightarrow\gamma$ as $n\rightarrow\infty$, Eq.
(\ref{eq:An_sys}) implies \begin{equation}\label{eq:An_sys2} \begin{split} u_n(x, 1) & \sim L^{\gamma_{1}+(n-1)/2}\,\displaystyle\overset{n}{\underset{k=2}{\Pi}}\left(\frac{k}{k-1}\right)^{\gamma_{k}}\,L^{-n/2}n^{-\gamma_{n}}A_{*}\phi(x)\\ & \sim L^{\gamma_{1}-1/2}\displaystyle\overset{n}{\underset{k=2}{\Pi}}\left(\frac{k}{k-1}\right)^{\gamma_{k}-\gamma_{n}}A_{*}\phi(x). \end{split} \end{equation} The above equation holds since $\gamma_{n}\rightarrow\gamma$ as $n\rightarrow\infty$ and, by the telescoping identity $\displaystyle\overset{n}{\underset{k=2}{\Pi}}\frac{k}{k-1}=n$, we have $n^{-\gamma_{n}}=\displaystyle\overset{n}{\underset{k=2}{\Pi}}\left(\frac{k}{k-1}\right)^{-\gamma_{n}}$. Eq. (\ref{eq:An_sys2}) is equivalent to \begin{equation} L^{1/2-\gamma_1}\displaystyle\overset{n}{\underset{k=2}{\Pi}}\left(\frac{k}{k-1}\right)^{\gamma_n-\gamma_k}u_n(x, 1) \sim A_{*}\phi(x). \end{equation} If we define \begin{equation} A_{u, n}= L^{1/2-\gamma_{1}}\displaystyle\overset{n}{\underset{k=2}{\Pi}}\left(\frac{k}{k-1}\right)^{\gamma_{n}-\gamma_{k}},\quad n > 1, \end{equation} we expect that $A_{u, n}\rightarrow A_{*}$ for $n$ large enough, provided $u_n(x, 1)\rightarrow \phi(x)$. Since $\phi=\sqrt{4\pi}\phi_1$, where $\phi_1$ is the Gaussian function in Eq. (\ref{eq:gaussian}) with $d=1$, this implies that $A\sqrt{4\pi}=B$, where $B$ is the theoretical prediction in Eq. (\ref{eq:B}). Therefore \begin{equation}\label{eq:Astar} A_{u, n}\rightarrow A_{*}=A(\log L)^{-\gamma} = \frac{B (\log L)^{-\gamma}}{\sqrt{4\pi}}. \end{equation} For $A_{v, n}$ we expect $A_{v,n}\rightarrow A_v = \displaystyle\frac{A}{\sqrt{4\pi d}}$, where $A$ is the conserved total mass, the same as before. We summarize the modified nRG procedure for the chemical reaction problem with the choice of parameters $\beta_1=\beta_2=1/2$ in Algorithm 2. \begin{algorithm}[t] \begin{algorithmic} \label{alg2:nrg} \For {$n=0,1, 2,\ldots,$ until convergence} \begin{enumerate} \item[1.] Start with the IVP (\ref{eq:chem-reaction}) for $n=0$.
Evolve $u_n$ and $v_n$ from $t=1$ to $t=L$, using the IVP (\ref{eq:un-sys}) for $n \ge 1$. \item[2.] Compute $\gamma_{n}$ for the $u$-component by \begin{equation}\label{eq:modified-RG-u-comp} \begin{split} &L^{\gamma_{1}} = \frac{||u_0(\cdot,1)||_{\infty}}{||u_0(\cdot,L)||_{\infty}}, \\ &L^{1/2}\left(\frac{n}{n-1}\right)^{\gamma_{n}} = \frac{||u_{n-1}(\cdot,1)||_{\infty}}{||u_{n-1}(\cdot,L)||_{\infty}},\quad n \ge 2. \end{split} \end{equation} Compute $\alpha_{2, n}$ for the $v$-component by \begin{equation*} L^{\alpha_{2, n}} = \frac{||v_{n-1}(\cdot,1)||_{\infty}}{||v_{n-1}(\cdot,L)||_{\infty}}. \end{equation*} \item[3.] Compute $A_{u, n}= L^{1/2-\gamma_{1}}\displaystyle\overset{n}{\underset{k=2}{\Pi}}\left(\frac{k}{k-1}\right)^{\gamma_{n}-\gamma_{k}}$, $n > 1$; $A_{v, n}=L^{n(\alpha_{2, n}-\bar{\alpha}_{2, n})}$, where $\bar{\alpha}_{2, n} = \left(\alpha_{2,1}+\cdots+\alpha_{2,n}\right)/n$. \item[4.] Set initial data for the next iteration by $f_{u, n+1} = L^{1/2}\left(\frac{n}{n-1}\right)^{\gamma_{n}}u_n\left(L^{1/2}x,L\right)$ and $f_{v, n+1}(x)=L^{\alpha_{2, n}}v_n\left(L^{1/2}x,L\right)$. \end{enumerate} \EndFor \end{algorithmic} \caption{The nRG procedure for the chemical reaction system} \end{algorithm} \section{A numerical experiment for the logarithmic decay}\label{sec:AM2_example} To illustrate that the modified RG algorithm accurately captures the hidden logarithmic decay, we apply Algorithm 2 to the diffusion-reaction equations (\ref{eq:chem-reaction}) with $p=q=3/2$. For this set of parameters, the asymptotic behavior of the solutions follows Eqs. (\ref{eq:log_decay}) and (\ref{eq:B}). Thanks to the choice of $\beta_1=\beta_2=1/2$, the diffusivities in Eq. (\ref{eq:un-sys}) remain 1 and $d$, respectively. Here we choose $d=3/4$. The system of PDEs (\ref{eq:un-sys}) is discretized by an explicit second-order method (forward Euler for the time derivative and the second-order centered difference for the spatial second derivative).
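The interior update of this discretization can be sketched in a few lines. Below, the iteration-dependent prefactors multiplying $u_n^p v_n^q$ in Eq. (\ref{eq:un-sys}) are assumed to have been collected into two scalars `cu` and `cv`; the function names and the zero-Dirichlet treatment of the boundary are our illustrative choices:

```python
import numpy as np

def lap(w, dx):
    """Second-order centered difference for w_xx; boundary entries 0."""
    out = np.zeros_like(w)
    out[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
    return out

def euler_step(u, v, dx, dt, d, p, q, cu, cv):
    """One forward-Euler step of  u_t = u_xx   - cu * u^p v^q,
                                  v_t = d v_xx + cv * u^p v^q."""
    r = u**p * v**q
    return u + dt * (lap(u, dx) - cu * r), v + dt * (d * lap(v, dx) + cv * r)
```

A quick sanity check on this discretization: when the two reaction prefactors coincide, the reaction terms cancel in the sum $u+v$, so the discrete total mass is conserved exactly for compactly supported data, mirroring the conservation of $A$ in the continuous system.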
The parameters used in our nRG algorithm are $L=1.25$, $x\in [-10, 10]$, $\Delta x= 0.04$, and $\Delta t=0.00025$. The number of nRG iterations is 3000. The theoretical prediction for the critical exponents is $\gamma =2$ (power of the logarithmic decay for $u$) and $\alpha = 0.5$ (power of the power-law decay for $v$). Figure \ref{fig:exponents} shows that Algorithm 2 accurately captures these two exponents. Meanwhile, the theoretical prediction for the pre-factor $A_{*}$ is $A_{*}\approx 1016.89$, computed by Eq. (\ref{eq:Astar}), and the pre-factor $A_{v}$ is $A_{v}\approx 0.6515$, the same as our previous calculation; we observe from Figure \ref{fig:prefactors} that both $A_{u, n}$ and $A_{v, n}$ numerically converge to their respective theoretical values. Finally, in the previous Section \ref{sec:no-decay}, our calculation suggested that the original nRG algorithm captures the power-law exponents and produces final similarity profiles that match the theoretical prediction in \cite{LQ03}, without taking into account the logarithmic decay. In this experiment, we use the hint from the previous calculation to fix the exponent of the power-law decay for the $u$-component, and we modify the RG algorithm to include the logarithmic decay. The modified RG algorithm captures the critical exponents and renders numerically convergent pre-factors for both components. It remains to show whether the similarity profiles produced by the modified RG algorithm match the theoretical prediction. Sure enough, Figure \ref{fig:Gaussian_profile_mnrg} shows that the modified RG algorithm produces similarity profiles that match the theoretical prediction exactly after we adjust the amplitudes by multiplying by the factor $1 / \sqrt{4\pi d}$, where $d=1$ for $u$ and $d=3/4$ for $v$, respectively.
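For reference, step 2 of Algorithm 2 can be inverted explicitly for $\gamma_n$ from the measured sup-norm ratio. A minimal sketch (the function name is ours):

```python
import math

def gamma_from_ratio(norm_t1, norm_tL, L, n):
    """Solve  L^{1/2} (n/(n-1))^{gamma_n} = ||u_{n-1}(.,1)|| / ||u_{n-1}(.,L)||
    for gamma_n.  Valid for n >= 2; for n = 1 the plain power-law
    relation L^{gamma_1} = ratio is used instead."""
    ratio = norm_t1 / norm_tL
    return (math.log(ratio) - 0.5 * math.log(L)) / math.log(n / (n - 1))
```

Feeding this the sup norms produced at the end of each evolution step yields the sequence $\gamma_n$ plotted in Figure \ref{fig:exponents}(a).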
\begin{figure}[bhtp] \centering (a) \includegraphics[width=2.8in]{gamma_3000-eps-converted-to.pdf} (b) \includegraphics[width=2.8in]{alpha_3000-eps-converted-to.pdf} \caption{Comparison of the theoretical prediction and the nRG computation for the exponents: (a) the sequence of the logarithmic decay exponent $\gamma_n\rightarrow\gamma =2$ for $u$, and (b) the sequence of the power-law decay exponent $\alpha_n\rightarrow \alpha =0.5$ for $v$.} \label{fig:exponents} \end{figure} \begin{figure}[bhtp] \centering (a) \includegraphics[width=2.8in]{Aun_3000-eps-converted-to.pdf} (b) \includegraphics[width=2.8in]{Avn_3000-eps-converted-to.pdf} \caption{Comparison of the theoretical prediction and the nRG computation for the pre-factors: (a) $A_{u, n}\rightarrow A_{*} \approx 1016.89$, and (b) $A_{v, n}\rightarrow A_v\approx 0.6515$. } \label{fig:prefactors} \end{figure} \begin{figure}[bhtp] \centering (a) \includegraphics[width=2.8in]{u_profile_mnrg-eps-converted-to.pdf} (b) \includegraphics[width=2.8in]{v_profile_mnrg-eps-converted-to.pdf} \caption{Comparison between the computed Gaussian similarity profile by Algorithm 2 and the predicted theoretical profile in \cite{LQ03} at $n=3000$, after adjusting the amplitudes. (a) $u$-component, (b) $v$-component.} \label{fig:Gaussian_profile_mnrg} \end{figure} \section{Concluding Remarks} We have presented and systematically examined a numerical procedure, based on the RG theory for PDEs, that enables the detailed and efficient computation of asymptotically self-similar dynamics in solutions of PDEs. The effectiveness and robustness of the nRG algorithms were illustrated through several examples of quasilinear and nonlinear PDEs combining diffusive, reactive and nonlinear propagation effects.
It is worth noting that the modified RG algorithm presented in Sections \ref{sec:AM2} and \ref{sec:AM2_example} for the nonlinear system of cubic autocatalytic chemical reaction equations nicely responds to the remark made by Li and Qi \cite{LQ03}: \begin{quote} ``The appearance of $\log t$ indicates the analysis is more involved and subtle. In particular, it is well known in the scientific computation field that a scaling of $\log t$ is hardly detectable in computation." \end{quote} by detecting the extra decay and capturing the power of the logarithmic decay. We refer readers to \cite{Isaia_thesis} for some preliminary results on multidimensional problems obtained using a numerical scaling strategy similar to that described in this paper. A proper modification of the described RG algorithm can be used to compute traveling waves and is currently under investigation. We are also investigating the applicability of an adapted version of the RG algorithm to blow-up problems. We expect to report our results in the future. \section{Acknowledgement} GAB thanks the Department of Mathematics at the University of Wyoming for the hospitality which made possible this collaboration. LL is partially supported by NSF DMS 1413273.
\section{Introduction} The induction of the Chern-Simons-like Lorentz- and CPT-violating term, given by ${\cal L}_{CS}=\frac{1}{2}k_{\mu}\epsilon^{\mu\nu\lambda\theta}F_{\nu\lambda}A_{\theta}$, where $k_{\mu}$ is a constant vector characterizing a preferred direction of space-time, is one of the most important results in the study of Lorentz symmetry violation \cite{JK,JK2}. This term, which is known to have important implications such as birefringence of light in the vacuum \cite{biref}, naturally emerges as a quantum correction in the theory suggested in \cite{JK2} as a possible extension of QED: \begin{eqnarray}\label{eq:01} {\cal L}_{QED}=\bar{\psi}( i \partial\!\!\!/ - m )\psi - \bar{\psi} b\!\!\!/ \gamma_{5}\psi - e\bar{\psi} A\!\!\!/ \psi, \end{eqnarray} where $b_{\mu}$ is a parameter introducing CPT symmetry breaking. Carrying out the integration over the fermions, one obtains the relation between the coefficients $k_{\mu}$ and $b_{\mu}$ in terms of loop integrals, some of which are divergent. Therefore, one has to implement some regularization scheme to calculate these integrals, and the constant relating the coefficients $k_{\mu}$ and $b_{\mu}$ turns out to depend on the regularization scheme used \cite{bpp,csym}. Such dependence on the regularization scheme has been intensively discussed in \cite{Jack2,Perez, bonneau}. However, an alternative study, free of ambiguities, was recently put forward in \cite{prl}. Based on the theory (\ref{eq:01}), the purpose of our study is to investigate different possibilities of finding ambiguities inherent to the generation of the Chern-Simons-like term via quantum corrections in four dimensions. We do this by using the derivative expansion method of the fermion determinant \cite{de} and the imaginary-time formalism.
This work is organized as follows. In Section \ref{secI} we investigate the induction of the Chern-Simons-like term via quantum corrections by using distinct approaches to deal with the exact fermion propagator. We find distinct relations between the coefficients $k_{\mu}$ and $b_{\mu}$ for the different approaches, even within the same regularization scheme. Therefore, we conclude that different approaches to deal with the exact fermion propagator of the theory lead to a new ambiguity in the problem of the radiatively induced Chern-Simons-like term. In Section \ref{secII} we develop another method to investigate the present issue. By modifying the derivative expansion method, we obtain a different self-energy tensor. We then use a specific regularization scheme to find a finite result identical to that obtained in Section \ref{secI} with another regularization scheme. This seems to imply the absence of ambiguities in our calculations. Finally, in Section \ref{secIII} we present our conclusions. \section{Inducing Chern-Simons-like term: Two different approaches} \label{secI} In this section, we focus on the induction of the Chern-Simons-like term by expanding the self-energy (\ref{I1}) and using two distinct approaches to deal with the exact fermion propagator (\ref{eq:06}) up to the leading order in $b$. We find a new ambiguity because two different results appear. The one-loop effective action $S_{eff}[b,A]$ of the gauge field $A_{\mu}$ related to theory (\ref{eq:01}) can be expressed in the form of the following functional trace: \begin{eqnarray} S_{eff}[b,A]=-i\,{\rm Tr}\,\ln(p\!\!\!/- m - b\!\!\!/\gamma_5-e A\!\!\!/) . \end{eqnarray} This functional trace can be represented as $S_{eff}[b,A]=S_{eff}[b]+S_{eff}^{\,\prime}[b,A]$, where the first term $S_{eff}[b]=-i\,{\rm Tr}\ln(p\!\!\!/- m - b\!\!\!/\gamma_5)$ does not depend on the gauge field.
The only nontrivial dynamics is concentrated in the second term $S_{eff}^{\,\prime}[b,A]$, which is given by the following power series: \begin{eqnarray}\label{eq:02} S_{eff}^{\,\prime}[b,A]=i\,{\rm Tr} \sum_{n=1}^{\infty}\frac1n \Biggl[\frac1{p\!\!\!/- m - b\!\!\!/\gamma_5}\,e A\!\!\!/\Biggr]^n. \end{eqnarray} To obtain the Chern-Simons-like term we should expand this expression up to second order in the gauge field: \begin{eqnarray}\label{eq:03} S_{eff}^{\,\prime}[b,A]=S_{eff}^{(2)}[b,A]+\ldots \end{eqnarray} The dots in (\ref{eq:03}) stand for higher-order terms in the gauge field. Here \begin{eqnarray}\label{eq:04} S_{eff}^{(2)}[b,A]=\frac{ie^{2}}{2}{\rm Tr}\Biggl[\frac1{p\!\!\!/- m - b\!\!\!/\gamma_5}\;A\!\!\!/\;\frac1{p\!\!\!/- m - b\!\!\!/\gamma_5}\,\;A\!\!\!/\Biggr]. \end{eqnarray} Using the derivative expansion method \cite{de} one can find that the one-loop contribution to $S_{\rm eff}^{(2)}[b,A]$ reads \begin{equation}\label{eq:05} S_{\rm eff}^{(2)}[b,A(x)]=\frac{1}{2}\int d^4x\;\Pi^{\alpha\mu\nu}F_{\alpha\mu}A_{\nu}, \end{equation} where the one-loop self-energy $\Pi^{\alpha\mu\nu}$ is given by \begin{equation}\label{I1} \Pi^{\alpha\mu\nu}=-\frac{ie^{2}}{2}\int\,\frac{d^{4}p}{(2\pi)^{4}}\,\text{tr}\bigl[S_{b}(p)\,\gamma^{\mu}\,S_{b}(p)\,\gamma^{\alpha}\,S_{b}(p)\,\gamma^{\nu}\bigr], \end{equation} with \begin{eqnarray}\label{eq:06} S_b(p)=\frac{i}{ p\!\!\!/- m - b\!\!\!/\gamma_5} \end{eqnarray} the $b^{\mu}$-dependent exact fermion propagator of the theory. \subsection{Approach I: Fermion propagator rationalized} Firstly, we use the approximation developed in \cite{Ebert}, where the exact propagator (\ref{eq:06}) is rationalized in the form \begin{equation}\label{eq:07} S_{b}(p)=i\Big[\frac{p\!\!\!/+m-\gamma_{5}b\!\!\!/}{(p^{2}-m^{2})}-\frac{2\gamma_{5}(mb\!\!\!/-(b\cdot p))(p\!\!\!/+m)}{(p^{2}-m^{2})^{2}}\Big]+\cdots.
\end{equation} Substituting (\ref{eq:07}) into (\ref{I1}), we can calculate the trace of the gamma matrices, resulting in the following expression for the self-energy tensor \cite{bpp}: \begin{eqnarray}\label{eq:08} \Pi^{\mu\alpha\nu}_{\bf r}&=&-2ie^{2}\int\,\frac{d^{4}p}{(2\pi)^{4}}\frac{1}{(p^{2}+m^{2})^{3}}\{3\varepsilon^{\alpha\mu\nu\theta}\bigl[b_{\theta}(p^{2}-m^{2})-2p_{\theta}(b\cdot p)\bigr]\nonumber\\&&-2b_{\theta}\bigl[\varepsilon^{\beta\mu\nu\theta}p_{\beta}p^{\alpha} +\varepsilon^{\alpha\beta\nu\theta}p_{\beta}p^{\mu}+\varepsilon^{\alpha\mu\beta\theta} p_{\beta}p^{\nu}\bigr]\}, \end{eqnarray} where the subscript ${\bf r}$ indicates the self-energy tensor obtained from the rationalized propagator, up to first order in the $b$-coefficient. In Eq.~(\ref{eq:08}), we pass from Minkowski to Euclidean space by performing the Wick rotation $x_{0}\to-ix_{0}$, $p_{0}\to ip_{0}$, $b_{0}\to ib_{0}$, $d^{4}x\to -id^{4}x$ and $d^{4}p\to id^{4}p$. Note that, by power counting, the momentum integral in (\ref{eq:08}) involves a finite term and terms with logarithmic divergences. In order to regularize such divergences, we use a scheme in which we implement translations only on the space coordinates of the momentum $p_{\rho}$ \cite{grig}. Hence, we have \begin{eqnarray} p_{\rho}\to\vec{p}_{\rho}+p_0\delta_{0\rho}. \end{eqnarray} We use covariance under spatial rotations, which allows us to carry out the following replacement: \begin{eqnarray} \vec{p}_{\rho}\vec{p}^{\sigma}\to\frac{\vec{p}^2}{D}(\delta^{\sigma}_{\rho}-\delta_{\rho 0}\delta^{\sigma}_0). \end{eqnarray} Thus, \begin{eqnarray} &&2p_{\rho}(b\cdot p)\to 2\left(b_{\rho}\frac{\vec{p}^2}{D}-b_0\delta_{\rho 0}(\frac{\vec{p}^2}{D}-p^2_0) \right),\nonumber\\&&2p_{\beta}p^{\alpha}\to2\left(\delta_{\beta}^{\alpha}\,\frac{\vec{p}^{\,2}}{D}-\delta_{\beta 0}\delta_{0}^{\alpha}(\frac{\vec{p}^2}{D}-p^2_0)\right). \end{eqnarray} Only the terms above can contribute to the Chern-Simons structure.
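The angular replacement $\vec{p}_{\rho}\vec{p}^{\,\sigma}\to(\vec{p}^{\,2}/D)\,\delta^{\sigma}_{\rho}$ on the purely spatial indices is just the statement that the average of $\hat{p}_i\hat{p}_j$ over directions is $\delta_{ij}/D$. A quick Monte Carlo spot-check of this identity in $D=3$ (purely illustrative; not part of the derivation):

```python
import numpy as np

# Average p_i p_j over uniformly random unit directions in D = 3;
# isotropy gives <p_i p_j> = delta_ij / D.
rng = np.random.default_rng(0)
D = 3
p = rng.normal(size=(200_000, D))
p /= np.linalg.norm(p, axis=1, keepdims=True)   # unit vectors on the sphere
avg = np.einsum('ni,nj->ij', p, p) / len(p)     # sample mean of p_i p_j
```

The sample mean approaches $\mathrm{diag}(1/3,1/3,1/3)$, with vanishing off-diagonal entries, confirming the replacement rule.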
Therefore, we can split the expression (\ref{eq:04}) into a sum of two parts, ``covariant" and ``noncovariant", i.e., \begin{eqnarray}\label{cov1} S^{\rm cov}_{\rm eff}=\frac{(-i)}{2}\int\,d^{4}x\,I_{1}\varepsilon^{\alpha\mu\nu\beta}\,b_{\beta} F_{\alpha\mu}A_{\nu}, \end{eqnarray} with \begin{eqnarray} I_{1}&=&\frac{-3ie^{2}}{2\pi}\,\int\,\frac{d^D\vec{p}}{(2\pi)^D}\,\int_{-\infty}^{+\infty}\,dp_{0} \,\Big[\frac{(1-\frac{4}{D})\vec{p}^{2}+p_{0}^{2}-m^{2}}{(\vec{p}^{2}+p_{0}^{2}+m^{2})^{3}}\Big], \end{eqnarray} and \begin{eqnarray} S^{\rm ncv}_{\rm eff}&=&\frac{i}{2}\int\,d^{4}xI_{2}\,\bigl[3\varepsilon^{\alpha\mu\nu 0}\,b_{0}F_{\alpha\mu}A_{\nu}+ b_{\theta}(\varepsilon^{0\mu\nu\theta}F_{0 \mu}A_{\nu}\nonumber\\&&+\varepsilon^{\alpha 0 \nu\theta}F_{\alpha 0}A_{\nu}+\varepsilon^{\alpha\mu 0\theta}F_{\alpha\mu}A_{0})\bigr], \end{eqnarray} with \begin{eqnarray} I_{2}=\frac{ie^{2}}{\pi}\,\int\,\frac{d^D\vec{p}}{(2\pi)^D} \int_{-\infty}^{+\infty}\,dp_{0} \,\Big[\frac{\frac{\vec{p}^{2}}{D}-p_{0}^{2}}{(\vec{p}^{2}+p_{0}^{2}+m^{2})^{3}}\Big]. \end{eqnarray} The $p_{0}$ integrals in $I_{1}$ and $I_{2}$ are finite and can be calculated by the residue theorem. Hence, we have \begin{eqnarray} I_{1}=\frac{3ie^{2}}{8 D}\,\int\,\frac{d^D\vec{p}}{(2\pi)^D} \,\Big[\frac{2(3-D)\vec{p}^{\,2}+Dm^{2}}{(\vec{p}^{2}+m^{2})^{5/2}}\Big], \end{eqnarray} and \begin{eqnarray} I_{2}=-\frac{ie^{2}}{8 D}\,\int\,\frac{d^D\vec{p}}{(2\pi)^D} \,\Big[\frac{(3-D)\vec{p}^{\,2}-Dm^{2}}{(\vec{p}^{2}+m^{2})^{5/2}}\Big]. \end{eqnarray} Now, we can integrate over the spatial momentum in $D$ dimensions \cite{GM}, finding \begin{eqnarray} I_{1}=\frac{3ie^{2}}{8(4\pi)^{D/2}} \frac{\Gamma{(1+\frac{\epsilon}{2})}}{\Gamma(\frac{5}{2})(m^{2})^{\epsilon/2}}, \end{eqnarray} where $\epsilon=3-D$. The integral $I_{2}$ vanishes identically.
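The $p_0$ integration leading from the first form of $I_1$ to its $D$-dimensional form can be verified symbolically. Matching the overall prefactors, the claim amounts to the $p_0$ integral of the $I_1$ integrand being equal to $-\frac{\pi}{4D}\,\bigl[2(3-D)\vec{p}^{\,2}+Dm^{2}\bigr]/(\vec{p}^{\,2}+m^{2})^{5/2}$. A sympy spot-check (symbol names are ours):

```python
import sympy as sp

p0, P, m, D = sp.symbols('p0 P m D', positive=True)

# p0-integrand of I1, with the overall factor -3*i*e^2/(2*pi) kept outside:
J = sp.integrate(((1 - 4/D)*P**2 + p0**2 - m**2)
                 / (P**2 + p0**2 + m**2)**3, (p0, -sp.oo, sp.oo))

# Equating -3/(2*pi) * J with the quoted D-dimensional integrand
# 3/(8*D) * (2*(3-D)*P**2 + D*m**2)/(P**2 + m**2)**(5/2) requires:
claimed = -(sp.pi/(4*D)) * (2*(3 - D)*P**2 + D*m**2) \
          / (P**2 + m**2)**sp.Rational(5, 2)
diff = sp.simplify(J - claimed)
```

The difference simplifies to zero, confirming the residue calculation.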
Therefore, the effective action Eq.(\ref{cov1}) in the limit $D=3$ can be written in the form \begin{eqnarray}\label{eq:09} S^{\rm cov}_{\rm eff}=\frac{1}{2}\int\,d^{4}x\,\varepsilon^{\alpha\mu\nu\beta}\,k_{\beta} F_{\alpha\mu}A_{\nu}, \end{eqnarray} where \begin{eqnarray}\label{eq:10} k_{\beta}=\frac{e^{2}}{16\pi^{2}}\,b_{\beta}, \end{eqnarray} which coincides with the result of \cite{Csw} in the Schwinger constant-field approximation. \subsection{Approach II: Fermion propagator expansion} Now we use the approximation developed in \cite{pexp} to expand the exact propagator (\ref{eq:06}) up to the first order in the $b$-coefficient: \begin{eqnarray}\label{eq:10.1} S_{b}(p)=\frac{i}{p\!\!\!/-m}+\frac{i}{p\!\!\!/-m}\,(-ib\!\!\!/\gamma_5)\,\frac{i}{p\!\!\!/-m}+\cdots. \end{eqnarray} In this case, we have our self-energy tensor in the form \cite{csym,jhep1}: \begin{eqnarray}\label{eq:11} \Pi^{\mu\alpha\nu}_{\bf e}&=&-2ie^{2}\epsilon^{\mu\alpha\nu\rho}\int\,\frac{d^{4}p}{(2\pi)^{4}}\,\frac{1}{(p^{2}+m^{2})^{3}} \nonumber\\&&\times[b_{\rho}(p^{2}-3m^{2})-4p_{\rho}(b\cdot p)], \end{eqnarray} where $\Pi^{\mu\alpha\nu}_{\bf e}$ denotes the expanded self-energy tensor. In Eq.(\ref{eq:11}), we also pass from Minkowski space to Euclidean space. Note that by power counting, the momentum integral in Eq.(\ref{eq:11}) also presents a finite term and another that diverges logarithmically. Thus, we use the same regularization scheme adopted in the rationalized fermion propagator approach. In this way, we obtain the following effective action in the limit $D=3$: \begin{eqnarray}\label{eq:12} \tilde{S}^{\rm cov}_{\rm eff}=\frac{1}{2}\int\,d^{4}x\,\varepsilon^{\alpha\mu\nu\beta}\,\tilde{k}_{\beta} F_{\alpha\mu}A_{\nu}, \end{eqnarray} where \begin{eqnarray}\label{eq:13} \tilde{k}_{\beta}=\frac{e^{2}}{4\pi^{2}}\,b_{\beta}. \end{eqnarray} Here, the ``noncovariant'' part is also absent.
The result (\ref{eq:13}) is equivalent to the one obtained in \cite{aa}, where one uses a physical cutoff for fermions. According to the results (\ref{eq:10}) and (\ref{eq:13}), we found that the use of different approaches to deal with the exact fermion propagator leads to distinct relations between the coefficients $k_{\mu}$ and $b_{\mu}$ for the Chern-Simons-like term. Our results establish the following relation: \begin{eqnarray} &&\tilde{k}_{\beta}=k_{\beta}+\frac{3 e^{2}}{16\pi^{2}}\,b_{\beta}, \end{eqnarray} which means \begin{eqnarray}\label{eq:14} \Delta k_{\beta}=\frac{3 e^{2}}{16\pi^{2}}\,b_{\beta}. \end{eqnarray} The result (\ref{eq:14}) corresponds to the difference between the results (\ref{eq:10}) and (\ref{eq:13}), and is identical to the result of Ref.~\cite{jhep1}, which originated from the self-energy (\ref{eq:11}) in another regularization scheme. The same result has been found in the literature \cite{pexp,p-v}. \section{Other aspects on the induced Chern-Simons-like term} \label{secII} In this section, we present an alternative method to compute the induced Chern-Simons-like term, which is independent of the approach used to deal with the exact fermion propagator. Let us rewrite Eq.(\ref{eq:02}) in the form \begin{eqnarray}\label{eq:15} S_{eff}^{\,\prime}[b,A]=i\,{\rm Tr} \sum_{n=1}^{\infty}\frac1n \Biggl[\frac1{p\!\!\!/- m - e A\!\!\!/}\,b\!\!\!/\gamma_5\Biggr]^n. \end{eqnarray} To obtain the Chern-Simons-like term we should expand this expression up to the leading order in $b$. Thus, for $n=1$, we have \begin{eqnarray}\label{eq:16} S_{eff}^{(1)}[b,A]&=&i{\rm Tr}\Biggl[\frac1{p\!\!\!/- m -eA\!\!\!/ }\;b\!\!\!/\gamma_5\Biggr].
\end{eqnarray} Using the relation \begin{equation} \frac{1}{A-B}=\frac{1}{A}+\frac{1}{A}B\frac{1}{A}+\frac{1}{A}B\frac{1}{A}B\frac{1}{A}+\cdots \end{equation} for $A=p\!\!\!/-m$ and $B=eA\!\!\!/$, we find \begin{eqnarray} S_{eff}^{(1)}[b,A]=ie^{2}{\rm Tr}\Biggl[\frac1{p\!\!\!/- m}\,b\!\!\!/\gamma_5\,\frac1{p\!\!\!/- m}\,A\!\!\!/\,\frac1{p\!\!\!/- m}\,A\!\!\!/\Biggr], \end{eqnarray} where we have used the cyclic property of the trace of the product of $\gamma$-matrices. By using the derivative expansion method, we find the following effective action: \begin{equation}\label{eq:17} S_{\rm eff}^{(1)}[b,A(x)]=\frac{1}{2}\int d^4x\;\Pi^{\alpha\mu\nu}F_{\alpha\mu}A_{\nu}, \end{equation} where the one-loop self-energy $\Pi^{\alpha\mu\nu}$ is given by \begin{eqnarray}\label{eq:18} \Pi^{\mu\alpha\nu}&=&-2ie^{2}\int\,\frac{d^{4}p}{(2\pi)^{4}}\frac{1}{(p^{2}-m^{2})^{3}} \{\varepsilon^{\alpha\mu\nu\theta}b_{\theta}(p^{2}-m^{2})\nonumber\\&&-2b_{\theta} \bigl[\varepsilon^{\alpha\nu\theta\beta}p_{\beta}p^{\mu} -\varepsilon^{\alpha\mu\theta\beta}p_{\beta}p^{\nu}\bigr]\}. \end{eqnarray} Note that in the self-energy tensor (\ref{eq:18}) there exists a convergent contribution, while the remaining term diverges logarithmically. However, differently from the previous situations, in this case the calculation of the divergent integrals is very delicate. Calculating this self-energy tensor by using the same regularization scheme of Section \ref{secI}, we find a null result, and in turn, the absence of the Chern-Simons term. On the other hand, there also exists the possibility of using in Eq.(\ref{eq:18}) the relation \begin{equation}\label{lorentz} \int\frac{d^{4}p}{(2\pi)^{4}}p_{\mu}p_{\nu}\,f(p^{2})= \frac{g_{\mu\nu}}{4}\int\frac{d^{4}p}{(2\pi)^{4}}p^{2}\,f(p^{2}), \end{equation} which naturally removes the logarithmic divergence.
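To trace how the finite piece arises (a sketch, up to index-ordering conventions): applying (\ref{lorentz}) with $p_{\beta}p^{\mu}\to\frac{1}{4}\,g_{\beta}^{\;\mu}\,p^{2}$, and using the antisymmetry of the Levi-Civita symbol, the bracketed term in (\ref{eq:18}) collapses as
\begin{eqnarray}
-2b_{\theta}\bigl[\varepsilon^{\alpha\nu\theta\beta}p_{\beta}p^{\mu}-\varepsilon^{\alpha\mu\theta\beta}p_{\beta}p^{\nu}\bigr]\to
-p^{2}\,b_{\theta}\,\varepsilon^{\alpha\mu\nu\theta},
\end{eqnarray}
so that the two terms in the braces combine to $(p^{2}-m^{2})-p^{2}=-m^{2}$, and the integrand of (\ref{eq:18}) becomes proportional to $m^{2}/(p^{2}-m^{2})^{3}$. The remaining momentum integral is finite,
\begin{eqnarray}
\int\frac{d^{4}p}{(2\pi)^{4}}\,\frac{1}{(p^{2}-m^{2})^{3}}=\frac{-i}{32\pi^{2}m^{2}},
\end{eqnarray}
and produces the coefficient $e^{2}/16\pi^{2}$ quoted below.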
As a result, we have only the finite contribution \begin{equation} \Pi^{\mu\alpha\nu}=\frac{e^{2}}{16\pi^{2}}\,\varepsilon^{\mu\alpha\nu\theta} \,b_{\theta}. \end{equation} In this case, we find \begin{equation}\label{Ef7} S_{\rm eff}^{(1)}[b,A(x)]=\frac{1}{2}\int d^4x\;k_{\beta}\varepsilon^{\mu\alpha\nu\beta}\partial_{\alpha}A_{\mu}A_{\nu}, \end{equation} where \begin{equation} \label{c2} k_{\beta}=\frac{e^{2}}{16\pi^{2}}b_{\beta}. \end{equation} Note that the result (\ref{c2}) is the same as the result found in (\ref{eq:10}), where the exact fermion propagator was rationalized up to the first order in the $b$-coefficient. Thus, we have here another surprising effect: the result $k_{\beta}=\frac{e^{2}}{16\pi^{2}}b_{\beta}$, given in (\ref{eq:10}) as a result of dimensional regularization and also in (\ref{c2}) as a result of the Lorentz-preserving regularization (\ref{lorentz}), appears to be \emph{independent} of the regularization scheme when the self-energy tensor is properly evaluated. The result (\ref{c2}) was also obtained in the work \cite{p-v} by using the massless exact fermion propagator. We observe that the factor $e^{2}/16\pi^{2}$ is exactly the same as the one found in the well-known Adler-Bell-Jackiw anomaly \cite{abj,peskin,abj1}. \section{Conclusions} \label{secIII} We have investigated the induction of the Chern-Simons-like term via quantum corrections in two different situations. Firstly, we applied the same regularization to different approaches used to deal with the exact fermion propagator up to the leading order in the $b$-coefficient. In this case, our results are finite and agree with other results in the literature, but they do not agree with each other, since they depend on the approach used. Moreover, the two approaches generate values whose difference is exactly $\Delta k_{\beta}=(3 e^{2}/{16\pi^{2}})\,b_{\beta}$, which agrees with the result of Ref.~\cite{jhep1} found in another context.
We conclude that this is due to the different approximations of the exact fermion propagator of the theory. The problem was also investigated in a context independent of the approaches used to deal with the exact fermion propagator. In this case, we modified the derivative expansion method and obtained a new self-energy tensor for the effective action. The momentum integrals were calculated by using another regularization scheme. As a result, we obtained a relation between the coefficients $k_{\beta}$ and $b_{\beta}$ identical to the one obtained in the case where the exact fermion propagator was rationalized. We also observed that the parameter of proportionality between these two coefficients is exactly the same as the one found in the well-known Adler-Bell-Jackiw anomaly. In our calculations, we also observed that the ``noncovariant'' contributions to the Chern-Simons-like term are absent, as was anticipated in \cite{bpp} in the finite temperature context. Therefore, we insist that a complete comprehension of these questions will require further investigations. {\bf Acknowledgments.} The author would like to thank A. Kosteleck\'{y} for useful comments and F.A. Brito for interesting discussions. This work was partially supported by Conselho Nacional de De\-sen\-vol\-vi\-men\-to Cient\'{\i}fico e Tecnol\'{o}gico (CNPq).
\section{Introduction} Partially Observable Markov Decision Process (POMDP) \citep{ASTROM1965174} provides a principled and generic framework to model real world sequential decision making processes. Unlike Markov Decision Process (MDP), the observations of a POMDP are generally non-Markovian. Therefore, to make optimal decisions, the agent needs to consider all historical information, which is usually intractable. One effective solution is to obtain the belief state. The belief state is defined as the probability distribution of the unobservable environment state conditioned on the past observations and actions \citep{kaelbling1998planning}. Such a belief state accurately summarizes the history. Traditional methods of calculating belief states \citep{smallwood1973optimal, sondik1971optimal, kaelbling1998planning} assume a finite discrete state space with a known model. In many real world problems, however, the underlying model remains unknown, and the state space is large and even continuous. To track belief states in POMDPs with continuous state and action spaces, another line of works \citep{thrun1999monte, silver2010monte} uses Monte Carlo algorithms like particle filters to estimate belief states. \citet{nishiyama2012hilbert} proposes to solve the POMDP based on models defined in appropriate reproducing kernel Hilbert spaces (RKHSs). However, this requires access to samples from hidden states during training. With the recent advances of deep learning technologies, recent works mainly focus on POMDPs with unknown models and continuous state spaces. To capture belief states, a branch of works including \citet{hausknecht2015deep, gregor2019shaping} uses vector-based representations, i.e., deterministic latent vectors, to represent belief states. However, vector-based belief states may fall short in making predictions for multiple future trajectories (as discussed in Appendix \ref{append-relatedworks}).
Another line of works proposes to learn belief states by approximating belief state distributions. The current state-of-the-art performance on many visual-motor control tasks is also achieved in this manner by sequentially maximizing the observation probability at each timestep using variational inference \citep{hafner2019dream, zhu2020bridging, okada2020planet, ma2020contrastive}. They approximate the belief states with distributions like diagonal Gaussians \citep{krishnan2015deep, han2019variational, gregor2019temporal, hafner2019learning, hafner2019dream, lee2020stochastic}, Gaussian mixtures \cite{tschiatschek2018variational}, categorical distributions \cite{hafner2021mastering}, or particle filters \cite{ma2020particle,igl2018deep}. However, they still cannot capture general belief states due to the intractability of complex distributions in high-dimensional continuous space. They either suffer from the curse of dimensionality, or instead make some assumptions and learn only the approximated distributions. Such approximations impose strong restrictions and are problematic. Taking the Gaussian belief assumption as an example: as shown in Figure \ref{latent}, the blue area denotes the unobservable state space of the POMDP. Given the past information $\tau$, the agent maintains a prior distribution of the state $s$, denoted as $p(s|\tau)$ (the distribution in white). Each colored distribution corresponds to the belief state after receiving a different new observation $o$, denoted as the posterior distribution $q(s|\tau,o)$. Consider an example of the true beliefs as shown in Figure \ref{latent}(b), with their Gaussian approximations shown in Figure \ref{latent}(a). The approximation error of Gaussian distributions will easily result in problems of intersecting beliefs, which lead to mixed-up states (e.g., the white triangle), and empty beliefs, which lead to meaningless states (e.g., the grey triangle).
This also explains the poor reconstruction quality in interactive environments observed by \citet{okada2021dreaming}. Furthermore, as mentioned in \citet{hafner2021mastering}, the Gaussian approximation of belief states also makes it difficult to predict multi-modal future behaviours. Therefore, it is preferable to relax the Gaussian assumptions and use a more flexible family of distributions to learn accurate belief states as shown in Figure \ref{latent}(b). For a more detailed discussion of the related works, please check Section \ref{sec:relatedwork} and Appendix \ref{append-relatedworks}. \begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{figs/latent.png}} \caption{Difference between (a) spherical Gaussian belief states and (b) true belief states (better viewed in color). The spherical Gaussian belief states in (a) approximate the true belief states in (b) using Gaussian assumptions, which may result in intersection points (the white triangle) or vacancy points (the gray triangle) in the state space.} \label{latent} \end{center} \vskip -0.2in \end{figure} In this paper, we propose a new method called \textbf{F}l\textbf{O}w-based \textbf{R}ecurrent \textbf{BE}lief \textbf{S}tate model (FORBES) that is able to learn general continuous belief states for POMDPs. FORBES incorporates Normalizing Flows \citep{tabak2013family, rezende2015variational, dinh2017density} into the variational inference step to construct flexible belief states. In experiments, we show that FORBES allows the agent to maintain flexible belief states, which result in multi-modal and precise predictions as well as higher quality reconstructions. We also demonstrate the results of combining FORBES with downstream RL algorithms on challenging visual-motor control tasks (DeepMind Control Suite, \cite{tassa2018deepmind}). The results show the efficacy of FORBES in terms of improving both performance and sample efficiency.
Our contributions can be summarized as follows: \begin{itemize} \item We propose FORBES, the first flow-based belief state learning algorithm that is capable of learning general continuous belief states for POMDPs. \item We incorporate FORBES into a POMDP RL framework for visual-motor control tasks that can fully exploit the benefits brought by FORBES. \item Empirically, we show that FORBES allows the agent to learn flexible belief states that enable multi-modal predictions as well as high quality reconstructions and help improve both performance and sample efficiency for challenging visual-motor control tasks. \end{itemize} \section{Preliminaries} \subsection{Partially Observable Markov Decision Process} \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{figs/POMDP2.png} \caption{ The PGM of POMDP. The grey circle represents the unobservable hidden states $s$, while the observations $o$, rewards $r$ are observable, and the actions $a$ are determined by the agent.} \label{PGM-POMDP} \end{figure} Formally, a Partially Observable Markov Decision Process (POMDP) is a 7-tuple $(\mathcal{S}, \mathcal{A}, T, R, \Omega, O, \gamma)$, where $ \mathcal{S} $ is a set of states, $ \mathcal{A} $ is a set of actions, $ T $ is a set of conditional transition probabilities between states, $ R $ is the reward function, $\Omega $ is a set of observations, $ O $ is a set of conditional observation probabilities, and $\gamma $ is the discount factor. At each timestep $t-1$, the state of the environment is $s_{t-1} \in \mathcal{S}$. The agent takes an action $a_{t-1} \in \mathcal{A}$, which causes the environment to transit to state $s_t$ with probability $T\left(s_t \mid s_{t-1}, a_{t-1}\right)$. The agent then receives an observation $o_t\in \Omega$ which depends on the new state of the environment $s_t$ with probability $O\left(o_t \mid s_t\right)$. Finally, the agent receives a reward $r_{t-1}$ equal to $R(s_{t-1})$. 
The agent's goal is to maximize the expected sum of discounted rewards $\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r_t \right]$. Such a POMDP model can also be described as a probabilistic graphical model (PGM) as shown in Figure \ref{PGM-POMDP}. After having taken action $a_{t-1}$ and observing $o_t$, an agent needs to update its belief state, which is defined as the probability distribution of the environment state conditioned on all historical information \begin{equation} \begin{aligned} b(s_t)&=p(s_t\mid \tau_t, o_t)\\ \end{aligned} \label{eq:belief-def} \end{equation} where $\tau_t = \{o_1, a_1, \ldots, o_{t-1}, a_{t-1}\}$. \subsection{Normalizing Flow} Instead of using the Gaussian family to approximate the prior and posterior belief distributions, we believe it is more desirable to use a family of distributions that is highly flexible, and preferably flexible enough to describe all possible true belief states. Therefore, we use Normalizing Flows \citep{tabak2013family,rezende2015variational} to parameterize those distributions. Rather than directly parameterizing statistics of the distribution itself, Normalizing Flows model the transformations, or the “flow” process, needed to derive such a distribution. More specifically, a flow describes a sequence of invertible mappings that gradually transform a relatively simple probability density to a more flexible and complex one. Let $f_\theta: \mathbb{R}^{D} \rightarrow \mathbb{R}^{D}$ be an invertible and differentiable mapping in state space parameterized by $\theta$.
Given a random variable $\mathbf{x} \in \mathbb{R}^{D}$ with probability distribution $p(\mathbf{x})$, we can derive the probability of the transformed random variable $\mathbf{z} = f_{\theta}(\mathbf{x})$ by applying the change of variable formula: \begin{align} p(\mathbf{z}) &= p(\mathbf{x}) \left| \det \frac{\partial f_{\theta}^{-1}}{\partial \mathbf{z}} \right| \\ \log p(\mathbf{z}) &= \log p(\mathbf{x}) - \log \left| \det \frac{\partial f_{\theta}}{\partial \mathbf{x}} \right| \end{align} To construct a highly flexible family of distributions, we can propagate the initial random variable $\mathbf{z}_0$ through a sequence of $K$ mappings and get $\mathbf{z}_K = f_{\theta_K} \circ f_{\theta_{K-1}} \circ \cdots \circ f_{\theta_1} (\mathbf{z}_0)$ with the probability \begin{equation} \log p_K(\mathbf{z}_K) = \log p(\mathbf{z}_0) - \sum_{k=1}^{K} \log \left| \det \frac{\partial f_{\theta_{k}}}{\partial \mathbf{z}_{k-1} } \right| \end{equation} Given a relatively simple distribution of $\mathbf{z}_0$, say, a Gaussian distribution, by iteratively applying the transformations, the flow is capable of representing a highly complex distribution with a probability density that remains tractable. The parameters $\theta_1,\ldots,\theta_K$ determine the transformations of the flow. An effective transformation that is widely adopted is the affine coupling layer \citep{dinh2017density, kingma2018glow, kingma2017improving}. Given the input $\mathbf{x} \in \mathbb{R}^{D}$, let $s$ and $t$ stand for scale and translation functions which are usually parameterized by neural networks, where $s,t: \mathbb{R}^{k} \rightarrow \mathbb{R}^{D-k}, k<D$.
The output, $\mathbf{y}$, can be viewed as a concatenation of its first $k$ dimensions $\mathbf{y}_{1:k}$ and the remaining part $\mathbf{y}_{k+1:D}$: \begin{align} \mathbf{y}_{1:k} &= \mathbf{x}_{1:k}, \nonumber \\ \mathbf{y}_{k+1:D} &= \mathbf{x}_{k+1:D} \odot \mathrm{exp}(s(\mathbf{x}_{1:k})) + t(\mathbf{x}_{1:k}) \end{align} where $\odot$ denotes the element-wise product (see details about affine coupling layer in Appendix \ref{affine details}). \section{Flow-based Recurrent Belief State Learning} \subsection{Flow-based Recurrent Belief State model} \label{sec:method-1} We propose the \textbf{F}l\textbf{O}w-based \textbf{R}ecurrent \textbf{BE}lief \textbf{S}tate model (FORBES) which learns general continuous belief states via normalizing flows under the variational inference framework. Specifically, the FORBES model consists of components needed to construct the PGM of POMDP as shown in Figure \ref{PGM-POMDP}: \begin{equation} \label{Eq:models} \begin{aligned} \mathrm{State\ transition\ model:}\quad &p(s_t|s_{t-1},a_{t-1})\\ \mathrm{Observation\ model:}\quad &p(o_t|s_t)\\ \mathrm{Reward\ model:}\quad &p(r_t|s_t)\\ \end{aligned} \end{equation} \vspace{-8pt} In addition, we have a belief inference model $q(s_t|\tau_t,o_t)$ to approximate the true posterior distribution $p(s_t|\tau_t,o_t)$ as defined in Equation \ref{eq:belief-def}, where $\tau_t = \{o_1, a_1, \ldots, o_{t-1}, a_{t-1}\}$ is the past information. 
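To make the affine coupling construction from Section 2.2 concrete, here is a minimal NumPy sketch (our own illustration, not the paper's implementation; \texttt{s\_net} and \texttt{t\_net} are stand-ins for the learned scale and translation networks):

```python
import numpy as np

def affine_coupling(x, s_net, t_net, k):
    """Forward pass of one affine coupling layer.

    x: (D,) input; s_net, t_net: functions R^k -> R^(D-k).
    Returns the transformed vector y and the exact log|det Jacobian|.
    """
    x1, x2 = x[:k], x[k:]
    s, t = s_net(x1), t_net(x1)
    y = np.concatenate([x1, x2 * np.exp(s) + t])
    log_det = np.sum(s)  # Jacobian is block-triangular, so log-det = sum of s
    return y, log_det

def affine_coupling_inverse(y, s_net, t_net, k):
    """Exact inverse: recover x from y (the first k dims are untouched)."""
    y1, y2 = y[:k], y[k:]
    s, t = s_net(y1), t_net(y1)
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])
```

Because the first $k$ dimensions pass through unchanged, the inverse and the Jacobian determinant are both available in closed form, which is what keeps the flow's density tractable.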
The above components of FORBES can be optimized jointly by maximizing the Evidence Lower BOund (ELBO) \citep{jordan1999viintro} or more generally the variational information bottleneck \citep{tishby2000ib,alemi2016vib}: \vspace{-15pt} {\small\begin{multline} \log p(o_{1:T}, r_{1:T}|a_{1:T}) \\ \begin{aligned} &\geq\sum_{t=1}^T \Big( \E{q(s_t|o_{\leq t},a_{<t})}{\log p(o_t|s_t) + \log p(r_t | s_t)} \quad \\ &\quad - \Ebelow[\big]{q(s_{t-1}|\tau_{t-1}, o_{t-1})}{D_{\mathrm{KL}}(q(s_t|\tau_t, o_t) \| p(s_t|s_{t-1},a_{t-1}))} \Big) \doteq \mathcal{J}_{\mathrm{Model}} \end{aligned} \label{Eq:Jmodel} \end{multline}} \vspace{-5pt} Detailed derivations can be found in Appendix \ref{append-ELBO}. In practice, the state transition model, observation model, reward model, and belief inference model can be represented by stochastic deep neural networks parameterized by $\psi$: \vspace{-10pt} \begin{equation} \begin{aligned} p_\psi(s_t|s_{t-1},a_{t-1}),\ p_\psi(o_t|s_t),\ p_\psi(r_t|s_t), \ q_\psi(s_t|\tau_t,o_t) \nonumber \end{aligned} \end{equation} where their outputs usually follow simple distributions such as diagonal Gaussians. The parameterized belief inference model $q_\psi(s_t|\tau_t,o_t)$ acts as an encoder that encodes the historical information using a combination of convolutional neural networks and recurrent neural networks. \begin{figure*}[t] \begin{subfigure}{.49\textwidth} \centering \includegraphics[width=0.95\linewidth]{figs/algo-1.png} \caption{Belief state inference} \label{algo-1} \end{subfigure} \begin{subfigure}{.49\textwidth} \centering \includegraphics[width=0.95\linewidth]{figs/algo-2-white.png} \caption{Predictions beginning from different samples} \label{algo-2} \end{subfigure} \centering \caption{The algorithm framework of FORBES. Figure \ref{algo-1} shows how to calculate the prior and posterior belief distributions given previous information.
The blue arrows bring in historical observations and actions, and the green path shows the evolution of prior belief distribution. The red path takes an additional $o_t$ and shows the evolution of posterior belief distribution. Figure \ref{algo-2} shows the predictions of future trajectories starting from different samples (yellow and purple triangles) given the future actions. \label{algo-fig}} \end{figure*} In FORBES we provide special treatments for the belief inference model and the state transition model to represent more complex and flexible posterior and prior distributions. As shown in Figure \ref{algo-fig}(a), the input images $o_{1:t}$ and actions $a_{1:t-1}$ are encoded with $q_\psi(s_t|\tau_t,o_t)$ (the blue and the red path). Then our final inferred belief is obtained by propagating $q_\psi(s_t|\tau_t,o_t)$ through a set of normalizing flow mappings denoted $f_{\theta_K} \circ \cdots \circ f_{\theta_1}$ to get a representative posterior distribution $q_{\psi,\theta}(s_t|\tau_t, o_t)$. For convenience, we denote $q_0=q_\psi$ and $q_K=q_{\psi,\theta}$. On the other hand, $o_{1:t-1}$ and $a_{1:t-2}$ are encoded with $q_\psi(s_{t-1}|\tau_{t-1},o_{t-1})$ (the blue path), then the state transition model is used to obtain the prior guess of the state $p_\psi(s_t\mid \tau_t)=\mathbb{E}_{q_\psi(s_{t-1}|\tau_{t-1},o_{t-1})} \left[p_\psi(s_t\mid s_{t-1},a_{t-1})\right]$ (the green path). Then our final prior is obtained by propagating $p_\psi(s_t|\tau_t)$ through another set of normalizing flow mappings denoted $f_{\omega_K} \circ \cdots \circ f_{\omega_1}$ to get a representative prior distribution $p_{\psi,\omega}(s_t|\tau_t)$. For convenience, we denote $p_0=p_\psi$ and $p_K=p_{\psi,\omega}$. Then as shown in Figure \ref{algo-fig}(b), we can sample the initial state $s_t$ (the yellow and purple triangles) from the belief states $q_K(s_t\mid\tau_t,o_t)$. 
For each sampled initial state, we can use the state transition model to predict the future states $\hat{s}_{t+h}$ given the future actions $a_{t:t+h-1}$, and then use the observation model to reconstruct the observations $\hat{o}_{t+h}$, where $h$ is the prediction horizon. With the above settings, we can substitute the probability densities inside the KL-divergence term in Equation \ref{Eq:Jmodel} with the flow-transformed densities: \vspace{-22pt} \begin{equation} \small \begin{aligned} \label{eq:KL} \log q_K(s_t|\tau_t, o_t) &= \log q_0(s_t|\tau_t, o_t) - \sum_{k=1}^{K} \log \left| \det \frac{\partial f_{\theta_k}}{\partial s_{t,k-1}} \right| \\ \log p_K(s_t|\tau_t) &= \log p_0(s_t|\tau_t) - \sum_{k=1}^{K} \log \left| \det \frac{\partial f_{\omega_k}}{\partial s_{t,k-1}} \right| \\ \end{aligned} \end{equation} where $p_K(s_t\mid s_{t-1},a_{t-1})=p_K(s_t\mid \tau_t)$ given the sampled $s_{t-1}$ from $q_K(s_{1:t}|\tau_t,o_t)$. $s_{t,k}$ is the state variable $s_t$ transformed by $k$ layers of normalizing flows, and $s_{t,0}=s_t$. To further demonstrate the properties of FORBES, we provide the following theorems. \begin{theorem} \label{elbo-thm} The approximation error of the log-likelihood when maximizing $\mathcal{J}_{\mathrm{Model}}$ (the derived ELBO) defined in Equation \ref{Eq:Jmodel} is: \vspace{-10pt} \begin{equation} \begin{aligned}\label{error} &\log p(o_{1:T}, r_{1:T}|a_{1:T}) - \mathcal{J}_{\mathrm{Model}} \\ &= \Ebelow[\big]{q_K(s_{1:T}|o_{1:T},a_{1:T-1})}{ \sum_{t=1}^{T} D_{\mathrm{KL}}(q(s_t|\tau_t, o_t) \| p(s_t\mid \tau_t, o_t))} \end{aligned} \end{equation} where $p(s_t\mid \tau_t, o_t)$ denotes the true belief states. \end{theorem} Detailed proofs can be found in Appendix~\ref{sec:proofs}.
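The log-density bookkeeping in Equation (\ref{eq:KL}) can be sketched as follows (a hedged illustration with scalar states; the layer functions and names are placeholders, not the paper's code):

```python
import math

def flow_log_prob(z0, log_p0, layers):
    """Push a base sample z0 (base log-density log_p0) through flow layers.

    layers: list of (f_k, log_det_k) pairs, where log_det_k(z) returns
    log|det df_k/dz| at the layer's input. Returns (z_K, log q_K), with
    log q_K = log q_0 - sum_k log|det df_k/dz|.
    """
    z, log_p = z0, log_p0
    for f, log_det in layers:
        log_p -= log_det(z)  # subtract the layer's log-det at its input
        z = f(z)
    return z, log_p
```

With a standard-normal base and a single affine layer $f(z)=2z+1$, this bookkeeping reproduces the analytic $\mathcal{N}(1,4)$ log-density, which is a convenient sanity check for any layer implementation.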
Theorem \ref{elbo-thm} suggests that, when the learning algorithm maximizes $\mathcal{J}_{\mathrm{Model}}$ (the derived ELBO), the $D_{\mathrm{KL}}$ terms on the right-hand side are minimized; these terms measure the KL-divergence between the learned belief states $q(s_t | \tau_{t}, o_t)$ and the true belief states $p(s_t\mid \tau_t, o_t)$. Clearly, if $p(s_t\mid \tau_t, o_t)$ is a complex distribution and $q(s_t | \tau_{t}, o_t)$ is chosen from a restricted distribution class such as diagonal Gaussians, then even when the algorithm maximizes $\mathcal{J}_{\mathrm{Model}}$ (the derived ELBO), there will still be a potentially large KL-divergence between the learned and the true belief states. \begin{figure*}[t!] \centering \includegraphics[width=.9\linewidth]{figs/seqmnist.png} \centering \caption{Predictions on sequential MNIST of two models. This is a digit writing task. The fully written digits are shown in the leftmost column. We use incomplete writing processes (the first 15 frames, partially shown in the grey column) as the inputs and let the models predict the complete digit (as shown in the blue/green columns). The results show that FORBES can make precise yet diverse predictions with less blur and no mode mixup.} \label{fig:seq-1} \vspace{-10pt} \end{figure*} Therefore, a natural question arises: is the normalizing flow a universal distributional approximator, capable of accurately representing arbitrarily complex belief states, so that the KL-divergence terms on the right-hand side of Equation (\ref{error}) can be minimized to approach zero? The answer is yes for a wide range of normalizing flows. To be specific, \citet{teshima2020couplingbased} provides theoretical results for the family of flows used in FORBES. Besides the aforementioned affine coupling flow, many works show the distributional universality of other flows \citep{kong2020expressive, huang2018neural}.
Ideally, the universal approximation property of the flow model $q_K(s_t\mid \tau_t,o_t)$ allows us to approximate the true posterior $p(s_t\mid \tau_t,o_t)$ with arbitrary accuracy. Thus, compared to previous methods, FORBES helps close the gap between the log-likelihood and the ELBO to obtain a more accurate belief state. Though we usually cannot achieve the ideal zero KL-divergence in practice, our method can attain a smaller approximation error, or equivalently a higher ELBO, than previous works. We verify this statement in Section \ref{exp:seqMNIST}. \subsection{ POMDP RL framework based on FORBES } To show the advantage of the belief states inferred by the FORBES model compared to the existing belief inference methods in visual-motor control tasks, we incorporate FORBES into a flow-based belief reinforcement learning algorithm for learning the optimal policy in POMDPs. Inspired by \citet{hafner2019dream}, the algorithm follows an actor-critic framework but is slightly modified to better exploit the flexible nature of FORBES: The critic estimates the accumulated future rewards, and the actor chooses actions to maximize the estimated accumulated rewards. Instead of using only one sample, both the actor and critic operate on top of the samples of belief states learned by FORBES. They thus benefit from the accurate representations learned by the FORBES model. Note that this is an approximation of the true value on the belief, which avoids the intractable integration through the observation model. The critic $v_{\xi}\left(s_{\tau}\right)$ aims to predict the discounted sum of future rewards that the actor can achieve given an initial state $s_t$, known as the state value $\mathbb{E}\left(\sum_{\tau=t}^{\infty} \gamma^{\tau-t} r_{\tau}\right)$, where $\xi$ denotes the parameters of the critic network and $H$ is the prediction horizon.
We leverage temporal-difference learning to estimate this value, where the critic is trained towards a value target that is constructed from the intermediate reward and the critic output for the next step's state. In order to trade off the bias and the variance of the state value estimation, we use the more general TD($\lambda$) target \citep{sutton2018rlbook}, which is a weighted average of n-step returns for different horizons and is defined as follows: \vspace{-5pt} \begin{equation} V^\lambda_\tau \doteq \hat{r}_\tau + \hat{\gamma}_\tau \begin{cases} (1 - \lambda) v_\xi(s_{\tau+1}) + \lambda V^\lambda_{\tau+1} & \text{if}\quad \tau<t+H, \\ v_\xi(s_{t+H}) & \text{if}\quad \tau=t+H. \\ \end{cases} \end{equation} To better utilize the flexible belief states from FORBES, we run the sampling method multiple times to capture the diverse predictions. Specifically, we sample $N$ states from the belief state given by FORBES and then rollout trajectories of future states and rewards using the state transition model and the reward model. Finally, we train the critic to regress the TD($\lambda$) target return using a mean squared error loss: \begin{equation}\label{Eq:JCritic} \mathcal{J}_{\mathrm{Critic}}(\xi) = \mathbb{E}\Big[ \textstyle \sum_{i=1}^{N} \textstyle \sum_{\tau=t}^{t+H} \frac{1}{2} \big( v_\xi(s_{i, \tau}) - \operatorname{sg}(V^\lambda_{i, \tau}) \big)^2 \Big]. \end{equation} where $\operatorname{sg}(\cdot)$ is the stop gradient operation.
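The TD($\lambda$) recursion above admits a simple backward implementation. The sketch below is our own illustration (with a constant discount $\gamma$ rather than the learned $\hat{\gamma}_\tau$; function and variable names are hypothetical):

```python
import numpy as np

def lambda_returns(rewards, next_values, gamma, lam):
    """Backward recursion for the TD(lambda) targets V^lambda.

    rewards[i]     = r_{t+i}        for i = 0..H-1
    next_values[i] = v(s_{t+i+1})   for i = 0..H-1
    Bootstraps with V^lambda_{t+H} = v(s_{t+H}) = next_values[-1].
    """
    H = len(rewards)
    targets = np.empty(H)
    next_target = next_values[-1]  # boundary case tau = t+H
    for i in reversed(range(H)):
        targets[i] = rewards[i] + gamma * (
            (1.0 - lam) * next_values[i] + lam * next_target)
        next_target = targets[i]
    return targets
```

With $\lambda=1$ this reduces to the discounted Monte Carlo return with a bootstrapped tail, and with $\lambda=0$ it reduces to the one-step TD target, which is the bias-variance trade-off the weighted average is designed to interpolate.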
The actor $a_{\tau} \sim q_{\phi}\left(a_{\tau} \mid s_{\tau}\right)$ aims to output actions that maximize the prediction of long-term future rewards made by the critic, and is trained directly by backpropagating the value gradients through the sequence of sampled states and actions, i.e., by maximizing: \begin{equation}\label{Eq:JActor} \mathcal{J}_{\mathrm{Actor}}(\phi) = {\mathbb{E}}\left(\sum_{i=1}^{N} \sum_{\tau=t}^{t+H} V^{\lambda}_{i,\tau}\right) \end{equation} We jointly optimize the model loss $\mathcal{J}_{\mathrm{Model}}$ with respect to the model parameters $\psi$, $\theta$ and $\omega$, the critic loss $\mathcal{J}_{\mathrm{Critic}}$ with respect to the critic parameters $\xi$, and the actor loss $\mathcal{J}_{\mathrm{Actor}}$ with respect to the actor parameters $\phi$ using the Adam optimizer with different learning rates: \vspace{-10pt} \begin{equation} \underset{\psi,\xi,\phi,\theta,\omega}{\mathrm{min}} \quad \alpha_{0}\mathcal{J}_{\mathrm{Critic}}(\xi) -\alpha_{1}\mathcal{J}_{\mathrm{Actor}}(\phi) -\alpha_{2}\mathcal{J}_{\mathrm{Model}}(\psi, \theta, \omega) \label{eq:joint_loss} \end{equation} where $\alpha_{0}$, $\alpha_{1}$, $\alpha_{2}$ are coefficients for the different components, and we summarize the whole optimization framework in Algorithm \ref{algo-Forbes}. \begin{algorithm}[tb] \caption{FORBES Algorithm} \label{algo-Forbes} \begin{algorithmic} \STATE {\bfseries Input:} buffer $\mathcal{B}$, imagination horizon $H$, interacting step $T$, batch size $B$, batch length $L$, number of trajectories $N$. \STATE Initialize buffer $\mathcal{B}$ with $S$ random seed episodes. \WHILE{ not converged } \FOR {$c=1,\dots,C$} \STATE Draw $B$ data sequences $\{(o_t,a_t,r_t)\}_{t=k}^{k+L}$ from $\mathcal{B}$ \STATE Infer belief state $q_K(s_t|s_{t-1}, a_{t-1}, o_t)$. \FOR {$i=1,\dots,N$} \STATE Rollout imaginary trajectories $\{(s_{i,\tau},a_{i,\tau})\}_{\tau=t}^{t+H}$ with the belief transition model.
\ENDFOR \STATE For each $s_{i,\tau}$, predict rewards $p_\psi(r_{i,\tau}|s_{i,\tau})$ and values $v_{\xi}(s_{i,\tau})$ \COMMENT{{\color{gray}\emph{Calculate returns}}} \STATE Update $\theta,\omega,\xi,\phi,\psi$ using Equations (\ref{Eq:Jmodel}), (\ref{eq:KL}), (\ref{Eq:JCritic}), (\ref{Eq:JActor}) and (\ref{eq:joint_loss}) \COMMENT{{\color{gray}\emph{Optimize parameters}}} \ENDFOR \STATE Reset environment and get $o_1$. \FOR{$t=1,\dots,T$} \STATE Compute $s_t \sim q_K(s_t | s_{t-1}, a_{t-1}, o_t)$ from history. \STATE Compute $a_t \sim \pi(a_t|s_t)$ with action model. \STATE Add exploration noise to action. \STATE Execute $a_t$ and get $o_{t+1}, r_t$. \ENDFOR \STATE Add experience to buffer $\mathcal{B} = \mathcal{B} \cup \{(o_t,a_t,r_t)_{t=1}^T\}$ \ENDWHILE \end{algorithmic} \end{algorithm} \section{Experiments} Our experiments evaluate FORBES on two image-based domains. We first demonstrate the belief learning capacity on a digit writing task in Section \ref{exp:seqMNIST}, and show that FORBES captures beliefs that allow for multi-modal yet precise long-term predictions as well as a higher ELBO. For large-scale experiments, we test the proposed POMDP RL framework based on FORBES in Section \ref{exp:dmc}. The results on multiple challenging visual-motor control tasks from the DeepMind Control Suite \citep{tassa2018deepmind} show that FORBES outperforms baselines in terms of performance and sample efficiency. In Section \ref{exp:ablation}, we further provide ablation studies of the multiple imagined trajectories technique used in our method. \subsection{Digit Writing Tasks} \label{exp:seqMNIST} In this experiment, we validate the capacity of FORBES by modelling a partially observable sequence with visual inputs. We adopt the MNIST Sequence Dataset \citep{mnist_seq}, which consists of sequences of handwritten MNIST digit strokes. This problem can be viewed as a special case of a POMDP whose action space is $\emptyset$ and whose rewards are always $0$. 
Such a problem setting separates the belief learning and policy optimization problems and allows us to concentrate on the former in this section. We convert each digit stroke to a sequence of images of size $28 \times 28$ to simulate the writing process. At time step $t$, the agent observes $o_t$, an image in which the first $t$ pixels of the stroke have been drawn, and we train the agent by maximizing $\mathcal{J}_{\mathrm{Model}}$ in Equation (\ref{Eq:Jmodel}), excluding the reward reconstruction term. \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{figs/testelbo_.png} \centering \caption{ELBO on digit writing.} \label{fig:seq-elbo} \end{figure} As shown in Figure \ref{fig:seq-1}, we randomly select three digits as examples (see Appendix \ref{sup:digit-exp} for more results) and show the inputs as well as the prediction outputs of our model and the RSSM~\citep{hafner2019learning} baseline, the previous state-of-the-art method for learning continuous belief states of POMDPs. The leftmost column is the ground truth of the fully written digits. During testing, we feed the initial 15 frames $\{o_1, o_2, \cdots, o_{15}\}$ to the model, and the columns in grey exhibit a part of the inputs. Then we sample several states from the inferred belief state, roll them out via the learned state transition model (Equation (\ref{Eq:models})) for 15 steps, and show the reconstruction results of the predictions. As shown in the blue and green columns on the right of Figure \ref{fig:seq-1}, though RSSM can also predict the future strokes in general, the reconstructions are relatively blurry and mix different digits up. It also fails to give diverse predictions. In contrast, FORBES makes precise yet diverse predictions: each prediction is clear and distinct from other digits. Given the beginning of the digit 7, FORBES successfully predicts both 7 and 3, since they have a similar beginning. 
The results can be partially explained via the mixed-up belief and the empty belief shown in Figure \ref{latent}, supporting the claim that FORBES can better capture complex belief states. We also provide quantitative results in Figure \ref{fig:seq-elbo}, which reports the ELBO on a test set of digit sequences never seen during training. The results show that FORBES achieves a tighter ELBO, which verifies the theoretical results in Section \ref{sec:method-1}. The details of the implementation can be found in Appendix \ref{sec:hparams}. \vspace{-5pt} \subsection{Visual-motor control tasks} \label{exp:dmc} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/DMC_500K_5seeds.pdf} \centering \caption{Performance on DeepMind Control Suite. The shaded areas show the standard deviation across 5 seeds. FORBES achieves better performance and sample efficiency in various challenging tasks.} \vspace{-15pt} \label{fig:mujoco-exp} \end{figure} \vspace{-5pt} We experimentally evaluate FORBES on a variety of visual-motor control tasks from the DeepMind Control Suite \citep{tassa2018deepmind}, illustrated in Figure \ref{fig:mujoco-exp}. Across all the tasks, the observations are $64 \times 64 \times 3$ images. These environments pose different challenges. Cartpole-Swingup requires a long planning horizon and memorizing the state of the cart when it is out of view; Finger-Spin includes contact dynamics between the finger and the object; Cheetah-Run exhibits high-dimensional state and action spaces; Walker-Walk and Walker-Run are challenging because the robot has to learn to first stand up and then walk; Hopper-Stand is based on a single-legged robot, which is sensitive to the reaction force on the ground and thus needs more accurate control. 
As for baselines, we include the scores for A3C \citep{mnih2016asynchronous} with state inputs (1e9 steps), D4PG \citep{barthmaron2018distributed} (1e9 steps), PlaNet \citep{hafner2019learning} (1e6 steps) and Dreamer \citep{hafner2019dream} with pixel inputs. All baseline scores are aligned with the ones reported in \citet{hafner2019dream} (see details in Appendix \ref{baselines}). We use $N=4$ trajectories. The details of the implementations and hyperparameters can be found in Appendix \ref{sec:hparams}. Our experiments empirically show that FORBES achieves superior performance and sample efficiency on challenging visual-motor control tasks. As illustrated in Figure \ref{fig:mujoco-exp}, FORBES achieves higher scores than Dreamer \citep{hafner2019dream} in most of the tasks and achieves better performance than PlaNet \citep{hafner2019learning} with far fewer environment steps. See Appendix \ref{append-dmc-1m} for more results. We provide some insights into the results. As shown in Section \ref{exp:seqMNIST}, baselines with Gaussian assumptions may suffer from the mixed-up belief and empty belief issues, while FORBES can better capture general belief states. Furthermore, multiple imagined trajectories can better utilize the diversity in the rollout. The inner coherency among the model components therefore allows the agent to achieve better performance. We further discuss the role of multiple imagined trajectories and other components in the next section. \vspace{-5pt} \subsection{Ablation Study} \vspace{-5pt} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/DMC_ablation_2.pdf} \centering \caption{Comparison of the performance between FORBES and Dreamer with multiple imagined trajectories.} \label{fig:ablation} \vspace{-5pt} \end{figure} \label{exp:ablation} To verify that the performance gain of FORBES is not simply due to increasing the number of imagined trajectories, we conduct an ablation study in this section. 
We compare FORBES with the ``Dreamer + multiple imagined trajectories'' baseline by increasing the number of imagined trajectories in Dreamer to the same as in FORBES ($N=4$). As shown in Figure \ref{fig:ablation}, no consistent and obvious gain can be observed after increasing the number of trajectories in Dreamer. The agent gains slight improvements in two environments and suffers a slight performance loss on other tasks. This result indicates that increasing the number of imagined trajectories may only be effective when the agent can make diverse predictions as in FORBES. The Gaussian assumptions lead to a lack of trajectory diversity, so increasing the number of imagined trajectories will not effectively help. Besides, Appendix \ref{append-ablation-n} compares different $N$ to illustrate the effect of multiple imagined trajectories, and Appendix \ref{append-ablation-parameters} adds parameters to baselines to show that the performance gain is not due to additional parameters. \section{Related Work} \vspace{-5pt} \label{sec:relatedwork} \textbf{POMDP:} POMDP solving approaches can be divided into two categories based on whether their state, action and observation spaces are discrete or continuous. Discrete space POMDP solvers, in general, either approximate the value function using point-based methods \citep{kurniawati2008sarsop, shani2013survey} or use Monte-Carlo sampling in the belief space \citep{thrun1999monte, andrieu2002particle, silver2010monte, kurniawati2016online} to make the POMDP problem tractable. Monte Carlo algorithms like particle filters make it possible to handle POMDPs with continuous state spaces by maintaining sets of samples drawn from the belief states. 
Other continuous space POMDP solvers often approximate the belief states as a distribution with few parameters (typically Gaussian) and solve the problem analytically either using gradients \citep{van2012motion, indelman2015planning} or using random sampling in the belief space \citep{agha2014firm, hollinger2014sampling}. However, most of the classical POMDP methods mentioned above are based on an accurately known dynamics model, which is a restrictive assumption in many real world tasks. More recently, \citet{nishiyama2012hilbert} proposes to solve POMDPs based on models defined in appropriate RKHSs, which represent probability distributions as embeddings in RKHSs. However, the embeddings are learned from training samples, and therefore this method requires access to samples from hidden states during training. \textbf{MBRL for visual-motor control:} Recent research on model-based reinforcement learning (MBRL) for visual-motor control provides promising methods to solve POMDPs with high-dimensional continuous spaces and unknown models, since visual-motor control tasks can be naturally modelled as POMDP problems. Learning effective latent dynamics models to solve challenging visual-motor control problems is becoming feasible through advances in deep generative modeling and latent variable models \citep{krishnan2015deep,karl2016deep,doerr2018probabilistic,buesing2018learning,ha2018world,han2019variational,hafner2019learning,hafner2019dream}. Among these, recurrent state-space model (RSSM) based methods \citep{hafner2019learning,hafner2019dream} provide a principled way to learn continuous latent belief states for POMDPs by variational inference and learn behaviours based on the belief states using model-based reinforcement learning, achieving high performance on visual-motor control tasks. However, they assume the belief states obey diagonal Gaussian distributions. 
Such assumptions impose strong restrictions on belief inference and lead to limitations in practice, including mode collapse, posterior collapse and object vanishing in reconstruction \citep{bowman2015generating, salimans2015markov,okada2020dreaming}. In addition to diagonal Gaussian distributions, \citet{tschiatschek2018variational} uses a Gaussian mixture to approximate the belief states. More recently, \citet{hafner2021mastering} proposes to approximate the belief states by assuming a discrete latent space, resulting in superior performance. In contrast, our algorithm makes no such assumption and, according to the theoretical analysis, has the capability to approximate arbitrary continuous distributions. Other works like \citet{hausknecht2015deep, gregor2019shaping} use a vector-based representation of belief states. However, this deterministic representation prohibits the agent from consistently forecasting the future, since the reconstructed observations are multimodal and one can hardly keep the samples in the same mode across time. See Appendix \ref{append-relatedworks} for more details. A few works propose particle filter based methods that use samples to approximate the belief states~\citep{ma2020particle,igl2018deep}. However, particle filters are reported to experience the curse of dimensionality \citep{daum2003curse, cod2008} and therefore suffer from insufficient sample efficiency and performance \citep{lee2020stochastic}. For a more detailed discussion of the related works, please refer to Appendix \ref{append-relatedworks}. \textbf{Normalizing Flows:} Normalizing Flows (NF) are a family of generative models which produce tractable distributions with analytical densities. For a transformation $f:\mathbb{R}^D \rightarrow \mathbb{R}^D$, the computational time cost of the log determinant is $\mathcal{O}(D^3)$, so most previous works restrict $f$ to make the computation more tractable. 
\citet{rezende2015variational, berg2019sylvester} propose to use restricted functional forms of $f$. Another choice is to force the Jacobian of $f$ to be lower triangular by using an autoregressive model \citep{kingma2016improved, papamakarios2018masked}. These models usually excel at density estimation, but the inverse computation can be time-consuming. \citet{dinh2014nice, dinh2017density, kingma2018glow} propose the coupling method to make the Jacobian triangular and ensure that both the forward and inverse transformations can be computed in a single pass. The applications of NF include image generation \citep{ho2019flow++,kingma2018glow}, video generation \citep{kumar2019videoflow} and reinforcement learning \citep{mazoure2020leveraging,ward2019improving,touati2020randomized}. \section{Conclusion} \vspace{-5pt} General continuous belief state inference is a crucial yet challenging problem in high-dimensional Partially Observable Markov Decision Process (POMDP) problems. In this paper, we propose the \textbf{F}l\textbf{O}w-based \textbf{R}ecurrent \textbf{BE}lief \textbf{S}tate model (FORBES), which can learn general continuous belief states by incorporating normalizing flows into the variational inference framework and then effectively utilize the learned belief states in downstream RL tasks. We show theoretically that our method can accurately learn the true belief states, and we verify its effectiveness in terms of both the quality of the learned belief states and the final performance of our extended POMDP RL framework on two visual input environments. The digit writing tasks demonstrate that our method can learn general belief states that enable precise and multi-modal predictions and high-quality reconstructions. General belief inference plays a vital role in solving POMDPs, and our method paves the way towards it. 
In the future, we will explore further approaches to improve the accuracy of belief state inference and information seeking, such as combining contrastive learning and using advanced network architectures such as transformers to build normalizing flows. \section{Details of affine coupling layer for normalizing flow} \label{affine details} In this section, we introduce the details of the affine coupling layer \citep{dinh2017density}. In the forward function, we split the input $\mathbf{x}\in \mathbb{R}^{D}$ into two parts along the dimension: $\mathbf{x}=[ \mathbf{x}_{1:k}, \mathbf{x}_{k+1:D} ]$. Then, we let the first part $\mathbf{x}_{1:k}$ stay identical, so that the first $k$ dimensions of the output $\mathbf{y} \in \mathbb{R}^{D}$ are $\mathbf{y}_{1:k}=\mathbf{x}_{1:k}$. After that, we use the identical part as the input to determine the transform parameters. In our case, we define two neural networks $s,t: \mathbb{R}^k \rightarrow \mathbb{R}^{D-k}$, which stand for the scale and translation functions. They receive $\mathbf{x}_{1:k}$ as input and output the affine parameters. As in \citep{dinh2017density}, the second part can be derived by: \begin{equation} \mathbf{y}_{k+1:D} = \mathbf{x}_{k+1:D} \odot \exp(s(\mathbf{x}_{1:k})) + t(\mathbf{x}_{1:k}) \end{equation} Finally, the output $\mathbf{y}$ is the concatenation of the two parts: $\mathbf{y} = [\mathbf{y}_{1:k}, \mathbf{y}_{k+1:D}]$. The affine coupling layer is an expressive transformation with easily computed forward and inverse passes. The Jacobian of the affine coupling layer is a triangular matrix, and its log determinant can be computed efficiently. 
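The forward pass, the single-pass inverse, and the triangular log-determinant described above can be sketched as follows. The linear scale and translation maps below stand in for the learned networks $s$ and $t$ and are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D, k = 6, 3
# Hypothetical stand-ins for the learned scale/translation networks s, t.
Ws = 0.1 * rng.standard_normal((k, D - k))
Wt = 0.1 * rng.standard_normal((k, D - k))
s_net = lambda x1: np.tanh(x1 @ Ws)  # bounded scale for numerical stability
t_net = lambda x1: x1 @ Wt

def coupling_forward(x):
    """y_{1:k} = x_{1:k};  y_{k+1:D} = x_{k+1:D} * exp(s(x_{1:k})) + t(x_{1:k})."""
    x1, x2 = x[:k], x[k:]
    y2 = x2 * np.exp(s_net(x1)) + t_net(x1)
    # Triangular Jacobian: log|det J| is simply the sum of the scales.
    log_det = s_net(x1).sum()
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y):
    """Single-pass inverse: recover x_{k+1:D} using the identical part y_{1:k}."""
    y1, y2 = y[:k], y[k:]
    x2 = (y2 - t_net(y1)) * np.exp(-s_net(y1))
    return np.concatenate([y1, x2])
```

Both directions cost a single evaluation of $s$ and $t$, which is why coupling layers are convenient when sampling and density evaluation are both needed.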
\section{Hyper Parameters and implementation details} \label{sec:hparams} \paragraph{Network Architecture} We use convolutional and deconvolutional networks similar to Dreamer \citep{hafner2019dream}, a GRU \citep{cho2014learning} with $200$ units in the dynamics model, and implement all other functions as two fully connected layers of size $200$ with ReLU activations. Base distributions in the latent space are $30$-dimensional diagonal Gaussians with predicted mean and standard deviation. As for the parameter network, we use a residual network composed of one fully connected layer, one residual block, and one fully connected layer. The residual network receives $\mathbf{x}_{a}$ and $c$ as input. The input is first concatenated with the context and passed into the network. The residual block passes the input through two fully connected layers and returns the sum of the input and the output. Finally, the last layer outputs the parameters, and we use $5$ layers of affine coupling flows with an LU layer between them. In our case, we use samples from the belief distribution as the inputs to the actor and value function as an approximation to the actor and value function with the belief distribution as input. Calculating $V(b)$ requires integrating through both the observation model and the state transition model. Our approximation makes an assumption similar to QMDP to avoid integrating through the observation model. We use a GRU as the recurrent neural network to summarize the temporal information. We assume the initial state $s_0$ to be a zero vector. After taking action $a_t$, we concatenate $a_t$ with the previous state $s_t$ and pass it through a small MLP to get $y_t = f(s_t, a_t)$, and use it as the input to the GRU: $h_{t+1}, z_{t+1} = \mathrm{GRU}(h_t, y_t)$. We pass $z_{t+1}$ through an MLP to get the base prior belief distribution $p_0$ (mean and variance), and then we sample from $p_0$ and pass the sample through a sequence of normalizing flows to get a sample from $p_K$. 
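The prior-belief recurrence just described (input MLP $y_t = f(s_t, a_t)$, GRU step, base Gaussian $p_0$, flows) can be sketched roughly as below; the NumPy GRU cell and random weights are stand-ins for the learned modules, and the flow is left as an identity placeholder:

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, Hd = 30, 4, 200  # state dim, action dim, GRU hidden size (as in the paper)

def mlp(din, dout):
    W = 0.1 * rng.standard_normal((din, dout))
    return lambda x: np.tanh(x @ W)

f_in = mlp(S + A, Hd)                               # y_t = f(s_t, a_t)
to_stats = 0.1 * rng.standard_normal((Hd, 2 * S))   # z -> base mean / log-std

# Minimal single-layer GRU cell with random (untrained) weights.
Wz, Wr, Wh = (0.1 * rng.standard_normal((2 * Hd, Hd)) for _ in range(3))
def gru(h, y):
    hy = np.concatenate([h, y])
    z = 1.0 / (1.0 + np.exp(-(hy @ Wz)))            # update gate
    r = 1.0 / (1.0 + np.exp(-(hy @ Wr)))            # reset gate
    h_tilde = np.tanh(np.concatenate([r * h, y]) @ Wh)
    h_new = (1 - z) * h + z * h_tilde
    return h_new, h_new  # the output z_{t+1} is taken equal to the new hidden state

def prior_belief_sample(s_t, a_t, h_t, flow=lambda s: s):
    """One step of the prior: MLP input, GRU step, sample p_0, push through flows."""
    y = f_in(np.concatenate([s_t, a_t]))
    h_next, z_next = gru(h_t, y)
    stats = z_next @ to_stats
    mean, log_std = stats[:S], stats[S:]
    s0 = mean + np.exp(log_std) * rng.standard_normal(S)  # sample from p_0
    return flow(s0), h_next                               # flows map p_0 -> p_K
```

In the actual model the `flow` placeholder would be the stack of affine coupling layers with LU layers in between, so the returned sample follows $p_K$ rather than the base Gaussian.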
For the posterior distribution, we first use a CNN as the encoder to encode the observation $o_t$ into the feature $x_t$, and then concatenate $z_{t+1}$ and $x_t$ and pass them through an MLP to get the base posterior belief distribution $q_0$, followed by a sequence of normalizing flows. Similarly, we finally get a sample $s_{t+1}$ from $q_K$. \paragraph{Training Details} We adopt the same data buffer updating strategy as in Dreamer \citep{hafner2019dream}. First, we use a small number of $S$ seed episodes ($S=5$ in DMC experiments) with random actions to collect data. After that, we train the model for $C$ update steps ($C=100$ in DMC experiments) and conduct one additional episode to collect data with small Gaussian exploration noise added to the action. Algorithm 1 shows one of the $C$ update steps; the additional data-collection episode after the $C$ update steps is not shown in Algorithm 1. When the agent interacts with the environment, we record the observations, actions, and rewards of the whole trajectory ($(o_t, a_t, r_t)_{t=1}^{T}$) and add it to the data buffer $\mathcal{B}$. \paragraph{Hyperparameters} For DMControl tasks, we pre-process images by reducing the bit depth to 5 bits and draw batches of 50 sequences of length 50 to train the FORBES model, value model, and action model using Adam \citep{kingma2014adam} with learning rates $\alpha_{0}=5\times10^{-4}$, $\alpha_{1}=8\times10^{-5}$, $\alpha_{2}=8\times10^{-5}$, respectively, and scale down gradient norms that exceed $100$. We clip the KL regularizers in $\mathcal{J}_{Model}$ below $3.0$ free nats as in Dreamer and PlaNet. The imagination horizon is $H=15$, and the same trajectories are used to update both the action and value models. We compute the TD($\lambda$) targets with $\gamma=0.99$ and $\lambda=0.95$. As for multiple imagined trajectories, we choose $N=4$ across all environments. 
For the digit writing experiments in Section \ref{exp:seqMNIST}, we decrease the GRU hidden size to $20$, let the base distribution be a $2$-dimensional diagonal Gaussian, and only use $3$ layers of affine coupling flows. For the image processing, we simply divide the raw pixels by $255$ and subtract $0.5$ to make the inputs lie in $[-0.5, 0.5]$. \section{Extended information of Baselines} \label{baselines} For model-free baselines, we compare with D4PG \citep{barthmaron2018distributed}, a distributed extension of DDPG, and A3C \citep{mnih2016asynchronous}, a distributed actor-critic approach. D4PG is an improved variant of DDPG \citep{lillicrap2015ddpg} that uses distributed collection, distributional Q-learning, multi-step returns, and prioritized replay. We include the scores for D4PG with pixel inputs and A3C \citep{mnih2016asynchronous} with vector-wise state inputs from the DeepMind Control Suite. For model-based baselines, we use PlaNet \citep{hafner2019learning} and Dreamer \citep{hafner2019dream}, two state-of-the-art model-based RL methods. PlaNet \citep{hafner2019learning} selects actions via online planning without an action model and drastically improves over D4PG and A3C in data efficiency. Dreamer \citep{hafner2019dream} further improves the data efficiency by generating imaginary rollouts in the latent space. \newpage \section{Further Discussion on Related Works} \label{append-relatedworks} This section further discusses the relationship between our work and some related works \citep{gregor2019shaping, hafner2019dream, hafner2021mastering}. \begin{figure}[h] \centering \includegraphics[width=0.75\linewidth]{figs/shaping_belief_traj.png} \centering \caption{Comparison of FORBES and SimCore} \label{fig:sup-simcore} \end{figure} First of all, \citet{gregor2019shaping} proposes to use a flexible decoder to learn a compact vector representation of the belief state and achieves promising results. 
Though it mentions using normalizing flows as a decoder, we believe it is orthogonal to our research. To summarize, we both aim to solve POMDP problems, but our method has little in common with \cite{gregor2019shaping}, and our research directions are orthogonal: we use normalizing flows in different components for different purposes, so a direct comparison with \cite{gregor2019shaping} is not necessary. Specifically, our main contribution is to use normalizing flows to model an accurate and flexible belief state distribution, and we prove its capability from theoretical and empirical perspectives. \cite{gregor2019shaping} does not model a belief state distribution. Instead, they model the belief as a single state vector, and the belief transition is also deterministic. Their normalizing flow is only used for image reconstruction: they use expressive generative models, including normalizing flows, to reconstruct the images conditioned on the simulated future state. ``Using a convolutional DRAW outperforms flows for learning a model of complex environment dynamics'' is reported in the context of reconstruction. Moreover, directly modeling the observation model from a belief in the form of a deterministic vector may have the following deficiencies. First, SimCore appears unable to make consistent trajectory predictions by directly modeling the belief observation model, as shown in Figure \ref{fig:sup-simcore}. For instance, in the sequential MNIST setting, suppose SimCore \citep{gregor2019shaping} is well trained and we feed it a beginning sequence which is the same as the last line in our Figure 4. Assume it is equally likely to be `3' or `7', so the ConvDRAW will predict `3' and `7' each with 50\% probability at every time step. Then, when sampling a future trajectory, one cannot consistently sample the same category (`3' or `7') across time steps within the same trajectory. 
However, since we explicitly model the state distribution in FORBES, we can first sample initial states and then roll out to sample multiple state trajectories, each covering a different category. This allows us to make diverse and consistent predictions. Secondly, accurately obtaining the belief state is the main challenge in solving a POMDP. Dreamer \citep{hafner2019dream} makes a strong isotropic Gaussian assumption to learn a continuous belief distribution, while Dreamer V2 \citep{hafner2021mastering} assumes a discrete latent space. However, according to the theoretical analysis, our algorithm makes no such assumption and can approximate arbitrary continuous distributions. We believe that our method can capture more subtle multimodal patterns without restricting the belief distribution to be discrete. This allows us to learn more general distributions (at least theoretically) and leaves great potential for future work. To the best of our knowledge, we are the first to propose a normalizing flow based recurrent belief learning method to accurately obtain general continuous belief states in POMDPs. We provide theoretical analysis to illustrate that our algorithm has the potential of learning near-perfectly accurate belief states. Through the sequential MNIST experiment, we empirically show the benefits of learning a flexible belief distribution. Our method provides better reconstruction quality and can make multimodal future predictions. This flexible and accurate belief learning is essential for obtaining optimal solutions for POMDPs. As for the multiple imagined trajectories, we agree that a unimodal latent space leads to a lack of trajectory diversity, so that increasing the number of imagined trajectories will not effectively help. Our flexible belief distribution enables more accurate and multimodal future predictions by combining multiple imagined trajectories. 
Therefore, we believe our proposed method is not merely a trivial combination of different components but a new framework for flexible and accurate belief distribution learning and POMDP RL with clear motivations and theoretical/empirical results. \newpage \section{An Ablation Study on the Number of Imagined Trajectories} \label{append-ablation-n} \begin{figure}[ht] \centering \includegraphics[width=0.95\linewidth]{figs/DMC_ablation_N_full6.pdf} \centering \caption{An ablation study on the effect of different $N$ on DMC environments.} \label{fig:sup-abl-N} \end{figure} To show the effect of $N$, we vary the number of imagined trajectories on some DMC environments. We choose $N=1, 2, 4$ and run 500K environment steps. We run $N=1,2$ with $3$ different seeds, and $N=4$ with $5$ different seeds (we use the main DMC experiment results, where $N=4$). The results show that, in Finger Spin, the performance gain caused by multiple imagined trajectories is obvious. In Finger Spin, there are two objects, and their interactions may result in complex locomotion patterns. When the environment's locomotion patterns are complex and flexible enough to admit diverse possibilities, FORBES allows the agent to make diverse predictions, and the multiple imagined trajectories technique further exploits the advantages of FORBES. However, not all environments can show the advantages of multiple imagined trajectories. In environments where there is only one agent and its behavior is relatively unimodal, a larger $N$ does not effectively improve the performance, and different $N$ result in similar performances. 
\newpage \section{Extended Results on DMC} \label{append-dmc-1m} \begin{figure}[ht] \centering \includegraphics[width=0.95\linewidth]{figs/DMC_1M.pdf} \centering \caption{The training curves on DMC environments for 1M environment steps.} \label{fig:sup-DMC1M} \end{figure} We run our algorithm for 1M environment steps and show the curves in Figure \ref{fig:sup-DMC1M}. We choose 1M environment steps because the curves have converged in most of the environments. FORBES achieves higher scores than Dreamer in most of the tasks. \newpage \section{An Ablation Study on the Model Parameters} \label{append-ablation-parameters} \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{figs/DMC_more_params_full.pdf} \centering \caption{An ablation study on the effect of adding parameters to Dreamer on two DMC environments.} \label{fig:sup-addparams} \end{figure} In this section, we show that having a flexible belief state distribution, rather than introducing more parameters, is the key to improving performance. Having more parameters does not necessarily mean better performance; increasing the number of parameters may also make convergence difficult and negatively affect the sample efficiency. We add an ablation study that adds more parameters to Dreamer to test the effectiveness of having more parameters. We add one or two hidden layers to all the MLPs in RSSM, and the results are shown in Figure \ref{fig:sup-addparams}. The results show that simply adding parameters does not improve the performance. \newpage \section{Comparison of ELBO on FORBES and RSSM on DMC} \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{figs/DMC_ELBO.pdf} \centering \caption{The ELBO of FORBES and RSSM.} \label{fig:sup-DMCELBO} \end{figure} We compare the ELBO of FORBES and RSSM in DMC environments in Figure \ref{fig:sup-DMCELBO}; FORBES achieves a higher ELBO. 
\newpage \section{Evidence Lower Bound Derivations} \label{append-ELBO} The variational bound for latent dynamics models $p\left(o_{1: T}, s_{1: T} \mid a_{1: T}\right)=\prod_{t} p(s_t|s_{t-1},a_{t-1})p(o_t|s_t)$ and a variational posterior $q\left(s_{1: T} \mid o_{1: T}, a_{1: T}\right)=\prod_{t} q\left(s_{t} \mid o_{\leq t}, a_{<t}\right)$ follows from importance weighting and Jensen's inequality as shown below: \begin{equation} \begin{aligned} \log p\left(o_{1: T} \mid a_{1: T}\right) &= \log \mathbb{E}_{p\left(s_{1: T} \mid a_{1: T}\right)}\left[\prod_{t=1}^{T} p\left(o_{t} \mid s_{t}\right)\right] \\ &=\log \mathbb{E}_{q\left(s_{1: T} \mid o_{1: T}, a_{1: T}\right)}\left[\prod_{t=1}^{T} p\left(o_{t} \mid s_{t}\right) p\left(s_{t} \mid s_{t-1}, a_{t-1}\right) / q\left(s_{t} \mid o_{\leq t}, a_{<t}\right)\right] \\ &\geq \mathbb{E}_{q\left(s_{1: T} \mid o_{1: T}, a_{1: T}\right)}\left[\sum_{t=1}^{T} \log p\left(o_{t} \mid s_{t}\right)+\log p\left(s_{t} \mid s_{t-1}, a_{t-1}\right)-\log q\left(s_{t} \mid o_{\leq t}, a_{<t}\right)\right]. \end{aligned} \end{equation} We use the same factorization of $q(s_{1:T}|\tau_t, o_t)$ in the ELBO derivations and algorithm design as in \cite{hafner2019learning, hafner2019dream}. \newpage \section{Proofs of Theorem}\label{sec:proofs} \textbf{Theorem 1:} The approximation error of the lower bound is \begin{equation} \nonumber \log p(o_{1:T}, r_{1:T}|a_{1:T}) - \mathcal{J}_{\mathrm{Model}} = \mathbb{E}_{q_K(s_{1:T}|\tau_T,o_T)} \left[ \sum_{t=1}^T D_{\mathrm{KL}}(q(s_t | \tau_{t}, o_t) \| p(s_t\mid \tau_t, o_t)) \right] \end{equation} where $p(s_t\mid \tau_t, o_t)$ is the true posterior. 
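As a quick numerical sanity check of this bound, and of the KL gap in Theorem 1 vanishing at the true posterior, consider a hypothetical one-step Gaussian model $p(s)=\mathcal{N}(0,1)$, $p(o\mid s)=\mathcal{N}(s,1)$, for which both the marginal likelihood and the ELBO are available in closed form (this toy model is illustrative only, not part of FORBES):

```python
import math

def log_marginal(o):
    # p(o) = N(o; 0, 2) for prior N(0, 1) and likelihood N(o; s, 1).
    return -0.5 * math.log(2 * math.pi * 2.0) - o**2 / 4.0

def elbo(o, mu, sig2):
    # ELBO = E_q[log p(o|s)] - KL(q || p(s)) with q = N(mu, sig2),
    # both terms in closed form for this conjugate Gaussian model.
    exp_loglik = -0.5 * math.log(2 * math.pi) - 0.5 * ((o - mu) ** 2 + sig2)
    kl = 0.5 * (sig2 + mu**2 - 1.0 - math.log(sig2))
    return exp_loglik - kl
```

For any $(\mu, \sigma^2)$ the ELBO lower-bounds $\log p(o)$, and choosing $q$ equal to the true posterior $\mathcal{N}(o/2, 1/2)$ makes the bound tight, mirroring the claim of Theorem 1 that the gap is exactly the KL divergence between the variational and true posteriors.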
\textbf{Proof:} \begin{equation} \begin{split} &D_{\mathrm{KL}}(q(s_{t} \mid \tau_{t},o_{t}) \| p\left(s_{t} \mid s_{t-1}, a_{t-1}, o_{t}\right))\\ =&\int q(s_{t} \mid \tau_{t},o_{t}) \log \frac{q(s_{t} \mid \tau_{t},o_{t})}{p\left(s_{t} \mid s_{t-1}, a_{t-1}, o_{t}\right)} \mathrm{d} s_{t} \\ =&\int q(s_{t} \mid \tau_{t},o_{t}) \log \frac{q(s_{t} \mid \tau_{t},o_{t})}{\frac{p(s_{t} \mid s_{t-1},a_{t-1}) p\left(o_{t} \mid s_{t}\right)}{p(o_{t}\mid a_{1: T})}} \mathrm{d} s_{t} \\ =&\int q(s_{t} \mid \tau_{t},o_{t}) \log q(s_{t} \mid \tau_{t},o_{t}) \mathrm{d} s_{t}+\int q(s_{t} \mid \tau_{t},o_{t}) \log p(o_{t}\mid a_{1: T}) \mathrm{d} s_{t} \\ &\quad -\int q(s_{t} \mid \tau_{t},o_{t}) \log [p(s_{t} \mid s_{t-1},a_{t-1}) p(o_{t} \mid s_{t})] \mathrm{d} s_{t}\\ =& \log p(o_{t}\mid a_{1: T}) + \int q(s_{t} \mid \tau_{t},o_{t}) \log q(s_{t} \mid \tau_{t},o_{t}) \mathrm{d} s_{t} - \int q(s_{t} \mid \tau_{t},o_{t}) \log [p(s_{t} \mid s_{t-1},a_{t-1}) p(o_{t} \mid s_{t})] \mathrm{d} s_{t}\\ =& \log p(o_{t}\mid a_{1: T}) + \int q(s_{t} \mid \tau_{t},o_{t}) \log q(s_{t} \mid \tau_{t},o_{t}) \mathrm{d} s_{t} -\int q(s_{t} \mid \tau_{t},o_{t}) \log p(s_{t} \mid s_{t-1},a_{t-1}) \mathrm{d} s_{t}\\ & \quad -\int q(s_{t} \mid \tau_{t},o_{t}) \log p(o_{t} \mid s_t) \mathrm{d} s_{t}\\ =& \log p(o_{t}\mid a_{1: T}) +\int q(s_{t} \mid \tau_{t},o_{t}) \log \frac{q(s_{t} \mid \tau_{t},o_{t})}{p(s_{t} \mid s_{t-1},a_{t-1})} \mathrm{d} s_{t}-\int q(s_{t} \mid \tau_{t},o_{t}) \log p(o_{t} \mid s_{t}) \mathrm{d} s_{t}\\ =& \log p(o_{t}\mid a_{1: T}) +D_{\mathrm{KL}}\left(q\left(s_{t} \mid \tau_{t}, o_{t}\right) \| p\left(s_{t} \mid s_{t-1}, a_{t-1}\right)\right) -\mathbb{E}_{q\left(s_{t} \mid \tau_{t}, o_{t}\right)} [\log p\left(o_{t} \mid s_{t}\right)]\\ \end{split} \end{equation} For a sequence from time 1 to T, we have \begin{equation} \begin{split} &\sum_{t} D_{\mathrm{KL}}\left(q\left(s_{t} \mid \tau_{t}, o_{t}\right) \| p\left(s_{t} \mid s_{t-1}, a_{t-1}, 
o_{t}\right)\right)\\ = & \log p(o_{1: T}\mid a_{1: T})- \mathbb{E}_{q(s_{1:T}|\tau_T,o_T)} \left[ \sum_{t=1}^T (\log p(o_t | s_t) - D_{\mathrm{KL}} (q(s_t|\tau_t,o_t) \| p(s_t |s_{t-1},a_{t-1}) )) \right] \label{eq:time-sequence} \end{split} \end{equation} Then we can derive Theorem 1 from \eqref{eq:time-sequence}: \begin{equation} \begin{split} & \log p(o_{1: T},r_{1:T}\mid a_{1: T})\\ =& \mathbb{E}_{q_K(s_{1:T}|\tau_T,o_T)} \left[ \sum_{t} D_{\mathrm{KL}}\left(q\left(s_{t} \mid \tau_{t}, o_{t}\right) \| p\left(s_{t} \mid s_{t-1}, a_{t-1}, o_{t}\right)\right) \right]\\ &+\mathbb{E}_{q(s_{1:T}|\tau_T,o_T)} \left[ \sum_{t=1}^T (\log p(o_t | s_t) + \log p(r_t | s_t) - D_{\mathrm{KL}} (q(s_t|\tau_t,o_t) \| p(s_t |s_{t-1},a_{t-1}) )) \right]\\ =& \mathbb{E}_{q_K(s_{1:T}|\tau_T,o_T)} \left[ \sum_{t} D_{\mathrm{KL}}\left(q\left(s_{t} \mid \tau_{t}, o_{t}\right) \| p\left(s_{t} \mid s_{t-1}, a_{t-1}, o_{t}\right)\right) \right] + \mathcal{J}_{\mathrm{Model}}\\ =& \mathbb{E}_{q_K(s_{1:T}|\tau_T,o_T)} \left[\sum_{t} D_{\mathrm{KL}}\left(q\left(s_{t} \mid \tau_{t}, o_{t}\right) \| p\left(s_{t} \mid \tau_t, o_{t}\right)\right) \right] + \mathcal{J}_{\mathrm{Model}} \end{split} \end{equation} where $p(s_t\mid s_{t-1},a_{t-1}, o_t)=p(s_t\mid \tau_t, o_t)$ given the sampled $s_{t-1}$ from $q(s_{1:t}|\tau_t,o_t)$. \newpage \section{More Results on Digit Writing Experiments} \label{sup:digit-exp} In this section, we show additional prediction results for the digit writing experiment in Figure \ref{fig:sup-seq}.
\begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{figs/sup-mnist.png} \centering \caption{Additional prediction results on sequential MNIST of two models.} \label{fig:sup-seq} \end{figure} \newpage \section{Reconstructions of the visual control tasks} \begin{figure}[hbpt] \centering \begin{subfigure}[h]{0.7\textwidth} \centering \includegraphics[width=1.0\textwidth]{figs/sup-exp-mujoco-reconstruction/cartpole-swingup.png} \caption{Cartpole Swing Up} \end{subfigure}% \begin{subfigure}[h]{0.7\textwidth} \centering \includegraphics[width=1.0\textwidth]{figs/sup-exp-mujoco-reconstruction/cheetah-run.png} \caption{Cheetah Run} \end{subfigure} \begin{subfigure}[h]{0.7\textwidth} \centering \includegraphics[width=1.0\textwidth]{figs/sup-exp-mujoco-reconstruction/finger-spin.png} \caption{Finger Spin} \end{subfigure} \begin{subfigure}[h]{0.7\textwidth} \centering \includegraphics[width=1.0\textwidth]{figs/sup-exp-mujoco-reconstruction/hopper-stand.png} \caption{Hopper Stand} \end{subfigure}% \begin{subfigure}[h]{0.7\textwidth} \centering \includegraphics[width=1.0\textwidth]{figs/sup-exp-mujoco-reconstruction/walker-run.png} \caption{Walker Run} \end{subfigure} \begin{subfigure}[h]{0.7\textwidth} \centering \includegraphics[width=1.0\textwidth]{figs/sup-exp-mujoco-reconstruction/walker-walk.png} \caption{Walker Walk} \end{subfigure} \caption{The reconstruction results of FORBES on six environments from the DeepMind Control Suite \citep{tassa2018deepmind}.} \label{fig:sup-mu-re} \end{figure} In this section, we show the reconstructions of the visual control tasks during the evaluation phase. For each environment, we use 10 frames. For each frame, the left image is the original observation and the right image is its reconstruction. The results in Figure \ref{fig:sup-mu-re} show that FORBES can make high-quality reconstructions. The corresponding videos can be found in the supplementary material.
\section{Introduction} \label{Introduction} Calder\'on-Zygmund theory is concerned with $L^2(\mathbb{R}^n)$ bounded singular integral operators, $T$, of the form $$ Tf(x)=\int_{\mathbb{R}^n}K(x,y)f(y)\,dy, $$ where $f$ is compactly supported, $x\notin {\rm supp \,} f$, and $K$ is a kernel function defined on $(\mathbb{R}^n \times \mathbb{R}^{n}) \setminus \{(x,y):x=y\}$ that, for some $C_K>0$ and $0<\delta \leq 1$, satisfies $$ |K(x,y)|\leq \frac{C_K}{|x-y|^n} $$ whenever $x\neq y$ and $$ |K(x,y)-K(x',y')|\leq C_K\frac{|x-x'|^{\delta}+|y-y'|^{\delta}}{|x-y|^{n+\delta}} $$ whenever $|x-x'|+|y-y'|\leq \frac{1}{2}|x-y|$. The fact that these operators, known as Calder\'on-Zygmund operators, extend to be bounded on $L^p(\mathbb{R}^n)$ for all $1<p<\infty$ is of central importance in harmonic analysis. In \cite{HMW1973}, Hunt, Muckenhoupt, and Wheeden extended the Calder\'on-Zygmund theory to weighted spaces when they characterized the classes of weights, $A_p$, such that the Hilbert transform is bounded on $L^p(w)$ for $1<p<\infty$. A locally integrable function $w$ that is positive almost everywhere is an \emph{$A_p$ weight} if $$[w]_{A_p}:=\sup_{Q}\langle w \rangle_Q \langle w^{1-p'}\rangle_Q^{p-1} < \infty,$$ where $\langle w\rangle_Q := \frac{1}{|Q|}\int_Qw(x)\,dx$, $p'$ satisfies $\frac{1}{p}+\frac{1}{p'}=1$, and the supremum is taken over all cubes $Q\subseteq \mathbb{R}^n$ with sides parallel to the coordinate axes. Shortly thereafter, it was shown that any Calder\'on-Zygmund operator $T$ is bounded on $L^p(w)$ for all $1<p<\infty$ and all $w \in A_p$. However, determining the optimal dependence of $\|T\|_{L^p(w)\rightarrow L^p(w)}$ on $[w]_{A_p}$ was a much more difficult problem. Extrapolation methods allowed for a reduction to the case $p=2$, and the following optimal estimate became known as the $A_2$ conjecture: if $T$ is a Calder\'on-Zygmund operator and $w \in A_2$, then $$ \|Tf\|_{L^2(w)}\lesssim [w]_{A_2}\|f\|_{L^2(w)} $$ for any $f \in L^2(w)$.
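To illustrate the definition of the $A_p$ classes, we recall the classical example of power weights (a standard computation, not needed in the sequel). On $\mathbb{R}$, the weight $w(x)=|x|^{\alpha}$ belongs to $A_2$ if and only if $-1<\alpha<1$: for the interval $Q=(0,\ell)$,
$$
\langle w\rangle_Q\langle w^{-1}\rangle_Q=\frac{1}{\ell}\int_0^{\ell}x^{\alpha}\,dx\cdot\frac{1}{\ell}\int_0^{\ell}x^{-\alpha}\,dx=\frac{1}{(1+\alpha)(1-\alpha)},
$$
which is finite for every $\ell>0$ precisely when $|\alpha|<1$, while intervals away from the origin contribute a uniformly bounded product; for $|\alpha|\geq 1$ one of the two averages diverges. More generally, $|x|^{\alpha}\in A_p(\mathbb{R}^n)$ if and only if $-n<\alpha<n(p-1)$.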
This question was first solved by Hyt\"onen in the celebrated paper \cite{H2012}. In \cite{L2013}, Lerner pursued a different approach to the $A_2$ conjecture using a bound by positive and local operators, called \emph{sparse operators}. A sparse operator has the form $$ Sf:=\sum_{Q\in\mathcal{S}}\langle f\rangle_Q\mathbbm{1}_{Q} $$ for locally integrable $f$, where $\mathcal{S}$ is a collection of cubes satisfying the \emph{sparseness condition}: for every $Q \in \mathcal{S}$, $$ \sum_{P\in \text{ch}_{\mathcal{S}}(Q)} |P|\leq \frac{1}{2}|Q|, $$ where $\text{ch}_{\mathcal{S}}(Q)$ is the set of maximal elements of $\mathcal{S}$ that are strictly contained in $Q$. A refinement of Lerner's result states that there exists a constant $C>0$ such that for any compactly supported $f \in L^1(\mathbb{R}^n)$, there is a sparse operator $S$ satisfying $$ |Tf(x)|\leq C S|f|(x) $$ for almost every $x\in {\rm supp \,} f$, see \cites{CAR2016,L2017,LN2019}. Since optimal weighted bounds for sparse operators are immediate, this method gives a different proof of the $A_2$ conjecture. Such ``sparse domination'' results have been of immense interest following \cite{L2013}. It is natural and of independent interest to study compactness of singular integral operators in addition to the previously described theory concerning boundedness. In \cite{V2015}, the second author began this study by describing necessary and sufficient conditions for Calder\'on-Zygmund operators to extend compactly on $L^p(\mathbb{R})$ for $1<p<\infty $. Since then, a complete theory for compact Calder\'on-Zygmund operators on $L^p(\mathbb{R}^n)$ and the corresponding endpoints has been established, see \cites{PPV2017, OV2017,V2019}. 
As shown in these papers, if a Calder\'on-Zygmund operator extends compactly on $L^p(\mathbb{R}^n)$, then the kernel $K$ satisfies the estimates $$ |K(x,y)|\lesssim \frac{F_K(x,y)}{|x-y|^n} $$ whenever $x\neq y$ and $$ |K(x,y)-K(x',y')|\leq \frac{|x-x'|^{\delta}+|y-y'|^{\delta}}{|x-y|^{n+\delta}}F_K(x,y) $$ for some $0<\delta \leq 1$ whenever $|x-x'|+|y-y'|\leq\frac{1}{2}|x-y|$, where $F_K$ is a bounded function satisfying $$\lim_{|x-y|\rightarrow \infty }F_K(x,y)=\lim_{|x-y|\rightarrow 0 }F_K(x,y)=\lim_{|x+y|\rightarrow \infty }F_K(x,y)=0.$$ The main result of this theory we use here is the characterization for compactness of Calder\'on-Zygmund operators at the endpoint case from $L^1(\mathbb{R}^n)$ to $L^{1,\infty} (\mathbb{R}^n)$. The explicit statement of this result can be found in Theorem \ref{tildeT} of Section \ref{CZOs}. The aim of the current paper is to extend the theory of compact Calder\'on-Zygmund operators on $L^p(\mathbb{R}^n)$ to weighted Lebesgue spaces using sparse domination methods. \begin{thm} \label{CZOWeightedCompactness} Let $T$ be a Calder\'on-Zygmund operator that extends compactly on $L^2(\mathbb{R}^n)$. If $1<p<\infty$ and $w \in A_p$, then $T$ extends compactly on $L^p(w)$. \end{thm} \noindent The proof of Theorem \ref{CZOWeightedCompactness} involves establishing an appropriate sparse domination result, Theorem \ref{domtilde}, which is interesting in its own right. The details are described in Section 3. It is worth noting that although our proof of Theorem \ref{CZOWeightedCompactness} is direct, it is possible to achieve weighted compactness results via extrapolation methods, see, for example, the subsequent paper of Hyt\"onen \cite{H2020}. The sparse technology also allows us to deduce results that are not attainable with extrapolation as we will next describe. Motivated by results of \cites{TTV2015,L2017}, we also study properties of Haar multiplier operators.
For a bounded sequence of real numbers indexed by the standard dyadic grid of cubes $\mathcal{D}$ on $\mathbb{R}^n$, $\{\varepsilon_Q\}_{Q\in\mathcal{D}}$, the associated \emph{Haar multiplier}, $T$, is given by $$ Tf=\sum_{Q\in\mathcal{D}}\varepsilon_Q\langle f,h_Q \rangle h_Q, $$ where $h_Q$ is the Haar function adapted to $Q$. For generality, we work with Haar multipliers in the setting of arbitrary Radon measures on $\mathbb{R}^n$. See Section 2 for precise details. Estimates for Haar multiplier operators are often similar to those satisfied by Calder\'on-Zygmund operators but are easier to establish because of a Haar multiplier's diagonal structure. In this case, we obtain the following sparse bound. \begin{thm} \label{HaarSparseBound} Let $T$ be a Haar multiplier adapted to a Radon measure $\mu$ and a bounded sequence of real numbers $\{\varepsilon_Q\}_{Q \in \mathcal{D}}$. Assume that $\mu$ is supported in a dyadic cube $Q_0$. If $f$ is bounded with compact support, then there exists an operator $S_{\varepsilon}$ satisfying $$ |Tf(x)|\lesssim S_{\varepsilon }|f|(x):=\sum_{Q\in \mathcal S}\tilde \varepsilon_Q \langle |f|\rangle_{Q}\mathbbm{1}_{Q}(x) $$ for almost every $x \in {\rm supp \,} f$, where $\mathcal{S}$ is a sparse collection of cubes, ${\displaystyle \tilde \varepsilon_{Q}:=\sup_{Q'\in {\mathcal D}(Q)}|\varepsilon_{Q'}|}$, and $\mathcal{D}(Q)$ is the set of dyadic cubes properly contained in $Q$. \end{thm} The first consequence of Theorem \ref{HaarSparseBound} is a weighted bound for Haar multipliers with weights in a class strictly larger than $A_p$. For a bounded sequence of real numbers $\{\varepsilon_Q\}_{Q\in \mathcal{D}}$, $0<q<\infty$, and $1<p<\infty$, we say that a nonnegative locally integrable function $w$ is an \emph{$\varepsilon^q A_p$ weight} if $$ [w]_{\varepsilon^q A_p}:=\sup_{Q\in \mathcal{D}} |\varepsilon_{Q}|^q\langle w\rangle_Q \langle w^{1-p'}\rangle_Q^{p-1}<\infty.
$$ Notice that if $\{\varepsilon_Q\}_{Q \in \mathcal{D}}$ is a bounded sequence of real numbers, we have $ [w]_{\varepsilon^q A_p} \leq {\tilde \varepsilon^q}[w]_{A_p}, $ where $\displaystyle \tilde \varepsilon^q:= \sup_{Q\in\mathcal{D}} |\varepsilon_Q|^q$, and thus $A_p \subseteq \varepsilon^q A_p$. Again, the averages above are taken with respect to a general Radon measure $\mu$. \begin{thm} \label{Haarboundedness} Let $T$ be a Haar multiplier adapted to a Radon measure $\mu$ and a bounded sequence of real numbers $\{\varepsilon_Q\}_{Q \in \mathcal{D}}$ and let $\displaystyle {\tilde \varepsilon_{Q}:=\sup_{Q'\in {\mathcal D}(Q)}|\varepsilon_{Q'}|}$. If $2\leq p<\infty$ and $w \in \tilde \varepsilon A_p$, then $T$ is bounded on $L^p(w)$ with $$ \|Tf\|_{L^p(w)}\lesssim[w]_{\tilde\varepsilon A_p}\|f\|_{L^p(w)} $$ for all $f \in L^p(w)$; if $1< p\leq 2$ and $w \in \tilde \varepsilon^{p-1}A_p$, then $T$ is bounded on $L^p(w)$ with $$ \|Tf\|_{L^p(w)}\lesssim [w]_{\tilde\varepsilon^{p-1}A_p}^{\frac{p'}{p}}\|f\|_{L^p(w)} $$ for all $f \in L^p(w)$. \end{thm} \noindent We remark that Theorem \ref{Haarboundedness} cannot be obtained by existing extrapolation methods since it holds for weights beyond the $A_p$ classes. Moreover, if the coefficients $\varepsilon_Q$ possess extra decay, then we can deduce compactness of the associated Haar multiplier. We use Theorem \ref{HaarSparseBound} to obtain the following. \begin{thm} \label{HaarWeightedCompactness} Let $T$ be a Haar multiplier adapted to a Radon measure $\mu$ and a bounded sequence of real numbers $\{\varepsilon_Q\}_{Q \in \mathcal{D}}$ such that $$\lim_{\ell(Q)\rightarrow \infty}|\varepsilon_{Q}|=\lim_{\ell(Q)\rightarrow 0}|\varepsilon_{Q}|=\lim_{|c(Q)|\rightarrow \infty }|\varepsilon_{Q}|=0,$$ where $\ell(Q)$ and $c(Q)$ denote the side length and center of $Q$ respectively. If $1<p<\infty$ and $w \in A_p$, then $T$ is compact on $L^p(w)$. \end{thm} The paper is organized as follows.
We prove the sparse bound (Theorem \ref{HaarSparseBound}) and its applications to weighted boundedness (Theorem \ref{Haarboundedness}) and compactness (Theorem \ref{HaarWeightedCompactness}) for Haar multipliers in Section 2. We prove the sparse bound (Theorem \ref{domtilde}) and weighted compactness result (Theorem \ref{CZOWeightedCompactness}) for Calder\'on-Zygmund operators in Section 3. The authors thank Jos\'e Conde-Alonso for valuable conversations regarding this work. We also thank David Cruz-Uribe for pointing out an issue with Theorem \ref{Haarboundedness} in a previous version of this article, leading to a slightly different result. \section{Haar multipliers} \label{HaarMultipliers} \subsection{Definitions and notation} \label{HaarDef} Let $\mu$ be a Radon measure on $\mathbb{R}^n$. Throughout this section, all of our integrals, averages, inner products, etcetera will be taken with respect to $\mu$. This will change in Section \ref{CZOs}, where we will instead work with Lebesgue measure. Let $\mathcal{D}$ denote the standard dyadic grid on $\mathbb{R}^n$, that is, the family of cubes of the form $Q=\prod_{i=1}^n[2^km_i,2^k(m_i+1))$ for $k\in \mathbb Z $ and $\{m_i\}_{i=1}^n \in \mathbb Z^n$. The expression $\widehat{Q}$ denotes the parent of $Q$, namely, the unique dyadic cube such that $\ell(\widehat{Q})=2\ell(Q)$ and $Q\subseteq \widehat{Q}$. We denote by $\text{ch}(Q)$ the children of $Q$, that is, the set of cubes $R\in \mathcal{D}$ such that $\ell(R)=\ell(Q)/2$ and $R\subseteq Q$. Throughout the paper, all cubes are defined by the tensor product of intervals, and thus their sides are always parallel to the coordinate axes. For $\lambda >0$ and any cube $Q$, we write $\lambda Q$ for the unique cube that satisfies $c(\lambda Q)=c(Q)$ and $\ell(\lambda Q)=\lambda \ell(Q)$. Given a measurable set $\Omega \subseteq \mathbb R^{n}$, we denote by $\mathcal{D}(\Omega)$ the family of dyadic cubes $Q$ such that $Q\subsetneq\Omega$.
If $\Omega $ is a dyadic cube, this inclusion is equivalent to $\widehat{Q}\subseteq \Omega$. For $Q \in \mathcal{D}$ such that $\mu(Q)>0$, define the \emph{Haar function adapted to $Q$} by $$ h_{Q}:=\mu(Q)^{-\frac{1}{2}}\left(\mathbbm{1}_{Q}-\frac{\mu(Q)}{\mu(\widehat{Q})}\mathbbm{1}_{\widehat{Q}}\right). $$ We note that this notation for $h_Q$ is not standard, but it is convenient for our purposes. Using this notation, $h_{Q}$ is supported on $\widehat{Q}$ and constant on $Q$ and on $\widehat{Q} \setminus Q$. As shown in \cite{V2019}, we have $$ f=\sum_{Q\in \mathcal{D}}\langle f, h_Q\rangle h_Q $$ with convergence in $L^2(\mu )$, where we write $\langle f,g \rangle :=\int_{\mathbb R^n}fg\,d\mu$. \begin{rem} \label{HaarFrame} Since $$ \langle h_Q,h_R\rangle =\delta (\widehat{Q},\widehat{R})\Bigg(\delta (Q,R) -\frac{\mu(Q)^{\frac{1}{2}}\mu(R)^{\frac{1}{2}}}{\mu(\widehat{Q})}\Bigg), $$ where $\delta (Q,R)=1$ if $Q=R$ and zero otherwise, $\{h_Q\}_{Q\in \mathcal{D}}$ is not an orthogonal system. However, $\{h_Q\}_{Q\in\mathcal{D}}$ is a frame for $L^2(\mu)$, namely, there exist $0<C_1\leq C_2$ such that $$ C_1\|f\|_{L^2(\mu)} \leq \Big(\sum_{Q\in {\mathcal D}}\langle f, h_Q\rangle^2\Big)^{\frac{1}{2}} \leq C_2\|f\|_{L^2(\mu)}, $$ and this is enough to prove our results. \end{rem} Recall that for a bounded sequence of real numbers indexed by dyadic cubes $\{\varepsilon_Q\}_{Q\in \mathcal{D}}$, the associated \emph{Haar multiplier}, $T$, is given by $$ Tf=\sum_{Q\in\mathcal{D}}\varepsilon_Q\langle f,h_Q\rangle h_Q. $$ The previous equality is understood in the sense of almost everywhere pointwise convergence, meaning $$ Tf=\lim_{M\rightarrow \infty} \sum_{Q\in\mathcal{\tilde D}(\mathbb B_{M})}\varepsilon_Q\langle f,h_Q\rangle h_Q, $$ where $\mathbb B_{M}$ is the ball centered at the origin with radius $M$ and $\mathcal{\tilde D}(\mathbb B_{M})$ is the finite family of dyadic cubes $Q$ such that both $Q\subsetneq \mathbb B_{M}$ and $\ell(Q)> M^{-1}$.
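To make these definitions concrete (a simple special case, included only for orientation), let $\mu$ be Lebesgue measure on $\mathbb{R}$ and $Q=[0,1)$, so that $\widehat{Q}=[0,2)$. Then
$$
h_{Q}=\mathbbm{1}_{[0,1)}-\frac{1}{2}\mathbbm{1}_{[0,2)}=\frac{1}{2}\left(\mathbbm{1}_{[0,1)}-\mathbbm{1}_{[1,2)}\right),
$$
a multiple of the classical Haar function on $\widehat{Q}$, and $\langle h_Q,h_Q\rangle =\frac{1}{2}$, in agreement with the formula of Remark \ref{HaarFrame} since $\mu(Q)/\mu(\widehat{Q})=\frac{1}{2}$.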
Writing $\langle f \rangle_Q := \frac{1}{\mu(Q)} \int_Q f\,d\mu$, we note that \begin{align}\label{product} \langle f,h_{Q}\rangle h_{Q}&=\left(\langle f\rangle_{Q}-\langle f\rangle_{\widehat{Q}}\right)\left(\mathbbm{1}_{Q}-\frac{\mu(Q)}{\mu(\widehat{Q})}\mathbbm{1}_{\widehat{Q}}\right) =\langle f\rangle_{Q}\mathbbm{1}_{Q}+a_{Q}, \end{align} where $a_{Q}:=-\frac{\mu(Q)}{\mu(\widehat{Q})}\langle f\rangle_{Q}\mathbbm{1}_{\widehat{Q}}-\langle f\rangle_{\widehat{Q}}\mathbbm{1}_{Q}+\frac{\mu(Q)}{\mu(\widehat{Q})}\langle f\rangle_{\widehat{Q}}\mathbbm{1}_{\widehat{Q}}$ satisfies the bound $$ |a_{Q}|\leq \bigg(\frac{1}{\mu(\widehat{Q})}\int_{Q} |f|\,d\mu+2\langle |f|\rangle_{\widehat{Q}}\bigg)\mathbbm{1}_{\widehat{Q}}\leq 3\langle |f|\rangle_{\widehat{Q}}\mathbbm{1}_{\widehat{Q}}. $$ \subsection{Technical results} We use the following \emph{auxiliary maximal function} in the proof of Theorem \ref{HaarSparseBound}. Let $\{\varepsilon_Q\}_{Q \in \mathcal{D}}$ be a bounded sequence of real numbers indexed by dyadic cubes and define $M_{\varepsilon}$ by $$M_{\varepsilon}f(x):=\sup_{\substack{Q\in {\mathcal D} \\ Q\ni x }}\max_{Q'\in {\rm ch}(Q)}|\varepsilon_{Q'}| \langle |f|\rangle_Q. $$ Since trivially $$ M_{\varepsilon}f(x)\leq \sup_{\substack{Q\in {\mathcal D}}} \max_{Q'\in {\rm ch}(Q)}|\varepsilon_{Q'}|Mf(x), $$ where $M$ is the dyadic Hardy-Littlewood maximal operator defined by $$Mf(x):=\sup_{\substack{Q \in \mathcal{D}\\Q \ni x}}\langle |f|\rangle_Q,$$ we have the following property. \begin{lemma} \label{MaxWeakType} If $\mu$ is a Radon measure supported in a dyadic cube $Q_0$ and $\{\varepsilon_Q\}_{Q \in \mathcal{D}}$ is a bounded sequence of real numbers, then $$ \| M_{\varepsilon}f\|_{L^{1,\infty}(\mu)}:= \sup_{\lambda>0}\lambda \, \mu(\{x\in \mathbb R^n : M_{\varepsilon}f(x) > \lambda\}) \lesssim \sup_{\substack{Q\in {\mathcal D(Q_0)} }}\max_{Q'\in {\rm ch}(Q)}|\varepsilon_{Q'}| \| f\|_{L^1(\mu)} $$ for all $f\in L^1(\mu)$. 
\end{lemma} We will also use the following \emph{auxiliary maximal truncation Haar multiplier} in the proof of Theorem \ref{HaarSparseBound}. Let $\{\varepsilon_Q\}_{Q \in \mathcal{D}}$ be a bounded sequence of real numbers indexed by dyadic cubes and define $T^{\max}$ by $$ T^{\max}f:=\sup_{Q\in\mathcal{D}}\Bigg|\sum_{\substack{P \in \mathcal{D}\\ \widehat{P}\supsetneq Q}} \varepsilon_P\langle f, h_P\rangle h_P\Bigg|. $$ \begin{lemma} \label{HaarTruncationWeakType} If $\mu $ is a Radon measure supported in a dyadic cube $Q_{0}$, $\{\varepsilon_Q\}_{Q \in \mathcal{D}}$ is a bounded sequence of real numbers, and $T^{\max}$ is defined as above, then $$ \|T^{\max}f\|_{L^{1,\infty}(\mu)}\lesssim \sup_{\substack{Q\in {\mathcal D}(Q_0) }}|\varepsilon_{Q}| \|f\|_{L^1(\mu)} $$ for all $f \in L^1(\mu)$. \end{lemma} We will use the following Calder\'on-Zygmund decomposition in the proof of Lemma \ref{HaarTruncationWeakType}. This decomposition is described in \cite{CAP2019}*{Theorem 4.2} and is related to the decomposition of \cite{LSMP2014}*{Theorem 2.1}. \begin{lemma} \label{CZDecomposition} Let $\mu$ be a Radon measure. If $f \in L^1(\mu)$ is nonnegative and $\lambda>0$ (or $\lambda >\frac{\|f\|_{L^1(\mu)}}{\|\mu\|}$ if $\mu$ is a finite measure), then we can write $$ f=g+\sum_{j=1}^{\infty}b_j, $$ where \begin{enumerate} \item\label{goodfunction} $\|g\|_{L^{2}(\mu)}^2\lesssim \lambda \|f\|_{L^1(\mu)}$, \item there exist pairwise disjoint dyadic cubes $Q_j$ such that $\operatorname{supp}b_j \subseteq \widehat{Q}_j$ and $$\sum_{j=1}^{\infty}\mu(Q_j)\leq \frac{1}{\lambda}\|f\|_{L^1(\mu)},$$ and \item $\int_{\mathbb{R}^n}b_j\,d\mu=0$ for each $j$ and $\sum_{j=1}^{\infty}\|b_j\|_{L^1(\mu)} \lesssim \|f\|_{L^1(\mu)}$.
\end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma \ref{CZDecomposition}] We only consider the case when $\mu(\mathbb{R}_j^n)=\infty$ for each $j=1,2,\ldots,2^n$, where the $\mathbb{R}_j^n$ denote the $2^n$ $n$-dimensional quadrants in $\mathbb{R}^n$; the case where at least one quadrant has finite measure is handled using arguments of \cite{LSMP2014}*{Section 3.4}. Write $$ \Omega:=\{Mf>\lambda\}=\bigcup_{j=1}^{\infty} Q_j, $$ where the $Q_j$ are maximal dyadic cubes in the sense that $$ \langle f \rangle_Q \leq \lambda < \langle f \rangle_{Q_j} $$ whenever $Q\supsetneq Q_j$. Set $$ g:=f\mathbbm{1}_{\mathbb{R}^n\setminus\Omega}+\sum_{j=1}^{\infty} \langle f\mathbbm{1}_{Q_j}\rangle_{\widehat{Q}_j}\mathbbm{1}_{\widehat{Q}_j} $$ and $$ b:=\sum_{j=1}^{\infty}b_j,\quad\text{where}\quad b_j:=f\mathbbm{1}_{Q_j}-\langle f\mathbbm{1}_{Q_j}\rangle_{\widehat{Q}_j}\mathbbm{1}_{\widehat{Q}_j}. $$ Clearly, $$ f=g+b=g+\sum_{j=1}^{\infty}b_j. $$ To prove (1), write $g=g_1+g_2$ where $$ g_1:=f\mathbbm{1}_{\mathbb{R}^n\setminus\Omega} \quad\text{and}\quad g_2:=\sum_{j=1}^{\infty}\langle f\mathbbm{1}_{Q_j}\rangle_{\widehat{Q}_j}\mathbbm{1}_{\widehat{Q}_j}. $$ By the Lebesgue Differentiation Theorem, $\|g_1\|_{L^{\infty}(\mu)}\leq \lambda$, and so $\|g_1\|_{L^2(\mu)}^2\leq\lambda\|f\|_{L^1(\mu)}$. On the other hand, $$ \| g_2\|_{L^2(\mu)}^2 =\sum_{i,j=1}^{\infty}\langle f\mathbbm{1}_{Q_i}\rangle_{\widehat{Q}_i} \langle f\mathbbm{1}_{Q_j}\rangle_{\widehat{Q}_j}\mu(\widehat{Q}_i\cap \widehat{Q}_j). $$ Since $\widehat{Q}_i\cap \widehat{Q}_j\in \{ \widehat{Q}_i, \widehat{Q}_j, \emptyset \}$, by symmetry we have $$ \| g_2\|_{L^2(\mu)}^2 \leq 2\sum_{i=1}^{\infty}\langle f\mathbbm{1}_{Q_i}\rangle_{\widehat{Q}_i} \sum_{\widehat{Q}_j\subseteq \widehat{Q}_i} \langle f\mathbbm{1}_{Q_j}\rangle_{\widehat{Q}_j}\mu(\widehat{Q}_j) = 2\sum_{i=1}^{\infty}\langle f\mathbbm{1}_{Q_i}\rangle_{\widehat{Q}_i} \sum_{\widehat{Q}_j\subseteq \widehat{Q}_i} \int_{Q_j} fd\mu. 
$$ Now, since $Q_j\subsetneq \widehat{Q}_j\subseteq \widehat{Q}_i$ and since the cubes $Q_j$ are pairwise disjoint by maximality, we get $$ \| g_2\|_{L^2(\mu)}^2 \leq 2 \sum_{i=1}^{\infty}\langle f\mathbbm{1}_{Q_i}\rangle_{\widehat{Q}_i} \int_{\widehat{Q}_i} f\,d\mu = 2 \sum_{i=1}^{\infty}\langle f\rangle_{\widehat{Q}_i} \int_{Q_i} f\,d\mu \leq 2\lambda \| f\|_{L^{1}(\mu)}. $$ For property (2), notice that $\operatorname{supp}b_j\subseteq \widehat{Q}_j$ by definition of $b_j$. Also, the cubes $Q_j$ are pairwise disjoint by maximality. With this and the stopping condition $\lambda < \langle f\rangle_{Q_j}$ for each $j$, we have $$ \sum_{j=1}^{\infty} \mu(Q_j) <\sum_{j=1}^{\infty}\frac{1}{\lambda}\|f\mathbbm{1}_{Q_j}\|_{L^1(\mu)}\leq \frac{1}{\lambda}\|f\|_{L^1(\mu)}. $$ For property (3), first note that $$ \int_{\mathbb{R}^n} b_j\,d\mu =\int_{Q_j}f\,d\mu-\int_{\widehat{Q}_j}\langle f\mathbbm{1}_{Q_j}\rangle_{\widehat{Q}_j}\,d\mu =0. $$ Therefore, $$ \sum_{j=1}^{\infty}\|b_j\|_{L^1(\mu)} \leq \sum_{j=1}^{\infty} \left( \|f\mathbbm{1}_{Q_j}\|_{L^1(\mu)}+\|\langle f\mathbbm{1}_{Q_j}\rangle_{\widehat{Q}_j}\mathbbm{1}_{\widehat{Q}_j}\|_{L^1(\mu)}\right)\lesssim \sum_{j=1}^{\infty}\|f\mathbbm{1}_{Q_j}\|_{L^1(\mu)}\leq \|f\|_{L^1(\mu)}. $$ \end{proof} \begin{proof}[Proof of Lemma \ref{HaarTruncationWeakType}] Since ${\rm supp \,}\mu \subseteq Q_0$, we have $\mu(Q)=\mu(\widehat{Q})$ and $\int_{Q} f\,d\mu=\int_{\widehat{Q}}f\,d\mu$ for every dyadic cube $Q$ containing $Q_0$. With this, we have \begin{align}\label{product2} \langle f,h_{Q}\rangle &=\mu(Q)^{\frac{1}{2}}(\langle f\rangle_{Q}-\langle f\rangle_{\widehat{Q}}) =0 \end{align} for such cubes $Q$. By (\ref{product2}) and the fact that $h_Q$ is not defined for dyadic cubes $Q$ such that $\widehat{Q} \cap Q_0 = \emptyset$, we only need to work with cubes satisfying $\widehat{Q}\subseteq Q_{0}$, or equivalently, cubes in ${\mathcal D}(Q_0)$.
Let $\displaystyle \varepsilon :=\sup_{\substack{Q\in {\mathcal D(Q_0)} }}|\varepsilon_{Q}|$. We wish to show that for all $\lambda>0$ and all $f \in L^1(\mu)$, we have $$\mu(\{T^{\max}f>\lambda\})\lesssim \frac{\varepsilon}{\lambda}\|f\|_{L^1(\mu)}.$$ Fix $x\in \mathbb R^{n}$ and $Q \in \mathcal{D}(Q_{0})$. If $x$ is not in the same quadrant of $\mathbb{R}^n$ as $Q$, then $h_P(x)=0$ for every $P$ with $\widehat{P}\supsetneq Q$, and therefore the sum corresponding to $Q$ vanishes at $x$. If $x$ and $Q$ are in the same quadrant, let $K$ be the unique dyadic cube containing $x$ such that $\widehat{K}$ is the smallest dyadic cube with $\{x\}\cup Q\subseteq \widehat{K}$. For all $P\in \mathcal{D}$ such that $Q\subsetneq \widehat{P}\subsetneq \widehat{K}$ we have $h_P(x)=0$, and so \begin{align*} \Bigg| \sum_{\substack{P\in\mathcal{D}(Q_0)\\ \widehat{P}\supsetneq Q}}\varepsilon_P\langle f,h_P \rangle h_P(x)\Bigg|&= \Bigg|\sum_{\substack{P\in\mathcal{D}(Q_0) \\ \widehat{P}\supsetneq K}}\varepsilon_P\langle f,h_P \rangle h_P(x)\Bigg| \\ &=\frac{1}{\mu(K)}\Bigg|\int_{K}\sum_{\substack{P\in\mathcal{D}(Q_0) \\ \widehat{P}\supsetneq K}} \varepsilon_P\langle f,h_P \rangle h_P(y)\,d\mu(y)\Bigg|\\ &=\frac{1}{\mu(K)}\Bigg|\int_{K}\sum_{P\in\mathcal{D}} \varepsilon_P\langle f,h_P \rangle h_P(y)\,d\mu(y)\Bigg|\\ &=|\langle Tf \rangle_{K}|\leq M(Tf)(x), \end{align*} where we have used the fact that $\int_K h_P\, d\mu =0$ for $P\in {\mathcal D}$ such that $\widehat{P}\cap K=\emptyset $ or $\widehat{P}\subseteq K$. Taking the supremum over all cubes $Q \in \mathcal{D}$ gives $$T^{\max}f(x) \leq M(Tf)(x).$$ To complete the proof, apply Lemma \ref{CZDecomposition} to $f$ at height $\frac{\lambda}{\varepsilon}$ to write $$ f=g+b=g+\sum_{j=1}^{\infty}b_j, $$ where properties (1), (2), and (3) of the lemma hold.
Then $$ \mu(\{T^{\max}f>\lambda\}) \leq \mu\bigg(\bigg\{T^{\max}g>\frac{\lambda}{2}\bigg\}\bigg)+ \mu\bigg(\bigcup_{j=1}^{\infty}Q_j\bigg)+\mu\bigg(\bigg\{x\in\mathbb{R}^n\setminus \bigcup_{j=1}^{\infty}Q_j: T^{\max}b(x)>\frac{\lambda}{2}\bigg\}\bigg). $$ Use Chebyshev's inequality, the boundedness of $M$ on $L^2(\mu)$, and property (1) of Lemma \ref{CZDecomposition} to bound the first term as follows: \begin{align*} \mu\bigg(\bigg\{T^{\max}g>\frac{\lambda}{2}\bigg\}\bigg)&\lesssim \frac{1}{\lambda^2}\|M(Tg)\|_{L^2(\mu)}^2 \lesssim \frac{1}{\lambda^2}\|Tg\|_{L^2(\mu)}^2 \\ &\lesssim \frac{1}{\lambda^2}\sum_{\substack{Q\in {\mathcal D}(Q_0) }}|\varepsilon_{Q}|^{2}|\langle g,h_{Q}\rangle|^{2} \lesssim \frac{\varepsilon^2}{\lambda^2}\|g\|_{L^2(\mu)}^2 \\ &\lesssim\frac{\varepsilon}{\lambda}\|f\|_{L^1(\mu)}. \end{align*} The second term is controlled above by property (2) of Lemma \ref{CZDecomposition}: $$\mu\bigg(\bigcup_{j=1}^{\infty}Q_j\bigg)=\sum_{j=1}^{\infty}\mu(Q_j)\leq \frac{\varepsilon}{\lambda}\|f\|_{L^1(\mu)}.$$ For the third term, we fix $x\in \mathbb R^{n}\backslash \bigcup_{j=1}^{\infty}Q_j$ and $Q \in \mathcal{D}(Q_0)$. By linearity $$ \Bigg| \sum_{\substack{P \in \mathcal{D}(Q_0) \\ \widehat{P}\supsetneq Q}}\varepsilon_P\langle b,h_P\rangle h_P(x)\Bigg| \leq \sum_{j=1}^{\infty}\Bigg| \sum_{\substack{P \in \mathcal{D}(Q_0) \\ \widehat{P}\supsetneq Q}}\varepsilon_P\langle b_j,h_P\rangle h_P(x)\Bigg|.
$$ For a fixed index $j$ and fixed $P \in \mathcal{D}(Q_0)$ with $Q\subsetneq \widehat{P}$, we consider three cases: \begin{enumerate} \item[(a)] when $\widehat{Q}_j \subsetneq \widehat{P}$, then $\langle b_j, h_P\rangle=0$ since $h_P$ is constant on $\widehat{Q}_j$ and $b_j$ has mean value zero on $\widehat{Q}_j$, \item[(b)] when $\widehat{Q}_j\cap \widehat{P}=\emptyset $, we have $\langle b_j,h_P\rangle=0$ due to their disjoint supports, and \item[(c)] when $\widehat{P} \subsetneq \widehat{Q}_j$, it must be that $\widehat{P} \subseteq Q_j'$ for some $Q_j' \in {\rm ch}{(\widehat{Q}_j)}$. If $Q_j' \neq Q_j$, then $\langle b_j,h_P\rangle =0$ since $b_j$ is constant on $Q_j'$ and $h_P$ has mean value zero on $\widehat{P}\subseteq Q_j'$. If $\widehat{P} \subseteq Q_j$, then $h_P(x)=0$ since $x \not \in Q_j$ and $\text{supp}(h_P) \subseteq \widehat{P}$. \end{enumerate} We are left with the case $\widehat{P}=\widehat{Q}_j$, so $$ \Bigg| \sum_{\substack{P \in \mathcal{D}(Q_0) \\ \widehat{P}\supsetneq Q}}\varepsilon_P\langle b,h_P\rangle h_P(x)\Bigg| \leq \sum_{j=1}^{\infty}\Bigg|\sum_{\substack{P \in \mathcal{D}(Q_0)\\ \widehat{P}\supseteq Q \\ \widehat{P}=\widehat{Q}_j}} \varepsilon_P\langle b_j,h_P \rangle h_P(x)\Bigg| \lesssim \varepsilon\sum_{j=1}^{\infty}\sum_{\substack{P\in\mathcal{D}\\ \widehat{P}=\widehat{Q}_j}}|\langle b_j,h_P\rangle||h_P(x)|. $$ Taking the supremum over $Q \in \mathcal{D}$, we have $$ T^{\max}b(x)\leq \varepsilon\sum_{j=1}^{\infty}\sum_{\substack{P\in\mathcal{D}\\ \widehat{P}=\widehat{Q}_j}}|\langle b_j,h_P\rangle| |h_P(x)| $$ for $x \not \in \bigcup_{j=1}^{\infty}Q_j$.
Now, by definition $$ |\langle b_j , h_P \rangle| = \mu(P)^{-\frac{1}{2}}\int_{\mathbb{R}^n}\Big|f\mathbbm{1}_{Q_j}\mathbbm{1}_P-\langle f\mathbbm{1}_{Q_j}\rangle_{\widehat{Q}_j}\mathbbm{1}_{P}-\frac{\mu(P)}{\mu(\widehat{P})}f\mathbbm{1}_{Q_j}+\frac{\mu(P)}{\mu(\widehat{P})}\langle f\mathbbm{1}_{Q_j}\rangle_{\widehat{Q}_j}\mathbbm{1}_{\widehat{Q}_j}\Big|\,d\mu, $$ and since $\widehat{P}=\widehat{Q}_j$, we have $$ |\langle b_j,h_P\rangle| \lesssim \mu(P)^{-\frac{1}{2}}\|f\mathbbm{1}_{Q_j}\|_{L^1(\mu)}. $$ On the other hand, \begin{align*} \|h_P\|_{L^1(\mu)} &= \mu(P)^{-\frac{1}{2}} \int_{\mathbb{R}^n} \bigg|\mathbbm{1}_{P}(x)-\frac{\mu(P)}{\mu(\widehat{P})}\mathbbm{1}_{\widehat{P}}(x)\bigg|\,d\mu(x) \leq 2\mu(P)^{\frac{1}{2}}. \end{align*} Therefore, using Chebyshev's inequality and the above estimates, we have \begin{align*} \mu\bigg(\bigg\{x\in\mathbb{R}^n\setminus\bigcup_{j=1}^{\infty}Q_j: T^{\max}b(x)>\frac{\lambda}{2}\bigg\}\bigg)&\lesssim \frac{1}{\lambda}\int_{\mathbb{R}^n\setminus \bigcup_{j=1}^{\infty}Q_j} T^{\max}b(x)\,d\mu(x)\\ &\leq \frac{\varepsilon}{\lambda}\int_{\mathbb{R}^n\setminus \bigcup_{j=1}^{\infty}Q_j} \sum_{j=1}^{\infty}\sum_{\substack{P\in\mathcal{D}\\ \widehat{P}=\widehat{Q}_j}}|\langle b_j,h_P\rangle| |h_P(x)| \,d\mu(x)\\ &\lesssim \frac{\varepsilon}{\lambda}\sum_{j=1}^{\infty}\sum_{\substack{P\in\mathcal{D}\\ \widehat{P}=\widehat{Q}_j}}|\langle b_j,h_P\rangle|\mu(P)^{\frac{1}{2}}\\ &\lesssim \frac{\varepsilon}{\lambda}\sum_{j=1}^{\infty}\sum_{\substack{P\in\mathcal{D}\\ \widehat{P}=\widehat{Q}_j}}\|f\mathbbm{1}_{Q_j}\|_{L^1(\mu)}\\ &\lesssim \frac{\varepsilon}{\lambda}\|f\|_{L^1(\mu)}. \end{align*} Combining all estimates gives $$ \mu(\{T^{\max}f>\lambda\})\lesssim\frac{\varepsilon}{\lambda}\|f\|_{L^1(\mu)}. $$ \end{proof} \begin{rem} \label{TfPointwiseControl} We note for later reference that $|Tf|\leq T^{\max}f$ pointwise.
Indeed, by definition and using that ${\rm supp \,}\mu\subseteq Q_0$, we have $$ |Tf(x)|=\lim_{\substack{\ell(Q)\rightarrow 0\\x\in Q}}\Bigg|\sum_{\substack{P \in \mathcal{D}(Q_{0})\\ \widehat{P}\supseteq Q}} \varepsilon_P\langle f, h_P\rangle h_P(x)\Bigg| \leq \sup_{Q\in \mathcal{D}}\Bigg|\sum_{\substack{P \in \mathcal{D}(Q_{0})\\ \widehat{P}\supseteq Q}} \varepsilon_P\langle f, h_P\rangle h_P(x)\Bigg|=T^{\max}f(x). $$ \end{rem} \subsection{Sparse domination} \label{HaarSparse} We are ready to prove the sparse bound for Haar multipliers. Our proof follows closely the ideas of \cite{L2017}. Differences include using the auxiliary maximal function $M_{\varepsilon}$, using the Haar wavelet frame $\{h_Q\}_{Q\in \mathcal{D}}$, and tracking the role of the coefficients $\varepsilon_Q$ throughout. \begin{proof}[Proof of Theorem \ref{HaarSparseBound}] Since $\langle f,h_Q\rangle =0$ for all $Q\in {\mathcal D}$ such that $Q_0\subsetneq \widehat{Q}$ or $Q_0\cap \widehat{Q}=\emptyset $, we only need to work with cubes in $\mathcal D(Q_{0})$, meaning that \begin{equation}\label{TQ} Tf=\sum_{Q\in {\mathcal D}(Q_0)} \varepsilon_Q\langle f, h_{Q}\rangle h_{Q}=:T_{Q_0}f. \end{equation} We start by adding the cube $Q_{0}$ to the family $\mathcal S$ and the function $\tilde \varepsilon_{Q_{0}} \langle |f|\rangle_{Q_{0}}\mathbbm{1}_{Q_{0}}$ to the sparse operator $S_{\varepsilon }$. Define $$ E_{Q_{0}}:=\left\{x\in Q_{0}: \max\left\{M_{\varepsilon}f(x),T^{\max}f(x)\right\}> 2C\tilde \varepsilon_{Q_{0}} \langle |f|\rangle_{Q_{0}}\right\}, $$ where $C>0$ is the sum of the implicit constants in Lemma \ref{MaxWeakType} and Lemma \ref{HaarTruncationWeakType}. By these two results, we have $$ \mu(E_{Q_{0}})\leq \frac{1}{2C \tilde \varepsilon_{Q_{0}} \langle |f|\rangle_{Q_{0}}} C \max_{Q\in \mathcal D(Q_{0})} |\varepsilon_{Q}| \| f\|_{L^1(\mu)} \leq \frac{1}{2}\mu(Q_0). $$ Let $\mathcal E_{Q_{0}} $ be the family of maximal dyadic cubes $P$ contained in $E_{Q_{0}}$.
For $x\in Q_{0}\backslash E_{Q_{0}}$ we trivially have $$ |Tf(x)|\leq T^{\max}f(x)\leq 2C\tilde \varepsilon_{Q_{0}} \langle |f|\rangle_{Q_{0}} \mathbbm{1}_{Q_{0}}(x)\lesssim S_{\varepsilon} |f|(x). $$ Otherwise, consider $x \in E_{Q_0}$. Let $P\in \mathcal E_{Q_{0}}$ be the unique cube such that $x\in P\subseteq Q_{0}$. We formally decompose $Tf$ as follows: \begin{align*} Tf&=\sum_{\tiny \begin{array}{l}I\in \mathcal D\\ I\supsetneq P\end{array}}\varepsilon_{I} \langle f, h_{I}\rangle h_{I} +\sum_{I\in {\rm ch}(\widehat{P})}\varepsilon_{I}\langle f, h_{I}\rangle h_{I} \\ & +\sum_{\tiny \begin{array}{l}I\in \mathcal D\\I\subsetneq P\end{array}}\varepsilon_{I}\langle f, h_{I}\rangle h_{I} +\sum_{\tiny \begin{array}{l}I\in \mathcal D\\I\cap P=\emptyset \\ I\notin {\rm ch}(\widehat{P})\end{array}}\varepsilon_{I}\langle f, h_{I}\rangle h_{I}. \end{align*} All cubes $I\in \mathcal D$ with $I\cap P=\emptyset$ and $I\notin {\rm ch}(\widehat{P})$ must also satisfy $\widehat{I}\cap P=\emptyset$, and thus since $x \in P$ and ${\rm supp \,} h_I \subseteq \widehat{I}$, we have $h_{I}(x)=0$. Therefore we have \begin{align}\label{decomp} \nonumber Tf(x)&=\sum_{\tiny \begin{array}{l}I\in \mathcal D\\ I\supsetneq P\end{array}}\varepsilon_{I} \langle f, h_{I}\rangle h_{I}(x) +\sum_{I\in {\rm ch}(\widehat{P})}\varepsilon_{I}\langle f, h_{I}\rangle h_{I}(x) +\sum_{I\in \mathcal D(P)}\varepsilon_{I}\langle f, h_{I}\rangle h_{I}(x) \\ & =\sum_{\tiny \begin{array}{l}I\in \mathcal D\\ I \supsetneq P\end{array}}\varepsilon_{I}\langle f, h_{I}\rangle h_{I}(x) +\sum_{I\in {\rm ch}(\widehat{P})}\varepsilon_{I}a_{I}(x)+\varepsilon_{P}\langle f \rangle_{P}\mathbbm{1}_{P}(x) +T_Pf(x), \end{align} where we used the decomposition in \eqref{product} and $T_P$ as defined in \eqref{TQ}. By maximality of $P$, there exists a point $y\in \widehat{P}\setminus E_{Q_{0}}$. 
The first term in \eqref{decomp} can be bounded as follows: $$ \Bigg|\sum_{\tiny \begin{array}{l}I\in \mathcal D\\ I \supsetneq P\end{array}}\varepsilon_{I} \langle f, h_{I}\rangle h_{I}(x)\Bigg| =\Bigg| \sum_{\tiny \begin{array}{l}I\in \mathcal D\\ I \supsetneq P\end{array}}\varepsilon_{I} \langle f, h_{I}\rangle h_{I}(y)\Bigg| \leq T^{\max}f(y)\leq 2C\tilde \varepsilon_{Q_0} \langle |f|\rangle_{Q_{0}}. $$ Similarly, since $|a_{I}(x)|\leq 3\langle |f| \rangle_{\widehat{P}}\mathbbm{1}_{\widehat{P}}(x)$ for $I\in {\rm ch}(\widehat{P})$, we have for the second term \begin{align*} \Big|\sum_{I\in {\rm ch}(\widehat{P})}\varepsilon_{I}a_{I}(x)\Big| &\lesssim \sum_{I\in {\rm ch}(\widehat{P})}|\varepsilon_{I}| \langle |f| \rangle_{\widehat{P}} \mathbbm{1}_{\widehat{P}}(x) \leq 2^{n}\max_{I\in {\rm ch}(\widehat{P})}|\varepsilon_{I} |\langle |f| \rangle_{\widehat{P}}\mathbbm{1}_{\widehat{P}}(x) \\ & = 2^{n}\max_{I\in {\rm ch}(\widehat{P})}|\varepsilon_{I} |\langle |f| \rangle_{\widehat{P}}\mathbbm{1}_{\widehat{P}}(y) \leq 2^{n}M_{\varepsilon}f(y)\leq 2^{n+1}C\tilde \varepsilon_{Q_{0}} \langle |f|\rangle_{Q_{0}}. \end{align*} For the third term, we directly add the cubes $P\in \mathcal{E}_{Q_{0}}$ to the family $\mathcal S$ and the functions $ \tilde \varepsilon_{P}\langle |f|\rangle_{P}\mathbbm{1}_{P}$ to $S_{\varepsilon}|f|$. By disjointness, the sparseness condition holds for $Q_0$: $$ \sum_{P\in \mathcal E_{Q_0}} \mu(P)\leq \mu(E_{Q_0})\leq \frac{1}{2}\mu(Q_0). $$ The last term in \eqref{decomp} is treated by repeating the previous reasoning applied to $T_Pf$ instead of $Tf=T_{Q_0}f$, that is, starting the argument with $$ E_{P}:=\left\{x\in P: \max\left\{M_{\varepsilon}f(x),T_P^{\max}f(x)\right\}> 2C\tilde \varepsilon_{P} \langle |f|\rangle_{P}\right\} $$ and adding to ${\mathcal S}$ the family $\mathcal E_{P}$ of maximal dyadic cubes contained in $E_{P}$.
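We remark that the recursion just described does produce a $\frac{1}{2}$-sparse family. Indeed, at each step the maximal cubes $P\in \mathcal E_{Q}$ added to $\mathcal S$ are pairwise disjoint and satisfy $$ \mu\bigg(\bigcup_{P\in \mathcal E_{Q}}P\bigg)\leq \mu(E_{Q})\leq \frac{1}{2}\mu(Q), $$ so every $Q\in \mathcal S$ contains the subset $Q\setminus \bigcup_{P\in \mathcal E_{Q}}P$ of measure at least $\frac{1}{2}\mu(Q)$, and these subsets are pairwise disjoint as $Q$ ranges over $\mathcal S$.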
\end{proof} \subsection{Boundedness and compactness on weighted spaces} \label{HaarCompactness} We now study boundedness and compactness of Haar multiplier operators on weighted spaces. We first prove the boundedness result Theorem \ref{Haarboundedness}. We will use the following \emph{dyadic maximal function adapted to weights}. Given a locally integrable and positive almost everywhere function $w$, we define $M_{w}$ by $$ M_{w}f(x) :=\sup_{\substack{Q\in \mathcal{D}\\ x\in Q}}\frac{1}{w(Q)}\int_Q |f|w\,d\mu, $$ where $w(Q):=\int_Q w\,d\mu$. The following lemma is well-known, see \cites{LSMP2014,M2012} for example. \begin{lemma} \label{WeightedMaxBoundedness} If $w$ is locally integrable and positive almost everywhere and $1<p<\infty$, then $M_{w}$ is bounded from $L^p(w)$ to itself. Moreover, $\|M_{w }\|_{L^p(w)\rightarrow L^p(w)}$ does not depend on $w$. \end{lemma} \begin{proof}[Proof of Theorem \ref{Haarboundedness}] Our proof closely follows the argument in \cite{M2012}. It is enough to consider the case where $\mu$ is compactly supported, as long as we obtain bounds that are independent of ${\rm supp \,}\mu$. Assuming $\mu $ has compact support, there exist pairwise disjoint dyadic cubes $\{Q^k\}_{k=1}^{2^n}$ with each $Q^k$ in one of the quadrants of $\mathbb{R}^n$ and such that ${\rm supp \,} \mu \subseteq \bigcup_{k=1}^{2^n}Q^k$. Dividing $\mu$ into the $2^n$ measures $\mu_k(A):=\mu(A\cap Q^k)$, we can further assume that ${\rm supp \,} \mu$ is contained in a dyadic cube. Suppose that $p\ge 2$ and set $\sigma=w^{1-p'}$. We use the equivalence $$ \|T\|_{L^p(w)\rightarrow L^p(w)} = \|T(\cdot\,\sigma)\|_{L^p(\sigma)\rightarrow L^p(w)} $$ and proceed by duality. Let $f\in L^p(\sigma)$ and $g\in L^{p'}(w)$ be nonnegative functions with compact support. 
Apply Theorem \ref{HaarSparseBound} to obtain the estimate $$ \langle |T(f\sigma)|, gw\rangle \lesssim \langle S_{\varepsilon}(f\sigma),gw\rangle =\sum_{j,k} \tilde \varepsilon_{Q_{j}^{k}}\langle f\sigma\rangle_{Q_j^k}\langle gw\rangle_{Q_j^k}\mu(Q_j^k), $$ where we denote the cubes in the sparse collection $\mathcal{S}$ chosen at step $k$ by $Q_j^k$. Note that, although the cubes $Q_j^k$ and the coefficients $\tilde \varepsilon_{Q_{j}^{k}}$ depend on ${\rm supp \,} \mu$, we aim for final estimates that are independent of ${\rm supp \,}\mu$. Define $E_j^k:=Q_j^k\setminus\left(\bigcup_{i}Q_i^{k+1}\right)$. Notice that the sparseness property of the cubes $Q_j^k$ implies that the sets $E_j^k$ are pairwise disjoint and that $\mu(Q_j^k)\leq 2 \mu(E_j^k)$. Using the latter inequality, the $\tilde\varepsilon A_p$ condition for $w$, and the containment $E_j^k\subseteq Q_j^k$, we have \begin{align}\label{estmax} \nonumber \langle S_{\varepsilon}(f\sigma),gw\rangle &= \sum_{j,k} \tilde \varepsilon_{Q_{j}^{k}}\langle f\sigma\rangle_{Q_j^k}\int_{Q_j^k} gw\,d\mu\\ \nonumber &=\sum_{j,k}\tilde\varepsilon_{Q_j^k} \frac{w(Q_j^k)\sigma(Q_j^k)^{p-1}}{\mu(Q_j^k)^p}\frac{\mu(Q_j^k)^{p-1}}{w(Q_j^k)\sigma(Q_j^k)^{p-1}}\int_{Q_j^k}f\sigma\,d\mu \int_{Q_j^k}gw\,d\mu\\ \nonumber &\leq [w]_{\tilde \varepsilon A_p} \sum_{j,k}\left(\frac{1}{\sigma(Q_j^k)}\int_{Q_j^k}f\sigma d\mu\right)\left(\frac{1}{w(Q_j^k)}\int_{Q_j^k}gw\,d\mu\right)\mu(Q_j^k)^{p-1}\sigma(Q_j^k)^{2-p}\\ \nonumber &\leq 2^{p-1}[ w]_{\tilde\varepsilon A_p}\sum_{j,k}\left(\frac{1}{\sigma(Q_j^k)}\int_{Q_j^k}f\sigma d\mu\right)\left(\frac{1}{w(Q_j^k)}\int_{Q_j^k}gw\,d\mu\right)\mu(E_j^k)^{p-1}\sigma(E_j^k)^{2-p}.
\end{align} By H\"older's inequality, $$ \mu(E_j^k)\leq w(E_j^k)^{\frac{1}{p}}\sigma(E_j^k)^{\frac{1}{p'}}, $$ and so $$ \mu(E_j^k)^{p-1}\sigma(E_j^k)^{2-p}\leq w(E_j^k)^{\frac{p-1}{p}}\sigma(E_j^k)^{\frac{p-1}{p'}}\sigma(E_j^k)^{2-p} =w(E_j^k)^{\frac{1}{p'}}\sigma(E_j^k)^{\frac{1}{p}}, $$ since $\frac{p-1}{p'}+2-p=\frac{1}{p}$. Using the estimates above, H\"older's inequality, the disjointness of the sets $E_j^k$, and Lemma \ref{WeightedMaxBoundedness}, we bound $\langle S_{\varepsilon}(f\sigma),gw\rangle$ by a constant times \begin{align} \nonumber &[w]_{\tilde\varepsilon A_p}\sum_{j,k}\left(\frac{1}{\sigma(Q_j^k)}\int_{Q_j^k}f\sigma d\mu\right)\left(\frac{1}{w(Q_j^k)}\int_{Q_j^k}gw\,d\mu\right)w(E_j^k)^{\frac{1}{p'}}\sigma(E_j^k)^{\frac{1}{p}}\\ \nonumber &\leq[w]_{\tilde\varepsilon A_p}\left(\sum_{j,k}\left(\frac{1}{\sigma(Q_j^k)}\int_{Q_j^k}f\sigma d\mu\right)^p\sigma(E_j^k)\right)^{\frac{1}{p}}\left(\sum_{j,k}\left(\frac{1}{w(Q_j^k)}\int_{Q_j^k}gw\,d\mu\right)^{p'}w(E_j^k)\right)^{\frac{1}{p'}}\\ \nonumber&\leq [w]_{\tilde\varepsilon A_p}\|M_{\sigma}f\|_{L^p(\sigma)}\|M_{w}g\|_{L^{p'}(w)}\\ \nonumber &\lesssim [w]_{\tilde\varepsilon A_p}\|f\|_{L^p(\sigma)}\|g\|_{L^{p'}(w)}. \end{align} The case $1<p<2$ follows from duality since $w \in \tilde\varepsilon^{p-1} A_p$ if and only if $\sigma \in \tilde\varepsilon A_{p'}$, and $[\sigma]_{\tilde\varepsilon A_{p'}}=[w]_{\tilde\varepsilon^{p-1}A_p}^{\frac{p'}{p}}$. Thus $$ \|T\|_{L^p(w)\rightarrow L^p(w)} = \|T^*\|_{L^{p'}(\sigma)\rightarrow L^{p'}(\sigma)}\lesssim [\sigma]_{\tilde\varepsilon A_{p'}}=[w]_{\tilde\varepsilon^{p-1} A_p}^{\frac{p'}{p}}. $$ \end{proof} In the second half of this subsection, we show that if an operator $T$ can be controlled pointwise by a sparse operator $S_{\varepsilon}$ where the coefficients $\varepsilon_Q$ decay, then $T$ is compact on $L^p(w)$ for $1<p<\infty$ and $w \in A_p$. The compactness of Haar multipliers will follow from this principle. We start with some definitions. 
For any positive integer $N$, let $\mathcal{D}_N$ denote the \emph{lagom cubes} $$ \mathcal{D}_N:= \{Q\in\mathcal{D}: 2^{-N}\leq \ell(Q)\leq 2^N \,\,\text{and} \,\, {\rm \, rdist}(Q,\mathbb{B}_{2^N})\leq N\}, $$ where ${\rm \, rdist}(P,Q):=1+\frac{{\rm dist}(P,Q)}{\max\{\ell(P),\ell(Q)\}}$ and $\mathbb{B}_{2^N}$ is the ball centered at the origin with radius $2^N$. We write $\mathcal D_{N}^{c}:=\mathcal D\backslash \mathcal D_{N}$. \begin{thm} \label{SparseImpliesWeightedCompactness} Let $T$ be a linear operator such that there exists a bounded sequence of nonnegative real numbers $\{\varepsilon_Q\}_{Q \in \mathcal{D}}$ with $\displaystyle\lim_{N\rightarrow \infty}\sup_{Q \in \mathcal{D}_N^c}\varepsilon_Q=0$ and such that, for each bounded function $f$ with compact support, $Tf$ admits a pointwise bound of the form $$ |Tf(x)| \lesssim S_{\varepsilon}|f|(x):=\sum_{Q\in \mathcal{S}} \varepsilon_Q\langle |f|\rangle_Q\mathbbm{1}_Q(x) $$ for almost every $x \in {\rm supp \,} f$, where $\mathcal{S}$ is a sparse collection. If $1<p<\infty$ and $w \in A_p$, then $T$ extends to a compact operator on $L^p(w)$. \end{thm} \begin{proof} Assume without loss of generality that $f$ is nonnegative. Since $$ S_{\varepsilon ,N}(f):=\sum_{\substack{Q\in {\mathcal D}_N\\Q\in \mathcal{S}}} \varepsilon_Q\langle f\rangle_Q\mathbbm{1}_Q $$ is a finite rank operator, to show compactness of $S_{\varepsilon}$ (and thus of $T$) we only need to show that $$ \lim_{N\rightarrow \infty }\Bigg\|\sum_{\substack{Q\in {\mathcal D}^c_N\\Q\in \mathcal{S}}} \varepsilon_Q\langle f\rangle_Q\mathbbm{1}_Q\Bigg\|_{L^p(w)}=0 $$ uniformly over all $f$ in the unit ball of $L^p(w)$.
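Equivalently, since the compact operators form a closed subspace of the bounded operators on $L^p(w)$, it suffices to show that $$ \lim_{N\rightarrow \infty}\|S_{\varepsilon}-S_{\varepsilon,N}\|_{L^p(w)\rightarrow L^p(w)}=0, $$ which is precisely the uniform statement above.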
By duality and the calculations in the proof of Theorem \ref{Haarboundedness}, we have for $f$ in the unit ball of $L^{p}(w^{1-p'})$ and $g$ in the unit ball of $L^{p'}(w)$ both nonnegative, \begin{align*} \Bigg|\Bigg\langle \sum_{\substack{Q\in {\mathcal D}^c_N\\Q\in \mathcal{S}}} \varepsilon_Q\langle f\rangle_Q\mathbbm{1}_Q,g\Bigg\rangle \Bigg| &\leq \sum_{\substack{Q\in {\mathcal D}^c_N\\Q\in \mathcal{S}}} \varepsilon_{Q}\langle fw^{1-p'}\rangle_{Q}\langle gw\rangle_{Q}\mu(Q) \\ &\lesssim \Big(\sup_{Q\in \mathcal{D}_N^c} \varepsilon_{Q}\, \langle w\rangle_Q \langle w^{1-p'}\rangle_Q^{p-1}\Big)\|f\|_{L^p(w^{1-p'})}\|g\|_{L^{p'}(w)}\\ &\leq \Big(\sup_{Q\in \mathcal{D}_N^c} \varepsilon_{Q}\Big)[w]_{A_p} \end{align*} when $2\leq p<\infty$, and \begin{align*} \Bigg|\Bigg\langle \sum_{\substack{Q\in {\mathcal D}^c_N\\Q\in \mathcal{S}}} \varepsilon_Q\langle f\rangle_Q\mathbbm{1}_Q,g\Bigg\rangle \Bigg| &\lesssim \Big(\sup_{Q\in \mathcal{D}_N^c} \varepsilon_{Q}^{p-1}\, \langle w\rangle_Q \langle w^{1-p'}\rangle_Q^{p-1}\Big)^{\frac{p'}{p}}\|f\|_{L^p(w^{1-p'})}\|g\|_{L^{p'}(w)}\\ &\leq \Big(\sup_{Q\in \mathcal{D}_N^c} \varepsilon_{Q}\Big)[w]_{A_p}^{\frac{p'}{p}} \end{align*} when $1<p\leq 2$. Together, we have $$ \Bigg|\Bigg\langle \sum_{\substack{Q\in {\mathcal D}^c_N\\Q\in \mathcal{S}}} \varepsilon_Q\langle f\rangle_Q\mathbbm{1}_Q,g\Bigg\rangle \Bigg| \lesssim \Big(\sup_{Q\in \mathcal{D}_N^c} \varepsilon_{Q}\Big)[w]_{A_p}^{\max\left\{1,\frac{p'}{p}\right\}}, $$ which proves the result since the conditions on $\varepsilon_Q$ imply that the supremum above tends to zero as $N$ gets large. \end{proof} We can now prove the weighted compactness result Theorem \ref{HaarWeightedCompactness}. \begin{proof}[Proof of Theorem \ref{HaarWeightedCompactness}] This follows immediately from Theorem \ref{HaarSparseBound} and Theorem \ref{SparseImpliesWeightedCompactness}.
\end{proof} \section{Calder\'on-Zygmund Operators} \label{CZOs} \subsection{Notation and definitions} In this section, all of our integrals, averages, pairings, etcetera will be taken with respect to Lebesgue measure on $\mathbb{R}^n$. We write $m$ for Lebesgue measure and denote the Lebesgue measure of a set $A \subseteq \mathbb{R}^n$ by $|A|$. We consider three bounded functions $L$, $S$, and $D$ satisfying \begin{equation}\label{limits} \lim_{x\rightarrow \infty }L(x)=\lim_{x\rightarrow 0}S(x)=\lim_{x\rightarrow \infty }D(x)=0. \end{equation} Since any dilation of a function satisfying a limit in (\ref{limits}) also satisfies the same limit, we omit universal constants appearing in the argument of these functions. A measurable function $K:(\mathbb R^{n}\times \mathbb R^{n}) \setminus \{ (x,y)\in \mathbb R^{n}\times \mathbb R^{n} : x=y\} \to \mathbb C$ is a \emph{compact Calder\'on-Zygmund kernel} if it is bounded on compact subsets of its domain and there exist a function $\omega$ satisfying the Dini-type condition $$ \int_0^1\int_0^1\omega(st)\frac{ds}{s}\frac{dt}{t}<\infty $$ and bounded functions $L$, $S$, and $D$ satisfying \eqref{limits} such that \begin{equation}\label{smoothcompactCZ} |K(x,y)-K(x',y')| \lesssim \omega\Big(\frac{|x-x'|+|y-y'|}{|x-y|}\Big)\frac{F_{K}(x,y)}{|x-y|^{n}}, \end{equation} whenever $|x-x'|+|y-y'|\leq \frac{1}{2}|x-y|$ with \begin{align}\label{LSDinF} F_{K}(x,y)& =L(|x-y|)S(|x-y|)D(|x+y|). \end{align} As shown in \cite{V2015}, inequality \eqref{smoothcompactCZ} and $\displaystyle {\lim_{|x-y|\rightarrow \infty} K(x,y)=0}$ imply that $K$ satisfies the following decay estimate \begin{equation}\label{decaycompactCZ} |K(x,y)| \lesssim \frac{F_{K}(x,y)}{|x-y|^{n}} \end{equation} whenever $x\neq y$.
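For instance, in the classical H\"older case $\omega(t)=t^{\delta}$ with $0<\delta\leq 1$, the Dini-type condition holds, since $$ \int_0^1\int_0^1\omega(st)\frac{ds}{s}\frac{dt}{t}=\int_0^1 s^{\delta-1}\,ds\int_0^1 t^{\delta-1}\,dt=\frac{1}{\delta^{2}}<\infty. $$ On the other hand, formally taking $L$, $S$, and $D$ to be constant in \eqref{smoothcompactCZ} and \eqref{decaycompactCZ} recovers the standard Calder\'on-Zygmund kernel estimates, although constant functions do not satisfy \eqref{limits} and thus encode no compactness.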
For technical reasons, we will also use an alternative formulation of a compact Calder\'on-Zygmund kernel in which we substitute the function $F_K(x,y)$ of \eqref{smoothcompactCZ} with \begin{equation}\label{DecaySub} F_{K}(x,y,x',y')=L_{1}(|x-y|)S_{1}(|x-x'|+|y-y'|)D_{1}\Big(1+\frac{|x+y|}{1+|x-y|}\Big), \end{equation} where $L_{1}$, $S_{1}$, and $D_{1}$ satisfy the limits in (\ref{limits}). It was shown in \cite{V2015} how this new condition can be obtained from \eqref{smoothcompactCZ}. In general, we will omit the subindexes in the three factors of $F_K$, using the same notation as in \eqref{LSDinF}. We work with \emph{Calder\'on-Zygmund operators} $T$ having compact extensions on $L^2(\mathbb{R}^n)$ and satisfying \begin{equation}\label{kernelrep} Tf(x) =\int_{\mathbb R^{n}} K(x,y)f(y)\, dy \end{equation} for compactly supported functions $f$ and $x \not \in {\rm supp \,} f$, where $K$ satisfies properties \eqref{smoothcompactCZ}, \eqref{decaycompactCZ}, and \eqref{DecaySub} above. Given two cubes $I, J \in \mathcal{D}$ with $\ell(I) \neq \ell(J)$, we denote the smaller of $I$ and $J$ by $I \wedge J$ and the larger of $I$ and $J$ by $I \vee J$. We define $\langle I,J\rangle$ to be the unique cube containing $I\cup J$ with the smallest possible side length and such that $|c(\langle I,J\rangle)|$ is minimal. Notice that $\langle I, J \rangle$ need not be dyadic. We also define the eccentricity and relative distance of $I$ and $J$ to be $$ {\rm ec}(I,J):=\frac{\ell(I\wedge J)}{\ell(I\vee J)}\quad \text{and} \quad {\rm \, rdist}(I,J):=\frac{\ell(\langle I,J\rangle )}{\ell(I\vee J)}. $$ Note that \begin{align}\label{equivrdist} {\rm \, rdist}(I,J) & \approx 1+\frac{|c(I)-c(J)|}{\ell(I)+\ell(J)}. \end{align} Given $I\in \mathcal D$, we denote the boundary of $I$ by $\partial I$, and the inner boundary of $I$ by $\mathfrak{D}_{I}:=\displaystyle{\cup_{I'\in {\rm ch}(I)}\partial I'}$.
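To illustrate the eccentricity and relative distance, take $I=[0,1)$ and $J=[4,8)$ in $\mathbb R$. Then $I\wedge J=I$, $I\vee J=J$, and ${\rm ec}(I,J)=\frac{1}{4}$, while one may take $\langle I,J\rangle=[0,8)$, so that $$ {\rm \, rdist}(I,J)=\frac{\ell(\langle I,J\rangle)}{\ell(I\vee J)}=\frac{8}{4}=2, $$ in agreement with \eqref{equivrdist}, which yields the comparable value $1+\frac{|c(I)-c(J)|}{\ell(I)+\ell(J)}=1+\frac{11/2}{5}=\frac{21}{10}$.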
When $J\subseteq 3I$, we define the inner relative distance of $J$ and $I$ by $$ {\rm \, inrdist}(I,J):=1+\frac{{\rm dist}(J,{\mathfrak D}_{I})}{\ell(J)}. $$ Given three cubes $I_{1},I_{2},$ and $I_{3}$, we denote $$ F(I_{1}, I_{2}, I_{3}):=L(\ell(I_{1}))S(\ell(I_{2}))D(\hspace{-.03in}{\rm \, rdist}(I_{3},\mathbb B)) \quad \text{and} \quad F(I):=F(I,I,I),$$ where $\mathbb{B}:=[-\frac{1}{2},\frac{1}{2}]^n$. We define \begin{align*}\label{Dtilde} \tilde L(\ell(I)):= \int_{0}^{1} \omega(t)L(\ell(t^{-1}I))\frac{dt}{t} \quad \text{and}\quad \tilde{D}(\hspace{-.03in}{\rm \, rdist}(I,\mathbb B)):= \int_{0}^{1} W(t)D(\hspace{-.03in}{\rm \, rdist}(t^{-1}I,\mathbb B))\frac{dt}{t}, \end{align*} where $W(t):=\int_0^t\omega(s)\frac{ds}{s}$. We also define the corresponding $$ \tilde F(I_{1}, I_{2}, I_{3}):=\tilde L(\ell(I_{1}))S(\ell(I_{2}))\tilde D(\hspace{-.03in}{\rm \, rdist}(I_{3},\mathbb B)) \quad \text{and} \quad \tilde F(I):=\tilde F(I,I,I).$$ For a cube $Q$, let $Q^*$ be the cube such that $c(Q^*)=c(Q)$ and $\ell(Q^*)=5\ell(Q)$. For $Q \in \mathcal{D}$, we again write $h_Q$ for the Haar function adapted to $Q$, but now with respect to Lebesgue measure. Specifically, $$ h_{Q}:=|Q|^{-\frac{1}{2}}(\mathbbm{1}_{Q}-2^{-n}\mathbbm{1}_{\widehat{Q}}). $$ With this notation, $h_{Q}$ is supported on $\widehat{Q}$ and constant on $Q$ and on $\widehat{Q}\backslash Q$. Define the difference operator localized on a dyadic cube $Q$ as \begin{equation}\label{Deltaincoord} \Delta_{Q}f :=\sum_{R\in {\rm ch}(Q)}(\langle f\rangle_{R}-\langle f\rangle_{Q})\mathbbm{1}_{R}, \end{equation} where now $\langle f\rangle_Q:= \frac{1}{|Q|}\int_{\mathbb{R}^n}f\, dm$. It is shown in \cite{V2019} that $$ \Delta_{Q}f =\sum_{R\in {\rm ch}(Q)}\langle f, h_{R}\rangle h_{R}, $$ where we write $\langle f, g\rangle:=\int_{\mathbb R^{n}} fg\, dm$. 
Thus, by telescoping, we get \begin{align*} \sum_{\substack{Q\in {\mathcal D}\\ 2^{-N}\leq \ell(Q)\leq 2^N}}\sum_{R\in {\rm ch}(Q)}\langle f, h_{R}\rangle h_R(x) &=\sum_{\substack{Q\in {\mathcal D}\\ 2^{-N}\leq \ell(Q)\leq 2^N}}\Delta_{Q}f(x) =\langle f\rangle_{J}\mathbbm{1}_{J}(x)-\langle f\rangle_{I}\mathbbm{1}_{I}(x), \end{align*} where $I,J\in {\mathcal D}$ are such that $x\in J\subseteq I$, $\ell(J)=2^{-N}$ and $\ell(I)=2^{N+1}$. We denote the father wavelet adapted to $Q$ by $\varphi_Q:=|Q|^{-1}\mathbbm{1}_{Q}$. Given a function $b\in {\rm BMO}$, the paraproduct operators associated with $b$ are defined as follows: $$ \Pi_{b}(f):=\sum_{I\in \mathcal D}\langle b, h_I\rangle \langle f, \varphi_I\rangle h_I \quad \text{and} \quad \Pi^{*}_{b}(f):=\sum_{I\in \mathcal D}\langle b, h_I\rangle \langle f,h_I\rangle \varphi_I. $$ Note that the operator $$ \Pi^{*}_{P_M(b)}(f):=\sum_{I\in \mathcal D_M}\langle b, h_I\rangle \langle f,h_I\rangle \varphi_I $$ is of finite rank. A linear operator $T$ satisfies the weak compactness condition if there exists a bounded function $F_{W}$ satisfying $ {\displaystyle \lim_{\ell(Q)\rightarrow \infty }F_W(Q) =\lim_{\ell(Q)\rightarrow 0 }F_W(Q) =\lim_{c(Q)\rightarrow \infty }F_W(Q)=0} $ such that \begin{equation}\label{restrictcompact2} |\langle T\mathbbm{1}_{Q},\mathbbm{1}_{Q}\rangle | \lesssim |Q|F_{W}(Q) \end{equation} for all $Q\in {\mathcal D}$. We define ${\rm CMO}(\mathbb R^{n})$ as the closure in ${\rm BMO}(\mathbb R^{n})$ of the space of continuous functions vanishing at infinity. For positive integers $N$, define the \emph{projection operator} $P_N$ on lagom cubes by $$P_Nf:=\sum_{Q\in \mathcal{D}_N}\langle f,h_Q \rangle h_Q$$ and also $$ P_N^{\perp}f:= (I-P_N)f=\sum_{Q \in \mathcal{D}_N^c}\langle f,h_Q\rangle h_Q $$ with convergence interpreted pointwise almost everywhere.
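Note that, since each $h_I$ has mean zero with respect to Lebesgue measure, we have, at least formally, $$ \Pi^{*}_{b}(1)=\sum_{I\in \mathcal D}\langle b, h_I\rangle \langle 1,h_I\rangle \varphi_I=0. $$ Hence subtracting a paraproduct $\Pi^{*}_{b}$ from $T$ leaves $T1$ unchanged while modifying $T^{*}1$; this is the role played by the correction $\Pi^{*}_{P_N(T^*1)}$ below.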
\begin{rem} To show that a linear operator $T$ is compact on $L^2(\mathbb R^n)$, for instance, one can equivalently show that for every $\varepsilon>0$, there exists $N_0>0$ so that $$ \|P_N^{\perp}Tf\|_{L^2(\mathbb R^n)}\lesssim \varepsilon\|f\|_{L^2(\mathbb R^n)} $$ for all $N>N_0$ and all $f\in L^2(\mathbb R^n)$. \end{rem} \subsection{Technical results} \label{tech} The following result is proved in \cite{V2019} in the particular case of $\omega(t)=t^{\delta}$ with $0<\delta\leq 1$. The proof of this lemma is a straightforward modification of that contained in \cite{V2019}. \begin{lemma} \label{tildeT} Let $T$ be a linear operator associated with a compact Calder\'on-Zygmund kernel satisfying the weak compactness condition \eqref{restrictcompact2} and such that $T1, T^*1\in {\rm CMO}$. If $\tilde T:=T-\Pi^{*}_{P_N(T^*1)}$, then $$ \|P_N^{\perp}\tilde T f\|_{L^{1,\infty}(\mathbb{R}^n)}\lesssim \sup_{Q \in \mathcal{D}_{N}^c} \varepsilon_Q \hskip5pt \|f\|_{L^1(\mathbb{R}^n)} $$ for all $f \in L^1(\mathbb{R}^n)$. 
The coefficients $\varepsilon_Q$ are defined for $Q\in \mathcal{D}_N^c$ by \begin{align*} \varepsilon_Q:&= \sum_{\substack{e\in \mathbb Z, m\in \mathbb N }}\omega(2^{-|e|})\frac{\omega(m^{-1})}{m} \max_{\substack{R \in \mathcal{D}\\R\in Q_{e,m}}}\max_{i=1,2,3}F_i(Q,R) \\ & +\bigg(|Q|^{-1}\sum_{R\in \mathcal{D}_N^c(Q)}\langle T1,h_R\rangle^2 \bigg)^{\frac{1}{2}} +\bigg(|Q|^{-1}\sum_{R\in \mathcal{D}_N^c(Q)}\langle T^*1,h_R\rangle^2 \bigg)^{\frac{1}{2}}, \end{align*} where $ Q_{e,m}:=\{ R\in {\mathcal D}:\ell(Q)=2^{e}\ell(R) \quad\text{and}\quad m\leq {\rm \, rdist}(Q,R)< m+1 \}, $ $\mathcal{D}_N^c(Q):=\mathcal {D}(Q)\cap \mathcal{D}_N^c$, and \begin{itemize} \item[i)] when ${\rm \, rdist} (Q,R)> 3$, $$ F_1(Q,R):=F_{K}(\langle Q,R\rangle ,Q\wedge R, \langle Q,R\rangle ), $$ \item[ii)] when ${\rm \, rdist} (Q,R)\leq 3$ and ${\rm \, inrdist}(Q,R)>1$, $$ F_{2}(Q, R):=\tilde F_{K}( Q\wedge R ,Q\wedge R, \langle Q,R\rangle ), $$ \item[iii)] and when ${\rm \, rdist} (Q,R)\leq 3$ and ${\rm \, inrdist}(Q,R)=1$, \begin{align*} F_{3}(Q, R):=F_{2}(Q,R)+\tilde F_{K}(Q\wedge R) +\delta(Q,R)F_{W}(Q) \end{align*} with $\delta(Q,R)=1$ if $Q=R$ and zero otherwise. \end{itemize} \end{lemma} \noindent As shown in \cite{V2019}, the $\varepsilon_Q$ defined in Lemma \ref{tildeT} satisfy $\tilde F_K(Q)\leq \varepsilon_Q $ and ${\displaystyle \lim_{N\rightarrow \infty} \sup_{Q \in \mathcal{D}_{N}^c} \varepsilon_Q =0}$. \begin{rem} We note that $Q\in \mathcal{D}_N^c$ with $\ell(Q)\leq 2^{-N}$ implies ${\mathcal D}(Q)\subseteq {\mathcal D}_N^c$, and also that $$ \bigg(|Q|^{-1}\sum_{R\in {\mathcal D}_N^c(Q)}\langle T1,h_R\rangle^2 \bigg)^{\frac{1}{2}} \leq \| P_{N}^\perp (T1)\|_{{\rm BMO}}. 
$$ \end{rem} \begin{lemma}\label{paraT} If $T$ is a linear operator associated to a compact Calder\'on-Zygmund kernel, $Q \in \mathcal D$, and $N>1$, then \begin{align*} |P_N^\perp T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*})(x)-P_N^\perp T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*})(x')| \leq \bar\varepsilon_{Q}Mf(x), \end{align*} for all $f \in L^1(\mathbb{R}^n)$ and all $x, x' \in Q$, where $\bar \varepsilon_Q := L(\ell(Q))S(\ell(Q))\tilde{ D}(\hspace{-.03in}{\rm \, rdist}(Q, \mathbb B))\leq \tilde F_K(Q)\leq \varepsilon_Q $ with $\varepsilon_Q$ as in Lemma \ref{tildeT}. \end{lemma} \begin{proof} By definition \begin{align}\label{PTout} |P_N^\perp T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*})(x)&-P_N^\perp T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*})(x')| \leq \sum_{R\in {\mathcal D}_N^c}| \langle T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*}), h_R\rangle | |h_R(x)-h_R(x')|. \end{align} For $R\in {\mathcal D}$ such that $\widehat{R}\cap Q=\emptyset$, we have $h_R(x)=h_R(x')=0$, while if $Q\subsetneq \widehat{R}$ we have $h_R(x)=h_R(x')$, and so the corresponding terms in \eqref{PTout} are zero. On the other hand, for $\widehat{R}\subseteq Q$, we have that $h_R(x)-h_R(x')\neq 0$ implies $x\in \widehat{R}$ or $x'\in \widehat{R}$. Moreover, in that case we have $|h_R(x)-h_R(x')|\lesssim |R|^{-\frac{1}{2}}$. Now, since $\widehat{R}\subseteq Q$ implies that $\widehat{R}$ does not intersect $\mathbb R^n\setminus Q^*$, we can use the integral representation of $T$ and the mean zero property of $h_R$ to write \begin{align*} \langle T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*}),h_R\rangle &=\int_{\widehat{R}}\int_{\mathbb{R}^n\setminus Q^*}f(y)h_R(z)(K(z,y)-K(c(\widehat{R}),y))dy dz. 
\end{align*} Since $|x-x'|\leq \ell(Q)\leq \frac{1}{2}|x-y|$ for all $y\in \mathbb{R}^n\setminus Q^*$, we can use the smoothness condition of the kernel to write \begin{align*} |\langle T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*}),h_R\rangle| &\leq \int_{\widehat{R}}\int_{\mathbb{R}^n\setminus Q^*}|f(y)||h_R(z)||K(z,y)-K(c(\widehat{R}),y)|\,dy dz\\ &\leq \int_{\widehat{R}} |h_R(z)|\sum_{k=0}^{\infty} \int_{2^{k+1}Q^*\setminus 2^kQ^*} \omega\Big(\frac{|z-c(\widehat{R})|}{|z-y|}\Big) \frac{F_K(z,c(\widehat{R}),y)}{|z-y|^{n}}|f(y)|\,dydz, \end{align*} where \begin{align*} F_{K}(z,c(\widehat{R}),y):= L(|z-y|)S(|z-c(\widehat{R})|)D\Big(1+\frac{|z+y|}{1+|z-y|}\Big). \end{align*} Since $\ell(Q)\leq 2^{k-1}\ell(Q^*)\leq |z-y|$ and $|z-c(\widehat{R})|\leq \ell(\widehat{R})/2=\ell(R)\leq \ell (Q)$, we have $L(|z-y|)\leq L(\ell(Q))$ and $S(|z-c(\widehat{R})|)\leq S(\ell(Q))$. To deal with $D$, we first note that $$2^{k}\ell(Q)\leq 2^{k-1}\ell(Q^*)\leq |z-y|\leq 2^{k}\ell(Q^*)= 2^{k}5\ell(Q),$$ that is, $|z-y|\approx 2^{k}\ell(Q)$. Using this and $|z|\leq \frac{1}{2}(|z-y|+|z+y|)$, we have $$ 1+\frac{|z|}{1+2^{k}\ell(Q)} \lesssim 1+\frac{|z|}{1+|z-y|} \leq \frac{3}{2}\Big(1+\frac{|z+y|}{1+|z-y|}\Big). $$ Moreover, since $|z-c(Q)|\leq \ell(Q)/2$, we also have $1+\frac{|c(Q)|}{1+2^k\ell(Q)}\leq \frac{5}{4}\big(1+\frac{|z|}{1+2^k\ell(Q)}\big)$. Using this and \eqref{equivrdist}, we have \begin{align*} 1+\frac{|z|}{1+2^{k}\ell(Q)} &\gtrsim 1+\frac{|c(2^{k} Q)|}{1+2^{k}\ell(Q)} \gtrsim {\rm \, rdist} (2^{k} Q,\mathbb B). \end{align*} Then \begin{align*} F_{K}(z,c(\widehat{R}),y) &\leq L(\ell(Q)) S(\ell(Q))D(\hspace{-.03in}{\rm \, rdist} (2^k Q,\mathbb B)) =F_{K}(Q,Q,2^k Q).
\end{align*} Using previous estimates together with the facts that $|z-c(\widehat{R})|\leq \ell(\widehat{R})$, $2^k\ell(Q)\lesssim |z-y|$, and $\| h_R\|_{L^1(\mathbb R^n)}\lesssim |R|^{\frac{1}{2}}$, we get \begin{align*} |\langle T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*}),h_R\rangle| &\lesssim L(\ell(Q))S(\ell(Q)) \int_{\widehat{R}} |h_R(z)|dz \\ &\hskip50pt \sum_{k=0}^{\infty} \omega\Big(\frac{\ell(\widehat{R})}{2^k\ell(Q)}\Big) D(\hspace{-.03in}{\rm \, rdist} (2^{k} Q,\mathbb B)) \frac{1}{|2^{k+1}Q^*|}\int_{2^{k+1}Q^*} |f(y)| dy\\ &\lesssim L(\ell(Q))S(\ell(Q))|R|^{\frac{1}{2}} \sum_{k=0}^{\infty} \omega\Big(2^{-k}\frac{\ell(\widehat{R})}{\ell(Q)}\Big) D(\hspace{-.03in}{\rm \, rdist} (2^{k} Q,\mathbb B)) Mf(x)\\ &\lesssim |R|^{\frac{1}{2}}L(\ell(Q))S(\ell(Q))\int_{0}^{1} \omega\Big(t\frac{\ell(\widehat{R})}{\ell(Q)}\Big)D(\hspace{-.03in}{\rm \, rdist}(t^{-1}Q,\mathbb B))\frac{dt}{t}Mf(x). \end{align*} Now we parametrize all dyadic cubes $R\in {\mathcal D}_N^c$ such that $\widehat{R}\subseteq Q$ and $x\in \widehat{R}$ or $x'\in \widehat{R}$ by length $\ell(R_j)=2^{-j}\ell(Q)$. We note that there are at most two such cubes for each fixed $j$, one containing $x$ and another one containing $x'$. By summing over all these cubes, we finally get \begin{align*} |P_N^\perp T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*})(x)&-P_N^\perp T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*})(x')| \lesssim \sum_{j=0}^{\infty } |\langle T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*}), h_{R_j}\rangle | |R_j|^{-\frac{1}{2}} \\ &\lesssim L(\ell(Q))S(\ell(Q))\sum_{j=0}^\infty\int_{0}^{1}\omega(t2^{-j})D(\hspace{-.03in}{\rm \, rdist}(t^{-1}Q,\mathbb B))\frac{dt}{t}Mf(x) \\ &\lesssim L(\ell(Q))S(\ell(Q))\int_{0}^{1}\int_{0}^{1} \omega(ts)D(\hspace{-.03in}{\rm \, rdist}(t^{-1}Q,\mathbb B))\frac{dt}{t}\frac{ds}{s}Mf(x) \\ &\leq \bar \varepsilon_{Q}Mf(x). \end{align*} \end{proof} We can also prove the following result using similar ideas. 
\begin{cor} \label{ParaproductDifferenceBound} If $b \in {\rm CMO}(\mathbb R^n)$, $Q \in \mathcal D$, and $N>1$, then \begin{align*} |P_N^{\perp}\Pi^*_b(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*})(x)-P_N^{\perp}\Pi^*_b(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*})(x')| \leq \bar\varepsilon_{Q}Mf(x), \end{align*} for all $f \in L^1(\mathbb{R}^n)$ and all $x, x' \in Q$, where $\bar \varepsilon_Q := L(\ell(Q))S(\ell(Q))\tilde D(\hspace{-.03in}{\rm \, rdist}(Q, \mathbb B))\leq \varepsilon_Q $ with $L(t)=\| P_{{\mathcal D}_{t}^{c}} b\|_{{\rm BMO}}$, $S(t)=\| P_{{\mathcal D}_{t^{-2/3}}^{c}} b\|_{{\rm BMO}} +(1+\| b\|_{{\rm BMO}})^{\frac{1}{2}}(\frac{t}{1+t})^{\frac{2}{3}}$, and $D(t)=\| P_{{\mathcal D}_{\log t}^{c}} b\|_{{\rm BMO}}$. \end{cor} \begin{proof} It was shown in \cite{V2015} that if $b \in {\rm CMO}$, then the paraproduct operator $\Pi_b^*$ is associated to a compact Calder\'on-Zygmund kernel with constant given by \begin{align*} &\| P_{{\mathcal D}_{|x-y|}^{c}} b\|_{{\rm BMO}} (\| P_{{\mathcal D}_{|x-y|^{-2/3}}^{c}} b\|_{{\rm BMO}} +(1+\| b\|_{{\rm BMO}})^{\frac{1}{2}}\min(1,|x-y|^{\frac{2}{3}})) \| P_{{\mathcal D}_{\log|x+y|}^{c}} b\|_{{\rm BMO}} \\ &=L(|x-y|)S(|x-y|)D(|x+y|). \end{align*} Similar reasoning to that developed in Lemma \ref{paraT} yields the result. \end{proof} As seen in \cite{PPV2017}, $P_N$ is bounded on ${\rm CMO}$. Then the hypothesis $T^*1\in {\rm CMO}$ implies that also $P_NT^*1 \in {\rm CMO}$. In fact, $\| P_NT^*1\|_{{\rm BMO}} \leq \| T^*1\|_{{\rm BMO}}$ and $\| P_M^\perp(P_NT^*1)\|_{{\rm BMO}}=0$ for all $M>N$. Lemma \ref{paraT} and Corollary \ref{ParaproductDifferenceBound} imply the following result. 
\begin{cor} \label{paratildeT} If $T$ is a linear operator associated with a compact Calder\'on-Zygmund kernel, $\tilde T:=T-\Pi^{*}_{P_N(T^*1)}$, and $Q \in \mathcal{D}$, then \begin{align*} |P_N^\perp \tilde T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*})(x)-P_N^\perp \tilde T(f\mathbbm{1}_{\mathbb{R}^n\setminus Q^*})(x')| \leq \bar\varepsilon_{Q}Mf(x) \end{align*} for all $f \in L^1(\mathbb{R}^n)$ and all $x, x' \in Q$, where $\bar \varepsilon_Q := L(\ell(Q))S(\ell(Q))\tilde D({\rm \, rdist}(Q, \mathbb B))\leq \varepsilon_Q $. \end{cor} A consequence of the work in Lemma \ref{paraT} and Corollary \ref{ParaproductDifferenceBound} is that the kernels of both $T$ and $\Pi^{*}_{P_N(T^*1)}$ share similar estimates. In the next section we denote by $K$ the kernel of $\tilde T$, which satisfies the properties of a compact Calder\'on-Zygmund kernel \eqref{smoothcompactCZ}, \eqref{decaycompactCZ}, and \eqref{DecaySub}. \subsection{Sparse domination for compact Calder\'on-Zygmund operators} \begin{thm} \label{domtilde} Let $T$ be a linear operator associated to a compact Calder\'on-Zygmund kernel satisfying the weak compactness condition \eqref{restrictcompact2} and such that $T1, T^*1\in {\rm CMO}$, and let $\tilde T := T-\Pi^{*}_{P_N(T^*1)}$. For every $\varepsilon >0$ there exists $N_0>0$ such that for all $N>N_0$ and every compactly supported $f \in L^1(\mathbb{R}^n)$, there is a sparse family of cubes ${\mathcal S}$ such that $$ |P_N^\perp \tilde Tf(x)|\lesssim \varepsilon \sum_{R \in \mathcal{S}} \langle |f| \rangle_{R^*} \mathbbm{1}_R(x) =: \varepsilon S|f|(x) $$ for almost every $x \in \mathbb{R}^n$. \end{thm} \begin{proof} Without loss of generality, suppose there is a dyadic cube $B$ such that $\ell(B)>1$ and ${\rm supp \,} f\subseteq B$. Let $\{\varepsilon_Q\}_{Q\in {\mathcal D}}$ be the sequence in the statement of Lemma \ref{tildeT} which satisfies $\displaystyle\lim_{N\rightarrow \infty} \sup_{Q \in \mathcal{D}_{N}^c} \varepsilon_Q =0$.
Given $\varepsilon>0$, let $N_0>0$ be such that $\displaystyle\sup_{Q\in {\mathcal D}_{N}^c} \varepsilon_Q <\varepsilon $ for all $N>N_0$. Fix $N>N_0$ and let $Q_{0}$ be a cube such that $B\subsetneq Q_0$ and ${\rm dist}(B, Q_0^c)\geq 2^{N+3}\ell(B)$. We first establish the sparse estimate outside of $Q_0$. For $j\geq 0$, we write $Q_j:=2^jQ_0$ and for $j\geq 1$ we define $P_j:=Q_{j}\backslash Q_{j-1}$. Note that the family $\{ Q_j\}_{j\geq 0}$ is sparse by construction. Let $x\in P_j$. By definition, \begin{align}\label{outside} P_N^\perp \tilde T f(x) &=\tilde T f(x)-\sum_{R\in {\mathcal D}_N}\langle \tilde Tf,h_R\rangle h_R(x); \end{align} we will bound each term separately. For the first term, since $x\notin {\rm supp \,} f$, we can write \begin{align*} |\tilde Tf(x)|& = \bigg|\int_{B}K(x,y)f(y)\,dy\bigg| \leq \int_{B}\frac{F_K(x,y)}{|x-y|^n}|f(y)|\,dy \end{align*} where $K$ denotes the kernel of $\tilde T$ and $$ F_K(x,y)=L(|x-y|)S(|x-y|)D\Big(1+\frac{|x+y|}{1+|x-y|}\Big). $$ Since $|x-y|\approx \ell(Q_{j})$ and $|y-c(Q_0)|\lesssim \ell(Q_0)$ for $y \in B$, we have by the same reasoning used in Lemma \ref{paraT} that $L(|x-y|)\leq L(\ell(Q_j))$, $S(|x-y|)\leq S(\ell(Q_j))$, and $D(1+\frac{|x+y|}{1+|x-y|})\lesssim D(\hspace{-.03in}{\rm \, rdist} (Q_j,\mathbb B))$. Then \begin{align}\label{Foutside} F_{K}(x,y) &\leq L(\ell(Q_j)) S(\ell(Q_j))D(\hspace{-.03in}{\rm \, rdist} (Q_j,\mathbb B)) =F_{K}(Q_j)\leq \varepsilon, \end{align} where in the last inequality we used that $\ell(Q_j)\geq \ell(Q_0)>2^{N+3}\ell(B)\geq 2^{N+3}$, and thus $Q_j\in {\mathcal D}_N^c$. With this and the fact that $|x-y|\approx \ell(Q_j)$, we have \begin{align*} |\tilde Tf(x)|&\lesssim \frac{\varepsilon }{|Q_j|}\|f\|_{L^1(\mathbb{R}^n)} = \varepsilon \langle |f| \rangle_{Q_j}\mathbbm{1}_{Q_{j}}(x). 
\end{align*} On the other hand, since the second term in \eqref{outside} is defined by a telescopic sum, we get \begin{align*} \sum_{R\in {\mathcal D}_N}\langle \tilde Tf,h_R\rangle h_R(x) & =\langle \tilde Tf\rangle_{J}\mathbbm{1}_{J}(x)-\langle \tilde Tf\rangle_{I}\mathbbm{1}_{I}(x), \end{align*} where $I,J\in \mathcal D$ are such that $x\in J\subseteq I$, $\ell(J)=2^{-N}$, and $\ell(I)=2^{N+1}$. We apply the same ideas to estimate both terms, and so we only work with the second term. Since $x\in I\cap P_j$, we have as before $2^{j-2}\ell(Q_0)<|x-y|$ for all $y\in B\subseteq Q_0$. On the other hand, $$ 2^{j-2}\ell(Q_0)\geq \ell(Q_0)/2\geq 2^{N+2}\ell(B) \geq 2\ell(I), $$ which implies $\ell(I)\leq 2^{j-3}\ell(Q_0)$. With this and $|t-x|\leq \ell(I)$ for all $t\in I$, we get $$ |t-y|\geq |x-y|-|t-x| \geq 2^{j-2}\ell(Q_0) -\ell(I) \geq 2^{j-3}\ell(Q_0) $$ and $$ |t-y|\leq |x-y|+|t-x| \lesssim 2^{j}\ell(Q_0) +\ell(I) \leq 2^{j+1}\ell(Q_0). $$ Therefore, $ |t-y| \approx \ell(Q_j) $. We also have $|y-c(Q_0)|\leq \ell(Q_0)/2$. Write \begin{align*} |\langle \tilde Tf\rangle_{I}\mathbbm{1}_{I}(x)|&= \frac{1}{|I|}\Big|\int_{I}\int_{B}K(t,y)f(y)\,dy\, dt\Big| \\ & \lesssim \frac{1}{|I|}\int_{I}\int_{B}\frac{F_K(t,y)}{|x-y|^n}|f(y)|\,dy dt \\ &\lesssim \frac{F_K(Q_j)}{|Q_j|}\|f\|_{L^1(\mathbb{R}^n)} \leq \varepsilon \langle |f| \rangle_{Q_j}\mathbbm{1}_{Q_{j}}(x), \end{align*} where the last inequality follows from \eqref{Foutside}. We now work to establish the sparse bound inside $Q_0$. For this local piece, we follow the ideas from \cite{LO2019} to define recursively the desired sparse family $\mathcal S$ and sparse operator $S$. Let $\mathcal{D}_N^c(Q_0):=\mathcal{D}(Q_0)\cap \mathcal{D}_N^c$ and $\mathcal Q:=\left\{Q \in \mathcal{D}_N^c(Q_0) : \ell(Q)=2^{-(N+2)}\right\}$. We decompose $\tilde Tf$ as $$ \tilde Tf=\sum_{Q\in \mathcal Q}\tilde T(f\mathbbm{1}_{Q}). 
$$ If we assume the desired sparse domination result holds for $P_N^{\perp}\tilde T(f\mathbbm{1}_{Q})$, then by disjointness of the cubes $Q$, we can deduce a similar sparse estimate for $P_N^{\perp}\tilde T$: \begin{align*} |P_N^{\perp}\tilde Tf|&\leq \sum_{Q\in \mathcal Q}|P_N^{\perp}\tilde T(f\mathbbm{1}_{Q})| \lesssim \varepsilon \sum_{Q\in \mathcal Q}S|f\mathbbm{1}_{Q}| \\ & = \varepsilon \sum_{Q\in \mathcal Q} \sum_{R\in {\mathcal S}(Q)}\langle |f|\rangle_{R} \mathbbm{1}_{R} \leq \varepsilon \sum_{R\in {\mathcal S}(Q_{0})}\langle |f|\rangle_{R} \mathbbm{1}_{R}. \end{align*} Therefore, we will only prove the sparse estimate for each $P_N^{\perp}\tilde T(f\mathbbm{1}_{Q})$. We start by adding all cubes $Q\in \mathcal Q$ to the family $\mathcal S$ and functions $\langle |f|\rangle_{Q}\mathbbm{1}_{Q}$ to the sparse operator $S|f|$. These cubes are pairwise disjoint and satisfy $\sum_{Q\in \mathcal Q}|Q|= |Q_{0}|$. This family does not satisfy the sparseness condition, but we can divide the family into two disjoint subfamilies ${\mathcal Q}_1$, ${\mathcal Q}_2$ containing exactly half of the cubes, each satisfying the sparseness condition $\sum_{Q\in {\mathcal Q}_i}|Q|= |Q_{0}|/2$. This leads to a domination by at most two sparse operators, which is acceptable. To simplify notation, we still denote each subfamily by $\mathcal{Q}$. Fix $Q\in \mathcal Q$ and define \begin{equation}\label{EQ} E_{Q}:=\{x\in Q : M(f\mathbbm{1}_{Q})(x)> c'\langle |f|\rangle_{Q}\} \cup \{x\in Q : |P_N^\perp \tilde T(f\mathbbm{1}_{Q})(x)|> c'\varepsilon \langle |f|\rangle_{Q}\}, \end{equation} where $c'>0$ is chosen so that $$ |E_{Q}|\leq \frac{1}{2^{n+2}}|Q|. 
$$ To show that $c'>0$ is independent of $\varepsilon$, from Lemma \ref{tildeT} we have \begin{align*} |E_{Q}|&\leq \frac{C}{c'\langle |f|\rangle_Q}\| f\mathbbm{1}_{Q}\|_{L^1(\mathbb R^n)} +\frac{C\sup_{Q\in {\mathcal D}_N^c}|\varepsilon_Q|}{c'\varepsilon \langle |f|\rangle_Q}\| f\mathbbm{1}_{Q}\|_{L^1(\mathbb R^n)} \leq \frac{2C}{c'}|Q|\leq \frac{1}{2^{n+2}}|Q| \end{align*} by choosing $c'>C2^{n+3}$. We note that the constant $C>0$ may depend on the dimension $n$ but not on $\varepsilon >0$. We define another exceptional set $$ \tilde E_{Q}:=\{x\in Q : M(\mathbbm{1}_{E_{Q}})(x)> 2^{-(n+1)}\}, $$ and define $\mathcal E_{Q}$ to be the family of maximal (with respect to inclusion) dyadic cubes $P$ contained in $\tilde E_{Q}$. Note that for each $Q\in \mathcal Q$, the containment $\mathcal{E}_{Q}\subseteq \mathcal{D}_N^c$ holds. Moreover, due to maximality, the cubes $P\in \mathcal E_{Q}$ are pairwise disjoint, and thus $\mathcal E_{Q}$ is a sparse collection: \begin{equation}\label{Psparse} \sum_{P\in \mathcal E_{Q}}|P|\leq |E_{Q}|\leq \frac{1}{2^{n+2}}|Q|. \end{equation} We see now that \begin{align}\label{PEQ} |P \cap E_{Q}| \leq \frac{1}{2}|P|. \end{align} By maximality of $P$, $2P\cap (Q\backslash \tilde E_Q)\neq \emptyset $, and so there exists $x\in 2P$ such that $M(\mathbbm{1}_{E_{Q}})(x)\leq 2^{-(n+1)}$. Then $$ \frac{|E_{Q}\cap 2P|}{|2P|}\leq 2^{-(n+1)}, $$ and since $P\subseteq 2P$ with $|2P|=2^n|P|$, we get $|P\cap E_{Q}|\leq 2^{-(n+1)}|2P|=\frac{1}{2}|P|$, which proves \eqref{PEQ}. Note that the inequality in \eqref{PEQ} implies $|P \setminus E_{Q}| > \frac{1}{2}|P|$. We can now estimate $|P_N^\perp \tilde T(f\mathbbm{1}_Q)(x)|$ for $x \in Q$. First, for $x\in Q\backslash E_{Q}$ we trivially have $$ |P_N^\perp \tilde T(f\mathbbm{1}_{Q})(x)|\leq c'\varepsilon \langle |f|\rangle_{Q} =c'\varepsilon \langle |f|\rangle_{Q} \mathbbm{1}_{Q}(x).
$$ Second, to obtain an estimate on $E_Q$, we note that $\left|E_{Q} \setminus \bigcup_{P\in \mathcal E_{Q}}P\right| \leq \left|\tilde E_{Q} \setminus \bigcup_{P\in \mathcal E_{Q}}P\right|=0$, and so, we do not need to bound $|P_N^\perp \tilde T(f\mathbbm{1}_Q)(x)|$ for $x \in E_Q \setminus \bigcup_{P\in \mathcal E_{Q}}P$. It only remains to control $|P_N^\perp \tilde T(f\mathbbm{1}_Q)(x)|$ for $x \in \bigcup_{P\in \mathcal E_{Q}}P$. For any $P\in \mathcal E_{Q}$, any $x \in P$, and any $x' \in P\setminus E_Q$, we decompose $P_N^\perp \tilde Tf(x)$ as follows: \begin{align*} |P_N^\perp \tilde T(f\mathbbm{1}_Q)&(x)|\leq |P_N^\perp \tilde T(f\mathbbm{1}_{Q\setminus P^*})(x)|+|P_N^\perp \tilde T(f\mathbbm{1}_{P^*})(x)|\\ &\leq |P_N^\perp \tilde T(f\mathbbm{1}_{Q\setminus P^*})(x)-P_N^\perp \tilde T(f\mathbbm{1}_{Q\setminus P^*})(x')|+|P_N^\perp \tilde T(f\mathbbm{1}_{Q\setminus P^*})(x')| \\ &\hskip50pt +|P_N^\perp \tilde T(f\mathbbm{1}_{P^*})(x)|\\ &\leq |P_N^\perp \tilde T(f\mathbbm{1}_{Q\setminus P^*})(x)-P_N^\perp \tilde T(f\mathbbm{1}_{Q\setminus P^*})(x')|+|P_N^\perp \tilde T(f\mathbbm{1}_Q)(x')|+|P_N^\perp \tilde T(f\mathbbm{1}_{P^*})(x')| \\ &\hskip50pt +|P_N^\perp \tilde T(f\mathbbm{1}_{P^*})(x)|\\ &:= \text{I} + \text{II} + \text{III} + \text{IV}. \end{align*} The second term is easily controlled since $x' \not \in E_{Q}$ implies $|P_N^\perp \tilde T(f\mathbbm{1}_Q)(x')|\leq c' \varepsilon \langle |f| \rangle_{Q}$, and so $$ \text{II} \leq c' \varepsilon \langle |f| \rangle_{Q}\mathbbm{1}_{Q}(x). $$ For the first and third terms, define $$ E_{P}':= \{x \in P: |P_N^\perp \tilde T(f\mathbbm{1}_{P^*})(x)|>c' \varepsilon \langle |f| \rangle_{P^*}\}. $$ By Lemma \ref{tildeT}, $$ |E_{P}'| \leq \frac{C\varepsilon}{c' \varepsilon \langle |f| \rangle_{P^*}}\|f\mathbbm{1}_{P^*}\|_{L^1(\mathbb{R}^n)} \leq \frac{1}{2^{n+2}}|P|. $$ Then $|P \setminus E_{P}'| > \frac{1}{2}|P|$. 
This, together with $|P \setminus E_{Q}| > \frac{1}{2}|P|$, implies that $(P \setminus E_{Q}) \cap (P \setminus E_{P}')\neq \emptyset $. Therefore, there exists $x'\in P$ such that $M(f\mathbbm{1}_Q)(x') \leq c'\langle |f|\rangle_{Q}$ and $|P_N^\perp \tilde T(f\mathbbm{1}_{P^*})(x')|\leq c' \varepsilon \langle |f|\rangle_{P^*}$. Then, since $(f\mathbbm{1}_Q)\mathbbm{1}_{\mathbb R^n \setminus P^*}= f\mathbbm{1}_{Q\setminus P^*}$, we can apply Corollary \ref{paratildeT} to obtain $$ \text{I}\leq \varepsilon M(f\mathbbm{1}_Q)(x') \leq c'\varepsilon \langle |f|\rangle_{Q}\mathbbm{1}_{Q}(x). $$ Moreover, $$ \text{III}\leq c' \varepsilon \langle |f|\rangle_{P^*} = c'\varepsilon \langle |f|\rangle_{P^*}\mathbbm{1}_{P}(x). $$ We add the cubes $P\in \mathcal{E}_Q$ to the family $\mathcal S$ and the functions $\langle |f|\rangle_{P^*}\mathbbm{1}_{P}$ into $S|f|$. The family $\mathcal{E}_Q$ is sparse by \eqref{Psparse}. The fourth term is controlled by iterating the above argument, starting at \eqref{EQ} but replacing $Q$ with $P$, and so defining \begin{equation*} E_{P}:=\{x\in P : M(f\mathbbm{1}_{P})(x)> c'\langle |f|\rangle_{P}\} \cup \{x\in P : |P_N^\perp \tilde T(f\mathbbm{1}_{P})(x)|> c'\varepsilon \langle |f|\rangle_{P}\}. \end{equation*} \end{proof} \subsection{Compactness on weighted spaces} We can now prove the compactness of Calder\'on-Zygmund operators on weighted spaces. \begin{proof}[Proof of Theorem \ref{CZOWeightedCompactness}] Let $\tilde T=T-\Pi^{*}_{P_N(T^*1)}$. Since $\Pi^{*}_{P_N(T^*1)}$ is of finite rank, showing that $T$ is compact on $L^2(w)$ is equivalent to showing that $\tilde T$ is compact on $L^2(w)$. In particular, we argue that for each $\varepsilon>0$, there exists $N_0>0$ such that $$\|P_N^{\perp}\tilde Tf\|_{L^p(w)} \lesssim \varepsilon [w]_{A_p}^{\max\left\{1,\frac{p'}{p}\right\}}\|f\|_{L^p(w)}$$ for all $N>N_0$ and all $f\in L^p(w)$. We provide a sketch of the proof using the reasoning of Theorem \ref{Haarboundedness}. 
By Theorem \ref{domtilde}, there exist $N_0>0$ and a sparse family of cubes ${\mathcal S}$ such that $$ |P_N^\perp \tilde Tf(x)|\lesssim \varepsilon \sum_{R \in \mathcal{S}} \langle |f| \rangle_{R^*} \mathbbm{1}_R(x) =: \varepsilon S|f|(x) $$ for all $N>N_0$ and almost every $x \in \mathbb{R}^n$. Let first $p\ge 2$ and set $\sigma=w^{1-p'}$. We use again $ \|T\|_{L^p(w)\rightarrow L^p(w)} = \|T(\cdot\,\sigma)\|_{L^p(\sigma)\rightarrow L^p(w)} $ and proceed by duality. Let $f\in L^p(\sigma)$ and $g\in L^{p'}(w)$ be nonnegative. For each $R\in {\mathcal S}$ we denote by $E(R)$ the set described in Theorem \ref{Haarboundedness} that satisfies $E(R)\subseteq R$, $|R|\leq 2 |E(R)|$, and such that given $R,R'\in {\mathcal S}$ with $R\neq R'$, the corresponding sets $E(R)$ and $E(R')$ are disjoint. We use these properties, the $A_p$ condition for $w$, the containment $R\subsetneq R^*$, the inequality $|R^*|\lesssim |R|$, and boundedness of the maximal functions $M_{\sigma}$ and $M_{w}$ from Lemma \ref{WeightedMaxBoundedness}, to obtain \begin{align*} \varepsilon \langle S(f\sigma),gw\rangle &= \varepsilon \sum_{R\in \mathcal{S}} \langle f\sigma\rangle_{R^*}\int_{R} gw\,dm \\ &\leq \varepsilon\sum_{R\in \mathcal{S}} \frac{w(R^*)\sigma(R^*)^{p-1}}{|R^*|^p}\frac{|R^*|^{p-1}}{w(R)\sigma(R^*)^{p-1}}\int_{R^*}f\, d\sigma \int_{R}g\,dw\\ &\lesssim \varepsilon [w]_{A_p} \sum_{R\in \mathcal{S}}\left(\frac{1}{\sigma(R^*)}\int_{R^*}f\, d\sigma \right)\left(\frac{1}{w(R)}\int_{R}g\,dw\right)|R|^{p-1}\sigma(R^*)^{2-p}\\ &\leq \varepsilon 2^{p-1}[ w]_{A_p}\sum_{R\in \mathcal{S}} \langle f\rangle_{R^*, d\sigma } \langle g \rangle_{R, dw} |E(R)|^{p-1}\sigma(E(R))^{2-p}\\ &\lesssim \varepsilon [w]_{A_p}\sum_{R\in \mathcal{S}}\langle f\rangle_{R^*, d\sigma} \langle g \rangle_{R, dw} w(E(R))^{\frac{1}{p'}}\sigma(E(R))^{\frac{1}{p}}\\ &\leq \varepsilon [w]_{A_p}\left(\sum_{R\in \mathcal{S}} \langle f\rangle_{R^*, d\sigma }^p \sigma(E(R))\right)^{\frac{1}{p}}\left(\sum_{R}\langle g 
\rangle_{R, dw}^{p'}w(E(R))\right)^{\frac{1}{p'}}\\ & \lesssim \varepsilon [w]_{A_p}\|M_{\sigma}f\|_{L^p(\sigma)}\|M_wg\|_{L^{p'}(w)}\\ &\lesssim \varepsilon [w]_{ A_p}\|f\|_{L^p(\sigma)}\|g\|_{L^{p'}(w)}. \end{align*} The case $1<p<2$ follows from duality exactly as in the proof of Theorem \ref{Haarboundedness}. \end{proof} \begin{bibdiv} \begin{biblist} \bib{CAP2019}{article}{ title={Nondoubling Calder\'on-Zygmund theory: a dyadic approach}, author={J. M. Conde-Alonso}, author={J. Parcet}, journal={J. Fourier Anal. Appl.}, volume={25}, date={2019}, number={4}, pages={1267--1292} } \bib{CAR2016}{article}{ title={A pointwise estimate for positive dyadic shifts and some applications}, author={J. M. Conde-Alonso}, author={G. Rey}, journal={Math. Ann.}, volume={365}, date={2016}, number={3-4}, pages={1111--1135} } \bib{HMW1973}{article}{ title={Weighted norm inequalities for the conjugate function and Hilbert transform}, author={R. Hunt}, author={B. Muckenhoupt}, author={R. Wheeden}, journal={Trans. Amer. Math. Soc.}, volume={176}, date={1973}, pages={227--251} } \bib{H2020}{article}{ title={Extrapolation of compactness on weighted spaces}, author={T. P. Hyt\"onen}, date={2020}, journal={Arxiv e-prints: 2003.01606} } \bib{H2012}{article}{ title={The sharp weighted bound for general Calder\'on-Zygmund operators}, author={T. P. Hyt\"onen}, journal={Ann. of Math. (2)}, volume={175}, date={2012}, number={3}, pages={1473--1506} } \bib{HRT2017}{article}{ title={Quantitative weighted estimates for rough homogeneous singular integrals}, author={T. Hyt\"onen}, author={L. Roncal}, author={O. Tapiola}, journal={Israel J. Math.}, volume={218}, number={1}, date={2017}, pages={133--164} } \bib{L2017}{article}{ title={An elementary proof of the $A_2$ bound}, author={M. T. Lacey}, journal={Israel J. Math.}, volume={217}, date={2017}, pages={181--195} } \bib{L2013}{article}{ title={A simple proof of the $A_2$ conjecture}, author={A. K. Lerner}, journal={Int. Math. Res. 
Not.}, volume={\,}, date={2013}, number={14}, pages={3159--3170} } \bib{LN2019}{article}{ title={Intuitive dyadic calculus: the basics}, author={A. K. Lerner}, author={F. Nazarov}, journal={Expo. Math.}, volume={37}, date={2019}, number={3}, pages={225--265} } \bib{LO2019}{article}{ title={Some remarks on the pointwise sparse domination}, author={A. K. Lerner}, author={S. Ombrosi}, journal={J. Geom. Anal.}, date={2019}, pages={1--17} } \bib{LSMP2014}{article}{ title={Dyadic harmonic analysis beyond doubling measures}, author={L. D. L\'opez-S\'anchez}, author={J. M. Martell}, author={J. Parcet}, journal={Adv. Math.}, volume={267}, date={2014}, pages={44--93} } \bib{M2012}{article}{ title={Sharp weighted bounds without testing or extrapolation}, author={K. Moen}, journal={Arch. Math. (Basel)}, volume={99}, date={2012}, number={5}, pages={457--466} } \bib{OV2017}{article}{ title={Endpoint estimates for compact Calder\'on-Zygmund operators}, author={J-F. Olsen}, author={P. Villarroya}, journal={Rev. Mat. Iberoam.}, volume={33}, date={2017}, pages={1285--1308} } \bib{PPV2017}{article}{ title={Endpoint compactness of singular integrals and perturbations of the Cauchy integral}, author={K-M. Perfekt}, author={S. Pott}, author={P. Villarroya}, journal={Kyoto J. Math.}, volume={57}, date={2017}, number={2}, pages={365--393} } \bib{TTV2015}{article}{ title={Weighted martingale multipliers in the non-homogeneous setting and outer measure spaces}, author={C. Thiele}, author={S. Treil}, author={A. Volberg}, journal={Adv. Math.}, volume={285}, date={2015}, pages={1155--1188} } \bib{V2015}{article}{ title={A characterization of compactness for singular integrals}, author={P. Villarroya}, journal={J. Math. Pures Appl.}, volume={104}, date={2015}, pages={485--532} } \bib{V2019}{article}{ title={A global $Tb$ theorem for compactness and boundedness of Calder\'on-Zygmund operators}, author={P. Villarroya}, journal={J. Math. Anal.
Appl.}, volume={480}, date={2019}, number={1} } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Convex optimization problems arise in areas like signal processing, control systems, estimation, communication, data analysis, and machine learning. They are also useful to bound the optimal values of certain nonlinear programming problems, and to approximate their optimizers. Due to their ubiquitous nature and importance, much effort has been devoted to efficiently solve them. This paper is motivated by the goal of designing fast methods that combine the simplicity and ease of gradient methods with acceleration techniques to efficiently solve constrained optimization problems. \emph{Literature review:} Gradient descent is a widespread method to solve unconstrained convex optimization problems. However, gradient descent suffers from slow convergence. To achieve local quadratic convergence, one can use Newton's method~\cite{SB-LV:04}. Newton's method uses second-order information of the objective function and requires the inversion of the Hessian of the function. In contrast, the accelerated gradient descent method proposed by Nesterov~\cite{YEN:83} uses only first-order information combined with momentum terms~\cite{WS-SB-EJC:16,BS-SSD-WS-MIJ:19} to achieve an optimal convergence rate. For constrained convex optimization, generalizations of gradient algorithms include the projected gradient descent~\cite{YN:18} (for simple set constraints where the projection of any point can be computed in closed form) and (continuous-time) saddle-point or primal-dual dynamics (for general constraints), see e.g.,~\cite{TK:56,KA-LH-HU:58,AC-BG-JC:17-sicon,AC-EM-SHL-JC:18-tac}. When the saddle function is strongly convex-strongly concave, the primal-dual dynamics converges exponentially fast, see e.g.,~\cite{JC-VKNL:12}. Recent work~\cite{JC-SKN:19-jnls,GQ-NL:19,DD-MRJ:19,SSD-WH:19} has explored the partial relaxation of the strong convexity requirement while retaining the exponential convergence rate. 
A method with improved rate of convergence for constrained problems is accelerated mirror descent~\cite{WK-AB-PLB:15} which, however, necessitates the choice of an appropriate mirror map depending on the geometry of the problem and requires that each update solves a constrained optimization problem (which might be challenging in itself). Some works~\cite{PEG-DPR:12,PA-RO:17,NKD-SZK-MRJ:17} have sought to generalize Newton's method for equality constrained problems, designing second-order updates that require the inversion of the Hessian matrix of the augmented Lagrangian. Similar to gradient descent, a generalization of Nesterov's method for constrained convex optimization described in~\cite{YN:18} uses the~projection for simple set constraints. Here we follow an alternative route involving continuously differentiable exact penalty functions~\cite{RF:70,TG-EP:79} to convert the original problem into the unconstrained optimization of a nonlinear function. The works~\cite{GdP-LG:89,SL:92,GDP:94} generalize these penalty functions and establish, under appropriate assumptions on the constraint set, complete equivalence between the solutions of the constrained and unconstrained problems. We employ these penalty functions to reformulate the constrained convex optimization problem and identify sufficient conditions under which the unconstrained problem is also convex. Our previous work~\cite{PS-JC:18-cdc} explores the distributed computation of the gradient of the penalty function when the objective is separable and the constraints are locally expressible. \emph{Statement of contributions:} We consider equality-constrained convex optimization problems. Our starting point is the exact reformulation of this problem as the optimization of an unconstrained continuously differentiable function. We show via a counterexample that the unconstrained penalty function might not be convex for any value of the penalty parameter even if the original problem is convex.
This motivates our study of sufficient conditions on the objective and constraint functions of the original problem for the unconstrained penalty function to be convex. Our results are based on analyzing the positive semi-definiteness of the Hessian of the penalty function. We provide explicit bounds on the penalty parameter such that, for any value below them, the penalty function is convex or strongly convex on the domain, resp. Since the optimizers of the unconstrained convex penalty function are the same as the optimizers of the original problem, we deduce that the proposed Nesterov implementation solves the original constrained problem with an accelerated convergence rate starting from an arbitrary initial condition. Finally, we establish that Nesterov's algorithm applied to the penalty function renders the feasible set forward invariant. This, coupled with the fact that the penalty terms vanish on the feasible set, ensures that the accelerated convergence rate is also achieved from any feasible initialization. \section{Preliminaries}\label{sec:prelim} We collect here\footnote{Throughout the paper, we employ the following notation. Let $\mathbb{R}$, $\mathbb{R}_{>0}$, $\mathbb{R}_{\geq 0}$ and $\mathbb{N}$ be the set of real, positive real, non-negative real and natural numbers, resp. We use $\mathcal{X}^o$ to denote the interior of a set $\mathcal{X}$. $e_i^n$ denotes the $n-$dimensional unit vector in direction $i$. Given a matrix $A$, $\N(A)$ denotes its nullspace, $A^\top$ its transpose, $\|A \|$ its 2-norm, $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ its minimum and maximum eigenvalue, resp. If $A$ is positive semi-definite, we let $\lambda_2(A)$ denote the smallest positive eigenvalue, regardless of the multiplicity of the eigenvalue~$0$. Finally, $V^\perp$ denotes the orthogonal complement of the vector space~$V$. } basic notions of convex analysis~\cite{RTR:70,SB-LV:04} and optimization~\cite{DPB:99}.
\emph{Convex Analysis:} Let $\C \subseteq \mathbb{R}^n$ be a convex set. A function $f : \mathbb{R}^n \rightarrow \mathbb{R} $ is \emph{convex} on $\C$ if $ f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda) f(y)$, for all $ x,y \in \C$ and $\lambda \in [0,1]$. Convex functions have the property that every local minimizer is also a global minimizer. A continuously differentiable $f : \mathbb{R}^n \rightarrow \mathbb{R} $ is convex on $\C$ \emph{iff} $f(y) \geq f(x)+(y-x)^\top \nabla f(x)$, for all $ x,y \in \C$. A twice differentiable function is convex \emph{iff} its Hessian is positive semi-definite. A twice differentiable function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is \emph{strongly convex} on $\C$ with parameter $c \in \mathbb{R}_{>0}$ \emph{iff} $ \nabla^2 f(x) \geq c I$ for all $x \in \C$. \emph{Constrained Optimization:} Consider the following nonlinear optimization problem \begin{equation}\label{eq:nl} \begin{aligned} & \min_{x \in \mathcal{D}} & & f(x) \\ &\; \; \text{s.t.} & & h(x)=0, \end{aligned} \end{equation} where $f:\mathbb{R}^n \rightarrow \mathbb{R}, \; h: \mathbb{R}^n \rightarrow \mathbb{R}^p$ are twice continuously differentiable functions with $p \leq n$ and $\mathcal{D} \subset \mathbb{R}^n$ is a compact set which is regular (i.e., $\DD= \overline{\mathcal{D}^o}$). The feasible set of~\eqref{eq:nl} is $\F= \setdef{x \in \mathcal{D}}{ h(x)=0}$. \emph{Linear independence constraint qualification} (LICQ) holds at $x \in \mathbb{R}^n$ if $ \{\nabla h_k(x)\}_{k \in \until{p}}$ are linearly independent. The Lagrangian $L:\mathbb{R}^n \times \mathbb{R}^p \rightarrow \mathbb{R}$ associated to~\eqref{eq:nl}~is \begin{align*} L(x,\mu) = f(x)+ \mu^\top h(x) , \end{align*} where $\mu \in \mathbb{R}^p$ is the Lagrange multiplier (also called dual variable) associated with the constraints.
A Karush-Kuhn-Tucker (KKT) point for~\eqref{eq:nl} is $(\bar{x},\bar{\mu})$ such that \begin{align*} \nabla_xL(\bar{x},\bar{\mu}) &=0 , \qquad \quad h(\bar{x})=0 . \end{align*} Under LICQ, the KKT conditions are necessary for a point to be locally optimal. \emph{Continuously Differentiable Exact Penalty Functions:} With exact penalty functions, the idea is to replace the constrained optimization problem~\eqref{eq:nl} by an equivalent unconstrained problem. Here, we discuss continuously differentiable exact penalty functions following~\cite{TG-EP:79,GdP-LG:89}. The key observation is that one can interpret a KKT tuple as establishing a relationship between a primal optimal solution $\bar{x}$ and the dual optimal $\bar{\mu}$. In turn, the following result introduces multiplier functions that extend this relationship to any $x\in \mathbb{R}^n$. \begin{proposition}\longthmtitle{Multiplier functions and their derivatives~\cite{GdP-LG:89}}\label{prop:lambda} Assume that LICQ is satisfied at all $x \in \mathcal{D}$. Define $N:\mathbb{R}^n \rightarrow \mathbb{R}^{p \times p}$ as $N(x) = \nabla h(x)^\top \nabla h(x)$. Then $N(x)$ is a positive definite matrix for all $x \in \DD$. Given the function $x \mapsto \mu(x)$ defined by $\mu(x)= -N^{-1}(x) \nabla h(x)^\top \nabla f(x)$, the following holds \begin{enumerate}[(a)] \item if $(\bar{x},\bar{\mu})$ is a KKT point for~\eqref{eq:nl}, then $\mu(\bar{x})=\bar{\mu}$; \item $\mu : \mathbb{R}^n \rightarrow \mathbb{R}^p $ is a continuously differentiable function. \end{enumerate} \end{proposition} The multiplier function can be used to replace the multiplier vector in the augmented Lagrangian to define a continuously differentiable exact penalty function. Consider the continuously differentiable function $\map{f^\epsilon}{\mathbb{R}^n}{\mathbb{R}}$, \begin{align}\label{eq:penalty} f^{\epsilon}(x) &=f(x)+ \mu(x)^\top h(x) + \frac{1}{\epsilon}\|h(x)\|^2 . 
\end{align} The next result shows when $f^\epsilon$ is an exact penalty function. \begin{proposition}\longthmtitle{Continuously differentiable exact penalty function~\cite{GdP-LG:89}}\label{prop:exactness} Assume LICQ is satisfied at all $x \in \mathcal{D}$ and consider the unconstrained problem \begin{align}\label{eq:unc} \min_{x \in \mathcal{D}^o} f^\epsilon(x) . \end{align} Then, there exists $\bar{\epsilon}$ such that the sets of global minimizers of~\eqref{eq:nl} and~\eqref{eq:unc} coincide for all $\epsilon \in (0,\bar{\epsilon}]$. \end{proposition} \section{Problem Statement}\label{sec:problem} Consider the following convex optimization problem \begin{equation}\label{eq:convex} \begin{aligned} & \min_{x \in \mathcal{D}} & & f(x)\\ &\; \; \text{s.t.} & & Ax-b=0, \end{aligned} \end{equation} where $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is a twice continuously differentiable convex function and $\mathcal{D}$ is a convex set. Here $A \in \mathbb{R}^{p \times n}$ and $b \in \mathbb{R}^p$ with $p < n$. Without loss of generality, we assume $A$ has full row rank (implying that LICQ holds at all~$x \in \mathbb{R}^n$). Our aim is to design a Nesterov-like fast method to solve~\eqref{eq:convex}. We do this by reformulating the problem as an unconstrained optimization using continuously differentiable penalty function methods, cf. Section~\ref{sec:prelim}. Then, we employ Nesterov's accelerated gradient method to design \begin{subequations}\label{eq:algorithm} \begin{align} x_{k+1}&=y_k - \alpha \nabla f^\epsilon(y_k), \label{eq:algorithm-a} \\ a_{k+1}&=\frac{1+\sqrt{4a_k^2+1}}{2}, \label{algorithm-b} \\ y_{k+1}&=x_{k+1}+\frac{a_k-1}{a_{k+1}}(x_{k+1}-x_k), \label{eq:algorithm-c} \end{align} \end{subequations} where $\alpha \in \mathbb{R}_{>0}$ is the stepsize.
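To make the scheme concrete, the following is a minimal numerical sketch of the iteration~\eqref{eq:algorithm} applied to the penalty function~\eqref{eq:penalty}. The instance ($f(x)=\|x\|^2$ subject to $x_1+x_2=1$, with constrained optimizer $(0.5,0.5)$), the finite-difference gradient, and the stepsize $\alpha=1/6$ (the reciprocal of a hand-computed Lipschitz constant of $\nabla f^\epsilon$ for this instance with $\epsilon=1/2$) are illustrative choices, not taken from the text.

```python
import numpy as np

# Illustrative instance (not from the paper):
#   min ||x||^2  subject to  x1 + x2 = 1,
# whose constrained optimizer is x* = (0.5, 0.5).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
f = lambda x: x @ x
grad_f = lambda x: 2.0 * x

def f_eps(x, eps):
    """Exact penalty function f(x) + mu(x)^T h(x) + ||h(x)||^2 / eps,
    with multiplier function mu(x) = -(A A^T)^{-1} A grad f(x)."""
    h = A @ x - b
    mu = -np.linalg.solve(A @ A.T, A @ grad_f(x))
    return f(x) + mu @ h + (h @ h) / eps

def grad_f_eps(x, eps, d=1e-6):
    """Central-difference approximation of grad f^eps (for simplicity)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = d
        g[i] = (f_eps(x + e, eps) - f_eps(x - e, eps)) / (2.0 * d)
    return g

def nesterov(x0, eps=0.5, alpha=1.0 / 6.0, iters=1000):
    """Updates (eq:algorithm-a)-(eq:algorithm-c) with y0 = x0, a0 = 1."""
    x, y, a = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - alpha * grad_f_eps(y, eps)            # (eq:algorithm-a)
        a_next = (1.0 + np.sqrt(4.0 * a**2 + 1.0)) / 2.0   # (eq:algorithm-b)
        y = x_next + (a - 1.0) / a_next * (x_next - x)     # (eq:algorithm-c)
        x, a = x_next, a_next
    return x

x_star = nesterov(np.array([2.0, -1.0]))
```

For this instance one can check by hand that $\nabla^2 f^\epsilon$ has eigenvalues $2$ and $4/\epsilon-2$, so $f^\epsilon$ is strongly convex for $\epsilon<2$ and the iterates approach $(0.5,0.5)$ from the (infeasible) initial condition.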
If $f^\epsilon$ is convex with Lipschitz gradient $L$ and the algorithm is initialized at an arbitrary initial condition $x_0$ with $y_0=x_0$ and $a_0=1$, then according to~\cite[Theorem 1]{YEN:83}, \begin{subequations}\label{eq:Nesterov} \begin{align}\label{eq:Nesterov-convex} f^\epsilon(x_k) - f^\epsilon(x^*) \leq \dfrac{C}{(k+1)^2}, \end{align} where $x^* \in \mathbb{R}^n$ is a global minimizer of $f^\epsilon$ and $C \in \mathbb{R}_{\geq 0}$ is a constant dependent on the initial condition and $L$. If $f^\epsilon$ is strongly convex with parameter $s \in \mathbb{R}_{>0}$, and~\eqref{algorithm-b} and~\eqref{eq:algorithm-c} are replaced by \begin{align}\tag{\ref{eq:algorithm}d}\label{eq:algorithm-d} y_{k+1}&=x_{k+1}+\frac{\sqrt{L}-\sqrt{s}}{\sqrt{L}+\sqrt{s}}(x_{k+1}-x_k), \end{align} then one has from~\cite[Theorem 2.2.1]{YN:18} \begin{align}\label{eq:Nesterov-strongly-convex} f^\epsilon(x_k) - f^\epsilon(x^*) \leq C_s \exp \left(-k \sqrt{\frac{s}{L}} \right) , \end{align} \end{subequations} where $C_s \in \mathbb{R}_{\geq 0}$ is a constant dependent on the initial condition, $s$, and $L$. The key technical point for this approach to be successful is to ensure that the penalty function $f^\epsilon$ is (strongly) convex. Section~\ref{sec:convex} below shows that this is indeed the case for suitable values of the penalty parameter under appropriate assumptions on the objective and constraint functions of the original problem~\eqref{eq:convex}. \begin{remark}\longthmtitle{Distributed Algorithm Implementation}\label{re:distributed} We note here that the algorithm~\eqref{eq:algorithm} is amenable to distributed implementation if the objective function is separable and the constraints are locally coupled. In fact, our previous work~\cite{PS-JC:18-cdc} has shown how, in this case, the computation of the gradient of the penalty function in~\eqref{eq:algorithm-a} can be implemented in a distributed way.
Based on this observation, one could use the framework proposed here for fast optimization of convex problems in a distributed way. To obtain fast convergence, one could also use second-order augmented Lagrangian methods, e.g.,~\cite{PA-RO:17,NKD-SZK-MRJ:17}, but their distributed implementation faces the challenge of computing the inverse of the Hessian of the augmented Lagrangian to update the primal and dual variables. Even if the Hessian is sparse for separable objective functions and local constraints, its inverse in general is not. \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} \section{Convexity of the Penalty Function}\label{sec:convex} We start by showing that the continuously differentiable exact penalty function $f^\epsilon$ defined in~\eqref{eq:penalty} might not be convex even if the original problem~\eqref{eq:convex} is convex. For the convex problem~\eqref{eq:convex}, the penalty function takes the form \begin{align}\label{eq:penalty_eq} f^\epsilon(x) & = f(x) \\ & \quad -([AA^\top]^{-1}A \nabla f(x) )^\top (Ax-b)+\frac{1}{\epsilon}\|Ax-b\|^2. \notag \end{align} A look at this expression makes it seem that a sufficiently small choice of $\epsilon$ might make $f^\epsilon$ convex for all $x \in \mathcal{D}$. The following example shows that this is not always the case. \begin{example}\longthmtitle{Non-convex penalty function}\label{ex:example} Consider \begin{equation*} \begin{aligned} &\min\limits_{x \in \mathcal{D}} & & x_1^4 + x_2^4 \\ &\; \; \text{s.t.} & & x_1+x_2=0. \end{aligned} \end{equation*} The optimizer is $(0,0)$. The penalty function takes the form \begin{align*} f^\epsilon(x)=x_1^4 + x_2^4 + \mu(x)^\top (x_1+x_2) + \dfrac{1}{\epsilon} (x_1+x_2)^2, \end{align*} where $\mu(x)=-(2x_1^3+2x_2^3)$. The Hessian of this function is \begin{align*} \nabla^2 f^\epsilon(x) \! = \! \!
\left[ \begin{matrix} -12x_1^2-12x_1x_2+\dfrac{2}{\epsilon} & -6x_1^2-6x_2^2+\dfrac{2}{\epsilon} \\ -6x_1^2-6x_2^2+\dfrac{2}{\epsilon} & -12x_2^2-12x_1x_2+\dfrac{2}{\epsilon} \end{matrix} \right] \!. \end{align*} If $x_1=0$, then the determinant of $ \nabla^2 f^\epsilon(x)$ evaluates to $-36x_2^4$, which is independent of $\epsilon$. Hence, $f^\epsilon$ cannot be made convex over any set containing the vertical axis. \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{example} Example~\ref{ex:example} shows that the penalty function cannot always be convexified by adjusting the value of~$\epsilon$. Intuitively, the reason for this fact is that the term in the expression~\eqref{eq:penalty_eq} that depends on the parameter~$\epsilon$ is not strongly convex. This implies that there are certain subspaces where the non-convexity arising from the term that involves the Lagrange multiplier function cannot be countered. In turn, these subspaces are defined by the kernel of the Hessian of the last term in the expression~\eqref{eq:penalty_eq} of the penalty function. These observations motivate our study of conditions on the objective function and the constraints that guarantee that the penalty function is convex. In our discussion, we start by providing sufficient conditions for the convexity of the penalty function over $\DD$. \subsection{Sufficient Conditions for Convexity over the Domain} Here we provide conditions for the convexity of the penalty function~$f^\epsilon$ by establishing the positive semidefiniteness of its Hessian. Throughout the section, we assume $f$ is three times differentiable. Note that the gradient and the Hessian of $f^\epsilon$ are given, resp., by \begin{subequations} \begin{align} \nabla f^\epsilon(x)&=\nabla f(x)- \nabla^2 f(x) A^\top [AA^\top]^{-1}(Ax-b) \notag \\ & \quad - A^\top [AA^\top]^{-1} A \nabla f(x) +\frac{2}{\epsilon}A^\top(Ax-b).
\label{eq:gradient_eq} \\ \nabla^2 f^\epsilon(x)& =\nabla^2 f(x) - W(x) - \nabla^2 f(x)A^\top [AA^\top]^{-1}A \notag \\ & \quad - A^\top [AA^\top]^{-1}A \nabla^2 f(x) + \frac{2}{\epsilon}A^\top A, \label{eq:hessian_eq} \end{align} \end{subequations} where we use the short-hand notation \begin{align}\label{eq:W} W(x)= \sum\limits_{i=1}^n \nabla_{x_i} \nabla^2 f(x) A^\top [A A^\top]^{-1}(Ax-b) e_i^{n^\top}. \end{align} The following result provides sufficient conditions under which the penalty function~\eqref{eq:penalty_eq} is convex on $\DD$. \begin{theorem}\longthmtitle{Convexity of the penalty function}\label{thm:eq} For the optimization problem~\eqref{eq:convex}, assume $ \nabla^2 f(x) - W(x) \succ 0$ for all $x \in \DD$ and let \begin{align*} \bar{\epsilon} = \min\limits_{x \in \DD} \dfrac{ 2 \lambda_{\min}(AA^\top ) \lambda_{\min}(\nabla^2f(x) - W(x)) }{ \lambda^2_{\max} ( \nabla^2 f(x) ) +R(x) \lambda_{\min}(\nabla^2f(x) - W(x)) } , \end{align*} where $R(x)= 2 \lambda_{\max} (\nabla^2 f(x)) - \lambda_{\min} ( \nabla^2 f(x) - W(x) )$. Then $f^\epsilon$ is convex on $\DD$ for all $\epsilon \in (0, \bar{\epsilon}]$ and consequently the convergence guarantee~\eqref{eq:Nesterov-convex} holds. \end{theorem} \begin{IEEEproof} For an arbitrary $x \in \DD$, we are interested in determining the conditions under which $\nabla^2 f^\epsilon(x) \succeq 0$, or in other words, $v^\top \nabla^2 f^\epsilon(x) v \geq 0$ for all $v \in \mathbb{R}^n$. From~\eqref{eq:hessian_eq}, \begin{align}\label{eq:expression} v^\top \nabla^2 f^\epsilon(x) v &= \frac{2}{\epsilon} v^\top A^\top A v + v^\top (\nabla^2 f(x) -W(x))v \\ & \quad - 2v^\top( \nabla^2 f(x)A^\top [AA^\top]^{-1}A )v. \notag \end{align} Let us decompose $v$ as $v = v^{\|}+v^{\perp}$, where $v^{\|}$ is the component of $v$ in the nullspace $\N(A)$ of $A$ and $v^{\perp}$ is the component orthogonal to it. 
Then~\eqref{eq:expression} becomes \begin{align*} v^\top \nabla^2 f^\epsilon(x) v &= \frac{2}{\epsilon} v^{\perp \top} A^\top A v^{\perp} + v^\top (\nabla^2 f(x) -W(x) )v \\ & \quad - 2v^{\| \top} \nabla^2 f(x)A^\top [A A^\top]^{-1} A v^{\perp} \\ & \quad -2v^{\perp \top} \nabla^2 f(x)A^\top [A A^\top]^{-1} A v^{\perp} . \end{align*} Since $A^\top(AA^\top)^{-1}Av^\perp = v^\perp$, cf.~\cite[Theorem 1.1.1]{SLC-CDM:09}, the above expression reduces to \begin{align*} & v^\top \nabla^2 f^\epsilon(x) v = \frac{2}{\epsilon} v^{\perp \top} A^\top A v^{\perp} + v^\top (\nabla^2 f(x) -W(x) )v \\ & \quad - 2v^{\| \top} \nabla^2 f(x) v^{\perp} -2v^{\perp \top} \nabla^2 f(x) v^{\perp} \\ & \geq \Big( \frac{2}{\epsilon} \lambda_{\min} (A A^\top ) - 2 \lambda_{\max} (\nabla^2 f(x)) \Big) \| v^\perp \|^2 \\ &\quad + \lambda_{\min} ( \nabla^2 f(x) - W(x) ) ( \|v^\perp \|^2 + \| v^\| \|^2 ) \\ & \quad - 2 \lambda_{\max} (\nabla^2 f(x)) \|v^\perp \| \|v^\| \| \\ & = \begin{bmatrix} \! \|v^\perp \| \! \\ \! \| v^\| \| \! \end{bmatrix}^{\hspace*{-0.1cm} \top} \hspace*{-0.2cm} \underbrace{\begin{bmatrix} S(x) & \hspace*{-0.1cm} - \lambda_{\max} (\nabla^2 f(x)) \\ \! - \lambda_{\max} (\nabla^2 f(x)) & \hspace*{-0.1cm} \lambda_{\min} ( \nabla^2 f(x) - W(x) ) \! \end{bmatrix}}_{P(x)} \hspace*{-0.1cm} \begin{bmatrix} \! \|v^\perp \| \! \\ \! \| v^\| \| \! \end{bmatrix} \! \!, \end{align*} where $S(x)=\dfrac{2}{\epsilon} \lambda_{\min} (A A^\top ) - R(x)$. Therefore, we deduce that $\nabla^2 f^\epsilon (x) \succeq 0$ if $\epsilon$ is such that $P(x) \succeq 0$. Since $P(x)$ is a $2\times 2$ matrix, the latter holds if $S(x)$ and the determinant of $P(x)$ are non-negative. The determinant is non-negative if and only if \begin{align*} \epsilon \leq \dfrac{2 \lambda_{\min} (AA^\top) \lambda_{\min} ( \nabla^2 f(x) - W(x) )}{\lambda^2_{\max} (\nabla^2 f(x))+R(x) \lambda_{\min} ( \nabla^2 f(x) - W(x) )}. \end{align*} The above value of $\epsilon$ also ensures that $S(x) > 0$.
Taking the minimum over all $x \in \DD$ completes the proof. \end{IEEEproof} \begin{remark}\longthmtitle{Differentiability of the objective function} Note that the implementation of~\eqref{eq:algorithm} requires the objective function $f$ to be twice continuously differentiable, while the definition of $W$ in~\eqref{eq:W} involves the third-order partial derivatives of $f$. We believe that an extension of Theorem~\ref{thm:eq} could be pursued in case the objective function is only twice differentiable using tools from nonsmooth analysis, e.g.,~\cite{FHC:83}, but we do not pursue it here for space reasons. \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} The next result provides sufficient conditions under which the penalty function is strongly convex on $\DD$. \begin{corollary}\longthmtitle{Strong convexity of the penalty function}\label{co:strong} For the optimization problem~\eqref{eq:convex}, assume $ \nabla^2 f(x) - W(x) \succeq cI$ for all $x \in \DD$ and let \begin{align*} \bar{\epsilon}_s = \min\limits_{x \in \DD}\! \dfrac{ 2\lambda_{\min}(AA^\top ) (c-s) }{ \lambda^2_{\max} ( \nabla^2 f(x) ) \!+ \! 2 (c-s)\lambda_{\max} ( \nabla^2 f(x)) \! - \!(c-s)^2\! } . \end{align*} Then $f^\epsilon$ is strongly convex on $\DD$ with parameter $s \in (0,c)$ for all $\epsilon \in (0, \bar{\epsilon}_s]$ and the convergence guarantee~\eqref{eq:Nesterov-strongly-convex} holds. \end{corollary} \begin{IEEEproof} Let us decompose $\nabla^2 f(x) - W(x)$ as $\nabla^2 f(x) - W(x)= B(x)+ sI$. Since $\nabla^2 f(x) - W(x) \succeq cI$, it follows that $B(x) \succeq (c-s)I $. Establishing that the penalty function is strongly convex with parameter $s$ is equivalent to establishing that, for all $x \in \DD$, $v^\top (\nabla^2 f^\epsilon(x) - sI ) v\geq 0$ for all $v \in \mathbb{R}^n$. 
Following the same steps as in the proof of Theorem~\ref{thm:eq}, one can verify that this is true if, for all $x \in \DD$, $\epsilon$ is less than or equal to \begin{align*} \dfrac{ 2\lambda_{\min}(AA^\top ) \lambda_{\min}(B(x)) }{ \lambda^2_{\max} ( \nabla^2 f(x) ) \! + \! 2 \lambda_{\min}(B(x))\lambda_{\max} ( \nabla^2 f(x)) \! - \! \lambda^2_{\min}(B(x)) } . \end{align*} Replacing $\lambda_{\min}(B(x))$ by $c-s$, it follows that the penalty function is strongly convex with parameter $s$ if $\epsilon \leq \bar{\epsilon}_s$. \end{IEEEproof} It is easy to verify that Example~\ref{ex:example} does not satisfy the sufficient condition identified in Theorem~\ref{thm:eq}. This condition can be interpreted as requiring the original objective function to be sufficiently convex to handle the non-convexity arising from the penalty for being infeasible. Finding the value of $\bar{\epsilon}$ still remains a difficult problem as computing $\lambda_{\min}(\nabla^2 f(x) - W(x))$ for all $x \in \DD$ is not straightforward. The next result simplifies the conditions of Theorem~\ref{thm:eq} for linear and quadratic programming problems. \begin{corollary}\longthmtitle{Sufficient conditions for problems with linear and quadratic objective functions}\label{co:quadratic} \begin{enumerate}[(i)] \item If the objective function in problem~\eqref{eq:convex} is linear, then the penalty function is convex on $\mathbb{R}^n$ for all values of~$\epsilon$; \item If the objective function in problem~\eqref{eq:convex} is quadratic with Hessian $Q \succ 0$, then the penalty function is convex on $ \mathbb{R}^n$ for all $\epsilon \in (0, \bar{\epsilon}]$, where \begin{align*} \bar{\epsilon} = \dfrac{ 2\lambda_{\min}(AA^\top ) \lambda_{\min}(Q) }{ \lambda^2_{\max} ( Q ) + 2 \lambda_{\min}(Q)\lambda_{\max} ( Q) - \lambda^2_{\min}(Q) } . \end{align*} \end{enumerate} In either case, the convergence guarantee~\eqref{eq:Nesterov-convex} holds. 
\end{corollary} \begin{IEEEproof} We present our arguments for each case separately. For case (i), we have $\nabla^2 f(x)=0$. Hence, \begin{align*} \nabla^2 f^\epsilon(x)=\dfrac{2}{\epsilon}A^\top A, \end{align*} which means that $\nabla^2 f^\epsilon(x) \succeq 0$ for all $x \in \mathbb{R}^n$. For case (ii), \begin{align*} f(x)=\dfrac{1}{2}x^\top Q x + h^\top x, \end{align*} where $Q \in \mathbb{R}^{n \times n}$ and $h \in \mathbb{R}^n$. The expression for the Hessian of $f^\epsilon$ becomes \begin{align*} \nabla^2 f^\epsilon(x) &= Q + \frac{2}{\epsilon}A^\top A \notag - Q A^\top [AA^\top]^{-1} A \\ & \quad - A^\top [AA^\top]^{-1} A Q. \end{align*} Clearly $W(x)=0$ for all $x \in \mathbb{R}^n$, and the result follows from Theorem~\ref{thm:eq}. \end{IEEEproof} Following Corollary~\ref{co:strong}, one can also state similar conditions for the penalty function to be strongly convex in the case of quadratic programs, but we omit them here for space reasons. From Corollary~\ref{co:quadratic}, ensuring that the penalty function is convex is easier when the objective function is quadratic. This follows from the fact that $W(x)$, which depends on the third-order derivatives of the objective function, vanishes. Hence, in the quadratic case, the condition in Theorem~\ref{thm:eq} requiring the Hessian of the objective function to be greater than $W(x)$ for all $x \in \DD$ is automatically satisfied. In what follows we provide a very simple approach for general objective functions. \subsection{Convexity over Feasible Set Coupled with Invariance} Here we present a simplified version of the proposed approach, which is based on the fact that inside the feasible set the penalty and objective functions take the same values. To build on this observation, we start by characterizing the extent to which the constraints are satisfied under Nesterov's algorithm.
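Before stating the formal result, the invariance property can be checked numerically. The following sketch (with a hypothetical quadratic objective $f(x)=\frac{1}{2}\|x\|^2$ and illustrative parameter values, not taken from the paper) runs the accelerated iteration from a feasible initialization and monitors the constraint residual $\|Ax_k-b\|$, which stays at the level of round-off error.

```python
import numpy as np

# Numerical check: starting from a feasible point, Nesterov's iteration on the
# penalty function keeps A x_k = b (up to round-off).  Objective and parameter
# values are illustrative.
A = np.array([[1., 1., 1., 1.], [1., -1., 1., -1.]])
b = np.array([1.0, 0.0])
AAt_inv = np.linalg.inv(A @ A.T)
eps, alpha = 0.1, 1e-3

def grad_feps(x):
    r = A @ x - b
    g = x                                  # grad f; Hessian of f is the identity
    return (g - A.T @ (AAt_inv @ r)        # -hess_f(x) A^T (AA^T)^{-1} (Ax-b)
              - A.T @ (AAt_inv @ (A @ g))  # -A^T (AA^T)^{-1} A grad f(x)
              + (2.0 / eps) * (A.T @ r))

# Feasible initialization: projection of the origin plus a null-space vector.
x = y = A.T @ AAt_inv @ b + np.array([1.0, 1.0, -1.0, -1.0])
a = 1.0
max_residual = 0.0
for _ in range(500):
    x_new = y - alpha * grad_feps(y)
    a_new = (1.0 + np.sqrt(4.0 * a**2 + 1.0)) / 2.0
    y = x_new + (a - 1.0) / a_new * (x_new - x)
    x, a = x_new, a_new
    max_residual = max(max_residual, np.linalg.norm(A @ x - b))
```

In exact arithmetic the residual is identically zero, mirroring the induction argument used in the proof below.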
\begin{lemma}\longthmtitle{Forward invariance of the feasible set under Nesterov's algorithm applied to the penalty function}\label{lemma:feasible} Consider Nesterov's accelerated gradient algorithm~\eqref{eq:algorithm} applied to the penalty function~\eqref{eq:penalty_eq} for an arbitrary $\epsilon > 0$. If the algorithm is initialized at $y_0 = x_0$, with $x_0$ belonging to the feasible set $\F$, then $\{x_k\}_{k=0}^{\infty}, \{y_k\}_{k=0}^{\infty} \subset \F$. \end{lemma} \begin{IEEEproof} We need to prove that $Ax_k=b$ and $Ay_k=b$ for all $k \geq 0$ if $Ax_0=Ay_0=b$. We prove this by induction. Since the claim clearly holds for $k=0$, we next prove that if $Ax_k=Ay_k=b$, then $Ax_{k+1}=Ay_{k+1}=b$. From~\eqref{eq:algorithm-a} and~\eqref{eq:gradient_eq}, we have \begin{align*} Ax_{k+1}=&Ay_k-\alpha A \nabla f^\epsilon (y_k) \\ =&Ay_k \! -\! \alpha A (\nabla f(y_k) \! - \! \nabla^2 f(y_k) A^\top [AA^\top]^{-1}(Ay_k\! - \!b ) \notag \\ & -\! A^\top [AA^\top]^{-1} A \nabla f(y_k) \! + \! \frac{2}{\epsilon}A^\top(Ay_k \!- \!b)). \notag \end{align*} Substituting $Ay_k=b$, the above expression evaluates to $b$ independently of $\epsilon > 0$. Then from~\eqref{eq:algorithm-c}, one has $Ay_{k+1}=b$. Since the argument above is independent of the values of $a_k$ for all $k \in \mathbb{N}$, it holds for the strongly convex case~\eqref{eq:algorithm-d} as well, thus completing the proof by induction. \end{IEEEproof} As a consequence of this result, if the trajectory starts in the feasible set $\F$, then it remains in it for all subsequent iterations. This observation allows us to ensure the convergence rate guarantee for any convex objective function. \begin{corollary}\longthmtitle{Accelerated convergence with feasible initialization} For the optimization problem~\eqref{eq:convex} and arbitrary $\epsilon > 0$, the algorithm~\eqref{eq:algorithm} initialized in $\F$ enjoys the guarantee~\eqref{eq:Nesterov} on convergence to the optimal value.
\end{corollary} \begin{IEEEproof} Note that $f^\epsilon(x) = f(x)$ whenever $Ax = b$, and hence $f^\epsilon$, by definition, is automatically (strongly) convex on $\F$ regardless of the value of $\epsilon$. The convergence guarantee follows from this fact together with Lemma~\ref{lemma:feasible}. \end{IEEEproof} \begin{remark}\longthmtitle{Robustness of the proposed approach} Given any $x_0 \in \mathbb{R}^n$, one can find a feasible initial point $x_0 -A^\top [AA^\top]^{-1}(Ax_0-b)$ by projecting $x_0$ onto the feasible set $\F$, and then implement Nesterov's accelerated method with the projected gradient $(I - A^\top [AA^\top]^{-1} A)\nabla f(x)$. In fact, this projected gradient method coincides with the approach proposed here when evaluated over~$ \F$. The advantage of our approach resides in the error-correcting terms involving the value of $Ax-b$, cf.~\eqref{eq:gradient_eq}, which penalize any deviation from the feasible set and hence provide additional robustness in the face of disturbances. By contrast, the projected gradient approach requires an error-free execution; if error is present, the trajectory may leave and remain outside the feasible set unless repeated projections of the updated state are taken. The inherent robustness property of the approach proposed here is especially important in the context of distributed implementations, cf. Remark~\ref{re:distributed}, where agents need to collectively estimate (and hence only implement approximations of) $A^\top [AA^\top]^{-1} A \nabla f(x)$ and taking the projection in a centralized fashion is not possible. The approach proposed here can also be extended to problems with convex inequality constraints, cf.~\cite{GdP-LG:89}, whereas computing the projection in closed form is not possible for general convex constraints.
\relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} \begin{figure*}[t] \centering \subfloat[][]{\includegraphics[width=.45\linewidth]{agents_50exp}\label{fig:example_ra}} \quad \subfloat[][]{\includegraphics[width=.45\linewidth]{time_loglog}\label{fig:cpu_time} } \caption{Performance comparison of the proposed algorithm (Nesterov's acceleration on the penalty function) with the second-order augmented Lagrangian method~\cite{PA-RO:17}, the saddle-point dynamics~\cite{AC-EM-SHL-JC:18-tac,GQ-NL:19} applied to the Lagrangian and the augmented Lagrangian, respectively, and the gradient descent of the penalty function. (a) shows the evolution of the error between the objective function and its optimal value for $n=50$ and (b) shows the computation time per iteration (note that the difference between second-order and first-order methods increases significantly with the problem dimension). For a desired level of accuracy, the proposed method outperforms the other methods when the number of iterations and the CPU time per iteration are jointly considered.}\label{fig:sim} \end{figure*} \section{Simulations}\label{sec:sims} In this section, we show the effectiveness of the proposed approach through numerical simulations. We consider \begin{equation*} \begin{aligned} & \min_{x \in \mathbb{R}^n} & & \sum\limits_{i=1}^n \frac{1}{2} \beta_i x^2_i + \gamma_i \exp(x_i) \\ &\; \; \text{s.t.} & & \sum\limits_{i=1}^n x_i=100, \end{aligned} \end{equation*} where $\beta_i$, $\gamma_i \in \mathbb{R}_{>0}$. We evaluate different scenarios with $n = 10, 50, 100, 500, 1000, 5000$, and $10000$. We take $\DD=\setdef{x \in \mathbb{R}^n}{ \| x \|_{\infty} \leq 5, \sum\limits_{i=1}^n x_i - 100 \leq 50 }$. By Corollary~\ref{co:strong}, for $n=50$, the penalty function is strongly convex on $\DD$ with parameter $s=0.01$ for all $\epsilon \in (0, \bar{\epsilon}_s]$, where $\bar{\epsilon}_s=0.3603$. In our simulations, we use $\epsilon = 10^{-1}$ and $\alpha = 10^{-3}$.
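A small-scale sketch of this experiment can be written as follows, with $n=5$, illustrative parameters $\beta_i=\gamma_i=1$, a rescaled constraint $\sum_i x_i = 5$ (so that by symmetry the minimizer is $x_i = 1$ for all $i$), and illustrative values of $\epsilon$ and $\alpha$; this is not the Matlab code used to generate the figures.

```python
import numpy as np

# Small-scale sketch of the simulation setup:
#   min  sum_i 0.5*beta_i*x_i^2 + gamma_i*exp(x_i)   s.t.  sum_i x_i = 5,
# solved by Nesterov's iteration on the exact penalty function.
n = 5
beta = np.ones(n); gamma = np.ones(n)
A = np.ones((1, n)); b = np.array([5.0])
AAt_inv = np.linalg.inv(A @ A.T)

grad_f = lambda x: beta * x + gamma * np.exp(x)
hess_f = lambda x: np.diag(beta + gamma * np.exp(x))

def grad_feps(x, eps):
    r = A @ x - b
    g = grad_f(x)
    return (g - hess_f(x) @ A.T @ (AAt_inv @ r)
              - A.T @ (AAt_inv @ (A @ g))
              + (2.0 / eps) * (A.T @ r))

x = y = np.array([5.0, 0.0, 0.0, 0.0, 0.0])   # feasible but far from optimal
a, alpha, eps = 1.0, 1e-3, 0.1
for _ in range(10000):
    x_new = y - alpha * grad_feps(y, eps)
    a_new = (1.0 + np.sqrt(4.0 * a**2 + 1.0)) / 2.0
    y = x_new + (a - 1.0) / a_new * (x_new - x)
    x, a = x_new, a_new
# By symmetry, the iterates approach x_i = 1 while remaining feasible.
```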
Figure~\ref{fig:sim} compares the performance of the proposed method with the second-order augmented Lagrangian method~\cite{PA-RO:17}, the saddle-point dynamics~\cite{AC-EM-SHL-JC:18-tac,GQ-NL:19} applied to the Lagrangian and the augmented Lagrangian, resp., and the gradient descent applied to the penalty function. Figure~\ref{fig:sim}(a) shows the evolution of the error between the objective function and its optimal value for $n=50$. For the same level of accuracy, the number of iterations taken by the second-order augmented Lagrangian method is smaller by an order of magnitude compared to the proposed method. However, one should note that the second-order augmented Lagrangian method involves the inversion of the Hessian, which becomes increasingly expensive as the number of variables increases (see also Remark~\ref{re:distributed}). To illustrate this, Figure~\ref{fig:sim}(b) shows the computation time per iteration of the algorithms in Matlab version 2018a running on a MacBook Pro with a 2 GHz i5 processor and 8 GB of RAM. The time taken by the first-order algorithms is about the same, and is smaller by several orders of magnitude (depending on the number of variables) than that of the second-order augmented Lagrangian method. When both aspects (number of iterations and computation time per iteration) are considered together, the proposed approach outperforms the other methods, especially if the problem dimension is~large. \section{Conclusions}\label{sec:conc} We have presented a fast approach for constrained convex optimization. We have provided sufficient conditions under which we can reformulate the original problem as the unconstrained optimization of a continuously differentiable convex penalty function. Our proposed approach is based on the accelerated gradient method given by Nesterov for unconstrained convex optimization, and has a guaranteed convergence rate when the penalty function is (strongly) convex.
From the simulations, it is clear that, in terms of the computation time required to reach a desired accuracy, the proposed method performs best among the compared state-of-the-art methods. Based on our previous work, this method is amenable to distributed optimization if, in the original problem, the objective function is separable and the constraint functions are locally expressible. Future work will explore the effect of the choice of penalty parameter on the convergence speed of the proposed strategy, the generalization of the conditions identified here to ensure the penalty function is (strongly) convex in the presence of inequality constraints, and the extension of Nesterov's accelerated gradient techniques to specific classes of non-convex functions (e.g., quasi-convex functions).
\subsection{}\vskip -22pt\hskip \parindent} \def\DY#1{{\cal DY}\left({#1}\right)} \def\YD#1{{\cal DY}{#1}} \def\cO#1{{{\cal O}(#1)}} \defI\!\!R{I\!\!R} \defC\!\!\!\!I{C\!\!\!\!I} \defQ\!\!\!\!I{Q\!\!\!\!I} \defZ\!\!\!Z{Z\!\!\!Z} \defI\!\!N{I\!\!N} \def\T#1{{\rm T}(#1)} \def{\rm V}{{\rm V}} \def{\cal C}{{\cal C}} \def{\rm op}{{\rm op}} \def{\rm id}{{\rm id}} \def{\rm Obj}{{\rm Obj}} \unitlength=15 true pt \def\krr{\kern -.16667em}% \def\kr{}% \def\krrr{\kern -.3\unitlength}% \def\krl{}% \def\krrrr% {\kern -5\unitlength} \newlength{\textwd}% \def\hhstep{\kr\kr \kern -.5\unitlength} \def\hstep{\kr\kr \kern .5\unitlength} \def\step{\kr\kr \kern \unitlength} \def\Step{\kr\kr \kern 2\unitlength} \def\vvbox#1{{\offinterlineskip\vcenter{% \def\coev{\kr \begin{picture}(2,2)\put(1,0){\oval(2,2)[t]}\end{picture}} \def\ev{\kr \begin{picture}(2,2)\put(1,2){\oval(2,2)[b]}\end{picture}} \def\hcoev{\kr \begin{picture}(1,2)\put(.5,0){\oval(1,1)[t]}\end{picture}} \def\hev{\kr \begin{picture}(1,2)\put(.5,2){\oval(1,1)[b]}\end{picture}} \def\COEV{\kr \begin{picture}(2,2)\put(3,0){\oval(6,6)[t]}\end{picture}} \def\EV{\kr \begin{picture}(2,2)\put(3,2){\oval(6,6)[b]}\end{picture}} \def\unit{\kr \begin{picture}(0,2) \put(0,0){\line(0,1){1}}\put(0,1.2){\circle{0.4}} \end{picture}} \def\counit{\kr \begin{picture}(0,2) \put(0,1){\line(0,1){1}}\put(0,.8){\circle{0.4}} \end{picture}} \def\Q##1{\kr \begin{picture}(0,2) \put(0,0){\line(0,1){0.4}}\put(0,1){\circle{1.2}} \put(-0.6,0.4){\makebox(1.2,1.2)[cc]{$\scriptstyle ##1$}} \end{picture}} \def\O##1{\kr \begin{picture}(0,2) \put(0,0){\line(0,1){0.4}}\put(0,1.6){\line(0,1){0.4}}\put(0,1){\circle{1.2}} \put(-0.6,0.4){\makebox(1.2,1.2)[cc]{$\scriptstyle ##1$}} \end{picture}} \def\O{S}} \def\SS{\O{S^-}{\O{S}} \def\SS{\O{S^-}} \def\O{\overline S}} \def\tSS{\O{\overline S^-}{\O{\overline S}} \def\tSS{\O{\overline S^-}} \let\P\O \def\dash##1{\kr \begin{picture}(2,2) \put(-.5,0){\dashbox{.1}(3,2){$\scriptstyle ##1$}} \end{picture}} \def\Dash##1{\kr 
\begin{picture}(2,2) \put(-1,0){\dashbox{.1}(4,2){$\scriptstyle ##1$}} \end{picture}} \def\x{\kr \begin{picture}(2,2) \put(0,2){\line(1,-1){2}}\put(0,0){\line(1,1){.7}}\put(2,2){\line(-1,-1){.7}} \end{picture}} \def\xx{\kr \begin{picture}(2,2) \put(0,2){\line(1,-1){.7}}\put(0,0){\line(1,1){2}}\put(2,0){\line(-1,1){.7}} \end{picture}} \def\hx{\kr \begin{picture}(1,2) \put(0,2){\line(1,-2){1}}\put(0,0){\line(1,2){.35}}\put(1,2){\line(-1,-2){.35}} \end{picture}} \def\hxx{\kr \begin{picture}(1,2) \put(0,2){\line(1,-2){.35}}\put(0,0){\line(1,2){1}}\put(1,0){\line(-1,2){.35}} \end{picture}} \def\d{\kr \begin{picture}(1,2)\put(0,2){\line(1,-2){1}}\end{picture}} \def\dd{\kr \begin{picture}(1,2)\put(0,0){\line(1,2){1}}\end{picture}} \def\hd{\kr \begin{picture}(1,2) \put(0,2){\line(1,-2){.5}} \put(.5,1){\line(0,-1){1}} \end{picture}} \def\hdd{\kr \begin{picture}(1,2) \put(1,2){\line(-1,-2){.5}} \put(0,1){\line(0,-1){1}} \end{picture}} \def\ld{\kr \begin{picture}(1,2) \put(1,0){\oval(2,2)[lt]}\put(1,0){\line(0,1)2} \end{picture}} \def\Ld{\kr \begin{picture}(2,2) \put(2,0){\oval(4,2)[lt]}\put(2,0){\line(0,1)2} \end{picture}} \def\cd{\kr \begin{picture}(2,2) \put(1,0){\oval(2,2)[ct]}\put(1,1){\line(0,1)1} \end{picture}} \def\hdcd{\kr \begin{picture}(1,2) \put(0,2){\line(1,-2){.5}} \put(.5,0){\oval(1,1)[ct]}\put(.5,.5){\line(0,1){.5}} \end{picture}} \def\hddcd{\kr \begin{picture}(1,2) \put(1,2){\line(-1,-2){.5}} \put(.5,0){\oval(1,1)[ct]}\put(.5,.5){\line(0,1){.5}} \end{picture}} \def\hcd{\kr \begin{picture}(1,2) \put(.5,0){\oval(1,1)[ct]}\put(.5,.5){\line(0,1){1.5}} \end{picture}} \def\Cd{\kr \begin{picture}(4,2) \put(2,0){\oval(4,2)[ct]}\put(2,1){\line(0,1)1} \end{picture}} \def\rd{\kr \begin{picture}(1,2) \put(0,0){\oval(2,2)[rt]}\put(0,0){\line(0,1)2} \end{picture}} \def\Rd{\kr \begin{picture}(2,2) \put(0,0){\oval(4,2)[rt]}\put(0,0){\line(0,1)2} \end{picture}} \def\lu{\kr \begin{picture}(1,2) \put(1,2){\oval(2,2)[lb]}\put(1,0){\line(0,1)2} \end{picture}} \def\Lu{\kr 
\begin{picture}(2,2) \put(2,2){\oval(4,2)[lb]}\put(2,0){\line(0,1)2} \end{picture}} \def\cu{\kr \begin{picture}(2,2) \put(1,2){\oval(2,2)[cb]}\put(1,0){\line(0,1)1} \end{picture}} \def\hdcu{\kr \begin{picture}(1,2) \put(1,0){\line(-1,2){.5}} \put(.5,2){\oval(1,1)[cb]}\put(.5,1){\line(0,1){.5}} \end{picture}} \def\hddcu{\kr \begin{picture}(1,2) \put(0,0){\line(1,2){.5}} \put(.5,2){\oval(1,1)[cb]}\put(.5,1){\line(0,1){.5}} \end{picture}} \def\hcu{\kr \begin{picture}(1,2) \put(.5,2){\oval(1,1)[cb]}\put(.5,0){\line(0,1){1.5}} \end{picture}} \def\Cu{\kr \begin{picture}(4,2) \put(2,2){\oval(4,2)[cb]}\put(1,0){\line(0,1)1} \end{picture}} \def\ru{\kr \begin{picture}(1,2) \put(0,2){\oval(2,2)[rb]}\put(0,0){\line(0,1)2} \end{picture}} \def\Ru{\kr \begin{picture}(2,2) \put(0,2){\oval(4,2)[rb]}\put(0,0){\line(0,1)2} \end{picture}} \def\k{\kr \begin{picture}(1,2) \put(0,2){\oval(2,1)[rb]} \put(0,0){\oval(2,1)[rt]} \put(0,0){\line(0,1)2} \end{picture}} \def\kk{\kr \begin{picture}(1,2) \put(1,2){\oval(2,1)[lb]} \put(1,0){\oval(2,1)[lt]} \put(1,0){\line(0,1)2} \end{picture}} \def\ro##1{\kr \begin{picture}(2,2) \put(.4,0){\oval(.8,.8)[lt]}\put(1.6,0){\oval(.8,.8)[rt]} \put(1,0.4){\circle{1.2}} \put(0.4,-0.2){\makebox(1.2,1.2)[cc]{$\scriptstyle ##1$}}% \end{picture}} \def\coro##1{\kr \begin{picture}(2,2) \put(.4,2){\oval(.8,.8)[lb]}\put(1.6,2){\oval(.8,.8)[rb]} \put(1,1.6){\circle{1.2}} \put(0.4,1){\makebox(1.2,1.2)[cc]{$\scriptstyle ##1$}}% \end{picture}} \def\Ro##1{\kr \begin{picture}(4,2) \put(1.4,0){\oval(2.8,1.2)[lt]}\put(2.6,0){\oval(2.8,1.2)[rt]} \put(2,.6){\circle{1.2}} \put(1.4,0){\makebox(1.2,1.2)[cc]{$\scriptstyle ##1$}}% \end{picture}} \def\coRo##1{\kr \begin{picture}(4,2) \put(1.4,2){\oval(2.8,1.2)[lb]}\put(2.6,2){\oval(2.8,1.2)[rb]} \put(2,1.4){\circle{1.2}} \put(1.4,.8){\makebox(1.2,1.2)[cc]{$\scriptstyle ##1$}}% \end{picture}} \def\ro{\cal R}} \def\rr{\ro{{\cal R}^-}{\ro{\cal R}} \def\rr{\ro{{\cal R}^-}} \def\ro{{\cal R}^{\tilde{}}}{\ro{{\cal R}^{\tilde{}}}} 
\def\ro{{\cal R}_A}} \def\rra{\ro{{\cal R}^-_A}{\ro{{\cal R}_A}} \def\rra{\ro{{\cal R}^-_A}} \def\ro{{\cal R}_B}} \def\rrb{\ro{{\cal R}^-_B}{\ro{{\cal R}_B}} \def\rrb{\ro{{\cal R}^-_B}} \def\ro{{\cal R}_H}{\ro{{\cal R}_H}} \def\Ro{\cal R}} \def\RR{\Ro{{\cal R}^-}{\Ro{\cal R}} \defI\!\!R{\Ro{{\cal R}^-}} \def\Ro{{\cal R}_A}} \def\RRa{\Ro{{\cal R}^-_A}{\Ro{{\cal R}_A}} \def\RRa{\Ro{{\cal R}^-_A}} \def\Ro{{\cal R}_B}} \def\RRb{\Ro{{\cal R}^-_B}{\Ro{{\cal R}_B}} \def\RRb{\Ro{{\cal R}^-_B}} \def\Ro{{\cal R}_H}{\Ro{{\cal R}_H}} \def{\rm id}{\kr \begin{picture}(0,2)\put(0,0){\line(0,1)2}\end{picture}} \def\obj##1{\settowidth{\textwd}{$##1$}% \raise .2\unitlength\hbox{\kern -.5\textwd $##1$ \kern -.5\textwd \krrr}} \def{\rm Obj}##1{\settowidth{\textwd}{$##1$}% \raise 1.1\unitlength\hbox{\kern -1\textwd $##1$}} \def\hhbox##1{\hbox{\krrrr \def\coev{\kr \begin{picture}(1,1)\put(.5,0){\oval(1,1)[t]}\end{picture}} \def\ev{\kr \begin{picture}(1,1)\put(.5,1){\oval(1,1)[b]}\end{picture}} \def\ld{\kr \begin{picture}(1,1) \put(1,0){\oval(2,2)[lt]}\put(1,0){\line(0,1)1} \end{picture}} \def\Ld{\kr \begin{picture}(2,1) \put(2,0){\oval(4,2)[lt]}\put(2,0){\line(0,1)1} \end{picture}} \def\rd{\kr \begin{picture}(1,1) \put(0,0){\oval(2,2)[rt]}\put(0,0){\line(0,1)1} \end{picture}} \def\Rd{\kr \begin{picture}(2,1) \put(0,0){\oval(4,2)[rt]}\put(0,0){\line(0,1)1} \end{picture}} \def\cd{\kr \begin{picture}(1,1) \put(.5,0){\oval(1,1)[ct]}\put(.5,.5){\line(0,1){.5}} \end{picture}} \def\lu{\kr \begin{picture}(1,1) \put(1,1){\oval(2,2)[lb]}\put(1,0){\line(0,1)1} \end{picture}} \def\Lu{\kr \begin{picture}(2,1) \put(2,1){\oval(4,2)[lb]}\put(2,0){\line(0,1)1} \end{picture}} \def\cu{\kr \begin{picture}(1,1) \put(.5,1){\oval(1,1)[cb]}\put(.5,0){\line(0,1){.5}} \end{picture}} \def\ru{\kr \begin{picture}(1,1) \put(0,1){\oval(2,2)[rb]}\put(0,0){\line(0,1)1} \end{picture}} \def\Ru{\kr \begin{picture}(2,1) \put(0,1){\oval(4,2)[rb]}\put(0,0){\line(0,1)1} \end{picture}} \def\hru{\kr \begin{picture}(.5,1) 
\put(0,1){\oval(1,1)[rb]}\put(0,0){\line(0,1)1} \end{picture}} \def\hlu{\kr \begin{picture}(.5,1) \put(.5,1){\oval(1,1)[lb]}\put(.5,0){\line(0,1)1} \end{picture}} \def\hrd{\kr \begin{picture}(.5,1) \put(0,0){\oval(1,1)[rt]}\put(0,0){\line(0,1)1} \end{picture}} \def\hld{\kr \begin{picture}(.5,1) \put(.5,0){\oval(1,1)[lt]}\put(.5,0){\line(0,1)1} \end{picture}} \def{\rm id}{\kr \begin{picture}(0,1)\put(0,0){\line(0,1)1}\end{picture}} \def\d{\kr \begin{picture}(.5,1)\put(0,1){\line(1,-2){0.5}}\end{picture}} \def\dd{\kr \begin{picture}(.5,1)\put(0,0){\line(1,2){0.5}}\end{picture}} ##1}}#1}\normalbaselines}} \def\object#1{\settowidth{\textwd}{$#1$}% \hbox{% \kern -.5\textwd $#1$ \kern -.5\textwd}} \def\map#1#2#3{\vcenter{\hbox{$#2\;$}} \vcenter{\settowidth{\textwd}{$#1$} \hbox{\kern -.5\textwd $#1$ \kern -.5\textwd} \hbox{\begin{picture}(0,2) \put(0,2){\vector(0,-1)2} \end{picture}} \settowidth{\textwd}{$#3$} \hbox{\kern -.5\textwd $#3$ \kern -.5\textwd}}} \begin{document} \begin{center} \Large { \bf On braided FRT-construction} \end{center} \bigskip {\bf Yuri BESPALOV } \footnote{The research described in this paper was made possible by Grant No U4J200 from the International Science Foundation.} \bigskip \\ {\it Bogolyubov Institute for Theoretical Physics } \par\noindent {\it Metrologichna str., 14-b \ \ Kiev 143, 252143 Ukraine} \par\noindent {\it E-mail: [email protected]} \par\medskip\noindent {\it Received: September, 1995} \bigskip \begin{abstract} \small Fully braided analog of Faddeev-Reshetikhin-Takhtajan construction of quasitriangular bialgebra $A(X,R)$ is proposed. For given pairing $C$ factor-algebra $A(X,R;C)$ is a dual quantum braided group. Corresponding inhomogeneous quantum group is obtained as a result of generalized bosonization. Construction of first order bicovariant differential calculus is proposed. \newline\par\noindent {\bf Key words:} Braided category, (dual) quantum braided group, bosonization. 
\newline\par\noindent {\bf AMS Subject Classifications (1991):} {16W30, 17B37, 18D10, 81R50.} \end{abstract} \section{Introduction and preliminaries} Hopf algebras in braided categories (braided groups) have been extensively studied over the last few years and play an important role in $q$-deformed physics and mathematics \cite{Majid8},\cite{Majid10}. Examples, applications and the basic theory of braided groups have been introduced and developed by Majid. Some similar concepts arose independently in the work of Lyubashenko, inspired by results on conformal field theory. Crossed modules over braided Hopf algebras were introduced and studied in \cite{Bespalov2} and provide a useful technique for the investigation of braided Hopf algebras. In particular, the crossed product of braided Hopf algebras and generalized bosonization for quantum braided groups are defined in \cite{Bespalov2}. The theory of Hopf bimodules in braided categories is developed in \cite{BD1}, building on \cite{Bespalov2}. An application of this theory is an analog of the Woronowicz construction of (bicovariant) differential calculi \cite{Wor}, developed in \cite{BD} for the case of braided Hopf algebras and quantum braided groups. The quantum braided group defined by Majid \cite{Majid7,Majid6} is a natural generalization of Drinfel'd's concept of an (ordinary) quantum group (quasitriangular Hopf algebra) \cite{Drinfel'd1}. Basic examples of coquasitriangular bialgebras $A(R)$ are obtained as a result of the Faddeev-Reshetikhin-Takhtajan construction \cite{FRT} applied to an arbitrary $R$-matrix. An analog of the FRT-construction for anyonic quantum groups is described in \cite{MR}. Majid proposed another construction, of a braided bialgebra $B(R)$, which can be obtained as a transmutation \cite{Majid7} of $A(R)$. The algebra $A(R,Z)$ defined in \cite{H} generalizes both $A(R)$ and $B(R)$.
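Recall, up to conventions, the shape of the classical construction: for an invertible $R$-matrix on $V=k^n$ the FRT-bialgebra $A(R)$ is generated by the entries of a matrix $T=(t^i_j)$ subject to the relations
\begin{displaymath}
R\,T_1T_2=T_2T_1\,R\,,\qquad T_1:=T\otimes 1\,,\enspace T_2:=1\otimes T\,,
\end{displaymath}
with matrix comultiplication $\Delta(t^i_j)=\sum_k t^i_k\otimes t^k_j$ and counit $\epsilon(t^i_j)=\delta^i_j$; a coquasitriangular structure is then determined on generators by $\rho(t^i_j\otimes t^k_l)=R^{ik}_{jl}$ (up to a choice of conventions and normalization).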
In this paper we describe a fully braided analog of the FRT-construction of a coquasitriangular bialgebra $A(X,R)$, where $X$ is an object of an Abelian braided monoidal category ${\cal C}$ and $R:X\otimes X\rightarrow X\otimes X$ is a solution of the braid equation. This construction covers all of those mentioned above and can be considered as a coordinate-free version of \cite{H}. For a given pairing $C$ we define a factor-algebra $A(X,R;C)$ which is a dual quantum braided group. This is an analog of the construction of the quantum simple Lie groups of types $B,C,D$ in \cite{FRT}. Majid's definition of braided vectors ${\rm V}(R)$ is readily reformulated in our more abstract setting. In particular, our algebra ${\rm V}(X,R)$ is also a quantum braided group in the category of comodules over $A(X,R)$. Quantized analogs of inhomogeneous linear groups are studied in many papers (see \cite{AC,D} and references therein). The generalized bosonization construction \cite{Bespalov2} allows us to define a quantum braided group $A(X,R)\mathrel{\,\hbox{\vrule height 4.5pt}\!\times}{\rm V}(X,R)$. We propose a construction of a first order bicovariant differential calculus on a dual quantum braided group $A$ associated with any comodule $X$ over $A$. In our special case $A=A(X,R)$ this is a generalization of the construction of \cite{J}. In the rest of this part we give the necessary preliminary results. The main results of the paper are presented in the second part. \subsection{}\vskip -22pt\hskip \parindent{} We will suppose that ${\cal C}$ is {\em an Abelian} and {\em braided (monoidal) category} with tensor product $\otimes$, unit object $\underline 1$ and braiding $\Psi$ (without loss of generality, by Mac Lane's coherence theorem we will assume that the underlying monoidal category is strict, i.e. the functors $\_\otimes (\_\otimes\_)$ and $(\_\otimes\_)\otimes\_$ coincide and $\underline 1\otimes X=X=X\otimes\underline 1$).
The compatibility conditions between the tensor product and the Abelian structure are the following \cite{BD}: the functors $(-)\otimes X$ and $X\otimes (-)$ are right exact for any object $X$ (this assumption holds if the category is closed); for any epimorphisms $X_i\buildrel{f_i}\over\rightarrow Y_i,\enspace i=1,2$ the diagram \begin{equation} \label{puth-out} \matrix{ X_1\otimes X_2 & \buildrel{X_1\otimes f_2}\over\longrightarrow & X_1\otimes Y_2 \cr {}^{f_1\otimes X_2}\downarrow && \downarrow^{f_1\otimes Y_2} \cr Y_1\otimes X_2 & \buildrel{Y_1\otimes f_2}\over\longrightarrow & Y_1\otimes Y_2} \end{equation} is a push-out (the bottom-right part is a colimit of the top-left part). In this case there exist well-behaved constructions of the factor-algebra (coalgebra, bialgebra, Hopf algebra) by an ideal (coideal, biideal, Hopf ideal). One can define an algebra by generators and relations. By 'the ideal generated by the relations $f_1=f_2:X\rightarrow A\,$' we mean the subobject ${\rm Im}\left(\mu\circ(\mu\otimes A)\circ (A\otimes(f_1-f_2)\otimes A)\right)$ of the algebra $A$. \subsection{}\vskip -22pt\hskip \parindent{} We will work with {\em graded and filtered algebras in $\cal C$}. A ($I\!\!N$-)graded algebra $A$ means a collection of objects $A_k,\;k\in I\!\!N$, multiplications $m_{i,j}:A_i\otimes A_j\rightarrow A_{i+j}$ satisfying the associativity conditions, and a unit $\eta:\underline 1\rightarrow A_0$. A ($I\!\!N$-)filtered algebra $A$ means a collection of objects $A_{(k)},\;k\in I\!\!N$, such that $A_{(i)}$ is a subobject of $A_{(j)}$ if $i<j$, multiplications $m_{(i),(j)}:A_{(i)}\otimes A_{(j)}\rightarrow A_{(i+j)}$ satisfying the conditions of associativity and compatibility with the restrictions to subobjects, and a unit $\eta:\underline 1\rightarrow A_{(0)}$. For any graded algebra $\{A_i\}$ the collection $\{A_{(k)}:=\oplus_{i=0}^kA_i\}$ with the natural multiplications is a filtered algebra.
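The simplest example, used repeatedly below, is the tensor algebra of an object $X$: the collection $\{X^{\otimes n}\}_{n\in I\!\!N}$ with the identifications
\begin{displaymath}
m_{i,j}={\rm id}:\; X^{\otimes i}\otimes X^{\otimes j}=X^{\otimes (i+j)}\,,\qquad \eta={\rm id}:\;\underline 1\rightarrow X^{\otimes 0}=\underline 1
\end{displaymath}
is a graded algebra in ${\cal C}$, and $\{A_{(k)}:=\oplus_{i=0}^k X^{\otimes i}\}$ is the associated filtered algebra.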
As shown in \cite{BD} graded or filtered algebra can be considered as a usual algebra in a certain category of 'graded spaces' i.e. functors from a certain category to category $\cal C$. This category of 'graded spaces' is again an Abelian braided monoidal category. Similarly graded coalgebras, bialgebras, Hopf algebras can be defined. We will say briefly that graded (filtered) algebra lives in a category ${\cal C}$ if its components $A_n$ ($A_{(n)}$) live in ${\cal C}$. See \cite{BD} about more details. \subsection{}\vskip -22pt\hskip \parindent{} We actively use diagrammatic calculus in braided categories \cite{Majid6,Majid8} (see \cite{Bespalov2} about our slight modifications). Morphisms $\Psi$ and $\Psi^{-1}$ are represented by under and over crossing and algebraic information 'flows' along braids and tangles according to functoriality and the coherence theorem for braided categories \cite{JS}: \begin{equation} \Psi=\enspace \vvbox{\hbox{\hx}} \qquad\quad \Psi^{-1}=\enspace \vvbox{\hbox{\hxx}} \qquad\qquad \vvbox{\hbox{\O{f}\step{\rm id}} \hbox{\hx}} \enspace =\enspace \vvbox{\hbox{\hx} \hbox{{\rm id}\step\O{f}}} \qquad\quad \vvbox{\hbox{{\rm id}\step\O{f}} \hbox{\hx}} \enspace =\enspace \vvbox{\hbox{\hx} \hbox{\O{f}\step{\rm id}}} \label{Psi} \end{equation} \begin{figure} $$ \matrix{ \eta_A={}\enspace \vvbox{\hbox{\unit}} &\quad& \mu_A={}\enspace{} \vvbox{\hbox{\hcu}} &\quad& \epsilon ={}\enspace{}\; \vvbox{\hbox{\counit}} &\quad& \Delta ={}\enspace{} \vvbox{\hbox{\hcd}} &\quad& \mu_r:={}\enspace \vvbox{\hbox{\ru}} \cr \hbox{\scriptsize unit} && \hbox{\scriptsize multiplication} && \hbox{\scriptsize counit} && \hbox{\scriptsize comultiplication} && \hbox{\scriptsize right action} } $$ $$ \matrix{ \vvbox{\hbox{\unit\step{\rm id}}\hbox{\hcu}} \enspace ={}\; \vvbox{\hbox{{\rm id}}\hbox{{\rm id}}} \;={}\enspace{} \vvbox{\hbox{{\rm id}\step\unit}\hbox{\hcu}} \quad\enspace \vvbox{\hhbox{\cu\hstep{\rm id}} \hbox{\hstep\hcu}} \enspace ={}\enspace{} \vvbox{\hhbox{{\rm 
id}\hstep\cu} \hbox{\hcu}} &\quad& \vvbox{\hhbox{\cu} \hhbox{\cd}} \enspace ={}\enspace{} \vvbox{\hhbox{\cd\step\cd} \hbox{{\rm id}\step\hx\step{\rm id}} \hhbox{\cu\step\cu}} &\quad\;& \vvbox{\hhbox{\cd} \hbox{\O{S}} \def\SS{\O{S^-}\step{\rm id}} \hhbox{\cu}} \enspace ={}\enspace{} \vvbox{\hbox{\counit} \hbox{\unit}} \enspace ={}\enspace{} \vvbox{\hhbox{\cd} \hbox{{\rm id}\step\O{S}} \def\SS{\O{S^-}} \hhbox{\cu}} \cr \hbox{\scriptsize algebra axioms} && \hbox{\scriptsize bialgebra axiom} && \hbox{\scriptsize antipode axiom} } $$ \caption{ The basic algebraic structures in a braided category } \label{Fig-Main} \end{figure} Fig.\ref{Fig-Main} explains our notations: {\em An algebra} in a monoidal category $\cal C$ is an object $A$ equipped with unit $\eta=\eta_A:\, \underline 1\rightarrow A$ and multiplication $\mu=\mu_A:\, A\otimes A\rightarrow A$ obeying the axioms on Fig.\ref{Fig-Main}. {\em A coalgebra} is object $C$ equipped with counit $\epsilon=\epsilon_A:\,C\rightarrow\underline 1$ and comultiplication $\Delta=\Delta_A:\,A\rightarrow A\otimes A$ obeying the axioms of algebra turned upside-down Finally, \cite{M2},\cite{Majid7} {\em a bialgebra $A$ in a braided category $\cal C$} is an object in $\cal C$ equipped with algebra and coalgebra structures obeying the compatibility axiom on Fig.\ref{Fig-Main} which means that $\Delta_A$ is an algebra homomorphism. {\em A Hopf algebra $A$ in a braided category} $\cal C$ ({\em braided group\/} or {\em braided Hopf algebra}) is a bialgebra in $\cal C$ with antipode $S:\,A\rightarrow A$ which is convolution-inverse to identical map (the last identity on Fig.\ref{Fig-Main}). Axioms for (co-)module $X$ over a (co-)algebra $A$ are obtained by "polarization" of the (co-)algebra axioms. If ${\cal C}$ is a braided category we will denote by $\overline{\cal C}$ the same category with the same tensor product and with inverse braiding $\Psi^{-1}$. For any algebra (resp. 
coalgebra) $A$ in $\cal C$ we will always consider {\em the opposite algebra\/} $(A^{\rm op},\mu_{A^{\rm op}}:=\mu_A\circ\Psi^{-1})$ (resp. {\em the opposite coalgebra\/} $(A_{\rm op},\Delta_{A_{\rm op}}:=\Psi^{-1}\circ\Delta_A)$) as an object of the category $\overline{\cal C}$. In particular, $(A^{\rm op})^{\rm op}=A$. If $A$ is a bialgebra in $\cal C$ then $A^{\rm op}$ and $A_{\rm op}$ are bialgebras in $\overline{\cal C}$ (cf. \cite{Majid8}). The antipode $S^-$ for $A^{\rm op}$ (or, which is the same, for $A_{\rm op}$) is called {\em the skew antipode} and equals $S^{-1}$ if both $S$ and $S^-$ exist. Majid \cite{Majid8} derived from the Hopf algebra axioms that the antipode $S_A$ is a bialgebra morphism $(A^{\rm op})_{\rm op}\rightarrow A$ (or $A\rightarrow (A_{\rm op})^{\rm op}$) in $\cal C$. \subsection{}\vskip -22pt\hskip \parindent{} For objects $X,Y$ of a monoidal category $\cal C$ we will call any morphism \begin{equation} \cup=\cup^{X,Y} :\,X\otimes Y\rightarrow\underline 1\qquad (\;{\rm resp.}\enspace \cap=\cap_{Y,X} :\,\underline 1\rightarrow Y\otimes X\;) \label{Equation-Pairing} \end{equation} {\em a pairing between $X,Y$} (resp. {\em a copairing between $Y,X$}). {\em A duality between $X$ and $Y$} is a pairing and a copairing (\ref{Equation-Pairing}) obeying the identities on Fig.\ref{Fig-Pairing}a. In this case $X$ is called {\em left dual} to $Y$ (resp. $Y$ is called {\em right dual} to $X$) and we will write $X={}^\vee Y, Y=X^\vee$. The dual arrow $f^\vee$ is defined by one of the two equivalent conditions on Fig.\ref{Fig-Pairing}b. In this way a braided monoidal functor $(\_)^\vee:\,{\cal C}\rightarrow{\cal C}^{\rm op}_{\rm op}$ can be defined if $X^\vee$ exists for each $X\in{\rm Obj}({\cal C})$. Without loss of generality, by the coherence theorem we shall assume that $(\_)^\vee$ is a strict monoidal functor: $(X\otimes Y)^\vee=Y^\vee\otimes X^\vee$, $(f\otimes g)^\vee=g^\vee\otimes f^\vee$.
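As an illustration, in the familiar unbraided case ${\cal C}={\rm Vect}_k$ with $Y$ finite dimensional one may take ${}^\vee Y=Y^*$ with
\begin{displaymath}
\cup^{Y^*,Y}(f\otimes y)=f(y)\,,\qquad \cap_{Y,Y^*}(1)=\sum_i y_i\otimes y^i
\end{displaymath}
for dual bases $\{y_i\}$ of $Y$ and $\{y^i\}$ of $Y^*$; the identities of Fig.\ref{Fig-Pairing}a become the usual 'snake' relations, and the dual arrow $f^\vee$ is the ordinary transpose of $f$.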
Pairing $\rho$ between $X$ and $Y$ extends to pairing between $X^{\otimes n}$ and $Y^{\otimes n}$ defined by the diagram on Fig.\ref{Fig-Pairing}c. We say that arrows $f:\,X^{\otimes m}\rightarrow X^{\otimes n}$ and $g:\,Y^{\otimes n}\rightarrow Y^{\otimes m}$ are $\rho$-{\em dual} if $\rho\circ(f\otimes Y^{\otimes n})=\rho\circ(X^{\otimes n}\otimes g)\,.$ \begin{figure} $$ \matrix{ \matrix{\object{X}\Step\cr \vvbox{\hbox{{\rm id}\step\hcoev} \hbox{\hev\step{\rm id}}}\cr \Step\object{X}} \enspace =\enspace \matrix{\object{X}\cr \vvbox{\hbox{{\rm id}}\hbox{{\rm id}}}\cr \object{X}} \qquad\quad \matrix{\Step\object{Y}\cr \vvbox{\hbox{\hcoev\step{\rm id}} \hbox{{\rm id}\step\hev}}\cr \object{Y}\Step} \enspace =\enspace \matrix{\object{Y}\cr \vvbox{\hbox{{\rm id}}\hbox{{\rm id}}}\cr \object{Y}} &\qquad\qquad& \vvbox{\hbox{{\rm id}\step\O{f^\vee}} \hbox{\hev}} \enspace =\enspace \vvbox{\hbox{\O{f}\step{\rm id}} \hbox{\hev}} \quad\Leftrightarrow\quad f^\vee =\enspace \vvbox{\hbox{\hcoev\step{\rm id}} \hbox{{\rm id}\step\O{f}\step{\rm id}} \hbox{{\rm id}\step\hev}} \cr \hbox{\scriptsize a) duality between $X$ and $Y$} && \hbox{\scriptsize b) dual arrow} } $$ $$ \matrix{ \matrix{\object{X^{\otimes n}}\Step\object{Y^{\otimes n}}\cr \vvbox{\hbox{{\rm id}\Step{\rm id}} \hbox{\coro{\rho}}}} \enspace=\enspace \matrix{\object{X\!\dots\! X}\step\Step\object{Y\!\dots\! 
Y}\cr \vvbox{\hbox{{\rm id}\step\coro{\rho}\hhstep\hhstep\obj{\vdots}\step\step{\rm id}} \hbox{\coRo{\rho}}}} &\qquad\quad& \vvbox{\hhbox{\step\cd\step\cd\step} \hbox{\dd\step\hx\step\d} \hbox{\coro{\rho}\step\coro{\rho^\prime}}} &\qquad& \vvbox{\hhbox{\cd\Step\cd} \hbox{{\rm id}\step\coro{\rho^\prime}\step{\rm id}} \hbox{\coRo{\rho}}} \cr \hbox{\scriptsize c) $\rho$-dual arrows} && \hbox{\scriptsize d) \ $\rho\cdot\rho^\prime$} && \hbox{\scriptsize e) \ $\rho\,\widetilde\cdot\,\rho^\prime$} } $$ \caption{Duals and pairings.} \label{Fig-Pairing} \end{figure} Let $A$ and $H$ be bialgebras in a braided category $\cal C$. A morphism $\rho:\,A\otimes H\rightarrow\underline 1$ is called {\em a bialgebra pairing} if the algebra (resp. coalgebra) structure on $A$ and the coalgebra (resp. algebra) structure on $H$ are $\rho$-dual. The convolution product '$\cdot$' and 'the second' product '$\widetilde\cdot$' for $\rho,\rho^\prime\in{\rm Hom}_{\cal C}(X\otimes Y,\underline 1)$ are defined on Fig.\ref{Fig-Pairing}d,e. We denote by $\rho^-$, $\rho^{\sim{}}$ the corresponding inverses of $\rho$. Let $\overline\rho:=\rho^-\circ\Psi^{-1}$. If $A$ or $H$ has a (skew) antipode then $\rho^{\sim{}}$ (resp. $\rho^-$) exists and \begin{equation} \rho\circ(S_A\otimes H)=\rho^{\sim{}}=\rho\circ(A\otimes S_H) \qquad\enspace \rho\circ(S_A^-\otimes H)=\rho^-=\rho\circ(A\otimes S_H^-) \end{equation} If $\rho^-$ or $\rho^{\sim{}}$ exists then $\rho$-duality between the multiplications and comultiplications implies $\rho$-duality between the units and counits. If $(A,H,\rho)$ is a bialgebra pairing in $\cal C$ then $(A_{\rm op},H_{\rm op},\rho^-)$, $(A^{\rm op},H^{\rm op},\rho^{\sim{}})$, $(H^{\rm op},A^{\rm op},\overline\rho)$ are bialgebra pairings in $\overline{\cal C}$. \subsection{}\vskip -22pt\hskip \parindent{} Quantum braided groups in a braided category were introduced in \cite{Majid7}, where the basic theory was developed.
The following are input-output reversed variant of definitions from \cite{Majid7} in a slightly modified form \cite{Bespalov2} suitable for our use. \begin{figure} $$ \matrix{ \vvbox{\hbox{\cd\step\cd} \hbox{{\rm id}\Step\hx\Step{\rm id}} \hbox{\cu\hstep\obj{\overline\mu^{\rm op}}\hstep\coro{\rho}}} \enspace=\enspace \vvbox{\hbox{\cd\step\cd} \hbox{{\rm id}\Step\hx\Step{\rm id}} \hbox{\coro{\rho}\step\cu} }\cr \hbox{\scriptsize a) The axiom for a dual quantum braided group $(A,\overline A,{\rho})$. } } $$ $$ \matrix{ \matrix{ \object{A}\step\object{X}\step \cr \vvbox{ \hbox{{\rm id}\step\rd} \hbox{\hx\step{\rm id}} \hhbox{\krl{\rm id}\step\cu\hstep\obj{\mu}} }\cr \object{X}\step\hstep\object{A}\hstep} \enspace=\enspace \matrix{ \object{A}\step\object{X}\step \cr \vvbox{ \hbox{{\rm id}\step\rd} \hbox{\hxx\step{\rm id}} \hhbox{\krl{\rm id}\step\cu\hstep\obj{\overline\mu^{\rm op}}} }\cr \object{X}\step\hstep\object{A}\hstep } &\qquad\qquad& \Psi=\enspace \vvbox{ \hbox{{\rm id}\step\rd} \hbox{\hx\step\d} \hhbox{\krl{\rm id}\step\hrd\step\hstep\d} \hbox{{\rm id}\step{\rm id}\hstep\coro{\rho}} } \qquad\quad \Psi^{-1}=\enspace \vvbox{ \hbox{{\rm id}\step\rd} \hbox{\hxx\step\d} \hhbox{\krl{\rm id}\step\hrd\step\hstep\d} \hbox{{\rm id}\step{\rm id}\hstep\coro{\overline{\rho}}} } \cr \hbox{\scriptsize b) the condition on comodules from ${\cal C}^\cO{A}$} && \hbox{\scriptsize c) braiding in ${\cal C}^\cO{A}$} } $$ \caption{} \label{Fig-QBG} \end{figure} {\em A coquasitriangular bialgebra} in a braided category $\cal C$ is a pair of bialgebras $A$ in $\cal C$ and $\overline A$ in $\overline{\cal C}$ with the same underlying coalgebra ($\mu$ and $\overline\mu$ are multiplications in $A$ and $\overline A$ respectively), and convolution invertible bialgebra pairing ({\em coquasitriangular structure}) ${\rho}:\, \overline A^{\rm op}\otimes A \rightarrow\underline 1$, satisfying the condition on Fig.\ref{Fig-QBG}a. 
(It follows directly from the definition that the units for $A$ and for $\overline A$ are the same.) {\em A dual quantum braided group} or {\em a coquasitriangular Hopf algebra} in $\cal C$ is a coquasitriangular bialgebra such that $A$ and $\overline A$ have antipodes $S$ and $\overline S$ respectively. (In this case ${\rho}^-= \rho\circ(\overline S\otimes A)$ and $\rho^{\sim{}}=\rho\circ(A\otimes S)$.) \par In particular, for any bialgebra (braided group) $A$ the pair $(A,A^{\rm op})$ is a coquasitriangular bialgebra (dual quantum braided group) with the trivial coquasitriangular structure $\rho=\epsilon\otimes\epsilon$. The category ${\cal C}^\cO{A,\overline A}$ is a full subcategory of the category ${\cal C}^A$ of right comodules with objects $X$ satisfying the first identity on Fig.\ref{Fig-QBG}b. ${\cal C}^\cO{A,\overline A}$ is a monoidal subcategory of ${\cal C}^A$ and braided with $\Psi$ and $\Psi^{-1}$ shown on Fig.\ref{Fig-QBG}c. We use the brief notation ${\cal C}^\cO{A}$ for ${\cal C}^\cO{A,A^{\rm op}}$. \section{On braided FRT-construction} \subsection{}\vskip -22pt\hskip \parindent{} The canonical epimorphism $B_n\rightarrow S_n$ of the braid group onto the permutation group admits a section $S_n\buildrel{\widehat{}}\over\rightarrow B_n$ which is identical on generators and uniquely determined by the condition that $\widehat{\sigma_1\sigma_2}=\widehat{\sigma_1}\widehat{\sigma_2}$ if $\ell(\sigma_1\sigma_2)=\ell(\sigma_1)+\ell(\sigma_2)$, where $\ell(\sigma)$ is the length (of the minimal decomposition) of $\sigma$. For any object $X$ there is an obvious action of the braid group $B_n$ on $X^{\otimes n}$. We will use the same notation $\widehat\sigma$ for the image of the braid $\widehat\sigma\in B_n$ in ${\rm End}_{\cal C}(X^{\otimes n})$. For $k=1,\dots,n$ let us denote by $S^k_n\subset S_n$ the subset of ${n!}\over{k!(n-k)!}$ shuffle permutations, which preserve the order of any two elements $i$ and $j$ if $i,j\le k$ or $i,j>k$.
Majid in \cite{Majid11} defines the braided binomial coefficient as a sum of ${n!}\over{k!(n-k)!}$ braids in ${\rm End}_{\cal C}(X^{\otimes n})$ and, in particular, the braided factorial as a sum of $n!$ braids: \begin{equation} \left[{n\atop k}; X\right] := \sum_{\sigma^{-1}\in S_n^k}\widehat\sigma, \qquad [n;X]!:=\sum_{\sigma\in S_n}\widehat\sigma. \end{equation} \subsection{}\vskip -22pt\hskip \parindent{} \label{tensor-Hopf} For any object $X$ of a braided category $\cal C$ the tensor algebra $\T{X}=\{ X^{\otimes n}\}_{n\in{\bf N}}$ is a graded Hopf algebra with the tensor product as multiplication, comultiplication \begin{equation} \Delta_{m,n}:=\left[{{m+n}\atop {m}};X\right]:\; X^{\otimes (m+n)}\rightarrow X^{\otimes m}\otimes X^{\otimes n} \end{equation} and antipode \begin{equation} S\vert_{X^{\otimes n}}:= (-1)^n\circ\widehat{\rho_n}:\; X^{\otimes n}\rightarrow X^{\otimes n}, \end{equation} where $S_n\ni\rho_n:(1,2,\dots ,n)\mapsto(n,n-1,\dots ,1)$ and $\widehat{\rho_n}$ is the Garside element of $B_n$. The bialgebra axiom turns into the Newton-Majid binomial formula \cite{Majid11}: \begin{equation} (\underline 1\otimes x+x\otimes\underline 1)^n= (\Delta\circ x)^n= \sum^n_{k=0}\left[{n\atop k};X\right]\circ (x^{\otimes k}\otimes x^{\otimes (n-k)}) \end{equation} for any $x:\; Z\rightarrow X$. One can define two graded ideals in $\T{X}$: $I=\{ I_n\subset X^{\otimes n}\}$ is the ideal generated by its 'quadratic part' $I_2:={\rm ker}\,[2;X]!={\rm ker}(\Psi_{X,X}+{\rm id})$. And let $I^\bullet=\{ I^\bullet_n:={\rm ker}\,[n; X]!\}$. \par $I$ is nonzero iff $-1$ is an eigenvalue of $\Psi_{X,X}$. If we suppose that the multiplicity of this eigenvalue is $1$, i.e. we can choose a minimal polynomial of $\Psi_{X,X}$ in the form $p(t)=p_{-1}(t)(t+1)$ with $p_{-1}(-1)=1$, then $P_{-1}:=p_{-1}(\Psi_{X,X})$ is an idempotent, and in this case $I_2={\rm ker}\,[2;X]!={\rm im}\,P_{-1}$.
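In the lowest nontrivial case $n=2$ these operators can be written out explicitly: $S_2=\{e,\sigma_1\}$ and $S_2^1=S_2$, so
\begin{displaymath}
\left[{2\atop 1};X\right]=[2;X]!={\rm id}_{X\otimes X}+\Psi_{X,X}\,,
\end{displaymath}
and the binomial formula for $n=2$ reads $(\underline 1\otimes x+x\otimes\underline 1)^2= x^{\otimes 2}\otimes\underline 1+({\rm id}+\Psi_{X,X})\circ(x\otimes x)+\underline 1\otimes x^{\otimes 2}$, in agreement with the description of the quadratic part $I_2={\rm ker}(\Psi_{X,X}+{\rm id})$.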
\par It is easy to see that $I\subset I^\bullet$. The following example from \cite{BD} shows that, in general, the ideal $I^\bullet$ has 'generators' of power more than $2$. \begin{example} Let $\T{X}$ be the algebra generated by the one-dimensional vector space $X=kx$ over a field $k$ with the braiding $\Psi(x\otimes x)=q(x\otimes x)$, $q\in k$. In this case the braided integers are the 'ordinary' $q$-integers: $[n]_q:=1+q+\dots +q^{n-1}\,.$ For $q$ a primitive root of $1$ of order $n>2$ one has $I=0$ but $I^\bullet=(x^{\otimes n})$. \end{example} \begin{proposition} Both $I$ and $I^\bullet$ are Hopf ideals in $\T{X}$. We denote by ${\rm V}(X)$ and ${\rm V}^\bullet(X)$ the corresponding factor-algebras. \end{proposition} For the special case of the category ${\cal C}$ built from an arbitrary $R$-matrix, ${\rm V}(X)$ is the algebra of functions on a 'quantum vector space'. Majid discovered a Hopf algebra structure on this object (cf. \cite{Majid11} and references therein). \subsection{}\vskip -22pt\hskip \parindent{} It is well known that any solution $R=R_{X,X}:X\otimes X\rightarrow X\otimes X\,,\;X\in{\rm Obj}({\cal C})$ of the braid equation \begin{equation} (X\otimes R)\circ (R\otimes X)\circ (X\otimes R)= (R\otimes X)\circ (X\otimes R)\circ (R\otimes X) \end{equation} with certain invertibility conditions defines a braided structure on the monoidal subcategory of $\cal C$ generated by the object $X$ and its dual ${}^\vee\!X$, as described in what follows. The morphisms $R_{X^{\otimes m},X^{\otimes n}}$ are uniquely defined by the hexagon identities: \begin{equation} \label{Eq-Hex-id} {R}_{Y\!\otimes\!Y^\prime,Z}= ({R}_{Y,Z}\otimes Y^\prime)\circ (Y \otimes{R}_{Y^\prime,Z})\,, \quad {R}_{Y,Z\!\otimes\!Z^\prime}= (Z\otimes{R}_{Y,Z^\prime})\circ(R_{Y,Z}\otimes Z^\prime)\,, \end{equation} where $Y,Y^\prime,Z,Z^\prime$ are powers of $X$. And let $R_{{}^\vee\!X^{\otimes m},{}^\vee\!X^{\otimes n}}:= {}^\vee\left(R_{X^{\otimes m},X^{\otimes n}}\right)$.
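For ${\cal C}={\rm Vect}_k$ the braid equation above is just the braid form of the Yang-Baxter equation: writing $R_{12}:=R\otimes X$ and $R_{23}:=X\otimes R$ it reads
\begin{displaymath}
R_{23}\circ R_{12}\circ R_{23}=R_{12}\circ R_{23}\circ R_{12}\,,
\end{displaymath}
and $R$ satisfies it iff $\tau\circ R$ (with $\tau$ the tensor flip) satisfies the quantum Yang-Baxter equation in its usual form; this is the shape in which $R$-matrices are given in \cite{FRT}.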
We also suppose that there exists $R_{X,{}^\vee\!X}:X\otimes{}^\vee\!X\rightarrow {}^\vee\!X\otimes X$ inverse to $(\cup\otimes X\otimes{}^\vee\!X)\circ ({}^\vee\!X\otimes R_{X,X}\otimes {}^\vee\!X)\circ ({}^\vee\!X\otimes X\otimes \cap)$. Let $R_{X^{\otimes m},{}^\vee\!X^{\otimes n}}$ be uniquely defined by the hexagon identities (\ref{Eq-Hex-id}) and $R_{{}^\vee\!X^{\otimes m},X^{\otimes n}}$ be defined in the dual way. ${\cal C}(X,{}^\vee\!X;R)$ is the subcategory of ${\cal C}$ whose objects are tensor products of $X$ and ${}^\vee\!X$ and whose morphisms $f:Y\rightarrow Z$ are those from ${\cal C}$ which 'flow' along the braids labeled by $R_{X,\_}$ and $R_{\_,X}$, i.e. \begin{equation} R_{X,Z}\circ(X\otimes f)=(f\otimes X)\circ R_{X,Y}\,,\quad R_{Z,X}\circ(f\otimes X)=(X\otimes f)\circ R_{Y,X}\,. \end{equation} The analog $A(X,R)$ of the FRT-bialgebra \cite{FRT} can be obtained as a result of a certain type of reconstruction for the monoidal functor ${\cal C}(X,{}^\vee\!X;R)\rightarrow{\cal C}$. \subsection{}\vskip -22pt\hskip \parindent{} As the first step, the following lemmas allow us to define a bialgebra $A(X)$. \begin{lemma} Let $X$ be an object of $\cal C$ with (left) dual ${}^\vee\!X$. Then ${}^\vee\!X\!\otimes\! X$ can be equipped with a coalgebra structure \begin{equation} \Delta_{{}^\vee\!X\!\otimes\! X}:= {}^\vee\!X\otimes\cap_{X,{}^\vee\!X}\otimes X\,,\quad \epsilon_{{}^\vee\!X\!\otimes\! X}:=\cup^{{}^\vee\!X,X}\,. \end{equation} $X$ (resp. ${}^\vee\!X$) becomes a right (resp. left) comodule over ${}^\vee\!X\!\otimes\! X$ with coaction \begin{equation} \label{Delta-r} \Delta^X_r:=\cap_{X,{}^\vee\!X}\otimes X\,,\quad \Delta^{{}^\vee\!X}_\ell:={}^\vee\!X\otimes\cap_{X,{}^\vee\!X}\,. \end{equation} \end{lemma} \begin{lemma} Let $(A,\Delta_A)$, $(B,\Delta_B)$ be coalgebras in $\cal C$ and $(X,\Delta^X_r)$, $(Y,\Delta^Y_r)$ right comodules over $A$ and $B$ respectively. Then $A\!\otimes\!
B$ is a coalgebra with comultiplication \begin{equation} \Delta_{A\otimes B}:= (A\otimes\Psi_{A,B}\otimes B)\circ(\Delta_A\otimes\Delta_B) \end{equation} and $X\!\otimes\! Y$ is a right $A\!\otimes\! B$-comodule with coaction \begin{equation} \Delta^{X\otimes Y}_r:= (X\otimes\Psi_{A,Y}\otimes B)\circ(\Delta^X_r\otimes\Delta^Y_r) \end{equation} \end{lemma} So with any two objects $X$ and $Y$ which have left duals we can associate the following coalgebras: the tensor product of two coalgebras $({}^\vee\!X\otimes X)\otimes({}^\vee Y\otimes Y)$ and the coalgebra $({}^\vee Y\otimes{}^\vee\!X)\otimes (X\otimes Y)$ related with the object $X\otimes Y$. \begin{lemma} The morphism \begin{equation} \mu_{{}^\vee\!X\otimes X,{}^\vee\!Y\otimes Y}:= \Psi_{{}^\vee\!X\otimes X,{}^\vee\!Y}\otimes Y:\, ({}^\vee\!X\otimes X) \otimes({}^\vee\!Y\otimes Y) \rightarrow ({}^\vee\!Y\otimes{}^\vee\!X)\otimes (X\otimes Y) \end{equation} is a coalgebra isomorphism and intertwines the coactions of these coalgebras on $X\otimes Y$. \par For objects $X,Y,Z$ with left duals the following {\em associativity condition} holds: \begin{equation} \mu_{X\otimes Y,Z}\circ (\mu_{X,Y}\otimes Z)= \mu_{X,Y\otimes Z}\circ (X\otimes \mu_{Y,Z}) \end{equation} \end{lemma} \begin{proposition} $A(X):= \{A_n(X)= {}^\vee\!X^{\otimes n}\otimes X^{\otimes n} \}_{n\in Z\!\!\!Z_{\ge 0}}$ is a graded bialgebra with the following (co)multiplications: \begin{eqnarray} &&\mu_{m,n}:= \mu_{{}^\vee\!X^{\otimes m}\otimes X^{\otimes m}, {}^\vee\!X^{\otimes n}\otimes X^{\otimes n}}\,,\nonumber\\ &&\Delta _n:=\Delta_{{}^\vee\!X^{\otimes n}\otimes X^{\otimes n}}: {}^\vee\!X^{\otimes n}\otimes X^{\otimes n}\rightarrow ({}^\vee\!X^{\otimes n}\otimes X^{\otimes n})\otimes ({}^\vee\!X^{\otimes n}\otimes X^{\otimes n})\,. \nonumber \end{eqnarray} The graded tensor algebra $\T{X}$ (resp. $\T{{}^\vee\!X}$) is a right (resp. left) $A(X)$-comodule algebra and coalgebra. \end{proposition} One can carry out the same construction in the category $\overline {\cal C}$.
The result is a bialgebra $\overline A(X)$ with the same underlying coalgebra but with the new multiplication $\overline\mu$ in which $\Psi$ is replaced by $\Psi^{-1}$. The corresponding Hopf algebras $\overline\T{X}$ (resp. $\overline{\rm V}(X), \overline{\rm V}^\bullet(X)$) coincide with $\T{X}_{\rm op}$ (resp. ${\rm V}(X)_{\rm op}, {\rm V}^\bullet(X)_{\rm op}$). \begin{figure} $$ \matrix{ \vvbox{\hbox{\step\coev\Step{\rm id}\step{\rm id}} \hbox{\dd\step\hcoev\d\step{\rm id}\step{\rm id}} \hbox{\coro{C}\step{\rm id}\step{\rm id}\step{\rm id}\step{\rm id}}}\cr \Step\step\object{{}^\vee\!X}\step\object{{}^\vee\!X}\step\object{X}\step \object{X}} \enspace=\enspace \vvbox{\hbox{{\rm id}\Step{\rm id}} \hbox{\coro{C}}} \qquad \matrix{ \vvbox{\hbox{{\rm id}\step{\rm id}\Step\coev} \hbox{{\rm id}\step{\rm id}\step\dd\hcoev\step\d} \hbox{{\rm id}\step{\rm id}\step{\rm id}\step{\rm id}\step\coro{{}^\vee C}}}\cr \object{{}^\vee\!X}\step\object{{}^\vee\!X}\step\object{X}\step\object{X}\step \Step} \enspace=\enspace \vvbox{\hbox{{\rm id}\Step{\rm id}} \hbox{\coro{{}^\vee C}}} $$ \caption{Relations for $A(X;C)$} \label{Figure-C-algebra} \end{figure} \subsection{}\vskip -22pt\hskip \parindent{} Let, moreover, a duality $C$ of $X$ with itself be given. Then for each $n$ the pairing and copairing \begin{equation} C=C^{X^{\otimes n},X^{\otimes n}}: X^{\otimes n}\otimes X^{\otimes n}\rightarrow \underline{1},\qquad C=C_{X^{\otimes n},X^{\otimes n}}: \underline{1}\rightarrow X^{\otimes n}\otimes X^{\otimes n}\,, \label{Equation-Duality-C} \end{equation} described by the diagram on Fig.\ref{Fig-Pairing}b and by the input-output reversed diagram, define a duality of $X^{\otimes n}$ with itself. Let us define the pairing ${}^{\vee}C:{}^\vee\!X^{\otimes n}\otimes {}^\vee\!X^{\otimes n}\rightarrow \underline{1}$ as the morphism left dual to the copairing $C_{X^{\otimes n},X^{\otimes n}}$.
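In the basic example ${\cal C}={\rm Vect}_k$ with $X$ finite dimensional, a duality of $X$ with itself amounts to a nondegenerate bilinear form: in a basis $\{x_i\}$,
\begin{displaymath}
\cup(x_i\otimes x_j)=C_{ij}\,,\qquad \cap(1)=\sum_{i,j}C^{ij}\,x_i\otimes x_j\,,\qquad \sum_k C_{ik}C^{kj}=\delta_i^{\,j}\,,
\end{displaymath}
and it is such a form that enters the construction of the quantum groups of types $B,C,D$ in \cite{FRT} mentioned in the introduction.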
We denote by $A(X;C)$ the filtered algebra which is the factor-algebra of $A(X)$ by the ideal 'generated by the relations' on Fig.\ref{Figure-C-algebra}, which means that the pairings $C$ and ${}^\vee C$ are invariant with respect to the coactions of $A(X)$ on $\T{X}$ and $\T{{}^\vee\!X}$ respectively. And let $\T{X;C}$ be the factor-algebra of $\T{X}$ by the relations $C^{{}^\vee\!X\otimes X}=\underline 1$. \begin{proposition} $A(X;C)$ is a braided group with antipode and its inverse given by the diagram on Fig.\ref{Figure-CC-algebra}a. $\T{X;C}$ is a right comodule algebra over $A(X;C)$. \end{proposition} \begin{figure} $$ \matrix{ \matrix{ \object{{}^\vee\!X^{\otimes n}}\Step\Step\object{X^{\otimes n}}\cr \vvbox{\hbox{{\rm id}\step\ro{C}\step{\rm id}} \hbox{\hev\hcoev\step\hx} \hbox{\dd\step\hx\step{\rm id}} \hbox{\coro{C}\step{\rm id}\step{\rm id}}}} \qquad\quad \matrix{\object{{}^\vee X^{\otimes n}}\Step\object{X^{\otimes n}} \Step\Step\Step\cr \vvbox{\hbox{{\rm id}\step{\rm id}\Step\hcoev\step\ro{C}} \hbox{\d\coro{C}\step{\rm id}\step\x} \hbox{\step\d\Step\hx\Step{\rm id}} \hbox{\Step\ev\step{\rm id}\Step{\rm id}}}\cr \step\Step\Step\object{{}^\vee X^{\otimes n}}\Step\object{X^{\otimes n}}} &\enspace& \matrix{\object{{}^\vee\!X^{\otimes m}}\Step\object{X^{\otimes m}} \Step\object{{}^\vee\!X^{\otimes n}}\Step\object{X^{\otimes n}}\cr \vvbox{\hbox{{\rm id}\step\hx\step{\rm id}} \hbox{{\rm id}\step{\rm id}\step\hx\obj{R}} \hbox{\d\hev\dd} \hbox{\step\hev}}} \cr \begin{picture}(.5,.5)\end{picture}&&\cr \hbox{\scriptsize a) antipode and its inverse for $A(X;C)$}&& \hbox{\scriptsize b) Coquasitriangular structure on $A(X,R)$ } } $$ \caption{} \label{Figure-CC-algebra} \end{figure} Let $I=\{ I_n\subset{}^\vee\!X^{\otimes n}\otimes X^{\otimes n}\}$ be the graded ideal of the algebra $A(X)$ generated by the relations \begin{equation} \label{Eq-FRT-rel} {}^\vee\!R\otimes{\rm id}_{ X^{\otimes 2}}- {\rm id}_{{}^\vee\!X^{\otimes 2}}\otimes R:\, {}^\vee\!X^{\otimes 2}\otimes X^{\otimes 2}\rightarrow
{}^\vee\!X^{\otimes 2}\otimes X^{\otimes 2} \end{equation} or explicitly \begin{equation} I_n=\sum_{i=1}^{n-1}{\rm Im}\left({}^\vee\!X^{\otimes (i-1)}\otimes {}^\vee\!R\otimes {}^\vee\!X^{\otimes (n-i-1)}\otimes X^{\otimes n}- {}^\vee\!X^{\otimes n}\otimes X^{\otimes (n-i-1)}\otimes R\otimes X^{\otimes (i-1)} \right) \end{equation} \begin{lemma} \label{Lemma-AXR} The ideal $I$ described above is a biideal of $A(X)$. We denote by $A(X,R)$ the corresponding factor-bialgebra. \end{lemma} Let, moreover, $C$ be a morphism in ${\cal C}(X,{}^\vee\!X;R)$, i.e. the pairing $C$ 'flows' along braids labeled by $R$. Then we define a bialgebra $A(X,R;C)$ which is the factor-algebra of $A(X)$ by the ideal generated by both (\ref{Eq-FRT-rel}) and the relations given on Fig.\ref{Figure-C-algebra}. Similarly, one can define the factor-algebras $\overline{A}(X,R^{-1})$ and $\overline{A}(X,R^{-1};C)$ of the bialgebra $\overline{A}(X)$ in the category $\overline{\cal C}$. \begin{lemma} The family of pairings $\rho_{m,n}:({}^\vee\!X^{\otimes m}\otimes X^{\otimes m})\otimes ({}^\vee\!X^{\otimes n}\otimes X^{\otimes n})\rightarrow \underline{1}$ described by the diagram on Fig.\ref{Figure-CC-algebra}b defines bialgebra pairings \begin{eqnarray} &&\rho_{A(X,R)}:\overline{A}(X,R^{-1})^{\rm op}\otimes A(X,R)\rightarrow \underline{1}\,,\nonumber\\ &&\rho_{A(X,R;C)}:\overline{A}(X,R^{-1};C)^{\rm op}\otimes A(X,R;C) \rightarrow \underline{1}\,. \end{eqnarray} \end{lemma} \begin{theorem} $(A(X,R),\overline{A}(X,R^{-1}),\rho)$ is a coquasitriangular bialgebra and its factor-algebra $(A(X,R;C),\overline{A}(X,R^{-1};C),\rho)$ is a dual quantum braided group in ${\cal C}$ (or, more precisely, in a certain category of 'graded spaces' over ${\cal C}$).
'Second inverse' $\rho^\sim=\{\rho^\sim_{m,n}\}$ to quasitriangular structure $\rho$ takes the form: \begin{displaymath} \rho^\sim_{m,n}:=(\cup\otimes\cup)\circ ({}^\vee\!X^{\otimes m}\otimes \Psi^{-1}_{X^{\otimes m},{}^\vee\!X^{\otimes n}}\otimes X^{\otimes n} ) \end{displaymath} \end{theorem} \begin{lemma} $X^{\otimes n}$ equipped with coaction (\ref{Delta-r}) is an object of ${\cal C}^{\cO{A(X,R),\overline A(X,R^{-1})}}$. Braiding $\Psi_{X^{\otimes m},X^{\otimes n}}$ in this category equals to $R_{X^{\otimes m},X^{\otimes n}}$. Corresponding right action (defined by the first diagram on Fig.\ref{Fig-Jurco}b) \begin{displaymath} \mu^{X^{\otimes n}}_r= \{\mu^{X^{\otimes n}}_{r,m}: X^{\otimes n}\otimes A(X,R)_m\rightarrow X^{\otimes n}\} \end{displaymath} takes the form \begin{equation} \mu^{X^{\otimes n}}_{r,m}= (\cup^{{}^\vee\!X^{\otimes m},X^{\otimes m}}\otimes {\rm id}_{X^{\otimes n}}) \circ ({\rm id}_{{}^\vee\!X^{\otimes m}}\otimes R_{X^{\otimes n},X^{\otimes m}})\circ (\Psi_{X^{\otimes n},{}^\vee\!X^{\otimes m}}\otimes{\rm id}_{X^{\otimes m}})\,. \end{equation} \end{lemma} One can carry out constructions from \ref{tensor-Hopf} for $X\in{\rm Obj}({\cal C}^{\cO{A(X,R),\overline A(X,R^{-1})}})$ to get the Hopf algebras $\T{X,R},{\rm V}(X,R),{\rm V}^\bullet(X,R)$ in ${\cal C}^{\cO{A(X,R),\overline A(X,R^{-1})}}$, where letter '$R$' is added to specify a category. A pair $({\rm V}(X,R),{\rm V}(X,R)^{\rm op})$ is a quantum braided group in ${\cal C}^{\cO{A(X,R),\overline A(X,R^{-1})}}$ with the trivial coquasitriangular structure $\epsilon^{{\rm V}(X,R)}\otimes\epsilon_{{\rm V}(X,R)}$. 
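For orientation, it may help to record what the ideal (\ref{Eq-FRT-rel}) looks like in the classical, unbraided situation where $X$ is a finite-dimensional vector space with basis $\{e_i\}$ and $R$ has matrix entries $R^{ij}_{kl}$. Writing $t^i_j$ for the matrix of generators of $A(X,R)$, the relations reduce to the familiar FRT ('RTT') relations; the braided diagrams above generalize exactly this picture (conventions on index placement may differ by a transpose):

```latex
% Classical specialization of the ideal (\ref{Eq-FRT-rel}):
% R T_1 T_2 = T_2 T_1 R, i.e. in components
\begin{equation*}
  R^{ij}_{kl}\, t^k_m\, t^l_n \;=\; t^j_l\, t^i_k\, R^{kl}_{mn}.
\end{equation*}
```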
Generalized bosonization theorem \cite{Bespalov2} allows us to define a quantum braided group $(A(X,R)\mathrel{\,\hbox{\vrule height 4.5pt}\!\times}{\rm V}(X,R), \overline A(X,R^{-1})\mathrel{\,\hbox{\vrule height 4.5pt}\!\times}{\rm V}(X,R)^{\rm op})$ with coquasitriangular structure $({\rm id}_{A(X,R)}\otimes\epsilon_{{\rm V}(X,R)}\otimes {\rm id}_{A(X,R)}\otimes\epsilon_{{\rm V}(X,R)})\circ\rho_{A(X,R)}$ in ${\cal C}$ which is an analog of algebra of functions on inhomogeneous linear group. The same construction performed for algebras $\overline A(X,R^{-1})$ and $\overline{\rm V}(X,R^{-1})={\rm V}(X,R)_{\rm op}$ produces quantum braided group in $\overline{\cal C}$. But in this way we obtain another corresponding quantum braided group in ${\cal C}$. \begin{figure} $$ \matrix{ \matrix{ \vvbox{\hbox{\coev} \hhbox{\krl{\rm id}\step\ld}}\cr \object{X}\step\object{A}\step\object{{}^\vee X}} \enspace=\enspace \matrix{ \vvbox{\hbox{\coev} \hhbox{\krl\rd\step{\rm id}}}\cr \object{X}\step\object{A}\step\object{{}^\vee X}} &\qquad& \matrix{\object{X}\hstep\Step\object{A}\cr \vvbox{\hhbox{\krl\hrd\Step{\rm id}} \hbox{{\rm id}\hstep\coro{\rho}}}} \quad \matrix{\object{A}\Step\object{{}^\vee X}\cr \vvbox{\hbox{\step\x} \hbox{\ld\Step{\rm id}} \hbox{\hxx\Step{\rm id}} \hbox{{\rm id}\step\coro{\rho}}}} &\qquad& \matrix{ \vvbox{\hbox{\coev} \hhbox{\krl\rd\step{\rm id}} \hbox{\hxx\step{\rm id}} \hbox{\O{S^-}\step\hxx} \hbox{\hxx\step{\rm id}}}\cr \object{{}^\vee X}\step\object{A}\step\object{X}} \cr \hbox{\scriptsize a) coaction $\Delta^{{}^\vee X}_\ell$ } && \hbox{\scriptsize b) actions $\mu^X_r,\;\mu^{{}^\vee X}_\ell$ } && \hbox{\scriptsize c) bi-invariant $\omega$ } } $$ \caption{} \label{Fig-Jurco} \end{figure} \subsection{}\vskip -22pt\hskip \parindent{} Let $(A,\overline A,\rho)$ be a dual quantum braided group in $\cal C$, $(X,\Delta^X_r)\in{\rm Obj}({\cal C}^{\cO{A,\overline A}})\,,$ ${}^\vee X$ left dual to $X$ in $\cal C$ with left comodule structure $\Delta^{{}^\vee 
X}_\ell$ defined by the condition in Fig.\ref{Fig-Jurco}a. Then $X$ (resp. ${}^\vee X$) equipped with the right (resp. left) $A$-module structure as shown in Fig.\ref{Fig-Jurco}b becomes a right (resp. left) crossed module over $A$. According to general theory \cite{BD1} the object $\Gamma:={}^\vee X\otimes A\otimes X$ with actions and coactions \begin{eqnarray} &&\mu^\Gamma_\ell:= (\mu^{{}^\vee X}_\ell\otimes m_A)\circ (A\otimes\Psi_{A,{}^\vee X}\otimes A)\circ (\Delta_A\otimes{}^\vee X\otimes A) \otimes X\,,\nonumber\\ &&\mu^\Gamma_r:= {}^\vee X\otimes (m_A\otimes\mu^X_r)\circ (A\otimes\Psi_{X,A}\otimes A)\circ (A\otimes X\otimes\Delta_A)\,, \nonumber\\ &&\Delta^\Gamma_\ell:= (m_A\otimes{}^\vee X\otimes A)\circ (A\otimes\Psi_{{}^\vee X,A}\otimes A)\circ (\Delta^{{}^\vee X}_\ell\otimes\Delta_A) \otimes X\,, \nonumber\\ &&\Delta^\Gamma_r:= {}^\vee X\otimes (A\otimes X\otimes m_A)\circ (A\otimes\Psi_{A,X}\otimes A)\circ (\Delta_A\otimes\Delta^X_r)\,. \nonumber \end{eqnarray} is a Hopf bimodule over $A$. The morphism $\omega$ defined in Fig.\ref{Fig-Jurco}c is a bicomodule morphism, where $\underline 1$ is equipped with trivial left and right actions equal to $\epsilon_A$. The 'commutant with $\omega$': \begin{equation} {\rm d}:= \mu^\Gamma_r\circ (\omega\otimes A)- \mu^\Gamma_\ell\circ(A\otimes\omega): A\rightarrow\Gamma \end{equation} is a first-order bicovariant derivative in the sense of Woronowicz \cite{Wor} (see \cite{BD} for a fully braided context). In our case $A=A(X,R;C)$, the bi-invariant $\omega$ equals $\Psi^{-1}_{{}^\vee\!X,{}^\vee\!X}\circ{}^\vee\!C\otimes C: \underline 1\rightarrow{}^\vee\!X^{\otimes 2}\otimes X^{\otimes 2}$.
\section{Introduction} Several statistical models are presented in the form of unnormalized densities and the calculation of the normalization constant (or the partition function) is intractable. Namely, \begin{align} p(x ;\theta)=\frac{1}{Z(\theta)} \tilde{p}(x; \theta), \label{nnm} \end{align} where $Z(\theta) = \int \tilde{p}(x;\theta)\mu(\mathrm{d}x)$, $\mu$ is a baseline measure such as Lebesgue measure or counting measure, and we only have access to $\tilde{p}(x;\theta)$. Such unnormalized models are widely used in many settings: Markov random fields \citep{BesagJulian1975SAoN}, Boltzmann machines \citep{HintonGeoffreyE.2002TPoE}, overcomplete independent component analysis models \citep{HyvarinenAapo2001Ica} and graphical models \citep{LinLina2016EoHG,Yu2016}. Several methods for estimating $\theta$ have been developed, such as noise contrastive estimation (NCE) \citep{noise} and score matching \citep{score}. In this study, we investigate estimation methods for unnormalized models with missing data. Missing data is frequently encountered and may cause nonresponse bias \citep{LittleRoderickJ.A2002Sawm}. Thus, how to handle missing data is an important problem. Our problem setting is as follows. Let $x$ be sampled from the unnormalized model \eqref{nnm} and suppose that we observe only part of $x$, which is denoted by $x_{\mathrm{obs}}$. The objective of this study is to estimate $\theta$ based on the observed data $x_{\mathrm{obs}}$. The existing estimation methods for unnormalized models are not applicable here since all these methods assume that the complete data is fully observed. To solve this issue, we develop estimation methods that combine NCE and score matching with fractional imputation \citep{Kim11}, a computationally efficient technique for missing data that is free from Markov chain Monte Carlo (MCMC). 
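To make the setting concrete, the following sketch uses a hypothetical one-dimensional exponential-family model (the parameterization is ours, purely for illustration) and shows why $Z(\theta)$ is the bottleneck: it can be brute-forced on a grid in one dimension, but not for the high-dimensional models listed above:

```python
import numpy as np

def p_tilde(x, theta):
    # hypothetical unnormalized model: exp(theta1 * x - 0.5 * theta2 * x^2)
    return np.exp(theta[0] * x - 0.5 * theta[1] * x ** 2)

def normalizing_constant(theta, lo=-10.0, hi=10.0, num=200001):
    # brute-force Z(theta) by a Riemann sum; feasible only in low dimensions
    grid = np.linspace(lo, hi, num)
    dx = grid[1] - grid[0]
    return np.sum(p_tilde(grid, theta)) * dx

theta = np.array([0.0, 1.0])      # this choice makes p(x; theta) standard normal
Z = normalizing_constant(theta)   # close to sqrt(2 * pi)
```

The estimators reviewed below (NCE, score matching) avoid this integral entirely.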
Note that \cite{Rhodes} proposed a variational NCE for unnormalized latent variable models, which corresponds to a special case of the current problem (missing at random, MAR). Though variational inference is fast and useful for large-scale problems, it is challenging to conduct statistical inference \citep{BleiDavidM.2017VIAR}. On the other hand, the proposed methods enable the construction of confidence intervals based on the asymptotic theory. In addition, the proposed methods are valid under general missing mechanisms, including the missing not at random (MNAR) case. Our main contributions are as follows. \begin{itemize} \item We propose imputation estimators for unnormalized models with missing data. These estimators are consistent under the general missing mechanism, including an MNAR case, and are computationally efficient. \item We derive the asymptotic distributions of the proposed estimators and construct confidence intervals. \item We confirm the validity of the proposed methods in a simulation with truncated Gaussian graphical models with missing data. \end{itemize} \section{Preliminary} \subsection{Notations} The parameters with a zero in the subscript, such as $\theta_0$ and $\tau_0$, denote the true parameters. The notation $\nabla_{\theta}$ denotes differentiation with respect to $\theta$, and $t(x)^{\otimes 2}=t(x)t(x)^{\top}$. The expectation and variance of $f(x)$ under the density $g(x)$ are denoted by $\mathrm{E}_{g}[f(x)]$ and $\mathrm{var}_{g}[f(x)]$, respectively. We often omit the subscript when it is obvious from the context. We present a summary of the notation in the Supplementary materials. \subsection{Missing data and imputation methods} We briefly review the framework of missing data and imputation methods. For more details, see \cite{kimshao13}. Suppose that $\{x_i\}_{i=1}^{n}$ are independently and identically distributed (i.i.d.) samples from a distribution with density $p(x;\theta)$. 
We consider the situation where some part of $x_i$ may be missing. Let $\{\delta_i\}_{i=1}^{n}$ be the missing indicators. Accordingly, $x_i=(x_{i,\mathrm{obs}},x_{i,\mathrm{mis}})$ is fully observed when $\delta_i=1$, while only $x_{i,\mathrm{obs}}$ is observed and $x_{i,\mathrm{mis}}$ is missing when $\delta_i=0$. We assume that $\delta_i$ follows the Bernoulli distribution with probability ${\rm Pr}(\delta_i=1 \mid x_i)$. The case with several missing patterns (the dimension of $x_{i,\mathrm{obs}}$ may differ across $i$) can be easily considered by extending this notation \citep{SeamanShaun2013WIMb}. The missing mechanism is called missing at random (MAR) if ${\rm Pr}(\delta=1 \mid x) = {\rm Pr}(\delta=1 \mid x_{\mathrm{obs}})$ holds. Importantly, the selection mechanism can be ignored for estimation of $\theta$ in the MAR cases \citep{LittleRoderickJ.A2002Sawm}, because \begin{align*} p(x_{\mathrm{obs}},\delta;\theta) &= \int p(x_{\mathrm{obs}},x_{\mathrm{mis}};\theta)\mathrm{Pr}(\delta \mid x)\mu(\mathrm{d}x_{\mathrm{mis}}) \\ &= \mathrm{Pr}(\delta \mid x_{\mathrm{obs}})\int p(x_{\mathrm{obs}},x_{\mathrm{mis}};\theta)\mu(\mathrm{d}x_{\mathrm{mis}}) \\ &\propto \int p(x_{\mathrm{obs}},x_{\mathrm{mis}};\theta)\mu(\mathrm{d}x_{\mathrm{mis}}). \end{align*} As a special case of MAR, a missing mechanism is referred to as missing completely at random (MCAR) if ${\rm Pr}(\delta=1 \mid x)$ does not depend on $x$ at all. When MAR does not hold, the missing mechanism is referred to as missing not at random (MNAR). For estimating $\theta$ from observations, the fundamental algorithm is the Expectation Maximization (EM) algorithm \citep{dempster77,MengXiao‐Li1997TEAO}, which maximizes the observed likelihood $p(x_{\mathrm{obs}};\theta)$. Equivalently, the EM algorithm solves the following observed (mean) score equation with respect to $\theta$ \citep{louis82,ElashoffMichael2004AEAf}: \begin{align} \label{eq:score2} \frac{1}{n}\sum_{i=1}^{n}\mathrm{E}\left[\nabla_{\theta}\log p(x_i;\theta) \mid x_{i,\mathrm{obs}};\theta\right]=0. 
\end{align} However, the EM algorithm requires a closed-form expression of the conditional expectation in \eqref{eq:score2}, which is often intractable. To solve this problem, Fractional Imputation (FI) has been proposed \citep{Kim11,YangShu2016FIiS}, which is closely connected with the Monte Carlo EM algorithm \citep{WeiGregC.G.1990AMCI}. FI is fast because it uses only importance sampling as an approximation procedure, and does not rely on MCMC. However, it is difficult to approximate the conditional expectation using only importance sampling for large-scale problems. In such cases, Multiple Imputation (MI) is commonly used, which utilizes MCMC for approximation \citep{rubin87,MurrayJaredS.2018MIAR}. \subsection{Estimation methods for unnormalized models} Several methods have been developed for estimating unnormalized models such as score matching \citep{score}, noise contrastive estimation (NCE) \citep{noise}, Monte Carlo maximum likelihood estimation (Monte Carlo MLE) \citep{GeyerC1994Otco}, and contrastive divergence (CD) \citep{HintonGeoffreyE.2002TPoE}. We briefly review NCE and score matching in the following. Note that both methods take the form of Z-estimators or M-estimators \citep{VaartA.W.vander1998As}. \subsubsection{Generalized NCE} We review the generalized NCE from the divergence perspective \citep{noise2, hirayama}. Suppose we have $\mathbf{x}=\{x_i\}_{i=1}^{n}$ from the true distribution with density $g(x)$, and $\mathbf{y}=\{y_i\}_{i=1}^{n}$ from a noise distribution with density $a(y)$. Note that all the algorithms below can be easily extended to the case where the noise sample size is different from the original sample size. In the NCE, we introduce a one-parameter extended model $q(x;\tau)=\exp(-c)\tilde{p}(x;\theta)$, where $\tau=(c,\theta^{\top})^{\top}$ and $c$ is an unknown nuisance parameter to approximate the normalizing constant. Note that it is different from the normalized model $p(x;\theta)$. 
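As a minimal illustration of the extended model (with a hypothetical unnormalized Gaussian location family; the names and values are ours, not the paper's), note that at $\tau_0=(\log Z(\theta_0),\theta_0)$ the extended model coincides with the normalized density, so if the noise $a$ equals that density, the ratio $r(x;\tau_0)=q(x;\tau_0)/a(x)$ used below is identically one:

```python
import numpy as np

def log_p_tilde(x, theta):
    # hypothetical unnormalized model: Gaussian location family, unit scale
    return -0.5 * (x - theta) ** 2

def log_q(x, tau):
    # extended model q(x; tau) = exp(-c) * p_tilde(x; theta) with tau = (c, theta)
    c, theta = tau
    return -c + log_p_tilde(x, theta)

def log_a(x):
    # standard normal noise density a(x)
    return -0.5 * x ** 2 - 0.5 * np.log(2.0 * np.pi)

def r(x, tau):
    # the ratio q(x; tau) / a(x) appearing throughout the NCE objective
    return np.exp(log_q(x, tau) - log_a(x))

tau0 = (0.5 * np.log(2.0 * np.pi), 0.0)     # c = log Z(0), theta = 0
vals = r(np.array([-1.0, 0.0, 2.0]), tau0)  # identically 1 at the true tau
```

Estimating $c$ jointly with $\theta$ is what lets NCE sidestep $Z(\theta)$.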
For a twice differentiable strictly convex function $f(\cdot)$, a noise contrastive divergence is defined as \begin{align} \label{eq:bregman} D_{NC}(g,q(x;\tau))= \int \mathrm{Br}_{f}\left (\frac{g(x)}{a(x)},\frac{q(x;\tau)}{a(x)}\right)a(x)\mu(\mathrm{d}x), \end{align} where $\mathrm{Br}_{f}(o_1,o_2)$ is given by $f(o_1)-f(o_2)-f'(o_2)(o_1-o_2)$, and $f(\cdot)$ is the divergence function. By subtracting a term not associated with $\theta$ from $D_{NC}(g,q(x;\tau))$, the cross entropy between $g(x)$ and $q(x;\tau)$ is given by \begin{align*} d_{NC}(g,q)=\mathrm{E}_{g(x)}\left[M_{nc1}(\mathbf{x})\right]+ \mathrm{E}_{a(y)}\left[M_{nc2}(\mathbf{y})\right], \end{align*} where \begin{align*} &M_{nc1}(\mathbf{x})=-\frac{1}{n}\sum_{i=1}^{n}f'(r(x_i)),\\ &M_{nc2}(\mathbf{y})=\frac{1}{n}\sum_{j=1}^{n}\left\{f'\left(r(y_j)\right )r(y_j)-f(r(y_j))\right \}, \end{align*} and $r(x;\tau)=q(x;\tau)/a(x)$. The objective function is defined as $M_{nc1}(\mathbf{x})+M_{nc2}(\mathbf{y})$ because $\mathrm{D}_{NC}(g,q(x;\tau))$ takes its minimum when $\tau$ is equal to $\tau_0$. NCE is defined as the minimizer of this objective function with respect to $\tau$. By differentiating the above $d_{NC}(g,q(x;\tau))$ with respect to $\tau$, the following moment condition is obtained: \begin{align} \label{eq:z-estimator} \mathrm{E}_{g(x)}\left[Z_{nc1}(\mathbf{x};\tau)\right]+ \mathrm{E}_{a(y)}\left[Z_{nc2}(\mathbf{y};\tau)\right]|_{\tau_{0}}=0, \end{align} where $Z_{nc1}(\mathbf{x})=1/n\sum_{i=1}^{n}z_{nc1}(x_i;\tau),\,Z_{nc2}(\mathbf{y})=1/n\sum_{j=1}^{n}z_{nc2}(y_j;\tau)$, \begin{align*} z_{nc1}(x)&=-\nabla_{\tau}\log q(x;\tau)f''\left(r(x;\tau)\right)r(x;\tau),\\ z_{nc2}(y)&=\nabla_{\tau}\log q(y;\tau)f''\left(r(y;\tau)\right)r(y;\tau)^{2}. \end{align*} The estimator is also regarded as the solution to $Z_{nc}(\mathbf{x},\mathbf{y};\tau)=0$ where $Z_{nc}(\mathbf{x},\mathbf{y};\tau)=Z_{nc1}(\mathbf{x};\tau)+Z_{nc2}(\mathbf{y};\tau)$. Specific examples of an objective function are as follows. 
\begin{example}[Monte Carlo MLE] When $f(x)=x\log x$, the generalized NCE is defined as the minimizer of the following function with respect to $\tau$: \begin{align*} -\frac{1}{n}\sum_{i=1}^{n}\log q(x_{i};\tau)+\left(\frac{1}{n}\sum_{j=1}^{n}r(y_{j};\tau)\right). \end{align*} The objective function is essentially the same as the Monte Carlo MLE by profiling out $c$ \citep{GeyerC1994Otco}. \end{example} \begin{example}[Original NCE] When $f(x)=x\log x-(1+x)\log(1+x)$, the generalized NCE is defined as the minimizer of the following function with respect to $\tau$: \begin{align*} -\frac{1}{n}\sum_{i=1}^{n}\log\frac{r(x_{i};\tau)}{r(x_i;\tau)+ 1}-\frac{1}{n} \sum_{j=1}^{n}\log\frac{1}{r(y_j;\tau)+1}. \end{align*} In this case, the objective function is the same as the original NCE \citep{noise}. This choice of $f(x)$ is optimal from the perspective of asymptotic variance \citep{Uehara}. \end{example} \subsubsection{Generalized score matching} Next, we review the score matching approach. The original score matching was introduced as a tool for minimizing the distance between the score function of the model and the data score function \citep{score}. It has been generalized to many settings: to truncated distributions \citep{HyvärinenAapo2007Seos,LinLina2016EoHG} and to cases involving higher-order score functions \citep{LyuSiwei2012IaGo,DawidA.Philip2012PLSR,dawid2}. Here, we introduce score matching from the divergence perspective. The divergence between $\tilde{p}(x;\theta)$ and $\tilde{p}(x;\theta')$ of the score matching, $D_{SC}(\tilde{p}(x;\theta),\tilde{p}(x;\theta'))$, is given by \begin{align} \label{eq:score} \int \sum_{s=1}^{d_{x}} \mathrm{Br}_{f}\left(-c_{s}(x;\theta),-c_{s}(x;\theta')\right)g(x)\mu(\mathrm{d}x), \end{align} where $c_{s}(x;\theta)=\nabla_{x^{s}}\log \tilde{p}(x;\theta)$, $x^{s}$ is the $s$-th coordinate of $x$, $d_{x}$ is the dimension of $x$, and $f(\cdot)$ is the divergence function. 
Here, note that $c_{s}(x;\theta)$ is different from the score function $\nabla_{\theta}\log p(x;\theta)$ in the usual sense. The cross entropy is defined as $\mathrm{E}_{g(x)}[M_{sc}(\mathbf{x};\theta)]$, where $ M_{sc}(\mathbf{x};\theta)=n^{-1}\sum_{i=1}^{n}m_{sc}(x_i)$, and $m_{sc}(x)$ is \begin{align} \sum_{s=1}^{d_{x}}\left \{-f\left(c_{s}(x)\right)+\nabla_{x^{s}}\left(f'(c_{s}(x))\right)+f'(c_{s}(x))c_{s}(x)\right \}. \end{align} The estimator is defined as the minimizer of the objective function $M_{sc}(\mathbf{x};\theta)$ with respect to $\theta$. \begin{example}[Score matching] Consider the case when $f(x)=0.5x^{2}$. The objective function becomes \begin{align*} M_{sc}(\mathbf{x})=n^{-1}\sum_{i=1}^{n}\sum_{s=1}^{d_{x}}\left \{0.5c^{2}_{s}(x_i)+\nabla_{x^{s}}(c_{s}(x_i))\right \}, \end{align*} which reduces to the original score matching \citep{score}. It can be extended to the case where the data lie on the positive orthant \citep{HyvärinenAapo2007Seos}. The objective function becomes $ M_{sc}(\mathbf{x})=n^{-1}\sum_{i=1}^{n}\sum_{s=1}^{d_{x}} \{2x_{si}c_{s}(x_i)+x_{si}^{2}(0.5c_{s}(x_i)^{2}+ \nabla_{x^{s}}(c_{s}(x_i)))\}$, where $x_{si}$ is the $s$-th component of $x_i$. \end{example} \section{FINCE and FISCORE} We propose estimation methods for unnormalized models with missing data: FINCE (fractional imputation noise contrastive estimation) and FISCORE (fractional imputation score matching). For methods using MI, see the Supplementary materials. In this section, we focus on the MAR case, that is, $\mathrm{Pr}(\delta=1 \mid x)= \mathrm{Pr}(\delta=1 \mid x_{\mathrm{obs}})$. In Section \ref{sec:extensions}, we discuss an extension to the case of missing not at random (MNAR). \subsection{NCE with EM algorithm} We incorporate the EM algorithm into NCE. Though the score equation cannot be used as in \eqref{eq:score2}, an estimating equation such as the one in \eqref{eq:z-estimator} can be used. 
The estimator for $\theta$ is defined based on the solution to the following equation with respect to $\tau$: \begin{align} \label{eq:ideal} \mathrm{E}[Z_{nc1}(\mathbf{x};\tau)|\mathbf{x}_{\mathrm{obs}};\theta]+Z_{nc2}(\mathbf{y};\tau)=0, \end{align} where the expectation is taken with respect to the posterior predictive model $p(x_{\mathrm{mis}}|x_{\mathrm{obs}};\theta)$: \begin{align} \label{eq:mcmc} \frac{p(x;\theta)}{\int p(x;\theta)\mu(\mathrm{d}x_{\mathrm{mis}})}=\frac{\tilde{p}(x;\theta)}{\int \tilde{p}(x;\theta)\mu(\mathrm{d}x_{\mathrm{mis}})}. \end{align} More specifically, the estimator is defined as the solution to \begin{align} \label{eq:em} \frac{1}{n}\sum_{i=1}^{n}\mathrm{E}[z_{nc1}(x;\tau)|x_{i,\mathrm{obs}};\theta]+ \frac{1}{n} \sum_{j=1}^{n}z_{nc2}(y_j;\tau)=0. \end{align} Note that the conditional expectation in \eqref{eq:em} formally means \begin{align} \label{eq:em2} \frac{1}{n}\sum_{i=1}^{n}\left \{\delta_i z_{nc1}(x_i;\tau)+(1-\delta_i)\mathrm{E}[z_{nc1}(X;\tau)|x_{i,\mathrm{obs}};\theta]\right\}. \end{align} This is because the dimension of $x_{\mathrm{obs}}$ is different for each sample. Throughout this paper, we implicitly assume this conversion following the convention in the literature of missing data \citep{SeamanShaun2013WIMb}. Generally, it is difficult to analytically calculate the conditional expectation under $p(x_{\mathrm{mis}}|x_{\mathrm{obs}};\theta)$ in \eqref{eq:em}. In subsequent sections, we discuss how this problem can be resolved. 
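A useful feature of \eqref{eq:mcmc} is that the intractable $Z(\theta)$ cancels from the conditional density, so only $\tilde{p}$ is needed. In very low dimensions the conditional expectation can even be evaluated on a grid; the toy sketch below (a hypothetical bivariate unnormalized Gaussian with correlation $\rho$; the example is ours, not from the paper) illustrates the cancellation:

```python
import numpy as np

RHO = 0.5

def log_p_tilde(x1, x2):
    # hypothetical bivariate unnormalized Gaussian with correlation RHO
    return -0.5 * (x1 ** 2 - 2.0 * RHO * x1 * x2 + x2 ** 2) / (1.0 - RHO ** 2)

def cond_expectation(u, x_obs, grid):
    # E[u(x_mis) | x_obs] under p(x_mis | x_obs) proportional to
    # p_tilde(x_obs, x_mis); Z(theta) cancels in the self-normalization
    logw = log_p_tilde(x_obs, grid)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return float(np.sum(u(grid) * w))

grid = np.linspace(-8.0, 8.0, 8001)
m = cond_expectation(lambda z: z, x_obs=1.0, grid=grid)
# for this model the exact conditional mean is RHO * x_obs = 0.5
```

In realistic dimensions this grid is unavailable, which is exactly what motivates the importance-sampling construction of the next subsections.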
Here, assuming that the conditional expectation in \eqref{eq:em} can be calculated analytically, the EM algorithm for solving \eqref{eq:em} is described in Algorithm \ref{al:em}: \begin{algorithm} \label{al:em} Take a set of $n$ samples $\{y_i\}_{i=1}^{n}$ from $a(y)$ and initialize $\hat{\tau}_{0}$ \\ \Repeat{$\hat{\tau}_{t}$ converges }{ Solve the following equation and update the solution as $\hat{\tau}_{t+1}$: \begin{align*} \mathrm{E}[Z_{nc1}(\mathbf{x};\tau)|\mathbf{x}_{\mathrm{obs}};\hat{\theta}_{t}]+Z_{nc2}(\mathbf{y};\tau)=0. \end{align*}} \caption{NCE with EM algorithm} \end{algorithm} Note that the third line of Algorithm \ref{al:em} can be replaced with M-estimators. For example, when $f(x)=x\log x$, $\hat{\tau}_{t+1}$ is the minimizer of the following function: \begin{align*} -\frac{1}{n}\sum_{i=1}^{n}\mathrm{E}[\log q(x;\tau)|x_{i,\mathrm{obs}};\hat{\theta}_{t}]+ \left( \frac{1}{n} \sum_{j=1}^{n} r(y_{j};\tau) \right). \end{align*} Moreover, when $f(x)=x\log x-(1+x)\log(1+x)$, $\hat{\tau}_{t+1}$ is the minimizer of the following function: \begin{align} \label{eq:nce} -\frac{1}{n}\sum_{i=1}^{n}\mathrm{E}\left[\log\frac{r(x)}{r(x)+ 1}\,\Big|\,x_{i,\mathrm{obs}};\hat{\theta}_{t}\right]-\frac{1}{n} \sum_{j=1}^{n}\log\frac{1}{r(y_j)+ 1}. \end{align} The form \eqref{eq:nce} clearly explains the difference between Algorithm \ref{al:em} and VNCE \citep{Rhodes}. For the details, refer to the Supplementary materials. \subsection{NCE with fractional imputation (FINCE)} \label{sec:fince} The challenge in using the EM algorithm is that it is often infeasible to calculate the conditional expectation analytically. Therefore, in the same spirit as FI \citep{Kim11}, it is natural to incorporate importance sampling using a random variable with density $b(x)$. 
The idea is that \begin{align*} &\int u(x)p(x_{\mathrm{mis}}|x_{\mathrm{obs}};\theta)\mu(\mathrm{d}x_{\mathrm{mis}})\\ &= \frac{\int u(x)\frac{\tilde{p}(x_{\mathrm{mis}},x_{\mathrm{obs}};\theta)}{b(x_{\mathrm{mis}})}b(x_{\mathrm{mis}})\mu(\mathrm{d}x_{\mathrm{mis}})}{\int \frac{\tilde{p}(x_{\mathrm{mis}},x_{\mathrm{obs}};\theta)}{b(x_{\mathrm{mis}})}b(x_{\mathrm{mis}})\mu(\mathrm{d}x_{\mathrm{mis}})} \end{align*} for any function $u(x)$. Using the above technique, we estimate $\mathrm{E}[Z_{nc1}(\mathbf{x};\tau)|\mathbf{x}_{\mathrm{obs}};\theta]$ by importance sampling in \eqref{eq:em}. The estimator is defined as in Algorithm \ref{al:em2}. Here, $\propto$ in the second step indicates a normalization so that the summation over $k$ is equal to $1$. \begin{algorithm} \label{al:em2} Take a set of $m$ samples $x_{i,\mathrm{mis}}^{*k}\sim b(x)$ for each $i$ with $\delta_i=0$ and take a set of $n$ samples $y_{j}\sim a(y)$ ($1\leq k\leq m,1\leq j\leq n$). \\ Calculate the normalized weight: $w_{ik} \propto q(x_{i}^{*k};\tau)/b(x^{*k}_{\mathrm{mis}})$, where $x_{i}^{*k}=(x_{i,\mathrm{obs}},x^{*k}_{\mathrm{mis}})$. This means \begin{align*} w_{ik}(x;\tau)=\frac{q(x_{i}^{*k};\tau)/b(x^{*k}_{\mathrm{mis}})}{\sum_{k=1}^{m}q(x_{i}^{*k};\tau)/b(x^{*k}_{\mathrm{mis}})}. \end{align*} \\ Solve the following equation with respect to $\tau$: \begin{align} \label{eq:fi-ideal} 0=\left(\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m}z_{nc1}(x_{i}^{*k};\tau)w_{ik}(x;\tau)\right)+Z_{nc2}(\mathbf{y};\tau) \end{align} \caption{FINCE} \end{algorithm} Generally, it is difficult to solve \eqref{eq:fi-ideal} directly. We can solve it with an EM approach as shown in Algorithm \ref{al:em3}. In the EM-style algorithm, the weights are fixed at every step. 
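The weight computation in the second step of Algorithm \ref{al:em2} is a standard self-normalized importance-sampling step; a small sketch (working in log-space for numerical stability is our implementation detail, not prescribed by the algorithm):

```python
import numpy as np

def fractional_weights(log_q_vals, log_b_vals):
    # w_ik proportional to q(x_i^{*k}; tau) / b(x_mis^{*k}),
    # normalized so the m weights for one incomplete unit i sum to one
    log_w = log_q_vals - log_b_vals
    log_w = log_w - log_w.max()   # subtract the max before exp for stability
    w = np.exp(log_w)
    return w / w.sum()

rng = np.random.default_rng(0)
w = fractional_weights(rng.normal(size=50), rng.normal(size=50))  # sums to one
```

Only ratios of $\tilde{p}$ to $b$ enter, so the normalizing constant $Z(\theta)$ never has to be computed.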
\begin{algorithm}[h] \label{al:em3} Take the same first step as before in Algorithm \ref{al:em2} \\ \Repeat{$\hat{\tau}_{t}$ converges } { W-step: $w_{ik} \propto q(x_{i}^{*k};\hat{\tau}_{t})/b(x^{*k}_{\mathrm{mis}})$ \\ M-step: Update $\hat{\tau}_{t+1}$ as the solution to the following equation with respect to $\tau$: \begin{align*} \frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m}z_{nc1}(x_{i}^{*k};\tau)w_{ik}+Z_{nc2}(\mathbf{y};\tau)=0. \end{align*} } \caption{FINCE with EM algorithm} \end{algorithm} Note that the Z-estimator in the M-step can be replaced with an M-estimator. For example, when $f(x)=x\log x$, the M-step is the minimization of the following function with respect to $\tau$: \begin{align} -\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m}w_{ik}\log q(x_{i}^{*k};\tau)+\left(\frac{1}{n} \sum_{j=1}^{n}r(y_{j};\tau)\right). \end{align} The choice of the noise and the auxiliary distribution is important. In terms of statistical efficiency, the noise distribution $a(x)$ should generally be close to $p(x_{\mathrm{mis}},x_{\mathrm{obs}};\theta_{0})$, and the auxiliary distribution $b(x)$ should be close to $p(x_{\mathrm{mis}};\theta_{0})$. When there are complete data for some set of samples as in Section \ref{sec:experiment}, moment matching can be used to determine $a(x)$ and $b(x)$. \subsection{Score matching with fractional imputation (FISCORE)} Score matching is defined in the form of M-estimators. Thus, the idea in Section \ref{sec:fince} can similarly be incorporated when there are missing data. The estimator is defined as the solution to the following equation with respect to $\theta$: \begin{align} \label{eq:fiscore} \mathrm{E}[Z_{sc}(\mathbf{x};\theta)|\mathbf{x}_{\mathrm{obs}};\theta]=0,\,Z_{sc}(\mathbf{x};\theta)=\nabla_{\theta}M_{sc}(\mathbf{x};\theta). \end{align} However, the calculation of the conditional expectation can be challenging. By introducing the auxiliary density $b(x)$, the above equation can be solved by an EM approach as shown in Algorithm \ref{al:em4}. 
\begin{algorithm} \label{al:em4} Take a set of $m$ samples $x_{i,\mathrm{mis}}^{*k}\sim b(x)$ for each $i$ with $\delta_i=0$\\ \Repeat{$\hat{\theta}_{t}$ converges } { W-step: $w_{ik} \propto \tilde{p}(x_{i}^{*k};\hat{\theta}_{t})/b(x^{*k}_{\mathrm{mis}})$. \\ M-step: Update $\hat{\theta}_{t+1}$ as the minimizer of the following term with respect to $\theta$: \begin{align*} \frac{1}{n} \sum_{i=1}^{n}\sum_{k=1}^{m}w_{ik}m_{sc}(x_{i}^{*k};\theta). \end{align*} } \caption{FISCORE with EM algorithm} \end{algorithm} \section{Asymptotics and confidence intervals} \label{sec:theory} We derive the asymptotic distributions of FINCE and FISCORE by extending the results of \cite{wang98} and \cite{Kim11}. Based on the asymptotic distributions, we also construct confidence intervals, which enable hypothesis testing. This is an advantage of the proposed methods compared with variational NCE \citep{Rhodes}. \subsection{FISCORE} \label{sec:the_fiscore} First, we consider the case of FISCORE. Given an initial $\sqrt{n}$-consistent estimator $\hat{\theta}_{p}$ for $\theta$, we obtain the imputed equation: \begin{align*} Z_{sc,m}(\theta|\hat{\theta}_{p})=\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m}z_{sc}(\theta;x_{i}^{*k})w(x_{i}^{*k};\hat{\theta}_{p}), \end{align*} where $x^{*k}_{i}=(x_{i,\mathrm{obs}},x^{*k}_{i,\mathrm{mis}})$, $x^{*k}_{i,\mathrm{mis}}\sim b(x)$, \begin{align*} w(x_{i}^{*k};\theta)=\frac{\tilde{p}(x_{i}^{*k};\theta)/b(x_{i,\mathrm{mis}}^{*k}) }{\sum_{k=1}^{m}\tilde{p}(x_{i}^{*k};\theta)/b(x_{i,\mathrm{mis}}^{*k}) }. \end{align*} As an initial step, we consider the case $m\to \infty$ irrespective of the size of $n$. This result is easily applied to the case when $m \to \infty$ as $n \to \infty$. Refer to the Supplementary materials for the case when $m$ is finite. 
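To see Algorithm \ref{al:em4} end to end, the toy sketch below runs FISCORE with $f(x)=0.5x^{2}$ on a hypothetical bivariate Gaussian location model $\tilde{p}(x;\theta)=\exp(-\|x-\theta\|^{2}/2)$, with the second coordinate missing completely at random; for this model the weighted score-matching M-step reduces to a fractionally weighted mean. All modelling and tuning choices here (sample sizes, the auxiliary density $b=N(0,3^2)$, the number of iterations) are ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([1.0, -1.0])
n, m = 400, 100
x = rng.normal(size=(n, 2)) + theta_true
miss = rng.random(n) < 0.4                # MCAR mask on the 2nd coordinate

def log_p_tilde(xx, theta):
    # hypothetical model N(theta, I), left unnormalized
    return -0.5 * np.sum((xx - theta) ** 2, axis=-1)

# auxiliary proposals x_mis^{*k} ~ b = N(0, 3^2); drawn for every unit,
# but the weights are only used where the 2nd coordinate is missing
prop = rng.normal(scale=3.0, size=(n, m))
log_b = -0.5 * (prop / 3.0) ** 2 - np.log(3.0 * np.sqrt(2.0 * np.pi))

theta = np.zeros(2)
for _ in range(30):
    # W-step: fractional weights w_ik over the imputed points
    xk = np.stack([np.broadcast_to(x[:, :1], (n, m)), prop], axis=-1)
    log_w = log_p_tilde(xk, theta) - log_b
    w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # M-step: with f = 0.5 x^2 and this model, minimizing the weighted
    # m_sc objective gives theta = fractionally imputed sample mean
    x2 = np.where(miss, np.sum(w * prop, axis=1), x[:, 1])
    theta = np.array([x[:, 0].mean(), x2.mean()])
```

For this toy model the iteration contracts at roughly the missingness rate, in line with the fraction-of-missing-information discussion in Section \ref{sec:theory}.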
When $m$ is infinity, the above imputed equation $Z_{sc,m}(\theta|\hat{\theta}_{p})$ converges to \begin{align*} \bar{Z}_{sc}(\theta|\hat{\theta}_{p})=\mathrm{E}[Z_{sc}(\theta)|\mathbf{x}_{\mathrm{obs}};\hat{\theta}_{p}]. \end{align*} We define the solution to $\bar{Z}_{sc}(\theta|\hat{\theta}_{p})=0$ as $\hat{\theta}_{sc,\infty}$. Ideally, when the EM algorithm is solved analytically, the estimator is defined as the solution to $Z_{sc,\mathrm{obs}}(\mathbf{x}_{\mathrm{obs}};\theta)=0$, where \begin{align*} Z_{sc,\mathrm{obs}}(\mathbf{x}_{\mathrm{obs}};\theta)=\mathrm{E}[Z_{sc}(\theta)|\mathbf{x}_{\mathrm{obs}};\theta]. \end{align*} We define this solution as $\hat{\theta}_{sc,f}$. Based on the theory of Z-estimators \citep{VaartA.W.vander1998As}, $\hat{\theta}_{sc,f}$ has the following asymptotic property: \begin{theorem} \label{thm:easy} The term $\hat{\theta}_{sc,f}-\theta_{0}$ is equal to \begin{align*} -\mathrm{E}[\nabla_{\theta^{\top}}Z_{sc,\mathrm{obs}}(\theta_{0})]^{-1}Z_{sc,\mathrm{obs}}(\theta_{0})+\mathrm{o}_{p}(n^{-1/2}). \end{align*} The term $\hat{\theta}_{sc,f}-\theta_{0}$ asymptotically converges to the normal distribution with mean $0$ and variance $\mathcal{I}_{1,sc}^{-1}\mathcal{J}_{1,sc}\mathcal{I}_{1,sc}^{\top-1}$, where \begin{align*} \mathcal{I}_{1,sc}= \mathrm{E}[\nabla_{\theta^{\top}}Z_{sc,\mathrm{obs}}(\theta_{0})],\,\mathcal{J}_{1,sc}= \mathrm{Var}[Z_{sc,\mathrm{obs}}(\theta_{0})]. \end{align*} \end{theorem} Next, consider the asymptotic variance of $\hat{\theta}_{sc,\infty}$, and the corresponding result when $f(x)=0.5x^{2}$. In the case of $f(x)=0.5x^{2}$, each term is specified more explicitly because some terms cancel out using integration by parts. 
\begin{theorem} \label{thm:main} The term $\hat{\theta}_{sc,\infty}-\theta_{0}$ is equal to \begin{align*} (\hat{\theta}_{sc,f}-\theta_{0}) +\mathcal{I}^{-1}_{3,sc}\mathcal{I}_{2,sc}(\hat{\theta}_{p}-\hat{\theta}_{sc,f})+\mathrm{o}_{p}(n^{-1/2}), \end{align*} where \begin{align*} \mathcal{I}_{3,sc}&=\mathrm{E}[\nabla_{\theta^{\top}}Z_{sc}(\theta_{0})],\\ \mathcal{I}_{2,sc}&=-\mathrm{E}[Z_{sc}(\theta_{0})\nabla_{\theta^{\top}}\log p(\mathbf{x}_{\mathrm{mis}}|\mathbf{x}_{\mathrm{obs}};\theta_{0})]\\ &=-\mathrm{E}[\mathrm{Cov}[z_{sc}(\theta_{0}),\nabla_{\theta}\log \tilde{p}(x;\theta_{0})|x_{\mathrm{obs}} ]]. \end{align*} \end{theorem} \begin{corollary} \label{col:score} When $f(x)=0.5x^{2}$ and the missing data mechanism is MAR, each term becomes \begin{align*} z_{sc}(\theta)&=\sum_{s=1}^{d_{x}}\left\{c_{s}(x)\nabla_{\theta}(c_{s}(x))+\nabla_{x^{s}}(\nabla_{\theta}c_{s}(x))\right\},\\ \mathcal{I}_{2,sc}&=\mathrm{E}\left[-\mathrm{cov}[z_{sc}(\theta),\nabla_{\theta}\log \tilde{p}(x;\theta)|x_{\mathrm{obs}}]\right]|_{\theta_{0}},\\ \mathcal{I}_{3,sc}&=\mathrm{E}\left[\sum_{s=1}^{d_{x}}\left\{\nabla_{\theta}c_{s}(x)^{\otimes 2}\right\}\right]|_{\theta_{0}},\\\mathcal{I}_{1,sc}&=\mathcal{I}_{3,sc}-\mathcal{I}_{2,sc},\\ \mathcal{J}_{1,sc}&= n^{-1}\mathrm{Var}[\mathrm{E}[z_{sc}(\theta_{0})|x_{\mathrm{obs}}]]. \end{align*} \end{corollary} In the proof of Theorem \ref{thm:main}, we used the relation $ \mathcal{I}_{3,sc}=\mathcal{I}_{1,sc}+\mathcal{I}_{2,sc}$. This relation corresponds to the missing information principle or Louis' formula \citep{kimshao13,orchard72,louis82} when the normalized model is used. Specifically, when $Z_{sc}(\theta)$ is a true score equation, $S_{sc}(\mathbf{x};\theta)=\nabla_{\theta}\log\{p(\mathbf{x};\theta)\}$, the result reduces to the one in \cite{wang98}. 
In this case, $\mathcal{I}_{3,sc}$,\,$\mathcal{I}_{1,sc}$ and $\mathcal{I}_{2,sc}$ become \begin{align*} \mathcal{I}_{com}&=\mathrm{E}[\nabla_{\theta^{\top}}S_{sc}(\theta_{0})],\,\mathcal{I}_{\mathrm{obs}}=\mathrm{E}[\nabla_{\theta^{\top}}S_{\mathrm{obs}}(\theta_{0})],\\ \mathcal{I}_{\mathrm{mis}}&=\mathrm{E}[S_{\mathrm{mis}}(\theta_{0})^{\otimes 2}],\\ S_{\mathrm{mis}}(\theta)&=S_{sc}(\theta)-\mathrm{E}[S_{sc}(\theta)|\mathbf{x}_{\mathrm{obs}};\theta], \\ S_{\mathrm{obs}}(\theta)&=\mathrm{E}[S_{sc}(\theta)|\mathbf{x}_{\mathrm{obs}};\theta], \end{align*} respectively, and the relation $\mathcal{I}_{com}=\mathcal{I}_{\mathrm{obs}}+\mathcal{I}_{\mathrm{mis}}$ holds. The term $\mathcal{I}_{com}^{-1}\mathcal{I}_{\mathrm{mis}}$ is often called the fraction of missing information \citep{kimshao13}. For the current problem, $\mathcal{I}_{3,sc}^{-1}\mathcal{I}_{2,sc}$ can be considered as an analog. Writing $\hat{\theta}^{(t)}$ for the $t$-th EM update of $\theta$, computed by solving $\bar{Z}_{sc}(\theta|\hat{\theta}^{(t-1)})=0$, we obtain the following Corollary. \begin{corollary} We have \begin{align*} \hat{\theta}^{(t)}=\hat{\theta}^{(t-1)}+\{\mathcal{I}_{3,sc}^{-1}\mathcal{I}_{2,sc}\}^{t-1}(\hat{\theta}^{(0)}-\hat{\theta}_{sc,f}). \end{align*} When the spectral radius of $\mathcal{I}_{3,sc}^{-1}\mathcal{I}_{2,sc}$ is less than $1$, $\hat{\theta}^{(t)}$ converges to $\hat{\theta}_{sc,f}$. \end{corollary} Generally, it is difficult to prove that the spectral radius of $\mathcal{I}_{3,sc}^{-1}\mathcal{I}_{2,sc}$ is less than $1$. However, experimental results in Section \ref{sec:experiment} show that this algorithm converges. \subsection{FINCE} Next, we consider the case of FINCE. 
Given an initial $\sqrt{n}$--consistent estimator $\hat{\tau}_{p}$, we can obtain an imputed equation $Z_{nc,m}(\tau|\hat{\tau}_{p})$: \begin{align*} \left \{\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m}z_{nc1}(x_{i}^{*k};\tau)w(x_{i}^{*k};\hat{\tau}_{p})\right \}+Z_{nc2}(\mathbf{y};\tau)=0, \end{align*} where $w(x;\tau)=q(x;\tau)/b(x)$. Consider the case where $m$ tends to infinity. Then, $Z_{nc,m}(\tau|\hat{\tau}_{p})$ converges to $\bar{Z}_{nc}(\tau|\hat{\tau}_{p})$: \begin{align*} \left \{\frac{1}{n}\sum_{i=1}^{n}\mathrm{E}[z_{nc1}(X;\tau)|x_{i,\mathrm{obs}};\hat{\tau}_{p}]\right \}+\frac{1}{n}\sum_{j=1}^{n}z_{nc2}(y_{j};\tau). \end{align*} Furthermore, when the EM algorithm can be solved analytically, the estimator is defined as the solution to the following equation with respect to $\tau$: \begin{align*} 0&=Z_{nc,obs}(\mathbf{x}_{\mathrm{obs}},\mathbf{y};\tau),\\ Z_{nc,obs}(\tau)&= \mathrm{E}[Z_{nc1}(\tau)|\mathbf{x}_{\mathrm{obs}};\tau]+Z_{nc2}(\mathbf{y};\tau). \end{align*} Here, we refer to this solution as $\hat{\tau}_{nc,f}$. Similar to Theorem \ref{thm:easy}, we have the following asymptotic property. \begin{theorem} The term $\hat{\tau}_{nc,f}-\tau_{0}$ is asymptotically normally distributed with mean $0$ and variance $\mathcal{I}_{1,nc}^{-1}\mathcal{J}_{1,nc}\mathcal{I}_{1,nc}^{\top -1}$, where \begin{align*} \mathcal{I}_{1,nc}=\mathrm{E}[\nabla_{\tau^{\top}}Z_{nc,obs}(\tau_0)],\,\mathcal{J}_{1,nc}=\mathrm{Var}[Z_{nc,obs}(\tau_0)]. \end{align*} \end{theorem} In particular, in the case of the original NCE, each term is specified more explicitly as follows because some terms cancel out. Refer to the Supplementary materials for variance estimators based on this result.
\begin{corollary} \label{col:nce} When the missing data mechanism is MAR and $f(x)=x\log x-(1+x)\log(1+x)$, all of the terms become as follows, where $v(x;\tau)=\nabla_{\tau}\log q(x;\tau)$: \begin{align*} \mathcal{I}_{1,nc}&=\mathrm{E}\left[\mathrm{E}\left[\frac{v(x;\tau_0)}{1+r_0}|x_{\mathrm{obs}}\right]\mathrm{E}\left[v(x;\tau_0)^{\top}|x_{\mathrm{obs}}\right]\right],\\ \mathcal{I}_{3,nc}&=\mathrm{E}\left[\frac{v(x;\tau_0)^{\otimes 2}}{1+r_0}\right],\,r_0=q(x;\tau_0)/a(x),\\ \mathcal{J}_{1,nc}&=n^{-1}(\mathrm{var}_{q}[\mathrm{E}[z_{nc1}(x;\tau_0)|x_{\mathrm{obs}}]]\\ &+\mathrm{var}_{a}[z_{nc2}(y;\tau_0)]), \\ z_{nc1}(\tau)&=-\frac{v(x;\tau_0)}{1+r_0},\,z_{nc2}(\tau)=\frac{rv(x;\tau_0)}{1+r_0}. \end{align*} \end{corollary} Moreover, when $f(x)=x\log x$, we can prove that $\{\mathcal{I}_{3,nc}^{-1}\mathcal{I}_{2,nc}\}^{j}$ tends to zero as $j$ tends to infinity. \begin{corollary} \label{cor:convergece} When $f(x)=x\log x$, $\mathcal{I}_{1,nc}$ and $\mathcal{I}_{3,nc}$ become as follows: \begin{align*} \mathcal{I}_{1,nc}&=\mathrm{E}\left[\mathrm{E}\left[v(x;\tau_0)|x_{\mathrm{obs}}\right]^{\otimes 2}\right],\\ \mathcal{I}_{3,nc}&=\mathrm{E}\left[v(x;\tau_0)^{\otimes 2}\right]. \end{align*} Additionally, $\{\mathcal{I}_{3,nc}^{-1}\mathcal{I}_{2,nc}\}^{j}$ tends to zero as $j$ tends to infinity. \end{corollary} Note that when there is no missing data, NCE is more efficient than Monte Carlo MLE \citep{Uehara}. On the other hand, when there is missing data, this statement does not hold; the relative efficiency of the methods depends on the underlying generating mechanism. \section{Some extensions} \label{sec:extensions} \subsection{Extension to MNAR case} In general, the nonparametric identification condition does not hold in the MNAR case \citep{RobinsJM1997Taco}. However, assuming the existence of a nonresponse instrument and parametric models, the parameter can be identified in some cases \citep{KimJiYoung2012Pfif,wang14}.
We hereafter assume the existence of a nonresponse instrument so that the parameter can be identified. To estimate the parameter under MNAR data, FISCORE and FINCE can still be applied. First, we specify a propensity score model $\pi(\delta|x;\phi)$ for $\mathrm{Pr}(\delta|x)$. For the case of FISCORE, we want to solve the equation with respect to $\eta$: \begin{align} \label{eq:ideal2} \mathrm{E}\left[\begin{pmatrix} Z_{sc}(\mathbf{x};\theta) \\ \nabla_{\phi} \log \pi(\bm{\delta}|\mathbf{x};\phi) \end{pmatrix} |\mathbf{x}_{\mathrm{obs}},\bm{\delta};\eta \right]=0, \end{align} where the expectation is taken under $t(\mathbf{x}_{\mathrm{mis}}|\mathbf{x}_{\mathrm{obs}},\bm{\delta};\eta)\propto p(\mathbf{x};\theta)\pi(\bm{\delta}|\mathbf{x};\phi)$, and $\eta=(\theta,\phi)$. Importantly, unlike in the MAR and MCAR cases, we must account for the selection mechanism, because $p(x_{\mathrm{mis}}|x_{\mathrm{obs}})=p(x_{\mathrm{mis}}|\delta,x_{\mathrm{obs}})$ does not hold. The difference is evident when we compare \eqref{eq:ideal2} with \eqref{eq:fiscore}. Owing to MNAR, the first modification is that the selection mechanism $\pi(\delta|x)$ appears when calculating the fractional weight: $w_{ik} \propto \tilde{p}(x_{i}^{*k};\hat{\theta}_{t})\pi(\delta_i|x_{i}^{*k};\hat{\phi}_{t})/b(x_{\mathrm{mis}}^{*k})$. The second modification is the addition of the score of the propensity score model, as shown in \eqref{eq:ideal2}. In the case of FINCE, let $\zeta =(\tau^{\top},\phi^{\top})^{\top}$ and $Z_{nc}(\bm{\delta},\mathbf{x},\mathbf{y};\zeta)$ be defined as an augmented estimating equation: \begin{align*} \begin{pmatrix} Z_{nc}(\bm{\delta},\mathbf{x},\mathbf{y};\tau) \\ \nabla_{\phi} \log \pi(\bm{\delta}|\mathbf{x};\phi) \end{pmatrix}.
\end{align*} The algorithm is modified to solve the following equation with respect to $\zeta$: \begin{align*} \mathrm{E}\left[\begin{pmatrix} Z_{nc}(\bm{\delta},\mathbf{x},\mathbf{y};\tau) \\ \nabla_{\phi} \log \pi(\bm{\delta}|\mathbf{x};\phi) \end{pmatrix}|\mathbf{x}_{\mathrm{obs}},\bm{\delta};\zeta \right]=0. \end{align*} \subsection{Extension to contrastive divergence methods} Although there are several variations of contrastive divergence methods \citep{younes,TielemanTijmen2008TrBm}, the basic idea is that $\theta$ is updated by adding the gradient of the log-likelihood $\log p(\mathbf{x};\theta)$ with respect to $\theta$: \begin{align*} \frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta} \log \tilde{p}(x_{i};\theta)-\mathrm{E}_{ p(x;\theta)}[\nabla_{\theta}\log \tilde{p}(x;\theta)], \end{align*} multiplied by a learning rate. When some data is not observed, the expected gradient becomes \begin{align*} \frac{1}{n}\sum_{i=1}^{n}\mathrm{E}[\nabla_{\theta} \log \tilde{p}(x_{i};\theta)|x_{i,\mathrm{obs}};\theta]-\mathrm{E}[\nabla_{\theta}\log \tilde{p}(x;\theta)]. \end{align*} The expectation of the first term is taken under $p(x_{\mathrm{mis}}|x_{\mathrm{obs}};\theta)$. It is possible to sample via MCMC as in \eqref{eq:mcmc} without involving doubly-intractable distributions \citep{MllerJ.2006AeMc}. Therefore, the gradient is approximated as \begin{align*} \frac{1}{nm}\sum_{i=1}^{n}\sum_{k=1}^{m}\nabla_{\theta} \log \tilde{p}(x_{i}^{*k};\theta)-\frac{1}{n} \sum_{j=1}^{n}\nabla_{\theta}\log \tilde{p}(y_j;\theta), \end{align*} where $x_{i}^{*k}\sim p(x_{\mathrm{mis}}|x_{i,\mathrm{obs}};\theta) $ and $y_j\sim p(y;\theta)$. We refer to the updating method using the above gradient as MICD. We can still use an FI approach for the approximation.
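At the core of such FI approximations is self-normalized importance weighting: draws from a tractable proposal density $b$ are reweighted by $w\propto\tilde{p}/b$ to approximate expectations under the unnormalized target $\tilde{p}$, without computing its normalizing constant. A minimal sketch with a toy target (an assumed example, not one of this paper's models):

```python
import math, random

# Self-normalized importance weighting against an unnormalized target
# (toy example): p~(x) = exp(-x^2/2), proposal b = Uniform(-6, 6).
# The weighted averages approximate E[g(X)] under the normalized target
# without ever evaluating the normalizing constant sqrt(2*pi).
random.seed(1)

def weighted_mean(g, m=200_000, lo=-6.0, hi=6.0):
    b = 1.0 / (hi - lo)                       # proposal density value
    xs = [random.uniform(lo, hi) for _ in range(m)]
    ws = [math.exp(-x * x / 2.0) / b for x in xs]
    total = sum(ws)
    return sum(w * g(x) for w, x in zip(ws, xs)) / total

m1 = weighted_mean(lambda x: x)       # approximately E[X] = 0
m2 = weighted_mean(lambda x: x * x)   # approximately E[X^2] = 1
```

The same normalization of the weights is what makes the proportionality constants in $w_{ik}$ and $r_j$ above immaterial.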
By introducing an auxiliary distribution with a density $b(x)$, the gradient is approximated as \begin{align*} \frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m}w_{ik}\nabla_{\theta}\log \tilde{p}(x_{i}^{*k};\theta)-\frac{1}{n} \sum_{j=1}^{n}\nabla_{\theta}\log \tilde{p}(y_j;\theta), \end{align*} where $x_{i}^{*k}\sim b(x),\,w_{ik}\propto \tilde{p}(x_{i}^{*k};\theta)/b(x_{i}^{*k}),\,y_j\sim p(y;\theta)$. We refer to this approach as FICD. Furthermore, by introducing a noise distribution with a density $a(y)$ to avoid using MCMC entirely, the gradient is approximated as \begin{align*} \frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m}w_{ik}\nabla_{\theta}\log \tilde{p}(x_{i}^{*k};\theta)-\frac{1}{n} \sum_{j=1}^{n}r_{j}\nabla_{\theta}\log \tilde{p}(y_j;\theta), \end{align*} where $x_{i}^{*k}\sim b(x),\,w_{ik}\propto \tilde{p}(x_{i}^{*k};\theta)/b(x_{i}^{*k}),\,y_j\sim a(y)$, and $r_j\propto \tilde{p}(y_j;\theta)/a(y_j)$. In this case, the gradient is essentially equivalent to the objective function of FINCE with $f(x)=x\log x$ after profiling out $c$. \section{Simulation results} \label{sec:experiment} We present some simulation results to show the performance of FINCE and FISCORE under the following two settings: (1) a truncated normal distribution with missing data, including the MNAR case, and (2) truncated Gaussian graphical models with missing data. \subsection{Truncated normal distribution} Consider a truncated normal distribution: $\phi(x;\Sigma^{-1})=\exp(-0.5x^{\top}\Sigma^{-1} x)\mathrm{I}(x>0)$, where $\Sigma$ is a $2\times 2$ matrix parameter and $x=(x_1,x_2)$ is a two-dimensional vector. Assume $x_1$ is fully observed, whereas $x_2$ is subject to missingness. The random variable $\delta$ is binary; if $\delta=1$, $x_{2}$ is observed, and if $\delta=0$, $x_{2}$ is missing. We performed simulations under two settings using an R-package developed by \cite{mvtnorm}.
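For concreteness, draws from such a truncated bivariate normal can also be produced by a simple rejection sampler (a minimal sketch; the experiments themselves use the R-package cited above):

```python
import math, random

# Rejection sampler for N(0, Sigma) truncated to the positive quadrant
# (sketch): draw from the untruncated bivariate normal via its Cholesky
# factor and keep the draw only if both coordinates are positive.
random.seed(2)

def sample_truncated(s11, s12, s22, n):
    l11 = math.sqrt(s11)                  # Cholesky factor of Sigma
    l21 = s12 / l11
    l22 = math.sqrt(s22 - l21 * l21)
    out = []
    while len(out) < n:
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        x1, x2 = l11 * z1, l21 * z1 + l22 * z2
        if x1 > 0 and x2 > 0:
            out.append((x1, x2))
    return out

sample = sample_truncated(2.0, 1.3, 2.0, 5_000)  # Sigma from the MAR setting
```

Rejection sampling is adequate here because the acceptance probability (both coordinates positive) is bounded away from zero for this positively correlated $\Sigma$; in higher dimensions a specialized sampler would be needed.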
In both cases, the parameter values under the missing data models are chosen so that the overall missing rates are about 30\%. \begin{itemize} \item MAR : $\mathrm{Pr}(\delta=1|x)=1/[1+\exp\{-(x_1-0.9)/0.3\}]$ and \begin{align*} \Sigma = \begin{pmatrix} 2 & 1.3 \\ 1.3 & 2.0 \end{pmatrix}. \end{align*} \item MNAR : $\mathrm{Pr}(\delta=1|x)=1/[1+\exp\{-(x_2-\mu)/\sigma\}]$ where $\mu=0.9$ and $\sigma=0.2$, and the same $\Sigma$ as in the first setting. \end{itemize} We compared the following estimators: \begin{itemize} \item \textbf{COMP}: This estimator uses NCE based on complete data only. We used a truncated distribution as both the auxiliary distribution and the noise distribution. \item \textbf{FINCE}: This estimator uses FINCE with $m=100$. \item \textbf{FISCORE}: This estimator uses FISCORE with $m=100$. In this case, we used score matching for a truncated density \citep{HyvärinenAapo2007Seos}. See Supplementary materials for details. \end{itemize} We do not compare them with variational NCE because it does not take the MNAR case into account and does not give a confidence interval. Table \ref{tab:exp1} shows the Monte Carlo medians of absolute bias and squared error. The results revealed that \textbf{COMP} leads to significant bias. This outcome is expected because using only complete cases leads to bias under MAR, although it does not under MCAR \citep{LittleRoderickJ.A2002Sawm}. On the other hand, \textbf{FINCE} and \textbf{FISCORE} are shown to be consistent estimators. Although \textbf{FISCORE} performs better than \textbf{FINCE} in this experiment, the efficiency of \textbf{FINCE} is expected to improve as the number of auxiliary samples increases. \begin{table}[h!]
\centering \caption{Monte Carlo median squared error and bias} \label{tab:exp1} MAR \\ \begin{tabular}{llccc} n & & COMP & FINCE & FISCORE \\ 500 & (bias) & 0.29 & 0.03 & 0.03 \\ & (mse) & 0.040 & 0.024 & 0.021 \\ 1000 & (bias) & 0.25 & 0.02 & 0.02 \\ & (mse) & 0.032 & 0.015 & 0.011 \\ \end{tabular} \\ MNAR \\ \begin{tabular}{llccc} n & & COMP & FINCE & FISCORE \\ 500 & (bias) & 0.33 & 0.18 & 0.12 \\ & (mse) & 0.041 & 0.027 & 0.021 \\ 1000 & (bias) & 0.24 & 0.14 & 0.14 \\ & (mse) & 0.039 & 0.020 & 0.012 \\ \end{tabular} \end{table} We also constructed $95$\% confidence intervals based on the variance estimators in the Supplementary materials. Table \ref{tab:exp4} shows the resulting coverage rates. \begin{table}[h!] \centering \caption{Coverage rate under Setting 1} \begin{tabular}{lcc} n & FINCE & FISCORE \\ 500 & 94\% & 89\% \\ 1000 & 94\% & 92\% \end{tabular} \label{tab:exp4} \end{table} \subsection{Truncated Gaussian graphical model} Next, we consider the estimation of the truncated Gaussian graphical model (GGM) considered in \cite{LinLina2016EoHG} with missing data. Let $G=(V,E)$ be an undirected graph where $V=\{ 1,\cdots,d \}$. Then, the truncated GGM with graph $G$ is defined as \begin{equation} p(x \mid \Sigma) \propto \exp \left( -0.5x^{\top} \Sigma^{-1} x \right) \quad (x \in \mathbb{R}_+^d), \label{tGGM} \end{equation} where $\Sigma \in \mathbb{R}^{d \times d}$ is a positive definite matrix satisfying $(\Sigma^{-1})_{ij}=0$ for $(i,j) \not\in E$. Similar to the original GGM \citep{LauritzenSteffenL1996Gm}, $X_i$ and $X_j$ are conditionally independent given the other variables $X_k \ (k \neq i,j)$ if $(i,j) \not\in E$. Here, we estimate $G$ by using the confidence intervals of the entries of $\Sigma^{-1}$. We generated $n=1000$ independent samples $\{x_i\}_{i=1}^{n}$ from a truncated GGM \eqref{tGGM} with $d=10$ and the $G$ given in the top panel of Figure~\ref{fig_GGM}.
Namely, there are three clusters $(x_1,x_2,x_3),(x_4,x_5,x_6)$, and $(x_7,x_8,x_9)$ of three variables and one isolated variable $x_{10}$. We set all the diagonal entries of $\Sigma^{-1}$ to 1 and all the nonzero off-diagonal entries of $\Sigma^{-1}$ to 0.5. We introduced missing values on $x_3$, $x_6$ and $x_9$ by using the following MAR mechanism: for $k=1,2,3$, a random vector $c_k \in \mathbb{R}^{10}$ was generated with $(c_k)_3=(c_k)_6=(c_k)_9=0$ and $(c_k)_j \sim {\rm N} (0,1) \ (j \neq 3,6,9)$, and then $x_{3k}$ was set to missing with probability $1/(3+\exp(c_k^{\top} x))$. The proportion of complete data was about 40\%. Then, we fitted the truncated GGM \eqref{tGGM} to $\{x_i\}_{i=1}^{n}$ by using FINCE and FISCORE with 100 imputations. We used ${\rm N}(0,2)$ truncated to the positive orthant as the proposal distribution for missing entries. In FINCE, we generated $n=1000$ noise samples $\{y_i\}_{i=1}^{n}$ from the product of the coordinate-wise exponential distributions with the same mean as $\{x_i\}_{i=1}^{n}$. We determined the graph $G$ by collecting all edges $(i,j)$ such that the 95\% confidence interval of $(\Sigma^{-1})_{ij}$ did not include zero. Figure~\ref{fig_GGM} shows the result of one realization. We calculated the proportions of falsely selected edges (false positives) and falsely unselected edges (false negatives) in 100 realizations. The results are given in Table~\ref{tab:ggm}. It shows that the coverage probabilities of the confidence intervals are approximately equal to 95\% in both FINCE and FISCORE. \begin{figure}[h!]
\begin{center} truth\\ \vspace{0.2in} \begin{tikzpicture}[every node/.style={circle,draw}] \node (A) at (0,0) {}; \node (B) at (2,0) {}; \node (C) at (1,1) {}; \node (D) at (2,2) {}; \node (E) at (4,2) {}; \node (F) at (3,3) {}; \node (G) at (4,0) {}; \node (H) at (6,0) {}; \node (I) at (5,1) {}; \node (J) at (3,1) {}; \foreach \u \v in {A/B,B/C,C/A,D/E,E/F,F/D,G/H,H/I,I/G} \draw (\u) -- (\v); \end{tikzpicture} \end{center} \vspace{0.3in} \begin{center} FINCE\\ \vspace{0.2in} \begin{tikzpicture}[every node/.style={circle,draw}] \node (A) at (0,0) {}; \node (B) at (2,0) {}; \node (C) at (1,1) {}; \node (D) at (2,2) {}; \node (E) at (4,2) {}; \node (F) at (3,3) {}; \node (G) at (4,0) {}; \node (H) at (6,0) {}; \node (I) at (5,1) {}; \node (J) at (3,1) {}; \foreach \u \v in {B/C,D/E,D/F,E/G,G/I,H/I} \draw (\u) -- (\v); \end{tikzpicture} \end{center} \vspace{0.3in} \begin{center} FISCORE\\ \vspace{0.2in} \begin{tikzpicture}[every node/.style={circle,draw}] \node (A) at (0,0) {}; \node (B) at (2,0) {}; \node (C) at (1,1) {}; \node (D) at (2,2) {}; \node (E) at (4,2) {}; \node (F) at (3,3) {}; \node (G) at (4,0) {}; \node (H) at (6,0) {}; \node (I) at (5,1) {}; \node (J) at (3,1) {}; \foreach \u \v in {A/B,A/C,B/C,B/F,D/E,D/F,E/F,G/I,H/I} \draw (\u) -- (\v); \end{tikzpicture} \end{center} \caption{Selected graphs} \label{fig_GGM} \end{figure} \begin{table}[h!] \centering \caption{Proportions of false positives and false negatives} \begin{tabular}{lcc} & FINCE & FISCORE \\ FP & 10.5\% & 6.4\% \\ FN & 12.6\% & 23.3\% \\ \end{tabular} \label{tab:ggm} \end{table} \section{Conclusion} We have proposed estimation methods for unnormalized models with missing data: FINCE and FISCORE. The proposed methods are computationally efficient, valid under general missing-data mechanisms, and enable statistical inference via confidence intervals. In this study, we focus on NCE and score matching.
An interesting direction for future work is to investigate the theory of FICD (fractional imputation with contrastive divergence) and its application to large-scale problems. Extending the recently developed statistically efficient estimators for unnormalized models \citep{Uehara2} to the missing data setting is another interesting future problem. \newpage \bibliographystyle{chicago}
\section{Introduction} Control of systems with uncertainties is a central challenge in control and is an extensively researched topic. There are various sub-fields in control such as stochastic control \cite{kumar2015stochastic, aastrom2012introduction}, robust control \cite{skogestad2007multivariable} and adaptive control \cite{sastry2011adaptive, ioannou2012robust} that address the challenge of controller synthesis for different types of uncertainties. In this work we are concerned with the problem of online control of systems with uncertainties such as disturbances and adversarial controller costs. Performance in online control is measured by how the regret, defined as the deviation of the performance from that of the best policy, scales with the duration $T$. The objective in online control is to design algorithms that adapt to disturbances and adversarial costs so that the regret scales sub-linearly in $T$, i.e., as $T^\alpha$ with $\alpha < 1$. Classical adaptive control investigates the problem of control of systems with parametric, structural and parametrizable disturbance uncertainties \cite{tao2014multivariable}. The main focus in classical adaptive control is the stability of the system and asymptotic tracking performance. Adaptive control has been studied for systems of all types such as linear, non-linear, and stochastic. There are many variants of adaptive control such as adaptive model predictive control \cite{heirung2017dual, lorenzen2017adaptive}, adaptive learning control \cite{marino2012robust, yu2015switching}, stochastic adaptive control \cite{aastrom2013adaptive} and robust adaptive control \cite{ioannou2012robust}. These variants address the design of adaptive controllers for different variations of the basic adaptive control setting. Many papers and books have been written on adaptive control; see for example \cite{sastry2011adaptive, ioannou2012robust, aastrom2013adaptive}.
Thus, adaptive control is a very rich and extensively studied topic. The key variation of the online control setting from classical adaptive control is the regret objective and, in some cases, the general nature of the costs, which can be adversarial and unknown a priori. Classical adaptive control approaches can therefore be inadequate for analysing online control problems, which are typically solved by merging tools from statistical learning, online learning and optimization, and control theory. The field of online control has seen rising interest in the last few years. One of the first settings to be extensively explored was the Linear Quadratic Regulator (LQR) with an unknown system and stochastic disturbances. Abbasi-Yadkori \& Szepesv\'ari \cite{abbasi2011regret} were the first to study the online LQR problem with an unknown system and stochastic disturbances. The authors proposed an adaptive algorithm that achieved $\mathcal{O}(\sqrt{T})$ regret w.r.t.\ the best linear control policy, which is the optimal policy. Several subsequent works improved on the algorithm of \cite{abbasi2011regret}, which was computationally inefficient. Dean et al. \cite{dean2018regret} were the first to propose an efficient algorithm for the same problem. They showed that their algorithm achieved a regret of $\mathcal{O}(T^{2/3})$. Cohen et al. \cite{cohen2019learning} and Mania et al. \cite{mania2019certainty} improved on this result by providing an efficient algorithm with a regret guarantee of $\mathcal{O}(T^{1/2})$ for the same problem. Mania et al. \cite{mania2019certainty} extended these results to the partial observation setting and established $\mathcal{O}(\sqrt{T})$-regret for the partially observed Linear Quadratic Gaussian (LQG) setting. Cohen et al. \cite{cohen2018online} provided an $\mathcal{O}(\sqrt{T})$-regret algorithm for a variant of the online LQR, where the system is known and the noise is stochastic but the controller cost function is an adversarially chosen quadratic function.
Recently, Simchowitz et al. \cite{simchowitz2020naive} showed that $\mathcal{O}(T^{1/2})$ is the optimal regret for the online LQR control problem. While the above works focussed on online LQR, others have studied the control of much more general systems: linear dynamic systems with adversarial disturbances and adversarial cost functions. Agarwal et al. \cite{agarwal2019online} considered the control of a known linear dynamic system with additive adversarial disturbance and an adversarial convex controller cost function. They proposed an online learning algorithm that learns a Disturbance Response Controller (DRC): a linear feedback of the portion of the output contributed by the disturbances up to a certain history. They showed that their proposed controller achieves $\mathcal{O}(\sqrt{T})$-regret with respect to the best DRC in hindsight. In a subsequent work, Agarwal et al. \cite{agarwal2019logarithmic} showed that a polylogarithmic regret is achievable for strongly convex controller costs and well-conditioned stochastic disturbances. Hazan et al. \cite{hazan2020nonstochastic} extended the setting of \cite{agarwal2019online} to the case where the system is unknown. They showed that, while $\mathcal{O}(\sqrt{T})$-regret is not achievable when the system is unknown, a sub-linear regret of $\mathcal{O}(T^{2/3})$ can still be achieved. Recently, \cite{simchowitz2020improper} generalized these results to provide similar regret guarantees for the same setting with partial observation, for both known and unknown systems. In this work we study the online control setting of \cite{simchowitz2020improper}: linear dynamic systems with additive disturbance and adversarial controller cost, where the system state is only partially observable. We assume that the system is known and the cost functions are general convex controller costs.
Previous works in the online adversarial setting \cite{agarwal2019online, agarwal2019logarithmic, hazan2020nonstochastic, simchowitz2020improper} assume the cost functions to be either convex or strongly convex. Reiterating the results of \cite{simchowitz2020improper} for the known system case, what has been established is that $\mathcal{O}(\sqrt{T})$ regret is achievable when the cost functions are convex, and $\mathcal{O}(\log{T})$ regret is achievable when the cost functions are strongly convex. The question we address in this work is: {\it can we achieve intermediate regret guarantees for intermediate convex conditions}? \subsection{Our Contribution} The online control algorithm we propose is the adaptive gradient extension of the online learning disturbance response controller proposed in \cite{agarwal2019online, simchowitz2020improper}. Here, adaptive gradient refers to the adaptation of the gradient step size of the gradient learning algorithm used in \cite{agarwal2019online, simchowitz2020improper}. Thus, to the best of our knowledge, we present the {\it first adaptive gradient online learning control algorithm.} We show that the proposed learning algorithm {\it recovers the previously established regret guarantee of $\mathcal{O}(\sqrt{T})$ for general convex controller cost functions and $\mathcal{O}(\log{T})$ for strongly-convex and smooth controller cost functions (see \cite{simchowitz2020improper}), and simultaneously achieves an intermediate regret between $\mathcal{O}(\sqrt{T})$ and $\mathcal{O}(\log{T})$ for intermediate convex conditions of the controller cost functions}. We prove our main result by establishing a new result for adaptive gradient online learning for the problem of Online Convex Optimization with Memory (OCO-M), which is the online convex optimization problem where the cost at a time step also depends on a certain history of past decisions.
\subsection{Other Related Work} {\it Online Convex Optimization (OCO)}: In the OCO framework, the learner encounters a sequence of convex loss functions which are unknown beforehand and may vary arbitrarily over time. The learner updates the estimate of the optimal solution at each time-step based on the previous losses and incurs a loss for its updated estimate as given by the loss function for this time step. At the end of each step, either the loss function may be revealed, a scenario referred to as full information feedback, or only the experienced loss is revealed, a scenario known as bandit feedback. The objective of the learner is to minimize the loss accumulated over time. Under the full information feedback setting, it has been established that the best possible regret scales as $O(T^{1/2})$ (resp. $O(\log T)$) for convex (resp. strongly convex) loss functions, where $T$ is the number of time steps \cite{zinkevich2003online, hazan2006logarithmic, abernethy2009stochastic}. These results have also been extended to constrained online convex optimization, where it has been shown that the best regret scales as $O(T^{\max\{c,1-c\}})$ for the cost and $O(T^{1-c/2})$ for constraint violation, where $c$ is a constant \cite{jenatton2016adaptive, yuan2018online}. Compared to OCO, the key difference in online control is the dependence of the decision on the state of the system; thus, in online control, what is to be learnt is a control policy instead of a single decision. {\it Policy Optimization}: Fazel et al. \cite{fazel2018global} proved that policy gradient based learning converges asymptotically to the optimal policy for the Linear-Quadratic Regulator (LQR) problem. Zhang et al. \cite{zhang2019policy} extended this result to the ${\cal H}_2/{\cal H}_\infty$ control problem. Recently, \cite{molybog2020global} proved asymptotic convergence of a gradient based meta-learner for the LQR problem. All of these works provide asymptotic convergence guarantees.
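A minimal instance of the full-information OCO setting discussed above (toy quadratic losses, an assumed example): for $l_t(x)=(x-a_t)^2$, online gradient descent with step size $\eta_t=1/(2t)$ makes the iterate exactly the running mean of the targets, which is the best fixed decision in hindsight.

```python
# Online gradient descent on l_t(x) = (x - a_t)^2 with eta_t = 1/(2t):
# the update x <- x - (1/t) * (x - a_t) keeps x equal to the running
# mean of a_1, ..., a_t, i.e. the hindsight-optimal fixed decision.
def ogd(targets):
    x = 0.0
    for t, a in enumerate(targets, start=1):
        x -= (1.0 / (2.0 * t)) * 2.0 * (x - a)   # gradient step
    return x

targets = [0.5, -1.0, 2.0, 1.5]
x_final = ogd(targets)   # equals sum(targets) / len(targets) = 0.75
```

This collapse to the empirical mean is special to strongly convex quadratics with $1/t$-type step sizes; it is the mechanism behind the $O(\log T)$ regret bounds cited above.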
{\it Notation}: We denote the transpose of a vector $X$ by $X^\top$. We denote the expectation of a random variable $X$ by $\mathbb{E}[X]$ and the expectation w.r.t.\ a filtration $\mathcal{F}_t$ by $\mathbb{E}[. \vert\mathcal{F}_t]$. The minimum singular value of a matrix $M$ is denoted by $\sigma_{\text{min}}(M)$ and the minimum eigenvalue is denoted by $\lambda_{\text{min}}(M)$. The function $\rho(\cdot)$ denotes the spectral radius of the input matrix. We define $\norm{\cdot}$ to be the 2-norm of a vector or matrix, as the case may be. For a given variable $X_{t}$ that is dependent on time $t$, $X_{t_1:t_2}$ is used to denote the sequence $(X_{t_1}, X_{t_1+1},...,X_{t_2})$. By $\sum X_{t_1:t_2}$, we denote the sum of the elements in the sequence $X_{t_1:t_2}$. The big $\mathcal{O}(\cdot)$ is the standard order notation and $\tilde{\mathcal{O}}(\cdot)$ is the standard order notation that includes polylog factors. \section{Problem Preliminaries} \noindent The problem we consider is the online control of a linear dynamical system given by \begin{align} & x_{t+1} = Ax_t +Bu_t +w_t, \nonumber \\ & y_t = Cx_t + e_t, \label{eq:sys-dyn} \end{align} where $x_t \in \mathbb{R}^{d_x}$ is the state of the system, $u_t \in \mathbb{R}^{d_u}$ is the control input generated by the controller, $w_t, e_t$ are bounded disturbances of appropriate dimensions, and $y_t \in \mathbb{R}^{d_y}$ is the observed output. The objective is to regulate the response of this system so as to achieve sub-linear regret with respect to the best policy from a class of policies, also called the comparator policy. The class of policies we consider for the comparator is that of {\it linear dynamic controllers}, denoted by $\Pi$.
A linear dynamic controller $\pi \in \Pi$ is a linear dynamic system given by $(A_\pi, B_\pi, C_\pi, D_\pi)$ with internal state $s^\pi_t \in \mathbb{R}^{d_\pi}$, whose output is the control input at time $t$: \begin{equation} s^\pi_{t+1} = A_\pi s^\pi_t +B_\pi y_t, \quad u^\pi_t = C_\pi s^\pi_{t} + D_\pi y_t. \label{eq:ldc} \end{equation} We denote the online controller for the system in Eq. \eqref{eq:sys-dyn} by $\mathcal{C}$. At time $t$, the controller has access only to the following information: (i) all prior cost functions $l_{1:t-1}$, (ii) all prior observations $y_{1:t-1}$, and (iii) all prior control inputs $u_{1:t-1}$. Unlike in the classical setting, the controller does not have access to the future cost functions, which are adversarial. The controller has to choose a policy to compute the control action at time $t$ based on this information. {\it Our online control setting is the following}: The controller $\mathcal{C}$, on applying the control input $u_t$ at time $t$, suffers the loss $l_t(y_t,u_t)$, an adversarially chosen convex function that is unknown a priori. The controller can observe the loss function only after its decision at time step $t$. The controller can then use this information to update its control policy. The performance of the online controller is measured by the regret, which is the total cost incurred by the controller over a duration $T$ minus the total cost incurred by the best controller in hindsight taken from the class of controllers $\Pi$. Denote the system output and input corresponding to a controller $\pi \in \Pi$ by $(y^\pi_t, u^\pi_t)$. Let $J_T(\pi) = \sum_{t=1}^{T} l_t(y^\pi_t,u^\pi_t), \pi \in \Pi$. Then, the regret for the controller $\mathcal{C}$ is given by \begin{equation} R_T(\mathcal{C}) = \mathbb{E}[J_T(\mathcal{C})] - \min_{\pi \in \Pi} \mathbb{E}[J_T(\pi)]. \label{eq:regret-defn} \end{equation} \subsection{Assumptions} We state the assumptions we make below.
\begin{assumption} The system is stable, i.e., $\rho(A) < 1$. The system matrices $A,B$ are known. \label{ass:stability} \end{assumption} The assumption on the spectral radius (or the assumption that a feedback rule that stabilizes the system is known) is standard in online learning and control problems \cite{abbasi2011regret, dean2018regret, cohen2019learning, simchowitz2020improper}. We emphasize that analysis without stability or the knowledge of a stabilizing feedback law is still a hard and open challenge in online control. While there are works that investigate simultaneous safe exploration and control, such as in Reinforcement Learning \cite{berkenkamp2017safe}, these works do not study finite-time performance objectives such as regret. \begin{assumption} The noise $w_t$ and $e_t$ are bounded, stochastic and i.i.d. Their distribution is known and $\mathbb{E}[w^s_t] = 0$, $\mathbb{E}[e^s_t] = 0$. \label{ass:noise} \end{assumption} \begin{assumption} The loss function $l_t$ is convex and, for $z^\top = [y^\top_t, u^\top_t], (z')^\top = [(y')^\top, (u')^\top]$ with $R = \max\{\norm{z},\norm{z'},1\}$, satisfies $\norm{l_t(y_t,u_t) - l_t(y',u')} \leq LR\norm{z - z'}$. \label{ass:lipschitz} \end{assumption} The convexity assumption is standard in online learning and optimization and in online control settings. Most of online control, especially the setting with general adversarial cost functions and disturbances, is built on tools from online convex analysis. This is because the tools for online optimization analysis have been well understood and developed for the convex setting, while such analyses for general non-convex cost settings are still lacking. The second part of the assumption states that the loss functions are locally Lipschitz. We note that the assumptions stated here are exactly the assumptions in the state-of-the-art work in online control \cite{simchowitz2020improper}.
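To make the setting concrete, the following toy rollout simulates the partially observed system in Eq. \eqref{eq:sys-dyn} under a fixed linear dynamic controller of the form \eqref{eq:ldc}; the scalar system matrices, the bounded uniform noise, and the quadratic per-step cost are all illustrative assumptions, not the paper's setup.

```python
import random

# Closed-loop rollout: stable scalar plant (rho(A) < 1, Assumption 1),
# bounded i.i.d. noise (Assumption 2), convex per-step cost
# l_t(y, u) = y^2 + u^2 (one admissible choice under Assumption 3).
random.seed(3)
A, B, C = 0.7, 1.0, 1.0                   # plant
Api, Bpi, Cpi, Dpi = 0.5, 0.1, 0.2, -0.3  # a hypothetical LDC pi
x, s, total_cost, T = 0.0, 0.0, 0.0, 1000
for _ in range(T):
    w = random.uniform(-0.1, 0.1)         # bounded process noise
    e = random.uniform(-0.1, 0.1)         # bounded observation noise
    y = C * x + e                         # partial observation
    u = Cpi * s + Dpi * y                 # LDC output (eq. ldc)
    total_cost += y * y + u * u
    s = Api * s + Bpi * y                 # LDC state update
    x = A * x + B * u + w                 # plant update (eq. sys-dyn)
avg_cost = total_cost / T                 # J_T(pi) / T for this rollout
```

The comparator in Eq. \eqref{eq:regret-defn} is the best such $(A_\pi, B_\pi, C_\pi, D_\pi)$ in hindsight; the online controller must compete with it while seeing each $l_t$ only after acting.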
\section{Online Control Algorithm} The online control algorithm we propose for the general controller $\mathcal{C}$ is the {\it adaptive gradient} version of the online DRC (or DRC-GD) proposed in \cite{simchowitz2020improper}. We call this the {\it disturbance response controller - adaptive gradient descent} (DRC-AGD). We briefly review the online DRC in \cite{simchowitz2020improper}, and then present the DRC-AGD algorithm. \subsection{Online Disturbance Response Controller} We define $y^{nat}$ to be the natural output, the system output when the control inputs are zero, i.e., \begin{align} & y^{nat}_t = e_t + \sum_{s=0}^{t-1} CA^{t-s-1}w_s \nonumber \\ & = y_t - \sum_{s = 1}^{t-1} G^{[s]} u_{t-s}, \ G^{[s]} = CA^{s-1}B. \nonumber \end{align} Since $e_t, w_t$ are bounded for all $t$ and $\rho(A) < 1$, $y^{nat}_t$ is bounded for all $t$. We define $R_{nat}$ to be the bound on $y^{nat}_t$. The DRC as defined in \cite{simchowitz2020improper} is parameterized by an $m$-length sequence of matrices, denoted by $M = (M^{[i]})_{i=0}^{m-1}$. The DRC's control decision is given by \begin{equation} u_t = \sum_{s=0}^{m-1} M^{[s]}y^{nat}_{t-s}. \label{eq:drc} \end{equation} We define the following class of disturbance response controllers: \begin{equation} \mathcal{M}(m,R_M) = \left\{M = (M^{[s]})_{s=0}^{m-1}: \norm{M} = \sum_s \norm{M^{[s]}} \leq R_M \right\} \label{eq:drc-class} \end{equation} The online learning algorithm, DRC-GD, proposed in \cite{simchowitz2020improper} continuously updates the feedback gain $M$ as the loss functions are revealed. It applies the control input as defined in Eq. \eqref{eq:drc} with the current value of the feedback gain $M$. The algorithm then updates the feedback gain $M$ based on the revealed loss function, similar to how the decision is updated in OCO. Thus the disturbance feedback gain $M$ is equivalent to the decision in OCO. For the choice of regret as defined in Eq.
\eqref{eq:regret-defn}, the disturbance response controller is a good choice given that the best disturbance response controller for the realized sequence of cost functions achieves a cost approximately equal to that of the best linear dynamic controller. We will show this in the proof of our main result. Thus, by learning the disturbance response controller online, the controller approaches the performance of the optimal linear dynamic controller. We pick the DRC control structure instead of a linear dynamic controller because the DRC form has advantages from the point of view of online regret analysis. It enables the regret analysis to be approximated by the regret analysis of a limited-memory problem, where memory refers to the number of past controller parameters on which the realized cost at time $t$ depends. This would not be feasible with a linear dynamic controller because, unlike Eq. \eqref{eq:drc}, its control input at any time depends on the entire history of control inputs. We introduce the following notation for ease of presentation. Let $M^{[s]}(j)$ denote the $j$th row of the matrix $M^{[s]}$. Let $z(i:j)$ denote the sub-vector of the vector $z$ corresponding to the elements from $i$ to $j$. Let $P$ denote the vector given by $P({sq+(j-1)d_y+1:sq+jd_y}) = (M^{[s]}(j))^\top$, where $q = d_yd_u$ and $1 \leq j \leq d_u$. Essentially, this defines $P$ to be the vector of the transposed rows of the $M^{[s]}$ matrices stacked one above the other. We introduce the following definitions that will be required for discussing the algorithms.
\begin{definition} {\it $u_t\left[M_t \vert y^{nat}_{1:t}\right] := \sum_{s = 0}^{m-1} M^{[s]}_t y^{nat}_{t-s}$, \\ $\tilde{y}_t[P_{t:t-h} \vert y^{nat}_{1:t}] := y^{nat}_t + \sum_{s = 1}^h G^{[s]}u_{t-s}$, \\ $F_t\left[P_{t:t-h} \vert y^{nat}_{1:t}\right] := l_t\left(\tilde{y}_t\left[P_{t:t-h} \vert y^{nat}_{1:t}\right], u_t\left[M_t \vert y^{nat}_{1:t}\right]\right)$, \\ $f_t(P \vert y^{nat}_{1:t}) := F_t[\{P,P,...,P\} \vert y^{nat}_{1:t}].$} \label{def:Ft} \end{definition} The term $\tilde{y}_t$ is an approximate output that depends only on the past $h$ control inputs. Consequently, this approximation is a function only of $P_{t:t-h}$ for a given $y^{nat}_{1:t}$. The function $F_t$ is the loss $l_t$ evaluated for this approximate output $\tilde{y}_t$, and so it too is a function only of $P_{t:t-h}$. The function $f_t$ is the loss $F_t$ when $P_k$, for all $k$ s.t. $t \geq k \geq t-h$, is fixed to $P$, and so we term it the memory-less loss. Minimizing the regret (Eq. \eqref{eq:regret-defn}) is an Online Convex Optimization problem with Memory (OCO-M) \cite{anava2015online} because the loss function at a time step depends on the past control inputs, which is the case even with the approximated cost $F_t[P_{t:t-h}]$, a function of the truncated output $\tilde{y}_t$. Following the key idea in \cite{anava2015online}, the DRC-GD algorithm \cite{simchowitz2020improper} uses the gradient of the memory-less function $f_t(\cdot)$ to update $P$. This, as can be expected, only minimizes the regret of $\sum f_t(\cdot)$ instead of the approximated cost $F_t[P_{t:t-h}]$. But as shown in \cite{anava2015online}, the memory-less regret closely approximates the regret of the approximated cost $F_t[P_{t:t-h}]$, which in turn, as we show later, is a good approximation of the regret of the actual realized cost. Let $\mathcal{P}(m,R_M) = \left\{P : \sum_{s=0}^{m-1} \norm{M^{[s]}} \leq R_M \right\}$.
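To make the quantities in Definition \ref{def:Ft} concrete, the following sketch computes the natural output, the DRC input of Eq. \eqref{eq:drc} and the truncated output $\tilde{y}_t$ for a toy scalar system; the system matrices, horizons and histories below are illustrative assumptions, not part of the setting above.

```python
import numpy as np

# Illustrative scalar system with rho(A) < 1; G[s] = C A^{s-1} B for s >= 1
A, B, C = np.array([[0.5]]), np.array([[1.0]]), np.array([[1.0]])
G = [None] + [C @ np.linalg.matrix_power(A, s - 1) @ B for s in range(1, 10)]

def natural_output(y_t, u_hist):
    """y^nat_t = y_t - sum_{s>=1} G^[s] u_{t-s}; u_hist[-1] is u_{t-1}."""
    y_nat = y_t.copy()
    for s in range(1, min(len(u_hist), len(G) - 1) + 1):
        y_nat = y_nat - G[s] @ u_hist[-s]
    return y_nat

def drc_input(M, ynat_hist):
    """u_t = sum_{s=0}^{m-1} M^[s] y^nat_{t-s}; ynat_hist[-1] is y^nat_t."""
    u = np.zeros(M[0].shape[0])
    for s in range(min(len(M), len(ynat_hist))):
        u = u + M[s] @ ynat_hist[-1 - s]
    return u

def truncated_output(ynat_t, u_hist, h):
    """ytilde_t = y^nat_t + sum_{s=1}^h G^[s] u_{t-s}: only h past inputs enter."""
    y = ynat_t.copy()
    for s in range(1, min(h, len(u_hist)) + 1):
        y = y + G[s] @ u_hist[-s]
    return y
```

The memory-less loss $f_t(P \vert y^{nat}_{1:t})$ is then just $l_t$ evaluated on these quantities with every controller parameter in the $h$-step window fixed to the same value.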
The learning algorithm for the online DRC proposed in \cite{simchowitz2020improper} initializes $P$ to an element drawn from the set $\mathcal{P}(m,R_M)$. It then updates $P$ along the negative gradient of the memory-less loss function $f_t(\cdot)$ as the loss functions are revealed, to continuously improve the feedback controller: \begin{equation} P \leftarrow \textnormal{Proj}_\mathcal{P}\left(P - \eta_{t+1} \partial f_t\left(P \vert y^{nat}_{1:t}\right)\right). \label{eq:ogd} \end{equation} In \cite{simchowitz2020improper}, the authors show that the disturbance response controller with the memory-less gradient update given by Eq. \eqref{eq:ogd}, where $\eta_t$ is fixed to a particular value (see Theorem 2, \cite{simchowitz2020improper}), achieves a regret of $\tilde{\mathcal{O}}(\sqrt{T})$ when the cost functions are general convex functions and polylog$(T)$ when the cost functions are smooth and strongly convex. In this work, we extend this online DRC controller by using an adaptive step rate akin to \cite{hazan2008adaptive} instead of a fixed step rate $\eta$. We discuss our extended algorithm in the next section. \subsection{Online Disturbance Response Controller: DRC-AGD} In this section, we present the DRC-AGD algorithm. First, we briefly review the adaptive gradient online learning algorithm \cite{hazan2008adaptive} for the standard OCO problem and then present our new regret result for adaptive gradient learning for the OCO-M problem. We then introduce our DRC-AGD online control algorithm and use this result to analyse its regret. \subsubsection{Adaptive Gradient Online Learning} Consider the standard online convex optimization (OCO) setting (see \cite{hazan2008adaptive}). At time $t$, the player chooses an action $u_t$ from some convex subset $\mathcal{K}$ of $\mathbb{R}^n$, where $\max_{x \in \mathcal{K}} \norm{x} \leq D$, and the adversary chooses a convex loss function $f_t(\cdot)$.
The regret for the player over duration $T$ is given by \begin{equation} R_T = \sum_{t=1}^{T} f_t(u_t) - \min_{u \in \mathcal{K}} \sum_{t=1}^{T} f_t(u). \label{eq:regret-oco} \end{equation} Let $f_t$ be $H_t$-strongly convex, i.e., $f_t(u^{*}) \geq f_t(u) + \nabla f_t(u)^\top (u^{*} - u) + \frac{H_t}{2} \norm{u^{*} - u}^2_2$ for all $u, u^{*} \in \mathcal{K}$, and let $\norm{\nabla f_t} \leq G_t$. Once the loss function is revealed at time $t$, the algorithm can use it to update its decision. The adaptive gradient online learning algorithm proposed in \cite{hazan2008adaptive} updates the decision $u_t$ by the following gradient step: \begin{align} & u_{t+1} = \text{Proj}_\mathcal{K}\left(u_t-\eta_{t+1} \partial\left(f_t(u)+g_t(u)\right)\right) \nonumber \\ & \eta_{t+1} = \frac{1}{\sum H_{1:t}+ \sum \lambda_{1:t}}, \label{eq:aogd} \end{align} where $\sum H_{1:t} = \sum_{k=1}^t H_k, \sum \lambda_{1:t} = \sum_{k=1}^t \lambda_k$, and the $\lambda_t$s are suitably defined parameters. Here, the step rate at each time step is updated using the strong convexity parameter $H_t$ of the loss function at $t$; the step rate thus adapts to the observed curvature, which is why the algorithm is called adaptive gradient online learning. The regret for this algorithm can be characterized as in the following Lemma. \begin{lemma} {\it Consider the online update given by Eq. \eqref{eq:aogd} with $g_t(u) = \frac{1}{2}\lambda_t \norm{u}_2^2$. Then for any sequence of $\lambda_1, \lambda_2,...,\lambda_T$, \begin{equation} R_T \leq \frac{1}{2}D^2 \sum\lambda_{1:T} + \frac{1}{2}\sum_{t=1}^T \frac{(G_t +\lambda_t D)^2}{\sum H_{1:t}+\sum \lambda_{1:t}}. \end{equation}} \label{lem:aogd-regret} \end{lemma} Please see Theorem 3.1 in \cite{hazan2008adaptive} for the proof. This is the basic result on which the regret rates in \cite{hazan2008adaptive} are built.
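A single step of the update in Eq. \eqref{eq:aogd} can be sketched as follows; the Euclidean-ball constraint set and the particular histories below are illustrative assumptions, not the general $\mathcal{K}$.

```python
import numpy as np

def proj_ball(u, D):
    """Euclidean projection onto {x : ||x||_2 <= D}, a simple stand-in for Proj_K."""
    n = np.linalg.norm(u)
    return u if n <= D else u * (D / n)

def adaptive_gd_step(u, grad_ft, H_hist, lam_hist, D):
    """One step of the adaptive update: descend on f_t + g_t with
    g_t(u) = (lam_t/2)||u||^2 and eta_{t+1} = 1 / (sum H_{1:t} + sum lam_{1:t})."""
    eta = 1.0 / (sum(H_hist) + sum(lam_hist))
    grad = grad_ft + lam_hist[-1] * u      # gradient of f_t + g_t at u
    return proj_ball(u - eta * grad, D)
```

With $H_t \equiv 0$ and $\lambda_t \propto 1/\sqrt{t}$ the step rate is of order $1/\sqrt{t}$, as in standard online gradient descent for convex losses, while with $H_t \geq H > 0$ and $\lambda_t = 0$ it becomes the $1/(Ht)$ rate used for strongly convex losses.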
Here, the parameters $\lambda_{1:T}$ can be suitably chosen based on the convexity conditions to achieve intermediate regret rates for intermediate convexity conditions on the sequence of loss functions; for example, conditions such as $H_t \propto t^{-\alpha}$. We direct the reader to \cite{hazan2008adaptive} for a more detailed discussion of their results. \subsubsection{Adaptive Gradient Online Learning for OCO-M} In this section we discuss the extension of adaptive gradient learning to the OCO-M problem. The difference in the OCO-M setting is that the cost function at a particular time $t$ also depends on a certain history of past decisions. More specifically, the cost functions $f_t$ in OCO-M are a function of the decisions up to $h$ time steps in the past, i.e., $u_{t:t-h}$, where $h$ is a given number. Thus, the regret in the OCO-M problem is the following: \begin{equation} R_T = \sum_{t=1}^{T} f_t(u_{t:t-h}) - \min_{u \in \mathcal{K}} \sum_{t=1}^{T} f_t(u), \label{eq:regret-oco-m} \end{equation} where we used $f_t(u)$ as shorthand notation for the cost when $u_{t-k} = u$ for all $k$, where $0 \leq k \leq h$. In the next theorem we present the equivalent of Lemma \ref{lem:aogd-regret} for the OCO-M problem, which we will use to analyse our main algorithm. \begin{theorem} {\it For a sequence of $(h+1)$-variate functions $F_t$, define $f_t(u) = F_t(u,u,...,u)$. Let $G_c$ be an upper bound on the coordinate-wise Lipschitz constant of $F_t$, $G_f$ be an upper bound on the Lipschitz constant of $f_t$, $f_t$ be $H_t$-strongly convex, and $D$ be an upper bound on the diameter of $\mathcal{K}$. Consider the online update given by Eq. \eqref{eq:aogd}, with $g_t(u) = \frac{1}{2}\lambda_t \norm{u}_2^2$.
Then for any sequence of $\lambda_1, \lambda_2,...,\lambda_T$ with $\lambda_{j} \leq \lambda_{i}$ for $j \geq i$ (i.e., non-increasing $\lambda_t$), \begin{align} & R_T = \sum_{t = h+1}^T F_t(u_t,...,u_{t-h}) - \min_{u\in\mathcal{K}} \sum_{t = h+1}^T F_t(u,...,u) \nonumber\\ & \leq \frac{1}{2}D^2 \sum\lambda_{1:T} + \frac{1}{2}\sum_{t=1}^T \frac{\tilde{G}_{f,t}^2}{\sum H_{1:t}+\sum \lambda_{1:t}}, \nonumber \end{align} where $\tilde{G}_{f,t} = \sqrt{\left(G_f + \lambda_{t}D\right)(G_f + \lambda_{t}D+2G_ch^{3/2})}$.} \label{thm:aogd-memory} \end{theorem} Please see the Appendix for the proof. \subsubsection{Adaptive Gradient Online Learning for Control} Here, we extend the adaptive gradient descent learning idea to the online DRC. The gradient learning algorithm we propose, which we call DRC-AGD, is the extension of Eq. \eqref{eq:ogd} with an adaptive step rate similar to Eq. \eqref{eq:aogd}: \begin{align} & P_{t+1} \nonumber \\ & = \textnormal{Proj}_\mathcal{P}\left(P_t - \eta_{t+1}\partial \left( \mathbb{E}\left[f_t\left[P_{t} \vert y^{nat}_{1:t}\right]\right] + g_t(P_t)\right)\right) \nonumber \\ & g_t(P) = \frac{1}{2}\lambda_t \norm{P}_2^2, ~ \eta_{t+1} = \frac{1}{\sum H_{1:t}+\sum \lambda_{1:t}}, \label{eq:drc-aogd} \end{align} where the update is by the gradient of the memory-less cost $\mathbb{E}\left[f_t\left[P_{t} \vert y^{nat}_{1:t}\right]\right]$, with an adaptive step rate $\eta_{t+1}$, where $H_t$ is the strong convexity of $\mathbb{E}\left[f_t\left[P_{t} \vert y^{nat}_{1:t}\right]\right]$ and the $\lambda_t$s are suitably chosen parameters as before.
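The update in Eq. \eqref{eq:drc-aogd} can be exercised end-to-end on a simulated scalar system with quadratic losses. Everything numeric here (the system, the noise levels, the heuristic $H_t$ and $\lambda_t$, the finite-difference gradient and the simple rescaling used in place of the exact projection) is an illustrative assumption, not the exact construction used in the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = 0.5, 1.0, 1.0                       # stable scalar system, rho(A) < 1
m, h, T, R_M = 3, 3, 300, 5.0
G = [0.0] + [C * A ** (s - 1) * B for s in range(1, T + 1)]  # Markov parameters

def memoryless_loss(M, ynat_hist):
    """f_t(M | y^nat_{1:t}) with quadratic l_t: recent inputs recomputed under fixed M."""
    def u_at(lag):                            # u_{t-lag} under the fixed parameter M
        return sum(M[s] * ynat_hist[-1 - lag - s]
                   for s in range(m) if len(ynat_hist) > lag + s)
    y_tilde = ynat_hist[-1] + sum(G[s] * u_at(s) for s in range(1, h + 1))
    return y_tilde ** 2 + u_at(0) ** 2

def num_grad(f, M, eps=1e-5):                 # finite-difference stand-in for the gradient
    g = np.zeros_like(M)
    for i in range(len(M)):
        Mp, Mm = M.copy(), M.copy()
        Mp[i] += eps; Mm[i] -= eps
        g[i] = (f(Mp) - f(Mm)) / (2 * eps)
    return g

M = np.zeros(m)                               # the decision ("P") being learned
x, u_hist, ynat_hist = 0.0, [], []
H_sum = lam_sum = total_loss = 0.0
for t in range(1, T + 1):
    w, e = rng.uniform(-0.1, 0.1, size=2)
    y = C * x + e
    ynat_hist.append(y - sum(G[s] * u_hist[-s] for s in range(1, len(u_hist) + 1)))
    u = sum(M[s] * ynat_hist[-1 - s] for s in range(min(m, len(ynat_hist))))
    total_loss += y ** 2 + u ** 2
    H_t, lam_t = 0.1, 1.0 / t                 # heuristic curvature / regularization
    H_sum += H_t; lam_sum += lam_t
    eta = 1.0 / (H_sum + lam_sum)
    M = M - eta * (num_grad(lambda Mv: memoryless_loss(Mv, ynat_hist), M) + lam_t * M)
    if np.abs(M).sum() > R_M:                 # keep sum_s |M^[s]| <= R_M by rescaling
        M *= R_M / np.abs(M).sum()
    x = A * x + B * u + w
    u_hist.append(u)
```

Because $\rho(A) < 1$, the natural outputs stay bounded, so the iterates and losses remain bounded even while $M$ is being learned.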
\begin{algorithm}[H] \DontPrintSemicolon \KwInput{Radius $R_M$, the matrices $G^{[i]}$, and the parameters $m$, $h$.} Initialize $P_1 \in \mathcal{P}$ \For{t = 1,....,T} { Observe $y_t$ and determine $y^{nat}_t = y_t - \sum_{i = 1}^{t-1} G^{[i]} u_{t-i}$ Choose $u_t = \sum_{s=0}^{m-1} M^{[s]}_t y^{nat}_{t-s}$ Observe the loss function and suffer the loss $l_t(y_t,u_t)$ Set $\eta_{t+1} = \frac{1}{\sum H_{1:t}+\sum \lambda_{1:t}}$ $P_{t+1} = \textnormal{Proj}_\mathcal{P}\left(P_t - \eta_{t+1}\partial \left( \mathbb{E}\left[f_t\left[P_{t} \vert y^{nat}_{1:t}\right]\right] + \frac{1}{2}\lambda_t \norm{P_t}_2^2\right)\right)$ } \caption{Disturbance Response Control - Adaptive Gradient Descent (DRC-AGD)} \label{alg:drc-agd} \end{algorithm} Algorithm \ref{alg:drc-agd} presents the full DRC-AGD algorithm. \subsubsection{Main Results} In DRC-AGD, the gradient of the memory-less cost $\mathbb{E}\left[f_t\left[P_{t} \vert y^{nat}_{1:t}\right]\right]$ is used. Hence, to apply Theorem \ref{thm:aogd-memory} to the analysis of the DRC-AGD algorithm, we need to establish the strong convexity of $\mathbb{E}\left[f_t\left[P_{t} \vert y^{nat}_{1:t}\right]\right]$. We also need to establish that $G_c$ and $G_f$ exist for the memory-less cost $\mathbb{E}\left[f_t\left[P_{t} \vert y^{nat}_{1:t}\right]\right]$; we prove all of this as part of the main theorem. In the next lemma we characterize the strong convexity of $\mathbb{E}\left[f_t\left[P_{t} \vert y^{nat}_{1:t}\right]\right]$ in terms of the strong convexity $H^l_t$ of $l_t$ (recall how $f_t$ depends on $l_t$ in Definition \ref{def:Ft}). \begin{lemma} {\it The function $\mathbb{E}\left[f_t\left[P_{t} \vert y^{nat}_{1:t}\right]\right]$ is $H_t$-strongly convex, where \begin{equation} H_t = H^l_t \left(\sigma^2_e + \sigma^2_w \left( \frac{\sigma_{\text{min}}(C)}{1+\norm{A}_2^2}\right)^2\right), \nonumber \end{equation} $\nabla^2 l_t \geq H^l_t$, $\mathbb{E}[(w^{s}_t)^2] \geq \sigma^2_w$, $\mathbb{E}[(e^{s}_t)^2] \geq \sigma^2_e$.
} \label{lem:strongconvexity-F} \end{lemma} Please see Proposition 7.1 in \cite{simchowitz2020improper} for the proof. We introduce an additional definition before we discuss our main theorem. \begin{definition} $\psi(i) = \sum_{j \geq i} \norm{CA^{j-1}B}_2, i > 0$. Since $\rho(A) < 1$, there exist $c > 0$ and $\rho \in (0,1)$ such that $\psi(i) \leq c\rho^i$. $R_{G^{*}} = 1 + \psi(1)$. \end{definition} In the next theorem we use Theorem \ref{thm:aogd-memory} to characterize the regret of the DRC-AGD online control algorithm. \begin{theorem} {\it Suppose Assumptions \ref{ass:stability}, \ref{ass:noise}, \ref{ass:lipschitz} hold. Suppose Algorithm \ref{alg:drc-agd} is run with $m, h \geq 1$ such that $\psi(m) \leq R_{G^{*}}/T, \psi(h) \leq R_{M}/T$. Then \begin{align} & R_T(\mathcal{C}) \leq R^2_MR^2_{G^{*}}R^2_{nat}(6L + 4(m+h)) \nonumber \\ & + \frac{1}{2}D^2 \sum\lambda_{1:T} + \frac{1}{2}\sum_{t=1}^T \frac{(\tilde{G}_{f,t})^2}{\sum H_{1:t}+\sum \lambda_{1:t}}, \ \text{where} \nonumber\\ & G_f = G_c = L\sqrt{m}R_MR_{G^{*}}R^2_{nat}, \ D = 2\sqrt{\min\{d_u,d_y\}}R_M, \nonumber \end{align} $\tilde{G}_{f,t} = \sqrt{\left(G_f + \lambda_{t}D\right)(G_f+\lambda_{t}D+2G_ch^{3/2})}$. } \label{thm:drc-agd} \end{theorem} Please see the Appendix for the proof. The proof proceeds by splitting the regret (Eq. \eqref{eq:regret-defn}) into several terms: the burn-in loss, the algorithm truncation error, the f-policy error, the comparator truncation error and the policy approximation error. This splitting follows the proof technique in \cite{simchowitz2020improper}. The burn-in loss is just the realized cost corresponding to the first $m+h$ time steps. The burn-in loss can be trivially bounded (see, for example, Lemma 5.2 in \cite{simchowitz2020improper}). The algorithm truncation error is the difference between the realized cost for the remaining horizon and the cost that would be realized with the truncated output approximation $\tilde{y}_t$, i.e., $\sum F_t$.
We recall that the output is truncated so that it depends only on the past $h$ control inputs; see Definition \ref{def:Ft} for the truncated output $\tilde{y}_t$ and the corresponding loss $F_t$. This splitting is done because Theorem \ref{thm:aogd-memory} can only be applied to fixed-length memory while the actual realized cost depends on the entire history of control inputs. The f-policy error is the difference between the cost $\sum F_t$, which is the approximate cost obtained by truncating the memory, and the same cost when $P_k = P ~ \forall ~ k$. Thus, the f-policy error is given by $\sum_{t=m+h+1}^T \mathbb{E}[F_t(P_{t:t-h} \vert y^{nat}_{1:t})] - \inf_{P} \sum_{t=m+h+1}^T \mathbb{E}[f_t(P \vert y^{nat}_{1:t})]$. Given the form of this regret term, we can apply Theorem \ref{thm:aogd-memory} to bound the f-policy error. We note that the approximated cost with truncated memory under fixed $P$ is different from the realized cost under a fixed disturbance response controller $P$. This introduces the comparator truncation error, the difference of the two costs, i.e., $\inf_{P} \sum_{t=m+h+1}^T \mathbb{E}[f_t(P \vert y^{nat}_{1:t})] - \inf_{P} \sum_{t=m+h+1}^T \mathbb{E}[l_t(y^{P}_t, u^{P}_t)]$. The policy approximation error is the difference between the realized cost for the best fixed disturbance response controller $P$ and the cost for the best linear dynamic controller. The truncation errors and the policy approximation error can also be bounded (see \cite{simchowitz2020improper}). We give details of bounding the burn-in loss, the truncation errors and the policy approximation error in the Appendix. Putting together the bounds of all these terms gives us the final result. We note that the regret bound for DRC-AGD has terms similar to the regular adaptive gradient algorithm (see Lemma \ref{lem:aogd-regret}). Given this result, we can apply an analysis similar to that of \cite{hazan2008adaptive} to establish regret scaling for various convexity conditions.
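As a rough illustration of how an intermediate rate arises (a sketch that ignores constants and the contribution of the $\lambda_t$ terms), suppose $H_t \propto t^{-\alpha}$ with $0 < \alpha < 1$. Then \begin{align} & \sum H_{1:t} \approx \sum_{k=1}^{t} k^{-\alpha} \approx \frac{t^{1-\alpha}}{1-\alpha}, \nonumber \\ & \sum_{t=1}^{T} \frac{\tilde{G}_{f,t}^2}{\sum H_{1:t}} \lesssim (1-\alpha)\,\tilde{G}^2 \sum_{t=1}^{T} t^{\alpha-1} \approx \frac{(1-\alpha)\,\tilde{G}^2}{\alpha}\, T^{\alpha}, \nonumber \end{align} where $\tilde{G}$ is a uniform bound on $\tilde{G}_{f,t}$. The curvature term alone thus suggests a $T^{\alpha}$ rate; for $\alpha > 1/2$ this exceeds the $\tilde{\mathcal{O}}(\sqrt{T})$ guarantee already available for general convex losses, which is why the scaling saturates at $\sqrt{T}$.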
In the next corollary we discuss the specific scaling of the regret w.r.t $T$ under various convexity conditions and in particular show that the DRC-AGD algorithm interpolates between $T^{1/2}$ and $\log{T}$. \begin{corollary} {\it Suppose Assumptions \ref{ass:stability}, \ref{ass:noise}, \ref{ass:lipschitz} hold. Suppose Algorithm \ref{alg:drc-agd} is run with $m, h \geq 1$ such that $\psi(m) \leq R_{G^{*}}/T, \psi(h) \leq R_{M}/T, T \geq 4$. Then \begin{enumerate} \item for any sequence of convex loss functions $l_t$ \begin{equation} R_T \leq \tilde{\mathcal{O}}(\sqrt{T}) \nonumber \end{equation} \item for any sequence of convex loss functions $l_t$ with $H^l_t \geq H$ \begin{equation} R_T \leq \tilde{\mathcal{O}}(\log{T}) \nonumber \end{equation} \item for $H^l_t = t^{-\alpha}$, and $0 < \alpha \leq 1/2$ \begin{equation} R_T \leq \tilde{\mathcal{O}}(T^\alpha) \nonumber \end{equation} \item for $H^l_t = t^{-\alpha}$, and $\alpha > 1/2$ \begin{equation} R_T \leq \tilde{\mathcal{O}}(\sqrt{T}) \nonumber \end{equation} \end{enumerate} } \label{cor:drc-agd} \end{corollary} Please see the Appendix for the proof. We see that the DRC-AGD algorithm recovers the $\mathcal{O}(\sqrt{T})$ result for general convex cost functions and the $\mathcal{O}(\log{T})$ result for strongly convex cost functions, and at the same time achieves intermediate regret scaling for intermediate convexity conditions. We emphasize that the regret scaling of $\tilde{\mathcal{O}}(T^\alpha)$ is valid under the more general condition $\sum H_{1:t} \geq t^{1-\alpha}$. \section{Conclusion} In this work we considered the online control of a known linear dynamic system with adversarial disturbances and adversarial cost functions. Our objective was to improve the regret rates established for this setting by prior works, which only considered either convex costs or strongly convex costs.
Specifically, we addressed the question whether the regret rates can be improved when the convexity of the controller cost functions is intermediate, i.e., between strongly convex and convex. We proposed an adaptive gradient extension of the disturbance response controller proposed in prior works for the same problem we study. We proved that the proposed online learning controller recovers the previously established regret guarantee of $\mathcal{O}(\sqrt{T})$ for general convex controller cost functions and $\mathcal{O}(\log{T})$ for strongly convex and smooth controller cost functions (see \cite{simchowitz2020improper}), and achieves an intermediate regret between $\mathcal{O}(\sqrt{T})$ and $\mathcal{O}(\log{T})$ for intermediate convexity conditions on the controller cost functions. \bibliographystyle{plain}
\section{Introduction} Dust is a fundamental constituent of galaxies. It forms from processed stellar material returned to the interstellar medium (ISM) through supernovae or stellar winds. Massive stars ($\gtrsim 8 M_\odot$), which end their lives as Type II supernovae (SNe), and the AGB phase of intermediate mass stars ($1 \lesssim M/M_\odot \lesssim 8$) are considered to dominate stellar dust production in star-forming galaxies, while dust in the ISM may also be formed \emph{in situ} from accretion of enriched gas processed by stars \citep{Dwek1998}. Dust is formed from metals and therefore, not surprisingly, a strong correlation is observed between dust and the gas-phase oxygen abundance both in the local universe \citep{Heckman1998, Boissier2004, Asari2007, Garn2010b, Xiao2012, Zahid2012b} and at high redshifts \citep{Reddy2010}. \citet{Lequeux1979} first observed a relation between stellar mass and gas-phase oxygen abundance in star-forming galaxies. Using $\sim53,000$ galaxies, \citet{Tremonti2004} have since established a tight relation between stellar mass and the gas-phase oxygen abundance for star-forming galaxies in the local universe. This so-called mass-metallicity (MZ) relation has been observed at low stellar masses \citep{Lee2006, Zahid2012a} and out to high redshifts \citep{Savaglio2005, Erb2006b, Cowie2008, Maiolino2008, Mannucci2009, Lamareille2009, Zahid2011a, Moustakas2011, Yabe2012}. The metallicity is strongly correlated with stellar mass and the shape of the MZ relation is relatively constant with redshift. Over cosmic time the metallicity of galaxies at a fixed stellar mass evolves, as galaxies become more enriched at late times. The MZ relation is shaped by several important physical processes. Oxygen, the most abundant metal in the ISM, is primarily produced in massive stars which end their lives as Type II SNe, subsequently returning enriched material back to the ISM.
However, the observed gas-phase oxygen abundance is also subject to large-scale gas flows. Pristine inflowing gas and enriched outflows can both reduce the gas-phase abundance within a galaxy. In pristine inflows the metal content is diluted, whereas outflows physically remove metals from the ISM. If the outflowing gas is enriched to levels beyond the ambient ISM, either due to the direct escape of metal-rich ejecta from SNe or to preferential entrainment of metals in galactic winds, the average galaxy metallicity will decline. \citet{Tremonti2004} argue that the enriched outflows which more easily escape the shallow potential wells of low mass galaxies are responsible for the observed MZ relation. Because both inflows and outflows have a similar observational consequence, it has proven difficult to disentangle the effects of gas flows from observations of metallicity alone \citep[see][]{Dalcanton2007}. In this context, the dust content of galaxies may provide important leverage in breaking the degeneracy of these two effects because the observed extinction in galaxies depends only on the amount of dust along the line-of-sight and cannot be diluted by inflows of pristine gas. In this study we examine the relation between stellar mass, dust extinction and star formation rate (SFR) for star-forming galaxies in the local universe in order to better constrain the physical processes responsible for the chemical evolution of galaxies. In addition to the MZ relation, a tight relation between the stellar mass and SFR of galaxies is observed to exist out to $z\sim2.5$ \citep{Noeske2007a, Elbaz2007, Daddi2007, Pannella2009, Whitaker2012}. The slope and scatter of the stellar mass-SFR (MS) relation are constant and independent of redshift, and the overall normalization evolves such that at a fixed stellar mass galaxies at later times have lower SFRs.
The fixed slope and scatter of the MS relation suggest that quiescent processes such as cosmological gas accretion are largely responsible for stellar mass growth since $z\sim2.5$. Understanding how the scatter in the MS relation is populated and how quiescent galaxies move off the MS relation will provide important constraints for galaxy evolution. In this study we examine the dust properties of galaxies along the MS relation to shed light on this issue. The MZ relation and its second parameter dependencies have been investigated by several groups. Most notable is the relation between stellar mass, metallicity and SFR. \citet{Ellison2008} show that there exists a correlation between metallicity and specific star formation rate (sSFR) for galaxies at a fixed stellar mass. The relationship between stellar mass, metallicity and SFR was subsequently investigated by \citet{Mannucci2010}, who show that at a fixed stellar mass the SFR is \emph{anti}-correlated with metallicity. They argue for a ``fundamental metallicity relation'' between stellar mass, metallicity and SFR. The lower metallicities observed in star-forming galaxies at early times are balanced by the higher SFRs in these galaxies such that the ``fundamental metallicity relation'' does not evolve out to $z\sim2$. \citet{Lara-Lopez2010} have independently found a ``fundamental plane'' relating the stellar mass, metallicity and SFR of galaxies which appears to match the observational data to $z\sim3.5$. The observed relation between stellar mass, metallicity and SFR does depend on methodology and sample selection. \citet{Yates2012} reexamine the ``fundamental metallicity relation'' and find that while at lower stellar masses the SFR is \emph{anti-}correlated with metallicity, the relation reverses at higher stellar masses such that a \emph{positive} correlation is observed.
\citet{Yates2012} argue that the ``twist'' in the relation is a result of gas-rich mergers at higher stellar masses which fuel a starburst, leading to gas exhaustion and quenching of star formation. Subsequent gas accretion at levels too low to efficiently form large amounts of stars leads to metallicity dilution in these systems, thus giving rise to the observed correlation. In order to shed light on the stellar mass, metallicity and SFR relation and to understand the physical properties of galaxies populating the MS relation, we examine the relation between stellar mass, dust extinction and SFR. In Section 2 we describe our sample and in Section 3 we present our results. We provide a detailed discussion of selection and aperture effects in Section 4. In Section 5 we provide a brief discussion and in Section 6 we summarize the main results of the paper. Throughout this work we adopt the standard cosmology $(H_{0}, \Omega_{m}, \Omega_{\Lambda}) = (70$ km s$^{-1}$ Mpc$^{-1}$, 0.3, 0.7) and a \citet{Chabrier2003} IMF. \section{Data and Methods} \begin{figure*} \begin{center} \includegraphics[width=2\columnwidth]{f1.eps} \end{center} \caption{The distribution of a) stellar mass, b) SFR ($M_\odot$ yr$^{-1}$), c) Balmer decrement and d) $g$-band fiber covering fraction for the SN8 sample.} \label{fig:hist} \end{figure*} We draw our sample from the SDSS DR7, which consists of $\sim870,000$ unique galaxies spanning a redshift range of $0 < z < 0.7$ \citep{Abazajian2009}. The survey has a Petrosian limiting magnitude of $r_P = 17.8$ and covers 8,200 deg$^2$. The spectra have a nominal spectral range of 3900--9100 $\mathrm{\AA}$ and a spectral resolution of $R \sim 2000$. Both the stellar masses, which are determined from the $ugriz$-band photometry \citep{Stoughton2002}, and the emission line fluxes are measured by the MPA-JHU group\footnote{http://www.mpa-garching.mpg.de/SDSS/DR7/}.
We adopt the DR7 values in this work but subtract 0.2 dex from the stellar masses for consistency with our previous work, where stellar masses are estimated using a different set of routines \citep[see][]{Zahid2011a}. The SFRs in the DR7 are derived using the technique of \citet{Brinchmann2004} with additional improvements given by \citet{Salim2007}. The SFRs are determined from fitting prominent emission lines in the spectra, with the largest contribution coming from H$\alpha$ and H$\beta$, and are corrected for dust and aperture effects. The emission line fluxes are measured by the MPA/JHU group \citep[see][]{Tremonti2004}. Balmer absorption is prominent in the atmospheres of A stars. For integrated spectra of galaxies, a correction for Balmer absorption is required in order to estimate the emission line strength of Balmer lines. The emission lines are continuum-subtracted and corrected for stellar absorption by fitting a linear combination of the Charlot \& Bruzual 2008 stellar population synthesis models (Charlot \& Bruzual, in prep). We have scaled the emission line uncertainties of H$\alpha$ and H$\beta$ by 2.473 and 1.882, respectively, as recommended by the MPA/JHU group. From the parent sample, we select a pure star-forming sample of emission line galaxies. We first distinguish star-forming galaxies from AGN by constraining the ionizing radiation source using the [OIII]$\lambda5007$, [NII]$\lambda6584$, H$\beta$ and H$\alpha$ emission lines \citep{Baldwin1981, Kauffmann2003, Kewley2006}. Following \citet{Kewley2006}, we remove galaxies where \begin{equation} \mathrm{log([OIII]/H\beta)} > 0.61/\left(\mathrm{log([NII]/H\alpha)} - 0.05 \right) + 1.3. \end{equation} This selection yields a sample of $388,000$ galaxies. In order to obtain a robust estimate of the Balmer decrement, we require that the signal-to-noise ratio (S/N) of the H$\alpha$ and H$\beta$ lines be greater than 8. These selection criteria yield a sample of $\sim157,000$ star-forming galaxies.
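The line-ratio cut quoted above from \citet{Kewley2006} can be sketched as a vectorized selection; the example ratios in the usage below are purely illustrative, not SDSS measurements.

```python
import numpy as np

def is_star_forming(oiii_hb, nii_ha):
    """True for galaxies below the log([OIII]/Hbeta) demarcation curve above."""
    x = np.log10(np.asarray(nii_ha, dtype=float))
    y = np.log10(np.asarray(oiii_hb, dtype=float))
    # the curve diverges at x = 0.05; everything at or beyond it is flagged as AGN
    sf = np.zeros_like(x, dtype=bool)
    left = x < 0.05
    sf[left] = y[left] < 0.61 / (x[left] - 0.05) + 1.3
    return sf
```

In practice this mask would be combined with the S/N $>8$ requirement on H$\alpha$ and H$\beta$ to reproduce the SN8-style selection.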
We refer to this selected sample as the SN8 sample. \citet{Groves2012} find that the H$\beta$ equivalent widths and line fluxes may be systematically underestimated due to an overcorrection for H$\beta$ absorption. They argue that this systematic underestimate of the H$\beta$ line flux leads to a 0.1 mag overestimate of $A_v$. In this study we are investigating the relation between stellar mass, dust extinction and SFR. \citet{Groves2012} conclude that the H$\alpha$ and H$\gamma$ lines do not suffer from the same systematic errors in the absorption correction. In order to assess whether this possible systematic uncertainty qualitatively changes the relation between stellar mass, dust extinction and SFR, we also determine the dust extinction from the H$\alpha$/H$\gamma$ ratio. We require a S/N$ > 3$ in the H$\gamma$ line when determining the dust extinction from the H$\alpha$/H$\gamma$ ratio. Most galaxies in the SN8 sample ($>99\%$) have a S/N $>3$ in H$\gamma$. We determine the dust extinction from the H$\alpha$/H$\gamma$ ratio in Section 4.1 in order to assess any systematic effects due to improper subtraction of H$\beta$ Balmer absorption. We measure dust extinction from the Balmer decrement. For case B recombination with electron temperature $T_e = 10^4$ K and electron density $n_e = 10^2$ cm$^{-3}$, the intrinsic H$\alpha$/H$\beta$ and H$\alpha$/H$\gamma$ ratios are expected to be 2.86 and 6.11, respectively \citep{Hummer1987}. We obtain the intrinsic color excess, E(B$-$V), and the correction for dust attenuation using the extinction law of \citet{Cardelli1989} and a corresponding $R_v = 3.1$. We note that the results of this study are largely independent of our choice of extinction law, as the relations presented rely only on relative values of extinction. From the color excess we determine the visual extinction measured in magnitudes from $A_v = R_v$ E(B$-$V).
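The dust correction described above can be sketched as follows. The intrinsic ratio 2.86 and $R_v = 3.1$ come from the text, but the extinction-curve values $k(\mathrm{H}\alpha) \approx 2.53$ and $k(\mathrm{H}\beta) \approx 3.61$ are approximate numbers for a \citet{Cardelli1989}-type curve and are assumptions of this sketch.

```python
import numpy as np

K_HA, K_HB = 2.53, 3.61   # approximate Cardelli-type curve values at Halpha, Hbeta
R_INT_HAHB = 2.86         # intrinsic Halpha/Hbeta for case B (T_e = 1e4 K)

def ebv_from_balmer(f_ha, f_hb):
    """Color excess E(B-V) from the observed Halpha/Hbeta flux ratio."""
    return 2.5 / (K_HB - K_HA) * np.log10((f_ha / f_hb) / R_INT_HAHB)

def av_from_balmer(f_ha, f_hb, r_v=3.1):
    """Visual extinction A_v = R_v E(B-V), in magnitudes."""
    return r_v * ebv_from_balmer(f_ha, f_hb)
```

Under these curve values, an observed Balmer decrement of 4.0 corresponds to $A_v \approx 1$ mag, of the same order as the transition value discussed below.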
In Figure \ref{fig:hist} we show the distribution of stellar masses, SFRs, Balmer decrement and fiber covering fractions for the SN8 sample. We estimate the $g$-band fiber covering fraction, $f_c$, by comparing the photometric and fiber $g$-band magnitudes. The $g$-band covering fraction is an estimate of the fraction of the galaxy luminosity contained within the fiber. The median covering fraction for the SN8 sample is 0.24 (see Figure \ref{fig:hist}d). We determine that selection and aperture effects do not significantly bias the observed relation between stellar mass, dust extinction and SFR presented below. In Section 4 we discuss selection and aperture effects in detail. \section{The Dust and Metallicity Properties of Star-Forming Galaxies} In Section 3.1 we present the relation between stellar mass, dust extinction and SFR. For comparison, we present the relation between stellar mass, metallicity and SFR in Section 3.2. \subsection{The Stellar Mass, Dust Extinction and SFR Relation} \begin{figure*} \begin{center} \includegraphics[width=2\columnwidth]{f2.eps} \end{center} \caption{The observed relation between stellar mass, dust extinction and SFR ($M_\odot$ yr$^{-1}$). a) Undeciles of the SFR as a function of stellar mass. b) The median Balmer decrement and visual extinction (in magnitudes, see text for details) sorted into bins of stellar mass and SFR. The colors correspond to undeciles of the SFR shown in a). The black error bars show the median 1$\sigma$ dispersion of the data in each bin and the red error bars show the observational uncertainty.} \label{fig:tau} \end{figure*} In Figure \ref{fig:tau} we show the relation between stellar mass, dust extinction and SFR. Hereafter, we refer to the stellar mass, dust extinction and SFR relation as the MDSR. The data are first sorted into 16 equally populated bins of stellar mass and then each mass bin is sorted into 11 equally populated bins of SFR. Each bin contains $\sim890$ galaxies.
In Figure \ref{fig:tau}a the different color curves correspond to undeciles\footnote{Each of eleven equal groups into which a population can be divided according to the distribution of values of a particular variable.} of the SFR as a function of stellar mass. In Figure \ref{fig:tau}b we show the median dust extinction sorted into bins of stellar mass and SFR. Again the curves are color coded to match the undeciles of SFR shown in Figure \ref{fig:tau}a (e.g. the red curve corresponds to the median dust extinction in the highest SFR bin and the black curve to the median dust extinction in the lowest SFR bin within each stellar mass bin). The median 1$\sigma$ scatter of the Balmer decrement within each bin is 0.42 with 0.32 attributable to observational uncertainty. Given the large number of data points within each bin, the median standard error of the Balmer decrement for each bin is 0.01. There are several notable features present in Figure \ref{fig:tau}. There is a general trend for the extinction to increase with stellar mass \citep[e.g.][]{Brinchmann2004}. Older, higher stellar mass galaxies typically have greater extinction, which is most likely due to the greater number of stars in these galaxies evolving through the AGB phase or ending their lives as supernovae \citep[see][]{Dwek1998}. Perhaps more interesting is the relation between extinction and SFR at a fixed stellar mass. At stellar masses $<10^{10}M_\odot$ the dust content of galaxies is \emph{anti}-correlated with the SFR such that galaxies with high SFRs tend to have less dust extinction. At a stellar mass of $\sim \!10^{10} M_\odot$ (or $A_v \sim1.2$) there is a sharp transition and at larger stellar masses the extinction is \emph{positively} correlated with the SFR. Figure \ref{fig:tau} shows two projections of the 3-dimensional MDSR. 
The reversal of the trend between SFR and dust extinction at a fixed stellar mass can be thought of as a twist in the two dimensional surface defining the relation between stellar mass, dust extinction and SFR. \begin{figure*} \begin{center} \includegraphics[width=2\columnwidth]{f3.eps} \end{center} \caption{a) The Spearman rank correlation coefficient between SFR ($M_\odot$ yr$^{-1}$) and dust extinction in 16 equally populated bins of stellar mass. The dotted line marks the zero point. b-d) Dust extinction plotted as a function of SFR for galaxies in three of the stellar mass bins shown in a). The range of stellar masses is shown in the text of each panel. The red curves are the median dust extinction in 15 bins of SFR and the dashed curves are the 68\% contours of the data.} \label{fig:cc} \end{figure*} The twist is also present in the data without binning in SFR. In Figure \ref{fig:cc} we plot the correlation coefficient between SFR and dust extinction in bins of stellar mass. The data are binned into 16 equally populated bins of stellar mass, the same as in Figure \ref{fig:tau}. In the stellar mass range of $8.5 \lesssim \mathrm{log}(M_\ast/M_\odot) \lesssim 10$ the data show a negative correlation. At $10^{10} M_\odot$ there is a transition to a positive correlation. We demonstrate this visually for the unbinned data in Figure \ref{fig:cc}b-d by showing the relation between dust extinction and SFR in three of the stellar mass bins. At the lowest stellar masses there is evidence for another transition to a positive correlation, but the sparsity of data at low stellar masses does not allow us to draw any strong conclusions. Both \citet{Garn2010b} and \citet{Xiao2012} have studied the dust properties of star-forming galaxies in the SDSS. Neither of these studies reports the observed twist in the relation between stellar mass, dust extinction and SFR. However, these studies do not examine the relation between dust extinction and SFR at a fixed stellar mass. 
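The per-mass-bin rank correlation underlying Figure \ref{fig:cc} can be sketched as follows. The Spearman coefficient is computed as the Pearson correlation of ranks (no tie correction), and the mock data, with an anti-correlation below $\mathrm{log}(M_\ast/M_\odot) = 10$ and a positive correlation above it, are purely illustrative:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (no tie correction, which is fine for continuous mock data)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Mock catalog with a built-in "twist": extinction anti-correlates with
# SFR below log M = 10 and correlates with it above (illustrative only).
rng = np.random.default_rng(1)
logm = rng.uniform(8.5, 11.5, 20000)
sfr = rng.normal(0.0, 1.0, logm.size)
sign = np.where(logm < 10.0, -1.0, 1.0)
av = 0.5 * sign * sfr + rng.normal(0.0, 0.3, logm.size)

rho_low = spearman(sfr[logm < 10.0], av[logm < 10.0])     # negative
rho_high = spearman(sfr[logm >= 10.0], av[logm >= 10.0])  # positive
```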
Both studies employ a principal component analysis (PCA) on the full sample. PCA is a useful statistical technique for assessing the relative contribution of various correlated parameters. PCA finds the orthogonal linear combinations of the variables that maximize the variance in the data. However, traditional PCA techniques assume a linear relation between the variables. The twist is not obvious using PCA because the relation between dust extinction and SFR is opposite at lower stellar masses as compared to higher stellar masses, thus ``canceling out'' in PCA. PCA performed on a sub-sample of the data in a restricted range of stellar mass does reveal the twist between dust extinction and SFR (see Figure \ref{fig:cc}). \subsection{The Stellar Mass, Metallicity and SFR Relation} In Figure \ref{fig:yates}a we show the relation between stellar mass, metallicity and SFR. Hereafter we refer to this relation as the MZSR. The MZSR is determined using the sample and methodology of \citet{Yates2012}. The data are binned into mass bins of width 0.15 dex and SFR bins of width 0.3 dex. The mean metallicities determined from the Bayesian method of \citet{Tremonti2004} are plotted for each bin and the different color curves correspond to the various SFR bins. The center of each SFR bin is given in the legend of the figure. We refer the reader to \citet{Yates2012} for more details on methodology and sample selection. \begin{figure} \includegraphics[width=\columnwidth]{f4.eps} \caption{The observed relation between a) stellar mass, metallicity and SFR \citep[$M_\odot$ yr$^{-1}$, c.f. Figure 1 of][]{Yates2012} and b) stellar mass, Balmer decrement and SFR using the T2 sample from \citet{Yates2012}. The data are the a) mean metallicities and b) mean Balmer decrements in constant width bins of stellar mass and SFR. 
The curves are color-coded corresponding to the different SFR bins shown in the legend (the value given for the SFR is the bin center).} \label{fig:yates} \end{figure} We present Figure \ref{fig:yates} to draw attention to the qualitative similarities in the observed MZSR as compared to the observed MDSR (Figure \ref{fig:tau}). At a fixed stellar mass there exists an \emph{anti-}correlation between the metallicity and SFR for galaxies with stellar masses $\lesssim10^{10}M_\odot$. At stellar masses $\gtrsim 10^{10} M_\odot$ the trend reverses and a \emph{positive} correlation is observed between metallicity and SFR at a fixed stellar mass. As can be seen in Figure \ref{fig:yates}b, the twist in the MDSR is also present in the \citet{Yates2012} data. The \emph{anti}-correlation between metallicity and SFR at lower stellar masses is significantly stronger than the \emph{anti}-correlation between dust extinction and SFR. We determine the correlation coefficient between SFR and metallicity for $\sim7400$ galaxies in the stellar mass range of $9.4<\mathrm{log}(M_\ast/M_\odot)<9.5$. The metallicities are taken from the DR7 and are determined using the Bayesian technique of \citet{Tremonti2004}. The sample correlation coefficient between the aperture corrected SFR and metallicity is $r = -0.41$. Using the SFR determined from the observed H$\alpha$ luminosity in the fiber, the Spearman rank correlation coefficient between SFR and metallicity is $r = -0.15$. The \emph{anti-}correlation between SFR and metallicity is still present when using H$\alpha$ fiber SFRs \citep[e.g.][]{Mannucci2010}; however, the strength of the correlation is diminished. Metallicity is a measure of oxygen relative to hydrogen whereas dust extinction is dependent on the absolute number of absorbers within the line of sight. To first order, the observed dust extinction, unlike metallicity, is independent of the gas fraction. 
The stronger correlation between SFR and metallicity as compared to SFR and dust extinction and much of the difference in the MZSR as compared to the MDSR seen in Figure \ref{fig:yates} are likely due to a correlation between the gas fraction and SFR. Higher gas fractions may sustain higher SFRs while also diluting the metallicity, thus strengthening the \emph{anti-}correlation between metallicity and SFR observed at stellar masses $<10^{10} M_\odot$. Measurements of gas masses in a large sample of star-forming galaxies should provide important insight into the relationship between metallicity and dust. \section{Systematic, Selection and Aperture Effects} In this section we investigate possible systematic issues with improper subtraction of H$\beta$ absorption (Section 4.1), biases in the observed MDSR associated with our method of sample selection (Section 4.2) and systematic effects of measuring global physical properties of galaxies from emission lines observed within a limited aperture (Section 4.3). We conclude that selection and aperture effects are not significant in our determination of the MDSR. \subsection{Systematic Effects in the H$\beta$ Absorption Correction} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{f5.eps} \end{center} \caption{The observed relation between stellar mass, dust extinction and SFR ($M_\odot$ yr$^{-1}$). Similar to Figure \ref{fig:tau}b but with dust extinction determined from the H$\alpha$/H$\gamma$ ratio.} \label{fig:gamma} \end{figure} In Figure \ref{fig:gamma} we plot the MDSR with dust extinction determined from the H$\alpha$/H$\gamma$ ratio. The relation presented in Figure \ref{fig:gamma} displays the same characteristics as Figure \ref{fig:tau}b: an \emph{anti-}correlation between dust extinction and SFR at stellar masses $<10^{10}M_\odot$ and a positive correlation at higher stellar masses. 
We observe a $\sim0.1$ magnitude greater extinction when determining $A_v$ from H$\alpha$/H$\beta$ as compared to H$\alpha$/H$\gamma$. \citet{Groves2012} find a similar offset by comparing SDSS DR7 data with DR4 data. They attribute the difference in extinction determined from H$\alpha$/H$\beta$ and H$\alpha$/H$\gamma$ to a systematic error in the subtraction of the underlying H$\beta$ Balmer absorption. While systematic effects in subtracting the underlying H$\beta$ absorption may affect the absolute measurement of $A_v$, we conclude that the observed twist in the MDSR is not affected. A comparison of Figure \ref{fig:gamma} with Figure \ref{fig:tau}b shows that a greater difference in $A_v$ is observed at higher stellar masses and SFRs. The overestimation of $A_v$ appears to be correlated with the stellar mass and SFR. \subsection{Selection Effects} We select star-forming galaxies from the parent sample using the BPT method, which allows us to identify and remove galaxy spectra dominated by AGN emission (see Section 2). We require a S/N $>$ 8 in the H$\alpha$ and H$\beta$ emission line fluxes in order to obtain a robust estimate of the SFR and Balmer decrement from which we measure the extinction. The strength of the Balmer lines scales with the number of UV ionizing photons originating from massive stars and therefore is a good indicator of the SFR \citep{Kennicutt1998b}. We may bias our sample by selecting galaxies above a fixed S/N threshold because galaxies with low levels of star formation will have weak Balmer lines that may not meet our S/N requirement. This selection criterion could lead to a spurious MDSR if the bias is mass dependent. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{f6.eps} \end{center} \caption{The SFR ($M_\odot$ yr$^{-1}$) distribution for the SN8 (black curves) and SN3 (red curves) samples. The median SFRs in 30 equally populated bins of stellar mass are shown by the filled circles and solid curves. 
The 68\% and 95\% contours of the distribution are shown by the dashed and dotted curves, respectively.} \label{fig:sfr_comp} \end{figure} In order to investigate the bias introduced by our S/N requirement we compare the distribution of SFR as a function of stellar mass for the SN8 sample with a sample selected requiring a S/N $>$ 3 in the H$\alpha$ and H$\beta$ emission lines. We refer to this sample as the SN3 sample. The SN3 sample consists of $\sim259,000$ galaxies and contains a factor of $\sim1.6$ more galaxies than the SN8 sample. In Figure \ref{fig:sfr_comp} we plot the distribution of SFRs in 30 bins of stellar mass for the SN8 (black curves) and SN3 (red curves) samples. The median of the SFR distribution (solid curves) of the SN3 sample is typically $\sim$0.1 dex lower than that of the SN8 sample except at the highest stellar masses where the difference is larger. We are interested in determining the MDSR for the star-forming sequence of galaxies as identified by \citet[and many others]{Noeske2007a}. The population of star-forming galaxies out to $z\sim2$ is characterized by a near unity slope and constant scatter in the relation between stellar mass and SFR that is independent of redshift \citep{Noeske2007a, Elbaz2007, Daddi2007, Pannella2009, Whitaker2012}. In the SN8 and SN3 samples, the scatter in the SFR distribution increases at higher stellar masses and a population of massive, low SFR galaxies is present in the distribution shown in Figure \ref{fig:sfr_comp}. By examining the sersic index of galaxies on the stellar mass-SFR diagram, \citet{Wuyts2011} show that this region of the diagram is populated by quiescent galaxies best described by de Vaucouleurs profiles. Decreasing our S/N threshold slightly broadens and shifts the distribution of SFRs at all stellar masses and selects a greater number of massive, quiescent galaxies. 
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{f7.eps} \end{center} \caption{The MDSR for the SN3 sample.} \label{fig:tau_sn} \end{figure} In Figure \ref{fig:tau_sn} we reexamine the stellar mass, dust extinction and SFR relation using our SN3 sample. The greatest difference in the MDSR determined from the SN3 sample as compared to the SN8 sample is for galaxies with stellar masses $\gtrsim 10^{10} M_\odot$ and is attributable to the greater number of quiescent galaxies in the sample. We conclude that our S/N selection criterion does not strongly select against the star-forming sequence of galaxies. The S/N criterion of the SN8 sample does not completely remove quiescent galaxies from the sample, though they constitute a small fraction of the sample. Thus, selection bias is not significant. Comparison with the SN3 sample suggests that the downturn in the relation between stellar mass and dust extinction observed in the highest mass, lowest SFR bin of the SN8 sample is a consequence of the presence of quiescent galaxies in the sample (see Figure \ref{fig:tau}a). This suggests that the twist observed in the MDSR and the \emph{anti}-correlation between SFR and dust extinction is related to the shutting down of star formation in galaxies. We apply a higher S/N threshold in order to emulate selection effects that may be present in high redshift data. The ``twist'' is \emph{clearly} observed in the MDSR when the S/N threshold for H$\alpha$ and H$\beta$ emission is $<20$. At a S/N$\gtrsim25$, there is no longer a ``twist'' in the MDSR. In the SDSS data 74\% of the galaxies are removed when a S/N$>25$ is required. At a fixed stellar mass this selection criterion preferentially removes low SFR galaxies. This demonstrates that incompleteness in SFR may bias the observed MDSR and care must be taken when investigating the MDSR in higher redshift samples. 
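The selection effect discussed above can be illustrated with a toy calculation (not our pipeline): if the line flux scales as the SFR over distance squared and the noise floor is fixed, a hard S/N cut preferentially removes low SFR galaxies at fixed stellar mass:

```python
import numpy as np

# Toy model (assumed, not the paper's pipeline): line flux ~ SFR / d^2
# with a fixed flux noise floor, so an S/N cut removes faint sources.
rng = np.random.default_rng(2)
n = 50000
log_sfr = rng.normal(0.0, 0.5, n)          # mock log SFR at fixed stellar mass
dist = rng.uniform(50.0, 300.0, n)         # mock distances (arbitrary units)
line_flux = 10**log_sfr / dist**2          # mock Balmer line flux
noise = np.median(line_flux) / 8.0         # fixed noise floor (illustrative)
kept = line_flux / noise > 8               # S/N > 8 selection

median_all = np.median(log_sfr)            # median log SFR, full sample
median_kept = np.median(log_sfr[kept])     # median log SFR after the cut
```

In this toy setup the surviving sample has a noticeably higher median log SFR than the full sample, mimicking the incompleteness described above.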
\subsection{Aperture Effects} We investigate aperture effects associated with measurements of the SFR (Section 4.2.1) and the Balmer decrement (Section 4.2.2). \subsubsection{Star Formation Rate} The SFRs used in this study are derived by the MPA/JHU group using the technique developed by \citet[hereafter B04]{Brinchmann2004}. B04 model the stellar continuum and absorption lines using \citet{Bruzual2003} stellar population synthesis models. They model the emission lines using the CLOUDY photoionization code \citep{Ferland1996} and \citet{Charlot2001} nebular emission models. Dust attenuation is largely constrained using the Balmer decrement with small contributions from other emission lines. This procedure gives the SFR within the 3'' fiber aperture. B04 apply a fiber aperture correction in order to obtain the total SFR. The correction is derived by calculating the dependency of SFR on color within the fiber. The SFR outside the fiber is accounted for by assuming that the color dependency of the SFR is the same inside and outside the fiber (see B04 for more details). \citet{Salim2007} derive dust corrected SFRs for $\sim50,000$ galaxies in the local universe by fitting the UV and optical spectral energy distribution (SED) with a library of stellar population synthesis models. The SFRs determined from the SED are not subject to aperture effects. \citet{Salim2007} show that the aperture and dust corrected SFRs derived by B04 for the sample of star-forming galaxies (i.e. those with H$\alpha$ detected with S/N $>$ 3) agree with those determined from the UV and optical SED. There is a systematic difference of 0.02 dex (when 3$\sigma$ outliers are excluded) and the scatter in the two methods is accounted for by the uncertainty of each method. \citet{Salim2007} conclude that for star-forming galaxies the UV-based SFRs agree remarkably well with the aperture corrected SFRs derived by B04. 
For the star-forming galaxies in the SN8 sample, the SFRs made available in the DR7 and used in this study are robust against aperture bias. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{f8.eps} \end{center} \caption{Relation between covering fraction, $f_c$, and a) H$\alpha$ SFRs and b) aperture corrected SFRs ($M_\odot$ yr$^{-1}$). The relation between Balmer decrement and c) H$\alpha$ SFR and d) aperture corrected SFRs. The data are the SN8 sample in a limited mass range ($9.4 < \mathrm{log}(M_\ast/M_\odot) < 9.5$). The median covering fraction is plotted by the solid red curve in 15 bins of a) H$\alpha$ SFRs and b) aperture corrected SFRs. The median Balmer decrement is plotted by the solid red curve in 15 bins of c) H$\alpha$ SFRs and d) aperture corrected SFRs. The 68\% contours of the distributions are shown by the dashed red curves.} \label{fig:cf} \end{figure} The MDSR derived in this study is dependent on how the SFR is measured. To demonstrate the need for an aperture correction, we also determine the SFR from the dust corrected H$\alpha$ luminosity observed in a 3'' fiber aperture using the conversion of \citet{Kennicutt1998b}. We refer to the SFR determined from the observed H$\alpha$ luminosity as the Fiber SFR and explicitly refer to the aperture corrected SFRs as the DR7 SFRs. The relation between stellar mass, dust extinction and Fiber SFR is qualitatively different at lower stellar masses when compared to the same relation using DR7 SFRs. At a fixed stellar mass, the Fiber SFRs are not \emph{anti-}correlated with dust extinction at stellar masses $<10^{10}M_\odot$. We attribute the different relation observed between dust extinction and SFR at lower stellar masses to aperture effects resulting from the use of Fiber SFRs rather than the DR7 SFRs. In Figure \ref{fig:cf}a we show the distribution of aperture covering fraction for galaxies as a function of Fiber SFR in a narrow mass range ($9.4<\mathrm{log}(M_\ast/M_\odot)<9.5$). 
The fiber covering fraction is strongly correlated with the Fiber SFR. In Figure \ref{fig:cf}c we show that Fiber SFRs and Balmer decrements are weakly (positively) correlated at Fiber SFRs $< 0$ but show a negative correlation at higher Fiber SFRs. In Figure \ref{fig:cf}b and d we show the distribution of covering fraction and Balmer decrement, respectively, plotted as a function of DR7 SFRs. The Balmer decrement is not strongly correlated with DR7 SFRs. This interval contains $\sim$75\% of the data. At higher DR7 SFRs there is a positive correlation between the SFR and covering fraction. The correlation between covering fraction and DR7 SFR at higher SFRs arises because the sample is not volume limited and we have applied a fixed S/N threshold. We observe a similar relation between stellar mass, dust extinction and SFR using the volume limited sample of \citet{Zahid2011a} which is comprised of data selected in a redshift range of $0.04<z<0.1$. No correlation between covering fraction and DR7 SFR is observed in the volume limited sample in the mass range investigated in Figure \ref{fig:cf}. However, restricting the redshift range removes a substantial number of low and high mass galaxies. Because the effect of not selecting a volume limited sample is not significant and does not change our conclusions, we do not apply this additional criterion in selecting data. By comparing Figure \ref{fig:cf}c and d we show that not including an aperture correction for SFR results in a spurious correlation between Balmer decrement and SFR. Figure \ref{fig:cf} highlights the importance of applying an aperture correction to SDSS SFRs when examining global trends in order to avoid systematic bias in relations between SFR and other physical properties. \subsubsection{Balmer Decrement} Several studies have reported variations in dust extinction with galactocentric radius \citep[e.g.][]{Holwerda2005, Boissier2007, Tamura2009, Munoz-Mateos2009}. 
However, \citet{Kewley2005} find little evidence for systematic variation between nuclear and global extinction for covering fractions $>20\%$. Here we test possible biases that may result from aperture effects in determining the Balmer decrement. If strong negative dust gradients exist in galaxies then the measured Balmer decrement will be biased towards larger values of extinction in galaxies with small covering fractions. In this case, the measured Balmer decrement will not reflect the global dust properties. We first test for any bias by plotting the Balmer decrement as a function of the aperture covering fraction. In Figure \ref{fig:dec_cf} we plot the Balmer decrement as a function of aperture covering fraction for data in a narrow mass range. For data in the stellar mass range of $9.4 < \mathrm{log}(M_\ast/M_\odot) < 9.5$, the Balmer decrement is not strongly correlated to the aperture covering fraction. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{f9.eps} \end{center} \caption{The Balmer decrement plotted against the aperture covering fraction, $f_c$. The black data points are taken from the SN8 sample in a limited mass range ($9.4 < \mathrm{log}(M_\ast/M_\odot) < 9.5$). The median Balmer decrement is plotted by the solid red curve in 15 bins of aperture covering fraction, $f_c$. The 68\% contours of the distributions are shown by the dashed red curves.} \label{fig:dec_cf} \end{figure} We also apply a more global test for potential bias in the Balmer decrement due to aperture effects. We divide the SN8 sample into two equally populated subsamples. Sample S1 is comprised of galaxies that have covering fraction less than the median covering fraction of the SN8 sample ($f_c < 0.24$). Sample S2 is the complementary sample with covering fraction greater than the median covering fraction. Each sample has $\sim$74,000 galaxies. 
\begin{figure*} \begin{center} \includegraphics[width=2\columnwidth]{f10.eps} \end{center} \caption{The black histograms are for the SN8 sample and are the same as Figure \ref{fig:hist}. The stellar mass and SFR ($M_\odot$ yr$^{-1}$) distributions are identical for the S1s and S2s samples and are shown by the gray histograms. The distribution of the c) Balmer decrement and d) covering fraction for the S1s and S2s samples are shown by the red and blue histograms, respectively.} \label{fig:dec_dist} \end{figure*} We want to examine the bias in the Balmer decrement associated \emph{only} with the covering fraction. While in small bins of stellar mass the aperture corrected SFR is not strongly correlated to covering fraction (see Figure \ref{fig:cf}b), the SFRs and stellar masses across the whole sample are correlated to the covering fraction. The correlation of the SFR and stellar mass with covering fraction is a consequence of the fact that SDSS is a magnitude limited survey and we have selected our sample using a fixed S/N threshold. Therefore we must take care to remove second order correlations between Balmer decrement and covering fraction resulting from correlations between covering fraction, SFR and stellar mass. We do this by randomly selecting subsets of the S1 and S2 data that are matched to have identical SFR and stellar mass distributions (in bins of 0.05 dex width). We refer to these as the S1s and S2s subsamples. The S1s and S2s subsamples are comprised of $\sim39,000$ galaxies each. Figure \ref{fig:dec_dist}a and b show the stellar mass and SFR distribution for S1s and S2s in gray. The stellar mass and SFR distributions of the S1s and S2s are representative of the SN8 sample and are identical for the S1s and S2s samples by design. Figure \ref{fig:dec_dist}c and d show the distribution of the Balmer decrement and covering fraction for the S1s (red histogram) and S2s (blue histogram) samples, respectively. 
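The construction of the matched subsamples can be sketched as follows; the helper below bins two mock samples in 0.05 dex cells of stellar mass and SFR and randomly draws the same number of galaxies per cell from each, a simplified stand-in for our actual procedure:

```python
import numpy as np

def matched_subsamples(m1, s1, m2, s2, width=0.05, seed=0):
    """Randomly draw index subsets of two samples so that their joint
    (log mass, log SFR) histograms in bins of `width` dex are identical."""
    rng = np.random.default_rng(seed)
    def cell(m, s):
        # encode the 2-D bin as a single integer key
        return np.floor(m / width).astype(int) * 100000 + np.floor(s / width).astype(int)
    k1, k2 = cell(m1, s1), cell(m2, s2)
    idx1, idx2 = [], []
    for k in np.intersect1d(k1, k2):
        i1 = np.flatnonzero(k1 == k)
        i2 = np.flatnonzero(k2 == k)
        n = min(i1.size, i2.size)
        idx1.append(rng.choice(i1, n, replace=False))
        idx2.append(rng.choice(i2, n, replace=False))
    return np.concatenate(idx1), np.concatenate(idx2)

# mock "low covering fraction" and "high covering fraction" samples
rng = np.random.default_rng(3)
m1, s1 = rng.normal(10.0, 0.5, 5000), rng.normal(0.0, 0.4, 5000)
m2, s2 = rng.normal(10.3, 0.5, 5000), rng.normal(0.2, 0.4, 5000)
i1, i2 = matched_subsamples(m1, s1, m2, s2)
```

By construction the two returned subsets have the same size and identical binned mass and SFR distributions, so any residual difference in a third quantity (here, the Balmer decrement) can be attributed to covering fraction alone.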
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{f11.eps} \end{center} \caption{Distribution of the logarithm of the Balmer decrement for the S1s (red) and S2s (blue) samples.} \label{fig:dec_norm} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=2\columnwidth]{f12.eps} \end{center} \caption{The MDSR for the SN8 sample but with a minimum aperture covering fraction requirement of a) 0, b) 0.1, c) 0.2 and d) 0.3.} \label{fig:cf_relation} \end{figure*} In Figure \ref{fig:dec_norm} we show the distribution of the logarithm of the Balmer decrement for the S1s (red histogram) and S2s (blue histogram) samples. The data are nearly log-normally distributed. The median observational uncertainties in the Balmer decrement for the S1s and S2s samples are 0.34 and 0.28, respectively. The S1s sample ($f_c < 0.24$) has a slightly broader distribution as compared to the S2s sample. The greater width of the S1s distribution may be due to the larger observational uncertainties of the S1s sample. Despite the very different distribution in covering fraction, the distribution of Balmer decrement in the S1s and S2s samples are very similar. If there were a strong bias in the measured Balmer decrement due to aperture effects, we would expect a relative shift in the distribution of the Balmer decrement for the S1s and S2s samples. We conclude that aperture effects do not significantly bias the measurement of the dust extinction in the SN8 sample. The Balmer decrement measured in the fiber is a flux-weighted average over many HII regions. We perform a simple calculation to test whether the similarity in the S1s and S2s Balmer decrement distribution is consistent with the extinction observed in nearby galaxies. \citet{Munoz-Mateos2009} measure the radial attenuation profile for galaxies in the Spitzer Infrared Nearby Galaxies Survey sample. The typical (median) radial attenuation gradient measured is -0.023 mags/kpc. 
We convert this gradient measured in magnitudes of extinction into a gradient in Balmer decrement assuming the \citet{Cardelli1989} extinction curve. We adopt an exponential profile for H$\alpha$ surface brightness with a fiducial scale length of 4 kpc. Using these values we determine the flux-weighted Balmer decrement as a function of fiber covering fraction. We find that for the fiducial values of radial attenuation gradient and H$\alpha$ scale length, the Balmer decrement is overestimated by $\sim5\%$ when the covering fraction is 5\% (the minimum for the SN8 sample) and by 4\% when the covering fraction is 20\%. The median covering fraction of the S1s and S2s samples is 16\% and 33\%, respectively. The relative systematic error in the Balmer decrement between these two covering fractions is $\sim1\%$. The calculated values for the relative error do not depend strongly on the scale length adopted but do increase if a steeper attenuation gradient is adopted (e.g. a relative error of 5\% for an attenuation gradient of -0.1 mags/kpc). We conclude that the distribution of the Balmer decrement shown in Figure \ref{fig:dec_norm} is consistent with the typical (shallow) gradients found in local spiral galaxies \citep[e.g.][]{Munoz-Mateos2009}. As we have shown in this section, the measured dust extinction and SFRs are not significantly biased by aperture effects. However, because aperture effects are present in the SDSS sample in a complicated way, we also test whether the observed MDSR is affected by aperture bias. In Figure \ref{fig:cf_relation} we plot the MDSR by applying an increasing minimum aperture covering fraction requirement. In Figure \ref{fig:cf_relation}a-d we apply a minimum covering fraction of 0, 0.1, 0.2 and 0.3 in selecting data and determine the MDSR using $\sim$157,000, $\sim$143,000, $\sim$98,000 and $\sim$52,000 galaxies, respectively. A very similar MDSR is observed in Figure \ref{fig:cf_relation}a-d. 
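The flux-weighted Balmer decrement calculation described above can be sketched as a one-dimensional integral; the extinction-curve values, central extinction and profile parameters below are assumed, illustrative numbers:

```python
import numpy as np

# Assumed, illustrative numbers: approximate Cardelli-like curve values
# for R_V = 3.1, a central A_v of 1 mag, the -0.023 mag/kpc gradient of
# Munoz-Mateos et al. (2009) and a 4 kpc Halpha scale length.
K_HA, K_HB, R_V = 2.53, 3.61, 3.1

def fiber_decrement(r_ap_kpc, av0=1.0, grad=-0.023, scale=4.0):
    """Flux-weighted Halpha/Hbeta ratio measured inside radius r_ap_kpc."""
    r = np.linspace(0.0, r_ap_kpc, 2000)
    av = np.clip(av0 + grad * r, 0.0, None)               # radial A_v profile
    sigma_ha = np.exp(-r / scale)                          # intrinsic Halpha profile
    f_ha = sigma_ha * 10**(-0.4 * av * K_HA / R_V)         # attenuated Halpha
    f_hb = sigma_ha / 2.86 * 10**(-0.4 * av * K_HB / R_V)  # attenuated Hbeta
    # ratio of aperture-integrated fluxes (2*pi*r*dr weights; dr cancels)
    return np.sum(f_ha * r) / np.sum(f_hb * r)

small_aperture = fiber_decrement(2.0)    # small covering fraction
near_global = fiber_decrement(20.0)      # close to a global measurement
```

With a negative gradient the small aperture samples the dustier center, so the measured decrement mildly overestimates the near-global value, consistent with the few-percent offsets quoted above.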
We conclude that selection and aperture effects are not significant in our determination of the MDSR. \section{Discussion} In this contribution we have presented the MDSR for local star-forming galaxies. The physics of dust production, destruction and evolution are not well understood \citep[for review see][]{Draine2003} and it is beyond the scope of this study to make detailed considerations of these processes. It is well established that dust forms from metals and therefore it is not surprising that a strong correlation between dust and metals is observed \citep[e.g.][]{Garn2010b}. \citet{Zahid2012b} derive a relation between stellar mass, metallicity and dust extinction. They show that the dust extinction increases with \emph{both} stellar mass and metallicity. That is, dust extinction increases with stellar mass and, at a fixed stellar mass, the metallicity and dust extinction are positively correlated. The correlation between stellar mass, dust extinction and metallicity is also evident from PCA \citep{Garn2010b, Xiao2012}. The MDSR shows similar trends as the MZSR. While dust extinction can be easily estimated from the Balmer decrement, metallicity is a substantially more difficult physical quantity to measure. Various methods have been developed to determine the gas-phase oxygen abundance from the emission line properties of star-forming galaxies \citep[for a review of methods see][]{Kewley2008}. The most commonly used methods rely on theoretically or empirically calibrating ratios of strong collisionally excited emission lines to recombination lines. However, these so-called strong line methods suffer from various systematic uncertainties. In particular, \citet{Kewley2008} show that the metallicity determined for the same sample of galaxies can differ by up to $\sim$0.6 dex depending on the choice of calibration. Thus the observed MDSR presented in this study helps to independently establish the observed MZSR presented in \citet{Yates2012}. 
The observed MZ relation and its second parameter dependencies serve as a Rosetta stone for understanding the physical processes responsible for the chemical evolution of galaxies. Because dust is formed from metals, we favor a common physical origin for the twist observed in both the MDSR presented in this study and the MZSR investigated by \citet{Yates2012}. While dust may be destroyed, metals cannot, and therefore the observed MDSR cannot be explained by destruction processes unless the similarities in the MDSR and MZSR are taken to be coincidental. Moreover, given that metals are locked up in dust, we may expect an opposite trend in the MDSR and MZSR if dust destruction is the responsible mechanism since the destruction of dust grains should liberate the constituent metals, thus increasing the gas-phase abundance. The metallicities of galaxies are set by a balance between star formation and gas flows. Dust extinction is, to first order, insensitive to the infall of metal-poor gas and cannot be diluted. Therefore the observed twist in the MDSR and the MZSR cannot \emph{solely} be explained as a consequence of gas dilution. We consider differential mass loss in galaxies, which is related to the SFR, a natural explanation of the observed twist in the MDSR and MZSR. \citet{Whitaker2012} study the relation between stellar mass and SFR out to $z=2.5$. They conclude that the dust attenuation observed in star-forming galaxies increases with stellar mass. They find that for galaxies with stellar masses $\gtrsim10^{10} M_\odot$ dusty, blue galaxies populate the upper envelope of the scatter in the MS relation. Conversely, red low-dust galaxies have lower observed SFRs and are interpreted to possibly be in the process of shutting down star formation. They show that the MS relation at $1 < z < 1.5$ for red galaxies has a shallower slope than the MS relation for blue galaxies. 
It is possible that at lower stellar masses the two relations cross, such that a similar reversal of the correlation between SFR and dust attenuation at a fixed stellar mass may be present in the high redshift population of star-forming galaxies \citep[see Figure 4 of][]{Whitaker2012}. However, due to incompleteness no firm conclusions can be drawn regarding the correlation between SFR and dust attenuation at low stellar masses. The results of our study are consistent with the interpretation of \citet{Whitaker2012}. In particular, decreasing our S/N threshold to S/N $>3$ (Section 4.2) results in the inclusion of a significant fraction of quiescent galaxies as identified by the S\'ersic index analysis of \citet{Wuyts2011}. Figure \ref{fig:tau_sn} shows continuity in the correlation between dust extinction and SFR observed at stellar masses $>10^{10} M_\odot$. This suggests that the physical mechanism responsible for the correlation between SFR and dust extinction at higher stellar masses and the twist in the MDSR and MZSR may be related to the physical processes leading to the shut down of star formation and the migration of galaxies to the red sequence. \section{Summary} In this study we have investigated the relation between stellar mass, SFR and dust extinction. We conclude that: \begin{itemize} \item{Our analysis is consistent with the conclusions of \citet{Garn2010b} that the strongest correlation in the data is the \emph{positive} correlation between stellar mass, SFR and dust extinction.} \item{The relation between SFR and dust extinction \emph{at a fixed stellar mass} is mass dependent. At a fixed stellar mass, an \emph{anti}-correlation between the SFR and dust extinction is observed for galaxies with stellar masses $<10^{10}M_\odot$. There is a sharp transition at a stellar mass of $10^{10}M_\odot$.
In galaxies with larger stellar masses there is a \emph{positive} correlation between the SFR and dust extinction at a fixed stellar mass.} \item{The relation between stellar mass, metallicity and SFR \citep[see][]{Yates2012} shows the same trends as the relation between stellar mass, dust extinction and SFR. Unlike metals, dust cannot be diluted by inflows of pristine gas. The observed stellar mass, dust extinction and SFR relation provides important new constraints for understanding the physical processes governing the chemical evolution of galaxies.} \item{Quiescent galaxies are observed to populate the high mass, low SFR part of the stellar mass - SFR diagram. When including quiescent galaxies in the sample we find continuity in the correlation between dust extinction and SFR at stellar masses $>10^{10} M_\odot$. The physical processes responsible for the relation between stellar mass, dust extinction and SFR at the high mass end may be related to the physical processes leading to the shutdown of star formation.} \end{itemize} In a forthcoming paper we develop a model incorporating momentum driven outflows as a possible explanation for the observed relation between stellar mass, dust extinction and SFR presented here. \acknowledgements We thank the anonymous referee for careful reading of the manuscript and G. Cresci for useful comments. HJZ and LJK gratefully acknowledge support by NSF EARLY CAREER AWARD AST07-48559. RMY acknowledges the financial support of the Deutsche Forschungsgemeinschaft (DFG). RPK acknowledges support by the Alexander-von-Humboldt Foundation and the hospitality of the Max-Planck-Institute for Astrophysics in Garching where part of this work was carried out.
\section{The AGILE contribution to the study of Active Galactic Nuclei} In contrast to previous $\gamma$-ray instruments based on gas spark chambers (like COS B and EGRET), the tracker of AGILE, using silicon strip detectors, does not need a separate triggering device. This makes it possible to greatly increase the incidence angle of the accepted photons, resulting in a sensitivity that remains close to the on-axis value within a very large field of view. While EGRET at an angle of 30$^{\circ}$ has only $\sim$10\% of the on-axis effective area (Thompson et al. 1993), the AGILE sensitivity is almost uniform within a radius of $\sim$ 50-60$^{\circ}$. For example, a single pointing toward 3C~273 (Fig.1) allows us to simultaneously observe more than 20 sources (including 3C~279, Cen A, Mrk 421 and many other blazars) and to look for significant flaring events in each of them. A $\gamma$-ray outburst similar to that observed from the radio-loud quasar PKS 0528+134 in March 1993 (Mukherjee et al. 1996) can easily be detected by AGILE with an integration time of only one day (Fig.2). We plan to routinely search for such events with a quick-look analysis of the data and to inform the scientific community in order to react as soon as possible with coordinated observations. Another advantage of the large field of view is that the total exposure factor (cm$^2$ s) for each direction of the sky will be greater than that obtained by an instrument with a smaller field of view. We have estimated that, after a sequence of pointings equal to that carried out in the EGRET phases 1 and 2 ($\sim$2 years of observations), the average (over the whole sky) exposure factor reached by AGILE would be greater by a factor $\sim$4. Since this translates into a limiting flux lower by a factor of $\sim$2, we can roughly expect, assuming a $\gamma$-ray LogN-LogS with slope 3/2, about three times more AGNs than the $\sim$70 discovered by EGRET.
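The source-count estimate can be made explicit (a short worked step added here; it assumes background-limited sensitivity, so that the limiting flux scales as the inverse square root of the exposure):

```latex
% Exposure factor larger by ~4  =>  limiting flux lower by sqrt(4) = 2.
% With an integral source-count law N(>S) \propto S^{-3/2}:
\[
  \frac{N_{\rm AGILE}}{N_{\rm EGRET}}
  = \left( \frac{S_{\rm lim,\,EGRET}}{S_{\rm lim,\,AGILE}} \right)^{3/2}
  = 2^{3/2} \simeq 2.8 \,,
\]
% i.e. roughly 3 x 70 ~ 200 AGNs within reach of AGILE.
```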
This is actually a lower limit since a further increase in sensitivity in confused regions at low galactic latitude is obtained thanks to the better angular resolution of AGILE (Morselli et al. 1998). \begin{figure} \epsfysize=6.cm \hspace{2.cm}\epsfbox{mereghetti_fig1.ps} \caption[h]{Comparison between the AGILE and EGRET fields of view for a pointing toward 3C~279 and 3C~273 (the two regions correspond to radii of 25$^{\circ}$ and 60$^{\circ}$).} \end{figure} \begin{figure} \epsfysize=6.cm \hspace{2.cm}\epsfbox{mereghetti_fig2.ps} \caption[h]{Expected AGILE sensitivity (solid lines) for observations of 1 day and 1 week.} \end{figure}
\section{Introduction} This paper presents a theory that yields exponential growth of the horizontally-averaged magnetic field ({\it i.e., } a large-scale dynamo) in the presence of a time-mean horizontal shear flow and a randomly fluctuating, 3D, barotropic force ({\it i.e., } with spatial variations only within the mean shearing plane) in incompressible magnetohydrodynamics (MHD). This configuration provides perhaps the simplest paradigm for a dynamo without special assumptions about the domain geometry or forcing ({\it e.g., } without mean kinetic helicity). We call it the elemental shear dynamo (ESD). There is a long history of dynamo theory \citep{Moffatt78,Krause80,Roberts92,Brandenburg05}, but much of it consists of {\it ad hoc} closure {\it ans\"atze} ({\it i.e., } not derived from fundamental principles but devised to produce the intended behavior of the solutions) for how fluctuating velocity and magnetic fields act through the mean electromotive force curl to amplify the large-scale magnetic field. Here the horizontal-mean magnetic field equation is derived within the ``quasi-linear'' dynamical approximations of randomly forced linear shearing waves and flow-induced magnetic fluctuations. In the standard {\it ansatz} \citep{Moffatt78}, the mean-field equation in dynamo theory has the functional form of \begin{equation} \partial_t \overline{\vec{B}} = \mathsfbi{L} \cdot \overline{\vec{B}} + \mathsfbi{D} : \nabla \overline{\vec{B}} + \dots \,, \label{eq:ansatz} \end{equation} where the over-bar indicates some suitably defined average; $\overline{\vec{B}}$ is the mean magnetic field; and $\mathsfbi{L}$ and $\mathsfbi{D}$ are second- and third-order tensor operators (often denoted by $\alpha$ and $\beta$) that express the statistical effects of the velocity field $\vec{v}$ through the curl of the mean electromotive force, $\overline{\nabla\times(\vec{v}\times\vec{B})}$.
The dots encompass possible higher-order derivatives of $\overline{\vec{B}}$ (which would be relatively small if there were a spatial scale separation between the mean field and the fluctuations) and resistive diffusion. If $\vec{v}$ itself is steady in time, then (\ref{eq:ansatz}) is an exact form for the electromotive effect, and the kinematic dynamo problem can be viewed as an eigenvalue problem for the exponential growth rate $\gamma$ given $\overline{\vec{v}}$; in this case, however, there will be no scale separation between $\overline{\vec{v}}$ and $\overline{\vec{B}}$, and $\gamma$ may not be positive. An important weakness in such an {\it ansatz} is the lack of justification for particular forms of $\mathsfbi{L}$ and $\mathsfbi{D}$ in time-dependent flows. We will see that the ESD theory provides a clear justification, and it mostly does not fit within the {\it ansatz} (\ref{eq:ansatz}) because the tensors are time-integral operators except in particular limits (Sec. \ref{sec:limit}). The ESD problem specifies a steady flow with uniform shear $S$, a small initial seed amplitude and vertical wavenumber $k_z$ for the mean magnetic field, and a particular horizontal wavenumber $\vec{k}_{\perp f}$ and correlation time $t_f$ for the random force. It defines an ensemble of random-force time series that each gives rise to a statistically stationary velocity field, and the induced dynamo behavior is assessed over long integration times with further ensemble averaging. This paper takes a general parametric view of the ESD derivation and solutions. A parallel report utilizing a minimal proof-of-concept derivation for the treble limit of small kinetic and magnetic Reynolds numbers and weak mean shear is in \citet{Heinemann11a}; the relation between the two papers is described in Sec. \ref{sec:limit.S}. The experimental basis for developing the ESD theory is the 3D MHD simulations in \citet{Yousef08b,Yousef08a}. 
They show a large-scale dynamo in a uniform shear flow with a random, small-scale force at intermediate kinetic and magnetic Reynolds numbers. Their dynamo growth rate is not affected by a background rotation, even Keplerian. Additionally, new 2$^{+}$D simulations --- a barotropic velocity with spatial variations only within the mean shearing plane $(x,y)$ and a magnetic field with $(x,y)$ variations plus a single wavenumber $k_z$ in the vertical direction $z$ perpendicular to the plane --- also manifest a large-scale dynamo \citep{Heinemann11b}. Furthermore, within this 2$^{+}$D model, successive levels of truncation of Fourier modes in the shearing-plane wavenumber demonstrate that its dynamo behavior persists even into the quasi-linear situation for which the mean-field theory is derived here. Thus, the dynamo solutions of the ESD theory are a valid explanation for computational dynamo behavior well beyond the asymptotic limit of vanishing magnetic Reynolds number. From general MHD for fluctuations in a shear flow (Sec. \ref{sec:govern}), a quasi-linear model is developed for shearing waves (Sec. \ref{sec:dynamics}) and for induced magnetic fluctuations and the horizontal-mean magnetic field evolution equation with dynamo solutions (Sec. \ref{sec:induction}). Analytic expressions for the dynamo growth rate $\gamma$ are derived in Sec. \ref{sec:limit} for several parameter limits, and general parameter dependences are surveyed in Sec. \ref{sec:general}. Section \ref{sec:summary} summarizes the results and anticipates future generalizations and tests.
\section{Governing Equations} \label{sec:govern} The equations of incompressible MHD are the Navier-Stokes equation for velocity $\vec{v}$, \begin{equation} \partial_t\vec{v} + \vec{v}\cdot\nabla\vec{v} = -\,\frac{1}{\rho}\nabla p + \vec{B}\cdot\nabla\vec{B} + \nu\nabla^2\vec{v} + \vec{f} \,, \label{eq:Navier-Stokes} \end{equation} where $\vec{f}$ is a prescribed forcing function, density $\rho$ is constant, and pressure $p$ is determined by the constraint, \begin{equation} \nabla\cdot\vec{v} = 0 \,, \label{eq:incompressible} \end{equation} and the magnetic induction equation for $\vec{B}$ (in velocity units), \begin{equation} \partial_t\vec{B} + \vec{v}\cdot\nabla\vec{B} = \vec{B}\cdot\nabla\vec{v} + \eta\nabla^2\vec{B} \,, \label{eq:induction} \end{equation} with \begin{equation} \nabla\cdot\vec{B} = 0 \,. \label{eq:Bincompressible} \end{equation} An exact, conservative solution to the above equations is given by an unmagnetized, uniform shear flow of the form \begin{equation} \vec{v} = S x\vec{e}_y,\quad\vec{B} = 0 \,, \label{eq:background-solution} \end{equation} where the shear rate $S$ is a constant in space and time and $\vec{e}$ denotes a unit vector. To study the dynamics of fluctuations on top of the background shear flow (\ref{eq:background-solution}), we rewrite the equations of motion in terms of the velocity fluctuations $\vec{u}$ defined through \begin{equation} \vec{v} = S x\vec{e}_y + \vec{u} \,. \label{eq:velocity-fluctuations} \end{equation} Assume that the volume average of $\vec{u}$ is zero. 
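That (\ref{eq:background-solution}) is indeed an exact solution can be verified term by term (a short check added here):

```latex
% With \vec{v} = Sx\,\vec{e}_y and \vec{B} = 0:
\[
  \vec{v}\cdot\nabla\vec{v} = Sx\,\partial_y\bigl(Sx\,\vec{e}_y\bigr) = 0 \,, \qquad
  \nabla^2\vec{v} = 0 \,, \qquad
  \nabla\cdot\vec{v} = \partial_y\bigl(Sx\bigr) = 0 \,,
\]
% so (\ref{eq:Navier-Stokes})-(\ref{eq:incompressible}) hold with constant
% pressure and \vec{f} = 0, while (\ref{eq:induction}) and
% (\ref{eq:Bincompressible}) are trivially satisfied by \vec{B} = 0.
```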
Substituting (\ref{eq:velocity-fluctuations}) into (\ref{eq:Navier-Stokes}) and (\ref{eq:induction}) yields \begin{equation} \mathcal{D}\vec{u} + \vec{u}\cdot\nabla\vec{u} + S u_x\vec{e}_y = -\nabla p + \vec{B}\cdot\nabla\vec{B} + \nu\nabla^2\vec{u} \label{eq:Navier-Stokes-fluctuations} \end{equation} and \begin{equation} \mathcal{D}\vec{B} + \vec{u}\cdot\nabla\vec{B} = \vec{B}\cdot\nabla\vec{u} + S B_x\vec{e}_y + \eta\nabla^2\vec{B} \,, \label{eq:induction-fluctuations} \end{equation} where \begin{equation} \mathcal{D} = \partial_t + S x\partial_y \,. \label{eq:curly-D} \end{equation} The only explicit coordinate dependence in (\ref{eq:Navier-Stokes-fluctuations}) and (\ref{eq:induction-fluctuations}) arises through the differential operator (\ref{eq:curly-D}), which contains the cross-stream coordinate $x$. This means that we can trade the explicit $x$-dependence for an explicit time dependence by a transformation to a shearing-coordinate frame, defined by \begin{equation} x' = x, \quad y' = y - Stx, \quad z' = z, \quad t' = t \,. \end{equation} Partial derivatives with respect to primed and unprimed coordinates are related by \begin{equation} \partial_{x'} = \partial_x + S t\partial_y \,, \quad \partial_{y'} = \partial_y \,, \quad \partial_{z'} = \partial_z \,, \quad \partial_{t'} = \partial_t + S x\partial_y = \mathcal{D} \,, \end{equation} which shows that the explicit spatial dependence is indeed eliminated in the shearing frame.
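As a one-line check of these relations (a worked step added here), invert the coordinate map to $x = x'$, $y = y' + St'x'$, $z = z'$, $t = t'$ and apply the chain rule:

```latex
\[
  \partial_{t'}
  = \frac{\partial t}{\partial t'}\,\partial_t
  + \frac{\partial y}{\partial t'}\,\partial_y
  = \partial_t + Sx'\,\partial_y
  = \partial_t + Sx\,\partial_y = \mathcal{D} \,,
\]
% and likewise \partial_{x'} = \partial_x + (\partial y/\partial x')\,\partial_y
%                            = \partial_x + St\,\partial_y .
```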
Therefore in shearing coordinates there are spatially periodic solutions, in particular a Fourier amplitude and phase factor, expressed alternatively as \begin{align} \chi(x,y,z,t) &= \mathrm{Re}\left\{\, \hat{\chi}(t)\exp\Bigl[\ensuremath{\mathrm{i}} k_x(t)x + \ensuremath{\mathrm{i}} k_y y + \ensuremath{\mathrm{i}} k_z z\Bigr] \,\right\} \nonumber \\ &= \mathrm{Re}\left\{\, \hat{\chi}(t')\exp\Bigl[\ensuremath{\mathrm{i}} k_{x0}x' + \ensuremath{\mathrm{i}} k_{y} y' + \ensuremath{\mathrm{i}} k_z z'\Bigr] \,\right\} \,, \label{eq:FT1} \end{align} where the transverse wavenumber $k_y$ and the spanwise wavenumber $k_z$ are constant in both coordinate frames, but the streamwise wavenumber $k_x$ varies in time according to $k_x(t) = k_{x0} - S k_y t$. For an observer in the unprimed (``laboratory'') coordinate system, a disturbance that varies along the streamwise direction stretches out as a result of being differentially advected by the background shear flow; for an observer in the shearing frame the Fourier phase has fixed wavenumbers $(k_{x0},k_y,k_z)$. \section{Dynamics} \label{sec:dynamics} \subsection{Simplifications} Guided by the experimental demonstrations of the shear dynamo \citep{Yousef08b,Yousef08a,Heinemann11b}, we make the following simplifying assumptions: \begin{enumerate} \item The magnetic field strength is sufficiently small so that there is no back reaction onto the flow. In this so-called kinematic regime, we drop the Lorentz force. \item The 3D forcing is restricted to two-dimensional spatial variations in the horizontal $(x,y)$ plane ({\it i.e., } barotropic flow with $\partial_z\vec{u} = \partial_z p = 0$). (With this assumption it makes no difference whether the system is rotating around the $\vec{e}_z$ axis or has a stable density stratification aligned with $\vec{e}_z$. For these dynamical influences to matter, $\vec{u}$ has to have 3D spatial dependence.)
In this case the dynamics reduce to forced 2D advection-diffusion equations for the vertical velocity, $u_z$, and the vertical vorticity, $\omega_z = \vec{e}_z\cdot(\nabla_\perp\times\vec{u}_\perp)$; {\it viz., } \begin{align} \mathcal{D}u_z + \vec{u}_\perp\cdot\nabla_\perp u_z &= \nu\nabla_\perp^2 u_z + f_z \nonumber \\ \mathcal{D}\omega_z + \vec{u}_\perp\cdot\nabla_\perp\omega_z &= \nu\nabla_\perp^2 \omega_z + \vec{e}_z\cdot(\nabla_\perp\times\vec{f}_\perp) \,. \label{eq:2Ddynamics} \end{align} We use a notation for a horizontal vector as \begin{equation} \vec{a}_\perp = a_x\vec{e}_x + a_y\vec{e}_y \,. \end{equation} Because $\vec{u}$ has no $z$ dependence, the non-divergence condition reduces to $\nabla_\perp\cdot\vec{u}_\perp = 0$, and we introduce a streamfunction $\Phi$ for the horizontal velocity and its associated vertical vorticity: \begin{equation} \vec{u}_\perp = \vec{e}_z \times \nabla_\perp \Phi \,, \qquad \omega_z = \nabla_\perp^2 \Phi \,. \label{eq:psi} \end{equation} \item Fluctuation advection is neglected in (\ref{eq:2Ddynamics}), so the vertical momentum and vorticity balances are linear. \begin{align} \mathcal{D}u_z &= \nu\nabla^2u_z + f_z \nonumber \\ \mathcal{D}\omega_z &= \nu \nabla_{\perp}^2 \omega_z + \vec{e}_z \cdot (\nabla_\perp \times \vec{f}_\perp) \,. \label{eq:Navier-Stokes-kinematic} \end{align} \end{enumerate} \subsection{Conservative Shearing Waves} \label{sec:con-wave} For linearized conservative dynamics ($\vec{f} = 0$, $\nu = 0$), (\ref{eq:Navier-Stokes-kinematic}) is \begin{equation} \mathcal{D} u_z = \mathcal{D} \omega_z = 0 \,. 
\label{eq:conservative} \end{equation} The Fourier mode solutions are \begin{align} u_z &= \mathrm{Re}\left\{\, \hat u_{z0} \, e^{\ensuremath{\mathrm{i}} \phi} \,\right\} \nonumber \\ \omega_z &= \mathrm{Re}\left\{\, \hat \omega_{z0} e^{\ensuremath{\mathrm{i}} \phi} \,\right\} \,, \label{eq:Fouriermode} \end{align} with a phase function that can be alternatively expressed in shearing or laboratory coordinates as \begin{equation} \phi = k_x' x' + k_y' y' = k_x(t)x + k_{y0} y \,. \label{eq:phi-cons} \end{equation} The constants $k_x'= k_{x0}$, $k_y' = k_{y0}$, $\hat u_{z0}$, and $\hat \omega_{z0}$ are set by the initial conditions, and a tilting $x$-wavenumber is defined by $k_x(t) = k_{x0} - Sk_{y0} t$. From (\ref{eq:psi}) the associated horizontal velocity is \begin{equation} \vec{u}_\perp = \frac{- \, \vec{e}_z \times \vec{k}_\perp(t)} {k_\perp^2(t)} \, \mathrm{Re}\left\{\,\ensuremath{\mathrm{i}} \, \hat{\omega}_{z0} e^{\ensuremath{\mathrm{i}} \phi} \,\right\} \,, \label{eq:Fourier_perp} \end{equation} where $k_\perp^2 = k_x^2 + k_{y0}^2$. Notice that $\vec{u}_\perp(t)$ grows when $k_x(t)/k_{y0} > 0$ by extracting kinetic energy from the mean shear (an up-shear phase tilt), and it decays when $k_x(t)/k_{y0} < 0$ (down-shear). As $t\rightarrow \infty$, $\vec{u}_\perp \rightarrow 0$ for any $\vec{k}_0$. This shearing wave behavior is sometimes called the Orr effect. \subsection{Single-Mode Forcing} \label{sec:force} In a quasi-linear theory the random fluctuations can be Fourier decomposed into horizontal wavenumbers, and the resulting velocity and magnetic fields summed over wavenumber. It suffices to examine a single wavenumber forcing to demonstrate the ESD process ({\it cf., } (\ref{eq:superpose})).
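The Orr amplification described in Sec.~\ref{sec:con-wave} can be quantified with a short worked step (added here for concreteness). From (\ref{eq:Fourier_perp}) the speed envelope is $|\vec{u}_\perp(t)| = |\hat{\omega}_{z0}|/k_\perp(t)$, with

```latex
\[
  k_\perp^2(t) = \bigl(k_{x0} - S k_{y0} t\bigr)^2 + k_{y0}^2 \,,
\]
% minimized when k_x(t^*) = 0, i.e. at t^* = k_{x0}/(S k_{y0}), which is
% reachable (t^* > 0) only for an up-shear initial tilt, k_{x0}/k_{y0} > 0
% when S > 0. The peak transient amplification relative to t = 0 is
%   k_\perp(0)/|k_{y0}| = \sqrt{1 + (k_{x0}/k_{y0})^2} ,
% after which |u_\perp| decays like 1/(S |k_{y0}| t), consistent with
% u_\perp -> 0 as t -> infinity.
```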
When $\vec{f}(x,y)$ is restricted to a single horizontal wavenumber in the laboratory frame $\vec{k}_{\perp f}$, we have \begin{equation} \vec{f} = \mathrm{Re}\left\{\, \hat{\vec{f}}(t) e^{\ensuremath{\mathrm{i}} \phi_f} \,\right\} \,, \label{eq:sm_forcing} \end{equation} where the Fourier coefficient $\vec{\hat{f}}$ is specified as a random process (or, alternatively, held steady in time; see below). The spatial phase of the forcing is fixed in laboratory coordinates: \begin{equation} \phi_f = k_{xf} x + k_{yf} y \,. \label{eq:phif} \end{equation} The non-divergence condition on the Fourier coefficient in (\ref{eq:sm_forcing}) is $\vec{k}_{\perp f}\cdot\vec{\hat{f}}_\perp = 0$; hence we can write \begin{equation} \vec{\hat{f}}_\perp = \hat{f}_\perp\vec{e}_{\perp f} \,, \qquad {\rm with} \qquad \vec{e}_{\perp f} = \frac{\vec{e}_z\times\vec{k}_{\perp f}}{k_{\perp f}} \end{equation} the unit vector perpendicular to the forcing wavevector. Here $k_{\perp f} = |\vec{k}_{\perp f}|$. The forcing coefficient is thus \begin{equation} \vec{\hat{f}} = \hat{f}_\perp\vec{e}_{\perp f} + \hat{f}_z\vec{e}_z \,. \end{equation} Taking the cross product of $\vec{k}_{\perp f}$ with $\vec{\hat{f}}$ yields \begin{equation} \vec{k}_{\perp f}\times\vec{\hat{f}} = k_{\perp f}(\hat{f}_\perp\vec{e}_z - \hat{f}_z\vec{e}_{\perp f}) \,. \end{equation} This is used to define two further relations. The forcing coefficient for vertical vorticity is \begin{equation} \hat{o}_z = \vec{e}_z \cdot \ensuremath{\mathrm{i}} \vec{k}_{\perp f}\times\vec{\hat{f}} = \ensuremath{\mathrm{i}} k_{\perp f} \hat{f}_\perp \,.
\end{equation} The spatially-averaged forcing helicity (defined by $H= \Big\langle\, \vec{f} \cdot \nabla\times\vec{f}\,\Big\rangle^{\vec{x}}$ where brackets denote an average in the indicated superscript coordinate) associated with a single Fourier mode is \begin{equation} \hat{H}(t) = \frac{1}{2} \, \mathrm{Re}[ \vec{\hat{f}}^\ast\cdot (\ensuremath{\mathrm{i}}\vec{k}_{\perp f}\times\vec{\hat{f}}) ] = \mathrm{Re}[ \hat{f}_z^\ast \hat{o}_z ] \,, \label{eq:helicity} \end{equation} which is a real number. The asterisk denotes a complex conjugate, and we now incorporate a caret symbol in $\hat{H}(t)$ to be consistent with other forcing amplitudes. The Fourier mode coefficients $\hat{f}_z(t)$ and $\hat{o}_z(t)$ are complex random time series that are mutually independent between their real and imaginary parts and between each other, and they have zero means. We consider an ensemble of many realizations for these time series. (We will also analyze solutions with steady forcing ({\it i.e., } with $\hat{\vec{f}}$ fixed in time with values taken from the same random distribution).) For a given realization, we generate the forcing coefficients from an Ornstein-Uhlenbeck process with a finite correlation time, $t_f$. Thus, \begin{eqnarray} & {\cal E}\Big[ \hat{f}_z^\ast(t_1) \hat{f}_z(t_2) \Big] = F_z \exp\Bigl[ - \, |t_1-t_2|/t_f \Bigr] \nonumber \\ & {\cal E}\Big[ \hat{o}_z^\ast(t_1) \hat{o}_z(t_2) \Big] = O_z \exp\Bigl[ - \, |t_1-t_2|/t_f \Bigr] \nonumber \\ & {\cal E}\Big[ \hat{f}_z^\ast(t_1) \hat{o}_z(t_2) \Big] = 0 \,, \label{eq:fcor} \end{eqnarray} where ${\cal E}$ is the expectation value averaged over fluctuations and $F_z$ and $O_z$ are positive forcing variances. In particular, the helicity has zero mean, ${\cal E}\Big[ \hat{H}(t) \Big] = 0$. \subsection{Stochastic, Viscous Shearing Waves} \label{sec:Stochastic_waves} We assume single-mode forcing. For simplicity we assume that the fluid is at rest at $t=0$.
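The forcing covariances (\ref{eq:fcor}) correspond to an Ornstein-Uhlenbeck process with correlation time $t_f$. A minimal numerical sketch of such a series follows (our own discretization and parameter choices, and real-valued for brevity; in the model $\hat{f}_z$ and $\hat{o}_z$ are complex, with independent real and imaginary parts each generated this way):

```python
import math
import random

def ou_series(n_steps, dt, t_f, variance, seed=0):
    """Discretized Ornstein-Uhlenbeck series with stationary variance
    `variance` and autocorrelation exp(-|t1 - t2| / t_f), as in the
    forcing covariances, using the exact one-step update."""
    rng = random.Random(seed)
    a = math.exp(-dt / t_f)                      # one-step memory factor
    noise_sd = math.sqrt(variance * (1.0 - a * a))
    x = rng.gauss(0.0, math.sqrt(variance))      # start in equilibrium
    out = []
    for _ in range(n_steps):
        out.append(x)
        x = a * x + rng.gauss(0.0, noise_sd)
    return out

def autocorr(series, lag):
    """Empirical autocorrelation of a series at an integer lag."""
    n = len(series)
    m = sum(series) / n
    var = sum((s - m) ** 2 for s in series) / n
    cov = sum((series[i] - m) * (series[i + lag] - m)
              for i in range(n - lag)) / (n - lag)
    return cov / var
```

For example, with $t_f = 1$ and $dt = 0.1$, the empirical autocorrelation at a lag of ten steps should approach $e^{-1}$ for a long enough series.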
The resulting solutions to (\ref{eq:psi})-(\ref{eq:Navier-Stokes-kinematic}) are \begin{align} u_z(x,y,t) &= \int_0^t \, \ensuremath{\mathrm{d}}{}\mu \, G_\nu(t,\mu) \, \mathrm{Re}\left\{\, \hat{f}_z(\mu) \, e^{\ensuremath{\mathrm{i}} \phi(\mu)} \,\right\} \nonumber \\ \omega_z(x,y,t) &= \int_0^t \, \ensuremath{\mathrm{d}}{}\mu \, G_\nu(t,\mu) \, \mathrm{Re}\left\{\, \hat{o}_z(\mu) \, e^{\ensuremath{\mathrm{i}} \phi(\mu)} \,\right\} \nonumber \\ \vec{u}_\perp(x,y,t) &= \int_0^t \, \ensuremath{\mathrm{d}}{}\mu \, G_\nu(t,\mu) \, \left(\frac{- \, \vec{e}_z \times \vec{k}_\perp(t-\mu)} {k_\perp^2(t-\mu)}\right) \, \mathrm{Re}\left\{\,\ensuremath{\mathrm{i}} \hat{o}_z(\mu) \, e^{\ensuremath{\mathrm{i}} \phi(\mu)} \,\right\} \,, \label{eq:velocity} \end{align} which can be verified by substitution into the dynamical equations. The wavevector is $\vec{k}_\perp(t) = (k_x(t),k_{yf})$ with $k_x(t) = k_{xf} - Sk_{yf} t$ and $k_\perp^2(t) = k_x^2(t) + k_{yf}^2$. The phase function $\phi$ represents continuous forcing at the single, laboratory-frame wavenumber $\vec{k}_{\perp f}$, and its evolving shear tilting is expressed in $k_x(t)$. We can write it in either the sheared or laboratory coordinate frame: \begin{align} \phi(x',y',t'; \mu) \ &= \ (k_{xf} + Sk_{yf} \mu)x' + k_{yf}y' \nonumber \\ \phi(x,y,t; \mu) \ \ & = \ k_x(t-\mu) x + k_{yf}y \ = \ \vec{k}_\perp (t-\mu) \cdot {\bf x}\,, \label{eq:phase} \end{align} where $k_x(t-\mu) = k_{xf} - Sk_{yf} (t-\mu)$. The viscous damping effect is expressed by the decay factor, \begin{equation} G_\nu(t,\mu) = \exp\Bigl[ - \, \nu \, \int_\mu^t \, \ensuremath{\mathrm{d}}{}\rho \, k_\perp^2(\rho - \mu)\Bigr] = \exp\Bigl[ - \, \nu \, \int_0^{t-\mu} \, \ensuremath{\mathrm{d}}{}\zeta \, k_\perp^2(\zeta)\Bigr]\,, \label{eq:Enudef} \end{equation} which is a Green's function for (\ref{eq:2Ddynamics}). For compactness we can write this as an equivalent function of a single time difference, $G_\nu(t-\mu)$. 
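The advertised substitution check can be carried out explicitly for $u_z$ (a worked step added here). In shearing coordinates the phase $\phi(\mu)$ is independent of $t'$, so $\mathcal{D}e^{\ensuremath{\mathrm{i}}\phi(\mu)} = 0$, and differentiating the time-history integral gives

```latex
\begin{align*}
  \mathcal{D}u_z &= G_\nu(t,t)\,\mathrm{Re}\left\{\, \hat{f}_z(t)\,
      e^{\ensuremath{\mathrm{i}}\phi_f} \,\right\}
    + \int_0^t \ensuremath{\mathrm{d}}{}\mu \,
      \bigl[\partial_t G_\nu(t,\mu)\bigr] \,
      \mathrm{Re}\left\{\, \hat{f}_z(\mu)\, e^{\ensuremath{\mathrm{i}}\phi(\mu)} \,\right\} \\
  &= f_z - \nu \int_0^t \ensuremath{\mathrm{d}}{}\mu \, k_\perp^2(t-\mu)\,
      G_\nu(t,\mu)\,
      \mathrm{Re}\left\{\, \hat{f}_z(\mu)\, e^{\ensuremath{\mathrm{i}}\phi(\mu)} \,\right\}
   = f_z + \nu\nabla_\perp^2 u_z \,,
\end{align*}
% using G_\nu(t,t) = 1, \phi at \mu = t reducing to \phi_f, and
% \nabla_\perp^2 e^{i\phi(\mu)} = -k_\perp^2(t-\mu)\, e^{i\phi(\mu)} .
```

The $\omega_z$ solution is verified in exactly the same way with $\hat{f}_z$ replaced by $\hat{o}_z$.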
In the first line of (\ref{eq:phase}), $\phi$ is expressed in shearing coordinates $(x',y',t')$; note that the phase of the shearing wave is independent of $t'$, but it does depend on the forcing at the time $\mu$ when the wave was spawned. The second line is the equivalent expression in laboratory coordinates $(x,y,t)$. For compactness we write this below as $\phi(\mu)$, with the other space-time dependences implicit. If $\ \nu=0$ (hence $G_\nu=1$) and the forcing is applied only at the initial instant ({\it i.e., } $\hat{f}_z = \delta(\mu) \hat{u}_{z0}$ and $\hat{o}_z = \delta(\mu) \hat{\omega}_{z0}$), then (\ref{eq:velocity}) reduces to the conservative shearing wave (\ref{eq:Fouriermode})-(\ref{eq:Fourier_perp}). For $\nu \ne 0$, $G_\nu \rightarrow 0$ as $t -\mu \rightarrow \infty$, which implies the eventual viscous decay of any shearing wave forced at a particular time $\mu$. For the dynamo problem we assume that the velocity fluctuations reach a stationary equilibrium after a finite time, long compared to $t_f$ and to an approximate viscous decay time, $1/(k_{\perp f}^2 \nu)$. This formulation implicitly assumes nonzero viscosity, or else the random velocity variance would grow without limit and not equilibrate. \subsection{Kinetic Energy, Non-dimensionalization, and Homogeneity} \label{sec:KE_ND} Define the volume-averaged kinetic energy as \begin{equation} KE(t) = \frac{1}{2} \, \Big\langle\, \vec{u}^2 \,\Big\rangle^{x,y,z} \,, \label{eq:KEdef} \end{equation} where the angle brackets again indicate an average over the spatial coordinates. For this dynamo problem we adopt a dual normalization in the fluctuation forcing scale and in the resulting velocity scale, or equivalently the equilibrium kinetic energy: \begin{equation} k_{\perp \, f} = 1 \qquad {\rm and} \qquad {\cal E}\Big[ KE \Big] = \frac{1}{2} \quad {\rm when} \quad t \gg t_f, \ ( k_{\perp f}^2 \nu)^{-1} \,. 
\label{eq:normalize} \end{equation} Henceforth, all quantities are made non-dimensional by the implied length and velocity scales ({\it i.e., } forcing amplitude, time, magnetic field amplitude, viscosity, and resistivity). We further assume, for definiteness, that the expected value of kinetic energy (\ref{eq:normalize}) is equally partitioned between the horizontal and vertical velocity components in (\ref{eq:KEdef}): \begin{equation} {\cal E}\Big[ KE_z \Big] = {\cal E}\Big[ KE_\perp \Big] = \frac{1}{4} \,. \label{eq:part-norm} \end{equation} There are no cross-terms in $KE$ because of the statistical independence of $\hat{f}_z$ and $\hat{o}_z$ in (\ref{eq:fcor}). This partition thus gives separate normalization conditions for $F_z$ and $O_z$. We will see in Sec. \ref{sec:induction} that both $F_z$ and $O_z$ must be nonzero for the shear dynamo to exist. For the solutions in (\ref{eq:velocity}), the kinetic energy density involves products of Fourier factors, with product phases $\pm \phi(\mu) \pm \phi(\mu')$, inside a double time-history integral over $\mu$ and $\mu'$. The $z$ average is trivially 1 for a barotropic flow with no $z$ dependence in $\phi$. We assume the horizontal domain size $L$ is large compared to the forcing scale, $1/k_{\perp f}$. For the terms with summed phases, the $x$ and/or $y$ averages of $\pm2(k_{xf}x+k_{yf}y)$ are approximately 0 if $Lk_{\perp f} \gg 1$. (This could also be assured if $Lk_{yf}/2\pi$ has an integer value as part of a discretization of the forcing; Sec. \ref{sec:limit.S}.) Focusing on the remaining terms with differenced phases, we take an $x$ average over phases $\pm (k_x(t-\mu) - k_x(t-\mu'))x = \pm Sk_{yf}(\mu-\mu')x$.
After performing the $z$ and $y$ averages and substituting the forcing covariance functions (\ref{eq:fcor}), the partitioned normalization conditions from (\ref{eq:part-norm}) are equivalent to \begin{align} & F_z \, \int_0^\infty \, \ensuremath{\mathrm{d}}{}\mu \, \int_0^\infty \, \ensuremath{\mathrm{d}}{}\mu' \ G_\nu(t-\mu) \, G_\nu(t-\mu') \nonumber \\ & \qquad \qquad \exp\Bigl[ - \, |\mu - \mu'|/t_f \Bigr]\, \aver{\, \exp\Bigl[\ensuremath{\mathrm{i}} S k_{yf} (\mu - \mu')x \Bigr] \,}^x \equiv F_z C_z = 1 \nonumber \\ & O_z \, \int_0^t \, \ensuremath{\mathrm{d}}{}\mu \, \int_0^t \, \ensuremath{\mathrm{d}}{}\mu' \ G_\nu(t-\mu) \, G_\nu(t-\mu') \ \frac{ {\bf k}_\perp (t - \mu) \cdot {\bf k}_\perp (t - \mu') }{k_\perp^2(t-\mu) \, k_\perp^2(t-\mu')} \nonumber \\ & \qquad \qquad \exp\Bigl[ - \, |\mu - \mu'|/t_f \Bigr] \, \aver{\, \exp\Bigl[\ensuremath{\mathrm{i}} S k_{yf} (\mu - \mu')x \Bigr] \,}^x \equiv O_z C_\perp = 1 \,, \label{eq:general_renorm} \end{align} which are independent of $t$ as $t \rightarrow \infty$. This defines the constants $C_z$ and $C_\perp$ that then determine $F_z$ and $O_z$. It will simplify the dynamo problem in Sec. \ref{sec:mean-field} to renormalize the random forcing amplitudes by \begin{equation} \hat{f}_z^\dagger = C_z^{1/2} \hat{f}_z \,, \qquad \hat{o}_z^\dagger = C_\perp^{1/2} \hat{o}_z \,, \label{eq:f-renorm} \end{equation} whose corresponding expected variances are unity, $F_z^\dagger = C_z F_z = 1$ and $O_z^\dagger = C_\perp O_z = 1$, and the associated expected energies are $KE_z = F_z^\dagger/4$ and $KE_\perp = O_z^\dagger/4$. $C_z$ and $C_\perp$ are continuous, finite (if $\nu > 0$), and positive functions of $S$, $\nu$, $L$, $t_f$, and the forcing wavenumber orientation angle $\theta_{f}$, \begin{equation} k_{x f} = \cos\theta_{f} \,, \qquad k_{y f} = \sin\theta_{f} \,. \label{eq:forcingk} \end{equation} Note that $0 < \theta_{f} < \pi/2$ is an up-shear tilt when $S > 0$, while $\pi/2 < \theta_{f} < \pi$ is down-shear.
The extreme values $\theta_f = 0,\pi$ ($k_{yf} = 0$) are not of interest because there is no shear-tilting in (\ref{eq:phase}) and no dynamo in Secs. \ref{sec:induction}-\ref{sec:general}. We could proceed quite generally in all these parameters, but at the price of considerable complexity. Various degrees of simplification are available in different parameter limits, {\it e.g., } if the domain is large (as already partly assumed in $Lk_{\perp f} \gg 1$), $\nu \rightarrow \infty$, $S \rightarrow 0$, or $t_f \rightarrow 0$. The simplifications arise from being able to isolate and integrate over one or more of the factors in (\ref{eq:general_renorm}) while approximating the time arguments of the other factors as fixed at the dominant contributing times, insofar as those factors vary relatively slowly. Among all these parameters, the simplifying limit that seems most physically general and germane is large $L$, with provisionally finite values for the other parameters. For the rest of this section and Secs. \ref{sec:induction}-\ref{sec:dynamo}, we follow this path, and in Sec. \ref{sec:limit} some additional and alternative limits are discussed. On this path we isolate the spatial average factor in (\ref{eq:general_renorm}) by making the $x$-averaging operation explicit and integrating over its time argument, $\delta = \mu - \mu'$, asymptotically over a large interval, while setting $\mu \approx \mu'$ for the other factors (because the spatial average factor is small unless $\delta$ is small).
Thus, \begin{align} \int \, \ensuremath{\mathrm{d}}{}\delta \, \aver{\, \exp\Bigl[\ensuremath{\mathrm{i}} S k_{yf} \delta x \Bigr] \,}^x &\approx \int_{-\infty}^{\infty} \, \ensuremath{\mathrm{d}}{}\delta \, \frac{1}{L}\int_{-L/2}^{L/2} \, \ensuremath{\mathrm{d}}{}s \, \exp\Bigl[\ensuremath{\mathrm{i}} Sk_{yf} \delta s \Bigr] \nonumber \\ &= \int_{-\infty}^{\infty} \, \ensuremath{\mathrm{d}}{}\delta \, \frac{2}{Sk_{yf} \delta L} \sin\left[\frac{Sk_{yf} \delta L}{2} \right] = \frac{2\pi}{S k_{yf} L} \,. \label{eq:spatial-delta} \end{align} The final step on the second line is based on the asymptotic integral of the sine integral function, $Si$ ({\it mathworld.wolfram.com}). To achieve this approximate isolation from the viscous and forcing-correlation factors, we assume $Lk_{yf} S / \nu, \ Lk_{yf} S t_f \gg 1$, along with the previous assumption for averaging, $Lk_{yf} \gg 1$. These are not the distinguished limits of small $S$ or $t_f$ in a finite domain (Sec. \ref{sec:limit.S}), although when taken successively following (\ref{eq:spatial-delta}) such limits are well behaved (Sec. \ref{sec:limit.L}). The relation (\ref{eq:spatial-delta}) can equivalently but more compactly be expressed as \begin{equation} \aver{\, \exp\Bigl[\ensuremath{\mathrm{i}} S k_{yf} (\mu - \mu')x \Bigr] \,}^x \approx \ C_L \delta(\mu-\mu') \,, \label{eq:x-avg} \end{equation} with $C_L = 2\pi/(SLk_{yf})$. Inserting (\ref{eq:x-avg}) into (\ref{eq:general_renorm}) yields \begin{eqnarray} & C_z = C_L A_z^2\,, \qquad C_\perp = C_L A_\perp^2 \,, \nonumber \\ & A_z^2 = \int_0^\infty \, \ensuremath{\mathrm{d}}{}\rho G_\nu^2 (\rho) \,, \qquad A_\perp^2 = \int_0^\infty \, \ensuremath{\mathrm{d}}{}\rho G_\nu^2 (\rho) k_{\perp f}^{-2}(\rho) \,.
\label{eq:norm_consts} \end{eqnarray} After the normalizations (\ref{eq:normalize})-(\ref{eq:part-norm}) and the large $L$ approximation yielding (\ref{eq:x-avg}), the non-dimensional parameters of the ESD model are $S$, $\nu$, $t_f$, and $\theta_f$, plus other quantities related to $\vec{B}$ defined in Sec. \ref{sec:induction}. There is no dependence on $L$. As an aside, we examine the ensemble-mean local velocity variance, ${\cal E}\Big[ {\vec{u}^2} (x,y,z,t) \Big]$, which is different from the domain-averaged $2{\cal E}\Big[ KE \Big]$. From (\ref{eq:velocity}) and the covariance properties of the random force (\ref{eq:fcor}), {\it e.g., } the vertical velocity variance has the expected value at late time, \begin{equation} {\cal E}\Big[ u_z^2 \Big] = \int_0^\infty \ensuremath{\mathrm{d}}{}\mu \, \int_0^\infty \ensuremath{\mathrm{d}}{}\mu' \, G_\nu(t-\mu) \, G_\nu(t-\mu') \, F_z \, \exp\Bigl[ - \, |\mu-\mu'|/t_f \Bigr] \, \cos[S k_{yf}x (\mu - \mu')] \,. \label{eq:uzvar} \end{equation} This variance is independent of $t$ because nonzero viscosity renders $\vec{u}$ stationary. It is independent of $y$ and $z$, {\it i.e., } homogeneous in these coordinates. But the local variance is not in general homogeneous in $x$. In the limit $\nu \tti$, the integrals can approximately be evaluated (as discussed more fully in Secs. \ref{sec:limit.L} and \ref{sec:general}) to yield a constant value equal to $F_z^\dagger = F_z C_z$ in (\ref{eq:general_renorm}). For finite viscosity the peak variance is at $x=0$, and it decreases with $|x|$ on a scale $\sim \, 1/(Sk_{yf}t_f)$; this can be seen by taking the limit of small $t_f$ where \begin{equation} {\cal E}\Big[ u_z^2 \Big] \approx \frac{2 F_z t_f}{1+ (St_fk_{yf}x)^2} \ \int_0^\infty \ensuremath{\mathrm{d}}{}\mu G_\nu^2(t-\mu) \,.
\end{equation} Homogeneity is thus restored for small $S$ or small $t_f$, although these limits are formally incompatible with the approximation underlying (\ref{eq:x-avg}), which is therefore to be understood as a horizontal average over a region that encompasses any variance inhomogeneity. The fundamental source of forced shearing-wave inhomogeneity is the special zero value of the mean flow $Sx\vec{e}_y$ at $x=0$: the phase-tilting rate $Sk_{yf}x$ increases with $|x|$, while the forcing correlation time $t_f$ does not depend on $x$. Homogeneity holds for $\nu \tti$ because the forced shearing waves have non-trivial amplitude only for $\phi = \phi_f$, {\it i.e., } no phase tilting. An amelioration of the inhomogeneity magnitude results from the dynamical freedom to add a random forcing phase $r(\mu)$ to (\ref{eq:phase}); {\it e.g., } a model for $r$ is a 2$\pi$-periodic random walk with correlation time $t_r$. Inhomogeneity is eliminated if $t_r \rightarrow 0$, but it still occurs with finite $t_r$. A broader posing of the ESD problem is for a family of mean flows with the same mean shear, {\it i.e., } $\vec{V} = U_*\vec{e}_x + (V_*+S(x-x_*))\vec{e}_y$, and a corresponding modification of the forced shearing-wave phase (\ref{eq:phase}) to $\phi(x,y,t;\mu) = k_x(t-\mu) (x - x_*) + k_{yf} (y-y_*) - \vec{k}_{\perp f} \cdot \vec{V}_*(t-\mu) + 0.5 S U_* (t-\mu)^2 + r(\mu)$. An expanded-ensemble average over $\vec{V}$, and over $x_*$ in particular, restores homogeneity in $x$ of ${\cal E}\Big[ {\vec{u}^2} \Big]$ for general parameters, which thus is a corollary of translational and Galilean invariances. These generalizations in $r$ and $\vec{V}$ do not change the dynamo behavior in anything except the shearing-wave phase, which does not appear in $KE$ or the ESD (Sec. \ref{sec:mean-field} {\it et seq.}), so we now drop further consideration of them. 
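Before turning to the induction problem, the asymptotic sinc integral (\ref{eq:spatial-delta}) underlying $C_L$ admits a quick numerical check. The sketch below uses illustrative parameter values ($S = 1$, $\theta_f = \pi/4$, $L = 200$), not values tied to any particular solution in this paper:

```python
import numpy as np

# Numerical check of the asymptotic sinc integral behind C_L: for a = S*k_yf*L >> 1,
#   int d(delta) (2/(a*delta)) * sin(a*delta/2)  ->  2*pi/a = C_L.
# Parameter values are illustrative only.
S, theta_f, L = 1.0, np.pi / 4.0, 200.0
a = S * np.sin(theta_f) * L                      # a = S k_yf L

delta = np.linspace(-50.0, 50.0, 200_001)        # finite truncation of the delta range
integrand = np.sinc(a * delta / (2.0 * np.pi))   # np.sinc(x) = sin(pi x)/(pi x)

# Composite trapezoid rule, written out to avoid NumPy version differences
numeric = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(delta)))
analytic = 2.0 * np.pi / a
print(numeric, analytic)  # agree to well under one percent
```

The truncated, discretized integral reproduces $C_L = 2\pi/(SLk_{yf})$ to well under one percent at these values.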
\section{Magnetic Induction} \label{sec:induction} Write the induction equation (\ref{eq:induction-fluctuations}) as \begin{equation} \mathcal{D}\vec{B} = \nabla\times(\vec{u}\times\vec{B}) + S B_x\vec{e}_y + \eta\nabla^2\vec{B} \,. \label{eq:3D-induct} \end{equation} To simplify matters, we note that the induction equation is linear in the magnetic field. Therefore, for a barotropic velocity field $\vec{u}(x,y)$, the electromotive force does not give rise to any mode coupling in $z$. We pose the dynamo problem as exponential growth of the horizontally-averaged ({\it i.e., } mean) horizontal magnetic field with an initial seed amplitude and a single $z$-wavenumber $k_z$, \begin{equation} \aver{\vec{B}_\perp}^{x,y}= \mathrm{Re}\left\{\vec{\mathcal{B}}(t) \, e^{\ensuremath{\mathrm{i}} k_z z} \right\} \,. \label{eq:averB} \end{equation} Thus, both $k_z$ and the initial mean field, $\vec{\mathcal{B}}(0)$, are parameters of the problem; without loss of generality, we can take $|\vec{\mathcal{B}} (0) | = 1$ as the non-dimensional normalization of $\vec{B}$. Because we are interested in dynamo behavior with exponential growth, this normalization choice does not affect the resulting growth rate. We then define $\theta_B$ as its initial orientation angle: \begin{equation} \mathcal{B}_x (0) = \cos\theta_{B} \,, \qquad \mathcal{B}_y (0) = \sin\theta_{B} \,. \end{equation} Because $\hat{\vec{f}}(t)$ is a stochastic variable, the more precisely stated dynamo problem is exponential growth of the mean magnetic energy $|\vec{\mathcal{B}}|^2(t)$ assessed over many realizations and/or long time intervals. Because there is no Fourier mode coupling in $z$, we can assume the entire magnetic field has only a single $k_z$, and the application of the gradient operator is simplified to \begin{equation} {\nabla} = \nabla_\perp + \ensuremath{\mathrm{i}} k_z\vec{e}_z \,.
\end{equation} We only need to solve for the horizontal component of $\vec{B}$, {\it i.e., } $\vec{B}_\perp$, and obtain $B_z$ diagnostically from the solenoidality condition, \begin{equation} B_z = -\, \frac{\nabla_\perp\cdot\vec{B}_\perp}{\ensuremath{\mathrm{i}} k_z} \,. \label{eq:vertB} \end{equation} For the mean field $\aver{\vec{B}_\perp}^{x,y}$, there is no associated vertical component. The horizontal induction equation from (\ref{eq:3D-induct}) is \begin{equation} \mathcal{D}\vec{B}_\perp = - (\vec{u}\cdot{\nabla}) \, \vec{B}_\perp + (\vec{B}_\perp\cdot\nabla_\perp)\, \vec{u}_\perp + SB_x\vec{e}_y + \eta{\nabla}^2\vec{B}_\perp \,. \label{eq:induction-2D-p} \end{equation} Because it is enough to focus on the horizontal components of $\vec{B}$, we henceforth drop the subscript $\perp$ and interpret all vectors $\vec{a}$ as horizontal unless indicated otherwise by a subscript: a 3D vector will be $\vec{a}_3$ ({\it e.g., } $\nabla_3$). The non-dimensional parameters in the ESD associated with the magnetic field are $k_z$, $\eta$, and $\theta_B$; these are in addition to the dynamic parameters listed at the end of Sec. \ref{sec:KE_ND}. \subsection{Magnetic Fluctuations} \label{sec:magfluc} Decompose the horizontal magnetic field into fluctuation and mean components, \begin{equation} \vec{B}(x,y,z,t) = \vec{\delta B}(x,y,z,t) + \mathrm{Re}\left\{\vec{\mathcal{B}}(t)\,e^{\ensuremath{\mathrm{i}} k_z z}\right\} \,. \label{eq:ESDtruncation} \end{equation} For consistency with (\ref{eq:averB}), we specify that $\aver{\vec{\delta B}}^{x,y} = 0$. We evaluate the vertical companion field $\delta B_z$ by (\ref{eq:vertB}). Because (\ref{eq:induction-2D-p}) is linear in $\vec{B}$, we see that $\vec{\delta B}$ will have the same vertical phase factor as the mean field; {\it i.e., } we define its complex coefficient $\vec{b}$ by \begin{equation} \vec{\delta B} = \mathrm{Re}\left\{\vec{b}(x,y,t) \, e^{\ensuremath{\mathrm{i}} k_z z}\right\} \,. 
\label{bkz} \end{equation} By assumption the ESD contains only a single phase component for the horizontal magnetic fluctuation field $\vec{b}(x,y,t)$ determined from the horizontal forcing wavenumber $\vec{k}_f$ (through its shear-tilting phase $\phi$ in (\ref{eq:phase})) and the vertical wavenumber $k_z$ of the seed mean magnetic field. Its induction equation from (\ref{eq:induction-2D-p}) is forced by the stochastic shearing waves and the horizontal mean magnetic field, {\it i.e., } \begin{equation} \mathcal{D} \vec{\delta B} = \vec{\delta F} + S\vec{e}_y \delta B_x + \eta {\nabla}_3^2 \vec{\delta B} \,, \label{eq:b1} \end{equation} where the curl of the fluctuation electromotive force $\vec{\delta F}$ is \begin{align} \vec{\delta F}(x,y,z,t) &= - u_z \partial_z \aver{\vec{B}}^{x,y} + \left( \aver{\vec{B}}^{x,y} \cdot \nabla \right) \, \vec{u} \nonumber \\ &= -\, u_z \mathrm{Re}\left\{\ensuremath{\mathrm{i}} k_z \vec{\mathcal{B}} e^{\ensuremath{\mathrm{i}} k_z z}\right\} + \left( \mathrm{Re}\left\{\vec{\mathcal{B}} e^{\ensuremath{\mathrm{i}} k_z z}\right\} \cdot \nabla \right) \vec{u} \,. \label{eq:Fbdef} \end{align} There is no contribution from $-\, \left( \vec{u} \cdot \nabla \right) \, \aver{\vec{B}}^{x,y}$ because $\aver{\vec{B}}^{x,y}$ has no horizontal gradient. One can view the ESD fluctuation induction equation (\ref{eq:b1}) for $\vec{\delta B}$ as a first-iteration approximation to the full MHD induction in the presence of $\vec{u}$ and $\aver{\vec{B}}^{x,y}$; {\it i.e., } it is a projection of MHD onto a magnetic field with only the shearing-wave phase and a horizontally uniform component. This simplified equation for $\vec{b}$ is the heart of the quasi-linear ESD theory ({\it i.e., } linear for magnetic fluctuations, nonlinear for the horizontal mean).
The quasi-linear simplification can be rigorously justified only if $|\vec{b}| \ll |\vec{\mathcal{B}}|$, in which case all higher harmonics of the phases in $\vec{b}$ will be negligibly small compared to the primary phase; this condition is met in the limit $\eta \tti$, {\it i.e., } vanishing magnetic Reynolds number (Sec. \ref{sec:limit}). In the next section we will see how the spatially-averaged induction from the shearing waves induces dynamo growth in $\vec{\mathcal{B}}$. This quasi-linear theory is formally incomplete when the preceding justification condition is not always well satisfied by its solutions. Nevertheless, they correspond to the shear dynamo found in 2$^+$D and 3D simulations for a fairly broad range of parameters \citep{Yousef08b,Yousef08a,Heinemann11b}, so we infer that the ESD provides a cogent explanation of the dynamo process even beyond its rigorously derivable limit. When $\vec{u}$ variance is inhomogeneous (Sec. \ref{sec:KE_ND}), $\vec{\delta B}$ variance will be so as well. Using the shearing wave solution (\ref{eq:velocity}) and the mean field expression in (\ref{eq:ESDtruncation}) and an analogous vertical phase factor decomposition for $\vec{\delta F}$ as for $\vec{\delta B}$ in (\ref{bkz}), we evaluate the fluctuation forcing term as \begin{align} \vec{F}_b (x,y,t) &= \int^t_0 \, \ensuremath{\mathrm{d}}{}\mu \, G_\nu(t-\mu) \, \Bigl[\ - \, \ensuremath{\mathrm{i}} k_z \vec{\mathcal{B}}(t) \mathrm{Re}\left\{ \hat{f}_z(\mu) \, e^{\ensuremath{\mathrm{i}} \phi(\mu)} \,\right\} \nonumber \\ & + \, \frac{\vec{e}_z \times \vec{k}(t-\mu)}{k^2(t-\mu)} \, (\vec{k}(t-\mu) \cdot \vec{\mathcal{B}}(t)) \, \mathrm{Re}\left\{\, \hat{o}_z(\mu) \, \, e^{\ensuremath{\mathrm{i}} \phi(\mu)} \, \right\} \ \Bigr] \,. \label{eq:Fbdef-2} \end{align} {\it Pro tem} we do not yet use the renormalized forcings (\ref{eq:f-renorm}) but will do so in the next section. 
The two right-side lines here are, respectively, from the two terms in the second line of (\ref{eq:Fbdef}). The magnetic fluctuation Fourier phases are thus $\pm \, \phi(\mu)+ k_z z$ where $\phi$ is the shearing wave phase in (\ref{eq:phase}). We can write the solution of (\ref{eq:b1}) for $\vec{b}$ analytically. Again utilizing the vertical phase factorization (\ref{bkz}), we have \begin{align} \vec{b}(x,y,t) &= \int^t_0 \, \ensuremath{\mathrm{d}}{}\lambda \, \int^\lambda_0 \ensuremath{\mathrm{d}}{}\mu \ G_\eta(t-\mu,\lambda-\mu)G_\nu(\lambda-\mu) \nonumber \\ & \Bigl[\ - \, \ensuremath{\mathrm{i}} k_z \mathsfbi{S}(t-\lambda) \cdot \vec{\mathcal{B}}(\lambda) \, \mathrm{Re}\left\{\, \hat{f}_z(\mu) \, e^{\ensuremath{\mathrm{i}} \phi(\mu)} \,\right\} \nonumber \\ + \, \frac{\vec{e}_z \times \vec{k}(t-\mu)}{k^2(\lambda-\mu)} &\, (\, \vec{k}(\lambda-\mu) \cdot \vec{\mathcal{B}}(\lambda) \,) \, \mathrm{Re}\left\{\, \hat{o}_z(\mu) \, e^{\ensuremath{\mathrm{i}} \phi(\mu)} \,\right\} \ \Bigr] \,. \label{eq:mag-fluc-2} \end{align} Here we define the second-order, real tensor $\mathsfbi{S}$ by \begin{equation} \mathsfbi{S} (t) = \mathsfbi{I} + S t \vec{e}_y \vec{e}_x \,, \label{Stensor} \end{equation} with $\mathsfbi{I}$ the identity tensor, and the resistive decay factor (another Green's function) by \begin{equation} G_\eta(t,\lambda,\mu) = \exp\Bigl[ - \, \eta \, \int_\lambda^t \, \ensuremath{\mathrm{d}}{}\rho \, k_3^2(\rho - \mu)\Bigr] = \exp\Bigl[ - \, \eta \, \int_{\lambda-\mu}^{t-\mu} \, \ensuremath{\mathrm{d}}{}\zeta \, k_3^2(\zeta)\Bigr] \label{eq:Eeta} \end{equation} with $k_3^2(t) = k^2(t) + k_z^2$. Again, for compactness we write this as $G_\eta(t-\mu,\lambda-\mu)$. Thus, in the quasi-linear ESD, $\vec{\delta B}$ is an induced magnetic shearing wave arising from $\vec{u}$ and $\aver{\vec{B}}^{x,y}$.
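Because the exponent in (\ref{eq:Eeta}) is a polynomial in time when the shearing-wave tilt is linear, $G_\eta$ is available in closed form. The sketch below assumes the tilt law $k_x(\zeta) = k_{xf} - S k_{yf}\zeta$ with fixed $k_y = k_{yf}$ (an assumption made explicit here) and illustrative parameter values, and cross-checks the closed form against direct quadrature:

```python
import numpy as np

# Resistive decay factor G_eta(t - mu, lambda - mu), assuming the shearing-wave
# tilt k_x(zeta) = k_xf - S*k_yf*zeta with k_y = k_yf fixed.  Parameter values
# are illustrative, not taken from any particular solution.
S, theta_f, eta, kz = 1.0, np.pi / 4.0, 0.1, 0.125
kxf, kyf = np.cos(theta_f), np.sin(theta_f)

def k3_sq(z):
    """Squared 3D wavenumber k_3^2(zeta) = k_x^2(zeta) + k_yf^2 + k_z^2."""
    return (kxf - S * kyf * z) ** 2 + kyf ** 2 + kz ** 2

def G_eta(t_mu, lam_mu):
    """exp[-eta * int_{lam_mu}^{t_mu} k_3^2(zeta) dzeta] via the exact antiderivative."""
    P = lambda z: -((kxf - S * kyf * z) ** 3) / (3.0 * S * kyf) + (kyf ** 2 + kz ** 2) * z
    return np.exp(-eta * (P(t_mu) - P(lam_mu)))

# Cross-check the closed form against trapezoid quadrature of the exponent
z = np.linspace(2.0, 5.0, 100_001)
vals = k3_sq(z)
quad = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(z))
print(G_eta(5.0, 2.0), np.exp(-eta * quad))  # should agree closely
```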
\subsection{Mean Field Equation} \label{sec:mean-field} The governing equation is the horizontal average of (\ref{eq:induction-2D-p}): \begin{equation} \partial_t \aver{\, \vec{B} \,}^{x,y} = \aver{\, \vec{F}_{B} \,}^{x,y} + S \aver{\, B_x \,}^{x,y} \vec{e}_y - \eta k_z^2 \aver{\, \vec{B} \,}^{x,y} \,, \label{eq:meanB} \end{equation} where \begin{equation} \aver{\, \vec{F}_{B} \,}^{x,y}(z,t) = \aver{\, - \, (\vec{u} \cdot{\nabla})\vec{\delta B} - \, (u_z \partial_z )\vec{\delta B} + \, (\vec{\delta B}\cdot\nabla)\vec{u} \,}^{x,y} \,. \label{eq:FBdef} \end{equation} Because of the horizontal average in the ESD mean-field equation, there is no representation of any spatial structure associated with wave-averaged inhomogeneity in the electromotive force curl (Secs. \ref{sec:KE_ND} and \ref{sec:magfluc}). The induction forcing itself depends linearly on $\vec{\mathcal{B}}$ through $\vec{b}$ in (\ref{eq:mag-fluc-2}), where it enters in a time-history integral. So (\ref{eq:meanB}) is a linear integro-differential equation for $\vec{\mathcal{B}}(t)$, for which no general analytic solution is known. Instead, we evaluate the expression for $\vec{F}_\mathcal{B}$ below and obtain a double-time integral, second-order tensor operator on $\vec{\mathcal{B}}(t)$ that we will solve numerically in general (Sec. \ref{sec:dynamo}) and analytically in certain limits (Sec. \ref{sec:limit}). This yields a closed-form equation for the mean magnetic field amplitude as a function only of the forcing time histories, $\hat{f}_z(t)$ and $\hat{o}_z(t)$, and the parameters $\vec{k}_{f}$, $S$, $\eta$, and $\nu$. As with the $\vec{b}$ solution in the preceding section, the derivation for $\aver{\, \vec{F}_{B} \,}^{x,y}$ is rather elaborate.
It involves substituting the shearing wave solution (\ref{eq:velocity}) and the magnetic fluctuation (\ref{eq:mag-fluc-2}) into (\ref{eq:FBdef}) and performing the horizontal average by identifying the zero horizontal phase components and applying (\ref{eq:x-avg}); these details are in Appendix \ref{sec:appA}. If we again define a vertical Fourier coefficient $\vec{F}_\mathcal{B}$, as in (\ref{eq:averB}), the result is \begin{align} \vec{F}_\mathcal{B}(z,t) &= - \, \frac{C_L}{2} \, \int^t_0 \, \ensuremath{\mathrm{d}}{}\lambda \, \int^\lambda_0 \ensuremath{\mathrm{d}}{}\mu \ G_\eta(t-\mu,\lambda-\mu)\, G_\nu(\lambda-\mu) \, G_\nu(t-\mu) \nonumber \\ & \quad \Bigl[\ |\hat{f}_z|^2(\mu)\, k_z^2 \, \mathsfbi{S}(t-\lambda) \cdot \vec{\mathcal{B}}(\lambda) \ + \, \ensuremath{\mathrm{i}} k_z \, \mathrm{Re}\left\{\hat{f}_z^\ast(\mu)\hat{o}_z(\mu)\right\} \, \vec{e}_z \times \vec{k}(t-\mu) \nonumber \\ & \qquad \quad \Bigl(\, \frac{\vec{k}(\lambda-\mu)}{k^2(\lambda-\mu)} \cdot \vec{\mathcal{B}}(\lambda) \, + \, \frac{\vec{k}(t-\mu)}{k^2(t-\mu)} \cdot \mathsfbi{S}(t-\lambda) \cdot \vec{\mathcal{B}}(\lambda) \,\Bigr) \ \Bigr] \,. \label{eq:FBevaluation} \end{align} Notice that the forcing helicity $\hat{H}(\mu)$ from (\ref{eq:helicity}) plays a prominent role. With the solutions in Secs. \ref{sec:limit}-\ref{sec:general}, we find there is only transient algebraic growth in $\vec{\mathcal{B}}(t)$ ({\it i.e., } no dynamo) when the forcing helicity is zero. Therefore, there is no dynamo if either $\hat{f}_z$ or $\hat{o}_z$ is zero. In fact, the induced magnetic fluctuations from a horizontal velocity field, forced by $\hat{o}_z$ only, have no effect at all on $\vec{\mathcal{B}}$. 
Now simplify $\vec{F}_\mathcal{B}$ and the $\mathcal{B}$ equation by the forcing renormalization (\ref{eq:f-renorm}) augmented by the following related quantities: \begin{equation} {\cal F}^\dagger = \frac{1}{2} \, |\hat{f}_z^\dagger|^2 \,, \qquad {\cal H}^\dagger = \frac{A_z}{2A_\perp} \, \mathrm{Re}\left\{\hat{f}_z^{\dagger \ast}(\mu)\hat{o}_z^\dagger (\mu)\right\} = \frac{C_z}{2} \hat{H} \,, \qquad G_\nu^\dagger = \frac{1}{A_z} G_\nu \,. \label{eq:FHG_renorm} \end{equation} With these the mean electromotive force curl becomes \begin{align} \vec{F}_\mathcal{B}(z,t) &= - \, \int^t_0 \, \ensuremath{\mathrm{d}}{}\lambda \, \int^\lambda_0 \ensuremath{\mathrm{d}}{}\mu \ G_\eta(t-\mu,\lambda-\mu)\, G_\nu^\dagger(\lambda-\mu)\, G_\nu^\dagger(t-\mu) \nonumber \\ & \quad \Bigl\{\ {\cal F}^\dagger(\mu)\, k_z^2 \, \mathsfbi{S}(t-\lambda) \cdot \vec{\mathcal{B}}(\lambda) \ + \, \ensuremath{\mathrm{i}} k_z \, {\cal H}^\dagger(\mu) \, \vec{e}_z \times \vec{k}(t-\mu) \nonumber \\ & \qquad \quad \Bigl[\, k^{-2}(\lambda-\mu) + k^{-2}(t-\mu) \,\Bigr] \Bigl(\, \vec{k}(\lambda-\mu) \cdot \vec{\mathcal{B}}(\lambda)\,\Bigr) \ \Bigr\} \,. \label{eq:FBevaluation2} \end{align} An identity used for the final term is $\vec{k}(t-\mu) \cdot \mathsfbi{S}(t-\lambda) \cdot \vec{\mathcal{B}}(\lambda) \, = \, \vec{k}(\lambda-\mu) \cdot \vec{\mathcal{B}}(\lambda)$.
After factoring the structure $\mathrm{Re}\left\{ \ \cdot \ e^{\ensuremath{\mathrm{i}} k_z z}\right\}$ from (\ref{eq:meanB}), the equation for the complex amplitude $\vec{\mathcal{B}}(t)$ becomes \begin{align} & \partial_t \vec{\mathcal{B}} = S \mathcal{B}_x \vec{e}_y - \eta k_z^2 \vec{\mathcal{B}} \nonumber \\ - & \, \int^t_0 \, \ensuremath{\mathrm{d}}{}\lambda \, \int^\lambda_0 \ensuremath{\mathrm{d}}{}\mu \ G_\eta(t-\mu,\lambda-\mu)G_\nu^\dagger(\lambda-\mu)G_\nu^\dagger(t-\mu) \, \Bigl\{\, {\cal F}^\dagger(\mu)\, k_z^2 \mathsfbi{S}(t-\lambda) \cdot \vec{\mathcal{B}}(\lambda) \ + \nonumber \\ & \, \ensuremath{\mathrm{i}} k_z {\cal H}^\dagger(\mu) \, \vec{e}_z \times \vec{k}(t-\mu) \, \Bigl[\, k^{-2}(\lambda-\mu) + k^{-2}(t-\mu) \,\Bigr] \Bigl(\, \vec{k}(\lambda-\mu) \cdot \vec{\mathcal{B}}(\lambda)\,\Bigr) \, \Bigr\} \,. \label{eq:complexB} \end{align} A final compaction step is to factor out the resistivity effect associated with the vertical wavenumber by defining \begin{equation} \vec{\mathcal{B}}(t) = \widetilde{\vec{\mathcal{B}}}e^{-\eta k_z^2 t} \,. 
\label{eq:factor-eta} \end{equation} This modifies (\ref{eq:complexB}) to \begin{align} & \partial_t \widetilde{\vec{\mathcal{B}}} = S \widetilde{\mathcal{B}}_x \vec{e}_y \nonumber \\ - & \, \int^t_0 \, \ensuremath{\mathrm{d}}{}\lambda \, \int^\lambda_0 \ensuremath{\mathrm{d}}{}\mu \ \widetilde{G}_\eta(t-\mu,\lambda-\mu) G_\nu^\dagger(\lambda-\mu)G_\nu^\dagger(t-\mu) \, \Bigl\{\, {\cal F}^\dagger(\mu)\, k_z^2 \mathsfbi{S}(t-\lambda) \cdot \widetilde{\vec{\mathcal{B}}}(\lambda) \ + \nonumber \\ & \, \ensuremath{\mathrm{i}} k_z {\cal H}^\dagger(\mu) \, \vec{e}_z \times \vec{k}(t-\mu) \, \Bigl(\, k^{-2}(\lambda-\mu) + k^{-2}(t-\mu) \,\Bigr) \Bigl(\, \vec{k}(\lambda-\mu) \cdot \widetilde{\vec{\mathcal{B}}}(\lambda)\,\Bigr) \, \Bigr\} \, , \label{eq:complexB2} \end{align} where $\widetilde{G}_\eta$ is the resistive decay associated with the horizontal wavevector, defined analogously to $G_\eta$ with $k_3(\zeta)$ replaced by $k(\zeta)$ in (\ref{eq:Eeta}), {\it i.e., } factoring out the decay associated with $k_z$, \begin{equation} G_\eta(t,\lambda,\mu) = \exp\Bigl[- \, \eta k_z^2 (t - \lambda)\Bigr] \, \widetilde{G}_\eta(t,\lambda,\mu) \,. \label{eq:Gtilde} \end{equation} The functional form of (\ref{eq:complexB2}) is \begin{equation} \partial_t \widetilde{\vec{\mathcal{B}}} = \mathsfbi{L} \cdot \widetilde{\vec{\mathcal{B}}}(t) + \int_0^t \, \ensuremath{\mathrm{d}}{}\lambda \ \mathsfbi{J}(t,\lambda) \cdot \widetilde{\vec{\mathcal{B}}}(\lambda) \,, \label{eq:ansatz2} \end{equation} where $\mathsfbi{L}$ and $\mathsfbi{J}$ are second-order tensors. This ESD form differs from the common {\it ansatz} (\ref{eq:ansatz}) by the time-history integral, but it does fit within the formal framework analyzed by \citet{Sridhar09} for velocity fields whose dynamical origin was unspecified (in contrast to our particular case of shearing wave velocities). We show in Sec. 
\ref{sec:limit} that the common {\it ansatz} is recovered in our ESD theory in the limit of $\eta, \nu \rightarrow \infty$. The definitions of the $\mathsfbi{L}$ and $\mathsfbi{J}$ tensors are \begin{align} \mathsfbi{L}_{mn} &= S\, \delta_{my}\delta_{nx} \nonumber \\ \mathsfbi{J}_{mn}(t,\lambda) &= - \, \int^\lambda_0 \ensuremath{\mathrm{d}}{}\mu \ \widetilde{G}_\eta(t-\mu,\lambda-\mu) G_\nu^\dagger(\lambda-\mu)G_\nu^\dagger(t-\mu) \, \nonumber \\ & \Bigl[\, {\cal F}^\dagger(\mu)\, k_z^2 \, \mathsfbi{S}_{mn}(t-\lambda) + \, \ensuremath{\mathrm{i}} k_z {\cal H}^\dagger(\mu) \, \nonumber \\ & \qquad \quad \Bigl(\, k^{-2}(\lambda-\mu) + k^{-2}(t-\mu) \,\Bigr) \, k_\ell(t-\mu) \, k_n(\lambda-\mu) \,\epsilon_{z\ell m} \,\Bigr] \label{eq:LI} \end{align} for horizontal indices, $\{ m,n,\ell \} = \{ x,y \}$, and the usual Kronecker delta and Levi-Civita epsilon tensors. $\mathsfbi{L}$ contains the background shear effect on $\widetilde{\vec{\mathcal{B}}}$, while $\mathsfbi{J}$ contains the mean electromotive force resulting from the random barotropic forces and induced magnetic fluctuations. $\mathsfbi{S}_{mn} = \delta_{mn} + S(t-\lambda)\delta_{my}\delta_{nx}$ is as defined in (\ref{Stensor}). The ESD (\ref{eq:complexB}) is invariant with respect to several sign symmetries in the forcing, wavenumber, initial conditions, and mean shear. Because the random forcing amplitudes, $\hat{f}_z(t)$ and $\hat{o}_z(t)$, are statistically symmetric in sign, a change of sign in either one implies ${\cal H}^\dagger \leftrightarrow - \, {\cal H}^\dagger$, ${\cal F}^\dagger \leftrightarrow {\cal F}^\dagger$, and the statistical distribution of $\vec{\mathcal{B}}(t)$ will be unchanged.
In addition there are the following invariances for particular realizations of the ESD: (i) $(k_z,\ {\cal H}^\dagger) \leftrightarrow - \, (k_z, \ {\cal H}^\dagger)$; (ii) $\vec{k}_f \leftrightarrow - \, \vec{k}_f$; (iii) $\vec{\mathcal{B}} \leftrightarrow - \, \vec{\mathcal{B}}$; and (iv) $(S, \ {\cal H}^\dagger, \ k_{xf}, \ \mathcal{B}_x) \leftrightarrow - \, (S, \ {\cal H}^\dagger, \ k_{xf}, \ \mathcal{B}_x)$ with $(k_{yf}, \ \mathcal{B}_y) \leftrightarrow (k_{yf}, \ \mathcal{B}_y)$. Because the ESD is a quasi-linear theory based on Fourier orthogonality in $k_z$ and $\vec{k}_f$, it satisfies a superposition principle; the full MHD equations (\ref{eq:Navier-Stokes})-(\ref{eq:Bincompressible}) do not allow superposition, of course. The functional form of the superposition is a generalization of (\ref{eq:averB}) and (\ref{eq:ansatz2}): \begin{align} \aver{\vec{B}_\perp}^{x,y}(z,t) &= \sum_{k_z} \, \mathrm{Re}\left\{\widetilde{\vec{\mathcal{B}}}(k_z,t) \exp\Bigl[-\eta k_z^2 t + \ensuremath{\mathrm{i}} k_z z\Bigr]\right\} \,, \nonumber \\ \partial_t \widetilde{\vec{\mathcal{B}}}(k_z,t) &= \mathsfbi{L}(k_z) \cdot \widetilde{\vec{\mathcal{B}}}(k_z,t) + \int_0^t \, \ensuremath{\mathrm{d}}{}\lambda \ \left(\, \sum_{\vec{k}_f} \mathsfbi{J}(k_z,\vec{k}_f,t,\lambda) \,\right) \cdot \widetilde{\vec{\mathcal{B}}}(k_z, \lambda) \,. \label{eq:superpose} \end{align} The random force $\hat{\vec{f}}(\vec{k}_f,t)$ in (\ref{eq:sm_forcing}) is assumed to be statistically independent for each $\vec{k}_f$ component with whatever normalization is chosen in place of the single-component normalization (\ref{eq:normalize}). \subsection{Dynamo Behavior} \label{sec:dynamo} A numerical code has been written to solve the ESD in (\ref{eq:complexB2}). Its algorithm is described in Appendix \ref{sec:method}. As expected from the 3D and 2$^+$D full PDE solutions, a dynamo often occurs when $S$ and ${\cal H}(t)$ are nonzero. 
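The time-history integral makes (\ref{eq:ansatz2}) a Volterra integro-differential system. The sketch below illustrates the kind of history-quadrature time stepping such a system requires; it is not the algorithm of Appendix \ref{sec:method}, and it is verified only on a solvable scalar stand-in case:

```python
import numpy as np

def solve_volterra(Lmat, J, B0, T, dt):
    """Forward-Euler solver for dB/dt = Lmat . B(t) + int_0^t J(t, lam) . B(lam) dlam,
    with a trapezoid rule over the stored history.  J(t, lam) must accept an array
    of lam values and return an array of shape (len(lam), d, d)."""
    n_steps = int(round(T / dt))
    d = len(B0)
    B = np.empty((n_steps + 1, d), dtype=complex)
    B[0] = B0
    t_grid = dt * np.arange(n_steps + 1)
    for n in range(n_steps):
        lam = t_grid[: n + 1]
        integrand = np.einsum("lij,lj->li", J(t_grid[n], lam), B[: n + 1])
        hist = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dt, axis=0)
        B[n + 1] = B[n] + dt * (Lmat @ B[n] + hist)
    return t_grid, B

# Check on a solvable scalar case: Lmat = 0 and J = -1 give B'' = -B with
# B(0) = 1, B'(0) = 0, i.e. B(t) = cos t.
t, B = solve_volterra(np.zeros((1, 1)),
                      lambda t, lam: -np.ones((len(lam), 1, 1)),
                      np.array([1.0]), 5.0, 1e-3)
print(B[-1].real, np.cos(5.0))  # close agreement
```

The stored-history quadrature makes the cost quadratic in the number of steps, which is the basic expense any scheme for (\ref{eq:ansatz2}) must manage.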
We now demonstrate a typical dynamo solution, deferring the more general examination of the ESD parameter dependences until Sec. \ref{sec:general}, after first obtaining analytic solutions in Sec. \ref{sec:limit} in certain limiting cases. An illustration of a random realization of the forcings, velocity variances, and helicity time series is in Figs. \ref{fig:forcing}-\ref{fig:velocity}. These are for a case with moderately up-shear forcing wavenumber orientation ($\theta_f = \pi/4$), moderately small correlation time $t_f=0.1$ and viscosity $\nu = 0.1$, and intermediate mean shear rate ($S=1$). The amplitude normalizations from (\ref{eq:part-norm}) are evident, as is the vanishing of the time-averaged helicity. Because $t_f\nu \ll 1$, the time scale of the velocity fluctuations is controlled primarily by the viscous decay time modified by the shear tilting in $k_x(t)$: in (\ref{eq:Enudef}) the initial exponential linear decay rate, $\nu = 0.1$, is at first slowed as $k_x$ passes through zero at $t = 1/(S \tan\theta_f) = 1$ and then augmented toward an exponential cubic decay with a rate coefficient $\approx \, (\nu S^2 k_{yf}^2/3)^{1/3} = 0.26$. To obtain a dynamo in (\ref{eq:complexB2}), the vertical wavenumber $k_z$ must be small but finite; we show below that this is true for general parameters. With $k_z = 0.125$ and moderately small $\eta = 0.1$, the time series of the mean magnetic field component variances are shown in Fig. \ref{fig:mean-field} for the same realization of the forcing and velocity as in Figs. \ref{fig:forcing}-\ref{fig:velocity}. There is evident exponential growth in both components of $\vec{\mathcal{B}}(t)$, {\it i.e., } this is a dynamo. If we make an exponential fit over a long time interval with $|\vec{\mathcal{B}}| \propto e^{\gamma t}$, we obtain the same value of $\gamma \approx 0.03$ for each component.
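The growth-rate fit quoted above amounts to a least-squares slope of $\ln|\vec{\mathcal{B}}|$ against $t$; a sketch with synthetic stand-in data (not actual ESD output) is:

```python
import numpy as np

# Least-squares estimate of the growth rate gamma from |B(t)| ~ exp(gamma*t).
# The series is synthetic stand-in data (rate 0.03 with multiplicative
# log-normal noise), not a solution of the ESD equations.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 400.0, 4001)
gamma_true = 0.03
B_mag = np.exp(gamma_true * t + 0.2 * rng.standard_normal(t.size))

gamma_fit, _ = np.polyfit(t, np.log(B_mag), 1)
print(gamma_fit)  # close to 0.03
```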
$\vec{\mathcal{B}}(t)$ also manifests a stochastic variability inherited from the random forcing, and its fluctuations about the exponential growth exhibit power even at much lower frequencies than are evident in the forcing and velocity time series. The magnitude of $\mathcal{B}_y$ is larger than that of $\mathcal{B}_x$ in Fig. \ref{fig:mean-field}. This is a common behavior for magnetic fields in shear flow. A partial and somewhat simplistic explanation follows from the first right-side shear term in (\ref{eq:complexB2}). A simplified (non-dynamo) system with arbitrary forcing $\vec{R}(t)$, \begin{equation} \partial_t \vec{\mathcal{B}} = S \mathcal{B}_x \vec{e}_y + \vec{R}(t) \,, \qquad \vec{\mathcal{B}}(0) = \vec{\mathcal{B}}_0 \,, \end{equation} has the solution, \begin{equation} \vec{\mathcal{B}}(t) = \vec{\mathcal{B}}_0 + \int_0^t \, \ensuremath{\mathrm{d}}{}t' \, \vec{R}(t') + S \vec{e}_y \left(\, \mathcal{B}_{x 0} t + \int_0^t \, \ensuremath{\mathrm{d}}{}t' \, \int_0^{t'} \, \ensuremath{\mathrm{d}}{}t'' \, R_x(t'') \, \right) \,. \label{eq:simple} \end{equation} The last term $\propto S \vec{e}_y$ will make $|\mathcal{B}_y| \gg |\mathcal{B}_x|$ at late time for most $\vec{R}(t)$. This anisotropy effect carries over to the ESD but also involves further right-side $\vec{\mathcal{B}}$ coupling absent in (\ref{eq:simple}); an explanation that accounts for this coupling is given in Sec. \ref{sec:limit.L}. The initial condition $\vec{\mathcal{B}}_0$ is usually not dominant in (\ref{eq:simple}) at late time. The initial condition is even less important for $\vec{\mathcal{B}}(t)$ in Fig. \ref{fig:mean-field}, which is obtained with $\theta_B = \pi/4$; in particular, $\theta_B$ does not determine the dynamo growth rate $\gamma$. $\vec{b}(t)$ (not shown) also shows exponential growth in its amplitude, with $|b_y|$ typically much larger than $|b_x|$ for the same reason as just explained.
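The closed form (\ref{eq:simple}) is easy to verify by direct time stepping for a specific forcing; the choice $\vec{R}(t) = (\cos t, 0)$ below is purely illustrative:

```python
import numpy as np

# Direct check of the simplified sheared system dB/dt = S*B_x*e_y + R(t), B(0) = B0,
# against its closed-form solution, with the illustrative forcing R(t) = (cos t, 0):
#   B_x(t) = B_x0 + sin t ,   B_y(t) = B_y0 + S*(B_x0*t + 1 - cos t) .
S = 1.0
B0 = np.array([0.5, -0.2])
R = lambda t: np.array([np.cos(t), 0.0])
rhs = lambda t, B: np.array([0.0, S * B[0]]) + R(t)

# Classical RK4 time stepping
dt, n_steps = 1e-3, 10_000
B = B0.copy()
for n in range(n_steps):
    t = n * dt
    k1 = rhs(t, B)
    k2 = rhs(t + dt / 2, B + dt * k1 / 2)
    k3 = rhs(t + dt / 2, B + dt * k2 / 2)
    k4 = rhs(t + dt, B + dt * k3)
    B = B + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

t_end = n_steps * dt
exact = np.array([B0[0] + np.sin(t_end),
                  B0[1] + S * (B0[0] * t_end + 1.0 - np.cos(t_end))])
print(B, exact)  # agree to RK4 accuracy
```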
$\vec{b}(t)$ has comparable time dependence to $u_z(t)$ and $\vec{u}(t)$, as well as an additional resistive decay influence from $\eta$ and modulations by the exponential growth and slow variation in $\vec{\mathcal{B}}(t)$. \section{Dynamo Analysis in Limiting Cases} \label{sec:limit} \subsection{$L \rightarrow \infty$; $\eta, \ \nu \rightarrow \infty$} \label{sec:limit.L} The ESD in Secs. \ref{sec:dynamics}-\ref{sec:induction} is based on an assumption that the horizontal domain size is large, $L \rightarrow \infty$ ({\it n.b., } the average of a Fourier exponential in (\ref{eq:x-avg})). As a means of obtaining a more readily analyzed form of the ESD (\ref{eq:complexB2}), we take the additional limit of $\eta \rightarrow \infty$. This limit changes neither the forcing amplitude nor the velocity field (Sec. \ref{sec:dynamics}), which are independent of $\eta$, but it allows an elimination of one of the time integrals in the expression for $\vec{b}$ in (\ref{eq:mag-fluc-2}) and in the equation (\ref{eq:complexB2}) for $\widetilde{\vec{\mathcal{B}}}$. It also makes the quasi-linear approximation rigorously accurate because it yields $|\vec{\delta B}| \ll |\vec{\mathcal{B}}|$ (as explained after (\ref{eq:Mform})). The essence of the $\eta \rightarrow \infty$ approximation is that first the order of integration in (\ref{eq:complexB2}) is reversed, \[ \int^t_0 \, \ensuremath{\mathrm{d}}{}\lambda \, \int^\lambda_0 \ensuremath{\mathrm{d}}{}\mu = \int^t_0 \, \ensuremath{\mathrm{d}}{}\mu \, \int^t_{\mu} \, \ensuremath{\mathrm{d}}{}\lambda \,, \] and then the $\lambda$ integral is performed by assuming that $\widetilde{G}_\eta$ is more rapidly varying in $\lambda$ than any of the other integrand factors and furthermore is nonzero only when $\lambda \rightarrow t$, {\it i.e., } $t-\lambda = O(\eta^{-1})$.
We evaluate this approximation as \begin{equation} \int^t_{\mu} \, \ensuremath{\mathrm{d}}{}\lambda \, \widetilde{G}_\eta(t-\mu,\lambda-\mu) \rightarrow \frac{1}{\eta k^2(t-\mu)} \end{equation} for all $\mu \ne t$ (the integral is zero for $\mu = t$) and set the $\lambda$ arguments of other factors in the integrand to $t$. With this approximation, the $(L,\eta)$-limiting form of (\ref{eq:complexB2}) becomes \begin{align} & \partial_t \widetilde{\vec{\mathcal{B}}} = S \widetilde{\mathcal{B}}_x(t) \vec{e}_y - \, \frac{1}{\eta} \, \int^t_0 \, \ensuremath{\mathrm{d}}{}\mu \ \frac{G_\nu^{\dagger 2}(t-\mu)}{k^2(t-\mu)} \, \nonumber \\ & \qquad \Bigl[\, k_z^2 {\cal F}^\dagger(\mu)\, \widetilde{\vec{\mathcal{B}}}(t) \ + 2 \ensuremath{\mathrm{i}} k_z {\cal H}^\dagger(\mu) \ \frac{\vec{e}_z \times \vec{k}(t-\mu)}{k^{2}(t-\mu)} \ \vec{k}(t-\mu) \cdot \widetilde{\vec{\mathcal{B}}}(t) \,\Bigr] \,. \label{eq:complexB-eta} \end{align} This is a purely differential equation for $\widetilde{\vec{\mathcal{B}}}(t)$; {\it i.e., } it matches the common {\it ansatz} form in (\ref{eq:ansatz}), {\it viz., } \begin{equation} \partial_t \widetilde{\vec{\mathcal{B}}} = \mathsfbi{L} \cdot \widetilde{\vec{\mathcal{B}}}(t) \,, \label{eq:Mform} \end{equation} for the identifiable single-time, second-order tensor $\mathsfbi{L}(t)$ that contains a time-history integral in $\mu$ over the random forcing. An analogous simplification of the expression for $\vec{b}$ in (\ref{eq:mag-fluc-2}) can be made, with the result that $\vec{b} \propto 1/\eta$. This gives the important analytic result that the quasi-linear approximation to (\ref{eq:induction}) is asymptotically convergent as $\eta \rightarrow \infty$; the higher harmonics of the shearing-wave Fourier phase ($\pm m \phi$, $m > 1$) generated in $\vec{b}$ by the fluctuation electromotive term are $O(\eta^{-m})$, hence negligible compared to the mean-field term proportional to $\aver{\vec{B}}^{x,y}$ in (\ref{eq:Fbdef}).
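The large-$\eta$ reduction of the $\lambda$ integral can be checked numerically: multiplied by $\eta k^2(t-\mu)$, it tends to unity as $\eta$ grows. The sketch again assumes the linear tilt law $k_x(\zeta) = k_{xf} - S k_{yf}\zeta$ and illustrative parameter values; only $\lambda$ near $t$ contributes at large $\eta$, so the result is insensitive to the lower integration limit:

```python
import numpy as np

# Large-eta check:  eta * k^2(t-mu) * int dlambda G~_eta(t-mu, lambda-mu) -> 1,
# where G~_eta = exp[-eta * int_{lambda-mu}^{t-mu} k^2(zeta) dzeta] and
# k^2(zeta) = (k_xf - S*k_yf*zeta)^2 + k_yf^2 (linear shearing-wave tilt assumed;
# parameter values illustrative).
S, theta_f = 1.0, np.pi / 4.0
kxf, kyf = np.cos(theta_f), np.sin(theta_f)
k_sq = lambda z: (kxf - S * kyf * z) ** 2 + kyf ** 2
P = lambda z: -((kxf - S * kyf * z) ** 3) / (3.0 * S * kyf) + kyf ** 2 * z  # antiderivative of k^2

t, mu = 4.0, 1.0
lam = np.linspace(mu, t, 400_001)
products = []
for eta in (1.0, 10.0, 100.0):
    Gtilde = np.exp(-eta * (P(t - mu) - P(lam - mu)))
    integral = np.sum(0.5 * (Gtilde[1:] + Gtilde[:-1]) * np.diff(lam))
    products.append(eta * k_sq(t - mu) * integral)
    print(eta, products[-1])  # tends to 1 as eta grows
```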
Numerical solutions of (\ref{eq:complexB-eta}) exhibit dynamo behavior similar to the example in Sec. \ref{sec:dynamo}, and the parameter dependences for $\gamma$ are similar to those described in Sec. \ref{sec:general} for the general ESD. In particular, $\gamma$ is small here because $\eta$ is large, in contrast to the ``fast dynamo'' limit where $\gamma$ becomes independent of $\eta$ ({\it cf., } Fig. \ref{fig:gamma-eta}). To obtain further analytic simplicity we can take a sequential limit of (\ref{eq:complexB-eta}) as $\nu \rightarrow \infty$. As with the $\eta$ limit, this selects an integration time $\mu \approx t$, where the viscous decay factor is integrated out by the approximate relation for large $t$, \begin{equation} \int^t_0 \, \ensuremath{\mathrm{d}}{}\mu \, G_\nu^2(t-\mu) \rightarrow \frac{1}{2\nu} \quad {\rm or} \quad \int^t_0 \, \ensuremath{\mathrm{d}}{}\mu \, G_\nu^{\dagger 2}(t-\mu) \rightarrow 1 \,, \end{equation} utilizing the renormalization relations in (\ref{eq:norm_consts}) and (\ref{eq:FHG_renorm}). The $(L,\eta,\nu)$-limit mean-field equation from (\ref{eq:complexB-eta}) is \begin{align} \partial_t \widetilde{\vec{\mathcal{B}}} &= S \widetilde{\mathcal{B}}_x(t) \vec{e}_y - \, \frac{1}{\eta} \, \Bigl[\, k_z^2 {\cal F}^\dagger(t)\, \widetilde{\vec{\mathcal{B}}}(t) \ + 2 \ensuremath{\mathrm{i}} k_z {\cal H}^\dagger(t) \ (\vec{e}_z \times \vec{k}_f) \ \vec{k}_f \cdot \widetilde{\vec{\mathcal{B}}}(t) \, \Bigr] \,, \label{eq:complexB-eta-nu} \end{align} after using $k^2(0) = k_f^2 = 1$ from (\ref{eq:normalize}). 
In the tensor representation (\ref{eq:Mform}), $\mathsfbi{L}(t)$ is defined for (\ref{eq:complexB-eta-nu}) by \begin{equation} \mathsfbi{L} = S \, \begin{pmatrix} 0 & 0 \cr 1 & 0 \cr \end{pmatrix} \, - \, \frac{k_z^2 {\cal F}^\dagger(t)}{\eta} \, \begin{pmatrix} 1 & 0 \cr 0 & 1 \cr \end{pmatrix} \, - \, \frac{2 \ensuremath{\mathrm{i}} k_z {\cal H}^\dagger(t)}{\eta} \begin{pmatrix} \cos\theta_f \sin\theta_f & \sin^2\theta_f \cr -\, \cos^2\theta_f & - \, \cos\theta_f \sin\theta_f \cr \end{pmatrix} \,, \label{eq:L-B-t} \end{equation} after a substitution for $\vec{k}_f$ from (\ref{eq:forcingk}). All of the forcing time history in the coefficient tensor $\mathsfbi{L}(t)$ has now disappeared. The history integral also disappears in the companion $\vec{b}$ formula derived from (\ref{eq:mag-fluc-2}). Furthermore, there is no remaining dependence on $\nu$ in (\ref{eq:complexB-eta-nu}) because ${\cal F}^\dagger$ and ${\cal H}^\dagger$ are $O(1)$ quantities by the $KE$ normalization in (\ref{eq:normalize}) and the forcing renormalization in (\ref{eq:f-renorm}) and (\ref{eq:FHG_renorm}). Large $\eta$ and $\nu$ values lead to momentum and induction equation balances with negligible time tendency terms and negligible shear tilting in $\vec{k}(t)$ because $\phi \rightarrow \phi_f$ and $\vec{k}(t) \rightarrow \vec{k}_f$. We now consider two further limits in the forcing correlation time $t_f$ that yield analytic expressions for $\gamma$. \subsubsection{Steady Forcing} \label{sec:steady} Suppose the forcing values are taken from the random distributions in Sec. \ref{sec:force} but are held steady in time; this is a limit based on the physical approximation that the forcing amplitudes change more slowly than the inverse growth rate for the dynamo, $\gamma t_f \gg 1$. In this limit (\ref{eq:complexB-eta-nu})-(\ref{eq:L-B-t}) has its $\mathsfbi{L}$ independent of time, hence there are eigensolutions with \begin{equation} \widetilde{\vec{\mathcal{B}}} \ \propto \ e^{\Gamma t } \,.
\end{equation} The eigenvalues of $\mathsfbi{L}$ are \begin{equation} \Gamma = - \, \frac{k_z^2}{\eta} {\cal F}^\dagger \ \pm \ \left( \frac{2 \ensuremath{\mathrm{i}} k_z \sin^2\theta_f {\cal H}^\dagger \, S }{\eta} \right)^{1/2} \,. \label{eq:Mev} \end{equation} The dynamo growth rate for total mean field $\vec{\mathcal{B}}$ is defined as the largest real part of $\Gamma$ plus a correction of $- \, \eta k_z^2$ from the transformation in (\ref{eq:factor-eta}): \begin{equation} \gamma = - \left( \eta + \frac{{\cal F}^\dagger}{\eta} \right) \, k_z^2 + \left( \frac{k_z | S {\cal H}^\dagger | \sin^2\theta_f }{\eta} \right)^{1/2} \,. \label{eq:steady} \end{equation} The first term is negative and the second positive. A dynamo occurs with $\gamma > 0$ if there are both forcing helicity and shear and if $k_z$ is small enough but nonzero. With $S=0$, there is no dynamo. For $|S|$ above a critical-shear threshold value, \begin{equation} S_{cr} = \frac{\eta k_z^3}{\sin^2\theta_f} \, \frac{(\eta + \eta^{-1} {\cal F}^\dagger)^2}{|{\cal H}^\dagger|} > 0 \,, \end{equation} $\gamma$ increases with $S$, asymptotically as $\sqrt{S}$ when the other parameters are held constant, and $\gamma$ decreases with $\eta$ as $1/\eta$. For given $S$, there is a lower threshold value for $\eta$ to have a dynamo. Nonzero forcing helicity is necessary for a dynamo, but its sign does not matter. $\gamma = 0$ for $k_z = 0$, and $\gamma < 0$ for $k_z$ large. 
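As an illustrative numerical cross-check (not part of the derivation; the parameter and forcing values below are arbitrary choices), one can assemble $\mathsfbi{L}$ from (\ref{eq:L-B-t}) for fixed ${\cal F}^\dagger$, ${\cal H}^\dagger$ and confirm that its leading eigenvalue reproduces the closed-form growth rate (\ref{eq:steady}):

```python
import numpy as np

# Arbitrary illustrative parameters; F, H stand in for the fixed
# steady forcing values F^dagger, H^dagger.
S, eta, kz, theta = 2.0, 5.0, 0.1, 1.0
F, H = 0.5, 0.7
c, s = np.cos(theta), np.sin(theta)

# Tensor L from (L-B-t): shear term, resistive-like term, helical term.
M = np.array([[c * s, s**2], [-c**2, -c * s]], dtype=complex)
L = (S * np.array([[0, 0], [1, 0]], dtype=complex)
     - (kz**2 * F / eta) * np.eye(2)
     - 2j * kz * H / eta * M)

# Growth rate: largest real eigenvalue plus the -eta*kz^2 correction.
gamma_num = max(np.linalg.eigvals(L).real) - eta * kz**2

# Closed-form steady-forcing growth rate from (steady).
gamma_analytic = -(eta + F / eta) * kz**2 + np.sqrt(kz * abs(S * H) * s**2 / eta)
```

For these sample values the two growth rates agree to machine precision, and $\gamma > 0$, i.e., a dynamo.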
Within an intermediate range where $\gamma > 0$, the optimal $k_z$ and its associated growth rate are \begin{align} k_{z \, opt} &= \left( \frac{| S {\cal H}^\dagger| \sin^2\theta_f} {16\, \eta (\eta + \eta^{-1} {\cal F}^\dagger)^2}\right)^{1/3} \approx \ \left( \frac{| S {\cal H}^\dagger| \sin^2\theta_f} {16\, \eta^3}\right)^{1/3} \nonumber \\ \gamma_{opt} &= \left( \frac{27 \, | S {\cal H}^\dagger|^2 \sin^4\theta_f} {256 \, \eta^2 (\eta + \eta^{-1} {\cal F}^\dagger)} \right)^{1/3} \approx \ \left( \frac{27 \, | S {\cal H}^\dagger|^2 \sin^4\theta_f} {256 \, \eta^3} \right)^{1/3} \,, \label{eq:steady-opt} \end{align} where the approximations are based on neglecting ${\cal F}^\dagger/\eta^2$. The optimal $k_z$ decreases with increasing $\eta$. (In a general MHD simulation with fixed $(S,\eta,\nu)$ values, all $k_z$ are available, and the ones supporting a dynamo will emerge in the evolution.) The vertical forcing variance ${\cal F}^\dagger$ reduces the dynamo, while the forcing helicity amplitude $|{\cal H}^\dagger|$ enhances it. ${\cal F}^\dagger$ enters (\ref{eq:L-B-t}) and (\ref{eq:steady}) exactly as an enhanced resistivity; however, the effect is small as $O(\eta^{-2})$ when ${\cal F}^\dagger = O(1)$ in this large $\eta$ limit. This is an anisotropic turbulent eddy resistivity acting on the mean field in the direction perpendicular to the shear plane as a result of the shearing-wave vertical velocity \citep{Parker71,Moffatt78}. The horizontal force $\vec{f}$ acting by itself has no effect; it makes ${\cal F} = |{\cal H}| = 0$, hence $\gamma < 0$ (no dynamo). $\gamma$ is largest where $k_{y f}$ is largest at $\theta_{f} = \pi/2$; in Sec. \ref{sec:general} we show that $\gamma$ is usually larger for $\theta_f < \pi/2$ (Fig. \ref{fig:gamma-theta}) because of a dynamo enhancement by the shear-tilting Orr effect when $\nu < \infty$. $k_{x f}$ does not explicitly enter the formula for $\gamma$ in the present case.
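The optimum (\ref{eq:steady-opt}) can be verified directly from (\ref{eq:steady}); a minimal sketch with arbitrary sample values, keeping the exact $(\eta + \eta^{-1}{\cal F}^\dagger)$ factors rather than the large-$\eta$ approximation:

```python
import numpy as np

# Arbitrary illustrative parameters.
S, eta, theta = 2.0, 5.0, 1.0
F, H = 0.5, 0.7
s2 = np.sin(theta)**2
B = eta + F / eta  # effective resistive coefficient in (steady)

def gamma(kz):
    # Steady-forcing growth rate (steady) as a function of k_z.
    return -B * kz**2 + np.sqrt(kz * abs(S * H) * s2 / eta)

# Exact optimal wavenumber and growth rate from (steady-opt).
kz_opt = (abs(S * H) * s2 / (16.0 * eta * B**2)) ** (1.0 / 3.0)
gamma_opt = (27.0 * (S * H)**2 * s2**2 / (256.0 * eta**2 * B)) ** (1.0 / 3.0)
```

The stated $k_{z\,opt}$ sits at the maximum of $\gamma(k_z)$, and $\gamma(k_{z\,opt})$ equals the stated $\gamma_{opt}$.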
The system (\ref{eq:complexB-eta-nu})-(\ref{eq:L-B-t}) in its steady-helicity limit is a close analog of the so-called alpha--omega dynamo for galactic disks \citep{Parker71,Kulsrud10}. Using a mixed notation from these two sources and assuming a vertical structure $\aver{\, \vec{B} \,}^{x,y} \ \propto \ e^{\ensuremath{\mathrm{i}} k_z z}$, an ODE system analogous to (\ref{eq:Mform}) results, with \begin{equation} \mathsfbi{L}^{\alpha\Omega} = \begin{pmatrix} - \, \widetilde{\eta} k_z^2 & \ensuremath{\mathrm{i}} k_z \alpha \cr \Omega & - \, \widetilde{\eta} k_z^2 \cr \end{pmatrix} \,. \label{eq:Mtensor-aO} \end{equation} For constant $\alpha$ and $\Omega$, its eigenvalues are \begin{equation} \Gamma^{\alpha\Omega} = - \, k_z^2 \widetilde{\eta} \pm \left( \ensuremath{\mathrm{i}} k_z \alpha \Omega \right)^{1/2} \,. \label{eq:Mev-aO} \end{equation} The correspondence with (\ref{eq:Mev}) is evident with appropriate identifications between $(\alpha,\ \Omega,\ \widetilde{\eta})$ and $(\eta^{-1} {\cal H}^\dagger,\ S, \ \eta + \eta^{-1} {\cal F}^\dagger)$. However, the ODE systems are not isomorphic except in the special case of $k_{xf} = 0$ in (\ref{eq:Mform}). Thus, in the steady-forcing ESD, the shear $S$ plays the role of $\Omega$, helical forcing ${\cal H}^\dagger$ plays the role of $\alpha$, and ${\cal F}^\dagger$ plays the role of a turbulent eddy resistivity that augments the effect of $\eta$. The physical paradigm in this paper is random forcing. Therefore, even if the forcing is steady in time, it is taken from a random distribution, and we can ask what the expected value is for $\widetilde{\vec{\mathcal{B}}}$ ({\it i.e., } having factored out the resistive decay in (\ref{eq:factor-eta}), which is not dominant for small $k_z$). To answer this we now neglect the turbulent resistivity by ${\cal F}^\dagger$, which is shown above to be a small effect for large $\eta$.
The eigenvalue (\ref{eq:Mev}) of the tensor (\ref{eq:L-B-t}) is for a particular forcing value, which we now generalize to an ensemble distribution, \begin{equation} \Gamma(\varepsilon) = \pm \gamma (1 + \ensuremath{\mathrm{i}} s ) \,, \quad \gamma(\varepsilon) = \frac{1}{\sqrt{2}} \, E S \sin\theta_f > 0 \,, \label{eq:Gdistrib} \end{equation} with a composite parameter that is a rescaled helicity forcing, \begin{equation} \varepsilon = \frac{2k_z{\cal H}^\dagger}{S\eta} \equiv E^2 s \,. \end{equation} $E^2$ is the magnitude of $\varepsilon$, and $s = \pm 1$ is its sign. Consistent with the Ornstein-Uhlenbeck process for the forcing amplitudes (Sec. \ref{sec:force}), $\varepsilon$ has a Gaussian probability distribution function, \begin{equation} {\cal P}(\varepsilon) = \frac{1}{\sqrt{2\pi\varepsilon_0^2}} \ \exp\Bigl[ - \, \varepsilon^2/2\varepsilon_0^2 \Bigr] \,, \qquad \int_{-\infty}^{\infty} \, {\cal P} \, \ensuremath{\mathrm{d}}{}\varepsilon = 1 \,, \label{eq:epspdf} \end{equation} with an expected variance $\varepsilon_0^2$. Utilizing ${\cal E}\Bigl[{\cal H}^{\dagger 2}\Bigr] = 0.5 \, F_z^\dagger \, O_z^\dagger = 0.5$ from the remark after (\ref{eq:f-renorm}), we obtain \begin{equation} \varepsilon_0^2 = \frac{2k_z^2}{S^2\eta^2} \rightarrow \frac{1}{4S^{4/3}\eta^4} \,, \end{equation} where the arrow indicates substitution of $k_{z \, opt}$ from (\ref{eq:steady-opt}). We analyze the dynamo solutions with general $\varepsilon_0$, but for large $\eta$, $\varepsilon_0$ is expected to be small. After a large elapsed time $t_e$, the dynamo solution is dominated by its leading eigenmode with $\mathrm{Re}\left\{\,\Gamma\right\} = \gamma > 0$ for any $E \ne 0$.
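The branch algebra behind (\ref{eq:Gdistrib}) — that the principal square root of $\ensuremath{\mathrm{i}} \varepsilon S^2 \sin^2\theta_f$ has real part $E S \sin\theta_f/\sqrt{2}$ for either sign $s$ — can be checked numerically (a sketch with arbitrary sample values of $\varepsilon$):

```python
import numpy as np

S, theta = 2.0, 1.0  # arbitrary shear and forcing angle
ok = True
for eps in (0.09, -0.09, 0.004, -0.004):  # sample rescaled helicity parameters
    E = np.sqrt(abs(eps))                  # eps = E^2 * s, with s its sign
    # Eigenvalue with F^dagger neglected: Gamma = +-(i*eps*S^2*sin^2)^(1/2).
    Gamma = np.sqrt(1j * eps * S**2 * np.sin(theta)**2)
    gamma_pred = E * S * np.sin(theta) / np.sqrt(2.0)
    ok = ok and np.isclose(Gamma.real, gamma_pred)
```

The growth rate thus depends on the helicity magnitude through $E = |\varepsilon|^{1/2}$ but not on its sign $s$.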
Neglecting the decaying mode, we write the late-time solution in vector form as \begin{equation} \begin{pmatrix} \widetilde{\mathcal{B}}_x(\varepsilon,t_e) \cr \widetilde{\mathcal{B}}_y (\varepsilon,t_e) \cr \end{pmatrix} \ = \ {C}_0 \, e^{\gamma t_e} \, \left( \cos[\gamma t_e] + \ensuremath{\mathrm{i}} s \sin[\gamma t_e] \right) \ \begin{pmatrix} (1+\ensuremath{\mathrm{i}} s) \gamma/S + \ensuremath{\mathrm{i}} \varepsilon \cos\theta_f \sin\theta_f \cr 1 - \ensuremath{\mathrm{i}} \varepsilon \cos^2\theta_f \cr \end{pmatrix} \,. \label{eq:ef-te} \end{equation} ${C}_0$ is a complex constant determined from the initial condition, \begin{equation} {C}_0 = \frac{1}{\sqrt{2}E(1+\ensuremath{\mathrm{i}} s)} \, \left\{\, \widetilde{\mathcal{B}}_x(0) + \widetilde{\mathcal{B}}_y(0) \, \left(\frac{(1+\ensuremath{\mathrm{i}} s) \gamma/S - \ensuremath{\mathrm{i}}\varepsilon \cos\theta_f \sin\theta_f}{1 - \ensuremath{\mathrm{i}} \varepsilon \cos^2\theta_f}\right)\,\right\} \,. \label{eq:C0} \end{equation} With (\ref{eq:epspdf}) and (\ref{eq:ef-te}), we can evaluate the expected value of any property of $\widetilde{\vec{\mathcal{B}}}(t_e)$ and its corresponding distribution $D$ with $\varepsilon$; {\it e.g., } for the mean-field vector magnitude, \begin{equation} B^{rms} \, \equiv \, {\cal E}\Big[\, |\widetilde{\vec{\mathcal{B}}}|\,(t_e) \, \Big] \, = \, \int_{-\infty}^{\infty} \, |\widetilde{\vec{\mathcal{B}}}|(\varepsilon,t_e) \, {\cal P}(\varepsilon) \, \ensuremath{\mathrm{d}}{}\varepsilon \equiv \int_{-\infty}^{\infty} \, D[\,|\widetilde{\vec{\mathcal{B}}}| \,] \, \ensuremath{\mathrm{d}}{}\varepsilon \,. \label{eq:Brms} \end{equation} Figure \ref{fig:steady_distributions} (left panel) shows the distributions $D$ for the vector magnitude and for the directional component magnitudes for a small value of $\varepsilon_0$. These distributions are smooth, positive, symmetric in $s$, and peak at intermediate $\varepsilon/\varepsilon_0$.
$B^{rms}$ and the component magnitudes are growing exponentially with time. We can fit this with a cumulative growth rate, $\gamma^{rms} = t_e^{-1} \, \log B^{rms}$, which we know from (\ref{eq:Gdistrib}) will scale as $S \sqrt{\varepsilon_0} \sin\theta_f$. For this value of $\varepsilon_0 = 0.1$, $|\widetilde{\mathcal{B}}_x|$ is smaller than $|\widetilde{\mathcal{B}}_y|$, with an ensemble-mean ratio of 0.78. For the leading eigenfunction in (\ref{eq:ef-te}), the anisotropy ratio is \begin{equation} \frac{|\widetilde{\mathcal{B}}_x|}{|\widetilde{\mathcal{B}}_y|} = \frac{(1+\ensuremath{\mathrm{i}} s)E\sin\theta_f + \ensuremath{\mathrm{i}} \sqrt{2}\varepsilon \cos \theta_f \sin \theta_f} {\sqrt{2}(1 - \ensuremath{\mathrm{i}} \varepsilon \cos^2\theta_f)} \,. \end{equation} For small $E$, the ratio tends to $E\sin\theta_f/\sqrt{2}$, which is small; this is consistent with the anisotropy in Fig. \ref{fig:mean-field}. For large $E$, the ratio tends to $|\tan\theta_f|$, which can have any value. What is the ensemble-mean magnetic field? Its magnitude is \begin{equation} B^{mean} \, \equiv \, \Big| \, {\cal E}\Big[\, \widetilde{\vec{\mathcal{B}}}(t_e) \, \Big] \, \Big| \, = \, \Big| \, \int_{-\infty}^{\infty} \, \widetilde{\vec{\mathcal{B}}}(\varepsilon,t_e) \, {\cal P}(\varepsilon) \, \ensuremath{\mathrm{d}}{}\varepsilon \, \Big| \, \equiv \, \Big| \, \int_{-\infty}^{\infty} \, D[\,\widetilde{\vec{\mathcal{B}}} \,] \, \ensuremath{\mathrm{d}}{}\varepsilon \, \Big| \,. \label{eq:Bmean} \end{equation} Again this is evaluated with (\ref{eq:ef-te}). We find that it too exhibits exponential growth, so we fit a cumulative growth rate, $\gamma^{mean}(t_e) = t_e^{-1} \, \log B^{mean} > 0$. But the ensemble mean growth is smaller than the ensemble r.m.s. growth, {\it i.e., } $\gamma^{mean} < \gamma^{rms}$. The reason is illustrated in Fig.
\ref{fig:steady_distributions} (right panel) for the distributions of two components, $D[\,\mathrm{Re}\left\{\,\widetilde{\mathcal{B}}_x\,\right\}\,]$ and $D[\,\mathrm{Im}\left\{\,\widetilde{\mathcal{B}}_y\,\right\}\,]$. Their amplitude is comparable to the magnitude distributions in the left panel, but they are oscillatory in $\varepsilon$ as a result of the $\cos[\gamma t_e]$ and $\sin[\gamma t_e]$ terms in (\ref{eq:ef-te}). So the expected value from integration over $\varepsilon$ is small, although not zero. For Fig. \ref{fig:steady_distributions}, $B^{mean} = 0.073 B^{rms}$, and $\gamma^{mean} = 0.76 \gamma^{rms}$. These relations are not sensitive to the initial condition $\widetilde{\vec{\mathcal{B}}}(0)$, although it does influence the partition among the real and imaginary parts of $\widetilde{\vec{\mathcal{B}}}(t_e)$. There are the expected dependences of larger $\gamma$ with larger $S$ and $\varepsilon_0$ and with $\theta_f$ closer to $\pi/2$, as in (\ref{eq:steady}). With larger $t_e$ the expected values are dominated by the farther tails of the $D$ distributions, with slowly increasing $\gamma^{rms}(t_e)$ and $\gamma^{mean}(t_e)$ associated with larger $\gamma(\varepsilon)$ in the tails (Fig. \ref{fig:steady_te}). Even though larger $\varepsilon$ values are less probable in ${\cal P}(\varepsilon)$ in (\ref{eq:epspdf}), they do have a more than compensating stronger dynamo growth rate that emerges after long enough time. Because the discrepancy between $\gamma^{rms}$ and $\gamma^{mean}$ persists even in the ${\cal P}(\varepsilon)$ tail, the ratio $B^{mean}/B^{rms}$ decreases with $t_e$ exponentially. The steady-forcing dynamo does not become independent of $t_e$ as $t_e \rightarrow \infty$, in contrast to the finite-$t_f$ dynamo, in particular the small-$t_f$ dynamo analyzed in Sec. \ref{sec:rapid}.
\subsubsection{Rapidly Varying Forcing} \label{sec:rapid} The limiting forms for the ESD equation, (\ref{eq:complexB-eta}) and (\ref{eq:complexB-eta-nu}), are also analyzable in the opposite limit of $t_f \rightarrow 0$ by means of a cumulant expansion of a linear, stochastic, ODE system \citep[Chap. XVI]{vanKampen07}. For a stochastic vector $\vec{A}(t)$ governed by \begin{equation} \partial_t \vec{A} = \left(\, \mathsfbi{L}_0 + \mathsfbi{L}_1(t) \,\right) \, \cdot \vec{A} \,, \label{eq:SDE} \end{equation} with the tensor $\mathsfbi{L}_0$ independent of time and $\mathsfbi{L}_1(t)$ a random stationary process with zero expected mean and finite variance, the expected value ${\cal E}\Big[\vec{A}\Big]$ satisfies the approximate deterministic ODE system, \begin{equation} \partial_t {\cal E}\Big[\vec{A}\Big] = \left(\, \mathsfbi{L}_0 + \int_0^\infty \, {\cal E}\Big[\mathsfbi{L}_1(t) \, \mathsfbi{L}_1(t-t') \Big] \, dt' + \dots \,\right) \, \cdot {\cal E}\Big[\vec{A}\Big] \,, \label{eq:vK} \end{equation} with the dots indicating neglected higher-order cumulant terms. The system (\ref{eq:vK}) has a time-independent matrix; hence, it has eigenmodes with exponential time dependence with growth rates given by the matrix eigenvalues. The solution formula for ${\cal E}\Big[{\bf A}\Big](t)$ is called a time-ordered exponential matrix, and it has a non-terminating series expansion with the leading terms as indicated here. The basis for the approximate neglect of the higher order terms can be taken as the vanishing of ${\cal E}\Big[\mathsfbi{L}_1(t) \, \mathsfbi{L}_1(t-t') \Big]$ except as $|t-t'| \rightarrow 0$. In the present situation with large $\nu$, this is equivalent to short correlation times $t_f \rightarrow 0$ for the random forces, $\hat{f}_z(t)$ and $\hat{o}_z(t)$, with $St_f \ll 1$ and $S/\nu \ll 1$ to be able to neglect higher-order products of $\mathsfbi{L}_0$ and $\mathsfbi{L}_1$ in deriving (\ref{eq:vK}).
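A scalar toy version of (\ref{eq:SDE})-(\ref{eq:vK}) illustrates the leading-order cumulant result (an assumed illustration, not the ESD system itself): for $\partial_t A = l_1(t) A$ with $l_1$ an Ornstein--Uhlenbeck process of variance $\sigma^2$ and correlation time $t_f$, the predicted growth rate of ${\cal E}[A]$ is $\int_0^\infty {\cal E}[l_1(t)\, l_1(t-t')]\, dt' = \sigma^2 t_f$, which a direct Monte Carlo sample reproduces when $t \gg t_f$:

```python
import numpy as np

rng = np.random.default_rng(1)
t_f, sigma, T = 0.05, 1.0, 2.0       # correlation time, forcing r.m.s., end time
dt, n_paths = 0.002, 40000
decay = np.exp(-dt / t_f)            # exact OU one-step decay factor
kick = sigma * np.sqrt(1.0 - decay**2)

x = rng.normal(0.0, sigma, n_paths)  # stationary start for the OU coefficient
I = np.zeros(n_paths)                # I(T) = int_0^T l_1 dt, so A(T) = exp(I)
for _ in range(int(round(T / dt))):
    I += x * dt
    x = decay * x + kick * rng.normal(size=n_paths)

mc_mean = np.exp(I).mean()
# Leading cumulant: growth rate = int_0^inf E[l1(t) l1(t-t')] dt' = sigma^2 t_f.
predicted = np.exp(sigma**2 * t_f * T)
```

The sample mean of $A(T)$ agrees with the cumulant prediction to within the combined sampling and finite-$t_f$ corrections.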
We apply (\ref{eq:SDE})-(\ref{eq:vK}) to (\ref{eq:complexB-eta-nu}) with $\vec{A} = \widetilde{\vec{\mathcal{B}}}\,\exp\Bigl[k_z^2 {\cal F}_0^\dagger t /\eta\Bigr]$ and the following tensors: \begin{equation} \mathsfbi{L}_0 = S \, \begin{pmatrix} 0 & 0 \cr 1 & 0 \cr \end{pmatrix} \,, \quad \mathsfbi{L}_1 = \frac{2\ensuremath{\mathrm{i}} k_z{\cal H}^\dagger(t)}{\eta} \, \begin{pmatrix} \cos\theta_f \sin\theta_f & \sin^2\theta_f \cr -\, \cos^2\theta_f & - \, \cos\theta_f \sin\theta_f \cr \end{pmatrix} \,. \label{eq:L-B} \end{equation} This is a second-order, complex system. We have made one {\it ad hoc} simplification here, {\it viz., } replacing ${\cal F}^\dagger(t)$ by its expected value, ${\cal F}_0^\dagger \equiv {\cal E}\Big[{\cal F}^\dagger\Big] = 0.5$ from (\ref{eq:FHG_renorm}), and then factoring its decay effect on $\widetilde{\vec{\mathcal{B}}}$ analogously to (\ref{eq:factor-eta}). The motivation is to simplify the analysis. We already understand ${\cal F}$ as an eddy resistive damping. This role is played with qualitative fidelity by retaining only its mean value, and anyway for large $\eta$ it is only a small increment to the ordinary resistivity. The result for (\ref{eq:vK}) is very simple with (\ref{eq:L-B}) because $\mathsfbi{L}_1^{2} = 0$ independent of its time-variable prefactor, and the eigenvalues of $\mathsfbi{L}_0$ are zero. Hence, again after restoring the resistive decay factors, the growth rate for ${\cal E}\Big[\vec{\mathcal{B}}\Bigr]$ is \begin{equation} \gamma = - \, \left(\eta + \frac{1}{2\eta} \right) \, k_z^2 \le 0 \,; \end{equation} {\it i.e., } in this ($\eta \rightarrow \infty$, $t_f \rightarrow 0$) limit there is only resistive decay of the expected value of the mean magnetic field, weakly augmented by the eddy resistive effect.
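The algebra that makes this result so simple — the matrix factor of $\mathsfbi{L}_1$ in (\ref{eq:L-B}) is nilpotent for any $\theta_f$, and $\mathsfbi{L}_0$ has only zero eigenvalues — is quickly verified (a sketch with an arbitrary forcing angle):

```python
import numpy as np

theta = 0.9  # arbitrary forcing angle
c, s = np.cos(theta), np.sin(theta)

M = np.array([[c * s, s**2], [-c**2, -c * s]])  # matrix factor of L_1
L0 = np.array([[0.0, 0.0], [1.0, 0.0]])         # shear tensor with S factored out

nilpotent = np.allclose(M @ M, 0.0)             # L_1^2 = 0 for any scalar prefactor
shear_eigs = np.linalg.eigvals(L0)              # both eigenvalues are zero
```

With $\mathsfbi{L}_1^2 = 0$, the second-cumulant term in (\ref{eq:vK}) vanishes, leaving only the resistive decay quoted above.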
We could continue the cumulant expansion for $\vec{\mathcal{B}}$ and (\ref{eq:SDE}) to higher orders in $St_f$ and $S/\nu$ \citep{vanKampen07}, seeking growth in the ensemble-mean, large-scale field, ${\cal E}\Big[\vec{\mathcal{B}}\Bigr]$, but its $\gamma$ would be small in these parameters compared to the growth in the mean magnetic variance, ${\cal E}\Big[\, |\vec{\mathcal{B}}|^2 \, \Bigr]$. To obtain a dynamo result for the latter, we instead apply (\ref{eq:SDE})-(\ref{eq:vK}) to the fourth-order real covariance system derived from (\ref{eq:complexB-eta}) for the vector, \begin{equation} \vec{A} = \left(\ |\widetilde{B}_x|^2 , \ |\widetilde{B}_y|^2 , \ \mathrm{Re}[\widetilde{B}_x^\ast\widetilde{B}_y], \ \mathrm{Im}[\widetilde{B}_x^\ast\widetilde{B}_y] \ \right) \, \times \, \exp\Bigl[2 k_z^2 {\cal F}_0^\dagger t/\eta\Bigr] \,, \end{equation} again factoring out the mean eddy resistive effect with the simplification ${\cal F}^\dagger(t) \approx {\cal F}^\dagger_0 = 0.5$. The associated tensors are defined by \begin{eqnarray} & \mathsfbi{L}_0 = S \mathsfbi{L}_0^\dagger , \, \mathsfbi{L}_0^\dagger = \begin{pmatrix} 0 & 0 & 0 & 0 \cr 0 & 0 & 2 & 0 \cr 1 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 \cr \end{pmatrix} \,, \nonumber \\ & \mathsfbi{L}_1 = \frac{-2k_z{\cal H}^\dagger(t)}{\eta} \mathsfbi{L}_1^\dagger , \, \mathsfbi{L}_1^\dagger = \begin{pmatrix} 0 & 0 & 0 & 2\sin^2\theta_f \cr 0 & 0 & 0 & 2\cos^2\theta_f \cr 0 & 0 & 0 & - 2\cos\theta_f \sin\theta_f \cr \cos^2\theta_f & \sin^2\theta_f & 2\cos\theta_f \sin\theta_f & 0 \cr \end{pmatrix} . \label{eq:Lcov} \end{eqnarray} The expectation value in (\ref{eq:vK}) applied to $\mathsfbi{L}_1(t) \mathsfbi{L}_1(t-t')$ acts entirely on its scalar prefactor in (\ref{eq:Lcov}) because its matrix factor $\mathsfbi{L}_1^\dagger$ is deterministic and time-independent. 
We evaluate the corresponding scalar prefactor that arises in (\ref{eq:vK}) as \[ \frac{4k_z^2}{\eta^2} \, \int_0^\infty \, {\cal E}\Bigl[{\cal H}^\dagger(t) {\cal H}^\dagger(t-t')\Bigr] \, dt' \,. \] Tracing backwards through the forcing relations (\ref{eq:helicity}), (\ref{eq:f-renorm}), and (\ref{eq:FHG_renorm}), we derive \begin{equation} {\cal E}\Bigl[{\cal H}^\dagger(t) {\cal H}^\dagger(t-t')\Bigr] = 0.5 \, {\cal E}\Bigl[|\hat{f}_z^\dagger|^2\Bigr] \, {\cal E}\Bigl[|\hat{o}_z^\dagger|^2\Bigr] \, \exp\Bigl[-2|t'|/t_f\Bigr] \,, \end{equation} utilizing the fact that the real and imaginary parts of $\hat{f}_z$ and $\hat{o}_z$ are independent, stationary processes each with an exponential correlation time $t_f$ as in (\ref{eq:fcor}). After performing the time integration with this expression, the value of the preceding prefactor is \begin{equation} \frac{2k_z^2\,t_f}{\eta^2} \, {\cal E}\Bigl[{\cal H}^{\dagger 2}\Bigr] = \frac{k_z^2\,t_f}{\eta^2}\,, \label{eq:prefactor} \end{equation} because ${\cal E}\Bigl[{\cal H}^{\dagger 2}\Bigr] = 0.5 \, F_z^\dagger \, O_z^\dagger = 0.5$ from (\ref{eq:f-renorm}). This completes the specification of the deterministic, time-independent matrix in (\ref{eq:vK}) for the covariance system as \begin{equation} \mathsfbi{L} = S \mathsfbi{L}_0^\dagger + \frac{k_z^2\,t_f}{\eta^2} \mathsfbi{L}_1^{\dagger 2} \,. \label{eq:vKmatrix} \end{equation} We evaluate its eigenvalues $\Gamma$ analytically from $\det[\mathsfbi{L} - \Gamma \mathsfbi{I}] = 0$, which is a fourth-order polynomial equation. We can factor a $\Gamma = 0$ root, leaving a third-order system with the reduced form of $\Gamma^3 + p \Gamma = q$ for coefficients $p \propto S$ and $q \propto S^2$. With a simplification provided by the prefactor (\ref{eq:prefactor}) being small compared to $S$, we can neglect the $p$ term and obtain the approximate solution, \begin{equation} \Gamma \approx q^{1/3} = \left(\, \frac{2 k_z^2 S^2 \sin^4\theta_f \, t_f}{\eta^2} \right)^{1/3} \,.
\label{eq:Gamma} \end{equation} This approximation is consistent with finite $S$, small $t_f$ and $k_z$, and large $\eta$; recall that we also assume $St_f, \ S/\nu \ll 1$ for the leading order cumulant approximation (\ref{eq:vK}). The three solutions (\ref{eq:Gamma}) are one with real, positive $\Gamma$ ({\it i.e., } a dynamo) and a complex conjugate pair with $\mathrm{Re}[\Gamma] < 0$. We divide the positive eigenvalue $\Gamma$ by 2 and restore the resistive decay factors to obtain the growth rate for the r.m.s. value of the mean field, $\left( \, {\cal E}\Big[|\vec{\mathcal{B}}|^2\Bigr]\,\right)^{1/2}$: \begin{equation} \gamma = - \, \left(\eta + \frac{1}{2\eta} \right) k_z^2 \ + \ \left(\, \frac{k_z^2S^2 \, \sin^4\theta_f \, t_f }{4\eta^2} \, \right)^{1/3} \,. \label{eq:smalltc} \end{equation} A dynamo can occur with $\gamma > 0$ if there are both forcing helicity and shear and if $k_z$ is small but nonzero; this behavior is the same as in the steady-forcing dynamo (\ref{eq:steady}) for this same limiting ESD system (\ref{eq:complexB-eta-nu}), as well as for the general dynamo in Sec. \ref{sec:general}. In this limit of small correlation time with zero mean helicity and finite helicity variance, the expected value for the mean field $\vec{\mathcal{B}}$ does not grow, but the expected value for the mean magnetic energy $\vec{\mathcal{B}}^2$ does. The steady-forcing dynamo also has a much smaller ensemble mean than r.m.s. (Sec. \ref{sec:steady}). Besides the leading eigenvalue (\ref{eq:Gamma}), we can obtain the associated eigenfunction for the matrix (\ref{eq:vKmatrix}). 
With the same approximation of a small prefactor for $\mathsfbi{L}_1^{\dagger 2}$, we derive the following for the expected ratio of component variances, \begin{align} {\cal E}\Bigl[ |\widetilde{B}_x|^2 \Bigr] & \approx \frac{2 k_z^2 \sin^4\theta_f \, t_f}{\eta^2 \, \Gamma} \ {\cal E}\Bigl[ |\widetilde{B}_y|^2 \Bigr] \nonumber \\ & = \left(\frac{2k_z^2 \sin^4\theta_f t_f}{S\eta^2}\right)^{2/3} \, {\cal E}\Bigl[ |\widetilde{B}_y|^2 \Bigr] \,. \label{eq:Bxyratio} \end{align} Thus, the streamwise mean magnetic energy is small compared to the transverse energy in the present limit with transient forcing, small $k_z$ and $t_f$, and large $\eta$. The small ratio is also consistent with the previous example of dynamo behavior with more general parameters in Fig. \ref{fig:mean-field}, as well as with the steady-forcing dynamo in Sec. \ref{sec:steady} when $\varepsilon_0$ is small. As with the steady forcing (\ref{eq:steady-opt}) we can optimize the growth rate in $k_z$: \begin{align} k_{z \, opt} &= \left(\, \frac{S^2 \sin^4\theta_f \, t_f } {108 \, \eta^2 \, (\eta + \frac{1}{2\eta})^3} \, \right)^{1/4} \nonumber \\ \gamma_{opt} &= \left(\, \frac{S^2 \sin^4\theta_f \, t_f } {27\, \eta^2(\eta+ \frac{1}{2\eta})} \, \right)^{1/2} \,. \label{eq:short-opt} \end{align} The parameter tendencies here all have the same signs as with steady-forcing and with the general ESD (Sec. \ref{sec:general}), but the exponents are different in the two $t_f$ limits. In particular, the optimal growth rate dependences are \begin{align} \gamma &\sim \ S \ \ || {\cal H} || \, \eta^{-3/2} \, k_{yf}^2 \, t_f^{1/2} \qquad \quad {\rm as} \ t_f \rightarrow 0 \nonumber \\ \gamma &\sim \ S^{2/3} \, || {\cal H}^\dagger ||^{2/3} \, \eta^{-1} \, k_{yf}^{4/3} \, t_f^0 \qquad {\rm as} \ t_f \rightarrow \infty \,, \end{align} where the norm symbol $|| \cdot ||$ denotes the r.m.s. or mean magnitude as appropriate, and we have formally restored the helicity variance factor $|| {\cal H} ||$ for emphasis.
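Two numerical checks on the small-$t_f$ covariance results (with arbitrary illustrative parameter values): the leading eigenvalue of the matrix (\ref{eq:vKmatrix}) scales as the cube root of the small prefactor $k_z^2 t_f/\eta^2$, and the optimum (\ref{eq:short-opt}) does maximize (\ref{eq:smalltc}) over $k_z$:

```python
import numpy as np

S, theta = 2.0, np.pi / 3.0  # arbitrary shear and forcing angle
c, s = np.cos(theta), np.sin(theta)

# The two matrix factors of the covariance system (Lcov).
L0 = np.array([[0, 0, 0, 0], [0, 0, 2, 0], [1, 0, 0, 0], [0, 0, 0, 0]], float)
L1 = np.array([[0, 0, 0, 2 * s**2], [0, 0, 0, 2 * c**2],
               [0, 0, 0, -2 * c * s], [c**2, s**2, 2 * c * s, 0]])

def leading(p):
    # Largest real eigenvalue of S*L0 + p*L1^2, with p = kz^2 t_f / eta^2.
    return max(np.linalg.eigvals(S * L0 + p * (L1 @ L1)).real)

ratio = leading(8e-4) / leading(1e-4)  # cube-root scaling: ratio -> 8^(1/3) = 2

# Optimum of gamma(kz) = -C kz^2 + D kz^(2/3) from (smalltc), per (short-opt).
eta, t_f = 10.0, 0.01
C = eta + 1.0 / (2.0 * eta)
D = (S**2 * s**4 * t_f / (4.0 * eta**2)) ** (1.0 / 3.0)
gamma = lambda kz: -C * kz**2 + D * kz ** (2.0 / 3.0)
kz_opt = (S**2 * s**4 * t_f / (108.0 * eta**2 * C**3)) ** 0.25
gamma_opt = (S**2 * s**4 * t_f / (27.0 * eta**2 * C)) ** 0.5
```

The eigenvalue ratio confirms the $\Gamma \propto (k_z^2 t_f/\eta^2)^{1/3}$ scaling without appeal to the neglected $p\,\Gamma$ term, and $\gamma(k_{z\,opt})$ reproduces $\gamma_{opt}$.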
In both cases the growth rate $\gamma$ is vanishingly small as $\eta \rightarrow \infty$, $S \rightarrow 0$, $|| {\cal H} || \rightarrow 0$, or $\theta_f \rightarrow 0, \pi$, and for the short correlation time case, $\gamma$ is small as $t_f \rightarrow 0$. For non-limiting values of the parameters, however, $\gamma$ is not small (Sec. \ref{sec:general}). We reiterate that there is no dependence of $\gamma$ on $\nu$ in the limit $\nu \rightarrow \infty$, independent of the value of $t_f$. As with the steady forcing limit, an analogy exists between the fluctuating helicity ESD in (\ref{eq:complexB-eta-nu}) and a low-order ODE fluctuating alpha--omega dynamo {\it ansatz} \citep{Vishniac97,Silantev00} (also called the incoherent alpha--shear dynamo). Therefore, from a historical perspective of astrophysical dynamo theory, we see that the ESD in (\ref{eq:complexB}) provides both a theoretical justification for the alpha--omega {\it ansatz}, with an explicit characterization of the relevant shearing-wave velocity fluctuations, and a generalization to finite Reynolds numbers ({\it i.e., } $\eta,\nu < \infty$). In summary, these two different $t_f$ limits with analytic dynamo solutions for the large-$(\eta,\nu)$ ESD (\ref{eq:complexB-eta-nu}) show qualitatively similar but functionally different parameter tendencies in $S$, $\eta$, $k_z$, and $\theta_f$; anisotropy with $|\widetilde{\mathcal{B}}_y|$ usually larger than $|\widetilde{\mathcal{B}}_x|$; and an ensemble-mean magnetic energy, ${\cal E}\Bigl[\, |\widetilde{\vec{\mathcal{B}}}|^2 \,\Bigr]$, much larger than the energy of the ensemble-mean field, $|\, {\cal E}\Bigl[\, \widetilde{\vec{\mathcal{B}}} \,\Bigr]\,|^2$. These characteristics carry over to the more general ESD solutions in Sec. \ref{sec:general}.
\subsection{Other Limit Pathways} \label{sec:limit.S} The preceding ESD derivation of (\ref{eq:complexB2}) assumes $k_{yf}L \gg 1$ to assure $\aver{\exp[\ensuremath{\mathrm{i}}(\phi+\phi')]}^{x,y} \approx 0$ and $k_{yf}LS \ {\rm min}[t_f,\ 1/\nu] \gg 1$ to assure $\aver{\exp[\ensuremath{\mathrm{i}}(\phi-\phi')]}^{x,y} \ne 0$ for selected time arguments of the phases $\phi(\mu)$ and $\phi(\mu')$. The latter assumption yields (\ref{eq:x-avg}), which is useful in simplifying the normalization condition (\ref{eq:general_renorm}) for $KE$ and compacting the ESD equation (\ref{eq:complexB2}) for $\widetilde{\vec{\mathcal{B}}}$ by reducing the number of time history integrals in the mean electromotive force curl (Appendix \ref{sec:appA}). We prefer the physical rationale of this pathway based only on a primary assumption of large $L$, consistent with uniform mean shear and no boundary conditions, because it does not constrain the values of the other parameters that are physically more meaningful than $L$. The result is independent of $L$ itself. The further ESD simplifications in Sec. \ref{sec:limit.L} follow from $\eta,\nu \rightarrow \infty$. However, this is not a unique pathway for deriving ESD equations that are essentially similar. In particular, neither of the limits $S \rightarrow 0$ nor $t_f \rightarrow 0$ is problematic even though they appear inconsistent with the second assumption above. As previously explained, we do require $\nu > 0$ for statistical equilibration of velocity fluctuations and $k_{yf} \ne 0$ for nontrivial shear tilting and dynamo behavior. Shear tilting makes $\phi(t)$ in (\ref{eq:phi-cons}) or (\ref{eq:phase}) a continuous function of time. When $S=0$, $\phi=\phi_f$, and the average of the differenced-phase factor is $\aver{\exp[\ensuremath{\mathrm{i}}(\phi-\phi')]}^{x,y} = 1$ for all time arguments. When $S \rightarrow 0$ as a primary assumption, this relation is approximately true.
We still require the weaker assumption about large domain size, $k_{yf}L \gg 1$, to be able to neglect the summed-phase factors, $\aver{\exp[\ensuremath{\mathrm{i}}(\phi+\phi')]}^{x,y}$. Even with these phase averaging relations resolved, further assumptions are needed to compact the electromotive forcing, and large $\eta$ and/or $\nu$ suffice. The outcome is equivalent to (\ref{eq:complexB-eta-nu}) with dynamo solutions when $S>0$. If instead the primary assumption is $t_f \rightarrow 0$ in combination with $k_{yf}L \gg 1$, then the requirement on the average of the differenced-phase factor in the $KE$ normalization is resolved with an approximate integral over the forcing correlation factor, $\exp[-|\mu-\mu'|/t_f]$, in (\ref{eq:general_renorm}), but this assumption is not enough to compact the electromotive force curl. Again this can be accomplished with additional assumptions of large $\eta$ and/or $\nu$, leading to the equivalents of (\ref{eq:complexB-eta}) with shear tilting and (\ref{eq:complexB-eta-nu}) without it. In neither of these limits is there a compact equivalent to the general ESD (\ref{eq:complexB2}) with finite $\eta$ and $\nu$. Also, because the dynamo solutions of (\ref{eq:complexB-eta-nu}) have $\gamma$ small with $S$ and $t_f$, this derivation pathway is not as physically germane as the primary one in Sec. \ref{sec:limit.L}. Yet another derivation pathway assumes finite $L$ and spatially periodic boundary conditions in shearing coordinates with discretized shearing-frame wavenumbers with $\Delta k = 2\pi/L$. If the forcing is at one of the discretized wavenumbers at least in $k_{yf}$, then the spatial average of the summed-phase factor vanishes.
To accommodate continuous shear tilting in the finite Fourier series representation, the forcing amplitude time series is viewed as impulses at discrete times, $t_m = t_0 + m\Delta t$, $\Delta t = 2\pi/(S k_{yf}L)$, $m=0,1,2,\dots$, when a discrete shearing-frame $x$-wavenumber $k_{xm} = k_{xf}+ Sk_{yf}t_m$ (or its periodic alias) coincides with $k_{xf}$ in the laboratory frame. (This discretization is the one used in an MHD computational code with a finite number of Fourier modes \citep{Yousef08b}.) This allows the shearing-coordinate spatial average of the differenced-phase factors to have the requisite property for a compact ESD derivation. The resulting ESD replaces the time-history integrals with finite sums over $m$ at the discrete forcing times $t_m$, and it replaces the continuous laboratory-frame $\vec{k}(t-\mu)$ with $\vec{k}(t-t_m)$. This pathway retains the familiar dependence on $L$ for a discrete Fourier series; this dependence disappears as $L \rightarrow \infty$, when the shearing-periodicity pathway merges with the large-domain pathway as $\Delta k$ and $\Delta t$ vanish. The general behaviors of the finite-$L$ shearing-periodicity ESD and the $L \rightarrow \infty$ ESD in (\ref{eq:complexB2}) are essentially the same. Because of the simplicity of the spatial averaging with shearing-periodic boundary conditions and the analytical advantages of the assumptions of large $\eta$ and $\nu$, small $S \ne 0$, and small $t_f$, a proof-of-concept ESD exposition is given in \citet{Heinemann11a}. Its solution coincides with Sec. \ref{sec:rapid}. Notice that this combined pathway achieves spatial homogeneity even without the enlarged ensemble of uniform mean flows in $\vec{V}$ (Sec. \ref{sec:KE_ND}). \section{General Parameter Dependences} \label{sec:general} With the normalization conditions (\ref{eq:normalize})-(\ref{eq:part-norm}), the non-dimensional parameters of the ESD equation (\ref{eq:complexB2}) are $S$, $\nu$, $t_f$, $\theta_f$, $k_z$, $\eta$, and $\theta_B$.
{\it A priori} we are interested in possible dynamo behavior over their full ranges. Section \ref{sec:dynamo} shows a typical ``mid-range'' example by computational integration, and Sec. \ref{sec:limit} has analytic formulas for the parameter dependences of the growth rate $\gamma$ in two asymptotic limits associated with $\eta, \nu \rightarrow \infty$ and $t_f \rightarrow 0$ or $\infty$. In this section we survey the parameter space computationally to show that $\gamma$ in the ESD solution is a smooth, simple function of all its parameters. For given parameters, a computational solution provides a particular realization of the random forcing in Sec. \ref{sec:force}. When there is exponential growth in $|\vec{\mathcal{B}}(t)|$, a fit $\propto e^{\gamma t}$ is made over a long integration period ({\it e.g., } $S \Delta t = 10^3$ in Fig. \ref{fig:mean-field}). The $\gamma$ value varies from one realization to another, but the results we report here are fairly well determined, as indicated by the smoothness of parameter curves based on separate estimations at separate parameter values. Nevertheless, it is computationally laborious to obtain an ensemble perspective over many realizations. Dynamo growth occurs for finite values of $0< k_z < k_f = 1$ (Fig. \ref{fig:gamma-kz}); {\it i.e., } increasing $k_z$ amplifies the fluctuating helical forcing in (\ref{eq:complexB2}) that is essential to the ESD, and dynamo growth is quenched by resistive decay when $k_z$ is too large. There is an optimal intermediate value of $k_z$ where $\gamma$ is a maximum. This behavior is approximately the same as in the analytic solutions in Secs. \ref{sec:steady} and \ref{sec:rapid}. The functional dependence of $\gamma$ on the shear $S$ is shown in Fig. \ref{fig:gamma-S}, based on optimization over $k_z$ with the other parameters held fixed. The dynamo growth rate increases monotonically with $S$; the slope of $\gamma(S)$ decreases for larger $S$.
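As a side illustration, the two fitting procedures used in this survey (estimating $\gamma$ from the exponential growth of $|\vec{\mathcal{B}}(t)|$, and estimating a power-law exponent for a parameter dependence such as $\gamma(S)$) can be sketched in a few lines. The synthetic data, parameter values, and noise level below are illustrative assumptions, not actual ESD output.

```python
import numpy as np

# Estimating the growth rate gamma: when |B(t)| grows exponentially,
# gamma is the slope of a least-squares fit of log|B| versus t over a
# long integration period. The signal here is synthetic (gamma = 0.05
# with 10% multiplicative noise), not an ESD realization.
rng = np.random.default_rng(0)
gamma_true = 0.05
t = np.linspace(0.0, 200.0, 2001)
B = np.exp(gamma_true * t) * (1.0 + 0.1 * rng.standard_normal(t.size))
gamma_fit = np.polyfit(t, np.log(np.abs(B)), 1)[0]
print(abs(gamma_fit - gamma_true) < 0.01)  # True

# Estimating a power-law exponent for gamma(S): the exponent is the
# slope of log(gamma) versus log(S). An exact 2/3 law is assumed here.
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
gamma_S = 0.1 * S ** (2.0 / 3.0)
exponent = np.polyfit(np.log(S), np.log(gamma_S), 1)[0]
print(abs(exponent - 2.0 / 3.0) < 1e-8)  # True
```

Both estimates are ordinary least-squares fits in logarithmic coordinates; realization-to-realization scatter in $\gamma$ enters only through the noise on $\log|B|$.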
A power-law fit to $\gamma(S)$ shows an exponent approximately in the range 0.5--1, which is consistent with the values of 2/3 and 1 in the limiting formulas (\ref{eq:steady-opt}) and (\ref{eq:short-opt}). The associated optimal $k_z(S)$ is always small relative to $k_f=1$, and it too increases with $S$. A power-law fit shows an exponent similar to the limit values of 1/2 and 1/3 in (\ref{eq:steady-opt}) and (\ref{eq:short-opt}). In the ESD there is no threshold in $S$ for dynamo growth, given sufficiently small $k_z \ne 0$. With either $S=0$ or $k_z =0$, there is no dynamo. $\gamma(k_z)$ is a convex function of $k_z$ that vanishes when $k_z$ is not small as well as when $k_z \rightarrow 0$; this shape is similar to that in the limit formulas (\ref{eq:steady}) and (\ref{eq:smalltc}). For all other parameters held fixed (including $k_z$), there is a minimum threshold value of $S$ for dynamo action, as is also true in the limit formulas (\ref{eq:steady}) and (\ref{eq:smalltc}). The dependence of $\gamma$ on the forcing correlation time $t_f$ is shown in Fig. \ref{fig:gamma-tau}, again based on optimization over $k_z$. $\gamma$ and $k_z$ both increase with $t_f$. This tendency is consistent at small $t_f$ with the limit formulas in (\ref{eq:short-opt}). For larger $t_f$ values the slope of $\gamma(t_f)$ increases with $t_f$ in the range surveyed here, although we know from (\ref{eq:steady-opt}) that $\gamma$ asymptotes to a finite value with steady forcing. The optimal $k_z(t_f)$ levels off at large $t_f$, here at a value only slightly smaller than $k_f = 1$; this behavior is not anticipated by the limit formulas in Sec. \ref{sec:limit}, which indicate small $k_z$ for large $\eta$. We demonstrate the roles of the forcing components $\hat{f}_z$ and $\hat{o}_z$ by alternately setting them to zero. $\hat{o}_z = 0$ removes all forcing from (\ref{eq:complexB2}), hence has no effect on $\vec{\mathcal{B}}$.
$\hat{f}_z = 0$ retains the forcing in ${\cal F}$ but makes ${\cal H} = 0$; in this case $\vec{\mathcal{B}}(t)$ shows algebraic growth in time but no dynamo. Thus, a dynamo requires both $u_z$ and $\vec{u}_\perp$ to be nonzero. By keeping both components nonzero but arbitrarily setting ${\cal F} = 0$ with ${\cal H} \ne 0$ in (\ref{eq:complexB2}), $\gamma$ is modestly increased; this confirms the interpretation of the ${\cal F}$ effect as a turbulent resistivity that weakens dynamo growth (Sec. \ref{sec:limit}). If ${\cal F}(t)$ is replaced by its time-mean value, the dynamo behavior is essentially the same. Viscous and resistive diffusion both diminish dynamo growth, but they do not suppress it entirely (Figs. \ref{fig:gamma-nu}-\ref{fig:gamma-eta}). The growth rate becomes independent of $\nu$ as $\nu \rightarrow 0$ for fixed $\eta$, and independent of $\eta$ as $\eta \rightarrow 0$ for fixed $\nu$. The latter indicates that the ESD is a ``fast'' dynamo, with $\gamma \ne 0$ as $\eta \rightarrow 0$ (Roberts and Soward, 1992). At the other extreme, to sustain a dynamo as $\eta \rightarrow \infty$, the value of $k_z(\eta)$ must become very small so that resistive decay is not dominant; this is consistent with the limit formulas (\ref{eq:steady-opt}) and (\ref{eq:short-opt}), where $\gamma(\eta)$ decreases as a power law with exponents of $-1$ and $-5/2$, respectively. $\gamma(\nu)$ decreases with $\nu$ for large $\nu$. We can take the $\nu \rightarrow \infty$ limit of (\ref{eq:complexB2}) for general $\eta$, using the same type of approximation procedure as at the beginning of Sec. \ref{sec:limit}. The key approximation in this limit is \begin{equation} \int^t_0 \, \ensuremath{\mathrm{d}}{}\lambda \, \int^\lambda_0 \ensuremath{\mathrm{d}}{}\mu \, G_\nu^\dagger(\lambda-\mu)G_\nu^\dagger(t-\mu) \rightarrow \frac{1}{\nu} \,, \end{equation} with $\mu, \lambda \rightarrow t$ for the arguments of the other integrand factors.
The resulting $(L,\nu)$-limit mean-field equation has the same structure as (\ref{eq:complexB-eta-nu}), except that the electromotive force curl now has a prefactor of $1/\nu$ instead of $1/\eta$. Consequently, $\gamma(\nu)$ must decrease with large $\nu$, as in Fig. \ref{fig:gamma-nu}. The optimal $\gamma(\theta_f)$ and $k_z(\theta_f)$ are both largest for intermediate $\theta_f$ values (Fig. \ref{fig:gamma-theta}). The limit formulas predict a peak at $\theta_f = \pi/2$ and $\gamma = 0$ at $\theta_f = 0, \pi$ ($k_{yf} = 0$). However, these limits are based on (\ref{eq:complexB-eta-nu}) after $\nu \rightarrow \infty$, which suppresses any effect of shear tilting in the ESD. In the more general case an up-shear orientation ($0 < \theta_f < \pi/2$) is more conducive to dynamo growth. Thus, the Orr effect of phase tilting in shearing waves (Sec. \ref{sec:con-wave}) augments the dynamo efficiency. This is because, when $\theta_f$ is up-shear, the helical forcing factor transiently increases in magnitude as $k_x(t)$ decreases between $t = 0$ and $t = \cot\theta_f/S > 0$, when $k_x(t)$ passes through zero and thereafter becomes increasingly large and negative. This has the effect of transiently augmenting the effective helicity, hence the dynamo forcing, compared to a down-shear case where $|k_x(t)|$ monotonically increases and the effective helicity only decreases with time. The magnitude of this transient dynamo enhancement is limited by the viscous decay that ensues during the phase tilting toward $k_x = 0$ (and beyond), consistent with the Orr effect disappearing when $\nu \rightarrow \infty$. From an ensemble of numerical integrations, we find that the estimated mean value of $\gamma$ is independent of $\theta_B$; {\it i.e., } the initial conditions of $\vec{\mathcal{B}}$ are not important for the dynamo apart from the necessity of a seed amplitude in $\vec{\mathcal{B}}$ to enable the dynamo. The analytic solutions in Sec.
\ref{sec:limit.L} for the $\eta,\nu \rightarrow \infty$ limit show that the ensemble mean field, ${\cal E}\Big[\,\vec{\mathcal{B}}\,\Big]$, has a smaller (but nonzero) dynamo growth rate $\gamma$ than the r.m.s. field for a steady-forcing ensemble as well as a smaller (but undetermined) $\gamma$ for rapidly-varying forcing. Figure \ref{fig:ensemble} illustrates, for a more generic parameter set, how the components of the complex amplitude $\vec{\mathcal{B}}(t)$ vary substantially both with time and among different realizations, including spontaneous sign reversals on a time scale longer than those directly related to the parameters ({\it i.e., } the non-dimensional fluctuation turn-over time of 1, as well as $t_f$, $1/S$, $1/\eta$, and $1/\nu$); long-interval reversals also occur for Earth's magnetic field. This occurs even as the mean magnetic field amplitude inexorably grows, albeit with evident but relatively modest low-frequency and inter-realization variability. It has proved to be computationally difficult to accurately determine the ensemble mean of $\vec{\mathcal{B}}$ over many random realizations for fixed initial conditions in the general ESD (\ref{eq:complexB2}). Our computational experience is consistent with the mean field magnitude typically being only a small fraction of the square root of the mean magnetic energy. Thus, the ESD with random small-scale forcing is essentially a random large-scale dynamo. \section{Summary and Prospects} \label{sec:summary} We derive the Elemental Shear Dynamo (ESD) model for a random barotropic force with a single horizontal wavevector in a steady flow with uniform shear in a large domain. It is a quasi-linear theory that is rigorously justified for vanishing magnetic Reynolds number ($1/\eta \rightarrow 0$) and experimentally supported for more general parameters. 
It robustly exhibits kinematic dynamo behavior as long as the force $\vec{f}$ has both vertical and horizontal components with finite forcing helicity variance; the vertical wavenumber $k_z$ of the initial seed amplitude of the mean magnetic field $\aver{\vec{B}}^{x,y}$ is nonzero but small compared to the horizontal wavenumber of the forcing; and the forcing wavenumber orientation is not shear-normal ({\it i.e., } $k_{yf} \ne 0$). When these conditions are satisfied, the dynamo growth rate is larger when $S$ is larger, the resistivity $\eta$ and viscosity $\nu$ are smaller, the forcing correlation time $t_f$ is larger, and the forcing wavenumber orientation $\theta_f$ is in an up-shear direction. The ensemble mean of the energy of the horizontally averaged magnetic field grows as a dynamo, but the energy of the ensemble-mean magnetic field is much smaller. Reversals in $\aver{\vec{B}}^{x,y}(t)$ are common over time intervals long compared to $t_f$. Because the growth-rate curves have broad maxima in both parameters and fluctuation wavenumbers (Sec. \ref{sec:general}), we expect dynamo action with a broad spectrum in $\vec{k}_f$ and $k_z$, consistent with the quasi-linear superposition principle (\ref{eq:superpose}). The ESD ingredients of small-scale velocity fluctuations and large-scale shear are generic across the universe, so its dynamo process is likely to be relevant to the widespread existence of large-scale magnetic fields. Of course, the simple spatial symmetries assumed in the ESD model are a strong idealization of natural flows, and the ESD is not a general MHD model because of its quasi-linearity assumptions. Investigation of more complex situations is needed to determine the realm of relevance for the ESD behavior shown here, especially in turbulent flows with intrinsic variability and large Reynolds number.
{\bf Acknowledgments:} This work benefited greatly from extensive discussions with Tobias Heinemann, who also helped with some of the calculations and figures, and with Alexander Schekochihin, who has led our inquiry into the shear dynamo. I also appreciate a long and fruitful partnership with Steven Cowley on dynamo behaviors, first at small scales and now at large. This paper is a fruit of unsponsored research.
\section{Introduction} \vspace{-1em} With the rise of voice assistants, more and more connected devices are being deployed in consumers' homes. These assistants require an internet connection and centralized servers to operate. The user's speech signals are sent to these servers to provide a comfortable, always-available experience. On the servers, service providers rely on automatic speech recognition and natural language understanding systems to respond to the user's request. However, speech signals contain a great deal of information about the speaker, including sensitive attributes such as the speaker's gender, identity, age, feelings, emotions, etc. These sensitive attributes can be extracted and used for malicious purposes. This excessive and unprecedented collection of speech signals serves to build complete user profiles and to construct the very large datasets needed to enrich and improve recognition and understanding models. This global transfer of data to service providers raises serious privacy concerns. Recently, embedded speech recognition systems have been proposed to address this issue. However, the performance of these systems is still limited in unfavorable environments (i.e., noisy environments, reverberated speech, strong accents, etc.). Collecting large speech corpora that are representative of real users and of the various usage conditions is necessary to improve performance. But this must be done while preserving users' privacy, which means at least keeping the speaker's identity private.
In the proposed approach, an encoder resides on each connected device and performs local computations to create an anonymized representation of the speech. This computation process, defined by \cite{hybrid_privacy_framework}, is adapted here to voice assistants. So far, the following works have followed this line: in \cite{mohanPrivacyPreservingAdversarialRepresentation2019_reality_adversarial}, the authors employ an adversarial learning method to remove the speaker's identity within a speech recognition network. However, their approach had a weak impact: the performance of the speaker verification system was not significantly degraded. In \cite{private_wake_word}, the authors aim to create a representation capable of detecting wake words without allowing the linguistic content to be decoded. In \cite{kmean_asr_privacy_configuratble}, the authors studied the discretization of speech in multiple speech recognition systems in order to minimize the inference of several sensitive attributes (such as speaker, emotion, and gender). Finally, in the Voice Privacy Challenge (VPC) 2020 \cite{tomashenkoVoicePrivacy2020Challenge}, a dedicated protocol and metrics were proposed to evaluate different speaker anonymization methods. In this article, our work is similar to that of \cite{mohanPrivacyPreservingAdversarialRepresentation2019_reality_adversarial,kmean_asr_privacy_configuratble}: we focus on creating an anonymized representation, where the objective is to send the service provider only the information it needs for the service to function properly. In the case of voice assistants considered here, the information related to the linguistic content must be kept while that related to the speakers must be removed.
The encoder performing the anonymization studied in this article is based on a speech recognition system. The representation is extracted at the \textit{bottleneck} layer of the network. This type of representation is meant to compress information so that it is efficient. In the case of a speech recognition system, the \textit{bottlenecks} are expected to encode the linguistic content while being invariant to speakers. Using the VPC evaluation protocol, we observed that the \textit{bottlenecks} do not encode only linguistic information: the speaker can also be identified to a high degree. In order to better remove speaker information (and thus improve anonymization), we introduce the use of vector quantization at the \textit{bottleneck} layer of the speech recognition network. Vector quantization consists in approximating a continuous vector by another vector of the same dimension, the latter belonging to a finite set of vectors \cite{vector_quantization}. Vector quantization is frequently used in lossy data compression. In our use case, vector quantization imposes a constraint on the \textit{bottleneck} layer. This constraint encourages the speech recognition network to encode the linguistic content into a finite set of vectors. As a result, other speaker-related information is encoded to a lesser extent, for lack of encoding capacity. Our contributions are as follows. First, we evaluate the extent to which speaker information is present in the \textit{bottleneck} of a speech recognition system. Second, we study the impact that vector quantization has on speech and speaker recognition performance.
Third, we show that the \textit{bottlenecks} can be used to generate an audible speech signal, allowing potential annotation and retraining of the speech recognition model. The remainder of this paper is structured as follows. In Section \ref{sec:hybrid_framework}, we describe the framework and the proposed model for anonymizing speech. Section \ref{sec:experiments} explains the experimental setup and presents our results. Finally, we conclude and discuss future work in Section \ref{sec:conclusion}. \vspace{-1em} \section{Hybrid computation process with local and shared computations} \vspace{-1em} \label{sec:hybrid_framework} In this section, we present the hybrid framework proposed by \cite{hybrid_privacy_framework}, which allows some computations to be performed locally and others to be shared while respecting users' privacy. The goal of this computation process is to share a speech representation with a service provider, but to anonymize the speech data on the device before sharing it. In the context of voice assistants, the anonymized representation must be rich in information about the linguistic content while preventing the exposure of sensitive attributes that could potentially reveal the user's private information. In our experiments, we focus on the speaker's identity and consider that this information must be removed. In this hybrid computation process, the difficult task is to design the encoder that extracts the anonymized representation, since encoding or modifying the speech signal can harm the speech recognition task. In the next section, we describe the architecture of the encoder used to anonymize speech.
\vspace{-1em} \subsection{Model overview} \vspace{-0.5em} By virtue of their training objectives, the acoustic models used in speech recognition systems seek to encode the linguistic content (for example, via temporal classification of phonemes). These models are often designed to be speaker-invariant in order to offer the same recognition performance to every user. For these reasons, we chose an acoustic model as the encoder. We use a factorized time delay neural network (\textit{TDNN-F}) architecture introduced by \cite{Povey2018SemiOrthogonal_TDNNF}. It is used within a hybrid \textit{Hidden Markov Model - Deep Neural Network (HMM-DNN)} speech recognition system \cite{KaldiPovey}. This architecture was recognized as one of the most efficient in a recent ranking comparing model performance against hardware requirements \cite{Performance_vs_hardware_asr}. The \textit{TDNN-F} architecture is therefore suitable for the embedded use required by the hybrid process, with some computations performed locally and others by a centralized server. The \textit{Lattice-Free Maximum Mutual Information (LF-MMI)} objective function \cite{lfmmi} is used to perform discriminative sequence training.
The traditional \textit{MMI} objective aims to maximize the posterior probability: \begin{equation} \begin{aligned} \mathcal{L}_{mmi}(\lambda) =\sum_{r=1}^{R} \log P_{\lambda}\left(S_{r} \mid O_{r}\right) &=\sum_{r=1}^{R} \log \frac{P_{\lambda}\left(O_{r} \mid S_{r}\right) P\left(S_{r}\right) }{\sum_{S}P_{\lambda}\left(O_{r} \mid S\right) P\left(S\right)} \end{aligned} \label{eq:lfmmi} \end{equation} where $\lambda$ is the set of parameters of the neural network, $R$ is the total number of training segments, $S_{r}$ is the correct transcription of the $r^{th}$ speech segment $O_{r}$, and $P(S)$ is the language model probability of the sentence $S$. The distribution $P\left(S\right)$ is considered fixed and is estimated with a language model built from the training transcriptions. The numerator gives the likelihood of the prediction for a reference word sequence, while the denominator gives the total likelihood of the prediction over all possible word sequences, which amounts to a sum over all word sequences as estimated by the acoustic model and the language model. The numerator encodes the supervision labels and is specific to each segment, while the denominator encodes all possible word sequences and is identical for all segments. This loss function is optimized by maximizing the numerator and minimizing the denominator. \textit{MMI} maximizes the conditional log-likelihood of the globally normalized probabilities of the correct transcriptions. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{LaTeX/figures/tdnnf.pdf} \end{center} \caption{ Architecture of the \textit{TDNN-F} model, totaling 15 layers. The \textit{bottlenecks} are extracted from the 13th layer.
} \label{image:tdnnf} \vspace{-1em} \vspace{-1em} \end{figure} In order to obtain an anonymized representation of speech, we extract low-dimensional \textit{bottlenecks} ($D$ = 256 dimensions) from a deep layer of the network (the 13th of the network's 15 layers, cf. Figure \ref{image:tdnnf}). It has been observed by \cite{adiReverseGradientNot2019,mohanPrivacyPreservingAdversarialRepresentation2019_reality_adversarial,whatdoesanetworkhears} that this type of representation mainly encodes information about the linguistic content and removes part of the speaker identity information. \vspace{-1em} \subsection{Introducing vector quantization for anonymization} \vspace{-0.5em} To improve anonymization, we propose to constrain the layer of the neural network producing the \textit{bottlenecks} by adding a vector quantization layer. Vector quantization consists in approximating a continuous vector by another vector of the same dimension, the latter belonging to a finite set of vectors \cite{vector_quantization}; these vectors are called prototype vectors. In unsupervised learning of discriminative representations with autoencoders, it has been observed that the prototype vectors learned through vector quantization mainly represent phoneme-related information \cite{neural_disctre_vq,Unsupervised_speech_rep_vq,one-shot-vc-vector-quant}. Applying vector quantization in an acoustic model aims to encourage the model to remove speaker information, since vector quantization reduces the encoding capacity of the network.
Compared with unsupervised tasks, the loss function of an acoustic model explicitly requires that phonetic information be encoded in the \textit{bottleneck}; we can therefore apply a strong constraint by reducing the number of prototype vectors. Given the input audio sequence $s=\left(s_{1}, s_{2}, \ldots, s_{T}\right)$ of length $T$, the \textit{TDNN-F} encoder produces the \textit{bottlenecks} $h(s) = \left(h_{1}, h_{2}, \ldots, h_{J}\right)$ of length $J$ ($J < T$ due to the subsampling performed by the encoder), where $h_{j} \in \mathbb{R}^{D}$ for each time step, and $D$ is the dimensionality of the latent representation. The vector quantization layer takes as input the sequence of continuous vectors $h(s)$ and replaces each $h_{j} \in h(s)$ by a prototype from the learnable codebook $E=\left\{e_{1}, e_{2}, \ldots, e_{V}\right\}$ of size $V$, with each $e_{i} \in \mathbb{R}^{D}$: \begin{equation} q(s)=\underset{e_i}{\arg \min }\left\|h(s)-e_{i}\right\|_{2}^{2} \end{equation} The vector $h_{j}$ is replaced by its nearest prototype vector $e_{v}$ in terms of Euclidean distance. Since quantization is non-differentiable (because of the $\arg \min$ operation), its derivative must be approximated. To do so, we use a \textit{straight-through estimator} \cite{strat_through_estimator}, i.e., $\frac{\partial \mathcal{L}}{\partial h(s)} \approx \frac{\partial \mathcal{L}}{\partial q(s)}$. The prototype vectors are constrained to move closer to the \textit{bottleneck} vectors they replace by adding an auxiliary loss function: \begin{equation} \mathcal{L}_{vq} = \left\|\operatorname{sg}\left[h(s)\right]-q(s)\right\|_{2}^{2} \end{equation} where $\mathrm{sg}[\cdot]$ denotes the stop-gradient operation, which blocks backpropagation and hence the weight update.
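As a concrete illustration, the nearest-prototype lookup defined above can be sketched in a few lines of numpy. The sizes and random values below are illustrative assumptions (the paper uses $D$ = 256 and a learned codebook); the stop-gradient bookkeeping of the training losses is indicated only in comments, since it has no numerical effect on the forward pass.

```python
import numpy as np

# Sketch of the vector quantization layer: each bottleneck vector h_j is
# replaced by its nearest prototype e_v in squared Euclidean distance.
# Sizes are illustrative toy values; vectors are random stand-ins.
rng = np.random.default_rng(0)
D, V, J = 4, 8, 5                  # feature dim, codebook size, sequence length
E = rng.standard_normal((V, D))    # codebook of prototype vectors
h = rng.standard_normal((J, D))    # continuous bottleneck sequence h(s)

d2 = ((h[:, None, :] - E[None, :, :]) ** 2).sum(-1)  # (J, V) pairwise distances
idx = d2.argmin(axis=1)            # index of the nearest prototype per frame
q = E[idx]                         # quantized sequence q(s)

# The codebook loss ||sg[h] - q||^2 and the commitment loss ||h - sg[q]||^2
# share the same forward value; they differ only in which tensor receives
# the gradient (the stop-gradient sg[.] is a training-time construct).
L_vq = ((h - q) ** 2).sum()

# Sanity check: no prototype is closer to h_j than the selected one.
ok = all(((h[j] - q[j]) ** 2).sum() <= ((h[j] - e) ** 2).sum() + 1e-12
         for j in range(J) for e in E)
print(ok)  # True
```

In the straight-through backward pass, the gradient of the task loss with respect to $q(s)$ is simply copied to $h(s)$, so the encoder trains as if the quantization step were the identity.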
This operation is similar to k-means, but applied to each minibatch during training, with the codebook prototypes playing the role of k-means centroids. Since the \textit{bottlenecks} can take any value, an additional loss term regularizes the encoder to produce \textit{bottlenecks} close to the prototypes, so that the training of the encoder does not diverge from the training of the codebook: \begin{equation} \mathcal{L}_{vq\_reg} = \|h(s)-\operatorname{sg}[q(s)]\|_{2}^{2} \end{equation} The loss function of the acoustic model can then be expressed as the sum of the \textit{mmi}, quantization, and regularization losses: \begin{equation} \mathcal{L}=\mathcal{L}_{mmi}+\mathcal{L}_{vq}+\lambda \mathcal{L}_{vq\_reg} \end{equation} where $\lambda$ denotes the regularization coefficient (we used $\lambda = 0.25$). To update the codebook prototypes, we use an exponential moving average (EMA) \cite{ema_vq}. EMA updates the codebook $E$ independently of the optimizer, so training is more robust to the choice of optimizer and hyperparameters (e.g., learning rate, momentum). \vspace{-1em} \section{Experiments} \vspace{-1em} \label{sec:experiments} \subsection{Datasets} \vspace{-1em} We used the LibriSpeech corpus \cite{Librispeech} for all our experiments. The dataset statistics are given in Table \ref{tab:data-train}.
\begin{table*}[htbp] \caption{Statistics of the training and test datasets.}\label{tab:data-train} \centering \begin{tabular}{l r r r r c } \toprule \multirow{2}{*}{{}} & \multirow{2}{*}{{Size}} & \multicolumn{3}{c}{{Number of speakers}} & \multirow{2}{*}{ \begin{minipage}[t]{0.14\textwidth} \centering {Number of utterances} \end{minipage} } \\ & & {Female} & {Male} & {Total} & \\ \midrule LibriSpeech: train-clean-100 & 100h & 125 & 126 & 251 & {~~28539} \\ LibriSpeech: train-clean-360 & 364h & 439 & 482 & 921 & {104014} \\ LibriSpeech: test-clean & 5.4h & 20 & 20 & 40 & {~~~~2620} \\ LibriSpeech: test-other & 5.1h & 17 & 16 & 33 & {~~~~2939} \\ \bottomrule \end{tabular} \end{table*} The LibriSpeech train-clean-100 subset was used to train the acoustic model. The datasets used to evaluate speech recognition performance are LibriSpeech test-clean and LibriSpeech test-other. The Voice Privacy Challenge defines LibriSpeech train-clean-360 as the training set for the speaker verification system. It is worth noting that this training set offers little intra-speaker variability, owing to the long recording sessions of audiobook chapters. Training the speaker verification system on the \textit{bottlenecks} of an acoustic model is not an easy task, since any representation error made by the acoustic model propagates to the verification model. To mitigate this effect, we trained the speaker recognition system on the combination of train-clean-100 and train-clean-360. The acoustic model produces a very good representation of the train-clean-100 subset (seen during acoustic model training), which helps the training of the speaker verification model.
In accordance with the Voice Privacy Challenge, speaker recognition performance was evaluated on the LibriSpeech test-clean dataset. Among the 40 speakers of LibriSpeech test-clean, 29 were selected; for each speaker, a subset totaling 1 minute of speech (after voice activity detection) was selected as the enrollment set and the rest was used as the test set. The numbers of target and impostor trials are detailed in Table \ref{tab:data-test}. \vspace{-1em} \begin{table*}[htbp] \caption{Number of verification trials in the evaluation dataset.}\label{tab:data-test} \vspace{0.5mm} \centering \begin{tabular}{l c r r c} \toprule {} & {Type} & {Female} & {Male} & {Total} \\ \midrule \multirow{2}{*}{Librispeech: test-clean} & Target & 548 & 449 & {~~~997} \\ & Impostor & {11196} & {9457} & {20653} \\ \bottomrule \end{tabular} \end{table*} \vspace{-1em} \vspace{-1em} \subsection{Metrics and evaluation} \vspace{-0.5em} \label{sec:metrics} To evaluate the system in terms of anonymization (\textit{the ability to conceal the speaker's identity}) and utility (\textit{the ability to recognize the linguistic content}), two systems and metrics are used. To quantitatively assess the quality of anonymization, an automatic speaker verification architecture implemented in SideKit \cite{sidekit} is used. It is an x-vector system composed of five TDNN layers followed by a statistics pooling layer \cite{snyder2018xvector}. The loss function used for training is the \textit{large margin softmax loss} \cite{aamlossArcMarginProduct}. The evaluation metric is the equal error rate (EER$_\%$); the higher the EER$_\%$, the better the speakers are anonymized. For utility, the speech recognition system transcribes speech from the \textit{bottleneck} representation.
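For readers unfamiliar with the metric, a minimal numerical sketch of the equal error rate is given below: it is the operating point where the false-acceptance and false-rejection rates coincide. The score distributions are synthetic assumptions, not output of the x-vector system.

```python
import numpy as np

# Equal error rate (EER) sketch: sweep a decision threshold over the
# scores and locate the point where the false rejection rate (targets
# below threshold) equals the false acceptance rate (impostors above).
# Synthetic, well-separated score distributions are assumed.
rng = np.random.default_rng(0)
target = rng.normal(1.0, 1.0, 1000)      # same-speaker trial scores
impostor = rng.normal(-1.0, 1.0, 1000)   # different-speaker trial scores

thresholds = np.sort(np.concatenate([target, impostor]))
frr = np.array([(target < thr).mean() for thr in thresholds])
far = np.array([(impostor >= thr).mean() for thr in thresholds])
i = np.argmin(np.abs(frr - far))
eer = (frr[i] + far[i]) / 2              # EER as a fraction (x100 for EER_%)
print(0.05 < eer < 0.3)  # True: near 0.16 for these distributions
```

In the tables, the EER is reported in percent; a value near 50% means the verification system performs at chance level, i.e., the speakers are effectively indistinguishable.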
The word error rate (WER$_\%$) is used to assess how well the \textit{bottlenecks} encode the linguistic information. The lower the WER$_\%$, the better the linguistic content is encoded. \vspace{-1em} \subsection{Results and discussion} \vspace{-0.5em} Table \ref{tab:results} presents the experimental results. The first row gives the speaker verification and speech recognition scores on the test-clean and test-other datasets, without any vector quantization. These scores are consistent with those reported in the literature \cite{tomashenkoVoicePrivacy2020Challenge,pkwrap}. The speaker verification results show that the \textit{bottleneck} representation of a baseline speech recognition system (No VQ) is able to discriminate speakers well. Note that, in this setting, women are harder to distinguish than men, i.e., 9.3 EER$_\%$ for women and 4.2 EER$_\%$ for men. The WER$_\%$ on LibriSpeech test-clean is 5.8, a value used as the reference for the following experiments. By constraining the \textit{bottleneck} representation of the network with vector quantization, speaker verification performance is drastically reduced. The number $V$ of prototypes in the quantization dictionary constrains the acoustic model to a greater or lesser extent: with $V$ prototype vectors, the linguistic information of the signal is compressed into a discrete vector space of $V$ prototype vectors. The smaller the dictionary, the more the network must find an efficient transformation to represent the linguistic information, which leaves less room to encode speaker-related information.
Thus, with $V$ = 16, the speaker verification EER$_\%$ scores are 30.0 for women and 32.4 for men, values higher than with $V$ = 1024, where the network obtains 17.9 for women and 18.3 for men. Compared to the baseline system (No VQ), the use of vector quantization makes it possible to anonymize the \textit{bottlenecks}. The smaller the value of $V$, the less representative of the speaker the \textit{bottlenecks} are, allowing better anonymization. However, speech recognition performance, measured in terms of WER$_\%$, is also affected by the size of the quantization dictionary. With $V$ = 16, the WER$_\%$ is 15.9 on LibriSpeech test-clean, a very significant degradation compared to the reference value of 5.8. As the number of prototype vectors increases, the WER$_\%$ comes back down: for $V$ = 1024, the WER$_\%$ is 7.2. Table \ref{tab:results} also reports the speech recognition scores on the LibriSpeech test-other dataset. \begin{table*}[t] \caption{Speech recognition and speaker verification results as a function of the number of prototype vectors in the quantization dictionary. The confidence intervals for the EER and the WER are estimated with \textit{bootstrap} resampling.} \centering \begin{tabular}{ c c c c c c } \toprule \multicolumn{2}{c}{} Nb of prototype & \multicolumn{2}{c}{EER$_\%$} & \multicolumn{2}{c}{WER$_\%$} \\ \multicolumn{2}{c}{} vectors & F & M & test-clean & test-other \\ \midrule & (No VQ) & 9.3 \footnotesize{$\pm$0.5} & 4.2 \footnotesize{$\pm$1.0} & 5.8 \footnotesize{$\pm$0.3} & 19.5 \footnotesize{$\pm$0.6} \\ \midrule & 16 & 30.0 \footnotesize{$\pm$2.1} & 32.4 \footnotesize{$\pm$2.1} & 15.9 \footnotesize{$\pm$0.5} & 42.5 \footnotesize{$\pm$0.8} \\ & 32 & 25.6 \footnotesize{$\pm$2.1} & 27.3 \footnotesize{$\pm$1.9} & 9.8 \footnotesize{$\pm$0.4} & 31.4 \footnotesize{$\pm$0.8} \\ & 48 & 22.0 \footnotesize{$\pm$1.7} & 22.6 \footnotesize{$\pm$2.1} & 8.7 \footnotesize{$\pm$0.4} & 28.8 \footnotesize{$\pm$0.8} \\ & 128 & 22.0 \footnotesize{$\pm$1.8} & 22.8 \footnotesize{$\pm$2.0} & 8.5 \footnotesize{$\pm$0.4} & 28.5 \footnotesize{$\pm$0.8} \\ & 256 & 19.2 \footnotesize{$\pm$1.6} & 19.6 \footnotesize{$\pm$2.0} & 7.6 \footnotesize{$\pm$0.3} & 26.1 \footnotesize{$\pm$0.7} \\ & 512 & 19.6 \footnotesize{$\pm$1.6} & 19.2 \footnotesize{$\pm$2.0} & 7.6 \footnotesize{$\pm$0.3} & 25.4 \footnotesize{$\pm$0.7} \\ & 1024 & 17.9 \footnotesize{$\pm$1.6} & 18.3 \footnotesize{$\pm$1.8} & 7.2 \footnotesize{$\pm$0.3} & 24.7 \footnotesize{$\pm$0.7} \\ \bottomrule \end{tabular} \label{tab:results} \vspace{-1em} \end{table*} The trade-off between good speech recognition performance and good anonymization is inherent to the problem of sharing anonymized data (a problem known as the \say{\textit{privacy-utility tradeoff}} \cite{privacy-utility-tradeoff}). In our framework, this trade-off is adjustable and can be tuned to the wishes of the user or the service provider via the size $V$ of the quantization dictionary.
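Bootstrap confidence intervals like those in Table \ref{tab:results} can in principle be obtained by resampling utterances with replacement and recomputing the corpus-level metric each time. A minimal sketch on synthetic error counts (the resampling unit and the percentile method are assumptions of this illustration):

```python
import numpy as np

def bootstrap_ci(errors, words, n_boot=2000, alpha=0.05, seed=0):
    """95% percentile-bootstrap confidence interval for a corpus-level WER.
    errors[i] and words[i] hold the word errors and reference word count of
    utterance i; utterances are resampled with replacement each round."""
    rng = np.random.default_rng(seed)
    n = len(errors)
    wers = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)               # resample utterances
        wers.append(100.0 * errors[idx].sum() / words[idx].sum())
    lo, hi = np.percentile(wers, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

rng = np.random.default_rng(1)
words = rng.integers(5, 40, size=500)   # toy utterance lengths
errors = rng.binomial(words, 0.06)      # roughly 6% word error rate
lo, hi = bootstrap_ci(errors, words)
print(round(lo, 1), round(hi, 1))       # interval around the ~6.0 point estimate
```

The same resampling applied to per-trial verification decisions yields an interval for the EER.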
In general, Table \ref{tab:results} shows that the smaller $V$ is, the better the anonymization, but at the cost of degraded speech recognition performance. Conversely, the larger $V$ is, the better the speech recognition, at the expense of weaker anonymization. \vspace{-1em} \section{Conclusion} \vspace{-0.5em} \label{sec:conclusion} In this paper, we applied the privacy-preserving hybrid computation process, which splits a neural model into two parts: an encoder that generates an anonymized representation on the user's device, and a decoder that uses this anonymized representation to perform shared computations. We studied this system in the context of voice assistants. As the encoder, a \textit{TDNN-F} acoustic model was considered, and we showed its limitations. Using the Voice Privacy challenge dataset, we measured that the speaker can be verified at 9.3$_\%$ EER for women and 4.2$_\%$ EER for men with a standard \textit{TDNN-F} model. We proposed to use a vector quantization algorithm to constrain the representation space, thereby forcing the acoustic model to encode only the phonetic information. This algorithm is configurable through the size of the quantization dictionary, which makes it possible to adjust the trade-off between good speech recognition performance and good anonymization. For example, with a dictionary of 128 vectors, the speaker is dramatically less verifiable, 22.0$_\%$ EER for women and 22.8$_\%$ EER for men, which corresponds to a 232\% gain. However, this anonymization gain affects speech recognition performance: the WER increases by 47\% (from 5.8$_\%$ to 8.5$_\%$ WER).
In future work, we plan to generate speech from these anonymized representations and to evaluate speech recognition performance and speaker identity concealment on the generated speech. \vspace{-1em} \vspace{-0.5em} \section*{Acknowledgments} \vspace{-0.1em} This work was carried out with the support of the French National Research Agency, within the ANR DEEP-PRIVACY project (18-CE23-0018), and of the Grand Est Region. \vspace{-1em} \bibliographystyle{jep2022} \section{Introduction} With the rise of voice assistants, more and more connected devices are being deployed in consumers' homes. These assistants require an internet connection and centralized servers to operate. The user's speech signals are sent to these servers in order to benefit from a comfortable, always-available experience. On the servers, service providers rely on automatic speech recognition and natural language understanding systems to answer the user's request. However, speech signals contain a great deal of speaker-related information, including sensitive attributes such as the speaker's gender, identity, age, feelings, emotions, etc. These sensitive attributes can be extracted and used for malicious purposes. This excessive, unprecedented collection of speech signals serves to build complete user profiles and to construct the very large datasets needed to enrich and improve recognition and understanding models. This global transfer of data to service providers raises serious privacy questions. Embedded speech recognition systems have recently been proposed.
However, the performance of these systems is still limited in unfavorable environments (i.e., noisy environments, reverberated speech, strong accents, etc.). Collecting large speech corpora representative of real users and of the various usage conditions is necessary to improve performance. But this must be done while preserving users' privacy, which means at least keeping the speaker's identity private. In the proposed approach, an encoder resides on each connected device and performs local computations to create an anonymized representation of the speech. This framework, defined by \cite{hybrid_privacy_framework}, is adapted to voice assistants. So far, the following works have followed this line: in \cite{mohanPrivacyPreservingAdversarialRepresentation2019_reality_adversarial}, the authors employ an adversarial learning method to remove the speaker's identity from a speech recognition network. However, their approach had a weak impact: the performance of the speaker verification system was not degraded. In \cite{private_wake_word}, the authors seek to create a representation capable of detecting wake words without allowing the linguistic content to be decoded. In \cite{kmean_asr_privacy_configuratble}, the authors studied the discretization of speech in multiple speech recognition systems in order to minimize the inference of several sensitive attributes (such as the speaker, emotion, gender). Finally, in the Voice Privacy Challenge (VPC) 2020 \cite{tomashenkoVoicePrivacy2020Challenge}, a dedicated protocol and metrics were proposed to evaluate different speaker anonymization methods.
In this paper, our work is close to that of \cite{mohanPrivacyPreservingAdversarialRepresentation2019_reality_adversarial,kmean_asr_privacy_configuratble}: we focus on creating an anonymized representation, where the goal is to send to the service provider only the information it needs for the service to work properly. In the considered case of voice assistants, the information related to the linguistic content must be kept while the speaker-related information must be removed. The encoder performing the anonymization studied in this paper is based on a speech recognition system. The representation is extracted at the \textit{bottleneck} layer of the network. This type of representation aims to compress the information into an efficient form. In the case of a speech recognition system, the \textit{bottlenecks} are expected to encode the linguistic content information while being speaker-invariant. Using the VPC evaluation protocol, we observed that the \textit{bottlenecks} do not encode only the linguistic information: the speaker can also be identified to a high degree. To better remove the speaker-related information (and thus improve anonymization), we introduced the use of vector quantization at the \textit{bottleneck} layer of the speech recognition network. Vector quantization consists in approximating a continuous vector by another vector of the same dimension, the latter belonging to a finite set of vectors \cite{vector_quantization}. Vector quantization is frequently used in lossy data compression. In our setting, vector quantization makes it possible to impose a constraint on the \textit{bottleneck} layer.
This constraint pushes the speech recognition network to encode the linguistic content information into a finite set of vectors. As a result, the other, speaker-related information ends up less encoded for lack of encoding capacity. Our contributions are as follows. First, we evaluate to what extent speaker information is encoded in the \textit{bottleneck} of a speech recognition system. Second, we study the impact of vector quantization on speech recognition and speaker verification performance. The rest of the paper is structured as follows. In Section \ref{sec:hybrid_framework}, we describe the framework and the proposed model for anonymizing speech on the client. Section \ref{sec:experiments} explains the experimental setup and presents our results. Finally, we conclude and discuss future work in Section \ref{sec:conclusion}. \section{Hybrid framework with local and shared computations} \label{sec:hybrid_framework} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.80\linewidth]{LaTeX/figures/hybrid.pdf} \end{center} \caption{ Workflow of the hybrid anonymization framework: the user's device extracts an anonymized representation of the speech. The centralized server performs the rest of the computations on the anonymized data and returns the result to the user. } \label{image:hybrid_framework} \ \end{figure} In this section we present the hybrid framework introduced by \cite{hybrid_privacy_framework}, which allows local and shared computations to be performed while respecting users' privacy. As shown in Figure \ref{image:hybrid_framework}, the goal of this framework is to share a speech representation with a service provider, while anonymizing the speech data at the edge before sharing it.
In the context of voice assistants, the anonymized representation must be rich in linguistic content information while preventing the exposure of sensitive information that could endanger the user's privacy. In our experiments, we consider that the speaker's identity must be removed. In this hybrid framework, the difficult task is to design the encoder that extracts the anonymized representation, since removing speaker information can harm the speech recognition task itself. In the next section, we describe the architecture of the encoder used to anonymize speech. \subsection{Model overview} Owing to their training objectives, the acoustic models used in speech recognition systems seek to encode the linguistic content information (for example through the temporal classification of phonemes). These models are often designed to be speaker-invariant so as to offer the same recognition performance to any user. It is for these reasons that we chose to use an acoustic model as the encoder. We use a \textit{time delay neural network factorized} (\textit{TDNN-F}) architecture introduced by \cite{Povey2018SemiOrthogonal_TDNNF}. It is used within the training framework of a hybrid \textit{Hidden Markov Model - Deep Neural Network (HMM-DNN)} speech recognition system \cite{KaldiPovey}. This architecture was recognized as one of the most efficient in a recent benchmark comparing model performance against hardware requirements \cite{Performance_vs_hardware_asr}. The \textit{TDNN-F} architecture is therefore well suited to the embedded use required by the hybrid framework with local and shared computations.
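To make the factorization idea concrete, the sketch below shows one factorized layer: a temporal splice over a small context, a low-rank linear bottleneck, then an expansion with a ReLU. Dimensions are illustrative, and the semi-orthogonal constraint on the first factor described in \cite{Povey2018SemiOrthogonal_TDNNF} is omitted:

```python
import numpy as np

def tdnnf_layer(x, w_a, w_b, context=(-1, 0, 1)):
    """One factorized TDNN layer: splice frames over a temporal context,
    project through a low-rank linear bottleneck (w_a), then expand (w_b).
    x: (T, D) frame sequence; w_a: (len(context)*D, R); w_b: (R, D_out)."""
    T, D = x.shape
    spliced = np.zeros((T, len(context) * D))
    for j, c in enumerate(context):
        idx = np.clip(np.arange(T) + c, 0, T - 1)  # pad by repeating edge frames
        spliced[:, j * D:(j + 1) * D] = x[idx]
    bottleneck = spliced @ w_a                     # low-rank projection, R << D
    return np.maximum(bottleneck @ w_b, 0.0)       # expansion + ReLU

rng = np.random.default_rng(0)
T, D, R, D_out = 10, 40, 8, 40
x = rng.standard_normal((T, D))
w_a = rng.standard_normal((3 * D, R)) * 0.1
w_b = rng.standard_normal((R, D_out)) * 0.1
y = tdnnf_layer(x, w_a, w_b)
print(y.shape)  # (10, 40)
```

The factorization replaces one large weight matrix by the product `w_a @ w_b`, which is what keeps the parameter count and compute low enough for embedded use.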
The \textit{Lattice-Free Maximum Mutual Information (LF-MMI)} objective function \cite{lfmmi} is used to perform discriminative sequence training. The traditional \textit{MMI} function aims to maximize the posterior probability: \begin{equation} \begin{aligned} \mathcal{L}_{mmi}(\lambda) &=\sum_{r=1}^{R} \log P_{\lambda}\left(S_{r} \mid O_{r}\right) \\ &=\sum_{r=1}^{R} \log \frac{P_{\lambda}\left(O_{r} \mid S_{r}\right) P\left(S_{r}\right) }{\sum_{S}P_{\lambda}\left(O_{r} \mid S\right) P\left(S\right)} \end{aligned} \label{eq:lfmmi} \end{equation} where $\lambda$ is the set of learnable parameters of the neural network, $R$ is the total number of training segments, $S_{r}$ is the correct transcription of the $r^{th}$ speech segment $O_{r}$, and $P(s)$ is the language model probability of the sentence $s$. The distribution $P\left(S\right)$ is considered fixed, and is estimated with a language model trained on the training transcriptions. The numerator gives the likelihood of the prediction for the reference word sequence, while the denominator gives the total likelihood of the prediction over all possible word sequences, i.e., the sum over all word sequences as estimated by the acoustic model and the language model. The numerator encodes the supervision and is specific to each segment, while the denominator encodes all possible word sequences and is identical for all segments. This objective is optimized by maximizing the numerator and minimizing the denominator. \textit{MMI} maximizes the conditional log-likelihood of the globally normalized probabilities of the correct transcriptions. The unit used for training is the phoneme.
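For a single segment and a toy set of hypotheses, the criterion of Eq. \ref{eq:lfmmi} reduces to the reference log-likelihood minus a log-sum-exp over competing sequences. The sketch below enumerates the hypotheses explicitly, whereas LF-MMI sums the denominator over a phone-level graph (the numbers are illustrative only):

```python
import math

def mmi_score(num_loglik, all_logliks):
    """MMI criterion for one segment: log P(S_r | O_r) is the joint
    log-likelihood of the reference sequence minus the log-sum-exp over
    all competing word sequences (the denominator)."""
    m = max(all_logliks)  # shift for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in all_logliks))
    return num_loglik - log_denom

# Toy joint log-likelihoods log(P(O|S) P(S)) for 3 hypotheses;
# hypothesis 0 is the reference transcription.
hyps = [-10.0, -12.0, -15.0]
score = mmi_score(hyps[0], hyps)
print(round(score, 3))  # always negative: the denominator includes the reference
```

Pushing the reference likelihood up while pushing the competing mass down drives this score toward 0 from below.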
\begin{figure}[hbtp] \begin{center} \includegraphics[width=1.0\linewidth]{LaTeX/figures/tdnnf.pdf} \end{center} \caption{ Architecture of the \textit{TDNN-F} model, with 15 layers in total. The \textit{bottlenecks} are extracted from the 13th layer. } \label{image:tdnnf} \ \end{figure} In order to obtain an anonymized speech representation, we extract low-dimensional \textit{bottlenecks} ($D$ = 256 dimensions) from a deep layer of the network (the 13th of the network's 15 layers, cf. Figure \ref{image:tdnnf}). It has been observed in \cite{adiReverseGradientNot2019,mohanPrivacyPreservingAdversarialRepresentation2019_reality_adversarial,whatdoesanetworkhears} that this type of representation mainly encodes the linguistic content information and removes part of the speaker identity information. \subsection{Introducing vector quantization for anonymization} To improve anonymity, we propose to constrain the network layer producing the \textit{bottlenecks} by adding a vector quantization layer. Vector quantization consists in approximating a continuous vector by another vector of the same dimension, the latter belonging to a finite set of vectors \cite{vector_quantization}; these vectors are called prototype vectors. In the task of unsupervised learning of discriminative representations with autoencoders, it has been observed that the prototype vectors learned through vector quantization mainly represent phoneme-related information \cite{neural_disctre_vq,Unsupervised_speech_rep_vq,one-shot-vc-vector-quant}. Applying vector quantization in an acoustic model aims to push the model to remove speaker information, since vector quantization reduces the encoding capacity of the network.
Compared to unsupervised tasks, the loss function of an acoustic model explicitly requires that the phonetic information be encoded in the \textit{bottleneck}; we can therefore apply a strong constraint by reducing the number of prototype vectors. Given the input audio sequence $s=\left(s_{1}, s_{2}, \ldots, s_{T}\right)$ of length $T$, the \textit{TDNN-F} encoder produces the \textit{bottlenecks} $h(s) = \left(h_{1}, h_{2}, \ldots, h_{j}\right)$ of length $J$ ($J < T$ due to the subsampling performed by the encoder), where $h_{j} \in \mathbb{R}^{D}$ for each time step $t$, and $D$ is the dimensionality of the latent representation. The vector quantization layer takes as input the sequence of continuous vectors $h(s)$ and replaces each $h_{j} \in h(s)$ with a prototype from the learnable dictionary $E=\left\{e_{1}, e_{2}, \ldots, e_{V}\right\}$ of size $V$, with each $e_{i} \in \mathbb{R}^{D}$. \begin{equation} q(s)=\underset{i}{\arg \min }\left\|h(s)-e_{i}\right\|_{2}^{2} \end{equation} The vector $h_{j}$ is replaced by its nearest prototype vector $e_{v}$ in terms of Euclidean distance. Since quantization is non-differentiable (because of the $\arg \min$ operation), its derivative must be approximated. To do so, we use a \textit{straight-through estimator} \cite{strat_through_estimator}, i.e., $\frac{\partial \mathcal{L}}{\partial h(s)} \approx \frac{\partial \mathcal{L}}{\partial q(s)}$. The prototype vectors are pushed towards the \textit{bottleneck} vectors they replace through an auxiliary loss function: \begin{equation} \mathcal{L}_{vq} = \left\|\operatorname{sg}\left[h(s)\right]-q(s)\right\|_{2}^{2} \end{equation} where $\mathrm{sg}[\cdot]$ denotes the stop-gradient operation.
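The quantization step and the codebook loss can be sketched as follows (NumPy, toy dimensions, no autograd; the straight-through backward pass is only indicated in comments):

```python
import numpy as np

def quantize(h, codebook):
    """Replace each D-dimensional bottleneck vector in h (J, D) by its
    nearest prototype (Euclidean distance) from codebook (V, D).
    Returns the quantized sequence and the chosen prototype indices."""
    # squared distances between every frame and every prototype: (J, V)
    d = ((h[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)  # non-differentiable arg min
    return codebook[idx], idx

def vq_loss(h, q):
    """Codebook loss ||sg[h] - q||^2: h is treated as a constant here, so
    only the prototypes move toward the bottlenecks they replace. With the
    straight-through estimator, gradients w.r.t. h are copied from q."""
    return ((h - q) ** 2).sum()

rng = np.random.default_rng(0)
h = rng.standard_normal((5, 4))          # J=5 frames, D=4
codebook = rng.standard_normal((3, 4))   # V=3 prototypes
q, idx = quantize(h, codebook)
print(q.shape, idx.shape)  # (5, 4) (5,)
```

A small `V` forces many distinct frames onto the same prototype, which is exactly the capacity bottleneck the anonymization exploits.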
This operation is similar to a k-means, but applied to each minibatch during training; the dictionary prototypes correspond to the centroids of a k-means. Since the \textit{bottlenecks} can take any value, an additional loss function regularizes the encoder to produce \textit{bottlenecks} close to the prototypes, so that the training of the encoder does not diverge from that of the dictionary: \begin{equation} \mathcal{L}_{vq\_reg} = \|h(s)-\operatorname{sg}[q(s)]\|_{2}^{2} \end{equation} The loss function of the acoustic model can then be expressed as the sum of the \textit{mmi}, quantization and regularization losses: \begin{equation} \mathcal{L}=\mathcal{L}_{mmi}+\mathcal{L}_{vq}+\lambda \mathcal{L}_{vq\_reg} \end{equation} where $\lambda$ denotes the regularization coefficient (we used $\lambda = 0.25$). To update the dictionary prototypes, we use an exponential moving average (EMA) \cite{ema_vq}. EMA updates the dictionary $E$ independently of the optimizer, which makes training more robust to the choice of optimizer and hyperparameters (e.g., learning rate, momentum). EMA also avoids the posterior collapse problem \cite{vq_vae_collaps}. \begin{figure}[hbtp] \begin{center} \includegraphics[width=0.7\linewidth]{LaTeX/figures/quant.pdf} \end{center} \caption{ Quantization of the vectors produced by the \textit{TDNN-F} encoder before the extraction of the \textit{bottlenecks}. } \label{image:quant} \ \end{figure} \section{Experiments} \label{sec:experiments} \subsection{Datasets} We used the LibriSpeech corpus \cite{Librispeech} for all our experiments. The statistics of the datasets are given in Table \ref{tab:data-train}.
\begin{table*}[htbp] \caption{Statistiques des jeux de données d'entraînement et de test.}\label{tab:data-train} \centering \begin{tabular}{l r r r r c } \toprule \multirow{2}{*}{{}} & \multirow{2}{*}{{Taille}} & \multicolumn{3}{c}{{Nombre de locuteurs}} & \multirow{2}{*}{ \begin{minipage}[t]{0.14\textwidth} \centering {Nombre d'utterances} \end{minipage} } \\ & & {Femme} & {Homme} & {Total} & \\ \midrule LibriSpeech: train-clean-100 & 100h & 125 & 126 & 251 & {~~28539} \\ LibriSpeech: train-clean-360 & 364h & 439 & 482 & 921 & {104014} \\ LibriSpeech: test-clean & 5.4h & 20 & 20 & 40 & {~~~~2620} \\ LibriSpeech: test-other & 5.1h & 17 & 16 & 33 & {~~~~2939} \\ \bottomrule \end{tabular} \end{table*} Le sous-ensemble LibriSpeech train-clean-100 a été utilisé pour apprendre le modèle acoustique. Les jeux de données utilisées pour évaluer les performances de reconnaissance de parole sont LibriSpeech test-clean et LibriSpeech test-other. Le challenge Voice Privacy définit LibriSpeech train-clean-360 comme jeux d'apprentissage pour apprendre le système de vérification du locuteur. Il est important de remarquer que ce jeu d'apprentissage ne propose pas une grande variabilité intralocuteur du aux longues sessions d'enregistrement des chapitres des livres audio. Entraîner le système de vérification du locuteur sur les \textit{bottlenecks} d'un modèle acoustique n'est pas une tâche facile, car toute erreur de représentation effectuée par le modèle acoustique est propagée dans celui de vérification du locuteur. Pour atténuer cet effet, nous avons appris le système de reconnaissance du locuteur sur la combinaison de train-clean-100 et train-clean-360. De ce fait, le modèle acoustique produit une très bonne représentation pour le sous-ensemble train-clean-100 (vue lors de l'entraînement du modèle acoustique) ainsi aidant l'apprentissage du modèle de vérification du locuteur. 
Conformément au challenge Voice Privacy, les performances en reconnaissance du locuteur ont été évaluées avec le jeu de donnée LibriSpeech test-clean. Parmi les 40 locuteurs de LibriSpeech test-clean, 29 d'entre eux ont été sélectionnés, pour chaque locuteur un sous-ensemble totalisant 1 min de parole (après détection d'activité vocale) a été sélectionné pour l'ensemble d'enrôlement et le reste a été utilisé pour l'ensemble de test. Les nombres de test cible et imposteur sont détaillés dans le tableau \ref{tab:data-test}. \begin{table*}[htbp] \caption{Nombre de test de vérification dans l'ensemble de donnée d'évaluation.}\label{tab:data-test} \centering \begin{tabular}{l c r r c} \toprule {} & {Type} & {Femme} & {Homme} & {Total} \\ \midrule \multirow{2}{*}{Librispeech: test-clean} & Cible & 548 & 449 & {~~~997} \\ & Imposteur & {11196} & {9457} & {20653} \\ \bottomrule \end{tabular} \end{table*} \subsection{Métriques et évaluation} \label{sec:metrics} Pour évaluer les performances du système en matière d'anonymisation (\textit{capacité de dissimulation du locuteur}) et d'utilité (\textit{capacité à reconnaitre le contenu linguistique}), deux systèmes et métriques sont utilisés. Pour évaluer quantitativement la qualité de l'anonymisation, une architecture de vérification automatique du locuteur implémentée dans SideKit \cite{sidekit} est utilisée. Il s'agit d'un système x-vecteurs composé de cinq couches TDNN suivit d'une couche de \textit{statistics pooling} \cite{snyder2018xvector}. La fonction de coût utilisée pour l'apprentissage est la \textit{large margin softmax loss} \cite{aamlossArcMarginProduct}. La métrique d'évaluation est le taux d'égales erreurs (EER$_\%$), plus l'EER$_\%$ est élevé, mieux les locuteurs sont anonymisés. Pour l'utilité, le système de reconnaissance de parole transcrit la parole depuis la représentation \textit{bottleneck}. 
La mesure du taux d'erreurs mots (WER$_\%$) est utilisée pour évaluer dans quelle mesure les \textit{bottlenecks} encodent correctement l'information linguistique. Plus le WER$_\%$ est faible, mieux le contenu linguistique est encodé. \subsection{Résultats et discussions} Le tableau \ref{tab:results} présente les résultats expérimentaux. La première ligne présente les scores de vérification du locuteur et de reconnaissance de parole pour les jeux de données test-clean et test-other. Ces scores sont cohérents avec ceux reportés dans la littérature \cite{tomashenkoVoicePrivacy2020Challenge,pkwrap}. Les résultats de vérification du locuteur montrent que la représentation \textit{bottleneck} d'un système de reconnaissance de parole de référence (No VQ) est capable de correctement discriminer les locuteurs. Il est à noter que pour cet exemple les femmes sont plus difficiles à différencier que les hommes, c'est à dire: 9,3 EER$_\%$ pour les femmes et 4,2 EER$_\%$ pour les hommes. Le WER$_\%$ sur LibriSpeech test-clean est de 5,8, valeur utilisée comme référence pour les expériences suivantes. En contraignant la représentation \textit{bottleneck} du réseau avec l'utilisation de quantification vectorielle, les performances de vérification du locuteur sont drastiquement réduites. Le nombre $V$ de prototypes dans le dictionnaire de quantification contraint plus ou moins le modèle acoustique, avec $V$ vecteurs prototypes l'information linguistique du signal est compressée dans un espace vectoriel discret de $V$ vecteurs prototypes. Plus le dictionnaire $V$ est petit, plus le réseau dois trouver une transformation efficace pour représenter l'information linguistique, ce qui laisse moins de place pour encoder l'information relative au locuteur. Ainsi avec $V$ = 16 les scores d'EER$_\%$ de vérification du locuteur sont de 30,0 pour les femmes et 32,4 pour les hommes valeurs plus élevées qu'avec $V$ = 1024 ou le réseau obtient 17,9 pour les femmes et 18,3 pour les hommes. 
En comparaison avec le système de référence (No VQ), l'utilisation de la quantification vectorielle permet d'anonymiser les \textit{bottlenecks}. Plus la valeur de $V$ est petite, moins les \textit{bottlenecks} sont représentatives du locuteur, permettant une meilleure anonymisation. Cependant, les performances de reconnaissance de parole, mesurées en termes de WER$_\%$, sont-elles aussi impactées par la taille du dictionnaire de quantification. Avec $V$ = 16 le WER$_\%$ est de 15,9 sur LibriSpeech test-clean, dégradation très importante par rapport a la valeur de référence de 5,8. En augmentant le nombre de vecteurs prototype, le WER$_\%$ redescend. Pour $V$ = 1024 le WER$_\%$ est de 7,2. Le tableau \ref{tab:results} présente aussi les scores de reconnaissance de parole sur le jeu de données LibriSpeech test-other. Le compromis entre de bonnes performances en reconnaissance de parole et une bonne anonymisation est inhérent au problème de partage de données anonymisées (problème connu sous le nom de \say{\textit{privacy-utility tradeoff}} \cite{privacy-utility-tradeoff}). Dans notre cadre de travail, ce compromis est paramétrable et peut être ajusté au souhait de l'utilisateur ou du fournisseur de service via la taille $V$ du dictionnaire de quantification. De manière générale, le tableau \ref{tab:results} montre que plus $V$ est faible, meilleure est l'anonymisation, mais cela est au prix d'une dégradation des performances en reconnaissance de parole. Et, inversement, plus $V$ est grand, meilleure est la reconnaissance de parole au détriment d'une moins bonne anonymisation. 
\begin{table*}[htbp] \caption{Caption} \centering \begin{tabular}{ c c c c c c } \toprule \multicolumn{2}{c}{} Nb vecteurs & \multicolumn{2}{c}{EER$_\%$} & \multicolumn{2}{c}{WER$_\%$} \\ \multicolumn{2}{c}{} prototypes & F & H & test-clean & test-other \\ \midrule & (No VQ) & 9.3 \footnotesize{$\pm$0.5} & 4.2 \footnotesize{$\pm$1.0} & 5.8 \footnotesize{$\pm$0.3} & 19.5 \footnotesize{$\pm$0.6} \\ \midrule & 16 & 30.0 \footnotesize{$\pm$2.1} & 32.4 \footnotesize{$\pm$2.1 } & 15.9\footnotesize{ $\pm$0.5 } & 42.5\footnotesize{ $\pm$0.8} \\ & 32 & 25.6 \footnotesize{$\pm$2.1 } & 27.3 \footnotesize{$\pm$1.9 } & 9.8 \footnotesize{$\pm$0.4 } & 31.4 \footnotesize{$\pm$0.8 }\\ & 48 & 22.0 \footnotesize{$\pm$1.7 } & 22.6 \footnotesize{$\pm$2.1 } & 8.7 \footnotesize{$\pm$0.4 } & 28.8 \footnotesize{$\pm$0.8 }\\ & 128 & 22.0 \footnotesize{$\pm$1.8 } & 22.8 \footnotesize{$\pm$2.0 } & 8.5 \footnotesize{$\pm$0.4 } & 28.5 \footnotesize{$\pm$0.8 }\\ & 256 & 19.2 \footnotesize{$\pm$1.6 } & 19.6 \footnotesize{$\pm$2.0 } & 7.6 \footnotesize{$\pm$0.3 } & 26.1 \footnotesize{$\pm$0.7 }\\ & 512 & 19.6 \footnotesize{$\pm$1.6 } & 19.2 \footnotesize{$\pm$2.0 } & 7.6 \footnotesize{$\pm$0.3 } & 25.4 \footnotesize{$\pm$0.7 }\\ & 1024 & 17.9 \footnotesize{$\pm$1.6} & 18.3\footnotesize{ $\pm$1.8} & 7.2\footnotesize{ $\pm$0.3 } & 24.7\footnotesize{ $\pm$0.7} \\ \bottomrule \end{tabular} \label{tab:results} \end{table*} \section{Conclusion} \label{sec:conclusion} Dans cet article, nous avons appliqué le cadre de travail hybride respectueux de la vie privée, qui décompose un modèle neuronal en deux parties, un encodeur qui génère une représentation anonyme sur l'appareil de l'utilisateur, et un décodeur qui utilise cette représentation anonyme pour effectuer des calculs mutualisés. Nous avons étudié ce cadre de travail dans le contexte des assistants vocaux. Comme encodeur, un modèle acoustique \textit{TDNN-F} a été considéré, et avons montré ces limitations. 
Using the VoicePrivacy challenge dataset, we measured that the speaker can be verified at 9.3$_\%$ EER for female speakers and 4.2$_\%$ EER for male speakers with a standard \textit{TDNN-F} model. We proposed using a vector quantization algorithm to constrain the representation space, limiting the acoustic model to encoding only phonetic information. This algorithm is configurable through the size of the quantization dictionary, which makes it possible to adjust the trade-off between good speech recognition performance and good anonymization. For example, with a dictionary of 128 vectors, the speaker is far less verifiable, at 22.0$_\%$ EER for female speakers and 22.8$_\%$ EER for male speakers, corresponding to a gain of 232\%. However, this anonymization gain affects speech recognition performance: the WER increases by 47\% (from 5.8$_\%$ to 8.5$_\%$). In future work, we plan to generate speech from these anonymized representations and to evaluate it in terms of speech recognition performance and speaker identity concealment. \section*{Remerciements} This work was carried out with the support of the French National Research Agency, as part of the ANR project DEEP-PRIVACY (18-CE23-0018), and of the Grand Est Region. \bibliographystyle{jep2022}
\section{Introduction} \label{sec:introduction} Satellite-based \ac{M2M} communications represent a large fraction of the \ac{IoT} market, showing increasing popularity both in the research and in the industrial community. The ubiquitous coverage provided by satellites may represent a key feature to enable the so-called \ac{IoT} massive internetworking, bringing connectivity even to remote areas that are unlikely to be covered by other communication infrastructures (e.g., cellular). In order to evaluate the feasibility of a satellite-based solution, along with any physical layer issues (e.g., \ac{SNR}, power consumption), the interactions between the transport and the application layer protocols need further investigation. Connection-oriented transport protocols, like \ac{TCP}, require connection establishment procedures and the use of flow or congestion control algorithms, which may increase the communication overhead. The latter is an issue that needs to be carefully taken into account, especially in the case of \ac{IoT}/\ac{M2M} short-lived connections. In order to mitigate the aforementioned issue, \ac{IETF} has proposed the use of \ac{CoAP} (RFC 7252), a lightweight protocol designed for resource-constrained devices. It relies on the use of \ac{UDP} at the transport layer; because of this, reliability is left as an optional feature, to be implemented at the application layer. \ac{CoAP} endpoints exchange messages according to a request/response mode and the resources are accessed through a \ac{URI}. In order to avoid a polling mechanism, \ac{IETF} has designed a protocol extension to \ac{CoAP}, based on the so-called \textit{observer} design pattern (RFC 7641). \ac{CoAP} clients \textit{register} to the \ac{CoAP} server; then, each client receives a \textit{notification} every time the state of a resource changes. The observer pattern is somewhat similar to the \ac{PUB/SUB} paradigm \cite{pubsub}, as implemented by MQTT, for instance.
The performance provided by the use of MQTT on \ac{RA} satellite channels, according to the \ac{DVB-RCS2} standard \cite{dvbrcs2}, has been preliminarily studied in \cite{advances}. Unlike \ac{CoAP}, MQTT is \ac{TCP}-based, thus the congestion control algorithm at the transport layer of each \ac{RCST} is responsible for rate control and retransmissions, if any erasures occur on the satellite channel. In this work, we propose a comparison between the MQTT-based scenario in \cite{advances} and a \ac{CoAP}-based protocol stack. The performance metric under consideration is the \textit{completion time}, which is the time a producer takes to successfully deliver data to a consumer. The rest of this paper is organized as follows: Section \ref{sec:rlatedWorks} reviews some of the most relevant works in the literature, focusing on \ac{IoT}/\ac{M2M} communication scenarios via satellite and on the comparisons between \ac{IoT}/\ac{M2M} protocol stacks. Section \ref{sec:archit} compares some typical \ac{M2M}/\ac{IoT} protocol stacks. Section \ref{sec:scenarioSescription} describes the application scenario under consideration and Section \ref{sec:performanceEvaluation} shows some preliminary numerical results obtained via extensive simulation runs. Finally, the conclusions are provided in Section \ref{sec:conclusion}. \section{Related Works} \label{sec:rlatedWorks} The work in \cite{related_work_1} considers \ac{M2M} terminals that communicate with a remote receiver via a satellite link. Each terminal transmits a fixed amount of data: \ac{RA} is used to deliver the first few messages, then the \ac{NCC} allocates reserved timeslots to each \ac{RCST}, in order to ensure a successful delivery of data. The authors assess the system performance by setting the burst duration and by using four different reception modes at the receiver: \ac{TDMA}, \ac{FDMA}, \ac{PDMA} and \ac{TCDMA}. The throughput and the packet error rate are the performance metrics under consideration.
The authors show that \ac{TCDMA} with multiuser detection outperforms both \ac{FDMA} and \ac{TDMA} in terms of throughput, thus minimizing the time required to serve a given set of \ac{M2M} terminals. In \cite{related_work_2}, a satellite-based \ac{WSN} is considered. In order to handle a potentially large number of \ac{M2M} devices, the authors suggest organizing the nodes in clusters of different sizes. Within a cluster, the nodes communicate with the cluster-head (CH), which in turn forwards the collected data to the satellite gateway. The clustering mechanism aims at organizing the clusters in such a way that the application requirements (e.g., minimum required \ac{SNR}) can be met. Moreover, in order to cope with the rain fading that can affect the signal propagation, multiple interconnected satellite gateways are considered. Physical layer metrics are used to assess the performance level: \ac{SNR} and energy consumption. The aforementioned works do not consider any interactions with higher layer protocols, focusing on \ac{MAC} and physical layer metrics. Nonetheless, they consider \ac{M2M} scenarios involving satellite communications. Conversely, application-layer protocols are explicitly compared in \cite{related_work_4, related_work_3, related_work_5}. In \cite{related_work_4}, three classes of protocols for \ac{M2M} communications are compared: protocols targeting the \ac{SOA}; protocols implementing the \ac{REST} paradigm; and message-oriented protocols. As the \ac{SOA} protocol, the authors consider OPC\footnote{A description of the OPC standard is available at https://opcfoundation.org/about/what-is-opc/} Unified Automation (UA), a platform-independent middleware, whereas \ac{CoAP} and MQTT are chosen as representatives of \ac{REST} architectures and message-oriented protocols, respectively. The application scenario is represented by a cellular network delivering \ac{M2M} data, where reliability and real-time data exchanges are required.
The completion time in an emulated cellular network is used as the key performance indicator, and, according to the authors, OPC-based communications outperform \ac{CoAP} and MQTT-based data transfer, at the price of a larger overhead. MQTT is \ac{TCP}-based, and the contributions provided in \cite{Celandroni2016522, Gotta2014147, Bacco2014405, bacco2016m2m} shed some light on how \ac{TCP} behaves in the presence of a random access satellite link dominated by collisions, because it may represent a limiting factor on the achievable throughput in satellite environments. On the other hand, \ac{CoAP} is \ac{UDP}-based, and a comparison is in order when dealing with random access satellite links, in order to provide some reference figures on the achievable performance level. A preliminary comparison between MQTT and \ac{CoAP} is provided in \cite{related_work_3}, in terms of bandwidth usage and latency. In order to compare the two protocols, the authors consider a simple scenario composed of a single MQTT publisher that sends data to an MQTT broker; similarly, the \ac{CoAP}-based scenario considers data exchanges between an \ac{HTTP} client and a \ac{CoAP} server. The metrics under consideration are the total amount of generated traffic and the average \ac{RTT}. Both reliable and non-reliable data transmissions are considered. In both cases, when no losses occur, \ac{CoAP} transfers less data and exhibits a shorter \ac{RTT} than MQTT, on average. In \cite{related_work_5}, a satellite-based architecture is considered: multiple \ac{M2M} devices send data to a remote gateway. The performance provided by the use of MQTT and of \ac{CoAP} is evaluated, and the authors show that \ac{CoAP} can be properly tuned, in order to outperform MQTT even in the presence of high offered traffic.
\section{Typical \ac{IoT} protocol stacks} \label{sec:archit} In \cite{igor}, the authors propose a general satellite network architecture for \ac{IoT}/\ac{M2M} application scenarios, where the use of satellite communications could provide some benefits, such as broadcast communications, large coverage also in suburban and rural areas, and support for highly mobile nodes in the absence of a fixed infrastructure. When comparing possible \ac{IoT} architectures, several different protocol stacks can be used, each providing different advantages. Two communication paradigms are typically considered in \ac{IoT} scenarios: request/response and \ac{PUB/SUB}. \ac{IoT} nodes collect or produce new data typically in an event-driven or a time-driven fashion. While the latter may exhibit regularity over time, the former shows variable traffic patterns. If the request/response paradigm is taken into account, the clients should periodically query the servers in order to retrieve fresh data. On high-delay links, as in the case of satellites, the time needed to successfully complete data exchanges should be carefully evaluated. The \ac{PUB/SUB} paradigm, a possible alternative to the request/response one, allows the data consumers, or \textit{subscribers}, to receive any fresh data as soon as they are available at the data producers, or \textit{publishers}. A key feature of the \ac{PUB/SUB} paradigm is the decoupling between data producers and data consumers at an intermediate entity, called \textit{broker}. In topic-based \ac{PUB/SUB} systems, each data piece belongs to one or more \textit{topics}, or logical channels. The publishers send new data to the broker, specifying the topic(s) the data belong to. The broker keeps the list of the active topics and, for each topic, the list of the active subscribers. Each subscriber, in fact, declares its interests to the broker through an initial \textit{registration} procedure.
After that, the mechanism is straightforward: the publishers send new data to the broker, which forwards them to the subscribed nodes. On high-delay links, relying on a \ac{PUB/SUB}-based data exchange roughly halves the delivery delay with respect to a request/response-based one. \ac{CoAP} and MQTT are notable examples of application protocols implementing the aforementioned two paradigms: request/response the former, \ac{PUB/SUB} the latter. The MQTT and \ac{CoAP} protocol stacks can be seen in Figure \ref{fig:stacks} and are discussed in Sections \ref{subsec:mqtt} and \ref{subsec:coap}, respectively. \begin{figure} \centering \includegraphics[scale=0.95, clip=true, trim=0 0 0 0]{stack.png} \caption{Typical \ac{IoT} protocol stacks} \label{fig:stacks} \end{figure} \subsection{MQTT protocol} \label{subsec:mqtt} MQTT is a \ac{M2M}/\ac{IoT} application protocol designed by IBM in 1999 for use in satellite networks. Since then, its use has spread widely to terrestrial communications. A typical MQTT data packet is composed of a 2-byte fixed header part, a variable header part whose size depends on the packet type, and a variable-length payload. Each data packet is sent to the broker, which maintains the list of the active subscriptions and of the active topics. Although reliable data transmissions are inherently guaranteed by \ac{TCP}, MQTT offers three \ac{QoS} levels to deliver the messages. In fact, \ac{TCP} guarantees the reliability of the messages exchanged over the network connection between broker-publisher and broker-subscriber, but an \ac{E2E} mechanism is absent. To address that, MQTT provides additional reliability levels at the application layer. \subsection{\ac{CoAP} protocol} \label{subsec:coap} \ac{CoAP} follows a \ac{REST} architectural style and is designed for resource-constrained environments. Each \ac{CoAP} server logically encapsulates a \textit{resource}, uniquely identified by a \ac{URI}.
A \ac{CoAP} client sends a request by means of a \textit{Confirmable} or \textit{Non-confirmable} message, in order to retrieve the resource representation available at the server. If a \textit{Confirmable} message type is sent, an \ac{ACK} is expected to confirm the correct reception of data at the intended receiver; otherwise, an unreliable data exchange occurs (\textit{Non-confirmable} message type). A \ac{CoAP} request contains the \ac{URI} of the resource and is typically performed by means of the HTTP \textit{GET} verb. The typical message format includes a fixed-size header (4 bytes), a variable-length \textit{Token} field (0-8 bytes), an \textit{options} field, and the payload. \ac{CoAP} is \ac{UDP}-based, and it provides optional reliability at the application layer. A transmission window of $NSTART$ packets\footnote{RFC 7252 defines $NSTART$ as the number of \textit{simultaneous outstanding interactions} (as in Section 4.7). For the sake of brevity, we refer to it as \textit{transmission window}.} is dictated by the specifications; in the default configuration, $NSTART$ is equal to one. If \textit{Confirmable} messages are sent, then the \ac{ARQ} mechanism in use is a simple Stop-and-Wait protocol, employing exponential back-off. This choice is motivated by the fact that the protocol is intended for low-power resource-constrained devices, where the implementation of more complex mechanisms can present some computational or technological issues because of the limited available resources. Nevertheless, \ac{CoAP} is a promising application layer protocol, and its specifications leave room for the implementation and use of a different \ac{ARQ} mechanism, as the use of $NSTART > 1$ would require\footnote{Section 4.7 of RFC 7252 allows the use of $NSTART > 1$, if a congestion control mechanism is available.}.
This would make it possible to take advantage of the large class of devices that support more complex mechanisms and benefit from a larger transmission window. In order to reduce the delivery delay of a request/response pattern, as in the case of \ac{CoAP}, the mechanisms described in the next two paragraphs can be applied. \subsubsection{Observer pattern} The \ac{CoAP} specifications allow the implementation of the so-called \textit{observer} pattern, which provides a data exchange model semantically close to the \ac{PUB/SUB} one. A \ac{CoAP} client performs a registration to the server(s), indicating the \acp{URI} it is interested in. However, there is still a difference with respect to the \ac{PUB/SUB} paradigm: the decoupling between data consumers and producers guaranteed by the publish/subscribe paradigm cannot be provided by the observer pattern. The absence of an intermediate entity leaves the server(s) in charge of keeping a list of the interested clients. The next paragraph explains how the use of a proxy can solve the aforementioned issue. \subsubsection{\ac{CoAP} proxying} In order to have a \ac{CoAP}-based configuration closely resembling a \ac{PUB/SUB} one, a further step is necessary, which exploits the use of the \textit{proxying} functionality, as described in RFC 7252. A proxy is defined as a \ac{CoAP} endpoint that can be delegated by clients to perform requests on their behalf. Thus, a \ac{CoAP} proxy is an intermediate entity, which can actually decouple the clients from the servers. By implementing both the proxying functionality and the observer pattern, as we propose in this work, \ac{CoAP} behaves similarly to MQTT. \subsection{Transport protocols} A key difference between \ac{CoAP} and MQTT is the transport protocol they rely on. MQTT is \ac{TCP}-based, which is connection-oriented, thus a \ac{3WHS} procedure is needed to establish a connection.
On the other hand, \ac{CoAP} is \ac{UDP}-based\footnote{A \ac{TCP}-based \ac{CoAP} version is in a draft IETF proposal available at\\https://datatracker.ietf.org/doc/draft-ietf-core-coap-tcp-tls}, which provides connectionless communication and does not offer any congestion or flow control algorithms. While the use of \ac{TCP} can be of interest in some \ac{M2M}/\ac{IoT} scenarios \cite{bacco2016m2m, bacco2015tcp}, the majority of them would largely benefit from a lightweight transport protocol. In the following section, the performance provided by the use of MQTT and of \ac{CoAP} is compared, when the latter implements the observer pattern and the proxying functionality. \section{Scenario description} \label{sec:scenarioSescription} In this section, the scenario under consideration is described, and the logical architecture is visible in Figure \ref{fig:coap_scenario}. \begin{figure} \centering \includegraphics[scale=0.35]{coap_v2.png} \caption{The \ac{CoAP}-based scenario under consideration. The dotted lines show a typical setup procedure at application layer (steps 1 and 2) and the notification of new messages via proxy (steps 3 and 4).} \label{fig:coap_scenario} \end{figure} Several \ac{CoAP} servers produce data that are sent to the \ac{CoAP} proxies. Each proxy is in charge of delivering the received data to a remote \ac{CoAP} client via \ac{DVB-RCS2}-compliant \acp{RCST}. Thus, we assume that the \ac{CoAP} servers, which encapsulate available resources, act as data producers. For instance, such an architecture can be applied to low-altitude \ac{UAV} swarms, whose use is increasingly common in several application fields, such as precision agriculture \cite{bacco2014uavs}.
A master \ac{UAV} acts as a proxy, collecting data from other \acp{UAV} in the same swarm and delivering data via satellite to a remote data center. We recall that \ac{CoAP} is \ac{UDP}-based, thus the \ac{ARQ} protocol must be implemented at the application layer, if reliable delivery is expected. In Section \ref{subsec:modCOAP}, the \ac{CoAP} settings in use are described, while Section \ref{subsec:sc} details the system configuration. \subsection{\ac{CoAP} protocol implementation} \label{subsec:modCOAP} We implemented the \ac{CoAP} protocol as a Network Simulator 3 (NS-3) module, along with the observer pattern and the proxying functionality. The numerical results in Section \ref{sec:performanceEvaluation} are based on the use of those extensions, thus the typical \ac{CoAP} request/response paradigm is replaced by a \ac{PUB/SUB}-like mechanism. The reason behind the latter choice is straightforward: by removing the need for a request, the data delivery delay is reduced, because fresh data are available to a client as soon as they are generated or collected by a server. On high-delay links, a \textit{push} strategy can provide large gains with respect to a \textit{pull} one, for instance in terms of delivery delay. \subsection{System configuration} \label{subsec:sc} We consider a network composed of a \ac{GEO} satellite and a large number of \ac{CoAP} servers, connected to \ac{CoAP} proxies; each proxy is connected to an \ac{RCST} (see Figure \ref{fig:coap_scenario}). It is worth underlining here that a single proxy (instead of multiple ones) can be used if a single network is desired; multiple proxies are to be used if separated networks are required. In Figure \ref{fig:coap_scenario}, we refer to the more general case with multiple separated networks. \ac{CoAP} servers produce \ac{M2M}/\ac{IoT}-like data that is delivered to a remote \ac{CoAP} client.
If more clients were present on the remote side, a connection per client would be opened, thus increasing the contention level on the random access channel. Alternatively, a receiving proxy can be placed on the remote side, too, in order to have a single connection per sender proxy to the receiving one. Thus, the scenario under consideration is representative of both aforementioned cases, because the \ac{CoAP} client in Figure \ref{fig:coap_scenario} can be substituted by a \ac{CoAP} proxy, then connected to multiple clients. In the extensive simulations we ran, data sources begin the transmission according to an exponentially distributed inter-arrival time with parameter $\lambda$. The data payload length is randomly drawn, with equal probability, from three Pareto distributions with parameters $x_m^i > 0$ and $\alpha^i > 0$, $i \in \{1,2,3\}$. The three distributions are here meant to represent small, medium and large application \ac{M2M}/\ac{IoT} payload lengths, as generated or collected by the server(s). More technically, each \ac{CoAP} server sends a burst of packets, which the proxy forwards, packed into a bulk of \ac{DVB-RCS2} RA blocks, according to the specifications of Waveform 14\footnote{The waveform specifications are drawn from Table A-1 \textit{"Reference Waveforms for Linear Modulation Bursts"} in \cite{dvbrcs2}.}, reported in Table \ref{table:settings}. A single timeslot can be used per \ac{RA} block by each \ac{RCST}, according to typical configurations of \ac{DVB-RCS2} systems. The \ac{MAC} protocol in use is \ac{CRDSA} \cite{crdsa}, configured with 3 replicas. Time is slotted and each \ac{RA} block is composed of 64 timeslots. The \ac{MAC} queue length is set to an arbitrarily large value and both return and forward links are assumed to be error-free. In the return link, collisions can occur, thus retransmissions are triggered in order to ensure a reliable data delivery.
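As a concrete illustration of the traffic generator just described, the sketch below draws payload lengths from the three Pareto classes via inverse-CDF sampling. The scale and shape values follow Table \ref{table:settings}; the function names are our own and not part of the simulator.

```python
import random

# Scale parameters of the three Pareto classes (small, medium and
# large payloads) and the common shape parameter, per the setup table.
XM = [931, 9532, 47663]   # x_m [bytes]
ALPHA = 1.1

def draw_payload(rng):
    """Pick one of the three classes uniformly, then sample a length
    by inverse-CDF sampling: X = x_m * U**(-1/alpha), U ~ Uniform(0, 1]."""
    xm = rng.choice(XM)
    u = 1.0 - rng.random()          # in (0, 1], avoids division by zero
    return xm * u ** (-1.0 / ALPHA)

rng = random.Random(42)
samples = [draw_payload(rng) for _ in range(10000)]
assert min(samples) >= min(XM)      # Pareto support is [x_m, +inf)
```

With $\alpha$ close to 1, the distribution is heavy-tailed, so occasional bursts much larger than $x_m$ are expected, which is the intended mix of small and large \ac{M2M} payloads.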
\acp{ACK} are assumed to be always correctly received. \section{Performance Evaluation} \label{sec:performanceEvaluation} The scenario presented in the previous section is here numerically evaluated and then compared to the MQTT-based scenario in \cite{advances}, which is sketched in Figure \ref{fig:mqtt_scenario}. \begin{figure} \centering \includegraphics[scale=0.38]{mqtt_v2.png} \caption{The MQTT-based scenario in use for comparison. The dotted lines show a typical setup procedure at application layer (step 1) and the notification of new messages via broker (steps 2 and 3).} \label{fig:mqtt_scenario} \end{figure} In this work, the length of the transmission windows of the \ac{CoAP} servers ranges in $NSTART \in [1,100]$. Thanks to this, we explore the possibility of reducing the completion time by increasing $NSTART$. In the following, a Go-Back-N \cite{burton1972errors} \ac{ARQ} protocol, employing exponential backoff, is in use if $NSTART > 1$. The following numerical results are obtained via extensive simulation runs with S-NS3 \cite{hytonen2014satellite}, a satellite network extension to the NS-3 platform. The simulator parameters have values as reported in Table \ref{table:settings} and a minimum of 1000 data exchanges per scenario has been simulated, in order to ensure statistical reliability.
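To make the retransmission logic concrete, here is a toy sketch of a Go-Back-N sender with a window of $NSTART$ packets; it counts frames put on the channel under injected losses. This is our own simplified model (backoff timing is omitted, losses are specified explicitly), not the S-NS3 implementation; with a window of one it degenerates to the Stop-and-Wait behaviour of the default \ac{CoAP} configuration.

```python
def go_back_n(num_pkts, window, lost):
    """Frames transmitted by a Go-Back-N sender with the given window.

    `lost` is a set of (packet, attempt) pairs erased on the channel,
    e.g. by collisions on the random access link. On a loss, the
    receiver discards everything after the gap (cumulative ACKs), so
    the sender resends from the lost packet onwards.
    """
    base, sent, attempt = 0, 0, {}
    while base < num_pkts:
        first_loss = None
        for p in range(base, min(base + window, num_pkts)):
            sent += 1                       # frame goes on the channel
            a = attempt.setdefault(p, 0)
            if first_loss is None and (p, a) in lost:
                first_loss = p
        if first_loss is None:
            base = min(base + window, num_pkts)   # whole window ACKed
        else:
            attempt[first_loss] += 1        # next try is a new attempt
            base = first_loss               # go back to the lost packet
    return sent

assert go_back_n(10, 4, set()) == 10        # no losses: one frame each
assert go_back_n(5, 1, set()) == 5          # window 1 = Stop-and-Wait
assert go_back_n(10, 4, {(3, 0)}) == 11     # packet 3 retransmitted once
```

A larger window keeps more frames in flight per round trip, which is why increasing $NSTART$ reduces the completion time on the long-delay satellite link as long as the collision rate stays low.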
\begin{table} \begin{center} \begin{tabular}{|c|c|} \hline \textbf{Name} & \textbf{Value} \\ \hline \hline \ac{RA} scheme & 3-\ac{CRDSA} \\ \hline \ac{RA} blocks per superframe & 1 \\ \hline \ac{RA} block duration & 13 [ms] \\ \hline Timeslots per \ac{RA} block & 64 \\ \hline Gross slot size & 188 [B] \\ \hline Net slot size & 182 [B] \\ \hline Bandwidth & 8012820 [Hz] \\ \hline Roll off & 0.2 \\ \hline Carrier spacing & 0.3 [Hz] \\ \hline Nominal \ac{RTT} & 0.52 [s] \\ \hline Pareto distributions &$x_m^1 = 931$ [B] \\ &$x_m^2 = $ 9532 [B] \\ &$x_m^3 = $ 47663 [B] \\ &$\alpha^1 = \alpha^2 = \alpha^3 = 1.1$ \\ \hline \end{tabular} \end{center} \caption{Simulator setup parameters} \label{table:settings} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline \textbf{Protocol stack} & \textbf{Avg. aggregated goodput} \\ \hline \hline \ac{CoAP}/UDP ($NSTART = 1$) & 32.14 [KB/s] \\ \hline \ac{CoAP}/UDP ($NSTART = 2$) & 40.46 [KB/s] \\ \hline \ac{CoAP}/UDP ($NSTART = 3$) & 43.21 [KB/s] \\ \hline \ac{CoAP}/UDP ($NSTART = 4$) & 45.58 [KB/s] \\ \hline \ac{CoAP}/UDP ($NSTART = 5$) & 50.1 [KB/s] \\ \hline \ac{CoAP}/UDP ($NSTART = 10$) & 57.1 [KB/s] \\ \hline \ac{CoAP}/UDP ($NSTART = 100$) & 77.5 [KB/s] \\ \hline \hline MQTT/\ac{TCP} & 50.6 [KB/s] \\ \hline \end{tabular} \end{center} \caption{Average aggregated goodput at the MQTT broker or at the \ac{CoAP} client/proxy in the scenarios under consideration} \label{table:goodput} \end{table} In Figure \ref{fig:CT}, the completion time of \ac{CoAP} and MQTT data exchanges is visible, in the presence of low/moderate load. The completion time is plotted against an increasing number of application packets sent per data exchange, as readable on the x-axis. \begin{figure} \centering \includegraphics[scale=0.4, clip=true, trim=40 50 0 0]{coap_mqtt.pdf} \caption{Completion time of MQTT and \ac{CoAP} data exchanges per \ac{CoAP} proxy/client or MQTT publisher.}
\label{fig:CT} \end{figure} Seven increasing $NSTART$ values have been selected in the \ac{CoAP}-based scenario: the use of a larger value provides a lower completion time, as expected. However, this holds only in the presence of a low/medium traffic profile on the \ac{RA} channel, so that erasures due to collisions are unlikely to occur. Table \ref{table:results_IAT_1s} reports the normalized \ac{MAC} offered load for each $NSTART$ value, together with its 25th and 75th percentiles, when $\lambda^{-1} = 1$ [s]. The collision rate is almost negligible for the load intervals under consideration. A low/medium traffic profile is used, in order to avoid congestion phenomena. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{NSTART} &\multicolumn{3}{|c|}{\textbf{Normalized \ac{MAC} offered load}} \\ \hline & \textbf{mean} & \textbf{25th p.} & \textbf{75th p.} \\ \hline 1 & 0.0671 & 0.031 & 0.09 \\ \hline 2 & 0.0820 & 0.047 & 0.11 \\ \hline 3 & 0.0880 & 0.047 & 0.12 \\ \hline 4 & 0.0929 & 0.048 & 0.12 \\ \hline 5 & 0.1016 & 0.048 & 0.14 \\ \hline 10 & 0.1161 & 0.062 & 0.15 \\ \hline 100 & 0.1172 & 0.063 & 0.16 \\ \hline \end{tabular} \end{center} \caption{Normalized \ac{MAC} offered load for increasing $NSTART$ values if \ac{CoAP} is in use at the application layer} \label{table:results_IAT_1s} \end{table} The default \ac{CoAP} configuration ($NSTART=1$) provides a quite large completion time, even for small amounts of data, under-utilizing the available system resources. For larger values, the completion time decreases and, for $NSTART=100$, \ac{CoAP} provides a lower completion time than MQTT. It is worth underlining here that the \ac{BDP} of the satellite link is $\approx 40$ \ac{CoAP} packets; thus, when $NSTART = 100$ and the burst length is larger than the \ac{BDP}, backlogging is present.
The completion time of the MQTT-based scenario depends on the \ac{TCP}\footnote{TCP NewReno (RFC 6582) is in use in this scenario.} congestion control algorithm; as the \ac{TCP} congestion window increases over time, the curve exhibits an almost linear trend to a first approximation, in the presence of a low collision rate on \ac{RA} channels. Looking at the completion time, a comparable value is obtained with a \ac{CoAP} configuration with $NSTART=10$. Nevertheless, the different trends in the completion time provided by MQTT and \ac{CoAP} are clearly visible: in the first part, the completion time in the MQTT-based scenario increases faster than in the \ac{CoAP}-based scenario because of the small \ac{TCP} congestion window. As the \ac{TCP} congestion window increases with larger payload lengths, the completion time reduces accordingly. Table \ref{table:goodput} shows the average aggregated goodput at the \ac{CoAP} client/proxy (see Figure \ref{fig:coap_scenario}), compared with the same at the MQTT broker (see Figure \ref{fig:mqtt_scenario}). Thanks to its lower overhead, the \ac{CoAP}/\ac{UDP} stack outperforms the MQTT/\ac{TCP} one. In fact, even if MQTT/\ac{TCP} is approximately equivalent to using \ac{CoAP}/\ac{UDP} with $NSTART=10$, the larger overhead reduces the achievable goodput w.r.t. the latter configuration, as shown in Table \ref{table:goodput}. Finally, some considerations are in order: in \ac{IoT}/\ac{M2M} scenarios, the use of \ac{CoAP} can provide some advantages over MQTT, because the length of the transmission window can be set at the application level, as well as the \ac{ARQ} algorithm in use, thus providing greater flexibility. Furthermore, the \ac{CoAP}-based protocol stack exhibits a lower overhead, which is desirable in such application scenarios.
\section{Conclusions} \label{sec:conclusion} This work focuses on a comparison between two of the most widely used \ac{IoT}/\ac{M2M} protocol stacks, based on the use of the \ac{CoAP} and MQTT protocols, implementing the request/response and the \ac{PUB/SUB} communication paradigms, respectively. The \ac{PUB/SUB} paradigm can bring large benefits in satellite-based architectures, because of the reduction of the delivery time thanks to the fact that fresh data are sent to registered subscribers as soon as they are produced. For this reason, in this work we investigated the use of the \ac{CoAP} protocol in conjunction with the so-called \textit{observer} pattern and the \textit{proxying} functionality, in order to exploit the advantages provided by the \ac{PUB/SUB} paradigm, which also provides a fairer comparison than relying on the default \ac{CoAP} implementation. A qualitative comparison is provided in this work, together with some preliminary numerical results, highlighting how the performance level provided by the use of \ac{CoAP} outperforms the one provided by MQTT on \ac{RA} satellite channels, and underlining the flexibility that easily tunable settings at the application layer provide w.r.t. lower layer settings. Future work will focus on \ac{M2M}/\ac{IoT} communications in the presence of higher traffic rates, where the performance provided by \ac{CoAP} may still need further investigation. \section*{Acknowledgments} This work has been partially supported by the Tuscany region in the framework of the SCIADRO project (FAR-FAS 2014), and by the SatNEx (Satellite Network of Experts) programme, phase IV. \IEEEtriggeratref{0} \balance \bibliographystyle{IEEEtran}
\section{Introduction} \label{secIN} Statistical-mechanical lattice-gas modeling provides a paradigm for analyzing site-specific single- and multicomponent chemisorption at electrode--electrolyte interfaces. The method is particularly useful to describe spatial ordering and fluctuations in the contact-adsorbed layer, which are strongly influenced by effective, lateral adsorbate--adsorbate interactions. The history of successful lattice-gas studies of phase transitions at solid--vacuum and solid--gas interfaces \cite{ZANG88} makes the early applications of the method to double-layer studies \cite{RIKV88A,RIKV88B,COLL89,BLUM90,HUCK90,BLUM91,% HUCK91,HUCK91B,ARMA91A,RIKV91A,RIKV91B,RIKV92} excellent examples of the transfer of a methodology from one research area to another. Here we present a condensed review of the basics of lattice-gas modeling of specific adsorption in the double-layer region, including a short discussion of poisoning and enhancement effects and illustrated by results from recent studies of specific systems. The outline of the remainder of the paper is as follows. In Sec.~\ref{secMOD} we briefly review the lattice-gas formulation and some of the methods that can be used to obtain specific numerical results for such experimentally measurable quantities as adsorption isotherms, voltammetric currents and charge densities, and images obtained by low-energy electron diffraction (LEED) and atomic-resolution microscopies, such as scanning tunneling microscopy (STM) and atomic force microscopy (AFM). In particular we concentrate on non-perturbative numerical methods, such as Monte Carlo (MC) simulations \cite{COLL89,RIKV93A,RIKV93B,GAMB93B,HIGH93,RIKV95,% JZHA95A,JZHA95B,BIND86,BIND92,BIND92B} and transfer-matrix (TM) calculations \cite{RIKV88A,RIKV88B,COLL89,RIKV91A,DOMB60,HUAN63,STAN71,NIGH90}, which are often combined with finite-size scaling methods \cite{BIND92,NIGH90,PRIV90}. 
In Sec.~\ref{secPE} we briefly consider, within the lattice-gas picture, such nonlinear effects in multicomponent adsorption as poisoning \cite{RIKV88A,RIKV88B,COLL89,ARMA91A,RIKV91B,RIKV92,HOMM89} and enhanced adsorption \cite{RIKV88B,ARMA91A,RIKV91A,RIKV91B,RIKV92,HOMM89,KRUK95}, both with semiquantitative applications to specific systems. Reference \cite{RIKV91B} contains more extensive discussions and comparisons of these phenomena, which are just as relevant at solid--vacuum and solid--gas interfaces as they are in electrochemistry, and which also can be extended to multilayer adsorption \cite{KRUK95}. In Sec.~\ref{secEX} we provide further quantitative illustrations in the form of applications to two specific cases of adsorption on single-crystal electrodes: the electrosorption of urea on Pt(100) from an acid electrolyte \cite{RIKV92,RIKV93A,RIKV93B,GAMB93B,HIGH93,RIKV95,GAMB94} and the underpotential deposition (UPD) of copper on Au(111) from a sulfate-containing electrolyte \cite{HUCK90,BLUM91,HUCK91,HUCK91B,JZHA95A,JZHA95B,BLUM93,BLUM94A,BLUM94B,LEGA95}. A final summary and conclusions are given in Sec.~\ref{secDIS}. \section{The Lattice-Gas Method} \label{secMOD} The lattice-gas models discussed here are defined through a generalization of the standard three-state lattice-gas Hamiltonian (energy function) used, {\it e.g.}, in Refs.~\cite{RIKV88A,RIKV88B,COLL89,RIKV91A,RIKV91B,RIKV92}, to give the energies of particular adsorbate configurations: \begin{eqnarray} {\cal H}_{\rm LG} &=& \sum_n \Bigg[ -\Phi_{\rm AA}^{(n)} \sum_{\langle ij \rangle}^{(n)} c_i^{\rm A} c_j^{\rm A} -\Phi_{\rm AB}^{(n)} \sum_{\langle ij \rangle}^{(n)} \left(c_i^{\rm A} c_j^{\rm B} + c_i^{\rm B} c_j^{\rm A} \right) -\Phi_{\rm BB}^{(n)} \sum_{\langle ij \rangle}^{(n)} c_i^{\rm B} c_j^{\rm B} \Bigg] \nonumber\\ & & + {\cal H}_3 - \bar{\mu}_{\rm A} \sum_i c_i^{\rm A} - \bar{\mu}_{\rm B} \sum_i c_i^{\rm B} \; .
\label{eq1} \end{eqnarray} Here $c_i^{\rm X}$$\in$\{0,1\} is the local occupation variable for species X (X=A or~B), and the third adsorption state (``empty'' or ``solvated'') corresponds to $c_i^{\rm A}$=$c_i^{\rm B}$=0. The sums $\sum_{\langle ij \rangle}^{(n)}$ and $\sum_i$ run over all $n$th-neighbor bonds and over all adsorption sites, respectively, $\Phi_{\rm XY}^{(n)}$ denotes the effective XY pair interaction through an $n$th-neighbor bond, and $\sum_n$ runs over the interaction ranges. The term ${\cal H}_3$ contains three-particle \cite{EINS91} and possibly multi-particle interactions. Both the interaction ranges and the absence or presence of multi-particle interactions depend on the specific system. The change in electrochemical potential when one X particle is removed from the bulk solution and adsorbed on the surface is $-\bar{\mu}_{\rm X}$. The sign convention is such that $\Phi_{\rm XY}^{(n)}$$>$0 denotes an effective attraction, and $\bar{\mu}_{\rm X}$$>$0 denotes a tendency for adsorption in the absence of lateral interactions. The main differences between models for particular systems are the binding-site geometries of the adsorbed species and the strengths of the effective, lateral interactions. (Straightforward modifications of Eq.~(\ref{eq1}) are necessary if the adsorption sites for the two species are different, as they are, {\it e.g.}, in the model describing urea on Pt(100).) Some previously studied models that can be defined by Eq.~(\ref{eq1}) or similar lattice-gas Hamiltonians, are the one for urea on Pt(100) \cite{RIKV92,RIKV93A,RIKV93B,GAMB93B,HIGH93,RIKV95,GAMB94}, the model developed by Huckaby and Blum for UPD of copper on Au(111) in the presence of sulfate \cite{HUCK90,BLUM91,HUCK91,HUCK91B,JZHA95A,JZHA95B,BLUM93,BLUM94A,BLUM94B,LEGA95}, and the standard three-state models with single-site bonding, used in previous studies of poisoning and enhancement in multicomponent adsorption \cite{RIKV88B,COLL89,RIKV91A,RIKV91B,RIKV92}.
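As a concrete illustration, the configuration energy of Eq.~(\ref{eq1}) is straightforward to evaluate numerically. The following Python sketch is a minimal example only: it truncates the sums to nearest-neighbor pairs on a periodic square lattice, omits the multi-particle term ${\cal H}_3$, and uses illustrative (not fitted) interaction values.

```python
import numpy as np

def lattice_gas_energy(cA, cB, phi_AA, phi_AB, phi_BB, mu_A, mu_B):
    """Configuration energy of the three-state lattice gas, truncated to
    nearest-neighbor pairs on a periodic square lattice (H_3 omitted).
    cA, cB: 0/1 occupation arrays with cA*cB == 0 everywhere.
    Sign convention: phi > 0 is attractive, mu > 0 favors adsorption."""
    E = 0.0
    for axis in (0, 1):                   # each n.n. bond counted once
        for X, Y, phi in ((cA, cA, phi_AA),
                          (cA, cB, phi_AB),
                          (cB, cA, phi_AB),
                          (cB, cB, phi_BB)):
            E -= phi * np.sum(X * np.roll(Y, 1, axis=axis))
    return E - mu_A * cA.sum() - mu_B * cB.sum()
```

For a single adsorbed A particle this reduces to $-\bar{\mu}_{\rm A}$, consistent with the sign convention stated above.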
As illustrations of the lattices and interactions that can be used, we show in Fig.~\ref{figLATa} the model used for urea adsorption on Pt(100) \cite{RIKV92,RIKV93A,RIKV93B,GAMB93B,HIGH93,RIKV95,GAMB94} and in Fig.~\ref{figLATb} one used for copper UPD on Au(111) \cite{JZHA95A,JZHA95B}. The thermodynamic density conjugate to the electrochemical potential $\bar{\mu}_{\rm X}$ in Eq.~(\ref{eq1}) is the surface coverage by species X, \begin{equation} \label{eq1b} \Theta_{\rm X} = N^{-1}\sum_i c_i^{\rm X} \;, \end{equation} where $N$ is the total number of surface unit cells in the system. To connect the electrochemical potentials to the bulk concentrations [X] and the electrode potential $E$, one has (in the weak-solution approximation): \begin{equation} \label{eq2} \bar{\mu}_{\rm X} = {\mu}_{\rm X}^0 + RT \ln {[\rm X] \over [\rm X]^0} - z_{\rm X}FE \;, \end{equation} where $R$ is the molar gas constant, $T$ is the absolute temperature, $F$ is Faraday's constant, and the effective electrovalence of X is $z_{\rm X}$. The quantities superscripted with a 0 are reference values which contain the local binding energies to the surface. They are generally temperature dependent due, among other effects, to rotational and vibrational modes. In the absence of diffusion and double-layer effects and in the limit that the potential sweep rate d$E$/d$t$$\rightarrow$0 \cite{BARD80}, the voltammetric current $i$ per unit cell of the surface is the time derivative of the charge transported across the interface during the adsorption/desorption process. With a sign convention such that oxidation/anodic currents are considered positive, this charge is \begin{equation} \label{eq2c} q = -e(z_{\rm A} \Theta_{\rm A} + z_{\rm B} \Theta_{\rm B}) \;, \end{equation} where $e$ is the elementary charge unit. 
Using partial differentiation involving the relation between the electrode potential and the electrochemical potentials, Eq.~(\ref{eq2}), as well as the Maxwell relation ${\partial \Theta_{\rm A}}/{\partial \bar{\mu}_{\rm B}}$=${\partial \Theta_{\rm B}}/{\partial \bar{\mu}_{\rm A}}$, one obtains $i$ in terms of the lattice-gas response functions ${\partial \Theta_{\rm X}}/{\partial \bar{\mu}_{\rm Y}}$: \begin{equation} i = e F \left\{ z_{\rm A}^2 \frac{\partial \Theta_{\rm A}}{\partial \bar{\mu}_{\rm A}} + 2 z_{\rm A} z_{\rm B} \frac{\partial \Theta_{\rm B}}{\partial \bar{\mu}_{\rm A}} + z_{\rm B}^2 \frac{\partial \Theta_{\rm B}}{\partial \bar{\mu}_{\rm B}} \right\} \frac{{\rm d}E}{{\rm d}t} \;. \label{eq2b} \end{equation} It must be emphasized that the interactions in Eq.~(\ref{eq1}) are {\it effective\/} interactions mediated through several channels. The mechanisms involved include interactions between the adsorbate and the substrate electronic structure \cite{EINS73,LAU78,EINS78,MUSC86,FEIB89}, adsorbate-induced local deformations of the substrate, interactions with the fluid electrolyte \cite{BLUM90,HUCK90,BLUM91,HUCK91,HUCK91B,RIKV91A,BLUM93,BLUM94A,BLUM94B,LEGA95}, and (screened) electrostatic interactions \cite{GLOS93A}. All these effects give rise to indirect, effective interactions between the adsorbate particles. In general one must assume that these quantities could be dependent on temperature and electrode potential. The spatial structure of the generalized pair interactions generally involves rather complicated dependences on both the magnitude and the direction of the vector joining the two adsorbate particles, as well as on the relative orientation of the particles.
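To make the connection between Eqs.~(\ref{eq2}) and~(\ref{eq2b}) concrete, the sketch below evaluates the CV current for a hypothetical non-interacting (Langmuir) single-species adsorbate, for which $\Theta = [1+\exp(-\bar{\mu}/RT)]^{-1}$; the response function is obtained by finite differences, just as it would be from tabulated MC or TM coverage data. Illustrative units with $e$=$F$=$RT$=1 and $z_{\rm B}$=0 are assumed.

```python
import numpy as np

def cv_current_langmuir(E, mu0=0.0, z=1.0, dEdt=1.0):
    """Voltammetric current i = e*F*z**2 (dTheta/dmu) dE/dt for a single,
    non-interacting (Langmuir) species; illustrative units e = F = RT = 1."""
    mu = mu0 - z * E                      # electrochemical potential vs. E
    theta = 1.0 / (1.0 + np.exp(-mu))    # Langmuir adsorption isotherm
    dtheta_dmu = np.gradient(theta, mu)  # response function dTheta/dmu
    return z ** 2 * dtheta_dmu * dEdt

E = np.linspace(-10.0, 10.0, 2001)
i = cv_current_langmuir(E)               # a single CV peak centered at E = 0
```

The single CV peak simply traces the response function $\partial\Theta/\partial\bar{\mu}$; lateral interactions sharpen, shift, or split such peaks, which is the effect exploited in the model fits discussed later.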
Empirical models for the electronic contribution to the effective, lateral pair interactions are well known \cite{EINS73,LAU78,EINS78,MUSC86,FEIB89} and are often of a decaying, oscillatory form proportional to $\cos(2k_{\rm F}r)/r^\alpha$, where $k_{\rm F}$ is the Fermi momentum and $\alpha$ may be between 2 and 5, depending on the substrate's electronic structure \cite{LAU78,EINS78}. However, changes in the effective interaction energies of only a few percent may cause very substantial changes in the finite-temperature phase diagram (see, {\it e.g.}, Refs.~\cite{RIKV93D,CCAG90,HILT92}). First-principles calculations of lateral adsorbate interactions to this level of accuracy are not yet feasible, even for the electronically mediated contributions \cite{FEIB89}. Here we advocate an approach to the problem of determining the effective adsorbate--adsorbate interaction energies, which provides a practical alternative to the ideal ``first-principles'' approach mentioned above. This strategy consists in fitting the thermodynamic and structural predictions of the lattice-gas model directly to experiments, taking into account as wide a spectrum of experimental information as possible. Obviously, this method also involves considerable difficulties. In particular, the number of parameters that can reasonably be included in a lattice-gas model is large, and there is no {\it a priori} guarantee that a minimal set of fitted interactions is unique. Nevertheless, the encouraging results of previous lattice-gas studies of electrochemical systems that have employed this strategy \cite{RIKV88A,RIKV88B,COLL89,BLUM90,HUCK90,BLUM91,HUCK91,HUCK91B,ARMA91A,RIKV91A,RIKV91B,RIKV92,RIKV93A,RIKV93B,GAMB93B,HIGH93,RIKV95,JZHA95A,JZHA95B,BLUM93,BLUM94A,BLUM94B,LEGA95} indicate that when proper attention is paid to including all available experimental information in a consistent fashion, the predictive power of this approach is considerable.
Furthermore, as effective interactions obtained by first-principles calculations become available in the future, the results obtained from lattice-gas models will provide crucial information for testing the consistency of such first-principles interactions with the experimentally observed thermodynamic and structural information. The steps in the modeling strategy outlined here can be summarized as follows.\\ 1. Use prior theoretical and experimental knowledge about the adsorbate lattice structure and lattice constant and the shapes and sizes of the adsorbate particles to formulate a specific lattice-gas model. Examples are shown in Figs.~\ref{figLATa} and~\ref{figLATb}.\\ 2. Use available experimental information about adsorbate coverages and adlayer structure to determine the adsorbate phases or at least narrow down the possible choices as much as possible.\\ 3. Perform a group-theoretical ground-state calculation \cite{DOMA78,DOMA79,SCHI81} to determine a minimal set of effective interactions compatible with the observed adsorbate phases. Relations between the effective interactions take the form of a set of inequalities \cite{RIKV88A,RIKV88B,COLL89,RIKV91A,RIKV91B}. A ground-state diagram (zero-temperature phase diagram) is obtained by pairwise equating the ground-state energies of the different phases. Examples of ground-state diagrams corresponding to the specific models in Figs.~\ref{figLATa} and~\ref{figLATb} are shown in Figs.~\ref{figGSa} and~\ref{figGSb}, respectively.\\ 4. At nonzero temperatures, the thermodynamic and structural properties of the lattice-gas model constructed through steps 1--3 can be studied by a number of analytical and numerical methods, depending on the quantities of interest and the complexity of the Hamiltonian. 
These methods include mean-field approximations \cite{ARMA91A,HOMM89} (although these can be unreliable for low-dimensional systems with short-range interactions \cite{RIKV93D}), Pad{\'e}-approximant methods based on liquid theory \cite{HUCK90,BLUM91,HUCK91,HUCK91B,BLUM93,BLUM94A,BLUM94B,LEGA95}, numerical TM calculations \cite{RIKV88A,RIKV88B,COLL89,RIKV91A,RIKV91B,RIKV92}, and MC simulations \cite{COLL89,RIKV93A,RIKV93B,GAMB93B,HIGH93,RIKV95,JZHA95A,JZHA95B}.\\ 5. Whatever method is used to calculate the finite-temperature properties of the model, these should be used to refine the effective interactions by comparison with the available experiments, or by obtaining additional experimental data for such comparison.\\ Steps 4 and 5 should be iterated until satisfactory agreement between model and experiment is achieved. One of the main reasons for the rapid expansion in theoretical surface science over the last three decades is the development of numerical methods that allow nonperturbative calculations of thermal and structural properties of statistical-mechanical systems. Two such methods, which are particularly well suited to the study of lattice-gas models, are Monte Carlo (MC) simulation \cite{COLL89,RIKV93A,RIKV93B,GAMB93B,HIGH93,RIKV95,JZHA95A,JZHA95B,BIND86,BIND92,BIND92B} and numerical transfer-matrix (TM) calculations \cite{RIKV88A,RIKV88B,COLL89,RIKV91A,DOMB60,HUAN63,STAN71,NIGH90}. In combination with finite-size scaling analysis of phase-transition phenomena \cite{BIND92,NIGH90,PRIV90}, these methods have contributed significantly to the theoretical understanding of fluctuations and ordering at surfaces and interfaces. The reason for our emphasis on non-perturbative numerical methods is that they are much more accurate for two-dimensional systems than even quite sophisticated mean-field approximations \cite{RIKV93D}, yet they are quite easy to program.
Moreover, with modern computer technology their implementation is well within the resources of most researchers. At present, a large number of monographs and textbooks exist that describe MC methods in great detail \cite{BIND86,BIND92,BIND92B}. We therefore limit ourselves to pointing out that these methods can produce thermodynamic and structural information for a variety of systems, with a very modest amount of programming and with computational resource needs that are readily met by modern workstations. For example, all the MC results presented here were obtained on workstations. For studies of real systems, MC models have the advantage that programs are relatively easy to modify to accommodate changes in lattice structure and/or interaction geometries and ranges. Despite their power and beauty, TM methods are much less known outside the statistical-mechanics community. However, good reviews are available \cite{DOMB60,NIGH90}, and simple textbook expositions for the one-dimensional case are quite illustrative \cite{HUAN63,STAN71}. An abundance of details is scattered throughout the technical literature and can be found, together with further references, in {\it e.g.}\ Refs.~\cite{RIKV88A,RIKV88B,COLL89,RIKV91A,RIKV84,AUKR90}. Briefly, the method allows the numerical calculation of free energies (an advantage over MC, which does not easily produce entropies), thermodynamic densities, and their associated response functions from the eigenvalues and eigenvectors of a matrix of Boltzmann factors, called the transfer matrix. In addition to the ability to easily calculate free energies, the method has the further advantage over MC that the results are obtained without statistical errors. The main disadvantages, relative to MC, are the limited system sizes and interaction ranges that can be attained. The first problem can relatively easily be overcome with finite-size scaling.
The second, however, severely restricts the applicability of TM methods to realistic electrochemical systems. \section{Poisoning and Enhancement Effects} \label{secPE} Depending on the relative interaction ranges and strengths, the lattice-gas models discussed in Sec.~\ref{secMOD} allow many topologically different adsorbate phase diagrams. The specific coadsorption phenomena, such as poisoning \cite{RIKV88A,RIKV88B,COLL89,ARMA91A,RIKV91B,RIKV92,HOMM89} or enhanced adsorption \cite{RIKV88B,ARMA91A,RIKV91A,RIKV91B,RIKV92,HOMM89,KRUK95}, which occur for any particular set of interactions, depend crucially on the detailed topology of the phase diagram \cite{RIKV88A,RIKV88B,COLL89,RIKV91A,RIKV91B,RIKV92}. The terms ``poisoning'' and ``enhancement'' can be defined as follows.\\ {\it Poisoning of {\rm A} by {\rm B}:} When $\bar{\mu}_{\rm B}$ is increased at constant $\bar{\mu}_{\rm A}$, the total coverage, $\Theta_{\rm A} + \Theta_{\rm B}$, goes through a minimum as $\Theta_{\rm A}$ decreases sharply with only a small corresponding increase in $\Theta_{\rm B}$.\\ {\it Enhancement of {\rm A} by {\rm B}:} $\Theta_{\rm A}$ goes through a maximum as $\bar{\mu}_{\rm B}$ is increased at constant $\bar{\mu}_{\rm A}$. For large $\bar{\mu}_{\rm B}$, the enhancement gives way to substitutional desorption, each adsorbed B particle replacing one or more A particles. {For} both poisoning and enhancement, a measure of the modification strength is the differential coadsorption ratio, ${\rm d}\Theta_{\rm A} / {\rm d}\Theta_{\rm B}$. The modification is characterized as {\it strong} if $|{\rm d}\Theta_{\rm A} / {\rm d}\Theta_{\rm B}| > Z$, the lattice coordination number. In Refs.~\cite{COLL89,RIKV91A,RIKV91B} it was discussed in detail how the modification strength is related to the interaction constants for specific, triangular lattice-gas models through the shape of the adsorbate phase diagram. 
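Given isotherm data from MC or TM calculations, the differential coadsorption ratio is straightforward to extract numerically. The sketch below is schematic: the coverage arrays are hypothetical placeholders tabulated along a scan of $\bar{\mu}_{\rm B}$ at fixed $\bar{\mu}_{\rm A}$, and the strength criterion compares $|{\rm d}\Theta_{\rm A}/{\rm d}\Theta_{\rm B}|$ with the coordination number $Z$ as defined above.

```python
import numpy as np

def coadsorption_ratio(theta_A, theta_B, mu_B):
    """dTheta_A/dTheta_B along a scan of mu_B at fixed mu_A, obtained by
    the chain rule from finite-difference derivatives with respect to mu_B."""
    return np.gradient(theta_A, mu_B) / np.gradient(theta_B, mu_B)

def is_strong_modification(ratio, Z):
    """Strong poisoning or enhancement: |dTheta_A/dTheta_B| > Z."""
    return np.abs(ratio) > Z
```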
From the standpoint of statistical mechanics, strong modification results from fluctuations typical of the region near a line of critical end points, which joins a surface of discontinuous phase transitions to one of continuous transitions. More intuitively, these fluctuations can be described as follows. For poisoning, they correspond to an almost bare surface, from which the A particles are repelled by a very small coverage of repulsively interacting B particles \cite{COLL89}. In the case of enhanced adsorption, the corresponding picture is that of a surface almost fully covered by a monolayer of A particles, which is ``pinned down'' by a low concentration of attractively interacting B particles \cite{RIKV91A}. These considerations lead to inequalities that must be obeyed by the interaction constants, in order for the system to exhibit either poisoning or enhancement of various strengths. The inequalities are illustrated in Fig.~1 of Ref.~\cite{RIKV91B} for triangular models with nearest-neighbor interactions. \subsection{An Example of Poisoning} \label{secPOIS} An example of poisoning is provided by the model for the coadsorption of sulfur and hydrogen on Pt(111) in an acidic aqueous environment, studied by Rikvold and coworkers \cite{RIKV88A,RIKV88B,COLL89}. In this case, the effective nearest-neighbor lateral interactions were obtained from experimental thermodynamic and scattering data. The numerical adsorption isotherms (obtained by both MC and TM methods) gave maximum desorption ratios d$\Theta_{\rm H}/$d$\Theta_{\rm S} \approx -7 \pm 1$, in favorable agreement with experiments \cite{PROT86}. It was argued that the general shape of the phase diagram for this model is characteristic of strong poisoning behavior \cite{COLL89}.
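For readers unfamiliar with the TM machinery behind such isotherms, the simplest textbook case, a one-dimensional single-species lattice gas with a nearest-neighbor interaction $\Phi$, already shows the essential steps: build the 2$\times$2 matrix of Boltzmann factors and read the coverage off the dominant eigenvector. The sketch below is purely illustrative (units with $k_{\rm B}$=1); for $\Phi$=0 it reproduces the non-interacting (Langmuir) isotherm.

```python
import numpy as np

def coverage_1d(phi, mu, T=1.0):
    """Coverage of the 1D nearest-neighbor lattice gas from its 2x2
    transfer matrix; phi > 0 is attractive, units with k_B = 1."""
    b = 1.0 / T
    Tm = np.array([[np.exp(b * (phi * c1 * c2 + 0.5 * mu * (c1 + c2)))
                    for c2 in (0, 1)] for c1 in (0, 1)])
    w, v = np.linalg.eigh(Tm)      # symmetric matrix: use eigh
    top = v[:, np.argmax(w)]       # dominant (normalized) eigenvector
    return top[1] ** 2             # <c_i> = |phi_1(c=1)|^2
```

The realistic calculations cited above proceed in exactly this way, but with much larger transfer matrices built from entire lattice columns, which is what limits the attainable system sizes and interaction ranges.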
\subsection{An Example of Enhancement} \label{sec_EA} An application of lattice-gas models to study enhanced adsorption was given by Rikvold and Deakin \cite{RIKV91A}, who analyzed experimental data for the electrosorption of organics on metal electrodes: naphthalene on copper \cite{BOCK64B} and n-decylamine on nickel \cite{BOCK64A}. They followed a suggestion by Damaskin {\it et al.\/} \cite{DAMA71} that the potential dependence of adsorption of organics on metals can be attributed to the influence of coadsorbed hydrogen. Although the experimental results concerned rough, polycrystalline electrodes, a simple nearest-neighbor model on a triangular lattice was used, aiming merely for semiquantitative agreement. The effective electrovalences were taken as $z_{\rm H}$=+1 and $z_{\rm organic}$=0, and the three effective interaction constants, $\Phi_{\rm XY}$, together with $\mu_{\rm H}^0$ and $\mu_{\rm organic}^0$, were determined by nonlinear least-squares fits of numerical coadsorption isotherms obtained from a TM calculation to the experimental data. The experimental and fitted numerical adsorption isotherms for naphthalene on copper are shown in Fig.~\ref{figNAPH}. The maxima are due to the formation of a mixed naphthalene/hydrogen adsorbed phase in the potential region between $-$1000 and $-$800 mV versus the normal hydrogen electrode (NHE). The fitted lattice-gas interactions are consistent with independent estimates \cite{BOCK64B,BOCK64A}, as discussed in detail in Ref.~\cite{RIKV91A}. \section{Adsorption on Single-Crystal Surfaces} \label{secEX} A major source of uncertainty in the applications of simple lattice-gas models to the experimental results discussed in the previous section, is the poor characterization of the electrode surfaces. To remedy this situation, Wieckowski and Rikvold with collaborators have undertaken a series of studies of the electrosorption of small molecules and ions on well-characterized single-crystal surfaces. 
A characteristic aspect of these systems is the high specificity of the adsorption phenomena with respect to the structures of the substrate lattice and the main adsorbate. A good geometric fit promotes the formation of ordered adsorbate phases commensurate with the substrate, which can be observed both by {\it in situ}\ atomic-scale spectroscopies and by {\it ex situ}\ scattering techniques. The detailed experimental results that can be extracted from such systems merit the construction of more complicated models with longer-ranged and multi-particle interactions. By way of examples we discuss two specific single-crystal adsorption systems: the electrosorption of urea on Pt(100) from an acid electrolyte \cite{RIKV92,RIKV93A,RIKV93B,GAMB93B,HIGH93,RIKV95,GAMB94} and the UPD of copper on Au(111) from a sulfate-containing electrolyte \cite{HUCK90,BLUM91,HUCK91,HUCK91B,JZHA95A,JZHA95B,BLUM93,BLUM94A,BLUM94B,LEGA95}. Both systems exhibit a dramatic peak sharpening in the cyclic voltammogram (CV), from several hundred~mV to on the order of 10$\,$mV when a small concentration of the adsorbate species (urea, or a mixture of sulfate and copper ions, respectively) is added to the supporting electrolyte. This effect is also exhibited by other systems, such as sulfuric acid on Rh(111) \cite{RIKV95,GAMB94,JZHA95C}. Whereas the urea/Pt(100) system develops only a single, sharp CV peak \cite{RUBE91}, in the case of copper UPD, two peaks, approximately 100~mV apart, are exhibited \cite{KOLB87,ZEI87}. We associate these effects with phase transitions in the layer of contact-adsorbed particles. These transitions involve the replacement of a monolayer of adsorbed hydrogen or copper on the negative-potential side of the CV peaks by ordered submonolayers at more positive potentials. The observed voltammetric changes are much weaker or absent when the same substances are adsorbed onto other crystal planes of the same metals \cite{GAMB94,RUBE91,SCHU76}.
The high specificity with respect to the adsorbent surface structure indicates that the effects depend crucially on the geometric fit between (at least one of) the adsorbate species and the surface. This observation was used in developing the specific lattice-gas models. \subsection{Urea on Pt(100)} \label{secUPT} In addition to the surface-specific narrowing of the CV peak upon the addition of urea to the supporting electrolyte, the experimental observations to which the model was fitted are as follows. (For details, see Refs.~\cite{GAMB93B,RIKV95}.)\\ 1. The urea coverage $\Theta_{\rm U}$, measured {\it in situ\/} by a radiochemical method (RCM), changes over a potential range of approximately 20$\,$mV around the CV peak position from near zero on the negative side to approximately 1/4 monolayers (ML) on the positive side.\\ 2. {\it Ex situ\/} Auger electron spectroscopy (AES) studies are consistent with the RCM results.\\ 3. {\it Ex situ\/} LEED studies at potentials on the positive side of the CV peak show an ordered c(2$\times$4) adsorbate structure, consistent with an ideal coverage of 1/4$\,$ML. Upon emersion on the negative side of the CV peak, only an unreconstructed (1$\times$1) surface is found.\\ The lattice-gas model developed to account for these observations was based on the assumption that urea [CO(NH$_2)_2$] coordinates the platinum through its nitrogen atoms (or NH$_2$ groups), with the C=O group pointing away from the surface. Since the unstrained N-N distance in urea matches the lattice constant of the square Pt(100) surface quite well (2.33~{\AA} \cite{ITAI77} versus 2.77~{\AA} \cite{KITT86}), it was assumed that urea occupies two adsorption sites on the square Pt(100) lattice. 
Integration of the CV profiles indicates that the hydrogen saturation coverage in the negative-potential region corresponds to one elementary charge per Pt(100) unit cell, and that most of the surface hydrogen is desorbed in the same potential range where urea becomes adsorbed. Therefore, it was assumed \cite{RIKV92} that hydrogen adsorbs in the same on-top positions as the urea nitrogen atoms. This assumption was recently strengthened by visible-infrared sum generation spectroscopy observations \cite{TADJ94,TADJ95}. The resulting model \cite{RIKV92} is a dimer-monomer model in which hydrogen is adsorbed at the nodes and urea on the bonds of a square lattice representing the Pt(100) surface. Simultaneous occupation of bonds that share a node by two or more urea molecules is excluded, as is occupation by hydrogen of a node adjacent to a bond occupied by urea. In order to stabilize the observed c(2$\times$4) phase, effective interactions were included through eighth-nearest neighbors \cite{RIKV93A,RIKV93B,GAMB93B,HIGH93,RIKV95,GAMB94}. The configuration energies are given by Eq.~(\ref{eq1}) with A=U (urea) and B=H (hydrogen). The model is illustrated in Fig.~\ref{figLATa} and its ground-state diagram in Fig.~\protect\ref{figGSa}. The effective lattice-gas interactions were determined from ground-state calculations followed by numerical MC simulations. The numerical simulations, which used systems with up to 32$\times$32 square-lattice unit cells, were performed with a heat-bath MC algorithm \cite{BIND86,BIND92,BIND92B} with updates of clusters consisting of five nearest-neighbor nodes arranged in a cross, plus their four connecting bonds. After symmetry reductions these clusters have 64 different configurations, and the corresponding code is rather slow in terms of machine time per MC step. 
However, the additional transitions allowed by these clusters, relative to minimal clusters consisting of two nodes and their connecting bond, include ``diffusion-like'' moves in which the urea molecules can go from one bond to another and the hydrogen atoms from one node to another, without changing the local coverages within the cluster. These moves significantly reduce the free-energy barriers that must be surmounted in order to locally minimize the adsorbate free energy, and they dramatically reduce the number of MC steps per site (MCSS) necessary for the system to reach thermodynamic equilibrium. For this system, simulated ``LEED patterns'' were obtained as the squared Fourier transform of the adsorbed urea configurations. These were obtained by the Fast Fourier Transform algorithm and averaged in the same way as the thermodynamic quantities \cite{RIKV95}. Since the number of model parameters is large, the numerical calculations are time-consuming, and the experimental data concern a number of different quantities, parameter estimation by a formal optimization procedure was not a practical alternative for this study. (This contrasts with the simpler situations discussed in Refs.~\cite{RIKV91A,RIKV93D,HILT92}, where a small number of lattice-gas parameters could be determined by a formal least-squares procedure to fit extensive experimental results for a single thermodynamic quantity.) To make maximum use of all available information, the model parameters were therefore varied ``by hand'', taking into consideration both the various experimental results and available chemical and physical background information, until acceptable agreement was obtained with room-temperature experimental results. In particular, agreement was sought between the shapes of the simulated and experimental CV profiles, as shown in Fig.~\ref{figCVa}. The resulting interactions are given in the caption of Fig.~\ref{figLATa}.
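For readers unfamiliar with the heat-bath algorithm, the sketch below shows its simplest variant for the three-state model of Eq.~(\ref{eq1}): single-site updates with nearest-neighbor pair interactions only, in which the new site state is drawn directly from the local Boltzmann distribution. This is a minimal illustration with placeholder parameters; the urea/Pt(100) study itself used the larger five-node cluster moves described above and interactions through eighth-nearest neighbors.

```python
import numpy as np

def heat_bath_sweep(state, phi_AA, phi_AB, phi_BB, mu_A, mu_B, T=1.0, rng=None):
    """One sweep of single-site heat-bath updates for the three-state
    lattice gas (nearest-neighbor pairs only) on a periodic square lattice.
    state: integer array with 0 = empty, 1 = A, 2 = B; units with k_B = 1."""
    if rng is None:
        rng = np.random.default_rng(0)
    L = state.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nA = nB = 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            s = state[(i + di) % L, (j + dj) % L]
            nA += s == 1
            nB += s == 2
        # local energies of the three candidate states of site (i, j);
        # the new state is drawn from the local Boltzmann distribution
        E = np.array([0.0,
                      -(phi_AA * nA + phi_AB * nB) - mu_A,
                      -(phi_AB * nA + phi_BB * nB) - mu_B])
        w = np.exp(-E / T)
        state[i, j] = rng.choice(3, p=w / w.sum())
    return state
```

Because the new state is sampled from the full conditional distribution, the heat-bath update satisfies detailed balance without a separate accept/reject step.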
\subsection{Copper UPD on Au(111)} \label{secUPD} Underpotential deposition (UPD) is a process whereby a monolayer or less of one metal is electrochemically adsorbed onto another in a range of electrode potentials more positive than those where bulk deposition would occur \cite{BARD80}. The UPD of copper on Au(111) electrodes in sulfate-containing electrolytes has been intensively studied, both experimentally (see discussion of the literature in Ref.~\cite{JZHA95B}) and theoretically \cite{HUCK90,BLUM91,HUCK91,HUCK91B,JZHA95A,JZHA95B,BLUM93,BLUM94A,BLUM94B,LEGA95}. The most striking feature observed in CV experiments with Au(111) electrodes in sulfate-containing electrolyte is the appearance of two peaks, separated by about 100--150~mV, upon the addition of Cu$^{2+}$ ions \cite{KOLB87,ZEI87}. Typical CV profiles are shown in Fig.~\ref{figCVb}, together with preliminary simulation results \cite{JZHA95A,JZHA95B}. In the potential range between the peaks, the adsorbate layer is believed to have a ($\sqrt3$$\times$$\sqrt3$) structure consisting of 2/3$\,$ML copper and 1/3$\,$ML sulfate \cite{HUCK90,BLUM91,HUCK91,HUCK91B,JZHA95B,BLUM93,BLUM94A,BLUM94B,LEGA95,ZSHI94B,ZSHI94C,ZSHI95}. The lattice-gas model for UPD of copper on Au(111) in sulfate-containing electrolyte, used in Refs.~\cite{JZHA95A,JZHA95B}, is a refinement of the model introduced and studied by Huckaby and Blum \cite{HUCK90,BLUM91,HUCK91,HUCK91B,BLUM93,BLUM94A,BLUM94B,LEGA95}. It is based on the assumption that the sulfate coordinates the triangular Au(111) surface through three of its oxygen atoms, with the fourth S-O bond pointing away from the surface, as is also the most likely adsorption geometry on Rh(111) \cite{RIKV95}. This adsorption geometry gives the sulfate a ``footprint'' in the shape of an approximately equilateral triangle with an O-O distance of 2.4 \AA \, \cite{PASC65}, reasonably matching the lattice constant for the triangular Au(111) unit cell, 2.88 \AA \, \cite{KITT86}.
The copper is assumed to compete for the same adsorption sites as the sulfate. The configuration energies are given by Eq.~(\ref{eq1}) with A=S (sulfate) and B=C (copper). The model is illustrated in Fig.~\ref{figLATb} and its ground-state diagram in Fig.~\ref{figGSb}. It has been experimentally observed \cite{ZSHI94C,ZSHI95} that sulfate remains adsorbed on top of the copper monolayer in the negative-potential region. In principle, this system should therefore be described by a multilayer lattice-gas model \cite{KRUK95}. In Refs.~\cite{JZHA95A,JZHA95B} this complication was avoided by using the following, simple mean-field estimate for the sulfate coverage in this second layer: \begin{equation} \label{eqTheta2} \Theta_{\rm S}^{(2)} = \alpha\Theta_{\rm C}(1/3-\Theta_{\rm S}) \;, \end{equation} which allows the difference between the first-layer coverage $\Theta_{\rm S}$ and its saturation value of 1/3 to be transferred to the top of the copper layer. The factor $\alpha$ is a phenomenological constant. Since the transfer of sulfate between the gold and copper surfaces does not involve an oxidation/reduction process, the total charge transport per unit cell during the adsorption/desorption process becomes \begin{equation} \label{eqQ2} q = -e [ z_{\rm S} ( \Theta_{\rm S} + \Theta_{\rm S}^{(2)} ) + z_{\rm C} \Theta_{\rm C} ] \;, \end{equation} giving a CV current density which reduces to that of Eq.~(\ref{eq2b}) for $\alpha$=0: \begin{eqnarray} i & = & eF\left\{ z_{\rm S}^2(1-\alpha\Theta_{\rm C}) \left.\frac{\partial\Theta_{\rm S}} {\partial\bar{\mu}_{\rm S}}\right|_{\bar{\mu}_{\rm C}} +z_{\rm C}(z_{\rm C}-2\alpha z_{\rm S}\Theta_{\rm S}/3) \left.\frac{\partial \Theta_{\rm C}} {\partial \bar{\mu}_{\rm C}} \right|_{\bar{\mu}_{\rm S}} \right. \nonumber \\ & & + \left.
z_{\rm S}(2z_{\rm C} + \alpha z_{\rm S}(1/3-\Theta_{\rm S})-\alpha z_{\rm C} \Theta_{\rm C}) \left.\frac{\partial\Theta_{\rm S}} {\partial \bar{\mu}_{\rm C}}\right|_{\bar{\mu}_{\rm S}} \right\}\frac{{\rm d}E}{{\rm d}t} \;. \label{eqIalpha} \end{eqnarray} The effective electrovalences, $z_{\rm S}$ and $z_{\rm C}$, must be determined from experiments \cite{ZSHI94B,ZSHI94C,ZSHI95}. In Refs.~\cite{JZHA95A,JZHA95B} the approximate values, $z_{\rm C}$=+2 and $z_{\rm S}$=$-$2, were used. The ground-state diagram corresponding to the interactions used in Refs.~\cite{JZHA95A,JZHA95B} is shown in Fig.~\ref{figGSb}. For large negative $\bar{\mu}_{\rm S}$, only copper adsorption is possible, and the phase diagram is that of the lattice-gas model corresponding to the triangular-lattice antiferromagnet with next-nearest neighbor ferromagnetic interactions \cite{LAND83}. Similarly, in the limit of large positive $\bar{\mu}_{\rm S}$ and large negative $\bar{\mu}_{\rm C}$, the zero-temperature phase is the $(\sqrt{3}\!\times\!\sqrt{3})_0^{1/3}$ sulfate phase characteristic of the hard-hexagon model \cite{BLUM91,HUCK91,HUCK91B,BLUM93,BLUM94A,BLUM94B,LEGA95,BAXT82}. The phase diagram for intermediate electrochemical potentials is quite complicated. To obtain adsorption isotherms and CV currents at room temperature, MC simulations were performed on a 30$\times$30 triangular lattice, using a heat-bath algorithm \cite{BIND86,BIND92,BIND92B} with updates at randomly chosen sites. In order to avoid getting stuck in metastable configurations (a problem which is exacerbated by the nearest-neighbor sulfate-sulfate exclusion), clusters consisting of two nearest-neighbor sites were updated simultaneously. The potential scan path corresponding to the CV shown in Fig.~\ref{figCVb} is indicated by the dotted line labeled ``1'' in the ground-state diagram, Fig.~\ref{figGSb}. With the aid of this diagram, it is easy to analyse the simulation results.
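The mean-field second-layer correction, Eq.~(\ref{eqTheta2}), and the corresponding adsorption charge, Eq.~(\ref{eqQ2}), are simple algebraic relations; the sketch below evaluates them with the electrovalences $z_{\rm C}$=+2 and $z_{\rm S}$=$-$2 quoted above, and with $\alpha$=0.6 as an illustrative default (the value used in the simulations discussed below).

```python
def second_layer_sulfate(theta_C, theta_S, alpha=0.6):
    """Mean-field estimate of the sulfate coverage on top of the copper
    layer: Theta_S2 = alpha * Theta_C * (1/3 - Theta_S)."""
    return alpha * theta_C * (1.0 / 3.0 - theta_S)

def charge_per_unit_cell(theta_S, theta_C, z_S=-2.0, z_C=2.0, alpha=0.6):
    """Adsorption charge q = -e [z_S (Theta_S + Theta_S2) + z_C Theta_C],
    returned in units of the elementary charge e."""
    th2 = second_layer_sulfate(theta_C, theta_S, alpha)
    return -(z_S * (theta_S + th2) + z_C * theta_C)
```

In the mixed $(\sqrt{3}\!\times\!\sqrt{3})_{2/3}^{1/3}$ phase ($\Theta_{\rm S}$=1/3, $\Theta_{\rm C}$=2/3) the second-layer term vanishes, while a full copper monolayer with $\alpha$=0.6 carries $\Theta_{\rm S}^{(2)}$=0.2.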
As was pointed out above, there is experimental evidence that at the negative end of the UPD potential range, sulfate adsorbs in a neutral submonolayer on top of the monolayer of copper, with a coverage $\Theta_{\rm S}^{(2)}\!\approx\! 0.2$ \cite{ZSHI94C,ZSHI95}. This corresponds to $\alpha \! = \! 0.6$ in Eq.~(\ref{eqTheta2}), which was used to obtain the simulated CV current shown in Fig.~\ref{figCVb}. Starting from the negative end, we scan in the direction of positive electrode potential (upper left to lower right in Fig.~\ref{figGSb}). Near the CV peak at approximately 70 mV, the sulfate begins to compete with copper for the gold surface sites, resulting in a third of the copper desorbing into the bulk and being replaced by sulfate. The potential range over which the replacement takes place corresponds to a peak width of about 30 mV. Due to the strong effective attraction between the copper and sulfate adparticles, a mixed $(\sqrt{3}\!\times\!\sqrt{3})_{2/3}^{1/3}$ phase is formed, which extends through the entire potential region between the two CV peaks. As the CV peak at approximately 170 mV is reached, most of the copper is desorbed within a potential range of about 20 mV. As it is thus deprived of the stabilizing influence of the coadsorbed copper, the sulfate is partly desorbed, reducing $\Theta_{\rm S}$ from 1/3 to approximately 0.16. This system provides another illustrative example of the enhanced adsorption phenomenon described in Sec.~\ref{sec_EA}. The $(\sqrt{3}\!\times\!\sqrt{7})_0^{1/5}$ phase found in the potential region near 200 mV is consistent with experimental observations on copper-free systems \cite{MAGN90,EDEN94}. Eventually, more positive electrode potentials cause the sulfate to form its saturated $(\sqrt{3}\!\times\!\sqrt{3})_0^{1/3}$ hard-hexagon phase. However, in the model, this transition occurs at a somewhat more negative potential than is observed experimentally \cite{ZSHI94B,ZSHI94C,ZSHI95}.
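The coverages quoted above can be plugged directly into Eqs.~(\ref{eqTheta2}) and (\ref{eqQ2}). The following Python sketch evaluates the second-layer sulfate coverage and the charge transported per unit cell, with charge expressed in units of the elementary charge $e$; the coverage values used are those of the mixed $(\sqrt{3}\!\times\!\sqrt{3})_{2/3}^{1/3}$ phase and $\alpha=0.6$ from the text.

```python
# Evaluate Eq. (eqTheta2), the mean-field second-layer sulfate coverage,
# and Eq. (eqQ2), the charge transported per unit cell (in units of e).

def theta_S2(theta_S, theta_C, alpha):
    """Mean-field sulfate coverage on top of the copper layer."""
    return alpha * theta_C * (1.0 / 3.0 - theta_S)

def charge_per_cell(theta_S, theta_C, alpha, z_S=-2.0, z_C=2.0):
    """q/e = -[z_S (Theta_S + Theta_S^(2)) + z_C Theta_C]."""
    return -(z_S * (theta_S + theta_S2(theta_S, theta_C, alpha)) + z_C * theta_C)

# Mixed (sqrt3 x sqrt3) phase: Theta_C = 2/3, Theta_S = 1/3, alpha = 0.6.
# The first layer is saturated, so the second-layer term vanishes here.
q = charge_per_cell(1.0 / 3.0, 2.0 / 3.0, 0.6)
print(q)  # -2/3
```

Note that in the saturated first layer ($\Theta_{\rm S}=1/3$) the second-layer contribution is zero, so the $\alpha$-dependent terms only matter away from saturation.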
The scenario described here corresponds closely to that proposed by Huckaby and Blum \cite{HUCK90,BLUM91,HUCK91,HUCK91B,BLUM93,BLUM94A,BLUM94B,LEGA95}. The agreement between the experimental and theoretical results is reasonable, except for large positive $E$, where the model predicts less copper and more sulfate on the surface than indicated by the experiments. The heights of the CV peaks predicted by the model are larger than what is observed in experiments, a discrepancy which is probably due to defects on the electrodes used in the experiments. \section{Conclusion} \label{secDIS} We have briefly reviewed the application of statistical-mechanical lattice-gas modeling to specific adsorption in the double-layer region. The method is well suited to describe ordering and fluctuation effects in the contact-adsorbed layer, which are strongly influenced by effective, lateral interactions. Phenomena that can be described include poisoning and enhancement effects, and concrete examples were given for several systems of experimental interest. The effective interactions arise from a number of different sources, including mediation through the substrate electrons, through phonons, and through the fluid near the surface, and their calculation from first principles is not yet feasible in general. The alternative route advocated here provides a microscopic picture of the adsorbate structure, as well as a procedure for estimating approximate effective interaction energies from experimentally observed structural and thermodynamic quantities. The resulting models have considerable predictive power regarding the dependences of observed thermodynamic quantities on the electrochemical potential and the bulk solute concentrations, as well as on the geometric structure of the substrate and the adsorbates. Since the methods discussed are simple to program and not particularly computationally intensive, they are well suited for experimental data analysis.
\vspace{0.25in} \noindent {\bf Acknowledgement} \\*[0.15in] We acknowledge useful discussions with L.~Blum. Helpful comments on the manuscript were provided by M.~A.\ Novotny, R.~A.\ Ramos, H.~L.\ Richards, and S.~W.\ Sides. This research was supported by the Florida State University (FSU) Supercomputer Computations Research Institute under US Department of Energy Contract No.\ DE-FC05-85ER25000, by the FSU Center for Materials Science and Technology, and by the University of Illinois Frederick Seitz Materials Research Laboratory under US Department of Energy Contract No.\ DE-AC02-76ER01198. Work at FSU was also supported by US National Science Foundation Grant No.\ DMR-9315969.
\section{Introduction} \label{sec:int} An astrophysical black hole can be fully characterized by two parameters, mass ($M$) and spin ($a_*$), and hence can be described by the Kerr metric \citep{ker1963}. Once these two parameters are known, we have a complete description of the system. Compared to the mass, the spin is relatively harder to constrain, mainly because it manifests only in the most proximate, strong-gravity region. The spin is commonly defined in terms of the dimensionless parameter $a_* = Jc/GM^2$ ($-1 \le a_* \le 1$), where $J$ is the angular momentum of the black hole, $c$ is the speed of light, and $G$ is the gravitational constant. For the spin measurement, there are currently two leading approaches: the continuum-fitting method \citep{zha1997, Li2005} and the X-ray reflection fitting method \citep{iwa1997, mil2002}. Both approaches are based on the fundamental assumption that the inner edge of the accretion disc extends down to the innermost stable circular orbit ($R_\mathrm{in} = R_\mathrm{ISCO}$), whose radius is a monotonic function of the spin parameter $a_*$: $a_{*}=-1$, 0 and 1 correspond to $R_\mathrm{ISCO}=$ 9, 6 and 1 $R_\mathrm{g}$, respectively, where $R_\mathrm{g}$ is the gravitational radius, defined as $R_\mathrm{g} = GM/c^2$. Given the inner radius of the accretion disc, one can readily obtain the spin. The continuum-fitting method measures the inner radius by modeling the thermal continuum of the accretion disc with {\tt kerrbb2}, a combination of {\tt kerrbb} and {\tt bhspec}. {\tt kerrbb} has three fit parameters, $a_*$, the hardening factor $f$ and the mass accretion rate $\dot{M}$, only two of which can be determined at one time. A look-up table between $f$ and the scaled luminosity is generated with {\tt bhspec}; {\tt kerrbb} and the table then allow one to fit directly for $a_*$ and $\dot{M}$ (see Section 4.2 of \citealt{mcc2006}).
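The monotonic mapping between $a_*$ and $R_\mathrm{ISCO}$ quoted above follows from the standard Bardeen, Press \& Teukolsky (1972) expressions and can be evaluated in a few lines of Python (a sketch; $R_\mathrm{ISCO}$ is in units of $R_\mathrm{g}$, and negative $a_*$ denotes retrograde orbits):

```python
import math

# ISCO radius in units of R_g = GM/c^2, from the standard
# Bardeen, Press & Teukolsky (1972) expressions.
# a_star in [-1, 1]; negative values denote retrograde orbits.

def r_isco(a_star):
    z1 = 1.0 + (1.0 - a_star**2) ** (1.0 / 3.0) * (
        (1.0 + a_star) ** (1.0 / 3.0) + (1.0 - a_star) ** (1.0 / 3.0)
    )
    z2 = math.sqrt(3.0 * a_star**2 + z1**2)
    sign = 1.0 if a_star >= 0 else -1.0
    return 3.0 + z2 - sign * math.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

print(r_isco(-1.0), r_isco(0.0), r_isco(1.0))  # 9.0 6.0 1.0
```

The three printed values reproduce the limiting cases stated in the text, and intermediate spins interpolate monotonically between them, which is what makes the measured inner radius a proxy for the spin.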
This method relies on accurate measurements of the system parameters: the mass, distance, and inclination angle (often assumed to be identical to the orbital inclination angle) of the source \citep{gou2009, ste2011, che2016}. The X-ray reflection fitting method models the relativistic reflection spectrum, which is a combination of fluorescent lines, absorption edges and recombination continua \citep{wan2017, wal2019, Gar2018}. One advantage of this technique is that it does not require information on the binary parameters; furthermore, it can place an independent constraint on the inclination angle of the inner disc. Consistency checks between these two methods have been performed on several sources, and they generally show consistent results \citep{rey2019}. It is expected that there exist billions of stellar-mass black holes in the Milky Way \citep{bro1994,tim1996}; however, only roughly two dozen black hole X-ray binaries have been dynamically confirmed \citep{rem2006}, and 4U~1543-47 (4U 1543) is one of them. This transient source was first discovered by the \emph{Uhuru} satellite in 1971 \citep{mat1972}. It then went into outburst again in 1983, 1992 and 2002 \citep{kit1984,har1992,par2004}. It was a long time after the first discovery that the compact primary was confirmed to be a black hole \citep{rho1974,oro1998}. As to its spin, \citet{sha2006} first reported a measurement with the continuum-fitting method, estimating the spin to be $0.8\pm0.1$. Then, \citet{mil2009} and \citet{mor2014} reported two spin measurements, $0.3\pm0.1$ and $0.43^{+0.22}_{-0.31}$, respectively, both constrained by combining the continuum-fitting and the X-ray reflection fitting methods. These three works utilized the previous mass of $9.4 \pm{1.0} M_{\odot}$ and distance of $7.5 \pm{1.0}$ kpc reported in \citet{par2004}.
Except that \citet{mil2009} used an inclination angle of $32_{-4}^{+3}$ degrees constrained by their own fits to the iron line, the other two works used an inclination angle of $20.7 \pm{1.5}$ degrees, equal to the binary orbital inclination angle. Recently, an updated set of dynamical parameters has been identified which differs significantly from the earlier values (J.~Orosz, private communication). In this paper, we revisit 7 \emph{Rossi X-ray Timing Explorer} (\emph{RXTE}, \citealt{zha1993}) data sets of 4U 1543 to check the spin parameter via the X-ray reflection fitting method, i.e., by carefully exploring the reflection component. We made a joint fit to all spectra in order to achieve a better signal-to-noise ratio, and we adopted the updated reflection-emission model {\tt relxill}~\citep{gar2014a, dau2014}. The paper is organized as follows. In Section \ref{sec:data}, we provide details of the data reduction and selection. In Section \ref{sec:results}, we describe the analysis of the spectra and the spin result. In Section \ref{sec:disc} and Section \ref{sec:con}, we present our discussion and conclusions, respectively. \section{Data Reduction and selection} \label{sec:data} We revisited the data sets for 4U~1543 observed by \emph{RXTE} during its 2002 outburst \citep{par2004}. There are 130 continuous pointed observations in total (the long exposures were split), collected by the Proportional Counter Array (PCA, \citealt{jah1996}). We focused our analysis only on the best-calibrated proportional counter unit, namely PCU2, as in previous work \citep{par2004, jah2006, sha2012}. All layers of the PCU2 were combined. We neglected 49 observations whose count rate is smaller than 10 counts s$^{-1}$. The remaining observations are presented in the hardness-intensity diagram (HID, Figure \ref{fig:q}).
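The selection just described amounts to a simple rate cut plus a hardness-ratio computation (hard-band over soft-band rate, per Table~\ref{tab:spec}). A minimal Python sketch follows; the count-rate tuples are hypothetical values invented for illustration, not taken from the PCA data.

```python
# Sketch of the observation selection: compute the hardness ratio
# HR = (8.6-18 keV rate) / (5-8.6 keV rate) for each observation and
# discard those with a total rate below 10 counts/s.
# The tuples below are HYPOTHETICAL (soft, hard, total) rates, cts/s/PCU.

observations = [
    (900.0, 315.0, 2348.0),
    (700.0, 273.0, 1991.0),
    (4.0, 1.0, 8.0),          # too faint: dropped by the 10 counts/s cut
]

def hardness_ratio(soft, hard):
    """Hard-to-soft count-rate ratio."""
    return hard / soft

selected = [(hardness_ratio(s, h), tot)
            for s, h, tot in observations if tot >= 10.0]
for hr, tot in selected:
    print(f"HR = {hr:.2f}, rate = {tot:.0f} cts/s")
```

Each surviving observation is then a point (HR, rate) in the hardness-intensity diagram of Figure~\ref{fig:q}.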
\begin{figure} \centering \includegraphics[scale=0.51,angle=0]{q.pdf} \caption{The evolutionary tracks in the hardness-intensity diagram for all observations except those with count rate smaller than 10 counts s$^{-1}$. The vertical axis shows the count rate in the energy band 3-45 keV. The horizontal axis shows the hardness ratio (HR), defined as the ratio of the count rate in 8.6-18 keV to that in 5-8.6 keV. The 7 red open circles represent the data (two overlapping) we used to determine the spin of the black hole.} \label{fig:q} \end{figure} \emph{RXTE}/PCA data of bright X-ray binaries are fundamentally limited not by counting statistics but by systematic uncertainty in the detector calibration. We apply a calibration correction, {\tt pcacorr} \citep{gar2014b}, which improves the instrumental response to a precision of 0.1\%. We include this 0.1\% as a systematic error. A second correction, {\tt crabcorr} \citep{ste2010}, standardizes the PCA absolute flux calibration to the \citet{too1974} values for the Crab. This latter tool improves not the precision of the detector, but the accuracy of our measurement. We first subtracted the background and applied a deadtime correction to the \emph{RXTE} data. Next, the calibration tool {\tt pcacorr} was applied. Then, a 0.1\% systematic error was added to the spectra. Finally, we performed the \emph{RXTE} data analysis over the energy range 2.8-45.0 keV using the XSPEC 12.9.0g software package \citep{arn1996}. The quoted errors are given at the 90\% confidence level ($\Delta \chi ^2=2.71$) unless otherwise specified. We selected 7 observations (MJD 52459-MJD 52463, defined as Spec. A-G) which show strong reflection components. In Table {\ref{tab:spec}}, we give detailed information for these observations. In order to show the reflection features more clearly, we analysed Spec. A-G between 2.8-45.0 keV, omitting 4.5-8.0 keV and 15.0-35.0 keV, with the model {\tt crabcor*TBabs*(diskbb+powerlaw)} in XSPEC.
For the model {\tt crabcor}, a normalization coefficient $C=1.097$ and a slope difference $\Delta\Gamma = 0.01$ were applied. For the model {\tt TBabs}, which accounts for the Galactic absorption by the interstellar medium (ISM) along the line of sight, the \citet{wil2000} set of solar abundances and the \citet{ver1996} photoelectric cross sections were specified. Since the effective low-energy limit of \emph{RXTE} is 2.8 keV, the data cannot constrain the column density ($N_\mathrm{H}$) well. The column density\footnote{\citet{mil2003} measured the column density for 4U 1543-47 to be (3.8 $\pm$ 0.2) $\times10^{21}$ cm$^{-2}$ by analysing the \emph{XMM-Newton}/EPIC-pn spectrum (0.3-10.0 keV). This value is consistent with the one we currently use in the fit. The new value of the column density has a negligible effect on our fitting results.} was fixed at $4.0\times10^{21}$ cm$^{-2}$ as in \citet{par2004} and \citet{mor2014}. The fits to all 7 spectra are statistically unacceptable, with $\chi^2_{\nu}$ = 31.32 (2129.78/68), 32.59 (2216.33/68), 23.26 (1581.93/68), 36.65 (2492.28/68), 63.75 (4335.57/68), 49.58 (3371.47/68) and 39.70 (2699.92/68), respectively. Data-to-model ratios are plotted in Figure {\ref{fig:ratio}}. The positive features in the residuals are the broadened iron line and the Compton hump characteristic of reflection emission. \begin{figure*} \centering \includegraphics[scale=0.86,angle=0]{ratio_Fe_1206.pdf} \caption{Spec. A-G were analysed with a simple mixture of an absorbed power-law and a multicolor disc blackbody model. The continuum models were fit over the energy band 2.8-45.0 keV, ignoring the 4.5-8.0 keV and 15.0-35.0 keV regions. Data-to-model ratios are plotted, with those regions added back in after the fit. The dotted green line marks the energy of 6.4 keV, which corresponds to the neutral iron line. The broad iron line and Compton hump, i.e.
reflection component, are quite prominent.} \label{fig:ratio} \end{figure*} \begin{table*} \caption{Properties of Spec. A-G} \label{tab:spec} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{threeparttable}[b] \centering \begin{center} \footnotesize \begin{tabular}{ccccccc} \toprule Spec.&Date&MJD&\tabincell{c}{Count Rate\\ (cts/s/PCU)} & \tabincell{c}{Exp.\\(s)}&\tabincell{c}{HR\\(8.6-18/5-8.6 keV)}&\tabincell{c}{$\chi^2_{\nu}$ $^a$\\(63 d.o.f)}\\ \midrule A & July 04 & 52459 & 2348 & 1072 &0.35&67.70\\ B & July 05 & 52460 & 1991 & 1120&0.39&63.76\\ C & July 06 & 52461 & 1836 & 800 &0.38&77.15\\ D & July 07 & 52462 & 1432 & 1328 &0.32&57.47\\ E & July 07 & 52462 & 1406 & 3376&0.31&63.49\\ F & July 08 & 52463 & 1179 &3056&0.29&74.34\\ G& July 08 & 52463 & 1184 &1520&0.29&83.88\\ \bottomrule \end{tabular} \begin{tablenotes} \item[$a$] The spectrum is fitted with {\tt crabcor*TBabs*smedge(diskbb+powerlaw+Gauss)} in XSPEC. \end{tablenotes} \end{center} \end{threeparttable} \end{table*} \section{Analysis and results} \label{sec:results} We fitted Spec. A-G with a phenomenological model, {\tt crabcor*TBabs*smedge(diskbb+powerlaw}\\ {\tt +Gauss)}, in which {\tt Gauss} and {\tt smedge} \citep{ebi1994} are used to model the reflection features. The central energy of the iron line was constrained between 6.0 and 6.97 keV. The width and the normalization were allowed to be free. The smearing width of {\tt smedge} was fixed at 7.0 keV, the edge energy was allowed to vary from 7.0 to 9.0 keV, and the optical depth floated freely. We report only the best-fitting models. Detailed information on the quality of the fit for each spectrum is given in Table \ref{tab:spec}. The temperatures of the thermal emission are $\sim$0.75-0.85 keV. The photon indices of the power-law emission are $\sim$2.45-2.75, which indicate that the source was in the Steep Power Law (SPL) state during these observations.
Line peaks are less than 6.4 keV, which suggests the presence of strong gravitational redshift around the black hole. We then fit the full ionized reflection spectrum with a sophisticated model. The reflection information, however, was weak relative to the disc and corona continua; we therefore made a joint fit to Spec. A-G. We used the relativistic reflection model {\tt relxill} \citep{gar2014a, dau2014}, which is a combination of the reflection model {\tt xillver} \citep{gar2010, gar2011, gar2013} and the relativistic convolution kernel {\tt relconv} \citep{dau2010, dau2012, dau2013}. This model is designed to fit the reflection and the power-law components simultaneously. It has been widely used in recent years for reflection studies of stellar-mass black hole binaries and AGNs, and sometimes of neutron star binaries. The parameter list contains the inner index ($q_\mathrm{in}$), outer index ($q_\mathrm{out}$), and break radius ($R_\mathrm{br}$), which describe the radial dependence of the emissivity of the reflection emission; the spin parameter ($a_*$), inclination angle ($i$), inner radius ($R_\mathrm{in}$), outer radius ($R_\mathrm{out}$), redshift ($z$) to the source (set to 0 for Galactic systems), photon index ($\Gamma_\mathrm{r}$), ionization state ($\mathrm{log}\xi$), iron abundance ($A_\mathrm{Fe}$), high energy cut-off ($E_\mathrm{cut}$), reflection fraction ($R_\mathrm{f}$), and normalization ($N_\mathrm{r}$). The overall self-consistent model we adopt here is {\tt crabcor*TBabs(diskbb+relxill)}. For the {\tt relxill} model, we assumed a single emissivity profile ($q_\mathrm{in}=q_\mathrm{out}=q$) and an inner radius of the accretion disc extending down to the ISCO radius ($R_\mathrm{in} = R_\mathrm{ISCO}$).
Some parameters were independent for each spectrum: the temperature $T_\mathrm{col}$ and normalization constant $N_\mathrm{DISC}$ of the thermal emission; the emissivity index $q$, photon index $\Gamma _\mathrm{r}$, ionization state $\mathrm{log}\xi$, reflection fraction $R_\mathrm{f}$ and normalization $N_\mathrm{r}$ of the reflection component. The other parameters were linked together among the 7 spectra. The spin parameter $a_*$ and the inclination angle $i$ were free. The outer radius was set to the default value, $R_\mathrm{out}=400$ $R_\mathrm{g}$. Because the power-law is extremely steep, we cannot detect the high energy cut-off in these observations. We therefore fixed $E_\mathrm{cut}$ at 300~keV, which is a physically reasonable and sufficiently large value for our purposes; it also reduces the complexity of the model. When the iron abundance was fixed at unity (i.e. solar abundance), the model returned an acceptable but not good fit, with $\chi^2_{\nu}$ = 1.56 (707.96/453), and the spin tended to peg at the maximal negative value of -0.998. Therefore, we let the iron abundance $A_\mathrm{Fe}$ free. \begin{table*} \caption{Best-fitting parameters for Model 1: {\tt crabcor*TBabs}({\tt diskbb}+{\tt relxill})} \begin{threeparttable}[b] \begin{center} \label{tab:relxill} \footnotesize \begin{tabular}{llccccccc} \toprule Model & Parameter &Spec. A & Spec. B & Spec. C & Spec. D & Spec. E & Spec. F & Spec.
G \\ \midrule {\tt crabcor}&$C$ &\multicolumn{7}{c}{1.097 (f)}\\ \specialrule{0em}{1pt}{1.1pt} &$\Delta\Gamma$ &\multicolumn{7}{c}{0.01 (f)}\\ \specialrule{0em}{1pt}{1.1pt} {\tt TBabs}&$N_\mathrm{H}$ ( cm$^{-2}$) &\multicolumn{7}{c}{$4.0\times10^{21}$ (f)}\\ \specialrule{0em}{1pt}{1.1pt} {\tt diskbb} &$T_\mathrm{col}$ (keV)&$0.858_{-0.014}^{+0.011}$&$0.825\pm0.012$&$0.835_{-0.018}^{+0.015}$&$0.787_{-0.010}^{+0.008}$&$0.787_{-0.009}^{+0.007}$&$0.771\pm0.007$&$0.772_{-0.009}^{+0.010}$\\ \specialrule{0em}{1pt}{1.1pt} &$N_\mathrm{DISC}$&$3889_{-219}^{+366}$&$4151_{-273}^{+237}$&$3561_{-297}^{+312}$&$4862_{-247}^{+329}$&$4923_{-234}^{+398}$&$5025_{-212}^{+230}$&$4891_{-311}^{+328}$\\ \specialrule{0em}{1pt}{1.1pt} {\tt relxill} &$q$&$3.69_{-0.56}^{+1.03}$&$3.78_{-0.64}^{+1.23}$&$3.29_{-0.52}^{+1.04}$&$3.98_{-0.70}^{+1.41}$&$3.82_{-0.57}^{+1.09}$&$3.67_{-0.55}^{+1.20}$&$3.71_{-0.65}^{+1.13}$ \\ \specialrule{0em}{1pt}{1.1pt} &$a_*$&\multicolumn{7}{c}{$0.67_{-0.08}^{+0.15}$}\\ \specialrule{0em}{1pt}{1.1pt} &$i$ (deg)&\multicolumn{7}{c}{$36.3_{-3.4}^{+5.3}$ }\\ \specialrule{0em}{1pt}{1.1pt} &$A_\mathrm{Fe}$&\multicolumn{7}{c}{$5.05_{-0.26}^{+1.21}$}\\ \specialrule{0em}{1pt}{1.1pt} &$\Gamma_\mathrm{r}$&$2.79\pm0.06$&$2.59_{-0.04}^{+0.06}$&$2.65_{-0.07}^{+0.08}$&$2.71_{-0.09}^{+0.06}$&$2.66_{-0.08}^{+0.07}$&$2.69_{-0.08}^{+0.06}$&$2.80_{-0.07}^{+0.12}$ \\ \specialrule{0em}{1pt}{1.1pt} &$\mathrm{log}\xi$&$3.72_{-0.16}^{+0.13}$&$3.54_{-0.13}^{+0.22}$&$3.53_{-0.15}^{+0.46}$&$3.70_{-0.21}^{+0.13}$&$3.72_{-0.19}^{+0.29}$&$3.73_{-0.12}^{+0.15}$&$3.71_{-0.18}^{+0.05}$\\ \specialrule{0em}{1pt}{1.1pt} &$R_\mathrm{f}$&$0.35_{-0.10}^{+0.07}$&$0.29_{-0.03}^{+0.06}$&$0.27_{-0.04}^{+0.07}$&$0.44_{-0.11}^{+0.05}$&$0.41_{-0.08}^{+0.06}$&$0.44_{-0.07}^{+0.16}$&$0.61_{-0.14}^{+0.12}$ \\ \specialrule{0em}{1pt}{1.1pt} 
&$N_\mathrm{r}$&$0.184_{-0.031}^{+0.038}$&$0.091_{-0.011}^{+0.016}$&$0.099_{-0.022}^{+0.024}$&$0.065_{-0.016}^{+0.013}$&$0.051_{-0.012}^{+0.011}$&$0.04_{-0.009}^{+0.008}$&$0.057_{-0.012}^{+0.026}$ \\ \midrule & $\chi^2$ &\multicolumn{7}{c}{390.40} \\ & $\nu$ & \multicolumn{7}{c}{452} \\ & $\chi^2_{\nu}$ & \multicolumn{7}{c}{0.86} \\ \bottomrule \end{tabular} \begin{tablenotes} \item \textbf{Notes.} Columns 3-9 successively show the results for Spec. A-G. Parameters with ``f'' in parentheses were fixed at the values given. All errors were calculated at the 90\% confidence level. \item \textbf{1.} The spin $a_*$, the inclination angle $i$ and the iron abundance $A_\mathrm{Fe}$ of {\tt relxill} were linked together among the different spectra. \item \textbf{2.} The temperature $T_\mathrm{col}$, normalization constant $N_\mathrm{DISC}$, emissivity index $q$, photon index $\Gamma _\mathrm{r}$, ionization state $\mathrm{log}\xi$, reflection fraction $R_\mathrm{f}$ and normalization $N_\mathrm{r}$ were independent for each spectrum. \end{tablenotes} \end{center} \end{threeparttable} \end{table*} \begin{figure*} \centering \includegraphics[scale=0.50,angle=0]{ratio_1206.pdf} \caption{Joint fit to Spec. A-G using the model {\tt crabcor*TBabs*(diskbb+relxill)}. Data-to-model ratios and contributions to the total $\chi^2$ are presented in the left and right panels, respectively. Different colors represent different spectra. The model achieved a satisfactory fit with $\chi^2/\nu=390.40/452$.} \label{fig:relxill} \end{figure*} The model achieved a statistically good fit with $\chi^2_{\nu}$ = 0.86 (390.4/452) for the 7 observations (Table \ref{tab:relxill}). All parameters, including the spin and the inclination angle, are well constrained. The spin parameter $a_*$ is obtained to be $0.67_{-0.08}^{+0.15}$. The inclination angle $i$ is obtained to be $36.3_{-3.4}^{+5.3}$ degrees.
The iron abundance $A_\mathrm{Fe}$ is obtained to be $5.05_{-0.26}^{+1.21}$. Figure \ref{fig:relxill} shows the data-to-model ratios and the contributions to the total $\chi^2$ of the best fit. No distant reflection from the outer disc, the wind or the surface of the companion \citep{wan2018, xu2018} was necessary, which is attributable to the fact that \emph{RXTE} is not sensitive to narrow lines. When a distant reflection component is added using {\tt xillver}, the statistic is not improved, with $\chi^2_{\nu}$ = 0.87 (385.82/445). The spin is $0.78_{-0.13}^{+0.98}$ and the inclination angle is $41.35_{-6.69}^{+10.86}$ degrees, still consistent with the model without {\tt xillver}. In order to investigate the effect of different values of the column density on our model, especially on the main parameters, we let the parameter $N_\mathrm{H}$ free. However, the model was unable to provide any meaningful constraint on $N_\mathrm{H}$ (only a 1$\sigma$ detection). This is not surprising given the band of sensitivity of the PCA. Most importantly, this has negligible impact on the fit parameters of the model. We also investigated the model dependence on the high energy cut-off by allowing $E_\mathrm{cut}$ to vary among the 7 observations. Compared with the best-fitting result described above, the fit was improved only by $\Delta \chi^2 = 9.33$ for 7 fewer degrees of freedom (d.o.f.), which is not a significant improvement. Meanwhile, the fit did not constrain $E_\mathrm{cut}$ well. This changed setting for $E_\mathrm{cut}$ did not largely affect the profiles of the thermal, power-law and reflection emission. It still requires a high, super-solar iron abundance of 6.61$_{-1.17}^{+2.06}$. Moreover, the free $E_\mathrm{cut}$ did not largely change the inclination angle or the spin of the black hole: the spin parameter $a_*$ is obtained to be $0.73_{-0.14}^{+0.10}$, and the inclination angle $i$ is obtained to be $34.2_{-3.7}^{+6.8}$ degrees.
When a broken emissivity profile is assumed (the inner index free, the outer index fixed at 3, and the break radius fixed at 15 $R_{\rm{g}}$), the fit statistics are not improved, with $\chi^2_{\nu} = 0.86$ (388.72/452). The best-fitting parameters are also not changed significantly: the spin parameter $a_*$ is $0.71_{-0.08}^{+0.11}$, the inclination angle $i$ is $35.8_{-2.7}^{+3.6}$ degrees, and the inner indices are $\sim3-4$. For the single emissivity profile, we explored the implication of freezing the emissivity index at the typical value ($q$ = 3) instead of leaving it free. The best-fitting model only gave a lower limit of 0.65 on the spin at the 90\% confidence level, with $\chi^2_{\nu} = 0.88$ (403.92/459), slightly worse than the case with free $q$ ($\Delta\chi^2=13.52$ for 7 additional d.o.f.). The inclination angle $i$ is again greater than the orbital inclination angle by approximately 10$^{\circ}$. We explored the $\chi^2$ parameter space using the command ``steppar'' for the spin and the inclination angle. During this process, at each step the parameter of interest was fixed at incrementally stepped values while all other parameters were allowed to vary. For the spin parameter, a stepsize of 0.01 was used from 0.0 to 1.0 (Figure \ref{fig:a}). For the inclination angle, a stepsize of 0.1$^{\circ}$ was explored from 20$^{\circ}$ to 50$^{\circ}$ (Figure \ref{fig:i}). Three confidence levels (68\%, 90\% and 99\%) are also marked in both figures. The spin is well constrained between $\sim{0.58-0.82}$ at 90\% statistical confidence, consistent with a moderately spinning black hole. Negative and low spins (< 0.5) are ruled out at more than 99\% statistical confidence. The inclination angle is constrained to be $\sim{32^{\circ}-42^{\circ}}$ at 90\% statistical confidence. \begin{figure} \centering \includegraphics[scale=0.51,angle=0]{a_contour.pdf} \caption{Joint fit to Spec.
A-G using the model {\tt crabcor*TBabs*(diskbb+relxill)} (Model 1). The goodness-of-fit statistic as a function of the black hole spin parameter $a_{*}$ is shown. A stepsize of 0.01 was explored from 0.0 to 1.0. Confidence levels of 68\%, 90\%, and 99\% are labeled with red dotted lines. The result suggests a moderately rotating black hole and strongly excludes negative and low spins.} \label{fig:a} \end{figure} \begin{figure} \centering \includegraphics[scale=0.51,angle=0]{i_contour.pdf} \caption{Joint fit to Spec. A-G using the model {\tt crabcor*TBabs*(diskbb+relxill)} (Model 1). The goodness-of-fit statistic as a function of the accretion disc inclination angle $i$ is shown. A stepsize of 0.1$^{\circ}$ was explored from 20$^{\circ}$ to 50$^{\circ}$. Confidence levels of 68\%, 90\%, and 99\% are labeled with red dotted lines. The result indicates that the inclination angle is larger than $\sim$32$^{\circ}$ and smaller than $\sim$40$^{\circ}$ at 90\% confidence.} \label{fig:i} \end{figure} \section{Discussions} \label{sec:disc} In this paper, we have carefully explored the constraint on the spin of the black hole in 4U 1543 on the basis of its reflection emission. We selected 7 SPL state spectra which show strong reflection components. These spectra were selected from the 2002 outburst observed by \emph{RXTE}. According to the phenomenological model {\tt crabcor*TBabs*smedge(diskbb+powerlaw+\\Gauss)} in Section \ref{sec:results}, the central energy of the Gaussian profile is less than 6.4 keV, suggesting the presence of strong gravitational redshift around the black hole and a reflection region concentrated quite close to the black hole. To improve sensitivity to faint reflection features, we fitted the 7 spectra simultaneously \citep{gar2015}.
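The ``steppar'' interval estimate used above can be mimicked on a toy statistic: step the parameter over a grid and keep the values with $\Delta\chi^2 \le 2.71$. In the Python sketch below, the parabolic $\chi^2(a_*)$ is a stand-in for the real fit statistic (in an actual run all other parameters would be re-fitted at each step), and the minimum and width are illustrative numbers only, loosely modeled on the best-fitting values quoted in the text.

```python
# Sketch of a "steppar"-style scan: evaluate the statistic on a grid of
# the parameter of interest and read off the 90% confidence interval at
# Delta(chi^2) = 2.71.  The quadratic chi2 below is a TOY surrogate for
# the real fit; CHI2_MIN, A_BEST and SIGMA are illustrative numbers.

CHI2_MIN, A_BEST, SIGMA = 390.40, 0.67, 0.07

def chi2(a):
    """Toy chi-square profile for the spin parameter a_*."""
    return CHI2_MIN + ((a - A_BEST) / SIGMA) ** 2

grid = [i * 0.01 for i in range(101)]            # a_* from 0.0 to 1.0, step 0.01
inside = [a for a in grid if chi2(a) - CHI2_MIN <= 2.71]
print(f"90% CL: {min(inside):.2f} <= a_* <= {max(inside):.2f}")
```

Replacing the toy parabola with the statistic returned by the fitting engine at each stepped value reproduces the profiles shown in Figures~\ref{fig:a} and \ref{fig:i}.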
\citet{mil2009} found that {\tt diskbb} and physically rigorous relativistic disc models such as {\tt kerrbb} performed similarly in their ability to characterize the thermal continuum for the purposes of isolating the reflection signal. From the best-fitting parameters in Table \ref{tab:relxill}, the color temperature $T_\mathrm{col}$ of the accretion disc drops with decreasing luminosity. The relative consistency of $N_\mathrm{DISC}$ indicates that the disc radius and coronal covering did not change appreciably over the 5-day span in which these observations were accrued. The photon index is relatively constant and constrained between 2.6-2.8. The emissivity index is also relatively constant but larger than the canonical value ($q=3$). The high values of $\mathrm{log}\xi$ (> 3.5) indicate that the accretion disc is highly ionized. The reflection fraction $R_\mathrm{f}$, defined as the ratio of the power-law emission that hits the disc to that which escapes to infinity, is in the range of $\sim{0.25-0.65}$. It rises modestly with decreasing luminosity, because the hot gas layer at the surface of the disc dilutes the reflection signal by scattering and blurring reflection features \citep{nay2001}: the hotter the surface of the disc, the more significant the dilution. We also report a super-solar iron abundance for the binary system 4U 1543. Based on the full reflection Model 1, we investigated the degeneracy between the iron abundance and the spin by fixing the spin value and fitting for the iron abundance. We divided the spin range of 0.4-1.0 into 60 evenly-spaced values, and found that 50 of the 60 fitted iron abundances $A_\mathrm{Fe}$ lie in the range of 3.5-8.0 in units of the solar abundance (Figure \ref{fig:relxill_aFe}). The relation indicates that the iron abundance has a weakly positive correlation with the spin. The iron abundance is greater than 4 at the 99\% confidence level. The very large iron abundance is not unique to 4U 1543.
Similar results have been reported in other stellar-mass black hole binaries such as GX 339-4 ($A_\mathrm{Fe}=5\pm1$ solar in \citealt{gar2015} and $A_\mathrm{Fe}=6.6\pm0.5$ solar in \citealt{par2016}), V404 Cyg ($A_\mathrm{Fe}\sim{5}$ solar in \citealt{wal2017}), and Cyg X-1 ($A_\mathrm{Fe}=4.7\pm0.1$ solar in \citealt{par2015} and $A_\mathrm{Fe}=$4.0-4.3 solar in \citealt{wal2016}). At present, there is no satisfactory physical explanation for the occurrence of high iron abundances in these systems. The most likely explanation is atomic data shortcomings in current reflection models. \citet{joh2018} explored the super-solar iron abundance of Cyg X-1 using an observation of the intermediate state. They found that a higher electron density ($n_{e} \approx 4 \times 10^{20}$ cm$^{-3}$) model was compatible with a solar iron abundance using the high-density model {\tt reflionx\_hd} (a new version of {\tt reflionx}). However, the range of the photon index in that model is 1.4-2.3, which cannot be applied to the observations in our paper. Meanwhile, the maximum density in the high-density version of {\tt relxill} ({\tt relxillD}) is only 10$^{19}$ cm$^{-3}$; when it is used to fit the data, the density pegs at its upper limit, which indicates that the density in the disc is larger than the maximal value in the model. We found that the spin parameter pegged at -0.998 when the iron abundance was decreased to $\sim{3}$. To better understand this surprising finding, we performed another trial. We fixed the iron abundance at values between 3.5-6.5 with a stepsize of 1.0, and found four best fits with $\chi^2_{\nu}$ < 1 (d.o.f = 453). We then explored the $\chi^2$ for spins from 0 to 1.0 with a stepsize of 0.01 for each of them (Figure \ref{fig:Fe_step}).
These four models all yield moderate black hole spins at the 90\% statistical confidence level, but when the iron abundance is higher than $\sim{5.5}$, at more than the 90\% statistical confidence level, we note a reduced sensitivity of the models to large values of the spin. The increase of the iron abundance induces more photoelectric absorption, making the Fe K-edge near 8 keV deeper. At the same time, the strength of the Fe K emission in the 6-8 keV band increases \citep{gar2013}. More fluorescent iron photons near the black hole would be scattered down below $\sim{6}$ keV to make a stronger red wing, as expected. \begin{figure} \centering \includegraphics[scale=0.51,angle=0]{aFe.pdf} \caption{Joint-fit to Spec. A-G using model {\tt crabcor*TBabs*(diskbb+relxill)} (Model 1). Contours of $\chi^2$ with 68\%, 90\% and 99\% confidence levels for the spin parameter $a_{*}$ and iron abundance parameter $A_\mathrm{Fe}$ were measured using the reflected components and are shown in the above plot. By dividing the spin range of 0.4-1.0 into 60 evenly-spaced values and fixing the spin at a certain value each time, we then fit the spectra for the iron abundance, and it is found that 50 out of 60 fixed spin values give $A_\mathrm{Fe}$ in the range of $3.5-8.0$. The plot shows a weakly positive relationship between them.} \label{fig:relxill_aFe} \end{figure} \begin{figure} \centering \includegraphics[scale=0.51,angle=0]{Fe_step.pdf} \caption{Fixing the iron abundance from 3.5 to 6.5 with a stepsize of 1.0, respectively. The 4 contours for the $\chi^2$ dependence on the spin with 68\%, 90\%, and 99\% confidence levels are shown.} \label{fig:Fe_step} \end{figure} Then, we further explored the dependence of the spin parameter $a_*$ on the inclination angle $i$. We fit the spectra for 60 evenly-spaced values of $a_*$ in the range of 0.4-1.0 and 30 evenly-spaced values of $i$ in the range of 20$^{\circ}$-50$^{\circ}$ (Figure \ref{fig:relxill_ai}).
When $i$ is larger than $\sim{36}^{\circ}$, a positive relationship is shown. Moreover, the model loses the ability to give an upper limit on the spin parameter at the 99\% confidence level. The inclination angle ($36.3_{-3.4}^{+5.3}$ degrees) in this paper is consistent with the value ($32_{-4}^{+3}$ degrees) in \citet{mor2014}, which may indicate that the inclination angle of the inner disc is misaligned with the orbital inclination angle ($\sim{21}^{\circ}$). However, for a transient system, the timescale for accretion to torque the black hole into alignment is estimated to be $10^{6}-10^{8}$ years \citep{mar2008}. Therefore, the alignment is expected to occur early in the typical lifetime of transients, which are characteristically Gyrs old \citep{whi1998,fra2013}. In our estimation, the most likely resolution to this apparent tension lies in the reflection modeling. The inclination angle estimation via the X-ray reflection fitting method is principally determined by the blue wing of the broad Fe line. A high-density model leads to increased soft X-ray flux. Recent reflection analyses of Cyg X-1 by \citet{joh2018}, and GX 339-4 by \citet{gar2015} and \citet{jia2019}, suggest that reflection models which underestimate the density of the disc introduce systematic changes of order $\sim$10$\degr$ in the inclination angle. \begin{figure} \centering \includegraphics[scale=0.51,angle=0]{ai.pdf} \caption{Joint-fit to Spec. A-G using model {\tt crabcor*TBabs*(diskbb+relxill)} (Model 1). Contours of $\chi ^2$ with 68\%, 90\% and 99\% confidence levels for the spin parameter $a_{*}$ and inclination angle $i$ are shown in the above plot. We fit the spectra for 30 evenly-spaced values of $i$ in the range of $20^{\circ}$-$50^{\circ}$ and 60 evenly-spaced values of $a_*$ in the range of $0.4-1.0$.} \label{fig:relxill_ai} \end{figure} \citet{sha2006} first reported the spin of the black hole in 4U 1543 via the continuum-fitting method. They estimated its spin to be $0.8\pm0.1$.
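As background to the continuum-fitting discussion that follows, the spin enters through the monotonic decrease of the ISCO radius with $a_*$. The following minimal sketch is our own illustration, not part of the analysis in this paper; it evaluates the standard Bardeen, Press \& Teukolsky (1972) expression for prograde orbits, in units of $GM/c^2$ (the function name is ours).

```python
def r_isco(a):
    """ISCO radius (units of GM/c^2) for a prograde orbit, 0 <= a <= 1."""
    z1 = 1 + (1 - a * a) ** (1 / 3) * ((1 + a) ** (1 / 3) + (1 - a) ** (1 / 3))
    z2 = (3 * a * a + z1 * z1) ** 0.5
    return 3 + z2 - ((3 - z1) * (3 + z1 + 2 * z2)) ** 0.5

# Schwarzschild limit: r_ISCO = 6; extremal Kerr limit: r_ISCO = 1.
assert abs(r_isco(0.0) - 6.0) < 1e-9
assert abs(r_isco(1.0) - 1.0) < 1e-9
# Monotonic: larger spin implies smaller ISCO, hence a hotter inner disc.
assert r_isco(0.3) > r_isco(0.67) > r_isco(0.998)
```

The monotonicity of this relation is what makes an accurate inclination angle, distance, and mass so critical: any bias in the inferred emitting area maps directly onto the spin.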
Then, \citet{mil2009} and \citet{mor2014} reported two spin measurements, $0.3\pm0.1$ and $0.43^{+0.22}_{-0.31}$, respectively, both constrained by combining the continuum-fitting and X-ray reflection fitting methods. These three works utilized the dynamical parameters reported in \citet{par2004}, but found conflicting values of the spin. To measure the spin of a black hole via the continuum-fitting method, one constrains the size of the emitting region via the efficient blackbody-like emission of the optically thick disc. To relate the emitting area to a dimensionless ISCO and thereby the spin, it is critical to have accurate measurements of the distance to the source, the mass of the black hole, and the inclination angle of the accretion disc \citep{mcc2011,mcc2006,gou2011}. The spin and the inclination angle measured by modeling the reflection emission with {\tt relxill} in this paper are consistent with those reported by \citet{mor2014}. However, the spin measurement is in conflict with the one reported by \citet{mil2009}, who assumed that the inclination angle of the accretion disc is equal to the orbital inclination angle. Accordingly, we test the implication of the lower inclination angle on the spin measurement. When the inclination angle parameter in the {\tt relxill} model is fixed at the orbital inclination angle of 21.0$^{\circ}$, which we consider as Model 2, we find that the fit becomes worse than when the inclination angle is free ($\Delta \chi^2 = 53.51$ for an increase of 1 d.o.f). The best-fitting values are listed in Table \ref{tab:relxill2}. The temperature and the normalization of the thermal emission do not change significantly. As for the reflected emission, the emissivity index becomes smaller than the Newtonian value ($q=3$), which indicates that the coronal geometry changes from compact ($q>3$ in Model 1) to extended.
The spin pegged at 0.998 in this condition, possibly owing to the higher iron abundance of $7.72_{-1.62}^{+1.23}$, an even less physical value. The ionization state becomes much higher, and the reflection fraction becomes smaller. As an extension of Model 2, we define a new Model 3 in which we keep the inclination fixed at 21.0$^{\circ}$ and also fix the spin parameter to the value found by \citet{mil2009}: $a_{*}$=0.3. The best-fitting values are listed in Table \ref{tab:relxill3}. Compared to Model 2, its $\chi^2$ increased by 12.99 for 1 additional d.o.f. Apart from the iron abundance, which is constrained to $4.50_{-0.21}^{+0.46}$, lower than in Model 2, the other parameters are not appreciably affected. We also plot the contribution to $\chi^2$ for Spec. A resulting from Models 1-3 in Figure \ref{fig:comp}. The most pronounced differences between the models are the residuals around the iron-line region ($\sim$5.0-8.0 keV). \begin{table*} \caption{Best-fitting parameters for Model 2: {\tt crabcor*TBabs}*({\tt diskbb}+{\tt relxill}), in which $i$ = 21 deg} \begin{threeparttable}[b] \begin{center} \label{tab:relxill2} \footnotesize \begin{tabular}{llccccccc} \toprule Model & Parameter &Spec. A & Spec. B & Spec. C & Spec. D & Spec. E & Spec. F & Spec.
G \\ \midrule {\tt crabcor}&$C$ &\multicolumn{7}{c}{1.097 (f)}\\ \specialrule{0em}{1pt}{1.1pt} &$\Delta\Gamma$ &\multicolumn{7}{c}{0.01 (f)}\\ \specialrule{0em}{1pt}{1.1pt} {\tt TBabs}&$N_\mathrm{H}$ ($ cm^{-2}$) &\multicolumn{7}{c}{$4.0\times10^{21}$ (f)}\\ \specialrule{0em}{1pt}{1.1pt} {\tt diskbb} &$T_\mathrm{col}$(keV)&$0.842_{-0.013}^{+0.015}$&$0.814_{-0.015}^{+0.016}$&$0.829\pm0.015$&$0.784\pm0.013$&$0.783\pm0.007$&$0.770_{-0.007}^{+0.008}$&$0.770_{-0.011}^{+0.010}$ \\ \specialrule{0em}{1pt}{1.1pt} &$N_\mathrm{DISC}$&$4349_{-291}^{+220}$&$4471_{-374}^{+376}$&$3778_{-301}^{+229}$&$5021_{-383}^{+388}$&$5124_{-216}^{+221}$&$5110_{-272}^{+251}$&$5052_{-306}^{+355}$ \\ \specialrule{0em}{1pt}{1.1pt} {\tt relxill} &$q$&$2.72_{-0.13}^{+0.10}$&$2.69\pm0.13$&$2.44\pm0.17$&$2.78_{-0.14}^{+0.08}$&$2.67\pm0.09$&$2.59_{-0.06}^{+0.11}$&$2.76_{-0.15}^{+0.10}$\\ \specialrule{0em}{1pt}{1.1pt} &$a_*$&\multicolumn{7}{c}{$> 0.83$ }\\ \specialrule{0em}{1pt}{1.1pt} &$i$ (deg)&\multicolumn{7}{c}{$21.0$ (f)}\\ \specialrule{0em}{1pt}{1.1pt} &$A_\mathrm{Fe}$&\multicolumn{7}{c}{$7.72_{-1.62}^{+1.23}$ }\\ \specialrule{0em}{1pt}{1.1pt} &$\Gamma_\mathrm{r}$&$2.64_{-0.03}^{+0.04}$&$2.48_{-0.03}^{+0.04}$&$2.51_{-0.03}^{+0.04}$&$2.56_{-0.04}^{+0.09}$&$2.52\pm0.03$&$2.53_{-0.02}^{+0.04}$&$2.59_{-0.05}^{+0.14}$ \\ \specialrule{0em}{1pt}{1.1pt} &$\mathrm{log}\xi$&$4.52_{-0.15}^{+4.52}$&$4.19_{-0.35}^{+0.25}$&$4.26_{-0.34}^{+0.15}$&$4.07_{-0.34}^{+0.25}$&$4.29_{-0.20}^{+0.09}$&$4.19_{-0.23}^{+0.16}$&$4.01_{-0.23}^{+0.35}$\\ \specialrule{0em}{1pt}{1.1pt} &$R_\mathrm{r}$&$0.21_{-0.09}^{+0.06}$&$0.23_{-0.01}^{+0.06}$&$0.18_{-0.03}^{+0.04}$&$0.3_{-0.04}^{+0.06}$&$0.32\pm0.04$&$0.32_{-0.04}^{+0.05}$&$0.36_{-0.06}^{+0.13}$ \\ \specialrule{0em}{1pt}{1.1pt} &$N_\mathrm{r}$&$0.111_{-0.011}^{+0.015}$&$0.063_{-0.008}^{+0.010}$&$0.064_{-0.007}^{+0.010}$&$0.04_{-0.003}^{+0.015}$&$0.032_{-0.003}^{+0.004}$&$0.023_{-0.003}^{+0.004}$&$0.031_{-0.005}^{+0.016}$ \\ \midrule & $\chi^2$ &\multicolumn{7}{c}{443.71} \\ 
& $\nu$ & \multicolumn{7}{c}{453} \\ & $\chi^2_{\nu}$ & \multicolumn{7}{c}{0.98} \\ \bottomrule \end{tabular} \begin{tablenotes} \item[] \textbf{Notes.} Columns 3-9 show successively the results of Spec. A-G. The parameters with ``f'' in parenthesis indicate that they were fixed at the values given. All errors for one parameter of interest were calculated at the 90\% confidence level. \item \textbf{1.} Parameters including the spin $a_*$ and the iron abundance $A_\mathrm{Fe}$ of {\tt relxill} were linked together among the different spectra. \item \textbf{2.} Parameters including the temperature $T_\mathrm{col}$, normalization constant $N_\mathrm{DISC}$, emissivity index $q$, photon index $\Gamma _\mathrm{r}$, ionization state $\mathrm{log}\xi$, reflection fraction $R_\mathrm{r}$ and the normalization $N_\mathrm{r}$ were independent for each spectrum. \end{tablenotes} \end{center} \end{threeparttable} \end{table*} \begin{figure} \centering \includegraphics[scale=0.45,angle=0]{chi_comp.pdf} \caption{Contributions to $\chi^2$ for Spec. A resulting from fitting Models 1-3. From top to bottom, the d.o.f increases: first both the inclination angle and the spin are free; then the inclination angle is set at 21.0 degrees and the spin is free; finally the inclination angle is set at 21.0 degrees and the spin is set at 0.3. We also included a semi-transparent box to highlight the most pronounced changes in comparing the model differences ($\sim$5-8 keV). For the other six spectra, the comparable residual plots are qualitatively similar.} \label{fig:comp} \end{figure} \begin{table*} \caption{Best-fitting parameters for Model 3: {\tt crabcor*TBabs}*({\tt diskbb}+{\tt relxill}), in which $i$ = 21 deg and $a_{*}$ = 0.30} \begin{threeparttable}[b] \begin{center} \label{tab:relxill3} \footnotesize \begin{tabular}{llccccccc} \toprule Model & Parameter &Spec. A & Spec. B & Spec. C & Spec. D & Spec. E & Spec. F & Spec.
G \\ \midrule {\tt crabcor}&$C$ &\multicolumn{7}{c}{1.097 (f)}\\ \specialrule{0em}{1pt}{1.1pt} &$\Delta\Gamma$ &\multicolumn{7}{c}{0.01 (f)}\\ \specialrule{0em}{1pt}{1.1pt} {\tt TBabs}&$N_\mathrm{H}$ ($ cm^{-2}$) &\multicolumn{7}{c}{$4.0\times10^{21}$(f)}\\ \specialrule{0em}{1pt}{1.1pt} {\tt diskbb} &$T_\mathrm{col}$(keV)&$0.853\pm0.013$&$0.83\pm0.015$&$0.837_{-0.016}^{+0.013}$&$0.798\pm0.009$&$0.791_{-0.008}^{+0.007}$&$0.778_{-0.007}^{+0.006}$&$0.783_{-0.010}^{+0.008}$\\ \specialrule{0em}{1pt}{1.1pt} &$N_\mathrm{DISC}$&$4127_{-172}^{+262}$&$4103_{-341}^{+346}$&$3586_{-242}^{+331}$&$4595_{-259}^{+324}$&$4868_{-212}^{+250}$&$4862_{-192}^{+239}$&$4603_{-241}^{+360}$\\ \specialrule{0em}{1pt}{1.1pt} {\tt relxill} &$q$&$2.75\pm0.15$&$2.75_{-0.15}^{+0.07}$&$2.49_{-0.20}^{+0.18}$&$2.76_{-0.19}^{+0.15}$&$2.72_{-0.11}^{+0.10}$&$2.61_{-0.13}^{+0.12}$&$2.65\pm0.23$\\ \specialrule{0em}{1pt}{1.1pt} &$a_*$&\multicolumn{7}{c}{$0.30$ (f)}\\ \specialrule{0em}{1pt}{1.1pt} &$i$ (deg)&\multicolumn{7}{c}{$21.0$ (f)}\\ \specialrule{0em}{1pt}{1.1pt} &$A_\mathrm{Fe}$&\multicolumn{7}{c}{$4.5_{-0.21}^{+0.46}$ }\\ \specialrule{0em}{1pt}{1.1pt} &$\Gamma_\mathrm{r}$&$2.65\pm0.03$&$2.5_{-0.03}^{+0.05}$&$2.54\pm0.03$&$2.62_{-0.06}^{+0.08}$&$2.54\pm0.02$&$2.56\pm0.03$&$2.73_{-0.12}^{+0.06}$ \\ \specialrule{0em}{1pt}{1.1pt} &$\mathrm{log}\xi$&$4.31_{-0.63}^{+0.22}$&$3.89_{-0.36}^{+0.29}$&$4.03_{-0.27}^{+0.30}$&$3.73_{-0.21}^{+0.31}$&$4.06_{-0.18}^{+0.24}$&$4.0_{-0.20}^{+0.17}$&$3.63_{-0.16}^{+0.27}$\\ \specialrule{0em}{1pt}{1.1pt} &$R_\mathrm{r}$&$0.18\pm0.03$&$0.19_{-0.02}^{+0.03}$&$0.17\pm0.02$&$0.29\pm0.06$&$0.27_{-0.02}^{+0.04}$&$0.29\pm0.03$&$0.39_{-0.06}^{+0.09}$ \\ \specialrule{0em}{1pt}{1.1pt} &$N_\mathrm{r}$&$0.115_{-0.012}^{+0.025}$&$0.069_{-0.008}^{+0.012}$&$0.07_{-0.008}^{+0.009}$&$0.049_{-0.009}^{+0.013}$&$0.034\pm0.003$&$0.026\pm0.003$&$0.046_{-0.014}^{+0.009}$ \\ \midrule & $\chi^2$ &\multicolumn{7}{c}{456.69} \\ & $\nu$ & \multicolumn{7}{c}{454} \\ & $\chi^2_{\nu}$ & 
\multicolumn{7}{c}{1.01} \\ \bottomrule \end{tabular} \begin{tablenotes} \item \textbf{Notes.} Columns 3-9 show successively the results of Spec. A-G. The parameters with ``f'' in parenthesis indicate that they were fixed at the values given. All errors for one parameter of interest were calculated at the 90\% confidence level. \item \textbf{1.} The iron abundance $A_\mathrm{Fe}$ of {\tt relxill} was linked together among the different spectra. \item \textbf{2.} Parameters including the temperature $T_\mathrm{col}$, normalization constant $N_\mathrm{DISC}$, emissivity index $q$, photon index $\Gamma _\mathrm{r}$, ionization state $\mathrm{log}\xi$, reflection fraction $R_\mathrm{r}$ and the normalization $N_\mathrm{r}$ were independent for each spectrum. \end{tablenotes} \end{center} \end{threeparttable} \end{table*} \section{Conclusions} \label{sec:con} We have carefully measured the spin of 4U 1543 by modeling its reflected components in 7 SPL state observations. The spectra consist of four different components: the galactic absorption, thermal emission from the accretion disc, power-law emission, and reflected emission. We performed a joint fit to all the spectra to improve the signal-to-noise ratio of the X-ray reflected component, using the reflection model {\tt relxill}. We find a super-solar iron abundance for the disc. At the same time, the disc is highly ionized. The model with free inclination angle and spin (Model 1) describes the spectra best. The inclination angle of the inner accretion disc is constrained to be $36.3_{-3.4}^{+5.3}$ degrees at 90\% statistical confidence. When the inclination angle is fixed at the orbital inclination value of 21.0$^{\circ}$ in Model 2 or Model 3, the fit statistic becomes significantly worse, and the spin is larger than 0.83. The best-fitting inclination differs from that of the orbital plane by more than 10 degrees. This may be due to the systematic limitations of current models, which underestimate the density of the disc.
Our results indicate a moderate rotation rate for the black hole in 4U 1543. The spin parameter is established to be $0.67_{-0.08}^{+0.15}$ at 90\% statistical confidence. At the 99\% statistical confidence level, we exclude spins of $a_{*} < 0.5$ (which also excludes any retrograde geometries). \section*{Acknowledgements} We thank Prof. J. Orosz, Prof. Youjun Lu, Dr. Erlin Qiao, Dr. Weiwei Xu, and Dr. Zhu Liu for useful discussions. We would also like to thank the reviewer for his/her valuable input. Lijun Gou is supported by the National Program on Key Research and Development Project through grant No. 2016YFA0400804, by the National Natural Science Foundation of China with grant No. U1838114, and by the Strategic Priority Research Program of the Chinese Academy of Sciences through grant No. XDB23040100. We also acknowledge the use of RXTE/PCA public data and facilities. This work made use of tools available at the High Energy Astrophysics Science Archive Research Centre (HEASARC), belonging to NASA's Goddard Space Flight Centre (GSFC). \bibliographystyle{mnras}
\section{Introduction} \label{sec:1} After the celebrated address delivered by H. Minkowski at the Assembly of German Natural Scientists and Physicians in 1908 \cite{Mink1908}, which was devoted to space and time, all physical quantities that were previously described as vectors in 3-D Euclidean space were recast accordingly to form either vectors in a 4-D pseudo-Euclidean space-time (like the 4-momentum) or tensorial objects (like the electromagnetic field). The magnitude of all 4-vectors that are used in special relativity is defined as \[ ||A||^2=A^\mu A_\mu=-(A^0)^2+\textbf{A}^2, \] where $\textbf{A}$ is the 3-D vector that represents the spatial part of $A^\mu$. This expression resembles the Pythagorean theorem relating the sides of a right triangle, with the time component $A^0$ playing the role of the triangle's hypotenuse. This property has been extensively used in many introductory textbooks devoted to Special Relativity \cite{TaylorWheeler} in order to visualize relations that apply between various physical quantities which become interrelated in the framework of Minkowski's space-time. However, there is an intrinsic problem in drawing such right triangles in a generic situation: one has to draw 4-D objects, since one of the sides of the triangle is already a vector living in the 3-D Euclidean space, while the other perpendicular side should extend beyond this 3-D space. This practical problem arises whenever one deals with more than two such triangles, since then there is not enough space to draw everything. In all cases explored here we will show that the most generic situation can always be analyzed with only two such triangles, the vector sides of which define a plane, while the third dimension can be used to develop the other perpendicular sides of the corresponding triangles.
In this paper we demonstrate the use of such right triangles in analyzing elementary-particle reactions; the corresponding sides of each triangle will represent the energy, $E$, the 3-momentum, $\textbf{p}$, and the rest mass, $m$, of each particle (we will work with units where the velocity of light is $c=1$). This diagrammatic tool has hardly been used in textbooks for these physical quantities (for a recent article that refers to this tool see \cite{Okun}), although, as will be shown later, the quantitative use of such geometric tools is in many cases advantageous compared to the usual analytic methods. Solving problems concerning particles that collide with each other and produce a number of new particles in the usual algebraic way is quite often a rather complicated procedure, mainly due to the fact that, apart from energy and momentum conservation, there are also constraints of quadratic form from the energy-momentum-mass relation. Thus if one does not follow the cleverest way to analyze the problem, computations may become really messy. The situation resembles a labyrinth; if one does not know the right path connecting the starting point with the endpoint, various paths leading to dead ends will be chosen until the aim is accomplished. By using right triangles for each particle, arranged in space in a convenient way, we can get answers about all physical quantities in a much more straightforward fashion. Moreover, these geometric constructions can easily be used to draw preliminary qualitative conclusions, like ``is a reaction allowed or is it forbidden?'', or ``could the angle between the lines of motion of two products exceed $90^\circ$ or not?'' Also, if one intends to construct a new problem, the graphical analysis through right triangles may help in posing well-defined questions that have a clear answer.
The only difficulty in this geometrical approach to solving or sketching problems is to draw a suitable 3-D construction of right triangles. We will demonstrate the power of this geometrical method by solving accordingly a few characteristic problems. For each problem we will draw the right triangles in such an arrangement in 3-D space that answers come out quickly and easily. \section{Orthogonal triangles corresponding to particles} \label{sec:2} As mentioned previously, the basic unit in our geometric construction will be a right triangle with hypotenuse $E$ and two perpendicular sides with magnitudes $|\textbf{p}|$ and $m$ respectively, where $E,\textbf{p},m$ are the energy, the 3-momentum, and the mass of a single particle (see Fig.~\ref{fig:1}), all of them measured in a specific inertial frame of reference. Since $\textbf{p}$ is a vector in a 3-D Euclidean space, the corresponding side will be represented by a vector; hence particular attention should often be paid to arranging the vectorial side of the triangle along the corresponding direction in a 3-D drawing. The kinetic energy of a particle is defined as \begin{eqnarray} T=E-m, \end{eqnarray} thus it is represented by the difference between the corresponding sides of the right triangle. Also the velocity of the particle in the particular frame of reference is given by the ratio \begin{eqnarray} \mathbf{v}=\frac{\mathbf{p}}{E}, \end{eqnarray} that is, by the cosine of the angle subtending the mass side of the triangle. \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=13pc] {Apos_fig1.eps}} \caption{\label{fig:1} This is the basic geometric unit that will be used throughout the paper to obtain geometric solutions in a wide variety of problems related to relativistic collisions. It is a right triangle that represents the relativistic relation $E^2=\textbf{p}^2+m^2$.
For the two extreme cases (classical particle and highly relativistic particle) the corresponding triangles are like degenerate two-sided triangles. For a photon-like particle one of the sides of the right triangle has exactly zero length.} \end{center} \end{figure} The diagrams corresponding to the two extreme cases, that of a classical particle and that of a highly relativistic one, should be considered separately. The former will be represented by a degenerate right triangle with its hypotenuse, $E$, being almost equal to the $m$ side, while the latter will be represented again by a degenerate right triangle with its hypotenuse, $E$, being almost equal to the $|\textbf{p}|$ side. By Taylor expansion we see that in the former (classical) case \begin{eqnarray} E=\sqrt{m^2+\textbf{p}^2}=m \sqrt{1+\frac{\textbf{p}^2}{m^2}}\cong m+ \frac{\textbf{p}^2}{2 m}, \end{eqnarray} the last term being the classical Newtonian kinetic energy, while in the latter (ultra-relativistic) case the corresponding relation is \begin{eqnarray} E \cong |\textbf{p}|+ \frac{m^2}{2 |\textbf{p}|}, \end{eqnarray} by mere similarity of the corresponding sides ($m \leftrightarrow |\textbf{p}|$) in these two extreme cases. Of course photons, or any other zero rest-mass particles, will be represented by a degenerate right triangle with two equal sides ($E=|\textbf{p}|$). At this point we should note that the shape of any triangle depends on the frame of reference in which the specific particle is observed (two of its sides, $E$ and $|\textbf{p}|$, are frame dependent). Therefore a geometric construction of many such right triangles in the same diagram will refer to a single frame of reference for all particles.
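The right-triangle relation and its two degenerate limits are easy to check numerically. The following minimal sketch is our own (not part of the paper), in units with $c=1$; the function name is an arbitrary choice.

```python
import math

def energy(p, m):
    """Hypotenuse of the right triangle with perpendicular sides |p| and m."""
    return math.sqrt(p * p + m * m)

# Classical limit (|p| << m): E ~ m + p^2/(2m), the Newtonian kinetic energy.
m, p = 1.0, 1e-2
assert abs(energy(p, m) - (m + p**2 / (2 * m))) < 1e-8

# Ultra-relativistic limit (m << |p|): E ~ |p| + m^2/(2|p|).
m, p = 1e-2, 1.0
assert abs(energy(p, m) - (p + m**2 / (2 * p))) < 1e-8

# The velocity v = |p|/E is the cosine of the angle subtending the mass side,
# and is always below the speed of light (c = 1).
m, p = 1.0, 2.0
assert 0.0 < p / energy(p, m) < 1.0
```

The symmetry between the two expansions mirrors the interchange of the two perpendicular sides ($m \leftrightarrow |\textbf{p}|$) of the triangle.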
Although in the algebraic analysis of problems related to relativistic collisions we usually change the frame of reference to make computations easier, there is no need to appeal to such tricks in geometric solutions, since the geometric conclusion will be clear independently of the frame of reference used to draw all triangles. Actually, by avoiding Lorentz transformations to shift frames of reference we avoid a lot of algebraic computations. The geometric solutions are based on conservation of 4-momentum in relativistic reactions, that is, simultaneous conservation of the total energy and the total 3-momentum of all particles that participate in a reaction. Therefore all geometric constructions that correspond to allowed arrangements of the kinematical characteristics of the particles involved share a common total length of $E$-sides and a common vectorial sum of $\textbf{p}$ sides. Next, we proceed to describe some properties regarding the extrema of specific quantities for a number of particles. These general propositions, which we will prove graphically, will later be used in some of the problems that we will analyze by means of our geometrical method. \noindent\textbf{Proposition I:} The total 3-momentum of $N$ particles with energies $E_i$ and masses $m_i$, respectively ($i=1,2,\ldots,N$), is maximized if all particles are moving along the same direction. \noindent\textbf{Proof:} This proposition is quite obvious, and there is no need to use any right triangles to prove it. However the proof is purely geometrical, as in the forthcoming propositions and problems. Given the magnitudes of all energies and masses, the magnitudes of all 3-momenta are fixed: $|\textbf{p}_i|=\sqrt{E_i^2-m_i^2}$. Therefore if we connect all these vectors head-to-tail we construct the total 3-momentum of the system of particles (which could be either the reacting particles or the products).
From the triangle inequality the total 3-momentum of all particles $\textbf{p}_\textrm{tot}$ satisfies the inequality \begin{eqnarray} |\textbf{p}_{\textrm{tot}}| \leq |\textbf{p}_1|+|\textbf{p}_2|+\ldots+|\textbf{p}_N|, \end{eqnarray} where the equality holds if $\textbf{p}_i/|\textbf{p}_i| = \textbf{p}_j/|\textbf{p}_j|$ for all $i,j=1,2,\ldots,N$, that is when all the 3-momenta are aligned with each other (common direction of motion for all particles). Intuitively we could just say that the head of the last 3-momentum vector and the tail of the first one are furthest apart when the broken line of the corresponding vectors is straightened. \noindent\textbf{Proposition II:} Two or more particles can be considered as a single particle with respect to the geometric representation by right triangles. \noindent\textbf{Proof:} It is obvious that we could always construct a right triangle with one leg equal to the total 3-momentum of the particles \begin{eqnarray} {\bf p}_{\textrm{tot}}=\sum_i {\bf p}_i \end{eqnarray} and hypotenuse equal to the sum of the energies of all particles \begin{eqnarray} E_{\textrm{tot}}=\sum_i E_i \label{totEne} \end{eqnarray} since for each particle it holds that $E_i \geq |{\bf p}_i|$ (the equality holds only for massless particles). The latter sum (Eq.~(\ref{totEne})) could serve as a hypotenuse since \begin{eqnarray} E_{\textrm{tot}}=\sum_i E_i \geq \sum_i |{\bf p}_i| \geq \left| \sum_i {\bf p}_i \right| =|{\bf p}_{\textrm{tot}}|. \end{eqnarray} It should be noted though that this representation is not one-to-one; for a specific value of $E_{\textrm{tot}}$ and ${\bf p}_{\textrm{tot}}$ there is too much freedom in choosing the energy and 3-momentum of each particle.
The characteristic mass of the representative right triangle is not equal to the sum of the corresponding particles' masses; it is equal to $(E_\textrm{tot}^2-{\bf p}_\textrm{tot}^2 )^{1/2}$, which in its turn is equal to the total energy in the center-of-momentum frame for these particles (the frame in which ${\bf p}_{\textrm{tot}}=0$). In Propositions III and IV we will actually replace many particles in the corresponding inductive proofs by a single one. As we shall see, in both cases the minimization/maximization procedure leads to equal velocities for all particles. This is a distinct representation of many particles by a single composite particle. It is the only case where the representation is one-to-one, since then the rest mass in the center-of-momentum frame is exactly equal to the sum of all rest masses. From the point of view of our geometrical method there is only one way to construct many similar right triangles with one of their sides (the mass sides) given; this corresponds to particles with equal velocities. \noindent\textbf{Proposition III:} For a given total 3-momentum of $N$ particles, the sum of their energies $\sum_i E_i$ assumes its minimum value when all particles move with the same velocity $\textbf{v}$. \noindent\textbf{Proof:} Let us begin by assuming that there are only two particles. Consider the geometric construction of Fig.~\ref{fig:2}, depicting two right triangles with sides $E_1,m_1,\textbf{p}_1$, and $E_2,m_2,\textbf{p}_2$. From now on we will call the plane defined by the vector sides $\textbf{p}_1,\textbf{p}_2$ the \textit{momenta screen}, since we are going to use this plane extensively to move the vectors of 3-momenta around. On this plane we have also drawn the total 3-momentum of the two particles, which according to Proposition III is assumed to be fixed. Any point B on the momenta screen represents a specific split of the 3-momenta between the two particles.
When two line segments, $\textrm{AA}^\prime$ and $\Gamma\Gamma^\prime$, with lengths equal to the masses of the two particles are drawn perpendicular to the plane of the momenta screen as in Fig.~\ref{fig:2}, the total length of the hypotenuses $\textrm{AB}$, $\textrm{B}\Gamma$ corresponds to the total energy of the particles. Clearly the shortest distance between the two points $\textrm{A}$ and $\Gamma$ (which corresponds to the minimum total energy) is the straight distance $\textrm{A}\Gamma$. This straight line intersects the vector $\textbf{p}_\textrm{tot}$ at a point $\textrm{K}$, forming two similar right triangles $\textrm{AA}^\prime\textrm{K}$ and $\Gamma\Gamma^\prime\textrm{K}$. From the similarity of the two triangles the equality of the ratios of masses and 3-momenta is apparent. By induction we can generalize our result to more than two particles: for $N$ particles we represent the first $N-1$ particles by a single right triangle according to Proposition II and minimize the total energy of the $N$-th particle and this composite particle. Then we proceed to minimize the energy of the first $N-1$ particles, and so on. Thus we conclude that the total energy of all particles is minimized when the total 3-momentum is split in $N$ segments proportional to the corresponding masses of the particles. Then the right triangles that correspond to all particles are similar triangles, and thus all particles have the same velocity (cf.~Fig.~\ref{fig:1}). Note that in the case where three or more particles are involved we cannot draw a single momenta screen and then construct all the corresponding right triangles perpendicular to such a plane, since three or more arbitrary vectors do not lie in the same plane. This explains why we had to appeal to induction to prove this proposition.
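The two-particle case of Proposition III can be checked by brute force: scanning point B over the momenta screen (a sketch of ours, in units with $c=1$) locates the minimum of $E_1+E_2$ at the colinear split proportional to the masses, where the composite mass $\sqrt{E_\textrm{tot}^2-\textbf{p}_\textrm{tot}^2}$ saturates its lower bound $m_1+m_2$.

```python
import math

def total_energy(p1x, p1y, Px, m1, m2):
    # Particle 1 carries (p1x, p1y); particle 2 the remainder of (Px, 0).
    e1 = math.sqrt(p1x**2 + p1y**2 + m1**2)
    e2 = math.sqrt((Px - p1x)**2 + p1y**2 + m2**2)
    return e1 + e2

Px, m1, m2 = 3.0, 1.0, 2.0
# Scan point B over the momenta screen on a fine grid.
best = min((total_energy(0.01 * i, 0.01 * j, Px, m1, m2), 0.01 * i, 0.01 * j)
           for i in range(-100, 401) for j in range(-100, 101))
E_min, p1x, p1y = best

# Minimum at the colinear split p1 = P*m1/(m1+m2), i.e. equal velocities.
assert abs(p1x - Px * m1 / (m1 + m2)) < 0.02 and abs(p1y) < 0.02
# There the composite mass equals m1 + m2 (cf. Proposition II).
assert abs(math.sqrt(E_min**2 - Px**2) - (m1 + m2)) < 1e-2
```

At the optimum both triangles are similar: $v = |\textbf{p}_1|/E_1 = |\textbf{p}_2|/E_2 = 1/\sqrt{2}$ for the values chosen here.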
\begin{figure}[h] \begin{center} \centerline{\includegraphics[width=19pc] {Apos_fig2.eps}} \caption{\label{fig:2} In order to minimize the total energy of two particles we should split their total 3-momentum in two parts with ratio equal to the ratio of their masses (point B should be moved to point K). The mass segments have been drawn perpendicular to the plane of the momenta screen.} \end{center} \end{figure} At this point we could add a bit of information to Proposition II. According to Proposition III the mass of the composite particle described by a single right triangle, which represents a number of particles (see Proposition II), \begin{eqnarray} M_{\textrm{comp}}= \sqrt{ E_{\textrm{tot}}^2-{\bf p}_{\textrm{tot}}^2 }, \label{Mcomposite} \end{eqnarray} is no lower than the total rest mass of the specific particles, since for a given $|{\bf p}_{\textrm{tot}}|$ of such particles their total energy is minimized when all particles are moving with the same velocity. As noticed in the analysis of Proposition II, this optimum case corresponds to $M_{\textrm{comp}} = \sum_i m_i$. Thus \begin{eqnarray} M_{\textrm{comp}} \geq \sqrt{ (E_{\textrm{tot}}\left.\right|_{\min})^2-{\bf p}_{\textrm{tot}}^2 } = \sum_i m_i. \end{eqnarray} \noindent\textbf{Proposition IV:} For a given total energy of $N$ particles the sum of their 3-momenta $\sum_i \textbf{p}_i$ assumes its maximum magnitude if all particles are moving with the same velocity $\mathbf{v}$. This is simply the converse of Proposition III. \noindent\textbf{Proof:} We shall use induction once again. Referring to Fig.~\ref{fig:3}, which depicts the right triangles of two particles, it is clear that we are free to move the mass segments $m_1,m_2$ (which stick out of the plane of the momenta screen in a perpendicular fashion) around, as long as we keep the sum of the two hypotenuses, $E_1+E_2$, represented by the length of segments $\textrm{AB}$ and $\textrm{B}\Gamma$, fixed.
The total 3-momentum of the two particles is the distance between the projections of points $\textrm{A}$ and $\Gamma$ on the momenta screen. In order to render this distance maximum we have to move the two mass segments as far as possible from each other. The maximum distance between the mass segments, compatible with the total energy constraint, is achieved when the line $\textrm{AB}\Gamma^\prime$ is a straight line. Then the two right triangles that form are similar, and the ratio of the two masses $m_1/m_2$ equals the ratio of the magnitudes of the two 3-momenta $|\mathbf{p}'_1|/|\mathbf{p}'_2|$, while both 3-momenta lie along the same line. This configuration corresponds again to a common velocity for both particles. For more than two particles, we first maximize the 3-momentum of the first pair of particles while keeping the sum of their energies constant. Then we replace these two particles by a single one which moves with the common velocity of the two particles and has mass equal to the sum of the masses of the initial particles, and then we proceed further. At every step we maximize the total 3-momentum, without changing the total energy, by attributing the same velocity to all particles considered up to that point. \begin{figure}[h] \begin{center} \scalebox{0.50}{\includegraphics{Apos_fig3.eps}} \caption{\label{fig:3} The procedure of maximizing the magnitude of the total 3-momentum of two particles for a given total energy. Initially the two right triangles are not coplanar. By moving the mass segments $m_1,m_2$ around we maximize the total 3-momentum by putting them as far apart as possible. Then two similar right triangles form. This situation corresponds to the same velocity for both particles.} \end{center} \end{figure} As mentioned before, the similarity of triangles that arises from the graphical solution of the last two propositions corresponds to the case where all particles are moving with the same velocity. 
This is also the velocity of the center-of-momentum frame. In this frame all particles are then at rest. Therefore this minimization/maximization is achieved when all particles are at rest in their center-of-momentum frame. The following problems of relativistic reactions will be presented in order of increasing difficulty of the corresponding analytic solutions. On the other hand, applying the geometric tools to draw at least qualitative conclusions is equally easy in all cases, and much more direct. Thus, although in the first couple of problems one may argue that the geometric method does not offer an easier solution than the usual algebraic one, for the rest of the problems the directness of the geometric proof is apparent. \section{Problem 1} \label{sec:3} We start familiarizing ourselves with the geometric constructions by treating the following simple problem: Show that a photon cannot split spontaneously into two or more massive particles (as in pair production). \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=20pc] {Apos_fig4.eps}} \caption{\label{fig:4} The geometric construction for the spontaneous decay of a photon.} \end{center} \end{figure} In Figure \ref{fig:4} we have drawn on the momenta screen the initial 3-momentum ${\bf p}$ of the photon, the magnitude of which equals the energy of the photon $E$. The 3-momenta of two new hypothetical particles have also been drawn on the momenta screen, as well as their energies coming out of this plane. However, the inequality \begin{eqnarray} E_1+E_2=\textrm{AB}+\textrm{B}\Gamma \geq \textrm{A}\Gamma > \Delta\textrm{Z}=E \end{eqnarray} always holds. Therefore, due to conservation of energy this type of reaction is forbidden. The same is also true for more than two particles, since all but one of the particles could be replaced by a single representative triangle according to Proposition II. 
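The inequality above is nothing but the statement that the combined hypotenuses of two massive right triangles are always longer than the photon's degenerate one. A brute-force random check (with arbitrary illustrative masses and momenta) confirms that energy conservation can never be satisfied for this hypothetical decay:

```python
import math, random

random.seed(0)

def energy(m, px, py, pz):
    """Relativistic energy for mass m and 3-momentum (px, py, pz), c = 1."""
    return math.sqrt(m * m + px * px + py * py + pz * pz)

# For every split of the photon 3-momentum into two massive particles,
# the final energy strictly exceeds the photon energy |p_tot|,
# so energy conservation can never be satisfied.
for _ in range(10_000):
    m1, m2 = random.uniform(0.01, 5), random.uniform(0.01, 5)
    p1 = [random.uniform(-5, 5) for _ in range(3)]
    p2 = [random.uniform(-5, 5) for _ in range(3)]
    p_tot = [a + b for a, b in zip(p1, p2)]
    E_final = energy(m1, *p1) + energy(m2, *p2)
    E_photon = math.sqrt(sum(c * c for c in p_tot))
    assert E_final > E_photon
```

The gap is bounded below by $\sqrt{(m_1+m_2)^2+{\bf p}^2}-|{\bf p}|>0$, so the check can never fail for nonzero masses.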
The case of a single massive particle produced by a photon is obviously forbidden, since a degenerate triangle (photon) cannot coincide with a non-degenerate triangle (massive particle). \section{Problem 2} \label{sec:4} Next, we show that particle creation from a single photon is allowed, as long as a massive particle (usually an atomic nucleus), assumed to be initially motionless, is used to interact with the photon, while after the reaction this particle remains unaltered. Therefore the massive particle plays the role of a catalyst for such a reaction. We will also obtain the sufficient condition for this reaction to happen. \begin{figure}[h] \begin{center} \scalebox{0.60}{\includegraphics{Apos_fig5.eps}} \caption{\label{fig:5} The geometric construction corresponding to the allowed production of a pair of massive particles through interaction of a photon with a massive target particle.} \end{center} \end{figure} Let us depict the photon as a degenerate two-sided triangle ($E_\gamma=|{\bf p}_\gamma|$), and the massive particle used to interact with the photon again as a degenerate two-sided triangle (since it is motionless) with $E=M$ (see Figure \ref{fig:5}). If two new particles ($\#1,\#2$) are finally produced, we draw on the momenta screen their total 3-momentum as a ${\bf p}_{12}$ vector and the 3-momentum of the target particle as ${\bf p}'$. Of course ${\bf p}_{12}+{\bf p}'={\bf p}_\gamma$, while conservation of energy requires that \begin{eqnarray} \textrm{AB}+\textrm{B}\Gamma=E_{12}+E'=E_\gamma+M=\Delta\textrm{Z}+\textrm{Z}\Gamma, \label{prb2} \end{eqnarray} where the subscript $_{12}$ is used to denote the 3-momentum and energy of a composite particle that corresponds to the two new massive particles that are produced (see Proposition II) and ${\bf p}',E'$ are the 3-momentum and the energy of the massive target after the collision, respectively. The composite particle is depicted by the right triangle $\textrm{A}\Delta\textrm{B}$. 
The massive target particle will in general be set in motion after the reaction, and the corresponding right triangle for this particle is $\textrm{BZ}\Gamma$. Therefore the reaction is allowed to take place, as long as the geometric equality of Eq.~(\ref{prb2}) holds. \begin{figure}[h] \begin{center} \scalebox{0.60}{\includegraphics{Apos_fig6.eps}} \caption{\label{fig:6} The configuration of right triangles which corresponds to a minimum total energy. This configuration yields the threshold energy of a photon that could lead to pair production, if the angle $\phi$ is such that $\Delta \textrm{Z}+\textrm{Z} \Gamma=\textrm{A} \Gamma$.} \end{center} \end{figure} Now let us find the condition for such a reaction to be possible, that is, the minimum energy the photon must have in order to produce the two massive particles. From Proposition III, the minimum total energy of particles $\#1,\#2$, and the massive target particle is achieved when all 3-momenta ${\bf p}_1,{\bf p}_2$, and ${\bf p}'$ are parallel to each other and have magnitudes proportional to their masses, $m_{1},m_2$, and $M$, respectively. This extremal situation, if allowed, sets the threshold condition for the reaction to take place. Therefore, the necessary condition for the pair production (via a motionless massive target) to take place is that the initial available energy $E_\gamma+M$ be equal to the minimal total energy of all products. The geometric construction of triangles that corresponds to this extremal situation is shown in Figure \ref{fig:6}, and the required condition is that there be a point $\Gamma$ along the dashed line (which lies at a distance $m_1+m_2+M$ from point $\textrm{A}$) such that the length of $\textrm{A}\Gamma$ equals the length of $\Delta\textrm{Z}+\textrm{Z}\Gamma$. 
It is intuitively obvious that such a point $\Gamma$ always exists: while the point $\Gamma$ moves along the dashed line the difference between the two lengths varies continuously, with the former length ($\textrm{A}\Gamma$) being longer than the latter one ($\Delta\textrm{Z}+\textrm{Z}\Gamma$) when the point $\Gamma$ lies near the projection of $\textrm{A}\Delta$ on the dashed line, while the two lengths are in the opposite relation to each other when point $\Gamma$ is moved far away ($\Delta\textrm{Z}\gg m_1+m_2+M$), since then \begin{widetext} \begin{eqnarray} \textrm{A}\Gamma=\sqrt{(\Delta\textrm{Z})^2+(m_1+m_2+M)^2} \simeq \Delta\textrm{Z} <\Delta\textrm{Z}+M. \end{eqnarray} Therefore, whatever the mass of the target, there is always a sufficiently energetic photon that could lead to the production of the two specific massive particles. Let us now quantify this conclusion. From the triangles depicted in Figure \ref{fig:6} we have \begin{eqnarray} (E_1+E_2+E')_{\min}=\textrm{A}\Gamma=E_{\gamma}^{\textrm{(thres)}} +M=\textrm{A}\Gamma(\cos\phi+\frac{M}{M+m_1+m_2}\sin\phi), \end{eqnarray} where $\phi=\widehat{\Delta\textrm{BA}}$. This leads to the algebraic relation \begin{eqnarray} \cos\phi+\frac{M}{M+m_1+m_2}\sin\phi=1. \end{eqnarray} Expressing all trigonometric functions appearing in the above formula in terms of $\tan\phi$, we obtain the solution for $\phi$ \begin{eqnarray} \tan\phi=\frac{2 M (M+m_1+m_2)}{(M+m_1+m_2)^2-M^2}. \end{eqnarray} Therefore the photon should have at least energy equal to \begin{eqnarray} E_\gamma^{\textrm{(thres)}}= |{\bf p}_\gamma^{\textrm{(thres)}}|= \frac{M+m_1+m_2}{\tan\phi}=(m_1+m_2)\left(1+\frac{m_1+m_2}{2 M} \right), \label{Ethresh2} \end{eqnarray} in order to be able to produce the particular pair of particles. 
\end{widetext} If the photon has energy higher than this minimum value, the conservation of energy is not satisfied by this optimum configuration (it would then be $\Delta\textrm{Z}+\textrm{Z}\Gamma>\textrm{A}\Gamma$ in Fig.~\ref{fig:6}). One should then arrange the 3-momenta of all particles after the reaction in such a way that the total energy exceeds this minimum total energy by breaking the constraint of having all 3-momenta corresponding to equal velocities for all three particles, thus permitting the line $\textrm{A}\Gamma$ to be a broken one as in Fig.~\ref{fig:5}. From Eq.~(\ref{Ethresh2}) we observe that for $M\gg m_1+m_2$, the threshold energy for the photon is approximately $E_\gamma^{\textrm{(thres)}} \simeq m_1+m_2$. Now we will directly demonstrate this fact geometrically without going through the general result (\ref{Ethresh2}). We draw a segment of length $\Gamma\textrm{Z}= M$, and with center $\textrm{Z}$ we draw a circle of radius $r=E_\gamma<M$. We also draw another circle of radius $R=E_\gamma+M$ (which is the initial energy of the system), having its center at $\Gamma$. The broken line $\Gamma \textrm{Z} \Delta \textrm{A}$ (see Fig.~\ref{fig:7}) with $\widehat{\textrm{Z}}=\widehat{\Delta}=90^\circ$, and $\Delta \textrm{A}=m_1+m_2$ constitutes the configuration of masses and 3-momenta for the products of the reaction which corresponds to the minimum total energy (cf.~Fig.~\ref{fig:6}). In this case the initial 3-momentum of the photon is distributed proportionally to $M$ and $m_1,m_2$ after the reaction in such a way that all particle-triangles are similar to each other. From the drawing in Fig.~\ref{fig:7} it is clear that the radius of the small circle $E_\gamma^{\textrm{(thres)}}$ should be approximately equal to $\Delta \textrm{A}$, that is equal to $m_1+m_2$ when $M\gg m_1+m_2$, with a relative error of order $r/(2 R)=(m_1+m_2)/[2(M+m_1+m_2)] \cong (m_1+m_2)/(2 M)$. 
This error represents the segment missing for the drawing in Fig.~\ref{fig:7} to close into an exact square, and accounting for it leads to the exact expression for the threshold energy (see Eq.~\ref{Ethresh2}). \begin{figure}[h] \begin{center} \scalebox{0.60}{\includegraphics{Apos_fig7.eps}} \protect\caption{\label{fig:7} In the case of a very massive target, the configuration that corresponds to the threshold for a pair production is represented by the depicted geometrical structure: $\Gamma\textrm{Z}=M$, $\textrm{Z} \Delta=r=E_\gamma^{\textrm{(thres)}}=|{\bf p}_\gamma^{\textrm{(thres)}}|$, $\Delta \textrm{A}=m_1+m_2$, and $\Gamma \textrm{A}=R=$ $M+E_\gamma^{\textrm{(thres)}}$. From this drawing it is obvious that for $R\gg r$, $m_1+m_2=\Delta \textrm{A} \protect\cong r= E_\gamma^{\textrm{(thres)}}$. The error of this approximation is of order $r^2/2R$ (deviation of a circle from its tangent).} \end{center} \end{figure} \section{Problem 3} \label{sec:5} The following problem generalizes Problem 2. Let us assume that a particle ${\cal A}$ hits a motionless target particle ${\cal B}$, and a number of particles ${\cal C}_i$ ($i=1,2,\ldots$) emerge after the collision (the initial particles could be among the products). The question is what is the minimum energy of particle ${\cal A}$ that renders this particular reaction possible. Any particle, apart from ${\cal B}$, could be massless. Following the same line of arguments as in Problem 2, we look for the minimum energy required to obtain the optimal final configuration, which is the one with all particles moving with the same velocity (according to Proposition III). We now discern two cases: (a) The produced particles have total mass less than, or equal to, the total mass of the initial particles. (b) The produced particles have larger total mass than the initial particles. In case (a) the optimum configuration cannot comply with the conservation of energy. 
The argument goes as follows: if the total mass of the products were equal to the total mass of the initial particles, the optimal configuration for the products would correspond to a right triangle (see Fig.~\ref{fig:8}) with its two perpendicular sides equal to the total 3-momentum ($\textrm{ZA}$) and to the total mass of the products ($\textrm{A}\Gamma$), respectively (remember that in the minimum energy configuration --according to Proposition III-- all particles are moving at the same speed; consequently all right triangles are similar and they could all be combined in a single right triangle with its orthogonal sides equal to the sum of all masses, and the total 3-momenta, respectively, while the hypotenuse is equal to the total energy). Thus the energy would be represented by the segment $\textrm{Z}\Gamma$. However, by the triangle inequality, this is shorter than $\textrm{ZB}+\textrm{B}\Gamma$, which represents the total energy of the initial particles. For lower total mass of products the inequality is even stronger; then the segment $\textrm{Z}\Delta^{(a)}$ in Figure \ref{fig:8}, which represents the energy of the produced particles in their optimum configuration, is shorter than the available energy of the initial particles. Thus in case (a) we could increase the final energy (in order to comply with energy conservation) by choosing a non-optimal configuration for the products, without changing the total 3-momentum of the system of particles. Hence the reaction can happen by following such a non-optimal configuration of 3-momenta. In case (b) we will look for the minimum required 3-momentum of particle ${\cal A}$ to make conservation of energy hold under the optimum configuration of the products; if $|{\bf p}_{\cal A}|=|{\bf p}|$ exceeds this threshold value, a non-optimal arrangement of products' 3-momenta could again be followed to make conservation of energy hold. 
\begin{figure}[h] \begin{center} \scalebox{0.7}{\includegraphics{Apos_fig8.eps}} \caption{\label{fig:8} Both cases, (a) and (b), are depicted in this diagram. Case (a): when $M_{\cal C} \leq M_{\cal A}+M_{\cal B}$, in order to comply with energy conservation, we should have a non-optimum configuration; therefore there is no restriction for $E_{\cal A}$. Case (b): when $M_{\cal C} > M_{\cal A}+M_{\cal B}$, there is a minimum value of $|{\bf p}_{\cal A}|$, and thus a minimum value of $E_{\cal A}$, for which the reaction is allowed to happen by following the optimal energy configuration for the products.} \end{center} \end{figure} In Figure \ref{fig:8} we have also drawn the triangle that corresponds to the optimal configuration for the products in case (b), which is supposedly compatible with the energy conservation: $\textrm{AZ}$ is the threshold magnitude of 3-momentum of particle ${\cal A}$ (and thus of all products), $\textrm{AB}$ is the mass of the incident particle, ${\cal A}$, $\textrm{B}\Gamma$ is the mass of the target particle ${\cal B}$, and $\textrm{A}\Delta^{(b)}=M_{\cal C}$ is the sum of masses of all products. From conservation of energy $E_\textrm{tot}=E_{\cal A}+M_{\cal B}$. Thus \begin{eqnarray} \begin{split} (\textrm{AZ})^2 + (\textrm{A}\Delta^{(b)})^2= &|{\bf p}|^2 + M_{\cal C}^2 = \\ E_\textrm{tot}^2 =& (E_{\cal A}+M_{\cal B})^2, \end{split} \end{eqnarray} and \begin{eqnarray} (\textrm{AZ})^2 + (\textrm{AB})^2 = |{\bf p}|^2 + M_{\cal A}^2 = E_{\cal A}^2. \end{eqnarray} Subtracting these two relations and solving for $E_{\cal A}$ we obtain the desired threshold energy \begin{eqnarray} E_{\cal A}=\frac{M_{\cal C}^2 - M_{\cal A}^2 - M_{\cal B}^2 }{2 M_{\cal B}}, \end{eqnarray} which is definitely positive since $M_{\cal C} > M_{\cal A} + M_{\cal B}$. 
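This threshold can be cross-checked numerically: at threshold the products move with a common velocity, i.e., as a single composite particle of mass $M_{\cal C}$, so conservation reads $\sqrt{M_{\cal A}^2+p^2}+M_{\cal B}=\sqrt{M_{\cal C}^2+p^2}$; solving this for $p$ by bisection and computing $E_{\cal A}$ reproduces the closed form. A sketch with arbitrary illustrative masses:

```python
import math

def threshold_energy(MA, MB, MC):
    """Closed-form threshold energy of projectile A (target B at rest)."""
    return (MC * MC - MA * MA - MB * MB) / (2 * MB)

def threshold_energy_numeric(MA, MB, MC):
    """At threshold all products move together as one composite of mass MC,
    so conservation reads sqrt(MA^2+p^2) + MB = sqrt(MC^2+p^2).
    Solve for the projectile momentum p by bisection (f is increasing)."""
    f = lambda q: math.hypot(MA, q) + MB - math.hypot(MC, q)
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return math.hypot(MA, 0.5 * (lo + hi))   # E_A at threshold

# General case (illustrative masses): MA=1, MB=2, MC=5 gives (25-1-4)/4 = 5.
assert abs(threshold_energy(1, 2, 5) - 5.0) < 1e-12
assert abs(threshold_energy_numeric(1, 2, 5) - 5.0) < 1e-6

# Photon limit MA -> 0 recovers the threshold of Problem 2: M = 2, m1+m2 = 1.
M, msum = 2.0, 1.0
assert abs(threshold_energy(0.0, M, M + msum) - msum * (1 + msum / (2 * M))) < 1e-12
```

The last assertion checks that the general formula indeed collapses to $(m_1+m_2)\left(1+(m_1+m_2)/2M\right)$ when the projectile is massless.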
We should note once again that this final result generalizes the result of the previous problem, since by setting $M_{\cal A}=0$ (photon), $M_{\cal B}=M$, and $M_{\cal C}=m_1+m_2+M$ we obtain Eq.~(\ref{Ethresh2}). \section{Problem 4} \label{sec:7} What are the possible arrangements of the 3-momenta of two particles as a result of an elastic collision between them? By elastic collision we mean that the two particles remain unaltered after the collision, preserving their total kinetic energy, while no new particles are created. This is a problem that most clearly demonstrates the power of geometric constructions in obtaining qualitative results with respect to relativistic particle collisions. The classical analytic solution (see \cite{LandauRela}) requires one first to shift to the center-of-momentum frame by applying Lorentz transformations on the initial 4-momenta, then to analyze the possible arrangements of the particles' 4-momenta in this particular frame of reference after the collision, and finally to go back to the laboratory frame by performing an inverse Lorentz transformation of the post-collision 4-momenta. In our graphical method, we draw on the momenta screen a hypothetical possible configuration of the two particles' 3-momenta after the collision. From conservation of total 3-momentum these vectors should sum up to the total 3-momentum of the particles before the collision. Thus the two vectors on the momenta screen should be the two sides of a triangle with a fixed third side (the total 3-momentum). On the other hand, conservation of total energy means that the two 3-momentum vectors should have such magnitudes that the corresponding energies sum up to a fixed value (the total energy of the initial particles). Actually, the specific arrangement of the particles' 3-momenta before the collision is one of the solutions of all possible configurations we are looking for. 
One should keep in mind that the orientation of the plane of momenta screen is not known a priori, since only the total 3-momentum vector is given; thus the plane on which all configurations of 3-momenta that are compatible with conservation of both 3-momentum and energy are drawn should be rotated around the line of total 3-momentum to get all possible configurations of 3-momenta in space. For the moment we will just fix the orientation of this plane to find the locus of the heads of the particles' 3-momentum vectors that correspond to all possible relative arrangements of the two particles' 3-momenta on this particular plane. \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=18pc] {Apos_fig9.eps}} \caption{\label{fig:10} The intersection of the ellipsoid of constant energy by the plane of momenta screen provides all possible configurations of particles' 3-momenta in an elastic collision.} \end{center} \end{figure} To find these arrangements, we draw two segments (see Fig.~\ref{fig:10}) of length $m_1,m_2$, respectively, perpendicular to this plane so that one edge of the first segment is the tail of the first particle's 3-momentum (which is also the tail of the total 3-momentum), while one edge of the second segment is the head of the second particle's 3-momentum (which is also the head of the total 3-momentum). Both segments have been drawn on the same side of the plane (although they could be drawn on opposite sides without affecting the results). Then the broken line AKB, connecting the free edges A, B of the perpendicular mass segments (which lie outside the plane) through point K, which lies on the momenta screen (this point represents a possible arrangement of the two 3-momenta vectors), has length equal to the total energy $E_1+E_2$ of the two particles (AK and KB are the hypotenuses of the two right triangles that correspond to the two particles). The latter quantity is fixed and equals the total energy of the particles before the collision. 
Thus all possible arrangements of 3-momenta are described by the locus of a point that lies on the momenta screen while the sum of its distances from the given points A, B is equal to the total energy. Obviously the points K that satisfy the above requirements are described by the intersection of the plane of momenta screen with the axially symmetric ellipsoid with foci A, B and major axis equal to the total energy of the particles. The intersection of a plane with an ellipsoid is definitely an ellipse. The explanation is simple. Such an intersection is clearly a closed curve, and since the ellipsoid is described by a quadratic polynomial of Cartesian coordinates, its intersection by a plane is described by a quadratic relation as well. However, the most general closed plane curve of quadratic form is an ellipse. As mentioned above, we should finally rotate this elliptical curve around the line of the total 3-momentum to obtain all possible configurations of ${\bf p}_1,{\bf p}_2$. It is easy to see that this line coincides with the major axis of the constructed ellipse, by reflection symmetry of the whole 3-D construction with respect to the plane defined by the two parallel mass segments (see Fig.~\ref{fig:10}). It is remarkable that we have arrived at this conclusion, with respect to all possible arrangements of 3-momenta, by simple geometric arguments without resorting to any mathematical relations at all. \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=20pc] {Apos_fig10.eps}} \caption{\label{fig:11} A 2-D projection (a ground plan) of Fig.~\protect\ref{fig:10} on which the various lengths related to the ellipsoid are written down. The semi-major axis of the ellipsoid is $a=E/2$, while the focal distance $\textrm{AB}$ is $2 a e=\sqrt{p^2+\Delta^2}$ ($e$ is the eccentricity of the ellipsoid). Finally the semi-minor axis is $b=a \sqrt{1-e^2}=(\sqrt{E^2-p^2-\Delta^2})/2$. 
All these parameters have been used to construct the equation describing the plane of momenta screen which is drawn here as a line that intersects both the $z$- and $x$-axes. The $y$-axis is perpendicular to the plane of the diagram.} \end{center} \end{figure} Now let us proceed to quantify our geometric result. We will use the following notation to simplify our formulae: $E \equiv E_1+E_2$ is the total energy of the particles, $p \equiv |{\bf p}_\textrm{tot}|=|{\bf p}_1+ {\bf p}_2|$ is the magnitude of the total 3-momentum, $\Delta \equiv m_1-m_2$ and $\Sigma \equiv m_1+m_2$ (we take $\Delta$ as a signed quantity, so that the formulae below remain valid also when $m_1<m_2$). First we write the equation that describes the ellipsoid with foci A, B, in Cartesian coordinates: \begin{eqnarray} \frac{z^2}{E^2}+\frac{x^2+y^2}{E^2-p^2-\Delta^2}=\frac{1}{4}, \label{ellipseq} \end{eqnarray} where the $z$-axis lies along the line $\textrm{AB}$ while the $x$- and $y$-axes are perpendicular to the $z$-axis, with the $y$-axis being parallel to the plane of momenta screen. The origin of this Cartesian coordinate system is at the middle point of the segment $\textrm{AB}$, which is also the center of the ellipsoid. On the other hand the plane of momenta screen in the same coordinate system is described by the equation: \begin{eqnarray} x=-\frac{\Delta}{p} \left(z- \frac{\Sigma \sqrt{p^2+\Delta^2}}{2 \Delta} \right), \label{planeeq} \end{eqnarray} as one could infer from the lengths of various segments shown in Fig.~\ref{fig:11}. Finally if we use a new $z'$-axis defined as the line formed by the intersection of the momenta screen with the plane $y=0$, with its origin ($z'=0$) coinciding with the origin of ${\bf p}_\textrm{tot}$ (empty circle on the $z'$ axis in Fig.~\ref{fig:11}), the relation between the old $z$- and the new $z'$-values of a point along the $z'$-axis is \begin{eqnarray} z'=z \frac{\sqrt{p^2+\Delta^2}}{p}+\frac{p^2-\Sigma \Delta}{2 p}. 
\end{eqnarray} Now, if we seek a simultaneous solution of Eq.~(\ref{ellipseq}) and of Eq.~(\ref{planeeq}) (intersection of the ellipsoid by the plane of momenta screen) and we replace the $z$-values by their equivalent $z'$-values, we arrive at the general form of the ellipse describing the locus of the head points of all possible configurations of 3-momenta vectors after the collision. The corresponding equation is \begin{widetext} \begin{eqnarray} \left( 1-\frac{p^2}{E^2} \right) \left[ p_{\parallel} - \frac{p}{2} \left(1+ \frac{\Sigma \Delta}{E^2-p^2} \right) \right]^2 + p_{\perp}^2=\frac{(E^2-p^2-\Delta^2)(E^2-p^2-\Sigma^2)}{4(E^2- p^2)}, \label{theellipse} \end{eqnarray} \end{widetext} which is clearly the equation of an ellipse, as was anticipated by the aforementioned geometric arguments. Since on the plane of momenta screen we draw the 3-momenta vectors of the particles after the collision, we have replaced $z'$ and $y$ with $p_{\parallel}$ and $p_{\perp}$, respectively. The $_{\perp}$ and $_{\parallel}$ notation corresponds to the components of each 3-momentum vector that are parallel or perpendicular to the total 3-momentum vector. Constructing the equation of the ellipse that describes all possible configurations of 3-momenta might be a bit tedious, but it is straightforward compared to the usual analytic method. We could also draw further conclusions from our geometric construction alone, without any reference to Eq.~(\ref{theellipse}). \noindent (a) In the non-relativistic limit the two $m_i$-segments ($i=1,2)$ are so large with respect to the magnitude of the total 3-momentum ($m_i \gg |{\bf p}_i|$) that we could think of both mass segments as lying along the same line perpendicular to the plane of momenta screen. In this case the ellipsoid of constant energy is an extremely elongated ellipsoid oriented so that its axis of symmetry (major axis) is almost perpendicular to the plane of momenta screen. 
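Equation~(\ref{theellipse}) can be verified numerically: every point $(p_\parallel,p_\perp)$ of the ellipse must yield momenta ${\bf p}_1=(p_\parallel,p_\perp)$ and ${\bf p}_2=(p-p_\parallel,-p_\perp)$ whose energies sum to $E$. The sketch below (illustrative masses, particle $\#2$ at rest, and $m_1\ge m_2$ so that $\Delta=m_1-m_2$ agrees with $|m_1-m_2|$) checks this, together with the maximum deflection angle $\tan\theta_{1,\max}=m_2/\sqrt{m_1^2-m_2^2}$ derived further below:

```python
import math

# Elastic collision: particle 1 (mass m1) hits particle 2 (mass m2) at rest.
# We take m1 >= m2 so that Delta = m1 - m2 coincides with |m1 - m2|.
m1, m2, p = 2.0, 1.0, 3.0
E = math.hypot(m1, p) + m2          # total energy (target at rest)
S, D = m1 + m2, m1 - m2             # Sigma and Delta
s = E * E - p * p                   # the invariant E^2 - p^2

# Center and semi-axes of the ellipse of Eq. (theellipse):
center = 0.5 * p * (1 + S * D / s)
b = math.sqrt((s - D * D) * (s - S * S) / (4 * s))  # semi-minor axis (max p_perp)
a = b * E / math.sqrt(s)                            # semi-major axis along p

# Every point of the ellipse must respect energy conservation exactly.
worst = 0.0
for k in range(720):
    t = 2 * math.pi * k / 720
    p_par, p_perp = center + a * math.cos(t), b * math.sin(t)
    E1 = math.sqrt(m1 * m1 + p_par * p_par + p_perp * p_perp)
    E2 = math.sqrt(m2 * m2 + (p - p_par) ** 2 + p_perp ** 2)
    worst = max(worst, abs(E1 + E2 - E))
assert worst < 1e-9

# Maximum deflection of the incident particle: tangent from the tail of p.
tan_theta_max = b / math.sqrt(center * center - a * a)
assert abs(tan_theta_max - m2 / math.sqrt(m1 * m1 - m2 * m2)) < 1e-9
```

Since the ellipse was obtained as the intersection of the constant-energy ellipsoid with the momenta screen, conservation holds at every point to machine precision.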
The intersection of this elongated ellipsoid with the plane of momenta screen is the well-known circular locus of momentum vectors of an elastic collision in classical mechanics (see \cite{LandauMech}). \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=18pc] {Apos_fig11.eps}} \caption{\label{fig:12} When the momenta screen (dashed line) is tangent to the ellipsoid of constant energy there is only a single arrangement of 3-momenta. The two equal angles correspond to equal velocities for the two particles; thus this corresponds to a non-collisional configuration (case (b)).} \end{center} \end{figure} \noindent (b) In the relativistic regime the extreme case of a single-point intersection of the ellipsoid with the momenta screen corresponds to a configuration where the plane of momenta is tangent to the ellipsoid. Due to reflection symmetry with respect to the $x-z$ plane (cf.~Fig.~\ref{fig:10}), this intersecting point is along the direction of the total 3-momentum vector (see Fig.~\ref{fig:12}); therefore the two particles can only move along the same direction. Moreover, there is a geometric property according to which the tangent line to an ellipse forms equal angles with the focal radii at the point of contact \cite{AnalGeom}. In our construction the equality of these angles translates into $m_1/|{\bf p}_1|=m_2/|{\bf p}_2|$, that is, into equal velocities; thus this singular case corresponds to two particles moving with exactly the same speed, one behind the other. Of course such particles will never collide; they will always move maintaining their distance. So a single-point intersection in our geometrical solution corresponds to this trivial physical configuration. \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=18pc] {Apos_fig12.eps}} \caption{\label{fig:13} The relative position of the ellipsoid and the plane of momenta screen corresponding to the two cases of comment (c) where ${\bf p}_2=0$: The top drawing (a) is for $m_1<m_2$ and the bottom (b) for $m_1>m_2$. 
Both drawings are planar cross sections perpendicular to the momenta screen (dashed line) as in Figs~\protect\ref{fig:11},\protect\ref{fig:12}.} \end{center} \end{figure} \noindent (c) When one of the particles --let's say particle $\#2$-- is initially at rest, the ellipsoid intersects the momenta screen at one of the edges of the vector ${\bf p}$. This happens because the initial (before the collision) configuration $({\bf p}_2=0, {\bf p}_1={\bf p})$ should correspond to one of all possible arrangements of 3-momenta vectors. In this case, that point is the terminal point of ${\bf p}$. Now it is clear that if particle $\#1$ is the lighter one (see Fig.~\ref{fig:13}(a)) the elliptical locus of the possible arrangements of ${\bf p}_1$ (this ellipse is on the momenta screen, therefore it is not shown in Fig.~\ref{fig:13}) intersects the line of ${\bf p}$ at the terminal point of ${\bf p}$ and at a second point that lies along the opposite direction of ${\bf p}$. This means that the light particle could be backscattered. On the other hand, for a heavy particle $\#1$ (with respect to particle $\#2$) the corresponding situation is depicted in Fig.~\ref{fig:13}(b). The second intersection point of the ellipsoid with the plane of 3-momenta is now an intermediate point along the vector ${\bf p}$. Therefore, a heavy incident particle cannot be backscattered. Moreover, the maximum angle at which particle $\#1$ could be deflected corresponds to the tangent line from the starting point of ${\bf p}$ to the ellipse on the momenta screen that describes all 3-momenta arrangements. 
Algebraic manipulations of the standard form of the equation for the tangent line of an ellipse from a given point \cite{AnalGeom} lead to the following value for this angle: \begin{eqnarray} \tan\theta_{1, \max}=\frac{P_\perp}{\sqrt{(P_1+P_2/2)^2-(P_2/2)^2}}, \label{Ps} \end{eqnarray} \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=18pc] {Apos_fig13.eps}} \caption{\label{fig:14} Top: A ground plan of the ellipsoid of constant total energy with respect to momenta screen (dashed line). Bottom: The elliptical locus of all possible arrangements for the two 3-momenta which arises from the intersection of the ellipsoid with the plane of momenta screen. This is the portrait of momenta screen corresponding to the above relative position of the ellipsoid and the momenta screen. The $P_\perp, P_1$ and $P_2$ lengths used in Eq.~\protect\ref{Ps} are shown on this diagram.} \end{center} \end{figure} where $P_1$ is the minimum magnitude of ${\bf p}_1$ (it is equal to the part of ${\bf p}$ that is left outside the ellipse), $P_2$ is the maximum magnitude of ${\bf p}_2$ (it is the part of ${\bf p}$ that lies inside the ellipse), and $P_\perp$ is the semi-minor axis of the ellipse (the maximum magnitude of the components of ${\bf p}_1$ and ${\bf p}_2$ that are perpendicular to ${\bf p}$). All these elements are easily computed from expression (\ref{theellipse}). The corresponding expressions yield the following values: \begin{widetext} \begin{eqnarray} \begin{split} P_\perp=\max [p_\perp ]=\sqrt{ \frac{(E^2-p^2-\Delta^2)(E^2-p^2-\Sigma^2)}{4(E^2- p^2)} } \\ P_1=\min [ p_{\parallel}|_{p_\perp=0}]=\frac{p}{2} \left(1+ \frac{\Sigma \Delta}{E^2-p^2} \right) -\sqrt{ \frac{(E^2-p^2-\Delta^2)(E^2-p^2-\Sigma^2) E^2}{4(E^2- p^2)^2} }\\ P_2=\max [p_{\parallel}|_{p_\perp=0}] - \min [p_{\parallel}|_{p_\perp=0}]= \sqrt{ \frac{(E^2-p^2-\Delta^2)(E^2-p^2-\Sigma^2) E^2}{(E^2- p^2)^2} }. 
\end{split} \end{eqnarray} After quite some algebra we obtain the following simple value for this angle: \begin{eqnarray} \tan\theta_{1, \max}=\frac{m_2}{\sqrt{m_1^2-m_2^2}}. \end{eqnarray} Oddly enough, this happens to be exactly the answer one gets for the analogous classical non-relativistic elastic collision (see \cite{LandauMech}). \end{widetext} \noindent (d) In a relativistic billiard game (the analogue of the classical non-relativistic one), the major axis of the ellipse describing all 3-momenta configurations is equal to the magnitude of the total 3-momentum (one of the two identical balls is assumed to be initially at rest). Hence the angle between the lines of motion of the two balls after their elastic collision is the inscribed angle which subtends a diameter of the elliptical locus of 3-momenta (cf.~Figure \ref{fig:15}), with its vertex at the starting point of the total ${\bf p}$ (the leftmost point of the ellipse). This angle lies between $90^\circ$ and $\phi_{\min} =2 \tan^{-1}(b/a)$, where $b,a$ are the semi-minor and the semi-major axis of the ellipse, respectively. The former happens when one of the balls is almost still after the collision (the angle formed by the dotted lines in Fig.~\ref{fig:15}), while the latter corresponds to the case of equal 3-momenta after the collision. The non-relativistic locus of momenta is a circle instead of an ellipse (cf.~case (a)); all inscribed angles that subtend a diameter are then equal to $90^\circ$, and the balls always move perpendicular to each other after the collision, as is well known by billiard players. \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=18pc] {Apos_fig14.eps}}% \caption{\label{fig:15} The 3-momenta $\protect{\bf p}_1,\protect{\bf p}_2$ of two billiard balls after an elastic collision are represented by two vectors that sum up to the major axis of an ellipse, the total 3-momentum $\protect{\bf p}$.
The angle between them is the inscribed angle from the leftmost point of the ellipse that subtends one of the diameters of the ellipse. This angle ranges from $90^\circ$ (approximately the angle formed by the dotted chords) to $\phi_{\min}$ (dashed chords).} \end{center} \end{figure} \section{Problem 5} \label{sec:8} In beta decay, what is the range of kinetic energy of the electron produced, and what is the distribution function of that energy? This is a reaction where a single particle (in the case of beta decay this particle is a neutron) spontaneously disintegrates into three particles, one of them (an electron antineutrino) being a very light one. It was first found by L. Meitner and O. Hahn that the electrons produced in beta decay have a continuous spectrum although only two particles (an electron and a proton) were apparently produced. As we will demonstrate through geometrical arguments, production of only two particles is not compatible with a continuous spectrum. Pauli suggested that production of a third light particle, which could not be detected then, could resolve the mystery. Fermi baptized that particle the neutrino, and it was actually detected a quarter of a century later. \begin{figure}[h] \begin{center} \scalebox{0.60}{\includegraphics{Apos_fig15.eps}} \caption{\label{fig:16} There is only one way to distribute the total energy in two particles that are produced from the spontaneous decay of a neutron. The two particles should have opposite 3-momenta with the same magnitude. These magnitudes should be compatible with the aforementioned conservation of energy. The mutual line of motion, though, could have any direction.} \end{center} \end{figure} Let's first assume that a neutron, which is initially at rest, spontaneously decays into two particles: a proton and an electron. On momenta screen we draw (cf.~Figure \ref{fig:16}) the two 3-momenta vectors so that they add up to zero (since the parent particle is assumed to be still in the lab frame).
Two segments of length $m_p$ and $m_e$ are drawn perpendicular to the momenta-screen plane at one of the common edges of the two opposite 3-momenta, on each side of the plane. The length of the broken line AKB connecting the second common edge of the 3-momenta, K, with the free edges A, B of the mass segments ($m_e$ and $m_p$ respectively), equals the total energy of the system, that is the rest mass $m_n$ of the neutron. It is intuitively obvious that there is only one solution for the magnitude of the 3-momenta of the daughter particles (apart from their direction). This solution is represented by the radius of the circular intersection of an ellipsoid --with its foci at A, B, and with major axis equal to $m_n$-- with the plane of momenta screen. Therefore, if there was no extra particle produced, the electron should be monoenergetic. Now if we allow for one more particle (more than one extra particle could be considered equivalent to a single composite particle; cf.~Proposition II), there is a continuous sequence of arrangements for the 3-momenta of the proton and the electron, and correspondingly energies, that could accommodate an extra particle. More specifically we will focus our attention on a zero-rest-mass particle, like what was assumed for many years for the neutrinos (practically it could be considered as such, compared to the other two heavy-mass particles). The three 3-momenta drawn on momenta screen should still sum up to zero, forming a generic triangle $\textrm{K}\Lambda \textrm{M}$. Since the new particle has no rest mass, the three-segment broken line $\textrm{A} \textrm{K} \Lambda \textrm{B}$ ($\textrm{K}\Lambda$ representing the new particle's 3-momentum), should have total length equal to $m_n$. It is easy to see that we could continuously deform the triangle of 3-momenta $\textrm{K}\Lambda \textrm{M}$ while keeping the total energy (represented by the length of $\textrm{A} \textrm{K} \Lambda \textrm{B}$) fixed.
This is a simple pictorial argument that demonstrates why the electrons in beta decay should come out with a continuous spectrum. \begin{figure}[h] \begin{center} \scalebox{0.70}{\includegraphics{Apos_fig16.eps}} \caption{\label{fig:17} A third massless particle among the products allows for a wide range of energies for the electron. Any choice for the points $\textrm{K}$, $\Lambda$ on momenta screen, such that $\textrm{AK}+\textrm{K}\Lambda+\Lambda\textrm{B}=m_n$, represents a distinct configuration of 3-momenta for the products of beta decay. } \end{center} \end{figure} Next we will turn to a more quantitative analysis of all possible arrangements for the energies of the three particles. First of all we can convince ourselves that an electron could be produced with no kinetic energy at all. This arises when the $\textrm{KM}$ side of the triangle of 3-momenta has zero length ($\textrm{K}$ and $\textrm{M}$ points coincide), while the third vertex is such that \begin{eqnarray} \textrm{AM}+\textrm{M}\Lambda+\Lambda\textrm{B} &=& m_e+\textrm{M}\Lambda+\sqrt{(\textrm{M}\Lambda)^2+m_p^2} \nonumber \\ &=& m_n. \end{eqnarray} This situation is unique with respect to the magnitude of $E_\nu=|{\bf p}_\nu|=\textrm{M}\Lambda$, as with the decay into two particles. On the other hand, if the electron has a specific non-zero kinetic energy, there is a whole family of arrangements for the 3-momenta of the other two particles. These arrangements are described by the intersection of the ellipsoid with major axis equal to the remaining energy shared by the other two particles ($m_n-m_e-T_e$) and $\textrm{B},\textrm{K}$ as foci, with the momenta screen. The maximum allowed energy for the electron is such that there is but a unique arrangement for the other two particles' momenta.
In order to have a single intersection point (unique solution) of an ellipsoid with a plane on which one of the ellipsoid's foci is lying, the ellipsoid should be a degenerate one with major axis equal to the focal distance. This is the case where no energy is left for the antineutrino; therefore it is exactly the situation with only two particles, an electron and a proton, which was analyzed above. The whole range of energies for the electron within this interval of kinetic energies is actually observed in beta-decay experiments. Besides predicting the range of the spectrum itself, we can also predict the distribution function for the energy of the electron, at least with respect to kinematics. For a given kinetic energy of the electron, which corresponds to a magnitude of $|{\bf p}_e|$, there is a whole family of vector arrangements for the 3-momenta of the other two particles. In order to visualize these arrangements, we should draw the ellipsoid of constant energy $E_p+E_\nu=m_n-E_e=m_n-m_e-T_e$ (this is the major axis of the ellipsoid), with the ending point of ${\bf p}_e$, \textrm{K}, and the free edge of the vertical segment $m_p$, \textrm{B}, as its foci, and then find its intersection with the plane of momenta screen. The ellipse $\it{C}$ (see Fig.~\ref{fig:18}) that will arise from such an intersection is the locus of all possible ${\bf p}_\nu$ that are compatible with the particular value of $T_e$. Therefore the possible arrangements of the three particles' 3-momenta that correspond to $T_e$ for the electron, are described by the surface of a sphere of radius $|{\bf p}_e|$ (all possible directions for the electron), times the surface of the ellipsoid that comes about when the ellipse $\it{C}$ is rotated around the direction of ${\bf p}_e$ (all possible inclinations of the plane of momenta around the axis of ${\bf p}_e$). Hence, the distribution function of $T_e$ will be given by the product of the two surfaces.
The area of the corresponding sphere is \begin{eqnarray} S^{\textrm{sph}}=4 \pi |{\bf p}_e|^2 \end{eqnarray} while the area of the ellipsoidal surface that arises from the revolution of $\it{C}$ around ${\bf p}_e$ is \begin{eqnarray} S^{\textrm{ell}}=2 \pi a b \left( \frac{b}{a}+\frac{\sin^{-1}\sqrt{1-(b/a)^2}}{\sqrt{1-(b/a)^2}} \right) \end{eqnarray} where $a,b$ are the semi-major and the semi-minor axes of the ellipse respectively. Finally, the derivative $d|{\bf p}_e|/dT_e$ is needed to transform the distribution into a distribution over $T_e$: \begin{eqnarray} \label{Prob} {\cal P}(T_e)=\frac{d |{\bf p}_e | }{dT_e}S^{\textrm{sph}} S^{\textrm{ell}} =4 \pi (T_e+m_e)|{\bf p}_e| S^{\textrm{ell}}. \end{eqnarray} Computing the dimensions $a,b$ of the ellipse is straightforward, though quite tedious. It can be performed through algebraic computations similar to those used in problem 4 to obtain the intersection of an ellipsoid with a plane. In Fig.~\ref{fig:19} we have plotted the spectrum of the electron as a function of its kinetic energy $T_e$ (Eq.~(\ref{Prob})) for a neutron undergoing beta decay, assuming it was initially at rest. It should be noted though, that the usual experimental curves for beta decay correspond to a metastable nucleus undergoing beta decay, instead of a single neutron. This affects the value of $T_{e,\max}$ (the maximum value of $T_e$) and the general shape of the curve. Also, our description is accurate with respect only to the kinematics implied by special relativity, without taking into account the internal dynamics of the transformation.
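Eq.~(\ref{Prob}) is easy to evaluate numerically. The Python sketch below is an illustration, not the code behind Fig.~\ref{fig:19}: it takes the semi-axes of the ellipse from standard two-body kinematics of the proton--antineutrino pair, whose invariant mass squared is $s'=(m_n-E_e)^2-|{\bf p}_e|^2$ (this closed form stands in for the tedious algebra mentioned above), and uses the standard particle masses in MeV.

```python
import math

# Standard particle masses in MeV (c = 1)
m_n, m_p, m_e = 939.565, 938.272, 0.511

# Endpoint of the spectrum from two-body kinematics n -> p + e
E_e_max = (m_n**2 + m_e**2 - m_p**2) / (2.0 * m_n)
T_max = E_e_max - m_e

def spectrum(T_e):
    """Kinematic distribution P(T_e) of Eq. (Prob), up to normalization."""
    E_e = T_e + m_e
    p_e = math.sqrt(E_e**2 - m_e**2)
    # Invariant mass squared of the (proton, antineutrino) pair
    s = (m_n - E_e)**2 - p_e**2
    # CM momentum of the pair (antineutrino treated as massless), then
    # the semi-axes a >= b of the ellipse C in the lab frame
    pstar = (s - m_p**2) / (2.0 * math.sqrt(s))
    gamma = (m_n - E_e) / math.sqrt(s)
    a, b = gamma * pstar, pstar
    # Area of the prolate spheroid obtained by revolving C about p_e
    ecc = math.sqrt(max(1.0 - (b / a)**2, 0.0))
    if ecc == 0.0:
        S_ell = 4.0 * math.pi * b**2          # spherical limit a = b
    else:
        S_ell = 2.0 * math.pi * a * b * (b / a + math.asin(ecc) / ecc)
    return 4.0 * math.pi * E_e * p_e * S_ell

# The spectrum vanishes at both endpoints and is positive in between.
mid = spectrum(0.5 * T_max)
print(T_max, mid)
```

With these masses the endpoint comes out near the familiar $T_{e,\max}\approx 0.78$~MeV for free-neutron decay.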
\begin{figure}[h] \begin{center} \scalebox{0.60}{\includegraphics{Apos_fig17.eps}} \caption{\label{fig:18} The arrangements of ${\bf p}_\nu$ and ${\bf p}_p$ are described by the intersection of an ellipsoid with the plane of momenta screen, as with problem 4.} \end{center} \end{figure} \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=18pc] {Apos_fig18.eps}} \caption{\label{fig:19} The probability distribution of $T_e$ in beta decay due to kinematics only.} \end{center} \end{figure} \section{Conclusion} \label{sec:9} The graphic tools we have used in this paper are simply right triangles, suitably drawn in order to describe the kinematics of either simple or more complicated relativistic reactions. \begin{figure}[h] \begin{center} \scalebox{0.70}{\includegraphics{Apos_fig19.eps}} \caption{\label{fig:20} A mechanical construction to draw right triangles corresponding to relativistic particles that take part in relativistic reactions.} \end{center} \end{figure} Using these tools when teaching relativistic reactions is not only fun, but also renders complicated analytic computations unnecessary. The student can easily visualize the kinematics allowed by energy and momentum conservation. The teacher can benefit from using 3-D diagrams with right triangles, when she or he attempts to construct new problems with reactions. After a quick drawing she or he could make sure that the problem has the desired solution. Furthermore, the teacher could make teaching more vivid by using a simple mechanical construction: a simple metal planar board, rods of adjustable length to use as mass segments with suitable magnetic bases so that one could stick them perpendicular to the board, an adjustable string and a board marker (see Fig.~\ref{fig:20}) to build 3-D diagrams that correspond to any relativistic reaction one may think of. We bet that such a construction will make the analysis of relativistic kinematics even more fun.
\begin{acknowledgments} This work was supported by the research funding program ``Kapodistrias'' with Grant No 70/4/7672. \end{acknowledgments}
\section{Introduction} The Riemann--Hilbert correspondence relates the integrable logarithmic (or Fuchsian) connections over an algebraic variety $X$ to the representations of the fundamental group $\pi_1(X\setminus D)$, where $D$ denotes the divisor of poles of a connection. Deligne \cite{De} proved its bijectivity, on condition that $D$ is a f\/ixed normal crossing divisor and the data on both sides are taken modulo appropriate equivalence relations. Nevertheless, Deligne's solution is not ef\/fective in the sense that it does not provide formulas for computing the Riemann--Hilbert correspondence. Therefore, it is important to have on hand a stock of examples that can be solved explicitly. The authors of \cite{Kor-2,EG} constructed logarithmic connections of rank $n$ over $\PP^1$ with quasi-permutation monodromy in terms of theta functions on a ramif\/ied cover of $\PP^1$ of degree~$n$. Korotkin in~\cite{Kor-1} considers a class of generalized connections, called connections with constant twists, and constructs such twisted connections of rank $2$ with logarithmic singularities on an elliptic curve $E$ via theta functions on a double cover $C$ of $E$. In the present paper, we obtain genuine (non-twisted) rank-$2$ connections on $E$ from its double cover $C$ by a dif\/ferent method, similar to the method applied in \cite{LVU} to the double covers of $\PP^1$. We consider a genus-$2$ cover $f:C\ra E$ of degree $2$ with two branch points $p_+$, $p_-$ and a regular connection $\nabla_{\mathcal L} $ on a line bundle ${\mathcal L}$ over $C$. Then the sheaf-theoretic direct image ${\mathcal E}=f_* ({\mathcal L})$ is a rank-$2$ vector bundle carrying the connection $\nabla_{\mathcal E} :=f_* (\nabla_{\mathcal L} )$ with logarithmic poles at $p_+$ and $p_-$. We explicitly parameterize all such connections and their monodromy representations $\rho:\pi_1(E\setminus\{p_{-}, p_{+}\})\ra GL(2, \CC)$.
We also investigate the abstract group-theoretic structure of the obtained monodromy groups as well as their Zariski closures in $GL(2, \CC)$, which are the dif\/ferential Galois groups of the connections $\nabla_{\mathcal E} $. Establishing a bridge between the analytic and algebro-geometric counterparts of the problem is one of the main objectives of the paper. We show that the underlying vector bundle ${\mathcal E}$ of $\nabla_{\mathcal E} $ is stable of degree $-1$ for generic values of the parameters and identify the special cases where it is unstable and is the direct sum of two line bundles. We also illustrate the following Bolibruch--Esnault--Viehweg Theorem \cite{EV-2}: any irreducible logarithmic connection over a curve can be converted by a sequence of Gabber's transforms into a logarithmic connection with the same singularities and the same monodromy on a semistable vector bundle of degree $0$. Bolibruch has established this result in the genus-0 case, in which ``semistable of degree 0'' means just ``trivial'' \cite{AB}. We explicitly indicate a Gabber's transform of the above direct image connection $({\mathcal E}, \nabla_{\mathcal E} )$ which satisf\/ies the conclusion of the Bolibruch--Esnault--Viehweg Theorem. The importance of results of this type is that they allow us to consider maps from the moduli space of connections to the moduli spaces of vector bundles, for only semistable bundles have a consistent moduli theory. Another useful feature of the elementary transforms is that they allow one to change the degree arbitrarily, and this enriches our knowledge of the moduli space of connections by providing maps to moduli spaces of vector bundles of dif\/ferent degrees, which may be quite dif\/ferent and even have dif\/ferent dimensions (see Remark~\ref{0and-1}). \looseness=1 All the relevant algebro-geometric tools are introduced in a way accessible to a non-specialist. One of them is the usage of ruled surfaces in f\/inding line subbundles of rank-2 vector bundles.
This is classical, see~\cite{LN} and references therein. Another one is the reconstruction of a vector bundle from the singularities of a given connection on it. Though it is known as a theoretical method \cite{EV-1,EV-2}, it has not been used for a practical calculation of vector bundles underlying a given meromorphic connection over a Riemann surface dif\/ferent from the sphere. For the Riemann sphere, any vector bundle is the direct sum of the line bundles ${\mathcal O}(k_i)$, and Bolibruch developed the method of valuations (see~\cite{AB}) serving to calculate the integers $k_i$ for the underlying vector bundles of connections. He exploited this method extensively, in particular in his construction of counter-examples to the Riemann--Hilbert problem for reducible representations. Genus-2 double covers of elliptic curves are a classical subject, originating in the work of Legendre and Jacobi~\cite{J}. We provide several descriptions of them, based on a more recent work~\cite{Di}. We determine the locus of their periods (Corollary \ref{period_locus}), a result which we could not f\/ind elsewhere in the literature and which we need for f\/inding the image of the Riemann--Hilbert correspondence in Proposition \ref{imageRHonC}. \looseness=1 Now we will brief\/ly survey the contents of the paper by sections. In Section \ref{g2-covers}, we describe the genus-$2$ covers of elliptic curves of degree $2$ and determine their periods. In Section \ref{rank-1}, we investigate rank-$1$ connections on $C$ and discuss the dependence of the Riemann--Hilbert correspondence for these connections on the parameters of the problem: the period of $C$ and the underlying line bundle ${\mathcal L}$. In Section \ref{Direct_Images}, we compute, separately for the cases ${\mathcal L}={\mathcal O}_C$ and ${\mathcal L}\neq{\mathcal O}_C$, the matrix of the direct image connection $\nabla_{\mathcal E} $ on ${\mathcal E}=f_* {\mathcal L}$.
For ${\mathcal L}={\mathcal O}_C$, we also provide two dif\/ferent forms for a scalar ODE of order 2 equivalent to the $2\times2$ matrix equation $\nabla_{\mathcal E} \phi=0$. In Section \ref{monodromy-1}, we determine the fundamental matrices and the monodromy of connections $\nabla_{\mathcal E}$ and discuss their isomonodromy deformations. Section \ref{elementary} introduces the elementary transforms of rank-$2$ vector bundles, relates them to birational maps between ruled surfaces and states a criterion for (semi)-stability of a rank-$2$ vector bundle. In Section \ref{underlying}, we apply the material of Section \ref{elementary} to describe ${\mathcal E}$ as a result of a series of elementary transforms starting from ${\mathcal E}_0=f_* {\mathcal O}_C$ and prove its stability or instability depending on the values of the parameters. We also describe Gabber's elementary transform which illustrates the Bolibruch--Esnault--Viehweg Theorem and comment brief\/ly on the twisted connections of \cite{Kor-1}. In Section~\ref{monodromy}, we give a description of the structure of the monodromy and dif\/ferential Galois groups for~$\nabla_{\mathcal E} $. {\bf Terminology.} If not specif\/ied otherwise, a curve will mean a nonsingular complex projective algebraic curve, which we will not distinguish from the associated analytic object, a compact Riemann surface. \newpage \section{Genus-2 covers of an elliptic curve} \label{g2-covers} In this section, we will describe the degree-2 covers of elliptic curves which are curves of genus~$2$. \begin{definition} Let $\pi:C\rar E$ be a degree-2 map of curves. If $E$ is elliptic, then we say that $C$ is bielliptic and that $E$ is a degree-2 elliptic subcover of $C$. \end{definition} Legendre and Jacobi \cite{J} observed that any genus-2 bielliptic curve has an equation of the form \begin{gather}\label{Jacobi} y^2=c_0x^6+c_1x^4+c_2x^2+c_3\qquad (c_i\in\CC) \end{gather} in appropriate af\/f\/ine coordinates $(x,y)$.
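The two substitutions that produce the elliptic subcovers written out next, $x_1=x^2$ and $(x_2,y_2)=(1/x^2,\,y/x^3)$, can be checked numerically; in the Python sketch below the coefficients $c_i$ and the sample points are arbitrary illustrative choices.

```python
import cmath

# Illustrative coefficients of the sextic y^2 = c0 x^6 + c1 x^4 + c2 x^2 + c3
c0, c1, c2, c3 = 2.0, -3.0, 5.0, 7.0

def on_sextic(x):
    """Return a point (x, y) of the bielliptic curve lying over x."""
    return x, cmath.sqrt(c0 * x**6 + c1 * x**4 + c2 * x**2 + c3)

residuals = []
for x0 in (0.7, 1.3 + 0.4j, -2.1j):
    x, y = on_sextic(x0)
    # First subcover: (x, y) -> (x1, y) with x1 = x^2
    x1 = x**2
    residuals.append(abs(y**2 - (c0 * x1**3 + c1 * x1**2 + c2 * x1 + c3)))
    # Second subcover: (x, y) -> (x2, y2) = (1/x^2, y/x^3)
    x2, y2 = 1 / x**2, y / x**3
    residuals.append(abs(y2**2 - (c3 * x2**3 + c2 * x2**2 + c1 * x2 + c0)))

print(max(residuals))   # all residuals vanish up to rounding
```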
It immediately follows that any bielliptic curve $C$ has two elliptic subcovers $\pi_i:C\rar E_i$, \begin{gather} E_1:\ \ y^2=c_0x_1^3+c_1x_1^2+c_2x_1+c_3,\ \ \pi_1:(x,y)\mapsto (x_1=x^2,y),\ \ \mbox{and}\nonumber\\ E_2:\ \ y_2^2=c_3x_2^3+c_2x_2^2+c_1x_2+c_0,\ \ \pi_2:(x,y)\mapsto (x_2=1/x^2,y_2=y/x^3).\label{Jacobi-Ei} \end{gather} This description of bielliptic curves, though very simple, depends on an excessive number of parameters. To eliminate unnecessary parameters, we will represent $E_i$ in the form \begin{gather}\label{E1-E2} E_i: y_i^2=x_i(x_i-1)(x_i-t_i) \qquad (t_i\in\CC\setminus \{0,\ 1\},\ t_1\neq t_2). \end{gather} Remark that any pair of elliptic curves $(E_1,E_2)$ admits such a representation even if $E_1\simeq E_2$. We will describe the reconstruction of $C$ starting from $(E_1,E_2)$ following \cite{Di}. This procedure will allow us to determine the periods of bielliptic curves $C$ in terms of the periods of their elliptic subcovers $E_1$, $E_2$. Let $\phi_i:E_i\rar \PP^1$ be the double cover map $(x_i,y_i)\mapsto x_i$ ($i=1,2$). Recall that the f\/ibered product $E_1\times_{\PP^1} E_2$ is the set of pairs $(P_1,P_2)\in E_1\times E_2$ such that $\phi_1(P_1)= \phi_2(P_2)$. It can be given by two equations with respect to three af\/f\/ine coordinates $(x,y_1,y_2)$: \begin{gather* \bar C:=E_1\times_{\PP^1} E_2: \left\{\begin{array}{l} y_1^2=x(x-1)(x-t_1),\vspace{1mm}\\ y_2^2=x(x-1)(x-t_2). \end{array}\right. \end{gather*} It is easily verif\/ied that $\bar C$ has nodes over the common branch points $0$, $1$, $\infty$ of $\phi_i$ and is nonsingular elsewhere. For example, locally at $x=0$, we can choose $y_i$ as a local parameter on~$E_i$, so that $x$ has a zero of order two on $E_i$; equivalently, we can write $x=f_i(y_i)y_i^2$ where $f_i$ is holomorphic and $f_i(0)\neq 0$. Then eliminating $x$, we obtain that $\bar C$ is given locally by a single equation $f_1(y_1)y_1^2=f_2(y_2)y_2^2$. 
This is the union of two smooth transversal branches $\sqrt{f_1(y_1)}y_1=\pm \sqrt{f_2(y_2)}y_2$. Associated to $\bar C$ is its normalization (or desingularization) $C$ obtained by separating the two branches at each singular point. Thus $C$ has two points over $x=0$, whilst the only point of~$\bar C$ over $x=0$ is the node, which we will denote by the same symbol $0$. We will also denote by~$0_+$,~$0_-$ the two points of $C$ over~$0$. Any of the functions $y_1$, $y_2$ is a local parameter at $0_\pm$. In a similar way, we introduce the points $1,\infty\in \bar C$ and $1_\pm, \infty_\pm\in C$. \begin{proposition}\label{fp} Given a genus-$2$ bielliptic curve $C$ with its two elliptic subcovers $\pi_i:C\rar E_i$, one can choose affine coordinates for $E_i$ in such a way that $E_i$ are given by the equations \eqref{E1-E2}, $C$ is the normalization of the nodal curve $\bar C:=E_1\times_{\PP^1} E_2$, and $\pi_i=\operatorname{pr}\nolimits_i\circ\nu$, where $\nu:C\rar\bar C$ denotes the normalization map and $\operatorname{pr}\nolimits_i$ the projection onto the $i$-th factor. \end{proposition} \begin{proof} See \cite{Di}. \end{proof} It is interesting to know how the descriptions given by \eqref{Jacobi} and Proposition \ref{fp} are related to each other. The answer is given by the following proposition. \begin{proposition}\label{deg6eq} Under the assumptions and in the notation of Proposition {\rm \ref{fp}}, apply the following changes of coordinates in the equations of the curves $E_i$: \[ (x_i,y_i)\rar (\tilde x_i, \tilde y_i),\qquad \tilde x_i=\frac{x_i-t_j}{x_i-t_i},\qquad \tilde y_i= \frac{y_i}{(x_i-t_i)^2}\sqrt{\frac{(t_j-t_i)^3}{t_i(1-t_i)}}, \] where $j=3-i$, $i=1,2$, so that $\{i,j\}=\{1,2\}$.
Then the equations of $E_i$ acquire the form \begin{gather} E_1:\ \ \tilde y_1^2=\left(\tilde{x_1}-\dfrac{t_2}{t_1}\right)\ \left(\tilde{x_1}-\dfrac{1-t_2}{1-t_1}\right)(\tilde{x_1}-1),\nonumber\\ E_2:\ \ \tilde y_2^2=\left(1-\dfrac{t_2}{t_1}\tilde{x_2}\right) \left(1-\dfrac{1-t_2}{1-t_1}\tilde{x_2}\right)(1-\tilde{x_2}).\label{eqEi} \end{gather} Further, $C$ can be given by the equation \begin{gather}\label{eqC} \eta^2=\left(\xi^2-\frac{t_2}{t_1}\right) \left(\xi^2-\frac{1-t_2}{1-t_1}\right)(\xi^2-1), \end{gather} and the maps $\pi_i:C\rar E_i$ by $(\xi,\eta)\mapsto (\tilde x_i, \tilde y_i)$, where \[ (\tilde x_1, \tilde y_1)=(\xi^2,\eta),\qquad (\tilde x_2, \tilde y_2)=(1/\xi^2,\eta/\xi^3). \] \end{proposition} \begin{proof} We have the following commutative diagram of double cover maps \begin{gather*} \xymatrix{ &\ar_{\pi_1}[dl] C \ar_{f}[d] \ar^{\pi_2}[dr]\\ E_1 \ar_{\phi_1}[dr] & \ar_{\tilde{\phi}}[d] {\PP^1} & E_2\ar^{\phi_2}[dl] \\ & {\PP^1} } \end{gather*} in which the branch loci of $\phiti$, $\phi_i$, $f$, $\pi_i$ are respectively $\{t_1,t_2\}$, $\{0,1,t_i,\infty\}$, $\phiti^{-1} (\{0,1,\infty\})$, $\phi_i^{-1}(t_j)$ ($j=3-i$). Thus the $\PP^1$ in the middle of the diagram can be viewed as the Riemann surface of the function $\sqrt{\frac{x-t_2}{x-t_1}}$, where $x$ is the coordinate on the bottom $\PP^1$. We introduce a~coordinate $\xi$ on the middle $\PP^1$ in such a way that $\phiti$ is given by $\xi\mapsto x$, $\xi^{2}=\frac{x-t_2}{x-t_1}$. Then $C$ is the double cover of $\PP^1$ branched in the 6 points $\phiti^{-1} (\{0,1,\infty\})=\big\{\pm1,\pm\sqrt{\frac{1-t_2}{1-t_1}},\pm\sqrt{\frac{t_2}{t_1}}\big\}$, which implies the equation \eqref{eqC} for $C$. Then we deduce the equations of $E_i$ in the form \eqref{eqEi} following the recipe of \eqref{Jacobi-Ei}, and it is an easy exercise to transform them into \eqref{E1-E2}. \end{proof} The locus of bielliptic curves in the moduli space of all the genus-2 curves is 2-dimensional, hence is a hypersurface. 
In~\cite{SV}, an explicit equation of this hypersurface is given in terms of the Igusa invariants of the genus-2 curves. We will give a description of the same locus in terms of periods. We start by recalling necessary def\/initions. Let $a_1$, $a_2$, $b_1$, $b_2$ be a symplectic basis of $H_1(C,\ZZ)$ for a genus-2 curve $C$, and $\omega_1$, $\omega_2$ a basis of the space $\Gamma(C,\Omega^1_C)$ of holomorphic 1-forms on $C$. \begin{definition} Let us introduce the $2\times 2$-matrices $A=(\int_{a_i}\omega_j)$ and $B=(\int_{b_i}\omega_j)$. Their concatenation $\Pi=(A|B)$ is a $2\times 4$ matrix, called the period matrix of the $1$-forms $\omega_1$, $\omega_2$ with respect to the basis $a_1$, $a_2$, $b_1$, $b_2$ of $H_1(C,\ZZ)$. The period of $C$ is the $2\times 2$-matrix $Z=A^{-1}B$. If $A=I$ is the identity matrix, the basis $\omega_1,\omega_2$ of $\Gamma(C,\Omega^1_C)$ and the corresponding period matrix $\Pi_0= (I|Z)$ are called normalized. The period lattice $\Lambda=\Lambda(C)$ is the $\ZZ$-submodule of rank $4$ in $\Gamma(C,\Omega^1_C)^*$ generated by the $4$~linear forms $\omega\mapsto \int_{a_i}\omega$, $\omega\mapsto \int_{b_i}\omega$. A choice of the basis $\omega_i$ identif\/ies $\Gamma(C,\Omega^1_C)^*$ with $\CC^2$, and $\Lambda$ is then generated by the $4$ columns of $\Pi$. \end{definition} The period $Z_C$ of $C$ is determined modulo the discrete group Sp$(4,\ZZ)$ acting by symplectic base changes in $H_1(C,\ZZ)$. {\bf Riemann's bilinear relations.} The period matrix of any genus-2 curve $C$ satisf\/ies the conditions \[ Z^t=Z \qquad \mbox{and}\qquad \Im Z >0. \] \begin{figure}[t] \centerline{\includegraphics[width=13cm]{Machu-fig1}} \caption{The 4 sheets of $C$. The segments of two edges of the cuts are glued together if they are: (1) situated one under the other, and (2) hatched by dashes of the same orientation. 
Thus, the upper edge of the cut on $\Sigma_{++}$ between $t_1$, $t_2$ is glued to the lower edge of the cut on $\Sigma_{-+}$ between $t_1$, $t_2$. Four black points over $t_2$ glue together to give one point $t_{2+}\in C$, and similarly four white ones give $t_{2-}\in C$. The 4 preimages of each one of the points $0$, $1$, $t_1$, $\infty$ are glued in pairs, as shown by the colors black/white and by dotted lines, and give 8 points of $C$ denoted by $0_\pm$, $1_\pm$, $t_{1\pm}$, $\infty_\pm$.}\label{fig1} \end{figure} \looseness=1 To determine the periods of bielliptic curves $C$, it is easier to use the representation from Proposition \ref{fp} rather than the standard equation of a genus-2 curve \eqref{eqC}. This is due to the fact that we can choose $\omega_1=dx/y_1$, $\omega_2=dx/y_2$ as a basis of the space $\Gamma(C,\Omega^1_C)$ of holomorphic 1-forms on $C$, and the periods of these 1-forms are easily related to the periods on $E_i$. (Basically, $(\omega_1, \omega_2)$ can be seen as a basis of eigenvectors of the action of $(\ZZ/2\ZZ)^{2}$ on~$C$.) \looseness=1 To f\/ix the ideas, we assume for a while that $t_1$, $t_2$ are real and $1<t_1<t_2$ (the general case is obtained by a deformation moving the points $t_i$). $E_i$ can be represented as the result of gluing two sheets $\Sigma_{i+}$, $\Sigma_{i-}$, which are Riemann spheres with cuts along the segments $[0, 1]$ and $[t_i, \infty]$. Then $C$, parameterizing the pairs of points $(P_1,P_2)$ with $P_i\in E_i$ and with the same $x$-coordinate, is the result of gluing 4 sheets, which are copies of the Riemann sphere with cuts along the segments $[0, 1]$ and $[t_1, \infty]$ labelled by $++$, $--$, $+-$, $-+$. For example, the sheet $\Sigma_{+-}$ is formed by the pairs $(P_1,P_2)$ where $P_1$ lies on $\Sigma_{1+}$ and $P_2$ on~$\Sigma_{2-}$. Fig.~\ref{fig1} shows the gluings of the edges of the cuts with the help of hatching and f\/ixes the choice of the cycles $a_i$, $b_i$. 
Black points on the same vertical line are identif\/ied, and likewise the white ones. \begin{proposition} Let $C$, $E_1$, $E_2$ be as in Proposition {\rm \ref{fp}}, and $a_i$, $b_i$ as on Fig.~{\rm \ref{fig1}}. Then the period matrix of $C$ is \[ Z_C=\left( \begin{array}{cc} \frac{1}{2} (\tau_1+\tau_2) & \frac{1}{2} (\tau_1-\tau_2) \vspace{1mm}\\ \frac{1}{2} (\tau_1-\tau_2) & \frac{1}{2}(\tau_1+\tau_2) \end{array}\right), \] where $\tau_i$ is the period of $E_i$ with respect to the basis $\gamma_i=\pi_{i*}(a_1)$, $\delta_i=\pi_{i*}(b_1)$ of $H_1(E_i,\ZZ)$. \end{proposition} \begin{proof} Let $k_i$, $l_i$ be the periods of the dif\/ferential $dx/y_i$ on $E_i$ along the cycles $\gamma_i$, $\delta_i$ respectively. Take $\omega_i=\pi_i^*(dx/y_i)$ as a basis of $\Gamma(C,\Omega_C)$. We have \[ \int_{a_1}\pi_j^*(dx/y_j)=\int_{\pi_{j*}(a_1)}dx/y_j=k_j. \] But when calculating the integral over $a_2$, we have to take into account the fact that a positively oriented loop around a cut on $\Sigma_{+-}$ projects to a positively oriented loop on $\Sigma_{2-}$, and the latter def\/ines the cycle $-\gamma_2$ on $E_{2}$. Thus $\pi_{2*}(a_2)=-\gamma_2$, and the corresponding period acquires an extra sign: \[ \int_{a_2}\pi_j^*(dx/y_j)=\int_{\pi_{j*}(a_2)}dx/y_j=(-1)^{j+1}k_j. \] The integrals over $b_j$ are transformed in a similar way. We obtain the period matrix of $C$ in the form \[ \Pi=\left( \begin{array}{cc|cc} k_1 & k_1 &l_1 & l_1 \\ k_2 & -k_2 &l_2 & -l_2 \end{array}\right). \] Multiplying by the inverse of the left $2\times 2$-block and using the relations $\tau_i=l_i/k_i$, we obtain the result.
\end{proof} \begin{corollary}\label{period_locus} The locus ${\mathcal H}$ of periods of genus-$2$ curves $C$ with a degree-2 elliptic subcover is the set of matrices \[ Z_C=\left( \begin{array}{cc} \frac{1}{2} (\tau+\tau') & \frac{1}{2} (\tau-\tau') \vspace{1mm}\\ \frac{1}{2} (\tau-\tau') & \frac{1}{2}(\tau+\tau') \end{array}\right)\qquad(\Im\tau>0, \Im\tau'>0) .\] Equivalently, ${\mathcal H}$ is the set of all the matrices of the form $Z=\left( \begin{array}{cc} a & b \\ b & a \end{array}\right)$ ($a,b \in\CC$) such that $\Im Z>0$. \end{corollary} \section[Rank-1 connections on $C$ and their monodromy]{Rank-1 connections on $\boldsymbol{C}$ and their monodromy} \label{rank-1} We start by recalling the def\/inition of a connection. Let $V$ be a curve or a complement of a~f\/inite set in a curve $C$. Let ${\mathcal E}$ be a vector bundle of rank $r\geq 1$ on $V$. We denote by ${\mathcal O}_V$, $\Omega^1_V$ the sheaves of holomorphic functions and 1-forms on $V$ respectively. By abuse of notation, we will denote in the same way vector bundles and the sheaves of their sections. A {\em connection} on ${\mathcal E}$ is a~$\CC$-linear map of sheaves $\nabla:{\mathcal E}\rar{\mathcal E}\otimes\Omega^1_V$ which satisf\/ies the Leibnitz rule: for any open $U\subset V$, $f\in\Gamma(U,{\mathcal O})$ and $s\in\Gamma(U,{\mathcal E})$, $\nabla(fs)=f\nabla(s)+s\:df$. If ${\mathcal E}$ is trivialized by a basis of sections ${\boldsymbol e}=(e_1,\ldots,e_r)$ over $U$, then we can write $\nabla(e_j)=\sum_ia_{ij}e_i$, and the matrix $A({\boldsymbol e})=(a_{ij})$ of holomorphic 1-forms is called the connection matrix of $\nabla$ with respect to the trivialization ${\boldsymbol e}$. If there is no ambiguity with the choice of a trivialization, one can write, by abuse of notation, $\nabla =d+A$. 
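To illustrate how the connection matrix depends on the trivialization in the rank-1 case: by the Leibnitz rule, replacing a trivializing section $e$ by $e'=fe$ turns the connection form $\omega$ into $\omega+df/f$, so a meromorphic change of trivialization creates simple poles with integer residues. A minimal sympy sketch (the specific $f$ and $\omega$ below are illustrative choices, not taken from the text):

```python
import sympy as sp

xi, a, lam = sp.symbols('xi a lambda1')

# Rank-1 connection nabla = d + omega; we track only the coefficient of d(xi)
omega = lam  # a holomorphic form: omega = lambda1 * d(xi)

# Change of trivialization e' = f*e with f meromorphic; the Leibnitz rule
# gives nabla(f e) = f nabla(e) + e df, so the new connection form is
# omega' = omega + f'/f   (as a coefficient of d(xi))
f = xi - a
omega_new = omega + sp.diff(f, xi) / f

# The new simple pole at xi = a is an apparent singularity, with integer residue
assert sp.residue(omega_new, xi, a) == 1
```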
Given $r$ meromorphic sections ${\boldsymbol s}=(s_1,\ldots,s_r)$ which span ${\mathcal E}$ over an open subset, the mat\-rix~$A({\boldsymbol s})$ def\/ined as above is a matrix of meromorphic 1-forms on $V$. Its poles in $V$ are called {\em apparent} singularities of the connection with respect to the meromorphic trivialization ${\boldsymbol s}$. The apparent singularities arise at the points $P\in V$ in which either some of the $s_i$ are non-regular, or all the $s_i$ are regular but $s_i(P)$ fail to be linearly independent. They are not singularities of the connection, but those of the chosen connection matrix. In the case when the underlying vector bundle is def\/ined not only over $V$, but over the whole compact Riemann surface $C$, we can speak about singularities at the points of $C\setminus V$ of the connection itself. To this end, choose local trivializations ${\boldsymbol e}_P$ of ${\mathcal E}$ at the points $P\in C\setminus V$, and def\/ine the local connection matrices $A({\boldsymbol e}_P)$ as above, $\nabla ({\boldsymbol e}_P)={\boldsymbol e}_PA({\boldsymbol e}_P)$. The connection~$\nabla$, regular on $V$, is said to be meromorphic on $C$ if $A({\boldsymbol e}_P)$ has at worst a pole at $P$ for all $P\in C\setminus V$. If, moreover, $A({\boldsymbol e}_P)$ can be represented in the form $A({\boldsymbol e}_P)=B(\tau_P)\frac{d\tau_P}{\tau_P}$, where $\tau_P$ is a local parameter at $P$ and $B(\tau_P)$ is a matrix of holomorphic functions in $\tau_P$, then $P$ is said to be a logarithmic singularity of $\nabla$. A connection is called logarithmic, or Fuchsian, if it has only logarithmic singularities. To def\/ine the {\em monodromy} of a connection $\nabla$, we have to f\/ix a reference point $P_0\in V$ and a basis ${\boldsymbol s}=(s_1,\ldots,s_r)$ of solutions of $\nabla s=0$, $s\in \Gamma(U,{\mathcal E})$ over a small disc $U$ centered at~$P_0$. 
The analytic continuation of the $s_i$ along any loop $\gamma$ based at $P_0$ provides a new basis ${\boldsymbol s}^\gamma=(s_1^\gamma,\ldots,s_r^\gamma)$, and the monodromy matrix $M_\gamma$ is def\/ined by ${\boldsymbol s}^\gamma ={\boldsymbol s} M_\gamma$. The monodromy matrix depends only on the homotopy class of a loop, and the monodromy $\rho_\nabla$ of $\nabla$ is the representation of the fundamental group of $V$ def\/ined by \[ \rho=\rho_\nabla:\pi_1(V,P_0)\lra GL_r(\CC),\qquad \gamma\mapsto M_\gamma. \] Let now $C=V$ be a genus-2 bielliptic curve with an elliptic subcover $\phi:C\rar E$. Our objective is the study of rank-2 connections on $E$ which are direct images of rank-1 connections on $C$. We f\/irst study the rank-1 connections on $C$ and their monodromy representations. Let ${\mathcal L}$ be a line bundle on $C$ and $e$ a meromorphic section of ${\mathcal L}$ which is not identically zero. Then a connection $\nabla_{\mathcal L}$ on ${\mathcal L}$ can be written as $\mathrm{d}+\omega$, where $\omega$ is a meromorphic 1-form on $C$ def\/ined by $\nabla_{\mathcal L} (e) =\omega e$. The apparent singularities are simple poles with integer residues at the points where $e$ fails to be a basis of ${\mathcal L}$. We will start by considering the case when ${\mathcal L}$ is the trivial line bundle ${\mathcal O}={\mathcal O}_C$. Then the natural trivialization of ${\mathcal L}$ is $e=1$, and $\omega$ is a regular 1-form. The vector space $\Gamma(C,\Omega^1_{C})$ of regular 1-forms on $C$ is 2-dimensional; let $\omega_1$, $\omega_2$ be its basis. We can write $\omega=\lambda_1\omega_1+\lambda_2\omega_2$ with $\lambda_1$, $\lambda_2$ in $\CC$. The horizontal sections of ${\mathcal O}$ are the solutions of the equation $\nabla_{\mathcal O} \phi=0$. To write down these solutions, we can represent $C$ as in Proposition \ref{fp} and introduce the multi-valued functions $z_1=\int \omega_1$ and $z_2=\int \omega_2$, normalized by $z_1(\infty_+)=z_2(\infty_+)=0$. 
We denote by the same symbols $z_1$, $z_2$ the f\/lat coordinates on the Jacobian $JC=\CC^2/\Lambda$ associated to the basis $(\omega_1,\omega_2)$ of $\Gamma(C,\Omega^1_{C})$, and $C$ can be considered as embedded in its Jacobian via the Abel--Jacobi map $AJ:C\rar JC$, $P\longmapsto (z_{1}(P), z_{2}(P))$ modulo $\Lambda$. To determine the monodromy, we will choose $P_0=\infty_+$ and f\/ix some generators $\alp_i$, $\beta_i$ of $\pi_1(C,\infty_+)$ in such a way that the natural epimorphism \[ \pi_1(C,\infty_+)\lra H_1(C,\ZZ)=\pi_1(C,\infty_+)/[\pi_1(C,\infty_+),\pi_1(C,\infty_+)] \] is given by $\alp_i\mapsto a_i$, $\beta_i\mapsto b_i$. The following lemma is obvious: \begin{lemma}\label{obvious} The general solution of $\nabla_{\mathcal O} \phi=0$ is given by $\phi =ce^{-\lambda_1 z_1-\lambda_2 z_2} $, where $c$ is a~complex constant. The monodromy matrices of $\nabla_{\mathcal O}$ are $M_{\alp_i}=\exp(-\oint_{a_i} \omega)$, $M_{\beta_i}=\exp(-\oint_{b_i} \omega)$ ($i=1,2$). \end{lemma} Now we turn to the problem of Riemann--Hilbert type: determine the locus of the representations of $G$ which are monodromies of connections $\nabla_{\mathcal L}$. Since any rank-1 representation $\rho$ of~$G$ is determined by 4 complex numbers $\rho(\alp_i)$, $\rho(\beta_i)$, we can take $(\CC^*)^4$ for the moduli space of representations of $G$, in which the image of the Riemann--Hilbert correspondence lives. Before solving this problem on $C$, we will do a similar thing on an elliptic curve $E$. The answer will be used as an auxiliary result for the problem on $C$. Any rank-$1$ representation $\rho:\pi_{1}(E)\ra\CC^{*}$ is determined by the images $\rho(a)$, $\rho(b)$ of the generators $a$, $b$ of the fundamental group of $E$, so that the space of representations of $\pi_{1}(E)$ can be identif\/ied with $\CC^{*}\times\CC^{*}$. We will consider several spaces of rank-$1$ connections.
Let ${\mathcal C}(E, {\mathcal L} )$ be the space of all the connections $\nabla:{\mathcal L} \ra {\mathcal L} \otimes\Omega^{1}_{E}$ on a line bundle ${\mathcal L} $ on $E$. It is non-empty if and only if $\deg {\mathcal L} =0$, and then ${\mathcal C}(E, {\mathcal L} )\simeq\Gamma(E, \Omega^{1}_{E})\simeq\CC$. Further, ${\mathcal C}(E)$ will denote the moduli space of pairs $({\mathcal L} , \nabla)$, that is, ${\mathcal C}(E)=\cup_{[{\mathcal L} ]\in J(E)} {\mathcal C}(E, {\mathcal L} )$. We will also def\/ine the moduli space ${\mathcal C}$ of triples $(E_{\tau}, {\mathcal L} , \nabla)$, ${\mathcal C}=\cup_{\Im\tau>0} {\mathcal C}(E_{\tau})$, and ${\mathcal C}_{\rm triv}=\cup_{\Im\tau>0} {\mathcal C}(E_{\tau}, {\mathcal O}_{E_{\tau}})$, where $E_{\tau}=\CC/(\ZZ+\ZZ \tau)$. For any of these moduli spaces, we can consider the Riemann--Hilbert correspondence map \[ RH:(E_{\tau}, {\mathcal L} , \nabla)\longmapsto (\rho_{\nabla}(a), \rho_{\nabla} (b)), \] where $\rho_{\nabla}$ is the monodromy representation of $\nabla$, and $(a, b)$ is a basis of $\pi_{1}(E)$ corresponding to the basis $(1, \tau)$ of the period lattice $\ZZ+\ZZ \tau$. Remark that $RH\mid_{{\mathcal C}(E, {\mathcal L} )}$ cannot be surjective for dimensional reasons. The next proposition shows that $RH\mid_{{\mathcal C}_{\rm triv}}$ is dominant, though non-surjective, and that $RH\mid_{{\mathcal C}}$ is surjective. \begin{proposition}\label{imageRHonE} In the above notation, \[ RH({\mathcal C}_{\rm triv})=(\CC^{*}\times\CC^{*}\setminus\{S^{1}\times S^{1}\}) \cup \{(1,1)\},\qquad RH({\mathcal C})=\CC^{*}\times\CC^{*}. \] \end{proposition} \begin{proof} Let $\nabla=\mathrm{d}+\omega$ be a connection on an elliptic curve $E$, where $\omega\in\Gamma(E_{\tau}, \Omega^{1}_{E_{\tau}})$, $A=\oint_{a} \omega$, $B=\oint_{b} \omega=\tau A$. By analytic continuation of solutions of the equation $\nabla\phi=0$ along the cycles in $E$, we obtain $\rho(a)=e^{-A}$ and $\rho(b)=e^{-\tau A}$.
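These monodromy formulas can be illustrated numerically: given a target pair $(w_1,w_2)$ with $|w_1|\neq 1$, one solves $e^{z}=w_1$ and then chooses $\tau$ in the upper half-plane with $e^{\tau z}=w_2$. A small sketch (the numerical values below are hypothetical choices for illustration only):

```python
import cmath

# Target monodromy pair (hypothetical values), with |w1| != 1:
w1, w2 = 2.0, 3.0j

# Solve e^z = w1, then e^{tau*z} = w2 with Im(tau) > 0:
z = cmath.log(w1)            # z = ln 2, real and nonzero
tau = cmath.log(w2) / z      # principal branch; arg(w2) = pi/2 > 0
assert tau.imag > 0          # tau lies in the upper half-plane

# The connection d + omega on E_tau with -A = z then has monodromy (w1, w2):
rho_a = cmath.exp(z)
rho_b = cmath.exp(tau * z)
assert abs(rho_a - w1) < 1e-12 and abs(rho_b - w2) < 1e-12
```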
The pair $(-A, -B)=(-A,-\tau A)$ is an element of $\{(0,0)\}\cup\CC^{*}\times\CC^{*}$. By setting $z=-A$, we deduce $RH({\mathcal C}_{\rm triv})=\{(e^{z}, e^{z\tau})\mid(z,\tau)\in\CC\times\HH\}$. The map $\exp:\CC\lra\CC^{*}$ is surjective, so for all $w_1\in\CC^{*}$, we can solve the equation $e^{z}=w_1$, and once we have f\/ixed $z$, it is possible to solve $e^{\tau z}=w_2$ with respect to $\tau$ if and only if $(w_1, w_2)\notin S^{1}\times S^{1} \setminus\{(1,1)\}$. This ends the proof for $RH({\mathcal C}_{\rm triv})$. The proof for $RH({\mathcal C})$ is similar to the genus-$2$ case, see Proposition \ref{imageRHonC} below. \end{proof} From now on, we turn to the genus-$2$ case. We def\/ine the moduli spaces ${\mathcal C}_2 (C, {\mathcal L} )$, ${\mathcal C}_2 (C)$, ${\mathcal C}_2$, ${\mathcal C}_{2,{\rm triv}}$ similarly to the above, so that ${\mathcal C}_2(C)=\cup_{[{\mathcal L} ]\in J(C)} {\mathcal C}_2(C, {\mathcal L} )$, ${\mathcal C}_2=\cup_{Z\in{\mathcal H}} {\mathcal C}_2(C_Z)$, and ${\mathcal C}_{2,{\rm triv}}=\cup_{Z\in{\mathcal H}} {\mathcal C}_2(C_Z, {\mathcal O}_{C_Z})$. Here ${\mathcal H}$ is the locus of periods introduced in Corollary \ref{period_locus}, $C_Z$ is the genus-$2$ curve with period $Z$, $J(C_Z)=\CC^{2}/\Lambda$, where $\Lambda\simeq\ZZ^{4}$ is the lattice generated by the column vectors of the full period matrix $(1\mid Z)$ of $C$. The Riemann--Hilbert correspondence is the map \[ RH:(C_Z, {\mathcal L} , \nabla)\longmapsto (\rho_{\nabla}(\alp_1), \rho_{\nabla}(\alp_2) ,\rho_{\nabla}(\beta_1), \rho_{\nabla}(\beta_2))\in (\CC^{*})^{4}, \] where the generators $\alp_i$, $\beta_i$ of $\pi_1(C)$ correspond to the basis of the lattice $\Lambda$.
\begin{proposition}\label{imageRHonC} In the above notation, \begin{gather*} RH({\mathcal C}_{2,{\rm triv}})=\left\{w\in (\CC^{*})^{4}\mid (w_1 w_2, w_3 w_4)\in W, \left(\frac{w_1}{w_2},\frac{w_3}{w_4}\right)\in W \right\},\\ RH({\mathcal C}_2)=(\CC^{*})^{4}, \end{gather*} where $W$ denotes the locus $RH({\mathcal C}_{\rm triv})$ determined in Proposition~{\rm \ref{imageRHonE}}. \end{proposition} \begin{proof} Let $\nabla=\mathrm{d}+\omega$, $\omega\in\Gamma(C_Z, \Omega^{1}_{C_Z}).$ We can consider $C_Z$ in its Abel--Jacobi embedding in $JC$, then $\omega=\lambda_1\mathrm{d} z_1+\lambda_2\mathrm{d} z_2$, where $(z_1, z_2)$ are the standard f\/lat coordinates on $\CC^{2}/\Lambda$. Therefore, \[ RH(C_Z,{\mathcal O}_Z, \nabla)=\big(e^{\lambda_{1} {z_1}}, e^{\lambda_{2}{z_2}}, e^{\frac{1}{2}(\tau +\tau')\lambda_1 z_1+\frac{1}{2}(\tau-\tau')\lambda_2z_2}, e^{\frac{1}{2}(\tau -\tau')\lambda_1 z_1+\frac{1}{2}(\tau+\tau')\lambda_2z_2}\big). \] Denoting the latter $4$-vector by $w$, we see that $(w_1w_2, w_3w_4)=(e^{z}, e^{\tau z})$ with $z=\lambda_1z_1+\lambda_2z_2$, and $(\frac{w_1}{w_2},\frac{w_3}{w_4})=(e^{z'}, e^{\tau'z'})$ with $z'=\lambda_1z_1-\lambda_2z_2$. Then Proposition \ref{imageRHonE} implies the answer for $RH({\mathcal C}_{2,{\rm triv}})$. Now, we will prove the surjectivity of $RH\mid_{{\mathcal C}_2}$. On a genus-2 curve, any line bundle of degree 0 can be represented in the form ${\mathcal L}={\mathcal O}_C(P_1+P_2-Q_1-Q_2)$ for some 4 points $P_i,Q_i\in C$. It is def\/ined by its stalks: for any $P\in C$, ${\mathcal L}_P={\mathcal O}_P$ if $P\not\in \{P_1,P_2,Q_1,Q_2\}$, ${\mathcal L}_{P_i}=\frac{1}{\tau_{P_i}}{\mathcal O}_{P_i}$, ${\mathcal L}_{Q_i}=\tau_{Q_i}{\mathcal O}_{Q_i}$, where $\tau_P$ denotes a~local parameter at $P$ for any $P\in C$. This implies that the constant function $e=1$ considered as a~section of ${\mathcal L}$ has simple zeros at $P_i$ and simple poles at $Q_i$, that is, for its divisor we can write: $(e)=P_1+P_2-Q_1-Q_2$. 
According to \cite{A}, any line bundle of degree 0 admits a connection, and two connections dif\/fer by a holomorphic 1-form. Hence any connection on ${\mathcal L}$ can be written in the form $\nabla=d+\omega$, $\omega=\nu+\lambda_1dz_1+ \lambda_2dz_2$, where $\nu$ is a meromorphic 1-form with simple poles at $P_i$, $Q_i$ such that $\operatorname{Res}\nolimits_{P_i}\nu=1$, $\operatorname{Res}\nolimits_{Q_i}\nu=-1$ (these are apparent singularities of $\nabla$ with respect to the meromorphic trivialization $e=1$). We can choose the coef\/f\/icients $\lambda_1$, $\lambda_2$ in such a way that $\omega$ will have zero $a$-periods. Let us denote the periods of $\omega$ by $N_i$: \begin{gather}\label{equaperiod} N_1=\int_{a_1}\omega,\qquad N_2=\int_{a_2}\omega,\qquad N_3=\int_{b_1}\omega,\qquad N_4=\int_{b_2}\omega. \end{gather} Then $N_1=N_2=0$ by the choice of $\omega$, and \[ N_{2+j}=2\pi i\sum_{k}\operatorname{Res}\nolimits_{s_k}(\omega)\int_{s_0}^{s_k}dz_j,\qquad j=1, 2, \] by the Reciprocity Law for dif\/ferentials of 1$^{\rm st}$ and 3$^{\rm rd}$ kinds \cite[Section~2.2]{GH}, where $\sum_{k}s_k$ is the divisor of poles $(\omega)_\infty$ of $\omega$, and $s_0$ is any point of $C$. Taking into account that $(\omega)_\infty=(\nu)_\infty=P_1+P_2+Q_1+Q_2$, $\operatorname{Res}\nolimits_{P_i}\nu=1$, $\operatorname{Res}\nolimits_{Q_i}\nu=-1$, and $z_j(P)=\int_{P_0}^Pdz_j$, we can rewrite: \[ N_{2+j}=2\pi i [z_j(P_1)-z_j(Q_1)+z_j(P_2)-z_j(Q_2)]. \] Hence the components of the vector $\frac{1}{2\pi i}\genfrac{(}{)}{0pt}{0}{N_3}{N_4}$ are the $2$ coordinates on $JC$ of the class $[{\mathcal L}]$ of the line bundle ${\mathcal L}$, which is the same as the divisor class $[P_1+P_2-Q_1-Q_2]$. Now, we can f\/inish the proof. Let $(w_i)\in(\CC^*)^4$. Then, we can f\/ind a 1-form $\eta_1$ of a connection on a degree-0 line bundle~${\mathcal L} _1$ with monodromy $(1,1,w_3,w_4)$ by choosing ${\mathcal L} _1$ with coordinates $-\frac{1}{2\pi i}(\log w_3, \log w_4)$ on $JC$.
Interchanging the roles of $a$- and $b$-periods, we will f\/ind another 1-form of connection $\eta_2$ on another degree-0 line bundle ${\mathcal L} _2$, with monodromy $(w_1,w_2,1,1)$. Then $\omega=\eta_1+\eta_2$ is the form of a connection on ${\mathcal L} _1\otimes {\mathcal L} _2$ with monodromy $(w_i)\in(\CC^*)^4$. \end{proof} \section{Direct images of rank-1 connections} \label{Direct_Images} We will determine the direct image connections $f_*(\nabla_{\mathcal L})=\nabla_{\mathcal E}$ on the rank-$2$ vector bundle ${\mathcal E}=f_{*}{\mathcal L} $, where $f:C\rar E$ is an elliptic subcover of degree 2 of $C$. From now on, we will stick to a representation of $C$ in the classical form $y^2=F_6(\xi)$, where $F_6$ is a degree-6 polynomial. We want $E$ to be given by the Legendre equation $ y^2=x(x-1)(x-t), $ with $F_6$ less complicated than in \eqref{eqC}. Of course, this can be done in many dif\/ferent ways. We will f\/ix for $C$ and $f$ the following choices: \begin{gather} f:C=\{y^2=(t'-{\xi^2})(t'-1-{\xi^2})(t'-t-{\xi^2})\} \ra E=\{y^2=x(x-1)(x-t)\},\nonumber \\ (\xi,y) \mapsto (x,y)=(t'-{\xi^2}, y).\label{new_setting}% \end{gather} \begin{lemma} For any bielliptic curve $C$ with an elliptic subcover $f:C\rar E$ of degree $2$, there exist affine coordinates $\xi$, $x$, $y$ on $C$, $E$ such that $f$, $C$, $E$ are given by \eqref{new_setting} for some $t,t'\in \CC\setminus \{0, 1\}$, $t\neq t'$. \end{lemma} \begin{proof} By Proposition \ref{fp}, it suf\/f\/ices to verify that the two elliptic subcovers $E$, $E'$ of the curves~$C$ given by \eqref{new_setting}, as we vary $t$, $t'$, run over the whole moduli space of elliptic curves independently from each other. $E'$ can be determined from \eqref{Jacobi-Ei}. It is a double cover of $\PP^1$ ramif\/ied at $\frac{1}{t'}$, $\frac{1}{t'-1}$, $\frac{1}{t'-t}$, $\infty$. This quadruple can be sent by a homographic transformation to $0$, $1$, $t$, $t'$, hence $E'$ is given by $y^2=x(x-1)(x-t)(x-t')$.
If we f\/ix $t$ and let $t'$ vary, we will obviously obtain all the elliptic curves, which ends the proof. \end{proof} The only branch points of $f$ in $E$ are $p_\pm =(t', \pm y_0)$, where $y_0=\sqrt{t'(t'-1)(t'-t)}$, and thus the ramif\/ication points of $f$ in $C$ are $\tilde{p}_{\pm} =(0, \pm y_0)$. In particular, $f$ is non-ramif\/ied at inf\/inity and the preimage of $\infty\in E$ is a pair of points $\infty_\pm\in C$.\ \ $E$ is the quotient of $C$ by the involution $\iota:C\rar C$, called the Galois involution of the double covering $f$. It is given in coordinates by $\iota:(\xi, y)\mapsto (-\xi, y)$. We f\/irst deal with the case when ${\mathcal L} $ is the trivial bundle ${\mathcal O}_C$, in which case we write $\nabla_{\mathcal O}$ instead of~$\nabla_{\mathcal L}$. The direct image ${\mathcal E}_0=f_{*}{\mathcal O}_C$ is a vector bundle of rank 2 which splits into the direct sum of the $\iota$-invariant and anti-invariant subbundles: ${\mathcal E}_0=(f_{*}{\mathcal O}_C)^+ \oplus (f_{*}{\mathcal O}_C)^-$. The latter subbundles are def\/ined as sheaves by specifying their sections over any open subset $U$ of $E$: \[ \Gamma(U,(f_{*}{\mathcal O}_C)^\pm )= \{s\in \Gamma(f^{-1}(U),{\mathcal O}_C)\ | \ \iota^*(s)=\pm s\}. \] Obviously, the $\iota$-invariant sections are just functions on $E$, so the f\/irst direct summand $(f_{*}{\mathcal O}_C)^+$ is the trivial bundle ${\mathcal O}_E$. The second one is generated over the af\/f\/ine set $E\setminus \{\infty\}$ by a single generator $\xi$, one of the two coordinates on $C$. Thus, we can use $(1, \xi)$ as a basis trivializing ${\mathcal E}_0$ over $E\setminus\{\infty\}$ and compute $\nabla=f_*(\nabla_{\mathcal O})$ in this basis. We use, of course, the constant function~1 to trivialize ${\mathcal O}_C$ and write $\nabla_{\mathcal O}$ in the form \begin{gather}\label{nablaO} \nabla_{\mathcal O}=d+\omega,\qquad \omega=\nabla_{\mathcal O} (1)=\lambda_1\frac{\mathrm{d}\xi}{y}+\lambda_2\frac{\xi\mathrm{d}\xi}{y}.
\end{gather} Rewriting $\nabla_{\mathcal O} (1)=\omega$ in terms of the coordinate $x=t'-\xi^{2}$, we get: \[ \nabla_{\mathcal O} (1)=-\frac{\lambda_1}{2(t'-x)}\frac{\mathrm{d} x}{y} \xi - \frac{\lambda_2}{2}\frac{\mathrm{d} x}{y} 1. \] Likewise, \[ \nabla_{\mathcal O} (\xi)=-\frac{\lambda_1}{2}\frac{\mathrm{d} x}{y}1 -\frac{\lambda_2}{2}\frac{\mathrm{d} x}{y} \xi-\frac{\mathrm{d} x}{2(t'-x)} \xi. \] We obtain the matrix of $\nabla=f_*(\nabla_{\mathcal O} )$ in the basis $(1, \xi)$: \begin{gather}\label{conn_mat} A=\left( \begin{array}{cc} -\frac{\lambda_2}{2y} \mathrm{d} x & -\frac{\lambda_1}{2y} \mathrm{d} x \vspace{1mm}\\ -\frac{\lambda_1}{2(t'-x)y}\mathrm{d} x & -\left(\frac{\lambda_2}{2y} + \frac{1}{2(t'-x)}\right) \mathrm{d} x \end{array}\right). \end{gather} This matrix has poles at the branch points $p_\pm$ with residues \begin{gather}\label{residuesA} {\operatorname{Res}\nolimits_{p_+} A}=\left( \begin{array}{cc} 0 & 0 \\ \frac{\lambda_1}{2y_0} & \frac{1}{2} \end{array}\right), \qquad {\operatorname{Res}\nolimits_{p_-} A}=\left( \begin{array}{cc} 0 & 0 \\ -\frac{\lambda_1}{2y_0} & \frac{1}{2} \end{array}\right) .\end{gather} As the sum of residues of a meromorphic 1-form on a compact Riemann surface is zero, we can evaluate the residue at inf\/inity: \[ \operatorname{Res}\nolimits_{p_-} A + \operatorname{Res}\nolimits_{p_+} A =-\operatorname{Res}\nolimits_{\infty} (A)=\left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right) .\] It is nonzero, hence $A$ is not regular at $\infty$ and has exactly 3 poles on $E$.
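The residues \eqref{residuesA} can be double-checked symbolically: near $p_\pm$ the only pole of \eqref{conn_mat} comes from the factor $1/(t'-x)$, while $y$ is analytic there with value $\pm y_0$, so one may substitute $y=\pm y_0$ before taking residues. A sympy sketch (the symbol names are readability choices):

```python
import sympy as sp

x, tp, y0, l1, l2 = sp.symbols('x tp y0 l1 l2')  # tp = t', l_i = lambda_i

def residue_matrix(y_val):
    # Entries of the connection matrix (coefficients of dx); near p_+- the
    # function y is analytic with value +-y0, so it is replaced by y_val
    A = sp.Matrix([
        [-l2 / (2 * y_val), -l1 / (2 * y_val)],
        [-l1 / (2 * (tp - x) * y_val),
         -(l2 / (2 * y_val) + 1 / (2 * (tp - x)))],
    ])
    return A.applyfunc(lambda entry: sp.residue(entry, x, tp))

res_plus = sp.Matrix([[0, 0], [l1 / (2 * y0), sp.Rational(1, 2)]])
res_minus = sp.Matrix([[0, 0], [-l1 / (2 * y0), sp.Rational(1, 2)]])
assert sp.simplify(residue_matrix(y0) - res_plus) == sp.zeros(2, 2)
assert sp.simplify(residue_matrix(-y0) - res_minus) == sp.zeros(2, 2)
```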
In fact, the pole at $\infty$ is an apparent singularity due to the fact that $(1, \xi)$ fails to be a basis of $f_{*}{\mathcal O}_C$ at $\infty$, as follows from the next proposition: \begin{proposition} Let $f:C\rar E$ be the bielliptic cover \eqref{new_setting}, and $\nabla_{\mathcal O}=d+\omega$ a regular connection on the trivial bundle ${\mathcal O}_C$ with connection form $\omega=\lambda_1\frac{\mathrm{d}\xi}{y}+\lambda_2\frac{\xi\mathrm{d}\xi}{y}$. Then the direct image $\nabla=f_*(\nabla_{\mathcal O})$ is a logarithmic connection on a rank-$2$ vector bundle ${\mathcal E}_0$ over $E$, whose only poles are the two branch points $p_\pm$ of $f$. In an appropriate trivialization of ${\mathcal E}_0$ over $E\setminus\{\infty\}$, $\nabla$ is given by the connection matrix \eqref{conn_mat}, and the residues at $p_\pm$ are given by \eqref{residuesA}. \end{proposition} \begin{proof} If $P\in E$ is not a branch point, then we can choose a small disk $U$ centered at $P$ such that $f^{-1}(U)$ is the disjoint union of two disks $U_\pm$. Let $e_\pm$ be a nonzero $\nabla_{\mathcal O}$-f\/lat section of ${\mathcal O}_C$ over $U_\pm$. Then $(e_+,e_-)$ is a basis of ${\mathcal E}_0$ over $U$ consisting of $\nabla$-f\/lat sections. This implies the regularity of $\nabla$ over $U$ (the connection matrix of $\nabla$ in this basis is zero). We have shown that the only points where the direct image of a regular connection might have singularities are the branch points of the covering. In particular, $\infty$ is not a singularity of~$\nabla$. The fact that the branch points are logarithmic poles follows from the calculation preceding the statement of the proposition. \end{proof} At this point, it is appropriate to comment on the horizontal sections of $\nabla$, which are solutions of the matrix ODE $d\Phi+A\Phi=0$ for the vector $\Phi=\left(\!\!\begin{array}{c}\Phi_1\\ \Phi_2\end{array}\!\!\right)$.
We remark that the matrix ODE is equivalent to one scalar equation of second order which we have not encountered in the literature. It is obtained as follows: the f\/irst line of the matrix equation gives \[ \Phi_2=\frac{2y}{\lambda_1} \Phi'_1 -\frac{\lambda_2}{\lambda_1} \Phi_1, \] where $\Phi_1$, $\Phi_2$ denote the components of a single $2$-vector $\Phi$, and the prime denotes the derivative with respect to $x$. The second equation gives: \[ \Phi'_2=\frac{\lambda_1}{2y(t'-x)} \Phi_1 + \left(\frac{\lambda_2}{2y} + \frac{1}{2(t'-x)}\right)\Phi_2. \] By substituting here $\Phi_2$ in terms of $\Phi_1$, we get one second order equation for $\Phi_1$. By setting $y^2=P_3(x)=x(x-1)(x-t)$, we have $y'=\frac{P'_3}{2y}$ and the dif\/ferential equation for $\Phi_1$ takes the form \[ \Phi_1''+ \left[\frac{P'_{3}(x)}{2P_{3}(x)}-\frac{\lambda_2}{y}+\frac{1}{2(x-t')}\right]\Phi'_1 + \left[\frac{\lambda^{2}_{1}}{4P_{3}(x)(x-t')}+\frac{\lambda^{2}_{2}}{4P_3 (x)}-\frac{\lambda_2}{4(x-t')y}\right]\Phi_1=0. \] We can also write out the second order dif\/ferential equation for $\Phi_1$ with respect to the f\/lat coordinate $z=\int \frac{\mathrm{d} x}{y}$ on $E$. Now, set up the convention that the prime denotes $\frac{\mathrm{d}}{\mathrm{d} z}$. Then, after an appropriate scaling, $x=\wp(z)+\frac{t+1}{3}$, $y=\frac{\wp'(z)}{2}$. Let $z_0, -z_0$ be the solutions of $\wp(z)=t'-\frac{t+1}{3}$ modulo the lattice of periods. Then we have the following equation for $\Phi_1$: \[ \Phi_1''+\left[-\lambda_2 +\frac{\wp'(z)}{2(\wp(z)-\wp(z_0))}\right]\Phi'_1+ \left[\frac{\lambda^{2}_{2}}{4}+\frac{2\lambda^{2}_{1}-\lambda_2\wp'(z)}{8(\wp(z)-\wp(z_0))}\right]\Phi_1 =0. \] We now go over to the general case, in which ${\mathcal L}$ is any line bundle of degree $0$ on $C$ endowed with a regular connection $\nabla_{\mathcal L}$. Then $f_* {\mathcal L}={\mathcal E}$ is a vector bundle of rank $2$ on $E$ endowed with a~logarithmic connection $\nabla_{\mathcal E}=f_{*} \nabla_{\mathcal L}$.
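The elimination of $\Phi_2$ above can be verified symbolically: solving the f\/irst row of $d\Phi+A\Phi=0$ for $\Phi_2$ and substituting into the second row reproduces the displayed second-order equation in $x$, up to the overall factor $2y/\lambda_1$. A sketch assuming sympy:

```python
import sympy as sp

x, t, tp, l1, l2 = sp.symbols('x t tp l1 l2')  # tp = t', l_i = lambda_i
F = sp.Function('Phi1')(x)

P3 = x * (x - 1) * (x - t)
y = sp.sqrt(P3)

# First row of d(Phi) + A*Phi = 0 solved for Phi2 in terms of Phi1:
Phi2 = (2 * y * F.diff(x) - l2 * F) / l1

# Second row: Phi2' = l1/(2 y (tp-x)) Phi1 + (l2/(2 y) + 1/(2 (tp-x))) Phi2
second_row = Phi2.diff(x) - (l1 / (2 * y * (tp - x)) * F
                             + (l2 / (2 * y) + 1 / (2 * (tp - x))) * Phi2)

# The claimed scalar second-order ODE for Phi1:
ode = (F.diff(x, 2)
       + (sp.diff(P3, x) / (2 * P3) - l2 / y + 1 / (2 * (x - tp))) * F.diff(x)
       + (l1**2 / (4 * P3 * (x - tp)) + l2**2 / (4 * P3)
          - l2 / (4 * (x - tp) * y)) * F)

# second_row equals (2 y / l1) * ode; check it coefficient-wise:
expr = sp.expand(l1 * second_row - 2 * y * ode)
for part in (F.diff(x, 2), F.diff(x), F):
    assert sp.simplify(expr.coeff(part)) == 0
```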
We can represent ${\mathcal L}$ in the form ${\mathcal L}={\mathcal O}(\tilde{q}_{1}+ \tilde{q}_{2}- \infty_+ -\infty_-)$ with $\tilde{q}_{1}=(\xi_1,y_1)$ and $\tilde{q}_{2}=(\xi_2, y_2)$ some points of $C$. Their images on $E$ will be denoted by $q_i$, or $(x_i,y_i)$ in coordinates. We will use $1\in\Gamma({\mathcal O}_C)$ as a meromorphic trivialization of ${\mathcal L}$ as in the proof of Proposition \ref{imageRHonC}. In this trivialization, the connection form of $\nabla_{\mathcal L}$ has simple poles at the 4 points $\tilde{q}_{i},\infty_\pm$ with residues $+1$ at $\tilde{q}_{i}$ and $-1$ at $\infty_\pm$. It is easy to invent one example of such a form: $\nu= \frac{1}{2}\left(\frac{y+y_{1}}{\xi-\xi_1} +\frac{y+y_{2}}{\xi-\xi_2}\right)\frac{\mathrm{d}\xi}{y}$. Hence the general form of $\nabla_{\mathcal L}$ is as follows: \begin{gather}\label{eq}\nabla_{\mathcal L}=\mathrm{d}+\omega=\mathrm{d}+\frac{1}{2}\left(\frac{y+y_{1}}{\xi-\xi_1} +\frac{y+y_{2}}{\xi-\xi_2}\right) \frac{\mathrm{d}\xi}{y} +\lambda_1 \frac{\mathrm{d}\xi}{y} +\lambda_2 \frac{\xi\mathrm{d}\xi}{y}.\end{gather} We compute $\nabla_{\mathcal L}(1)$ and $\nabla_{\mathcal L}(\xi)$ and express the result in the coordinates $(x, y)$ of $E$. This brings us to formulas for the connection $\nabla_{\mathcal E}$ on $E$. We obtain: \[ \nabla_{\mathcal L}(1) =\omega.1=\left[\frac{1}{2}\left(\frac{y+y_{1}}{\xi(\xi-\xi_1)} +\frac{y+y_{2}}{\xi (\xi-\xi_2)}\right)+\frac{\lambda_1}{\xi} + \lambda_2\right]\frac{\xi\mathrm{d}\xi}{y}. 
\] Splitting $\frac{1}{\xi-\xi_i}$ into the invariant and anti-invariant parts, we get: \begin{gather*}\nabla_{\mathcal L}(\xi)=\xi\nabla_{\mathcal L}(1)+ \mathrm{d}\xi\cdot 1=\Bigg[\frac{1}{2}\left(\frac{(y+y_{1})\xi_1}{\xi^{2}-\xi^{2}_1} +\frac{(y+y_{2})\xi_2}{\xi^{2}-\xi^{2}_2}\right) +\lambda_1 \\ \phantom{\nabla_{\mathcal L}(\xi)=}{} +\frac{1}{2}\left(\frac{y+y_{1}}{\xi^{2}-\xi^{2}_1} +\frac{y+y_{2}}{\xi^{2}-\xi^{2}_2}\right)\xi + \lambda_2\xi+\frac{y}{\xi^{2}} \xi\Bigg]\frac{\xi\mathrm{d}\xi}{y}. \end{gather*} By using the relations $x=t'-\xi^{2}$, $\mathrm{d} x=-2\xi\mathrm{d}\xi$, we determine the connection $\nabla_{\mathcal E}=d+A$, where $A$ is the matrix of $\nabla_{\mathcal E}$ in the basis $(1,\xi)$: \begin{gather}\label{Conn_Mat} \left(\!\! \begin{array}{cc} -\frac{1}{2} (\frac{1}{2}(\frac{y+y_{1}}{x_1-x} +\frac{y+y_{2}}{x_2-x})+\lambda_2)\frac{\mathrm{d} x}{y} & -\frac{1}{2} (\frac{1}{2}(\frac{(y+y_{1})\xi_1}{x_1-x} +\frac{(y+y_{2})\xi_2}{x_2-x})+\lambda_1)\frac{\mathrm{d} x}{y} \vspace{1mm}\\ -\frac{1}{2}(\frac{1}{2} (\frac{(y+y_{1})\xi_1}{(x_1-x)(t'-x)} +\frac{(y+y_{2})\xi_2}{(x_2-x)(t'-x)})+\frac{\lambda_1}{t'-x})\frac{\mathrm{d} x}{y} & -\frac{1}{2}(\frac{1}{2} (\frac{y+y_{1}}{x_1-x} +\frac{y+y_{2}}{x_2-x})+\lambda_2+ \frac{y}{t'-x}) \frac{\mathrm{d} x}{y} \end{array}\!\!\right)\!.\!\!\!\! \end{gather} We compute $\operatorname{Res}\nolimits_{{p}_{\pm}}A$, where $p_{\pm}$ are the only singularities of $\nabla_{\mathcal E}$: \begin{gather}\label{Residues} \operatorname{Res}\nolimits_{{p}_{\pm}}A=\left( \begin{array}{cc} 0 & 0 \\ \pm\frac{1}{4y_0}(\frac{(y_{1}\pm y_0)\xi_1}{x_1-t'} +\frac{(y_{2}\pm y_0)\xi_2}{x_2-t'}) \pm\frac{\lambda_1}{2y_0} & \frac{1}{2} \end{array}\right).
\end{gather} \begin{proposition}\label{deltaE} Let $f:C\rar E$ be the bielliptic cover \eqref{new_setting}, ${\mathcal L}={\mathcal O}(\tilde{q}_{1}+ \tilde{q}_{2}- \infty_+ -\infty_-)$ with $\tilde{q}_{i}=(\xi_i,y_i)\in C$ ($i=1,2$), and $\nabla_{\mathcal L}=d+\omega$ a regular connection on ${\mathcal L}$ with connection form~$\omega$ defined by \eqref{eq}. Assume that $\xi_i\neq 0$, that is $\tilde{q}_{i}\neq \tilde p_\pm$. Then the direct image $\nabla_{\mathcal E}=f_*(\nabla_{\mathcal L})$ is a logarithmic connection on a rank-$2$ vector bundle ${\mathcal E}$ over $E$ whose only poles are the two branch points $p_\pm$ of $f$. In the meromorphic trivialization of ${\mathcal E}$ defined by $(1,\xi)$, $\nabla_{\mathcal E}$ is given by the connection matrix \eqref{Conn_Mat}, and the residues at $p_\pm$ are given by \eqref{Residues}. \end{proposition} Remark that the points $q_i=f(\tilde q_i)=(x_i, y_i)$ are apparent singularities of $\nabla_{\mathcal E} $. We write down the residues of $A$ at these points for future use: \begin{gather}\label{eqnarray.3.} {\operatorname{Res}\nolimits_{q_1}A}=\left( \begin{array}{cc} \frac{1}{2} & \frac{\xi_1}{2} \vspace{1mm}\\ \frac{1}{2\xi_1 } & \frac{1}{2} \end{array}\right), \qquad {\operatorname{Res}\nolimits_{q_2}A}=\left( \begin{array}{cc} \frac{1}{2} & \frac{\xi_2}{2} \vspace{1mm}\\ \frac{1}{2\xi_2 } & \frac{1}{2} \end{array}\right) . \end{gather} We can also compute $\operatorname{Res}\nolimits_{\infty} A$. First homogenize the equation of $E$ via the change $x=\frac{x_1}{x_0}$, and $y=\frac{x_2}{x_0}$. The homogeneous equation is $x_0x_2^2=x_1^3-(1+t)x_1^2 x_0 +tx_1x_0^2$. Then, setting $v=\frac{x_0}{x_2}$, $u=\frac{x_1}{x_2}$, we obtain the equation $v=u^3-(1+t)u^2v+tuv^2$ in the neighborhood of $\infty$. Near $\infty=(0,0)$, we have $v\sim u^3$, $\frac{\mathrm{d} x}{y}\sim -2\mathrm{d} u$. 
Therefore, $\operatorname{Res}\nolimits_{\infty} A$ is: \begin{gather}\label{eqnarray.4.} \operatorname{Res}\nolimits_{\infty}A=\operatorname{Res}\nolimits_{u=0}A=\left( \begin{array}{cc} -1 & -\frac{\xi_1+\xi_2}{2} \vspace{1mm}\\ 0 & -2 \end{array}\right). \end{gather} \section{Monodromy of direct image connections}\label{monodromy-1} \begin{figure}[t] \centerline{\includegraphics{Machu-fig2}} \caption{Generators of $\pi_1(C\setminus\{\tilde{p}_\pm\}, \infty_+)$. The parts of the arcs represented in solid (resp. dashed) lines are on the upper (resp. lower) sheet.}\label{fig3} \end{figure} We are using the notation of the previous section. We will calculate the monodromy of the direct image connections $\nabla_{\mathcal E}$. We will start by choosing generators of the fundamental group $\pi_1(E\setminus \{p_+,p_-\})$. To express the monodromy of $\nabla$ in terms of periods of $C$, we will f\/irst introduce generators $a_i$, $b_i$, $c_i$ for $\pi_1(C\setminus \{\tilde p_+,\tilde p_-\})$, and then descend some of them to $E$ by applying $f_*$. We choose $\infty_+$ (resp.~$\infty$) as the reference point on $C$ (resp.~$E$). For this def\/inition, assume that $t$, $t'$ are real and $1<t<t'$ (for general $t$, $t'$, the loops $a_i$, $b_i$, $c_i$ are def\/ined up to an isotopy bringing $t$, $t'$ onto the real axis so that $1<t<t'$). $C$ can be represented as the result of gluing two copies of the Riemann sphere along three cuts. We call these copies of the Riemann sphere upper and lower sheets, and the cuts are realized along the rectilinear segments $[-\sqrt{t'}, -\sqrt{t'-1}]$, $[-\sqrt{t'-t}, \sqrt{t'-t}]$ and $[\sqrt{t'-1}, \sqrt{t'}]$. The sheets are glued together in such a way that the upper edge of each cut on the upper sheet is identif\/ied with the lower edge of the respective cut on the lower sheet, and vice versa. Let $\infty_+$ be on the upper sheet, singled out by the condition $\Im y>0$ when $\xi\in\RR$, $\xi \to +\infty$.
This implies that the values of $\Re{y}$, $\Im{y}$ on $\RR$ are as on Fig.~\ref{fig3}, where the loops $a_i$, $b_i$, $c_i$ generating $\pi_1(C\setminus \{\tilde p_+,\tilde p_-\})$ are shown. Remark that the loops $c_i$ are chosen in the form $c_i=d_i\tilde c_id_i^{-1}$, where $d_i$ is a path joining $\infty_+$ with some point close to $\tilde p_\pm$ and $\tilde c_i$ is a small circle around $\tilde p_\pm$ (the values $i=1,2$ correspond to $\tilde p_+$, $\tilde p_-$ respectively). The paths $d_i$ follow the imaginary axis of the upper sheet. Now, we go over to $E$. Set $a=f_*(a_1)$, $b=f_*(b_1)$, and def\/ine the closed paths running round the branch points $p_\pm$ as follows: $\gamma_i=f(d_i)\tilde\gamma_i f(d_i)^{-1}$, where $\tilde\gamma_i$ are small circles around~$p_\pm$ running in the same direction as $f(\tilde c_i)$ (but $f(\tilde c_i)$ makes two revolutions around $p_\pm$, whilst $\tilde\gamma_i$ only one). One can verify that the thus def\/ined generators of both\sloppy\ fundamental groups satisfy the relations $[a_1, b_1]c_1[a_2, b_2]c_2=1$ and $[a,b]\gamma_1\gamma_2=1$ and that the group morphism \mbox{$f_*:\pi_1(C\setminus\{\tilde{p}_\pm\},\infty_+)\lra\pi_1(E\setminus\{p_\pm\},\infty)$} is given by the formulas \[ f_*(a_1)=a,\quad f_*(b_1)=b,\quad f_*(a_2)=\gamma^{-1}_{1}a\gamma_1,\quad f_*(b_2)=\gamma^{-1}_{1} b\gamma_1,\quad f_*(c_i)=\gamma^{2}_{i}\quad (i=1, 2). \] As $\nabla_{\mathcal L}$ is regular at $\tilde p_\pm$, it has no monodromy along $c_i$, and this together with the above formulas for $f_*$ immediately implies that the monodromy matrices $M_{\gamma_i}$ of $\nabla_{\mathcal E}$ are of order 2. We f\/irst assume that ${\mathcal L}={\mathcal O}_C$ is trivial, in which case $\nabla_{\mathcal L}$ is denoted $\nabla_{\mathcal O}$, and $\nabla_{\mathcal E}$ just $\nabla$. As in the previous section, we trivialize ${\mathcal E}_0=f_{*}({\mathcal O}_C)$ by the basis $(1, \xi)$ over $E\setminus\{\infty\}$. 
Splitting the solution $\phi =e^{-\lambda_1 z_1-\lambda_2 z_2}$ of $\nabla_{\mathcal O}\phi=0$ into the $\iota$-invariant and anti-invariant parts, we represent~$\phi$ by a $2$-component vector in the basis $(1, \xi)$: \[ \Phi=\left( \begin{array}{c} e^{-\lambda_2 z_2}\cosh(\lambda_1 z_1) \vspace{1mm}\\ -\frac{e^{-\lambda_2 z_2}}{\xi}\sinh(\lambda_1 z_1) \end{array}\right). \] We have to complete $\Phi$ to a fundamental matrix $\mathbf{\Phi}$, and then we can def\/ine the monodromy~$M_\gamma$ along a loop $\gamma$ by $T_\gamma(\mathbf{\Phi})=\mathbf{\Phi} M_\gamma$, where $T_\gamma$ denotes the analytic continuation along $\gamma$. We already know the f\/irst column of $\mathbf{\Phi}$: this is just $\Phi$. Denote it also by $\mathbf{\Phi_1}$, the column vector $\left(\begin{array}{c}\Phi_{1,1}\\ \Phi_{2,1}\end{array}\right)$. It remains to f\/ind $\mathbf{\Phi_2}= \left(\begin{array}{c}\Phi_{1,2}\\ \Phi_{2,2}\end{array}\right)$ so that \[ \mathbf{\Phi}=\left( \begin{array}{cc} \Phi_{1,1} & \Phi_{1,2} \\ \Phi_{2,1} & \Phi_{2,2} \end{array}\right) \] is a fundamental matrix. By Liouville's theorem, the matrix equation $\mathbf{\Phi'}+ A\mathbf{\Phi}=0$ implies the following scalar equation for $\Psi=\det\mathbf{\Phi}$: $\Psi'+ {\rm Tr}\,(A)\Psi=0$. In our case, ${\rm Tr}\,(A)=-\frac{\lambda_{2}}{y}-\frac{1}{2(t'-x)}$, and we get a solution in the form: $\Psi=\frac{e^{-2 \lambda_{2}z_{2}}}{\sqrt{t'-x}}=\frac{e^{-2 \lambda_{2}z_{2}}}{\xi}$. Thus we can determine $\mathbf{\Phi_2}$ from the system: \begin{gather*} e^{-\lambda_2 z_2}\cosh(\lambda_1 z_1)\Phi_{2, 2} +\frac{e^{-\lambda_2 z_2}}{\xi} \sinh(\lambda_1 z_1)\Phi_{1, 2} = \frac{e^{-2 \lambda_{2}z_{2}}}{\xi}, \\ \Phi'_{1, 2} = \frac{1}{2y} (\lambda_2 \Phi_{1, 2} + \lambda_1 \Phi_{2, 2}). \end{gather*} Eliminating $\Phi_{2, 2}$, we obtain an inhomogeneous f\/irst order linear dif\/ferential equation for $\Phi_{1, 2}$.
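Explicitly (we spell out the elimination for the reader's convenience), the resulting equation is
\[
\Phi'_{1, 2}-\frac{1}{2y}\left(\lambda_2-\frac{\lambda_1}{\xi}\tanh(\lambda_1 z_1)\right)\Phi_{1, 2}= \frac{\lambda_1 e^{-\lambda_{2}z_{2}}}{2y\,\xi\cosh(\lambda_1 z_1)},
\]
and one checks directly that $\Phi_{1, 2}=-e^{-\lambda_2 z_2}\sinh(\lambda_1 z_1)$ solves it.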
Finally, we f\/ind: \[ \mathbf{\Phi_2}=\left( \begin{array}{c} -e^{-\lambda_2 z_2}\sinh(\lambda_1 z_1) \vspace{1mm}\\ \frac{e^{-\lambda_2 z_2}}{\xi}\cosh(\lambda_1 z_1) \end{array}\right). \] Now we can compute the monodromies of $\nabla_{\mathcal E}$ along the loops $a$, $b$, $\gamma_i$. It is convenient to represent the result in a form in which the real and imaginary parts of all the entries are visible whenever $t,t'\in \RR$ and $1<t<t'$. Under this assumption, the entries of the period matrix $\Pi=((a_{ij})\,|\,(b_{ij}))$ of $C$ are real or imaginary and can be expressed in terms of hyperelliptic integrals along the real segments joining branch points. Thus reading the cycles of integration from Fig.~\ref{fig3}, we obtain: \begin{gather*} a_{1,1}=-a_{1,2}=2iK, \qquad K=\int_{\sqrt{t'-t}}^{\sqrt{t'-1}} \frac{\mathrm{d}\xi}{\sqrt{(t'-\xi^2)(t'-1-\xi^2)(t-t'+\xi^2)}}>0,\\ a_{2,1}=a_{2,2}=2iK', \qquad K'=\int_{\sqrt{t'-t}}^{\sqrt{t'-1}} \frac{\xi\mathrm{d}\xi}{\sqrt{(t'-\xi^2)(t'-1-\xi^2)(t-t'+\xi^2)}}>0,\\ b_{1,1}=-b_{1,2}=-2L, \qquad L=\int_{\sqrt{t'-1}}^{\sqrt{t'}} \frac{\mathrm{d}\xi}{\vert{y}\vert}>0,\\ b_{2,1}=b_{2,2}=-2L', \qquad L'=\int_{ \sqrt{t'-1}}^{\sqrt{t'}} \frac{\xi\mathrm{d}\xi}{\vert{y}\vert}>0. \end{gather*} \begin{proposition} The monodromy matrices of the connection $\nabla=f_*(\nabla_{\mathcal O})$, where $\nabla_{\mathcal O}$ is the rank-$1$ connection \eqref{nablaO}, are given by \begin{gather*} M_{a}=\left( \begin{array}{cc} e^{-2i\lambda_2 K'} \cos(2\lambda_1 K) & -e^{-2i\lambda_2 K'}i\sin(2\lambda_1 K) \\ -e^{-2i\lambda_2 K'}i\sin(2\lambda_1K) & e^{-2i\lambda_2 K'} \cos(2\lambda_1K) \end{array}\right), \\ M_{b}=\left( \begin{array}{cc} e^{2\lambda_{2}L'}\cosh(2\lambda_1L) & e^{2\lambda_{2}L'}\sinh(2\lambda_1L) \\ e^{2\lambda_{2}L'} \sinh(2\lambda_1L) & e^{2\lambda_{2}L'} \cosh(2\lambda_1L) \end{array}\right), \\ M_{\gamma_i}=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right) \qquad (i=1,2).
\end{gather*} \end{proposition} Now we turn to the general case of nontrivial ${\mathcal L}$. Our computations done in the special case allow us to guess the form of the fundamental matrix of solutions to $\nabla_{\mathcal E}\Phi=0$, where $\nabla_{\mathcal E}=f_*\nabla_{\mathcal L}$ and $\nabla_{\mathcal L}$ is given by \eqref{eq} (remark that it would not be so easy to f\/ind it directly from~\eqref{Conn_Mat}): \begin{gather*} \mathbf{\Phi}=\frac{1}{2}\left(\begin{array}{cc} e^{-\int\omega}+e^{-\int\omega^*} & e^{-\int\omega}-e^{-\int\omega^*} \vspace{1mm}\\ \frac{1}{\xi}( e^{-\int\omega}-e^{-\int\omega^*}) & \frac{1}{\xi}(e^{-\int\omega}+e^{-\int\omega^*}) \end{array}\right), \end{gather*} where $\omega^*:=\iota^*(\omega)$ is obtained from $\omega$ by the change $\xi\mapsto-\xi$. We deduce the monodromy: \begin{proposition} The monodromy matrices of the connection $\nabla_{\mathcal E}$ given by \eqref{Conn_Mat} are the following: \begin{gather*} M_{a}=\frac{1}{2}\left( \begin{array}{cc} e^{-N_1}+e^{-N_2} & e^{-N_1}-e^{-N_2} \\ e^{-N_1}-e^{-N_2} & e^{-N_1}+e^{-N_2} \end{array}\right), \\ M_{b}=\frac{1}{2}\left( \begin{array}{cc} e^{-N_3}+e^{-N_4} & e^{-N_3}-e^{-N_4} \\ e^{-N_3}-e^{-N_4} & e^{-N_3}+e^{-N_4} \end{array}\right), \qquad M_{\gamma_i}=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right) \qquad (i=1,2). \end{gather*} Here $(N_1, N_2, N_3, N_4)$ are the periods of $\omega$ as defined in \eqref{equaperiod}. \end{proposition} \begin{proof} By a direct calculation using the observation that the periods of $\omega^*$ are $(N_2, N_1, N_4$, $N_3)$. \end{proof} Here the connection (\ref{Conn_Mat}) depends on $6$ independent parameters $\tilde{q_1}$, $\tilde{q_2}$, $\lambda_1$, $\lambda_2$, $t$, $t'$, and the monodromy is determined by the $4$ periods $N_i$. Hence, it is justif\/ied to speak about isomonodromic deformations for this connection. The problem of isomonodromic deformations is easily solved upon an appropriate change of parameters.
Firstly, change the representation of~$\omega$: write $\omega=\omega_0+\lambda_1\omega_1+\lambda_2\omega_2$, where $\omega_0=\nu+\lambda_{10}\omega_1+\lambda_{02}\omega_2$ is chosen with zero $a$-periods, as in the proof of Proposition \ref{imageRHonC}, and assume that $(\omega_1, \omega_2)$ is a normalized basis of dif\/ferentials of f\/irst kind on $C$. Secondly, replace the $2$ parameters $\tilde{q_1}$, $\tilde{q_2}$ by the coordinates $z_1 [L]$, $z_2 [L]$ of the class of $L={\mathcal O}(\tilde{q}_{1}+ \tilde{q}_{2}- \infty_+ -\infty_-)$ in $JC$. Thirdly, replace $(t, t')$ by the period $Z$ of $C$. Then an isomonodromic variety $N_i=\textrm{const}$ $(i=1,\ldots,4)$ is def\/ined, in the above parameters, by the equations \begin{gather*} \genfrac{(}{)}{0pt}{0}{\lambda_1}{\lambda_2}=\textrm{const},\qquad \genfrac{(}{)}{0pt}{0}{z_1 [L]}{z_2 [L]}+Z\genfrac{(}{)}{0pt}{0}{\lambda_1}{\lambda_2}=\textrm{const}. \end{gather*} Thus the isomonodromy varieties can be considered as surfaces in the 4-dimensional relative Jacobian $J({\mathcal C}/{\mathcal H})$ of the universal family of bielliptic curves~${\mathcal C}\rar{\mathcal H}$ over the bielliptic period locus~${\mathcal H}$ introduced in Corollary~\ref{period_locus}. The f\/iber $C_Z$ of ${\mathcal C}$ over a point $Z\in{\mathcal H}$ is a genus-2 curve with period $Z$, and $J({\mathcal C}/{\mathcal H})\rar{\mathcal H}$ is the family of the Jacobians of all the curves $C_Z$ as $Z$ runs over~${\mathcal H}$. The isomonodromy surfaces $S_{\lambda_1,\lambda_2,\mu_1,\mu_2}$ in $J({\mathcal C}/{\mathcal H})$ depend on 4 parameters $\lambda_i$, $\mu_i$. Every isomonodromy surface is a cross-section of the projection $J({\mathcal C}/{\mathcal H})\rar{\mathcal H}$ def\/ined by \[ S_{\lambda_1,\lambda_2,\mu_1,\mu_2}= \left\{(Z,[{\mathcal L}])\ |\ Z\in{\mathcal H},\ [{\mathcal L}]\in JC_Z,\ \genfrac{(}{)}{0pt}{0}{z_1 [L]}{z_2 [L]}=-Z\genfrac{(}{)}{0pt}{0}{\lambda_1}{\lambda_2} +\genfrac{(}{)}{0pt}{0}{\mu_1}{\mu_2}\right\}.
\] \section{Elementary transforms of rank-2 vector bundles} \label{elementary} In this section, we will recall basic facts on elementary transforms of vector bundles in the particular case of rank 2, the only one needed for application to the underlying vector bundles of the direct image connection in the next section. The impact of the elementary transforms is twofold. First, they provide a tool for the identif\/ication of vector bundles. If we are given a vector bundle ${\mathcal E}$ and if we manage to f\/ind a sequence of elementary transforms which connect ${\mathcal E}$ to some ``easy'' vector bundle ${\mathcal E}_0$ (like ${\mathcal O}\oplus {\mathcal O}(-p)$ for a point $p$), we provide an explicit construction of ${\mathcal E}$ and at the same time we determine, or identify, ${\mathcal E}$ via this construction. Second, the elementary transforms make it possible to change the vector bundle endowed with a connection without changing the monodromy of the connection. The importance of such applications is illustrated in the article~\cite{EV-2}, in which the authors prove that any irreducible representation of the fundamental group of a Riemann surface with punctures can be realized by a logarithmic connection on a {\em semistable} vector bundle of degree 0 (see Theorem \ref{thm-EV}). On one hand, this is a far-reaching generalization of Bolibruch's result \cite{AB}, which af\/f\/irms the solvability of the Riemann--Hilbert problem over the Riemann sphere with punctures; on the other hand, this theorem gives rise to a map from the moduli space of connections to the moduli space of vector bundles, for only the class of {\em semistable} vector bundles has a consistent moduli theory. We will illustrate this feature of elementary transforms, allowing us to pass between stable, semistable and unstable bundles, in the next section. Let $E$ be a curve. As before, we identify locally free sheaves on $E$ with associated vector bundles.
Let ${\mathcal E}$ be a rank-$2$ vector bundle on $E$, $p$ a point of $E$, ${\mathcal E}_{\mid p}={\mathcal E}\otimes\CC_p$ the f\/iber of ${\mathcal E}$ at $p$. Here $\CC_p$ is the sky-scraper sheaf whose only nonzero stalk is the stalk at $p$, equal to the 1-dimensional vector space $\CC$. We emphasize that ${\mathcal E}_{\mid p}$ is a $\CC$-vector space of dimension 2, not to be confused with the stalk ${\mathcal E}_p$ of ${\mathcal E}$ at $p$, the latter being a free ${\mathcal O}_p$-module of rank 2. Let $e_1$, $e_2$ be a basis of ${\mathcal E}_{\mid p}$. We extend $e_1$, $e_2$ to sections of ${\mathcal E}$ in a neighborhood of $p$, keeping for them the same notation. We def\/ine the elementary transforms ${\mathcal E}^{+}$ and ${\mathcal E}^{-}$ of ${\mathcal E}$ as subsheaves of ${\mathcal E}\otimes\CC(E)\simeq\CC(E)^{2}$, by giving their stalks at all the points of $E$: \begin{gather} {\mathcal E}^{-}=elm^{-}_{p,e_2} ({\mathcal E}),\qquad {\mathcal E}^{-}_{p}={\mathcal O}_p \tau_P e_1+{\mathcal O}_p e_2,\nonumber\\ \label{def-elms}{\mathcal E}^{+}=elm^{+}_{p,e_1} ({\mathcal E}),\qquad {\mathcal E}^{+}_{p}={\mathcal O}_p \frac {1}{\tau_P} e_1+{\mathcal O}_p e_2,\\ {\mathcal E}^{\pm}_{z}={\mathcal E}_z, \qquad \forall\; z \in E\setminus\{p\},\nonumber \end{gather} where $\tau_P$ denotes a local parameter at $p$. The sheaves thus obtained are locally free of rank $2$. They f\/it into the exact triples: \begin{gather}\label{equa.1.} 0\ra{\mathcal E}^{-}\ra{\mathcal E}\xymatrix@1{\ar[r]^{\gamma}&} \CC_p\ra 0,\qquad 0\ra{\mathcal E}\ra{\mathcal E}^{+}\ra\CC_p\ra 0. \end{gather} Remark that the surjection $\gamma$ restricted to ${\mathcal E}_{\mid p}$ is a projection parallel to the $e_2$ axis; this is the reason why we included in the notation of $elm^{-}$ its dependence on $e_2$.
Thus, if we vary~$e_1$ while keeping~$e_2$ (or keeping the proportionality class of~$e_2$), the isomorphism class of~$elm^{-}_{e_2}$ will not change, but it can change if we vary the proportionality class~$[e_2]$ in the projective line~$\PP ({\mathcal E}_{\mid p})$. For degrees, we have $\deg {\mathcal E}^{\pm}=\deg{\mathcal E}\pm 1$. We can give a more precise version of this equality in terms of the determinant line bundles: $\det {\mathcal E}^{\pm}=\det{\mathcal E} (\pm p).$ Here and further on, given a~line bundle ${\mathcal L}$ and a divisor $D=\sum n_ip_i$ on $E$, we denote by ${\mathcal L}(D)$ (``${\mathcal L}$ twisted by $D$'') the following line bundle, def\/ined as a sheaf by its stalks at all the points of $E$: ${\mathcal L}(D)_z={\mathcal L}_z$ if~$z$~is not among the $p_i$, and ${\mathcal L}(D)_{p_i}= \tau_{p_i}^{-n_i}{\mathcal L}_{p_i}$. For example, the regular sections of ${\mathcal L}(p)$ can be viewed as meromorphic sections of ${\mathcal L}$ with at most a simple pole at $p$, whilst the regular sections of ${\mathcal L}(-p)$ are regular sections of ${\mathcal L}$ vanishing at $p$. For the degree of a twist, we have $\deg{\mathcal L}(D)=\deg{\mathcal L}+\deg D=\deg{\mathcal L}+\sum n_i$, so that $\deg {\mathcal L}(\pm p)=\deg {\mathcal L}\pm 1$. A similar notion of twists applies to higher-rank bundles ${\mathcal E}$: the twist ${\mathcal E}(D)$ can be def\/ined either as ${\mathcal E}\otimes {\mathcal O}(D)$, or via the stalks by replacing ${\mathcal L}$ by ${\mathcal E}$ in the above def\/inition. For degrees, we have $\deg {\mathcal E}(D)=\deg{\mathcal E}+\operatorname{rk}\nolimits {\mathcal E}\cdot \deg D$. Coming back to $\operatorname{rk}\nolimits{\mathcal E}=2$ and twisting ${\mathcal E}$ by $\pm p$, we obtain some more exact triples: \[ 0\ra{\mathcal E}^{+}\ra{\mathcal E}(p)\ra\CC_p\ra 0,\qquad 0\ra{\mathcal E}(-p)\ra{\mathcal E}^{-}\ra\CC_p\ra 0.
\] They are easily def\/ined via stalks, as ${\mathcal E}(p)={\mathcal E}\otimes{\mathcal O}_E (p)$ is spanned by $\frac{1}{\tau_P} e_1$, $\frac{1}{\tau_P} e_2$ at $p$, and ${\mathcal E}(-p)$ by $\tau_Pe_1$, $\tau_Pe_2$. A basis-free description of elms can be given as follows: Let $W\subset{\mathcal E}_{\mid p}$ be a $1$-dimensional vector subspace. Then $elm^{-}_{(p, W)}{({\mathcal E})}$ is def\/ined as the kernel of the composition of natural maps ${\mathcal E}\ra{\mathcal E}_{\mid p}\ra{\mathcal E}_ {\mid p}/W$ (here ${\mathcal E}_{\mid p}$, ${\mathcal E}_ {\mid p}/W$ are considered as sky-scraper sheaves, i.e.\ vector spaces placed at $p$). The positive elm is def\/ined via the duality: \[ elm^{+}_{p, W} ({\mathcal E}):=(elm^{-}_{p,W^{\bot}} ({\mathcal E}^{\vee}))^{\vee}. \] To set a correspondence with the previous notation, we write: \[ elm^{+}_{p,e_1} ({\mathcal E})=elm^{+}_{p,\CC e_1} ({\mathcal E}),\qquad elm^{-}_{p,e_2} ({\mathcal E})=elm^{-}_{p,\CC e_2} ({\mathcal E}). \] One can also def\/ine $elm^{+}$ as an appropriate $elm^{-}$, applied not to ${\mathcal E}$, but to ${\mathcal E}(p)$: \begin{gather}\label{gather.1.} elm^{+}_{p,e_1}=elm^{-}_{p,e_1} ({\mathcal E}(p)).\end{gather} We will now interpret the elementary transforms in terms of ruled surfaces. For a vector bundle ${\mathcal E}$ over $E$, we denote by $\PP({\mathcal E})$ the projectivization of ${\mathcal E}$, whose f\/iber over $z\in E$ is the projective line $\PP({\mathcal E}|_z)$ parameterizing vector lines in ${\mathcal E}_{\mid z}$. It has a natural projection $\PP({\mathcal E})\rar E$ with f\/ibers isomorphic to $\PP^1$ and is therefore called a ruled surface. We will see that the elementary transforms of vector bundles correspond to birational maps between associated ruled surfaces which split into the composition of one blowup and one blowdown. 
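To f\/ix ideas, here is a standard example over the projective line (classical, and logically independent of the sequel): $\PP({\mathcal O}\oplus{\mathcal O}(-n))$ over $\PP^1$ is the Hirzebruch surface $\mathbf{F}_n$, and for $W={\mathcal O}_{\mid p}$ we get
\[
elm^{-}_{p, W}\big({\mathcal O}\oplus{\mathcal O}(-n)\big)=\ker\big({\mathcal O}\oplus{\mathcal O}(-n)\ra ({\mathcal O}\oplus{\mathcal O}(-n))_{\mid p}/W\big)\simeq{\mathcal O}\oplus{\mathcal O}(-n-1),
\]
so an elementary transform centered at a point of the section of square $-n$ yields $\mathbf{F}_{n+1}$, whilst one centered at a point outside this section yields $\mathbf{F}_{n-1}$ $(n\geq 1)$.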
The transfer to ruled surfaces allows us to better understand the structure of ${\mathcal E}$, for it replaces all the line subbundles ${\mathcal L}\subset{\mathcal E}$ by cross-sections of the f\/iber bundle $\PP({\mathcal E})\rar E$; the latter cross-sections being curves in a surface, we can use the intersection theory on the surface to study them. As an example, we will give a~criterion of (semi)stability of ${\mathcal E}$ in terms of the intersection theory on $\PP({\mathcal E})$. Let us return to the setting of the description of elms via bases. We can assume that $e_1$, $e_2$ are rational sections of ${\mathcal E}$, regular and linearly independent at $p$. Let $S=\PP({\mathcal E})$ and let $\pi:S\lra E$ be the natural projection. Then $e_1$, $e_2$ def\/ine two global cross-sections of $\pi$, which will be denoted $\overline{e_1}$, $\overline{e_2}$. If ${\mathcal E}^{-}=elm^{-}_{p, e_2} ({\mathcal E})$, then the natural map ${\mathcal E}^{-}\lra{\mathcal E}$ gives rise to the birational isomorphism of ruled surfaces $S\lra S^{-}=\PP({\mathcal E}^{-})$ which splits into the composition of one blowup and one blowdown, as shown on Fig.~\ref{Fig.6.}. \begin{figure}[t] \centerline{\includegraphics[width=13cm]{Machu-fig3}} \caption{Decomposition of $elm^{-}$ in a blowup followed by a blowdown.}\label{Fig.6.} \end{figure} Let $f_p$ denote the f\/iber $\pi^{-1}(p)\simeq\PP^1$ of $\pi$; we keep the same notation for curves and their proper transforms in birational surfaces. We label some of the curves by their self-intersection; for example, $(\overline{e_1}^{2})_S=\alpha_1$, $(f^{2}_p)_S=0$, $(f^{2}_p)_{\hat{S}}=-1$. For any vector $v\in{\mathcal E}_{\mid_{p}}\setminus\{0\}$, we denote by $[p, v]$ the point of $f_p=\PP({\mathcal E}_{\mid_{p}})$ which is the vector line spanned by~$v$.
Remark that the cross-sections~$\overline{e_1}$,~$\overline{e_2}$ are disjoint in the neighborhood of $p$ where $e_1$, $e_2$ is a basis of ${\mathcal E}$, but $\overline{e_1}$ can intersect $\overline{e_2}$ at a f\/inite number of points where $e_1$, $e_2$ fail to generate ${\mathcal E}$. The positive $elm$ has a similar description. Basically, as $\PP({\mathcal E})\simeq\PP({\mathcal E}\otimes L)$ for any invertible sheaf $L$ on $E$, we have $\PP({\mathcal E})\simeq\PP({\mathcal E}(p))$. Hence, in view of (\ref{gather.1.}), $elm^{+}$ and $elm^{-}$ have the same representation on the level of ruled surfaces. There is also an elegant way to def\/ine $elm^{-}$ using $\pi:S\lra E$: \[ elm^{-}_{p, v} ({\mathcal E})=\pi_{*}(I_{S, [p,v]} (1)). \] Here, $I_{S, [p,v]}$ is the ideal sheaf of the point $[p, v]$, and $F(1)$ denotes the twist of a sheaf $F$ by ${\mathcal O}_{\PP({\mathcal E})/E} (1)$. We have the natural exact triple of an ideal sheaf on $S=\PP({\mathcal E})$: \[ 0\ra I_{S, [p,v]}(1)\ra{\mathcal O}_{S/E} (1) \ra\CC_{[p, v]}\ra 0. \] By a basic property of the tautological sheaf ${\mathcal O}_{S/E} (1)$, we have $\pi_{*}{\mathcal O}_{S/E} (1)\simeq {\mathcal E}$. By applying~$\pi_{*}$, we get the exact triple \[ 0\ra \pi_{*}I_{S, [p,v]}(1)\ra{\mathcal E}\ra\CC_p\ra 0. \] One can prove that in this way we recover the f\/irst exact triple~(\ref{equa.1.}). Now, we will say a few words about (semi)stability in terms of ruled surfaces. \begin{definition} A rank-$2$ vector bundle on a curve $E$ is stable (resp.\ semistable) if for any line subbundle ${\mathcal L}\subset{\mathcal E}$, $\deg{\mathcal L}<\frac{1}{2} \deg{\mathcal E}$ (resp., $\deg{\mathcal L}\leq\frac{1}{2} \deg{\mathcal E}$), or equivalently, if for any surjection ${\mathcal E}\rar{\mathcal M}$ onto a line bundle ${\mathcal M}$, $\deg{\mathcal M}>\frac{1}{2} \deg{\mathcal E}$ (resp., $\deg{\mathcal M}\geq\frac{1}{2} \deg{\mathcal E}$).
A vector bundle is called unstable if it is not semistable. It is called strictly semistable if it is semistable, but not stable. \end{definition} \begin{definition} Let ${\mathcal E}$ be a rank-$2$ vector bundle on a curve $E$. The index of the ruled surface $\pi:S=\PP({\mathcal E})\lra E$ is the minimal self-intersection number of a cross-section of $\pi$: \[ i(S)=\min\{(e)^{2}_{S}\mid e\subset S\textrm{ is a cross-section of $\pi$}\}. \] \end{definition} The assertion of the following proposition is well-known, see e.g.~\cite[p.~55]{LN}. For the reader's convenience, we provide a short proof of it. \begin{proposition}\label{ss-via-rs} ${\mathcal E}$ is stable (resp.\ semi-stable) iff $i(S)>0$ (resp.\ $i(S)\geq0$). \end{proposition} \begin{proof} The cross-sections of $\PP({\mathcal E})\lra E$ are in $1$-to-$1$ correspondence with the exact triples \begin{gather}\label{equa.2.} 0\ra {\mathcal L} _{1}\xymatrix@1{\ar[r]^{\alpha}&}{\mathcal E}\xymatrix@1{\ar[r]^{\beta}&} {\mathcal L} _{2} \ra 0,\end{gather} where ${\mathcal L} _{1}$, ${\mathcal L} _{2}$ are line bundles over $E$. The cross-section $e$ associated to such a triple is $\PP({\mathcal L} _1)\subset\PP({\mathcal E})$. It is the zero locus of the composition of the tautological inclusion ${\mathcal O}_{S/E}(-1)\ra\pi^{*}{\mathcal E}$ with $\pi^{*}\beta$, which is a global section of $\pi^{*}{\mathcal L} _2\otimes{\mathcal O}_{S/E}(1)$. Hence, the normal bundle $N_{e/S}$ is isomorphic to $(\pi^{*}{\mathcal L} _2\otimes{\mathcal O}_{S/E}(1))_{\mid e}\simeq{\mathcal L} _2\otimes {\mathcal L} ^{-1}_1$ (since ${\mathcal O}_{S/E}(-1)_{\mid e}\simeq{\mathcal L} _1$). The stability (resp.\ semi-stability) of ${\mathcal E}$ is equivalent to the fact that $\deg {\mathcal L} _1<\deg {\mathcal L} _2$ (resp.\ $\deg {\mathcal L} _1\leq\deg {\mathcal L} _2$) for any triple (\ref{equa.2.}). As $(e^{2})_S=\deg N_{e/S}=\deg {\mathcal L} _2 - \deg {\mathcal L} _1$, this ends the proof. \end{proof} We will end this section by two lemmas which help to identify vector bundles via the geometry of the associated ruled surfaces.
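Before stating them, here is a minimal illustration of Proposition~\ref{ss-via-rs}: let ${\mathcal E}={\mathcal O}_E\oplus{\mathcal O}_E(-p)$ for a point $p\in E$. The line subbundle ${\mathcal O}_E$ has degree $0>\frac{1}{2}\deg{\mathcal E}=-\frac{1}{2}$, so ${\mathcal E}$ is unstable. Accordingly, any triple \eqref{equa.2.} for this ${\mathcal E}$ has $\deg{\mathcal L} _1+\deg{\mathcal L} _2=-1$ and $\deg{\mathcal L} _1\leq 0$ (indeed, ${\mathcal L} _1$ maps nontrivially to ${\mathcal O}_E$ or to ${\mathcal O}_E(-p)$), whence
\[
(e^{2})_S=\deg {\mathcal L} _2-\deg {\mathcal L} _1=-1-2\deg{\mathcal L} _1\geq -1,
\]
with equality attained for ${\mathcal L} _1={\mathcal O}_E$; thus $i(S)=-1<0$.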
\begin{lemma}\label{direct-sum} Let ${\mathcal E}$ be a rank-$2$ vector bundle over a curve $X$ such that the associated ruled surface $S=\PP({\mathcal E})$ has two disjoint cross-sections $s_1$, $s_2$. Then ${\mathcal E}={\mathcal L}_1\oplus{\mathcal L}_2$, where ${\mathcal L}_i$ are line subbundles of ${\mathcal E}$ corresponding to $s_i$: $s_i=\PP({\mathcal L}_i)$, $i=1,2$. Further, for the self-intersection numbers of $s_i$, we have $(s_1)^2=-(s_2)^2=\deg {\mathcal L} _2 - \deg {\mathcal L} _1$. \end{lemma} \begin{proof} The f\/irst assertion is obvious, and the second one follows from the formula for $(e^{2})_S$ in the proof of Proposition \ref{ss-via-rs}, taking into account that ${\mathcal E}={\mathcal L}_1\oplus{\mathcal L}_2$ f\/its into an exact triple of the form \eqref{equa.2.}. \end{proof} \begin{lemma}\label{elm-on-subb} Let ${\mathcal E}$ be a rank-$2$ vector bundle over a curve $X$, ${\mathcal L}$ a line subbundle of ${\mathcal E}$ and $s=\PP({\mathcal L})$ the associated cross-section of the ruled surface $S=\PP({\mathcal E})$. Let $p\in X$, $[p,v]\in f_p$, where $f_p$ denotes the fiber of $S$ over $p$. Let $S^\pm=\PP({\mathcal E}^\pm)$, where ${\mathcal E}^\pm=elm^\pm_{p,v}({\mathcal E})$, $\pi^\pm:S\dasharrow S^\pm$ the natural birational map, $s^\pm$ the proper transform of $s$ in $S^\pm$ under $\pi^\pm$ $($that is, the closure of $\pi^\pm(s\setminus\{[p,v]\})\,)$, and ${\mathcal L}^\pm$ the line subbundle of ${\mathcal E}^\pm$ such that $s^\pm=\PP({\mathcal L}^\pm)$. Then we have: \begin{enumerate}\itemsep=0pt \item[(i)] If $[p,v]\in s$, then $(s^\pm)^2_{S^\pm}=(s^2)_S-1$, $\deg{\mathcal L}^+=\deg{\mathcal L}+ 1$, and $\deg{\mathcal L}^-=\deg{\mathcal L}$. Moreover, ${\mathcal L}^+\simeq{\mathcal L}(p)$ and ${\mathcal L}^-\simeq{\mathcal L}$. \item[(ii)] If $[p,v]\not\in s$, then $(s^\pm)^2_{S^\pm}=(s^2)_S+1$, $\deg{\mathcal L}^+=\deg{\mathcal L}$, and $\deg{\mathcal L}^-=\deg{\mathcal L}-1$.
Moreover, ${\mathcal L}^+\simeq{\mathcal L}$ and ${\mathcal L}^-\simeq{\mathcal L}(-p)$. \end{enumerate} \end{lemma} \begin{proof} The formulas for $(s^\pm)^2_{S^\pm}$ follow from the behavior of the intersection indices as shown on Fig.~\ref{Fig.6.}, and those for $\deg{\mathcal L}^\pm$ are easily deduced directly from the def\/inition of elementary transforms \eqref{def-elms} by choosing for $e_1$ or $e_2$ a rational trivialization of ${\mathcal L}$. \end{proof} \section{Underlying vector bundles of direct image connection} \label{underlying} Let us return to the setting of Section \ref{Direct_Images}. Consider f\/irst the case when ${\mathcal L}$ is the trivial bundle, ${\mathcal L}={\mathcal O}_C$. The following fact is well known: \begin{lemma} Let $f:X\ra Y$ be a finite morphism of smooth varieties of degree $2$ and $\Delta$ the class of its branch divisor in ${\rm Pic}(Y)$. Then $\Delta$ is divisible by two in ${\rm Pic}(Y)$, and there exists $\delta\in {\rm Pic}(Y)$ such that $2\delta$ is linearly equivalent to $\Delta$ and $f_{*}{\mathcal O}_X={\mathcal O}_Y\oplus{\mathcal O}_Y(-\delta)$. \end{lemma} \begin{proof} See \cite[Section 1]{Mu}. \end{proof} Applying this lemma to $f:C\ra E$, we f\/ind that $f_{*}{\mathcal O}_C={\mathcal O}_E\oplus{\mathcal O}_E(-\delta)$, where $2\delta\simeq p_+ +p_-$. This property determines $\delta$ only modulo $E[2]$, but as we saw in Section~\ref{Direct_Images}, ${\mathcal O}_E (-\delta)$ is trivialized by a section $\xi$ over $E\setminus\{\infty\}$, thus $\delta=\infty$ and $f_{*}{\mathcal O}_C={\mathcal O}_E\oplus{\mathcal O}_E(-\infty)$. We deduce: \begin{proposition} If ${\mathcal L} ={\mathcal O}_C$, then the direct image connection $\nabla_{\mathcal E} =f_{*}(\nabla_{\mathcal L} )$, determined by formula \eqref{Conn_Mat}, is a logarithmic connection on the vector bundle ${\mathcal E}_0={\mathcal O}_E\oplus{\mathcal O}_E(-\infty)$ with two poles at $p_+$, $p_-$.
\end{proposition} Let now ${\mathcal L} $ be an arbitrary line bundle over $C$ of degree $0$. By continuity, $\deg f_{*}{\mathcal L} =\deg f_{*}{\mathcal O}_C=-1$. To determine $f_{*}{\mathcal L} $, we use the following lemma: \begin{lemma}\label{lemma5.3}\label{oh-la-la} Let $X$ be a nonsingular curve, $p$ a point in $X$, and $z$ a local parameter at $p$. Let ${\mathcal E}$ be a rank-$2$ vector bundle on $X$ with a meromorphic connection $\nabla$, regular at $p$. Let $s_1$, $s_2$ be a pair of meromorphic sections of ${\mathcal E}$, linearly independent over $\CC(X)$ and ${\mathcal F}=\langle s_1, s_2\rangle$ the subsheaf of ${\mathcal E}\otimes\CC(X)$ generated by $s_1$, $s_2$ as an ${\mathcal O}_X$-module. Let ${\mathcal A}$ be the matrix of $\nabla$ with respect to the $\CC(X)$-basis $s_1$, $s_2$ and $A={\rm res}_{z=0}{\mathcal A}$. Assume that ${\mathcal E}_{\mid p}$ has a basis $v_1$, $v_2$ consisting of eigenvectors of $A$. Then $v_1$, $v_2$ extend to a basis of the stalk ${\mathcal E}_p$, the corresponding eigenvalues $n_1$, $n_2$ of $A$ are integers and we have the following relations between the stalks of subsheaves of ${\mathcal E}\otimes\CC(X)$ at $p$: \begin{gather*} {\mathcal E}_p=\langle v_1, v_2\rangle,\qquad {\mathcal F}_p=\langle z^{n_{1}} v_1, z^{n_{2}} v_2\rangle,\\ {\rm if} \ \ n_1=1,\ \ n_2=0,\quad {\mathcal E}_{p}=elm^{+}_{p, v_1} ({\mathcal F}_p),\\ {\rm if} \ \ n_1=-1,\ \ n_2=0,\quad {\mathcal E}_{p}=elm^{-}_{p, v_2} ({\mathcal F}_p),\\ {\rm if} \ \ n_1=n_2,\quad {\mathcal E}_p=({\mathcal F}(n_1))_p. \end{gather*} \end{lemma} \begin{proof} Straightforward. \end{proof} Let us apply this lemma to the connection $\nabla_{\mathcal E} $, given by formula (\ref{Conn_Mat}) in the basis $(1, \xi)$.
We have: ${\mathcal E}_{p}\neq {\mathcal F}_p\iff p \in\{q_1, q_2, \infty\},$ \[ A_{i}={\operatorname{Res}\nolimits_{q_i}{\mathcal A}}=\left( \begin{array}{cc} \frac{1}{2} & \frac{\xi_i}{2} \\ \frac{1}{2\xi_i } & \frac{1}{2} \end{array}\right) \quad (i=1, 2),\qquad A_{\infty}=\operatorname{Res}\nolimits_{\infty}{{\mathcal A}}=\left( \begin{array}{cc} -1 & -\frac{\xi_1+\xi_2}{2} \\ 0 & -2 \end{array}\right). \] We list the eigenvectors $v^{(i)}_{j}$, $v^{(\infty)}_{j}$ together with the respective eigenvalues for the matrices $A_i$, $A_{\infty}$: \begin{gather*} v^{(i)}_{1}=\left( \begin{array}{c} - \xi_i \\ 1 \end{array}\right), \quad \eta^{i}_{1}=0, \qquad \ v^{(i)}_{2}=\left( \begin{array}{c} \xi_i \\ 1 \end{array}\right), \quad \eta^{i}_{2}=1, \\ \ v^{(\infty)}_{1}=\left( \begin{array}{c} 1 \\ 0 \end{array}\right),\quad \eta^{\infty}_{1}=-1, \qquad \ v^{(\infty)}_{2}=\left( \begin{array}{c} \frac{\xi_1+\xi_2}{2} \\ 1 \end{array}\right),\quad \eta^{\infty}_{2}=-2. \end{gather*} Applying Lemma \ref{lemma5.3} (twice at $\infty$), we obtain the following corollary: \begin{corollary}\label{two-elms} Let ${\mathcal L}={\mathcal O}_C(\tilde q_1+\tilde q_2-\infty_+-\infty_-)$, $\tilde q_i=(\xi_i, y_i)$, $q_i=f(\tilde q_i)$, $i=1,2$, as in Proposition \ref{deltaE}, and let $v_j^{(i)}$ be the eigenvectors of $A_i$ as above. Then \[ {\mathcal E}=elm^{+}_{q_1, v_2^{(1)}}elm^{+}_{q_2, v_2^{(2)}} ({\mathcal E}_0(-\infty)). \] \end{corollary} \begin{remark} Note that though the sheaf-theoretic direct image $f_*{\mathcal L}$ does not depend on the choice of a connection $\nabla_{\mathcal L}$ on ${\mathcal L}$, our method of computation of $f_*{\mathcal L}$, given by Corollary \ref{two-elms}, uses the direct image connection $\nabla_{\mathcal E}=f_*\nabla_{\mathcal L}$ for some $\nabla_{\mathcal L}$. \end{remark} \begin{proposition}\label{generic-case} For generic ${\mathcal L} \in {\rm Pic}(C)$, the rank-$2$ vector bundle ${\mathcal E}$ is stable.
\end{proposition} \begin{proof} Starting from the ruled surface $S_0=\PP({\mathcal E}_0)=\PP({\mathcal O}_E\oplus{\mathcal O}_E(-\infty))$, we apply two elementary transforms $S_0\ra S_1\ra S_2=\PP({\mathcal E})$, and we have to prove that any cross-section of $S_2$ has strictly positive self-intersection, provided that $\tilde q_i=(\xi_i, y_i)$ are suf\/f\/iciently generic. For a rational section $s$ of ${\mathcal E}_0$, let us denote by $\overline s$ the associated cross-section of $S_0$. $S_0$ is characterized by the existence of two distinguished sections $\overline s_1$, $\overline s_2 $ associated to $s_1=1$, $s_2=\xi$ with self-intersections $\overline s_1 ^2=-1$, $\overline s_2 ^2=1$, and we have the relations $\overline s_1 \overline s_2=0$, $\overline s_2\sim \overline s_1 + f_{\infty}$, where $f_p=\pi^{-1}(p)$ is the f\/iber of the structure projection $\pi:S_0\lra E$. When there is no risk of confusion, we will keep the same notation for curves and their proper transforms in birational surfaces. Any cross-section~$\overline s$ is linearly equivalent to $\overline s_1 + f_{p_1}+\dots +f_{p_r}$ for some points $p_1,\ldots, p_r$ in $E$, and $\overline s^2=2r-1$. In particular, $i(S_0)=-1$, attained on $\overline s_1$. Remark that $\overline s_1$ is rigid, whilst $\overline s_2$ moves in the pencil $|\overline s_1+ f_{\infty}|$. Let us apply $elm^{+}_{q_1, v_2^{(1)}}$. First, we blow up $P_1=[q_1, v_2^{(1)}]$. Let $\overline e_1$ be the corresponding $(-1)$-curve and $\hat{S_0}$ the blown up surface. For the self-intersection numbers of the cross-sections, we have the following relations: $(\overline s^2)_{\hat{S_0}}=(\overline s^2)_{S_0}$ if $P_1\notin\overline s$ and $(\overline s^2)_{\hat{S_0}}=(\overline s^2)_{S_0} -1$ if $P_1\in \overline s$. Hence, $\hat{S_0}$ has only one cross-section for each one of the self-intersection numbers $-1$, $0$, and $(\overline s^2)_{\hat{S_0}} \geq 1$ for all the other cross-sections.
The cross-section with self-intersection $-1$ is $\overline s_1$ and the one with self-intersection $0$ is the proper transform of the unique member $\overline s_{P_1}$ of the pencil $\mid \overline s_1+ f_\infty \mid$ on $S_0$ going through $P_1$, see Fig.~\ref{Fig.7.}. The next step is the blowdown of $f_{q_{1}}\subset\hat{S_0}$. The self-intersection number of all the cross-sections of~$\hat{S_0}\ra E$ that meet $f_{q_{1}}$ goes up by $1$. We conclude that $S_1=\PP(elm^{+}_{q_1, v_2^{(1)}} ({\mathcal E}_0))$ has two cross-sections~$\overline s_{P_1}$, $\overline s_1$ with square~$0$, and $(\overline s^{2})_{S_1}\geq 2$ for any other cross-section of~$S_1$. In the language of vector bundles, this means that ${\mathcal E}_1$ is the direct sum of two line bundles of degree $0$. More precisely, ${\mathcal E}_1={\mathcal O}_E\oplus{\mathcal O}_E (q_1 -\infty)$ by Lemma~\ref{direct-sum}, the f\/irst summand corresponding to $\overline s_1$ and the second one to $\overline s_{P_1}$. \begin{figure}[t] \centerline{\includegraphics[width=9cm]{Machu-fig4}} \caption{The ruled surface $S_0$. The pencil $\mid \bar s_1+f_\infty \mid$ has a unique member passing through $P_i$ for each $i=1, 2$.} \label{Fig.7.} \end{figure} The second elementary transform is performed at $P_2\in S_1$. As $P_2\notin\overline s_{P_1}\cup\overline s_1$, the minimal self-intersection number of a cross-section in $S_1$ passing through $P_2$ is $2$. The elementary transform decreases by $1$ the self-intersection of such cross-sections and increases by $1$ the self-intersection of all other cross-sections (Lemma \ref{elm-on-subb}). Hence, $i(S_2)=1$, the value attained on many cross-sections, for example, $\overline s_{P_2}$, $\overline s_{P_1}$, $\overline s_1$. This ends the proof. \end{proof} \begin{theorem}[Atiyah, \cite{A}] For any line bundle ${\mathcal N} $ of odd degree over an elliptic curve $E$, there exists one and only one stable rank-$2$ vector bundle on $E$ with determinant ${\mathcal N} $.
\end{theorem} Using Atiyah's theorem in our case, we have $\deg {\mathcal N} =-1$ , so that ${\mathcal N} $ can be represented in the form ${\mathcal N} ={\mathcal O}_E (-q)$ for some $q\in E$. ${\mathcal E}$ is obtained as the unique non-trivial extension of vector bundles: \[ 0\ra{\mathcal O}_E(-q)\ra{\mathcal E}\ra{\mathcal O}_E\ra 0. \] Moreover, the correspondence ${\mathcal E}\leftrightarrow q$ identif\/ies the moduli space ${\mathcal M}^{s}_{E} (2, -1)$ of rank-$2$ stable vector bundles of degree $-1$ over $E$ with $E$ itself. We deduce: \begin{corollary}\label{map-to-E} Under the above identification ${\mathcal M}^{s}_{E} (2, -1)\simeq E$, the rational map: \begin{gather*} f:JC \dashrightarrow {\mathcal M}^{s}_{E} (2, -1), \\ {\mathcal L} ={\mathcal O}_C(\tilde{q_1} +\tilde{q_2}-\infty_+ -\infty_-) \mapsto f_{*}({\mathcal L} ) \end{gather*} can be given by \[ [\tilde{q_1} +\tilde{q_2}-\infty_+ -\infty_-] \mapsto [q_1+q_2-2\infty]. \] \end{corollary} Now we go over to the nongeneric line bundles ${\mathcal L}$. The direct image $f_{*}{\mathcal L} $ can be unstable for special ${\mathcal L} $. This may happen when either the argument of Proposition \ref{generic-case} does not work anymore, or when formulas (\ref{Conn_Mat})--(\ref{eqnarray.4.}) are not valid. We list the cases which need a separate analysis in the next proposition. \begin{proposition} Let ${\mathcal L} ={\mathcal O}_C(\tilde{q_1} +\tilde{q_2}-\infty_+ -\infty_-)$, ${\mathcal E}=f_*({\mathcal L})$, ${\mathcal E}_0=f_*{\mathcal O}_C$, as above. Whenever $\tilde{q_i}$ is finite, it will be represented by its coordinates: $\tilde{q_i}=(\xi_i,y_i)$. The following assertions hold: \begin{enumerate}\itemsep=0pt \item[(a)] If $\tilde{q_1} +\tilde{q_2}$ is a divisor in the hyperelliptic linear series $g^{1}_{2}(C)$ (that is $\xi_1=\xi_2$, $y_1=-y_2$, or $\{\tilde{q}_1,\tilde{q}_2\}=\{\infty_+,\infty_-\}$), then ${\mathcal E}\simeq{\mathcal E}_0$, and hence ${\mathcal E}$ is unstable. 
\item[(b)] If $\tilde{q_1}=\tilde{q_2}\neq \infty_\pm$, then ${\mathcal E}\simeq {\mathcal O}_E{(-\infty)}\oplus{\mathcal O}_E({2q_1-2\infty})$ is unstable. \item[(c)] If $\tilde{q_i}={\infty_{\pm}}$ for at least one value $i\in \{1,2\}$, then ${\mathcal E}\simeq {\mathcal O}_E(-2\infty+q_{3-i})\oplus{\mathcal O}_{E}$ is unstable. \item[(d)] If $\tilde{q_i}=\tilde{p}_\pm$ for exactly one value $i\in \{1,2\}$, then ${\mathcal E}$ is a stable bundle of degree $-1$ with $\det{\mathcal E}\simeq {\mathcal O}_E(q_{3-i}+p_\pm-3\infty)$. \end{enumerate} \end{proposition} \begin{proof} (a) In this case, $s_{p_1}=s_{p_2}$, $\tilde{q_1} +\tilde{q_2}\sim{\infty_+}+{\infty_-}$, then ${\mathcal L} \simeq{\mathcal O}_C$, and ${\mathcal E}\simeq{\mathcal E}_0$. (b) Let, for example, $i=1$. Then ${\mathcal L} ={\mathcal O}_C(\tilde{q_1} +\tilde{q_2}-\infty_+ -\infty_-)$ degenerates to ${\mathcal L} ={\mathcal O}_C(2\tilde{q_1}-\infty_+ -\infty_-)$. In this case, the matrix of a regular connection on ${\mathcal L} $ in the rational basis~$1$ of ${\mathcal L} ={\mathcal O}_C(2\tilde{q_1}-\infty_+ -\infty_-)\hookrightarrow{\mathcal O}_C(2\tilde{q_1})\hookleftarrow{\mathcal O}_C\ni 1$ is a rational $1$-form with residues~$2$ at~$\tilde{q_1}$ and $-1$ at points~$\infty_{\pm}$. Such a $1$-form can be written by the same formula $\omega=\frac{1}{2}\big(\frac{y+y_{1}}{\xi-\xi_1} +\frac{y+y_{2}}{\xi-\xi_2}\big) \frac{\mathrm{d}\xi}{y} +\lambda_1 \frac{\mathrm{d}\xi}{y} +\lambda_2 \frac{\xi\mathrm{d}\xi}{y}$, as in the general case, but now we substitute $\xi_2=\xi_1$, $y_2=y_1$ in it: \[ \omega=\frac{y+y_{1}}{\xi-\xi_1}\frac{\mathrm{d}\xi}{y} +\lambda_1 \frac{\mathrm{d}\xi}{y} +\lambda_2 \frac{\xi\mathrm{d}\xi}{y}. \] Assume that $\xi_1=\xi_2\neq 0$; the case when $\tilde q_1=\tilde q_2=\tilde p_\pm$ should be treated separately as in~(d). 
Then we get the matrix $A$ of $\nabla_{\mathcal E} $ by substituting $\xi_2=\xi_1$, $y_2=y_1$, $x_2=x_1$ into formulas (\ref{Conn_Mat})--(\ref{eqnarray.4.}). We obtain the following residues: \begin{gather*} \operatorname{Res}\nolimits_{p_\pm}A=\left( \begin{array}{cc} 0 & 0 \\ \frac{1}{2}\frac{(y_1\pm y_{0})\xi_1}{(x_1-t')}\pm\frac{\lambda_1}{2y_0} & \frac{1}{2} \end{array}\right), \qquad \operatorname{Res}\nolimits_{q_1}A=\left( \begin{array}{cc} 1 & \xi_1 \\ \xi_1 & 1 \end{array}\right),\\ \operatorname{Res}\nolimits_{\infty}A=\operatorname{Res}\nolimits_{u=0}A=\left( \begin{array}{cc} -1 & -\xi_1 \\ 0 & -2 \end{array}\right). \end{gather*} As in the proof of Proposition \ref{generic-case}, we can describe ${\mathcal E}$ as the result of two successive positive elms applied to ${\mathcal E}_0(-\infty)$. In contrast to the general case, considered in Lemma \ref{oh-la-la}, the second elm has as its center the point $\tilde{P_1}=\overline{s}_{P_1} \cap\tilde{f_{q_1}}\subset S_1$, where $\tilde{f_{q_1}}$ is the f\/iber of $S_1\ra E$ over~$q_1$. As $(\overline{s}_{P_1})^2_{S_1}=0$, the resulting surface $S_2$ has a cross-section with self-intersection $-1$, thus $i(S_2)=-1$, and consequently ${\mathcal E}$ is unstable. Applying Lemmas \ref{direct-sum} and \ref{elm-on-subb}, we can identify it with ${\mathcal O}_E{(-\infty)}\oplus{\mathcal O}_E({2q_1-2\infty})$. (c) Let, for example, $\tilde{q_2}={\infty_{-}}$. Then ${\mathcal L} $ degenerates to ${\mathcal O}_C(\tilde{q_1}-{\infty_{+}})$, and we can again write the connection in the same way as in the previous case. ${\mathcal E}$ is obtained from ${\mathcal E}_0(-\infty)$ by $2$ positive elms. From Lemmas \ref{direct-sum} and \ref{elm-on-subb}, we deduce that ${\mathcal E}\simeq{\mathcal O}_E(-2\infty+q_1)\oplus{\mathcal O}_{E}$. (d) One of the points $\tilde{p}_{\pm}$ collides with $\tilde{q_i}$. This corresponds to $\xi_{i}= 0$. So, we assume that $\xi_{2}=0$, $\xi_{1}\neq 0$ ($\tilde{q_2}=\tilde{p}_+$).
Hence, the $1$-form of the connection can be written as follows: \[ \omega=\frac{1}{2}\left(\frac{y+y_0}{\xi}+\frac{y+y_{1}}{\xi-\xi_1}\right)\frac{\mathrm{d}\xi}{y} +\lambda_1 \frac{\mathrm{d}\xi}{y} +\lambda_2 \frac{\xi\mathrm{d}\xi}{y}. \] The matrix $A$ is given by \begin{gather*} A=\left( \begin{array}{cc} -\frac{1}{2} \left(\frac{1}{2}\frac{y+y_{1}}{x_1-x} +\frac{y+y_{0}}{t'-x}+\lambda_2\right)\frac{\mathrm{d} x}{y} & -\frac{1}{2}\left(\frac{1}{2} \frac{(y+y_{1})\xi_1}{x_1-x}+\lambda_1\right)\frac{\mathrm{d} x}{y} \vspace{1mm}\\ -\frac{1}{2}\left(\frac{1}{2} \frac{(y+y_{1})\xi_1}{(x_1-x)(t'-x)}+\frac{\lambda_1}{t'-x}\right)\frac{\mathrm{d} x}{y} & -\frac{1}{2}\left(\frac{1}{2}\left(\frac{y+y_{1}}{x_1-x} +\frac{y+y_{0}}{t'-x}\right)+\lambda_2 +\frac{y}{t'-x}\right) \frac{\mathrm{d} x}{y} \end{array}\right). \end{gather*} Its residues at f\/inite points are: \begin{gather*} \operatorname{Res}\nolimits_{p_+}A=\left( \begin{array}{cc} \frac{1}{2} & 0 \vspace{1mm}\\ \frac{1}{4}\frac{(y_0+y_{1})\xi_1}{(x_1-t')}+\frac{\lambda_1}{2y_0} & 1 \end{array}\right), \qquad \operatorname{Res}\nolimits_{p_-}A=\left( \begin{array}{cc} 0 & 0 \vspace{1mm}\\ \frac{1}{4}\frac{(y_1-y_0)\xi_1}{(x_1-t')}-\frac{\lambda_1}{2y_0} & \frac{1}{2} \end{array}\right), \\ \operatorname{Res}\nolimits_{q_1}A=\left( \begin{array}{cc} \frac{1}{2} & \frac{\xi_1}{2} \vspace{1mm}\\ \frac{1}{2\xi_1 } & \frac{1}{2} \end{array}\right). \end{gather*} Here ${\mathcal E}_0={\mathcal O}_E\oplus{\mathcal O}_E(-\infty)$, the f\/irst elm applied to ${\mathcal E}_0$ gives ${\mathcal E}_1={\mathcal O}_E\oplus{\mathcal O}_E(p_+ -\infty)$, and the second one transforms ${\mathcal E}_1$ into a stable vector bundle ${\mathcal E}_2$ which f\/its into the exact triple \[ 0\ra{\mathcal O}_E \ra{\mathcal E}_2\ra{\mathcal O}_E(q_1+p_+ -\infty)\ra 0. \] Thus, the resulting vector bundle ${\mathcal E}=f_{*}({\mathcal L} )={\mathcal E}_2(-\infty)$ behaves exactly as in the general case ($\tilde{p_+}\neq\tilde{q_2})$. 
\end{proof} Next we will discuss Gabber's elementary transforms as def\/ined by Esnault and Viehweg~\cite{EV-2}. Gabber's transform of a pair $({\mathcal E}, \nabla)$, consisting of a vector bundle ${\mathcal E}$ over a curve and a logarithmic connection on ${\mathcal E}$ is another pair $({\mathcal E}', \nabla')$, where ${\mathcal E}'$ is an elementary transform of ${\mathcal E}$ at some pole~$p$ of~$\nabla$, and one of the eigenvalues of $\operatorname{Res}\nolimits_p \nabla'$ dif\/fers by~$1$ from the respective eigenvalue of $\operatorname{Res}\nolimits_p \nabla$, whilst the other eigenvalues as well as the other residues remain unchanged. We adapt the def\/inition of Esnault--Viehweg to the rank-2 case and to our notation: \begin{definition}\label{def-gabber} Let ${\mathcal E}$ be a rank-$2$ vector bundle on a curve $X$, $\nabla$ a logarithmic connection on ${\mathcal E}$, $p\in X$ a pole of $\nabla$, and $v\in {\mathcal E}|_p$ an eigenvector of the residue $\operatorname{Res}\nolimits_p(\nabla)\in\End ({\mathcal E}|_p)$. The Gabber transform $elm_{p,v}({\mathcal E},\nabla)$ is a pair $({\mathcal E}', \nabla')$ constructed as follows: \begin{enumerate}\itemsep=0pt \item[(i)] ${\mathcal E}' =elm_{p,v}^{+}({\mathcal E})$. \item[(ii)] $\nabla'$ is identif\/ied with $\nabla$ under the isomorphism ${\mathcal E}|_{X-p}\simeq{\mathcal E}'|_{X-p}$ as a meromorphic connection over $X-p$, and this determines $\nabla'$ as a meromorphic connection over $X$. \end{enumerate} \end{definition} By a local computation of $\nabla'$ at $p$ one proves: \begin{lemma} In the setting of Definition {\rm \ref{def-gabber}}, let us complete $v$ to a basis $(e_1=v,e_2)$ of ${\mathcal E}$ near $p$, so that ${\mathcal E}'_p={\mathcal O}_p\cdot\frac{1}{\tau_p}v +{\mathcal O}_p\cdot e_2$ and the matrix $R$ of $\operatorname{Res}\nolimits_p(\nabla)$ has the form $R=\left(\begin{array}{cc} \lambda_1& *\\ 0&\lambda_2\end{array}\right)$. 
Then $\nabla'$ is a logarithmic connection on ${\mathcal E}'$ and the matrix $R'$ of its residue at $p$ computed with respect to the basis $(e'_1,e'_2)=(\frac{v}{\tau_p},e_2)$ of ${\mathcal E}'$ has the form $R'=\left(\begin{array}{cc} \lambda_1-1& 0\\ *&\lambda_2\end{array}\right)$. \end{lemma} \begin{theorem}[Bolibruch--Esnault--Viehweg \cite{AB,EV-2}] \label{thm-EV} Let ${\mathcal E}$ be a rank-$r$ vector bundle on a~curve $X$, $\nabla$ a logarithmic connection on ${\mathcal E}$, and assume that the pair $({\mathcal E}, \nabla)$ is irreducible in the following sense: ${\mathcal E}$ has no $\nabla$-invariant subbundles ${\mathcal F} \subset{\mathcal E}$. Then there exists a sequence of Gabber's transforms that replaces $({\mathcal E}, \nabla)$ by another pair $({\mathcal E}', \nabla')$, in which ${\mathcal E}'$ is a semistable vector bundle of degree $0$ and $\nabla'$ is a logarithmic connection on ${\mathcal E}'$ with the same singular points and the same monodromy as $\nabla$. \end{theorem} We illustrate this theorem by presenting explicitly a single Gabber transform which takes our bundle ${\mathcal E}=f_*{\mathcal L}$ of degree $-1$ into a semistable bundle ${\mathcal E}'$ of degree $0$: \begin{proposition} \label{illu} Let ${\mathcal E}$, $\nabla$ be as in Proposition {\rm \ref{generic-case}}. Let $v$ be an eigenvector of $\operatorname{Res}\nolimits_{p_{+}}(\nabla)$ with eigenvalue $\frac{1}{2}$ (see formula \eqref{Residues}). Then the Gabber transform $({\mathcal E}',\nabla')=elm^{+}_{p_{+}, v} ({\mathcal E},\nabla)$ satisfies the conclusion of the Bolibruch--Esnault--Viehweg theorem: ${\mathcal E}'$ is semistable of degree $0$ and $\nabla'$ is a~logarithmic connection with the same singularities and the same monodromy as $\nabla$. Furthermore, ${\mathcal E}'\simeq{\mathcal O}_E (p_+ -\infty)\oplus{\mathcal O}_E (q_1 +q_2 -2\infty)$.
\end{proposition} \begin{proof} By Corollary \ref{two-elms}, ${\mathcal E}'$ is the result of application of three positive elms to ${\mathcal E}_0(-\infty)={\mathcal O}_E(-\infty)\oplus{\mathcal O}_E(-2\infty)$: \[{\mathcal E}'=elm^{+}_{p_{+}, v} elm^{+}_{q_1, v_2^{(1)}}elm^{+}_{q_2, v_2^{(2)}} ({\mathcal E}_0(-\infty)). \] The surface $S_0=\PP({\mathcal E}_0(-\infty))$ can be decomposed as the open subset $S_0\setminus \bar s_1$ (see Fig.~\ref{Fig.7.}), which is a line bundle over $E$ with zero section $\bar s_2$, plus the ``inf\/inity section'' $\bar s_1$. The line bundle is easily identif\/ied as the normal bundle to $\bar s_2$ in $S_0$: $S_0\setminus \bar s_1\simeq {\mathcal N}_{\bar s_2/S_0}\simeq {\mathcal O}_E(\infty)$. Then the pencil $\lvert \bar s_2\rvert= \lvert \bar s_1+f_{\infty}\rvert$ is the projective line which naturally decomposes into the af\/f\/ine line $H^0(E,{\mathcal O}(\infty))$ and the inf\/inity point representing the reducible member of the pencil $\bar s_1+f_\infty$ (the curves $\bar s_{P_1}$, $\bar s_{P_2}$ shown in Fig.~\ref{Fig.7.} are members of this pencil). The fact that all the global sections $s\in H^{0}({\mathcal O}_E (\infty))$ come from $H^{0}({\mathcal O}_E)=\{\text{constants}\}$ under the embedding ${\mathcal O}_E\hookrightarrow{\mathcal O}_E (\infty)$ implies that they all vanish at $\infty$. Thus all the $\bar s\in \lvert \bar s_2\rvert$ pass through the point $f_{\infty}\cdot\bar s_2$ which is the zero of the f\/iber of the line bundle ${\mathcal O}_E (\infty)$ over $\infty$. Using this representation of $S_0$, we can prove the existence of a cross-section $\bar r\subset S_0$, $\bar r\in \lvert \bar s_2+f_{p_{+}}\rvert$ passing through the three points $P_0=[p_{+}, v]$ and $P_i=[q_i, v_2^{(i)}]$ ($i=1,2$). Namely, the curves from the linear system $\lvert \bar s_2+f_{p_{+}}\rvert$ are the sections of ${\mathcal O}_E (\infty+p_+)$ considered as sections of ${\mathcal O}_E(\infty)$ having a simple pole at $p_+$.
The fact that they have a simple pole at $p_+$ means that they meet $\bar s_1$ at $f_{p_+}\cdot \bar s_1$. The vector space $H^{0}({\mathcal O}_E(\infty+p_+))$ is $2$-dimensional, so we can f\/ind $r$ in it taking the values $v_2^{(1)}$, $v_2^{(2)}$ at $q_1$, resp.~$q_2$. We have $\bar s_1^{2}=-1$, $\bar s_2^{2}=1$, $\bar r^{2}=3$, $\bar s_1\cdot \bar r=1$, and $\bar s_1\cap \bar r=P_0$. After we perform the $3$ elementary transforms at $P_i$ ($i=0, 1, 2$), the self-intersection $\bar r^2$ goes down by $3$. At the same time $\bar s_1^{2}$ goes up by $2$ under the elms at $P_1$, $P_2$ and goes down by $1$ after the elm at $P_0$. Hence in $S'=\PP({\mathcal E}')$, we have two disjoint sections $\bar r$, $\bar s_1$ with self-intersection $0$. Thus, by Lemma~\ref{direct-sum}, ${\mathcal E}'={\mathcal L}_1\oplus {\mathcal L}_2$, where ${\mathcal L}_1$, ${\mathcal L}_2$ are line bundles of the same degree. By Lemma~\ref{elm-on-subb}, $\deg{\mathcal E}'=\deg{\mathcal E}+3=0$, hence $\deg{\mathcal L}_1=\deg{\mathcal L}_2=0$. The direct sum of line bundles of the same degree is strictly semistable. Next, $\bar s_1$ (in $S_0$) corresponds to the line subbundle ${\mathcal O}_E (-\infty)$. It remains ${\mathcal O}_E (-\infty)$ after the elms at $P_1$, $P_2$, and becomes ${\mathcal O}_E(p_+ -\infty)$ after the elm at $P_0$. Hence ${\mathcal L}_1={\mathcal O}_E(p_+ -\infty)$ and ${\mathcal L}_2=\det{\mathcal E}'\otimes {\mathcal L}_1^{-1}$. But $\det{\mathcal E}'=\det{\mathcal E}(q_1+q_2+p_+)={\mathcal O}_E (q_1+q_2+p_+ -3\infty)$. Thus ${\mathcal L}_2={\mathcal O}_E(q_1+q_2-2\infty)$. \end{proof} \begin{remark}\label{0and-1} If we f\/ix $E$ and let $p_+$, $q_1$, $q_2$ vary, then we see that the generic direct sum ${\mathcal L} _1\oplus {\mathcal L} _2$ of two line bundles of degree $0$ occurs as the underlying vector bundle of $\nabla'$.
According to~\cite{Tu}, the moduli space of semistable rank-$2$ vector bundles on $E$ is isomorphic to the symmetric square~$E^{(2)}$ of $E$, and its open set parameterizes, up to an isomorphism, the direct sums ${\mathcal L} _1\oplus {\mathcal L} _2$. Thus we obtain a natural map from the parameter space of our direct image connections to the symmetric square~$E^{(2)}$, whilst using the {\em stable} bundles ${\mathcal E}$ of degree $-1$ provides a natural map onto $E$ (Corollary \ref{map-to-E}). \end{remark} \begin{remark} Korotkin \cite{Kor-1} considers twisted rank-$2$ connections on $E$ with connection matrices~$A$ satisfying the transformation rule \begin{gather}\label{eqa} T_a(A)=QAQ^{-1},\qquad T_b(A)=RAR^{-1} \end{gather} for some $2\times 2$ matrices $Q$, $R$. In the case when $Q$, $R$ commute, such a twisted connection can be understood as an ordinary connection on a nontrivial vector bundle ${\mathcal E}$ over $E$ that can be described as follows: let $E=\CC/\Lambda$ where $\Lambda$ is the period lattice of $E$ with basis $(1, \tau)$, and let $z$ be the f\/lat coordinate on $E$ (or on the universal cover $\CC$ of $E$) such that $T_a(z)=z+1$, $T_b(z)=z+\tau$. Let us make $\Lambda$ act on $\CC^{2}\times\CC$ by the rule \[ (v,z)\stackrel{a}\mapsto(Qv, z+1), \qquad (v,z) \stackrel{b}\mapsto (Rv, z+\tau) . \] Then ${\mathcal E}\ra E$ is obtained as the quotient $\CC^{2}\times\CC/\Lambda\ra\CC/\Lambda$ of the trivial vector bundle $\CC^{2}\times\CC\xymatrix@1{\ar[r]^{pr_{2}}&}\CC$. However, the twisted connections obtained in \cite{Kor-1} satisfy (\ref{eqa}) with non-commuting $Q$, $R$, given by Pauli matrices: \[ Q=\sigma_1= \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right),\qquad R=\sigma_3=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right). 
\] This follows from the relation $A=\mathrm{d}\Psi\Psi^{-1}$, where $\Psi$ is a fundamental matrix of the connection, and from the transformation law for $\Psi$: $T_a(\Psi)=i\sigma_1\Psi$, $T_b(\Psi)=i\sigma_3\Psi e^{-2i\pi\lambda\sigma_3}$, where $\lambda\in\CC$ is a parameter (see (3.74) in loc. cit). Hence Korotkin's connections are really twisted and have no underlying vector bundles. This is a major dif\/ference between the result of \cite{Kor-1} and that of the present paper. Another dif\/ference, concerning the method, is that the starting point in \cite{Kor-1} is an ad hoc expression for $\Psi$ in terms of Prym theta functions of the double cover $C\ra E$, and the connection matrix is implicit. \end{remark} \section[Monodromy and differential Galois groups]{Monodromy and dif\/ferential Galois groups}\label{monodromy} Let $G$ be the monodromy group of the connection $\nabla_{\mathcal E} $ on $f_{*}{\mathcal L} ={\mathcal E}$ def\/ined by formula (\ref{Conn_Mat}). It is the subgroup of $GL(2,\CC)$ generated by $M_a$, $M_b$, $M_{\gamma_1}$. We will f\/irst consider the case of generic values of the parameters $(\lambda_1K,\lambda_1L,\lambda_2K',\lambda_2L')$. Here, {\em generic} means that the point belongs to the complement of a countable union of af\/f\/ine $\QQ$-subspaces of $\CC^4$. More exactly, we require that the triples $(i\lambda_2 K', \lambda_2 L', i\pi)$ and $(\lambda_1 K, i\lambda_1 L, \pi)$ are free over $\QQ$. 
Let \[ {R^{\theta}}=\left( \begin{array}{cc} \cos\theta & i\sin\theta \\ i\sin\theta & \cos\theta \end{array}\right) , \qquad {H^{\theta}}=\left( \begin{array}{cc} \cosh\theta & \sinh\theta \\ \sinh\theta & \cosh\theta \end{array}\right)\qquad (H^{i\theta}=R^{\theta}) .\] Let $N$ be the normal subgroup of $G$ def\/ined by \begin{gather}\label{N-via-det} N=\{X\in G\ \mid\ \det X =\pm 1\}.\end{gather} We have: \begin{gather}\label{N-general} N=\left\{{\prod_{i=1}^{r} M^{j_i}_{a} M^{k_i}_{b} M^{\epsilon_i}_{\gamma_1}\mid r\geq 0, j_i\in\ZZ, k_i\in\ZZ, \epsilon_i\in \{0,1\}},\sum_{i} k_{i}=\sum_{i}j_{i}=0\right\}. \end{gather} We can write $G$ as the semi-direct product of $N$ with the subgroup of $G$ generated by~$M_a$,~$M_b$. The latter is identif\/ied with $\ZZ\times\ZZ$, so $G=N\rtimes(\ZZ\times\ZZ)$. Let $N_1$ be the subgroup of $N$ generated by $R^{4 {\lambda_1} K}$, $H^{4 {\lambda_1}L}$. As $M_a=e^{-2i\lambda_2 K'} R^{-2\lambda_1 K}$ and $M_b=e^{2\lambda_2 L'} H^{2\lambda_1 L}$, we have $[M_a, M_{\gamma_1}]=R^{-4\lambda_{1} K}$, $[M_b, M_{\gamma_1}]=H^{4\lambda_{1} L}$, $[M_a, M_b]=1$. Hence $N$ is the semi-direct product $N=N_1\rtimes\mu_2$, where $\mu_n\simeq \ZZ/n\ZZ$ denotes a cyclic group of order $n$, and the factor $\mu_2$ of the semi-direct product is generated by $M_{\gamma_1}$. Finally, we obtain a normal sequence $1\lhd N_1\lhd N \lhd G$ with successive quotients $\ZZ\times\ZZ$, $\ZZ/2\ZZ$, $\ZZ\times\ZZ$, all of whose levels are semi-direct products. We can write: \[ G\simeq((\ZZ\times\ZZ)\rtimes\ZZ/2\ZZ)\rtimes(\ZZ\times\ZZ). \] We also have $N_1=D(G)$, the commutator subgroup of $G$. As $D(G)\simeq\ZZ\times\ZZ$ is Abelian, $G$~is solvable of height $2$. From now on, we go over to the general case. The formulas (\ref{N-via-det}), (\ref{N-general}) are no longer equivalent.
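The matrix identities used in these computations (that $\theta\mapsto R^{\theta}$ and $\theta\mapsto H^{\theta}$ are one-parameter groups, that $H^{i\theta}=R^{\theta}$, and hence that all the $R$'s and $H$'s commute) are elementary consequences of the trigonometric and hyperbolic addition formulas. They can be checked numerically with the following sketch, which is not part of the paper; `numpy` and the function names `R`, `H` are our own choices:

```python
import numpy as np

def R(t):
    """R^theta from the text: rotation-type matrix with i*sin off the diagonal."""
    return np.array([[np.cos(t), 1j * np.sin(t)],
                     [1j * np.sin(t), np.cos(t)]], dtype=complex)

def H(t):
    """H^theta from the text: hyperbolic rotation matrix (accepts complex t)."""
    return np.array([[np.cosh(t), np.sinh(t)],
                     [np.sinh(t), np.cosh(t)]], dtype=complex)

a, b = 0.7, -1.3
# One-parameter group laws: theta -> R^theta and theta -> H^theta are homomorphisms.
assert np.allclose(R(a) @ R(b), R(a + b))
assert np.allclose(H(a) @ H(b), H(a + b))
# The relation H^{i*theta} = R^{theta} stated with the definitions above.
assert np.allclose(H(1j * a), R(a))
# Consequently every R^a commutes with every H^b, as used in [M_a, M_b] = 1.
assert np.allclose(R(a) @ H(b), H(b) @ R(a))
```

Since $R^{\theta}=H^{i\theta}$, all four checks reduce to the single addition law $H^{\theta_1}H^{\theta_2}=H^{\theta_1+\theta_2}$.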
Let us def\/ine $N$ by (\ref{N-general}), and $N_1$ by the same formula with the additional condition $\sum_{i} \epsilon_i\equiv 0 \pmod 2$. We have again the normal sequence $1\lhd N_1\lhd N \lhd G$. Its f\/irst level is a semi-direct product, $N=N_1\rtimes\mu_2$, but the upper one may be a nonsplit extension. Def\/ine two group epimorphisms \begin{alignat}{3} & \ZZ\times\ZZ \xymatrix@1{\ar[r]^{\phi_{1}}&} N_1 ,\qquad && \ZZ\times\ZZ \xymatrix@1{\ar[r]^{\phi_{2}}&} G/N ,&\nonumber \\ & (n_1, n_2) \longmapsto \frac{\sigma (n_1,n_2)^2}{\det\sigma (n_1,n_2)}, \qquad & & (n_1, n_2) \longmapsto \sigma (n_1,n_2)N , & \label{phi_i} \end{alignat} where $\sigma (n_1,n_2)=M^{n_1}_{a} M^{n_2}_{b}$. Thus both $N_1$ and $G/N$ are quotients of $\ZZ\times\ZZ$. We want to f\/ind out which pairs $Q_1$, $Q_2$ of quotients of $\ZZ\times\ZZ$ can be realized as the pair $N_1$, $G/N$ for some monodromy group $G$. We will denote by $\pi_N$ the canonical epimorphism $G\rar G/N$, and the maps $\bar\phi_1$, $\bar\phi_2$ are def\/ined by the following commutative diagram: \[ \xymatrix{ & \ZZ\times\ZZ \ar @{->>} _{\phi_2} [dl] \ar @{->>} ^{\sigma} [d] \ar @{->>} ^{\phi_1} [dr] & \\ G/N & \langle M_a,M_b\rangle \ar @{->>} _{\bar\phi_2} [l] \ar @{^{(}->} [d] \ar @{->>} ^{\bar\phi_1} [r] & N_1 \\ & G \ar @{->>} _{\pi_N} [lu] & \\ } \] One can also give $\bar\phi_1$ by the formula \[ \bar\phi_1(X)=\frac{1}{\det X}X^2=[X, M_{\gamma_1}] \qquad \mbox{for all}\ \ X\in \langle M_a,M_b\rangle . \] \begin{proposition}\label{towers} For any connection \eqref{Conn_Mat}, its monodromy group $G$ fits into a normal sequence $N_1\lhd N \lhd G$ in such a way that the following properties hold: \begin{enumerate}\itemsep=0pt \item Both $N_1$ and $G/N$ are quotients of $\ZZ\times\ZZ$, and $N/N_1\simeq\mu_2$. \item The extension $N_1\lhd N$ is always split: $N\simeq N_1\rtimes \mu_2$, the generator $h\in \mu_2$ acting on $N_1$ via the map $g\mapsto g^{-1}$.
\item The subgroup $\langle M_a,M_b\rangle $ of $G$ provides a splitting of the extension $N \lhd G$ if and only if $\bar\phi_2$ is an isomorphism. In this case, the action of $G/N$ on $N$ defining the split extension is given by $x:g\mapsto g$ and $x:h\mapsto \bar\phi_1\bar\phi_2^{-1}(x)h$ for any $x\in G/N$, $g\in N_1$. \end{enumerate} Conversely, let $(Q_1,Q_2)$ be a pair of group quotients of $\ZZ\times\ZZ$. Then $(Q_1,Q_2)$ can be realized as the pair $(N_1,G/N)$ for the monodromy group $G$ of a connection \eqref{Conn_Mat} if and only if $(Q_1,Q_2)$ occurs in the following table: \medskip \centerline{{\em \begin{tabular}{|c|c|c|c|c|c|} \hline N$^\circ$ & $\operatorname{rk}\nolimits Q_1$ & $\operatorname{rk}\nolimits Q_2$ & $Q_1$ & $Q_2$ & Restrictions\\ \hline 1$^*$ & 2 & 2 & $\ZZ\times\ZZ$ & $\ZZ\times\ZZ$ & ---\\ 2 &2 & 1 & $\ZZ\times\ZZ$ & $\mu_{d}\times\ZZ$ & $2|d$ \\ 3 &2 & 0 & $\ZZ\times\ZZ$ & $\mu_{2}\times\mu_{d}$ & $2|d$ \\ \hline 4$^*$ & 1 & 2 & $\mu_{d}\times\ZZ$ & $\ZZ\times\ZZ$ & $d\geq 1$ \\ 5 & 1 & 1 & $\mu_{d}\times\ZZ$ &$\mu_{d'}\times\ZZ$ & {\em if $2|d$, then} $2|d'$ \\ 6 &1 & 0 & $\mu_{d}\times\ZZ$ & $\mu_{d'}$ & $2\nmid d$, $2|d'$ \\ 7 &1 & 0 & $\mu_{d}\times\ZZ$ & $\mu_{2}\times\mu_{d'}$ & $d\geq 1$, $2|d'$ \\ \hline 8$^*$ & 0 & 2 & $\mu_{d}$ & $\ZZ\times\ZZ$ & $d\geq 1$ \\ 9 & 0 & 1 & $\mu_{d}$ & $\mu_{d'}\times\ZZ$ & $d\equiv d'\!\!\mod\! 2$ \\ 10 & 0& 0 &$\mu_{d}$ & $\mu_{d'}$ & $d\geq 1$, $d'\geq 1$ \\ 11 & 0& 0&$\mu_{d}$ & $\mu_{2}\times\mu_{d'}$ & $2|d$, $2|d'$ \\ \hline \end{tabular} }}\medskip \noindent The items whose numbers are marked with an asterisk correspond to the pairs that always give a split extension $N \lhd G$. \end{proposition} \begin{proof} The f\/irst part, resuming the properties of the tower of group extensions \mbox{$N_1\lhd N \lhd G$}, is an easy exercise, and we go over to the second one. 
Given a pair $(Q_1,Q_2)$, we f\/ind out whether it is possible to choose epimorphisms $\ZZ\times\ZZ\xymatrix@1{\ar[r]^{\phi_{1}}&}Q_1$ and $\ZZ\times\ZZ\xymatrix@1{\ar[r]^{\phi_2} &} Q_2$ and identify them as the morphisms def\/ined in (\ref{phi_i}) for a suitable choice of matrices $M_a$, $M_b$. The proof follows a~case by case enumeration of dif\/ferent types of kernels of $\phi_1$ and $\phi_2$. To shorten the notation, let us write $M_a=e^{\alp_1}H^{\beta_1}$, $M_b=e^{\alp_2}H^{\beta_2}$. The case $\operatorname{rk}\nolimits\ker\phi_1=\operatorname{rk}\nolimits\ker\phi_2=0$, corresponding to $\operatorname{rk}\nolimits_\QQ(\alp_1,\beta_1,\pi i)=\operatorname{rk}\nolimits_\QQ(\alp_2,\beta_2,\pi i)=3$, has been treated before the statement of the proposition. It gives item 1 of the table. The proofs of all the other cases resemble each other, and we will give only one example of this type of argument, say, when both kernels are of rank~1. Under this assumption, there exist $(d,k_1, k_2)\in\ZZ^{3}$ and $(d',k'_1, k'_2)\in\ZZ^{3}$ such that \begin{gather}\label{gcd} d\geq 1, \qquad d'\geq 1, \qquad \mbox{$\gcd(k_1, k_2)=1$}, \qquad \mbox{$\gcd(k'_1, k'_2)=1$}, \end{gather} and \[ \ker\phi_1=\mbox{$\langle d(k_1, k_2)\rangle $},\qquad \ker\phi_2=\mbox{$\langle d'(k'_1, k'_2)\rangle $}. \] For $(n_1,n_2)\in\ZZ^2$, we have: \begin{gather}\label{kerphi1} (n_1,n_2)\in\ker\phi_1 \ \ \equi\ \ \exists\ m\in\ZZ\ | \ n_1\beta_1 +n_2\beta_2=\pi im ; \\ \label{kerphi2} (n_1,n_2)\in\ker\phi_2 \ \equi\ \exists\ (m_1,m_2)\in\ZZ^2\ \mid \ M_a^{n_1}M_b^{n_2}=\big( H^{2\beta_1}\big)^{m_1} \big( H^{2\beta_2}\big)^{m_2}. \end{gather} The latter equality can be written in the form $e^\alpha H^\beta=1$, where \[ \alpha = n_1\alp_{1}+n_2\alp_2, \qquad \beta = (n_1-2m_1)\beta_1+(n_2-2m_2)\beta_2. 
\] As $e^\alpha H^\beta=1$ if and only if $e^\alpha =H^\beta=\pm 1$, we see that the condition of (\ref{kerphi2}) is equivalent to the existence of an integer vector $(m_0,m_1,m_2,m_3)\in\ZZ^4$ such that \begin{gather} m_0\equiv m_3\!\!\!\mod 2, \label{m0m3}\\ n_1\alp_{1}+n_2\alp_2= \pi i m_0, \label{kerphi2-1}\\ (n_1-2m_1)\beta_1+(n_2-2m_2)\beta_2= \pi i m_3 . \label{kerphi2-2} \end{gather} Substituting the generators of $\ker\phi_i$ for $(n_1,n_2)$, we obtain the following system of equations: \begin{gather} dk_1\beta_1+dk_2\beta_2=\pi im, \label{m}\\ d'k'_1\alp_{1}+d'k'_2\alp_2= \pi i m_0, \label{m0}\\ (d'k'_1-2m_1)\beta_1+(d'k'_2-2m_2)\beta_2= \pi i m_3 . \label{m3} \end{gather} The condition that $d(k_1, k_2)$, $d'(k'_1, k'_2)$ are not just elements of the corresponding kernels, but their generators, is transcribed as follows: \begin{gather}\label{minimality} \gcd(m,d)=\gcd(d',m_0,2m_1,2m_2,m_3)=1 \end{gather} for any $(m_1,m_2,m_3)$ satisfying (\ref{m0m3}), (\ref{m3}). As $\operatorname{rk}\nolimits\ker\phi_1=1$, the equations (\ref{m}) and (\ref{m3}) have to be proportional. If $d'$ is odd but $d$ is even, then at least one of the coef\/f\/icients of $\beta_i$ in (\ref{m3}) is odd. But both coef\/f\/icients in (\ref{m}) are even, and this contradicts (\ref{minimality}). We get the restriction from item 5 of the table: if $d$ is even, then $d'$ is even, too. This leaves three possible combinations of parities of $d$, $d'$, and it is easy to see that a solution to (\ref{gcd}), (\ref{m})--(\ref{minimality}) exists for any of them. For example, if $d\equiv d'\!\!\mod\! 2$, then we can choose $k_i$, $k'_i$ in such a way that $k_i\equiv k'_i\!\!\mod\! 2$ ($i=1,2$), $k_1k'_1\neq 0$. We get a solution to the problem as follows: \begin{gather*} m_i=\frac{1}{2}d(k'_i-k_i)\quad (i=1,2),\qquad m=m_0=m_3=1,\qquad \alp_2=\beta_2=1,\\ \alp_1=\frac{\pi i-d'k'_2}{d'k'_1},\qquad \beta_1=\frac{\pi i-dk_2}{dk_1}.
\end{gather*} Our choice for $\alp_2$, $\beta_2$ is explained by the observation that we should have $\operatorname{rk}\nolimits_\QQ(\alp_1,\beta_1,\pi i)=\operatorname{rk}\nolimits_\QQ(\alp_2,\beta_2,\pi i)=2$, and 1 is the simplest complex number which is not a rational multiple of~$\pi i$. \end{proof} \begin{remark} In the above proof, if $\ker\phi_2\not\subset\ker\phi_1$, then any solution of (\ref{m})--(\ref{m3}) satisf\/ies the condition $(m_1,m_2)\neq 0$, which means that $d'(k'_1, k'_2)\not\in\ker\sigma$. Hence $\sigma(d'(k'_1, k'_2))$ is a nonzero element of $\ker\bar\phi_2$, and $\bar\phi_2$ is not an isomorphism. This implies that the extension $N\lhd G$ is nonsplit. Hence it is never split, unless $d|d'$. In this case, it can be occasionally split, if $\ker\phi_2\subset\ker\phi_1$. \end{remark} We can deduce from Proposition \ref{towers} a description of all the f\/inite monodromy groups; they correspond to lines 10 and 11 of the table. This description is only partial, because we do not determine completely the extension data. \begin{corollary} All the finite monodromy groups $G$ of connections \eqref{Conn_Mat} are obtained as extensions \[ D_d\hookrightarrow G\twoheadrightarrow \mu_{d'}\qquad (d\geq 1,\ d'\geq 1) \] or \[ D_d\hookrightarrow G\twoheadrightarrow \mu_2\times\mu_{d'}\qquad (2\mid d,\ 2\mid d'), \] where $D_d=\mu_d\rtimes\mu_2$ is the dihedral group. \end{corollary} \begin{corollary} The only finite Abelian groups occurring as the monodromy groups of connections~\eqref{Conn_Mat} are $\mu_2$ and $\mu_2\times\mu_d$\ ($d\geq 2$). \end{corollary} We add a few examples of inf\/inite monodromy groups with nongeneric parameters $(\lambda_1K,\lambda_1L$, $\lambda_2K',\lambda_2L')$. \begin{example} It is easy to select the parameters to get for $G$ one of the groups $D_n\times \ZZ^i$ or $D_n\rtimes \ZZ^i$, where $n\in\NN\cup\{\infty\}$, $i=0,1,2$. 
For example, to get $D_n\rtimes\ZZ $, we can set $M_a=R^{\frac{2\pi}{n}}$, $M_b=R^1$, and to get $D_n\times\ZZ $, we can set $M_a=R^{\frac{2\pi}{n}}$, $M_b=1$. \end{example} Now that we have described the structure of the monodromy group of $\nabla_{\mathcal E} $, we can ask about its Zariski closure. According to \cite[Proposition 5.2]{Ka}, the Zariski closure of $G$ is the dif\/ferential Galois group $\operatorname{DGal}\nolimits(\nabla_{\mathcal E} )$. For the reader's convenience, we recall its def\/inition. Let $(K,\; ')$ be a dif\/ferential f\/ield with f\/ield of constants $\CC$. This means that $K$ is endowed with a $\CC$-linear derivation \ $':K\rar K$. \begin{definition} Let $(K,\; ')\subset (L,\; ')$ be an extension of dif\/ferential f\/ields with f\/ield of constants $\CC$. The dif\/ferential Galois group $\operatorname{DGal}\nolimits(L/K)$ is the group consisting of all the $K$-automorphisms $\sigma$ of $L$ such that $\sigma(f')=(\sigma(f))'$ for all $f\in L$. \end{definition} If $L$ is f\/initely generated as a $K$-algebra, say, by $p$ elements, then $\operatorname{DGal}\nolimits(L/K)$ can be embedded into $GL(p,\CC)$, and it is an algebraic group if considered as a subgroup of $GL(p,\CC)$ in this embedding. We apply this def\/inition to $K=\CC(E)$, the derivation $'$ being the dif\/ferentiation with respect to some nonconstant function $z\in K$. Given a connection $\nabla_{\mathcal E} $ on $E$, we can consider a fundamental matrix $\Phi$ of its solutions, and set $L$ to be the f\/ield generated over $K$ by all the matrix elements of $\Phi$. The group $\operatorname{DGal}\nolimits(\nabla_{\mathcal E} )$ is def\/ined to be $\operatorname{DGal}\nolimits(L/K)$. See \cite{VDP,VS} for more details.
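To make the abstract def\/inition concrete, here is a standard rank-one illustration (our example, not taken from the paper), which exhibits the same dichotomy between rational and irrational parameters that governs the computations of this section:

```latex
% A standard rank-one example of a differential Galois group (not from the paper).
\[
y' \;=\; \frac{\lambda}{z}\, y \quad \text{over } K=\CC(z),
\qquad y = z^{\lambda}, \qquad L = K\big(z^{\lambda}\big).
\]
A differential $K$-automorphism $\sigma$ sends $z^{\lambda}$ to another solution,
so $\sigma(z^{\lambda}) = c\, z^{\lambda}$ for some $c\in\CC^{*}$. If
$\lambda = p/q \in \QQ$ in lowest terms, then $(z^{\lambda})^{q}\in K$ forces
$c^{q}=1$, and $\operatorname{DGal}\nolimits(L/K)\simeq\mu_{q}$ is finite; if
$\lambda\notin\QQ$, every $c\in\CC^{*}$ occurs, and
$\operatorname{DGal}\nolimits(L/K)\simeq\CC^{*}=GL(1,\CC)$.
```

The rank-$2$ connections studied here behave analogously: the $\QQ$-rank conditions on the logarithms of the multipliers decide whether the closure of the monodromy is f\/inite, one-dimensional, or as large as possible.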
Remark that the monodromy group $G$ lies in the subgroup $\GG$ of $GL(2, \CC)$ def\/ined by \begin{displaymath} \GG=\left\{\left( \begin{array}{cc} C\alpha & C\epsilon\beta \\ C\beta & C\alpha\epsilon \end{array}\right)\mid C\in\CC^{*}, (\alpha,\beta)\in\CC^2 , \epsilon\in\{-1,1\}, \alpha^{2}-\beta^{2}=1 \right\} . \end{displaymath} Denote by $\GG_0$ the connected component of unity in $\GG$, singled out by the condition $\epsilon=1$. The Zariski closure $\bar G$ of $G$ is contained in $\GG$ and is not contained in $\GG_0$. The following statement is obvious: \begin{lemma} Let $\psi:\CC^{*}\times\CC^{*}\rtimes\{-1,1\}\lra\GG$ be defined by \[ (\lambda,\mu,\epsilon)\longmapsto \left( \begin{array}{cc} \lambda\alpha & \lambda\beta \epsilon \\ \lambda\beta & \lambda\alpha\epsilon \end{array}\right) \] with $\alpha=\frac{1}{2} (\mu+\frac {1}{\mu})$, $\beta=\frac{1}{2} (\mu-\frac {1}{\mu})$. Then $\psi$ is a surjective morphism with kernel $\{(1, 1, 1)$, $(-1, -1, -1)\}$. \end{lemma} We see that $\GG_0=\psi(\CC^{*}\times\CC^{*})$ is identif\/ied with the quotient $\CC^{*}\times\CC^{*}/\{-1,1\}$, and the latter is isomorphic to $\CC^{*}\times\CC^{*}$ via the map $(z_1, z_2)\!\!\!\mod\!\!\{-1,1\}\longmapsto (z_1z_2, \frac{z_1}{z_2})$. Thus we get an explicit isomorphism $\GG_0\simeq \CC^{*}\times\CC^{*}$. Using this identif\/ication, one can easily determine the Zariski closure~$\overline{G_0}$ of the subgroup $G_0=G\cap \GG_0=\langle M_a,M_b\rangle $ of $\CC^{*}\times\CC^{*}$, and $\operatorname{DGal}\nolimits(\nabla_{\mathcal E} )=\overline{G_0}\rtimes \langle M_{\gamma_1}\rangle $. We can use the following observations: \begin{enumerate}\itemsep=0pt \item[a)] If a pair $(s,t)\in\CC^{*}\times\CC^{*}$ is such that $\operatorname{rk}\nolimits_\QQ (\ln(s), \ln(t), i\pi)=1$ (that is, $s$ and $t$ are roots of unity), then the group generated by the pair $(s,t)$ is f\/inite and coincides with its closure. 
\item[b)] If a pair $(s,t)\in\CC^{*}\times\CC^{*}$ is such that $\operatorname{rk}\nolimits_\QQ (\ln(s), \ln(t), i\pi)=2$, and $k_1\ln(s)+k_2\ln (t)+ 2k_3 i\pi=0$ is a $\ZZ$-linear relation with relatively prime $k_i$, then $\overline{\langle (s,t)\rangle }$ is the subgroup $V$ of $\CC^{*}\times\CC^{*}$ def\/ined by $z_1^{k_1}z_2^{k_2}=1$, isomorphic to $\CC^*\times\mu_d$, where $d=\gcd(k_1,k_2)$, and $\mu_d$ is the cyclic group of order~$d$. \item[c)] If the triple $(\ln(s), \ln(t), \pi i)$ is free over $\QQ$, then the closure of $\langle (s,t)\rangle$ is $\CC^*\times \CC^{*}$. \end{enumerate} Apply this to pairs $(s,t)$ belonging to the subgroup generated by two pairs $(s_1,t_1)$, $(s_2,t_2)$ which are the images of $M_a$, resp. $M_b$. Then if $(\ln(s_j), \ln(t_j), \pi i)$ is free over $\QQ$ for at least one value of $j=1$ or 2, then $\overline{G_0}=\CC^{*}\times\CC^{*}$ and $\operatorname{DGal}\nolimits(\nabla_{\mathcal E} )=\GG$. In the case when both triples $(\ln(s_1), \ln(t_1), \pi i),(\ln(s_2), \ln(t_2), \pi i)$ are not free over $\QQ$, the necessary and suf\/f\/icient condition for $\overline{\langle (s_1,t_1),(s_2,t_2)\rangle }$ to be $\CC^{*}\times\CC^{*}$ is the following: $\operatorname{rk}\nolimits_\QQ(\ln(s_j), \ln(t_j), i\pi)=2$ for both values $j=1,2$, and if $a_{j1}\ln(s_j)+a_{j2}\ln(t_j)+ a_{j3}\pi i=0$ ($j=1,2$) are nontrivial $\QQ$-linear relations in these triples, then $\left|\begin{array}{cc}a_{11} & a_{12}\\ a_{21} & a_{22}\end{array}\right|\neq 0$. This condition can be easily formulated in terms of the epimorphisms $\phi_i$ def\/ined in (\ref{phi_i}): $\ker\phi_1$, $\ker\phi_2$ are both of rank 1 and $\ker\phi_1\cap \ker\phi_2=0$. In this case we have the same conclusion: $\operatorname{DGal}\nolimits(\nabla_{\mathcal E} )=\GG$. We obtain the following description of possible dif\/ferential Galois groups of connections (\ref{Conn_Mat}): \begin{proposition} Let $r_i=\operatorname{rk}\nolimits_\QQ\ker\phi_i$\ $(i=1,2)$.
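As a concrete illustration of case b), consider the pair $(s,t)=(2,4)$ (chosen here purely for illustration, not coming from a particular connection):

```latex
% Hypothetical example for case b): take (s,t) = (2,4). Then
% 2\ln(s) - \ln(t) = 0 is a Z-linear relation with relatively prime
% coefficients (k_1,k_2,k_3) = (2,-1,0), so rk_Q(ln 2, ln 4, i*pi) = 2 and
\[
  \overline{\langle (2,4)\rangle }
  =\bigl\{(z_1,z_2)\in\CC^{*}\times\CC^{*}\mid z_1^{2}z_2^{-1}=1\bigr\}
  \simeq\CC^{*}\times\mu_d,\qquad d=\gcd(2,-1)=1,
\]
% i.e., the Zariski closure of the subgroup generated by (2,4) is a
% one-dimensional torus isomorphic to \CC^{*}.
```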
\begin{enumerate}\itemsep=0pt \item[(i)] $\operatorname{DGal}\nolimits(\nabla_{\mathcal E} )=\GG$ if and only if one of the following conditions is satisfied: either $\min \{r_1,r_2\}=0$, or $r_1=r_2=1$ and $\ker\phi_1\cap \ker\phi_2=0$. \item[(ii)] $\operatorname{DGal}\nolimits(\nabla_{\mathcal E} )$ is a $1$-dimensional subgroup of $\GG$ if and only if $\min \{r_1,r_2\}=1$ and the condition of (i) is not satisfied. Then there exists a one-parameter subgroup $V_0$ and a finite cyclic subgroup $\mu_d$ in $\GG$ such that $\operatorname{DGal}\nolimits(\nabla_{\mathcal E} )=(V_0\mu_d)\rtimes \langle M_{\gamma_1}\rangle $. \item[(iii)] $\operatorname{DGal}\nolimits(\nabla_{\mathcal E} )$ is finite if and only if $r_1=r_2=2$, and then $\operatorname{DGal}\nolimits(\nabla_{\mathcal E} )=G$. \end{enumerate} \end{proposition} \subsection*{Acknowledgements} I am greatly indebted to my research advisor D.~Markushevich for his encouragement and help. I~would like to thank D.~Korotkin for explaining some points to me. I also acknowledge with pleasure the hospitality of the Mathematics Institute of the Chinese Academy of Sciences, where part of the work on this article was done. The work was partially supported by the Conseil Departemental du Nord. \pdfbookmark[1]{References}{ref}
\section{Introduction} \label{intro} In recent years, portable devices such as smartphones and tablet PCs have achieved significant performance enhancements. However, applications running on these mobile devices also consume many resources, e.g., computing and storage. In particular, multiple applications often run on the same device of a mobile user (MU). Therefore, resource-limited mobile devices still require far more resources to achieve better performance and to tackle real-time and delay-sensitive tasks, such as virtual reality games and automatic driving. When the centralized cloud is too far away from MUs, a cloudlet, formed by a group of well-connected, resource-rich, and trusted computers, can be utilized by neighboring MUs \cite{Satyanarayanan2009The}, and it offers a good solution to the resource requirement problem described above. MUs can achieve much better performance by offloading their delay-sensitive or computation-intensive tasks to a nearby cloudlet \cite{kumar2013survey}, because cloudlets provide them with low-latency access to rich computing resources \cite{Jia2016Cloudlet}. Resource allocation has been investigated in \cite{2017ICC}, and cloudlet deployment for task offloading has been discussed in \cite{Xia2013Throughput}, \cite{Satyanarayanan2001Pervasive}. Many efficient algorithms have been proposed in \cite{Jia2015Optimal}, \cite{Xu2015Capacitated} to balance the workload among cloudlets and thereby reduce the MUs' delay. However, access points (APs) and cloudlets may be reluctant to provide these services without any reward, due to selfishness. To motivate cloudlets to share their resources with MUs, incentive mechanisms have been introduced \cite{Jin2015Auction}, \cite{Samimi2014A}. However, in those works one cloudlet serves only one MU. Moreover, the resources of a cloudlet are often too expensive to be employed by a single MU.
To solve the above problems, there are several challenges: 1) how to place the cloudlets at APs efficiently; 2) how to assign cloudlet resources to the MUs when each MU has a limited budget; 3) how to provide incentives for the three kinds of entities (MUs, APs, cloudlets). Motivated by the group-buying scheme for spectrum allocation \cite{Lin2013Groupon}, we propose three efficient auction schemes that jointly solve the cloudlet placement and resource assignment problems, each consisting of three stages. In the first stage, we divide all MUs into several small groups according to the AP they connect to, and then compute the total budget of each group. In the second stage, we assign cloudlets to APs. Finally, in the third stage, we charge the MUs according to the matching results. The main contributions of this work can be summarized as follows. \begin{enumerate} \item We propose three auction schemes for joint cloudlet placement and resource assignment. The first scheme randomly generates a number $m$ according to the capacity of each given cloudlet, selects the first $m$ MUs according to their performance price ratios, and calculates the budget for the given cloudlet. \item Based on the first scheme, the second scheme calculates several profitable cases and then randomly selects one of them, which significantly improves the revenue of the small MU groups. In the third scheme, we match cloudlets with APs in a global manner based on the second scheme. \item We prove that all three schemes run in polynomial time. We also prove individual rationality, budget balance, and truthfulness. Both theoretical analysis and simulation results show that the proposed schemes outperform existing work. \end{enumerate} The rest of the paper is organized as follows. Section \ref{sec:1} reviews related work on incentive mechanisms for resource allocation in mobile cloud computing.
Section \ref{sec:2} formulates the resource allocation problem and describes the three-stage auction model. Section \ref{sec:3} introduces our algorithms in the auction model, together with some examples. In Section \ref{sec:4}, we prove the economic properties of the proposed auction model. Simulation results are given in Section \ref{sec:5}. Finally, Section \ref{sec:6} concludes this paper. \section{Related Work} \label{sec:1} Resource allocation in mobile cloud computing has been a fresh and meaningful topic in recent years \cite{2016TMC_MCC}, \cite{2014_Survey_MCC}. Offloading heavy tasks to a neighboring cloudlet has become an appealing way for mobile users to relieve their demand for resources \cite{2015IEEE_Magazine_cloudlet}, \cite{2017IEEE_cloud_computing}. Regarding cloudlet deployment, many existing works such as \cite{Jia2016Cloudlet}, \cite{Jia2015Optimal}, \cite{Xu2015Capacitated} consider cloudlet placement in a given network, and most of them allocate cloudlet resources in a centralized manner. The authors of \cite{Jia2016Cloudlet}, \cite{Jia2015Optimal} discuss the challenge of cloudlet load balancing and propose a fast and scalable algorithm to balance the workload of each cloudlet in wireless metropolitan area networks. In \cite{Xu2015Capacitated}, cloudlet placement is considered with the goal of reducing the processing delay of tasks under limited cloudlet resources, and the authors propose a heuristic algorithm and an approximation algorithm to place cloudlets. However, these works \cite{Jia2016Cloudlet}, \cite{Jia2015Optimal}, \cite{Xu2015Capacitated} do not take the cost of cloudlets and APs into consideration. Cloudlets and APs in such systems may be reluctant to share their resources with mobile users without any reward. Incentive mechanisms that take these costs into consideration have been discussed in \cite{2015TWC}.
Resource allocation schemes in those works are more flexible and intelligent, and the resource holders and relay nodes are willing to serve users. Auction schemes are widely used in computer science; for details, see \cite{2011ACM_Survey_auction}, \cite{2013IEEE_Survey_auction}. In \cite{2015TWC}, a cooperative bargaining game-theoretic algorithm is proposed for resource allocation in cognitive small cell networks. However, in those works one cloudlet can serve only one MU. The group-buying idea is introduced in \cite{Lin2013Groupon} and \cite{yang2014truthful}. In \cite{Lin2013Groupon}, a group-buying auction model is proposed to manage spectrum redistribution, which fixes the problem that a single buyer cannot afford the whole spectrum. In this paper, we introduce the group-buying model into cloudlet deployment by dividing independent MUs into small groups based on their associated APs. Therefore, the MUs of each group can afford the expensive cloudlets, and a cloudlet can share its resources with MUs in a flexible and efficient way. Different from our conference version \cite{zhou2017tacd}, we add one more auction scheme in this work and extend the conference work to better present the main idea of the three-stage auction scheme. \section{System Model and Problem Formulation} \label{sec:2} \subsection{Problem Formulation} The MU is regarded as the buyer in our auction schemes. The cloudlet, constituted by resource-abundant devices, is the seller. The AP is the access point of the wireless network for MUs; it can also be equipped with a cloudlet to improve mobile devices' performance, so it acts as the auctioneer between MUs and cloudlets. Assume that the number of cloudlets is $K$. $C_{k}$ denotes the $k$th cloudlet, and $Cap^{k}$ denotes the resource capacity of $C_{k}$.
As defined in \cite{Kang2015Incentive}, the cost function of a cloudlet is \begin{equation} Cos(k) = c(k) \cdot w(k), \end{equation} where $c(k)$ is the cost factor of $C_{k}$, and $w(k)$ is the workload brought by the MUs' offloaded tasks. In this paper, we try to make each cloudlet share its resources with a suitable small group of MUs rather than just one MU. To motivate cloudlets to share their resources, we define the reserve price of $C_{k}$, denoted $r_{k}$, as \begin{equation}\label{fomul:rk} r_{k} = c(k) \cdot Cap^{k} + \delta, \end{equation} where $C_{k}$ must be paid at least $r_{k}$, no matter which group of MUs finally wins $C_{k}$. Cloudlets in this paper may be heterogeneous: their capacities and cost factors may differ from each other, and so may their reserve prices. Once cloudlet $C_{k}$ joins the auction scheme, its total resource capacity $Cap^{k}$ and cost factor $c(k)$ are fixed; $C_{k}$ cannot change them during the whole auction, so $r_{k}$ is also fixed. Note that $C_{k}$ could adjust its reserve price $r_{k}$ by changing its parameter $\delta$ after a whole auction, for example increasing $\delta$ if its resource is highly competitive in the market and decreasing $\delta$ when the resource is oversupplied, which would make $C_{k}$ benefit more from the auction; however, this feedback mechanism is out of the scope of this paper. Therefore, we assume $\delta = 0$ in this paper. Assume that the number of APs in the given network is $n$. $a_{i}$ denotes the $i$th AP, and it is connected with $n_{i}$ MUs. In this paper, MUs connect to the wireless network through APs. Therefore, we can easily divide the MUs into groups based on the AP they connect to.
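The cost function and reserve price above can be sketched as follows (an illustrative sketch only; the function names and the sample cost factors and capacities are hypothetical, not from the paper):

```python
# Sketch of Cos(k) = c(k) * w(k) and r_k = c(k) * Cap^k + delta,
# with delta = 0 as assumed in the paper. Sample values are hypothetical.

def cost(c_k: float, workload: float) -> float:
    """Cost incurred by cloudlet k for serving the offloaded workload."""
    return c_k * workload

def reserve_price(c_k: float, capacity: float, delta: float = 0.0) -> float:
    """Minimum price cloudlet k must be paid, whichever MU group wins it."""
    return c_k * capacity + delta

# A heterogeneous set of cloudlets: (cost factor c(k), capacity Cap^k).
cloudlets = [(1.2, 17), (0.9, 22), (1.5, 11)]
reserve_prices = [reserve_price(c, cap) for c, cap in cloudlets]
print(reserve_prices)  # one fixed reserve price per cloudlet
```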
Each group of MUs can be assigned at most one cloudlet. If the group of MUs connected to $a_{i}$ is assigned cloudlet $C_{k}$, the MUs in the group cannot request resources from other cloudlets, and the cloudlet $C_{k}$ serves only the MUs in the group of $a_{i}$. Note that this is different from \cite{Jia2015Optimal}, where an MU can request service from other cloudlets if its local AP does not have a cloudlet or the assigned cloudlet is out of service. In our auction schemes, the APs are the auctioneers who handle the transactions between MUs and cloudlets. We call the MUs that connect to the wireless network through the $i$th AP the $i$th group of MUs. Different groups have different amounts of tasks to offload. Let $m_{i}^{j}$ be the $j$th MU of the $i$th AP. Its valuation may differ across cloudlets: the mobile user $m_{i}^{j}$ may give a higher valuation to a cloudlet it prefers (such as a cloudlet that offers it a good quality of service) and will then submit a much higher bid on that cloudlet based on its valuation; conversely, $m_{i}^{j}$ will submit a much lower bid on the cloudlets it does not prefer. The valuation of $m_{i}^{j}$ on the $k$th cloudlet $C_{k}$ is $v_{i}^{j}(k)$, which is private information of $m_{i}^{j}$. The budget of $m_{i}^{j}$ for $C_{k}$ is $b_{i}^{j}(k)$, which is public information, as this budget is the bid that the MU submits for the cloudlet. Namely, the MUs' valuations of each cloudlet depend on their preferences among the cloudlets and are known only to themselves; different MUs may produce different valuations of the same cloudlet. Usually, in an auction scheme, a buyer bids truthfully only if its budget equals its valuation. For instance, MU $m_{i}^{j}$ bids truthfully on cloudlet $C_{k}$ only if $b_{i}^{j}(k) = v_{i}^{j}(k)$.
However, the MUs' valuations of each cloudlet are unknown to others, so the auction scheme must be truthful to prevent an MU from benefiting by bidding untruthfully; otherwise the auction will soon break down. When the transactions are completed after our three-stage auctions, the winner MUs pay the winner cloudlets and the connected APs, and each winner cloudlet is placed on its matched AP and serves the small group of MUs connected to that AP. For instance, if the MUs in $a_{i}$ win $C_{k}$, $C_{k}$ is placed on $a_{i}$, and then $C_{k}$ provides services to the MUs in $a_{i}$. Let $w_{i}$ be the winner set, which consists of the winner MUs in the group of $a_{i}$. Let $p_{i}^{j}$ be the clearing price of the MU $m_{i}^{j}$. If $m_{i}^{j}$ is a winner, then $m_{i}^{j}$ is charged $p_{i}^{j}$ after the auction. For the case where $m_{i}^{j}$ bids truthfully, we define its utility $u_{i}^{j}$ as \begin{equation} u^{j}_{i} =\left \{ \begin{array}[l]{lcl} v_{i}^{j}(k) - p_{i}^{j} & &{if\ m^{j}_{i}\in w_{i},}\\ 0 & &{otherwise,}\\ \end{array} \right. \end{equation} where $v_{i}^{j}(k)$ is the valuation of $m_{i}^{j}$ on the cloudlet $C_{k}$ it wins. This equation expresses the benefit that $m_{i}^{j}$ obtains from the auction. Similarly, the winner set $W$ contains the winner APs. If $a_{i}$ is a winner AP, its clearing price is $P_{i}$. When $a_{i}$ bids truthfully, its utility $u_{i}$ is defined as \begin{equation} u_{i} =\left \{ \begin{array}[l]{lcl} R_{i}^{k} - P_{i} & &{if\ a_{i}\in W,}\\ 0 & &{otherwise,}\\ \end{array} \right. \end{equation} where $R_{i}^{k}$ is the actual revenue that $a_{i}$ calculates for its winner cloudlet $C_{k}$. Let $W'$ be the set of winner cloudlets, and $P^{k}$ be the clearing price of $C_{k}$. Its utility $u^{k}$ is defined as \begin{equation} u^{k} =\left \{ \begin{array}[l]{lcl} P^{k} - r_{k} & &{if\ C_{k}\in W',}\\ 0 & &{otherwise.}\\ \end{array} \right.
\end{equation} The social welfare quantifies the efficiency of our auction schemes. Let $SW$ be the social welfare, i.e., the total utility of all participants in the auction. It is defined as \begin{equation} SW = \sum_{i = 1}^{n}\sum_{j = 1}^{n_{i}}u_{i}^{j} + \sum_{i = 1}^{n}u_{i} + \sum_{k = 1}^{K}u^{k}. \end{equation} \begin{table}[htbp]\scriptsize \vskip -3mm \centering \caption{\label{table:symbols1}Symbols of Participant} \begin{tabular}{C{3.5cm}C{1cm}C{1cm}C{1.3cm}} \hline\noalign{\smallskip} \textbf{Definition} & \textbf{$C_{k}$}& \textbf{$a_{i}$}& \textbf{$m_{i}^{j}$} \\ \noalign{\smallskip}\hline\noalign{\smallskip} Quantity & $K$& $n$& $n_{i}$ for $a_{i}$\\ Capacity or Workload& $Cap^{k}$&$-$&$l_{i}^{j}$\\ Cost, Revenue or Valuation& $Cos(k)$ & $R_{i}^{k}$&$v_{i}^{j}(k)$\\ Reserve price or Budget&$r_{k}$&$B_{i}^{k}$&$b_{i}^{j}(k)$\\ Clearing price& $P^{k}$ & $P_{i}$ & $p_{i}^{j}$\\ Utility& $u^{k}$ & $u_{i}$ & $u_{i}^{j}$\\ Winner set& $W'$ & $W$ & $w_{i}$\\ \noalign{\smallskip}\hline \end{tabular} \vskip -3mm \end{table} \subsection{System Model} Fig. \ref{fig:model} shows the model of our three-stage auction schemes. In the first stage, we divide the MUs into $n$ small groups according to the APs they connect to. Then, in each group, the AP calculates its total revenue for each cloudlet; e.g., the AP $a_{i}$ calculates the revenue $R_{i}^{k}$ for cloudlet $C_{k}$. $R_{i}^{k}$ is calculated from the budgets of the MU group in $a_{i}$, and these budgets are their bids for $C_{k}$, i.e., $b_{i}^{j}(k)$ $(j \in [1, \ldots, n_{i}])$. The total revenue quantifies the preference of the MU group for each cloudlet. In the AP $a_{i}$, an MU $m_{i}^{j}$ that has been used in calculating $R_{i}^{k}$ is regarded as a potential winner for cloudlet $C_{k}$, and its potential price is $p_{i}^{j}(k)$.
If $a_{i}$ wins $C_{k}$ in the next stage, $C_{k}$ will share its resources with $m_{i}^{j}$, and $m_{i}^{j}$ will be charged $p_{i}^{j}(k)$, i.e., its clearing price $p_{i}^{j}$ equals $p_{i}^{j}(k)$. On the other hand, $C_{k}$ only shares its resources with the MUs who paid for it. We cannot ensure that all MUs in $a_{i}$ can be served by $C_{k}$, due to the constraints of the economic properties; the remaining MUs are left to the next round of the auction, which is not within the scope of this paper. \begin{figure}[h] \vskip -3mm \centering \setlength{\belowcaptionskip}{-1em} \includegraphics[width=4.5in]{Model_r1.eps} \caption{\label{fig:model} {\small Three-stage auction model.}} \vskip -3mm \end{figure} In the second stage, the APs submit their budgets to each cloudlet. This budget is the total budget of the MU group in the corresponding AP, generated based on the revenue for each cloudlet. For instance, the budget of $a_{i}$ for $C_{k}$ is $B_{i}^{k}$, which is the price that $a_{i}$ bids for $C_{k}$. For each AP, the revenue $R_{i}^{k}$ is provided by its MU group and is a real value, while the budget $B_{i}^{k}$ is generated by the AP itself; both $R_{i}^{k}$ and $B_{i}^{k}$ are public information. Therefore, we can easily verify whether $a_{i}$ bids truthfully or not. After that, we try to match cloudlets with APs while respecting our desired properties. As a result, for the winner set of cloudlets $W'$ and the winner set of APs $W$, the matching between $W'$ and $W$ is defined by the mapping function $\sigma(\cdot)$. For example, $\sigma(i) = k$ means cloudlet $C_{k}$ is assigned to AP $a_{i}$, and their clearing prices $P_{i}$ and $P^{k}$ are the same. Then, in the third stage, the set of winner MUs in $a_{i}$ is $w_{i}$, and the winning APs charge them according to the potential winner prices generated in the first stage.
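The utility definitions and the social welfare $SW$ from the problem formulation can be sketched as follows (a minimal illustration; the function names and the sample outcome values are hypothetical):

```python
# Utilities of the three kinds of participants, following the definitions
# in the problem formulation (truthful bidding, zero utility for losers),
# and the social welfare SW as the sum of all utilities.

def mu_utility(valuation: float, clearing_price: float, is_winner: bool) -> float:
    """u_i^j = v_i^j(k) - p_i^j for a winner MU, 0 otherwise."""
    return valuation - clearing_price if is_winner else 0.0

def ap_utility(revenue: float, clearing_price: float, is_winner: bool) -> float:
    """u_i = R_i^k - P_i for a winner AP, 0 otherwise."""
    return revenue - clearing_price if is_winner else 0.0

def cloudlet_utility(clearing_price: float, reserve: float, is_winner: bool) -> float:
    """u^k = P^k - r_k for a winner cloudlet, 0 otherwise."""
    return clearing_price - reserve if is_winner else 0.0

# Hypothetical outcome: one AP wins one cloudlet for two winner MUs.
mus = [mu_utility(6.0, 4.0, True), mu_utility(5.0, 4.5, True),
       mu_utility(3.0, 0.0, False)]
aps = [ap_utility(8.5, 7.0, True)]
clouds = [cloudlet_utility(7.0, 5.0, True)]
sw = sum(mus) + sum(aps) + sum(clouds)   # SW = sum of all utilities
print(sw)
```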
\subsection{Desirable Properties} \subsubsection{\quad \quad \quad Truthfulness} Let $\theta$ be a positive number. A participant may pay an extra cost $\theta$ to figure out how to bid in the auction scheme so as to benefit more. When MU $m_{i}^{j}$ bids untruthfully, we define its utility $\widetilde{u}_{i}^{j}$ as follows. \begin{equation} \widetilde{u}_{i}^{j} =\left \{ \begin{array}[l]{lcl} v_{i}^{j}(k) - p_{i}^{j} - \theta & &{if\ m^{j}_{i}\in w_{i},}\\ -\theta & &{otherwise.}\\ \end{array} \right. \end{equation} Similarly, we define the utility $\widetilde{u}_{i}$ for the case where AP $a_{i}$ bids untruthfully. \begin{equation} \widetilde{u}_{i} =\left \{ \begin{array}[l]{lcl} R_{i}^{k} - P_{i} - \theta & &{if\ a_{i}\in W,}\\ -\theta & &{otherwise.}\\ \end{array} \right. \end{equation} The extra cost $\theta$ varies for different MUs and different APs; different market situations also cause different extra costs, even for the same MU (or the same AP). In this paper, we define truthfulness as a weakly dominant strategy, as in \cite{Jin2015Auction}, where a player cannot improve its utility by submitting an untruthful bid in a truthful auction scheme. Truthfulness is significant for an auction: we must ensure $u_{i}^{j} \geq \widetilde{u}_{i}^{j}$ and $u_{i} \geq \widetilde{u}_{i}$ for every MU and AP to keep our auction truthful. In our auction schemes, we discuss truthfulness in the setting where only one player can change its bid or strategy while the others cannot. \subsubsection{\quad\quad \quad Budget balance} The total price charged to the buyers is no less than the total price paid to the sellers. If $\sigma(i) = k$, then $$\sum_{i=1}^{n}\sum_{j=1}^{n_{i}}p_{i}^{j} \geq \sum_{k=1}^{K}P^{k} + (\sum_{i=1}^{n}R_{i}^{k} - \sum_{i=1}^{n}P_{i}).$$ \subsubsection{\quad\quad \quad Individual rationality} For sellers, they cannot be paid at a price smaller than their ask, i.e., $P^{k} \geq r_{k}$.
For buyers, they cannot be charged at a price higher than their bid, i.e., $b_{i}^{j}(k) \geq p_{i}^{j}(k) = p_{i}^{j}$ if $\sigma(i) = k$. For APs, we define their individual rationality as $R_{i}^{k} \geq B_{i}^{k} \geq P_{i}$ if $\sigma(i) = k$. \subsubsection{\quad\quad \quad Computation efficiency} We will prove that the schemes can be performed in polynomial time. \section{Auction Schemes} \label{sec:3} In this section, we describe the three proposed auction schemes. The first is the Three-stage Auction scheme for Cloudlet Deployment, named TACD. The second, named TACDp, is an improved version of TACD obtained by refining the first stage of the auction scheme. The third, called TACDpp, is derived from TACDp by improving the mapping approach in its second stage. \subsection{Framework of the Schemes} All three schemes are inspired by the idea of ``group-buying''. Each scheme consists of three stages. In stage \uppercase\expandafter{\romannumeral1}, the APs calculate the revenue from their small groups of MUs and determine the potential winner MUs for each cloudlet; the algorithm used in this stage is named ACRC. The revenue matrix is denoted $\{R_{i}^{k}\}$ $(i \in [1, \ldots, n], k \in [1, \ldots, K])$, which is formed by the revenues of the APs for each cloudlet. The APs bid for cloudlets according to $\{R_{i}^{k}\}$, and these bids form the budget matrix $\{B_{i}^{k}\}$ $(i \in [1, \ldots, n], k \in [1, \ldots, K])$. In stage \uppercase\expandafter{\romannumeral2}, we match APs with cloudlets according to the budget matrix $\{B_{i}^{k}\}$ and the reserve price vector $\{r_{k}\}$ $(k \in [1, \ldots, K])$, where the vector is formed by the reserve prices of the cloudlets; the algorithm in this stage is named ASC. In stage \uppercase\expandafter{\romannumeral3}, the winner APs, which are placed with cloudlets, allocate resources to their winner MUs and charge these MUs.
\begin{table}[htbp]\scriptsize \vskip -3mm \centering \caption{\label{table:symbols2}Symbols in Algorithms} \begin{tabular}{cc} \hline\noalign{\smallskip} \textbf{Symbol} & \textbf{Definition } \\ \noalign{\smallskip}\hline\noalign{\smallskip} $t_{i}^{j}(k)$ &$m_{i}^{j}$'s performance price ratio on $C_{k}$\\ $A$ &Array of MUs sorted by $t_{i}^{j}(k)$\\ $l_{x}$&The workload of the $x$th MU in $A$\\ $A_{x}$, ${L_{x}}$&The first $x$ MUs in $A$, and their total workload \\ $s$&The maximum quantity of MUs in $A$ with $L_{s} \leq Cap^{k}$\\ $S_{x}$&The revenue of the first $x$ MUs in $A$\\ $m$&The independent random integer\\ $w_{i}^{k}$&The potential winner MUs in $a_{i}$ for $C_{k}$\\ $p$&The unit price of MUs\\ $p_{i}^{j}(k)$&$m_{i}^{j}$'s potential price on $C_{k}$\\ $top_1, top_2$&The top factor in ACRC, ASC\\ $A'$&The randomly sorted AP set\\ $D$&The profit matrix $\{B_{i}^{k} - r_{k}\}$\\ $\sigma$&Mapping function from $a_{i}$ to $C_{k}$\\ \noalign{\smallskip}\hline \end{tabular} \vskip -3mm \end{table} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \vskip -3mm \begin{algorithm}[h] \small \begin{algorithmic}[1] \caption{ACRC: AP $a_{i}$ Calculating the Revenue vector for each Cloudlet}\label{alg:ACRC} \REQUIRE{Sorted MUs array $A$, cloudlets' capacity set $\{ Cap^{k} \}$} \ENSURE{$a_{i}$'s revenue vector $\{ R^{k}_{i} \}$, $a_{i}$'s potential winner matrix $\{ w_{i}^{k} \}$ and $a_{i}$'s potential price matrix $\{p_{i}^{j}(k)\}$} \FOR{$k = 1$ to $K$ } \STATE{Maximize the number $s$ subject to $L_{s} \leq Cap^{k}$ and $L_{s} + l_{s+1} > Cap^{k}$.} \STATE{If $L_{n_{i}} \leq Cap^{k}$, then $s = n_{i}$.} \STATE{The revenue set $\{S_{x}\} = GTR(A, s)$, and the revenue of the first $s-1$ cases is $S_{1}, S_{2}, \ldots, S_{s-1}$.} \STATE{The integer $m$ is randomly generated in $[(s+1)/2, s-1]$.} \STATE{Then the revenue $R_{i}^{k} = S_{m}$.} \STATE{$a_{i}$'s potential winner set for $C_{k}$ is $w_{i}^{k} = A_{m}$.
} \STATE{Then the unit price $p$ equals the $(m+1)$th MU's performance price ratio in $A$.} \IF{$m_{i}^{j} \in w_{i}^{k}$} \STATE{$p_{i}^{j}(k) = l_{i}^{j} \cdot p$} \ELSE \STATE{$p_{i}^{j}(k) = 0$} \ENDIF \ENDFOR \RETURN{$\{ R_{i}^{k} \}$, $\{ w_{i}^{k} \}$, $\{p_{i}^{j}(k)\}$} \end{algorithmic} \end{algorithm} \vskip -3mm \begin{algorithm}[h] \small \begin{algorithmic}[1] \caption{GTR: Getting the Revenue set}\label{alg:GTR} \REQUIRE{$A$, $s$} \ENSURE{The revenue set $\{S_{x}\}$} \STATE{Let $\{S_{x}\}$ be the revenue set of the first $s-1$ cases in $A$. } \FOR{$x = 1$ to $s-1$} \STATE{The unit price $p$ equals the $(x+1)$th MU's performance price ratio in $A$.} \STATE{$L_{x}$ is the total workload of the first $x$ MUs in $A$.} \STATE{Then $S_{x} = p \cdot L_{x}$.} \ENDFOR \RETURN{$\{S_{x}\}$} \end{algorithmic} \end{algorithm} \vskip -3mm \subsection{ Scheme 1: TACD} \subsubsection{\quad\quad \quad Stage \uppercase\expandafter{\romannumeral1}: Calculating Revenue} The algorithm used in the first stage of TACD is named ACRC; for more details, see Algorithm \ref{alg:ACRC}. First, each AP $a_{i}$ calculates its revenue $R_{i}^{k}$ for every cloudlet. The revenue $R_{i}^{k}$ is calculated from the small group of MUs in $a_{i}$. Let $t_{i}^{j}(k)$ be the performance price ratio of the MU $m_{i}^{j}$; in other words, $t_{i}^{j}(k)$ is the unit budget of $m_{i}^{j}$ for the cloudlet $C_{k}$, defined as follows. \begin{equation} t_{i}^{j}(k) = \frac{b_{i}^{j}(k)}{l_{i}^{j}}, \end{equation} where $l_{i}^{j}$ is the workload of $m_{i}^{j}$, and the value of $l_{i}^{j}$ stays unchanged no matter which cloudlet receives the tasks offloaded by $m_{i}^{j}$. The value of $t_{i}^{j}(k)$ increases with $b_{i}^{j}(k)$, i.e., $m_{i}^{j}$ gets a higher performance price ratio on $C_{k}$ if it has a larger budget for $C_{k}$.
The set $A$ consists of the MUs in $a_{i}$, sorted in descending order of their performance price ratio $t^{j}_{i}(k)$. Let $A_{x}$ be the set of the first $x$ ($x \leq n_{i}$) members of $A$. Let $l_{x}$ be the workload of the $x$th MU in $A$, i.e., $l_{1}$ is the workload of the first MU in $A$. Let $L_{x}$ be the total workload of $A_{x}$, i.e., $L_{x} = l_{1}+l_{2}+l_{3}+...+l_{x}$. We try to find the index $s$ in $A$ that maximizes $L_{s}$ subject to $L_{s} \leq Cap^{k}$ and $L_{s} + l_{s+1} > Cap^{k}$. If the total workload of the MUs in $a_{i}$ is less than or equal to $Cap^{k}$, i.e., $L_{n_{i}} \leq Cap^{k}$, then $s = n_{i}$. Let $S_{x}$ be the revenue generated by the first $x$ MUs of $A$, i.e., $S_{x} = p \cdot L_{x}$, where the unit price $p$ equals the performance price ratio of the $(x+1)$th member of $A$. Algorithm \ref{alg:GTR}, named GTR, computes the revenue set $\{S_{x}\} = \{S_{1}, S_{2}, \ldots, S_{s-1}\}$. In order to keep the MUs bidding truthfully, we randomly generate an integer $m$ with $(s+1)/2 \leq m \leq s-1$. The random number $m$ is independent of the MUs' bids. Then $a_{i}$'s revenue for $C_{k}$ equals $S_{m}$, i.e., $R_{i}^{k} = S_{m}$. The set of potential winners of $a_{i}$ for $C_{k}$ consists of the first $m$ MUs of $A$, i.e., $w_{i}^{k} = A_{m}$. The unit price $p$ equals the performance price ratio of the $(m+1)$th MU in $A$. For the MU $m_{i}^{j}$ in $a_{i}$, its potential price on $C_{k}$ is $p_{i}^{j}(k)$, where $p_{i}^{j}(k) = l_{i}^{j} \cdot p$ if $m_{i}^{j} \in w_{i}^{k}$, and $p_{i}^{j}(k) = 0$ if $m_{i}^{j} \notin w_{i}^{k}$. This means that if $a_{i}$ is allocated $C_{k}$ after the whole auction scheme, then the MUs with $m_{i}^{j} \in w_{i}^{k}$ are winners, and they will be charged $p_{i}^{j}(k)$ by $a_{i}$.
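The revenue computation above (GTR plus the capacity cutoff and random choice of $m$ in ACRC) can be sketched in Python as follows. This is an illustrative sketch, not the authors' implementation; the function names and the `rng` parameter are our own:

```python
import math
import random

def gtr(ratios, loads, s):
    """GTR: compute S_x = p * L_x for x = 1, ..., s-1, where the unit price p
    is the (x+1)th performance price ratio, and ratios/loads are already
    sorted in descending order of ratio."""
    revenues, total = [], 0.0
    for x in range(1, s):
        total += loads[x - 1]              # L_x = l_1 + ... + l_x
        revenues.append(ratios[x] * total)  # S_x = t_{x+1} * L_x
    return revenues

def acrc_revenue(ratios, loads, capacity, rng=random):
    """Core of ACRC for one cloudlet: sort MUs by ratio, find the largest s
    with L_s <= Cap^k, then draw a random m in [(s+1)/2, s-1] and return
    (R_i^k, m) with R_i^k = S_m."""
    order = sorted(range(len(ratios)), key=lambda j: -ratios[j])
    r = [ratios[j] for j in order]
    l = [loads[j] for j in order]
    s, total = 0, 0.0
    while s < len(l) and total + l[s] <= capacity:
        total += l[s]
        s += 1
    revenues = gtr(r, l, s)
    m = rng.randint(math.ceil((s + 1) / 2), s - 1)  # independent of the bids
    return revenues[m - 1], m
```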
The sum of $\{p_{i}^{j}(k)\}$ equals $R_{i}^{k}$, i.e., $R_{i}^{k} = \Sigma_{j = 1}^{n_{i}}p_{i}^{j}(k)$, which reflects the preference of the MUs in $a_{i}$ for the cloudlet $C_{k}$. In TACD, we choose the random number $m$ in $[(s+1)/2, s-1]$ for the following reasons. First, the number $m$ must be random to keep our auction truthful, as we discuss later. Second, if the random number $m$ is close to $1$, the unit price $p$ increases but the number of winner MUs decreases, and the opposite happens if $m$ is close to $s$. Performance comparisons for different $m$ values are given in \cite{Lin2013Groupon}, where the authors note that the APs obtain more budget when the fraction of selected MUs falls in $[30\%, 70\%]$. Similarly, in this paper the APs obtain a larger budget when the number $m$ is randomly generated in $[(s+1)/2, s-1]$. Third, for each AP, the larger the budget it calculates, the more easily it wins a profitable cloudlet in the second stage. Finally, if an AP obtains the same revenue at $m_{1} = 0.3s$ and $m_{2} = 0.7s$, it wins the next stage with the same probability, but the social welfare derived from the two settings of $m$ differs greatly; it is clear that $m = 0.7s$ is better. In summary, we generate the random number in $[(s+1)/2, s-1]$, so that the AP can calculate a higher budget and obtain more profit. To illustrate the details of ACRC in TACD, we provide a simple example demonstrating how this algorithm works for AP $a_{i}$. In this example, the performance price ratios of the MUs on $C_{1}$ and $C_{2}$ are shown in Table \ref{table:example1}(a), their workload vector is shown in Table \ref{table:example1}(b), and the capacity vector of the cloudlets is shown in Table \ref{table:example1}(c). For cloudlet $C_{1}$, we first sort the MUs in descending order of their performance price ratio $t_{i}^{j}(1)$.
Then the order of MUs in the sorted array $A$ is: $A = \{ m_{i}^{4}, m_{i}^{1}, m_{i}^{5}, m_{i}^{9}, m_{i}^{6}, m_{i}^{10}, m_{i}^{2}, m_{i}^{3}, m_{i}^{7}, m_{i}^{8}\}$. Let $l_{x}$ be the workload of the $x$th MU in $A$. The workloads of the MUs in $A$ are $\{ l_{1}=1.4, l_2=1.5, l_{3}=1.6, ... \}$, which are shown in Fig. \ref{fig:example1}. Let $L_{x}$ be the total workload of the first $x$ members of $A$; for instance, $L_{3} = l_{1}+l_{2}+l_{3} = 4.5$. According to ACRC, $s = 8$; the MUs $m_{i}^{7}$ and $m_{i}^{8}$, painted in red, are losers in ACRC. Then we calculate the revenue for these $s-1$ cases. The unit price $p$ for $S_{x}$ is the $(x+1)$th performance price ratio in $A$, and $S_{x} = p \cdot L_{x}$. For instance, the unit price $p$ for $S_{5}$ is the $6$th performance price ratio in $A$, i.e., $p = t_{i}^{10}(1) = 3.6$. Then, $S_{5} = p \cdot L_{5} = 3.6 * 9.1 = 32.76$. We draw a random integer within $[5, 7]$; assume that $m = 6$. We `sacrifice' the MUs $m_{i}^{2}, m_{i}^{3}$, painted in yellow, to keep ACRC truthful. Therefore $R_{i}^{1} = S_{6} = 32.8$ and the unit price is $p = t_{i}^{2}(1) = 2.9$. The first $6$ MUs in this example form the potential winner set for $C_{1}$, i.e., $w_{i}^{1} = \{ m_{i}^{4}, m_{i}^{1}, m_{i}^{5}, m_{i}^{9}, m_{i}^{6}, m_{i}^{10} \}$. For these MUs, the potential price is $p_{i}^{j}(k) = p \cdot l_{i}^{j}$; in this example, $p_{i}^{4}(1) = 2.9 * 1.4 = 4.06$, $p_{i}^{1}(1) = 2.9 * 1.5 = 4.35$, $p_{i}^{5}(1) = 2.9 * 1.6 = 4.64$, $p_{i}^{9}(1) = 6.96$, $p_{i}^{6}(1) = 6.38$, and $p_{i}^{10}(1) = 6.38$. For the remaining MUs with $m_{i}^{j} \notin w_{i}^{1}$, the potential price is $p_{i}^{j}(1) = 0$. We thus obtain the potential price set $\{p_{i}^{j}(1)\}$. The procedure is similar when AP $a_{i}$ calculates the revenue for the other cloudlets. After all the APs have calculated the revenue for each cloudlet, the revenue matrix $\{R_{i}^{k}\}$ is formed.
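The figures in this worked example can be reproduced with a short script (a sketch for checking only; the MU data are copied from Table \ref{table:example1} and $m = 6$ is fixed as in the text):

```python
# Reproducing the worked ACRC example for cloudlet C_1 with m = 6.
ratios = {1: 6.0, 2: 2.9, 3: 2.7, 4: 6.4, 5: 5.6,
          6: 3.6, 7: 2.0, 8: 1.7, 9: 3.7, 10: 3.6}
loads = {1: 1.5, 2: 2.7, 3: 2.2, 4: 1.4, 5: 1.6,
         6: 2.2, 7: 2.5, 8: 2.3, 9: 2.4, 10: 2.2}

A = sorted(ratios, key=lambda j: -ratios[j])   # sorted MU indices
L, total = [], 0.0                             # prefix workloads L_x
for j in A:
    total += loads[j]
    L.append(total)
cap = 17
s = max(x for x in range(1, 11) if L[x - 1] <= cap)  # largest prefix fitting
m = 6                                                # as assumed in the text
p = ratios[A[m]]                                     # unit price: 7th ratio
R = p * L[m - 1]                                     # R_i^1 = S_6
winners = A[:m]                                      # potential winner set
prices = {j: round(p * loads[j], 2) for j in winners}
print(s, round(R, 1), winners, prices)
```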
Then the APs bid for each cloudlet in the next stage. These bids constitute the budget matrix $\{B_{i}^{k}\}$, which records each AP's budget for each cloudlet. All APs have submitted truthful bids if $\{B_{i}^{k}\} = \{R_{i}^{k}\}$; otherwise there must be one or more cheaters. The latter case is what we need to avoid. \begin{table} \centering{ \caption{Example for ACRC}\label{table:example1} \begin{subtable}{(a) MUs' performance price ratio on each Cloudlet} \centering \begin{tabular}{cccccc} \hline\noalign{\smallskip} &$t_{i}^{1}(k)$&$t_{i}^{2}(k)$&$t_{i}^{3}(k)$&$t_{i}^{4}(k)$&$t_{i}^{5}(k)$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $C_{1}$& 6 & 2.9&2.7&6.4&5.6 \\ $C_{2}$& 6 & 2.5&4.5&5.7&3.1 \\ ...& & &&& \\ \noalign{\smallskip}\hline\noalign{\smallskip} &$t_{i}^{6}(k)$&$t_{i}^{7}(k)$&$t_{i}^{8}(k)$&$t_{i}^{9}(k)$&$t_{i}^{10}(k)$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $C_{1}$& 3.6 & 2&1.7&3.7&3.6 \\ $C_{2}$& 1.8 & 3.2&4.3&3.7&2.9 \\ ...& & &&& \\ \noalign{\smallskip}\hline \end{tabular} \label{table:example1.1} \end{subtable} } \begin{subtable}{(b) The total workload of MUs' offloading task(s)} \centering \begin{tabular}{cccccccccc} \hline\noalign{\smallskip} $l_{i}^{1}$&$l_{i}^{2}$&$l_{i}^{3}$&$l_{i}^{4}$&$l_{i}^{5}$&$l_{i}^{6}$&$l_{i}^{7}$&$l_{i}^{8}$&$l_{i}^{9}$&$l_{i}^{10}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1.5&2.7&2.2&1.4&1.6&2.2&2.5&2.3&2.4&2.2 \\ \noalign{\smallskip}\hline \end{tabular} \label{table:example1.2} \end{subtable} \begin{subtable}{(c) Cloudlets' resource capacity} \centering \begin{tabular}{cccccccc} \hline\noalign{\smallskip} $Cap^{1}$&$Cap^{2}$&$Cap^{3}$&$Cap^{4}$&$Cap^{5}$&$Cap^{6}$&$Cap^{7}$&...\\ \noalign{\smallskip}\hline\noalign{\smallskip} 17&22&25&11&19&21&18&...
\\ \noalign{\smallskip}\hline \end{tabular} \label{table:example1.3} \end{subtable} \end{table} \begin{figure}[h] \vskip -3mm \centering \setlength{\belowcaptionskip}{-1em} \includegraphics[width=3.5in]{Visio_example1_r1.eps} \caption{\label{fig:example1} {\small Illustration of ACRC in TACD.}} \vskip -3mm \end{figure} \subsubsection{\quad\quad \quad Stage \uppercase\expandafter{\romannumeral2}: Matching Cloudlet for AP} The algorithm used in this stage is named ASC; more details are shown in Algorithm \ref{alg:ASC}. In this stage, APs deal with cloudlets according to the budgets of the APs and the reserve prices of the cloudlets. In TACD, we assign cloudlets to APs in a greedy manner, as in the existing work \cite{Goldberg2010Competitive}. In ASC, we first generate the profit matrix $D$, where $D = \{B_{i}^{k}\} - \{r_{k}\}$ and $d_{i}^{k} = B_{i}^{k} - r_{k}$. Then we distribute the APs randomly into $A'$. For each AP in $A'$, we try to match it with an available cloudlet $C_{k}$ that maximizes the profit $B_{i}^{k}-r_{k}$ by the algorithm FRM, shown in Algorithm \ref{alg:FRM}. For the AP $a_{i}$ in $A'$, we try to match it with the most profitable cloudlet among the remaining available cloudlets. The profit vector of $a_{i}$ is $D_{i}$, the $i$th row of the matrix $D$. We select the largest element $d_{i}^{k}$ in $D_{i}$; the cloudlet $C_{k}$ is then the most profitable cloudlet for $a_{i}$ among the remaining available cloudlets. In case of ties, we choose the $C_{k}$ with the smaller $k$. As a result, FRM matches $a_{i}$ with $C_{k}$ and returns the matching to ASC. For this AP-cloudlet matching, the profit is $d_{i}^{k}$. Then, the algorithm ASC checks whether the profit is positive, i.e., whether $d_{i}^{k} > 0$. The budget of $a_{i}$ is larger than the reserve price of $C_{k}$ if $d_{i}^{k} > 0$, i.e., if $B_{i}^{k} > r_{k}$. Then we try to find a bid for $C_{k}$ from the other APs.
The selected bid is the largest one lying between $B_{i}^{k}$ and $r_{k}$. In other words, we try to find the $B_{j}^{k}$ where $B_{i}^{k} \geq B_{j}^{k} \geq \ldots \geq r_{k}$ and $i \neq j$. If there is no such $B_{j}^{k}$, then $a_{i}$ fails to be allocated $C_{k}$, and we set $d_{i}^{k} = 0$. Otherwise, we allocate $C_{k}$ to $a_{i}$, i.e., let $\sigma(i) = k$. The clearing prices of $a_{i}$ and $C_{k}$ are both set to this bid, i.e., $P_{i} = P^{k} = B_{j}^{k}$. Then we add $a_{i}$ and $C_{k}$ to their winner sets, i.e., $W = W \cup a_{i}$ and $W' = W' \cup C_{k}$. Finally, we set the values of all elements in the $i$th row of the matrix $D$ to $0$, and likewise the values of all elements in the $k$th column. Algorithm ASC ensures a nonnegative utility for both APs and cloudlets if they are winners in the auction. For each winner AP-cloudlet matching, the clearing prices $P_{i}, P^{k}$ are independent of $B_{i}^{k}$ and $r_{k}$, so neither $a_{i}$ nor $C_{k}$ can modify the clearing price by itself. This helps keep the auction truthful.
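The control flow of ASC and FRM can be sketched as follows; this is a simplified version in which APs are processed in index order instead of the random order prescribed by ASC, and all names are illustrative:

```python
def asc(B, r):
    """Sketch of ASC with FRM inlined: match each AP with its most
    profitable available cloudlet; the deal closes only if another
    bid B_j^k with B_i^k >= B_j^k >= r_k exists, and that bid
    becomes the clearing price P_i = P^k."""
    n, K = len(B), len(r)
    D = [[B[i][k] - r[k] for k in range(K)] for i in range(n)]
    sigma, price = {}, {}
    for i in range(n):  # fixed order instead of a random shuffle
        # FRM: the most profitable remaining cloudlet for a_i.
        k = max(range(K), key=lambda c: D[i][c])
        if D[i][k] > 0:
            # Highest bid of another AP lying between r_k and B_i^k.
            others = [B[j][k] for j in range(n)
                      if j != i and r[k] <= B[j][k] <= B[i][k]]
            if others:
                sigma[i], price[i] = k, max(others)
                for c in range(K):   # remove a_i's row from D
                    D[i][c] = 0
                for j in range(n):   # remove C_k's column from D
                    D[j][k] = 0
    return sigma, price
```

For instance, with the hypothetical inputs $B = \{\{10, 5\}, \{8, 6\}\}$ and $r = \{4, 3\}$, the first AP wins the first cloudlet at clearing price $8$ and the second AP wins the second cloudlet at clearing price $5$.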
\vskip -3mm \begin{algorithm} \small \begin{algorithmic}[1] \caption{ASC: APs' auction to Select suitable Cloudlet}\label{alg:ASC} \REQUIRE{$\{ B_{i}^{k} \}$, $\{ r_{k} \}$, $D$} \ENSURE{ $W$, $W'$, $\{P_{i}\}$, $\{P^{k}\}$, $\sigma$} \STATE{Distributing APs randomly into $A'$.} \FOR{$x = 1$ to $n$} \STATE{Getting AP $a_{i}$ and its matching cloudlet $C_{k}$ by algorithm FRM($D$, $A'$, $x$).} \IF{$d_{i}^{k} > 0$} \IF{$B_{i}^{k} \geq B_{j}^{k} \geq \ldots \geq r_{k}$, where $j \neq i$} \STATE{$\sigma(i) = k$} \STATE{$P_{i} = P^{k} = B_{j}^{k}$} \STATE{$W = W \cup a_{i}$} \STATE{$W' = W' \cup C_{k}$} \STATE{Setting the values of elements in $i$th row of $D$ to $0$} \STATE{Setting the values of elements in $k$th column of $D$ to $0$} \ELSE \STATE{$d_{i}^{k} = 0$} \ENDIF \ENDIF \ENDFOR \RETURN{$W$, $W'$, $\{P_{i}\}$, $\{P^{k}\}$, $\sigma$} \end{algorithmic} \end{algorithm} \begin{algorithm} \small \begin{algorithmic}[1] \caption{FRM: Finding a Rational Matching to $a_{i}$}\label{alg:FRM} \REQUIRE{$D$, $A'$, $x$} \ENSURE{AP $a_{i}$ and its matching cloudlet $C_{k}$} \STATE{Let $a_{i}$ denote the $x$th AP of $A'$.} \STATE{Let vector $D_{i}$ be the $i$th row of matrix $D$.} \STATE{$d_{i}^{k}$ is the maximum of $D_{i}$.} \RETURN{$a_{i}$, $C_{k}$} \end{algorithmic} \end{algorithm} \subsubsection{\quad \quad \quad Stage III: Charging for winner} In this stage, the winner APs choose the winner MUs according to their potential winner sets, and then charge them at their potential winner price $p_{i}^{j}(k)$. For instance, if $a_{i}$ wins $C_{k}$ in stage \uppercase\expandafter{\romannumeral2}, the MUs in the potential winner set $w_{i}^{k}$ are the winner MUs of $a_{i}$. Each MU $m_{i}^{j} \in w_{i}^{k}$ is charged by $a_{i}$ at the clearing price $p_{i}^{j}$, where $p_{i}^{j} = p_{i}^{j}(k)$. \subsection{Scheme 2: TACDp} In this subsection, we propose a more efficient scheme named TACD plus (TACDp).
TACDp improves the first stage of TACD by changing the generation method of $m$ in ACRC, so that the APs in TACDp can obtain more revenue. In TACD, $m$ is randomly generated in $[(s+1)/2, s-1]$; this keeps the auction scheme truthful, but it may sacrifice many MUs and thus degrade the performance of TACD. In TACDp, we calculate several profitable revenues and then randomly select one of them as the revenue of the target AP. In this section, we assume that the default value of $top_1$ is $3$. We then select the top $3$ profitable revenues $S_{x_{1}}, S_{x_{2}}, S_{x_{3}}$ from $S$, and $m$ is randomly selected from $\{ x_1, x_2, x_3\}$, denoted as $m = random\{x_1, x_2, x_3\}$. We can also change the value of $top_1$ to obtain a better result; e.g., for $top_1 = 2$ we select the top $2$ profitable revenues $S_{x_{1}}, S_{x_{2}}$ from $S$, and $m = random\{x_1, x_2\}$. Different values of $top_1$ lead to different average revenues and different degrees of truthfulness; the effect of $top_1$ is discussed in the next section. To illustrate the first stage of TACDp, we calculate the revenue of $a_{i}$ on $C_{2}$, shown in Table \ref{table:example1}. The ACRC in TACDp is shown in Fig. \ref{fig:example2}. In this example, $top_1 = 3$. Following TACD, we generate the number $s$ and the revenue set $S$, resulting in $s = 10$ and $S = \{8.5, 13.0, 21.9, 27.3, 31.3, 38.1, 40.3, 40.2, 33.8\}$. The top $3$ cases in $S$ are $S_{7}, S_{8}, S_{6}$, so $m =$ $random$ $\{6, 7, 8\}$, and the average revenue is $39.5$. It is worthwhile to point out that the average revenue in TACD is $36.7$; thus, the revenue of the APs in TACDp is improved. \begin{figure}[h] \setlength{\belowcaptionskip}{-1em} \includegraphics[width=4.5in]{Visio_example2_r1.eps} \caption{\label{fig:example2} {\small Illustration of ACRC in TACDp.}} \end{figure} The remaining steps of TACDp are the same as in TACD. Note that the value of $top_1$ must be larger than $1$.
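TACDp's choice of $m$ therefore reduces to picking one of the indices of the $top_1$ largest revenues. A sketch, with a seeded generator so that the run is reproducible (the function name is ours):

```python
import random

def choose_m_tacdp(S, top1, rng=random):
    """Sketch of TACDp's rule: S[x-1] holds S_x for x = 1 .. s-1;
    m is drawn uniformly from the indices of the top1 largest S_x."""
    ranked = sorted(range(1, len(S) + 1), key=lambda x: -S[x - 1])
    return rng.choice(ranked[:top1])

# Revenue set of the C_2 example (s = 10, so S_1 .. S_9):
S = [8.5, 13.0, 21.9, 27.3, 31.3, 38.1, 40.3, 40.2, 33.8]
ranked = sorted(range(1, 10), key=lambda x: -S[x - 1])
top3 = ranked[:3]                      # [7, 8, 6], i.e. S_7, S_8, S_6
avg = sum(S[x - 1] for x in top3) / 3  # approx. 39.5
```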
To see this, let $S_{max}$ be the most profitable revenue in $S$; we cannot fix the revenue of an AP at $S_{max}$, because ACRC would no longer be truthful if we always chose $R_{i}^{k} = S_{max}$. For instance, $S_{max} = S_{7}$, i.e., $40.3$ in Fig. \ref{fig:example2}, and the unit price is $p = t_{i}^{10}(2)$, i.e., $2.9$, while MUs bid truthfully. For $m_{i}^{5}$, its valuation on $C_{2}$ is $v_{i}^{5}(2)$ where $v_{i}^{5}(2) = b_{i}^{5}(2)$, i.e., $4.96$, and its potential price is $p_{i}^{5}(2) = p \cdot l_{i}^{5} = 2.9 * 1.6 = 4.64$. We assume that $a_{i}$ wins $C_{2}$ in stage \uppercase\expandafter{\romannumeral2}, so $m_{i}^{5}$ will be charged at the clearing price $p_{i}^{5} = p_{i}^{5}(2) = 4.64$ in stage \uppercase\expandafter{\romannumeral3}. Therefore, the utility of $m_{i}^{5}$ is $u_{i}^{5} = v_{i}^{5}(2) - p_{i}^{5} = 4.96 - 4.64 = 0.32$. Now suppose that $m_{i}^{5}$ bids untruthfully and changes its bid on $C_{2}$ to $b_{i}^{5}(2) = 4.32$, which is less than its valuation on $C_{2}$. Then the performance price ratio of $m_{i}^{5}$ on $C_{2}$ becomes $t_{i}^{5}(2) = 2.7$, and it is sorted behind $t_{i}^{10}(2)$ according to ACRC. Now $L_{7} = L_{6} + l_{i}^{10} = 12.3 + 2.2 = 14.5$, $L_{8} = L_{7} + l_{i}^{5} = 14.5 + 1.6 = 16.1$, and $S_{6} = L_{6} * 2.9 = 35.67$, $S_{7} = L_{7} * 2.7 = 39.15$, $S_{8} = L_{8} * 2.5 = 40.25$, so $S_{max} = S_{8}$. If $R_{i}^{k}$ always equaled the most profitable revenue, then $R_{i}^{2} = S_{8}$ and its unit price would be $p = 2.5$. Assuming the matching result is the same in stage \uppercase\expandafter{\romannumeral2}, $m_{i}^{5}$ will be charged at the clearing price $p_{i}^{5} = p_{i}^{5}(2) = l_{i}^{5} \cdot p = 1.6 * 2.5 = 4$ in stage \uppercase\expandafter{\romannumeral3}. Then, if $m_{i}^{5}$ bids untruthfully, its utility is $\widetilde{u}_{i}^{5} = v_{i}^{5}(2) - p_{i}^{5} - \theta = 4.96 - 4 - \theta = 0.96 - \theta$.
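The arithmetic of this counterexample can be checked with a short script; the numbers are taken directly from the example above:

```python
# Numbers from the counterexample: valuation v and workload l of m_i^5.
v, l = 4.96, 1.6

# Truthful bidding: unit price p = 2.9, so the clearing price is 4.64.
u_truthful = v - 2.9 * l            # 4.96 - 4.64 = 0.32

# Untruthful bidding with R_i^k fixed at S_max: the unit price drops
# to p = 2.5, so the clearing price is 4.00.
u_untruthful_gross = v - 2.5 * l    # 4.96 - 4.00 = 0.96 (before theta)
```

Net of the extra cost $\theta$, the cheater's utility $0.96 - \theta$ exceeds the truthful $0.32$ whenever $\theta < 0.64$, and stays positive for $\theta < 0.96$.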
$m_{i}^{5}$ can thus improve its utility by bidding untruthfully if the extra cost $\theta$ is small enough, e.g., $\theta < 0.96$. \subsection{Scheme 3: TACDpp} We introduce another efficient algorithm named TACDpp in this subsection. TACDpp is an improved version of TACDp that refines the second stage of TACDp. The difference between TACDp and TACDpp is that TACDpp replaces algorithm FRM with algorithm FRMG in ASC. The first stage of TACDpp is the same as that of TACDp. In the second stage, TACDpp matches cloudlets for APs in a global way, which is different from TACDp. In TACDpp, we match cloudlets with APs by algorithm FRMG, which is shown in Algorithm \ref{alg:FRMG}. Let $top_2$ be a small number, the top factor in FRMG; its default value is $2$. In each round, FRMG gets a random integer $rnd$ in $[1, top_2]$, then selects the $rnd$-th most profitable value $d_{i}^{k}$ from the profit matrix $D$, and returns $\{a_{i}, C_{k}\}$ to ASC for further judgement. When the network is unbalanced between supply and demand, i.e., $K \neq n$, TACDpp can perform better due to this global idea. It is also worth mentioning that we must ensure $top_2 > 1$, similar to $top_1$; this will be discussed later. \added{The performance comparison of the proposed schemes is shown in Table \ref{table:Comparison}.
This table lists the algorithms employed in each stage and the generation approach of the number $m$.} \vskip -3mm \begin{algorithm}[h] \small \begin{algorithmic}[1] \caption{FRMG: Finding a Rational Matching in the Global scope}\label{alg:FRMG} \REQUIRE{$D$, $top_2$} \ENSURE{$a_{i}$, $C_{k}$} \IF{$top_2 > 1$} \STATE{$rnd$ is the random integer in $[1, top_2]$} \ELSE \STATE{$rnd = 1$.} \ENDIF \STATE{Finding out the $rnd$-th profitable matching $d_{i}^{k}$ from $D$.} \RETURN{$a_{i}$, $C_{k}$} \end{algorithmic} \end{algorithm} \vskip -3mm \begin{table}[htbp]\scriptsize \vskip -3mm \centering \caption{\label{table:Comparison}Comparison for TACD, TACDp and TACDpp} \begin{tabular}{C{1.2cm}C{1.5cm}C{2.5cm}C{1.5cm}} \hline\noalign{\smallskip} \textbf{Schemes}&Stage \uppercase\expandafter{\romannumeral1} & The number $m$ & Stage \uppercase\expandafter{\romannumeral2} \\ \noalign{\smallskip}\hline\noalign{\smallskip} \textbf{TACD}&\textbf{ACRC}+\textbf{GTR}&[$(s+1)/2, s-1$]&\textbf{ASC}+\textbf{FRM}\\ \textbf{TACDp}&\textbf{ACRC}+\textbf{GTR}&One of $top_1$ cases&\textbf{ASC}+\textbf{FRM}\\ \textbf{TACDpp}&\textbf{ACRC}+\textbf{GTR}&One of $top_1$ cases&\textbf{ASC}+\textbf{FRMG}\\ \noalign{\smallskip}\hline \end{tabular} \vskip -3mm \end{table} \section{Desired Properties} \label{sec:4} \subsection{Truthfulness} \begin{Theo} The schemes TACD, TACDp and TACDpp are truthful in ACRC. \begin{proof} To verify the truthfulness of ACRC, we only need to prove that MUs are truthful in our auction. In TACD, for the MU $m_{i}^{j}$, $b_{i}^{j}(k)$ is the truthful bid of $m_{i}^{j}$. Let $\widetilde{b}_{i}^{j}(k)$ be the untruthful bid. Then the utility of $m_{i}^{j}$ is $u_{i}^{j}$ when it bids truthfully. Let $\widetilde{u}_{i}^{j}$ be the utility when it bids untruthfully. We prove that $m_{i}^{j}$ cannot improve its utility by submitting an untruthful bid as follows, i.e., $\widetilde{u}_{i}^{j} \leq u_{i}^{j}$. 
There are four cases for MU $m_{i}^{j}$ in TACD: \begin{enumerate} \item MU $m_{i}^{j}$ fails in the auction with both the truthful bid $b_{i}^{j}(k)$ and the untruthful bid $\widetilde{b}_{i}^{j}(k)$. Then, $u_{i}^{j} = 0$ and $\widetilde{u}_{i}^{j} = -\theta$. \item MU $m_{i}^{j}$ wins the auction while bidding truthfully and fails while bidding untruthfully. In this case, $u_{i}^{j} \geq 0$, and $\widetilde{u}_{i}^{j} = -\theta$. \item The MU wins the auction with both the truthful and the untruthful bid. When $m_{i}^{j}$ wins the auction in TACD, its clearing price is $c$ under our rules. On the other hand, if $m_{i}^{j}$ also wins the auction with another bid, the clearing price is still $c$ while the other MUs' bids are fixed. Then $\widetilde{u}_{i}^{j} = u_{i}^{j} - \theta$. \item The MU fails in the auction while bidding truthfully and wins while bidding untruthfully. When $m_{i}^{j}$ fails in TACD while bidding truthfully, the clearing price $c$ is greater than or equal to its bid, i.e., $c \geq b_{i}^{j}(k)$. If $m_{i}^{j}$ wins the auction with another bid $\widetilde{b}_{i}^{j}(k)$, it must hold that $\widetilde{b}_{i}^{j}(k) \geq c$, so $\widetilde{b}_{i}^{j}(k) > b_{i}^{j}(k)$ and $\widetilde{b}_{i}^{j}(k) > v_{i}^{j}(k)$; then we have $\widetilde{u}_{i}^{j} \leq u_{i}^{j} = 0$. \end{enumerate} We have now discussed the truthfulness of MUs in TACD when MU $m_{i}^{j}$ bids for the $k$th cloudlet. The other cloudlets do not need to care whether $m_{i}^{j}$ cheats or not if the $k$th cloudlet $C_{k}$ is finally assigned to the AP $a_{i}$. Similarly, MUs in TACDp and TACDpp are also truthful in ACRC, because these two schemes only change the way the random integer $m$ is generated. \end{proof} \end{Theo} \begin{Theo} The schemes TACD, TACDp and TACDpp are truthful in ASC.
\begin{proof} For TACD and TACDp, the algorithms in the second stage are similar to the \textit{fixed price auction} mentioned in \cite{Goldberg2010Competitive}. This auction scheme has been proved to be truthful; we only change the way the clearing price is generated in TACD and TACDp when a transaction is done. Furthermore, the clearing price is independent of the AP and the cloudlet in the second stage of TACD and TACDp. Therefore, TACD and TACDp are also truthful for ASC. For TACDpp in ASC, we ensure truthfulness through the top factor $top_2$, which is discussed in the simulation section. \end{proof} \end{Theo} \subsection{Budget Balanced} \begin{Theo} The schemes TACD, TACDp and TACDpp are budget balanced. \begin{proof} We only prove that TACD is budget balanced; the proofs for TACDp and TACDpp are identical. In TACD, if $\sigma(i) = k$, $a_{i} \in W$ and $C_{k} \in W'$, then the total clearing price charged to the MUs is $val_{1} = \sum_{i=1}^{n}\sum_{j=1}^{n_{i}}p_{i}^{j}$. Similarly, the total clearing price for cloudlets is $val_{2} = \sum_{k=1}^{K}P^{k}$, and the total clearing price for APs is $val_{3} = \sum_{i=1}^{n}(R_{i}^{k} - P_{i})$. The total budget of the APs is $val_{4} = \sum_{i=1}^{n}B_{i}^{k}$; then $val_{1} = val_{4}$ according to ACRC, and $val_{4} = val_{2} + val_{3}$ according to ASC. Then, $val_{1} = val_{4} = val_{3} + val_{2}$, and $val_{1} \geq val_{3} + val_{2}$, i.e., $$\sum_{i=1}^{n}\sum_{j=1}^{n_{i}}p_{i}^{j} \geq \sum_{k=1}^{K}P^{k} + (\sum_{i=1}^{n}R_{i}^{k} - \sum_{i=1}^{n}P_{i}).$$ \end{proof} \end{Theo} \subsection{Individual Rationality} \begin{Theo} The schemes TACD, TACDp and TACDpp are subject to individual rationality. \begin{proof} The individual rationality of TACD can be proved as follows.
For sellers, according to the judgement in ASC, the clearing price for a cloudlet cannot be smaller than what it asked, i.e., $P^{k}$ is never smaller than $r_{k}$. For buyers, if MU $m_{i}^{j}$ wins the cloudlet $C_{k}$, the MU is charged $p_{i}^{j} = p \cdot l_{i}^{j}$, where $p$ is the performance price ratio of the $m$th MU in $A$ and $p \leq t_{i}^{j}(k)$. Therefore, $p_{i}^{j} \leq t_{i}^{j}(k) \cdot l_{i}^{j} = b_{i}^{j}(k)$. For APs, we obtain $B_{i}^{k} = R_{i}^{k}$ according to ACRC. Also, the adjustment factor $f$ lies in $(0, 1)$ in ASC; thus, the clearing price of an AP is $P_{i} = f \cdot B_{i}^{k} < B_{i}^{k} = R_{i}^{k}$. Therefore, $R_{i}^{k} \geq B_{i}^{k} \geq P_{i}$. The proof of individual rationality for TACDp and TACDpp is the same as that of TACD. \end{proof} \end{Theo} \subsection{Computational Efficiency} \begin{Theo} The time complexity of TACD as well as TACDp is $O(K\cdot n\log n)$. \begin{proof} In ACRC, the sorting takes $O(n\log n)$ time, finding the number $s$ takes $O(n)$ time, and the algorithm GTR also takes $O(n)$ time. The time complexity of ACRC in TACD and TACDp is thus $O(K\cdot n\log n)$. In ASC, distributing the APs randomly takes $O(n\log n)$ time and the algorithm FRM takes $O(K)$ time, so the time complexity of ASC in TACD and TACDp is $O(n\cdot K)$. Therefore the total time complexity of TACD and TACDp is $O(K\cdot n\log n)$. \end{proof} \end{Theo} \begin{Theo} The time complexity of TACDpp is $O(K\cdot n^{2})$. \begin{proof} The time complexity of ACRC in TACDpp is the same as that of TACDp, namely $O(K\cdot n\log n)$. In ASC, the algorithm FRMG takes $O(n\cdot K)$ time, unlike the algorithm FRM. Thus, the time complexity of ASC in TACDpp is $O(K\cdot n^{2})$. Therefore the total time complexity of TACDpp is $O(K\cdot n^{2})$. \end{proof} \end{Theo} \section{Numerical Results} \label{sec:5} \subsection{Simulation Setup} We run our simulations in MATLAB R2014a.
In the simulation, the capacities of all the cloudlets follow the normal distribution $N(25, 5)$ and each capacity $Cap^{k}$ satisfies the constraint $10 \leq Cap^{k} \leq 30$. The cost factor $c(k)$ follows the normal distribution $N(0.75, 0.1)$ with $0.5 \leq c(k) \leq 1$. Then, the reserve price $\{r_{k}\}$ can be calculated by formula \ref{fomul:rk}. For each AP $a_{i}$, the number of MUs in $a_{i}$ follows the uniform distribution $U(5, 30)$. For the MUs $m_{i}^{j}$ in $a_{i}$, their workloads follow the normal distribution $N(2, 1)$ with $1 \leq l_{i}^{j} \leq 3$, and their valuations for each cloudlet follow the uniform distribution $U(1, 15)$. We compare our auction schemes with the strategy Heaviest Access Point First (HAF) \cite{Jia2015Optimal}. HAF is an efficient scheme for cloudlet placement and resource allocation without auction. In this paper, the strategy HAF works in the following way: first, HAF sorts APs by the total workload of their MUs in descending order; then, HAF sorts cloudlets by their capacity in descending order; finally, HAF matches cloudlets to APs in turn. For instance, HAF assigns the first cloudlet, whose capacity is the biggest, to the first AP, whose total MU workload is the heaviest, then HAF assigns the second cloudlet to the second AP, and so on. If $C_{k}$ is assigned to $a_{i}$, the budget that $a_{i}$ bids for $C_{k}$ is $B_{i}^{k}$. It is calculated using the same method as in ACRC, but the number $m$ is a fixed integer $m = s$, and the potential winner MUs are the first $m$ MUs in $A$, i.e., $A_{m}$. The unit price $p$ charged by the AP is the performance price ratio of the $m$th MU in $A$. In HAF, $a_{i}$ only needs to calculate the budget on $C_{k}$. The transaction between $a_{i}$ and $C_{k}$ is done if $B_{i}^{k} \geq r_{k}$, which differs from the algorithm ASC. Obviously, if HAF is viewed as an incentive mechanism, it is untruthful.
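The matching rule of the HAF baseline can be sketched as follows (a minimal sketch: the budget computation and the $B_{i}^{k} \geq r_{k}$ check are omitted, and all names are ours):

```python
def haf_match(ap_workloads, capacities):
    """Sketch of HAF's matching: the heaviest AP gets the largest
    cloudlet, the second heaviest the second largest, and so on."""
    aps = sorted(range(len(ap_workloads)), key=lambda i: -ap_workloads[i])
    clouds = sorted(range(len(capacities)), key=lambda k: -capacities[k])
    return dict(zip(aps, clouds))  # AP index -> cloudlet index
```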
\added{ Moreover, the time complexity of HAF is $O(n\log n) + O(K\log K)$. In the first stage of HAF, the sorting of APs and cloudlets takes $O(n\log n)$ and $O(K\log K)$ time, respectively. In the second stage of HAF, the matching algorithm takes $O(n) + O(K)$ time. } \subsection{Simulation Results} \begin{figure}[htbp] \vskip -3mm \centering \begin{minipage}{2in} \includegraphics[width=2in]{utility_of_cloudlets_r1.eps} \caption{\label{fig:utility_Cloudlet} {\small Utility of Cloudlet.}} \end{minipage} \begin{minipage}{2in} \includegraphics[width=2in]{utility_of_APs_r1.eps} \caption{\label{fig:utility_AP} {\small Utility of APs.}} \end{minipage} \vskip -3mm \end{figure} \begin{figure}[htbp] \vskip -1mm \centering \begin{minipage}{2in} \includegraphics[width=2in]{utility_of_MUs_r1.eps} \caption{\label{fig:utility_MU} {\small Utility of MUs.}} \end{minipage} \begin{minipage}{2in} \includegraphics[width=2in]{Social_welfare_r1.eps} \caption{\label{fig:Social_welfare} {\small Social welfare.}} \end{minipage} \vskip -3mm \end{figure} In the first part of our simulation, the top factors $top_1$ and $top_2$ are set to $2$, and the market is balanced, i.e., $K = n$. The utilities of cloudlets, APs and MUs are shown in Fig. \ref{fig:utility_Cloudlet}, Fig. \ref{fig:utility_AP}, and Fig. \ref{fig:utility_MU}, respectively. The social welfare of the auction schemes is shown in Fig. \ref{fig:Social_welfare}. There are big differences between our schemes and HAF in the first three figures. Fig. \ref{fig:utility_Cloudlet} shows that our schemes are favorable for cloudlets, while Fig. \ref{fig:utility_AP} shows that our schemes are less favorable for APs. The differences arise for the following reasons. In our schemes, we select a bid $B_{j}^{k}$, other than $B_{i}^{k}$ and $r_{k}$, with $B_{i}^{k} \geq B_{j}^{k} \geq r_{k}$ to keep ASC truthful. The clearing price of this transaction is $B_{j}^{k}$, which is bigger than $r_{k}$.
However, if $C_{k}$ is assigned to $a_{i}$, HAF does not care about truthfulness; the transaction is done whenever $B_{i}^{k} \geq r_{k}$, and the clearing price equals $r_{k}$. As a result, the utility of cloudlets is close to $0$ in HAF, as shown in Fig. \ref{fig:utility_Cloudlet}. Moreover, the APs in HAF may gain large profits from the transaction, as shown in Fig. \ref{fig:utility_AP}. Fig. \ref{fig:utility_MU} shows that HAF is more profitable for MUs than our algorithms. This is because the winner cloudlet in HAF serves more MUs in a greedy manner, and these MUs are charged a lower unit price by the AP than in our schemes. In our schemes, the number of winner MUs is $m - 1$ where $m \leq s$, and the unit price of these MUs is the performance price ratio of the $m$th MU in $A$. However, the number of winner MUs in HAF is $m$ where $m = s$, and the unit price of these MUs is the performance price ratio of the $s$th MU in $A$. Hence, HAF has more winner MUs than our schemes, and these winner MUs are charged a lower price, which makes HAF more profitable for MUs, as shown in Fig. \ref{fig:utility_MU}. \added{The social welfare results demonstrate that, when the number of MUs is $1000$, the social welfare in TACD is $5\%$ less than HAF, TACDp is $4.5\%$ higher than HAF, and TACDpp is $5.6\%$ higher than HAF. Moreover, our schemes perform better if there are more MUs in the wireless access network. For example, when the number of MUs is $1400$, TACD is $1.7\%$ less than HAF, while TACDp and TACDpp are $7.6\%$ and $7.9\%$ higher than HAF, respectively.} If the number of APs is bigger than the number of cloudlets, i.e., $n > K$, the performance of our auction schemes in an ``unbalanced market'' is shown in Fig. \ref{fig:Social_welfare_unbalanced}. In this situation, TACDpp performs better than in the balanced market, because the global matching algorithm FRMG works better.
\begin{figure}[htbp] \vskip -3mm \centering \begin{minipage}{2in} \includegraphics[width=2in]{unbalanced_market_r1.eps} \caption{\label{fig:Social_welfare_unbalanced} {\small Performance in unbalanced market.}} \end{minipage} \vskip -3mm \end{figure} Next, we evaluate the second stage of TACDpp while modifying the value of $top_2$ on a smaller data set, and we verify the truthfulness of TACDpp through different values of $B_{1}^{1}$. Here, we fix the value of $top_1$ at $2$ and vary the value of $top_2$ from $1$ to $2$ and then to $5$. The utility of $a_{1}$ for different values of $B_{1}^{1}$ is shown in Fig. \ref{fig:top2=1}, Fig. \ref{fig:top2=2} and Fig. \ref{fig:top2=5}, for the cases of $top_2 = 1, 2, 5$, respectively. In these figures, the solid line shows the profit of $a_{1}$ when $a_{1}$ bids truthfully in ASC, i.e., $B_{1}^{1} = 85.5$. The dotted line shows the profit of $a_{1}$ when it bids untruthfully, from $\widetilde{B}_{1}^{1} = B_{1}^{1} - 80$ to $\widetilde{B}_{1}^{1} = B_{1}^{1} + 50$ in increments of $1$. The result is averaged over $100$ random instances. Fig. \ref{fig:top2=1} shows the utility of the AP for the case of $top_2 = 1$. In this case, TACDpp matches cloudlet $C_{k}$ with AP $a_{i}$ when the profit of this matching is the most profitable one among the remaining cloudlets and APs. The utility of $a_{1}$ is $U_{1} = 18.7$. It is stable and profitable, because TACDpp always makes the same matching decision. Under such a fixed strategy, $a_{1}$ gets the same profit whenever it bids truthfully, so the solid line is straight in Fig. \ref{fig:top2=1}. However, it is hard to guarantee that TACDpp is truthful in ASC when $top_2 = 1$, because it may have some ``bugs'' in which APs can benefit more from their preferred cloudlet by bidding budgets lower than their revenues. For instance, as we can see in Fig.
\ref{fig:top2=1}, the utility of $a_{1}$ is $\widetilde{U}_{1} = 22.7 - \theta$ when $a_{1}$ bids untruthfully with $\widetilde{B}_{1}^{1} \in \{64.5, 65.5, 66.5\}$. $\widetilde{U}_{1}$ is larger than $U_{1}$ if $\theta < 4$. This is because, when $\widetilde{B}_{1}^{1} \in \{64.5, 65.5, 66.5\}$, the profit $\widetilde{B}_{1}^{1} - r_{1}$ is still so large that $a_{1}$ wins $C_{1}$. Also, there is another AP $a_{x}$ whose budget $B_{x}^{1} \leq 64.5$ is the largest $B_{j}^{1}$ satisfying $B_{j}^{1} \leq \widetilde{B}_{1}^{1}$, $j \in [1, n]$ and $j \neq 1$. Then, the clearing price is much lower than when $a_{1}$ bids truthfully. Therefore, if AP $a_{1}$ pays some extra price $\theta$ to figure out these more profitable cases, it will reliably gain more profit by bidding untruthfully. Simulation results of TACDpp with $top_2 = 2$ are shown in Fig. \ref{fig:top2=2}. The solid line shows the utility of $a_{1}$ when it bids truthfully. Unlike in Fig. \ref{fig:top2=1}, this line is not straight, as the matching strategy is no longer a fixed pure strategy. When $top_2 = 2$, the matching strategy turns into a mixed strategy; we combine the following two strategies with equal probability, i.e., $1/2$: \begin{enumerate} \item Matching cloudlet $C_{k}$ with the AP $a_{i}$ whose profit $B_{i}^{k} - r_{k}$ is the most profitable one. \item Matching cloudlet $C_{k}$ with the AP $a_{i}$ whose profit $B_{i}^{k} - r_{k}$ is the second most profitable one. \end{enumerate} So the utility of $a_{1}$ is not a stable value, even if $a_{1}$ always bids truthfully. The utility varies within an interval near $18$, shown by the green solid line. In contrast, the green dotted line shows the utility of $a_{1}$ when it bids untruthfully. There are also some more profitable cases when $a_{1}$ bids untruthfully, such as $\{64.5, 65.5, 66.5\}$, as occurred in the case of $top_2 = 1$.
The difference is that, if $top_2 = 2$, $a_{1}$ can benefit more in those cases only with probability $50\%$. Otherwise, $a_{1}$ is matched with another, less profitable cloudlet, and it must still pay an extra cost $\theta$ to find those cases. Therefore, there is no evident case in which $a_{1}$ can obtain more utility than in the truthful case, and it is worthless for $a_{1}$ to pay an extra cost $\theta$ to determine how to bid untruthfully. Therefore, TACDpp is truthful when $top_2 = 2$. \begin{figure}[htbp] \vskip -3mm \centering \begin{minipage}{2in} \includegraphics[width=2in]{top1_r1.eps} \caption{\label{fig:top2=1} {\small $top_2 = 1$.}} \end{minipage} \begin{minipage}{2in} \includegraphics[width=2in]{top2_r1.eps} \caption{\label{fig:top2=2} {\small $top_2 = 2$.}} \end{minipage} \vskip -3mm \end{figure} \begin{figure}[htbp] \vskip -1mm \centering \begin{minipage}{2in} \includegraphics[width=2in]{top3_r1.eps} \caption{\label{fig:top2=5} {\small $top_2 = 5$.}} \end{minipage} \begin{minipage}{2in} \includegraphics[width=2in]{top123_r1.eps} \caption{\label{fig:top2=125} {\small Comparison.}} \end{minipage} \vskip -3mm \end{figure} Similarly, Fig. \ref{fig:top2=5} shows the utility of $a_{1}$ for $top_2 = 5$. This is again a mixed strategy over $5$ pure strategies, each chosen with probability $1/5$. These $5$ pure strategies are used to match cloudlets to APs; the $j$th pure strategy corresponds to the $j$th most profitable value of $B_{i}^{k} - r_{k}$ for $j = 1, 2, \cdots, 5$. Under this mixed strategy, the utility of $a_{1}$ varies over a larger range than in Fig. \ref{fig:top2=2} when $a_{1}$ bids truthfully. Its utility falls in $[12, 14.5]$, which is less than in Fig. \ref{fig:top2=2}. In other words, the strategy for $top_2 = 5$ is less profitable and less stable than the strategy for $top_2 = 2$ when APs bid truthfully. This is because the stronger randomness brings APs many matchings that are not profitable.
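The rank-based selection behind these mixed strategies is FRMG's core step; a sketch (seeded generator for reproducibility, names illustrative):

```python
import random

def frmg(D, top2, rng=random):
    """Sketch of FRMG: draw rnd uniformly in [1, top2] and return the
    AP-cloudlet pair with the rnd-th largest profit in D."""
    rnd = rng.randint(1, top2) if top2 > 1 else 1
    pairs = [(i, k) for i in range(len(D)) for k in range(len(D[i]))]
    pairs.sort(key=lambda p: -D[p[0]][p[1]])
    return pairs[rnd - 1]
```

With $top_2 = 1$ this always returns the single most profitable pair, i.e., the fixed pure strategy of Fig. \ref{fig:top2=1}; larger $top_2$ mixes uniformly over the top-$top_2$ pairs.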
Regarding truthfulness, there is no evidence that $a_{1}$ can obtain more utility than by bidding truthfully. \section{Conclusion} \label{sec:6} In this paper, we have proposed efficient auction schemes for cloudlet placement and resource allocation in wireless networks to improve the social welfare subject to economic properties. We have introduced the group-buying model to incentivize cloudlets to serve the MUs. In our auction schemes, MUs can get access to cloudlets through APs, according to their preferences and resource demands. All three entities, MUs, APs, and cloudlets, are motivated to participate in resource sharing. We have verified that our schemes are truthful, individually rational, budget balanced and computationally efficient. Through simulations, we have shown that our schemes TACDp and TACDpp outperform HAF by about $4.5\%$ and $5.6\%$ respectively, in terms of social welfare, when the number of MUs is $1000$. \bibliographystyle{IEEEtran}
\section{Implementation} \label{sec:implementation} In this section, we discuss a number of technical aspects of our \textsc{LMQL}\xspace implementation, as published together with this paper. \subsection{Language Runtime} \paragraph{Parser and Python Compatibility} We implement \textsc{LMQL}\xspace{} as a superset of python. This also manifests in our implementation, where we rely on the python tokenizer and parser to process LMQL code. Subexpressions in an LMQL query, such as in the \lstinline|where| clause, are parsed as standard python. After some basic program transformations, we emit a python function that interacts with the \textsc{LMQL}\xspace{} runtime, and allows for interrupted execution by leveraging \lstinline|yield| and \lstinline|async| semantics. This allows us to implement \textsc{LMQL}\xspace{} as a regular python library, which can be used in any python environment. \paragraph{Eager Evaluation Semantics} To implement our evaluation semantics, we transform the abstract syntax tree as returned by the python parser into a runtime representation of a computational graph, modelling dependencies among operations explicitly. Users can easily extend \textsc{LMQL}\xspace{} with custom operators, by implementing a simple class interface with \lstinline|forward|, \lstinline|final| and \lstinline|follow| functions, similar to the integration of custom operators in the popular \lstinline|pytorch| library. Custom operators can easily be registered with the runtime, and the compiler will automatically generate the necessary code to integrate them into the \textsc{LMQL}\xspace{} computational graph. \subsection{Model Integration} \label{sec:model-integration} \paragraph{Inference API} To enable quick turnaround times during development, \textsc{LMQL}\xspace{} relies on a client-server-architecture. The server is responsible for inference, loading and managing the model. 
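As a concrete illustration of the custom-operator interface described above, a user-defined operator might take the following shape. This is a minimal sketch only: the class mirrors the \lstinline|forward|/\lstinline|final|/\lstinline|follow| scheme, but the exact method signatures and the registration mechanism of the actual \textsc{LMQL}\xspace{} runtime are not reproduced here.

```python
# Hypothetical sketch of an LMQL-style custom operator. Only the
# forward/final/follow method scheme from the text is illustrated;
# the real runtime API is not reproduced here.
class MaxLenOp:
    """Checks that a variable's value stays below a maximum length."""

    def __init__(self, max_len):
        self.max_len = max_len

    def forward(self, value):
        # Value semantics: does the (partial) value satisfy the constraint?
        return len(value) <= self.max_len

    def final(self, value, value_is_final):
        # Final semantics: can the result still change as the value grows?
        if len(value) > self.max_len:
            return "fin"  # violated now, violated for every continuation
        return "fin" if value_is_final else "var"

    def follow(self, value, token):
        # Follow semantics: result after appending one more token.
        return self.forward(value + token)

op = MaxLenOp(max_len=5)
assert op.forward("abc") is True
assert op.final("abcdef", value_is_final=False) == "fin"
assert op.follow("abcd", "ef") is False
```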
In our current implementation, it is configured to use a specific HuggingFace Transformers model. Users then interact with the \textsc{LMQL}\xspace{} client, which is a simple python library. The client parses the user-provided \textsc{LMQL}\xspace{} code, constructs the computational graph, and also runs the decoding loop. Only the forward pass of the underlying model is outsourced to the server. This naturally aligns with settings in which inference is run on a remote server with capable hardware, while the user interacts with the model via a fast, local client with quick startup times. \paragraph{Inference as a Service} The underlying client-server architecture of \textsc{LMQL}\xspace{} also allows for a separation of the \textsc{LMQL}\xspace{} client and inference as a service. In principle, vendors of API-gated LMs may therefore support \textsc{LMQL}\xspace{} by providing just the necessary inference API. Alternatively, vendors could accept to-be-executed \textsc{LMQL}\xspace{} code directly, which would offer customers more control over the decoding process than with current standard APIs. In this context, we consider \textsc{LMQL}\xspace{} a proposal for the standardization of language model interaction across different vendor-specific APIs. Implementing \textsc{LMQL}\xspace{} support would allow users to write prompting code once, and run it on any LM platform, without having to change their code. In such a setting, however, we advise sandboxing of the executed \textsc{LMQL}\xspace{} queries (as in \emph{serverless computing}), since \textsc{LMQL}\xspace{} allows arbitrary code to be executed. \paragraph{Decoding Loop} \textsc{LMQL}\xspace{} only requires a small change to existing decoder implementations. For a practical demonstration, see our implementation as published with this paper, in which we adapt the existing HuggingFace Transformers decoding loop to be \textsc{LMQL}\xspace{}-compatible. 
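The required decoder change can be pictured as a standard sampling loop with one additional mask callback per produced token. The following sketch uses a stub model and a hand-written mask function in place of the \textsc{LMQL}\xspace{} runtime; all names and the toy vocabulary are illustrative, not the published implementation.

```python
import math
import random

# Toy vocabulary; a real decoder would use the model's tokenizer vocabulary.
VOCAB = ["a", "b", "c", "<eos>"]

def model_logits(seq):
    # Stub LM: uniform logits over the vocabulary.
    return [0.0] * len(VOCAB)

def decode(mask_hook, max_tokens=10, seed=0):
    """Sampling loop with a per-token mask hook (stand-in for the runtime)."""
    rng = random.Random(seed)
    seq = []
    for _ in range(max_tokens):
        logits = model_logits(seq)
        mask = mask_hook(seq)  # runtime hook, called once per token
        allowed = [i for i, m in enumerate(mask) if m]
        weights = [math.exp(logits[i]) for i in allowed]
        tok = VOCAB[rng.choices(allowed, weights)[0]]
        if tok == "<eos>":
            break
        seq.append(tok)
    return seq

# Example mask: forbid "c" entirely, force <eos> after 3 tokens.
def mask_hook(seq):
    if len(seq) >= 3:
        return [t == "<eos>" for t in VOCAB]
    return [t != "c" for t in VOCAB]

out = decode(mask_hook)
assert "c" not in out and len(out) <= 3
```

The point of the sketch is that the hook is the only addition: the surrounding loop is an unmodified sampling decoder.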
In general, \textsc{LMQL}\xspace{} scripted prompting and output constraining both compile down to token level prediction masks. This is typically already implemented with existing decoders and just needs an additional hook, to call the \textsc{LMQL}\xspace{} runtime after each produced token. Using this simple interface, \textsc{LMQL}\xspace{} can be integrated into any decoder implementation, without requiring any changes or retraining of the underlying model. \subsection{Visual Debugger} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figures/debugger.pdf} \caption{Screenshot of the \textsc{LMQL}\xspace{} visual debugger.} \label{fig:debugger} \end{figure} Apart from command-line tooling, the \textsc{LMQL}\xspace{} runtime also includes a web-based visual editor tool, helpful in constructing and debugging \textsc{LMQL}\xspace{} programs. A screenshot of the visual debugger is shown in \cref{fig:debugger}. \paragraph{Editor and Compiler} The visual debugger provides an editor window for constructing \textsc{LMQL}\xspace{} queries. After a query is executed, users can view the compiler output, i.e. the resulting python code, including the code that constructs the computational graph and executes the prompt. \paragraph{Decoder State} Users can track the different decoding branches of the currently active decoding method in real-time. This includes simple parallel decoding when sampling more than one sequence, but also multi-branch decoding like beam search. The debugger visualizes (sub-)tokens, and at each decoder step, users can inspect the current interaction trace, the value of prompt variables as well as the current state of \lstinline|where| clause validation. \paragraph{Validation and Masking} Lastly, the computational graph of the \lstinline|where| clause can be visualized and users can track the current value of the expression. 
In addition to the regular value semantics and partial evaluation, this includes support for both \textsc{Final} and \textsc{Follow} semantics. Different shades of green and red indicate final and non-final \lstinline|True| and \lstinline|False| values, respectively. The \textsc{FollowMap} at each operation can also be inspected, allowing for a detailed analysis of the current state of the computational graph. This can be helpful when developing new \textsc{LMQL}\xspace{} operators, as it allows for a quick and easy debugging of the underlying semantics. \newpage \section{Proofs} \subsection{Proof of Theorem~\ref{thm:brzozowski}} \label{sec:proof-brzozowski} \newcommand{\ballnumber}[1]{\tikz[baseline=(myanchor.base)] \node[circle,fill=.,inner sep=1pt] (myanchor) {\color{-.}\bfseries\footnotesize #1};} \begin{proof} \textit{(Brzozowski Soundness)} Given query $\mathcal{Q}$, partial interaction trace $u$, a scope $\sigma$ and the set of allowed tokens $M := \{t \in \mathcal{V} \;|\; \text{\scshape{Follow}}[\text{\texttt{where}}_\mathcal{Q}](u, t) \neq \fin(\bot)\}$. \begin{enumerate} \item By definition, we get the following: \begin{enumerate} \item $T_\mathcal{Q} \subseteq \mathcal{V}$, since we operate with limited vocabulary $\mathcal{V}$. \item Next, inverting the masking condition, we get $M = \mathcal{V} \setminus M^{-1}$ with the set of disallowed tokens $M^{-1} = \{t \in \mathcal{V} \;|\; \text{\scshape{Follow}}[\text{\texttt{where}}_\mathcal{Q}](u, t) = \fin(\bot)\}$ \item Now, if we establish $T_\mathcal{Q} \cap M^{-1} = \emptyset$ (\textasteriskcentered), we can derive Brzozowski soundness as follows: \begin{center}$T_\mathcal{Q} \stackrel{(*)}{=} T_\mathcal{Q} \setminus M^{-1} \stackrel{(a)}{\subseteq} \mathcal{V} \setminus M^{-1} \stackrel{(b)}{=} M$ \;i.e.\; $T_\mathcal{Q} \subseteq M$\end{center}\vspace{0.5em} \item For $T_\mathcal{Q} \subseteq M$, it thus suffices to show (\textasteriskcentered), i.e. 
that no disallowed token in $M^{-1}$ is in $T_\mathcal{Q}$: $\forall t \in \mathcal{V} \bullet t \in M^{-1} \implies t \notin T_\mathcal{Q}$. \end{enumerate} \item Now we prove (\textasteriskcentered). For any disallowed $t$ we know that $\text{\scshape{Follow}}[\text{\texttt{where}}_\mathcal{Q}](u, t) = \fin(\bot)$: \begin{itemize} \item Thus, for the current hole variable $v$, it holds that: $\eval[v \leftarrow ut]{\text{\texttt{where}}_\mathcal{Q}} = \fin(\bot)$. \item By final semantics, this means that there is no $p \in \mathcal{V}^*$ such that $\eval[v \leftarrow utp]{\text{\texttt{where}}_\mathcal{Q}} \neq \bot$. \item However, by definition we know that $L_\mathcal{Q} := \{s \in \Sigma^* \;|\; \eval[\text{parse}(s)]{\text{\texttt{where}}_\mathcal{Q}} = \top\}$, where $\sigma[\text{parse}(s)]$ refers to the variable store, with variables set according to $\mathcal{Q}$ and interaction trace $s$. \item Therefore, we know that $utp \notin L_\mathcal{Q}$, which means that $tp \notin u^{-1}L_\mathcal{Q}$, i.e. $t \notin T_\mathcal{Q}$. \end{itemize} \item Overall, we therefore have shown that (\textasteriskcentered) holds, which implies via (1) that $T_\mathcal{Q} \subseteq M$. \qedhere \end{enumerate} \end{proof} \newpage \section{More Evaluation Results} \label{app:xyz} \subsection{Interactive Prompting (\texttt{ReAct})} \label{app:scripted-prompting} This section includes additional details on our case study with the interactive scheme \texttt{ReAct} in \cref{sec:eval-interactive-prompting}. \paragraph{Interaction Traces} We compare the interaction traces of our \textsc{LMQL}\xspace query in \cref{lst:output-interactive:prompting} with the output of the equivalent python implementation in \cref{lst:output-interactive:python}. For our python implementation, we additionally mark each \lstinline{generate()} call during output generation. 
For the \textsc{LMQL}\xspace trace, this is not necessary as \textsc{LMQL}\xspace{} decodes the whole trace in one go, using its token level interaction model. \paragraph{Python Baseline} In \cref{lst:python-scripted-prompting} we include the full source of our python baseline implementation for the \texttt{ReAct} prompting scheme. \input{figures/output-interactive-prompting} \input{figures/output-interactive-python} \begin{figure} \begin{lstlisting}[language=python, caption={Python baseline implementation of \texttt{ReAct} \cite{yao2022react} prompting. In comparison to the equivalent \textsc{LMQL}\xspace query in \cref{lst:eval-interactive-prompting}, parsing and interaction has to be implemented manually.}, label={lst:python-scripted-prompting}, basicstyle=\footnotesize\ttfamily] prompt = f"""<few-shot samples> {question}""".strip() + "\n" for i in range(1024): new_prompt = await hf.generate(prompt, max_new_tokens=40, stopping_phrases=["Act"], step_size=10, remove_stopping_phrases=False) new_text = new_prompt[len(prompt):] # find first Act indices = [i for i in [new_text.find("Act")] if i != -1] first_kw = min(indices + [len(prompt) + 1]) if first_kw == len(prompt) + 1: prompt = new_prompt continue new_text = new_text[first_kw:] # find end of line end_of_line = new_text.find("\n") if end_of_line != -1: new_text = new_text[:end_of_line] if new_text.startswith("Act"): s = new_text.strip() try: query = s.split(": ", 1)[1] command, subject = query.split(" '", 1) index = s.split(": ", 1)[0].split(" ", 1)[1] if "Search" in command: if subject.endswith("'"): subject = subject[:-1] result = wikipedia_utils.search(subject) new_text += "\nObs {}: {}".format(index, result) elif "Finish" in command: print("FINISHED with", subject) except: print("Failed to parse action", s) print("======generate()======\n", new_text) prompt += "\n" + new_text \end{lstlisting} \end{figure} \section{Conclusion} \label{sec:conclusion} In this work, we introduce the concept of Language Model 
Programming\xspace, a novel way to interact with (large) language models. We presented \textsc{LMQL}\xspace{}, a high-level query language, offering a concise and intuitive syntax. \textsc{LMQL}\xspace{} implements purpose-designed evaluation semantics, which enable efficient query execution. We have substantiated this claim in a series of case studies, where we demonstrate that complex, state-of-the-art prompting techniques can be implemented as intuitive, concise and efficient \textsc{LMQL}\xspace{} programs that reduce (compute) costs by up to $80\%$. \paragraph{Acknowledgements} We thank Mark Müller for his thoughtful comments and proofreading. \section{Evaluation} \label{sec:evaluation} Here, we evaluate the effectiveness of \textsc{LMQL}\xspace as a language as well as a tool for prompt engineers. We evaluate \textsc{LMQL}\xspace in three different case studies, encompassing a wide range of prompting scenarios. \subsection{Research Questions and Setup} We focus our evaluation on three core questions: \begin{itemize} \item \textbf{Expressiveness} Can users rely on \textsc{LMQL}\xspace for effective language model programming? Can we easily implement common and advanced prompting techniques with simple and concise query logic, especially in the case of interactive prompting? \item \textbf{Performance} Can \textsc{LMQL}\xspace be used to effectively lower the required number of model queries and thereby lower the implied computational or API-related cost of using LMs? \item \textbf{Accuracy} Can constraint decoding be used to improve the accuracy of LMs on standard benchmarks by providing hand-crafted validation rules? \end{itemize} \paragraph{Baseline} Although \textsc{LMQL}\xspace{} queries can become quite complex when using constraints and scripted prompts, overall, the language still provides a comparatively accessible interface close to natural language. 
Therefore, we evaluate \textsc{LMQL}\xspace{} mainly as an alternative to other, existing high-level interfaces for Python, that are typically used to interact with LMs. More specifically, we assume a simple \texttt{generate()} API as e.g. provided by the HuggingFace Transformers \cite{wolf2020transformers} package\footnote{\texttt{GenerationMixin.generate()} API documentation: \url{https://huggingface.co/docs/transformers/v4.18.0/en/main_classes/text_generation\#transformers.generation_utils.GenerationMixin.generate}}. \texttt{generate()} can be called with some string, which is then used to invoke a language model to generate a likely continuation sequence. The method supports a range of parameters, including maximum length, decoding methods and stop tokens. Most importantly however, we assume that \texttt{generate()} does not support token level validation, but instead requires users to generate sequences chunk-wise, and then parse and validate the output manually. This is also comparable to how popular, state-of-the-art interfaces for LMs on the web, e.g. OpenAI's GPT-3 API\footnote{GPT-3 API, \url{https://openai.com/api/}} work. \paragraph{Datasets and Model} In our case studies, we address tasks relating to \textit{general and date understanding} \cite{srivastava2022beyond}, \textit{question answering} \cite{yang2018hotpotqa} and \textit{arithmetic math} \cite{cobbe2021training}. As language model, we rely on the publicly available open source model GPT-J 6B \cite{gpt-j} (6 billion parameters). The model's performance is comparable to the widely used GPT-3 model with 6.7 billion parameters across many important benchmarks. Further, where GPT-J exceeds the abilities of our hardware, we rely on \texttt{gpt2-xl}\footnote{\url{https://huggingface.co/gpt2-xl}}, a 1.5B parameter version of GPT-2 \cite{radford2019language}. Even though recent variants of GPT-3 have demonstrated better performance, we chose GPT-J 6B as it is publicly available. 
This is crucial, because the \textsc{LMQL}\xspace runtime requires integration with the decoding loop of a language model, which cannot be implemented with limited high-level APIs. Please see \cref{sec:implementation}, for more details on the integration of \textsc{LMQL}\xspace in the decoder logic of a language model. \paragraph{Metrics} To quantify performance, cost and usability characteristics of \textsc{LMQL}\xspace, we consider a number of metrics: \begin{itemize} \item \textbf{LOC} As a simple measure of conciseness and simplicity we provide the number of lines of code (LOC) for each implemented case study. We only count functional LOC, i.e. excluding comments, empty lines, and fixed prompt parts (e.g. few-shot samples). \item \textbf{Number of Model Queries} We count the number of times the model ${\bm{f}}$ is invoked for next-token prediction. This metric directly measures the computational cost of using a self-hosted LM, however, abstracts the computational cost of running the model itself. \item \textbf{Number of \texttt{generate()} Calls} We also count the number of times the \texttt{generate()} method is called, i.e. a new decoding process is started. This metric relates to API costs of using an LM, as each call to \texttt{generate()} may incur a cost, e.g. in terms of API requests or latency. \item \textbf{Billable Tokens} Lastly, to model closely how API-gated models are billed, we count the number of tokens per \texttt{generate()} call that is processed by the model as part of the prompt, plus the number of tokens that are generated. This metric is based on the billing mechanics of API-gated models like GPT-3. Based on Billable Tokens, we will make cost estimates, given the current token pricing of $\$0.02/1K$ tokens of the GPT-3 \texttt{davinci} model\footnote{\url{https://openai.com/api/pricing/}}. This highlights the potential savings if \textsc{LMQL}\xspace could be used in place of standard high-level APIs. 
\end{itemize} We motivate this choice of performance metrics over pure runtime by the reality of using LMs in practice. Any reduction in the number of processed tokens will directly translate to a saving in cost, both with API-based models and when running a language model locally. \paragraph{Experimental Setup} As a runtime for the language models we use HuggingFace Transformers'~\cite{wolf2020transformers} \texttt{transformers} library with \texttt{pytorch} on the backend. All experiments are run on an Nvidia A100 GPU with 40GB VRAM. For more details on the implementation of \textsc{LMQL}\xspace{}, please see \cref{sec:implementation}. \input{figures/eval_chain_of_thought.tex} \subsection{Case Study 1: Chain-of-Thought Prompting} We first consider multiple-choice question answering tasks: A language model is presented with a question $Q$ and a set of options $\mathcal{O} = \{O_1, \dots, O_n\}$. While direct prompting of a model to obtain the result as $\mathrm{argmax}_\mathcal{O}\;P(O_i|Q)$ is possible, it is often not enough to reach good levels of performance. Further, the model's reasoning may not be clear and the resulting answers can appear quite arbitrary. \emph{Chain-of-thought} prompting \cite{wei2022chain} aims to address this by preceding the actual question with few-shot samples that demonstrate how to arrive at a correct answer through a multi-step reasoning process. By priming the model in this way, it is more likely to produce a similar chain of thoughts, eventually leading up to the correct answer for a new question. For this case study we implement queries for two tasks: the general knowledge reasoning task \emph{Odd One Out} and the \emph{Date Understanding} task, both included in the recently published BIG benchmark collection \cite{srivastava2022beyond}. \input{figures/eval-study-1.tex} \paragraph{Query and Results} We implement chain-of-thought reasoning in \textsc{LMQL}\xspace as shown in \cref{lst:eval-chain-of-thought}. 
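The Billable Tokens metric above translates into a dollar estimate by simple arithmetic over the quoted rate; the following snippet illustrates the calculation underlying our cost estimates (the per-call token counts in the example are made up for illustration).

```python
# Billing arithmetic for the Billable Tokens metric: each generate() call
# is charged for its prompt tokens plus its generated tokens, at the
# quoted GPT-3 davinci rate of $0.02 per 1K tokens.
PRICE_PER_1K = 0.02

def billable_cost(calls):
    # calls: list of (prompt_tokens, generated_tokens) per generate() call
    total = sum(p + g for p, g in calls)
    return total, total / 1000 * PRICE_PER_1K

tokens, cost = billable_cost([(3000, 404)])  # illustrative counts
assert tokens == 3404 and abs(cost - 0.06808) < 1e-6
```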
The prompt clause contains two few-shot examples with reasoning steps. We provide the comma-separated list of words of the Odd One Out task as query argument \lstinline{OPTIONS} when iterating over the dataset. The first hole variable generated by the model is \lstinline{REASONING}. We constrain the \lstinline{REASONING} variable in multiple ways, including a maximum number of words and several stopping conditions. Further, we disallow the use of \texttt{"Pick"} and the newline character, to prevent the model from digressing or skipping the reasoning steps altogether. For decoding, we rely on \lstinline{argmax}, which provides us with the greedily-determined most likely answer. Lastly, we use the \lstinline{distribute} clause to compute a probability distribution over the set of possible answers in $\mathcal{O}$, i.e. $P( \cdot | \text{\texttt{"\ebnfph{p}\ebnfph{q}\ebnfph{r}"}})$, which is conditioned on the concatenation of the few-shot samples \texttt{\ebnfph{p}}, the question \texttt{\ebnfph{q}} and the generated reasoning steps \texttt{\ebnfph{r}}. Analogously to our \textsc{LMQL}\xspace query, we implement the same prompting behavior with a \texttt{generate()}-based python program. As discussed, the baseline program employs similar stopping conditions for \texttt{REASONING} but does not encode token level constraints. We evaluate both programs on Odd One Out and Date Understanding and document the results in \cref{tab:eval-chain-of-thought}. We observe the same or improved accuracy for constrained \textsc{LMQL}\xspace decoding when compared to Standard Decoding. Depending on the dataset, \textsc{LMQL}\xspace can reduce model queries and the total consumed tokens by up to $24\%$. This is a significant reduction in cost/compute, especially when considering that the \textsc{LMQL}\xspace-based constrained decoding can achieve the same or better accuracy. Lastly, we find that \textsc{LMQL}\xspace{} reduces program size down to $26\%$ ($34\%$ resp.) 
of the LOC required in our python baseline implementations, to address the two tasks. \subsection{Case Study 2: Interactive Prompting} \label{sec:eval-interactive-prompting} Chain-of-thought prompting is an effective method to improve model understanding \cite{wei2022chain}. It can be used to extract knowledge from a model or generate new insights by multi-step reasoning. However, in some cases a model may not know about the required context information and external sources have to be consulted. For instance, for question answering the prompting scheme \texttt{ReAct}~\cite{yao2022react} proposes to augment chain-of-thought-based prompting with the ability for the model to interactively query external sources such as Wikipedia. As \textsc{LMQL}\xspace supports loops, branches, and function calls in its prompt clause, it lends itself well to implementing these kinds of interactive prompting scenarios. By relying on control flow in the prompting clause of a query, we can interpret model results step-by-step and inject information from external sources as requested. \paragraph{Query} To invoke external actions like Wikipedia lookups, \texttt{ReAct} relies on designated action phrases such as \texttt{Search} and \texttt{Finish}, that the LM can produce as needed. To implement this interactive behavior in \textsc{LMQL}\xspace, we rely on a basic interpretation loop as shown in \cref{lst:eval-interactive-prompting}. The loop iterates over the model's output and interprets actions when applicable. Wikipedia lookups are implemented as calls to an external python utility. During branching and beam search with multiple hypotheses, the loop and corresponding lookup operations will automatically be issued as required during decoding. The loop terminates when the model generates a \texttt{Finish} action, storing the overall results of the query in the \texttt{SUBJECT} variable. 
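Stripped of decoding details, the interpretation loop just described follows the pattern below. This is an illustrative python rendering only: the Wikipedia lookup is stubbed out, the helper names are hypothetical, and the real query performs this interpretation incrementally during decoding rather than over a finished list of actions.

```python
# Sketch of a ReAct-style interpretation loop: scan model output for
# action phrases and inject external results. `search_wikipedia` is a
# stub; in the actual query, this loop runs during decoding.
def search_wikipedia(subject):
    return f"(summary of {subject})"  # stub external lookup

def interpret(actions):
    trace = []
    for mode, arg in actions:
        trace.append(f"{mode}: {arg}")
        if mode == "Act" and arg.startswith("Search"):
            subject = arg.split("'")[1]
            trace.append("Obs: " + search_wikipedia(subject))
        elif mode == "Act" and arg.startswith("Finish"):
            return trace, arg.split("'")[1]  # final SUBJECT result
    return trace, None

trace, result = interpret([
    ("Tho", "I need the capital of France."),
    ("Act", "Search 'France'"),
    ("Act", "Finish 'Paris'"),
])
assert result == "Paris"
assert any(t.startswith("Obs:") for t in trace)
```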
To further guide the generation process, we constrain \texttt{MODE} to be in \;\{\texttt{Tho}, \texttt{Act}\}. Further, we implement simple stopping conditions for \texttt{THOUGHT} and \texttt{SUBJECT} to prevent the model from violating the \texttt{ReAct} reasoning pattern. \input{figures/react} \newpage \paragraph{Python Baseline} As a baseline for scripted interpretation, we implement a python program that supports the same \texttt{ReAct} prompting as the query in \cref{lst:eval-interactive-prompting}. To implement \textsc{LMQL}\xspace's declarative parsing of \texttt{THOUGHT}, \texttt{SUBJECT}, and \texttt{ACTION}, we rely on built-in python functionality to parse and process the chunk-wise produced output. For this, we note that we have to resort to hand-crafted parsing logic, whereas in \textsc{LMQL}\xspace we can simply rely on declarative predicates like \texttt{STOPS\_AT} and validation conditions in the where clause of the query. We include the full source of our baseline prompting implementation in the appendix in \cref{app:scripted-prompting}. We also note that the baseline implementation can only support \texttt{sample} and \texttt{argmax} decoding. Deeper integration, e.g. with beam search, is not easily realizable in python, as the prompting program must be capable of branching into multiple execution heads in accordance with the branching of decoding. In contrast, \textsc{LMQL}\xspace supports this out-of-the-box. Lastly, in our baseline implementation, we have to invoke the model multiple times, each time generating a new chunk of output, parsing, and evaluating potential action phrases. For this, we have to choose the chunk size appropriately. We give an overview of the implications of different choices for this parameter in \cref{fig:eval-react-chunk-size}. For our comparison with \textsc{LMQL}\xspace{}, we choose standard decoding with a chunk size of 30, which minimizes the number of billable tokens, while not issuing exceedingly many model queries. 
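The chunk-wise generate-and-validate pattern that such baselines follow can be sketched as below; \lstinline|generate| is a stub standing in for an API such as HuggingFace's \lstinline|generate()|, and the stopping predicate is a placeholder for the hand-crafted parsing logic.

```python
# Sketch of the chunk-wise baseline pattern: repeatedly extend the prompt
# by a fixed-size chunk, then parse and validate the new text manually.
# `generate` is a stub; a real implementation would call the model here.
def generate(prompt, max_new_tokens):
    return prompt + " step" * max_new_tokens  # stub continuation

def chunkwise_decode(prompt, is_done, chunk_size=30, max_chunks=10):
    calls = 0
    for _ in range(max_chunks):
        new_prompt = generate(prompt, max_new_tokens=chunk_size)
        calls += 1
        new_text = new_prompt[len(prompt):]
        prompt = new_prompt
        if is_done(new_text):  # manual, post-hoc validation per chunk
            break
    return prompt, calls

text, n_calls = chunkwise_decode("Q: why?", is_done=lambda t: "step" in t)
assert n_calls == 1  # stops after the first chunk here
```

The chunk size trades off the two metrics directly: smaller chunks waste fewer tokens past a stopping phrase but issue more calls, which is the tradeoff plotted in \cref{fig:eval-react-chunk-size}.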
\paragraph{Results} To assess \textsc{LMQL}\xspace performance benefits with interactive prompting workloads, we apply our \texttt{ReAct} implementations to a question answering task from the HotpotQA \cite{yang2018hotpotqa} dataset (see \cref{app:scripted-prompting} for further details). We observe a significant reduction of \lstinline|generate()| calls of up to 80\% when using \textsc{LMQL}\xspace{} over standard decoding. This can be attributed to \textsc{LMQL}\xspace{}'s ability to decode the whole sequence in one run, validating on-the-fly. Standard Decoding on the other hand has to decode the whole sequence in chunks, invoking \lstinline|generate()| at least as many times as interactions are required. Regarding the total number of model queries, we observe a reduction of at least $30\%$. For Billable Tokens, we observe an even stronger effect, where \textsc{LMQL}\xspace saves up to $76\%$ of the tokens, leading to a significant saving in costs, i.e. $76\%$ fewer tokens or 5.2\textcent{} per query for GPT-3 \lstinline{davinci}. Considering program size last, we implement \texttt{ReAct} in just $22$ LOC of \textsc{LMQL}\xspace{}, which is $63\%$ fewer lines than in our python-based implementation. \begin{figure} \includegraphics[width=0.9\textwidth]{figures/python-react.pdf} \caption{Comparing different chunk sizes used for the baseline implementation as compared to \textsc{LMQL}\xspace{}, which does not require chunk-wise decoding. All results were measured for interactive \texttt{ReAct} prompting.} \label{fig:eval-react-chunk-size} \end{figure} \subsection{Case Study 3: Arithmetic Reasoning} \label{sec:eval-arithmetic} Lastly, we consider the task of arithmetic reasoning. Existing work shows that LMs can struggle with evaluating arithmetic expressions correctly \cite{wei2022chain}. While reasoning steps might be correct, mistakes in the concrete arithmetic calculations will lead to an incorrect result \cite{wei2022chain,cobbe2021training}. 
This is exacerbated by the open-ended nature of math problems, where the result is not picked from a limited set of options, but can be any valid number. Recent works \cite{wei2022chain, cobbe2021training,andor2019giving} therefore propose to augment LM generation with the ability to externally evaluate arithmetic expressions on-the-fly. \begin{table} \footnotesize \caption{\textsc{LMQL}\xspace constrained decoding compared to Standard Decoding in an interactive prompting scenario. In both experiments, we decode according to the prompting scheme implemented by the query in \cref{lst:eval-interactive-prompting}. For chunk-wise standard decoding, we further document the implications of different choices for the chunk size.} \begin{tabular}{lrrrr} \footnotesize & \textbf{Standard Decoding} & \textbf{\textsc{LMQL}\xspace (constrained)} & \textbf{$\Delta$} & \textbf{Cost Savings} \\ \toprule \textit{\texttt{ReAct} (Case Study 2)}\\ \texttt{generate()} calls & 5 & \textbf{1} & -80\% & \\ Model Queries & 150 & \textbf{95} & -36.67\% & \\ Billable Tokens & 3,404 & \textbf{807} & -76.29\% & 5.2\textcent\raise-0.1ex\hbox{\tiny/query}\\ LOC & 59 & \textbf{22} & -62.71\% & \\ \midrule \textit{Arithmetic Evaluation (Case Study 3)}\\ \texttt{generate()} calls & 7 & \textbf{1} & -85.71\% & \\ Model Queries & 210 & \textbf{73} & -65.24\% & \\ Billable Tokens & 3,649 & \textbf{541} & -85.17\% & 6.2\textcent\raise-0.1ex\hbox{\tiny/query}\\ LOC & 78 & \textbf{18} & -76.92\% & \\ \bottomrule \end{tabular} \label{tab:eval-react-comparison} \end{table} \input{figures/eval_arithmetics.tex} \paragraph{Query} In \cref{fig:eval_arithmetics:query} we demonstrate how to implement such an arithmetic evaluator in \textsc{LMQL}\xspace, relying on scripted prompting and constraints. The query decodes reasoning and calculations steps from the model, scanning for occurrences of \lstinline{"<<"}. Once it encounters such a sequence, it queries the model for the to-be-evaluated expression (e.g. 
\lstinline{1+2=?}), evaluates it using an external utility function, and passes back the result. This generation process is repeated until the model produces the stopping phrase \lstinline{"So the answer is"}. Once the loop exits, the query parses the result, constraining the remaining tokens to form a valid integer, using the built-in function \lstinline|INT|. For few-shot samples, we rely on the ones chosen in \cite{wei2022chain}. \paragraph{Results} We applied our query, as well as a baseline program, to an arithmetic reasoning problem from the GSM8K dataset \cite{cobbe2021training}. As shown by the interaction trace in \cref{fig:eval_arithmetics:trace}, our \textsc{LMQL}\xspace{} query detects and processes arithmetic expressions as they occur in the model's output, leading up to the final answer. The necessary query logic is comparatively basic, only requiring some text processing and a simple interpretation loop. Finally, by asserting an \lstinline|INT| constraint on \lstinline|RESULT|, we can enforce the model's final output to always be a valid integer. While the concrete model in use (GPT-J 6B) is not able to solve the problem correctly, the example still demonstrates that \textsc{LMQL}\xspace{} can be used to implement on-the-fly arithmetic evaluation, aiding the model in solving the task. Collecting query statistics, we compare the two implementations in \cref{tab:eval-react-comparison}. For the baseline implementation (standard decoding), the number of \lstinline|generate()| calls is determined by the number of arithmetic expressions in the model's output. For \textsc{LMQL}\xspace{}, this has no impact, as arithmetic expressions can be evaluated on-the-fly. Overall, this means that \textsc{LMQL}\xspace{} only requires one \lstinline|generate| call, where the standard approach requires $7$. 
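To make the mechanism concrete, the core of this on-the-fly evaluation, detecting a marked expression and splicing in its value, can be sketched as follows. The exact marker format and the restricted evaluator are illustrative assumptions; the query in \cref{fig:eval_arithmetics:query} implements the corresponding logic incrementally during decoding.

```python
import re

# Sketch of on-the-fly arithmetic evaluation: scan generated text for
# "<<expr=?>>" markers and splice in the evaluated result. The marker
# format and the evaluator are illustrative, not the query's exact logic.
def eval_expr(expr):
    # Restricted evaluator for +, -, *, / over numbers only.
    return int(eval(expr, {"__builtins__": {}}, {}))

def evaluate_arithmetic(text):
    def repl(match):
        expr = match.group(1)
        return f"<<{expr}={eval_expr(expr)}>>"
    return re.sub(r"<<([0-9+\-*/ ().]+)=\?>>", repl, text)

out = evaluate_arithmetic("She has <<3+4=?>> apples.")
assert out == "She has <<3+4=7>> apples."
```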
Lastly, we implement arithmetic evaluation in just $18$ LOC of \textsc{LMQL}\xspace{}, compared to $78$ LOC required for our python-based implementation. \subsection{Discussion} Summarizing, our three case studies show that: i) \textsc{LMQL}\xspace allows great expressiveness, i.e. several approaches from current state-of-the-art methods can be directly encoded in a straightforward scripting style, requiring far fewer lines of code than corresponding python-based implementations; ii) \textsc{LMQL}\xspace drastically reduces the number of model queries, thereby improving both efficiency and run time. This is enabled by \textsc{LMQL}\xspace{}'s support for token level validation, which enables us to enforce constraints on-the-fly rather than with chunk-wise decoding and backtracking. And iii) \textsc{LMQL}\xspace does not impact the accuracy achieved by the model. In fact, in some cases, the enforced constraints even yield improvements in accuracy. In addition to all this, we have shown that when used in the context of paid, API-gated models, \textsc{LMQL}\xspace would enable significant monetary savings, given the reduction in billable tokens that we observe. \section{Validation and Constraint Decoding} \label{sec:constraint_decoding} In this section we show how our decoding procedure can be extended to handle validation and constrained decoding. In particular, we discuss how the constraints from the \lstinline{where} clause can be used to automatically and efficiently find decoding masks for each step of decoding. Our main contribution to this end is a purpose-designed, eager execution model that supports partial evaluation and lookahead. To motivate this, we first discuss a naive solution and then introduce the idea of \textit{final semantics} and \textsc{FollowMap}{}s, the two abstractions at the core of our evaluation model. 
\newpage \paragraph{Naive Approach} \begin{wrapfigure}[17]{r}{0.54\textwidth} \begin{minipage}{0.54\textwidth} \begin{algorithm}[H] \SetAlgoLined \LinesNumbered \DontPrintSemicolon \KwIn{trace $u$, scope $\ensuremath{\sigma}$, language model $f$} \KwOut{decoded sequence $v$} \SetKwRepeat{Do}{do}{while} \SetKwProg{Fn}{Function}{}{} \Fn{decode\_step($f$, $u$, $v$)}{ ${\bm{z}} \leftarrow \ensuremath{\mathrm{softmax}}({\bm{f}}(uv))$ \; ${\bm{m}} \leftarrow \mathbf{1}^{|\ensuremath{\mathcal{V}}|}$ \; \Do{$\bigvee_i m_i = 1$}{ $t \leftarrow \text{pick}( \sfrac{1}{Z} \cdot {\bm{m}} \odot {\bm{z}})$ \; \lIf{$t \neq \textsc{eos}\xspace$}{decode\_step($f$, $u$, $vt$)} \lElseIf{$t = \textsc{eos}\xspace \land \text{check}(u, vt)$}{\KwRet $v$} \lElse { ${\bm{m}}[t] \leftarrow 0$ } } } decode\_step($f$, $u$, \ensuremath{\epsilon}) \; \caption{Naive Decoding with Constraints} \label{alg:naivedecoding} \end{algorithm} \end{minipage} \end{wrapfigure} We first consider a naive approach to constrained decoding, outlined in \cref{alg:naivedecoding}. Here, similar to \cref{alg:decoding}, we start with an empty string $v$ and append tokens. However, we don't assume a function compute\_mask and thus apply a backtracking-based approach, where we generate sequences up to the \textsc{eos}\xspace token and then check if $uv$ satisfies our constraints. Checking the constraints, denoted as $check$, is easy as it just amounts to the evaluation of an expression. Note that here we assume that $uv$ is sufficient to check the constraints, at least up to the hole corresponding to $v$. If this is not possible, we would need to perform the generation for the sequence of all holes, advancing to the next one once \textsc{eos}\xspace is produced, but potentially backtracking over all of them if validation fails at some point later on.
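The backtracking procedure of the naive algorithm can be sketched in Python over a toy model. The \texttt{scores} and \texttt{check} functions below are hypothetical stand-ins for $\ensuremath{\mathrm{softmax}}({\bm{f}}(uv))$ and the constraint check; picking is made deterministic (most likely unmasked token first) for clarity.

```python
def decode_naive(scores, check, u, v=(), eos="<eos>"):
    """Backtracking sketch of the naive algorithm: extend v token by
    token, and on <eos> accept only if check(u, v) holds; otherwise
    mask the picked token and retry, backtracking when all are masked."""
    z = scores(u, v)            # toy stand-in for softmax(f(uv))
    masked = set()              # tokens ruled out at this step (the mask m)
    while len(masked) < len(z):
        # deterministically pick the most likely unmasked token
        t = max((tok for tok in z if tok not in masked), key=z.get)
        if t != eos:
            r = decode_naive(scores, check, u, v + (t,), eos)
            if r is not None:
                return r
        elif check(u, v):       # eos reached: validate the full sequence
            return v
        masked.add(t)           # dismiss t and try the next candidate
    return None                 # every token failed here: backtrack

# toy model: after two tokens, only <eos> remains possible
def scores(u, v):
    return {"<eos>": 1.0} if len(v) >= 2 else {"a": 0.5, "b": 0.3, "<eos>": 0.2}

# constraint: the decoded sequence must contain "b"
result = decode_naive(scores, lambda u, v: "b" in v, "prompt")
```

Note how the greedy choice \texttt{"a"} is repeatedly explored and dismissed before the valid sequence is found, previewing the cost argument made next.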
This strategy leads to multiple problems: First, navigating the search space of sequences using backtracking is computationally expensive, especially when considering that the search space of LMs (even when trained well) is still combinatorially large due to the many likely continuations of any given sequence. Second, querying the LM can be very expensive. State-of-the-art models often require high-end GPUs or are only available as API-gated, paid services. Thus, every token that is generated and later dismissed incurs a significant computational or financial cost. With this in mind, we implement eager, partial evaluation semantics that model not only whether or not an expression holds, but also whether the expression can be guaranteed to never hold for any possible continuation of the currently-generated sequence. This allows us to terminate early if validation already provides a definitive result. Further, our semantics enable us to automatically compute a subset of next tokens that are guaranteed to violate the expression. Using this token set, we can effectively prune the search space of an LM and avoid the costly generation of invalid sequences altogether. \subsection{Partial Evaluation} \input{figures/final} Given some expression $e$ occurring in the \lstinline|where| condition, some interaction trace $u$ and some global scope \ensuremath{\sigma}, we define the evaluation semantics of $\eval{e}$ on multiple levels: \paragraph{Value Semantics} First, we interpret $e$ on a value level, meaning we define $\eval{e}$ as the value of evaluating $e$ as a python expression, given the variable values assigned in $\sigma$. \paragraph{Final Semantics} In addition to value semantics, we define so-called \emph{final semantics} as a function $\text{\scshape{Final}}[e; \ensuremath{\sigma}]$. The function $\text{\scshape{Final}}$ annotates each computed value with one of the annotators $\mathcal{A} = \{ \fin, \var, \inc, \dec \}$.
Depending on the annotator, the value of an expression $e$, as decoding progresses, is either considered $\fin$ (it will retain a fixed value), $\var$ (its value may still change), $\inc$ (its value will monotonically increase) or $\dec$ (its value will monotonically decrease). For the latter two, we consider monotonicity both in a numerical sense and in a set theoretic sense (e.g. growing sets, append-only strings). Based on this, $\text{\scshape{Final}}$ can be computed by applying it recursively to the intermediate results of a top-level expression $e$, as defined by the rules in \cref{tab:final}. \paragraph{Notation} In the following, we use the short-hand notation $\text{\scshape{Final}}[e]$ instead of $\text{\scshape{Final}}[e; \ensuremath{\sigma}]$, as we assume that the scope is always the global scope. Further, we will sometimes refer to value and final semantics jointly, i.e. we will denote the value of an expression $e$ as $\eval{e} = v$ and $\text{\scshape{Final}}[e] = \fin$, simply as $\eval{e}^F = \fin(v)$. For boolean expressions we let $\top$ denote \lstinline|True| and $\bot$ \lstinline|False|. \paragraph{Application} Using $\text{\scshape{Final}}$, we can evaluate \lstinline|where| constraints, even on outputs that are only partially available, i.e. a currently generating sequence. For this, we evaluate all (sub-)expressions, as far as possible. For expressions that depend on future hole values, we set their result to \texttt{None} and define all other operators to be tolerant of that. For instance, given some validation constraint $a \wedge b$, where $b$ cannot be determined yet, we can evaluate $a$ and return \texttt{False} if $a$ evaluates to $\fin(\bot)$. This is possible, as $\fin$ indicates that no matter the value of $b$, $a$ will always evaluate to $\bot$, even as more tokens of the generated sequence are revealed.
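A minimal sketch of this partial evaluation, restricted to the conjunction case and the annotators $\fin$ and $\var$ (a simplified subset of the rules in \cref{tab:final}, not the full rule set):

```python
FIN, VAR = "fin", "var"   # simplified subset of the annotators {fin, var, inc, dec}

def f_and(a, b):
    """Final semantics of a conjunction over partially evaluated
    operands; each operand is (annotator, value), where value may be
    None if it cannot be determined yet."""
    (fa, va), (fb, vb) = a, b
    # one side definitively False -> the conjunction is definitively False
    if (fa == FIN and va is False) or (fb == FIN and vb is False):
        return (FIN, False)
    # both sides definitively True -> definitively True
    if fa == FIN and va is True and fb == FIN and vb is True:
        return (FIN, True)
    return (VAR, None)    # value may still change as decoding progresses

# b cannot be determined yet, but a is fin(False): eager short-circuit
partial = f_and((FIN, False), (VAR, None))
```

The short-circuit in the first branch is exactly what allows validation to return a definitive \texttt{False} while part of the sequence is still undecoded.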
\paragraph{Eager Validation} Final semantics provide an abstraction that enables us to implement more aggressive short-circuiting over validation conditions. These can be executed on each new token rather than waiting for the entire sequence to be generated. Using this, validation can be applied more eagerly, detecting invalid sequences before they are completed. However, final semantics do not help us to mask any next tokens in the decoding function. To enable this, we additionally introduce a third level of evaluation semantics, which we call \emph{follow semantics}, discussed next. \input{figures/followmap} \subsection{Generating Token Masks using \textsc{FollowMap}{}s} Provided that we can now evaluate \lstinline|where| conditions eagerly on every new token, the task that remains is to construct a token mask that allows us to soundly identify tokens that are guaranteed to violate the condition when chosen next by the $decode$ function. To this end, we introduce a novel abstraction called \textsc{FollowMap}{}s. \paragraph{Follow Maps} A follow map is a function $\textsc{FollowMap}{}(u,t)$ that takes a partial interaction trace $u$ and a token $t$ as input, and approximates the future value of some expression during validation, given that $ut$ is validated next. We implement \textsc{FollowMap}{}s for all supported operators in \textsc{LMQL}\xspace, and show a subset of the rules in \cref{tab:follow}. As shown, per operation, only a few rules are required. Note that a \textsc{FollowMap}{} always also produces a final annotator, but we only show them if the standard rules from \cref{tab:final} do not apply. Based on this, we define a recursive $\text{\textbf{\text{\scshape{Follow}}}}[\text{\ebnfph{expr}}](u,t)$ operator that automatically constructs the \textsc{FollowMap}{} for a provided expression, considering the definitions in \cref{tab:follow} as its base cases.
This is implemented by recursively applying case-wise composition to the follow maps of the respective sub-expressions. Using \text{\scshape{Follow}}, we obtain an all-encompassing follow map for the entire validation expression. By inspecting the sub-cases of the resulting \textsc{FollowMap}{}, we then identify tokens that are guaranteed to violate the expression, which allows us to generate a decoding mask. \paragraph{Example} Assume that we have the constraint \lstinline|TEXT in ["Stephen Hawking"]| and that we are currently decoding hole variable \lstinline|TEXT|. So far it has been assigned the value \texttt{"Steph"}. Using the rules in \cref{tab:follow}, we can construct a \textsc{FollowMap}{}: $$ \text{\scshape{Follow}}[\text{\lstinline|TEXT in ["Stephen Hawking"]|}](\text{\texttt{"Steph"}},t) = \begin{cases} \fin(\top) & \text{if } t = \text{\texttt{"en Hawking"}} \\ \fin(\bot) & \text{else} \end{cases} $$ The \textsc{FollowMap}{} returns $\fin(\top)$ if the following sequence matches \texttt{"en Hawking"} and $\fin(\bot)$ otherwise. During decoding, this can be translated into a token mask, as we know that tokens other than prefixes of \texttt{"en Hawking"} will definitively ($\fin$) violate our constraint. To enforce this, we derive a mask vector ${\bm{m}}$ that only allows the first token of \texttt{"en Hawking"} to be generated next. \newpage \paragraph{Soundness} While a perfect next-token validator is desirable, this can be hard to achieve, especially with constraints that rely on forward references. For this reason, we do not require $\text{\scshape{Follow}}$ to return \textsc{FollowMap}{}s that mask out all tokens that will violate our constraints (i.e. \textit{completeness}). Instead, we focus on \textit{sound} approximation: Given some boolean \lstinline|where| condition $e$ and the currently decoded hole variable $v$ (cf.
\cref{alg:eval-string}), we consider the $\text{\scshape{Follow}}$ operator to be sound if and only if: \begin{equation} \forall t \in \mathcal{V} \bullet (\text{\scshape{Follow}}[{e}])(u,t) = \fin(\bot) \Rightarrow \eval[v \leftarrow ut]{e} = \fin(\bot) \label{eq:soundness-constraint} \end{equation} In other words, if the returned \textsc{FollowMap}{} indicates that the next token $t$ is guaranteed to violate the condition $e$, then the condition $e$ must evaluate to $\fin(\bot)$ when $t$ is picked in the next decoding step. While this potentially over-approximates the set of valid tokens, it guarantees that we will never mask out any tokens that may actually be valid. Note also how we rely on final semantics, i.e. $\fin(\bot)$, to express that a token will lead to a definitive violation of our constraints, and not just a temporary one during generation. \paragraph{Brzozowski derivatives} To provide another perspective on \textsc{FollowMap}{} soundness, consider Brzozowski derivatives \cite{brzozowski1964derivatives}: For a language $S \subseteq \Sigma^*$, i.e. a set of strings over the alphabet $\Sigma$, and prefix $u \in \Sigma^*$ the Brzozowski derivative $u^{-1}S = \{ v \in \Sigma^* \mid uv \in S \}$ denotes the set of postfixes such that the concatenation $uv \in S$. In our case we are interested in the possible sequences over the token vocabulary $\ensuremath{\mathcal{V}}^*$. In particular, given some query $\mathcal{Q}$, we are interested in the subset $L_\mathcal{Q} \subseteq \ensuremath{\mathcal{V}}^*$, which we do not necessarily have in closed form, that contains all interaction traces that fulfill the constraints specified in $\text{\texttt{where}}_\mathcal{Q}$. If during an execution of $\mathcal{Q}$ we have a partial interaction trace $u$, then $u^{-1}L_\mathcal{Q}$ denotes all possible legal postfixes completing this interaction trace.
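For a finite language, the derivative and the token mask it induces can be computed directly. The sketch below uses character-level tokens purely for illustration; the helper names are ours, not part of any formal apparatus in the paper:

```python
def brzozowski(S, u):
    """Brzozowski derivative u^{-1} S of a finite language S (a set of
    token tuples) with respect to the prefix u."""
    k = len(u)
    return {v[k:] for v in S if v[:k] == u}

def admissible(S, u, vocab):
    """Tokens t for which (ut)^{-1} S is non-empty, i.e. some legal
    continuation in S still exists after decoding t next."""
    return {t for t in vocab if brzozowski(S, u + (t,))}

# character-level toy vocabulary; the constraint language contains one string
S = {tuple("Stephen Hawking")}
postfixes = brzozowski(S, tuple("Steph"))        # legal completions of "Steph"
mask = admissible(S, tuple("Steph"), set("aeknpqz"))
```

With the prefix \texttt{"Steph"}, only the token \texttt{"e"} admits a legal continuation, matching the intuition of the Stephen Hawking example above.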
Using this, we define the \emph{set of Brzozowski-admissible tokens} $T_\mathcal{Q} = \{t \in \ensuremath{\mathcal{V}} \mid (ut)^{-1}L_\mathcal{Q} \neq \emptyset \}$, which can be decoded in the next step such that legal continuations in $L_\mathcal{Q}$ exist, i.e. $T_\mathcal{Q}$ describes the set of legal tokens for the next decoding step, thus forming a decoding mask $M$. Based on these definitions, the \textsc{FollowMap}{} and the \text{\scshape{Follow}}{} operator satisfy the following property with proof in \cref{sec:proof-brzozowski}: \begin{theorem} \textit{(Brzozowski Soundness)} Given a query $\mathcal{Q}$, partial interaction trace $u$, and the corresponding set of allowed tokens $M := \{t \in \mathcal{V} \;|\; \text{\scshape{Follow}}[\text{\texttt{where}}_\mathcal{Q}](u, t) \neq \fin(\bot)\}$, it holds that $T_\mathcal{Q} \subseteq M$, where $T_\mathcal{Q}$ is the set of Brzozowski-admissible tokens. \label{thm:brzozowski} \end{theorem} This result is in line with \cref{eq:soundness-constraint}, and implies that \textsc{FollowMap}{}s will always allow, i.e. not mask out, any tokens that could still yield a legal decoding. \section{Introduction} \label{sec:intro} Large Language Models (Large LMs, or LLMs) \citep{VaswaniSPUJGKP17,DevlinCLT19,radford2019language,BrownMRSKDNSSAA20} have proven successful at various language-based tasks such as machine translation, text summarization, question answering, reasoning, code generation from text and many more. Due to these results, LMs have become popular beyond the machine learning community and are slowly being integrated into many applications. \paragraph{(Large) Language Models} Internally, language models operate on tokens, which are different from how humans perceive language. Given the tokenized version of some input, called the \emph{prompt}, a large language model predicts the next token. That is, over a large vocabulary of tokens it assigns each a score or probability.
A \emph{decoding} procedure is then used, which, by invoking the LM multiple times, computes a completion of the prompt. Commonly, the goal is to determine (or approximate) the highest-probability continuation; however, since producing a particular token might lower the probability before a subsequent token increases it again, the decoding procedure can involve expensive search or backtracking strategies. Nonetheless, LM-based text completion remains powerful and can be leveraged for a wide range of downstream applications as listed above. \paragraph{Key Challenges in Using Language Models} While the newer generation of language models can be prompted with examples or instructions in a conceptually simple manner, making the best use of these models and keeping up as new models are released requires a deep understanding of their internals, as well as the use of vendor-specific libraries and implementations. For example, as LMs operate on tokens, it can be hard to constrain the decoding procedure to a set of legal words or phrases. Further, many prompting techniques require either back-and-forth interaction between the LM and the user (e.g. chatbots like ChatGPT \cite{openaiChatGPTOptimizing}) or very task-specific interfaces (e.g. to perform arithmetic calculations with external control logic). To implement such prompts, a lot of manual work and interaction with a model's decoding procedure is required, which restricts the generality of the resulting implementations. Lastly, as an LM only produces one (sub-word) token at a time, completing a sequence may require many calls. Also, decoding becomes increasingly expensive as the prefix, the prompt, and the so-far generated response grow. Because of these factors, and as language models are typically very large neural networks, practical inference incurs high computational cost and significant latency. In the case of pay-to-use APIs, such as OpenAI's well-known GPT-3, this results in high usage costs per query answered.
\input{figures/example_queries} \paragraph{This work: Language Model Programming via \textsc{LMQL}\xspace} In this work, we propose the idea of language model programming, extending natural language prompting by additionally allowing lightweight scripting and constraining of outputs. This facilitates a front-end/back-end separation for LM prompting, i.e. it allows a user to specify complex interactions, control flow, and constraints without requiring knowledge of an LM's internals such as tokenization, implementation, and architecture. Further, the constructed programs remain agnostic concerning the underlying LM, greatly improving portability. Overall, Language Model Programming (LMP) retains the simple natural-language-driven interface to LMs but additionally enables precise constraining, scripting, and efficient decoding, which as of now is not possible with existing high-level APIs. To enable LMP, we present a novel language and runtime called the Language Model Query Language (\textsc{LMQL}\xspace). \textsc{LMQL}\xspace is a high-level language with declarative SQL-like elements and an imperative syntax for scripting. The underlying runtime is compatible with existing LMs and can be supported easily, requiring only a simple change in the decoder logic. \textsc{LMQL}\xspace can be used to express a wide variety of existing prompting methods \cite{ReynoldsM21, wei2022chain,cobbe2021training,yao2022react, scholak2021picard, shin2021constrained} using simple, concise, and vendor-agnostic code. Further, purpose-designed evaluation semantics with support for partial evaluation and lookahead enable us to optimize query execution end-to-end: \textsc{LMQL}\xspace{} leverages user constraints and scripted prompts to prune the search space of an LM by masking, resulting in an up to 80\% reduction of inference cost. We showcase two examples of simple \textsc{LMQL}\xspace{} programs in \cref{fig:query-example}.
\paragraph{Main Contributions} Our core contributions are: \begin{itemize} \item We introduce the novel paradigm of language model programming, formulating and addressing several challenges that arise with recent LM prompting techniques (\cref{sec:overview}). \item \textsc{LMQL}\xspace, an efficient, high-level query language for LMs with support for scripted prompting and output constraining (\cref{sec:query,sec:language}). \item A formal model of eager, partial evaluation semantics based on so-called \emph{final and follow} abstractions. Using these, we can automatically generate model-specific token masks for LM decoding, given just a set of high-level constraints (\cref{sec:constraint_decoding}). \item A comprehensive evaluation of \textsc{LMQL}\xspace that shows how to express a wide range of common and advanced prompting techniques as simple and concise \textsc{LMQL}\xspace programs, and that the resulting programs enable more efficient decoding by reducing inference cost and latency by 13--80\% while allowing for more accurate decoding (\cref{sec:evaluation}). \end{itemize} \section{The LMQL Language} \label{sec:language} Here we provide a high-level explanation of the syntax of \textsc{LMQL}\xspace, before discussing the runtime and language semantics next. For concrete examples, consider the \textsc{LMQL}\xspace programs given in \cref{fig:query-example}. The grammar of \textsc{LMQL}\xspace is shown in \cref{fig:syntax}. An \textsc{LMQL}\xspace program has five parts: the decoder, the actual query, the \lstinline|from| clause specifying the queried model, the \lstinline|where| clause specifying constraints, and lastly a \lstinline|distribution| instruction. The decoder and model are both specified by strings, while query and constraints are given in python syntax. We now explain these components in detail: The \ebnfph{query} block models the interaction with the model.
Informally it can be thought of as the body of a python function subject to some restrictions and additions: i) We do not allow the declaration of inner functions (however, imports can be made), and ii) Each top-level string is treated as a direct query to an LM. These query strings allow for two specially escaped subfields, similar to python f-strings\footnote{https://peps.python.org/pep-0498}: 1) \lstinline|"{varname}"| recalls the value of a variable from the current scope, and 2) \lstinline|"[varname]"| represents a phrase that will be generated by the LM, also called \emph{hole}. When the language model generates values for these holes, they will be subject to the constraints defined in the \lstinline|where| clause of the query. Under these constraints, the decoding procedure specified by \ebnfph{decoder} (discussed next) will be used. Once decoding finishes, a corresponding variable will be created in the scope of the query program and assigned this value. If a variable with the same name already exists, it will be overwritten. \ebnfph{decoder} denotes the decoding procedure employed by the \textsc{LMQL}\xspace runtime when solving the query. The presented version of \textsc{LMQL}\xspace enables \lstinline{argmax}, \lstinline{sample} and \lstinline{beam}. \lstinline{argmax} and \lstinline{sample} work as discussed in \cref{sec:background}. \lstinline{beam}, however, denotes a novel procedure called \emph{scripted beam search} which performs beam search jointly over all holes and control flow. We discuss this further in \cref{sec:query}. Once completed, the result of a query program is comprised of a number of things: It contains the \emph{interaction trace}, that is, the whole text transcript of the \textsc{LMQL}\xspace query with the answers of the LM substituted into the holes. Further, the set of all hole variables is accessible, allowing clients to directly access specific parts of the LM response.
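The two escape forms can be distinguished with simple string processing. The following is a sketch of such a parser (a hypothetical helper, not \textsc{LMQL}\xspace{}'s actual implementation), ignoring nesting and escaping:

```python
import re

# matches {varname} (recall) or [varname] (hole)
FIELD = re.compile(r"\{(\w+)\}|\[(\w+)\]")

def parse_query_string(s):
    """Split a top-level query string into literal text, recall fields
    {var} and hole fields [var]."""
    parts, pos = [], 0
    for m in FIELD.finditer(s):
        if m.start() > pos:                        # literal text before the field
            parts.append(("text", s[pos:m.start()]))
        if m.group(1) is not None:
            parts.append(("recall", m.group(1)))   # "{varname}": read from scope
        else:
            parts.append(("hole", m.group(2)))     # "[varname]": generated by the LM
        pos = m.end()
    if pos < len(s):
        parts.append(("text", s[pos:]))
    return parts

parts = parse_query_string("Q: {question} A:[ANSWER]")
```

A runtime would then emit the literal and recalled segments as prompt text and invoke the decoder for each hole segment.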
In case of \lstinline{sample} and \lstinline{beam}, the parameter $n$ specifies the number of samples or beams respectively. In this case, $n$ interaction traces with the respective variables will be returned. Note that we omit a detail in favor of readability: In practice, we allow further parameters to the decoder to be specified, e.g. the temperature $\tau$. To illustrate queries and decoding, consider \cref{fig:query-example:joke} which utilizes a query purely made from strings, and \cref{fig:query-example:list} which utilizes a combination of strings and control flow. A corresponding interaction trace is shown in \cref{fig:interaction-trace}. Note how in the program on the right, \lstinline|THING| is reassigned on each iteration of the loop, which is in line with the semantics of python. \input{figures/example_queries_traces} \lstinline|from |\ebnfph{model} denotes which LM to use. In our implementation \ebnfph{model} denotes a string identifying a text generation model from the popular Hugging Face Model repository\footnote{https://huggingface.co/models}. However, this could easily be extended to a local repository, or even hosted, API-gated models like GPT-3 \cite{BrownMRSKDNSSAA20}. \lstinline|where |\ebnfph{condition} places constraints on the \lstinline|[varname]| hole variables, thereby constraining the language model in what it can generate. Constraints can be an arbitrary conjunction or disjunction of \ebnfph{cond\_expr} which allow comparison ($<$, $>$, $=$) and membership (\lstinline{in}) checks between standard python expressions. Note that, as hole variables are added to the scope of the query program, they can also be referenced there. We allow any deterministic pure python function along with constants. We distinguish, for reasons discussed in \cref{sec:constraint_decoding}, built-in functions (discussed next) and user-defined functions, which also includes standard python built-ins.
If we invoke the LM multiple times for the same variable, like for the \texttt{THING} variable in \cref{fig:query-example:list}, the constraints apply to all intermediate values. Lastly, \lstinline|distribute |{}\ebnfph{var}{}\lstinline| in |{}\ebnfph{python\_expression} is an optional instruction that can be added to augment the returned result. Here, \ebnfph{var} \emph{must} refer to the last hole in the query and the python expression to a set (or other iterable). We will refer to this set as the support. \input{figures/example_distribute} For queries with a \lstinline|distribution| clause, the interaction trace is only decoded up to (but not including) the last hole, according to the specified decoding method. The last variable is then not decoded; instead, in addition to the holes decoded so far and the interaction trace, the query returns the probability distribution over the support. That is, for every value in the support, the likelihood of that value as the final output is evaluated. \cref{fig:interaction-trace-distribute} shows this for the example from \cref{fig:query-example:list}. In this case the interaction trace up to the brace is produced, as well as the distribution over the possible values after. This is particularly useful to encode classification tasks such as sentiment analysis, where the downstream user is interested in the probability distribution over e.g. $\{$\texttt{POSITIVE}, \texttt{NEGATIVE}$\}$. \input{figures/builtins} \subsection{Built-in Functions} In the \lstinline|where| clause, we support a set of built-in functions in addition to standard python code. For instance, we implement the functions \lstinline{words}, \lstinline{sentences} that, given a string or token representation, convert it to the desired representation. To enable users to explicitly define stopping criteria, we also provide \lstinline{stops_at}, which can be used to provide constraints within the \lstinline{where} clause.
\lstinline{stops\_at(}{}\ebnfph{var}{}\lstinline{, }\ebnfph{str}{}\lstinline{)} expresses that, when the variable \ebnfph{var} is decoded, decoding should stop as soon as the specified phrase is encountered. For similar purposes we provide \lstinline{len} (not shown), which overloads its default python counterpart with the comparable functionality -- it returns the length of a string (or iterable). For these designated, built-in functions, we implement additional semantics, required for the efficient output validation and the generation of decoding masks, as discussed in \cref{sec:constraint_decoding}. \section{Overview: Language Model Programming\xspace} \label{sec:overview} In this section we first review how modern language models (LMs) are utilized and the challenges that arise from this. Then, based on examples, we show how Language Model Programming\xspace (\textsc{LMP}\xspace) can overcome or simplify these challenges and outline the rest of the paper. While our goal with \textsc{LMP}\xspace is to improve the usage of state-of-the-art large language models (LLMs), e.g. GPT \cite{radford2019language} variants, the size of the model does not change how \textsc{LMP}\xspace is employed; we thus use the acronym LM rather than the more common LLM in the remainder of this text. \input{figures/example_tokenization} \subsection{Background: (Large) Language Models} \label{sec:background} Current language models \citep{VaswaniSPUJGKP17,radford2019language,BrownMRSKDNSSAA20} operate on a vocabulary \ensuremath{\mathcal{V}} of (sub-word) tokens. \cref{fig:tokenization} shows this for a simple example, where we see that common words have their own token (even with a space in front), while more rare words are split into multiple tokens. Similar to formal languages, we let $\ensuremath{\mathcal{V}}^*$ denote all possible sequences of tokens over \ensuremath{\mathcal{V}}.
Given an input sequence of words ${\bm{w}}_1, \dots, {\bm{w}}_t$, a tokenizer first maps the sequence of words to a sequence of tokens ${\bm{t}}_1, \dots, {\bm{t}}_k$ and then a language model ${\bm{f}}: \ensuremath{\mathcal{V}}^k \to \mathbb{R}^{|\ensuremath{\mathcal{V}}|}$ predicts a score ${\bm{z}} = {\bm{f}}({\bm{t}}_1, \dots, {\bm{t}}_k)$ for every possible next token. We treat the implementation of ${\bm{f}}$ as a black box (it does not need to be a neural network), yet in practice virtually all such models are variants of the Transformer architecture~\citep{VaswaniSPUJGKP17}. Via the softmax function, the resulting scores ${\bm{z}}$ can then be turned into a probability distribution over the vocabulary $\mathcal{V}$: \begin{equation*} \ensuremath{\mathrm{softmax}}({\bm{z}})_i := \frac{\exp(z_i)}{\sum_j \exp(z_j)}. \end{equation*} \paragraph{Decoding} Based on this, the language model ${\bm{f}}$ is applied multiple times to produce a sequence ${\bm{t}}_1, \dots, {\bm{t}}_K$ for $K > k$. When we want to pick the $(i+1)$-th token, $\ensuremath{\mathrm{softmax}}({\bm{f}}({\bm{t}}_1, \dots, {\bm{t}}_i))$ gives a probability distribution over this next token. Several ways of picking from this distribution have been discussed in the literature. Below we review a selection of the most popular ones. Each method is iterated until a special end-of-sequence token \textsc{eos}\xspace is predicted or another stopping criterion is met. This can be seen as sampling from a distribution over $\ensuremath{\mathcal{V}}^*$, and thus, some of these methods can return multiple possible decodings: \begin{itemize} \item \textbf{Greedy decoding} (or \textbf{Argmax decoding}) picks the token with the highest probability at each turn and feeds it back into the model to predict the next one (this corresponds to a depth-first search of all possible decodings).
Importantly, this decoding does not necessarily (and in practice very rarely does) correspond to the decoding with the highest overall probability (obtained by multiplying all individual probabilities of selected tokens), as it only picks the locally most probable token at each step. Overall, only one decoding is returned. \item \textbf{Sampling} treats the output \ensuremath{\mathrm{softmax}}{} distribution as a categorical distribution from which a next token can be sampled. With sampling, it is common to decode multiple, e.g., $n$, outputs. \item \textbf{Full decoding} enumerates all possible sequences to the end and picks the one with the highest probability. This corresponds to a breadth-first search of all possible decodings. However, such enumeration (even with optimizations) is prohibitively expensive. \item \textbf{Beam search} strikes a middle ground between greedy and full decoding. It maintains a set of $n$ beams at all times, each corresponding to a predicted sequence. For each sequence, it predicts a possible next token and again picks the top $n$ from the resulting $n |\ensuremath{\mathcal{V}}|$ sequences. In the end, the top sequence from the $n$ resulting beams is picked. \end{itemize} For beam search and sampling, an additional parameter, the temperature $\tau \in \mathbb{R}^{>0}$, can be used to control the diversity of the output, by using $\ensuremath{\mathrm{softmax}}({\bm{z}}/\tau)$ rather than $\ensuremath{\mathrm{softmax}}({\bm{z}})$. A higher $\tau$ leads to more diverse outputs, while a lower $\tau$ leads to more likely outputs. \paragraph{Masked Decoding} A particular case of decoding arises if we can already rule out certain tokens at certain positions. This means we can simply ignore these tokens and perform decoding over the remaining set. In such a case, we assume that we are given a mask ${\bm{m}} \in \{0, 1\}^{|\ensuremath{\mathcal{V}}|}$, where a $1$ denotes a viable token and a $0$ denotes a discarded one.
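The temperature scaling and mask application described here can be sketched as follows (toy scores over a three-token vocabulary; purely illustrative):

```python
import math

def softmax(z, tau=1.0):
    """softmax(z / tau); a higher tau flattens the distribution,
    a lower tau sharpens it."""
    e = [math.exp(zi / tau) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

def masked_probs(z, m, tau=1.0):
    """Element-wise mask application m * softmax(z/tau), rescaled so
    the remaining (viable) tokens again form a probability distribution."""
    p = [mi * pi for mi, pi in zip(m, softmax(z, tau))]
    s = sum(p)
    return [pi / s for pi in p]

# token 1 is ruled out by the mask; tokens 0 and 2 share the probability mass
probs = masked_probs([2.0, 1.0, 0.0], [1, 0, 1])
```

The rescaling step restores a proper distribution over the viable tokens, so any of the decoding methods above can be applied unchanged.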
We can apply the decoding methods discussed above on ${\bm{m}} \odot \ensuremath{\mathrm{softmax}}({\bm{z}})$, where $\odot$ denotes element-wise multiplication. (Note that, to obtain correct probabilities again, this vector needs to be scaled by $1/\sum_i ({\bm{m}} \odot \ensuremath{\mathrm{softmax}}({\bm{z}}))_i$.) An extreme case of this occurs when asking the model yes/no questions or posing classification tasks (e.g., answering "positive" or "negative"). There we only allow the model to respond with the respective word and thereby the corresponding tokens. Another case where this is applied is when decoding a formal language such as in code completion or synthesis, where only a subset of possible tokens can form a legal program according to a grammar. \input{figures/example_few_shot} \paragraph{Few-Shot Prompting} Few-shot prompting \citep{BrownMRSKDNSSAA20} refers to the idea that language models do not need to be specifically trained for a downstream task (e.g. classification, question answering, etc.). Rather, it is sufficient to train them on broad text-sequence prediction datasets (e.g., the Pile \citep{pile}) and to provide context in the form of examples when invoking them. We show an example of this in \cref{fig:few_shot}, where our goal is to translate "cheese" from English to French. To this end, we provide several examples of successful translation pairs and then ask the LM to complete the pair for "cheese" in the same syntax, where we expect the model to predict the tokens forming \lstinline|fromage| followed by the end-of-sequence token. In this way, translation and other tasks can be reframed as simple sequence completion tasks, which makes LMs powerful multi-task reasoners. \newpage \subsection{Key Challenges} \label{sec:challenges} Here we outline the challenges faced by current approaches to LM prompting, before describing in \cref{sec:lm_programming} how \textsc{LMP}\xspace, via our implementation \textsc{LMQL}\xspace, can be used to overcome them.
\paragraph{Interaction} Consider for example the approach from \citet{ReynoldsM21}, which discusses the idea of \emph{meta prompts}, where, in order to obtain the answer to a particular question, a language model is first asked to expand the prompt, which is then fed again to the same model in order to obtain an answer. An example inspired by this approach is shown in \cref{fig:circumference}~(a). There the goal is to ask the LM for the answer to the question "What is the circumference of the earth?". In meta prompting we first ask the language model for the name of an expert regarding this question, and then ask how this expert would answer the question. With current LM interfaces, one would input the first part of the prompt, manually invoke the LM to complete the sequence with the expert name, extract the expert name from the LM output, enter it manually into the rest of the template, and again feed it to the LM to obtain the actual answer. This approach requires a large amount of manual interaction via an API, or even with a human in the loop. Further, due to this manual intervention, the name of the expert is fixed before the actual answer is generated. For decoding procedures that aim to optimize the overall likelihood of the result, this may produce worse results than letting the optimization procedure jointly optimize both inputs. \input{figures/example_circumference} \paragraph{Constraints \& Token Representation} Another issue of this example query arises when we consider the completions shown in \cref{fig:circumference}~(b). Sometimes, LMs will digress during generation and produce long ongoing sequences of text. While some answers work well for substitution in the next part of the prompt, others produce awkward and clumsy sentences at best and wrong sentences at worst.
This demonstrates that, as users, we often have constraints regarding the generated text, which are sometimes violated, as the LM will not naturally adhere to them. Ideally, these constraints would be expressible in terms of human-understandable concepts and logic, since users reason in terms of words, sentences and entities, not on a token level like the LM. In contrast, practical methods of constraining LMs in this way \cite{shin2021constrained, PoesiaP00SMG22} still involve a lot of manual implementation effort and model-level understanding of the decoding procedures, tokenization and vocabulary of the LM. \paragraph{Efficiency and Cost} Lastly, efficiency and performance remain major challenges. While a lot of work has gone into making the inference step in modern LMs more efficient, they still require expensive, high-end GPUs to be run with reasonable performance. Because of this, many practical users resort to hosted models running in the cloud, some of which are even guarded behind paid APIs. For this reason, LM querying can become very expensive, both in a computational and a financial sense. When relying on Language Model Programming and constraints, however, new opportunities for optimization arise, as predefined behavior and a limitation of the search space can be exploited to reduce the number of times an LM has to be invoked. In this setting, the cost of validation, parsing and mask generation is negligible compared to the vast cost of even just a single LM call. \subsection{Language Model Programming\xspace in \textsc{LMQL}\xspace} \label{sec:lm_programming} We now consider Language Model Programming\xspace instantiated via our implementation \textsc{LMQL}\xspace, and how it can help overcome these challenges. Shown in \cref{fig:circumference}~(c), we write the same query as before in \textsc{LMQL}\xspace syntax (formally defined in \cref{sec:language}).
Here, when we encounter the construction \lstinline{[VAR]}, everything before the variable is fed to the LM and the answer found via decoding is then assigned to the variable \lstinline{VAR}, while a variable name in braces simply recalls a previously defined variable. This greatly simplifies the prompt and removes the need for manual interaction. Additionally, it enables the use of decoding procedures that consider both the expert name and answer jointly (as discussed in \cref{sec:query}). Further, to address the issue of long on-running sentences, \textsc{LMQL}\xspace allows constraints on the variable parts of the LM interaction on an intuitive level, e.g., over words and phrases. \cref{fig:circumference}~(d) shows the intuitive \textsc{LMQL}\xspace syntax for this, also discussed formally later on. Here, the constraints enforce that the decoded tokens for \lstinline{EXPERT} are at most three words and that decoding stops if the sequence ends in a "\lstinline{.}". While it is possible to specify a maximum length with current query APIs, they usually work directly on the (model-specific) token level and thus cannot be mapped 1-to-1 to longer sequences. In contrast, \textsc{LMQL}\xspace allows the intuitive declaration of high-level constraints that are automatically translated into token-level inference masks, using the partial evaluation semantics discussed in \cref{sec:constraint_decoding}. \section{The LMQL runtime: Query Execution \& Decoding} \label{sec:query} \begin{wrapfigure}[20]{r}{0.64\textwidth} \vspace{-1.5em} \begin{minipage}{0.64\textwidth} \begin{algorithm}[H] \SetAlgoLined \LinesNumbered \DontPrintSemicolon \KwIn{string $s$, trace $u$, scope \ensuremath{\sigma}, language model ${\bm{f}}$} \uIf{$s$ contains $[\text{\ebnfph{<varname>}}]$} { $s_{\text{pre}}, \text{varname}, s_{\text{post}} \gets \text{unpack}(s)$ \; \tcp*{e.g.
"a [b] c" $\rightarrow$ "a ", "b", " c"} $u \leftarrow u s_{\text{pre}}$ \tcp*{append to trace} $v \leftarrow decode({\bm{f}}, u)$ \tcp*{use the LM for the hole} \label{alg:eval-string:decode} $\ensuremath{\sigma}[\text{varname}] \leftarrow v$ \tcp*{update scope} $u \leftarrow u v$ \tcp*{append to trace} } \uElseIf{$s$ contains $\{\text{\ebnfph{varname}}\}$} { $\text{varname}\gets \text{unpack}(s)$ \tcp*{e.g. "\{b\}" $\rightarrow$ "b"} $v \leftarrow \ensuremath{\sigma}[\text{varname}]$ \tcp*{retrieve value from scope} $s \leftarrow \text{subs}(s, \text{varname}, v)$ \tcp*{replace placeholder with value} $u \leftarrow u s$ \tcp*{append to trace} } \Else{ $u \leftarrow u s$ \tcp*{append to trace} } \caption{Evaluation of a top-level string $s$} \label{alg:eval-string} \end{algorithm} \end{minipage} \end{wrapfigure} We now discuss how the \textsc{LMQL}\xspace runtime executes a query. To this end we consider the execution of the \ebnfph{query} as a python program. In this execution we assume that (i) functions are pure and do not cause side effects, and (ii) functions are deterministic. Ignoring the constraints in \lstinline|where| for now, the \ebnfph{query} is executed line-by-line like a regular python function with one difference: at the beginning of the execution, the interaction trace $u \leftarrow \ensuremath{\epsilon}$ is initialized to the empty string $\ensuremath{\epsilon}$. Whenever a top-level string $s$ is encountered in the program execution, the procedure in \cref{alg:eval-string} is invoked. If a hole \texttt{[\text{\ebnfph{varname}}]} is encountered, the string $s$ is split into the text preceding the hole $s_\text{pre}$, the variable name and the text after the hole $s_\text{post}$. $s_\text{pre}$ is directly appended to $u$\footnote{As is common we use multiplication to denote string concatenation and write $uv$ to denote the concatenation of $u$ and $v$.}, which is then used to $decode$ a sequence $v$ to fill the hole from the LM ${\bm{f}}$.
This string is then assigned to \ebnfph{varname} in the scope $\ensuremath{\sigma}$ of the python program. If $\{\text{\ebnfph{varname}}\}$ is encountered, the value of \ebnfph{varname} is retrieved from scope $\ensuremath{\sigma}$ and the placeholder is replaced with the value. In all cases the string $s$ (with the decoded or substituted text replaced) is added to $u$. Note that, for simplicity, in \cref{alg:eval-string} we assume that there is at most one hole or placeholder in a string $s$. In practice, we allow multiple. Formally this can be thought of as splitting $s$ into a list of strings and then applying \cref{alg:eval-string} to each resulting string. \input{figures/example_execution} We illustrate this execution model in \cref{fig:example-execution}, where we list the evaluation steps of the first 7 lines of \cref{fig:query-example:list}. The first two lines are directly appended to the interaction trace $u$, while the next two lines (emitted inside the for loop) contain holes, which invoke the $decode$ function, discussed next. \input{figures/decoding} \paragraph{Decoding Algorithm} When $decode$ is invoked, the decoding procedure declared at the top of the \textsc{LMQL}\xspace program is utilized to generate a value for the placeholder. Decoding is usually stopped (i) when an end-of-sequence token is produced, or (ii) when no more tokens can be produced due to the given constraints (discussed in \cref{sec:constraint_decoding}). In \cref{alg:eval-string} we assume that $decode$ returns a de-tokenized string $v$ rather than a sequence of tokens. For decoding algorithms that output just a single possible sequence, such as \lstinline{argmax} or \lstinline{sample(n=1)}, the straightforward combination of \cref{alg:eval-string} and a standard decoding function constitutes the full end-to-end decoding procedure.
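A minimal Python rendering of the hole/placeholder handling in \cref{alg:eval-string} might look as follows; `decode` is a caller-supplied stand-in for the LM call, and the suffix handling is slightly simplified compared to the paper, which splits multi-hole strings into a list first:

```python
import re

def eval_top_level_string(s, trace, scope, decode):
    """Sketch of evaluating one top-level string: fill a [VAR] hole
    via `decode`, substitute a {var} placeholder from `scope`,
    and extend the interaction trace."""
    hole = re.search(r"\[(\w+)\]", s)
    placeholder = re.search(r"\{(\w+)\}", s)
    if hole:
        s_pre, varname = s[:hole.start()], hole.group(1)
        s_post = s[hole.end():]
        trace += s_pre                 # append prefix to trace
        v = decode(trace)              # use the LM for the hole
        scope[varname] = v             # update scope
        trace += v + s_post            # append decoded value (and suffix)
    elif placeholder:
        varname = placeholder.group(1)
        s = s.replace("{" + varname + "}", scope[varname])
        trace += s                     # append substituted string
    else:
        trace += s                     # plain string: append as-is
    return trace, scope
```

For example, with a stub `decode = lambda u: "Alice"`, evaluating `"Q: who? A: [NAME]"` extends the trace to `"Q: who? A: Alice"` and binds `NAME` in the scope, which a later `"{NAME}"` placeholder can recall.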
However, a particular case occurs if multiple results are produced, e.g., \lstinline{sample(n=}{}\ebnfph{int}{}\lstinline{)} produces $n$ possible interaction traces $u$. In this case, we track $n$ parallel executions of the query program, where $decode$ acts non-deterministically. In practice, we execute all calls in lockstep, such that we can batch calls to the underlying model ${\bm{f}}$ and therefore improve efficiency. \paragraph{Scripted Beam Search} For the decoder \lstinline{beam(n=}{}\ebnfph{int}{}\lstinline{)}, the query is executed similarly: when the first hole in the interaction is encountered, $n$ beams (with their estimated probabilities) are created and retained. Each beam then corresponds to an interaction trace $u$, for which the query function is executed independently. Note that each $u$ might cause different control flow. Further, since we only consider the top $n$ beams at each step, we also only continue query execution for the top $n$ beams. Interaction traces that are discarded along the way are pruned and not extended further. On termination, the overall query result corresponds to the final top $n$ interaction traces. \paragraph{Optimization} For large $n$ the execution of query code for multiple samples or beams can potentially be expensive, especially if expensive functions are involved on top of the LM output. However, as we assume functions to be pure and deterministic, results can be cached based on the function arguments, greatly decreasing the total number of required function invocations. \paragraph{Language Model Integration} As shown in our decoding algorithm, we do not impose any restrictions on the language model ${\bm{f}}$, apart from being able to access the resulting distribution over vocabulary tokens. As this is fundamentally the core interface of most language models, we can easily integrate them without further changes.
However, we note that our decoding procedure requires our runtime to be invoked for each token, which can be expensive for API-gated models that are billed by the number of API calls. For more details on the integration of the \textsc{LMQL}\xspace{} runtime with a language model, see \cref{sec:model-integration}. \paragraph{Decoding Internals} \cref{alg:decoding} shows the internals of a decoding procedure (\text{decode} in \cref{alg:eval-string}) for a single sample or beam. Here, the goal is to build up the string $v$, initialized to the empty string $\ensuremath{\epsilon}$ in \cref{alg:decoding:init}, by appending tokens $t$ to it. For each new token we compute a mask ${\bm{m}}$ over the vocabulary, which only allows tokens that result in legal sequences, e.g., those that satisfy our \lstinline|where| constraints. If we cannot produce any further tokens (i.e., $\bigvee_i m_i = 0$, meaning the mask rules out the entire vocabulary) we stop the decoding procedure. Otherwise, we re-normalize ${\bm{m}} \odot {\bm{z}}$ into a probability distribution, i.e., a vector whose entries add up to 1, by dividing it by $Z = \sum_i ({\bm{m}} \odot {\bm{z}})_i$. The function $\text{pick}$ depends on the exact decoding algorithm (e.g., \lstinline|argmax|, \lstinline|sample|, \lstinline|beam|) and is used to pick a token $t$ from the distribution. If we obtain an end-of-sequence \textsc{eos}\xspace token we stop. If we return early because no legal tokens are available, we are unable to find a response to the query that fulfils the constraints. If we return at \textsc{eos}\xspace, we have found a legal decoding. Next, we discuss how to compute the mask ${\bm{m}}$, such that the specified constraints can be enforced during decoding.
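The per-token loop of \cref{alg:decoding} can be sketched as below; `model`, `mask_fn`, and `pick` are stand-ins (not part of any real API) for the LM's next-token distribution, the constraint-derived mask computation, and the decoder-specific selection rule:

```python
EOS = "<eos>"

def decode(model, mask_fn, pick, prompt, vocab, max_tokens=32):
    """Build up a string v token by token, masking out illegal
    tokens and re-normalizing before each selection."""
    v = ""                                   # v starts as the empty string
    for _ in range(max_tokens):
        z = model(prompt + v)                # distribution over vocab
        m = mask_fn(prompt + v, vocab)       # 0/1 mask of legal tokens
        masked = [mi * zi for mi, zi in zip(m, z)]
        Z = sum(masked)
        if Z == 0.0:                         # no legal token: fail early
            return None
        probs = [x / Z for x in masked]      # re-normalize to sum to 1
        t = pick(probs, vocab)               # argmax / sample / beam step
        if t == EOS:                         # legal decoding found
            return v
        v += t
    return v
```

With an `argmax`-style `pick` and a toy `model`, the loop greedily appends the most probable legal token until \textsc{eos}; if the mask zeroes out the whole vocabulary, no constraint-satisfying response exists and `None` is returned.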
\section{Related Work} \label{sec:related} \paragraph{Language Model Programming\xspace (\textsc{LMP}\xspace)} Recent work has proposed a variety of different prompting techniques: chain-of-thought prompting \cite{wei2022chain}, interactive question answering \cite{yao2022react}, aggregation-based schemes like self-consistency \cite{wang2022self} and ThinkSum \cite{ozturkler2022thinksum}. We consider all these works as instances of \textsc{LMP}\xspace (also discussed under the term of prompt programming \cite{ReynoldsM21, ZhouMHPPCB22}), where the goal is to leverage the reasoning abilities of a pre-trained model to achieve a specific task. A few select works have identified this trend, and propose novel LM-focused programming systems: PromptChainer \cite{WuJD0MTC22}, OpenPrompt \cite{ding2021openprompt}, and PromptSource \cite{bach2022promptsource} provide integrated development environments for LM interaction. The latter two even support a simple templating language akin to \textsc{LMQL}\xspace{}'s top-level string semantics. However, neither of the projects implements constraints or control flow like \textsc{LMQL}\xspace{} does. Finally, \citet{dohan2022language} discuss the idea of language model cascades, relating LM querying to probabilistic programming, which opens up interesting avenues for future work, also in the more general context of language model programming and \textsc{LMQL}\xspace. Recently, an interesting version of chain-of-thought \citep{gao2022pal,chen2022xxw} with access to a language interpreter was proposed. There, the LM produces step-by-step instructions that can be evaluated by a python (or similar) interpreter in order to obtain the answer to simple reasoning or arithmetic tasks. This does not require specially trained LMs, but rather works via few-shot prompting with examples. This approach is orthogonal to the idea of Language Model Programming\xspace.
However, it can be encoded similarly to the arithmetic task (see \cref{sec:eval-arithmetic}) in \textsc{LMQL}\xspace. If safety is no concern, this is trivially realized by invoking python's \lstinline|eval| function on the model's response. \paragraph{Constraining Language Models} The idea of constraining LMs has been applied across a range of fields. \citet{shin2021constrained} constrain a model's output to a more easily interpretable subset of the English language. More specifically, they handcraft custom next-token prediction programs to implement specific semantic parsing tasks using LMs. \citet{PoesiaP00SMG22} and \citet{scholak2021picard}, on the other hand, are concerned with the task of generating source code. In this setting, syntactic and semantic validity is crucial. To realize this, they integrate existing parsers and validation methods. In contrast, \textsc{LMQL}\xspace{} provides a generic interface to facilitate constrained decoding via high-level constructs. Still, our set of operators can easily be extended by the user, allowing for the integration of grammar-based parsers, semantic code validation or other methods.
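The interpreter-assisted scheme mentioned above can be sketched in a few lines, using python's \lstinline|eval| directly on a model response (acceptable only when safety is genuinely no concern; the response string here is a made-up example, not real model output):

```python
def run_program_of_thought(lm_response: str):
    """Evaluate an arithmetic expression produced by the LM.
    WARNING: eval() executes arbitrary code; never use it on
    untrusted model output in a real system."""
    return eval(lm_response)

# hypothetical model response to the question "23 * 17 + 4 = ?"
answer = run_program_of_thought("23 * 17 + 4")
```

A production system would instead parse and validate the expression (e.g., via `ast.literal_eval` or a restricted grammar) before executing it.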
\section{Introduction} To identify the dynamical state of multi-planetary systems, we use the MEGNO technique (the acronym of Mean Exponential Growth factor of Nearby Orbits; Cincotta \& Sim\'o 2000). This method provides relevant information about the global dynamics and the fine structure of the phase space, and simultaneously yields a good estimate of the Lyapunov Characteristic Numbers with a comparatively small computational effort. From the MEGNO technique, we have built the MIPS package (acronym of Megno Indicator for Planetary Systems) specially devoted to the study of planetary systems in their multi-dimensional space as well as their conditions of dynamical stability. Particular planetary systems presented in this paper are only used as initial condition sources for theoretical studies of 3-body problems. By convention, the reference system is given by the orbital plane of the inner planet at $t = 0$. Thus, we set the orbital inclinations and the longitudes of node of the inner (noted 1) and the outer (noted 2) planets (which are parameters not determined by observations) as follows: $i_1 = 0^\circ$ and $\Omega_1 = 0^\circ$, in such a way that the relative inclination and the relative longitude of nodes are defined at $t=0$ by: $i_r = i_2-i_1 = i_2$ and $\Omega_r = \Omega_2-\Omega_1 = \Omega_2$. The MIPS maps presented in this paper have been confirmed by a second global analysis technique (Marzari {\it et al.} 2006) based on the Frequency Map Analysis (FMA; Laskar 1993). \section{Fine structure of retrograde resonance} Studying conditions of dynamical stability in the neighborhood of the HD\thinspace73526 two-planet system (period ratio: 2/1, see initial conditions in Table 1), we find only one stable and robust island (noted (2)) for a relative inclination of about $180^\circ$ (see Fig. \ref{fig1}a).
Such a relative inclination (where in fact $i_1=0^\circ$ and $i_2=180^\circ$) corresponds to a coplanar system where planet 2 has a retrograde motion with respect to planet 1. From a kinematic point of view, it amounts to considering a scale change of $180^\circ$ in relative inclinations. Taking into account initial conditions inside the island (2) of Fig. 1a, we show that the presence of a strong mean-motion resonance (MMR) induces clear stability zones with a nice V-shape structure, as shown in Fig. 1b plotted in the $[a_1, e_1]$ parameter space. Let us note the narrowness of this V-shape, namely only about 0.006 AU wide for the inner planet (it is 5 times larger in the Jupiter-Saturn case). A similar V-shape structure, about 0.015 AU wide, is obtained in $[a_2, e_2]$. Due to the retrograde motion of the outer planet 2, this MMR is a 2:1 retrograde resonance, also noted 2:-1 MMR. \begin{figure}[!h] \begin{center} \includegraphics[width=4.4cm,keepaspectratio=true,angle=270]{Figure_1.eps} \label{fig1} \end{center} \caption{Panel (a): Stability map in the $[i_r, \Omega_r]$ non-determined parameter space of the HD\thinspace73526 planetary system (see Table 1). Panel (b): Stability map in the $[a_1, e_1]$ parameter space for initial conditions taken in the stable zone (2) of panel (a). Note that masses remain untouched whatever the mutual inclinations may be; they are equal to their minimal observational values. Black and dark-blue colors indicate stable orbits ($\langle Y \rangle = 2 \pm 3\%$ and $\langle Y \rangle = 2 \pm 5\%$ respectively, with $\langle Y \rangle$ the MEGNO indicator value) while warm colors indicate highly unstable orbits.} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=4.4cm,keepaspectratio=true,angle=270]{Figure_2.eps} \label{fig2} \end{center} \caption{Stability maps in the $[i_r, \Omega_r]$ parameter space. Panel (a): initial HD\thinspace82943 planetary system (see Table 1).
Panel (b): scale reduction of the HD\thinspace82943 planetary system according to a factor 7.5 on semi-major axes. Masses in Panel (a) and Panel (b) are identical. Color scale is the same as in Fig. 1.} \vspace{3mm} \end{figure} \section{Efficiency of retrograde resonances} Fig. 2 exhibits stability maps in the $[i_r, \Omega_r]$ parameter space considering a scale reduction of the HD\thinspace82943 planetary system (see Table 1) according to a factor 7.5 on semi-major axes (masses remaining untouched). The dynamical behavior of the reduced system (Fig. 2b) with respect to the initial one (Fig. 2a) highlights the clear robustness of retrograde configurations contrary to prograde ones. The ``prograde'' stable islands completely disappear while only the ``retrograde'' stable island resists, persists and even extends somewhat. Even for very small semi-major axes and large planetary masses, which should a priori easily make a system unstable or chaotic, stability is possible with counter-revolving orbits. In the case of the 2:1 retrograde resonance, although close approaches happen more often (3 for the 2:-1 MMR) compared to the 2:1 prograde resonance, the 2:-1 MMR remains very efficient for stability because the close approaches between the planets are faster. A more detailed numerical study of retrograde resonances can be found in Gayon \& Bois (2008).
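The statistical approach used in Sect. 4 (Table 2) draws random systems within the observational error bars and counts the fraction classified as stable; a minimal sketch, where the Gaussian sampling and the `is_stable` callback are illustrative stand-ins for the actual error-bar sampling and the MEGNO integration:

```python
import random

def sample_within_errors(elements, sigmas, rng):
    """Draw one random system: each orbital element is perturbed
    by a Gaussian whose width is its observational error bar."""
    return {k: rng.gauss(v, sigmas[k]) for k, v in elements.items()}

def stable_fraction(elements, sigmas, is_stable, n=1000, seed=0):
    """Fraction of n random systems classified as stable."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if is_stable(sample_within_errors(elements, sigmas, rng))
    )
    return hits / n
```

For instance, with `elements={"e1": 0.19}` and `sigmas={"e1": 0.05}` (the HD 73526 inner eccentricity from Table 1) and a hypothetical stability criterion, the function returns the proportion of stable draws out of `n`, as reported in Table 2.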
\begin{table}[!h] \begin{center} \begin{tabular}{|c|ccccc|} \hline Elements& \textrm{HD$\thinspace$73526}& \textrm{HD$\thinspace$82943}& \textrm{HD$\thinspace$128311}& \textrm{HD$\thinspace$160691}& \textrm{HD$\thinspace$202206}\tabularnewline \hline $M_{star}$ $(M_{\odot})$& $1.08 \pm 0.05$& $1.15$& $0.84$& $1.08 \pm 0.05$& $1.15$\tabularnewline \hline $m \textrm{ sin } i_l \textrm{ ($M_J$)}$& $\begin{array}{c} 2.9 \pm 0.2\\ 2.5 \pm 0.3 \end{array}$& $\begin{array}{c} 1.85 \\ 1.84 \end{array}$& $\begin{array}{c} 1.56 \pm 0.16\\ 3.08 \pm 0.11 \end{array}$& $\begin{array}{c} 1.67 \pm 0.11\\ 3.10 \pm 0.71 \end{array}$& $\begin{array}{c} 17.4 \\ 2.44 \end{array}$\tabularnewline \hline $a \textrm{ (AU)}$& $\begin{array}{c} 0.66 \pm 0.01\\ 1.05 \pm 0.02 \end{array}$& $\begin{array}{c} 0.75 \\ 1.18 \end{array}$& $\begin{array}{c} 1.109 \pm 0.008 \\ 1.735 \pm 0.014 \end{array}$& $\begin{array}{c} 1.50 \pm 0.02\\ 4.17 \pm 0.07 \end{array}$& $\begin{array}{c} 0.83 \\ 2.55 \end{array}$\tabularnewline \hline $e$& $\begin{array}{c} 0.19 \pm 0.05\\ 0.14 \pm 0.09 \end{array}$& $\begin{array}{c} 0.38 \pm 0.01\\ 0.18 \pm 0.04 \end{array}$& $\begin{array}{c} 0.38 \pm 0.08 \\ 0.21 \pm 0.21 \end{array}$& $\begin{array}{c} 0.20 \pm 0.03\\ 0.57 \pm 0.1 \end{array}$& $\begin{array}{c} 0.435 \pm 0.001\\ 0.267 \pm 0.021 \end{array}$\tabularnewline \hline $\omega \textrm{ (deg)}$& $\begin{array}{c} 203 \pm 9\\ 13 \pm 76 \end{array}$& $\begin{array}{c} 124.0 \pm 3\\ 237.0 \pm 13 \end{array}$& $\begin{array}{c} 80.1 \pm 16\\ 21.6 \pm 61 \end{array}$& $\begin{array}{c} 294 \pm 9\\ 161 \pm 8 \end{array}$& $\begin{array}{c} 161.18 \pm 0.30 \\ 78.99 \pm 6.65 \end{array}$\tabularnewline \hline $M \textrm{ (deg)}$& $\begin{array}{c} 86 \pm 13\\ 82 \pm 27 \end{array}$& $\begin{array}{c} 0\\ 75.21 \pm 1.96 \end{array}$& $\begin{array}{c} 257.6 \pm 2.7\\ 166 \pm 2 \end{array}$& $\begin{array}{c} 0 \\ 12.6 \pm 11.2 \end{array}$& $\begin{array}{c} 105.05 \pm 0.48 \\ 311.6 \pm 9.5 \end{array}$ 
\tabularnewline \hline \end{tabular} \caption{\label{tab1}Orbital parameters of the HD$\thinspace$73526, HD$\thinspace$82943, HD$\thinspace$128311, HD$\thinspace$160691 and HD$\thinspace$202206 planetary systems. Data sources come from Tinney et al. (2006), Mayor et al. (2004), Vogt et al. (2005), McCarthy et al. (2004) and Correia et al. (2005) respectively. For each system and each orbital element, the first line corresponds to the inner planet and the second one to the outer planet.} \end{center} \end{table} \begin{table}[!h] \begin{center} \begin{tabular}{cccc}\hline \hspace{0.2cm}Data sources\hspace{0.2cm} & \hspace{0.2cm}Period ratio\hspace{0.2cm} & \hspace{0.2cm}Prograde MMR\hspace{0.2cm} & \hspace{0.2cm}Retrograde MMR\hspace{0.2cm}\\ \hline HD\thinspace73526 & 2/1& 17 & 500 \\ HD\thinspace82943 & 2/1& 755 & 1000 \\ HD\thinspace128311& 2/1& 249 & 137 \\ HD\thinspace160691& 5/1& $\varepsilon$ & 320 \\ HD\thinspace202206& 5/1& $\varepsilon$ & 631 \\ \hline \end{tabular} \label{tab2} \caption{Statistical results. For each type of MMR (prograde or retrograde), 1000 random systems have been integrated in the error bars of each data source. The proportion of stable systems over 1000 is indicated in each case. $\varepsilon$ designates a very small value that depends on the random sample size. Data sources come from Tinney et al. (2006), Mayor et al. (2004), Vogt et al. (2005), McCarthy et al. (2004) and Correia et al. (2005) respectively (see Table 1).} \end{center} \end{table} \section{Occurrence of stable counter-revolving configurations} The occurrence of stable two-planet systems including counter-revolving orbits appears in the neighborhood of a few systems observed in 2:1 or 5:1 MMR. New observations frequently induce new determinations of orbital elements. This is the case for the HD\thinspace160691 planetary system, given with 2 planets in McCarthy {\it et al.} (2004) and then with 4 planets in Pepe {\it et al.} (2007).
Hence, systems related to the initial conditions used here (see Table 1) have to be considered as {\it academic} systems. Statistical results for the stability of these academic systems are presented in Table 2, both in the prograde case ($i_r=0^\circ$) and in the retrograde case ($i_r=180^\circ$). For each data source, 1000 random systems taken inside the observational error bars have been integrated. Among these random systems, the proportion of stable systems either with prograde orbits or with counter-revolving orbits is given in Table 2. In all cases, a significant number of stable systems is found in retrograde MMR. Moreover, for most data sources, retrograde possibilities predominate. \section{Resources of retrograde resonances} The 2:1 (prograde) MMRs preserved by synchronous precessions of the apsidal lines (ASPs) are by now well understood (see for instance Lee \& Peale 2002, Bois {\it et al.} 2003, Ji {\it et al.} 2003, Ferraz-Mello {\it et al.} 2005). The MMR-ASP combination is often very effective; however, ASPs may also exist alone and stabilize planetary systems. Related to subtle relations between the eccentricity of the inner orbit ($e_1$) and the relative apsidal longitude $\Delta\tilde{\omega}$ (i.e. $\tilde{\omega}_1-\tilde{\omega}_2$), Fig. 3 shows how the 2:1 retrograde MMR brings out its resources in the $[\Delta{\tilde{\omega}}, e_1]$ parameter space: \begin{itemize} \item In the island (1) (i.e. inside the $[a, e]$ V-shape of Fig. 1b), the 2:-1 MMR is combined with a uniformly prograde ASP (both planets precess on average at the {\it same rate} and in the {\it same prograde direction}). \item In the island (2) (i.e. outside but close to the $[a, e]$ V-shape of Fig. 1b), the 2:-1 {\it near}-MMR is combined with a particular apsidal behavior that we have called a {\it rocking} ASP (see Gayon \& Bois 2008): both planets precess at the {\it same rate} but in {\it opposite directions}.
\item The $[\Delta\tilde{\omega}, e_1]$ map also exposes a third island (3) that proves to be a wholly chaotic zone in long-term integrations. \end{itemize} Let us note that the division between islands (1) and (2) is related to the degree of closeness to the 2:-1 MMR. \begin{figure}[h] \begin{center} \begin{multicols}{2} \includegraphics[width=4.4cm,keepaspectratio=true,angle=270]{Figure_3.eps} \columnbreak \\ $\!$ \vspace{1.4cm} \caption{Stability map in the $[\Delta{\tilde{\omega}}, e_1]$ parameter space. A similar distribution of stable islands is obtained in $[\Delta{\tilde{\omega}}, e_2]$. Color scale and initial conditions are the same as in Fig. 1 with, in addition, the $i_r$ and $\Omega_r$ values chosen in the island (2) of Fig. 1a.} \end{multicols} \label{fig3} \end{center} \end{figure} \section{Conclusion} We have found that retrograde resonances present fine and characteristic structures particularly relevant for dynamical stability. We have also shown that in cases of very compact systems obtained by scale reduction, only the ``retrograde'' stable islands survive. From our statistical approach and the scale reduction experiment, we have demonstrated the stabilizing efficiency of retrograde resonances. This efficiency can be understood through the very fast close approaches between the planets, even though these approaches are more numerous. We plan to present a Hamiltonian approach to retrograde MMRs in a forthcoming paper (Gayon, Bois, \& Scholl, 2008). Besides, in Gayon \& Bois (2008), we propose two mechanisms of formation for systems harboring counter-revolving orbits. Free-floating planets or the Slingshot model might indeed explain the origin of such planetary systems. In the end, we may conclude that retrograde resonances prove to be a feasible stabilizing mechanism. \acknowledgements{We thank the anonymous referee for his comments that greatly helped to improve the paper.}
\section{Introduction} \label{sec:intro} The discharge length of fusion machines so far has been of the order of a few minutes only. However, as fusion research approaches reactor-relevant conditions, the discharge lengths increase. Fusion machines such as Wendelstein 7-X (W7-X) aim at discharge lengths of \SI{15}{\minute} at full power\cite{Klinger2019}. ITER, the first plasma experiment to reach ignition, will run shots with a base-line length of \SI{30}{\minute}, and the current European demonstration power plant (DEMO) will run at least on the basis of a \SI{2}{\hour} cycle\cite{Snipes2012,Biel2019}. Machines such as QUEST have already achieved such extensive discharges\cite{Hanada2017}. Interferometry is the primary density control diagnostic for fusion machines worldwide. All of the aforementioned machines have or will have continuous interferometry-based real-time density control, albeit the real-time requirements differ wildly between these machines\cite{vanZeeland2013,Biel2019}. The technique's primary advantage is its simplicity. To first order it is only sensitive to the dispersion of the traversed medium and the wavelength of the employed laser beam. And although vibrations do affect the measured phase, advanced techniques such as dispersion interferometry or the well-established two-color interferometry have successfully eliminated their effect\cite{Mlynek2010,Boboc2012,Akiyama2014,vanZeeland2017,Brunner2018}. However, what has thus far been neglected is the effect of environmental parameters, i.e. air humidity, air temperature and air pressure, on the phase measurement. The reason is that for most applications to date the environmental parameters are either considered constant over the course of the measurement, or (as in the case of the LIGO system) their effect has been mitigated by evacuating the entire optical setup.
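For a single-color interferometer, the plasma-induced phase is, to first order, $\Delta\phi = r_e \lambda \int n_e\,\mathrm{d}l$, with $r_e$ the classical electron radius. This standard plasma-interferometry relation (not written out explicitly in the text) can be evaluated as:

```python
R_E = 2.8179403262e-15  # classical electron radius [m]

def plasma_phase(line_integrated_density, wavelength):
    """First-order plasma phase shift [rad] for a given
    line-integrated electron density [m^-2] and vacuum
    wavelength [m]."""
    return R_E * wavelength * line_integrated_density

# e.g. a CO2 laser (10.6 um) and 1e19 m^-2 line-integrated density
phi = plasma_phase(1e19, 10.6e-6)  # roughly 0.3 rad
```

The linear dependence on wavelength is why long-wavelength lasers are more sensitive to the plasma, at the cost of stronger refraction.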
For specific components, such as \ce{ZnSe} vacuum windows, temperature-induced phase drifts have been presented at TJII and post-processing correction methods were consequently developed\cite{Sanchez2005}. The phase drifts presented were the result of absorbed microwave stray-radiation and could be easily mitigated by the choice of an appropriate wavelength combination. The overall environmental parameters were not considered. In general an interferometer placed at a fusion experiment will have the primary optical components outside of the test cell far away from the point of measurement, which advantageously allows the diagnostic to be maintained while the fusion experiment continues to run. This in turn requires large parts of the optical beam path to be in air. For \mbox{DIII-D}, which is the test-bed for the ITER interferometer, the beam path through air is of the order of \SI{120}{\m}\cite{vanZeeland2017}. For DEMO this will potentially increase by a factor of 2. \begin{figure}[t] \centering \includepdf{motivation} \caption{\label{fig:motivation} Measured phase drift for Oct 16 \& 17, 2018 of the OP1.2b operation campaign at Wendelstein 7-X. The actual drifting phase is shown in the left-hand plot. The ordinates indicate the phase value on the left and the corresponding IED equivalent for the W7-X system on the right. The right-hand plot shows the rate of phase drift for this day. For visualization purposes the abscissa is not to scale. The gray line indicates the day break.} \end{figure} It has recently come to the attention of the fusion community that the diagnostic-hall climate has a profound impact on the phase measurement of fusion interferometers\cite{vanZeeland2017}. In particular, air humidity was shown to have an impact on the measurement of the phase\cite{Brunner2018}. \Cref{fig:motivation} shows the phase measured by the W7-X interferometer on Oct 16 \& 17, 2018, which was part of the OP1.2b operation campaign.
The left-hand plot shows the plain phase with the equivalent line-integrated electron density (IED) indicated on the right ordinate. As can be seen, a changing weather front resulted in significant phase drifts over the course of the day. The right-hand plot indicates the rate of phase drift, where box averaging was used to remove short-scale drifts, i.e. only the drifts from discharge to discharge are shown. Under bad circumstances the phase drift can amount to \SI{\approx1e17}{\m^{-2}\s^{-1}}. The maximum shot length during the OP1.2b operation campaign was already \SI{100}{\s}, yielding a maximum drift-induced density error of \SI{1e19}{\m^{-2}}. The projected maximum shot length of \SI{\approx1000}{\s} would correspondingly result in a \SI{1e20}{\m^{-2}} density error, which would be unacceptable. The data in \cref{fig:motivation} shows that phase drifts must be compensated for semi-continuously operating interferometers. In this paper we present a real-time compensation method to stabilize the phase measurement of a long base-line interferometer for the purpose of continuous density feedback control. The system is very easily retro-fitted to already existing interferometers and can be implemented on a very low budget. The system was implemented at W7-X as part of the single channel dispersion interferometer\cite{Knauer2016,Brunner2018}. In \cref{sec:model} the compensation model will be derived. Its implementation in the W7-X interferometer system will be detailed in \cref{sec:impcalib}, where the method of calibration is also described. The effectiveness of the compensation will be demonstrated in \cref{sec:results}, followed by a discussion and an outlook in \cref{sec:discussion}. \section{Environmental Phase Model} \label{sec:model} The phase measured by an interferometer is generally the combination of various contributions.
This is because an interferometer is sensitive to both the physical path length difference as well as the refractive index of the traversed medium (the combination of both is known as the optical path length). Generally, the contribution of interest is only a part of the measured phase. \begin{equation} \label{eq:phase} \Phi_{\text{meas}} = \Phi_{\Delta L} + \Phi_{\text{disp. media}} + \Phi_{\text{interest}} + \delta\Phi \end{equation} For the application of nuclear fusion the interesting quantity is the dispersion of the fusion plasma $\Phi_{\text{interest}}=\Phi_{\text{plasma}}$. The perturbing quantities in this instance are the difference in path length $\Phi_{\Delta L}$ as well as the change in refractive index of the dispersive components in the optical setup $\Phi_{\text{disp. media}}$. In \cref{eq:phase} $\delta\Phi$ is the contribution of phase errors related to the diode signal evaluation. In large-scale experiments, which are ubiquitous in fusion, $\Phi_{\Delta L}$ will generally dominate, unless the interferometer uses extremely long wavelengths. These are however unfavorable for high-performance fusion machines, since they result in high levels of refraction. As such, fusion experiments rely on interferometric measurements such as two-color interferometry (2CI) or dispersion interferometry (DI) to remove the $\Phi_{\Delta L}$-contribution\cite{Brunner2015,Brunner2018}. With appropriately calibrated systems $\Phi_{\Delta L}$ can be neglected. Ignoring errors from the signal evaluation, the measured phase in a fusion setting is therefore: \begin{equation} \label{eq:dispPhase} \begin{split} \Phi_{\text{meas}} &= \Phi_{\text{beam path}} + \Phi_{\text{plasma}} = \Phi_{\text{trans.
comp.}} + \Phi_{\text{air}} + \Phi_{\text{plasma}} \\ &\approx \frac{L_{\text{air}}(T, p, H)}{\lambda} N_{\text{air}} (\lambda, T, p, H) + \sum_c \frac{L_{c}(T, p, H)}{\lambda} N_c (\lambda, T, p, H) + \Phi_{\text{plasma}} \\ &= \lambda^{-1} \left<L N\right>(\lambda, T, p, H) + \Phi_{\text{plasma}}. \end{split} \end{equation} In most fusion settings the optical interferometer setup is installed in areas where environmental conditions cannot be tightly regulated. As such, the dispersive disturbance is generally the sum of contributions from the transmission components '$c$', e.g. lenses, as well as the refractive index of air (see \cref{eq:dispPhase}). Without loss of precision, the phase contribution due to the optical path length of the individual components can be equivalently expressed by a single \emph{average} optical path length $\left<L N\right>$. Note that this quantity is a function of the environmental parameters, i.e. the temperature $T$, the air pressure $p$ and the (absolute) humidity $H$, as well as the wavelength $\lambda$ of the traversing light. For this reason, the measured phase tends to drift over time. Due to the long path lengths in large-scale experiments, $\Phi_{\text{air}}$ will tend to dominate and has been shown to be a significant contribution to the measured phase\cite{Brunner2018}. R.~J.~Mathar pointed out that the refractive index of air can be approximated by a Taylor series\cite{Mathar2007}. It was also shown at W7-X that the approximation can be reduced to only the 0th order component of eqn.~(5)~\&~(6) of Mathar's approximation and still produce an acceptable fit\cite{Brunner2018}. While this approximation is published for air only, it is not unreasonable to assume that a similar approximation is applicable to any dispersive transmission component in the beam path of the laser, e.g. lenses.
Since the approximation describes the refractive index as a polynomial, one can therefore sum the individual contributions of each component to yield one approximation for the mean optical path length $\left<L N\right>$ in \cref{eq:dispPhase}. \begin{equation} \label{eq:refrIdxFourier} \begin{split} \left<L N\right>(\lambda, T, p, H) \approx & s_0 - s_{H2}\cdot H_{\text{air}}^2 - s_{H1}\cdot H_{\text{air}} - \\ & s_{p2}\cdot p_{\text{air}}^2 - s_{p1}\cdot p_{\text{air}} - s_{T2}\cdot T_{\text{air}}^2 - s_{T1}\cdot T_{\text{air}} - \\ & s_{\text{Tp}}\cdot T_{\text{air}}\cdot p_{\text{air}} - s_{\text{TH}}\cdot T_{\text{air}}\cdot H_{\text{air}} - s_{\text{pH}}\cdot p_{\text{air}}\cdot H_{\text{air}}. \end{split} \end{equation} In \cref{eq:refrIdxFourier} we have summed the Fourier coefficients of each component to form a single polynomial, i.e. $s_n \approx L_{\text{air}} c_{n,\text{air}}(\lambda) + \sum_{\text{comp.}} L_{\text{comp.}} c_{n,\text{comp}}(\lambda)$. This approximation assumes that the refractive index of each transmission component has some proportionality relation with the ambient environmental parameters. This is reasonable as each component will, for example, equilibrate its internal temperature (which can be far from $T_{\text{air}}$) with the ambient air by convection. The coefficients also absorb the relative length variations due to thermal expansion. All \emph{constant} factors are summed up in $s_0$. In general the measured phase $\Phi_{\text{meas}}$ will be a function of the sum of \cref{eq:refrIdxFourier} for two wavelengths $\lambda$. However, one can make use of the fact that the ratio of these two wavelengths is fixed, i.e. $\lambda_1 = \text{\emph{const.}} \cdot \lambda_2$. For a DI that constant is exactly 2, but even for a 2CI the ratio is fixed.
Given this circumstance the difference of the Fourier constants $s_n$ in \cref{eq:refrIdxFourier}, which are only a function of the wavelength, can be simplified into a single constant by \begin{equation} \label{eq:lambdaScale} s_n(\lambda_1) - s_n(\lambda_2) = s_n(\lambda_1) - s_n(A\cdot\lambda_1) = s_n(\lambda_1) - B\cdot s_n(\lambda_1) \equiv e_n(\lambda_1), \end{equation} where $A$ denotes the fixed wavelength ratio and $B$ the resulting scaling of the coefficient. Combining \cref{eq:dispPhase,eq:refrIdxFourier,eq:lambdaScale} therefore yields a comparatively simple equation to remove the environmentally induced phase drift from an interferometer measurement. \begin{equation} \label{eq:corrEqn} \begin{split} \Phi_{\text{plasma}} = \Phi_{\text{meas}} - (&e_0 + e_{H2}\cdot H_{\text{air}}^2 + e_{H1}\cdot H_{\text{air}} + \\ & e_{p2}\cdot p_{\text{air}}^2 + e_{p1}\cdot p_{\text{air}} + e_{T2}\cdot T_{\text{air}}^2 + e_{T1}\cdot T_{\text{air}} + \\ & e_{\text{Tp}}\cdot T_{\text{air}}\cdot p_{\text{air}} + e_{\text{TH}}\cdot T_{\text{air}}\cdot H_{\text{air}} + e_{\text{pH}}\cdot p_{\text{air}}\cdot H_{\text{air}} ) \end{split} \end{equation} Note that at this point the $e_n$ are unknown constants, which depend on the laser's wavelength (for 2CI an arbitrary choice may be made) and the non-evacuated beam path length. \section{Implementation \& Calibration} \label{sec:impcalib} Since the $e_n$ in \cref{eq:corrEqn} are given by the design of the system and do not change (significantly) once it has been taken into operation, they can generally be assumed constant for a given interferometer system. As such it is possible to find them by correlating measurements of the environmental parameters ($H$, $p$ and $T$) with the measured phase $\Phi_{\text{meas}}$ in the absence of a plasma-induced phase shift, i.e. $\Phi_{\text{plasma}} = 0$. This measurement was conducted with the W7-X integral electron density dispersion interferometer (IEDDI) described in the next section.
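Determining the $e_n$ from such plasma-free data amounts to an ordinary least-squares fit of the polynomial in \cref{eq:corrEqn}. The following is an illustrative sketch with synthetic data, not the actual W7-X calibration code; all variable names, parameter ranges and noise levels are assumptions:

```python
import numpy as np

def design_matrix(T, p, H):
    """Columns match the polynomial terms of the drift model:
    constant, H^2, H, p^2, p, T^2, T, T*p, T*H, p*H."""
    return np.column_stack([np.ones_like(T), H**2, H, p**2, p,
                            T**2, T, T*p, T*H, p*H])

rng = np.random.default_rng(0)
n = 2000
T = 20.0 + 5.0 * rng.random(n)       # air temperature, deg C (assumed range)
p = 1000.0 + 20.0 * rng.random(n)    # air pressure, hPa (assumed range)
H = 5.0 + 10.0 * rng.random(n)       # absolute humidity, g/m^3 (assumed range)

# Synthetic plasma-free phase: drift polynomial plus sensor noise
e_true = rng.normal(size=10)
phi_meas = design_matrix(T, p, H) @ e_true + 0.01 * rng.normal(size=n)

# Least-squares estimate of the coefficients e_n
e_fit, *_ = np.linalg.lstsq(design_matrix(T, p, H), phi_meas, rcond=None)
phi_drift = design_matrix(T, p, H) @ e_fit
residual = phi_meas - phi_drift
print(np.std(residual))
```

In practice the design matrix is built from the recorded sensor data and the offset phase measurements; the fitted $\Phi_{\text{drift}}$ is then the quantity subtracted in real time.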
Insufficient air conditioning during the 2018 operation campaign (OP1.2b) at W7-X enabled the calibration measurement, as described in \cref{ssec:calibration}. \subsection{Implementation in the W7-X IEDDI system} \label{ssec:implementation} The W7-X IEDDI system is a modulated dispersion interferometer utilizing a \SI{10.6}{\um} \ce{CO2} laser for line-integrated density measurements, of the type first developed at TEXTOR\cite{Bagryansky2006}. The system uses a novel real-time phase extraction method for the diode signal, which is based on a field programmable gate array (FPGA) signal processor\cite{Brunner2018}. It samples the diode signal at \SI{50}{\MHz} and processes it in real time in under \SI{40}{\us}. The total beam path is around \SI{50}{\m}, which is predominantly placed on an optical table next to the stellarator in the W7-X torus hall\cite[fig.2]{Knauer2016}. This circumstance was actually advantageous for the development of this method, as it can be assumed that the majority of the beam path is governed by the same environmental parameters. \begin{figure}[t] \centering \includepdf{tikzDSPtop} \caption{\label{fig:DSPtop} The structure of the firmware's DSP core. The original implementation is indicated in blue and has been detailed previously\cite{Brunner2018}. The external environmental sensor is indicated in red. The logic applying the phase drift model detailed in \cref{sec:model} to the real-time phase is indicated in orange. The logic simulated in \cref{sec:results} is marked.} \end{figure} \Cref{fig:DSPtop} shows the structure of the real-time phase evaluation firmware. The blue cores are the original implementation described in an earlier publication\cite{Brunner2018}, whereas the modifications to the FPGA firmware are depicted in orange. Before the start of the OP1.2b campaign the system was equipped with an environmental sensor based on a Raspberry~Pi~3B+ combined with a SenseHAT (shown in red in \cref{fig:DSPtop})\cite{raspberryPi}.
It was placed at the center of the optical table, measuring continuously. The Raspberry~Pi measures temperature, pressure and relative humidity on the time scale of the sensor, which is roughly every \SI{100}{\ms}. The data is written to the central W7-X data storage using Ethernet and in parallel sent via a co-axial cable to the real-time processing FPGA using a universal asynchronous receiver-transmitter (UART) protocol. The observant reader will have noticed that \cref{eq:corrEqn} uses the absolute humidity, while the SenseHAT measures relative humidity. This does not pose a problem, since the relative humidity can be related to the absolute humidity using the perfect gas law and the Arden Buck equation\cite{Buck1981}. Using a basic Fourier expansion again, the deviations are additional temperature and pressure terms, which would simply add to the coefficients in \cref{eq:corrEqn}, merely changing the actual values of the $e_n$. The FPGA firmware was fitted with a UART frame receiver. It translates the data received from the Raspberry~Pi into a format understood by the FPGA firmware. The measurements of temperature, relative humidity and pressure are then passed to a drift prediction core, which calculates a drift phase $\Phi_{\text{drift}}$ based on the polynomial inside the parentheses of \cref{eq:corrEqn}. The final correction is then conducted by basic subtraction of $\Phi_{\text{drift}}$ from the plain measured phase $\Phi_{\text{meas}}$. This is the only modification to the previous data flow in the firmware, which demonstrates the ease with which the scheme can be retro-fitted to existing systems. The drift phase $\Phi_{\text{drift}}$ is calculated every time new samples of the environmental parameters arrive at the FPGA (roughly every \SI{100}{\ms}). A register is used to transfer the slow stream to the processed stream.
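The relative-to-absolute humidity conversion mentioned above follows directly from the Arden Buck saturation vapour pressure and the ideal gas law for water vapour; a minimal sketch (the function name is illustrative, constants from Buck 1981 for water over a liquid surface):

```python
import math

def absolute_humidity(T_celsius, rh_percent):
    """Absolute humidity in g/m^3 from air temperature (deg C) and
    relative humidity (%)."""
    # Arden Buck (1981) saturation vapour pressure over water, in Pa
    e_s = 611.21 * math.exp((18.678 - T_celsius / 234.5)
                            * (T_celsius / (257.14 + T_celsius)))
    e = rh_percent / 100.0 * e_s          # partial pressure of water vapour
    R_v = 461.5                           # specific gas constant of vapour, J/(kg K)
    T_kelvin = T_celsius + 273.15
    return 1000.0 * e / (R_v * T_kelvin)  # ideal gas law, kg/m^3 -> g/m^3

print(absolute_humidity(20.0, 50.0))  # ~8.6 g/m^3
```

At \SI{20}{\celsius} and \SI{50}{\%} relative humidity this yields roughly \SI{8.6}{\g\per\m\cubed}, which matches tabulated values.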
There is a significant delay between measuring the environmental parameters at the Raspberry~Pi and their ``application'' on the FPGA. However, this time-lag should be negligible compared to the time scales on which the environmental parameters change. The Fourier coefficients for the correction are fixed in registers, which can be set externally. This can even be done continuously during operation. \subsection{Calibration of Fourier Coefficients} \label{ssec:calibration} Calibrating the Fourier coefficients in \cref{eq:corrEqn} is the primary challenge when implementing this method. Since (at W7-X) the environmental parameters in the torus hall (TH) cannot be controlled arbitrarily, it was necessary to rely on the environmental parameters changing naturally, e.g. due to changing weather fronts. This inherently made the calibration measurements very time consuming. The necessary measurements took the entire 2018 operation campaign (OP1.2b). Humidity and temperature varied significantly during this time due to the very hot summer close to the sea, with changing weather fronts passing through Greifswald. The calibration measurements have to be conducted through the beam path that requires the correction, e.g. through the torus vessel. Safety measures prevented a continuous measurement, since the laser's shutters had to be closed whenever people entered the W7-X TH. Therefore, the necessary data had to be taken from the interferometer's offset measurements. More explicitly, the raw data acquired by the interferometer before the onset of plasma heating was evaluated according to the scheme depicted in \cref{fig:DSPtop}. This yielded $\Phi_{\text{meas}}$ \emph{without} the offset correction. A final continuous measurement over 4 consecutive days was recorded at the end of the campaign to complete the data set.
\begin{figure}[t] \centering \includepdf{envfitOverview} \caption{\label{fig:envCalib} Environmental drift calibration measurement during the OP1.2b operation campaign. Each data point plotted is a \SI{1}{\s} box average of the values recorded during the time. To better visualize the data, the abscissa is not to scale, and the data has been decimated. The fitted phase model of \cref{eq:corrEqn} is plotted in orange on the top left. The data of Oct 16 \& 17, 2018 have been excluded from the graph and the fit to prevent falsification of the simulation presented in \cref{sec:results}.} \end{figure} \Cref{fig:envCalib} shows the calibration measurement and subsequent model fit. The temperature, air pressure and \emph{relative} humidity, as measured by the Raspberry~Pi, are shown on the top right, bottom right and bottom left respectively. The measured phase is shown on the top left in blue and the fitted drift phase $\Phi_{\text{drift}}$ according to \cref{eq:corrEqn} in orange. Each data point is a \SI{1}{\s} box average of all data recorded. Since time is not a fitted quantity, the natural gaps between the points have been omitted, i.e. the abscissa is not to scale. Even before considering the fitted curve, the data reveals a strong correlation between relative humidity and the measured phase. It is also obvious that the optical table of the W7-X system heats up over the course of an operation day. Quantifying the correlation is, however, more difficult. The most applicable correlation metric is the distance correlation $\mathcal{R}$, which captures non-linear dependencies\cite{Szekely2007}. However, due to the coupling between the model parameters ($T$, $p$ and $H$), ``plain'' distance correlation does not deliver a good metric of the individual dependencies.
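For reference, the plain (non-partial) distance correlation can be computed directly from double-centred pairwise distance matrices, following~\cite{Szekely2007}; a minimal sketch for 1-D samples (the partial variant discussed below additionally projects out the confounding variable):

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples."""
    x = np.asarray(x, float).reshape(-1, 1)
    y = np.asarray(y, float).reshape(-1, 1)
    a = np.abs(x - x.T)                   # pairwise distance matrices
    b = np.abs(y - y.T)
    # Double centring: subtract row/column means, add back the grand mean
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                # squared distance covariance
    dvar_x = (A * A).mean()
    dvar_y = (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(1)
h = rng.random(200)
print(distance_correlation(h, 2.0 * h + 1.0))  # ~1 for a linear relation
```

Unlike the Pearson coefficient, this metric is zero (in the population limit) only for truly independent variables, which is what makes it applicable to the non-linear drift model here.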
The ``standard'' approach to remove the unwanted combined coupling is partial correlation, which removes the effect of a third variable coupled to the two variables of interest\cite{Szekely2014}. Since there are always two ``confounding'' variables for any combination of environmental parameters and the phase, the average partial distance correlation $\bar{\mathcal{R}}(\Phi;H|p,T) = 0.5 \cdot \left( \mathcal{R}(\Phi;H|p) + \mathcal{R}(\Phi;H|T) \right)$ for any permutation of $p$, $T$ and $H$ is calculated to yield an indicator for the level of correlation. Given the data set in \cref{fig:envCalib} the partial distance correlation factors are as follows: \begin{itemize} \centering \item[$\mathcal{R}(\Phi;H|T,p)$] : 0.986 \item[$\mathcal{R}(\Phi;T|H,p)$] : 0.443 \item[$\mathcal{R}(\Phi;p|H,T)$] : 0.187 \end{itemize} As can be seen, there is a very high level of correlation between the humidity and the phase, whereas the correlation between pressure and phase is relatively small. This emphasizes that humidity is the primary driving factor for the phase drifts of the W7-X interferometer. The fitted drift phase in orange traces the measured phase very well. The errors are within the \SI{0.6}{\radian} natural phase extraction error of the phase evaluation technique used by the W7-X system\cite{Brunner2018}. \section{Results} \label{sec:results} To show the effectiveness of the calibration derived in the previous section, the real-time correction must be demonstrated using the FPGA real-time evaluation. Unfortunately, since the W7-X interferometer as a control diagnostic had to be fully available for the operation campaign, it was not possible to test the correction during the course of the OP1.2b campaign itself. Nonetheless, the firmware was appropriately modified and the correction module simply ``disabled'' for the operation. Fortunately, the benefit of an FPGA logic design is that it is well-behaved and can be simulated.
The W7-X system has the additional benefit of storing most of its raw data, i.e. the data that is fed into the real-time evaluation logic of the FPGA, to support system developments. It was therefore possible to demonstrate the effectiveness of the compensation presented here directly using a logic simulation. Since the simulation is computationally very expensive, a compiled logic simulation based on the open-source tool GHDL by Tristan Gingold was written\cite{ghdl}. To further reduce the computational effort required, the simulation only included the logic relevant to the compensation scheme presented here, as indicated by the box in \cref{fig:DSPtop}. The logic test bench simulated the acquisition of an interferometer shot exactly as it happens during a W7-X discharge\cite{ghdl}. The primary simulation logic is indicated in blue and orange in \cref{fig:DSPtop}. As inputs to the system, raw data acquired by the FPGA during normal operation were used. No averaging was conducted. Instead, the raw diode and reference signals of a full dispersion interferometer modulation period were taken every \SI{100}{\ms}. From this data the wrapping phase (as supplied by the CORDIC core in \cref{fig:DSPtop}) was calculated. The wrapping phase was then fed into the simulation at the appropriate point. The environmental parameters were chosen at a lower rate and spaced randomly to appropriately mimic the system behavior. While this is not exactly what the FPGA would see during actual operation, it shows that the model works, since phase evaluation and drift calculation are independent of each other and do not depend on previous modulation periods. \begin{figure}[t] \centering \includepdf{fpgaTest} \caption{\label{fig:FPGAtest} Demonstration of phase drift stabilization using a GHDL simulation. The simulation input data is real data from Oct 16 \& 17, 2018. The measured drifting phase is shown in blue and the modeled drift phase in orange.
The absolute value of the compensated phase is shown in red. The left ordinate indicates the phase in radian and the right one the equivalent error to the IED in \SI{e19}{m^{-2}}. For visualization purposes the abscissa is not to scale. The gray line indicates the day break.} \end{figure} \Cref{fig:FPGAtest} shows the results of the simulation. The simulation data was taken from Oct 16 \& 17, 2018, i.e. the last operational week of the OP1.2b campaign, and is the same data shown in \cref{fig:motivation}. Note that this data was excluded from the fit in \cref{ssec:calibration}, so as not to distort the result. The plot depicts the drifting measured phase $\Phi_{\text{meas}}$ in blue with the modeled phase $\Phi_{\text{env}}$ in orange. The compensated phase $\Phi_{\text{corr}}$ is shown in red, which corresponds to $\Phi_{\text{plasma}}$ in \cref{fig:DSPtop} and \cref{eq:corrEqn}. The equivalent density error is marked on the right ordinate. As can be seen, the phase drift is reduced by an order of magnitude, yielding a compensated phase error of only \SI{0.4}{\radian} or an IED error of \SI{\approx4e18}{m^{-2}} over the course of 2 days, during which the measured phase drifted significantly. The plot shows a continuously increasing gap between the modeled phase and the measured one. However, as noted before, there is an error associated with the compensation, which is of the order of the natural phase error of the W7-X phase evaluation algorithm. There is a high probability that the gap is due to this error. Addressing these errors will be the topic of future publications. An additional issue obvious from \cref{fig:FPGAtest} is the increased statistical noise of the compensated signal. It is evident that the source is the fit model $\Phi_{\text{env}}$, which in turn is subject to the noise in the environmental parameter measurement.
\section{Discussion \& Outlook} \label{sec:discussion} We have shown that it is possible to reduce the phase drift induced into dispersion measuring interferometers by a simple measurement of air temperature, humidity and pressure using cheap hardware based on a Raspberry~Pi. The total hardware cost of implementing this compensation scheme was around \EUR{100}. While the system could not be shown to operate in-situ, logic simulations with data taken during actual operation of the system show a reduction of the phase drift by an order of magnitude. The primary difficulty is the calibration of the system, which requires a lengthy measurement of environmental parameters. However, the calibration can be improved continuously by recording the measured phase and refining the Fourier coefficients over time. The compensation method presented here is not specific to fusion, although this appears to be one of the fields where long-time-scale phase drifts are a prominent problem. Nonetheless, any interferometer conducting a similar measurement scheme could implement this, e.g. one measuring the dispersion of a material like water over long periods of time. The measurement presented here was conducted with a relatively short optical beam path of ``only'' \SI{50}{\m}. Larger systems such as ITER and DEMO will have optical beam paths of more than \SI{100}{\m}. It is foreseeable that a single measurement of the environmental parameters will not suffice to conduct the appropriate fit there. However, one can simply split \cref{eq:corrEqn} into several sub-paths of $L$. This results in a sum of polynomials, one per sub-path, each with its own set of coefficients. Nonetheless, the fitting procedure would not change. This remains to be tested on an appropriate set-up. \begin{figure}[t] \centering \includepdf{driftPhase} \caption{\label{fig:driftPhase} A drift phase measurement for W7-X shot \#20180927.14. The top left shows the modeled phase $\Phi_{\text{env}}$ with the direct calculation in blue.
The orange line was smoothed using a 30~sample box-averaging filter. The other three plots show the environmental parameters for comparison.} \end{figure} The tests conducted here also showed that the measurement of humidity can result in significant statistical noise on the density measurement. This is again indicated in \cref{fig:driftPhase} in the top left. The fluctuations are on very short time scales and are to a large extent statistical noise in the humidity measurement. A simple remedy is a 30-sample box-averaging filter, which can easily be implemented on an FPGA. Given the same hardware, such a filter already reduces the error of the direct model phase (in blue), which was used for this paper, to below \SI{2e18}{\m^{-2}} (orange). This is well below the control accuracy of the W7-X density control system and approaches the statistical noise of the IEDDI system itself\cite{Brunner2018}. It is conceivable that a significant portion of the fluctuations is caused by air turbulence affecting the sensor itself, e.g. by changing the local temperature (which the SenseHAT uses to calculate the relative humidity). To circumvent this issue, multiple measurements of the local parameters could be taken at different locations. This of course comes at a cost, but multiple sensors could be managed by a single Raspberry~Pi (or equivalent mini-PC). With a higher budget, the relatively cheap SenseHAT sensor, which is specified with a relative humidity error of \SI{4}{\%}, could be exchanged for a more accurate one. Since the time constants of interest for the model are well above \SI{1}{\s}, a significantly slower sensor can be accepted. Eventually the system developed here will be used at W7-X in the OP2 operation campaign, where discharge lengths of up to \SI{15}{\minute} are envisaged. Due to the recent changes in the ITER diagnostic layout it should also be considered whether this compensation scheme becomes mandatory for the ITER interferometers.
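The box-averaging filter discussed above is a plain moving mean, easily realised on an FPGA with an accumulator and a shift register; a small sketch with synthetic data (the noise level and drift shape are illustrative assumptions) shows the expected suppression of white sensor noise by roughly $\sqrt{30}$:

```python
import numpy as np

def box_average(x, width=30):
    """Moving mean over `width` samples ('valid' part only)."""
    return np.convolve(x, np.ones(width) / width, mode="valid")

rng = np.random.default_rng(3)
slow_drift = np.linspace(0.0, 2.0, 3000)           # slow model-phase trend
noisy = slow_drift + 0.1 * rng.normal(size=3000)   # sensor noise on top
smoothed = box_average(noisy, 30)

# Compare residual scatter around the underlying trend
print(np.std(noisy - slow_drift),
      np.std(smoothed - slow_drift[:len(smoothed)]))
```

Because the time constants of interest are well above \SI{1}{\s}, the added group delay of such a filter is irrelevant for the correction.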
\acknowledgments The authors wish to explicitly thank T.~Akiyama and M.~van~Zeeland for the very fruitful discussions on humidity induced phase drifts. \itshape This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the EURATOM research and training programme 2014-2018 and 2019-2020 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission. \section*{Sources} Much of the evaluation in this article was conducted using Python 3.7 in combination with the numpy, matplotlib and pandas libraries\cite{python,pandas,matplotlib,numpy}. The version of the FPGA firmware this paper is based on can be found at\\\mbox{\url{https://gitlab.mpcdf.mpg.de/kjbrunne/ieddi_fpgaware}}. The git commit 750d4005bdf986b39ef05757c5cf3effd3f09d51 was used for the contents of this article. This also includes the simulation code. The codes for data evaluation and plot generation can be supplied on request. \bibliographystyle{JHEP}
\section{Introduction} PCA is probably the most common tool for exploratory data analysis, dimensionality reduction and clustering, e.g.,~\cite{Jol2002}. It can either be seen as finding the best low-dimensional subspace approximating the data or as finding the subspace of highest variance. However, due to the fact that the variance is not robust, PCA can be strongly influenced by outliers. Indeed, even one outlier can change the principal components (PCs) drastically. This phenomenon motivates the development of robust PCA methods which recover the PCs of the uncontaminated data. This problem has received a lot of attention in the statistical community and has recently become a problem of high interest in machine learning. In the statistical community, two main approaches to robust PCA have been proposed. The first one is based on the robust estimation of the covariance matrix, e.g.,~\cite{HamEtAl1986},~\cite{HubRon2009}. Indeed, having found a robust covariance matrix, one can determine robust PCs by performing the eigenvalue decomposition of this matrix. However, it has been shown that robust covariance matrix estimators with desirable properties, such as positive semidefiniteness and affine equivariance, have a breakdown point\footnote{The breakdown point~\cite{HubRon2009} of a statistical estimator is, informally speaking, the fraction of points which can be arbitrarily changed while the estimator remains well defined.} upper bounded by the inverse of the dimensionality~\cite{HamEtAl1986}. The second approach is the so-called projection-pursuit~\cite{Hub1985},~\cite{LiChe1985}, where one maximizes a robust scale measure, instead of the standard deviation, over all possible directions. Although these methods have the best possible breakdown point of~$0.5$, they lead to non-convex, typically non-smooth, problems and the current state-of-the-art are greedy search algorithms~\cite{CroEtAl2007}, which show poor performance in high dimensions.
Another disadvantage is that robust PCs are computed one by one using deflation techniques~\cite{Mac2009}, which often leads to poor results for higher PCs. In the machine learning and computer vision communities, matrix factorization approaches to robust PCA were mostly considered, where one looks for a decomposition of a data matrix into a low-rank part and a sparse part, e.g.,~\cite{CanEtAl2009},~\cite{MatGia2012},~\cite{MacTro2011},~\cite{XuCara2012}. The sparse part is either assumed to be scattered uniformly~\cite{CanEtAl2009} or it is assumed to be row-wise sparse, corresponding to the model where an entire observation is corrupted and discarded. While some of these methods have strong theoretical guarantees, in practice they depend on a regularization parameter which is non-trivial to choose, as robust PCA is an unsupervised problem, and default choices, e.g.,~\cite{CanEtAl2009},~\cite{MacTro2011}, often do not perform well, as we discuss in Section~\ref{sec:exp}. Furthermore, most of these methods are slow as they have to compute the SVD of a matrix of the size of the data matrix at each iteration. As we discuss in Section~\ref{sec:rpca}, our formulation of robust PCA is based on the minimization of a robust version of the reconstruction error over the Stiefel manifold, which induces orthogonality of robust PCs. This formulation has multiple advantages. First, it has the maximal possible breakdown point of $0.5$, the interpretation of the objective is very simple, and no parameter tuning is required in the default setting. In Section~\ref{sec:trpca}, we propose a new fast TRPCA algorithm for this optimization problem. Our algorithm computes both orthogonal PCs and a robust center, hence avoiding the deflation procedure and preliminary robust centering of the data. While our motivation is similar to the one of~\cite{MatGia2012}, our optimization scheme is completely different. In particular, our formulation requires no additional parameter.
\section{Robust PCA}\label{sec:rpca} \textit{Notation.} All vectors are column vectors and $I_p \in \mathbb{R}^{p\times p}$ denotes the identity matrix. We are given data $X\in\mathbb{R}^{n\times p}$ with $n$ observations in $\mathbb{R}^p$ (rows correspond to data points). We assume that the data contains $t$ true observations $T\in\mathbb{R}^{t\times p}$ and $n-t$ outliers $O\in\mathbb{R}^{(n-t) \times p}$ such that $X=T\cup O$ and $T\cap O=\emptyset$. To be able to distinguish true data from outliers, we require the assumption standard in robust statistics, that is, $t\ge \ceil{\frac{n}{2}}$. The Stiefel manifold is denoted as $\mathcal{S}_k =\cbra{U\in\mathbb{R}^{p \times k}\;|\; U^{\top} U=I}$ (the set of orthonormal $k$-frames in $\mathbb{R}^p$). \textit{PCA.} Standard PCA~\cite{Jol2002} has two main interpretations. One can either see it as finding the $k$-dimensional subspace of maximum variance in the data or the $k$-dimensional affine subspace with minimal reconstruction error. In this paper we focus on the second interpretation. Given data $X\in\mathbb{R}^{n\times p}$, the goal is to find the offset $m\in\mathbb{R}^p$ and $k$ principal components $(u_1,\ldots,u_k)=U \in \mathcal{S}_k$, which describe $\mathcal{A}(m,U)=\cbra{z\in\mathbb{R}^p\;\big|\; z= m+\sum_{j=1}^k s_j u_j,\;s_j\in\mathbb{R}}$, the $k$-dimensional affine subspace, so that they minimize the reconstruction error \begin{equation}\label{pca} \cbra{\hat m, \hat U}=\mathop{\rm arg\,min}\limits_{m\in\mathbb{R}^p,\;U \in \mathcal{S}_k,\;z_i\in\mathcal{A}(m,U)} \;\frac{1}{n}\sum_{i=1}^n\norm{z_i - x_i}_2^2. \end{equation} It is well known that $\hat m=\frac{1}{n}\sum_{i=1}^n x_i$, and the optimal matrix $\hat U \in \mathcal{S}_k$ is generated by the top $k$ eigenvectors of the empirical covariance matrix.
As $UU^{\top}$ is an orthogonal projection for $U \in \mathcal{S}_k$, an equivalent formulation of~\eqref{pca} is given by \begin{equation}\label{pcare} \cbra{\hat m,\hat U}=\mathop{\rm arg\,min}\limits_{m\in\mathbb{R}^p,\; U\in\mathcal{S}_k}\frac{1}{n}\sum_{i=1}^n\norm{\rbra{UU^{\top}-I}\rbra{x_i-m}}_2^2. \end{equation} \textit{Robust PCA.} When the data $X$ does not contain outliers ($X=T$), we refer to the outcome of standard PCA, e.g.,~\eqref{pcare}, computed for the true data $T$ as $\{\hat m_T,\hat U_T\}$. When there are some outliers in the data $X$, i.e., $X=T\cup O$, the result $\{\hat m,\hat U\}$ of PCA can be significantly different from $\{\hat m_T,\hat U_T\}$ computed for the true data $T$. The reason is the non-robust squared $\ell_2$-norm involved in the formulation, e.g.,~\cite{HamEtAl1986},~\cite{HubRon2009}. It is well known that PCA has a breakdown point of zero, that is, a single outlier can already distort the components arbitrarily. As outliers are frequently present in applications, robust versions of PCA are crucial for data analysis with the goal of recovering the true PCA solution $\{\hat m_T,\hat U_T\}$ from the contaminated data $X$. As opposed to standard PCA, robust formulations of PCA based on the maximization of the variance (the projection-pursuit approach as extension of~\eqref{pca}), eigenvectors of the empirical covariance matrix (construction of a robust covariance matrix), or the minimization of the reconstruction error (as extension of~\eqref{pcare}) are not equivalent. Hence, there is no universal approach to robust PCA and the choice can depend on applications and assumptions on outliers. Moreover, due to the non-convexity inherited from standard PCA, they lead to NP-hard problems. 
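The closed-form solution of~\eqref{pca} serves as the ground-truth baseline for the robust methods discussed below. The following minimal NumPy sketch (ours, for illustration only; the function names are not from any released code) computes it together with the reconstruction error of~\eqref{pcare}:

```python
import numpy as np

def pca(X, k):
    """Closed-form PCA: m is the sample mean, U holds the top-k
    eigenvectors of the empirical covariance matrix."""
    m = X.mean(axis=0)
    Xc = X - m
    C = Xc.T @ Xc / X.shape[0]        # empirical covariance matrix
    w, V = np.linalg.eigh(C)          # eigenvalues in ascending order
    U = V[:, ::-1][:, :k]             # top-k eigenvectors, orthonormal columns
    return m, U

def reconstruction_error(X, m, U):
    """Mean residual ||(UU^T - I)(x_i - m)||_2^2 over all observations."""
    R = (X - m) @ (np.eye(X.shape[1]) - U @ U.T)
    return float((R ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
m, U = pca(X, k=3)
err = reconstruction_error(X, m, U)
```

Any other orthonormal $U$ can only increase the error, which is a convenient sanity check when implementing the robust variants.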
The known approaches for robust PCA either follow greedy or locally optimal optimization techniques, e.g.,~\cite{CroEtAl2007},~\cite{LiChe1985},~\cite{TorBla2001},~\cite{XuCara2013}, or compute convex relaxations, e.g.,~\cite{CanEtAl2009},~\cite{MatGia2012},~\cite{MacTro2011},~\cite{XuCara2012}. In this paper we aim at a method for robust PCA based on the minimization of a robust version of the reconstruction error and adopt the classical outlier model where entire observations (corresponding to rows in the data matrix $X$) correspond to outliers. In order to introduce the trimmed reconstruction error estimator for robust PCA, we employ the analogy with the least trimmed squares estimator~\cite{Rou1984} for robust regression. We denote by $r_i(m,U)=\norm{\rbra{UU^{\top}-I}\rbra{x_i-m}}_2^2$ the reconstruction error of observation $x_i$ for the given affine subspace parameterized by $(m,U)$. Then the trimmed reconstruction error is defined to be the average of the $t$ smallest reconstruction errors $r_i(m,U)$, \begin{equation}\label{tre} R(m,U)=\frac{1}{t}\sum_{i=1}^t r_{(i)}(m,U), \end{equation} where $r_{(1)}(m,U)\le\dots\le r_{(n)}(m,U)$ denote the reconstruction errors sorted in nondecreasing order and $t$, with $\ceil{\frac{n}{2}}\leq t\leq n$, should be a lower bound on the number of true observations in $T$. If such an estimate is not available, as is common in unsupervised learning, one can set by default $t=\ceil{\frac{n}{2}}$. With the latter choice it is straightforward to see that the corresponding PCA estimator has the maximum possible breakdown point of $0.5$, that is, up to $50\%$ of the data points can be arbitrarily corrupted. With the default choice our method has no free parameter except the rank $k$. 
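The trimmed reconstruction error~\eqref{tre} is cheap to evaluate. The short NumPy sketch below (ours, illustrative) also demonstrates on a toy example why trimming yields robustness: the two gross outliers are simply excluded from the sum.

```python
import numpy as np

def trimmed_reconstruction_error(X, m, U, t):
    """Trimmed reconstruction error (3): average of the t smallest
    residuals r_i(m, U) = ||(UU^T - I)(x_i - m)||_2^2."""
    P = np.eye(X.shape[1]) - U @ U.T            # projector on the orthogonal complement
    r = np.sum(((X - m) @ P) ** 2, axis=1)      # residual of every observation
    return float(np.sort(r)[:t].mean())

# toy data: 8 inliers exactly on the x-axis plus 2 gross outliers
T = np.outer(np.arange(1.0, 9.0), np.array([1.0, 0.0]))
O = np.array([[0.0, 50.0], [0.0, -40.0]])
X = np.vstack([T, O])
m = np.zeros(2)
U = np.array([[1.0], [0.0]])                    # the true 1-d subspace
trimmed = trimmed_reconstruction_error(X, m, U, t=8)   # 0.0: outliers are ignored
full = trimmed_reconstruction_error(X, m, U, t=10)     # 410.0: outliers dominate
```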
The minimization of the trimmed reconstruction error~\eqref{tre} leads then to a simple and intuitive formulation of robust PCA \begin{equation}\label{rpca} \cbra{m^*,U^*}=\mathop{\rm arg\,min}\limits_{{m\in\mathbb{R}^p,\;U\in\mathcal{S}_k}}R(m,U) =\mathop{\rm arg\,min}\limits_{{m\in\mathbb{R}^p,\;U\in\mathcal{S}_k}}\frac{1}{t}\sum_{i=1}^tr_{(i)}(m,U). \end{equation} Note that the estimation of the subspace $U$ and the center $m$ is done jointly. This is in contrast to~\cite{CanEtAl2009},~\cite{CroEtAl2007},~\cite{LiChe1985},~\cite{MacTro2011},~\cite{XuCara2013},~\cite{XuCara2012}, where the data has to be centered by a separate robust method which can lead to quite large errors in the estimation of the true PCA components. The same criterion~\eqref{rpca} has been proposed by~\cite{MatGia2012}, see also~\cite{XuYui1995} for a slightly different version. While both papers state that the direct minimization of~\eqref{rpca} would be desirable,~\cite{MatGia2012} solve a relaxation of~\eqref{rpca} into a convex problem while~\cite{XuYui1995} smooth the problem and employ deterministic annealing. Both approaches introduce an additional regularization parameter controlling the number of outliers. It is non-trivial to choose this parameter. \section{TRPCA: Minimizing Trimmed Reconstruction Error on the Stiefel Manifold}\label{sec:trpca} In this section, we introduce TRPCA, our algorithm for the minimization of the trimmed reconstruction error~\eqref{rpca}. We first reformulate the objective of~\eqref{rpca} as it is neither convex, nor concave, nor smooth, even if $m$ is fixed. While the resulting optimization problem is still non-convex, we propose an efficient optimization scheme on the Stiefel manifold with monotonically decreasing objective. Note that all proofs of this section can be found in the supplementary material~\cite{Suppl}. 
\subsection{Reformulation and First Properties}\label{sec:relax} The reformulation of~\eqref{rpca} is based on the following simple identity. Let $\widetilde{x}_i = x_i-m$ and $U \in \mathcal{S}_k$, then \begin{equation} r_i(m,U)=\norm{\rbra{UU^{\top}-I}\rbra{x_i-m}}_2^2 = -\norm{U^{\top} \widetilde{x}_i}^2_2 + \norm{\widetilde{x}_i}^2_2 := \widetilde r_{i}(m,U). \end{equation} The equality holds only on the Stiefel manifold. Let $\widetilde r_{(1)}(m,U)\leq \ldots \leq \widetilde{r}_{(n)}(m,U)$, then we get the alternative formulation of~\eqref{rpca}, \begin{equation}\label{rpca2} \cbra{m^*,U^*}=\mathop{\rm arg\,min}\limits_{{m\in\mathbb{R}^p,\;U\in\mathcal{S}_k}} \widetilde R(m,U) = \frac{1}{t}\sum_{i=1}^t \widetilde{r}_{(i)}(m,U). \end{equation} While~\eqref{rpca2} is still non-convex, we show in the next proposition that for fixed $m$ the function $\widetilde R(m,U)$ is concave on $\mathbb{R}^{p \times k}$. This will allow us to employ a simple optimization technique based on linearization of this concave function. \begin{proposition}\label{concavity} For fixed $m \in \mathbb{R}^p$ the function $\widetilde R(m,U): \mathbb{R}^{p \times k} \rightarrow \mathbb{R}$ defined in~\eqref{rpca2} is concave in $U$. \end{proposition} \begin{proof} We have $\widetilde r_i(m,U)=-\norm{U^{\top}\widetilde{x}_i}^2_2+\norm{\widetilde{x}_i}_2^2$. As $\norm{U^{\top} \widetilde{x}_i}^2_2$ is convex, we deduce that $\widetilde r_i(m,U)$ is concave in $U$. The sum of the $t$ smallest concave functions out of $n\geq t$ concave functions is concave, as it can be seen as the pointwise minimum of all possible $\binom{n}{t}$ sums of $t$ of the concave functions, e.g.,~\cite{BoyVan2004}. \end{proof} The iterative scheme uses a linearization of $\widetilde R(m,U)$ in $U$. For that we need to characterize the superdifferential of the concave function $\widetilde R(m,U)$. \begin{proposition}\label{pro:superdifferential} Let $m$ be fixed. 
The superdifferential $\partial \widetilde R(m,U)$ of $\widetilde R(m,U): \mathbb{R}^{p \times k} \rightarrow \mathbb{R}$ is given as \begin{equation} \partial \widetilde R(m,U) = \Big\{ -\frac{2}{t}\sum_{i \in I} \alpha_i (x_i-m)(x_i-m)^{\top} U\,\Big|\, \sum_{i \in I} \alpha_i = t,\; 0\leq \alpha_i \leq 1 \Big\}, \end{equation} where $I=\{ i \,|\, \widetilde r_i(m,U) \leq \widetilde r_{(t)}(m,U)\}$ with $\widetilde r_{(1)}(m,U)\leq \ldots \leq \widetilde r_{(n)}(m,U)$. \end{proposition} \begin{proof} We reduce it to a well known case. We can write $\widetilde R(m,U)$ as \begin{equation} \widetilde R(m,U) = \mathop{\rm min}\limits_{0\leq \alpha_i\leq 1, \; i=1,\ldots,n, \; \sum\limits_{i=1}^n \alpha_i=t} \quad\frac{1}{t}\sum_{i=1}^n \alpha_i \widetilde r_i(m,U), \end{equation} that is, a minimum of a parameterized set of concave functions. As the parameter set is compact and the dependence on the parameters is continuous (see Theorem 4.4.2 in~\cite{HirLem2001}), we have \begin{equation} \partial \widetilde R(m,U) = \mathrm{conv}\Big(\bigcup_{\alpha^j \in I(U)} \partial \big(\frac{1}{t}\sum_{i=1}^n \alpha^{j}_i \widetilde r_i(m,U)\big)\Big) \\ = \mathrm{conv}\Big(\bigcup_{\alpha^j \in I(U)} \frac{1}{t}\sum_{i=1}^n \alpha^{j}_i \partial \widetilde r_i(m,U)\Big), \end{equation} where $I(U)=\{ \alpha \,|\, \frac{1}{t}\sum_{i=1}^n \alpha_i \widetilde r_i(m,U)=\widetilde R(m,U),\,\sum_{i=1}^n \alpha_i = t,\, 0\leq \alpha_i \leq 1, \,i=1,\ldots,n\}$ and $\mathrm{conv}(S)$ denotes the convex hull of $S$. Finally, using that $\widetilde r_i(m,U)$ is differentiable with $\partial \widetilde r_i(m,U) = \{-2(x_i-m)(x_i-m)^{\top} U\}$ yields the result. \end{proof} \subsection{Minimization Algorithm} Algorithm~\ref{alg:trpca} for the minimization of~\eqref{rpca2} is based on block-coordinate descent in $m$ and $U$. For the minimization in $U$ we use that $\widetilde R(m,U)$ is concave for fixed $m$. 
Let $G \in \partial \widetilde R(m,U^k)$, then by definition of the supergradient of a concave function, \begin{equation} \label{eq:supergrad} \widetilde R\rbra{m,U^{k+1}} \leq \widetilde R\rbra{m,U^k} + \inner{ G, U^{k+1}-U^k}. \end{equation} The minimization of the linear upper bound on the Stiefel manifold can be done in closed form, see Lemma~\ref{le:polar} below. For that we use a modified version of a result of~\cite{JouEtAll2010}. Before giving the proof, we introduce the polar decomposition of a matrix $G \in\mathbb{R}^{p\times k}$ which is defined to be $G=QP$, where $Q\in\mathcal{S}$ is an orthonormal matrix of size $p\times k$ and $P$ is a symmetric positive semidefinite matrix of size $k\times k$. We denote the factor $Q$ of $G$ by $\mathop{Polar}(G)$. The polar can be computed in ${\cal O}(p k^2)$ for $p\geq k$~\cite{JouEtAll2010} as $\mathop{Polar}(G)=UV^{\top}$ (see Theorem 7.3.2. in~\cite{MatrAnal}) using the SVD of $G$, $G=U \Sigma V^{\top}$. However, faster methods have been proposed, see~\cite{HigSch1990}, which do not even require the computation of the SVD. \begin{lemma}\label{le:polar}Let $G\in\mathbb{R}^{p\times k}$, with $k\le p$, and denote by $\sigma_i(G)$, $i=1,\dots,k$, the singular values of $G$. Then $\mathrm{min}_{U\in\mathcal{S}_k}\inner{G,U}=-\sum_{i=1}^k\sigma_i(G)$, with minimizer $U^*=-\mathop{Polar}(G)$. If $G$ is of full rank, then $\mathop{Polar}(G)=G(G^{\top} G)^{-1/2}$. \end{lemma} \begin{proof} Let $G=U\Sigma V^{\top}$ be the SVD of $G$, that is $U \in O(p)$, $V \in O(k)$, where $O(m)$ denotes the set of orthogonal matrices in $\mathbb{R}^m$, \begin{equation} \mathop{\rm min}\limits_{O \in \mathcal{S}_k} \inner{G,O} = \mathop{\rm min}\limits_{O \in \mathcal{S}_k} \inner{\Sigma,U^{\top} O V}\\ = \mathop{\rm min}\limits_{W \in \mathcal{S}_k} \sum_{i=1}^k \sigma_i(G) W_{ii} \geq - \sum_{i=1}^k \sigma_i(G). \end{equation} The lower bound is realized by $-UV^{\top} \in \mathcal{S}_k$ which is equal to $-\mathop{Polar}(G)$. 
We have, $ - \inner{U\Sigma V^{\top}, UV^{\top}}=-\mathrm{trace}(\Sigma)=-\sum_{i=1}^k \sigma_i(G).$ The final statement follows from the proof of Theorem 7.3.2. in~\cite{MatrAnal}. \end{proof} \begin{algorithm} \caption{TRPCA} \label{alg:trpca} \begin{algorithmic} \State {\bfseries Input:} $X$, $t$, $d$, $U^0\in\mathcal{S}$, and $m^0$ median of $X$, tolerance $\varepsilon$ \State {\bfseries Output:} robust center $m^k$ and robust PCs $U^k$ \Repeat \; for $k = 1,2,\dots$ \State Center data $\widetilde{X}^{k}=\cbra{\widetilde{x}_i^{k}=x_i-m^{k},\;i=1,\dots,n}$ \State Compute supergradient $\mathcal{G}(U^k)$ of $\widetilde R(m^k,U^k)$ for fixed $m^k$ \State Update $U^{k+1}=-\mathop{Polar}\rbra{\mathcal{G}(U^k)}$ \State Update $m^{k+1}=\frac{1}{t}\sum_{i\in\mathcal{I}^{k'}}x_i$, where $\mathcal{I}^{k'}$ are the indices of the $t$ smallest \\ \hspace{0.4cm} $\widetilde r_i(m^k,U^{k+1})$, $i=1,\ldots,n$ \Until{relative descent below $\varepsilon$} \end{algorithmic} \end{algorithm} Given that $U$ is fixed, the center $m$ can be updated simply as the mean of the points realizing the current objective of~\eqref{rpca2}, that is, the points realizing the $t$ smallest reconstruction errors. Finally, although the objective of~\eqref{rpca2} is neither convex nor concave in $m$, we prove monotonic descent of Algorithm~\ref{alg:trpca}. \begin{theorem}\label{thm:descent} The following holds for Algorithm~\ref{alg:trpca}. At every iteration, either $\widetilde R(m^{k+1},U^{k+1})<\widetilde R(m^k,U^k)$ or the algorithm terminates. \end{theorem} \begin{proof} Let $m^k$ be fixed and $G(U^k) \in \partial \widetilde R(m^k,U^k)$, then from \eqref{eq:supergrad} we have \begin{equation} \widetilde R(m^k,U) \leq \widetilde R(m^k,U^k) - \inner{G(U^k),U^k} + \inner{G(U^k),U}. \end{equation} The minimizer $U^{k+1}=\mathop{\rm arg\,min}\limits_{U \in \mathcal{S}_k} \inner{G(U^k),U}$ over the Stiefel manifold can be computed via Lemma~\ref{le:polar} as $U^{k+1}=-\mathop{Polar}(G(U^k))$. 
Thus we get immediately, \begin{equation*} \widetilde R(m^k,U^{k+1}) \leq \widetilde R(m^k,U^k). \end{equation*} After the update of $U^{k+1}$ we compute $\mathcal{I}^{k'}$ which are the indices of the $t$ smallest $\widetilde r_i(m^k,U^{k+1})$, $i=1,\ldots,n$. If there are ties, then they are broken randomly. For fixed $U^{k+1}$ and fixed $\mathcal{I}^{k'}$ the minimizer of the objective \begin{equation} \frac{1}{t}\sum_{i \in \mathcal{I}^{k'}} \rbra{-\norm{(U^{k+1})^{\top} (x_i - m)}^2_2 + \norm{x_i-m}^2_2}, \end{equation} is given by $m^{k+1}=\frac{1}{t}\sum\limits_{i \in \mathcal{I}^{k'}} x_i$, which yields $\frac{1}{t}\sum\limits_{i \in \mathcal{I}^{k'}} \widetilde r_i(m^{k+1},U^{k+1}) \leq \widetilde R(m^k,U^{k+1})$. After the computation of $m^{k+1}$, $\mathcal{I}^{k'}$ need no longer correspond to the $t$ smallest reconstruction errors $\widetilde r_i(m^{k+1},U^{k+1})$. However, taking the $t$ smallest ones only further reduces the objective, $\widetilde R(m^{k+1},U^{k+1}) \leq \frac{1}{t}\sum_{i \in \mathcal{I}^{k'}} \widetilde r_i(m^{k+1},U^{k+1})$. This yields finally the result, $ \widetilde R(m^{k+1},U^{k+1}) \leq \widetilde R(m^k,U^k)$. \end{proof} The objective is non-smooth and neither convex nor concave. The Stiefel manifold is a non-convex constraint set. These facts make the formulation of critical point conditions challenging. Thus, while potentially stronger convergence results like convergence to a critical point are appealing, they are currently out of reach. However, as we will see in Section~\ref{sec:exp}, Algorithm~\ref{alg:trpca} yields good empirical results, even beating state-of-the-art methods based on convex relaxations or other non-convex formulations. \subsection{Complexity and Discussion} The computational cost of each iteration of Algorithm~\ref{alg:trpca} is dominated by ${\cal O}(pk^2)$ for computing the polar and ${\cal O}(pkn)$ for a supergradient of $\widetilde R(m,U)$ and, thus, has total cost ${\cal O}(pk(k+n))$. 
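For illustration, the whole iteration can be sketched in a few lines of NumPy. This is our reimplementation under the stated conventions, not the authors' released code; a supergradient is taken as the (scaled) scatter of the $t$ best-fitting points, whose polar factor is all the update needs:

```python
import numpy as np

def trpca(X, k, t, n_iter=200, seed=0):
    """Illustrative sketch of the TRPCA iteration: block-coordinate
    descent in (m, U) for the trimmed reconstruction error, with the
    closed-form polar update on the Stiefel manifold."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    U, _ = np.linalg.qr(rng.normal(size=(p, k)))   # random start on the Stiefel manifold
    m = np.median(X, axis=0)                       # coordinate-wise median of X
    for _ in range(n_iter):
        Xc = X - m
        # residuals r~_i(m, U) = ||x~_i||^2 - ||U^T x~_i||^2, t best-fitting points
        r = np.sum(Xc ** 2, axis=1) - np.sum((Xc @ U) ** 2, axis=1)
        idx = np.argsort(r)[:t]
        # supergradient direction in U and update U <- -Polar(G), cost O(p k^2)
        G = -2.0 * Xc[idx].T @ (Xc[idx] @ U)
        Us, _, Vt = np.linalg.svd(-G, full_matrices=False)
        U = Us @ Vt                                # equals -Polar(G)
        # recompute the t smallest residuals for the new U, update the center
        r = np.sum(Xc ** 2, axis=1) - np.sum((Xc @ U) ** 2, axis=1)
        idx = np.argsort(r)[:t]
        m = X[idx].mean(axis=0)
    return m, U
```

A restart loop simply keeps the pair $(m, U)$ with the smallest trimmed error across random initializations.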
We compare this to the cost of the proximal method in~\cite{CanEtAl2009},~\cite{WriEtAl2009} for minimizing $\mathrm{min}_{X=A+E} \norm{A}_* + \lambda \norm{E}_1$. In each iteration, the dominating cost is ${\cal O}(\mathop{\rm min}\limits\{pn^2,np^2\})$ for the SVD of a matrix of size $p \times n$. If the natural condition $k \ll \mathop{\rm min}\limits\{p,n\}$ holds, we observe that the computational cost of TRPCA is significantly better. Thus even though we do $10$ random restarts with different starting vectors, our TRPCA is still faster than all competing methods, which can also be seen from the runtimes in Table~\ref{tab:runtime}. In~\cite{MatGia2012}, a relaxed version of the trimmed reconstruction error is minimized: \begin{equation} \mathop{\rm min}\limits_{m \in \mathbb{R}^p,\,U \in \mathcal{S}_k,\,S \in \mathbb{R}^{n\times k},\,O \in \mathbb{R}^{n\times p}} \norm{X - \mathbf{1}_n m^{\top} - SU^{\top} - O}_F^2 + \lambda \norm{O}_{2,1}, \end{equation} where $\norm{O}_{2,1}$ is added in order to enforce row-wise sparsity of $O$. The optimization is done via an alternating scheme. However, the disadvantage of this formulation is that it is difficult to adjust the number of outliers via the choice of $\lambda$ and thus requires multiple runs of the algorithm to find a suitable range, whereas in our formulation the number of outliers $n-t$ can be directly controlled by the user or $t$ can be set to the default value $\ceil{\frac{n}{2}}$. \section{Experiments}\label{sec:exp} We compare our TRPCA algorithm (the code is available for download at~\cite{Suppl}) with the following robust PCA methods: ORPCA~\cite{MatGia2012}, LLD\footnote{Note, that the LLD algorithm~\cite{MacTro2011} and the OPRPCA algorithm~\cite{XuCara2012} are equivalent.}~\cite{MacTro2011}, HRPCA~\cite{XuCara2013}, standard PCA, and true PCA on the true data $T$ (ground truth). 
For background subtraction, we also compare our algorithm with PCP~\cite{CanEtAl2009} and RPCA~\cite{TorBla2001}, although the latter two algorithms are developed for a different outlier model. To get the best performance of LLD and ORPCA, we run both algorithms with different values of the regularization parameters to set the number of zero rows (observations) in the outlier matrix equal to $\tilde t$ (which increases runtime significantly). The HRPCA algorithm has the same parameter $t$ as our method. We append $(0.5)$ to an algorithm name if the default value $\tilde t=\ceil{\frac{n}{2}}$ is used; otherwise, we use the ground truth information $\tilde t=|T|$. As performance measure we use the reconstruction error relative to the reconstruction error of the true data (which is achieved by PCA on the true data only): \begin{equation} \begin{aligned} \mathrm{tre}(U,m)=\frac{1}{t}\sum\nolimits_{\cbra{i\;|\;x_i\in T}}\rbra{r_i(m,U) - r_i(\hat m_T,\hat U_T)}, \end{aligned} \end{equation} where $\{\hat m_T, \hat U_T\}$ is the true PCA of $T$ and it holds that $\mathrm{tre}(U,m)\ge0$. The smaller $\mathrm{tre}(U,m)$, i.e., the closer the estimates $\cbra{m,U}$ to $\{\hat m_T, \hat U_T\}$, the better. We choose datasets which are computationally feasible for all methods. \begin{figure} \centering \begin{tabular}{ccc} \includegraphics[width=.33\textwidth]{DataWithLeg.pdf} & \includegraphics[width=.33\textwidth]{DataSecond.pdf} & \includegraphics[width=.33\textwidth]{DataFourth.pdf} \\ \includegraphics[width=.33\textwidth]{DataThird.pdf} & \includegraphics[width=.33\textwidth]{xxxplotUSPS10d1.pdf} & \includegraphics[width=.33\textwidth]{xxxplotUSPS10d10.pdf} \end{tabular} \caption{ First row left to right: 1) Data1, $p=100$, $\sigma_o=2$; 2) Data1, $p=20$, $\sigma_o=2$; 3) Data2, $p=100$, $\sigma_o=0.35$; Second row left to right: 1) Data2, $p=20$, $\sigma_o=0.35$; 2) USPS10, $k=1$; 3) USPS10, $k=10$. 
} \label{fig:toy} \end{figure} \subsection{Synthetic Data Sets}\label{sec:synthetic} We sample uniformly at random a subspace of dimension $k$ spanned by $U \in \mathcal{S}_k$ and generate the true data $T \in \mathbb{R}^{t \times p}$ as $T=AU^{\top}+E$ where the entries of $A \in \mathbb{R}^{t \times k}$ are sampled uniformly on $[-1,1]$ and the noise $E \in \mathbb{R}^{t \times p}$ has Gaussian entries distributed as $\mathcal{N}(0,\sigma_T)$. We consider two types of outliers: (Data1) the outliers $O \in \mathbb{R}^{o \times p}$ are uniform samples from $[0,\sigma_o]^p$; (Data2) the outliers are samples from a random half-space: let $w$ be sampled uniformly at random from the unit sphere and let $x \sim \mathcal{N}(0,\sigma_o\mathds{1})$, then an outlier $o_i \in \mathbb{R}^p$ is generated as $o_i= x - \mathop{\rm max}\limits\{\inner{x,w},0\}w$. For Data2, we also downscale the true data by a factor of $0.5$. We always set $n=t+o=200$, $k=5$, and $\sigma_T=0.05$ and construct data sets for different fractions of outliers $\lambda=\frac{o}{t+o}\in \cbra{0.1,\,0.2,\,0.3,\,0.4,\,0.45}$. For every $\lambda$ we sample 5 data sets and report mean and standard deviation of the relative true reconstruction error $\mathrm{tre}(U,m)$. \subsection{Partially Synthetic Data Set} We use USPS, a dataset of $16 \times 16$ images of handwritten digits. We use digits 1 as true observations $T$ and digits 0 as outliers $O$ and mix them in different proportions. We refer to this data set as USPS10 and the results can be found in Fig.~\ref{fig:toy}. Another similar experiment is on the MNIST data set of $28 \times 28$ images of handwritten digits. We use digits $1$ (or $7$) as true observations $T$ and all other digits $0,2,3,\dots,9$ as outliers $O$ (each taken in equal proportion). We mix true data and outliers in different proportions and the results can be found in Fig.~\ref{fig:mnist} (or Fig.~\ref{fig:mnist7}), where we excluded LLD due to its high computational cost, see Tab.~\ref{tab:runtime}. 
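The synthetic constructions of Section~\ref{sec:synthetic} are easy to reproduce. The sketch below (our reading of the Data2 model, with illustrative sizes) generates true data on a random subspace and folds Gaussian samples into a random half-space to produce the outliers:

```python
import numpy as np

rng = np.random.default_rng(0)
t_true, n_out, p, k = 160, 40, 20, 5
sigma_T, sigma_o = 0.05, 0.35

# true data: random k-dimensional subspace plus Gaussian noise
U, _ = np.linalg.qr(rng.normal(size=(p, k)))         # orthonormal basis of the subspace
A = rng.uniform(-1.0, 1.0, size=(t_true, k))
T = A @ U.T + sigma_T * rng.normal(size=(t_true, p))

# Data2 outliers: Gaussian samples folded into a random half-space
w = rng.normal(size=p)
w /= np.linalg.norm(w)                               # random unit normal of the half-space
G = sigma_o * rng.normal(size=(n_out, p))
O = G - np.maximum(G @ w, 0.0)[:, None] * w          # o_i = x - max(<x, w>, 0) w
X = np.vstack([0.5 * T, O])                          # true data downscaled by 0.5
```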
We notice that the TRPCA algorithm with the parameter value $\tilde t=t$ (ground truth information) performs almost perfectly and outperforms all other methods, while the default version of TRPCA with parameter $\tilde t=\ceil{\frac{n}{2}}$ shows slightly worse performance. The fact that TRPCA simultaneously estimates the robust center $m$ positively influences the overall performance of the algorithm, see, e.g., the experiments for background subtraction and modeling in Section~\ref{sec:bms} and additional ones in the supplementary material (Fig.~\ref{fig:wsorig}--\ref{fig:rmores}). \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=.48\columnwidth]{MNIST1vsALLk1PlotRecErr22014-7-29--12-5-42.eps} & \includegraphics[width=.48\columnwidth]{MNIST1vsALLk5PlotRecErr22014-7-29--12-4-17.eps} \end{tabular} \caption{ Experiment on the MNIST data set with digits 1 as true observations $T$ and all other digits $0,2,3,\dots,9$ as outliers. Number of recovered PCs is $k=1$ (left) and $k=5$ (right). } \label{fig:mnist} \end{figure} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=.48\columnwidth]{MNIST7vsALLk1PlotRecErr22014-8-7--17-57-33.eps} & \includegraphics[width=.48\columnwidth]{MNIST7vsALLk5PlotRecErr22014-8-7--12-22-53.eps} \end{tabular} \caption{ Experiment on the MNIST data set with digits 7 as true observations $T$ and all other digits $0,2,3,\dots,9$ as outliers. Number of recovered PCs is $k=1$ (left) and $k=5$ (right). 
} \label{fig:mnist7} \end{figure} \begin{figure} \centering \begin{tabular}{ccccc} \includegraphics[width=.194\columnwidth]{rePCAt2} & \includegraphics[width=.194\columnwidth]{reTRPCA2} & \includegraphics[width=.194\columnwidth]{reORPCA2} & \includegraphics[width=.194\columnwidth]{reLLD2} & \includegraphics[width=.194\columnwidth]{rePCP12} \\ \includegraphics[width=.194\columnwidth]{rePCA2} & \includegraphics[width=.194\columnwidth]{reTRPCA052} & \includegraphics[width=.194\columnwidth]{reHRPCA2} & \includegraphics[width=.194\columnwidth]{reRPCA2} & \includegraphics[width=.194\columnwidth]{rePCP92} \end{tabular} \caption{Reconstruction errors, i.e., $||(x_i-m^*)-U^*\rbra{U^*}^{\top}(x_i-m^*)||_2^2$, on the y-axis, for each frame on the x-axis for $k=10$. Note that the person is visible in the scene from frame 481 until the end. We consider the background images as true data and, thus, the reconstruction error should be high after frame 481 (when the person enters).} \label{fig:imagesresWS} \end{figure} \subsection{Background Modeling and Subtraction} \label{sec:bms} In~\cite{TorBla2001} and~\cite{CanEtAl2009} robust PCA has been proposed as a method for background modeling and subtraction. While we are not claiming that robust PCA is the best method to do this, it is an interesting test for robust PCA. The data $X$ are the image frames of a video sequence. The idea is that slight changes in the background lead to a low-rank variation of the data, whereas the foreground changes cannot be modeled by this and can be considered as outliers. Thus with the estimates $m^*$ and $U^*$ of the robust PCA methods, the solution of the background subtraction and modeling problem is given as \begin{equation}\label{foreground} x_i^{b}=m^*+U^*(U^*)^{\top}(x_i-m^*), \end{equation} where $x_i^b$ is the background of frame $i$ and its foreground is simply $x_i^f=x_i-x_i^b$. 
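Given estimates $(m^*, U^*)$, Eq.~\eqref{foreground} is a single projection per frame. A minimal sketch (ours; the toy ``video'' and all names are illustrative, and $(m, U)$ are supplied by construction instead of by a robust PCA run):

```python
import numpy as np

def split_frames(X, m, U):
    """Background/foreground split: project each frame (row of X) onto the
    affine subspace (m, U), i.e. x_i^b = m + U U^T (x_i - m), x_i^f = x_i - x_i^b."""
    B = m + (X - m) @ U @ U.T
    return B, X - B

# toy "video": a static scene with low-rank variation plus a late intruder
rng = np.random.default_rng(0)
n, p = 10, 50
scene = rng.normal(size=p)                           # static background
d = rng.normal(size=p); d /= np.linalg.norm(d)       # direction of background variation
X = scene + np.outer(rng.normal(size=n), d)
X[-1, :5] += 10.0                                    # "person" entering in the last frame
B, F = split_frames(X, m=scene, U=d[:, None])
```

Frames without the intruder reconstruct exactly, so their foreground vanishes, while the last frame leaves a large foreground residual.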
We experimentally compare the performance of all robust PCA methods on the water surface data set~\cite{WaterSurf}, which has moving water in its background. We choose this dataset of $n=633$ frames each of size $p=128\times 160=20480$ as it is computationally feasible for all the methods. In Fig.~\ref{fig:bfwsi560}, we show the background subtraction results of several robust PCA algorithms. We optimized the value $\lambda$ for PCP of~\cite{CanEtAl2009},~\cite{WriEtAl2009} by hand to obtain a good decomposition, see the bottom row of Fig.~\ref{fig:bfwsi560}; the comparison within the bottom row also shows how crucial the choice of $\lambda$ is for this method. Note that both TRPCA with ground truth information and the default version TRPCA(0.5) achieve almost perfect reconstruction errors with respect to the true data, cf., Fig.~\ref{fig:imagesresWS}. Hence, TRPCA is the only method which recovers the foreground and background without mistakes. We refer to the supplementary material for more explanations regarding this experiment as well as results for another background subtraction data set. The runtimes of all methods for the water surface data set are presented in Table~\ref{tab:runtime}, which shows that TRPCA is the fastest of all methods. \begin{table} \footnotesize{ \centering \caption{ Runtimes for the water surface data set for the algorithms described in Section~\ref{sec:exp}. For TRPCA/TRPCA(0.5) we report the average time of one initialization (in practice, $5-10$ random restarts are sufficient). For PCP we report the runtime for the employed parameter $\lambda=0.001$. For all other methods, it is the time of one full run of the algorithm including the search for regularization parameters. 
} \begin{center} \begin{tabular}{| l | r | r | r | r | r | r | r | r | r |} \hline & trpca & trpca(.5) & orpca & orpca(.5) & hrpca & hrpca(.5) & lld & rpca & pcp($\lambda=0.001$) \\ \hline $k=1$ & $7$ & ${13}$ & $3659$ & $3450$ & $45990$ & $48603$ & $-$ & $1078$ & $-$ \\ \hline $k=3$ & ${99}$ & ${61}$ & $8151$ & $13852$ & $50491$ & $56090$ & $-$ & $730$ & $-$ \\ \hline $k=5$ & ${64}$ & ${78}$ & $2797$ & $3726$ & $72009$ & $77344$ & $232667$ & $3615$ & $875$ \\ \hline $k=7$ & ${114}$ & ${62}$ & $4138$ & $3153$ & $67174$ & $90931$ & $-$ & $4230$ & $-$ \\ \hline $k=9$ & ${119}$ & ${92}$ & $6371$ & $8508$ & $96954$ & $106782$ & $-$ & $4113$ & $-$ \\ \hline \end{tabular} \label{tab:runtime} \end{center} } \end{table} \section{Conclusion} We have presented a new method for robust PCA based on the trimmed reconstruction error. Our efficient algorithm, using fast descent on the Stiefel manifold, works in the default setting ($t=\ceil{\frac{n}{2}}$) without any free parameters and is significantly faster than other competing methods. In all experiments TRPCA performs better than or at least similarly to other robust PCA methods; in particular, TRPCA solves challenging background subtraction tasks.\\[.2cm] \noindent \textbf{Acknowledgements.} M.H. has been partially supported by the ERC Starting Grant NOLEPRO and M.H. and S.S. have been partially supported by the DFG Priority Program 1324, ``Extraction of quantifiable information from complex systems''. 
\begin{figure} \centering \begin{tabular}{cccc} \includegraphics[width=.25\columnwidth]{resWSbi560PCA} & \includegraphics[width=.25\columnwidth]{resWSfi560PCA} & \includegraphics[width=.25\columnwidth]{resWSbi560PCAt} & \includegraphics[width=.25\columnwidth]{resWSfi560PCAt} \\ \hline \includegraphics[width=.25\columnwidth]{resWSbi560TRPCA} & \includegraphics[width=.25\columnwidth]{resWSfi560TRPCA} & \includegraphics[width=.25\columnwidth]{resWSbi560TRPCA05} & \includegraphics[width=.25\columnwidth]{resWSfi560TRPCA05} \\ \includegraphics[width=.25\columnwidth]{resWSbi560ORPCA}& \includegraphics[width=.25\columnwidth]{resWSfi560ORPCA} & \includegraphics[width=.25\columnwidth]{resWSbi560HRPCA} & \includegraphics[width=.25\columnwidth]{resWSfi560HRPCA} \\ \includegraphics[width=.25\columnwidth]{resWSbi560LLD} & \includegraphics[width=.25\columnwidth]{resWSfi560LLD} & \includegraphics[width=.25\columnwidth]{resWSbi560RPCA} & \includegraphics[width=.25\columnwidth]{resWSfi560RPCA} \\ \includegraphics[width=.25\columnwidth]{resWScand0009b} & \includegraphics[width=.25\columnwidth]{resWScand0009f} & \includegraphics[width=.25\columnwidth]{resWScand0001b} & \includegraphics[width=.25\columnwidth]{resWScand0001f} \end{tabular} \caption{Backgrounds and foregrounds for frame $i=560$ of the water surface data set. The last row corresponds to the PCP algorithm with values of $\lambda$ set by hand.} \label{fig:bfwsi560} \end{figure} \clearpage \bibliographystyle{splncs03}
\section{Introduction} Layered heterostructures provide a versatile platform for the construction of nanophotonic devices, enabling extensive functionality of light propagating through nanoscale stratified systems \cite{Xia2014}. The tremendous progress reported using layered systems is significantly fueled by polaritons -- strong light-matter interaction featuring strongly localized, immense electric field strengths -- advancing a variety of nanophotonic fields such as optoelectronics \cite{He2013,Ross2014}, photovoltaics \cite{Fortin1982,Yu2013}, polaritonic optics \cite{Folland2018,Chaudhary2019,Passler2019a}, or sensing \cite{Rodrigo2015}. In particular, layered systems that are composed of strongly optically anisotropic polar crystals currently receive increasing interest due to their capability of supporting infrared polariton modes of high propagation directionality, so-called hyperbolic phonon polaritons (hPhP) \cite{Jacob2014,Li2015,Dai2019,Passler2022,He2022}. While in isotropic polar crystals, phonon polaritons arise in the frequency region of negative permittivity between the transverse optical (TO) and longitudinal optical (LO) phonon modes, hPhPs in anisotropic crystals arise at frequencies where the permittivity is only negative along one (type I hyperbolic) or two (type II hyperbolic) principal crystal axes. Thin films of materials with out-of-plane anisotropy, such as hexagonal boron nitride (hBN), support volume-confined hPhPs, which have proven to enable subdiffraction imaging and hyperlensing \cite{Dai2015,Ferrari2015}. Materials with strong in-plane anisotropy, such as molybdenum trioxide (\ensuremath{\text{MoO}_{\text{3}}}), on the other hand, support in-plane hyperbolic phonon polaritons (ihPhPs) featuring directional propagation in the surface plane. 
Only recently, the potential of these materials has captured attention, in particular demonstrated by the seminal work of several groups on twisted \ensuremath{\text{MoO}_{\text{3}}}~layers \cite{Hu2020,Duan2020,Zheng2020,Chen2020,HerzigSheinfux2020}, where the twist angle enables control over the ihPhP wavefront geometries, propagation characteristics, and topology. Advances in the field of polaritonic nanophotonics are often only feasible with the aid of a robust theoretical framework for the simulation of the optical response of the material system in question. For layered heterostructures, a $4 \times 4$ transfer matrix method (TMM) \cite{Passler2017a} has proven useful, providing the reflection and transmission coefficients as well as the local electric fields of a multilayer system consisting of any number of arbitrarily anisotropic materials. Furthermore, the analysis of the Poynting vector \SSS~allows for a layer-resolved calculation of the absorption and transmittance in the system even for fully anisotropic constituent materials \cite{Passler2020b}. However, polaritons typically are evanescent modes, that is, they feature in-plane momenta $k$ larger than the momentum of light in vacuum $k_0$, and thus cannot be accessed in a free-space excitation scheme. This condition for the excitation has to be accounted for in both the experimental as well as the theoretical observation of polaritons, and is, for instance, met in prism-coupling techniques such as the Otto geometry \cite{Otto1968,Passler2017,Folland2019} or the Kretschmann-Raether configuration \cite{Kretschmann1971}. While in particular the Otto geometry allows for a systematic, thorough study of phonon polaritons and has proven to be quite versatile \cite{Neuner2009,Passler2018,Ratchford2019,Passler2019a}, the intrinsic properties of the polariton modes in the sample are inevitably modified by the presence of the coupling prism. 
Other optical excitation techniques where large momenta are achieved by scattering off a nanoscale object, such as scattering-type scanning near-field optical microscopy (s-SNOM) \cite{Huber2005,Novotny2006}, on the other hand, cannot fully be described theoretically using a $4 \times 4$ transfer matrix method, due to the deviation from a stratified system by the scattering source. A common way to circumvent the specifics of the excitation method in the simulations is to calculate the optical response solely of the sample, with an excitation beam featuring artificially large in-plane momenta $k/k_0 > 1$. This evanescent wave excitation does not lead to physical results regarding the far-field reflectance, transmittance, or absorption, but has nevertheless provided insight into the supported polariton mode dispersions \cite{Dai2014,Gubbin2019,Fali2019,Folland2018,AlvarezPerez2020,Passler2018}. In particular, the imaginary part of the p-polarized reflection coefficient $\Im{\ensuremath{r_{pp}}}$ peaks at frequencies where the system supports a polariton mode, thus providing a means to map out the intrinsic polariton dispersion. However, in layered heterostructures comprising several materials that support polaritons, the method of using $\Im{\ensuremath{r_{pp}}}$ only reveals the resonances of the overall system, while the relative distribution of the polariton resonance intensity across the layers remains inaccessible. For far-field excitations with $k/k_0 < 1$, a layer-resolved calculation framework for anisotropic multilayers has already been published \cite{Passler2020b}, but an equivalent method for evanescent excitation with $k/k_0 > 1$ has, to the best of our knowledge, not been discussed in the literature so far. Here, we present an empirical approach for the layer-resolved calculation of the relative intensity of polariton resonances in arbitrarily anisotropic layered heterostructures.
The method of using $\Im{\ensuremath{r_{pp}}}$ for the determination of polariton dispersions, even though lacking a thorough theoretical justification so far, has been successfully and continuously used for several years. We build on this empirical knowledge, expanding the established method by a layer-resolved calculation based on the Poynting vector obtained from a $4 \times 4$ TMM that is implemented in an open-access computer program \cite{Passler2022a}. We demonstrate our method by calculating the layer-resolved polariton resonances in two state-of-the-art polaritonic systems, covering strongly coupled SPhPs in an aluminum nitride (AlN) / silicon carbide (SiC) heterostructure, and tunable ihPhPs in twisted \ensuremath{\text{MoO}_{\text{3}}}~layers on a quartz (\ensuremath{\text{SiO}_{\text{2}}}) substrate. Fulfilling an empirical conservation law, our method provides insight into the relative intensity of the polariton resonances in the different layers of the sample system. \section{Method} The TMM we employ in this work has been described in detail previously \cite{Passler2017a}. For the calculation of the layer-resolved polaritonic response of the sample system, we further use an extended formalism based on the TMM \cite{Passler2020b}, providing the time-averaged Poynting vector $\vec{\SSS}^p_i(z)$ for p-polarized incident light, in layer $i$, at position $z$: \begin{align} \vec{\SSS}^p_i(z) = \frac{1}{2} \text{Re} \left[ \vec{\E}^p_i(z) \times \vec{\HH}_i^{p*}(z) \right], \label{eq:Svec} \end{align} where $\vec{\E}_i(z)$ and $\vec{\HH}_i(z)$ are given elsewhere \cite{Passler2020b}. Note that this formalism is originally designed for propagating incident light with $k/k_0 < 1$. Further on (Eq. 5), we extend the method to evanescent excitation with $k/k_0 > 1$. 
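For concreteness, Eq.~\ref{eq:Svec} amounts to a simple pointwise operation on the complex field amplitudes. The following hypothetical Python sketch (independent of the open-access program of Ref.~\cite{Passler2022a}; all names are ours) illustrates it:

```python
# Hypothetical sketch of Eq. (1): time-averaged Poynting vector
# S = (1/2) Re[E x H*] for complex field amplitudes at one point z.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def poynting(E, H):
    """E, H: complex 3-vectors; returns the real time-averaged flux."""
    Hc = tuple(h.conjugate() for h in H)
    return tuple(0.5 * c.real for c in cross(E, Hc))

# Example: unit-amplitude plane wave with E along x and H along y;
# the energy flux points along +z with magnitude 1/2.
S = poynting((1 + 0j, 0j, 0j), (0j, 1 + 0j, 0j))
```

In the actual formalism, the fields $\vec{\E}_i(z)$ and $\vec{\HH}_i(z)$ entering this product are supplied by the transfer matrix method of Ref.~\cite{Passler2020b}.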
The coordinate system is chosen such that the $z$-axis points along the surface normal, the exciting light beam is incident in the $x$-$z$-plane, and the origin of the coordinate system lies in the interface plane between the semi-infinite incident medium ($i=0$) and the first layer ($i=1$). The multilayer system comprises $N$ layers of thicknesses $d_i$, and layer $i=N+1$ is the semi-infinite substrate. Because polaritons are only excitable by p-polarized light \cite{Passler2019a}, we omit the specification of the incoming polarization in the following, referring always to p-polarization. In order to calculate the transmittance up to layer $i$ and position $z$, the $z$-component of the Poynting vector at the corresponding position is normalized by the $z$-component of the Poynting vector of the incoming excitation beam $\SSS_{\text{inc},z}$: \begin{align} \T_i(z) &= \frac{\SSS_{i,z}(z)}{\SSS_{\text{inc},z}}, \label{eq:Ti} \end{align} and the transmittance $\T$ into the substrate $i=N+1$ at the interface with layer $N$ is given by: \begin{align} \T = \frac{\SSS_{N+1,z}(D)}{\SSS_{\text{inc},z}}, \end{align} where $D=\sum_{i=1}^{N} d_i$ is the thickness of the multilayer system. Using Eq. \ref{eq:Ti}, the layer-resolved absorption can be calculated as follows: \begin{align} \begin{split} \A_i &= \T_i(d_{1..i-1}) - \T_i(d_{1..i-1} + d_i), \end{split} \label{eq:Ai} \end{align} where $d_{1..i-1} = \sum_{j=1}^{i-1} d_j$ is the thickness of all layers through which the incident light has propagated before reaching the layer $i$. For a propagating excitation beam with $k/k_0 < 1$, $\SSS_{\text{inc},z}$ is real-valued, as specified in Eq. 22 of reference \cite{Passler2020b}, and $\A$ and $\T$ correctly describe the absorption and transmission, respectively. For an evanescent incident beam with $k/k_0 > 1$, however, $\SSS_{\text{inc},z}$ is purely imaginary, since an evanescent beam features no net energy flow in the $z$-direction.
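For a propagating excitation, Eqs.~\ref{eq:Ti}--\ref{eq:Ai} amount to bookkeeping of the normalized flux entering and leaving each layer. A hypothetical numerical sketch (the flux values below are invented placeholders; in practice they are supplied by the TMM):

```python
# Hypothetical sketch of Eqs. (2)-(4) for k/k0 < 1. The Poynting
# z-components below are invented placeholder numbers standing in
# for the output of the transfer matrix method.
Sz_inc = 1.0                      # incident flux (real-valued here)
Sz_top = [0.90, 0.60, 0.20]       # Sz at the entry of layers 1..N
Sz_bot = [0.60, 0.20, 0.05]       # Sz at the exit of layers 1..N

T_top = [s / Sz_inc for s in Sz_top]          # Eq. (2) at layer entry
T_bot = [s / Sz_inc for s in Sz_bot]          # Eq. (2) at layer exit
A = [t - b for t, b in zip(T_top, T_bot)]     # Eq. (4): per-layer loss
T = Sz_bot[-1] / Sz_inc                       # Eq. (3): substrate
R = 1.0 - sum(A) - T                          # energy balance
```

This bookkeeping presupposes a real-valued $\SSS_{\text{inc},z}$; for an evanescent incident beam it is purely imaginary, as just noted.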
As a consequence, for evanescent excitation, this would lead to a zero denominator in Eq. \ref{eq:Ti}. Here, we therefore normalize to the imaginary part of $\SSS_{\text{inc},z}$ instead, as can be calculated from Eq. 22 of reference \cite{Passler2020b} with the following modification: \begin{align} \vec{\SSS}_{\text{inc}} &= \frac{1}{2} \text{Im} \left[ \vec{\E}_{\Rightarrow,0}(0) \times \left( \ensuremath{\vec{k}}_{01} \times \vec{\E}_{\Rightarrow,0}(0)\right)^* \right], \label{eq:Sinc} \end{align} where $\Rightarrow$ denotes the forward propagating (incident) mode, and $\vec{k}_{01}$ is the wavevector of the p-polarized incident beam. \begin{figure*}[!ht] \includegraphics[width = \textwidth]{fig1.pdf} \captionof{figure}{\textbf{Strong coupling between an AlN ENZ mode and a SiC SPhP.} \textbf{a} Sketch of the AlN/SiC structure, illustrating the strong coupling of a SPhP of a bare SiC substrate and an ENZ mode of a freestanding AlN film. \textbf{b} Analytical dispersion of the uncoupled SiC SPhP (blue line) and AlN ENZ mode (green line), as well as the resulting strongly coupled modes in the heterostructure (red lines) featuring an avoided crossing. \textbf{c} Dispersion of the strongly coupled modes obtained by calculating the total resonance intensity $\Im{r_{pp}}$. \textbf{d,e} Layer-resolved distribution of the resonance intensity in AlN and SiC, respectively. In c,d, and e, the analytical dispersions of the uncoupled SiC SPhP (blue lines) and AlN ENZ modes (green lines) are plotted for reference. \textbf{f,g} Mode partition of the AlN film and the SiC substrate for the upper and the lower dispersion branch, respectively.} \label{fig1} \end{figure*} We note that this modified normalization to the imaginary part is, similar to the use of $\Im{\ensuremath{r_{pp}}}$, empirically motivated. Nonetheless, the layer-resolved "absorption" calculated according to Eq. 
\ref{eq:Ai} conveniently reflects the relative intensities of a polariton mode present in the different layers of a multilayer structure, as we will demonstrate in the following section. Furthermore, please note that, analogously to $\Im{\ensuremath{r_{pp}}}$, $\T$ and $\A$ take values larger than 1 in the case of $k/k_0 > 1$, rendering the use of the terms ``transmittance'' and ``absorption'' inadequate. Therefore, in the following we refer to the quantities simply by their mathematical symbols. Strikingly, the sum of the layer-resolved quantities $\A_i$ and $\T$ fulfills, as we have numerically verified for a broad variety of test cases, the following conservation law: \begin{align} 2 \Im{r_{pp}} = \sum_{i=1}^{N} \A_i + \T, \end{align} where we calculate $r_{pp}$ employing a TMM \cite{Passler2017a}. This equation constitutes the conservation between the resonance intensity distributed across the layers of the system, described by $\A_i$ and $\T$, and the overall resonance intensity, here found to be $2 \Im{r_{pp}}$. In the following, we will apply our method to two sample systems that have been discussed in the literature before, demonstrating that our results are not only in accordance with previous findings, but also provide additional insight into the resonance behavior of polariton modes in layered heterostructures. \section{Strongly Coupled ENZ Polaritons} At frequencies close to zero crossings of the real part of the dielectric permittivity \ensuremath{\varepsilon}, a material features epsilon-near-zero (ENZ) light propagation with remarkable properties of the ENZ photonic modes, such as high emission directionality \cite{Enoch2002,Kim2016}, enhanced nonlinear-optical conversion efficiency \cite{Argyropoulos2012,Suchowski2013}, and tunneling through narrow distorted waveguide channels \cite{Silveirinha2007,Edwards2009}.
In a polar crystal, ENZ conditions are met at the LO phonon frequency \ensuremath{\omega_{\text{LO}}}, and an ENZ polariton can be found in subwavelength-thin polar crystal films \cite{Vassant2012,Nordin2017,Campione2015}. However, a thin-film ENZ polariton is a non-propagating mode due to its intrinsically flat dispersion close to \ensuremath{\omega_{\text{LO}}}, thus hindering its usability for effective nanoscale communication applications. This limitation can be overcome by strongly coupling an ENZ polariton to a propagating SPhP, as has been demonstrated for an aluminum nitride (AlN) thin film / silicon carbide (SiC) heterostructure \cite{Passler2018}, see Fig.~1a. By combining the advantages of the constituent uncoupled modes, the resulting ENZ-SPhPs feature strong electrical field enhancement characteristic for ENZ modes, while maintaining a propagative character typical for SPhPs. The dispersions of both the uncoupled AlN ENZ mode (green line) and the SiC SPhP (blue line) as well as the strongly coupled modes (red lines) are plotted in Fig.~1b, calculated with an analytical formula for a three-layer system \cite{Burke1986,Campione2015}. Characteristically for strong coupling, the ENZ-SPhP dispersion lines exhibit an avoided crossing, while approaching the dispersion lines of the uncoupled modes with increasing distance to the dispersion crossing point. Accordingly, the mode nature along each of the strongly coupled mode dispersions undergoes a transition across the avoided crossing, while at the avoided crossing, both strongly coupled modes have identical characteristics such as electric field enhancement and spatial confinement \cite{Passler2018}, sharing equal measures of both uncoupled modes. In order to verify and visualize this transition of mode nature across the strong coupling region, we here apply our method to calculate the polariton resonance intensity in the AlN/SiC heterostructure resolved for each layer. 
The overall polaritonic response of the material system can be obtained by calculating $\Im{r_{pp}}$, as it is shown in Fig.~1c, where the entire dispersions of both strongly coupled modes are reproduced. The layer-resolved calculations obtained from our method are plotted in Fig.~1d ($\A$ in AlN) and Fig.~1e ($\T$ in SiC). For both layers, only parts of the same dispersion lines as for $\Im{r_{pp}}$ are obtained. In the AlN film (Fig.~1d), the resonance intensity is strongest in close proximity to the AlN ENZ mode (green line), whereas the intensity fades out along the SiC SPhP (blue line). In the SiC substrate (Fig.~1e), on the contrary, the resonance intensity is most pronounced along the SiC SPhP and almost no intensity can be found along the AlN ENZ mode. This relative intensity distribution between the different layers reflects the respective partial mode nature along the dispersion, changing from the AlN ENZ mode to the SiC SPhP and vice versa. This behavior can be demonstrated by quantifying the mode partition \PP~as follows: \begin{align} \PP_i = \frac{\A_i}{2 \Im{r_{pp}}}, \end{align} and evaluating $\PP_{\text{SiC}}$ and $\PP_{\text{AlN}}$ (blue and green lines) along both dispersion branches of the strongly coupled polariton modes, as shown in Fig.~1f and g, respectively. Clearly, along both branches the mode nature undergoes the aforementioned transition, with a crossing point where the mode exhibits AlN ENZ and SiC SPhP features in equal measures. Notably, this crossing point sits at slightly different in-plane momenta for the upper and the lower branch, corresponding to the momentum where the uncoupled mode dispersions are equidistant to the respective branch in frequency-momentum space. An alternative approach to obtain the relative mode distribution in the multilayer system would be to calculate the layer-resolved absorption for excitation with a propagating wave ($k/k_0 < 1$) via Otto-type prism coupling. 
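As a numerical illustration of the mode partition, consider the following hypothetical Python sketch; the intensity values are invented placeholders chosen to be consistent with the empirical conservation law, whereas real values would come from the TMM:

```python
# Hypothetical sketch of the mode partition P_i = A_i / (2 Im r_pp).
# The values below are invented placeholders consistent with the
# empirical conservation law 2 Im(r_pp) = sum_i A_i + T.
im_rpp = 1.25              # Im(r_pp) at one point on a dispersion branch
A_AlN = 1.60               # resonance intensity in the AlN film
T_SiC = 0.90               # resonance intensity in the SiC substrate

P_AlN = A_AlN / (2.0 * im_rpp)
P_SiC = T_SiC / (2.0 * im_rpp)
# By the conservation law the partitions sum to one; at the avoided
# crossing both approach 1/2.
```

Since the partition is evaluated for the bare sample, no coupling prism enters this comparison.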
However, in this scheme, the relative absorption of the polariton modes is distorted by the coupling prism, because the AlN ENZ and the SiC SPhP modes feature distinct critical gaps of optimal coupling conditions. In contrast, our approach is free of the influence of the excitation method, revealing consistent additional information about the mode nature of the strongly coupled modes in the AlN/SiC heterostructure. \begin{figure*}[!ht] \includegraphics[width = \textwidth]{fig2.pdf} \captionof{figure}{\textbf{Tunable phonon polaritons in twisted \ensuremath{\text{MoO}_{\text{3}}}~layers.} \textbf{a-d} $\Im{r_{pp}}$ as a function of in-plane momenta $k_x/k_0$ and $k_y/k_0$ for a \nmetr{200} \ensuremath{\text{MoO}_{\text{3}}}/\nmetr{200} \ensuremath{\text{MoO}_{\text{3}}}/\ensuremath{\text{SiO}_{\text{2}}}~heterostructure, as illustrated in the inset, at four different twist angles $\alpha=0, 30, 63, \dg{90}$ of the upper \ensuremath{\text{MoO}_{\text{3}}}~layer, respectively. The calculations reveal a topological transition at the magic twist angle $\alpha^*=\dg{63}$ from an ihPhP to an elliptical SPhP. \textbf{e-h} Layer-resolved resonance intensity $\A_1$ in the upper and \textbf{i-l} $\A_2$ in the lower \ensuremath{\text{MoO}_{\text{3}}}~layer, \textbf{m-p} $\T$ in the \ensuremath{\text{SiO}_{\text{2}}}~substrate, and \textbf{q-t} polar plots of the resonance intensities of all four quantities along the dispersion of the first-order SPhP mode, each at four different twist angles $\alpha$, respectively.} \label{fig2} \end{figure*} \section{In-Plane Hyperbolic Polaritons in Twisted \ensuremath{\text{MoO}_{\text{3}}}~Layers} In-plane hyperbolic phonon polaritons (ihPhPs) are supported on polar crystals with in-plane hyperbolicity, that is, at frequencies where $\Re{\ensuremath{\varepsilon}_x} \Re{\ensuremath{\varepsilon}_y} < 0$ (with the crystal surface lying in the $x$-$y$-plane).
The dispersion of ihPhPs takes the form of a hyperbola in the surface plane, oriented such that the hyperbola minimum lies on the crystal axis along which $\Re{\ensuremath{\varepsilon}} < 0$, whereas no solution is supported along the perpendicular surface direction where $\Re{\ensuremath{\varepsilon}} > 0$. Therefore, ihPhPs intrinsically feature a strong propagation directionality. At frequencies where both in-plane permittivity tensor elements are negative, on the other hand, the dispersion describes an ellipse, and the resulting SPhP can propagate along any direction in the surface plane. Recently, it has been demonstrated that by stacking and twisting two \ensuremath{\text{MoO}_{\text{3}}}~layers, the propagation direction of the supported surface polaritons becomes configurable as a function of the twist angle $\alpha$ \cite{Chen2020,Hu2020,Duan2020}. Furthermore, at a specific, frequency-dependent magic angle, the surface polariton performs a topological transition from a hyperbolic to an elliptical dispersion. The overall change in propagation direction and topology as a function of $\alpha$ is well captured by $\Im{r_{pp}}$, as is reproduced in Fig.~2a-d in perfect agreement with the literature. At twist angles $\alpha = \dg{0}$ and \dg{30} (Fig.~2a,b respectively), the polariton is hyperbolic, and the propagation direction rotates with $\alpha$. At the magic angle $\alpha^*=\dg{63}$, the dispersion transitions from hyperbolic to elliptical, resulting in flattened dispersion lines that exhibit diffractionless and low-loss directional polariton canalization \cite{Hu2020}. Finally, at $\alpha=\dg{90}$ (Fig.~2d), the topological transition is completed and the stacked system features an ``elliptical'' dispersion (that is, finite in all in-plane directions) of almost rectangular shape.
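The sign criterion distinguishing the two regimes can be stated compactly. The following hypothetical Python fragment (the permittivity values are invented for illustration) classifies the in-plane topology:

```python
# Hypothetical sketch of the in-plane topology criterion: with the
# surface in the x-y plane, Re(eps_x)*Re(eps_y) < 0 yields a
# hyperbolic in-plane dispersion, two negative components an
# elliptical one, and two positive components no surface mode.
def in_plane_topology(eps_x, eps_y):
    rx, ry = eps_x.real, eps_y.real
    if rx * ry < 0:
        return "hyperbolic"
    if rx < 0 and ry < 0:
        return "elliptical"
    return "none"

# Invented MoO3-like permittivities in two spectral regions:
band_1 = in_plane_topology(-4.0 + 0.3j, 2.5 + 0.1j)    # hyperbolic
band_2 = in_plane_topology(-4.0 + 0.3j, -1.5 + 0.1j)   # elliptical
```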
In order to reveal the optical response resolved for each material layer of the twisted heterostructure, we employ our formalism to calculate $\A_1$ and $\A_2$ for the two \ensuremath{\text{MoO}_{\text{3}}}~layers, and $\T$ for the \ensuremath{\text{SiO}_{\text{2}}}~substrate (the system is sketched in the inset in Fig.~2o). The resonance intensities $\A_1$ and $\A_2$ for the four twist angles $\alpha=0,30,63,\dg{90}$ in the first and second \ensuremath{\text{MoO}_{\text{3}}}~layers are shown in Fig.~2e-h and 2i-l, respectively, and the resonance intensity $\T$ in the substrate is plotted in Fig.~2m-p. Finally, the resonance intensity peak value along the dispersion of the first-order mode is shown in polar plots in Fig.~2q-t. Note that the curves are not continuous for $\alpha=0,30,$ and \dg{63} because of the finite plot range and the divergent nature of the dispersion. Clearly, the resonance intensity is strongest in the first \ensuremath{\text{MoO}_{\text{3}}}~layer and decreases towards the substrate (Fig.~2q-t). As a consequence, the first \ensuremath{\text{MoO}_{\text{3}}}~layer dominates the overall intensity maximum along the dispersion in $\Im{r_{pp}}$ (black lines), which rotates with $\alpha$. The same is true for \T~in the isotropic \ensuremath{\text{SiO}_{\text{2}}}~substrate (red lines). The intensity maxima of $\A_1$ and $\A_2$ in the first and second \ensuremath{\text{MoO}_{\text{3}}}~layer, however, follow the orientation of the optical axis in the respective layer: in the first layer (blue lines), the maximum is shifted clockwise in the direction of the twist rotation, while in the second layer (green lines), the maximum is only mildly rotated. This leads to strongly asymmetric intensity distributions along the dispersion in both \ensuremath{\text{MoO}_{\text{3}}}~layers in the hyperbolic region, that is, at twist angles $\alpha=30$ and \dg{63} (Fig.~2f,j and g,k, respectively).
At $\alpha=\dg{90}$, finally, the intensity maximum is oriented along the y-axis and arises mostly from the first \ensuremath{\text{MoO}_{\text{3}}}~layer, while the small fraction of resonance intensity along the x-axis solely originates in the second layer. By resolving the spatial origin of the resonance intensity layer by layer, our method reveals that the partial resonance intensity in each \ensuremath{\text{MoO}_{\text{3}}}~film is oriented along the respective polariton-active crystal axis. However, due to the presence of the respective other \ensuremath{\text{MoO}_{\text{3}}}~layer, the partial response in each \ensuremath{\text{MoO}_{\text{3}}}~film can feature strongly asymmetric azimuthal intensity distributions, depending on the twist angle $\alpha$. Thus, the polariton modes of the individual films are modified by the presence of the adjacent twisted \ensuremath{\text{MoO}_{\text{3}}}~film, while not featuring full hybridization, as has been observed in the previous example system. Finally, the resulting polariton mode in the full system can be seen as the sum of these partial polaritonic responses in each \ensuremath{\text{MoO}_{\text{3}}}~layer. Revealing this layer-resolved information, our method therefore provides a deeper analysis of the supported ihPhP modes for each topological state in the twisted \ensuremath{\text{MoO}_{\text{3}}}~double-layer heterostructure, and may even supply guiding principles for engineering the dispersion. \section{Discussion} We have introduced here an empirical approach to analyze the layer-resolved intensities of evanescent modes in heterostructures. Yet, it remains unresolved how to embed such a method into a solid theoretical framework, where for instance energy conservation is rigorously traceable and absorption and transmission take physical values $<1$. The mode partition in the air/AlN/SiC strong coupling system, Fig.~1f, may give a hint, though, on how this could be achieved.
Consider that under evanescent wave excitation the reflected wave is also evanescent and cannot transport any energy; that is, the reflectance is $0$ by definition. Then, the SiC partition would actually define the transmission, while the AlN partition defines the absorption. In such a picture, reflectance, absorption, and transmission take physical values, i.e. $\R=0$, $\A, \T \leq 1$ and $\R + \A + \T = 1$. This would hold true also in multilayer systems, with more than one layer contributing to the total absorption. While this analogy is intriguing, it is beyond the scope of this work to rigorously connect these considerations to the well-established physics of propagating plane waves. Nonetheless, our empirical method reveals unprecedented details on the polariton distribution in multilayer systems at low computational cost. Following the recent success of twisted double-layer structures, we anticipate high demand for modeling forthcoming twisted multilayer concepts. Here, our approach could provide comprehensive data that may significantly help to identify the guiding principles for designated design goals. If, additionally, the relevant physics is driven by the polariton intensity in a specific layer or at a given interface of the structure, as for example expected for polariton-driven chemistry, the relevance of our layer-resolved analysis is enhanced even further. As a natural extension, it would be highly desirable to quantitatively connect the empirical results obtained here to experimentally accessible quantities, such as the scattering amplitude and phase in nano-FTIR or s-SNOM, which would enable much enhanced data analysis capabilities for multilayer structures. \section{Conclusion} In this work, we have presented an empirical approach for the layer-resolved analysis of the resonance intensity of polariton modes in arbitrarily anisotropic, birefringent, and absorbing multilayer media.
Our method builds on the empirical approach of calculating the imaginary part of the reflection coefficient $\Im{r_{pp}}$ for evanescent wave excitation that has been successfully used in the literature for several years. The resulting layer-resolved resonance intensities that we calculate from the Poynting vectors obtained from a TMM \cite{Passler2017a,Passler2020b} fulfill an empirical conservation law, balancing the resonance intensity expressed in $\Im{r_{pp}}$ with the sum of the resonance intensities in each system layer. The presented method is implemented in an open-access computer program \cite{Passler2022a}. As case studies, we applied our approach to the analysis of two recently studied nanophotonic systems featuring strong coupling between an ENZ and a propagating SPhP mode and the modulation of the propagation direction and the topological state of ihPhPs, revealing yet undiscovered details about the supported polariton modes. By allowing any multilayer system to be analyzed independently of the excitation scheme, our method holds great potential for understanding, optimizing, and predicting new forms of polariton heterostructures in the future. \section{Acknowledgments} We thank M. Wolf, S. Wasserroth and R. Ernstorfer (FHI Berlin) for careful reading of the manuscript and M. Wolf and the Max Planck Society for supporting this work. \bibliographystyle{apsrev4-2}
\section{Introduction} \label{sec:intro} There is a recent interest in the formal verification of monadic programs stemming from \newterm{monadic equational reasoning\/}: an approach to the verification of monadic programs that emphasizes equational reasoning~\cite{gibbons2011icfp,gibbons2012utp,mu2019tr2,mu2019tr3,mu2020flops}. In this approach, an effect is represented by an operator belonging to an interface together with equational laws. The interfaces all inherit from the type class of monads and are organized in a hierarchy where they are extended and composed. There are several efforts to bring monadic equational reasoning to proof assistants~\cite{affeldt2019mpc,pauwels2019mpc,affeldt2020trustful}. In monadic equational reasoning, the user cannot rely on the \newterm{model\/} of the interfaces because the implementation of the corresponding monads is kept hidden. The construction of models is nevertheless important to avoid mistakes when adding equational laws~\cite{affeldt2020trustful}. This means that a formalization of monadic equational reasoning needs to provide tools to formalize models. In this paper, we extend an existing formalization of monadic equational reasoning (called \textsc{Monae}{}~\cite{affeldt2019mpc}) with \newterm{monad transformers}. Monad transformers are a well-known approach to combining monads that is both modular and practical~\cite{liang1995popl}. They are also commonly used to write Haskell programs. The interest in extending monadic equational reasoning with monad transformers is therefore twofold: (1)~it enriches the toolbox to build formal models of monad interfaces, and (2)~it makes programs written with monad transformers amenable to equational reasoning. In fact, the interest in a formal theory of monad transformers goes beyond its application to monadic equational reasoning. Past research advances about monad transformers could have benefited from formalization.
For example, a decade ago, Jaskelioff identified a lack of uniformity in the definitions of the liftings of operations through monad transformers~\cite{jaskelioff2009esop}. He proposed \newterm{modular monad transformers} which come with a uniform definition of lifting for operations that qualify as \newterm{sigma-operations} or their sub-class of \newterm{algebraic operations}. Unfortunately, the original proposal in terms of System F$\omega$ was soon ruled out as faulty~\cite[Sect.~6]{jaskelioff2010} \cite[p.~7]{jaskelioff2009phd} and its fix gave rise to a more involved presentation in terms of (non-trivial) category theory~\cite{jaskelioff2010}. More recently, it is the comparison between monad transformers and algebraic effects that has attracted attention, connecting back to reasoning using equational laws (e.g., \cite[Sect.~7]{schrijvers2019haskell}). This is why in this paper, not only do we provide examples of monad transformers and applications of monadic equational reasoning, but we also formalize a theory of monad transformers. \paragraph*{Contributions} In this paper, we propose a formalization in the \textsc{Coq}{} proof assistant~\cite{coq} of monad transformers. This formalization comes as an extension of \textsc{Monae}{}, an existing library that provides a hierarchy of monad interfaces~\cite{affeldt2019mpc}. The benefits of this extension are as follows. \begin{itemize} \item The addition of sigma-operations and of monad transformers to \textsc{Monae}{} improves the implementation of models of monads. These models are often well-known and it is tempting to define them in an ad hoc way. Sigma-operations help us discipline proof scripts and naming, which are important aspects of proof engineering. \item We illustrate with an example how to extend \textsc{Monae}{} to verify a program written with a monad transformer.
Verification is performed by equational reasoning using equational laws from a monad interface whose model is built using a monad transformer. \item We use our formalization of monad transformers to formalize the theory of lifting of modular monad transformers. Thanks to \textsc{Monae}{}, the main theorems of modular monad transformers can be given short formal proofs in terms of equational reasoning. \end{itemize} Regarding the theory of lifting of modular monad transformers, our theory fixes the original presentation~\cite{jaskelioff2009esop}. This fix consists in a non-standard use of \textsc{Coq}{} combining impredicativity and parametricity (as implemented by \textsc{ParamCoq}{}~\cite{keller2012csl}) that allows for an encoding using the language of the proof assistant and thus avoids the hassle of going through a technical formalization of category theory (which is how Jaskelioff fixed his original proposal). It must be said that this was not possible at the time of the original paper on modular monad transformers because parametric models of dependent type theory were not known~\cite{atkey2014popl} (but were ``expected''~\cite{jaskelioffPC}). We are therefore in the situation where formalization using a proof assistant allowed for a fruitful revisit of pencil-and-paper proofs. Regarding the benefit of extending \textsc{Monae}{} with sigma-operations and monad transformers, we would like to stress that this is also a step towards more modularity in our formalization of monadic equational reasoning. Indeed, one important issue that we have been facing is the quality of our proof scripts. Proof scripts that reproduce monadic equational reasoning must be as concise as they are on paper. Proof scripts that build models (and prove lemmas) should be maintainable (to be improved or fixed easily in case of changes in the hypotheses) and understandable (this means having a good balance between the length of the proof script and its readability). 
This manifests as mundane but important tasks such as factorization of proof scripts, generalization of lemmas, abstraction of data structures, etc. From the viewpoint of proof-engineering, striving for modularity is always a good investment because it helps in breaking the formalization task into well-identified, loosely-coupled pieces. \paragraph*{Outline} In Sect.~\ref{sec:background}, we recall the main constructs of \textsc{Monae}{}. In Sect.~\ref{sec:extension}, we formalize the basics of modular monad transformers: sigma-operations, monad transformers, and their variants (algebraic operations and functorial monad transformers). Section~\ref{sec:application} is our first application: we show with an example how to extend \textsc{Monae}{} to verify a program written using monad transformers. In Sect.~\ref{sec:theorem19}, we use our formalization of monad transformers to prove a first theorem about modular monad transformers (namely, the lifting of algebraic operations) using equational reasoning. In Sect.~\ref{sec:lifting_operations}, we formalize (and fix) the main theorem of modular monad transformers (namely, the lifting of sigma-operations that are not necessarily algebraic along functorial monad transformers). We review related work in Sect.~\ref{sec:related_work} and conclude in Sect.~\ref{sec:conclusion}. \section{Overview of the \textsc{Monae}{} Library} \label{sec:background} \textsc{Monae}{}~\cite{affeldt2019mpc} is a formal library implemented in the \textsc{Coq}{} proof assistant~\cite{coq} to support monadic equational reasoning~\cite{gibbons2011icfp}. It takes advantage of the rewriting capabilities of the tactic language called \textsc{SSReflect}{}~\cite{ssrman} to achieve formal proofs by rewriting that are very close to their pencil-and-paper counterparts. \textsc{Monae}{} provides a hierarchy of monad interfaces formalized using the methodology of \newterm{packed classes}~\cite{garillot2009tphols}. 
Effects are declared as operations in interfaces together with equational laws, and some effects extend others by (simple or multiple) inheritance. This modularity is important to achieve natural support for monadic equational reasoning. \def\Ret#1#2{{{\coqin{Ret}}^{\coqin{#1}}_{\coqin{#2}}}} \def\Join#1#2{{{\coqin{Join}}^{\coqin{#1}}_{\coqin{#2}}}} Let us briefly explain some types and notations provided by \textsc{Monae}{} that we will use in the rest of this paper. \textsc{Monae}{} provides basic category-theoretic definitions such as functors, natural transformations, and monads. By default, they are specialized to \coqin{UU0}, the lowest universe in the hierarchy of \textsc{Coq}{} types, understood as a category\footnote{\textsc{Monae}{} also provides a more generic setting~\cite[file \coqin{category.v}]{monae} but we do not use it in this paper.}. The type of functors is \coqin{functor}. The application of a functor \coqin{F} to a function \coqin{f} is denoted by \mintinline{ssr}{F # f}. The composition of functors is denoted by the infix notation~\coqin{\O}. The identity functor is denoted by \coqin{FId}. Natural transformations from the functor \coqin{F} to the functor \coqin{G} are denoted by \coqin{F ~> G}. Natural transformations are formalized by their components (represented by the type \coqin{forall A, F A -> G A}, denoted by \coqin{F ~~> G}) together with the proof that they are natural, i.e., the proof that they satisfy the following predicate: \begin{minted}{ssr} Definition naturality (M N : functor) (m : M ~~> N) := forall (A B : UU0) (h : A -> B), (N # h) \o m A = m B \o (M # h). \end{minted} (The infix notation \coqin{\o} is for function composition.) Vertical composition of natural transformations is denoted by the infix notation \coqin{\v}. The application of a functor \coqin{F} to a natural transformation \coqin{n} is denoted by \mintinline{ssr}{F ## n}. The type of monads is \coqin{monad}, which inherits from the type \coqin{functor}. 
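The naturality predicate can be illustrated outside of \textsc{Coq}{}. The following Python sketch (hypothetical code, not part of \textsc{Monae}{}) checks the naturality square pointwise for the safe-head transformation between the list functor and a Maybe-like functor:

```python
# A Python sketch of the naturality predicate: for a transformation
# m : M ~~> N, naturality says (N # h) . m_A = m_B . (M # h) for any h : A -> B.
# Here M is the list functor, N a Maybe-like functor (None = nothing),
# and the component m is "safe head".

def fmap_list(h, xs):          # action of the list functor on morphisms
    return [h(x) for x in xs]

def fmap_maybe(h, x):          # action of the Maybe-like functor on morphisms
    return None if x is None else h(x)

def head(xs):                  # the component of the natural transformation
    return xs[0] if xs else None

h = lambda n: n + 1
xs = [3, 1, 2]
# (N # h) (m_A xs) == m_B ((M # h) xs)
assert fmap_maybe(h, head(xs)) == head(fmap_list(h, xs))
assert fmap_maybe(h, head([])) == head(fmap_list(h, []))
```

Such pointwise checks are of course no substitute for the quantified \coqin{naturality} predicate, but they show the equation that the \coqin{Natural.Pack} constructor expects a proof of.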
Let \coqin{M} be of type \coqin{monad}. Then \coqin{Ret}{} is a natural transformation \coqin{FId ~> M} and \coqin{Join}{} is a natural transformation \coqin{M \O M ~> M}. Using \coqin{Ret}{} and \coqin{Join}{}, we define the standard bind operator with the notation~\coqin{>>=}. In this paper, we show \textsc{Coq}{} proof scripts verbatim when it is reasonable to do so. When we write mathematical formulas, we keep the same typewriter font, but, for clarity and to ease reading, we make explicit some information that would otherwise be implicitly inferred by \textsc{Coq}{}. For example, one simply writes \coqin{Ret}{} or \coqin{Join}{} in proof scripts written using \textsc{Monae}{} because it has been implemented in such a way that \textsc{Coq}{} infers from the context which monad they refer to and which type they apply to. In mathematical formulas, we sometimes make the monad explicit by writing it as a superscript of \coqin{Ret}{} or \coqin{Join}{}, and we sometimes write the argument of a function application as a subscript. This leads to terms such as $\Ret{M}{A}$: the unit of the monad \coqin{M} applied to some type~\coqin{A}. See the online development for technical details (in particular, \cite[file \coqin{hierarchy.v}]{monae}). \section{Sigma-operations and Monad Transformers in \textsc{Monae}{}} \label{sec:extension} The first step is to formalize sigma-operations (Sect.~\ref{sec:sigma-operations}) and monad transformers (Sect.~\ref{sec:transf}). We illustrate sigma-operations with the example of the model of the state monad (Sect.~\ref{sec:statemodel}) and its get operation (Sect.~\ref{sec:sigmaget}). \subsection{Extending \textsc{Monae}{} with Sigma-operations} \label{sec:sigma-operations} Given a functor \coqin{E}, an \newterm{\coqin{E}-operation} for a monad \coqin{M} (sigma-operation for short) is a natural transformation from \coqin{E \O M} to \coqin{M}.
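To fix intuitions before the \textsc{Coq}{} development, here is an untyped Python sketch (hypothetical code) of a sigma-operation: taking \coqin{M} to be the state monad $S \to A \times S$ and \coqin{E} the functor $X \mapsto S \to X$, the get operation sends an element $k$ of $E(M\,A)$, i.e., a function $k : S \to M\,A$, to the computation $\lambda s.\,k\,s\,s$:

```python
# Hypothetical Python sketch of the state monad M A = S -> (A, S)
# and of the get sigma-operation E (M A) -> M A, where E X = S -> X.

def ret(a):                     # unit of the state monad
    return lambda s: (a, s)

def bind(m, f):                 # bind of the state monad
    def run(s):
        a, s2 = m(s)
        return f(a)(s2)
    return run

def get_op(k):                  # the sigma-operation: \s. k s s
    return lambda s: k(s)(s)

# the usual get is recovered by applying get_op to ret
get = get_op(ret)
assert get(5) == (5, 5)
# a small program: read the state and add one
prog = bind(get, lambda n: ret(n + 1))
assert prog(41) == (42, 41)
```

The last two lines foreshadow the Coq development below, where the usual \coqin{get} is obtained by applying the sigma-operation to \coqin{Ret}.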
The fact that sigma-operations are defined in terms of natural transformations is helpful to build models because it involves structured objects (functors and natural transformations) already instrumented with lemmas. In other words, we consider sigma-operations as a disciplined way to formalize effects. For illustration, we explain how the get operation of the state monad is formalized. \subsubsection{Example: Model of the State Monad} \label{sec:statemodel} First we define a model \coqin{State.t} for the state monad (without get and put for the time being). We assume a type~\coqin{S} (line~\ref{line:typeS}) and define the action on objects \coqin{acto} (line~\ref{line:actoS}), abbreviated as~\coqin{M} (line~\ref{line:notationM}). We define the action on morphisms \coqin{map} (line~\ref{line:mapS}) and prove the functor laws (omitted here, see \cite[file \coqin{monad_model.v}]{monae} for details). This provides us with a functor \coqin{functor} (line~\ref{line:functorS}, \coqin{Functor.Pack} and \coqin{Functor.Mixin} are constructors from \textsc{Monae}{} and are named after the packed classes methodology~\cite{garillot2009tphols}). We define the unit of the monad by first providing its components \coqin{ret_component} (line~\ref{line:ret0S}), and prove naturality (line~\ref{line:retS_nat}, proof script omitted). We then package this proof to form a genuine natural transformation at line~\ref{line:retS} (\coqin{Natural.Pack} and \coqin{Natural.Mixin} are constructors from \textsc{Monae}{}). We furthermore define \coqin{bind} (line~\ref{line:bindS}), prove the properties of the unit and bind (omitted). Finally, we call the function \coqin{Monad_of_ret_bind} from \textsc{Monae}{} to build the monad (line~\ref{line:monadS}): \begin{minted}[fontsize=\small,numbers=left,xleftmargin=2em,escapeinside=77]{ssr} (* in Module State *) Variable S : UU0. 7\label{line:typeS}7 Definition acto := fun A => S -> A * S. 7\label{line:actoS}7 Local Notation M := acto. 
7\label{line:notationM}7 Definition map A B (f : A -> B) (m : M A) : M B := 7\label{line:mapS}7 fun (s : S) => let (x1, x2) := m s in (f x1, x2). (* functor laws map_id and map_comp omitted *) Definition functor := Functor.Pack (Functor.Mixin map_id map_comp). 7\label{line:functorS}7 Definition ret_component : FId ~~> M := fun A a => fun s => (a, s). 7\label{line:ret0S}7 Lemma naturality_ret : naturality FId functor ret_component. 7\label{line:retS_nat}7 (* proof script of naturality omitted *) Definition ret : FId ~> functor := 7\label{line:retS}7 Natural.Pack (Natural.Mixin naturality_ret). Definition bind := fun A B (m : M A) (f : A -> M B) => uncurry f \o m. 7\label{line:bindS}7 (* proofs of neutrality of ret and of associativity of bind omitted *) Definition t := Monad_of_ret_bind left_neutral right_neutral associative. 7\label{line:monadS}7 \end{minted} \subsubsection{Example: The Get Operation as a Sigma-operation} \label{sec:sigmaget} By definition, for each sigma-operation we need a functor. The functor corresponding to the get operation is defined below as \coqin{Get.func} (line~\ref{line:getfunc}): \coqin{acto} is the action on the objects, \coqin{actm} is the action on the morphisms (the prefix~\coqin{@} disables implicit arguments in \textsc{Coq}{}): \begin{minted}[fontsize=\small,numbers=left,xleftmargin=2em,escapeinside=77]{ssr} (* in Module Get *) Variable S : UU0. Definition acto X := S -> X. Definition actm (X Y : UU0) (f : X -> Y) (t : acto X) : acto Y := f \o t. Program Definition func := Functor.Pack (@Functor.Mixin _ actm _ _). 7\label{line:getfunc}7 (* proofs of the functors law omitted *) \end{minted} We then define the sigma-operation itself (\coqin{StateOps.get_op} at line~\ref{line:getop}), which is a natural transformation from \coqin{Get.func S \O M} to \coqin{M}, where \coqin{M} is the state monad \coqin{State.t} built in Sect.~\ref{sec:statemodel}. 
Note that this get operation ($\lambda s.\, k\,s\,s$, line~\ref{line:get}) is {\em not\/} the usual operation~\cite[Example~13]{jaskelioff2009esop}. \begin{minted}[fontsize=\small,numbers=left,xleftmargin=2em,escapeinside=77]{ssr} (* in Module StateOps *) Variable S : UU0. Local Notation M := (State.t S). Definition get A (k : S -> M A) : M A := fun s => k s s. 7\label{line:get}7 Lemma naturality_get : naturality (Get.func S \O M) M get. (* proof script of naturality omitted *) Definition get_op : (Get.func S).-operation M := 7\label{line:getop}7 Natural.Pack (Natural.Mixin naturality_get). \end{minted} \subsubsection{Example: Model of the Interface of the State Monad} \textsc{Monae}{} originally comes with an interface \coqin{stateMonad} for the state monad ({\em with} the get and put operations). It implements the interface as presented by Gibbons and Hinze~\cite[Sect.~6]{gibbons2011icfp}; it therefore expects the operations to be the usual ones. We show how to instantiate it using the definition of sigma-operations. First, we need to define the usual get from \coqin{StateOps.get_op} (line~\ref{line:usual_get} below): \begin{minted}[fontsize=\small,numbers=left,xleftmargin=2em,escapeinside=77]{ssr} (* in Module ModelState *) Variable S : UU0. Local Notation M := (ModelMonad.State.t S). Definition get : M S := StateOps.get_op _ Ret. 7\label{line:usual_get}7 \end{minted} We do the same for the put operation (omitted). We then build the model of interface of the state monad (with its operations) using the appropriate constructors from \textsc{Monae}{}: \begin{minted}[fontsize=\small]{ssr} Program Definition state : stateMonad S := MonadState.Pack (MonadState.Class (@MonadState.Mixin _ _ get put _ _ _ _)). 
(* proofs of the laws of get and put automatically discharged *) \end{minted} Similarly, using sigma-operations, we have formalized the operations of the list, the output, the state, the environment, and the continuation monads, which are the monads discussed along with modular monad transformers~\cite[Fig.~1]{jaskelioff2009esop} (see~\cite[file \coqin{monad_model.v}]{monae} for their formalization). \subsection{The Sub-class of Algebraic Operations} \label{sec:algebraic_operations} An \coqin{E}-operation \coqin{op} for \coqin{M} is \newterm{algebraic}~\cite[Def.~15]{jaskelioff2009esop} when it satisfies the predicate \coqin{algebraicity} defined as follows in \textsc{Coq}{} (observe the position of the continuation~\coqin{>>= f}): \begin{minted}{ssr} forall A B (f : A -> M B) (t : E (M A)), op A t >>= f = op B ((E # (fun m => m >>= f)) t). \end{minted} Algebraic operations are worth distinguishing because they lend themselves more easily to lifting, and this result can be used to define lifting for the whole class of sigma-operations (this is the purpose of Sections~\ref{sec:theorem19} and~\ref{sec:lifting_operations}). We can check using \textsc{Coq}{} that, as expected, all the operations discussed along with modular monad transformers~\cite[Fig.~1]{jaskelioff2009esop} are algebraic except for flush, local, and handle\footnote{In fact, we had to fix the output operation of the output monad. Indeed, it is defined as follows in \cite[Example 32]{jaskelioff2009esop}: $ {\sf output}((w,m) : W \times {\sf O}X) : {\sf O}X \,\hat{=}\, {\sf let\,}(x,w') = m {\sf \,in\,}(x, {\sf append}(w',w)). $ We changed ${\sf append}(w',w)$ to ${\sf append}(w,w')$ to be able to prove algebraicity.}. \subsubsection*{Example: the Get operation is Algebraic} For example, the get operation of the state monad is algebraic: \begin{minted}{ssr} Lemma algebraic_get S : algebraicity (@StateOps.get_op S). Proof. by []. Qed. 
\end{minted} In the \textsc{Coq}{} formalization, we furthermore provide the type \coqin{E.-aoperation M} (note the prefix ``\coqin{a}'') of an \coqin{E.-operation M} that is actually algebraic. For example, here is how we define the algebraic version of the get operation: \begin{minted}[fontsize=\small]{ssr} Definition get_aop S : (StateOps.Get.func S).-aoperation (ModelMonad.State.t S) := AOperation.Pack (AOperation.Class (AOperation.Mixin (@algebraic_get S))). \end{minted} \subsection{Extending \textsc{Monae}{} with Monad Transformers} \label{sec:transf} Given two monads \coqin{M} and \coqin{N}, a \newterm{monad morphism} \coqin{e} is a function of type \coqin{M ~~> N} such that for all types \coqin{A}, \coqin{B} the following laws hold: \begin{itemize} \item \begin{minted}{ssr} e A \o Ret = Ret. (* MonadMLaws.ret *) \end{minted} \item \begin{minted}{ssr} forall (m : M A) (f : A -> M B), (* MonadMLaws.bind *) e B (m >>= f) = e A m >>= (e B \o f). \end{minted} \end{itemize} In \textsc{Coq}{}, we define the type of monad morphisms \coqin{monadM} that implement the two laws above. Monad morphisms are also natural transformations (this can be proved easily using the laws of monad morphisms). We therefore equip monad morphisms \coqin{e} with a canonical structure of natural transformation. Since it is made canonical, \textsc{Coq}{} is able to infer it in proof scripts but we need to make it explicit in statements; we provide the notation \coqin{monadM_nt e} for that purpose. A \newterm{monad transformer} \coqin{t} is a function of type \coqin{monad -> monad} with an operator \coqin{Lift} such that for any monad~\coqin{M}, \coqin{Lift t M} is a monad morphism from \coqin{M} to \coqin{t M}. Let \coqin{monadT} be the type of monad transformers in \textsc{Monae}{}. 
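The two monad morphism laws can be checked pointwise on a small example. The following Python sketch (hypothetical encoding, not part of \textsc{Monae}{}) takes \coqin{M} to be the identity monad and \coqin{N} the exception-transformed identity monad, with \coqin{e} the usual lift:

```python
# Python sketch of the monad morphism laws, checked pointwise for the
# lift into the exception-transformed identity monad: here M A = A and
# (t M) A = ('ok', a) | ('exn', z).

def ret_id(a): return a                       # identity monad
def bind_id(m, f): return f(m)

def ret_x(a): return ('ok', a)                # exception-transformed monad
def bind_x(m, f):
    tag, v = m
    return f(v) if tag == 'ok' else m

def lift(m):                                  # the monad morphism e : M ~~> t M
    return bind_id(m, lambda a: ret_x(a))

f = lambda n: ret_id(n * 2)
m = 21
# law 1: e . Ret = Ret
assert lift(ret_id(m)) == ret_x(m)
# law 2: e (m >>= f) = e m >>= (e . f)
assert lift(bind_id(m, f)) == bind_x(lift(m), lambda a: lift(f(a)))
```

In \textsc{Monae}{}, these two equations are exactly \coqin{MonadMLaws.ret} and \coqin{MonadMLaws.bind}, quantified over all types and computations.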
We reproduced all the examples of modular monad transformers (state, exception, environment, output, continuation monad transformers, resp.\ \coqin{stateT}, \coqin{exceptT}, \coqin{envT}, \coqin{outputT}, and \coqin{contT} in \cite[file \coqin{monad_transformer.v}]{monae}). \subsubsection*{Example: The Exception Monad Transformer} Let us assume given some type \coqin{Z : UU0} for exceptions and some monad~\coqin{M}. First, we define the action on objects of the monad transformed by the exception monad transformer (the type \coqin{Z + X} represents the sum type of the types~\coqin{Z} and~\coqin{X}): \begin{minted}[fontsize=\small]{ssr} Definition MX := fun X : UU0 => M (Z + X). \end{minted} We also define the unit and the bind operator of the transformed monad (the constructors \coqin{inl}/\coqin{inr} inject a type into the left/right of a sum type): \begin{minted}[fontsize=\small]{ssr} Definition retX X x : MX X := Ret (inr x). Definition bindX X Y (t : MX X) (f : X -> MX Y) : MX Y := t >>= fun c => match c with inl z => Ret (inl z) | inr x => f x end. \end{minted} Second, we define the monad morphism that will be returned by the lift operator of the monad transformer. In \textsc{Coq}{}, we can formalize the corresponding function by constructing the desired monad assuming~\coqin{M}. This is similar to the construction of the state monad we saw in Sect.~\ref{sec:sigma-operations}. We start by defining the underlying functor \coqin{MX_map}, prove the two functor laws (let us call \coqin{MX_map_i} and \coqin{MX_map_o} these proofs), and package them as a functor: \begin{minted}[fontsize=\small]{ssr} Definition MX_functor := Functor.Pack (Functor.Mixin MX_map_i MX_map_o). 
\end{minted} We then provide the natural transformation \coqin{retX_natural} corresponding to \coqin{retX} and call the \textsc{Monae}{} constructor \coqin{Monad_of_ret_bind} (like we did in Sect.~\ref{sec:sigma-operations}): \begin{minted}[fontsize=\small]{ssr} Program Definition exceptTmonad : monad := @Monad_of_ret_bind MX_functor retX_natural bindX _ _ _. (* proofs of monad laws omitted *) \end{minted} Then we define the lift operation as a function that given a computation \coqin{m} in the monad \coqin{M X} returns a computation in the monad \coqin{exceptTmonad X}: \begin{minted}[fontsize=\small]{ssr} Definition liftX X (m : M X) : exceptTmonad X := m >>= (@RET exceptTmonad _). \end{minted} (The function \coqin{RET} is a variant of \coqin{Ret} better suited for type inference here.) We can finally package the definition of \coqin{liftX} to form a monad morphism: \begin{minted}{ssr} Program Definition exceptTmonadM : monadM M exceptTmonad := monadM.Pack (@monadM.Mixin _ _ liftX _ _). (* proof of monad morphism laws omitted *) \end{minted} The exception monad transformer merely packages the monad morphism we have just defined to give it the type~\coqin{monadT}: \begin{minted}{ssr} Definition exceptT Z := MonadT.Pack (MonadT.Mixin (exceptTmonadM Z)). \end{minted} One might wonder what is the relation between the monads that can be built with these monad transformers and the monads already present in \textsc{Monae}{}. For example, in Sect.~\ref{sec:sigma-operations}, we already mentioned the \coqin{stateMonad} interface and we built a model for it (namely, \coqin{ModelState.state}). On the other hand, we can now, say, build a model for the identity monad (let us call it \coqin{identity}) and build a model for that state monad as \coqin{stateT S identity} (we have not provided the details of \coqin{stateT}, see \cite{affeldt2019mpc}). 
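As a sanity check of this coincidence, here is a Python sketch (assuming the standard definition of the state monad transformer, \coqin{stateT S M A = S -> M (A * S)}, which we do not detail in this paper): instantiating the underlying monad with the identity monad collapses the transformed type to $S \to A \times S$, the plain state monad:

```python
# Sketch (assumed definition: stateT S M A = S -> M (A, S)):
# with M the identity monad, the transformed monad behaves exactly
# like the direct state monad S -> (A, S).

def ret_id(a): return a                     # identity monad
def bind_id(m, f): return f(m)

def stateT_ret(a):                          # unit of stateT S identity
    return lambda s: ret_id((a, s))

def stateT_bind(m, f):                      # bind of stateT S identity
    def run(s):
        def k(pair):
            a, s2 = pair
            return f(a)(s2)
        return bind_id(m(s), k)
    return run

prog = stateT_bind(stateT_ret(1), lambda n: lambda s: (n + s, s))
assert prog(41) == (42, 41)
```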
We can actually prove in \textsc{Coq}{} that \coqin{stateT S identity} and \coqin{State.t} are {\em equal}\footnote{\cite[\coqin{Section instantiations_with_the_identity_monad}, file \coqin{monad_model.v}]{monae}}, so that no confusion has been introduced by extending \textsc{Monae}{} with monad transformers. \subsection{Functorial Monad Transformers} \label{sec:fmt} A \newterm{functorial monad transformer}~\cite[Def.~20]{jaskelioff2009esop} is a monad transformer \coqin{t} with a function \coqin{h} (hereafter denoted by \coqin{Hmap t}) of type \begin{minted}{ssr} forall (M N : monad), (M ~> N) -> (t M ~> t N) \end{minted} such that (1)~\coqin{h} preserves monad morphisms (the laws \coqin{MonadMLaws.ret} and \coqin{MonadMLaws.bind} seen in Sect.~\ref{sec:transf}), (2)~\coqin{h} preserves identities and composition of natural transformations, and (3)~\coqin{Lift t} is natural, i.e., \begin{minted}{ssr} forall (M N : monad) (n : M ~> N) X, h M N n X \o Lift t M X = Lift t N X \o n X. \end{minted} Note that we cannot define the naturality of \coqin{Lift t} using the predicate \coqin{naturality} we saw in Sect.~\ref{sec:background} because it is restricted to endofunctors on~\coqin{UU0}. Also note that Jaskelioff distinguishes monad transformers from functorial monad transformers while Maillard defines monad transformers as functorial by default~\cite[Def.~4.1.1]{maillard2019phd}. \section{Application 1: Monadic Equational Reasoning in the Presence of Monad Transformers} \label{sec:application} We apply our formalization of monad transformers to the verification of a recursive program combining the effects of state and exception. We argue that this program is similar in style to what a Haskell programmer would typically write with monad transformers. Despite this programming style and the effects, the correctness proof is by equational reasoning.
\subsection{Extending the Hierarchy} \label{sec:exthier} \begin{figure} \includegraphics[width=14cm]{hier.png} \caption{Hierarchy of Monad Interfaces Provided by \textsc{Monae}{}} \label{fig:hier} \end{figure} The first thing to do is to extend the hierarchy of interfaces with \coqin{stateRunMonad} and \coqin{exceptStateRunMonad} (Fig.~\ref{fig:hier}). The interface \coqin{stateRunMonad} is a parameterized interface that extends \coqin{stateMonad} with the primitive \coqin{RunStateT} and its equations. Concretely, let \coqin{N} be a monad and \coqin{S} be the type of states. When \coqin{m} is a computation in the monad \coqin{stateRunMonad S N}, \coqin{RunStateT m s} runs \coqin{m} in a state \coqin{s} and returns a computation in the monad~\coqin{N}. There is one equation for each combination of \coqin{RunStateT} with operations below in the hierarchy: \begin{minted}{ssr} RunStateT (Ret a) s = Ret (a, s) RunStateT (m >>= f) s = RunStateT m s >>= fun x => RunStateT (f x.1) x.2 RunStateT Get s = Ret (s, s) RunStateT (Put s') s = Ret (tt, s') \end{minted} This is the methodology of packed classes that allows for the overloading of the notations~\coqin{Ret} and \coqin{>>=} here. The notation \coqin{.1} (resp.\ \coqin{.2}) is for the first (resp.\ second) projection of a pair. The unique value of type~\coqin{unit} is~\coqin{tt}. The operations \coqin{Get} and \coqin{Put} are the standard operations of the state monad. Intuitively, given a monad \coqin{M} that inherits from the state monad, \coqin{Get} is a computation of type \coqin{M S} that returns the state and \coqin{Put} has type \coqin{S -> M unit} and updates the state (see Sect.~\ref{sec:extension} for a model of these operations). 
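These equations can be checked pointwise in a toy model. In the Python sketch below (hypothetical code), the underlying monad \coqin{N} is the identity monad, a computation in the state-run monad is a function $s \mapsto (a, s)$, and \coqin{RunStateT} is mere function application:

```python
# Python sketch of the four RunStateT equations, with N the identity monad:
# a computation is a function s -> (a, s) and RunStateT m s is just m(s).

def ret(a): return lambda s: (a, s)
def bind(m, f):
    def run(s):
        a, s2 = m(s)
        return f(a)(s2)
    return run

get = lambda s: (s, s)
def put(s2): return lambda _s: ((), s2)

def run_stateT(m, s):                 # RunStateT m s (Ret of N is the identity)
    return m(s)

s = 7
# RunStateT (Ret a) s = Ret (a, s)
assert run_stateT(ret(3), s) == (3, s)
# RunStateT Get s = Ret (s, s)
assert run_stateT(get, s) == (s, s)
# RunStateT (Put s') s = Ret (tt, s')
assert run_stateT(put(9), s) == ((), 9)
# RunStateT (m >>= f) s = RunStateT m s >>= fun x => RunStateT (f x.1) x.2
m, f = get, lambda n: put(n + 1)
a, s2 = run_stateT(m, s)
assert run_stateT(bind(m, f), s) == run_stateT(f(a), s2)
```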
The interface \coqin{exceptStateRunMonad} is the combination of the operations and equations of \coqin{stateRunMonad} and \coqin{exceptMonad}~\cite[Sect.~5]{gibbons2011icfp}\cite[file \coqin{hierarchy.v}]{monae} plus two additional equations on the combination of \coqin{RunStateT} with the operations of \coqin{exceptMonad}. Recall that the operations of the exception monad are the computations \coqin{Fail} of type \coqin{M A} and \coqin{Catch} of type \coqin{M A -> M A -> M A} for some type~\coqin{A} (which happens to be the type of the state in this example); intuitively, \coqin{Fail} raises an exception while \coqin{Catch} handles it. \begin{minted}{ssr} RunStateT Fail s = Fail RunStateT (Catch m1 m2) s = Catch (RunStateT m1 s) (RunStateT m2 s) \end{minted} Using our formalization of monad transformers presented in this paper, it is then easy to build a model that validates those equations, whereas in previous work we had to build a model from scratch each time we were introducing a new combination of effects. \subsection{Example: The Fast Product} Now let us write a program and reason about it equationally. First, we write a recursive function that traverses a list of natural numbers to compute their product, but fails if a 0 is encountered. Intermediate results are stored in the state: \begin{minted}{ssr} Variables (N : exceptMonad) (M : exceptStateRunMonad nat N). Fixpoint fastProductRec l : M unit := match l with | [::] => Ret tt | 0 :: _ => Fail | n.+1 :: l' => Get >>= fun m => Put (m * n.+1) >> fastProductRec l' end. \end{minted} Then, the main function catches a possible failure. If there is a failure, then the result is~0, else the result is the value stored in the state: \begin{minted}{ssr} Variables (N : exceptMonad) (M : exceptStateRunMonad nat N). Definition fastProduct l : M _ := Catch (Put 1 >> fastProductRec l >> Get) (Ret 0 : M _).
\end{minted} To implement this algorithm in Haskell, we would use the state monad transformer applied to the exception monad. It would then be necessary to prefix each primitive of the exception monad with a lifting operation (\texttt{lift} or \texttt{mapStateT2} in Haskell). Here, we avoid this by using the hierarchy of interfaces, and the use of monad transformers is restricted to the construction of models for the interfaces. The correctness property states that the result of the fast product is always the same as a purely functional version: \begin{minted}{ssr} Lemma fastProductCorrect l n : evalStateT (fastProduct l) n = Ret (product l). \end{minted} where \coqin{evalStateT m s} is defined as \coqin{RunStateT m s >>= fun x => Ret x.1}. This proposition is proved easily with a 10-line proof script that consists of an induction on \coqin{l}, rewriting with the equations in \coqin{exceptStateRunMonad}, and applications of standard arithmetic (see Appendix~\ref{sec:fpscript}). Note that in this section we are dealing with the state monad transformer applied to the exception monad, and that the last equation in Sect.~\ref{sec:exthier} specifies that the state is ``backtracked'', i.e., if the state is modified in \coqin{m1} before an exception occurs, then this change is forgotten before \coqin{m2} is executed. This is usual in Haskell. The alternative semantics without backtracking would be closer to, say, OCaml, where the state is not backtracked in case of an exception. Our program would behave the same way because it happens that the exception handler ignores the state. However, we would need to devise new equations to deal with \coqin{Fail} and \coqin{Catch}. \section{Application 2: Formalization of the Lifting of an Algebraic Operation} \label{sec:theorem19} This section is an application of \textsc{Monae}{} extended with the formalization of sigma-operations and of monad transformers of Sect.~\ref{sec:extension}.
We prove using equational reasoning a theorem about the lifting of algebraic operations along monad morphisms. This corresponds to the theorem that concludes the first part of the original paper on modular monad transformers~\cite[Sect.~2--4]{jaskelioff2009esop}. \newsavebox\mybox \begin{lrbox}{\mybox} \mintinline{ssr}{#} \end{lrbox} \def{\usebox{\mybox}}{{\usebox{\mybox}}} In the following \coqin{M} and \coqin{N} are two monads. Given an \coqin{E}-operation \coqin{op} for \coqin{M} and a monad morphism \coqin{e} from \coqin{M} to \coqin{N}, a \newterm{lifting} of \coqin{op} (to \coqin{N}) along \coqin{e} is an \coqin{E}-operation \coqin{op'} for \coqin{N} such that for all \coqin{X}: $$ \coqin{e}_{\coqin{X}} \circ \coqin{op}_{\coqin{X}} = \coqin{op'}_{\coqin{X}} \circ (\coqin{E} {\usebox{\mybox}}{} \coqin{e}_{\coqin{X}}). $$ \tikzset{myblack/.style={->,draw=black}, myblue/.style={->,draw=blue}, myred/.style={->,draw=red}, mygreen/.style={->,draw=green}, myviolet/.style={->,draw=violet}, myorange/.style={->,draw=orange}} \def\myarrow#1{\begin{tikzpicture}[baseline=-0.5ex]\draw[#1,->] (0,0)--(0.3,0);\end{tikzpicture}} \begin{theorem}[Uniform Algebraic Lifting {\cite[Thm.~19]{jaskelioff2009esop}}] \label{thm:theorem19} Given an algebraic \coqin{E}-operation \coqin{op} for \coqin{M} and a monad morphism \coqin{e} from \coqin{M} to \coqin{N}, let \coqin{op'} be $$ \coqin{X} \mapsto \Join{N}{X} \circ \coqin{e}_{(\coqin{N X})} \circ \coqin{op}_{(\coqin{N X})} \circ (\coqin{E} {\usebox{\mybox}}{} \Ret{M}{(N X)}). $$ Then \coqin{op'} is an algebraic \coqin{E}-operation for \coqin{N} and a lifting of \coqin{op} along \coqin{e}. \end{theorem} \begin{proof} The proof that \coqin{op'} is a lifting is depicted by the diagram of Fig.~\ref{fig:diag19}.
\def\lbl#1{${}^{#1}$} \def(a){(a)} \def(b){(b)} \def(c){(c)} \def(d){(d)} \def(e){(e)} \def(f){(f)} \def(g){(g)} \def(h){(h)} \def(i){(i)} \def(j){(j)} \def(k){(k)} \def(l){(l)} \begin{figure}[t] \centering \begin{tikzpicture} \node (EMMX) {\coqin{E(M(M X))}} ; \node (MMX) [right=of EMMX,xshift=2em] {\coqin{M(M X)}} ; \node (EMNX) [above=of EMMX] {\coqin{E(M(N X))}} ; \node (MNX) [above=of MMX] {\coqin{M(N X)}} ; \node (EMX) [below left=of EMMX] {\coqin{E(M X)}}; \node (ENX) [above left=of EMNX] {\coqin{E(N X)}}; \node (MX) [below right=of MMX] {\coqin{M X}}; \node (NX) [above right=of MNX] {\coqin{N X}}; \path (EMX) edge[myblack] node[below] {\lbl{(l)} $\coqin{op}_{\coqin{X}}$} (MX); \path (MX) edge[myblack] node[right] {\lbl{(k)} $\coqin{e}_{\coqin{X}}$} (NX); \path (EMX) edge[myblue] node[left] {\lbl{(e)} $\coqin{E} {\usebox{\mybox}}{} \coqin{e}_{\coqin{x}}$} (ENX) ; \path (ENX) edge[myblue] node[right] {\lbl{(b)} $\coqin{E} {\usebox{\mybox}}{} \coqin{Ret}$} (EMNX) ; \path (EMNX) edge[myblue] node[above] {\lbl{(c)} $\coqin{op}_{\coqin{(N X)}}$} (MNX) ; \path (MNX) edge[myblue] node[left] {\lbl{(d)} $\coqin{Join} \circ \coqin{e}_{\coqin{(N X)}}$} (NX) ; \path (EMX) edge[myred] node[right] {\lbl{(f)} $\coqin{E} {\usebox{\mybox}}{} \coqin{Ret}$} (EMMX) ; \path (EMMX) edge[myred] node[left] {\lbl{(g)} $\coqin{E} {\usebox{\mybox}}{} (\coqin{M} {\usebox{\mybox}}{} \coqin{e}_{\coqin{X}})$} (EMNX) ; \path (EMMX) edge[mygreen] node[below] {\lbl{(h)} $\coqin{op}_{\coqin{(M X)}}$} (MMX) ; \path (MMX) edge[mygreen] node[right] {\lbl{(i)} $\coqin{M} {\usebox{\mybox}}{} \coqin{e}_{\coqin{X}}$} (MNX) ; \path (MMX) edge[myviolet] node[left] {\lbl{(j)} $\coqin{Join}$} (MX) ; \path (ENX) edge[myorange] node[above] {\lbl{(a)} lifting of $\coqin{op}_{\coqin{X}}$} (NX) ; \end{tikzpicture} \caption{Proof of Uniform Algebraic Lifting (Theorem~\ref{thm:theorem19})} \label{fig:diag19} \end{figure} The first step is to show that the path (a){} (\myarrow{myorange}) and the path 
(b){}-(c){}-(d){} (\myarrow{myblue}) are equal, which is by definition of a lifting. The resulting goal is rendered in \textsc{Coq}{} as follows (for any~\coqin{Y}): \begin{minted}{ssr} e X (op X Y) = Join (e (N X) (op (N X) ((E # Ret) ((E # e X) Y)))) \end{minted} The second step of the proof is to show that the path (e){}-(b){} (\myarrow{myblue}) and the path (f){}-(g){} (\myarrow{myred}) are equal, which is achieved by appealing to the functor laws and the naturality of $\Ret{}{}$. More precisely, to prove \begin{minted}{ssr} (E # Ret) ((E # e X) Y) = (E # (M # e X)) ((E # Ret) Y), \end{minted} it suffices to execute the following sequence of rewritings: \begin{minted}[fontsize=\small]{ssr} rewrite -[in LHS]compE -functor_o. (* functor composition law in the lhs *) rewrite -[in RHS]compE -functor_o. (* functor composition law in the rhs *) (* the goal is now: (E # (Ret \o e X)) Y = (E # (M # e X \o Ret)) Y *) rewrite (natural RET). (* naturality of ret *) (* the goal is now: (E # (Ret \o e X)) Y = (E # (Ret \o FId # e X)) Y *) by rewrite FIdf. (* property of the identity functor *) \end{minted} The next step is to show that the path (g){}-(c){} and the path (h){}-(i){} (\myarrow{mygreen}) are equal; this is by naturality of \coqin{op}. The next step is to show that the paths (i){}-(d){} and (j){}-(k){} are equal, which is by the bind law of monad morphisms and naturality of monad morphisms. The last step (equality of the paths (f){}-(h){}-(j){} and (l){}) amounts to proving: \begin{minted}{ssr} op X Y = Join (op (M X) ((E # Ret) Y)). \end{minted} This step depends on an intermediate lemma~\cite[Prop.~17]{jaskelioff2009esop}. Let us explain it because it introduces functions and we will use one of them again later in this paper. Given a natural transformation \coqin{n : E ~> M}, \coqin{psi} is an \coqin{E}-operation for \coqin{M} defined by the function $\coqin{X} \mapsto \Join{}{X} \circ \coqin{n}$.
Given an \coqin{E}-operation for \coqin{M}, \coqin{phi} is a natural transformation \coqin{E ~> M} defined by the function $\coqin{X} \mapsto \coqin{op}_{\coqin{X}} \circ (\coqin{E} {\usebox{\mybox}}{} \Ret{}{})$. It turns out that \coqin{psi} is algebraic and that \coqin{psi} cancels \coqin{phi} for algebraic operations (proofs omitted here, see~\cite{monae}), which proves the last goal. The second part of the proof is to prove that \coqin{op'} is algebraic. This is a direct consequence of the fact that \coqin{psi} is algebraic. It should be noted that, even though the statement of the theorem defines the lifting as the composition of the functions \coqin{Join}, \coqin{e}, etc., it is actually much more practical from the viewpoint of formal proof to define it as \coqin{psi (monadM_nt e \v phi op)}, i.e., the application of the function \coqin{psi} to the vertical composition of \coqin{e} and \coqin{phi op}, because this object (let us call it \coqin{alifting}) is endowed with the properties of algebraic operations, whose immediate availability facilitates the formal proof. \end{proof} The reader can observe in Appendix~\ref{appendix:theorem19} that the complete proof script for Theorem~\ref{thm:theorem19} essentially amounts to a small number of rewritings, as has been partially illustrated in the proof just above. \subsection*{Example: Lifting the get Operation along the Exception Monad Transformer} Let us assume the availability of a type \coqin{S} for states and of a type \coqin{Z} for exceptions. We consider \coqin{M} to be the state monad. To define the lifting of the get operation of \coqin{M} (more precisely its algebraic version seen in Sect.~\ref{sec:algebraic_operations}) along \coqin{exceptT} (Sect.~\ref{sec:transf}), it suffices to call the \coqin{alifting} function with the right arguments: \begin{minted}{ssr} Let M S : monad := ModelState.state S.
Definition aLGet {Z S} : (StateOps.Get.func S).-aoperation (exceptT Z (M S)) := alifting (get_aop S) (Lift (exceptT Z) (M S)). \end{minted} By the typing, we see that the result \coqin{aLGet} is also an algebraic operation. For example, we can check that the resulting sigma-operation is indeed the get operation of the transformed monad: \begin{minted}{ssr} Goal forall Z (S : UU0) X (k : S -> exceptT Z (M S) X), aLGet _ k = StateOps.get_op _ k. by []. \end{minted} \section{Application 3: Formalization of the Lifting of Sigma-Operations} \label{sec:lifting_operations} This section is an application of our formalization of sigma-operations and (functorial) monad transformers of Sect.~\ref{sec:extension} and also of Theorem~\ref{thm:theorem19}. Using \textsc{Monae}{}, we give an equational proof for a theorem that generalizes the lifting of Sect.~\ref{sec:theorem19}, which was restricted to algebraic operations. This corresponds to the second part of the original paper on modular monad transformers~\cite[Sect.~5]{jaskelioff2009esop}. This application requires us to use a non-standard setting of \textsc{Coq}{}. Section~\ref{sec:codensity} introduces a monad transformer whose formalization requires impredicativity. Section~\ref{sec:naturality_of_m} focuses on the main technical difficulty that we identified when going from the pencil-and-paper proofs to a formalization using \textsc{Coq}{}: an innocuous-looking proof that actually calls for an argument based on parametricity. We conclude this section with the formal statement of~\cite[Thm.~27]{jaskelioff2009esop} and its formal proof (Sect.~\ref{sec:theorem27}). \subsection{Impredicativity Setting for the Codensity Monad Transformer} \label{sec:codensity} To implement the lifting of an operation along a functorial monad transformer, Jaskelioff introduces a monad transformer \coqin{codensityT} related to the construction of the codensity monad for an endofunctor~\cite[Def.~23]{jaskelioff2009esop}.
Its formalization requires impredicativity, and if nothing is done, the standard setting of \textsc{Coq}{} would lead to \newterm{universe inconsistencies}. Let us give a bit of background on impredicativity with \textsc{Coq}{}. The type theory of \textsc{Coq}{} is constrained by a hierarchy of universes {\color{setcolor}{\tt Set}}{}, $\coqin{Type}_1$, $\coqin{Type}_2$, etc. The \textsc{Coq}{} language only provides the keywords {\color{setcolor}{\tt Set}}{} and \coqin{Type}; the \textsc{Coq}{} system figures out the right indices for \coqin{Type}s. Universes are not impredicative by default; yet, \textsc{Coq}{} has an option ({\tt -impredicative-set}{}) that changes the logical theory by declaring the universe {\color{setcolor}{\tt Set}}{} as impredicative. This option is useful in \textsc{Coq}{} to formalize System $F$/$F\omega$, their impredicative encodings of data types, and for extraction of programs in CPS style. It is known to be inconsistent with some standard axioms of classical mathematics \cite{impredicativeset,geuvers2001}, but we do not rely on them here\footnote{More precisely, the development we discuss in this paper~\cite[directory \coqin{impredicative_set}]{monae} uses, together with impredicative {\color{setcolor}{\tt Set}}{}, only the standard axioms of functional extensionality and proof irrelevance, which are compatible.}. To keep a firm grip on the universes involved, we fix a few universes at the beginning of the formal development~\cite[file \coqin{ihierarchy.v}]{monae}: \begin{minted}[escapeinside=77]{ssr} Definition UU2 : Type := Type. Definition UU1 : UU2 := Type. Definition UU0 : UU1 := 7{\color{setcolor}{\tt Set}}{}7. \end{minted} and only use them instead of {\color{setcolor}{\tt Set}}{} or \coqin{Type} (so far we have been using \coqin{UU0}, but it is really another name for the native {\color{setcolor}{\tt Set}}{} universe). Now that we have set up \textsc{Coq}{} appropriately, we define the codensity monad transformer.
Given a monad~\coqin{M}, a computation of a value of type \coqin{A} in the monad \coqin{codensityT M} has type \coqin{forall (B : UU0), (A -> M B) -> M B} of type \coqin{UU0}: here, impredicativity comes into play. We abbreviate this type expression as \coqin{MK M A} in the following. We do not detail the formalization of \coqin{codensityT} because it follows the model of the exception monad transformer that we explained in Sect.~\ref{sec:transf}. Let us just display its main ingredients, i.e., the unit, bind, and lift operations~\cite[Def.~23]{jaskelioff2009esop}: \begin{minted}{ssr} Definition retK (A : UU0) (a : A) : MK M A := fun (B : UU0) (k : A -> M B) => k a. Definition bindK (A B : UU0) (m : MK M A) f : MK M B := fun (C : UU0) (k : B -> M C) => m C (fun a : A => (f a) C k). (* definition of codensityTmonadM omitted *) Definition liftK (A : UU0) (m : M A) : codensityTmonadM A := fun (B : UU0) (k : A -> M B) => m >>= k. \end{minted} We can check in \textsc{Coq}{} that they indeed give rise to a monad transformer in the sense of Sect.~\ref{sec:transf}, so that \coqin{codensityT} does have the type \coqin{monadT} (Sect.~\ref{sec:transf}) of monad transformers. \subsection{Parametricity to Prove Naturality} \label{sec:naturality_of_m} The monad transformer \coqin{codensityT} is needed to state the theorem about the lifting of sigma-operations and in particular to define a natural transformation called \coqin{from}~\cite[Prop.~26]{jaskelioff2009esop}. Formally, we can define \coqin{from}'s components as follows (\coqin{M} is a monad): \begin{minted}{ssr} Definition from_component : codensityT M ~~> M := fun (A : UU0) (c : codensityT M A) => c A Ret. \end{minted} At first sight, the naturality of \coqin{from_component} seems obvious and indeed no proof is given in the original paper on modular monad transformers~(see the first of the two statements of~\cite[Prop.~26]{jaskelioff2009esop}). 
It is however a bit more subtle than it appears and, as a matter of fact, it is shown in a later paper that this claim is wrong: $\coqin{from}_{\coqin{M}}$ cannot be a natural transformation in the setting of $F\omega$ \cite[p.~4452]{jaskelioff2010}. We explain how we save the day in \textsc{Coq}{} by relying on parametricity. We state the naturality of $\coqin{from}_{\coqin{M}}$ as \coqin{naturality (codensityT M) M from_component}. This goal reduces\footnote{By functional extensionality, by naturality of {\tt Ret}, and by definition of {\tt from\char`\_{}component}.} to: \begin{minted}{ssr} forall (m : codensityT M A) (h : A -> B), (M # h \o m A) Ret = m B (M # h \o Ret). \end{minted} This last goal is an instance of a more general statement (recall from Sect.~\ref{sec:codensity} that \coqin{MK M} is the action on the objects of the monad \coqin{codensityT M}): \begin{minted}{ssr} forall (M : monad) (A : UU0) (m : MK M A) (A1 B : UU0) (h : A1 -> B), M # h \o m A1 = m B \o (fun f : A -> M A1 => (M # h) \o f). \end{minted} This is actually a special case of naturality as one can observe by rewriting the type of \coqin{m} with the appropriate functors: \coqin{exponential_F A \O M} and \coqin{M}, where \coqin{exponential_F A} is the functor whose action on an object \coqin{X : UU0} is \coqin{A -> X}: \begin{minted}{ssr} forall (M : monad) (A : UU0) (m : MK M A), naturality (exponential_F A \O M) M m \end{minted} Unfortunately, we are not able to prove it in plain \textsc{Coq}{} (with or without impredicative {\color{setcolor}{\tt Set}}{}), even if we consider particular functors \coqin{M} such as the identity functor. The solution consists in assuming an axiom of parametricity for each functor~\coqin{M} and deriving naturality from it. That is, we follow the approach advocated by Wadler~\cite{DBLP:conf/fpca/Wadler89}.
It has been shown to be sound in \textsc{Coq}{}~\cite{DBLP:conf/csl/KrishnaswamiD13,bernardy2012lics,keller2012csl,atkey2014popl} and it is implemented by the \textsc{ParamCoq}{} plugin~\cite{keller2012csl}. For instance, let us describe what happens when \coqin{M} is the list monad. First, we rewrite the naturality statement above in the case of the list functor (\coqin{map} is the map function of lists): \begin{minted}{ssr} forall (X Y : UU0) (f : X -> Y) (g : A -> seq X), (map f \o m X) g = (m Y \o (exponential_F A \O M) # f) g. \end{minted} The proof proceeds by induction on a proof-term of type \begin{minted}{ssr} list_R X Y (fun x y => f x = y) (m X g) ((m Y \o (exponential_F A \O M) # f) g) \end{minted} where \coqin{list_R X Y X_R l1 l2} means that the elements of lists~\coqin{l1} and~\coqin{l2} are pairwise related by the relation \coqin{X_R}. The role of \textsc{ParamCoq}{} is to generate definitions (including \coqin{list_R}) for us to be able to produce this proof. Concretely, starting from \coqin{MK}, \textsc{ParamCoq}{} generates the logical relation \coqin{T_R} of the following type (obtained by induction on types~\cite{Goubault-LarrecqLN08}): \begin{minted}{ssr} (forall X : UU0, (A -> list X) -> list X) -> (forall X : UU0, (A -> list X) -> list X) -> UU0 \end{minted} Here, \coqin{T_R m1 m2} expands to: \begin{minted}{ssr} forall (X1 X2 : UU0) (RX : X1 -> X2 -> UU0) (f1 : A -> list X1) (f2 : A -> list X2), (forall a1 a2 : A, a1 = a2 -> list_R X1 X2 RX (f1 a1) (f2 a2)) -> list_R X1 X2 RX (m1 X1 f1) (m2 X2 f2) \end{minted} It is then safe to assume the following parametricity axiom: \begin{minted}{ssr} Axiom param : forall m : MK M A, T_R m m. \end{minted} The application of \coqin{param} is the first step to produce the proof required for the induction: \begin{minted}[escapeinside=77]{ssr} have : list_R X Y (fun x y => f x = y) (m X g) ((m Y \o (exponential_F A \O M) # f) g). apply: param.
(* 7$\forall$7 a a', a = a' -> list_R X Y (fun x y => f x = y) (g a) (((exponential_F A \O M) # f) g a') *) \end{minted} The goal generated is proved by induction on \coqin{g a}, which is a list. The same approach is applied to other monads (identity, exception, option, state)~\cite[file \coqin{iparametricity_codensity.v}]{monae}. \subsection{Lifting of Sigma-operations: Formal Statement} \label{sec:theorem27} Before stating and proving the main theorem about lifting of sigma-operations, we formally define a special algebraic operation~\cite[Def.~25]{jaskelioff2009esop}. Let \coqin{E} be a functor, \coqin{M} be a monad, and \coqin{op} be an \coqin{E}-operation for \coqin{M}. The natural transformation \coqin{kappa} from \coqin{E} to \coqin{codensityT M} is defined by the components $$ \coqin{A}, (\coqin{s : E A}), \coqin{B}, (\coqin{k : A -> M B}) \mapsto \coqin{op} \, \coqin{B} \, ((\coqin{E} {\usebox{\mybox}}{} \coqin{k}) \, \coqin{s}) $$ and \coqin{psik} is the algebraic \coqin{E}-operation for the monad \coqin{codensityT M} defined by: \begin{minted}{ssr} Definition psik : E.-aoperation (codensityT M) := psi (kappa op). \end{minted} Recall that the function \coqin{psi} has been defined in the proof of Theorem~\ref{thm:theorem19}. \begin{theorem}[Uniform Lifting {\cite[Thm.~27]{jaskelioff2009esop}}] \label{thm:theorem27} Let \coqin{M} be a monad such that any computation \coqin{m : MK M A} is natural in the sense of Sect.~\ref{sec:naturality_of_m} (hypothesis \coqin{naturality_MK}). Let \coqin{op} be an \coqin{E}-operation for \coqin{M} and \coqin{t} be a functorial monad transformer.
% We denote: \begin{itemize} \item by \coqin{op1} the term \coqin{Hmap t (from naturality_MK)} (see Sect.~\ref{sec:naturality_of_m} for \coqin{from}, \coqin{Hmap} was defined in Sect.~\ref{sec:fmt}), \item by \coqin{op2} the algebraic lifting along \coqin{Lift t} of \coqin{(psik op)} (see just above for \coqin{psik}), and \item by \coqin{op3} the term \mintinline{ssr}{E ## Hmap t (monadM_nt (Lift codensityT M))} (see Sect.~\ref{sec:codensity} for \coqin{codensityT}). \end{itemize} Then the operation \coqin{op1 \v op2 \v op3} (where \coqin{\v} is the vertical composition seen in Sect.~\ref{sec:background}) is a lifting of \coqin{op} along \coqin{t}. \end{theorem} \begin{proof} The proof is depicted by the diagram in Fig.~\ref{fig:diag27}. \def\gbl#1{${}^{#1}$} \def(a){(a)} \def(b){(b)} \def(c){(c)} \def(d){(d)} \def(e){(e)} \def(f){(f)} \def(g){(g)} \def(h){(h)} \def(i){(i)} \def(j){(j)} \def(k){(k)} \def(l){(l)} \begin{figure}[t] \centering \begin{tikzpicture} \node (EKMX) {\coqin{E( K M X )}} ; \node (KMX) [right=of EKMX] {\hspace{1em}\coqin{K M X}} ; \node (ETKMX) [above=of EKMX] {\coqin{E(T K M X)}} ; \node (TKMX) [above=of KMX] {\coqin{T K M X}} ; \node (EMX) [below left=of EKMX,xshift=-1.8cm] {\coqin{E( M X )}}; \node (ETMX) [above left=of ETKMX,xshift=-1.8cm] {\coqin{E(T M X)}}; \node (MX) [below right=of KMX,xshift=1.8cm] {\hspace{1em}\coqin{M X}}; \node (TMX) [above right=of TKMX,xshift=1.8cm] {\coqin{T M X}}; \path (EMX) edge[myblack] node[below] {\footnotesize \gbl{(l)} $\coqin{op}_{\coqin{X}}$} (MX); \path (MX) edge[myblack] node[above,rotate=-90] {\footnotesize \gbl{(k)} $(\coqin{Lift t M})_{\coqin{X}}$} (TMX); \path (EMX) edge[myblue] node[above,rotate=90] {\footnotesize \gbl{(e)} $\coqin{E} {\usebox{\mybox}}{} (\coqin{Lift t M})_{\coqin{X}}$} (ETMX) ; \path (ETMX) edge[myblue] node[right] {\footnotesize \gbl{(b)} $\coqin{op3}_{\coqin{X}}$} (ETKMX) ; \path (ETKMX) edge[myblue] node[above] {\footnotesize \gbl{(c)} $\coqin{op2}_{\coqin{X}}$} (TKMX) ; 
\path (TKMX) edge[myblue] node[left] {\footnotesize \gbl{(d)} $\coqin{op1}_{\coqin{X}}$} (TMX) ; \path (EMX) edge[myred] node {\footnotesize \gbl{(f)} $\coqin{E} {\usebox{\mybox}}{} (\coqin{Lift codensityT M})_{\coqin{X}}$} (EKMX) ; \path (EKMX) edge[myred] node[left] {\footnotesize \gbl{(g)} \scriptsize $\coqin{E} {\usebox{\mybox}}{} (\coqin{Lift t (codensityT M)})_{\coqin{X}}$} (ETKMX) ; \path (EKMX) edge[mygreen] node[below] {\footnotesize \gbl{(h)} $(\coqin{psik op})_{\coqin{X}}$} (KMX) ; \path (KMX) edge[mygreen] node[right] {\footnotesize \gbl{(i)} \scriptsize $(\coqin{Lift t (codensityT M)})_{\coqin{X}}$} (TKMX) ; \path (KMX) edge[myviolet] node[left] {\footnotesize \gbl{(j)} $\coqin{from_nt}_{\coqin{X}}$} (MX) ; \path (ETMX) edge[myorange] node[above] {\footnotesize \gbl{(a)} lifting of $\coqin{op}_{\coqin{X}}$} (TMX) ; \end{tikzpicture} \caption{Proof of Uniform Lifting (Theorem~\ref{thm:theorem27})} \label{fig:diag27} \end{figure} The first step of the proof is to unfold the definition of lifting (which amounts to showing that the paths (a){} (\myarrow{myorange}) and (b){}-(c){}-(d){} are equal). Consequently, the proof goal is rendered in \textsc{Coq}{} as follows (for all \coqin{X : UU0}): \begin{minted}{ssr} Lift t M X \o op X = (op1 \v op2 \v op3) X \o E # Lift t M X \end{minted} The second step of the proof is to show that the path (e){}-(b){} and the path (f){}-(g){} (\myarrow{myred}) are equal, which is achieved by appealing to the law of functor composition and the naturality of \coqin{Hmap}. The next step is to show that the path (g){}-(c){} and the path (h){}-(i){} (\myarrow{mygreen}) are equal; this is by applying Theorem~\ref{thm:theorem19}. 
At this point, the goal becomes: \begin{minted}{ssr} Lift t M X \o op X = (op1 X \o (Lift t (codensityT M) X \o psik op X)) \o E # Lift codensityT M X \end{minted} It happens that we can use the naturality of \coqin{Hmap} to make the \coqin{from} function appear in the right-hand side of the goal: \begin{minted}{ssr} Lift t M X \o op X = ((Lift t M X \o from naturality_MK X) \o psik op X) \o E # Lift codensityT M X \end{minted} The last step is to identify \coqin{op} with the composition of the \coqin{from} function, \coqin{psik op}, and \mintinline{ssr}{E # Lift codensityT M}, which is the purpose of a lemma~\cite[Prop.~26]{jaskelioff2009esop} (see \cite[file \coqin{ifmt_lifting.v}, lemma \coqin{psikE}]{monae}). \end{proof} The proof script corresponding to the proof above is reproduced in Appendix~\ref{appendix:theorem27}. Finally, we show that, for all the monad transformers considered in this paper, the lifting of an algebraic operation provided by Theorem~\ref{thm:theorem27} coincides with the one provided by Theorem~\ref{thm:theorem19}. This corresponds to the last result about modular monad transformers~\cite[Prop.~28]{jaskelioff2009esop}. \section{Related Work} \label{sec:related_work} The example we detail in Sect.~\ref{sec:application} adds to several examples of monadic equational reasoning~\cite{gibbons2011icfp,gibbons2012utp,mu2019tr2,mu2019tr3,pauwels2019mpc,mu2020flops}. Its originality is to use a parameterized interface and the \coqin{RunStateT} command, which are typical of programs written using monad transformers. Huffman formalizes three monad transformers in the Isabelle/HOL proof assistant~\cite{huffman2012icfp}. This experiment is part of a larger effort to overcome the limitations of Isabelle/HOL type classes to reason about Haskell programs that use (Haskell) type classes.
Compared to Isabelle/HOL, the type system of \textsc{Coq}{} is more expressive, so that we could formalize a much larger theory, even relying on extra features of \textsc{Coq}{} such as impredicativity and parametricity to do so. Maillard proposes a metalanguage to define monad transformers in the \textsc{Coq}{} proof assistant~\cite[Chapter~4]{maillard2019phd}. It is an instance implementation of one element of a larger framework to verify programs with monadic effects using Dijkstra monads~\cite{maillard2019icfp}. The lifting of operations is one topic of this framework, but it does not go as far as the deep analysis of Jaskelioff~\cite{jaskelioff2009phd,jaskelioff2009esop,jaskelioff2010}. There are also formalizations of monads and their morphisms that focus on the mathematical aspects, e.g., UniMath~\cite{UniMath}. However, the link to the monad transformers of functional programming is not made. Monad transformers are one approach to combining effects. Algebraic effects are a recent alternative. It turns out that the two are related~\cite{schrijvers2019haskell} and we have started to extend \textsc{Monae}{} to formally clarify this relation. \section{Conclusions and Future Work} \label{sec:conclusion} In this paper, we extended \textsc{Monae}{}, a formalization of monadic equational reasoning, with monad transformers. We explained how it helps us to better organize the models of monads, thanks to sigma-operations in particular. We also explained how to extend the hierarchy of monad interfaces to handle programs written with monad transformers in mind. We also used our formalization of monad transformers to formalize the theory of liftings of modular monad transformers~\cite{jaskelioff2009esop} using equational reasoning. For that purpose, we needed to fix the original presentation by using \textsc{Coq}{}'s impredicativity and parametricity. The main result of this paper is a robust, formal theory of monad transformers.
We plan to extend the hierarchy of monad interfaces of \textsc{Monae}{} similarly to how we proceeded for \coqin{exceptStateRunMonad}. Such an extension will call for more models to be formalized and we expect our formalized theory of liftings to be useful on this occasion. Results up to Sect.~\ref{sec:theorem19} hold whether or not {\color{setcolor}{\tt Set}}{} is impredicative. In contrast, the setting of Sect.~\ref{sec:lifting_operations} conflicts with \textsc{Monae}{} programs relying on some data structures from the \textsc{MathComp}{} library~\cite{mathcomp} (such as fixed-size lists) or from the \textsc{InfoTheo}{} library~\cite{infotheo} (such as probability distributions) because these data structures are in \coqin{Type} and cannot be computed with monads in~{\color{setcolor}{\tt Set}}{}. One could think about reimplementing them but this is a substantial amount of work. A cheap way to preserve these data structures together with the theorem on lifting of sigma-operations is to disable universe checking as soon as this theorem is used; this way, monads can stay in~\coqin{Type}. Disabling universe checking is not ideal because it is unsound in general\footnote{One can derive \coqin{False} by applying a variant of Hurkens paradox (see \url{https://coq.inria.fr/library/Coq.Logic.Hurkens.html}).}; note however that this is sometimes used for the formalization of category-theoretic notions (e.g., \cite[Sect.~6]{ahrens2017csl}). How to improve this situation is another direction for future work. \paragraph{Acknowledgements} We acknowledge the support of the JSPS KAKENHI Grant Number 18H03204. We thank all the participants of the JSPS-CNRS bilateral program ``FoRmal tools for IoT sEcurity'' (PRC2199) for fruitful discussions. We also thank Takafumi Saikawa for his comments. This work is based on joint work with C\'elestine Sauvage~\cite{sauvage2020jfla}.
\section{Convex sets and projection} \label{sec:convex} We will prove our main theorems in the more general setting of closed convex cones. Our goal is the following general theorem about projections of convex sets, which will quickly imply Theorem~\ref{thm:generic-ur-min-stress-kernel}. (See Definitions~\ref{def:extreme}, \ref{def:exposed}, and~\ref{def:univ-rigid} for the terms used.) \begin{theorem} \label{thm:urexp} Let $K$ be a closed line-free convex semi-algebraic set in $\mathbb R^m$, and $\pi: \mathbb R^m \to \mathbb R^n$ a projection, both defined over~$\mathbb Q$. Suppose $x$ is locally generic in $\ext_k(K)$ and universally rigid under $\pi$. Then $\pi(x)$ is $k$-exposed. \end{theorem} \subsection{Extreme points} \label{sec:extremity} \begin{definition}\label{def:extreme} Let $K$ be a non-empty, convex set. A point $x\in K$ is \emph{$k$-extreme} if $f(x) \le k$. (Recall from Definition~\ref{def:faces} that $f(x)$ is the dimension of the face of $K$ containing~$x$.) Let $\ext_k(K)$ be the set of $k$-extreme points in~$K$. It is easy to see that $\ext_k(K) \subset \ext_{k+1}(K)$ and that if $K$ is closed, then $\ext_k(K)$ is closed. \end{definition} We will also use the following elementary propositions (see, for example, the exercises in Chapter 2.4 of~\cite{grunbaum2003convex}). \begin{proposition} For $K$ a convex set and $x \in K$, the following statements are equivalent: \begin{itemize} \item $f(x) \le k$, and \item $x$ is not in the relative interior of any non-degenerate $(k+1)$-simplex contained in~$K$. \end{itemize} \end{proposition} \begin{proposition}\label{prop:face-segment} \label{prop:altFace} For $K$ a convex set and $x\in K$, the face $F(x)$ is the set of points $z \in K$ so that there is a $z' \in K$ with $x$ in the relative interior of the segment $[z', z]$. \end{proposition} \begin{remark} One special case of Proposition~\ref{prop:face-segment} is when $z=x$, in which case we also take $z'=x$ and the segment consists of a single point.
\end{remark} \begin{corollary} For $K$ a convex set, the faces of~$K$ are the convex subsets~$F$ of~$K$ such that every line segment in $K$ with a relative interior point in $F$ has both endpoints in~$F$. \end{corollary} \subsection{Exposed points} \label{sec:exposedness} \begin{definition}\label{def:exposed} A point $x\in K$ is \emph{$k$-exposed} if there is a closed half-space~$H$ containing~$x$ so that $\dim H \cap K \le k$. Let $\exp_k(K)$ be the set of $k$-exposed points in~$K$. \end{definition} \begin{figure} \begin{center} $\mfigb{racetrack-0}$ \end{center} \caption{Examples of extreme and exposed points. For the racetrack shape illustrated (bounded by semi-circles and line segments), $a$ is $0$-extreme and $0$-exposed, $b$ is $1$-extreme and $1$-exposed, $c$ is $2$-extreme and $2$-exposed, and $d$ is $0$-extreme and $1$-exposed.} \label{fig:examples-exposed} \end{figure} See Figure~\ref{fig:examples-exposed} for some examples of $k$-extreme and $k$-exposed points, including a case where they differ. If $x$ is $k$-exposed, it is also $k$-extreme, as any simplex containing $x$ in its relative interior is contained in any supporting hyperplane. The following theorem is a crucial tool in our proof. \begin{otheorem}[Asplund~\cite{Asplund63:k-extreme}] \label{thm:k-exposed-dense} For $K$ a closed convex set, the $k$-exposed points are dense in the $k$-extreme points. \end{otheorem} \begin{remark} Asplund only states and proves Theorem~\ref{thm:k-exposed-dense} for compact convex sets. The result follows for any closed convex set~$K$ as follows. For $x\in \ext_k(K)$, fix $R > 0$ and consider the compact convex set $K' \coloneqq K \cap \overline{B}_R(x)$. Then Asplund's theorem says we can find $k$-exposed points~$z$ in~$K'$ that are arbitrarily close to~$x$. If $z$ is sufficiently close to~$x$ (say, within $R/2$ of $x$), it will also be $k$-exposed in~$K$, as desired. 
\end{remark} Theorem~\ref{thm:k-exposed-dense} is an extension of a theorem of Straszewicz~\cite{Straszewicz35:exposed-dense}, who proved it in the case $k=0$. There are several improvements of Theorem~\ref{thm:k-exposed-dense}, replacing ``dense'' with various stronger assertions. In our applications we will consider semi-algebraic sets~$K$ defined over~$\mathbb Q$. Note that for any such set~$K$, $\ext_k(K)$ and $\exp_k(K)$ are also semi-algebraic over~$\mathbb Q$, as the $k$-extreme and $k$-exposed conditions can be phrased algebraically. \begin{corollary}\label{cor:generic-extreme-exposed} Let $K$ be a closed, convex, semi-algebraic set. If $x$ is locally generic in $\ext_k(K)$, then $x$ is $k$-exposed. \end{corollary} \begin{proof} This follows from an application of Lemma~\ref{lem:dense} to the inclusion $\exp_k(K) \subset \ext_k(K)$ in a small neighborhood around $x$. \end{proof} \subsection{Projection} \label{sec:projection} The two convex sets $M(\Delta_v)$ and $M(\Gamma)$ are related by a projection from $\mathbb R^{\binom{n}{2}}$ to~$\mathbb R^e$. We will consider this situation more generally. Throughout this section, fix two Euclidean spaces, $\mathbb E^m$ and $\mathbb E^n$, with a surjective projection map $\pi: \mathbb E^m \to \mathbb E^n$ and a closed convex set $K \subset \mathbb E^m$. We will work with the image $\pi(K) \subset \mathbb E^n$. Let $\pi_K$ be the restriction of $\pi$ to $K$. \begin{remark} In general, $\pi(K)$ need not be closed, even if $K$ is. We continue to work with $\pi(K)$ and its ``faces'' as defined in Definition~\ref{def:faces}, even if it is not closed. \end{remark} \begin{definition}\label{def:univ-rigid} We say that $x\in K$ is \emph{universally rigid (UR) under $\pi$} if $\pi_K^{-1}(\pi(x))$ is the single point~$x$. \end{definition} Note that for $y \in \pi(K)$, $\pi_K^{-1}(y)$ is always a convex set in its own right. \begin{lemma}\label{lem:faces-nest} Let $K$ be a convex set and $\pi$ a projection. 
For any $x\in K$, $\pi(F(x)) \subset F(\pi(x))$. \end{lemma} \begin{proof} The set $\pi(F(x))$ is a convex set containing~$\pi(x)$ in its relative interior, so by definition it is contained in $F(\pi(x))$. \end{proof} A point that is $0$-extreme in a convex set is called a \emph{vertex} of the set. A convex set is \emph{line free} if it contains no complete affine line. Recall that a non-empty, closed, line-free convex set has a vertex (see e.g.,~\cite[2.4.6]{grunbaum2003convex}). Our convex sets of interest, $M(\Delta_v)$ and $M(\Gamma)$, are closed and also automatically line free, as the squared lengths take only nonnegative values and any line contains both positive and negative values in at least one coordinate. \begin{proposition} \label{prop:dims-decrease} Let $K$ be a convex set and $\pi$ a projection. For any $y \in \pi(K)$ and $x$ a vertex of $\pi_K^{-1}(y)$, $F(x)$ maps injectively under $\pi_K$ into $F(y)$. In particular, $f(x) \le f(y)$. If $K$ is closed and line free, then for every $y \in \pi(K)$ there is an $x \in \pi^{-1}(y)$ so that $f(x) \le f(y)$. \end{proposition} \begin{proof} Let $A(x)$ be the smallest affine subspace containing $F(x)$. Suppose $A(x)$ contained a direction vector $v$ in the kernel of $\pi$. Then, since $x$ is in the relative interior of $F(x)$, for small enough $\epsilon$ we would have the segment $[x+\epsilon v,x-\epsilon v]$ contained in $F(x)$ and also in $\pi_K^{-1}(y)$. This would contradict the assumption that $x$ is a vertex of $\pi_K^{-1}(y)$. Thus $F(x)$ maps injectively under~$\pi_K$. From Lemma~\ref{lem:faces-nest} we see that $f(x) \le f(y)$. For the last part of the proposition, observe that if $K$ is closed and line free, so is $\pi_K^{-1}(y)$, and so $\pi_K^{-1}(y)$ has a vertex, which is the desired point~$x$ by the first part. \end{proof} \begin{proposition} \label{prop:dims-increase} Let $K$ be a convex set and $\pi$ a projection.
For any $y \in \pi(K)$ and $x$ in the relative interior of $\pi_K^{-1}(y)$, $F(x) = \pi_K^{-1}(F(y))$. In particular, $f(x) \ge f(y)$. \end{proposition} \begin{proof} By Lemma~\ref{lem:faces-nest}, we already know that $F(x) \subset \pi_K^{-1}(F(y))$, so we just need to show the other inclusion. Pick any $y_1 \in F(y)$ and $x_1 \in \pi_K^{-1}(y_1)$. We must show that $x_1 \in F(x)$. Since $y_1 \in F(y)$, by Proposition~\ref{prop:face-segment} there is a $y_2 \in \pi(K)$ so that $y \in \Int([y_1, y_2])$. Let $x_2 \in \pi_K^{-1}(y_2)$. Let $x' \in K$ be the (unique) point of intersection of $[x_1, x_2]$ with $\pi_K^{-1}(y)$. Since $x \in \Int(\pi_K^{-1}(y))$, there is a point $x''\in\pi_K^{-1}(y)$ with $x \in \Int([x', x''])$. But then $x$ is in the relative interior of the simplex $[x_1, x_2, x'']$, which implies that $x_1\in F(x)$. \end{proof} In order to be able to apply Asplund's theorem at $\pi(x)$ we need $\pi(x)$ to be locally generic in $\ext_k(\pi(K))$. The following lemma will assist us. \begin{lemma} \label{lem:proper-map-inverse} Let $K$ be a closed, convex set in $\mathbb R^m$ and let $\pi_K: K \to \mathbb R^n$ be a projection. Let $x\in K$ be universally rigid under $\pi$. Then for any $\epsilon > 0$ there is a $\delta > 0$ so that $\pi_K^{-1}(B_\delta(\pi(x))) \subset B_\epsilon(x) \cap K$. \end{lemma} Intuitively, the only points in~$K$ that map close to $\pi(x)$ are close to~$x$. \begin{proof} Suppose not. Then there is an $\epsilon$ and a sequence of points $x_i$ with the following property: $\pi(x_i)$ approach $\pi(x)$, while $d(x,x_i) > \epsilon$. Let $x_i'$ be the point on the interval $[x,x_i]$ that is a distance exactly $\epsilon$ from $x$. By convexity $x_i' \in K$. Then the $x_i'$ are in a bounded and closed set and therefore have an accumulation point $x'\in K$, which will also have distance $\epsilon$ from~$x$. By the linearity of $\pi$ the $\pi(x'_i)$ also approach $\pi(x)$. So by continuity $\pi(x')=\pi(x)$.
This contradicts the universal rigidity of~$x$. \end{proof} And now we can prove the following. \begin{lemma} \label{lem:loc-gen} Let $K$ be a closed line-free convex semi-algebraic set in $\mathbb R^m$ and $\pi: \mathbb R^m \to \mathbb R^n$ a projection, both defined over~$\mathbb Q$. Suppose $x$ is locally generic in $\ext_k(K)$ and universally rigid under $\pi$. Then $\pi(x)$ is locally generic in $\ext_k(\pi(K))$. \end{lemma} \begin{proof} By Proposition~\ref{prop:dims-decrease}, $\ext_k(\pi(K)) \subset \pi(\ext_k(K))$. By Proposition~\ref{prop:dims-increase} and the universal rigidity of $x$, we have that $\pi(x)$ is in $\ext_k(\pi(K))$. Let $V\coloneqq \ext_k(K) \cap B_\epsilon(x)$ and $W\coloneqq \ext_k(\pi(K)) \cap B_\delta(\pi(x))$. For sufficiently small~$\epsilon$, by local genericity, $x$ is generic in $V$. Meanwhile, $W \subset \pi(\ext_k(K))$, and, from Lemma~\ref{lem:proper-map-inverse}, for small enough $\delta$, we have $\pi_K^{-1}(W) \subset B_\epsilon(x)$ and thus $W \subset \pi(V)$. Thus from Lemma~\ref{lem:image-generic}, we have $\pi(x)$ generic in $W$. Thus $\pi(x)$ is locally generic in $\ext_k(\pi(K))$. \end{proof} \begin{remark} In our special case where $K$ is the cone of positive semidefinite matrices, $x$ is in fact generic in $\ext_k(K)$ (which is irreducible), and thus the second paragraph in the above proof (and in turn Lemma~\ref{lem:proper-map-inverse}) is not needed. \end{remark} Finally, in order to be able to apply Asplund's theorem to $\pi(K)$ we need $\pi(K)$ to be a closed set. In the graph embedding case, we can use the fact that $\pi$ is a proper map whenever $\Gamma$ is connected, and thus $\pi(K)$ must be closed. In the setting of a general $K$ and $\pi$ we can argue closedness using standard techniques.
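To make the possible failure of closedness concrete, consider the closed, line-free convex set $K = \{(x,y) \mid x > 0,\ xy \ge 1\}$ with $\pi(x,y) = x$: here $\pi(K) = (0,\infty)$ is not closed, and the vector $v = (0,1)$ satisfies $\pi(v) = 0$ while $x + \lambda v \in K$ for every $x \in K$ and $\lambda \ge 0$. The following snippet (a toy numeric check of ours, not part of the paper) verifies this:

```python
# Toy numeric check (ours, not from the paper) that a projection of a closed
# convex set need not be closed.  Here K = {(x, y) : x > 0, x*y >= 1} is
# closed, convex, and line free, and pi(x, y) = x projects it onto (0, oo).

def in_K(x, y):
    return x > 0 and x * y >= 1

# Points (1/2^k, 2^k) lie in K and project arbitrarily close to 0 ...
projections = []
for k in range(1, 21):
    n = 2.0 ** k
    assert in_K(1.0 / n, n)   # exact in floating point: (1/2^k) * 2^k == 1
    projections.append(1.0 / n)
assert min(projections) < 1e-5

# ... but no point of K projects to 0 itself, so pi(K) = (0, oo) is not closed.
assert not in_K(0.0, 1e12)

# The culprit: moving along v = (0, 1) stays in K, while pi(v) = 0.
x0, y0, lam = 2.0, 0.5, 1e6
assert in_K(x0, y0) and in_K(x0, y0 + lam)
```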
\begin{definition} A direction of recession $v$ for a convex set~$K$ is a vector such that for every (equivalently, some) $x \in K$ and every $\lambda \geq 0$ we have $x + \lambda v \in K$ (see e.g.~\cite[Chapter 8]{rockefeller1970convex}). \end{definition} A sufficient condition for closedness is given by the following theorem~\cite[Theorem 9.1]{rockefeller1970convex}. \begin{otheorem} \label{thm:fibcl} Let $K$ be a line-free closed convex set and $\pi$ a projection. If $K$ has no nonzero direction of recession $v$ with $\pi(v)=0$, then $\pi(K)$ is closed. \end{otheorem} \begin{corollary} \label{cor:projclosed} Let $K$ be a line-free closed convex set and $\pi$ a projection. If there is a point $x\in K$ that is universally rigid under $\pi$, then $\pi(K)$ is closed. \end{corollary} \begin{proof} Suppose there were a nonzero direction of recession $v$ for $K$ with $\pi(v)=0$. Then $\pi(x + v) = \pi(x)$ with $x+v \in K$ and $x + v \neq x$. This would contradict the universal rigidity of $x$. \end{proof} Putting this all together, we can deduce Theorem~\ref{thm:urexp}. \begin{proof}[Proof of Theorem~\ref{thm:urexp}] From Corollary~\ref{cor:projclosed}, $\pi(K)$ is closed. From Lemma~\ref{lem:loc-gen}, $\pi(x)$ is locally generic in $\ext_k(\pi(K))$. Thus by Corollary~\ref{cor:generic-extreme-exposed}, $\pi(x)$ is $k$-exposed. \end{proof} \subsection{Proof of main theorem} We are now in a position to complete the proof of Theorem~\ref{thm:generic-ur-min-stress-kernel}. Recall that we have a universally rigid framework $(p,\Gamma)$ with $p$ generic in $C^d({\mathcal V})$. Let $K$ be $M(\Delta_v)$, let $\pi$ be its projection to $M(\Gamma)$, let $k$ be $\binom{d+1}{2}$, and let $x \in M_d(\Delta_v) \subset K$ be $\ell_\Delta(p)$. The assumption that $p$ is generic in $C^d({\mathcal V})$ implies that $x$ is generic in $M_d(\Delta_v) = \ext_k(K)$. From Theorem~\ref{thm:urexp}, $\pi(x)$ is $k$-exposed.
So there must be a halfspace $H$ in $\mathbb R^e$ whose intersection with $\pi(K)$ is exactly $F(\pi(x))$. The preimage $\pi^{-1}(H)$ is a halfspace in $\mathbb R^{\binom{v}{2}}$ whose intersection with $K$ is $\pi_K^{-1}(F(\pi(x)))$, which by Proposition~\ref{prop:dims-increase} is $F(x)$. When $v>d+1$, $\pi^{-1}(H)$ must be tangent to $K$. Since $K$ is a cone, the boundary of $\pi^{-1}(H)$ must pass through the origin. Thus $\pi^{-1}(H)$ is represented by a dual vector $\phi\in \bigl(\mathbb R^{\binom{v}{2}}\bigr)^*$, in the sense that $\pi^{-1}(H) = \bigl\{\,y \in \mathbb R^{\binom{v}{2}} \mathbin{\big\vert} \phi(y) \geq 0\,\bigr\}$. Since we started with a halfspace in $\mathbb R^e$, $\phi_{ij} = 0$ for $\{i,j\} \not\in {\mathcal E}$. From Lemma~\ref{lem:covec-gamma-rankpsdstress}, then, $\Omega\coloneqq M(\phi)$ must be a positive semidefinite equilibrium stress matrix of $(p,\Gamma)$ with rank $v-d-1$. \section{Algebraic geometry preliminaries} \label{sec:algebr-geom} We start with some preliminaries about semi-algebraic sets from real algebraic geometry, somewhat specialized to our particular case. For a general reference, see, for instance, the book by Bochnak, Coste, and Roy~\cite{BCR98:RealAlgGeom}. \begin{definition} An affine real \emph{algebraic set} or \emph{variety}~$V$ contained in~$\mathbb R^n$ is a subset of $\mathbb R^n$ that is defined by a set of algebraic equations. It is \emph{defined over~$\mathbb Q$} if the equations can be taken to have rational coefficients. An algebraic set has a \emph{dimension} $\dim(V)$, which we will define as the largest $t$ for which there is an open subset of~$V$ (in the Euclidean topology) homeomorphic to $\mathbb R^t$. \end{definition} \begin{definition} A \emph{semi-algebraic set}~$S$ is a subset of $\mathbb R^n$ defined by algebraic equalities and inequalities; alternatively (by a non-trivial theorem), it is the image of an algebraic set (defined only by equalities) under an algebraic map.
It is \emph{defined over~$\mathbb Q$} if the equalities and inequalities have rational coefficients. Like algebraic sets, a semi-algebraic set has a well-defined (maximal) dimension~$t$. \end{definition} We next define genericity in greater generality and give some basic properties. \begin{definition} A point in a (semi-)algebraic set~$V$ defined over~$\mathbb Q$ is \emph{generic} if its coordinates do not satisfy any algebraic equation with coefficients in~$\mathbb Q$ besides those that are satisfied by every point on~$V$. A point $x$ on a semi-algebraic set $S$ is \emph{locally generic} if for small enough $\epsilon$, $x$ is generic in $S \cap B_\epsilon(x)$. \end{definition} \begin{remark} A semi-algebraic set $S$ will have no generic points if its Zariski closure over $\mathbb Q$ is reducible over $\mathbb Q$. In this case all points in each irreducible component will satisfy some specific equations not satisfied everywhere over $S$. But even in this case, almost every point in the set will be generic within its own component. Such points will still be locally generic in $S$. \end{remark} We will need a few elementary lemmas on generic points in semi-algebraic sets. \begin{lemma} \label{lem:dense} If $X$ and $Y$ are both semi-algebraic sets defined over $\mathbb Q$, with $X\subset Y$ and $X$ dense in $Y$ (in the Euclidean topology), then $X$ contains all of the locally generic points of~$Y$. \end{lemma} \begin{proof} Due to the density assumption, $Y\backslash X$ must be a semi-algebraic set defined over $\mathbb Q$ with dimension less than that of $Y$. The Zariski closure of a semi-algebraic set maintains its dimension; thus all points in $Y\backslash X$ must satisfy some algebraic equation that is non-zero over $Y$, and thus these points must be non-generic. To see that a point $y \in Y \setminus X$ is not locally generic either, apply this argument in an $\epsilon$-neighborhood of~$y$.
(That is, apply the argument to $Y \cap B_\epsilon(y)$ and $X \cap B_\epsilon(y)$.) \end{proof} \begin{lemma} \label{lem:image-generic} Let $V$ be a semi-algebraic set, $f$ be an algebraic map from $V$ to a range space~$X$, and $W$\! be a semi-algebraic set contained in the image of $f$, with $V$, $W$, and $f$ all defined over~$\mathbb Q$. If $x_0\in V$\! is generic and $f(x_0) \in W$\!, then $f(x_0)$ is generic in~$W$\!. \end{lemma} \begin{proof} Let $\phi$ be any algebraic function on~$X$ with rational coefficients so that $\phi(f(x_0)) = 0$. Then $\phi \circ f$ vanishes at~$x_0$, which is generic in~$V$, so $\phi\circ f$ vanishes identically on~$V\!$. This implies that $\phi$ vanishes on~$W$\!. Thus any algebraic function defined over~$\mathbb Q$ that vanishes at $f(x_0)$ must vanish on all of $W\!$. This proves that $f(x_0)$ is generic in~$W\!$. \end{proof} \section{Introduction} \label{sec:intro} In this paper we characterize generic frameworks that are universally rigid in $d$\hyp dimensional Euclidean space. A framework is universally rigid if, modulo Euclidean transforms, there is no other framework of the same graph in \emph{any} dimension that has the same edge lengths. A series of papers by Connelly~\cite{Connelly82:RigidityEnergy, Connelly99:TensegrityStable, Connelly01:StressStability} and later Alfakih~\cite{Alfakih07:DimensionalRigidity,alfakih2007universal} described a sufficient condition for a generic framework to be universally rigid. In this paper we show that this is also a necessary condition. Universally rigid frameworks are especially relevant when applying semidefinite programming techniques to graph embedding problems. Suppose some vertices are embedded in $\mathbb E^d$ and we are told the distances between some of the pairs of vertices. The graph embedding problem is to compute the embedding (up to an unknown Euclidean transformation) from the data.
This problem is computationally difficult, as the graph embeddability question is in general NP-hard~\cite{Saxe79:EmbedGraphsNP}, but because of its utility, many heuristics have been attempted. One approach is to use semidefinite programming to find an embedding~\cite{linial1995gga}, but such methods are not able to find the solutions that are specifically $d$-dimensional, rather than embedded in some larger-dimensional space. However, when the underlying framework is, in fact, universally rigid, then the distance data itself automatically constrains the dimension, and therefore the correct answer will be (approximately) found by the semidefinite program. The connection between rigidity and semidefinite programming was first explored by So and Ye~\cite{SY07:SemidefProgramming}. We also discuss the more general topic of strict complementarity in semidefinite programming. Strict complementarity is a strong form of duality that is needed for the fast convergence of many interior point optimization algorithms. Using our arguments from universal rigidity, we show that if the semidefinite program has a sufficiently generic primal solution, then it must satisfy strict complementarity. This is in contrast to previous results on strict complementarity~\cite{alizadeh1997complementarity,pataki2001generic} that require the actual parameters of the program to be generic. In particular, our result applies to programs where the solution is of a lower rank than would be found generically. \subsection{Rigidity definitions} \label{sec:definitions} \begin{definition}\label{def:config-space} A \emph{graph}~$\Gamma$ is a set of $v$ vertices ${\mathcal V}(\Gamma)$ and $e$ edges~${\mathcal E}(\Gamma)$, where ${\mathcal E}(\Gamma)$ is a set of two\hyp element subsets of ${\mathcal V}(\Gamma)$. We will typically drop the graph~$\Gamma$ from this notation. A \emph{configuration}~$p$ is a mapping from ${\mathcal V}$ to~$\mathbb E^v$.
Let $C({\mathcal V})$ be the space of configurations. For $p\in C({\mathcal V})$ and $u \in {\mathcal V}$, let $p(u)$ denote the image of $u$ under~$p$. Let $C^d({\mathcal V})$ denote the space of configurations that lie entirely in $\mathbb E^d$, contained as the first $d$ dimensions of~$\mathbb E^v$. A \emph{framework} $(p,\Gamma)$ is the pair of a graph and a configuration of its vertices. For a given graph~$\Gamma$ the \emph{length-squared function} $\ell_\Gamma:C({\mathcal V})\rightarrow\mathbb R^e$ is the function assigning to each edge of $\Gamma$ its squared edge length in the framework. That is, the component of $\ell_\Gamma(p)$ in the direction of an edge $\{u,w\}$ is $\abs{p(u)-p(w)}^2$. \end{definition} \begin{definition} \label{def:generic} A configuration in $C^d({\mathcal V})$ is \emph{proper} if it does not lie in any affine subspace of $\mathbb E^d$ of dimension less than~$d$. It is \emph{generic} if its first $d$ coordinates (i.e., the coordinates not constrained to be~$0$) do not satisfy any algebraic equation with rational coefficients. \end{definition} \begin{remark} A generic configuration in $C^d({\mathcal V})$ with at least $d+1$ vertices is proper. \end{remark} \begin{definition} \label{def:universally-rigid} The configurations $p, q$ in $C({\mathcal V})$ are \emph{congruent} if they are related by an element of the group $\Eucl(v)$ of rigid motions of~$\mathbb E^v$. A framework $(p,\Gamma)$ with $p \in C^d({\mathcal V})$ is \emph{universally rigid} if any other configuration in $C({\mathcal V})$ with the same edge lengths under $\ell_{\Gamma}$ is a configuration congruent to $p$. A graph $\Gamma$ is \emph{generically universally rigid} in $\mathbb E^d$ if any generic framework $(p,\Gamma)$ with $p \in C^d({\mathcal V})$ is universally rigid. \end{definition} In other words, universal rigidity means that the edge lengths of $p$ are consistent with essentially only one embedding of $\Gamma$ in \emph{any} dimension, up to~$v$.
(In general, a configuration in any higher dimension can be related by a rigid motion to a configuration in $\mathbb E^v$.) This is stronger than \emph{global rigidity}, where the lengths fully determine the embedding in the smaller space $\mathbb E^d$. And global rigidity is, in turn, stronger than (local) \emph{rigidity}, which only rules out continuous flexes in $\mathbb E^d$ that preserve edge lengths. \begin{remark} Universal rigidity or closely related notions have also been called \emph{dimensional rigidity}~\cite{Alfakih07:DimensionalRigidity}, \emph{uniquely localizable}~\cite{SY07:SemidefProgramming}, and \emph{super stability}~\cite{Connelly99:TensegrityStable}. \end{remark} \begin{remark} In Definition~\ref{def:universally-rigid}, it would be equivalent to require that $(p,\Gamma)$ be (locally) rigid in $\mathbb E^v$ since by a result of Bezdek and Connelly~\cite{BC04:KneserPoulsen} any two frameworks with the same edge lengths can be connected by a smooth path in a sufficiently large dimension. \end{remark} \begin{remark} Universal rigidity, unlike local and global rigidity, is not a generic property of a graph: for many graphs, some generic frameworks are universally rigid and others are not. For instance, an embedding of a 4-cycle in the line~$\mathbb E^1$ is universally rigid iff one side is long, in the sense that its length equals the sum of the lengths of the other three. On the other hand, some graphs, such as a simplex or any trilateration graph~\cite{eren2004rigidity}, are generically universally rigid in $\mathbb E^d$. (A $d$-trilateration graph has the property that one can order its vertices such that the following holds: the first $d+2$ vertices are part of a simplex in $\Gamma$ and each subsequent vertex is adjacent in $\Gamma$ to $d+1$ previous vertices.) We do not know a characterization of generically universally rigid graphs.
\end{remark} \begin{figure}[t] \includegraphics{pentagon-0} \qquad \includegraphics{pentagon-1} \qquad \includegraphics{pentagon-2} \qquad \includegraphics{pentagon-3} \qquad \includegraphics{pentagon-4} \caption{Planar graph embeddings with increasing types of rigidity. From left to right, generically locally flexible graph, generically locally rigid graph, generically globally rigid graph, generic universally rigid framework, generically universally rigid graph. (More precisely, the fourth framework is not itself generic, but it and any generic framework close to it are universally rigid.) This paper focuses on frameworks of the type of the fourth one. The fifth graph is generically universally rigid as it is a 2-trilateration graph. } \label{seq} \end{figure} See Figure~\ref{seq} for examples of embedded graphs with increasing types of rigidity. \subsection{Equilibrium stresses} \begin{definition} \label{def:stress-matrix} An \emph{equilibrium stress matrix} of a framework~$(p,\Gamma)$ is a matrix~$\Omega$ indexed by ${\mathcal V}\times {\mathcal V}$ so that \begin{enumerate} \item for all $u,w \in {\mathcal V}$, $\Omega(u,w) = \Omega(w,u)$; \item for all $u,w \in {\mathcal V}$ with $u \ne w$ and $\{u,w\} \not\in {\mathcal E}$, $\Omega(u,w) = 0$; \item \label{item:stress-row-sum} for all $u \in {\mathcal V}$, $\sum_{w\in{\mathcal V}} \Omega(u,w) = 0$; and \item \label{item:stress-equilib} for all $u \in {\mathcal V}$, $\sum_{w\in{\mathcal V}} \Omega(u,w)p(w) = 0$. \end{enumerate} (The last condition is the equilibrium condition.) Let $S_\Gamma(p)$ be the linear space of equilibrium stress matrices for $(p,\Gamma)$. \end{definition} Conditions \eqref{item:stress-row-sum} and \eqref{item:stress-equilib} give us: \begin{equation} \forall u \in {\mathcal V} \;\; \sum_{\{w \in {\mathcal V} \mid \{u,w\} \in {\mathcal E}\}}\!\!\Omega(u,w)(p(w) - p(u)) = 0. 
\end{equation} The kernel of an equilibrium stress matrix $\Omega$ of a framework $(p,\Gamma)$ always contains the subspace of $\mathbb R^v$ spanned by the coordinates of~$p$ along each axis and the vector~$\vec 1$ of all $1$'s. This corresponds to the fact that any affine image of $p$ satisfies all of the equilibrium stresses in $S_\Gamma(p)$. If $p$ is a proper $d$-dimensional configuration, these kernel vectors span a $(d+1)$-dimensional space, so for such frameworks $\rank \Omega \leq v-d-1$. \begin{definition} \label{def:conic} We say that the edge directions of $(p,\Gamma)$ with $p \in C^d({\mathcal V})$ are \emph{on a conic at infinity} if there exists a symmetric $d$-by-$d$ matrix~$Q$ such that for all edges $\{u,w\}$ of $\Gamma$, we have \[ [p(u)-p(w)]^t Q [p(u)-p(w)] = 0,\] where the square brackets mean the projection of a vector in $\mathbb E^v$ to $\mathbb E^d$ (i.e., dropping the $0$'s at the end of $p(u)$). \end{definition} \begin{remark} If the edges of~$(p,\Gamma)$ are not on a conic at infinity in $C^d({\mathcal V})$, then in particular $p$ is proper. \end{remark} A framework has edges on a conic at infinity iff there is a continuous family of $d$-dimensional non-orthonormal affine transforms that preserve all of the edge lengths. However, if the configuration is not proper, this family of affine transforms might all be congruent to the original one. \subsection{Results} In a series of papers, Connelly studied the relationship between various forms of rigidity and equilibrium stress matrices. In particular, he proved the following theorem, which gives a sufficient condition for a framework to be universally rigid. \begin{otheorem}[Connelly] \label{thm:suff} Suppose that $\Gamma$ is a graph with $d+2$ or more vertices and $p$ is a (generic or not) configuration in $C^d({\mathcal V})$. Suppose that there is an equilibrium stress matrix $\Omega \in S_\Gamma(p)$ that is positive semidefinite (PSD) and that $\rank \Omega = v-d-1$.
Also suppose that the edge directions of $p$ do not lie on a conic at infinity. Then $(p,\Gamma)$ is universally rigid. \end{otheorem} Connelly also proved a lemma that allows one to ignore the conic at infinity issue when $p$ is generic. \begin{lemma}[Connelly] \label{lem:cinf} Suppose that $\Gamma$ is a graph with $d+2$ or more vertices and $p$ is a generic configuration in~$C^d({\mathcal V})$. Suppose that there is an equilibrium stress matrix $\Omega \in S_\Gamma(p)$ with rank $v-d-1$. Then the edge directions of $p$ do not lie on a conic at infinity. \end{lemma} Putting these together, we can summarize this as \begin{corollary} \label{cor:suff2} Suppose that $\Gamma$ is a graph with $d+2$ or more vertices and $p$ is a generic configuration in $C^d({\mathcal V})$. Suppose that there is a PSD equilibrium stress matrix $\Omega \in S_\Gamma(p)$ with $\rank \Omega = v-d-1$. Then $(p,\Gamma)$ is universally rigid. \end{corollary} The basic ideas for the proof of Theorem~\ref{thm:suff} appear in~\cite{Connelly82:RigidityEnergy}, where they were applied to show the universal rigidity of Cauchy polygons. It is also described in~\cite{Connelly99:TensegrityStable} and stated precisely in~\cite[Theorem~2.6]{Connelly01:StressStability}. Lemma~\ref{lem:cinf} can be derived from the proof of Theorem 1.3 in~\cite{Connelly05:GenericGlobalRigidity}. Corollary~\ref{cor:suff2} in two dimensions is summarized in Jord\'an and Szabadka~\cite{jordan2009operations}. A different set of sufficient conditions for the related concept of dimensional rigidity was described by Alfakih~\cite{Alfakih07:DimensionalRigidity}. In~\cite{alfakih2007universal} he showed that these conditions were equivalent to the existence of a PSD equilibrium stress matrix of maximal rank.
In~\cite{alfakih2007universal} he also showed that a generic dimensionally rigid framework (with $v \geq d+2$) must be universally rigid; this has the same effect as Lemma~\ref{lem:cinf}, and thus results in Corollary~\ref{cor:suff2}. In these papers, he also conjectured that for a generic universally rigid framework, a maximal rank PSD equilibrium stress matrix must exist. In the related context of frameworks with pinned anchor vertices, So and Ye~\cite{SY07:SemidefProgramming} showed that the appropriate analogue of Theorem~\ref{thm:suff} follows from complementarity in semidefinite programming (see Section~\ref{sec:sdp} below for more on this). In this paper our main result is the converse to Corollary~\ref{cor:suff2}: \begin{theorem}\label{thm:generic-ur-min-stress-kernel} A universally rigid framework $(p,\Gamma)$, with $p$ generic in $C^d({\mathcal V})$ and having $d+2$ or more vertices, has a PSD equilibrium stress matrix with rank $v-d-1$. \end{theorem} \begin{remark} Alfakih has given an example \cite[Example 3.1]{Alfakih07:DimensionalRigidity} showing that Theorem~\ref{thm:generic-ur-min-stress-kernel} is false if we drop the assumption that $p$ is generic. For any universally rigid framework (generic or not), it is not hard to see that there is a non-zero PSD equilibrium stress matrix. (See \cite[Theorem 5.1]{Alfakih09:BarSDP}.) The difficulty in Theorem~\ref{thm:generic-ur-min-stress-kernel} is finding a stress matrix of high rank. \end{remark} Theorems \ref{thm:suff} and \ref{thm:generic-ur-min-stress-kernel} compare nicely with the situation for global rigidity, where a generic framework is globally rigid iff it has an equilibrium stress matrix of rank $v-d-1$ (with no PSD constraint). Sufficiency was proved by Connelly~\cite{Connelly05:GenericGlobalRigidity}, and necessity was proved by the authors and Alex Healy~\cite{GHT10:GGR}.
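The shape of the conclusion of Theorem~\ref{thm:generic-ur-min-stress-kernel} can be checked by hand in the smallest interesting case (this toy computation is ours, not from the text, and the rational configuration used is of course not generic): three collinear points joined by the complete graph $K_3$ form a universally rigid framework on the line, with $v=3$ and $d=1$, and an outer product $ww^t$ for any $w$ orthogonal to both the all-ones vector and the coordinate vector of $p$ gives a PSD equilibrium stress matrix of rank $v-d-1=1$.

```python
# Toy check (ours, not from the text): p = (0, 1, 2) on the complete
# graph K3, so v = 3 and d = 1.  The vector w below is orthogonal to the
# all-ones vector and to p, so Omega = w w^T satisfies all four conditions
# of an equilibrium stress matrix and is PSD of rank v - d - 1 = 1.
v, d = 3, 1
p = [0.0, 1.0, 2.0]
w = [1.0, -2.0, 1.0]          # w . (1,1,1) = 0 and w . p = 0

Omega = [[w[i] * w[j] for j in range(v)] for i in range(v)]

row_sums = [sum(row) for row in Omega]                 # condition (3): zero row sums
Omega_p = [sum(Omega[i][j] * p[j] for j in range(v))   # condition (4): equilibrium
           for i in range(v)]
print(row_sums, Omega_p)   # [0.0, 0.0, 0.0] [0.0, 0.0, 0.0]

# Omega is PSD of rank 1 by construction: x^T Omega x = (w . x)^2 >= 0.
```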
\begin{question} For a graph~$\Gamma$ that is generically globally rigid in $\mathbb E^d$, is there always a generic framework in $C^d({\mathcal V})$ that is universally rigid? \end{question} In this paper, we also prove more general versions of Theorem~\ref{thm:generic-ur-min-stress-kernel} in the general context of convex optimization and strict complementarity in semidefinite programming. See Theorem~\ref{thm:urexp} in Section~\ref{sec:convex} and Theorem~\ref{thm:sdp-opt} in Section~\ref{sec:sdp}. \section{The geometry of PSD stresses} We now turn to the main construction in our proof. \subsection{The measurement set} \begin{definition} The $d$-dimensional \emph{measurement set}~$M_d(\Gamma)$ of a graph~$\Gamma$ is defined to be the image of $C^d({\mathcal V})$ under the map $\ell_\Gamma$. These are nested by $M_d(\Gamma) \subset M_{d+1}(\Gamma)$ and eventually stabilize at $M_{v-1}(\Gamma)$, also called the \emph{absolute measurement set} $M(\Gamma)$. \end{definition} Since $M_d(\Gamma)$ is the image of $C^d({\mathcal V})$ under an algebraic map, by Lemma~\ref{lem:image-generic}, if $p$ is generic in $C^d({\mathcal V})$ then $\ell_\Gamma(p)$ is generic in $M_d(\Gamma)$ (which must be irreducible). \begin{lemma} The set $M(\Gamma)$ is convex. \end{lemma} \begin{proof} The squared edge lengths of a framework~$p$ are computed by summing the squared edge lengths of each coordinate projection of~$p$, so $M_d(\Gamma)$ is the $d$-fold Minkowski sum of $M_1(\Gamma)$ with itself. Since $M_1(\Gamma)$ is invariant under scaling by positive reals, $M_d(\Gamma)$ can also be described as an iterated chord variety, and in particular $M(\Gamma)$ is the convex hull of $M_1(\Gamma)$ and hence convex. \end{proof} A particular case of interest is $M(\Delta_v)$, the absolute measurement set of the complete graph on the vertices. It is a cone in $\mathbb R^{\binom{v}{2}}$.
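The coordinatewise-sum observation at the start of the proof above can be checked directly on a toy example (ours, not from the text): stacking two one-dimensional configurations into a single two-dimensional configuration adds their squared-edge-length vectors, which is the Minkowski-sum statement $M_2(\Gamma) \supset M_1(\Gamma)+M_1(\Gamma)$ in miniature.

```python
# Sketch (ours) of the Minkowski-sum step: stacking two 1-d configurations
# of the same vertex set into one 2-d configuration adds their
# squared-edge-length vectors coordinatewise.
from itertools import combinations

def sq_lengths(config):
    """Squared edge lengths of the complete graph, edges in the
    order produced by combinations: (0,1), (0,2), (1,2), ..."""
    return [sum((a - b) ** 2 for a, b in zip(config[i], config[j]))
            for i, j in combinations(range(len(config)), 2)]

p = [(0.0,), (1.0,), (3.0,)]      # a 1-d configuration of 3 vertices
q = [(2.0,), (0.0,), (1.0,)]      # another 1-d configuration
pq = [(x[0], y[0]) for x, y in zip(p, q)]   # stacked 2-d configuration

lp, lq, lpq = sq_lengths(p), sq_lengths(q), sq_lengths(pq)
print(lpq == [a + b for a, b in zip(lp, lq)])   # True
```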
\begin{lemma}\label{lem:meas-sdp} The measurement set $M_d(\Delta_v)$ is isomorphic (as a subset of the linear space $\mathbb R^{\binom{v}{2}}$) to the set of PSD $(v-1)\times (v-1)$ matrices of rank at most~$d$. \end{lemma} \begin{proof} There is a standard linear map from $M(\Delta_v)$ to the convex cone of $(v-1)\times(v-1)$ positive semidefinite matrices, which we recall for the reader. (See, e.g., \cite{schoenberg1935remarks,gower1985properties}.) For a point $x \in M_d(\Delta_v)$, by the universal rigidity of the simplex, up to $d$-dimensional Euclidean isometry there is a unique~$p\in C^d({\mathcal V})$ with $x= \ell_\Delta(p)$. If we additionally constrain the last vertex to be at the origin, then $p$ is unique up to an orthonormal transform. Think of such a constrained~$p$ as a $(v-1) \times d$ matrix, denoted~$\varrho$. The map sends $x$ to the matrix $\varrho \varrho^t$, which has rank at most~$d$. (The rank of $\varrho \varrho^t$ is less than~$d$ if $p$ has an affine span of dimension less than~$d$.) It is easy to check that $\varrho \varrho^t$ does not change if we change $\varrho$ by an orthonormal transform, and that in fact the entries of $\varrho \varrho^t$ are linear combinations of the squared lengths of edges, so the map is well-defined and linear. \end{proof} \begin{definition}\label{def:faces} Let $K$ be a non-empty convex set in an affine space. The \emph{dimension} of~$K$ is the dimension of the smallest affine subspace containing~$K$. A point $x\in K$ is in the \emph{relative interior} $\Int(K)$ of~$K$ if there is a neighborhood $U$ of $x$ in the ambient affine space so that $U\cap K\cong \mathbb R^{\dim K}$. (It is easy to see that the relative interior of $K$ is always non-empty.) The \emph{face}~$F(x)$ of a point $x\in K$ is the (unique) largest convex set contained in~$K$ so that $x \in \Int(F(x))$. The faces of~$K$ are the sets $F(x)$ as $x$ ranges over~$K$. (In Section~\ref{sec:extremity} we will see more characterizations of $F(x)$.)
Let $f(x)$ be the dimension of $F(x)$. \end{definition} \begin{lemma} \label{lem:mes-faces} Let $x = \ell_\Delta(p)$ with $p \in C^d({\mathcal V})$. Then $F(x) = \ell_\Delta(\Aff_d(p)) = \ell_\Delta(\GL_d(p))$. When $p$ is proper, $f(x) = \binom{d+1}{2}$. \end{lemma} Here $\GL_d$ is the group of $d$-dimensional linear transforms, and $\Aff_d$ is the corresponding affine group. \begin{proof} By Lemma~\ref{lem:meas-sdp}, this reduces to understanding the faces of the cone of $(v-1)\times(v-1)$ PSD matrices, which is well understood (see, e.g.,~\cite[Theorem A.2]{pataki2000geometry}). In particular, for $\varrho$ a $(v-1)\times d$ matrix, the points in $F(\varrho \varrho^t)$ in the semidefinite cone are those matrices of the form $\varrho L^t L \varrho^t$ for some $d\times d$ matrix~$L$. But each such $\varrho L^t L \varrho^t$ maps under our isomorphism to $\ell_\Delta(\sigma)$ where $\sigma \in C^d({\mathcal V})$ is obtained from $p$ by a $d$-dimensional affine transform, with linear part described by $L$. When $p$ has an affine span of dimension $d$, $\varrho$ is of rank~$d$. This implies that $F(\varrho \varrho^t)$ has dimension $\binom{d+1}{2}$, as we can see by computing the dimension of $\GL_d/O_d$, the linear transforms modulo the orthonormal transforms. \end{proof} \subsection{Dual vectors and PSD stresses} \begin{definition} Let us index each of the coordinates of $\mathbb R^{\binom{v}{2}}$ with an integer pair $ij$ with $1 \le i<j \le v$. Given $\phi$, a functional in the dual space $\bigl(\mathbb R^{\binom{v}{2}}\bigr)^*$, define the $v$-by-$v$ matrix $M(\phi)$ as follows: for $i \ne j$, $M(\phi)_{ij}= M(\phi)_{ji}\coloneqq -\phi_{ij}$ and $M(\phi)_{ii} \coloneqq -\sum_{j\neq i} M(\phi)_{ij}$. \end{definition} \begin{lemma} \label{lem:covec-stress} Let $\phi$ be a dual vector in $\bigl(\mathbb R^{\binom{v}{2}}\bigr)^*$ and let $\Omega\coloneqq M(\phi)$ be the corresponding matrix. Let $p \in C^d({\mathcal V})$. Then \begin{equation} \langle \phi, \ell_\Delta(p) \rangle = \sum_{k=1}^d (p^k)^t \Omega p^k. \end{equation} \end{lemma} Here we use the notation $p^k$ for the vector in $\mathbb R^v$ describing the component of $p$ in the $k$'th coordinate direction of $\mathbb E^d$. \begin{proof} Both sides are equal to $\sum_k \sum_{i,j \mid i < j} (p^k_i-p^k_j)^2 \phi_{ij}$. \end{proof} \begin{definition} For $K\subset \mathbb R^n$ a convex set and $\phi$ a functional in the dual space $(\mathbb R^n)^*$, we say that $\phi$ is \emph{tangent to $K$ at~$x \in K$} if for all $y\in K$, $\langle \phi, y\rangle \geq 0$ while $\langle \phi, x\rangle =0$. \end{definition} \begin{lemma} \label{lem:covec-psdstress} Let $\phi$ be a dual vector in $\bigl(\mathbb R^{\binom{v}{2}}\bigr)^*$ that is tangent to $M(\Delta_v)$ at $\ell_\Delta(p)$ for some $p \in C({\mathcal V})$. Let $\Omega\coloneqq M(\phi)$ be the corresponding matrix. Then $\Omega$ is a PSD equilibrium stress matrix for $(p,\Delta_v)$. \end{lemma} \begin{proof} Conditions (1) and (3) in Definition~\ref{def:stress-matrix} are automatic by definition of~$\Omega$, and condition~(2) is vacuous for the complete graph $\Delta_v$. It remains to check condition~(4) and that $\Omega$ is PSD. By Lemma~\ref{lem:covec-stress}, $\langle \phi, \cdot \rangle$ can be evaluated as a quadratic form using the matrix $\Omega$. Since $\langle \phi, \cdot \rangle$ is not negative anywhere on $M(\Delta_v)$, and every vector in $\mathbb R^v$ is the coordinate vector of a one-dimensional configuration whose image under $\ell_\Delta$ lies in $M(\Delta_v)$, $\Omega$ must be PSD. Because $\Omega$ is positive semidefinite and $\sum_{k} (p^k)^t \Omega p^k=0$, it must also be true that for each $k$, $(p^k)^t \Omega p^k = 0$ and so $\Omega p^k = 0$, which is the last necessary condition to show $\Omega$ is an equilibrium stress matrix. \end{proof} \begin{lemma} \label{lem:covec-rankpsdstress} In the setting of Lemma~\ref{lem:covec-psdstress}, suppose furthermore that $\phi$ is only tangent to points in $F(\ell_\Delta(p))$, and suppose the affine span of $p$ has dimension $d$.
Then $\Omega$ is a PSD equilibrium stress matrix for $(p,\Delta_v)$ with rank $v-d-1$. \end{lemma} \begin{proof} From Lemma~\ref{lem:covec-psdstress}, $\Omega$ is a PSD equilibrium stress matrix for $(p,\Delta_v)$. By assumption, its kernel is spanned by the coordinates of frameworks that map under $\ell_\Delta$ to $F(\ell_\Delta(p))$. From Lemma~\ref{lem:mes-faces}, such frameworks are $d$-dimensional affine transforms of $p$. Thus the kernel is spanned by the $d$ coordinates of $p$ and the all-ones vector. Since the kernel has dimension $d+1$, $\Omega$ has rank $v-d-1$. \end{proof} \begin{lemma} \label{lem:covec-gamma-rankpsdstress} In the setting of Lemma~\ref{lem:covec-rankpsdstress}, suppose furthermore that $\phi_{ij}=0$ for $\{i,j\} \not\in {\mathcal E}(\Gamma)$ for a graph~$\Gamma$. Then $\Omega$ is a positive semidefinite equilibrium stress matrix for $(p,\Gamma)$ with rank $v-d-1$. \end{lemma} \begin{proof} $\Omega$ must have zeros in all coordinates corresponding to non-edges in $\Gamma$, thus it will be an equilibrium stress matrix for $\Gamma$ as well as for $\Delta_v$. The rest follows from Lemma~\ref{lem:covec-rankpsdstress}. \end{proof} To be able to use Lemma~\ref{lem:covec-gamma-rankpsdstress}, we want to find a dual vector $\phi$ that is only tangent to points in $F(\ell_\Delta(p))\subset M(\Delta_v)$. Additionally we need that $\phi_{ij}=0$ for $\{i,j\} \not\in {\mathcal E}$. As we will see, this will correspond to finding a dual vector in $(\mathbb R^e)^*$ that is only tangent to points in $F(\ell_\Gamma(p)) \subset M(\Gamma)$. The proper language for this kind of condition is the notion of convex extreme and exposed points, which we turn to next. \section{Relation to semidefinite programming} \label{sec:sdp} Theorem~\ref{thm:generic-ur-min-stress-kernel} can be interpreted as a strict complementarity statement about a particular semidefinite program (SDP) and its associated dual program, as we will now explain. 
For a general survey on semidefinite programming see Vandenberghe and Boyd~\cite{vandenberghe1996semidefinite}. Our notation is similar to that of Pataki and Tun{\c{c}}el~\cite{pataki2001generic}. For a related discussion on universal rigidity, semidefinite programming and complementarity, see, for instance, the paper by So and Ye~\cite{SY07:SemidefProgramming}. \subsection{Semidefinite programming} \label{sec:sdp-graphs} We first recall the basic definitions of semidefinite programming. Let $S^n$ be the linear space of all symmetric $n$-by-$n$ matrices and $S^n_+ \subset S^n$ be the cone of positive semidefinite matrices. A \emph{semidefinite program} is given by a triple $(L,b,\beta)$ where $L \subset S^n$ is a linear subspace, $b \in S^n$ is a matrix, and $\beta \in (S^n)^*$ is a dual vector (which we can also view as a matrix, using the natural inner product $\langle A, B \rangle = \tr (A^t B) = \sum_{i,j} A_{ij} B_{ij}$). This describes a primal constrained optimization problem: \[ \inf_x \{\langle \beta,x\rangle \mid x \in (L+b) \cap S^n_+ \} \] That is, we optimize a linear functional over all symmetric positive semidefinite matrices~$x$, subject to the constraint that $x$ lies in a chosen affine space. Associated with every primal SDP is the (Lagrangian) dual program \[ \inf_\Omega \{\langle \Omega,b\rangle \mid \Omega \in (L^{\perp}+\beta) \cap (S^n_+)^*\} \] Since the cone of PSD matrices is self-dual, elements of $(S^n_+)^*$ also correspond to PSD matrices, so this dual program can be thought of as a semidefinite program itself. \begin{definition} We say a pair of primal/dual points $(x,\Omega)$ with $x \in S^n_+$ and $\Omega \in (S^n_+)^*$ is a \emph{complementary pair} for the program $(L,b,\beta)$ if both are feasible points for the respective programs and $\langle \Omega,x\rangle=0$.
\end{definition} A calculation shows that for any feasible primal/dual pair of points $(x,\Omega)$, we have $\langle \beta,b\rangle -\langle \Omega,b\rangle \leq \langle \beta,x\rangle $ and that complementarity implies $\langle \beta,b\rangle -\langle \Omega,b\rangle = \langle \beta,x\rangle$. Thus, writing the dual problem in the alternative form \[ \sup_\Omega \{ \langle \beta,b\rangle -\langle \Omega,b\rangle \mid \Omega \in (L^{\perp}+\beta) \cap (S^n_+)^*\} \] we see that a complementary pair must represent optimal solutions to the primal and dual SDPs, respectively. \begin{definition} We say that an SDP problem $(L,b,\beta)$ is \emph{gap free} if it has complementary pairs of solutions. \end{definition} Gap freedom is typically seen as a fairly mild constraint on an SDP problem. For every complementary pair $(x,\Omega)$ we have~\cite{alizadeh1997complementarity} \begin{equation} \rank(x) + \rank(\Omega) \leq n.\label{eq:complement-rank-bound} \end{equation} \begin{definition} A complementary pair $(x,\Omega)$ is said to be \emph{strictly complementary} if we have $\rank(x) + \rank(\Omega)= n$. \end{definition} There is a rich theory on when SDPs have strictly complementary solution pairs. (See \cite{alizadeh1997complementarity,pataki2001generic, nie2010algebraic}; see also Theorem~\ref{thm:AHOsc}.) In particular, certain SDP algorithms converge more quickly on problems that satisfy strict complementarity~\cite{luo1998superlinear}. The linear programming counterpart of strict complementarity is strict complementary slackness~\cite{dantzig2003linear}. In the linear case, strict complementarity is always achieved. By contrast, for SDP problems it depends on the particular problem.
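The rank condition can be made concrete with a pair of diagonal PSD matrices (a toy illustration of ours, detached from the feasibility constraints of any particular program): complementarity says the two matrices are orthogonal in the trace inner product, while strictness additionally asks that their ranks fill out all of $n$.

```python
# Toy illustration (ours): complementarity <Omega, x> = tr(Omega^t x) = 0
# versus STRICT complementarity rank(x) + rank(Omega) = n, here with n = 2.
def trace_inner(A, B):
    n = len(A)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))

x      = [[1.0, 0.0], [0.0, 0.0]]   # PSD, rank 1
omega1 = [[0.0, 0.0], [0.0, 1.0]]   # PSD, rank 1: 1 + 1 = n   (strict)
omega2 = [[0.0, 0.0], [0.0, 0.0]]   # PSD, rank 0: 1 + 0 < n   (not strict)

# Both pairs are complementary; only (x, omega1) is strictly complementary.
print(trace_inner(omega1, x), trace_inner(omega2, x))   # 0.0 0.0
```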
\subsection{Graph embedding as an SDP} In the context of graph embedding, suppose we are looking for a configuration of unconstrained dimension, $p \in C^{v}({\mathcal V})/\Eucl(v)$, such that the framework of $\Gamma$ is constrained to have a set of squared edge lengths $d^2_{ij}$ for $ij\in{\mathcal E}(\Gamma)$. It is well known that we can set up this graph embedding problem as a semidefinite program~\cite{linial1995gga}, as we will now review. To a configuration~$p$ we can associate a $v \times v$ \emph{Gram matrix}~$x$, with $x_{ij} = p(i) \cdot p(j)$. The Gram matrix of~$p$ is unchanged by elements of $O(v)$ (although it does change when $p$ is translated). The matrix rank, $r$, of such an $x$ is the dimension of the linear span of the associated configuration~$p$. This will typically be $d+1$; one greater than the affine span, $d$, of the framework. In the graph embedding SDP, we set $n\coloneqq v$ and take the Gram matrix $x$ as our unknown. The distance constraint at an edge $ij$ can be expressed as the linear constraint $x_{ii}+x_{jj}-2x_{ij}=(d_{ij})^2$. The collection of matrices satisfying these constraints for all edges forms an affine space; we choose $L$ and $b$ such that $L+b$ is this affine space. In our context, we are only interested in feasibility, and thus have no objective function, so we set $\beta \coloneqq 0$. Note that semidefinite programs do not allow us to explicitly constrain the rank of the solution (which is related to the dimension of the configuration). Let us now look at the dual program to our graph embedding SDP problem. In the primal problem, the linear space $L$ corresponds to symmetric matrices $x$ with $x_{ii}+x_{jj}-2x_{ij}=0$ for all $\{i,j\} \in {\mathcal E}(\Gamma)$. Thus the space $L^\perp$, when represented as matrices, is spanned by the basis elements $B_{ij} = -e_{ii}+e_{ij}+e_{ji}-e_{jj}$ for $\{i,j\}\in{\mathcal E}(\Gamma)$, where $e_{ij}$ is the elementary matrix with a $1$ in the $ij$ entry and $0$ elsewhere. 
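The correspondence between configurations, Gram matrices, and edge constraints can be checked numerically. The following sketch uses a hypothetical framework (a unit square, lifted into the plane $z=1$ so that its affine span misses the origin); it verifies the linear constraint $x_{ii}+x_{jj}-2x_{ij}=(d_{ij})^2$, the equivalent form $\langle B_{ij}, x\rangle = -(d_{ij})^2$ for the basis matrices just described, and the rank statement $\rank x = d+1$.

```python
import numpy as np

# Hypothetical framework: a unit square (affine span d = 2) lifted into the
# plane z = 1, so that its affine span does not contain the origin.
p = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0], [0.0, 1.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # square plus one diagonal
v = len(p)

x = p @ p.T  # Gram matrix: x_ij = p(i) . p(j)

def basis_matrix(i, j, n):
    # B_ij = -e_ii + e_ij + e_ji - e_jj, the spanning elements of L_perp
    B = np.zeros((n, n))
    B[i, i] = B[j, j] = -1.0
    B[i, j] = B[j, i] = 1.0
    return B

for i, j in edges:
    d2 = np.sum((p[i] - p[j]) ** 2)  # squared edge length
    # the distance constraint as a linear constraint on the Gram matrix
    assert np.isclose(x[i, i] + x[j, j] - 2 * x[i, j], d2)
    # the same constraint via the trace inner product with B_ij
    assert np.isclose(np.trace(basis_matrix(i, j, v).T @ x), -d2)

# The rank of the Gram matrix equals the dimension of the linear span of p.
print(np.linalg.matrix_rank(x))  # -> 3 = d + 1 for this lifted square
```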
The space $L^\perp$ is therefore those matrices $\Omega$ with row-sums of zero and with $\Omega_{ij} = 0$ for all $i \ne j$, $\{i,j\} \notin {\mathcal E}(\Gamma)$. That is, $L^\perp$ is the space of all (not necessarily equilibrium) stress matrices. We also have $\beta=0$, so we optimize over these matrices. Minimizing $\langle \Omega, b\rangle$ imposes the further constraint that the solution be an equilibrium stress matrix. This can be seen as follows. Since $\beta=0$, the all-zeros matrix is dual feasible. Whenever there is some primal feasible $x$ corresponding to a configuration~$p$, the SDP must be gap free as $\langle 0, x\rangle=0$, and the optimal $\Omega$ are the feasible $\Omega$ with $\langle \Omega, x\rangle=0$. For any $\Psi \in (S^n_+)^*$ we have $\langle \Psi, x\rangle= \frac{1}{2}\sum_{k=1}^r (p^k)^t \Psi p^k$. (Here we use the notation $p^k$ for the vector in $\mathbb R^n$ describing the component of $p$ in the $k$'th coordinate direction of $\mathbb R^r$.) And thus, as in Lemma~\ref{lem:covec-psdstress}, the feasible $\Omega$ with $\langle \Omega, x\rangle=0$ are the PSD equilibrium stress matrices for $(p,\Gamma)$. In this language we can restate Theorem~\ref{thm:generic-ur-min-stress-kernel} as follows. \begin{proposition} Let $p$ be generic in $C^d({\mathcal V})$, let $(p,\Gamma)$ be universally rigid in $E^d$, and let $(L,b,0)$ be the associated SDP using the graph $\Gamma$ and the distances of the edges in $(p,\Gamma)$. Then $(L,b,0)$ has a strictly complementary solution pair $(x,\Omega)$. \end{proposition} \begin{proof} Simply let $x$ be the Gram matrix corresponding to a translation of $p$ such that its affine span does not include the origin. Thus $x$ will have rank $d+1$. Let $\Omega$ be a PSD equilibrium stress matrix of rank $v-d-1$, which exists by Theorem~\ref{thm:generic-ur-min-stress-kernel}. Then $\rank x + \rank \Omega = v$. 
\end{proof} \subsection{SDP feasibility} Our discussion of universal rigidity and equilibrium stress matrices carries over directly to any SDP feasibility problem $(L,b,0)$ (i.e., where $\beta=0$). To see this we first set up some notation. Pick an integer $r > 0$ and let $k\coloneqq \binom{r+1}{2}$. Then, as in Lemma~\ref{lem:mes-faces}, $\ext_k(S^n_+)$ is the set of the PSD matrices of rank less than or equal to $r$, which we will denote $S^n_{+r}$. Let $\pi$ be projection from $S^n$ to $S^n/L$, and $\pi_+$ its restriction to $S^n_+$. A point $x \in S^n_+$ is a solution to the feasibility problem iff $\pi(x) = \pi(b)$, or $x \in \pi_+^{-1}(\pi(b))$. A point $x \in S^n_+$ is universally rigid under $\pi$ iff $\pi_+^{-1}(\pi(x))$ is a single point: A universally rigid feasible point is the unique feasible solution to $(L,b,0)$. As we vary~$b$ with fixed~$L$, every point of~$S^n_+$ arises as a solution; it may happen that for some~$b$ there is a unique, generic feasible solution. \begin{proposition} \label{pro:sdp} Let $(L,b,0)$ be an SDP feasibility problem, where $L$ has rational coefficients. Suppose there is a unique feasible solution~$x$ which is generic in $S^n_{+r}$. Then there exists an optimal dual solution $\Omega$ such that $(x,\Omega)$ is a strictly complementary pair. \end{proposition} \begin{proof} Since $x$ is generic in $S^n_{+r} = \ext_k(S^n_+)$, by Theorem~\ref{thm:urexp} $\pi(x)$ is $k$-exposed. Thus there is a PSD matrix $\Omega$ in $(S^n/L)^*=L^\perp$ that is tangent to $\pi(S^n_+)$ at $\pi(x)$, with contact $F(\pi(x))$. Since $\Omega \in L^\perp$, it is a feasible point of the dual SDP\@. Since $\Omega$ is tangent to $\pi(S^n_+)$ at $\pi(x)$, it is also tangent to $S^n_+$ at $x$; thus $\langle \Omega, x\rangle=0$ and $\Omega$ is a complementary and therefore optimal dual solution. The tangency of $\Omega$ to $S^n_+$ at $x$ has contact $\pi^{-1}_+(F(\pi(x)))$ which, by Proposition~\ref{prop:dims-increase}, is $F(x)$. 
Let $p$ be an $r$-dimensional configuration of $n$ points with Gram matrix $x$. From the facial structure of the PSD cone we see that $F(x)$ consists only of Gram matrices of $r$-dimensional linear transforms of~$p$. We also have the relation $\langle \Omega, x\rangle= \frac{1}{2}\sum_{k=1}^r (p^k)^t \Omega p^k = 0$. Thus we see that $\Omega$ must have a kernel of dimension $r$ and have rank $n-r$. \end{proof} \subsection{SDP optimization} We can apply the approach above to a more general SDP problem $(L,b,\beta)$, where $\beta \neq 0$. Specifically, we prove the following. \begin{theorem} \label{thm:sdp-opt} Let $(L,b,\beta)$ be a gap-free SDP problem, with $L$ and $\beta$ rational. Suppose there is a unique optimal solution $x$ which is generic in $S^n_{+r}$. Then there exists a strictly complementary pair $(x, \Omega)$. \end{theorem} We will prove the theorem by reducing to the $\beta=0$ case. For any SDP optimization problem $(L,b,\beta)$ with an optimal solution~$x$, there is an associated SDP feasibility problem $(L', b', 0)$, where $L' + b'$ is $\{ y \in L + b \mid \langle \beta, y\rangle = \langle \beta, x\rangle\}$. In particular, $L'$ is $L \cap \ker \beta$ (which is still rational) and the dual space $L'^\perp$ is $L^\perp + \langle\beta\rangle$. Then feasible solutions to $(L', b', 0)$ correspond to optimal solutions to $(L, b, \beta)$. \begin{lemma} \label{lem:sdp-almostSC} Let $(L,b,\beta)$ be an SDP problem, with $L$ and $\beta$ rational. Suppose there is a unique optimal solution~$x$ which is generic in $S^n_{+r}$. Then there exists a PSD $\Omega$ such that $\rank \Omega = n-r$, with $\Omega \in L^\perp + \langle\beta\rangle$ and $\langle \Omega,x\rangle=0$. \end{lemma} \begin{proof} Apply Proposition~\ref{pro:sdp} to the feasibility problem $(L', b', 0)$ constructed above. This gives an $\Omega \in L'^\perp = L^\perp + \langle \beta \rangle$ that is strictly complementary to~$x$. 
\end{proof} Note that we have not yet proved Theorem~\ref{thm:sdp-opt}, as Lemma~\ref{lem:sdp-almostSC} gives $\Omega$ in the linear space $L^\perp + \langle\beta\rangle$ rather than the desired affine space $L^\perp + \beta$. \begin{proof}[Proof of Theorem~\ref{thm:sdp-opt}] Let $\Omega_1$ be the PSD matrix given by Lemma~\ref{lem:sdp-almostSC}. From the assumption of gap freedom there exists a PSD $\Omega_2$ with $\Omega_2 \in L^\perp + \beta$ and $\langle \Omega_2,x\rangle=0$. Thus for any positive scalars $\lambda_1$ and $\lambda_2$, the matrix $\Omega \coloneqq \lambda_1 \Omega_1 + \lambda_2 \Omega_2$ is PSD, has $\rank$ no less than $n-r$, and has $\langle \Omega,x\rangle=0$. By adjusting $\lambda_1$ and $\lambda_2$ we can achieve $\Omega \in L^\perp + \beta$. By Equation~\eqref{eq:complement-rank-bound}, we in fact have $\rank \Omega = n-r$. \end{proof} We note though that for a given $(L,\beta)$ and rank $r$, as we vary $b$, there may be no unique solutions with rank $r$. Even if there are such $x$, they may all be non-generic. In such cases, Theorem~\ref{thm:sdp-opt} is vacuous, as there are no choices of $(L, b, \beta)$ satisfying the hypotheses. \subsection{Genericity} \label{sec:genericity} In the context of SDP optimization, the hypothesis in Theorem~\ref{thm:sdp-opt} that $L$ and $\beta$ be rational may seem a little unnatural. This hypothesis can be relaxed if we work with points that are generic over a field that is larger than~$\mathbb Q$. \begin{definition} Let $\mathbf{k}$ be a field containing $\mathbb Q$ and contained in $\mathbb R$. A semi-algebraic set~$S \subset \mathbb R^n$ is \emph{defined over $\mathbf{k}$} if there is a set of equalities and inequalities defining~$S$ with coefficients in~$\mathbf{k}$. 
If $S$ is defined over~$\mathbf{k}$, a point $x \in S$ is \emph{generic over~$\mathbf{k}$} if the coordinates of~$x$ do not satisfy any algebraic equation with coefficients in $\mathbf{k}$ beyond those that are satisfied by every point in~$S$. Similarly, $x$ is \emph{locally generic over~$\mathbf{k}$} if for small enough $\epsilon$, $x$ is generic in $S \cap B_\epsilon(x)$. A \emph{defining field} of a semi-algebraic set~$S$, written $\mathbb Q[S]$, is any field over which it is defined. (There is a unique smallest field for algebraic sets by a result of Weil \cite[Corollary IV.3]{Weil46:FoundAlgGeom}. For our purposes, we can use any field over which $S$ is defined.) Similarly, if $f : X \to Y$ is a map between semi-algebraic sets, then $\mathbb Q[f]$ is a field over which it is defined (or, equivalently, a field over which the graph of~$f$ is defined). \end{definition} For instance, if $S$ is a single point~$x$, we can take $\mathbb Q[S]$ to be the same as $\mathbb Q[x]$, the smallest field containing all the coordinates of~$x$. Also, if $x$ is generic over $\mathbb Q$, then $y$ is generic over $\mathbb Q[x]$ iff the pair $(x,y)$ is generic over~$\mathbb Q$. With this definition, Theorems \ref{thm:urexp} and~\ref{thm:sdp-opt} can be improved to allow non-rational sets and projections. \begin{citingthm}[\ref*{thm:urexp}$'$] Let $K$ be a closed line-free convex semi-algebraic set in $\mathbb R^m$, and $\pi: \mathbb R^m \to \mathbb R^n$ a projection. Suppose $x$ is locally generic over $\mathbb Q[K,\pi]$ in $\ext_k(K)$ and universally rigid under $\pi$. Then $\pi(x)$ is $k$-exposed. \end{citingthm} \begin{citingthm}[\ref*{thm:sdp-opt}$'$] Let $(L,b,\beta)$ be a gap-free SDP problem. Suppose there is a unique optimal solution $x$ which is generic over $\mathbb Q[L,\beta]$ in $S^n_{+r}$. Then there exists a strictly complementary pair $(x, \Omega)$. \end{citingthm} The proofs follow exactly the proofs of the versions given earlier. 
For instance, in Theorem~\ref*{thm:sdp-opt}$'$, we apply Theorem~\ref*{thm:urexp}$'$ to the cone $S^n_+$ (which is defined over~$\mathbb Q$) and the projection onto $S^n/(L + \langle \beta \rangle)$, which is defined over whatever field is needed to define $L$ and $\beta$. \subsection{Relation to previous results} A fundamental result of~\cite{alizadeh1997complementarity,pataki2001generic} on strict complementarity can be summarized as follows. \begin{otheorem} \label{thm:AHOsc} Suppose $(L,b,\beta)$ is a generic SDP program that is gap free. Then $(L,b,\beta)$ admits a strictly complementary pair of solutions. \end{otheorem} This result is neither stronger nor weaker than our Theorem~\ref{thm:sdp-opt}. In particular Theorem~\ref{thm:AHOsc} requires that all parameters be generic. In contrast, Theorem~\ref{thm:sdp-opt} does not assume genericity of any of the parameters but rather assumes genericity of the solution within its rank. Indeed, in our application to rigidity of graphs, $\beta=0$, which is obviously not generic. (The parameter $b$ is also not usually generic.) In fact, there are very few situations where both theorems can apply, as the following proposition shows. \begin{proposition} In an SDP problem $(L,b,\beta)$ with $\dim L = D$ and primal solution~$x$ of rank~$r$, if Theorem~\ref{thm:AHOsc} applies, then \[ \binom{n-r+1}{2} \le D. \] On the other hand, if Theorem~\ref{thm:sdp-opt} applies, then \[ \binom{n-r+1}{2} \ge D. \] \end{proposition} \begin{proof} For a given $(L,b,\beta)$, if $b$ is generic over $\mathbb Q[L]$, then $r$ satisfies the first inequality, as described in~\cite[Theorem 12]{alizadeh1997complementarity} and~\cite[Proposition 5]{nie2010algebraic}. In particular, for generic $b$, the intersection of $L+b$ with $S^n_{+r}$ must be transversal, which from a dimension count gives the first inequality. This takes care of the first part of the proposition. 
(If $\beta$ is generic over $\mathbb Q[L]$, as in Theorem~\ref{thm:AHOsc}, there is also an upper bound on~$r$: $\binom{r+1}{2} \leq \binom{n+1}{2}-D$. But we do not need this.) For the second part, recall that if Theorem~\ref{thm:sdp-opt} applies, there is a point~$x$, generic over $\mathbb Q[L,\beta]$ in $S^n_{+r}$ which is the unique solution to the SDP problem $(L, b, \beta)$. Recall that there is an associated feasibility problem $(L', b', 0)$. Let $\pi: S^n \to S^n/L'$ be the associated projection and $\pi_+$ its restriction to $S^n_+$. Since $x$ is unique (in both $(L,b,\beta)$ and $(L',b',0)$), $x$ is the only point in $S^n_+$ mapping to $\pi(x)$, and $\pi(x) \in \partial \pi(S^n_+)$. Moreover, since $x$ is generic in $S^n_{+r}$, there is an open neighborhood $U$ of~$x$ in $S^n_{+r}$ with these properties. In particular, $U$ maps injectively by $\pi$ to $\partial \pi(S^n_+) \subset S^n/L'$. $S^n_{+r}$ has dimension $\binom{n+1}{2}-\binom{n-r+1}{2}$. On the other hand, $\dim (S^n/L') \le\binom{n+1}{2}-D+1$ (with equality iff $\beta \not\in L^\perp$ or equivalently $L' \ne L$), so $\dim (\partial\pi(S^n_+))\le \binom{n+1}{2}-D$. If there is a smooth injection from $S^n_{+r}$ to $\partial \pi(S^n_+)$, we must have $\binom{n-r+1}{2}\ge D$, as desired. \end{proof} \subsection{Complexity of universal rigidity} \label{sec:comp} Assuming $(p,\Gamma)$ is not at a conic at infinity and is translated so that its affine span does not include~$0$, $(p,\Gamma)$ is UR iff there is no higher rank solution to the SDP than $p$~\cite{Alfakih07:DimensionalRigidity}. Thus, to test for universal rigidity, the main step is to test if there is a feasible solution with rank higher than that of a known input feasible solution~$p$ (given say as integers). 
Numerically speaking, SDP optimization algorithms that use interior point methods produce approximate solutions of the highest possible rank~\cite{guler1993convergence,de1997initialization} and so in practice one could try such a method to produce a guess about the universal rigidity of $(p,\Gamma)$. The complexity of getting a definitive answer is a trickier question, even assuming one could reduce the UR question to one of ``yes-no'' SDP feasibility. In particular, an approximate solution to an SDP optimization problem can be found in polynomial time but the complexity of the SDP feasibility problem remains unknown, even with strict complementarity. See~\cite{ramana1997exact} for formal details. Theorem~\ref{thm:generic-ur-min-stress-kernel} tells us that for generic inputs, the UR question can be answered by finding the highest rank dual optimal solution. (A framework with integer coordinates will not be generic; however, if the integers are large enough we are likely to avoid all special behavior, as in \cite[Section 5]{GHT10:GGR}.) This does not appear to be any help in determining the complexity of testing algorithmically for universal rigidity.
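As a small numerical illustration of the stress-matrix certificate discussed above, consider three collinear points joined by all three edges (a hypothetical example, not from the text). The sketch below verifies that $\Omega = w w^t$ with $w = (1,-2,1)$ is a PSD equilibrium stress matrix of rank $v-d-1 = 1$, and that together with the Gram matrix of a suitably translated copy of the configuration it forms a strictly complementary pair, as in the proposition of the graph-embedding subsection.

```python
import numpy as np

# Three collinear points with all three edges present (d = 1, v = 3),
# translated so the affine span misses the origin (here: the line y = 1).
p = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
x = p @ p.T                      # Gram matrix; rank d + 1 = 2

# Candidate equilibrium stress matrix: Omega = w w^T with w = (1, -2, 1).
w = np.array([1.0, -2.0, 1.0])
omega = np.outer(w, w)

# Row sums vanish, so Omega is a stress matrix ...
assert np.allclose(omega.sum(axis=1), 0.0)
# ... and it annihilates every coordinate vector of p (equilibrium):
assert np.allclose(omega @ p, 0.0)
# Omega is PSD by construction, and the trace inner product <Omega, x> = 0.
assert np.isclose(np.trace(omega @ x), 0.0)

r_x = np.linalg.matrix_rank(x)          # 2 = d + 1
r_omega = np.linalg.matrix_rank(omega)  # 1 = v - d - 1
print(r_x + r_omega)  # -> 3 = v: the pair is strictly complementary
```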
\section{Introduction} \noindent The Drude model of electric conductivity \cite{d} offers a rather generic description of charge transport. Although the concept of the Drude conductivity may seem old-fashioned, it remains very useful for a variety of applications, such as chaotic transport and weak localization \cite{jw}, magnetoresistance \cite{cdk}, dynamical ordering in a confined Wigner crystal \cite{gd} and optical conductivity \cite{s}. More recently the Drude model has come into focus in the description of the spin Hall effect \cite{c}; for a more detailed consideration see \cite{erh}. In spite of being a classical concept, the Drude model yields the universal dc and ac conductivity and its temperature dependence \cite{ma}. In particular, the conductivity is expressed by the charge $e$, the mass $m$ and the density $n$ of the charge carriers as well as a relaxation time $\tau$ via $\sigma_D = n e^2 \tau/m$. In general the parameter $\tau$ reflects the microscopic origin of the transport process, such as different scattering mechanisms. In the Markov approach the relaxation time is given in terms of a simple damping constant $\gamma = \tau^{-1}$. However, the Drude conductivity should be influenced and corrected by the underlying periodic crystal potential. It is exactly this point that we discuss in the present paper. Here the Drude conductivity is generalized by including the lattice potential. The analysis is based on the Smoluchowski equation \cite{gar}, which also yields the probability density of the electrons in the lattice. As a result we obtain an exact expression for the charge conductivity $\sigma = j/E$ in the limit $E \to 0$. The final relation is illustrated for realistic model potentials. In particular, the approach enables us to study the conductivity of nanocontacts by assuming piecewise constant potentials. 
However, in determining the conductivity one is confronted with the problem that the applied external electric field $E$ feeds back on the potential. The potential is deformed under the influence of the field, which leads to a modification of the motion of the charge carriers. In contrast to the quantum approach, the carriers can overcome barriers or traps not by tunneling processes but by thermal activation. \section{Model} \noindent The classical equation of motion for the charge carriers, including thermodynamic fluctuations, reads \begin{equation} m \gamma\, \dot{x}(t) = - \frac{\partial \mathcal{H}}{\partial x} + \eta(t)\quad{\rm with}\quad \langle \eta (t) \eta (t') \rangle = 2\, D\, \delta(t - t')\,. \label{kan} \end{equation} Here $\gamma$ is a damping parameter and $D$ is the strength of the Gaussian process. Both of them model the interaction with a heat bath. The Hamiltonian $\mathcal{H}$ is given by \begin{equation} \mathcal{H} = \frac{m}{2} v^2 + U(x) - e E x\,, \label{ham} \end{equation} where $E$ is a homogeneous electric field and $U(x)$ is the periodic crystal potential, which originates from all other particles of the system; one can also include other potentials such as defect potentials. Although the contributions of the potential and of the electric field energy are well separated in the Hamiltonian, the potential may be modified by the field, as demonstrated below. The Smoluchowski equation equivalent to eq.~\eqref{kan} reads \cite{gar} \begin{equation} \frac{\partial \rho (x,t)}{\partial t} = - \frac{\partial j_p(x,t)}{\partial x} \quad{\rm with}\quad j_p(x,t) = \left\{ - D \frac{\partial }{\partial x} - \frac{1}{m \gamma} \left[ \frac{\partial \mathcal{H}}{\partial x} \right] \right\} \rho (x,t)\,. 
\label{fp} \end{equation} Inserting the Hamiltonian eq.~\eqref{ham}, the probability current $j_p$ can be written as \begin{equation} j_p(x,t) = - D \frac{\partial \rho(x,t)}{\partial x} - \frac{1}{m \gamma} \left [ \frac{\partial U(x)}{\partial x} - e E \right] \rho(x,t)\,. \label{strom} \end{equation} Here $\rho(x,t)$ is the probability density. In case the system is coupled to a heat bath with temperature $T$, the equilibrium solution is given by \begin{equation} \rho_e(x) = \mathcal{N} \exp( - \beta \mathcal{H})\,,\quad \beta^{-1} = k_B T\,, \label{eq} \end{equation} provided the Einstein relation $D = k_B T/\gamma m $ is fulfilled. $\mathcal{N}$ is a normalization constant. Notice that the equilibrium distribution satisfies $j_p = 0$, whereas a steady state solution $\rho_s(x)$ obeys the weaker condition $\partial_x j_p = 0$. It is easy to see that the equilibrium solution $\rho_e$ has a physical meaning only in the case of a vanishing electric field. Otherwise, due to the presence of the electric field, the probability density $\rho_e$ is not normalizable. If an equilibrium state existed for nonzero field, all charged particles would accumulate at $x \to - \infty$. It should be remarked that the same situation occurs in a quantum treatment of the problem, where one cannot find a wave function normalizable on the whole space. However, in contrast to the quantum case, the statistical approach allows a steady state solution describing the physical situation adequately. The electric current is given by \begin{equation} j_{el} = n e \langle v \rangle = \frac{N e}{L} \int_0^L dx\, j_s(x)\,. \label{dru} \end{equation} Here, $N$ is the total number of charge carriers, $n$ is the corresponding density, and $j_s$ denotes the steady-state probability current. From here we conclude immediately that the equilibrium distribution \eqref{eq} is an improper distribution function, because it yields zero current due to the symmetry $\mathcal{H}(v) = \mathcal{H}(-v)$. 
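The vanishing of the probability current for the equilibrium density can be checked numerically. The following sketch is a minimal illustration (the cosine potential and all parameter values are assumptions chosen for illustration, not taken from the text); it evaluates $j_p$ from Eq.~\eqref{strom} at $E = 0$ for $\rho_e \propto \exp(-U/k_B T)$, with the Einstein relation $D = k_B T/(\gamma m)$ enforced.

```python
import numpy as np

# Illustrative parameters in arbitrary units (assumed, not from the text).
kT, m, gamma, U0, L = 1.0, 1.0, 1.0, 0.7, 2.0 * np.pi
D = kT / (gamma * m)              # Einstein relation D = k_B T / (gamma m)
q = 2.0 * np.pi / L

x = np.linspace(0.0, L, 2001)
U = U0 * (1.0 - np.cos(q * x))    # periodic crystal potential
dU = U0 * q * np.sin(q * x)       # dU/dx, analytic

rho = np.exp(-U / kT)             # equilibrium (Boltzmann) density at E = 0
rho /= np.sum(rho) * (x[1] - x[0])  # normalize on one period

drho = np.gradient(rho, x)                 # numerical d(rho)/dx
j_p = -D * drho - dU * rho / (m * gamma)   # probability current for E = 0

# The current vanishes (up to discretization error): rho_e carries no flux.
print(np.max(np.abs(j_p)) < 1e-3)  # -> True
```

The analogous check for a steady state with $E \neq 0$ would instead give a constant, nonzero $j_p$, in line with the distinction drawn in the text.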
\section{Electrical Conductivity} \noindent In order to obtain the electrical conductivity one has to apply the steady state solution of Eq.~\eqref{fp}. For a zero potential $U = 0$ we find the constant steady state current by integration of Eq.~\eqref{strom}: \begin{equation} j_{el} = \frac{N e}{L}\left[ D (\rho(0) - \rho(L)) + \frac{e E }{\gamma m} \int_0^L \rho(x) dx \right]\,. \end{equation} $N$ is the total number of charge carriers and $L$ is the size of the system. Taking into account periodic boundary conditions and the normalization condition for $\rho(x)$, one obtains \begin{equation} j_{el} = \sigma_D E\quad{\rm with}\quad \sigma_D = \frac{ n e^2 }{m \gamma}\,. \label{dru1} \end{equation} The quantity $n = N/L$ is the particle density. The last relation is nothing but the Drude conductivity, where the relaxation time is given by the inverse damping parameter $\gamma$. For a non-zero potential the relation in Eq.~\eqref{dru1} will be modified accordingly. Direct integration yields \begin{equation} j_{el} = \sigma_D E - \frac{N e}{m \gamma L} \int_0^L dx \frac{dU(x)}{dx} \rho_s(x) = \sigma_D E - \frac{N e}{m \gamma L} \left \langle \frac{d U}{dx} \right \rangle \,. \label{dru2} \end{equation} The average is taken using the field dependent steady state probability distribution function $\rho_s(x) \equiv \rho_s(x; E)$. To find the contribution induced by the electric field we need the steady state solution with a non-vanishing current $j_p \neq 0$, which obeys $\partial_x j_p(x) = 0$. Because the steady state solution $\rho_s$ likewise depends on the electric field, we have to incorporate the influence of the electric field on the averaged crystal potential, compare Eq.~\eqref{dru2}. Since we are interested in the conductivity, we need the steady state solution for a weak electric field $E$ only. 
Therefore we make the ansatz \begin{equation} \rho_s(x; E) = \mathcal{N}(E) e^{-\frac{U(x)}{k_BT}} [ 1 + E \phi(x) ]\,, \label{stea} \end{equation} with an arbitrary function $\phi(x)$ and a normalization factor $\mathcal{N}(E)$. Inserting Eq.~\eqref{stea} in Eq.~\eqref{strom} we find \begin{equation} \frac{d \phi(x)}{dx} = \frac{e}{k_B T} - \frac{j_p }{ E D \mathcal{N}(E)} \exp(U(x)/k_B T)\,. \label{stea1} \end{equation} The current $j_p$, defined in Eq.~\eqref{strom}, vanishes for zero field. Therefore, for a small electric field, we write $j_p = a E$ with an unknown parameter $a$ which is determined below. The solution of Eq.~\eqref{stea1} is \begin{equation} \phi(x) = \phi_0 + \frac{e x}{k_BT} - \frac{a}{D \mathcal{N}(E)} \int_0^x dx' \exp(U(x')/k_BT)\,. \label{1d} \end{equation} Imposing periodic boundary conditions it follows that \begin{equation} \frac{e L}{k_B T} = \frac{a}{D \mathcal{N}(E) } \int_0^L dx \exp(U(x)/k_BT)\,. \label{stea3} \end{equation} According to Eq.~\eqref{dru2} and Eq.~\eqref{stea} as well as periodic boundary conditions, the contribution to the conductivity is $$ \left \langle \frac{d U}{dx} \right \rangle = \int_0^L dx \frac{d U}{dx} \rho_s(x; E) = k_B T \mathcal{N}(E) E \int_0^L e^{-\frac{U(x)}{k_BT}} \frac{d \phi(x)}{dx} dx \,. $$ Using Eqs.~\eqref{stea}, \eqref{stea1} and \eqref{stea3} one finds \begin{equation} \left \langle \frac{d U}{dx} \right \rangle = k_B T \mathcal{N}(E) E \left[ \frac{e}{k_B T \mathcal{N}(0)} - \frac{a L}{D \mathcal{N}(E)} \right] = e E \left[ 1 - \frac{L^2}{Z_L[U]\,Z_L[-U]} \right] + O(E^2) \label{cor} \end{equation} with \begin{equation} Z_L[U] = \int_0^L dx \exp( U (x)/k_BT)\,. \end{equation} Inserting the result in Eq.~\eqref{dru2}, our final relation for the conductivity reads \begin{equation} \sigma = \sigma_D \frac{L^2}{Z_L[U]\,Z_L[-U]} \equiv \sigma_D \frac{Z^2[U=0]}{Z_L[U] Z_L[-U]}\,. 
\label{fin} \end{equation} Because the last relation includes a ratio of partition functions, one can also insert the total partition function of the system without the electric field part, \begin{equation} Z_L[U] = {\rm Tr}\, \exp \left[ - \beta \left( \frac{m}{2} v^2 + U(x) \right) \right]\,, \label{fin1} \end{equation} since the kinetic contribution cancels in the ratio. The final relation exhibits a duality property, namely a symmetry under $U(x) \to -\, U(x)$. Thus, in our approach one cannot distinguish between the barrier and the trapping problem. Eq.~\eqref{fin} can also be rewritten in the form \begin{equation} \sigma = \sigma_D e^{- E_A/k_BT}\quad{\rm with}\quad E_A = F[U] + F[-U] - 2 F[U=0],\quad F = - k_B T \ln Z\,. \label{fin2} \end{equation} A typical non-equilibrium quantity, the activation energy $E_A$, is thus completely expressed by the equilibrium free energy $F$. Let us note that the Einstein relation is fulfilled by the equilibrium distribution when the system is coupled to a heat bath. In the present approach, however, we apply the steady state solution. In that case we can set $D \gamma m = \epsilon_0\,$, where $\epsilon_0$ is a characteristic energy of the system, for instance the ground state energy of a quantum model. \section{Model Potentials} \noindent Now let us illustrate our approach with two examples. First we consider the tight-binding potential (dotted line in Fig.~\ref{fig1}): $$ U(x) =U_0\,[ 1 - \cos(qx)\,]\quad q=\frac{2\pi}{L}\,. $$ The electric field causes a systematic shift of the potential due to Eq.~\eqref{cor}. The resulting change of the probability density for the position, $\rho_s(x;E)$ according to Eq.~\eqref{stea}, is shown in Fig.~\ref{fig1}. With increasing strength of the electric field $E$ the shift becomes more pronounced. The shift of the potential, or of the probability respectively, is organized in such a way that a constant current is maintained. For a field that is small on the scale $U_0/(e L)$ the charge carriers immediately follow the potential. 
The largest probability to find the electron is at the minimum of the potential. When the field strength is enlarged, the probability density for the position is shifted considerably. For a very high field the charge carriers are practically not influenced by the potential. In that case the probability density is nearly constant, as shown in the last graph in Fig.~\ref{fig1}. The analytical calculation for the conductivity yields, according to Eq.~\eqref{fin}, \begin{equation} \sigma = \frac{\sigma_D}{I_0^2(\frac{U_0}{k_BT})} \,. \label{tb} \end{equation} Here $I_0(y)$ is the modified Bessel function of the first kind. In the low temperature case $U_0 \gg k_B T$ the conductivity behaves as $\sigma \simeq \sigma_D \exp( - 2 U_0/k_BT)$, i.e. the activation energy due to Eq.~\eqref{fin2} is dominated by the amplitude of the periodic potential, $ E_A \simeq 2 U_0 $. In the opposite case of high temperatures one finds $ E_A \simeq U_0^2/(2 k_B T) $. The conductivity then exhibits only a small correction, $ \sigma \simeq \sigma_D [ 1 - (U_0/ k_BT)^2/2 \,]$. The conductivity is depicted in Fig.~\ref{fig2} and the activation energy in Fig.~\ref{fig3}.\\ A second illustration is given by a sequence of $N$ nanocontacts modeled by the piecewise constant potential $U(x) = U_0$ for $0\leq x \leq a$ and $U(x) = -U_0$ for $a \leq x \leq 2a$, with periodic continuation and $L = N a$. The number of barriers and the number of traps are both $N/2$. The conductivity follows from Eq.~\eqref{fin} as \begin{equation} \sigma = \sigma_D \frac{2}{\,1 + \cosh(\frac{2 U_0}{k_B T})}\,. \label{nano} \end{equation} The activation energy can be expressed as \begin{equation} E_A = k_B T \ln\left( \frac{1 + \cosh(2 U_0/k_B T)}{2} \right)\,. \end{equation} The limiting cases are $$ \frac{\sigma}{\sigma_D} \simeq 4\,e^{- 2 U_0/k_B T}\quad{\rm if}\quad U_0 \gg k_B T\,;\quad \frac{\sigma}{\sigma_D} \simeq 1 - \left(\frac{U_0}{k_B T}\right)^2 \quad{\rm if}\quad U_0 \ll k_B T\,. 
$$ For an infinitely high barrier the material becomes an insulator, whereas for zero potential the conventional Drude conductivity appears. The conductivity for the piecewise constant potential is shown in Fig.~\ref{fig2} as the dashed line. The conductivity increases continuously with increasing temperature. The higher the temperature in comparison to the potential height $U_0$, the more charge carriers are able to overcome the barrier. For rather high temperatures the conductivity becomes constant, i.e. the influence of the potential is negligible. The same situation is observed for the $\cos$-potential (full line in Fig.~\ref{fig2}). The increase of the conductivity is accompanied by a decrease of the activation energy $E_A$. The behavior of $E_A$ for both the piecewise constant potential (dashed line) and the $\cos$-potential is shown in Fig.~\ref{fig3}. For practical purposes let us consider the case that there exists only one barrier of height $U_B$ and width $\Delta$. The length of each of the two input leads is assumed to be $l$ and the corresponding heights are $U_0$. Introducing the dimensionless ratio $\kappa = \Delta /(2 l)$, the conductivity can be written as $$ \sigma = \sigma_D \frac{(1 + \kappa )^2}{\,(1 - \kappa )^2 + 4 \kappa \frac{\sigma_D}{\sigma_m}} $$ where $\sigma_m$ is the minimal conductivity, realized for $\kappa = 1 $: $$ \sigma_m = \frac{ 2 \sigma_D}{ 1 + \cosh\left( (U_B - U_0)/k_B T \right)}\,. $$ For $\kappa \ll 1$ one gets $$ \frac{\sigma}{\sigma_D} \simeq 1 - 4 \kappa \left[ \frac{\sigma_D}{\sigma_m} - 1 \right]\,. $$ In the same manner one can find the conductivity for any other potential. \section{Conclusion} \noindent In the present paper a simple classical model for the electrical conductivity is proposed and solved in one dimension. Regarding nanocontacts and nanowires, such one-dimensional models are the focus of much recent interest. 
Here we obtain an exact expression for the conductivity, where systematic contributions originating from the underlying periodic potential are taken into account. To be specific, the potential is subjected to a systematic alteration caused by the applied external field. The probability distribution for the position of the charged particles is modified by the field. Simultaneously, the charge carriers are subjected to an effective averaged potential which is likewise modified by the field. The alteration is organized in such a way that a constant current is maintained. The contribution of the crystal potential is significant for low temperatures, whereas in the high temperature limit the influence of the potential is weak and the conventional Drude conductivity is recovered. Although our approach is a classical stochastic one, the conductivity can also be calculated for a piecewise constant potential, which models nanocontacts or nanowires. The present analysis is restricted to the one-dimensional case. The three dimensional case is more complicated because the steady state solution has to satisfy the condition $\nabla \cdot \vec j_p = 0$, which is fulfilled by $\vec j_p = \nabla \times \vec A$ with an arbitrary vector field $\vec A$. In a forthcoming paper \cite{st} we study this problem in view of the spin Hall effect, applying a similar approach. There we are able to estimate the contributions of the potential to the conductivity.\\ Let us finally remark that a more general expression for the conductivity is obtained when the Einstein relation $D \gamma m = k_B T$ is no longer fulfilled. Then, for example, the conductivity for the piecewise constant potential reads $$ \sigma = \sigma_D \frac{2}{\,1 + \cosh(\frac{2 U_0}{\epsilon_0})}\,, $$ where now the characteristic energy scale $\epsilon_0$ cannot be determined within our approach. 
However, it is appropriate to identify $\epsilon_0$ with the ground state energy in the presence of the potential.\\ \noindent The work has been supported by the DFG: SFB 418
\subsubsection*{Example: Gaussian states} A generic zero mean Gaussian state can be described by the characteristic function \cite{Hol2} \begin {equation}\label {charact} \Phi(x,y)=\exp\left[-\frac {1}{2}(\sigma _{xx}x^{2}+2\sigma _{xp}xy+\sigma _{pp}y^{2})\right], \end {equation} where $x,y\in {\mathbb R}$ and the covariances $\sigma _{xx}$, $\sigma _{pp}$, $\sigma _{xp}$ satisfy the Schroedinger-Robertson uncertainty relation $$ \sigma _{xx}\sigma _{pp}-\sigma _{xp}^{2}\ge \frac {1}{4}. $$ The symplectic quantum tomograms corresponding to the characteristic function (\ref{charact}) are given by $$ \omega (X,\mu ,\nu)=\frac {1}{\sqrt {2\pi }(\sigma _{xx}\mu ^{2}+2\sigma _{xp}\mu \nu +\sigma _{pp}\nu^{2})^{1/2}}\exp\left (-\frac {X^{2}}{2(\sigma _{xx}\mu ^{2}+2\sigma _{xp}\mu \nu +\sigma _{pp}\nu ^{2})}\right ). $$ Suppose we only know the tomogram \begin {equation}\label {A} \omega (X,1,0)=\frac {1}{\sqrt {2\pi \sigma _{xx}}}\exp\left (-\frac {X^{2}}{2\sigma _{xx}}\right ). \end {equation} From it we can retrieve the covariance $\sigma _{xx}$. Let us calculate the measure $\overline C({\mathcal A})$ for the set $\mathcal A$ consisting of the Gaussian states compatible with the distribution (\ref{A}), i.e. with covariance $\sigma _{xx}$. This quantity equals the maximum von Neumann entropy over all states in $\mathcal A$ \cite{shirokov}. In passing we note that the von Neumann entropy of a Gaussian state reads \cite {Hol} \begin {equation}\label {entr} S(\hat \rho)=g\left(\sqrt {\sigma _{xx}\sigma _{pp}-\sigma _{xp}^{2}}-\frac {1}{2}\right), \end {equation} with $$ g(x)=(x+1)\log (x+1)-x\log x. $$ Then, because the condition (\ref{A}) does not restrict the value of $\sigma _{pp}$, we obtain in our case $\overline C({\mathcal A})=+\infty$. Now suppose that besides the tomogram (\ref{A}), we also know the tomogram \begin {equation}\label {B} \omega (X,0,1)=\frac {1}{\sqrt {2\pi \sigma _{pp}}}\exp\left (-\frac {X^{2}}{2\sigma _{pp}}\right ).
\end {equation} From it we can retrieve the covariance $\sigma _{pp}$. Then, taking into account (\ref {A}), (\ref {B}) and (\ref {entr}), we get $$ \overline C({\mathcal A})=g\left(\sqrt {\sigma _{xx}\sigma _{pp}}-\frac {1}{2}\right). $$ Finally, if we know any other tomogram for additional parameters $(\mu ,\nu)\neq (1,0)$ or $(0,1)$, it will allow us to retrieve the covariance $\sigma _{xp}$. Since we supposed a priori that the set $\mathcal A$ is generated by pure states, we obtain that our Gaussian state is pure, i.e. $\sigma _{xp}^{2}=\sigma _{xx}\sigma _{pp}-\frac {1}{4}$, so that $\overline C({\mathcal A})=0$. \section {Conclusion} We have addressed the problem of informational completeness of quantum measurements in connection with quantum state tomography and with particular attention to quantum symplectic tomography. We have put forward some relevant cases where the state reconstruction is possible from incomplete knowledge of symplectic quantum tomograms. We have then introduced a measure of informational completeness and we have applied it to symplectic quantum tomograms. This work sheds further light on the subject of quantum state characterization, which is becoming relevant for many purposes, e.g. quantum information processing. \section*{Acknowledgments} The authors are grateful to M.E. Shirokov and S. Weigert for useful discussions. Grigori Amosov is grateful to Stefano Mancini for kind hospitality during his stay at the University of Camerino. The work of G.A. is partially supported by the INTAS grant Ref. Nr. 06-1000014-6077 and by the Russian Foundation for Basic Research under Project No.~07-02-00598. \begin {thebibliography}{99} \bibitem{Amosov} Amosov G.G. and Man'ko V.I., Physics Letters A \textbf{318} (2003) p.287. \bibitem{Busch} Busch P. and Lahti P.J., Ann. der Phys. \textbf{47} (1990) p.369. \bibitem{Cass} Cassinelli G., D'Ariano G.M., De Vito E. and Levrero A., J. Math. Phys. \textbf{41} (2000) p.7940.
\bibitem{Fresnel1} De Nicola S., Fedele R., Man'ko M.A., Man'ko V.I., Theor. Math. Phys. \textbf{144} (2005) p.1206. \bibitem{Fresnel2} De Nicola S., Fedele R., Man'ko M.A., Man'ko V.I., J. Russ. Laser Res. \textbf{25} (2004) p.1. \bibitem{Fresnel3} De Nicola S., Fedele R., Man'ko M.A., Man'ko V.I., European J. Phys. B \textbf{36} (2003) p.385. \bibitem{Hol} Holevo A.S. and Werner R.F., Physical Review A \textbf{63} (2001) p.032312. \bibitem{Hol2} Holevo A.S., \emph{Probabilistic and statistical aspects of quantum theory}, North-Holland Publ. Comp., (1982). \bibitem{Leonhardt} Leonhardt U. and Raymer M.G., Physical Review Letters \textbf{76} (1996) p.1985. \bibitem{Manko} Malkin I.A. and Man'ko V.I., Physics Letters A \textbf{32} (1970) p.243. \bibitem{Vent1} Man'ko V.I., Marmo G., Simoni A., Sudarshan E.C.G., Ventriglia F., Phys. Lett. A \textbf {351} (2006) p.1. \bibitem{Vent2} Man'ko V.I., Marmo G., Simoni A., Ventriglia F., Open Syst. Inf. Dyn. \textbf{13} (2006) p.239. \bibitem{Mancini} Mancini S., Man'ko V.I. and Tombesi P., Quant. Semiclass. Opt. \textbf{7} (1995) p.615. \bibitem{Prug} Prugovecki E., Int. J. Theor. Phys. \textbf{16} (1977) p.321. \bibitem{shirokov} Shirokov M.E., quant-ph/0510073. \bibitem{Vogel} Vogel K. and Risken H., Phys. Rev. A \textbf{40} (1989) p.2847. \bibitem {Weigert} Weigert S., Physical Review A \textbf{53} (1996) p.2078. \end {thebibliography} \end {document}
\section{\sc Introduction} Different censoring schemes are extensively used in practice to make a life testing experiment more time and cost effective. In a type-I censoring scheme, the experiment is terminated at a prefixed time point. But it may happen that no failure is observed during that time, which leads to a very poor statistical analysis of the associated model parameters. To ensure a certain number of failures, the type-II censoring scheme has been introduced in the literature. But in neither of these censoring schemes can any experimental unit be removed during the experiment. A progressive censoring scheme also allows the withdrawal of some experimental units during the experiment. Different progressive censoring schemes have been introduced in the literature. The most popular one is known as the progressive type-II censoring scheme, and it can be briefly described as follows. Suppose $n$ identical units are put on a life testing experiment. The integer $k < n$ is prefixed, and $R_1$,\ldots,$R_k$ are $k$ prefixed non-negative integers such that $\displaystyle \sum_{i=1}^{k}R_i +k=n$. At the time of the first failure, $R_1$ units are chosen randomly from the remaining $n-1$ units and they are removed from the experiment. Similarly, at the time of the second failure, $R_2$ units are chosen randomly from the remaining $n-R_1-2$ units and they are removed, and so on. Finally, at the time of the $k$-th failure, the remaining $R_k$ units are removed, and the experiment stops. Extensive work has been done during the last ten years on various aspects of different progressive censoring schemes. Interested readers may refer to the recent book by Balakrishnan and Cramer \cite{BC:2014} for a detailed account of different progressive censoring schemes and the related issues. See also Balakrishnan \cite{Bala:2007}, Pradhan and Kundu \cite{PK:2009} and Kundu \cite{Kundu:2008}, in this respect.
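The progressive type-II scheme just described can be mimicked in a few lines of code; the helper below is purely illustrative (the function name is ours) and assumes a valid plan with $\sum_{i=1}^{k}R_i + k = n$:

```python
import random

def progressive_type2(lifetimes, R):
    """Apply a progressive type-II censoring plan R = (R_1, ..., R_k)
    to a list of n unit lifetimes; returns the k observed failure times."""
    assert sum(R) + len(R) == len(lifetimes)
    alive = list(lifetimes)
    observed = []
    for r in R:
        t = min(alive)            # next failure among the surviving units
        observed.append(t)
        alive.remove(t)
        for unit in random.sample(alive, r):   # withdraw r units at random
            alive.remove(unit)
    return observed
```

By construction the observed failure times come out in increasing order, and after the $k$-th failure all remaining units have been withdrawn.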
Although extensive work has been done on different aspects of progressive censoring schemes for one sample, not much work has been done on two sample problems. Recently, Rasouli and Balakrishnan \cite{RB:2010} introduced joint progressive type-II censoring for two samples. The joint progressive censoring scheme is quite useful to compare the lifetime distributions of products from different units which are being manufactured by two different lines in the same facility. The joint progressive censoring (JPC) scheme introduced by Rasouli and Balakrishnan \cite{RB:2010} can be briefly stated as follows. It is assumed that two samples of products of sizes $m$ and $n$, respectively, are selected from these two lines of operation (say Line 1 and Line 2), and they are placed on a life testing experiment simultaneously. A type-II progressive censoring scheme is implemented on the combined sample of size $N=m+n$ as follows. Let $k < N$, and let $R_1, \ldots, R_k$ be pre-fixed non-negative integers such that $\displaystyle \sum_{i=1}^k R_i + k = N$. At the time of the first failure, which may be from Line 1 or Line 2, $R_1$ units are chosen at random from the remaining combined $N-1$ units, consisting of $S_1$ units from Line 1 and $T_1$ units from Line 2, and they are removed from the experiment. Similarly, at the time of the second failure, $R_2$ items, consisting of $S_2$ and $T_2$ units from Line 1 and Line 2, respectively, are chosen at random from the combined $N-2-R_1$ remaining units and removed, and so on. Finally, at the $k$-th failure, the remaining $R_k = S_k+T_k$ units are removed from the experiment, and the experiment stops. Note that in a JPC, although the $R_j$'s are pre-fixed, the $S_j$'s and $T_j$'s are random quantities, and that makes the analysis more difficult. Rasouli and Balakrishnan \cite{RB:2010} provided the exact likelihood inference for two exponential populations under the proposed JPC scheme.
See also Parsi and Bairamov \cite{PB:2009}, Ashour and Abo-Kasem \cite{AA:2014}, Balakrishnan and Su \cite{BS:2015} for some problems related to the JPC scheme. In this paper we introduce a new joint progressive type-II censoring (NJPC) scheme. It is observed that the proposed NJPC scheme is easier to handle analytically; therefore, the properties of the proposed estimators can be derived quite conveniently. It has some other advantages also. In this paper we provide the exact inference for two exponential populations under the NJPC scheme, although the results can be extended to other lifetime distributions as well. We obtain the maximum likelihood estimators (MLEs) of the unknown parameters when they exist, and provide the exact distributions of the MLEs. The generation of samples from the NJPC scheme is quite simple; hence, simulation experiments can be performed quite conveniently. It is observed that the MLEs obtained from the NJPC scheme satisfy the stochastic monotonicity properties stated by Balakrishnan and Iliopoulos \cite{BI:2009}; hence, the exact distributions of the MLEs can be used to construct confidence intervals of the unknown parameters. For comparison purposes we also propose bootstrap confidence intervals. Some simulation experiments are performed to compare the performances of the estimators based on JPC and NJPC. It is observed that the estimators based on NJPC perform better than the corresponding estimators based on JPC for certain censoring schemes. One data analysis has been performed for illustrative purposes. The rest of the paper is organized as follows. In Section 2 we introduce the model and provide the necessary assumptions. The MLEs are obtained and their exact distributions are provided in Section 3. In Section 4 we provide a simple algorithm to simulate data from an NJPC scheme and obtain the expected time of the experiment. The construction of confidence intervals is provided in Section 5.
Simulation results and the analysis of one data set are provided in Section 6. Finally, in Section 7 we propose some open problems and conclude the paper. \section{\sc Model Description and Model Assumption} Suppose we have products from two different populations. We draw a random sample of size $m$ from population one (Pop-1) and a random sample of size $n$ from population two (Pop-2). We place the two independent samples simultaneously on a life testing experiment. The proposed NJPC can be described as follows. Let $k < \min\{m, n\}$ be the total number of failures to be observed, and let $R_1, \ldots, R_{k-1}$ be non-negative integers such that $\displaystyle \sum_{i=1}^{k-1}(R_i+1) < \min \{m, n \}$. Suppose the first failure takes place at the time point $W_1$ and it comes from Pop-1; then $R_1$ units are randomly chosen from the remaining $m-1$ surviving units of Pop-1 and they are removed. At the same time, $(R_1 +1)$ units are randomly chosen from the $n$ surviving units of Pop-2 and they are removed. Suppose the next failure takes place at the time point $W_2$ and it comes from Pop-2; then $R_2+1$ units are chosen at random from the remaining $m-1-R_1$ surviving units of Pop-1, and they are removed. At the same time, $R_2$ units are chosen at random from the remaining $n-2-R_1$ surviving units of Pop-2, and they are removed, and so on. Finally, at the time of the $k$-th failure, which may be either from Pop-1 or from Pop-2, all the remaining items from both the populations are removed and the experiment stops. We further define a new set of random variables $Z_1, \ldots, Z_k$, where $Z_j$ = 1 if the $j$-th failure takes place from Pop-1 and $Z_j$ = 0, otherwise. Hence, for an NJPC scheme, the data will be of the form $(\bold W,\bold Z)$, where $\bold W=(W_1, \ldots, W_k)$, $W_1\leq \ldots \leq W_k$, and $\bold Z=(Z_1, \ldots, Z_k)$. Schematically, NJPC can be described as follows.
\noindent Case-I: $k$-th failure comes from Pop-1 \vspace{5mm} \begin{tikzpicture}[scale=0.8] \draw[gray, thick] (-7,4)node[anchor=north]{\textbf{Pop-1}} -- (2,4); \draw[gray, thick](2,4)--(2.5,3.5)--(2.8,4.5)--(3.1,4)--(10,4); \draw[gray, thick] (-7,0)node[anchor=north]{\textbf{Pop-2}} -- (2,0); \draw[gray, thick](2,0)--(2.5,-.5)--(2.8,.5)--(3.1,0)--(10,0); \filldraw[black] (-7,4) circle (2pt) node[anchor= south] {\small start}; \filldraw[black] (-7,0) circle (2pt) node[anchor= south] {\small start}; \draw[dashed,->] (-5,0) --(-4,2) node[anchor=west] {\small $R_1+1$}; \draw[arrows=->] (-5,4)--(-4,6)node[anchor=west] {\small $R_1$}; \filldraw[black](-4,1.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \filldraw[black](-4,5.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \draw [dashed] (-5,-1)--(-5,5); \filldraw[black] (-5,4) circle (2pt) node[anchor= north west] {$W_1$}; \draw[arrows=->] (-1,0) --(0,2) node[anchor=west]{\small $R_2$}; \filldraw[black] (-1,0) circle(2pt) node[anchor=north west]{$W_2$}; \draw[dashed,->] (-1,4)--(0,6) node[anchor=west]{\small $R_2+1$}; \filldraw[black](0,1.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \filldraw[black](0,5.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \draw [dashed] (-1,-1)--(-1,5); \draw[dashed,->] (5,0) --(6,2) node[anchor=west]{\small $n-\sum_{j=1}^{k-1}(R_j+1)$}; \draw[arrows=->] (5,4)--(6,6) node[anchor=west]{\small $m-\sum_{j=1}^{k-1}(R_j+1)-1$}; \draw [dashed] (5,-1)--(5,5); \filldraw[black] (5,4) circle (2pt) node[anchor= north west] {$W_k$}; \filldraw[black](6,1.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \filldraw[black](6,5.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \end{tikzpicture} \newpage \noindent Case-II: $k$-th failure comes from Pop-2 \vspace{5 mm} \begin{tikzpicture}[scale=0.8] \draw[gray, thick] (-7,4)node[anchor=north]{\textbf{Pop-1}} -- (2,4); \draw[gray, thick](2,4)--(2.5,3.5)--(2.8,4.5)--(3.1,4)--(10,4); \draw[gray, thick]
(-7,0)node[anchor=north]{\textbf{Pop-2}} -- (2,0); \draw[gray, thick](2,0)--(2.5,-.5)--(2.8,.5)--(3.1,0)--(10,0); \filldraw[black] (-7,4) circle (2pt) node[anchor= south] {\small start}; \filldraw[black] (-7,0) circle (2pt) node[anchor= south] {\small start}; \draw[dashed,->] (-5,0) --(-4,2) node[anchor=west] {\small $R_1+1$ }; \draw[arrows=->] (-5,4)--(-4,6)node[anchor=west] {\small $R_1$}; \draw [dashed] (-5,-1)--(-5,5); \filldraw[black] (-5,4) circle (2pt) node[anchor= north west] {$W_1$}; \filldraw[black](-4,1.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \filldraw[black](-4,5.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \draw[arrows=->] (-1,0) --(0,2) node[anchor=west]{\small $R_2$}; \filldraw[black] (-1,0) circle(2pt) node[anchor=north west]{$W_2$}; \draw[dashed,->] (-1,4)--(0,6) node[anchor=west]{\small $R_2+1$}; \draw [dashed] (-1,-1)--(-1,5); \filldraw[black](0,1.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \filldraw[black](0,5.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \draw[arrows=->] (5,0) --(6,2) node[anchor=west]{\small $n-\sum_{j=1}^{k-1}(R_j+1)-1$}; \filldraw[black] (5,0) circle (2pt) node[anchor= north west] {$W_k$}; \draw[dashed,->] (5,4)--(6,6) node[anchor=west]{\small $m-\sum_{j=1}^{k-1}(R_j+1)$}; \draw [dashed] (5,-1)--(5,5); \filldraw[black](6,1.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \filldraw[black](6,5.5) circle(0.0005pt) node[anchor=west]{\small withdrawn}; \end{tikzpicture} Suppose $X_1, \ldots, X_m$ denote the lifetimes of $m$ units of Pop-1, and it is assumed that they are independent and identically distributed (i.i.d.) exponential random variables with mean $\theta_1$ (Exp($\theta_1$)). Similarly, it is assumed that $Y_1,\ldots ,Y_n$ denote the lifetimes of $n$ units of Pop-2, and they are i.i.d exponential random variables with mean $\theta_2$. 
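For illustration, the NJPC withdrawal rules above can be simulated directly. The helper below is a naive sketch with hypothetical names (it is not the sampling algorithm of Section 4), and it assumes valid scheme parameters with $\sum_{i=1}^{k-1}(R_i+1) < \min\{m, n\}$:

```python
import random

def simulate_njpc(theta1, theta2, m, n, k, R):
    """Draw one NJPC sample (W, Z); theta1, theta2 are the exponential
    means, R = (R_1, ..., R_{k-1}); Z_i = 1 if failure i is from Pop-1."""
    pop1 = sorted(random.expovariate(1.0 / theta1) for _ in range(m))
    pop2 = sorted(random.expovariate(1.0 / theta2) for _ in range(n))
    W, Z = [], []
    for i in range(k):
        from_pop1 = pop1[0] < pop2[0]          # which line fails next
        if from_pop1:
            W.append(pop1.pop(0))
            Z.append(1)
        else:
            W.append(pop2.pop(0))
            Z.append(0)
        if i < k - 1:                          # planned withdrawals
            r1 = R[i] if from_pop1 else R[i] + 1
            r2 = R[i] + 1 if from_pop1 else R[i]
        else:                                  # k-th failure: remove all
            r1, r2 = len(pop1), len(pop2)
        for _ in range(r1):
            pop1.pop(random.randrange(len(pop1)))
        for _ in range(r2):
            pop2.pop(random.randrange(len(pop2)))
    return W, Z
```

Note that each line loses exactly $R_i+1$ units at the $i$-th stage (one failure plus $R_i$ withdrawals on the failing line, $R_i+1$ withdrawals on the other), which is why the constraint above guarantees enough survivors at every stage.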
\section{\sc Maximum Likelihood Estimators and Their Exact Distributions} \subsection{\sc Maximum Likelihood Estimators} For a given sampling scheme $m$, $n$, $k$ and $\displaystyle R_1, \ldots, R_{k-1}$, based on the observation $(\bold W,\bold Z)$ the likelihood function can be written as \begin{eqnarray} L(\theta_1,\theta_2| \bold w,\bold z)=C\frac{1}{\theta_1^{m_k}}\frac{1}{\theta_2^{n_k}} e^{-(\frac{A_1}{\theta_1}+\frac{A_2}{\theta_2})}; \label{ll} \end{eqnarray} where the normalizing constant $\displaystyle C=\prod_{i=1}^{k}[(m-\sum_{j=1}^{i-1}(R_j+1))z_i+(n-\sum_{j=1}^{i-1}(R_j+1))(1-z_i)]$, $\displaystyle A_1=\sum_{i=1}^{k-1}(R_i+1)w_i+(m-\sum_{i=1}^{k-1}(R_i+1))w_k$, $\displaystyle A_2=\sum_{i=1}^{k-1}(R_i+1)w_i+(n-\sum_{i=1}^{k-1}(R_i+1))w_k$, $\displaystyle m_k=\sum_{i=1}^{k}z_i$, $n_k=\sum_{i=1}^{k}(1-z_i) = k - m_k$. From \eqref{ll} it follows that $(m_k, n_k, A_1, A_2)$ is the joint complete sufficient statistic for the unknown parameters $(\theta_1, \theta_2)$. It is immediate that the MLEs of both $\theta_1$ and $\theta_2$ exist when $1\leq m_k \leq k-1$, and they are as follows: $$ \widehat{\theta}_1 = \frac{A_1}{m_k} \ \ \ \ \hbox{and} \ \ \ \ \widehat{\theta}_2 = \frac{A_2}{n_k}. $$ Hence $(\widehat{\theta}_1, \widehat{\theta}_2)$ is the conditional MLE of $(\theta_1,\theta_2)$, conditioning on $1\leq m_k \leq k-1$. \subsection{\sc Joint and Marginal Distributions} In this section we provide the joint and marginal distribution functions of $\widehat{\theta}_1$ and $\widehat{\theta}_2$ based on the joint and marginal moment generating function (MGF) approach. Lemma 1 is needed for further development.
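The closed-form MLEs above translate directly into code; a minimal sketch (function and argument names are ours):

```python
def njpc_mles(W, Z, m, n, R):
    """Conditional MLEs (theta1_hat, theta2_hat) = (A1/m_k, A2/n_k) from
    an NJPC sample; returns None when m_k is 0 or k (no MLE exists)."""
    k = len(W)
    m_k = sum(Z)
    n_k = k - m_k
    if m_k == 0 or n_k == 0:
        return None
    removed = sum(r + 1 for r in R)            # sum of (R_i + 1), i < k
    a_common = sum((R[i] + 1) * W[i] for i in range(k - 1))
    A1 = a_common + (m - removed) * W[-1]
    A2 = a_common + (n - removed) * W[-1]
    return A1 / m_k, A2 / n_k
```

As a worked check, for $k=2$, $R=(0)$, $\bold w=(1,2)$, $\bold z=(1,0)$ and $m=n=3$ one gets $A_1=A_2=1\cdot 1+2\cdot 2=5$ and $m_k=n_k=1$, so $\widehat{\theta}_1=\widehat{\theta}_2=5$.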
\noindent {\sc Lemma 1:} $$ P(m_k=r)=\sum_{\bold z \in Q_r}\left \{\prod_{i=1}^{k}\frac{(m-\sum_{j=1}^{i-1}(R_j+1))z_i+(n-\sum_{j=1}^{i-1}(R_j+1))(1-z_i)}{(m-\sum_{j=1}^{i-1}(R_j+1))\theta_2+(n-\sum_{j=1}^{i-1}(R_j+1))\theta_1}\right \}\theta_1^{k-r}\theta_2^r, $$ where $\displaystyle Q_r=\left \{\bold z=(z_1,\ldots,z_k):\sum_{i=1}^{k}z_i=r \right \}; r=0,\ldots,k.$ \noindent {\sc Proof:} See in the Appendix. \hfill \vrule height7pt width 5pt depth 0pt Note that when $m = n$, \begin{equation} P(m_k=r) = {k\choose{r}} \left ( \frac{\theta_2}{\theta_1+\theta_2} \right )^r \left ( \frac{\theta_1}{\theta_1+\theta_2} \right )^{k-r}; \ \ \ r = 0, \ldots, k. \label{pmk} \end{equation} Now we provide the joint moment generating function (MGF) of $(\widehat{\theta}_1, \widehat{\theta}_2)$ conditioning on $1 \le m_k \le k-1$. \noindent {\sc Theorem 1:} The joint MGF of $(\widehat{\theta}_1, \widehat{\theta}_2)$ conditioning on $1 \le m_k \le k-1$ is given by \begin{equation} M_{ \widehat{\theta}_1, \widehat{\theta}_2 }\big(t_1,t_2\big) = \frac{\sum_{r=1}^{k-1}P(m_k=r) \prod_{s=1}^{k}({1-\alpha_{sr} t_1-\beta_{sr} t_2})^{-1}}{P(1\leq m_k \leq k-1)}, \end{equation} where \begin{eqnarray*} \alpha_{sr} & = & \frac{(m-\sum_{i=1}^{s-1}(R_i+1))\theta_1\theta_2}{r\{(m-\sum_{i=1}^{s-1}(R_i+1))\theta_2+(n-\sum_{i=1}^{s-1}(R_i+1))\theta_1\}} \\ \beta_{sr} & = & \frac{(n-\sum_{i=1}^{s-1}(R_i+1))\theta_1\theta_2}{(k-r)\{(m-\sum_{i=1}^{s-1}(R_i+1))\theta_2+(n-\sum_{i=1}^{s-1}(R_i+1))\theta_1\}}. \end{eqnarray*} \noindent {\sc Proof:} See in the Appendix. \hfill \vrule height7pt width 5pt depth 0pt \noindent Using Theorem 1, we immediately get the following corollary.
\noindent {\sc Corollary 1:} Conditioning on $1\leq m_k \leq k-1$, the marginal MGF of $\widehat{\theta}_1$ and $\widehat{\theta}_2$ are given by \begin{eqnarray*} M_{\widehat{\theta}_1}\big(t\big) & = & \frac{\sum_{r=1}^{k-1}P(m_k=r) \prod_{s=1}^{k}({1-\alpha_{sr} t})^{-1}}{P(1\leq m_k \leq k-1)} \ \ \ \hbox{and} \ \ \ \\ M_{\widehat{\theta}_2}\big(t\big) & = & \frac{\sum_{r=1}^{k-1}P(m_k=r) \prod_{s=1}^{k}({1-\beta_{sr} t})^{-1}}{P(1\leq m_k \leq k-1)}, \end{eqnarray*} respectively. Hence we have the PDFs of $\widehat{\theta}_1$ and $\widehat{\theta}_2$ as follows. \noindent {\sc Theorem 2:} Conditioning on $1\leq m_k \leq k-1$, the PDF of $\widehat{\theta}_1$ is given by \begin{equation} f_{\widehat{\theta}_1}\big(t\big)=\frac{\sum_{r=1}^{k-1}P(m_k=r) g_{X_r}(t)}{P(1\leq m_k \leq k-1)}. \end{equation} Here $\displaystyle X_r \overset{d}{=} \sum_{s=1}^{k} U_{sr}$, where $U_{sr} \sim \hbox{Exp}(\alpha_{sr})$ and they are independently distributed. Also, $g_{X_r}(t)$ is the PDF of $X_r$, and when $m \ne n$, $$ g_{X_r}(t) = \prod_{s=1}^{k}\frac{1}{\alpha_{sr}} \times \sum_{s=1}^{k}\frac{e^{-\frac{t}{\alpha_{sr}}}}{\prod_{j=1,j\neq s}^{k} (\frac{1}{\alpha_{jr}}-\frac{1}{\alpha_{sr}})}; \ \ \ t > 0, $$ and 0, otherwise. When $m = n$, $$ g_{X_r}(t) = \frac{1}{\Gamma (k) \alpha_r^k} t^{k-1} e^{-\frac{t}{\alpha_r}}; \ \ \ t > 0, $$ and 0, otherwise. Here $\displaystyle \alpha_r = \frac{\theta_1 \theta_2}{r(\theta_1+\theta_2)}$. The PDF of $\widehat{\theta}_2$ is given by \begin{equation} f_{\widehat{\theta}_2}\big(t\big)=\frac{\sum_{r=1}^{k-1}P(m_k=r) g_{Y_r}(t)}{P(1\leq m_k \leq k-1)}. \end{equation} Here $Y_r\overset{d}{=}\sum_{s=1}^{k}V_{sr}$, where $V_{sr} \sim \hbox{Exp}(\beta_{sr})$ and they are independently distributed. 
Also, $g_{Y_r}(t)$ is the PDF of $Y_r$, and when $m \ne n$, $$ g_{Y_r}(t) = \prod_{s=1}^{k}\frac{1}{\beta_{sr}} \times \sum_{s=1}^{k}\frac{e^{-\frac{t}{\beta_{sr}}}}{\prod_{j=1,j\neq s}^{k}(\frac{1} {\beta_{jr}}-\frac{1}{\beta_{sr}})}; \ \ \ t > 0, $$ and 0, otherwise. When $m = n$, $$ g_{Y_r}(t) = \frac{1}{\Gamma (k) \beta_r^k} t^{k-1} e^{-\frac{t}{\beta_r}}; \ \ \ t > 0, $$ and 0, otherwise. Here $\displaystyle \beta_r = \frac{\theta_1 \theta_2}{(k-r)(\theta_1+\theta_2)}$. \noindent {\sc Proof:} It immediately follows from Corollary 1. \hfill \vrule height7pt width 5pt depth 0pt \noindent {\sc Remark:} The distribution of the MLE is a mixture of $k-1$ components, where each component is a sum of $k$ independent exponentially distributed random variables. When $m = n$, it is a weighted mixture of gamma distributions. We can easily obtain the moments of $\widehat{\theta}_1$ and $\widehat{\theta}_2$. When $m \ne n$, the first two moments are \begin{eqnarray*} E(\widehat{\theta}_1) & = & \frac{\sum_{r=1}^{k-1}P(m_k=r) \sum_{s=1}^{k} \alpha_{sr}}{P(1\leq m_k \leq k-1)} \\ E({\widehat{\theta}_1}^2) & = & \frac{\sum_{r=1}^{k-1}P(m_k=r) (2\sum_{s=1}^{k} \alpha_{sr}^2+\sum_{\substack{i\neq j}} \alpha_{ir} \alpha_{jr})}{P(1\leq m_k \leq k-1)} \\ E(\widehat{\theta}_2) & = & \frac{\sum_{r=1}^{k-1}P(m_k=r) \sum_{s=1}^{k}\beta_{sr}}{P(1\leq m_k \leq k-1)} \\ E({\widehat{\theta}_2}^2) & = & \frac{\sum_{r=1}^{k-1}P(m_k=r) (2\sum_{s=1}^{k}\beta_{sr}^2+\sum_{\substack{i \neq j}} \beta_{ir} \beta_{jr})}{P(1\leq m_k \leq k-1)}. \end{eqnarray*} When $m = n$, $$ E(\widehat{\theta}_1)=\frac{\sum_{r=1}^{k-1}P(m_k=r)k\alpha_r}{P(1\leq m_k \leq k-1)} \ \ \ \hbox{and} \ \ \ E({\widehat{\theta}_1}^2)=\frac{\sum_{r=1}^{k-1}P(m_k=r)k(k+1){{\alpha_r}^2}}{P(1\leq m_k \leq k-1)} $$ $$ E(\widehat{\theta}_2)=\frac{\sum_{r=1}^{k-1}P(m_k=r)k\beta_r}{P(1\leq m_k \leq k-1)} \ \ \ \hbox{and} \ \ \ E({\widehat{\theta}_2}^2)=\frac{\sum_{r=1}^{k-1}P(m_k=r)k(k+1){{\beta_r}^2}}{P(1\leq m_k \leq k-1)}. 
$$ Here $\alpha_r$ and $\beta_r$ are the same as defined before, and $P(m_k=r)$ is given by (\ref{pmk}). Now, to get an idea about the shape of the PDFs of $\widehat{\theta}_1$ and $\widehat{\theta}_2$ for different censoring schemes, we have plotted in Figures \ref{fig:fig1} to \ref{fig:fig4} the PDFs of $\widehat{\theta}_1$ and $\widehat{\theta}_2$ along with the histograms of $\widehat{\theta}_1$ and $\widehat{\theta}_2$ based on 10,000 replications. \begin{figure}[H] \begin{subfigure}{.5\textwidth} \includegraphics[width=.8\linewidth]{1.pdf} \caption{Histogram of $\widehat{\theta}_1$ along with its PDF} \label{fig:sfig1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \includegraphics[width=.8\linewidth]{4.pdf} \caption{Histogram of $\widehat{\theta}_2$ along with its PDF} \label{fig:sfig2} \end{subfigure} \caption{Histograms of $\widehat{\theta}_1$ and $\widehat{\theta}_2$ along with their PDFs, taking $\theta_1=0.5$, $\theta_2=1$, $m=20$, $n=25$, $k=8$, $R=(7,0_{(6)})$} \label{fig:fig1} \end{figure} \begin{figure}[H] \begin{subfigure}{.5\textwidth} \includegraphics[width=.8\linewidth]{2.pdf} \caption{Histogram of $\widehat{\theta}_1$ along with its PDF} \label{fig:sfig1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \includegraphics[width=.8\linewidth]{3.pdf} \caption{Histogram of $\widehat{\theta}_2$ along with its PDF} \label{fig:sfig2} \end{subfigure} \caption{Histograms of $\widehat{\theta}_1$ and $\widehat{\theta}_2$ along with their PDFs, taking $\theta_1=0.5$, $\theta_2=1.5$, $m=20$, $n=25$, $k=8$, $R=(7,0_{(6)})$} \label{fig:fig2} \end{figure} \enlargethispage{1 in} \begin{figure}[H] \begin{subfigure}{.5\textwidth} \includegraphics[width=.8\linewidth]{5.pdf} \caption{Histogram of $\widehat{\theta}_1$ along with its PDF} \label{fig:sfig1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \includegraphics[width=.8\linewidth]{6.pdf} \caption{Histogram of $\widehat{\theta}_2$ along with its PDF} \label{fig:sfig2} \end{subfigure} \caption{Histogram of
$\widehat{\theta}_1$ and $\widehat{\theta}_2$ along with their PDFs, taking $\theta_1=0.5$, $\theta_2=1$, $m=20$, $n=25$, $k=6$, $R=(2_{(5)})$} \label{fig:fig3} \end{figure} \begin{figure}[H] \begin{subfigure}{.5\textwidth} \includegraphics[width=.8\linewidth]{7.pdf} \caption{Histogram of $\widehat{\theta}_1$ along with its PDF} \label{fig:sfig1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \includegraphics[width=.8\linewidth]{8.pdf} \caption{Histogram of $\widehat{\theta}_2$ along with its PDF} \label{fig:sfig2} \end{subfigure} \caption{Histograms of $\widehat{\theta}_1$ and $\widehat{\theta}_2$ along with their PDFs, taking $\theta_1=0.5$, $\theta_2=1.5$, $m=20$, $n=25$, $k=6$, $R=(2_{(5)})$} \label{fig:fig4} \end{figure} Several points are quite clear from the PDFs of $\widehat{\theta}_1$ and $\widehat{\theta}_2$. The PDFs of both $\widehat{\theta}_1$ and $\widehat{\theta}_2$ are unimodal and right skewed for different parameter values and for different sample sizes. Moreover, in all the cases it is observed that the modes of the PDFs are very close to the corresponding true parameter values, as expected. \section{\sc Generation of the Data and the Expected Experimental Time} For the proposed NJPC scheme it is quite simple to generate samples for a given censoring scheme; hence, simulation experiments can be performed quite efficiently. In this section we provide an algorithm to generate a sample from a given NJPC scheme. This algorithm is based on the following lemma. \noindent {\sc Lemma 2:} If $W_1\leq \ldots \leq W_k$ are the ordered lifetimes from an NJPC, then $$ W_i\overset{d}{=}\sum_{s=1}^{i}V_s, $$ where the $V_s$'s are independent random variables such that $$ V_s \sim \hbox{Exp} \left (\frac{1}{E_s} \right ), \ \ \ E_s=\frac{(m-\sum_{j=1}^{s-1}(R_j+1))}{\theta_1}+\frac{(n-\sum_{j=1}^{s-1}(R_j+1))}{\theta_2}. $$ \noindent {\sc Proof:} See in the Appendix.
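Lemma 2 gives a direct way to sample the ordered failure times and, since $E(V_s)=1/E_s$, also yields the expected duration of the experiment as $\sum_{s=1}^{k} 1/E_s$. A Python sketch with hypothetical helper names:

```python
import random

def lemma2_rates(theta1, theta2, m, n, k, R):
    """The rates E_s of Lemma 2 for s = 1, ..., k."""
    rates, removed = [], 0
    for s in range(k):
        rates.append((m - removed) / theta1 + (n - removed) / theta2)
        if s < k - 1:
            removed += R[s] + 1
    return rates

def failure_times_njpc(theta1, theta2, m, n, k, R):
    """Sample (W_1, ..., W_k) as cumulative sums of V_s ~ Exp(mean 1/E_s)."""
    times, w = [], 0.0
    for E_s in lemma2_rates(theta1, theta2, m, n, k, R):
        w += random.expovariate(E_s)
        times.append(w)
    return times

def expected_duration(theta1, theta2, m, n, k, R):
    """E(W_k) = sum over s of 1/E_s."""
    return sum(1.0 / E_s for E_s in lemma2_rates(theta1, theta2, m, n, k, R))
```

A Monte Carlo average of the sampled $W_k$ agrees with `expected_duration` up to simulation error, which is a convenient sanity check of the representation.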
\hfill \vrule height7pt width 5pt depth 0pt Now we can use the following algorithm to generate $({\bold W}, {\bold Z})$ for a given $n,m,k, R_1, \ldots, R_{k-1}$. \noindent {\sc Algorithm:} \begin{itemize} \item Step 1: Compute $E_s$, for $s=1,\ldots, k$. \item Step 2: Generate $\displaystyle V_s \sim \hbox{Exp} \left (\frac{1}{E_s} \right )$, $s=1,\ldots, k$. \item Step 3: Compute $\displaystyle W_i=\sum_{s=1}^{i}V_s, i=1,\ldots,k.$ \item Step 4: Generate $\displaystyle Z_i\sim \hbox{Bin}(1,p_i), i = 1, \ldots, k$, where $$ p_i=\frac{(m-\sum_{j=1}^{i-1}(R_j+1))\theta_2}{(m-\sum_{j=1}^{i-1}(R_j+1))\theta_2+(n-\sum_{j=1}^{i-1}(R_j+1))\theta_1}. $$ \end{itemize} \hfill \vrule height7pt width 5pt depth 0pt \noindent Using Lemma 2, we can easily obtain the expected experimental time as $$ E(W_k) = \sum_{s=1}^k E(V_s) = \sum_{s=1}^k \frac{1}{E_s}. $$ \section{\sc Construction of Confidence Intervals} \subsection{\sc Exact Confidence Interval} Based on the assumption that $P_{\theta_1}(\widehat{\theta}_1>t)$ is a strictly increasing function of $\theta_1$ for any point $t > 0$ when $\theta_2$ is fixed, a $100(1-\alpha)\%$ exact confidence interval of $\theta_1$ can be constructed. Similarly, based on the assumption that $P_{\theta_2}(\widehat{\theta}_2>t)$ is a strictly increasing function of $\theta_2$ for any point $t$ when $\theta_1$ is fixed, a $100(1-\alpha)\%$ exact confidence interval of $\theta_2$ can be constructed as follows; see, for example, Lehmann and Romano \cite{LR:2005}. Conditioning on $1\leq m_k\leq k-1$, a $100(1-\alpha)\%$ exact confidence interval for $\theta_1$ as ($\theta_{1L}, \theta_{1U}$) can be obtained by solving the following two nonlinear equations keeping $\theta_2$ fixed. \begin{eqnarray} \begin{cases}P_{\theta_{1L}}(\widehat{\theta}_1 > \widehat{\theta}_{1\textit {obs}}|1\leq m_k\leq k-1)=\frac{\alpha}{2},\\ P_{\theta_{1U}}(\widehat{\theta}_1 > \widehat{\theta}_{1\textit {obs}}|1\leq m_k\leq k-1)=1-\frac{\alpha}{2}.
\label{neq-1} \end{cases} \end{eqnarray} Similarly, conditioning on $1\leq m_k\leq k-1$, a $100(1-\alpha)\%$ exact confidence interval for ${\theta}_2$ as ($\theta_{2L}, \theta_{2U}$) can be obtained by solving the following nonlinear equations keeping $\theta_1$ fixed. \begin{eqnarray} \begin{cases}P_{\theta_{2L}}(\widehat{\theta}_2 > \widehat{\theta}_{2\textit {obs}}|1\leq m_k\leq k-1)=\frac{\alpha}{2},\\ P_{\theta_{2U}}(\widehat{\theta}_2 > \widehat{\theta}_{2\textit {obs}}|1\leq m_k\leq k-1)=1-\frac{\alpha}{2}. \label{neq-2} \end{cases} \end{eqnarray} In practice, to compute $(\theta_{1L}, \theta_{1U})$ we replace $\theta_2$ by its MLE $\widehat{\theta}_2$; similarly, to compute $(\theta_{2L}, \theta_{2U})$ we replace $\theta_1$ by its MLE $\widehat{\theta}_1$. One can use the standard bisection method or the Newton-Raphson method to solve the two non-linear equations \eqref{neq-1} and \eqref{neq-2}. The following result provides the necessary monotonicity properties of $P_{\theta_1}(\widehat{\theta}_1>t)$ and $P_{\theta_2}(\widehat{\theta}_2>t)$. It also justifies using \eqref{neq-1} and \eqref{neq-2} to construct the exact confidence intervals of $\theta_1$ and $\theta_2$, respectively. \noindent {\sc Lemma 3:} \noindent (i) $P_{\theta_1}(\widehat{\theta}_1>t|1\leq m_k \leq k-1)$ is a strictly increasing function of $\theta_1$ for any point $t$ when $\theta_2$ is kept fixed. \noindent (ii) $P_{\theta_2}(\widehat{\theta}_2>t|1\leq m_k \leq k-1)$ is a strictly increasing function of $\theta_2$ for any point $t$ when $\theta_1$ is kept fixed. \noindent {\sc Proof:} See in the Appendix. \hfill \vrule height7pt width 5pt depth 0pt \subsection{\sc Bootstrap Confidence Interval} Since the exact confidence intervals require solving non-linear equations, we also propose parametric bootstrap confidence intervals as an alternative. The following steps can be followed to construct parametric bootstrap confidence intervals.
\noindent Step 1: Given the original data, compute $\widehat{\theta}_1$, $\widehat{\theta}_2$. \\ \noindent Step 2: Generate a bootstrap sample $\{({W_1}^{\ast},{Z_1}^{\ast}), \ldots,({W_k}^{\ast},{Z_k}^{\ast})\}$ using the algorithm provided in Section 4 for the given $m$, $n$, $k$, $(R_1,\ldots, R_{k-1})$, $\widehat{\theta}_1$, $\widehat{\theta}_2$. \\ \noindent Step 3: Compute $\widehat{\theta}_1^{\ast}$, $\widehat{\theta}_2^{\ast}$ based on the bootstrap sample. \\ \noindent Step 4: Repeat Steps 2 and 3, say, $B$ times and obtain $\{\widehat{\theta}_{11}^*, \ldots, \widehat{\theta}_{1B}^*\}$ and $\{\widehat{\theta}_{21}^*, \ldots, \widehat{\theta}_{2B}^*\}$. Sort the $\widehat{\theta}_{1j}^{\ast}$ in ascending order to get $\displaystyle (\widehat{\theta}_{1(1)}^{\ast}, \ldots, \widehat{\theta}_{1(B)}^{\ast})$. Similarly, sort the $\widehat{\theta}_{2j}^{\ast}$ in ascending order to get $\displaystyle (\widehat{\theta}_{2(1)}^{\ast}, \ldots, \widehat{\theta}_{2(B)}^{\ast})$. \\ \noindent Step 5: Construct a $100(1-\alpha)\%$ confidence interval for $\theta_1$ as $\big(\widehat{\theta}_{1([\frac{\alpha}{2}B])}^{\ast}, \widehat{\theta}_{1([(1-\frac{\alpha}{2})B])}^{\ast} \big)$ and a $100(1-\alpha)\%$ confidence interval for $\theta_2$ as $\big(\widehat{\theta}_{2([\frac{\alpha}{2}B])}^{\ast}, \widehat{\theta}_{2([(1-\frac{\alpha}{2})B])}^{\ast} \big)$. Here $[x]$ denotes the largest integer less than or equal to $x$. \section{\sc Simulation Results And Data Analysis} \subsection{\sc Simulation Results} We perform some simulation experiments to compare the performances of the estimators based on the NJPC and JPC schemes. We have taken different values of $m$, $n$, $k$, $(\theta_1, \theta_2)$ and $R_1, \ldots, R_{k-1}$. For a given set of parameters and sample sizes, we generate samples using the algorithm provided in Section 4.
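For concreteness, the generation algorithm of Section 4 can be sketched in a few lines of code. This is only an illustrative sketch (the function and variable names are ours, not part of the paper), directly transcribing Steps 1--4 of the algorithm:

```python
import random

def generate_njpc_sample(m, n, k, R, theta1, theta2, rng=random):
    """Generate one (W, Z) sample following Steps 1-4 of the algorithm.

    R = (R_1, ..., R_{k-1}) is the censoring scheme; theta1, theta2 are the
    exponential means of the two populations.
    """
    W, Z = [], []
    w, removed = 0.0, 0  # removed = sum_{j < s} (R_j + 1)
    for s in range(1, k + 1):
        # Step 1: E_s = (m - removed)/theta1 + (n - removed)/theta2
        E_s = (m - removed) / theta1 + (n - removed) / theta2
        # Step 2: V_s ~ Exp with mean 1/E_s;  Step 3: W_s = W_{s-1} + V_s
        w += rng.expovariate(E_s)
        W.append(w)
        # Step 4: Z_s ~ Bin(1, p_s)
        p_s = ((m - removed) * theta2
               / ((m - removed) * theta2 + (n - removed) * theta1))
        Z.append(1 if rng.random() < p_s else 0)
        if s < k:
            removed += R[s - 1] + 1
    return W, Z
```

By Lemma 2, the sample mean of $W_k$ over many replications should be close to $\sum_{s=1}^k 1/E_s$, which provides a quick check of such an implementation.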
In each case we compute the MLEs based on the observed sample, and report their average estimates (AE) and mean squared errors (MSEs) based on 10,000 replications. For the NJPC scheme we also construct the exact confidence intervals of $\theta_1$ and $\theta_2$, and report the average lengths (AL) and the coverage percentages (CP) based on 1000 replications. For each sample we compute the bootstrap confidence intervals based on 1000 bootstrap replications, and report the corresponding average lengths and coverage percentages based on 1000 replications. All the results are reported in Tables \ref{table-1} - \ref{table-4}. We use the following notation to denote a particular progressive censoring scheme: for example, when $m$ = 15, $n$ = 12 and $k$ = 6, the scheme $R = (4, 0_{(4)})$ means $R_1$ = 4, $R_2 = R_3 = R_4 = R_5 = 0$. \begin{table}[H] \caption{AE and MSE of the MLEs taking $\theta_1=0.5$, $\theta_2=1$, $m=15$, $n=12$}\label{table-1} \begin{center} \begin{tabular}{lllllll} \toprule \multirow{2}{*}{Censoring scheme} & \multirow{2}{*}{MLE} &\multicolumn{3}{l}{NJPC} & \multicolumn{2}{l}{JPC}\\ \cline{3-4} \cline{6-7} &&\multicolumn{1}{c}{AE} & \multicolumn{1}{c}{MSE} &\multicolumn{1}{c}{} &\multicolumn{1}{c}{AE} &\multicolumn{1}{l}{MSE} \\ \midrule k=6,R=(4,0$_{(4)}$) & $\widehat{\theta}_1$ & 0.575 & 0.099 & &0.563 & 0.113\\ & $\widehat{\theta}_2$ & 0.995 & 0.377& &1.125 & 0.607 \\ \midrule k=6,R=(0,4,0$_{(3)}$) & $\widehat{\theta}_1$ & 0.577 & 0.106 & &0.565 & 0.114 \\ & $\widehat{\theta}_2$ & 1.001 & 0.380& &1.122 & 0.599 \\ \midrule k=6,R=(0$_{(2)}$,4,0$_{(2)}$) & $\widehat{\theta}_1$ & 0.573 & 0.106 & &0.571 & 0.112 \\ & $\widehat{\theta}_2$ & 1.016 & 0.388 & &1.147 & 0.622 \\ \midrule k=6,R=(0$_{(3)}$,4,0) & $\widehat{\theta}_1$ & 0.580 & 0.108 & &0.567 & 0.112 \\ & $\widehat{\theta}_2$ & 1.034 & 0.411& &1.133 & 0.598 \\ \midrule k=6,R=(0$_{(4)}$,4) & $\widehat{\theta}_1$ & 0.571 & 0.103 & & 0.569 & 0.106 \\ & $\widehat{\theta}_2$ & 1.044 & 0.421& &1.124 & 0.585 \\ \bottomrule \end{tabular}
\end{center} \end{table} \enlargethispage{1 in} \begin{table}[H] \caption{AE and MSE of the MLEs taking $\theta_1=0.5$, $\theta_2=1$, $m=15$, $n=12$}\label{table-2} \begin{center} \begin{tabular}{lllllll} \toprule \multirow{2}{*}{Censoring scheme} & \multirow{2}{*}{MLE} &\multicolumn{3}{l}{NJPC} & \multicolumn{2}{l}{JPC}\\ \cline{3-4} \cline{6-7} &&\multicolumn{1}{c}{AE} & \multicolumn{1}{c}{MSE} &\multicolumn{1}{c}{} &\multicolumn{1}{c}{AE} & \multicolumn{1}{l}{MSE} \\ \midrule k=8,R=(3,0$_{(6)}$) & $\widehat{\theta}_1$ & 0.538 & 0.056 & & 0.537 & 0.062 \\ & $\widehat{\theta}_2$ & 1.121 & 0.504& &1.238 & 0.838 \\ \midrule k=8,R=(0$_{(2)}$,3,0$_{(4)}$) & $\widehat{\theta}_1$ & 0.541 & 0.059 & & 0.534 & 0.063 \\ & $\widehat{\theta}_2$ & 1.134 & 0.523& &1.226 & 0.805 \\ \midrule k=8,R=(0$_{(3)}$,3,0$_{(3)}$) & $\widehat{\theta}_1$ & 0.539 & 0.056 & & 0.534 & 0.061 \\ & $\widehat{\theta}_2$ & 1.138 & 0.543& &1.238 & 0.817 \\ \midrule k=8,R=(0$_{(5)}$,3,0) & $\widehat{\theta}_1$ & 0.540 & 0.059 & & 0.537 & 0.061 \\ & $\widehat{\theta}_2$ & 1.156 & 0.577& &1.231 & 0.792\\ \midrule k=8,R=(0$_{(6)}$,3) & $\widehat{\theta}_1$ & 0.543 & 0.063 & & 0.538 & 0.066 \\ & $\widehat{\theta}_2$ & 1.159 & 0.574& &1.227 & 0.834 \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[H] \caption{AL and CP of the CIs taking $\theta_1=0.5$, $\theta_2=0.6$, $m=20$, $n=25$}\label{table-3} \begin{center} \begin{tabular}{lllllll} \toprule \multicolumn{1}{c}{Censoring scheme} & \multicolumn{1}{c}{Parameter} &\multicolumn{3}{l}{Exact 90\% CI} & \multicolumn{2}{l}{Bootstrap 90\% CI}\\ \cline{3-4} \cline{6-7} &&\multicolumn{1}{c}{AL} & \multicolumn{1}{c}{CP} &\multicolumn{1}{c}{} &\multicolumn{1}{c}{AL} & \multicolumn{1}{l}{CP} \\ \midrule k=8,R=(7,0$_{(6)}$) & $\theta_1$ & 2.920 &89.80\% & &1.279 &91.80\% \\ & $\theta_2$ &2.190 &90.90\% & & 1.384 &89.00\% \\ \midrule k=8,R=(0$_{(3)}$,7,0$_{(3)}$) & $\theta_1$ & 2.912 &89.40\% & &1.288&90.70\% \\ & $\theta_2$ &2.101 &91.70\% & &1.395&90.60\% \\
\midrule k=8,R=(0$_{(5)}$,7,0)& $\theta_1$ &2.799& 88.80\% & &1.237&89.60\% \\ & $\theta_2$ & 2.214 &91.40\% &&1.479&91.10\%\\ \midrule k=8,R=(0$_{(6)}$,7)& $\theta_1$ &2.871 &89.30\% & & 1.246&89.50\% \\ & $\theta_2$ &2.399 &90.50\% &&1.409&89.20\% \\ \midrule k=8,R=(0$_{(7)}$)& $\theta_1$ &2.476 &90.40\% & &1.223&90.50\% \\ & $\theta_2$ &2.455 &91.40\% && 1.485 & 89.20\% \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[H] \caption{AL and CP of the CIs taking $\theta_1=0.5$, $\theta_2=0.6$, $m=20$, $n=25$} \label{table-4} \begin{center} \begin{tabular}{lllllll} \toprule \multicolumn{1}{c}{Censoring scheme} & \multicolumn{1}{c}{Parameter} &\multicolumn{3}{l}{Exact 90\% CI} & \multicolumn{2}{l}{Bootstrap 90\% CI}\\ \cline{3-4} \cline{6-7} &&\multicolumn{1}{c}{AL} & \multicolumn{1}{c}{CP} &\multicolumn{1}{c}{} &\multicolumn{1}{c}{AL} & \multicolumn{1}{l}{CP} \\ \midrule k=6,R=(10,0$_{(4)}$) & $\theta_1$ & 4.410 & 89.10\% &&1.213 &92.90\% \\ & $\theta_2$ &3.188 &88.90\% & &1.531 &91.40\% \\ \midrule k=6,R=(0$_{(2)}$,10,0$_{(2)}$)& $\theta_1$ &4.252 &88.50\%& & 1.241&92.30\% \\ & $\theta_2$ & 3.201 &89.40\% & &1.578 & 90.80\% \\ \midrule k=6,R=(0$_{(4)}$,10)& $\theta_1$ & 4.008 &88.40\% &&1.293 &91.70\% \\ & $\theta_2$ &3.550& 90.90\%&& 1.543 &92.60\% \\ \midrule k=6,R=(0$_{(5)}$)& $\theta_1$ &3.642 &89.70\% & &1.253 &90.90\% \\ & $\theta_2$ & 3.860 &90.10\% & & 1.511 &89.20\% \\ \bottomrule \end{tabular} \end{center} \end{table} Several points are quite clear from the above tables. For both censoring schemes the estimators are quite satisfactory. In most of the cases considered here the MSEs of both estimators are smaller under NJPC than under JPC. Regarding the confidence intervals, both the intervals obtained from the exact distribution and those obtained by the bootstrap method provide satisfactory results: in all cases the coverage percentages are very close to the nominal level.
Regarding the lengths of the confidence intervals, the bootstrap confidence intervals perform slightly better than the exact confidence intervals. Moreover, the implementation of the bootstrap method is also quite simple in this case. Now we would like to discuss some of the computational issues we have encountered during the simulation experiments, mainly in calculating the exact confidence intervals of $\theta_1$ and $\theta_2$. It is observed that for $m \ne n$, and when $k$ is large, the computation of $P(X_r > t)$ and $P(Y_r > t)$ becomes quite difficult for large values of $t$. For small values of $k$, if $\theta_1$ and $\theta_2$ are quite different, then solving the two non-linear equations (\ref{neq-1}) and (\ref{neq-2}) becomes quite difficult. In this case $\displaystyle P_{\theta_{1U}}(\widehat{\theta}_1 > \widehat{\theta}_{1\textit{obs}}|1 \le m_k \le k-1)$ and $\displaystyle P_{\theta_{2U}}(\widehat{\theta}_2 > \widehat{\theta}_{2\textit{obs}}|1 \le m_k \le k-1)$ become very flat for large values of $\theta_{1U}$ and $\theta_{2U}$, respectively. Hence the confidence intervals become very wide. On the other hand, the construction of confidence intervals based on bootstrapping does not suffer from any of these numerical issues. Considering all these points, we recommend the bootstrap method for constructing the confidence intervals in this case. \subsection{\sc Data Analysis} In this section we provide the analysis of a data set, mainly for illustrative purposes. These data were also used by Rasouli and Balakrishnan \cite{RB:2010}, and were originally taken from Proschan \cite{Proschan:1963}. The data represent the intervals between failures (in hours) of the air conditioning system of a fleet of 13 Boeing 720 jet airplanes. Proschan \cite{Proschan:1963} observed that the failure time distribution of the air conditioning system of each of the planes can be well approximated by an exponential distribution.
We have considered the planes ``7913'' and ``7914'' for our illustrative purposes. The data are presented below: \noindent {\sc Plane 7914:} 3, 5, 5, 13, 14, 15, 22, 22, 23, 30, 36, 39, 44, 46, 50, 72, 79, 88, 97, 102, 139, 188, 197, 210. \noindent {\sc Plane 7913:} 1, 4, 11, 16, 18, 18, 18, 24, 31, 39, 46, 51, 54, 63, 68, 77, 80, 82, 97, 106, 111, 141, 142, 163, 191, 206, 216. In this case $m$ = 24 and $n$ = 27. We have considered two different NJPC schemes with $k$ = 8 and different $R_i$ values. \noindent {\sc Censoring Scheme 1:} $k=8$ and $R=(0_{(7)})$. Based on the above censoring scheme we generate ${\bf W}$ and ${\bf Z}$, and they are as follows: $w=(1, 3, 4, 5, 5, 11, 13, 15)$ and $z=(0, 1, 0, 1, 1, 0, 1, 1)$. We compute the MLEs of the unknown parameters and 90\% exact and bootstrap confidence intervals in both cases. The results are reported in Table \ref{res-1}. \begin{table}[h] \caption{\sc Results related to Censoring Scheme 1.} \label{res-1} \begin{center} \begin{tabular}{lrll} \toprule \multicolumn{1}{c}{Parameter} & \multicolumn{1}{c}{MLE} & \multicolumn{1}{l}{Bootstrap 90\% CI} &\multicolumn{1}{l}{Exact 90\% CI}\\ \midrule $\theta_1$&59.4 &(27.862,132.911) & (30.027,141.049) \\ $\theta_2$&114.0 &(49.146,345.655) & (49.183,422.490)\\ \bottomrule \end{tabular} \end{center} \end{table} \noindent {\sc Censoring Scheme 2:} $k=8$ and $R=(2_{(7)})$. For Censoring Scheme 2, the generated ${\bf W}$ and ${\bf Z}$ are $w=(1, 3, 4, 5, 5, 14, 15, 16)$ and $z=(0, 1, 0, 1, 1, 1, 1, 0)$.
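The point estimates for both schemes can be reproduced directly from the generated $(w,z)$ data. The following sketch is our own illustrative code; it assumes the closed-form MLEs $\widehat{\theta}_1=U_1/m_k$ and $\widehat{\theta}_2=U_2/(k-m_k)$, where $U_1$ and $U_2$ are the weighted totals of the observed $W$'s appearing in the conditional representations used in the Appendix:

```python
def njpc_mle(w, z, m, n, k, R):
    """MLEs of (theta1, theta2) from NJPC data, assuming the closed forms
    theta1_hat = U1/m_k and theta2_hat = U2/(k - m_k)."""
    m_k = sum(z)  # number of Z_i equal to 1
    if not 1 <= m_k <= k - 1:
        raise ValueError("MLEs exist only when 1 <= m_k <= k-1")
    removed = sum(Rj + 1 for Rj in R)  # sum_{j=1}^{k-1} (R_j + 1)
    weighted = sum((R[i] + 1) * w[i] for i in range(k - 1))
    u1 = weighted + (m - removed) * w[-1]
    u2 = weighted + (n - removed) * w[-1]
    return u1 / m_k, u2 / (k - m_k)

# Censoring Scheme 1: k = 8, R = (0_(7))
t1, t2 = njpc_mle([1, 3, 4, 5, 5, 11, 13, 15], [0, 1, 0, 1, 1, 0, 1, 1],
                  m=24, n=27, k=8, R=[0] * 7)   # gives (59.4, 114.0)

# Censoring Scheme 2: k = 8, R = (2_(7))
s1, s2 = njpc_mle([1, 3, 4, 5, 5, 14, 15, 16], [0, 1, 0, 1, 1, 1, 1, 0],
                  m=24, n=27, k=8, R=[2] * 7)   # gives (37.8, 79.0)
```

Both outputs match the MLE values obtained for the two censoring schemes.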
In this case the MLEs and the associated confidence intervals are reported in Table \ref{res-2}. \begin{table}[h] \caption{Results related to Censoring Scheme 2.} \label{res-2} \begin{center} \begin{tabular}{lrll} \toprule \multicolumn{1}{c}{Parameter} & \multicolumn{1}{c}{MLE} & \multicolumn{1}{l}{Bootstrap 90\% CI} &\multicolumn{1}{l}{Exact 90\% CI}\\ \midrule $\theta_1$&37.8 &(17.239,82.119) & (19.318,93.453) \\ $\theta_2$&79.0 &(31.003,249.636) & (34.588,283.294)\\ \bottomrule \end{tabular} \end{center} \end{table} It is clear that the MLEs of the unknown parameters depend quite significantly on the censoring scheme, as expected. The lengths of the confidence intervals based on bootstrapping are smaller than those of the exact confidence intervals. \section{\sc Conclusion} In this paper we introduce a new joint progressive censoring scheme for two samples. Under the assumption that the lifetime distributions of the two populations are exponential, we obtain the MLEs of the unknown parameters and derive their exact distributions. It is observed that, analytically, the proposed model is easier to handle than the existing joint progressive censoring scheme of Rasouli and Balakrishnan \cite{RB:2010}. We perform some simulation experiments, and it is observed that in certain cases the MLEs of the unknown parameters based on the proposed model perform better than those based on the existing model. Moreover, performing the simulation experiments based on the proposed model is easier compared to the existing model. Therefore, the proposed model can be used quite conveniently for the two-sample problem in practice. In this paper we have assumed that the lifetimes of the items follow exponential distributions. In practice this may not always be the case, because the exponential distribution has a constant hazard rate. It is well known that, because of their flexibility, the Weibull distribution and the generalized exponential distribution are often more useful in practice.
Therefore, it is important to develop the proper inferential procedures for other lifetime distributions for the two-sample problem. More work is needed along these directions. \section*{\sc Appendix} \noindent{\sc Proof of Lemma 1:} Note that \begin{eqnarray*} P(m_k=r) & = & \sum_{{\bold z}\in Q_r} P(Z_1=z_1,\ldots,Z_k=z_k) \\ & = & \sum_{{\bold z}\in Q_r} P(Z_1=z_1)P(Z_2=z_2|Z_1=z_1)\cdots P(Z_k=z_k|Z_{k-1}=z_{k-1},\ldots ,Z_1=z_1). \end{eqnarray*} Now \begin{eqnarray*} P(Z_i=z_i|Z_{i-1}=z_{i-1},\dots,Z_1=z_1) & = &\frac{(m-\sum_{j=1}^{i-1}(R_j+1))z_i+(n-\sum_{j=1}^{i-1}(R_j+1))(1-z_i)}{(m-\sum_{j=1}^{i-1}(R_j+1))p+(n-\sum_{j=1}^{i-1}(R_j+1))q} p^{z_i} q^{1-z_i}, \end{eqnarray*} where $\displaystyle p=P(X<Y)=\frac{\theta_2}{\theta_1+\theta_2}$ and $q=1-p$. Since the right-hand side does not depend on $z_1,\ldots,z_{i-1}$, the $Z_i$'s are independent. Therefore \begin{eqnarray*} P(m_k=r)&=&\sum_{{\bold z}\in Q_r} \prod_{i=1}^{k}\left\{\frac{(m-\sum_{j=1}^{i-1}(R_j+1))z_i+(n-\sum_{j=1}^{i-1}(R_j+1))(1-z_i)}{(m-\sum_{j=1}^{i-1}(R_j+1))p+(n-\sum_{j=1}^{i-1}(R_j+1))q} p^{z_i}q^{1-z_i}\right\}\\ & =& \sum_{{\bold z}\in Q_r} {\prod_{i=1}^{k} \frac{(m-\sum_{j=1}^{i-1}(R_j+1))z_i+(n-\sum_{j=1}^{i-1}(R_j+1))(1-z_i)}{(m-\sum_{j=1}^{i-1}(R_j+1))p+(n-\sum_{j=1}^{i-1}(R_j+1))q} }p^{r} q^{k-r}\\ & =& \sum_{{\bold z}\in Q_r} {\prod_{i=1}^{k}\frac{(m-\sum_{j=1}^{i-1}(R_j+1))z_i+(n-\sum_{j=1}^{i-1}(R_j+1))(1-z_i)}{(m-\sum_{j=1}^{i-1}(R_j+1))\theta_2+(n-\sum_{j=1}^{i-1}(R_j+1))\theta_1}}{\theta_1}^{k-r}{\theta_2}^r.
\end{eqnarray*} \hfill \vrule height7pt width 5pt depth 0pt \noindent{\sc Proof of Theorem 1:} Conditioning on $ 1\leq m_k \leq k-1$, \begin{eqnarray*} M_{\widehat{\theta}_1,\widehat{\theta}_2} \big(t_1,t_2\big) &=& E(e^{t_1\widehat{\theta}_1 +t_2\widehat{\theta}_2}|1\leq m_k \leq k-1)\\ & = &\sum_{r=1}^{k-1}E(e^{t_1\widehat{\theta}_1 + t_2\widehat{\theta}_2}|m_k=r)P(m_k=r|1 \leq m_k \leq k-1)\\ & = & \sum_{r=1}^{k-1}\sum_{{\bold z}\in Q_r} E(e^{t_1\widehat{\theta}_1 +t_2\widehat{\theta}_2}|m_k=r,\bold Z=\bold z)P(\bold Z=\bold z|m_k=r)P(m_k=r|1\leq m_k \leq k-1)\\ & = & \frac{1}{P(1 \leq m_k \leq k-1)} \sum_{r=1}^{k-1}\sum_{{\bold z}\in Q_r} C\frac{1}{{\theta_1}^r}\frac{1}{{\theta_2}^{k-r}} \times \\ & & \int\limits_0^\infty\ \int\limits_{w_1}^{\infty}\ldots \int\limits_{w_{k-1}}^{\infty} e^{\frac{t_1 \{\sum_{i=1}^{k-1}(R_i+1)w_i+(m-\sum_{i=1}^{k-1}(R_i+1))w_k \}}{r}} \times e^{\frac{ t_2\{\sum_{i=1}^{k-1}(R_i+1)w_i+(n-\sum_{i=1}^{k-1}(R_i+1))w_k\}}{k-r}} \\ & & \times e^{-\frac{1}{\theta_1}\{\sum_{i=1}^{k-1}(R_i+1)w_i+(m-\sum_{i=1}^{k-1}(R_i+1))w_k\}}\\ & & \times e^{-\frac{1}{\theta_2}\{\sum_{i=1}^{k-1}(R_i+1)w_i+(n-\sum_{i=1}^{k-1}(R_i+1))w_k\}} \hspace{0.2mm}dw_k \ldots\hspace{0.2mm}dw_2\hspace{0.2mm} dw_1\\ & = &\frac{1}{P(1 \leq m_k \leq k-1)} \sum_{r=1}^{k-1}\sum_{{\bold z}\in Q_r} C\frac{1}{{\theta_1}^r}\frac{1}{{\theta_2}^{k-r}}\\ & & { \{ \prod_{j=1}^{k}{\frac{(m-\sum_{i=1}^{j-1}(R_i+1))}{\theta_1} +\frac{(n-\sum_{i=1}^{j-1}(R_i+1))}{\theta_2}} \} }^{-1} \times\prod_{s=1}^{k}{(1-\alpha_{sr} t_1 -\beta_{sr}t_2)}^{-1} \\ & = &\frac{1}{P(1 \leq m_k \leq k-1)}\sum_{r=1}^{k-1}P(m_k=r) \prod_{s=1}^{k}{(1-\alpha_{sr} t_1 -\beta_{sr}t_2)}^{-1}.
\end{eqnarray*} \hfill \vrule height7pt width 5pt depth 0pt \noindent{\sc Proof of Lemma 2:} \begin{eqnarray*} E(e^{tW_j})& =& \sum_{r=0}^{k}\sum_{{\bold z}\in Q_r}E(e^{tW_j}|m_k=r,\bold Z=\bold z)P(\bold Z=\bold z|m_k=r)P(m_k=r)\\ & = & C \sum_{r=0}^{k}\sum_{{\bold z}\in Q_r}\int\limits_0^\infty\ \int\limits_{w_1}^{\infty}\ldots \int\limits_{w_{k-1}}^{\infty}\frac{1}{{\theta_1}^r}\frac{1}{{\theta_2}^{k-r}} e^{tw_j} \times e^{-\frac{1}{\theta_1} \{ \sum_{i=1}^{k-1}(R_i+1)w_i+(m-\sum_{i=1}^{k-1}(R_i+1))w_k \} } \\ & & \times e^{-\frac{1}{\theta_2} \{\sum_{i=1}^{k-1}(R_i+1)w_i+(n-\sum_{i=1}^{k-1}(R_i+1))w_k \} } \hspace{0.2mm}dw_k \ldots\hspace{0.2mm}dw_2\hspace{0.2mm} dw_1\\ & = & C \sum_{r=0}^{k}\sum_{{\bold z}\in Q_r}\frac{1}{{\theta_1}^r}\frac{1}{{\theta_2}^{k-r}}\\ & & \times { \{ a_k(a_k+a_{k-1})\cdots(a_k+a_{k-1}+\cdots +a_{j+1}) (a_k+a_{k-1}+\cdots +a_{j+1}+a'_j)(a_k+a_{k-1}+\cdots +a'_j+a_{j-1})\}}^{-1}\cdots \\ & & \times {(a_k+a_{k-1}+\cdots +a_{j+1}+a'_j+a_{j-1}+\cdots +a_1) }^{-1} \\ & =& C \sum_{r=0}^{k}\sum_{{\bold z}\in Q_r}\frac{1}{{\theta_1}^r}\frac{1}{{\theta_2}^{k-r}}\\ & & \times { \{ a_k(a_k+a_{k-1})\cdots(a_k+a_{k-1}+\cdots + a_j)\cdots(a_k+a_{k-1}+\cdots +a_j+a_{j-1}+\cdots + a_1) \} }^{-1} \\ & & \times \frac{(a_k+a_{k-1}+\cdots +a_j)\cdots (a_k+a_{k-1}+\cdots +a_j+a_{j-1}+\cdots +a_1)}{(a_k+a_{k-1}+\cdots +a'_j)\cdots (a_k+a_{k-1}+\cdots +a'_j+a_{j-1}+\cdots +a_1)}\\ & =& \sum_{r=0}^{k}P(m_k=r) \left \{ \frac{(a_k+a_{k-1}+\cdots +a_j)\cdots (a_k+a_{k-1}+\cdots +a_j+a_{j-1}+\cdots +a_1)}{(a_k+a_{k-1}+\cdots +a'_j)\cdots (a_k+a_{k-1}+\cdots +a'_j+a_{j-1}+\cdots +a_1)} \right \} \\ & = & \left \{ \frac{(a_k+a_{k-1}+\cdots +a_j)\cdots (a_k+a_{k-1}+\cdots +a_j+a_{j-1}+\cdots +a_1)}{(a_k+a_{k-1}+\cdots +a'_j)\cdots (a_k+a_{k-1}+\cdots +a'_j+a_{j-1}+\cdots +a_1)} \right \} \sum_{r=0}^{k}P(m_k=r) \\ & =& \prod_{s=1}^j \left (1-\frac{t}{E_s} \right )^{-1}.
\end{eqnarray*} Here \begin{eqnarray*} a_j & = & \frac{(R_j+1)}{\theta_1} + \frac{(R_j+1)}{\theta_2}, \ \ \ j=1,\ldots,k-1; \\ a_k & = & \frac{(m-\sum_{j=1}^{k-1}(R_j+1))}{\theta_1} + \frac{(n-\sum_{j=1}^{k-1}(R_j+1))}{\theta_2}; \ \ \ a'_j=a_j-t; \\ E_s & = & \frac{(m-\sum_{j=1}^{s-1}(R_j+1))}{\theta_1}+\frac{(n-\sum_{j=1}^{s-1}(R_j+1))}{\theta_2}. \end{eqnarray*} \hfill \vrule height7pt width 5pt depth 0pt \noindent{\sc Proof of Lemma 3:} To prove Lemma 3, we mainly use the ``Three Monotonicity Lemmas'' of Balakrishnan and Iliopoulos \cite{BI:2009}. We briefly state the ``Three Monotonicity Lemmas'' for convenience, and we will show that both $\widehat{\theta}_1$ and $\widehat{\theta}_2$ satisfy them. Suppose $\widehat{\theta}$ is an estimator of $\theta$, and the survival function of $\widehat{\theta}$ can be written in the following form: $$ P_{\theta}(\widehat{\theta}>x)=\sum_{d\in {\cal D}}P_{\theta}(\widehat{\theta}>x|D=d)P_{\theta}(D=d), $$ where ${\cal D}$ is a finite set. \noindent {\sc Lemma} (Three Monotonicity Lemmas): Assume that the following hold true: \\ \noindent (M1) $P_{\theta}(\widehat{\theta}>x|D=d)$ is increasing in $\theta$ for all $x$ and $d \in {\cal D}$; \\ \noindent (M2) For all $x$ and $\theta>0$, $P_{\theta}(\widehat{\theta}>x|D=d)$ is decreasing in $d \in {\cal D}$; \\ \noindent (M3) $D$ is stochastically decreasing in $\theta$. \\ \noindent Then $P_{\theta}(\widehat{\theta}>x)$ is increasing in $\theta$ for any fixed $x$. \noindent Now to prove (i), first observe that $$ P_{\theta_1}(\widehat{\theta}_1>t|1\leq m_k\leq k-1)=\sum_{r=1}^{k-1}P_{\theta_1}(\widehat{\theta}_1 >t|m_k=r)P_{\theta_1}(m_k=r|1\leq m_k \leq k-1).
$$ Hence, (i) can be proved if we can show that \\ \noindent (M1) $P_{\theta_1}(\widehat{\theta}_1>t|m_k=r)$ is increasing in $\theta_1$, $\forall t$ and $r\in \{1,\ldots, k-1\}$; \\ \noindent (M2) $P_{\theta_1}(\widehat{\theta}_1>t|m_k=r)$ is decreasing in $r$, $\forall t$ and $\theta_1>0$; \\ \noindent (M3) the conditional distribution of $m_k$ is stochastically decreasing in $\theta_1$. \noindent From the conditional moment generating function $\displaystyle E(e^{t\widehat{\theta}_1}|m_k=r)$ it easily follows that, conditionally on $m_k=r$, $\widehat{\theta}_1\overset{d}{=}\sum_{s=1}^{k}X_{sr}$, where $X_{sr} \sim \hbox{Exp}(\alpha_{sr})$ and the $X_{sr}$'s are independently distributed. Here the $\alpha_{sr}$'s are the same as defined in Theorem 1. Since $\alpha_{sr}$ is increasing with $\theta_1$, the distribution of $X_{sr}$ is stochastically increasing with $\theta_1$. Since the $X_{sr}$'s are independent, (M1) is satisfied. Now to prove (M2), observe that \begin{eqnarray*} \widehat{\theta}_1|\{m_k=r\} & \overset{d}{=} & \frac{\sum_{i=1}^{k-1}{(R_i+1)W_i} +(m-\sum_{i=1}^{k-1} (R_i+1))W_k}{r} \\ \widehat{\theta}_1|\{m_k=r+1\} & \overset{d}{=} & \frac{\sum_{i=1}^{k-1}{(R_i+1)W_i} +(m-\sum_{i=1}^{k-1} (R_i+1))W_k}{r+1}. \end{eqnarray*} Hence for all $t$ and for $\theta_1 > 0$, $\displaystyle P_{\theta_1}(\widehat{\theta}_1>t|m_k=r)>P_{\theta_1}(\widehat{\theta}_1>t|m_k=r+1)$. This proves (M2). To prove (M3) it is enough to show that $m_k$ has the monotone likelihood ratio property with respect to $\theta_1$. For $\theta_1<{\theta_1}'$, $$ \frac{P_{\theta_1}(m_k=r|1\leq m_k \leq k-1)}{P_{{\theta_1}'}(m_k=r|1\leq m_k \leq k-1)} \propto \frac{P_{\theta_1}(m_k=r)}{P_{{\theta_1}'}(m_k=r)} \propto {\Big(\frac{\theta_1}{{\theta_1}'}\Big)}^{k-r}, $$ which is increasing in $r$. This proves (M3), and hence (i) follows. The proof of (ii) is analogous. \hfill \vrule height7pt width 5pt depth 0pt
\section{Introduction and statement of results} \subsection{Background and motivation - a bird's eye view}\label{ss:bird} The purpose of this paper is twofold: We initiate a systematic study of random quasiconformal homeomorphisms, and we develop a framework for homogenization of iterated singular integrals. Our main results regarding the former topic will be obtained as consequences of our results regarding the latter, which are of independent interest. Since the precise statements of our results require some preparation, in this section we give a brief and informal description of our work. Recall that quasiconformal maps are homeomorphic $W^{1,2}_{loc}$-solutions of the Beltrami equation \begin{equation}\label{eq:beltrami} \partial_{\overline z} F = \mu \partial_z F, \end{equation} and that for any measurable function $\mu \colon \C\to \C$ with $\|\mu\|_\infty<1$ there is an essentially unique quasiconformal solution. Recent developments have shown an emerging need for a theory of random quasiconformal maps. For example, simple closed planar curves can be described via their welding homeomorphism, and random loops such as those associated with the Schramm--Loewner evolution (SLE) lead to random circle homeomorphisms. Beginning with the work of Sheffield, these welding homeomorphisms can be described in terms of Liouville Quantum Gravity. It is still an open problem to solve analytically the ``welding problem'' of reconstructing the loops from these homeomorphisms. The standard approach to solving welding problems is via the Beltrami equation \eqref{eq:beltrami}, leading to random Beltrami coefficients $\mu$ in the case of random weldings. Progress towards solving this problem has been made in \cite{AJKS}. There are also other cases in random geometry where quasiconformal mappings arise naturally. For instance, certain scaling limits of domino tilings \cite{CKP}, and more generally of dimer models \cite{KOS}, exhibit different limiting phases.
Quasiconformal mappings appear particularly useful in describing their geometry \cite{ADPZ}. Moreover, there is a connection to homogenization of random conductance models, which in turn can be thought of as a special case of Brownian motion in a random environment. Here we refer to the review \cite{B}. In another direction, in materials science it is important to understand random material structures, modelled by elliptic PDEs, and to look for global or homogenised properties of the material. From the vast literature on homogenization of random PDEs we mention as examples \cite{PV1}, \cite{GO}, \cite{AKM}, where the last-mentioned monograph contains an extensive bibliography. \bigskip In the present paper we will approach the Beltrami equation \eqref{eq:beltrami}, with a random coefficient $\mu$, via the method of singular integral operators. We will mostly work with solutions normalized by \begin{equation}\label{3point} F(w)=w\quad\text{ for }\quad w\in\{ 0,1,\infty\}. \end{equation} However, in the special deterministic case where $\mu$ happens to be compactly supported, it is often more convenient to work with the unique homeomorphic solution to \eqref{eq:beltrami} that has the \emph{hydrodynamic normalization} \begin{equation}\label{hydrodynamic} F(z)-z=o(1) \quad \textrm{as}\quad z\to\infty. \end{equation} This so-called \emph{principal solution} to \eqref{eq:beltrami} can be obtained from the Neumann series\footnote{Operators and multipliers in this paper are always applied from right to left unless otherwise specified, thus for instance $\mu T \mu T \mu = \mu T( \mu T\mu)$.} $$\deeb F = \mu + \mu T\mu + \mu T\mu T \mu + \ldots $$ with $T\,$ a specific singular integral operator, the Beurling transform, see \eqref{eq:beurl} below. Therefore we are naturally led to the study of homogenisation phenomena for iterated singular integral operators. Here it is useful to consider the problem from a broader point of view.
Our main result on homogenised iterated singular integrals shows that this can be carried out in surprising generality, allowing for flexibility and a wide range of potential applications: \begin{theorem}\label{th:main22} For each $1\leq k\leq m-1$ let $T_k$ be a translation and dilation invariant singular integral. Further, let $\mu^{(1)} = \mu^{(1)}_\delta,\ldots,\mu^{(m)} = \mu^{(m)}_\delta$ be stochastic multiscale functions. Then for any $p\in(1,\infty)$ the iterated singular integral $$ h_\delta\coloneqq \mu^{(m)}_\delta T_{m-1}\mu^{(m-1)}_\delta\ldots \mu^{(2)}_\delta T_1\mu^{(1)}_\delta $$ converges weakly in $L^p(\R^d)$ to a deterministic limit function as $\delta\to 0$ (convergence in probability). For the subsequence $h_{2^{-k}}$ the weak convergence takes place almost surely. \end{theorem} The stochastic multiscale functions above are a large class of random functions with $\delta$-periodic statistical structure. Their precise definition is given in Section \ref{random-sec}, and Section \ref{sec:proof} is devoted to the proof of Theorem \ref{th:main22}. In general, the multiscale functions need not be bounded or compactly supported. An example of such a function is provided by \eqref{eq:expl}. In the next subsection we present a variety of natural and specific random Beltrami equations $\partial_{\overline z} F_\delta = \mu_\delta \partial_z F_\delta$, where the coefficients $\mu_\delta$ are stochastic multiscale functions with $\| \mu \|_{\infty}$ bounded by some $k<1$. To complete the picture we then need methods more specifically related to quasiconformal mappings to show that the corresponding random solutions $F_\delta$ have almost surely a unique deterministic normalised quasiconformal limit $F_\infty$; see, e.g., Theorem \ref{th:main} below.
Finally, we mention that Theorem \ref{th:main22} also applies to many basic homogenization problems of random partial differential operators; see Example \ref{example:general} in Section \ref{ss:sing_int_homogenization}. \subsection{Quasiconformal homogenization}\label{ss:qc_homogenization} In this subsection we state our main results on quasiconformal homogenization and illustrate them by means of several model examples of coefficients $\mu_\delta$. We consider both \emph{deterministic} and \emph{random} quasiconformal maps, though our main emphasis is on the latter case. It will be convenient to adopt the following rescaling notation: \begin{definition}[Rescaling notation]\label{Rescale} If $\delta>0$, $n \in \Z^d$, and $g \colon \R^d \to \C$ is a function, we define the rescaled function $g_{[n,\delta]} \colon \R^d \to \C$ by the formula $$ g_{[n,\delta]}(x) \coloneqq g \left( \frac{x}{\delta} - n \right).$$ For instance, we will apply this convention to the weight $$ \langle x \rangle \coloneqq (1 + |x|^2)^{1/2}$$ so that $$ \langle x \rangle_{[n,\delta]} = \left( 1 + \left| \frac{x}{\delta}-n \right|^2 \right)^{1/2}$$ for any $x \in \R^d$, $\delta>0$, and $n \in \Z^d$. More generally, if $g \colon \R^d \times \Omega \to \C$ is a function of a spatial variable $x \in \R^d$ and a supplementary variable $\omega \in \Omega$, we define $g_{[n,\delta]} \colon \R^d \times \Omega \to \C$ by the formula $$ g_{[n,\delta]}(x,\omega) \coloneqq g \left( \frac{x}{\delta} - n, \omega \right).$$ We extend this convention to the complex plane $\C$ by identifying $\C$ with $\R^2$ (and $\Z^2$ with the Gaussian integers $\Z[i]$). \end{definition} We will typically apply this convention with functions $g$ that are concentrated near the unit ball $B(0,1)$, in which case the rescaled function $g_{[n,\delta]}$ will be concentrated near the ball $B(n\delta,\delta)$. Conversely, the weight $\langle \cdot \rangle_{[n,\delta]}$ is small in $B(n\delta,\delta)$ and large elsewhere.
\medskip \noindent {\bf Model 1:} The deterministic function \begin{equation}\label{mu-ex1} \mu_\delta(z) \coloneqq \varphi(z) \sum_{n\in\Z^2}a_{[n,\delta]}(z), \end{equation} where $\varphi \in C^\infty_0(\C)$ is a test function and $a\colon \C \to \C$ is a smooth non-constant function supported on $[0,1]^2$, and the rescaling $a_{[n,\delta]}$ is defined by Definition \ref{Rescale}. One assumes that $\|\varphi\|_\infty \| a\|_\infty<1.$ \medskip \noindent {\bf Model 2:} Here $\mu_\delta$ is a random function given by either \begin{equation}\label{mu-ex2} \begin{split} \mu_\delta(z) &\coloneqq a 1_{Q_0}(z) \sum_{n \in \Z^2} \varepsilon_n (1_{Q_0})_{[n,\delta]}(z) \\ &= a 1_{Q_0}(z) \sum_{n \in \Z^ 2} \varepsilon_n 1_{n\delta + [0,\delta]^2}(z), \end{split} \end{equation} or \begin{equation}\label{mu-ex3,5} \mu_\delta(z) \coloneqq a \sum_{n \in \Z^2} \varepsilon_n 1_{n\delta + [0,\delta]^2}(z), \end{equation} where $a \in \C$ satisfies $|a|<1$, and $Q_0 \coloneqq [0,1]^2$ is the unit square with corners $0, 1, i, 1+i$, the $\varepsilon_n \in \{-1,+1\}$ are i.i.d. random signs, and $n\delta + [0,\delta]^2$ is the square of sidelength $\delta$ and bottom left corner equal to $n\delta$, $n \in \Z^2$. We could as well allow the $\varepsilon_n$ to be arbitrary i.i.d. random variables with $|\varepsilon_n|\leq1$. \medskip \noindent {\bf Model 3:} A more general model is obtained by allowing the independent `bumps' to have non-compact support and adding an envelope factor that varies the size of $\mu$ locally, and is independent of the scaling $\delta$. 
Thus, let $g$ be a rapidly decreasing function and define the random `bump field' \begin{equation}\label{eq:bf} B_\delta = \sum_{n \in \Z^2} \varepsilon_n g_{[n,\delta]}, \end{equation} where the $\varepsilon_n$ are any i.i.d.\ random variables, the $g_{[n,\delta]}$ are defined by Definition \ref{Rescale} and we assume the pointwise bound $|B_\delta|\leq 1.$ Then set \begin{equation}\label{mu-ex3} \mu_\delta \coloneqq \phi 1_U B_\delta, \end{equation} where the `envelope function' $\phi$ satisfies the pointwise bound $|\phi|\leq k$ for some $k<1$ and is H\"older continuous with some exponent $\alpha>0$, and $U\subset\C$ is a domain with piecewise H\"older boundary (e.g., $U$ could as well be the whole plane). If we specialize to the case $\phi\equiv a$, where $a$ is a complex constant with $|a|<1$, $\mu_\delta$ becomes a constant multiple of the random bump field \eqref{eq:bf}: \begin{equation}\label{mu-ex2,5} \mu_\delta(z) \coloneqq a B_\delta(z). \end{equation} \medskip In each of the above model cases, let $F_\delta$ be the unique solution to the (random or deterministic) Beltrami equation \begin{equation}\label{eq:randombeltrami} \deeb F_\delta =\mu_\delta \, {\partial_z} F_\delta \end{equation} with $3$-point normalization \eqref{3point}. The basic question of quasiconformal homogenization then asks if the sequence $F_{2^{-k}}$ converges as $k\to\infty$. We answer this question by showing that there is almost sure convergence to a deterministic limit homeomorphism. We will prove a general result that covers all the above models as special cases, and is of a substantially more general nature. In order to state the result we need to define the admissible envelope functions and random bump fields.
\begin{definition}[Random bump fields]\label{de:rbf} We define \emph{random bump data} to be a pair $(g,X)$, where $X$ is a random variable taking values in\footnote{We place our random parameter in the space $\R$ for the sake of concreteness, but this space could be replaced by a more general measurable space, e.g., $\R^d$ for any $d$, if one wished.} $\R$, and $g\colon \C \times \R \to \C$ is a measurable function with rapid decrease in the first variable, \begin{equation}\label{mu-decay} |g(z,y)|\leq C_M \langle z \rangle^{-M} \quad \textrm{for all}\;\; M\geq 1\quad \textrm{and}\;\; z\in\C, y\in\R, \end{equation} which obeys the pointwise bound $$ \left|\sum_{n\in\Z^2} g(z-n,y_n)\right|\leq 1 $$ for all $z\in\C$ and all real sequences $(y_n)_{n\in\Z^2}$. We define a \emph{random bump field} with data $(g,X)$ and scaling parameter $\delta>0$ to be a random field of the form \begin{equation}\label{eq:rbf1} B_\delta(z)\coloneqq \sum_{n\in\Z^2}g_{[n,\delta]}(z,X_n) \end{equation} where the rescaling $g_{[n,\delta]}$ is defined by Definition \ref{Rescale}, and $X_n$, $n \in \Z^2$, are independent copies of the random variable $X$. \end{definition} In turn, the admissible envelope functions are as follows: \begin{definition}[Beltrami envelope functions]\label{de:ref} A measurable function $\phi\colon \C\to\C$ is a \emph{Beltrami envelope function} if there is $k\in (0,1)$ such that $|\phi(z)|\leq k$ for almost every $z\in \C$ and $\phi$ is locally H\"older-continuous in $L^1$-norm: there is $\alpha>0$ such that for any $R>0$ there is $C_R<\infty$ with $$ \| \Delta_h (1_{B(0,R)}\phi)\|_{L^1(\C)} \leq C_R|h|^\alpha,\quad\textrm{for}\;\; |h|\leq 1, $$ where the difference operator $\Delta_h$ is defined by \begin{equation}\label{difference-def} \Delta_h f(x) \coloneqq f(x+h) - f(x). \end{equation} \end{definition} \begin{example}\label{ex:envelope} Assume that $\phi\colon \C\to\C$ is $\alpha$-H\"older continuous and satisfies $|\phi|\leq k<1$.
Assume also that $U\subset \complex$ is a domain with locally H\"older-regular boundary. Then it is easy to verify that $1_U \phi$ is a Beltrami envelope function. This also holds true if (locally) the Minkowski dimension of $\partial U$ is strictly less than $2$. \end{example} In each of the Models 1--3 above the random dilatation can be written in the form $\mu_\delta=\phi \, B_\delta,$ where $\phi$ is a Beltrami envelope function and $B_\delta$ a random bump field. Hence our result on quasiconformal homogenization, to be stated next, covers all these cases. \begin{theorem}\label{th:main} Let $(g,X)$ be random bump data, and let $\phi$ be a Beltrami envelope function. For $\delta >0$ let $$ \mu_\delta(z) = \phi(z) B_\delta(z), $$ where $B_\delta$ is the random bump field \eqref{eq:rbf1} determined by $g,X$. Denote by $F_j$, $j\geq 1$, the $3$-point normalized solution to the random Beltrami equation \begin{equation}\label{eq:Fj} \deeb F_j =\mu_{2^{-j}} {\partial_z} F_j\, . \end{equation} \begin{itemize} \item[(i)] There is a unique deterministic limit function $F_\infty$ such that $F_\infty\colon \C\to \C$ is a quasiconformal homeomorphism and as $j\to\infty$, almost surely $$ F_{j}\to F_\infty\quad \textrm{locally uniformly.} $$ \item[(ii)] Assume that the envelope function $\phi$ is continuous at $z_0$. Then the dilatation $\mu_{F_\infty}$ of the limit function $F_\infty$ is continuous at $z_0$, and $\mu_{F_\infty}(z_0)$ depends only on the random bump data $(g,X)$ and on the value $\phi(z_0)$. More precisely, one has $$ \mu_{F_\infty}(z_0)= h_{(g,X)}( \phi(z_0)), $$ where the function $h_{(g,X)}\colon \{ |z|<1\}\to \{ |z|<1\}$ is continuous. \item[(iii)] If the random variables $\varepsilon_n$ are symmetric, the limit $F_\infty$ in both cases of Model 2, \eqref{mu-ex2} and \eqref{mu-ex3,5}, is given by the identity map, $F_\infty(z)=z$ for all $z$. This is not necessarily the case in the more general setting of \eqref{mu-ex2,5}.
\end{itemize} \end{theorem} \medskip \noindent The proof of this theorem is contained in Section \ref{se:qchomogenization}, which also contains other related results and remarks. In particular, the above theorem applies as well to the deterministic homogenization problem. We also stress that the coefficient $\mu$ need not be compactly supported, in spite of the fact that the proof is based on the Neumann series. \begin{figure}[t] \centering \includegraphics[height=125mm]{randgummies2.pdf} \caption{A random qc-map obtained by Model 2 with $a=1/2.$ We thank David White for help in producing the picture.} \label{whitfig} \end{figure} \begin{Remark}\label{re:explain} One should note that in the above result there is no need for the stochastic bump fields corresponding to different $\delta$'s to be independent. Indeed, their stochastic relation can be arbitrary. This can be understood by writing the dilatation of $F_j$ in the form $$ \mu_{F_j}(z)=\phi(z)\left(\sum_{n\in \Z^2}g_{[n,2^{-j}]}(z,X_{n,j})\right), $$ where $X_{n,j}\sim X,$ for each $n,j$, and only for each fixed $j$ the random variables $X_{n,j}$, $n\in\Z^2$, are assumed to be independent. Thus there can be arbitrary stochastic relations between the different layers $(X_{n,j})_{n\in\Z^2}$ and $(X_{n,j'})_{n\in\Z^2}$ for $j\not=j'.$ In particular, this possible dependence structure between different scales does not affect the deterministic limit function $F_\infty$, which depends only on the triplet $(\phi,g,X).$ The main reason for this is that the failure probability in our main estimate (Theorem \ref{mainthm}) decays polynomially in $\delta$ (and hence exponentially in $j$ if $\delta = 2^{-j}$). Let us also point out that for the sake of simplicity we leave out many considerations that would be possible via the techniques of the present paper.
For example, one may relax the speed of convergence to zero in the subsequence $B_{2^{-j}}$, and it is possible to consider quasiconformal maps between arbitrary domains. \end{Remark} We also present a homogenization result for mappings of finite distortion, i.e. we consider random dilatations $\mu_\delta$ with $\|\mu_\delta\|_{L^\infty(\R^2)}=1.$ An example of this kind of dilatation is given by \begin{description} \item[Model 4] A random function as in the model example \eqref{mu-ex2} \begin{equation}\label{mu-ex4} \begin{split} \mu_\delta(z) &\coloneqq 1_{Q_0}(z) \sum_{n \in \Z^2} \varepsilon_n (1_{[0,1]^2})_{[n,\delta]}(z) \\ &= 1_{Q_0}(z) \sum_{n \in \Z^2} \varepsilon_n 1_{n\delta + [0,\delta]^2}(z), \end{split} \end{equation} \end{description} but now with random i.i.d. variables $\eps_n$ such that $|\eps_n| < 1$ and $(1-|\eps_n|)^{-1}$ has sufficiently fast exponential decay. Theorems \ref{degeco:2.1}, \ref{th:last_deterministic} and \ref{th:last_random} in Section \ref{se:qchomogenization} generalize Theorem \ref{th:main} to the degenerate case \eqref{mu-ex4} and beyond. \medskip It is tempting to try to prove almost sure convergence of $F_\delta$ in the above examples solely using weak convergence of $\mu_\delta$. However, it is important to note that this is impossible, as the following example illustrates. Some deeper properties of $\mu_\delta$ and their interaction with singular integrals are involved here. \begin{example}\label{ex1} Let $a\in (-1,1)$ and define the Beltrami coefficient $\nu(z)$ that is $2$-periodic in the $x$-variable and constant in the $y$-variable by setting $$ \nu (z)\coloneqq \begin{cases} a& \textrm{if}\; x\in [2n,2n+1), \quad n\in\Z,\\ -a& \textrm{if}\; x\in [2n+1,2n+2), \quad n\in\Z . 
\end{cases} $$ Write $b=\frac{1-a}{1+a}$ and observe that the function $$ g(x+iy)\coloneqq \begin{cases} (x-2n)+n(1+b^2)+iby& \textrm{if}\; x\in [2n,2n+1), \quad n\in\Z,\\ b^2(x-(2n+1))+n(1+b^2)+1+iby& \textrm{if}\; x\in [2n+1,2n+2), \quad n\in\Z \end{cases} $$ solves $g_{\overline{z}}=\nu g_z.$ Now consider the homogenized dilatation $\mu_j(z)\coloneqq \nu(2jz)$ for any integer $j\geq 1$ and let $F_j$ satisfy $$\deeb F_j=\mu_j\, {\partial_z} F_j$$ with the three-point normalization, so that $$ F_j(z)= \frac{g(2jz)}{j(1+b^2)}. $$ As $j\to\infty,$ it is clear that $\mu_j$ converges locally weakly to zero. However, by the above formulas we see that $F_j\to F_\infty$ uniformly, where $$ F_\infty(x+iy)=x+\frac{1-a^2}{1+a^2}iy, $$ and $\mu_{F_\infty}\equiv a^2$ identically. By considering the sequence $\widetilde \mu_j$, where $\widetilde \mu_{2j}=\mu_j$ and $\widetilde \mu_{2j+1}=0$, we obtain a locally weakly null sequence of dilatations for which the homogenization limit does not exist. Finally, we observe that it is easy to localize this observation and obtain the same phenomenon for compactly supported dilatations (see Lemma \ref{le:locality} below). \end{example} \begin{Remark}\label{rem:oleg} In an interesting recent work \cite{IM}, Ivrii and Markovic provide a more elementary geometric proof of some special cases of our results on quasiconformal homogenization, also allowing for non-uniform ellipticity, and give an application to random Delaunay triangulations. \end{Remark} \subsection{Further remarks on Theorem \ref{th:main22}}\label{ss:sing_int_homogenization} As explained in Section \ref{ss:bird}, the application of Theorem \ref{th:main22} to quasiconformal homogenization and solutions to the Beltrami equation \eqref{eq:beltrami} comes via the {\it Beurling transform} \begin{equation}\label{eq:beurl} Tg(z) \coloneqq -\frac{1}{\pi} \operatorname{p.v.} \int_\C \frac{g(w)}{(z-w)^2}\ dw.
\end{equation} Namely, since $T \circ \partial_{\overline z} = \partial_z$ on $W^{1,2}(\C)$, finding a solution to $\partial_{\overline z} f_\delta = \mu_\delta \partial_z f_\delta$, with the hydrodynamic normalization $f_\delta(z) - z = o(1)$ as $z \to \infty$, is equivalent to solving the integral equation $ (1 - \mu_\delta T) \deeb f_\delta = \mu_\delta$. One then finds the solution via the $L^2$-Neumann series representation $$ \deeb f_\delta = \mu_\delta + \mu_\delta T\mu_\delta + \mu_\delta T\mu_\delta T \mu_\delta + \ldots, $$ and the theorem allows us to deduce the weak convergence of each single summand in the above formula. The Beurling transform extends to all of $L^2(\C)$ as an isometric isomorphism. Moreover, it commutes with dilatations and translations. The class of singular integrals in $\R^d$ allowed in Theorem \ref{th:main22} shares these two basic symmetries: \begin{definition}[Singular integral operator]\label{sio-def} A dilation and translation invariant \emph{singular integral operator} is any bounded linear operator $T\colon L^2(\R^d) \to L^2(\R^d)$ of the form $$ Tf(x) = p.v. \int_{\R^d} \frac{\Omega( (x-y)/|x-y| )}{|x-y|^d} f(y)\ dy, \qquad \mbox{for } \; f \in C^\infty_0(\R^d),$$ where $\Omega\colon S^{d-1} \to \C$ is smooth and has mean zero. \end{definition} The definition of the general class of random multipliers considered in Theorem \ref{th:main22}, \emph{the stochastic multifunctions}, is slightly opaque as it employs the notion of \emph{stochastic tensor products}. Both these notions will be explained in Section \ref{random-sec} below. However, to give a perhaps more intuitive idea of these notions, we describe here in detail a special class of multifunctions which fits well with the notions of bump fields and Beltrami envelope functions discussed in the previous Section \ref{ss:qc_homogenization}.
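As a brief numerical aside (an illustration only, not used in the sequel; NumPy assumed, grid size and test functions are arbitrary choices): on a periodic grid the Beurling transform acts as the Fourier multiplier $m(\zeta)=\overline{\zeta}/\zeta$ with $\zeta=\xi_1+i\xi_2$, which makes the identity $T\circ\partial_{\overline z}=\partial_z$ underlying the Neumann series easy to check on trigonometric polynomials.

```python
import numpy as np

N = 64
xi = np.fft.fftfreq(N, d=1.0 / N)             # integer frequencies on the torus
X1, X2 = np.meshgrid(xi, xi, indexing="ij")
zeta = X1 + 1j * X2                           # zeta = xi_1 + i xi_2
m = np.zeros_like(zeta)
nz = zeta != 0
m[nz] = np.conj(zeta[nz]) / zeta[nz]          # Beurling multiplier; |m| = 1 off the zero mode

def beurling(g):
    return np.fft.ifft2(m * np.fft.fft2(g))

def d_zbar(f):  # (d/dx + i d/dy)/2 has Fourier symbol  pi * i * zeta
    return np.fft.ifft2(np.pi * 1j * zeta * np.fft.fft2(f))

def d_z(f):     # (d/dx - i d/dy)/2 has Fourier symbol  pi * i * conj(zeta)
    return np.fft.ifft2(np.pi * 1j * np.conj(zeta) * np.fft.fft2(f))

# a smooth periodic test function with a few Fourier modes
x = np.arange(N) / N
Y1, Y2 = np.meshgrid(x, x, indexing="ij")
f = np.exp(2j * np.pi * (3 * Y1 + 2 * Y2)) + 0.5 * np.exp(2j * np.pi * (Y1 - 4 * Y2))

print(np.allclose(beurling(d_zbar(f)), d_z(f)))   # T dbar f = d f
```

Off the zero mode the multiplier has unit modulus, mirroring the fact that $T$ is an isometric isomorphism of $L^2(\C)$; the zero mode is annihilated by both Wirtinger derivatives, so the convention $m(0)=0$ is harmless.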
Thus, working in arbitrary dimension $d\geq 1$, fix $m\geq 1$ and consider for each index $1\leq \ell \leq m$ the random function \begin{equation}\label{eq:random multiplier} \mu^{(\ell)}_\delta(x)= \phi_\ell(x) \sum_{n \in \Z^d} (g_\ell)_{[n,\delta]}(x,X_n), \end{equation} where we assume for each fixed $\ell$ that: \begin{itemize} \item\label{envelope} The `\emph{envelope function}' $\phi_\ell$ does not depend on $\delta$. Moreover, $\phi_\ell\in L^p(\R^d)$ for every $p\in(1,\infty)$ with the H\"older bound $$ \| \Delta_h \phi_\ell\|_{L^p(\R^d)} \leq C(p)|h|^{\alpha_p},\quad\textrm{for}\;\; |h|\leq 1,\quad \textrm{where}\;\; \alpha_p>0. $$ \item The random variables $\{ X_{n}\}_{n\in\Z^d}$ are independent and identically distributed, $X_n\sim X$ for all $n\in\Z^d.$ \item The `\emph{bump function}' $g_\ell(\cdot , \cdot)$ satisfies a $d$-dimensional analogue of condition \eqref{mu-decay}. \end{itemize} Lemma \ref{prod3} below implies that such $\mu^{(\ell)}_\delta$ is a stochastic multifunction, covered by Theorem \ref{th:main22}. As a final illustration, Theorem \ref{th:main22} applies easily to homogenization of many random differential operators: \begin{example}\label{example:general} For each $\ell=1,\ldots, L$, let $P_\ell(D)$ be a constant coefficient second order differential operator on $\R^d$. Also let $\mu_\delta^{(\ell)}$ be random multipliers as in Theorem \ref{th:main22}. For simplicity we assume that $d\geq 3$, and that the $\mu_\delta^{(\ell)}$ are supported on a fixed ball $B\subset\R^d$. Our basic ellipticity assumption is that they satisfy a.s. \begin{equation}\label{eq:diff0} \sum_{\ell=1}^La_\ell\|\mu_\delta^{(\ell)}\|_{L^\infty (B)}\leq k<1 \quad \textrm{for all}\;\;\delta \in (0,1), \end{equation} where the constants $a_\ell$ will be defined shortly. We consider the following PDE on $\R^d$ with random coefficient functions \begin{equation}\label{eq:diff} \Delta u_\delta + \sum_{\ell=1}^L\mu_\delta^{(\ell)}P_\ell(D) u_\delta =h.
\end{equation} Here the right hand side $h\in L^2(\R^d)$ is fixed, and is also supported in the ball $B$. We normalize the solutions $u_\delta$ of \eqref{eq:diff} by demanding that $u_\delta(x)\to 0$ as $x\to\infty$. We claim that this problem has a unique solution $u_\delta\in \dot W^{2,2}(\R^d)$ converging strongly (in probability) in $\dot W^{s,2}(\R^d)$ for every $s<2$ towards a deterministic function $u_\infty\in \dot W^{2,2}(\R^d)$ as $\delta\to 0.$ Thus the present homogenization problem is solvable with a deterministic limit. In order to sketch the argument, let us denote by $T_\ell$ the homogeneous Fourier multiplier $T_\ell\coloneqq \Delta^{-1}P_\ell(D)$ and note that it is a scaling and translation invariant singular integral\footnote{\label{fn:1} Strictly speaking the $T_\ell$ might not be precisely of the form in Definition \ref{sio-def} because there may be an identity component in addition to a principal value integral; however it is a routine matter to extend the analysis in this paper to this more general setting.} on $\R^d$. We choose $a_\ell\coloneqq \sup_{|\xi|=1}|P_\ell(\xi)|$ in condition \eqref{eq:diff0}, i.e., $a_\ell$ is the $L^2$-norm of the operator $T_\ell$. Then $f_\delta\coloneqq \Delta u_\delta\in L^2(B)$ satisfies the equation \begin{equation*}\label{eq:diff2} f_\delta + \sum_{\ell=1}^L\mu_\delta^{(\ell)}T_\ell f_\delta =h, \end{equation*} which can be uniquely solved in $L^2(\R^d)$ by the Neumann series and, via condition \eqref{eq:diff0}, we obtain an $L^2$-convergent series \begin{equation}\label{eq:diff3} f_\delta= h+\sum_{\substack{1\leq \ell_1,\ldots ,\ell_m\leq L\\m\geq 1}}(-1)^m\mu_\delta^{(\ell_1)}T_{\ell_1}\mu_\delta^{(\ell_2)}T_{\ell_2}\cdots \mu_\delta^{(\ell_m)}T_{\ell_m}h. \end{equation} By applying the fundamental solution of the Laplacian, we see that $u_\delta=c_d|\cdot|^{2-d}*f_\delta$ solves \eqref{eq:diff} with the right behaviour at infinity, as $d\geq 3$.
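Parenthetically, the value of $a_\ell$ can be verified on the Fourier side (a routine sketch, under the assumption that each $P_\ell$ is homogeneous of degree two; lower order terms are compact perturbations and are discussed separately below):

```latex
% Fourier symbol of T_ell = \Delta^{-1} P_ell(D):
\widehat{T_\ell f}(\xi)
  = \frac{P_\ell(2\pi i\xi)}{-4\pi^2 |\xi|^2}\,\widehat{f}(\xi)
  = \frac{P_\ell(\xi)}{|\xi|^2}\,\widehat{f}(\xi),
\qquad\text{since}\quad P_\ell(2\pi i\xi) = (2\pi i)^2 P_\ell(\xi) = -4\pi^2 P_\ell(\xi).
```

The symbol is homogeneous of degree zero, so Plancherel's theorem gives $\|T_\ell\|_{L^2\to L^2}=\sup_{\xi\neq 0}|P_\ell(\xi)|/|\xi|^2=\sup_{|\xi|=1}|P_\ell(\xi)|$, which is the constant appearing in \eqref{eq:diff0}.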
Since any other solution has the same Laplacian, they must differ by a harmonic function that vanishes at infinity, and hence their difference is zero. Theorem \ref{th:main22} applies to each term in the sum \eqref{eq:diff3}, and together with the uniform convergence in $\delta$ of the series in $L^2$ we deduce that $f_\delta \to f_\infty$ weakly in $L^2(\R^d)$, where $f_\infty$ is also supported in $B$. The rest of the claim follows from the standard properties of the fundamental solution $c_d|\cdot|^{2-d}$. We finally note that above the operators $P_\ell$ may well have lower order terms since those produce compact Fourier multipliers between functions on fixed compact subsets of $\R^d$. Hence the terms in the Neumann series containing them can be taken care of by multiple applications of Theorem \ref{th:main22}. Actually, instead of constant coefficient differential operators $P_\ell$ we could as well consider classical pseudodifferential operators of order 2 whose principal part is a homogeneous Fourier multiplier. Similarly, the technique applies to fractional Laplacians, and in many other types of homogenization problems. In order to spell out one more specific example -- completely without details -- consider the homogenization of the general conductivity equation in the plane: $$ \nabla \cdot \big( A(x)\nabla u(x)\big)=0, $$ where the $2\times 2$ matrix $A(x)=(\delta_{j,k}+\mu_{j,k}(x))_{j,k=1,2}$ is measurable and uniformly elliptic, and each $\mu_{j,k}$ is a stochastic multifunction. One may reduce this to the study of the generalized Beltrami equation $$ {\partial_z} f+\eta_1\deeb f+\eta_2\overline{\deeb f}=0, $$ where the coefficients $\eta_j$ are expressed in terms of the matrix coefficients $\mu_{j,k}$, see e.g. \cite[Theorem 16.1.6]{AIM}. The structure of the $\eta_j$'s allows them to be approximated in a suitable sense by multifunctions (see footnote \ref{fn:1} in this connection).
The generalized Beltrami equation may be solved by a $2\times 2$-matrix valued Neumann series, and the analysis can then be carried out analogously to the case of the classical Beltrami equation. For classical treatments of homogenization of the above PDE's (not, however, including the more general Fourier multipliers that we allow for), we refer to \cite{PV1}, \cite{PV2}, and \cite{AS}. \end{example} \subsection{Structure of the paper} Section \ref{dmd-sec} develops the homogenization of deterministic iterated singular integrals. This is much easier than the random setting, but has its own interest, and it will provide a handy tool in treating the stochastic case later on. The admissible class of deterministic multipliers will be called \emph{multiscale functions} (see Definition \ref{ms}). They are defined using the notion of `multiscale tensor product' (Definition \ref{mtp}), which generalizes the product of an envelope function and a bump field. Our deterministic homogenization result is stated as Corollary \ref{singular_equivalence}. Section \ref{random-sec} first defines the probabilistic analogues of the deterministic notions; in particular, the `stochastic multiscale tensor product' (Definition \ref{smtp}) is used to define \emph{stochastic multiscale functions} (Definition \ref{sms}), which are quite a bit more general than the multipliers we discussed in Section \ref{ss:qc_homogenization}. The general form of our main result on homogenization of randomized iterated singular integrals is formulated in Theorem \ref{mainthm} and Corollary \ref{co:main}. Lemma \ref{prod3} then verifies that the random multipliers \eqref{eq:random multiplier} are particular instances of a stochastic multiscale tensor product. The proof of Theorem \ref{th:main22} is carried out in Section \ref{sec:proof}, where it is obtained as a consequence of Theorem \ref{mainthm} and Corollary \ref{co:main}.
Somewhat surprisingly, a considerable effort needs to be spent in establishing the convergence of the expectation of the iterated randomized integrals. Finally, Section \ref{se:qchomogenization} applies Theorem \ref{th:main22} to quasiconformal homogenization. There we combine Theorem \ref{th:main22} with methods from the theory of planar quasiconformal mappings in order to show that the corresponding random solutions $F_\delta$ almost surely have a unique normalised deterministic quasiconformal limit $F_\infty$, see e.g., Theorem \ref{th:main}. \subsection{Acknowledgements} The authors thank Dima Shlyakhtenko for helpful discussions, and in particular for conveying the intuition (from free probability) that the contribution from crossing partitions is negligible. The first author was supported by Academy of Finland projects 1134757, 1307331 and 13316965, and ERC project 834728. The second author was supported by NSF grants DMS-1068105 and DMS-1700069. The third author was supported by the Finnish Academy, grants 75166001, 1309940, and the Finnish Academy COE 'Analysis and Dynamics'. The fourth author was supported by a grant from the MacArthur Foundation, NSF grants DMS-0649473, DMS-1266164, DMS-1764034, and a Simons Investigator Award. \section{Deterministic multiscale functions}\label{dmd-sec} In the sequel we use extensively the notations $X \lesssim Y$ or $X = O(Y)$ to denote the estimate $|X| \leq CY$ where $C$ is an absolute constant. If we need the constant $C$ to depend on some parameters, we shall indicate this by subscripts, or else indicate the dependence in the text. For instance, $X \lesssim_p Y$ or $X = O_p(Y)$ means that $|X| \leq C_p Y$ for some constant $C_p$ depending on $p$. Our arguments in the following three sections are not specific to the Beurling transform in the plane, and so we shall work in the more general context of singular integral operators in a Euclidean space.
Accordingly, we fix a dimension $d \geq 1$; in the application to the Beltrami equation, we will have $d=2$. We shall work with the standard Euclidean space $\R^d$, the standard lattice $\Z^d$, and the standard torus $\T^d = \R^d/\Z^d$. We also have a scale parameter $0 < \delta < 1$, which we shall think of as being small; several of our functions shall depend on this parameter, and we shall indicate this by including $\delta$ as a subscript. Before we can state the main result, it will be convenient to introduce a certain calculus regarding various classes of functions (namely, envelope functions, localized functions, negligible functions, and multiscale functions; we will define these classes later in this section). To set up this calculus we shall need a certain amount of notation and basic theory. \begin{definition}[H\"older space] If $1 < p < \infty$ and $\alpha \in (0,1)$, we let $\Lambda^{\alpha,p}(\R^d)$ denote the space of functions $f$ whose norm $$ \| f \|_{\Lambda^{\alpha,p}(\R^d)} \coloneqq \|f\|_{L^p(\R^d)} + \sup_{0 < |h| < 1} \frac{\|\Delta_h f\|_{L^p(\R^d)}}{|h|^\alpha}$$ is finite, where $\Delta_h$ was defined in \eqref{difference-def}. \end{definition} \begin{remark} One could also use Sobolev spaces $W^{\alpha,p}(\R^d)$ instead of H\"older spaces $\Lambda^{\alpha,p}(\R^d)$ in what follows, but we have elected to use H\"older spaces as they are slightly more elementary. Also note that we usually use the symbol $\phi$ for a Beltrami envelope function, c.f. Definition \ref{de:ref}. \end{remark} We recall the following definition from p. \pageref{envelope}. \begin{definition}[Envelope function]\label{def:env} An \emph{envelope function} is a function $f: \R^d \to \C$ (not depending on the scale parameter $\delta$) such that for every $1 < p < \infty$ there exists $\alpha > 0$ such that $f \in \Lambda^{\alpha, p}(\R^d)$. 
Thus the space of all envelope functions is $$ \bigcap_{1 < p < \infty} \bigcup_{\alpha \in(0,1)} \Lambda^{\alpha, p}(\R^d).$$ \end{definition} \begin{example} If $Q$ is a cube in $\R^d$, then one checks that $1_Q\in \Lambda^{\alpha, p}(\R^d)$ for $\alpha\in (0,1/p)$ so that the indicator function $1_Q$ is an envelope function. Any function in the Schwartz class is an envelope function. \end{example} \begin{lemma}\label{envprod} The product of two envelope functions is again an envelope function. \end{lemma} \begin{proof} From H\"older's inequality one quickly sees that the product of two functions in $\Lambda^{\alpha, p}(\R^d)$ lies in $\Lambda^{\alpha, p/2}(\R^d)$. The claim follows. \end{proof} \begin{definition}[Localized function] A (deterministic) \emph{localized function} is a function $g: \R^d \to \C$ (not depending on the scale parameter $\delta$) such that for every $1 < p < \infty$ and $N > 0$, the function $\langle \cdot \rangle^N g$ lies in $L^p(\R^d)$, where $\langle \cdot \rangle$ is as in Definition \ref{Rescale}. Thus the space of all localized functions is $$ \bigcap_{1 < p < \infty} \bigcap_{N > 0}\langle \cdot \rangle^{-N}L^p(\R^d), $$ \end{definition} \begin{example} The indicator function $1_Q$ of a cube is a localized function, as is any function in the Schwartz class. \end{example} We shall often exploit the ability of localized functions to absorb arbitrary powers of $\langle \cdot \rangle_{[n,\delta]}$ via the following lemma, which improves upon the triangle inequality in $L^p$ at the cost of inserting different localizing weights $\langle \cdot \rangle_{[n,\delta]}$ on each summand. \begin{lemma}[Localization lemma]\label{loc} Let $\delta > 0$ and let $1 < p < \infty$. Then we have the estimate $$ \| \sum_{n \in\Z^d} f_n \|_{L^p(\R^d)} \lesssim_{p,d} (\sum_{n \in \Z^d} \| \langle \cdot \rangle_{[n,\delta]}^d f_n \|_{L^p(\R^d)}^p)^{1/p}$$ for any sequence $f_n \in L^p(\R^d)$ of functions. 
\end{lemma} \begin{proof} By rescaling we may assume that $\delta=1$. It will suffice to prove the pointwise inequality $$ |\sum_{n \in \Z^d} f_n| \lesssim_{p,d} (\sum_{n \in \Z^d} \langle \cdot \rangle_{[n,1]}^{pd} |f_n|^p )^{1/p}.$$ By H\"older's inequality, it is enough to show that pointwise $$ (\sum_{n \in \Z^d} \langle \cdot \rangle_{[n,1]}^{-p' d})^{1/p'} \lesssim_{p,d} 1$$ where $p' = p/(p-1)$ is the dual exponent of $p$. But this can be established by direct calculation. \end{proof} Let us record a couple of elementary properties of localized functions. \begin{lemma}\label{localized} \text{} \begin{itemize} \item[(i)] The product of two localized functions is a localized function. \item[(ii)] For any localized function $g$ and any sequence $(a_n)$ it holds that $$ \|\sum_{n\in\Z^d}a_n g_{[n,\delta]}\|_{L^p(\R^d)}\lesssim_{p,g}\delta^{d/p} \|(a_n)\|_{\ell^p},\quad 1<p<\infty, $$ where $g_{[n,\delta]}$ is given by Definition \ref{Rescale}. \end{itemize} \end{lemma} \begin{proof} Claim (i) follows from H\"older's inequality, and claim (ii) is an immediate consequence of Lemma \ref{loc}. \end{proof} \begin{definition}[Discretization] Let $f$ be an envelope function, and let $0 < \delta < 1$. We define the \emph{discretization} $[f]_\delta\colon \Z^d \to \C$ of $f$ at scale $\delta$ to be the function $$[f]_\delta(n) \coloneqq \frac{1}{\delta^d} \int_{n\delta + [0,\delta]^d} f,$$ thus $[f]_\delta(n)$ is the average value of $f$ on the cube $n\delta + [0,\delta]^d$. \end{definition} \begin{definition}[Multiscale tensor product]\label{mtp} Let $f$ be an envelope function and $g$ be a localized function. We define the \emph{multiscale tensor product} $f \otimes_\delta g \colon \R^d \to \C$ of $f$ and $g$ to be the function $$ f \otimes_\delta g \coloneqq \sum_{n \in \Z^d} [f]_\delta(n) g_{[n,\delta]}$$ where $g_{[n,\delta]}$ is defined by Definition \ref{Rescale}.
\end{definition} \begin{definition}[Negligible function] A function $F = F_\delta \colon \R^d \to \C$ depending on the parameter $0 < \delta < 1$ is said to be \emph{negligible} if for every $1 < p < \infty$ there exists $\eps_p > 0$ and $C_p > 0$ such that $$ \|F_\delta \|_{L^p(\R^d)} \leq C_p \delta^{\eps_p}$$ for all $0 < \delta < 1$. \end{definition} \begin{remark} Note in particular that if $F$ is negligible, then $F_\delta$ converges to zero in $L^p$ norm as $\delta \to 0$ for every $1 < p < \infty$, and furthermore the same is true even if one multiplies $F_\delta$ by an arbitrary power of $(\log \frac{1}{\delta})$. This freedom to absorb logarithmic factors in $\delta$ will be useful for technical reasons later in this paper. \end{remark} \begin{definition}[Multiscale function]\label{ms} A function $F = F_\delta \colon \R^d \to \C$ depending on the parameter $0 < \delta < 1$ is said to be a (deterministic) \emph{multiscale function} if it has an expansion $$ F_\delta = \sum_{j=1}^J f_j \otimes_\delta g_j + G_\delta$$ where $J \geq 1$ is an integer, $f_1,\ldots,f_J$ are envelope functions, $g_1,\ldots,g_J$ are localized functions, and $G_\delta$ is a negligible function. If $F_\delta$ and $\widetilde F_\delta$ are two multiscale functions such that the difference $F_\delta-\widetilde F_\delta$ is negligible, we say that $F_\delta$ and $\widetilde F_\delta$ are \emph{equivalent}. \end{definition} \begin{example} If $Q$ is a cube, and $g$ is a localized function, then the function $$ F_\delta(x) \coloneqq \sum_{n \in \Z^d} 1_Q(n \delta) g_{[n,\delta]}(x)$$ can be easily verified to be a multiscale function. To this end we use Lemma \ref{localized}(ii) to estimate \begin{align*} \|F_\delta - 1_Q \otimes_\delta g \|_{L^p(\R^d)}\;\; &\leq \;\;\left\|\sum_{n\textrm{ : } d(n\delta,\partial Q)\leq 2\sqrt{d}\delta} |g_{[n,\delta]}|\right\|_{L^p(\R^d)} \lesssim \big(\delta^{(1-d)}\big)^{1/p}\delta^{d/p}=\delta^{1/p} . 
\end{align*} Hence the difference $F_\delta - 1_Q \otimes_\delta g$ is negligible. A similar statement is true if $1_Q$ is replaced by a Schwartz function. \end{example} \begin{example} The function $\mu_\delta$ defined in \eqref{mu-ex1} is a multiscale function. Indeed, $\mu_\delta$ is equivalent to $\varphi \otimes_\delta a$. More generally, we will prove in Lemma \ref{prod3} below that if $g$ is bounded and quickly decaying, $$ |g(x)|\leq C_N \langle x \rangle^{-N}\quad \textrm{for all}\;\; N\geq 1\;\;\textrm{and}\;\; x\in\R^d, $$ then for any envelope function $f$ the multiscale tensor product $f\otimes_\delta g$ is equivalent to the function $\displaystyle f\sum_{n\in \Z^d}g_{[n,\delta]}.$ \end{example} We continue with basic discretization estimates for envelope functions, encoded in the following two lemmas. \begin{lemma}\label{discret} Let $f$ be an envelope function. \begin{itemize} \item[(i)] One has $\|[f]_\delta \|_{\ell^p(\Z^d)} \lesssim_{p,d} \|f\|_{L^p(\R^d)}\delta^{-d/p}$ for all $0 < \delta < 1$ and $1 < p < \infty$. \item[(ii)] For any $r \in \Z^d$ we have $$ \|\Delta_r [f]_\delta \|_{\ell^p(\Z^d)} \lesssim_{p,f,d} (|r| \delta)^{\eps_p} \delta^{-d/p}$$ for some $\eps_p > 0$ independent of $\delta$ or $r$. \end{itemize} \end{lemma} \begin{proof} From H\"older's inequality followed by Fubini's theorem we have $$ \| [f]_\delta \|_{\ell^p(\Z^d)}\leq \left\| \left( \Big(\frac{1}{\delta^d} \int_{n\delta+[0,\delta]^d} |f|^p\Big)^{1/p}\right)_{n\in \Z^d} \right\|_{\ell^p(\Z^d)} = \delta^{-d/p} \|f\|_{L^p(\R^d)}$$ and (i) follows since $f \in L^p(\R^d)$. For (ii), observe that $$ \Delta_r [f]_\delta =[\Delta_{\delta r} f]_\delta. $$ The claim now follows from (i) as $f \in \Lambda^{\eps_p,p}(\R^d)$ for some $\eps_p > 0$. \end{proof} Also, discretization and multiplication almost commute: \begin{lemma}\label{commutator} Let $f, F$ be envelope functions.
Then for every $1 < p < \infty$ there exists $\eps_p > 0$ such that $\| [fF]_\delta - [f]_\delta [F]_\delta \|_{\ell^p(\Z^d)} \lesssim_{p,f,F,d} \delta^{-d/p + \eps_p}$. \end{lemma} \begin{proof} From Fubini's theorem we have $$ [fF]_\delta(n) - [f]_\delta(n) [F]_\delta(n) = \frac{1}{\delta^{2d}} \int_{\delta n+[0,\delta]^d} \int_{\delta n+[0,\delta]^d} f(x)(F(x) - F(y))\ dx dy.$$ Writing $y = x + r$ we can thus estimate $$ |[fF]_\delta(n) - [f]_\delta(n) [F]_\delta(n)| \leq \frac{1}{\delta^{2d}} \int_{[-\delta,\delta]^d} \int_{\delta n+[0,\delta]^d}| f(x) \Delta_r F(x)|\ dx dr,$$ and hence by Minkowski's inequality $$\| [fF]_\delta - [f]_\delta [F]_\delta \|_{\ell^p(\Z^d)} \leq \frac{1}{\delta^{d}} \int_{[-\delta,\delta]^d} \left\| \frac{1}{\delta^{d}} \int_{\delta n+[0,\delta]^d} |f(x) \Delta_r F(x)|\ dx \right\|_{\ell^p_n(\Z^d)}\ dr.$$ Lemma \ref{discret}(i) allows us to conclude $$\| [fF]_\delta - [f]_\delta [F]_\delta \|_{\ell^p(\Z^d)} \leq \frac{\delta^{-d/p}}{\delta^{d}} \int_{[-\delta,\delta]^d} \| f \Delta_r F \|_{L^p(\R^d)}\ dr,$$ and finally by H\"older's inequality and the fact that $f \in L^{2p}(\R^d)$ and $F \in \Lambda^{\eps_{2p}, 2p}(\R^d)$ for some $\eps_{2p} > 0$ we see that for $r\in [-\delta,\delta]^d$ $$ \| f \Delta_r F\|_{L^p(\R^d)} \lesssim_{p,f,F} \delta^{\eps_{2p}}$$ and the claim follows. \end{proof} Next we consider the basic properties of multiscale functions. For this purpose we need a couple of useful lemmas. \begin{lemma}\label{Lp-multiscale} Assume that $f$ is an envelope function and $g$ a localized function. Then for any $\delta >0$ \begin{equation}\label{Lp-multiestimate} \| f\otimes_\delta g \|_{L^p(\R^d)}\lesssim_{p,d}\|f\|_{L^p(\R^d)} \|\langle \cdot \rangle^d g\|_{L^p(\R^d)}.
\end{equation} \end{lemma} \begin{proof} We apply the localization lemma (Lemma \ref{loc}) to estimate \begin{align*} \| f\otimes_\delta g \|_{L^p(\R^d)}&= \|\sum_{n \in \Z^d} [f]_\delta(n) g_{[n,\delta]}\|_{L^p(\R^d)}\\ &\lesssim_{p} \left(\sum_{n\in\Z^d} [f]^p_\delta(n)\| \langle \cdot \rangle_{[n,\delta]}^d g_{[n,\delta]} \|^p_{L^p(\R^d)}\right)^{1/p}\\ &\lesssim_{p,d} \delta^{-d/p}\|f\|_{L^p(\R^d)} \delta^{d/p}\|\langle \cdot \rangle^dg\|_{L^p(\R^d)} \end{align*} where we applied Lemma \ref{discret}(i) and the observation $\| \langle \cdot \rangle_{[n,\delta]}^d g_{[n,\delta]} \|^p_{L^p(\R^d)}= \delta^{d}\| \langle \cdot \rangle^d g \|^p_{L^p(\R^d)}$ for all $n.$ \end{proof} \noindent In particular, if supp$(g)\subset[0,1]^d,$ then the above lemma yields the simple estimate $$ \| f\otimes_\delta g \|_{L^p(\R^d)}\lesssim_{p,d}\|f\|_{L^p(\R^d)} \|g\|_{L^p(\R^d)}. $$ The following lemma reduces us to considering multiscale tensor products $f\otimes_\delta g$ with $g$ supported in $[0,1]^d.$ \begin{lemma}\label{adecay} Assume that $f$ is an envelope function and $g$ is either a localized function, or $($more generally$)$ that it satisfies for each $p\in (0,\infty)$ \begin{equation}\label{decay} \|g1_{k+[0,1]^d}\|_{L^p(\R^d)}\lesssim_{g,p}\langle k \rangle^{-a},\quad k\in\Z^d, \end{equation} with some $a>d$. Then $f\otimes_\delta g$ is a multiscale function that is equivalent to $f\otimes_\delta \widetilde g$, where $\widetilde g$ is supported in $[0,1]^d$ and given explicitly by the formula \begin{equation}\label{discret1} \widetilde g(x)\coloneqq 1_{[0,1]^d}(x)\sum_{k\in\Z^d}g(x+k). \end{equation} \end{lemma} \begin{proof} Observe first that any localized function satisfies \eqref{decay}. The idea of the proof is to use the H\"older-type continuity of $f$ to show that one may actually treat $f$ locally as a constant in the relevant scales.
To show this, fix $p\in (1,\infty)$ and observe that by Lemma \ref{Lp-multiscale} we have \begin{equation}\label{discret2} \|f\otimes_\delta 1_{[0,1]^d}g\|_{L^p(\R^d)}\lesssim \|f\|_{L^p(\R^d)}\|1_{[0,1]^d}g\|_{L^p(\R^d)}. \end{equation} From Definition \ref{mtp}, for any $\delta >0$ we may decompose $$f\otimes_\delta g (x)= \sum_{k\in\Z^d}\big(f(\cdot +k\delta)\otimes_\delta (1_{[0,1]^d}g(\cdot-k))\big)(x).$$ Hence $$ H_\delta\coloneqq f\otimes_\delta (g-\widetilde g)=\sum_{k\in\Z^d} (\Delta_{k\delta} f) \otimes_\delta 1_{[0,1]^d}g(\cdot-k). $$ By the envelope property of $f$ there is $\varepsilon\in (0,a-d)$ so that $\|\Delta_{k\delta} f\|_{L^p(\R^d)}\lesssim (|k|\delta)^\eps.$ Hence an application of \eqref{discret2} and our assumption on $g$ yield that $$ \| H_\delta\|_{L^p(\R^d)}\lesssim \sum_{k\in\Z^d}(|k|\delta)^\eps \|1_{[0,1]^d}g(\cdot -k)\|_{L^p(\R^d)}\lesssim \delta^\varepsilon, $$ and the negligibility of $H_\delta$ follows. \end{proof} We remark that later on, when we deal with stochastic multiscale functions, the natural analogue of the above lemma is no longer valid, which causes some additional technical complications. We now describe the weak convergence of multiscale functions in the limit $\delta \to 0$. \begin{lemma}\label{weak} Let $F = F_\delta$ be a multiscale function and $1 < p < \infty$. Then $\|F_\delta\|_{L^p(\R^d)}$ is bounded uniformly in $\delta$. Furthermore, there exists $F_0 \in L^p(\R^d)$ such that $F_\delta$ converges weakly in $L^p(\R^d)$ to $F_0$. Actually, there is $\eps >0$ such that for any test function $\phi\in C^\infty_0(\R^d)$ $$ \left|\int_{\R^d}(F_0(x)-F_\delta(x))\phi(x)dx\right|\lesssim_{\phi,F_\delta} \delta^\eps. $$ \end{lemma} \begin{proof} By linearity it suffices to treat the cases when $F$ is either a multiscale tensor product or a negligible function. The claims are trivial in the latter case, so assume that $F_\delta = f \otimes_\delta g$ for some envelope function $f$ and localized function $g$.
The uniform boundedness of $\|F_\delta\|_{L^p(\R^d)}$ follows immediately from Lemma \ref{Lp-multiscale}. In order to establish the weak convergence let us first consider the model case in which $g = 1_{[0,1]^d}$. Then for a.e. $x$ we have $F_\delta(x) = [f]_\delta(n)$, where $n$ is the integer part of $x/\delta$. We thus see that $$ F_\delta(x) - f(x) = \frac{1}{\delta^d} \int_{n\delta + [0,\delta]^d} f(y) - f(x)\ dy$$ and so by the triangle inequality $$ |F_\delta(x) - f(x)| \leq \frac{1}{\delta^d} \int_{[-\delta,\delta]^d} |\Delta_r f(x)|\ dr.$$ Taking $L^p$ norms, applying Minkowski's inequality, and using the fact that $f \in \Lambda^{\eps_p,p}(\R^d)$ for some $\eps_p > 0$ we conclude that $F_\delta$ converges strongly in $L^p(\R^d)$ to $f$, which certainly suffices. By subtracting a constant multiple of this model case, we may assume in general that $g$ has mean zero. We claim that $F_\delta$ now converges weakly to zero. Let $\phi \in C^\infty_0(\R^d)$ be a test function. We need to show that $$ \int_{\R^d} \sum_{n \in \Z^d} [f]_\delta(n) g_{[n,\delta]}(x) {\phi}(x)\ dx \to 0$$ as $\delta \to 0$. Using the mean zero nature of $g$, we can rewrite the left-hand side as $$ \delta^d \sum_{n \in \Z^d} [f]_\delta(n) \int_{\R^d} g(r) \Delta_{r\delta} {\phi}(n\delta)\ dr.$$ Since $\phi$ is a test function and $g$ is localized, the inner integral has magnitude $O( \delta )$, and furthermore vanishes unless $n= O_\phi(1/\delta)$. Thus the whole expression is bounded by $\delta\int_{|x|<c(\phi)/\delta} |f|$, which by H\"older's inequality is $O_p(\delta^{1-d/p'}\|f\|_{L^p(\R^d)})$ for any $p>1$; choosing $p$ close enough to $1$, the lemma follows. \end{proof} Parts (i) and (iii) of the following corollary follow immediately from the above proof, and (ii) is a consequence of (i). \begin{corollary}\label{envelope=multiscale} \text{} \begin{itemize} \item[(i)] If $f$ is an envelope function then $f-f \otimes_\delta 1_{[0,1]^d}$ is negligible. \item[(ii)] Every envelope function is a multiscale function.
\item[(iii)] If $F_\delta$ is a multiscale function with expansion $F_\delta = \sum_{j=1}^J f_j \otimes_\delta g_j + G_\delta$ $($where $G_\delta$ is negligible$)$, then for any $p\in (1,\infty)$ $$ F_\delta \underset{\delta\to 0}{\longrightarrow} \sum_{j=1}^Jc_jf_j \quad \textrm{\rm weakly in } L^p\;\; \textrm{\rm with}\;\; c_j\coloneqq \int_{\R^d} g_j,\;\; j=1,\ldots, J. $$ \end{itemize} \end{corollary} \noindent We remark that conclusion (iii) makes precise the intuitively obvious statement that a multiscale tensor product approximates in some natural sense (a multiple of) the envelope function as $\delta\to 0.$ The sum of two multiscale functions is clearly a multiscale function. We proceed to give other closure properties of multiscale functions, the first one being the closure under multiplication. \begin{proposition}\label{multmult} If $F = F_\delta$ and $G = G_\delta$ are multiscale functions, then $FG = F_\delta G_\delta$ is also a multiscale function. \end{proposition} \begin{proof} If either $F$ or $G$ is negligible, then by Lemma \ref{weak} and H\"older's inequality we see that $FG$ is also negligible. Hence by linearity we may assume that $F, G$ are multiscale tensor products, say $F = f \otimes_\delta g$ and $G = f' \otimes_\delta g'$. By Lemma \ref{adecay} we may assume in addition that supp$(g)\subset[0,1]^d$ and supp$(g')\subset [0,1]^d$. Then, by observing that $g_{[n,\delta]}, g'_{[n',\delta]}$ have disjoint supports if $n\not=n'$ we get $$ FG(x) = \sum_{n \in \Z^d} [f]_\delta(n) [f']_\delta(n) g''_{[n,\delta]}(x),$$ where $g'' \coloneqq gg'$ is a localized function. From Lemma \ref{commutator} we see that $$ \sum_{n \in \Z^d} ([f]_\delta(n) [f']_\delta(n) - [ff']_\delta(n)) g''_{[n,\delta]}$$ is negligible. Hence $F G$ is equivalent to $$ \sum_{n \in \Z^d} [ff']_\delta(n) g''_{[n,\delta]}$$ which equals $(ff') \otimes_\delta g''$. Since $g''$ is localized, and (by Lemma \ref{envprod}) $ff'$ is an envelope function, the claim follows.
\end{proof} \begin{corollary}\label{product_equivalence} Assume that $f,f'$ are envelope functions and $g,g'$ are localized functions. Then the product $(f\otimes_\delta g)(f'\otimes_\delta g')$ is a multiscale function equivalent to either of the multiscale tensor products $ff'\otimes_\delta \widetilde g_1$, $ff'\otimes_\delta \widetilde g_2$, where $$ \widetilde g_1(x) \coloneqq 1_{[0,1]^d}(x)\sum_{n,m\in \Z^d} g(n+x)g'(m+x) $$ and $$ \widetilde g_2(x) \coloneqq \sum_{n\in \Z^d} g(n+x)g'(x). $$ \end{corollary} \begin{proof} The statement concerning $\widetilde g_1$ follows directly from examining the proofs of Lemma \ref{adecay} and Proposition \ref{multmult}. The second statement in turn follows from Lemma \ref{adecay} by observing that $\widetilde g_2$ is localized and $\widetilde g_1(x)=1_{[0,1]^d}(x)\sum_{k\in\Z^d}\widetilde g_2(x+k)$. \end{proof} Interestingly enough, the multiscale property is also preserved under (translation and dilation invariant) singular integrals. This is not at all evident a priori since $Tg$ is usually not even integrable if $g$ is a localized function. Recall from standard Calder\'on-Zygmund theory (see e.g., \cite{stein:small}) that $T$ extends to a bounded linear operator on $L^p(\R^d)$ for all $1 < p < \infty$. \begin{proposition}\label{tf} If $F = F_\delta$ is a multiscale function, and $T$ is a (translation and dilation invariant) singular integral operator (independent of $\delta$), then $TF = T F_\delta$ is also a multiscale function. \end{proposition} \begin{proof} If $F$ is negligible, then $TF$ is negligible also since $T$ is bounded on every $L^p(\R^d)$ space. So we may assume that $F_\delta = f \otimes_\delta g$ for some envelope function $f$ and localized function $g$. To simplify the notation we now allow all implicit constants to depend on $f,g,T,d$. First suppose that $g = 1_{[0,1]^d}$.
By Corollary \ref{envelope=multiscale} $F_\delta$ differs from $f$ by a negligible function, thus $TF_\delta$ differs from $Tf$ by a negligible function. Since $f$ is an envelope function, and $T$ is translation-invariant and bounded on every $L^p(\R^d)$, we conclude that $Tf$ is an envelope function, and thus a multiscale function again by Corollary \ref{envelope=multiscale}, and the claim follows. By linearity it now suffices to treat the case when $g$ has mean zero. Using the translation and dilation invariance of $T$, we observe that $$ TF = \sum_{n \in \Z^d} [f]_\delta(n) T(g_{[n,\delta]}) = \sum_{n \in \Z^d} [f]_\delta(n) (Tg)_{[n,\delta]}.$$ If $Tg$ were a localized function we would now be done, but this is clearly not true in general. However, $Tg(x)$ decays roughly like $\langle x\rangle^{-d-1}$ or, more precisely, we have for any $x \in \R^d$ and $1 < p < \infty$ that \begin{equation}\label{tgp} \| Tg \|_{L^p(B(x,1))} \lesssim_{p} \langle x\rangle^{-d-1}, \end{equation} whence Lemma \ref{adecay} applies and the desired conclusion follows. In order to verify \eqref{tgp}, observe first from the $L^p$ boundedness of $T$ that the claim is easy for $|x| \leq 4$, so we may assume $|x| > 4$. We then use the localized mean zero nature of $g$ to decompose $g$ into a piece $g_1$ of $L^p$ norm $O_{p}( \langle x \rangle^{-d-1} )$ supported in $\R^d\setminus B(0,|x|/8)$ and with mean zero, and a mean zero localized function $g_2$ (with quantitative bounds independent of $x$) supported in $B(0,|x|/4)$. The contribution of $g_1$ is acceptable by the $L^p$ boundedness of $T$; the contribution of $g_2$ is acceptable by using the mean zero nature of $g_2$ to write for $x'\in B(x,1)$ $$ Tg_2(x') = \int_{B(0,|x|/4)} (K(x',y) - K(x',0)) g_2(y),$$ where $K(x,y) = \Omega(\frac{x-y}{|x-y|})/|x-y|^d$ is the singular kernel of $T$, and then using the triangle inequality, the Calder\'on-Zygmund type bounds on $K$, i.e.,
$$ |K(x',y) - K(x',0)|\leq C|y||x'|^{-d-1}\quad \textrm{for}\quad |y|\leq |x'|/2, $$ and the localized nature of $y\mapsto g_2(y)|y|$. \end{proof} \begin{corollary} For any envelope function $f$, localized function $g$ and singular integral $T$ the function $T(f\otimes_\delta g)$ is equivalent to the multiscale function $ATf+f\otimes_\delta g'$ where $$ A\coloneqq \int_{\R^d}g\qquad \textrm{and}\qquad g'(x)\coloneqq T\big(g-A1_{[0,1]^d}\big)(x). $$ \end{corollary} Iterating Proposition \ref{multmult} and Proposition \ref{tf}, we obtain our deterministic homogenization result for iterated singular integrals: \begin{corollary}\label{singular_equivalence} Let $\mu = \mu_\delta$ be a multiscale function, and let $T$ be a singular integral operator. Let $m \geq 1$, let $1 < p < \infty$, and define $\mu_m = \mu_{m,\delta}$ recursively by $\mu_{1,\delta} \coloneqq \mu_\delta$ and $\mu_{m,\delta} \coloneqq \mu_\delta T \mu_{m-1,\delta}$ for $m > 1$. Then $\mu_{m,\delta}$ is bounded in $L^p(\R^d)$ uniformly in $\delta$, and is weakly convergent to a limit $\mu_{m,0} \in L^p(\R^d)$. \end{corollary} \begin{remark} In principle it is possible to deduce a formula for the limit $\mu_{m,0}$ by a repeated application of Lemma \ref{weak} and Corollaries \ref{product_equivalence} and \ref{singular_equivalence}. \end{remark} \section{Stochastic multiscale functions}\label{random-sec} We now turn to a generalization of the above theory, in which the localized functions $g$ are allowed to be stochastic. We fix a probability space $\Omega$, and then define a product probability space $$ \widetilde\Omega\coloneqq \Omega^{\Z^d} \coloneqq \{ (\omega_n)_{n \in \Z^d}: \omega_n \in \Omega \hbox{ for all } n \in \Z^d \}.
$$ We often write $\widetilde\omega\coloneqq (\omega_n)_{n\in\Z^d}$ for an element of $\widetilde\Omega.$ \begin{definition}[Stochastic localized function]\label{sms} A \emph{stochastic localized function} is a function $g\colon \R^d \times \Omega \to \C$ (not depending on the scale parameter $\delta$), where $\Omega$ is a probability space, such that for every $1 < p < \infty$ and $k > 0$, the function $\langle x \rangle^k g(x,\omega)$ lies in $L^p(\R^d \times \Omega)$. We view $g$ as a random function from $\R^d$ to $\C$. \end{definition} \begin{definition}[Stochastic multiscale tensor product]\label{smtp} Let $f\colon \R^d \to \C$ be an envelope function and $g\colon \R^d \times \Omega \to \C$ be a stochastic localized function. We define the \emph{stochastic multiscale tensor product} $f \otimes_\delta g \colon \R^d \times \widetilde\Omega \to \C$ of $f$ and $g$ to be the function \begin{equation}\label{smtp-def} f \otimes_\delta g(x, \widetilde\omega) \coloneqq \sum_{n \in \Z^d} [f]_\delta(n) g_{[n,\delta]}(x,\omega_n) \end{equation} where $g_{[n,\delta]}$ is defined by Definition \ref{Rescale}. One can view $f \otimes_\delta g$ as a random function from $\R^d$ to $\C$. \end{definition} \begin{remark} In the above definition we of course could have instead spoken of independent copies of random functions $g(\cdot,\cdot)$ without introducing the product space $\widetilde\Omega.$ However, the product space perhaps makes things slightly more transparent, at least for readers with little background in probability. \end{remark} \begin{definition}[Stochastic negligible function] A function $F = F_\delta \colon \R^d \times \widetilde\Omega \to \C$ depending on the parameter $0 < \delta < 1$ is said to be \emph{negligible} if for every $1 < p < \infty$ there exists $\eps_p > 0$ and $C_p > 0$ such that $$ \|F_\delta \|_{L^p(\R^d \times \widetilde\Omega)} \leq C_p \delta^{\eps_p}$$ for all $0 < \delta < 1$. We view $F_\delta$ as a random function from $\R^d$ to $\C$.
\end{definition} \begin{definition}[Stochastic multiscale function]\label{smf} A function $F = F_\delta \colon \R^d \times \tilde \Omega \to \C$ depending on the parameter $0 < \delta < 1$ is said to be a \emph{stochastic multiscale function} if we can write $$ F_\delta = \sum_{j=1}^J f_j \otimes_\delta g_j + G_\delta$$ where $J \geq 1$ is an integer, $f_1,\ldots,f_J$ are envelope functions, $g_1,\ldots,g_J$ are stochastic localized functions, and $G = G_\delta$ is a stochastic negligible function. \end{definition} \noindent As in the previous section, we say that functions $F_\delta$ and $F'_\delta$ are {\it equivalent} if their difference is a stochastic negligible function. \begin{example} For each $n \in \Z^d$, let $\varepsilon_n \in \{-1,1\}$ be an i.i.d. collection of signs, and let $Q$ be a cube in $\R^d$. Then the random function $$ F_\delta(x) \coloneqq 1_Q(x) \sum_{n \in \Z^d} \varepsilon_n 1_{n\delta + [0,\delta)^d}(x)$$ is a stochastic multiscale function (setting $\Omega = \{-1,1\}$ with the uniform distribution, and $\varepsilon_n$ to be the $n^{th}$ coordinate function of $\tilde \Omega = \Omega^{\Z^d}$). \end{example} \begin{remark} In the case when the probability space $\Omega$ is trivial, stochastic multiscale functions are essentially the same as deterministic multiscale functions. \end{remark} Lemma \ref{loc} and its proof immediately generalize, so that we have \begin{equation}\label{loc3} \left\| \sum_{n \in\Z^d} f_n(x,\omega_n)\right\|_{L^p(\R^d\times\widetilde\Omega)} \lesssim_{p,d,N} (\sum_{n \in \Z^d} \| \langle x \rangle_{[n,\delta]}^d f_n(x,\omega) \|_{L^p(\R^d\times\Omega)}^p)^{1/p}. \end{equation} In a similar vein, Lemma \ref{Lp-multiscale} also generalizes to the stochastic setup, one just replaces $\|\langle \cdot \rangle^dg\|_{L^p(\R^d)}$ by $\|\langle \cdot \rangle^dg\|_{L^p(\R^d\times\Omega)}$ on the right hand side. 
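The i.i.d.\ sign example above can also be probed numerically. The following sketch (our illustration, not part of the argument; the helper names are ours) computes the pairing $\int F_\delta \phi$ in dimension $d=1$ with $Q=[0,1]$ and the convenient (non-smooth) test function $\phi = 1_Q$: the pairing then reduces to $\delta\sum_n \varepsilon_n$ over the cells in $Q$, a mean zero random variable whose mean square equals $\delta^2\cdot\delta^{-1}=\delta$, consistent with weak convergence of $F_\delta$ to $0$.

```python
import random

def pairing(delta, rng):
    # d = 1, Q = [0, 1], phi = 1 on Q: the integral of F_delta * phi
    # reduces to delta times a sum of i.i.d. signs over the cells in Q.
    n_cells = int(round(1.0 / delta))
    return delta * sum(rng.choice((-1.0, 1.0)) for _ in range(n_cells))

def mean_square(delta, trials, seed=0):
    # Empirical second moment of the pairing over independent trials.
    rng = random.Random(seed)
    return sum(pairing(delta, rng) ** 2 for _ in range(trials)) / trials

# The exact second moment is delta^2 * (1/delta) = delta, so the pairing
# concentrates around 0 at rate sqrt(delta) as delta -> 0.
for delta in (0.1, 0.01, 0.001):
    print(delta, mean_square(delta, trials=200))
```

The printed mean squares decay linearly in $\delta$, matching the variance computation above. We now return to the general theory; recall that, as just noted, Lemma \ref{Lp-multiscale} carries over to the stochastic setting.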
In particular, we obtain \begin{lemma}\label{fbound} Let $F = F_\delta \colon \R^d \times \tilde \Omega \to \C$ be a stochastic multiscale function, and let $1 < p < \infty$. Then $\|F_\delta \|_{L^p(\R^d \times \tilde \Omega)}$ is bounded uniformly in $0 < \delta < 1$. \end{lemma} \begin{remark}\label{generalLp} Exactly the same proof actually shows that $\|F_\delta \|_{L^p(\R^d \times \tilde \Omega)}$ stays bounded in $\delta$ for more general functions of the form $$ F_\delta(x,\widetilde\omega)= \sum_{n \in \Z^d} [f]_\delta(n) (g_n)_{[n,\delta]}(x, \widetilde\omega) $$ assuming only the uniform localization $\sup_{n\in\Z^d}\|\langle \cdot \rangle^N g_n(\cdot,\cdot)\|_{L^p(\R^d\times\widetilde\Omega)}<\infty$ for $N\geq 1$ and $p\in (1,\infty).$ \end{remark} In turn, Proposition \ref{multmult} has the following counterpart: \begin{proposition}\label{multmult2} If $F' = F'_\delta$ is a deterministic multiscale function and $F = F_\delta$ is a stochastic multiscale function, then $F'F = F'_\delta F_\delta$ is a stochastic multiscale function. \end{proposition} \begin{proof} It is enough to treat the case $F=f\otimes_\delta g$ and $F'=f'\otimes_\delta g',$ where $f,f'$ are envelope functions, $g$ is a stochastic localized function and $g'$ is a deterministic localized function. Furthermore, by Lemma \ref{adecay} we may assume that $g'$ has support in $[0,1]^d.$ Fix $p>1$ and observe that by the definition of localized functions and H\"older's inequality we have \begin{equation}\label{gg} \| \langle \cdot \rangle^d g'(\cdot-m) g (\cdot ,\cdot)\|_{L^p(\R^d\times \widetilde\Omega)}\lesssim_{g,g',p,a} \langle m \rangle^{-a} \end{equation} for all $a>1$, in particular for $a=d+1$. It follows that $\widetilde g$ is a stochastic localized function, where $\widetilde g$ is defined by $\widetilde g(x,\omega)\coloneqq \sum_{m\in\Z^d}g'(x-m)g(x,\omega)$.
By Lemma \ref{envprod}, the proposition follows as soon as we show that $F_\delta' F_\delta$ is stochastically equivalent to $(ff')\otimes_\delta \widetilde g.$ Note that by Lemma \ref{commutator} and the stochastic counterpart of Lemma \ref{Lp-multiscale} the latter function is equivalent to $H_\delta\coloneqq \sum_{n\in\Z^d}[f]_\delta(n)[f']_\delta (n)\widetilde g_{[n,\delta]}(\cdot,\omega_n)$, so it suffices to show that the difference $F_\delta'F_\delta-H_\delta$ is negligible. To that end we compute \begin{align*} (F_\delta F'_\delta-H_\delta)(x,\widetilde\omega) &=\sum_{m\in\Z^d}\left( \sum_{n\in\Z^d}[f]_\delta(n)\big(\Delta_m [f']_\delta(n)\big)g'_{[m+n,\delta]}(x) g_{[n,\delta]}(x,\omega_n)\right). \end{align*} Using \eqref{gg}, first applying the stochastic version of Lemma \ref{loc}, and then Lemma \ref{discret} together with H\"older's inequality, we obtain \begin{align*} \|F_\delta F'_\delta-H_\delta\|_{L^p(\R^d\times\widetilde\Omega)} &\leq \delta^{d/p}\sum_{m\in\Z^d}\|[f]_\delta \Delta_m [f']_\delta\|_{\ell^p(\Z^d)} \langle m \rangle^{-d-1} \\ &\lesssim_{f,f',p,g,g'}\delta^{d/p}\sum_{m\in\Z^d} \langle m \rangle^{-d-1} |m\delta|^{\varepsilon_{2p}}\delta^{-d/2p}\delta^{-d/2p}\\ &\lesssim\delta^{\varepsilon_{2p}}. \end{align*} \end{proof} We can now state our main technical result about stochastic multiscale functions: \begin{theorem}[Main estimate on stochastic multiscale functions]\label{mainthm} Let $m \geq 1$, let $\mu^{(1)} = \mu^{(1)}_\delta,\ldots,\mu^{(m)} = \mu^{(m)}_\delta$ be stochastic multiscale functions, and let $T_1,\ldots,T_{m-1}$ be singular integral operators. Define $\mu_m = \mu_{m,\delta} \colon \R^d \times \tilde \Omega \to \C$ recursively by \begin{equation}\label{mum-def} \mu_{1,\delta} \coloneqq \mu_\delta^{(1)}; \quad \mu_{m,\delta} \coloneqq \mu_\delta^{(m)} T_{m-1} \mu_{m-1,\delta}\;\; \hbox{ for } m > 1.
\end{equation} Then $\mu_{m,\delta}$ is bounded in $L^p(\R^d \times \tilde \Omega)$ uniformly in $0 < \delta < 1$, for any $p\in (1,\infty).$ Furthermore, there exists a (deterministic) limit function $\mu_{m,0} \in L^p(\R^d)$ and $\eps > 0$ with the property that given any test function $\phi \in C^\infty_0(\R^d)$ we have \begin{equation}\label{conc} \P( |\int_{\R^d} [\mu_{m,\delta}(x,\omega) - \mu_{m,0}(x)] {\phi(x)}\ dx| \geq \delta^{\eps} ) \lesssim_{m,d,\eps,\mu^{(1)},\ldots,\mu^{(m)},T_1,\ldots,T_{m-1},\phi} \delta^\eps \end{equation} where $\P$ denotes the probability measure on $\tilde \Omega$. \end{theorem} The proof of this theorem is lengthy and shall occupy the next section. Later, in Section \ref{se:qchomogenization} we shall give applications to random Beltrami equations through the following immediate consequence: \begin{corollary}[Almost sure convergence]\label{co:main} Let $\mu_{m,\delta}$ be as in Theorem \ref{mainthm} and assume that $a\in (0,1).$ Then almost surely $\mu_{m,a^{j}}\to \mu_{m,0}$ weakly in $L^p(\R^d)$ as $j\to\infty$, for all $p\in (1,\infty)$. \end{corollary} \begin{proof} The almost sure weak convergence, when tested against a fixed test function, follows immediately from Theorem \ref{mainthm} and the Borel-Cantelli lemma. For the rest it is enough to recall that the sequence $\mu_{m,a^{j}}$ is uniformly bounded in each $L^p(\R^d)$ and that one may pick a countable set of test functions that is dense in every $L^p(\R^d)$, with $1<p<\infty.$ \end{proof} Theorem \ref{th:main22} then follows from Theorem \ref{mainthm} and Corollary \ref{co:main}. We finish this section by observing that in the case where $g$ is bounded with quick decay as $x\to\infty$, the stochastic multiscale tensor product $f\otimes_\delta g$ is equivalent to the product of the envelope function $f$ and the `bump field' defined via $g$.
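To make the informal term `bump field' concrete (a notational restatement for the reader's convenience; the symbol $\Xi_\delta$ is ours and is not used elsewhere), the bump field determined by $g$ is the random function

```latex
\Xi_\delta(x,\widetilde\omega) \coloneqq \sum_{n\in \Z^d} g_{[n,\delta]}(x,\omega_n),
```

so the statement below asserts that $f\otimes_\delta g$ and $f\,\Xi_\delta$ differ by a stochastic negligible function.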
\begin{lemma}\label{prod3} Assume that $g$ is bounded and has quick decay in the first variable: $$ |g(x,\omega)|\leq C_N \langle x \rangle^{-N}\quad \textrm{for all}\;\; N\geq 1\;\;\textrm{and}\;\; x\in\R^d,\; \omega\in\Omega. $$ Then for any envelope function $f$ the stochastic multiscale tensor product $f\otimes_\delta g$ is equivalent to the function $\displaystyle f(x)\Big(\sum_{n\in \Z^d}g_{[n,\delta]}(x,\omega_n)\Big).$ \end{lemma} \begin{proof} Observe first that by the decay assumption $g$ is a stochastic localized function and the bump field is uniformly bounded: $$ \Big|\sum_{n\in \Z^d}g_{[n,\delta]}(x,\omega_n)\Big|\leq C\quad \textrm{for all}\;\; x\in\R^d. $$ Combining this with Corollary \ref{envelope=multiscale}(i) we see that the product $\displaystyle f(x)\Big(\sum_{n\in \Z^d}g_{[n,\delta]}(x,\omega_n)\Big)$ is equivalent to $\displaystyle f_\delta(x)\Big(\sum_{n\in \Z^d}g_{[n,\delta]}(x,\omega_n)\Big)$, where $$ f_\delta\coloneqq f\otimes_\delta 1_{[0,1)^d}=\sum_{n\in \Z^d} [f]_\delta(n) 1_{n\delta+[0,\delta)^d}.
$$ By using for each $n\in\Z^d$ the trivial identities $[f]_\delta(n)=\sum_{k\in\Z^d}[f]_\delta(n) 1_{(n+k)\delta+[0,\delta)^d}$ and $f_\delta=\sum_{k\in \Z^d} [f]_\delta(n+k) 1_{(n+k)\delta+[0,\delta)^d}$ we may use \eqref{loc3} to estimate for any $p\in (1,\infty)$ \begin{align*} Q_p\coloneqq &\|f\otimes_\delta g - f_\delta\Big(\sum_{n\in \Z^d}g_{[n,\delta]}(x,\omega_n)\Big)\|_{L^p(\R^d\times \widetilde\Omega)}\\ =&\left\|\sum_{k\in \Z^d}\left(\sum_{n\in \Z^d}\big( \Delta_k [f]_\delta (n)\big) 1_{(n+k)\delta+[0,\delta)^d} (x)\, g_{[n,\delta]}(x,\omega_n)\right) \right\|_{L^p(\R^d\times \widetilde\Omega)}\\ \lesssim&\sum_{k\in \Z^d}\left(\sum_{n\in \Z^d}\big| \Delta_k [f]_\delta (n)\big|^p\left\| 1_{(n+k)\delta+[0,\delta)^d} (x)\langle x \rangle_{[n,\delta]}^d g_{[n,\delta]}(x,\omega_n)\right\|^p_{L^p(\R^d\times\widetilde\Omega)}\right)^{1/p}\\ \lesssim & \sum_{k\in \Z^d} \|\Delta_k [f]_\delta \|_{\ell^p(\Z^d)} \| 1_{k\delta+[0,\delta)^d} (x) \langle x \rangle_{[0,\delta]}^d g_{[0,\delta]}(x,\omega) \|_{L^p(\R^d\times \Omega)}. \end{align*} By assumption $\| 1_{k\delta+[0,\delta)^d} (x)\langle x \rangle_{[0,\delta]}^d g_{[0,\delta]}(x,\omega) \|_{L^p(\R^d\times \Omega)}\lesssim \langle k \rangle^{-2d}\delta^{d/p}$, and hence an application of Lemma \ref{discret} yields that $$ Q_p\lesssim \sum_{k\in \Z^d}\frac{(|k|\delta)^{\varepsilon_p}\delta^{-d/p}\delta^{d/p}}{\langle k \rangle^{2d}}\lesssim\delta^{\varepsilon_p}, $$ which completes the proof. \end{proof} \section{Proof of Theorem \ref{mainthm}}\label{sec:proof} The purpose of this section is to prove Theorem \ref{mainthm}. To abbreviate the notation we allow all implied constants to depend on $m, d, T_1,\ldots,T_{m-1}, \mu^{(1)},\ldots,\mu^{(m)},\phi$. We first settle the claim that $\mu_{m,\delta}$ is uniformly bounded in $L^p(\R^d \times \widetilde\Omega)$, stating this result as a separate lemma: \begin{lemma}\label{uniformLp} Define $\mu_{m,\delta}(\cdot, \omega)$ as in Theorem \ref{mainthm}.
Then $\|\mu_{m,\delta}\|_{L^p(\R^d \times \widetilde\Omega)}$ is uniformly bounded in $\delta >0.$ \end{lemma} \begin{proof} Fix $\omega \in \tilde \Omega$. Since $T_1,\ldots,T_{m-1}$ are bounded on $L^q(\R^d)$ for every $1 < q < \infty$, we see from H\"older's inequality and induction on $m$ that \begin{equation}\label{integralholder} \| \mu_{m,\delta}(\cdot, \omega) \|_{L^p(\R^d)} \lesssim_p \prod_{j=1}^m \| \mu_{\delta}^{(j)}(\cdot, \omega) \|_{L^{mp}(\R^d)}. \end{equation} Namely, if this is true for the value $m-1$, we choose $q>1$ with $\frac{1}{q}+\frac{1}{mp}=\frac1p$ and use the $L^q$-boundedness of $T_{m-1}$ to estimate \begin{align*} \| \mu_{m,\delta}(\cdot, \omega) \|_{L^p(\R^d)} \leq &\|\mu^{(m)}_\delta (\cdot,\omega)\|_{L^{mp}(\R^d)}\|T_{m-1}\mu_{m-1,\delta} (\cdot,\omega)\|_{L^q(\R^d)}\\ \lesssim &\|\mu^{(m)}_\delta (\cdot,\omega)\|_{L^{mp}(\R^d)}\|\mu_{m-1,\delta} (\cdot,\omega)\|_{L^q(\R^d)}\\ \lesssim &\|\mu^{(m)}_\delta (\cdot,\omega)\|_{L^{mp}(\R^d)} \prod_{j=1}^{m-1} \|\mu^{(j)}_\delta (\cdot,\omega)\|_{L^{(m-1)q}(\R^d)}, \end{align*} and as $(m-1)q=mp$, the desired inequality \eqref{integralholder} with index $m$ follows. Finally Fubini's theorem and another application of H\"older's inequality yield \begin{equation}\label{stochintegralholder} \| \mu_{m,\delta} \|_{L^p(\R^d \times \tilde \Omega)} \lesssim_p \prod_{j=1}^m \| \mu_{\delta}^{(j)} \|_{L^{mp}(\R^d \times \tilde \Omega)}. \end{equation} The claim then follows from Lemma \ref{fbound}. \end{proof} \begin{remark}\label{generalLp2} The above bound also holds if the stochastic multiscale functions $\mu_\delta^{(j)}$ are replaced by more general ones described in Remark \ref{generalLp}. \end{remark} Next, by multilinearity, we may assume that each of the stochastic multiscale functions $\mu^{(j)}_\delta$ is either a stochastic negligible function, or is a stochastic multiscale tensor product.
If at least one of the $\mu^{(j)}_\delta$ is stochastically negligible, we see by \eqref{stochintegralholder} that for every $1 < p < \infty$ there exists $\eps > 0$ such that $$ \| \mu_{m,\delta} \|_{L^p(\R^d \times \tilde \Omega)} \lesssim_{p,\eps} \delta^\eps.$$ By applying $p=2$ (say) and setting $\mu_{m,0} \equiv 0$ we easily obtain the claim \eqref{conc}. We may thus assume that for each $1 \leq j \leq m$ we have \begin{equation}\label{muj-delta} \mu^{(j)}_\delta = f_j \otimes_\delta g_j \end{equation} for some envelope functions $f_j$ and some stochastic localized functions $g_j$. We allow all implied constants to depend on $f_j$ and $g_j$. Next, we shall make the qualitative assumption that the envelope functions $f_j$ are compactly supported in $\R^d$. This is purely in order to justify certain interchanges of summation, as now all sums in the multiscale functions are finite for a fixed $\delta>0$. At the very end of the proof we describe how this assumption can be dispensed with by a standard limit argument. For the next reduction, we observe that it suffices to show for each $\phi \in C^\infty_0(\R^d)$ that there exists a limit $z=z_\phi \in \C$ such that $$ \P\left( \left|\int_{\R^d} \mu_{m,\delta} \phi(x)\ dx - z\right| \geq \delta^{\eps} \right) \lesssim_{\eps} \delta^\eps$$ for some $\eps > 0$ independent of $\phi, \delta$. Indeed, the map $\phi \mapsto z_\phi$ is then a continuous (by the uniform boundedness of $\| \mu_{m,\delta}(\cdot, \omega) \|_{L^p(\R^d)}$), linear, and densely defined functional on $L^p(\R^d)$ for every $1 < p < \infty$, and can then be used to reconstruct $\mu_{m,0}$ by duality. By \eqref{muj-delta} and \eqref{smtp-def} we can write $$ \mu^{(j)}_\delta(x,\omega) = \sum_{n_j \in \Z^d} \mu^{(j)}_{\delta,n_j}(x,\omega_{n_j})$$ where \begin{equation}\label{muj-nj} \mu^{(j)}_{\delta,n_j}(x,\omega_{n_j}) \coloneqq [f_{j}]_\delta(n_j) (g_j)_{[n_j,\delta]}(x, \omega_{n_j}).
\end{equation} We can therefore expand out the expression \begin{equation}\label{muph} \int_{\R^d} \mu_{m,\delta}(x,\omega) {\phi}(x)\ dx \end{equation} using \eqref{mum-def} as \begin{equation}\label{muph-2} \sum_{\vec n \in (\Z^d)^m} X_{\delta,\vec n} \end{equation} where $\vec n \coloneqq (n_1,\ldots,n_m)$, and $X_{\delta,\vec n}$ is the complex-valued random variable \begin{equation}\label{Xdef} X_{\delta,\vec n} \coloneqq \int_{\R^d} [\mu^{(m)}_{\delta,n_m}(\cdot,\omega_{n_m}) T_{m-1} \ldots T_1 \mu^{(1)}_{\delta,n_1}(\cdot, \omega_{n_1})](x) {\phi(x)}\ dx. \end{equation} Note that our qualitative hypotheses ensure that for each fixed $\delta$, only finitely many of the $X_{\delta,\vec n}$ are non-zero, and that each of the random variables $X_{\delta,\vec n}$ is bounded. To obtain the concentration result \eqref{conc} we use Chebyshev's inequality. From that inequality we see that it suffices to show a first moment estimate \begin{equation}\label{first-mom} | \sum_{\vec n \in (\Z^d)^m} \E(X_{\delta,\vec n}) - z | \lesssim_\eps \delta^\eps \end{equation} together with a second moment estimate of the form \begin{equation}\label{second-mom} \E \left| \sum_{\vec n \in (\Z^d)^m} X_{\delta,\vec n} - \E(X_{\delta,\vec n}) \right|^2 \lesssim_\eps \delta^\eps \end{equation} for some $\eps > 0$ (independent of $\delta$ and $\phi$). (One can also control higher moments, but the second moment will suffice for our application.) \subsection{The second moment estimate} Let us first settle the second moment estimate \eqref{second-mom}. We can expand the left-hand side as $$ \sum_{\vec n, \vec n' \in (\Z^d)^m} \E( X_{\delta,\vec n} \overline{X_{\delta,\vec n'}} ) - \E( X_{\delta,\vec n} ) \overline{\E(X_{\delta,\vec n'})}.$$ Now observe from \eqref{Xdef} that $X_{\delta,\vec n}$ and $X_{\delta,\vec n'}$ are independent, and hence the corresponding term in the above sum vanishes, unless we have $n_j = n'_{j'}$ for some $1 \leq j, j' \leq m$.
Thus by the triangle inequality and Cauchy-Schwarz we can estimate the previous expression by $$ 2 \sum_{1 \leq j,j' \leq m} \sum_{\vec n, \vec n' \in (\Z^d)^m: n_j = n'_{j'}} \E( |X_{\delta,\vec n}|^2 )^{1/2} \E( |X_{\delta,\vec n'}|^2 )^{1/2}.$$ It therefore suffices to establish an estimate of the form \begin{equation}\label{sumxx} \sum_{\vec n, \vec n' \in (\Z^d)^m: n_j = n'_{j'}} \E( |X_{\delta,\vec n}|^2 )^{1/2} \E( |X_{\delta,\vec n'}|^2 )^{1/2} \lesssim_\eps \delta^\eps \end{equation} for all $1 \leq j,j' \leq m$. Fix $j,j'$. We now pause to give a basic estimate on the size of each of the $X_{\delta,\vec n}$. Define the kernel $K_0$ by setting $$ K_0(n) \coloneqq \frac{1}{\langle n\rangle^{d}}.$$ \begin{proposition}[Size estimate]\label{sizeestimate} If $1 < p < \infty$ and $\vec n \in (\Z^d)^m$, then $$ \E( |X_{\delta,\vec n}|^p )^{1/p} \lesssim_{p} \delta^d \langle \delta |n_m| \rangle^{-2d} \left( \prod_{i=1}^m |[f_{i}]_\delta(n_i)| \right) \prod_{i=1}^{m-1} K_0(n_{i+1}-n_i).$$ \end{proposition} For the proof of the proposition we shall need the following weighted version of the $L^p$ bounds for singular integrals. \begin{lemma}[Localized singular integral bounds]\label{lsib} Let $T$ be a singular integral operator. If $n, n' \in \Z^d$, $\delta > 0$, $1 < p < \infty$, and $N > d$, then we have the bound $$ \| \langle \cdot \rangle_{n',\delta}^{-N} T f \|_{L^p(\R^d)} \lesssim_{T,p,d,N} K_0(n-n') \| \langle \cdot \rangle_{[n,\delta]}^N f \|_{L^p(\R^d)},$$ where $f$ is any function for which the right-hand side is finite. \end{lemma} \begin{proof} By scaling we may set $\delta = 1$. We have $$ \| T f \|_{L^p(B(n',1))} \lesssim_{T,p,d} K_0(n-n') \| f \|_{L^p(B(n,1))}$$ for all $n,n' \in \Z^d$ and all $f \in L^p(B(n,1))$ (extending $f$ by zero outside of this ball). Namely, if $|n-n'| \geq 2$ then the claim follows simply by using the integral representation of $T$ and the triangle inequality (and H\"older's inequality). 
If $|n-n'| < 2$ the claim instead follows by using the boundedness of $T$ on $L^p(\R^d)$. It then follows that \begin{align*} &\| \langle \cdot \rangle_{n',1}^{-N} T f \|_{L^p(\R^d)}\lesssim \sum_{k\in\Z^d}\langle k \rangle^{-N}\|Tf\|_{L^p(B(n'+k,1))}\\ \lesssim& \sum_{k\in\Z^d}\langle k \rangle^{-N}\Big(\sum_{\ell\in\Z^d} \langle n'+k-n-\ell \rangle^{-d}\|f\|_{L^p(B(n+\ell,1))}\Big)\\ \lesssim &\Big(\sum_{k,\ell\in\Z^d}\langle k \rangle^{-N} \langle n'+k-n-\ell \rangle^{-d} \langle \ell \rangle^{-N}\Big) \| \langle \cdot \rangle_{[n,1]}^N f \|_{L^p(\R^d)}. \end{align*} This yields the stated estimate, since the last written sum is easily estimated to be $O\left(\langle n-n'\rangle^{-d}\right)$ by considering separately the case $ \max(|k|,|\ell|)\leq |n-n'|/4$ and its complement. \end{proof} \begin{proof}[Proof of Proposition \ref{sizeestimate}] Pick $p\in(1,\infty)$ and fix $\omega\in\widetilde\Omega$. Denote $$ g(x,\omega)\coloneqq \langle x \rangle_{n_m,\delta}^{3d}[\mu^{(m)}_{\delta,n_m}(\cdot,\omega_{n_m}) T_{m-1} \ldots T_1 \mu^{(1)}_{\delta,n_1}(\cdot, \omega_{n_1})](x) $$ so that we may write \begin{equation}\label{term} |X_{\delta, \vec n}|=\Big|\int_{\R^d} g(x,\omega)\langle x \rangle_{n_m,\delta}^{-3d}\phi(x)dx \Big| \leq \|g(\cdot,\omega)\|_{L^p(\R^d)}\|\langle \cdot \rangle_{n_m,\delta}^{-3d}\phi \|_{L^{p'}(\R^d)}. \end{equation} By an inductive application of Lemma \ref{lsib} and H\"older's inequality as in the proof of \eqref{integralholder}, we see that \begin{equation}\label{mu_prod} \|g(\cdot,\omega)\|_{L^p(\R^d)}\lesssim \prod_{i=1}^{m-1} K_0(n_{i+1}-n_i) \prod_{i=1}^m \| \langle \cdot \rangle_{n_i,\delta}^{6d} \mu^{(i)}_{\delta,n_i}(\cdot,\omega_{n_i}) \|_{L^{pm}(\R^d)}. \end{equation} Since $\phi$ is a Schwartz function one easily verifies that $$ \|\langle \cdot \rangle_{n_m,\delta}^{-3d}\phi \|_{L^{p'}(\R^d)}\lesssim\frac{\delta^{d/p'}}{\langle \delta n_m \rangle^{2d}}.
$$ Moreover, \eqref{muj-nj} and the localized nature of $g_i$ yield that $$ \big(\E \| \langle \cdot \rangle_{n_i,\delta}^{6d} \mu^{(i)}_{\delta,n_i}(\cdot,\omega_{n_i}) \|_{L^{pm}(\R^d)}^{pm} \big)^{1/pm} \lesssim \delta^{d/pm}|[f_{i}]_\delta(n_i)|. $$ By combining these estimates with \eqref{term} the desired estimate follows by H\"older's inequality and the relation $1/p+1/p'=1$. \end{proof} In order to utilize the above proposition we need to introduce discrete fractional integrals. To that end, given any real number $\alpha\in [0,d)$, define the more general kernels $K_\alpha \colon \Z^d \to \R^+$ on the integer lattice by \begin{equation}\label{K-def} K_\alpha(n) \coloneqq \frac{1}{\langle n \rangle^{d-\alpha}}. \end{equation} The convolution of functions defined on the lattice $\Z^d $ is defined in the usual manner: $$ F*G(n) \coloneqq \sum_{k \in \Z^d} F(k) G(n-k).$$ By direct computation we have the convolution estimate \begin{equation}\label{k-conv} K_\alpha * K_\beta \lesssim_{\alpha,\beta,d} K_{\alpha + \beta} \end{equation} whenever $\alpha, \beta >0$ and $\alpha + \beta < d$. These estimates are unfortunately not true at the endpoints $\alpha=0$ or $\beta =0$, due to the logarithmic failure of summability of $K_0$. However, from Young's inequality we easily see that \begin{equation}\label{young} \| K_0 * f \|_{\ell^q(\Z^d)} \lesssim_{d,p,q} \|f\|_{\ell^p(\Z^d)} \end{equation} for all $1 \leq p < q \leq \infty$ and all $f \in \ell^p(\Z^d)$.
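For the reader's convenience, we sketch the elementary computation behind \eqref{k-conv}; this is standard, and we only indicate the relevant decomposition. Writing $$ K_\alpha * K_\beta(n) = \sum_{k\in\Z^d} \langle k\rangle^{\alpha-d}\langle n-k\rangle^{\beta-d}, $$ we split the sum into the regions $|k|\leq |n|/2$, $|n-k|\leq |n|/2$, $|n|/2<|k|\leq 2|n|$, and $|k|>2|n|$. In the first region $\langle n-k\rangle\gtrsim\langle n\rangle$, and since $\sum_{|k|\leq |n|/2}\langle k\rangle^{\alpha-d}\lesssim_\alpha \langle n\rangle^{\alpha}$, the contribution is $\lesssim \langle n\rangle^{\alpha+\beta-d}=K_{\alpha+\beta}(n)$; the second region is symmetric. In the third region $\langle k\rangle\approx\langle n\rangle$ and $\sum_{|n-k|\leq 3|n|}\langle n-k\rangle^{\beta-d}\lesssim_\beta \langle n\rangle^{\beta}$, which again yields a contribution $\lesssim K_{\alpha+\beta}(n)$. Finally, in the last region $\langle n-k\rangle\approx\langle k\rangle$, so the contribution is $\lesssim \sum_{|k|>2|n|}\langle k\rangle^{\alpha+\beta-2d}\lesssim \langle n\rangle^{\alpha+\beta-d}$, where we used $\alpha+\beta<d$.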
Finally, we are ready to estimate the left-hand side of \eqref{sumxx} by the quantity \begin{align} S&\;\coloneqq \;\delta^{2d}\sum_{\vec n, \vec n' \in (\Z^d)^m: n_j = n'_{j'}} \langle \delta n_m \rangle^{-2d} ( \prod_{i=1}^m |[f_{i}]_{\delta}(n_i)| ) \prod_{i=1}^{m-1} K_0(n_{i+1}-n_i)\label{product2}\\ &\quad \times \langle \delta n'_m \rangle^{-2d} ( \prod_{i'=1}^m |[f_{i'}]_{\delta}(n'_{i'})| ) \prod_{i'=1}^{m-1} K_0(n'_{i'+1}-n'_{i'}).\nonumber \end{align} Writing $n_j = n'_{j'} = n$, we can rewrite this expression, using the convolution operator $T_{K_0} f \coloneqq f*K_0$ and denoting $\Phi_\delta(n) \coloneqq \langle \delta n \rangle^{-2d}$, as \begin{align}\label{6product} \sum_{n \in \Z^d} & |[f_{j}]_{\delta}(n)| |[f_{j'}]_{\delta}(n)|H_{1,\delta}(n)H_{2,\delta}(n)G_{1,\delta}(n)G_{2,\delta}(n) \end{align} with\footnote{Below the definitions of $H_{1,\delta}$ and its analogues are to be interpreted as follows: starting from the right, one alternately performs either a pointwise multiplication by a sequence $|[f_{k}]_\delta|$ or an application of the operator $T_{K_0}$.} \begin{align*} &H_{1,\delta}(n)\coloneqq \big(T_{K_0} (|[f_{j+1}]_\delta|) \ldots T_{K_0} (|[f_m]_\delta| \Phi_\delta)\big)(n) \\ &H_{2,\delta}(n)\coloneqq \big(T_{K_0} (|[f_{j'+1}]_\delta|) \ldots T_{K_0} (|[f_m]_\delta| \Phi_\delta)\big)(n) \\ &G_{1,\delta}(n)\coloneqq \big( T_{K_0} (|[f_{j-1}]_\delta|) \ldots T_{K_0} (|[f_1]_\delta|)\big)(n) \\ &G_{2,\delta}(n)\coloneqq \big( T_{K_0} (|[f_{j'-1}]_\delta|) \ldots T_{K_0} (|[f_1]_\delta|)\big)(n).
\end{align*} In order to bound these functions, observe first that for a given $p>1$, any $\widetilde\varepsilon>0$, and an arbitrary sequence $(a(n))_{n\in\Z^d}$ \begin{align}\label{closetoinfinity} \|a[f_i]_\delta\|_{\ell^p(\Z^d)}\leq \|a\|_{\ell^{p}(\Z^d)}\|[f_i]_\delta\|_{\ell^{\infty}(\Z^d)}\lesssim_{f_i,p,\widetilde\varepsilon}\delta^{-\widetilde\varepsilon}\|a\|_{\ell^{p}(\Z^d)}, \end{align} since Lemma \ref{discret} yields that $ \|[f_i]_\delta\|_{\ell^{\infty}(\Z^d)} \leq \|[f_i]_\delta\|_{\ell^{q}(\Z^d)}\lesssim \delta^{-d/q}$ for all $q>1$, and we just take $q$ large enough. Fix $\varepsilon >0$. Using alternately the above estimate (with a very small value of $\widetilde\varepsilon$) and the boundedness of $T_{K_0}\colon \ell^p(\Z^d)\to \ell^q(\Z^d)$ for any $1<p<q<\infty$ we obtain that \begin{equation}\label{H} \|H_{k,\delta}\|_{\ell^{2+\varepsilon}(\Z^d)}\lesssim \delta^{-\varepsilon} \|\Phi_\delta\|_{\ell^2(\Z^d)}\lesssim \delta^{-\varepsilon-d/2},\quad k=1,2. \end{equation} Set $q=q(\varepsilon)=4\varepsilon^{-1}(2+\varepsilon)$ so that $2/q+1/(2+\varepsilon)=1/2,$ and use \eqref{closetoinfinity} to similarly obtain the estimate \begin{equation}\label{G} \|G_{k,\delta}\|_{\ell^q(\Z^d)}\lesssim \delta^{-\varepsilon} \|[f_1]_\delta\|_{\ell^{q-\varepsilon'}(\Z^d)}\lesssim \delta^{-\varepsilon-d/(q-\varepsilon')}\lesssim \delta^{-(d+1)\varepsilon} ,\quad k=1,2, \end{equation} where we just picked $\varepsilon'>0$ small enough.
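For the reader's convenience we record the exponent arithmetic behind this choice of $q$: since $q=4\varepsilon^{-1}(2+\varepsilon)$, $$ \frac{4}{q}=\frac{\varepsilon}{2+\varepsilon}, \qquad\text{so that}\qquad \frac{2}{2+\varepsilon}+\frac{4}{q}=\frac{2+\varepsilon}{2+\varepsilon}=1, $$ which is precisely the six-fold H\"older identity used in the estimate of $S$ below.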
Finally, plugging the above bounds into \eqref{6product}, using $2/(2+\varepsilon)+4/q=1$ and the fact that $\|[f_j]_\delta\|_{\ell^{q}(\Z^d)},\|[f_{j'}]_\delta\|_{\ell^{q}(\Z^d)}\lesssim \delta^{-d\varepsilon}$ we obtain via H\"older's inequality \begin{align*} S\;\leq \;&\delta^{2d}\|H_{1,\delta}\|_{\ell^{2+\varepsilon}(\Z^d)}\|H_{2,\delta}\|_{\ell^{2+\varepsilon}(\Z^d)}\|G_{1,\delta}\|_{\ell^{q}(\Z^d)} \|G_{2,\delta}\|_{\ell^{q}(\Z^d)}\|[f_j]_\delta\|_{\ell^{q}(\Z^d)} \|[f_{j'}]_\delta\|_{\ell^{q}(\Z^d)}\\ \lesssim\;&\delta^{2d}\delta^{-\varepsilon-d/2}\delta^{-\varepsilon-d/2}\delta^{-2(d+1)\varepsilon}\delta^{-2d\varepsilon}\\ \lesssim\;& \delta^{d-O(\varepsilon)}. \end{align*} If $j=1$ (resp. $j'=1$), the term $G_{1,\delta}$ (resp. $G_{2,\delta}$) is not present in \eqref{6product}, and the above argument goes through with obvious modifications. The desired estimate follows as $\varepsilon >0$ is arbitrary. \begin{remark} One way to understand the obtained bound for the second moment is to observe that a computation analogous to the above one could also be used, e.g., to estimate the quantity $\E \| X_{\delta, \vec n, c}\|^2$, which we know to be bounded. However, a direct implementation of the above method would give us a divergent upper bound of the form $ \delta^{-O(\varepsilon)}$, due to the logarithmic unboundedness of the kernel $K_0$ on $\ell^p$-spaces, as we are ignoring nontrivial cancellations that are behind Proposition \ref{sizeestimate}. Roughly speaking, what saves us above is that the condition $n_j=n'_{j'}$, due to independence, reduces the number of terms by a factor $\delta^{d}.$ \end{remark} \subsection{The first moment estimate} It now remains to establish the first moment estimate \eqref{first-mom}, whose proof is more combinatorial in nature. We can split the left-hand side into finitely many components, depending on the equivalence class that $n_1,\ldots,n_m$ generates.
Given any surjective coloring function $c\colon \{1,\ldots,m\} \to \{1,\ldots,k\}$, which assigns a ``color'' in some finite set of integers $\{1,\ldots,k\}$ to every integer in $\{1,\ldots,m\}$, let $(\Z^d)^m_c$ denote the set of all $\vec n \in (\Z^d)^m$ such that $n_j = n_{j'}$ if and only if $c(j) = c(j')$. Clearly we can partition $(\Z^d)^m$ into finitely many of the $(\Z^d)^m_c$. Thus it will suffice to show that for each coloring function $c$ there exists a complex number $z_c$ (independent of $\delta$, but depending on all other parameters) for which we have $$ | \sum_{\vec n \in (\Z^d)^m_c} \E(X_{\delta,\vec n}) - z_c | \lesssim_\eps \delta^\eps.$$ Fix $c$. We can reparameterise this as $$ | \sum_{\vec n \in (\Z^d)^k_{\neq}} \E( X_{\delta,\vec n, c}) - z_c | \lesssim_\eps \delta^\eps, $$ where $(\Z^d)^k_{\neq}$ is the space of all $k$-tuples $(n_1,\ldots,n_k) \in (\Z^d)^k$ with $n_1,\ldots,n_k$ distinct, and $X_{\delta,\vec n, c}\colon \Omega^k \to \C$ is the complex-valued random variable $$ X_{\delta,\vec n,c} \coloneqq \int_{\R^d} [\mu^{(m)}_{\delta,n_{c(m)}}(\cdot,\omega_{n_{c(m)}}) T_{m-1} \ldots T_1 \mu^{(1)}_{\delta,n_{c(1)}}(\cdot, \omega_{n_{c(1)}})](x) {\phi(x)}\ dx. $$ Observe from the inclusion-exclusion principle that the sum $\sum_{\vec n \in (\Z^d)^k \backslash (\Z^d)^k_{\neq}} \E( X_{\delta,\vec n, c} )$ can be expressed as a finite linear combination of expressions of the form $\sum_{\vec n \in (\Z^d)^{k'}} \E_c( X_{\delta,\vec n, c'} )$ where $k' < k$ and $c'\colon \{1,\ldots,m\} \to \{1,\ldots,k'\}$ is a surjective coloring, and $c$ is a refinement of $c'$ (i.e., $c(j_1)=c(j_2)$ implies that $c'(j_1)=c'(j_2)$).
Thus by induction on $k$ it in fact suffices to show that for every pair of colorings $(c,c')$ with $c$ finer than $c'$ there exists a complex number $z'_{c,c'}$ for which we have $$ | \sum_{\vec n \in (\Z^d)^k} \E_c(X_{\delta,\vec n, c'}) - z'_{c,c'} | \lesssim_\eps \delta^\eps, $$ where we used the notation $$ \E_cX_{\delta,\vec n,c'} \coloneqq \int_{\R^d} \E[\mu^{(m)}_{\delta,n_{c'(m)}}(\cdot,\omega_{c(m)}) T_{m-1} \ldots T_1 \mu^{(1)}_{\delta,n_{c'(1)}}(\cdot, \omega_{c(1)})](x) \phi(x)\ dx. $$ Let us now use Fubini's theorem to write $$ \sum_{\vec n \in (\Z^d)^k} \E_c( X_{\delta,\vec n, c'} ) = \int_{\R^d} T_{c,c',\delta}(1)( x ) {\phi(x)}\ dx,$$ where $T_{c,c',\delta}$ is the (deterministic) operator $$ T_{c,c',\delta}h(x) \coloneqq \sum_{\vec n \in (\Z^d)^k} \E [\mu^{(m)}_{\delta,n_{c'(m)}}(\cdot,\omega_{c(m)}) T_{m-1} \ldots T_1 \mu^{(1)}_{\delta,n_{c'(1)}}(\cdot, \omega_{c(1)}) h](x).$$ Let us next verify the uniform boundedness of our `colored' sum: \begin{lemma}\label{uniLP} Assume that $h=h_\delta$ is a deterministic multiscale function. Then \begin{equation}\label{Lph} \|T_{c,c',\delta}h_\delta\|_{L^p(\R^d)}\leq C<\infty\quad \textrm{for all}\;\; \delta >0. \end{equation} \end{lemma} \begin{proof} We double the number of coordinates in our probability space and consider the product (probability) space $\widetilde \Omega\times \widetilde \Omega'$ whose elements we can write as sequences $(\widetilde\omega,\widetilde\omega')=(\omega_n,\omega'_n)_{n\in \Z^d}$, and choose unimodular random variables $Y_{k,j}\colon \widetilde\Omega'\to \{ -1,1\}$ for $k=1,\ldots ,m$ and $j\in\Z^d$ such that $ \E \big(Y_{1,n_1}\cdots Y_{m,n_m}\big) $ is equal to 1 if $(n_1,\ldots ,n_m)$ respects the coloring $c'$ (i.e., $n_\ell=n_{\ell'}$ for those $\ell,\ell'\in\{ 1,\ldots , m\}$ that have the same color with respect to $c'$), and otherwise this expectation is zero.
For example, in the case $m=2$ and one color (i.e., $c'(1)=c'(2)=1$) one may take $Y_{1,j}=Y_{2,j}=\Theta_j,$ where $(\Theta_j)$ is a Bernoulli sequence. In the general case one associates independent copies of such sequences to all the pairs $(k,k')$ that have the same color. More explicitly, one can set $\widetilde\Omega'\coloneqq \{-1,1\}^A$ with the Bernoulli measure, where $A$ is the set of triples $(n,\ell,\ell')$ with $n\in\Z^d$ and $\ell,\ell' \in \{1,\ldots ,m\}$ with $c'(\ell)=c'(\ell')$, and set $$ Y_{r,n}(\widetilde\omega')=\prod_{(n,\ell,\ell')\in A:\; r\in\{\ell,\ell'\}}\widetilde\omega'_{n,\ell,\ell'} $$ for any $\widetilde\omega'=(\widetilde\omega'_{n,\ell,\ell'})_{(n,\ell,\ell')\in A},$ $r\in\{1,\ldots, m\},$ and $n\in \Z^d.$ We may then write \begin{equation}\label{representation} T_{c,c',\delta}h_\delta(x)=\E_{\widetilde\Omega\times \widetilde \Omega'}\Big(\sum_{\vec n \in (\Z^d)^m} [\widetilde \mu^{(m)}_{\delta,n_m}(\cdot,\omega_{c(m)},\widetilde \omega') T_{m-1} \ldots T_1 \widetilde\mu^{(1)}_{\delta,n_{1}}(\cdot, \omega_{c(1)},\widetilde \omega') h](x)\Big), \end{equation} where for $k\in\{ 1,\ldots ,m\}$ and $n\in\Z^d$ we set $$ \widetilde\mu^{(k)}_{\delta,n}(x,\widetilde \omega,\widetilde\omega')\coloneqq \mu^{(k)}_{\delta,n}(x,\widetilde \omega)Y_{k,n}(\widetilde\omega').
$$ In particular, we may write \begin{equation}\label{representation2} T_{c,c',\delta}h=\E_{\widetilde\Omega\times \widetilde \Omega'} H^{(m)}_\delta T_{m-1}\ldots H^{(1)}_\delta h_\delta, \end{equation} with $$ H^{(k)}_\delta(x,\widetilde \omega,\widetilde\omega')= \sum_{n\in \Z^d}[f_k]_\delta (n)(g_k)_{[n,\delta]}(x,\omega_{c(k)})Y_{k,n}(\widetilde\omega'). $$ Recalling Remark \ref{generalLp}, the argument of Lemma \ref{fbound} applies as before, since neither the additional factors $Y_{k,n}$ nor having the variable $\omega_{c(k)}$ instead of $\omega_{n}$ affects our old estimates, whence $$ \|H^{(k)}_\delta\|_{L^p(\R^d\times \widetilde\Omega \times \widetilde\Omega')} \leq C_p \qquad \textrm{for all}\; \delta >0, \;\; p\in (1,\infty). $$ Finally, Lemma \ref{uniformLp} (together with Remark \ref{generalLp2}) and H\"older's inequality yields the desired result. \end{proof} We pause to clarify by an example the role of the colorings introduced above. \begin{remark}\label{re:splitting} In order to illustrate the use of the colorings and the division into the cases `split' and `non-split' (these notions will be introduced shortly in the proof of Proposition \ref{mainprop}), let us consider in the case $m=3$ the expectation $$ S\coloneqq \E \left(\sum_{(n_1,n_2,n_3)\in\Z^3} X_{n_1}TY_{n_2}TZ_{n_3}\right), $$ which is of the type we have to handle. Here the $X_n=X_n(x,U_n)$, $Y_n=Y_n(x,U_n),$ $Z_n=Z_n(x,U_n)$ ($n\in\Z$) are (say bounded) random functions, and the $U_n$ are i.i.d.\ random variables. The linear operator $T$ could be, e.g., a singular integral operator.
In the first step one uses independence and Fubini to write the above sum in the form (the extra subindex $\not=$ indicates that one sums only over triples or tuples consisting of pairwise \emph{unequal} indices) \begin{equation}\label{eq:isojako} \begin{split} S&=\sum_{n_1,n_2,n_3,\not=} (\E X_{n_1}) T(\E Y_{n_2})T(\E Z_{n_3})+ \sum_{n_1,n_2\not=} \big(\E (X_{n_1}TY_{n_1})\big)T\E Z_{n_2}\\ &+ \sum_{n_1,n_2\not=} \E \big( X_{n_1}T(\E Y_{n_2})T Z_{n_1}\big)+ \sum_{n_1,n_2\not=} (\E X_{n_1})T\E (Y_{n_2} T Z_{n_2})+ \sum_{n_1} \E \big(X_{n_1}TY_{n_1}TZ_{n_1}\big) \\ &=: S_1+S_2+S_3+S_4+S_5. \end{split} \end{equation} In the next step one uses the inclusion-exclusion principle to rewrite the sums so that one sums over all indices. For example, we obtain \begin{eqnarray*} S_1&=&\sum_{n_1,n_2,n_3} (\E X_{n_1})T (\E Y_{n_2})T(\E Z_{n_3})- \sum_{n_1,n_2} (\E X_{n_1})T(\E Y_{n_1})T(\E Z_{n_2}) -\sum_{n_1,n_2} (\E X_{n_1})T(\E Y_{n_2})T(\E Z_{n_1})\\ &-&\sum_{n_1,n_2} (\E X_{n_1})T(\E Y_{n_2})T(\E Z_{n_2}) +2\sum_{n_1} (\E X_{n_1})T(\E Y_{n_1})T(\E Z_{n_1})\\ &=: &S_{11}-S_{12}-S_{13} -S_{14}+2S_{15}. \end{eqnarray*} \end{remark} Each of these terms can be expressed via a pair of colorings $(c,c')$; let $c_{\ell k}$ and $c'_{\ell k}$ stand for the colorings of the term $S_{\ell k}.$ At most three colors are needed. We have $c_{1\ell}=(1,2,3)$ for each $\ell\in\{1,\ldots, 5\}$. In turn, $c'_{11}=(1,2,3)$, $c'_{12}=(1,1,2)$, $c'_{13}=(1,2,1)$, $c'_{14}=(1,2,2)$, and $c'_{15}=(1,1,1)$. In a similar vein the term $S_2$ can be rewritten as \begin{eqnarray*} S_2&=&\sum_{n_1,n_2} \big(\E (X_{n_1}TY_{n_1})\big)T\E Z_{n_2}- \sum_{n_1} \big(\E (X_{n_1}TY_{n_1})\big)T\E Z_{n_1}\\ &=:& S_{21}-S_{22}. \end{eqnarray*} Now the colorings are $c_{21}=c_{22}=(1,1,2)$, $c'_{21}=(1,1,2)$ and $c'_{22}=(1,1,1)$.
The terms $S_3$ and $S_4$ are analogous, and finally the term $S_5$ needs no further subdivision and one has $c_5=c'_5=(1,1,1).$ Among the terms $ S_{11},S_{12},S_{13},S_{14},S_{15},S_{21}, S_{22}$ and $S_5$ the terms $S_{11}, S_{12},S_{14}$ and $S_{21}$ will later on be designated as \emph{split}, and the remaining ones as \emph{non-split}. This means the following: for a split term one can concretely divide the defining sum into independent left-hand and right-hand summations, and the expectations also split accordingly. E.g., we may write $$ S_{21}= fTg\qquad \textrm{with}\quad f\coloneqq \sum_{j_1\in\Z} \E (X_{j_1}TY_{j_1}) \quad \textrm{and} \quad g\coloneqq \sum_{j_2\in\Z} \E Z_{j_2}. $$ \bigskip We return to the main course of the argument and note that, in view of Lemma \ref{weak}, it suffices to show that \begin{proposition}[Main proposition]\label{mainprop} If $c'\colon \{1,\ldots,m\} \to \{1,\ldots,k\}$ is surjective, $c$ is a surjective coloring of $\{1,\ldots,m\}$ refining $c'$, and $h = h_\delta$ is a (deterministic) multiscale function, then $T_{c,c'}(h) = T_{c,c',\delta}(h_\delta)$ is also a (deterministic) multiscale function. \end{proposition} The remainder of this section is devoted to the proof of this proposition. We first observe that one proves easily (e.g., compare the proof of Proposition \ref{multmult2}) that if we know the claim (for a given coloring $c'$) in the special case $h_\delta = 1$, then it is true (for the given coloring $c'$) in the general case. Namely, the proof of Proposition \ref{multmult2} applies as such to the product term $H^{(1)}_\delta h_\delta$ in the representation \eqref{representation2}, verifying that it can be replaced by $\widetilde H^{(1)},$ which is of the same form as $H^{(1)}$, and by decoupling the representation we obtain an expression with $1$ in place of $h_\delta.$ We induct on $k$, i.e., the number of colors in $c'$.
If there is only one color in $c'$, then \begin{align*} T_{c,c',\delta}(1)(x) = &\sum_{ n \in \Z^d} \E_c [\mu^{(m)}_{\delta,n}(\cdot,\omega_{c(m)}) T_{m-1} \ldots T_1 \mu^{(1)}_{\delta,n}(\cdot, \omega_{c(1)}) ](x)\\ =&\sum_{n \in \Z^d} \Big(\prod_{j=1}^m[f_j]_\delta(n)\Big) g_{[n,\delta]}(x), \end{align*} where $$ g \coloneqq \E\big[ g_m(\cdot ,\omega_{c(m)})T_{m-1}\ldots T_1 g_1(\cdot,\omega_{c(1)})\big]. $$ Obviously $g$ is a localized function, and hence Lemma \ref{commutator} and Proposition \ref{multmult} verify that $T_{c,c',\delta}(1)$ is a multiscale function. Now we suppose inductively that $k> 1$, and that the claim has already been proven for all smaller values of $k$. We begin by disposing of the \emph{split} case, in which there exists a non-trivial partition $\{1,\ldots,m\} = \{1,\ldots,j\} \cup \{j+1,\ldots,m\}$ with $1 \leq j < m$ such that $c'(\{1,\ldots,j\})$ and $c'(\{j+1,\ldots,m\})$ are disjoint. By relabeling colors if necessary we may assume that $c'(\{1,\ldots,j\}) = \{1,\ldots,k'\}$ for some $1 \leq k' < k$. Then, we let $c_1'\colon \{1,\ldots,j\} \to \{1,\ldots,k'\}$ be the restriction of $c'$ to $\{1,\ldots,j\}$, and $c'_2\colon \{1,\ldots,m-j\} \to \{1,\ldots,k-k'\}$ be the function $c'_2(i) \coloneqq c'(i+j)-k'$. The restrictions $c_1,c_2$ are defined analogously, using the fact that $c$ refines $c'.$ Observe by the definition of $\E_c$ that \begin{align*} &T_{c,c',\delta}(1)(x)\\ = &\sum_{\vec n \in (\Z^d)^{k-k'}} \E_{c_2} [\mu^{(m)}_{\delta,n_{c'_2(m-j)}}(\cdot,\omega_{c_2(m-j)}) T_{m-1} \ldots T_{j+1} \mu^{(j+1)}_{\delta,n_{c'_2(1)}}(\cdot, \omega_{c_2(1)}) T_j T_{c_1,c'_1,\delta}(1)](x). \end{align*} By the induction hypothesis, $T_{c_1,c'_1,\delta}(1)$ is a deterministic multiscale function, and then by Proposition \ref{tf} so is $T_j T_{c_1,c'_1,\delta}(1)$. The claim then follows by another application of the inductive hypothesis.
Finally, we deal with the more difficult \emph{non-split} case in which no non-trivial partition of the above type exists. In other words, we need to show that $$ T_{c,c',\delta}(1)(x) = \sum_{\vec n \in (\Z^d)^k} \E \left(\mu^{(m)}_{\delta,n_{c'(m)}}(\cdot,\omega_{c(m)}) T_{m-1} \ldots T_1 \mu^{(1)}_{\delta,n_{c'(1)}}(\cdot, \omega_{c(1)})\right)(x)$$ is a multiscale function. Using \eqref{muj-nj} and the fact that all the $T_1,\ldots,T_{m-1}$ commute with dilations, we can rewrite $T_{c,c',\delta}1(x)$ as $$ T_{c,c',\delta}1(x) = \sum_{\vec n \in (\Z^d)^k} (\prod_{i=1}^m [f_i]_\delta(n_{c'(i)})) (G_{\vec n})_{[0,\delta]}(x)$$ where $$ G_{\vec n}(x) \coloneqq \E\left(g_m( \cdot - n_{c'(m)}, \omega_{n_{c(m)}} ) T_{m-1} \ldots T_1 g_1(\cdot - n_{c'(1)}, \omega_{n_{c(1)}})\right)(x).$$ Using the translation-invariance of the $T_1,\ldots,T_{m-1}$, we can rewrite this as $$ T_{c,c',\delta}1(x) = \sum_{n \in \Z^d} \sum_{\vec r \in (\Z^d)^k: r_{c(m)} = 0} (\prod_{i=1}^m [f_i]_\delta(n + r_{c(i)})) (G_{\vec r})_{[n,\delta]}(x).$$ To estimate this expression, we observe that exactly as in \eqref{mu_prod} we have for any $\vec r \in (\Z^d)^k$, $N > 0$, and $1 < p < \infty$ the estimate \begin{equation}\label{warp} \| \langle \cdot \rangle^N G_{\vec r} \|_{L^p(\R^d)} \lesssim_{N,p} \prod_{i=1}^{m-1} K_0( r_{c(i+1)} - r_{c(i)} ). \end{equation} We combine this estimate with the non-split nature of $c$ to obtain the following. \begin{lemma}\label{rcm} For any $N > 0$ and $1 < p < \infty$ there exists $\alpha > 0$ such that $$ \| \langle \cdot \rangle^N \sum_{\vec r \in (\Z^d)^k:\ r_{c(m)} = 0,\ R \leq \langle \vec r \rangle < 2R} |G_{\vec r}| \|_{L^p(\R^d)} \lesssim_{p,N,\alpha} R^{-\alpha} $$ for all $R \ge 1$.
\end{lemma} \begin{proof} In view of \eqref{warp} and the triangle inequality, it suffices to show that $$ \sum_{\vec r \in (\Z^d)^k:\ r_{c(m)} = 0,\ R \leq \langle \vec r \rangle < 2R} \prod_{i=1}^{m-1} K_0( r_{c(i+1)} - r_{c(i)} ) \lesssim_\alpha R^{-\alpha}$$ for $\alpha$ sufficiently small. Now recall the kernels $K_\alpha$ defined in \eqref{K-def}. From the triangle inequality (and the surjectivity of $c$) we see that $$ \prod_{i=1}^{m-1} K_0( r_{c(i+1)} - r_{c(i)} ) \lesssim_\alpha R^{\alpha} \prod_{i=1}^{m-1} K_\alpha( r_{c(i+1)} - r_{c(i)} )$$ whenever $R \leq \langle \vec r \rangle$. Thus it will suffice to show that \begin{equation}\label{split_warp} S_\alpha(c)\coloneqq \sum_{\vec r \in (\Z^d)^k: r_{c(m)} = 0} \prod_{i=1}^{m-1} K_\alpha( r_{c(i+1)} - r_{c(i)} ) \lesssim_\alpha 1 \end{equation} for all $0<\alpha\leq \alpha_0(m)$. In order to prove this we need a simple lemma on colorings. To that end we introduce some terminology. Let $c\colon \{ 1,\ldots, m\}\to \{ 1,\ldots, k\}$ be a (surjective) coloring. Fix $k'\in \{ 1,\ldots, k\}$, and denote $\ell=\#c^{-1}(k').$ One defines in an obvious way the coloring $c'\colon \{ 1,\ldots, m-\ell\}\to \{ 1,\ldots, k-1\}$ that is obtained by removing the color $k'$ from $c$. More precisely, if $c$ is thought of as a sequence of length $m$ containing integers from $\{ 1,\ldots, k\}$, the sequence $c'$ is obtained by removing all occurrences of $k'$ from $c$, keeping the order of the remaining elements, and replacing every $j>k'$ by $j-1.$ \begin{lemma}\label{color} Let $c$ be a non-split coloring with at least 3 colors. Then we may remove from $c$ a color (different from $c(m)$) so that the remaining coloring is also non-split. \end{lemma} \begin{proof} We begin by defining the convex support of a color $k'$ as the interval $\{ j,j+1,\ldots ,j'\},$ where $j= \min\{ i\in\{1,\ldots ,m\} : c(i)=k'\}$ and $j'= \max\{ i\in\{1,\ldots ,m\} : c(i)=k'\}$.
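To illustrate these notions with a concrete example (of our own, not needed for the argument): take $m=5$ and $c=(1,2,3,2,1)$. The convex support of the color $2$ is $\{2,3,4\}$, and that of the color $3$ is $\{3\}$. The coloring $c$ is non-split, since every initial segment $\{1,\ldots,j\}$ with $1\leq j<5$ shares the color $1$ with its complement. Removing the color $3$ produces $$ c=(1,2,3,2,1)\ \longrightarrow\ c'=(1,2,2,1), $$ which is again non-split.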
To prove the lemma, note first that in the case $c(1)=c(m)$ we may remove any other color and what remains is non-split. In the case $c(m)\not= c(1)$ we first try to remove the color $c(1)$. If the outcome is non-split we are done. If the outcome is split, it means that there must be a color $k'$ whose convex support is contained in the convex support of $c(1)$; in particular, that color is different from $c(m).$ When the color $k'$ is removed, it is clear that the remaining coloring is non-split. \end{proof} We return to the proof of \eqref{split_warp} and induct on the number of colors in $c$. If there is only one color the statement is obviously true. Assume then that $c$ contains $k$ different colors with $k\geq 2$, and that the statement is true if the number of colors does not exceed $k-1.$ Now, if $k\geq 3$, according to the previous lemma there is a color $k'$ that can be removed from $c$ so that the remaining coloring $c'$ is non-split. If $k=2$ we just pick $k'$ to be the color different from $c(m).$ Then, since $c$ is non-split, we may pick $ 1\leq j<j'\leq m$ so that $j\leq j'-2$ and $c(i)=k'$ for all $i$ with $j<i<j'$, but $c(i)\not= k'$ for $i=j,j'.$ We obtain \begin{align*} \sum_{r_{k'}\in\Z^d}\prod_{i=j}^{j'-1}K_\alpha (r_{c(i+1)}-r_{c(i)}) \;=\; &\sum_{m'\in\Z^d}K_\alpha (m'-r_{c(j)})K_\alpha (r_{c(j')}-m')\\ \lesssim_\alpha\; &K_{2\alpha}( r_{c(j)}-r_{c(j')}). \end{align*} We thus obtain $$ S_\alpha (c)\leq S_{3\alpha}(c')\lesssim 1, $$ and by induction the claim follows if we take (say) $\alpha\leq \alpha_0\coloneqq 3^{-(m+1)}$ initially. \end{proof} Now we can finally show that $T_{c,c',\delta}1$ is a multiscale function. Fix $1 < p < \infty$, let $N > d$ be large, and let $\eps_0 > 0$ be a small number to be chosen later. Let us first consider the ``non-local'' contribution when $\langle \vec r \rangle \geq R \coloneqq \delta^{-\eps_0}$.
From Lemma \ref{discret} (applied with $p$ close to infinity) we see that $$ \|[f_i]_\delta\|_{\ell^\infty(\Z^d)} \lesssim_\eps \delta^{-\eps}$$ for all $\eps > 0$. From Lemma \ref{rcm} and the triangle inequality we thus see that $$ \| \langle \cdot \rangle^N \sum_{\vec r \in (\Z^d)^k:\ r_{c(m)} = 0,\ \langle \vec r \rangle \geq R} (\prod_{i=1}^m [f_i]_\delta(n + r_{c(i)})) G_{\vec r} \|_{L^p(\R^d)} \lesssim_{p,N,\eps} \delta^{-\eps} R^{-\alpha} |[f_m]_\delta(n)|$$ for some $\alpha > 0$, and so $$ \| \langle \cdot \rangle_{[n,\delta]}^N \sum_{\vec r \in (\Z^d)^k:\ r_{c(m)} = 0,\ \langle \vec r \rangle \geq R} (\prod_{i=1}^m [f_i]_\delta(n + r_{c(i)})) (G_{\vec r})_{[n,\delta]} \|_{L^p(\R^d)} \lesssim_{p,N,\eps} \delta^{-\eps} \delta^{d/p} R^{-\alpha} |[f_m]_\delta(n)|$$ for all $n \in \Z^d$. Taking $\ell^p(\Z^d)$ norms of both sides and using H\"older and Lemma \ref{discret} we obtain (if $N$ is large enough) $$ \| \sum_{n \in \Z^d} \sum_{\vec r \in (\Z^d)^k:\ r_{c(m)} = 0,\ \langle \vec r \rangle \geq R} (\prod_{i=1}^m [f_i]_\delta(n + r_{c(i)})) (G_{\vec r})_{[n,\delta]} \|_{L^p(\R^d)} \lesssim_{p,N,\eps} \delta^{-\eps} R^{-\alpha}$$ which is negligible by the choice of $R$ if we let $\eps$ be sufficiently small. Thus we only need to consider the ``local'' contribution when $\langle \vec r\rangle < R$.
We split this local contribution into three pieces: the main term \begin{equation}\label{zero} \sum_{n \in \Z^d} \sum_{\vec r \in (\Z^d)^k: r_{c(m)} = 0; \langle \vec r \rangle < R} [\prod_{i=1}^m f_i]_{\delta}(n) (G_{\vec r})_{[n,\delta]}(x), \end{equation} a first error term \begin{equation}\label{first} \sum_{n \in \Z^d} \sum_{\vec r \in (\Z^d)^k: r_{c(m)} = 0; \langle \vec r \rangle < R} \left( \prod_{i=1}^m [f_i]_\delta(n) - [\prod_{i=1}^m f_i]_{\delta}(n) \right) (G_{\vec r})_{[n,\delta]}(x) \end{equation} and a second error term \begin{equation}\label{second} \sum_{n \in \Z^d} \sum_{\vec r \in (\Z^d)^k: r_{c(m)} = 0; \langle \vec r \rangle < R} [ \prod_{i=1}^m [f_i]_\delta(n+r_{c(i)}) - \prod_{i=1}^m [f_i]_\delta(n) ] (G_{\vec r})_{[n,\delta]}(x). \end{equation} Let us first consider the main term \eqref{zero}. By Lemma \ref{envprod}, $\prod_{i=1}^m f_i$ is an envelope function. From Lemma \ref{rcm} we see that the function $$\sum_{\vec r \in (\Z^d)^k: r_{c(m)} = 0; \langle \vec r \rangle < R} G_{\vec r}$$ is a localized function. By Definition \ref{mtp}, we thus see that \eqref{zero} is a multiscale tensor product of an envelope function and a localized function, and is thus a multiscale function. To conclude the proof of Proposition \ref{mainprop}, and hence Theorem \ref{mainthm}, it suffices to show that the expressions \eqref{first} and \eqref{second} are negligible. For this we shall just use \eqref{warp} rather than the more sophisticated estimate in Lemma \ref{rcm} (in particular, we do not need the non-split hypothesis). Now we turn to \eqref{first}. Let $1 < p < \infty$, and pick any $N > d$. 
Using the triangle inequality, followed by Lemma \ref{loc}, we can estimate the $L^p(\R^d)$ norm of \eqref{first} by $$ \lesssim_{p,N} \sum_{\vec r \in (\Z^d)^k: r_{c(m)} = 0; \langle \vec r \rangle < R} ( \sum_{n \in \Z^d} ( |\prod_{i=1}^m [f_i]_\delta(n) - [\prod_{i=1}^m f_i]_{\delta}(n)| \| \langle \cdot \rangle_{[n,\delta]}^N (G_{\vec r})_{[n,\delta]} \|_{L^p(\R^d)} )^p )^{1/p}.$$ Applying a rescaled version of \eqref{warp}, we can estimate this by $$ \lesssim_{p,N} \delta^{d/p} \sum_{\vec r \in (\Z^d)^k: r_{c(m)} = 0; \langle \vec r \rangle < R} \prod_{i=1}^{m-1} K_0( r_{c(i+1)} - r_{c(i)} ) \| \prod_{i=1}^m [f_i]_\delta(\cdot) - [\prod_{i=1}^m f_i]_{\delta}(\cdot)\|_{\ell^p(\Z^d)}.$$ Observe that on the ball of radius $R$, $K_0$ has an $\ell^1$ norm of $O_\eps(\delta^{-\eps})$ for any $\eps>0$. Thus we can estimate the previous expression by $$ \lesssim_{p,N,\eps} \delta^{d/p-\eps} \| \prod_{i=1}^m [f_i]_\delta(\cdot) - [\prod_{i=1}^m f_i]_{\delta}(\cdot)\|_{\ell^p(\Z^d)}$$ for any $\eps > 0$. Applying Lemma \ref{commutator} and H\"older's inequality repeatedly, we can thus estimate this expression by $$ \lesssim_{p,N,\eps} \delta^{\eps_p - \eps}$$ for some $\eps_p > 0$ depending on $p$. Setting $\eps \coloneqq \eps_p/2$ (say) we see that \eqref{first} is negligible as desired. Finally, we estimate \eqref{second}. Again let $1 < p < \infty$, and pick any $N > d$.
Arguing as before, in particular using the $\ell^1$ norm of $K_0$ on the ball of radius $R$, we can estimate the $L^p(\R^d)$ norm of \eqref{second} by $$ \lesssim_{p,N,\eps} \delta^{d/p-\eps} \| \prod_{i=1}^m [f_i]_\delta(\cdot+r_{c(i)}) - \prod_{i=1}^m [f_i]_\delta(\cdot)\|_{\ell^p(\Z^d)}.$$ Using the crude estimate $$ \left|\prod_{i=1}^m a_i - \prod_{i=1}^m b_i \right|\lesssim \sum_{i=1}^m |a_i - b_i| \prod_{j \neq i} (|a_j| + |b_j|),$$ the triangle inequality, and the already familiar estimate $$ \|[f_i]_\delta(\cdot+r_{c(i)})-[f_i]_\delta(\cdot)\|_{\ell^q(\Z^d)}\lesssim \delta^{-d/q+\varepsilon_q} $$ we get by H\"older that the $L^p(\R^d)$-norm of \eqref{second} has the upper bound $\lesssim_{p,N,\eps} (R\delta)^{\eps_{mp}} \delta^{-\eps}$, where $\eps_{mp} >0$. By the choice of $R$ we see, by choosing $\eps$ sufficiently small, that \eqref{second} is negligible as required. This proves Proposition \ref{mainprop}. The only thing that remains to be done to complete the proof of Theorem \ref{mainthm} is to get rid of the assumption that the envelope functions are compactly supported.
Recall \eqref{muph} and denote in the general case $Z_\delta\coloneqq \int_{\R^d} \mu_{m,\delta}(x,\omega) {\phi}(x)\ dx$, and for $R>0$ set $Z_{\delta,R}\coloneqq \int_{\R^d} \mu_{R,m,\delta}(x,\omega) {\phi}(x)\ dx,$ where $\mu_{R,m,\delta}$ is obtained from $\mu_{m,\delta}$ by replacing each envelope function $f_j$ in its definition by $f_j 1_{B(0,R)}.$ Then for a suitably chosen sequence $R_k\uparrow\infty$ we have $\|Z_{\delta,R_k}- Z_{\delta}\|_{L^2(\widetilde\Omega)}\leq 2^{-k}$ for all $k\geq 1$, according to \eqref{stochintegralholder}, and combined with Chebyshev's inequality and the Borel--Cantelli lemma this easily implies that $Z_{\delta,R_k}\to Z_{\delta}$ almost surely as $k\to\infty.$ We know that there are complex numbers $z_k$ and $c,\varepsilon >0$ so that \begin{equation}\label{zdk} \P (|Z_{\delta,R_k}-z_k|>\delta^\varepsilon)\leq c\delta^\varepsilon, \end{equation} and the argument in the present section verifies that $c$ is independent of $k\geq 1.$ As $\E |Z_{\delta,R_k}|^2$ is uniformly bounded in $\delta$ and $k$, we deduce that the sequence $(z_k)$ is uniformly bounded, and by passing to a subsequence we may assume that $z_k\to z$ as $k\to\infty.$ One obtains the desired inequality simply by letting $k\to\infty$ in \eqref{zdk}. The proof is complete. \section{Quasiconformal homogenization}\label{se:qchomogenization} Our next task is to apply Theorem \ref{mainthm} together with Corollary \ref{co:main} to the homogenization of quasiconformal maps. Here it turns out to be convenient to proceed via the principal solutions, cf.\ Subsection \ref{ss:bird}. This, in turn, requires us to first apply the theorems in the setting of compactly supported envelope functions. Once that is done, the application to general quasiconformal homogenization poses no substantial difficulties. However, for the reader's convenience we present rather complete details.
We refer to e.g., \cite[Section 1]{CG} for a quick account of basic facts about planar quasiconformal maps, and to \cite{AIM} for a comprehensive exposition on the topic. Throughout this section $T$ stands for the Beurling operator \eqref{eq:beurl}. Recall from the introduction that a (quasiconformal) complex dilatation $\mu$ is a complex valued measurable function on the plane whose sup-norm is strictly less than 1, that a 3-point normalized homeomorphism of the extended plane $f\colon \overline{\C}\to\overline{\C}$ fixes the points $0,1$ and $\infty$, and that the measurable Riemann mapping theorem guarantees existence and uniqueness of a 3-point normalized homeomorphic $W^{1,2}_{loc}$-solution to the Beltrami equation $\deeb f=\mu {\partial_z} f$ for any quasiconformal dilatation. In preparation for the proof of Theorem \ref{th:main}(i), we begin with a few simple deterministic lemmas, which are modifications of well-known methods in the theory of planar quasiconformal mappings. Our first lemma shows that weak convergence of each individual term in the Neumann series is enough to guarantee uniform convergence of the corresponding principal solutions and locally uniform convergence of the 3-point normalized solutions. \begin{lemma}\label{lisaco:2.1} Let us assume that for any $j\geq 1$ the dilatation $\mu_j$ satisfies $ \|\mu_j\|_\infty\leq k<1$ and $\supp(\mu_j)\subset B$, where $B\subset\C$ is a ball. Denote the $m$-th term in the Neumann series for $\mu_j$ by $$ \psi_{m,j}\coloneqq \mu_jT\mu_j\ldots T\mu_j, $$ where $\mu_j$ appears $m$ times, $m\geq 1$. Assume also that for every fixed $m$ there is the weak convergence in $L^p(\C)$ $$ \psi_{m,j}\overset{w}{\to} \psi_m \quad \textrm{as}\;\; j\to\infty, $$ for all $1 < p< \infty.$ Then the solution $F_j$ of the Beltrami equation $\deeb F_j=\mu_j{\partial_z} F_j$, normalized by the 3-point condition, converges locally uniformly to a $k$-quasiconformal limit $F_\infty\colon \C\to\C$.
\end{lemma} \begin{proof} Let first $f_j$ be the principal solution that has the representation $$ f_j=z+\sum_{m=1}^\infty C\psi_{m,j}, $$ where $C$ is the Cauchy transform. All the functions $\psi_{m,j}$ are supported in the ball $B$, and by the standard properties of $T$ (see \cite[Section 4.5.1]{AIM}), we have $\|\psi_{m,j}\|_{L^p(B)}\leq c a^m$ for all $j$, where $a=a(p, k)<1$ as soon as we fix $p>2$ close enough to $2$. It is well-known that for $p>2$ the map $C\colon L^p(B)\to C^\alpha(\C)$ is bounded and compact for $\alpha\in (0, 1-2/p)$, see e.g., \cite[Thms 4.3.11 and 4.3.14]{AIM}. Here clearly the homogeneous norm for $C^\alpha$ used in \cite{AIM} can be replaced by the non-homogeneous norm $$ \|f\|_{C^\alpha(\C)}\coloneqq \|f\|_{L^\infty(\C)}+\sup_{z,w}|f(z)-f(w)||z-w|^{-\alpha} $$ by the good decay of the Cauchy transforms of compactly supported functions. Since compact operators map weakly convergent sequences to norm convergent ones, we may thus deduce from the weak convergence of $\psi_{m,j}$ in $L^p(B)$ that for each $m\geq 1$ the term $C\psi_{m,j}$ converges in the $C^\alpha (\C)$-norm to an element $g_m\in C^\alpha(\C)$. Moreover, we have the uniform bounds $\|C\psi_{m,j}\|_{C^\alpha(\C)}\leq Ca^m$ and $ \| g_m\|_{C^\alpha (\C)}\leq Ca^m$ for all $m,j\geq 1$. This clearly yields the uniform convergence of the principal solutions \begin{equation}\label{psl} f_j\to f_\infty=z+\sum_{m=1}^\infty C\psi_{m} \quad\text{as}\quad j\to\infty . \end{equation} The limit $f_\infty$ is $k$-quasiconformal by the normal family property of hydrodynamically normalized $k$-quasiconformal maps with dilatations supported in a fixed ball. Finally, to treat the $3$-point normalized solutions $F_j$, simply observe that we may write them in terms of the principal solution as $$ F_j(z)=(f_j(1)-f_j(0))^{-1}(f_j(z)-f_j(0)). $$ Thus $(F_j)$ converges uniformly to the $k$-quasiconformal map $$ F_\infty (z)\coloneqq (f_\infty(1)-f_\infty(0))^{-1}(f_\infty(z)-f_\infty(0)). 
$$ \end{proof} Our second auxiliary result verifies that normalized $k$-quasiconformal maps whose dilatations agree in a large ball are close to each other near the center of the ball. \begin{lemma}\label{qc_locality} Let $k<1$ and assume that both $f\colon \C\to\C$ and $g\colon \C\to\C$ are $k$-quasiconformal homeomorphisms that satisfy the $3$-point normalization and, moreover, $$ \mu_g=\mu_f \quad\textrm{in}\;\; B(0,L), $$ where $L\geq 1.$ Then for any $R<L$ we have $$ \sup_{|z|\leq R}|g(z)-f(z)|\leq \varepsilon (L,k,R), $$ where $\lim_{L\to\infty}\varepsilon (L,k,R)= 0$ for any fixed $k,R$. \end{lemma} \begin{proof} First of all, quasisymmetry (see \cite[Def. 3.2.1 and Thm 3.5.3]{AIM}) and the normalization of $g$ imply that $g(B(0,R))\subset B(0,r_1)$ and $g(B(0,L))\supset B(0,r_2)$ with $r_{1}=r_1(R,k)$ and $r_2=r_2(L,k)\to\infty$ as $L\to\infty.$ Writing $f=h\circ g$, we see that $h$ is analytic in $B(0,r_2)$ with $h(0)=0$ and $h(1)=1$. Then the function $$ H(z)\coloneqq r_2^{-1}h(r_2z) $$ is analytic and univalent in $B(0,1)$ and satisfies the normalization $H(0)=0$, $H(1/r_2)=1/r_2$. By the Koebe type estimates (\cite[(2.74)]{AIM}) it is clear that $H'(0)\to 1$ as $L\to\infty$. Since the second derivative of $H$ has a universal bound on, say, $B(0,1/2)$ (\cite[Thm. 1.8]{CG}) we deduce that for any given $\varepsilon >0$ we have for large enough $L$ $$ |H(z)-z|\leq \varepsilon |z| \leq \varepsilon r_1/r_2\quad \textrm{for} \quad |z|<r_1/r_2. $$ This implies that $|f(z)-g(z)|<\varepsilon r_1$ for $|z|<R,$ proving the lemma. \end{proof} Next we have a global variant of Lemma \ref{lisaco:2.1}. \begin{lemma}\label{lisaco:2.2} Let the dilatations $\mu_j$ satisfy $|\mu_j|\leq k<1$ for $j=1,2,\ldots$. For any $L>1$ we write $\mu_{j,L}\coloneqq \mu_j 1_{B(0,L)}$ and set $\psi_{m,j,L}\coloneqq \mu_{j,L}T\mu_{j,L}\ldots T\mu_{j,L},$ where $\mu_{j,L}$ appears $m$ times.
Assume that for every $m\geq 1$ and $L>1$ there is the weak convergence $$ \psi_{m,j,L}\overset{w}{\to} \psi_{m,L} \quad \textrm{as}\;\; j\to\infty $$ in $L^p(\C)$ for all $1 < p< \infty.$ Then the $3$-point normalized solution $F_j$ of the Beltrami equation $\deeb F_j=\mu_j{\partial_z} F_j$ converges locally uniformly on $\C$ to a $k$-quasiconformal homeomorphism $F$. \end{lemma} \begin{proof} Fix $R>0$. For any $L=1,2,3,\ldots$ let $F_{j,L}$ be the $3$-point-normalized solution to the Beltrami equation $$ \deeb F_{j,L}=\mu_{j,L}{\partial_z} F_{j,L} . $$ By Lemma \ref{lisaco:2.1}, for every $L\geq 1$ we have uniform convergence $F_{j,L}\to F_{\infty,L}$ as $j\to\infty$, where $F_{\infty,L}$ is a $k$-quasiconformal homeomorphism. Given $\varepsilon >0$, Lemma \ref{qc_locality} shows that we may choose $L_0\coloneqq L_0(k,\varepsilon, R)$ so that $$ |F_{j,L}-F_{j,L'}|\leq \varepsilon \quad \textrm{for}\;\; z\in B(0,R),\qquad \textrm{for}\quad L,L'\geq L_0 . $$ A fortiori, $$ |F_{\infty,L}-F_{\infty,L'}|\leq \varepsilon\quad \textrm{for}\;\; z\in B(0,R),\qquad \textrm{for}\quad L,L'\geq L_0 . $$ We deduce that the sequence $(F_{\infty,L})_{L\geq 1}$ is Cauchy in $C(B(0,R))$, so that $F_{\infty,L}\to F_\infty$ uniformly on $B(0,R)$. Since $R$ was arbitrary, we see that $F_\infty$ is a 3-point normalized $k$-quasiconformal homeomorphism of the plane. It remains to check that $F_j\to F_\infty$ uniformly on $B(0,R)$ for any given $R\geq 1$. To this end, take $L\geq L_0$ and estimate \begin{align*} &\limsup_{j\to\infty}\|F_{j}-F_\infty\|_{C(B(0,R))}\\ \leq& \limsup_{j\to\infty} \| F_{j}-F_{j,L}\|_{C(B(0,R))} + \limsup_{j\to\infty}\| F_{j,L}-F_{\infty,L}\|_{C(B(0,R))} + \| F_{\infty,L}-F_\infty\|_{C(B(0,R))}\\ &\leq \varepsilon +0+\varepsilon \;=\; 2\varepsilon, \end{align*} where we used Lemma \ref{qc_locality} again to estimate the first term. \end{proof} We are ready to establish the first statement in Theorem \ref{th:main}(i).
\begin{proof}[Proof of Theorem \ref{th:main}(i)] Let us first assume that the Beltrami envelope function $\phi$ in the statement of Theorem \ref{th:main}(i) (see Definition \ref{de:ref}) is compactly supported. Observe that in this case $\phi$ is an envelope function in the sense of Section \ref{dmd-sec} (Definition \ref{def:env}), since taking $R$ large enough in Definition \ref{de:ref} we may apply the bound $|\phi|\leq 1$ to obtain, for any $1 < p< \infty$, $$ \|\Delta_h\phi\|_{L^p(\C)}\leq 2^{1-1/p} |\supp(\phi)|^{1/p} \|\Delta_h\phi\|_{L^1(\C)}^{1/p}\leq C'|h|^{\alpha/p}\quad \textrm{for}\;\; |h|\leq 1. $$ Lemma \ref{prod3} shows that $\phi\, B_\delta$ is a stochastic multiscale function. By Corollary \ref{co:main}, for each $m\geq1$ there exists a (deterministic) limit function $\psi_m$ such that, with probability one, $\psi_{m,j}\coloneqq \mu_{2^{-j}}T\mu_{2^{-j}}\ldots T\mu_{2^{-j}}$ converges weakly to $\psi_m$ in $L^p(\C)$ for each $1 < p< \infty$. The statement of part (i) then follows from Lemma \ref{lisaco:2.1}. In the case where the envelope $\phi$ is not compactly supported, we use Lemma \ref{lisaco:2.2} to reduce to the compactly supported case. For this reduction it is enough to note that $\phi 1_{B(0,R)}$ is an envelope function if $\phi$ is a Beltrami envelope function, by essentially the same argument as above -- one uses additionally the observation that a characteristic function of a ball is an envelope function. \end{proof} \begin{lemma}\label{le:locality} Assume that $k\in [0,1)$ and let $(f_j)$ and $(g_j)$ be sequences of locally uniformly convergent $k$-quasiconformal maps in a domain $\Omega\subset\C$ such that the limit functions $f=\lim_{j\to\infty} f_j$ and $g=\lim_{j\to\infty} g_j$ are non-constant. Assume also that $ |\mu_{f_j}-\mu_{g_j}|\leq \varepsilon$ in $\Omega$ for all $j\geq 1.$ Then $$ |\mu_f-\mu_g|\leq \varepsilon\frac{1+k^2}{1-k^2}\quad \textrm{in}\;\; \Omega .
$$ \end{lemma} \begin{proof} Take any ball $B(z_0,R)\subset\Omega$. By considering $f_j(z)-f_j(z_0)$ and $g_j(z)-g_j(z_0)$ instead, we may assume that $g_j(z_0)=f_j(z_0)=0$ for all $j.$ The assumptions together with the quasisymmetry property of the maps imply that if $r>0$ is taken small enough, then $B(0,r)\subset g_j(B(z_0,R))$ for all $j\geq j_0$, and hence the map $f_j\circ g_j^{-1}$ is well-defined in $B(0,r)$ for $j\geq j_0$. We may compute (see \cite[(13.37)]{AIM}) \begin{equation}\label{mu_comp} \mu_{ f_j\circ g_j^{-1}}(w)=\left({ \frac{\mu_{f_j}-{\mu_{g_j}}}{1-\mu_{f_j}\overline{\mu_{g_j}}}\frac{{\partial_z} g_j}{\overline{{\partial_z} g_j}} }\right)\circ g_j^{-1}(w),\quad \textrm{for a.e.}\;\; w\in B(0,r). \end{equation} In particular, since $|1-\mu_{f_j}\overline{\mu_{g_j}}|\geq 1-k^2$, we have $|\mu_{ f_j\circ g_j^{-1}}|\leq \varepsilon(1-k^2)^{-1}$, and letting $j\to\infty$ we infer by the local uniform convergence that $|\mu_{f\circ g^{-1}}|\leq \varepsilon(1-k^2)^{-1}$ in a neighbourhood of $z_0.$ In particular, applying formula \eqref{mu_comp} to $f$ and $g$ we obtain $$ \left|\mu_{f}-\mu_{g}\right|\leq (1+k^2)\left|\frac{\mu_{f}-\mu_{g}}{1-\mu_{f}\overline{\mu_{g}}}\right|\leq (1+k^2)|\mu_{f\circ g^{-1}}|\leq \varepsilon\frac{1+k^2}{1-k^2}. $$ \end{proof} Our next auxiliary result is quite specialized to our situation. Note that the existence of the deterministic homogenization limit $F_\infty$ is guaranteed by part (i) of Theorem \ref{th:main} that we already verified. \begin{lemma}\label{le:constant} Suppose in Theorem \ref{th:main}(i) the Beltrami envelope function $\phi$ is constant on the complex plane. Then the dilatation $\mu$ of the homogenization limit $F_\infty\colon \C\to\C $ is constant on $\C$, and therefore $F_\infty$ is linear: $$ F_\infty(z)=\frac{1}{1+A}z+\frac{A}{1+A}\overline{z}, $$ where the constant $A=\mu_{F_\infty}$ satisfies $|A|<1$.
\end{lemma} \begin{proof} \newcommand{{\mathbf Q}}{{\mathbf Q}} Let $F_j$ be defined via \eqref{eq:Fj}, and let $B_{2^{-j}}$ be the random bump field defined by \eqref{eq:rbf1}. Denote by ${\mathbf Q}^2_d$ the set of dyadic rational points in $\C$, i.e., numbers of the form $(n+mi)2^{-\ell},$ where $m,n$ and $\ell\geq 1$ are integers. Since now $\mu_{F_j}=a B_{2^{-j}}$, where $a$ is a constant with $|a|<1$, we have for any $b\in {\mathbf Q}^2_d$ $$ \mu_{F_j(\cdot+b)}\sim\mu_{F_j(\cdot)}\quad\textrm{for}\quad j\geq j_0(b) $$ where $\sim$ stands for equivalence in distribution. As a consequence of the 3-point normalization we may write for $j\geq j_0(b)$ $$ F_j(z)\sim a_jF_j(z+b) +c_j, $$ where $a_j=(F_j(b+1)-F_j(b))^{-1}$ and $c_j=-a_jF_j(b).$ In the limit $j\to\infty$ we thus obtain $$ F_\infty(z) = \alpha F_\infty(z+b) +\beta $$ with constants $\alpha\not=0$ and $\beta$ that depend only on $b$. This implies that $$ \mu_{F_\infty}(z)= \mu_{F_\infty}(z+b), $$ where the equality is in the sense of $L^\infty$-functions. Therefore $\mu_{F_\infty}$ is periodic on $\C$ with dyadic rational periods, and this easily implies that $\mu_{F_\infty}$ is constant. Finally, for any $A\in\D$ the linear map $z\mapsto \frac{1}{1+A}z+\frac{A}{1+A}\overline{z}$ satisfies the 3-point normalization and has dilatation $A$, whence it is the unique quasiconformal homeomorphism $\C\to\C$ with these properties. \end{proof} We are now ready to prove the second statement of Theorem \ref{th:main}. \begin{proof}[Proof of Theorem \ref{th:main}(ii)] We first define the function $h_{(g,X)}$ with the help of a reference homogenization limit. For any $a\in\D$ let $F_a$ be the unique deterministic limit map of the homogenization problem $$ \deeb F_{a,j}(z)=a B_{2^{-j}}(z){\partial_z} F_{a,j}. $$ By Lemma \ref{le:constant} $F_a$ has constant dilatation in the whole plane; let us denote by $h_{(g,X)}(a)$ its value.
Part (i) of Theorem \ref{th:main} and Lemma \ref{le:locality} yield immediately that the map $a\mapsto h_{(g,X)}(a)$ is continuous. Assume next that the envelope function $\phi$ is continuous in a neighbourhood of $z_0$, with $\phi(z_0)=a$. Then the dilatations of the sequences $F_{a,j}$ and $F_j$ (where $F_j$ is as in the Theorem, see \eqref{eq:Fj}) are $\varepsilon$-close in a small enough neighbourhood $U$ of $z_0$. Thus Lemma \ref{le:locality} shows that the dilatation of the homogenization limit $F_\infty$ differs from $h_{(g,X)}(a)$ by less than $\varepsilon(1+k^2)(1-k^2)^{-1}$ in a small enough neighbourhood $U$, and we deduce the continuity of $\mu_{F_\infty}$ and the equality $ \mu_{F_\infty}(z_0) =h_{(g,X)}(a)=h_{(g,X)}(\phi(z_0)).$ \end{proof} We state one more auxiliary result which actually contains a more general statement than what is needed in the last part of Theorem \ref{th:main}. \begin{lemma}\label{le:identitylimit} Assume that $g$ is invariant under rotation by the angle $\pi/2 $: $$ g(z,t)=g(iz,t)\quad \textrm{for all }\quad z\in\C, t\in\R. $$ Moreover, assume that $X$ is such that the random field $g(\cdot ,X)$ is symmetric, i.e., $g(\cdot ,X) \sim -g(\cdot ,X)$. Then $h_{(g,X)}(a)=0$ for every $a\in\D.$ \end{lemma} \begin{proof} Let $B_{\delta}$ be the random bump field defined by \eqref{eq:rbf1}. The symmetry of $g$ together with the independence of the $X_n$ implies the symmetry of $B_{\delta}$. Fix $a\in\D.$ For $j\geq 1$, let $F_j$ solve the random Beltrami equation \begin{equation}\label{eq:symmetricbeltrami} \deeb F_j=a B_{2^{-j}}(z) {\partial_z} F_j, \end{equation} and denote $\widetilde F_j(z)=(F_j(i))^{-1}F_j(iz)$. One computes that $\mu_{\widetilde F_j}(z)=-\mu_{F_j}(iz)$. The assumptions of the lemma thus verify that $$ \mu_{\widetilde F_j}\sim \mu_{ F_j}, $$ whence in the limit $j\to\infty$ we deduce that $F_\infty (z)= cF_\infty (iz)$ with a constant $c\not=0$.
By Lemma \ref{le:constant} we obtain the identity $$ \frac{1}{1+A}z+\frac{A}{1+A}\overline{z}= c\Big(\frac{1}{1+A}iz-\frac{Ai}{1+A}\overline{z}\Big) \qquad \textrm{ for all}\;\;z\in\C. $$ The above identity is possible only if $c=-i$ and $A=0$. Thus $h_{(g,X)}(a)=A=0$ as was to be shown. \end{proof} \begin{proof}[Proof of Theorem \ref{th:main}(iii)] The statement that for both of the models \eqref{mu-ex2} and \eqref{mu-ex3,5} the deterministic limit map is the identity map follows immediately from Lemma \ref{le:identitylimit} and Theorem \ref{th:main}(ii). Finally, we show that in the generic case, the limit map is not the identity, or equivalently, that the Beltrami coefficient of the limiting map is not zero. To this end, we consider a very simple case of the general model. Fix a bump function $g\in C^\infty_0((0,1)^2)$ with $\|g\|_{\infty}\leq 1$ and consider the sequence of random dilatations $\mu_{j,a}$ that depend on the complex parameter $a\in\D$ $$ \mu_{j,a}(z)=a 1_{[0,1]^2}(z)\sum_{n\in\Z^2}\varepsilon_n g(2^jz-n), $$ where the $\varepsilon_n$ form an i.i.d.\ sequence of random signs $\pm 1.$ Let $f_{j,a}$ be the principal solution of the corresponding Beltrami equation, and denote by $f_a$ the almost sure deterministic limit function $f_a=\lim_{j\to\infty} f_{j,a}.$ Using notation as in Lemma \ref{lisaco:2.1} (with $\mu_j=\mu_{j,a}$) we see from \eqref{psl} that $f_a$ has the (power series) representation $$ f_a(z)=z+\sum_{m=1}^\infty (C \psi_m)(z)=z+\sum_{m=1}^\infty a^m(C\widetilde \psi_m)(z), $$ with $\widetilde \psi_m=\lim_{j\to\infty}\widetilde\psi_{m,j},$ where $\widetilde \psi_{m,j}\coloneqq \mu_{j,1}T\mu_{j,1}\ldots T\mu_{j,1}$ and where the almost sure weak convergence to the (deterministic) limit $\widetilde \psi_m$ in $L^p(\C)$ for each $p>1$ again follows from Corollary \ref{co:main}.
We claim that $f_a$ is non-linear (equivalently, the 3-point normalized limit is not the identity) for all but countably many values of $a\in\D$, unless $\widetilde \psi_m$ is identically $0$ for all $m$: to see this, notice that $f_a(z)-z\to0$ as $z\to\infty,$ so that $f_a$ cannot be linear unless $f_a(z)-z$ is independent of $z$. By interpreting $(C\widetilde \psi_m)(z)$ as the Taylor coefficients in the power series representation of $a\mapsto f_a(z)-z$ above, we see that $f_a$ is non-linear for all but countably many values of $a$ unless $C\widetilde \psi_m(z)$ is independent of $z$ for all $m,$ or equivalently $\widetilde \psi_m \equiv 0$. It thus suffices to give an example with $\widetilde \psi_2\not\equiv 0.$ Let $h\in C^\infty_0(\C)$ be a compactly supported test function that equals 1 on $[0,1]^2$. For $j\geq 1$ set $$ Y_j\coloneqq \int_{\C} h\widetilde\psi_{2,j}=\int_{[0,1]^2} \mu_{j,1} T\mu_{j,1}\qquad \textrm{and}\quad Y\coloneqq \int_{\C}h\widetilde\psi_2 =\int_{[0,1]^2}\widetilde\psi_2. $$ Then almost surely $Y=\lim_{j\to\infty}Y_j$ and $Y$ is a deterministic constant. We note that in this special case, the convergence is not difficult to prove directly without resorting to our general theory. In any case, we claim that the limit is non-zero for a suitable choice of $g$. As the random variables $Y_j$ are uniformly bounded, we actually have $Y=\lim_{j\to\infty}\E Y_j$. Since the supports of $g(2^jz-n)$ are disjoint for different values of $n$, and $\E \varepsilon_n \varepsilon_{n'}=\delta_{n,n'}$, we may compute \begin{equation}\label{eq:nonzero} \E Y_j =\sum_{n\in\Z^2:\; 2^{-j} n\in [0,1)^2}\int_\C g(2^jz-n)Tg(2^jz-n)dz =\int_\C g(z)Tg(z)dz, \end{equation} where in the last step we used the translation and scaling invariance of $T.$ It remains to verify that $g\in C^\infty_0((0,1)^2)$ can be chosen so that the last integral in \eqref{eq:nonzero} is non-zero. The following example can be generalized to all kernels that are not odd.
Fix any $\varphi\in C_0^\infty(\D)$ with $0\leq\varphi\leq 1$ and $\varphi\not\equiv 0.$ If $\int_\C\varphi T\varphi =0,$ then setting $\varphi_A\coloneqq \varphi(\cdot-A)+\varphi(\cdot+A)$ we have $\int_\C\varphi_AT\varphi_A \sim 2\times \frac{-1}{\pi}(\int \varphi)^2 (2A)^{-2} \not=0$ as $A\to\infty$: the two diagonal terms vanish by the translation invariance of $T$, while each of the two cross terms is asymptotically $\frac{-1}{\pi}(\int \varphi)^2 (2A)^{-2}$, as one sees from the kernel $-\frac{1}{\pi}(z-w)^{-2}$ of $T$ and the fact that the two bumps lie at distance $2A$ from each other. By scaling and translating the support may be taken to be in $(0,1)^2$, and the choice $g=\varphi_A$ for large enough $A$ completes the proof of Theorem \ref{th:main}. \end{proof} \smallskip We next sketch an alternative statement of the solution to the homogenization problem, replacing `almost sure convergence' by `convergence in probability'. Then there is no need to restrict to subsequences of $\delta\to 0.$ In order to rephrase Theorem \ref{th:main} in this manner, consider the principal solution $f_\delta$ of the homogenization problem \begin{equation}\label{eq:problem} \deeb F_\delta=\phi B_{\delta} {\partial_z} F_\delta. \end{equation} In the case where the envelope function $\phi$ is compactly supported, we know that the terms $\psi_{m,\delta}$ in the corresponding Neumann series are all supported in a ball $B(0,R)$, where $R$ is independent of $\delta.$ Each term in the series converges weakly in probability in $L^p(B(0,R))$ as $\delta\to 0,$ i.e., for any $h\in L^{p'}$ there is the convergence in probability $$ \int_{\C} h\psi_{m,\delta}\to \int_{\C} h \psi_{m}. $$ Moreover, the Neumann series converges in $L^p(\C)$, with an exponentially decaying remainder term, uniformly with respect to $\delta>0$. All this easily implies a norm convergence in $C^\alpha$ (compare the proof of Lemma \ref{lisaco:2.1}), i.e. $$ \P \big( \| f_\delta -f\|_{C^\alpha(\C)}> t \big)< t $$ for all $t >0$ as soon as $\delta <\delta_0( t ).$ In particular, $f_\delta\to f$ locally uniformly in probability. Finally, we may argue exactly as in the proof of Theorem \ref{th:main} and dispense with the assumption that the envelope has compact support.
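Let us also spell out the simple quantitative estimate behind the $C^\alpha$-convergence just described: with $a=a(p,k)<1$ and $C$ as in the proof of Lemma \ref{lisaco:2.1}, one has for every $M\geq 1$ $$ \| f_\delta -f\|_{C^\alpha(\C)}\leq \sum_{m=1}^{M}\|C\psi_{m,\delta}-C\psi_m\|_{C^\alpha(\C)}+\frac{2Ca^{M+1}}{1-a}, $$ so that one first fixes $M$ to make the geometric tail small, and then lets $\delta\to 0$, using that each of the finitely many remaining terms tends to zero in probability by the compactness of the Cauchy transform.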
Let us record our conclusion as a theorem: \begin{theorem}\label{th:inprobability} Let $\mu_\delta$ be as in Theorem \ref{th:main} and denote by $F_\delta$ the $3$-point normalized solution to the Beltrami equation \eqref{eq:problem}. Then, $F_\delta\to F_\infty$ locally uniformly in probability as $\delta\to 0,$ where $F_\infty$ is the deterministic limit map given by Theorem \ref{th:main}. In other words, for any $R>0$ and $\varepsilon>0$ one has for $\delta<\delta_0(\varepsilon,R)$ that $$ \P\big( \|F_\delta-F_\infty\|_{L^\infty (B(0,R))}>\varepsilon\big)<\varepsilon. $$ \end{theorem} As our final application to quasiconformal homogenization we consider some random mappings of finite distortion, i.e., homeomorphisms for which the assumption $\| \mu \|_{\infty} \leq k < 1$ is relaxed. This leads to the study of solutions to the Beltrami equation $\partial_\zbar f = \mu \partial_z f$ where we only have $| \mu(z)| < 1$ almost everywhere. From the general theory of quasiconformal mappings and mappings of finite distortion one knows that in order to have a viable theory one needs some control on the size of the set where $| \mu(z)|$ is close to $1$. For basic properties of planar maps of finite distortion we refer to \cite[Chapter 20]{AIM} or \cite{AGRS}. There is a well-established theory for mappings of G. David type, i.e., maps whose distortion function $$ K(z)\coloneqq \frac{1+|\mu (z)|}{1-|\mu (z)|} $$ is exponentially integrable, namely $\exp(aK(z))\in L^1_{loc}$ for some $a>0.$ With this theory in mind, a natural model for degenerate random Beltrami coefficients is \begin{equation}\label{degeq} \mu_j(z)\coloneqq \sum_{n\in\Z^2:\; 2^{-j}n\in [0,1)^2}\varepsilon_{j,n}g(2^jz-n), \end{equation} where $\|g\|_{L^\infty(\C)}=1,$ one has $\supp(g)\subset[0,1]^2$, and for each $j\geq 1$ we assume that $\varepsilon_{j,n}$ ($n\in\Z^2$) are complex valued i.i.d. random variables taking values in $\D$. Their common distribution is assumed to be independent of $j$.
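Note in passing that, since $\|g\|_{L^\infty(\C)}=1$ and the bumps $g(2^j\cdot-n)$, $n\in\Z^2$, have disjoint supports, the distortion of $\mu_j$ is controlled bump by bump: on the support of $g(2^j\cdot-n)$ we have $|\mu_j(z)|\leq |\varepsilon_{j,n}|$, and since $t\mapsto (1+t)/(1-t)$ is increasing on $[0,1)$, $$ \frac{1+|\mu_j(z)|}{1-|\mu_j(z)|}\;\leq\; \frac{1+|\varepsilon_{j,n}|}{1-|\varepsilon_{j,n}|} \qquad \textrm{there}. $$ It is this pointwise bound that transfers tail estimates for the variables $\varepsilon_{j,n}$ into integrability properties of the distortion function below.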
In this situation we have the following result: \begin{theorem}\label{degeco:2.1} Assume the uniform tail estimate \begin{equation} \label{taildecay} \P\left( \frac{1+|\varepsilon_{j,n}|}{1-|\varepsilon_{j,n}|} > t \right) \leq e^{- \gamma\, t} \end{equation} for some $ \gamma >2.$ Define the {\rm(}possibly degenerate{\rm )} Beltrami coefficients $\mu_j$ as in \eqref{degeq}. Then the 3-point normalized solutions $F_j$ of the Beltrami equation $\deeb F_j=\mu_j {\partial_z} F_j$ converge almost surely locally uniformly to a deterministic limit homeomorphism $F:\C\to\C$. \end{theorem} \begin{proof} We start the proof with a couple of auxiliary observations. First of all, we again use that convergence of the 3-point normalizations is equivalent to convergence of the hydrodynamically normalized ones. Thus, we again consider the principal solution \begin{equation} \label{neumann2} f_j(z)\coloneqq z + C \big(\sum_{m=1}^\infty \psi_{m,j}\big), \end{equation} of the Beltrami equation, where as before $\psi_{m,j}= \mu_j T\mu_j\ldots T\mu_j$ with $\mu_j$ occurring $m$ times. This series is well-defined since almost surely each $\mu_j$ satisfies $$ \|\mu_j\|_{L^\infty (\C)}\leq \max\{|\varepsilon_{j,n}|:n\in\Z^2,\; 2^{-j}n\in [0,1)^2\} < 1. $$ By Corollary \ref{co:main}, almost surely each of the terms $ \psi_{m, j}$ converges weakly to a limit $\psi_m$ in $L^p$ for every $1 < p<\infty$, and $C( \psi_{m, j})(z)$ converges locally uniformly on $\C$. Therefore we expect that the limit map can be written again as \begin{equation}\label{neumann3} f_\infty= z + C \big(\sum_{m=1}^\infty \psi_{m}\big), \end{equation} and in proving the convergence one only needs to control the tail of this series. Our main tool will be the following statement: \begin{equation} \label{sarja} \lim_{M\to \infty }\sup_{j\geq 1} \; \sum_{m=M}^\infty \| \psi_{m,j} \|_{L^2(\complex)} \; = 0 \quad\quad \mbox{almost surely}\, . 
\end{equation} The proof of \eqref{sarja} is based on the following basic estimate \cite[Theorem 3.1]{AGRS} (see also \cite{D}) with $R=2$ on the decay of the $L^2$-norm of the terms in the Neumann series. \begin{lemma}\label{le:decay} Assume that the dilatation $\mu$ is compactly supported, $\supp(\mu)\subset B(0,R).$ If for some $p>0$ we have \begin{equation}\label{eq:expint} A\coloneqq \int_{B(0,R)}e^{pK(z)} dz <\infty, \end{equation} where $K\coloneqq \frac{1+|\mu |}{1-|\mu |},$ then for any $q\in (0,p/2)$ the $m$-th term in the Neumann series satisfies the bound \begin{equation}\label{eq:m_term_bound} \|\psi_m\|_{L^2(\C)}\leq C_{R,q,A}m^{-q}. \end{equation} \end{lemma} Denote the distortion function of $f_j$ by $K_j(z)\coloneqq \frac{1+|\mu_j(z) |}{1-|\mu_j(z) |},$ where $\mu_j$ is as in \eqref{degeq}. In view of the above lemma, \eqref{sarja} follows as soon as we verify that there is $p>2$ such that \begin{equation}\label{eq:unif_exp_bound} \sup_{j\geq 1}\int_{[0,1]^2}e^{pK_j(z)} dz <\infty \qquad \textrm{almost surely}\, . \end{equation} To this end, choose $q\in (1,2)$ and $p>2$ so that $pq<\gamma,$ where $\gamma>2$ is from condition \eqref{taildecay}. Denote by $Y$ a random variable with the distribution $$ Y\sim \exp\left(p\Big(\frac{1+|\varepsilon|} {1-|\varepsilon|}\Big)\right)-M\qquad \textrm{with}\quad M\coloneqq \E \exp\left( p\Big(\frac{1+|\varepsilon|}{1-|\varepsilon|}\Big)\right) , $$ where $\varepsilon$ has the same distribution as all of the variables $\varepsilon_{j,n}$. The expectation $M$ above is finite according to our assumption \eqref{taildecay}; in fact $\E |Y|^q<\infty$, since \eqref{taildecay} yields $\P\big(\exp\big(p\tfrac{1+|\varepsilon|}{1-|\varepsilon|}\big)>s\big)\leq s^{-\gamma/p}$ for $s\geq 1$, and $\gamma/p>q$. The very definition of $\mu_j$ yields that $$ \int_{[0,1]^2}e^{pK_j(z)}dz\leq M+Z_j, $$ with $$ Z_j\sim 2^{-2j}\sum_{\ell=1}^{2^{2j}} Y_{j,\ell} $$ where for each $j\geq 1$ the random variables $Y_{j,\ell}$ are identically distributed copies of $Y.$ In order to estimate the tail of $Z_j$, we recall the von Bahr and Esseen estimate \cite{BE} that states for centered i.i.d. 
random variables $X_1,\ldots , X_N$ the inequality $$ \E\left| X_1+\ldots +X_N\right|^q\leq C_q \sum_{s=1}^N\E\left| X_s \right|^q,\qquad 1\leq q\leq 2. $$ We obtain \begin{align*} \P(Z_j>1)\leq \E |Z_j|^q\leq 2^{-2jq}C_q2^{2j}\E |Y|^q=O(2^{-2(q-1)j}), \end{align*} and the Borel-Cantelli lemma yields that almost surely eventually $Z_j\leq 1.$ This proves \eqref{eq:unif_exp_bound}, and we have finished the verification of \eqref{sarja}. We will prove Theorem \ref{degeco:2.1} using the Arzela-Ascoli theorem. To this end we need uniform modulus of continuity estimates for both sequences $(f_j)$ and $(f_j^{-1}).$ Here note first that \eqref{sarja} implies the uniform bounds (with a random constant $C$) \begin{equation} \|\deeb f_j\|_{L^2(\C)} = \|{\partial_z} f_j-1\|_{L^2(\C)}\leq C, \quad \textrm{for all}\;\; j\geq 1. \end{equation} Since the support of each $\mu_j$ is contained in $2\D$, this estimate together with the properties of the Cauchy transform shows that, outside $3\D,$ the functions $f_j$ are uniformly equicontinuous and $f_j(z)-z$ is uniformly bounded. Thus uniform equicontinuity in all of $\C$ follows from the following useful result (see \cite{GoldsteinVodopyanov},\cite[Theorem 20.1.6]{AIM}). \begin{lemma}[Gehring, Goldstein and Vodopyanov]\label{GV} Assume that $f\in W^{1,2} (4\D)$ is a homeomorphism. Then, if $z_1,z_2\in 2\D$ one has $$ |f(z_1)-f(z_2)|^2\;\leq \;\; \frac {9\pi\int_{4\D} |\nabla f|^2}{\log (e+1/|z_1-z_2|)}. $$ \end{lemma} Next, the equicontinuity of the inverse maps is dealt with by another lemma (whose proof actually reduces the situation to Lemma \ref{GV}, see \cite{ISv},\cite[Lemma 20.2.3]{AIM}). \begin{lemma}[Iwaniec and Sverak] Assume that $f$ is a (homeomorphic) principal solution of the Beltrami equation with distortion function $K$, and with $\mu$ supported in $B(0,R')$. 
Then, for $z_1,z_2$ in the disc $B(0,R),$ the inverse map $g\coloneqq f^{-1}$ satisfies $$ |g(z_1)-g(z_2)|^2\;\leq \;\; \frac {C(R,R')}{\log (e+1/|z_1-z_2|)}\int_{B(0,R')} K(z) dz. $$ \end{lemma} \noindent The original version assumes that $\mu$ is supported in $\D$, but the more general statement follows again by scaling. Now \eqref{eq:unif_exp_bound} entails that in our case $\int_{B(0,R)} K_j(z) dz$ is uniformly bounded, and we obtain a (locally) uniform modulus of continuity for the inverse maps $f^{-1}_j$. Now Theorem \ref{degeco:2.1} follows quickly. Almost surely, we have local uniform equicontinuity for both sequences $(f_j)$ and $(f^{-1}_j)$, uniform boundedness of $f_j(z)$ at every point $z$ outside $3\D,$ and thus locally uniform subsequential convergence to a homeomorphism by the Arzela-Ascoli theorem. Moreover, as before in the proof of Lemma \ref{lisaco:2.1}, almost surely each term in the series \eqref{neumann2} converges locally uniformly. Also, \eqref{sarja} implies that the $VMO$-norm of the remainder in \eqref{neumann2} converges uniformly to zero (see \cite[Theorem 4.3.9]{AIM}). Put together, we deduce the convergence in $VMO(3\D)$ of the whole sequence $f_j$. Since the $f_j$ are analytic outside $2\D,$ this implies the uniqueness of the subsequential limit in $\C$ and establishes almost sure locally uniform convergence $f_j\to f_\infty$, where the limit $f_\infty$ is a self-homeomorphism of the plane given by \eqref{neumann3}. \end{proof} Let us finally observe that the above proof actually yields the following more general results, stated both for the deterministic and random homogenization problem. 
\begin{theorem}\label{th:last_deterministic} Let $\mu = \mu_\delta$ be a compactly supported deterministic multiscale function such that for every $0 < \delta < 1$, we have $|\mu_\delta(x)| < 1$ for almost all $x$, and furthermore the distortion function $K_{\mu_\delta}(x) \coloneqq \frac{1 + |\mu_\delta(x)|}{1 - |\mu_\delta(x)|}$ is such that $\int_{\C} e^{pK_{\mu_\delta}}$ is bounded uniformly in $\delta$ for some $p>2.$ Then the associated normalized solutions $F_\delta$ with dilatation $\mu_\delta$ converge locally uniformly to a homeomorphism $F_\infty \colon \C\to\C$ as $\delta \to 0$. \end{theorem} \begin{theorem}\label{th:last_random} Let $\mu = \mu_\delta$ be a stochastic multiscale function such that for $\delta>0$ we have almost surely $|\mu_{\delta}(x)| < 1$ for almost all $x$, and furthermore for some $p>2$ almost surely $\sup_{j\geq1}\int_{\C} e^{pK_{\mu_{2^{-j}}}}<\infty $. Then the associated normalized solutions $F_{\mu_{2^{-j}}}$ are almost surely locally uniformly convergent as $j\to \infty$. \end{theorem}
\section{Introduction} Announcements have recently been made by the four detector collaborations at RHIC \cite{RHIC-QGP} that a new state of matter, distinct from ordinary hadronic matter has been created in ultrarelativistic heavy-ion collisions (URHIC). This state of mattter has shown a number of unexpected properties already. Recently, measurements of two-particle correlations involving one hard trigger particle have shown a surprising splitting of the away side peak for all centralities but peripheral collisions, qualitatively very different from a broadened away side peak observed in p-p or d-Au collisions \cite{PHENIX-2pc}. Interpretations in terms of energy lost to propagating colourless \cite{Shuryak,Stoecker} and coloured \cite{Wake} sound modes have been suggested for this phenomenon. A comparison with data using a realistic trigger simulation has been performed in \cite{Mach}. As an alternative mechanism to generate such large angle correlations, Cherenkov radiation from the away side parton has been suggested \cite{Cherenkov}. In hydrodynamical models for Mach shocks investigated so far \cite{Shuryak,Chaudhouri}, the simplifying assumption has been made that instead of considering the full three dimensional propagation of the Mach cone through the medium, a boost-invariant 'Mach-wedge' is simulated as an approximation of the situation close to midrapidity. Likewise, in scenarios where large angles arise directly from induced in-medium radiation \cite{Cherenkov,Vitev}, the angular structure is depicted for a single parton. In contrast, in experiment only the rapidity $y_{trig}$ of the trigger hadron is constrained to be inside the acceptance experimentally. This does however not determine the rapidity $y$ of the away side parton. For this, the conditional probability $P(y)$ given the rapidity and momentum of the parton leading to the trigger hadron can be obtained from pQCD. 
The measured correlation then results from an integral over all possible $y$ weighted with $P(y)$. However, most models of jet energy loss would (without considering further interaction of modes excited in the medium) result in a correlation signal that exhibits angular symmetry with respect to the away side parton's axis. This means that for an away side parton at midrapidity, there is not only a correlation signal at large angle and zero rapidity but also a signal at small angle and large rapidity. Thus, the averaging over $P(y)$ will tend to smear the signal measured at midrapidity out towards smaller angles as compared to a simple midrapidity projection. It follows that the signal before averaging must be at even larger angles than a naive interpretation of the experimental data would suggest. In the following, we illustrate this point in greater detail. First, we derive the expression for $P(y)$ for the dominant reaction channels at typical trigger energies from LO pQCD. Then we show how the rapidity structure of the correlation signal arises from the angular information, both for scenarios in which the correlation propagates with the flowing medium and for those in which it does not. We demonstrate within the full trigger simulation described in \cite{Mach} that a Mach shockwave is recovered sufficiently undistorted in the detector acceptance. Since no detailed comparison of a radiative large angle scenario with the data is available, we demonstrate that in such a description the large angle correlation relative to a single away-side jet must lie at even larger angles than observed in experiment.
\section{Rapidity distribution of away-side jets} \label{Py} The differential production cross-section of hard partons in A-A collisions can be obtained in leading order pQCD by folding the two-particle production cross-sections $d\hat{\sigma}/d\hat{t}$ with the nuclear parton distribution functions $f_{i,j/A}$ (here we use \cite{NPDF}): \begin{eqnarray} \label{cross-y} \frac{d^3\sigma^{AA \rightarrow k+l+X}}{dp_T^2dy_1dy_2}=\sum_{i,j} x_1 f_{i/A}(x_1,Q^2)x_2 f_{j/A}(x_2,Q^2) \frac{d\hat{\sigma}^{ij\rightarrow kl}}{d\hat{t}}\,\,. \end{eqnarray} If the outgoing partons are at rapidities $y_1$ and $y_2$, $x_1$ and $x_2$ are determined by: \begin{equation} x_1 = \frac{p_T}{\sqrt{s}} \left[\exp(y_1) + \exp(y_2)\right] \quad \text{and}\quad x_2 = \frac{p_T}{\sqrt{s}} \left[\exp(-y_1) + \exp(-y_2) \right] \end{equation} \begin{figure}[htb] \epsfig{file=P_y_f.eps, width=8cm} \caption{\label{F-pQCD}Conditional probability $P(y)$ to find the away side parton at rapidity $y$ if the trigger parton is found at $y_{trig}=0$, calculated in LO pQCD for the dominant production channels in the low momentum regime, for different values of $p_T$.} \end{figure} The conditional probability distribution $P(y)$ of producing an away-side parton at rapidity $y$ can then be calculated from the normalized cross-section (Eq.~(\ref{cross-y})) given the trigger parton at $y_1=0$. We show the normalized contributions of the two dominant channels $gg \rightarrow gg$ and $gq \rightarrow gq$ to this conditional probability in Fig.~\ref{F-pQCD} for different $p_T$. There is a significant production probability of away-side partonic jets in the rapidity range $\pm 2$ for lower $p_T$, which becomes somewhat narrower as $p_T$ increases (note that the gluonic contribution dominates over the quark contributions in this momentum regime).
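For illustration, the kinematic relation above is straightforward to evaluate numerically. The following Python sketch (an illustrative aid only; the function name and the parameter values are our own choices, not part of the pQCD calculation) computes the momentum fractions $x_1$, $x_2$ entering Eq.~(\ref{cross-y}):

```python
import math

def momentum_fractions(pT, y1, y2, sqrt_s):
    """Momentum fractions of the incoming partons in LO 2->2 kinematics,
    for outgoing partons at rapidities y1, y2 with transverse momentum pT."""
    x1 = pT / sqrt_s * (math.exp(y1) + math.exp(y2))
    x2 = pT / sqrt_s * (math.exp(-y1) + math.exp(-y2))
    return x1, x2

# Trigger parton at midrapidity (y1 = 0) at RHIC energy sqrt(s) = 200 GeV:
x1, x2 = momentum_fractions(pT=4.0, y1=0.0, y2=1.0, sqrt_s=200.0)
```

Moving the away-side parton to larger $|y_2|$ pushes one of the momentum fractions up, which is what suppresses large-rapidity production through the falling parton distributions.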
\section{Rapidity structure of the correlations} The STAR and PHENIX experiments at RHIC measure the correlation signal averaged over rapidity intervals of $\pm 1$ and $\pm 0.35$, respectively. The measured signal must thus be understood as a superposition of individual two-jet production events in which the trigger condition was fulfilled. The trigger condition largely determines the weighted jet-production vertex distribution from which the near-side and away-side partons emerge. Since the trigger must be inside the acceptance (i.e. close to midrapidity), the away-side partons are distributed in rapidity according to the conditional probability distribution $P(y)$ derived in section \ref{Py}. We use a radiative energy loss formalism \cite{QuenchingWeights} to determine how much energy is deposited in the medium locally while the near-side and away-side partons traverse it. Characteristic $dE/d\tau$ distributions emerging from the folding of the medium evolution with the jet-energy loss calculation are discussed in \cite{Mach}. Standard radiative energy loss calculations do not lead to angular structures in the away-side jet's secondaries that could account for the observed large angle correlation, see e.g. \cite{Vitev}. Several explanations have been brought forward for such large angle correlations. It has been argued that they could emerge via the excitation of colorless \cite{Shuryak,Stoecker} or colorful \cite{Wake} sound modes by the supersonically traveling away-side jet (excitation of 'Mach cones') or by the emission of Cerenkov-like gluon radiation \cite{Cherenkov} by the superluminally traveling jet in the nuclear medium. Colorful sound modes and Cerenkov-like gluon radiation could only contribute in a QGP phase and are only possible if a space-like longitudinal or transverse dispersion relation is realized.
This was pointed out first in \cite{Wake}, and it has been shown in \cite{Cherenkov2} that such modes could emerge in a plasma if bound states are present. A space-like gluon dispersion relation does not emerge in HTL resummed calculations of the longitudinal and transverse plasma modes, which have time-like dispersion relations. First, we focus on the excitation of Mach cones. We only give a brief overview, since the details of the calculation employed here have already been discussed in \cite{Mach}. In addition, the rapidity structure is governed by rather general arguments for which details of the excitation are not relevant. We assume that a fraction $f$ of the locally lost energy of the away-side jet is transferred to a collective colorless mode with a linear dispersion relation $E=c_s p$, where $c_s$ is the speed of sound, determined from the lattice EoS via $c_s^2=\partial p/\partial\epsilon$. The evolution of the shock wave is tracked in the medium, and the Mach angle is determined via the averaged speed of sound during the evolution until freeze-out time as $\bar{c}_s=\int_{\tau_E}^{\tau} d\tau c_s(\tau)/(\tau-\tau_E)$. Finally, the additional boost to hadrons due to the Mach shock wave is determined at freeze-out. At momentum scales of $1$ GeV, well above typical temperature scales in the medium, considerable contributions to the correlation signal are only expected where transverse flow and the Mach shock are aligned \cite{Mach2}. Freeze-out is then calculated using the Cooper-Frye formula. It has been pointed out in \cite{Stoecker2} that since the shock wave travels with $c_s$ in the local rest frame, the spatial position of the shock front has to be determined by solving the characteristic equation: \begin{eqnarray} \left.\frac{dz}{dt}\right|_{z=z(t)}=\left.\frac{u(z,R,t)+c_s(T(z,R,t))}{1+u(z,R,t)c_s(T(z,R,t))}\right|_{z=z(t)}. \end{eqnarray} In \cite{Stoecker2} it has been argued that this could destroy a Mach cone signal.
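The characteristic equation above can be integrated numerically once the flow and speed-of-sound profiles are specified. The sketch below is a minimal Python illustration with caller-supplied toy profiles for $u$ and $c_s$ (the constant profiles and the step size are placeholder assumptions, not the hydrodynamical evolution used in \cite{Mach}):

```python
def shock_front_position(z0, t0, t_end, u, cs, dt=1e-3):
    """Euler integration of dz/dt = (u + cs) / (1 + u*cs),
    the relativistic addition of flow velocity u and sound speed cs.
    u(z, t) and cs(z, t) are caller-supplied profiles."""
    z, t = z0, t0
    while t < t_end:
        uv, cv = u(z, t), cs(z, t)
        z += dt * (uv + cv) / (1.0 + uv * cv)
        t += dt
    return z

# In a static medium (u = 0) with constant cs the front moves with c_s;
# a co-moving flow pushes it further out, but never past the speed of light.
z_static = shock_front_position(0.0, 0.0, 1.0, u=lambda z, t: 0.0, cs=lambda z, t: 0.57)
z_flow = shock_front_position(0.0, 0.0, 1.0, u=lambda z, t: 0.5, cs=lambda z, t: 0.57)
```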
This statement seems to be based on a misinterpretation of the measurement: the observed correlation signal is not a Mach cone in position space but the resulting boost of hadrons in momentum space. Thus, the position of the Mach cone at freeze-out is relevant only insofar as the longitudinal flow at $z_{\rm final}$ determines a longitudinal boost in momentum space. This means that a Mach cone in $\phi,y$-space is elongated significantly in the $y$ direction by longitudinal flow. For instance, for a Bjorken evolution this amounts to an elongation by a factor of about $1.5$ in rapidity. We refer to the effect described in this paragraph in the following as the 'effect of longitudinal flow' on the correlation signal, where 'longitudinal' refers to the medium expansion parallel to the beam direction and not along the direction of the partonic jets. In order to compare with experiment, one has to fold the elongated Mach cone structure with the away-side jet distribution $P(y)$ and average the emerging correlation signal over the detector's rapidity acceptance. This is very different from scenarios in which no hydrodynamical mode is excited (e.g. gluon radiation). Here, there is no reason to assume that the excited mode moves with a given velocity relative to the medium. If the emitted mode is not carried along by the flowing medium, the angular structure should translate directly into the observable signal. This holds true unless re-interaction with the medium is considered, which would then need to be taken into account consistently also for the angular distribution. We emphasize that the initial angular distribution of Cerenkov-like gluon radiation in a static medium faces these problems.
The situation could be different if one takes into account how the initial emission structure of Cerenkov-like gluons might be altered during the expansion of the medium, where a changing index of refraction that depends on the medium evolution and on the relative direction between jet and flow could introduce significant changes. We point out that since a theoretical analysis of how this influences the predicted angular correlation signal in a Cerenkov-like gluon radiation scenario has not yet been performed, it is not clear that this would eventually predict a considerable elongation of the cone in the $y$ direction. \section{Results} We calculate the shockwave excitation in the dynamical evolution model framework as outlined above and compare with the $1 <p_T< 2.5$ GeV two-particle correlation signal for central collisions given a $2.5 <p_T< 4$ GeV trigger, averaged over the PHENIX detector's rapidity window $|\eta|<0.35$. Since the fraction $f$ of the jet's lost energy that is transferred to the collective sound mode cannot yet be calculated from first principles, we treat $f$ as a parameter \cite{Mach}. \vspace{0.7cm} \begin{figure}[htb] \epsfig{file=Rapidity_contributions_f.eps, width=8cm} \caption{\label{F-MachStd}Calculated 2-particle correlation on the away side for $|y|< 0.35$ and $1.0 < p_T < 2.5$ GeV. Also indicated are the partial contributions originating from away side partons going into different rapidity intervals given a trigger parton at midrapidity. } \end{figure} In Fig.~\ref{F-MachStd} we show a comparison of our calculation with the PHENIX two-particle correlation data \cite{PHENIX-2pc} on the away side for $f=0.75$\footnote{This value of $f$ differs somewhat from the one previously determined in \cite{Mach}, where somewhat more simplistic assumptions were made about the rapidity structure of the source.}. Note that zero degrees is chosen such that it is opposite to the trigger, i.e. at the expected average position of the away side parton.
We also show the relative contribution to this signal from Mach cones excited by away-side partons from different rapidity intervals. Contributions emerging from Mach cones from away-side jets produced at $|y|>2$ are suppressed, since only part of the cone contributes in the detector's rapidity window $|y|<0.35$. The maximum of their $\phi$ distribution is shifted to lower angles $\phi\ll\phi_{\rm max}$, where $\phi_{\rm max}$ is the maximum of the calculated correlation signal for all $y$. Contributions emerging from Mach cones from away-side jets produced at $0.5<|y|<2$ contribute significantly at angles $\phi \sim \phi_{\rm max}$. The contribution at low angles $\phi<40$ degrees is dominated by contributions from the bow shock (i.e. the $(1-f)$ contribution to the deposited energy) emerging from away-side jets at $|y|<0.5$. Contributions of away side jets at $|y|<0.5$ are also important for the correlation signal around $\phi \sim 0$. This bow shock contribution falls almost completely out of the acceptance of the detector for away-side jets with $|y|>0.5$, as it is always very close to the rapidity of the away side parton. \vspace{0.5cm} \begin{figure}[htb] \epsfig{file=Elongation_f.eps, width=8cm} \caption{\label{F-Elongation}Calculated 2-particle correlation under the assumption that a) the away side parton is always at midrapidity and the excited mode does not couple to flow (green), b) the excited mode does not couple to flow (red), and c) including the realistic $P(y)$ and the longitudinal flow effect. } \end{figure} Fig.~\ref{F-Elongation} illustrates what correlation signal would be expected if all away-side jets were confined to mid-rapidity ($P(y)=\delta(y)$) and no flow were present, in comparison to the case where the rapidity distribution of the away-side jet $P(y)$ as calculated in section (\ref{Py}) is appropriately taken into account.
The calculation is performed for the two-particle correlation signal as it is measured in the PHENIX detector's acceptance region $|\eta|<0.35$. The contribution for $\phi<40$ degrees from the bow shock is much smaller for the wider $P(y)$ distribution of the away-side jets than for the narrow one, since the bow shock contribution from jets with $|y|>0.5$ falls almost completely out of the acceptance. The large angle contribution is shifted to lower angles in comparison to the $P(y)=\delta(y)$ case, since the Mach shock cones of away-side jets with large $|y|$ contribute at lower $\phi<\phi_{\rm max, \delta(y)}$ in the detector acceptance interval $|\eta|<0.35$. Here $\phi_{\rm max, \delta(y)}$ is the maximum of the correlation signal at large angle if all away-side jets were confined to mid-rapidity. Fig.~\ref{F-Elongation} also illustrates that the shift of the maximum of the correlation signal to lower angles $\phi<\phi_{\rm max, \delta(y)}$ would be more pronounced if the elongation of the Mach cone in momentum space due to longitudinal flow had not been taken into account. In addition, the width of the two-particle correlation would increase. Such a correlation signal would no longer be in agreement with the PHENIX data. This argument can also be turned around: if no elongation were present, the maximum of the emission angle from a single away-side jet at fixed $y$ would have to be larger than $\phi_{\rm max}$ and the width considerably smaller than in the measured correlation signal. This is a strong constraint for gluon radiation scenarios if the angular emission pattern of the radiated gluons is assumed to translate directly into a two-particle hadronic correlation signal. \section{Conclusions} In this paper we have investigated the rapidity structure of two-particle correlation signals involving one hard trigger particle.
We have shown that the rapidity structure of the correlation signal arises for two different reasons: the away-side jets have a specific distribution in rapidity $P(y)$ that can be determined by pQCD, since the near-side jet is almost centered at midrapidity\footnote{ We mention that we checked that even if the near-side partonic jet is slightly (e.g. $\pm$ 0.3 units in rapidity) off mid-rapidity, the shape of the correlation signal (black line in Fig.2) remains essentially unaltered and no additional spread is introduced.}, and the Mach shock fronts induced by those away-side jets in the medium are elongated along the rapidity axis by a longitudinal boost in momentum space due to longitudinal flow. We have demonstrated that Mach cones as excited modes of the nuclear medium explain the correlation signal on the away side in good agreement with the PHENIX data if we employ the dynamical evolution framework developed in \cite{Mach}. The most significant contribution to the correlation signal at small angles away from the away-side jet's axis stems from a bow-shock contribution emerging from away-side jets centered around midrapidity ($|y|<0.5$), whereas the large angle correlation is induced mainly by away-side jets from a wider range of rapidities ($|y|<2$). We also discussed how the rapidity structure would appear if the medium did not lead to an elongation of the Mach cone along the rapidity axis due to the longitudinal boost in momentum space. The correlation signal would in this case be considerably widened, and its maximum would be shifted to significantly smaller angles. This is the situation that would have to be realized in scenarios in which no hydrodynamical mode is excited (e.g. gluon radiation). Therefore, scenarios in which the signal does not couple strongly to the longitudinal flow generally face new challenges.
Our findings hence indicate that medium effects on the signal structure along the beam axis are essential in order to explain the observed correlation. \section*{Acknowledgments} We'd like to thank K.~Eskola, U.~Heinz, B.~M\"uller, V.~Ruuskanen, and I.~Vitev for useful exchanges. This work was financially supported by the Academy of Finland, Project 206024 and by the Department of Energy, DOE grant DE-FG02-96ER40945. J. R. acknowledges support as a Feodor Lynen fellow of the Alexander v. Humboldt foundation and by the Natural Sciences and Engineering Research Council of Canada.
\section{Introduction} \IEEEPARstart{E}{lectromagnetic} field models that do not account for radiation effects are dubbed quasistatic field models. For static fields, the Maxwell equations decouple, which allows resistive, capacitive, and inductive effects to be considered separately with electrostatic or magnetostatic formulations. For capacitive-resistive effects, the electro-quasistatic (EQS) field model is applicable, while resistive-inductive effects can be modelled with the magneto-quasistatic (MQS) field approximation \cite{bHausMelcher:01s}. Quasistatic field scenarios where inductive, resistive, and capacitive effects need to be considered simultaneously appear in high-frequency coils and in coils of inductive charging systems, where the capacitive effects between the coil windings need to be taken into account; they are also a common problem in electromagnetic compatibility, e.g. in automotive engineering. For such scenarios, it is quite common to use lumped $R$, $L$, $C$ parameter circuit-type models such as Kirchhoff's model, circuit models combined with field models used either for parameter extraction or in strongly/weakly coupled schemes, and circuit-formulation oriented partial-element equivalent circuit (PEEC) methods. Rather recently, field-oriented models based on the Darwin formulation have also been considered. These quasistatic electromagnetic field models are represented in terms of combined electric scalar and magnetic vector potentials and feature a modified version of Amp\`{e}re's law in which the rotational parts of the displacement currents are eliminated, i.e., the radiation effects are neglected in the model.
Darwin formulations \cite{Darwin1920:01s} are not gauge-invariant, and thus, a number of different Darwin model formulations have been considered, \cite{RaviartSonnendruecker1995:01s2}, \cite{Larsson2007:01s}, \cite{KochWeiland2011:01s}, \cite{KochSchneiderWeiland2012:01s}, \cite{inpGarcia2018Chapter1:01s}, \cite{ZhaoTang2019:01s}, \cite{inpBadicsetal2018:01s}, \cite{inpClemensKaehneSchoeps2019:01s}, and \cite{inpKaimori2020:01s}. The paper is organized as follows. After this introduction, Darwin field models with different established gauge conditions are highlighted. In the third section, a Darwin model is presented which allows the use of a two-step numerical solution scheme. Section \ref{sec:numerical} is comprised of numerical experiments with the two-step time domain Darwin formulation, and is followed by conclusions. \section{The Darwin Field Model}\label{sec:darwin} Darwin or Darwin-type field models for quasistatic electromagnetic field distributions can be obtained by considering a decomposition of the electric field intensity $\bm{E}$ into an irrotational part $\bm{E}_{\mathrm{irr}}$ and a remaining part $\bm{E}_{\mathrm{rem}}$, \begin{equation} \bm{E} = \bm{E}_{\mathrm{irr}} + \bm{E}_{\mathrm{rem}}, \end{equation} where the irrotational part is represented as the gradient of an electric scalar potential $\varphi$, that is, $\bm{E}_{\mathrm{irr}} = - \grad \varphi$. The remaining part is represented by the time derivative of a magnetic vector potential $\bm{A}$, i.e., $\bm{E}_{\mathrm{rem}} = - \partial\bm{A}/\partial t$. Hence, \begin{equation} \bm{E} = - \frac{\partial}{\partial t} \bm{A} -\grad \varphi\text{,}\qquad \bm{B} = \curl \bm{A} \label{eq:E_and_B} \end{equation} holds for the electric field intensity and for the magnetic flux density, respectively.
The assumption of a quasistatic electromagnetic field model enables the elimination of the rotational parts of the displacement currents, $\varepsilon \partial^2 \bm{A}/\partial t^2 \cong \bm{0}$, in Amp\`{e}re's law. The result of this elimination is the so-called Darwin-Amp\`{e}re equation \begin{equation} \curl (\nu \curl \bm{A}) + \kappa \frac{\partial}{\partial t} \bm{A} + \kappa \grad \varphi + \varepsilon \grad \frac{\partial}{\partial t} \varphi = \bm{J}_\mathrm{S}, \label{eqn_Darwin-Ampere} \end{equation} where $\nu$ is the reluctivity, $\kappa$ is the electric conductivity, $\varepsilon$ is the permittivity, and $\bm{J}_\mathrm{S}$ is a source current density. The original formulation of the Darwin model \cite{Darwin1920:01s} can be obtained by enforcing a Coulomb gauge $\divergence \left(\bm{A}\right)=0$ corresponding to a Helmholtz decomposition of the electric field intensity $\bm{E}$, i.e., assuming $\curl \bm{E}_{\mathrm{irr}}=0$ and $\divergence \bm{E}_{\mathrm{rem}}= - \divergence \left(\partial\bm{A} / \partial t \right)=0$. The original model is intended to describe charges in free space without conductive materials, i.e., $\kappa=0$, $\varepsilon = \varepsilon_0$ and $\nu=\nu_0$. As a consequence of the Helmholtz decomposition, the Gau{\ss} law does not involve the rotational parts of the electric field and yields the electrostatic Poisson equation as a gauge equation. Thus, the original Darwin formulation is given by \begin{align} -\nu_0 \Delta \bm{A} + \varepsilon \grad \frac{\partial}{\partial t} \varphi &= \bm{J}_\mathrm{S},\\ \divergence \grad \varphi &= -\rho_{\mathrm{E}} / \varepsilon_0, \end{align} which requires knowledge of the electric charge density $\rho_{\mathrm{E}}$ and of its motion with $\bm{J}_\mathrm{S} = \rho_\mathrm{E} \bm{v}$ along some velocity vector $\bm{v}$.
To eliminate the free-space assumption of the original Darwin model and to include conductors as well as permeable and dielectric materials, the divergence operator is applied to the Darwin-Amp\`{e}re equation \eqref{eqn_Darwin-Ampere}, which results in a modified Darwin field formulation \cite{KochWeiland2011:01s}, \cite{KochSchneiderWeiland2012:01s} and yields the Darwin continuity equation \begin{equation} \divergence \left(\kappa \frac{\partial}{\partial t} \bm{A} + \kappa \grad \varphi + \varepsilon \grad \frac{\partial}{\partial t} \varphi\right) = \divergence \bm{J}_\mathrm{S}. \label{eqn_Darwin-continuity} \end{equation} This Darwin continuity equation lacks the radiation term $\divergence (\varepsilon \partial^2 \bm{A}/\partial t^2)$, which is present in the full Maxwell continuity equation. It was shown in \cite{KochWeiland2011:01s} that the combined discrete formulation of the Darwin-Amp\`{e}re equation \eqref{eqn_Darwin-Ampere} and the Darwin continuity equation \eqref{eqn_Darwin-continuity} results in non-symmetric systems. In addition, the resulting system is singular and requires an additional gauge for the magnetic vector potential in the non-conductive regions of the problem, such as an artificial conductivity \cite{KochWeiland2011:01s} or an additional Coulomb-type gauge \cite{ZhaoTang2019:01s}, \cite{inpKaimori2020:01s} \begin{equation} \divergence \left(\varepsilon \frac{\partial}{\partial t} \bm{A}\right)=0, \label{eqn_Coulomb-type_gauge} \end{equation} which can be enforced by adding this term with a scaling factor $1/\Delta t$ to the Darwin continuity equation \eqref{eqn_Darwin-continuity}, such that the gauge equation becomes a temporally semi-discrete version of the full Maxwell continuity equation, expressed in terms of the electrodynamic potentials $\bm{A}$ and $\varphi$. The Coulomb-type gauge \eqref{eqn_Coulomb-type_gauge} can additionally be imposed as a third equation via a Lagrange multiplier formulation \cite{ZhaoTang2019:01s}.
Both Darwin field formulations, \cite{ZhaoTang2019:01s} and \cite{inpKaimori2020:01s}, are symmetric and do not require additional regularization. \section{Two-Step Darwin Model Algorithms}\label{sec:twostep} The Darwin continuity equation \eqref{eqn_Darwin-continuity} is extended with an additional gauge term $\divergence \left(\kappa \partial\bm{A}/\partial t \right)=0$ \cite{inpClemensKaehneSchoeps2019:01s} to yield the electro-quasistatic equation \begin{equation} \divergence \left( \kappa \grad \varphi + \varepsilon \grad \frac{\partial}{\partial t} \varphi\right) = \divergence \bm{J}_\mathrm{S}. \label{eqn:eqs-continuity} \end{equation} The expression $\divergence \left(\kappa\partial\bm{A}/\partial t\right)$ omitted in \eqref{eqn:eqs-continuity} from the Darwin continuity equation \eqref{eqn_Darwin-continuity} corresponds to explicitly enforcing divergence-free eddy currents in conductive media, i.e., to neglecting any sources and sinks of current densities arising from the irrotational parts of the electric field. The combination of equation \eqref{eqn_Darwin-Ampere} rewritten as \begin{equation} \curl (\nu \curl \bm{A}) + \kappa \frac{\partial}{\partial t} \bm{A} = -\kappa \grad \varphi - \varepsilon \grad \frac{\partial}{\partial t} \varphi + \bm{J}_\mathrm{S}, \label{eqn_Darwin-Ampere_2} \end{equation} with equation \eqref{eqn:eqs-continuity} results in a two-step formulation, where the electro-quasistatic total current density $\bm{J}_\mathrm{total}= - \kappa \grad \varphi - \varepsilon \grad \left( \partial \varphi/ \partial t\right) + \bm{J}_\mathrm{S}$, with $\divergence \bm{J}_\mathrm{total} = 0$, is first used as a solenoidal source term for a magneto-quasistatic formulation for the magnetic vector potential $\bm{A}$, represented by the left-hand side of \eqref{eqn_Darwin-Ampere_2}. This modified magneto-quasistatic formulation, however, initially does not address irrotational parts of $\bm{A}$ in the non-conductive regions.
While this does not affect the evaluation of $\bm{B}$ in \eqref{eq:E_and_B}, the evaluation of the electric field according to \eqref{eq:E_and_B} involves the expression $\partial\bm{A}/\partial t$ also in the non-conductive regions, which is commonly not covered in magneto-quasistatic field formulations. To control the irrotational parts of $\bm{A}$, the magneto-quasistatic formulation needs to be regularized. For this, the introduction of a small artificial electrical conductivity $\hat{\kappa}$ in the non-conducting regions has been suggested \cite{KochWeiland2011:01s}. If $\kappa \gg 1/(\Delta t) \varepsilon$ holds for a given time-step length $\Delta t$, a modified electrical conductivity such as $\hat{\kappa}=\kappa +1/(\Delta t) \varepsilon$ will regularize the formulation, where the resulting time-discrete formulations will feature expressions of the type $1/(\Delta t)\kappa +1/(\Delta t)^2 \varepsilon$, as they occur in second-order time discretization schemes such as the Newmark-beta schemes used for full-wave Maxwell-Amp\`{e}re equations \cite{DibbenMetaxas97:01s}. Alternatively, a grad-div term augmentation for spatially discretized magneto-quasistatic formulations is applicable \cite{Bossavit2001:01s}, \cite{ClemensSchoepsDeGersemBartel2011:01s}. The introduction of a small artificial electrical conductivity $\hat{\kappa}$ in the non-conducting regions is also an established technique to mitigate the static limit instability of the electro-quasistatic formulation \eqref{eqn:eqs-continuity} that is known to occur for $\frac{\partial}{\partial t} \varphi \rightarrow 0$. By assuming that $\divergence \left(\kappa\partial\bm{A}/\partial t\right)=0$ holds in equation \eqref{eqn_Darwin-continuity}, the calculation of the electric scalar potential $\varphi$, using \eqref{eqn:eqs-continuity}, is decoupled from that of the magnetic vector potential $\bm{A}$.
Thus, it is possible to first solve independently an electro-quasistatic initial-boundary value problem that corresponds to \eqref{eqn:eqs-continuity}, and in a second step solve the modified magneto-quasistatic problem \eqref{eqn_Darwin-Ampere_2} with the then available total current densities $\bm{J}_\mathrm{total}$, as depicted in Algorithm \ref{alg1_2-Step_Darwin_TD}. \begin{algorithm}[ht] \caption{Two-Step Darwin Time Domain} \label{alg1_2-Step_Darwin_TD} \begin{algorithmic}[1] \State Initialize $\varphi(t^0)$ and $\bm{A}(t^0)$; \For{$n\gets 0:n_{\mbox{\scriptsize End}}-1$} \State Solve problem \eqref{eqn:eqs-continuity} for $\varphi(t^{n+1})$; \EndFor \For{$n\gets 0:n_{\mbox{\scriptsize End}}-1$} \State Solve problem \eqref{eqn_Darwin-Ampere_2} for $\bm{A}(t^{n+1})$; \State Evaluate $\bm{E}(t^{n+1})$ and $\bm{B}(t^{n+1})$ with \eqref{eq:E_and_B}; \EndFor \end{algorithmic} \end{algorithm} Alternatively, it is possible to consecutively execute a solution step for an electro-quasistatic and a magneto-quasistatic field formulation in each discrete time step, using suitable time stepping schemes \cite{Clemens2005:01s}.
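The control flow of the two-step algorithm can be sketched in a few lines of Python; the solver callables below are placeholders standing in for the discretized electro-quasistatic and magneto-quasistatic solution steps (the names and the toy stand-ins are ours, for illustration only):

```python
def two_step_darwin(phi0, A0, n_end, solve_eqs, solve_mqs, fields):
    """Two-step Darwin time domain sketch: first sweep the EQS problem
    over all time steps, then the MQS problem with the stored potentials."""
    phi = [phi0]
    for n in range(n_end):                 # step 1: EQS sweep
        phi.append(solve_eqs(phi[n]))
    A, EB = [A0], []
    for n in range(n_end):                 # step 2: MQS sweep with J_total
        A.append(solve_mqs(A[n], phi[n], phi[n + 1]))
        EB.append(fields(A[n], A[n + 1], phi[n + 1]))
    return phi, A, EB

# Toy stand-ins just to exercise the control flow:
phi, A, EB = two_step_darwin(
    phi0=0, A0=0, n_end=3,
    solve_eqs=lambda p: p + 1,
    solve_mqs=lambda a, p_old, p_new: a + (p_new - p_old),
    fields=lambda a_old, a_new, p: (a_new - a_old, p),
)
```

Note that the full history of $\varphi$ is stored before the magneto-quasistatic sweep starts, mirroring the decoupling of the two loops.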
\subsection{Discrete Two-Step Darwin Time Domain Schemes} Reformulating (\ref{eqn_Darwin-continuity}) and (\ref{eqn_Darwin-Ampere_2}) with a spatial volume discretization scheme, such as the finite integration technique \cite{Weiland1996:01s} or the finite element method with N{\'e}d{\'e}lec elements \cite{Nedelec80:01s}, results in a coupled system of time continuous matrix equations \begin{equation} \Gr^{\trans} \fMkap \Gr \bm{\upphi} + \Gr^{\trans} \fMeps \Gr \frac{\mathrm d}{\mathrm dt} \bm{\upphi} = \Gr^{\trans} \fitvec{j}_{\mathrm{s}} - \Gr^{\trans} \fMkap \frac{\mathrm d}{\mathrm dt} \fitvec{a}, \label{FIT_Darwin_Continuity} \end{equation} \begin{equation} \C^{\trans} \fMnu \C \fitvec{a} + \fMkap \frac{\mathrm d}{\mathrm dt} \fitvec{a} = \fitvec{j}_{\mathrm{s}} - \fMkap \Gr \bm{\upphi} - \fMeps \Gr \frac{\mathrm d}{\mathrm dt} \bm{\upphi}, \label{FIT_Darwin-Ampere_2} \end{equation} where $\fitvec{a}$ is the degrees of freedom (dof) vector related to the magnetic vector potential, $\bm{\upphi}$ is the dof vector of electric nodal scalar potentials, $\fitvec{j}_{\mathrm{s}}$ is a vector of transient source currents, $\C$ is the discrete curl operator matrix, $\Gr$ and $\Gr^{\trans}$ are discrete gradient and (negative) divergence operator matrices. The matrices $\fMnu$, $\fMkap$, $\fMeps$ are discrete material matrices of possibly nonlinear reluctivities, conductivities and permittivities, respectively, corresponding to the specific discretization scheme in use. Employing e.g. 
an implicit Euler backward differentiation time stepping scheme with time step $\Delta t$ to \eqref{FIT_Darwin_Continuity} and \eqref{FIT_Darwin-Ampere_2} and $\fM_{\sigma}=\fMkap + (1/\Delta t)\fMeps$ yields a coupled system \begin{align} \left[ \Gr^{\trans} \fM_{\sigma} \Gr \right] \bm{\upphi}^{n+1} & = \fitvec{f}_1 (\fitvec{a}^{n+1}), \label{eqn_FIT-BDF1_Darwin_continuity}\\ \bigl[\C^{\trans} \fMnu \C + \frac{1}{\Delta t} \fMkap \bigr] \fitvec{a}^{n+1} & = \fitvec{f}_2 (\bm{\upphi}^{n+1}), \label{eqn_FIT-BDF1_Darwin_Ampere} \end{align} where, using the notation $\Delta\fitvec{a}^{n+1}=\fitvec{a}^{n+1}-\fitvec{a}^{n}$, the right-hand side vectors are \begin{equation} \fitvec{f}_1 = {\Gr}^{\trans} \fitvec{j}_s^{n+1} + \frac{1}{\Delta t}{\Gr}^{\trans} \fMeps {\Gr}\bm{\upphi}^{n} - \frac{1}{\Delta t}{\Gr}^{\trans}\fMkap \Delta\fitvec{a}^{n+1}, \label{eqn_rhs_f1} \end{equation} \begin{equation} \fitvec{f}_2 = \fitvec{j}_s^{n+1} + \frac{1}{\Delta t} \fMkap \fitvec{a}^{n} - \fM_{\sigma} {\Gr} \bm{\upphi}^{n+1} + \frac{1}{\Delta t} \fMeps {\Gr} \bm{\upphi}^{n}. \label{eqn_rhs_f2} \end{equation} The system \eqref{eqn_FIT-BDF1_Darwin_continuity}, \eqref{eqn_FIT-BDF1_Darwin_Ampere} is solved for each time step, starting from initial values $\fitvec{a}^0=\fitvec{a}(t^0)$ and $\bm{\upphi}^0=\bm{\upphi}(t^0)$. Adopting an iterative solution approach with iteration index $i=0,1,2,\ldots$ for each time step $t^{n+1}$ requires an initial guess vector $\fitvec{a}^{n+1}_{i=0}$ with $\fitvec{f}_{1,i=0}=\fitvec{f}_1(\fitvec{a}^{n+1}_{i=0})$. Inserting the solution vector of \eqref{eqn_FIT-BDF1_Darwin_continuity} rewritten as $\bm{\upphi}^{n+1}=\left[ \Gr^{\trans} \fM_{\sigma} \Gr \right]^{-1} \fitvec{f}_{1,i=0}$ into the right-hand side vector equation $\fitvec{f}_{2,i=0} = \fitvec{f}_2(\bm{\upphi}^{n+1}_{i=0})$ of \eqref{eqn_FIT-BDF1_Darwin_Ampere} yields an expression for the next iterative solution vector $\fitvec{a}^{n+1}_{i=1}$.
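The discrete operators $\Gr$ and $\C$ introduced above satisfy $\C\Gr=\bm{0}$, the discrete counterpart of $\curl\grad=0$. As a minimal self-contained check, the following pure-Python sketch builds the incidence matrices on a small 2-D structured grid (a deliberate simplification of the 3-D operators used in the paper, for illustration only) and verifies the identity, which by transposition is equivalent to ${\Gr}^{\trans}\C^{\trans}=\bm{0}$:

```python
def grid_operators(nx, ny):
    """Discrete gradient G (edges x nodes) and curl C (faces x edges)
    for an nx-by-ny node grid with +x / +y oriented edges."""
    def nid(i, j): return i + nx * j
    edges = []                                  # (tail node, head node)
    for j in range(ny):                         # x-directed edges
        for i in range(nx - 1):
            edges.append((nid(i, j), nid(i + 1, j)))
    ex = len(edges)
    for j in range(ny - 1):                     # y-directed edges
        for i in range(nx):
            edges.append((nid(i, j), nid(i, j + 1)))
    G = [[0] * (nx * ny) for _ in edges]
    for e, (a, b) in enumerate(edges):
        G[e][a], G[e][b] = -1, 1
    def xid(i, j): return i + (nx - 1) * j      # x-edge index
    def yid(i, j): return ex + i + nx * j       # y-edge index
    C = []
    for j in range(ny - 1):                     # counter-clockwise circulation
        for i in range(nx - 1):
            row = [0] * len(edges)
            row[xid(i, j)] += 1                 # bottom edge
            row[yid(i + 1, j)] += 1             # right edge
            row[xid(i, j + 1)] -= 1             # top edge (reversed)
            row[yid(i, j)] -= 1                 # left edge (reversed)
            C.append(row)
    return G, C

G, C = grid_operators(4, 3)
# C*G should vanish identically: the circulation of a gradient field is zero.
CG = [[sum(c * g for c, g in zip(crow, gcol)) for gcol in zip(*G)] for crow in C]
```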
Left application of the discrete divergence operator ${\Gr}^{\trans}$ to this equation using the relation ${\Gr}^{\trans}\C^{\trans}=\bm{0}$ yields the identity ${\Gr}^{\trans} \fMkap \fitvec{a}^{n+1}_{i=1} = {\Gr}^{\trans} \fMkap \fitvec{a}^{n+1}_{i=0}$ and by induction \begin{equation} {\Gr}^{\trans} \fMkap \fitvec{a}^{n+1}_{i} = {\Gr}^{\trans} \fMkap \fitvec{a}^{n+1}_{0}\quad\forall i\in\{0,1,\ldots\}, \end{equation} i.e., in exact arithmetics the converged solution $\fitvec{a}^{n+1}$ of the iterative process will maintain the discrete divergence of its initial guess solution $\fitvec{a}^{n+1}_{i=0}$ in the right-hand side \eqref{eqn_rhs_f1} of \eqref{eqn_FIT-BDF1_Darwin_continuity}. An initial guess $\fitvec{a}^{n+1}_{i=0}= \fitvec{a}^{n}$ yields the result \begin{equation} {\Gr}^{\trans} \fMkap \fitvec{a}^{n} = {\Gr}^{\trans} \fMkap \fitvec{a}^{0}\quad\forall n\in\{0,1,\ldots\}. \end{equation} Thus, with the choice of the initialization vector $\fitvec{a}^{0}$ of the time integration process at $t^0$ acting as an initial gauge, the difference expression ${\Gr}^{\trans} \fMkap \bigl[\fitvec{a}^{n+1} -\fitvec{a}^{n}\bigr] = \bm{0}$ in \eqref{eqn_rhs_f1} vanishes for all time steps, and thus, system \eqref{eqn_FIT-BDF1_Darwin_continuity}, \eqref{eqn_FIT-BDF1_Darwin_Ampere} gets decoupled. \section{Numerical Experiments}\label{sec:numerical} To verify the performance of the proposed two-step Darwin time domain algorithm, two three-dimensional copper coil problems are considered (electrical conductivity $\kappa_{\mathrm{copper}},=5.96\cdot 10^7~\mathrm{S/m}$. Both problems are illustrated in Fig.~\ref{fig:geom}. For each case-study, the computational domain is $\Omega=(\Omega_0\cup\overline{\Omega}_\kappa)\setminus(\Gamma_\mathrm{E}\cup\Gamma_\mathrm{G})\subset\mathbb{R}^3$ and is free from charge and current sources. 
The bounding surfaces are perfectly conducting, with $\Gamma_\mathrm{G}$ being grounded and $\Gamma_\mathrm{E}$ supplying the transient excitation \begin{equation}\label{eq:excit} \varphi(t) = \varphi_\mathrm{max}\cdot f\cdot\min(t,1/f)\cdot\sin(2\pi f t), \end{equation} where $\varphi_\mathrm{max}=12~\mathrm{V}$ is the maximum voltage and $f=10~\mathrm{MHz}$ is the excitation frequency with a wavelength $\lambda = 30~\mathrm{m}$ in void. The longest side of the domain $\Omega$ associated with the helical coil is $l=6.3~\mathrm{cm}$, the one associated with the planar coil is $l=1.35~\mathrm{cm}$. Since $\ell\ll\lambda$ in both cases, the radiation-free assumption of the Darwin field model is justified. \begin{figure} \includegraphics[width=0.99\columnwidth]{Figures/CoilsMesh3D} \caption{In both coil case-studies, $\Omega_0$ is void, with $\hat\kappa$ being a small artificial electrical conductivity ($\hat\kappa=10^{-2}$ S/m), while $\Omega_\kappa$ is occupied by copper.}\label{fig:geom} \end{figure} The problems that constitute the two-step algorithm are discretized in space with the FEM, using first-order Lagrange elements for the scalar electric problem and zeroth-order N{\'e}d{\'e}lec elements for the vectorial magnetic problem; see Table \ref{tab:dof} for the number of degrees of freedom in each finite element space. Regarding time-discretization, both problems have been integrated with the trapezoidal method, which is implicit, second-order accurate, and A-stable, with different time steps $\Delta t\in\{2.5,1.25,0.625\}~\mathrm{ns}$ for a total of $t_\mathrm{End}=(n_\mathrm{End}-1)\Delta t=1200~\mathrm{ns}$. With the excitation functions in \eqref{eq:excit}, the two-step algorithm is expected to yield an approximation of a frequency-domain full Maxwell solution, and hence, the latter is used for obtaining reference solutions on the same meshes. For all linear systems a direct solver is used. 
\begin{table}\centering \caption{Number of Degrees of Freedom (dof)} \label{tab:dof} \begin{tabular}{lcc} \toprule & Lagrange Elements & N{\'e}d{\'e}lec Elements\\ \midrule Helical Coil & $16\,882$ & $118\,609$ \\ Planar Coil & $41\,357$ & $292\,905$\\ \bottomrule \end{tabular} \end{table} In Fig.~\ref{fig:fields}, the magnetic flux density and the electric field intensity are depicted for each coil. There, the two-step algorithm for the Darwin field model, successfully captures, not only the induction, but also the capacitance between the coil windings. Table \ref{tab:freq} depicts the relative differences of the electric field approximations provided by the Darwin field model for the planar coil model at frequencies ranging from $10~\mathrm{kHz}$ up through $1~\mathrm{GHz}$. The results show that the remainder part $-\partial\bm{A} /\partial t$ of the electric field needs to be evaluated in (\ref{eq:E_and_B}) which necessitates the regularization of (\ref{eqn_Darwin-Ampere_2}). \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{Figures/AllFields}\hfill \caption{The magnitude of the magnetic flux density and the electric field intensity for the two coil problems at $t=825~\mathrm{ns}$.}\label{fig:fields} \end{figure} In Fig.~\ref{fig:error}, the difference between the Maxwell and Darwin field models is quantified with the norm \begin{equation} \Vert \operatorname{Re}(\bm{F}_\mathrm{M})-\bm{F}_\mathrm{D}\Vert_{L^2(\Omega)}/\Vert\bm{F}_\mathrm{M}\Vert_{L^2(\Omega)}, \end{equation} where $\bm{F}\in\{\bm{B},\bm{E}\}$ is a physical field quantity, computed as in \eqref{eq:E_and_B}, and the subscripts $\mathrm{M}$ and $\mathrm{D}$ stand for the Maxwell and Darwin field models, respectively. In Fig.~\ref{fig:error}, the first row of results is associated with the helical coil, while the second row with the planar coil. 
In the same figure, the effect of the time discretization scheme is also apparent, with a tendency towards improved accuracy for smaller time steps, since convergence to the time-harmonic solution is expected. \begin{figure} \includegraphics[width=0.5\columnwidth]{Figures/ErrorPlotHelical-B}\hfill \includegraphics[width=0.5\columnwidth]{Figures/ErrorPlotHelical-E} \includegraphics[width=0.5\columnwidth]{Figures/ErrorPlotPlanar-B}\hfill \includegraphics[width=0.5\columnwidth]{Figures/ErrorPlotPlanar-E} \caption{The relative difference between the field quantities $\bm{B}$ and $\bm{E}$ for the helical coil (first row) and for the planar coil (second row), computed using a full Maxwell frequency-domain solver and the two-step Darwin algorithm.}\label{fig:error} \end{figure} \begin{table}\centering \caption{Relative $\bm{E}$-field Differences at $t=3.125/f$ for the Planar Coil as Functions of the Frequency $f$.} \label{tab:freq} \begin{tabular}{ccc} \toprule $f$ [Hz] & $\Vert \operatorname{Re}(\bm{E}_\mathrm{M})-\bm{E}_\mathrm{D}\Vert/\Vert\bm{E}_\mathrm{M}\Vert$ & $\Vert \operatorname{Re}(\bm{E}_\mathrm{M})-\bm{E}_\mathrm{D,irr}\Vert/\Vert\bm{E}_\mathrm{M}\Vert$ \\ \midrule $10^4$ & $8.22\cdot10^{-6}$ & $2.74\cdot10^{-3}$ \\ $10^5$ & $9.31\cdot10^{-6}$ & $2.92\cdot10^{-2}$ \\ $10^6$ & $4.60\cdot10^{-5}$ & $1.59\cdot10^{-1}$ \\ $10^7$ & $7.33\cdot10^{-4}$ & 1.47 \\ $10^8$ & $2.00\cdot10^{-3}$ & 2.28 \\ $10^9$ & $1.41\cdot10^{-2}$ & 2.22 \\ \bottomrule \end{tabular} \vspace*{2pt} \end{table} \section{Conclusions} A two-step algorithm for the transient $(\bm{A},\varphi)$ formulation of the quasistatic Darwin field model is introduced and, for the first time, numerically validated against the full system of Maxwell's equations. The presented two-step Darwin time domain quasistatic field formulation accounts for capacitive, inductive, and resistive effects. 
The advantages of this scheme result from consecutively combining an electro-quasistatic and a modified magneto-quasistatic field model, and thus, it benefits from existing efficient time domain solution techniques that provide flexibility in terms of material non-linearities and excitation profiles. \bibliographystyle{ieeetr} \section{Introduction} \IEEEPARstart{E}{lectromagnetic} field models that do not account for radiation effects are dubbed quasistatic field models. For static fields, the Maxwell equations decouple and enable the consideration of resistive, and capacitive or inductive effects, with either electrostatic, or magnetostatic formulations, separately. For capacitive-resistive effects, the electro-quasistatic (EQS) field model is applicable, while resistive-inductive effects can be modelled with the magneto-quasistatic (MQS) field approximation \cite{bHausMelcher:01s}. Quasistatic field scenarios where inductive, resistive, and capacitive effects need to be considered simultaneously, appear in high-frequency coils and coils of inductive charging systems, where the capacitive effects between the coil windings need to be taken into account and they are a common problem in electromagnetic compatibility, e.g. in automotive engineering. For such scenarios, it is quite common to use lumped $R$, $L$, $C$ parameter circuit-type models such as Kirchhoff's model or circuit models in combination with field models used either for parameter extraction or in strong/weakly coupled models, and circuit-formulation oriented partial-element equivalent circuit (PEEC) methods. Rather recently, also field oriented models based on the Darwin field formulation are considered. 
These quasistatic electromagnetic field models are represented in terms of combined electric scalar and magnetic vector potentials and feature a modified version of Amp\`{e}re's law in which the rotational parts of the displacement currents are eliminated, i.e., the radiation effects are neglected in the model. Darwin formulations \cite{Darwin1920:01s} are not gauge-invariant, and thus a number of different Darwin model formulations have been considered \cite{RaviartSonnendruecker1995:01s2,Larsson2007:01s,KochWeiland2011:01s,KochSchneiderWeiland2012:01s,inpGarcia2018Chapter1:01s,ZhaoTang2019:01s,inpBadicsetal2018:01s,inpClemensKaehneSchoeps2019:01s,inpKaimori2020:01s}. The paper is organized as follows. After this introduction, Darwin field models with different established gauge conditions are highlighted. In the third section, a Darwin model is presented which allows the use of a two-step numerical solution scheme. Section \ref{sec:numerical} presents numerical experiments with the two-step time domain Darwin formulation and is followed by conclusions. \section{The Darwin Field Model}\label{sec:darwin} Darwin or Darwin-type field models for quasistatic electromagnetic field distributions can be obtained by considering a decomposition of the electric field intensity $\bm{E}$ into an irrotational part $\bm{E}_{\mathrm{irr}}$ and a remaining part $\bm{E}_{\mathrm{rem}}$, \begin{equation} \bm{E} = \bm{E}_{\mathrm{irr}} + \bm{E}_{\mathrm{rem}}, \end{equation} where the irrotational part is represented as the gradient of an electric scalar potential $\varphi$, that is, $\bm{E}_{\mathrm{irr}} = - \grad \varphi$. The remainder part is represented by the time derivative of a magnetic vector potential $\bm{A}$, i.e., $\bm{E}_{\mathrm{rem}} = - \partial\bm{A}/\partial t$.
Hence, \begin{equation} \bm{E} = - \frac{\partial}{\partial t} \bm{A} -\grad \varphi\text{,}\qquad \bm{B} = \curl \bm{A} \label{eq:E_and_B} \end{equation} holds for the electric field intensity and for the magnetic flux density, respectively. The assumption of a quasistatic electromagnetic field model enables the elimination of the rotational parts of the displacement currents, $\varepsilon \partial^2 \bm{A}/\partial t^2 \cong \bm{0}$, in Amp\`{e}re's law. The result of this elimination is the so-called Darwin-Amp\`{e}re equation \begin{equation} \curl (\nu \curl \bm{A}) + \kappa \frac{\partial}{\partial t} \bm{A} + \kappa \grad \varphi + \varepsilon \grad \frac{\partial}{\partial t} \varphi = \bm{J}_\mathrm{S}, \label{eqn_Darwin-Ampere} \end{equation} where $\nu$ is the reluctivity, $\kappa$ is the electric conductivity, $\varepsilon$ is the permittivity, and $\bm{J}_\mathrm{S}$ is a source current density. The original formulation of the Darwin model \cite{Darwin1920:01s} can be obtained by enforcing a Coulomb gauge $\divergence \left(\bm{A}\right)=0$ corresponding to a Helmholtz decomposition of the electric field intensity $\bm{E}$, i.e., assuming $\curl \bm{E}_{\mathrm{irr}}=0$ and $\divergence \bm{E}_{\mathrm{rem}}= - \divergence \left(\partial\bm{A} / \partial t \right)=0$. The original model was intended to describe charges in free space without conductive materials, i.e., $\kappa=0$, $\varepsilon = \varepsilon_0$ and $\nu=\nu_0$. As a consequence of the Helmholtz decomposition, the Gau{\ss} law does not consider the rotational parts of the electric field and yields the electrostatic Poisson equation as a gauge equation.
Thus, the original Darwin formulation is given by \begin{align} -\nu_0 \Delta \bm{A} + \varepsilon_0 \grad \frac{\partial}{\partial t} \varphi &= \bm{J}_\mathrm{S},\\ \divergence \grad \varphi &= -\rho_{\mathrm{E}} / \varepsilon_0, \end{align} which requires knowing the electric charge density $\rho_{\mathrm{E}}$ and its motion with $\bm{J}_\mathrm{S} = \rho_\mathrm{E} \bm{v}$ along some velocity vector $\bm{v}$. To eliminate the free space assumption of the original Darwin model and to include conductive, permeable, and dielectric materials, an application of the divergence operator to the Darwin-Amp\`{e}re equation \eqref{eqn_Darwin-Ampere} results in a modified Darwin field formulation \cite{KochWeiland2011:01s}, \cite{KochSchneiderWeiland2012:01s}, and yields the Darwin continuity equation \begin{equation} \divergence \left(\kappa \frac{\partial}{\partial t} \bm{A} + \kappa \grad \varphi + \varepsilon \grad \frac{\partial}{\partial t} \varphi\right) = \divergence \bm{J}_\mathrm{S}. \label{eqn_Darwin-continuity} \end{equation} This Darwin continuity equation lacks the radiation term $\divergence (\varepsilon \partial^2 \bm{A}/\partial t^2)$, which is present in the full Maxwell continuity equation. It was shown in \cite{KochWeiland2011:01s} that the combined discrete formulation of the Darwin-Amp\`{e}re equation \eqref{eqn_Darwin-Ampere} and the Darwin continuity equation \eqref{eqn_Darwin-continuity} results in non-symmetric systems.
In addition, the resulting system is singular and requires an additional gauge for the magnetic vector potential in the non-conductive regions of the problem, such as artificial conductivity \cite{KochWeiland2011:01s} or an additional Coulomb-type gauge \cite{ZhaoTang2019:01s}, \cite{inpKaimori2020:01s} \begin{equation} \divergence \left(\varepsilon \frac{\partial}{\partial t} \bm{A}\right)=0, \label{eqn_Coulomb-type_gauge} \end{equation} which can be enforced by adding this term with a scaling factor $1/\Delta t$ to the Darwin continuity equation \eqref{eqn_Darwin-continuity}, such that the gauge equation becomes a temporally semi-discrete version of the full Maxwell continuity equation, expressed in terms of the electrodynamic potentials $\bm{A}$ and $\varphi$. The Coulomb-type gauge \eqref{eqn_Coulomb-type_gauge} can be additionally imposed as a third equation via a Lagrange multiplier formulation \cite{ZhaoTang2019:01s}. Both Darwin model field formulations, \cite{ZhaoTang2019:01s} and \cite{inpKaimori2020:01s}, are symmetric and do not require additional regularization. \section{Two-Step Darwin Model Algorithms}\label{sec:twostep} The Darwin continuity equation \eqref{eqn_Darwin-continuity} is extended with an additional gauge term $\divergence \left(\kappa \partial\bm{A}/\partial t \right)=0$ \cite{inpClemensKaehneSchoeps2019:01s} to yield the electro-quasistatic equation \begin{equation} \divergence \left( \kappa \grad \varphi + \varepsilon \grad \frac{\partial}{\partial t} \varphi\right) = \divergence \bm{J}_\mathrm{S}. \label{eqn:eqs-continuity} \end{equation} The expression $\divergence \left(\kappa\partial\bm{A}/\partial t\right)$ omitted in \eqref{eqn:eqs-continuity} from the Darwin continuity equation \eqref{eqn_Darwin-continuity}, corresponds to explicitly enforcing divergence-free eddy currents in conductive media, i.e., neglecting eventually arising sources and sinks of current densities due to the irrotational parts of the electric field. 
The combination of equation \eqref{eqn_Darwin-Ampere} rewritten as \begin{equation} \curl (\nu \curl \bm{A}) + \kappa \frac{\partial}{\partial t} \bm{A} = -\kappa \grad \varphi - \varepsilon \grad \frac{\partial}{\partial t} \varphi + \bm{J}_\mathrm{S}, \label{eqn_Darwin-Ampere_2} \end{equation} with equation \eqref{eqn:eqs-continuity} results in a two-step formulation, where first the electro-quasistatic total current density $\bm{J}_\mathrm{Total}= - \kappa \grad \varphi - \varepsilon \grad \left( \partial \varphi/ \partial t\right) + \bm{J}_\mathrm{S}$ is used as a solenoidal source term with $\divergence \bm{J}_\mathrm{Total} = 0$ for a magneto-quasistatic formulation for the magnetic vector potential $\bm{A}$ represented by the left-hand side of (\ref{eqn_Darwin-Ampere_2}). This modified magneto-quasistatic formulation, however, initially does not address irrotational parts of $\bm{A}$ in the non-conductive regions. While this does not affect the evaluation of $\bm{B}$ in (\ref{eq:E_and_B}), the evaluation of the electric field according to \eqref{eq:E_and_B} involves the expression $\partial\bm{A}/\partial t$ also in the non-conductive regions, which is commonly not covered in magneto-quasistatic field formulations. To control the irrotational parts of $\bm{A}$, the magneto-quasistatic formulation needs to be regularized. For this, the introduction of a small artificial electrical conductivity $\hat{\kappa}$ in the non-conducting regions has been suggested \cite{KochWeiland2011:01s}. If $\kappa \gg 1/(\Delta t) \varepsilon$ holds for a given time-step length $\Delta t$, a modified electrical conductivity, e.g., $\hat{\kappa}=\kappa +1/(\Delta t) \varepsilon$, will regularize the formulation, where the resulting time-discrete formulations will feature expressions of the type $1/(\Delta t)\kappa +1/(\Delta t)^2 \varepsilon$ as they occur in second-order time discretization schemes, as e.g.
Newmark-beta schemes used for full wave Maxwell-Amp\`{e}re equations \cite{DibbenMetaxas97:01s}. Alternatively, a grad-div term augmentation for spatially discretized magneto-quasistatic formulations is applicable \cite{Bossavit2001:01s}, \cite{ClemensSchoepsDeGersemBartel2011:01s}. The introduction of a small artificial electrical conductivity $\hat{\kappa}$ in the non-conducting regions is also an established technique to mitigate the static limit instability of the electro-quasistatic formulation (\ref{eqn:eqs-continuity}) that is known to occur for $\frac{\partial}{\partial t} \varphi \rightarrow 0$. By assuming that $\divergence \left(\kappa\partial\bm{A}/\partial t\right)=0$ holds in equation \eqref{eqn_Darwin-continuity}, the calculation of the electric scalar potential $\varphi$, using \eqref{eqn:eqs-continuity}, is decoupled from that of the magnetic vector potential $\bm{A}$. Thus, it is possible to independently first solve an electro-quasistatic initial-boundary value problem that corresponds to \eqref{eqn:eqs-continuity}, and in a second step solve the modified magneto-quasistatic problem \eqref{eqn_Darwin-Ampere_2} with the then available total current densities $\bm{J}_\mathrm{Total}$, as depicted in Algorithm \ref{alg1_2-Step_Darwin_TD}. 
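As a minimal numerical illustration of the two sequential solution steps (anticipating the matrix notation of the next subsection), the following sketch performs one backward Euler step of each sub-problem on a toy mesh consisting of a single square cell. All matrices, material values, and sources here are illustrative stand-ins, not the actual finite element discretization used later.

```python
# Toy sketch of one time step of the two-step Darwin scheme (Algorithm 1):
# single square mesh cell with 4 nodes, 4 edges, 1 face.
import numpy as np

# Discrete gradient G (edges x nodes) and curl C (faces x edges) with C G = 0.
G = np.array([[-1.0, 1.0, 0.0, 0.0],
              [0.0, -1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, -1.0],
              [-1.0, 0.0, 0.0, 1.0]])
C = np.array([[1.0, 1.0, -1.0, -1.0]])
assert np.allclose(C @ G, 0.0)           # discrete "curl grad = 0" identity

Gr = G[:, 1:]                             # ground node 0 so the EQS system is regular
Mk = np.diag([2.0, 1.0, 3.0, 1.5])        # conductivity matrix (toy values)
Me = np.diag([0.5, 0.2, 0.4, 0.3])        # permittivity matrix
Mn = np.diag([1.0])                       # reluctivity matrix (one face)
dt = 0.1
Ms = Mk + Me / dt                         # M_sigma = M_kappa + (1/dt) M_eps
js = np.array([1.0, 0.0, -1.0, 0.5])      # transient edge source currents

phi0 = np.zeros(3)                        # initial scalar potential dofs
a0 = np.array([0.1, -0.2, 0.05, 0.0])     # initial vector potential dofs

# Step 1 (EQS, decoupled): [G^T Ms G] phi^{n+1} = G^T j + (1/dt) G^T Me G phi^n
phi1 = np.linalg.solve(Gr.T @ Ms @ Gr,
                       Gr.T @ js + Gr.T @ Me @ Gr @ phi0 / dt)

# Step 2 (MQS): [C^T Mn C + (1/dt) Mk] a^{n+1} = f_2(phi^{n+1})
f2 = js + Mk @ a0 / dt - Ms @ Gr @ phi1 + Me @ Gr @ phi0 / dt
a1 = np.linalg.solve(C.T @ Mn @ C + Mk / dt, f2)

# Because G^T C^T = 0, the discrete divergence of (M_kappa a) is conserved.
assert np.allclose(Gr.T @ Mk @ a1, Gr.T @ Mk @ a0)
```

Applying the discrete divergence to the magnetic step eliminates the curl-curl term, which reproduces numerically the divergence-conservation argument derived in the following subsection.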
\begin{algorithm}[ht] \caption{Two-Step Darwin Time Domain} \label{alg1_2-Step_Darwin_TD} \begin{algorithmic}[1] \State Initialize $\varphi(t^0)$ and $\bm{A}(t^0)$; \For{$n\gets 0:n_{\mbox{\scriptsize End}}-1$} \State Solve problem \eqref{eqn:eqs-continuity} for $\varphi(t^{n+1})$; \EndFor \For{$n\gets 0:n_{\mbox{\scriptsize End}}-1$} \State Solve problem \eqref{eqn_Darwin-Ampere_2} for $\bm{A}(t^{n+1})$; \State Evaluate $\bm{E}(t^{n+1})$ and $\bm{B}(t^{n+1})$ with (\ref{eq:E_and_B}); \EndFor \end{algorithmic} \end{algorithm} Alternatively, it is possible to consecutively execute a solution step for an electro-quasistatic and a magneto-quasistatic field formulation for each discrete time step, using suitable time stepping schemes \cite{Clemens2005:01s}. \subsection{Discrete Two-Step Darwin Time Domain Schemes} Reformulating (\ref{eqn_Darwin-continuity}) and (\ref{eqn_Darwin-Ampere_2}) with a spatial volume discretization scheme, such as the finite integration technique \cite{Weiland1996:01s} or the finite element method with N{\'e}d{\'e}lec elements \cite{Nedelec80:01s}, results in a coupled system of time-continuous matrix equations \begin{equation} \Gr^{\trans} \fMkap \Gr \bm{\upphi} + \Gr^{\trans} \fMeps \Gr \frac{\mathrm d}{\mathrm dt} \bm{\upphi} = \Gr^{\trans} \fitvec{j}_{\mathrm{s}} - \Gr^{\trans} \fMkap \frac{\mathrm d}{\mathrm dt} \fitvec{a}, \label{FIT_Darwin_Continuity} \end{equation} \begin{equation} \C^{\trans} \fMnu \C \fitvec{a} + \fMkap \frac{\mathrm d}{\mathrm dt} \fitvec{a} = \fitvec{j}_{\mathrm{s}} - \fMkap \Gr \bm{\upphi} - \fMeps \Gr \frac{\mathrm d}{\mathrm dt} \bm{\upphi}, \label{FIT_Darwin-Ampere_2} \end{equation} where $\fitvec{a}$ is the degrees of freedom (dof) vector related to the magnetic vector potential, $\bm{\upphi}$ is the dof vector of electric nodal scalar potentials, $\fitvec{j}_{\mathrm{s}}$ is a vector of transient source currents, $\C$ is the discrete curl operator matrix, $\Gr$ and $\Gr^{\trans}$ are discrete gradient and
(negative) divergence operator matrices. The matrices $\fMnu$, $\fMkap$, $\fMeps$ are discrete material matrices of possibly nonlinear reluctivities, conductivities and permittivities, respectively, corresponding to the specific discretization scheme in use. Applying, e.g., an implicit backward Euler time stepping scheme with time step $\Delta t$ to \eqref{FIT_Darwin_Continuity} and \eqref{FIT_Darwin-Ampere_2}, with $\fM_{\sigma}=\fMkap + (1/\Delta t)\fMeps$, yields the coupled system \begin{align} \left[ \Gr^{\trans} \fM_{\sigma} \Gr \right] \bm{\upphi}^{n+1} & = \fitvec{f}_1 (\fitvec{a}^{n+1}), \label{eqn_FIT-BDF1_Darwin_continuity}\\ \bigl[\C^{\trans} \fMnu \C + \frac{1}{\Delta t} \fMkap \bigr] \fitvec{a}^{n+1} & = \fitvec{f}_2 (\bm{\upphi}^{n+1}), \label{eqn_FIT-BDF1_Darwin_Ampere} \end{align} where, using the notation $\Delta\fitvec{a}^{n+1}=\fitvec{a}^{n+1}-\fitvec{a}^{n}$, the right-hand side vectors are \begin{equation} \fitvec{f}_1 = {\Gr}^{\trans} \fitvec{j}_s^{n+1} + \frac{1}{\Delta t}{\Gr}^{\trans} \fMeps {\Gr}\bm{\upphi}^{n} - \frac{1}{\Delta t}{\Gr}^{\trans}\fMkap \Delta\fitvec{a}^{n+1}, \label{eqn_rhs_f1} \end{equation} \begin{equation} \fitvec{f}_2 = \fitvec{j}_s^{n+1} + \frac{1}{\Delta t} \fMkap \fitvec{a}^{n} - \fM_{\sigma} {\Gr} \bm{\upphi}^{n+1} + \frac{1}{\Delta t} \fMeps {\Gr} \bm{\upphi}^{n}. \label{eqn_rhs_f2} \end{equation} The system \eqref{eqn_FIT-BDF1_Darwin_continuity}, \eqref{eqn_FIT-BDF1_Darwin_Ampere} is solved for each time step, starting from initial values $\fitvec{a}^0=\fitvec{a}(t^0)$ and $\bm{\upphi}^0=\bm{\upphi}(t^0)$. Adopting an iterative solution approach with iteration index $i=0,1,2,\ldots$ for each time step $t^{n+1}$ requires providing an initial guess vector $\fitvec{a}^{n+1}_{i=0}$ with $\fitvec{f}_{1,i=0}=\fitvec{f}_1(\fitvec{a}^{n+1}_{i=0})$.
Inserting the solution vector of \eqref{eqn_FIT-BDF1_Darwin_continuity} rewritten as $\bm{\upphi}^{n+1}=\left[ \Gr^{\trans} \fM_{\sigma} \Gr \right]^{-1} \fitvec{f}_{1,i=0}$ into the right-hand side vector equation $\fitvec{f}_{2,i=0} = \fitvec{f}_2(\bm{\upphi}^{n+1}_{i=0})$ of \eqref{eqn_FIT-BDF1_Darwin_Ampere} yields an expression for the next iterative solution vector $\fitvec{a}^{n+1}_{i=1}$. Left application of the discrete divergence operator ${\Gr}^{\trans}$ to this equation using the relation ${\Gr}^{\trans}\C^{\trans}=\bm{0}$ yields the identity ${\Gr}^{\trans} \fMkap \fitvec{a}^{n+1}_{i=1} = {\Gr}^{\trans} \fMkap \fitvec{a}^{n+1}_{i=0}$ and by induction \begin{equation} {\Gr}^{\trans} \fMkap \fitvec{a}^{n+1}_{i} = {\Gr}^{\trans} \fMkap \fitvec{a}^{n+1}_{0}\quad\forall i\in\{0,1,\ldots\}, \end{equation} i.e., in exact arithmetic the converged solution $\fitvec{a}^{n+1}$ of the iterative process will maintain the discrete divergence of its initial guess solution $\fitvec{a}^{n+1}_{i=0}$ in the right-hand side \eqref{eqn_rhs_f1} of \eqref{eqn_FIT-BDF1_Darwin_continuity}. An initial guess $\fitvec{a}^{n+1}_{i=0}= \fitvec{a}^{n}$ yields the result \begin{equation} {\Gr}^{\trans} \fMkap \fitvec{a}^{n} = {\Gr}^{\trans} \fMkap \fitvec{a}^{0}\quad\forall n\in\{0,1,\ldots\}. \end{equation} Thus, with the choice of the initialization vector $\fitvec{a}^{0}$ of the time integration process at $t^0$ acting as an initial gauge, the difference expression ${\Gr}^{\trans} \fMkap \bigl[\fitvec{a}^{n+1} -\fitvec{a}^{n}\bigr] = \bm{0}$ in \eqref{eqn_rhs_f1} vanishes for all time steps, and the system \eqref{eqn_FIT-BDF1_Darwin_continuity}, \eqref{eqn_FIT-BDF1_Darwin_Ampere} decouples. \section{Numerical Experiments}\label{sec:numerical} To verify the performance of the proposed two-step Darwin time domain algorithm, two three-dimensional copper coil problems are considered (electrical conductivity $\kappa_{\mathrm{copper}}=5.96\cdot 10^7~\mathrm{S/m}$).
Both problems are illustrated in Fig.~\ref{fig:geom}. For each case-study, the computational domain is $\Omega=(\Omega_0\cup\overline{\Omega}_\kappa)\setminus(\Gamma_\mathrm{E}\cup\Gamma_\mathrm{G})\subset\mathbb{R}^3$ and is free from charge and current sources. The bounding surfaces are perfectly conducting, with $\Gamma_\mathrm{G}$ being grounded and $\Gamma_\mathrm{E}$ supplying the transient excitation \begin{equation}\label{eq:excit} \varphi(t) = \varphi_\mathrm{max}\cdot f\cdot\min(t,1/f)\cdot\sin(2\pi f t), \end{equation} where $\varphi_\mathrm{max}=12~\mathrm{V}$ is the maximum voltage and $f=10~\mathrm{MHz}$ is the excitation frequency with a wavelength $\lambda = 30~\mathrm{m}$ in void. The longest side of the domain $\Omega$ associated with the helical coil is $\ell=6.3~\mathrm{cm}$, the one associated with the planar coil is $\ell=1.35~\mathrm{cm}$. Since $\ell\ll\lambda$ in both cases, the radiation-free assumption of the Darwin field model is justified. \begin{figure} \includegraphics[width=0.99\columnwidth]{Figures/CoilsMesh3D} \caption{In both coil case-studies, $\Omega_0$ is void, with $\hat\kappa$ being a small artificial electrical conductivity ($\hat\kappa=10^{-2}$ S/m), while $\Omega_\kappa$ is occupied by copper.}\label{fig:geom} \end{figure} The problems that constitute the two-step algorithm are discretized in space with the FEM, using first-order Lagrange elements for the scalar electric problem and zeroth-order N{\'e}d{\'e}lec elements for the vectorial magnetic problem; see Table \ref{tab:dof} for the number of degrees of freedom in each finite element space. Regarding time discretization, both problems have been integrated with the trapezoidal method, which is implicit, second-order accurate, and A-stable, with different time steps $\Delta t\in\{2.5,1.25,0.625\}~\mathrm{ns}$ for a total of $t_\mathrm{End}=(n_\mathrm{End}-1)\Delta t=1200~\mathrm{ns}$.
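The ramped excitation \eqref{eq:excit} switches the sine on smoothly over the first period, avoiding an abrupt switch-on transient. A minimal sketch with the stated values $\varphi_\mathrm{max}=12~\mathrm{V}$ and $f=10~\mathrm{MHz}$:

```python
# Sketch of phi(t) = phi_max * f * min(t, 1/f) * sin(2 pi f t).
import numpy as np

PHI_MAX = 12.0   # maximum voltage in V
F = 10e6         # excitation frequency in Hz

def phi(t):
    """Linear amplitude ramp over the first period, pure sine afterwards."""
    return PHI_MAX * F * np.minimum(t, 1.0 / F) * np.sin(2.0 * np.pi * F * t)

assert phi(0.0) == 0.0                              # smooth switch-on, no jump
assert np.isclose(phi(0.25 / F), 0.25 * PHI_MAX)    # quarter period: amplitude 3 V
assert np.isclose(phi(1.25 / F), PHI_MAX)           # after one period: full 12 V sine
```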
With the excitation functions in \eqref{eq:excit}, the two-step algorithm is expected to yield an approximation of a frequency-domain full Maxwell solution, and hence, the latter is used for obtaining reference solutions on the same meshes. For all linear systems a direct solver is used. \begin{table}\centering \caption{Number of Degrees of Freedom (dof)} \label{tab:dof} \begin{tabular}{lcc} \toprule & Lagrange Elements & N{\'e}d{\'e}lec Elements\\ \midrule Helical Coil & $16\,882$ & $118\,609$ \\ Planar Coil & $41\,357$ & $292\,905$\\ \bottomrule \end{tabular} \end{table} In Fig.~\ref{fig:fields}, the magnetic flux density and the electric field intensity are depicted for each coil. There, the two-step algorithm for the Darwin field model successfully captures not only the induction but also the capacitance between the coil windings. Table \ref{tab:freq} lists the relative differences of the electric field approximations provided by the Darwin field model for the planar coil model at frequencies ranging from $10~\mathrm{kHz}$ up to $1~\mathrm{GHz}$. The results show that the remainder part $-\partial\bm{A} /\partial t$ of the electric field needs to be evaluated in (\ref{eq:E_and_B}), which necessitates the regularization of (\ref{eqn_Darwin-Ampere_2}).
\begin{figure} \centering \includegraphics[width=0.8\columnwidth]{Figures/AllFields}\hfill \caption{The magnitude of the magnetic flux density and the electric field intensity for the two coil problems at $t=825~\mathrm{ns}$.}\label{fig:fields} \end{figure} In Fig.~\ref{fig:error}, the difference between the Maxwell and Darwin field models is quantified with the norm \begin{equation} \Vert \operatorname{Re}(\bm{F}_\mathrm{M})-\bm{F}_\mathrm{D}\Vert_{L^2(\Omega)}/\Vert\bm{F}_\mathrm{M}\Vert_{L^2(\Omega)}, \end{equation} where $\bm{F}\in\{\bm{B},\bm{E}\}$ is a physical field quantity, computed as in \eqref{eq:E_and_B}, and the subscripts $\mathrm{M}$ and $\mathrm{D}$ stand for the Maxwell and Darwin field models, respectively. In Fig.~\ref{fig:error}, the first row of results is associated with the helical coil, while the second row with the planar coil. In the same figure, the effect of the time discretization scheme is also apparent, with a tendency towards improved accuracy for smaller time steps, since convergence to the time-harmonic solution is expected. 
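A discrete stand-in for this relative difference measure can be written as follows (a hypothetical helper: the paper evaluates continuous $L^2(\Omega)$ norms, whereas here both fields are simply sampled at common points):

```python
# Relative L2 difference between a complex frequency-domain Maxwell reference
# field and a real-valued time-domain Darwin field, sampled at the same points.
import numpy as np

def rel_diff(f_maxwell, f_darwin):
    num = np.linalg.norm(np.real(f_maxwell) - f_darwin)
    return num / np.linalg.norm(f_maxwell)

rng = np.random.default_rng(0)
f_m = rng.standard_normal(100) + 1j * rng.standard_normal(100)

assert rel_diff(f_m, np.real(f_m)) == 0.0   # identical real parts give zero difference
assert rel_diff(f_m, np.zeros(100)) < 1.0   # since |Re(f)| <= |f| pointwise
```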
\begin{figure} \includegraphics[width=0.5\columnwidth]{Figures/ErrorPlotHelical-B}\hfill \includegraphics[width=0.5\columnwidth]{Figures/ErrorPlotHelical-E} \includegraphics[width=0.5\columnwidth]{Figures/ErrorPlotPlanar-B}\hfill \includegraphics[width=0.5\columnwidth]{Figures/ErrorPlotPlanar-E} \caption{The relative difference between the field quantities $\bm{B}$ and $\bm{E}$ for the helical coil (first row) and for the planar coil (second row), computed using a full Maxwell frequency-domain solver and the two-step Darwin algorithm.}\label{fig:error} \end{figure} \begin{table}\centering \caption{Relative $\bm{E}$-field Differences at $t=3.125/f$ for the Planar Coil as Functions of the Frequency $f$.} \label{tab:freq} \begin{tabular}{ccc} \toprule $f$ [Hz] & $\Vert \operatorname{Re}(\bm{E}_\mathrm{M})-\bm{E}_\mathrm{D}\Vert/\Vert\bm{E}_\mathrm{M}\Vert$ & $\Vert \operatorname{Re}(\bm{E}_\mathrm{M})-\bm{E}_\mathrm{D,irr}\Vert/\Vert\bm{E}_\mathrm{M}\Vert$ \\ \midrule $10^4$ & $8.22\cdot10^{-6}$ & $2.74\cdot10^{-3}$ \\ $10^5$ & $9.31\cdot10^{-6}$ & $2.92\cdot10^{-2}$ \\ $10^6$ & $4.60\cdot10^{-5}$ & $1.59\cdot10^{-1}$ \\ $10^7$ & $7.33\cdot10^{-4}$ & 1.47 \\ $10^8$ & $2.00\cdot10^{-3}$ & 2.28 \\ $10^9$ & $1.41\cdot10^{-2}$ & 2.22 \\ \bottomrule \end{tabular} \vspace*{2pt} \end{table} \section{Conclusions} A two-step algorithm for the transient $(\bm{A},\varphi)$ formulation of the quasistatic Darwin field model is introduced and, for the first time, numerically validated against the full system of Maxwell's equations. The presented two-step Darwin time domain quasistatic field formulation accounts for capacitive, inductive, and resistive effects. The advantages of this scheme result from consecutively combining an electro-quasistatic and a modified magneto-quasistatic field model, and thus, it benefits from existing efficient time domain solution techniques that provide flexibility in terms of material non-linearities and excitation profiles. \bibliographystyle{ieeetr}
\section{Introduction} \label{sec.Intro} A complex Hadamard matrix is a square matrix $W$ of order $n$ which satisfies $W\overline{W}^\top= nI$ and all of whose entries are complex numbers of absolute value $1$. A complex Hadamard matrix is said to be of Butson type if all of its entries are roots of unity. In our earlier work \cite{MI}, we proposed a method to classify symmetric complex Hadamard matrices belonging to the Bose--Mesner algebra of a symmetric association scheme. In this paper, we propose an analogous method to classify nonsymmetric hermitian complex Hadamard matrices belonging to the Bose--Mesner algebra of a commutative nonsymmetric association scheme. First, we consider nonsymmetric hermitian complex Hadamard matrices belonging to the Bose--Mesner algebra of a commutative nonsymmetric association scheme of class $3$, and we show that the first eigenmatrix of such a scheme is necessarily of the form \begin{equation}\label{d3P1} \begin{pmatrix} 1&a(2a-1)&a(2a-1)&2a-1\\ 1&ai&-ai&-1\\ 1&-ai&ai&-1\\ 1&-a&-a&2a-1 \end{pmatrix}, \end{equation} where $a$ is a positive integer and $i=\sqrt{-1}$. Moreover, we show that such a complex Hadamard matrix is necessarily a Butson-type complex Hadamard matrix whose entries are $4$-th roots of unity. An association scheme with the first eigenmatrix (\ref{d3P1}) is a nonsymmetric amorphous association scheme belonging to $L_{2a;1}$, according to \cite{IMY}. An example of such an association scheme was constructed from a Galois ring of characteristic $4$ in \cite[Theorem 9]{IMY}. Galois rings have been used to construct certain association schemes. Yamada \cite{Y} used Galois rings of characteristic $4$ to construct distance-regular digraphs, which are nonsymmetric association schemes of class $3$. This construction was generalized in \cite{IMY} to produce amorphous association schemes.
Ma \cite{Ma} gave association schemes of class $3$ on Galois rings of characteristic $4$, which do not arise as fusions of any amorphous association scheme. Moreover, certain properties of association schemes obtained from Galois rings of odd characteristic have been investigated in \cite{EP}. The second result of this paper is a construction of nonsymmetric association schemes $\mathfrak{X}$ of class $6$ on Galois rings of characteristic $4$, whose first eigenmatrix is given by \begin{equation} \label{P6} (p_{i,j})_{\substack{0\leq i\leq 6 \\ 0\leq j\leq 6}}=\begin{pmatrix} 1&2b(b-1)&2b(b-1)&b&b&b-1&b\\ 1&bi&-bi&0&0&-1&0\\ 1&-bi&bi&0&0&-1&0\\ 1&0&0&bi&-bi&b-1&-b \\ 1&0&0&-bi&bi&b-1&-b \\ 1&-2b&-2b&b&b&b-1&b \\ 1&0&0&-b&-b&b-1&b \end{pmatrix}, \end{equation} where $b$ is a power of $4$. We also classify hermitian complex Hadamard matrices belonging to the Bose--Mesner algebra of $\mathfrak{X}$. We show that such a matrix is necessarily a Butson-type matrix whose entries are $4$-th roots of unity. \section{Association schemes and complex Hadamard matrices} \label{sec.AS} In this section, we consider hermitian matrices belonging to the Bose--Mesner algebra of a commutative association scheme. Assuming that all entries are complex numbers with absolute value $1$, we find conditions under which such a matrix is a complex Hadamard matrix. We refer the reader to \cite{BI,BCN} for undefined terminology in the theory of association schemes. Let $X$ be a finite set with $n$ elements, and let $\mathfrak{X}=(X,\{R_i\}_{i=0}^d)$ be a commutative association scheme with the first eigenmatrix $P=(P_{i,j})$. We let $\mathfrak{A}$ denote the Bose--Mesner algebra spanned by the adjacency matrices $A_0,A_1,\ldots,A_d$ of $\mathfrak{X}$. Then the adjacency matrices are expressed as \begin{equation}\label{eq:pij} A_j=\sum_{i=0}^d P_{i,j}E_i \quad (j=0,1,\ldots,d), \end{equation} where $E_0=\frac{1}{n}J,E_1,\ldots,E_d$ are the primitive idempotents of $\mathfrak{A}$.
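Before turning the Hadamard condition into polynomial constraints, it may help to recall it on the simplest example: the order-$4$ Fourier matrix $W=(i^{jk})_{0\leq j,k\leq 3}$, a classical Butson-type complex Hadamard matrix whose entries are $4$-th roots of unity. The following sketch (plain Python, written purely for illustration and not part of the classification) verifies $W\overline{W}^\top=nI$.

```python
# Order-4 Fourier matrix: a Butson-type complex Hadamard matrix with
# entries in {1, i, -1, -i}.
n = 4
W = [[1j ** (j * k) for k in range(n)] for j in range(n)]

# P = W * conjugate(W)^T should equal n I.
P = [[sum(W[r][t] * W[c][t].conjugate() for t in range(n))
      for c in range(n)] for r in range(n)]

assert all(abs(x) == 1 for row in W for x in row)      # unimodular entries
assert all(P[r][c] == (n if r == c else 0)
           for r in range(n) for c in range(n))        # W Wbar^T = n I
```

The same elementary check applies verbatim to the matrices classified below, once their eigenmatrices are known.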
Let $w_0=1$ and $w_1,\ldots,w_d$ be complex numbers with absolute value $1$. Set \begin{equation}\label{eq:W} W=\sum_{j=0}^d w_jA_j\in\mathfrak{A}. \end{equation} Define \begin{equation}\label{eq:gamma} \gamma_k=\sum_{j=0}^d w_jP_{k,j} \quad (k=0,1,\ldots,d). \end{equation} By \eqref{eq:pij}, \eqref{eq:W} and \eqref{eq:gamma} we have \begin{equation}\label{eq:WE} W=\sum_{k=0}^d \gamma_kE_k. \end{equation} Let $X_j$ {\rm($1\leq j\leq d$)} be indeterminates. For $k=1,2,\dots,d$, let $e_k$ be the polynomial defined by \begin{equation}\label{eq:ek} e_k=1+2\left(\sum_{j=1}^d P_{k,j}X_j+ \sum_{1\leq j_1<j_2\leq d}P_{k,j_1}P_{k,j_2}X_{j_1}X_{j_2}\right) +\sum_{j=1}^dP_{k,j}^2X_j^2-n. \end{equation} Then we have the following. \begin{lem}\label{lem:equiv} Assume that the matrix $W$ given in \eqref{eq:W} is a hermitian matrix. Then the following statements are equivalent: \begin{itemize} \item[\rm{(i)}] $W$ is a complex Hadamard matrix, \item[\rm{(ii)}] $\gamma_k^2=n$ for $k=1,\ldots,d$, \item[\rm{(iii)}] $(w_j)_{1\leq j\leq d}$ is a common zero of $e_k$ {\rm($k=1,\ldots,d$)}. \end{itemize} \end{lem} \begin{proof} By \eqref{eq:WE} we have $W\overline{W}^T=W^2=\sum_{k=0}^d\gamma_k^2E_k$. Therefore, (i) implies (ii). To prove the converse, it is enough to show that $\gamma_0^2=n$. Since \begin{align*} (W^2)_{j,j}&=\sum_{k=1}^nW_{j,k}\overline{W_{j,k}}\\ &=n, \end{align*} the diagonal entries of $W^2$ are all $n$. Thus, \begin{align*} n^2&=\tr W^2\\ &=\sum_{k=0}^d\gamma_k^2\tr E_k\\ &=\gamma_0^2+\sum_{k=1}^dn\tr E_k\\ &=\gamma_0^2+n\tr(I-E_0)\\ &=\gamma_0^2+n(n-1). \end{align*} Hence $\gamma_0^2=n$. By \eqref{eq:gamma} we have \[ \gamma_k^2=1+2\left(\sum_{j=1}^d P_{k,j}w_j+ \sum_{1\leq j_1<j_2\leq d}P_{k,j_1}P_{k,j_2}w_{j_1}w_{j_2}\right) +\sum_{j=1}^dP_{k,j}^2w_j^2 \] for $k=1,\dots,d$. Therefore, the equivalence of (ii) and (iii) follows. 
\end{proof} \section{Hermitian complex Hadamard matrices and nonsymmetric association schemes of class $3$}\label{sec.3-class} In this section, we classify nonsymmetric hermitian complex Hadamard matrices belonging to the Bose--Mesner algebra of a nonsymmetric commutative association scheme $\mathfrak{X}=(X,\{R_i\}_{i=0}^3)$ of class $3$, where $R_1^\top=R_2$, $R_3$ symmetric. We use the following result. \begin{lem}[{\cite[Lemma in (5.3)]{S}}]\label{lem:song} Let \[ \begin{pmatrix} 1&\frac{k_1}{2}&\frac{k_1}{2}&k_2\\ 1&\frac12(r+bi)&\frac12(r-bi)&-(r+1)\\ 1&\frac12(r-bi)&\frac12(r+bi)&-(r+1)\\ 1&\frac{s}{2}&\frac{s}{2}&-(s+1) \end{pmatrix} \] be the first eigenmatrix of $\mathfrak{X}$, where $r,s$ are integers and $i^2=-1$. Then one of the following holds. \begin{itemize} \item[\rm{(i)}] $(r,s,b^2)=(0,-(k_2+1),\frac{k_1(k_2+1)}{k_2})$, \item[\rm{(ii)}] $(r,s,b^2)=(-(k_2+1),0,(1+k_2)(1+k_1+k_2))$, \item[\rm{(iii)}] $(r,s,b^2)=(-1,k_1,k_1+1)$. \end{itemize} \end{lem} Note that, if we put \begin{equation}\label{eq:0930} (k_1,k_2)=(2a(2a-1),2a-1), \end{equation} in part (i) of Lemma~\ref{lem:song}, we have the matrix (\ref{d3P1}). Let $\mathfrak{A}$ be the Bose--Mesner algebra of $\mathfrak{X}$ which is the linear span of the adjacency matrices $A_0,A_1,A_2,A_3$ of $\mathfrak{X}$, where $A_1^\top=A_2$, $A_3$ symmetric. Let $w_0=1$ and $w_1,w_2,w_3$ be complex numbers of absolute value $1$. Set \begin{equation} \label{eq:w0W} W=w_0A_0+w_1A_1+w_2A_2+w_3A_3\in\mathfrak{A}. \end{equation} Then our first main theorem is the following. \begin{thm}\label{thm:main0925} Assume that the matrix \eqref{eq:w0W} is a hermitian complex Ha\-da\-mard matrix and not a real Ha\-da\-mard matrix. Then $\mathfrak{X}$ is a nonsymmetric association scheme whose unique nontrivial symmetric relation consists of $2a$ cliques of size $2a$, and the first eigenmatrix of $\mathfrak{X}$ is given by \eqref{d3P1}. Moreover, $w_1=\pm i$ and $w_3=1$. 
\end{thm} \begin{proof} Suppose that the matrix $W$ is a complex Hadamard matrix. Without loss of generality, we may assume $w_0=1$ in (\ref{eq:w0W}). Since $W$ is a hermitian matrix, we have $w_1w_2=1$ and $w_3=\pm1$. Since $W$ is not a real Hadamard matrix, we have $w_1-w_2\not=0$. By Lemma~\ref{lem:equiv}, $(w_i)_{i=1}^3$ is a common zero of the polynomials $e_k$ ($k=1,2,3$) defined in (\ref{eq:ek}). Since \[ e_1-e_2=bi(X_1-X_2)(r(X_1+X_2-2X_3)-2X_3+2), \] we have \begin{equation}\label{eq:1-1} r(w_1+w_2-2w_3)-2w_3+2=0. \end{equation} First assume that $w_3=1$. Then by (\ref{eq:1-1}) we have $r(w_1+w_2-2)=0$. Hence $r=0$. Therefore we have case (i) of Lemma~\ref{lem:song}. Then \begin{equation}\label{eq:nk1} (k_1,k_2)=\left(\frac{(s+1)b^2}{s},-(s+1)\right). \end{equation} After specializing $X_3=1$, we have \[ e_3-e_1=\frac{1}{4}((s-bi)X_1+(s+bi)X_2-2s)((s+bi)X_1+(s-bi)X_2-2s). \] Hence $((s-bi)w_1+(s+bi)w_2-2s)((s+bi)w_1+(s-bi)w_2-2s)=0$. Then by $w_2=w_1^{-1}$ we have \begin{equation}\label{eq:1-2} (w_1-1)^2((s-bi)w_1-(s+bi))((s+bi)w_1-(s-bi))=0. \end{equation} Put $w=(s+bi)/(s-bi)$. By $w_1\not=1$ we have $w_1\in\{w,\overline{w}\}$ by (\ref{eq:1-2}). We may assume $w_1=w$. After specializing $X_1=w$, $X_2=\overline{w}$ and $X_3=1$, we have \[ e_1=-(1+k_1+k_2)+\frac{4b^4s^2}{(b^2+s^2)^2}. \] Hence \begin{equation}\label{eq:1-3} k_1+k_2=-1+\frac{4b^4s^2}{(b^2+s^2)^2}. \end{equation} Then by (\ref{eq:nk1}), (\ref{eq:1-3}) we have \[ (b-s)(b+s)((s+1)b^4-(s-2)s^2b^2+s^4)=0. \] Since $b>0$ and $s=-(k_2+1)<-1$, we have $b-s\not=0$. First assume that $s=-b$. Then, by putting $b=2a$, we have $(k_1,k_2)=(2a(2a-1),2a-1)$ and $w_1=-i$. Therefore we have the first eigenmatrix (\ref{d3P1}). Next assume that \begin{equation}\label{eq:0921} (s+1)b^4-(s-2)s^2b^2+s^4=0. \end{equation} Then the discriminant of (\ref{eq:0921}) as a quadratic equation in $b^2$ is $s^4((s-4)^2-16)$. Since $b^2$ is rational, $(s-4)^2-16$ is a square. This implies $s\in\{0,-1,8,9\}$.
This contradicts $s=-(k_2+1)<-1$. Next assume that $w_3=-1$. Then by (\ref{eq:1-1}) we have $r(w_1+w_2+2)+4=0$. Put $g(x)=rx^2+2(r+2)x+r$. By $w_2=w_1^{-1}$ we have $g(w_1)=0$. Since $w_1\not\in\mathbb{R}$, we have $r+1<0$. Thus we have case (ii) of Lemma~\ref{lem:song}. After specializing $X_1=w_1$, $X_2=w_1^{-1}$, $X_3=-1$ and $s=0$, we have \[ e_1-e_3=\frac{(w_1+1)f(w_1)}{4w_1^2}, \] where $f(x)=((r+bi)x+r-bi)((r+bi)x^2+2(r+4)x+r-bi)$. Since $w_1\not=-1$, we have $f(w_1)=0$. Since \begin{align*} f(x)&=\frac{(r+bi)((r+bi)rx+r(r+4)-(3r+4)bi)}{r^2}g(x)\\ &\quad-\frac{4((r+1)b^2+r^2)}{r^2}((r+4)x-r) \end{align*} and $(r+4)w_1-r\not=0$, we have \begin{equation}\label{eq:0918} (r+1)b^2+r^2=0. \end{equation} Substituting $(r,b^2)=(-(k_2+1),(1+k_1+k_2)(k_2+1))$ in (\ref{eq:0918}), we have $(k_2+1)(k_2^2+k_1k_2-1)=0$. This is a contradiction. \end{proof} In the next section, we will construct a nonsymmetric association scheme $\mathfrak{X}$ of class $6$ on a Galois ring of characteristic $4$, with the first eigenmatrix \eqref{P6}. An association scheme with the first eigenmatrix (\ref{d3P1}) can be obtained by fusing some relations of $\mathfrak{X}$. \section{Association schemes on Galois rings} \label{sec.Galoisring} For the remainder of this section, we let $e\geq 3$ be an odd positive integer. We refer the reader to \cite{Y} for the basic theory of Galois rings. Let $F=\text{GF}(2)$ be the prime field of characteristic $2$ and $K=\text{GF}(2^e)$ be an extension of degree $e$. We identify $K$ with $F[x]/(\varphi(x))$, where $\varphi(x)$ is a primitive polynomial of degree $e$ over $F$, and we denote by $\zeta$ a root of $\varphi(x)$ in $K$. Let $\mathcal{A}=\mathbb{Z}/4\mathbb{Z}$. There exists a monic polynomial $\Phi(x)$ over $\mathcal{A}$ such that $\Phi(x)\equiv \varphi(x)$ mod $2\mathcal{A}[x]$ and $\Phi(x)$ divides $x^{2^e-1}-1$ in $\mathcal{A}[x]$ (see \cite[Theorem 1]{CS}).
The ring $\mathcal{R}=\mathcal{A}[x]/(\Phi(x))$ is called a Galois ring, and it is a local ring with maximal ideal $\mathcal{P}=2\mathcal{R}$ and residue field $\mathcal{R}/\mathcal{P}\cong K$. Let $\xi$ be the image of $x$ in $\mathcal{R}$, so that $\xi+\mathcal{P}$ is mapped to $\zeta$ under the isomorphism $\mathcal{R}/\mathcal{P}\cong K$. Then $\mathcal{R}=\mathcal{A}[\xi]$ and $\xi$ has order $2^e-1$ in $\mathcal{R}$. Let $\mathcal{T}$ be the cyclic group of order $2^e-1$ generated by $\xi$. Since $\mathcal{T}$ is mapped bijectively onto $K\setminus\{0\}$ under the natural homomorphism $\mathcal{R}\rightarrow K$, each element $\alpha\in \mathcal{R}$ is uniquely expressed as \[ \alpha=\alpha_0+2\alpha_1, \quad \alpha_0,\alpha_1\in\mathcal{T}\cup\{0\}. \] The unit group $\mathcal{R}^{\ast}$ is the direct product of $\mathcal{T}$ and the principal unit group $\mathcal{E}=1+\mathcal{P}$. Let $\mathcal{T}_\delta=\{\xi^j\mid 0\leq j\leq 2^e-2,\;\mathrm{Tr}(\zeta^j)=\delta\}$ ($\delta=0,1$). We set $\mathcal{P}_0=2\mathcal{T}_0\cup\{0\}$ and $H=1+\mathcal{P}_0$. Thus $\mathcal{P}_0$ and $H$ are subgroups of index $2$ in the additive group $\mathcal{P}$ and the multiplicative group $\mathcal{E}$, respectively. Setting $b=2^{e-1}$, we have $b=|\mathcal{P}_0|=|H|$. The mapping $\xi\mapsto\xi^2$ can be extended to a ring automorphism $f$ of $\mathcal{R}$ which fixes $\mathcal{A}$ elementwise. The ring automorphism $f$ is called the {\em Frobenius automorphism}. For $\alpha=\alpha_0+2\alpha_1$ ($\alpha_0,\alpha_1\in\mathcal{T}\cup\{0\}$), we have \[ \alpha^f=\alpha_0^2+2\alpha_1^2, \] and the trace of $\alpha\in\mathcal{R}$ is defined by \[ \mathcal{S}(\alpha)=\alpha+\alpha^f+\cdots+\alpha^{f^{e-1}}. \] Note that $\mathcal{S}$ is an $\mathcal{A}$-linear mapping from $\mathcal{R}$ to $\mathcal{A}$. The additive characters of $\mathcal{R}$ are given by $\alpha\mapsto\chi(\alpha\beta)$ ($\beta\in\mathcal{R}$), where \[ \chi(\alpha)=i^{e\mathcal{S}(\alpha)}, \] and $i^2=-1$. 
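The construction so far can be made concrete in the smallest admissible case $e=3$, where $\varphi(x)=x^3+x+1$, $b=2^{e-1}=4$, and $\mathcal{R}=\mathrm{GR}(4,3)$. The sketch below (plain Python; the brute-force Hensel lift and all function names are our illustrative choices, not part of the paper) builds $\mathcal{R}$, the Teichm\"uller set $\mathcal{T}$, the trace $\mathcal{S}$, and the character $\chi$, and numerically confirms that $\chi$ takes the constant value $i$ on $H$ and that the character sums of $H$ and $\mathcal{T}H$ both equal $bi$, in agreement with the lemmas that follow.

```python
from itertools import product

# Illustrative smallest case: e = 3, phi(x) = x^3 + x + 1, K = GF(8), b = 4.
MOD, E, DEG = 4, 3, 3
N_UNITS = 2 ** E - 1           # order of the Teichmueller group generated by xi

def polydivmod(num, den):
    """Long division in (Z/4)[x]; coefficient lists, lowest degree first,
    den monic.  Returns num reduced in place: the low coefficients hold
    the remainder, the high ones are zeroed out."""
    num = num[:]
    for i in range(len(num) - len(den), -1, -1):
        c = num[i + len(den) - 1] % MOD
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - c * d) % MOD
    return num

# Hensel lift: the monic Phi with Phi = phi (mod 2) and Phi | x^7 - 1 in (Z/4)[x].
x7m1 = [3] + [0] * 6 + [1]     # x^7 - 1, with -1 = 3 mod 4
Phi = next([c0, c1, c2, 1] for c0, c1, c2 in product((1, 3), (1, 3), (0, 2))
           if all(r == 0 for r in polydivmod(x7m1, [c0, c1, c2, 1])))

def mul(u, v):
    """Product in R = (Z/4)[x]/(Phi); elements are length-3 tuples."""
    prod_ = [0] * (2 * DEG - 1)
    for i in range(DEG):
        for j in range(DEG):
            prod_[i + j] = (prod_[i + j] + u[i] * v[j]) % MOD
    return tuple(polydivmod(prod_, Phi)[:DEG])

def add(u, v):
    return tuple((x + y) % MOD for x, y in zip(u, v))

xi, one, zero = (0, 1, 0), (1, 0, 0), (0, 0, 0)
T, p = [], one
for _ in range(N_UNITS):
    T.append(p)
    p = mul(p, xi)
T_and_0 = set(T) | {zero}      # Teichmueller representatives of R mod P

def frobenius(a):
    """a = a0 + 2 a1 (a0, a1 Teichmueller)  ->  a0^2 + 2 a1^2."""
    a0 = next(t for t in T_and_0 if all((x - y) % 2 == 0 for x, y in zip(a, t)))
    diff = tuple((x - y) % MOD for x, y in zip(a, a0))
    a1 = next(t for t in T_and_0 if tuple(2 * x % MOD for x in t) == diff)
    return add(mul(a0, a0), tuple(2 * c % MOD for c in mul(a1, a1)))

def S(a):
    """Trace S(a) = a + a^f + a^(f^2), an element of Z/4 (a constant)."""
    total, term = zero, a
    for _ in range(E):
        total, term = add(total, term), frobenius(term)
    assert total[1] == total[2] == 0
    return total[0]

I4 = (1, 1j, -1, -1j)
chi = lambda a: I4[(E * S(a)) % 4]   # chi(a) = i^(e S(a)), computed exactly

# P0 = 2 T0 u {0} and H = 1 + P0, as in the text.
P0 = {tuple(2 * x % MOD for x in t) for t in T if S(t) % 2 == 0} | {zero}
H = {add(one, q) for q in P0}
b = len(H)
assert b == 4
assert all(chi(h) == 1j for h in H)             # chi = i on H, eq. (chiH)
assert sum(chi(h) for h in H) == b * 1j         # lambda(H) = bi
TH = {mul(t, h) for t in T for h in H}
assert sum(chi(g) for g in TH) == b * 1j        # lambda(TH) = bi
```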
Define \[ \lambda_{\alpha}(A)=\sum_{\beta\in A}\chi(\alpha\beta), \] for $\alpha\in\mathcal{R}$ and $A\subset\mathcal{R}$. Set $\lambda=\lambda_1$. Observe \begin{align} \mathcal{E}&=H\cup(-H), \nonumber \\ \alpha\mathcal{T}&=\mathcal{P}\setminus\{0\} \quad \text{for any $\alpha\in\mathcal{P}\setminus\{0\}$}, \label{eq:aT} \\ \mathrm{Tr}(\zeta^j)&\equiv \mathcal{S}(\xi^j)\bmod{2}. \nonumber \end{align} Since $e$ is odd, we obtain \begin{align} \chi(\alpha)&= \begin{cases} 1 & \ \text{if} \ \alpha\in\mathcal{P}_0, \\ -1 & \ \text{if} \ \alpha\in\mathcal{P}\setminus\mathcal{P}_0, \end{cases}\label{chival} \\ \chi(\alpha)&=i \quad \text{for} \ \alpha\in H. \label{chiH} \end{align} \begin{lem} \label{lem:aP0} For any $\alpha\in\mathcal{R}^{\ast}\setminus\mathcal{E}$ we have the following. \begin{align} \lambda(\alpha\mathcal{P}_0)&=0, \label{eq:aP0} \\ \lambda(\alpha H)&=0. \label{eq:aH} \end{align} \end{lem} \begin{proof} Note that the correspondence $2\xi^j\mapsto \zeta^j$ extends to an isomorphism from $\mathcal{P}$ to $K$ as additive groups. Under this isomorphism, $\mathcal{P}_0$ is mapped to $\mathrm{Tr}^{-1}(0)$. Since $\alpha\notin\mathcal{E}$, the image $a=\alpha+\mathcal{P}$ of $\alpha$ in $K=\mathcal{R}/\mathcal{P}$ is not $1$. By the non-degeneracy of the trace function on $K$, we have $a\mathrm{Tr}^{-1}(0)\neq\mathrm{Tr}^{-1}(0)$. This implies $\alpha\mathcal{P}_0\neq\mathcal{P}_0$. Then \[A=\alpha\mathcal{P}_0\cap\mathcal{P}_0\] is a subgroup of index $4$ in the additive group $\mathcal{P}$. By \eqref{chival} we have \[ \chi(\beta)= \begin{cases} 1 & \text{if} \quad \beta\in A, \\ -1 & \text{if} \quad \beta\in\alpha\mathcal{P}_0\setminus A. \end{cases} \] Therefore $\lambda(\alpha\mathcal{P}_0)=0$. Since $\alpha H=\alpha+\alpha\mathcal{P}_0$, we have $\lambda(\alpha H)=\lambda(\alpha)\lambda(\alpha\mathcal{P}_0) =0$. \end{proof} Recall $b=|\mathcal{P}_0|=|H|$. \begin{lem} \label{lem:lamH} We have the following. 
\begin{itemize} \item[\rm{(i)}] $\lambda(H)=bi$, \item[\rm{(ii)}] $\lambda(\mathcal{T} H)=bi$. \end{itemize} \end{lem} \begin{proof} (i) Obvious from \eqref{chiH}. (ii) Since $\mathcal{T}\setminus\{1\}\subset\mathcal{R}^{\ast}\setminus\mathcal{E}$, we have $\lambda((\mathcal{T}\setminus\{1\})H)=0$ by \eqref{eq:aH}. Then the result follows from (i). \end{proof} \begin{lem} \label{lem:lamaH} We have \[ \lambda_{\alpha}(H)= \begin{cases} b & \text{if} \quad \alpha\in\mathcal{P}_0, \\ -b & \text{if} \quad \alpha\in\mathcal{P}\setminus\mathcal{P}_0. \end{cases} \] \end{lem} \begin{proof} Let $\alpha\in\mathcal{P}$. Then $\alpha H=\{\alpha\}$. Hence $\lambda_{\alpha}(H)=|H|\chi(\alpha)$, and the result follows from \eqref{chival}. \end{proof} \begin{lem} \label{lem:lamaTH} For any $\alpha\in\mathcal{P}\setminus\{0\}$ we have \[ \lambda_{\alpha}(\mathcal{T} H)=-b. \] \end{lem} \begin{proof} By \eqref{eq:aT} we have \begin{align*} \lambda_{\alpha}(\mathcal{T} H)&=\sum_{\beta\in\mathcal{P}\setminus\{0\}}\lambda_{\beta}(H) \\ &=\sum_{\beta\in\mathcal{P}}\lambda_{\beta}(H)-\lambda_0(H) \\ &=-\lambda_0(H) && (\text{by Lemma~\ref{lem:lamaH}}) \\ &=-b. \end{align*} \end{proof} Then our second main theorem is the following. \begin{thm} \label{thm:1} Let \begin{align*} S_0&=\{0\}, \\ S_1&=(\mathcal{T}\setminus\{1\})H, \\ S_2&=(\mathcal{T}\setminus\{1\})(-H), \\ S_3&=H, \\ S_4&=-H, \\ S_5&=\mathcal{P}_0\setminus\{0\}, \\ S_6&=\mathcal{P}\setminus\mathcal{P}_0. \end{align*} Then $\lambda_{\alpha}(S_j)$ is constant for any $\alpha\in S_i$ {\rm($i=0,1,\ldots,6$)}. Put $p_{i,j}=\lambda_{\alpha}(S_j)$ for $\alpha\in S_i$ {\rm($i,j=0,1,\ldots,6$)}. Then $P=(p_{i,j})_{\substack{0\leq i\leq 6 \\ 0\leq j\leq 6}}$ is given by {\rm(\ref{P6})}. 
In particular, if we set \[ R_j=\{(\alpha,\beta)\in\mathcal{R}\times\mathcal{R} \mid \alpha-\beta\in S_j \} \quad (j=0,1,\ldots,6), \] then $\mathfrak{X}=(\mathcal{R},\{R_i\}_{i=0}^6)$ is a nonsymmetric association scheme of class $6$ with the first eigenmatrix \eqref{P6}. \end{thm} \begin{proof} It is easy to check that $|S_1|=|S_2|=2b(b-1)$, $|S_3|=|S_4|=b$, $|S_5|=b-1$, and $|S_6|=b$. So we have $p_{0,j}$ ($j=0,1,\ldots,6$) as in \eqref{P6}. It is trivial that $\lambda_{\alpha}(S_0)=1$ for any $\alpha\in S_0\cup S_1\cup \cdots\cup S_6$. Hence $p_{j,0}=1$ for $j=0,1,\ldots,6$ as in \eqref{P6}. We claim that \begin{align} \lambda_{\alpha}(S_1)&=\begin{cases} bi & \text{if} \quad \alpha\in S_1,\\ 0 & \text{if} \quad \alpha\in S_3,\\ -2b & \text{if} \quad \alpha\in S_5,\\ 0 & \text{if} \quad \alpha\in S_6, \end{cases} \label{case:1} \displaybreak[0]\\ \lambda_{\alpha}(S_3)&=\begin{cases} 0 & \text{if} \quad \alpha\in S_1,\\ bi & \text{if} \quad \alpha\in S_3,\\ b & \text{if} \quad \alpha\in S_5,\\ -b & \text{if} \quad \alpha\in S_6, \end{cases} \label{case:2} \displaybreak[0]\\ \lambda_{\alpha}(S_5)&=\begin{cases} -1 & \text{if} \quad \alpha\in S_1,\\ b-1 & \text{if} \quad \alpha\in S_3,\\ b-1 & \text{if} \quad \alpha\in S_5,\\ b-1 & \text{if} \quad \alpha\in S_6. \end{cases} \label{case:3} \end{align} First we prove \eqref{case:1}. Assume that $\alpha\in S_1$. Then $\lambda_{\alpha}(\mathcal{T} H)=\lambda(\mathcal{T} H)=bi$ by Lemma~\ref{lem:lamH} (ii), and $\lambda_{\alpha}(H)=\lambda(\alpha H)$. Since $S_1\subset\mathcal{R}^{\ast}\setminus\mathcal{E}$, we have $\lambda(\alpha H)=0$ by \eqref{eq:aH}. Hence $\lambda_{\alpha}(S_1)=bi$. Next assume that $\alpha\in S_3$. Similarly, we have $\lambda_{\alpha}(\mathcal{T} H)=\lambda(\mathcal{T} H)$ and $\lambda_{\alpha}(H)=\lambda(H)$. By Lemma~\ref{lem:lamH} (i), (ii) we have $\lambda_{\alpha}(S_1)=0$. Next assume that $\alpha\in S_5\cup S_6$. 
Then by Lemma~\ref{lem:lamaH} and Lemma~\ref{lem:lamaTH}, we have $\lambda_{\alpha}(S_1)=-2b$ or $0$ depending on $\alpha\in S_5$ or $\alpha\in S_6$. This completes the proof of \eqref{case:1}. Next we prove \eqref{case:2}. Assume that $\alpha\in S_1$. Then $\lambda_{\alpha}(S_3)=\lambda(\alpha S_3)$. Since $S_1\subset\mathcal{R}^{\ast}\setminus\mathcal{E}$, we have $\lambda(\alpha S_3)=0$ by \eqref{eq:aH}. Next assume that $\alpha\in S_3$. Then $\lambda_{\alpha}(S_3)=\lambda(S_3)=bi$ by Lemma~\ref{lem:lamH} (i). Next assume that $\alpha\in S_5\cup S_6$. Then by Lemma~\ref{lem:lamaH}, $\lambda_{\alpha}(S_3)=b$ or $-b$ depending on $\alpha\in S_5$ or $\alpha\in S_6$. This completes the proof of \eqref{case:2}. Finally we prove \eqref{case:3}. Assume that $\alpha\in S_1$. Since $S_1\subset\mathcal{R}^{\ast}\setminus\mathcal{E}$, we have $\lambda_{\alpha}(S_5)=\lambda(\alpha(\mathcal{P}_0\setminus\{0\}))=-1$ by \eqref{eq:aP0}. Next assume that $\alpha\in S_3$. Since $\alpha\mathcal{P}_0=\mathcal{P}_0$, we have $\lambda_{\alpha}(S_5) =\lambda(\mathcal{P}_0\setminus\{0\})=|S_5|=b-1$. Next assume that $\alpha\in S_5\cup S_6$. Then $\alpha(\mathcal{P}_0\setminus\{0\})=\{0\}$. Hence $\lambda_{\alpha}(S_5)=|S_5|=b-1$. This completes the proof of \eqref{case:3}. From \eqref{case:1}--\eqref{case:3} we have $p_{1,1}$, $p_{3,1}$, $p_{5,1}$, $p_{6,1}$, $p_{1,3}$, $p_{3,3}$, $p_{5,3}$, $p_{6,3}$, $p_{1,5}$, $p_{3,5}$, $p_{5,5}$, and $p_{6,5}$ as in \eqref{P6}. Since $\{S_j\}_{j=0}^6$ is a partition of $\mathcal{R}$, we have $\sum_{j=0}^6\lambda_{\alpha}(S_j)=0$ for $\alpha\not=0$. Thus, it is enough to check that $\lambda_{\alpha}(S_j)$ is a constant independent of $\alpha\in S_i$ for $i=1,\ldots,6$ and $j=1,\ldots,5$. Since $S_2=-S_1$, we have $p_{2,j}=\overline{p_{1,j}}$ and $p_{j,2}=\overline{p_{j,1}}$ for $j=1,\ldots,6$. Since $S_4=-S_3$, we have $p_{4,j}=\overline{p_{3,j}}$ and $p_{j,4}=\overline{p_{j,3}}$ for $j=1,\ldots,6$. 
Therefore we find all $p_{i,j}$ as in \eqref{P6} from \eqref{case:1}--\eqref{case:3}. \end{proof} Let $\mathfrak{X}=(X,\{R_i\}_{i=0}^d)$ be a commutative association scheme. A partition $\Lambda_0, \Lambda_1, \ldots, \Lambda_e$ of the index set $\{0,1,\ldots,d\}$ of the association scheme $\mathfrak{X}$ is said to be {\it admissible} if $\Lambda_0=\{0\}$, $\Lambda_i\not=\emptyset$ $(1\leq i\leq e)$ and $\Lambda_i'=\Lambda_j$ for some $j$ ($1\leq j\leq e$), where $\Lambda_i'=\{\alpha' \mid \alpha\in\Lambda_i\}$, $R_{\alpha'}=\{(x,y)\mid (y,x)\in R_{\alpha}\}$. Let $R_{\Lambda_i}=\bigcup_{\alpha\in\Lambda_i}R_{\alpha}$. If $\mathfrak{Y}=(X,\{R_{\Lambda_i}\}_{i=0}^e)$ becomes an association scheme, then it is called a {\it fusion} scheme of $\mathfrak{X}$. \medskip\par\noindent {\bf Bannai--Muzychuk criterion} (\cite{B, Muzychuk}). Let $\mathfrak{X}$ be a commutative association scheme with the first eigenmatrix $P$. Let $\{\Lambda_j\}_{j=0}^e$ be an admissible partition of the index set $\{0,1,\ldots,d\}$. Then $\{\Lambda_j\}_{j=0}^e$ gives rise to a fusion scheme $\mathfrak{Y}$ if and only if there exists a partition $\{\Delta_i\}_{i=0}^e$ of $\{0,1,\ldots,d\}$ with $\Delta_0=\{0\}$ such that each $(\Delta_i,\Lambda_j)$ block of the first eigenmatrix $P$ has a constant row sum. Moreover, the constant row sum of the $(\Delta_i,\Lambda_j)$ block is the $(i,j)$ entry of the first eigenmatrix of $\mathfrak{Y}$. \medskip Let $\mathfrak{X}$ be an association scheme given in Theorem~\ref{thm:1}. Fusion schemes of $\mathfrak{X}$ with at least three classes are listed in Table~\ref{table}. 
\begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|c|} \hline &fused relations & class & nonsymmetric or symmetric \\ \hline\hline $\mathfrak{X}_1$&$\{1,2\}$ & $5$ & nonsymmetric \\ \hline $\mathfrak{X}_2$&$\{3,4\}$ & $5$ & nonsymmetric \\ \hline $\mathfrak{X}_3$&$\{1,2\},\{3,4\}$ & $4$ & symmetric \\ \hline $\mathfrak{X}_4$&$\{3,4,6\}$ & $4$ & nonsymmetric \\ \hline $\mathfrak{X}_5$&$\{1,2\},\{3,4\},\{5,6\}$ & $3$ & symmetric \\ \hline $\mathfrak{X}_6$&$\{1,2,3,4\}$ & $3$ & symmetric \\ \hline $\mathfrak{X}_7$&$\{1,3\},\{2,4\},\{5,6\}$ & $3$ & nonsymmetric \\ \hline $\mathfrak{X}_8$&$\{1,4\},\{2,3\},\{5,6\}$ & $3$ & nonsymmetric \\ \hline \end{tabular} \caption{Fusion schemes of $\mathfrak{X}$} \label{table} \end{center} \end{table} The first eigenmatrices of $\mathfrak{X}_7$ and $\mathfrak{X}_8$ in Table~\ref{table} coincide with the matrix (\ref{d3P1}) after putting $a=b$. Put $b=4^p$. We can verify that $\mathfrak{X}_7$ and $\mathfrak{X}_8$ for $p=3$ are isomorphic, and that $\mathfrak{X}_7$ and $\mathfrak{X}_8$ for $p=5$ are not isomorphic. The computation needed to verify these facts was done with the help of Magma \cite{magma}. \section{Complex Hadamard matrices and Galois rings}\label{sec:5} We continue to use the notation introduced in Section~\ref{sec.Galoisring}. Let $\{A_j\}_{j=0}^6$ be the set of the adjacency matrices of $\mathfrak{X}$. Then $A_1^T=A_2$, $A_3^T=A_4$, and $A_5$, $A_6$ are symmetric. We call the algebra $\mathfrak{A}=\langle A_0,A_1,\ldots,A_6\rangle$ the Bose--Mesner algebra of $\mathfrak{X}$. Our next theorem gives a classification of hermitian complex Hadamard matrices belonging to $\mathfrak{A}$. \begin{thm}\label{thm:2} Let $w_0=1$ and $w_j$ {\rm(}$1\leq j\leq6${\rm)} be complex numbers of absolute value $1$. Set \[ W=\sum_{j=0}^6w_jA_j\in\mathfrak{A}, \] and assume that $W$ is hermitian.
Then, $W$ is a complex Hadamard matrix if and only if \begin{align} W=&A_0+\epsilon_1i(A_1-A_2)+\epsilon_2i(A_3-A_4)+A_5+A_6, \quad \text{or} \label{eq:W1}\\ W=&A_0+\epsilon_1i(A_1-A_2)+\epsilon_2(A_3+A_4)+A_5-A_6,\label{eq:W2} \end{align} for some $\epsilon_1,\epsilon_2\in\{\pm1\}$. \end{thm} \begin{proof} Recall that $\{A_j\}_{j=0}^6$ is the set of the adjacency matrices of $\mathfrak{X}$ given in Theorem~\ref{thm:1}. Notice that $A_1^T=A_2$, $A_3^T=A_4$, while $A_5$ and $A_6$ are symmetric. Since $W$ is hermitian, we have $w_1w_2=1$ and $w_3w_4=1$. Suppose that the matrix $W$ is a complex Hadamard matrix. Then $(w_i)_{i=1}^6$ is a common zero of the polynomials $e_k$ ($k=1,\ldots,6$) defined in \eqref{eq:ek}. Since \begin{equation}\label{01-7} e_2-e_1=4ib(X_1-X_2)(X_5-1), \end{equation} we have $w_1=w_2$ or $w_5=1$. Suppose first that $w_1=w_2$. After specializing $X_1=X_2$ and $X_1^2=1$, we have $$e_1=(X_5-(2b+1))(X_5+2b-1).$$ Then $|w_5|=|2b\pm1|\geq 2b-1>1$, which is a contradiction. Therefore, we must have $w_5=1$. After specializing $X_5=1$, we have $$e_1=-b^2(X_1-X_2-2i)(X_1-X_2+2i).$$ Hence $w_1=-w_2\in\{\pm i\}$. Moreover, after specializing $X_5=1$, $X_1=-X_2$ and $X_1^2=-1$, we have \begin{align} e_4-e_3&=4ib^2(X_6-1)(X_3-X_4),\label{01-8}\\ e_5-e_6&=4b^2(X_6+1)(X_3+X_4).\label{01-9} \end{align} By \eqref{01-8}, we have $w_6=1$ or $w_3=w_4$. If $w_6=1$, then $w_4=-w_3\in\{\pm i\}$ by \eqref{01-9}. Therefore we have \eqref{eq:W1}. If $w_3=w_4$, then $w_3\in\{\pm1\}$ and $w_6=-1$ by \eqref{01-9}. Therefore we have \eqref{eq:W2}. Conversely, assume that $W$ is one of the matrices \eqref{eq:W1}, \eqref{eq:W2}. We show that $W$ is a complex Hadamard matrix. By Lemma~\ref{lem:equiv}, it suffices to show that $(w_i)_{i=1}^6$ is a common zero of the polynomials \eqref{eq:ek}, which is a routine verification.
\end{proof} We note that the matrix \eqref{eq:W1} belongs to the Bose--Mesner algebra of $\mathfrak{X}_7$ or $\mathfrak{X}_8$ in Table~\ref{table}, depending on $\epsilon_1=\epsilon_2$ or $\epsilon_1=-\epsilon_2$. Also, the matrix \eqref{eq:W2} belongs to the Bose--Mesner algebra of $\mathfrak{X}_2$ or $\mathfrak{X}_4$ in Table~\ref{table}, depending on $\epsilon_2=1$ or $\epsilon_2=-1$. Therefore, no proper fusion scheme of $\mathfrak{X}$ contains all the matrices \eqref{eq:W1} and \eqref{eq:W2} in its Bose--Mesner algebra $\mathfrak{A}$. We also note that Ma \cite{Ma} considered association schemes which are invariant under the multiplication by $\mathcal{T}$. The only association scheme in Table~\ref{table} which is invariant under the multiplication by $\mathcal{T}$ is $\mathfrak{X}_7$, and it has the first eigenmatrix described by \cite[Theorem 7]{Ma}.
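The classifications above admit a direct numerical sanity check through condition (ii) of Lemma~\ref{lem:equiv}: for the coefficients singled out in Theorem~\ref{thm:main0925} and Theorem~\ref{thm:2}, the numbers $\gamma_k=\sum_jw_jP_{k,j}$ must satisfy $\gamma_k^2=n$. The sketch below (plain Python, purely illustrative; function names are ours) performs this check on \eqref{d3P1} and \eqref{P6} for a few parameter values.

```python
def gammas(P, w):
    """Eigenvalues gamma_k = sum_j w_j P[k][j] of W = sum_j w_j A_j."""
    return [sum(wj * pkj for wj, pkj in zip(w, row)) for row in P]

def satisfies_lemma_ii(P, w):
    """Condition (ii): gamma_k^2 = n for every k, where n is the sum of
    the valencies, i.e. the sum of row 0 of the eigenmatrix."""
    n = sum(P[0])
    return all(abs(g * g - n) < 1e-9 for g in gammas(P, w))

# Class-3 case: eigenmatrix (d3P1) with w = (1, i, -i, 1), as in the theorem.
for a in (1, 2, 3, 5):
    P3 = [[1, a * (2 * a - 1), a * (2 * a - 1), 2 * a - 1],
          [1,  a * 1j, -a * 1j, -1],
          [1, -a * 1j,  a * 1j, -1],
          [1,      -a,      -a, 2 * a - 1]]
    assert satisfies_lemma_ii(P3, [1, 1j, -1j, 1])

# Class-6 case: eigenmatrix (P6) with w as in (eq:W1) and (eq:W2),
# taking epsilon_1 = epsilon_2 = 1.
for b in (4, 16, 64):
    P6 = [[1, 2 * b * (b - 1), 2 * b * (b - 1),      b,      b, b - 1,  b],
          [1,          b * 1j,         -b * 1j,      0,      0,    -1,  0],
          [1,         -b * 1j,          b * 1j,      0,      0,    -1,  0],
          [1,               0,               0, b * 1j, -b * 1j, b - 1, -b],
          [1,               0,               0, -b * 1j, b * 1j, b - 1, -b],
          [1,          -2 * b,          -2 * b,      b,      b, b - 1,  b],
          [1,               0,               0,     -b,     -b, b - 1,  b]]
    assert satisfies_lemma_ii(P6, [1, 1j, -1j, 1j, -1j, 1, 1])   # (eq:W1)
    assert satisfies_lemma_ii(P6, [1, 1j, -1j,  1,   1, 1, -1])  # (eq:W2)

# A generic choice of coefficients fails the condition, as expected:
assert not satisfies_lemma_ii(P3, [1, 1, 1, 1])
```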
\section{Introduction} When studying a stellar dynamical model (single or multi--component), the fact that the Jeans equations have a physically acceptable solution is not a sufficient criterion for the validity of the model: the minimal requirement to be met by a physically acceptable model is the positivity of the phase--space distribution function (DF) of each distinct component. A model satisfying this essential requirement (which is much weaker than the stability of the model) is called a {\it consistent} model; moreover, when the total potential is determined by the model density components through the Poisson equation, the model is called {\it self--consistent}. In other words, a self--consistent model represents a physically acceptable, self--gravitating system. Two general strategies can be used to construct a (self) consistent model, or to check whether a proposed model is (self) consistent: they are commonly referred to as the ``$f$--to--$\rho$'' and the ``$\rho$--to--$f$'' approaches (\cite{bt87}, Chap. 4, hereafter BT87). An example of the first approach is the extensive survey of self--consistent two--component spherical galaxy models carried out by Bertin and co--workers (Bertin, Saglia, and Stiavelli 1992), where two distribution functions of the $f_{\infty}$ form (and so positive by construction; \cite{bs84}) are assumed for the stellar and dark matter components. The main problem with this approach is that generally the spatial density is not expressible in terms of simple, or at least well--known, functions, and so only numerical investigations are usually feasible. In the second approach, the density distribution is given, and assumptions about the model internal dynamics are made, making the comparison with the data simpler. However, the difficulties inherent in recovering the DF in many cases prevent a simple consistency analysis.
In particular, in order to recover the DF of spherical models with anisotropy, the Osipkov--Merritt technique (\cite{osi79,mer85a}, hereafter OM) has been developed from the original Eddington (1916) method for isotropic systems, and has been widely used. Examples of {\it numerical} application of the OM inversion to one and two--component spherical galaxies can be found in the literature (see, e.g., \cite{cp92}, hereafter CP92; \cite{hio94}; Carollo, de Zeeuw, and van der Marel 1995a; \cite{cl97}, hereafter CL97). If one is just interested in the (self) consistency of a stellar system, the previous methods obviously give ``too much'', i.e., they give the full DF. In the OM framework, a simpler approach to checking the (self) consistency of spherically symmetric, multi--component models is given by the method described in CP92. This method directly uses the radial density profile of each component and the total potential, and gives necessary and sufficient conditions for the model (self) consistency, avoiding the necessity of recovering the DF itself. Moreover, since it requires only spatial differentiation and inequality checks, this method is best suited for analytical investigations. The importance of studying multi--component galaxy models cannot be overestimated: in fact, it is now accepted that a fraction of the mass in galaxies and clusters of galaxies is made of a dark component, whose density distribution -- albeit not well constrained by observations -- differs from that of the visible one (see, e.g., \cite{bbb94,czm95b,bc97,gjsb98}). Moreover, there is increasing evidence for the presence of massive black holes (BHs) at the center of most (if not all) elliptical galaxies (see, e.g., \cite{hft94,vzr97a}, van der Marel, de Zeeuw, and Rix 1997b, \cite{r98}).
It follows that the obvious generalization of the one--component spherical models (the dynamicist's zeroth--order approximation of real galaxies) is not only in the direction of the actively developed modeling of axisymmetric and triaxial systems (see, e.g., de Zeeuw (1996) for a review) but also in the direction of the construction of two--component analytical models, and in the study of their phase--space properties, a field far less developed. From this point of view, the first--order approximation of real galaxies is the construction of analytical, spherically symmetric, (self) consistent {\it two--component} galaxy models. Unfortunately, few examples of two--component systems in which both the spatial density and the DF are analytically known are available, namely the very remarkable axisymmetric Binney--Evans model (\cite{bin81,eva93}) and the spherically symmetric two--component Hernquist model (HH models, Ciotti 1996, hereafter C96), and so it would be particularly interesting to find other members of this exclusive club. In C96 the (successful) choice of the Hernquist density distribution (\cite{her90}, hereafter H90) as the building block for a two--component model with an analytical DF was suggested by the extremely simple (and algebraic) expression of its potential as a function of radius. Moreover, the application of the CP92 method to HH models proved to be both simple and fruitful, explaining many properties of the derived DF. Along this line, a natural and promising extension of the HH models is obtained by considering the wider family of spherically symmetric, two--component, anisotropic $(\gamma_1,\gamma_2)$ models. This family of models is obtained by the superposition of two $\gamma$ models [see equation (7)] with different total masses, scale--lengths, and slopes $\gamma$.
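For reference, the $\gamma$ models entering this family have the standard density $\rho(r)=(3-\gamma)Ma/[4\pi r^{\gamma}(r+a)^{4-\gamma}]$, cumulative mass $M(r)=M[r/(r+a)]^{3-\gamma}$, and potential $\Phi(r)=-GM\{1-[r/(r+a)]^{2-\gamma}\}/[a(2-\gamma)]$ for $\gamma\neq2$ (Dehnen 1993; H90 for $\gamma=1$). A minimal numerical sketch (illustrative units $G=M=a=1$; not code from the paper) checks these relations:

```python
import math

G = M = a = 1.0  # illustrative units, not the paper's normalization

def rho(r, g):
    """gamma-model density (Dehnen 1993; g = 1 is the Hernquist model)."""
    return (3 - g) * M * a / (4 * math.pi * r**g * (r + a)**(4 - g))

def mass(r, g):
    """Cumulative mass M(r) = M [r/(r+a)]^(3-g)."""
    return M * (r / (r + a)) ** (3 - g)

def phi(r, g):
    """Potential for g != 2; for g = 1 this is the Hernquist -GM/(r+a)."""
    return -G * M / (a * (2 - g)) * (1 - (r / (r + a)) ** (2 - g))

# Consistency checks: the g = 1 potential is the Hernquist one, and
# dM/dr = 4 pi r^2 rho(r) (central finite difference).
for r in (0.1, 1.0, 10.0):
    assert abs(phi(r, 1) + G * M / (r + a)) < 1e-12
r, h = 2.0, 1e-6
for g in (0.0, 1.0):
    dMdr = (mass(r + h, g) - mass(r - h, g)) / (2 * h)
    assert abs(dMdr - 4 * math.pi * r**2 * rho(r, g)) < 1e-6
```

In these units, a (1,0) model corresponds to superposing the $g=1$ and $g=0$ profiles with independent masses and scale--lengths.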
The mass concentration and amount of mass of the two distributions are described by four free parameters, and orbital anisotropy is allowed in both components, following the OM prescription. Because the Hernquist profile is obtained for $\gamma=1$, it will be referred to as a $\gamma=1$ model, and so with the adopted nomenclature the HH models discussed in C96 will be called (1,1) models. Note that an increasing interest in $(\gamma_1,\gamma_2)$ models is seen in simulation and observational works: e.g., Pellegrini and Ciotti (1998) used $(2,1)$ models in their numerical simulations of hot gas flows in X--ray emitting elliptical galaxies, and Loewenstein and White (1999) used galaxy models similar to $(1,1)$ models in the inner regions in order to observationally constrain the properties of dark matter halos around ellipticals. As expected, it is not possible to find the analytical DF for $(\gamma_1,\gamma_2)$ models in the general case, but the fact that the DF of (1,0) models with OM anisotropy is completely expressible in an analytical way, as proved here, is still of great interest. The study of (1,0) models is also useful for many other reasons: to provide an analytical DF for a two--component system for which the virial quantities and the analytical solution of the Jeans equation can be found explicitly; to arrange initial conditions for numerical simulations of two--component systems; to investigate the role of anisotropy and mass distribution of each component in determining the positivity of their DF. The availability of the analytical DF for a two--component stellar system allows us to arrange with great accuracy the initial conditions for numerical simulations aimed at investigating the stability of galaxy models in the presence of dark matter halos, or with a central BH. A work on the stability of (1,0) models is in progress (Londrillo and Ciotti, in preparation).
Strictly related to the last point above is the trend shown in the numerical investigations of two--component models described in CP92, i.e., the difficulty of consistently superimposing a centrally peaked distribution to a centrally flat one. CP92 showed numerically that King (1972) or quasi--isothermal density profiles cannot be coupled to a de Vaucouleurs (1948) model, because their DFs run into negative values near the model center. On the contrary, the DF of the de Vaucouleurs component is qualitatively unaffected by the presence of centrally flat halos. From this point of view, the C96 work on (1,1) models is complementary to the investigation of CP92: in the (1,1) models the two density components are both centrally peaked, and their DF is positive (in the isotropic case) for all the possible choices of halo and galaxy masses and concentrations. The implications of these findings have not been sufficiently explored. One could speculate that in the presence of a centrally peaked dark matter halo, King--like elliptical galaxies should be relatively rare, or that a galaxy with a central power--law density profile cannot have a dark halo too flat in the center. In fact, observations of the central surface brightness profiles of elliptical galaxies (see, e.g., \cite {fer94,jaf94}; M{\o}ller, Stiavelli, and Zeilinger 1995, \cite{lau95,kor95,byu96}), and bulges of spirals (\cite{cs98}), as well as high--resolution numerical simulations of dark matter halo formation (see, e.g., \cite{dc91,white96}; Navarro, Frenk, and White 1997) seem to point in this direction. In this paper, I further explore the trend that emerged in CP92 and in C96, determining the limits imposed by phase--space constraints (i.e., the DF positivity) on the parameters describing the $(\gamma_1,\gamma_2)$ models and $\gamma$ models with a central BH [hereafter ($\gamma$,BH) models]. 
I focus on the (1,0) models, in which one component is centrally peaked ($\gamma=1$ density profile), and the other has a flatter core ($\gamma=0$ density profile). With the aid of the derived, analytical DFs, more stringent conclusions are then reached. In Section 2, I briefly review the technique developed in CP92 and applied in C96 to (1,1) models, and I formulate it in a way suitable for its application to the present problem. In Section 3, the $(\gamma_1,\gamma_2)$ models are introduced, as well as the CP92 method used to discuss the limits imposed on their parameters by requiring the positivity of the DF of the two components. In Section 4, the DFs for the two components of (1,0) models are derived explicitly, and in Section 5, the exact boundaries of the region of consistency in the parameter space are obtained and compared to those given in Section 3. Finally, in Section 6, the main results are summarized. In the Appendix, the analytical expressions for the velocity dispersion profiles of both the (1,0) model components and the virial quantities useful in applications are derived in the general OM case. \section{The Consistency of Multi--Component Systems} As outlined in the Introduction, a stellar system described as a sum of different density components $\rho _{\rm k}$ is called consistent if {\it each} $f_{\rm k}$ is non--negative over all the accessible phase--space; a consistent, self--gravitating system is called self--consistent. The technique developed in CP92 permits us to check whether the DF of a multi--component spherical system, where the orbital anisotropy of each component is modeled according to the OM parameterization, is positive, {\it without} actually calculating it. 
In the OM formulation, the radially anisotropic case is obtained as a consequence of assuming $f=f(Q)$ with: \begin{equation} Q={\cal E}-{L^2\over 2r_{\rm a}^2}, \end{equation} where ${\cal E}$ and $L$ are respectively the relative energy and the angular momentum modulus per unit mass, $r_{\rm a}$ is the so--called {\it anisotropy radius}, and $f(Q)=0$ for $Q\leq 0$. With this assumption, the models are characterized by radial anisotropy increasing with galactic radius, and in the limit as $r_{\rm a}\to\infty$, the velocity dispersion tensor becomes globally isotropic. For a multi--component spherical system, the simple relation between energy and angular momentum prescribed by equation (1) allows us to express the DF of the k--th component as: \begin{equation} f_{\rm k} (Q_{\rm k})={1\over\sqrt{8}\pi^2}{d\over dQ_{\rm k}}\int_0^{Q_{\rm k}} {d\varrho _{\rm k}\over d\Psi_{\rm T}}{d\Psi_{\rm T}\over\sqrt{Q_{\rm k}-\Psi_{\rm T}}}= {1\over\sqrt{8}\pi^2}\int _0^{Q_{\rm k}} {d^2\varrho _{\rm k}\over d\Psi_{\rm T}^2}{d\Psi_{\rm T}\over\sqrt{Q_{\rm k}-\Psi_{\rm T}}}, \end{equation} where \begin{equation} \varrho _{\rm k} (r)=\left (1+{r^2\over r_{\rm ak}^2}\right) \rho _{\rm k} (r), \end{equation} $\Psi_{\rm T} (r)=\sum_k\Psi_{\rm k} (r)$ is the total relative potential, $Q_{\rm k} ={\cal E}-L^2/2r_{\rm ak}^2$, and $0\leq Q_{\rm k}\leq\Psi_{\rm T} (0)$. The second equivalence in equation (2) holds for untruncated systems with finite total mass, as the models discussed here (see, e.g., BT87, p.240). In C96, the original CP92 technique was applied to (1,1) models using the relative potential $\Psi_{\rm k}$ of the investigated component as the independent variable, but here the radius $r$ is found to be a more convenient variable. So, we have now \medskip \par\noindent {\bf Theorem}: A {\it necessary condition} (NC) for the non--negativity of $f_{\rm k}$ is: \begin{equation} {d\varrho _{\rm k}(r)\over dr}\leq 0,\quad 0\leq r \leq\infty . 
\end{equation} If the NC is satisfied, a {\it strong sufficient condition} (SSC) for the non--negativity of $f_{\rm k}$ is: \begin{equation} {d\over dr}\left[{d\varrho _{\rm k}(r) \over dr} {r^2\sqrt {\Psi_{\rm T}(r)}\over M_{\rm T} (r)}\right]\geq 0, \quad 0\leq r\leq\infty . \end{equation} Finally, a {\it weak sufficient condition} (WSC\footnote{The WSC is better suited than the SSC for analytical investigations, due to the absence of the weighting square root of the total potential.}) for the non--negativity of $f_{\rm k}$ is: \begin{equation} {d\over dr}\left[ {d\varrho _{\rm k}(r) \over dr}{r^2\over M_{\rm T} (r)}\right]\geq 0, \quad 0\leq r\leq\infty . \end{equation} \par\noindent {\bf Proof}: See CP92 and C96. \par\noindent Some considerations follow from looking at the above conditions. The first is that the violation of the NC [equation (4)] is connected only to the radial behavior of $\rho _{\rm k}$ and the value of $r_{\rm ak}$, and so this condition applies independently of any other component added to the model. Obviously, the condition imposed by equation (4) is only necessary, so $f_{\rm k}$ can be negative (and so the k component will be inconsistent) even for values of model parameters allowed by the NC. This is due to the radial behavior of the integrand in equation (2), which depends not only on the particular $\rho _{\rm k}$ and $r_{\rm ak}$, but also on the total potential. To summarize: a model failing the NC is {\it certainly inconsistent}, a model satisfying the NC {\it can be consistent}. The second consideration is that a model satisfying the WSC (or the SSC) is {\it certainly} consistent, a model failing the WSC (SSC) {\it can be consistent}, due to the sufficiency of the conditions given by equations (5)-(6). As a consequence, the consistency of a model satisfying the NC and failing the WSC (or the SSC) can be proved only by direct inspection of its DF. 
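In practice, the three conditions can also be checked numerically for any assigned model. A minimal Python sketch (an illustration only, not part of the paper's derivations: the one--component Hernquist, $\gamma=1$, case in units $M=r_{\rm c}=1$, with grid sizes, tolerances, and function names chosen ad hoc):

```python
import numpy as np

def nc_wsc_hernquist(sa, n_pts=40001):
    """Check the NC [eq. (4)] and the WSC [eq. (6)] for a one-component
    OM-anisotropic Hernquist (gamma = 1) model with anisotropy radius
    sa = r_a/r_c, by finite differences on a logarithmic radial grid.
    Returns (nc_holds, wsc_holds)."""
    s = np.logspace(-3.0, 3.0, n_pts)
    rho = 1.0 / (2.0 * np.pi * s * (1.0 + s) ** 3)   # gamma = 1 density
    varrho = (1.0 + (s / sa) ** 2) * rho             # augmented density, eq. (3)
    mass = (s / (1.0 + s)) ** 2                      # cumulative mass (M = 1)
    dvar = np.gradient(varrho, s)
    nc = bool(np.all(dvar <= 1e-6))                  # eq. (4)
    dwsc = np.gradient(dvar * s ** 2 / mass, s)
    wsc = bool(np.all(dwsc >= -1e-4))                # eq. (6)
    return nc, wsc

# With these grids: sa = 0.10 violates the NC; sa = 0.28 passes the NC but
# fails the WSC (only the DF itself can decide); sa = 0.35 satisfies the
# WSC and is therefore certainly consistent.
checks = {sa: nc_wsc_hernquist(sa) for sa in (0.10, 0.28, 0.35)}
```

Any further component is included simply by replacing `mass` with the total $M_{\rm T}(r)$, which is how the two--component checks of the next section can be reproduced numerically.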
For example, if one finds that for $r_{\rm ak}\leq r_{\rm ak}$(NC) the model is inconsistent [i.e., equation (4) is not verified], while for $r_{\rm ak}\geq r_{\rm ak}$(WSC)$\geq r_{\rm ak}$(SSC) the model is consistent [i.e., equations (5) or (6) are verified], this means that the true critical anisotropy radius ($r_{\rm akc}$) for that model must satisfy the relation $r_{\rm ak}$(NC)$\leq r_{\rm akc}\leq r_{\rm ak}$(SSC)$\leq r_{\rm ak}$(WSC). In this case, the limitations on $r_{\rm ak}$ obtained from the WSC or the SSC are upper bounds on the lower limit $r_{\rm akc}$ for consistency. Obviously, the situation can be more complicated [see equations (38)-(39), and the following discussion]. In the next section, after presenting the $(\gamma_1,\gamma_2)$ models, the analytical constraints on $r_{\rm a}$ for the consistency of the one--component OM anisotropic $\gamma$ models are derived, together with a general result on the consistency of two--component isotropic $(\gamma_1,\gamma_2)$ models. Subsequently, more specific results for the isotropic (1,0) models are proved. Finally, a limitation on $r_{\rm a}$ for the OM anisotropic ($\gamma$,BH) models is explicitly derived as a function of $\gamma$. \section {The $(\gamma_1,\gamma_2)$ Models} Both density distributions of the $(\gamma_1,\gamma_2)$ models belong to the widely explored family of the so--called $\gamma$ models (\cite{deh93}, hereafter D93; \cite{car93,tre94}): \begin{equation} \rho (r)={3-\gamma\over 4\pi} {M\,r_{\rm c}\over r^{\gamma}(r_{\rm c}+r)^{4-\gamma}},\quad\quad M(r)=M\times\left({r\over r_{\rm c}+r}\right)^{3-\gamma},\quad\quad 0\leq\gamma <3, \end{equation} where $M$ is the total mass and $r_{\rm c}$ a characteristic scale--length. 
The corresponding relative potential is given by \begin{equation} \Psi(r)={GM\over r_{\rm c} (2-\gamma)} \left [1-\left({r\over r+r_{\rm c}}\right)^{2-\gamma}\right],\quad \Psi(r)={GM\over r_{\rm c}}\hbox{${\rm ln}\, $}{r+r_{\rm c}\over r}, \end{equation} where the second expression holds for $\gamma=2$. In the following, the mass $M=M_{\gau}$ and the characteristic scale--length $r_{\rm c}=r_{\rm c1}$ of the $\gamma_1$ model will be adopted as normalization constants, so that from equation (7) it follows that $\rho_{\gau}(r)=\rho_{\rm N}\tilde\rho_{\gau}(s)$ and $\rho_{\gad}(r)=\rho_{\rm N}\mu\tilde\rho_{\gad}(s,\beta)$, where $s=r/r_{\rm c1}$, $\rho_{\rm N}=M_{\gau}/r_{\rm c1}^3$, $\mu=M_{\gad}/M_{\gau}$, and $r_{\rm c2}=\beta r_{\rm c1}$. The fundamental ingredient in recovering the DF is the total potential $\Psi_{\rm T}=\Psi_{\gau}+\Psi_{\gad}$, where from equation (8) $\Psi_{\gau}(r)=\Psi_{\rm N}\tilde\Psi_{\gau}(s)$ and $\Psi_{\gad}(r)=\Psi_{\rm N}\mu\tilde\Psi_{\gad}(s,\beta)$, and $\Psi_{\rm N}=GM_{\gau}/r_{\rm c1}$. With this choice, the $(\gamma_1,\gamma_2)$ models are structurally determined by fixing the four independent parameters $(M_{\gau},r_{\rm c1},\mu,\beta)$, with the obvious conditions $\mu\geq 0$ and $\beta\geq 0$. 
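Since the whole analysis rests on equations (7)-(8), a short numerical cross--check is reassuring. The Python sketch below (illustrative only; function names and grids are mine) rebuilds the relative potential from the density via $\Psi(r)=GM(r)/r+4\pi G\int_r^\infty\rho\,x\,dx$ and compares it with equation (8), including the $\gamma=2$ logarithmic case:

```python
import numpy as np

def gamma_model(g, s):
    """Dimensionless gamma-model profiles of eqs. (7)-(8), units G = M = r_c = 1:
    returns (density, cumulative mass, relative potential)."""
    rho = (3.0 - g) / (4.0 * np.pi * s ** g * (1.0 + s) ** (4.0 - g))
    mass = (s / (1.0 + s)) ** (3.0 - g)
    if g == 2.0:
        psi = np.log((1.0 + s) / s)                  # Jaffe case, eq. (8)
    else:
        psi = (1.0 - (s / (1.0 + s)) ** (2.0 - g)) / (2.0 - g)
    return rho, mass, psi

def psi_from_density(g, s0, n_pts=100001):
    """Potential at s0 rebuilt from the density:
    M(s0)/s0 + int_{s0}^inf 4 pi x rho(x) dx (trapezoidal rule, log grid)."""
    x = np.logspace(np.log10(s0), 5.0, n_pts)
    rho, mass, _ = gamma_model(g, x)
    y = 4.0 * np.pi * x * rho
    tail = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return mass[0] / s0 + tail

# closed form vs. quadrature at s = 1 for a few slopes
psi_pairs = {g: (float(gamma_model(g, np.array([1.0]))[2][0]),
                 psi_from_density(g, 1.0))
             for g in (0.0, 1.0, 2.0, 2.5)}
```

For instance, at $s=1$ the pair for $\gamma=1$ is $(1/2,\,\simeq 1/2)$ and for $\gamma=2$ it is $(\ln 2,\,\simeq\ln 2)$, as expected.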
For future reference, I give here the explicit expressions of the density and the potential for the (1,0) models, for which $(M_{\gau},r_{\rm c1},\rho_{\gau},\Psi_{\gau})=(M_1,r_1,\rho_1,\Psi_1)$, and $(M_{\gad},r_{\rm c2},\rho_{\gad},\Psi_{\gad})=(M_0,r_0,\rho_0,\Psi_0)$: \begin{equation} \rho_1 (r) =\rho_{\rm N}\tilde\rhos (s)={\rho_{\rm N}\over 2\pi}{1\over s(1+s)^3}, \end{equation} and \begin{equation} \rho_0 (r) =\rho_{\rm N}\mu\tilde\rhoh (s)= \mu{3\rho_{\rm N}\over 4\pi}{\beta\over (\beta+s)^4}, \end{equation} where $s=r/r_1$, $\rho_{\rm N}=M_1/r_1^3$, $M_0=\mu M_1$, and $r_0=\beta r_1$; moreover, \begin{equation} \Psi_1 (r) =\Psi_{\rm N}\tilde\psis (s)={\Psi_{\rm N}\over 1+s}, \end{equation} \begin{equation} \Psi_0 (r) =\Psi_{\rm N}\mu\tilde\psih(s)= \Psi_{\rm N} \mu {\beta+2s\over 2(\beta+s)^2}, \end{equation} with $\Psi_{\rm N}=GM_1/r_1$. \subsection{The Necessary and Sufficient Conditions for the $\gamma$ Models} Here I study first the NC for the general case of the anisotropic one--component $\gamma$ models, in order to determine analytically a {\it critical} anisotropy radius such that a higher degree of radial OM anisotropy (i.e., a smaller anisotropy radius) produces a negative DF for some permitted value of $Q$, no matter what kind of halo density distribution is added. The unit mass and unit length are the total mass $M$ and the scale--length $r_{\rm c}$ of the model, with $s_{\rm a}=r_{\rm a}/r_{\rm c}$. As shown in Appendix A [equations (A1)-(A2)], for $2\leq\gamma <3$ the NC is satisfied for $s_{\rm a}\geq 0$, i.e., the possibility that $\gamma$ models with $\gamma\geq 2$ are assembled using only radial orbits is left open by the NC. On the contrary, for $0\leq \gamma <2$ the NC requires \begin{equation} s_{\rm a}\geq s_{\rm M}\sqrt{{2-\gamma-2s_{\rm M}\over\gamma+4s_{\rm M} }}, \end{equation} where $s_{\rm M}=s_{\rm M}(\gamma)$ is given by equation (A2). 
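The maximization behind equation (13) is also easy to carry out on a grid, without the closed--form $s_{\rm M}$ of equation (A2). A sketch (the values quoted in the comments are what this grid search returns, to three figures):

```python
import numpy as np

def nc_bound(g, n_pts=200001):
    """Lower limit on s_a from the NC, eq. (13), for 0 <= g < 2:
    maximize s*sqrt((2 - g - 2s)/(g + 4s)) over 0 < s < (2 - g)/2,
    the grid search standing in for the closed-form s_M of eq. (A2)."""
    s = np.linspace(1e-8, (2.0 - g) / 2.0, n_pts)
    val = s * np.sqrt(np.clip(2.0 - g - 2.0 * s, 0.0, None) / (g + 4.0 * s))
    return float(val.max())

nc_gamma0 = nc_bound(0.0)    # ~ 0.354
nc_gamma1 = nc_bound(1.0)    # ~ 0.128
```

These two numbers trace the solid (NC) curve of Fig. 1 at $\gamma=0$ and $\gamma=1$; for $\gamma\geq 2$ the NC imposes no limit, as stated above.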
In this case the NC {\it proves} that $\gamma$ models with $0\leq \gamma <2$ cannot sustain radial orbits only. In Fig. 1 the lower bound for the anisotropy radius as a function of $\gamma$ derived from the NC is shown. From the discussion in Section 2, it follows that all $\gamma$ models (one or multi--component) in the nearly triangular region under the solid curve are inconsistent. The WSC can be treated analytically for the one--component $\gamma$ models, as shown in Appendix A [equations (A3)-(A6)], and we obtain the following limitation on $s_{\rm a}$: \begin{equation} s_{\rm a}\geq s_{\rm M}^{3/2}\sqrt{{3-\gamma-s_{\rm M}\over 6s_{\rm M}^2+2(1+\gamma)s_{\rm M}+\gamma}}, \end{equation} where $s_{\rm M}=s_{\rm M} (\gamma)$ is given by equations (A4)-(A6). As discussed in Section 2, the r.h.s. of equation (14) (represented in Fig. 1 by the dotted line) is an upper limit on the lower bound for the critical anisotropy radius as a function of $\gamma$: all one--component $\gamma$ models in the region above the dotted line are consistent. \placefigure{fig1} A stronger limitation on $r_{\rm a}$ is obtained using the SSC, but unfortunately this condition for a generic $\gamma$ results in a transcendental equation that cannot be solved explicitly. However, as shown in Appendix A, for the three values $\gamma=(0,1,3)$ the solution can be derived explicitly. For $\gamma=0$, \begin{equation} s_{\rm a}\geq s_{\rm M}\sqrt{{3(1+2s_{\rm M} -s_{\rm M}^2)\over 14s_{\rm M}^2+10s_{\rm M}+2}}\simeq 0.501, \end{equation} where $s_{\rm M} =s_{\rm M}(0)$ is given by equation (A8). For $\gamma=1$, equation (A9) shows that \begin{equation} s_{\rm a}\geq s_{\rm M}^{3/2}\sqrt{{3(3-2s_{\rm M} )\over 28s_{\rm M}^2+17s_{\rm M} +4}}\simeq 0.250, \end{equation} where $s_{\rm M}=s_{\rm M}(1)$ is given by equations (A10)-(A11). The numerical application of the SSC to the $\gamma=2$ model (\cite{jaf83}) gives $s_{\rm a}\gtrsim 0.047$. 
Finally, the case $\gamma=3$ is trivial: the SSC reduces to $s_{\rm a}\geq 0$. These four values are represented in Fig. 1 by black dots: all one--component $\gamma$ models in the region above the dashed line are consistent. Note how the SSC improves the estimate of the lower bound of $r_{\rm a}$ with respect to the WSC. As already pointed out in Section 2, the true critical value of $s_{\rm a}$ for any one--component $\gamma$ model is between the NC and the SSC curves. For the $\gamma=0$ and $\gamma=1$ models these values, determined directly from their DFs (Sections B1-B2 in Appendix B), are $\simeq 0.445$ and $\simeq 0.202$, respectively. Merritt (1985b) derived the analytical DF for a totally radial Jaffe model (the $\gamma=2$ case): its positivity implies that in this case the true lower limit on $s_{\rm a}$ is zero. In Fig. 1 these three values are represented by black squares, and in Table 1 all the previous results for the specific cases $\gamma= (0,1,2,3)$ are summarized: note how the more the model is concentrated, the more radial anisotropy can be supported. The true critical $r_{\rm a}$ value for the OM anisotropic one--component $\gamma$ models as a function of $\gamma$ is numerically known (see Fig. 1 in \cite{czm95a}). \placetable{tbl-1} \subsection{Sufficient Conditions for Isotropic $(\gamma_1,\gamma_2)$ Models} In order to proceed further with this analytical discussion, and to allow for the presence of a ``halo'' component, we will use the WSC rather than the more complicated SSC. The following three results are proven analytically in this Section: \begin{enumerate} \item In the case of globally isotropic two--component $(\gamma_1,\gamma_2)$ models with $1\leq \gamma_1 <3$ and $0\leq \gamma_2\leq\gamma_1$, the DF of the more peaked component $\gamma_1$ is positive over all the phase space, for all values of $(\mu,\beta)=(M_{\gad}/M_{\gau},r_{\rm c2}/r_{\rm c1})$. 
As a consequence, the $\gamma=1$ component of (1,0) models is consistent for all values of the parameters $(\mu ,\beta)$. \item In the case of globally isotropic (1,0) models, the WSC applied to the $\gamma=0$ density distribution suggests the existence of a lower limit of $\mu=M_0/M_1$ as a function of $\beta=r_0/r_1$. In particular, for $\beta\leq 5/2$, i.e., when the $\gamma=0$ component is sufficiently concentrated, all values of $\mu$ can be accepted. Using the analytical DF the existence of this lower limit will be proved in Section 5. \item In the case of anisotropic $\gamma$ models with a BH at their center, it is possible to determine analytically a lower limit on $r_{\rm a}$ as a function of $\gamma$ using the WSC. \end{enumerate} The proof of the first result is conceptually straightforward but algebraically cumbersome. In Appendix A [equations (A12)-(A13)] it is proved that under the hypotheses assumed in point 1 above, equation (6) is verified for all choices of $(\mu,\beta)$. I note explicitly that this result contains as a particular case the fact -- already proved in C96 -- that globally isotropic (1,1) models can be self--consistently assembled for any choice of $(\mu,\beta)$. From this general result it also follows that -- e.g. -- the same is true for isotropic Jaffe+Jaffe models ($\gamma_1=\gamma_2=2$). Finally, considering that for $r_{\rm c}\to 0$ the potential of $\gamma$ models becomes that of a point mass [see equation (8)], the previous result means that a BH of any mass can be added at the center of a globally isotropic $\gamma$ model when $1\leq\gamma <3$. A different analysis, based on a series expansion of the integral representation of the DF for isotropic $\gamma$ models, shows that the true limit on $\gamma$ in order to allow for the presence of a BH of any mass at the model's center is $\gamma >1/2$ (\cite{tre94}). The result stated in point 2 above is proved in Appendix A [equation (A14)]. 
For the isotropic $\gamma=0$ component of (1,0) models, equation (6) is satisfied, and so the consistency of this component is guaranteed, when \begin{equation} \mu\geq (2\beta-5)\beta^2. \end{equation} This requirement can be interpreted in two different ways. The first is that, having fixed the ratio $\beta=r_0/r_1$, only for sufficiently high mass ratios $\mu=M_0/M_1$ can the $\gamma=0$ component ``dilute'' the effect of the central cusp of the $\gamma=1$ model on the total potential, and be consistent. More specifically, when $\beta<5/2$ (i.e., when the $\gamma=0$ density distribution is sufficiently concentrated), even a vanishing mass $M_0$ ($\mu\to 0$) can be accepted, while for large $\beta$ only very large $\mu$ are allowed. From another point of view, equation (17) tells us that having fixed $\mu$, $\beta$ cannot be arbitrarily large, but in some sense the concentration of the $\gamma=0$ component must adapt to the density distribution of the $\gamma=1$ component. The effect of the concentration is much more important than the amount of mass: in fact $\beta\lesssim(\mu/2)^{1/3}$. This means that even increasing considerably the mass ratio, the maximum value of $r_0$ allowed for the $\gamma=0$ model grows only like the {\it cube} root of $\mu$. The limitation $\beta\leq 5/2$ is only a {\it sufficient} condition for the consistency of a $\gamma=0$ model coupled with a dominant $\gamma=1$: a larger critical value for $\beta$ is expected from direct inspection of the DF when $\mu\to 0$ (see Section 5). The result presented in point 3 above can be interpreted as an extension to the radially anisotropic case of the analysis performed by Tremaine et al. 
(1994), and is proved in Appendix A by showing that the WSC applied to the anisotropic ($\gamma$,BH) models with $1\leq\gamma <3$ can be analytically discussed in the special case of a {\it dominant} BH, i.e., assuming in equation (6) $M_{\rm T} =M_{\rm BH}$ (and so $\Psi_{\rm T}=GM_{\rm BH}/r$). Unfortunately, in the non--asymptotic case, the equation to be discussed is transcendental, and no analytical discussion can be carried out. At first sight, the assumption of a dominant BH could appear to be a very rough approximation of reality, but this is not so: the constraint derived can be used as a safe limitation when constructing models containing a BH of a realistic mass at their center. As shown in Appendix A [equation (A15)], for $1\leq\gamma <3$, \begin{equation} s_{\rm a}\geq s_{\rm M} \sqrt{(3-\gamma)(\gamma-2)+4(3-\gamma)s_{\rm M}-2s_{\rm M}^2\over 12s_{\rm M}^2+8(\gamma-1)s_{\rm M}+\gamma(\gamma-1)}, \end{equation} where $s_{\rm M}=s_{\rm M}(\gamma)$ is obtained by solving a fourth degree algebraic equation. In Fig. 1, the long--dashed line represents the lower bound for $s_{\rm a}$ as determined by the previous equation, while the explicit values for $\gamma=(1,2,3)$ are given in Table 1. In particular, the critical value for the $\gamma=1$ model was already derived as a limiting case of a (non--asymptotic) formula given in C96 [equation (15) there]. \section{The DF of (1,0) Models} We can now proceed to the explicit recovery of the DF of the (1,0) models. Just as for the density and the potential, it is also useful for the DF to work with dimensionless functions; the two components of the DF are of the form $f=f_{\rm N}\tilde f(\mu,\beta;\tilde Q)$ with $f_{\rm N}=\rho_{\rm N}\Psi_{\rm N}^{-3/2}$ and $0\leq\tilde Q =Q/\Psi_{\rm N}\leq\tilde\psis(0)+\mu\tilde\psih(0)$. The easiest way to compute each DF is to use the first of the identities in equation (2). 
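Before doing so, note that the two consistency limits of equations (17)-(18) lend themselves to a quick numerical check. In the sketch below (ad hoc grids; the maximization replaces the quartic of equation (A15), and the helper names are mine) the dominant--BH bound evaluates to $\simeq 0.707$ for $\gamma=1$ and $\simeq 0.309$ for $\gamma=2$, while the root of equation (17) confirms the cube--root growth $\beta\lesssim(\mu/2)^{1/3}$:

```python
import numpy as np

def bh_wsc_bound(g, n_pts=400001):
    """Dominant-BH lower limit on s_a: maximize the r.h.s. of eq. (18)
    over s_M on a grid (instead of solving the quartic of eq. (A15))."""
    s = np.linspace(1e-8, 50.0, n_pts)
    num = (3.0 - g) * (g - 2.0) + 4.0 * (3.0 - g) * s - 2.0 * s ** 2
    den = 12.0 * s ** 2 + 8.0 * (g - 1.0) * s + g * (g - 1.0)
    val = s * np.sqrt(np.clip(num, 0.0, None) / den)
    return float(val.max())

def max_beta(mu, tol=1e-12):
    """Largest beta allowed by eq. (17) at fixed mu > 0: the root of
    (2 beta - 5) beta^2 = mu with beta > 5/2, found by bisection."""
    lo, hi = 2.5, 5.0
    while (2.0 * hi - 5.0) * hi ** 2 < mu:
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if (2.0 * mid - 5.0) * mid ** 2 < mu else (lo, mid)
    return 0.5 * (lo + hi)

bh1 = bh_wsc_bound(1.0)                            # ~ 1/sqrt(2)
bh2 = bh_wsc_bound(2.0)                            # ~ 0.309
ratio = max_beta(1e6) / (1e6 / 2.0) ** (1.0 / 3.0)  # -> 1 for large mu
```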
For the evaluation of the integral one would be tempted to obtain $\varrho(\Psi_{\rm T})$, eliminating the radius from the modified density and the total potential: formally, this can be done, but the resulting expression for the radial coordinate involves a quadratic irrationality that, after insertion in equations (3), (9) and (10), produces an intractable expression. Here I follow another approach: instead of eliminating the radius, the integration variable is changed from the total potential to the radius itself. This is equivalent to a remapping of the domain of definition of each $f$, from the range of variation of $\Psi_{\rm T}$ to the range of variation of $r$, and leads us to introduce the dimensionless radius $\nu$ using equations (11)-(12): \begin{equation} \tilde Q={1\over 1+\nu}+\mu{\beta+2\nu\over2(\beta+\nu)^2}, \quad 0\leq \nu\leq\infty. \end{equation} As shown in Appendix B [equations (B1)-(B2)], with this change of variable and after normalization to the dimensional scales of the Hernquist density distribution, the DF for the $\gamma=1$ and $\gamma=0$ components can be formally written as: \begin{equation} f(Q) = f_{\rm i}(Q)+{f_{\rm a}(Q)\over s_{\rm a}^2}= {f_{\rm N}\over\sqrt{8}\pi^2}\left({d\tilde Q\over d\nu}\right)^{-1} {d\over d\nu}\left[\Ftil_{\rm i} (\nu)+{\Ftil_{\rm a} (\nu)\over s_{\rm a}^2}\right], \quad\nu=\nu (\tilde Q), \end{equation} where $s_{\rm a} =r_{\rm a}/r_1$, and the subscripts refer to the isotropic and anisotropic parts of the DF, respectively. Following the procedure, $f(Q)$ results from the elimination of $\nu$ between equations (19)-(20). In the general case, i.e., for any choice of $(\mu,\beta,s_{\rm a})$, $f(Q)$ [and the so--called {\it differential energy distribution} for each component as well (see, e.g., BT87, p.242)] can be recovered analytically [see equation (B3) for a proof of this fact]. 
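In numerical work, the remapping is handled simply by inverting equation (19): since $d\tilde Q/d\nu=-1/(1+\nu)^2-\mu\nu/(\beta+\nu)^3<0$, $\tilde Q(\nu)$ decreases strictly from $1+\mu/2\beta$ at $\nu=0$ to zero at infinity, and a bisection suffices. A Python sketch (illustrative function names):

```python
import numpy as np

def q_of_nu(nu, mu, beta):
    """Eq. (19): dimensionless Q as a function of the dimensionless radius."""
    return 1.0 / (1.0 + nu) + mu * (beta + 2.0 * nu) / (2.0 * (beta + nu) ** 2)

def nu_of_q(q, mu, beta, tol=1e-13):
    """Invert eq. (19) by bisection, exploiting the strict monotonicity
    of Q(nu); valid for 0 < q <= Q(0) = 1 + mu/(2 beta)."""
    lo, hi = 0.0, 1.0
    while q_of_nu(hi, mu, beta) > q:        # bracket the root
        hi *= 2.0
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        if q_of_nu(mid, mu, beta) > q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# round trip at (mu, beta) = (2, 0.5)
nu0 = 3.0
nu_back = nu_of_q(q_of_nu(nu0, 2.0, 0.5), 2.0, 0.5)
```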
So, it is shown that in addition to the (1,1) models, the (1,0) models are a class of two--component stellar systems in which both the spatial density distributions, the solution of the Jeans equations (see Appendix C), and the phase--space distribution functions can be explicitly found. Unfortunately, the DF of the (1,0) models results in a combination of elliptic functions, even more complicated than the DF of the (1,1) models, and this limits their applicability to special problems in which the DF is required to be known with arbitrary precision or to be formally manipulated. Here I present only the DFs for the two density distributions obtained under the assumption of a dominant ``halo'' component; I derive the DF for a $\gamma=1$ model with a dominant $\gamma=0$ halo ($\mu\to\infty$), and for a $\gamma=0$ model with a dominant $\gamma=1$ halo ($\mu\to 0$). Technically, this reduces to the assumption that the total potential is the potential of the halo component only. Even though it is a limiting case, the study of {\it halo--dominated} models is interesting for several different reasons: 1) the formulae -- expressible using elementary functions -- are much simpler than in the general case, and can be studied very easily, making clearer the effect of the halo component on the DF; 2) the halo--dominated case is the one that differs most from the case of the corresponding one--component model, and so the differences are better evident; 3) all the intermediate cases fall between the one--component model and the halo--dominated one. A comparison with more realistic values of the halo masses is postponed to Section 5. In the following paragraphs, the two DFs will be compared with those of the corresponding one--component $\gamma=1$ and $\gamma=0$ models, and the exact phase--space constraints will be derived and compared with those obtained using the NC, WSC, and SSC in Section 3. 
\subsection{The $\gamma=1$ Model Plus a $\gamma=0$ Dominant Halo} The explicit expression for the DF of the $\gamma=1$ model with an arbitrary degree of OM orbital anisotropy immersed in a dominant $\gamma=0$ halo is derived here. Formally, this case corresponds to the assumption of $\mu\to\infty$ in the total potential, i.e., $\Psi_{\rm T} = \Psi_0$, and so in equations (19)-(20) \begin{equation} \tilde Q=\mu{\beta+2\nu\over 2(\beta+\nu)^2},\quad \left({d\tilde Q\over d\nu}\right)^{-1}=-{(\beta+\nu)^3\over\mu\nu}. \end{equation} After differentiation inside the integral in equation (B2) with $\tilde\varrho$ given by equations (3) and (9), and after a partial fraction decomposition of the rational part of the integrand, one obtains: \begin{equation} \Ftil_{\rm i}(\nu)={\beta+\nu\over\pi\sqrt{2\mu}\sqrt{\beta+2\nu}} [H^0_1+\beta H^0_2-H^1_1-(1+\beta)H^1_2-(1+2\beta)H^1_3-3(\beta-1)H^1_4], \end{equation} and \begin{equation} \Ftil_{\rm a}(\nu)={\beta+\nu\over\pi\sqrt{2\mu}\sqrt{\beta+2\nu}} [2H^1_2-(5-2\beta)H^1_3-3(\beta-1)H^1_4]. \end{equation} The $H$ functions depend on $\beta$ and $\nu$, and are defined as \begin{equation} H^z_n(\xi) = {2\over (\nu+\lambda)^n} \int_0^{\infty}{dx\over\sqrt{1+x^2}(x^2+\xi)^n}, \end{equation} where \begin{equation} \xi = {\nu+z\over\nu +\lambda}, \quad\quad {\rm and}\quad\quad \lambda = {\beta\nu\over\beta+2\nu}. \end{equation} When $\xi=1$ and $n\geq 1$ \begin{equation} H^z_n(1)={\sqrt{\pi}\,\Gamma (n)\over\Gamma (n+1/2)(\nu+\lambda)^n}, \end{equation} where $\Gamma$ is the complete gamma function. 
When $\xi\neq 1$ the recursion formula \begin{equation} H^z_{n+1}(\xi)=-{1\over n(\nu+\lambda)}{dH^z_n(\xi)\over d\xi}= {(-1)^n\over n!(\nu+\lambda)^n}{d^nH^z_1(\xi)\over d\xi ^n} \end{equation} holds, and so the explicit evaluation of $H^z_1(\xi)$ suffices: \begin{equation} H^z_1(\xi)={2\over\nu+\lambda}\cases{ {{\rm arccos}\sqrt{\xi}\over\sqrt{\xi (1-\xi)}}, &if $0\leq\xi <1$;\cr {{\rm arccosh}\sqrt{\xi}\over\sqrt{\xi (\xi-1)}}, &if $\xi >1$.\cr } \end{equation} In order to distinguish between the two cases $\xi>1$ and $0\leq\xi <1$, a careful discussion is needed. From equation (25) the value $z=0$ corresponds to $\xi =\nu/(\nu+\lambda)<1\quad\forall (\nu,\beta)$, and so the first of equations (28) must be used for the evaluation of all $H^0_n$ functions. More complicated is the case $z=1$, when $\xi=(\nu+1)/(\nu+\lambda)$: note that for $\beta\geq 0$, $\lambda$ is a monotonically increasing function of $\nu$, with $\lambda=0$ for $\nu=0$ and $\lambda\to\beta /2$ for $\nu\to\infty$. As a consequence, it follows that $\forall\nu$, $0<\beta\leq 2\Rightarrow\xi>1$. When instead, $\beta>2$, $\exists\,\nu_{\rm cr} =\beta/(\beta-2)$ so that $\nu <\nu_{\rm cr}\Rightarrow \xi>1$, $\nu =\nu_{\rm cr}\Rightarrow \xi =1$, and $\nu>\nu_{\rm cr}\Rightarrow \xi <1$. This completes the derivation of the DF for the $\gamma=1$ component. In Fig. 2 the comparison with the DF of the one--component $\gamma=1$ model (solid line), in case of global isotropy and for a specific value of the anisotropy radius, is given. Such DF was given in H90 as a function of $\tilde Q$, but for consistency with the present work it is derived in Appendix B as a function of $\nu$ [equations (B4)-(B6)]. 
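These building blocks are easily spot--checked; e.g., the closed form of equation (28) against a direct quadrature of equation (24) with $n=1$ (a sketch; the $x=\tan\theta$ substitution, the grid size, and the helper names are my choices):

```python
import numpy as np

def h1_closed(xi, nu, lam):
    """H^z_1 from the closed form, eq. (28)."""
    if xi < 1.0:
        core = np.arccos(np.sqrt(xi)) / np.sqrt(xi * (1.0 - xi))
    else:
        core = np.arccosh(np.sqrt(xi)) / np.sqrt(xi * (xi - 1.0))
    return 2.0 * core / (nu + lam)

def h1_quad(xi, nu, lam, n_pts=200001):
    """H^z_1 from the definition, eq. (24) with n = 1: after x = tan(theta)
    the integrand becomes cos(theta)/(sin^2 theta + xi cos^2 theta),
    integrated here by the midpoint rule on [0, pi/2]."""
    edges = np.linspace(0.0, 0.5 * np.pi, n_pts + 1)
    th = 0.5 * (edges[:-1] + edges[1:])
    f = np.cos(th) / (np.sin(th) ** 2 + xi * np.cos(th) ** 2)
    return 2.0 / (nu + lam) * f.sum() * (0.5 * np.pi / n_pts)

# beta = 3, nu = 2: lambda = beta*nu/(beta + 2*nu) = 6/7 by eq. (25);
# z = 0 gives xi = 0.70 < 1, z = 1 gives xi = 1.05 > 1.
nu, lam = 2.0, 6.0 / 7.0
pairs = [(h1_closed(x, nu, lam), h1_quad(x, nu, lam))
         for x in (nu / (nu + lam), (nu + 1.0) / (nu + lam))]
```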
The formulae derived in this paragraph have been tested for many values of $\beta$ and $r_{\rm a}$ using a code that numerically recovers the DF for spherically symmetric multi--component galaxy models with OM anisotropy, obtaining extremely good agreement: in all cases the maximum differences between the analytical and numerical DFs are much less than 1 per cent. \placefigure{fig2} In the upper panel of Fig. 2 the isotropic case is presented. Note how for $\beta>1$ the DF is more peaked than for the one--component $\gamma=1$ model, and the opposite holds when $\beta <1$: this behavior was already found in the isotropic (1,1) models (C96, Fig. 2). In the lower panel the anisotropic case is shown when $s_{\rm a}=0.26$, near the consistency limit for the one--component $\gamma=1$ model (see \S 3.1). The main effect of anisotropy, as already found for (1,1) models and $R^{1/m}$ models (CL97), is the appearance in the DF of a depression well outside the galaxy center. Decreasing the anisotropy radius, the depression deepens, finally running into negative values for a critical value of $s_{\rm a}$ (dependent on $\beta$) and making the model inconsistent. Again, as already found for (1,1) models, this effect is stronger for smaller $\beta$ values, i.e., a very concentrated halo makes the DF more sensitive to the effects of anisotropy, while the opposite is true for halos more diffuse than the $\gamma=1$ density distribution. \subsection{The $\gamma=0$ Model Plus a $\gamma=1$ Dominant Halo} The explicit expression for the DF of a $\gamma=0$ model with an arbitrary degree of OM orbital anisotropy, immersed in a dominant Hernquist halo, is derived here. Formally this case corresponds to the assumption of $\mu\to 0$ in the total potential, i.e., $\Psi_{\rm T} = \Psi_1$, and so in equations (19)-(20) \begin{equation} \tilde Q={1\over 1+\nu};\quad \left({d\tilde Q\over d\nu}\right)^{-1}=-(1+\nu)^2. 
\end{equation} After differentiation inside the integral in equation (B2) with $\tilde\varrho$ given by equations (3) and (10), and after a partial fraction decomposition of the rational part of the integrand, one obtains: \begin{equation} \Ftil_{\rm i}(\nu)={3\mu\beta\sqrt{1+\nu}\over 4\pi}\,4 G_5, \end{equation} and \begin{equation} \Ftil_{\rm a}(\nu)={3\mu\beta\sqrt{1+\nu}\over 4\pi}\, (4\beta^2 G_5-6\beta G_4+2G_3), \end{equation} where \begin{equation} G_n(\beta,\nu) = \int_{\nu}^{\infty}\sqrt{{s+1\over s-\nu}} {ds\over (\beta+s)^n}. \end{equation} When $\beta=1$ and $n\geq 2$ \begin{equation} G_n(1,\nu)={\sqrt{\pi}\,\Gamma (n-1)\over \Gamma(n-1/2) (1+\nu)^{n-1}}. \end{equation} When $\beta\neq 1$ and $n\geq 2$ the recursion formula \begin{equation} G_{n+1}(\beta,\nu)=-{1\over n}{dG_n(\beta,\nu)\over d\beta}= {(-1)^{n-1}\over n!}{d^{n-1}G_2(\beta,\nu)\over d\beta^{n-1}} \end{equation} holds, and so the explicit evaluation of $G_2$ suffices: \begin{equation} G_2(\beta,\nu)={1\over\beta+\nu}+{1+\nu\over (\beta+\nu)^{3/2}}\cases{ {1\over\sqrt{1-\beta}} {\rm arctan} \sqrt{{1-\beta\over\beta +\nu}}, &if $0\leq\beta <1$;\cr {1\over\sqrt{\beta-1}} {\rm arctanh} \sqrt{{\beta -1\over\beta +\nu}}, &if $\beta >1$.\cr } \end{equation} In Fig. 3 the comparison with the DF of the one--component $\gamma=0$ model (solid line), in the case of global isotropy and for a specific value of the anisotropy radius, is given. Such DF was given in D93 as a function of $\tilde Q$, but for consistency with the present work it is derived in Appendix B as a function of $\nu$ [equations (B7)-(B9)]. As in the previous case, the derived formulae have been successfully tested for many values of $\beta$ and $r_{\rm a}$ by comparison with the numerically derived DFs, obtaining maximum differences less than 1 per cent in all cases. \placefigure{fig3} In the upper panel the isotropic case is presented. 
Note how for $\beta<1$ the DF is more peaked than for the one--component $\gamma=0$ model, and the opposite holds when $\beta>1$\footnote{Having defined $\beta=r_0/r_1$, at variance with what happened in \S 4.1, a more diffuse $\gamma=1$ component corresponds to $\beta <1$, and vice--versa.}. This behavior is similar to that found in the previous section. In the lower panel the anisotropic case is shown when $r_{\rm a}=0.65r_0$, near the consistency limit for the one--component $\gamma=0$ model (see \S 3.1). As for the $\gamma=1$ component, the main effect of anisotropy is the appearance in the DF of a depression well outside the galaxy center, and again the depression becomes deeper and deeper as the anisotropy radius decreases. Finally, as in the previous case, the effect of anisotropy is found to be more important when the halo concentration increases. \section{Consistency of (1,0) Models} We move now to comment on the main similarities and differences between the DFs of the two components of the (1,0) models, especially considering the role of concentration and orbital anisotropy in determining their consistency. For simplicity the discussion is restricted to the halo--dominated cases. The first important point addressed by using the DFs is the study of the effect of halo concentration in determining the consistency of the two (1,0) model components in the case of {\it global isotropy}. The effect of the $\gamma=0$ halo concentration on the consistency of the globally isotropic $\gamma=1$ component can be derived by direct inspection of the DF, and confirms the analytical prediction obtained using the WSC in \S 3.1. In fact, it is found that the globally isotropic $\gamma=1$ component is consistent independent of the concentration and total mass of the superimposed $\gamma=0$ halo: {\it only anisotropic} $\gamma=1$ components in (1,0) models can be unphysical due to the presence of the $\gamma=0$ density distribution.
For the globally isotropic $\gamma=0$ component with a dominant $\gamma=1$ halo, the situation is more complicated because, in accordance with the analysis presented in \S 3.2, its DF may become negative in the case of a high concentration of the external $\gamma=1$ component. In fact, the DF becomes negative for ${\cal E}\to\Psi_1 (0)$ when $\beta\gtrsim 5.233$, a larger value than the more conservative one (5/2) derived using the WSC. A closer look at this behavior, and a comparison with the qualitatively different behavior exhibited by the DF of the $\gamma=1$ component, is particularly instructive. In fact, while the DF of the $\gamma=1$ density distribution diverges at high (relative) energies both in the one--component and in the halo--dominated cases (Fig. 2), the DF of the $\gamma=0$ model is divergent for high energies in the one--component case, but {\it finite} in the halo--dominated one (Fig. 3). Moreover, when increasing the $\gamma=1$ halo concentration (i.e., increasing $\beta=r_0/r_1$), the central value of the DF associated with the $\gamma=0$ density profile decreases monotonically, and, for $\beta$ greater than the aforementioned critical value, it becomes negative, revealing the model inconsistency. It must be stressed that a similar behavior was found in the numerical investigation of the consistency of King (1972) and quasi--isothermal halos added to a de Vaucouleurs (1948) density distribution, carried out by CP92. Also, note how the decrease of the central value of the DF for increasing halo concentration is reminiscent of that found by C96 for (1,1) models, even if in that case the transition was found to be more discontinuous: the DF of a $\gamma=1$ component in (1,1) models remains divergent at the center for all finite concentrations of the other $\gamma=1$ component, and becomes exactly zero at the center only when the halo is reduced to a central BH (see Fig. 2 in C96).
The qualitative discussion above can be put on more quantitative grounds. In fact, in the halo--dominated case, the central value of the DF of the $\gamma=0$ component is easily derived for a generic $\beta$ using the formulae given in \S 4.2: \begin{equation} \tildef_{\rm i}^0={3\mu\over 8\sqrt{2}\pi^3} \cases{ -{3(64\beta^3-240\beta^2+280\beta-105)\over 32(1-\beta)^{5/2}\beta^{9/2}}{\rm arctan}\sqrt{{1-\beta\over\beta}} -{16\beta^3-328\beta^2+630\beta-315\over 32(1-\beta)^2\beta^4}, &if $0<\beta <1$,\cr {64\over 5}, &if $\beta =1$,\cr -{3(64\beta^3-240\beta^2+280\beta-105)\over 32(\beta-1)^{5/2}\beta^{9/2}}{\rm arctanh}\sqrt{{\beta-1\over\beta}} -{16\beta^3-328\beta^2+630\beta-315\over 32(\beta-1)^2\beta^4}, &if $\beta >1$;\cr } \end{equation} and \begin{equation} \tildef_{\rm a}^0={3\mu\over 8\sqrt{2}\pi^3} \cases{ {3(8\beta^2-12\beta+5)\over 32(1-\beta)^{5/2}\beta^{5/2}}{\rm arctan}\sqrt{{1-\beta\over\beta}} +{(4\beta-3)(2\beta-5)\over 32(1-\beta)^2\beta^2}, &if $0<\beta <1$,\cr {4\over 5}, &if $\beta =1$,\cr {3(8\beta^2-12\beta+5)\over 32(\beta-1)^{5/2}\beta^{5/2}}{\rm arctanh}\sqrt{{\beta-1\over\beta}} +{(4\beta-3)(2\beta-5)\over 32(\beta-1)^2\beta^2}, &if $\beta >1$.\cr } \end{equation} The limiting value for $\beta$ in the isotropic case is obtained by solving numerically the equation $\tildef_{\rm i}^0=0$ for $\beta >1$. In Fig. 4 (where the high--concentration case corresponding to $\beta >1$ is shown), the decrease of $\tildef_{\rm i}^0$ when $\beta$ increases is apparent. \placefigure{fig4} The second important point addressed by using the DFs is the study of the {\it combined} effect of orbital anisotropy and halo concentration in determining the consistency of the two (1,0) model components.
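The critical concentration quoted above can be reproduced by locating the zero of the $\beta>1$ branch of the central value of the isotropic DF numerically. A short Python sketch (the positive prefactor $3\mu/8\sqrt{2}\pi^3$ is dropped, since it does not move the zero):

```python
import math

def f_iso_center(beta):
    # beta > 1 branch of the central value of the isotropic DF of the
    # gamma = 0 component, up to the positive prefactor 3 mu/(8 sqrt(2) pi^3),
    # which does not affect the location of the zero.
    t1 = (-3.0 * (64*beta**3 - 240*beta**2 + 280*beta - 105)
          / (32.0 * (beta - 1.0)**2.5 * beta**4.5)
          * math.atanh(math.sqrt((beta - 1.0) / beta)))
    t2 = -(16*beta**3 - 328*beta**2 + 630*beta - 315) / (32.0 * (beta - 1.0)**2 * beta**4)
    return t1 + t2

def bisect(f, lo, hi, iters=80):
    # simple bisection; assumes f(lo) and f(hi) bracket a sign change
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

beta_crit = bisect(f_iso_center, 5.0, 5.5)  # close to the quoted 5.233
```

The function is positive at $\beta=5$ and negative at $\beta=5.5$, and the bisection recovers the critical value $\beta\simeq 5.23$.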
We cannot expect a simple behavior, because -- as should be clear from the previous sections -- halo concentration and anisotropy affect the DF in {\it different} regions of the phase--space, i.e., the high energy regions of the DF are more sensitive to concentration effects, while the OM orbital anisotropy acts mainly at intermediate energies. The simplest way to summarize the results is to express the consistency limitations in terms of the anisotropy radius of each component as a function of $\beta$, determining in the parameter space $(s_{\rm a},\beta)$ the critical regions where the models are consistent. This approach is particularly useful because, independent of the specific form of the density profile of the investigated model, the positivity requirement for each DF of an OM multi--component system over all the phase--space can be expressed in terms of the anisotropy radius as a function of the other model parameters, due to the simple appearance of $s_{\rm a}$ in equation (20). In fact, let $A_+$ be the set defined by the property that $f_{\rm i}>0$ $\forall\nu\in A_+$. Then, from equation (20), \begin{equation} s_{\rm a}\geq s_{\rm ac}^- = \sqrt{\max\left\{0,{\rm sup}\left[-{f_{\rm a}(\nu)\over f_{\rm i}(\nu)}\right ]_{\nu\in A_+}\right\}}, \end{equation} is a first condition to be satisfied. Obviously, when $f_{\rm i}>0$ over all the phase--space (the common situation), $A_+$ coincides with the total range of variation for $\nu$, and equation (38) is also the {\it only} condition to be checked for the model consistency. In this case equation (38) shows that there is {\it at most} a lower bound for the anisotropy radius, $s_{\rm ac}^-$. For example, this is the case for the $\gamma=1$ component in (1,0) and (1,1) models, or for one--component anisotropic $\gamma$ models.
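In the common situation $f_{\rm i}>0$ everywhere, equation (38) is a straightforward numerical recipe: sample $f_{\rm i}$ and $f_{\rm a}$ on a grid in $\nu$ and take the supremum. A minimal Python sketch; the sample values below are synthetic and purely illustrative, not taken from the (1,0) models:

```python
import math

def min_anisotropy_radius(f_iso, f_ani):
    # s_ac^- from equation (38), assuming f_i > 0 on the whole grid so that
    # A_+ is the full sampled range of nu.
    if not all(fi > 0.0 for fi in f_iso):
        raise ValueError("equation (38) in this form needs f_i > 0 everywhere")
    sup = max(-fa / fi for fi, fa in zip(f_iso, f_ani))
    return math.sqrt(max(0.0, sup))

# Hypothetical sampled DF values on a nu grid (illustration only):
f_i = [1.0, 2.0, 1.0]
f_a = [0.5, -1.0, 2.0]
s_ac_minus = min_anisotropy_radius(f_i, f_a)  # sqrt(0.5), set where -f_a/f_i peaks
```

When $f_{\rm a}\geq 0$ everywhere the supremum is non-positive and the bound degenerates to $s_{\rm ac}^-=0$, i.e., no anisotropy limit from equation (38).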
When the set $A_-$ (complementary to $A_+$) is not empty, i.e., $f_{\rm i} <0$ over some region of phase--space, a second inequality, derived from equation (20), must necessarily be verified: \begin{equation} s_{\rm a}\leq s_{\rm ac}^+ = \sqrt{{\rm inf}\left[{f_{\rm a} (\nu)\over |f_{\rm i} (\nu)|}\right ]_{\nu\in A_-}}. \end{equation} A general consequence of equations (38)-(39) valid for {\it all} single or multi--component spherically symmetric, radially anisotropic OM models, is that the allowed region for consistency in the anisotropy space is given by $s_{\rm ac}^- <s_{\rm a} < s_{\rm ac}^+$. Moreover, if $f_{\rm a} <0$ $\forall\nu\in A_-$, or $s_{\rm ac}^+ < s_{\rm ac}^-$, then the proposed model is inconsistent. The quantitative trend of $s_{\rm ac}^-$ for the $\gamma=1$ density distribution with a dominant $\gamma=0$ halo is shown in Fig. 5a (solid line): for a given $\beta$ all values of $s_{\rm a}$ higher than the critical curve are acceptable. Note how an increase in the halo concentration (a decreasing $\beta$) produces an increase of $s_{\rm ac}^-$, i.e., a very concentrated halo makes the other component more sensitive to anisotropy effects, a behavior qualitatively anticipated in \S 4.1, and already found for (1,1) models (C96, Fig. 5). A more complicated (and more interesting) case is presented by the halo--dominated $\gamma=0$ model. In this case we already know that, due to the halo concentration, even the isotropic case can be inconsistent, i.e., $f_{\rm i} <0$. This means that $s_{\rm ac}^+$ must also be considered. The trend of $s_{\rm ac}^-$ is shown in Figs. 5a and 5b (dotted line): as in the previous case an increase of the minimum anisotropy radius corresponds to an increasing halo concentration (i.e., to an increasing $\beta$). As $\beta$ increases above the critical value $\simeq 5.233$, $f_{\rm i}$ becomes negative at the center, and the isotropic $\gamma=0$ component becomes inconsistent: in Fig.
5a this region is contained in the box at the top--right, which is enlarged in Fig. 5b. Here the dotted line is again $s_{\rm ac}^-$, and the dashed line represents $s_{\rm ac}^+$: for $5.233\lesssim\beta\lesssim 6.15$ the inequality $s_{\rm ac}^-<s_{\rm ac}^+$ holds, and according to equations (38)-(39) the region between the two curves corresponds to consistent $\gamma=0$ components. This is a quite counterintuitive example of the combined effect of an external potential and anisotropy on the consistency of an anisotropic galaxy model, where an otherwise inconsistent isotropic model is made consistent by orbital anisotropy! Finally, for $\beta \gtrsim 6.15$ no physically acceptable $\gamma=0$ components are possible, even considering the effect of anisotropy. A question arises: how well does the asymptotic analysis obtained in the limit of dominant halos compare to the more realistic cases of halos with finite mass? An answer can be obtained by inspection of Fig. 5a, where the dashed lines represent the limits on the anisotropy radius obtained when considering a halo ten times as massive as the component investigated. Note that when the halo is {\it more concentrated} than the considered density component [large $\beta$ in the case of the $\gamma=0$ model (dotted line) and small $\beta$ for the $\gamma=1$ model (solid line)], the curves corresponding to the asymptotic analysis and the dashed ones are indistinguishable for any practical application. On the contrary, a small departure appears when the halo scale--length is substantially larger than that of the considered density component, with the dashed curves approaching the critical value for the anisotropy radius corresponding to the one--component model (the two black dots).
This is an obvious behavior, since for any {\it finite} value of the halo mass, its gravitational effect becomes weaker and weaker for larger and larger halo scale--length. \placefigure{fig5} \section{Conclusions} In this paper, an extensive analytical investigation of the phase--space of two--component spherical galaxy models made of the sum of a Hernquist density distribution and a $\gamma=0$ model with different physical scales, is carried out. Following the simple Osipkov--Merritt parameterization, a variable amount of orbital anisotropy is allowed in each component. For these models, other important properties useful in applications -- the velocity dispersion components and the various energy terms entering the scalar virial theorem -- can be expressed analytically, and are given in Appendix C. The main results can be summarized as follows: \begin{enumerate} \item The necessary and sufficient conditions that the model parameters must satisfy, in order to correspond to a (1,0) system for which the two physically distinct components have a positive DF are analytically derived using the method introduced in CP92. Some conditions are obtained for the wider class of two--component $(\gamma_1,\gamma_2)$ models [of which the (1,0) models are a special case]. In particular, it is shown that the DF of the $\gamma_1$ component in isotropic $(\gamma_1,\gamma_2)$ models is nowhere negative, independent of the mass and concentration of the $\gamma_2$ component, whenever $1\leq\gamma_1 <3$ and $0\leq\gamma_2\leq\gamma_1$. As a special application of this result, it follows that a BH of any mass can be consistently added at the center of any isotropic member of the $\gamma$ family of models, when $1\leq\gamma <3$. Two important consequences follow. 
The first is that the consistency of isotropic (1,1) [or (1,BH)] models proved in C96 using an ``ad hoc'' technique is not exceptional, but a common property of a large class of two--component $\gamma$ models: for example, also isotropic two--component Jaffe ($\gamma=2$) or Jaffe+BH models can be safely assembled. The second is that in two--component isotropic models, the component with the steeper central density distribution is usually the most robust against inconsistency. \item It is shown that an analytic estimate of a minimum value of $s_{\rm a}$ for one--component $\gamma$ models with a massive (dominant) BH at their center can be explicitly found. As expected, this minimum value decreases for increasing $\gamma$. \item It is shown that the analytic expression for the DF of (1,0) models with general OM anisotropy can be found in terms of elliptic functions. The special cases in which each one of the two density components are embedded in a dominant halo are also discussed: under this assumption the DFs can be expressed using just elementary functions, allowing a detailed analytical investigation. \item The region of the parameter space in which (1,0) models are consistent is explored using the derived DFs: it is shown that, unlike the $\gamma=1$ component, the $\gamma=0$ component becomes inconsistent when the halo is sufficiently concentrated, even in the isotropic case. This is an explicit example (albeit not so extreme) of the result found by CP92, that numerically proved the impossibility of adding a King or a quasi--isothermal halo to a de Vaucouleurs galaxy. In such models, the (isotropic) de Vaucouleurs galaxy was found instead consistent over all the parameter space. \item The combined effect of halo concentration and orbital anisotropy is finally investigated. 
The trend of the minimum value for the anisotropy radius as a function of the halo concentration is qualitatively similar in both components, and to that found for (1,1) models in C96: a more diffuse halo allows a larger amount of anisotropy. A qualitatively new behavior is found and explained by investigating the DF of the $\gamma=0$ component in the halo--dominated case for high halo concentrations. It is analytically shown that there exists a small region in the parameter space where a sufficient amount of {\it anisotropy} can compensate the inconsistency produced by the halo concentration on the structurally analogous -- but isotropic -- case. \end{enumerate} As a final remark, it can be useful to point out some general trends that emerge when comparing different one and two--component models with OM anisotropy, such as those investigated numerically in CP92 and CL97, and analytically in C96 and in this paper. The first common trend is that OM anisotropy produces a negative DF outside the galaxy center, while the halo concentration affects mainly the DF at high (relative) energies. The second is that the possibility of sustaining a strong degree of anisotropy is weakened by the presence of a very concentrated halo. The third is that in two--component models, in cases of very different density profiles in the central regions, the component with the flatter density is the most ``delicate'' and can easily be found to be inconsistent: particular attention should be paid in constructing such models. \acknowledgments I would like to thank Giuseppe Bertin, Laura Greggio, and Silvia Pellegrini for helpful comments and discussions. The referee, Stephen Levine, is especially thanked for his comments that greatly improved the paper. This work has been partially supported by contracts ASI-95-RS-152, ASI-ARS-96-70, and MURST--Cofin98.
\section{Introduction} Interesting new directions in topological quantum computing include its extension from anyons to gapped boundaries and symmetry defects, with the hope that anyonic systems with non-universal computational power can be enhanced to achieve universality. Enrichment of topological physics in two spatial dimensions by gapped boundaries has been investigated intensively, but their computing power has not been analyzed in detail yet. One interesting case is gapped boundaries of Dijkgraaf-Witten theories, both for their experimental relevance and as theoretical exemplars (see \cite{cong2016topological,cong2017cmp,cong2017defects} and the references therein). In this paper, we study representations of the braid groups from braiding gapped boundaries of Dijkgraaf-Witten theories and their twisted generalizations, which are (twisted) quantum double topological orders in two spatial dimensions. We show that the resulting braid (pure braid) representations are all monomial with respect to some specific bases, hence all such representation images of the braid groups are finite groups (see also \cite{Rowell}). We give explicit formulas for the monomial matrices and the ground state degeneracy of the Kitaev models that are Hamiltonian realizations of Dijkgraaf-Witten theories. Our results imply that braiding gapped boundaries alone cannot provide universal gate sets for topological quantum computing with gapped boundaries. For a topological order of the form $\mathcal{C}= \mathcal{Z}(\mathcal{S})$, where $\mathcal{S}$ is some unitary fusion category, gapped boundaries are modelled by Lagrangian algebras (see \cite{cong2016topological} and the references therein). For these models the ground state manifolds have the form $\operatorname{Hom}_\mathcal{C}(1,A_1\otimes \cdots \otimes A_n)$, where the $A_i$'s are the Lagrangian algebras modelling the gapped boundaries, see \cite[Section 3]{cong2016topological} for details.
Recall that a Lagrangian algebra in any modular (tensor) category is a commutative étale algebra whose quantum dimension is maximal. A group theoretical modular category (GTMC) is a category of the form $\mathcal{C}= \mathcal{Z}(\operatorname{Vec}_G^\omega)$ for some finite group $G$ and some $\omega\in Z^3(G,\mathbb{C}^\times)$, where $\mathcal{Z}$ denotes the Drinfeld center and $\operatorname{Vec}_G^\omega$ is the category of finite dimensional $G$-graded vector spaces with associativity constraint twisted by $\omega\in H^3(G,\mathbb{C}^\times)$. Kitaev \cite{kitaev2003fault} proposed Hamiltonian realizations of Dijkgraaf-Witten theories, whose topological orders are GTMCs. Moreover, extensions of these Hamiltonian realizations to surfaces with boundaries can be constructed from Lagrangian algebras \cite{bravyi1998quantum,bombin2008family,beigi2011quantum,kitaev2012models}. Lagrangian algebras in GTMCs are in one-to-one correspondence with indecomposable module categories over $\operatorname{Vec}_G^\omega$ \cite{davydov2013witt}, which are in bijection with pairs $(H,\gamma)$, where $H$ is a subgroup of $G$ and $\gamma\in C^2(H,\mathbb{C}^\times)$ such that $\delta(\gamma)=\omega|_{H^{\times 3}}$, all up to conjugation \cite{natale2016equivalence}. A more direct description of the correspondence between Lagrangian algebras and pairs $(H,\gamma)$ can be found in \cite{davydov2010modular}. Recently, a quantum computing scheme that uses gapped boundaries to achieve universality was proposed \cite{cong2016topological, cong2017cmp,cong2017defects,cong2017universal}. Braiding gapped boundaries can either be added to braiding anyons as in Kitaev's original proposal or serve as new computing primitives supplemented with other topological operations. Gapped boundaries lead to additional degeneracy of the topologically protected subspace, which potentially allows the implementation of more powerful gates.
More precisely, the new gates come from representation matrices of the braid groups, $\mathcal{B}_n$, on objects of the GTMCs that are tensor products of Lagrangian algebras. But a characterization of the computational power of these new braid representations, mathematically a study of the representation images, was left as an important open problem \cite{cong2016topological,cong2017universal}. The goal of this paper is to provide such a characterization. We find a canonical monomial structure for Lagrangian algebras in $\mathcal{Z}(\operatorname{Vec}_G^\omega)$, which allows us to carry out the computations explicitly. This paper is organized as follows. Section \ref{Monomial} develops the theory of monomial representations. Specifically, it shows how to calculate invariants for a representation of $G$ using the monomial structure. In Section \ref{Section: monomial twisted Yetter-Drinfeld} we introduce the notion of a monomial twisted Yetter-Drinfeld module. We use the theory developed in Section \ref{Monomial} to give an explicit description and a basis for $\operatorname{Hom}_{\mathcal{Z}(\operatorname{Vec}_G^\omega)}(\mathbb{C}, V^{\otimes n})$ if $V$ is a monomial object. Next, we describe the representation of $\mathcal{B}_n$ with respect to this basis. Theorem \ref{thm: YD monomial } states that the representation is monomial and Theorem \ref{thm:action in basis} gives an explicit formula for the non-zero entries. In Section \ref{Section: representation Lagrangian} we prove that every Lagrangian algebra in $\mathcal{Z}(\operatorname{Vec}_G^\omega)$ has a canonical monomial structure. Then the results of Section \ref{Section: monomial twisted Yetter-Drinfeld} are applied to Lagrangian algebras in $\mathcal{Z}(\operatorname{Vec}_G^\omega)$. We finish the section by developing some examples and applications. \section{Monomial representations}\label{Monomial} In this section we recall some basic definitions and results on monomial representations of groups.
\begin{defi} A \textit{monomial space} is a triple $\mathbf{V}=(V,X,(V_x)_{x\in X})$ where: \begin{itemize} \item[(i)] $V$ is a finite dimensional complex vector space. \item[(ii)] $X$ is a finite set. \item[(iii)] $(V_x)_{x\in X}$ is a family of one dimensional subspaces of $V$, indexed by $X$, such that $ V=\bigoplus_{x\in X} V_x$. \end{itemize} Let $G$ be a group. By a \textit{monomial representation} of $G$ on $\mathbf{V}$ we mean a group homomorphism \[\Gamma: G\to \operatorname{GL}(V),\] such that for every $g\in G,$ $\Gamma(g)$ permutes the $V_x$'s; hence, $\Gamma$ induces an action by permutation of $G$ on $X$. We will denote $\Gamma(g)(v)$ just by $g\triangleright v$. \end{defi} If $V$ is a representation of $G$, we denote by $V^G$ the subspace of $G$-invariant vectors, \textit{i.e.}, \[V^G=\{v\in V: g\triangleright v=v, \text{ for all } g\in G\}.\] For each $x\in X$, we will denote by $\operatorname{Sta}_G(x)$ the stabilizer of $x$ and by $\mathcal{O}_G(x)$ the $G$-orbit of $x$. For $G$ finite and a representation $V$, define \begin{align*} \operatorname{Av}_G: V \to V,&& v\mapsto \frac{1}{|G|} \sum_{g\in G} g\triangleright v. \end{align*} It is easy to see that $\operatorname{Av}_G$ is a $G$-linear projection onto $V^G$. For a $G$-orbit $\mathcal{O}$ we define \begin{align*} \operatorname{Av}_G(V_\mathcal{O}):=\operatorname{Av}_G(V_x),&& x\in \mathcal{O}, \end{align*} which is well defined since $\operatorname{Av}_G(V_x)=\operatorname{Av}_G(V_{x'})$ for any $x'\in \mathcal{O}_G(x)$. We say that an element $x \in X$ is \textit{regular} under the monomial action of $G$ if $\Gamma(g)$ is the identity map on $V_x$, for all $g\in \operatorname{Sta}_G(x)$. Let us write $X/G$ for the set of orbits of the action of $G$ on $X$ and $\tilde{X}$ for the regular ones. \begin{prop}\cite[Lemma 9.1]{karpilovsky1985projective}\label{Prop:Lemma Karp} Let $\mathbf{V}=(V,X,(V_x)_{x\in X})$ be a monomial representation of $G$.
\begin{enumerate}[leftmargin=*,label=\rm{(\alph*)}] \item $x\in X$ is a regular element if and only if $\operatorname{Av}_G(V_x)\neq 0$. \item If $x \in X$ is a regular element under the monomial action of $G$, then so are all elements in the $G$-orbit of $x$. \item The triple \[ \mathbf{V}^G = \left( V^G, \tilde{X}, \big (\operatorname{Av}_G(V_\mathcal{O})\big)_{\mathcal{O} \in \tilde{X}} \right) \] is a monomial space. \item The dimension of $V^G$ is equal to the number of regular $G$-orbits under the monomial action of $G$ on $X$. \end{enumerate} \end{prop}\qed Let $\mathbf{V}=(V,X,(V_x)_{x\in X})$ and $\mathbf{V}'=(V',Y,(V'_y)_{y\in Y})$ be monomial spaces. A linear isomorphism $T:V\rightarrow V'$ is called an \textit{isomorphism of monomial spaces} if for every $x\in X$ there exists $y\in Y$ such that $T(V_x)=V'_y$. \begin{prop}\label{iso of invariants} Let $\mathbf{V}=(V,X,(V_x)_{x\in X})$ and $\mathbf{V}'=(V',Y,(V'_y)_{y\in Y})$ be monomial representations of a finite group $G$. If $T:V\to V'$ is a $G$-linear isomorphism of monomial spaces, then $T|_{V^G}:\mathbf{V}^G\to \mathbf{V}'^G$ is an isomorphism of monomial spaces. \end{prop} \begin{proof} Clearly, $T|_{V^G}:V^G \rightarrow V'^G$ is a linear isomorphism. Let $x\in X$ be a regular element. Since $T$ is an isomorphism of monomial spaces, there is some $y \in Y$ such that $T(V_x)=V'_y$. In that case: \[ \operatorname{Av}_G(V'_y)=\operatorname{Av}_G(T(V_x))=T(\operatorname{Av}_G(V_x)). \] This implies $y$ is regular, because $\operatorname{Av}_G(V_x)\neq \{0\}$ and $T$ is an isomorphism. It also says $T|_{V^G}(\operatorname{Av}_G(V_{\mathcal{O}(x)}))=\operatorname{Av}_G(V_{\mathcal{O}(y)}')$, which means $T|_{V^G}$ is an isomorphism of monomial spaces.
\end{proof} \section{Monomial representation of the braid group}\label{Section: monomial twisted Yetter-Drinfeld} In this section we introduce the notion of monomial twisted Yetter-Drinfeld modules and prove that the representation of the braid group $\mathcal{B}_n$ on $\operatorname{Hom}_{\mathcal{Z}(\operatorname{Vec}_G^\omega)}(\mathbb{C},V^{\otimes n})$ is monomial if $V$ is monomial. \subsection{Dijkgraaf-Witten theories} Let $G$ be a discrete group. A (normalized) 3-cocycle $\omega \in Z^3(G, \mathbb{C}^\times)$ is a map $\omega:G\times G\times G\to \mathbb{C}^\times$ such that \begin{align*} \omega(ab,c,d)\omega(a,b,cd)&= \omega(a,b,c)\omega(a,bc,d)\omega(b,c,d), & \omega(a,1,b)=1, \end{align*} for all $a,b,c,d\in G. $ Let us recall the description of the modular category $\mathcal{Z}(\operatorname{Vec}_G^\omega)$, the Drinfeld center of the category $\operatorname{Vec}_G^\omega$, sometimes called the category of twisted Yetter-Drinfeld modules. The category $\mathcal{Z}(\operatorname{Vec}_G^\omega)$ is braided equivalent to the representations of the twisted Drinfeld double defined by Dijkgraaf, Pasquier and Roche in \cite[Section 3.2]{DPR}. Given $\omega \in Z^3(G; \mathbb{C}^\times)$, we define \begin{align*} \omega(g,g';h)&:=\frac{\omega(g,^{g'}h,g')}{\omega(^{gg'}h,g,g')\omega(g,g',h)},\\ \omega(g;f,h)&:=\omega(g,f,h)\omega(\prescript{g}{}{f},g,h)^{-1}\omega(\prescript{g}{}{f},\prescript{g}{}{h},g), \end{align*} for $f,g,g^\prime, h \in G$. The objects of $\mathcal{Z}(\operatorname{Vec}_G^\omega)$ are $G$-graded vector spaces $V=\bigoplus_{g\in G} V_{g}$ with a linear map $\triangleright: \mathbb{C}^{\omega}G\otimes V\to V$ such that $1\triangleright v=v$ for all $v\in V$, \begin{align*} (gh)\triangleright v &=\omega(g,h;k) (g\triangleright (h\triangleright v)), & g,h,k & \in G, & v &\in V_k, \end{align*} satisfying the following compatibility condition: \begin{align*} g\triangleright V_h & \subseteq V_{ghg^{-1}}, & g,h & \in G.
\end{align*} Morphisms in $\mathcal{Z}(\operatorname{Vec}_G^\omega)$ are $G$-linear $G$-homogeneous maps. The tensor product of $V=\oplus_{g\in G}V_g$ and $W=\oplus_{g\in G}W_g$ is $V\otimes W$ as a vector space, with \[(V\otimes W)_g=\bigoplus_{h\in G}V_h\otimes W_{h^{-1}g},\] and for all $v\in V_g, w\in W_l$, \[h\triangleright (v\otimes w)= \omega(h;g,l)(h\triangleright v) \otimes (h\triangleright w).\] For $V, W, Z\in \mathcal{Z}(\operatorname{Vec}_G^\omega)$ the associativity constraint is defined by \begin{align*} a_{V,W,Z}: (V\otimes W)\otimes Z&\to V\otimes (W\otimes Z)\\ (v_g\otimes w_h)\otimes z_k &\mapsto \omega(g,h,k) v_g\otimes( w_h\otimes z_k) \end{align*} for all $g,h,k \in G, v_g\in V_g, w_h\in W_h, z_k\in Z_k$. The category is braided, with braiding $c_{V,W}:V\otimes W\to W\otimes V$, $V,W\in\mathcal{Z}(\operatorname{Vec}_G^\omega)$, \begin{align*} c_{V,W}(v\otimes w) &= (g\triangleright w) \otimes v, & g & \in G, & v &\in V_g, & w&\in W. \end{align*} \subsection{Braid group representation of twisted Yetter-Drinfeld modules} Since the braided category $\mathcal{Z}(\operatorname{Vec}_G^\omega)$ is not strict, we must be careful about the way we associate terms when we consider tensor products with more than two objects.
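Since all of the associativity and braiding data above are controlled by the 3-cocycle $\omega$, it is worth checking the cocycle identity of \S 3.1 concretely. The Python sketch below does this exhaustively for a standard representative of the class $p\in H^3(\mathbb{Z}_n,\mathbb{C}^\times)\cong\mathbb{Z}_n$; this particular representative is our illustrative assumption, not taken from the text:

```python
import cmath
from itertools import product

def omega_cyclic(n, p):
    # omega(a, b, c) = exp(2 pi i p a ((b + c) - (b + c) mod n) / n^2),
    # a standard representative of the class p in H^3(Z_n, C^x); the group
    # Z_n is written additively, with identity 0.
    def w(a, b, c):
        return cmath.exp(2j * cmath.pi * p * a * ((b + c) - (b + c) % n) / n**2)
    return w

def max_cocycle_defect(n, p):
    # largest violation of omega(ab,c,d) omega(a,b,cd)
    #   = omega(a,b,c) omega(a,bc,d) omega(b,c,d) over all quadruples
    w = omega_cyclic(n, p)
    err = 0.0
    for a, b, c, d in product(range(n), repeat=4):
        lhs = w((a + b) % n, c, d) * w(a, b, (c + d) % n)
        rhs = w(a, b, c) * w(a, (b + c) % n, d) * w(b, c, d)
        err = max(err, abs(lhs - rhs))
    return err
```

The normalization $\omega(a,1,b)=1$ holds automatically for this representative (the middle entry $0$ produces no carry), and `max_cocycle_defect(n, p)` vanishes up to floating point noise for every $n$ and $p$.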
For a list of objects $A_1, A_2, \ldots, A_n \in \mathcal{Z}(\operatorname{Vec}_G^\omega)$ we define \[A_1\otimes \cdots \otimes A_n := (\cdots (A_1\otimes A_2)\otimes \cdots \otimes A_{n-1})\otimes A_n, \] and an isomorphism by \begin{align} \sigma_i' = (a^{-1}_{A_1 \otimes \cdots \otimes A_{i-1},A_{i+1},A_{i}}&\otimes \operatorname{id}_{A_{i+2} \otimes \cdots \otimes A_{n}})\circ \label{generadores}\\ (\operatorname{id}_{A_1\otimes \cdots A_{i-1}}&\otimes c_{A_i,A_{i+1}}\otimes \operatorname{id}_{A_{i+2} \otimes \cdots \otimes A_{n}})\circ \notag\\ (&a_{A_1 \otimes \cdots \otimes A_{i-1},A_i,A_{i+1}}\otimes \operatorname{id}_{A_{i+2} \otimes \cdots \otimes A_{n}}), \notag \end{align} where $a_{V,W,Z}$ denotes the associativity constraints. If $A=A_1=\cdots =A_n$, there exists a unique group homomorphism $\rho_n:\mathcal{B}_n\to \operatorname{Aut}_{\mathcal{Z}(\operatorname{Vec}_G^\omega)}(A^{\otimes n})$ sending the generator $\sigma_i \in \mathcal{B}_n$ to $\sigma'_i$. In general, the pure braid group $\mathcal{P}_n$ acts on $A_1\otimes \cdots \otimes A_n$, in the sense that there exists a group homomorphism $\rho_n:\mathcal{P}_n\to \operatorname{Aut}_{\mathcal{Z}(\operatorname{Vec}_G^\omega)}(A_1\otimes \cdots \otimes A_n)$. \subsection{Crossed $G$-sets} Let $G$ be a group. A (left) \textit{crossed} $G$-\textit{set} is a left $G$-set $X$ together with a grading function $|-|:X\to G$ such that \[|gx|=g|x|g^{-1}\] for all $x\in X, g\in G$. If $X$ and $Y$ are crossed $G$-sets, a $G$-equivariant map $f:X\to Y$ is a morphism of crossed $G$-sets if $|f(x)|=|x|$ for all $x\in X$. If $X$ and $Y$ are crossed $G$-sets, the Cartesian product $X\times Y$ is a crossed $G$-set with the diagonal action and grading map $|(x,y)|=|x||y|$. The category of crossed $G$-sets is a braided category with braiding \begin{align*} c_{X,Y}:X\times Y&\to Y\times X\\ (x,y)&\mapsto (|x|\triangleright y,x).
\end{align*} Thus, given a crossed $G$-set $X$ the braid group $\mathcal{B}_n$ acts on $X^{n}$ in the following way: \[\sigma_i':=\operatorname{id}_{X^{i-1}}\times c_{X,X}\times \operatorname{id}_{X^{n-i-1}}.\] \subsection{Monomial objects of $\mathcal{Z}(\operatorname{Vec}_G^\omega)$} Let $G$ be a finite group and $\omega \in Z^3(G,\mathbb{C}^\times)$ a 3-cocycle. \begin{defi} A monomial Yetter-Drinfeld module is a monomial space $\mathbf{V}=(V,X,(V_x)_{x\in X})$ such that $V\in \mathcal{Z}(\operatorname{Vec}_G^\omega)$, the twisted $G$-action $\triangleright$ permutes the $V_x$'s and each $V_x$ is $G$-homogeneous. \end{defi} \begin{rem} \begin{enumerate}[leftmargin=*,label=\rm{(\alph*)}] \item If $\mathbf{V}=(V,X,(V_x)_{x\in X})$ is a monomial Yetter-Drinfeld module, the set $X$ is a crossed $G$-set with the induced $G$-action and the grading map. \item If $\mathbf{V}=(V,X,(V_x)_{x\in X})$ is a monomial Yetter-Drinfeld module, the action of $G$ on $(V_e,X_e,(V_x)_{x\in X_e})$ is monomial, where $X_e:=\{x\in X: |x|=e\}$ and $V_e=\oplus_{x\in X_e}V_x$. \end{enumerate} \end{rem} \begin{thm}\label{thm: YD monomial } Let $G$ be a finite group, $\omega \in Z^3(G,\mathbb{C}^\times)$. If $\mathbf{V}=(V,X,(V_x)_{x\in X})$ is a monomial Yetter-Drinfeld module in $\mathcal{Z}(\operatorname{Vec}_G^\omega)$, then \begin{enumerate}[leftmargin=*,label=\rm{(\alph*)}] \item the action of $\mathcal{B}_n$ on $\operatorname{Hom}_{\mathcal{Z}(\operatorname{Vec}_G^\omega)}(\mathbb{C}, V^{\otimes n} )$ is monomial, \item the dimension of $\operatorname{Hom}_{\mathcal{Z}(\operatorname{Vec}_G^\omega)}(\mathbb{C}, V^{\otimes n} )$ is equal to the number of regular $G$-orbits under the monomial action of $G$ on \[(X^{ n})_e:=\{(x_1,\ldots,x_n): |x_1|\cdots |x_n|=e\}.\] \end{enumerate} \end{thm} \begin{proof} The action of $G$ on $(V^{\otimes n}_e,(X^{n})_e,(V_x)_{x\in (X^{n})_e})$ is monomial.
Hence by Proposition \ref{Prop:Lemma Karp}, the triple \[ \mathbf{V}_e^G := \left( (V^{\otimes n}_e)^G, \widetilde{(X^{n})_e}, \big (\operatorname{Av}_G((V^{\otimes n}_e)_\mathcal{O})\big)_{\mathcal{O} \in \widetilde{(X^{n})_e}} \right) \] is a monomial space. Since $\operatorname{Hom}_{\mathcal{Z}(\operatorname{Vec}_G^\omega)}(\mathbb{C}, V^{\otimes n} )= (V^{\otimes n})_e^G$ and each of the automorphisms $\sigma'$ is a morphism in $\mathcal{Z}(\operatorname{Vec}_G^\omega)$, the restriction $\sigma'|_{V^{\otimes n}_e}:(V^{\otimes n}_e,(X^{n})_e,(V_{\mathbf{x}})_{\mathbf{x}\in (X^{n})_e})\to (V^{\otimes n}_e,(X^{n})_e,(V_{\mathbf{x}})_{\mathbf{x}\in (X^{n})_e})$ is a $G$-linear isomorphism of monomial spaces. It follows from Proposition \ref{iso of invariants} that $\sigma'|_{(V^{\otimes n})_e^G}$ is an isomorphism of monomial spaces. Thus, the linear representation \begin{align*} \rho_n: \mathcal{B}_n&\to \operatorname{GL}((V^{\otimes n}_e)^G)\\ \sigma &\mapsto \sigma', \end{align*} is a monomial representation of $\mathcal{B}_n$. The second part follows immediately from Proposition \ref{Prop:Lemma Karp}. \end{proof} \subsection{Monomial matrices of the braid representation}\label{subsection:monomial basis} In this subsection we obtain concrete formulas for the monomial braid representations associated to a monomial Yetter-Drinfeld module. Let $G$ be a finite group, $\omega \in Z^3(G,\mathbb{C}^\times)$ and $\mathbf{V}=(V,X,(V_x)_{x\in X})$ be a monomial Yetter-Drinfeld module. If we fix non-zero vectors $\mathcal{S}:=\{v_x\in V_x:x\in X\}$, the twisted $G$-action defines a map \[\lambda_X:G\times X\to \mathbb{C}^\times,\] by $g \triangleright v_x= \lambda_X(g;x)v_{gx}$, where $g \in G, x\in X$.
For the monomial Yetter-Drinfeld module $\mathbf{V}^{\otimes n}=(V^{\otimes n} ,X^{n},(V_x)_{x\in X^{n}})$ and the basis $\mathcal{S}^{\otimes n}:=\{v_{x_1}\otimes \cdots \otimes v_{x_n}:x_i\in X, \ 1\leq i\leq n\}$, the action is determined by the map $\lambda_{X^n}:G\times X^n\to \mathbb{C}^\times,$ \begin{align} \lambda_{X^n}(g;x_1,\ldots,x_n)&:=\prod_{i=1}^n\lambda_X(g;x_i)\omega(g;|x_1||x_2|\cdots |x_{n-1}|,|x_n|)\times \label{definition of lambda x n}\\ &\omega(g;|x_1|\cdots |x_{n-2}|,|x_{n-1}|) \cdots \omega(g;|x_1|,|x_2|), \notag \end{align}that is, \[g\rhd (v_{x_1}\otimes \cdots \otimes v_{x_n})= \lambda_{X^n}(g;x_1,\ldots,x_n) (v_{gx_1}\otimes \cdots \otimes v_{gx_n}),\] for all $g\in G, x_1,x_2,\ldots , x_n \in X$. Hence an element $(x_1,\ldots, x_n) \in (X^{n})_e$ is regular if and only if \begin{align}\label{condicion regular} \lambda_{X^n}(g;x_1,\ldots, x_n)=1,&& \text{ for all } g \in \bigcap_{i=1}^n \operatorname{Sta}(x_i). \end{align} Let $\mathcal{R}\subset (X^{n})_e$ be a set of representatives of the regular $G$-orbits of $(X^{n})_e$. Let $\mathcal{S}_{reg}= \{v_{x_1}\otimes \cdots \otimes v_{x_n}: (x_1,\ldots,x_n)\in \mathcal{R} \}$. By Proposition \ref{Prop:Lemma Karp} the set $\{\operatorname{Av}_G(v): v\in \mathcal{S}_{reg}\}$ is a basis of $(V^{\otimes n})_e^G$. In order to express the action of the generator $\sigma_i \in \mathcal{B}_n$ in terms of $\{\operatorname{Av}_G(v): v\in \mathcal{S}_{reg}\}$, for each $\mathbf{x}=(x_1,\ldots,x_n)\in \mathcal{R}$ choose $g_{\mathbf{x}}\in G$ such that $g_{\mathbf{x}}\triangleright \sigma_i'(\mathbf{x})=\mathbf{y}$, where $\mathbf{y}\in \mathcal{R}$ and $\sigma_i'(\mathbf{x})= (x_1, \cdots , x_{i-1}, |x_i|x_{i+1}, x_i, \cdots , x_n)$. Hence there is $\beta_{i,\mathbf{x}}\in \mathbb{C}^\times$ such that $g_{\mathbf{x}}\triangleright\sigma_i'(v_{x_1}\otimes \cdots \otimes v_{x_n})=\beta_{i,\mathbf{x}}v_{y_1}\otimes \cdots \otimes v_{y_n}$.
Since the action of the generator $\sigma_i \in \mathcal{B}_n$ is given by \begin{align} \sigma_i'(v_{x_1}\otimes \cdots \otimes v_{x_n})&= \omega^{-1}(|x_1|\cdots |x_{i-1}|,|x_{i+1}|,|x_i|)\times \label{formula generador grupo de trenza}\\ &\lambda_X(|x_i|;x_{i+1})\omega(|x_1|\cdots |x_{i-1}|,|x_i|,|x_{i+1}|)\times \notag\\ & v_{x_1}\otimes \cdots \otimes v_{x_{i-1}}\otimes v_{|x_i|x_{i+1}}\otimes v_{x_i}\otimes \cdots \otimes v_{x_n},\notag \end{align} we have that \begin{align} \beta_{i,\mathbf{x}}&= \omega^{-1}(|x_1|\cdots |x_{i-1}|,|x_{i+1}|,|x_i|)\times\label{def:beta} \\ &\lambda_X(|x_i|;x_{i+1})\omega(|x_1|\cdots |x_{i-1}|,|x_i|,|x_{i+1}|)\lambda_{X^n}(g_{\mathbf{x}};\sigma_i'(\mathbf{x})).\notag \end{align} \begin{thm}\label{thm:action in basis} Let $G$ be a finite group, $\omega \in Z^3(G,\mathbb{C}^\times)$ and $\mathbf{V}=(V,X,(V_x)_{x\in X})$ be a monomial Yetter-Drinfeld module. Let $Y$ be the set of all regular elements in $(X^{n})_e$ and $\mathcal{R}\subset Y$ a set of representatives of the $G$-orbits of $Y$. \begin{enumerate}[leftmargin=*,label=\rm{(\alph*)}] \item The projection $\pi :Y\to \mathcal{R}$ is a map of $\mathcal{B}_n$-sets. The image of $\mathbf{x}\in \mathcal{R}$ by the generator $\sigma_i\in \mathcal{B}_n$ will be denoted by $\sigma_i\triangleright \mathbf{x}$. \item Let $\mathcal{S}_{reg}= \{v_{x_1}\otimes \cdots \otimes v_{x_n}: (x_1,\ldots,x_n)\in \mathcal{R} \}$. The action of the generator $\sigma_i \in \mathcal{B}_n$ in the basis $\{\operatorname{Av}_G(v_{\mathbf{x}}): \mathbf{x}\in \mathcal{R}\}$ is given by \[\sigma_i(\operatorname{Av}_G(v_{\mathbf{x}}))= \beta_{i,\mathbf{x}}\operatorname{Av}_G(v_{\sigma_i \triangleright \mathbf{x}}),\]where $\beta_{i,\mathbf{x}}$ was defined in \eqref{def:beta}. \end{enumerate} \end{thm} \begin{proof} The first part is a consequence of Theorem \ref{thm: YD monomial }.
For the second part, recall that the number $\beta_{i,\mathbf{x}}$ and the element $g_\mathbf{x}\in G$ are such that \[g_\mathbf{x}\triangleright \sigma_i(v_{\mathbf{x}})=\beta_{i,\mathbf{x}}v_{\sigma_i\triangleright \mathbf{x}}.\] Hence, \begin{align*} \sigma_i(\operatorname{Av}_G(v_{\mathbf{x}}))&= \operatorname{Av}_G(\sigma_i(v_{\mathbf{x}}))\\ &=g_\mathbf{x}\triangleright \operatorname{Av}_G(\sigma_i(v_{\mathbf{x}}))\\ &=\operatorname{Av}_G(g_\mathbf{x}\triangleright\sigma_i(v_{\mathbf{x}}))\\ &=\operatorname{Av}_G(\beta_{i,\mathbf{x}}v_{\sigma_i\triangleright \mathbf{x}})\\ &=\beta_{i,\mathbf{x}}\operatorname{Av}_G(v_{\sigma_i \triangleright \mathbf{x}}). \end{align*} \end{proof} \begin{example}\label{ejemplo permutation crossed g-sets} Let $G$ be a finite group and $X$ be a left crossed $G$-set. Then the linearization $V_X:=\oplus_{x\in X}\mathbb{C}x$ is an (untwisted) Yetter-Drinfeld module in $\mathcal{Z}(\operatorname{Vec}_G)$. Clearly $\lambda_X\equiv 1$, thus every element in $(X^{n})_e$ is regular. Hence the canonical projection \[(X^{ n})_e\to (X^{ n})_e//G,\]is an epimorphism of $\mathcal{B}_n$-sets. In other words, the linear representation of $\mathcal{B}_n$ on $\operatorname{Hom}_{\mathcal{Z}(\operatorname{Vec}_G)}(\mathbb{C}, V_X^{\otimes n} )$ is the linearization of the permutation action of $\mathcal{B}_n$ on $(X^{ n})_e//G$. \end{example} \section{Braid group representations associated to Lagrangian algebras}\label{Section: representation Lagrangian} In this section we prove that every Lagrangian algebra in $\mathcal{Z}(\operatorname{Vec}_G^\omega)$ has a canonical monomial structure. The results of Section \ref{Section: monomial twisted Yetter-Drinfeld} can then be applied to Lagrangian algebras in $\mathcal{Z}(\operatorname{Vec}_G^\omega)$.
\subsection{Lagrangian algebras} Following \cite[Corollary 3.17]{davydov2017lagrangian}, we will describe the Lagrangian algebra in $\mathcal{Z}(G,\ThreeCocycle)$ associated to a pair $(H,\gamma)$, where $H\subseteq G$ is a subgroup and $\gamma:H\times H\to \mathbb{C}^\times$ a map such that \begin{align*} \frac{\gamma(ab,c)\gamma(a,b)}{\gamma(a,bc)\gamma(b,c)}=\omega(a,b,c),&& a,b,c \in H. \end{align*} Let $\mathbb{C}_{\gamma}[H]=\oplus_{h\in H}\mathbb{C}e_h$ be the group algebra of $H$ with the multiplication \begin{align*} e_{h_1}e_{h_2}=\gamma(h_1,h_2)e_{h_1h_2},&& h_1,h_2 \in H. \end{align*} The vector space $\mathbb{C}_{\gamma}[H]$ is a commutative algebra in $\mathcal{Z}(\operatorname{Vec}_H^\omega)$, where the $H$-action is given by \begin{align*} h_1\triangleright e_{h_2}=\epsilon(h_1,h_2)e_{h_1h_2h_1^{-1}},&& \epsilon(h_1,h_2):=\frac{\gamma(h_1,h_2)}{\gamma(\prescript{h_1}{}{h_2},h_1)},&& h_1,h_2 \in H, \end{align*}and the grading by $|e_h|=h$ for all $h\in H$. Let $\operatorname{Map}(G, \mathbb{C}_{\gamma}[H])$ be the vector space of all set-theoretic maps from $G$ to $\mathbb{C}_{\gamma}[H]$. With the grading given by \begin{align*} \vert a\vert&=f &\Leftrightarrow &&\forall x\in G \quad \vert a(x)\vert =x^{-1}fx, \end{align*} and the twisted $G$-action \begin{align*} (g\triangleright a)(x):=\omega(x^{-1},g^{-1}; |a|)^{-1}a(g^{-1}x), && g,x\in G, \end{align*} $\operatorname{Map}(G, \mathbb{C}_{\gamma}[H])$ is a twisted Yetter-Drinfeld module. The Lagrangian algebra $L(H,\gamma)$ is the Yetter-Drinfeld submodule \begin{align*} L(H,\gamma):=\{a\in \operatorname{Map}(G,\mathbb{C}_{\gamma}[H])\vert \, a(xh)=\omega(h^{-1},x^{-1}; |a|)h^{-1}\triangleright a(x) \}, \end{align*}see \cite{davydov2017lagrangian} for more details. \subsection{Monomial structure of the Lagrangian algebras $L(H,\gamma)$} In this subsection we prove that every Lagrangian algebra of the form $L(H,\gamma)$ has a canonical monomial structure.
Let $G$ be a group and $H\subset G$ a subgroup. We can regard $G\times H$ as a left $H$-set with the action given by $h\triangleright (g,h')=(gh^{-1},hh'h^{-1})$. Then we can consider the set of $H$-orbits, which we will denote by $G \times_H H$. The set $G \times_H H$ is equipped with a left $G$-action given by left multiplication on the first component. \begin{defi} Let $L(H,\gamma)$ be a Lagrangian algebra. For each $g\in G$, $f\in H$, define $\chi_{g,f}\in L(H,\gamma)$ by \begin{equation} \chi_{g,f}(x)= \begin{cases} 0, & x\notin gH \\ \omega(h^{-1},g^{-1}; \prescript{g}{}{f})\epsilon(h^{-1},f)e_{hfh^{-1}}, & x=gh, \text{ where } h\in H. \end{cases} \end{equation} \end{defi} \begin{rem} The function $\chi_{g,f}$ can be characterized as the unique map in $L(H,\gamma)$ with support $gH$ and such that $\chi_{g,f}(g)=e_f$. \end{rem} \begin{lem}\label{Lemma: equaciones} Let $L(H,\gamma)$ be a Lagrangian algebra in $\mathcal{Z}(G,\ThreeCocycle)$. Then \begin{align}\label{equ1: chi} \chi_{gh,f}=\omega(h,(gh)^{-1}; \prescript{gh}{}{f})\epsilon(h,f)\chi_{g,\prescript{h}{}{f}},&& g\in G, f,h\in H. \end{align} \begin{align}\label{equ2: chi} l\triangleright \chi_{g,f}=\omega((lg)^{-1},l^{-1};\prescript{g}{}{f})\chi_{lg,f},&& g,l \in G, f\in H. \end{align} \end{lem} \begin{proof} \eqref{equ1: chi}. Since the supports of $\chi_{gh,f}$ and $\chi_{g,\prescript{h}{}{f}}$ are both $gH$, and \begin{align*} \chi_{gh,f}(g) &= \chi_{gh,f}(ghh^{-1})\\ &= \omega(h,(gh)^{-1}; \prescript{gh}{}{f})\epsilon(h,f)\chi_{g,\prescript{h}{}{f}}(g), \end{align*} we obtain \eqref{equ1: chi}. \eqref{equ2: chi}. By the definition of the action of $G$ we have \begin{align*} (l\triangleright \chi_{g,f})(lg)&=\omega((lg)^{-1},l^{-1};\prescript{g}{}{f})\chi_{g,f}(g)\\ &=\omega((lg)^{-1},l^{-1};\prescript{g}{}{f})e_f. \end{align*}Since $l\rhd \chi_{g,f}$ and $\chi_{lg,f}$ are both supported in $lgH$, we get \eqref{equ2: chi}.
\end{proof} It follows from Lemma \ref{Lemma: equaciones} that $\mathbb{C}\chi_{gh,f}=\mathbb{C}\chi_{g,\prescript{h}{}{f}}$. Hence for any orbit $(g,h)\in G\times_H H$ the space $\mathbb{C}\chi_{g,h}$ is well defined. \begin{thm}\label{thm: Lagrangian are monomial} Let $L(H,\gamma)$ be a Lagrangian algebra in $\mathcal{Z}(G,\ThreeCocycle)$. Then $L(H,\gamma)$ with the decomposition \[L(H,\gamma)=\bigoplus_{(g,h)\in G\times_H H}\mathbb{C}\chi_{g,h}\] is a monomial twisted Yetter-Drinfeld module. \end{thm} \begin{proof} First we check that the sum $\sum_{(g,h)\in G\times_H H}\mathbb{C}\chi_{g,h}$ is in fact direct. Since $\textrm{supp}(\chi_{g,f})=gH$, the maps $\chi_{g,f}$ and $\chi_{g^\prime,f}$ are linearly independent if $gH\neq g^\prime H$. Hence it suffices to check linear independence of the collections $\{\chi_{g,f}\}_{f\in H}$, with $g$ fixed. But if $f\neq f^\prime$, then $\vert \chi_{g,f}\vert \neq \vert \chi_{g,f^\prime}\vert$. It follows that the sum $\sum_{(g,h)\in G\times_H H}\mathbb{C}\chi_{g,h}$ is direct. In order to see that $L(H,\gamma)=\sum_{(g,h)\in G\times_H H}\mathbb{C}\chi_{g,h}$, fix $\mathcal{R}\subset G$ a set of representatives of the left cosets of $H$ in $G$. Let $a\in L(H,\gamma)$. For each $r\in \mathcal{R}$, suppose \begin{equation} a(r)=\sum_{f\in H} \lambda_{r,f} e_f. \end{equation} Then we have \begin{equation} a=\sum_{r\in \mathcal{R},f\in H} \lambda_{r,f}\chi_{r,f}\in \sum_{(g,h)\in G\times_H H}\mathbb{C}\chi_{g,h}. \end{equation} By \eqref{equ2: chi} and the fact that $|\chi_{g,f}|=gfg^{-1}$, we obtain that $L(H,\gamma)$ is a monomial twisted Yetter-Drinfeld module. \end{proof} \begin{cor}\label{cor: Lagrangian are monomial } Let $G$ be a finite group, $\omega \in Z^3(G,\mathbb{C}^\times)$.
If $L(H,\gamma)$ is a Lagrangian algebra in $\mathcal{Z}(\operatorname{Vec}_G^\omega)$, then \begin{enumerate}[leftmargin=*,label=\rm{(\alph*)}] \item the action of $\mathcal{B}_n$ on $\operatorname{Hom}_{\mathcal{Z}(\operatorname{Vec}_G^\omega)}(\mathbb{C}, L(H,\gamma)^{\otimes n} )$ is monomial, \item the dimension of $\operatorname{Hom}_{\mathcal{Z}(\operatorname{Vec}_G^\omega)}(\mathbb{C}, L(H,\gamma)^{\otimes n} )$ is equal to the number of regular $G$-orbits under the monomial action of $G$ on \[\big((G\times_H H)^{\times n}\big)_e:=\{((g_1,h_1),\ldots,(g_n,h_n)): g_1h_1g_1^{-1}g_2h_2g_2^{-1}\cdots g_nh_ng_n^{-1}=e\}.\] \end{enumerate} \end{cor} \begin{proof} This follows from Theorem \ref{thm: Lagrangian are monomial} and Theorem \ref{thm: YD monomial }. \end{proof} We will fix a set of representatives $\mathcal{R} \subset G$ of the left cosets of $H$ in $G$. Thus every element $g \in G$ has a unique factorization $g = rh$, $h \in H, r \in \mathcal{R}$. We assume $e \in \mathcal{R}$. The uniqueness of the factorization $G = \mathcal{R} H$ implies that there are well defined maps \begin{align*} \triangleright: G\times \mathcal{R}\rightarrow \mathcal{R},&& \kappa:G\times \mathcal{R}\rightarrow H, \end{align*} determined by the condition \begin{align*} gr=(g\triangleright r)\kappa(g,r), && g\in G, r\in \mathcal{R}. \end{align*} As a crossed $G$-set we can identify $G\times_H H$ with $\mathcal{R}\times H$ with the action \begin{align*} g\triangleright(r,h):= (g\triangleright r,\prescript{\kappa(g,r)}{}{h}),&& r\in \mathcal{R},\ h\in H,\ g\in G, \end{align*} and grading map \begin{align*} |-|:\mathcal{R}\times H&\to G\\ (r,h)&\mapsto rhr^{-1}. \end{align*} It follows from Theorem \ref{thm: Lagrangian are monomial} that $B_\mathcal{R}:=\{\chi_{r,h} | \, r\in \mathcal{R}, \, h\in H\}$ is a basis for $L(H,\gamma)$.
In order to apply the results of Subsection \ref{subsection:monomial basis}, we only need to compute the map $\lambda_{\mathcal{R}\times H}:G\times (\mathcal{R}\times H)\to \mathbb{C}^\times$ such that \begin{align*} g \triangleright \chi_{r,h}= \lambda_{\mathcal{R}\times H}(g;r,h) \chi_{g\triangleright r,\prescript{\kappa(g,r)}{}{h}}, && g\in G, r\in \mathcal{R}, h\in H. \end{align*} Using Lemma \ref{Lemma: equaciones} we obtain that \begin{equation}\label{LambdaCoef} \lambda_{\mathcal{R}\times H}(g;r,h)= \omega((g r)^{-1},g^{-1}; \prescript{r}{}{h}) \omega(\kappa(g,r),(gr)^{-1}; \prescript{gr}{}{h}) \epsilon(\kappa(g,r),h), \end{equation}for all $g\in G, r\in \mathcal{R}, h\in H$. By \eqref{condicion regular}, we have that an element $t=((r_1,h_1),\ldots,(r_n,h_n))\in (\mathcal{R}\times H)^n_e$ is regular if and only if \begin{align}\label{regular lagrangian} \lambda_{(\mathcal{R}\times H)^n}(g;(r_1,h_1),\ldots,(r_n,h_n))=1,&& \text{for all } g\in \bigcap_{i=1}^{n} r_iC_H(h_i)r_i^{-1}, \end{align} where $\lambda_{(\mathcal{R}\times H)^n}$ was defined in \eqref{definition of lambda x n} in terms of $\lambda_{\mathcal{R}\times H}$ and $\omega$. \subsection{Applications and examples} In this last section we present some applications of the results of the previous sections. \subsubsection{Central Subgroups} \begin{prop} Let $G$ be a finite group and $L(H,\gamma)$ a Lagrangian algebra in $\mathcal{Z}(\operatorname{Vec}_G)$, where $H\subset G$ is a central subgroup. Then \[\dim\Big (\operatorname{Hom}_{\mathcal{Z}(\operatorname{Vec}_G)}(\mathbb{C},L(H,\gamma)^{\otimes n})\Big ) =|G|^{n-1}.\] Moreover, the representation of $\mathcal{B}_n$ is actually a representation of $S_n$. \end{prop} \begin{proof} Since $H$ is a central subgroup, $g\triangleright (r,h) = (g\triangleright r, h)$ and \begin{equation*} |\chi_{r_1,h_1} \otimes \dots \otimes \chi_{r_n,h_n}| = h_1 \cdots h_n, \end{equation*} for any $ r_1,\dots,r_n \in \mathcal{R}$ and $h_1, \dots, h_n \in H$.
Hence the number of $G$-orbits is \begin{align*} |(\mathcal{R}\times H)^n_e/G|&=|\mathcal{R}^n/G|\, |H|^{n-1}\\ &= [G:H]^{n-1}|H|^{n-1}\\ &= |G|^{n-1}. \end{align*} To determine which orbits are regular, notice that $\epsilon: H\times H\to \mathbb{C}^\times$ is a bicharacter such that $\epsilon(h_1,h_2)\epsilon(h_2,h_1)=1$. Then by equation \eqref{regular lagrangian} an element \[\Big( (r_1,h_1),\ldots , (r_n,h_n)\Big ) \in (\mathcal{R}\times H)_e^n\] is regular if and only if \begin{align*} \prod_{i=1}^n\epsilon(h,h_i)=1, && \text{for all } h\in H. \end{align*} But $\prod_{i=1}^n \epsilon(h,h_i) = \epsilon(h,h_1\cdots h_n) = \epsilon(h, e) = 1.$ Hence every element is regular. By Corollary \ref{cor: Lagrangian are monomial } the dimension of $\operatorname{Hom}_{\mathcal{Z}(\operatorname{Vec}_G)}(\mathbb{C},L(H,\gamma)^{\otimes n})$ is $|G|^{n-1}$. Finally, using equation \eqref{formula generador grupo de trenza}, we see that \begin{align*} \sigma'_i\circ\sigma'_i (\chi_{r_1,h_1}\otimes\cdots \otimes \chi_{r_n,h_n})&= \epsilon(h_i,h_{i+1})\epsilon(h_{i+1},h_{i}) (\chi_{r_1,h_1}\otimes\cdots \otimes \chi_{r_n,h_n})\\ &= \chi_{r_1,h_1}\otimes\cdots \otimes \chi_{r_n,h_n}. \end{align*} Hence the representation of $\mathcal{B}_n$ factors through a representation of $S_n$. \end{proof} \subsubsection{Lagrangian algebras of the form $L(H,1)$} The Lagrangian algebras $L(H,1)$, as objects in $\mathcal{Z}(\operatorname{Vec}_G)$, are completely determined by the crossed $G$-set $G\times_H H$, and the monomial representation on $\operatorname{Hom}(\mathbb{C}, L(H,1)^{\otimes n})$ is a permutation representation, see Example \ref{ejemplo permutation crossed g-sets}. Let us consider two extreme cases. \subsubsection*{Case $H=\{e\}$} In this case the crossed $G$-set is $G$ with the regular action and grading the constant map $e$. It is clear that the braiding $c_{G,G}$ is just the flip map \[(g_1,g_2)\mapsto (g_2,g_1),\]hence the braid group action factors through an action of the symmetric group $S_n$ on $G^n$.
The set of $G$-orbits is in bijection with $G^{n-1}$, \begin{align*} \mathcal{O}(G^n)&\to G^{n-1}\\ \mathcal{O}_G(g_1,g_2,\ldots, g_n)&\mapsto (g_1^{-1}g_2,\ldots, g_1^{-1}g_n). \end{align*} Using the previous map, the action of $S_n$ is given by \[\sigma_1\Big( g_1,\ldots, g_{n-1}\Big)= \Big ( g_1^{-1},g^{-1}_1g_2,\ldots, g_1^{-1}g_{n-1}\Big)\] and \[\sigma_i(g_1,\ldots, g_{i-1},g_{i},\ldots,g_{n-1})=(g_1,\ldots,g_{i},g_{i-1},\ldots,g_{n-1}), \quad 1<i<n.\] It is clear that this permutation action of $S_n$ on $G^{n-1}$ is faithful, thus the image is isomorphic to $S_n$. \subsubsection*{Case $H=G$} In this case the crossed $G$-set is $G$ with the action by conjugation and grading map the identity map. Hence, the braiding is given by \[c_{G,G}:(x,y)\mapsto (y,y^{-1}xy).\] Note that $c_{G,G}$ is symmetric if and only if $G$ is abelian. If $G$ is abelian, $G^n_e=\{(g_1,\ldots,g_{n-1},(g_1\cdots g_{n-1})^{-1})\}$ is the set of orbits and, as in the previous example, the group $S_n$ acts faithfully. \subsubsection{Dihedral group} Whenever $H$ is a normal subgroup of $G$, the following proposition provides a way to simplify the situation. \begin{prop}\label{decoupledBasis} Let $G$ be a finite group, $H\trianglelefteq G$, and $\mathcal{R}$ a collection of representatives for $G/H$. Define $B(H,\gamma)\in\mathcal{Z}(\operatorname{Vec}_G)$ as \begin{equation*} B(H,\gamma) := \operatorname{span} \{ b_{r,h} |\, r\in \mathcal{R}, h\in H \} \end{equation*} with grading $|b_{r,h}|=h$ and the $G$-action \begin{equation}\label{actionOnTheBs} g\triangleright b_{r,h} = \epsilon(\kappa(g,r),\prescript{r^{-1}}{}{h}) b_{g\triangleright r, \prescript{g}{}{h}}. \end{equation} Then, the mapping \begin{align*} B(H,\gamma) &\rightarrow L(H,\gamma) \\ b_{r,h} &\mapsto \chi_{r, \prescript{r^{-1}}{}{h}} \end{align*} is an isomorphism in $\mathcal{Z}(\operatorname{Vec}_G)$. \end{prop} \begin{proof} We need to show that the map preserves the grading and the $G$-representation.
We have \begin{equation*} |\chi_{r,\prescript{r^{-1}}{}{h}}|=\prescript{r}{}{(\prescript{r^{-1}}{}{h})} = h = |b_{r,h}|. \end{equation*} Now, since \begin{equation*} g\triangleright \chi_{r,\prescript{r^{-1}}{}{h}} = \epsilon(\kappa(g,r),\prescript{r^{-1}}{}{h})\chi_{g\triangleright r, \prescript{\kappa(g,r)}{}{(\prescript{r^{-1}}{}{h})} } , \end{equation*} and \begin{equation*} \prescript{\kappa(g,r)}{}{(\prescript{r^{-1}}{}{h})} = \prescript{(g\triangleright r)^{-1}}{}{(ghg^{-1})}, \end{equation*} we have that \begin{equation*} g\triangleright \chi_{r,\prescript{r^{-1}}{}{h}} = \epsilon(\kappa(g,r),\prescript{r^{-1}}{}{h})\, \chi_{g\triangleright r, \prescript{(g\triangleright r)^{-1}}{}{(\prescript{g}{}{h})}}. \end{equation*} Hence by \eqref{actionOnTheBs} the map is equivariant. \end{proof} Proposition \ref{decoupledBasis} works particularly well when $\gamma =1$, since equation \eqref{actionOnTheBs} is just \begin{equation*} g\triangleright b_{r,h} = b_{g\triangleright r, \prescript{g}{}{h}}. \end{equation*} Thus, the action of $G$ is ``decoupled''. We use this idea in the following example. Let $G=D_{2k}$ be the dihedral group of order $2k$ and $H=\langle r \rangle$. We take $\mathcal{R}=\{e,s\} = \{s^i\}_{i\in \mathbb{Z}/2\mathbb{Z}}$. Then \begin{equation*} |b_{s^{i_1},r^{j_1}}\otimes \cdots \otimes b_{s^{i_n},r^{j_n}}| = r^{\sum_{m=1}^n j_m}, \end{equation*} and \begin{equation*} \dim( B(H,\gamma)^{\otimes n}_e) = 2^n \times k^{n-1}. \end{equation*} Since \begin{equation*} (s^i r^j )(s^k) = s^{i+k} r^{(-1)^k j}, \end{equation*} we have that \begin{align*} (s^i r^j) \triangleright s^k = s^{i+k},&& \text{ and }& &\kappa(s^ir^j,s^k) = r^{(-1)^kj}. \end{align*} Hence, the action on the set labels is \begin{equation*} s^i r^j \triangleright (s^k,r^l) = (s^{i+k},r^l). \end{equation*} It follows that the number of orbits in $(\mathcal{R}\times H)^n_e$ is \begin{equation*} 2^{n-1}\times k^{n-1} = |G|^{n-1}.
\end{equation*} Since $\gamma = 1$, all orbits are regular, and therefore $\dim(\operatorname{Hom}_{\mathcal{Z}(\operatorname{Vec}_G)}(\mathbb{C},L(H,1)^{\otimes n}))=|G|^{n-1}$.
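The orbit count in the dihedral example can be checked by brute force. The sketch below is an illustrative computation only (group elements are encoded as exponent pairs rather than taken from any computer algebra library): it enumerates the labels $(s^{i_m},r^{j_m})$ whose total grading $r^{\sum_m (-1)^{i_m} j_m}$ is trivial, and counts the orbits under the simultaneous shift of the $s$-exponents, recovering $|G|^{n-1}=(2k)^{n-1}$.

```python
from itertools import product

def orbit_count(k, n):
    """Count G-orbits on (R x H)^n_e for G = D_{2k}, H = <r>, R = {e, s}.

    A label is a pair (i, j) encoding (s^i, r^j), i in Z/2 and j in Z/k.
    The grading of (s^i, r^j) is r^{(-1)^i j}, so a tuple has trivial
    total grading when sum((-1)^i_m * j_m) = 0 mod k.  The G-action on
    labels only shifts every s-exponent by the same amount (mod 2).
    """
    elements = set()
    for labels in product(product(range(2), range(k)), repeat=n):
        if sum((-1) ** i * j for i, j in labels) % k == 0:
            elements.add(labels)
    orbits = set()
    for labels in elements:
        # normalize under the simultaneous Z/2 shift of the s-exponents
        shifted = tuple(((i + 1) % 2, j) for i, j in labels)
        orbits.add(min(labels, shifted))
    return len(orbits)

# agreement with |G|^{n-1} = (2k)^{n-1}
for k, n in [(2, 2), (3, 2), (3, 3), (4, 2)]:
    assert orbit_count(k, n) == (2 * k) ** (n - 1)
```

For instance, $k=3$, $n=3$ gives $36=6^2$ orbits, in agreement with the formula.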
\chapter{Introduction} \label{ch: introduction} \pagenumbering{arabic} The interstellar medium (ISM), the medium between the stars in our Galaxy and in other spiral or dwarf galaxies, is composed mainly of gas, dust and charged particles. The ISM plays an important role in the evolution of spiral galaxies \citep{2011piim.book.....D}. On the one hand it provides the seed material for star formation; on the other, when stars die their material changes the composition of the ISM, driving its evolution \citep{2004Ap&SS.292..193B}. Early observations of absorption lines in stellar spectra, like those of \citet{1974ApJ...193L.121J}, suggested the existence of gas in between the stars; this gas came to be known as the ISM. In-depth studies of the medium followed and branched into techniques beyond optical astronomy: X-ray observations that probe the hot ionized medium, observations of molecular lines at radio and millimetre wavelengths, and low radio frequency observations of neutral hydrogen are some examples. It is now understood that the ISM is not a smooth passive medium; rather, it has intricate structures and interesting dynamics. Theoretical understanding of these structures and dynamics points to a turbulent medium that influences star formation in the ISM \citep{2004Ap&SS.292..193B,2004RvMP...76..125M} and its chemical and compositional evolution. The systematic study of the ISM arguably began in 1951, when \citet{1951ApJ...114..165V} proposed a theory of the ISM in which the differential galactic rotation stirs the entire medium at the largest scales. This results in supersonic turbulence; the energy cascades down to small scales and is dissipated by viscosity. Detailed analytical and numerical studies have addressed the generation of ISM turbulence and its effects at various length scales. It is well understood that, though broadly we call it ISM turbulence, the physics of the process is different and leads to different effects at different length scales in the ISM \citep{2004Ap&SS.289..479D}.
A detailed review of the progress in the study of ISM turbulence and its observation can be found in \citet{2004ARA&A..42..211E}. Turbulence generates scale-invariant fluctuations in the density and velocity of the gas in the ISM \citep{2009ApJ...692..364F}. Since these fluctuations are intermittent and random, the information about the underlying physical process lies in their statistical nature. Observationally, the statistical properties of the specific intensity of radiation give us information about the dynamical variables of the process. In this thesis we address how we can probe ISM turbulence at the largest scales, that is, at scales comparable to the extent of the ISM in the galaxy. Here we discuss the background of the observational procedure and give a brief description of the ISM of external spiral galaxies. \section{Statistical probes of ISM turbulence} Turbulence being a stochastic process, its observational signatures appear through various scale-dependent statistical estimators of the density and velocity fluctuations of the ISM. Kolmogorov in 1941 formulated a theory of turbulence in an incompressible fluid. He assumed that the system is stirred at the largest scale by some external force and that the energy then percolates to smaller and smaller scales till it reaches the scale of dissipation. In between the driving scale and the dissipation scale lies the inertial range, where the rate of energy input at a scale equals the rate of energy that goes out from that scale. This results in a system with no preferred scales in between the driving and dissipation scales\footnote{See \citet{1995tlan.book.....F} for a detailed description of Kolmogorov theory.}. Later, when the nonlinear dynamics of compressible systems was studied, it was realized that, along with the velocity fluctuations, the density fluctuations of the compressible fluid, that is the gas, also assume a scale-free nature in the inertial range.
Hence, to observationally probe turbulence and determine the inertial range, the driving and dissipation scales and the corresponding mechanisms, it is useful to look at the statistics of the density and velocity fluctuations at different length scales \citep{1972fct..book.....T}. Here are a few statistical descriptors of the same. The basic statistical tools are the mean, median, skewness and kurtosis, which are all one-point statistics of the data. To probe turbulence, it is rather important to look at the two-point and higher order statistics. The structure function of order $P$ for a stochastic observable $A$ is defined as \begin{equation} S_P(|\delta\vec{r}|) = \left< |A(\vec{r})-A(\vec{r}+\delta\vec{r}) |^P \right>, \end{equation} and theoretically, if evaluated to all orders, it contains all the information about the stochastic field. The statistical nature of Gaussian random fluctuations can be completely specified by the mean and either the auto-correlation function or the power spectrum. The auto-correlation function $\xi_A(|\delta\vec{r}|)$ of any homogeneous and isotropic scalar field $A$ is defined as \begin{equation} \xi_A(|\delta\vec{r}|) = \left< A(\vec{r})A(\vec{r}+\delta\vec{r}) \right>. \end{equation} The power spectrum is the Fourier transform of the auto-correlation function, \begin{equation} P_A(k) = \int d(\delta\vec{r})\, e^{i\vec{k}.\delta\vec{r}}\, \xi_A(|\delta\vec{r}|), \end{equation} where $k=|\vec{k}|$. The angular brackets in the above expressions stand for an ensemble average. Since astronomical observations are always limited to a single realization of the sky, assumptions like statistical homogeneity and statistical isotropy are invoked to perform the above averaging. \section{21 cm radiation from external spiral galaxies} Spectral lines provide the window to measure the ISM structures and dynamics.
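Before moving on to the spectral-line observables, the statistical estimators of the previous section can be illustrated numerically. The following sketch is synthetic (NumPy only; the grid size and spectral slope are arbitrary choices, not from any observational pipeline): it estimates the power spectrum of a statistically isotropic random field by replacing the ensemble average with an average over annuli of constant $|\vec{k}|$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D Gaussian random field with a power-law spectrum P(k) ~ k^-3
n = 256
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.sqrt(kx**2 + ky**2)
amp = np.zeros_like(k)
amp[k > 0] = k[k > 0] ** -1.5          # |A_k| ~ sqrt(P(k)) ~ k^{-3/2}
phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
field = np.fft.ifft2(amp * np.exp(1j * phase)).real

# Estimate P(k): average |FT|^2 over annuli of constant |k| (isotropy)
power = np.abs(np.fft.fft2(field)) ** 2 / n**2
kbins = np.linspace(0.0, 0.5, 33)
which = np.digitize(k.ravel(), kbins)
pk = np.array([power.ravel()[which == i].mean() for i in range(1, len(kbins))])

# The recovered spectrum falls steeply with k, as put in by hand
assert pk[1] > pk[10] > pk[30]
```

The annular averaging is precisely where the assumption of statistical isotropy enters: all Fourier modes with the same $|\vec{k}|$ are treated as realizations of the same random amplitude.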
The spectral signatures of the elements not only tell us the abundance and density fluctuations of the elements in the ISM, but the width and the Doppler shift of a line also provide information about its dynamics. The key is to choose an element with an easily observable spectral signature. Examples of relatively strong spectral lines from the ISM are those of Ca$^+$, Na, CO etc. \citep{1970ApJ...161L..43W, 1974ARA&A..12..279Z, 1974ApJ...189..441G}. However, these elements constitute a very small part of the ISM and are distributed unevenly over the galactic disk. Since 70\% of the ISM is neutral atomic hydrogen ({\rm HI}), observing it traces the ISM more completely. The hyperfine structure transition line of {\rm HI} (frequency 1420 MHz), otherwise known as the 21-cm line, provides an excellent probe of ISM structure and dynamics. The specific intensity of the 21-cm line emission is given by \citep{2011piim.book.....D} \begin{equation} \label{eq:I_theta} I(\vec{\theta},\nu) = I_0 \int dz \hspace{2.5pt} n_{HI}(\vec{r}) \hspace{2.5pt} \phi(\nu), \end{equation} where $z$ is the line of sight direction and $\vec{r} = (x,y,z) = (\vec{R},z)$. Here $\vec{R}$ is in the plane of the sky. The angular separation of a direction in the field of view with respect to the field center is $\vec{\theta}=\frac{\vec{R}}{D}$, where $D$ is the distance between the observer and the galaxy. Note that the galaxy subtends a very small angle in the sky, hence we can assume the sky to be flat. $I_{0}=\frac{3}{16\pi}h\nu_0 A_{21}$, where $A_{21}$ is the Einstein coefficient for the 21-cm transition, $n_{HI}(\vec{r})$ is the number density of hydrogen atoms and $\phi(\nu)$ is the line shape function. Since the velocity of the {\rm HI} gas emitting 21-cm radiation can be different along different directions in the sky, the observed frequency of the emission is Doppler shifted.
Hence, the Doppler shift can be used to write equation \eqref{eq:I_theta} in terms of the line of sight velocity of the gas as \begin{equation} \label{eq:I_theta_v} I(\vec{\theta},v) = I_0 \int dz \hspace{2.5pt} n_{HI}(\vec{r}) \hspace{2.5pt} \phi(v). \end{equation} The line shape function $\phi(v)$ is defined as \begin{equation} \phi(v)= \phi_0 \hspace{2.5pt}\exp{\left[- \frac{[v - v_z(\vec{r})]^2}{2\sigma^2} \right]}, \end{equation} where $v_z(\vec{r})$ is the line of sight component of the velocity of the {\rm HI} gas and $\sigma = \sqrt{\frac{k_B T}{m_{HI}}}$ is the thermal velocity dispersion. The line shape function is normalized as \begin{equation} \label{eq:phi} \int dv \hspace{2.5pt} \phi(v) = 1. \end{equation} Moments of the specific intensity provide information about the density and velocity structures of the gas \citep{2009PhT....62e..56B}. Here we describe only the first two moments. The zeroth moment of the intensity is defined as \begin{equation} M_0(\vec{\theta})= \int dv \hspace{2.5pt} I(\vec{\theta},v), \end{equation} where the velocity integral is over the entire spectral range of the {\rm HI} emission. The column density of the {\rm HI} gas along the line of sight direction is given by $N_{HI}(\vec{\theta}) = \int dz \hspace{2.5pt} n_{HI}(\vec{r})$. Clearly, the zeroth moment gives the column density of the {\rm HI} gas, \begin{equation} M_0(\vec{\theta}) = I_0 N_{HI}( \vec{\theta}). \end{equation} The first moment of $I(\vec{\theta},v)$ is defined as \begin{equation} M_1(\vec{\theta}) = \frac{ \int dv \hspace{2.5pt} v \hspace{2.5pt} I(\vec{\theta},v) }{ M_0(\vec{\theta})}. \end{equation} $M_1(\vec{\theta})$ essentially gives the density weighted line of sight component of the velocity of the {\rm HI} gas. This indicates that using the observed specific intensity of the 21-cm radiation it is possible to estimate the statistical properties of the density and velocity of the {\rm HI}. In this work we would like to probe the ISM of external spiral galaxies.
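The two moments can be illustrated with a toy position-position-velocity cube (a sketch only: the Gaussian profiles, velocity range and random column densities below are arbitrary assumptions, and the velocity integrals are replaced by discrete channel sums):

```python
import numpy as np

# Toy cube I(theta_x, theta_y, v) with a Gaussian line profile per pixel,
# mimicking the line shape function phi(v) above.
nx, ny, nv = 8, 8, 64
v = np.linspace(-100.0, 100.0, nv)                 # velocity channels, km/s
dv = v[1] - v[0]
rng = np.random.default_rng(1)
column = rng.uniform(1.0, 5.0, (nx, ny))           # plays the role of I0*N_HI
v_los = rng.uniform(-40.0, 40.0, (nx, ny))         # line of sight velocity v_z
sigma = 10.0                                       # thermal dispersion, km/s
phi = np.exp(-((v - v_los[..., None]) ** 2) / (2 * sigma**2))
phi /= phi.sum(axis=-1, keepdims=True) * dv        # normalize: sum(phi) dv = 1
cube = column[..., None] * phi

# Moment 0: column density map; Moment 1: intensity-weighted velocity map
m0 = (cube * dv).sum(axis=-1)
m1 = (cube * v * dv).sum(axis=-1) / m0

assert np.allclose(m0, column)                     # M0 recovers I0 * N_HI
assert np.allclose(m1, v_los, atol=0.5)            # M1 recovers v_z
```

Up to channel discretization, the moment-zero map recovers the column density and the moment-one map recovers the line of sight velocity field.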
The ISM in spiral galaxies has the morphology of a disk, with the number density profile roughly exponential along the radial direction, whereas it roughly follows a Gaussian function along the direction perpendicular to the disk. The characteristic scale along the radial direction is called the scale length and that in the vertical direction is termed the scale height. The ratio of the scale height to the scale length in a typical spiral galaxy is about $1:10$ or even smaller, suggesting that the disk is thin \citep{2004MNRAS.352..768K}. Apart from this smooth variation, the number density of \rm HI in the disk has statistical fluctuations at scales ranging from sub-parsec to 10 kpc \citep{1983A&A...122..282C, 2013NewA...19...89D}. The major dynamical feature of the disk is its differential rotation, with the rotation axis roughly perpendicular to the disk. Added to this systematic rotation, there is random motion of the ISM owing to the turbulent dynamics. For a given galaxy, the axis of systematic rotation can be oriented in a random direction in the sky. This orientation of the disk is usually quantified by two angles, the inclination angle and the position angle. Most galaxy disks, however, have warps, that is, the position and inclination angles change with the distance from the galactic centre. The tangential rotation velocity added to the random velocity from turbulence, together with the inclination and position angles, gives rise to a particular pattern in the line of sight velocity. This is what can be estimated from the moment-1 map of the specific intensity. \section{Radio Interferometric observation of Neutral Hydrogen} In the earlier days, most of the understanding of the ISM was obtained mainly from the Milky Way using single dish radio telescopes (see for example \citet{1966AuJPA...1....3M}). Single dish radio telescopes have poor angular resolution, which is inadequate for studying the large scale structures of external galaxies.
Radio interferometers, on the other hand, are composed of many array elements, called antennas or tiles, which together effectively act as a single dish radio telescope with high resolution. Hence, measurement of the \rm HI intensity over a large range of length scales requires radio interferometric arrays \citep{1983A&A...122..282C}. The VLA (Very Large Array), GMRT (Giant Metrewave Radio Telescope) and WSRT (Westerbork Synthesis Radio Telescope) are some of the important radio interferometers in the world. In an interferometer, every pair of antennas measures a quantity called the visibility $\mathcal{V}(\vec{U},\nu)$, which can be approximated as the Fourier transform of the specific intensity $I(\vec{\theta},\nu)$. Visibilities are measured discretely at the baseline vectors $\vec{U}$, given by the instantaneous projected separation of the antenna pairs in the plane of the sky in units of the observing wavelength. Hence, the interferometer effectively samples the Fourier transform of $I(\vec{\theta},\nu)$, i.e., $\tilde{I}(\vec{U},\nu)$, at a set of discrete points in $\vec{U}$ determined by the array configuration of the antennas, the declination of the source and the integration time of the observation. The measured visibilities can be written as \begin{equation} \mathcal{V}(\vec{U},\nu)=\tilde{I}(\vec{U},\nu)S(\vec{U}) \end{equation} where $S(\vec{U})$ is the sampling function. If the total number of samples in the baseline plane is $N_{b}$, the sampling function is \begin{equation} S(\vec{U})=\sum^{N_{b}}_{i=1}\delta(\vec{U}-\vec{U_{i}}). \end{equation} The inverse Fourier transform of the measured visibility is called the dirty image: \begin{equation} I_{D}(\vec{\theta},\nu)=I(\vec{\theta},\nu) \otimes B(\vec{\theta}). \label{eq:psf} \end{equation} Here $B(\vec{\theta})$ is the inverse Fourier transform of the sampling function and is essentially the Point Spread Function (or beam) of the interferometer.
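The relation between the sampling function, the dirty image and the beam in eqn.~\ref{eq:psf} can be verified numerically on a grid. Below is a toy sketch: a random sky with a random 20\% baseline coverage, neither of which corresponds to a real array.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
sky = rng.normal(size=(n, n))            # toy sky brightness I(theta)

# Toy sampling function: keep ~20% of the uv grid points
S = (rng.random((n, n)) < 0.2).astype(float)

vis = np.fft.fft2(sky) * S               # sampled visibilities V = I~ S
dirty = np.fft.ifft2(vis).real           # dirty image I_D
psf = np.fft.ifft2(S).real               # interferometer beam B(theta)

# Convolution theorem check: dirty = sky convolved (circularly) with psf
conv = np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(psf)).real
assert np.allclose(dirty, conv, atol=1e-8)
```

The final assertion is exactly eqn.~\ref{eq:psf} on a discrete grid: multiplying by $S$ in the Fourier plane is the same as convolving the sky with the beam in the image plane.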
Since the sampling function can be quite sparse, the interferometer beam is non-trivial and the dirty image cannot be used as an estimate of the sky image. It is necessary to deconvolve the interferometer beam from the dirty image. Strictly speaking, in most cases where the sampling in the baseline plane is sparse, especially with extended emission, the deconvolution does not give unique results. Nevertheless, different deconvolution schemes are used, CLEAN and MEM to name a few. Chapter 2 describes the CLEAN algorithm \citep{2008AJ....136.2897R}, which we use in our work. Finally, we note here that the observed frequency shift can be directly translated to the velocity of the gas emitting the \rm HI radiation, as discussed in the previous section. \section{Aim of this thesis} The column density power spectrum of the Milky Way was found to obey a power law at length scales ranging from sub-parsec to $200$ pc, suggesting the existence of turbulence therein. It was argued that this turbulence is mostly generated as a result of supernova shocks stirring the ISM. Recent estimations of the \rm HI column density power spectrum of external dwarf and spiral galaxies by \citet{2006MNRAS.372L..33B} and \citet{2008MNRAS.384L..34D,2009MNRAS.398..887D,2013NewA...19...89D} found that they obey power laws over scales ranging from 1 kpc to 10 kpc. It is unlikely that these large scale fluctuations are a result of turbulence driven by supernovae. Numerical studies by \citet{2009ApJ...692..364F} demonstrated the distinct natures of solenoidal and compressive forcing in compressible fluid turbulence. An observational probe of the velocity fluctuations would clearly tell us more about the character of the energy input and the generating mechanism of these large scale structures. We plan to investigate along this line here.
There are several quantifiers of the scale dependence of the velocity fluctuations of the ISM of our galaxy, like Velocity Channel Analysis (VCA)\footnote{VCA: \citet{2000ApJ...537..720L}}, Velocity Coordinate Spectrum (VCS)\footnote{VCS: \citet{2008arXiv0811.0845C}}, statistics of velocity centroids\footnote{\citet{2007MNRAS.381.1733E}} etc. Among these, VCA is the most efficient technique \citep{2001ApJ...551L..53S, 2015ApJ...810...33C}, and has been used extensively to probe the small scale velocity structures of our galaxy. Recently, \citet{2016MNRAS.456L.117D} demonstrated that VCA has limitations when applied to external galaxies. There are two distinct classes of statistical estimators. Some are based on the directly measured quantity from the interferometers, the visibilities; others rely on the reconstruction of the sky brightness distribution from the interferometric data. It has been shown that in some cases these two different techniques result in conflicting estimates of statistical quantities like the power spectrum. Though visibility based estimators are more direct, the image based estimators can estimate the power spectrum of parts of the field of view of the telescope. The latter is essential in some particular cases, for example to correlate star formation with ISM turbulence, or to study the variation of MHD turbulence between the arm and inter-arm regions of spiral galaxies. In fact, the ISM velocity structure estimators that we have outlined above all rely on the reconstructed image. In this work we first quantify the efficacy of the image and visibility based estimators of the power spectrum using numerically simulated observations. We use a model \rm HI observation of an external face-on spiral galaxy for this purpose. We investigate the reason for the possible deviation from the true power spectrum. Our investigation suggests that the visibility based estimators are unbiased and hence preferable.
Next we implement a visibility based estimator for the velocity power spectrum of spiral galaxies and draw scientific conclusions from the measurements. The remaining part of the report is organized as follows. Chapter 2 discusses the efficacy of the two power spectrum estimators. The velocity power spectrum estimator we have used is discussed in Chapter 3, where we also present our results. At the end, we conclude the report in Chapter 4 with our findings and their impact on ISM physics. \chapter{On the merit of the power spectrum estimators} \label{ch:introduction} The power spectrum quantifies the scale dependence of random fluctuations. In this chapter we discuss the image and visibility based power spectrum estimators and check their efficacy using numerically simulated observations. We also comment on the reason for the possible deviation. \section{Power spectrum estimation} The moment zero map of the specific intensity, $M_0(\vec{\theta})$, gives the column density of the \rm HI gas. The power spectrum of the column density is defined through\footnote{We denote the two dimensional Dirac delta function by $\delta$ with a suffix D. Tilde '~' denotes the corresponding Fourier transform.} \begin{equation} \left \langle \tilde{M}_{0}(\vec{U})^{*}\tilde{M}_{0}(\vec{U}') \right \rangle = P_{HI}(\vec{U})\, \delta_{D}(\vec{U}-\vec{U}'). \end{equation} The angular brackets here refer to an ensemble average and have to be taken over many realisations of the sky. In practice, the fluctuations are often statistically isotropic, and the ensemble average can be replaced with an azimuthal average. In the following, we discuss two different estimators that have been used in the literature to probe the power spectrum of \rm HI intensity fluctuations from radio interferometric observations. \subsection{Visibility based power spectrum estimator} The visibility based power spectrum estimator was introduced by \citet{2001JApA...22..293B}.
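Replacing the ensemble average by an azimuthal average over annuli of constant $|\vec{U}|$, as described above, is simple to realise on a gridded map. The following is a minimal sketch of such a binned estimator; the grid units and the bin count are arbitrary choices.

```python
import numpy as np

def image_power_spectrum(img, nbins=20):
    """Azimuthally averaged power spectrum P(U) of a 2-D map:
    the ensemble average is replaced by an average over annuli
    in the Fourier (baseline) plane, valid for isotropic fluctuations."""
    n = img.shape[0]
    ft = np.fft.fftshift(np.fft.fft2(img))
    p2d = np.abs(ft) ** 2                         # |M0~(U)|^2 on the grid
    u = np.fft.fftshift(np.fft.fftfreq(n))
    ux, uy = np.meshgrid(u, u, indexing="ij")
    r = np.hypot(ux, uy)                          # |U| for each grid point
    edges = np.linspace(0, r.max(), nbins + 1)
    which = np.digitize(r.ravel(), edges) - 1
    ps = np.array([p2d.ravel()[which == b].mean()
                   if np.any(which == b) else np.nan
                   for b in range(nbins)])
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, ps
```

For a map with power-law fluctuations, plotting `ps` against `centers` on a log-log scale recovers the slope over the well-sampled range of annuli.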
It has been widely used to find the power spectrum of \rm HI intensity fluctuations of our Galaxy and of external spiral and dwarf galaxies, and has also been proposed as a major tool for detecting the \rm HI 21-cm signal from the epoch of reionization. Here we give a brief overview; for more technical details, readers may refer to texts like \citet{2011arXiv1102.4419D}. We start with investigating the quantity $P_{V}(\vec{U})$ defined through \begin{equation} \left \langle V^{*}(\vec{U})V(\vec{U}') \right \rangle = P_{V}(\vec{U})\, \delta_{D}(\vec{U}-\vec{U}'), \end{equation} and use the expression for the visibility in eqn~(2) to write the above as \begin{equation} P_{V}(\vec{U})\, \delta_{D}(\vec{U}-\vec{U}') = \left \langle \tilde{M_{0}}(\vec{U})^{*}\tilde{M_{0}}(\vec{U}') \right \rangle \left \langle S^{*}(\vec{U}) S(\vec{U'})\right \rangle. \end{equation} Here we have used the fact that the fluctuations in the sky and the sampling function are uncorrelated. The measured visibilities directly give $P_{V}$. For a particular observation the sampling function is well known, and the second term can be estimated completely. The first angular bracket is essentially the \rm HI power spectrum defined in eqn.~(5). Since the effect of the sampling function here is just a multiplicative factor, the \rm HI power spectrum can be directly estimated from the visibilities. This estimator is usually referred to as the visibility correlation estimator in the literature. In practice, the measured visibilities are also accompanied by measurement noise from the interferometer. However, such noise is independent of baseline and can be trivially removed from the power spectrum estimates \citep{2008MNRAS.384L..34D}. The visibility based power spectrum estimator works with the directly observed quantity, the visibility, and hence is a more direct probe of the power spectrum from the data. One does not need to go through a complex deconvolution procedure here.
On the other hand, as the power spectrum is calculated in visibility space, it is not straightforward to select a part of the field of view of the interferometer and selectively estimate the power spectrum of that part. This is a major shortcoming of this estimator. \subsection{Image based power spectrum estimator} Image based power spectrum estimators are also used in the literature, where the sky image estimated through deconvolution of the interferometer beam from the dirty image is used. Different deconvolution algorithms are used to reconstruct the sky brightness distribution, among which CLEAN is the most widely used. Here we briefly describe the CLEAN algorithm. In CLEAN, the sky image is assumed to be a collection of point sources. It identifies the brightest point sources in the sky from the brightest pixels in the dirty image and (partially) removes the effect of these sources from the visibilities. It then remakes the dirty image and proceeds in a similar manner. Thus it uses a simple iterative procedure to find the positions and strengths of (all) the point sources. The final deconvolved CLEAN image is these point sources convolved with a synthesized beam, taken to be the best fit Gaussian to the main lobe of the interferometer beam. The residual flux, left after modeling all the point sources in the dirty image, is added to the above reconstruction. We shall refer to the estimate of the sky image using CLEAN by the quantity $M_{C}(\vec{\theta})$ and call it the CLEAN image. The image based power spectrum is defined through \begin{equation} \left\langle \tilde{M}_{C}(\vec{U})\tilde{M}_{C}^*(\vec{U}') \right\rangle = P_{HI}^{(I)}(\vec{U})\,\delta_{D}(\vec{U}-\vec{U}'), \end{equation} where $\tilde{M}_{C}$ is the Fourier transform of the CLEAN image. As the power spectrum is calculated from the image here, it is possible to select a particular region in the field of view and estimate the power spectrum of it.
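The iterative peak-find-and-subtract loop described above is the Högbom variant of CLEAN, which the sketch below illustrates. It is a toy version only: there are no major/minor cycles and no final Gaussian restoration, and the parameter names are ours.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=200, threshold=1e-3):
    """Minimal Hogbom CLEAN sketch: repeatedly locate the peak of the
    residual map, record a point-source component there, and subtract
    the shifted, scaled PSF. Returns the component map and residual.
    Assumes psf is twice the linear size of dirty and peak-normalised."""
    res = dirty.copy()
    comps = np.zeros_like(dirty)
    n = dirty.shape[0]
    c = psf.shape[0] // 2                         # psf centre pixel
    for _ in range(niter):
        iy, ix = np.unravel_index(np.argmax(np.abs(res)), res.shape)
        peak = res[iy, ix]
        if abs(peak) < threshold:
            break
        comps[iy, ix] += gain * peak
        # subtract the psf centred on (iy, ix), scaled by gain * peak
        res -= gain * peak * psf[c - iy:c - iy + n, c - ix:c - ix + n]
    return comps, res
```

For a single point source the loop converges geometrically: each iteration removes a fraction `gain` of the remaining flux, so the recorded components sum to the source flux once the residual falls below the threshold.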
A typical interferometer like the GMRT\footnote{GMRT: Giant Metrewave Radio Telescope}, VLA\footnote{VLA: Very Large Array}, WSRT\footnote{WSRT: Westerbork Synthesis Radio Telescope}, LOFAR\footnote{LOFAR: LOw Frequency ARray} etc. samples the visibility function only at specific points in the baseline plane. Any deconvolution procedure that tries to reduce the effect of the interferometer beam on the dirty image requires interpolation of the visibilities at the non-sampled baselines and hence introduces spurious correlations among visibilities at different baselines. This manifests as a correlated noise $N_{C}(\vec{U})$ in $\tilde{M}_{C}(\vec{U})$, where \begin{equation} \tilde{M}_{C}(\vec{U}) = \tilde{M}_{0}(\vec{U}) + N_{C}(\vec{U}). \end{equation} Hence, the image based estimator can be written as \begin{equation} P_{HI}^{(I)}(\vec{U}) = P_{HI}(\vec{U}) + P_{N_{C}}(\vec{U}), \end{equation} where $P_{N_{C}}(\vec{U})$ is the power spectrum of the correlated noise. Though we expect this noise bias to depend on the baseline coverage of the particular observation, an analytical estimation of it is not straightforward. Interestingly, such a bias is grossly ignored in the literature where image based estimators are used \citep{2012ApJ...754...29Z}. In this work, we perform a controlled test of the efficacy of the two estimators of the \rm HI column density power spectrum discussed above. We proceed as follows. We generate a sky model that has correlated column density fluctuations with a known power spectrum. We perform a simulated observation assuming a baseline distribution of the interferometer to get the observed visibilities. We use the IMAGR task in AIPS\footnote{NRAO-AIPS: Astronomical Image Processing System}, which uses the CLEAN algorithm, to deconvolve the interferometer beam from the dirty map and make an estimate of the sky image.
We estimate $P_{HI}^{(I)}(\vec{U})$ from this deconvolved image and also estimate $P_{HI}^{(V)}(\vec{U})$ directly using the simulated visibilities. We finally compare these power spectrum estimates with the power spectrum of the input model image. \section{Simulating H~{\sc i} observations from a spiral galaxy} \subsection{Sky Model} We model the moment zero map of the \rm HI intensity (image) in the following way \begin{equation} M_{0}(\vec{\theta})=W(\vec{\theta})\left[ \bar{M}_{0} + \delta M_{0}(\vec{\theta}) \right], \label{eq:mod} \end{equation} where $W(\vec{\theta})$ quantifies the large scale \rm HI distribution in the sky and is normalized as \begin{equation} \int W(\vec{\theta}) d\vec{\theta} = 1. \end{equation} In the case of observations where the \rm HI emission is spread over the entire field of view of the interferometer, the primary beam of the interferometer gives $W(\vec{\theta})$. For observations of external galaxies, where the galaxies are mostly localized to a small part of the field of view of the telescope, $W(\vec{\theta})$ quantifies the large scale distribution of the \rm HI column density in the galaxy. The quantity $\bar{M}_{0}$ is proportional to the total intensity coming from the entire field of view and can be written as \begin{equation} \bar{M}_{0} = \int M_{0}(\vec{\theta}) d\vec{\theta}. \end{equation} The component $\delta M_{0}(\vec{\theta})$ gives the fluctuations in the \rm HI column density. We assume it to have zero mean, i.e., $\langle\delta M_{0}(\vec{\theta}) \rangle = 0$. We shall discuss how we model the window function and the fluctuations shortly. \subsubsection{Modeling window function} The \rm HI profile of a spiral galaxy is dominated by a radial variation in the \rm HI column density. However, azimuthal variations, like spiral arms and rings, are also seen. We model the window function based on the large scale structure of the face-on spiral galaxy NGC~628.
As we want to keep the anisotropic large scale features of the \rm HI distribution in the window function, we use `shapelets' here to represent it. We decompose the moment zero map of NGC~628, taken from the THINGS survey\footnote{THINGS: The \rm HI Nearby Galaxy Survey \citet{2008AJ....136.2563W}} data products, in terms of its shapelet coefficients and use the first few shapelets to model the window function. In the interest of completeness of this report, we first give a brief description of `shapelets' here and then discuss the criteria we use to choose the parameters of the shapelet reconstruction. \begin{figure}[t!] \subfloat[]{\includegraphics[scale=.32]{./chap2/polar_real1.png}} \subfloat[]{\includegraphics[scale=.32]{./chap2/polar_img1.png}} \caption{Fig~(a) and Fig~(b) show the real and imaginary parts of the first few two dimensional polar shapelet basis functions respectively.} \label{fig:polar} \end{figure} Shapelets are defined as a set of localized basis functions with different shapes \citep{2003MNRAS.338...35R}; we use Gaussian weighted Hermite polynomials in polar coordinates here. These form a complete orthonormal basis for smooth, integrable functions, and hence any well behaved 2D function can be decomposed into shapelets. Polar Hermite polynomials can be written using the recursion formulae \begin{eqnarray} \frac{l-k}{x}H_{k,l}(x)&=&lH_{k,l-1}(x)-kH_{k-1,l}(x), \ \ \ \ for\ k\neq l \\ \nonumber H_{k,k}(x)&=&H_{k+1,k-1}(x)-x^{-1}H_{k,k-1}(x) \ \ \ \ for\ k = l. \end{eqnarray} Shapelets are defined as \begin{equation} S_{n_l,n_r}(r,\phi, \eta)= \sqrt{\pi n_l!n_r!}\ \eta^{-1}H_{n_l,n_r}(r/\eta)e^{-\frac{r^2}{2\eta^2}}e^{i(n_l-n_r)\phi}. \end{equation} Here $r$ is the radial and $\phi$ the angular coordinate, and $\eta$ corresponds to the scale of the shapelets. The first few orders of polar shapelets are shown in figure~\ref{fig:polar}.
Any well behaved function $f(r, \phi)$ can be decomposed in terms of its shapelet coefficients $f_{n,m}$ as \begin{equation} f(r,\phi)=\sum_{n=0}^{\infty}\sum_{m=-n}^{n}f_{n,m}S_{n,m}(r,\phi,\eta). \end{equation} In order to model the window function using the moment zero map of the galaxy NGC~628, we need to choose the scale of the shapelets, i.e., $\eta$. We do this in the following way. For a given value of $\eta$, we construct the zeroth order shapelet (essentially a Gaussian function) from the moment zero map of NGC~628 and estimate the mean square difference between the moment zero map and this basic shapelet. We choose the value of $\eta$ which corresponds to the lowest mean square difference as estimated above. Dutta et al. (2013) have used a visibility based power spectrum estimator to estimate the \rm HI intensity fluctuation power spectrum of the galaxy NGC~628 from THINGS observations. They found that at baselines $> 1\ k\lambda$ the power spectrum is well fitted by a power law. At smaller baselines, the large scale structure of the galaxy, i.e., the window function, dominates. We found that for reconstructions with shapelets of order higher than 12, the window function has a significant effect at baselines $> 1 \ k\lambda$. Hence, the maximum shapelet order that we use here is 12. Using these parameters and the normalization criterion given in eqn.~(12), we construct the model window function. Figure~\ref{fig:NGC628} shows the moment zero map of the galaxy NGC~628 (a) and the model window function (b) constructed based on it. \subsubsection{Modeling $\bar{M_{0}}$ and $\delta M_{0}(\vec{\theta})$} It is to be noted that the absolute amplitudes of the quantities $\bar{M_{0}}$ and $\delta M_{0}(\vec{\theta})$ are not of much interest apart from a scaling of the model image.
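The $\eta$-selection criterion above amounts to a one-parameter grid search. The following is a sketch under simplifying assumptions: a square map, a circular Gaussian for the zeroth-order shapelet, and a least-squares amplitude fit of our own choosing.

```python
import numpy as np

def best_shapelet_scale(img, etas):
    """Grid-search the shapelet scale eta: for each trial value, fit the
    amplitude of the zeroth-order shapelet (a circular Gaussian) to the
    map by least squares and keep the eta that gives the smallest
    mean-square residual, mimicking the selection criterion in the text."""
    n = img.shape[0]
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    r2 = x**2 + y**2
    best = (np.inf, None)
    for eta in etas:
        g = np.exp(-r2 / (2 * eta**2))          # zeroth-order shapelet shape
        a = (img * g).sum() / (g * g).sum()     # least-squares amplitude
        mse = ((img - a * g) ** 2).mean()       # mean-square difference
        if mse < best[0]:
            best = (mse, eta)
    return best[1]
```

Applied to a map that is itself a Gaussian of scale 5 pixels, the search returns 5 out of any candidate list that contains it.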
The relative amplitudes of $\bar{M_{0}}$ and the standard deviation of $\delta M_{0}(\vec{\theta})$, i.e., $\sigma_{\delta M_{0}}$, need to be fixed for the simulation. We call this ratio $\mathcal{R} = \bar{M_{0}}/\sigma_{\delta M_{0}}$. We model the random fluctuations $\delta M_{0}(\vec{\theta})$ as a zero mean Gaussian random field with a power law power spectrum. The variance of these fluctuations sets the amplitude of the power spectrum; we choose it to be unity. We choose different values for the index of the power law spectrum $\alpha$ and for $\mathcal{R}$, which will be discussed in the next section. \begin{figure}[t!] \subfloat[]{\includegraphics[scale=.25]{./chap2/NGC628.png}} \hspace{10pt} \subfloat[]{\includegraphics[scale=.51]{./chap2/Window.png}} \caption{Fig~(a) is the moment zero map of the \rm HI emission of the galaxy NGC~628 obtained from the THINGS survey. Fig~(b) is the model of the window function that we use for the simulation.}\label{fig:NGC628} \end{figure} \subsection{Simulated observations} We use the model of the sky as discussed in the previous section to simulate radio interferometric observations and generate random group visibility FITS files. For this purpose we need to choose a particular array configuration of the interferometer. We model our telescope based on the GMRT array configuration, with the same telescope latitude and the baselines scaled to half their original values\footnote{The original GMRT array configuration can be seen in {\it http://gmrt.ncra.tifr.res.in/gmrt\_hpage/Users/doc/GMRT\-specs.pdf}}. The largest baseline of our model telescope is $60\ k\lambda$. \citet{2013NewA...19...89D} have estimated the power spectra of 18 spiral galaxies from the THINGS sample using a visibility based estimator. They found that the power spectra follow power laws at baselines larger than $\sim 1\ k\lambda$, with the power law index $\alpha$ lying between $-0.3$ and $-2.2$.
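The fluctuation component $\delta M_0$ can be generated by colouring white noise in Fourier space with the square root of the desired power law, as sketched below. The normalisation convention is ours; taking the real part discards half the power, but that is absorbed by the final rescaling to unit variance.

```python
import numpy as np

def gaussian_random_field(n, alpha, seed=0):
    """Zero-mean, unit-variance Gaussian random field on an n x n grid
    with an (approximate) power-law power spectrum P(U) ~ U^alpha,
    built by colouring complex white noise with sqrt(P)."""
    rng = np.random.default_rng(seed)
    u = np.fft.fftfreq(n)
    ux, uy = np.meshgrid(u, u, indexing="ij")
    r = np.hypot(ux, uy)                       # |U| for each Fourier mode
    r[0, 0] = r[0, 1]                          # avoid the U = 0 divergence
    white = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    field = np.fft.ifft2(white * r ** (alpha / 2.0)).real
    field -= field.mean()
    return field / field.std()
```

A call such as `gaussian_random_field(1024, -1.5)` produces a realisation of the kind of fluctuation map used in the sky model, up to the overall scaling by $\sigma_{\delta M_0}$.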
Moreover, 9 of the 18 galaxies have $\alpha$ between $-1.5$ and $-1.8$. We choose three values of $\alpha$ for our model sky image: $[-0.5, -1.5, -2.0]$. \citet{2013MNRAS.436L..49D} found that the ratio of the mean to the fluctuating component varies between $5$ and $10$ for the six galaxies they analysed. We consider two values of $\mathcal{R}$ here: $[5,10]$. We generate model sky images based on these parameters on a square grid of $1024^{2}$ pixels, with each grid element representing a $1.5^{''} \times 1.5^{''}$ patch of the sky. We choose the declination of the source to be $+54^{\circ}$ for our simulation. Using these parameters we perform 8 hours equivalent of simulated observation of the model sky. We do not include any measurement noise in the simulated observations. We use the AIPS task IMAGR to prepare the dirty image as well as a deconvolved estimate of the model sky. We apply the visibility based estimator to the simulated visibilities and the image based estimator to the dirty as well as the CLEAN image of the model sky to estimate the power spectra. We also estimate the power spectrum directly from the sky model and use that as a reference. \section{Analysis of the simulated data} \begin{figure}[t!] \subfloat[]{\includegraphics[scale=.39]{./chap2/powsp.png}} \subfloat[]{\includegraphics[scale=.27]{./chap2/uvpscorr.png}} \caption{Fig~(a) shows the power spectrum estimates. Fig~(b) gives the correlation between the baseline fraction and the ratio of the image and visibility based power spectra.} \label{fig:ps} \end{figure} In this section we compare the power spectra estimated by the different methods. To keep things simple, we describe the outcome of only one set of simulations in the main text, with $\alpha = -1.5$ and $\mathcal{R} = 5$. All the conclusions drawn in this section carry over to the rest of the models.
The power spectrum plots are shown in Figure~\ref{fig:allps} at the end of this chapter. \subsection{Power spectrum} Figure~\ref{fig:ps}(a) summarizes the power spectrum analysis we perform here, where we plot the azimuthally averaged power spectra on a log-log scale as a function of baseline $U$. The black solid line refers to the power spectrum of the model sky and can be considered as the reference. It is clear that at baselines smaller than $1\ k\lambda$ the window function dominates, whereas at larger baselines the power spectrum assumes a power law with $\alpha = -1.5$, the same as that of the model image. The open circles correspond to the power spectrum estimated with the visibility based estimator. Clearly, it almost reproduces the reference spectrum. This demonstrates the efficacy of the visibility based power spectrum estimator in accurately reproducing the actual intensity fluctuation power spectrum of the sky. The dashed line and the dot-dashed line represent the power spectra calculated using the image based estimator from the dirty image and the CLEAN image respectively. Clearly, at baselines $> 1 \ k\lambda$ they deviate from the reference spectrum. At these baselines, they appear to follow power law spectra with steeper slopes compared to the reference spectrum. Note that the extra steepening of these power spectra at baselines $\sim 20\ k\lambda$ and higher is an effect of the convolution with the effective synthesized beam at the last stage of CLEAN; the baselines $> 20\ k\lambda$ do not carry information about the sky. It is clear from the above discussion that the image based estimator fails to reproduce the reference power spectrum. In fact, if we blindly use the image based estimator, we shall infer a steeper index for the power law. \subsection{UV coverage and the power spectra} In figure~\ref{fig:ps}(a), we see a trend that the ratio of the image and visibility based power spectra deviates more as the baseline value increases.
Here we investigate the possible reason for this. As mentioned in section 2, we expect the image based power spectrum to have a correlated noise bias because of incomplete baseline coverage. To emphasize: while estimating the image from the observed, discretely sampled visibilities, the deconvolution procedure involves interpolation of the visibility at the baselines where it is not sampled. The required amount of interpolation depends on the array configuration and the source declination. Hence we would expect that, for an array with complete sampling, the noise bias discussed above would be absent. For example, when we estimate the power spectrum of the model image, we perform a Fast Fourier Transform and obtain the visibilities on a grid. This is an example of complete sampling, and indeed the model image shows the exact power law index it was modeled on. Naively, one would then expect the lack of baseline coverage to correlate with the difference between the power spectra estimated from the image and from the visibilities. Since at large baselines the power spectrum amplitude decreases following a power law, the additive difference between the two power spectra would be smaller at larger baselines. A better quantifier of their true difference is the ratio of the two spectra, which we use here. We quantify the baseline coverage by the baseline-fraction $F_{b}$ defined as \begin{equation} F_{b}(U)=\frac{\int_{\vec{U}}^{\vec{U}+\Delta \vec{U}}S(\vec{U}')d\vec{U}'}{\int_0^{\vec{U}_{max}}S(\vec{U}')d\vec{U}'} \frac{A_{U}}{N_{b}}, \end{equation} where $A_{U}$ is the total area of the baseline plane. We plot the baseline fraction against the ratio of the power spectra for the baseline range $1 - 20 \ k\lambda$ in figure~\ref{fig:ps}(b). It is clear that at lower values of $F_{b}$, the deviation between the two power spectra is larger.
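The baseline-fraction statistic and its correlation with the power-spectrum ratio can be sketched numerically. The sketch below is purely illustrative: the centrally condensed Gaussian uv coverage and the mock image-to-visibility power ratio are our assumptions, not the simulated GMRT layout, and the constant $A_U/N_b$ prefactor is dropped since it only rescales $F_b$.

```python
import numpy as np

def baseline_fraction(uv, edges):
    """F_b per annular baseline bin: the fraction of all sampled
    baselines falling in [U, U + dU), following the definition in
    the text up to the constant A_U / N_b prefactor."""
    r = np.hypot(uv[:, 0], uv[:, 1])              # baseline lengths |U|
    counts, _ = np.histogram(r, bins=edges)
    return counts / len(r)

# Hypothetical illustration: dense short baselines, sparse long ones,
# correlated against a mock P_I / P_V excess that grows where
# coverage is poor.
rng = np.random.default_rng(2)
uv = rng.normal(scale=3.0, size=(5000, 2))        # condensed uv coverage
edges = np.linspace(0, 12, 7)
fb = baseline_fraction(uv, edges)
ratio = 1.0 / (fb + 1e-3)                         # mock power-spectrum ratio
rho = np.corrcoef(fb, ratio)[0, 1]                # Pearson coefficient
```

In this toy setup the Pearson coefficient comes out negative, the same qualitative anti-correlation reported for the simulated observations.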
To quantify the correlation between the ratio of the power spectra and $F_{b}$, we estimate the Pearson linear correlation coefficient, which comes out to be $-0.98$, suggesting a very strong (anti-)correlation. We come to two important conclusions here. Firstly, the visibility based power spectrum estimator accurately reproduces the power spectrum of the true sky fluctuations and can be used without any ambiguity. The image based estimator, for observations with incomplete baseline coverage, deviates significantly from the true spectrum and has a scale dependent bias. Use of the image based power spectrum without any definitive test would result in a biased quantification of the sky fluctuations. Secondly, we observe that the incomplete baseline coverage has a significant effect on the image based power spectrum estimator. In fact, the lack of baseline coverage correlates strongly with the deviation of the image based power spectrum from the true value. This clearly suggests that the array configuration of the interferometer has a big role to play for an image based power spectrum estimator to be effective. We mention at the end that though the results presented here are based on one set of simulations with $[\alpha = -1.5, \mathcal{R} = 5]$, the same conclusions can be drawn from the other sets. We request the interested reader to have a look at Appendix-II for details. \section{Power spectrum of THINGS images} \begin{figure}[t!] \subfloat[]{\includegraphics[scale=.35]{./chap2/alpha_plot_NA.png}}\hspace{10pt} \subfloat[]{\includegraphics[scale=.35]{./chap2/alpha_plot_RO.png}} \caption{Scatter plots of the power law slopes for the THINGS galaxies estimated with the image and visibility based methods, with natural weighted [Fig~(a)] and robust weighted [Fig~(b)] images.} \label{fig:ps1} \end{figure} Dutta et al. (2013) have estimated the power spectra of the \rm HI intensity fluctuations of 18 spiral galaxies from the THINGS sample using a visibility based estimator.
They found that over a range of length scales the power spectra are well fit by power laws. As we have found from our controlled test using simulations that the visibility based estimator reproduces the intensity fluctuation power spectra almost exactly, we shall use the estimates reported by Dutta et al. (2013) as proxies for the true \rm HI power spectra of these galaxies. We use the image based estimator described in section~(3.2) to estimate the power spectra of the same galaxies. We find that for a range of length scales, for each of the galaxies, the power spectra follow power laws. We perform a regression analysis on the image estimated power spectra to find the slopes of these power laws. Note that the highest baseline to which a visibility based power spectrum estimator is limited depends on the measurement noise in the visibilities, whereas, ignoring the noise bias, the highest baseline that an image based power spectrum probes is given by the inverse of the interferometer beam. The visibility data used by Dutta et al. (2013) lack the measurement at the zero baseline. However, in addition to the visibilities, the total \rm HI flux of the galaxy estimated from single dish observations is also used to construct the moment zero maps in the THINGS data products. As this changes the power at the smallest baselines, the window function dominance in the power spectrum may extend to different baselines for the image and visibility based cases. Hence, we do not restrict ourselves to the same range of baselines as in the visibility based case to find the power law fit for the image based case. We rather choose the best possible extent in baseline to fit power laws to the image based power spectra. The power law slopes, errors and the geometric mean of the range of baselines ($U^{(G)} = \sqrt{U_{min} U_{max}}$) they are fit over are listed in table~(1) for each galaxy, for both the visibility and image based estimators.
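The regression used to extract the power-law slope is a linear fit in log-log space. A minimal sketch follows; it is unweighted, whereas a full analysis would typically weight the fit by the errors on the binned power spectrum.

```python
import numpy as np

def fit_power_law(u, p):
    """Least-squares power-law fit P(U) = A U^alpha via linear
    regression in log-log space; returns the slope alpha."""
    mask = (u > 0) & (p > 0)                  # log is defined only here
    alpha, _ = np.polyfit(np.log10(u[mask]), np.log10(p[mask]), 1)
    return alpha
```

Applied to an exact power law $P(U) = 3\,U^{-1.5}$, the fit returns $\alpha = -1.5$.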
Note that we denote the best fit power law index from the visibility based estimator as $\alpha_{V}$, whereas the best fit power law index estimated from the image based estimator is termed $\alpha_{I}$. Further, the THINGS data products provide both robust weighted (RO) and natural weighted (NA) moment zero maps. We perform the analysis described in this section for both of these maps. \begin{table} \begin{center} \begin{tabular}{lcc|cc|cc} \hline & \multicolumn{2}{c|}{Visibility based} & \multicolumn{2}{c|}{Natural weighted} & \multicolumn{2}{c}{Robust weighted} \\ Galaxy & $\alpha_{V}$ & $U^{(G)}_{V}$ & $\alpha^{(NA)}_{I}$ & $U^{(G)}_{I}$ & $\alpha^{(RO)}_{I}$ & $U^{(G)}_{I}$ \\ \hline \hline & & & & & & \\ NGC628 & -1.6$\pm$0.1 & 3.2 & -2.4$\pm$0.2 & 4.0 & -2.3$\pm$0.1 & 4.5 \\ NGC925 & -1.0$\pm$0.2 & 3.2 & -2.3$\pm$0.1 & 5.5 & -2.4$\pm$0.3 & 4.4 \\ NGC2403 & -1.1$\pm$0.1 & 2.2 & -2.8$\pm$0.1 & 6.3 & -2.2$\pm$0.2 & 3.7 \\ NGC2903 & -1.5$\pm$0.2 & 2.5 & -3.1$\pm$0.1 & 4.9 & -2.4$\pm$0.2 & 3.7 \\ NGC3184 & -1.3$\pm$0.2 & 2.2 & -1.3$\pm$0.4 & 4.9 & -1.8$\pm$0.4 & 4.2 \\ NGC3198 & -0.4$\pm$0.3 & 4.0 & -2.2$\pm$0.2 & 5.5 & -1.8$\pm$0.3 & 5.9 \\ NGC3621 & -0.8$\pm$0.2 & 3.5 & -2.4$\pm$0.3 & 5.2 & -2.0$\pm$0.1 & 4.9 \\ NGC4736 & -0.3$\pm$0.2 & 2.4 & -2.7$\pm$0.1 & 4.0 & -2.4$\pm$0.3 & 4.5 \\ NGC5194 & -1.7$\pm$0.2 & 2.8 & -2.6$\pm$0.3 & 4.9 & -2.1$\pm$0.2 & 2.8 \\ NGC5236 & -1.9$\pm$0.2 & 1.9 & -2.6$\pm$0.1 & 3.1 & -2.0$\pm$0.1 & 2.4 \\ NGC5457 & -2.2$\pm$0.1 & 2.7 & -2.5$\pm$0.1 & 3.5 & -2.3$\pm$0.1 & 5.6 \\ NGC6946 & -1.6$\pm$0.1 & 3.9 & -2.0$\pm$0.1 & 6.3 & -1.6$\pm$0.2 & 3.9 \\ NGC7793 & -1.7$\pm$0.2 & 1.9 & -1.7$\pm$0.2 & 4.2 & -2.0$\pm$0.1 & 2.0 \\ NGC2841 & -1.7$\pm$0.2 & 3.2 & -2.9$\pm$0.2 & 3.2 & -1.8$\pm$0.2 & 5.9 \\ NGC3031 & -0.7$\pm$0.1 & 4.5 & -0.8$\pm$0.2 & 4.5 & -0.7$\pm$0.1 & 4.5 \\ NGC3521 & -1.8$\pm$0.1 & 4.1 & -3.4$\pm$0.1 & 7.1 & -2.6$\pm$0.1 & 5.7 \\ NGC5055 & -1.6$\pm$0.1 & 3.2 & -2.5$\pm$0.1 & 5.5 & -2.2$\pm$0.1 & 3.2 \\ \hline \end{tabular} \end{center} \label{tab:alpha} \caption{Table
giving the best fit power law index and the geometric mean of the range of baselines for the fit to the power spectra estimated from the visibility based and image based estimators. Image based power spectra are estimated from both natural and robust weighted images. The baselines are in units of $k\lambda$.} \end{table} \begin{figure}[t!] \subfloat[]{\includegraphics[scale=.35]{./chap2/Geo_mean_NA.png}} \subfloat[]{\includegraphics[scale=.35]{./chap2/Geo_mean_RO.png}} \caption{Figures showing the scatter plot of the geometric mean of the range of baselines for the power law fit with image and visibility based methods with natural weighted [Fig~(a)] and robust weighted [Fig~(b)] images.}\label{fig:ps2} \end{figure} We plot the values of $\alpha_{V}$ and $\alpha_{I}$ along with the error bars in figure~\ref{fig:ps1}. The left panel (a) corresponds to the natural weighted map while the right panel (b) corresponds to the robust weighted map. The dashed line in each case gives $\alpha_{V} = \alpha_{I}$. As is clear, for the majority of the galaxies we have studied here the data points lie away from the equality line for both the natural and robust weighted maps. In fact, the image based estimator systematically produces steeper spectra. This is exactly the trend we have seen from the simulated observations. Though three galaxies from the natural weighted image and five from the robust weighted image have $\alpha_{I} \sim \alpha_{V}$, on average the two estimators give statistically different results. Both the Pearson and Spearman correlation coefficients are also indicative of the lack of correlation between the two estimates of $\alpha$. To check that we are not using completely different ranges of baselines to perform the power law fits for the visibility and image based estimators, we also compare the geometric means of the maximum and minimum baseline values of the two fits in figure~\ref{fig:ps2}.
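The correlation check can be sketched directly from the slopes listed in table~(1); a minimal illustration follows (the simple rank transform used here ignores ties, so it only approximates the Spearman coefficient when tied slopes occur):

```python
import numpy as np

def pearson(x, y):
    """Pearson linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    """Spearman rank correlation: Pearson coefficient of the ranks
    (simple rank transform, ignoring ties)."""
    rank = lambda a: np.argsort(np.argsort(a))
    return pearson(rank(np.asarray(x)), rank(np.asarray(y)))

# Best fit slopes from table (1): visibility based and natural weighted image based
alpha_V  = [-1.6, -1.0, -1.1, -1.5, -1.3, -0.4, -0.8, -0.3, -1.7, -1.9,
            -2.2, -1.6, -1.7, -1.7, -0.7, -1.8, -1.6]
alpha_NA = [-2.4, -2.3, -2.8, -3.1, -1.3, -2.2, -2.4, -2.7, -2.6, -2.6,
            -2.5, -2.0, -1.7, -2.9, -0.8, -3.4, -2.5]
r_p = pearson(alpha_V, alpha_NA)
r_s = spearman(alpha_V, alpha_NA)
```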
Note that, for the reasons discussed before, we do not expect the ranges of baselines over which the fits are performed in the two cases to be exactly the same. The solid black line corresponds to a complete match. A typical range of baselines for which the power law fit is performed using the visibility based estimator is $\sim 1 - 10 \ k\lambda$. The dashed line corresponds to $y = x+\sqrt{10}$. We see that most of the galaxies lie within these two lines, ensuring that we do perform the power law fits over similar ranges of baselines. \section{Invariance in mean quantity} The power spectrum is a second order statistical measure of the data. Various first order measures, for example the local mean of the reconstructed images, are also of scientific interest. The azimuthally averaged window function is often used to assess the radial variation of the \rm HI column density of the galaxy and compare it with the stellar population, star formation rate etc. Unlike the power spectrum, we cannot have a direct estimate of the window function from the observed visibilities; image reconstruction is necessary. We estimate the azimuthally averaged window function from CLEANed images made from the simulated visibilities. This is shown with the discrete circles in figure~\ref{fig:wind}. The black solid line corresponds to the azimuthally averaged window function of the model galaxy. Clearly, the window function estimated from the reconstructed image almost exactly follows the window function of the model. This demonstrates that the mean quantities can be estimated from images reconstructed from the visibilities, as the mean of the residual deconvolution noise is practically small. \begin{figure}[t!]
\begin{center} \includegraphics[scale=.45]{./chap2/WIN4.PNG} \caption{Azimuthally averaged window function of the model galaxy (black solid line) and the reconstructed CLEANed image from the simulation (black circles) are shown.}\label{fig:wind} \end{center} \end{figure} \section{Discussion and Conclusion} In this chapter we perform controlled tests, using simulated sky observations, that quantify the reliability of the sky brightness fluctuation power spectrum estimators. We see that the visibility based method reproduces the reference power spectrum of the input image. The image based estimator, however, is seen to have a noise bias, and the estimated power spectrum does not reproduce the reference spectra in any of the simulations. We expected that the power spectrum of the dirty image would have a noise bias arising due to the mixing of scales by the interferometer beam. It is seen that even for the deconvolved image, obtained using CLEAN with a sufficiently low gain, the image based power spectra have a significantly high noise bias. We understand the mismatch between the image and visibility based power spectra in the following way. Interferometers sample the visibility function at discrete points in the baseline space. As the visibilities can be approximated as the Fourier transform of the sky image, each measured visibility contains information on the entire observed field of view. In the case when the observed sky contains only a single point source, in principle, measuring the visibility at a single point in the baseline space would recover all the information about the source. Hence, in such a special case the sky image can be accurately reconstructed from the observed visibilities. As the number of point sources in the sky increases, it becomes more and more nontrivial to infer the sky image from the direct Fourier transform of the discretely sampled visibilities, the dirty image.
Reconstruction of the sky image requires estimating a comparatively large number of parameters from the visibilities that represent the sky, and a deconvolution of the complicated dirty beam becomes necessary. Strictly speaking, for a sufficiently complicated sky image with incomplete baseline coverage, it is not possible to accurately reconstruct the sky. One well used procedure is CLEAN, which essentially assumes the sky to be a collection of point sources and tries to estimate the positions and amplitudes of these sources. It starts by identifying the brightest point source(s). However, as in the dirty image each of the point sources is scaled in amplitude and shifted by the side lobes of the other sources, such an estimation always has errors. Such an error is more apparent when there are two sources very near to each other. To address this limitation, one uses a loss factor (usually called the gain). An extreme example is an extended structure, which CLEAN considers as many point sources next to each other. In such a case, even using a sufficiently low loss factor does not seem to reproduce the sky fairly accurately. An erroneous reproduction of the sky, and hence of the fluctuations in the sky image, in turn gives an erroneous power spectrum. In order to estimate the power spectrum, however, one does not need to know all the properties of the sky. The power spectrum gives a statistical description of the sky and is assumed to be a smooth function. Hence, it needs to be sampled only at a relatively sparse set of baseline values. Moreover, here we are estimating the isotropic power spectrum. In fact, in the absence of the window function the power spectrum can be modeled by a few parameters only. This makes the visibility based power spectrum estimator practically accurate. For any given baseline coverage, the visibility based estimator estimates the power spectrum at the baselines where it is measured.
Hence, though it spans only a subspace of the power spectrum function, at the given subspace the measurements are accurate. This is essentially reflected in our test. Hence, it is a fair statement that the visibility based estimator estimates the power spectrum of the sky brightness fluctuations irrespective of the details of the baseline coverage. For an image based estimator the estimated power spectrum depends on the baseline coverage and the details of the deconvolution algorithm used to reconstruct the image. In our simulations we have not included any measurement noise in the visibilities (as well as any uncalibrated gain variations, which also behave as noise). Inclusion of measurement noise would give rise to an additional correlated noise problem as discussed in Dutta (2013). However, these effects would depend on the signal to noise ratio of the measurements and can (in principle) be made small enough by increasing the integration time of the observations. Here we also estimate the power spectra of the THINGS galaxies using the image based estimator, starting from the CLEANed images from the THINGS data product. Considering the visibility based power spectra as reference, we see that statistically the image based power spectra of THINGS also have a significant noise bias; in particular, the bias systematically makes the power spectra steeper. In a few cases the image based spectra seem to agree with the reference spectra; however, it is clear from this investigation that image based power spectra should not be used without any characterization of the bias caused by the deconvolution noise. Finally, we note that though we use the intensity of the \rm HI emission from nearby galaxies as models of the sky image, the limitations of the image based power spectra discussed here would be as relevant for any extended structure.
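The core of the argument can be demonstrated with a deliberately minimal toy (no CLEAN step; all names and numbers are assumptions made for this sketch): sample the Fourier transform of a random sky at an incomplete set of $uv$ points. Wherever a baseline is measured the visibility based power is exact, while a power spectrum computed from the resulting dirty image, averaged over the full grid, is biased because the unsampled modes enter as zeros:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64
# A toy "sky": Gaussian random field with a power-law Fourier amplitude
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.sqrt(kx ** 2 + ky ** 2)
k[0, 0] = 1.0                              # avoid division by zero at the DC mode
sky_ft = k ** -1.0 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Incomplete baseline coverage: only 30 per cent of the uv grid is measured
mask = rng.random((n, n)) < 0.3
vis = np.where(mask, sky_ft, 0.0)

true_power = np.abs(sky_ft) ** 2
vis_power = np.abs(vis) ** 2               # exact at every measured baseline
dirty = np.fft.ifft2(vis)                  # "dirty image" of the toy sky
dirty_power = np.abs(np.fft.fft2(dirty)) ** 2
bias = dirty_power.mean() / true_power.mean()   # < 1: unsampled modes enter as zeros
```

Here the visibility based estimate is exact on the measured subspace, whereas any statistic formed from the image inherits the incomplete sampling; a CLEAN style deconvolution replaces the zeros by a model and introduces the deconvolution noise discussed above.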
We make a strong statement here that the results discussed in this work call for a rethink of the use of the image based power spectrum in the literature to infer various astrophysical phenomena. However, estimators of the mean quantities, like the azimuthally averaged window function from the reconstructed image, are unbiased and can be used directly. \newpage \begin{landscape} \begin{figure}[t!] \label{fig:allps} \includegraphics[scale=0.7]{./chap2/Allsim.png} \caption{Power spectrum comparison plots for different simulation runs.} \end{figure} \end{landscape} \chapter{Visibility moment and the power spectrum of turbulent velocity} \label{ch: introduction} In addition to the column density distribution, by observing spectral lines we can also estimate the dynamical information of the \rm HI gas in the galaxy. This is because the observed spectral lines are Doppler shifted in proportion to the line of sight velocities of the gas in the galaxy. However, we should note here that because velocity is a vector, unlike the column density, the velocity correlation is given in terms of a matrix of dimension $3 \times 3$, where each element measures the correlation of one component of the velocity with another. Since observationally we can only probe the line of sight component of the velocity, we can estimate only one component of the velocity correlation matrix, that is, line of sight to itself. Unlike the power spectrum of the column density, estimating the velocity power spectrum from radio interferometric observations is not straightforward. One way is to reconstruct the sky brightness distribution at different frequencies and use that with various techniques like the Velocity Channel Analysis (VCA), the Velocity Coordinate Spectrum (VCS), or the statistics of centroids of velocities to estimate the parameters of the velocity power spectrum.
These methods were originally developed to measure the velocity power spectrum at relatively small scales ($\sim 1 - 100 $ pc), where mostly single dish observations were used.\footnote{It is possible to use visibilities directly for VCA.} An inherent limitation of the reconstructed image from interferometric data is discussed and investigated in detail in chapter 2, where we clearly demonstrate that the power spectra calculated from the reconstructed image are biased. Moreover, \citet{2015MNRAS.452..803D} has demonstrated that the VCS techniques have limitations when applied to external spiral galaxies. \citet{2016MNRAS.456L.117D} have introduced a visibility based estimator for the turbulent velocity fluctuations. In this chapter we discuss this estimator, its implementation and its application to the spiral galaxy NGC~6946. \section{Visibility moment estimator}\footnote{The calculations presented in this section are reproduced from Dutta et al. (2016)} For 21-cm observations of external spiral galaxies with existing radio telescopes like the VLA\footnote{Very Large Array, New Mexico} and the GMRT,\footnote{Giant Metrewave Radio Telescope, Pune} the observed quantity is the visibility $\mathcal{V}(\vec{U},v)$, the Fourier transform of the sky brightness distribution: \begin{equation} \label{eq:V_u} \mathcal{V}(\vec{U},v) = \int d\vec{\theta} \hspace{2.5pt}e^{i2 \pi \vec{U}.\vec{\theta}} I(\vec{\theta},v) \end{equation} We define the zeroth moment of the visibility as, \begin{equation} V_0(\vec{U}) = \int dv\hspace{2.5pt}\mathcal{V}(\vec{U},v) \end{equation} Using equations~\ref{eq:I_theta} and~\ref{eq:phi}, we clearly see that \begin{equation} V_0(\vec{U}) = \tilde{M}_0(\vec{U}), \end{equation} where $\tilde{}$ denotes the Fourier transform.
Similarly, we define the first moment of the visibility as \begin{equation}\label{eq:V1_u} V_1(\vec{U}) = \int dv\hspace{2.5pt}v\hspace{2.5pt}\mathcal{V}(\vec{U},v) \end{equation} Here the velocity integrals are over the entire spectral range of the \rm HI emission. Using equations~\ref{eq:V1_u} and~\ref{eq:I_theta}, we have \begin{equation} V_1(\vec{U}) = \int d\theta\hspace{2.5pt}e^{i2 \pi \vec{U}.\vec{\theta}}\hspace{2.5pt}I_0\int dz \hspace{2.5pt}n_{HI}(\vec{r})v_z(\vec{r}) \end{equation} where $v_z(\vec{r})$, the line of sight velocity, has two components: (a) $v_z^{\Omega}(\vec{r})$, the line of sight component of the systematic rotation velocity of the galaxy, and (b) $v_z^T(\vec{r})$, the line of sight component of the random motion due to turbulence. For almost face on spiral galaxies with small inclination angles, the systematic rotation can be assumed to be independent of $z$ and thus we can write \begin{equation} v_z(\vec{r})=v_z^{T}(\vec{r})+v_z^{\Omega}(\vec{\theta}) \end{equation} Hence we get, \begin{eqnarray} \nonumber V_1(\vec{U}) &=& \int d\theta\hspace{2.5pt}e^{i 2 \pi \vec{U}.\vec{\theta}}\hspace{2.5pt}M_0(\vec{\theta})v_z^{\Omega}(\vec{\theta})+ \int d\theta\hspace{2.5pt}e^{i2 \pi \vec{U}.\vec{\theta}}\hspace{2.5pt}I_0\int dz \hspace{2.5pt}n_{HI}(\vec{r})v^T_z(\vec{r}) \\ &=& V_0(\vec{U})\otimes\tilde{v}^{\Omega}_z(\vec{U}) + \tilde{\chi}(\vec{U}), \end{eqnarray} where \begin{equation} \chi(\vec{\theta})= I_0\int dz \hspace{2.5pt}n_{HI}(\vec{r})v^T_z(\vec{r}) \end{equation} which contains information about both the column density and the turbulent velocity.
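For a single baseline, the zeroth and first visibility moments defined above amount to integrals over the velocity axis; a small numerical sketch follows (the synthetic line profile, centroid and phase are all assumed numbers, chosen only for illustration):

```python
import numpy as np

# Velocity axis (km/s) and a toy visibility spectrum at one baseline:
# a Gaussian line profile centred at v0 with an arbitrary fixed phase.
v = np.linspace(-50.0, 50.0, 201)
dv = v[1] - v[0]
v0, sigma = 10.0, 5.0
vis_spectrum = np.exp(-0.5 * ((v - v0) / sigma) ** 2) * np.exp(1j * 0.3)

V0 = np.sum(vis_spectrum) * dv        # V_0(U): integral of V(U, v) over v
V1 = np.sum(v * vis_spectrum) * dv    # V_1(U): integral of v * V(U, v) over v
centroid = (V1 / V0).real             # recovers the line centroid v0
```

The common complex phase cancels in the ratio $V_1/V_0$, so the first moment indeed carries the line of sight velocity information of the emission.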
Under the assumption that the random turbulent velocity and the systematic rotational velocity of the galaxy are uncorrelated, the quantity $P_{\chi}(\vec{U})=\left<|\tilde{\chi}(\vec{U})|^2\right>$ is given by \begin{equation} P_{\chi}(\vec{U})= \left<|V_1(\vec{U})|^2- \tilde{C} (\vec{U})\right> \end{equation} with \begin{equation} \tilde{C}(\vec{U}) = P_0(\vec{U})\otimes|\tilde{v}^{\Omega}_z(\vec{U})|^2 \end{equation} Here $P_0(\vec{U})$ is the column density power spectrum of the galaxy. The angular bracket denotes an average over statistical ensembles. In practice, we are always provided with a single sky realization. While calculating the column density power spectrum in chapter 2, we bypassed this problem by performing an azimuthal average under the assumption of statistical homogeneity and isotropy. In the present case, neither $V_1(\vec{U})$ nor $\tilde{C}(\vec{U})$ is statistically isotropic, and individual azimuthal averages to calculate them do not work. However, the complete right hand side (RHS) of the above equation (without the angular bracket) is statistically isotropic. Hence, the ensemble average in the RHS of the above equation can be replaced with an annular average when the average is done over the entire RHS expression. Here, $|V_1(\vec{U})|^2 $ and $P_0(\vec{U}) $ can be estimated directly from the measured visibilities. Evaluation of the quantity $\tilde{C}(\vec{U})$ requires an estimate of the line of sight component of the systematic rotational velocity of the galaxy. We shall discuss how this is performed in detail shortly. The quantity $P_{\chi}(\vec{U})$ has information on both the power spectrum of the column density and that of the turbulent velocity.
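The annular averaging step can be sketched on a regular $uv$ grid as follows (the bin count and the test field are assumptions made for this sketch):

```python
import numpy as np

def annular_average(field, n_bins=6):
    """Average a gridded 2-D quantity over annular bins of |U|; this is
    meaningful only when the full averaged expression is statistically
    isotropic, as argued in the text."""
    n = field.shape[0]
    ux = np.fft.fftfreq(n)[:, None]
    uy = np.fft.fftfreq(n)[None, :]
    U = np.sqrt(ux ** 2 + uy ** 2)
    edges = np.linspace(0.0, U.max(), n_bins + 1)
    centres, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (U >= lo) & (U < hi)
        if sel.any():
            centres.append(0.5 * (lo + hi))
            means.append(field[sel].mean())
    return np.array(centres), np.array(means)

# Sanity check on an isotropic field: binning the field |U| itself must
# return means that rise monotonically with the bin radius.
n = 32
ux = np.fft.fftfreq(n)[:, None]
uy = np.fft.fftfreq(n)[None, :]
centres, means = annular_average(np.sqrt(ux ** 2 + uy ** 2))
```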
In the case of spiral galaxies with a thin disk, for the power spectrum estimated at scales much larger than the scale height of the galaxy, \begin{equation} P_{\chi}(\vec{U})=P_0(\vec{U})\otimes P^T_{v}(\vec{U}) \end{equation} The power spectrum of the column density $P_0(\vec{U})$ can be estimated through the zeroth visibility moment. Estimating the power spectrum of the turbulent velocity $P^T_{v}(\vec{U})=\left< |\tilde{v}^T_z(U)|^2 \right>$ then reduces to a problem of either model dependent regression analysis or direct deconvolution. We shall discuss the implementation of this estimator in the next section. \section{Implementation of the visibility moment estimator} In this section we describe in detail how the visibility moment estimator for $P_{\chi}(\vec{U})$ is implemented. First we measure the quantity $v_z^{\Omega}(\vec{\theta})$ using a shapelet reconstruction of the moment one map of the reconstructed specific intensity distribution. We use this to estimate $\tilde{C}(\vec{U})$ and then $P_{\chi}(\vec{U})$. Finally we perform a parametric estimate of $P^T_{v}(\vec{U})$. \subsection{The systematic rotation velocity} We use the natural weighted moment one maps generated from the reconstructed specific intensities of the galaxy to estimate $v_z^{\Omega}(\vec{\theta})$. Using the definition of the moment one map and eqn.~(3.6), we get, for a thin disk galaxy with a low inclination angle, \begin{eqnarray} M_1(\vec{\theta}) &=& \frac{\int dz\hspace{2.5pt}I_0n_{HI}(\vec{r})\left[v^T_z(\vec{r})+ v^{\Omega}_z(\vec{\theta})\right]}{M_0(\vec{\theta})} \\ \nonumber &=& v^{\Omega}_z(\vec{\theta}) + \frac{\chi(\vec{\theta})}{M_{0}(\vec{\theta})}. \end{eqnarray} If we assume that the density and velocity induced by turbulence are statistically homogeneous and isotropic and that the moment zero map of the galaxy is azimuthally symmetric at large scales, the average of the second term in the above expression is zero.
Hence, \begin{equation} \nonumber \langle {M}_1(\vec{\theta}) \rangle = v_z^{\Omega}(\vec{\theta}). \end{equation} The above average is in principle over an ensemble. Owing to the fact that we have only one sample of the sky, we need to choose an alternative averaging procedure. As the line of sight component of the rotational velocity varies at scales large compared to the scales where the turbulence is effective, we may choose to perform a local average of the moment one map. Here we need to determine the choice of the averaging scale and the averaging kernel. An obvious choice is a Gaussian kernel with a length scale that is larger than the scales at which we would like to measure the velocity power spectrum, but smaller than the scales at which the rotational velocity may vary. However, a Gaussian kernel is azimuthally symmetric, while the rotational velocity has at least a dipole asymmetry. It is preferable to instead perform a shapelet construction of the moment one map and choose the scale and order of the shapelets such that the criteria for the averaging scale are satisfied. \subsection{Construction of $\tilde{C}(\vec{U})$} The line of sight component of the rotation velocity is a smooth function of the sky angular coordinates $\vec{\theta}$. We have discussed the method of estimating this in the previous section, where we obtain it on a grid in the $\vec{\theta}$ space. Hence the quantity $\mid \tilde{v}_{z}^{\Omega}(\vec{U}) \mid^{2}$ can be calculated by finding the modulus square of the discrete Fourier transform of $v_z^{\Omega}(\vec{\theta})$. We perform the convolution of $P_{0}(\vec{U})$ and $\mid \tilde{v}_z^{\Omega}(\vec{U}) \mid^{2}$ by multiplying their inverse Fourier transforms on a grid in the $\vec{\theta}$ plane and applying the convolution theorem. \subsection{Estimation of $P_{\chi}(\vec{U})$} In order to find $P_{\chi} $, we need to find $|V_1(\vec{U})|^2$ and $\tilde{C}(\vec{U})$.
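The convolution step used above in constructing $\tilde{C}(\vec{U})$ can be sketched numerically (a toy on a regular grid; the overall normalisation follows the numpy FFT conventions and is an implementation detail of this sketch, not taken from the text):

```python
import numpy as np

def convolve_via_fft(a, b):
    """Circular convolution of two gridded 2-D functions via the
    convolution theorem: multiply the inverse transforms pointwise and
    transform back.  The factor a.size fixes numpy's 1/N^2 ifft norm."""
    return np.real(np.fft.fft2(np.fft.ifft2(a) * np.fft.ifft2(b))) * a.size

# Convolving with a delta function at the origin must return the input.
n = 16
delta = np.zeros((n, n))
delta[0, 0] = 1.0
p0 = np.random.default_rng(1).random((n, n))   # stand-in for gridded P_0
out = convolve_via_fft(p0, delta)
```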
It is straightforward to estimate $V_1(\vec{U})$ from the measured visibilities $\mathcal{V}(\vec{U},v)$ through its definition. However, as the measured visibilities are not on a uniform grid in the baseline space, the values of $V_1(\vec{U})$ are also not gridded. It is important to note here that the recorded visibilities (and hence also $V_1(\vec{U})$) contain sky signal and noise parts: \begin{equation} V_1(\vec{U}) = S_1(\vec{U})+ \mathcal{N} \end{equation} In interferometric observations, the individual measurements of $V_1(\vec{U})$ often do not have a signal to noise ratio higher than one. Hence, if we directly calculate $\mid V_1(\vec{U}) \mid^{2}$, it will be biased by the variance of the noise. \citet{2009MNRAS.398..887D} has discussed how correlating the visibilities at nearby baselines reduces the noise bias. Here we consider an alternative approach. Since the quantity $\tilde{C}(\vec{U})$ is calculated on a grid, we estimate $\mid V_1(\vec{U}) \mid^{2}$ on the grid by the following procedure. We calculate the gridded values of $V_1$, i.e., $V_{1}(\vec{U}_{g})$, as \begin{equation} V_{1}(\vec{U}_{g})=\sum_{i=1}^{N_{g}}V_1(\vec{U_i}) \end{equation} where $N_{g}$ is the number of measured values in a particular grid cell $\vec{U}_{g}$. Similarly we define \begin{equation} S_{1}(\vec{U}_{g})=\sum_{i=1}^{N_{g}} \mid V_1(\vec{U_i})\mid ^2. \end{equation} The measurement noise $\mathcal{N}$ is uncorrelated across different baselines. In this case, the following combination of $V_{1}(\vec{U}_{g})$ and $S_{1}(\vec{U}_{g})$ gives an unbiased estimate of $\mid V_{1} \mid^{2}$ at the grid points $\vec{U_{g}}$, which we denote as $\mid V_{1} \mid^{2} (\vec{U} _{g})$, \begin{equation} |V_{1}|^2(\vec{U}_{g}) =\frac{\left[ \mid V_{1}(\vec{U}_{g})\mid^2-S_{1}(\vec{U}_{g}) \right]}{^{N_{g}}C_2} \end{equation} where the numerator retains only the noise free cross terms between distinct measurements.
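The noise-bias removal in the gridded estimator can be illustrated with a one-cell toy (the signal value, noise level and normalisation are assumptions made for this sketch; note the normalisation convention matters, since the ordered-pair count $N_g(N_g-1)$ and the unordered-pair count $^{N_g}C_2$ differ by a factor of two, and the ordered count is used here):

```python
import numpy as np

rng = np.random.default_rng(7)
N_g = 500
S = 2.0 + 1.0j                          # assumed true moment in this uv grid cell
noise = rng.normal(size=N_g) + 1j * rng.normal(size=N_g)
V = S + noise                           # N_g noisy measurements in one cell

V1_g = V.sum()                          # gridded sum, V_1(U_g)
S1_g = np.sum(np.abs(V) ** 2)           # sum of individual squared moduli

# The naive estimate is biased upward by the noise variance ...
naive = S1_g / N_g
# ... while |V1_g|^2 - S1_g keeps only the cross terms V_i V_j^*, i != j,
# whose noise parts average to zero; there are N_g (N_g - 1) such pairs.
unbiased = (np.abs(V1_g) ** 2 - S1_g) / (N_g * (N_g - 1))
```

With the true $|S|^2 = 5$ here, the naive estimate comes out near $7$ (signal plus noise variance) while the cross-term estimate comes out near $5$.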
Since $\tilde{C}(\vec{U})$ is a smooth function of $\vec{U}$ and is already estimated on a grid in $\vec{U}$, we can interpolate it to the baselines $\vec{U}_{g}$. Finally, we use an annular average over bins of $\vec{U}$ to find $P_{\chi}(U)$. \subsection{Parametric deconvolution of the velocity power spectrum} As mentioned in the beginning, $P_{\chi}( U )$ has information on both the power spectrum of the column density and that of the turbulent velocity. We have been considering the particular case of spiral galaxies with a thin disk, where \begin{equation*} P_{\chi}(\vec{U})=P_0(\vec{U})\otimes P^T_{v}(\vec{U}). \end{equation*} Methods to estimate the quantities $P_{\chi}(\vec{U})$ and $P_0(\vec{U})$ have been discussed already. Hence estimating the velocity power spectrum reduces to performing a two dimensional deconvolution. Though a direct deconvolution may be possible, we prefer to perform a parametric estimate here. Since the power spectrum of the turbulent velocity is expected to be a power law, we assume a two parameter model for $P^T_{v}(\vec{U})$, given by \begin{equation} P^T_{M}(\vec{U})=A_{v}\hspace{2.5pt}U^{\beta}. \end{equation} We estimate the two parameters of $ P^T_{v}(\vec{U})$ by regression analysis. However, we do not follow the most common chi-square minimization of the parameters here. The usual chi-square method assumes that the model is exact (that is, it does not have an associated error) and weights the residuals by the errors in the measurements. Here, since part of the model ($P_0(\vec{U})$) is estimated from the data, the model values also have uncertainties. We perform a Monte Carlo based regression analysis, where we use several realizations of the model and the measured values and estimate the best fit parameters for each of these realizations. The statistics of the best fit values over the realizations are then used to assess the parameters and their errors.
For each set of values of the parameters we calculate \begin{equation} P_{\chi}^M(\vec{U})=P_0(\vec{U})\otimes P^T_{M}(\vec{U}) \end{equation} on the same grid as that of $P_{\chi}(\vec{U})$ and compare it with the measured value of $P_{\chi}(\vec{U})$ by estimating \begin{equation} \Delta (A_{v}, \beta) = \frac{1}{N_{G}}\sum_{\vec{U}_{G}}\left( P_{\chi}^M(\vec{U}_{G})-P_{\chi}(\vec{U}_{G}) \right)^2 \end{equation} where $N_{G}$ is the number of grid points. We minimize $\Delta$ with respect to $A_{v}$ and $\beta$ and find the best fit values. For these best fit parameter values, we can find $P^T_{v}(\vec{U})$. \subsection{Monte Carlo error analysis} In the previous sections we have described the steps to estimate the velocity fluctuation power spectrum from radio interferometric observations. Any observational estimate needs to be supplemented by the statistical error associated with it. To estimate the errors, first note the errors in the directly measured quantities. In this case the measurement errors are associated with the errors in measuring the velocity and the visibilities. For relatively straightforward statistical estimators, it is possible to find the errors in the estimated quantities by propagating the errors analytically from the measurements. However, for the visibility moment estimator we described above, analytical estimation is complicated. In such a case we may repeat the observation of the same sky. This is impractical given the limited observational time available at the telescope facilities. A solution to this problem is to produce synthetic observations that preserve the statistics of the measurements. For example, as the visibilities are known to have Gaussian random noise associated with their measurements, we use the mean and the standard deviation of the measured visibilities to generate different statistical realizations.
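A one dimensional toy of the parametric step follows (illustrative only; the model spectrum, parameter grids, and the use of a brute-force search in place of a formal minimiser are all assumptions of this sketch):

```python
import numpy as np

# 1-D analogue of P_chi = P_0 convolved with A_v U^beta, recovered by
# brute-force minimisation of Delta(A_v, beta), the mean squared residual.
U = np.arange(1.0, 33.0)
P0 = np.exp(-0.5 * (U / 6.0) ** 2)          # stand-in column density spectrum
A_true, beta_true = 2.0, -1.5
P_chi = np.convolve(P0, A_true * U ** beta_true, mode="same")

best, best_delta = (None, None), np.inf
for A in np.linspace(0.5, 4.0, 36):         # grid includes A = 2.0
    for beta in np.linspace(-3.0, 0.0, 31): # grid includes beta = -1.5
        model = np.convolve(P0, A * U ** beta, mode="same")
        delta = np.mean((model - P_chi) ** 2)   # Delta(A_v, beta)
        if delta < best_delta:
            best, best_delta = (A, beta), delta
A_fit, beta_fit = best
```

Because the toy data are noiseless and the true parameters lie on the search grid, the minimisation recovers them exactly; in the real analysis the grid search is repeated over the Monte Carlo realizations described next.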
Considering each of these realizations as an individual measurement, we can use the visibility moment estimator to obtain the parameters of the line of sight velocity fluctuation power spectrum. This gives us the probability distribution of the estimated quantities, their mean and standard deviation. The mean of the estimator thus calculated is taken to be the best estimate and the standard deviation is taken to be the error associated with it. This method of estimating the error is often termed the Monte Carlo estimation of the errors. Here we use a mixture of Monte Carlo and analytical error propagation methods to obtain the errors in the measured parameters of the velocity power spectrum. In the previous chapter we have described the visibility based estimator for the column density power spectrum $P_0(\vec{U})$. The visibility based estimator also provides us with error estimates in the annular baseline bins where the power spectra are measured. We assume a Gaussian distribution to generate different realizations of $P_0(\vec{U})$, that is \begin{equation} P^R_0(U) = P_0(U) + \sigma_{P_0}(U) \times \hat{n}, \end{equation} where $\sigma_{P_0}(U) $ is the 1 $\sigma$ error in estimating $P_0(U)$ and $\hat{n}$ is a random number drawn from a normal distribution (a Gaussian distribution with unit standard deviation and zero mean). The superscript $R$ stands for a particular realization. The typical errors in the measured velocities arise from the width of the frequency channels of the observation. For the THINGS galaxies this is about $5$ km sec$^{-1}$. We use the moment one map to estimate the line of sight component of the rotational velocity $v^{\Omega}(\vec{\theta})$. The moment one maps are generated using the reconstructed images from the visibilities; their local average provides us with an unbiased estimate of $v^{\Omega}(\vec{\theta})$ (section 2.2.1 and chapter 2).
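The realization step $P^R_0 = P_0 + \sigma_{P_0}\hat{n}$ and the extraction of a Monte Carlo mean and error can be sketched as follows (the input spectrum, the $10\%$ error level and the use of a power-law slope as the downstream estimator are assumptions made for this sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
U = np.arange(1.0, 17.0)
P0 = 10.0 * U ** -1.5                   # stand-in measured power spectrum
sigma_P0 = 0.1 * P0                     # assumed 1-sigma error per annular bin

slopes = []
for _ in range(256):
    # One Monte Carlo realization: P0^R = P0 + sigma_P0 * n_hat
    P0_R = P0 + sigma_P0 * rng.normal(size=P0.size)
    # Push the realization through the estimator (here: a power-law slope)
    slope, _ = np.polyfit(np.log10(U), np.log10(np.abs(P0_R)), 1)
    slopes.append(slope)
slopes = np.array(slopes)
best_fit, error = slopes.mean(), slopes.std()   # Monte Carlo mean and 1-sigma
```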
We use these estimates of $v^{\Omega}(\vec{\theta})$ and assume a Gaussian error of standard deviation $5$ km sec$^{-1}$ to generate different realizations of $v^{\Omega}(\vec{\theta})$. These realizations, combined with the different realizations of $P_{0}(U)$ discussed above, are used to generate different realizations of $\tilde{C}(\vec{U})$. We calculate the mean and standard deviations of these estimates and use them for further analysis. We estimate the errors in the estimates of $\mid V_{1}(\vec{U})\mid^{2}$ using similar analytical steps to those we use for $P_{0}(U)$. The final estimates of the errors in $P_{\chi}(\vec{U})$ are obtained by analytical error propagation from the errors in $\tilde{C}(\vec{U})$ and $\mid V_{1}(\vec{U})\mid^{2}$. \section{Data identification for application of the visibility moment estimator} The visibility moment estimator for the power spectrum of the line of sight turbulent velocity can be applied to external spiral galaxies with low inclination angles. As the inclination angle increases, the assumption that the rotation velocity is independent of the line of sight direction fails and the uncertainty in the estimates of the velocity power spectrum increases. \citet{2016MNRAS.456L.117D} have validated the estimator by using noise free simulated data. They showed that for low inclination angles, $i < 40^o$, this estimator performs well, with low uncertainty. In their simulation they have chosen a simple model for the line of sight velocity component with a constant inclination angle, such that they avoid the uncertainty in the rotation curve estimation. In reality, however, galaxy disks are not exactly flat; both the inclination and position angles vary with radial distance. This can induce significant uncertainty in the estimation of the rotation curve, even for considerably lower inclination angles ($< 15^{o}$).
Hence we prefer to choose the inclination angle to lie between $\sim 20^o$ and $\sim 35^o$. We also would like to avoid too much warp in the galaxy's disk, that is, large variations in the inclination and position angles. The other important criteria for choosing a galaxy are that it must have a thin disk and adequate resolution along both the angular and frequency axes. We use the radio interferometric data products from the THINGS\footnote{THINGS: The \rm HI Nearby Galaxy Survey (Walter et al. 2008)} survey, in which 23 spiral galaxies were observed. We choose the galaxy NGC~6946, which satisfies the above criteria. It has an inclination angle of $33^o$. The scale length of the galaxy is $\sim 20$ kpc whereas the upper limit on the scale height \citep{2013NewA...19...89D} is 300 pc. The angular resolution of the data corresponds to about $75$ pc, whereas the velocity resolution is $5$ km sec$^{-1}$. \begin{figure}[t!] \subfloat[]{\includegraphics[scale=.39]{./chap3/NGC6946_M0.png}} \hspace{10pt} \subfloat[]{\includegraphics[scale=.39]{./chap3/NGC6946_M1.png}} \caption{Fig~(a) is the moment zero map of NGC6946 and Fig~(b) is the moment one map of the same.} \label{fig:NGC6946} \end{figure} In our analysis we need both the visibility data of the \rm HI emission and the moment one maps of the reconstructed specific intensity distribution. Along with the \rm HI 21-cm line emission, the directly observed visibilities originally also contain continuum emission (mainly synchrotron emission). We have modelled the continuum emission from the line free frequency channels. This has been removed using the UVSUB task in AIPS. We use the continuum subtracted visibilities for further analysis. The THINGS \rm HI survey also provides moment maps of the \rm HI emission of the different external galaxies, generated by multi scale cleaning of the visibility data. Fig~\ref{fig:NGC6946} shows the moment zero and moment one maps of NGC~6946. We have used this moment one map for the estimation of $\tilde{C}(\vec{U})$.
\section{Velocity power spectrum of NGC~6946} In this section, we discuss the results of implementing the visibility moment estimator on the NGC 6946 galaxy. The following subsections discuss each step in detail and the results obtained. \subsection{Shapelet construction of $v^{\Omega}_z(\vec{\theta})$} \begin{figure}[t!] \subfloat[]{\includegraphics[scale=.35]{./chap3/rc.png}} \hspace{10pt} \subfloat[]{\includegraphics[scale=.35]{./chap3/ip.png}} \caption{Fig~(a) is the rotation curve of NGC 6946 and Fig~(b) shows the position angle and inclination angle as functions of galactocentric radius. The position angle is offset by $200^{o}$ to show them in the same plot. Note that the inclination angle varies almost linearly with the galactocentric radius.}\label{fig:rot_curve} \end{figure} We estimate the line of sight component of the rotational velocity $v^{\Omega}_z(\vec{\theta})$ using shapelet decomposition. In chapter 2, we performed a shapelet construction of the window function. We broadly follow the same procedure here, with appropriate choices of the shapelet scale $\eta$ and the highest shapelet order $n_{max}$. Both these parameters are necessary to identify the large scale component in the moment one map that corresponds to the line of sight component of the rotation velocity. The latter has contributions from the tangential rotational velocity, the inclination angle and the position angle of the galaxy. Using the reconstructed \rm HI image data cube, \citet{2008AJ....136.2648D} have estimated the rotation curve, position angle and inclination angle as functions of galactocentric radius. These are shown in figure~\ref{fig:rot_curve}. We estimate the 1D power spectrum of each of these three components to identify the large scale fluctuations of these curves. These power spectra, normalized by the standard deviation of the corresponding curve, are shown in figure~\ref{fig:PS_curve}. \begin{figure}[t!] 
\begin{center} \includegraphics[scale=.45]{./chap3/PS_rot_curve.png} \caption{One dimensional power spectra of the tangential rotational velocity, inclination angle and position angle as a function of the inverse angular scale.}\label{fig:PS_curve} \end{center} \end{figure} Clearly, the power spectra have considerably more power at the larger scales. We considered the angular scale at which all three power spectra drop below $1\%$ of their peak values as the scale for the shapelet construction. In this case, this happens at an angular scale of 157 arcsec, which corresponds to 7.5 kpc in the galaxy's disk. It is important to note that the estimates of the velocity power spectra presented here are therefore limited to 7.5 kpc on the larger scale side. Next we perform a shapelet decomposition of the moment one map of the galaxy with different values of $n_{max}$. For sufficiently large values of $n_{max}$, the shapelet construction would pick up the small scale fluctuations. We identify the highest value of $n_{max}$ for which fluctuations at scales smaller than $\eta$ are absent in the reconstructed map. In our case, $n_{max} = 8$ turned out to be this optimal value and we use it for further analysis. \begin{figure}[t!] \subfloat[]{\includegraphics[scale=.43]{./chap3/SNR_C_Theta_1.png}} \hspace{10pt} \subfloat[]{\includegraphics[scale=.60]{./chap3/SNR_C_Theta_2.png}} \caption{Fig~(a) shows the SNR map of $C(\vec{\theta})$ and Fig~(b) highlights (black) the regions with SNR $<5$.}\label{fig:SNR} \end{figure} We generate 128 realizations of the moment one map and use the above parameters to estimate $v^{\Omega}_z(\vec{\theta})$. We calculate the mean and standard deviation of these maps. We use the column density power spectrum given in Dutta et al. (2013) and the above realizations of the maps to generate 128 realizations of $C(\vec{\theta})$, along with the maps corresponding to the mean and standard deviation of these realizations. 
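The shapelet-scale selection described above (choosing the scale beyond which all three power spectra have dropped below $1\%$ of their peak values) can be sketched as a simple threshold search. The steeply falling spectra below are hypothetical stand-ins for the measured curves, not the actual data.

```python
import numpy as np

def shapelet_scale(inv_scale, spectra, frac=0.01):
    """Return the largest inverse angular scale at which any of the
    input 1D power spectra still exceeds `frac` of its own peak;
    beyond this scale all large-scale components (rotation curve,
    inclination and position angles) are negligible."""
    above = np.zeros_like(inv_scale, dtype=bool)
    for p in spectra:
        above |= (p >= frac * np.max(p))
    return inv_scale[above].max()

# Hypothetical power-law spectra standing in for the three measured curves
k = np.linspace(1.0, 100.0, 200)
specs = [k**-4.0, k**-3.5, k**-3.0]
k_cut = shapelet_scale(k, specs)  # set by the shallowest spectrum
```

The inverse of `k_cut` would then play the role of the 157 arcsec shapelet scale $\eta$.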
Figure~\ref{fig:SNR} (a) shows the signal to noise ratio (SNR) of $C(\vec{\theta})$, while Figure~\ref{fig:SNR} (b) shows in black the pixels in the SNR map with SNR$<$5. Clearly, most of the map has a signal to noise ratio greater than 5, suggesting that the estimated $C(\vec{\theta})$ is statistically significant. \begin{figure}[t!] \begin{center} \includegraphics[scale=.55]{./chap3/Vel_PS.png} \caption{Estimated (solid black) $P_{\chi}(\vec{U})$ of the NGC 6946 galaxy and the best fit model $P^M_{\chi}(\vec{U})$ (dashed black) for a power law model of the velocity power spectrum.}\label{fig:PS_curve} \end{center} \end{figure} \subsection{Estimation of velocity power spectrum for NGC~6946} We estimate $|V_1(\vec{U})|^2$ on a uniform grid in the u-v plane from the measured visibility data, following the procedure discussed in section 3.2.3. The Fourier transform of $C(\vec{\theta})$ is estimated on the same u-v grid and subtracted from the $|V_1(\vec{U})|^2$ values. The resultant quantity is expected to be statistically homogeneous and isotropic. We estimate $P_{\chi}(\vec{U})$ by annular averaging of these values for each of the 128 realizations. The mean and standard deviation of the measured $P_{\chi}(\vec{U})$ over these realizations are plotted in Figure~\ref{fig:PS_curve} with a solid black line. The estimated $P_{\chi}(\vec{U})$ can be used for the measurement of the velocity power spectrum. For a thin disk galaxy, $P_{\chi}(\vec{U})$ is a two dimensional convolution of the column density power spectrum with the power spectrum of the line of sight component of the turbulent velocity. As mentioned before, we use a two parameter model for the velocity power spectrum and try to find the best fit values of the parameters. 
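The annular averaging and two-parameter model fit described above can be sketched as follows; a power-law form (as adopted for the measured spectrum) and a synthetic isotropic gridded field standing in for the $|V_1|^2 - \tilde{C}$ values are used purely for illustration.

```python
import numpy as np

def annular_average(grid, du=1.0, n_bins=16):
    """Azimuthally average a 2D u-v plane quantity into annular bins
    of |U|; return bin-centre baselines and bin means."""
    n = grid.shape[0]
    u = (np.arange(n) - n // 2) * du
    uu, vv = np.meshgrid(u, u)
    r = np.hypot(uu, vv)
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([grid[(r >= lo) & (r < hi)].mean()
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return centres, means

def fit_power_law(U, P):
    """Least-squares fit of log P = log A + beta * log U."""
    beta, logA = np.polyfit(np.log(U), np.log(P), 1)
    return np.exp(logA), beta

# Synthetic isotropic power-law field P(U) = A U^beta, A = 10, beta = -3
n = 128
u = (np.arange(n) - n // 2).astype(float)
uu, vv = np.meshgrid(u, u)
r = np.hypot(uu, vv)
r[n // 2, n // 2] = 1.0        # regularize the origin pixel
field = 10.0 * r**-3.0
U, P = annular_average(field)
# drop the innermost bin, which contains the regularized origin pixel
A_fit, beta_fit = fit_power_law(U[1:], P[1:])
```

In the actual analysis this fit is repeated for each of the 128 realizations, and the mean and scatter of the best fit parameters over realizations give the quoted values and errors.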
An obvious choice for the functional form of the velocity power spectrum is a power law, \begin{equation*} P^T_{M}(\vec{U})=A_{v}\hspace{2.5pt}U^{\beta}. \end{equation*} For each realization of $P_{\chi}(\vec{U})$, we estimate the best fit values of the parameters $A_{v}$ and $\beta$ and calculate their mean and standard deviation over the realizations. The dashed line in Figure~\ref{fig:PS_curve} shows the mean of the best fit model $P^M_{\chi}(\vec{U})$. The best fit parameter values, in the units of the power spectrum we use, are $A_{v} =(270.\pm 30.)\times 10^6$ [km sec$^{-1}$]$^{2}$ and $\beta=-3.1\pm 0.4$. \section{Discussion and Conclusion} We adopt a distance to the galaxy NGC~6946 of $5.5$ Mpc (Walter et al. 2008). Dutta et al. (2013) have estimated the column density power spectrum of this galaxy, where they found that for baselines less than $1.5$ k$\lambda$ the window function effect dominates. At these baselines and lower, we also have a relatively small number of independent realizations of the estimates of the power spectrum and hence sample variance dominates. We therefore restrict our interpretation to baseline values between $1.5$ k$\lambda$ and $10.$ k$\lambda$, that is, the range over which the column density power spectrum was measured. These baselines correspond to a length scale range of $300$ pc to $4$ kpc at the galaxy. Based on the amplitude of the velocity spectrum, at $4$ kpc we see velocity fluctuations of magnitude $9$ km sec$^{-1}$, whereas at $300$ pc they are as low as $1.2$ km sec$^{-1}$. The fluctuations at the largest scales are comparable to the typical vertical velocity dispersion in spiral galaxies (Lewis 1984). At the smallest length scales, the velocity fluctuations are comparable to the thermal velocity fluctuations of the cold neutral medium. 
\citet{2009AJ....137.4424T} have estimated the velocity dispersion of the galaxy NGC~6946 (amongst other galaxies), where they find a median velocity dispersion of about $10.1$ km sec$^{-1}$, consistent with the amplitude of the velocity fluctuations we measure here. They estimated the variation of the velocity dispersion as a function of radius and found that it decreases with increasing galactocentric radius. They compared the velocity dispersion with the star formation rate and argue that within a radius of $8$ kpc, where all the star formation happens, feedback from star formation is sufficient to maintain the observed dispersion. At larger radii, they conclude, other mechanisms like the magneto-rotational instability (MRI) may be driving the velocity dispersion. Our result suggests large scale correlated velocity fluctuations at scales at least comparable to the star forming disk. \citet{2009ApJ...692..364F} investigated the effect of solenoidal and compressive forcing on compressible fluid turbulence. They found that the resultant velocity spectrum (in our definition) has a slope of $-2.89$ for solenoidal forcing, while for compressive forcing the slope is $-3.03$. Though our measured slope of $-3.1 \pm 0.4$ slightly favours compressive forcing, it cannot completely rule out the other. If compressive forcing is indeed favoured, MRI would not be an effective mechanism, as the associated forcing would be more solenoidal than compressive in nature. A possible forcing may arise from the rotational instability caused by density fluctuations in the dark matter halo that hosts the galaxy. We would like to investigate along these lines in the future. \chapter{Discussion and Future Scope} In this thesis we investigated the statistical properties of large scale density and velocity fluctuations from radio interferometric observations of the neutral hydrogen in external spiral galaxies. 
We compared different estimators of the column density fluctuations using numerical simulations. We implemented the visibility moment estimator of the velocity power spectrum and estimated the power spectrum of the line of sight velocity fluctuations of the spiral galaxy NGC~6946. In the first part of our work, we checked the efficacy of the visibility and image based power spectrum estimators using simulated observations of a model galaxy. We found that the image based power spectrum estimator has a noise bias which creates a deviation from the true power spectrum. The visibility based power spectrum estimator, on the other hand, recovers the true power spectrum without any bias. The reason for the deviation is the sparse baseline coverage: the noise bias arising from the incomplete deconvolution is correlated with the baseline distribution. Hence, the image based power spectrum estimator must be avoided for radio interferometers with incomplete baseline coverage. It is important to note that it is not possible to estimate locally averaged quantities directly from the visibilities; image reconstruction is necessary. However, the mean of the deconvolution noise in the image is zero, and the locally averaged quantities we estimate from the reconstructed image do not have any significant bias. We demonstrated this by inspecting the azimuthally averaged window function. We compared the column density power spectra of 18 spiral galaxies from the THINGS sample between the image based and visibility based power spectrum estimators. We found that the power spectrum estimated from the reconstructed images systematically follows a steeper slope. This suggests that statistical studies of the radio sky require recording of the visibilities from the observations. For future telescopes like the Square Kilometre Array (SKA) it is often argued that, because of the huge data rate, only images reconstructed in real time from the visibilities will be recorded. 
However, our result points out the importance of recording the visibilities. We implemented the visibility moment estimator and measured the turbulent velocity power spectrum of the spiral galaxy NGC~6946. A power law model spectrum offers a statistically significant fit to the data, suggesting scale invariant velocity fluctuations at length scales as large as 4 kpc. Scale invariant density and velocity fluctuations for this galaxy at these scales strongly suggest turbulent dynamics. We found that the amplitude of the fluctuations at 4 kpc is about $9$ km sec$^{-1}$, consistent with the velocity dispersion of the \rm HI gas. The rather large errors associated with our measurement of the slope of the spectrum do not allow us to comment strongly on the nature of the driving mechanism, though it weakly favours compressive forcing at large scales. We speculate that this may be the result of density fluctuations coupled by gravity rather than MRI driven fluctuations. {\bf Nevertheless, this thesis presents the first ever measurement of the power spectrum of the line of sight velocity fluctuations of any external spiral galaxy (NGC~6946 here) and strongly indicates the presence of large scale turbulence.} The limitations in pinning down the nature of the driving force in this work arise mainly from the measurement noise. This can be improved straightforwardly by increasing both the observing time and the baseline coverage. In practice, however, the THINGS survey already combines the B, C and D arrays of the VLA and has a significantly long integration time. The importance of the science objective here would surely drive future observations to improve on the data. Also, observations from other radio telescopes like the GMRT, WSRT etc.\ can be combined with the VLA data to improve the baseline coverage. In this work we used a parametric estimation of the velocity power spectrum. 
However, in principle it is possible to perform a direct deconvolution of the velocity spectrum from $P_{\chi}$, using well established algorithms like the Richardson-Lucy deconvolution. Finally, we note that we plan to estimate the velocity power spectra of a sample of spiral galaxies (THINGS has four more galaxies in the right inclination angle window) in the future, to assess the true nature of the large scale fluctuations in more detail. \chapter*{} \head{List of Symbols or Abbreviations} \vspace{-1.cm} \begin{center} {\bf \Large{ Abbreviations}} \end{center} \begin{center} \begin{tabular}{lcccl} \sline{\large {\bf Acronym }}{\large {\bf Full form }} \hline \hline & & & & \\ \sline{H~{\sc i}\, }{Neutral hydrogen} & & & & \\ \sline{ISM}{Interstellar Medium} & & & & \\ \sline{SFR}{Star Formation Rate} & & & & \\ \sline{NGC}{New General Catalogue} & & & & \\ \sline{AIPS}{Astronomical Image Processing System} & & & & \\ \sline{VLA}{Very Large Array} & & & & \\ \sline{GMRT}{Giant Metrewave Radio Telescope} & & & & \\ \sline{WSRT}{Westerbork Synthesis Radio Telescope} & & & & \\ \sline{THINGS}{The HI Nearby Galaxy Survey} \hline \end{tabular} \end{center} \pagebreak \begin{center} \begin{tabular}{lcccl} \sline{\large { \bf Symbols }}{\large {\bf Definitions }} \hline \hline & & & & \\ \sline{$\mbox{$\vec{U}$}$}{Baseline in kilo wavelength} & & & & \\ \sline{$\nu$}{Observing frequency in MHz} & & & & \\ \sline{$\mbox{$\vec{\theta}$}$}{Angle in the sky measured} \sline{}{from the centre of the galaxy} & & & & \\ \sline{$\vec{r}$}{Radial vector from galactic center} & & & & \\ \sline{${\mathcal{V}}(\mbox{$\vec{U}$}, \nu)$ or ${\mathcal{V}}(\mbox{$\vec{U}$})$}{Visibility} & & & & \\ \sline{$I(\mbox{$\vec{\theta}$}, \nu)$}{Specific Intensity} & & & & \\ \sline{$W(\mbox{$\vec{\theta}$})$}{Window function} & & & & \\ \sline{$\mathcal{N}(\mbox{$\vec{U}$}, \nu)$}{Noise at baseline $\mbox{$\vec{U}$}$ and frequency $\nu$} & & & & \\ \sline{$P_{A}(\mbox{$\vec{U}$})$}{Power 
spectrum of A} & & & & \\ \sline{$M_0(\mbox{$\vec{\theta}$})$}{Zeroth moment of intensity} & & & & \\ \sline{$M_1(\mbox{$\vec{\theta}$})$}{First moment of intensity} & & & & \\ \sline{$V_0(\mbox{$\vec{U}$})$}{Zeroth moment of visibility} & & & & \\ \sline{$V_1(\mbox{$\vec{U}$})$}{First moment of visibility} & & & & \\ \sline{$v_z(\vec{r})$}{Line of sight component of velocity} & & & & \\ \sline{$v^T_z(\vec{r})$}{Line of sight component of turbulent velocity} & & & & \\ \sline{$v^{\Omega}_z(\vec{r})$}{Line of sight component of} \sline{}{systematic rotational velocity} \hline \end{tabular} \end{center} \end{singlespacing} \thispagestyle{empty}
\section*{Acknowledgments} M.T. would like to thank Professor Daniel DeBra and Dr. Sasha Buchman for many stimulating conversations about the gLISA mission concept, Dr. John W. Armstrong for reading the manuscript and his valuable comments, and Drs. Anthony Freeman and Daniel McCleese for their constant encouragement. M.T. also acknowledges financial support through the Topic Research and Technology Development program of the Jet Propulsion Laboratory. J.C.N.A. acknowledges partial support from FAPESP (2013/26258-4) and CNPq (308983/2013-0). This research was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\section{Introduction} \vspace{-0.5cm} In \cite{Se17}, we reported the detection of an extended gamma-ray source, Source A, located $\sim$0$^{\circ}\!\!$.6 south-west of the supernova remnant (SNR) G306.3$-$0.9. Assuming Source A to be a point-like gamma-ray source, we detected it with a significance of $\sim$9.7$\sigma$ (TS\footnote{Test Statistic (TS) values quantify how strongly the data reject the null hypothesis (the maximum likelihood value for a model without the additional source). The square root of the TS gives the detection significance of a source.}$\sim$94). Its best-fitted location was found to be R.A. (J2000) = 13$^{\rm{h}}$ 17$^{\rm{m}}$ 52$^{\rm{s}\!\!}$.80, Decl. (J2000) = $-$63$^{\circ}$ 55$'$ 48$''\!\!$.00. When the extended gamma-ray emission was fit with a disk-like extension model, the extension radius was measured to be 0$^{\circ}\!\!$.73 $\pm$ 0$^{\circ}\!\!$.07. As an extended source, the total significance was found to be $\sim$13$\sigma$ and, assuming a power-law (PL) spectrum, we obtained $\Gamma$ = 2.1 and an energy flux of (2.07 $\pm$ 0.2) $\times$ 10$^{-5}$ MeV cm$^{-2}$ s$^{-1}$. In this paper we summarize the results of our {\it Swift} ToO observations of Source A in Section 2. Following the ToO observations, we extended the gamma-ray analysis by using 3 more months of {\it Fermi}-LAT data than in our previous analysis \cite{Se17}. In Section 3, we describe the gamma-ray analysis and give its preliminary results. In Section 4, we present the conclusions and give an outlook. \vspace{-0.5cm} \section{{\it Swift} ToO Observations \& Results} \vspace{-0.5cm} To unravel the nature of the extended unidentified Fermi gamma-ray source, Source A, found near the SNR G306.3$-$0.9 \cite{Se17}, two {\it Swift} ToO observations (IDs: 00010121001, 00010151001, 00010151002) were successfully completed in May and June 2017. We had 5.6 ks of effective exposure for the May observations and 5 ks for the June observations. 
Four new X-ray sources were discovered, which we named SrcA/Src1, SrcB/Src2, SrcC/Src3, and SrcD/Src4 in our initial analyses. The June ToO was centered on SrcB, because this was found to be the brightest X-ray source. Except for SrcC, all {\it Swift} XRT sources are within the 5$\sigma$ contour level of Source A. \begin{wraptable}{r}{0.55\textwidth} \vspace{-0.7cm} \caption{ X-ray point sources observed by {\it Swift} XRT in two ToO observations.} \vspace{0.2cm} \begin{tabular}{@{}lccccc@{}} \hline \hline Name & RA (deg) & Dec. (deg) & Exposure (ks) \\ \hline SrcA/Src1 & 199.331 & -63.882 &10.6 \\ SrcB/Src2 & 199.742 & -63.954 & 10.6 \\ SrcC/Src3 & 199.579 & -63.781 &10.6 \\ SrcD/Src4 & 198.964 & -63.905 & 10.6 \\ \hline \end{tabular} \label{table_1} \vspace{-0.3cm} \end{wraptable} The locations of these X-ray sources are given in Table \ref{table_1}. The {\it Swift} XRT sources are about 10$'$ away from each other, so there is likely no physical connection between them: the separation is too large for any distance $>$0.1 kpc. Here are the details of the initial {\it Swift} analysis results and the multi-wavelength aspects of each of these four X-ray sources: \vspace{-0.3cm} \begin{itemize} \item {\bf SrcA (Src1):} This is the closest {\it Swift} XRT source to Source A. SrcA has an optical counterpart classified as a star in the Guide Star Catalog 2.3\footnote {http://gsss.stsci.edu/Catalogs/GSC/GSC2/GSC2.htm}, which is 4$''$ (S7KT124582) away from SrcA. Another close optical source (S7KT125052) is 6$''$ away from SrcA. SrcA might be a binary system, because a separation of $\sim$6$''$ amounts to $\sim$6000 AU at a distance of 1 kpc. SrcA has no radio counterpart. \vspace{-0.3cm} \item {\bf SrcB (Src2):} The flux of SrcB changed by a factor of 4 within 3 weeks, implying that this source is variable. 
However, there is no observed optical or radio counterpart for this source.\vspace{-0.3cm} \item {\bf SrcC (Src3):} This source has a very soft spectrum, and since its position is outside the 5$\sigma$ gamma-ray contours of Source A, we assume that it is not directly related to Source A. \vspace{-0.3cm} \item {\bf SrcD (Src4):} This source was found in the {\it Swift} XRT data while searching for radio counterparts of SrcA, SrcB, and SrcC in the Sydney University Molonglo Sky Survey (SUMSS) data. We found a SUMSS radio counterpart for SrcD at 843 MHz that also overlaps with PMN J1315-6354 (R.A. (J2000) = 13$^{\rm{h}}$ 15$^{\rm{m}}$ 52$^{\rm{s}\!\!}$.9, Decl. (J2000) = $-$63$^{\circ}$ 54$'$ 40$''$) from the Parkes-MIT-NRAO (PMN) 4.85 GHz Surveys catalog \cite{Wr94}. \end{itemize} \vspace{-0.2cm} X-ray spectra of all {\it Swift} XRT sources are shown in Figure \ref{figure_1}. For SrcA, SrcB, and SrcD we assumed the absorbing hydrogen column density to be 1.7 $\times$ 10$^{22}$ cm$^{-2}$, the same as that of G306.3$-$0.9, to calculate the absorption corrected flux in the 0.5-10 keV energy range. The hydrogen column density for SrcC was assumed to be zero to adequately fit its very soft X-ray spectrum. The absorption corrected flux values for SrcA, SrcB, SrcC, and SrcD are calculated to be $\sim$3.2 $\times$ 10$^{-11}$, $\sim$4.5 $\times$ 10$^{-13}$, $\sim$2.5 $\times$ 10$^{-15}$, and $\sim$3.3 $\times$ 10$^{-13}$ ergs cm$^{-2}$ s$^{-1}$, respectively. Since SrcB is the brightest of the four detected {\it Swift} XRT sources, we calculated its luminosity to be $\sim$6 $\times$ 10 $^{31}$ erg s$^{-1}$ at a distance of 1 kpc. 
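The luminosity quoted above for SrcB follows from the standard isotropic conversion $L = 4\pi d^{2} F$; a quick numerical check, using the measured flux and the assumed 1 kpc distance:

```python
import math

PC_CM = 3.0857e18  # one parsec in centimetres

def luminosity(flux_cgs, distance_kpc):
    """Isotropic luminosity (erg/s) from an absorption-corrected flux
    (erg/cm^2/s) at a given distance (kpc): L = 4 pi d^2 F."""
    d_cm = distance_kpc * 1.0e3 * PC_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# SrcB: F ~ 4.5e-13 erg/cm^2/s at an assumed distance of 1 kpc
L_srcB = luminosity(4.5e-13, 1.0)
# ~5e31 erg/s, consistent with the ~6x10^31 erg/s quoted in the text
```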
\begin{wrapfigure}{r}{0.47\textwidth} \vspace{-27pt} \begin{center} \includegraphics[width=0.47\textwidth]{fig1.png} \vspace{-26pt} \caption{\footnotesize{The gamma-ray TS map of the region around Source A, computed with Source A not included in the background model.}} \label{figure_1a} \end{center} \vspace{-27pt} \end{wrapfigure} We checked the 3rd Fermi-LAT source catalog (3FGL) \cite{Ac15} and the 3rd catalog of hard Fermi-LAT sources (3FHL) \cite{Ac17} for possible counterparts of Source A and the associated X-ray sources. In Figure \ref{figure_1a} the {\it Swift} X-ray sources are shown with black markers. Fermi-LAT sources from the 3FGL catalog are shown with red markers, and sources from the 3FHL catalog with cyan markers. The extended 3FGL source (3FGL J1303.0-6312e) was also reported in the 3FHL catalog, and its extension is shown as a blue circle. The green contours show the gamma-ray TS values (25, 36, 49). We could not find any counterpart of Source A in the 3FGL and 3FHL catalogs. The white dashed circle shows the radio source PMN J1315-6354. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{fig2.jpg} \includegraphics[width=0.4\textwidth]{fig3.jpg} \includegraphics[width=0.4\textwidth]{fig4.jpg} \includegraphics[width=0.4\textwidth]{fig5.jpg} \vspace{-0.2cm} \caption{ \footnotesize{{\it Swift} XRT spectra of SrcA (upper-left panel), SrcB (upper-right panel), SrcC (lower-left panel), and SrcD (lower-right panel). The photon indices were allowed to vary freely and were found to be 8.6 for SrcA, 1.7 for SrcB, 5.9 for SrcC, and 2.8 for SrcD.}} \label{figure_1} \vspace{-0.5cm} \end{figure} \vspace{-0.5cm} \section{Analysis \& Results of Gamma-ray Data} \vspace{-0.5cm} After discovering the X-ray sources within the extent of Source A, we re-analyzed the gamma-ray data. We used data taken between 2008-08-04 and 2017-06-30. 
We analyzed events within the energy range of 200 MeV - 300 GeV using the Fermi analysis toolkit \texttt{fermipy}\footnote{http://fermipy.readthedocs.io/en/latest/index.html}. We selected Fermi-LAT Pass 8 `Source' class and front+back type events coming from zenith angles smaller than 90$^{\circ}$ and from within a circular region of interest (ROI) with a radius of 20$^{\circ}$ centered at the best-fit position of Source A. The maximum likelihood fitting method was employed on the spatially and spectrally binned data, using the instrument response function P8R2$_{-}$SOURCE$_{-}\!\!$V6. The gamma-ray background model contains the Galactic diffuse emission ({\it gll$_{-}$iem$_{-}\!$v6.fits}) and the isotropic component ({\it iso$_{-}$P8R2$_{-}$SOURCE$_{-}\!\!$V6$_{-}\!$v06.txt}). It also includes all point-like and extended sources from the 3rd Fermi-LAT Source Catalog located within a 15$^{\circ}\times$15$^{\circ}$ region centered at the ROI center. We freed the normalization parameters of sources within 3$^{\circ}$ of the ROI center, as well as all parameters of the diffuse Galactic emission and the isotropic component. All sources with TS $>$ 10 were set free, while all sources with TS $<$ 10 were fixed. The analysis region shown by the 10$^{\circ}\times$10$^{\circ}$ TS map covers a very large area of the sky, and Source A seems to show some sub-structure, but the X-ray sources are concentrated around the best-fitted location of Source A. In order to clarify how much each X-ray source contributes to Source A's total gamma-ray emission, we added each of the X-ray sources one by one as point-like sources to the gamma-ray background model. We then checked the significances and produced a TS map for every version of the gamma-ray background model. Since SrcC is located outside the 5$\sigma$ contours of Source A, we assumed that it is not part of Source A and excluded it from the gamma-ray analysis. 
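The event selection described above (200 MeV - 300 GeV, zenith angle $<$ 90$^{\circ}$, 20$^{\circ}$ ROI) can be sketched as a simple filter. This is a hypothetical illustration only: the array names and the precomputed angular separation are assumptions, not the actual Fermi-LAT event file columns or the \texttt{fermipy} selection machinery.

```python
import numpy as np

def select_events(energy_mev, zenith_deg, sep_deg,
                  emin=200.0, emax=3.0e5, zmax=90.0, roi_radius=20.0):
    """Boolean mask implementing the quoted cuts: energy in
    [200 MeV, 300 GeV], zenith angle below 90 deg, and angular
    separation from the ROI centre below 20 deg."""
    return ((energy_mev >= emin) & (energy_mev <= emax)
            & (zenith_deg < zmax) & (sep_deg < roi_radius))

# Hypothetical event list: energies (MeV), zenith angles and
# separations from the ROI centre (deg)
e = np.array([150.0, 500.0, 1.0e4, 4.0e5])
z = np.array([30.0, 95.0, 45.0, 20.0])
s = np.array([5.0, 5.0, 10.0, 2.0])
mask = select_events(e, z, s)  # only the third event passes all cuts
```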
\begin{table} \begin{minipage}{150mm} \begin{center} \caption{The maximum likelihood analysis results for different combinations of {\it Swift} XRT sources and G306.3$-$0.9 included in the gamma-ray background model.} \vspace{-0.2cm} \begin{tabular}{@{}lccccc@{}} \hline \hline Included Source Names & TS (SrcA) & TS (SrcB) & TS (SrcD) & TS (G306.3$-$0.9) \\ \hline G306.3$-$0.9 & - & - & - & 55.48\\ SrcA \& G306.3$-$0.9 & 128.92 & - & - & 14.42\\ SrcB \& G306.3$-$0.9 & - & 139.10 & - & 6.14\\ SrcD \& G306.3$-$0.9 & - & - & 111.89 & 27.50 \\ SrcA \& SrcD \& G306.3$-$0.9 & 89.29 & - & 24.26 & 6.08\\ SrcA \& SrcB \& G306.3$-$0.9 & 42.57 & 46.86 & - & 8.15\\ SrcB \& SrcD \& G306.3$-$0.9 & - & 74.09 & 50.59 & 9.13\\ SrcA \& SrcB \& SrcD \& G306.3$-$0.9 & 2.52 & 90.81 & 43.27 & 3.73\\ \hline \end{tabular} \label{table_2} \end{center} \end{minipage} \vspace{-0.3cm} \end{table} \begin{figure} \centering \includegraphics[width=0.40\textwidth]{fig6.png} \includegraphics[width=0.40\textwidth]{fig7.png} \vspace{-0.2cm} \caption{ \footnotesize{The TS maps for two different gamma-ray background models. Left Panel: Including SrcA, SrcB, SrcD, and G306.3$-$0.9 in the background model. Right Panel: Including SrcB, SrcD, and G306.3$-$0.9 in the background model. On both panels, the magenta significance contours of 5, 6, and 7$\sigma$ are taken from Figure \ref{figure_1a}. All sources added to the background model are shown with yellow crosses. Green contours represent the 5$\sigma$ significance level obtained after the background model is fit to the data.}} \label{figure_2} \vspace{-0.5cm} \end{figure} We found that the source combination 'SrcB \& SrcD \& G306.3$-$0.9' removes all excess gamma-ray emission from the region around the best-fitted position of Source A. 
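As noted in the footnote to the Introduction, the square root of the TS value approximates the detection significance; applying this to the final-model row of Table \ref{table_2}:

```python
import math

# TS values for SrcB and SrcD from the combined
# 'SrcA & SrcB & SrcD & G306.3-0.9' model row of Table 2
ts_values = {"SrcB": 90.81, "SrcD": 43.27}
sigma = {name: math.sqrt(ts) for name, ts in ts_values.items()}
# SrcB: ~9.5 sigma, SrcD: ~6.6 sigma, matching the ~9 sigma and
# ~7 sigma detections quoted in the conclusions
```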
Although the 'SrcA \& SrcB \& SrcD \& G306.3$-$0.9' source combination also gives results comparable to the 'SrcB \& SrcD \& G306.3$-$0.9' combination, the TS value of SrcA comes out as only 2.52 in the former analysis. The TS map with SrcA, SrcB, SrcD, and G306.3$-$0.9 included in the gamma-ray background model is shown in the left panel of Figure \ref{figure_2}, and the right panel of the same figure shows the TS map for including only SrcB, SrcD, and G306.3$-$0.9 in the gamma-ray background model. A PL spectral fit to the spectra of SrcB and SrcD gives spectral indices of 2.47 $\pm$ 0.12 and 2.35 $\pm$ 0.14, respectively, and total energy fluxes of (5.9 $\pm$ 1.1) $\times$ 10$^{-6}$ MeV cm$^{-2}$ s$^{-1}$ and (4.6 $\pm$ 1.0) $\times$ 10$^{-6}$ MeV cm$^{-2}$ s$^{-1}$, respectively. The spectra of SrcB and SrcD are shown in Figure \ref{figure_3}. \begin{figure} \vspace{-0.3cm} \centering \includegraphics[width=0.45\textwidth]{fig8.png} \includegraphics[width=0.45\textwidth]{fig9.png} \vspace{-0.2cm} \caption{ \footnotesize{The SEDs of SrcB (left panel) and SrcD (right panel), assuming a PL-type spectrum in the energy range of 0.2$-$300 GeV for both sources. The central solid black line and grey band represent the best-fitted PL model and its statistical errors. }} \label{figure_3} \vspace{-0.3cm} \end{figure} To study the long term variability of SrcB and SrcD, we applied Fermi-LAT aperture photometry, taking data from a circular region of radius 1$^{\circ}$ around the best-fit position of each source. For each source we applied the barycenter correction to the data. We also applied event weighting to keep only events with a probability of $>$ 10\% of being from SrcB/SrcD; higher probability thresholds decrease the number of events abruptly. The 1-month-binned light curves of SrcB and SrcD, assuming a PL-type spectrum in the energy range of 0.2 - 300 GeV, are shown in the left and right panels of Figure \ref{figure_4}. 
\begin{figure} \vspace{-0.3cm} \centering \includegraphics[width=0.45\textwidth]{fig10.png} \includegraphics[width=0.45\textwidth]{fig11.png} \vspace{-0.2cm} \caption{ \footnotesize{The 1-month-binned gamma-ray light curves of SrcB (left panel) and SrcD (right panel), assuming a PL-type spectrum in the energy range of 0.2$-$300 GeV for both sources. The blue line shows the mean value. The dashed magenta and solid red lines represent the 1$\sigma$ and 3$\sigma$ significance levels, respectively.}} \label{figure_4} \vspace{-0.3cm} \end{figure} \vspace{-0.5cm} \section{Conclusions \& Outlook} \vspace{-0.5cm} We analyzed the GeV gamma-ray data including the recently detected {\it Swift} XRT sources and the SNR G306.3$-$0.9 in the gamma-ray background model as point-source templates. Comparing the analysis results for the case where the 'SrcA \& SrcB \& SrcD \& G306.3$-$0.9' source combination was part of the background model with those for the 'SrcB \& SrcD \& G306.3$-$0.9' combination, we found comparable results. However, judging from the excess in the TS maps, the 'SrcB \& SrcD \& G306.3$-$0.9' source combination is favored. Possible source-type scenarios for SrcA, SrcB, and SrcD are as follows: \vspace{-0.3cm} \begin{itemize} \item SrcA has an optical counterpart, which is a star. It could be a gamma-ray binary; if so, its variability has to be observed at various wavelengths. However, it is not detected in gamma rays (TS=2.52). \vspace{-0.3cm} \item SrcB is the brightest X-ray source among all the {\it Swift} XRT sources. It has no optical or radio counterpart, but was detected in gamma rays with a significance of $\sim$9$\sigma$. This source is possibly a quasar with very weak radio emission, for which the SUMSS sensitivity limit is probably insufficient. Optical and radio observations are needed to confirm this hypothesis. \vspace{-0.3cm} \item SrcD has a radio counterpart found in SUMSS. 
It was detected in gamma rays with a significance of $\sim$7$\sigma$. It may be a blazar candidate, but more observations are needed to establish its variability in different wave-bands. \end{itemize} \vspace{-0.3cm} As a next step, we plan multi-wavelength observations (radio, optical, and X-rays) of SrcB and SrcD. In addition, we will re-analyze the gamma-ray data to perform more variability checks and to investigate the energy-dependent source morphology. \vspace{-0.5cm}
\section{Introduction} A vertex algebra describes the symmetries of a two-dimensional conformal field theory, while a factorization algebra over a complex curve consists of local data in such a field theory. Roughly, the factorization structure encodes collisions between local operators. The two perspectives are equivalent, in the sense that, given a fixed open affine curve $X$, the category of vertex algebras over $X$ is equivalent to the category of factorization algebras over $X$ \cite{HL, BD}. In 1999, Borcherds \cite{BorQVA} introduced yet another approach to studying vertex algebras: given some input data $(A, H, S)$, he introduces a category equipped with a ``singular'' tensor product, and he defines $(A, H, S)$-vertex algebras to be commutative ring objects in this category. For a particular choice of input data, $(A, H, S_B)$, he proves that these singular ring objects can be used to produce examples of vertex algebras. An advantage of this approach is that once one understands the definition of the category, one needs to work only with ring axioms rather than with the less familiar axioms of vertex algebras. However, at the time of Borcherds's paper, it was not clear whether all vertex algebras could be realized via this approach. In this paper, we prove that they can, if we modify the input data appropriately. More generally, we address the question of how closely the categories of $(A, H, S)$-vertex algebras and ordinary vertex algebras (or factorization algebras, or equivalently chiral algebras) are related. \begin{thm}\label{Theorem: Borcherds doesn't give an equivalence}[Theorem \ref{Theorem: Borcherds doesn't give an equivalence, actual}] The functor $\Phi$ from the category $\VA(A, H, S_B)$ of $(A,H,S_B)$-vertex algebras to the category $\VA$ of ordinary vertex algebras (arising from Borcherds's constructions) is not an equivalence. \end{thm} While this is a negative result, we can also obtain positive results by modifying our input data. 
\begin{thm}\label{Theorem: main results on X}[Theorems \ref{Theorem: AHS to vertex algebra}, \ref{Theorem: AHS to chiral algebra}, \ref{Theorem: Factorization to AHS}, \ref{Theorem: composition}, \ref{Theorem: Borcherds doesn't give an equivalence, actual}] Given $X$ an open affine subset of $\BBA^1_{\BC}$, there exist input data $(A, H, S_X)$ such that we have \begin{itemize} \item a functor $\Phi_X$ from the category $\VA(A, H, S_X)$ of $(A, H, S_X)$-vertex algebras to the category $\VA(X)$ of vertex algebras on $X$, and \item a functor $\Gamma_X$ from the category $\FA(X)$ of factorization algebras on $X$ to the category $\VA(A, H, S_X)$, \end{itemize} such that $\Gamma_X$ is a section (i.e. a right inverse) of $\Phi_X$ in the following sense. Given a vertex algebra $V$ over $X$, let $\CB^V$ denote the corresponding factorization algebra over $X$. Then $\Phi_X \circ \Gamma_X (\CB^V) = V$. In particular, $\Phi_X$ is essentially surjective. However, $\Phi_X$ fails to be an equivalence: distinct $(A, H, S_X)$-vertex algebras may give rise to isomorphic vertex algebras over $X$. Furthermore, the composition of the functor $\Phi_X$ with the well-studied equivalence $\Psi_X$ (of \cite{BD}) from $\VA(X)$ to the category $\CAlg(X)$ of chiral algebras on $X$ can be described explicitly, without reference to the intermediate category $\VA(X)$. \end{thm} The functors and results of Theorem \ref{Theorem: main results on X} extend to the translation-equivariant setting, which allows us to use the equivalence $\Xi^{\VA}$ between $\VA$ and the category $\VA(\BBA^1)^{\BBA^1}$ of translation-equivariant vertex algebras on $X=\BBA^1$ to relate the new functor $\Phi_{\BBA^1}$ to the functor $\Phi$ resulting from Borcherds's original constructions. 
\begin{thm}\label{Theorem: translation-equivariance}[Proposition \ref{Prop: Sb to translation equivariant by tensoring}] There is a functor $\Xi$ from $\VA(A,H, S_B)$-vertex algebras to the category $\VA(A, H, S_{\BBA^1})^{\BBA^1}$ of translation-equivariant $(A, H, S_{\BBA^1})$-vertex algebras, which intertwines the functors $\Phi$ and $\Phi_{\BBA^1}$ via $\Xi^{\VA}$. \end{thm} The relationships between the categories in question can thus be summarized in the following diagram (where $\CAlg(\BBA^1)^{\BBA^1}$, respectively $\FA(\BBA^1)^{\BBA^1}$, denotes the category of translation-equivariant chiral algebras, respectively factorization algebras, on $\BBA^1$): \begin{center} \begin{tikzpicture}[>=angle 90, bij/.style={above,inner sep=0.5pt}] \matrix(b)[matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] {\VA & \VA(\BBA^1)^{\BBA^1} & \CAlg(\BBA^1)^{\BBA^1} & \FA(\BBA^1)^{\BBA^1}.\\ \VA(A, H, S_B) & \VA(A, H, S_{\BBA^1})^{\BBA^1} & & \\}; \path[->, font=\scriptsize] (b-1-1) edge node[above, bij]{$\sim$} node[below]{$\Xi^{\VA}$} (b-1-2) (b-1-2) edge node[above, bij]{$\sim$} (b-1-3) (b-1-3) edge node[above, bij]{$\sim$} (b-1-4) (b-2-1) edge node[below]{$\Xi$} (b-2-2) (b-2-1) edge node[left]{$\Phi$} (b-1-1) (b-2-2) edge node[right]{$\Phi_{\BBA^1}$} (b-1-2) (b-1-4) edge[dashed, bend left=20] node[below right]{$\Gamma_{\BBA^1}$}(b-2-2); \end{tikzpicture} \end{center} Here the arrow corresponding to $\Gamma_{\BBA^1}$ is dashed to remind us that the morphisms in the right-hand part of the diagram are not completely compatible: one does not obtain the identity functor if one begins tracing a clockwise cycle at the bottom vertex $\VA(A, H, S_{\BBA^1})^{\BBA^1}$, but only if one begins at one of the vertices in the top row. 
\subsection{Future directions/applications} In the preprint \cite{Joyce}, Joyce defines vertex algebra structures on homology of moduli spaces using techniques similar to Borcherds's construction of the lattice vertex algebra via the functor $\Phi$. It is exciting that these structures exist, but it is difficult to see where the geometry of the underlying moduli spaces is used in these constructions. The present paper illuminates the relationship between $(A, H, S)$-vertex algebras and factorization algebras, which are more explicitly geometric objects; in future work we will attempt to exploit this relationship to better understand Joyce's results. As suggested by its title, the paper \cite{BorQVA} generalized the definition of $(A, H, S)$-vertex algebras to $(A, H, S)$-\emph{quantum vertex algebras}. There are several approaches to quantum vertex algebras, but Borcherds's approach seems particularly promising, especially in light of the results of the current paper. For example, Anguelova--Bergvelt \cite{AB} compare several approaches, and adopt Borcherds's techniques for constructing examples of quantum vertex algebras, but give as their reason for not framing their axioms in the language of $(A,H,S)$-vertex algebras the observation that \emph{``even for classical vertex algebras it seems not known how to include such basic examples as affine vertex algebras in the $(A, H, S)$-framework''}. However, we now see that the functor $\Gamma_{\BBA^1}$ allows us to view any vertex algebra as an $(A,H,S)$-vertex algebra. In future work, we will adapt the axioms of quantum $(A,H,S)$-vertex algebras to the setting of factorization algebras in a way compatible with the suitable analogue of $\Gamma_{\BBA^1}$, and will study the resulting objects. 
\subsection{Notation and conventions} \label{Subsec: notation} For simplicity, we work over $\BC$ throughout (unless explicitly stated otherwise), although some of the definitions and results admit generalizations to other fields and even rings. (In particular, in \cite{BorQVA}, Borcherds gives the definition of an $(A, H, S)$-vertex algebra over an arbitrary ring.) We work with $X$ an open affine subset of $\BBA^1$, and we fix a choice of global coordinate $x$ on $X$. This allows us to express $\Gamma(X, \CO_X)$ as a localization of $\BC[x]$; it also means that we have a derivation $\frac{d}{dx}$ on $\Gamma(X, \CO_X)$. \subsection{Structure of the paper} The paper is organized as follows. In Section \ref{Section: Background, general} we provide background on vertex algebras, chiral algebras, factorization algebras, and the relationships between them, while in Section \ref{Section: Background, AHS} we review the definitions of $(A, H, S)$-vertex algebras, following \cite{BorQVA}. We also introduce the variations $(A, H, S_X)$ of input data that will be needed for our applications. In Section \ref{Section: from AHS to vertex algebras} we construct the functor $\Phi_X$ from $\VA(A, H, S_X)$ to $\VA(X)$, and in Section \ref{Section: from AHS to chiral} we give an explicit description of the composition of $\Phi_X$ with the well-studied morphism $\Psi_X$ from $\VA(X)$ to $\CAlg(X)$. In Section \ref{Section: from factorization to AHS} we construct the functor $\Gamma_X$ from $\FA(X)$ to $\VA(A, H, S_X)$, and in Section \ref{sec: composition of functors} we prove that $\Gamma_X$ gives a section of $\Phi_X$ in the sense described in Theorem \ref{Theorem: main results on X}. We study the translation-equivariant version of the story in Section \ref{Section: translation-equivariant}. 
Finally, in Section \ref{sec: failure to be an equivalence} we show that $\Phi$ cannot be an equivalence, by studying the specific example of the Virasoro vertex algebra mapping to the rank-one lattice vertex algebra. \subsection{Acknowledgements} I thank Dominic Joyce for introducing me to the paper \cite{BorQVA}, and for providing, via his work \cite{Joyce} on vertex algebra structures on the homology of moduli spaces, the motivation to study the relationship between Borcherds's constructions and factorization algebras. Thanks also to Kobi Kremnizer, Thomas Nevins, and Reimundo Heluani for helpful discussions, and to Yi-Zhi Huang for useful email communications. I am grateful as well to Anna Romanov for comments on a draft of the paper. \section{Background: Vertex algebras, chiral algebras, and factorization algebras}\label{Section: Background, general} In this section, we recall the definitions of the key players: vertex algebras, chiral algebras, and factorization algebras over $X$. Although the definitions of chiral algebras and factorization algebras make sense for more general $X$, we restrict ourselves to the case where $X$ is an open affine subset of $\BBA^1$, which is necessary for the definition of a vertex algebra over $X$. As in Section \ref{Subsec: notation}, we fix a global coordinate $x$ on $X$, which determines a derivation $\frac{d}{dx}$ on $\Gamma(X, \CO_X)$. \begin{defn}\label{Def: vertex algebra} A \emph{vertex algebra} \begin{align*} V=(V, \vac, T, Y(\bullet, z)) \end{align*} consists of the following data: \begin{itemize} \item a vector space $V$ over $\BC$; \item a distinguished element $\vac \in V$, called the \emph{vacuum element}; \item a linear map $T: V \to V$ called the \emph{translation operator}; \item a linear map $Y(\cdot, z): V \otimes V \to V[\![z^{\pm 1} ]\!]$. We write \begin{align*} Y(v, z)u = \sum _{n \in \BZ} v_n (u) z^{-n-1}, \end{align*} where for each $n \in \BZ$, $v_n$ is an endomorphism of $V$. 
\end{itemize} These data are subject to the following axioms: \begin{description} \item[The lower truncation condition] For fixed $v, u \in V$, there exists $N \gg 0$ such that for all $n \ge N$, $v_n u = 0$. \item[The vacuum condition] $Y(\vac, z) = \Id_V$. \item[The creation property] $Y(v, z)\vac = v + \sum_{n \le -2} v_n\vac z^{-n-1}$. \item[The derivative property] $\frac{d}{dz} Y(v,z) = Y(Tv, z)$. \item[The locality condition] This says that for any two $v,u \in V$, the corresponding fields $Y(v, z)$ and $Y(u, w)$ are \emph{mutually local}. More explicitly, for any third vector $a \in V$, there exists some $N \gg 0$ such that \begin{align*} (z-w)^N Y(v, z) Y(u, w) a = (z-w)^N Y(u, w) Y (v, z) a \in V[\![ z^{\pm1}, w^{\pm 1}]\!]. \end{align*} \end{description} A \emph{morphism} of vertex algebras is a linear map preserving all of the above structure. \end{defn} \begin{eg}\label{Example: lattice VOA} An important example for us will be the so-called \emph{lattice vertex algebra}, $V_L$. Let us fix some notation for this example, which will be necessary for computations later on; we follow the notation of Section 5.4 of \cite{FBZ}. Fix $L$ an even integral lattice, with $(\cdot, \cdot): L \times L \to \BZ$ its positive definite symmetric bilinear form. We let $\fh = L \otimes_\BZ \BC$, viewed as a commutative Lie algebra, and we denote by $\widehat{\fh}$ the \emph{Heisenberg Lie algebra} associated to $L$, a central extension of $\fh((t))$ by $\BC \mathbb{1}$. We consider the Weyl algebra $\widetilde{\CH}_L$, which is the (completed) enveloping algebra of $\widehat{\fh}$ modulo the relation $\mathbb{1} = 1$. It has (topological) generators $h_n$ for $h \in \fh$ and $n \in \BZ$, satisfying $[h_n, g_m] = n (h, g) \delta_{n, -m}$. 
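For instance (a routine specialization of the relation above, with $n = 1$ paired against $m = -1$ and $m = -2$), we have
\begin{align*}
[h_1, g_{-1}] = (h, g), \qquad [h_1, g_{-2}] = 0,
\end{align*}
so $h_n$ has a nontrivial bracket only with the modes $g_{-n}$.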
Now for any $\lambda \in L$ we have the \emph{Fock representation} $\pi_\lambda$ of $\widetilde{\CH}_L$, which is generated by a single vector $| \lambda \rangle$ subject to the relations \begin{align*} h_n | \lambda \rangle = 0 \text{ for } n > 0; \qquad h_0 | \lambda \rangle = (\lambda, h) | \lambda \rangle. \end{align*} With this data, we can define the lattice vertex algebra $V_L$. As a vector space, we have \begin{align*} V_L = \bigoplus_{\lambda \in L} \pi_\lambda. \end{align*} It has a natural structure of vertex algebra (depending on a choice of 2-cocycle $c: L \times L \to \{\pm 1\}$), which we do not spell out here, except to say that the translation operator $T$ is given by acting by \begin{align*} \frac{1}{2}\sum_{a \in A} \sum_{n \in \BZ} (\lambda_a)_n (\lambda^a)_{-n-1}, \end{align*} where $\{\lambda_a\}_{a \in A}$ is a basis of $L$, and $\{\lambda^a\}_{a \in A}$ gives a dual basis in $\fh = L \otimes_\BZ \BC$. More details can be found in Section 5.4 of \cite{FBZ}. \end{eg} \begin{eg}\label{Example: Virasoro vertex algebra} Another important example is the \emph{Virasoro vertex algebra} $\Vir_c$ of central charge $c \in \BC$. Consider the \emph{Virasoro Lie algebra} $\Vir$, which is a central extension of the Lie algebra $\Der \BC (\!( t )\!)$ of derivations of the field $\BC (\!( t )\!)$ of formal Laurent series. It has topological generators $L_n = -t^{n+1}\partial_t, n \in \BZ$, and $\bc$, satisfying the following relations: \begin{align*} [L_n, L_m] = (n-m)L_{n+m} + \frac{n^3 - n}{12} \delta_{n, -m} \bc \quad \forall n, m; \qquad [L_n, \bc] = 0 \quad \forall n. \end{align*} Given a complex number $c \in \BC$, the representation $\Vir_c$ is generated by a single vector $v_c$, subject to the relations: \begin{align*} L_n v_c = 0 \text{ for } n \ge -1; \qquad \bc v_c = c v_c. \end{align*} The Virasoro vertex algebra encodes the conformal symmetry of a vertex algebra corresponding to a 2d conformal field theory. 
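As a routine check of these relations (using that $\bc$ acts on $\Vir_c$ by the scalar $c$), taking $n = 2$ and $m = -2$ gives
\begin{align*}
[L_2, L_{-2}] = 4 L_0 + \frac{2^3 - 2}{12}\, c = 4 L_0 + \frac{c}{2},
\end{align*}
so the central term contributes only when $n + m = 0$.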
More precisely, a \emph{conformal structure} (of central charge $c$) on a vertex algebra $V$ is a morphism of vertex algebras \begin{align*} \phi: \Vir_c \to V. \end{align*} It is not difficult to check that the data of such a map is determined by specifying the single vector $\phi(L_{-2}v_c) \in V$, which is then called the \emph{conformal vector} and is often denoted by $\omega$. This vector must satisfy certain properties in order for the morphism $\phi$ to be well-defined (see for example Section 2.5 of \cite{FBZ}). \end{eg} \begin{eg}\label{Example: conformal structure on lattice VOA} For example, in the case of a rank 1 lattice $L = \sqrt{N} \BZ$ (for $N$ even), there is an infinite family $\{\phi_\lambda\}$ of conformal structures on $V_L$, parametrized by a choice of $\lambda \in \frac{\sqrt{N}}{2} \BZ$. Let us denote by $b$ the basis element $\sqrt{N} \otimes \frac{1}{\sqrt{N}}$ of $\fh = L \otimes_\BZ \BC$. For a fixed choice of $\lambda \in \frac{\sqrt{N}}{2} \BZ$, the central charge will be $c_\lambda = 1 - 12\lambda^2$, and we have \begin{align}\label{Eq: conformal vector} \omega_\lambda \defeq \phi_\lambda(L_{-2}v_{c_\lambda}) = \frac{1}{2} b_{-1}^2\vac + \lambda b_{-2}\vac. \end{align} \end{eg} \begin{defn}[Def. 5.1, \cite{HL}] A \emph{vertex algebra over $X$} consists of a vertex algebra $(V, \vac, T, Y(\cdot, z))$ together with the structure of a $\Gamma(X, \CO_X)$-module on $V$ such that \begin{itemize} \item For any $f, g \in \Gamma(X, \CO_X)$, and any $v, u \in V$, we have \begin{align*} Y(f(x)\cdot v, z) (g(y) \cdot u) = f(x + z) g(x) \cdot Y(v, z)u, \end{align*} where $x$ and $y$ are the global coordinates of the first and second copies of $X$ respectively, acting on $V \otimes V$; \item For any $f \in \Gamma(X, \CO_X)$ and any $v \in V$ we have \begin{align*} T(f(x)\cdot v) = (\frac{d}{dx} f(x)) \cdot v + f(x) \cdot (T v). 
\end{align*} \end{itemize} \end{defn} We will let $\VA$ denote the category of vertex algebras, and $\VA(X)$ denote the category of vertex algebras over $X$. \begin{defn}[Sec. 3.3, \cite{BD}] A \emph{non-unital chiral algebra} over $X$ consists of a right $\CD_X$-module $\CA$ equipped with a morphism of $\CD$-modules \begin{align*} \mu^\ch : j_* j^* \CA \boxtimes \CA \to \Delta_! \CA \end{align*} on $X^2$, which satisfies skew-symmetry and the Jacobi identity. This morphism is called the \emph{chiral bracket}. Here $\Delta: X \to X^2$ is the diagonal embedding, and $j$ is the open embedding of its complement $U$. \end{defn} \begin{rmk}\label{Remark: affine chiral algebras} Because $X$ is affine, the data of the $\CD_X$-module $\CA$ is determined by its global sections $A$, viewed as a (right) module over $\CD(X) = \Gamma(X, \CD_X)$. Similarly, because $X^2$ and $U$ are also affine, the data of $\mu^\ch$ can also be described in terms of its action on global sections; for this we adopt some notation that will be convenient later on. For a finite set $I$, let $X^I$ denote the product of $|I|$ copies of $X$, let $x_i$ be the fixed global coordinate on the $i$th factor, and let $S_X(I) = \Gamma(X^I, \CO_{X^I})$. In particular, by a slight abuse of notation, we denote by $S_X(1)$ the coordinate ring $\Gamma(X, \CO_X)$, and by $S_X(1,2)$ the global sections $\Gamma(X^2, \CO_{X^2})$. Furthermore, let us denote by $S_X(1:2)$ the sections $\Gamma(U, \CO_{X^2})$, so that $S_X(1:2)$ is the localization of $S_X(1,2)$ by $(x_1 - x_2)$. With this notation in mind, we see that \begin{align*} \Gamma(X^2, j_* j^* \CA \boxtimes \CA) = (A \otimes_\BC A) \otimes_{S_X(1,2)} S_X(1:2). \end{align*} On the other hand, identifying $\Delta_! \CA = \frac{j_* j^* (\Omega_X \boxtimes \CA)}{\Omega_X \boxtimes \CA}$ as in Section 19.1.1 of \cite{FBZ}, we have \begin{align*} \Gamma(X^2, \Delta_! \CA) = \frac{(S_X(1)dx_1 \otimes_\BC A) \otimes_{S_X(1,2)} S_X(1:2)} {S_X(1)dx_1 \otimes_\BC A}. 
\end{align*} In particular, this module is spanned over $\BC$ by elements that look like \begin{align*} (f(x_1)dx_1 \otimes a) \otimes (x_1 - x_2)^{-N} \text{ mod } S_X(1)dx_1 \otimes_\BC A, \end{align*} for $f(x_1) \in S_X(1)$, $a \in A$, and $N >0$. Then the chiral bracket $\mu^\ch$ is determined by a morphism of these right $\CD(X^2)$-modules satisfying properties corresponding to the skew-symmetry and the Jacobi identity. \end{rmk} \begin{eg}\label{Example: unit chiral algebra} It is straightforward to check that the natural map $j_* j^* \omega_X \boxtimes \omega_X \to \Delta_! \omega_X$ makes the canonical bundle $\omega_X = \Omega_X$ into a chiral algebra. \end{eg} \begin{defn}[Sec. 3.3.3, \cite{BD}] A \emph{unital} chiral algebra $\CA$ is a chiral algebra $(\CA, \mu^\ch)$ equipped with a map \begin{align*} u: \omega_X \to \CA \end{align*} of chiral algebras such that the restriction of the chiral bracket $\mu^\ch$ to $j_*j^*(\omega_X \boxtimes \CA)$ is the canonical quotient map $j_*j^*(\omega_X \boxtimes \CA) \surj \Delta_! \CA$. From now on, all chiral algebras will be assumed to be unital unless explicitly stated otherwise. We let $\CAlg(X)$ denote the category of (unital) chiral algebras on $X$. \end{defn} Before we can make the definition of a factorization algebra, we need to set some additional notation. \begin{notation}\label{Notation: categories of finite sets} Let $\Fin$ denote the category of all finite sets with maps between them. On the other hand, let $\fSet$ denote the category of all \emph{non-empty} finite sets with morphisms being only the surjections between them. (Note that later on we will have a third category $\FinNEq$, but let us postpone its definition until Notation \ref{Notation: finite sets with equivalence relation} when it is actually needed.) A surjection $\alpha: I \surj J$ determines a closed embedding $\Delta(\alpha): X^J \to X^I$. 
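For instance (with an illustrative choice of finite sets), the surjection $\alpha: \{1,2,3\} \surj \{1,2\}$ with $\alpha(1) = \alpha(2) = 1$ and $\alpha(3) = 2$ gives the partial diagonal
\begin{align*}
\Delta(\alpha): X^{\{1,2\}} \to X^{\{1,2,3\}}, \qquad (y_1, y_2) \mapsto (y_1, y_1, y_2).
\end{align*}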
On the other hand, an arbitrary map $\alpha: I \to J$ factors into a surjection followed by an injection: \begin{align*} I \xrightarrow{\alpha^\prime} \im(\alpha) \defeq K \xrightarrow{\beta} J. \end{align*} The injection $\beta: K \to J$ determines a projection $\pi(\beta): X^J \to X^K$. We denote the composition $\Delta(\alpha^\prime) \circ \pi(\beta)$ by $\phi(\alpha): X^J \to X^I$. The map $\alpha$ also determines an open embedding $U(\alpha) \emb X^I$, where \begin{align*} U(\alpha) = \{ (x_i) \in X^I \ \vert \ x_{i_1} \ne x_{i_2} \text{ unless } \alpha(i_1) = \alpha(i_2)\}. \end{align*} We denote this open embedding by $j(\alpha)$. \end{notation} \begin{defn}[Section 3.4, \cite{BD}]\label{Def: factorization algebra} A \emph{factorization algebra} on $X$ consists of a family $\left\{\CB_{X^I} \in \CD(X^I) \right\}_{I \in \fSet}$ of left $\CD$-modules together with isomorphisms \begin{align*} \nu_\alpha&: \Delta(\alpha)^* \CB_{X^I} \EquivTo \CB_{X^J}; \\ d_\alpha&: j(\alpha)^* \left(\boxtimes_{j \in J} \CB_{X^{I_j}} \right) \EquivTo j(\alpha)^* \CB_{X^I} \end{align*} for any $\alpha: I \surj J \in \fSet$. We require that the $\CB_{X^I}$ have no non-zero local sections supported at the diagonal divisor. We also require that the $\nu_\alpha$ and $d_\alpha$ satisfy compatibility conditions with respect to compositions of maps $\alpha$. \end{defn} \begin{eg} It is a straightforward exercise to check that the collection $\CO = \left\{\CO_{X^I}\right\}$ can be given a natural structure of factorization algebra. \end{eg} \begin{defn}[Section 3.4.4, \cite{BD}] A factorization algebra $\CB = \{\CB_{X^I}\}$ on $X$ is \emph{unital} if it is equipped with a map $u: \CO \to \CB$ of factorization algebras which satisfies the following two conditions: \begin{enumerate} \item Let $1 \in \CB_X$ denote the image of the unit of $\CO_X$ under $u$. 
For any local section $b$ of $\CB_{X}$, $1 \otimes b$ is a local section of $\CB_X \boxtimes \CB_X$, so we can view it as a section of $j_* j^* (\CB_X \boxtimes \CB_X)$; hence $d_{\id}(1 \otimes b)$ is a section of $j_* j^* (\CB_{X^2})$. We require that it is actually a section of $\CB_{X^2}$. By abuse of notation, we will write $1 \otimes b \in \CB_{X^2}$. \item We require furthermore that $\Delta^*(1 \otimes b) = b$, under the identification of $\Delta^*(\CB_{X^2})$ with $\CB_X$. \end{enumerate} \end{defn} \begin{rmk}[Remark on the definition] Beilinson and Drinfeld (Section 3.4.5 \cite{BD}) remark that a unital factorization algebra is equivalent to the following data: \begin{itemize} \item For any $I \in \Fin$ (in particular for $I = \emptyset$) we have a left $\CD$-module on $X^I$. \item For any morphism of finite sets $\alpha: I \to J$ we have morphisms of left $\CD$-modules \begin{align*} \nu_\alpha&: \phi(\alpha)^* \CB_{X^I} \to \CB_{X^J}; \\ d_\alpha&: j(\alpha)^* \left(\boxtimes_{j \in J} \CB_{X^{I_j}} \right) \to j(\alpha)^* \CB_{X^I}. \end{align*} \end{itemize} These are required to satisfy the same compatibilities under compositions of the maps $\alpha$ as before; however, we require them to be isomorphisms only when $\alpha$ is surjective. We again require that the sheaves $\CB_{X^I}$ have no non-zero local sections supported on the diagonal divisor. Moreover, $\CB_{X^\emptyset}$ must be non-zero. \end{rmk} From now on, all factorization algebras will be assumed to be unital, unless specified otherwise. Let $\FA(X)$ denote the category of (unital) factorization algebras on $X$. Let us summarize the relationships between these categories. \begin{thm} \label{Theorem: everybody else's theorem} Let $X \subset \BBA^1$ be an open affine subset, with a fixed global coordinate $x$. \begin{enumerate} \item (Thm. 5.4 \cite{HL}.) The category of vertex algebras over $X$ is equivalent to the category of chiral algebras over $X$. 
\item The category of vertex algebras is equivalent to the category of translation-equivariant vertex algebras on $\BBA^1$, and hence to the category of translation-equivariant chiral algebras on $\BBA^1$. \item (Thm. 3.4.9 \cite{BD}.) The category of chiral algebras on $X$ is equivalent to the category of factorization algebras on $X$. \end{enumerate} \end{thm} \begin{rmk} \begin{enumerate} \item Many of the ideas in the proof of part (1) appear also in Section 19.2 of \cite{FBZ}. \item Part (2) can be proved as a corollary of part (1), and has widely been accepted as true throughout the literature. However, an explicit description of the functors appears in the appendix of the recent paper \cite{BDHK}. \item Part (3) is proved in greater generality (for schemes $X$ of arbitrary dimension $n$) in \cite{FG}; however, we do not need this level of generality. \item We will not comment on the proofs here---however, at certain stages in arguments in this paper, we will describe some of the equivalences above more explicitly, as needed. \end{enumerate} \end{rmk} \section{Background: Borcherds's category of \texorpdfstring{$(A, H, S)$}{(A,H,S)}-vertex algebras}\label{Section: Background, AHS} In this section, we introduce the main objects of Borcherds's paper \cite{BorQVA}. Borcherds works in an arbitrary symmetric monoidal category $A$; for simplicity, we will restrict our attention to the case that $A = \Vect$, the category of vector spaces over $\BC$. We will also introduce new examples, along with geometric perspectives which will be useful in relating Borcherds's constructions to more geometric objects (chiral algebras and factorization algebras) in subsequent sections. \begin{notation}\label{Notation: finite sets with equivalence relation} Recall from Notation \ref{Notation: categories of finite sets} the category $\Fin$ of finite sets with arbitrary maps between them. 
We are also interested in the category $\FinNEq$ whose objects are finite sets equipped with an equivalence relation, and whose morphisms are functions between the finite sets which preserve \emph{inequivalence}: i.e., we consider only $\alpha: I \to J$ with the property that $\alpha(i_1) \sim \alpha(i_2)$ only if $i_1 \sim i_2$. We will consider $\Fin$ as a full subcategory of $\FinNEq$ by letting an ordinary finite set $I$ carry the degenerate equivalence relation, so all elements of $I$ are equivalent. Note that giving an equivalence relation on a finite set $I$ is equivalent to giving a surjection from $I$ to another set: we will sometimes write $\pi_I: I \surj I^\prime$ for this surjection, and identify $I^\prime$ with the set of equivalence classes of $I$. Following Notation \ref{Notation: categories of finite sets}, the surjection $\pi_I$ determines an open embedding $j(\pi_I): U(\pi_I) \emb X^I$; in the case that the equivalence relation is implicit, we may denote this open embedding by $j(I): U(I) \emb X^I$. Notice that the morphisms from $I$ to $J$ in $\FinNEq$ are exactly those functions $\alpha: I \to J$ such that the induced map $\phi(\alpha) : X^J \to X^I$ restricts to a map $U(J) \to U(I)$. We will denote this map by $\phi_U(\alpha)$. \end{notation} Borcherds begins by considering the functor categories $\Fun(\Fin^*, A)$ and $\Fun((\Fin^*)^\op, A)$, where $* \in \{\emptyset, \not\equiv\}$. These categories each have a natural symmetric monoidal structure induced by the tensor product on $A$: for two functors $V_1, V_2$ and for $I \in \Fin^*$, we have \begin{align*} (V_1 \otimes V_2)(I) = V_1(I) \otimes_\BC V_2(I). \end{align*} The next step in the construction involves choosing a coalgebra object in $\Fun((\Fin^*)^\op, A)$. Although the definitions make sense for general coalgebra objects, all our theorems and examples use the same coalgebra, which we will define now. 
\begin{defn}\label{Definition: coalgebra T} Let $H=\BC[\del]$ be the polynomial algebra in one generator $\del$. It has a natural cocommutative coalgebra structure given by \begin{align*} \Delta(\del) = \del \otimes 1 + 1 \otimes \del. \end{align*} Now define the functor $T$ by setting $T(I) = \otimes_{I} H \cong \BC[\del_i]_{i \in I}$. Given a morphism $\alpha: I \to J$, we use the comultiplication on $H$ to define a map \begin{align*} \alpha^*&: T(J) \to T(I)\\ & \del_j \mapsto \Delta^{\alpha^{-1} (j)} (\del). \end{align*} (Here $\Delta^{\alpha^{-1} (j)}$ is the $(\vert \alpha^{-1} (j) \vert -1)$-fold composition of $\Delta$ with itself, taking values in $\BC[\del_i]_{i \in \alpha^{-1} (j)}$: that is, $\Delta^{\alpha^{-1} (j)} (\del) = \sum_{i \in \alpha^{-1}(j)} \del_i$.) It is not hard to show that this makes $T$ into a contravariant functor on $\Fin^*$. Finally, we define a comultiplication map $\Delta^T : T \to T \otimes T$ by using the tensor products of $\Delta: H \to H \otimes H$ with itself. Coassociativity of $\Delta$ ensures that $\Delta^T$ is indeed a natural transformation, and makes $T$ into a cocommutative coalgebra. \end{defn} From now on, we will work only with this coalgebra object $T$. Borcherds next defines what it means for this coalgebra to \emph{act} on a covariant functor $V$. \begin{defn}\label{Definition: T action} Take $V \in \Fun(\Fin^*, A)$. An \emph{action} of $T$ on $V$ consists of a family of maps \begin{align*} \act_T(I):& T(I) \otimes V(I) \to V(I)\\ & t\otimes v \mapsto t.v \end{align*} satisfying the following compatibility condition: given any $\alpha: I \to J \in \Fin^*$, we have a commutative diagram \begin{center} \begin{tikzpicture}[>=angle 90] \matrix(b)[matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] {& T(J) \otimes V(I) & \\ T(I) \otimes V(I) & & T(J) \otimes V(J) \\ V(I) & & V(J). 
\\}; \path[->, font=\scriptsize] (b-1-2) edge node[above left]{$\alpha^* \otimes \id$} (b-2-1) (b-2-1) edge node[left]{$\act_T(I)$} (b-3-1) (b-3-1) edge node[below]{$\alpha_*^V$} (b-3-3) (b-1-2) edge node[above right]{$\id \otimes \alpha^V_*$} (b-2-3) (b-2-3) edge node[right]{$\act_T(J)$} (b-3-3) ; \end{tikzpicture} \end{center} \end{defn} We denote the category of functors equipped with $T$-action by $\Fun(\Fin^{*}, A, T)$. (As before, we can work in either $\Fin$ or $\FinNEq$. We will begin to see the differences between these categories in the next steps.) We observe, following Borcherds, that the tensor product on $\Fun(\Fin^*, A)$ extends naturally to a tensor product $\otimes$ on $\Fun(\Fin^*, A, T)$. This uses the fact that $T$ is a coalgebra. We now fix a commutative algebra object (with unit) $S$ in the tensor category $\Fun(\FinNEq, A, T)$. The condition that $S$ has a unit is equivalent to requiring that $S(\emptyset)$ is a unital $\BC$-algebra. \begin{eg} This is the main example of an algebra object used by Borcherds in \cite{BorQVA}; for this reason we will denote it by $S_B$. For $I$ a finite set with an equivalence relation $\sim$, let \begin{align*} S_B(I) = \BC[(x_i - x_j)^{\pm 1}]_{ i \not\sim j \in I}. \end{align*} Given $\alpha : I \to J \in \FinNEq$, we define \begin{align*} \alpha^{S_B}_* : S_B(I) \to S_B(J) \end{align*} by sending $x_i$ to $x_{\alpha(i)}$. Note that it is crucial that $\alpha$ be a morphism in $\FinNEq$ (i.e. be inequivalence-preserving) in order for this to give a well-defined map. Thus $S_B$ is an object of $\Fun(\FinNEq, A)$. The action of $T$ on $S_B$ is given by letting $\del_i \in T(I)= \BC[\del_i]_{i \in I}$ act on $S_B(I)$ by $\frac{d}{dx_i}$. The diagram in definition \ref{Definition: T action} commutes as an immediate consequence of the product rule, so $S_B$ is an object of $\Fun(\FinNEq, A, T)$. 
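Concretely, for $I = \{1, 2\}$ with $1 \not\sim 2$, we have $S_B(I) = \BC[(x_1 - x_2)^{\pm 1}]$, and the action of $\del_1 \in T(I)$ is determined by
\begin{align*}
\del_1 \cdot (x_1 - x_2)^{-N} = -N\,(x_1 - x_2)^{-N-1},
\end{align*}
a direct instance of the action by $\frac{d}{dx_1}$ just described.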
Finally, to show that $S_B$ is a commutative algebra object in this category, we need to give a multiplication morphism $m: S_B \otimes S_B \to S_B$. On the level of a set $I$, $m_I: S_B(I) \otimes S_B(I) \to S_B(I)$ is just the ordinary algebra multiplication; it is easy to check from the definitions (and again using the product rule) that this defines a natural transformation compatible with the action of $T$. Furthermore, it follows from the fact that the $S_B(I)$ themselves are commutative algebras that the map $m$ makes $S_B$ into a commutative algebra object. \end{eg} \begin{rmk}\label{Remark: restriction of Sb is trivial} Note that we can consider $S_B$ as a commutative algebra object in $\Fun(\Fin, A, T)$, by restricting along the embedding $\Fin \emb \FinNEq$; however, in this case the functor is constant, always producing $\BC$, and the action of $T$ is always trivial. \end{rmk} Before moving on, let us also introduce some more examples of algebra objects $S$, not studied by Borcherds. \begin{eg} \label{Example: S on A1} For $I \in \FinNEq$, we set \begin{align*} \Sa(I) = \BC[x_i]_{i \in I} [ (x_i - x_j)^{-1}]_{i \not\sim j \in I}. \end{align*} The construction of the structure of $\Sa$ as a commutative algebra object in $\Fun(\FinNEq, A, T)$ is analogous to the case of $\Sb$. More precisely, the functions $\alpha_*^{\Sa}$ are induced by $x_i \mapsto x_{\alpha(i)}$, elements $\del_i \in T(I)$ act on $\Sa(I)$ by $\frac{d}{dx_i}$, and the algebra structure of each $\Sa(I)$ is compatible with these structures. \end{eg} \begin{geom} Let $X = \BBA^1$. Recall from Notation \ref{Notation: finite sets with equivalence relation} that each $I \in \FinNEq$ gives an open subscheme $U(I)$ of $X^I$; note that $\Sa(I) = \Gamma(U(I), \CO_{X^I})$, and the action of $T(I)$ on $\Sa(I)$ is just that of the vector fields $\frac{d}{dx_i}$ on this ring of sections. Motivated by this, we generalize Example \ref{Example: S on A1} as follows. 
\end{geom} \begin{eg} \label{Example: S_X} Let $X$ be an open affine subset of $\BBA^1$, with fixed global coordinate $x$, as in Section \ref{Subsec: notation}. For any $I \in \FinNEq$, the cartesian product $X^I$ is affine, and (because we are working over curves) the open set $U(I)$ is also affine. Let \begin{align*} S_X(I) = \Gamma(U(I), \CO_{X^I}). \end{align*} For $\alpha: I \to J$, recall that we have $\phi_U(\alpha): U(J) \to U(I)$. The required map $\alpha^{S_X}_*: S_X(I) \to S_X(J)$ is just the induced map on global sections. The action of $T(I)$ on $S_X(I)$ is again by letting $\del_i$ act by the vector field $\frac{d}{dx_i}$ (using the fixed global coordinate $x_i$ on the $i$th factor of $X^I$), and the algebra structure is again the natural one on each ring of functions $S_X(I)$. The compatibility conditions follow as in the previous examples. Note that this notation is consistent with the notation introduced in Remark \ref{Remark: affine chiral algebras}. \end{eg} With these examples in mind as prototypes, let us move on with Borcherds's construction. He defines the category $\Fun(\FinNEq, A, T, S)$ of $S$-modules: objects are objects $V$ of $\Fun(\FinNEq, A, T)$ equipped with an action map \begin{align*} \act_S: S \otimes V \to V, \end{align*} which is a morphism in $\Fun(\FinNEq, A, T)$, and which satisfies the usual axioms of a ring action. \begin{defn}\label{Definition: ordinary tensor product over S} We define a symmetric tensor product on $\Fun(\FinNEq, A, T, S)$ as follows: for $V, W \in \Fun(\FinNEq, A, T, S)$, the tensor product $V \otimes W$ is the functor given by \begin{align*} (V \otimes W): I \mapsto V(I) \otimes_{S(I)} W(I). \end{align*} It is straightforward to give the rest of the structure of $V \otimes W$ as an object of $\Fun(\FinNEq, A, T, S)$, and to see that this makes $\Fun(\FinNEq, A, T, S)$ into a symmetric monoidal tensor category. 
\end{defn} We will also define a second tensor product, which Borcherds refers to as the \emph{singular tensor product}. \begin{defn}\label{Definition: singular tensor product} Let $V, W \in \Fun(\FinNEq, A, T, S)$. We define $V \odot W$ to be the unique object of $\Fun(\FinNEq, A, T, S)$ such that for any other $Z \in \Fun(\FinNEq, A, T, S)$ a morphism $\psi: V \odot W \to Z$ consists of \begin{itemize} \item for any $I_1, I_2$ in $\FinNEq$, a morphism $\psi_{I_1, I_2}: V(I_1) \otimes W(I_2) \to Z(I_1 \sqcup I_2)$ (here $I_1 \sqcup I_2$ has the equivalence relation given by the disjoint union of the equivalence relations on $I_1$ and $I_2$). \end{itemize} This collection of maps is required to satisfy some natural compatibility conditions: \begin{itemize} \item For fixed $I_1$, $I_2$, $\psi_{I_1, I_2}$ should commute with the actions of $T(I_1)$ and $T(I_2)$, and with the actions of $S(I_1)$ and $S(I_2)$. \item The $\psi$ should be functorial in $I_1$ and $I_2$. \end{itemize} \end{defn} \begin{rmk}\label{Remark: singular tensor product} \begin{enumerate} \item See the remarks following definition 3.10 of \cite{BorQVA} for a discussion of why this object is well-defined (i.e. representable). \item Borcherds also remarks that if we were to attempt to make this definition in the category $\Fun (\Fin, A, T, S)$, we would recover the ordinary tensor product. This amounts to the fact that disjoint union is a coproduct in $\Fin$, but not in $\FinNEq$. \item On the other hand, given an object $V$ of $\Fun (\Fin, A, T, S)$, we can extend it to an object $V \in \Fun(\FinNEq, A, T, S)$ by setting \begin{align*} V(I_1: \ldots : I_n) \defeq V(I_1, \ldots, I_n) \otimes_{S(I_1) \otimes \cdots \otimes S(I_n)} S(I_1:\ldots : I_n).
\end{align*} (We adopt the notation $I_1: \ldots : I_n$ to indicate the set $I_1 \sqcup \ldots \sqcup I_n$ with equivalence classes $I_1, \ldots, I_n$, and $I_1, \ldots, I_n$ to indicate the set $I_1 \sqcup \ldots \sqcup I_n$ with all elements in the same equivalence class.) Here $S$ acts in the obvious way by multiplication on the right factor, while $\del_i \in T(I)$ acts by $\del_i \otimes \id + \id \otimes \frac{\del}{\del x_i}$. This gives a fully faithful embedding of $\Fun(\Fin, A, T, S)$ into $\Fun(\FinNEq, A, T, S)$. Borcherds observes that if we start with $V, W \in \Fun (\Fin, A, T, S)$, view them as objects of $\Fun(\FinNEq, A, T, S)$, and compute the singular tensor product in this category, the resulting object $V \odot W$ is no longer an object of $\Fun (\Fin, A, T, S)$. \end{enumerate} \end{rmk} Finally, we can come to the main definition of \cite{BorQVA}: \begin{defn}[Definition 3.12, \cite{BorQVA}] \label{Definition: (A,H,S)-vertex algebra} An \emph{$(A, H, S)$-vertex algebra} is a singular commutative ring object in $\Fun (\Fin, A, T, S)$. \end{defn} \begin{rmk} [Remark on notation] Borcherds's notation reflects that he allows for more general additive symmetric monoidal categories $A$ than we use in this paper (we take always $A = \Vect$); likewise, his $H$ can be more general than our $H = \BC[\del]$ used in defining $T$ in definition \ref{Definition: coalgebra T}, and hence he considers slightly more general coalgebras than our $T$. We will still use this notation for consistency, even though for us the only choice is in the algebra object $S$. \end{rmk} \begin{rmk} [Remark on unit condition] \label{Remark: units for AHS} Fix a triple $(A,H,S)$. An $(A, H, S)$-vertex algebra $V$ is called \emph{unital} if $V(\varnothing)$ is a unital $S(\varnothing)$-algebra. Although Borcherds does not explicitly mention it, it seems likely that he intended all of his $(A,H,S)$-vertex algebras to be unital, and simply left out this axiom. 
(For example, this assumption is required for his theorems to be true.) It is a consequence of his axioms that $V(\varnothing)$ is an algebra over $S(\varnothing)$, but it is not necessary that it has a unit. On the other hand, a truly \emph{non-unital} $(A,H,S)$-vertex algebra, in the sense of non-unital factorization algebras, or vertex algebras without vacuum, should be a functor $V$ only defined over the category of non-empty finite sets and surjections, together with the rest of the data of an $(A, H,S)$-vertex algebra. Let us also remark that an $(A,H,S)$-vertex algebra is unital if and only if it is equipped with a map of $(A,H,S)$-vertex algebras \begin{align*} S \to V \end{align*} which satisfies the following compatibility condition relating singular-multiplication by the unit to the action of $S$ on $V$: take $I, J$ any finite sets; then the diagram below must commute. \end{rmk} \begin{equation}\label{diagram: unit condition} \begin{tikzpicture}[>=angle 90] \matrix(b)[matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] {S(I) \otimes V(J) & V(I) \otimes V(J) & V(I:J) \\ S(I, J) \otimes V(I, J) & & V(I, J) \\}; \path[->, font=\scriptsize] (b-1-1) edge node[above]{unit} (b-1-2) (b-1-2) edge node[above]{$\mu$} (b-1-3) (b-1-1) edge (b-2-1) (b-2-1) edge node[below]{act}(b-2-3); \path[right hook->, font=\scriptsize] (b-2-3) edge (b-1-3); \end{tikzpicture} \end{equation} (Here again we require that $S(\varnothing)$ is unital, and $1 \in S(\varnothing)$ acts trivially on $V(\varnothing)$.) From now on, we will assume that all $(A, H, S)$-vertex algebras are unital. \begin{rmk} [Remark on morphisms] \label{Remark: morphisms} Borcherds does not explicitly define morphisms between $(A, H, S)$-vertex algebras. 
For now, we will define the category $\VA(A, H, S)$ to have objects as in definition \ref{Definition: (A,H,S)-vertex algebra}, and morphisms natural transformations of the underlying functors which respect all of the additional structure. See Remark \ref{Remark: modify morphisms} for a discussion of why we may wish to modify the definition of a morphism. \end{rmk} \begin{geom} \label{Remark: geometric interpretation of V} In the case that $S = S_X$ as in examples \ref{Example: S on A1} and \ref{Example: S_X} above, we can interpret the data of an $(A, H, S)$-vertex algebra geometrically. Indeed, let $V$ be an $(A, H, S_X)$-vertex algebra. Recall that $X^I$ is affine for any finite set $I$, and let $\CV(I)$ denote the $\CO_{X^I}$-module with global sections given by $V(I)$. The generators of $T(I) = \BC[\del_i]_{i \in I}$ act on $V(I)$ by derivations; together with the action of $\Sgen{X}(I)$, this generates a left $\CD_{X^I}$-module structure on $\CV(I)$. Given any $\alpha: I \to J$, we have $\alpha_*^V: V(I) \to V(J)$. It is straightforward to check from the axioms that this descends to a map \begin{align} S_X(J) \otimes_{S_X(I)} V(I) \to V(J), \label{eq: pullback} \end{align} which is compatible with both the $S_X(J)$-action on both sides, and the ``action'' of $T(J)$ (given on the right hand side from the definitions, and given on the left hand side by letting $\del_j$ act by $\del_j \otimes \id + \id \otimes \alpha^*(\del_j)$). To translate this into the $\CD$-module picture, we consider the morphism $\phi(\alpha) : X^J \to X^I$. Then the map (\ref{eq: pullback}) is the morphism on global sections induced by a map of left $\CD_{X^J}$-modules \begin{align*} \widetilde{\alpha}: \phi(\alpha)^*(\CV(I)) \to \CV(J). \end{align*} These maps are compatible with compositions $I \to J \to K$ in the obvious way. For $I \in \FinNEq$, let $\overline{I}$ denote the underlying finite set (i.e. the result of forgetting the equivalence relation). 
The data of the $V(I)$ for $(\pi_I : I \surj I^\prime) \in \FinNEq$ simply encodes the restriction of $\CV(\overline{I})$ to $U(I) \subset X^I$. Likewise, a function $\alpha: I \to J$ of finite sets is a morphism in $\FinNEq$ if and only if the induced morphism $\phi(\alpha): X^J \to X^I$ sends $U(J)$ to $U(I)$. Then the data of $\alpha_*: V(I) \to V(J)$ encodes the restriction of $\widetilde{\alpha}$ to $U(J)$. \end{geom} Borcherds's main result is the following: \begin{thm}\cite{BorQVA}\label{Theorem: Borcherds} Given an $(A, H, S_B)$-vertex algebra $V$, the vector space $V(1)$ has a natural structure of vertex algebra. \end{thm} In fact, it is not hard to see from the proof that this extends to a functor $\VA(A, H, S_B) \to \VA$, which we will denote by $\Phi$. Borcherds gives an instance of this theorem by constructing an $(A, H, S_B)$-vertex algebra $\BV^L$ for which $\Phi(\BV^L)$ is the lattice vertex algebra $V_L$. \begin{eg}\label{Example: lattice AHS} Let $L$ be an even integral lattice of rank $n$, with bilinear form $(\cdot, \cdot)$ as in example \ref{Example: lattice VOA}. Let $V^L$ be the commutative algebra \begin{align*} V^L \defeq \BC[L] \otimes \Sym(L(1) \oplus L(2) \oplus \cdots \oplus L(n) \oplus \cdots ). \end{align*} (Here we have the symmetric algebra on the direct sum of countably many copies of $L$, denoted by $L(1), L(2)$, etc.) For future reference, let us fix a basis $\{\lambda_a\}_{a \in A}$ of $L$, and let us denote the generators of this algebra by $e^\lambda \in \BC[L]$ (for $\lambda \in L$), and by $\lambda_a(k) \in L(k)$ (for $a \in A$ and $k \in \BN$). We define an action of $\BC[\del]$ on $V^L$ by setting \begin{align*} \del(e^{\lambda_a}) &= \lambda_a(1);\\ \del(\lambda_a(k)) &= (k+1)\lambda_a(k+1), \end{align*} and requiring that $\del$ satisfies the Leibniz rule. In particular, denoting $\frac{1}{k!} \del^k$ by $\del^{(k)}$, we have $\lambda_a(k) = \del^{(k)} e^{\lambda_a}$. 
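For instance, the identity $\lambda_a(k) = \del^{(k)} e^{\lambda_a}$ just stated follows by an easy induction on $k$, which we record for convenience:

```latex
\begin{align*}
\del^{(k+1)} e^{\lambda_a}
  = \tfrac{1}{k+1}\, \del\big(\del^{(k)} e^{\lambda_a}\big)
  = \tfrac{1}{k+1}\, \del\big(\lambda_a(k)\big)
  = \tfrac{1}{k+1}\, (k+1)\, \lambda_a(k+1)
  = \lambda_a(k+1),
\end{align*}
```

with base case $\del^{(1)} e^{\lambda_a} = \del(e^{\lambda_a}) = \lambda_a(1)$.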
In addition to the obvious algebra structure, $V^L$ has a coalgebra structure compatible with the algebra structure and the action of $\del$ determined by requiring that \begin{align*} \Delta(e^\lambda) &= e^\lambda \otimes e^\lambda \text{ for any } \lambda \in L;\\ \Delta(v_1 v_2) &= \Delta(v_1)\cdot \Delta(v_2) \text{ for any } v_1, v_2 \in V^L;\\ \Delta(\del v) &= (\del \otimes 1 + 1 \otimes \del) \Delta(v) \text{ for any } v \in V^L. \end{align*} Define $\BV^L : \Fin \to \CA$ by $\BV^L(I) \defeq \otimes_{I} V^L$. Given a morphism $\alpha: I \to J$, we define $\BV^L(I) \to \BV^L(J)$ by $(v_i)_{i \in I} \mapsto (\prod_{i \in \alpha^{-1}(j)} v_i)_{j \in J}$. Then we can define the action of $T(I)=\BC[\del_i]_{i \in I}$ on $\BV^L(I)$ by letting $\del_i$ act on the $i$th factor $V^L$. The action of $S_B$ is given by the natural action of $S_B(I_1: \ldots : I_n)$ on $\BV^L (I_1: \ldots : I_n) = \BV^L(I_1, \ldots, I_n) \otimes_\BC S_B(I_1: \ldots: I_n)$. It remains to define the singular multiplication $\mu$ on $\BV^L$. This should be a map $\BV^L \odot \BV^L \to \BV^L$, or in other words a collection of compatible maps \begin{align*} \mu_{I_1, I_2}: \BV^L(I_1) \otimes \BV^L(I_2) \to \BV^L(I_1, I_2) \otimes_{\BC} \BC[(x_{i_1} - x_{i_2})^{\pm 1} ]_{i_1 \in I_1, i_2 \in I_2}. \end{align*} We define these maps in several steps. Fix a two-cocycle $c: L \times L \to \{\pm1\}$ as in example \ref{Example: lattice VOA}. Now define what Borcherds calls a \emph{bicharacter} $r: V^L \otimes V^L \to \BC[(x_1 - x_2)^{\pm 1}]$ as follows. For $\alpha, \beta \in L$, set $r(e^\alpha, e^\beta) = c(\alpha, \beta) (x_1 - x_2)^{(\alpha, \beta)}$. Note that since $c$ satisfies $c(\alpha, \beta) = (-1)^{(\alpha, \beta)}c(\beta, \alpha)$, we have $r(e^\beta, e^\alpha) = c(\alpha, \beta) (x_2 - x_1)^{(\alpha, \beta)}$.
The value of $r$ on any element of $V^L \otimes V^L$ is completely determined by the requirement that $r$ is bilinear and satisfies\footnote{More generally, these are the axioms given by Borcherds for a \emph{(symmetric) bicharacter} from an arbitrary commutative cocommutative bialgebra into $\BC[(x_1 - x_2)^{\pm 1}]$; he uses this general set-up to produce other examples of $(A, H, S_B)$-vertex algebras, but this example is sufficient for our purposes.} \begin{itemize} \item $\text{if } r(v \otimes w) = f(x_1, x_2), \text{ then } r(w \otimes v) = f(x_2, x_1) \text{ for } v, w \in V^L;$ \item $r(\del v \otimes w) = \frac{\del}{\del x_1} r(v \otimes w) \text{ for } v, w \in V^L;$ \item $r(v_1v_2 \otimes w) = r(v_1 \otimes w_{(1)})r(v_2 \otimes w_{(2)}), \text{ for } v_1, v_2, w \in V^L, \text{ where } \Delta(w) = w_{(1)} \otimes w_{(2)}.$ \end{itemize} Borcherds explains (Lemma 4.1, \cite{BorQVA}) that this bicharacter can be extended to give maps \begin{align*} r: \BV^L(I_1) \otimes \BV^L(I_2) \to S_B(I_1: I_2), \end{align*} for any pair $I_1, I_2$, using the coalgebra structure of $V^L$ and the algebra structure of $S_B(I_1: I_2)$. With this in hand, we define $\mu_{I_1, I_2}$ as follows: given $v \in \BV^L(I_1)$ and $w \in \BV^L(I_2)$, write $\Delta(v) = v_{(1)} \otimes v_{(2)} \in \BV^L(I_1) \otimes \BV^L(I_1)$, and likewise write $\Delta(w) = w_{(1)} \otimes w_{(2)} \in \BV^L(I_2) \otimes \BV^L(I_2)$. Then we define \begin{align} \label{Eq: definition of mu} \mu_{I_1, I_2} (v \otimes w) \defeq (v_{(1)} \otimes w_{(1)}) \otimes r(v_{(2)} \otimes w_{(2)}). \end{align} It is not hard to check that this gives a compatible family of maps, which in turn gives a commutative multiplication map $\BV^L \odot \BV^L \to \BV^L$.
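To see the bicharacter axioms in action, here are two small computations (ours, for illustration). The derivative and coproduct axioms applied to generators give

```latex
\begin{align*}
r(\del e^\alpha \otimes e^\beta)
  &= \tfrac{\del}{\del x_1}\, r(e^\alpha \otimes e^\beta)
   = (\alpha, \beta)\, c(\alpha, \beta)\, (x_1 - x_2)^{(\alpha, \beta) - 1}, \\
r(e^\alpha e^{\alpha'} \otimes e^\beta)
  &= r(e^\alpha \otimes e^\beta)\, r(e^{\alpha'} \otimes e^\beta)
   = c(\alpha, \beta)\, c(\alpha', \beta)\, (x_1 - x_2)^{(\alpha, \beta) + (\alpha', \beta)},
\end{align*}
```

the second using $\Delta(e^\beta) = e^\beta \otimes e^\beta$. Similarly, since $\Delta(e^\alpha) = e^\alpha \otimes e^\alpha$, formula (\ref{Eq: definition of mu}) yields $\mu_{I_1, I_2}(e^\alpha \otimes e^\beta) = c(\alpha, \beta)\, (e^\alpha \otimes e^\beta) \otimes (x_1 - x_2)^{(\alpha, \beta)}$ for singleton $I_1, I_2$, recovering the familiar leading behaviour of lattice vertex operators.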
Although we will not explain here why the vertex algebra structure on $\Phi(\BV^L)$ matches that of the lattice vertex algebra $V_L$ (see Example \ref{Example: lattice VOA}), let us at least explain how the underlying vector spaces $\BV^L(1) = V^L$ and $V_L$ are identified, by defining the isomorphism \begin{align}\label{Eq: identifying V^L and V_L} f: V^L \to V_L. \end{align} In fact, both $V^L$ and $V_L$ are commutative algebras with derivation (given by the action of $\partial$ on $V^L$ and the action of $T$ on $V_L$, respectively). Moreover, each is generated under multiplication and the derivation by elements parametrized by $L$: $\{e^\lambda\}_{\lambda \in L}$ in the case of $V^L$ and $\{ | \lambda \rangle\}_{\lambda \in L}$ in the case of $V_L$. The map $f$ is the homomorphism of commutative algebras with derivation given by $f(e^\lambda) = | \lambda \rangle$. As an example, we can calculate that for a basis element $\lambda_a$, $f(\lambda_a(1)) = f(\partial e^{\lambda_a} ) = T | \lambda_a \rangle = (\lambda_a)_{-1} | \lambda_a \rangle$. \end{eg} \begin{rmk}[Remark on bicharacter constructions] \label{Remark: bicharacter} In \cite{Patnaik}, Patnaik proves carefully that $\Phi(\BV^L)$ is indeed the lattice vertex algebra, as Borcherds claims. He also defines and characterizes the class of \emph{$r$-vertex algebras}, vertex algebras which arise as objects in the image of $\Phi$ of $(A,H,S)$-vertex algebras which are constructed in a way similar to the above method, namely beginning with a commutative cocommutative bialgebra and a symmetric bicharacter. In particular, not all vertex algebras arise in this way; this result is different from the results of the current paper, which concern the application of $\Phi$ (or $\Phi_X$) to \emph{any} $(A, H, S)$-vertex algebra.
\end{rmk} \begin{rmk}[Remark on commutativity condition] \label{Remark: commutativity} Let us describe more explicitly what it means for a singular multiplicative structure $\mu$ on an object $V \in \Fun(\Fin, \CA, T, S)$ to be \emph{commutative}. In terms of the components $\mu_{I_1, I_2}$, the condition simply amounts to the commutativity of the following diagrams: \begin{center} \begin{tikzpicture}[>=angle 90] \matrix(b)[matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] { V(I_1) \otimes V(I_2) & V(I_1: I_2) \\ V(I_2) \otimes V(I_1) & V(I_2: I_1)\\}; \path[->, font=\scriptsize] (b-1-1) edge node[above]{$\mu_{I_1, I_2}$} (b-1-2) (b-1-1) edge node[left]{$\alpha \otimes \beta \mapsto \beta \otimes \alpha$} (b-2-1) (b-1-2) edge node[right]{$\id$}(b-2-2) (b-2-1) edge node[below]{$\mu_{I_2, I_1}$} (b-2-2); \end{tikzpicture} \end{center} However, when we are dealing with explicit examples, say $I_1$ and $I_2$ each consisting of one point, there is a risk of notational confusion. Rather than labelling the element of $I_1$ by ``$1$'' and the element of $I_2$ by ``$2$'', we do not specify the labels, and reserve the notation $x_1$ to refer to the first label appearing on the map $\mu$---i.e. to $I_1$ in the first row of the diagram, and to $I_2$ in the second row---while $x_2$ refers to the second label. We do this for consistency with notation of \cite{BorQVA} and for ease of translation to the (ordinary) vertex algebra setting. Thus in this case, to express the commutativity of the above diagram, we use the transposition $\sigma: \{1,2\} \to \{2,1\}$, and we see that \begin{align}\label{Eq: commutativity condition} \mu_{1,2}(\alpha \otimes \beta) = \sigma^V_* \mu_{1,2}(\beta \otimes \alpha). \end{align} The interested reader can check that this formula holds in the example of the lattice $(A, H, S)$-vertex algebra $\BV^L$.
\end{rmk} \begin{rmk}[Generalization to families]\label{rmk: AHS in families} Borcherds's definitions work for a general commutative ring $R$, not just over $\BC$. In particular, consider the case when $R = \BC[t]$ and $X$ is an affine curve in $\BBA^1_R$, viewed as a scheme over $\Spec R = \BBA^1$; the case we will use later on is the special setting of $X = \BBA^1 \times \BBA^1$, mapping to $\Spec R$ via the first projection. The theory is entirely analogous to the complex setting; let us mention only a few differences. \begin{itemize} \item We consider functors from $\Fin^*$ to the category of $\BC[t]$ modules (our new category $A$). \item The coalgebra object $T$ is as before (with $\BC[t]$-module structure given by $t\mapsto 0$). \item The algebra object $S_X$ comes from considering functions on the iterated fibre products $X^I_{\BBA^1} \defeq X \times _{\BBA^1} X \times _{\BBA^1} \cdots \times_{\BBA^1} X$ and their open subschemes $U_{\BBA^1}(I)$ defined analogously to the non-relative setting. (In the case $X = \BBA^1 \times \BBA^1$ mentioned above, $X^I_{\BBA^1} \cong X \times X^I $ and $U_{\BBA^1}(I) \cong X \times U(I)$.) \item The structure given by the action of $T$ and of $S$ makes $V(I)$ into a relative $\CD$-module on $X^I_{\BBA^1}$ over $\BBA^1$. \item The naive tensor product is defined as before, by tensoring over $S_X(I)$. \item The singular tensor product is defined by considering compatible families of maps on tensor products of modules over $\BC[t]$ (rather than over $\BC$). \end{itemize} We will call such an $(A,H,S)$-vertex algebra a \emph{family of $(A,H,S)$-vertex algebras over $\BBA^1$}. 
\end{rmk} \section{From \texorpdfstring{$(A,H,S_X)$}{(A,H,S)}-vertex algebras to vertex algebras on \texorpdfstring{$X$}{X}}\label{Section: from AHS to vertex algebras} In this section, we generalize Borcherds's result (Theorem \ref{Theorem: Borcherds}) to deal with our other class of algebra objects $S$: namely, we return to the setting of $X \subset \BBA^1$ open affine, with a fixed global coordinate $x$. \begin{thm}\label{Theorem: AHS to vertex algebra} For $X$ as above, and $S_X$ the algebra object defined in example \ref{Example: S_X}, we have a functor \begin{align*} \Phi_X : \VA(A, H, S_X) \to \VA(X). \end{align*} \end{thm} \begin{proof} For $\BV \in \VA(A, H, S_X)$, we will define a structure of vertex algebra over $X$ on the vector space $V = V(1)$. (For convenience, we will identify $V$ with $V(I)$ for any singleton set $I$, using the isomorphism induced by the unique isomorphism $I \to \{1\}$.) The key idea in the construction is the generalization of Borcherds's ``Taylor series expansion function'' to the $\Sgen{X}$ setting. More precisely, we consider the map $\alpha_*^V: V(1,2) \to V(1)$ corresponding to $\alpha: \{1,2\} \to \{1\}$, and define \begin{align*} \theta: V(1,2) & \to V(1) \otimes_\BC \BC [\![ z_1,z_2 ]\!]\\ v & \mapsto \sum_{i, j \geq 0} \alpha_*^V(\del_1^{(i)} \del_2^{(j)} v) \otimes z_1^i z_2^j. \end{align*} Observe that $\theta$ is compatible with the action of $x_1, x_2 \in S_X(1,2)$ on $V(1,2)$ in the following way: \begin{align} \theta (x_1.v) &= (x \otimes 1 + 1 \otimes z_1).\theta (v), \label{eq: theta} \\ \theta (x_2.v) &=(x \otimes 1 + 1 \otimes z_2).\theta (v). \nonumber \end{align} In particular, $\theta$ is linear over $\BC[(x_1 - x_2)]$, where $x_1 - x_2$ acts on the right by $z_1-z_2$. It follows that $\theta$ extends to a map on the localization at $(x_1 - x_2)$: \begin{align*} \theta: V(1,2) \otimes_{S_X(1,2)} S_X(1:2) \to V(1)\otimes_{\BC}\BC[\![z_1,z_2]\!][(z_1-z_2)^{-1}].
\end{align*} From the singular multiplication $\mu$ on $\BV$, we have in particular a map \begin{align*} \mu_{12}: V(1) \otimes V(2) \to V(1,2) \otimes_{S_X(1,2)} S_X(1:2). \end{align*} For $v, u \in V$ we define $Y(v, z_1)u \in V(1) \otimes_{\BC}\BC[\![z_1]\!][z_1^{-1}]$ by \begin{align*} Y(v,z_1)u \defeq \theta(\mu(v\otimes u))_{\vert z_2=0}. \end{align*} More explicitly, choose $N$ large enough that $a \defeq (x_1 - x_2)^N \mu_{12}(v \otimes u) \in V(1,2)$. Then \begin{align}\label{Eq: Y} Y(v,z_1)u = \sum_{i\ge 0} \alpha^V_*(\del_1^{(i)} a) \otimes z_1^{i-N}. \end{align} We claim that $Y(\cdot, z)(\cdot)$ makes $V$ into a vertex algebra on $X$, together with the data of vacuum vector $\vac$ given by the image of $1 \in S_X(1)$ under the unit morphism $S_X(1) \to V(1)$, and of translation operator given by $T = \del(\cdot): V \to V$. Let us check the axioms. First notice that the lower truncation property is immediate from the construction of $Y$. It is also immediate from the properties of an $(A, H, S_X)$-vertex algebra that $V$ is a $\Gamma(X, \CO_X)$-module. It is easy to check from the linearity of $\mu$ and the properties (\ref{eq: theta}) of $\theta$ that we have \begin{align*} Y(xv,z)u & = (x+z) Y(v,z)u;\\ Y(v,z)(xu) & = xY(v,z)u, \end{align*} which are the $\CO(X)$-linearity properties required. The derivative property is also straightforward to check. Next let us check the vacuum axiom and the creation axiom. Since $\del(1) = 0$, we also have $\del(\vac) = 0$. For any other $v \in V$, the diagram (\ref{diagram: unit condition}) tells us that $\mu(\vac \otimes v) = \gamma^V_*v \in V(1,2)$, where $\gamma$ is the inclusion $\{2\} \hookrightarrow \{1,2\}$. Then from (\ref{Eq: Y}) we see that \begin{align*} Y(\vac, z) v &= \sum_{i \ge 0} \alpha^V_*(\del_1^{(i)} \mu (\vac \otimes v)) \otimes z^i \\ & = \sum _{i \ge 0} \alpha^V_*(\mu( T^{(i)}\vac \otimes v)) \otimes z^i\\ & = \alpha^V_*(\mu(\vac \otimes v))\\ & = \alpha^V_* \gamma^V_* v = v.
\end{align*} Thus the vacuum condition holds. Likewise, $\mu(v \otimes \vac) = \delta^V_*v \in V(1,2)$, for $\delta: \{1\} \hookrightarrow \{1,2\}$, so \begin{align*} Y(v, z) \vac &= \sum_{i \ge 0} \alpha^V_* (\del_1^{(i)} \mu (v \otimes \vac) )\otimes z^i\\ & = \sum_{i \ge 0} \alpha^V_*(\mu (\del^{(i)}v \otimes \vac)) \otimes z^i\\ & = v + Tv \otimes z + \cdots. \end{align*} This proves that the creation property is satisfied. Finally, we claim that the following proposition holds: \begin{prop} \label{Proposition: locality holds} The locality axiom is satisfied. In other words, for any three elements $u_1, u_2, u_3 \in V$ there exists some large integer $N \gg 0$ such that \begin{align*} (z_1 - z_2)^N Y(u_1, z_1) Y(u_2, z_2) u_3 = (z_1 - z_2)^N Y(u_2, z_2) Y(u_1, z_1) u_3. \end{align*} \end{prop} We defer the proof of this proposition for now; in the meantime, assuming the proposition, we see that we have indeed constructed the data of a vertex algebra $(V, \vac, T, Y(\cdot, z))$ over $X$. It is also easy to see from the constructions that a morphism $F: \BV \to \BW$ of $(A, H, S_X)$-vertex algebras (in the sense of remark \ref{Remark: morphisms}) gives rise to a linear map $F(1) : V(1) \to W(1)$ which respects all of the vertex algebra structure just described. This describes the behaviour of the functor $\Phi_X$ on morphisms, and completes the proof of the theorem. \end{proof} The remainder of this section is devoted to the proof of Proposition \ref{Proposition: locality holds}. For $u_1, u_2, u_3 \in V$, we need to show that the two elements $ Y(u_1, z_1) Y(u_2, z_2) u_3$ and $Y(u_2, z_2) Y(u_1, z_1) u_3$ of $V(1) \otimes \BC[\![ z_1, z_2 ]\!] [z_1^{-1}, z_2 ^{-1}]$ agree when viewed as elements of $V(1) \otimes \BC[\![ z_1, z_2 ]\!] [z_1^{-1}, z_2 ^{-1}, (z_1 - z_2)^{-1}]$.
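In the polynomial model $V(1,2) = \BC[x_1, x_2]$ (with $\alpha^V_*$ given by restriction to the diagonal $x_1 = x_2 = x$), the map $\theta$ used in the proof above is literally Taylor expansion around the diagonal, so its defining properties can be machine-checked. The following sketch is our illustration only, not part of the construction: it encodes polynomials as coefficient dictionaries and verifies both $\theta(f) = f(x + z_1, x + z_2)$ and the linearity property (\ref{eq: theta}).

```python
from collections import defaultdict
from fractions import Fraction
from math import comb, factorial

def deriv(poly, var):
    """Formal partial derivative of {(i, j): coeff} along variable 0 or 1."""
    out = defaultdict(Fraction)
    for (i, j), c in poly.items():
        e = (i, j)[var]
        if e:
            key = (i - 1, j) if var == 0 else (i, j - 1)
            out[key] += c * e
    return dict(out)

def restrict(poly):
    """alpha_*: set x1 = x2 = x, giving {m: coeff of x^m}."""
    out = defaultdict(Fraction)
    for (i, j), c in poly.items():
        out[i + j] += c
    return dict(out)

def theta(poly, deg):
    """Taylor expansion around the diagonal, with divided powers
    del^(i) = del^i / i!:  {(m, i, j): coeff of x^m z1^i z2^j}."""
    out = defaultdict(Fraction)
    for i in range(deg + 1):
        for j in range(deg + 1):
            g = poly
            for _ in range(i):
                g = deriv(g, 0)
            for _ in range(j):
                g = deriv(g, 1)
            for m, c in restrict(g).items():
                out[(m, i, j)] += c / (factorial(i) * factorial(j))
    return {k: v for k, v in out.items() if v}

def substitute(poly):
    """Direct expansion of f(x + z1, x + z2) via the binomial theorem."""
    out = defaultdict(Fraction)
    for (i, j), c in poly.items():
        for a in range(i + 1):
            for b in range(j + 1):
                out[(a + b, i - a, j - b)] += c * comb(i, a) * comb(j, b)
    return {k: v for k, v in out.items() if v}

def times_x1(poly):
    """Multiplication by x1 on the source side."""
    return {(i + 1, j): c for (i, j), c in poly.items()}

def times_x_plus_z1(t):
    """Multiplication by (x (x) 1 + 1 (x) z1) on the target side."""
    out = defaultdict(Fraction)
    for (m, i, j), c in t.items():
        out[(m + 1, i, j)] += c
        out[(m, i + 1, j)] += c
    return {k: v for k, v in out.items() if v}

# Sample element: f = x1^2 x2 + 3 x2^3.
f = {(2, 1): Fraction(1), (0, 3): Fraction(3)}
assert theta(f, 4) == substitute(f)                            # theta is diagonal Taylor expansion
assert theta(times_x1(f), 4) == times_x_plus_z1(theta(f, 4))   # the property (eq: theta)
```

Exact rational arithmetic (`Fraction`) is used so that the divided powers introduce no rounding; the two assertions are precisely the two facts about $\theta$ invoked in the proof.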
Following Borcherds's proof in the setting of $(A, H, S_B)$-vertex algebras, we will introduce an intermediate function that will allow us to see how the associativity and commutativity of the singular multiplication map $\mu$ imply our result. For this, we use a generalization of the function $\theta$ to three variables: let $\beta$ be the function $\{1,2,3\} \to \{1\}$, and define \begin{align*} \theta_{123} &: V(1:2:3) \to V(1) \otimes \BC[\![ z_1, z_2, z_3 ]\!] [(z_1 - z_2) ^{-1} , (z_1 - z_3) ^{-1}, (z_2 - z_3) ^{-1}] \end{align*} by sending $(x_1 - x_2)^{d_1}(x_1 - x_3)^{d_2}(x_2-x_3)^{d_3} a$ for $a\in V(1,2,3)$ to \begin{align*} \sum_{i, j, k \ge 0} \beta^V_* (\del^{(i)}_1 \del^{(j)}_2 \del^{(k)}_3 a ) \otimes z_1^iz_2^jz_3^k (z_1 - z_2)^{d_1}(z_1 - z_3)^{d_2}(z_2-z_3)^{d_3}. \end{align*} \begin{defn} Let $U$ be the map \begin{align*} V(1) \otimes V(2) \otimes V(3) \to V(1) \otimes \BC[\![ z_1, z_2 ]\!] [z_1^{-1}, z_2 ^{-1}, (z_1 - z_2)^{-1}] \end{align*} given by the following composition: \begin{align*} V(1) \otimes V(2) \otimes V(3) & \xrightarrow{\id \otimes \mu} V(1) \otimes V(2:3) \xrightarrow{\mu} V(1:2:3) \\ & \xrightarrow{\theta_{123}} V(1) \otimes \BC[\![ z_1, z_2, z_3 ]\!] [(z_1 - z_2) ^{-1} , (z_1 - z_3) ^{-1}, (z_2 - z_3) ^{-1}] \\ & \xrightarrow{z_3 \mapsto 0} V(1) \otimes \BC[\![ z_1, z_2 ]\!] [z_1^{-1}, z_2 ^{-1}, (z_1 - z_2)^{-1}]. \end{align*} \end{defn} \begin{lemma}\label{lemma: U and Y agree} Let $Y_{123}: V(1) \otimes V(2) \otimes V(3) \to V(1) \otimes \BC[\![ z_1, z_2 ]\!] [z_1^{-1}, z_2 ^{-1}, (z_1 - z_2)^{-1}]$ be the map given by \begin{align*} u_1 \otimes u_2 \otimes u_3 \mapsto Y(u_1, z_1)Y(u_2, z_2)u_3. \end{align*} Then $Y_{123} = U$. \end{lemma} \begin{proof} Consider the following diagram. 
\begin{center} \begin{tikzpicture}[>=angle 90] \matrix(b)[matrix of math nodes, row sep=2em, column sep=0em, text height=1.5ex, text depth=0.25ex, font=\tiny] {V(1) \otimes V(2) \otimes V(3) & & \\ V(1) \otimes V(2:3) & {V(1) \otimes V(2) \otimes \BC[\![ z_2, z_3]\!][(z_2 - z_3)^{-1}]} & V(1) \otimes V(2) \otimes \BC[\![ z_2]\!][z_2^{-1}] \\ V(1:2:3) & V(1:2) \otimes \BC[\![ z_2, z_3]\!][(z_2 - z_3)^{-1}] & V(1:2) \otimes \BC[\![ z_2]\!][z_2^{-1}] \\ V(1) \otimes \BC[\![ z_1, z_2, z_3]\!][(z_i -z_j)^{-1}] & & V(1) \otimes \BC[\![ z_1, \tilde{z}_2, z_2]\!][(z_1 - \tilde{z}_2)^{-1}, z_2^{-1}] \\ & V(1) \otimes \BC[\![ z_1, z_2]\!][(z_1-z_2)^{-1},z_1^{-1}, z_2^{-1}] & V(1) \otimes \BC[\![ z_1, z_2]\!][z_1^{-1}, z_2^{-1}].\\}; \path[->, font=\tiny] (b-1-1) edge node[above right]{$\id \otimes \mu$} (b-2-1) (b-2-1) edge node[above]{$\id \otimes \theta_{23}$} (b-2-2) (b-2-2) edge node[above]{$z_3 \mapsto 0$} (b-2-3) (b-2-1) edge node[left]{$\mu$} (b-3-1) (b-2-2) edge node[left]{$\mu \otimes \id$} (b-3-2) (b-2-3) edge node[right]{$\mu \otimes \id$} (b-3-3) (b-3-1) edge node[below]{$\theta_{23}$} (b-3-2) (b-3-2) edge node[below]{$z_3 \mapsto 0$} (b-3-3) (b-3-1) edge node[left]{$\theta_{123}$}(b-4-1) (b-3-3) edge node[right]{$\theta_{12}$}(b-4-3) (b-4-3) edge node[right]{$\tilde{z}_2 \mapsto 0$} (b-5-3) (b-4-1) edge node[below left]{$z_3 \mapsto 0$} (b-5-2) (b-5-3) edge (b-5-2); \end{tikzpicture} \end{center} Note that the composition around the outer right edge of this diagram is exactly $Y_{123}$, while the composition around the outer left edge is $U$, so in order to prove the lemma it suffices to show that each of the three squares in the diagram commutes. The upper left square commutes because of the compatibility of $\mu$ with the functors induced by the surjections $\{2,3\} \to \{2\}$ and $\{1\} \sqcup \{2,3\} \to \{1\} \sqcup \{2\}$ and with the action of the $\del_i$. 
That the upper right square commutes is immediate, so it remains to check that the large bottom square commutes. Let us denote the two compositions by $F_{\urcorner}$ and $F_{\llcorner}$. First note that both functions give maps $V(1:2:3) \to V(1) \otimes \BC[\![ z_1, z_2]\!][(z_1-z_2)^{-1},z_1^{-1}, z_2^{-1}]$ which are linear with respect to the action of $(x_i- x_j)$ for $i \ne j$ in the following sense: for $v \in V(1:2:3)$, $(i,j) \in \{ (1,2), (1,3), (2,3)\}$, and $* = \urcorner, \llcorner$, we have \begin{align*} F_* ((x_i - x_j) \cdot v) = \left \{ \begin{array}{l l } (z_1 -z_2) F_*(v) & \text{ if } (i,j) = (1,2); \\ z_1 F_*(v) & \text{ if } (i,j) = (1,3); \\ z_2 F_*(v) & \text{ if } (i,j) = (2,3). \end{array} \right. \end{align*} Since $V(1:2:3)$ is the localization of $V(1,2,3)$ by $(x_1-x_2)(x_1 - x_3)(x_2 - x_3)$ we conclude that it suffices to check that $F_\urcorner(v) = F_\llcorner(v)$ for $v \in V(1,2,3)$. So we take $v \in V(1,2,3)$, and calculate from the definitions that we have \begin{align*} F_\llcorner (v) = \sum_{i, j \ge 0} \beta^V_* (\del_1^{(i)} \del_2^{(j)} v) \otimes z_1^{i} z_2^{j} = F_\urcorner(v). \end{align*} This completes the proof of the lemma. \end{proof} The locality condition is a straightforward consequence of this lemma, together with the properties of the singular multiplication map $\mu$. \begin{proof}[Proof of Proposition \ref{Proposition: locality holds}] First let us fix some notation: we will denote the transposition that swaps $1$ and $2$ by $\sigma$ when viewed as a permutation of $\{1,2\}$, and by $\overline{\sigma}$ when viewed as a permutation of $\{1,2,3\}$. It induces maps \begin{align*} \sigma^V_* &: V(1:2) \to V(1:2);\\ \overline{\sigma}^V_* &: V(1:2:3) \to V(1:2:3);\\ g_\sigma &: \BC[\![ z_1, z_2 ]\!] [z_1^{-1}, z_2^{-1}, (z_1 - z_2)^{-1} ] \to \BC[\![ z_1, z_2 ]\!] 
[z_1^{-1}, z_2^{-1}, (z_1 - z_2)^{-1} ] ;\\ g_{\overline{\sigma}} &: \BC[\![ z_1, z_2, z_3]\!][(z_i -z_j)^{-1}] \to \BC[\![ z_1, z_2, z_3]\!][(z_i -z_j)^{-1}]. \end{align*} Now we make the following observations: \begin{align*} Y(u_1, z_1) Y(u_2, z_2) u_3 & = U(u_1 \otimes u_2 \otimes u_3) \\ & =F_\llcorner (\mu (u_1 \otimes \mu(u_2 \otimes u_3))) \\ & =F_\llcorner (\mu (\mu (u_1 \otimes u_2) \otimes u_3))\\ & =F_\llcorner (\mu (\sigma^V_* \mu (u_2 \otimes u_1) \otimes u_3))\\ & =F_\llcorner (\overline{\sigma}^V_* \mu (\mu (u_2 \otimes u_1) \otimes u_3)). \end{align*} Here the first equality is Lemma \ref{lemma: U and Y agree}; the second is by definition of $U$ and $F_\llcorner$; the third is the associativity of $\mu$; the fourth is the commutativity condition (\ref{Eq: commutativity condition}); and the last is the functoriality of $\mu$. It is easy to check from the definitions that we have $\theta_{123} \circ \overline{\sigma}^V_*(v) = (\id \otimes g_{\overline{\sigma}}) \circ \theta_{123}(v)$ for any $v \in V(1,2,3)$. Furthermore, the two compositions satisfy the same linearity properties with respect to multiplication by $(x_i -x_j)$, so they agree on the localization $V(1:2:3)$. It follows that \begin{align*} F_\llcorner (\overline{\sigma}^V_* \mu (\mu (u_2 \otimes u_1) \otimes u_3)) = g_\sigma (F_\llcorner (\mu (\mu (u_2 \otimes u_1) \otimes u_3))). \end{align*} Arguing as above, we see that this is equal to $g_\sigma (U(u_2 \otimes u_1 \otimes u_3))$, which by Lemma \ref{lemma: U and Y agree} is $g_\sigma (Y(u_2, z_1) Y(u_1, z_2) u_3) = Y(u_2, z_2) Y(u_1,z_1) u_3$. \end{proof} \section{From \texorpdfstring{$(A,H, S_X)$}{(A,H,S)}-vertex algebras to chiral algebras on \texorpdfstring{$X$}{X}}\label{Section: from AHS to chiral} Recall part (1) of Theorem \ref{Theorem: everybody else's theorem} (Thm. 5.4 \cite{HL}). It provides an equivalence of categories \begin{align*} \Psi_X: \VA(X) \to \CAlg(X). 
\end{align*} Combining this with Theorem \ref{Theorem: AHS to vertex algebra}, we see that given any $(A, H, S_X)$-vertex algebra $\BV$, we obtain a chiral algebra $\Psi_X \circ \Phi_X (\BV)$ on $X$. In this section, we calculate the chiral bracket $\mu^\ch$ directly from the data of $\BV$, rather than passing through the category of vertex algebras on $X$. First let us establish some notation. As in the proof of Theorem \ref{Theorem: AHS to vertex algebra}, let $V = \BV(1)$ be the vector space underlying the vertex algebra $\Phi_X(\BV)$. As in Remark \ref{Remark: geometric interpretation of V}, let $\CV = \CV(1)$ denote the left $\CD$-module on $X$ with global sections $V$, and similarly for any finite set $I$ let $\CV(I)$ denote the left $\CD$-module on $X^I$ with global sections $\BV(I)$. From the proof of Theorem 5.4 in \cite{HL} (or, perhaps more explicitly, from the similar discussion in Section 19.2 of \cite{FBZ}), the right $\CD$-module underlying the chiral algebra $\Psi_X \circ \Phi_X (\BV)$ is the right $\CD$-module $\CV^r = \CV \otimes \omega_X$ corresponding to the left $\CD$-module $\CV$. Let us denote this right $\CD$-module by $\CA$. To define a chiral bracket $\mu_0^\ch : j_* j^* \CA \boxtimes \CA \to \Delta_! \CA$, it is enough to give a map of left $\CD$-modules \begin{align*} \lambda_0^\ch: j_* j^* \CV \boxtimes \CV \to \Delta_* \CV. \end{align*} We give such a morphism by composing three maps as follows: \begin{enumerate} \item The first map $j_* j^* \CV \boxtimes \CV \to j_* j^* \CV(1,2)$ is induced on global sections by the singular multiplication map \begin{align*} \mu_{12}: (V(1) \otimes V(2)) \to V(1,2) \otimes_{S_X(1,2)} S_X(1:2), \end{align*} extended to a map of the localization \begin{align*} (V(1) \otimes V(2)) \otimes_{S_X(1,2)} S_X(1:2) \to V(1,2) \otimes_{S_X(1,2)} S_X(1:2).
\end{align*} \item The second map $\gamma: j_* j^* \CV(1,2) \to \Delta_*\Delta^* \CV(1,2)$ is the last map in the canonical exact sequence \begin{align*} 0 \to \Delta_* \Delta^! \CV(1,2) \to \CV(1,2) \to j_* j^* \CV(1,2) \to \Delta_* \Delta^* \CV(1,2) \to 0. \end{align*} (Here $\gamma$ is surjective because $j$ is affine and hence there are no higher derived terms for $j_*$.) \item Notice that for $\alpha: \{1,2\} \to \{1\}$, the map $\phi(\alpha)$ is just the diagonal embedding $\Delta$. The third map $\Delta_* \Delta^* \CV(1,2) \to \Delta_* \CV$ is induced from the map $\widetilde{\alpha}: \phi(\alpha)^*(\CV(1,2)) \to \CV(1)$ of Remark \ref{Remark: geometric interpretation of V}. \end{enumerate} \begin{thm} \label{Theorem: AHS to chiral algebra} The map $\mu^\ch_0$ defined in this way agrees with the chiral bracket $\mu^\ch$ on the chiral algebra $\Psi_X \circ \Phi_X (\BV) = (\CA, \mu^\ch)$. \end{thm} \begin{proof} Let us recall the construction of the chiral bracket $\mu^\ch$. (In fact, we adapt the notation of Frenkel--Ben-Zvi \cite{FBZ}, who work only over a formal disk, to the ideas of Huang--Lepowsky \cite{HL}, who work over an arbitrary open affine $X \subset \BBA^1$.) The bracket $\mu^\ch$ is again defined in terms of a map of left $\CD$-modules\footnote {In the notation of Frenkel--Ben-Zvi, this map, defined on a formal disk around a point $(x,x) \in X^2$, is called $\CY_x^2$.} \begin{align}\label{Eq: chiral bracket} \lambda^\ch: j_* j^* \CV \boxtimes \CV \to \Delta_* \CV. \end{align} Note that $j_* j^* \CV \boxtimes \CV$ has global sections $(V \otimes V) \otimes_{S_X(1,2)} S_X(1:2)$, viewed as an $S_X(1,2)$-module. Then $\lambda^\ch$ is given on global sections by \begin{align*} v \otimes u \otimes (x_1 - x_2)^{-n} \mapsto Y(v, x_1 - x_2)u \otimes (x_1 - x_2)^{-n} \text{ mod}\ S_X(1) \otimes V.
\end{align*} Recalling the definition of $Y(\cdot,z)(\cdot)$ from the proof of Theorem \ref{Theorem: AHS to vertex algebra}, we see that $\lambda^\ch$ is given by \begin{align*} v \otimes u \otimes (x_1 - x_2)^{-n} \mapsto \theta(\mu(v\otimes u))\vert_{z= (x_1 - x_2); w=0}(x_1 - x_2)^{-n}. \end{align*} Next let us write down the map $\gamma$ explicitly in terms of global sections. Global sections of \begin{align*} \Delta_* \Delta^* \CV(1,2) = \frac{ j_* j^* (\CO_X \boxtimes \Delta^* \CV(1,2))}{\CO_X \boxtimes \Delta^* \CV(1,2)} \end{align*} are generated over $S_X(1,2)$ by elements that look like $1 \otimes \overline{v} \otimes (x_1-x_2)^{-n} \ \text{mod}\ S_X(1) \otimes V(1,2)/(x - y)V(1,2)$, where $\overline{v} \in V(1,2)/(x - y) V(1,2)$ and $n \in \BZ_{>0}$. Let us quickly recall the $\CD_{X^2}$-module structure in terms of these global sections: \begin{align*} x_1: 1 \otimes \overline{v} \otimes (x_1-x_2)^{-n} \mapsto & x \otimes \overline{v} \otimes (x_1-x_2)^{-n} ;\\ x_2: 1 \otimes \overline{v} \otimes (x_1-x_2)^{-n} \mapsto & 1 \otimes \overline{xv} \otimes (x_1-x_2)^{-n} = 1 \otimes \overline{yv} \otimes (x_1-x_2)^{-n};\\ \del_{x_1}: 1 \otimes \overline{v} \otimes (x_1-x_2)^{-n} \mapsto & \del_x(1) \otimes \overline{v} \otimes (x_1-x_2)^{-n} + 1 \otimes \overline{v} \otimes \del_{x_1}(x_1-x_2)^{-n}\\ & = -n \otimes \overline{v} \otimes (x_1-x_2)^{-n-1} ;\\ \del_{x_2}: 1 \otimes \overline{v} \otimes (x_1-x_2)^{-n} \mapsto & 1 \otimes \overline{(\del_x + \del_y)v} \otimes (x_1-x_2)^{-n} + 1 \otimes \overline{v} \otimes \del_{x_2}(x_1-x_2)^{-n} \\ & = 1 \otimes \overline{(\del_x + \del_y)v} \otimes (x_1-x_2)^{-n} + n \otimes \overline{v} \otimes (x_1-x_2)^{-n-1}. \end{align*} The map $\gamma$ is defined by \begin{align*} v \otimes (x_1 - x_2)^{-n-1} \mapsto \sum_{i = 0}^{n} \frac{1}{i!} \otimes \overline{\del_x^i v} \otimes (x_1 - x_2)^{i - n -1} \ \text{mod}\ S_X(1) \otimes V(1,2)/(x - y)V(1,2).
\end{align*} Note that if we had taken the sum over all $i \ge 0$, we would get the same answer in the quotient; this is convenient when we don't explicitly know the order of the pole of the element we start with. With these explicit formulas to hand, it is easy to see that \begin{align*} \theta(\cdot)\vert_{ z= (x_1 - x_2); w=0} = \Delta_*(\alpha_*^V) \circ \gamma (\cdot). \end{align*} From this it follows that $\lambda^\ch = \lambda_0^\ch$, and hence that $\mu^\ch = \mu_0^\ch$, as claimed. \end{proof} \section{From factorization algebras on \texorpdfstring{$X$}{X} to \texorpdfstring{$(A, H, S_X)$}{(A,H,S)}-vertex algebras}\label{Section: from factorization to AHS} Composing the functor $\Psi_X \circ \Phi_X$ with the equivalence from chiral algebras on $X$ to factorization algebras on $X$, we obtain a functor $F: \VA(A, H, S_X) \to \FA(X)$. In this section, we discuss a functor going in the opposite direction. \begin{thm}\label{Theorem: Factorization to AHS} Let $\CB = \{\CB_{X^I} \}$ be a unital factorization algebra on $X \subset \BBA^1$. For each finite set $I$, let $\BV(I) = \Gamma(X^I, \CB_{X^I})$; then $\BV= \{\BV(I)\}$ has a natural structure of $(A, H, S_X)$-vertex algebra. This extends to a functor $\Gamma_X$ from the category $\FA(X)$ of factorization algebras on $X$ to the category of $(A, H, S_X)$-vertex algebras. \end{thm} \begin{proof} Let us first define the structure of $\BV$ as a functor from $\Fin$ to the category of vector spaces. Given a morphism $\alpha: I \to J$ of finite sets, we define $\alpha^V_*: \BV(I) \to \BV(J)$ using the Ran condition data of $\CB$. Indeed, we have \begin{align*} \nu_\alpha: \phi(\alpha)^*\CB_{X^I} \to \CB_{X^J}, \end{align*} which gives a map $\nu_\alpha: S_X(J) \otimes_{S_X(I)} \BV(I) \to \BV(J)$ on global sections. Then for $v \in \BV(I)$, we set $\alpha^V_* (v) = \nu_\alpha (1 \otimes v) \in \BV(J)$. 
The fact that the morphisms $\nu_\alpha$ are compatible with composition of the maps $\alpha$ implies that the maps $\alpha^V_*$ are too. Next, $T(I)$ and $S_X(I)$ act naturally on $\BV(I)$, because $\CB_{X^I}$ is a $\CD_{X^I}$-module, and $T(I), S_X(I) \subset \Gamma(X^I, \CD_{X^I})$. Furthermore, because the map $\nu_\alpha$ is compatible with the $\CD$-module structures of $\CB_{X^I}$ and $\CB_{X^J}$, the $T$- and $S_X$-actions on $\BV(I)$ and $\BV(J)$ are intertwined by $\alpha^V_*$ in the appropriate way. The singular multiplication $\mu: \BV\odot \BV \to \BV$ is induced by the factorization isomorphisms in the following way. Recall that to define $\mu$ we need to give for each pair of finite sets $I_1, I_2$ a map \begin{align*} \mu_{I_1,I_2}: (\BV(I_1) \otimes_\BC \BV(I_2)) \otimes_{S_X(I_1, I_2)} S_X(I_1: I_2) \to \BV(I_1: I_2). \end{align*} This map should be compatible with the actions of $S_X(I_1: I_2)$ and $T(I_1: I_2)$ on each side, and furthermore the collection of maps $\mu_{I_1, I_2}$ should be functorial in $I_1$ and $I_2$. So let us choose a pair $I_1, I_2$. Let $I = I_1 \sqcup I_2$, and let $\alpha: I \to \{1,2\}$ be the obvious map. Then $\mu_{I_1, I_2}$ is defined to be the following composition: \begin{align*} (\BV(I_1) \otimes \BV(I_2)) \otimes_{S_X(I_1, I_2)} S_X(I_1: I_2) = \Gamma(X^I, j(\alpha)_* j(\alpha)^* (\CB_{X^{I_1}} \boxtimes \CB_{X^{I_2}})) \\ \EquivTo \Gamma(X^I, j(\alpha)_* j(\alpha)^* \CB_{X^I}) = \BV(I_1: I_2). \end{align*} That is, $\mu_{I_1, I_2} (v_1 \otimes v_2) = d_\alpha (v_1 \otimes v_2)$. Because $d_\alpha$ is a morphism of $\CD$-modules, $\mu_{I_1, I_2}$ is compatible with the actions of $S_X$ and $T$. To check the functoriality of $\mu$, we consider sets $K_1$ and $K_2$ with maps $\gamma_i: K_i \to I_i$, inducing $\gamma: K = K_1 \sqcup K_2 \to I$.
We need to show that the following diagram commutes: \begin{center} \begin{tikzpicture}[>=angle 90] \matrix(b)[matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] {\BV(K_1) \otimes \BV (K_2) & \BV(K_1: K_2)\\ \BV(I_1) \otimes \BV (I_2) & \BV(I_1: I_2). \\}; \path[->, font=\scriptsize] (b-1-1) edge node[above]{$\mu_{K_1, K_2}$} (b-1-2) (b-1-2) edge node[right]{$\gamma^V_*$} (b-2-2) (b-1-1) edge node[left]{$\gamma_{1,*}^V \otimes \gamma_{2,*}^V$} (b-2-1) (b-2-1) edge node[below]{$\mu_{I_1, I_2}$}(b-2-2); \end{tikzpicture} \end{center} This follows immediately from the compatibility between the $\nu$s and the $d$s. It remains to check that the resulting singular multiplication map is associative and commutative. Both properties follow from the compatibility of the morphisms $d$ under composition. Finally, to see that $\BV$ is a unital $(A, H, S_X)$-vertex algebra, observe (similarly to the arguments appearing in Section 3.4.5 of \cite{BD}) that the factorization isomorphism for the decomposition $\varnothing= \varnothing \cup \varnothing$ makes $\BV(\varnothing)$ into an algebra with multiplication $\mu: \BV(\varnothing) \otimes \BV(\varnothing) \to \BV(\varnothing)$ an isomorphism of vector spaces. We conclude that $\BV(\varnothing) = \BC$, with unit $1$. 
\end{proof} \section{Composition of functors}\label{sec: composition of functors} We have now constructed the following (non-commutative) diagram of functors, where the horizontal arrows in the top row are the equivalences of Theorem \ref{Theorem: everybody else's theorem}: \begin{center} \begin{tikzpicture}[>=angle 90, bij/.style={above,inner sep=0.5pt}] \matrix(b)[matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] {\VA(X) & \CAlg(X) & \FA(X) \\ & \VA(A, H, S_X)& \\}; \path[->, font=\scriptsize] (b-1-1) edge node[bij]{$\sim$} node[below]{$\Psi_X$} (b-1-2) (b-1-2) edge node[bij]{$\sim$} (b-1-3) (b-2-2) edge node[below left]{$\Phi_X$} (b-1-1) (b-1-3) edge node[below right]{$\Gamma_X$} (b-2-2); \end{tikzpicture} \end{center} Let us fix some notation. Suppose that $(V, Y(\bullet, z), \vac)$ is a vertex algebra on $X$; let $\CV$ denote the underlying left $\CD_X$-module, and let $\CV^r$ denote the corresponding right $\CD_X$-module. The chiral algebra corresponding to the vertex algebra has underlying right $\CD_X$-module $\CV^r$, and chiral bracket $\mu^\ch$ corresponding to a map $\lambda^\ch$ of left $\CD_X$-modules as in (\ref{Eq: chiral bracket}). Finally, let us denote the corresponding factorization algebra by $\CB = \{ \CB_{X^I} \}$. Unwinding the definitions of the functors in Theorem \ref{Theorem: everybody else's theorem}, one sees that $\CB$ has the following properties: \begin{itemize} \item $\CB_X = \CV$ and $\CB_{X^2} = \Ker(\lambda^\ch: j_* j^* (\CV \boxtimes \CV) \to \Delta_* \CV)$. \item The factorization isomorphism $d: j^*(\CB_X \boxtimes \CB_X) \EquivTo j^*(\CB_{X^2})$ is the restriction of the obvious inclusion $\CV \boxtimes \CV \emb j_* j^* (\CV \boxtimes \CV)$ to $U = X \times X \setminus \Delta$.
\item Ran's isomorphism $\nu: \CB_X \EquivTo \Delta^*\CB_{X^2} = \CB_{X^2}/(x_1-x_2)\CB_{X^2}$ is given on sections by $v \mapsto 1 \otimes v \mod (x_1-x_2)\CB_{X^2}$ (it follows from properties of the unit that $1 \otimes v \in \Ker(\lambda^\ch) \subset j_* j^*(\CV \boxtimes \CV)$). \item The inverse $\nu^{-1}$ can also be written explicitly as follows: given a section $A$ of $\CB_{X^2}$, calculate $\lambda^\ch(A\cdot (x_1-x_2)^{-1})$, a section of $\Delta_* \CB_{X}$. Since $A$ is in the kernel of $\lambda^\ch$, we must obtain something of the form $\sum_{i} f_i(x_1) \otimes v_i \cdot (x_1-x_2)^{-1} = 1 \otimes \left(\sum_i f_i(x_2)\cdot v_i \right) (x_1-x_2)^{-1}$ (possibly $0$). Then it is easy to see that $A - \sum_i f_i(x_2)\cdot v_i$ is a section of the ideal sheaf $(x_1-x_2)\CB_{X^2}$, and so $A = \nu (\sum_i f_i(x) \cdot v_i)$ in $\Delta^*\CB_{X^2}$. For instance, for $A = 1 \otimes v = \nu(v)$ we have $\lambda^\ch(A \cdot (x_1-x_2)^{-1}) = Y(\vac, x_1-x_2)v \cdot (x_1-x_2)^{-1} = 1 \otimes v \cdot (x_1-x_2)^{-1}$, so the procedure returns $v$, as it should. \end{itemize} \begin{thm}\label{Theorem: composition} The functor $\Gamma_X$ provides a left-inverse to the functor $\Phi_X$ in the following sense: \begin{align*} \Phi_X \circ \Gamma_X (\CB) = V. \end{align*} \end{thm} \begin{rmk}\label{Remark: not an inverse} We will see in Section \ref{sec: failure to be an equivalence} that $\Phi_X$ is not an equivalence; in particular, $\Gamma_X$ does not provide a right-inverse to $\Phi_X$. \end{rmk} \begin{proof}[Proof of theorem] Let $\{V(I), \mu\}$ denote the $(A, H, S_X)$-vertex algebra $\Gamma_X (\CB)$. In particular, $V(1) = \Gamma(X, \CV) = V$, and so the vector space underlying the vertex algebra $\Phi_X \circ \Gamma_X(\CB)$ is indeed $V$, with the same $\Gamma(X, \CO_X)$-module structure as that of the original vertex algebra. Likewise, the derivation agrees with the original one.
Furthermore, the unit morphism $\CO \to \CB$ of the factorization algebra is determined by the fact that the global section $1$ of $\CO_X$ is sent to $\vac \in V = \Gamma(X, \CB_X)$; hence the unit morphism $\{S_X(I) \to V(I) \}$ has the same property, and in particular the vector $\vac$ is still the vacuum vector of the vertex algebra $\Phi_X \circ \Gamma_X(\CB)$. Let $\widetilde{Y}(\bullet, z)$ denote the vertex operator map of the new vertex algebra. It remains to show that for any $v, u \in V$, \begin{align*} \widetilde{Y}(v, z) u = Y(v,z) u. \end{align*} Recall the definition (\ref{Eq: Y}) of $\widetilde{Y}$ in terms of the singular multiplication $\mu$: we must choose $N$ so that $\mu_{12}(v \otimes u) \cdot (x_1-x_2)^N = A$ is in $V(1,2) = \Gamma(X^2, \CB_{X^2})$; then \begin{align*} \widetilde{Y}(v,z_1)u = \sum_{i\ge 0} \alpha^V_*(\del_1^{(i)} A) \otimes z_1^{i-N}. \end{align*} But both $\mu_{12}$ and $\alpha^V_*$ can be written in terms of the factorization structure of $\CB$: \begin{itemize} \item $\mu_{12}(v \otimes u) = d(v \otimes u)$; of course this is just $v \otimes u$ as a section of $j_*j^*(\CV \boxtimes \CV)$, but when we think of it as a section of $j_*j^*(\CB_{X^2})$, we use the fact that by our assumption above, it is of the form $A \cdot (x_1-x_2)^{-N}$, where $A$ is a global section of $\CB_{X^2}$. Thus we conclude that $A = v\otimes u \cdot (x_1-x_2)^{N}$. \item We factor $\alpha^V_*$ as follows: \begin{align*} V(1,2) \surj V(1,2) / (x_1-x_2)V(1,2) \xrightarrow{\nu^{-1}} V(1); \end{align*} in other words, $\alpha^V_*$ is calculated using the procedure described just before the statement of the theorem. \end{itemize} Comparing with $Y(v,z)u = \sum_{n \in \BZ} v_{n} (u) \cdot z^{-n-1}$, we see that to prove the theorem, it is enough to show that for every $i \ge 0$, \begin{align}\label{Eq: equation for coefficients of vertex operator} v_{N - i - 1}(u) = \alpha^V_*(\del_1^{(i)} A).
\end{align} (Note that the choice of $N$ ensures that $v_{N - i -1}(u) = 0$ for any $i < 0$: indeed, we know that $A \in \Ker(\lambda^\ch)$, or equivalently that $Y(v, (z-w))u \cdot (z-w)^N = \sum_{n \in \BZ} v_{n} u \cdot (z-w)^{N-n-1}$ has no non-zero terms in negative powers of $(z-w)$.) We claim that the following equation holds for any pair $v \otimes u$ together with an integer $N$ such that $v \otimes u \cdot (x_1-x_2)^N = A$ lies in $V(1,2)$, and any integers $i, k \ge 0$: \begin{align}\label{Eq: induction claim} \lambda^\ch (\del_1^i A \cdot (x_1-x_2)^{-k}) = \sum_{j = N - k - i}^{N - i - 1} \frac{(N - j - 1)!}{(N - j - 1 - i)!} v_{j}u \cdot (x_1-x_2)^{N - j - k - i - 1}. \end{align} This claim follows from a straightforward induction argument, using the definition of $A, N$, and $\lambda^\ch$ for the base case $i=0$, and the fact that $\lambda^\ch$ commutes with the action of $\del_1$ for the induction step. In particular, consider the case $k = 1$: we conclude that \begin{align*} \lambda^\ch(\del_1^i A \cdot (x_1-x_2)^{-1}) & = \frac{(N - (N - 1 - i) - 1)!}{(N - (N - 1 - i) - 1 - i)!} v_{N - 1 - i}u \cdot (x_1-x_2)^{N - (N - 1 - i) - 1 - i -1} \\ & = i! \, v_{N - 1 - i} u \cdot (x_1-x_2)^{-1}. \end{align*} This tells us that $\alpha^V_* (\del_1^i A) = i! \, v_{N - 1 -i}u$; in other words, Equation (\ref{Eq: equation for coefficients of vertex operator}) holds. The proof is complete. \end{proof} \section{Translation-equivariant version of the story}\label{Section: translation-equivariant} In this section, we recall the notion of a translation-equivariant vertex algebra on $\BBA^1$.
Then we introduce the notion of a translation-equivariant $(A,H,S_{\BBA^1})$-vertex algebra, and we show that the functor from $(A,H,S_{\BBA^1})$-vertex algebras to vertex algebras can be enhanced to a functor of the translation-equivariant categories; likewise, the functor from factorization algebras to $(A, H, S_{\BBA^1})$-vertex algebras can be enhanced to the translation-equivariant categories. We also give a functor from the category of $(A,H,S_B)$-vertex algebras to the category of translation-equivariant $(A, H, S_{\BBA^1})$-vertex algebras. (Unlike the analogous functor from vertex algebras to translation-equivariant vertex algebras on $\BBA^1$, this functor is not an equivalence.) \begin{defn}\label{def: translation-equivariant vertex algebras} Recall that a vertex algebra $V$ on $\BBA^1$ is in particular a $\BC[x]$-module. We consider the map $p: \BBA^1 \times \BBA^1 \to \BBA^1$ given by projection onto the second factor, and also the map $a: \BBA^1 \times \BBA^1 \to \BBA^1$ given by addition (i.e. translation). The vertex algebra $V$ is said to be \emph{translation-equivariant} if it is equipped with an isomorphism $\psi: p^*V \to a^*V$ of $\BC[t, x]$-modules which is compatible with the vertex algebra structure. (More precisely, one can formulate the notion of a \emph{family} of vertex algebras over $\BBA^1$; $p^*V$ and $a^*V$ are examples of such families, and $\psi$ is an isomorphism between them.) In particular, for any choice of $t=t_0$, we consider the restriction $V_{t_0}$ of $a^*V$ to $\{t_0\} \times \BBA^1$. As a vector space, $V_{t_0} = V$, but the $\BC[x]$-action is shifted by $x \mapsto x - t_0$. Under this action, $V_{t_0}$ is a vertex algebra over $\BBA^1$, with the same vacuum vector and vertex operators as $V$. The isomorphism $\psi: p^* V \to a^*V$ restricts to give an isomorphism $\psi_{t_0}: V \to V_{t_0}$ of vertex algebras over $\BBA^1$; these isomorphisms are compatible under composition and addition of the parameters $t_0$.
Let us denote the category of translation-equivariant vertex algebras on $\BBA^1$ by $\VA(\BBA^1)^{\BBA^1}$. \end{defn} Recall also part (2) of Theorem \ref{Theorem: everybody else's theorem}, which states that the category of translation-equivariant vertex algebras on $\BBA^1$ is equivalent to the category of vertex algebras. This equivalence is given on the level of vector spaces by taking invariant vectors, or as explained in the appendix of \cite{BDHK}, by taking the stalk at $0$. Let us now formulate the definition of a \emph{translation-equivariant $(A,H,S)$-vertex algebra on $\BBA^1$}. First recall the definition of a family of $(A,H,S)$-vertex algebras over $\BBA^1$, as defined in Remark \ref{rmk: AHS in families}; here we are interested in the case $p_1: X = \BBA^1 \times \BBA^1 \to \BBA^1$. Observe that the second projection map $p: X \to \BBA^1$ extends to projection maps $p^I: X^I_{\BBA^1} \to \BBA^I$ which are compatible with diagonal embeddings and projections. It follows that for an $(A, H, S_{\BBA^1})$-vertex algebra $V$, we can use pullback along $p$ to define a family of $(A, H, S)$-vertex algebras $p^*V$ over $\BBA^1$: for $I \in \Fin$, we have $p^*V(I) \defeq (p^I)^*(V(I)) = \BC[t, x_i]_{i \in I} \otimes_{\BC[x_i]_{i \in I}} V(I)$. (This is the \emph{trivial} family of $(A, H, S)$-vertex algebras.) Similarly, the action of $\BBA^1$ on itself by addition induces compatible maps $a: X \to \BBA^1$ and $a^I: X^I_{\BBA^1} \to \BBA^I$ (the latter can be identified with the diagonal action of $\BBA^1$ on $\BBA^I$); hence we obtain a second family $a^* V$. \begin{defn} A \emph{translation-equivariant $(A,H,S_{\BBA^1})$-vertex algebra} is a pair $(V, \psi)$, where $V$ is an $(A,H,S_{\BBA^1})$-vertex algebra, and $\psi: p^*V \to a^*V$ is an isomorphism of families of $(A,H,S)$-vertex algebras.
The isomorphism $\psi$ is required to satisfy a natural compatibility condition on $\BBA^1 \times \BBA^1 \times \BBA^1$: namely, letting $p_3: (x,y,z) \mapsto z$ and $a_3: (x,y,z) \mapsto x+y+z$, we obtain from $\psi$ two isomorphisms $p_3^* V \to a_3^* V$, \begin{align}\label{eq: cocycle condition} p_3^*V = (p \times \id)^* p^* V \EquivTo (p \times \id)^* a^* V = (\id \times a)^*p^*V \EquivTo (\id \times a)^* a^* V = a_3^* V; \nonumber\\ p_3^*V = (a \times \id)^*p^* V \EquivTo (a \times \id)^* a^* V = a_3^*V. \end{align} The condition on $\psi$ is that these two compositions must be equal. Let us denote the category of translation-equivariant $(A, H, S_{\BBA^1})$-vertex algebras by $\VA(A, H, S_{\BBA^1})^{\BBA^1}$. \end{defn} Now let $\CB$ be any factorization algebra on $\BBA^1$. Similarly to the above, the families of maps $\{p^I\}$ and $\{a^I\}$ induced by $p$ and $a$, respectively, allow us to pull $\CB$ back to get relative factorization algebras $p^*\CB$ and $a^*\CB$ on $\BBA^1 \times \BBA^1$ over $\BBA^1$ (with respect to the first projection map). \begin{defn} A \emph{translation-equivariant factorization algebra on $\BBA^1$} is a factorization algebra $\CB$ on $\BBA^1$ together with an isomorphism $\psi_\CB: p^* \CB \to a^* \CB$, satisfying a compatibility condition analogous to the condition described in (\ref{eq: cocycle condition}) above. \end{defn} Let us denote the category of translation-equivariant factorization algebras on $\BBA^1$ by $\FA(\BBA^1)^{\BBA^1}$. The following is immediate from the definitions. \begin{prop}\label{Prop: translation-equivariant factorization algebra to AHS} The functor $\Gamma_{\BBA^1}$ naturally extends to a functor (still called $\Gamma_{\BBA^1}$) \begin{align*} \FA(\BBA^1)^{\BBA^1} \to \VA(A, H, S_{\BBA^1})^{\BBA^1}. \end{align*} \end{prop} Recall the functor $\Phi_{\BBA^1} : \VA(A, H, S_{\BBA^1}) \to \VA({\BBA^1})$ of Theorem \ref{Theorem: AHS to vertex algebra}.
\begin{prop}\label{Prop: translation-equivariant AHS to vertex algebra} The functor $\Phi_{\BBA^1}$ extends naturally to a functor (still denoted by $\Phi_{\BBA^1}$) \begin{align*} \VA(A, H, S_{\BBA^1})^{\BBA^1} \to \VA({\BBA^1})^{\BBA^1}. \end{align*} \end{prop} \begin{proof} Given a pair $(V, \psi)$ in $\VA(A, H, S_{\BBA^1})^{\BBA^1}$, we know that $V(1)$ has a structure of vertex algebra on $\BBA^1$. The isomorphism $\psi: p^* V \to a^*V$ gives in particular an isomorphism of vector spaces $p^*(V(1)) \to a^*(V(1))$. It is not hard to check from the construction of $\Phi_{\BBA^1}$ that this isomorphism of vector spaces respects the vertex algebra structure. \end{proof} \begin{prop}\label{Prop: Sb to translation equivariant by tensoring} There is a natural functor $\Xi: \VA(A,H,S_B) \to \VA(A, H, S_{\BBA^1})^{\BBA^1}$. \end{prop} \begin{proof} Given $V \in \VA(A,H,S_B)$, we define an $(A, H, S_{\BBA^1})$-vertex algebra $W$ by setting, for $I \in \Fin$, $W(I) = V(I) \otimes_{S_B(I)} S_{\BBA^1}(I) = V(I) \otimes_{\BC} \BC[x_i]_{i \in I}$. This is naturally a module over $S_{\BBA^1}$, and we define the action of $T(I) = \BC[\del_i]$ by letting $\del_i$ act by $\del_i \otimes 1 + 1 \otimes \frac{d}{dx_i}$. It is easy to see that this gives an object of $\Fun(\Fin, A, T, S_{\BBA^1})$; one can also extend the singular multiplication of $V$ $\BC[x_i]$-linearly to give a singular multiplication on $W$. It remains to show that the resulting $(A, H, S_{\BBA^1})$-vertex algebra $W$ can be given an equivariant structure $\psi$. For any $I \in \Fin$, we need to define $\psi^I: p^* W(I) \EquivTo a^* W(I)$. Unwinding the definitions, we can see that $p^*W(I) \cong \BC[t, x_i] \otimes V(I)$, and likewise $a^*W(I) \cong \BC[t, x_i] \otimes V(I)$; we can take $\psi^I$ to be the identity map. It is immediate that this is functorial and respects the singular multiplication structures.
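The compatibility just asserted can be illustrated concretely. The following snippet is a purely illustrative sanity check of ours (not part of the construction; it assumes the SymPy library): it models an element $\sum_k \del^k v \otimes f_k(x)$ of $V \otimes \BC[x]$ by recording only the formal power of $\del$ applied to the abstract vector $v$, and verifies the Leibniz rule $\del(g \cdot w) = g' \cdot w + g \cdot \del(w)$ for the action $\del \otimes 1 + 1 \otimes \frac{d}{dx}$ on one example.

```python
import sympy as sp

x = sp.symbols('x')

# An element sum_k (del^k v) ⊗ f_k(x) of V ⊗ C[x] is modeled as {k: f_k},
# where k counts formal applications of del to the abstract vector v.
def del_action(w):
    # the action of del ⊗ 1 + 1 ⊗ d/dx
    out = {}
    for k, f in w.items():
        out[k + 1] = out.get(k + 1, 0) + f          # del ⊗ 1
        out[k] = out.get(k, 0) + sp.diff(f, x)      # 1 ⊗ d/dx
    return {k: sp.expand(f) for k, f in out.items() if f != 0}

def mult(g, w):
    # the C[x]-module structure: multiply the polynomial factor by g
    return {k: sp.expand(g * f) for k, f in w.items()}

def add(w1, w2):
    out = dict(w1)
    for k, f in w2.items():
        out[k] = sp.expand(out.get(k, 0) + f)
    return {k: f for k, f in out.items() if f != 0}

# Leibniz rule: del(g . w) == g' . w + g . del(w), for w = v ⊗ x^2, g = x^3
w, g = {0: x**2}, x**3
assert del_action(mult(g, w)) == add(mult(sp.diff(g, x), w), mult(g, del_action(w)))
```

The same bookkeeping, with one variable $x_i$ per element of $I$, underlies the compatibility of the $T$- and $S_{\BBA^1}$-actions on $W(I)$.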
\end{proof} \begin{rmk}[Remarks on Proposition \ref{Prop: Sb to translation equivariant by tensoring}] Note that the $(A,H,S_{\BBA^1})$-vertex algebra $W$ constructed in the proof is almost never in the image of the functor $\Gamma_{\BBA^1} : \FA(\BBA^1) \to \VA(A, H, S_{\BBA^1})$. Indeed, we can see that factorization fails to hold on $\BBA^2$ as follows. Let $\CW(I)$ be the sheaf on $\BBA^I$ associated to $W(I)$. We can see that the restriction of $\CW(1,2)$ to the diagonal has global sections $V(1,2) \otimes \BC[x]$, while its restriction to the complement of the diagonal has global sections $V(1,2) \otimes \BC[x_1,x_2, (x_1-x_2)^{-1}]$. If we assume that $\CW$ is a factorization algebra, then from the first fact, we conclude that $V(1,2) \cong V(1)$, while from the second, we see that $V(1,2) \cong V(1) \otimes V(1)$; hence we must have $V(1) \otimes V(1) \cong V(1)$. \end{rmk} \section{The failure of Borcherds's functor to be an equivalence: case study for the Virasoro and the lattice VOA.}\label{sec: failure to be an equivalence} In this section, we will show that the functor $\Phi: \VA(A, H, \Sb) \to \VA$ fails to be an equivalence. More precisely, recall Example \ref{Example: conformal structure on lattice VOA}, in which we defined the \emph{conformal structure} on the rank 1 lattice vertex algebra $V^L$. Recall also Example \ref{Example: lattice AHS}: Borcherds constructs an example $\BV^L$ of an $(A, H, \Sb)$-vertex algebra such that $\Phi(\BV^L)$ is the lattice vertex algebra $V_L$. If $\Phi$ were essentially surjective, we could find an object $W \in \VA(A, H, \Sb)$ such that $\Phi(W) \cong \Vir_1$; and then if $\Phi$ were in addition fully faithful, we could also find a morphism $F: W \to \BV^L$ such that $\Phi(F)$ recovered the map of vertex algebras described in Example \ref{Example: conformal structure on lattice VOA}, \begin{align*} \phi_0: \Vir_1 & \to V_L\\ L_{-2}v_1 & \mapsto \omega_0. 
\end{align*} We will prove that no such pair $(W, F)$ exists, and thus will have proved the following theorem: \begin{thm}\label{Theorem: Borcherds doesn't give an equivalence, actual} The functor $\Phi: \VA(A, H, \Sb) \to \VA$ is not an equivalence of categories. \end{thm} \begin{proof} Suppose towards a contradiction that we have a pair $(W, F)$ as described above. In particular, we can identify $W(1)$ with $\Vir_1$, and $F(1)$ with the map $\phi_0 : \Vir_1 \to V_L \cong V^L$. There is an element $\omega \in W(1)$ corresponding to $L_{-2}v_1 \in \Vir_1$; its image in $V^L$ under $F(1)$ is the image of $\omega_0 = \frac{1}{2} b_{-1}^2\vac \in V_L$ under the identification $f$ of (\ref{Eq: identifying V^L and V_L}). Notice that $\frac{1}{2} b_{-1}^2 \vac = \frac{1}{2}(b_{-1}\vac)^2$ in the commutative algebra structure of $V_L$, and $b_{-1}| 0 \rangle = b_{-1}|\lambda_0 \rangle \cdot | - \lambda_0 \rangle$ (here $\lambda_0 = \sqrt{N}$ is the generator of $L = \sqrt{N}\BZ$, and $b = \sqrt{N} \otimes \frac{1}{\sqrt{N}}$ is the generator of $\fh$). Since $b_{-1} | \lambda_0 \rangle = \frac{1}{\sqrt{N}} (\lambda_0 )_{-1}| \lambda_0 \rangle = \frac{1}{\sqrt{N}} T | \lambda_0 \rangle$, we conclude from the properties of $f$ that $F(1)(\omega) = \frac{1}{2 N} e^{-2 \lambda_0} (\lambda_0(1))^2$. Let us denote this element by $\alpha$. Let us now outline the strategy of the proof, before continuing with the computations. \begin{description} \item[Step 1] We compute $\mu_{V^L}(\alpha \otimes \alpha)$ directly, and obtain \begin{align}\label{Eq: alpha computation} \alpha \otimes \alpha + \frac{1}{N}e^{-{\lambda_0}} {\lambda_0}(1) \otimes e^{-{\lambda_0}} {\lambda_0}(1) (x-y)^{-2} + \frac{1}{2} e^0 \otimes e^0 (x-y)^{-4}.
\end{align} \item[Step 2] Using the compatibility of $F$ with the multiplication structures on $W$ and $\BV^L$, we see that \begin{align*} F(1:2) \mu_W(\omega \otimes \omega) = \mu_{V^L}(\alpha \otimes \alpha); \end{align*} combining this with the equation (\ref{Eq: alpha computation}) and the fact that $F(1:2) = F(1,2) \otimes \id_{\BC[(x-y)^{\pm1}]}$, we conclude that there exists some element $\beta \in W(1,2)$ such that $F(1,2)(\beta) = \alpha \otimes \alpha$. \item[Step 3] We consider $\del_x \beta$; by compatibility of $F$ with the functoriality of $W$ and $\BV^L$ with respect to the map $\gamma: \{1,2\} \to \{1\}$ we have that \begin{align}\label{Eq: beta equation} \phi_0 \circ \gamma^W_* (\del_x \beta) = f\circ \gamma^{\BV^L}_* \circ F(1,2) (\del_x \beta). \end{align} However, we compute the right-hand side of this equation directly, and show that it is not in the image of $\phi_0$. This contradicts the assumption that the pair $(W, F)$ exists, and hence proves the theorem. \end{description} To complete the proof, we will now explain each step. \subsection{Step 1: Computation of \texorpdfstring{$\mu_{V^L}(\alpha \otimes \alpha)$}{multiplication of alpha with itself}} Recall from (\ref{Eq: definition of mu}) that $\mu_{V^L}(\alpha \otimes \alpha) = (\alpha_{(1)} \otimes \alpha_{(1)}) r(\alpha_{(2)} \otimes \alpha_{(2)})$. We compute that \begin{align*} \Delta(\alpha) & = \frac{1}{2N} \left(e^{-2\lambda_0} (\lambda_0(1))^2 \otimes e^0 + 2 e^{-\lambda_0} \lambda_0 (1) \otimes e^{-\lambda_0} \lambda_0 (1) + e^0 \otimes e^{-2\lambda_0} (\lambda_0(1))^2\right). \end{align*} In order to compute $\mu_{V^L}(\alpha \otimes \alpha)$, we need to compute $r(\alpha_{(2)}\otimes\alpha_{(2)})$ for each of the three terms in $\alpha_{(2)}$. Before we begin our computation, let us write down the value of $r$ on a few basic pairs of elements. This will help us get used to working with $r$, and will also be useful in later calculations. 
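Every $r$-value computed in this subsection reduces to elementary differentiation of powers of $(x-y)$, so the arithmetic can be double-checked mechanically. The following snippet is our own sanity check, not part of the proof (it assumes the SymPy library): it verifies the differentiation rules behind (\ref{Eq: T on x}) and (\ref{Eq: T on y}) for symbolic $m, n, N$, together with the product-rule computation $\del_x\left[(x-y)^{-N}\cdot(-N)(x-y)^{N-1}\right] = N(x-y)^{-2}$ that appears below.

```python
import sympy as sp

x, y, N, m, n = sp.symbols('x y N m n')

# r(e^{n.lam} ⊗ e^{m.lam}) = (x-y)^{nmN}; replacing e^{lam} by lam(1) in the
# first (resp. second) factor corresponds to applying del_x (resp. del_y).

# (Eq: T on x): r(lam(1) ⊗ e^{m.lam}) = del_x (x-y)^{mN} = mN (x-y)^{mN-1}
assert sp.simplify(sp.diff((x - y)**(m*N), x) - m*N*(x - y)**(m*N - 1)) == 0

# (Eq: T on y): r(e^{n.lam} ⊗ lam(1)) = del_y (x-y)^{nN} = -nN (x-y)^{nN-1}
assert sp.simplify(sp.diff((x - y)**(n*N), y) + n*N*(x - y)**(n*N - 1)) == 0

# the key value r(lam(1) ⊗ e^{-lam} lam(1)):
# del_x [ (x-y)^{-N} * (-N)(x-y)^{N-1} ] = N (x-y)^{-2}
key = sp.diff((x - y)**(-N) * (-N) * (x - y)**(N - 1), x)
assert sp.simplify(key - N*(x - y)**(-2)) == 0
```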
First, from the definition of $r$, we have \begin{align}\label{Eq: r on pure} r(e^{n{\lambda_0}} \otimes e^{m{\lambda_0}}) = (x-y)^{nmN}. \end{align} In particular, when $n = m = 0$, \begin{align}\label{Eq: r on trivial} r(e^0 \otimes e^0) = 1. \end{align} The compatibility of $r$ with the action of $\del$ on the first and second factors, respectively, gives us the following: \begin{align} r({\lambda_0}(1) \otimes e^{m{\lambda_0}}) &= \del_x r(e^{\lambda_0} \otimes e^{m{\lambda_0}}) = mN(x-y)^{mN -1} \label{Eq: T on x}\\ r(e^{n{\lambda_0}} \otimes {\lambda_0}(1)) & = -nN(x-y)^{nN -1} \label{Eq: T on y}. \end{align} In particular, when $m$ or $n$ is zero, respectively, the value of $r$ is 0. Combining, we see that \begin{align} r({\lambda_0}(1) \otimes {\lambda_0}(1)) &= -N(N-1) (x-y)^{N -2} \label{Eq: T on both}. \end{align} To calculate more complicated instances of $r$, we use the compatibility of $r$ with the coproduct $\Delta$. Let us record the following values of $\Delta$, for future reference: \begin{align} \Delta(e^{n{\lambda_0}}) &= e^{n{\lambda_0}} \otimes e^{n{\lambda_0}} \label{Eq: Delta on group-like};\\ \Delta({\lambda_0}(1)) & = {\lambda_0}(1) \otimes e^{\lambda_0} + e^{\lambda_0} \otimes {\lambda_0}(1) \label{Eq: Delta on primitive};\\ \Delta(({\lambda_0}(1))^2) &= ({\lambda_0}(1))^2 \otimes e^{2{\lambda_0}} + 2 e^{\lambda_0} {\lambda_0}(1) \otimes e^{\lambda_0} {\lambda_0}(1) + e^{2{\lambda_0}} \otimes ({\lambda_0}(1))^2 \label{Eq: Delta on primitive squared}. \end{align} Now we begin the calculation of $\mu_{V^L}(\alpha \otimes \alpha)$. First, from (\ref{Eq: r on trivial}), we have \begin{align*} r(e^0 \otimes e^0) = 1. \end{align*} Next, we need to compute \begin{align*} r\left(e^{-\lambda_0}\lambda_0(1) \otimes e^{-\lambda_0}\lambda_0(1)\right) & = r\left(e^{-\lambda_0} \otimes \left(e^{-\lambda_0}\lambda_0(1)\right)_{(1)}\right)\cdot r\left(\lambda_0(1) \otimes \left( e^{-\lambda_0}\lambda_0(1)\right) _{(2)}\right).
\end{align*} We have \begin{align*} \Delta\left( e^{-\lambda_0}\lambda_0(1) \right) & = e^{-{\lambda_0}} {\lambda_0}(1) \otimes e^0 + e^0 \otimes e^{-{\lambda_0}} {\lambda_0}(1), \end{align*} so $r\left(e^{-\lambda_0}\lambda_0(1) \otimes e^{-\lambda_0}\lambda_0(1)\right)$ is a sum of two terms, \begin{align} r(e^{-{\lambda_0}} \otimes e^{-\lambda_0}\lambda_0(1)) \cdot r({\lambda_0}(1) \otimes e^0) \label{Eq: first} \\ + \quad r(e^{-{\lambda_0}} \otimes e^0) \cdot r({\lambda_0}(1) \otimes e^{-\lambda_0}\lambda_0(1)). \label{Eq: second} \end{align} By (\ref{Eq: T on x}), the term (\ref{Eq: first}) is 0. To compute (\ref{Eq: second}), we observe that $r(e^{-{\lambda_0}} \otimes e^0) = 1$, and that \begin{align*} r({\lambda_0} (1) \otimes e^{-{\lambda_0}} {\lambda_0}(1)) & = \del_x \left( r(e^{\lambda_0} \otimes e^{-{\lambda_0}}{\lambda_0}(1)) \right) = \del_x \left( r(e^{\lambda_0} \otimes e^{-{\lambda_0}}) \cdot r(e^{\lambda_0} \otimes {\lambda_0}(1) )\right)\\ & = \del_x \left[ (x-y)^{-N} \cdot (-N)(x-y)^{N-1} \right] = N(x-y)^{-2}. \end{align*} We conclude that \begin{align*} r\left(e^{-\lambda_0}\lambda_0(1) \otimes e^{-\lambda_0}\lambda_0(1)\right) = N (x-y)^{-2}. \end{align*} Finally we need to calculate $R \defeq r(e^{-2{\lambda_0}} ({\lambda_0}(1))^2 \otimes e^{-2{\lambda_0}} ({\lambda_0}(1))^2)$. Using our calculation of $\Delta (e^{-{\lambda_0}} {\lambda_0}(1))$ from above, we write down $\Delta(e^{-2{\lambda_0}} ({\lambda_0}(1))^2)$; then we can write $R$ as a sum of three terms as follows: \begin{align} \label{Eq: a} r(e^{-2{\lambda_0}} \otimes e^{-2{\lambda_0}} ({\lambda_0}(1))^2) \cdot r( ({\lambda_0}(1))^2 \otimes e^0) \\ \label{Eq: b} + \quad 2 r(e^{-2{\lambda_0}} \otimes e^{-{\lambda_0}} {\lambda_0}(1)) \cdot r( ({\lambda_0}(1))^2 \otimes e^{-{\lambda_0}} {\lambda_0}(1) )\\ \label{Eq: c} + \quad r(e^{-2{\lambda_0}} \otimes e^0) \cdot r (({\lambda_0}(1))^2 \otimes e^{-2{\lambda_0}} ({\lambda_0}(1))^2) . 
\end{align} However, we check that $r( ({\lambda_0}(1))^2 \otimes e^0)$ and $r( ({\lambda_0}(1))^2 \otimes e^{-{\lambda_0}} {\lambda_0}(1) )$ are both $0$, so the terms (\ref{Eq: a}) and (\ref{Eq: b}) vanish. To calculate (\ref{Eq: c}), we note that $r(e^{-2{\lambda_0}} \otimes e^0) = 1,$ and we use the expansion of $\Delta(({\lambda_0}(1))^2)$ to write $r (({\lambda_0}(1))^2 \otimes e^{-2{\lambda_0}} ({\lambda_0}(1))^2)$ as a sum \begin{align} \label{Eq: d} r(e^{2{\lambda_0}} \otimes e^{-{\lambda_0}}{\lambda_0}(1)) \cdot r(({\lambda_0}(1))^2 \otimes e^{-{\lambda_0}}{\lambda_0}(1) ) \\ \label{Eq: e} +\quad 2r(e^{\lambda_0} {\lambda_0}(1) \otimes e^{-{\lambda_0}}{\lambda_0}(1)) \cdot r( e^{\lambda_0} {\lambda_0}(1) \otimes e^{-{\lambda_0}}{\lambda_0}(1)) \\ \label{Eq: f} +\quad r( ({\lambda_0}(1))^2 \otimes e^{-2{\lambda_0}} ) \cdot r( e^{2{\lambda_0}} \otimes e^{-{\lambda_0}}{\lambda_0}(1)). \end{align} The first and last terms vanish, and so it remains to calculate the term (\ref{Eq: e}). Using the calculation of $\Delta(e^{-{\lambda_0}} {\lambda_0}(1))$ from above, we see that $r(e^{\lambda_0} {\lambda_0}(1) \otimes e^{-{\lambda_0}} {\lambda_0}(1))$ is equal to \begin{align} r(e^{\lambda_0} \otimes e^{-{\lambda_0}} {\lambda_0}(1)) \cdot r({\lambda_0}(1) \otimes e^0) \label{Eq: g}\\ + r(e^{\lambda_0} \otimes e^0) \cdot r({\lambda_0}(1) \otimes e^{-{\lambda_0}} {\lambda_0}(1)) \label{Eq: h}. \end{align} Once again, (\ref{Eq: g}) is 0, and $r(e^{\lambda_0} \otimes e^0) = 1$, so \begin{align*} r(e^{\lambda_0} {\lambda_0}(1) \otimes e^{-{\lambda_0}} {\lambda_0}(1)) = r({\lambda_0}(1) \otimes e^{-{\lambda_0}} {\lambda_0}(1)) = N(x-y)^{-2}. \end{align*} From this it follows that \begin{align*} r (({\lambda_0}(1))^2 \otimes e^{-2{\lambda_0}} ({\lambda_0}(1))^2) &= 2 r(e^{\lambda_0} {\lambda_0}(1) \otimes e^{-{\lambda_0}} {\lambda_0}(1))^2 = 2N^2(x-y)^{-4}.
\end{align*} Combining our results, we conclude that \begin{align*} \mu_{V^L}(\alpha \otimes \alpha) &= \frac{1}{4N^2} e^{-2{\lambda_0}} ({\lambda_0}(1))^2 \otimes e^{-2{\lambda_0}} ({\lambda_0}(1))^2 \\ & \qquad + \frac{1}{N^2} e^{-{\lambda_0}}{\lambda_0}(1) \otimes e^{-{\lambda_0}} {\lambda_0}(1) \cdot N (x-y)^{-2} \\ & \qquad + \frac{1}{4N^2} e^0 \otimes e^0 \cdot 2N^2(x-y)^{-4}, \end{align*} and so \begin{align}\label{Eq: conclusion} \mu_{V^L}(\alpha \otimes \alpha) = \alpha \otimes \alpha + \frac{1}{N}e^{-{\lambda_0}}{\lambda_0}(1) \otimes e^{-{\lambda_0}} {\lambda_0}(1) \cdot (x-y)^{-2} + \frac{1}{2} e^0 \otimes e^0 \cdot (x-y)^{-4}, \end{align} as claimed. \subsection{Step 2: Existence of \texorpdfstring{$\beta$}{beta}} Consider now the following commutative diagram, showing the compatibility of $F$ with the singular multiplication structures on $W$ and $\BV^L$: \begin{center} \begin{tikzpicture}[>=angle 90] \matrix(b)[matrix of math nodes, row sep=2em, column sep=3em, text height=1.5ex, text depth=0.25ex] {W(1) \otimes W(2) \otimes \BC[(x - y) ^{\pm 1}] & W(1:2) = W(1, 2) \otimes \BC[(x - y) ^{\pm 1}] \\ V^L \otimes V^L \otimes \BC[(x-y)^{\pm 1}] & V(1:2) = V^L \otimes V^L \otimes \BC[(x-y)^{\pm 1}].\\ }; \path[->, font=\scriptsize] (b-1-1) edge node[above]{$\mu_W$} (b-1-2) (b-2-1) edge node[below]{$\mu_{V^L}$} (b-2-2) (b-1-1) edge node[left]{$F(\pt) \otimes F(\pt) \otimes \id_{\BC[(x-y)^{\pm 1}]}$} (b-2-1) (b-1-2) edge node[right]{$F(1:2) = F(1,2) \otimes \id_{\BC[(x-y)^{\pm 1}]}$} (b-2-2); \end{tikzpicture} \end{center} In particular, $\mu_W(\omega \otimes \omega) \in W(1:2)$ has the property that $F(1:2)(\mu_W(\omega \otimes \omega)) = \mu_{V^L} (\alpha \otimes \alpha)$. It follows that there is some element $\beta$ of $W(1,2)$ such that $F(1,2)(\beta) = \alpha \otimes \alpha$.
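The elementary scalar calculus behind Step 1 -- for instance $\del_x\left[(x-y)^{-N}\cdot(-N)(x-y)^{N-1}\right] = N(x-y)^{-2}$, and the coefficient bookkeeping $\frac{1}{N^2}\cdot N = \frac{1}{N}$ and $\frac{1}{4N^2}\cdot 2N^2 = \frac{1}{2}$ leading to (\ref{Eq: conclusion}) -- can be spot-checked mechanically. The following Python snippet is purely illustrative (our own, not part of the proof; the function names are ours):

```python
from fractions import Fraction

# Spot-check the scalar identity used in Step 1:
#   d/dx [ (x-y)^(-N) * (-N)(x-y)^(N-1) ] = N (x-y)^(-2),
# via a central finite difference at sample numeric values.
def f(x, y, N):
    return (x - y) ** (-N) * (-N) * (x - y) ** (N - 1)

def check_derivative(x=2.0, y=0.5, N=3, h=1e-6):
    fd = (f(x + h, y, N) - f(x - h, y, N)) / (2 * h)
    exact = N * (x - y) ** (-2)
    return abs(fd - exact) < 1e-6

# Coefficient bookkeeping for (Eq: conclusion):
#   (1/N^2) * N       -> 1/N  (coefficient of the middle term)
#   (1/(4N^2)) * 2N^2 -> 1/2  (coefficient of the last term)
def conclusion_coefficients(N):
    middle = Fraction(1, N**2) * N
    last = Fraction(1, 4 * N**2) * (2 * N**2)
    return middle, last

assert check_derivative()
assert conclusion_coefficients(5) == (Fraction(1, 5), Fraction(1, 2))
```

The finite-difference check is a numeric sanity test only; the exact cancellation $(x-y)^{-N}\cdot(-N)(x-y)^{N-1} = -N(x-y)^{-1}$ is of course immediate by hand.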
\subsection{Step 3: Computations with \texorpdfstring{$\del_x \beta$}{derivative of beta}} Since $F$ is required to intertwine the functoriality of $W$ and $\BV^L$, the following diagram must commute: \begin{center} \begin{tikzpicture}[>=angle 90] \matrix(b)[matrix of math nodes, row sep=2em, column sep=3em, text height=1.5ex, text depth=0.25ex] {W(1,2) & W(1) & \Vir_1\\ \BV^L(1,2) & \BV^L(1) & \\ V^L \otimes V^L & V^L & V_L.\\}; \path[->, font=\scriptsize] (b-1-1) edge node[above]{$\gamma^W_*$} (b-1-2) (b-2-1) edge node[above]{$\gamma^V_*$} (b-2-2) (b-3-1) edge node[above]{$m$} (b-3-2) (b-1-1) edge node[left]{$F(1,2)$} (b-2-1) (b-1-2) edge node[right]{$F(1)$} (b-2-2) (b-1-3) edge node[right]{$\phi_0$} (b-3-3) (b-3-2) edge node[above]{$f$} (b-3-3); \path[-] (b-2-1) edge[double] (b-3-1) (b-2-2) edge[double] (b-3-2) (b-1-2) edge[double] (b-1-3); \end{tikzpicture} \end{center} In particular, we must have \begin{align}\label{eq: commutativity condition} f \circ m \circ F(1,2) (\del_x \beta) = \phi_0 \circ \gamma^W_* (\del_x \beta). \end{align} We calculate the left-hand side as follows: \begin{align*} m \circ F(1,2) (\del_x \beta) &= m (\del_x (F(1,2)(\beta)) ) = m (\del_x (\alpha \otimes \alpha) ) = m ((\del_x \alpha) \otimes \alpha). \end{align*} From the definition of $\alpha$, we calculate that \begin{align*} \del_x (\alpha) & = \frac{1}{N} \left( 2 e^{-2{\lambda_0}} {\lambda_0}(1) {\lambda_0}(2) - e^{-3{\lambda_0}}({\lambda_0}(1))^3 \right), \end{align*} and so \begin{align*} m ((\del_x \alpha) \otimes \alpha) = \frac{1}{2N^2}\left(2e^{-4{\lambda_0}}({\lambda_0}(1))^3{\lambda_0}(2) - e^{-5{\lambda_0}}({\lambda_0}(1))^5 \right). \end{align*} Under the isomorphism $f: V^L \to V_L$, this corresponds to the element \begin{align*} \frac{1}{2N^2} ({\lambda_0})_{-1}^3 ({\lambda_0})_{-2} \vac = \frac{1}{2}b_{-2} b_{-1}^3 \vac. 
\end{align*} We claim that this element is not in the image of $\phi_0$, contradicting the equation ($\ref{eq: commutativity condition}$), and thus proving the theorem. To prove this, consider the gradings on $\Vir_1$ and $V_L$ defined by setting \begin{align*} \deg v_1 = 0, \qquad & \deg L_{-n} = n \qquad \text{ for $\Vir_1$};\\ \deg b_{-n} = n \qquad & \qquad \text{ for $V_L$}. \end{align*} Note that $\phi_0$ is compatible with these gradings. Note also that the degree 5 piece of $\Vir_1$ has dimension 2---it is spanned by the vectors $L_{-5}v_1$ and $L_{-2}L_{-3}v_1$ (since $L_{-3}L_{-2}v_1$ can be written as a linear combination of the latter two vectors). It follows that the image of $\phi_0$ in degree 5 has dimension at most two, and is spanned by $\phi_0(L_{-5}v_1)$ and $\phi_0(L_{-2}L_{-3}v_1)$. We calculate directly from the definition of $\phi_0$ that \begin{align*} \phi_0(L_{-5}v_1) & = b_{-4}b_{-1}\vac + b_{-3}b_{-2}\vac;\\ \phi_0(L_{-2}L_{-3}v_1) & = 2 b_{-4}b_{-1} \vac + b_{-3}b_{-2}\vac + \frac{1}{2}b_{-2}b_{-1}^3\vac. \end{align*} Since $\frac{1}{2}b_{-2}b_{-1}^3\vac$ is not in the span of these two vectors, it cannot be in the image of $\phi_0$. This completes Step 3, and the proof of the theorem. \end{proof} \begin{rmk}\label{Remark: modify morphisms} An interesting question is whether the notion of a morphism of $(A,H,S)$-vertex algebras can be weakened so that the properties of the morphism $F$ used to reach a contradiction in the proof of Theorem \ref{Theorem: Borcherds doesn't give an equivalence, actual} above no longer hold. One would hope that in that case, the pair $(W, F)$ could be defined; one would also hope that the lattice $(A,H,S)$-vertex algebra defined by Borcherds could be identified in $\VA(A,H,S_{\BBA^1})^{\BBA^1}$ with the $(A, H, S)$-vertex algebra coming from the translation-equivariant lattice factorization algebra. \end{rmk} \bibliographystyle{alpha}
\section{Introduction} \label{sec:intro} Optical Coherence Tomography (OCT) is an imaging modality used to visualize corneal \cite{Izatt1994}, limbal \cite{Kira2012}, and retinal structures \cite{Huang1991} with micrometer resolution. OCT can be used to estimate corneal biometric parameters \cite{Kuo2012}, such as corneal curvature and refractive power, and it has been integrated into surgical microscopes for use in surgical procedures such as cataract surgery, LASIK, and Deep Anterior Lamellar Keratoplasty (DALK) \cite{Kuo2012,Keller2018}. Accurate reconstruction of the cornea and estimation of these parameters for clinical use requires precise delineation of corneal tissue interfaces, thereby aiding surgeons with their surgical planning. While many non-proprietary image analysis-based corneal interface segmentation approaches exist \cite{LaRocca2011,Ge2012,Williams2016,Rabbani2016,Zhang2017} in the literature, they do not generalize to volumes acquired from different OCT scanners. These approaches are ad-hoc with key parameters being chosen manually; for example, in Fig. \ref{fig:fig1}, recent approaches \cite{LaRocca2011,Ge2012,Zhang2017}, developed for images (B-scans) acquired by a Spectral Domain OCT (SD-OCT) scanner scanning a 6$\times$6mm area, failed while segmenting the Epithelium (shallowest layer) in 3$\times$3mm volumes acquired by an Ultra High Resolution OCT (UHR-OCT) scanner. Assumptions on the central artifact location \cite{LaRocca2011,Ge2012,Williams2016,Rabbani2016,Zhang2017} break down when the artifacts are located in different regions of the image (see Fig. \ref{fig:fig1}(c)). As shown in Figs. \ref{fig:fig1}(a) to \ref{fig:fig1}(c), a segmentation approach must perform reliably across datasets acquired with different scan settings from different scanners, even in the presence of strong vertical and horizontal specular artifacts.
\begin{figure}[h] \centering \begin{subfigure}[b]{0.125\columnwidth} \centering \includegraphics[height=2cm,width=0.9\columnwidth]{d26_i0239_original_resized}\\ \centerline{(a)} \label{fig:d26_i0239_original} \end{subfigure} \begin{subfigure}[b]{0.29\columnwidth} \centering \includegraphics[height=2cm,width=0.9\columnwidth]{d10_i35_original_resized}\\ \centerline{(b)} \label{fig:d10_i35_original} \end{subfigure} \begin{subfigure}[b]{0.125\columnwidth} \centering \includegraphics[height=2cm,width=0.9\columnwidth]{d26_i0239_splitCols_prevMethodRes_resized}\\ \centerline{(c)} \label{fig:d26_i0239_splitCols_prevMethodRes} \end{subfigure} \begin{subfigure}[b]{0.125\columnwidth} \centering \includegraphics[height=2cm,width=0.9\columnwidth]{d26_i0239_CorNet_resized}\\ \centerline{(d)} \label{fig:d26_i0239_CorNet} \end{subfigure} \begin{subfigure}[b]{0.29\columnwidth} \centering \includegraphics[height=2cm,width=0.9\columnwidth]{d10_i35_CorNet_resized}\\ \centerline{(e)} \label{fig:d10_i35_CorNet} \end{subfigure} \caption{(a)-(b) Original B-scans from a 3$\times$3mm UHR-OCT and 6$\times$6mm SD-OCT volume; (c) Failed Epithelium segmentation result (cyan) from algorithms in \cite{LaRocca2011,Ge2012,Zhang2017}; (d)-(e) Our segmentation results for Epithelium (red), Bowman's layer (green), and Endothelium (orange) for images in (a) and (b).} \label{fig:fig1} \end{figure} In recent years, neural networks have been shown to be successful in segmenting retinal tissue interfaces \cite{Roy2017,Shah2018,Devalla2018,Apo2017,Sedai2018} with great accuracy. In this paper, we detail a Convolutional Neural Network (CNN) based framework aimed at segmenting corneal interfaces. Our corneal interface segmentation network (CorNet) is purely data-driven, and learns to segment interfaces from examples drawn from different datasets acquired with different scanners.
In contrast to current state-of-the-art approaches \cite{LaRocca2011,Zhang2017,Roy2017,Apo2017}, we show that our approach generalizes with better performance. \noindent \textbf{Contributions.} 1) To the best of our knowledge, this is the first deep learning based approach to segment three corneal tissue interfaces. 2) We are the first to test a neural network on corneal datasets acquired with different scan settings from different OCT scanners. 3) We demonstrate the reliability of the approach through extensive validation on data acquired from different OCT scanners, and we establish superior performance over current state-of-the-art approaches. 4) We also investigate the performance of different downsampling and upsampling methods in our network, which are commonly used in segmentation tasks. \section{METHODS} \label{sec:methods} In this section, we outline the proposed CNN-based framework in Fig. \ref{fig:modelFlow} that segments three corneal interfaces. \noindent \textbf{Problem Statement.} Given a corneal OCT image $\mathcal{I}$, the task is to find a function $\mathcal{F : I \rightarrow L}$ that maps every pixel in $\mathcal{I}$ to a label $\mathcal{L} \in \{0,1,2,3\}$. Similar to \cite{LaRocca2011,Zhang2017}, the corneal interfaces to be segmented are: (1) Epithelium, (2) Bowman's Layer, and (3) Endothelium, with 0 being the background. \begin{figure}[!h] \centering \includegraphics[height=1.75cm,width=0.95\columnwidth]{crop_model_flow_sheet} \caption{Our framework takes as input an OCT image, predicts the location of corneal interfaces using the CorNet architecture, and fits curves to the detected interfaces.} \label{fig:modelFlow} \end{figure} \noindent \textbf{Network Architecture.} Fully convolutional networks, such as the UNET \cite{Roy2017,Ronneberger2015} and BRUNET \cite{Apo2017}, are the state-of-the-art in retinal OCT segmentation.
Such networks comprise contracting and expanding branches, providing a dense output where each pixel is assigned the tissue class that it belongs to. The BRUNET architecture \cite{Apo2017} overcame problems of the UNET, such as holes in the segmentation, by modifying the UNET architecture. First, dilated convolutions \cite{Koltun2016,Devalla2018,Szegedy2015} were used in Inception-like blocks \cite{Szegedy2015} to increase the receptive field of each layer. Next, batch normalization \cite{Ioffe2015}, residual \cite{He2016} and bottleneck connections \cite{Szegedy2015}, and a feature map growth rate governed by a Fibonacci sequence were incorporated. Finally, the input image was appropriately downsampled and connected to each layer. These changes greatly improved segmentation accuracy \cite{Apo2017} over the UNET. However, when applied to corneal OCT images, the BRUNET under-segmented poorly defined corneal interfaces, which are very common in anterior segment OCT imaging. As seen in Figs. \ref{fig:fig1} and \ref{fig:modelFlow}, these boundaries are corrupted by speckle noise, and have low signal-to-noise ratio (SNR). We empirically observed higher false positives in the final segmentation; one explanation is that discriminative features related to these boundaries that are learned in earlier layers are lost through the network, and residual connections are unable to recover this information. One way to combine both coarse and fine image details is through the use of dense connections, which have been used to improve segmentation accuracy by encouraging heavy feature reuse through deep supervision \cite{Sedai2018,Huang2017,Jegou2017}. With dense connections, each layer is connected to all its preceding layers by feature map concatenation, allowing discernible features of faint boundaries to be retrieved across multiple scales.
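To make the dense-connection pattern concrete, the following is a minimal, illustrative PyTorch sketch (our own, not the CorNet code; channel counts and depth are placeholders): each layer consumes the concatenation of all preceding feature maps.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Illustrative dense block: each layer receives the channel-wise
    concatenation of the input and all earlier layers' outputs."""
    def __init__(self, in_channels, growth, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3, padding=1),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            channels += growth  # the next layer sees all earlier maps

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)
```

Each `torch.cat` re-exposes earlier feature maps, which is what allows faint-boundary features learned in early layers to survive to later ones.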
But this comes at a cost of increased computation \cite{Jegou2017,Khosravan2018}, and we empirically determined that a densely connected network at a depth of 6 levels provides a good balance between segmentation accuracy and computational efficiency \cite{Apo2017,Khosravan2018}. Additionally, max pooling was better at maintaining features of interest through the network than average pooling and convolutions of stride 2 \cite{Khosravan2018}. Furthermore, nearest neighbor interpolation based upsampling followed by 3$\times$3 convolution \cite{Odena2016} performed better than bilinear interpolation based upsampling, bilinear interpolation + 3$\times$3 convolution \cite{Odena2016}, unpooling \cite{Roy2017,Noh2015}, and fractionally-strided convolutions \cite{Long2015}. In our experiments, we adopted the BRUNET architecture \cite{Apo2017} as the base, and modified it based on our observations as shown in Fig. \ref{fig:arch}. Similar to \cite{Apo2017}, the number of output feature maps in each layer increased according to a capped Fibonacci sequence \{32,64,96,160,256,416\}, and the bottleneck feature map output was limited to 32 to prevent feature map explosion. \noindent \textit{Key modifications} that we incorporated into the architecture were: 1) Dense connections were used to improve gradient information flow and prevent over-fitting; 2) Max pooling was used to pick the most discriminative features at the end of each downsampling layer; 3) Nearest neighbor interpolation + 3$\times$3 convolution was used to upsample feature maps in the expanding branch of the network. We name our corneal tissue interface segmentation architecture \textit{CorNet}. \begin{figure}[!h] \centering \includegraphics[height=6.5cm,width=0.99\columnwidth]{crop_networkArchitecture_new} \caption{Our network architecture comprises contracting and expanding branches. The dark green and blue blocks represent downsampling and upsampling computations respectively.
Our network makes efficient use of residual and dense connections to generate the corneal interface segmentation in the final image, where each pixel is assigned the label of the tissue it belongs to. The input image is split width-wise into a set of slices of dimensions 256$\times$1024 pixels, the network predicts an output for each slice, and the slices are aligned to recreate the original input dimension. Dense connections concatenate feature maps from previous layers. The light blue block at the bottom of the ``U'' does not perform upsampling, but it functions as a bottleneck and generates feature maps of the same dimensions as the output feature maps from the previous layer.} \label{fig:arch} \end{figure} \section{EXPERIMENTS AND RESULTS} \label{sec:experiments} \noindent \textbf{Data.} De-identified datasets that had been previously acquired for an existing research database were used \cite{Mathai2018}. 48 volumes from both eyes of 8 subjects were acquired with different scan sizes using two OCT scanners: a Bioptigen SD-OCT scanner (Device 1) \cite{Wang2014}, and a high-speed ultra-high resolution OCT (hsUHR-OCT) scanner (Device 2) \cite{Srinivasan2006}. Device 1 had a 3.4\SI{}{\micro\meter} axial and 6\SI{}{\micro\meter} lateral spacing when scanning a 6$\times$6mm area, generating volumes of dimensions 1000$\times$1024$\times$50 (W$\times$H$\times$B-scans) pixels. Device 2 had a 1.3\SI{}{\micro\meter} axial and a 15\SI{}{\micro\meter} lateral spacing when scanning a 6$\times$6mm area, and a 7.5\SI{}{\micro\meter} lateral spacing when scanning a 3$\times$3mm area respectively, yielding volumes of size 400$\times$1024$\times$50 pixels. Each dataset was annotated by an expert grader (Grader 1) and a trained grader (Grader 2).
\noindent \textbf{Setup.} Of the 48 datasets, 18 datasets were chosen for training, such that the training set contained a balanced number of datasets from both devices, i.e., six 6$\times$6mm datasets each from Device 1 and 2, and six 3$\times$3mm datasets from Device 2. The testing dataset comprised 30 datasets: ten 6$\times$6mm datasets each from Device 1 and 2, and ten 3$\times$3mm datasets from Device 2. 5-fold cross-validation was conducted, and the model from the fold with the lowest validation loss was chosen for testing. \noindent \textbf{Training.} Training a CorNet model with full-width OCT images is limited by available RAM on the GPU and by the varying image sizes obtained from OCT scanners. To address these issues, the input images were sliced width-wise \cite{Roy2017} into a set of images of dimensions 256$\times$1024 pixels, thereby preserving the OCT image resolution. Data augmentation \cite{Patrice2003} was done through horizontal flips, gamma adjustment, Gaussian noise addition, Gaussian blurring, median blurring, bilateral blurring, cropping, affine transformations, and elastic deformations. Similar to \cite{Apo2017}, the loss function used was Mean Squared Error (MSE), and the network was trained using the ADAM optimizer \cite{Kingma2015}. The batch size was set to 2. The learning rate was set to $10^{-3}$, and it was decreased by a factor of 2 if the loss did not improve for 5 epochs. Validation data comprised 10\% of the training data, and the network was trained until the loss did not improve for 10 epochs, at which point we executed early stopping. The network with the lowest validation loss among all the folds was chosen for evaluation on the testing set. The prediction for each interface was then fitted with a curve \cite{LaRocca2011,Zhang2017,Mathai2018,Lowess1981} (see Fig. \ref{fig:res}).
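The width-wise slicing and realignment described above can be sketched as follows (an illustrative NumPy sketch, our own; the paper does not specify how widths that are not a multiple of the slice width, such as Device 1's 1000-pixel-wide B-scans, are padded):

```python
import numpy as np

def slice_widthwise(image, slice_width=256):
    # Split an (H, W) B-scan into width-wise strips of the given width.
    # Assumes W is a multiple of slice_width; otherwise pad beforehand.
    h, w = image.shape
    return [image[:, i:i + slice_width] for i in range(0, w, slice_width)]

def reassemble(slices):
    # Realign per-slice predictions back to the original width.
    return np.concatenate(slices, axis=1)

# Round trip on a synthetic 1024-pixel-wide B-scan.
img = np.random.rand(1024, 1024).astype(np.float32)
parts = slice_widthwise(img)
assert len(parts) == 4 and parts[0].shape == (1024, 256)
assert np.array_equal(reassemble(parts), img)
```

In practice the same split/realign step would be applied to the network's per-slice label maps rather than to the raw intensities.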
\noindent \textbf{Baseline Comparisons.} We extensively validated the performance of our CorNet architecture; first, we compared our results against those from the UNET \cite{Roy2017,Ronneberger2015} and BRUNET \cite{Apo2017} architectures as shown in Fig \ref{fig:comparisonDL}. Next, we compared our results against those obtained from \cite{LaRocca2011,Zhang2017} in Table \ref{table:traditionalComparison}; only 6$\times$6mm datasets from Device 1 were used as \cite{LaRocca2011,Zhang2017} solely considered datasets of this dimension. Finally, in Tables \ref{table:MADLBP} and \ref{table:HD}, we compared our results against each grader, and also computed the inter-grader variability measures to quantify our deviation from the agreement in ground truth between graders. \begin{figure}[!h] \centering \begin{subfigure}[b]{0.125\columnwidth} \vspace*{\fill} \centering \includegraphics[width=0.95\columnwidth,height=1.55cm]{d23_i0260_original_resized} \centerline{(a)}\par\vfill \includegraphics[width=0.95\columnwidth,height=1.55cm]{d23_i0260_CF_resized} \centerline{(b)} \label{fig:cmodeResult} \end{subfigure}\hfill \begin{subfigure}[b]{0.125\columnwidth} \centering \includegraphics[width=0.95\columnwidth,height=1.55cm]{d29_i0248_original_resized} \centerline{(c)}\par\vfill \includegraphics[width=0.95\columnwidth,height=1.55cm]{d29_i0248_CF_resized} \centerline{(d)} \label{fig:cmodeResult} \end{subfigure}\hfill \begin{subfigure}[b]{0.25\columnwidth} \centering \includegraphics[width=0.9\columnwidth,height=1.55cm]{d6_i38_original_resized} \centerline{(e)}\par\vfill \includegraphics[width=0.9\columnwidth,height=1.55cm]{d6_i38_CF_resized} \centerline{(f)} \label{fig:cmodeResult} \end{subfigure}\hfill \begin{subfigure}[b]{0.25\columnwidth} \centering \includegraphics[width=0.9\columnwidth,height=1.55cm]{d8_i1_original_resized} \centerline{(g)}\par\vfill \includegraphics[width=0.9\columnwidth,height=1.55cm]{d8_i1_CF_resized} \centerline{(h)} \label{fig:cmodeResult} 
\end{subfigure} \caption{Original B-scans and segmented interfaces from different datasets: (a)-(b) 3$\times$3mm UHR-OCT, (c)-(d) 6$\times$6mm UHR-OCT, and (e)-(h) 6$\times$6mm SD-OCT.} \label{fig:res} \end{figure} \noindent \textbf{Metrics.} We computed the following metrics: 1) Mean Absolute Difference in Layer Boundary Position (MADLBP) and 2) Hausdorff Distance (HD) between the fitted curves. For consistency in comparison, we computed MADLBP as it was the metric (in pixels) of choice in \cite{LaRocca2011,Zhang2017}. However, MADLBP (Eq. \ref{eq1}) does not accurately quantify the distance error in microns between a particular pair of interfaces, which the Hausdorff distance (Eq. \ref{eq2}) captures instead. Dice similarity did not provide error in microns, and thus was not computed in this work. Metrics were computed for the Epithelium (EP), Bowman's Layer (BL), and Endothelium (EN). In Eqs. \ref{eq1} and \ref{eq2}, $G$ and $S$ are the set of points in the ground truth annotation and segmentation (fitted with curves) respectively. $y_{G}(w)$ is the mean Y-coordinate (rounded down) of the points in $G$ whose X-coordinate is $w$, and similarly for $y_{S}(w)$. $d_{S}(x)$ is the distance of a point $x$ in $G$ to the closest point in $S$, and similarly for $d_{G}(x)$. 
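As an illustration only (our own NumPy sketch, not the evaluation code used in the paper), the two metrics can be computed as follows; the per-column averaging and rounding down of boundary heights is assumed to have been done upstream:

```python
import numpy as np

def madlbp(y_g, y_s):
    # Mean absolute difference in layer boundary position, in pixels.
    # y_g[w] and y_s[w] are the per-column boundary heights y_G(w), y_S(w).
    return np.mean(np.abs(np.asarray(y_g, float) - np.asarray(y_s, float)))

def hausdorff(g_pts, s_pts):
    # Symmetric Hausdorff distance between two point sets of shape (n, 2),
    # via brute-force pairwise distances (written for clarity, not speed).
    g = np.asarray(g_pts, float)
    s = np.asarray(s_pts, float)
    d = np.linalg.norm(g[:, None, :] - s[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Converting the Hausdorff distance from pixels to microns would additionally require the scanner's axial and lateral spacings.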
\begin{align} \textnormal{MADLBP} &= \frac{1}{W} \sum\limits^{W-1}_{w=0} \abs{y_{G}(w) - y_{S}(w)} \label{eq1}\\ \textnormal{HD} &= \max\bigg(\underset{x \in G}{\max} \ d_{S}(x), \ \underset{x \in S}{\max} \ d_{G}(x) \bigg) \label{eq2} \end{align} \begin{figure}[!h] \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[height=3.45cm,width=0.9\columnwidth]{crop_MADLBP_pixels_DL_comparison}\\ \centering{(a)} \end{subfigure}\hfill \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[height=3.45cm,width=0.9\columnwidth]{crop_HD_microns_DL_comparison}\\ \centering{(b)} \end{subfigure} \caption{Error comparison between expert annotation and automated segmentation (fitted with curves) obtained from different deep learning based methods across all 30 testing datasets.} \label{fig:comparisonDL} \end{figure} \begin{table}[!ht] \centering\fontsize{9}{11}\selectfont \caption{Comparison of Mean Absolute Difference in Layer Boundary Position (MADLBP) error between traditional methods against the proposed deep learning based approach on ten 6$\times$6mm volumes from Device 1. Only expert annotations were used for comparison. Errors are in pixels.} \begin{tabular}{cccc}\hline \toprule Approach & EP & BL & EN \\ \midrule LaRocca et al. \cite{LaRocca2011} & 0.84 $\pm$ 0.31 & 1.12 $\pm$ 0.4 & 1.97 $\pm$ 2.26 \\ Zhang et al. \cite{Zhang2017} & 0.69 $\pm$ 0.24 & 0.91 $\pm$ 0.35 & 1.73 $\pm$ 1.98 \\ Proposed & \textbf{0.33 $\pm$ 0.21} & \textbf{0.42 $\pm$ 0.13} & \textbf{0.79 $\pm$ 0.19} \\ \bottomrule \end{tabular} \label{table:traditionalComparison} \end{table} \begin{table}[!ht] \centering\fontsize{9}{11}\selectfont \caption{Mean Absolute Difference in Layer Boundary Position (MADLBP) error across 6$\times$6mm datasets from Device 1 (top half), and 3$\times$3mm and 6$\times$6mm datasets from Device 2 (bottom half). 
Errors are in pixels.} \begin{tabular}{cccc}\hline \toprule Layer & Grader 1 & Grader 2 & Inter-Grader \\ \midrule EP & 0.33 $\pm$ 0.21 & 0.41 $\pm$ 0.14 & 0.49 $\pm$ 0.07 \\ BL & 0.42 $\pm$ 0.13 & 0.68 $\pm$ 0.17 & 0.51 $\pm$ 0.06 \\ EN & 0.79 $\pm$ 0.19 & 0.84 $\pm$ 0.34 & 0.56 $\pm$ 0.22 \\ \midrule EP & 0.32 $\pm$ 0.09 & 0.49 $\pm$ 0.13 & 0.49 $\pm$ 0.09 \\ BL & 0.41 $\pm$ 0.13 & 0.61 $\pm$ 0.15 & 0.5 $\pm$ 0.09 \\ EN & 0.93 $\pm$ 0.19 & 1.45 $\pm$ 0.39 & 0.61 $\pm$ 0.29 \\ \bottomrule \end{tabular} \label{table:MADLBP} \end{table} \begin{table}[!ht] \centering\fontsize{9}{11}\selectfont \caption{Mean Hausdorff Distance (HD) error across 6$\times$6mm datasets from Device 1 (top half), and 3$\times$3mm and 6$\times$6mm datasets from Device 2 (bottom half). Errors are in microns.} \begin{tabular}{cccc}\hline \toprule Layer & Grader 1 & Grader 2 & Inter-Grader \\ \midrule EP & 3.17 $\pm$ 1.04 & 4.46 $\pm$ 1.23 & 3.21 $\pm$ 0.52 \\ BL & 3.52 $\pm$ 1.39 & 4.15 $\pm$ 1.05 & 3.22 $\pm$ 0.5 \\ EN & 5.55 $\pm$ 2.24 & 6.7 $\pm$ 3.78 & 4.05 $\pm$ 1.2 \\ \midrule EP & 1.52 $\pm$ 0.42 & 1.63 $\pm$ 0.42 & 1.21 $\pm$ 0.21 \\ BL & 1.89 $\pm$ 0.62 & 1.95 $\pm$ 0.68 & 1.23 $\pm$ 0.22 \\ EN & 3.05 $\pm$ 1.08 & 4.03 $\pm$ 1.34 & 1.76 $\pm$ 0.62 \\ \bottomrule \end{tabular} \label{table:HD} \end{table} \section{Discussion} \label{sec:discussion} From Fig. $\ref{fig:comparisonDL}$ and Table $\ref{table:traditionalComparison}$, our network outperformed the current deep learning \cite{Roy2017,Ronneberger2015,Apo2017} and traditional approaches \cite{LaRocca2011,Zhang2017} respectively. Paired t-tests conducted between our approach and every baseline established that for each metric our results were statistically significant (\textit{p} $<$ 0.05). 
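A paired t-test of this kind can be reproduced along the following lines (an illustrative SciPy sketch with synthetic numbers, not the paper's data):

```python
import numpy as np
from scipy import stats

# Synthetic per-dataset errors for two methods across 30 test volumes;
# the second method is constructed to have a systematically larger error.
rng = np.random.default_rng(0)
ours = rng.normal(0.4, 0.05, size=30)             # e.g. per-dataset MADLBP
baseline = ours + rng.normal(0.3, 0.05, size=30)  # baseline with larger error

t_stat, p_value = stats.ttest_rel(ours, baseline)
assert p_value < 0.05  # the synthetic gap is comfortably significant
```

Because the same 30 datasets are scored by both methods, the paired (rather than independent-samples) test is the appropriate choice.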
The MADLBP error (in pixels) and mean Hausdorff distance (in microns) across 6$\times$6mm datasets from Device 1 (Tables $\ref{table:MADLBP}$ and $\ref{table:HD}$, top halves) for the expert grader is slightly lower when contrasted against the trained grader. We attribute this to the diffuse appearance of corneal interfaces \cite{Kuo2012,LaRocca2011,Roy2017} and lower axial resolution of Device 1 (3.4\SI{}{\micro\meter}), thereby causing an expected deviation between the grader annotations, which is reflected in the inter-grader MADLBP error. Similar measures on the MADLBP error (in pixels) and mean Hausdorff distance (in microns) across 3$\times$3mm and 6$\times$6mm datasets from Device 2 (Tables $\ref{table:MADLBP}$ and $\ref{table:HD}$, bottom halves) were observed. Overall, we closely matched the inter-grader error across all datasets for the EP and BL interfaces, and in some cases, perform better than the agreement between graders. With respect to the EN, our errors were worse than the inter-grader agreement on the interface location. We attribute this to the low SNR in many corneal images, particularly at the left and right edges of the EN where the signal dropoff is substantial \cite{LaRocca2011}. In these regions, the graders mentally extrapolated their annotations for this interface with poorly defined boundaries, which were usually obfuscated by speckle noise. When a curve is fitted to both the annotation and prediction, there is a small degree of error during the comparison, which is unavoidable. This behavior has also been observed in \cite{LaRocca2011, Zhang2017}. However, our EN errors were considerably better than the measured MADLBP and HD errors for the state-of-the-art image analysis-based and deep learning based approaches. The CorNet took $\sim$15.1 s (Python) to segment an entire volume of 50 images of dimensions 1000$\times$1024 pixels, at $\sim$302 ms per image. 
This is in contrast to 56.5 s for \cite{LaRocca2011} (Matlab), $\sim$26.1 s for \cite{Zhang2017} (Matlab), $\sim$6.25 s for BRUNET (Python), and $\sim$10.75 s for UNET (Python); CorNet is slower than UNET or BRUNET due to dense connections. The results were calculated on a desktop using a 3.10 GHz Intel Xeon processor, 64 GB RAM, and a NVIDIA Titan Xp GPU. \noindent \textbf{Major Observations.} 1) The proposed CorNet architecture consistently outperforms the state-of-the-art image analysis-based and deep learning-based approaches for the task of corneal tissue interface segmentation. 2) Maxpooling is optimal for feature selection across the common downsampling choices. 3) Nearest neighbor interpolation based feature map upsampling followed by 3$\times$3 convolution improved segmentation over other upsampling operations. 4) Dense connections increased segmentation accuracy due to greater gradient information flow through the network. \section{CONCLUSION AND FUTURE WORK} \label{sec:conclusion} To the best of our knowledge, we have presented the first CNN-based framework to segment three corneal tissue interfaces in datasets that have been acquired from different OCT scanners with different scan settings. Our CorNet results have been extensively validated against the annotations of two graders, current state-of-the-art approaches in deep learning, and against traditional approaches towards corneal interface segmentation. Future work is aimed at extending our work to pathological corneas, and using the segmentation to drive the registration of B-scans with out-of-plane tissue motion. \noindent \textbf{Acknowledgements.} We thank our funding sources: NIH 1R01EY021641, Core Grant for Vision Research EY008098-28. We thank NVIDIA Corporation for their GPU donations. We also thank Haewon Jeong, Wonmin Byeon, Bo Wang, Katie Lucey, and Gadi Wollstein for helpful comments. \clearpage
\section{Introduction} \subsection{The Problem of Time and underlying Theory of Background Independence} Time is conceived of substantially differently across the observationally established paradigms of Physics. Detailed consideration of this -- the subject of Part I of \cite{ABook} -- reveals the main chasm to be between Newtonian Physics, Special Relativity (SR), Quantum Mechanics (QM), Quantum Field Theory (QFT) on the one side, and General Relativity (GR) on the other. \mbox{ } \noindent This chasm is present more generally between Background Dependent paradigms -- such as those of Newtonian Physics, SR, QM and QFT on the one hand, and Background Independent paradigms \cite{A64, A67, Giu06, ABook}, such as GR's, on the other. [The first four do of course differ among themselves to some extent in temporal and spatial conceptualization, but to a much lesser extent than any of them differ from GR in these regards.] This chasm is thus a {\sl Paradigm Split}, and can moreover be envisaged as the modern form taken by the Absolute versus Relational (Motion) Debate \cite{DoD, Buckets, ABook}. This dates at least as far back as Newton \cite{Newton} and his absolute conceptions of space and time versus Leibniz and Mach's relational objections \cite{L, M}. \mbox{ } \noindent The Problem of Time is moreover multi-faceted, because there are multiple differences in temporal and spatial conceptualization across this chasm, some of which have further consequences for the nature and form of Physical Law. Most of its facets were already known to Wheeler \cite{WheelerGRT, Battelle} and DeWitt \cite{DeWitt67} in the 1960s or to Dirac \cite{DiracObs, Dirac, Dirac51, Dirac58} in the 1950s. Many authors have since attempted to resolve the Problem of Time. See e.g. 
\cite{K81, K91} for early reviews, \cite{K92, I93} for Kucha\v{r} and Isham's seminal progress, \cite{APoT} for a summary thereof, Part I of \cite{ABook} for grounding the Problem of Time on established Physics' differing notions of time, \cite{APoT2, APoT3, A-Lett} and Parts II and III of \cite{ABook} for further progress, and the Appendix Part of \cite{ABook} for a suitable course in supporting mathematical techniques for understanding this literature. \mbox{ } \noindent Kucha\v{r} and Isham's seminal progress \cite{K92, I93} consists firstly of formalizing the conceptual classification into Problem of Time facets, giving 8 such (Fig \ref{1}.a). Secondly, in showing how every attempt to solve the Problem of Time up to that point fails when examined in sufficient detail. While some works were shown to fail to overcome even one facet, most were moreover shown to break down upon attempting to combine piecemeal facet resolutions into a joint resolution of multiple facets. I.e.\ in addition to having multiple facets, the lion's share of the Problem of Time consists of interferences between facets. Such interferences occur, moreover, because the various facets have common origins in the chasm between Background Dependent and Background Independent Physics. To be sure, the current Series' titular `Problem of Time' refers to multi-faceted and facet-interfering conceptualizations along such lines. This is to be contrasted with works that confuse the Problem of Time with just one of its facets, thus missing out on all the other facets and thereby also on the lion's share of the Problem of Time: the interferences between the multiple facets. While such works are clearly much more likely to solve what they purport to be `the Problem of Time', this is of course very likely to just be adding to the literature of single-facet resolutions which fail to combine into joint resolutions of all Problem of Time facets. 
\mbox{ } \noindent Whereas there has been quite widespread belief that the Problem of Time is a quantum matter, this is in fact a further misconception. Emphasizing the underlying Background Dependence versus Independence chasm renders this clear, since Newtonian Physics and SR on the one hand versus GR on the other already exhibits almost all Problem of Time facets. This has the benefit of the simpler classical version providing a model arena for the more complicated quantum version. This benefit consists in using the mathematically simpler and thus more readily surmountable classical Problem of Time facets as a testing ground for structures and methods, some of which then transcend to the quantum level, whether directly or via suggesting harder quantum counterparts. The current Series serves to improve on this classical approach, by further identifying the mathematics in use and demonstrating the sufficiency of Lie mathematics in this regard at the classical level for the usual differential-geometric rendition of the classical laws of Physics. \mbox{ } \noindent The Author's previous further progress consisted in, firstly, incorporating spacetime-primary approaches into the conceptual classification to compensate for Isham and Kucha\v{r}'s classification's canonical-primality bias. See Part I of \cite{ABook} for detailed motivation of each of these primalities. \mbox{ } \noindent Secondly, the passage from these Problem of Time facets to a classification of an equal multiplicity of underlying theory-independent classical-or-quantum Background Independence aspects. This amounts to a renaming of facets to reflect each one's conceptual content more truly. The chain of thought involved is spread out over Articles I to IV, culminating in IV's Conclusion's summary figure of the passage from Fig \ref{1}.a) to Fig \ref{1}.b). \subsection{Some underlying principles} A starting point for considerations of Background Independence is as follows. 
\mbox{ } \noindent{\bf Relationalism-0)} {\sl Physics is to solely concern relations between tangible entities}.\footnote{These are not `just matter', and are named thus, via Isham, along the lines of Heidegger.} \mbox{ } \noindent Some key diagnostics of `tangible entities' are as follows. \mbox{ } \noindent{\bf Relationalism-1)} Tangible entities {\sl act testably and are actable upon}. \mbox{ } \noindent Things which do not act testably or cannot be acted upon are held to be {\sl physical} non-entities. These can still be held to be a type of thing as regards being able to {\sl philosophize} about them or {\sl mathematically represent} them. Absolute space is an obvious archetypal example of such a non-entity. Relational intuition is that imperceptible objects should not be playing causal roles influencing the motions of actual bodies. As a first sharpening of this, James L. Anderson \cite{A67} stated that {\it ``the dynamical quantities depend on the absolute elements but not vice versa"}, and that an absolute object {\it ``affects the behavior of other objects but is not affected by these objects in turn"} \cite{AG}. \mbox{ } \noindent{\bf Relationalism-2) [Leibniz's Identity of Indiscernibles]} \cite{L} {\sl Any entities indiscernible from each other are held to be identical}. \mbox{ } \noindent{\bf Remark 3} This posits that physical indiscernibility trumps multiplicity of mathematical representation. Such multiplicity still exists mathematically, but the mathematics corresponding to the {\sl true} physics in question is the equivalence class spanning that multiplicity. One would only wish to attribute physical significance to calculations of tangible entities which are independent of the choice of representative of the equivalence class. By this, e.g.\ our Universe and a copy in which all material objects are collectively displaced by a fixed distance surely share all observable properties, and so they are one and the same. 
An archetype of such an approach in modern Physics is Gauge Theory (see Articles IV and VII). This additionally factors in the major insight that a mixture of tangible entities and non-entities is often far more straightforward to represent mathematically. \mbox{ } \noindent{\bf Remark 4} For now, consider separate treatments of time on the one hand, and space, configurations, dynamics and canonical formulation on the other. This befits the great conceptual heterogeneity between these (see Part I of \cite{ABook} for details). Once this is understood, relational postulates can be stated, and a coherent subset of these are sharply mathematically implementable. This leads to rejecting absolute time and absolute space, and, eventually (as argued in Part I of \cite{ABook}), to the first 2 Background Independence aspects. \subsection{Temporal and Configurational Relationalisms}\label{TCR} \noindent {\bf Temporal Relationalism} (aspect 0a) \cite{L, FileR} is that {\sl there is no meaningful time for the Universe as a whole at the primary level}. \mbox{ } \noindent We shall see that this is implemented by actions \cite{Jacobi, Synge, BSW, Magic, BB82, B94I, FORD, FileR, ABook} which are, firstly, free of extraneous time-like quantities and which, secondly, give no physically meaningful role to `label times' either (strategy 0a-1). \mbox{ } \noindent Temporal Relationalism leads to the notorious {\bf Frozen Formalism Problem} \cite{Battelle, K92, I93} (facet 0a). This is more widely known at the quantum level, where an apparently frozen quantum wave equation -- the Wheeler--DeWitt equation \cite{Battelle, DeWitt67} -- occurs in a context in which one would expect an equation which is dependent on (some notion of) time. This is moreover unfortunately often confused with the entirety of the multi-faceted Problem of Time. 
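\mbox{ } \noindent As a sketch of the first of these implementations (the detailed formulation is deferred to Articles V and VI), a Jacobi-type action for a finite mechanical theory with configuration space metric $M_{AB}$, potential $V$ and total energy $E$ can be written schematically as

```latex
% Schematic Jacobi-type action: a sketch only, not the Series' full
% formulation. Only changes dQ^A enter, so the action contains neither an
% extraneous time-like variable nor any physically meaningful label time.
\be
S = \int \sqrt{2 \, W} \, ||\textrm{d}\mbox{\boldmath$Q$}|| \mbox{ } , \mbox{ } \mbox{ with } \mbox{ }
W := E - V \mbox{ } \mbox{ and } \mbox{ }
||\textrm{d}\mbox{\boldmath$Q$}||^2 := M_{AB} \, \textrm{d}Q^A \textrm{d}Q^B \mbox{ } .
\end{equation}
```

Varying such an action yields a primary constraint that is quadratic in the momenta; it is the GR counterpart of such a constraint -- the Hamiltonian constraint -- whose quantization gives the apparently frozen Wheeler--DeWitt equation.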
\mbox{ } \noindent The current Series' approach to frozenness is to recover time at the secondary level from Mach's `time is to be abstracted from change', i.e.\ an {\bf emergent Machian time strategy} (strategy 0a-2). \mbox{ } \noindent If time is not primary, moreover, we need to study whatever other entities are still regarded as primary; this starts with configurations $\mbox{\boldmath$Q$}$ and configuration spaces $\FrQ$. \mbox{ } \noindent {\bf Configurational Relationalism} (aspect 0b) \cite{Cauchy, Burnside, BSW, WheelerGRT, BB82, Kendall84, FORD, FileR, PE16, ABook, S-III, Minimal-N, A-Killing, A-Cpct} involves taking into account that a continuous group of transformations $\lFrg$ acting on the system's configuration space $\FrQ$ is physically irrelevant. For Mechanics, these transformations are usually translations and rotations of space, though in general Configurational Relationalism also covers physically irrelevant internal transformations, as occur in the most common types of Gauge Theory. \mbox{ } \noindent Configurational Relationalism can be implemented, at least in principle, by {\bf Best Matching} (strategy 0b), i.e.\ bringing two configurations into minimum incongruence with each other by application of $\lFrg$'s group action. In the case of GR -- for which $\lFrg = Diff(\bupSigma)$: the spatial diffeomorphisms -- Configurational Relationalism leads to the {\bf Thin Sandwich Problem} (facet 0b). This is a particular GR specialization of the abovementioned notion of Best Matching. A more general strategy for Configurational Relationalism is the $\lFrg${\bf -Act} $\lFrg${\bf -All Method}: group action followed by involvement of all of the group, for which Article II argues that the widely known group averaging is a useful prototype. \mbox{ } \noindent Temporal and Configurational Relationalism give two precise senses in which GR is `Machian' \cite{M, ABook}: Mach's Time Principle (Sec \ref{MTP}) and Mach's Space Principle (Sec II.5.1). 
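\mbox{ } \noindent Schematically, for a mechanical theory with configuration space metric $M_{AB}$ and potential factor $W := E - V$, the time abstracted from change in the emergent Machian strategy takes the form

```latex
% Emergent Machian time: a schematic sketch. ||dQ|| is change as measured
% in the kinetic metric M_{AB}; no time enters at the primary level, since
% the right-hand side is built solely out of configurational change.
\be
t^{\mbox{\scriptsize em}} = \int \frac{||\textrm{d}\mbox{\boldmath$Q$}||}{\sqrt{2 \, W}} \mbox{ } , \mbox{ } \mbox{ where } \mbox{ }
||\textrm{d}\mbox{\boldmath$Q$}||^2 = M_{AB} \, \textrm{d}Q^A \textrm{d}Q^B \mbox{ } .
\end{equation}
```

In the Configurationally Relational version, the changes $\textrm{d}\mbox{\boldmath$Q$}$ are furthermore replaced by their best-matched counterparts, the extremization over $\lFrg$ being performed before the time is abstracted.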
Mach's work is of wide foundational interest; for instance, some of Mach's concepts played a role in Einstein's search for GR. The above two Machian attributes do not coincide with how Einstein interpreted a partly different set of Mach's ideas; his historical route to GR ended up making at best indirect use of Machian themes. As Wheeler argued \cite{Battelle, MTW}, however, there are many routes to GR. Some of these routes arrive at a dynamical formulation of GR: a theory of evolving spatial geometry: Geometrodynamics \cite{Battelle}. It then turns out that a more specific formulation of GR-as-Geometrodynamics is Machian after all (see \cite{RWR, AM13} and Articles II and VI). Finally, GR in Machian Geometrodynamics form can furthermore even be rederived from Temporal and Configurational Relationalism first principles (see \cite{RWR, AM13} and Article IX). \mbox{ } \noindent Each of Temporal and Configurational Relationalism moreover provides constraint equations. In the case of GR, these are, respectively, the well-known Hamiltonian and momentum constraints that usually occupy centre-stage in accounts of the Problem of Time. Indeed, the abovementioned Wheeler--DeWitt equation is the quantum Hamiltonian constraint, whereas the Thin Sandwich Problem is a particular approach to solving the momentum constraint at the classical level. \subsection{Model arenas} Aside from the classical simplification, this Series of Articles makes use of the simpler {\bf Relational Particle Mechanics (RPMs)} and {\bf Minisuperspace} model arenas prior to passing to more complicated cases; these arenas are introduced in Secs II.4.1 and \ref{MSS-Intro} respectively. \mbox{ } \noindent{\bf Diffeomorphisms} are, moreover, crucial \cite{I93} as regards a number of Problem of Time facets; to feature nontrivially, these require as a minimum inhomogeneous GR models. 
\mbox{ } \noindent Balancing this requirement, enough simplicity for calculations, and cosmological applications, this Series of Articles' third choice of model arena is {\bf Slightly Inhomogeneous Cosmology (SIC)}: a type of perturbative Midisuperspace model; see Article XI for more. This furthermore permits investigating whether galaxies and cosmic microwave background hot-spots could have originated from quantum cosmological fluctuations \cite{HallHaw}. Finally, this choice of model arenas amounts to concentrating on Quantum Cosmology rather than Black Hole models. \subsection{Constraints and canonical observables} Following on from Sec \ref{TCR}, it is next natural to ask whether one has found all of the constraints, i.e.\ whether one has {\bf Constraint Closure} (aspect 1) in an algebraic sense. \mbox{ } \noindent If the answer is in the negative, one has a {\bf Constraint Closure Problem} (facet 1) \cite{K92, I93, APoT2, APoT3, ABook}. \mbox{ } \noindent This is approached by introducing a suitable brackets structure and systematically applying the {\bf Dirac Algorithm} (strategy 1) \cite{Dirac, HTBook}. \mbox{ } \noindent With this consistency established, it subsequently makes sense to consider which objects brackets-commute with the constraints, or with specific subalgebraic structures thereof. These objects -- {\it observables} \cite{DiracObs, HTBook, K92, I93, AObs, AObs2, AObs3, ABook, DO-1} -- are useful objects due to their physical content, whereby aspect 2 is {\bf Assignment of Observables}. \mbox{ } \noindent If obtaining a sufficient set of these to do Physics is in practice blocked -- a common occurrence in Gravitational Theory -- then one has a {\bf Problem of Observables} (facet 2) \cite{K92, I93, AObs, ABook}. \mbox{ } \noindent Observables can moreover already be defined in the absence of constraints. 
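\mbox{ } \noindent Schematically, with $C_A$ denoting the constraints and $\approx$ Dirac's weak equality (equality on the constraint surface), the brackets-commutation defining observables takes the form

```latex
% Zero-commutant definition of canonical observables: a schematic form.
% O is a function on phase space; C_A are the constraints.
\be
\{ O , C_A \} \approx 0 \mbox{ } .
\end{equation}
```

Commuting in this sense with all of the constraints gives the strongest such notion of observables; commuting with just a closed subalgebraic structure of the constraints gives correspondingly more general notions.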
Finding observables, with or without constraints applying, amounts to, given a state space (for now phase space), {\bf Finding Function Spaces Thereover} (strategy 2) \cite{ABook}. \subsection{Spacetime Constructability} Starting with less structure than spacetime -- assuming just one or both of spatial structure or fewer levels of mathematical structure for spacetime -- is particularly motivated by Quantum Theory \cite{Battelle}. Constructing space from fewer levels of mathematical structure of space is also a valid pursuit. In such approaches, the spacetime concept is moreover to hold in suitable limiting regimes: {\bf Spacetime Constructability} is required. \mbox{ } \noindent Let us for now focus on {\bf Spacetime Constructability from Space} (aspect A). \mbox{ } \noindent If this is false, or remains unproven, then we have a {\bf Spacetime Construction Problem (from Space)} (facet A). \mbox{ } \noindent A classical-level Spacetime Construction was given in \cite{RWR, Phan} and properly decked out to comply with the other local Problem of Time facets in \cite{AM13, ABook}. Strategy A is {\bf Feeding Families of Theories into the Dirac Algorithm}. This can be interpreted as {\bf deforming algebraic structures}, and it works out in cases for which {\bf Rigidity} manifests itself. \mbox{ } \noindent We refer to {\bf Space Constructability from less Space Structure} as aspect 3 of Background Independence; this pursuit being free from time, it is not counted among the Problem of Time facets. We shall see that the strategy for this parallels the preceding one while making use of the more general Lie Algorithm. \subsection{Aspects concerning spacetime} Since GR is also a theory with a meaningful and nontrivial notion of spacetime, it has more Background Independence aspects than Relational Particle Mechanics does. 
Indeed, the Einstein field equations of GR determine the form of GR spacetime, as opposed to SR Physics unfolding on a fixed background spacetime. From a dynamical perspective, GR's geometrodynamical evolution {\sl forms spacetime itself}, rather than being a theory of the evolution of other fields {\sl on} spacetime or {\sl on} a sequence of fixed background spatial geometries. Regardless of whether spacetime is primary or emergent, there is now also need for the following. \mbox{ } \noindent The current Article serves, firstly, to further subdivide `spacetime' Background Independence \cite{APoT2, ABook} into the following. \mbox{ } \noindent{\bf Spacetime Relationalism} (aspect 0$^{\prime}$). \mbox{ } \noindent{\bf Spacetime Generator Closure} (aspect 1$^{\prime}$). \mbox{ } \noindent{\bf Assignment of Spacetime Observables} (aspect 2$^{\prime}$). \mbox{ } \noindent{\bf Spacetime Constructability from less Spacetime Structure} (aspect 3$^{\prime}$). \mbox{ } \noindent This increases the facet-and-aspect count from \cite{APoT3, ABook}'s 9 to 12. This four-way partition serves to more closely match conceptualization with the canonical approach. \mbox{ } \noindent In Spacetime Relationalism, the {\it diffeomorphisms of spacetime} itself, $Diff(\Frm)$, are the physically redundant transformations. Whereas this is straightforwardly implemented in the classical spacetime formulation of GR, and these attain Closure as well, implementation becomes harder at the quantum level. For instance, it feeds into the {\bf Measure Problem} \cite{K92, I93} of Path Integral Approaches to Quantum Gravity. The {\bf Problem of Spacetime Observables} (facet $2^{\prime}$) is moreover significant even at the classical level. The {\bf Problem of Spacetime Construction from less Spacetime Structure} (facet $3^{\prime}$) is a comparative point between theories rather than an essential part of the current basic treatise. 
\mbox{ } \noindent {\it Foliations} of spacetime play major roles, both in dynamical and canonical formulations, and as a means of modelling the different possible fleets of observers within approaches in which spacetime is primary. Background Independent Physics is moreover to possess {\bf Foliation Independence} (aspect B) \cite{K92, I93}. \mbox{ } \noindent If this cannot be established, or fails, then a {\bf Foliation Dependence Problem} is encountered (facet B). {\bf Refoliation Invariance} (strategy B) resolves this for GR at the classical level \cite{Tei73, TRiFol, ABook}. \subsection{Globality, nonuniqueness and `A Local Problem of Time' subproblem} \noindent The final facets-and-aspects are as follows. \mbox{ } \noindent{\bf Global Validity}. \mbox{ } \noindent{\bf Unexplained Multiplicities}. \mbox{ } \noindent These are criteria which apply to all the other aspects and facets, and to the strategies toward resolving these. Contentions with them are termed, respectively, as follows. \mbox{ } \noindent {\bf Global Problems of Time} \cite{K92, I93}, as covered in Epilogues II.B and III.B of \cite{ABook}, \cite{A-CBI} and Article XIV's Conclusion. \mbox{ } \noindent {\bf Multiple Choice Problems of Time} \cite{K92, I93, Gotay00}, as covered in Epilogue III.C of \cite{ABook}. The main part of this would appear to be the Groenewold--van Hove phenomenon in the classical-to-quantum bridge. \mbox{ } \noindent All in all, the Problem of Time is a multi-faceted subset of the reasons why forming `Quantum Gravity' Paradigms is difficult and ambiguous; further reasons are purely technical, or a mixture of both. \mbox{ } \noindent 10 facets or 11 aspects remain if these last two are disregarded. Addressing these moreover remains a consistent problem: seeking A Local Resolution of the Problem of Time. 
While the Author's abovementioned first two pieces of progress further clarify what the Problem of Time is, the Author's third piece of progress is in recently giving an actual local resolution of the Problem of Time. See \cite{A-Lett} for a 5-page summary and \cite{ABook} for a 900-page exposition. This Series of Articles concentrates on the classical version of {\bf A Local Resolution of the Problem of Time}. We set out to streamline Parts I and II of \cite{ABook}, by omitting both the account of how the Problem of Time rests on established Physics' conflicting notions of time and the survey of other works exploring alternative strategies for facet resolution. We thus now offer bridging 90- and 240-page versions (excluding references): respectively, Articles I-IV largely without facet interference, and Articles I-XIV including facet interference. \subsection{Outline of the rest of this Article} We present a first aspect of Background Independence -- Temporal Relationalism -- and the corresponding Problem of Time facet (most well-known as the quantum-level Frozen Formalism Problem). \mbox{ } \noindent The first part of Temporal Relationalism is to implement the Leibnizian principle that `there is no time for the Universe as a whole at the primary level' (Sec 3). This involves using Jacobi's Principle rather than Euler--Lagrange's. Combining this resolution with those of all the other facets moreover requires reworking around half of the Principles of Dynamics \cite{Lanczos, Goldstein} into Temporal Relationalism implementing (TRi) form. This can be thought of as `taking Jacobi's Principle more seriously than Jacobi himself did', and thus reworking the rest of the Principles of Dynamics to follow from this in place of Euler--Lagrange's. Sec 3 thus preliminarily presents the parts of the standard Principles of Dynamics \cite{Lanczos, Goldstein} that the current Article supplants in the fundamental context of building up A Local Resolution to the Problem of Time. 
This rests on Sec 2's preliminary outline of configuration space $\FrQ$ \cite{Lanczos, DeWitt67, Fischer70, Magic, FM96, Kendall, Giu09, PE16, ABook, S-I, S-II, S-III, Minimal-N}, configurations being `what is left' to build upon in primarily timeless pictures, including via building velocity, change or momentum bundles thereover. \mbox{ } \noindent The second part of Temporal Relationalism (Sec 4) is to resolve primary-level timelessness in the manner of Mach's `time is to be abstracted from change'. Sec 5 comments further on the consequent classical Machian emergent time. { \begin{figure}[!ht] \centering \includegraphics[width=1.0\textwidth]{Fig-I-v4.png} \caption[Problem of Time facets and underlying Background Independence aspects]{\footnotesize{Evolution of conceptualization and nomenclature of a) Kucha\v{r} and Isham's \cite{K92, I93} Problem of Time facets into b) underlying Background Independence aspects over the course of this Series of Articles. This Figure's colour scheme for the first eleven columns is further used in Articles V to XIII's presentation of each facet and of how facets interfere with each other. 12/13ths of these aspects are moreover already classically present: all bar the issue of physically and conceptually unaccounted-for multiplicities. Solid black arrows connect each primality's parts, while grey arrows form the Wheelerian two-way route between primalities. \mbox{ } \noindent c) The nonlinear order in which the current Series incorporates Background Independence aspects, or, equivalently, resolves Problem of Time facets. } } \label{1}\end{figure} } \subsection{Outline of the rest of this Series} Article II covers Configurational Relationalism as resolved by the $\lFrg$-act, $\lFrg$-all method. \noindent Article III covers the other 9 classical local Problem of Time facets, and Article IV covers all 11 local facets at the quantum level alongside giving a piecemeal-level conclusion. 
\mbox{ } \noindent Articles V and VI combine Temporal and Configurational Relationalism, for Finite Theories and Field Theories -- including GR -- respectively. Article VII unifies classical treatment of Temporal Relationalism, Configurational Relationalism and Constraint Closure, using a suitably TRi Dirac-type Algorithm \cite{ABook}. Article VIII extends this combination to Assignment of Observables as well. Article IX instead extends Article VII to include classical Spacetime Constructability. Articles VIII and IX are independent as per the fork in Fig \ref{1}.c). Spacetime constructed, Article X considers its own Relationalism, Closure and Assignment of Observables. \mbox{ } \noindent Article XI serves as an arena pit stop, introducing Slightly Inhomogeneous Cosmology (SIC) \cite{HallHaw, SIC-1, SIC-2, ABook} to cover the minisuperspace and RPM arenas' increasing inadequacy in modelling GR spacetime aspects. This Article includes reappraising all aspects covered so far in this further model arena. This arena moreover has the added benefit of being along the lines of Halliwell--Hawking's setting for the origin of structure in the universe: a quantum-level and more Background Independent version of the same kind of model actually used in Observational Cosmology, i.e.\ a likely setting for eventual first observation of semiclassical quantum-gravitational effects. \mbox{ } \noindent Article XII covers Foliation Independence as resolved by Refoliation Invariance. \mbox{ } \noindent Articles V-X and XII involve `mild recategorization' creating a mathematically-consistent formalism for all of a local Problem of Time's facets concurrently. This involves in particular having to rewrite around half of the Principles of Dynamics used, in a physically equivalent but now satisfactorily Temporally Relational form. 
We emphasize that the justification for introducing our new Principles of Dynamics is not new problem-solving capacity at the level of numerous small exercises, but rather that it resolves a 50-year-old fundamental question: the Problem of Time. In particular, it is a {\it Temporal Relationalism implementing Principles of Dynamics (TRiPoD)} \cite{TRiPoD, ABook, AM13, MBook}. Article XII requires TRiFol as well (foliations); subsequent quantum-level work requires TRiCQT (Canonical Quantum Theory) and TRiPIQT (Path Integral Quantum Theory). Article XIII is the Series' combined-facets-level Conclusion, whereas Article XIV serves to provide the current Series' Lie Mathematics in a self-contained manner. \mbox{ } \noindent Another well-known joint conceptualization of aspects is viewing Spacetime Construction from Space and Refoliation Invariance as a Wheelerian two-way route between our two competing primalities: namely, spacetime on the one hand versus space, configuration, dynamics or canonical formalism on the other. \subsection{We just use Lie's mathematics but in a subsequent setting following from QM and GR} \noindent While this Series' Problem of Time resolution follows \cite{ABook}, we now further identify all the mathematics in the classical version of this as lying within the differential-geometric mathematics of Lie. Namely, {\bf Lie derivatives} \cite{Yano55, Yano70}, {\bf Lie brackets} \cite{Serre}, {\bf Lie algebras} \cite{FHBook}, {\bf Lie groups} \cite{Gilmore, Serre} and the {\bf Flow Method} \cite{John, Olver, Lee2, PE-1, DO-1} for solving PDE systems. This is a very useful observation from the points of view of clarity, simplicity, exposition and pedagogy. For Lie's Mathematics is widely familiar to physicists, to mathematicians working on continua, and, increasingly, to scientists in other STEM subjects. 
This represents a large improvement in removing both actual and perceived difficulty from a Local Resolution of the Problem of Time and its underlying Theory of Background Independence. It is now likely that most graduate students in Mathematics, Theoretical Physics, or Applied Differential Geometry-based areas of other STEM subjects can follow the current Series' fundamental developments. \mbox{ } \noindent The Problem of Time moreover operates in a {\it mathematical amphitheatre} that was partly unemphasized and partly undiscovered in the epoch of Lie's own zenith (the 1880s and 90s \cite{Lie}). This reflects that the Problem of Time's most severe form requires both QM and GR to be in play. QM and GR are 1920s and 1910s developments respectively, with GR's dynamics and canonical form having to await the 1950s and 60s \cite{B52, Dirac51, Dirac58, ADM, Dirac}. In a nutshell, we apply Lie's mathematics to the configuration space and phase space mathematical amphitheatres in Poisson-brackets and Hamiltonian formalisms, as are jointly suitable for constraints like GR's and for passage to QM. \mbox{ } \noindent This ties down the prerequisites for understanding the current Series to the following. Firstly, Lie's approach to Differential Geometry (both widely known and with specifics catered for in Article XIV). Secondly, an MA-level course in the Principles of Dynamics, for which \cite{Lanczos} and Chapters 1-2 of \cite{Dirac} are excellent background reading. Articles II-IV and VI-XIII also require familiarity with an introductory account of Geometrodynamics, such as \cite{ADM} or Chapter 43 of \cite{MTW}. \mbox{ } \noindent Let us furthermore term the above multiply-occurring pieces -- i) Relationalism, ii) Closure, iii) Assignment of Observables -- {\it superaspects}. 
In particular, making careful distinction between assigning canonical observables and spacetime observables explains a fair amount of facet-ordering difficulties and of previous authors misunderstanding each other's work. Much of this was `hidden under' using just the word `observables' without sufficient definition of, and distinction between, types of observables, whether within authors' own papers or relative to each other's uses. \mbox{ } \noindent We will argue that Relationalism -- whether Temporal, Configurational or Spacetime -- is implemented by {\bf Lie derivatives}. Closure, whether Constraint or Spacetime Generator, is moreover a matter of {\bf Lie brackets consistency}. Constraints themselves form a {\bf Lie algebraic structure}: the constraints algebraic structure. Next, Assigning Observables, canonical or spacetime, involves {\bf associated Lie algebraic zero-commutant definitions} -- of observables -- realized by further {\bf associated Lie algebraic structures}: the observables algebraic structures. These refine Finding Function Spaces Thereover, whether over the canonical phase space or over a space of spacetimes. The canonical case's zero-commutant conditions, moreover, can be readily converted, for the practicalities of solution, into a system of PDEs to which the {\bf Flow Method} applies. More specifically, we consider Lie's Integral Method of Invariants, and its slight modification to the Integral Method for Observables. \mbox{ } \noindent Constraint Closure can moreover be extended by Feeding Families of Candidate Theories -- deformation of algebraic structures -- into a Dirac-type Algorithm. This results in Rigidity returning details of GR's dynamics and locally-SR spacetime structure, thus amounting to Spacetime Construction \cite{RWR, AM13, ABook}. 
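\mbox{ } \noindent For a single classical constraint $C(Q^A, P_A)$, for instance, the canonical zero-commutant condition on an observable $O(Q^A, P_A)$ unpacks, schematically, into the first-order linear PDE

```latex
% Zero-commutant condition as a PDE: a schematic single-constraint case.
% Its characteristic system is the Hamiltonian flow generated by C, so
% the Flow Method (method of characteristics) applies directly.
\be
\{ O , C \} = \frac{\partial O}{\partial Q^A} \frac{\partial C}{\partial P_A}
            - \frac{\partial O}{\partial P_A} \frac{\partial C}{\partial Q^A} = 0 \mbox{ } ,
\end{equation}
```

whose solutions are precisely the phase space functions constant along the flow generated by $C$; this is the sense in which solving for observables reduces to a Flow Method problem.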
This can once again be conceptualized in Lie-theoretic terms, as {\bf Lie Algebraic Rigidity}, now embracing not only Quantum Spacetime Reconstruction but even the Foundations of Geometry \cite{A-Brackets}. \mbox{ } \noindent Refoliation Invariance can be expressed as a `commuting pentagon' Lie algebraic structure condition. The Dirac algebroid formed by GR's constraints furthermore satisfies these requirements. \mbox{ } \noindent Among the above uses of Lie's Mathematics, moreover, Lie brackets algebraic structures see particularly major and central use. These are moreover now argued to run not on Dirac's Mathematics but on Lie's more general Mathematics, through examples of `Dirac magic' having been observed \cite{A-Brackets} to be realized by Lie brackets more generally than in Dirac's classical Poisson brackets setting. \subsection{Frontiers} \noindent Subsequent series of Articles on A Local Resolution of the Quantum Problem of Time and on the classical Global Problem of Time are imminent. TRiCQT, TRiPIQT and semiclassical resolution are already out -- see Part III of \cite{ABook} -- as are some global parts of the Comparative Theory of Background Independence: \cite{ABook}'s Epilogues and \cite{A-Killing, A-Cpct, A-CBI}. \section{Configurations and configuration spaces $\FrQ$}\label{Q-Primary} \noindent{\bf Definition 1} {\it Configurations} \cite{Lanczos, Arnold} \be \mbox{\boldmath$Q$} \mbox{ } \mbox{ with components } \mbox{ } Q^A \mbox{ } , \end{equation} are instantaneous snapshots of the state of a system $\lFrs$. 
\mbox{ } \noindent{\bf Definition 2} The space of all possible configurations $\mbox{\boldmath$Q$}$ for a given system $\lFrs$ is the corresponding {\it configuration space} \cite{Lanczos, Arnold, ABook}, \be \FrQ(\lFrs) \mbox{ } ; \end{equation} \noindent{\bf Notation 1} The dimension of configuration space is \be k := \mbox{dim}(\FrQ(\lFrs)) \mbox{ } . \end{equation} \noindent{\bf Notation 2} In the current Series, we use slanted font for finite-dimensional entities and straight font for field entities. \mbox{ } \noindent{\bf Notation 3} We use mathfrak font for the corresponding spaces of entities. This is as a means of immediately avoiding confusion between objects of a given type and the spaces thereof. \mbox{ } \noindent{\bf Remark 1} In Articles I to V, we use Finite Theory as a default, with full GR as the only Field-Theoretic exception. From Article VI onward, Electromagnetism, Yang--Mills Theory and full GR coupled to whichever of these or a scalar field are considered, so the portmanteau notation starts in earnest in that Article. \mbox{ } \noindent{\bf Remark 2} This Sec's examples are moreover unreduced, in contrast to Sec II.6's further examples of reduced configuration spaces. \subsection{Example 1) Newtonian Mechanics} \noindent{\bf Definition 3} For $N$ particles in the usual flat-space $\mathbb{R}^d$ model of absolute space, we denote the incipient configurations -- {\it N-point constellations} -- by \be \mbox{\boldmath$q$} \mbox{ } \mbox{ with components } \mbox{ } q^{aI} \mbox{ } , \end{equation} for $a$ an $\mathbb{R}^d$ vector index running from 1 to $d$ (such vectors are also denoted by underlining) and $I$ a particle label running from 1 to $N$. \mbox{ } \noindent{\bf Definition 4} The corresponding configuration spaces are {\it constellation spaces} \be \FrQ(d, N) \:= \FrQ(\mathbb{R}^d, N) \:= \mbox{\Large $\times$}_{I = 1}^N \mathbb{R}^d \mbox{ } . 
\end{equation} \noindent{\bf Remark 1} Straightforwardly, \be \FrQ(\mathbb{R}^d, N) \m = \m \{\mathbb{R}^{d}\}^N \m = \m \mathbb{R}^{d \, N} \mbox{ } , \end{equation} which is furthermore equipped with the standard Euclidean inner product or metric. \subsection{Example 2) Full GR} \noindent{\bf Structure 1} GR spacetime is \be \mbox{ a semi-Riemannian 4-metric } \mbox{ } \mbox{\bf g} \mbox{ } \mbox{ with components } \mbox{ } \mbox{g}_{\mu\nu}(\vec{X}) \end{equation} on a \be \mbox{ 4-$d$ topological manifold } \mbox{ } \FrM \mbox{ } ; \end{equation} we use the spacetime vector \be \vec{X} \mbox{ } \mbox{ with components } \mbox{ } X^{\mu} \mbox{ } \mbox{ to denote spacetime coordinates} \mbox{ } . \end{equation} \noindent{\bf Structure 2} Therein, the incipient configurations are \be \mbox{Riemannian 3-metrics } \mbox{ } \mbox{\bf h} \mbox{ } \mbox{ with components } \mbox{ } \mbox{h}_{ab}(\underline{x}) \end{equation} on a fixed \be \mbox{ 3-$d$ topological manifold } \mbox{ } \bupSigma \mbox{ } ; \end{equation} interpreted as a spatial slice of GR spacetime; we use the space vector \be \underline{x} \mbox{ } \mbox{ with components } \mbox{ } x^a \mbox{ } \mbox{ to denote space coordinates} \mbox{ } . \end{equation} \noindent We will refer to this point of view as GR-as-Geometrodynamics. It is the most longstanding dynamical \cite{Darmois, B52} and canonical \cite{ADM, WheelerGRT, Battelle, DeWitt67} formulation of GR. \mbox{ } \noindent{\bf Modelling assumptions} We consider $\bupSigma$ compact without boundary (CWB) and connected. 3-spheres $\mathbb{S}^3$ and 3-tori $\mathbb{T}^3$ are the most commonly considered specific spatial topological manifolds in the geometrodynamical literature; we later make use of $\mathbb{S}^3$ specifically. \mbox{ } \noindent{\bf Remark 1} As the 3-metric $\mbox{\bf h}$ is a symmetric $3 \times 3$ matrix, it has 6 degrees of freedom per space point. 
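\mbox{ } \noindent As a quick illustrative aside, the two countings just used -- $\mbox{dim}(\FrQ(d, N)) = d \, N$, and 6 degrees of freedom per space point for a symmetric $3 \times 3$ metric -- can be sketched in a few lines of Python (the function names are hypothetical):

```python
# Illustrative sketch of the dimension counting above (hypothetical names).

def constellation_dim(d, N):
    """dim Q(R^d, N) = d*N for N particles in d-dimensional absolute space."""
    return d * N

def sym_metric_dof(n):
    """Independent components of a symmetric n x n metric at a point."""
    return n * (n + 1) // 2

print(constellation_dim(3, 10))  # 30: coordinates for 10 particles in R^3
print(sym_metric_dof(3))         # 6: GR's 3-metric h_ab, per space point
```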
\mbox{ } \noindent{\bf Definition 1} The space formed by the totality of the $\mbox{h}_{ab}$ on a fixed $\bupSigma$ is GR's incipient configuration space \be \FrQ(\bupSigma) = \Riem(\bupSigma) \mbox{ } . \end{equation} \noindent{\bf Remark 2} The 3-metrics $\mbox{\bf h}$ are analogous to Newtonian Mechanics' incipient $N$-point configurations $\mbox{\boldmath$q$}$, with $\Riem(\bupSigma)$ playing an analogous role to constellation space $\FrQ(d, N)$, and $\bupSigma$ playing a more loosely analogous role to the underlying absolute space $\mathbb{R}^d$. See Articles II and VI for more about GR configuration spaces. \subsection{Example 3) Minisuperspace GR} \noindent{\bf Structure 1} Homogeneous positive-definite 3-metrics are notions of space in which every point has the same properties. \mbox{ } \noindent{\bf Structure 2} The space of all of these on a given spatial topology $\bupSigma$ is {\it Minisuperspace} \cite{mini, Magic} \be \Mini(\bupSigma) \mbox{ } ; \end{equation} This is a simpler, particularly symmetric subcase of GR. \mbox{ } \noindent The specific Minisuperspace models used in this Series of Articles' detailed examples are spatially closed on Machian grounds. I.e.\ these avoid undue influence of boundary or asymptotic physics, a criterion that Einstein also argued for \cite{Ein21}. \mbox{ } \noindent{\bf Example 1} The simplest choice is $\bupSigma = \mathbb{S}^3$; this is also the most conventional for closed-universe cosmologies. One needs at least 2 degrees of freedom, and Cosmology conventionally makes use of scalar fields alongside the scalefactor of the universe, $a$. The simplest case brings in one minimally-coupled scalar field. The $\FrQ$ metric for this Minisuperspace is (up to a conformal factor of $a^3$) just 2-$d$ Minkowski spacetime $\mathbb{M}^2$ equipped with its standard indefinite flat metric. 
\mbox{ } \noindent{\bf Example 2} Other models considered accompany the scalefactor $a$ with anisotropies: 2 $\beta_{\pm}$ or the larger set of $\underline{\underline{\beta}}$ (of which there are 5, by tracelessness). \mbox{ } \noindent{\bf Structure 3} Let us also introduce \be \Ani(\bupSigma) \end{equation} to denote the {\it anisotropyspace} that these form (this Series only considers the $\beta_{\pm}$ case in any detail). \subsection{Point transformations}\label{Q-Geom} For subsequent Articles' use, $\FrQ$'s morphisms -- the coordinate transformations of $\FrQ$ -- are termed the {\it point transformations} and form the mapping space \be Point(\FrQ) \mbox{ } . \end{equation} \section{Interlude on the Standard Principles of Dynamics} This and the next two Secs support facets 1 to 4 of the Problem of Time. Consult \cite{Goldstein, Lanczos} as preliminary reading if unfamiliar with this material. \subsection{The Euler--Lagrange action} \noindent{\bf Structure 1} Given the configurations $\mbox{\boldmath$Q$}$ indexed by $\fA$, various derived objects can be built up from these. The first we require are {\it velocities} \be {\mbox{\boldmath$Q$}}^{\prime} \:= \frac{\d \mbox{\boldmath$Q$}}{\d t} \mbox{ } , \end{equation} for $t$ the Newtonian time $t^{\sN\mbox{\scriptsize e}\mbox{\scriptsize w}\mbox{\scriptsize t}\so\sn}$ in Mechanics or a coordinate time $\mbox{t}^{\mbox{\scriptsize c}\so\so\mbox{\scriptsize r}\mbox{\scriptsize d}}$ in GR. \mbox{ } \noindent{\bf Structure 2} {\it Lagrangian variables} \cite{Lagrange} are then \be (\mbox{\boldmath$Q$}, \mbox{\boldmath$Q$}^{\prime}) \mbox{ } . \end{equation} \noindent{\bf Structure 3} It is enlightening to furthermore view these derived objects as forming the tangent bundle over configuration space, \be \FrT(\FrQ) \mbox{ } . 
\end{equation} \noindent{\bf Structure 4} {\it Kinetic metrics} $\underline{\underline{\mbox{\boldmath$M$}}}$ with components $M_{\sfA\sfB}(\mbox{\boldmath$Q$})$ are a further type of composite object which feature in the theory's kinetic term, \be T \:= \mbox{$\frac{1}{2}$} ||\mbox{\boldmath$Q$}^{\prime}||_{\mbox{\scriptsize\boldmath$M$}}\mbox{}^2 \:= \mbox{$\frac{1}{2}$} M_{\sfA\sfB} Q^{\sfA \, \prime} Q^{\sfB \, \prime} \mbox{ } . \label{T-1} \end{equation} This can additionally be considered to equip the configuration space $\FrQ$ with a metric, which the current Series assumes to be time and velocity independent in its fundamental whole-universe setting. \mbox{ } \noindent{\bf Modelling Assumption} This is for a finite second-order classical physical system \cite{Lanczos, Goldstein} expressed in Lagrangian variables. \mbox{ } \noindent The kinetic term can furthermore be viewed as a mapping \be \FrT(\FrQ) \longrightarrow \mathbb{R} \mbox{ } . \label{TT-R} \end{equation} \noindent{\bf Structure 5} The {\it potential term} is \be V = V(\mbox{\boldmath$Q$}) \mbox{ } : \end{equation} also time and velocity-independent in the intended fundamental whole-universe setting. In the case of Mechanics, this is otherwise an a priori free function, but takes a more specific form in the case of GR. \mbox{ } \noindent{\bf Structure 6} All dynamical information is contained within the {\it Lagrangian} function $L(\mbox{\boldmath$Q$}, \mbox{\boldmath$Q$}^{\prime})$. The most common form this takes is \begin{equation} L(\mbox{\boldmath$Q$}, \mbox{\boldmath$Q$}^{\prime}) \m = \m T(\mbox{\boldmath$Q$}, \mbox{\boldmath$Q$}^{\prime}) - V(\mbox{\boldmath$Q$}) \end{equation} for $T$ as given by (\ref{T-1}). 
\mbox{ } \noindent{\bf Structure 7} The familiar difference-type Euler--Lagrange action for a finite theory is \be {\cal S}_{\mbox{\scriptsize E}\sL} \m = \m \int L(\mbox{\boldmath$Q$}, \mbox{\boldmath$Q$}^{\prime}) \, \d t \m = \m \int \{T(\mbox{\boldmath$Q$}, \mbox{\boldmath$Q$}^{\prime}) - V(\mbox{\boldmath$Q$})\} \d t \m = \m \int \left\{ \mbox{$\frac{1}{2}$} {||\d \mbox{\boldmath$Q$}^{\prime}||_{\mbox{\scriptsize\boldmath$M$}}}^2 - V(\mbox{\boldmath$Q$}) \right\} \d t \mbox{ } , \label{S-EL} \end{equation} The bundle map interpretation of this parallels (\ref{TT-R}). \mbox{ } \noindent{\bf Example 1} The most usual Mechanics version \cite{Euler} of this has position vector coordinates \be \mbox{\boldmath$Q$} = \mbox{\boldmath$q$} \mbox{ } \mbox{ with components } \mbox{ } q^{Ia} \m , \m \m \end{equation} and diagonal constant-mass kinetic metric \be M_{IaJb} = m_I\delta_{IJ}\delta_{ab} \mbox{ } . \end{equation} \subsection{Euler--Lagrange equations and some of their common simplifications} Next apply the standard prescription of the Calculus of Variations to obtain the equations of motion such that ${\cal S}_{\mbox{\scriptsize E}\sL}$ is stationary with respect to the $\mbox{\boldmath$Q$}$. This approach considers the true motion between two particular fixed endpoints $e_1$ and $e_2$ alongside the set of varied paths about this motion (subject to the same fixed endpoints). It gives rise to the {\it Euler--Lagrange equations}, \be \frac{\d }{\d t} \left\{ \frac{\pa L}{\pa \mbox{\boldmath$Q$}^{\prime}} \right\} \m = \m \frac{\pa L}{\pa \mbox{\boldmath$Q$}} \mbox{ } . \label{ELE} \end{equation} These equations simplify in the three special cases below, two of which involve particular types of coordinates. Indeed, one major theme in the Principles of Dynamics is judiciously choosing a coordinate system with as many simplifying coordinates as possible. 
\mbox{ } \noindent{\bf Simplification 1} {\it Lagrange multiplier coordinates} $\mbox{\boldmath$m$}$ are such that $L$ is independent of $\mbox{\boldmath$m$}^{\prime}$, $$ \frac{\pa L}{\pa\mbox{\boldmath$m$}^{\prime}} \m = \m 0 \mbox{ } . $$ The corresponding Euler--Lagrange equations then simplify to \be \frac{\pa L}{\pa \mbox{\boldmath$m$}} \m = \m 0 \mbox{ } . \label{lmel} \end{equation} {\bf Simplification 2} {\it Cyclic coordinates} $\mbox{\boldmath$c$}$ are such that $L$ is independent of $\mbox{\boldmath$c$}$ itself, $$ \frac{\pa L}{\pa \mbox{\boldmath$c$}} \m = \m 0 \mbox{ } , $$ but $L$ features $\mbox{\boldmath$c$}^{\prime}$: the corresponding {\it cyclic velocities}. The $\mbox{\boldmath$c$}$ Euler--Lagrange equations then simplify to \be \frac{\pa L}{\pa \mbox{\boldmath$c$}^{\prime}} \m = \m \mbox{\bf const} \mbox{ } . \label{cyclic-vel} \end{equation} {\bf Simplification 3} {\it The energy integral type simplification}. If $L$ is free from the independent variable $t$, $$ \frac{\pa L}{\pa t} \m = \m 0 \mbox{ } , $$ then one Euler--Lagrange equation may be supplanted by the first integral \be L - \mbox{\boldmath$Q$}^{\prime} \cdot \frac{\pa L}{\pa \mbox{\boldmath$Q$}^{\prime}} \m = \m \mbox{ constant } \mbox{ } . \label{en-int} \end{equation} \subsection{Multiplier elimination} Suppose that we can \be \mbox{solve } \mbox{ } 0 \m = \m \frac{\pa L}{\pa \mbox{\boldmath$m$}}(\bar{\mbox{\boldmath$Q$}}, \bar{\mbox{\boldmath$Q$}}^{\prime}, \mbox{\boldmath$m$}) \mbox{ } \mbox{ as equations for the } \mbox{ } \mbox{\boldmath$m$} \mbox{ } . \label{LME} \end{equation} \noindent{\bf Remark 1} These equations arise from Simplification 1, and $\bar{\mbox{\boldmath$Q$}}$ denotes the system's non-multiplier coordinates. \mbox{ } \noindent Solvability is not in general guaranteed. Firstly, (\ref{LME}) can on occasion not even be well-determined, due to some of the $\mbox{\boldmath$m$}$ being absent from the equations or due to some equations not being independent. 
\mbox{ } \noindent Secondly, it is also possible for (\ref{LME}) to admit no solution (or only a non-real solution which cannot be applied physically, or a solution that is not in closed form \cite{FileR}). \mbox{ } \noindent In the absence of these pathologies, \be \mbox{\it multiplier elimination}: \mbox{ } L(\bar{\mbox{\boldmath$Q$}}, \bar{\mbox{\boldmath$Q$}}^{\prime}, \mbox{\boldmath$m$}) \longrightarrow L_{\mbox{\scriptsize r}\mbox{\scriptsize e}\mbox{\scriptsize d}}(\bar{\mbox{\boldmath$Q$}}, \bar{\mbox{\boldmath$Q$}}^{\prime}) \mbox{ } . \end{equation} \subsection{Conjugate momenta} {\bf Structure 1} We now consider further derived objects: the {\it conjugate momenta}, \be \mbox{\scriptsize\boldmath$P$} \:= \frac{\pa L}{\pa \mbox{\boldmath$Q$}^{\prime} } \mbox{ } . \label{mom-vel} \end{equation} Explicit computation of this for (\ref{S-EL}) gives the {\it momentum--velocity relation} \be \underline{\mbox{\scriptsize\boldmath$P$}} = \underline{\underline{\mbox{\boldmath$M$}}} \cdot \underline{\mbox{\boldmath$Q$}}^{\prime} \mbox{ } . \end{equation} N.B.\ that $\mbox{\boldmath$M$}$ is a matrix, so one would need two dot products to get a scalar output. The definition of $\mbox{\scriptsize\boldmath$P$}$ enables further formulation of the preceding Section's simplifications. \mbox{ } \noindent {\bf Simplification 1} Now the preliminary condition in deducing the multiplier condition is \be \mbox{\scriptsize\boldmath$P$}^{\mbox{\scriptsize m}} = 0 \mbox{ } . \end{equation} \noindent {\bf Simplification 2} The cyclic coordinate condition is \be \mbox{\scriptsize\boldmath$P$}^{\mbox{\scriptsize c}} = \mbox{ constant } \mbox{ } . \label{cyclic-vel-P} \end{equation} \noindent {\bf Simplification 3} The energy integral is \begin{equation} L - \mbox{\boldmath$Q$}^{\prime} \cdot \mbox{\scriptsize\boldmath$P$} = \mbox{ constant } \mbox{ } . 
\end{equation} \subsection{Legendre Transformations}\label{Legendre} {\bf Definition 1} Suppose we have a function \be F(\mbox{\boldmath$y$}, \mbox{\boldmath$v$}) \end{equation} and we wish to use $$ \mbox{\boldmath$z$} \:= \frac{\pa F}{\pa \mbox{\boldmath$y$}} $$ as variables in place of the $\mbox{\boldmath$y$}$. To avoid losing information in the process, a {\it Legendre transformation} is required: passing to a function \begin{equation} G(\mbox{\boldmath$z$}, \mbox{\boldmath$v$}) = \mbox{\boldmath$y$} \cdot \mbox{\boldmath$z$} - F(\mbox{\boldmath$y$}, \mbox{\boldmath$v$}) \mbox{ } . \end{equation} \noindent{\bf Remark 1} Legendre transformations are symmetric between $\mbox{\boldmath$y$}$ and $\mbox{\boldmath$z$}$: if one defines $$ \mbox{\boldmath$y$} \:= \frac{\pa G}{\pa \mbox{\boldmath$z$}} \mbox{ } , $$ the reverse passage yields $$ F(\mbox{\boldmath$y$}, \mbox{\boldmath$v$}) = \mbox{\boldmath$y$} \cdot \mbox{\boldmath$z$} - G(\mbox{\boldmath$z$}, \mbox{\boldmath$v$}) \mbox{ } . $$ \noindent{\bf Example 2} Suppose that our function is a Lagrangian $L(\mbox{\boldmath$Q$}, \mbox{\boldmath$Q$}^{\prime})$ and that we wish to use some of the conjugate momenta $\mbox{\scriptsize\boldmath$P$}$ as variables in place of the corresponding $\mbox{\boldmath$Q$}^{\prime}$. \subsection{Passage to the Routhian}\label{Routh} {\bf Example 3} (of Legendre transformation). Start from a Lagrangian with cyclic coordinates \be \mbox{\boldmath$c$} \m , \m \m L(\bar{\mbox{\boldmath$Q$}}, \bar{\mbox{\boldmath$Q$}}^{\prime}, \mbox{\boldmath$c$}^{\prime}) \mbox{ } , \end{equation} for $\bar{\mbox{\boldmath$Q$}}$ now the non-cyclic coordinates. \mbox{ } \noindent{\bf Step 1} Exchange the $\mbox{\boldmath$c$}^{\prime}$ for the corresponding momenta using (\ref{cyclic-vel-P}). 
\noindent This entails being able to \be \mbox{solve } \mbox{ } \mbox{ } \mbox{\bf const} \m = \m \mbox{\boldmath$p$}^c \m = \m \frac{\pa L}{\pa \mbox{\boldmath$c$}^{\prime}}(\bar{\mbox{\boldmath$Q$}}, \bar{\mbox{\boldmath$Q$}}^{\prime}, \mbox{\boldmath$c$}^{\prime}) \mbox{ } \mbox{ } \mbox{ as equations for the $\mbox{\boldmath$c$}^{\prime}$ } . \label{Routhian-Reduction} \end{equation} \noindent{\bf Step 2} Unlike in multiplier elimination, however, these are not just to be substituted back into the Lagrangian. Rather, one additionally needs to apply the Legendre transformation (\ref{Routhian}), by which one passes not to a na\"{\i}ve reduced Lagrangian but to a {\it Routhian} \begin{equation} R(\bar{\mbox{\boldmath$Q$}}, \bar{\mbox{\boldmath$Q$}}^{\prime}, \mbox{\boldmath$p$}^c) \:= L(\bar{\mbox{\boldmath$Q$}}, \bar{\mbox{\boldmath$Q$}}^{\prime}, \mbox{\boldmath$c$}^{\prime}) - \mbox{\boldmath$p$}^c \cdot \mbox{\boldmath$c$}^{\prime} \mbox{ } . \label{Routhian} \end{equation} The overall process is known as \be \mbox{\it passage to the Routhian} \mbox{ } : L(\bar{\mbox{\boldmath$Q$}}, \bar{\mbox{\boldmath$Q$}}^{\prime}, \mbox{\boldmath$c$}^{\prime}) \longrightarrow R(\bar{\mbox{\boldmath$Q$}}, \bar{\mbox{\boldmath$Q$}}^{\prime}, \mbox{\boldmath$p$}^c) \mbox{ } . \end{equation} \noindent{\bf Remark 1} This entails treating the cyclic coordinates as a separate package from the non-cyclic ones. \mbox{ } \noindent{\bf Remark 2} This can be a useful trick \cite{Goldstein}, most usually in the context of simplifying the Euler--Lagrange equations. If this reduction can be performed, we can furthermore use cyclic momenta's constant status (\ref{cyclic-vel-P}) to free a dynamical problem from its cyclic coordinates. I.e.\ this completes the corresponding part of the integration of equations of motion. \subsection{Passage to the Hamiltonian}\label{Hamiltonian} {\bf Example 4} (of Legendre transformation). 
Replace {\sl all} the velocities $\mbox{\boldmath$Q$}^{\prime}$ by the corresponding momenta $\mbox{\scriptsize\boldmath$P$}$, to form the {\it Hamiltonian} \begin{equation} H(\mbox{\boldmath$Q$}, \mbox{\scriptsize\boldmath$P$}) \:= \mbox{\scriptsize\boldmath$P$} \cdot \mbox{\boldmath$Q$}^{\prime} - L(\mbox{\boldmath$Q$}, \mbox{\boldmath$Q$}^{\prime}) \mbox{ } . \end{equation} The variables \be (\mbox{\boldmath$Q$}, \mbox{\scriptsize\boldmath$P$}) \end{equation} are subsequently termed {\it Hamiltonian variables}. The issue of whether such a replacement is always possible is postponed to Sec \ref{Constraints}. \mbox{ } \noindent{\bf Subexample} For the Lagrangian (\ref{S-EL}), \begin{equation} H \m = \m \mbox{$\frac{1}{2}$} ||\mbox{\scriptsize\boldmath$P$}||_{\mbox{\scriptsize\boldmath$N$}}\mbox{}^2 + V(\mbox{\boldmath$Q$}) \mbox{ } , \end{equation} for $\mbox{\boldmath$N$}$ the inverse of the kinetic metric $\mbox{\boldmath$M$}$. This Series concentrates on the $t$-independent notion of Hamiltonian. The equations of motion are {\it Hamilton's equations}, \be \mbox{\boldmath$Q$}^{\prime} \m = \m \frac{\pa H}{\pa \mbox{\scriptsize\boldmath$P$}} \m , \m \m \mbox{\scriptsize\boldmath$P$}^{\prime} \m = \m - \frac{\pa H}{\pa \mbox{\boldmath$Q$}} \mbox{ } . \end{equation} For the Lagrange multiplier coordinates, the second half of the corresponding Hamilton's equations collapses to just \begin{equation} \frac{\pa H}{\pa \mbox{\boldmath$m$}} \m = \m 0 \mbox{ } . \end{equation} On the other hand, in the $t$-independent case (\ref{en-int}) becomes $H = const$. \mbox{ } \noindent{\bf Remark 1} Most phase spaces considered in this Series take the form of the cotangent bundle \be \FrT^*(\FrQ) \mbox{ } . \end{equation} \noindent{\bf Remark 2} Further motivations for the Hamiltonian formulation include admission of systematic treatment of constraints due to Dirac (\cite{Dirac, HTBook} and Sec \ref{Constraints}) and its greater closeness to Quantum Theory. 
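\mbox{ } \noindent As an illustrative aside, passage to the Hamiltonian can be traced symbolically for the simplest 1-$d$ instance of the above. The following Python/SymPy sketch (with hypothetical symbol names, not the Series' notation) inverts the momentum--velocity relation, forms $H = \mbox{\scriptsize\boldmath$P$} \cdot \mbox{\boldmath$Q$}^{\prime} - L$, and reads off Hamilton's equations:

```python
# Illustrative sketch: Legendre transformation to the Hamiltonian for
# L = (1/2) m v^2 - V(q) in 1-d (hypothetical symbol names).
import sympy as sp

q, v, p = sp.symbols('q v p')
m = sp.symbols('m', positive=True)
V = sp.Function('V')

L = sp.Rational(1, 2)*m*v**2 - V(q)

p_of_v = sp.diff(L, v)                      # momentum-velocity relation: p = m v
v_of_p = sp.solve(sp.Eq(p, p_of_v), v)[0]   # inverted: v = p/m

H = sp.simplify((p*v - L).subs(v, v_of_p))  # H = P.Q' - L
print(H)                                    # p^2/(2m) + V(q)

# Hamilton's equations: q' = dH/dp and p' = -dH/dq
print(sp.diff(H, p), '|', -sp.diff(H, q))   # p/m and -V'(q)
```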
\section{Temporal Relationalism (aspect 0a): Leibnizian implementation}\label{TR-Intro} \subsection{Leibnizian Time(lessness) Principle} \noindent{\bf Leibnizian Time(lessness) Principle} There is no time at the primary level for the universe as a whole \cite{L, B94I, ABook}. \mbox{ } \noindent The following two-part selection principles give a mathematically-sharp implementation \cite{BB82, FileR} at the level of Principles of Dynamics actions. \mbox{ } \noindent{\bf Temporal Relationalism i)} Include no extraneous times or extraneous time-like variables \cite{FileR}. \mbox{ } \noindent{\bf Temporal Relationalism ii)} Include no label times either \cite{FileR}. \mbox{ } \noindent The Euler--Lagrange action (\ref{S-EL}) however makes reference to the extraneous notion of time $t$, by which Temporal Relationalism i) fails. \subsection{Jacobi actions are Manifestly Reparametrization Invariant} {\bf Structure 1} We can however replace (\ref{S-EL}) with {\it Jacobi's action principle} \cite{Jacobi}. A first form for this is \be {\cal S}_{\mbox{\scriptsize J}} \m = \m 2 \int \sqrt{W \, T_{\lambda}} \d \lambda \m = \m \sqrt{2} \int \sqrt{E - V} ||\dot{\mbox{\boldmath$Q$}}||_{\mbox{\scriptsize\boldmath$M$}} \d \lambda \mbox{ } . \end{equation} The notation for this is as follows. $\lambda$ is a `label-time' parameter and \be \dot{\mbox{ }} \:= \frac{\d }{\d \lambda} \mbox{ } . \end{equation} \noindent{\bf Structure 2} A primary notion of velocity is here defined as the derivative with respect to $\lambda$: \begin{equation} \mbox{velocity} \:= \frac{\d \mbox{(configuration variable)}}{\d (\mbox{label time})} \mbox{ } \mbox{ i.e.\ } \mbox{ } \frac{\d \mbox{\boldmath$Q$}}{\d\lambda} \mbox{ } . \end{equation} \noindent{\bf Structure 3} The {\it parametrized Lagrangian variables} are \be (\mbox{\boldmath$Q$} , \mbox{ } \dot{\mbox{\boldmath$Q$}}) \mbox{ } . \end{equation} The tangent bundle $\FrT(\FrQ)$ is here realized as {\it configuration--parametrized-velocity space}. 
\mbox{ } \noindent It is now this version of velocities which enters the otherwise-standard definition of kinetic term, \be T_{\lambda} \:= \mbox{$\frac{1}{2}$} {||\dot{\mbox{\boldmath$Q$}}||_{\mbox{\scriptsize\boldmath$M$}}}^2 \m = \m \mbox{$\frac{1}{2}$} M_{\sfA\sfB} \dot{Q}^{\sfA} \dot{Q}^{\sfB} \mbox{ } . \end{equation} We assume in this Series that this takes the most physically standard form: homogeneous quadratic in the velocities; this assumption is removed in Sec \ref{JSS}. \mbox{ } \noindent{\bf Structure 4} $E$ is the {\it total energy} of the system, and the {\it potential factor} \be W(\mbox{\boldmath$Q$}) := E - V(\mbox{\boldmath$Q$}) \mbox{ } . \end{equation} \noindent{\bf Structure 5} The {\it Jacobi action} \cite{Lanczos} is of the product form \begin{equation} {\cal S}^{\sM\mbox{\scriptsize R}\mbox{\scriptsize I}}_{\mbox{\scriptsize J}} \:= \int \d\lambda \, L^{\sM\mbox{\scriptsize R}\mbox{\scriptsize I}}_{\mbox{\scriptsize J}} \m = \m 2 \int \d\lambda \sqrt{T_{\lambda}W} \mbox{ } . \label{J-action} \end{equation} \noindent{\bf Remark 1} The Jacobi action principle clearly complies with Temporal Relationalism i). \mbox{ } \noindent{\bf Remark 2} It moreover complies with Temporal Relationalism ii) as well. This is by {\bf Manifest Reparametrization Invariance}: switching to a monotonically-related\footnote{Monotonicity ensures no zero factors enter in the form of $\d \mu/\d \lambda$ terms.} label-time $\mu$ gives an equivalent action by cancellation of the label-time coordinate changes by Fig \ref{TR-Implems}a). This rests on the kinetic term $T_{\lambda}$ being homogeneous-quadratic in the velocities, so $\sqrt{T_{\lambda}}$ is linear therein. Thus interchanging the action's parameter for any other monotonically related label does not alter the physical content of the theory. 
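\mbox{ } \noindent As an illustrative aside, Manifest Reparametrization Invariance can also be checked numerically in a toy 1-$d$ model: evaluating the Jacobi action along the same path expressed in two monotonically related labels $\lambda$ and $\mu$, with $\lambda = \mu^3$, returns the same number. The potential, energy and path in the following Python sketch are hypothetical choices, not taken from the text:

```python
# Illustrative numerical check of Manifest Reparametrization Invariance:
# S_J = sqrt(2) * int sqrt(E - V(q)) |dq/dlabel| dlabel for one path under
# two monotonically related labels (all specific choices are hypothetical).
import math

def jacobi_action(q, dq, a, b, E, V, n=50000):
    """Midpoint-rule estimate of sqrt(2) * int_a^b sqrt(E - V(q)) |q'| dlabel."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        lam = a + (i + 0.5) * h
        total += math.sqrt(2.0) * math.sqrt(E - V(q(lam))) * abs(dq(lam)) * h
    return total

V = lambda x: 0.5 * x * x      # illustrative potential
E = 2.0                        # illustrative total energy, with E - V > 0 here

# Path in label lambda on [0, 1], then the same path in mu with lambda = mu**3:
q1, dq1 = (lambda l: math.sin(l)), (lambda l: math.cos(l))
q2 = lambda mu: math.sin(mu**3)
dq2 = lambda mu: 3 * mu**2 * math.cos(mu**3)   # chain rule

S1 = jacobi_action(q1, dq1, 0.0, 1.0, E, V)
S2 = jacobi_action(q2, dq2, 0.0, 1.0, E, V)
print(abs(S1 - S2) < 1e-4)     # True: the physical action is label-independent
```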
\mbox{ } \noindent{\bf Remark 3} More generally, Manifest Reparametrization Invariance requires an action to be homogeneous-linear in its label-time velocities (Fig \ref{TR-Implems}.a). \mbox{ } \noindent{\bf Structure 6} The $\d/\d \lambda$ in our parametrized velocities can moreover be viewed \cite{Stewart} as the Lie derivative \be \pounds_{\frac{\d}{\d \lambda}} \label{Lie-dot} \end{equation} in a particular frame. {\begin{figure}[!ht] \centering \includegraphics[width=0.85\textwidth]{TR-Implems-3.png} \caption{\footnotesize{Inter-relation of this Series of Articles' three implementations of Temporal Relationalism at the level of actions a-c).}} \label{TR-Implems}\end{figure}} \subsection{... or equivalently Manifestly Parametrization Irrelevant} Here no use of $\lambda$ is to be made at all. \mbox{ } \noindent{\bf Structure 1} One immediate consequence of this is that there is no primary notion of velocity: this has been supplanted by a {\it change in configuration} \be \d \mbox{(configuration variable)} \mbox{ } \mbox{ i.e.\ } \mbox{ } \d \mbox{\boldmath$Q$} \mbox{ } . \label{Change} \end{equation} {\bf Structure 2} {\it Configuration--change variables} are then \be (\mbox{\boldmath$Q$} , \mbox{ } \d\mbox{\boldmath$Q$}) \mbox{ } . \label{Q-dQ} \end{equation} These constitute an alternative representation of the tangent bundle $\FrT(\FrQ)$. \mbox{ } \noindent{\bf Structure 3} One is now to conceive in terms of {\it kinetic arc element} \be \d s := ||\d\mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}} \m = \m \sqrt{ M_{AB} \d Q^A \d Q^B } \label{hom-quad} \end{equation} in place of kinetic energy. 
\mbox{ } \noindent{\bf Structure 4} One is additionally to conceive in terms of the {\it physical} alias {\it Jacobi arc element} \be \d J \:= \sqrt{ 2 W} \, \d s \m = \m \sqrt{2} \sqrt{ E - V } ||\d\mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}} \m = \m \sqrt{2} \sqrt{ E - V } \sqrt{M_{AB} \d Q^A \d Q^B} \label{dJ} \end{equation} in place of Lagrangian. \mbox{ } \noindent{\bf Structure 5} One thus succeeds in formulating one's action without use of any meaningless label at all. This furnishes a second furtherly conceptually advanced implementation of Temporal Relationalism ii): the {\bf Manifestly Parametrization Irrelevant} form of the Jacobi action \be {\cal S}_{\mbox{\scriptsize J}}^{\sM\sP\mbox{\scriptsize I}} \m = \m \int \d J \m = \m \sqrt{2} \int \sqrt{W} \, \d s \m = \m \sqrt{2} \int \sqrt{W(\mbox{\boldmath$Q$})} ||\d \mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}} \m = \m \sqrt{2} \int \sqrt{E - V(\mbox{\boldmath$Q$})} \sqrt{M_{AB}(\mbox{\boldmath$Q$}) \d Q^A \d Q^B} \mbox{ } . \label{GeneralAction} \end{equation} Jacobi himself \cite{Jacobi} already formulated his action principle in this way. \mbox{ } \noindent Actions are now more generally required to be homogeneous of degree one in the changes (this is clearly equivalent by Fig \ref{TR-Implems}.b). \mbox{ } \noindent{\bf Lemma 1}. (\ref{J-action}) and (\ref{GeneralAction}) are indeed equivalent. \mbox{ } \noindent{\underline{Proof}} The arrow between a) and b) of Fig \ref{TR-Implems}. $\Box$ \mbox{ } \noindent{\bf Structure 6} The $\d$ in our changes can now be interpreted in terms of the Lie derivative \cite{Pauli, Sleb, Yano55, Yano70} \be \pounds_{\d} \end{equation} in a particular frame (paralleling (\ref{Lie-dot})). \subsection{... or, dually, a geometrical action} It is moreover a further conceptual advance for Background Independent Physics to cease to employ names or notions deriving from physically irrelevant or Background Dependent entities. 
\mbox{ } \noindent In the present case, this involves ceasing to even mention any meaningless label or parameter. This can be done because the Manifestly Parametrization Irrelevant implementation is, dually, a {\bf (Configuration Space) Geometric} Implementation, \be {\cal S}_{\mbox{\scriptsize J}}^{\sFrQ\mbox{-}\mbox{\scriptsize G}\mbox{\scriptsize e}\so\mbox{\scriptsize m}} \m = \m \sqrt{2} \int \sqrt{W(\mbox{\boldmath$Q$})} ||\d \mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}} \mbox{ } . \end{equation} \noindent{\bf Remark 1} This view of our action is commonplace in the Dynamics \cite{Arnold} and Celestial Mechanics literatures in precisely this dual geometrical action conception. This does however obscure its Manifestly Parametrization Irrelevant meaning, which more directly addresses the relevance of such an action to the theory of time. \mbox{ } \noindent{\bf Remark 2} The premise of a geometrical action itself is to view dynamics as a geodesic on the corresponding configuration space geometry. \mbox{ } \noindent{\bf Remark 3} We use ${\cal S}_{\mbox{\scriptsize J}}$ to mean either interpretation: ${\cal S}_{\mbox{\scriptsize J}}^{\sFrQ\mbox{-}\mbox{\scriptsize G}\mbox{\scriptsize e}\so\mbox{\scriptsize m}}$ or ${\cal S}_{\mbox{\scriptsize J}}^{\sM\sP\mbox{\scriptsize I}}$. \subsection{Euler--Lagrange to Jacobi action as a passage to the Routhian} \noindent{\bf Remark 1} The passage from Euler--Lagrange's action principle with no explicit $t$ dependence to Jacobi's action principle (\ref{J-action}) is a subcase of passage to the Routhian. \mbox{ } \noindent For this, rewrite ${\cal S}_{\mbox{\scriptsize E}\sL}$ as the $\lambda$-parametrization of the action arising by appending time to the system's configurations: \be {\cal S}_{\mbox{\scriptsize E}\sL}^{\sP\mbox{\scriptsize a}\mbox{\scriptsize r}\mbox{\scriptsize a}\mbox{\scriptsize m}} := \int \d \lambda \, \dot{t} \, L(\mbox{\boldmath$Q$}, \dot{\mbox{\boldmath$Q$}}) \mbox{ } . 
\label{Ltadjac} \end{equation} \noindent{\bf Remark 2} The original Lagrangian's explicit $t$-independence means that $t$ in (\ref{Ltadjac}) is a cyclic coordinate. \mbox{ } \noindent Passage to the Routhian thus applies, yielding \be L_{\mbox{\scriptsize T}\mbox{\scriptsize R}}(\mbox{\boldmath$Q$}, \dot{\mbox{\boldmath$Q$}}) \:= L(\mbox{\boldmath$Q$}, \dot{\mbox{\boldmath$Q$}})\dot{t} - P^t \dot{t} \mbox{ } \mbox{ } \mbox{ for } \mbox{ } \mbox{ } \frac{\pa L}{\pa \dot{t}} \m = \m P^t \m = \m - E \mbox{ } , \mbox{ } \mbox{ constant } . \label{tprimeeq} \end{equation} \noindent{\bf Remark 3} See Sec \ref{JME} for the converse working. \mbox{ } \noindent{\bf Remark 4} This working is termed the {\it parametrization procedure} in the Mechanics literature \cite{Lanczos}. This refers to the (nonrelational!) adjunction of the 1-$d$ space of a time variable to the configuration space \be \FrQ \longrightarrow \FrQ \times \mathbb{R} \mbox{ } . \end{equation} \noindent{\bf Remark 5} This working also serves to justify the identification of the Jacobi action's potential factor $W$ as the combination of well-known physical entities $E - V$. \subsection{Jacobi--Synge actions}\label{JSS} {\bf Structure 1} More generally, Temporal Relationalism is implemented by \be {\cal S}_{\mbox{\scriptsize J}\mbox{\scriptsize S}} \m = \m \int L_{\mbox{\scriptsize J}\mbox{\scriptsize S}} \d \lambda \m = \m \int \d JS \end{equation} with Jacobi--Synge Lagrangian $L_{\mbox{\scriptsize J}\mbox{\scriptsize S}} = L_{\mbox{\scriptsize J}\mbox{\scriptsize S}}^{\sM\mbox{\scriptsize R}\mbox{\scriptsize I}}$ that is homogeneous-linear in $\dot{\mbox{\boldmath$Q$}}$ and Jacobi--Synge arc element \cite{Synge, Lanczos} $\d JS = \d JS^{\sM\sP\mbox{\scriptsize I}}$, or dually, $\d JS^{\sFrQ\mbox{-}\mbox{\scriptsize G}\mbox{\scriptsize e}\so\mbox{\scriptsize m}}$ that is homogeneous-linear in $\d{\mbox{\boldmath$Q$}}$. 
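\mbox{ } \noindent As an illustrative aside, homogeneous-linear Lagrangians of the above Jacobi(--Synge) type have identically vanishing $\mbox{\scriptsize\boldmath$P$} \cdot \dot{\mbox{\boldmath$Q$}} - L$ by Euler's homogeneous function theorem, with the momenta instead obeying a quadratic constraint. The following 1-$d$ Python/SymPy sketch (restricted to $v > 0$, with hypothetical names, rather than the general multi-index case) exhibits both features:

```python
# Illustrative sketch (1-d, v > 0, hypothetical names): a Jacobi-type
# homogeneous-linear Lagrangian L = sqrt(2 W(q)) * sqrt(m v^2) has
# p v - L = 0 identically, and its momentum obeys p^2/(2m) - W = 0.
import sympy as sp

q, v, m = sp.symbols('q v m', positive=True)
W = sp.Function('W')

L = sp.sqrt(2*W(q)) * sp.sqrt(m*v**2)   # = sqrt(2 W m) * v  for v > 0
p = sp.diff(L, v)                       # conjugate momentum

print(sp.simplify(p*v - L))             # 0: the would-be Hamiltonian vanishes
print(sp.simplify(p**2/(2*m) - W(q)))   # 0: a quadratic 'energy' constraint
```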
The equivalences in Fig \ref{TR-Implems} are already stated at this level of generalization.\footnote{Synge's generalization of Jacobi parallels Finsler's (and in fact Riemann's) generalization of Riemannian Geometry, although Synge's considerations are more general still in not requiring nondegeneracy.} \mbox{ } \noindent{\bf Structure 2} The {\it Jacobi--Synge construction} is, given a configuration space metric geometry, to form the corresponding theory of Mechanics thereupon. \subsection{Hamiltonian formulation for constrained systems}\label{Constraints} {\bf Structure 1} Passage from the Lagrangian to the Hamiltonian formulations can be nontrivial. For the {\it Legendre (transformation) matrix} \begin{equation} \underline{\underline{\bslLambda}} \:= \frac{ \pa^2 L }{ \pa \underline{\mbox{\boldmath$Q$}}^{\prime} \pa \underline{\mbox{\boldmath$Q$}}^{\prime} } \mbox{ } \left( \m = \m \frac{ \pa \underline{\mbox{\scriptsize\boldmath$P$}} }{ \pa {\underline{\mbox{\boldmath$Q$}}}^{\prime} } \right) \label{Leg-Matrix} \end{equation} -- named by its latter form being associated with the Legendre transformation -- is in general non-invertible. Thereby, the momenta $\mbox{\scriptsize\boldmath$P$}$ cannot be independent functions of the velocities $\mbox{\boldmath$Q$}^{\prime}$. \mbox{ } \noindent{\bf Example 1} In the case of the action with purely-quadratic kinetic term, the Legendre matrix is just the kinetic matrix $\mbox{\boldmath$M$}$. \mbox{ } \noindent{\bf Definition 1} In the Hamiltonian formulation, {\it constraints} are relations \begin{equation} \mbox{\boldmath \scriptsize ${\cal C}$}(\mbox{\boldmath$Q$}, \mbox{\scriptsize\boldmath$P$}) = 0 \end{equation} between the momenta $\mbox{\scriptsize\boldmath$P$}$ by which these are not independent. These carry $\mbox{\sffamily C}$ indices when required, using primed and double-primed versions when multiple concurrent such are required in a given formula. 
This is our general notation for specialized-object indices: the letter says which type of specialized object, with priming used to cover multiple uses of such indices within a single formula. This is quite a general type of constraint, as considered by Dirac \cite{Dirac}. \mbox{ } \noindent{\bf Remark 1} The Euler--Lagrange equations can be rearranged to reveal the explicit presence of the Legendre matrix, \begin{equation} \underline{\underline{\bslLambda}} \cdot \underline{\mbox{\boldmath$Q$}}^{\prime\prime} \m = \m \frac{ \pa^2 L }{ \pa \mbox{\boldmath$Q$}^{\prime} \pa \mbox{\boldmath$Q$}^{\prime} } \cdot \mbox{\boldmath$Q$}^{\prime\prime} \m = \m \frac{ \pa L }{ \pa \mbox{\boldmath$Q$} } - \frac{ \pa^2 L }{ \pa \mbox{\boldmath$Q$} \pa \mbox{\boldmath$Q$}^{\prime} } \cdot \mbox{\boldmath$Q$}^{\prime} \mbox{ } . \label{acc} \end{equation} The above noninvertibility additionally means that the accelerations are not uniquely determined by the Lagrangian data $(\mbox{\boldmath$Q$}, \mbox{\boldmath$Q$}^{\prime})$. \mbox{ } \noindent{\bf Notation 1} Keeping track of which objects enter a Background Independence scheme alias Problem of Time resolution is moreover an issue. For this requires many objects: many are new, introduced to address aspects or facets on the one hand, or are newly formulated or reinterpreted on the other because traditional versions of them succumb to facet interference \cite{K92, I93}. Because of this, we jointly denote constraints -- a sizeable subclass of such objects -- by the undersized calligraphic font, so that constraint status can immediately be read off the formalism. \mbox{ } \noindent{\bf Definition 2} Constraints arising from the above non-invertibility of the momentum--velocity relations alone are termed {\it primary} \cite{Dirac, HTBook}, denoted by \be \mbox{\boldmath\scriptsize ${\cal P}$} \m , \m \m \mbox{ indexed by } \mbox{ } \mbox{\sffamily P} \mbox{ } . 
\end{equation} \noindent{\bf Definition 3} Constraints furthermore requiring input from the variational equations of motion are termed {\it secondary} \cite{Dirac, HTBook}, denoted by \be \mbox{\boldmath\scriptsize ${\cal S}$} \m , \m \m \mbox{ indexed by } \mbox{ } \mbox{\sffamily S} \mbox{ } . \end{equation} [Classification of constraints into primary and secondary is originally due to Bergmann; it is not reformulation independent \cite{HTBook}.] \mbox{ } \noindent{\bf Remark 2} Constraints arising from the propagation of existing constraints using the equations of motion are an intuitively valuable case of secondary constraints (albeit these are on some occasions called `tertiary constraints'). \mbox{ } \noindent{\bf Remark 3} We often use the parametrized-velocity, alias dotted, version of this subsection, i.e.\ using $\dot{\mbox{\boldmath$Q$}}$ and $L_{\mbox{\scriptsize J}\mbox{\scriptsize S}}$. \subsection{Primary constraints from Temporal Relationalism}\label{Prim-TR} We work for now in the Manifestly Reparametrization Invariant case. \mbox{ } \noindent{\bf Definition 1} In this implementation, the definition of momentum takes the form \be \mbox{\scriptsize\boldmath$P$} \:= \frac{\pa L_{\mbox{\scriptsize J}\mbox{\scriptsize S}}}{\pa \dot{\mbox{\boldmath$Q$}}} \mbox{ } . \label{P-MRI-Def} \end{equation} {\bf Lemma 2 (Dirac)} \cite{Dirac} Manifestly Reparametrization Invariant actions imply at least one primary constraint. \mbox{ } \noindent{\underline{Proof}} These are homogeneous of degree 1 in the velocities $\dot{\mbox{\boldmath$Q$}}$. \mbox{ } \noindent The $k := \mbox{dim}(\FrQ)$ conjugate momenta $\mbox{\scriptsize\boldmath$P$}$ are consequently (\ref{P-MRI-Def}) homogeneous of degree 0 in $\dot{\mbox{\boldmath$Q$}}$. \mbox{ } \noindent I.e. functions of at most $k - 1$ ratios of velocities. \mbox{ } \noindent There must thus be at least one relation between the momenta themselves, without any use having been made of the equations of motion. 
\mbox{ } \noindent But this meets the definition of primary constraint. $\Box$ \mbox{ } \noindent{\bf Remark 1} Temporal Relationalism thus acts as a {\it Constraint Provider}: an underlying principle that produces constraints \cite{Battelle, B94I}.\footnote{This is in opposition to the `Applied Mathematics' point of view that constraints just are, no questions asked. Which opposition is made, in particular, in the context of investigating origins for {\sl fundamental theories' constraints}. This is furthermore an example of Wheeler asking for `zeroth principles' \cite{Battelle} whenever presented with `first principles'.} \mbox{ } \noindent{\bf Example 1} In the particular case of Jacobi's action \cite{BB82, B94I}, the definition of momentum takes the form \be \underline{\mbox{\scriptsize\boldmath$P$}} \:= \frac{\pa L_{\mbox{\scriptsize J}}}{\pa \dot{\mbox{\boldmath$Q$}}} \m = \m \sqrt{\frac{W}{T}} \underline{\underline{\mbox{\boldmath$M$}}} \cdot \dot{\underline{\mbox{\boldmath$Q$}}} \mbox{ } . \end{equation} {\bf Lemma 3} (Barbour \cite{B94I}) For Jacobi-type actions, there is precisely one primary constraint, \begin{equation} \scE \m = \m \mbox{$\frac{1}{2}$} ||\mbox{\scriptsize\boldmath$P$}||_{\mbox{\scriptsize\boldmath$N$}}\mbox{}^2 + V(\mbox{\boldmath$Q$}) \m = \m E \mbox{ } , \label{E-Constraint} \end{equation} or, in terms of coordinates, \be \scE \:= \mbox{$\frac{1}{2}$} \, N^{AB} P_A P_B + V(\mbox{\boldmath$Q$}) \m = \m E \mbox{ } . \label{E} \end{equation} Here \be \mbox{\boldmath$N$} := \mbox{\boldmath$M$}^{-1} \mbox{ } . 
\end{equation} \noindent{\underline{Proof}} \be ||\mbox{\scriptsize\boldmath$P$}||_{\mbox{\scriptsize\boldmath$N$}} \m = \m \left|\left| \sqrt{\frac{W}{T}} \mbox{\boldmath$M$} \cdot \dot{\mbox{\boldmath$Q$}} \right|\right|_{\mbox{\scriptsize\boldmath$N$}} \m = \m \sqrt{\frac{W}{T}} || \dot{\mbox{\boldmath$Q$}} ||_{\mbox{\scriptsize\boldmath$M$}\mbox{\scriptsize\boldmath$N$}\mbox{\scriptsize\boldmath$M$}} \m = \m \sqrt{\frac{W}{T}} || \dot{\mbox{\boldmath$Q$}} ||_{\mbox{\scriptsize\boldmath$M$}} \m = \m \sqrt{\frac{W}{T}} \sqrt{2 \, T} \m = \m \sqrt{2 \, W} \mbox{ } . \mbox{ } \Box \end{equation} \noindent{\bf Remark 2} This can be envisaged as a `Pythagorean' or `direction-cosines' working. By this, $L_{\mbox{\scriptsize J}}$'s quadraticness in its velocities induces $\scE$'s quadraticness in its momenta. \mbox{ } \noindent{\bf Remark 3} In the most common case of Mechanics (Temporally Relational but Spatially Absolute!), \be \scE \m = \m \mbox{$\frac{1}{2}$} ||\mbox{\boldmath$p$}||_{\mbox{\scriptsize\boldmath$n$}}\mbox{}^2 + V(\mbox{\boldmath$q$}) \m = \m E \mbox{ } , \end{equation} where the \be \mbox{{\it inverse constant-mass matrix} \mbox{ } $\mbox{\boldmath$n$}$ \mbox{ } has components } \mbox{ } n_{IaJb} = \frac{1}{m_I}\delta_{IJ}\delta_{ab} \mbox{ } . \end{equation} \noindent{\bf Remark 4} (\ref{E}) is much more common in the literature in a context in which it is appropriate to consider it a `constant-energy equation'. It however has a distinct interpretation in the current Temporally Relational whole-universe context, as per Sec \ref{EoT}. \mbox{ } \noindent{\bf Remark 5} Taking into account this constraint causes one to pass from considering a point and a vector in $\FrQ$ to considering just a point and a direction. Thus one has not $\FrT(\FrQ)$ but a {\it direction bundle} alias {\it unit tangent bundle}, \be \FrU(\FrQ) \mbox{ } . 
\end{equation} Finally, in the current case, instead of the entities in question squaring to 1 as direction cosines do, the momenta `square' by use of the $\mbox{\boldmath$M$}$ matrix's inner product to the `square of the hypotenuse', $2 \, W$. \subsection{Equations of motion}\label{Evol-Eq} These are \begin{equation} \sqrt{\frac{W}{T}} \left\{ \sqrt{\frac{W}{T}} Q^{\sfA \, \prime} \right\}^{\prime} + \slGamma^{\sfA}\mbox{}_{\sfB\sfC} \sqrt{\frac{W}{T}} Q^{\sfB \, \prime} \sqrt{\frac{W}{T}} Q^{\sfC \, \prime} \m = \m N^{\sfA\sfB}\frac{\pa W}{\pa Q^{\sfB}} \mbox{ } . \label{MRI-ELE} \end{equation} where $\slGamma^{\sfA}\mbox{}_{\sfB\sfC}$ are the Christoffel symbols of the $\FrQ$ geometry. \mbox{ } \noindent Or, in terms of momenta (\ref{MRI-ELE}) becomes \be \sqrt{ \frac{W}{T} } \, {P}^{\sfA \, \prime} + \slGamma^{\sfA}\mbox{}_{\sfB\sfC} P^{\sfB} P^{\sfC} \m = \m - N^{\sfA\sfB} \frac{\pa V}{\pa Q^{\sfB}} \mbox{ } . \end{equation} \subsection{Outline of Manifestly Parametrization Irrelevant counterpart} \noindent{\bf Remark 1} The Author \cite{FileR, ABook} showed moreover that all of the preceding subsection's arguments transcend to Manifestly Parametrization Irrelevant actions and to geometric actions dual thereto. Most details of this are postponed to Article V; the details we presently need are as follows. \mbox{ } \noindent{\bf Structure 1} \noindent The notion of momentum $\mbox{\scriptsize\boldmath$P$}$ carries over to this context. \mbox{ } \noindent{\bf Remark 2} The formula defining generalized momentum does not, however, since (\ref{P-MRI-Def}) includes two parameter-bearing objects -- $\mbox{\boldmath$Q$}^{\prime}$ and $L$. \mbox{ } \noindent{\bf Definition 1} The Manifestly Parametrization Irrelevant formula for momentum is \be \mbox{\scriptsize\boldmath$P$} \:= \frac{\pa \, \d JS}{\pa \, \d \mbox{\boldmath$Q$}} \mbox{ } , \label{TRi-Mom} \end{equation} i.e.\ the partial derivative of the Jacobi--Synge arc element with respect to the change. 
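\mbox{ } \noindent{\bf Remark 3} The homogeneity at work here can be exhibited directly (a supplementary sketch via Euler's homogeneous function theorem). $\d JS$ being homogeneous of degree 1 in the changes gives \be \d Q^{\sfA} \frac{\pa \, \d JS}{\pa \, \d Q^{\sfA}} \m = \m \d JS \mbox{ } , \end{equation} and partially differentiating this once more with respect to $\d Q^{\sfB}$ gives \be P_{\sfB} + \d Q^{\sfA} \frac{\pa^2 \, \d JS}{\pa \, \d Q^{\sfA} \, \pa \, \d Q^{\sfB}} \m = \m P_{\sfB} \mbox{ } , \mbox{ } \mbox{ i.e. } \mbox{ } \underline{\underline{\bslLambda}} \cdot \d \underline{\mbox{\boldmath$Q$}} \m = \m 0 \end{equation} for $\underline{\underline{\bslLambda}}$ the Manifestly Parametrization Irrelevant form of the Legendre matrix. The change $\d \mbox{\boldmath$Q$}$ is thus a null eigenvector thereof, so the momenta cannot be independent. 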
\mbox{ } \noindent{\bf Lemma 4} This formula is equivalent to the standard one. \mbox{ } \noindent{\underline{Proof}} The standard formula's parameter-bearing objects' parameters wash each other out, by the elementary `cancellation of the dots Lemma' \cite{Goldstein}. $\Box$ \mbox{ } \noindent{\bf Example 1} Computing this out, in the case of the Jacobi action, \be \mbox{\scriptsize\boldmath$P$} \m = \m \mbox{\boldmath$M$} \frac{\sqrt{2 \, W} \, \d \mbox{\boldmath$Q$}}{ || \d \mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}} } \mbox{ } . \label{New-P-Compute} \end{equation} \subsection{Model arena examples of Manifest Parametrization Irrelevance} \noindent{\bf Example 1} {\it Jacobi's action principle} for Spatially-Absolute Mechanics has \be \d s_{\mbox{\scriptsize J}} = \sqrt{ m_I\d q^I\d q^I } \end{equation} and \be W(\mbox{\boldmath$Q$}) = E - V(\mbox{\bf q}) \mbox{ } ; \end{equation} this is the example used in the current Section. \mbox{ } \noindent{\bf Example 2} {\it Misner's action principle} \cite{Magic} for Minisuperspace GR is also of this form. The Misner action is indeed a subcase of full Geometrodynamics' Baierlein--Sharp--Wheeler \cite{BSW} action (Sec II.3.1). Various subcases of the Misner action are considered in Sec \ref{MSS-Intro}. \section{Temporal Relationalism: resolution by Mach's Time Principle}\label{MTP} \noindent It is quite natural to ask whether there is a paradox between the Leibnizian Time(lessness) Principle, on the one hand, and, on the other hand, our appearing to `experience time', and this moreover featuring in many Laws of Physics that appear to apply in the Universe. \mbox{ } \noindent{\bf Remark 1} Let us start by pointing to discrepancies between the two contexts. \mbox{ } \noindent{\bf Discrepancy 1} Everyday experience concerns subsystems rather than the whole-Universe setting of this Leibnizian Principle. 
\mbox{ } \noindent{\bf Discrepancy 2} Whereas `time' is a useful concept for everyday experience, the nature of `time' itself is in general less clear. \mbox{ } \noindent{\bf Mach's Time Principle} is that \cite{M} {\it ``It is utterly beyond our power to measure the changes of things by time. Quite the contrary, time is an abstraction at which we arrive through the changes of things."} I.e.\ `{\sl time is to be abstracted from change}'. \mbox{ } \noindent{\bf Remark 2} Indeed, it is change that we directly experience, and temporal notions are merely an abstraction from that, albeit a very practically useful abstraction if carefully chosen. \mbox{ } \noindent{\bf Remark 3} Mach's Time Principle thus {\sl resolves} the Leibnizian Time(lessness) Principle's timelessness at the primary level, by a secondary notion of time, i.e. an {\it emergent time}. \mbox{ } \noindent{\bf Aside 1} See in particular \cite{K92, I93, ABook} for discussion of alternative strategies for handling Temporal Relationalism such as primary-level time (e.g.\ hidden or from appended matter), adhering to timelessness or supplanting time by a notion of history. \subsection{Implementation i) Jacobi--Mach variables} \noindent To implement Mach's Time Principle, we first work in configuration--change variables $(\mbox{\boldmath$Q$}, \d\mbox{\boldmath$Q$})$. Indeed, combining the above Machian connotations with Jacobi's actual formulation of his action principle in terms of these, we can now justify using the further alias {\it Jacobi--Mach variables} for these. \subsection{Implementation ii) Jacobi--Mach formula for momentum}\label{JMM} \noindent{\bf Structure 1} The momentum--velocity relations are to be supplanted by {\it momentum--change relations} \be \mbox{\scriptsize\boldmath$P$} = \mbox{\scriptsize\boldmath$P$}(\mbox{\boldmath$Q$}, \d\mbox{\boldmath$Q$}) \m = \m \frac{\pa \, \d JS}{\pa \, \d \mbox{\boldmath$Q$}} (\mbox{\boldmath$Q$}, \d \mbox{\boldmath$Q$}) \mbox{ } . 
\label{P-MPI-Def} \end{equation} {\bf Example 1} For the Jacobi action itself, \be \underline{\mbox{\scriptsize\boldmath$P$}} \:= \frac{ \pa \, \d J }{ \pa \, \d \underline{\mbox{\boldmath$Q$}} } \m = \m \frac{ \sqrt{2 \, W} }{ \d s } \underline{\underline{\mbox{\boldmath$M$}}} \cdot \d \underline{\mbox{\boldmath$Q$}} \mbox{ } . \end{equation} {\bf Lemma 5} \cite{FileR} Manifestly Parametrization Irrelevant actions imply at least one primary constraint. \mbox{ } \noindent{\underline{Proof}} These are homogeneous of degree 1 in the changes $\d \mbox{\boldmath$Q$}$. \mbox{ } \noindent The $k$ conjugate momenta $\mbox{\scriptsize\boldmath$P$}$ are consequently (\ref{P-MPI-Def}) homogeneous of degree 0 in $\d \mbox{\boldmath$Q$}$. \mbox{ } \noindent I.e.\ functions of at most $k - 1$ ratios of changes. \mbox{ } \noindent There must thus be at least one relation between the momenta themselves (i.e.\ without using the equations of motion). \mbox{ } \noindent But this meets the definition of primary constraint. $\Box$ \mbox{ } \noindent Lemma 3 moreover admits the following Manifestly Parametrization Irrelevant {\underline{Proof}}: \be ||\mbox{\scriptsize\boldmath$P$}||_{\mbox{\scriptsize\boldmath$N$}} \m = \m \left|\left| \frac{\sqrt{2 \, W}}{\d s} \mbox{\boldmath$M$} \cdot \d \mbox{\boldmath$Q$} \right|\right|_{\mbox{\scriptsize\boldmath$N$}} \m = \m \frac{\sqrt{2 \, W}}{\d s} || \d \mbox{\boldmath$Q$} ||_{\mbox{\scriptsize\boldmath$M$}\mbox{\scriptsize\boldmath$N$}\mbox{\scriptsize\boldmath$M$}} \m = \m \frac{\sqrt{2 \, W}}{\d s} || \d \mbox{\boldmath$Q$} ||_{\mbox{\scriptsize\boldmath$M$}} \m = \m \frac{\sqrt{2 \, W}}{\d s} \d s \m = \m \sqrt{2 \, W} \mbox{ } . \mbox{ } \Box \end{equation} \subsection{Implementation iii) Equation of Time interpretation}\label{EoT} \noindent{\bf Remark 1} Working with Temporal Relationalism implementing formulations is but the larger of two parts in handling Temporal Relationalism. 
Approaches using this eventually need to be completed by a Machian `time is to be abstracted from change' step, as follows. \mbox{ } \noindent{\bf Interpretation 1} In the current context of being provided by Temporal Relationalism, the interpretation that $\scE$ is to receive is that of an {\it equation of time}. \mbox{ } \noindent{\bf Interpretation 2} In particular, $\scE$ can be rearranged to give the {\it Jacobi emergent time},\footnote{In this Series of Articles, given a time variable $t$, we denote the `calendar year zero' adjusted version of this by the corresponding oversized $\lt := t - t(0)$.} by integrating \be \frac{\pa}{\pa t^{\mbox{\scriptsize e}\mbox{\scriptsize m}(\mbox{\scriptsize J})}} \:= \sqrt{ \frac{W}{T} } \frac{\pa}{\pa\lambda} \mbox{ } , \label{Ast-0} \end{equation} to obtain \begin{equation} \lt^{\mbox{\scriptsize e}\mbox{\scriptsize m}(\mbox{\scriptsize J})} \m = \m \int \d\lambda \sqrt{ \frac{T}{W} } \m = \m \int \frac{\d s}{\sqrt{2 \, W}} \m = \m \int \frac{||\d \mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}}}{\sqrt{2 \, W}} \mbox{ } . \label{t-em-J} \end{equation} The third form therein -- the Manifestly Parametrization Irrelevant or dual $\FrQ$-geometric formulation -- is moreover manifestly an equation for obtaining time from change in direct compliance with Mach's Time Principle. More precisely, it is a formula for an emergent timefunction as an explicit {\sl functional} of change, schematically \be t^{\mbox{\scriptsize e}\mbox{\scriptsize m}(\mbox{\scriptsize J})} = {\cal F}[\mbox{\boldmath$Q$}, \d \mbox{\boldmath$Q$}] \mbox{ } . \end{equation} This entails interpreting the quadratic constraint not as an energy constraint in the usual sense but as an equation of time \cite{B94I, ARel2, FileR}. `em' stands for (classical) {\it emergent Machian} time, and `$J$' for Jacobi. Following Mach, this is ab initio a highly dependent variable rather than the independent variable that time is usually taken to be. 
This is because this `usual' situation assumes that one knows beforehand what notion of time to use, whereas the current position involves operationally establishing that notion (see the Conclusion for discussion). \mbox{ } \noindent To celebrate this, let us term the type of constraint provided by Temporal Relationalism a {\it Chronos constraint}, denoting all examples of such by \be \Chronos = 0 \mbox{ } . \end{equation} \subsection{Implementation iv) Jacobi--Mach equations of motion}\label{JME} (\ref{MRI-ELE}) also makes reference to times, velocities and Lagrangians. In Manifestly Parametrization Irrelevant form, the equations of motion are, rather, the {\it Jacobi--Mach equations} that follow from Jacobi's arc element in terms of Machian variables: \begin{equation} \d \left\{ \frac{\pa \,\d J}{\pa \,\d \mbox{\boldmath$Q$}} \right\} \m = \m \frac{\pa \, \d J}{\pa \mbox{\boldmath$Q$}} \mbox{ } \Rightarrow \label{JME-1} \end{equation} \begin{equation} \frac{\sqrt{2 \, W}\, \d}{||\d \mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}}} \left\{ \frac{\sqrt{2 \, W} \, \d Q^{\sfA}}{||\d \mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}}} \right\} + \slGamma^{\sfA}\mbox{}_{\sfB\sfC} \frac{\sqrt{2 \, W} \, \d Q^{\sfB}}{||\d \mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}}} \frac{\sqrt{2 \, W} \, \d Q^{\sfC}}{||\d \mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}}} \m = \m N^{\sfA\sfB}\frac{\pa W}{\pa Q^{\sfB}} \mbox{ } . \label{New-Evol} \end{equation} \noindent{\bf Remark 1} (\ref{JME-1}) is an `impulse formulation' of Newton's Second Law. \mbox{ } \noindent{\bf Remark 2} A final move -- useful in practical calculations -- involves supplanting one of the evolution equations by the emergent Lagrangian form of the quadratic constraint, \begin{equation} \mbox{$\frac{1}{2}$} M_{\sfA\sfB}\Last{Q}^{\sfA}\Last{Q}^{\sfB} - W = 0 \mbox{ } . 
\label{ENERGY} \end{equation} \subsection{Discussion} {\bf Interpretation 1} (\ref{t-em-J}) moreover also implements the further principle \cite{MTW} of `choose time so that motion is simplest'. For, via \begin{equation} \Ast \:= \frac{\pa}{\pa t^{\mbox{\scriptsize e}\mbox{\scriptsize m}(\mbox{\scriptsize J})}} \:= \sqrt{ \frac{W}{T} } \frac{\pa}{\pa\lambda} \mbox{ } , \label{Ast} \end{equation} it is also distinguished by its simplification of the momentum--velocity relations and equations of motion from (\ref{P-MPI-Def}) and (\ref{New-Evol}): \begin{equation} P_{\sfA} = M_{\sfA\sfB}\Last Q^{\sfB} \mbox{ } , \end{equation} \begin{equation} D_{\mbox{\scriptsize a}\mbox{\scriptsize b}\sss}\mbox{}^2 {Q}^{\sfA} \m = \m \Last\Last Q^{\sfA} + \slGamma^{\sfA}\mbox{}_{\sfB\sfC}\Last Q^{\sfB}\Last Q^{\sfC} \m = \m - N^{\sfA\sfB} \frac{ \pa V }{ \pa Q^{\sfB} } \mbox{ } . \label{parag} \end{equation} Here, `abs' denotes the standard differential-geometric absolute derivative. (\ref{parag}) is a {\it parageodesic equation} with respect to the kinetic metric (meaning it has a forcing term arising from the conformal $W$-factor). \mbox{ } \noindent{\bf Remark 1} In this way, Temporally-Relational Mechanics' emergent classical Machian time can be seen to amount to be a recovery of Newtonian time, but now on a Temporally-Relational footing. \mbox{ } \noindent{\bf Remark 2} We furthermore split $\mbox{\boldmath$Q$}$ into heavy slow $\mbox{\boldmath$h$}$ and light fast $\mbox{\boldmath$l$}$ parts, leading to an expansion of $t^{\mbox{\scriptsize e}\mbox{\scriptsize m}}$ with $\mbox{\boldmath$h$}$ part as leading term. This is a significant move to make as regards making contact with cosmological modelling. \mbox{ } \noindent{\bf Remark 3} Article V has a stronger form of Remark 1 -- for (Temporally {\sl and Configurationally}) Relational Mechanics -- as well as a more detailed treatment of Remark 2's topic. 
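\mbox{ } \noindent{\bf Example 1} As a minimal check of this recovery (a supplementary single-particle sketch), consider one free particle of mass $m$ in 1-$d$, so $V = 0$ and $W = E$. Then (\ref{t-em-J}) integrates in closed form, \be \lt^{\mbox{\scriptsize e}\mbox{\scriptsize m}(\mbox{\scriptsize J})} \m = \m \int \frac{\sqrt{m} \, \d q}{\sqrt{2 \, E}} \m = \m \sqrt{\frac{m}{2 \, E}} \, \{ q - q_0 \} \end{equation} (for $q$ increasing), so $q$ is linear in the emergent time, with constant momentum $p = m \, \Last{q} = \sqrt{2 \, m \, E}$: uniform Newtonian motion, with the emergent time playing the part usually played by Newton's time. 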
\mbox{ } \noindent{\bf Remark 4} One can finally posit the Euler--Lagrange principle in terms of $t^{\mbox{\scriptsize e}\mbox{\scriptsize m}}$, as an action principle encoding these simplest-form equations. \mbox{ } \noindent{\bf Remark 5} The arguments of Leibniz and Mach are philosophically compelling enough to apply not just to Mechanics but to Physics as a whole. \section{Minisuperspace example}\label{MSS-Intro} \subsection{Overview of the general minisuperspace model}\label{MSS-Overview} The standard ADM-type minisuperspace action including a minimally-coupled scalar field, \be {\cal S} \m = \m \int \d t \sqrt{h} \, \alpha \left\{ \frac{T_{\mbox{\scriptsize G}\mbox{\scriptsize R}}^{\sM\mbox{\scriptsize S}\sS}}{4 \, \alpha^2} + R - 2\slLambda \right\} \mbox{ } , \end{equation} is of Euler--Lagrange type. Here, $\alpha$ is the lapse, $h$ is the determinant of the spatially homogeneous metric $h_{ab}$ and $\slLambda$ is the cosmological constant, and we have, schematically, \be T_{\mbox{\scriptsize G}\mbox{\scriptsize R}}^{\sM\mbox{\scriptsize S}\sS} = M_{AB}\dot{Q}^A \dot{Q}^B \mbox{ } . \end{equation} \noindent{\bf Remark 1} One advantage of minisuperspace models over Mechanics models is that minisuperspace is a restriction of GR, so it inherits some features that Mechanics does not possess. \mbox{ } \noindent{\bf GR-like feature 1)} The kinetic metric $\mbox{\boldmath$M$}$ is now indefinite. \mbox{ } \noindent{\bf GR-like feature 2)} More specific restrictions are imposed on the form of the potential than in RPMs. \mbox{ } \noindent{\bf Remark 2} Minisuperspace modelling is additionally significant for the universe as a whole. Here, GR is more accurate than Mechanics, with even isotropic Minisuperspace constituting a highly accurate model for cosmological purposes. \mbox{ } \noindent See Articles II and III for some ways in which, conversely, suitably-relational Mechanics models have a distinct set of advantages over minisuperspace models. 
\mbox{ } \noindent The corresponding Jacobi-type action for minisuperspace is the {\it Misner-type action} \cite{Magic}, \be {\cal S} \m = \m \mbox{$\frac{1}{2}$} \int \d \lambda \, \sqrt{\overline{T} \, \overline{W}} \mbox{ } \end{equation} (here we use overlines to denote densitization: $\overline{O} = \sqrt{h} O$). Or, in geometrical form, \be {\cal S} \m = \m \mbox{$\frac{1}{2}$} \int \d s \, \sqrt{\overline{W}} \mbox{ } , \end{equation} for minisuperspace kinetic arc element \be \d s = ||\d \mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}} \mbox{ } , \end{equation} \be W(\mbox{\boldmath$Q$}) = R - 2 \, \slLambda \end{equation} (in the undensitized presentation). \subsection{Isotropic model with scalar field}\label{Isotropic} Here the action picks up $T_{\phi}$ and $V(\phi)$ pieces. Specialization to the closed $\mathbb{S}^3$ isotropic model with a single scalar field has \begin{equation} \overline{T} \:= \mbox{exp}(3 \, \slOmega) \left\{ - \dot{\slOmega}^2 + \dot{\phi}^2 \right\} \m , \m \m \overline{W} \:= \mbox{exp}(3 \, \slOmega)\{\mbox{exp}(-2\slOmega) - V(\phi) - 2 \, \slLambda\} \mbox{ } . \label{MSS-Action} \end{equation} Here, the {\it Misner variable} is \begin{equation} \slOmega := \mbox{ln} \, a \mbox{ } , \label{Misner} \end{equation} for $a$ the usual cosmological scale factor. The cosmological constant term $\slLambda$ therein is needed to support \cite{Rindler} the spatially-$\mathbb{S}^3$ FLRW cosmology with scalar field matter in the case in which matter effects are presumed small. \mbox{ } \noindent{\bf Remark 3} On the one hand, from the ADM-type action, varying with respect to the lapse gives the minisuperspace version of the Hamiltonian constraint $\scH$. \mbox{ } \noindent On the other hand, from the Misner-type action, $\scH$ follows as a primary constraint, which is now conceptually a subcase of $\Chronos$. 
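\mbox{ } \noindent Note that the working of Lemma 3 only ever uses $\mbox{\boldmath$M$} \mbox{\boldmath$N$} \mbox{\boldmath$M$} = \mbox{\boldmath$M$}$ and never the signature of $\mbox{\boldmath$M$}$ (a supplementary observation), so \be ||\mbox{\scriptsize\boldmath$P$}||_{\mbox{\scriptsize\boldmath$N$}}\mbox{}^2 \m = \m \frac{2 \, W}{\d s^2} \, ||\d \mbox{\boldmath$Q$}||_{\mbox{\scriptsize\boldmath$M$}}\mbox{}^2 \m = \m 2 \, W \end{equation} continues to hold for minisuperspace's indefinite kinetic metric, with both sides now capable of taking either sign. 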
\mbox{ } \noindent The restriction to Minisuperspace of the GR Hamiltonian constraint $\scH$ now arises as a primary constraint, in an `indefinite triangle' version of Sec \ref{Prim-TR}'s `Pythagorean' or `direction-cosines' working. [Readers will already be familiar with `indefinite triangles' from studying SR or the hyperbolic functions.] \mbox{ } \noindent As we shall see in Article II, both of these manoeuvres carry over to full GR. \mbox{ } \noindent{\bf Remark 4} The constraints $\scH$ and $\scE$ thus both illustrate that the distinction between primary and secondary constraints is artificial, in so far as the two are interchangeable under reformulation. \mbox{ } \noindent{\bf Structure 2} The specific form taken by the Hamiltonian constraint is \begin{equation} \scH \:= \mbox{$\frac{1}{2}$} \, \mbox{exp}(-3 \, \slOmega) \big\{ - p_{\slOmega}^2 + p_{\phi}^2 + \mbox{exp}(6 \, \slOmega)\{V(\phi) + 2 \, \slLambda - \mbox{exp}(-2\slOmega)\} \big\} \m = \m 0 \mbox{ } . \label{MSS-H} \end{equation} \noindent{\bf Structure 3} This can be rearranged to give the classical Machian emergent time, \begin{equation} \lt^{\mbox{\scriptsize e}\mbox{\scriptsize m}} \m = \m \int \sqrt{ \frac{- \d\slOmega^2 + \d\phi^2}{\mbox{exp}(-2\slOmega) - V(\phi) - 2 \, \slLambda} } \mbox{ } . \label{Mini-tem} \end{equation} \noindent{\bf Modelling Assumption 1)} The matter physics is light and fast ($l$) as compared to the gravitational physics being heavy and slow ($h$). \mbox{ } \noindent{\bf Modelling Assumption 2)} More conventionally, the scalefactor and the homogeneous matter mode are jointly taken to be $h$, with only the anisotropy below or Article XI's inhomogeneity playing subsequent $l$ roles. \subsection{Anisotropic vacuum model}\label{MSS-Aniso} {\bf Structure 1} The vacuum anisotropic cases \cite{mini, Magic, Ryan} have configuration space metric \begin{equation} \d s^2 = - \d \slOmega^2 + \d\beta_+^2 + \d\beta_-^2 \mbox{ } . 
\label{Bianchi-A} \end{equation} These models are potentially of great importance through being conjectured to be GR's generic behaviour near cosmological singularities \cite{BKL}. \mbox{ } \noindent{\bf Structure 2} The potential term has the following specific form inherited from the densitized GR Ricci scalar potential term, \begin{equation} \overline{V} = \mbox{exp}(\slOmega)\{V(\mbox{\boldmath$\beta$}) - 1\} \mbox{ } , \mbox{ } \mbox{ for} \label{B-IX-1} \end{equation} \begin{equation} V(\mbox{\boldmath$\beta$}_{\pm}) = \frac{\mbox{exp}(-8 \, \beta_+)}{3} - \frac{4 \, \mbox{exp}(-2 \, \beta_+)}{3} \mbox{cosh}\,(2\sqrt{3} \, \beta_-) + 1 + \frac{2\,\mbox{exp}(4 \, \beta_+)}{3}\{\mbox{cosh}(4\sqrt{3} \, \beta_-) - 1\} \mbox{ } : \label{B-IX-2} \end{equation} an open-ended well of equilateral triangular cross-section. \mbox{ } \noindent{\bf Structure 3} The Hamiltonian constraint is now \begin{equation} \scH \m = \m - p_{\slOmega}^2 \, + \, p_+^2 \, + \, p_-^2 \, + \, \mbox{exp}(4 \, \slOmega)\{V(\beta_{\pm}) - 1\} \mbox{ } . \end{equation} \noindent{\bf Structure 4} This can be rearranged to give the classical Machian emergent time, \begin{equation} \lt^{\mbox{\scriptsize e}\mbox{\scriptsize m}(\mbox{\scriptsize J})}_{\mbox{\scriptsize anisotropic-MSS}} \m = \m \int \frac{ \mbox{exp}(\slOmega) \sqrt{- \d\slOmega^2 + \d\beta_-^2 + \d\beta_+^2} } { \sqrt{ 1 - V(\beta_{\pm}) } } \mbox{ } . \label{Bianchi-IX-tem} \end{equation} \noindent{\bf Modelling Assumption 3)} The anisotropy physics is light and fast ($l$) as compared to the scalefactor physics being heavy and slow ($h$). \mbox{ } \noindent{\bf Remark 1} This and the previous subsection's models can readily be combined, including viewing slight anisotropy as a toy model for slight inhomogeneity. \mbox{ } \noindent{\bf Remark 2} Classical emergent Machian time from GR's Hamiltonian constraint amounts to a relational recovery of GR's version of proper time. 
\mbox{ } \noindent In cases dominated by scalefactor (and other homogeneous modes') dynamics, moreover, classical emergent time amounts to a relational recovery of cosmic time to leading order. \section{Conclusion} \noindent 1) We have considered a first aspect of Background Independence, namely {\it Temporal Relationalism}: the Leibnizian `there is no time for the universe as a whole at the primary level'. This is implemented into Physical Theory \mbox{ } \noindent i) by involving neither extraneous time -- such as Newton's -- nor any extraneous time-like variable, e.g.\ GR's lapse $\upalpha$. \mbox{ } \noindent ii) by additionally not involving any label-time parameters either. \mbox{ } \noindent Manifestly Reparametrization Invariant actions, such as Jacobi's for Mechanics or Misner's for minisuperspace GR, implement i) implicitly. Jacobi--Synge actions are the totally general such for finite theories: homogeneous-linear in parameter-velocities $\d \mbox{\boldmath$Q$} / \d \lambda$, with Jacobi and Misner's cases attaining this via a `square root of a homogeneous-quadratic kinetic term' factor. Full (or indeed restricted but inhomogeneous) GR parallels this, but requires more work, with the Baierlein--Sharp--Wheeler (BSW) action doing a partial job but Problem of Time facet interferences requiring the further relational reformulation of Article VI. Namely, the BSW action also has a bare `square root of a homogeneous-quadratic kinetic term' factor, but this is broken by correction terms involving the shift $\underline{\upbeta}$. \mbox{ } \noindent Manifestly Parametrization Irrelevant actions implement ii) explicitly. These moreover have the further benefit of being dual to geometrical actions: a viewpoint that does not even make reference to what is irrelevant (i.e.\ parametrization). The current Article's final formulation of this second attribute is thus by use of geometrical actions. These involve change variables $\d \mbox{\boldmath$Q$}$ rather than parameter-velocity ones. 
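\mbox{ } \noindent That homogeneous-linearity suffices for Manifest Reparametrization Invariance can be checked in one line (a supplementary sketch): for any orientation-preserving reparametrization $\lambda \longrightarrow \mu(\lambda)$, \be \int \d \lambda \, L\left( \mbox{\boldmath$Q$}, \frac{\d \mbox{\boldmath$Q$}}{\d \lambda} \right) \m = \m \int \d \mu \, \frac{\d \lambda}{\d \mu} \, L\left( \mbox{\boldmath$Q$}, \frac{\d \mbox{\boldmath$Q$}}{\d \mu} \frac{\d \mu}{\d \lambda} \right) \m = \m \int \d \mu \, L\left( \mbox{\boldmath$Q$}, \frac{\d \mbox{\boldmath$Q$}}{\d \mu} \right) \mbox{ } , \end{equation} the last equality following from the homogeneous-linearity of $L$ in its velocity slot. 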
\mbox{ } \noindent 2) A basic argument of Dirac establishes that Manifestly Reparametrization Invariant actions imply primary constraints. We uplift this argument to our preferred geometrical action setting. This means that Temporal Relationalism is a type of constraint provider, the next Article's Configurational Relationalism constituting another such. For Jacobi, BSW-type and Misner actions, homogeneous-quadraticity of the action leads to a constraint quadratic in the momenta. These examples' constraints are, respectively, the object elsewhere regarded as an energy constraint $\scE$, the famous GR Hamiltonian constraint $\scH$ and its minisuperspace restriction. \mbox{ } \noindent 3) In the present setting, moreover, these are to be viewed as equations of time, and thus collectively denoted by $\Chronos$. This interpretation refers to rearranging $\Chronos$ to implement Mach's Time Principle: that time is to be abstracted from change. This occurs, literally, via substituting the momentum-change relations -- the Temporally Relational replacement for momentum--velocity relations -- into $\Chronos$. This yields classical Machian emergent time. \mbox{ } \noindent{\bf Remark 1} The title of BSW's paper, ``{\it Three-dimensional geometry as carrier of information about time}", supports the above duality. Upon subsequently passing to the relational GR action, this can moreover be rephrased in the temporally Machian form `geometry and change of geometry as carrier of information about time'. GR's spatial geometries are moreover but an example of $\FrQ$ geometry. This can thus be further generalized as regards range of theories, to `{\sl Configuration and change of configuration as carrier of information about time}'. \mbox{ } \noindent 4) This is a major part of how the discrepancy between Leibnizian timelessness and us apparently experiencing time is resolved. 
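The Mechanics instance of the chain 2)--3) can be exhibited in full in a few lines. This is a schematic sketch: the configuration space metric is taken to be Euclidean, with masses absorbed into the coordinates, so that no indices clutter the formulae:

```latex
% Geometrical Jacobi action for finite Mechanics (Euclidean kinetic metric
% assumed; masses absorbed into the coordinates for simplicity):
\begin{equation}
S \m = \m \int \sqrt{ 2 \{ E - V(\mbox{\boldmath$Q$}) \} } \,
       || \d \mbox{\boldmath$Q$} || \mbox{ } .
\end{equation}
% The momentum--change relations this action provides are
\begin{equation}
\mbox{\boldmath$P$} \m = \m \sqrt{ 2 \{ E - V \} } \,
\frac{ \d \mbox{\boldmath$Q$} }{ || \d \mbox{\boldmath$Q$} || }
\mbox{ } \mbox{ whence } \mbox{ }
\scE := \frac{ || \mbox{\boldmath$P$} ||^2 }{2} + V - E = 0
\end{equation}
% holds identically: a primary constraint, quadratic in the momenta.
% Rearranging Machianly,
\begin{equation}
\d t^{\mbox{\scriptsize e}\mbox{\scriptsize m}} \m = \m
\frac{ || \d \mbox{\boldmath$Q$} || }{ \sqrt{ 2 \{ E - V \} } }
\mbox{ } \mbox{ so that } \mbox{ }
\mbox{\boldmath$P$} \m = \m \frac{ \d \mbox{\boldmath$Q$} }
                                 { \d t^{\mbox{\scriptsize e}\mbox{\scriptsize m}} } \mbox{ } .
\end{equation}
```

In terms of $t^{\mbox{\scriptsize e}\mbox{\scriptsize m}}$, the equations of motion then take Newtonian form, which is the simplification referred to in Example 1 below.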
Namely, that Leibnizian timelessness, at the primary level, is Machianly mitigated at the secondary, i.e.\ emergent, level. \mbox{ } \noindent 5) The other major part of this resolution follows from noticing that what we experience are subsystems, rather than the `universe as a whole'. \mbox{ } \noindent{\bf Example 1} In the case of Mechanics, this classical Machian emergent time moreover simplifies the theory's equations down to coincide with Newton's. In this manner, we arrive at a recovery of what is `computationally, to good accuracy' Newtonian time, but now resting on relationally-acceptable foundations and possessing an emergent character. I.e.\ {\sl a relational recovery of Newtonian time}. \mbox{ } \noindent The rotation of the Earth was long held to `read off' Newtonian time. This was subsequently found to be inaccurate to 1 part in $10^8$. \mbox{ } \noindent A Machian view of this is as follows. The rotation of the Earth `reads off' classical emergent Machian time to the stated accuracy. The notion of replacing one subsystem by another to increase accuracy of time abstracted has moreover a solid grounding within the Machian perspective. Namely, one knows here to ask `which change' is `time' to be abstracted from, so that this `time' meets a required standard of accuracy. In the present example, for instance using the Earth--Moon--Sun system instead of just the Earth takes one past the stated bound on accuracy. \mbox{ } \noindent 6) In general, as Chapter 15 of \cite{ABook} argues, `sufficient totality of locally relevant change (STLRC)' wins out over other authors' `any change' and `all change' positions on `which change'. \mbox{ } \noindent The STLRC and `all change' positions moreover lie within the astronomers' {\it ephemeris time} conception \cite{Clemence}, in which other solar system bodies contribute to the timestandard. 
STLRC furthermore chooses the ephemeris time conception's further features over `all change's' further Leibnizian-but-now-impractical tenets. In STLRC, all changes are given the opportunity to contribute, but only those found to be locally relevant are kept in the calculation. This is significant since, firstly, `all change' would include many poorly-measurable or even unobservable changes. Secondly, `all change' moreover does not translate well to features of relativistic and quantum paradigms not anticipated by Leibnizian thinking (see Part I and Chapter 15 of \cite{ABook}). By this, the time abstracted from STLRC merits the name {\it GLET: generalized local ephemeris time}. \mbox{ } \noindent 7) This emergent time is {\sl provided by} the system; in this way, it complies with Mach's Time Principle. In contrast, the notion of time usually assumed as an independent variable is neither Leibnizian nor Machian. Once the above time has been abstracted from change, it {\sl is} a convenient choice for (emergent) independent variable. A caveat on `system' here is that nature, not us, chooses what the system is. So e.g.\ if we wish to study a pendulum, our wish to study that pendulum has to include whatever else that pendulum cannot be isolated from, for instance that it is being studied in the terrestrial reference system. By this, it is the Earth that is overwhelmingly locally relevant, and not the pendulum itself. \mbox{ } \noindent{\bf Remark 2} While nowadays time is read off atomic clocks, these are still to be interpreted as clock hands that are in regular need of calibration checks against solar system observations. Thus it is clear that there is a sense in which atomic clocks have not supplanted ephemeris time type concepts. \mbox{ } \noindent This `reading hand' versus `calibration' distinction is moreover illustrative of how conceptual errors can arise from not separating out distinct notions within timekeeping.
\mbox{ } \noindent{\bf Remark 2} Whereas such an ephemeris time has long been in use, its Machian character has only relatively recently been remarked upon \cite{B94I, ARel2, ABook}. \mbox{ } \noindent 8) Once armed with the above account, that quantum GR exhibits primary-level timelessness for the Universe as a whole is {\sl expected}, rather than {\sl surprising}. This refers to the {\bf Frozen Formalism Problem} facet of the Problem of Time. Namely that the Wheeler--DeWitt equation \cite{Battelle, DeWitt67} \be \widehat{\scH} \Psi = 0 \mbox{ } \end{equation} -- the quantum version of the GR Hamiltonian constraint $\scH$, where $\Psi$ is the wavefunction of the universe -- is, at least at first sight, a time-independent Schr\"{o}dinger equation arising at a juncture at which we are accustomed to seeing time-dependent Schr\"{o}dinger equations. This is now accounted for as an ab initio position resulting from the Temporal Relationalism aspect of Background Independence, now moreover preceded by awareness of how timelessness is already also ab initio classically present and then resolved in a Machian manner. While the classical emergent Machian time {\sl fails} to carry over to the quantum level, this is for a further Machian reason. Namely that now {\sl quantum} change has to be given an opportunity to contribute, which can e.g.\ {\sl somewhat} change the form taken by the emergent Machian time in semiclassical models relative to that in the corresponding classical models. Thus the same conceptualization carries over, but the form taken by the explicit Machian realization requires adjustment, in accord with the `GLET is to be abstracted from STLRC' perspective. See Article IV for brief further details, or Part III of \cite{ABook} for a full account.
\mbox{ } \noindent{\bf Remark 3} Toward combined rather than piecemeal resolutions of facets, let us make our first distinction between a Small Method -- for a piecemeal facet -- and a Large Method: suitable for combination to form A Local Resolution of the Problem of Time. \mbox{ } \noindent Our Small Method is to use a Jacobi geometrical action (or its field-theoretic equivalent and/or generalization to a Jacobi--Synge action when required). \mbox{ } \noindent In contrast, our Large Method is to \be \mbox{\sl take Jacobi's action principle seriously enough to rederive the rest of Physics concordantly}. \label{Jac-Ser} \end{equation} Doing this carries a guarantee of {\sl remaining within} Temporal Relationalism as one successively deals with each further local facet. \mbox{ } \noindent This turns out to consist of the following. \mbox{ } \noindent 1) Having to reconceptualize and rederive around half of the Principles of Dynamics material used, thus forming TRiPoD: the Temporal Relationalism implementing Principles of Dynamics. The other half turns out to be already-TRi, which is remarkably interesting given its strong overlap with successful approaches to constrained systems and with those parts of the Principles of Dynamics that are direct precursors of Quantum Theory's structures. This material is spread out over Articles I to X, and summarized in Fig XIII.6. \mbox{ } \noindent 2) TRiFol: a TRi reformulation of foliation kinematics, as per Article XII. \mbox{ } \noindent 3) TRiCQT (Canonical Quantum Theory) and TRiPIQT (Path Integral Quantum Theory); see Part III of \cite{ABook}. \mbox{ } \noindent The main virtue of the TRi formalism is that keeping one's calculations within this formalism prevents the Frozen Formalism Problem inadvertently re-entering while one is subsequently addressing further facets.
\mbox{ } \noindent The vast difference in size between our Small and Large methods is a reflection of how much more work it takes to resolve facet interferences as compared to just piecemeal facets. For sure, the current Series provides Large Methods for all local facets of the Problem of Time at the classical level. \mbox{ } \noindent{\Large\bf Acknowledgements for the whole Series \normalfont\normalsize} \mbox{ } \noindent Foremost, I thank the people I am close with. Without your support, I could never have written this. \mbox{ } \noindent For support when I was younger, I also thank in particular my father O and my friend L. \mbox{ } \noindent I thank Chris Isham in particular for discussions and support over the years in which I worked on this Series of Articles. I also thank my PhD supervisors Malcolm MacCallum and Reza Tavakol, and subsequently Enrique Alvarez, Jeremy Butterfield, Marc Lachi$\grave{\mbox{e}}$ze-Rey and Don Page for support with my career. For a range of discussions, comments, proofreading and thoughts, hosting, or support with my career, I also thank Ozgur Acik, Jeremy Butterfield, Malcolm MacCallum, Przemyslaw Malkiewicz, Don Page, Christopher Small, S and the others, Reza Tavakol, various participants at the Centenary meeting on Noether's Theorems, and numerous proofreaders. \mbox{ } \noindent Most of the thinking and calculating for this Series of Articles was done while at Peterhouse -- the University of Cambridge College -- as regards Articles I to VI, Universidad Autonoma de Madrid for Articles VII and VIII, Queen Mary: University of London for Article IX, and Universit\'{e} Paris VII for Articles X to XIII, where I held a Foundational Questions Institute (fqXi.org) grant. I also thank fqXi for a number of travel mini-grants. This work could not have been carried out if Cambridge's Moore Library (Mathematics) did not have 24/7 access, a matter in which Professor Stephen Hawking was pivotal.
I generally encourage 24/7 access to academic libraries elsewhere as in the best interests of foundational research actually getting done. \mbox{ } \noindent I also thank my friends $11A$, $3B$, $4C$, $D$, $4E$, $2F$, $G$, $2H$, $2J$, $2K$, $3L$, $3M$, $O$, $4R$, $7S$, $T$ and $W$ for keeping my spirits up at many points in this long journey. \mbox{ } \noindent I finally wish to pay my respects to the above-mentioned Professor Stephen Hawking, as well as to Professor John Stewart, who had strongly encouraged my study of Lie derivatives.
\section{ I. Introduction} This is a tribute to Gerry Brown who was our mentor, collaborator and good friend for a long long time, from 1964 to 2013 for TTSK, and from 2003 to 2013 for JWH. We remember him and our times together very well, and here we shall review briefly two subjects, `core polarization' and `Brown-Rho scaling', on which we have collaborated extensively over the years. Before doing so, we think we should first describe a recent book project \cite{brownkuobook} we worked on together: Gerry talked with me (TTSK) one day in early 2007 about writing a book on `Nucleon-Nucleon Interactions and the Nuclear Many-Body Problem'. Gerry's idea was to put together a reprint volume with a few introductory chapters and a collection of our published works spanning a period of about forty years. I of course was very happy about the idea, and soon afterward World-Scientific (WS) agreed to support the project. Gerry then asked JWH (then finishing up his Ph.D.\ with Gerry) and Prof.\ Sabine Lee (Univ.\ of Birmingham) to help with the writing, typesetting (LaTeXing) and organization of the book. (Gerry often said ``The young are supposed to help the old.'') As usual, Gerry was a man of quick action. In the summer of 2007 we had meetings at Gerry's home, and a tentative plan of `who-does-what' was laid out. So we started to work. Gerry in fact quickly wrote many neatly hand-written pages and sent us copies of them. Gerry often said he had only a 5-dollar pocket calculator, a sort of excuse he used to avoid doing calculations on computers and learning to use LaTeX.
So Sabine nicely typeset all of Gerry's hand-written notes, and attached below is what Gerry wrote, in his unique style, in a memorable preface for the book: \vskip 0.2cm { {\bf Preface: Why now is a good time to write about the Nucleon-Nucleon Interaction and the Nuclear Many Body Problem}} \vskip 0.1cm ``Why do two old nuclear physicists, with the help of a junior colleague and a historian, now write about the nucleon-nucleon interaction to which they have devoted such a large portion of their research lives previously? The immediate explanation is straightforward. The main problems at the level of meson exchange physics have been solved. We now have an effective nucleon-nucleon interaction $V_{low - k}$, pioneered in a renormalization group formalism by several of us at Stony Brook and our colleagues at Naples, which is nearly universally accepted as the unique low-momentum interaction that includes all experimental information to date. Why does this make reconstructing the history of our understanding of the nucleon-nucleon interaction necessary or useful? There are several good reasons for engaging in a historical appreciation of the progression of research and the developments leading to our current knowledge in this subject area. First, our understanding is based on a multi-step development in which a variety of different scientific insights and a wide range of physical and mathematical methodologies fed into each other. This is best appreciated by looking at the different `steps along the way', starting with the pioneering work by Brueckner and collaborators, which was just as necessary and important as the insightful, masterly improvements to Brueckner's approach by Hans Bethe and his students. The main achievement in the work of Brueckner and Bethe et al.\ was the `taming' of the hard core of the nucleon-nucleon potential, which has since been understood to result from the exchange of the $\omega$-meson, a `heavy' photon.
The off-shell effects which bedevilled Bethe's work that ended up in the 1963 Reference Spectrum Method were treated relatively accurately by introducing an energy gap between initial bound states and intermediate states. Kuo and Brown showed that this would be accurately handled by taking the intermediate states to be free; i.e.\ by just using Fourier components, as now done in the effective field theory resulting from the renormalization group formalism. Well, one can say to the young people that this is `much ado about nothing'. In fact, long ago, when Gerald E.\ Brown was Professor at Princeton, Murph Goldberger (turning on its head Winston Churchill's famous quote about the R.A.F.\ during the Battle of Britain) claimed in reference to the nuclear interaction that `never have so many contributed so little to so few.' Admittedly, at the time it was hard going. If we had a unique set of interactions, one for each angular momentum, spin and isospin channel, it could be argued that it would be justified to stop there. However, since Brueckner came on the scene, Bethe reorganized the theory, Kuo and Brown wrote their paper that prepared the effective field theory by using the Scott-Moszkowski separation method, and chiral invariance hit the scene. Chiral invariance does not do anything for Yukawa's pion exchange, because the pion gets most of its mass from somewhere outside of the low-energy system, maybe by coupling to the Higgs boson. But the masses of the other mesons drop with increasing density, like $$ m_\rho^* \cong m_\rho (1-0.2 n/n_0) \hspace{2cm} \mbox{``Brown/Rho scaling''} $$ where $n$ is the density and $n_0$ is nuclear matter saturation density. The change in masses of the scalar-$\sigma$ and vector-$\omega$ mesons pretty much cancel each other in effects--the scalar exchange giving attraction and the vector repulsion.
However, in the tensor force, the $\rho$-exchange `beats' against the pion exchange, the former cancelling more and more of the latter as the density increases. This decrease with density of the tensor force interaction has important effects: \begin{enumerate} \item It is responsible for saturation in the nuclear many-body system. \item It converts an around hour-long carbon-14 lifetime from a superallowed transition in the Wigner $SU(4)$ for $p$-shell nuclei into an archeologically long 5,700-year transition. \end{enumerate} `Brown/Rho scaling' is also important for neutron stars and may play an important role in turning them into black holes and for `cosmological natural selection'. It must be admitted that the same effects could be given by three-body forces, but Brown/Rho scaling has a deep connection with chiral symmetry restoration. We shall review these facets in detail. Undoubtedly, much more is to come, but we believe that now is a good time to summarize the interesting history of the nucleon-nucleon interaction.'' \vskip 0.2cm Indeed Gerry has devoted a large portion of his research life to nuclear physics, especially to questions related to the nucleon-nucleon interaction and nuclear many-body problem. He has also devoted a large portion of his life to guiding, helping and taking care of his students, postdocs and colleagues (including both of us). Gerry had two operations in 2008, and was not in good health afterwards. With Gerry ill, we worked hard together with Sabine to finish the book, which was published by WS in early 2010. Many of Gerry's friends and colleagues visited him regularly while he was recuperating. I (TTSK) and my wife Annette also visited him often (about once or more each month), and it happened that we saw him in the afternoon of May 27, 2013, just four days before he passed away. He was particularly cheerful that afternoon, smiling, tasting a pastry and making a typical Gerry-style joke. 
In the following, let us describe briefly the two research projects we have worked on together. \section{II. Core polarization} In September 1964, I (TTSK) went to Princeton as an instructor (which is a research associate with minor teaching duties) to work with Gerry. I only learned much later that Gerry knew my advisors Elizabeth Baranger (Univ.\ of Pittsburgh) and Michel Baranger (Carnegie Mellon Univ.) very well, and that they had arranged for me to work with Gerry. In fact, they were all close associates of Hans Bethe. I went to the Palmer Physical Laboratory one day and met Gerry for the first time. I still remember well when he introduced Chun-Wa Wong to me and said ``let me introduce my secret weapon to you'' (so I realized from that first interaction that Gerry had a good sense of humor and liked to make jokes). Chun-Wa was also a research associate of Gerry; he was a graduate student at Harvard and completed his thesis with Gerry in Copenhagen. I (JWH) had a similar experience. I met Gerry for the first time in the spring of 2003 when I was a first-year graduate student at Stony Brook University. My brother Jason (who was already working in the nuclear theory group with TTSK) introduced me one day to Gerry, whose first words to me came as a surprise: ``Is your middle name William?'' After I answered ``yes'', Gerry smiled and said ``My grandson's name is Jeremy William. Why don't you come work with me this summer?'' That was the easiest job interview of my life, but the challenging part came later as I tried to keep up with Gerry's diverse research interests, from hadronic physics, to nuclear astrophysics, to low-energy nuclear structure theory. Soon afterwards, I began to focus on BR scaling \cite{holt04} and CP \cite{holtkbb07}. At Princeton Gerry had a rather large and active nuclear theory group. Senior faculty members were Gerry and Ben Bayman.
To the best of my (TTSK) memories, the research associates were (in alphabetical order, here and later) J.\ Blomquist, J.\ Flores, W.\ Friedman, A.M.\ Green, A.\ Kallio, T.T.S.\ Kuo, H.\ Picker, A.\ Lande, P.\ Mello, G.\ Ripka, C.W.\ Wong, and L.\ Zamick. Visiting faculty members were A.\ Arima, L.\ Castillejo, I.\ Talmi, H.\ McManus, M.\ Moshinsky, H.\ Lipkin and P.\ Zilser. Gerry had about ten graduate students during his four-year stay at Princeton. I only remember a few of them, namely G.\ Bertsch, M.Y.\ Chen, W.\ Gerace, H.\ Mavromatis, J.\ Noble and I.\ Sharon. I remember Gerry once said ``Bertsch was too fast: I gave him a problem and he would disappear for a couple of weeks and come back with the solution. So I soon ran out of problems. I let him graduate.'' Princeton also had a very active nuclear experimental group, which consisted of (as far as I remember) R.\ Sherr, J.\ McCullen, O.\ Ames and G.\ Garvey. Gerry's nuclear theory group worked closely with the Rutgers nuclear physics group, which was a large group with G.\ Temmer, A.\ Covello, G.\ Sartoris and others. Every Monday afternoon we went to Rutgers (about a 20-mile drive from Princeton) to attend their weekly seminar. Every Thursday evening they came to Princeton's `bull session', a Gerry specialty. Usually we all went to have dinner together, and then came to the seminar room at about 7 pm, starting the bull session which was an informal seminar with lots of discussions and `no-time-limit'. Typically it ran for about 3 hours or more till about 11 pm. In those years, computers were still `primitive'; we used card punchers to punch cards and submit jobs (boxes of computing cards) at the computing center. (Gerry often mentioned that in his graduate-student days the computers used paper tapes as inputs.) So after the bull session, the `young' postdocs almost all first drove to the computing center and submitted some jobs before going home. 
The so-called Kuo-Brown matrix elements \cite{kuobrown66,brownkuo67,kuobrown68} were first developed at that time in order to provide a microscopic basis for the nuclear shell model. We shall only briefly describe them, as more detailed discussions about them have been given by Osnes and by Coraggio (see contributions by them in this memorial volume). The NN interaction and the nuclear many-body problem are both difficult problems. Gerry recently wrote in his book \cite{brownkuobook}: ``One of the authors, Gerry Brown, arrived at Princeton in early September, 1964. The next morning, as he came to the Palmer Physics Laboratory, Eugene Wigner, who just preceded him, opened the door for him (It was a real contest to get ahead of Eugene and open the door for him, which very few succeeded in doing.) Eugene asked Gerry, as he went into the building, what he planned to work on. `I plan to work out the nucleon-nucleon interaction in nuclei.' Eugene said that it would take someone cleverer than him, to which Gerry replied that they probably disagreed what it meant to `work out'. Gerry wanted to achieve a working knowledge, sufficiently good to be able to work out problems in nuclear physics...'' It indeed turned out to be very hard to `work out' the nucleon-nucleon (NN) interaction in nuclei in a fundamental way, and a more feasible and physically-motivated approach is to compute instead an `effective' or `renormalized' nucleon-nucleon interaction. After I (TTSK) arrived at Princeton, Gerry asked me to study Brueckner theory \cite{brownkuobook}, which was a new and difficult subject for me at that time. I am still indebted to Chun-Wa for helping me greatly in learning the theory, which was originally designed for nuclear matter but which Gerry intended to apply to finite nuclei in a shell model approach. Consider as an example the nucleus $^{18}O$ which has 18 nucleons.
In the shell-model effective theory for this nucleus, the problem is reduced from one of `eighteen' interacting nucleons to a simplified one of just `two' valence nucleons residing outside an inert $^{16}O$ core. This is a simple, smart and bold step, and I think it is of the type of physics that Gerry appreciated. But the two nucleons are in fact renormalized quasi-nucleons, which are different from the original bare ones. The interaction for bare nucleons is $V_{NN}$ while that for the quasi-nucleons is $V_{eff}$. We first tried $V_{eff}=G$ where $G$ is the Brueckner G-matrix and used it to calculate the low-lying spectra of $^{18}O$ and $^{18}F$. The calculated spectra were however in poor agreement with experiment, so Gerry then suggested that we take the effective interaction as \begin{equation} V_{eff}=G+G_{3p1h}, \end{equation} where $G$ represents the direct interaction of the valence nucleons via a $G$-matrix interaction. The term $G_{3p1h}$ denotes the second-order core-polarization diagram shown in Fig.\ 1(a). It was to our great joy that the inclusion of $G_{3p1h}$ greatly improved our results. For example, it significantly lowered the lowest $0^+$ state and raised a group of high-lying states of $^{18}O$, making the calculated spectrum in good agreement with experiment. The matrix elements based on $G+G_{3p1h}$ are generally referred to as the Kuo-Brown (KB) matrix elements \cite{brownkuobook,kuobrown66,brownkuo67,kuobrown68} and have been widely used in nuclear shell model calculations for decades with remarkably successful results (see for example Refs.\ \cite{poves81,wild84,brown88}). NN interactions are short-ranged, as is the $G$-matrix interaction.
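The structure of the $G_{3p1h}$ term can be indicated schematically. The display below is not from the original text: angular-momentum couplings, exchange terms and overall factors are suppressed, so it is a sketch of the diagram's content rather than the working formula:

```latex
% Schematic second-order core-polarization (3p1h) matrix element.
% a,b,c,d: valence (sd-shell) orbits; p: particle states above the core;
% h: hole states in the 16-O core; omega_0: unperturbed starting energy.
% Angular-momentum couplings and exchange terms are suppressed.
\begin{equation}
\langle a b | G_{3p1h} | c d \rangle \sim \sum_{p,h}
\frac{ \langle a p | G | c h \rangle \, \langle h b | G | p d \rangle }
     { \omega_0 - ( \epsilon_p - \epsilon_h ) } \mbox{ } ,
\end{equation}
```

i.e.\ a particle--hole excitation of the core propagating between two short-range $G$-matrix vertices, which is how a long-range effective interaction between distant valence nucleons arises.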
In the 1960s it was found mainly in Copenhagen (led by Bohr and Mottelson) that to describe empirical nuclear properties we need also a long-range effective $P_2$-force of the form \cite{brownkuobook,ringschuck80} \begin{equation} V_{P_2}=-\chi \Sigma _{ij} \Sigma _m {r_i^2} {r_j^2} Y_{2,m}(\theta _i,\phi_i) Y_{2,-m}(\theta _j,\phi_j)(-1)^m. \end{equation} This empirical force was in fact well reproduced by the core-polarization diagram $G_{3p1h}$ \cite{brownkuo67}, which allowed two nucleons far from each other to interact indirectly through excitations of the core. I (TTSK) remember well a cartoon-like picture of the core polarization effect drawn by Gerry and Akito Arima, where two satellites orbit near the Earth's surface. They are far away from each other on opposite sides of the Earth, so that they can hardly interact with each other directly. But they can interact with each other via the tidal waves induced by them. \begin{figure} \includegraphics[width=7cm]{ConfProc1.eps} \caption{Core polarization diagrams.} \label{fig.1} \end{figure} The success of the Kuo-Brown interactions has led to a number of further studies \cite{barrettk,jensen95,corag09}, but since the KB core polarization diagram is only a second-order one, a natural question remained: how significant are the higher-order diagrams? This is a very important question, and we (the Holt brothers, former student Scott Bogner, TTSK and Gerry) have indeed made extensive efforts in answering it as reported in Ref.\ \cite{holtkbb07}. In comparison with our earlier calculation \cite{brownkuobook}, we have made several improvements: (i) We employ the renormalization group (RG) low-momentum nucleon-nucleon interaction $V_{low-k}$ \cite{bogner01,kuo02,bogner02,kuo0102,schwenk02,bogner03,holt04}, (ii) folded diagrams \cite{kuoosnes,klr} are summed to all orders, and (iii) an induced-interaction approach is used where particle-particle and particle-hole vertex functions are calculated self-consistently \cite{holtkbb07}.
Microscopic nuclear many-body calculations using realistic $V_{NN}$ interactions are complicated by the difficulties caused by the strong repulsive cores normally found in such interactions. For many years, a standard procedure to overcome such difficulties has been the Brueckner $G$-matrix method, where $V_{NN}$ is converted to a smooth $G$-matrix effective interaction by summing ladder diagrams to all orders in the nuclear medium. However, in many ways $G$ is not convenient for many-body calculations. First, its Pauli exclusion operator is complicated to handle in calculations, and second, $G$ is energy dependent in an off-energy-shell manner. These features complicate the calculation of diagrams with the $G$-matrix interaction, especially for high-order diagrams such as diagrams (b) and (c) of Fig.\ 1. The low-momentum NN interaction $V_{low-k}$ is based on a renormalization group approach where one integrates out momentum components beyond a decimation scale $\Lambda$ \cite{bogner01,kuo02,bogner02,kuo0102,schwenk02,bogner03,holt04}. Briefly speaking, it is given by a pair of $T$-matrix equivalence relations: \begin{equation} T(k',k,k^2) = V_{NN}(k',k) + P\int _0 ^{\infty} q^2 dq \frac{V_{NN}(k',q) T(q,k,k^2)} {k^2-q^2 }, \end{equation} \begin{eqnarray} && T(p',p,p^2) = V_{low-k}(p',p) \nonumber \\ &&+ P\int _0 ^{\Lambda} q^2 dq \frac{V_{low-k}(p',q)T (q,p,p^2)} {p^2-q^2 }, ~(p',p) \leq \Lambda \end{eqnarray} where $P$ denotes the principal value integral. From the above equations, $V_{low-k}$ can be obtained from $V_{NN}$. Note that $V_{low-k}$ is energy independent and thus convenient for many-body calculations. There are a number of high precision models for $V_{NN}$ \cite{paris,bonnabc,bonns,cdbonn,argonne,nijmegen,idaho}, but their $\langle k |V_{NN}|k'\rangle$ matrix elements are in fact significantly different from each other, although they all reproduce the experimental two-nucleon data quite well \cite{bogner03}.
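A toy example, not from the original text, shows the mechanism behind these equivalence relations at work. For an assumed S-wave contact interaction $V_{NN}(k',k) = C_{\Lambda}$ with sharp cutoff $\Lambda$ (units chosen so the equations above hold as written), both $T$-matrix equations are solved in closed form:

```latex
% Toy rank-one (constant) potential: the principal-value integral is analytic.
\begin{equation}
T(p,p,p^2) = \left[ C_{\Lambda}^{-1} - I_{\Lambda}(p) \right]^{-1} ,
\qquad
I_{\Lambda}(p) = P\int _0 ^{\Lambda} \frac{q^2 \, dq}{p^2 - q^2}
 = -\Lambda + \frac{p}{2} \, \mbox{ln}\,\frac{\Lambda + p}{\Lambda - p} \mbox{ } .
\end{equation}
% Requiring the on-shell T (i.e. the low-energy observables) to be
% Lambda-independent forces the coupling to run:
\begin{equation}
\frac{d C_{\Lambda}^{-1}}{d \Lambda}
 = \frac{\partial I_{\Lambda}(p)}{\partial \Lambda}
 = \frac{\Lambda^2}{p^2 - \Lambda^2} \simeq -1
 \qquad (p \ll \Lambda) \mbox{ } .
\end{equation}
```

The interaction thus changes with the decimation scale while low-momentum observables stay fixed; $V_{low-k}$ realizes the same logic for realistic potentials, here reduced to a single running coupling.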
An amazing feature of the different $V_{low-k}$ derived from the above different $V_{NN}$ potentials is that they are nearly identical to each other for $\Lambda \lesssim 2.0 fm^{-1}$, leading to a nearly universal low-momentum NN interaction \cite{bogner03}. Realistic NN potentials are all constructed to fit the experimental NN phase shifts up to $E_{lab} \leq 350$ MeV, which corresponds to $\Lambda \simeq 2 fm^{-1}$, providing an explanation for why $V_{low-k}$ with this decimation scale should be nearly universal. In our new CP calculation \cite{holtkbb07}, we employed a folded-diagram expansion \cite{kuoosnes,klr} which provides a formally exact method for calculating the effective interaction $V_{eff}$ for valence nucleons outside a closed core. It is of the form \begin{equation} V_{eff} = \hat{Q} - \hat{Q'} \int \hat{Q} + \hat{Q'} \int \hat{Q} \int \hat{Q} - \hat{Q'} \int \hat{Q} \int \hat{Q} \int \hat{Q} + ~...~~, \end{equation} where each $\int$ symbol represents a `fold'. Each $\hat Q$-box represents a collection of irreducible diagrams as shown by the diagrams of Fig.\ 1. The $\hat Q'$-box is the same as the $\hat Q$-box except that it starts from the second-order diagrams, namely $\hat Q'=\hat Q - V_{NN}$. As is well known, high-order CP calculations are difficult to perform, largely because the number of diagrams grows rapidly as one goes to higher orders in perturbation theory. The number of diagrams at third order is already quite large, though still manageable \cite{barrettk,jensen95,corag09,corag12}, but it was soon realized that an order-by-order calculation of CP diagrams beyond third order is not practicable. To fully assess the effects of core polarization to high order, a non-perturbative method is called for. The non-perturbative method we use is based on the elegant and rigorous induced interaction approach of Kirson \cite{kirson} and Babu and Brown \cite{babu}, hereafter referred to as KBB.
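As regards the folded-diagram series of Eq.\ (5), its low iterates can also be displayed. The following schematic form assumes the Krenciglowa--Kuo energy-derivative organization of the folds (with $\hat{Q}$ and its $\omega$-derivatives evaluated at the unperturbed valence energy $\epsilon_0$; it is a sketch, not the working expression):

```latex
% Schematic Krenciglowa--Kuo organization of the folded-diagram series:
% the first fold is generated by the energy derivative of the Q-box,
% and iterating the second relation to convergence sums all folds.
\begin{equation}
V_{eff} \simeq \hat{Q}(\epsilon_0)
 + \left. \frac{d \hat{Q}}{d \omega} \right|_{\epsilon_0} \hat{Q}(\epsilon_0)
 + ~...~ ,
\qquad
V_{eff}^{(n+1)} = \hat{Q}(\epsilon_0)
 + \sum_{m=1}^{\infty} \frac{1}{m!}
 \left. \frac{d^m \hat{Q}}{d \omega^m} \right|_{\epsilon_0}
 \left[ V_{eff}^{(n)} \right]^{m} \mbox{ } .
\end{equation}
```

In this organization the whole fold structure is carried by the $\omega$-dependence of the $\hat Q$-box, which is why an energy-independent starting interaction simplifies the summation considerably.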
Other successful non-perturbative summation methods have been developed, such as the parquet summation \cite{jackson} and the coupled cluster expansion \cite{dean}. In the KBB formalism the vertex functions are obtained by solving a set of self-consistent equations, thereby generating CP diagrams to all orders such as diagrams (b) and (c) of Fig.\ 1. \begin{figure} \includegraphics[width=2.8in,angle=270]{kbb2ndallordero18.eps} \caption{A comparison of the second-order core polarization matrix elements with those of the all-order KBB calculation.} \label{fig.3} \end{figure} \begin{figure} \includegraphics[height=2.4in,width=2.8in]{21cdb5shellspectraox.eps} \caption{Spectra for the $^{18} \rm O$ system calculated to different orders in perturbation theory. Dashed lines for the experimental levels \cite{tilley} indicate levels with large intruder state mixing \cite{wild84,brown88}. } \label{fig.4} \end{figure} Let us now give a brief summary of our all-order calculation for the shell-model effective interaction $V_{eff}$. We first use $V_{low-k}$ to calculate the $\hat Q$-box. And in this $\hat Q$-box the bubble-in-bubble CP diagrams, like those shown in Fig.\ 1, are included to all orders in a self-consistent way. Then we obtain $V_{eff}$ by summing up the $\hat Q$-box folded-diagram series of Eq.\ (5). To illustrate our results, we show in Fig.\ 2 a comparison of the $sd$-shell second-order CP matrix elements with those given by the all-order KBB calculation. The two groups of matrix elements are rather close to each other with the all-order elements being about $10 \%$ weaker. A similar comparison for the $^{18}O$ spectra is given in Fig.\ 3. The spectrum given in the `$V_{low-k}$' column is obtained with the $\hat Q$-box composed of the first-order diagram only, and consequently the resulting spectrum is too compressed compared to experiment. 
The spectrum given by the `2$^{\rm nd}$ order' column is obtained with the $\hat Q$-box composed of the first- and second-order diagrams. The inclusion of the KBB CP diagrams in the $\hat Q$-box largely improves the agreement of the resulting spectra (labeled `all-order') with experiment. \section{III. Brown-Rho scaling} Gerry moved to Stony Brook in 1968 and set up a large and very active nuclear theory group. Faculty members in his group were initially Akito Arima, Andy Jackson and TTSK. There were indeed a large number of visitors and postdocs during Gerry's first years at Stony Brook. Gerry took very good care of them, often putting them up in his home (so that, as Gerry would say, `they will work day and night'). As far as TTSK can remember, the visitors and postdocs included S.\ Backman, D.\ Bes, J.\ Blomqvist, R.\ Broglia, M.\ Chemtob, K.\ Dietrich, J.\ Durso, P.\ Ellis, A.\ Fessler, B.\ Friman, H.\ Gayer, E.\ Hajimichael, G.\ Hering, M.\ Ichimura, L.\ Ingber, M.\ Kawai, D.\ Kurath, R.\ Lawson, H.K.\ Lee, G.L.\ Li, Z.X.\ Li, Z.Y.\ Ma, R.\ Machleidt, H.\ Muether, E.\ Nyman, F.\ Osterfeld, E.\ Oset, E.\ Osnes, H.\ Pauli, Dan-Olaf Riska, M.\ Rho, J.P.\ Shen, R.\ Silbar, H.Q.\ Song, J.\ Speth, D.\ Strottman, K.\ Suzuki, J.\ Vergadoes, N.\ VinhMau, R.\ VinhMau, J.\ Wambach, W.\ Weise, H.F.\ Wu, S.S.\ Wu, S.D.\ Yang, Z.Y.\ Zhang ... Having a large number of physicists working together was very pleasant and productive, leading to many long-term collaborations. Mannque Rho was a frequent visitor, and it was at Stony Brook where Brown-Rho scaling \cite{brownrho91,brownrho04} originated. Realistic nuclear potentials are mediated by the exchange of mesons such as the $\pi$-, $\rho$-, $\omega$- and $\sigma$-meson. In constructing these potentials the meson-nucleon coupling constants are adjusted to fit the `free-space' NN scattering data.
Mesons in a nuclear medium, however, can have properties (masses and couplings) that are different from those in free space, as the former are `dressed' or `renormalized' by their interactions with the medium. Thus, the NN potential in medium, denoted by $V_{NN}(med)$, should be different from that in free-space. How to obtain $V_{NN}(med)$ is of course a most difficult and challenging problem, at least to most of us. Gerry was well known for his physics intuition as well as his brilliant ideas in making complicated problems simple. His Brown-Rho scaling is a typical example. A main result of the well-known Brown-Rho (BR) scaling is \cite{brownrho91,hatsuda,brownrho04} \begin{eqnarray} && \frac{m^*_{\sigma}}{m_{\sigma}}\simeq \frac{m^*_{N}}{m_{N}}\simeq \frac{m^*_{\rho}}{m_{\rho}}\simeq \frac{m^*_{\omega}}{m_{\omega}}\simeq \Phi_{BR} (n), \nonumber \\ && \Phi_{BR} (n)= 1-C\frac{n}{n_0}, \end{eqnarray} where $m^*$ and $m$ denote respectively the in-medium and in-vacuum mass. Here the parameter $C$ has the value $0.15-0.20$, $n$ is the density of the nuclear medium, and $n_0$ is nuclear matter saturation density ($0.16 fm^{-3}$). It is remarkable that the above simple scaling law, derived in the context of chiral symmetry restoration in dense matter, would have dramatic consequences for traditional nuclear structure physics. We shall denote the above linear scaling as the BR scaling. This scaling naturally renders $V_{NN}$ a density dependent interaction $V_{NN}(n)$. Before discussing the various effects of BR scaling, let me (TTSK) first recall a conversation with Gerry many years ago, probably in 1964 when I attended Gerry's Nuclear Physics course at Princeton.
He talked a lot about Brueckner theory and also about the empirical Skyrme effective interaction \cite{ringschuck80} of the form \begin{eqnarray} V_{skyrme}&=&V_{sk}(\vec r_1-\vec r_2)+D_{sk}(\vec r_1-\vec r_2), \nonumber \\ D_{sk}&=&\frac{1}{6}(1+x_3P_{\sigma})t_3\delta (\vec r_1 - \vec r_2) n(\vec r_{av}), \end{eqnarray} where $V_{sk}$ is a two-body $\delta$-function force and $D_{sk}$ is a density-dependent two-body interaction. It was a bit `strange' that there was a piece of interaction which was density dependent. One day Gerry mentioned that it would be nice to work out a connection between the empirical Skyrme interaction and meson-exchange NN interactions. At that time, my understanding of them was minimal, I have to confess, and I was really unable to pursue the matter further. But now there are, I think, indications \cite{dong09,dong11} that BR-scaling may provide a microscopic foundation for the density-dependent Skyrme effective interaction. We have carried out several studies on the effects of BR scaling on finite nuclei, nuclear matter and neutron stars \cite{holtbrs07,holtc1408,siu09,dong09,dong11,dongnewbr13}. Let us just briefly describe a few of them. For convenience in implementing BR scaling, we have employed the BonnA and/or BonnS one-boson-exchange potentials \cite{bonnabc,bonns} whose parameters for $\rho$, $\omega$ and $\sigma$ mesons are scaled with the density (in our calculations the meson masses and cut-off parameters are equally scaled). Note that $\pi$ is protected by chiral symmetry and is not scaled. That we scale $\rho$ but not $\pi$ has an important consequence for the tensor force, which plays an important role in the famous Gamow-Teller (GT) matrix element for the $^{14}C\rightarrow$ $^{14}N$ $\beta$-decay \cite{holtc1408}. The tensor forces from $\pi$- and $\rho$-meson exchange are of opposite signs.
A lowering of only $m_{\rho}$, but not $m_{\pi}$, can significantly suppress the net tensor force strength and thus largely diminish the GT matrix element. In addition, the scaling of the $\omega$ meson introduces additional short-distance repulsion into the nucleon-nucleon interaction, which was found to also contribute to the suppression \cite{holt09}. With BR-scaling we were able to satisfactorily reproduce the $\sim 5800$-yr long lifetime of this decay; I remember well that Gerry was very pleased with this result. BR scaling has been applied to several nuclear matter calculations \cite{holtbrs07,siu09,dong09,dong11,dongnewbr13}. Let me start from a result, as shown in Fig.\ 4 of Ref.\ \cite{dongnewbr13}, to illustrate the current situation. This calculation employed a so-called new-Brown-Rho (new-BR) scaling \cite{dongnewbr13} which is based on a half-Skyrmion model to be described briefly later. The equation of state (EOS) labeled (C) is obtained from $V_{low-k}$ (derived from the BonnS $V_{NN}$ \cite{bonns}) without any BR scaling. It does not exhibit satisfactory saturation properties. It is a general result that $V_{NN}$ alone cannot give satisfactory nuclear saturation properties as illustrated by (C) of Fig.\ 4. We have found that Brown-Rho scaling improves this situation dramatically \cite{holtbrs07,siu09,dong09,dong11,dongnewbr13}. Moreover the combined potential given by the sum of the unscaled-$V_{NN}$ and $D_{sk}$, the latter being the Skyrme density-dependent force of Eq.\ (7), can also give equally satisfactory nuclear matter saturation properties \cite{dong09}. Four such EOS's are shown in Fig.\ 5, where $\Lambda$ denotes the decimation momentum scale for $V_{low-k}$. As seen, the four EOS's agree with each other closely, all giving $E_0/A \simeq -15$ MeV, $k_F \simeq 1.40$ fm$^{-1}$ and $K \simeq 150$ MeV. 
Thus to have satisfactory nuclear matter saturation properties, we may use either a BR-scaled $V_{NN}$ or an unscaled-$V_{NN}$ + $D_{sk}$; this indicates that a microscopic foundation for the empirical Skyrme density-dependent force may be provided by BR scaling. \begin{figure}[h] \scalebox{0.36}{\includegraphics[angle=-90]{11113newbr.eps3sym2015n0}} \caption{Comparison of the EOS for symmetric nuclear matter calculated with and without the new-BR scaling. Transition densities of $n_{1/2}=2.0n_0$ (solid square) and $1.5n_0$ (open square) are employed. See text for more explanations.} \end{figure} \begin{figure}[h] \scalebox{0.42}{\includegraphics[angle=-90]{10withskyrmesym}} \caption{Ring-diagram EOS for symmetric nuclear matter with the interaction being the sum of $V_{low-k}$ and the Skyrme density dependent force of Eq.\ (7). Four sets of results are shown for CDBonn and BonnA potentials with $\Lambda$=3 and 3.5 $fm^{-1}$. A common Skyrme force of $t_3$=2000 MeV-$fm^6$ and $x_3=0$ is employed.} \end{figure} We now describe the new-BR scaling \cite{dongnewbr13} on which the results shown in Fig.\ 4 are based. The idea behind this scaling is that when a large number of skyrmions as baryons are put on an FCC (face-centered-cubic) crystal to simulate dense matter, the skyrmion matter undergoes a transition to a matter consisting of half-skyrmions~\cite{goldhaber} in CC configuration at a density that we shall denote as $n_{1/2}$. This density is difficult to pin down precisely but it is more or less independent of the mass of the dilaton scalar, the only low-energy degree of freedom that is not well-known in free space. The density at which this occurs has been estimated to lie typically between 1.3 and 2 times normal nuclear matter density $n_0$~\cite{half}. In our model, nuclear matter is separated into two regions I and II respectively for densities $n\leq n_{1/2}$ and $n>n_{1/2}$.
As inferred by our model, they have different scaling functions \begin{equation} \Phi_i (n) = \frac{1}{1+C_i \frac{n}{n_0}},~~ i=I,II. \end{equation} The above two-region scaling is the new-BR scaling mentioned earlier. The EOS (A) and (B) of Fig.\ 4 are obtained with the new-BR scaling with $n_{1/2}$= 1.5 and 2.0$n_0$ respectively. As described in \cite{dongnewbr13}, we employ in our new-BR calculations the BonnS potential \cite{bonns} with scaling parameters $C_{\rho}$=0.13, $C_{\sigma}$=0.121, $C_{\omega}$=0.139, $C_{N}$=0.13 and $C_{g,\rho}=C_{g,\omega}$=0 for region I. For region II the scaling parameters are $C_{\rho}$=0.13, $C_{\sigma}$=0.121, $C_{\omega}$=0.139, $C_{g,\rho}$=0.13, $C_{g,\omega}$=0 and $m^*_N/m_N=y(n)=0.77$. Note that this scaling has some special features: In region I the coupling constants $g_{\rho N}$ and $g_{\omega N}$ are not scaled, while in region II only the coupling constant $g_{\rho N}$ is scaled. Also in region II the nucleon mass is a density-independent constant ($m^* _N / m_N$=0.77). Note that our choices for the $C$ parameters are consistent with the Ericson scaling which is based on a scaling relation for the quark condensate $\frac{<\bar qq>^*}{<\bar qq>}$ \cite{ericson}. According to this scaling, at low densities one should have $C\simeq D/3$ with $D=0.35 \pm 0.06$. We note that the above scaling is only `inferred' by our Skyrmion-half-Skyrmion model \cite{dongnewbr13}. As a first step to check this scaling, we have carried out several applications. In Fig.\ 4, the calculated EOS for symmetric nuclear matter using transition densities $n_{1/2}$= 1.5 (A) and 2.0$n_0$ (B) are shown. Both give an energy per nucleon $E_0/A=-15$ MeV, saturation density $k_F=1.30 fm^{-1}$ and compression modulus $K$=208 MeV, all in satisfactory agreement with the empirical values \cite{dongnewbr13}. We believe that our scaling works well for low densities of $n \lesssim 1.5n_0$.
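The two scaling functions, Eq.\ (6) and Eq.\ (8), are simple enough to tabulate directly. The sketch below uses the parameter values quoted in the text; $C=0.18$ is a representative choice from the stated range $0.15$--$0.20$, and the region-I/II split density $n_{1/2}$ is an input.

```python
N0 = 0.16  # nuclear matter saturation density n0 in fm^-3

def phi_br(n, C=0.18):
    """Linear BR scaling of Eq. (6): m*/m = 1 - C n/n0 (C = 0.18 is illustrative)."""
    return 1.0 - C * n / N0

def phi_new_br(n, C_I, C_II, n_half):
    """Two-region new-BR scaling of Eq. (8): Phi_i(n) = 1/(1 + C_i n/n0)."""
    C = C_I if n <= n_half else C_II
    return 1.0 / (1.0 + C * n / N0)

# in-medium rho-meson mass fraction at saturation density (C_rho = 0.13 in both regions)
m_ratio_rho = phi_new_br(N0, 0.13, 0.13, 1.5 * N0)
```

Since $C_\rho$ has the same value in both regions, the $\rho$ scaling function is continuous across $n_{1/2}$; for parameters that differ between regions the function jumps at the transition density.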
How to scale the mesons at densities beyond $n_0$ is still an open question. By way of heavy-ion collision experiments, there has been much progress in determining the nuclear symmetry energy $E_{sym}$ up to densities as high as $\sim 5n_0$ \cite{li05,li08,tsang09}. Thus an application of our new-BR scaling to the calculation of $E_{sym}$ would provide an important test for this scaling in the region with $n>n_{1/2}$. As displayed in Fig.\ 6, our calculated symmetry energies agree well with the experimental constraints \cite{li05,tsang09}. \begin{figure}[h] \scalebox{0.34}{\includegraphics[angle=-90]{11113newbr.eps4esym2015}} \caption{ Comparison of our calculated nuclear symmetry energies with the empirical upper (expt-Li1) and lower (expt-Li2) constraints of Li et al. \cite{li05} and the empirical results of Tsang et al. (expt-Tsang) \cite{tsang09}. } \end{figure} The EOS at high densities ($n \simeq 5-10n_0$) is important for neutron-star properties. Thus an application of the new-BR scaling to neutron star structure would provide a useful test. As shown in Fig.\ 7, our calculated neutron-star maximum mass is about $2.4 M_{\odot}$, slightly larger than the empirical value of $\sim 2M_{\odot}$ \cite{dongnewbr13}. In our calculations, the central densities of neutron stars are $\sim 5n_0$. At such densities, how to scale the hadrons with the medium remains an interesting and open question. Much remains to be done. \begin{figure}[h] \scalebox{0.33}{\includegraphics[angle=-90]{81512newbr.epsnstar1520}} \caption{Mass-radius trajectories of neutron stars calculated with new-BR scalings using $n_{1/2}$ = 2.0 (A) and 1.5$n_0$ (B). The maximum neutron-star mass and its radius for these two cases are respectively \{$M_{\rm max} = 2.39 M_{\odot}$ and $R = 10.90$ km\} and \{ $M_{\rm max} = 2.38 M_{\odot}$ and $R= 10.89$ km\}. } \end{figure} \section{IV. Summary} It is indeed our fortune and privilege to have met Gerry and worked with him since our early careers.
He was not only a great scientist but also a kind person, who supported his colleagues and students over many years. I (TTSK) especially have many fond memories of fun times together, starting in 1964 when I met Gerry at Princeton, and our relationship extended beyond academics (such as our regular tennis matches). Gerry's insights into core polarization and Brown-Rho scaling are of fundamental importance for our understanding of effective nucleon-nucleon interactions in a nuclear medium and the possible connections to chiral symmetry restoration in dense matter. It was a pleasure to explore these topics together, and we have learned much from him over the years. Let us express our deep gratitude to him. \vskip .2in {\bf Acknowledgement} We are grateful to M.\ Rho and R.\ Machleidt for helpful discussions. This work was supported in part by the Department of Energy under Grant No.\ DE-FG02-88ER40388 and DE-FG02-97ER-41014.
\section{Introduction} In this note we will discuss examples of distributions in the sense of Schwartz that we believe are promising for the study of solutions to the Kadomtsev--Petviashvili equation. These distributions allow the computation of solutions to the Kadomtsev--Petviashvili (KP) equation and to the Korteweg--de Vries (KdV) equation. The main mathematical method used in this note is the nonlocal dbar problem. The nonlocal dbar problem was originally introduced by Ablowitz, Bar-Yaacov, and Fokas \cite{ABF83} to formulate the inverse scattering transform for the KP equation. This method was further generalized by Zakharov and Manakov \cite{ZM85} to other (2+1)D completely integrable systems. Recently, the nonlocal dbar problem has been used to study new classes of solutions to the KdV equation called primitive solutions \cite{DNZZ20,DZZ16,GGJM18,NZZ19,N20a,ZDZ16,ZZD16} and connections to the KP equation were discussed in \cite{N20b}. The distributions introduced in this note are singular on a Cantor set or Sierpinski gasket, where the singularities can potentially correspond to nonlocal identifications along two copies of a Cantor set or Sierpinski gasket on the Riemann sphere. These spaces of distributions are formed using the nonlocal dbar problem, and a limit of rational functions on the plane as the number of poles diverges to infinity. \subsection{Prerequisite Definitions} All distributions that appear in this note will be defined with respect to compactly supported smooth test functions.
\begin{definition} \label{defQnQ} Suppose $\mathcal{Q}_n \subset \mathbb{C}$ is a sequence of complex point sets such that $\mathcal{Q}_n \subset \mathcal{Q}_{n+1}$ and $$\mathcal{Q} = \overline{\lim_{n \to \infty}} \mathcal{Q}_n := \overline{\{x \in \mathcal{Q}_n : n = 0,1,2,\dots\}}$$ such that $$\lim_{n \to \infty} \frac{1}{|\mathcal{Q}_n|} \sum_{\lambda_n \in \mathcal{Q}_n} \delta(\lambda - \lambda_n) d\lambda = d \mu_{\mathcal{Q}} (\lambda)$$ where $\mu_{\mathcal{Q}} (\lambda)$ is the probability measure on $\mathbb{C}$ that is supported on $\mathcal{Q}$, on which it restricts to the uniform probability measure. \end{definition} \begin{definition} Let $H_\mu(\mathcal{Q})$ be the set of H\"{o}lder continuous functions on $\mathcal{Q}$ with H\"{o}lder coefficient $0<\mu<1$. \end{definition} \begin{definition} The following definitions are either classical constructions of Cantor \cite{C1883} and Sierpinski \cite{S1915} or explicit: \begin{enumerate} \item The $n$-th step of the Cantor iteration for the Cantor middle $\epsilon$ set $\mathcal{C}_n \subset \mathbb{C}$. \item The point set $\mathcal{P}_n \subset \mathcal{C}_n$ of the endpoints of the intervals making up $\mathcal{C}_n$. \item The $n$-th step $\mathcal{S}_n \subset \mathbb{C}$ of the iteration used to construct the Sierpinski gasket. \item The vertex sets $\mathcal{V}_n \subset \mathcal{S}_n$ of vertices of the triangles making up $\mathcal{S}_n$.
\item The Cantor set can be defined by $$\mathcal{C} = \bigcap_{n=0}^\infty \mathcal{C}_n.$$ \item The Sierpinski gasket can be defined by $$\mathcal{S} = \bigcap_{n=0}^\infty \mathcal{S}_n.$$ \end{enumerate} \end{definition} \begin{proposition} The following are true: \begin{enumerate} \item The point sets $\mathcal{P}_n$ and $\mathcal{C}$ are related by $$\overline{\lim_{n \to \infty}} \mathcal{P}_n = \mathcal{C}$$ and satisfy the necessary properties from definition \ref{defQnQ} to be a choice of $\mathcal{Q}_n$ and $\mathcal{Q}$ respectively. \item The point sets $\mathcal{V}_n$ and $\mathcal{S}$ are related by $$\mathcal{S} = \overline{\lim_{n \to \infty}} \mathcal{V}_n$$ and satisfy the necessary properties from definition \ref{defQnQ} to be a choice of $\mathcal{Q}_n$ and $\mathcal{Q}$ respectively. \end{enumerate} \end{proposition} \section{The Nonlocal Dbar Problem and Spaces of Distributions} \begin{definition} We will use $\mathcal{Q}_n$ to refer to either $\mathcal{P}_n$ or $\mathcal{V}_n$ and we will let $$Q = \overline{\lim_{n \to \infty}} \mathcal{Q}_n.$$ We will consider some explicit spaces of compactly supported distributions defined with respect to smooth compactly supported test functions as follows: \begin{enumerate} \item From $\mathcal{Q}_n$ and a list of numbers $a_j$ we determine the following distributions \begin{align*} & \chi_n(\lambda) = 1 + \frac{1}{\pi |\mathcal{Q}_n|} \sum_{\lambda_j \in \mathcal{Q}_n} \frac{a_j}{\lambda - \lambda_j} \\ & \chi(\lambda) = \lim_{n \to \infty} \chi_n(\lambda).\end{align*} (at this point we make no comment on the existence of the limiting distribution) \item Let $\phi$ be some isometry of $\mathbb{C}$ such that $\phi: \mathcal{Q} \to \mathcal{Q}' \subset \mathbb{C}$ where $\mathcal{Q}$ and $\mathcal{Q}'$ are disjoint. 
If we enforce the nonlocal conditions $$a_j = r(\lambda_j)\chi_n(\phi(\lambda_j))$$ for all $j$, where $r \in H_\mu(\mathcal{Q})$, then the rational functions $\chi_n$ are uniquely defined by the above conditions. \end{enumerate} \end{definition} The distributions $\chi_n(\lambda)$ can be identified with rational functions by definition, but the distributions $\chi(\lambda)$ cannot be represented by rational functions. An equivalent construction can also be made using functions and measures instead of distributions and rational functions. In the previous definition, the closure of the rational functions in the topology of uniform convergence on compact sets that do not intersect the singular sets of the surface can be formulated intuitively using limits of rational functions, and can be made rigorous using Schwartz's theory of distributions. \begin{proposition} Let $\mathcal{Q}_n \supset \mathcal{Q}_{n-1}$ and $\mathcal{R}_n \supset \mathcal{R}_{n-1}$ be two sequences of point sets such that $$ \overline{\lim_{n \to \infty} \mathcal{Q}_n}=\mathcal{Q}, \quad \overline{\lim_{n \to \infty} \mathcal{R}_n} = \mathcal{Q}'$$ and so that the point measures on $\mathcal{Q}_n$ and $\mathcal{R}_n$ limit to measures on $\mathcal{Q}$ and $\mathcal{Q}'$, and $\phi(\mathcal{Q}_n)$ and $\mathcal{R}_n$ are disjoint. Now consider the distributions of the form \begin{align*} & \tilde \chi_n(\lambda) = 1 + \frac{1}{\pi |\mathcal{Q}_n|}\left(\sum_{\lambda_j \in \mathcal{Q}_n} \frac{a_j}{\lambda - \lambda_j}\right) + \frac{1}{\pi |\mathcal{R}_n|}\left(\sum_{\mu_k \in \mathcal{R}_n} \frac{b_k}{\lambda - \mu_k} \right) \\ & \tilde \chi(\lambda) = \lim_{n \to \infty} \tilde \chi_n(\lambda) \end{align*} where $a_j$ and $b_k$ are bounded sequences associated to the points of $\mathcal{Q}_n$ and $\mathcal{R}_n$ respectively (at this point we make no comment on the existence of the limiting distribution).
If we assume the following nonlocal conditions on the poles $$a_j = r_1(\lambda_j) \chi_n(\phi(\lambda_j)), \quad b_k =r_2(\phi^{-1}(\mu_k)) \chi_n(\phi^{-1}(\mu_k))$$ for all $j$ and $k$, where $r_1,r_2 \in H_{\mu}(\mathcal{Q})$, then the rational functions $\tilde \chi_n(\lambda)$ are uniquely defined by the above conditions. \end{proposition} The following lemma will be important, because it allows us to define the limit of the rational functions $\chi_n(\lambda)$ and $\tilde \chi_n(\lambda)$ as $n \to \infty$. This lemma allows us to study functions on $\mathcal{Q}$ that solve singular integral equations instead of the nonlocal dbar problem. \begin{lemma} \label{lma} The rational functions $\chi_n$ and $\tilde \chi_n$ solve the nonlocal dbar problems \begin{equation} \label{eqdbar} \frac{\partial \breve{\chi}}{\partial \bar{\lambda}}(\lambda) = R(\lambda) \breve{\chi}(\phi(\lambda)) \end{equation} where \begin{equation} R(\lambda) = \frac{1}{|\mathcal{Q}_n|} \sum_{\lambda_j \in \mathcal{Q}_n} r(\lambda_j) \delta(\lambda - \lambda_j), \end{equation} or \begin{equation} R(\lambda) = \frac{1}{|\mathcal{Q}_n|} \sum_{\lambda_j \in \mathcal{Q}_n} r_1(\lambda_j) \delta(\lambda - \lambda_j) + \frac{1}{|\mathcal{R}_n|}\sum_{\mu_k \in \mathcal{R}_n} r_2(\phi^{-1}(\mu_k)) \delta(\lambda - \mu_k) \end{equation} and $\breve \chi(\lambda) \to 1$ as $\lambda \to \infty$. Moreover, if we suppose that $r,r_1, r_2\in H_\mu(\mathcal{Q})$ then $R(\lambda)$ limits to \begin{equation} \label{eqR1} R(\lambda) = \int_{\mathcal{Q}} r(s) \delta(\lambda-s) d\mu_\mathcal{Q}(s),\end{equation} or \begin{equation} \label{eqR2} R(\lambda) = \int_{\mathcal{Q}} r_1(s) \delta(\lambda-s) + r_2(s) \delta(\lambda-\phi(s)) d\mu_{\mathcal{Q}}(s) \end{equation} respectively as $n \to \infty$. \end{lemma} Because of the assumption that the uniform probability measure $d\mu_{\mathcal{Q}}$ exists as the limit of delta measures, the proof of this lemma is a simple application of the theory of distributions.
One just needs to apply the distributions to test functions, and then take the limit. The proof is left to the reader. The functions $\chi_n(\lambda)$ and $\tilde \chi_n(\lambda)$ determine a holomorphic line bundle on a singular rational curve \cite{NZZ19}. As $n \to \infty$, the poles coalesce into the singular sets $\mathcal{Q}$ and $\mathcal{Q}'$ that are identified via the nonlocal identification. For the case of finite and infinite gap solutions to the KdV equation, we can take $\mathcal{Q}$ and $\mathcal{Q}'$ to be intervals, and the limit gives the finite gap solutions to the KdV equation as primitive solutions to the KdV equation. The nonlocal dbar problem can instead be formulated as a system of two (local) singular integral equations. The justification is a simple implication of the constructions in \cite{DZZ16,DNZZ20,GGJM18,NZZ19,N20a,N20b}. The limiting procedure used in \cite{DZZ16} that was the inspiration for further results in \cite{DNZZ20,NZZ19,N20a,N20b} is deterministic. However, this limit is ineffective for some numerical and statistical calculations. In \cite{GGJM18} a random discrete soliton amplitude spectrum that leads to a Riemann--Stieltjes integration is used and gives an alternative way of rigorously defining the functions $r(s)$, $r_1(s)$ and $r_2(s)$ in the singular integral equations in the case of a primitive solution to the KdV equation. The universality of the Riemann--Stieltjes integral for any choice of partition generating it also guarantees the well-definedness of the limiting distribution. In the case considered in this note, these methods lead to the following theorem.
\begin{theorem}\label{thm} The limiting distributions from Lemma \ref{lma} can be written in the forms \begin{align} &\label{eqchi1} \chi(\lambda) = 1 + \frac{1}{\pi} \int_{\mathcal{Q}} \frac{f(s)}{\lambda-s} d\mu_{\mathcal{Q}}(s) \\ & \label{eqtchi1} \tilde \chi(\lambda) = 1 + \frac{1}{\pi} \int_{\mathcal{Q}} \left(\frac{f_1(s)}{\lambda-s} + \frac{f_2(s)}{\lambda - \phi(s)} \right) d\mu_{\mathcal{Q}}(s). \end{align} Moreover, the function $f(s)$ solves the integral equation \begin{equation} \label{ieq1} r(t) = f(t) + \frac{r(t)}{\pi} \int_{\mathcal{Q}} \frac{f(s)}{\phi(t)-s} d \mu_{\mathcal{Q}}(s) \end{equation} and the functions $f_1(s)$ and $f_2(s)$ solve the system of singular integral equations \begin{align} \label{ieq2}& r_1(t) = f_1(t) + \frac{r_1(t)}{\pi} \left(\int_{\mathcal{Q}} \frac{f_1(s)}{\phi(t)-s} d \mu_{\mathcal{Q}}(s) + \fint_{\mathcal{Q}} \frac{f_2(s)}{\phi(t) - \phi(s)} d \mu_{\mathcal{Q}}(s)\right) \\ \label{ieq3}& r_2(t) = f_2(t) + \frac{r_2(t)}{\pi} \left( \fint_{\mathcal{Q}} \frac{f_1(s)}{t-s} d \mu_{\mathcal{Q}}(s) + \int_{\mathcal{Q}} \frac{f_2(s)}{t-\phi(s)} d \mu_{\mathcal{Q}}(s) \right) \end{align} which determine $f(s)$, $f_1(s)$ and $f_2(s)$ from $r,r_1,r_2\in H_\mu(\mathcal{Q})$. The principal value integrals are defined via the embedding of $\mathcal{C}$ or $\mathcal{S}$ into $\mathbb{C}$. \end{theorem} The integral equations \eqref{ieq1}, \eqref{ieq2}, \eqref{ieq3} can be produced immediately using the formal proof of the main theorem from \cite{N20b}. This formal limiting argument leads to well-defined integrals because of the assumption on the existence of $d\mu_\mathcal{Q}$ in definition \ref{defQnQ}. This theorem is proven by plugging \eqref{eqchi1} and \eqref{eqtchi1} into \eqref{eqdbar}, using the functions $R$ of the form \eqref{eqR1} and \eqref{eqR2} respectively.
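The finite-$n$ linear system underlying Theorem \ref{thm} is easy to set up numerically. The sketch below is an illustration only: the embedding of the endpoint sets $\mathcal{P}_n$ in $[1,2]$, the isometry $\phi(\lambda)=-\lambda$, and the choice $r(s)=e^{-s}$ are all assumptions made for the example. It solves the nonlocal conditions $a_j = r(\lambda_j)\chi_n(\phi(\lambda_j))$ as a linear system and verifies them together with the normalization $\chi_n(\lambda)\to 1$ at infinity.

```python
import numpy as np

def cantor_endpoints(n, a=1.0, b=2.0, eps=1.0 / 3.0):
    """Endpoint set P_n of the n-th middle-eps Cantor iteration on [a, b]."""
    intervals = [(a, b)]
    for _ in range(n):
        nxt = []
        for lo, hi in intervals:
            keep = (hi - lo) * (1.0 - eps) / 2.0
            nxt += [(lo, lo + keep), (hi - keep, hi)]
        intervals = nxt
    return np.array(sorted({e for iv in intervals for e in iv}))

lam = cantor_endpoints(6)        # |P_6| = 2^7 = 128 endpoints
N = len(lam)
r = np.exp(-lam)                 # an arbitrary Hoelder-continuous choice of r

# enforce a_j = r(l_j) chi_n(phi(l_j)) with phi(l) = -l, where
#   chi_n(mu) = 1 + (1/(pi N)) sum_k a_k / (mu - l_k);
# phi(P_n) and P_n are disjoint, so the kernel below is regular
K = 1.0 / (-lam[:, None] - lam[None, :])          # kernel evaluated at mu = phi(l_j)
M = np.eye(N) - (r[:, None] / (np.pi * N)) * K
a = np.linalg.solve(M, r)

chi_at_phi = 1.0 + (K @ a) / (np.pi * N)          # chi_n(phi(l_j))
residual = np.max(np.abs(a - r * chi_at_phi))
```

The residual is zero up to solver roundoff by construction; the far-field value of $\chi_n$ tends to $1$ as required of the normalization in the definition.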
The nonlocal dbar problem leads to the following integral equation $$ \breve \chi(\lambda) = 1 + \frac{1}{\pi} \iint_{\mathbb{C}} \frac{R(\zeta)\breve \chi(\phi(\zeta))}{\lambda - \zeta} d^2 \zeta $$ where $d^2 \zeta$ is the usual area form on $\mathbb{C} \equiv \mathbb{R}^2$. This integral equation comes from combining the inversion formula for the dbar operator and the asymptotic condition $\breve \chi(\lambda) \to 1$ as $\lambda \to \infty$. The integral equation is determined by its behavior near the singular support, because $\breve \chi(\lambda)$ solves the Cauchy--Riemann equations off the singular support. This is because with no singular support, the Cauchy--Riemann equations would imply $\breve \chi(\lambda) = 1$. Therefore, the singular support is acting as a source for the Cauchy--Riemann equations (in a manner analogous to the way electrostatic charge distributions generate solutions to the Laplace equation). \begin{definition} We will use $\mathcal{D}_\bullet'$ to refer to the distributions $\chi(\lambda)$ and $\tilde\chi(\lambda)$ that are determined by the singular integral equations in theorem \ref{thm}. \end{definition} It seems likely that distributions $\tilde \chi(\lambda)$ in $\mathcal{D}_\bullet'$ could be used to extend the idea of a holomorphic line bundle on a singular rational curve, due to the interpretations of $\chi_n(\lambda)$ and $\tilde \chi_n(\lambda)$ as giving holomorphic line bundles on singular surfaces \cite{NZZ19}. The constructions in \cite{NZZ19,N20a} give evidence to the conjecture that the idea of holomorphic line bundles on surfaces can be extended to the singular surface $\Sigma_{\mathcal{Q}} = S^2/ \left< \sim \right>$ --- which is the topological surface formed by taking $\mathbb{C}$ and identifying $\mathcal{Q}$ with $\mathcal{Q}'$ by $\sim$ via restriction of the isometry $\phi$ to $\mathcal{Q}$ --- using certain choices of $\tilde \chi(\lambda)$, or equivalently to certain choices of $r_1(s) \ge 0$ and $r_2(s) \le 0$.
\begin{definition} When $\mathcal{Q} = \mathcal{C}$ we call $\Sigma_{\mathcal{Q}}$ a Cantor surface, and when $\mathcal{Q} = \mathcal{S}$ we call $\Sigma_{\mathcal{Q}}$ a Sierpinski surface. We will call $\tilde \chi(\lambda)$ a primitive distribution due to the connection with primitive potentials. \end{definition} An explicit link between a notion of holomorphic line bundles on these singular surfaces and the distributions discussed in the previous section would lead to an idea of a Picard group of the Cantor and Sierpinski surfaces. \begin{definition} We can form the holomorphic one forms $$\tilde \omega_n = \tilde \chi_n(\lambda) d \lambda, \quad \tilde \omega = \tilde \chi(\lambda) d \lambda.$$ We can define the space $\mathcal{A}$ of holomorphic differentials on $$\mathbb{C} \setminus \mathcal{Q}$$ of the form $$\omega = \chi(\lambda) d \lambda,$$ and the space $\tilde{\mathcal{A}}$ of holomorphic differentials on $$\mathbb{C} \setminus (\mathcal{Q} \cup \mathcal{Q}')$$ of the form $$\tilde \omega = \tilde \chi(\lambda) d \lambda.$$ \end{definition} It seems likely that for some choices of functions $r_1(s)$ and $r_2(s)$ these one forms can be interpreted as holomorphic one forms on $\Sigma_{\mathcal{Q}}$. This interpretation would be important to singularity theory in complex geometry because it would be an example of a complicated singular set for which the surface $\Sigma_{\mathcal{Q}}$ can still be given a large family of holomorphic one forms. These one forms could also potentially be used to define an idea of a holomorphic line bundle on $\Sigma_{\mathcal{Q}}$. The conjectures discussed at the end of this section also have evidence in connections between solutions to the nonlocal dbar problem and solutions to the KP equation. 
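For completeness, the finite-stage vertex sets $\mathcal{V}_n$ can be generated directly. The sketch below assumes the unit equilateral embedding of $\mathcal{S}_0$ in $\mathbb{C}$ and checks the vertex count $|\mathcal{V}_n| = (3^{n+1}+3)/2$, which follows because each subdivision adds the $3\cdot 3^n$ edge midpoints of the current triangles.

```python
def sierpinski_vertices(n):
    """Vertex set V_n of the n-th step of the Sierpinski iteration,
    assuming the unit equilateral embedding of S_0 in the complex plane."""
    tris = [(0j, 1 + 0j, complex(0.5, 3 ** 0.5 / 2))]
    for _ in range(n):
        nxt = []
        for a, b, c in tris:
            # replace each triangle by its three corner sub-triangles
            ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
            nxt += [(a, ab, ca), (ab, b, bc), (ca, bc, c)]
        tris = nxt
    return {v for t in tris for v in t}
```

Shared vertices are always propagated as identical floating-point values (adjacent sub-triangles meet only at vertices introduced when their common ancestor was split), so the set-based deduplication is exact here.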
\section{The (2+1)D Kadomtsev--Petviashvili, the (1+1)D Korteweg--de Vries, and the (1+1)D Schr\"{o}dinger Equations} Define the phase function $$\psi(s,x,y,t) = s x + s^2 y + s^3 t .$$ To produce a solution to the complex KP equation \begin{equation} \label{eqKP} (4u_t + 6uu_x - u_{xxx})_x -3u_{yy} = 0,\end{equation} we simply need to assume $$r_1(s) = e^{\psi(s,x,y,t)-\psi(\phi(s),x,y,t)}\tilde r_1(s), \quad r_2(s) = e^{\psi(\phi(s),x,y,t)-\psi(s,x,y,t)} \tilde r_2(s),$$ which is a simple implication of \cite{N20b}. The complex KP equation can reduce to the real KP-I or KP-II equation as follows. Suppose that $u$ solves the complex KP equation. If $u(x,y,t)$ is real for real values of $x,y,t$ then $u(x,y,t)$ solves the real KP-II equation. However, if $u(x,iy,t)$ is real for real values of $x,y,t$ then $u(x,iy,t)$ solves the real KP-I equation. A solution to the complex KP equation produced in this manner might not reduce to a nonsingular real solution of either the KP-I equation or the KP-II equation. However, if we were to assume $$\tilde r_1(s) \ge 0, \quad \tilde r_2(s) \le 0$$ are supported on some positive intervals and $\phi(\lambda)=-\lambda$, then these solutions reduce to primitive solutions to the KdV equation, which are real, smooth and bounded \cite{DNZZ20,N20a,NZZ19,DZZ16}. For the choices $$r(s) \ge 0, \quad r_1(s) \ge 0, \quad r_2(s) \le 0$$ supported on $$\mathcal{Q}=\mathcal{C} \subset \mathbb{R}^+,$$ and $\phi(\lambda) = -\lambda$, this construction gives a solution to the KdV equation. This reasoning is discussed in detail in \cite{ZM85,DZZ16,N20a,N20b}. For most other choices of embedding of $\mathcal{C}$ into $\mathbb{C}$, the construction will produce solutions to the KP equation. For $\mathcal{Q} = \mathcal{S}$, the construction will always produce a solution to the KP equation rather than the KdV equation, since $\mathcal{S}$ cannot be restricted to a one-dimensional set.
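The reduction to the KdV equation for $\phi(s)=-s$ can be seen directly at the level of the phase: $\psi(s,x,y,t)-\psi(-s,x,y,t) = 2sx+2s^3t$, so the $y$-dependent term cancels from the dressed data $r_1$, $r_2$ and the resulting solution is independent of $y$. A quick numerical check of this cancellation:

```python
import random

def psi(s, x, y, t):
    # phase function psi(s, x, y, t) = s x + s^2 y + s^3 t
    return s * x + s ** 2 * y + s ** 3 * t

random.seed(0)
errs = []
for _ in range(100):
    s, x, y, t = (random.uniform(-2.0, 2.0) for _ in range(4))
    diff = psi(s, x, y, t) - psi(-s, x, y, t)      # phi(s) = -s
    errs.append(abs(diff - (2 * s * x + 2 * s ** 3 * t)))
max_err = max(errs)
```

The check confirms that the even-in-$s$ term $s^2 y$ drops out of the exponent, which is exactly why the choice $\phi(\lambda)=-\lambda$ with $\mathcal{Q}\subset\mathbb{R}^+$ yields $y$-independent (KdV) solutions.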
Consider the case of $\mathcal{Q}=\mathcal{C}$ and a solution $u(x,t)$ to the KdV equation. Let us consider the $t$-dependent family of one dimensional Schr\"{o}dinger operators $$\hat H(t) = -\partial_x^2+u(x,t).$$ The (energy) spectrum $\sigma(\hat H) = \sigma(\hat H(t))$ is constant in $t$ and $$\sigma(\hat H) = \{E=\lambda^2: \lambda \in \mathbb{R}, \; -i \lambda \in \mathcal{C}, \text{ or } i \lambda \in \mathcal{C}\}.$$ In other words, we can produce a potential with the above spectrum, and an explicit basis of (generalized) eigenfunctions of $\hat H$. This is precisely the information we need to explicitly construct the spectral projection operators. We can use intuition from finite and infinite gap theory and the trace formula to make some conjectures on the behavior of the solutions to the KP equation, the KdV equation and the inverse spectral theory of one dimensional Schr\"{o}dinger operators. First, it seems likely that such a solution would either be quasi-periodic, or asymptotic to a quasi-periodic solution, given the results in \cite{GGJM18,Sim}. It also seems likely that associating gaps to the intervals $\mathcal{C}_n$, computing finite gap solutions, and then taking an infinite gap limit would lead to an equivalent construction of some solutions/potentials in the isospectral set of $u(x,0)$. Because gaps in the spectrum occur at essentially all length scales smaller than the smallest interval $I$ such that $\mathcal{C} \subset I$, it is likely that the resulting solution has interesting multi-soliton interactions at all length scales smaller than $I$. These solutions to the KdV equation could therefore potentially exhibit complicated soliton gas dynamics. 
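Spectral statements of this kind are easy to probe numerically. The following sketch (ours, and not the paper's construction) discretizes a one dimensional Schr\"{o}dinger operator by finite differences for the classical reflectionless one-soliton potential $u(x) = -2\,\mathrm{sech}^2(x)$, whose single bound state sits at the exact energy $E=-1$:

```python
# Finite-difference sketch of H = -d^2/dx^2 + u(x) with Dirichlet boundary
# conditions, for the one-soliton potential u(x) = -2 sech(x)^2.
# The exact bound state of this potential is E = -1.
import numpy as np

L, n = 15.0, 600
x = np.linspace(-L, L, n)
h = x[1] - x[0]
u = -2.0 / np.cosh(x) ** 2

H = (np.diag(2.0 / h**2 + u)
     + np.diag(-np.ones(n - 1) / h**2, 1)
     + np.diag(-np.ones(n - 1) / h**2, -1))

E0 = np.linalg.eigvalsh(H)[0]
assert abs(E0 - (-1.0)) < 1e-2  # ground state close to the exact value
```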
Second, while the connection to spectral gaps is currently not clear for the distributions supported on Sierpinski gaskets, it still may be true that the Sierpinski surface discussed in this paper leads to solutions to the KP equation that have interesting multi-soliton interaction at all length scales. Further study of the Sierpinski surface and the nonlocal dbar problem could potentially lead to complicated soliton gases when the support of the soliton spectrum is homeomorphic to $\mathcal{C}$. Even if soliton gases of this type turn out not to be physically relevant, they may still be interesting as a completely solvable model of a soliton gas.
\section{Introduction} \label{sec:introduction} Shape matching is an important area of computational geometry that has applications in computer vision, pattern recognition, and other fields concerned with matching objects by shape similarity. Generally, in shape matching we are given two geometric objects $A$ and $B$ and we want to measure to what extent they are similar. Usually we may allow certain transformations, like translations, rotations and/or scalings, of one object relative to the other, in order to improve the quality of the match. In many applications, the input data consists of finite sets of points sampled from the actual objects. To measure similarity between the sampled point sets, various distance functions have been used. One popular function is the Hausdorff distance, which equals the maximum distance from a point in one set to its nearest point in the other set. However, when the objects which we compare are curves, sequences, or contours of larger objects, and the sampled points are ordered along the compared contours, the discrete Fr\'echet distance may be a more appropriate similarity measure. This is because the discrete Fr\'echet distance takes into account the ordering of the points along the contours, which the Hausdorff distance ignores. Comparing curves and sequences is a major task that arises in computer vision, image processing and bioinformatics (e.g., in matching backbone sequences of proteins). The \emph{discrete Fr\'echet distance} between a sequence of points $P$ and another sequence of points $Q$ is defined as the minimum, over all possible independent (forward) traversals of the sequences, of the maximum distance between the current point of $P$ and the current point of $Q$ during the traversals. See below and in Section~\ref{sec:preliminaries} for a more formal definition. In this work, we focus on the problem of computing the minimum discrete Fr\'echet distance \emph{under translation}. 
That is, given two sequences $P$ and $Q$ of $m$ and $n$ points, respectively, in the plane, we wish to translate $Q$ by a vector $t\in \mathbb{R}^2$ such that the discrete Fr\'echet distance between $P$ and $Q+t$ is minimized. \medskip \noindent\textbf{Background.} The Fr\'echet distance has been extensively studied during the past 20 years. The main variant, the continuous Fr\'echet distance, where no transformation is allowed, measures similarity between (polygonal) curves. It is the smallest $\delta$ for which there exist forward simultaneous traversals of the two curves, from start to end, so that at all times the distance between the corresponding points on the curves is at most $\delta$. The discrete Fr\'echet distance considers sequences $P$ and $Q$ of points instead of curves. It is defined analogously, where (a) the simultaneous traversals of the sequences are represented as a sequence of pairs $(p^{(1)}, q^{(1)}),\ldots,(p^{(t)},q^{(t)})$, where $p^{(i)}\in P$, $q^{(i)}\in Q$, for $i=1,\ldots,t$, (b) the first (resp., last) pair consists of the starting (resp., terminal) points of the two sequences, and (c) each $(p^{(i)}, q^{(i)})$ is obtained from $(p^{(i-1)}, q^{(i-1)})$ by moving one (or both) point(s) to the next position in the corresponding sequence. Most studies of the problem consider the situation where no translation (or any other transformation) is allowed. In this ``stationary'' case, the discrete Fr\'echet distance in the plane can be computed, using dynamic programming, in $O(mn)$ time (Eiter and Mannila~\cite{EM94}). Agarwal et al.~\cite{ABKS12} slightly improve this bound, and show that the (stationary) discrete Fr\'echet distance can be computed in $O\left(\dfrac{mn\log\log n}{\log n}\right)$ time on a word RAM, and a very recent result of Bringmann~\cite{Bri14} indicates that a substantially subquadratic solution (one that runs in time $O((mn)^{1-\delta})$, for some $\delta>0$) is unlikely to exist. 
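For concreteness, the quadratic dynamic program for the stationary case (in the spirit of Eiter and Mannila; the function below is our own illustrative sketch, not code from the paper) can be written as follows:

```python
# Minimal O(mn) dynamic program for the stationary discrete Frechet
# distance between planar point sequences P and Q.
from math import hypot

def discrete_frechet(P, Q):
    m, n = len(P), len(Q)
    d = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            cost = hypot(P[i][0] - Q[j][0], P[i][1] - Q[j][1])
            if i == 0 and j == 0:
                prev = 0.0
            elif i == 0:
                prev = d[0][j - 1]
            elif j == 0:
                prev = d[i - 1][0]
            else:
                prev = min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
            d[i][j] = max(prev, cost)  # bottleneck cost of the best traversal
    return d[m - 1][n - 1]

# Two parallel horizontal chains at vertical distance 1:
print(discrete_frechet([(0, 0), (1, 0), (2, 0)],
                       [(0, 1), (1, 1), (2, 1)]))  # prints 1.0
```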
Alt and Godau~\cite{AG95} showed that the (stationary) continuous Fr\'echet distance of two planar polygonal curves with $m$ and $n$ edges, respectively, can be computed, using dynamic programming, in $O(mn\log mn)$ time. This has been slightly improved recently by Buchin et al.~\cite{BBMM12}, who showed that the continuous Fr\'echet distance can be computed in $O(N^2 (\log N)^{1/2} (\log\log N)^{3/2})$ time on a pointer machine, and in $O(N^2 (\log\log N)^2)$ time on a word RAM (here $N=m=n$ denotes the number of edges in each curve). In short, the best known algorithms for the stationary case, for both discrete and continuous variants, hover around the quadratic time bound. Not surprisingly, the problems become much harder, and their solutions much less efficient, when translations (or other transformations) are allowed. For the problem of computing the minimum continuous Fr\'echet distance under translation, Alt et al.~\cite{AKW01} give an algorithm with $O(m^3n^3(m+n)^2\log(m+n))$ running time, where $m$ and $n$ are the number of edges in the curves. They also give a $(1+{\varepsilon})$-approximation algorithm for the problem, that runs in $O({\varepsilon}^{-2}mn)$ time. That is, they compute a translation of one of the curves relative to the other, such that the Fr\'echet distance between the resulting curves is at most $(1+{\varepsilon})$ times the minimum Fr\'echet distance under any translation. In three dimensions, Wenk~\cite{Wenk02} showed that, given two polygonal chains with $m$ and $n$ edges respectively, the minimum continuous Fr\'echet distance between them, under any reasonable family of transformations, can be computed in $O((m+n)^{3f+2} \log (m+n))$ time, where $f$ is the number of degrees of freedom for moving one chain relative to the other. 
So with translations alone $(f=3)$, the minimum continuous Fr\'echet distance in $\mathbb{R}^3$ can be computed in $O((m+n)^{11} \log (m+n))$ time, and when both translations and rotations are allowed $(f=6)$, the corresponding minimum continuous Fr\'echet distance can be computed in $O((m+n)^{20} \log (m+n))$ time. The situation with the discrete Fr\'echet distance under translation is somewhat better, albeit still inefficient. Jiang et al.~\cite{JXZ08} show that, given two sequences of points in the plane, the minimum discrete Fr\'echet distance between them under translation can be computed in $O(m^3n^3 \log(m + n))$ time. For the case where both rotations and translations are allowed, they give an algorithm that runs in $O(m^4n^4 \log(m + n))$ time. They also design a heuristic method for aligning two sequences of points under translation and rotation in three dimensions. Mosig et al.~\cite{MC05} present an approximation algorithm that computes the discrete Fr\'echet distance under translation, rotation and scaling in the plane, up to a factor close to $2$, and runs in $O(m^2n^2)$ time. \medskip \noindent\textbf{Our results.} Our algorithm improves the bound of Jiang et al.~\cite{JXZ08} by a nearly linear factor, with running time $O(m^3n^2(1+\log(n/m))\log(m+n))$, assuming $m\leq n$. It uses a $0/1$-matrix $M(P,Q)$ of size $m\times n$, whose rows (resp., columns) correspond to the points of $P$ (resp., of $Q$). Assuming a stationary situation, or, rather, a fixed translation of $Q$, an entry in the matrix is equal to 1 if and only if the distance between the two corresponding points is at most $\delta$, where $\delta$ is some fixed distance threshold. We use $(i,j)$ to denote an entry in the matrix that corresponds to the points $p_i$ and $q_j$, and we use $M_{i,j}$ to denote its value. 
The discrete Fr\'echet distance is at most $\delta$ if and only if there is a row- and column-monotone path of ones in $M$ that starts at $(1,1)$ and ends at $(m,n)$ (see Section~\ref{sec:preliminaries} for a more precise definition). We can partition the plane of translations into a subdivision ${\cal A}_\delta$ with $O(m^2n^2)$ regions, so that, for all translations in the same region, the matrix $M$ is fixed (for the fixed $\delta$). We then traverse the regions of ${\cal A}_\delta$, moving at each step from one region to a neighboring one. Assuming general position, in each step of our traversal exactly one entry of $M$ changes from $1$ to $0$ or vice versa. We present a dynamic data structure $\Gamma(M)$ that supports an update of an entry of $M$, in $O(m(1+\log(n/m)))$ time, assuming $m\leq n$,\footnote{This is without loss of generality as we can change the roles of $m$ and $n$ by flipping $M$.} and then re-determines whether there is a monotone path of ones from $(1,1)$ to $(m,n)$, in $O(1)$ additional time. If we find such a monotone path in $M$, we have found a translation $t$ (actually a whole region of translations\footnote{For a critical value of $\delta$, the region can degenerate to a single vertex of ${\cal A}_\delta$; see Sections~\ref{sec:arrangement} and~\ref{sec:optimization} for details.}) such that the discrete Fr\'echet distance between $P$ and $Q+t$ is at most $\delta$. Otherwise, when we traverse the entire ${\cal A}_\delta$ and fail after each update, we conclude that no such translation exists. Using this procedure, combined with the parametric searching technique~\cite{NM83}, we obtain an algorithm for computing the minimum discrete Fr\'echet distance under translation. We reduce the dynamic maintenance of $M$ to dynamic maintenance of reachability in a planar graph, as edges are inserted and deleted to/from the graph. 
Specifically, we can think of (the 1-entries of) $M$ as a representation of a planar directed graph with $N\leq mn$ nodes. Each 1-entry of $M$ corresponds to a node in the graph, and each possible forward move in a joint traversal is represented by an edge (see Section~\ref{sec:preliminaries} for details). Then, determining whether there is a row- and column-monotone path of ones from $(1,1)$ to $(m,n)$ corresponds to a reachability query in the graph (from $(1,1)$ to $(m,n)$). A data structure for dynamic maintenance of reachability in directed planar graphs was given by Subramanian~\cite{Sub93}. This data structure supports updates and reachability queries in $O(N^{2/3}\log N)$ time, where $N$ is the number of nodes in the graph. Diks and Sankowski~\cite{DS07} improved this data structure, and gave a structure that supports updates and reachability queries in $O(N^{1/2}\log^3 N)$ time. We give a simpler and more efficient structure for maintaining reachability in $M$ that exploits its special structure. Our structure can update reachability information in $M$ in $O(m(1+\log(n/m)))$ time, assuming $m\leq n$, and answers a reachability query (from $(1,1)$ to $(m,n)$) in $O(1)$ time. In contrast, the data structure of \cite{DS07} applied in our context performs an update and a query in $O((mn)^{1/2}\log^3 (mn))$ time. Using our structure, we obtain an algorithm for computing the minimum discrete Fr\'echet distance under translation that runs in $O(m^3n^2(1 +\log(n/m))\log(m+n))$ time (again, assuming $m\leq n$). To summarize, the contributions of this paper are twofold: (a) The reduction of the problem of computing the minimum discrete Fr\'echet distance to a dynamic planar directed graph reachability problem. (b) An efficient data structure for this reachability problem. 
For $m\approx n$ our structure is faster than the general reachability structure of \cite{DS07} by a polylogarithmic factor, and when $m\ll n$ the improvement is considerably more significant (roughly by a factor of $\sqrt{n/m}$). Moreover, our data structure is simpler than that of Diks and Sankowski. \section{Preliminaries} \label{sec:preliminaries} We now define the (stationary) discrete Fr\'echet distance formally. Let $P=(p_1,\ldots,p_{m})$ and $Q=(q_1,\ldots,q_{n})$ be the two planar sequences defined in the introduction. For some fixed distance $\delta>0$ we define a $0/1$-matrix $M_{\delta}(P,Q)$ formally as follows. The rows (resp., columns) of $M_\delta(P,Q)$ correspond to the points of $P$ (resp., of $Q$) in their given order. An entry $(i,j)$ of $M_\delta(P,Q)$ is 1 if the distance between $p_i$ and $q_j$ is at most $\delta$, and is 0 otherwise. We denote $M_\delta(P,Q)$ by $M$ when $P$, $Q$, and $\delta$ are clear from the context. The directed graph $\EuScript{G}_\delta(P,Q)$ associated with $P$, $Q$ and $\delta$ has a vertex for each pair $(p_i,q_j)\in P\times Q$ and an edge for each pair of adjacent ones in $M_\delta(P,Q)$. Specifically, we have an edge from $(p_i,q_j)$ to $(p_{i+1},q_j)$ if and only if both $(i,j)$ and $(i+1,j)$ are 1 in $M$, an edge from $(p_i,q_j)$ to $(p_{i},q_{j+1})$ if and only if both $(i,j)$ and $(i,j+1)$ are 1 in $M$, and an edge from $(p_i,q_j)$ to $(p_{i+1},q_{j+1})$ if and only if both $(i,j)$ and $(i+1,j+1)$ are 1 in $M$. We denote $\EuScript{G}_\delta(P,Q)$ by $\EuScript{G}$ when $P$, $Q$, and $\delta$ are clear from the context. The \emph{(stationary) discrete Fr\'echet distance} between $P$ and $Q$, denoted by $\delta^*(P,Q)$, is the smallest $\delta>0$ for which $(p_{m},q_{n})$ is reachable from $(p_1,q_1)$ in $\EuScript{G}_\delta$. 
Informally, think of $P$ and $Q$ as two sequences of stepping stones and of two frogs, the $P$-frog and the $Q$-frog, where the $P$-frog has to visit all the $P$-stones in order and the $Q$-frog has to visit all the $Q$-stones in order. The frogs are connected by a rope of length $\delta$, and are initially placed at $p_1$ and $q_1$, respectively. At each move, either one of the frogs jumps from its current stone to the next one and the other stays at its current stone, or both of them jump simultaneously from their current stones to the next ones. Furthermore, such a jump is allowed only if the distances between the two frogs before and after the jump are both at most $\delta$. Then $\delta^*(P,Q)$ is the smallest $\delta>0$ for which there exists a sequence of jumps that gets the frogs to $p_{m}$ and $q_{n}$, respectively. The problem of computing the minimum discrete Fr\'echet distance under translation, as reviewed in the introduction, is to find a translation $t$ such that $\delta^*(P,Q+t)$ is minimized. We say that an entry $(i,j)$ of $M$ is \emph{reachable} from an entry $(k,l)$, with $k\leq i, l\leq j$, if $(p_i,q_j)$ is reachable from $(p_k,q_l)$ in $\EuScript{G}$. A path from $(p_k,q_l)$ to $(p_i,q_j)$ in $\EuScript{G}$ corresponds to a (weakly) row-monotone and column-monotone sequence of ones in $M$ connecting the one in entry $(k,l)$ to the one in entry $(i,j)$. This sequence consists of three kinds of moves: 1) {\em upward moves} between entries of the form $(r,s)$ to $(r+1,s)$, in which the $P$-frog moves from $p_{r}$ to $p_{r+1}$, 2) {\em right moves} between entries of the form $(r,s)$ to $(r,s+1)$, in which the $Q$-frog moves from $q_{s}$ to $q_{s+1}$, and 3) {\em diagonal moves} between entries of the form $(r,s)$ to $(r+1,s+1)$, in which both frogs move simultaneously --- the $P$-frog from $p_{r}$ to $p_{r+1}$, and the $Q$-frog from $q_{s}$ to $q_{s+1}$. See Figure~\ref{fig:staircase}. 
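The reachability formulation translates directly into a decision procedure for a fixed $\delta$ (again an illustrative sketch with our own names, not the data structure developed later in the paper):

```python
# Decision version: is the discrete Frechet distance of P and Q at most delta?
# Build the 0/1 matrix M and propagate reachability with the three move types
# (upward, right, diagonal).
from math import hypot

def frechet_at_most(P, Q, delta):
    m, n = len(P), len(Q)
    M = [[hypot(P[i][0] - Q[j][0], P[i][1] - Q[j][1]) <= delta
          for j in range(n)] for i in range(m)]
    reach = [[False] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if not M[i][j]:
                continue
            reach[i][j] = (i == 0 and j == 0) \
                or (i > 0 and reach[i - 1][j]) \
                or (j > 0 and reach[i][j - 1]) \
                or (i > 0 and j > 0 and reach[i - 1][j - 1])
    return reach[m - 1][n - 1]
```

For the two parallel chains used earlier, `frechet_at_most` accepts $\delta = 1$ and rejects any smaller threshold.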
We call such a monotone sequence of ones in $M$ a {\em path in $M$} from $(k,l)$ to $(i,j)$. To determine whether $\delta^*(P,Q)\le\delta$, we need to determine whether there is such a path in $M$ that starts at $(1,1)$ and ends at $(m,n)$. We say that an entry $(i,j)$ of $M$ is \emph{reachable} if there is a path from $(1,1)$ to $(i,j)$. We denote the concatenation of two paths $\pi_1,\pi_2$ by $\pi_1\cdot\pi_2$, assuming that the last entry of $\pi_1$ is the first entry of $\pi_2$; this entry appears only once in the concatenation. \begin{figure}[htb] \centering\begin{tabular}{cc} \includegraphics[scale=0.8]{staircase2.pdf} & \hspace{0.5cm} \includegraphics[scale=0.8]{staircase.pdf} \\ (a) & (b) \end{tabular} \centering \caption{\small (a) A reachability path from $(4,3)$ to $(7,7)$. (b) A reachability path from $(1,1)$ to $(8,12)$.} \label{fig:staircase} \end{figure} We use a decomposition of $M$ into \emph{blocks}. A block is a submatrix of $M$ that corresponds to contiguous subsequences $P'$ and $Q'$ of $P$ and $Q$, respectively. We denote by $M(P', Q')$ the block of $M(P,Q)$ formed by $P'$ and $Q'$. Consider a block $M(P', Q')$ of $M(P,Q)$. Let $p^-$ (resp., $p^+$) denote the first (resp., last) point of $P'$ and let $q^-$ (resp., $q^+$) denote the first (resp., last) point of $Q'$. We call the entries of $M(P', Q')$ corresponding to $\{p^-\} \times Q'\cup P' \times \{q^-\}$ the \emph{input boundary} of $M(P', Q')$, and denote it by $M(P', Q')^-$ (the common entry corresponding to $\{p^-\}\times\{q^-\}$ appears only once in the boundary). Similarly, we call the entries of $M(P', Q')$ corresponding to $\{p^+\} \times Q' \cup P' \times \{q^+\}$ the \emph{output boundary} of $M(P', Q')$, and denote it by $M(P', Q')^+$ (with a similar suppression of the duplication of the common element $\{p^+\}\times\{q^+\}$). Note that there is a two-entry overlap between the input and output boundaries. 
We enumerate the entries of $M(P', Q')^-$ by first enumerating the entries of $\{p^-\}\times Q'$ from right to left (i.e., backwards) and then the remaining entries of $P' \times \{q^-\}$ from bottom to top (forward). We enumerate the entries of $M(P', Q')^+$ by first enumerating the entries of $P' \times \{q^+\}$ from bottom to top (forward) and then the remaining entries of $\{p^+\} \times Q'$ from right to left (backwards). Informally, $M(P',Q')^-$ is enumerated in ``clockwise'' order, while $M(P',Q')^+$ is enumerated in ``counterclockwise'' order; see Figure~\ref{fig:block}. For two entries $i,j$ of this enumeration of an input or output boundary $B$ of $M(P', Q')$, we use $[i,j]$ to denote the sequence of entries $(i,i+1,\ldots,j)$ of $B$. \begin{figure}[htb] \centering\includegraphics[scale=0.8]{block.pdf} \vspace{-4.1cm} \hspace{-1.5cm} (a) \hspace{5.2cm} (b) \centering\caption{\small (a) A (highlighted) block $M(P',Q')$ of $M(P,Q)$, where $P'=(p_5,p_6, p_7, p_8)$ and $Q' = (q_7,q_8,\ldots,q_{12})$. (b) The input boundary $M(P',Q')^-$ and the output boundary $M(P',Q')^+$ of $M(P',Q')$ are marked, with the orderings of their elements.} \label{fig:block} \end{figure} We also use the following definitions. We call the entries corresponding to $P' \times \{q^-\}$ the \emph{vertical input boundary} of $M(P',Q')$, and denote it by $\overline{M(P',Q')}\@^-$. We call the entries corresponding to $P' \times \{q^+\}$ the \emph{vertical output boundary} of $M(P',Q')$, and denote it by $\overline{M(P',Q')}\@^+$. That is, $\overline{M(P',Q')}\@^-$ and $\overline{M(P',Q')}\@^+$ are the vertical parts of $M(P',Q')^-$ and $M(P',Q')^+$, respectively. We enumerate the entries of each vertical boundary from bottom to top. \section{The subdivision of the plane of translations} \label{sec:arrangement} We first consider the corresponding decision problem. 
That is, given a value $\delta>0$, we wish to decide whether there exists a translation $t\in \mathbb{R}^2$ such that $\delta^*(P,Q+t)\le\delta$. For a point $x\in \mathbb{R}^2$, let $D_\delta(x)$ be the disk of radius $\delta$ centered at $x$. Given two points $p_i\in P$ and $q_j\in Q$, consider the disk $D_\delta(p_i-q_j)$, and notice that $t\in D_\delta(p_i-q_j)$ if and only if $\|(p_i-q_j)-t\| \leq \delta$ (or $\|p_i-(q_j+t)\| \leq \delta$). That is, $D_\delta(p_i-q_j)$ is precisely the set of translations $t$ for which $q_j+t$ is at distance at most $\delta$ from $p_i$. We construct the arrangement ${\cal A}_\delta={\cal A}_\delta(P,Q)$ of the disks in ${\cal D}=\{D_\delta(p_i-q_j)\mid (p_i,q_j)\in P\times Q\}$. We assume general position of the points. That is, we assume that (a) no more than two boundaries of these disks intersect in a common vertex of ${\cal A}_\delta$, and (b) no pair of the disks is tangent. Nevertheless, such a degeneracy can arise when $\delta$ is a \emph{critical value} (see Section~\ref{sec:optimization} for details about critical values of $\delta$ that arise during the optimization procedure), but we assume that at most one such degeneracy can happen for a given $\delta$. Since the number of disks is $mn$, the combinatorial complexity of ${\cal A}_\delta$ is $O(m^2n^2)$. Let $f$ be a face of ${\cal A}_\delta$ of any dimension $0,1$ or $2$ (by convention, $f$ is assumed to be relatively open), and let $t\in f$ be a translation. Then, for points $p_i\in P, q_j \in Q$, $q_j+t$ is at distance at most $\delta$ from $p_i$ if and only if the disk $D_\delta(p_i-q_j)$ contains $f$ (otherwise, the disk is disjoint from $f$). Since this holds for every $t\in f$, it follows that $f$ corresponds to a unique pairwise-distances matrix $M(P,Q+t)$, for any $t \in f$. We denote this matrix by $M(P, Q+f)$, for short. The setup just described leads to the following naive solution for the decision problem. 
Construct the arrangement ${\cal A}_\delta$ for the given distance $\delta$, and traverse its faces. For each face $f\in {\cal A}_\delta$, form the corresponding pairwise-distances matrix $M(P,Q+f)$, and solve the (stationary) discrete Fr\'echet distance decision problem for $P$ and $Q+f$ using a straightforward dynamic programming on $M(P,Q+f)$ (or the more sophisticated slightly subquadratic algorithm of Agarwal et al.~\cite{ABKS12}). If $\delta^*(P, Q+f) \leq \delta$ for some face $f$, we conclude that there exists a translation $t$ such that $\delta^*(P, Q+t) \leq \delta$ (any translation $t\in f$ would do). If the entire arrangement ${\cal A}_\delta$ is traversed and no face $f$ of ${\cal A}_\delta$ satisfies $\delta^*(P, Q+f) \leq \delta$, we determine that $\delta^*(P, Q+t) > \delta$ for all translations $t\in \mathbb{R}^2$. The complexity of ${\cal A}_\delta$ is $O(m^2n^2)$, and solving the discrete Fr\'echet distance decision problem for each face of ${\cal A}_\delta$ takes $O(mn)$ time (or slightly less, as in~\cite{ABKS12}). Hence, the solution just described for the decision problem takes (slightly less than) $O(m^3n^3)$ time. Jiang et al.~\cite{JXZ08} used an equivalent solution for the decision problem, that takes the same asymptotic running time. Rephrasing their procedure in terms of ${\cal A}_\delta$, they test whether $\delta^*(P,Q+t)\leq \delta$ for translations $t$ corresponding to the vertices of ${\cal A}_\delta$, and for an additional set of $mn$ translations, one chosen from the boundary of each disk. The correctness of this approach follows by observing that if $f$ is a face of ${\cal A}_\delta$ and $t$ is any point on ${\partial}{f}$, then all the $1$-entries of $M(P,Q+f)$ are also $1$-entries of $M(P,Q+t)$, so it suffices to test the vertices of $f$, or, if $f$ has no vertices, to test an arbitrary point $t\in {\partial}{f}$. We will use this observation in our implementation of the optimization procedure. 
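The discretization just described can be prototyped directly: candidate translations are the pairwise intersection points of the circles $\partial D_\delta(p_i-q_j)$ (the vertices of ${\cal A}_\delta$) plus one point on each circle, and each candidate is tested with the stationary decision procedure. The following brute-force sketch (our own names, with no attempt at efficiency) illustrates this:

```python
# Brute-force decision procedure for the translation problem: test the
# stationary decision on one point per circle boundary and on all pairwise
# circle-circle intersection points (the vertices of the arrangement).
from math import hypot, sqrt

def dfd_at_most(P, Q, delta, eps=1e-9):
    m, n = len(P), len(Q)
    reach = [[False] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if hypot(P[i][0] - Q[j][0], P[i][1] - Q[j][1]) > delta + eps:
                continue
            reach[i][j] = (i == 0 and j == 0) \
                or (i > 0 and reach[i - 1][j]) \
                or (j > 0 and reach[i][j - 1]) \
                or (i > 0 and j > 0 and reach[i - 1][j - 1])
    return reach[m - 1][n - 1]

def exists_translation(P, Q, delta):
    centers = [(p[0] - q[0], p[1] - q[1]) for p in P for q in Q]
    cands = [(cx + delta, cy) for cx, cy in centers]  # one point per circle
    for a in range(len(centers)):
        for b in range(a + 1, len(centers)):  # circle-circle intersections
            (x1, y1), (x2, y2) = centers[a], centers[b]
            d = hypot(x2 - x1, y2 - y1)
            if d == 0 or d > 2 * delta:
                continue
            hh = sqrt(max(delta * delta - (d / 2) ** 2, 0.0))
            mx, my = (x1 + x2) / 2, (y1 + y2) / 2
            ux, uy = (y2 - y1) / d, -(x2 - x1) / d
            cands += [(mx + hh * ux, my + hh * uy),
                      (mx - hh * ux, my - hh * uy)]
    return any(dfd_at_most(P, [(qx + tx, qy + ty) for qx, qy in Q], delta)
               for tx, ty in cands)
```

For example, matching $P=((0,0),(2,0))$ against $Q=((0,0),(1,0))$ under translation is possible exactly for $\delta \ge 1/2$, and the sketch reports precisely that.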
Our naive solution is similar to the algorithm of Jiang et al.~\cite{JXZ08}, in the sense that they both discretize the set of possible translations. However, our solution is more suitable for the improvement of this naive bound, that we present in Section~\ref{sec:dynamic}, since it allows us to traverse the set of possible translations in a manner that introduces only a single change in $M(P, Q+f)$, when we move from one face $f$ of translations to a neighboring one. To exploit this property we need a data structure that maintains reachability data for $M$, and updates it efficiently after each change. We present this structure in two stages. First, in Section~\ref{sec:linear_reachability}, we present a compact reachability structure for blocks of $M(P,Q+f)$, which is the main building block of the overall structure. Then, in Section~\ref{sec:dynamic}, we present the overall data structure, and show how to use it to improve the naive solution sketched above by a nearly linear factor. \section{Compact representation of reachability in a block} \label{sec:linear_reachability} Let $B$ be a block of $M=M(P,Q+f)$ of size $r\times c$, and suppose that we have already computed the reachable entries of $B^-$ and we then wish to compute the reachable entries of $B^+$. If the entries of the block are given explicitly, this can be done in $O(r c)$ time using dynamic programming (or slightly faster using the algorithm of~\cite{ABKS12}). Our goal in this section is to design a data structure, that we denote as $\Phi(B)$, that allows us to compute the reachable entries of $B^+$ from the reachable entries of $B^-$, in $O(r+c)$ time. The overall data structure itself is constructed recursively from these block structures (see Section~\ref{sec:dynamic} for details), and implicitly accesses all the entries of $B$. The advantage of using this block decomposition is that updating the structure can be done more efficiently. 
\vspace{-1cm} \begin{figure}[htb] \centering\begin{tabular}{cc} \includegraphics[scale=0.6]{monge3.pdf} & \includegraphics[scale=0.6]{overlapIntervals2.pdf} \\ (a) & (b) \end{tabular} \centering \caption{\small (a) Two entries $i,j$ of $B^-$ and two entries $\sigma(i)$ and $\sigma(j)$ of $B^+$ that are reachable from $i$ and $j$, respectively. Since $i<j$ and $\sigma(i)>\sigma(j)$, $\sigma(j)$ is reachable also from $i$, and $\sigma(i)$ is reachable also from $j$. (b) The intervals $[\sigma_A(k),\sigma_Z(k)]$, for any $1$-entry $k$ of $B^-$, are either disjoint or overlap in a common subinterval. Neither of these intervals can strictly contain both endpoints of the other. These intervals are defined (and shown in the figure) only for $1$-entries of $B^-$.} \label{fig:united_fig} \end{figure} \begin{observation}\label{obs:monge} Let $B$ be a block of $M$ and let $i,j$ be two entries of $B^-$ such that $j>i$ (in the ``clockwise'' order defined in Section~\ref{sec:preliminaries}). Let $\sigma(i)$ be an entry of $B^+$ that is reachable from $i$, and let $\sigma(j)$ be an entry of $B^+$ that is reachable from $j$. If $\sigma(j) <\sigma(i)$ (in the corresponding ``counterclockwise'' order) then $\sigma(j)$ is also reachable from $i$, and $\sigma(i)$ is also reachable from $j$. \end{observation} \begin{proof} See Figure~\ref{fig:united_fig}(a). Since $\sigma(i)$ is reachable from $i$, there is a (monotone) path $\pi(i, \sigma(i))$ from $i$ to $\sigma(i)$ in $M$. Similarly, since $\sigma(j)$ is reachable from $j$, there is a (monotone) path $\pi(j, \sigma(j))$ from $j$ to $\sigma(j)$. Since $i<j$ and $\sigma(j) <\sigma(i)$, $\pi(i, \sigma(i))$ must cross $\pi(j, \sigma(j))$ (i.e., there exists a 1-entry $e \in \pi(i, \sigma(i)) \cap \pi(j, \sigma(j))$). 
Hence, $\pi(i, \sigma(i))$ can be decomposed into two subpaths $\pi(i,e), \pi(e, \sigma(i))$ such that $\pi(i, \sigma(i))= \pi(i,e)\cdot \pi(e, \sigma(i))$, and $\pi(j,\sigma(j))$ can be similarly decomposed as $\pi(j, \sigma(j))= \pi(j,e)\cdot \pi(e, \sigma(j))$. As a result, the paths $\pi(i, \sigma(j)) = \pi(i,e) \cdot \pi(e, \sigma(j))$ and $\pi(j, \sigma(i)) = \pi(j,e) \cdot \pi(e, \sigma(i))$ are also (monotone) paths, and the claim follows. \end{proof} \begin{corollary}\label{cor:reachable_interval} Let $B$ be a block of $M$, let $i$ be an entry of $B^-$ and let $\sigma_1(i),\sigma_2(i)$ be two entries in $B^+$ that are both reachable from $i$, with $\sigma_1(i)<\sigma_2(i)$. If there exists an entry $\sigma(j)$ that is reachable from some $j\in B^-$, such that $\sigma_1(i)<\sigma(j)<\sigma_2(i)$, then $\sigma(j)$ is also reachable from $i$. \end{corollary} \begin{proof} By Observation~\ref{obs:monge}, if $i<j$, then $\sigma(j)$ is reachable from $i$ since $\sigma(j)<\sigma_2(i)$. If $i>j$, then $\sigma(j)$ is reachable from $i$ since $\sigma_1(i)<\sigma(j)$. \end{proof} The corollary is applied as follows. Let $i$ be an entry of $B^-$, let $\sigma_A(i)$ and $\sigma_Z(i)$ denote the first and last entries in $B^+$ that are reachable from $i$. (Note that for these entries to be defined, the value of the entry $i$ must be $1$. Symmetrically, the values of both $\sigma_A(i)$ and $\sigma_Z(i)$, if defined, must be equal to $1$.) Then the interval $[\sigma_A(i), \sigma_Z(i)]$ can only contain entries of the following three types. \begin{enumerate} \item 1-entries that are reachable from $i$. \item 0-entries. \item 1-entries that are not reachable from $i$, nor from any other entry of $B^-$. \end{enumerate} In other words, $[\sigma_A(i), \sigma_Z(i)]$ cannot contain $1$-entries that are reachable from some $j$ in $B^-$ and not from $i$. 
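The interval structure just described is easy to stress-test by brute force. The sketch below (our own code and indexing: row $0$ is $p^-$, column $0$ is $q^-$) computes, for every $1$-entry of the input boundary of a random block, the first and last reachable output-boundary entries, and checks that both sequences are non-decreasing along the input boundary, so the intervals are never properly nested:

```python
# Brute-force check of the non-nesting property of the intervals
# [sigma_A(i), sigma_Z(i)] on random 0/1 blocks.
import random

def boundaries(r, c):
    # Input boundary: row p^- right-to-left, then column q^- bottom-to-top.
    b_in = [(0, j) for j in range(c - 1, -1, -1)] + [(i, 0) for i in range(1, r)]
    # Output boundary: column q^+ bottom-to-top, then row p^+ right-to-left.
    b_out = [(i, c - 1) for i in range(r)] + [(r - 1, j) for j in range(c - 2, -1, -1)]
    return b_in, b_out

def reach_from(B, si, sj):
    # Monotone reachability inside the block via upward/right/diagonal moves.
    r, c = len(B), len(B[0])
    R = [[False] * c for _ in range(r)]
    if not B[si][sj]:
        return R
    R[si][sj] = True
    for i in range(si, r):
        for j in range(sj, c):
            if B[i][j] and not R[i][j]:
                R[i][j] = (i > si and R[i - 1][j]) \
                    or (j > sj and R[i][j - 1]) \
                    or (i > si and j > sj and R[i - 1][j - 1])
    return R

random.seed(1)
for _ in range(300):
    r, c = random.randint(2, 6), random.randint(2, 6)
    B = [[random.random() < 0.6 for _ in range(c)] for _ in range(r)]
    b_in, b_out = boundaries(r, c)
    sA, sZ = [], []
    for (i, j) in b_in:
        R = reach_from(B, i, j)
        hits = [k for k, (a, b) in enumerate(b_out) if R[a][b]]
        if hits:
            sA.append(min(hits))
            sZ.append(max(hits))
    assert sA == sorted(sA) and sZ == sorted(sZ)
```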
\begin{corollary}\label{cor:overlapping_intervals} Let $B$ be a block of $M$ and let $i$ and $j$ be two entries of $B^-$ such that $j>i$. Then $\sigma_A(j)\geq \sigma_A(i)$ and $\sigma_Z(j)\geq \sigma_Z(i)$. \end{corollary} \begin{proof} Assume to the contrary that $\sigma_A(i)>\sigma_A(j)$. Then, according to Observation~\ref{obs:monge}, $\sigma_A(j)$ is reachable from $i$. Hence, $\sigma_A(i) \leq \sigma_A(j)$, a contradiction. Similarly, if $\sigma_Z(j)<\sigma_Z(i)$, then Observation~\ref{obs:monge} implies that $\sigma_Z(i)$ is reachable from $j$. Hence, $\sigma_Z(j) \geq \sigma_Z(i)$, a contradiction. \end{proof} In other words, $[\sigma_A(i),\sigma_Z(i)]$ and $[\sigma_A(j),\sigma_Z(j)]$ can be either disjoint or overlap in a common subinterval, but they cannot be properly nested inside one another (that is, neither of these intervals can contain both endpoints of the other in its ``interior''). Note, however, that one interval can weakly contain the other. That is, if one interval contains the other, then either $\sigma_A(i)=\sigma_A(j)$ or $\sigma_Z(i)=\sigma_Z(j)$, or both. See Figure~\ref{fig:united_fig}(b). Let $B$ be a block of $M$ of size $r\times c$. We construct a data structure $\Phi(B)$ for $B$, which stores the following information. (Here we only specify the structure; its construction is detailed in Section~\ref{sec:dynamic}.) \begin{enumerate} \item\label{enum:B^-} For each $1$-entry $i$ of $B^-$ we store \begin{enumerate} \item the first entry $\sigma_A(i)$ of $B^+$ that is reachable from $i$, and \item the last entry $\sigma_Z(i)$ of $B^+$ that is reachable from $i$. \end{enumerate} \item \label{enum:B^+} For each $1$-entry $j$ of $B^+$ we store \begin{enumerate} \item a flag $f(j)$ indicating whether $j$ is reachable from some entry of $B^-$, \item a list $L_A(j)$ of the $1$-entries $i\in B^-$ such that $\sigma_A(i)= j$, and \item a list $L_Z(j)$ of the $1$-entries $i\in B^-$ such that $\sigma_Z(i)= j$. 
\end{enumerate} \end{enumerate} \begin{lemma}\label{lem:linear} Given the data structure $\Phi(B)$ for a block $B$, and given the entries of $B^-$ that are reachable from $(1,1)$, we can determine, in $O(r+c)$ time, the entries of $B^+$ that are reachable from $(1,1)$. \end{lemma} \begin{proof} We go over the reachable $1$-entries of $B^-$ in order. For each such entry $i$, we go over the entries in the interval $I(i)=[\max\{\sigma_Z(i^-),\sigma_A(i)\}, \sigma_Z(i)]$ of $B^+$, where $i^-$ is the previous reachable $1$-entry of $B^-$ (for the first reachable entry $i$ of $B^-$, $I(i)=[\sigma_A(i),\sigma_Z(i)]$ and $i^-$ is undefined). Note that, by Corollary~\ref{cor:overlapping_intervals}, $\max\{\sigma_Z(i^-),\sigma_A(i)\}\in [\sigma_A(i), \sigma_Z(i)]$, so $I(i)\subseteq[\sigma_A(i), \sigma_Z(i)]$. (The entries of $[\sigma_A(i), \sigma_Z(i)]$ that precede $\max\{\sigma_Z(i^-),\sigma_A(i)\}$ were already processed when we went over $I(i^-)$ or over intervals associated with earlier indices.) For each $1$-entry $j$ of $I(i)$ that is reachable from some entry of $B^-$ (according to the flag $f(j)$), we determine that $j$ is also reachable from $(1,1)$. Since we traverse each interval $[\sigma_A(i), \sigma_Z(i)]$ starting from $\max\{\sigma_Z(i^-),\sigma_A(i)\}$, the internal portions of the subintervals that we inspect are pairwise disjoint, implying that the running time is linear in $r+c$. We omit the straightforward proof of correctness of this procedure. \end{proof} \section{Dynamic maintenance of reachability in $M(P,Q+f)$} \label{sec:dynamic} We present a data structure that uses the compact representation of reachability in a block from the previous section to support an update of a single entry of $M$ in $O(m(1+\log(n/m)))$ time, assuming $m\leq n$. We present this data structure in two stages. First, in Section~\ref{sec:data}, we show how to support an update of a single entry in $O(m)$ time, in the case where $M$ is a square matrix of size $m\times m$.
Then, in Section~\ref{sec:improved}, we generalize this data structure to support an update of a single entry in $O(m(1+\log(n/m)))$ time, in the general case where $M$ is an $m\times n$ matrix with $m\leq n$ (the case $m\geq n$ is treated in a fully symmetric manner). In Section~\ref{sec:overall}, we describe the overall decision procedure that improves the naive solution sketched in Section~\ref{sec:arrangement}, using this dynamic data structure. \subsection{A dynamic data structure for reachability maintenance in a square matrix} \label{sec:data} \begin{figure}[htb] \vspace{-4cm} \centering\begin{tabular}{cc} \hspace{-1.2cm}\includegraphics[scale=0.6]{united_blocks.pdf} & \includegraphics[scale=0.6]{united_blocks3.pdf} \\ (a) & \hspace{2cm}(b) \end{tabular} \centering \caption{\small A block $B_y=M(P_y,Q_y)$, corresponding to a node $y$ of $\Gamma$, is composed of the blocks of the children $v,w$ of $y$. The block $B_v=M(P_v,Q_v)$ corresponding to the left child $v$ lies below the block $B_w=M(P_w,Q_w)$ that corresponds to the right child $w$, and we have $P_y=P_v\cup P_w$ and $Q_y=Q_v=Q_w$. \\(a) $\sigma^{y}(i)$ and $\sigma^{y}(k)$ are examples of reachable entries of $B_y^+$ and we have $\sigma^{y}(i)=\sigma^{w}(\sigma^{v}(i))$ and $\sigma^{y}(k)=\sigma^{v}(k)$. $\sigma^{w}(j)$ is an entry of $B_w^+$ that is reachable from $B_w^-$, but it is not a reachable entry of $B_y^+$ (from $B_y^-$) since all the paths in $B_y$ that lead to $\sigma^{w}(j)$ go through entries of $B_v^+$ that are not reachable from $B_v^-$. \\(b) $\sigma_A^{y}(\ell)=\sigma_A^{w}(\sigma_A^{v}(\ell))$ and $\sigma_Z^{y}(\ell)=\sigma_Z^{w}(\sigma_Z^{v}(\ell))$. } \label{fig:united_blocks} \end{figure} We store the reachability data of $M(P,Q+f)$ (of some arbitrary face $f$ from which we start the traversal of the arrangement ${\cal A}_\delta$) in a so-called \emph{decomposition tree} $\Gamma$, by halving $P$ and $Q$ alternately. 
That is, the root $v$ of $\Gamma$ corresponds to the entire matrix $M(P,Q+f)$ and we store at $v$ the reachability information $\Phi(M(P,Q+f))$, as described in the previous section. (The actual construction of the reachability data, at all nodes of $\Gamma$, is done bottom-up, as described below.) In the next level of $\Gamma$ we partition $P$ into two subsequences $P_1,P_2$, of at most $\lfloor m/2\rfloor+1$ points each, such that the last point of $P_1$ is the first point of $P_2$, and obtain a corresponding ``horizontal'' partition of $M(P,Q+f)$ into two blocks $M(P_1,Q+f)$, $M(P_2,Q+f)$, each of size at most $(\lfloor m/2\rfloor+1) \times m$, with a common ``horizontal'' boundary. We create two children $v_1, v_2$ of $v$ and store at each $v_i$ the reachability information $\Phi(M(P_i,Q+f))$, for $i=1,2$. In the next level of $\Gamma$, we partition $Q$ into two subsequences $Q_1,Q_2$, of at most $\lfloor m/2\rfloor+1$ points each, such that the last point of $Q_1$ is the first point of $Q_2$, and obtain a corresponding ``vertical'' partition of each block $M(P_i,Q+f), i\in\{1,2\}$, into two blocks $M(P_i,Q_j+f), j\in\{1,2\}$, each of size at most $(\lfloor m/2\rfloor+1) \times (\lfloor m/2\rfloor+1)$, with a common vertical boundary. We construct four respective grandchildren, and store the corresponding reachability structures $\Phi(M(P_i,Q_j+f))$ at these nodes. We continue recursively to partition each block by halving it horizontally or vertically, alternately, in the same manner, until we reach blocks of size $2\times 2$. For each node $v$ of $\Gamma$, let $P_v$ and $Q_v$ denote the subsequences of $P$ and $Q$ that form the block $M(P_v, Q_v+f)$ that is associated with $v$. To simplify the notation, we denote $\Phi(M(P_v,Q_v+f))$ as $\Phi_v$, for each node $v$. The reachability data $\Phi_v$ at the nodes $v$ of $\Gamma$ is computed by a bottom-up traversal of $\Gamma$, starting from the leaves. 
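The alternating halving can be sketched as follows (hypothetical Python; index ranges stand in for the actual point sequences, and consecutive halves share one boundary index, mirroring the shared row or column of sibling blocks). As a small liberty of the sketch, once one dimension has size at most $2$ it keeps halving the other.

```python
def build_tree(rows, cols, split_rows=True):
    """Recursively partition a block into a decomposition tree, halving the
    row range and the column range alternately, down to blocks of size <= 2x2.
    Consecutive halves overlap in one index (the shared boundary)."""
    m, n = len(rows), len(cols)
    if m <= 2 and n <= 2:                 # leaf block
        return {"rows": rows, "cols": cols, "children": []}
    if (split_rows and m > 2) or n <= 2:  # horizontal split
        mid = m // 2
        parts = [(rows[:mid + 1], cols), (rows[mid:], cols)]  # share one row
    else:                                 # vertical split
        mid = n // 2
        parts = [(rows, cols[:mid + 1]), (rows, cols[mid:])]  # share one column
    return {"rows": rows, "cols": cols,
            "children": [build_tree(r, c, not split_rows) for (r, c) in parts]}
```

Each half has size at most $\lfloor m/2\rfloor+1$ (resp., $\lfloor n/2\rfloor+1$), as in the construction above.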
The construction of $\Phi(M(P_v,Q_v+f))$ at a leaf $v$ is trivial, and takes constant time. The following lemma provides an efficient procedure for constructing the reachability data at inner nodes of $\Gamma$. \begin{lemma}\label{lem:union} Let $y$ be an inner node of $\Gamma$ with left and right children $v$ and $w$, where the blocks stored at $v,w$ have a common horizontal boundary. Given the reachability data $\Phi_v, \Phi_w$, the data $\Phi_y$ can be computed in $O(|P_y|+|Q_y|)$ time. An analogous statement holds when the common boundary of the children blocks is vertical. \end{lemma} \begin{proof} Note that in the setup of the lemma, we have $Q_y=Q_v=Q_w$ and $P_y=P_v\cup P_w$. By construction, $M(P_v,Q_y)$ lies below $M(P_w,Q_y)$. Denote $M(P_v,Q_y)$ by $B_v$, $M(P_w,Q_y)$ by $B_w$, and $M(P_y,Q_y)$ by $B_y$. For each entry $i$ of $B_y^-$, denote by $\sigma_A^y(i)$ (resp., $\sigma_Z^y(i)$) the first (resp., last) entry of $B_y^+$ that is reachable from $i$. We also use $\sigma^y(i)$ to denote an entry of $B_y^+$ that is reachable from $i$ in $B_y$. Analogous notations are used for the children blocks $B_v, B_w$. See Figure~\ref{fig:united_blocks}. We first copy the reachability information from the boundaries of $B_v$ and $B_w$ to the boundary of $B_y$ (except for the ``interior'' portion $B^*_{vw}$ of the common boundary $B_v^+ \cap B_w^-$ of $B_v$ and $B_w$, which is not a boundary of $B_y$). The data for the $1$-entries on the left boundary of $B_w$ (which are of type \ref{enum:B^-} in the definition of $\Phi$) is still valid, since the reachability paths of $B_y$ that start at these entries are fully contained in $B_w$. Similarly, the data for the $1$-entries on the right boundary of $B_v$ (which are of type \ref{enum:B^+}) is still valid, since the reachability paths of $B_y$ that end at these entries are fully contained in $B_v$. 
We thus need to determine the reachability information from the $1$-entries of the input boundary $B_v^-$ of $B_v$ to the entries of the output boundary $B_w^+$ of $B_w$, and merge it with the already available data, to get the complete structure $\Phi$ at $y$. First note that an entry $j$ of $B_w^+$ that is reachable from $B_w^-$ may now become unreachable from $B_y^-$. This happens if all the reachability paths in $B_y$ to $j$ go through entries on $B^*_{vw}$ that are not reachable from $B_v^-$. See Figure~\ref{fig:united_blocks}(a). We thus need to turn the flag $f(j)$ of such entries to false. To do this, we go over the entries of $B_w^+$ in order, and maintain a queue ${\cal Q}$ that satisfies the invariant that, when we are at an entry $j$ of $B_w^+$, ${\cal Q}$ contains all the entries $i$ of $B_w^-$ that are reachable from $B_y^-$, such that $j$ is reachable from $i$. That is, ${\cal Q}$ contains all the entries $i \in B^*_{vw}$ that are reachable from $B_v^-$ such that $j$ is reachable from $i$, and all the entries $i\in B_w^-\setminus B_{vw}^*$ (that is, the left side of $B_w^-$) such that $j$ is reachable from $i$. We start with an empty queue. For each $1$-entry $j$ of $B_w^+$ we first go over the list $L_A(j)$ (of $\Phi_w$), and for each element $i$ in $L_A(j)$ that is in $B^*_{vw}$, we check if it is reachable from $B_v^-$ (using the flag $f(i)$ from $\Phi_v$). If it is, we put it in ${\cal Q}$. We also add to ${\cal Q}$ each element in $L_A(j)$ that is in $B_w^-\setminus B_{vw}^*$. If ${\cal Q}$ is empty, there is no reachability path from $B_y^-$ to $j$ and we set $f(j)$ to be false. We then go over the list $L_Z(j)$ (of $\Phi_w$) and remove from ${\cal Q}$ each element in $L_Z(j)$ that is in ${\cal Q}$. This traversal takes $O(|P_y|+|Q_y|)$ time, since each element of $B_w^-$ appears at most once in the lists $L_A$ and at most once in the lists $L_Z$. 
The correctness follows from the invariant that when we go over an entry $j\in B_w^+$, all the entries of $B_w^-$ that $j$ is reachable from, and that are reachable from $B_y^-$, are in ${\cal Q}$. The invariant is maintained correctly because each time that an interval $[\sigma_A(i),\sigma_Z(i)]$ of an entry $i\in B_w^-$ begins (and $i$ is reachable from $B_y^-$), $i$ is inserted into ${\cal Q}$, and when the interval ends, $i$ is removed from ${\cal Q}$, so $i$ is in ${\cal Q}$ for all entries $j$ that are reachable from $i$. In conclusion, if ${\cal Q}$ is empty, $j$ is not reachable from $B_y^-$ and the flag $f(j)$ can be turned false. Otherwise, $j$ is reachable from $B_y^-$. We now update the intervals $[\sigma_A(i),\sigma_Z(i)]$ of the entries $i\in B_v^-$ and, in correspondence, the lists $L_A(\sigma(i)), L_Z(\sigma(i))$ of $B_w^+$ (where $\sigma(i)$ is any entry of $B_w^+$ that is reachable from $i$). Consider a $1$-entry $i$ of $B_v^-$ and consider an entry $\sigma^v(i)$ in $B^*_{vw}$; that is, $\sigma^v(i)$ is a 1-entry in $[\sigma_A^v(i),\sigma_Z^v(i)]$ that is reachable from $i$. By transitivity, the entries $\sigma^w(\sigma^v(i))$ of $B_w^+$ that are reachable from $\sigma^v(i)$ are also reachable from $i$. We update $[\sigma_A(i),\sigma_Z(i)]$ according to this rule, as follows (see Figure~\ref{fig:united_blocks}(b)). We set $\sigma_A^{y}(i)=\sigma_A^{w}(\sigma_A^{v}(i))$, for each entry $i\in B_v^-$ such that $\sigma_A^v(i)\in B^*_{vw}$; correspondingly, we also add $i$ to $L_A(\sigma_A^{y}(i))$. Similarly, for each entry $i\in B_v^-$ such that $\sigma^v_Z(i)\in B^*_{vw}$, we set $\sigma_Z^{y}(i)=\sigma_Z^{w}(\sigma_Z^{v}(i))$ and we add $i$ to $L_Z(\sigma_Z^{y}(i))$. (Recall that if $\sigma_A^v(i)$ (or $\sigma_Z^v(i)$) is in $B_v^+\setminus B^*_{vw}$, this reachability information was already copied to $\Phi_y$ and that the reachability information for $B_w^-\setminus B_{vw}^*$ was also copied to $\Phi_y$.)
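As an aside, the flag-update sweep over $B_w^+$ can be sketched in Python (a toy model under our own naming: entries are integers in boundary order, out_entries lists the $1$-entries of $B_w^+$, and reachable_from_below abstracts the test of whether an entry of $B_w^-$ is reachable from $B_y^-$, i.e., the flag lookup in $\Phi_v$ for entries of $B^*_{vw}$, or membership in the left part of $B_w^-$):

```python
def update_flags(out_entries, L_A, L_Z, reachable_from_below):
    """Recompute the flags f(j) of the 1-entries j of B_w^+: j stays flagged
    only if some interval [sigma_A(i), sigma_Z(i)] covering j starts at an
    entry i of B_w^- that is itself reachable from B_y^-."""
    f = {}
    active = set()                        # the queue Q of the text
    for j in out_entries:                 # 1-entries of B_w^+, in order
        for i in L_A.get(j, []):          # intervals that begin at j
            if reachable_from_below(i):
                active.add(i)
        f[j] = bool(active)               # empty Q: no path from B_y^- to j
        for i in L_Z.get(j, []):          # intervals that end at j
            active.discard(i)
    return f
```

Each entry of $B_w^-$ appears at most once in the lists $L_A$ and once in $L_Z$, which is what bounds the sweep by $O(|P_y|+|Q_y|)$.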
Clearly, for each entry $i\in B_y^-$, no entry of $B_y^+\setminus [\sigma_A^{y}(i),\sigma_Z^{y}(i)]$ is reachable from $i$. This traversal takes $O(|P_y|+|Q_y|)$ time. Finally, when we copied information from $B_w^+$ to $B_y^+$, we also copied the lists $L_A$ and $L_Z$ that may include entries of $B^*_{vw}$. Since $B^*_{vw}$ is not a part of the boundary of $B_y$, we need to remove this information from the lists $L_A$ and $L_Z$ of $B_y^+$. We thus go over the entries of $B^*_{vw}$. For each entry $e$ of $B^*_{vw}$, we remove $e$ from $L_A(\sigma^w_A(e))$ and from $L_Z(\sigma^w_Z(e))$. Clearly, this traversal takes $O(|Q_y|)$ time. \end{proof} We now show how to use Lemma~\ref{lem:union} to construct $\Gamma$ in $O(m^2)$ time and to update it, when a single entry changes, in $O(m)$ time. We also show how to determine, using $\Gamma$, whether $(m,m)$ is reachable from $(1,1)$ in constant time after the update. \begin{lemma} \label{lem:constructGamma} (a) Given a square matrix $M$, the decomposition tree $\Gamma$ (including the reachability data at its nodes) can be constructed from scratch in $O(m^2)$ time. (b) If a single entry of $M$ is updated, then $\Gamma$ can be updated in $O(m)$ time. (c) Given $\Gamma$, we can determine whether $(m,m)$ is reachable from $(1,1)$ in constant time. \end{lemma} \begin{proof} (a) We construct $\Gamma$ in a bottom-up manner, as prescribed in Lemma~\ref{lem:union}. For the blocks at the leaves, the reachability data is computed by brute force, in $O(1)$ time per block, and at each inner node $y$, the data is computed from the data at its children in time $O(|P_y|+|Q_y|)$, using Lemma~\ref{lem:union}; we refer to $|P_y|+|Q_y|$ as the \emph{size} of the block $B_y$ at $y$. The sizes of the blocks at levels $2j-1$ and $2j$ are $O\left(\dfrac{m}{2^j}\right)$, and the number of these blocks is $O(2^{2j})$. The height of $\Gamma$ is $\lceil \log m \rceil$.
The cost of the overall construction of $\Gamma$ is proportional to the sum of the sizes of its blocks (this also holds at the leaf level), which is thus \begin{equation*} O\left(\sum_{j=0}^{\lceil \log m\rceil} 2^{2j}\cdot\dfrac{m}{2^j}\right)=O\left(m\sum_{j=0}^{\lceil \log m\rceil} 2^{j}\right)=O\left(m^2\right). \end{equation*} \noindent(b) The main observation here is that to update $\Gamma$ when an entry $e$ of $M$ changes, it suffices to update the reachability data along the single path of $\Gamma$ of those nodes $y$ for which $e\in B_y$. (Actually, because of the overlap between block boundaries, there are two such paths that meet at the unique node $y$ for which $e$ belongs to the ``interior'' of the common boundary of the blocks of its children.) The reachability data of the nodes along this path is constructed again from scratch in a bottom-up manner, using Lemma~\ref{lem:union}. The cost of the updates of these blocks is proportional to the sum of their sizes, which is \begin{equation*} O\left(\sum_{j=0}^{\lceil \log m\rceil} \dfrac{m}{2^j}\right)=O(m). \end{equation*} \noindent(c) To determine whether $(m,m)$ is reachable from $(1,1)$, we simply check in the reachability data structure $\Phi(M)$ of the root of $\Gamma$ whether $(m,m)$ is a $1$-entry that belongs to $[\sigma_A((1,1)), \sigma_Z((1,1))]$ and the flag $f((m,m))$ is true. \end{proof} \subsection{A generalized structure for arbitrary matrices} \label{sec:improved} We next describe a modified variant of the structure for the case where $m$ and $n$ are unequal. In what follows we assume, as above and without loss of generality, that $m\le n$. We first partition $M$ into $k=O( n/m)$ square blocks $B_1,B_2,\ldots,B_k$, of size $m\times m$ each such that consecutive blocks overlap in a single column. (The last block may be of smaller width, but we handle it in the same manner as the other blocks; it is easy to show that the bounds of Lemma~\ref{lem:constructGamma} still hold.) 
We build the decomposition tree and the associated reachability data for each of these blocks, as in Section~\ref{sec:data}; denote the structure for block $B_i$ by $\bar{\Gamma}_i$, for $i=1,\ldots,k$. We now combine the structures $\bar{\Gamma}_1,\ldots,\bar{\Gamma}_k$ into a single global structure $\bar{\Gamma}$. For this, we construct a balanced binary tree $T$, with $k$ leaves $v_1,\ldots,v_k$, where $v_i$, for $i=1,\ldots,k$, corresponds to $B_i$ and stores $\bar{\Gamma}_i$. Each node $v$ of $T$ represents a block $B_v$ that is the concatenation of the blocks stored at the leaves of the subtree rooted at $v$. Since each leaf block spans all the rows of $M$, the common boundary of any pair of consecutive blocks consists only of a full single column of $M$. The same holds at any node $y$ of $T$, with left child $v$ and right-child $w$. That is, the common boundary $B_{vw} := B_v^+\cap B_w^-$ between $B_v$ and $B_w$ is vertical, and consists of a full single column of $M$. We claim that we can merge the reachability structures $\Phi_v$ of $B_v$ and $\Phi_w$ of $B_w$ into the structure $\Phi_y$ of $B_y$ in $O(m)$ time, instead of $O(|B_y|)=O(m+|Q_y|)$ time (as was the cost in the preceding subsection), which can be much larger. The main observation that facilitates this improvement is that there is no need to maintain the reachability data $\Phi_v$ at the horizontal portions of the boundary of any of the blocks $B_v$. This follows from the obvious property that any path $\pi$ from the initial entry $(1,1)$ to any entry $(i,j)$ in any leaf block reaches $(i,j)$ by crossing all the vertical boundaries $B_{12},B_{23},\ldots$ that delimit all the preceding leaf blocks, and the portion $\pi_l$ of $\pi$ within each of the preceding blocks $B_l$ connects an entry on the left vertical boundary of $B_l$ to an entry on its right vertical boundary. 
Note that $\pi_l$ can ``crawl'' along the lower or upper boundary of $B_l$, but to exit $B_l$ it has to cross the vertical boundary, possibly through its entries in row $1$ or row $m$. Figure~\ref{fig:reachable} illustrates an inner block $B_y$ of $\bar{\Gamma}$ that is composed of a left block $B_v$ and a right block $B_w$. \begin{figure}[htb] \centering\includegraphics[scale=0.6]{vertical_blocks.pdf} \vspace{-2.5cm} \centering \caption{\small A block $B_y=M(P_y,Q_y)$, corresponding to a node $y$ of $\bar{\Gamma}$, is composed of the blocks $B_v=M(P_v,Q_v)$ and $B_w=M(P_w,Q_w)$ of the children $v,w$ of $y$, with $v$ being the left child and $w$ being the right child. We have $P_y=P_v= P_w=P$ and $Q_y=Q_v \cup Q_w$. } \label{fig:reachable} \end{figure} We therefore use the same reachability data structure $\Phi_v$ at $v$ as defined in the previous subsection, except that we limit the input and output domains of its maps to the vertical boundaries only. Recall our notation from Section~\ref{sec:preliminaries}, where the left (resp., right) vertical boundary of a block $B$ is denoted as $\bar{B}^-$ (resp., $\bar{B}^+$). Specifically, denoting the modified structure as $\bar{\Phi}_v=\bar{\Phi}(B_v)$, it stores the following items. \begin{enumerate} \item For each $1$-entry $i$ of $\bar{B}_v^-$ we store \begin{enumerate} \item the first entry $\bar{\sigma}_A(i)$ of $\bar{B}_v^+$ that is reachable from $i$, and \item the last entry $\bar{\sigma}_Z(i)$ of $\bar{B}_v^+$ that is reachable from $i$. \end{enumerate} \item For each $1$-entry $j$ of $\bar{B}_v^+$ we store \begin{enumerate} \item a flag $\bar{f}(j)$ indicating whether $j$ is reachable from some entry of $\bar{B}_v^-$. \item a list $\bar{L}_A(j)$ of the $1$-entries $i\in \bar{B}_v^-$ such that $\bar{\sigma}_A(i)= j$, and \item a list $\bar{L}_Z(j)$ of the $1$-entries $i\in \bar{B}_v^-$ such that $\bar{\sigma}_Z(i)= j$.
\end{enumerate} \end{enumerate} In other words, $\bar{\Phi}(B_v)$ is a constrained variant of $\Phi(B_v)$, obtained by replacing $B_v^-$ and $B_v^+$ by $\bar{B}_v^-$ and $\bar{B}_v^+$, respectively. The structure $\bar{\Phi}_v$ of the root $v$ of a child $\bar{\Gamma}_i$ of $\bar{\Gamma}$ is obtained from $\Phi_v$ by first setting, for each entry $i$ of $\bar{B}_v^-$ for which $\sigma_A(i)$ is in $\bar{B}_v^+$ and $\sigma_Z(i)$ is not in $\bar{B}_v^+$, $\sigma_Z(i)$ to be the last reachable entry $k$ of $\bar{B}_v^+$, then updating $L_Z(k)$ accordingly, and finally ignoring the horizontal parts of the boundaries of $B_v$ and deleting the data regarding them from the lists $L_A$ and $L_Z$ of entries of $\bar{B}_v^+$. We next claim that the modified structures $\bar{\Phi}_y$ are sufficient for obtaining reachability data for the blocks of $\bar{\Gamma}$, in the precise sense stated below, and that the structure $\bar{\Phi}_y$ at an inner node $y$ of $\bar{\Gamma}$ can be obtained from the structures at the children of $y$ in $O(m)$ time. Concretely, we have the following variants of Lemmas~\ref{lem:linear} and~\ref{lem:union}. \begin{lemma}\label{lem:linear2} Given the data structure $\bar{\Phi}(B)$ for a block $B$ of size $r\times c$, and given the entries of $\bar{B}^-$ that are reachable from $(1,1)$, we can determine, in $O(r)$ time, the entries of $\bar{B}^+$ that are reachable from $(1,1)$. \end{lemma} \begin{lemma} \label{cor:Gamma+2} Let $y$ be an inner node of $\bar{\Gamma}$ with left and right children $v$ and $w$. Given the reachability data $\bar{\Phi}(M(P_v,Q_v)), \bar{\Phi}(M(P_w,Q_w))$, the data $\bar{\Phi}(M(P_y,Q_y))$ can be computed in $O(m)$ time. \end{lemma} The proof is essentially identical to those of Lemma~\ref{lem:linear} and Lemma~\ref{lem:union}, except that we restrict the domains and the images of each of the maps (i.e., $\sigma_A,\sigma_Z,f,L_A$, and $L_Z$) to the vertical portions of the boundaries.
This is justified using the observation made earlier that all the reachability paths traverse only vertical boundaries of the relevant blocks --- those that are stored at $\bar{\Gamma}$, from its leaves up, which span the entire range $P$ of rows of $M$. Since we only traverse vertical boundaries, the cost of constructing $\bar{\Phi}_y$ from $\bar{\Phi}_v$ and $\bar{\Phi}_w$ is $O(m)$. $\Box$ The following lemma extends Lemma \ref{lem:constructGamma}. \begin{lemma} \label{lem:constructGamma2} (a) Given the matrix $M$, $\bar{\Gamma}$ can be constructed in $O(mn)$ time. (b) If a single entry of $M$ is updated, then $\bar{\Gamma}$ can be updated in $O(m(1+\log(n/m)))$ time, assuming $m\leq n$. (c) Given $\bar{\Gamma}$, we can determine whether $(m,n)$ is reachable from $(1,1)$ in constant time. \end{lemma} \begin{proof} (a) We construct the structure $\bar{\Gamma}_i$ for each block $B_i$, and extract $\bar{\Phi}_{v_i}$ from it. We then construct $\bar{\Phi}_{y}$ for each inner node $y\in T$ by merging the corresponding data structures of the children of $y$ in $O(m)$ time. We obtain $\bar{\Gamma}$ at the root of $T$. Since $T$ is of size $O(n/m)$ and we spend $O(m)$ time at each block, it takes $O(n)$ time to construct $\bar{\Gamma}$ from the leaf structures $\bar{\Gamma}_i$ for $i=1,\ldots,k$. The cost of constructing each $\bar{\Gamma}_i$, $i=1,\ldots,k=\lceil n/m\rceil$, is $O(m^2)$, by Lemma~\ref{lem:constructGamma}, for a total of $O(km^2)=O(mn)$. It follows that the overall construction of $\bar{\Gamma}$ takes $O(n+mn)=O(mn)$ time. \smallskip\noindent(b) To update $\bar{\Gamma}$ when an entry $e$ of $M$ changes, we first need to update the reachability structures along a path in the structure of the block $B_i$ containing $e$ (if $e$ is in the common column of two blocks, we update both structures). This takes $O(m)$ time by Lemma \ref{lem:constructGamma}.
Once we have the updated $\bar{\Gamma}_i$, we update the reachability structures along the path $\pi$ of $T$ of those nodes $y$ for which $e\in B_y$. (There are two such paths if $e$ is in the common column of two consecutive blocks.) Since the depth of $T$ is $O(\log(n/m))$ and we spend $O(m)$ time to reconstruct the structure at each node of $T$, we update $T$ in $O(m(1+\log(n/m)))$ time. \smallskip\noindent(c) As in Lemma~\ref{lem:constructGamma}, to determine whether $(m,n)$ is reachable from $(1,1)$, we simply check in the reachability data structure $\bar{\Phi}(M)$ of the root of $\bar{\Gamma}$ whether $(m,n)\in [\bar{\sigma}_A((1,1)), \bar{\sigma}_Z((1,1))]$ and the flag $\bar{f}((m,n))$ is true. \end{proof} \subsection{The overall decision procedure} \label{sec:overall} We now put together the pieces of the decision procedure. We construct the arrangement ${\cal A}_\delta$ of the disks $D_\delta(p_i-q_j)$ as in Section~\ref{sec:arrangement} in $O(m^2n^2\log(m+n))$ time. We pick an arbitrary ($0$-, $1$-, or $2$-dimensional) face $f_0$ of ${\cal A}_\delta$. The face $f_0$ corresponds to a unique matrix $M(P,Q+f_0)$ and we construct the data structure $\bar{\Gamma}$ of Section~\ref{sec:improved} based on $M(P,Q+f_0)$. We then perform a traversal of the entire arrangement ${\cal A}_\delta$. In each step of the traversal we move from a face $f$ of ${\cal A}_\delta$ to a neighbor face $f'$ (both faces are of any dimension $0$, $1$, or $2$). In this step, we either enter a single disk of ${\cal A}_\delta$ or exit a single disk of ${\cal A}_\delta$. This corresponds to a change in a single entry of $M(P,Q+f)$. We update $\bar{\Gamma}$ accordingly, in time $O(m(1+\log(n/m)))$, and thereby determine whether $\delta^*(P,Q+f')\leq\delta$. We continue in this manner until we process the entire arrangement.
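For intuition, the query that the structure answers after each flip is monotone-path reachability in the 0/1 matrix. A naive $O(mn)$ recomputation (Python sketch; we assume the standard discrete-Fréchet step set of right, up, and diagonal moves, which is our reading of the monotone paths used here) would look as follows; the dynamic structure replaces this with an $O(m(1+\log(n/m)))$ update followed by an $O(1)$ query.

```python
def reachable_naive(M):
    """Check whether the last entry of a 0/1 matrix is reachable from (0, 0)
    by monotone steps (right, up, or diagonal) through 1-entries.
    Recomputed from scratch after every entry flip."""
    m, n = len(M), len(M[0])
    R = [[False] * n for _ in range(m)]
    R[0][0] = bool(M[0][0])
    for i in range(m):
        for j in range(n):
            if M[i][j] and not R[i][j]:
                R[i][j] = ((i > 0 and R[i - 1][j]) or
                           (j > 0 and R[i][j - 1]) or
                           (i > 0 and j > 0 and R[i - 1][j - 1]))
    return R[m - 1][n - 1]
```

With $O(m^2n^2)$ faces to visit, this naive recomputation would cost $O(m^3n^3)$ overall, which is what the dynamic structure improves upon.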
If we encounter a face $f$ along the traversal at which $\delta^*(P, Q+f)\leq \delta$, we report that the minimum distance under translation is $\leq \delta$, and otherwise we report that the minimum distance is $>\delta$. We thus obtain the following intermediate result. \begin{theorem}\label{th:decision} Let $P$, $Q$ be two sequences of points in ${\mathbb R}^2$ of sizes $m$ and $n$, respectively, and let $\delta >0$ be a parameter. Then the decision problem, where we want to determine whether there exists a translation $t\in \mathbb{R}^2$ such that $\delta^*(P,Q+t) \le \delta$, can be solved in $O(m^3n^2(1+\log(n/m)))$ time, assuming that $m\le n$. \end{theorem} \section{The optimization procedure} \label{sec:optimization} We now show how to use the decision procedure of Section~\ref{sec:dynamic} to compute the minimum discrete Fr\'echet distance under translation. Assume without loss of generality that $m\leq n$. As we increase $\delta$, the disks $D_\delta(p_i-q_j)$ expand, and their arrangement ${\cal A}_\delta$ varies accordingly. Nevertheless, except for a discrete set of critical values of $\delta$, the combinatorial structure of ${\cal A}_\delta$ does not change. That is, the pairs of intersecting disk boundaries remain the same, all their intersection points remain distinct and vary continuously, and no two disks are tangent to each other. Consequently, the representation of ${\cal A}_\delta$ that we use, namely, a collection of circular sequences of vertices, each containing the vertices of ${\cal A}_\delta$ along some circle $C_\delta(p_i-q_j)$, for $p_i\in P$, $q_j\in Q$, sorted along the circle, remains unchanged. The critical values of $\delta$, at which this representation of ${\cal A}_\delta$ changes qualitatively, are \begin{enumerate} \item\label{enum:critical1} The radii of the disks that have three points of $P - Q = \{p_i-q_j\mid p_i\in P,\; q_j\in Q\}$ on their boundaries.
\item\label{enum:critical2} The half-distances between pairs of points of $P-Q$. \end{enumerate} There are $O(m^3n^3)$ critical values (most of which are of type \ref{enum:critical1}), so we cannot afford to enumerate them and run an explicit binary search to locate the optimal value of $\delta$ among them. Instead, we use the parametric searching technique of~\cite{NM83}. In general, using parametric searching can be fairly complicated, since it is based on a simulation of a parallel version of the algorithm. However, we only have to simulate, by a parallel algorithm, the part of the decision procedure that depends on (the unknown value of) the optimum $\delta_T^* = \min_t \delta^*(P,Q+t)$. In our case, this portion is the construction of ${\cal A}_\delta$. Instead of actually constructing ${\cal A}_\delta$, we first observe that it suffices to restrict our attention to vertices of ${\cal A}_\delta$, in the sense that each face $f$ of ${\cal A}_\delta$ has a vertex $\xi$, such that all the $1$-entries of $M(P,Q+f)$ are also $1$-entries of $M(P,Q+\xi)$ (the latter matrix can contain additional $1$-entries), so it suffices to test for reachability in the matrices $M(P,Q+\xi)$ associated with vertices $\xi$ of ${\cal A}_\delta$. (Technically, we add to the set of vertices one additional point, say the rightmost point, on each disk boundary, to cater to faces that have no real vertices.) Hence, our parallel implementation of the algorithm will only simulate the construction of the sorted lists of vertices along each of the circles $C_\delta(p_i-q_j)$. Recall that during the parametric searching simulation, we collect comparisons that the decision procedure performs and that depend on $\delta$, and resolve them. 
This is done by finding the critical values of $\delta$ at which the outcome of some comparison changes, during a single (simulated) parallel step of the algorithm and then by running a binary search through these critical values of $\delta$, guided by the decision procedure of Theorem~\ref{th:decision}. In this manner, we maintain a shrinking half-open interval $I=(\alpha,\beta]$ of values of $\delta$ that contains $\delta_T^*$. Note that we have called the decision procedure at $\alpha$ and it has determined that $\delta_T^*>\alpha$. Then, as is easily seen, $\delta_T^*$ must be at least as large as the first critical value of $\delta$ within $I$ (and it cannot be arbitrarily close to $\alpha$). Assume that we have simulated the construction of ${\cal A}_\delta$, and obtained a half-open interval range $I=(\alpha,\beta]$ of $\delta$ that contains $\delta_T^*$. That is, we know that $\alpha<\delta_T^*\leq\beta$, and we know the sorted sequences of vertices of ${\cal A}_{\delta_T^*}$ along each circle $C_{\delta_T^*}(p_i-q_j)$. None of the comparisons that the decision procedure has performed has a critical value inside $I$, other than those comparisons that have produced ($\alpha$ and) $\beta$. Hence the output representation of ${\cal A}_\delta$ is fixed in the interior of $I$. The rest of the algorithm, which constructs the structure $\bar{\Gamma}$, traverses the vertex sequences along the circles $C_{\delta_T^*}(p_i-q_j)$, and dynamically updates the reachability data, is purely combinatorial, and does not introduce new critical values (i.e., does not involve comparisons that depend on $\delta_T^*$), so there is no need to run it at all. Since the decision procedure fails at $\alpha$ and succeeds at $\beta$, it follows that $\delta_T^*=\beta$. 
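The interval maintenance for one batch of comparisons can be sketched as a plain binary search over the batch's critical values (Python; decide(d) stands for the decision procedure of Theorem~\ref{th:decision}, returning whether the optimum is at most $d$ -- the naming is ours):

```python
def shrink_interval(criticals, decide, lo, hi):
    """Shrink the half-open interval (lo, hi] known to contain the optimum,
    by binary search over the critical values inside it, guided by the
    decision procedure: decide(d) is True iff the optimum is <= d."""
    vals = sorted(v for v in criticals if lo < v <= hi)
    a, b = 0, len(vals) - 1
    alpha, beta = lo, hi
    while a <= b:
        mid = (a + b) // 2
        if decide(vals[mid]):
            beta, b = vals[mid], mid - 1   # optimum is <= vals[mid]
        else:
            alpha, a = vals[mid], mid + 1  # optimum is  > vals[mid]
    return alpha, beta                     # optimum lies in (alpha, beta]
```

Each batch thus costs $O(\log(m+n))$ calls to the decision procedure; Cole's refinement, discussed below, reduces this further.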
It is thus sufficient to simulate, at the unknown $\delta_T^*$, an algorithm that \begin{enumerate} \item\label{enum:intersection} finds the intersection points of each circle $C_{\delta_T^*}(p_i - q_j)$ with the circles $\{C_{\delta_T^*}(p_k-q_l)\mid p_k\in P, q_l\in Q\}$, other than itself, and \item\label{enum:sorting} sorts, for each circle $C_{\delta_T^*}(p_i-q_j)$, the intersection points that were found on its boundary in step \ref{enum:intersection}, along this boundary. \end{enumerate} During the simulation we progressively shrink an interval $I=(\alpha,\beta] \subseteq \mathbb{R}$ that is known to contain $\delta_T^*$. We start with $I = (0,\infty]$. We first obtain all the $O(m^2n^2)$ critical values of type \ref{enum:critical2}, sort them, and run an explicit binary search among them guided by the decision procedure. (This part requires no parametric simulation.) As a result $I$ is shrunk to an interval $(\alpha,\beta]$, where $\alpha,\beta$ are two consecutive critical values of type \ref{enum:critical2}. This takes $O(m^2n^2\log(m+n)+m^3n^2(1+\log(n/m))\log(m+n))= O(m^3n^2(1+\log(n/m))\log(m+n))$ time. We can now accomplish step \ref{enum:intersection}, because the property that a pair of circles $C_\delta(p-q), C_\delta(p'-q')$ intersect either holds for all $\delta\in(\alpha,\beta)$ or does not hold for any such $\delta$. We then execute step \ref{enum:sorting}. The task at hand is to sort, for each circle $C_{\delta_T^*}(p_0-q_0)$, the resulting fixed set of intersection points along $C_{\delta_T^*}(p_0-q_0)$. For each pair $C_{\delta_T^*}(p-q), C_{\delta_T^*}(p'-q')$ of such circles, the order of the intersection points can change only at the radius $\tilde{\delta}$ of the circumcircle of $p_0-q_0,p-q,p'-q'$. We then simulate a parallel sorting procedure, to sort these intersection points along $C_{\delta_T^*}(p_0-q_0)$, and run it in parallel over all these circles. 
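The type-\ref{enum:critical1} critical value produced by such a comparison is a circumradius, which can be computed directly (Python sketch, using the standard circumcenter formula; points are coordinate pairs, and collinear triples yield no finite critical value):

```python
import math

def circumradius(p, q, r):
    """Radius of the circle through three points of P - Q; these circumradii
    are the type-1 critical values of delta."""
    ax, ay = p
    bx, by = q
    cx, cy = r
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        return math.inf  # collinear points: no finite circumcircle
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return math.hypot(ax - ux, ay - uy)  # distance from circumcenter to p
```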
We omit the (by now) routine details of this simulation (see, e.g.,~\cite{AG95} for a similar application of parametric searching). They imply that we can simulate this sorting, for each circle $C_{\delta_T^*}(p_i-q_j)$, using $O(mn)$ processors and $O(\log (mn))=O(\log(m+n))$ parallel steps (for a total of $O(m^2n^2)$ processors). Thus, for each parallel step, we need to resolve $O(m^2n^2)$ comparisons, each of which compares $\delta_T^*$ to a critical circumradius of type \ref{enum:critical1}. We run a binary search among these critical values using the decision procedure. This takes $O(m^3n^2(1+\log(n/m))\log(m+n))$ time for each parallel step, for an overall $O(m^3n^2(1+\log(n/m))\log^2(m+n))$ time over the $O(\log (m^2n^2))=O(\log(m+n))$ steps. To (slightly) improve this running time, we use the improvement of Cole~\cite{RC87}, which finds, for each parallel step, the (weighted) median of the (suitably weighted) unresolved critical values involved in this step, and calls the decision procedure only at this value, instead of using a complete binary search. This allows us to resolve comparisons that contribute at least some fixed fraction of the total weight, while the other unresolved critical values are carried over to the next step with their weights increased. Proceeding in this manner, we make only one call to the decision procedure at each parallel step, and add only $O(\log (m+n))$ parallel steps to the whole procedure. We thus obtain an overall algorithm with $O(m^3n^2(1+\log(n/m))\log(m+n))$ running time. In conclusion, we obtain the following main result of the paper. \begin{theorem} \label{th:optimization} Let $P$, $Q$ be two sequences of points in ${\mathbb R}^2$ of respective sizes $m$ and $n$, where $m\leq n$. Then the minimum discrete Fr\'echet distance under translation between $P$ and $Q$ can be computed in $O(m^3n^2(1+\log(n/m))\log(m+n))$ time. \end{theorem} \paragraph{Discussion.} Our algorithm is composed of two main parts. 
The first part is the construction of the subdivision ${\cal A}_\delta$, whose complexity is $O(m^2 n^2)$. The challenge here is either to argue that, in favorable situations, the actual complexity of ${\cal A}_\delta$ is $o(m^2n^2)$, or to process only a portion of ${\cal A}_\delta$ that has $o(m^2n^2)$ complexity. Here is a simple illustration of such an approach. Consider the case where $P$ and $Q$ are sampled along a pair of \emph{$c$-packed curves}, where a curve $\gamma$ is $c$-packed if, for every disk $D$, the length of $\gamma\cap D$ is at most $c$ times the radius of $D$. Assume also that the sampling is more or less uniform, so that the distance between any pair of consecutive points of $P$ or of $Q$ is roughly some fixed value $\Delta$. We may assume, without loss of generality, that $p_1=q_1$. Consider the decision procedure with a given parameter $\delta$, and observe that if $t$ is any translation for which $\delta^*(P,Q+t)\leq \delta$ then $\|t\|\leq \delta$. Therefore, for each $p_i\in P$, the only points $q_j\in Q$ that can align with $p_i$ during a simultaneous traversal of $P$ and $Q+t$, for any such ``good'' translation $t$, are those at distance $\leq 2\delta$ from $p_i$. The assumptions on $P$ and $Q$ imply that the number of such points is at most roughly $2\delta c/\Delta$. That is, instead of constructing the entire arrangement ${\cal A}_\delta$, it suffices to construct a coarser arrangement, involving only roughly $2c\delta m/\Delta$ disks. Traversing the coarser arrangement is then done as before, where each update step (and the following reachability query) costs $O(m\log(1+\log(n/m)))$ time, assuming that $m\leq n$. This improves the running time of the decision procedure to $O\left(\left(\frac{2c\delta}{\Delta}\right)^2 m^3 (1+\log(n/m))\right)$, assuming that $m\leq n$ and $\delta<\frac{\Delta n}{2c}$. Given this decision procedure, we can solve the optimization problem using parametric searching. 
However, to ensure that the decision procedure does not become too expensive, we want to run it only with values $\delta=O(\delta^*_T)$. This matters only when $\delta^*_T\leq \delta_0:=\frac{n\Delta}{2c}$; otherwise the running time is close to that of the algorithm of Section~\ref{sec:optimization}. Therefore, in the following we describe how to solve the optimization problem assuming that $\delta_T^*< \delta_0$ (if the following procedure fails, we run the algorithm of Section~\ref{sec:optimization}). We also assume, for now, that $\delta_T^*>\Delta$ (we explain below how the case $\delta_T^*\leq\Delta$ is handled). We consider the interval $(\Delta, \delta_0)$ that is assumed to contain $\delta_T^*$, and run an ``exponential search'' through it, calling the decision procedure with the values $\delta_i = 2^i \cdot\Delta$, for $i = 0,1,2, \ldots$, in order, until the first time we reach a value $\delta' = \delta_i \geq \delta_T^*$ (with $\delta'<\delta_0$). Note that the costs of running the decision procedure at $\delta'$ and at $\delta_T^*$ differ by at most a factor of $4$, so the cost of running the decision procedure at $\delta'$ is asymptotically the same as at $\delta_T^*$. Moreover, since the running time bounds on the executions of the decision procedure at $\delta_1,\ldots,\delta_i$ form a geometric sequence, the overall cost of the exponential search is also asymptotically the same as the cost of running the decision procedure at $\delta_T^*$. We then run the parametric searching technique as above, with the constraint that $\delta_T^*$ is at most $\delta'$ (i.e., we set $\delta'$ as the minimal $\beta$ obtained so far). Hence, from now on, each call to the decision procedure made by the parametric searching will cost no more than a call with $\delta'$ (which is asymptotically the same as a call with $\delta_T^*$). 
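The doubling stage can be sketched as follows (a schematic illustration in Python; `decide` is again a stand-in for the real decision procedure, and the geometric growth of the trial values ensures that the total cost is dominated by the last call):

```python
def exponential_search(decide, delta0, delta_max):
    """Doubling search: call decide at delta0, 2*delta0, 4*delta0, ...
    and return the first feasible value, or delta_max if none is found
    below it (in which case the caller falls back to the general algorithm)."""
    d = delta0
    while d < delta_max:
        if decide(d):
            return d          # first delta_i = 2^i * delta0 that is feasible
        d *= 2
    return delta_max

# toy predicate: feasible iff delta >= 5; first feasible power-of-two trial is 8
print(exponential_search(lambda d: d >= 5, 1.0, 100.0))  # -> 8.0
```

Since each trial at most quadruples the cost of the previous one, the sum of all calls is a constant factor times the cost of the final call, matching the analysis above.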
We thus obtain an overall algorithm with $O\left(\left(\frac{c\delta_T^*}{\Delta}\right)^2 m^3 (1+\log(n/m))\log(m+n)\right)$ running time. In the case where $\delta_T^*\leq\Delta$, running the decision procedure with $\delta=\Delta$ reveals that $\delta_T^*\leq \Delta$, and we then run the parametric searching technique with the constraint that $\delta_T^*$ is at most $\Delta$. In this case, the running time of the algorithm is $O\left(c^2 m^3 (1+\log(n/m))\log(m+n)\right)$. The second part of the algorithm presented in this work is the dynamic data structure for maintaining reachability in $M$. It is an open question of independent interest whether this data structure can be improved. A related problem is whether the techniques used in our structure can be extended to the general case of reachability in planar directed graphs, so as to simplify and improve the efficiency of the earlier competing method of Diks and Sankowski~\cite{DS07}.
\section{Introduction} Multiphase flows are ubiquitous in several industrial applications. In the last three decades, there has been a surge in numerical methods and algorithms for the simulation of complex multiphase flows. Several types of interface-capturing strategies have been proposed for two-phase flows, the most popular being the level set method, the volume of fluid (VOF) method, and the front tracking scheme \cite{ProsperettiTryggvasonBook,TryggvasonZeleskiBook}. VOF methods with geometric advection strictly conserve the volume of the two phases. Several improvements have been made since the inception of the method (see Hirt and Nichols \cite{Hirt1981} and \cite{Renardy2002,Youngs1982,Pilliod1992,Pilliod2004,Scardovelli2003}). The simplest and earliest representation for interface reconstruction is the simple line interface calculation (SLIC), in which the interface is approximated by horizontal or vertical lines. Subsequently, piecewise linear interface construction (PLIC) was introduced, in which the interface is approximated by an oblique line segment within the cell \cite{Youngs1984}. Higher-order interface constructions have been proposed (such as the parabolic reconstruction of \cite{Renardy2002}), but considering the associated computational cost and complexity of geometric advection, PLIC is usually preferred. Scardovelli and Zaleski \cite{Scardovelli2000} proposed analytical formulae for the piecewise linear reconstruction of the interface in Cartesian coordinates that led to a significant speedup over the earlier iterative schemes. However, in curvilinear coordinates (such as the axisymmetric coordinate system), these analytical formulae cannot be employed directly. In the present work, we derive analogous analytic formulae for axisymmetric coordinates, which result in a speedup of $\sim 28$ over the iterative counterparts (Brent's root-finding method). 
Further, we demonstrate that the existing interface advection schemes in VOF for axisymmetric coordinates are not strictly mass conserving. In this study, we propose modifications to the current operator-split algorithms that result in machine-precision mass conservation in axisymmetric coordinates. We show the efficacy of the proposed algorithms using several test cases. The paper is organized as follows. We first present analytical formulae for the interface reconstruction schemes in axisymmetric coordinates in section 2. In section 3, we propose modifications to the existing interface advection algorithm for axisymmetric VOF and present test cases that show the efficacy of the scheme. Finally, in section 4, we discuss the important conclusions. \section{Interface Reconstruction Scheme} Interface reconstruction in the volume of fluid (VOF) method requires the volume fraction field, from which a piecewise linear or higher-order interface is constructed in a given grid cell. Interface reconstruction is an integral part of the geometric advection schemes that ensure the mass conservation property of the VOF method \cite{TryggvasonZeleskiBook}. The initial condition for a multiphase flow simulation requires the initial distribution of the volume fraction field, usually provided as an implicit function of the spatial coordinates. The VOFI library \cite{Bna2016} is an open-source library for accurately initializing the liquid volume fraction field in Cartesian coordinate systems. In VOFI, for cells cut by the interface (see figure \ref{fig:shoelace}), the PLIC reconstruction method \cite{Youngs1984,Pilliod1992} is employed to approximate the interface as a line segment, \begin{equation} \boldsymbol{m}\cdot\boldsymbol{x} = a, \label{eq:planeeqn} \end{equation} where $\boldsymbol{m}$ is the local normal at the interface, $\boldsymbol{x}$ is a point on the plane, and $a$ is the normal distance of the origin from the plane. 
The analytical relation between the volume fraction and the line constant, given by Scardovelli and Zaleski \cite{Scardovelli2000}, is employed to determine the line constant $a$. Thus, for two-dimensional and three-dimensional Cartesian coordinate systems, the VOFI library can be directly employed for accurate assignment of the initial volume fraction field on a given discretized domain using an implicit equation of the interface. However, in curvilinear coordinate systems, the piecewise linear interface constructed by VOFI for a given implicit function requires computation of the volume fraction field using a formula specific to those coordinates. For instance, for the axisymmetric coordinate system, the modified Gauss area (shoelace) formula for the volume of revolution of a convex polygon is given by \begin{equation} V = \frac{\pi}{3}\left|\sum_{i=1}^{n}\left( x_{i} + x_{i+1} \right)\left(x_{i} y_{i+1}-x_{i+1} y_{i}\right)\right| \label{formula_shoelace} \end{equation} where $(x_i,y_i)$ for $i = 1,...,n$ (with $x_{n+1}=x_{1}$ and $y_{n+1}=y_{1}$) are the coordinates of the vertices of a convex polygon ordered counter-clockwise, as shown in figure \ref{fig:shoelace}. \begin{figure}[htbp] \centerline{\includegraphics[width=2.5in]{shoelace.pdf}} \caption{Coordinates of the vertices of a simple polygon cut by the interface, ordered counter-clockwise. Here $(x_3,y_3)$ and $(x_4,y_4)$ represent the end points of the PLIC line segment with interface normal $\boldsymbol{m}$ and line constant $a$. } \label{fig:shoelace} \end{figure} Thus, for initialization of the volume fraction field, $C$, once we obtain the linear interface in each grid cell using the VOFI library, we use the above formula to compute the volume, $V$, and assign the volume fraction in each grid cell as \begin{equation} C = \frac{V}{2 \pi r_c \Delta x \Delta y}. 
\label{eq:vol-fraction} \end{equation} Here $r_c$ is the distance of the center of the cell from the axis of symmetry, and $\Delta x$ and $\Delta y$ are the grid-cell sizes in the radial ($r$) and axial ($y$) directions, respectively. We note that the above procedure is followed essentially to minimize the error in the volume fraction during initialization. To illustrate this, we initialise a torus of minor radius $r_t=0.25$ and major radius $r = 0.50$ at the center of a computational domain of size $1\times1$, as shown in figure \ref{Test1}. The volume of the torus can be computed analytically as $V_t=2 \pi^2 r r_t^2$, where the major radius, $r$, is the distance of the center of the torus from the axis of symmetry. In table \ref{table1}, we compare the results for various grid sizes with those obtained using the popular VOF-based open-source flow solver Gerris \cite{popinet2003}. \begin{table} \caption{Relative error in the volume during initialization of a torus of radius $r_t = 0.25$, for different grid sizes.} \centering \begin{tabular}{lll} \toprule \multicolumn{3}{c}{Relative error in volume: $E = \frac{\left|V - V_t \right|}{V_t}$} \\ \cmidrule(r){1-3} Grid& Current Solver&Gerris Solver\\ \midrule $16 \times 16$ & $ \num{5.4e-16}$ & $\num{9.4e-3}$\\ $32 \times 32$ & $ \num{7.1e-16}$ & $\num{2.6e-3}$\\ $64 \times 64$ & $ \num{1.8e-16}$ & $\num{5.5e-4}$\\ $128 \times 128$ & $\num{5.4e-16}$ & $\num{1.6e-4}$\\ \bottomrule \end{tabular} \label{table1} \end{table} \begin{figure}[!htbp] \centerline{\includegraphics[width=2.5in]{Test1}} \caption{Reconstructed interface on a $16\times16$ grid for a torus of minor radius $r_t = 0.25$ and major radius $r = 0.50$ initialised at the center of a $1\times1$ domain; the dots represent the mid-points of the reconstructed PLIC line segments.} \label{Test1} \end{figure} Thus, we have shown that, in curvilinear coordinates, the volume fraction field can be initialized to machine accuracy. 
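The volume formula of equation \ref{formula_shoelace} is straightforward to implement. The following short Python sketch (function names are ours, for illustration only) evaluates it and checks it against an annulus of known volume:

```python
from math import pi, isclose

def axisymmetric_shoelace_volume(vertices):
    """Volume of revolution about the y-axis of a polygon with
    counter-clockwise vertices (x_i, y_i), x being the radial coordinate:
    V = (pi/3) * |sum_i (x_i + x_{i+1}) * (x_i*y_{i+1} - x_{i+1}*y_i)|."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]   # wrap around: vertex n+1 = vertex 1
        s += (x0 + x1) * (x0 * y1 - x1 * y0)
    return pi * abs(s) / 3.0

# sanity check: the square [1,2] x [0,1] revolved about the y-axis is an
# annular cylinder of volume pi*(2^2 - 1^2)*1 = 3*pi
square = [(1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (1.0, 1.0)]
assert isclose(axisymmetric_shoelace_volume(square), 3.0 * pi)
```

The same routine also recovers, e.g., the cone volume $\pi r^2 h/3$ for the triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$.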
Now, we derive an analytic relation for PLIC reconstruction in axisymmetric coordinates along the lines of Scardovelli and Zaleski \cite{Scardovelli2000}. We use Youngs' method \cite{Youngs1984} to obtain the interface normal ($\boldsymbol{m}$ in equation \ref{eq:planeeqn}), directed from fluid-1 ($C = 1$) to fluid-2 ($C = 0$) and given by $\boldsymbol{m}= -\nabla C /|\nabla C| $. To complete the PLIC interface reconstruction for a given $C$, in addition to the normal $\boldsymbol{m}$, we also need to obtain the line constant, $a$, which is the normal distance of the interface from one of the vertices of the computational cell. In what follows, we present a methodology to obtain the line constant ($a$) analytically for a given interface normal vector and the volume fraction of a mixed cell. As discussed in \cite{Scardovelli2000}, using an analytical relation between the volume fraction ($C$), the interface normal ($\boldsymbol{m}$), and the line constant ($a$), we can implement an \emph{if--else-if} construct to determine the line constant, $a$. This approach is computationally much more efficient than the alternative iterative approach. Given the equation of the interface, $m_1 x + m_2 y = a$, all combinations of $m_1, m_2$ (such that $m_1^2 + m_2^2 = 1$) can be reduced to one of the cases shown in figure \ref{fig:interface-configs}, either by changing the origin or by changing the reference fluid from fluid-$1$ to fluid-$2$, such that both $m_1$ and $m_2$ are positive and the bottom left corner of the mixed cell under consideration is contained in fluid-$1$. Figure \ref{fig:interface-configs} shows all the possible configurations of the interface arrangement with $m_1 \ge 0$ and $m_2 \ge 0$. 
\begin{figure}[!htbp] \centerline{\includegraphics[width=16.45cm]{Appendix}} \caption{Various cases that can arise in the standard configuration of the interface, with $m_1,m_2 \geq 0$ and fluid $1$ occupying the bottom left corner of the cell.} \label{fig:interface-configs} \end{figure} We first discuss Case A shown in figure \ref{fig:interface-configs}; a similar procedure can be followed to obtain the relations for the other cases. For the axisymmetric coordinate system, the volume of revolution of the shaded region shown in figure \ref{fig:interface-configs} for Case A is given by \begin{equation} V=\frac{\pi}{3}\left(x_{0}+x_{2}\right)^{3} \frac{m_{1}}{m_{2}}-\frac{\pi}{3}\left(x_{0}+x_{1}\right)^{3} \frac{m_{1}}{m_{2}}-\pi x_{0}^{2} y_{1}. \label{VolumeEqn} \end{equation} Using the equation of the line $ m_1 x + m_2 y = a$, we have $ x_1 = a/m_1 - (m_2 \Delta y)/m_1 $ and $ x_2 = a/m_1 $. In the present study, we assume $\Delta x = \Delta y$, but the analysis can easily be extended to $\Delta x \ne \Delta y$. Substituting $x_1$ and $x_2$ in equation \ref{VolumeEqn} and collecting terms in powers of $a$, we obtain \begin{equation} \left(\frac{\pi \Delta y}{m_{1}^{2}}\right) a^{2}+\left(-\frac{\pi m_{2} \Delta y^{2}}{m_{1}^{2}}+\frac{2 \pi \Delta y x_{0}}{m_{1}}\right) a+\left(\frac{\pi m_{2}^{2} \Delta y^{3}}{3 m_{1}^{2}}-\frac{\pi \Delta y^{2} m_{2} x_{0}}{m_{1}}\right)=V. \label{QuadraticEqn} \end{equation} Thus, we have an analytical relation between the volume and the line constant $a$. We note that the above relation holds only when the interface cuts the top and bottom edges of the cell, as shown for Case A in figure \ref{fig:interface-configs}: $(x_{2} - x_{0}) \leq \Delta x$, $y_{1} = \Delta y$ and $y_{2} = 0$. These conditions yield the bounds on the values of $a$: $m_{2} \Delta y \le a \leq m_{1} \Delta x$. 
Substituting the above bounds for $a$ in equation \ref{QuadraticEqn} yields the limiting volumes \begin{equation} V_{1}=\frac{\pi \Delta y\left(3 \Delta x^{2} m_{1}^{2}+3 \Delta x m_{1}\left(2 m_{1} x_{0}-\Delta y m_{2}\right)+\Delta y m_{2}\left(\Delta y m_{2}-3 m_{1} x_{0}\right)\right)}{3 m_{1}^{2}} \end{equation} and \begin{equation} V_{2}=\frac{\pi \Delta y^{2} m_{2}\left(\Delta y m_{2}+3 m_{1} x_{0}\right)}{3 m_{1}^{2}}. \end{equation} For a given volume fraction $C$ and interface normal ($m_1, m_2$), the volume occupied by fluid-$1$ in the configurations shown in figure \ref{fig:interface-configs} is given by $V = 2\pi r \Delta x \Delta y C$, where $r = x_0 + \Delta x/2$ is the distance from the axis of symmetry to the cell center. If $V_2 \le V \le V_1$, then the analytical relation given by equation \ref{QuadraticEqn} can be used to determine the line constant $a$. For the quadratic equation in $a$ given by equation \ref{QuadraticEqn}, we note that only one of the roots satisfies the required bounds on $a$ for Case A. For Cases B and C, we obtain cubic equations that can be solved for $a$ using Cardano's formula, or using Brent's method to find the appropriate root within the necessary bounds for the line constant. Following the same approach, we obtain the bounds on the volume for Case D as \begin{equation} V_{3}=\frac{\pi \Delta y^{2}\left(-2 \Delta y m_{1}+3 \Delta y m_{2}-3 m_{1} x_{0}+6 m_{2} x_{0}\right)}{3 m_{2}} \end{equation} and \begin{equation} V_{4}=\frac{\pi \Delta y^{2} m_{1}\left(\Delta y+3 x_{0}\right)}{3 m_{2}}. \end{equation} \begin{figure}[!htbp] \centerline{\includegraphics[width=5.in]{Reconstruction}} \caption{Bounds for the different cases shown in figure \ref{fig:interface-configs} as a function of the radial component of the interface normal, $m_1$. The interfacial cell is placed at a distance of $1$ from the axis of symmetry with $\Delta x = \Delta y =1$. 
The total volume of the cell is $V_{cell} = 2 \pi$, indicated by the top boundary in the plot.} \label{fig:phasediagram} \end{figure} Figure \ref{fig:phasediagram} shows all the possible configurations of the interface and the volume bounds that separate the cases as the radial component of the interface normal varies from its minimum to its maximum. Figure \ref{fig:phasediagram} clearly shows that the bounds for the cases in figure \ref{fig:interface-configs} do not overlap, and thus provide a unique criterion for computing the line constant $a$. We can therefore use the following algorithm to classify each case. \begin{algorithm}[H] \KwData{$V,m_1$} \KwResult{Identification of the case to which the standard interface configuration belongs.} \eIf{$\: m_1\geq\frac{1}{\sqrt{2}} \:$} { \uIf{$\:V \geq V_1\:$} { $Case\:B$ } \uElseIf{$\:V \leq V_2\:$} { $Case\:C$ } \Else { $Case\:A$ \tcp*[f]{$\:V_1 > V > V_2\:$} } } { \uIf{$\:V \geq V_3\:$} { $Case\:B$ } \uElseIf{$\:V \leq V_4\:$} { $Case\:C$ } \Else { $Case\:D$ \tcp*[f]{$\:V_3 > V > V_4\:$} } } \caption{Classification of the standard case of the reconstructed interface with $m_1,m_2\geq0$.} \end{algorithm} We list below the analytical relation between the line constant ($a$) and the volume ($V$) for each case: \subsection*{Case $A$} \begin{equation} \left(\frac{\pi \Delta y}{m_{1}^{2}}\right) a^{2}+\left(-\frac{\pi m_{2} \Delta y^{2}}{m_{1}^{2}}+\frac{2 \pi \Delta y x_{0}}{m_{1}}\right) a+\left(\frac{\pi m_{2}^{2} \Delta y^{3}}{3 m_{1}^{2}}-\frac{\pi \Delta y^{2} m_{2} x_{0}}{m_{1}}\right)=V. 
\end{equation} \subsection*{Case $B$} \begin{equation} \begin{aligned} -\left(\frac{\pi}{3 m_{1}^{2} m_{2}}\right) a^{3}+\left(\frac{\pi \Delta y}{m_{1}^{2}}-\frac{\pi x_{0}}{m_{1} m_{2}}\right) a^{2}+\left(\frac{\pi\left(\Delta y+x_{0}\right)^{2}}{m_{2}}-\frac{\pi m_{2} \Delta y^{2}}{m_{1}^{2}}+\frac{2 \pi \Delta y x_{0}}{m_{1}}-\frac{\pi x_{0}^{2}}{m_{2}}\right) a \\ +\left(-\frac{\pi \Delta y\left(\Delta y+x_{0}\right)^{2} m_{1}}{m_{2}}+\frac{\pi m_{2}^{2} \Delta y^{3}}{3 m_{1}^{2}}-\frac{\pi \Delta y^{2} m_{2} x_{0}}{m_{1}}-\frac{\pi m_{1} x_{0}^{3}}{3 m_{2}}+\frac{\pi m_{1}\left(\Delta y+x_{0}\right)^{3}}{3 m_{2}}\right)=V \end{aligned} \end{equation} \subsection*{Case $C$} \begin{equation} \left(\frac{\pi}{3 m_{1}^{2} m_{2}}\right) a^{3}+\left(\frac{\pi x_{0}}{m_{1} m_{2}}\right) a^{2}=V \end{equation} \subsection*{Case $D$} \begin{equation} a=\frac{2 \pi \Delta y^{3} m_{1}+3 \pi \Delta y^{2} m_{1} x_{0}+3 m_{2} V}{3 \pi \Delta y\left(\Delta y+2 x_{0}\right)} \end{equation} We note here that the other cases can readily be transformed into one of the cases listed in figure \ref{fig:interface-configs} either by changing the reference fluid (using ($1 - C$) instead of $C$ to compute the volume and inverting the interface normal $\boldsymbol{m}$) or by changing the origin (keeping the location of the axis of symmetry the same but inverting its direction). We now compare the analytical method described above with the iterative method for finding the line constant for Case A in figure \ref{fig:interface-configs}. The relative error in the line constant is given in table \ref{table2} for the analytical method and for the iterative method with different tolerances. We note that the iterative method with a tolerance of $10^{-8}$ is about $28$ times slower than the analytical method. 
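As an illustration, the Case A relation (a quadratic in $a$) can be inverted as in the following hypothetical Python sketch (all variable and function names are ours): it forward-evaluates equation \ref{QuadraticEqn} and recovers the line constant by keeping the quadratic root that lies in the admissible interval $[m_2\Delta y,\, m_1\Delta x]$ (with $\Delta x=\Delta y$):

```python
from math import pi, sqrt, isclose

def case_a_volume(a, m1, m2, x0, dy):
    """Forward evaluation of the Case-A quadratic relation V(a)."""
    A = pi * dy / m1**2
    B = -pi * m2 * dy**2 / m1**2 + 2.0 * pi * dy * x0 / m1
    C = pi * m2**2 * dy**3 / (3.0 * m1**2) - pi * dy**2 * m2 * x0 / m1
    return A * a * a + B * a + C

def case_a_line_constant(V, m1, m2, x0, dy):
    """Invert V(a): solve the quadratic and keep the root that satisfies
    m2*dy <= a <= m1*dx (here dx = dy); only one root can lie in range."""
    A = pi * dy / m1**2
    B = -pi * m2 * dy**2 / m1**2 + 2.0 * pi * dy * x0 / m1
    C = pi * m2**2 * dy**3 / (3.0 * m1**2) - pi * dy**2 * m2 * x0 / m1 - V
    disc = sqrt(B * B - 4.0 * A * C)
    for a in ((-B + disc) / (2.0 * A), (-B - disc) / (2.0 * A)):
        if m2 * dy - 1e-12 <= a <= m1 * dy + 1e-12:
            return a
    raise ValueError("V outside the Case-A volume bounds")

# round trip: pick a in the admissible interval [0.6, 0.8] and recover it
m1, m2, x0, dy = 0.8, 0.6, 1.0, 1.0
V = case_a_volume(0.7, m1, m2, x0, dy)
assert isclose(case_a_line_constant(V, m1, m2, x0, dy), 0.7)
```

The cubic Cases B and C admit the same pattern, with the quadratic solve replaced by Cardano's formula or a bracketed root finder.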
\begin{table}[h] \caption{Relative error in the line constant $a$ and the ratio of the CPU time required by the iterative method (Brent's algorithm) to that required by the analytical method, for $10000$ repetitions of Case A shown in figure \ref{fig:interface-configs}, for different tolerances used in the iterative method.} \centering \begin{tabular}{llll} \toprule \multicolumn{4}{c}{Comparison between the analytical and iterative reconstruction methods} \\ \cmidrule(r){1-4} Method& Tolerance&Relative Error&$t_{iterative}/t_{analytical}$\\ \midrule Analytical & $ -$& $ 0 $ & $ 1$\\ Iterative & $ \num{1.0e-4}$& $ \num{1.8e-4}$ & $\num{14.2}$\\ Iterative & $ \num{1.0e-6}$& $ \num{1.1e-6}$ & $\num{21.2}$\\ Iterative & $ \num{1.0e-8}$& $ \num{6.7e-11}$ & $\num{28.3}$\\ \bottomrule \end{tabular} \label{table2} \end{table} Once the line constant, $a$, is obtained, the positions of the endpoints of the linear approximation of the interface can be computed, completing the construction of the linear interface in a given computational cell. As discussed earlier, this more precise description of the interface within the grid cell enables the geometric advection that gives the VOF method its strict mass conservation property while maintaining a sharp interface. In what follows, we discuss an operator-split algorithm for the geometric advection of the interface in axisymmetric coordinates. We note that a straightforward extension of the 2D Cartesian algorithm does not yield accurate results, as also indicated by the results obtained from existing open-source codes. \section{Advection of the Interface} We present here a scheme for accurate geometric advection of the volume fraction in axisymmetric coordinates. We use a uniform grid, with the volume fraction stored at the cell centers ($C_{i,j}$). The incompressible fluid flow is determined by the velocity field, which is defined at the cell faces ($u_{i+1/2,j},v_{i,j+1/2}$). 
Here, $u$ denotes the radial velocity and $v$ the axial velocity. The velocity field satisfies the discrete divergence-free condition \begin{equation} \frac{(r u)_{i+\frac{1}{2}, j}-(r u)_{i-\frac{1}{2}, j}}{r_{i} \Delta x}+\frac{v_{i, j+\frac{1}{2}}-v_{i, j-\frac{1}{2}}}{\Delta y} = 0. \end{equation} The motion of the interface is governed by the advection equation for the volume fraction field, \begin{equation} \frac{\partial C}{\partial t}+ \mathbf{u} \cdot \nabla C = 0. \label{adveqn1} \end{equation} For incompressible fluids, conservation of the individual volumes of the two fluids implies conservation of mass. Thus, in the volume of fluid method, geometric advection of the volume fraction field is expected to yield machine-precision mass conservation. Given a volume fraction field, the reconstructed interface, and a solenoidal velocity field, we can solve equation \ref{adveqn1} using an operator-splitting algorithm consisting of an $x$-sweep and a $y$-sweep, following \cite{sussman2000coupled}. In order to employ an operator-splitting algorithm, the advection equation (equation \ref{adveqn1}), using $\nabla \cdot \mathbf{u} = 0$, can be written as \begin{equation} \frac{\partial C}{\partial t} + \nabla \cdot ( \mathbf{u} C ) = C(\nabla \cdot \mathbf{u}). \label{adveqn3} \end{equation} This form of the advection equation is essential for performing volume-conserving $x$-direction and $y$-direction sweeps separately (see \cite{Pilliod2004}). 
Given the volume fraction ($C_{i,j}^{n}$) and the velocity field ($u_{i+1/2,j}^{n},v_{i,j+1/2}^{n}$) at the $n$th time step, the discretised form of equation \ref{adveqn3} is given by \begin{equation} \begin{aligned} C_{i, j}^{n+1}= C_{i, j}^{n}+\frac{\Delta t}{r_{i,j} \Delta x}\left(\delta V_{i-1 / 2, j}-\delta V_{i+1 / 2, j}\right) +\frac{\Delta t}{\Delta y}\left(\delta V_{i, j-1 / 2}-\delta V_{i, j+1 / 2}\right) + \\ C_{i, j}^{n} \left( \frac{\Delta t}{r_{i,j} \Delta x} \left( r_{i+1/2,j}u_{i+1/2,j}^{n}-r_{i-1/2,j}u_{i-1/2,j}^{n} \right) + \frac{\Delta t}{\Delta y} \left( v_{i,j+1/2}^{n}-v_{i,j-1/2}^{n} \right) \right) \end{aligned} \label{adveqn4} \end{equation} where $\delta V_{i+1 / 2, j} = (ruC)_{i+1/2,j}^{n}$ is the amount of volume fraction fluxed through the right cell face. Similarly, the fluxes $\delta V_{i-1 / 2, j}, \delta V_{i, j+1/2}$ and $\delta V_{i, j-1/2}$ can be computed for the other cell faces. Using operator splitting, we can split the above equation as follows: \begin{equation} C_{i, j}^{*}= C_{i, j}^{n}+\frac{\Delta t}{r_{i,j} \Delta x}\left(\delta V_{i-1 / 2, j}-\delta V_{i+1 / 2, j}\right) + C_{i, j}^{*} \left( \frac{\Delta t}{r_{i,j} \Delta x} \left( r_{i+1/2,j}u_{i+1/2,j}^{n}-r_{i-1/2,j}u_{i-1/2,j}^{n} \right) \right) \label{adveqn5} \end{equation} \begin{equation} C_{i, j}^{n+1}= C_{i, j}^{*} + \frac{\Delta t}{\Delta y}\left(\delta V_{i, j-1 / 2}-\delta V_{i, j+1 / 2}\right) + C_{i, j}^{*} \left( \frac{\Delta t}{\Delta y} \left( v_{i,j+1/2}^{n}-v_{i,j-1/2}^{n} \right) \right) \label{adveqn6} \end{equation} where $C_{i,j}^{*}$ is the intermediate value of the volume fraction. An implicit scheme is used in the first sweep direction and an explicit scheme in the second to maintain conservation of the volume fraction \cite{puckett1997}. The order of the sweep directions is alternated every timestep \cite{strang1968} (``Strang splitting'') to achieve second-order accuracy in time. 
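A minimal sketch of the implicit $x$-sweep of equation \ref{adveqn5} is given below (Python; the face-flux callback `flux` and all variable names are ours, and the geometric flux computation itself is not shown). Since the compression term multiplies $C^*$, it is moved to the left-hand side and $C^*$ is obtained by a single division rather than an iteration:

```python
def radial_sweep(C, u_face, r_face, r_cell, dx, dt, flux):
    """One implicit x-sweep (radial direction) of the operator split:
    C* = (C^n + dt/(r dx) * (dV_w - dV_e)) / (1 - dt/(r dx)*(r_e u_e - r_w u_w)).
    flux(i) returns the volume fraction fluxed through face i (hypothetical)."""
    C_star = []
    for i in range(len(C)):
        dV_w = flux(i)        # flux through the left (west) face
        dV_e = flux(i + 1)    # flux through the right (east) face
        div_x = dt / (r_cell[i] * dx) * (
            r_face[i + 1] * u_face[i + 1] - r_face[i] * u_face[i])
        C_star.append((C[i] + dt / (r_cell[i] * dx) * (dV_w - dV_e))
                      / (1.0 - div_x))
    return C_star

# trivial consistency check: zero velocity and zero fluxes leave C unchanged
C = [0.3, 0.7, 1.0]
out = radial_sweep(C, [0.0] * 4, [1.0, 2.0, 3.0, 4.0],
                   [1.5, 2.5, 3.5], 1.0, 0.1, lambda i: 0.0)
assert out == C
```

The explicit $y$-sweep of equation \ref{adveqn6} has the same structure, but with the compression term multiplying the already-known $C^*$.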
The volume flux through the cell faces, $\delta V_{cell-face}$, is computed geometrically. Consider the schematic in figure \ref{flux}, where the shaded region shows the volume of fluid-$1$ in the cell to be fluxed through the right face ($\delta V_{i+1/2,j}$). Taking the face velocity ($u_{i+1/2,j}$) to be positive, the flux can be computed as \begin{equation} \delta V_{i+\frac{1}{2},j} = \frac{(r u)_{i+1/2,j} V}{2 \pi r \Delta r \Delta y} \end{equation} where $V$ is the volume of fluid $1$ fluxed through the right face (shown as the shaded region in figure \ref{flux}), $\Delta r$ is the radial extent that contains the volume advected in this timestep, and $r$ is the distance of the center of this volume from the axis of symmetry. We can calculate $\Delta r$ by considering conservation of the volume fluxed through the right face and solving the resulting quadratic equation, which yields $\Delta r = r_{i+\frac{1}{2},j}- \sqrt{r_{i+\frac{1}{2},j}^2 - 2 r_{i+\frac{1}{2},j} u_e \Delta t}$, where $u_e = u_{i+1/2,j}$ denotes the velocity at the right (east) face. Using the section of the piecewise reconstructed interface lying in the volume to be fluxed through the cell face over the time step $\Delta t$, and employing the Gauss area formula given by equation \ref{formula_shoelace}, we can calculate the volume cut by this region. \begin{figure}[!htbp] \centerline{\includegraphics[width=2.5in]{advection}} \caption{The fluxed volume through the right face of the cell when $u_{i+1/2,j}$ is positive.} \label{flux} \end{figure} This small correction in computing $\Delta r$, along with the exact Gauss area formula for axisymmetric simulations, allows us to improve upon the existing volume fraction advection schemes. Existing schemes modify the 2D algorithms by using $r\mathbf{u}$ as the velocity field and then apply the 2D geometric advection scheme, which results in a third-order error ($O(\Delta t^2 h)$, where $h$ is the grid size and $\Delta t$ is the timestep) in mass conservation. 
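The $\Delta r$ correction can be checked numerically: with the quadratic root above, the swept annulus carries exactly the volume $2\pi r_e u_e \Delta t \Delta y$ that passes through a face at radius $r_e$, while the uncorrected 2D-style width $u_e\Delta t$ does not. The following is a small illustrative check (not part of the solver; names are ours):

```python
from math import pi, sqrt, isclose

def exact_delta_r(r_e, u_e, dt):
    """Radial width of the annulus swept inward from the face at radius r_e
    (root of the volume-conservation quadratic)."""
    return r_e - sqrt(r_e**2 - 2.0 * r_e * u_e * dt)

def annulus_volume(r_e, dr, dy):
    """Volume of the annulus between radii r_e - dr and r_e, of height dy."""
    return 2.0 * pi * (r_e - dr / 2.0) * dr * dy

r_e, u_e, dt, dy = 2.0, 0.5, 0.1, 1.0
flux_volume = 2.0 * pi * r_e * u_e * dt * dy      # exact volume through the face
dr = exact_delta_r(r_e, u_e, dt)
assert isclose(annulus_volume(r_e, dr, dy), flux_volume)

# the uncorrected width u_e*dt misses the face flux by pi*(u_e*dt)^2*dy
err = flux_volume - annulus_volume(r_e, u_e * dt, dy)
assert isclose(err, pi * (u_e * dt)**2 * dy)
```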
We illustrate this by considering the $x$-direction advection of a small volume of fluid through the right face of the cell with velocity $u_e$, shown as the shaded region in figure \ref{error}. \begin{figure}[!htbp] \centerline{\includegraphics[width=2.5in]{advection2}} \caption{The shaded region is the volume advected through the east cell face in a single timestep.} \label{error} \end{figure} The existing schemes compute this volume as $V_c = 2 \pi (r_e - \frac{u_e \Delta t}{2}) (u_e \Delta t) \Delta y$, where $r_e$ is the distance of the east cell face from the axis of symmetry, $\Delta t$ is the timestep, and $\Delta y$ is the height of the cell. The proposed scheme, in contrast, yields the exact volume, $V = 2 \pi (r_e - \frac{\Delta r}{2}) \Delta r \Delta y$ with $\Delta r = r_e - \sqrt{r_e^2 - 2 r_e u_e \Delta t}$. Thus, the error in the volume calculation is $E = \pi (u_e \Delta t)^2 \Delta y$. We validate the proposed modifications with the following test cases and compare with the results obtained using the open-source multiphase flow solver Gerris \cite{popinet2003}. \subsection{Advection of a torus} In this test case, a torus of minor radius $0.25$ is initialised at $(0.35,0.5)$ in a computational domain of size $(2.0,1.0)$. The torus is advected under the steady-state velocity field $u = 0.1/r$ for $r > 0.05$ and $v = 0$, where $r$ is the distance from the axis of symmetry. The fluid is advected $1000$ timesteps forward in time, and the velocity is then reversed to compute $1000$ timesteps backwards in time. The grid size is $1/128$, and the grid Courant number (CFL) is chosen to be $1$, which corresponds to a time step of $\Delta t=0.0078125$. As seen from figure \ref{Test2}, after $1000$ timesteps the torus is strongly compressed during the advection, since (due to axisymmetry) the same volume occupies a smaller cross-sectional area farther from the axis of symmetry. 
We note that the final interface shape matches very well with the initial position of the torus, thus validating our algorithm. \begin{figure}[!htbp] \centerline{\includegraphics[width=5.in]{Test2}} \caption{Interface shape after $1000$ time steps forward and backward after advecting the torus of radius $0.25$ with a grid Courant (CFL) number of $1$.} \label{Test1} \end{figure} The relative error in the volume between the initial and final distribution of fluid-$1$ for various numbers of forward and backward advection time steps is given in table \ref{table3}. The corresponding relative changes in the volume obtained for the same test case simulated using the Gerris flow solver are also given for comparison. We note that the errors obtained from the present scheme are far smaller than those obtained from Gerris. \begin{table} \caption{Results for relative error in the volume for a torus of radius $0.25$ advected in the radial direction forward and backward in time for different numbers of timesteps.} \centering \begin{tabular}{llll} \toprule \multicolumn{3}{c}{Relative error in volume} \\ \cmidrule(r){1-3} Number of timesteps& Current Solver&Gerris Solver\\ \midrule $1$ & $ \num{5.2e-15}$ & $ \num{1.1e-5}$\\ $10$ & $ \num{2.9e-14}$ & $\num{7.2e-5}$\\ $100$ & $ \num{4.8e-13}$ & $\num{2.1e-4}$\\ $1000$ & $ \num{3.4e-10}$ & $\num{3.8e-4}$\\ \bottomrule \end{tabular} \label{table3} \end{table} As suggested by Kothe et al.\cite{rider1995}, simple linear advection test cases do not reveal the efficacy of advection algorithms appropriately. Thus, we further test the efficacy of the algorithm by subjecting it to a more severe test case of advection of a torus in a Hill's vortex. This is the axisymmetric equivalent of the circle-in-a-vortex test case in 2D \cite{rider1995}. For this velocity field, the interface undergoes strong topological changes including fragmentation and merging due to strong shear effects.
Here we use a modified form of Hill's vortex with a superimposed radial flow field. A torus of radius $0.1$ is initialised at $(0.2,0.8)$ in a computational domain of size $(1.0,1.0)$ with $L=0.5$. The fluid is advected under the highly strained steady-state velocity field given by \begin{eqnarray} u =&0.1\left(\frac{r}{L} \frac{(y-L)}{L}\right) + \frac{0.05}{r}\\ v=&0.1\left[1-\left(\frac{y-L}{L}\right)^{2}-2\left(\frac{r}{L}\right)^{2}\right]. \end{eqnarray} The fluid is advected $4000$ timesteps forward in time and then the velocity is reversed to advect $4000$ timesteps backwards in time. The grid size is $1/128$ and the time step is $\num{1.0e-3}$. \begin{figure}[!htbp] \centerline{\includegraphics[width=3.0in]{Test3}} \caption{Interface shape after $4000$ time steps forward and backward after advecting the torus placed in a vortex. The relative error in change in volume is $\num{1.7e-6}$} \label{Test3} \end{figure} As seen from figure \ref{Test3}, after $4000$ timesteps the shape of the interface is highly distorted by the strained velocity field. The final interface shape matches very well with the initial position of the torus, thus validating our algorithm. The relative error in change in volume between the initial and the final distribution of fluid $1$ for various numbers of time steps is given in table \ref{table4}. The corresponding relative errors for the same test case simulated using the Gerris flow solver are also given. We note that the error in the proposed scheme, even for a large number of timesteps, is more than an order of magnitude smaller than that from the Gerris flow solver.
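The modified Hill's vortex above is divergence-free in the axisymmetric sense, $(1/r)\,\partial(ru)/\partial r + \partial v/\partial y = 0$, which is what makes the forward/backward reversal test meaningful. A quick numerical check of this property (grid and stencil parameters below are hypothetical):

```python
import numpy as np

L = 0.5

def velocity(r, y):
    """Modified Hill's vortex with superimposed radial source flow."""
    u = 0.1 * (r / L) * ((y - L) / L) + 0.05 / r
    v = 0.1 * (1.0 - ((y - L) / L) ** 2 - 2.0 * (r / L) ** 2)
    return u, v

# axisymmetric divergence (1/r) d(ru)/dr + dv/dy via central differences
h = 1e-3
r = np.linspace(0.1, 0.9, 41)
y = np.linspace(0.1, 0.9, 41)
R, Y = np.meshgrid(r, y, indexing="ij")

ru_p = (R + h) * velocity(R + h, Y)[0]
ru_m = (R - h) * velocity(R - h, Y)[0]
div = (ru_p - ru_m) / (2 * h * R) \
    + (velocity(R, Y + h)[1] - velocity(R, Y - h)[1]) / (2 * h)
print(np.abs(div).max())   # analytically zero
```

Since $ru$ is quadratic in $r$ plus a constant, and $v$ is quadratic in $y$, the central differences are exact here and the discrete divergence vanishes to round-off.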
\begin{table} \caption{Results for relative error in volume for a torus of radius $0.1$ placed in a complex velocity field advected forward and backward in time for different numbers of timesteps.} \centering \begin{tabular}{llll} \toprule \multicolumn{3}{c}{Relative error in volume} \\ \cmidrule(r){1-3} Number of timesteps& Current Solver&Gerris Solver\\ \midrule $1$ & $ \num{2.7e-10}$ & $ \num{2.2e-6}$\\ $10$ & $ \num{5.2e-10}$ & $\num{4.5e-6}$\\ $100$ & $ \num{2.5e-8}$ & $\num{1.8e-5}$\\ $1000$ & $ \num{1.3e-6}$ & $\num{4.5e-5}$\\ $4000$ & $ \num{1.7e-6}$ & $\num{4.8e-5}$\\ \bottomrule \end{tabular} \label{table4} \end{table} \subsection{Bubble in a Stagnation Point Flow} In this test case we apply the VOF algorithm presented in this paper to a more complex flow. We solve the Navier-Stokes equations in one-fluid form, given by: \begin{equation} \rho(C) \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} . {\nabla} \mathbf{u} \right) = -{\nabla} p + {\nabla} \cdot \big[\mu(C) \left( {\nabla} \mathbf{u} + ({\nabla}\mathbf{u})^{\small T} \right)\big] + \rho(C) \mathbf{g} + \mathbf{f}^{\gamma}_{v} \label{one_fluid_form} \end{equation} where $\mathbf{u}$ and $p$ are the velocity vector and pressure, respectively, and $\rho(C)$ and $\mu(C)$ are the fluid density and viscosity, which are functions of the void fraction field, $C$. We use Chorin's projection method~\cite{chorin1968numerical} to solve equation \ref{one_fluid_form}, where we discretise the advection term using a second order ENO scheme~\cite{shu1988efficient} and the diffusion terms using central differencing. Surface tension forces, $\mathbf{f}^{\gamma}_{v}$, act only at the interface and are modeled as a volumetric body force using the continuum surface force model of Brackbill, Kothe, and Zemach~\cite{brackbill1992continuum}. The interface is captured using the CLSVOF algorithm given by Sussman and Puckett~\cite{sussman2000coupled}.
This algorithm is mass conserving and calculates the curvature and surface normal with high accuracy, which are used for the surface tension force calculation. The interface is advected by solving the advection equations for the level-set function, $\phi$, and the volume fraction, $C$. \begin{figure}[!htbp] \centerline{\includegraphics[width=4in]{Test4}} \caption{Interface shape after time $t=1$. The interface is flattened against the top wall and stretched due to the underlying velocity. Even when the cross-sectional area changes the volume of the toroidal bubble is maintained very accurately.} \label{Test4} \end{figure} We initialize a toroidal bubble of radius $0.1$ at $(0.2,0.5)$ in a computational domain of unit size, $1 \times 1$. The bottom boundary has an inlet velocity of unity in the upward axial direction and the right boundary has outflow boundary conditions. The top boundary acts as a rigid wall with a no-slip and impermeable surface. The density and viscosity ratios are both $10$, and the Laplace number of the bubble is $La = \frac{\rho D \sigma}{\mu^2} = 0.048$. The incoming axial velocity drags the bubble and flattens it against the top wall, stretching it in the axial direction considerably. Even though the fluid interface undergoes a drastic change in its shape, the volume is conserved to a high degree of accuracy, with a relative volume error of \num{2.1e-6}. \section{Conclusions} In the present work, we have presented several improvements for the implementation of the volume of fluid method in axisymmetric coordinates. We have presented analytical relations for the reconstruction of the piecewise linear interface in axisymmetric coordinates, similar to those given by Scardovelli and Zaleski\cite{Scardovelli2000} for cartesian coordinates. The proposed scheme substantially reduces the computational cost in comparison to the iterative schemes usually employed for the reconstruction.
Further, we showed that even in the axisymmetric coordinate system, machine-precision advection of the volume fraction field can be achieved. We illustrated the improvements by comparing the results with the popular open-source multiphase flow solver Gerris. Finally, we would like to note that similar modifications in the advection scheme for the volume of fluid method in other curvilinear coordinate systems (such as elliptic coordinates) can be derived using the approach presented in this work. \bibliographystyle{unsrt}
\section{A brief review of Wannier functions} \label{sub:WF} Wannier functions (WFs) are defined as the Fourier transformations of Bloch wave functions, with a gauge freedom \begin{equation} \left|w_{\alpha\mathbf{R}}\right\rangle =\frac{1}{\sqrt{\mathcal{N}}}\sum_{\mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{R}}\left|\phi_{\alpha\mathbf{k}}\right\rangle \end{equation} \begin{equation} \left|\phi_{\alpha\mathbf{k}}\right\rangle =\sum_{n}U_{n\alpha}\left(\mathbf{k}\right)\left|\psi_{n\mathbf{k}}\right\rangle \end{equation} Here $\left|\psi_{n\mathbf{k}}\right\rangle $ is the $n$-th Bloch state at $\mathbf{k}$, $\mathbf{R}$ is a lattice vector in real space, $\mathcal{N}$ is the number of cells, $\alpha$ is the Wannier index, and $U\left(\mathbf{k}\right)$ can be an arbitrary unitary matrix. Generally speaking, as long as $\left|\phi_{\alpha\mathbf{k}}\right\rangle $ takes a smooth gauge in the whole Brillouin zone, the corresponding WF $|w_{\alpha \mathbf{R}}\rangle$ will be well localized around the lattice vector $\mathbf{R}$. The Wannier center (WC) is defined as the expectation value of the position operator in the WF. It can be calculated from the Berry connection in momentum space \begin{equation} \left\langle w_{\alpha\mathbf{R}}\right|\hat{\mathbf{x}}\left|w_{\alpha\mathbf{R}}\right\rangle =\int \frac{d^{d}\mathbf{k}}{(2\pi)^d}\ \boldsymbol{\mathcal{A}}_{\alpha}\left(\mathbf{k}\right)+\mathbf{R} \end{equation} where the Berry connection is defined as \begin{equation} \boldsymbol{\mathcal{A}}_{\alpha}\left(\mathbf{k}\right)=i\sum_{mn}\left\langle U_{m\alpha}u_{m}\right|\partial_{\mathbf{k}}\left|U_{n\alpha}u_{n}\right\rangle \end{equation} and $u_{n}\left(\mathbf{k}\right)$ is the periodic part of the Bloch wavefunction $\left|\psi_{n\mathbf{k}}\right\rangle $. In general, WCs are gauge dependent, i.e. they depend on the choice of the gauge $U_{n\alpha}(\mathbf{k})$.
An exception is the 1D insulating system, where the 1D WCs can be thought of as the spectrum of the Wilson loop along the 1D Brillouin zone and thus are gauge invariant quantities. Such a property leads to the modern theory of polarization and has greatly facilitated the study of topological insulators, because the spectrum of the Wilson loop is believed to be isomorphic with the surface dispersion. To define the symmetric WFs, let us first review the concept of Wyckoff positions. General positions in the unit cell can be classified into a few types of Wyckoff positions by their site symmetry groups (SSGs). The SSG for a given site $\mathbf{x}_{1}$ can be defined as the collection of all the space group elements that leave $\mathbf{x}_{1}$ invariant (modulo a lattice vector), i.e. \begin{equation} G\left(\mathbf{x}_{1}\right)=\left\{ g\in G|\exists\mathbf{R}\ s.t.\ g\mathbf{x}_{1}=\mathbf{x}_{1}+\mathbf{R}\right\} \end{equation} Here $g\mathbf{x}_{1}=p_{g}\mathbf{x}_{1}+\mathbf{t}_{g}$ with $p_{g}$ the point group operation of $g$ and $\mathbf{t}_{g}$ the translational operation of $g$. The equivalent positions of $\mathbf{x}_{1}$ can be generated from a complete set of representatives of the quotient group $G/G\left(\mathbf{x}_{1}\right)$ \begin{equation} \mathbf{x}_{\sigma}=g_\sigma \mathbf{x}_{1}-\left[g_\sigma \mathbf{x}_{1}\right] \qquad g_\sigma \in G/G(\mathbf{x}_1) \end{equation} where $g_{1}=e$ is the identity, and $\left[g_\sigma\mathbf{x}_{1}\right]$ is the lattice vector chosen such that $\mathbf{x}_{\sigma}$ lies in the home cell. Hereafter, for convenience, we will sometimes split the Wannier index $\alpha$ into a site index $\sigma$ and an orbital index $\mu$, indicating that the WF is the $\mu$-th orbital located at the site $\mathbf{x}_{\sigma}$.
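As a concrete illustration of generating equivalent positions, a minimal sketch for the point-group part of the 2D square-lattice space group $p4$ used later in this work ($C_4$ about the origin, acting on fractional coordinates; the general-position coordinates are hypothetical):

```python
from fractions import Fraction

def c4(pos):
    """C4 rotation about the origin on fractional coordinates:
    (x, y) -> (-y, x), reduced to the home cell (mod 1)."""
    x, y = pos
    return ((-y) % 1, x % 1)

def orbit(pos):
    """Equivalent positions of `pos` in the home cell under C4."""
    pts, p = set(), pos
    for _ in range(4):
        pts.add(p)
        p = c4(p)
    return pts

F = Fraction
positions = {"1a": (F(0), F(0)), "1b": (F(1, 2), F(1, 2)),
             "2c": (F(0), F(1, 2)), "4d": (F(1, 10), F(1, 5))}
for name, pos in positions.items():
    print(name, len(orbit(pos)))   # orbit sizes 1, 1, 2, 4
```

The orbit sizes reproduce the Wyckoff multiplicities $1a$, $1b$, $2c$, $4d$ quoted later; a full Wyckoff analysis would additionally track the SSG reps at each site, which this sketch omits.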
A set of WFs is called symmetric if (i) they form representations (reps) of the SSGs \begin{equation} \forall g\in G\left(\mathbf{x}_{\sigma}\right)\quad g\left|w_{\sigma\mu\mathbf{R}}\right\rangle =\sum_{\nu}D_{\nu\mu}\left(g\right)\left|w_{\sigma\nu,\mathbf{R}+[g\mathbf{x}_\sigma]}\right\rangle \end{equation} (ii) WFs at a general equivalent position of $\mathbf{x}_{1}$ can be generated from the WFs at $\mathbf{x}_{1}$ by a symmetry operation relating the two positions, and (iii) for time-reversal invariant systems, we further require the WFs to form Kramers' pairs, i.e. \begin{equation} T\left|w_{\alpha\mathbf{R}}\right\rangle =\sum_{\beta}\Omega_{\beta,\alpha}\left|w_{\beta\mathbf{R}}\right\rangle \end{equation} where $\Omega$ is an anti-symmetric unitary matrix. For convenience, in the following we take its standard form as \begin{equation} \Omega=\begin{bmatrix}0 & -\mathbb{I}\\ \mathbb{I} & 0 \end{bmatrix}\label{eq:omg} \end{equation} The transformations of $\left|\phi_{\sigma\mu\mathbf{k}}\right\rangle $ under symmetry operations are completely determined by the symmetry properties of the WFs.
For the space groups considered in our work, we have \begin{equation} \forall g\in G\qquad g\left|\phi_{\sigma\mu\mathbf{k}}\right\rangle =\sum_{\sigma^{\prime}\mu^{\prime}}D_{\sigma^{\prime}\mu^{\prime},\sigma\mu}^{\mathbf{k}}\left(g\right)\left|\phi_{\sigma^{\prime}\mu^{\prime}g\mathbf{k}}\right\rangle \label{eq:g-phi} \end{equation} \begin{equation} D_{\sigma^{\prime}\mu^{\prime},\sigma\mu}^{\mathbf{k}}\left(g\right)=\delta_{\mathbf{x}_{\sigma^{\prime}},g\mathbf{x}_{\sigma}}^{\prime}e^{ig\mathbf{k}\cdot\left(\mathbf{x}_{\sigma^{\prime}}-g\mathbf{x}_{\sigma}\right)}D_{\mu^{\prime}\mu}\left(g\right)\label{eq:Dk-def} \end{equation} where $\delta_{\mathbf{x}_{\sigma^{\prime}},g\mathbf{x}_{\sigma}}^{\prime}=1$ if $\mathbf{x}_{\sigma^{\prime}}=g\mathbf{x}_{\sigma}$ modulo a lattice vector, and \begin{equation} T\left|\phi_{\alpha\mathbf{k}}\right\rangle =\sum_{\beta}\Omega_{\beta,\alpha}\left|\phi_{\beta,-\mathbf{k}}\right\rangle \label{eq:T-phi} \end{equation} The transformation matrices $D^{\mathbf{k}}$ and $\Omega$ will be referred to as the sewing matrices in the following. \section{Gauge invariant 2D Wannier centers} The discussion about the mismatch between atom sites and WCs in the text presumes the gauge invariance of the occupied 2D WCs; otherwise, we could choose a gauge where the WCs move away from the plaquette center, negating all the arguments. Similarly, the gauge invariance of the $\mathbb{Z}_2$-flow discussed in the text also needs the 2D WCs at the $k_{z}=0,\pi$ slices to be gauge invariant. Here we establish this cornerstone by proving that in some special cases the 2D WCs are indeed gauge invariant, guaranteed by the crystalline symmetry. Since all the symmetry properties of symmetric WFs are encoded in the sewing matrices, in the proof below we follow the logic that two sets of symmetric WFs can be deformed into each other \emph{only if} the sewing matrices generated from them can be transformed into each other by a smooth gauge transformation.
A relevant and useful concept is the band representation (BR), i.e. the set of irreducible reps (irreps) at high-symmetry momenta, which constitutes the diagonal blocks of the sewing matrices in momentum space. In some cases, the information in the BR is enough to demonstrate that two sets of WFs are inequivalent. Therefore, before the proof, let us figure out what the BR can tell us. The smallest 2D space group containing $C_{4}$ is $p4$. As shown in table \ref{tab:wkf-p4}, it has four types of Wyckoff positions in real space, wherein the $1a$ and $1b$ positions are the site and plaquette center, respectively. To describe the BR we only need to count the irreps at $\Gamma$ and $M$, because the irreps at $X$ and at a general momentum are always the same. After a few derivations according to Eq. (\ref{eq:Dk-def}), we find the mappings from symmetric WFs to BRs as \begin{equation} E_{\frac{1}{2}}^{1a}\mapsto E_{\frac{1}{2}}^{\Gamma}+E_{\frac{1}{2}}^{M}\label{eq:2Dmap-1a12} \end{equation} \begin{equation} E_{\frac{3}{2}}^{1a}\mapsto E_{\frac{3}{2}}^{\Gamma}+E_{\frac{3}{2}}^{M}\label{eq:2Dmap-1a32} \end{equation} \begin{equation} E_{\frac{1}{2}}^{1b}\mapsto E_{\frac{1}{2}}^{\Gamma}+E_{\frac{3}{2}}^{M} \end{equation} \begin{equation} E_{\frac{3}{2}}^{1b}\mapsto E_{\frac{3}{2}}^{\Gamma}+E_{\frac{1}{2}}^{M}\label{eq:2Dmap-1b32} \end{equation} \begin{equation} E_{\frac{1}{2}}^{2c}\mapsto E_{\frac{1}{2}}^{\Gamma}+E_{\frac{3}{2}}^{\Gamma}+E_{\frac{1}{2}}^{M}+E_{\frac{3}{2}}^{M} \end{equation} \begin{equation} E_{\frac{1}{2}}^{4d}\mapsto2E_{\frac{1}{2}}^{\Gamma}+2E_{\frac{3}{2}}^{\Gamma}+2E_{\frac{1}{2}}^{M}+2E_{\frac{3}{2}}^{M}\label{eq:2Dmap-4d12} \end{equation} Here we use the symbol of an irrep decorated with a Wyckoff position to represent the WFs forming this irrep at this Wyckoff position, and the symbol of an irrep decorated with a momentum to represent the bands forming this irrep at this momentum. We follow the notations for irreps in Ref. [\onlinecite{point-group}].
In the following, we will make use of these mappings. \begin{table} \begin{centering} \begin{tabular}{|c|c|c|c|c|c|} \hline SSG & W & K & Coordinates & irreps at W & irreps at K\tabularnewline \hline \hline \multirow{2}{*}{$C_{4}$} & $1a$ & $\Gamma$ & $\left(0,0\right)$ & $E_{\frac{1}{2}}$ $E_{\frac{3}{2}}$ & $E_{\frac{1}{2}}$ $E_{\frac{3}{2}}$\tabularnewline \cline{2-6} & $1b$ & $M$ & $\left(\frac{1}{2},\frac{1}{2}\right)$ & $E_{\frac{1}{2}}$ $E_{\frac{3}{2}}$ & $E_{\frac{1}{2}}$ $E_{\frac{3}{2}}$\tabularnewline \hline $C_{2}$ & $2c$ & $X$ & $\left(0,\frac{1}{2}\right)$ & $E_{\frac{1}{2}}$ & $E_{\frac{1}{2}}$\tabularnewline \hline $C_{1}$ & $4d$ & ... & $\left(x,y\right)$ & $E_{\frac{1}{2}}$ & $E$ \tabularnewline \hline \end{tabular} \par\end{centering} \raggedright{}\protect\caption{\label{tab:wkf-p4}The Wyckoff positions and high-symmetry momenta in 2D space group $p4$ (with time-reversal symmetry). Due to the time-reversal symmetry, all the irreps at Wyckoff positions and time-reversal invariant momenta are doubly degenerate. } \end{table} \subsection{Gauge invariant Wannier centers at $1a$ (site)\label{sub:2DWF-1a}} As will be proved later, a set of WFs at $1a$ can be moved away by a symmetric gauge transformation \emph{only if} they form a rep consisting of an even number of copies of the combined rep $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$. Therefore, the conclusion is that, for a given set of WFs at $1a$, after removing all even numbers of copies of the rep $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$, the remaining WFs, consisting of \begin{equation} n\left(E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}\right)+mE_{\frac{1}{2}}^{1a}+m^{\prime}E_{\frac{3}{2}}^{1a} \end{equation} where $n=0,1$ and one of $m$, $m^{\prime}$ equals zero, will remain fixed under any symmetry-allowed gauge transformation. We will prove this statement in three steps.
Firstly, we will show that the eight WFs forming the rep $2E_{\frac{1}{2}}^{1a}+2E_{\frac{3}{2}}^{1a}$ can be moved to four Kramers' pairs at $4d$ positions without breaking any symmetry. Applying a unitary transformation to the bases in $2E_{\frac{1}{2}}^{1a}+2E_{\frac{3}{2}}^{1a}$ \begin{align} \left|w_{1}\right\rangle & =e^{-i\frac{\pi}{4}}\left|\frac{1}{2}\right\rangle_A + e^{-i\frac{\pi}{4}}\left|\bar{\frac{1}{2}}\right\rangle_B \nonumber \\ & + e^{i\frac{\pi}{4}}\left|\frac{3}{2}\right\rangle_A + e^{i\frac{\pi}{4}}\left|\bar{\frac{3}{2}}\right\rangle_B \end{align} \begin{equation} \left|w_{i+1}\right\rangle = C_4 \left|w_{i}\right\rangle \qquad i=1,2,3 \end{equation} \begin{equation} \left|w_{i+4}\right\rangle = T \left|w_{i}\right\rangle \qquad i=1,2,3,4 \end{equation} where the subscript $A/B$ is used to distinguish the two copies of the same irrep, we find that the time-reversal rep has the standard form $\Omega$, while the $C_{4}$ rep is identical to that of the $E_\frac{1}{2}^{4d}$ WFs \begin{equation} D\left(C_{4}\right)=\sigma_{0}\otimes\begin{bmatrix}0 & 0 & 0 & -1\\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix} \end{equation} Therefore, the WFs in $2E_{\frac{1}{2}}^{1a}+2E_{\frac{3}{2}}^{1a}$ can be moved to $4d$ without breaking any symmetry. We denote this equivalence relation as $2E_{\frac{1}{2}}^{1a}+2E_{\frac{3}{2}}^{1a}\sim E_{\frac{1}{2}}^{4d}$. Readers may find that such an equivalence is consistent with the BR mappings in Eqs. (\ref{eq:2Dmap-1a12})-(\ref{eq:2Dmap-4d12}). Secondly, it is obvious that the remaining $mE_{\frac{1}{2}}^{1a}$ or $m^{\prime}E_{\frac{3}{2}}^{1a}$ WFs alone cannot be gauged away, because the BR generated from $mE_{\frac{1}{2}}^{1a}$ or $m^{\prime}E_{\frac{3}{2}}^{1a}$ cannot be reproduced by any combination of WFs at other sites. Thus, to complete the proof, we only need to prove that a single rep $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$ must stay at $1a$ under any symmetric gauge transformation.
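Two algebraic properties of this $D(C_4)$ are worth checking explicitly: the $4\times4$ cyclic block $B$ satisfies $B^4=-\mathbb{I}$, as a spinful fourfold rotation must, and $D(C_4)=\sigma_0\otimes B$ commutes with the standard time-reversal rep $\Omega$, so the Kramers-pair structure survives the rotation. A short numerical verification:

```python
import numpy as np

# 4x4 cyclic block of the E_{1/2}^{4d} rep of C4 given in the text
B = np.array([[0, 0, 0, -1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
D_C4 = np.kron(np.eye(2), B)              # sigma_0 (x) B, acting on 8 WFs

I4 = np.eye(4)
Omega = np.block([[np.zeros((4, 4)), -I4],   # standard time-reversal rep
                  [I4, np.zeros((4, 4))]])

# spinful fourfold rotation squares to a 2*pi rotation: C4^4 = -1
print(np.allclose(np.linalg.matrix_power(B, 4), -I4))
# [D(C4), Omega] = 0: time reversal and C4 are compatible on this basis
print(np.allclose(D_C4 @ Omega, Omega @ D_C4))
```

Both checks print `True`, confirming that the eight transformed WFs carry a consistent spinful $C_4$ and time-reversal structure.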
From the BR mappings, we find that $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$ can only be moved to $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$ or $E_{\frac{1}{2}}^{2c}$. Such transformations are very unnatural from an intuitive perspective, because if we continuously move the four WFs at $1a$ to $1b$ or $2c$, the intermediate process will break either time-reversal or $C_{4}$ symmetry. (Enforced by the $C_{4}$ symmetry, the four WFs must move in four different directions, leading to the separation of Kramers' pairs.) To prove this statement, here we show that the gauge transformation from $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$ to $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$ or $E_{\frac{1}{2}}^{2c}$ must be singular. Let us first consider the transformation from $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$ to $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$. For convenience, here we choose the WF bases with a ``cyclical'' gauge \begin{equation} \left|w_{1}\right\rangle =e^{-i\frac{\pi}{4}}\left|\frac{1}{2}\right\rangle +e^{-i\frac{\pi}{4}}\left|\bar{\frac{1}{2}}\right\rangle +e^{i\frac{\pi}{4}}\left|\frac{3}{2}\right\rangle +e^{i\frac{\pi}{4}}\left|\bar{\frac{3}{2}}\right\rangle \label{eq:w1} \end{equation} \begin{equation} \left|w_{i+1}\right\rangle =C_{4}^{\prime}\left|w_{i}\right\rangle \qquad i=1,2,3\label{eq:wi} \end{equation} where $C_{4}^{\prime}$ is the rotation operation centered at $1a$ and $1b$ for the $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$ and $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$ WFs, respectively. In this gauge, for both the $1a$ and $1b$ positions, $\left|w_{i}\right\rangle $ and $\left|w_{i+2}\right\rangle $ form a Kramers' pair and the time-reversal rep matrix has the standard form $\Omega$. According to Eq.
(\ref{eq:Dk-def}), the $C_{4}$ sewing matrices generated from $\left|w_{i}\right\rangle $ at the $1a$ and $1b$ positions can be derived respectively as \begin{equation} D^{\mathbf{k}}=W=\begin{bmatrix}0 & 0 & 0 & -1\\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix} \end{equation} \begin{equation} \tilde{D}^{\mathbf{k}}=We^{-ik_{y}} \end{equation} Now assume there is a well-defined gauge transformation $U\left(\mathbf{k}\right)$ relating these two sets of WFs; then from Eqs. (\ref{eq:g-phi}) and (\ref{eq:T-phi}) such a gauge transformation must satisfy \begin{equation} U\left(-\mathbf{k}\right)=\Omega U^{*}\left(\mathbf{k}\right)\Omega^{T}\label{eq:U-omg} \end{equation} \begin{equation} U\left(C_{4}\mathbf{k}\right)=WU\left(\mathbf{k}\right)W^{\dagger}e^{ik_{y}}\label{eq:U-W} \end{equation} leading to the equation $U^{*}\left(\mathbf{k}\right)=U\left(\mathbf{k}\right)e^{ik_{x}+ik_{y}}$. Thus we can write $U\left(\mathbf{k}\right)$ as an orthogonal matrix $O\left(\mathbf{k}\right)$ multiplied by a phase factor \begin{equation} U\left(\mathbf{k}\right)=O\left(\mathbf{k}\right)e^{-\frac{i}{2}\left(k_{x}+k_{y}\right)}\qquad O\left(\mathbf{k}\right)\in O\left(n\right) \end{equation} Substituting this back into Eqs. (\ref{eq:U-omg}) and (\ref{eq:U-W}), we get the constraints on $O\left(\mathbf{k}\right)$ as (i) $O\left(-\mathbf{k}\right)=\Omega O\left(\mathbf{k}\right)\Omega^{T}$ and (ii) $O\left(C_{4}\mathbf{k}\right)=WO\left(\mathbf{k}\right)W^{\dagger}$, wherein the first constraint is implied by the second one. By requiring $U\left(\mathbf{k}\right)$ to be periodic, we get another two constraints as (iii) $O\left(k_{x}+2\pi,k_{y}\right)=-O\left(\mathbf{k}\right)$ and (iv) $O\left(k_{x},k_{y}+2\pi\right)=-O\left(\mathbf{k}\right)$. In fact, such constraints force $O\left(\mathbf{k}\right)$ to be singular at some momenta.
To see this, express $O\left(\mathbf{k}\right)$ as \begin{equation} O\left(\mathbf{k}\right)=\xi\cdot\exp\left(-i\mathfrak{H}\left(\mathbf{k}\right)\right) \end{equation} where $\xi=\pm1$ is the determinant, and $\mathfrak{H}\left(\mathbf{k}\right)$ is a $4\times4$ purely imaginary Hermitian matrix parameterized as \begin{align} \mathfrak{H}\left(\mathbf{k}\right) & =\omega\left(\mathbf{k}\right)\left[n_{1}\left(\mathbf{k}\right)\tau_{y}\sigma_{x}+n_{2}\left(\mathbf{k}\right)\tau_{0}\sigma_{y}+n_{3}\left(\mathbf{k}\right)\tau_{y}\sigma_{z}\right]\nonumber \\ & +\theta\left(\mathbf{k}\right)\left[m_{1}\left(\mathbf{k}\right)\tau_{z}\sigma_{y}+m_{2}\left(\mathbf{k}\right)\tau_{x}\sigma_{y}+m_{3}\left(\mathbf{k}\right)\tau_{y}\sigma_{0}\right] \end{align} Here we take the convention that $\omega$, $\theta$ are real and positive, and $\mathbf{n}=\left(n_{1},n_{2},n_{3}\right)^{T}$, $\mathbf{m}=\left(m_{1},m_{2},m_{3}\right)^{T}$ are unit vectors. It should be noticed that the first three matrices and the last three matrices each form a set of $SU\left(2\right)$ generators, and the two sets commute with each other; such a parameterization realizes the well-known correspondence between $SO\left(4\right)$ and $SU\left(2\right)\times SU\left(2\right)$.
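The claimed algebraic structure of the two generator triples can be verified directly: each triple closes into an $\mathfrak{su}(2)$ algebra, $[X_i,X_j]=2i\epsilon_{ijk}X_k$, and the two triples mutually commute (each pair anticommutes in both the $\tau$ and the $\sigma$ factor, hence commutes overall). A numerical check:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# the two triples of generators appearing in H(k); first factor is tau
A = [kron(sy, sx), kron(s0, sy), kron(sy, sz)]   # tau_y s_x, tau_0 s_y, tau_y s_z
Bm = [kron(sz, sy), kron(sx, sy), kron(sy, s0)]  # tau_z s_y, tau_x s_y, tau_y s_0

def comm(X, Y):
    return X @ Y - Y @ X

# each triple closes into su(2): [X_i, X_j] = 2i eps_{ijk} X_k
for T in (A, Bm):
    assert np.allclose(comm(T[0], T[1]), 2j * T[2])
    assert np.allclose(comm(T[1], T[2]), 2j * T[0])
    assert np.allclose(comm(T[2], T[0]), 2j * T[1])
# and the two triples commute with each other -> SU(2) x SU(2)
for X in A:
    for Y in Bm:
        assert np.allclose(comm(X, Y), np.zeros_like(X))
print("su(2) x su(2) structure verified")
```

This is exactly what allows $O(\mathbf{k})$ to factorize into the two commuting pieces $\mathcal{O}^1$ and $\mathcal{O}^2$ below.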
Therefore, the orthogonal matrix can be expressed as a product of two commuting matrices \begin{equation} O\left(\mathbf{k}\right)=\xi\cdot\mathcal{O}^{1}\left(\mathbf{k}\right)\mathcal{O}^{2}\left(\mathbf{k}\right) \end{equation} where \begin{equation} \mathcal{O}^{1}=\cos\omega-i\sin\omega\left(n_{1}\tau_{y}\sigma_{x}+n_{2}\tau_{0}\sigma_{y}+n_{3}\tau_{y}\sigma_{z}\right) \end{equation} \begin{equation} \mathcal{O}^{2}=\cos\theta-i\sin\theta\left(m_{1}\tau_{z}\sigma_{y}+m_{2}\tau_{x}\sigma_{y}+m_{3}\tau_{y}\sigma_{0}\right) \end{equation} The constraints (i) and (ii) lead to \begin{equation} \omega\left(C_{4}\mathbf{k}\right)=\omega\left(\mathbf{k}\right)\qquad\theta\left(C_{4}\mathbf{k}\right)=\theta\left(\mathbf{k}\right)\label{eq:omg-C4} \end{equation} \begin{equation} \mathbf{n}\left(C_{4}\mathbf{k}\right)=\left[n_{2},n_{1},-n_{3}\right]^{T}\left(\mathbf{k}\right) \end{equation} \begin{equation} \mathbf{m}\left(C_{4}\mathbf{k}\right)=\left[-m_{2},m_{1},m_{3}\right]^{T}\left(\mathbf{k}\right)\label{eq:nm-C4} \end{equation} And the anti-periodic property in constraints (iii) and (iv) must be realized by either $\mathcal{O}^{1}$ or $\mathcal{O}^{2}$. In fact, whichever $\mathcal{O}$ is chosen to be anti-periodic, the anti-periodic condition together with the symmetry constraints (Eqs. (\ref{eq:omg-C4})-(\ref{eq:nm-C4})) will make $\mathcal{O}$ singular at some momenta. Here we only take $\mathcal{O}^{1}$ as an example.
With the anti-periodic condition, only two branches of solutions exist. The first is \begin{equation} \omega\left(k_{x}+2\pi,k_{y}\right)=\omega\left(k_{x},k_{y}+2\pi\right)=\omega\left(\mathbf{k}\right)+\pi \end{equation} \begin{equation} \mathbf{n}\left(k_{x}+2\pi,k_{y}\right)=\mathbf{n}\left(k_{x},k_{y}+2\pi\right)=\mathbf{n}\left(\mathbf{k}\right) \end{equation} and the second is \begin{equation} \omega\left(\mathbf{k}\right)=\frac{\pi}{2} \end{equation} \begin{equation} \mathbf{n}\left(k_{x}+2\pi,k_{y}\right)=\mathbf{n}\left(k_{x},k_{y}+2\pi\right)=-\mathbf{n}\left(\mathbf{k}\right)\label{eq:n-anti} \end{equation} Obviously, the first solution breaks the $C_{4}$ symmetry constraint (Eq. (\ref{eq:omg-C4})) because $\omega\left(\pi,0\right)=\omega\left(-\pi,0\right)+\pi$. The second solution is singular at $\left(\pi,0\right)$, since there must be $\mathbf{n}\left(\pi,0\right)=0$ due to the $C_{4}$ constraint (Eq. (\ref{eq:omg-C4})) and the anti-periodic constraint (Eq. (\ref{eq:n-anti})). Therefore, we conclude that no smooth gauge transformation can deform $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$ to $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$. The proof of the inequivalence between $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$ and $E_{\frac{1}{2}}^{2c}$ exactly parallels the above process. \subsection{Gauge invariant Wannier centers at other positions} As the $1a$ and $1b$ positions can be renamed into each other by re-choosing the origin, there is no physical difference between them, and all the statements about $1a$ also hold for $1b$. Therefore a set of WFs at $1b$ can be moved away by a symmetric gauge transformation \emph{only if} the WFs consist of an even number of copies of the rep $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$. After removing all even numbers of copies of the rep $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$, the centers of the remaining WFs are gauge invariant.
As for the $2c$ position, just like the $2E_{\frac{1}{2}}^{1a}+2E_{\frac{3}{2}}^{1a}$ WFs in section \ref{sub:2DWF-1a}, a pair of $E_{\frac{1}{2}}^{2c}$ can be reduced to four Kramers' pairs at the $4d$ position, i.e. $2E_{\frac{1}{2}}^{2c}\sim E_{\frac{1}{2}}^{4d}$. Thus, if there is an even number of $E_{\frac{1}{2}}^{2c}$ at the $2c$ position, all of them can be gauged away to $4d$ positions; however, if there is an odd number of $E_{\frac{1}{2}}^{2c}$, at least two WFs (a Kramers' pair) will stay at $2c$ under any symmetric gauge transformation. In summary, a set of WFs at any Wyckoff position can be moved by symmetric gauge transformations \emph{if and only if} the rep (of the SSG at the Wyckoff position) they form is consistent with a set of $E_\frac{1}{2}^{4d}$ WFs. In other words, all motion of WFs must pass through $4d$ positions, which is consistent with the intuitive picture. \subsection{Occupied Wannier functions for the 2D model\label{sub:2DWF}} The occupied bands of our 2D model give the BR $E_{\frac{1}{2}}^{\Gamma}+E_{\frac{3}{2}}^{\Gamma}+E_{\frac{1}{2}}^{M}+E_{\frac{3}{2}}^{M}$, which is independent of the parameter $\Delta$ since the corresponding term vanishes at $\Gamma$ and $M$. However, such a BR cannot give concrete real-space information because it can be generated from $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$, or $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$, or $E_{\frac{1}{2}}^{2c}$. Here, by constructing the WFs explicitly, we will show that the occupied states are equivalent to $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$ WFs. We follow the projection procedure described in Ref. [\onlinecite{Soluyanov2011}].
Firstly, let us guess four trial local orbitals, denoted by $\left|\gamma_{\alpha}\right\rangle $, in the home cell and expand them in the atomic orbitals of the model \begin{equation} \left|\gamma_{\alpha}\right\rangle =\sum_{\mathbf{R}\beta}^{\prime}\left|a_{\beta\mathbf{R}}\right\rangle M_{\beta\alpha}^{\mathbf{R}} \end{equation} Here $\left|a_{\beta\mathbf{R}}\right\rangle $ is the $\beta$-th atomic orbital in the cell at $\mathbf{R}$ in our 2D model, $M^{\mathbf{R}}$ ($8\times4$) are the overlap matrices between the atomic orbitals and the trial orbitals, and the summation $\sum_{\mathbf{R}}^{\prime}$ is taken only over a few lattice vectors around the home cell. Assuming that $\left|\gamma_{\alpha}\right\rangle $ form the rep $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$ and limiting the summation over $\mathbf{R}$ to $\mathbf{R}_{1}=\left(0,0\right)$, $\mathbf{R}_{2}=\left(1,0\right)$, $\mathbf{R}_{3}=\left(1,1\right)$, and $\mathbf{R}_{4}=(0,1)$, the symmetry properties satisfied by $\left|\gamma_{\alpha}\right\rangle $ imply a set of constraints on $M^{\mathbf{R}}$ \begin{equation} M^{\mathbf{R}_{i+1}}=C_{4}M^{\mathbf{R}_{i}}D^{\gamma\dagger}\left(C_{4}\right)\label{eq:MR-C4} \end{equation} \begin{equation} M^{\mathbf{R}_{i}}=TM^{\mathbf{R}_{i}}\Omega^{T}\label{eq:MR-T} \end{equation} Here $C_{4}=\tau_{z}e^{-i\frac{\pi}{4}s_{z}}$ is the $C_{4}$ operator on the atomic orbitals, $D^{\gamma}\left(C_{4}\right)$ is the $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$ rep matrix of $C_{4}$, $T=-is_{y}K$ is the time-reversal operator on the atomic orbitals, and $\Omega$ is the time-reversal rep on $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$.
Aligning the gauge of the occupied Bloch states with respect to the trial orbitals, we can define a set of projected Bloch-like states \begin{equation} \left|\Upsilon_{\alpha\mathbf{k}}\right\rangle =\sum_{n\in\mathrm{occ}}\left|\psi_{n\mathbf{k}}\right\rangle \left\langle \psi_{n\mathbf{k}}\right|\gamma_{\alpha}\rangle \end{equation} and the overlap matrix between them \begin{equation} S_{\alpha\beta}\left(\mathbf{k}\right)=\langle\Upsilon_{\alpha\mathbf{k}}|\Upsilon_{\beta\mathbf{k}}\rangle \end{equation} Then, by the L\"{o}wdin orthonormalization procedure, a set of orthonormal Bloch-like states is obtained \begin{equation} \left|\tilde{\psi}_{\alpha\mathbf{k}}\right\rangle =\sum_{\beta}S_{\beta\alpha}^{-\frac{1}{2}}\left(\mathbf{k}\right)\left|\Upsilon_{\beta\mathbf{k}}\right\rangle \end{equation} The WFs transformed from these Bloch-like states will be well localized and satisfy the $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$ rep as long as the overlap matrix $S\left(\mathbf{k}\right)$ is non-singular over the whole Brillouin zone. In practice, we generate a random $M^{\mathbf{R}_{1}}$ matrix and symmetrize it according to Eqs. (\ref{eq:MR-C4})-(\ref{eq:MR-T}). A non-singular $S\left(\mathbf{k}\right)$ has been successfully obtained. The WCs are also confirmed by the Wilson loop method and are indeed found to be located at the $1b$ position.
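The L\"{o}wdin step itself is a short computation once $S(\mathbf{k})$ is diagonalized. A minimal NumPy sketch at a single $\mathbf{k}$-point (the $8\times4$ shape of \texttt{Upsilon}, whose columns stand for the projected states $|\Upsilon_{\alpha\mathbf{k}}\rangle$, is illustrative):

```python
import numpy as np

def lowdin_orthonormalize(Upsilon):
    """Orthonormalize projected Bloch-like states at one k-point.

    Upsilon: (n_orbitals, n_wann) array; column alpha is |Upsilon_{alpha k}>.
    Returns columns |psi~_{alpha k}> = sum_beta S^{-1/2}_{beta alpha} |Upsilon_{beta k}>.
    """
    S = Upsilon.conj().T @ Upsilon            # overlap S_{alpha beta}(k)
    w, V = np.linalg.eigh(S)                  # S is Hermitian, positive if non-singular
    if w.min() < 1e-10:                       # singular S(k): trial orbitals are bad
        raise ValueError("S(k) is (nearly) singular; choose other trial orbitals")
    S_inv_sqrt = (V * w**-0.5) @ V.conj().T   # V diag(w^{-1/2}) V^dagger
    return Upsilon @ S_inv_sqrt
```

Scanning $\mathbf{k}$ over the Brillouin zone and Fourier transforming the resulting states then yields the WFs; the error branch flags exactly the non-singularity condition stated above.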
In the Wilson loop method, the center of the $\alpha$-th WF is calculated by \begin{equation} x_{\alpha}=\frac{1}{2\pi}\int dk_{y}x_{\alpha}\left(k_{y}\right) \end{equation} \begin{equation} y_{\alpha}=\frac{1}{2\pi}\int dk_{x}y_{\alpha}\left(k_{x}\right) \end{equation} \begin{align} e^{i2\pi x_{\alpha}\left(k_{y}\right)} & =\langle\tilde{u}_{\alpha,0,k_{y}}|\tilde{u}_{\alpha,\left(N-1\right)\Delta k,k_{y}}\rangle\cdots\nonumber \\ & \times\langle\tilde{u}_{\alpha,2\Delta k,k_{y}}|\tilde{u}_{\alpha,\Delta k,k_{y}}\rangle\langle\tilde{u}_{\alpha,\Delta k,k_{y}}|\tilde{u}_{\alpha,0,k_{y}}\rangle \end{align} \begin{align} e^{i2\pi y_{\alpha}\left(k_{x}\right)} & =\langle\tilde{u}_{\alpha,k_{x},0}|\tilde{u}_{\alpha,k_{x},\left(N-1\right)\Delta k}\rangle\cdots\nonumber \\ & \times\langle\tilde{u}_{\alpha,k_{x},2\Delta k}|\tilde{u}_{\alpha,k_{x},\Delta k}\rangle\langle\tilde{u}_{\alpha,k_{x},\Delta k}|\tilde{u}_{\alpha,k_{x},0}\rangle \end{align} where $\Delta k=\frac{2\pi}{N}$, and $\left|\tilde{u}_{\alpha\mathbf{k}}\right\rangle $ is the periodic part of the orthonormal Bloch-like state $\left|\tilde{\psi}_{\alpha\mathbf{k}}\right\rangle $. It should be noticed that, whatever value $\Delta$ takes, the occupied WCs must locate at $1b$. This is simply because the $\Delta$ term cannot close the band gap and the four WFs $E_\frac{1}{2}^{1b}+E_\frac{3}{2}^{1b}$ cannot be moved away from $1b$ by any adiabatic process, as proved before. \section{Wannier center flow in 3D system} \subsection{Classification of the flows}\label{sub:flowclass} For a 3D system with both time-reversal and $C_{4}$ symmetries, we can choose a tetragonal cell with its principal axis along the $z$ direction and apply a Fourier transformation along $z$. Here we focus on the case where all the $k_{z}$-slices are equivalent to some 2D atomic insulator.
The above conclusions about the gauge-invariant 2D WCs apply to the $k_{z}=0,\pi$-slices because both the time-reversal and $C_{4}$ symmetries are present there. However, in generic intermediate $k_{z}$-slices, these conclusions fail because of the absence of time-reversal symmetry. An immediate observation is then that a bulk state must be nontrivial if there is a mismatch between its 2D WCs in the $k_{z}=0$- and $k_{z}=\pi$-slices. We argue that the bulk topology is determined solely by the WFs at the $k_{z}=0,\pi$-slices: the sewing matrices in the intermediate slices are completely determined by the two ends (from the compatibility relation), and all 2D atomic insulators with the same sewing matrices are topologically equivalent, so all the possible evolutions must be equivalent to each other. Therefore, for a given set of 2D WFs at the $k_{z}=0$- and $k_{z}=\pi$-slices, a nontrivial flow exists if (i) the WCs at the $k_{z}=0$- and $k_{z}=\pi$-slices are inequivalent to each other and (ii) the WCs at the two ends can be \emph{continuously} deformed into each other by a time-reversal breaking process. Considering that the gauge-invariant WCs at the $k_{z}=0,\pi$-slices can only locate at the $1a$, $1b$ and $2c$ positions, and that the flows between $1a$ and $2c$ can be generated from the flows between $1b$ and $2c$ together with the flows between $1b$ and $1a$, to figure out the flow classification we only need to discuss the latter two cases. Let us start with the flows between $1b$ and $1a$. For both $1b$ and $1a$ in the $k_{z}=0$- or $k_{z}=\pi$-slices, we only need to study the remaining immovable irreps \begin{equation} n\left(E_{\frac{1}{2}}+E_{\frac{3}{2}}\right)+mE_{\frac{1}{2}}+m^{\prime}E_{\frac{3}{2}} \end{equation} as discussed in section \ref{sub:2DWF-1a}. Here $n=0,1$ and one of $m,m^{\prime}$ equals zero. Firstly, we will show that the $2m$ ($2m^{\prime}$) 2D WFs in the $E_{\frac{1}{2}}$ ($E_{\frac{3}{2}}$) irreps at $1a$ or $1b$ cannot move along the flow.
This can be seen by presuming an infinitesimal move and comparing the $C_{4}$ rep matrix before and after the move. Here we take $m$ $E_{\frac{1}{2}}^{1a}$ irreps as an example. Before the move, the $C_{4}$ rep matrix is a direct sum of $m$ $E_{\frac{1}{2}}$ rep matrices, so the trace gives $\mathrm{Tr}\left[D\left(C_{4}\right)\right]=m\sqrt{2}$. After the move, the WFs located at the $4d$ positions must form a traceless $D\left(C_{4}\right)$, because the four equivalent $4d$ positions transform into each other in turn under the $C_{4}$ rotation. Therefore the presumed infinitesimal move is untenable. Secondly, notice that a single rep $E_{\frac{1}{2}}+E_{\frac{3}{2}}$ can be separated into four WFs at $4d$ positions by a time-reversal breaking process, which can be achieved in two steps. In the first step, we take the ``cyclical'' gauge defined in Eq. (\ref{eq:w1})-(\ref{eq:wi}), where $C_{4}$ transforms the WFs into each other in turn and time-reversal transforms $\left|w_{i}\right\rangle $ to $\left|w_{i+2}\right\rangle $. In the second step, we split $\left|w_{1}\right\rangle $ and $\left|w_{3}\right\rangle $ in the $x$ direction and split $\left|w_{2}\right\rangle $ and $\left|w_{4}\right\rangle $ in the $y$ direction. Thus, through the $4d$ positions, there can be a flow $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}\to E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$ or $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}\to E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$, where the arrow represents a flow from $k_{z}=0$ to $k_{z}=\pi$. Such flows are gauge invariant because neither $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$ nor $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$ at the two ends can be gauged away. However, the double of either flow must be trivial because $2E_{\frac{1}{2}}^{1a}+2E_{\frac{3}{2}}^{1a}$ or $2E_{\frac{1}{2}}^{1b}+2E_{\frac{3}{2}}^{1b}$ can be gauged away, as discussed in section \ref{sub:2DWF-1a}.
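The traceless-versus-$m\sqrt{2}$ comparison can be verified directly: a $C_{4}$ rep that permutes the four $4d$ sites cyclically is block-off-diagonal regardless of the unitary blocks accompanying the permutation, so its trace vanishes. A small NumPy check (the $2\times2$ blocks are random placeholders, not model data):

```python
import numpy as np

# E_{1/2} rep matrix of C4 for one Kramers pair pinned at 1a: trace sqrt(2)
E_half = np.diag(np.exp(-1j*np.pi/4*np.array([1, -1])))

# C4 rep on E_{1/2}^{4d} WFs: site i -> site i+1, with an arbitrary unitary
# 2x2 block per site (random placeholders). The resulting matrix has zero
# diagonal blocks, hence zero trace, for any choice of blocks.
rng = np.random.default_rng(2)
D_4d = np.zeros((8, 8), complex)
for i in range(4):
    j = (i + 1) % 4                                   # image site under C4
    Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2)))
    D_4d[2*j:2*j + 2, 2*i:2*i + 2] = Q
```

Since the diagonal blocks of `D_4d` are identically zero while $m$ copies of $E_{\frac{1}{2}}$ give trace $m\sqrt{2}$, no infinitesimal deformation can connect the two reps.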
It should also be noticed that the flow $E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}\to E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$ is equal to the flow $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}\to E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$ modulo a trivial flow $2E_{\frac{1}{2}}^{1a}+2E_{\frac{3}{2}}^{1a}\to2E_{\frac{1}{2}}^{1b}+2E_{\frac{3}{2}}^{1b}$. In summary, we therefore obtain a $\mathbb{Z}_{2}$ class of flows between $1b$ and $1a$, wherein the nontrivial element $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}\to E_{\frac{1}{2}}^{1a}+E_{\frac{3}{2}}^{1a}$ manifests itself through 1D helical modes, as discussed in the text. The discussion of the flow between $1b$ and $2c$ is much simpler. As discussed above, at the $1b$ position a nontrivial flow must start or end with the $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$ irreps, while at the $2c$ side the flow can only merge into the $E_{\frac{1}{2}}^{2c}$ irrep. As the $C_{4}$ sewing matrix of the $E_{\frac{1}{2}}^{2c}$ irrep is consistent with the $C_{4}$ sewing matrix of $E_\frac{1}{2}^{4d}$, the flow $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}\to E_{\frac{1}{2}}^{2c}$ can indeed be realized by a time-reversal breaking process. Similar to the $1b\to1a$ flow, this flow also generates a $\mathbb{Z}_{2}$ class, because its double $2E_{\frac{1}{2}}^{1b}+2E_{\frac{3}{2}}^{1b}\to2E_{\frac{1}{2}}^{2c}$ can be trivialized by deformations at the two ends. What kind of surface state does this class of flow manifest? By an isomorphic mapping from the flow to the surface dispersion, we find an even number of Dirac points on both the $zx$ and the $yz$ surfaces, indicating that the bulk is a weak topological insulator. Beware that the two flows defined above do not give a complete classification of the time-reversal and $C_{4}$ protected 3D topological crystalline insulators; instead, they only classify a special kind of insulators in which each $k_{z}$-slice is equivalent to a 2D atomic insulator.
\subsection{Construct the flow of the 3D model}\label{sub:ModelFlow} To verify our theory, in this section we explicitly show the $1b\to1a$ flow in our 3D model by constructing 2D WFs continuously from the $k_{z}=0$- to the $k_{z}=\pi$-slice. As described in section \ref{sub:2DWF}, the overlap matrices in real space ($M^{\mathbf{R}}$) can be thought of as the input of the construction procedure. Therefore, to get continuous WF gauges from $k_{z}=0$ to $k_{z}=\pi$, we can first work out the overlap matrices at the two ends, i.e. $M^{\mathbf{R}}\left(0\right)$ and $M^{\mathbf{R}}\left(\pi\right)$, and then interpolate $M^{\mathbf{R}}\left(k_{z}\right)$ in the intermediate slices. As the $k_{z}=0$-slice is equivalent to our 2D model, we can directly use the 2D WFs constructed in section \ref{sub:2DWF}. To be consistent with the flow process, here we choose the ``cyclical'' gauge defined in Eq. (\ref{eq:w1})-(\ref{eq:wi}), for which the overlap matrices built for $E_{\frac{1}{2}}^{1b}+E_{\frac{3}{2}}^{1b}$ in section \ref{sub:2DWF}, denoted as $\tilde{M}^{\mathbf{R}}$ here, should be multiplied by a unitary matrix \begin{equation} M^{\mathbf{R}_{i}}\left(0\right)=\tilde{M}^{\mathbf{R}_{i}}V^{\dagger} \end{equation} where $V$ can be read off from the ``cyclical'' gauge definition in Eq. (\ref{eq:w1})-(\ref{eq:wi}). The overlap matrix at $k_{z}=\pi$ can be constructed in a similar way: randomly generate an overlap matrix and then symmetrize it. To be consistent with the flow, in the $k_{z}=\pi$-slice we also choose the ``cyclical'' gauge for the four $E_\frac{1}{2}^{1a} + E_\frac{3}{2}^{1a}$ trial orbitals and put them in the lattices $\mathbf{R}_{1}=\left(00\right)$, $\mathbf{R}_{2}=\left(10\right)$, $\mathbf{R}_{3}=\left(11\right)$, and $\mathbf{R}_{4}=\left(01\right)$, respectively. We also generate an additional $\delta M^{\mathbf{R}_{i}}$ term to cover the time-reversal breaking process in the intermediate slices.
Finally, the overlap matrix can be interpolated as \begin{align} M^{\mathbf{R}_{i}}\left(k_{z}\right) & =\left(1-\frac{k_{z}}{\pi}\right)M^{\mathbf{R}_{i}}\left(0\right)+\frac{k_{z}}{\pi}M^{\mathbf{R}_{i}}\left(\pi\right)\nonumber \\ & +\lambda \frac{k_z}{\pi}\left(1-\frac{k_{z}}{\pi}\right)\delta M^{\mathbf{R}_{i}} \end{align} Here $\lambda$ is an adjustable parameter. With these continuous symmetric WF gauges from $k_{z}=0$ to $k_{z}=\pi$, we calculate the evolution of the WCs by the Wilson loop method and plot it in Fig. (\ref{fig:Flow}), which indeed coincides with the nontrivial $\mathbb{Z}_2$-flow. \begin{figure} \begin{centering} \includegraphics[width=0.7\linewidth]{Flow} \par\end{centering} \protect\caption{\label{fig:Flow} The numerically calculated WC flow from the $k_z=0$- to the $k_z=\pi$-slice in the 3D model, where the parameter $\Delta$ is set to 0.2.} \end{figure} \section{Low energy theory}\label{sub:kp} Another perspective from which to understand the 1D helical modes is the effective low energy theory on the surfaces. As the 3D model can be thought of as two copies of topological insulators plus a mixing term, on the surfaces there should be two Dirac points and a mass term between them \begin{equation} H=k_{1}\tau_{0}\tilde{s}_{1}+k_{2}\tau_{0}\tilde{s}_{2}+m\tau_{y}\tilde{s}_{3}\label{eq:Hsurf} \end{equation} Here $k_{1}$, $k_{2}$ are the surface momenta, and $\tilde{s}_{i}$ are Pauli matrices representing the pseudo spin. In the geometry in Fig. (3) in the text, we set $k_{1}=k_{y}$, $k_{2}=k_{z}$ for the $yz$ surface, and $k_{1}=-k_{x}$, $k_{2}=k_{z}$ for the $zx$ surface. For a general surface deviating from the $yz$ plane by an angle $\theta$, we set \begin{equation} k_{1}=-\sin\theta k_{x}+\cos\theta k_{y} \end{equation} \begin{equation} k_{2}=k_{z} \end{equation} Then, by the symmetry analysis below, we show that the mass term flips its sign under the $C_{4}$ rotation, i.e. $m\left(\theta\right)=-m\left(\theta+\frac{\pi}{2}\right)$.
Thus, enforced by the $C_{4}$ symmetry, there must be mass domain walls on the surfaces, where the helical states live. Two interesting observations immediately follow. The first is that, as the domain walls are enforced by the $C_{4}$ symmetry, in general they do not necessarily locate at the hinges; they can be anywhere on the surfaces. For our 3D model, the helical modes are pinned at the hinges by the accidental mirror symmetry. The second observation is that the 1D helical modes are stable against any time-reversal preserving perturbations---even $C_{4}$ breaking perturbations---because to gap out the helical modes one needs to move two domain walls, separated far away in real space, together and annihilate them in pairs, which cannot be realized by a perturbation. Now let us derive the effective theory in Eq. (\ref{eq:Hsurf}) and prove the mass flipping under $C_{4}$. Firstly, we write the bulk Hamiltonian in the surface coordinates $\left(k_{1}k_{2}k_{3}\right)$, where $k_{3}=\cos\theta k_{x}+\sin\theta k_{y}$, and expand it to first order in $k_{1}$, $k_{2}$ and second order in $k_{3}$ \begin{align} H & = \left(-1+\frac{k_{3}^{2}}{2}\right) \tau_{0}\sigma_{z}s_{0} + k_{3}\tau_{0}\sigma_{x}\left(s_{x}\cos\theta+s_{y}\sin\theta\right)\nonumber \\ & +k_{1}\tau_{0}\sigma_{x}\left(-s_{x}\sin\theta+s_{y}\cos\theta\right)+k_{2}\tau_{0}\sigma_{x}s_{z}\nonumber \\ & +\frac{\Delta\left(\theta\right)}{2}k_{3}^{2}\tau_{y}\sigma_{y}s_{0} \end{align} where \begin{equation} \Delta\left(\theta\right)=\Delta\left(\sin^{2}\theta-\cos^{2}\theta\right)=-\Delta\cos2\theta \end{equation} so that $\Delta\left(\theta+\frac{\pi}{2}\right)=-\Delta\cos\left(2\theta+\pi\right)=-\Delta\left(\theta\right)$. It should be noticed that, even though this is a low energy theory, the property $\Delta\left(\theta+\frac{\pi}{2}\right)=-\Delta\left(\theta\right)$ holds to any order because it is enforced by the symmetry relation $C_{4}\tau_{y}\sigma_{y}s_{0}C_{4}^{-1}=-\tau_{y}\sigma_{y}s_{0}$.
By the gauge transformation $s_{x}\cos\theta+s_{y}\sin\theta\to s_{3}$, $-s_{x}\sin\theta+s_{y}\cos\theta\to s_{1}$, $s_{z}\to s_{2}$, and the replacement $k_{3}\to-i\partial_{3}$, we get the Hamiltonian on the surface as \begin{align} H & =\left(-1-\frac{\partial_{3}^{2}}{2}\right)\tau_{0}\sigma_{z}s_{0}-i\partial_{3}\tau_{0}\sigma_{x}s_{3}-\frac{\Delta\left(\theta\right)}{2}\partial_{3}^{2}\tau_{y}\sigma_{y}s_{0}\nonumber \\ & +k_{1}\tau_{0}\sigma_{x}s_{1}+k_{2}\tau_{0}\sigma_{x}s_{2}\label{eq:Heff-bulk} \end{align} Then, neglecting the $k_{1}$, $k_{2}$, and $\Delta$ terms, we get the zero-mode equation in the $x_{3}\ge 0$ semi-infinite system \begin{equation} \left[\left(1+\frac{1}{2}\partial_{3}^{2}\right)-\tau_{0}\sigma_{y}s_{3}\partial_{3}\right]\psi\left(x_{3}\right)=0 \end{equation} with the boundary condition \begin{equation} \psi\left(0\right)=\psi\left(\infty\right)=0 \end{equation} It has four solutions \begin{equation} \psi_{\mu n}\left(x_{3}\right)=u_{\mu n}\frac{1}{\sqrt{\mathcal{C}}}\left(e^{-\lambda_{+}x_{3}}-e^{-\lambda_{-}x_{3}}\right) \end{equation} where \begin{equation} \lambda_{\pm}=1\pm i \end{equation} \begin{equation} u=\frac{1}{\sqrt{2}}\begin{bmatrix}i & 0 & 0 & 0\\ 0 & i & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & i & 0\\ 0 & 0 & 0 & i\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & -1 \end{bmatrix} \end{equation} Then, expanding the remaining terms in Eq.
(\ref{eq:Heff-bulk}) on these solutions, we obtain the effective theory \begin{align} \mathcal{H} & =k_{1}\tau_{0}\tilde{s}_{1}+k_{2}\tau_{0}\tilde{s}_{2}+m\left(\theta\right)\tau_{y}\tilde{s}_{3} \end{align} where $m\left(\theta\right)$ is the mass induced by the mixing term $\Delta\left(\theta\right)$ \begin{align} m\left(\theta\right) & =\frac{\Delta\left(\theta\right)}{2\mathcal{C}}\int dx_{3}\left(e^{-\lambda_{+}x_{3}}-e^{-\lambda_{-}x_{3}}\right)^{*}\nonumber \\ & \qquad\times\partial_{3}^{2}\left(e^{-\lambda_{+}x_{3}}-e^{-\lambda_{-}x_{3}}\right) \end{align} Therefore, as $\Delta\left(\theta+\frac{\pi}{2}\right)=-\Delta\left(\theta\right)$, the sign of the mass must flip under the $C_{4}$ rotation. \section{Symmetry indicators} \label{sub:indicator} The $\mathbb{Z}_2$-flow has provided a good physical picture and serves as a topological invariant characterizing the nontrivial states. However, for real materials it is practically impossible to find smooth and symmetric gauges to calculate the flow. Thus it would be very useful to have a Fu-Kane-like criterion that can diagnose the topology from symmetry eigenvalues alone. As will be shown later, such a criterion indeed exists if the system has an additional inversion symmetry. We follow the newly developed symmetry indicator method \cite{Po2017,Bradlyn2017} to diagnose the topology. In this method, a BR is represented by a column vector of integers, where each entry gives the number of times a particular irrep appears at a particular high-symmetry momentum. All the admissible BRs that satisfy the compatibility relations form a linear space. On the other hand, the bases of this linear space can also be generated from a set of atomic insulators. Consequently, any BR can be expanded in the atomic BR bases with integral or fractional coefficients, wherein fractional coefficients imply some kind of nontrivial topology.
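Operationally, the fractional-coefficient criterion is just a linear solve followed by an integrality check. A toy NumPy illustration (the two-component ``atomic BRs'' below are made up for illustration, not actual $P4/m$ data):

```python
import numpy as np

def expansion_coeffs(atomic_cols, br):
    """Expand a BR vector in the atomic-BR basis (one basis vector per column)."""
    x, *_ = np.linalg.lstsq(atomic_cols, br, rcond=None)
    return x

# Toy 'atomic BR' basis: two made-up 2-component vectors as columns
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
atomic_br = A @ np.array([1.0, 3.0])   # integer combination -> trivial
half_br = np.array([1.0, 0.0])         # expands only with coefficient 1/2
```

Integer expansion coefficients signal an atomic (trivial) BR; a fractional coefficient, as for `half_br`, flags nontrivial topology.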
\subsection{Symmetry indicators in space group $P4/m$} \begin{table*} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Site sym.} & \multicolumn{2}{c|}{Wyckoff position} & \multicolumn{2}{c|}{High sym. K}\tabularnewline \cline{2-5} & W & Irrep & K & Irrep\tabularnewline \hline \hline \multirow{4}{*}{$C_{4h}$} & $1a$ $\left(000\right)$ & \multirow{4}{*}{$E_{\frac{3}{2}u}$, $E_{\frac{3}{2}g}$, $E_{\frac{1}{2}u}$, $E_{\frac{1}{2}g}$} & $\Gamma$ $\left(000\right)$ & \multirow{4}{*}{$E_{\frac{3}{2}u}$, $E_{\frac{3}{2}g}$, $E_{\frac{1}{2}u}$, $E_{\frac{1}{2}g}$}\tabularnewline \cline{2-2} \cline{4-4} & $1b$ $\left(00\frac{1}{2}\right)$ & & $Z$ $\left(00\pi\right)$ & \tabularnewline \cline{2-2} \cline{4-4} & $1c$ $\left(\frac{1}{2}\frac{1}{2}0\right)$ & & $M$ $\left(\pi\pi0\right)$ & \tabularnewline \cline{2-2} \cline{4-4} & $1d$ $\left(\frac{1}{2}\frac{1}{2}\frac{1}{2}\right)$ & & $A$ $\left(\pi\pi\pi\right)$ & \tabularnewline \hline \multirow{2}{*}{$C_{2h}$} & $2e$ $\left(\frac{1}{2}00\right)$ & \multirow{2}{*}{$E_{\frac{1}{2}u}$, $E_{\frac{1}{2}g}$} & $X$ $\left(0\pi0\right)$ & \multirow{2}{*}{$E_{\frac{1}{2}u}$, $E_{\frac{1}{2}g}$}\tabularnewline \cline{2-2} \cline{4-4} & $2f$ $\left(\frac{1}{2}0\frac{1}{2}\right)$ & & $R$ $\left(0\pi\pi\right)$ & \tabularnewline \hline \multirow{2}{*}{$C_{4}$} & $2g$ $\left(00z\right)$ & \multirow{2}{*}{$E_{\frac{3}{2}}$, $E_{\frac{1}{2}}$} & $\Lambda$ $\left(00u\right)$ & \multirow{2}{*}{$E_{\frac{3}{2}}$, $E_{\frac{1}{2}}$}\tabularnewline \cline{2-2} \cline{4-4} & $2h$ $\left(\frac{1}{2}\frac{1}{2}z\right)$ & & $V$ $\left(\pi\pi u\right)$ & \tabularnewline \hline $C_{2}$ & $4i$ $\left(\frac{1}{2}0z\right)$ & $E_{\frac{1}{2}}$ & $W$ $\left(0\pi u\right)$ & $E_{\frac{1}{2}}$\tabularnewline \hline \multirow{2}{*}{$C_{s}$} & $4j$ $\left(xy0\right)$ & \multirow{2}{*}{$E_{\frac{1}{2}}$} & $D$ $\left(uv0\right)$ & \multirow{2}{*}{$E_{\frac{1}{2}}$}\tabularnewline \cline{2-2} \cline{4-4} & $4k$ $\left(xy\frac{1}{2}\right)$ & & $E$ 
$\left(uv\pi\right)$ & \tabularnewline \hline $C_{1}$ & $8l$ $\left(xyz\right)$ & $E_{\frac{1}{2}}$ & $GP$ & $E_{\frac{1}{2}}$\tabularnewline \hline \end{tabular} \protect\caption{\label{tab:P4/m-wkf} Wyckoff positions, high-symmetry momenta, and the irreps of their SSGs in space group $P4/m$ (with time-reversal symmetry).} \end{table*} \begin{table*} \begin{tabular}{|c|c|c|} \hline & & BR \tabularnewline \hline \hline $\mathbf{A}_{1}$ & $E_{\frac{1}{2}g}^{1a}$ & $E_{\frac{1}{2}g}^{\Gamma}+E_{\frac{1}{2}g}^{M}+E_{\frac{1}{2}g}^{X}+E_{\frac{1}{2}g}^{Z}+E_{\frac{1}{2}g}^{A}+E_{\frac{1}{2}g}^{R}$\tabularnewline \hline $\mathbf{A}_{2}$ & $E_{\frac{3}{2}g}^{1a}$ & $E_{\frac{3}{2}g}^{\Gamma}+E_{\frac{3}{2}g}^{M}+E_{\frac{1}{2}g}^{X}+E_{\frac{3}{2}g}^{Z}+E_{\frac{3}{2}g}^{A}+E_{\frac{1}{2}g}^{R}$\tabularnewline \hline $\mathbf{A}_{3}$ & $E_{\frac{1}{2}u}^{1a}$ & $E_{\frac{1}{2}u}^{\Gamma}+E_{\frac{1}{2}u}^{M}+E_{\frac{1}{2}u}^{X}+E_{\frac{1}{2}u}^{Z}+E_{\frac{1}{2}u}^{A}+E_{\frac{1}{2}u}^{R}$\tabularnewline \hline $\mathbf{A}_{4}$ & $E_{\frac{3}{2}u}^{1a}$ & $E_{\frac{3}{2}u}^{\Gamma}+E_{\frac{3}{2}u}^{M}+E_{\frac{1}{2}u}^{X}+E_{\frac{3}{2}u}^{Z}+E_{\frac{3}{2}u}^{A}+E_{\frac{1}{2}u}^{R}$\tabularnewline \hline $\mathbf{A}_{5}$ & $E_{\frac{1}{2}g}^{1b}$ & $E_{\frac{1}{2}g}^{\Gamma}+E_{\frac{1}{2}g}^{M}+E_{\frac{1}{2}g}^{X}+E_{\frac{1}{2}u}^{Z}+E_{\frac{1}{2}u}^{A}+E_{\frac{1}{2}u}^{R}$\tabularnewline \hline $\mathbf{A}_{6}$ & $E_{\frac{3}{2}g}^{1b}$ & $E_{\frac{3}{2}g}^{\Gamma}+E_{\frac{3}{2}g}^{M}+E_{\frac{1}{2}g}^{X}+E_{\frac{3}{2}u}^{Z}+E_{\frac{3}{2}u}^{A}+E_{\frac{1}{2}u}^{R}$\tabularnewline \hline $\mathbf{A}_{7}$ & $E_{\frac{1}{2}g}^{1c}$ & $E_{\frac{1}{2}g}^{\Gamma}+E_{\frac{3}{2}g}^{M}+E_{\frac{1}{2}u}^{X}+E_{\frac{1}{2}g}^{Z}+E_{\frac{3}{2}g}^{A}+E_{\frac{1}{2}u}^{R}$\tabularnewline \hline $\mathbf{A}_{8}$ & $E_{\frac{3}{2}g}^{1c}$ &
$E_{\frac{3}{2}g}^{\Gamma}+E_{\frac{1}{2}g}^{M}+E_{\frac{1}{2}u}^{X}+E_{\frac{3}{2}g}^{Z}+E_{\frac{1}{2}g}^{A}+E_{\frac{1}{2}u}^{R}$\tabularnewline \hline $\mathbf{A}_{9}$ & $E_{\frac{1}{2}u}^{1c}$ & $E_{\frac{1}{2}u}^{\Gamma}+E_{\frac{3}{2}u}^{M}+E_{\frac{1}{2}g}^{X}+E_{\frac{1}{2}u}^{Z}+E_{\frac{3}{2}u}^{A}+E_{\frac{1}{2}g}^{R}$\tabularnewline \hline $\mathbf{A}_{10}$ & $E_{\frac{1}{2}g}^{1d}$ & $E_{\frac{1}{2}g}^{\Gamma}+E_{\frac{3}{2}g}^{M}+E_{\frac{1}{2}u}^{X}+E_{\frac{1}{2}u}^{Z}+E_{\frac{3}{2}u}^{A}+E_{\frac{1}{2}g}^{R}$\tabularnewline \hline $\mathbf{A}_{11}$ & $E_{\frac{3}{2}g}^{1d}$ & $E_{\frac{3}{2}g}^{\Gamma}+E_{\frac{1}{2}g}^{M}+E_{\frac{1}{2}u}^{X}+E_{\frac{3}{2}u}^{Z}+E_{\frac{1}{2}u}^{A}+E_{\frac{1}{2}g}^{R}$\tabularnewline \hline $\mathbf{A}_{12}$ & $E_{\frac{1}{2}g}^{2e}$ & $E_{\frac{1}{2}g}^{\Gamma}+E_{\frac{3}{2}g}^{\Gamma}+E_{\frac{1}{2}u}^{M}+E_{\frac{3}{2}u}^{M}+E_{\frac{1}{2}g}^{X}+E_{\frac{1}{2}u}^{X}+E_{\frac{1}{2}g}^{Z}+E_{\frac{3}{2}g}^{Z}+E_{\frac{1}{2}u}^{A}+E_{\frac{3}{2}u}^{A}+E_{\frac{1}{2}g}^{R}+E_{\frac{1}{2}u}^{R}$\tabularnewline \hline $\mathbf{A}_{13}$ & $E_{\frac{1}{2}g}^{2f}$ & $E_{\frac{1}{2}g}^{\Gamma}+E_{\frac{3}{2}g}^{\Gamma}+E_{\frac{1}{2}u}^{M}+E_{\frac{3}{2}u}^{M}+E_{\frac{1}{2}g}^{X}+E_{\frac{1}{2}u}^{X}+E_{\frac{1}{2}u}^{Z}+E_{\frac{3}{2}u}^{Z}+E_{\frac{1}{2}g}^{A}+E_{\frac{3}{2}g}^{A}+E_{\frac{1}{2}g}^{R}+E_{\frac{1}{2}u}^{R}$\tabularnewline \hline \end{tabular} \protect\caption{\label{tab:P4/m-AI} Atomic BR bases of space group $P4/m$. In the first, second, and third columns, we list the notations of atomic BR bases, the irreps in real space to generate it, and the BR in momentum space, respectively.} \end{table*} The smallest space group containing both $C_{4}$ and inversion is $P4/m$, whose symmetry indicators form a group $\mathbb{Z}_{2}\times\mathbb{Z}_{4}\times\mathbb{Z}_{8}$. In this section, we will work out the generators of the group. 
In table \ref{tab:P4/m-wkf}, we list all the Wyckoff positions, the high-symmetry momenta, and the irreps of their SSGs in space group $P4/m$. By definition, a BR should be given by the numbers of irreps at $\Gamma$, $M$, $X$, $Z$, $A$, $R$, $\Lambda$, $V$, $W$, $D$, $E$. However, as the latter five momenta can be continuously connected to the former six momenta, which have higher symmetries, the irreps at these five momenta can be inferred directly from the irreps at the former six momenta and the compatibility relations. Thus we conclude that the BR is completely determined by the irreps at $\Gamma$, $M$, $X$, $Z$, $A$, and $R$. By applying Eq. (\ref{eq:Dk-def}) repeatedly, we have found all the independent atomic BR bases and summarize them in table \ref{tab:P4/m-AI}. Now let us find all the BRs allowed by the compatibility relations. The compatibility relations consist of five constraints on the occupation numbers \begin{eqnarray} & & n\left(E_{\frac{1}{2}g}^{\Gamma}\right)+n\left(E_{\frac{3}{2}g}^{\Gamma}\right)+n\left(E_{\frac{1}{2}u}^{\Gamma}\right)+n\left(E_{\frac{3}{2}u}^{\Gamma}\right)\nonumber \\ & = & n\left(E_{\frac{1}{2}g}^{M}\right)+n\left(E_{\frac{3}{2}g}^{M}\right)+n\left(E_{\frac{1}{2}u}^{M}\right)+n\left(E_{\frac{3}{2}u}^{M}\right)\nonumber \\ & = & n\left(E_{\frac{1}{2}g}^{Z}\right)+n\left(E_{\frac{3}{2}g}^{Z}\right)+n\left(E_{\frac{1}{2}u}^{Z}\right)+n\left(E_{\frac{3}{2}u}^{Z}\right)\nonumber \\ & = & n\left(E_{\frac{1}{2}g}^{A}\right)+n\left(E_{\frac{3}{2}g}^{A}\right)+n\left(E_{\frac{1}{2}u}^{A}\right)+n\left(E_{\frac{3}{2}u}^{A}\right)\nonumber \\ & = & n\left(E_{\frac{1}{2}g}^{X}\right)+n\left(E_{\frac{1}{2}u}^{X}\right)\nonumber \\ & = & n\left(E_{\frac{1}{2}g}^{R}\right)+n\left(E_{\frac{1}{2}u}^{R}\right) \end{eqnarray} and two constraints on the angular momentum conservation along $\Gamma Z$ and $MA$ \begin{equation}
n\left(E_{\frac{1}{2}g}^{\Gamma}\right)+n\left(E_{\frac{1}{2}u}^{\Gamma}\right)=n\left(E_{\frac{1}{2}g}^{Z}\right)+n\left(E_{\frac{1}{2}u}^{Z}\right) \end{equation} \begin{equation} n\left(E_{\frac{1}{2}g}^{M}\right)+n\left(E_{\frac{1}{2}u}^{M}\right)=n\left(E_{\frac{1}{2}g}^{A}\right)+n\left(E_{\frac{1}{2}u}^{A}\right) \end{equation} Solving these linear equations, we obtain 13 independent BR generators, 10 of which are atomic BRs while the other 3 are not. The three nontrivial generators can be chosen as \begin{align} \mathbf{B}_{\mathbb{Z}_2} & = E_{\frac{3}{2}g}^{\Gamma} - 2E_{\frac{1}{2}g}^{\Gamma} - E_{\frac{3}{2}u}^{\Gamma} + 2E_{\frac{1}{2}u}^{\Gamma} \nonumber \\ & - E_{\frac{3}{2}g}^{M} + E_{\frac{3}{2}u}^{M} -6 E_{\frac{3}{2}g}^Z + 6E_{\frac{3}{2}u}^Z \end{align} \begin{equation} \mathbf{B}_{\mathbb{Z}_4} = E_{\frac{3}{2}g}^{\Gamma} - E_{\frac{3}{2}u}^{\Gamma} - E_{\frac{3}{2}g}^{Z} + E_{\frac{3}{2}u}^{Z} \end{equation} \begin{equation} \mathbf{B}_{\mathbb{Z}_8} = E_{\frac{3}{2}g}^{\Gamma} - E_{\frac{3}{2}u}^{\Gamma} \end{equation} Here the subscript $\mathbb{Z}_N$ indicates that $N\mathbf{B}_{\mathbb{Z}_N}$ is an atomic BR. Therefore, the BRs of $P4/m$ can be classified into $2\times 4\times 8$ classes, each given by three integers $(mnl)$ that define the representative BR $m\mathbf{B}_{\mathbb{Z}_2} + n\mathbf{B}_{\mathbb{Z}_4} + l\mathbf{B}_{\mathbb{Z}_8}$, with $m=0,1$, $n=0,1,2,3$, and $l=0,1,\cdots,7$. \subsection{Understand the indicators}\label{sub:UnderstandID} From the parity criterion, we find that all three generators correspond to weak or strong topological insulators. Specifically, $\mathbf{B}_{\mathbb{Z}_2}$ has a nontrivial weak index $\mathbb{Z}_2(110;0)$, $\mathbf{B}_{\mathbb{Z}_4}$ also has a nontrivial weak index $\mathbb{Z}_2(001;0)$, whereas $\mathbf{B}_{\mathbb{Z}_8}$ has a nontrivial strong index $\mathbb{Z}_2(000;1)$.
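Reducing a given BR to its class $(mnl)$ is a finite search over the $2\times4\times8$ candidates, the same search that the Matlab script provided later performs on the real 20-component vectors. A toy Python sketch (the basis $\mathrm{diag}(2,4,8)$ and the unit-vector generators are made up for illustration, not actual $P4/m$ data):

```python
import numpy as np
from itertools import product

def indicator_class(br, atomic_cols, gens, orders, tol=1e-6):
    """Find (m, n, l), with 0 <= m < orders[0] etc., such that
    br - m*g1 - n*g2 - l*g3 lies in the integer span of atomic_cols."""
    for coeffs in product(*(range(o) for o in orders)):
        resid = br - sum(c*g for c, g in zip(coeffs, gens))
        x, *_ = np.linalg.lstsq(atomic_cols, resid, rcond=None)
        if np.allclose(atomic_cols @ np.round(x), resid, atol=tol):
            return coeffs
    return None            # br violates the compatibility relations

# Toy data: 'atomic' basis diag(2, 4, 8); generators of order 2, 4, 8
A = np.diag([2.0, 4.0, 8.0])
g = [np.eye(3)[i] for i in range(3)]
br = 1*g[0] + 3*g[1] + 5*g[2] + A @ np.array([2.0, -1.0, 4.0])
```

For this toy input, the search strips off the atomic part and recovers the class $(1,3,5)$.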
The double of $\mathbf{B}_{\mathbb{Z}_4}$ or $\mathbf{B}_{\mathbb{Z}_8}$ corresponds to mirror ($M_z=PC_2$) protected topological crystalline insulators: $2\mathbf{B}_{\mathbb{Z}_4}$ has mirror Chern number $2$ ($\mathrm{mod}\ 4$) in both the $k_z=0$- and $k_z=\pi$-slices, while $2\mathbf{B}_{\mathbb{Z}_8}$ has mirror Chern number $2$ ($\mathrm{mod}\ 4$) in the $k_z=0$-slice and mirror Chern number $0$ ($\mathrm{mod}\ 4$) in the $k_z=\pi$-slice. This can be proved by first dividing the eigenstates in the $k_z=0$-slice ($k_z=\pi$-slice) into two sectors according to their $M_z$ eigenvalues and then applying the following Fu-Kane-like formula for the Chern number in each sector \cite{fang_C4} \begin{equation}\label{mChern} i^C = (-1)^{N_{occ}}\prod_{n\in \mathrm{occ}} \xi_n (\Gamma) \xi_n(M) \zeta_n (X) \end{equation} Here $\xi_n(\Gamma)$ is the $C_4$ eigenvalue of the $n$-th band at $\Gamma$, $\xi_n(M)$ is the $C_4$ eigenvalue of the $n$-th band at $M$, $\zeta_n(X)$ is the $C_2$ eigenvalue of the $n$-th band at $X$, and $C$ is the Chern number. The only remaining element is $4\mathbf{B}_{\mathbb{Z}_8}$. To understand it, let us first generalize the mappings in Eq. (\ref{eq:2Dmap-1a12})-(\ref{eq:2Dmap-1b32}) to the case with inversion symmetry. Treating the $k_z=0$- and $k_z=\pi$-slices as two 2D systems and applying Eq.
(\ref{eq:Dk-def}), we find that \begin{equation} E_{\frac{1}{2}g}^{1a}(0/\pi) \mapsto E_{\frac{1}{2}g}^{\Gamma/Z} + E_{\frac{1}{2}g}^{M/A} + E_{\frac{1}{2}g}^{X/R} \end{equation} \begin{equation} E_{\frac{3}{2}g}^{1a}(0/\pi) \mapsto E_{\frac{3}{2}g}^{\Gamma/Z} + E_{\frac{3}{2}g}^{M/A} + E_{\frac{1}{2}g}^{X/R} \end{equation} \begin{equation} E_{\frac{1}{2}u}^{1a}(0/\pi) \mapsto E_{\frac{1}{2}u}^{\Gamma/Z} + E_{\frac{1}{2}u}^{M/A} + E_{\frac{1}{2}u}^{X/R} \end{equation} \begin{equation} E_{\frac{3}{2}u}^{1a}(0/\pi) \mapsto E_{\frac{3}{2}u}^{\Gamma/Z} + E_{\frac{3}{2}u}^{M/A} + E_{\frac{1}{2}u}^{X/R} \end{equation} \begin{equation} E_{\frac{1}{2}g}^{1b}(0/\pi) \mapsto E_{\frac{1}{2}g}^{\Gamma/Z} + E_{\frac{3}{2}g}^{M/A} + E_{\frac{1}{2}u}^{X/R} \end{equation} \begin{equation} E_{\frac{3}{2}g}^{1b}(0/\pi) \mapsto E_{\frac{3}{2}g}^{\Gamma/Z} + E_{\frac{1}{2}g}^{M/A} + E_{\frac{1}{2}u}^{X/R} \end{equation} \begin{equation} E_{\frac{1}{2}u}^{1b}(0/\pi) \mapsto E_{\frac{1}{2}u}^{\Gamma/Z} + E_{\frac{3}{2}u}^{M/A} + E_{\frac{1}{2}g}^{X/R} \end{equation} \begin{equation} E_{\frac{3}{2}u}^{1b}(0/\pi) \mapsto E_{\frac{3}{2}u}^{\Gamma/Z} + E_{\frac{1}{2}u}^{M/A} + E_{\frac{1}{2}g}^{X/R} \end{equation} Therefore, $4\mathbf{B}_{\mathbb{Z}_8}$ can be interpreted as \begin{align} & E_{\frac{1}{2}g}^{1b}(0) + E_{\frac{3}{2}g}^{1b}(0) + E_{\frac{1}{2}g}^{1a}(\pi) + E_{\frac{3}{2}g}^{1a}(\pi) \nonumber \\ \mapsto \quad & 4\mathbf{B}_{\mathbb{Z}_8} \textrm{ mod an atomic BR} \end{align} which implies a WC flow from the plaquette center ($k_z=0$) to the site ($k_z=\pi$). As no atomic BR can imply any WC flow, all the BRs in the $(004)$ class must carry the nontrivial $\mathbb{Z}_2$-flow. \subsection{Matlab script} Here we also provide a Matlab script to automatically calculate the symmetry indicator of a given BR of the space group $P4/m$.
\begin{widetext}
\begin{lstlisting}
clear;
e_G12g=zeros(20,1); e_G12g(1)=1;
e_G32g=zeros(20,1); e_G32g(2)=1;
e_G12u=zeros(20,1); e_G12u(3)=1;
e_G32u=zeros(20,1); e_G32u(4)=1;
e_M12g=zeros(20,1); e_M12g(5)=1;
e_M32g=zeros(20,1); e_M32g(6)=1;
e_M12u=zeros(20,1); e_M12u(7)=1;
e_M32u=zeros(20,1); e_M32u(8)=1;
e_X12g=zeros(20,1); e_X12g(9)=1;
e_X12u=zeros(20,1); e_X12u(10)=1;
e_Z12g=zeros(20,1); e_Z12g(11)=1;
e_Z32g=zeros(20,1); e_Z32g(12)=1;
e_Z12u=zeros(20,1); e_Z12u(13)=1;
e_Z32u=zeros(20,1); e_Z32u(14)=1;
e_A12g=zeros(20,1); e_A12g(15)=1;
e_A32g=zeros(20,1); e_A32g(16)=1;
e_A12u=zeros(20,1); e_A12u(17)=1;
e_A32u=zeros(20,1); e_A32u(18)=1;
e_R12g=zeros(20,1); e_R12g(19)=1;
e_R12u=zeros(20,1); e_R12u(20)=1;
% Columns of AA are the atomic BR bases A1..A13 of table tab:P4/m-AI
AA=[ e_G12g + e_M12g + e_X12g + e_Z12g + e_A12g + e_R12g, ...
     e_G32g + e_M32g + e_X12g + e_Z32g + e_A32g + e_R12g, ...
     e_G12u + e_M12u + e_X12u + e_Z12u + e_A12u + e_R12u, ...
     e_G32u + e_M32u + e_X12u + e_Z32u + e_A32u + e_R12u, ...
     e_G12g + e_M12g + e_X12g + e_Z12u + e_A12u + e_R12u, ...
     e_G32g + e_M32g + e_X12g + e_Z32u + e_A32u + e_R12u, ...
     e_G12g + e_M32g + e_X12u + e_Z12g + e_A32g + e_R12u, ...
     e_G32g + e_M12g + e_X12u + e_Z32g + e_A12g + e_R12u, ...
     e_G12u + e_M32u + e_X12g + e_Z12u + e_A32u + e_R12g, ...
     e_G12g + e_M32g + e_X12u + e_Z12u + e_A32u + e_R12g, ...
     e_G32g + e_M12g + e_X12u + e_Z32u + e_A12u + e_R12g, ...
     e_G12g + e_G32g + e_M12u + e_M32u + e_X12g + e_X12u ...
     + e_Z12g + e_Z32g + e_A12u + e_A32u + e_R12g + e_R12u, ...
     e_G12g + e_G32g + e_M12u + e_M32u + e_X12g + e_X12u ...
     + e_Z12u + e_Z32u + e_A12g + e_A32g + e_R12g + e_R12u ];
BZ2 = e_G32g - 2*e_G12g - e_G32u + 2*e_G12u ...
      - e_M32g + e_M32u - 6*e_Z32g + 6*e_Z32u;
BZ4 = e_G32g - e_G32u - e_Z32g + e_Z32u;
BZ8 = e_G12u - e_G12g;
BR = e_G12g + e_G32g + e_M12g + e_M32g + e_X12g + e_X12g ...
+ e_Z12g + e_Z32g + e_A12g + e_A32g + e_R12u + e_R12u; [m, n, l] = fun_class83( BR, AA, BZ2, BZ4, BZ8); fprintf('m, n, l= function [ n1, n2, n3 ] = fun_class83( bb, AA, b1,b2,b3 ) tol=1e-3; for n1=0:1 for n2=0:3 for n3=0:7 btmp = n1*b1+n2*b2+n3*b3; CC=AA\(btmp - bb); err = norm(CC-round(CC)); % % if err < tol break; end end % if err < tol break; end end % if err < tol break; end end if err>=tol n1=-1; n2=-1; n3=-1; end end \end{lstlisting} \end{widetext} where the variable BR is the input band representation, and $(mnl)$ is the indicator calculated from BR. For practical application, one only need to replace the definition of BR (line $52$ in the code). It should be noticed that, this script is not limited to the space group $P4/m$, but is applicable to any space group containing $C_4$ and inversion symmetries. We have used the occupied BR of our 3D model (the commented definition at line $50$ in the code) to verify the symmetry indicator, which indeed gives $(004)$. \subsection{Fu-Kane formula} \begin{table*} \begin{tabular}{|c|c|c|c|} \hline Lattice & Space groups & $n$ & Definitions for $n_{\frac{3}{2}}^{+}$, $n_{\frac{3}{2}}^{-}$, $n_{\frac{1}{2}}^{+}$, $n_{\frac{1}{2}}^{-}$\tabularnewline \hline \hline \multirow{4}{*}{$\Gamma_{q}$} & \multirow{4}{*}{83, 123, 124, 127, 128} & $n_{\frac{1}{2}}^{+}$ & $n(E_{\frac{1}{2}g}^{\Gamma})+n(E_{\frac{1}{2}g}^{M})+n(E_{\frac{1}{2}g}^{Z})+n(E_{\frac{1}{2}g}^{A})+n(E_{\frac{1}{2}g}^{X})+n(E_{\frac{1}{2}g}^{R})$\tabularnewline \cline{3-4} & & $n_{\frac{1}{2}}^{-}$ & $n(E_{\frac{1}{2}u}^{\Gamma})+n(E_{\frac{1}{2}u}^{M})+n(E_{\frac{1}{2}u}^{Z})+n(E_{\frac{1}{2}u}^{A})+n(E_{\frac{1}{2}u}^{X})+n(E_{\frac{1}{2}u}^{R})$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{+}$ & $n(E_{\frac{3}{2}g}^{\Gamma})+n(E_{\frac{3}{2}g}^{M})+n(E_{\frac{3}{2}g}^{Z})+n(E_{\frac{3}{2}g}^{A})+n(E_{\frac{1}{2}g}^{X})+n(E_{\frac{1}{2}g}^{R})$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{-}$ & 
$n(E_{\frac{3}{2}u}^{\Gamma})+n(E_{\frac{3}{2}u}^{M})+n(E_{\frac{3}{2}u}^{Z})+n(E_{\frac{3}{2}u}^{A})+n(E_{\frac{1}{2}u}^{X})+n(E_{\frac{1}{2}u}^{R})$\tabularnewline \hline \multirow{4}{*}{$\Gamma_{q}^{v}$} & \multirow{4}{*}{87, 139, 140} & $n_{\frac{1}{2}}^{+}$ & $n(E_{\frac{1}{2}g}^{\Gamma})+n(E_{\frac{1}{2}g}^{M})+n(E_{\frac{1}{2}g}^{X})+2n(E_{\frac{1}{2}g}^{N})+n\left(E_{\frac{1}{2}}^{P}\right)$\footnotemark[1]\tabularnewline \cline{3-4} & & $n_{\frac{1}{2}}^{-}$ & $n(E_{\frac{1}{2}u}^{\Gamma})+n(E_{\frac{1}{2}u}^{M})+n(E_{\frac{1}{2}u}^{X})+2n(E_{\frac{1}{2}u}^{N})+n\left(E_{\frac{3}{2}}^{P}\right)$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{+}$ & $n(E_{\frac{3}{2}g}^{\Gamma})+n(E_{\frac{3}{2}g}^{M})+n(E_{\frac{1}{2}g}^{X})+2n(E_{\frac{1}{2}g}^{N})+n\left(E_{\frac{3}{2}}^{P}\right)$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{-}$ & $n(E_{\frac{3}{2}u}^{\Gamma})+n(E_{\frac{3}{2}u}^{M})+n(E_{\frac{1}{2}u}^{X})+2n(E_{\frac{1}{2}u}^{N})+n\left(E_{\frac{1}{2}}^{P}\right)$\tabularnewline \hline \multirow{4}{*}{$\Gamma_{c}$} & \multirow{4}{*}{221} & $n_{\frac{1}{2}}^{+}$ & $n(E_{\frac{1}{2}g}^{\Gamma})+n(F_{\frac{3}{2}g}^{\Gamma})+n(E_{\frac{1}{2}g}^{R})+n(F_{\frac{3}{2}g}^{R})+2n(E_{\frac{1}{2}g}^{M})+n(E_{\frac{3}{2}g}^{M})+2n(E_{\frac{1}{2}g}^{X})+n(E_{\frac{3}{2}g}^{X})$\tabularnewline \cline{3-4} & & $n_{\frac{1}{2}}^{-}$ & $n(E_{\frac{1}{2}u}^{\Gamma})+n(F_{\frac{3}{2}u}^{\Gamma})+n(E_{\frac{1}{2}u}^{R})+n(F_{\frac{3}{2}u}^{R})+2n(E_{\frac{1}{2}u}^{M})+n(E_{\frac{3}{2}u}^{M})+2n(E_{\frac{1}{2}u}^{X})+n(E_{\frac{3}{2}u}^{X})$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{+}$ & $n(F_{\frac{3}{2}g}^{\Gamma})+n(E_{\frac{5}{2}g}^{\Gamma})+n(F_{\frac{3}{2}g}^{R})+n(E_{\frac{5}{2}g}^{R})+2n(E_{\frac{3}{2}g}^{M})+n(E_{\frac{1}{2}g}^{M})+2n(E_{\frac{3}{2}g}^{X})+n(E_{\frac{1}{2}g}^{X})$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{-}$ & 
$n(F_{\frac{3}{2}u}^{\Gamma})+n(E_{\frac{5}{2}u}^{\Gamma})+n(F_{\frac{3}{2}u}^{R})+n(E_{\frac{5}{2}u}^{R})+2n(E_{\frac{3}{2}u}^{M})+n(E_{\frac{1}{2}u}^{M})+2n(E_{\frac{3}{2}u}^{X})+n(E_{\frac{1}{2}u}^{X})$\tabularnewline \hline \multirow{8}{*}{$\Gamma_{c}^{f}$} & \multirow{4}{*}{225} & $n_{\frac{1}{2}}^{+}$ & $n(E_{\frac{1}{2}g}^{\Gamma})+n(F_{\frac{3}{2}g}^{\Gamma})+2n(E_{\frac{1}{2}g}^{X})+n(E_{\frac{3}{2}g}^{X})+2n(E_{\frac{1}{2}g}^{L})+2n(E_{\frac{3}{2}g}^{L})+n(E_{\frac{1}{2}}^{W})$\tabularnewline \cline{3-4} & & $n_{\frac{1}{2}}^{-}$ & $n(E_{\frac{1}{2}u}^{\Gamma})+n(F_{\frac{3}{2}u}^{\Gamma})+2n(E_{\frac{1}{2}u}^{X})+n(E_{\frac{3}{2}u}^{X})+2n(E_{\frac{1}{2}u}^{L})+2n(E_{\frac{3}{2}u}^{L})+n(E_{\frac{3}{2}}^{W})$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{+}$ & $n(F_{\frac{3}{2}g}^{\Gamma})+n(E_{\frac{5}{2}g}^{\Gamma})+2n(E_{\frac{3}{2}g}^{X})+n(E_{\frac{1}{2}g}^{X})+2n(E_{\frac{1}{2}g}^{L})+2n(E_{\frac{3}{2}g}^{L})+n(E_{\frac{3}{2}}^{W})$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{-}$ & $n(F_{\frac{3}{2}u}^{\Gamma})+n(E_{\frac{5}{2}u}^{\Gamma})+2n(E_{\frac{3}{2}u}^{X})+n(E_{\frac{1}{2}u}^{X})+2n(E_{\frac{1}{2}u}^{L})+2n(E_{\frac{3}{2}u}^{L})+n(E_{\frac{1}{2}}^{W})$\tabularnewline \cline{2-4} & \multirow{4}{*}{226} & $n_{\frac{1}{2}}^{+}$ & $n(E_{\frac{1}{2}g}^{\Gamma})+n(F_{\frac{3}{2}g}^{\Gamma})+n(E_{\frac{3}{2}g}^{X})+n(E_{\frac{1}{2}u}^{X})+n(E_{\frac{3}{2}u}^{X})$\tabularnewline \cline{3-4} & & $n_{\frac{1}{2}}^{-}$ & $n(E_{\frac{1}{2}u}^{\Gamma})+n(F_{\frac{3}{2}u}^{\Gamma})+n(E_{\frac{3}{2}u}^{X})+n(E_{\frac{1}{2}g}^{X})+n(E_{\frac{3}{2}g}^{X})$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{+}$ & $n(F_{\frac{3}{2}g}^{\Gamma})+n(E_{\frac{5}{2}g}^{\Gamma})+n(E_{\frac{1}{2}g}^{X})+n(E_{\frac{1}{2}u}^{X})+n(E_{\frac{3}{2}u}^{X})$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{-}$ & $n(F_{\frac{3}{2}u}^{\Gamma})+n(E_{\frac{5}{2}u}^{\Gamma})+n(E_{\frac{1}{2}u}^{X})+n(E_{\frac{1}{2}g}^{X})+n(E_{\frac{3}{2}g}^{X})$\tabularnewline \hline 
\multirow{4}{*}{$\Gamma_{c}^{v}$} & \multirow{4}{*}{229} & $n_{\frac{1}{2}}^{+}$ & $n(E_{\frac{1}{2}g}^{\Gamma})+n(F_{\frac{3}{2}g}^{\Gamma})+n(E_{\frac{1}{2}g}^{H})+n(F_{\frac{3}{2}g}^{H})+3n(E_{\frac{1}{2}g}^{N})+n\left(E_{\frac{1}{2}}^{P}\right)+n\left(F_{\frac{3}{2}}^{P}\right)$\tabularnewline \cline{3-4} & & $n_{\frac{1}{2}}^{-}$ & $n(E_{\frac{1}{2}u}^{\Gamma})+n(F_{\frac{3}{2}u}^{\Gamma})+n(E_{\frac{1}{2}u}^{H})+n(F_{\frac{3}{2}u}^{H})+3n(E_{\frac{1}{2}u}^{N})+n\left(F_{\frac{3}{2}}^{P}\right)+n\left(E_{\frac{5}{2}}^{P}\right)$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{+}$ & $n(F_{\frac{3}{2}g}^{\Gamma})+n(E_{\frac{5}{2}g}^{\Gamma})+n(F_{\frac{3}{2}g}^{H})+n(E_{\frac{5}{2}g}^{H})+3n(E_{\frac{1}{2}g}^{N})+n\left(F_{\frac{3}{2}}^{P}\right)+n\left(E_{\frac{5}{2}}^{P}\right)$\tabularnewline \cline{3-4} & & $n_{\frac{3}{2}}^{-}$ & $n(F_{\frac{3}{2}u}^{\Gamma})+n(E_{\frac{5}{2}u}^{\Gamma})+n(F_{\frac{3}{2}u}^{H})+n(E_{\frac{5}{2}u}^{H})+3n(E_{\frac{1}{2}u}^{N})+n\left(E_{\frac{1}{2}}^{P}\right)+n\left(F_{\frac{3}{2}}^{P}\right)$\tabularnewline \hline \end{tabular} \footnotetext[1]{In space group 87, the little group at $N$ is $C_i$ and the corresponding irreps are denoted as $A_{\frac{1}{2}g}$ and $A_{\frac{1}{2}u}$ in Ref. [\onlinecite{point-group}], both of which are one-dimensional. However, due to the Kramers theorem, the irreps at $N$ should be doubly degenerate, thus we adopt the two-dimensional notations $E_{\frac{1}{2}g}$ and $E_{\frac{1}{2}u}$.} \protect\caption{ The concrete expressions for $n_{\frac{3}{2}}^{+}$, $n_{\frac{3}{2}}^{-}$, $n_{\frac{1}{2}}^{+}$, $n_{\frac{1}{2}}^{-}$ in the $\mathbb{Z}_{8}$ Fu-Kane-like formulas in all applicable space groups. The notations of high symmetry momenta follow Ref. [\onlinecite{Bilbao_BZ_2014}], and the notations of point group irreps follow Ref. [\onlinecite{point-group}].
} \label{tab-Z8} \end{table*} As discussed in section \ref{sub:UnderstandID}, the $\mathbb{Z}_2$ indicator $m$ and the $\mathbb{Z}_4$ indicator $n$ can indeed be calculated from the knowledge of inversion and rotation eigenvalues. Specifically, we have \begin{equation} (-1)^m = \prod_{\mathbf{K}} \prod_{n\in \mathrm{occ}} \lambda_n(\mathbf{K}) \end{equation} \begin{equation} i^n = i^{C_\mathrm{m}(\pi)} \end{equation} where $\lambda_n(\mathbf{K})$ is the parity of the $n$-th Kramers pair at the momentum $\mathbf{K}$, $\mathbf{K}$ goes over the four time reversal invariant momenta (TRIM) with $k_x=\pi$, and $C_\mathrm{m}(\pi)$ is the mirror Chern number of the $k_z=\pi$ slice (mod 4), which can be calculated from Eq. (\ref{mChern}). Here we also find a similar formula to calculate the $\mathbb{Z}_8$ indicator $l$ directly from the symmetry eigenvalues \begin{equation} l = \left( n_\frac{1}{2}^- + 3n_\frac{3}{2}^+ - n_\frac{1}{2}^+ - 3n_\frac{3}{2}^- \right) /2 \mod 8 \end{equation} which is applicable to all space groups with inversion and symmorphic $C_4$ rotation. The concrete definitions of $n_\frac{1}{2}^+$, $n_\frac{1}{2}^-$, $n_\frac{3}{2}^+$, $n_\frac{3}{2}^-$ in the various groups are listed in table \ref{tab-Z8}. \end{document}
\section{Introduction} We consider block ciphers acting on a vector space $(\mathbb{F}_2)^n$. It is important to identify conditions on the components of the cipher that may ensure its security. There are many competing notions of security, hence several kinds of security criteria, and some of them focus on the role of the S-Boxes. For a large class of modern block ciphers, the S-Boxes are bijective vectorial Boolean functions $f: (\mathbb{F}_2)^m \to (\mathbb{F}_2)^m$, hence they can be identified with functions from the finite field $\mathbb{F}_{2^m}$ to itself. In this paper we focus on $4$-bit S-Boxes, as used for example in SERPENT (\cite{serpent}) and PRESENT (\cite{present}), although we also present a theorem for the general case. Several security criteria are affine-invariant, and this justifies the work done to achieve the classification of $4$-bit S-Boxes into affine-equivalence classes, as done for example by De Canni\'ere (\cite{dec}) and by Leander and Poschmann (\cite{Sboxes}) (these classifications have been achieved independently). There is a new affine-invariant security criterion for S-Boxes: weak differential uniformity. Particularly interesting is the concept of a weakly APN function. We determine several conditions (some computational and some theoretical) which are either sufficient or necessary for a $4$-bit vectorial Boolean function to be weakly APN. Our paper is structured as follows. In Section~\ref{theoretical} we introduce and motivate the notion of \emph{weakly APN function}, highlighting the case of dimension $4$. In Section~\ref{secth} we present our theoretical results, including a theorem for any dimension. In Section~\ref{computational} we discuss our computational results. Finally, in Section~\ref{concl} we provide further computations that may be interesting and we draw our conclusions.
\section{Preliminaries on weakly APN functions}\label{theoretical} Without loss of generality, in the sequel we consider only Boolean functions $f: (\mathbb{F}_2)^m \to (\mathbb{F}_2)^m$ such that $f(0)=0$. We also write $\hat{f}_u(x) := f(x+u)+f(x)$ (the \emph{derivative} of $f$) and $\mathrm{Im}(f)=\{ f(x) \mid x\in (\mathbb{F}_2)^m\}$ (the \emph{image} of $f$). A notion of non-linearity for S-Boxes that has received a lot of attention is the following. \begin{definition} The function $f$ is \emph{$\delta$-differentially uniform} if for any $u \in (\mathbb{F}_2)^m \setminus \{ 0 \}$ and for any $v \in (\mathbb{F}_2)^m$, $\vert \{ x \in (\mathbb{F}_2)^m: \hat{f}_u(x) = v \} \vert \le \delta$. If $f$ is $2$-differentially uniform, then it is called an \emph{Almost Perfectly Nonlinear (APN)} function. \end{definition} The property of being $\delta$-differentially uniform is affine-invariant. With respect to differential uniformity, the best S-Boxes are the APN S-Boxes. APN functions are indeed a very hot research topic (see for instance the recent contributions \cite{aubry} and \cite{bracken}). Unfortunately, for some even dimensions no APN permutation exists. This is the case for dimension $m=4$, which has cryptographic significance at least for SERPENT and PRESENT. In this case, the best we can have is $\delta=4$. There is a natural generalization of differential uniformity, presented recently in \cite{CDS}, which we recall in the following definition. \begin{definition} The function $f$ is \emph{weakly $\delta$-differentially uniform} if for any $u \in (\mathbb{F}_2)^m \setminus \{ 0 \}$ we have $\vert \mathrm{Im}(\hat{f}_u) \vert > 2^{m-1}/\delta$. If $f$ is weakly $2$-differentially uniform, then it is called a \emph{weakly Almost Perfectly Nonlinear (weakly APN)} function.
\end{definition} By \cite{CDS}, \S 4, Fact 3, a $\delta$-differentially uniform map is weakly $\delta$-differentially uniform, and it is easy to check that weak $\delta$-differential uniformity is affine-invariant. The significance of the previous definition lies in \cite{CDS}, Theorem 4.4. To appreciate it we need another definition. \begin{definition} A function $f$ is \emph{strongly $l$-anti-invariant} if, for any two subspaces $V,W \leq (\mathbb{F}_2)^m$ such that $f(V)=W$, either $\dim(V)=\dim(W)<m-l$ or $V=W=(\mathbb{F}_2)^m$. \end{definition} An iterated block cipher is obtained by the composition of several rounds (or round functions), i.e., key-dependent permutations of the message/cipher space. To avoid potential weaknesses of a given cipher $\mathcal{C}$, it is desirable that the permutation group $\Gamma_\infty(\mathcal{C})$ generated by its round functions, with the key varying in the key space, is primitive (for instance, a way to construct a trapdoor using imprimitivity is presented in \cite{trapdoor}). Translation-based ciphers (see \cite{CDS}, Def. 3.1) form an interesting class of iterated block ciphers containing AES~\cite{AES}, SERPENT and PRESENT. According to Theorem~4.4 in \cite{CDS}, if $\mathcal{C}$ is a translation-based cipher and each brick $\gamma'$ of every parallel S-Box $\gamma$ used in the proper round under consideration is both weakly $2^r$-differentially uniform and strongly $r$-anti-invariant for some $r$ with $1 \le r \le m/2$, then $\Gamma_\infty(\mathcal{C})$ is primitive. It may seem that Theorem~4.4 in \cite{CDS} requires conditions that are too strong in order to ensure primitivity, but they turn out to be quite natural, as shown in \cite{CDS}, \S 5. In the case of $4$-bit S-Boxes, we have only two possibilities: $r=1$, requiring every $\gamma'$ to be both strongly $1$-anti-invariant (which always holds if it is maximally non-linear, see for instance \cite{CDS}, footnote 4 on p.
347) and weakly APN; or $r=2$, requiring every $\gamma'$ to be both weakly $4$-differentially uniform (which always holds if it is $4$-differentially uniform) and strongly $2$-anti-invariant. \section{Theoretical results on weakly APN functions} \label{secth} Our first result shows that, for $4$-differentially uniform functions, the case $r=2$ of Theorem~4.4 in \cite{CDS} is just a sub-case of the case $r=1$. \begin{Proposition}\label{invariant} Let $f: (\mathbb{F}_2)^4 \to (\mathbb{F}_2)^4$ be a Boolean function such that (i) $f$ is $4$-differentially uniform (ii) $f$ is strongly $2$-anti-invariant. \noindent Then $f$ is weakly APN. \end{Proposition} \proof Assume by contradiction that $\vert \mathrm{Im}(\hat{f}_u) \vert \le 4$ for some $u \ne 0$. Then from (i) we deduce that $\vert \hat{f}_u^{-1}(y) \vert = 4$ for every $y\in \mathrm{Im}(\hat{f}_u)$. Hence we have $\hat{f}_u^{-1}(f(u)) = \{0,u,x,u+x \}$ for some $x$; in particular, $\hat{f}_u^{-1}(f(u))$ is a $2$-dimensional vector subspace. On the other hand, $\hat{f}_u(x)=\hat{f}_u(u)$ implies $f(x+u)=f(u)+f(x)$. It follows that $f(\{0,u,x,u+x \})$ is a $2$-dimensional vector subspace, contradicting~(ii). \qed In other words, Proposition \ref{invariant} provides some sufficient conditions for a $4$-bit S-Box to be weakly APN. Other sufficient conditions are presented in the next proposition and are based on the following non-linearity measures: \begin{equation}\label{degreecomp} n_i(f)=\vert \{v\in(\mathbb{F}_2)^m\setminus\{0\}: \deg(<f,v>)=i\} \vert \end{equation} and \begin{equation}\label{constant} \hat{n}(f)=\max_{u\in(\mathbb{F}_2)^m\setminus{\{0\}}}{\vert\{v\in(\mathbb{F}_2)^m\setminus{\{0\}}:\deg(<\hat{f}_u,v>)=0\}\vert}\,. \end{equation} \begin{Proposition}\label{derivative} Let $f: (\mathbb{F}_2)^4 \to (\mathbb{F}_2)^4$ be a Boolean function such that $\hat{n}(f)=0$. \noindent Then $f$ is weakly APN.
\end{Proposition} \proof Let $(\mathbb{F}_2)^4 = \{ x_1, \ldots, x_{16} \}$ and, given $u\in(\mathbb{F}_2)^4\setminus{\{0\}}$, let $M = (m_{ij}) \in (\mathbb{F}_2)^{4 \times 16}$ with $m_{ij} := (\hat{f}_u)_i(x_j)$. By definition, $f$ is weakly APN if and only if $\vert \mathrm{Im}(\hat{f}_u) \vert > 4$ for every such $u$, hence if and only if $M$ has more than $4$ distinct columns. Assume by contradiction that $M$ has $n \le 4$ distinct columns and let $M' \in (\mathbb{F}_2)^{4 \times n}$ be the corresponding submatrix. If $M'$ has rank $4$, then we may write $(1,1,1,1)$ as a linear combination of the rows of $M'$: $$ (1,1,1,1) = a M'_1 + b M'_2 + c M'_3 + d M'_4. $$ Since all the other columns of $M$ are equal to columns of $M'$, we may write $(1,\ldots,1) \in (\mathbb{F}_2)^{16}$ as the same linear combination of the rows of $M$: $$ (1,\ldots,1) = a M_1 + b M_2 + c M_3 + d M_4. $$ Hence the function $< \hat{f}_u,(a,b,c,d) >$ is the constant $1$, a contradiction. If instead $M'$ has rank $\le 3$, then we may write $(0,0,0,0)$ as a nonzero linear combination of the rows of $M'$: $$ (0,0,0,0) = a M'_1 + b M'_2 + c M'_3 + d M'_4. $$ Since all the other columns of $M$ are equal to columns of $M'$, we may write $(0,\ldots,0) \in (\mathbb{F}_2)^{16}$ as the same nonzero linear combination of the rows of $M$: $$ (0,\ldots,0) = a M_1 + b M_2 + c M_3 + d M_4. $$ Hence the function $< \hat{f}_u,(a,b,c,d) >$ is the constant $0$, a contradiction. \qed The following partial converse to Proposition \ref{derivative} gives necessary conditions and holds for \emph{any} $m\ge 2$. \begin{theorem}\label{converse} Let $f: (\mathbb{F}_2)^m \to (\mathbb{F}_2)^m$ be a (weakly) APN function. \noindent Then $\hat{n}(f)\leq 1$. \end{theorem} \proof Let $f = (f_1,f_2,\ldots,f_m)$ with $f_i: (\mathbb{F}_2)^m \to \mathbb{F}_2$ and assume by contradiction that both $< \hat{f}_u,v_1 >$ and $< \hat{f}_u,v_2 >$ are constant for some $u, v_1 \ne v_2 \in (\mathbb{F}_2)^m \setminus \{ 0 \}$.
Up to a linear transformation sending $v_1$ to $(1,0,0,\ldots,0)$ and $v_2$ to $(0,1,0,\ldots,0)$, without loss of generality we may assume that both $(\hat{f_u})_1 = \hat{(f_1)}_u$ and $(\hat{f_u})_2 = \hat{(f_2)}_u$ are constant. It follows that $\vert \mathrm{Im}(\hat{f}_u) \vert \le 2^{m-2}$ and $f$ is not weakly APN, contradiction. \qed As an application of Theorem \ref{converse}, we obtain the following: \begin{Proposition}\label{degree} Let $f: (\mathbb{F}_2)^4 \to (\mathbb{F}_2)^4$ be a weakly APN permutation. \noindent Then $\deg(f) = 3$ and $n_3(f) \in \{12,14,15 \}$. \end{Proposition} \proof It is well-known that $\deg{f} \le 3$ (see for instance \cite{permutation}). If $$ \vert \{v \in (\mathbb{F}_2)^4 \setminus \{ 0 \}: \deg (<f,v>) \le 2 \} \vert \le 5 $$ then our claim holds, since $\{v \in (\mathbb{F}_2)^4 \setminus \{ 0 \}: \deg (<f,v>) \le 2 \} \cup \{0 \}$ is a vector subspace of $(\mathbb{F}_2)^4$. Let $f = (f_1,f_2,f_3,f_4)$ with $f_i: (\mathbb{F}_2)^4 \to \mathbb{F}_2$ and assume by contradiction that $\deg(S) \le 2$ for $6$ different linear combinations $S = \sum_{i=1}^4 v_i f_i$. From the basic theory of quadratic Boolean functions (see for instance \cite{quadratic}, \S 2.2), it follows that the derivative $\hat{S}_u$ is constant for every $u \in V(S) \subseteq (\mathbb{F}_2)^4$, where $V(S)$ is a vector subspace of dimension $0$ if and only if $S$ is bent, $4$ if and only if $S$ is linear (affine), and $2$ otherwise. Now, $S$ is not bent since it is balanced (see for instance \cite{balanced}) and bent functions are never balanced (see for instance \cite{bent}). Thus $\dim V(S) \ge 2$ for every $S$ and $\vert V(S) \setminus \{ 0 \} \vert \ge 3$, in particular $6$ sets $V(S) \setminus \{ 0 \} \subseteq (\mathbb{F}_2)^4 \setminus \{ 0 \}$ cannot be disjoint. 
Hence there is $u \in (\mathbb{F}_2)^4 \setminus \{ 0 \}$ and two different non-zero linear combinations $S_1$ and $S_2$ such that both $\hat{(S_1)}_u$ and $\hat{(S_2)}_u$ are constant, and this contradicts Theorem~\ref{converse}. \qed \section{Computational results on weakly APN functions} \label{computational} The problem of classifying (invertible) S-Boxes $f:(\mathbb{F}_2)^m\to(\mathbb{F}_2)^m$ (w.r.t. affine-equivalence) was solved in \cite{dec,Sboxes} in the case $m=4$ and has been recently checked in \cite{Pul,saarinen}. By a direct check on the class representatives, we may draw a series of consequences, which we call \emph{Facts}. First of all, we see that three of our theoretical results cannot be inverted, as follows. \begin{Fact}\label{no-converse-invariant} The converse of Proposition \ref{invariant} does not hold. \end{Fact} \proof $(0,1,2,13,4,15,14,7,8,3,5,9,10,6,12,11)$ is weakly APN but is \emph{not} $4$-differentially uniform. \qed \begin{Fact}\label{no-converse-derivative} The converse of Proposition \ref{derivative} does not hold. \end{Fact} \proof $(0,1,2,13,4,15,14,7,8,3,5,9,10,6,12,11)$ is weakly APN but $\hat{n}(f)=1$. \qed \begin{Fact}\label{no-converse-converse} The converse of Theorem \ref{converse} does not hold. \end{Fact} \proof For $f=(0,1,2,7,4,10,15,9,8,3,13,14,12,5,6,11)$ we have $\hat{n}(f)=1$ but $f$ is not weakly APN. \qed Next, we can strengthen Proposition \ref{degree}: \begin{Fact}\label{refine-degree} Let $f: (\mathbb{F}_2)^4 \to (\mathbb{F}_2)^4$ be a weakly APN permutation. \noindent Then $\deg(f) = 3$ and $n_3(f) \in \{14,15 \}$. \end{Fact} Unfortunately, the previous fact cannot be inverted: \begin{Fact} The converse of Fact \ref{refine-degree} does not hold. \end{Fact} \proof For $f=(0,1,2,7,4,10,15,9,8,3,13,14,12,5,6,11)$ we have $\deg(f) = 3$ and $n_3(f)=14$, but $f$ is not weakly APN.
\qed Finally, we want to provide some sufficient conditions for $f$ to be weakly APN, involving also the following classical concept of non-linearity: \begin{definition} $$ \mathrm{Lin}(f)= \max_{a\in (\mathbb{F}_2)^m,\,b\in(\mathbb{F}_2)^m\setminus\{0\}}{\vert <f,b>^{\mathcal{W}}(a) \vert}\,, $$ where $\mathcal{W}$ denotes the Walsh coefficient (see for instance (1) in \cite{Sboxes}). \end{definition} Since for $m=4$ the best $f$'s have $\mathrm{Lin}(f)=8$, we find the following result of interest: \begin{Fact}\label{sufficient} Let $f:(\mathbb{F}_2)^4\to(\mathbb{F}_2)^4$ be a Boolean permutation such that $$ \mathrm{Lin}(f)=8, \quad f \mbox{ is } 4\mbox{-differentially uniform}, \quad n_3(f)\ge 14 \,. $$ \noindent Then $f$ is weakly APN. \end{Fact} Regrettably, the assumptions of Fact \ref{sufficient} cannot be weakened. We provide two (affine-independent) counterexamples: \begin{itemize} \item with $f=(0,1,2,12,4,13,11,10,8,15,5,9,6,14,7,3)$ we have $\mathrm{Lin}(f)=8$ and $n_3(f)= 14$, but $f$ is not weakly APN, \item with $f=(0,1,2,12,4,6,14,5,8,3,13,10,9,7,15,11)$ we have that $f$ is \\ $4$-differentially uniform and that $n_3(f)= 14$, but again $f$ is not weakly APN. \end{itemize} \section{More computational results and conclusions} \label{concl} Let us recall from \cite{Sboxes} the following further measures of non-linearity: \begin{itemize} \item [-] $ \mathrm{Lin}_1(f)=\max_{\substack{a,b\in(\mathbb{F}_2)^m\\ \mathrm{w}(a)=\mathrm{w}(b)=1}}{\vert <f,b>^{\mathcal{W}}(a) \vert}\,$, \item[-] $\mathrm{Diff}_1(f)=\max_{\substack{a,b\in(\mathbb{F}_2)^m\\ \mathrm{w}(a)=\mathrm{w}(b)=1}}{\vert {\hat{f}_a}^{-1}(b) \vert}\,$.
\end{itemize} Then we introduce a new class of S-Boxes suitable for block cipher construction: \begin{Definition} We say that a Boolean permutation $f:(\mathbb{F}_2)^4\to(\mathbb{F}_2)^4$ is a \emph{strong} S-Box if $f$ is weakly APN, $4$-differentially uniform, and $$ \mathrm{Lin}(f)=8, \quad \mathrm{Diff}_1(f)=0, \quad \mathrm{Lin}_1(f)=4, \quad n_3(f)\ge 14 \,. $$ Moreover, we say that $f$ is \emph{very strong} if it is strong and strongly $2$-anti-invariant. \end{Definition} Note that a very strong function is in particular both optimal (\cite{Sboxes}, Def. 1) and Serpent-type (\cite{Sboxes}, Def. 2), and it also satisfies the assumptions of Theorem~4.4 of \cite{CDS}. A direct computation (see \cite{Pul}) allows us to conclude: \begin{Fact} \label{number} There are $55296$ strong S-Boxes and $2304$ very strong ones. \end{Fact} \begin{remark} As in the rest of the paper, all statements in this section assume $f(0)=0$. So Fact \ref{number} implies that there are actually $55296 \cdot 16=884736$ invertible $4$-bit S-Boxes equivalent via a translation to strong S-Boxes, therefore sharing their security robustness. The same goes for the $2304 \cdot 16=36864$ S-Boxes equivalent to very strong S-Boxes. \end{remark} Following \cite{Sboxes}, we have tested the properties of the S-Boxes used in SERPENT, denoted by $S_0,S_1,\ldots,S_7$ (for details see \cite{Pul}), and we get: \begin{Fact} The S-Boxes $S_3,S_4,S_5,S_7$ are strong. None of the $S_i$'s is very strong. \end{Fact} In conclusion, we have considered the link between the recent notion of weakly APN function and several more traditional non-linearity properties, such as differential uniformity, algebraic degree and classical non-linearity. We obtained both theoretical and computational results.
In particular, sufficient conditions for an S-Box to be weakly APN are presented in Propositions \ref{invariant} and \ref{derivative} and in Fact \ref{sufficient}, while necessary ones can be found in Theorem \ref{converse}, Proposition \ref{degree} and Fact \ref{refine-degree}. \section{Acknowledgements} This research has been supported by TELSY S.p.A., MIUR ``Rientro dei cervelli'', GNSAGA of INdAM and MIUR Cofin 2008 - ``Geo\-metria delle variet\`{a} algebriche e dei loro spazi di moduli'' (Italy). A preliminary version of this work has been available online as arXiv:1102.3882v1 since February 17, 2011.
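As an aside for readers who wish to reproduce the computational Facts, the weak-APN property is a small finite check. The following sketch is ours, not the code of \cite{Pul}; it assumes the standard identification of $(\mathbb{F}_2)^4$ with the integers $0,\ldots,15$ under XOR, with an S-Box given as the list of its values $f(0),\ldots,f(15)$:

```python
# Sketch (not the authors' code): a 4-bit S-box f is weakly APN iff
# |Im(f_u)| > 2^{m-2} = 4 for every nonzero u, where f_u(x) = f(x ^ u) ^ f(x).
def is_weakly_apn(sbox, m=4):
    n = 1 << m
    bound = 1 << (m - 2)  # 2^{m-1} / delta with delta = 2
    for u in range(1, n):
        if len({sbox[x ^ u] ^ sbox[x] for x in range(n)}) <= bound:
            return False
    return True

# S-boxes quoted in the Facts above: the first is weakly APN, the second is not.
f_apn = (0, 1, 2, 13, 4, 15, 14, 7, 8, 3, 5, 9, 10, 6, 12, 11)
f_not = (0, 1, 2, 7, 4, 10, 15, 9, 8, 3, 13, 14, 12, 5, 6, 11)
```

For the second S-Box, the derivative at $u=8$ already has an image of size exactly $4$, which witnesses the failure.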
\section{A Review on Convolution Operators} In this section we provide a review of convolution operators over discrete and continuous domains. Convolution is a mathematical operator acting on two functions $f$ and $g$ that produces a third function $h$. \begin{figure} \includegraphics[width=1.02\linewidth]{figs/continuous_conv.pdf} \caption{Standard Grid Convolution vs. Continuous Convolution. In grid convolution, a $3\times 3$ kernel function can only take $9$ possible values as input and, apart from image boundaries, input and output correspond to the same points in the domain. In continuous convolution, the kernel function takes arbitrary points in the continuous domain as input, and it is possible to output features at points that were not seen in the input. } \label{fig:idea} \end{figure} \paragraph{Generic Convolution over Groups:} Consider a subset $\mathcal{S}$ of a group $\mathcal{G}$, with mutually inverse operators $+$ and $-$ defined on $\mathcal{G}$. Note that $\mathcal{S}$ is not necessarily closed under the $+$ and $-$ operators. Given two functions $f$ and $g$ defined on $\mathcal{G}$, the convolution operator can be defined as: \begin{equation} \label{group-conv} h(\mathbf{x}) = \sum_{\mathbf{y} \in \mathcal{S}} f(\mathbf{y}) g(\mathbf{y} - \mathbf{x}) \end{equation} Two special cases are worth discussing in particular: \paragraph{Continuous Convolution:} Let $f$ and $g$ be functions over the real-valued domain, i.e., $\mathcal{G} = \mathcal{S} = \mathbb{R}$.
The convolution operator is then defined as the following integral: \begin{equation} \label{conti-conv} h(\mathbf{x}) = (f \ast g) (\mathbf{x}) = \int_{-\infty}^{\infty} f(\mathbf{y}) g(\mathbf{y} -\mathbf{x}) d\mathbf{y} \end{equation} where $h(\mathbf{x})$ can be described as a weighted average of the function $f$ around the point $\mathbf{x}$, with the weight given by $g$ shifted by $\mathbf{x}$. \paragraph{Discrete Convolution:} Let $f$ and $g$ be functions defined over a finite integer support domain: $\mathcal{G} = \mathbb{Z}^D$ and $ \mathcal{S} = \{ -M, -M + 1, ..., M - 1, M\}^D$. The convolution operator between $f$ and $g$ is then defined as: \[ h[n] = (f \ast g)[n] = \sum_{m = -M}^M f[n - m] g[m] \] Note that if $D = 2$ we recover the standard convolution operator over 2D images, and if $D = 3$ we recover the standard convolution operator over 3D volumetric representations. Due to the finite support domain, a convolution kernel $g$ can be parameterized by $(2M+1)^D$ real values. \section{Conclusions} We have presented a new learnable convolution layer that operates over non-grid structured data. Our convolution kernel function is parameterized by multi-layer perceptrons and spans the full continuous domain. This allows us to design a new deep learning architecture that can be applied to arbitrarily structured data, as long as the support relationships between elements are computable. We validate the performance on point cloud segmentation and motion estimation tasks, over very large-scale datasets with up to 200 billion points. The proposed network achieves state-of-the-art performance on all the tasks and datasets. \section{Experimental Evaluation} We demonstrate the effectiveness of our approach in the tasks of semantic labeling and motion estimation of 3D point clouds, and show state-of-the-art performance.
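As a side note, the one-dimensional ($D=1$) discrete convolution sum from the review above can be checked with a direct implementation (a sketch of ours; $f$ is taken to be zero outside its listed support, a boundary convention the text leaves open):

```python
# Direct implementation of h[n] = sum_{m=-M}^{M} f[n-m] g[m] for D = 1.
# f: dict n -> f[n] (assumed zero elsewhere); g: dict on {-M, ..., M}.
def conv1d(f, g, M):
    support = range(min(f) - M, max(f) + M + 1)
    return {n: sum(f.get(n - m, 0) * g.get(m, 0) for m in range(-M, M + 1))
            for n in support}
```

For instance, with $f$ supported on $\{0,1\}$ and a constant kernel $g\equiv 1$ on $\{-1,0,1\}$, the output is nonzero on $\{-1,\ldots,2\}$, matching a hand computation of the sum.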
We conduct point-wise semantic labeling experiments over two datasets: a very large-scale outdoor lidar semantic segmentation dataset that we collected and labeled in house, and a large indoor semantic labeling dataset. To our knowledge, these are the largest real-world outdoor and indoor datasets that are available for this task. The datasets are fully labeled and contain 137 billion and 629 million points, respectively. The lidar flow experiment is also conducted on the outdoor dataset, with a ground-truth 3D motion label for each point.
\begin{table*}[] \centering \footnotesize \setlength\tabcolsep{4pt}
\begin{tabular}{c|cc|ccccccccccccc}
Method & \cellcolor{blue!25} \textbf{mIOU} & \cellcolor{blue!25} \textbf{mAcc} & ceiling & floor & wall & beam & column & window & door & chair & table & bookcase & sofa & board & clutter \\ \hline
PointNet \cite{pointnet} & \cellcolor{blue!25}41.09 & \cellcolor{blue!25}48.98 & 88.80 & \textbf{97.33} & 69.80 & 0.05 & 3.92 & 46.26 & 10.76 & 52.61 & 58.93 & 40.28 & 5.85 & 26.38 & 33.22 \\
3D-FCN-TI \cite{segcloud} & \cellcolor{blue!25}47.46 & \cellcolor{blue!25} 54.91 & {90.17} & 96.48 & 70.16 & 0.00 & 11.40 & 33.36 & 21.12 & \textbf{76.12} & 70.07 & 57.89 & 37.46 & 11.16 & 41.61 \\
SEGCloud \cite{segcloud} & \cellcolor{blue!25} 48.92 & \cellcolor{blue!25}57.35 & 90.06 & 96.05 & 69.86 & 0.00 & \textbf{18.37} & 38.35 & 23.12 & 75.89 & \textbf{70.40} & \textbf{58.42} & 40.88 & 12.96 & {41.60} \\ \hline
Ours PCCN & \cellcolor{blue!25} \textbf{58.27} & \cellcolor{blue!25} \textbf{67.01} & \textbf{92.26} & {96.20} & \textbf{75.89} & \textbf{0.27} & {5.98} & \textbf{69.49} & \textbf{63.45} & {66.87} & 65.63 & 47.28 & \textbf{68.91} & \textbf{59.10} & \textbf{46.22} \\
\end{tabular} \vspace{-3mm} \caption{Semantic Segmentation Results on the Stanford Large-Scale 3D Indoor Scene Dataset } \label{tab-indoor3d} \end{table*}
\subsection{Semantic Segmentation of Indoor Scenes} \paragraph{Dataset:} We use the Stanford
large-scale 3D indoor scene dataset \cite{indoor3d} and follow the training and testing procedure used in \cite{segcloud}. We report the same metrics, i.e., mean-IOU, mean class accuracy (TP / (TP + FN)) and class-wise IOU. The input is six dimensional and is composed of the xyz coordinates and RGB color intensity. Each point is labeled with one of the 13 classes shown in \tabref{tab-indoor3d}. \paragraph{Competing Algorithms:} We compare our approach to PointNet \cite{pointnet} and SegCloud \cite{segcloud}. We evaluate the proposed end-to-end continuous convnet with eight continuous convolution layers (\textit{Ours PCCN}). The kernels are defined over the continuous support domain of 3D Euclidean space. Each intermediate layer except the last has 32-dimensional hidden features followed by batchnorm and a ReLU nonlinearity. The dimension of the last layer is 128. We observe that the distribution of semantic labels within a room is highly correlated with the room type (e.g., office, hallway, conference room, etc.). Motivated by this, we apply max pooling over all the points in the last layer to obtain a global feature, which is then concatenated to the output feature of each point in the last layer, resulting in a 256-dimensional feature. A fully connected layer with softmax activation is used to produce the final logits. Our network is trained end-to-end with the cross entropy loss, using the Adam optimizer. \paragraph{Results:} As shown in Tab.~\ref{tab-indoor3d}, our approach outperforms the state-of-the-art by 9.3\% mIOU and 9.6\% mACC. \figref{fig-indoor3d} shows qualitative results. Despite the diversity of geometric structures, our approach works very well. Confusion mainly occurs between column and wall, and between window and bookcase. It is also worth noting that our approach captures the visual information encoded in the RGB channels.
The last row shows two failure cases. In the first one, the door in the washroom is labeled as clutter in the ground truth, whereas our algorithm predicts door. In the second one, the board on the right has a window-like texture, which makes the algorithm predict the wrong label. % \begin{figure} \footnotesize \setlength\tabcolsep{0.5pt} % \renewcommand{\arraystretch}{0.8} \begin{tabular}{cc} \adjincludegraphics[width=.5\linewidth, trim={{.2\width} {.2\height} {.2\width} {.25\height}}, clip]{./figs/flow/flow_32876bf4-310e-4560-dca0-47eab83a7395_200.png} & \adjincludegraphics[width=.5\linewidth, trim={{.2\width} {.2\height} {.2\width} {.25\height}}, clip]{./figs/flow/32876bf4-310e-4560-dca0-47eab83a7395_200.png} \\ Flow Field & Overlay of Target and Warped Source\\ \end{tabular} \vspace{-3mm} \caption{Right: \textcolor{purple}{purple} shows the target frame; \textcolor{yellow}{yellow} shows the source frame warped to the target frame using the ground-truth flow} \label{fig:flow} \end{figure} \subsection{Semantic Segmentation of Driving Scenes} \label{sec:odtac} \paragraph{Dataset:} We first conduct experiments on the task of point cloud segmentation in the context of autonomous driving. Each point cloud is produced by a full sweep of a roof-mounted Velodyne-64 lidar sensor driving in several cities in North America. The dataset is composed of snippets, each having 300 consecutive frames. % The training and validation set contains 11,337 snippets in total while the test set contains 1,644 snippets. % We report metrics on a subset of the test set which is generated by sampling 10 frames from each snippet to avoid bias due to scenes where the ego-car is static (e.g., when waiting at a traffic light). Each point is labeled with one of seven classes defined in Tab.~\ref{tab-odtac}. We adopt mean intersection-over-union (meanIOU) and point-wise accuracy (pointAcc) as our evaluation metrics.
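For concreteness, both metrics can be computed from a confusion matrix as in the following minimal NumPy sketch (function and variable names are our own, not from the released code):

```python
import numpy as np

# Illustrative sketch: class-wise IOU = TP / (TP + FP + FN), meanIOU is its
# average over classes, and pointAcc is the fraction of correctly labeled points.
def segmentation_metrics(pred, gt, num_classes):
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)             # conf[c_gt, c_pred] point counts
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp                 # predicted as c, but gt differs
    fn = conf.sum(axis=1) - tp                 # gt is c, but prediction differs
    iou = tp / np.maximum(tp + fp + fn, 1)     # guard against empty classes
    point_acc = tp.sum() / conf.sum()
    return iou.mean(), point_acc

pred = np.array([0, 1, 1, 1])
gt = np.array([0, 0, 1, 1])
miou, acc = segmentation_metrics(pred, gt, 2)  # IOUs: 1/2 and 2/3
```

The class accuracy TP / (TP + FN) used for the indoor benchmark follows the same pattern with `tp / np.maximum(tp + fn, 1)`.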
\begin{table*}[] \centering \footnotesize \setlength\tabcolsep{4pt} % \begin{tabular}{c|cc|ccccccc|c} Method & \cellcolor{blue!25} \textbf{pACC} & \cellcolor{blue!25} \textbf{mIOU} & vehicle & bicyclist & pedestrian & motorcycle & animal & background & road & params size \\ \hline PointNet \cite{pointnet} & \cellcolor{blue!25} 91.96 & \cellcolor{blue!25} 38.05 & 76.73 & 2.85 & 6.62 & 8.02 & 0.0 & 89.83 & 91.96 & 20.34MB \\ 3D-FCN \cite{resnet} & \cellcolor{blue!25} 94.31 & \cellcolor{blue!25} 49.28 & 86.74 & 22.30 & 38.26 & 17.22 & 0.98 & 86.91 & 92.56 & 74.66MB \\ \hline Ours PCCN & \cellcolor{blue!25} 94.56 & \cellcolor{blue!25} 46.35 & 86.62 & 8.31 & 41.84 & 7.24 & 0.00 & 87.27 & \textbf{93.20} & \textbf{9.34MB} \\ Ours 3D-FCN+PCCN & \cellcolor{blue!25} \textbf{95.45} & \cellcolor{blue!25} \textbf{58.06} & \textbf{91.83} & \textbf{40.23} & \textbf{47.74} & \textbf{42.91} & \textbf{1.25} & \textbf{89.27} & 93.18 & 74.67MB \\ \end{tabular} \vspace{-3mm} \caption{Semantic Segmentation Results on the Driving Scenes Dataset} \label{tab-odtac} \end{table*} \paragraph{Baselines:} We compare our approach to the point cloud segmentation network (\textit{PointNet}) \cite{pointnet} and a 3D fully convolutional network (\textit{3D-FCN}) conducted over a 3D occupancy grid. We use a resolution of 0.2m for each voxel over a 160m$\times$80m$\times$6.4m range. This results in an occupancy grid encoded as a tensor of size 800$\times$400$\times$32. We define a voxel to be occupied if it contains at least one point. We use ResNet-50 as the backbone and replace the last average pooling and fully connected layer with two fully convolutional layers and a trilinear upsampling layer to obtain dense voxel predictions. The model is trained from scratch with the Adam optimizer \cite{adam} \shenlong{to minimize the class-reweighted cross-entropy loss.} % Finally, the voxel-wise predictions are mapped back to the original points and metrics are computed over points.
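The occupancy-grid construction used by the 3D-FCN baseline can be sketched as follows (a simplified sketch; centering the range on the ego-vehicle is our assumption, as the paper does not specify offsets):

```python
import numpy as np

# Illustrative sketch: 0.2 m voxels over a 160 m x 80 m x 6.4 m range give
# an 800 x 400 x 32 binary occupancy tensor; a voxel is occupied if it
# contains at least one lidar point.
def voxelize(points, extent=(160.0, 80.0, 6.4), res=0.2):
    dims = tuple(int(round(e / res)) for e in extent)       # (800, 400, 32)
    grid = np.zeros(dims, dtype=bool)
    # assumption: the range is centered on the ego-vehicle at the origin
    idx = np.floor((points + np.array(extent) / 2.0) / res).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
    idx = idx[valid]                                        # drop out-of-range points
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

Voxel-wise predictions from the 3D-FCN are then mapped back to the original points by the inverse of this index computation.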
\shenlong{We adapted the open-sourced PointNet model to our dataset and trained it from scratch. The architecture and loss function remain the same as in the original paper, except that we removed the point rotation layer since it negatively \simon{impacts} validation performance on this dataset. } % \begin{figure*} \footnotesize \setlength\tabcolsep{0.5pt} % \renewcommand{\arraystretch}{0.8} \begin{tabular}{cccc} \adjincludegraphics[width=.24\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/gt.png} & \adjincludegraphics[width=.24\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/pred.png} & \adjincludegraphics[width=.24\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/gt_2.png} & \adjincludegraphics[width=.24\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/pred_2.png} \\ \adjincludegraphics[width=.24\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/gt_3.png} & \adjincludegraphics[width=.24\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/pred_3.png} & \adjincludegraphics[width=.24\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/gt_4.png} & \adjincludegraphics[width=.24\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/pred_4.png} \\ Ground Truth & Ours 3D-FCN+PCCN & Ground Truth & Ours 3D-FCN+PCCN \\ \end{tabular} \vspace{-3mm} \caption{Lidar Flow Results on the Driving Scene Dataset} \label{fig:flow-prediction} \end{figure*} \paragraph{Our Approaches:} We evaluate two versions of our approach. Our first instance conducts continuous convolutions directly over the raw xyz-intensity lidar points (\textit{Ours PCCN}). Our second version (\textit{Ours 3D-FCN+PCCN}) performs continuous convolutions over the features extracted from \textit{3D-FCN}.
\textit{Ours PCCN} has 16 continuous conv layers with residual connections, batchnorm and ReLU non-linearities. We use the spatial support in $\mathbb{R}^3$ to define our kernel. We train the network with a point-wise cross-entropy loss and the Adam \cite{adam} optimizer. % In contrast, the \textit{Ours 3D-FCN+PCCN} model has 7 residual continuous convolutional layers on top of the trained \textit{3D-FCN} model and performs end-to-end fine-tuning using the Adam optimizer. % \paragraph{Results:} As shown in Tab.~\ref{tab-odtac}, by exploiting sophisticated features via 3D convolutions, \textit{3D-FCN+PCCN} achieves the best performance. % \figref{fig-odtac} shows a qualitative comparison between models. As shown in the figure, all models produce good results. Performance differences often result from ambiguous regions. In particular, we can see that the 3D-FCN model over-segments the scene: it mislabels a background pole as vehicle (red above the egocar), nearby spurious points as bicyclist (green above the egocar), and a wall as pedestrian (purple near the left edge). This is reflected in the confidence map (as bright regions). We observe a significant improvement in our \textit{3D-FCN+PCCN} model, with all of the above corrected with high confidence. For more results and videos please refer to the supplementary material. \paragraph{Model Sizes:} \shenlong{We also compare the model sizes of the competing algorithms in Tab.~\ref{tab-odtac}. \simon{In comparison to} the 3D-FCN approach, the end-to-end continuous convolution network's model size is eight times smaller, while achieving comparable results. The 3D-FCN+PCCN model is just 0.01MB larger than 3D-FCN, but improves mean IOU by a large margin. } \paragraph{Complexity and Runtime:} We benchmark the proposed model's runtime on a GTX 1080 Ti GPU and a Xeon E5-2687W CPU with 32 GB of memory. The forward pass of an 8-layer PCCN model (32 feature dimensions per layer, 50 neighbours) takes 33ms.
The KD-Tree neighbour search takes 28 ms. The end-to-end computation takes 61ms. Each layer requires 1.32 GFLOPs. \paragraph{Generalization:} To demonstrate the generalization ability of our approach, we evaluate our model, trained with only North American scenes, on the KITTI dataset \cite{kitti}, which was captured in Europe. As shown in \figref{fig:kitti}, the model achieves good results, with well segmented dynamic objects, such as vehicles and pedestrians. \subsection{Lidar Flow} \paragraph{Dataset:} \shenlong{We also validate our proposed method on the task of lidar-based motion estimation, referred to as lidar flow. In this task, the input is two consecutive lidar sweeps, and the goal is to estimate the 3D motion field for each point in the first frame, undoing both the ego-motion and the motion of dynamic objects. The ground-truth ego-motion is computed by a comprehensive filter that takes as input GPS, IMU, and ICP-based lidar alignment against pre-scanned 3D geometry of the scene. The ground-truth 6DOF motion of dynamic objects is estimated from temporally coherent 3D object tracklets, labeled by in-house annotators. Combining both yields the ground-truth motion field. ~\figref{fig:flow} shows the colormapped flow field and the overlay between two frames after undoing the per-point motion. This task is crucial for many applications, such as multi-rigid transform alignment, object tracking, global pose estimation, \emph{etc.} The training and validation set contains 11,337 snippets while the test set contains 1,644 snippets. % We use 110k frame pairs for training and validation, and 16,440 frame pairs for testing. End-point error and the outlier percentages at 10cm and 20cm are used as metrics. % } \paragraph{Competing Algorithms:} We compare against the 3D-FCN baseline using the same architecture and volumetric representation as used in Sec.~\ref{sec:odtac}.
We also adopt a similar 3D-FCN+PCCN architecture, with 7 residual continuous convolution layers added as a polishing network. In this task, we remove the ReLU nonlinearity and supervise the PCCN layers with an MSE loss at every layer. \shenlong{The training objective is the mean squared error between the ground-truth flow vector and the prediction.} \paragraph{Results:} Tab.~\ref{tab-lidarflow} reports the quantitative results. As shown in the table, our 3D-FCN+PCCN model outperforms the 3D-FCN by 0.351cm in end-point error, \shenlong{and our method reduces the outliers by approximately $20\%$. \figref{fig:flow-prediction} shows sample flow predictions compared with ground truth labels. As shown in the figure, our algorithm is able to capture both the global motion of the ego-car, including self-rotation, and the motion of each dynamic object in the scene. For more results please refer to our supplementary material.} \begin{table}[] \centering \footnotesize \begin{tabular}{c|ccc} Method & EPE (cm) & Outlier$\%_{10}$ & Outlier$\%_{20}$ \\ \hline 3D-FCN & 8.161 & 25.92\% & 7.12\% \\ \hline Ours 3D-FCN+PCCN & \textbf{7.810} & \textbf{19.84\%} & \textbf{5.97\%} \\ \end{tabular} \vspace{-3mm} \caption{Lidar Flow Results on the Driving Scenes Dataset} \label{tab-lidarflow} \end{table} \section{Deep Parametric Continuous CNNs} \subsection{Parametric Continuous Convolutions} Standard CNNs use discrete convolutions (i.e., convolutions defined over a discrete domain) as their basic operation: \[ h[n] = (f \ast g)[n] = \sum_{m = -M}^M f[n - m] g[m] \] where $f: \mathcal{G} \rightarrow \mathbb{R} $ and $g: \mathcal{S} \rightarrow \mathbb{R}$ are functions defined over the integer grid $\mathcal{G} = \mathcal{Z}^D$ and the finite set $ \mathcal{S} = \{ -M, -M + 1, ..., M - 1, M\}^D$, respectively.
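For concreteness, the discrete convolution above can be checked numerically (an illustrative sketch, not the paper's code):

```python
import numpy as np

# Illustrative sketch of h[n] = sum_m f[n-m] g[m], with the kernel g
# supported on {-M, ..., M} and stored with index offset m + M.
def discrete_conv(f, g, M):
    N = len(f)
    h = np.zeros(N)
    for n in range(N):
        for m in range(-M, M + 1):
            if 0 <= n - m < N:              # zero-pad outside the signal
                h[n] += f[n - m] * g[m + M]
    return h

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.25, 0.5, 0.25])             # M = 1
h = discrete_conv(f, g, M=1)
```

The result agrees with `np.convolve(f, g, mode='same')`, which implements the same zero-padded convolution.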
In contrast, continuous convolutions can be defined as \begin{equation} \label{conti-conv} h(\mathbf{x}) = (f \ast g) (\mathbf{x}) = \int_{-\infty}^{\infty} f(\mathbf{y}) g(\mathbf{x} -\mathbf{y}) d\mathbf{y} \end{equation} where both the kernel $g: \mathcal{S} \rightarrow \mathbb{R}$ and the feature $f: \mathcal{G} \rightarrow \mathbb{R} $ are defined as continuous functions over the support domains $\mathcal{G}=\mathbb{R}^D$ and $\mathcal{S}=\mathbb{R}^D$ respectively. % Continuous convolutions require the integration in Eq. \eqref{conti-conv} to be analytically tractable. % Unfortunately, this is not possible for real-world applications, where the input features are complicated and non-parametric, and the observations are sparse points sampled over the continuous domain. Motivated by Monte Carlo integration \cite{mc}, we derive our continuous convolution operator. In particular, given continuous functions $f$ and $g$ with a finite number of input points $\{\mathbf{y}_i\}$ sampled from the domain, the convolution at an arbitrary point $\mathbf{x}$ can be approximated as: \[ \label{sampled-conti-conv} h(\mathbf{x}) = \int_{-\infty}^{\infty} f(\mathbf{y}) g(\mathbf{x}-\mathbf{y}) d\mathbf{y} \approx \frac{1}{N} \sum_{i=1}^N f(\mathbf{y}_i) g(\mathbf{x} - \mathbf{y}_i) \] The next challenge we need to solve is % \simon{constructing} the continuous convolutional kernel function $g$. Conventional 2D and 3D discrete convolution kernels are parameterized such that each point in the support domain is assigned a value (\emph{i.e.}, the kernel weight). Such a parameterization is \simon{infeasible} for continuous convolutions, \simon{since} the kernel function $g$ is defined over an infinite number of points (i.e., has infinite support). % Instead, in this paper we propose to use parametric continuous functions to model $g$. We \simon{name} our approach {\it Parametric Continuous Convolutions}.
In particular, we use a multi-layer perceptron (MLP) \simon{as the approximator}. By the universal approximation theorem \cite{mlp-theory}, MLPs are expressive and capable of approximating continuous functions over $\mathbb{R}^n$. % Thus we define: \[ g(\mathbf{z}; \theta) = MLP(\mathbf{z}; \theta) \] The kernel function $g(\mathbf{z}; \theta): \mathbb{R}^D \rightarrow \mathbb{R}$ spans the full continuous support domain while remaining parametrizable by a finite number of parameters. Note that other choices such as polynomials are possible; however, low-order polynomials are not expressive, whereas learning high-order polynomials can be numerically \simon{unstable} for back-propagation. \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{figs/architecture.pdf} \vspace{-3mm} \caption{Architecture of \shenlong{the Deep Parametric Continuous CNN for the Semantic Labeling Task.}} \label{fig:network} \end{figure*} \subsection{From Convolutions to Deep Networks} In this section, we first design a new convolution layer based on the parametric continuous convolutions derived in the previous subsection. We then propose a deep learning architecture using this new convolution layer. \paragraph{Parametric Continuous Convolution Layer: } Note that, unlike standard discrete convolutions which are conducted over the same point set, the input and output points of our parametric continuous convolution layer can be different. This is important for many practical applications, \simon{where we want to make dense predictions based on partial observations.} Furthermore, this allows us to abstract information from redundant input points (i.e., pooling).
As a consequence, the input of each convolution layer contains three parts: the input feature vector $\mathcal{F} = \{ \mathbf{f}_{\mathrm{in}, j} \in \mathbb{R}^F\}$, the associated locations in the support domain $\mathcal{S} = \{ \mathbf{y}_j \}$, as well as the output domain locations % $\mathcal{O} = \{ \mathbf{x}_i \}$. % For each layer, we first evaluate the kernel function $g_{d, k}(\mathbf{x}_i - \mathbf{y}_j; \theta)$ for all $\mathbf{y}_j \in \mathcal{S}$ and all $ \mathbf{x}_i \in \mathcal{O}$, given the parameters $\theta$. Each element of the output feature vector is then computed as: \[ {h}_{k, i} = \sum_d^{F} \sum_j^{N} g_{d, k}(\mathbf{x}_i - \mathbf{y}_j) f_{d, j} \] Let $N$ be the number of input points, $M$ be the number of output points, and $D$ the dimensionality of the support domain. Let $F$ and $O$ be predefined input and output feature dimensions respectively. Note that \simon{these} are hyperparameters of the continuous convolution layer, analogous to \shenlong{the input and output feature dimensions in standard grid convolution layers.} \figref{fig:idea} \simon{depicts} our parametric continuous convolutions in comparison with conventional grid convolution. \shenlong{Two major differences are highlighted: 1) the kernel function is continuous given the relative location in the support domain; 2) the input/output points can be any points in the continuous domain, and the input and output point sets can differ.
} % \begin{figure*} \setlength\tabcolsep{0.5pt} % \renewcommand{\arraystretch}{0.8} \begin{tabular}{cccccc} \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-0.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-1.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-2.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-6.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-7.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-8.jpg} \\ \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-9.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-10.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-11.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-33.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-34.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-35.jpg} \\ \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-12.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-13.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-14.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-15.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-16.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-17.jpg} \\ \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-18.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-19.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-20.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-30.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-31.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-32.jpg} \\ \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-24.jpg} & 
\includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-25.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-26.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-27.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-28.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-29.jpg} \\ \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-21.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-22.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-23.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-3.jpg} & \includegraphics[width=0.16\linewidth]{./figs/indoor3d/indoor35-4.jpg} & \includegraphics[width=0.16\linewidth] {./figs/indoor3d/indoor35-5.jpg} \\ Input & Ground Truth & Ours PCCN & Input & Ground Truth & Ours PCCN \end{tabular} \vspace{-3mm} \caption{Semantic Segmentation Results on the Stanford Indoor3D Dataset} \label{fig-indoor3d} \end{figure*} \paragraph{Deep Parametric Continuous CNNs:} Using the parametric continuous convolution layers as building blocks, we can construct a new family of deep networks which operate on unstructured data defined in a topological group under addition. In the following discussion, we focus on multi-dimensional Euclidean space, noting that this is a special case. The network takes the input features and \shenlong{their} associated positions in the support domain as input. % Then the hidden representations are generated % from successive parametric continuous convolution layers. Following standard CNN \simon{architectures}, we can add batch normalization, non-linearities and residual connections between layers. Pooling can also be employed over the support domain to aggregate information.
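Putting the pieces together, one layer of the parametric continuous convolution can be sketched as follows (a NumPy sketch with a one-hidden-layer kernel MLP and the Monte Carlo $1/N$ normalization from the earlier approximation; all sizes and names are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_kernel(z, W1, b1, W2, b2):
    # z: (..., D) relative offsets -> (..., F * O) kernel values g_{d,k}(z)
    hidden = np.maximum(z @ W1 + b1, 0.0)            # ReLU hidden layer
    return hidden @ W2 + b2

def continuous_conv(feats, support_pts, out_pts, W1, b1, W2, b2, F, O):
    # feats: (N, F) features at support_pts (N, D); out_pts: (M, D)
    N = support_pts.shape[0]
    offsets = out_pts[:, None, :] - support_pts[None, :, :]      # x_i - y_j
    g = mlp_kernel(offsets, W1, b1, W2, b2)
    g = g.reshape(out_pts.shape[0], N, F, O)                     # (M, N, F, O)
    # h[k, i] = (1/N) * sum_d sum_j g_{d,k}(x_i - y_j) f[d, j]
    return np.einsum('mjdo,jd->mo', g, feats) / N

D, F, O, H = 3, 4, 8, 16                  # support dim, in/out channels, hidden
W1, b1 = rng.normal(size=(D, H)), np.zeros(H)
W2, b2 = rng.normal(size=(H, F * O)), np.zeros(F * O)
pts = rng.normal(size=(50, D))            # input point locations
feats = rng.normal(size=(50, F))          # input features
queries = rng.normal(size=(10, D))        # output locations (may differ from input)
out = continuous_conv(feats, pts, queries, W1, b1, W2, b2, F, O)
```

Note that the output is linear in the input features, so the layer is still a convolution; only the kernel weights are produced by the MLP.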
\simon{In practice, we find adding residual connection between parametric continuous convolution layers is critical to help convergence.} Please refer to \figref{fig:layer} for an example of the computation graph of a single layer, and to \figref{fig:network} for an example of the network architecture employed for our \shenlong{indoor} % semantic segmentation task. \paragraph{Learning:} All of our building blocks are differentiable, thus our networks can be learned through back-prop: \[ \frac{\partial h}{\partial \theta} = \frac{\partial h}{\partial g} \cdot \frac{\partial g}{\partial \theta} = \sum_d^F \sum_j^N f_{d, j} \cdot \frac{\partial g}{\partial \theta} \] \begin{figure*} \setlength\tabcolsep{0.5pt} % \renewcommand{\arraystretch}{0.8} \begin{tabular}{cccc} \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/ccn-e2e/6a4ad387-6d87-4329-de6c-0e69a1877336_049_gt.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/3d-cnn/6a4ad387-6d87-4329-de6c-0e69a1877336_049_predictions.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/ccn-e2e/6a4ad387-6d87-4329-de6c-0e69a1877336_049_prediction.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/cnn-ccn/6a4ad387-6d87-4329-de6c-0e69a1877336_049_predictions.png} \\ \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/ccn-e2e/ccn-improved/6fc295ae-31be-4ec2-e621-df56b44f6b21_069_gt.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/3d-cnn/ccn-improved/6fc295ae-31be-4ec2-e621-df56b44f6b21_069_overlay.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, 
clip]{./figs/odtac/ccn-e2e/ccn-improved/6fc295ae-31be-4ec2-e621-df56b44f6b21_069_overlay.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/cnn-ccn/ccn-improved/6fc295ae-31be-4ec2-e621-df56b44f6b21_069_overlay.png} \\ \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/ccn-e2e/ccn-improved/bd6a3d25-fffa-4b8e-cf8b-d417a5e5e283_048_gt.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/3d-cnn/ccn-improved/bd6a3d25-fffa-4b8e-cf8b-d417a5e5e283_048_overlay.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/ccn-e2e/ccn-improved/bd6a3d25-fffa-4b8e-cf8b-d417a5e5e283_048_overlay.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/cnn-ccn/ccn-improved/bd6a3d25-fffa-4b8e-cf8b-d417a5e5e283_048_overlay.png} \\ \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/ccn-e2e/3d7c166f-1b44-42c8-ec41-28159cb09027_162_gt.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/3d-cnn/3d7c166f-1b44-42c8-ec41-28159cb09027_162_predictions.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/ccn-e2e/3d7c166f-1b44-42c8-ec41-28159cb09027_162_prediction.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/cnn-ccn/3d7c166f-1b44-42c8-ec41-28159cb09027_162_predictions.png} \\ \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/ccn-e2e/ccn-improved/cfda9b7a-fbd4-4a8b-f115-62ce792270d7_044_gt.png} & 
\adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/3d-cnn/ccn-improved/cfda9b7a-fbd4-4a8b-f115-62ce792270d7_044_overlay.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/ccn-e2e/ccn-improved/cfda9b7a-fbd4-4a8b-f115-62ce792270d7_044_overlay.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/cnn-ccn/ccn-improved/cfda9b7a-fbd4-4a8b-f115-62ce792270d7_044_overlay.png} \\ \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/ccn-e2e/3b91e33b-02c9-4f75-c1c5-4266e9eb13a7_228_gt.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/3d-cnn/3b91e33b-02c9-4f75-c1c5-4266e9eb13a7_228_predictions.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/ccn-e2e/3b91e33b-02c9-4f75-c1c5-4266e9eb13a7_228_prediction.png} & \adjincludegraphics[width=0.25\linewidth, trim={{.2\width} {.2\height} {.2\width} {.2\height}}, clip]{./figs/odtac/cnn-ccn/3b91e33b-02c9-4f75-c1c5-4266e9eb13a7_228_predictions.png} \\ Ground Truth & 3D-FCN & Ours PCCN & Ours 3D-FCN+PCCN \end{tabular} \vspace{-3mm} \caption{Semantic Segmentation Results on the Driving Scene Dataset; colored: correct prediction; white: wrong prediction.} \label{fig-odtac} \end{figure*} \subsection{Discussions} \paragraph{Locality \simon{Enforcing} Continuous Convolution:} Standard grid convolutions are computed over a limited kernel size $M$ to preserve locality.
Similarly, locality can be enforced in our parametric continuous convolutions by constraining the influence of the function $g$ to points close to $\mathbf{x}$, \emph{i.e.}, % \[ g(\mathbf{z}) = MLP(\mathbf{z}) w(\mathbf{z}) \] where $w(\cdot)$ is a modulating window function. This can be achieved in different ways. First, we can constrain the cardinality of the local support domain and only keep non-zero kernel values for the K-nearest neighbors: $w(\mathbf{z}) = \mathbf{1}_{\mathbf{z} \in \mathrm{KNN}(\mathcal{S}, \mathbf{x})}$. Alternatively, we can keep non-zero kernel values for points within a fixed radius $r$: $w(\mathbf{z}) = \mathbf{1}_{||\mathbf{z}||_2 < r}$. % \paragraph{Efficient Continuous Convolution:} For each continuous convolution layer, the kernel function is evaluated $N\times |\mathcal{S}| \times F \times O$ times, where $|\mathcal{S}|$ is the cardinality of the support domain, and the intermediate weight tensor is stored \simon{for backpropagation}. \simon{This is expensive in practice, especially when both the number of points and the feature dimension are large.} \simon{With the locality enforcing formulation}, we can constrain the cardinality of $\mathcal{S}$. Furthermore, motivated by the idea of separable filters, we note that this computation can be factorized if the kernel function values are shared across output dimensions. That is to say, we can decompose the weight tensor $W \in \mathbb{R}^{N\times |\mathcal{S}| \times F \times O}$ into two tensors $W_1 \in \mathbb{R}^{F\times O}$ and $W_2 \in \mathbb{R}^{N\times |\mathcal{S}| \times O}$, where $W_1$ is a linear weight matrix and $W_2$ is evaluated through the MLP. \simon{With this optimization,} only $N\times |\mathcal{S}| \times O$ \simon{kernel} evaluations need to be computed and stored. Lastly, at inference time, merging the batchnorm and fully connected operations in the MLP yields a 3x speedup.
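The two window functions $w(\mathbf{z})$ above can be sketched as follows (illustrative helper names, not from the paper's code):

```python
import numpy as np

# Illustrative sketch of the two modulating windows: a KNN indicator
# and a fixed-radius indicator over the support points.
def knn_window(query, support, k):
    # indicator of the k support points nearest to the query point
    d = np.linalg.norm(support - query, axis=1)
    w = np.zeros(len(support))
    w[np.argsort(d)[:k]] = 1.0
    return w

def radius_window(query, support, r):
    # indicator of support points within a fixed radius r of the query
    return (np.linalg.norm(support - query, axis=1) < r).astype(float)

support = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]])
q = np.array([0.0, 0.0])
```

Either window zeroes out distant kernel evaluations, so they never need to be computed in the first place.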
% \paragraph{Special Cases:} Many previous convolutional layers are special cases of our approach. For instance, if the points are sampled over a finite 2D grid, we recover conventional 2D convolutions. If the support domain is defined as the concatenation of the spatial vector and the feature vector with a Gaussian kernel $g(\cdot)$, we recover the bilateral filter. If the support domain is defined as the neighboring vertices of a node, we recover the first-order spatial graph convolution \cite{gcn}. % \section{Introduction} Discrete convolutions are the most fundamental building block of modern deep learning architectures. Their efficiency and effectiveness \simon{rely on} the fact that the data appears naturally in a \simon{dense} grid structure (e.g., 2D grid for images, 3D grid for videos). However, many real-world applications such as visual perception from 3D point clouds, mesh registration and non-rigid shape correspondence rely on making statistical predictions from non-grid structured data. Unfortunately, standard convolutional operators cannot be directly applied \simon{in these cases}. Multiple approaches have been proposed to handle non-grid structured data. The simplest approach is to voxelize the space to form a grid where standard \simon{discrete} convolutions can be performed \cite{modelnet40, octnet}. However, most of the volume is typically empty, and thus this results in both memory inefficiency and wasted computation. % Geometric deep learning \cite{geometric-deep-learning, gcn} and graph neural network approaches \cite{gnn, ggnn} exploit the graph structure of the data and model the relationship between nodes. Information is then propagated through the graph edges. However, they either have difficulties generalizing well or require strong feature representations as input to perform competitively.
End-to-end learning is typically performed via back-propagation through time, but \simon{it} is difficult to learn very deep networks due to the memory limitations of modern GPUs. % In contrast to the aforementioned approaches, in this paper we propose a new learnable operator, which we call {\it parametric continuous convolution}. % The key idea is a parameterized kernel function that spans the full continuous vector space. In this way, it can handle arbitrary data structures as long as their support relationships are computable. % This is a natural extension, since objects in the real world, such as point clouds captured from 3D sensors, are distributed unevenly in the continuous domain. Based upon this, we build a new family of deep neural networks that can be applied to generic non-grid structured data. % The proposed networks are both expressive and memory efficient. % We demonstrate the effectiveness of our approach in both semantic labeling and motion estimation of point clouds. \simon{Most} importantly, we show that very deep networks can be learned over raw point clouds in an end-to-end manner. Our experiments show that the proposed approach outperforms the state-of-the-art by a large margin in both outdoor and indoor 3D point cloud segmentation tasks, as well as lidar motion estimation in driving scenes. Importantly, our outdoor semantic labeling and lidar flow experiments are conducted on a very large-scale dataset, containing 223 billion points captured by a 3D sensor mounted on the roof of a self-driving car. To our knowledge, this is 2 orders of magnitude larger than any existing benchmark. \section{Related Work} \paragraph{Deep Learning for 3D Geometry:} Deep learning approaches that exploit 3D geometric data have recently become popular in the computer vision community. Early approaches convert the 3D data into a two-dimensional RGB + depth image \cite{fcn, rgbd-detection-seg} and exploit conventional convolutional neural networks (CNNs).
Unfortunately, this \simon{representation} does not capture the true geometric relationships between 3D points (i.e., neighboring pixels could potentially be far away geometrically). % Another popular approach is to conduct 3D convolutions over volumetric representations \cite{modelnet40, subvolume, octnet, sparseconvnet, voxnet}. Voxelization is \simon{employed to} convert point clouds into a 3D grid that encodes the geometric information. These approaches have been popular in medical imaging and indoor scene understanding, where the volume is relatively small. However, typical voxelization approaches sacrifice precision and the 3D volumetric representation is not memory efficient. Sparse convolutions \cite{sparseconvnet} and advanced data structures such as oct-trees \cite{octnet} have been used to overcome these difficulties. Learning directly over point clouds has only been studied very recently. The pioneering work of PointNet \cite{pointnet} learns an MLP over individual points and aggregates global information using pooling. Its follow-up, PointNet++ \cite{pointnet2}, improves the ability to capture local structures through a multi-scale grouping strategy. \paragraph{Graph Neural Networks:} Graph neural networks (GNNs) \cite{gnn} are generalizations of neural networks to graph-structured data. Early approaches apply neural networks either over the hidden representation of each node or over the messages passed between adjacent nodes in the graph, and use back-propagation through time to conduct learning. Gated graph neural networks (GGNNs) \cite{ggnn} exploit gated recurrent units along with modern optimization techniques, resulting in improved performance. In \cite{3dgnn}, GGNNs are applied to point cloud segmentation, \simon{achieving} significant improvements over the state-of-the-art. % One of the major difficulties of graph neural networks is that propagation is conducted in a synchronous manner, and thus it is hard to scale up to graphs with millions of nodes.
Inference in graphical models as well as recurrent neural networks can be seen as special cases of graph neural networks.

\paragraph{Graph Convolution Networks:}
An alternative formulation is to learn convolution operations over graphs.
These methods can be categorized into spectral and spatial approaches, depending on the domain in which the convolutions are applied. For spectral methods, convolutions are converted to multiplications by computing the graph Laplacian in the Fourier domain \cite{spectralnn, anisotropic-cnn, syncspeccnn}. Parameterized spectral filters can be incorporated to reduce overfitting \cite{spectralnn}. These methods are not feasible for large-scale data due to the expensive computation, since there is no FFT-like trick for generic graphs. Spatial approaches directly propagate information along the node neighborhoods in the graph. This can be implemented either through a low-order approximation of spectral filtering \cite{chebnet, gcn, cnn-graph-hash}, or through diffusion in a support domain \cite{monet, anisotropic-cnn, ecc, syncspeccnn, schnet}. Our approach generalizes spatial approaches in two ways: first, we use more expressive convolutional kernel functions; second, the output of the convolution can be any point in the whole continuous domain.

\begin{figure}
\includegraphics[width=1.02\linewidth]{figs/continuous_conv.pdf}
\vspace{-3mm}
\caption{Unlike grid convolution, parametric continuous convolution uses kernel functions that are defined for arbitrary points in the continuous support domain. As a result, it is possible to output features at points not seen in the input.}
\label{fig:idea}
\end{figure}

\paragraph{Other Approaches:}
Edge-conditioned filter networks \cite{ecc} use a weighting network \cite{dfn} to communicate between adjacent nodes on the graph, conditioned on edge labels, which are primarily formulated as relative point locations.
In contrast, our approach is not constrained to a fixed graph structure, and has the flexibility to output features at arbitrary points over the continuous domain. In concurrent work, \cite{schnet} uses a similar parametric function form $f(\mathbf{x}_i - \mathbf{x}_j)$ to aggregate information between points. However, they only use shallow isotropic Gaussian kernels to represent the weights, while we use expressive deep networks to parameterize the continuous filters.

\section{Point Cloud Classification}
To verify the applicability of the proposed parametric continuous convolution to a global prediction task, we conduct a simple point cloud classification experiment on the ModelNet40 benchmark. This dataset contains CAD models from 40 categories. We compare against the state-of-the-art and most representative algorithms evaluated on ModelNet40 \cite{mvcnn}. We randomly sample 2048 points over the 3D meshes for each training and testing sample, and feed the point cloud into our neural network. The architecture contains 6 continuous convolution layers with 32-dimensional hidden features, followed by two layers with 128 and 512 dimensions, respectively. The output of the last continuous convolution layer is fed into a max pooling layer to generate the global 512-dimensional feature, followed by two fc layers that output the final logits. \tabref{tab-modelnet40} reports the classification performance. As we can see in the table, the performance is comparable with PointNet and slightly below PointNet++. Here we use a naive global max pooling to aggregate global information. We expect to achieve better results with more comprehensive and hierarchical pooling strategies.
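For concreteness, the forward pass of a single parametric continuous convolution layer can be sketched in a few lines of NumPy. This is an illustrative sketch only: the toy two-layer MLP kernel, the brute-force k-NN support computation, and all parameter names (`w1`, `b1`, `w2`, `b2`) are stand-ins for the learned kernel network and the precomputed support domain, not our actual implementation.

```python
import numpy as np

def mlp_kernel(offsets, w1, b1, w2, b2):
    """Toy 2-layer MLP kernel g(x_i - x_j): maps 3D offsets to scalar weights."""
    h = np.maximum(offsets @ w1 + b1, 0.0)  # ReLU hidden layer
    return h @ w2 + b2                      # (k, 1) kernel weights

def continuous_conv(points, feats, queries, k, params):
    """For each query point, aggregate the features of its k nearest input
    points, weighted by the kernel evaluated at the continuous offsets.
    Note the queries need not coincide with the input points."""
    w1, b1, w2, b2 = params
    out = np.zeros((len(queries), feats.shape[1]))
    for i, q in enumerate(queries):
        d = np.linalg.norm(points - q, axis=1)
        nn = np.argsort(d)[:k]                          # k-NN support domain
        w = mlp_kernel(points[nn] - q, w1, b1, w2, b2)  # (k, 1)
        out[i] = (w * feats[nn]).sum(axis=0)            # weighted sum -> (F,)
    return out
```

Because the kernel is a function of the continuous offset rather than a table indexed by a grid position, the same layer handles unevenly distributed points and can emit features at output locations that never appeared in the input.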
\begin{table}[]
\centering
\caption{ModelNet40 Point Cloud Classification}
\label{tab-modelnet40}
\begin{tabular}{c|cc}
Method & Input & Accuracy \\ \hline
MVCNN \cite{mvcnn} & Multi-view Image & 90.1\% \\
3DShapeNet \cite{modelnet40} & Volume & 84.7\% \\
VoxNet \cite{voxnet} & Volume & 85.9\% \\
Subvolume \cite{subvolume} & Volume & 89.2\% \\
ECC \cite{ecc} & Point & 87.4\% \\
PointNet vanilla \cite{pointnet} & Point & 87.2\% \\
PointNet \cite{pointnet} & Point & 89.2\% \\
PointNet++ \cite{pointnet2} & Point & \textbf{91.9\%} \\
Ours & Point & 88.9\%
\end{tabular}
\end{table}

\section{Generalization}
We show the generalization ability of our proposed model by training on one dataset and testing on another in our supplementary video. Specifically, we use the following configurations:
\begin{itemize}
\item Train our proposed semantic labeling network on the driving scene data (several North American cities), and test it on KITTI (Europe).
\item Train our proposed semantic labeling network on the driving scene data (non-highway roads), and test it on a lidar sequence captured by a sensor mounted on top of a truck driving on the highway (highway roads).
\item Train our proposed lidar flow network on the driving scene data (several North American cities), and test it on KITTI (Europe).
\end{itemize} \begin{figure*} \footnotesize \setlength\tabcolsep{0.5pt} % \renewcommand{\arraystretch}{0.8} \begin{tabular}{ccc} \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/kitti/000148_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/kitti/000214_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/kitti/000398_predictions.png} \\ \end{tabular} \vspace{-3mm} \caption{Lidar Flow Results on KITTI Dataset} \label{fig:flow-kitti} \end{figure*} \begin{figure*} \footnotesize \centering \setlength\tabcolsep{0.5pt} % \renewcommand{\arraystretch}{0.8} \begin{tabular}{ccc} \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/seg/kitti/pred_000386.png} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/seg/kitti/pred_000094.png} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/seg/kitti/pred_000004.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/seg/kitti/pred_000330.png} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/seg/kitti/pred_000042.png} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/seg/kitti/pred_000443.png} \\ \end{tabular} \vspace{-3mm} \caption{Semantic Segmentation on KITTI Dataset} \label{fig:seg-kitti} \end{figure*} Under all the settings, our algorithm is able to generalize well. \figref{fig:flow-kitti} shows our lidar flow model's performance on KITTI. 
As shown in \figref{fig:flow-kitti}, our lidar flow model generalizes well to the unseen KITTI dataset. From left to right, the figures show the most common scenarios: a moving, a turning, and a stationary ego-car. The model produces convincing flow predictions in all three cases. \figref{fig:seg-kitti} includes some additional segmentation results on KITTI. For more results over several sequences, please refer to our supplementary video.

\section{Lidar Flow Data and Analysis}

\paragraph{Ground-truth Generation}
In this paragraph, we describe in detail how we generate the ground-truth lidar flow data. For each pair of consecutive frames, we first obtain the global vehicle pose transform from frame 0 to frame 1, $\mathbf{R}_{\mathrm{ego}}, \mathbf{t}_\mathrm{ego}$, with the help of additional sensors and prior information. This global vehicle pose transform represents how far the vehicle moves and how it turns. The localization accuracy is at the centimeter scale. Therefore, the motion of each static point is:
\[
\mathbf{f}_{\mathrm{static-gt}}^{(0)} = \mathbf{R}_\mathrm{ego}^T (\mathbf{x}^{(0)}_{static} - \mathbf{t}_\mathrm{ego}) - \mathbf{x}^{(0)}_{static}
\]
where $\mathbf{f}_{\mathrm{gt}}^{(k)}$ is the ground-truth flow at frame $k$ in the ego-car centered coordinate frame, and $\mathbf{x}^{(k)}$ is the point's location at frame $k$ in the ego-car centered coordinate frame. For dynamic objects in the scene, e.g., other vehicles and pedestrians, the motion between lidar frames in the vehicle coordinate frame is not only due to the self-driving car's ego-motion; the movement of the dynamic objects themselves also contributes to it. In this project, we assume rigid motion for all objects. The labeling of dynamic objects includes two steps.
Firstly, using the global pose, we visualize the point clouds of the 3D objects from the two frames in the same reference coordinate frame and label the pose change $\mathbf{R}_\mathrm{obj},\mathbf{t}_\mathrm{obj}$ between the objects at the two timestamps. Secondly, both ego-motion and object motion are taken into account to generate the ground-truth flow vector:
\[
\mathbf{f}_{\mathrm{dynamic-gt}}^{(0)} = \mathbf{R}_\mathrm{obj}^T (\mathbf{R}_\mathrm{ego}^T (\mathbf{x}^{(0)}_{dynamic} - \mathbf{t}_\mathrm{ego}) - \mathbf{t}_\mathrm{obj}) - \mathbf{x}^{(0)}_{dynamic}
\]
Please refer to Fig.~\ref{flow:data} for an illustration.

\begin{figure*}
\centering
\includegraphics[width=.9\linewidth]{./figs/flow_data.pdf}
\caption{Flow data generation. The source of motion comes from two components: motion of the ego-car and motion of the dynamic objects. }
\label{flow:data}
\end{figure*}

\paragraph{Ground-truth Motion Analysis}
We also conduct an analysis of the ground-truth motion distribution. In Fig.~\ref{flow:gtmotion} we show the 2D histogram of the ground-truth 3D translation component along the $x$ and $y$ axes. We also show the motion distribution across different object types, e.g., static background, vehicles, and pedestrians. As we can see, different semantic types have different motion patterns. The density is heaviest along the $y$-axis, which suggests that forward motion is the dominant motion pattern of the ego-car.

\begin{figure*}
\centering
\includegraphics[width=.9\linewidth]{figs/flow/flow_validation_gt_5m_logscale_bvp_viz.png}
\caption{Ground-truth Motion Distribution of the Lidar Flow Dataset (unit in meters)}
\label{flow:gtmotion}
\end{figure*}

\paragraph{Ground-truth Validation and Visualization}
We validate the quality of our ground-truth motion labels by overlaying the target frame points ($\mathbf{x}^{(1)}$) and the source frame points warped with the ground-truth motion ($\mathbf{x}^{(0)} + \mathbf{f}_{\mathrm{gt}}^{(0)}$).
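The two warping equations above translate directly into code. A minimal NumPy sketch follows; points are stored as row vectors, so the column-vector expression $\mathbf{R}^T\mathbf{x}$ becomes `x @ R`, and the function names are purely illustrative.

```python
import numpy as np

def static_flow(x, R_ego, t_ego):
    """f = R_ego^T (x - t_ego) - x for static background points, x: (N, 3).
    With row vectors, (x - t) @ R applies R^T to each point."""
    return (x - t_ego) @ R_ego - x

def dynamic_flow(x, R_ego, t_ego, R_obj, t_obj):
    """f = R_obj^T (R_ego^T (x - t_ego) - t_obj) - x for rigidly moving
    objects: first compensate ego-motion, then apply the labeled object pose."""
    warped = (x - t_ego) @ R_ego        # ego-motion compensation
    return (warped - t_obj) @ R_obj - x
```

Note that when the object pose change is the identity ($\mathbf{R}_\mathrm{obj} = \mathbf{I}$, $\mathbf{t}_\mathrm{obj} = \mathbf{0}$), the dynamic case reduces to the static one, as expected.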
\figref{fig:flow-gt-overlay} shows overlays of entire scenes and \figref{fig:flow-gt-overlay-dynamic} shows overlays of individual dynamic objects. For vehicles, points align perfectly across frames. For pedestrians, the correspondence is also near perfect: the only discrepancy is caused by non-rigid motion (e.g. changing posture). \begin{figure*} \footnotesize \setlength\tabcolsep{0.5pt} % \renewcommand{\arraystretch}{0.8} \begin{tabular}{ccc} \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/overall/2660d5a1-d918-46f4-cff3-6826bc5e5580_030.png} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/overall/2660d5a1-d918-46f4-cff3-6826bc5e5580_180.png} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/overall/2660d5a1-d918-46f4-cff3-6826bc5e5580_240.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.3\width} {.3\height} {.3\width} {.3\height}}, clip]{./figs/flow/gt_overlay/overall/32876bf4-310e-4560-dca0-47eab83a7395_020.png} & \adjincludegraphics[width=.33\linewidth, trim={{.3\width} {.3\height} {.3\width} {.3\height}}, clip]{./figs/flow/gt_overlay/overall/32876bf4-310e-4560-dca0-47eab83a7395_190.png} & \adjincludegraphics[width=.33\linewidth, trim={{.3\width} {.3\height} {.3\width} {.3\height}}, clip]{./figs/flow/gt_overlay/overall/32876bf4-310e-4560-dca0-47eab83a7395_240.png} \\ \end{tabular} \vspace{-3mm} \caption{Flow Ground Truth Overlay of Entire Scene. 
Yellow: target frame, purple: warped source frame.} \label{fig:flow-gt-overlay} \end{figure*} \begin{figure*} \footnotesize \setlength\tabcolsep{0.5pt} % \renewcommand{\arraystretch}{0.8} \begin{tabular}{ccccc} \adjincludegraphics[width=.20\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/pedestrian/snapshot00.png} & \adjincludegraphics[width=.20\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/pedestrian/snapshot01.png} & \adjincludegraphics[width=.20\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/pedestrian/snapshot02.png} & \adjincludegraphics[width=.20\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/pedestrian/snapshot03.png} & \adjincludegraphics[width=.20\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/pedestrian/snapshot04.png} \\ \adjincludegraphics[width=.20\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/vehicle/snapshot00.png} & \adjincludegraphics[width=.20\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/vehicle/snapshot01.png} & \adjincludegraphics[width=.20\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/vehicle/snapshot02.png} & \adjincludegraphics[width=.20\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/vehicle/snapshot03.png} & \adjincludegraphics[width=.20\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/flow/gt_overlay/vehicle/snapshot04.png} \\ \end{tabular} \vspace{-3mm} \caption{Flow Ground Truth Overlay of Individual Dynamic Objects. 
Green: target frame, red: warped source frame.}
\label{fig:flow-gt-overlay-dynamic}
\end{figure*}

\section{More Results}
In this section, we show additional qualitative results of the proposed algorithm on all the tasks.

\subsection{Semantic Segmentation for Indoor Scenes}
Fig.~\ref{fig:indoor} and Fig.~\ref{fig:indoor2} show more qualitative results on the Stanford dataset. As shown in the figures, in most cases our model is able to predict the semantic labels correctly.

\begin{figure*}
\footnotesize
\setlength\tabcolsep{0.5pt}
% \renewcommand{\arraystretch}{0.8}
\begin{tabular}{ccc}
\adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new00.jpg} &
\adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new01.jpg} &
\adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new02.jpg} \\
\adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new03.jpg} &
\adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new04.jpg} &
\adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new05.jpg} \\
\adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new06.jpg} &
\adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new07.jpg} &
\adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new08.jpg} \\
\adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new09.jpg} &
\adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height}
{.01\width} {.01\height}}, clip]{./figs/indoor/new10.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new11.jpg} \\ \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new12.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new13.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new14.jpg} \\ \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new15.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new16.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new17.jpg} \\ Input & GT & Ours \end{tabular} \vspace{-3mm} \caption{Semantic Segmentation Results on Stanford Indoor Dataset} \label{fig:indoor} \end{figure*} \begin{figure*} \footnotesize \setlength\tabcolsep{0.5pt} % \renewcommand{\arraystretch}{0.8} \begin{tabular}{ccc} \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new18.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new19.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new20.jpg} \\ \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new21.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new22.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} 
{.01\height}}, clip]{./figs/indoor/new23.jpg} \\ \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new24.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new25.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new26.jpg} \\ \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new27.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new28.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new29.jpg} \\ \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new30.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new31.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new32.jpg} \\ \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new33.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new34.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/indoor/new35.jpg} \\ Input & GT & Ours \end{tabular} \vspace{-3mm} \caption{Semantic Segmentation Results on Stanford Indoor Dataset} \label{fig:indoor2} \end{figure*} \subsection{Semantic Segmentation for Driving Scenes} \figref{fig:odtac} shows additional results for semantic labeling in driving scenes. 
As shown, the results capture very small dynamic objects, e.g., pedestrians and bicyclists. This suggests our model's potential for object detection and tracking. The model is also able to distinguish between road and non-road through lidar intensity and subtle geometric structure such as road curbs. This validates our model's potential for map automation. More specifically, we see that most errors occur on road boundaries (bright curves in the error map).

\begin{figure*}
\footnotesize
\setlength\tabcolsep{0.5pt}
% \renewcommand{\arraystretch}{0.8}
\begin{tabular}{ccc}
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/04d3d0ef-beb1-4706-c02b-d33c767abaae/022_labels.png} &
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/04d3d0ef-beb1-4706-c02b-d33c767abaae/022_predictions.png} &
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/04d3d0ef-beb1-4706-c02b-d33c767abaae/022_correctness.png} \\
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/04d3d0ef-beb1-4706-c02b-d33c767abaae/180_labels.png} &
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/04d3d0ef-beb1-4706-c02b-d33c767abaae/180_predictions.png} &
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/04d3d0ef-beb1-4706-c02b-d33c767abaae/180_correctness.png} \\
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/019_labels.png} &
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/019_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/019_correctness.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/066_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/066_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/066_correctness.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/105_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/105_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/105_correctness.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/123_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/123_predictions.png} & 
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/123_correctness.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/245_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/245_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/245_correctness.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/163_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/163_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/163_correctness.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/0033fff8-f0f7-4890-c28b-7fa8494351de/021_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/0033fff8-f0f7-4890-c28b-7fa8494351de/021_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/0033fff8-f0f7-4890-c28b-7fa8494351de/021_correctness.png} \\ 
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/0033fff8-f0f7-4890-c28b-7fa8494351de/213_labels.png} &
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/0033fff8-f0f7-4890-c28b-7fa8494351de/213_predictions.png} &
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/0033fff8-f0f7-4890-c28b-7fa8494351de/213_correctness.png} \\
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/04d3d0ef-beb1-4706-c02b-d33c767abaae/077_labels.png} &
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/04d3d0ef-beb1-4706-c02b-d33c767abaae/077_predictions.png} &
\adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/seg/voxel_ecc_res_3d_best/04d3d0ef-beb1-4706-c02b-d33c767abaae/077_correctness.png} \\
Ground Truth & Ours & Error Map \\
\end{tabular}
\vspace{-3mm}
\caption{Semantic Segmentation Results on Driving Scenes Dataset}
\label{fig:odtac}
\end{figure*}

\subsection{Lidar Flow}
We show additional results on lidar flow estimation in Fig.~\ref{fig:flow-prediction}. Unlike the visualization in the main submission, we visualize the colored vectors in order to better depict the magnitudes of the motion vectors. As shown in the figure, our model is able to capture the majority of the flow field. Most of the errors occur at object boundaries. This suggests that a better support domain that includes both spatial and intensity features could potentially be used to boost performance.
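The error maps above can also be summarized quantitatively, for instance with the average 3D end-point error between predicted and ground-truth flow. A minimal sketch follows; splitting the error by a boundary mask (an assumed input, e.g. derived from object labels) is one way to confirm that errors concentrate at object boundaries.

```python
import numpy as np

def end_point_error(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth 3D flow (N, 3)."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())

def epe_by_mask(pred, gt, boundary_mask):
    """Split the error into boundary vs. interior points to localize failures."""
    return (end_point_error(pred[boundary_mask], gt[boundary_mask]),
            end_point_error(pred[~boundary_mask], gt[~boundary_mask]))
```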
\begin{figure*} \footnotesize \setlength\tabcolsep{0.5pt} % \renewcommand{\arraystretch}{0.8} \begin{tabular}{ccc} \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/04d3d0ef-beb1-4706-c02b-d33c767abaae/022_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/04d3d0ef-beb1-4706-c02b-d33c767abaae/022_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/04d3d0ef-beb1-4706-c02b-d33c767abaae/022_diff.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/04d3d0ef-beb1-4706-c02b-d33c767abaae/180_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/04d3d0ef-beb1-4706-c02b-d33c767abaae/180_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/04d3d0ef-beb1-4706-c02b-d33c767abaae/180_diff.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/019_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/019_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, 
clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/019_diff.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/066_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/066_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/066_diff.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/105_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/105_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/105_diff.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/123_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/123_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, 
clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/123_diff.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/245_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/245_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/245_diff.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/163_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/163_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/06a7dc1c-290d-4dd0-fa0b-bcbf703b3b96/163_diff.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/37929c47-1ccf-4bfc-ddfc-002cd4bc488f/005_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/37929c47-1ccf-4bfc-ddfc-002cd4bc488f/005_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, 
clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/37929c47-1ccf-4bfc-ddfc-002cd4bc488f/005_diff.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/37929c47-1ccf-4bfc-ddfc-002cd4bc488f/072_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/37929c47-1ccf-4bfc-ddfc-002cd4bc488f/072_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/37929c47-1ccf-4bfc-ddfc-002cd4bc488f/072_diff.png} \\ \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/04d3d0ef-beb1-4706-c02b-d33c767abaae/077_labels.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/04d3d0ef-beb1-4706-c02b-d33c767abaae/077_predictions.png} & \adjincludegraphics[width=.33\linewidth, trim={{.15\width} {.25\height} {.15\width} {.25\height}}, clip]{./figs/flow/motion_voxel_graph_polish_multi_tier_no_time/04d3d0ef-beb1-4706-c02b-d33c767abaae/077_diff.png} \\ Ground Truth & Ours & Error Map \\ \end{tabular} \vspace{-3mm} \caption{Lidar Flow Results on Driving Scenes Dataset} \label{fig:flow-prediction} \end{figure*} \section{Activations} We visualize the activation maps of the trained PCCN network over a single lidar frame from the driving scene dataset for segmentation. Fig.~\ref{fig:layer1} depicts the activation map at layer 1 of PCCN. 
As we can see, at the early conv layer the method mainly captures low-level geometry details, \emph{e.g.} the z coordinate, the intensity peak, \emph{etc}. Fig.~\ref{fig:layer8} shows the activation map at layer 8 of PCCN. The conv layers begin to capture information with more semantic meaning, e.g. the road curb and the dynamic objects. \begin{figure*} \footnotesize \setlength\tabcolsep{0.5pt} %
\renewcommand{\arraystretch}{0.8} \begin{tabular}{ccc} \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/activation/layer_000.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/activation/layer_001.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/activation/layer_002.jpg} \\ \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/activation/layer_003.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/activation/layer_004.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/activation/layer_006.jpg} \\ \end{tabular} \vspace{-3mm} \caption{Activation Map of PCCN at Layer 1} \label{fig:layer1} \end{figure*} \begin{figure*} \footnotesize \setlength\tabcolsep{0.5pt} %
\renewcommand{\arraystretch}{0.8} \begin{tabular}{ccc} \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/activation/layer_807.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/activation/layer_808.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, 
clip]{./figs/activation/layer_809.jpg} \\ \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/activation/layer_810.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/activation/layer_811.jpg} & \adjincludegraphics[width=.33\linewidth, trim={{.01\width} {.01\height} {.01\width} {.01\height}}, clip]{./figs/activation/layer_814.jpg} \\ \end{tabular} \vspace{-3mm} \caption{Activation Map of PCCN at Layer 8} \label{fig:layer8} \end{figure*}
\section{Introduction} One of the most critical problems in wireless communication is how to transmit information reliably from one end (the transmitter) to the other (the receiver). During transmission, the signal suffers from distortion and noise caused by complicated channel conditions and hardware imperfections. In the study of the physical layer of the OSI model (PHY), the whole system is optimized in a divide-and-conquer manner \cite{tse2005fundamentals}. The PHY transmitter usually includes source coding, channel coding, and modulation modules, while the PHY receiver includes synchronization, channel estimation, equalization, demodulation, channel decoding, source decoding, and so on. Each part is optimized separately and requires a great amount of expert knowledge, and a large body of research has focused on optimizing each module for different channel environments and application demands. However, according to the data processing theorem \cite{ziv1973functionals} in information theory, optimizing the sub-modules individually cannot guarantee global optimality for the whole communication system; in fact, such an implementation is known to be sub-optimal \cite{zehavi19928}. In the past decade, deep learning has seen wide and successful application in computer vision and natural language processing thanks to its strong generalization ability. To further improve transmission and network performance, the fifth generation communication system (5G) will apply many new techniques, such as massive MIMO \cite{driessen1999capacity}, mmWave \cite{rappaport2013millimeter}, and ultra-dense wireless networks \cite{ge20165g}. These techniques also raise a great number of challenges and opportunities. In the communication area, deep learning has been applied to communication networks \cite{wang2017deep} and cognitive radio \cite{bkassiny2013survey}. 
However, since the physical layer is complex and requires real-time, high-accuracy transmission, the development of related technology had been limited until recent years \cite{ibnkahla2000applications}. With the development of neural network compression techniques and specialized hardware such as GPUs and FPGAs, the cost and time complexity of deep learning techniques have decreased significantly, making it possible to run neural networks on mobile devices and antennas. There has been extensive research on deep learning based techniques for individual modules such as channel decoding \cite{nachmani2016learning,cammerer2017scaling,gruber2017deep,liang2018iterative,kim2018communication} and signal detection \cite{samuel2017deep,jeon2017blind,hong2017mimo,farsad2017detection}. In deep learning based communication systems, it is possible to optimize the transmitter and receiver jointly with an autoencoder structure \cite{salakhutdinov2000nonlinear} instead of artificially introduced block schemes \cite{o2016learning, o2017introduction,o2017deep}. In previous efforts, the autoencoder automatically learns the mapping from a one-hot encoded signal to constellation symbols, and the corresponding demapping, under an AWGN channel. The network is shown to outperform traditional schemes such as uncoded BPSK and Hamming coded BPSK in terms of block error rate \cite{o2017introduction}. However, the applicability of this work is limited, since the input can only be a short one-hot vector, which carries much less information than a random bit sequence of the same length. Moreover, uncoded BPSK and Hamming coded BPSK are not specially designed for one-hot vectors, so the trained network has a natural advantage over these benchmarks. Furthermore, the size of a dense layer based autoencoder grows with the length of the input sequence, leading to quadratic growth in time complexity. 
These issues have not been well addressed by current research. In another work on autoencoder communication systems \cite{dorner2018deep}, the authors addressed receiver synchronization by introducing a frame synchronization scheme based on another neural network. They further proposed a transfer learning based two-phase procedure to overcome the problem of missing channel gradients during training: the transmitter and receiver are first trained under a stochastic channel model, and the receiver is then fine-tuned on the real channel. These works enable practical over-the-air transmission through pure neural networks. In this paper, we focus on further exploring the application of autoencoders in the physical layer. Our main contributions are as follows. \begin{itemize} \item We propose a novel structure based on a convolutional autoencoder that jointly optimizes the transmitter and receiver design from an overall system performance point of view. It has the following virtues: 1) the convolutional structure enables the trained network to process input bit sequences of any length; 2) the soft output of this structure can be fed into any soft decoder, so it can easily be combined with any soft-input soft-output channel decoding scheme; and 3) the proposed structure can be flexibly applied in either the time or the frequency domain. \item We confirm that the proposed structure can learn mapping schemes of different orders under various channels, including AWGN channels, fading channels, and non-Gaussian noise channels. \item Through extensive BER evaluation and time complexity analysis, we demonstrate the robustness of the system. The proposed structure is less sensitive to channel variation than traditional designs, showing great potential as a design principle and methodology for future telecommunication systems. 
\end{itemize} {\it Notation.} Bold capital letters refer to matrices and bold lowercase letters represent vectors. The subscript on a lowercase letter, as in $y_i$, denotes the $i$-th element of the vector $\mathbf y$. For two real numbers $a < b$, $[a,b]$ refers to the set of all real numbers $x$ satisfying $a \le x \le b$, while $[a,b]^n$ refers to the set of all $n$-dimensional vectors with each element in $[a,b]$. $\mathbb{R}^n$ denotes the space of all $n$-dimensional real vectors and $\mathbb{C}^n$ the space of all $n$-dimensional complex vectors. The functions $real(\cdot): \mathbb{C}^n \rightarrow \mathbb{R}^{n}$ and $imag(\cdot): \mathbb{C}^n \rightarrow \mathbb{R}^{n}$ map a complex vector to a real vector by taking the real or imaginary part of each element. \section{Deep Learning Basics} A feedforward neural network \cite{goodfellow2016deep}, or multilayer perceptron, describes a mapping $f({\mathbf x}_i;{\mathbf w}_i): \mathbb{R}^{n_i} \rightarrow \mathbb{R}^{n_{i+1}}$ in the $i$-th layer. The mapping is generally a linear transformation determined by the parameters ${\mathbf w}_i$, followed by an activation function that introduces nonlinearity. The output of the $i$-th layer is fed into the next layer as input. A feedforward neural network with $L$ layers can be represented as in (\ref{equ:mapping}). \begin{equation} \label{equ:mapping} {\mathbf x}_{i+1} = f({\mathbf x}_i;{\mathbf w}_i), \quad i=0,1,\dots,L-1 \end{equation} Given a sufficient amount of training data, i.e., multiple pairs of input vector ${\mathbf x}_0$ and output vector ${\mathbf x}_L$, the mapping from ${\mathbf x}_0$ to ${\mathbf x}_L$ is approximated by the cascaded function in equation (\ref{equ:mapping}). Denoting $f_i(\cdot) = f(\cdot;{\mathbf w}_i)$, we have ${\mathbf x}_L = f_{L-1}(f_{L-2}(\dots f_0({\mathbf x}_0)))$. The training of a neural network is based on mini-batch stochastic gradient descent and back propagation. 
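The cascaded layer mapping described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's architecture: the layer sizes, random weights, and the choice of ReLU activation are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the cascaded mapping x_{i+1} = f(x_i; w_i): an
# affine transform followed by an activation at each layer. All shapes
# and weights here are illustrative, not taken from the paper.
rng = np.random.default_rng(0)

def layer(x, W, b):
    """One feedforward layer: linear transform plus ReLU nonlinearity."""
    return np.maximum(0.0, W @ x + b)

# Three layers mapping R^4 -> R^8 -> R^8 -> R^2.
sizes = [4, 8, 8, 2]
weights = [(rng.standard_normal((m, n)), np.zeros(m))
           for n, m in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(4)   # input vector x_0
for W, b in weights:         # x_L = f_{L-1}( ... f_0(x_0) ... )
    x = layer(x, W, b)
print(x.shape)  # (2,)
```

In a framework such as Keras, each `layer` call corresponds to one trainable layer object, and the weights are updated by back propagation rather than fixed at random.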
The gradient is calculated and propagated backward from the last layer to the first, and the parameters ${\mathbf w}_i$ are updated accordingly. There are multiple kinds of layers $f(\cdot;{\mathbf w}_i)$; common choices are the fully connected layer, the convolutional layer \cite{lecun1989generalization}, and the recurrent layer \cite{hochreiter1997long}. A fully connected layer, also called a dense layer, connects each element of the input to every element of the output vector. Dense layers are common and effective in most application areas, but are relatively expensive computationally. A convolutional layer consists of a set of learnable filters called kernels and performs a simple convolution along the input data. In our work, a 1D convolutional layer is frequently used to convolve the sequence with learnable filters; an illustration of how a convolutional layer works is shown in Fig.~\ref{ill-cnn}. We also use a layer wrapper called the time-distributed wrapper, which applies a layer to every temporal slice of its input instead of to all elements of the input. On a 2D array, a time-distributed dense layer behaves like a 1D convolutional layer with kernel size 1. \begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{conv1d2.png} \caption{Illustration of a 1D convolutional neural network} \label{ill-cnn} \end{figure} \section{System Model} We consider a typical communication system with the block diagram in Fig.~\ref{blockdiagram}. The input data, modeled as a sequence of i.i.d. bits ${\mathbf s}$, is coded and modulated by the transmitter. The transmitted data $x_i$ passes through a linear time-invariant channel with impulse response coefficients $h_n$. The distortion and noise introduced by the channel can be modeled as in equation (\ref{equ:channel}). 
\begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{blockdiagram.png} \caption{The block diagram for a typical communication system} \label{blockdiagram} \end{figure} \begin{equation} \label{equ:channel} y_i = \sum_{n=0}^{L_h-1}h_nx_{i-n} + n_i \end{equation} The received data ${\mathbf y}$ is equalized, demodulated, and decoded by the receiver to recover the original sequence $\hat{{\mathbf s}}$. There is a restriction on the power of the transmitted symbols in a real transmitter; usually the average power of the signal is constrained to 1, i.e., $\frac{1}{n}\|\mathbf{x}\|_2^2 = 1$. The procedure of the communication system can be viewed as the cascade of three functions, $\hat{{\mathbf s}} = g_3(g_2(g_1({\mathbf s})))$. Here $g_1: [0, 1]^n \rightarrow \mathbb{C}^m$ is the function implemented by the transmitter, $g_2: \mathbb{C}^m \rightarrow \mathbb{C}^m$ is the channel function defined in equation (\ref{equ:channel}), and $g_3: \mathbb{C}^m \rightarrow [0, 1]^n$ is the receiver function that recovers the original information. The similarity between this representation of the communication system and a neural network leads us to optimize the communication system using neural networks. In the rest of the paper, we study the problem of jointly optimizing the functions $g_1$ and $g_3$ with $g_2$ fixed. \section{Autoencoder for Time Domain Transmission} \subsection{Network Structure} The properties of neural networks enable us to train the model using only input bit sequences under different channel states. We jointly optimize the transmitter and receiver with an autoencoder structure. A convolutional neural network is used, considering the sequential property of the input. The network structure is shown in Fig.~\ref{dnnfortime}. \begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{normalchan.png} \caption{DNN framework for time domain transmission in the physical layer. The input and output are marked with white boxes. 
The trainable layers are in orange and untrainable layers are in gray.} \label{dnnfortime} \end{figure} Assume that the system transfers $k \times M$ bits of information $s \in \{0,1\}^{k \times M}$, where $k$ is the number of bits each symbol carries and $M$ is the number of symbols. The transmitter maps the input bit stream to a sequential complex vector $x \in \mathbb{C}^M$ and transmits $x$ through the channel in the time domain. The received signal $y \in \mathbb{C}^N$, with distortion and noise, is then equalized and demapped in the receiver to recover the original bit stream $s$. We compress the length of the input from $k \times M$ to $M$ in the first convolutional layer with stride $k$, and we combine time-distributed dense layers with convolutional layers to introduce further correlation and nonlinearity in the transmitter. The transmitted symbol $\mathbf{X} = [real(\mathbf{x});imag(\mathbf{x})]$ is of size $M \times 2$, representing a complex vector. In the channel layer, the input symbols are first normalized to satisfy the power constraint. For the AWGN channel, only additive white Gaussian noise is added to the normalized symbols. For fading channels, the normalized symbols are first convolved with the impulse response in the time domain. The convolution in complex numbers is implemented as in equation (\ref{equ:imagreal}); in neural networks it can be represented by a 1D convolutional layer that convolves $\mathbf{X}$ with a 3D tensor. \begin{equation} \label{equ:imagreal} \begin{aligned} real(y_i) = &\sum_{n=0}^{L_h-1}(real(h_n)real(x_{i-n}) \\ & - imag(h_n)imag(x_{i-n})) + real(n_i) \\ imag(y_i) = &\sum_{n=0}^{L_h-1}(real(h_n)imag(x_{i-n}) \\ & + imag(h_n)real(x_{i-n})) + imag(n_i) \\ \end{aligned} \end{equation} Generally, a neural network suffers from a restriction on the input shape: the length of the input at test time must match that used during training. 
However, due to the locally connected property of the convolutional and time-distributed layers, the proposed network structure can accept input sequences of any length without retraining the whole model. Thus the system can process long sequences while trained on short ones. We conduct extensive experiments to analyze the performance of our model, training it separately on AWGN and fading channels. In the rest of this section, we give a thorough analysis of the learned system. \subsection{Setting} In our experiments, we set $k=6$ and compare the learned system with 64QAM for the AWGN channel, and with 64QAM plus minimum mean square error (MMSE) estimation \cite{johnson2004minimum} for fading channels. We also test the proposed model with $k=8$ against 256QAM+MMSE in a fading channel to demonstrate the extensibility of our model. The modulation schemes are selected for throughput fairness. For training and testing, we randomly generate i.i.d. bit sequences; the generated dataset is split arbitrarily into training, validation, and test sets. The property of the convolutional neural network enables the network to process input sequences of any length without changing network parameters, and a change in the length of the input sequence does not affect performance, so we fix $M$ to $400$. We train the autoencoder with $30,000$ training samples and test it on $10,000$ test samples. The learning rate is set to $0.001$ and the batch size is $32$. We train the system at a specific signal to noise ratio (SNR) but test it over a wide range of SNRs, as well as for robustness and adaptivity to deviations from the AWGN and fading settings. We would like to emphasize that although the state space of input bit sequences is as large as $2^{k \times M}$, the model generalizes well with only $30,000$ training samples. 
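The untrainable channel layer described above (power normalization, complex convolution with the impulse response, additive white Gaussian noise) can be sketched in NumPy; the complex `np.convolve` performs the same real/imaginary bookkeeping as computing the real and imaginary parts separately. The fading taps and SNR below are illustrative assumptions, not the channels used in the experiments.

```python
import numpy as np

# Sketch of the channel layer: normalize to unit average power,
# convolve with the impulse response h, then add complex AWGN.
rng = np.random.default_rng(1)

def channel(x, h, snr_db):
    # Power constraint: (1/n) * sum |x_i|^2 = 1.
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))
    y = np.convolve(x, h)[: len(x)]      # y_i = sum_n h_n * x_{i-n}
    sigma2 = 10 ** (-snr_db / 10)        # noise variance from the SNR
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(len(y))
                                   + 1j * rng.standard_normal(len(y)))
    return y + noise

h = np.array([0.9, 0.4 + 0.2j, 0.1j])    # assumed fading taps
x = rng.standard_normal(400) + 1j * rng.standard_normal(400)
y = channel(x, h, snr_db=10.0)
print(y.shape)  # (400,)
```

In the trained network this operation is realized as a fixed (non-trainable) 1D convolutional layer so that gradients can flow through it to the transmitter.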
In the time domain transmission system, we test three different channels: one AWGN channel and two fading channels. The amplitude and delay of the two fading channels are plotted in Fig.~\ref{fig:stem}. The integer on the x-axis is the delay of each path in units of symbols; note that the phase information is omitted in the figure. \begin{figure}[!htbp] \begin{minipage}[t]{0.51\linewidth} \centering \includegraphics[width=\linewidth]{chana.png} \end{minipage}% \begin{minipage}[t]{0.51\linewidth} \centering \includegraphics[width=\linewidth]{chanb.png} \end{minipage} \caption{The amplitude of the two fading channels tested. The left is referred to as channel A and the right as channel B.} \label{fig:stem} \end{figure} \subsection{AWGN Channel} The BER performance for the AWGN channel is shown in Fig.~\ref{fig:64awgn}. The CNN based autoencoder brings up to 1 dB gain when the bit error rate (BER) ranges from $10^{-1}$ to $10^{-3}$. \begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{meremod64.png} \caption{The comparison between 64QAM and the trained system under the AWGN channel. } \label{fig:64awgn} \end{figure} The constellation diagram is shown in Fig.~\ref{fig:consdia}, where we plot $40,000$ modulated symbols in the complex plane. The system learns an amplitude and phase-shift keying (APSK) like constellation. However, the symbols do not concentrate on a finite set of constellation points, but are distributed as clusters instead. The clustered structure of the constellation symbols may place higher requirements on the antennas, but may bring extra gain in the low SNR regime. The learned constellation is close to a Gaussian distribution, which is optimal for a maximum likelihood receiver according to information theory. \begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{newmodel.png} \caption{The learned constellation diagram for AWGN channel. 
} \label{fig:consdia} \end{figure} \subsection{Fading Channel} We train the system under the two Rayleigh fading channels in Fig.~\ref{fig:stem}. The benchmark system adopts 64QAM modulation and MMSE detection, where MMSE is assumed to have perfect channel state information. The comparison between the CNN based method and 64QAM for fading channel A is shown in Fig.~\ref{fig:compchana}. \begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{SER_compare_trained.png} \caption{ BER comparison between 64QAM+MMSE and the system under fading channel A. } \label{fig:compchana} \end{figure} The trained system brings up to 4 dB gain, but is outperformed by the benchmark in the high SNR regime. The same happens in the AWGN case, and the phenomenon is not related to the SNR used in the training procedure. We would like to point out that the performance decay at high SNR does not diminish the practical advantage of our method, since our system can easily be combined with any coding module as long as the output sequence of the coding module is i.i.d., which is easy to satisfy with current interleaving techniques. The output of the neural network is a real number in $[0,1]$, which can be viewed as a probability and fed into any soft-decoding system as soft input. We add a standard LDPC coding system \cite{livshitz2012low} with code rate $\frac{1}{2}$ to both the 64QAM+MMSE system and our scheme. As shown in Fig.~\ref{fig:ldpc}, after combining with a coding module, our system maintains a performance advantage over the entire SNR range. \begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{LDPCres.png} \caption{BER comparison after combining with LDPC under fading channel A.} \label{fig:ldpc} \end{figure} Furthermore, our system brings higher gain for channels with longer delay. We train the neural network under fading channel B; the result is shown in Fig.~\ref{fig:2shutime}. 
\begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{2shutime.png} \caption{BER comparison between 64QAM+MMSE and the system under fading channel B.} \label{fig:2shutime} \end{figure} The traditional 64QAM system is highly tailored to the AWGN channel. For a known fading channel, the proposed neural network is able to introduce precoding in the modulation procedure: the transmitter and receiver are jointly adjusted to fit the channel, making the system more robust to fading. The proposed structure can also be extended to different $k$. We train the neural network with $k=8$ and compare the result with 256QAM under channel A; the comparison is shown in Fig.~\ref{fig:256qam}. Although the neural network is not as good as 256QAM in the high SNR regime, it is guaranteed to maintain an advantage over the whole SNR range after combining with a coding system, as in the case shown above. \begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{256qam.png} \caption{BER comparison between 256QAM+MMSE and the system under fading channel A.} \label{fig:256qam} \end{figure} \subsection{Robustness} In this section, we show that the neural network system is much more robust against variations in the channel. This makes the proposed system, among other things, a much more attractive alternative to traditional modulation methods in practice, where an exact channel model is not available. We consider two kinds of channel variation. The first is the introduction of non-Gaussian noise: we take the neural network system trained under the AWGN channel and test it on a bursty AWGN channel, where a small fraction of the noise samples have much higher variance than the rest, comparing it with 64QAM. Bursty AWGN channels are commonly used to model inter-cell interference in OFDM cellular systems or co-channel radar interference \cite{Fertonani2008OnRC}. The comparison result is shown in Fig.~\ref{fig:bursty2}. 
The result is expected, since the convolutional neural network introduces correlation over a longer range of symbols than 64QAM, making the system more robust to burst noise. \begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{bursty2.png} \caption{BER comparison between 64QAM and CNN for the bursty AWGN channel.} \label{fig:bursty2} \end{figure} The second kind of channel imperfection is related to the time-varying property of real channels and to inaccurate channel estimation. A real-world channel may change within a single frame, so the signal may pass through a slightly different channel than the preceding pilot did. Furthermore, channel estimation cannot ensure perfect recovery of the channel state information. In our experiment, we train the model on the fixed channel A and test it on a perturbed channel obtained by adding white Gaussian noise with standard deviation $0.05$ to channel A. The MMSE detector takes the fixed channel A as its channel state information. We test both 64QAM+MMSE and the neural network system 100 times and average the BER. The comparison of robustness between 64QAM+MMSE and the neural network is shown in Fig.~\ref{fig:robustness}: the channel variation affects 64QAM+MMSE much more significantly than the neural network structure. \begin{figure}[!htbp] \subfigure[]{ \begin{minipage}[t]{\linewidth} \centering \includegraphics[width=0.8\linewidth]{robustchan.png} \end{minipage}% } \subfigure[]{ \begin{minipage}[t]{\linewidth} \centering \includegraphics[width=0.8\linewidth]{robustcnn.png} \end{minipage} } \caption{The comparison of robustness of 64QAM+MMSE (a) and the neural network (b). Both methods are tested under the original channel and the changed channel.} \label{fig:robustness} \end{figure} The experiments above show that the proposed method is especially promising under extreme channel conditions, such as severe fading or frequent bursty noise, and that it is more robust to channel variation than traditional methods. 
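The bursty AWGN model used in the robustness test can be sketched as follows: each noise sample is Gaussian, but a small fraction of positions is hit by a burst with much higher variance. The burst probability and variance ratio below are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

# Sketch of bursty AWGN: baseline complex Gaussian noise, with a random
# small fraction of samples scaled up to a much higher variance.
rng = np.random.default_rng(2)

def bursty_noise(n, sigma2=0.1, burst_prob=0.05, burst_ratio=100.0):
    base = np.sqrt(sigma2 / 2) * (rng.standard_normal(n)
                                  + 1j * rng.standard_normal(n))
    burst = rng.random(n) < burst_prob            # burst positions
    scale = np.where(burst, np.sqrt(burst_ratio), 1.0)
    return scale * base

z = bursty_noise(10000)
print(z.shape)  # (10000,)
```

The average noise power is then roughly $\sigma^2\,(1 + p\,(r - 1))$ for burst probability $p$ and variance ratio $r$, so occasional bursts dominate the noise budget even when $p$ is small.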
\subsection{Time Complexity Comparison} We provide both simulation and numerical analysis of time complexity. We first measure running time by executing the demodulation plus detection algorithms and the receiver part of the neural network on an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz and an NVIDIA GeForce GTX 1060 GPU. The platform in this experiment is Python with Keras \cite{chollet2015keras}. For the AWGN channel, only demodulation is needed to recover the bit sequence; for the fading channel, MMSE detection is also included in the receiver, and its FFT step is time-consuming. We run both methods on 100 sets of data and take the average. The comparison is shown in Table~\ref{tab:time}. \begin{table}[!htbp] \caption{Time complexity comparison between the traditional demodulation method and the neural network.} \centering \label{tab:time} \begin{tabular}{c|c|c} \toprule[2pt] \diagbox{Channel}{Method} & 64QAM(+MMSE) & CNN \\ \hline AWGN & \bf 1.501s & 3.695s \\ Fading & 8.709s &\bf 3.751s\\ \bottomrule[1pt] \end{tabular} \end{table} For a transmitted bit sequence of length $n$, the time complexity of both the CNN and QAM demodulation is $O(n)$, whereas MMSE detection requires $O(n\log n)$ for the FFT. The CNN is thus able to replace the demodulation plus detection algorithms with lower theoretical time complexity and higher accuracy, which makes the CNN based framework well suited to designing communication systems for fading channels. With the development of neural networks, the dimension of the CNN can potentially be reduced with techniques such as network pruning and distillation. Parallelization of the multiplicative units in the neural network is also possible, as well as pipelining, and neural networks can be further accelerated by specially designed hardware such as GPUs, FPGAs, and TPUs. These designs, along with a careful analysis of the fixed point arithmetic requirements of the different weights, are under active research. 
The efficiency of neural networks can be further improved in the future. \section{Autoencoder for Frequency Domain Transmission} \subsection{Network Structure} In the previous section, we studied the effect of the multipath fading channel. In an Orthogonal Frequency Division Multiplexing (OFDM) \cite{le1995coded} system, inter-symbol interference can be eliminated by introducing a guard interval between symbols. The cyclic prefix is a typical guard interval, which keeps the subcarriers orthogonal to each other. In the following, we assume a perfect cyclic prefix as the guard interval, so there is no inter-symbol interference or inter-channel interference. However, there is still fading on each subcarrier. One of the most common equalization methods for this setting is zero forcing (ZF) \cite{peel2005vector}. Since the fading on an individual subcarrier might be quite severe, it can be hard to recover the signal from such subcarriers due to poor SNR: the information carried by subcarriers experiencing deep fades may be lost during transmission. In fact, a zero order equalization system causes inevitable loss of information \cite{daubechies1986painless}. By introducing correlation between subcarriers, information can also be carried by nearby subcarriers, reducing burst errors on subcarriers with deep fading. Based on the previous network structure, we design a frequency domain equalization system that retrieves more information than ZF. Since the fading on each subcarrier is different, a convolutional layer that shares weights along the whole input sequence may not be suitable. Here, locally connected layers are used to substitute for some of the convolutional layers in the previous structure. A locally connected layer works similarly to a convolutional layer, except that the kernel weights are unshared: a different set of filters is applied at each patch of the input. 
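The difference between a shared-weight convolution and a locally connected layer can be sketched in NumPy: the former slides one kernel over every patch, while the latter owns a separate kernel per output position. Shapes and weights below are illustrative assumptions; in Keras the corresponding layers would be `Conv1D` and `LocallyConnected1D`.

```python
import numpy as np

# Sketch contrasting a shared-weight 1D convolution with a locally
# connected layer (one unshared filter per output position).
rng = np.random.default_rng(3)

def conv1d(x, w):
    """Shared kernel w applied at every valid patch of x."""
    k = len(w)
    return np.array([w @ x[i:i + k] for i in range(len(x) - k + 1)])

def locally_connected1d(x, W):
    """Row i of W is the filter used only at patch i (unshared weights)."""
    k = W.shape[1]
    return np.array([W[i] @ x[i:i + k] for i in range(W.shape[0])])

x = rng.standard_normal(16)
w = rng.standard_normal(3)                 # one shared kernel
W = rng.standard_normal((len(x) - 2, 3))   # one kernel per position
print(conv1d(x, w).shape, locally_connected1d(x, W).shape)  # (14,) (14,)
```

When every row of `W` is the same kernel, the locally connected layer reduces exactly to the convolution; letting the rows differ is what allows the network to treat each subcarrier's fading individually.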
The whole network structure is shown in Fig.~\ref{fig:dnnforfreq}. \begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{freqfadingchan.png} \caption{DNN framework for frequency domain transmission in the physical layer.} \label{fig:dnnforfreq} \end{figure} \subsection{Simulation} We test the new structure in the case of OFDM system transmission, where the bit sequence is modulated and transmitted in the frequency domain. All other settings are the same as in the previous example. We consider the frequency transform of channel B in Fig.~\ref{fig:stem}; the resulting frequency selective fading is shown in Fig.~\ref{fig:ofdmchan}. We train and test our model on the same channel and compare it with the 64QAM+ZF method. Here ZF is assumed to have accurate channel state information, but the near-zero points in the fading plot prevent a significant number of subcarriers from transmitting information to the receiver. \begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{fftb.png} \caption{The amplitude of the frequency selective fading for each subcarrier in the frequency domain.} \label{fig:ofdmchan} \end{figure} The BER performance of both 64QAM+ZF and the CNN under the OFDM frequency selective fading channel is shown in Fig.~\ref{fig:ofdmcomp}. The BER of the CNN based system reaches as low as $10^{-4}$, while the BER of the traditional 64QAM+ZF method decreases slowly. Because of the correlation introduced by the convolutional layers, the neural network is able to assign different amounts of information to different subcarriers according to the statistical properties of the noise. In this respect the network behaves similarly to adaptive bit loading techniques \cite{barreto2001adaptive}, which assign bits efficiently based on subcarrier quality; this may explain the large gain the neural network achieves over the traditional 64QAM+ZF system. 
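The baseline's per-subcarrier zero-forcing step can be sketched as follows: with a perfect cyclic prefix, each subcarrier sees flat fading $Y_k = H_k X_k + N_k$, and ZF simply divides by $H_k$, so the residual error is exactly $N_k/H_k$ and subcarriers with small $|H_k|$ amplify the noise. The channel taps, QPSK-style symbols, and noise level below are illustrative assumptions.

```python
import numpy as np

# Sketch of per-subcarrier zero-forcing equalization in an OFDM system.
rng = np.random.default_rng(4)

M = 64
H = np.fft.fft(np.array([0.9, 0.4, 0.2]), M)  # assumed per-subcarrier fading
X = rng.choice([-1.0, 1.0], M) + 1j * rng.choice([-1.0, 1.0], M)
N = 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
Y = H * X + N                                 # flat fading per subcarrier
X_hat = Y / H                                 # ZF: residual error is N_k / H_k
print(np.allclose(X_hat - X, N / H))  # True
```

A zero order equalizer like this cannot recover information on subcarriers where $H_k$ is near zero, which motivates spreading information across neighboring subcarriers as the learned system does.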
\begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{freqfading2shu.png} \caption{BER comparison between 64QAM+ZF and CNN under OFDM frequency selective fading channel.} \label{fig:ofdmcomp} \end{figure} \section{Conclusion} In this paper, we propose a convolutional autoencoder structure that is able to automatically design a communication physical layer scheme according to different channel conditions. The system has no restriction on the length of the input bit sequence. We conduct extensive experiments to give empirical evidence for the superiority of the proposed system. The neural network has lower time complexity and higher accuracy, especially for fading channels, and is also quite robust to channel variation. The framework can also be extended to OFDM systems which transmit in the frequency domain. The trained autoencoder may not compete with state-of-the-art systems that have been optimized over the past decades. But the autoencoder is able to learn the way of mapping and demapping for any known channel without a prior mathematical model and analysis. We may further explore the feasibility and utility of neural network based communication methods in the following aspects. \begin{itemize} \item One of the most important goals of designing a communication system is to maximize the capacity, i.e. the mutual information between input and output. However, since the constellation diagram in a neural network based system is continuously distributed in the complex plane, it is hard to estimate the mutual information accurately, let alone optimize it within the neural network. A framework for analyzing mutual information in neural network based communication systems may significantly enlarge our knowledge about both neural networks and communication systems. \item An iterative, soft-input soft-output receiver can significantly improve the BER performance.
Designing a neural network receiver that takes both log-likelihood ratios and received symbols as input may also enable iteration inside the receiver, thus improving the current performance. \item The performance of the neural network is still not as good in the high-SNR regime. It is important to figure out how an autoencoder can learn a communication physical layer that outperforms existing communication techniques even in the high-SNR region. \end{itemize} \bibliographystyle{IEEEtran}
\section{Introduction} RR Lyrae stars are fundamental standard candles and the accurate determination of their absolute luminosity has a wide range of applications, including the derivation of the Hubble constant and the determination of globular cluster ages. The results of the {\sc hipparcos}\ mission allow in principle a calibration of this luminosity, based on the parallaxes and proper motions. Fernley et al. (1998a, hereafter F98) did so by employing the method of statistical parallax on a sample of 84 RR Lyrae stars (out of the 144 they considered) with [Fe/H] $\leq -1.3$. Combining the statistical parallax result with the absolute magnitude of RR Lyrae itself, computed without applying any Lutz-Kelker (LK) type correction (see Lutz \& Kelker 1973, Turon Lacarrieu \& Cr\'ez\'e 1977, Koen 1992, Oudmaijer et al. 1998), they derived a zero point of 1.05 $\pm$ 0.15 mag for the $M_{\rm V}$-[Fe/H] relation, by assuming a slope of 0.18 $\pm$ 0.03 (Fernley et al. 1998b). Tsujimoto et al. (1998, hereafter T98) used the statistical parallax method, a maximum likelihood technique and the derived $M_{\rm V}$ of the star RR Lyrae (with LK correction included) to derive a combined final value $M_{\rm V}$ = 0.6-0.7 mag at [Fe/H] = $-1.6$. Luri et al. (1998, hereafter L98) applied a maximum-likelihood method that takes all available data into account, including parallaxes, proper motions and radial velocities, considering the sample of 144 RR Lyrae stars given in F98. They derived $M_{\rm V}$ = 0.65 $\pm$ 0.23 at an average metallicity of [Fe/H] = $-1.51$. The results by F98, T98 and L98 imply dimmer RR Lyrae stars by about 0.3 mag with respect to the results from either the Main Sequence fitting technique using {\sc hipparcos}\ subdwarfs (see, e.g., Gratton et al. 1997) or recent theoretical Horizontal Branch models (see, e.g., Salaris \& Weiss 1998, Caloi et al. 1997).
On the other hand, F98, T98 and L98 agree with the results from Baade-Wesselink analyses, which predict a zero point of about 1.00 mag for the $M_{\rm V}$-[Fe/H] relation (see, e.g., Clementini et al. 1995). Turon Lacarrieu \& Cr\'ez\'e (1977) presented two methods to derive the absolute magnitude of stars from the observed parallaxes, namely using individual LK-corrections and the method of ``reduced parallaxes'' (hereafter RP) on a sample of stars. The advantages of the RP method are the following: it avoids the biases due to the asymmetry of the errors when transforming the parallaxes into magnitudes, it can be applied to samples which contain negative parallaxes, it is free from LK-type bias if no selection on the parallax or on the parallax error is made (Koen \& Laney 1998), and it requires no knowledge about the space distribution of the stars. We will apply the RP method to the sample of 144 RR Lyrae stars used by F98, and will derive the zero points of the $M_{\rm V}$-[Fe/H] and $M_{\rm K}$-$\log P_{0}$ relations. Recently, Koen \& Laney (1998) also briefly discussed the application of the RP method to RR Lyrae stars.
\vspace{-2mm} \begin{table*} \caption[]{Values for the zero point of the $M_{\rm V}$-[Fe/H] and $M_{\rm K}$-$\log P_{0}$ relations from the RP method} \begin{tabular}{crcrrrcrrl} \hline & & $M_{\rm V}$-[Fe/H] & & & & $M_{\rm K}$-$\log P_{0}$ & & & \\ Solution & N & Zero point & Total & Slope & N & Zero point & Total & Slope & Remarks \\ & & & Weight & & & & Weight & & \\ \hline 1 & 144 & 0.67 $\pm$ 0.24 & 45.5 & 0.18 & 108 & $-$1.28 $\pm$ 0.25 & 241.3 & $-$2.33 & whole sample \\ 2 & 84 & 0.77 $\pm$ 0.26 & 35.3 & 0.18 & 62 & $-$1.16 $\pm$ 0.27 & 188.4 & $-$2.33 & [Fe/H] $\le -1.3$ \\ 3 & 84 & 0.81 $\pm$ 0.24 & 40.0 & 0.18 & 62 & $-$1.14 $\pm$ 0.26 & 201.4 & $-$2.33 & [Fe/H] $\le -1.3$, ${\sigma}_{\rm H} = 0$ \\ 4 & 144 & 0.64 $\pm$ 0.24 & 46.7 & 0.18 & & & & & as 1, all [Fe/H] larger by 0.15 dex\\ 5 & 144 & 0.60 $\pm$ 0.24 & 48.2 & 0.18 & 108 & $-$1.28 $\pm$ 0.25 & 242.8 & $-$2.33 & as 1, all $E(B-V)$ larger by 0.02 \\ \hline \end{tabular} \vspace{-3mm} \end{table*} \section{The ``reduced parallax'' method} Let us consider a relation of the form: \begin{equation} M_{\rm V} = \delta \,\, {\rm [Fe/H]} + \rho. \end{equation} If $V$ is the intensity-mean visual magnitude and $V_0$ its reddening corrected value, then one can write: \begin{equation} 10^{0.2\rho} = \pi \times 0.01\,\;10^{0.2(V_0 - \delta \;{\rm [Fe/H]} )} \equiv \pi \times {\rm RHS}, \end{equation} which defines the quantity {\sc rhs} and where $\pi$ is the parallax in milli-arcseconds. A weighted mean of the quantity $10^{0.2\rho}$ is calculated, with the weight (weight = $\frac{1}{{\sigma}^2}$) for the individual stars derived from: \begin{equation} {\sigma}^2 = \left( {\sigma}_{\pi} \times {\rm RHS} \right)^2 + \left(0.2\,\ln(10) \,\; \pi\; {\sigma}_{\rm H} \times {\rm RHS} \right)^2, \end{equation} with ${\sigma}_{\pi}$ the standard error in the parallax. This follows from the propagation-of-errors in Eq.(2). We have adopted the slope $\delta = 0.18$ (see the discussion in Fernley et al.
1998b), which is the one used by F98 and which is in agreement with the results from Baade-Wesselink methods (see, e.g., Clementini et al. 1995), Main Sequence fitting (Gratton et al. 1997) and theoretical models (see, e.g., Salaris \& Weiss 1998, Cassisi et al. 1999). The sample we consider is identical to that of F98, that is, 144 stars out of a total of 180 stars in the {\sc hipparcos}\ catalogue. F98 discuss the reasons for discarding the 36 stars. Arguments include the fact that these stars do not have reddening determinations, are not RR Lyrae variables, or have poor quality {\sc hipparcos}\ solutions. Table~1 of F98 (retrievable from the CDS) lists all the data necessary to perform the above analysis: periods, intensity-mean $V$ and $K$ magnitudes, colour-excesses $E(B-V)$, and metallicities [Fe/H]. The extinction is calculated from $A_{\rm V} = 3.1 E(B-V)$ (as done by F98). An important requirement when applying this method is that the value of ${\sigma}_{\rm H}$ is small compared to the errors on the parallax. If the dispersion ${\sigma}_{\rm H}$ of the exponent in the factor RHS is large, the distribution of errors on the right-hand term in equation 2 is asymmetrical and a bias towards brighter magnitudes is introduced (Feast \& Catchpole 1997, Pont 1999). The adopted value of ${\sigma}_{\rm H}$ has been computed by considering four different contributions: errors on the intensity-mean $V$ values of the RR Lyrae stars (as given in Table~1 of F98), on the extinction (as derived from the errors on $E(B-V)$ given in Table~1 of F98), on [Fe/H] (again, from Table~1 of F98), and the intrinsic scatter due to evolutionary effects in the instability strip. This last term is the most important one, and we have adopted for it a 1$\sigma$ value of 0.12 mag (as in Fernley et al. 1998b), following the results of the exhaustive observational analysis by Sandage (1990).
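Assuming the four contributions are independent, they presumably combine in quadrature (our reading; the combination rule is not spelled out explicitly):

```latex
\begin{equation*}
{\sigma}_{\rm H}^{2} \;=\; {\sigma}_{V}^{2} \;+\; {\sigma}_{A_{\rm V}}^{2}
\;+\; \left(\delta\,{\sigma}_{\rm [Fe/H]}\right)^{2} \;+\; {\sigma}_{\rm ev}^{2},
\end{equation*}
```

where ${\sigma}_{V}$, ${\sigma}_{A_{\rm V}}$ and ${\sigma}_{\rm [Fe/H]}$ are the errors on the mean magnitude, extinction and metallicity, and ${\sigma}_{\rm ev} = 0.12$ mag is the evolutionary scatter.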
The final value is ${\sigma}_{\rm H}$ = 0.15, a quantity small enough in comparison with the parallax errors so that no substantial bias is introduced in the right-hand term of equation 2, as we have verified by means of numerical simulations. Even a ${\sigma}_{\rm H}$ of 0.20 mag would lead to a bias of at most 0.02 mag. Table~1 lists the values of the zero point, with errors, that we obtain with different sample selections for the $M_{\rm V}$-[Fe/H] relation. Solution 1 corresponds to the case of the whole sample; the zero point of 0.67 $\pm$ 0.24 mag is about 0.4 mag brighter than the value derived by F98, and consistent with the value listed in Koen \& Laney (1998) using the same method with slightly different values for ${\sigma}_{\rm H}$. The sample with [Fe/H] $\le -1.3$ (Solution 2) corresponds to a sample constituted entirely (according to the discussion in F98) of Halo RR Lyrae stars, with negligible contamination from the Disk population. In this case the zero point is equal to 0.77 $\pm$ 0.26 mag; it is slightly fainter than Solution 1, but well in agreement within the statistical errors. We also re-derived the zero point for Solution 2 in the case of ${\sigma}_{\rm H}$ = 0.0, and we found a change of only 0.04 mag. A systematic change in the metallicity scale (Solution 4) by 0.15 dex does not appreciably affect the zero point determination, while the result is more sensitive to a systematic variation of the adopted reddenings (Solution 5). The RP method has also been used to derive the zero point of the $M_{\rm K}$-$\log P_{0}$ relation. This relation appears to be insensitive to metallicity (Fernley et al. 1987, Carney et al. 1995) and is also very weakly affected by reddening uncertainties, since $A_{\rm K} = 0.112\, A_{\rm V}$ (Rieke \& Lebofsky 1985). Moreover, the intrinsic scatter around this relation is smaller than in the case of the $M_{\rm V}$-[Fe/H] relation (Fernley et al. 1987).
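The weighted mean of Eqs. 2-3 can be sketched as follows (an illustrative re-implementation, not the code actually used; function and variable names are ours):

```python
import math

def rp_zero_point(pi_mas, sigma_pi, v0, feh, delta=0.18, sigma_h=0.15):
    """Reduced-parallax estimate of rho in M_V = delta*[Fe/H] + rho.

    pi_mas   : parallaxes in milli-arcseconds (negative values are allowed)
    sigma_pi : standard errors of the parallaxes
    v0       : dereddened intensity-mean V magnitudes
    feh      : metallicities [Fe/H]
    """
    num = den = 0.0
    for p, sp, v, m in zip(pi_mas, sigma_pi, v0, feh):
        rhs = 0.01 * 10 ** (0.2 * (v - delta * m))        # Eq. 2
        weight = 1.0 / ((sp * rhs) ** 2                   # Eq. 3
                        + (0.2 * math.log(10) * p * sigma_h * rhs) ** 2)
        num += weight * p * rhs       # weighted sum of 10**(0.2*rho)
        den += weight
    return 5.0 * math.log10(num / den), den   # zero point and total weight
```

As a sanity check, a hypothetical single star at 100 pc ($\pi$ = 10 mas) with $V_0$ = 10 and [Fe/H] = 0 gives $\rho$ = 5.0, matching the distance modulus of 5 mag.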
In the sample considered here there are 108 RR Lyrae stars with an observed intensity-mean $K$ magnitude. The procedure is the same as described before, the only difference being that now, instead of Eq.~1, we use the expression $M_{\rm K} = \delta \,\, \log P_{0} + \rho$, where $P_{0}$ is the fundamental pulsation period. For the first-overtone RRc variables we have derived the fundamental periods using the relation $\log (P_{0}/P_{1}) = +0.120$ (Carney et al. 1995). We adopt a slope $\delta = -2.33$ following Carney et al. (1995); for the value of ${\sigma}_{\rm H}$ we have considered the same contributions previously described (with the exception, of course, of the contribution due to the error on [Fe/H]). In this case the observational estimate of the intrinsic scatter due to the width of the instability strip comes from Carney et al. (1995), and the final value turns out to be ${\sigma}_{\rm H}$ = 0.10. In Tab.~1 the values of the zero point for the $M_{\rm K}$-$\log P_{0}$ relation are listed. When considering the entire sample we obtain a zero point of $-1.28 \pm 0.25$ mag, $\approx$ 0.4 mag brighter than the value from the Baade-Wesselink method (see, e.g., Carney et al. 1995). In the case of a pure Halo RR Lyrae sample ([Fe/H] $\leq -1.3$) we obtain $-1.16 \pm 0.27$ mag, slightly dimmer but again in agreement with the value derived for the whole sample. The influence of ${\sigma}_{\rm H}$ is even smaller than for the $M_{\rm V}$-[Fe/H] relation. As the sample of RR Lyrae stars is not volume complete, it may be subject to Malmquist type bias. If the space distribution of RR Lyrae stars is spherical, it implies that the true zero points of the $M_{\rm V}$-[Fe/H] and $M_{\rm K}$-$\log P_{0}$ relations may be fainter by up to 0.03 and 0.01 mag, respectively, for the adopted values of ${\sigma}_{\rm H}$. This applies when the average absolute magnitudes of a volume and brightness limited sample are compared. Oudmaijer et al.
(1999) showed empirically that when the averaging is done over $10^{0.2M_{\rm V}}$ the effect of Malmquist bias is less. \\ \begin{figure} \centerline{\psfig{figure=vergelijk.ps,width=8.8cm,angle=-90}} \caption[]{A comparison of the true distance moduli to the 62 metal-poor RR Lyrae with [Fe/H] $\le -1.3$ from the $M_{\rm V}$-[Fe/H] and $M_{\rm K}-\log P$ relations. Each point has an error bar of about 0.26 mag in both $x-$ and $y-$direction. The solid line is the 1:1 relation. The dispersion is less than 0.10 mag.} \vspace{-3mm} \end{figure} \noindent In Fig.~1 we compare, for the 62 {\sc hipparcos}\ RR Lyrae stars with [Fe/H] $\le -1.3$ and both observed $K$ and $V$ magnitudes, the true distance moduli derived from the $M_{\rm V}$-[Fe/H] and $M_{\rm K}$-$\log P_{0}$ relations, using zero points of $0.77$ and $-1.16$ mag, respectively. Each data point has an error bar of 0.26 mag in the $x-$ and 0.27 mag in the $y-$direction. The comparison of the two photometric distances can in principle give us an independent indication of possible biases in the determination of the zero points of the two relations with the RP method. As is evident from the figure, the distance moduli from both relations agree very well. A linear fit to the data is consistent with a slope of unity, and the dispersion around the 1:1 relation is equal to 0.098 mag. A dispersion of this order is what is expected from the dispersions in the observed $\log P$-[Fe/H] and $(V-K)_0-\log P$ relations for the RR Lyrae sample. \begin{table*} \caption[]{Data on RR Lyrae in LMC clusters} \begin{tabular}{cccrrrcc} \hline Name & $<V>$ & $A_{\rm V}$ & [Fe/H] & $\Delta$ & Reference & $V_{0} +\Delta$ & $V_{0} +\Delta -$0.18[Fe/H] \\ & (mag.) & (mag.) & & (mag.) & & (mag.) & (mag.)
\\ \hline NGC 1466 & 19.33 $\pm$ 0.02 & 0.28 $\pm$ 0.06 & $-1.85$ & 0.0 & Walker 1992b & 19.05 $\pm$ 0.07 & 19.38 $\pm$ 0.07 \\ NGC 1786 & 19.27 $\pm$ 0.03 & 0.23 $\pm$ 0.03 & $-2.3$ & 0.0 & Walker \& Mack 1988 & 19.04 $\pm$ 0.04 & 19.45 $\pm$ 0.04 \\ NGC 1835 & 19.38 $\pm$ 0.05 & 0.40 $\pm$ 0.09 & $-1.8$ & $-0.03$ & Walker 1993 & 18.94 $\pm$ 0.11 & 19.26 $\pm$ 0.11 \\ NGC 2210 & 19.12 $\pm$ 0.10 & 0.19 $\pm$ 0.09 & $-1.9$ & $+0.09$ & Reid \& Freedman 1994 & 19.02 $\pm$ 0.13 & 19.36 $\pm$ 0.13 \\ Reticulum & 19.07 $\pm$ 0.03 & 0.09 $\pm$ 0.06 & $-1.7$ & $-0.08$ & Walker 1992a & 18.91 $\pm$ 0.07 & 19.22 $\pm$ 0.07 \\ NGC 1841 & 19.31 $\pm$ 0.02 & 0.56 $\pm$ 0.06 & $-2.2$ & ($\sim$ 0.2) & Walker 1990 & & \\ NGC 2257 & 19.03 $\pm$ 0.02 & 0.12 $\pm$ 0.03 & $-1.8$ & $+0.18$ & Walker 1989 & & \\ \hline \end{tabular} \vspace{-3mm} \end{table*} \section{Discussion} For their preferred sample of 84 stars with [Fe/H] $\le -1.3$ F98 obtain a zero point of 1.05 $\pm$ 0.15 mag for the $M_{\rm V}$-[Fe/H] relation (assuming a slope of 0.18), in agreement with results from Baade-Wesselink methods. When applying the RP method to the same sample of stars, we find a zero point 0.28 mag brighter. An analogous result, i.e. a zero point $\approx$0.30 mag brighter than the Baade-Wesselink one, is derived for the $M_{\rm K}$-$\log P_{0}$ relation. Even if within the error bars the results derived with the different methods formally agree, there appears to exist a systematic difference between zero points obtained using the parallaxes directly and zero points obtained by employing methods which are sensitive to proper motions and radial velocities (F98, T98, L98), especially if one also takes into account the results for the {\sc hipparcos}\ Cepheids. Also for the Cepheids, methods that are mostly sensitive to the proper motions and radial velocities yield dimmer zero points for the PL-relation than methods which directly use the parallax.
In particular, using the RP method Feast \& Catchpole (1997) derived a zero point of $-1.43 \pm 0.10$ mag, and Lanoix et al. (1999), using a slightly bigger sample, find $-1.44 \pm 0.05$ mag. Oudmaijer et al. (1998), using only the positive parallaxes but then correcting for the LK-bias, find $-1.29 \pm 0.08$ mag. On the other hand, L98 find a zero point of $-1.05 \pm 0.17$ mag using a maximum likelihood method that takes into account parallaxes, proper motions and velocity information. As discussed by Pont (1999), in this technique the parallaxes do not influence the result to first order, and the method is similar to a statistical parallax analysis. A careful check of all assumptions implicit in the kinematical methods could be the key to understanding the nature of this puzzling disagreement. In the case of the RP method, as discussed extensively in the previous section, the condition for deriving the zero point without introducing a bias is to have ${\sigma}_{\rm H}$ small with respect to the errors on the parallaxes; this condition appears to be fulfilled in the sample considered. Our zero point for the $M_{\rm V}$-[Fe/H] relation is in agreement with results from the Main Sequence fitting technique (Gratton et al. 1997), and from theoretical Horizontal Branch models. In particular, the Horizontal Branch models by Salaris \& Weiss (1998) and Cassisi et al. (1999) give a zero point for the Zero Age Horizontal Branch (ZAHB) at the RR Lyrae instability strip in the range 0.74-0.77 mag. To compare the results for the ZAHB brightness with the $M_{\rm V}$-[Fe/H] relations mentioned in this paper, which consider the mean absolute brightness of the RR Lyrae population at a given metallicity, one has to apply a correction of $\approx -0.1$ mag (see, e.g., Caloi et al. 1997 and references therein) to the ZAHB result; this takes into account the evolution off the ZAHB of the observed RR Lyrae stars.
Even after applying this correction the theoretical results are in good agreement with the results from the RP method. Moreover, the zero point derived with the RP method is also in agreement with the recent results by Kovacs \& Walker (1999), who derive, by employing linear pulsation models, RR Lyrae luminosities that are brighter by 0.2-0.3 mag with respect to Baade-Wesselink results. Finally, we want to derive the LMC distance implied by our zero point of the RR Lyrae distance scale. Table~2 collects the available data on RR Lyrae stars in LMC clusters: the name of the cluster, the observed mean $V$-magnitude, reddening, metallicity and the difference in distance modulus ($\Delta$) between the cluster and the main body of the LMC. All these data are taken from the references listed. From them the dereddened magnitude at the centre of the LMC (Col.~7), and this value minus the quantity (0.18 [Fe/H]) (Col.~8), have been calculated for those clusters with $\Delta <$ 0.1 mag. In this way the difference in metallicity between the clusters is taken into account before deriving the LMC distance. In more detail, we have derived the weighted mean of the values in Col.~8 to find an average of 19.38 with an rms dispersion of 0.10 mag, which can be compared directly to the zero point of the $M_{\rm V}$-[Fe/H] relation to find a distance modulus of 18.61 $\pm$ 0.28. This result turns out to be consistent with the Cepheids\ distance to the LMC as derived by Feast \& Catchpole (1997) or Oudmaijer et al. (1998). \vspace{-5mm} \subsection*{Acknowledgements} Ren\'e Oudmaijer, Phil James and the referee, Xavier Luri, are warmly thanked for valuable comments and suggestions which improved the presentation of the paper. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. \vspace{-5mm}
\section{Introduction} \begin{figure*} \begin{center} \includegraphics[width=\paperwidth-5cm]{Heatmaps.pdf} \caption{Attention Head Shapley Values of 3 Languages for XLM-R finetuned on English XNLI. Each value indicates the mean marginal effect an attention head has on accuracy for the XNLI test set in the language.}\label{figure2} \end{center} \end{figure*} Cross-lingual transfer learning aims to utilize a natural language processing system trained on a source language to improve results for the same task in a different target language. The core goal is to maintain relevant learned patterns from the source while disregarding those which are inapplicable to the target. Multilingual pretraining of transformer language models has recently become a widespread method for cross-lingual transfer, demonstrating remarkable zero- and few-shot performance across languages when finetuned on monolingual data~\citep{mbert, xlmr, mt5-xue}. However, adding languages beyond a threshold begins to harm cross-lingual transfer in a fixed-size model, as shown in prior work~\citep{xlmr, mt5-xue}. This has been addressed with additional parameters, both language-specific~\citep{pfeiffer-etal-2020-mad} and general~\citep{xlmr,mt5-xue}. \citet{wang-etal-2020-negative} justifies this by showing that competition over limited capacity drives interference. However, limited capacity seems unlikely to be the sole driver, as \citet{lotterytransformers} shows that pretrained language models are highly overparametrized. We offer an alternate hypothesis: interference is caused by components that are specialized to language-specific patterns and introduce noise when applied to other languages. To test this hypothesis, we introduce a methodology for identifying language-specific components and removing them, which is expected to improve model performance without updating or adding language-specific parameters.
Our work builds on prior research studying monolingual models which shows that they can be pruned aggressively~\citep{michel-16, voita-etal-2019-analyzing}. We leverage Shapley Values, the mean marginal contribution of a player to a collaborative reward, to identify attention heads that cause interference. Shapley Values are consistent, gradient-free (which allows for binary removal), and map harmful components clearly to negative values. While the process of identifying and pruning language-specific structures is agnostic to the attribution methodology, these properties make Shapley Values particularly well-suited to the task. To make computation tractable, we follow prior work on vision models~\citep{neuronshap} to approximate Shapley Values more efficiently using truncation and multi-armed bandit sampling. We contribute the following: \begin{enumerate} \item \textbf{Attention Head Language Affinity:} Even when computed from aligned sentences, Attention Head Shapley Values vary based on the language of the input. This highlights that a subset of attention heads has language-specific importance, while others are language-agnostic, as shown in Figure \ref{figure2}. \item \textbf{Improving through Pruning:} Model pruning according to Shapley Values improves performance without updating parameters on the Cross-Lingual Natural Language Inference corpus~\citep{conneau-etal-2018-xnli} and the Universal Dependencies Part-of-Speech corpus~\citep{udpos}. This opens a path of work to reduce interference by removing parameters rather than adding them. \item \textbf{Interpreting Multilingual Heads:} In a qualitative study, we find that the most language-agnostic heads identified have a human-interpretable language-agnostic function, while language-specific heads have varied behavior.
\end{enumerate} \section{Related Work} \subsection{Multilingual Learning} A large amount of work has studied both the theoretical underpinnings of learning common structures for language and their applications to cross-lingual transfer. Early works exploited commonality through the use of pivot representations, created either by translation~\citep{ mann-yarowsky-2001-multipath, tiedemann-etal-2014-treebank, mayhew-etal-2017-cheap} or language-agnostic task formulations~\citep{zeman2008reusable, mcdonald2011multi}. As NLP has increasingly used representation learning, dense embedding spaces replaced explicit pivots. This led to methods that identified the commonalities of embedding spaces and ways to align them~\citep{joulin-etal-2018-loss, artetxe-etal-2018-robust, artetxe-schwenk-2019-massively}. Recently, many works have trained multilingual transformer models~\citep{mbert, xlmr, mbart, mt5-xue, hu-etal-2021-explicit} as the basis for cross-lingual transfer. These models both implicitly and explicitly perform alignment, although they empirically achieve stronger alignment between closely related languages~\citep{artetxe-etal-2020-cross, conneau-etal-2020-emerging}. With language-specific data, further work has studied how to reduce interference by adding a small number of language-specific parameters. These works adapt a model for the target language by training only Adapters~\citep{wang-etal-2020-negative, pfeiffer-etal-2020-mad, ansell-etal-2021-mad-g}, prompts~\citep{zhao-schutze-2021-discrete}, or subsets of model parameters~\citep{ansell-etal-2022-composable}. \citet{ma-etal-2021-contributions} previously investigated pruning in multilingual models using gradient-based importance metrics to study variability across attention heads. However, they used a process of iterative pruning and language-specific finetuning. This iterative process is largely unstable since there are many trainable subnetworks within large models~\citep{prasanna-etal-2020-bert}. 
Our method is the first to address interference and improve cross-lingual performance purely by pruning, without updating or adding additional language-specific parameters. \subsection{Model Pruning} Model pruning has largely been focused on reducing the onerous memory and computation requirements of large models. These techniques are broken into two approaches: structured and unstructured pruning. \emph{Unstructured pruning} aims to remove individual parameters, which allows for more fine-grained removal. This process often has minimal effect on performance even at extremely high degrees of sparsity. To efficiently prune a large number of parameters, many techniques propose using gradients or parameter magnitude~\citep{integratedGradients, lee2018snip, lotteryticket, lotterytransformers} as importance metrics. \emph{Structured pruning}, or removing entire structural components, is motivated by computational benefits from hardware optimizations. In the case of Transformers, most of this pruning work targets removal of attention heads, either through static ranking~\citep{michel-16} or through iterative training~\citep{voita-etal-2019-analyzing, prasanna-etal-2020-bert, compact}. These pruning methods have also been used to study model behavior, but methods with iterative finetuning are highly unstable as many sub-networks can deliver the same level of performance once trained~\citep{prasanna-etal-2020-bert}. Our work studies pruning without updating model parameters, which aligns with~\citet{michel-16}, which was able to maintain reasonable model performance even when removing 50\% of total attention heads. Furthermore, \citet{kovaleva-etal-2019-revealing} found that pruning attention heads could sometimes improve model performance without further finetuning. We build on this to develop a methodology for consistently identifying pruned models which improve performance.
\section{Methods} To identify and remove interference, we need a metric which can distinguish harmful, unimportant, and beneficial attention heads. Prior work~\citep{michel-16, ma-etal-2021-contributions} utilized the magnitude of gradients as an importance metric. However, this metric measures the sensitivity of the loss function to the masking of a particular head regardless of the direction of that sensitivity. Since the loss function is sensitive to the removal of both harmful and beneficial heads, we develop a simple yet effective method which separates these classes. Shapley Values~\citep{shapley_1953} have often been applied in model interpretability since they are the only attribution method to abide by the theoretical properties of local accuracy, missingness, and consistency laid out by~\citet{shap}. In our setting, Shapley Values have two advantages over gradient-based importance metrics. Firstly, gradient-based approaches require differentiable relaxations of evaluation functions and masking, but Shapley Values do not. Therefore, we can instead use the evaluation functions and binary masks directly. Secondly, Shapley Values are meaningfully signed, which allows them to distinguish beneficial, unimportant, and harmful heads rather than just important and unimportant heads. This latter property is essential for our goal of identifying interference. We apply Shapley Values to the task of structural pruning. In order to compute Shapley Values for each head, we first formalize the forward pass of a Transformer as a coalitional game between attention heads. Then, we describe a methodology to efficiently approximate Shapley Values using Monte Carlo simulation combined with truncation and multi-armed bandit search. Finally, we propose a simple pruning algorithm using the resulting values to evaluate the practical utility of this theoretically grounded importance metric.
\subsection{Attention Head Shapley Values} We formalize a Transformer performing a task as a coalitional game. Our set of players $A$ are the attention heads of the model. In order to remove self-attention heads from the game without retraining, we follow \citet{michel-16}, which augments multi-headed attention with an added gate $G_h \in \{0, 1\}$ for each head $\Att_h$ in a layer with $N_h$ heads as follows: \begin{equation} \MHAtt(x, q) = \sum_{h=1}^{N_h} G_h \Att_h(x, q) \end{equation} With $G_h=0$, that attention head does not contribute to the output of the transformer and is therefore considered removed from the active coalition. Our characteristic function $V(A)$ is the task evaluation metric $M_v(A)$ over a set of validation data within a target language, adjusted by the evaluation metric with all heads removed to abide by the $V(\emptyset) = 0$ property of coalitional games: \begin{equation} V(A) = M_v(A) - M_v(\emptyset) \end{equation} With these established, the Shapley Value $\varphi_h$ for an attention head $\Att_h$ is the mean performance improvement from switching gate $G_h$ from $0$ to $1$ across all $P$ permutations of the other gates: \begin{equation} \varphi_h = \frac{1}{|P|}\sum_{A\in P} V(A\cup h) - V(A) \label{shapleyFormula} \end{equation} \subsection{Approximating Shapley Values}\label{approx} However, the exact computation of Shapley Values for $N$ attention heads requires $2^N$ evaluations of our validation metric, which is intractable for the number of heads used in most architectures. This intractability is often addressed by using Monte Carlo simulation as an approximation~\citep{monte}. This replaces $P$ in Equation \ref{shapleyFormula} with randomly constructed permutations, rather than the full set of all possible solutions. Computing low-variance Shapley Value estimates with Monte Carlo simulation alone still requires unreasonable amounts of compute time. Therefore, we follow \citet{neuronshap} to accelerate our computations.
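A minimal Monte Carlo sketch of Equation 3 (illustrative; the `metric` callable stands in for the task evaluation $M_v$ under a given head mask, and all names are ours):

```python
import random
import numpy as np

def head_shapley_values(num_heads, metric, num_perms=100, seed=0):
    """Monte Carlo estimate of attention-head Shapley values.

    metric(mask) -> validation score with head h active where mask[h] is True.
    The characteristic function is V(A) = metric(A) - metric(empty coalition).
    """
    rng = random.Random(seed)
    base = metric(np.zeros(num_heads, dtype=bool))  # M_v of the empty coalition
    phi = np.zeros(num_heads)
    for _ in range(num_perms):
        order = list(range(num_heads))
        rng.shuffle(order)                          # random permutation of heads
        mask = np.zeros(num_heads, dtype=bool)
        prev = 0.0                                  # V of the current coalition
        for h in order:                             # add heads one at a time
            mask[h] = True
            cur = metric(mask) - base
            phi[h] += cur - prev                    # marginal contribution of h
            prev = cur
    return phi / num_perms
```

For an additive toy game, where each head contributes a fixed amount to the score, the estimate recovers the per-head contributions exactly regardless of the sampled permutations.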
We add a truncation heuristic using priors about the behavior of neural networks and formulate estimation as a multi-armed bandit problem of separating harmful heads from all others. This reduces the number of samples required to compute consistent Shapley Values, with our experiments showing consistency even across languages. \paragraph{Truncation Heuristics} \noindent Truncation stops sampling the marginal contributions from the rest of a permutation of features once a stopping criterion is reached for that permutation of the Monte Carlo simulation. Prior work selects a stopping criterion based on either total performance~\citep{neuronshap} or marginal improvements~\citep{datashap}. To avoid tailoring a threshold to each dataset, we instead choose to truncate based on the percentage of remaining attention heads. For all experiments, we truncate when fewer than 50\% of attention heads remain in the coalition to bias our estimations towards the effect of heads when the majority of the full network is present. \paragraph{Multi-Armed Bandit Sampling} \noindent The multi-armed bandit optimization stops sampling the marginal contributions of a particular player once a stopping criterion has been reached according to the variance of that player. This optimization uses Empirical Bernstein Bounds~\citep{empiricalbern}, which establish limits on the true mean of a value using its variance. For $t$ samples with observed standard deviation $\sigma_t$ and range $R$, there is a probability of $1-\delta$ that the difference between the observed mean $\hat{\mu}$ and true mean $\mu$ abides by the following inequality formulated by \citet{bernStopping}: \begin{equation} |\hat{\mu} - \mu| \leq \sigma_t \sqrt{\frac{2\log(3/\delta)}{t}} + \frac{3R\log(3/\delta)}{t} \label{bernEq} \end{equation} We stop sampling for a particular head once the lower bound established by this inequality is positive, since this means there is a probability $1-\delta$ that this head is not harmful.
This saves us significant computation without being likely to cause harmful attention heads to be missed. For all experiments in this paper, we use $R=1$ since the model's worst-case performance is zero and $\delta=0.1$ to give a 95\% confidence lower and upper bound. \begin{table*} \begin{center} \begin{tabular}{|llllllllllllllll|} \hline Dataset & EN & AR & BG & DE & EL & ES & FR & HI & RU & SW & TH & TR & UR & VI & ZH \\ \hline \multicolumn{1}{|l|}{XNLI} & 5.0 & 5.0 & 5.0 & 5.0 & 5.0 & 5.0 & 5.0 & 5.0 & 5.0 & 5.0 & 5.0 & 5.0 & 5.0 & 5.0 & 5.0 \\ \multicolumn{1}{|l|}{UDPOS} & 5.4 & 1.7 & 1.1 & 22.4 & 2.8 & 3.1 & 9.5 & 2.7 & 11.3 & N/A & N/A & 4.8 & 0.5 & 0.8 & 5.5 \\ \hline \end{tabular} \caption{Size of the test sets for the datasets in thousands of sentence pairs and sentences respectively. We use a 512 example subset of the released development sets to compute Shapley Values in all languages for all datasets.} \label{descriptive} \end{center} \end{table*} \subsection{Importance-Based Structured Pruning}\label{pruningMethod} By using Shapley Values as the basis of structured pruning for multilingual tasks, we measure their practical ability to remove interference and generalize to unseen test data. Any signed importance metric can be used in this multilingual pruning process to test a metric's ability to identify interference. Our hypothesis is that attention heads with negative Shapley Values introduce interference broadly. Our pruning method reflects this by using the sign of our approximation directly. Using the empirical Bernstein inequality from Equation~\ref{bernEq}, we compute the upper bound on each head's Shapley Value. We then remove all attention heads for which this upper bound is negative, i.e.\ the set of attention heads which have probability $1-\delta$ of having negative Shapley Values. This is a parameter-free approach for deciding the number of heads to preserve.
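The stopping rule from the inequality above and the sign-based pruning decision can be sketched as follows (a minimal illustration, not the experimental code; the Shapley means, standard deviations, and sample counts are invented, and $\sigma_t$ is treated as an empirical standard deviation):

```python
import math

def bernstein_halfwidth(std, t, delta=0.1, value_range=1.0):
    # Half-width of the empirical Bernstein confidence interval:
    # with probability 1 - delta the true mean lies within this
    # distance of the sample mean.
    log_term = math.log(3.0 / delta)
    return std * math.sqrt(2.0 * log_term / t) + 3.0 * value_range * log_term / t

def heads_to_prune(means, stds, counts, delta=0.1):
    # Remove heads whose Shapley upper confidence bound is negative,
    # i.e. heads that are harmful with probability at least 1 - delta.
    return [h for h, (mu, sd, t) in enumerate(zip(means, stds, counts))
            if mu + bernstein_halfwidth(sd, t, delta) < 0.0]

pruned = heads_to_prune(means=[0.20, -0.01, -0.30],
                        stds=[0.05, 0.05, 0.05],
                        counts=[200, 200, 200])
```

With these invented statistics, only the third head has an upper bound below zero; the slightly negative second head is kept because its interval still crosses zero, which is exactly the parameter-free behavior described above.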
This approach is stable: the same set of negative heads is identified for pruning across 3 separate runs. Alternatively, once Shapley Values are computed, the model can be pruned to any sparsity level. Unlike prior pruning approaches other than \citet{michel-16}, we do not perform any weight updates during or after pruning and leave all parameters fixed. This provides constant-time pruning to the desired size. We evaluate performance in this configurable pruning setting in Section \ref{configurablePruning}. \section{Experiments} \subsection{Datasets} We evaluate our methodology on the Cross-Lingual Natural Language Inference (XNLI) and Universal Dependencies Part-of-Speech (UDPOS) tasks. These allow us to analyze the applicability of Attention Head Shapley Values to both sequence classification and structured prediction. We provide a description of dataset sizes in Table \ref{descriptive}. \paragraph{Cross-Lingual Natural Language Inference (XNLI)} We use the Cross-Lingual Natural Language Inference benchmark~\citep{conneau-etal-2018-xnli}. This dataset is aligned, which allows us to control for possible confounding variation from the underlying semantics of the content. Given a premise and a hypothesis, XNLI is the task of classifying whether the hypothesis is entailed by the premise, contradicted by the premise, or neither. \paragraph{Universal Dependencies Part-of-Speech (UDPOS)} For structured prediction, we evaluate on the Part-of-Speech tags from the Universal Dependencies v2 corpus~\citep{udpos}. The XTREME benchmark~\citep{xtreme} hypothesizes that structured prediction tasks are more language specific, with UDPOS having the largest cross-lingual gap in the benchmark. For direct comparison with our experiments on XNLI, we only retain the 13 languages from UDPOS that have a development and test split and also exist in XNLI. Unlike XNLI, the subsections vary in data size and are not aligned across languages.
\iffalse \subsubsection{Vernacular Language Understanding Evaluation (VALUE)} Finally, we evaluate our method on the Vernacular Language Understanding Evaluation~\citep{ziems-etal-2022-value}, an aligned benchmark with GLUE~\citep{wang-etal-2018-glue} which stress tests dialectal discrepancies between Standard American English (SAE) and African American English (AAE). \fi \subsection{Experimental Setup} As the basis for our experiments, we finetune XLM-R Base~\citep{xlmr} using the Transformers library on only English data. Evaluation is done using the Datasets library~\citep{lhoest-etal-2021-datasets} implementation of the accuracy metrics. Finetuning and Shapley Value computation were both done on a single NVIDIA GeForce 12GB RTX 2080 Ti. We finetune following hyper-parameter tuning procedures from prior work: using \citet{xtreme} for XNLI and \citet{de-vries-etal-2022-make} for UDPOS. For all tasks and languages, we use the accuracy on 512 examples of the development set as the characteristic function for our coalitional game. Our pruning baselines include the gradient-based importance metric of \citet{michel-16} and the average of 10 randomly pruned networks. We prune the same number of heads pruned by our method for all strategies, since our baselines require selecting the number of heads to prune. \subsection{Language Affinity}\label{affinity} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{Correlations.pdf} \caption{Spearman $\rho$ of Attention Head Shapley Values across languages in XNLI using XLM-R finetuned on the English training split.} \label{correlation} \end{figure} First, we analyze the Attention Head Shapley values for XNLI. We focus only on the role of the source language by using an aligned sample from XNLI to control our results for differences independent from language variation. In Figure \ref{figure2}, we visualize the results across English, Chinese, and Swahili. 
As expected from prior work~\citep{michel-16, voita-etal-2019-analyzing}, many heads have low magnitude Shapley Values indicating that they play no significant positive or negative role in the final network. We compare the similarity of Shapley Values learned across languages using Spearman's $\rho$ in Figure \ref{correlation} and find that Shapley Values are heavily correlated between all languages but Swahili, which is a major outlier. This cross-lingual consistency shows the stability of our learned values even across languages, in contrast with the instability shown by \citet{prasanna-etal-2020-bert}. Despite this consistency, we find some attention heads demonstrate high language-specificity. Most notably, the fifth attention head in layer six is positive for Swahili, but strongly negative for all other 14 languages. This indicates that this head serves a function specific to Swahili within the model. We investigate the behavior of language-specific and language-agnostic heads further in Section \ref{analysis}. 
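The pairwise comparison underlying Figure \ref{correlation} is a rank correlation between per-language Shapley vectors. A dependency-free sketch (tie handling omitted, which is adequate for real-valued estimates; the head labels and values below are illustrative, not measured):

```python
def ranks(values):
    # Rank transform (no tie handling; Shapley estimates are real-valued).
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(a, b):
    # Spearman's rho = Pearson correlation of the rank-transformed vectors.
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

# Hypothetical per-language Shapley vectors keyed by head label.
en = {"L2H1": 0.05, "L4H8": 0.31, "L6H5": -0.22}
sw = {"L2H1": 0.01, "L4H8": 0.12, "L6H5": 0.18}
heads = sorted(en)
rho = spearman([en[h] for h in heads], [sw[h] for h in heads])
```

Aligning the head labels before correlating, as done here, is what makes the per-language vectors comparable in the first place.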
\begin{table*}[!ht] \begin{center} \small \setlength\tabcolsep{1.5pt} \renewcommand{\arraystretch}{1.2} \begin{tabular}{c|ccccccccccccccc|} \cline{2-16} \multicolumn{1}{l|}{} & \multicolumn{15}{c|}{XNLI Accuracy} \\ \hline \multicolumn{1}{|c|}{Pruning Strategy} & EN & AR & BG & DE & EL & ES & FR & HI & RU & SW & TH & TR & UR & VI & ZH \\ \hline \multicolumn{1}{|c|}{No Pruning} & 84.1 & 70.6 & 76.7 & 76.8 & 75.4 & 79.8 & 77.7 & 70.0 & 74.7 & 63.4 & 70.6 & 71.9 & 65.9 & 73.3 & 73.5 \\ \multicolumn{1}{|c|}{Random} & 81.5$^-$ & 67.2$^-$ & 72.7$^-$ & 72.7$^-$ & 71.3$^-$ & 75.5$^-$ & 73.0$^-$ & 66.3$^-$ & 70.5$^-$ & 63.5 & 67.4$^-$ & 68.4$^-$ & 61.6$^-$ & 69.7$^-$ & 70.8$^-$ \\ \multicolumn{1}{|c|}{\citet{michel-16}} & 84.3 & 71.0 & 77.3 & 77.4 & 72.8$^-$ & 80.2 & 78.4 & 71.5$^+$ & 75.2 & 63.1 & 70.7 & 71.7 & 66.9$^+$ & 73.3 & \textbf{77.2$^+$} \\ \hline \multicolumn{1}{|c|}{Shapley Value ($\varphi_i$)} & \textbf{85.1$^+$} & \textbf{72.0$^+$} & \textbf{77.8$^+$} & \textbf{78.3$^+$} & \textbf{76.3} & \textbf{80.6} & \textbf{79.7$^+$} & \textbf{71.5$^+$} & \textbf{76.5$^+$} & \textbf{63.8} & \textbf{73.3$^+$} & \textbf{73.2$^+$} & \textbf{67.6$^+$} & \textbf{75.3$^+$} & \textbf{77.2$^+$} \\ \hline \multicolumn{1}{|c|}{Pruned Heads ($K$)} & 4 & 6 & 6 & 5 & 4 & 5 & 5 & 7 & 5 & 5 & 7 & 5 & 6 & 6 & 9 \\ \hline \end{tabular} \\ \setlength\tabcolsep{2.85pt} \begin{tabular}{c|ccccccccccccccc|} \cline{2-16} & \multicolumn{15}{c|}{UDPOS Accuracy} \\ \hline \multicolumn{1}{|c|}{Pruning Strategy} & EN & AR & BG & DE & EL & ES & FR & HI & RU & SW & TH & TR & UR & VI & ZH \\ \hline \multicolumn{1}{|c|}{No Pruning} & \textbf{95.7} & 75.1 & \textbf{90.9} & \textbf{88.8} & 71.5 & \textbf{89.8} & 81.3 & 73.9 & \textbf{88.2} & - & - & \textbf{78.7} & 67.3 & \textbf{66.3} & 50.2 \\ \multicolumn{1}{|c|}{Random} & \textbf{95.7} & 74.3$^-$ & \textbf{90.9} & \textbf{88.8} & 71.8 & \textbf{89.8} & 81.4 & 73.7 & \textbf{88.2} & - & - & \textbf{78.7} & 67.5 & \textbf{66.3} & 55.3$^+$ \\
\multicolumn{1}{|c|}{\citet{michel-16}} & \textbf{95.7} & 75.1 & \textbf{90.9} & \textbf{88.8} & 71.1 & \textbf{89.8} & 81.1 & 73.8 & \textbf{88.2} & - & - & \textbf{78.7} & 67.3 & \textbf{66.3} & 48.9$^-$ \\ \hline \multicolumn{1}{|c|}{Shapley Value ($\varphi_i$)} & \textbf{95.7} & \textbf{76.6}$^+$ & \textbf{90.9} & \textbf{88.8} & \textbf{72.8}$^+$ & \textbf{89.8} & \textbf{82.6}$^+$ & \textbf{75.6}$^+$ & \textbf{88.2} & - & - & \textbf{78.7} & \textbf{69.5}$^+$ & \textbf{66.3} & \textbf{62.6}$^+$ \\ \hline \multicolumn{1}{|c|}{Pruned Heads ($K$)} & 0 & 4 & 0 & 0 & 4 & 0 & 2 & 2 & 0 & - & - & 0 & 4 & 0 & 18 \\ \hline \end{tabular} \end{center} \caption{Accuracy for UDPOS and XNLI after pruning according to importance metrics. For all metrics, we remove the Bottom-$K$ heads ($K=\lvert\{H_i \mid \varphi_i < 0\}\rvert$) according to that metric. $^+$ and $^-$ indicate significant ($P<0.05$) improvement and harm by a pairwise bootstrap test. Model parameters remain fixed for all methods.}\label{table1} \end{table*} It is worth noting that the outlier, Swahili, is the language with the fewest examples in the data used in the pretraining of XLM-R. Whether the large variation between Swahili and all other languages is induced by linguistic features or by the training dynamics of low-resource languages within multilingual models is unclear. We leave this to be explored further in future work. \subsection{Targeted Pruning} To understand the practical applicability of the resulting Shapley Values, we evaluate models before and after pruning all attention heads with negative Shapley Values as described in Section \ref{pruningMethod}. Each resulting language-specific model can be represented with only the 144 mask parameters which indicate whether each attention head is removed or kept.
Therefore, this pruning can be seen alternatively as a parameter-efficient learning method, using $5\times10^{-7}\%$ of the parameters it would require to finetune the model for each language. \paragraph{XNLI} \noindent In Table \ref{table1}, we report the accuracy of models after targeted pruning across all languages for both XNLI and UDPOS. For XNLI, we see that targeted pruning improves performance by an average of +1.59 across all 15 languages, with the maximum improvement being in Chinese (+3.78) and the minimum improvement in Swahili (+0.37). While it is expected that languages more closely related to English would benefit less from pruning, even closely related languages such as French (+1.97) and German (+1.53) are improved significantly. \paragraph{UDPOS} \noindent Improvements in UDPOS vary to a higher degree. Only 6 out of 13 languages improve after pruning; the rest are left unchanged because they have no negative Shapley Values. The largest improvement is again in Chinese (+12.4) and the smallest in French (+1.3). In the case of Chinese, this is a 24.7\% improvement purely from removing attention heads. Across the languages which were pruned, the average improvement is 3.4 -- reducing the cross-lingual gap~\citep{xtreme} by 0.7. \paragraph{Comparison to Baselines} \noindent Random pruning is ineffectual or harms performance in both tasks, indicating that pruning alone is not the source of our improvement. Pruning according to the gradient-based metric proposed by \citet{michel-16} maintains rather than improves performance. This supports our hypothesis that methods which use the magnitude of gradients largely identify non-impactful heads as opposed to harmful heads.
\begin{table*}[!ht] \begin{center} \small \setlength\tabcolsep{1.5pt} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c|ccccccccccccccc|} \hline Pruning Strategy & EN & AR & BG & DE & EL & ES & FR & HI & RU & SW & TH & TR & UR & VI & ZH \\ \hline No Pruning & 84.1 & 70.6 & 76.7 & 76.8 & 75.4 & 79.8 & 77.7 & 70.0 & 74.7 & \textbf{63.4} & 70.6 & 71.9 & 65.9 & 73.3 & 73.5 \\ Random & 81.7$^-$ & 67.1$^-$ & 72.3$^-$ & 72.9$^-$ & 71.1$^-$ & 75.1$^-$ & 73.5$^-$ & 65.7$^-$ & 71$^-$ & 60.7$^-$ & 67$^-$ & 68.3$^-$ & 61$^-$ & 69.7$^-$ & 70.7$^-$ \\ \citet{michel-16} & 84.3 & 70.3 & 76.7 & 77.1 & 75.9 & 80.1 & 77.9 & 70.1 & 75.1 & 62.9 & 71.6 & 72.5 & 66.1 & 74.7$^+$ & 74.5$^+$ \\ \hline Shapley Value ($\varphi_i$) & \textbf{85.1$^+$} & \textbf{72.0$^+$} & \textbf{77.8$^+$} & \textbf{79.4$^+$} & \textbf{76.3} & \textbf{80.6} & \textbf{79.7$^+$} & \textbf{71.5$^+$} & \textbf{76.5$^+$} & 63.3 & \textbf{73.1$^+$} & \textbf{73.1$^+$} & \textbf{68.4$^+$} & \textbf{75.2$^+$} & \textbf{76.3$^+$} \\ \hline \end{tabular} \end{center} \caption{Accuracy for XNLI after pruning using importance metrics from English. For all metrics, we remove the Bottom-$K$ heads ($K=\lvert\{H_i \mid \varphi_i < 0\}\rvert$) according to that metric. $^+$ and $^-$ indicate significant ($P<0.05$) improvement and harm by a pairwise bootstrap test.}\label{table2} \end{table*} \subsection{Zero-Shot Pruning}\label{zero-shot} Given the high rank correlation between many of the languages, we evaluate transferability by using the Shapley Values for English to prune the model for all languages. We report results in Table \ref{table2}. \paragraph{XNLI} On XNLI, surprisingly, this transferred pruning across languages has similar benefits to our targeted pruning results despite only being learned for English. Two languages (Urdu and German) achieve better results in the zero-shot pruning than they did in the targeted pruning, five achieve worse results, and the remaining eight are equivalent.
It is likely that the strength of zero-shot transfer is largely due to the removal of the fifth head of layer six, which is one of the top 2 most negative heads for all languages barring Swahili. Interestingly, the Attention Head Shapley Values for Swahili also have the lowest rank correlation with English of any language. \paragraph{UDPOS} However, UDPOS highlights the major shortcoming of zero-shot pruning: all attention heads receive a positive Shapley Value for English on UDPOS. This means that no zero-shot pruning is performed, even though targeted pruning yields benefits for other languages, as shown in Table \ref{table1}. \subsection{Iterative Pruning of Attention Heads} Finally, we evaluate the effectiveness of Shapley Values as a ranking methodology for the iterative pruning evaluation performed by \citet{michel-16}. Iterative pruning evaluates how well each importance ranking captures the combinatorial effects of removing attention heads at different compute budgets. We compare random pruning, the gradient-based approach from \citet{michel-16}, Shapley Values computed through plain Monte Carlo simulation, and Shapley Values using Truncation and Multi-Armed Bandit optimization (TMAB). We plot results in Figure \ref{configurable}. Averaged across all levels of sparsity, our method outperforms the Random baseline (+5.8), Monte Carlo Shapley Values (+1.6), and the Gradient baseline (+0.6). Our method is the most effective at identifying strongly harmful heads in the early stages of pruning, with performance improving compared to the unpruned model for the first 6 heads removed. Additionally, our method is superior at pruning the model to approximately half of its original size, achieving the largest performance gap at 44\% of model capacity, outperforming the Gradient baseline, Monte Carlo Shapley Values, and the Random baseline by +12.2, +15.1, and +20.9 respectively.
It is worth noting that the gradient baseline outperforms our method at very high sparsity when more than 80\% of heads are pruned. However, at this sparsity neither method results in a model which performs well above chance. \label{configurablePruning} \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{Configurable_Pruning.pdf} \caption{Evolution of XNLI Accuracy as Heads are removed according to different pruning strategies.}\label{configurable} \end{center} \end{figure} \begin{figure*}[!t] \begin{center} \includegraphics[width=\paperwidth-6cm]{Language_Agnostic.pdf} \vspace{-5pt} \caption{Attention of Layer 4, Head 8 of our XNLI model which is identified as language-agnostic. For clarity, we connect the left token to the token on the right which receives the largest attention weight.}\label{languageagnostic} \end{center} \end{figure*} \begin{figure*}[!t] \begin{center} \includegraphics[width=\paperwidth-6cm]{Language_Specific.pdf} \vspace{-5pt} \caption{Attention of Layer 6, Head 5 of our XNLI model which is identified as language-specific.}\label{language-specific} \end{center} \end{figure*} \section{Qualitative Attention Analysis}\label{analysis} In order to provide intuition into the function of attention heads, prior work has turned to attention visualization as the basis for qualitative analysis of the inner workings of transformer models. \citet{clark-etal-2019-bert} and \citet{hoover-etal-2020-exbert} both find human interpretable patterns within attention heads. We visualize the attention patterns of outlier attention heads using BertViz~\citep{vig-2019-multiscale} from our model to give a qualitative understanding of the attention head patterns associated with language-agnostic and language-specific heads. \subsection{Language-Agnostic Heads} To identify language-agnostic heads, we take the top 20 attention heads for each language and the intersection of these sets. 
In Figure \ref{languageagnostic}, we visualize the attention pattern of the higher ranked of the two heads which meet this criterion, though qualitatively both seem to have the same function. This head exhibits the same function across all languages: matching words from the premise to near synonyms in the hypothesis and vice versa. This pattern is clearly applicable to NLI, which requires finding commonalities and contradictions across the premise and hypothesis. On the other hand, it does not require any knowledge of language-specific syntax or morphology, since token semantics combined with the separator tokens is sufficient to connect synonyms across sentences. \subsection{Language-Specific Heads} As highlighted in Section \ref{affinity}, the fifth head of layer six has a positive Shapley Value only for Swahili. In Figure \ref{language-specific}, we see that this head exhibits unique behavior for Swahili, connecting the suffixes of ``\emph{Mimi Huishi}'' and ``\emph{Ninaishi}'', meaning ``\emph{I live}'' in the Habitual and Present tense respectively. Beyond a single example, the attention pattern of this head varies quantitatively for Swahili. Following the hypothesis of \citet{clark-etal-2019-bert} that attention to separator tokens indicates an inapplicable learned pattern, we look at the percentage of sentences where all tokens attend primarily to separators. This criterion holds in 56\% of Swahili XNLI inputs, but only 41\% of non-Swahili inputs on average ($\sigma = 4.3\%$). Given the small negative performance impact that removing negative English heads has on Swahili in Section \ref{zero-shot}, we hypothesize that this head captures an infrequent pattern in Swahili, but more commonly introduces noise to other languages. However, systematic analysis of structural language affinity is a promising area for further work.
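The separator-attention criterion above can be checked mechanically from a head's attention matrices. A sketch (the matrices and separator positions below are toy inputs, not model outputs):

```python
def all_attend_to_separators(attn, sep_positions):
    # attn: one head's [tokens x tokens] attention weights for one input.
    # True if every query token places its largest weight on a separator.
    for row in attn:
        best = max(range(len(row)), key=lambda j: row[j])
        if best not in sep_positions:
            return False
    return True

def separator_rate(batch, sep_positions_per_input):
    # Fraction of inputs where the criterion holds for this head.
    hits = sum(all_attend_to_separators(a, s)
               for a, s in zip(batch, sep_positions_per_input))
    return hits / len(batch)

batch = [
    [[0.1, 0.2, 0.7], [0.2, 0.1, 0.7], [0.1, 0.1, 0.8]],  # all -> separator
    [[0.6, 0.2, 0.2], [0.2, 0.1, 0.7], [0.1, 0.1, 0.8]],  # token 0 -> itself
]
seps = [{2}, {2}]
rate = separator_rate(batch, seps)
```

Comparing this rate between Swahili and non-Swahili inputs for a single head is exactly the kind of aggregate reported above.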
\section{Conclusions \& Future Work} In this work, we developed a simple yet effective approach to identify language-specific structural components of multilingual transformer language models by leveraging Shapley Values. We demonstrated that the resulting values do exhibit language affinity, varying across languages. We then applied these Attention Head Shapley Values to improve cross-lingual performance through pruning for both sequence classification and structured prediction. Finally, we performed attention visualization and provided insights on language-agnostic and language-specific attention heads. Future work should attempt to understand the relationships between linguistic features, training data volume, and the language-specificity of attention heads more systematically. Additionally, the benefits of removing heads motivate work on reducing the cross-lingual interference introduced by language-specific components during pre-training, such as pruning during pretraining or utilizing sparsely activated networks. \section{Limitations} Even with our computational optimizations, using Shapley Values as an importance metric carries a significant computational cost compared to gradient-based methods: gradient-based methods take approximately $3.33\times10^{14}$ FLOPs, while our optimized Shapley Value computation takes approximately $3.27\times10^{16}$ FLOPs. While the computation is parallelizable, it took several days on a single GPU to compute accurate estimates even for small validation sets. The computational expense is reasonable for understanding the behavior of base models more deeply, but limits the use of this method as a rapid iteration tool. Additionally, we rely on analysis of attention patterns to help ground our findings. However, there is debate as to whether analysis of attention patterns is a sound analytical tool~\citep{attention-is-not, attention-is-not-not}.
\section{Acknowledgements} We are thankful to Chris Hidey, Yanzhe Zhang, Hongxin Zhang, Caleb Ziems and our anonymous reviewers for their feedback. This work is supported in part by Cisco, an Amazon Faculty Research Award and NSF grant IIS-2144562. \bibliographystyle{acl_natbib}
\section{Introduction} The transition from a liquid to an amorphous solid that sometimes occurs upon cooling remains one of the largely unresolved problems of statistical physics~\cite{goetze04,debenedetti01}. At the experimental level, the so-called glass transition is generally associated with a sharp increase in the characteristic relaxation times of the system, and a concomitant departure of laboratory measurements from equilibrium. At the theoretical level, it has been proposed that the transition from a liquid to a glassy state is triggered by an underlying thermodynamic (equilibrium) transition~\cite{mezard99}; in that view, an ``ideal'' glass transition is believed to occur at the so-called Kauzmann temperature, $T_K$. At $T_K$, it is proposed that only one minimum-energy basin of attraction is accessible to the system. One of the first arguments of this type is due to Gibbs and diMarzio~\cite{gibbs58}, but more recent studies using replica methods have yielded evidence in support of such a transition in Lennard-Jones glass formers~\cite{mezard99,coluzzi00a,grigera01}. These observations have been called into question by experimental data and recent results of simulations of polydisperse hard-core disks, which have failed to detect any evidence of a thermodynamic transition up to extremely high packing fractions~\cite{santen00}. One of the questions that arises is therefore whether the discrepancies between the reported simulated behavior of hard-disk and soft-sphere systems are due to fundamental differences in the models, or whether they are a consequence of inappropriate sampling at low temperatures and high densities.
Alternative theoretical considerations have attempted to establish a connection between glass transition phenomena and the rapid increase in relaxation times that arises in the vicinity of a theoretical critical temperature (the so-called ``mode-coupling'' temperature, $T_{MCT}$), thereby giving rise to a ``kinetic'' or ``dynamic'' transition~\cite{goetze92}. In recent years, both viewpoints have received some support from molecular simulations. Many of these simulations have been conducted in the context of models introduced by Stillinger and Weber and by Kob and Andersen~\cite{kob95a}; such models have been employed in a number of studies that have helped shape our current views about the glass transition~\cite{coluzzi00a,sastry98,sciortino99,donati99,coluzzi00b,yamamoto00}. In its simplest (``idealized'') version, first analyzed in the ``schematic'' approach by Bengtzelius et al. \cite{bgs} and independently by Leutheusser \cite{leuth84}, the MCT predicts a transition from a high temperature liquid (``ergodic'') state to a low temperature arrested (``nonergodic'') state at a critical temperature $T_c$. Including transverse currents as additional hydrodynamic variables, the full MCT no longer shows a sharp transition at $T_c$; instead, all structural correlations decay in a final $\alpha$-process \cite{gotzesjo92}. Similar effects are expected from the inclusion of thermally activated matter transport, that is, diffusion in the arrested state \cite{das,sjogren90}. In the full MCT, the remainders of the transition and the value of $T_c$ have to be evaluated, e.g., from the approach of the undercooled melt towards the idealized arrested state, either by analyzing the time and temperature dependence in the $\beta$-regime of the structural fluctuation dynamics \cite{gleimkob,meyer,cum99} or by evaluating the temperature dependence of the so-called ${\bf{g}}_m$-parameter \cite{tei96l,tei96e}.
There are further possibilities to estimate $T_c$, e.g., from the temperature dependence of the diffusion coefficients or the relaxation time of the final $\alpha$-decay in the melt, as these quantities for $T>T_c$ display a critical behaviour $|T-T_c|^{\pm \gamma}$. However, only crude estimates of $T_c$ can be obtained from these quantities, since near $T_c$ the critical behaviour is masked by the effects of transverse currents and thermally activated matter transport, as mentioned above. On the other hand, as emphasized and applied in \cite{barrat90,mfuchs,kobnauroth}, the value of $T_c$ predicted by the idealized MCT can be calculated once the partial structure factors of the system and their temperature dependence are sufficiently well known. Besides temperature and particle concentration, the partial structure factors are the only significant quantities which enter the equations for the so-called nonergodicity parameters of the system. The latter vanish identically for temperatures above $T_c$, and their calculation thus allows a rather precise determination of the critical temperature predicted by the idealized theory. At this stage it is tempting to consider how well the estimates of $T_c$ from different approaches fit together and how the $T_c$ estimate from the nonergodicity parameters of the idealized MCT compares to the values from the full MCT. Regarding this, we here investigate a molecular dynamics (MD) simulation model adapted to the glass-forming Ni$_{0.8}$Zr$_{0.2}$ transition metal system. The Ni$_x$Zr$_{1-x}$-system is well studied by experiments \cite{kuschke,altounian} and by MD-simulations \cite{bert1,tei92,teidimat,tei97,teib99,masu1,masu2,masu3,masu4}, as it is a rather interesting system whose components are important constituents of a number of multi-component ``massive'' metallic glasses.
In the present contribution we consider, in particular, the $x=0.8$ composition and concentrate on the determination of $T_c$ from evaluating and analyzing the nonergodicity parameter, the ${\bf{g}}_m(T)$-parameter in the ergodic regime, and the diffusion coefficients. Our paper is organized as follows: In Section II, we present the model and give some details of the computations. Section III gives a brief discussion of some aspects of the mode coupling theory as used here. Results of our MD-simulations and their analysis are then presented and discussed in Section IV. \section{SIMULATIONS} The present simulations are carried out as state-of-the-art isothermal-isobaric ($N,T,p$) calculations. The Newtonian equations of $N=648$ atoms (518 Ni and 130 Zr) are numerically integrated by a fifth order predictor-corrector algorithm with time step $\Delta t = 2.5\times10^{-15}$~s in a cubic volume with periodic boundary conditions and variable box length $L$. With regard to the electron theoretical description of the interatomic potentials in transition metal alloys by Hausleitner and Hafner \cite{haushafner}, we model the interatomic couplings as in \cite{tei92} by a volume dependent electron-gas term $E_{vol}(V)$ and pair potentials $\phi(r)$ adapted to the equilibrium distance, depth, width, and zero of the Hausleitner-Hafner potentials \cite{haushafner} for Ni$_{0.8}$Zr$_{0.2}$ \cite{teidimat}. For this model, simulations were started by heating a starting configuration up to 2000~K, which leads to a homogeneous liquid state. The system then is cooled continuously to various annealing temperatures with cooling rate $-\partial_t T = 1.5\times10^{12}$~K/s. Afterwards the obtained configurations at the various annealing temperatures (here 1500--600~K) are relaxed by carrying out additional isothermal annealing runs. Finally the time evolution of these relaxed configurations is modelled and analyzed. More details of the simulations are given in \cite{teidimat}.
\section{THEORY} \subsection{Nonergodicity parameters} In this section we provide some basic formulae that permit calculation of $T_c$ and the nonergodicity parameters $f_{ij}(q)$ for our system. A more detailed presentation may be found in Refs.~\cite{barrat90,mfuchs,kobnauroth,gotze85,bosse87}. The central objects of the MCT are the partial intermediate scattering functions, which are defined for a binary system by \cite{bernu} \begin{eqnarray} F_{ij}(q,t) &=&\frac{1}{\protect\sqrt{N_{i}N_{j}}}\left\langle \rho ^{i}(q,t)\rho ^{j}(-q,0)\right\rangle \nonumber \\ &=&\frac{1}{\protect\sqrt{N_{i}N_{j}}}\sum\limits_{\alpha =1}^{N_{i}}\sum\limits_{\beta =1}^{N_{j}} \nonumber \\ &&\times \left\langle \exp (i{\bf{q}}\cdot [{\bf{r}}_{\alpha }^{i}(t)-{\bf{r}}_{\beta }^{j}(0)])\right\rangle \quad, \label{T.1} \end{eqnarray} where \begin{equation} \rho_{i}({\bf q})=\sum\limits_{\alpha=1}^{N_{i}}e^{i{\bf q}\cdot {\bf r}_{\alpha}^{i}},\quad i =1,2 \label{T.1a} \end{equation} is a Fourier component of the microscopic density of species $i$. The diagonal terms $\alpha=\beta$ are denoted as the incoherent intermediate scattering function \begin{equation} F_{i}^{s}(q,t)=\frac{1}{N_{i}}\sum\limits_{\alpha =1}^{N_{i}}\left\langle \exp (i{\bf{q}}\cdot [{\bf{r}}_{\alpha }^{i}(t)-{\bf{r}}_{\alpha }^{i}(0)])\right\rangle \quad . \label{T.2} \end{equation} The normalized partial and incoherent intermediate scattering functions are given by \begin{eqnarray} \Phi_{ij}(q,t)&=& F_{ij}(q,t)/S_{ij}(q) \quad ,\\ \Phi^s_{i}(q,t)&=& F_{i}^{s}(q,t) \quad , \end{eqnarray} where the $S_{ij}(q)= F_{ij}(q,t=0)$ are the partial static structure factors.
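For a single species and a single wavevector, Eq.~(\ref{T.2}) can be evaluated directly from stored trajectories. A minimal sketch (not the analysis code used for the simulations described here; a production evaluation would also average over $q$ directions and time origins):

```python
import cmath

def incoherent_isf(traj, q):
    # Self intermediate scattering function F^s(q,t) for one species:
    # traj[t][alpha] is the 3d position of atom alpha at time frame t.
    # Returns the real part of the average single-particle phase factor
    # exp(i q . [r_alpha(t) - r_alpha(0)]) for each frame.
    n_atoms = len(traj[0])
    first = traj[0]
    f = []
    for frame in traj:
        total = 0j
        for r, r0 in zip(frame, first):
            phase = sum(qc * (rc - r0c) for qc, rc, r0c in zip(q, r, r0))
            total += cmath.exp(1j * phase)
        f.append((total / n_atoms).real)
    return f

# Two atoms, both displaced by 0.5 along x between the two frames.
traj = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.5, 0.0, 0.0), (1.5, 0.0, 0.0)],
]
f = incoherent_isf(traj, q=(1.0, 0.0, 0.0))
```

By construction $F^s(q,0)=1$, and a uniform displacement $\Delta$ along $q$ reduces it to $\cos(q\Delta)$; the decay of this function at long times is what the nonergodicity parameters of the following section characterize.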
The basic equations of the MCT are the set of nonlinear matrix integrodifferential equations \begin{equation} \ddot{{\bf F}}(q,t)+{\mbox{\boldmath $\Omega $}}^2(q){\bf F}(q,t)+ \int_0^td\tau {\bf M}(q,t-\tau) \dot{{\bf F}}(q,\tau) = 0 \quad , \label{T.5} \end{equation} where ${\bf F}$ is the $2\times 2$ matrix consisting of the partial intermediate scattering functions $F_{ij}(q,t)$, and the frequency matrix ${\mbox{\boldmath $\Omega $}}^2$ is given by \begin{equation} \left[{\mbox{\boldmath $\Omega $}}^2(q)\right]_{ij}=q^2k_B T (x_i/m_i)\sum_{k}\delta_{ik} \left[{\bf S}^{-1}(q)\right]_{kj}\quad. \label{T.6} \end{equation} ${\bf S}(q)$ denotes the $2\times 2$ matrix of the partial structure factors $S_{ij}(q)$, $x_i=N_i/N$, and $m_i$ is the atomic mass of species $i$. The MCT for the idealized glass transition predicts \cite{gotzesjo92} that the memory kernel ${\bf M}$ can be expressed at long times by \begin{eqnarray} M_{ij}({\bf q},t)&=&\frac{k_B T}{2\rho m_i x_j}\int\frac{d {\bf k}}{(2\pi)^3} \sum_{kl}\sum_{k'l'} \nonumber \\ && \times V_{ikl}({\bf q},{\bf k}) V_{jk'l'}({\bf q},{\bf q-k}) \nonumber \\ && \times F_{kk'}({\bf k},t) F_{ll'}({\bf q-k},t)\quad , \label{T.7} \end{eqnarray} where $\rho=N/V$ is the particle density and the vertex $V_{ikl}({\bf q},{\bf k})$ is given by \begin{equation} V_{ikl}({\bf q},{\bf k})=\frac{{\bf q}\cdot {\bf k}}{q}\delta_{il} c_{ik}({\bf k})+ \frac{{\bf q}\cdot ({\bf q}-{\bf k})}{q} \delta_{ik} c_{il} ({\bf q}-{\bf k}) \label{T.8} \end{equation} and the matrix of the direct correlation function is defined by \begin{equation} c_{ij}({\bf q})=\frac{\delta_{ij}}{x_i}- \left[{\bf S}^{-1}({\bf q})\right]_{ij} \quad .
\label{T.9} \end{equation} The equation of motion for $F^s_i(q,t)$ has a form similar to Eq.~(\ref{T.5}), but the memory function for the incoherent intermediate scattering function is given by \begin{eqnarray} M_{i}^{s}({\bf q},t) & = & \int \frac{d{\bf k}}{(2\pi)^3} \frac{1}{\rho} \left(\frac{{\bf q}\cdot {\bf k}}{q}\right) (cF)_i ({\bf k},t) \nonumber \\ && \times F_{i}^{s}({\bf q}-{\bf k},t) , \label{T.10} \end{eqnarray} \begin{eqnarray} (cF)_i(k,t)&=&(c_{ii}(q))^2 F_{ii}(q,t)+2c_{ii}(q)c_{ij}(q)F_{ij}(q,t) \nonumber \\ && +(c_{ij}(q))^2F_{jj}(q,t)\quad j\neq i \quad . \label{T.11} \end{eqnarray} In order to characterize the long time behaviour of the intermediate scattering functions, the nonergodicity parameters ${\bf f}({\bf q})$ are introduced as \begin{equation} f_{ij}({\bf q})=\lim_{t\to \infty}\Phi_{ij}({\bf q},t) \quad . \label{T.12} \end{equation} These parameters are the solutions of Eqs.~(\ref{T.5})-(\ref{T.9}) at long times. The meaning of these parameters is the following: if $f_{ij}({\bf q})=0$, the system is in a liquid state with density fluctuation correlations decaying at long times. If $f_{ij}({\bf q})>0$, the system is in an arrested, nonergodic state, where density fluctuation correlations are stable for all times. In order to compute $f_{ij}({\bf q})$, one can use the following iterative procedure~\cite{kobnauroth}: \begin{eqnarray} {\bf f}^{(l+1)}(q) &=& \frac{ {\bf S}(q) \cdot {\bf N}[{\bf f}^{(l)},{\bf f}^{(l)}](q) \cdot {\bf S} (q)}{{\bf Z}} \nonumber \\ && + \frac{q^{-2}|{\bf S}(q)| |{\bf N}[{\bf f}^{(l)},{\bf f}^{(l)}](q)| {\bf S}(q)}{\bf Z} \quad , \label{T.13} \end{eqnarray} \begin{eqnarray} {\bf Z}&=& q^2+Tr({\bf S}(q) \cdot {\bf N}[{\bf f}^{(l)},{\bf f}^{(l)}](q)) \nonumber \\ && + q^{-2}| {\bf S}(q)| | {\bf N}[{\bf f}^{(l)},{\bf f}^{(l)}](q)| \nonumber \quad, \end{eqnarray} where the matrix ${\bf N}(q)$ is given by \begin{equation} N_{ij}(q)=\frac{m_i}{x_i k_B T} M_{ij}(q) \quad.
\label{T.14} \end{equation} This iterative procedure has two types of solutions, nontrivial ones with ${\bf f}(q)>0$ and trivial solutions ${\bf f}(q)=0$. The incoherent nonergodicity parameter $f_i^{s}(q)$ can be evaluated by the following iterative procedure: \begin{equation} q^2 \frac{f_i^{s,l+1}(q)}{1-f_i^{s,l+1}(q)} = M_i^{s}[{\bf f}, f_i^{s,l}](q) \quad . \label{T.15} \end{equation} As indicated by Eq.(\ref{T.15}), computation of the incoherent nonergodicity parameter $f_i^s(q)$ demands that the coherent nonergodicity parameters be determined in advance. \subsection{$\bf{g}_m$--parameter} Beyond the details of the MCT, equations of motion like (\ref{T.5}) can be derived for the correlation functions under rather general assumptions within the Lanczos recursion scheme \cite{lanczos} or, equivalently, the Mori-Zwanzig formalism \cite{morizwanzig}. The approach demands that the time dependence of fluctuations A, B, ... be governed by a time evolution operator like the Liouvillian and that for two fluctuating quantities a scalar product (B, A) with the meaning of a correlation function can be defined. In the case of a tagged particle, this leads for $\Phi^s_i(q,t)$ to the exact equation \begin{equation} \ddot{\Phi}^s_i(q,t)/\Omega_{0}^{2}+\Phi^s_i(q,t) +\int_{0}^{t}d\tau M^0_i(q,t-\tau )\dot{\Phi}^s_i(q,\tau )=0 \label{G.1} \end{equation} \noindent with memory kernel $M^0_i(q,t)$ in terms of a continued fraction. Within $M^0_i(q,t)$ are hidden all the details of the time evolution of $\Phi^s_i(q,t)$. As proposed and applied in \cite{tei96l,tei96e}, instead of calculating $M^0_i(q,t)$ from the time evolution operator as a continued fraction, it can be evaluated in closed form once $\Phi^s_i(q,t)$ is known, e.g., from experiments or MD-simulations.
This can be demonstrated by introduction of \begin{eqnarray} \Phi _{c}(\omega )\pm i\Phi _{s}(\omega )&:=& \lim_{\varepsilon \rightarrow 0}{\mathcal{L}}{\left\{ \Phi\right\}} (\varepsilon\mp i\omega) \quad, \label{G.2} \end{eqnarray} \noindent with \begin{equation} {\mathcal{L}}{\left\{ \Phi\right\}}(z)= \int_{0}^{\infty}dt e^{-zt}\Phi(t) \label{G.3} \end{equation} \noindent the Laplace transform of $\Phi(t)$, and \begin{eqnarray} M^0_i(\omega)_c \pm i M^0_i(\omega)_s &:=& \lim_{\varepsilon \rightarrow 0}{\mathcal{L}}{\left\{ M^0_i\right\}} (\varepsilon\mp i\omega) \quad. \label{G.4} \end{eqnarray} Eq.(\ref{G.1}) then leads to \begin{equation} M^0_i(\omega)_c=\frac{\Phi_{c}(\omega)}{\left[ 1-\omega \Phi_{s}(\omega )\right] ^{2}+\left[ \omega \Phi _{c}(\omega )\right] ^{2}} \quad. \label{G.5} \end{equation} On the time axis, $M^0_i(t)$ is given by \begin{equation} M^0_i(t)=\frac{2}{\pi }\int_{0}^{\infty }d\omega M^0_i(\omega)_c \cos (\omega t) \quad. \label{G.6} \end{equation} Adopting some arguments from the schematic MCT, Eq.(\ref{G.1}) allows asymptotically finite correlations $\Phi^s_i(q,t\rightarrow\infty)>0$, that means an arrested state, if $M^0_i(q,t\rightarrow\infty)$ remains finite; in that case the relationship \begin{equation} M^0_i(q,t\rightarrow\infty) (\Phi^s_i(q,t\rightarrow\infty)^{-1}- 1)=1 \quad \label{G.7} \end{equation} holds. In order to characterize the undercooled melt and its transition into the glassy state, we introduced in \cite{tei96l} the function \begin{equation} {\bf{G}}(\Phi,M^0):= M^0(t)(1/\Phi(t)- 1) \quad. \label{G.8} \end{equation} According to (\ref{G.7}), ${\bf{G}}(\Phi,M^0)$ has the property that \begin{equation} {\bf {G}}(\Phi,M^0)\mid_{t\rightarrow\infty}= 1 \label{G.9} \end{equation} \noindent in the arrested, nonergodic state.
On the other hand, if \begin{equation} {\bf {g}}_m:=Max\left \{{\bf {G}}(\Phi,M^0)\mid 0<t<\infty \right\} < 1\quad, \label{G.10} \end{equation} there is no arrested solution and the correlations $\Phi^s_i(q,t)$ decay to zero for $t\rightarrow\infty$; that means the system is in the liquid state. From this we proposed \cite{tei96l} to use the value of ${\bf{g}}_m$ as a relative measure of how closely the system has approached the arrested state, and to use the temperature dependence of ${\bf{g}}_m(T)$ in the liquid state as an indication of how the system approaches this state. \section{Results and Discussion \label{RD}} \subsection{Partial structure factors and intermediate scattering functions} First we show the results of our simulations concerning the static properties of the system in terms of the partial structure factors $S_{ij}(q)$ and partial pair correlation functions $g_{ij}(r)$. To compute the partial structure factors $S_{ij}(q)$ for a binary system we use the definition \cite{hansen} \begin{eqnarray} S_{ij }(\overrightarrow{q}) &=&x_{i}\delta _{ij }+\rho x_{i}x_{j}\int (g_{ij}(r)-1)e^{-i \overrightarrow{q}\cdot \overrightarrow{r}}d\overrightarrow{r} \label{E.5} \quad, \end{eqnarray} where \begin{equation} g_{ij }(\overrightarrow{r})=\frac{V}{N_{i}N_{j}} \left\langle \sum\limits_{\alpha=1}^{N_{i}}\sum_{\beta=1,\beta\neq \alpha}^ {N_{j}}\delta ({\bf{r}}-\left| {\bf{r}}_{\alpha}(t)- {\bf{r}}_{\beta}(t)\right| )\right\rangle \label{E.5a} \end{equation} are the partial pair correlation functions. The MD simulations yield a periodic repetition of the atomic distributions with periodicity length $L$. Truncation of the Fourier integral in Eq.(\ref{E.5}) leads to an oscillatory behavior of the partial structure factors at small $q$. In order to reduce the effects of this truncation, we compute from Eq.(\ref{E.5a}) the partial pair correlation functions for distances $r$ up to $R_c=\tfrac{3}{2}L$.
For numerical evaluation of Eq.(\ref{E.5}), a Gaussian type damping term is included \begin{eqnarray} S_{i j }(q)&=&x_{i }\delta _{i j }+4\pi \rho x_{i }x_{j }\int\limits_{0}^{R_{c}}r^{2}(g_{i j }(r)-1)\frac{\sin (qr)}{qr} \nonumber \\ && \times \exp (-(r/R)^{2})dr \label{E.8GS} \end{eqnarray} with $R=R_{c}/3$. Figs.~\ref{fig1}--\ref{fig2a} show the partial structure factors $S_{ij}(q)$ versus $q$ for all temperatures investigated. The figures indicate that the shape of $S_{ij}(q)$ depends only weakly on temperature and that, in particular, the positions of the first maximum and the first minimum in $S_{ij}(q)$ are more or less temperature independent. \begin{figure}[htbp] \centering \psfig{file=STRNNALLNEW_potrait.eps,width=8.5cm,height=5.50cm} \caption{Partial structure factors, Ni-Ni part, at $T=$ 1500~K, 1400~K, 1300~K, 1200~K, 1100~K, 1000~K, 900~K, and 800~K (from top to bottom); the curves are vertically shifted by 0.05 relative to each other.} \label{fig1} \end{figure} \begin{figure}[htbp] \centering \psfig{file=STRNZALLNEW_portrait.eps,width=8.5cm,height=5.50cm} \caption{Partial structure factors, Ni-Zr part, at $T=$ 1500~K, 1400~K, 1300~K, 1200~K, 1100~K, 1000~K, 900~K, and 800~K (from top to bottom); the curves are vertically shifted by 0.05 relative to each other.} \label{fig1a} \end{figure} \begin{figure}[htbp] \psfig{file=STRZZALLNEW_potrait.eps ,width=8.5cm,height=5.50cm} \caption{Partial structure factors, Zr-Zr part, at $T=$ 1500~K, 1400~K, 1300~K, 1200~K, 1100~K, 1000~K, 900~K, and 800~K (from top to bottom); the curves are vertically shifted by 0.05 relative to each other.} \label{fig2a} \end{figure} To investigate the dynamical properties of the system, we have calculated the incoherent scattering function $F^s_{i}(q,t)$ and the coherent scattering function $F_{ij}(q,t)$ as defined in equations (\ref{T.2}) and (\ref{T.1}).
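The damped radial transform of Eq.~(\ref{E.8GS}) is straightforward to implement; the sketch below (our illustration: rectangle-rule quadrature, toy parameter values, and a single-component case with $x_i=1$ assumed) reproduces the ideal-gas limit $S(q)=1$ for $g(r)=1$ and shows how a first-shell peak feeds into $S(q)$:

```python
import numpy as np

def S_q(q, r, g, rho, R_c):
    """Damped transform of Eq. (E.8GS) for a single component (x_i = 1):
    S(q) = 1 + 4 pi rho * int_0^{R_c} r^2 (g(r)-1) sin(qr)/(qr)
                          * exp(-(r/R)^2) dr,   with R = R_c/3."""
    R = R_c / 3.0
    kernel = np.sinc(q * r / np.pi)          # sin(qr)/(qr), finite at r = 0
    integrand = r**2 * (g - 1.0) * kernel * np.exp(-(r / R)**2)
    return 1.0 + 4.0 * np.pi * rho * np.sum(integrand) * (r[1] - r[0])

r = np.linspace(0.0, 1.6, 4001)              # nm; toy value for R_c
g_ideal = np.ones_like(r)                    # ideal gas: g(r) = 1
g_peak = 1.0 + 0.5 * np.exp(-((r - 0.25) / 0.05)**2)  # toy first shell

print(S_q(5.0, r, g_ideal, rho=60.0, R_c=1.6))   # ideal gas: exactly 1
print(S_q(5.0, r, g_peak, rho=60.0, R_c=1.6))    # > 1: shell adds weight
```

In the actual analysis the input is, of course, the partial $g_{ij}(r)$ of Eq.~(\ref{E.5a}), and the prefactors carry the concentrations $x_i x_j$.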
\begin{figure}[htbp] \psfig{file=PHIALLNI_potrait.eps,width=8.5cm,height=5.50cm} \caption{Incoherent intermediate scattering function $\Phi^s_i(q,t)$, Ni-part, for $q=24.4$~nm$^{-1}$ at $T=$ 1500~K, 1400~K, 1300~K, 1200~K, 1100~K, 1000~K, 950~K, 900~K, and 800~K (from left to right).} \label{fig2b} \end{figure} \begin{figure}[htbp] \psfig{file=PHIALLZRNEWWITHTEST_portrait.eps,width=8.5cm,height=5.50cm} \caption{The same as fig.\ref{fig2b} but for the Zr-part.} \label{fig3a} \end{figure} Fig.\ref{fig2b} and fig.\ref{fig3a} present the normalized incoherent intermediate scattering functions $\Phi^s_i(q,t)$ of both species evaluated from our MD data for the wave vector $q_n$=$2\pi n/L$ with $n = 9$, that means $q_9=24.4$~nm$^{-1}$. From the figures we see that $\Phi^s_i(q,t)$ of both species shows at intermediate temperatures a structural relaxation in three successive steps, as predicted by the idealized schematic MCT \cite{hansenyip}. The first step is a fast initial decay on the time scale of the vibrations of atoms ($t<0.2$~ps). This step is characterized by the MCT only globally. The second step is the $\beta $-relaxation regime. In the early $\beta$-regime the correlator should decrease according to $\Phi^s_i(q,t)=f_{csi}(q)+A/t^{a}$ and in the late $\beta$-relaxation regime, which appears only in the melt, according to the von Schweidler law $f_{csi}(q)-Bt^{b}.$ Between them a wide plateau is found near the critical temperature $T_{c}$. In the melt, the $\alpha$-relaxation takes place as the last decay step after the von Schweidler law. It can be described by the Kohlrausch-Williams-Watts (KWW) law $\Phi^s_i(q,t)=A_{0}\exp (-(t/\tau _{\alpha})^{\beta })$, where the relaxation time $\tau _{\alpha}$ near the glass transition shifts drastically to longer times. The inverse power-law decay for the early $\beta$-regime, $\Phi \sim f_{c}+A/t^{a}$, is not seen in our data.
This seems to be due to the fact that in our system the power-law decay is dressed by the atomic vibrations (\cite{tei96l,tei96e} and references therein). According to our MD-results, $\Phi^s_i(q,t)$ decays to zero for longer times at all temperatures investigated. This is in agreement with the full MCT. Including transversal currents as additional hydrodynamic variables, the full MCT \cite{gotzesjo92} comes to the conclusion that all structural correlations decay in the final $\alpha$-process, independent of temperature. Similar effects are expected from the inclusion of thermally activated matter transport, that means diffusion in the arrested state. At $T=$~900--700~K, the $\Phi^s_i(q,t)$ drop rather sharply at large $t$. This reflects aging effects, which take place if a system is in a transient, non-steady state \cite{kobbarrat}. Such a behaviour indicates relaxations of the system on the time scale of the 'measuring time' of the correlations. \subsection{Nonergodicity parameters} The nonergodicity parameters are defined by Eq.(\ref{T.12}) as a non-vanishing asymptotic solution of the MCT Eq.(\ref{T.5}). Fig.~\ref{fig3b} presents the estimated $q$-dependent nonergodicity parameters from the coherent and incoherent scattering functions of Ni and Zr at $T=1005$~K. \begin{figure}[htbp] \psfig{file=ERGODIZITAETNEW_portrait.eps,width=8.5cm,height=6.50cm} \caption{Nonergodicity parameters $f_{cij}$ for the incoherent and coherent intermediate scattering functions as solutions of Eqs.~(\ref{T.13}) and (\ref{T.15}).} \label{fig3b} \end{figure} In order to compute the nonergodicity parameters $f_{ij}(q)$ analytically, we followed for our binary system the self-consistent method as formulated by Nauroth and Kob \cite{kobnauroth} and as sketched in Section III.A. Input data for our iterative determination of $f_{ij}(q) = F_{ij}(q,\infty)$ are the temperature dependent partial structure factors $S_{ij}(q)$ from the previous subsection.
The iteration is started by arbitrarily setting $F_{Ni-Ni}(q,\infty)^{(0)}=0.5 S_{Ni-Ni}(q)$, $F_{Zr-Zr}(q,\infty)^{(0)}=0.5 S_{Zr-Zr}(q)$, $F_{Ni-Zr}(q,\infty)^{(0)}=0$. For $T > 1100$~K we always obtain the trivial solution $f_{ij}(q) = 0$, while at $T = 1000$~K and below we get stable non-vanishing $f_{ij}(q)>0$. The stability of the non-vanishing solutions was tested for more than 3000 iteration steps. From these results we expect that $T_c$ for our system lies between 1000 and 1100~K. To estimate $T_c$ more precisely, we interpolated $S_{ij}(q)$ from our MD data for temperatures between 1000 and 1100~K by use of the algorithm of Press et~al. \cite{press}. We observe that at $T = 1005$~K a non-trivial solution of $f_{ij}(q)$ can be found, but not at $T = 1010$~K and above. This means that the critical temperature $T_c$ for our system is around 1005~K. The non-trivial solutions $f_{ij}(q)$ for this temperature will be denoted as the critical nonergodicity parameters $f_{cij}(q)$. They are included in Fig.~\ref{fig3b}. By use of the critical nonergodicity parameters $f_{cij}(q)$, the computational procedure was run to determine the critical nonergodicity parameters $f^s_{ci}(q)$ for the incoherent scattering functions at $T = 1005$~K. Fig.~\ref{fig3b} also presents our results for the so calculated $f^s_{ci}(q)$. \subsection{${\bf{g}}(\Phi^s_i,M^0_i)$-function and ${\bf{g}}_m$-parameters} Here we present our results for the ${\bf{g}}(\Phi^s_i,M^0_i)$-function \cite{tei96l,tei96e} described in Section III.B. The memory functions $M_i^0(q,t)$ are evaluated from the MD data for $\Phi_i^s(q,t)$ by Fourier transformation along the positive time axis. For completeness, also the $T=700$ and 800~K data are included, where the corresponding $\Phi_i^s(q,t)$ are extrapolated to longer times by use of a KWW approximation.
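The character of the fixed-point iteration of Eqs.~(\ref{T.13})--(\ref{T.15}) — a trivial solution everywhere, with a non-vanishing solution appearing abruptly at a critical coupling — can be illustrated by a one-component schematic caricature (the so-called F$_2$ model with memory kernel $m(f)=v f^2$); this toy model is our illustration and not the matrix equations actually solved here:

```python
def schematic_f(v, n_iter=3000):
    """Iterate f_{l+1} = m(f_l)/(1 + m(f_l)) with m(f) = v f^2: a scalar
    caricature of the MCT fixed-point equations (T.13)-(T.15).
    Starting from f = 1, the iteration converges to the largest
    non-negative fixed point; a non-trivial (arrested) solution
    exists only for couplings v >= v_c = 4."""
    f = 1.0
    for _ in range(n_iter):
        m = v * f * f
        f = m / (1.0 + m)
    return f

print(schematic_f(3.0))   # below v_c: only the trivial solution, f -> 0
print(schematic_f(5.0))   # above v_c: arrested, f -> (5 + sqrt(5))/10
```

In the full calculation the coupling is supplied by the temperature-dependent $S_{ij}(q)$ entering the vertices, so scanning $T$ in exactly this way brackets $T_c$ as the temperature where the non-trivial solution first appears.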
\begin{figure}[htbp] \psfig{file=FTALLNINEW_portrait.eps,width=8.5cm,height=5.50cm} \caption{Ni-part: Time dependence of the dimensionless memory function $M^0_s(q,t)/\Omega^2_{s-i}$ from MD simulations for $q_9=24.4$~nm$^{-1}$ and $T=$~800~K, 900~K, 950~K, 1000~K, 1100~K, 1200~K, 1300~K, 1400~K, and 1500~K (from top to bottom).} \label{fig4a} \end{figure} \begin{figure}[htbp] \psfig{file=FTALLZRNEW_portrait.eps,width=8.5cm,height=5.50cm} \caption{The same as fig.\ref{fig4a} but for the Zr-part.} \label{fig4b} \end{figure} Fig.~\ref{fig4a} and Fig.~\ref{fig4b} show the thus deduced $M_i^0(q,t)$ for $q = 24.4$~nm$^{-1}$. Regarding their qualitative features, the obtained $M_i^0(q,t)$ are in full agreement with the results in \cite{tei96e} for the Ni$_{0.5}$Zr$_{0.5}$ system. A particularly interesting detail is the fact that there exists a minimum in $M_i^0(q,t)$ for both species, Ni and Zr, at all investigated temperatures around a time of 0.1~ps. Below this time, $\Phi_i^s(q,t)$ reflects the vibrational dynamics of the atoms. Above this value, the escape from the local cages takes place in the melt and the $\beta$-regime dynamics are developed. Apparently, the minimum is related to this crossover. \begin{figure}[htbp] \psfig{file=FPHIALLNINEW_portrait.eps,width=8.5cm,height=5.50cm} \caption{Ni-part: Dimensionless memory function $M^0_s(q,t)/\Omega^2_{s-i}$ as a function of $\Phi^s_i(q,t)$ (solid lines) for $q_9=24.4$~nm$^{-1}$ and $T=$~800~K, 900~K, 950~K, 1000~K, 1100~K, 1200~K, 1300~K, 1400~K, and 1500~K (from top to bottom). A polynomial fit of the low-$\Phi$ part of the memory function $M(\Phi)$ at $T=800$~K is included (long-dashed line).} \label{fig5} \end{figure} \begin{figure}[htbp] \psfig{file=FPHIALLZENEW_portrait.eps,width=8.5cm,height=5.50cm} \caption{The same as fig.\ref{fig5} but for the Zr-part.} \label{fig5a} \end{figure} In Fig.~\ref{fig5} and Fig.~\ref{fig5a} we display $M_i^0(q, \Phi_i^s(q,t))$, that means $M_i^0(q,t)$ versus $\Phi_i^s(q,t)$.
In these figures we again find the features already described for Ni$_{0.5}$Zr$_{0.5}$ in \cite{tei96l,tei96e}. According to the plots, there exist ($q$-dependent) limiting values $\Phi_{i0}^s(q)$ so that $M_i^0(q,t)$ for $\Phi_i^s(q,t)< \Phi_{i0}^s(q)$ is close to a universal behavior, while for $\Phi_i^s(q,t)> \Phi_{i0}^s(q)$ marked deviations are seen. $\Phi_{i0}^s(q)$ significantly decreases with increasing temperature. It is tempting to identify $M_i^0(q,t)$ below $\Phi_{i0}^s(q)$ with the polynomial form for $M_i^0(q,t)$ assumed in the schematic version of the MCT \cite{gotzesjo92}. In fig.~\ref{fig5} and fig.~\ref{fig5a}, the polynomial obtained by fitting the 1000~K data below $\Phi_{i0}^s(q)$ is included as a dashed line, extrapolated over the whole $\Phi$-range. By use of the calculated memory functions, we can evaluate ${\bf{g}}(\Phi^s_i,M^0_i)$ of Eq.(\ref{G.8}). In Fig.\ref{fig6} and Fig.~\ref{fig7} this quantity is presented versus the corresponding value of $\Phi_i^s(q,t)$ and denoted as ${\bf{g}}(\Phi^s_i)$. For all the investigated temperatures, ${\bf{g}}(\Phi^s_i)$ has a maximum ${\bf{g}}_m(q,T)$ at an intermediate value of $\Phi$. In the high temperature regime, the values of ${\bf{g}}_m(q,T)$ move with decreasing temperature towards the limiting value 1. This is, in particular, visible in Fig.~\ref{fig8}, where we present ${\bf{g}}_m(q,T)$ as a function of temperature for both species, Ni and Zr, at the wave vector $q_9=24.4$~nm$^{-1}$. At temperatures above 1000~K, the ${\bf{g}}_m$-values increase approximately linearly towards 1 with decreasing temperature. Below 1000~K, they remain closely below the limiting value of 1, a behavior denoted in \cite{tei96l,tei96e} as a balancing on the borderline between the arrested and the non-arrested state, due to thermally induced matter transport by diffusion in the arrested state at the present high temperatures.
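The criterion behind Eqs.~(\ref{G.8})--(\ref{G.10}) is easy to demonstrate numerically; in the sketch below (our illustration with analytic model correlators, not the MD data), an arrested pair $(\Phi, M^0)$ obeying Eq.~(\ref{G.7}) drives ${\bf g}_m$ to 1, while a fully decaying pair stays below 1:

```python
import numpy as np

t = np.linspace(1e-3, 20.0, 4000)

def g_m(phi, mem):
    """g_m = max_t M0(t) * (1/Phi(t) - 1), cf. Eqs. (G.8)-(G.10)."""
    return np.max(mem * (1.0 / phi - 1.0))

# arrested model: Phi -> f = 0.6, M0 -> f/(1-f) = 1.5, so Eq. (G.7) holds
phi_arr = 0.6 + 0.4 * np.exp(-t)
mem_arr = 1.5 + 0.5 * np.exp(-t)

# liquid model: both Phi and M0 decay to zero
phi_liq = np.exp(-t)
mem_liq = 0.5 * np.exp(-t)

print(g_m(phi_arr, mem_arr))   # -> 1 (arrested state)
print(g_m(phi_liq, mem_liq))   # -> 0.5 (liquid, g_m < 1)
```

The distance of ${\bf g}_m$ below 1 thus serves as the relative measure of the approach to arrest used in the analysis above.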
\begin{figure}[htbp] \psfig{file=GTPTALLNI_potrait.eps,width=8.5cm,height=5.50cm} \caption{Ni-part: MD simulation results for the characteristic function ${\bf{g}}(\Phi^s)$ as a function of $\Phi^s_i$ for $q=24.4$~nm$^{-1}$.} \label{fig6} \end{figure} \begin{figure}[htbp] \psfig{file=GTPTZRALLNEW_portrait.eps,width=8.5cm,height=5.50cm} \caption{The same as fig.\ref{fig6} but for the Zr-part.} \label{fig7} \end{figure} \begin{figure}[htbp] \psfig{file=GMAXRICHTIG.eps,width=5.50cm,height=8.5cm,angle=270} \caption{MD simulation results for the temperature dependence of ${\bf{g}}_m(q,T)$ at $q_9=24.4$~nm$^{-1}$ (symbols). Linear fits to the ${\bf{g}}_m(q,T)$ are included as full and dashed lines; a) Zr-part with $T_c=970$~K and b) Ni-part with $T_c=950$~K.} \label{fig8} \end{figure} A linear fit of the ${\bf{g}}_m$-values for Ni above 950~K and for Zr above 1000~K predicts a crossover temperature $T^*_c$ from liquid (${\bf{g}}_m < 1$) to quasi-arrested (${\bf{g}}_m = 1$) behavior around 970~K from the Ni data and around 1020~K from the Zr data. We identify this crossover temperature with the value of $T_c$ as visible in the ergodic, liquid regime and estimate it by the mean value from the Ni- and Zr-subsystems, that means by $T_c = 1000$~K. While in \cite{tei96l,tei96e} for the Ni$_{0.5}$Zr$_{0.5}$ melt a $T_c$-value of 1120~K was estimated from ${\bf{g}}_m(T)$, the value for the present composition is lower by about 120~K. A significant composition dependence of $T_c$ is expected according to the results of MD simulations for the closely related Co$_x$Zr$_{1-x}$ system \cite{teirossler}. Over the whole $x$-range, $T_c$ was found to vary between 1170 and 650~K in Co$_x$Zr$_{1-x}$, with $T_c$($x=0.2$) $\simeq$ 800~K. In view of this, the present data for the Ni$_x$Zr$_{1-x}$ system reflect a rather weak $T_c$ variation.
\subsection{Diffusion coefficients} From the simulated atomic motions in the computer experiments, the diffusion coefficients of the Ni and Zr species can be determined as the slope of the atomic mean square displacements in the asymptotic long-time limit \begin{equation} D_{i}(T)=\lim\limits_{t\rightarrow \infty }\frac{(1/N_{i})\sum\limits_{\alpha =1}^{N_{i}}\left| \mathbf{r}_{\alpha }(t)-\mathbf{r}_{\alpha }(0)\right| ^{2}}{6t} \quad. \end{equation} Fig.~\ref{fig9} shows the thus calculated diffusion coefficients of our Ni$_{0.8}$Zr$_{0.2}$ model for the temperature range between 600 and 2000~K. At temperatures above approximately 1000~K, the diffusion coefficients for both species run parallel to each other in the Arrhenius plot, indicating a fixed ratio $D_{Ni}/D_{Zr}\approx 2.5$ in this temperature regime. At lower temperatures, the Zr atoms have a lower mobility than the Ni atoms, yielding around 800~K a value of about 10 for $D_{Ni}/D_{Zr}$. That means, here the Ni atoms carry out a rather rapid motion within a relatively immobile Zr matrix. \begin{figure}[htbp] \psfig{file=DIFFALLNEW_portrait.eps,width=8.5cm,height=5.50cm} \caption{Diffusion coefficients $D_i$ as a function of $1000/T$. Symbols are MD results for Zr (squares) and Ni (diamonds).} \label{fig9} \end{figure} According to the MCT, above $T_c$ the diffusion coefficients follow a critical power law \begin{equation} D_{i}(T)\sim (T-T_{c})^{\gamma }, \text{ for }T > T_c \label{MA.1} \end{equation} with a non-universal exponent $\gamma$ \cite{kob951}. In order to estimate $T_c$ from this relationship, we fitted the critical power law to the simulated diffusion data at 1000~K and above by a least-squares procedure. According to this fit, the system has a critical temperature of about 850--900~K.
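The fit of Eq.~(\ref{MA.1}) can be set up as a one-dimensional scan over trial $T_c$ values, fitting $\ln D$ linearly in $\ln(T-T_c)$ at each step; the sketch below (our illustration with synthetic data, not the MD diffusion coefficients) recovers a known $T_c$:

```python
import numpy as np

def fit_Tc(T, D, Tc_grid):
    """Least-squares fit of D = A (T - Tc)^gamma: for each trial Tc,
    ln D is linear in ln(T - Tc); keep the Tc with smallest residual."""
    best = (np.inf, None, None)
    for Tc in Tc_grid:
        x = np.log(T - Tc)
        coeff, res, *_ = np.polyfit(x, np.log(D), 1, full=True)
        r = res[0] if len(res) else 0.0
        if r < best[0]:
            best = (r, Tc, coeff[0])   # (residual, Tc, gamma = slope)
    return best[1], best[2]

# synthetic data generated with Tc = 875 K and gamma = 2
T = np.array([1000., 1100., 1200., 1400., 1600., 1800., 2000.])
D = 1e-13 * (T - 875.0) ** 2
Tc_est, gamma_est = fit_Tc(T, D, np.arange(600.0, 999.0, 1.0))
print(Tc_est, gamma_est)
```

With noisy input the residual minimum becomes shallow, which is one way to see why the resulting $T_c$ estimate depends on the temperature window included in the fit.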
Similar results for the temperature dependence of the diffusion coefficients have been found in MD simulations of other metallic glass forming systems, e.g., for Ni$_{0.5}$Zr$_{0.5}$ \cite{teidimat}, for Ni$_{x}$Zr$_{1-x}$ \cite{teirossler}, Cu$_{0.33}$Zr$_{0.67}$ \cite{gaukel}, or Ni$_{0.81}$B$_{0.19}$ \cite{vane}. In all cases, like here, a break is observed in the Arrhenius slope. In the mentioned Zr-systems, this break is related to a change of the atomic dynamics around $T_c$, whereas for the Ni$_{0.81}$B$_{0.19}$ system it is ascribed to $T_G$. Since in \cite{vane} $T_c$ and $T_G$ apparently fall together, there is no serious conflict between the observations. \section{Conclusion} The present contribution reports results from MD simulations of a Ni$_{0.8}$Zr$_{0.2}$ computer model. The model is based on the electron theoretical description of the interatomic potentials for transition metal alloys by Hausleitner and Hafner \cite{haushafner}. No parameters of the model were adapted to the experiments. There is close agreement between the $T_c$ values estimated from the dynamics in the undercooled melt when approaching $T_c$ from the high temperature side. The values are $T_c \approx 950 - 1020$~K from the ${\bf{g}}_m$-parameters, and $T_c \approx 950$~K from the diffusion coefficients. As discussed in \cite{teirossler}, the $T_c$-estimates from the diffusion coefficients seem to depend on the upper limit of the temperature region taken into account in the fit procedure, where an increase in the upper limit increases the estimated $T_c$. Accordingly, there is evidence that the present value of 950~K may underestimate the true $T_c$ by about 10 to 50~K, as it is based on an upper limit of 2000~K only. Taking this into account, the present estimates from the melt seem to lead to a $T_c$ value around 1000~K. The $T_c$ estimate from the nonergodicity parameters describes the approach of the system towards $T_c$ from the low temperature side. It predicts a $T_c$ value of 1005~K.
This value is clearly outside the range of our $T_c$ estimates from the high temperature, ergodic melt. We consider this a significant deviation which, however, is much smaller than the factor of two found in the modelling of a Lennard-Jones system \cite{kobnauroth}. The deviation observed here between the $T_c$ estimates from the ergodic and the so-called nonergodic side reconfirms the finding from the soft-sphere model \cite{mfuchs} of an agreement within some 10\% between the different $T_c$ estimates. \begin{acknowledgments} A.B.M. gratefully acknowledges financial support of the SFB 602 during the post-doctoral program. \end{acknowledgments}
\section{Introduction\label{s:int}} For several years now, there has been a discrepancy in the determination of the proton charge radius by means of the spectroscopy of ordinary and muonic hydrogen (see, e.g., \cite{muh1,codata2014}), commonly known as the proton radius puzzle. There are different contributions to the uncertainty of the determination of the proton radius by those methods. The largest uncertainty originates from the hydrogen spectroscopy, and serious experimental activity in this direction is in progress (see, e.g., \cite{hessel:lp,lkb1s3s,mpq2s4p}). The second largest uncertainty comes from the quantum electrodynamics (QED) theory of the $1s$ Lamb shift in hydrogen \cite{codata2014}. There are a few theoretical problems which require clarification. They relate to two-loop and three-loop radiative corrections. Some higher-order contributions have not been cross-checked, and some have not been studied at all. In particular, the two-loop contributions of order $\alpha^2(Z\alpha)^5m$ \cite{b50se,LbL1} are well established, while at the next order in $Z\alpha$ the contributions from the virtual light-by-light scattering have not been studied properly (see, e.g., a discussion of a previously missed term in \cite{LbL:CS}). Meanwhile, the results for the pure self-energy contribution of order $\alpha^2(Z\alpha)^6m$ \cite{jentscura1s,yerokhin09} are to some extent controversial (see, e.g., \cite{codata2014}). One more challenge is related to the next-to-leading order three-loop contribution (order $\alpha^3(Z\alpha)^5m$); the existing estimate \cite{codata2014} does not rest on solid ground.
Besides the proton radius puzzle, an improvement of the theoretical prediction of the $1s$ Lamb shift is essential for the determination of the Rydberg constant \cite{muh1,codata2014}, precision tests of bound-state QED, constraints on light neutral particles, such as a dark photon, from the physics of simple atoms (see, e.g., \cite{constraint}), and the interpretation of the currently ongoing $1s-2s$ He$^+$ experiments~\cite{he_ion1,he_ion2}. The Lamb shift of the atomic energy levels is a QED effect that can be experimentally studied in light hydrogen-like atoms with high accuracy (cf. \cite{mpq_h_new_1}). The theoretical prediction of this phenomenon involves the values of input parameters, such as the Rydberg constant and the proton charge radius, that limit the accuracy of the calculations. A separate contribution to the uncertainty originates from the computation of various high-order QED effects. The dominant contributions to the QED error budget come from the radiative corrections in the external-field approximation. We follow the standard convention and parametrize these corrections as (see, e.g., \cite{codata2014,VASH-book}) \begin{equation}\label{eq:F123} \Delta E(ns)=\frac{\alpha \,(Z\alpha)^4m}{\pi \, n^3} \left(F^{(1)}+\frac{\alpha}{\pi} \, F^{(2)}+\left(\frac{\alpha}{\pi}\right)^2F^{(3)}+\dots\!\right)\!, \end{equation} where $F^{(i)}=F^{(i)}_{ns}(Z\alpha)$ corresponds to the $i$-loop radiative insertions and the relevant contributions are at the one-, two-, and three-loop level. The four-loop contributions are neglected in (\ref{eq:F123}). The uncertainty due to the unknown leading four-loop term, which is expected at the level of a few units of $\alpha^4/\pi^4(Z\alpha)^4m$, is essentially below the uncertainty of the higher-order two-loop and three-loop terms. The latter are at the level of ten units of $\alpha^2/\pi^2(Z\alpha)^6m$ and $\alpha^3/\pi^3(Z\alpha)^5m$, respectively (see below).
Theory of the one-loop contributions is firmly established (see \cite{codata2014,VASH-book} for details). The largest and most important contribution, related to the electron self-energy, has been calculated directly for $Z=1,2$ \cite{1sse1}, i.e., for H and He$^+$. We consider below the two-loop and three-loop radiative corrections. The functions $F^{(i)}$ can be expanded at low $Z\alpha$; at two and three loops, the results read \begin{eqnarray} (Z\alpha)^4\,F^{(2)}(nl)&=&\sum_{kp}B_{kp}(Z\alpha)^k\ln^p{\frac{1}{(Z\alpha)^2}} \;, \label{twoloop}\\ (Z\alpha)^4\,F^{(3)}(nl)&=&\sum_{kp}C_{kp}(Z\alpha)^k\ln^p{\frac{1}{(Z\alpha)^2}}\;. \label{threeloop} \end{eqnarray} Here, we focus on the~$1s$ state and the $F^{(i)}$ coefficients are always meant to be related to the aforementioned $1s$ state. It is not clear {\em a priori\/} which logarithmic terms are present in (\ref{twoloop}) and (\ref{threeloop}). Sometimes a special study is required. For example, it was believed \cite{codata2014} until recently that $C_{63}\neq0$, while the presence of $B_{72}\neq0$ was rather disputable. Both issues have been recently resolved in \cite{our_b72} and we discuss them also below. A number of the two-loop ($B_{\dots}$) and three-loop ($C_{\dots}$) coefficients have been known with sufficient accuracy. These include $B_{40}$, $B_{50}$, $B_{63}$, $B_{62}$, $B_{61}$, and $C_{40}$. Estimates with credible uncertainties have also been available for $B_{60}, C_{50}$, and $C_{63}$. A concise summary concerning all these coefficients can be found in \cite{codata2014}. Some of the corrections have been revisited since the publication of \cite{codata2014}. These include, e.g., $B_{61}$ \cite{LbL:CS} and $B_{72}, C_{63}$, and $C_{62}$ \cite{our_b72}. In this letter, we reconsider $B_{61}$ (order $\alpha^2(Z\alpha)^6m\ln(Z\alpha)$), $B_{60}$ (the non-logarithmic $\alpha^2(Z\alpha)^6m$ term), and $C_{50}$ ($\alpha^3(Z\alpha)^5m$) and discuss them below in subsequent sections in detail.
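Translating such coefficients into energy units is a one-line evaluation of Eq.~(\ref{eq:F123}); the sketch below (our illustration; the numerical values of $\alpha$, $m_ec^2$, and the eV-to-frequency conversion are CODATA-style inputs assumed here) reproduces the size of the $B_{61}$ term for the $1s$ state in hydrogen:

```python
import math

ALPHA = 7.2973525693e-3      # fine-structure constant (assumed input)
MC2_EV = 510998.95           # electron rest energy in eV (assumed input)
EV_TO_KHZ = 2.417989242e11   # 1 eV in kHz, i.e. e/h (assumed input)

def b_term_khz(B, k, p, Z=1, n=1):
    """Contribution of a two-loop coefficient B_kp to the ns Lamb shift:
    (alpha/pi)^2 * B_kp * (Z alpha)^k * ln^p(1/(Z alpha)^2) * m c^2 / n^3,
    following the parametrization of Eqs. (eq:F123) and (twoloop)."""
    za = Z * ALPHA
    log = math.log(1.0 / za**2)
    return (ALPHA / math.pi)**2 * B * za**k * log**p * MC2_EV * EV_TO_KHZ / n**3

print(b_term_khz(49.788899, k=6, p=1))   # ~49.3 kHz for the 1s state
print(b_term_khz(48.958590, k=6, p=1))   # ~48.5 kHz with the older B_61
```

The few-kHz scale of these terms, against a total $1s$ Lamb shift of order $8\times10^6$ kHz, is what makes the coefficients discussed below relevant at the current level of precision.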
Our findings are summarized in Tables~\ref{t:sum:two} and \ref{t:sum:three}. \begin{table}[htbp] \begin{center} \begin{tabular}{l|c|c|c} \hline Quantity & $B_{61}^{\rm tot}$ & $B_{60}^{\rm tot}$ & $G_{60}^{\rm tot}(Z=1)$ \\ \hline \cite{codata2014}: coefficient & $48.958\,590$ & $-81.3(19.7)$ & \\ contribution, kHz & 48.50 & $-8.2(2.0)$ & \\ \hline this work: coefficient & $49.788\,899$ & & $-94.5(6.6)$ \\ contribution, kHz & 49.32 & & $-9.5(0.7)$ \\ \hline \end{tabular} \caption{Two-loop coefficients and their contributions to the~$1s$ Lamb shift in hydrogen. $G_{60}^{\rm tot}(Z)$ is equal to $B_{60}$ together with all the higher-order (in $Z\alpha$) corrections (see (\ref{G:twoloop})). \label{t:sum:two}} \vspace{-6.0mm} \end{center} \end{table} In the case of the two-loop corrections, rather than $B_{60}$, we use $G_{60}(Z\alpha)$ defined as \begin{equation} G_{60}(Z\alpha)=B_{60}+\sum_{kp; k\geq7}B_{kp}(Z\alpha)^{k-6}\ln^p{\frac{1}{(Z\alpha)^2}} \;, \label{G:twoloop} \end{equation} i.e., it is equal to $B_{60}$ with all the higher-order (in $Z\alpha$) corrections included. $G_{60}(Z\alpha)$ is more appropriate if one uses results of numerical calculations. \begin{table}[htbp] \begin{center} \begin{tabular}{l|c|c|c} \hline Quantity & $C_{50}^{\rm tot}$ & $C_{63}^{\rm tot}$ & $C_{62}^{\rm tot}$ \\ \hline \cite{codata2014}: coefficient & $\pm30$ & $\pm1$ & \\ \cite{codata2014}: contribution, kHz & $\pm 0.96$ & $\pm0.22$ & \\ \hline this work$^*$: coefficient & $-3.3(10.5)$ & 0 & $-0.36$\\ this work$^*$: contribution, kHz & $-0.11(34)$ & 0 & $-0.01$\\ \hline \end{tabular} \caption{Three-loop coefficients and their contributions to the $1s$ Lamb shift in hydrogen. $^*$We use here $C_{63,62}$ from~\cite{our_b72}. \label{t:sum:three}} \end{center} \vspace{-6.0mm} \end{table} \section{Additional logarithmic two-loop contributions in order $\alpha^2(Z\alpha)^6m$ \label{s:log}} We begin with the two-loop logarithmic coefficient $B_{61}$. 
First calculated in~\cite{b601s,log1}, the result was applied in~\cite{codata2014}. After the original publication, the diagrams with the light-by-light (LbL) scattering block (see Fig.~\ref{f:lbl}$a$) were studied, and a correction to the previous result was found~\cite{LbL:CS}, due to the LbL diagrams overlooked in \cite{b601s,log1}. The LbL contributions are the most difficult for numerical calculations. Analytic calculations had been available only for the order $\alpha^2(Z\alpha)^5m$ \cite{LbL1} and were absent for the next order in $Z\alpha$ until the publication of \cite{LbL:CS}. Those diagrams receive a contribution from soft photons responsible for the appearance of a long-distance potential. After integrating out the hard modes (i.e., those with momenta comparable with $m$), effective local operators appear, which give rise to the two-photon vertices shown in diagrams $b$ and $c$ in Fig.~\ref{f:lbl}. The remaining photons are soft (i.e., their momenta are much smaller than $m$). There are two possible pairs of soft photons: those which connect the nucleus and the electron loop (see Fig.~\ref{f:lbl}$b$) and those which connect the electron line and the electron loop (see Fig.~\ref{f:lbl}$c$). The former case was covered by \cite{LbL:CS}, while we consider the latter here. \begin{figure}[thbp] \begin{center} \resizebox{0.80\columnwidth}{!}{\includegraphics[clip]{Fig1_Lamb.eps}} \end{center} \vspace{-3.0mm} \caption{Example diagrams for the LbL contributions: an `initial' diagram ($a$) and two effective diagrams ($b,c$). The double horizontal line denotes the Coulomb propagator. The effective diagrams result from the hard integrations (with momenta comparable with $m$), which produce effective point-like vertices, while the remaining photons are soft (i.e., their momenta are much smaller than $m$).
} \label{f:lbl} \vspace{-4.0mm} \end{figure} The bottom part of the diagram in Fig.~\ref{f:lbl}$a$, i.e., the electron loop in the Coulomb field of the nucleus, is the known virtual Delbr\"uck scattering amplitude (see, e.g., \cite{rev:vD2,vDs} and references therein). The upper part, i.e., the electron line and two soft photons connecting the electron line and the electron loop, can be drastically simplified within the soft-photon kinematics, where the energy transfer ($q_0$) is comparable with the momentum transfer ($\bf q$) and $Z\alpha m \ll |q_0| \sim |{\bf q}|\ll m$. The integral over $q_0$ simplifies considerably within a kind of static-electron kinematics discussed in detail in \cite{LbL1mu}. The resultant integral induces an effective potential that behaves as $r^{-4}$ (cf. \cite{our_mulbl}, see also \cite{LbL:CS}). In the $1s$ state, the expectation value of this potential diverges at the short end of the interval $1/m\ll r \ll 1/(Z\alpha m)$. The manifestation of this divergence in the perturbation theory is a logarithmically enhanced correction to the hydrogen energy levels. On the technical side, the calculation is similar to that in \cite{LbL:CS} if we use the effective field theory approach. To confirm our result, we also considered diagrams with triple photon exchange and extracted the logarithmically divergent part; both methods gave the same correction to the $B_{61}$ coefficient. The total result for the logarithmic LbL contribution, including the one from~\cite{LbL:CS}, reads \begin{equation}\label{eq:lbl:log} \Delta E^{\rm LbL}(1s) =\frac{\alpha^2(Z\alpha)^6m}{\pi^2}\left[ \frac{709\pi^2}{3456}-\frac{43}{36} \right]\ln\frac1{(Z\alpha)^2} \,. \end{equation} Result (\ref{eq:lbl:log}) is over 2.5 times larger than that of the previous, partial computation~\cite{LbL:CS}. In standard notation (cf. \cite{codata2014}) it corresponds to the $B_{61}$ coefficient.
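As an arithmetic cross-check (ours, not part of the original derivation), the bracket in (\ref{eq:lbl:log}) can be evaluated numerically; it coincides with the shift of the $B_{61}$ coefficient between the CODATA value and this work quoted in Table~\ref{t:sum:two}:

```python
# Evaluate the logarithmic LbL coefficient from the equation above and compare
# it with the B61 shift between CODATA (48.958590) and this work (49.788899).
import math

b61_lbl = 709 * math.pi**2 / 3456 - 43 / 36   # ~0.830310
b61_shift = 49.788899 - 48.958590             # shift of B61 in Table 1

print(round(b61_lbl, 6))
```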
\section{Two-loop contributions with closed electron loops in order $\alpha^2(Z\alpha)^6m$ \label{s:closed}} While considering the non-logarithmic part of the $\alpha^2(Z\alpha)^6m$ correction, i.e., the coefficient $B_{60}$ and higher-order terms, one has to distinguish three groups of diagrams and treat them differently. One group originates from the `pure' self-energy (SE) diagrams, i.e., the diagrams without any closed electron loops (see Fig.~\ref{f:two}$a$). The remaining groups, on the other hand, include closed electron loops. The second group contains the loops in the so-called free-loop approximation, i.e., all appearing closed electron loops are due to the vacuum polarization (see Fig.~\ref{f:two}$b$). The last group contains virtual LbL scattering subdiagrams (see, e.g., Fig.~\ref{f:lbl}$a$). (In high-$Z$ atomic physics, those diagrams are referred to as the vacuum polarization in the presence of the Coulomb field of a nucleus.) \begin{figure}[thbp] \begin{center} \resizebox{0.8\columnwidth}{!}{\includegraphics[clip]{Fig2_Lamb.eps}} \end{center} \vspace{-2.0mm} \caption{Example diagrams for the two-loop contributions to the Lamb shift: a pure self-energy one ($a$) and one with an electron loop in the free-loop approximation ($b$).} \label{f:two} \vspace{-4.0mm} \end{figure} The most accurately computed results exist for the {\em free\/}-loop approximation diagrams studied in~\cite{yero_num_1}. The results read \begin{eqnarray} G_{60}^{\rm free}(Z\!=\!1)&=&-15.0(4)\;,\nonumber\\ G^{\rm free}_{60}(Z\!=\!2)&=&-13.9(1) \;.\end{eqnarray} The contributions beyond the free-loop approximation at order $\alpha^2(Z\alpha)^6m$ belong to two groups. One is due to radiative corrections to the Wichmann-Kroll contribution. (The Wichmann-Kroll contribution by itself is of order $\alpha(Z\alpha)^6m$.) We estimate it as \begin{equation}\label{eq:b60:rwk} B_{60}^{\rm rWK}(ns)=0.13\pm0.13\;.
\end{equation} The estimation is based on the similarity of the behavior of a radiative correction to the Wichmann-Kroll potential and of the K\"allen-Sabry potential in the so-called $t$ channel. The other group arises due to Coulomb corrections to the LbL contribution of order $\alpha^2(Z\alpha)^5m$. We have already considered their logarithmic part above in (\ref{eq:lbl:log}). We estimate the non-logarithmic part as \begin{equation}\label{eq:b60:lbl} B_{60}^{\rm LbL}=\pm \pi B_{61}^{\rm LbL}\simeq \pm 2.6\;. \end{equation} The $B_{60}$ term beyond the free-loop approximation was previously estimated in \cite{yero_num_1}. However, that estimate was based on incorrect assumptions about the logarithmic contributions of the diagrams beyond the free-loop approximation, and thus we do not take it into consideration. The quantitatively largest contribution in our consideration of the $B_{60}$ term beyond the free-loop approximation comes as a `tail' of the logarithmic $B_{61}$ term. The summary of the individual contributions to $G_{60}(1s)$ is given in Table~\ref{t:b:all2loop}. \begin{table}[htbp] \begin{center} \begin{tabular}{l|c|c|c||c} \hline Quantity &$G_{60}^{\rm SE}(1s)$ & $G_{60}^{\rm free}(1s)$ & $G_{60}^{\rm beyond}(1s)$ & $G_{60}^{\rm tot}(1s)$\\ \hline $Z=1$ & $-79.6(6.0)$ & $-15.0(4)$ & 0.1(2.6)& $-94.5(6.6)$ \\ $Z=2$ & $-83.3(5.2)$ & $-13.9(1)$ & 0.1(2.6)& $-97.1(5.8)$ \\ \hline \end{tabular} \caption{Higher-order two-loop contributions to the $1s$ Lamb shift in hydrogen and the helium ion. The {\em free\/}-loop approximation result is from \cite{yero_num_1}. The pure {\em SE\/} value as well as the contribution {\em beyond\/} the free-loop approximation are results of this letter.
\label{t:b:all2loop}} \end{center} \vspace{-6.0mm} \end{table} The estimate above is based on the assumption that a natural magnitude for the constant accompanying a logarithm is $\pi$, which is inspired by the value of the imaginary part of the logarithm of a negative real number. In a term with several logarithms, we substitute each of them by $\pi$, which produces a combinatoric factor. Often the terms beyond the leading logarithmic term are estimated at 50\% of its value. Using $\pi$'s and combinatoric factors, we estimate the subleading terms at below 50\% when the leading term contains a single logarithm, but at above 50\% when it is a double or triple logarithm. We consider this more realistic than a naive 50\% estimate in all cases. \section{Pure self-energy two-loop contributions in order $\alpha^2(Z\alpha)^6m$ \label{s:se}} The situation concerning the pure SE part of $B_{60}$ is more complicated than that of the closed-electron-loop contributions. A partial calculation exists, and it is accompanied by a plausible estimate of the unknown part of the contribution~\cite{jentscura1s}, \begin{equation}\label{eq:b60:j} B_{60}^{\rm pure\;SE}=-61.6(9.2)\;. \end{equation} The large magnitude of the $B_{60}^{\rm pure\;SE}$ coefficient is due to an enhancement of the low-momentum contribution, while the uncertainty comes from the unknown high-momentum one. Assuming that the missing high-momentum contribution is not enhanced, we arrive at the result of $\pm\pi^3B_{63}$ for the missing contribution, which coincides with the uncertainty in (\ref{eq:b60:j}). Consequently, our estimation of the magnitude of the unknown terms in~(\ref{eq:b60:lbl}) is consistent with that in~\cite{jentscura1s}. There exist essentially three approaches to the calculation of the higher-order two-loop contributions. The first employs a $Z\alpha$ expansion, in which case the accuracy is limited by (\ref{eq:b60:j}) \cite{jentscura1s}.
It is also necessary to know the higher-order logarithmic terms, such as \cite{our_b72} \begin{equation}\label{eq:b72} B_{72}^{\rm pure\;SE}=-\frac{2\pi}{3}\,\left(\frac{139}{32}-2\ln2\right)\;. \end{equation} The size of this logarithmic contribution is smaller than the uncertainty above, but not negligible. The second approach uses numerical calculations, exact in $Z\alpha$, at $Z=1,2$. In the case of the two-loop contributions, that approach has been successfully applied to the contributions with closed electron loops in the free-loop approximation \cite{yero_num_1} (see above), but its application to pure SE diagrams has proved challenging. Only results at medium $Z$, namely $Z=10,12,15,17,20,25,30$ \cite{yerokhin09}, are available. Those can still be extrapolated to $Z=0,1,2$. (The third approach includes a fit using the result of (\ref{eq:b72}) as a data point at $Z=0$ for $G_{60}^{\rm SE}(Z)$ from (\ref{G:twoloop}).) Despite the low accuracy of the data, such an extrapolation is possible for $F^{(2)}(Z\alpha)$ only because a number of the leading coefficients ($B_{40}$, $B_{50}$, $B_{63}$, $B_{62}$, and $B_{61}$) are known (see \cite{codata2014,VASH-book} and references therein). Meanwhile, the `data area' ($Z=10,12,15,17,20,25,30$) is relatively far from the `target area' ($Z=0,1,2$) and contains relatively few data points. The logarithmic terms go through a bigger change on their way from the data area to the target area than within the data area. Accordingly, from the point of view of a phenomenological fit, we have to consider nearly coinciding fits with different extrapolation expectations. Since we need not only to fit the data but eventually also to extrapolate, we have to maintain the correct shape of the fit function (see (\ref{twoloop})). To deal with the logarithmic terms at orders $\alpha^2(Z\alpha)^7m$ and $\alpha^2(Z\alpha)^8m$, a calculation of some and an estimation of others is necessary.
We present the summary of the leading logarithmic terms at each order in $Z\alpha$ in Table~\ref{t:b:coef:0}. The coefficients $B_{84}$ and $B_{93}$ are calculated in this letter using techniques developed in~\cite{log3,log2vp,our_b72}. \begin{table}[htbp] \begin{center} \begin{tabular}{l|c|c|c|c} \hline Coefficient &$B_{63}$ & $B_{72}$ & $B_{84}$ & $B_{93}$ \\ \hline Value & $-8/27$ & $-6.19$ & $-7/27$ & $5/6\cdot B_{72}\simeq-5.162$ \\ \hline \end{tabular} \caption{The leading higher-order pure SE two-loop logarithmic contributions. Note: the leading logarithmic terms of orders $\alpha^2(Z\alpha)^6m$ \cite{log3} and $\alpha^2(Z\alpha)^8m$ come only from the pure self-energy. In contrast, the leading logarithms of orders $\alpha^2(Z\alpha)^7m$ \cite{our_b72} and $\alpha^2(Z\alpha)^9m$ come both from the diagrams with and without closed electron loops. Here we present only their self-energy part. \label{t:b:coef:0}} \end{center} \vspace{-6.0mm} \end{table} To estimate the subleading terms we use several approaches. By an `estimation' we understand a constraint with a relatively large uncertainty, such as in~(\ref{eq:b60:j}); this allows us to obtain more than one estimate for each subleading coefficient. Ultimately, we choose the most conservative constraint for each coefficient. The summary of our estimations is given in Table~\ref{t:b:coef:2}. The coefficients can be used both for a low-$Z\alpha$ expansion and for fits. We have to explain how we used the constraints for the fits. We consider the constraints as additional data and include their deviation from the related central values, measured in units of their uncertainties, in the final $\chi^2$ we have to minimize. This is similar, e.g., to the treatment applied in \cite{codata2014} to various less precise theoretical corrections. Such a fitting procedure allows one to easily combine theoretical constraints with the existing `true' data.
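The mechanics of this augmented-$\chi^2$ procedure can be illustrated with a minimal one-parameter toy (our own sketch; the two input values merely echo the magnitude of the $B_{60}^{\rm pure\;SE}$ inputs discussed in this section and are not the actual fit):

```python
# Toy illustration of the fitting procedure described above: a theoretical
# constraint enters chi^2 as an extra pseudo-data point, with its deviation
# from the central value measured in units of its uncertainty.

def chi2(c, points):
    """points: list of (central value, uncertainty) pairs, data or constraints."""
    return sum(((c - v) / s) ** 2 for v, s in points)

def best_fit(points):
    # for this quadratic chi^2 the minimum is the weighted mean
    w = [1.0 / s**2 for _, s in points]
    return sum(wi * v for wi, (v, _) in zip(w, points)) / sum(w)

data = [(-90.0, 12.0)]        # e.g. a fit over numerical results
constraint = [(-61.6, 9.2)]   # e.g. a low-Z alpha estimate treated as a datum
c_hat = best_fit(data + constraint)

print(round(c_hat, 1))  # -> -72.1, pulled between the two inputs by 1/sigma^2
```

The combined value lands between the pure numerical fit and the theoretical constraint, weighted by the inverse squared uncertainties, which is the intended effect of treating constraints as pseudo-data.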
\begin{table}[htbp] \begin{center} \begin{tabular}{l|c|c|c|c|c|c} \hline Coefficient & $B_{71}$ & $B_{70}$ &$B_{83}$&$B_{82}$&$B_{81}$&$B_{80}$ \\ \hline Value & $-12(40)$ & $\pm72$ & $\pm 3.2$ &$\pm 50$&$\pm 150$ &$\pm 200$\\ \hline \end{tabular} \caption{Estimated values of the coefficients of the two-loop pure SE subleading terms in orders $\alpha^2(Z\alpha)^7m$ and $\alpha^2(Z\alpha)^8m$. \label{t:b:coef:2}} \end{center} \vspace{-6.0mm} \end{table} Often, for the higher-order terms, it is possible to plausibly estimate the magnitude of a coefficient, but not its sign; therefore the central values of the estimates are frequently zero. Once $B_{71}$ and $B_{70}$ are estimated, we can find the result of the low-$Z\alpha$ expansion of $G_{60}^{\rm SE}(Z\!=\!1,2)$ (see Table~\ref{t:b:res}). \begin{table}[htbp] \begin{center} \begin{tabular}{l|c|c|c} \hline Quantity &$B_{60}^{\rm SE}$ & $G_{60}^{\rm SE}(Z=1)$ & $G_{60}^{\rm SE}(Z=2)$\\ \hline Low-$Z\alpha$ expansion & $-61.6(9.2)$ & $-66.8(9.6)$ & $-69.6(10.5)$ \\ Fit over data from \cite{yerokhin09} & $-90(12)$ & $-94(10)$ & $-95(9)$ \\ Combined fit & $-72.4(7.2)$ & $-79.6(6.0)$ & $-83.3(5.2)$ \\ \hline \end{tabular} \caption{Higher-order two-loop pure self-energy contributions to the $1s$ Lamb shift in hydrogen and the helium ion. The {\em combined\/} fit includes the numerical data from \cite{yerokhin09}, the value of $B_{60}$ from \cite{jentscura1s}, and less accurate numerical results from \cite{yero_num_3}. \label{t:b:res}} \vspace{-6.0mm} \end{center} \end{table} To compare those low-$Z\alpha$ results with the numerical data, we need to fit the latter. We found that setting $B_{72}=0$ shifts the fit results for $B_{60}$ by $8$--$10$ from the value of $B_{60}$ in (\ref{eq:b60:j}). That would put the fit and that value of $B_{60}$ in disagreement and would not allow a combined fit. All the previously used fits have ignored the double-logarithmic $B_{72}$ term.
Consequently, the fits found in the literature use an unrealistic shape with no estimation of systematic effects (see, e.g., \cite{yerokhin09}), and a comparison with those previously performed fits is meaningless. We have fitted the numerical data \cite{yerokhin09} ourselves, using realistic approximation functions. We present the results in Table~\ref{t:b:res}, including the results of the combined fit, i.e., a fit which includes the low-$Z\alpha$ constraints and the numerical data from \cite{yerokhin09}. We consider the difference between the low-$Z\alpha$ value and the fit over the numerical data, which is somewhat below $2\,\sigma$, a fair agreement that validates the use of a combined fit. The summary of the calculation of the two-loop contributions in the external-field approximation is given in Table~\ref{t:sum:two}. \section{Next-to-leading three-loop contributions \label{s:three}} The three-loop theory is more complicated and less advanced than the two-loop one. Only its leading contribution to the Lamb shift (order $\alpha^3(Z\alpha)^4m$, coefficient $C_{40}$) is known \cite{c40_dirac,c40_pauli,c40_vp}. The next-to-leading one (order $\alpha^3(Z\alpha)^5m$, $C_{50}$) has been calculated only partially \cite{some_c50} and boldly estimated in \cite{codata2014}. After the improvement of the accuracy of $B_{60}$ above, $C_{50}$ \cite{codata2014} becomes the largest source of the QED uncertainty for the $1s$ Lamb shift in hydrogen. \begin{figure}[thbp] \begin{center} \resizebox{0.30\columnwidth}{!}{\includegraphics[clip]{3loop_1.eps}} \end{center} \vspace{-2.0mm} \caption{An example diagram for the three-loop contribution to the Lamb shift.} \label{f:three} \vspace{-4.0mm} \end{figure} The $\alpha^3(Z\alpha)^5m$ contribution can be represented as a set of two-photon exchange diagrams (see Fig.~\ref{f:three}). The related expression may be written in terms of the integral over the loop momentum $q$ (cf.
\cite{log3}) \begin{equation}\label{eq:c50:t} \int_0^\infty{\frac{dq}{q^4}}\,T(q^2) \;, \end{equation} where $T(q^2)$ is a radiative correction to the skeleton-diagram integral, which is related to a virtual forward Compton scattering amplitude. The calculation of the radiative factor $T(q^2)$ is very complicated. Here, we calculate its asymptotics at high and low $q$ and estimate the total integral by integrating those asymptotic expressions. As mentioned previously, a part of the contributions, i.e., the diagrams with closed electron loops in the free-loop approximation, except for graphs combining the two-loop pure self-energy with one electron vacuum-polarization insertion, has already been considered in \cite{some_c50}. Here we estimate the unknown diagrams by integrating the asymptotics of the related integrand. The complete three-loop result is \begin{equation}\label{eq:c50:tot} C_{50}^{\rm total}(ns)=-3.3(10.5)\;. \end{equation} To verify our method, we have also found the contributions of orders $\alpha(Z\alpha)^5m$ and $\alpha^2(Z\alpha)^5m$. Our estimation is in perfect agreement with the known results \cite{a50,b50se}. In each case of interest (one loop, two loops, three loops) the asymptotics of $T(q^2)$ are of the same sign at high and low $q$. That is an important requirement for a reliable estimation of the integral through the asymptotics of the integrand. The present situation with the three-loop contributions is summarized in Table~\ref{t:sum:three}. The $C_{50}$ uncertainty is reduced by a factor of 3. This makes it comparable with the CODATA $C_{63}$ uncertainty \cite{codata2014}. Fortunately, the latter was eliminated in \cite{our_b72}, where it was found that \begin{eqnarray} C_{63}&=&0\;,\nonumber\\ C_{62}&\simeq& -0.36\;.
\end{eqnarray} \section{Summary and conclusions \label{s:summary}} The summary of the theoretical accuracy of the $1s$ Lamb shift calculation for light hydrogen-like atoms with $Z=1,2$ is presented in Table~\ref{t:sum:h:he}. The uncertainty from the external-field contributions, considered in this paper, is due to $\alpha^8m$ terms and consists of two sources: the two-loop $G_{60}$ and the three-loop $C_{50}$. A comparison with the existing calculations of other authors is given in the introduction, in Tables~\ref{t:sum:two} and \ref{t:sum:three}, in terms of the related coefficients and of the absolute values of the contributions for hydrogen. As one can see there, both the two-loop and three-loop uncertainties are reduced by approximately a factor of three. \begin{table}[htbp] \begin{center} \begin{tabular}{l|c|c|c} \hline Contribution, kHz & $G_{60}^{\rm tot}$ & $C_{50}^{\rm tot}$ & RR16 \\ \hline Contribution for H & $-9.5(0.7)$ & $-0.11(34)$ & $1.5(1.0)$ \\ Contribution for D & $-9.5(0.7)$ & $-0.11(34)$ & $0.76(0.49)$\\ Contribution for ${}^3$He$^+$ & $-625(37)$ & $-3.4(10.8)$ & $23(18)$\\ Contribution for ${}^4$He$^+$ & $-625(37)$ & $-3.4(10.8)$ & $18(14)$\\ \hline \end{tabular} \caption{The most uncertain contributions to the $1s$ Lamb shift in hydrogen, deuterium and the helium ions. RR16 stands for the $\alpha(Z\alpha)^6m^2/M$ radiative-recoil contribution, which is known only in the leading logarithmic approximation (see (\ref{eq:rr})). \label{t:sum:h:he}} \end{center} \vspace{-6.0mm} \end{table} The dominant contribution to the uncertainty budget for hydrogen currently comes from the radiative-recoil contribution of order $\alpha(Z\alpha)^6m^2/M$, which is known only in the leading logarithmic approximation \cite{aza6ln22,aza6ln21} \begin{equation}\label{eq:rr} \Delta E_{\rm RR16}(1s)=\frac23 \frac{\alpha(Z\alpha)^6m}{\pi}\,\frac{m}M\,\ln^2{\frac1{(Z\alpha)^2}}\;.
\end{equation} The uncertainty in Table~\ref{t:sum:h:he} comes from an estimation of the subleading terms. For this estimation we use the approach with $\pi$'s and combinatoric coefficients, as explained above, and the resulting uncertainty is somewhat above 50\% (cf. \cite{codata2014}). The key uncertainty sources in \cite{codata2014} also included pure recoil corrections, but their uncertainty (of about 0.7\;kHz for H) has recently been eliminated \cite{recoilyero} by a direct calculation of the recoil corrections for $Z=1,2$. Concluding, we have revisited the theory of the $\alpha^8m$ contributions to the Lamb shift of the $1s$ state in hydrogen and deuterium atoms and helium ions. We completed the calculation of the logarithmic terms, considered a controversy in the non-logarithmic two-loop contribution and improved its accuracy by approximately a factor of three, and obtained a complete approximate result for the three-loop terms, which is more reliable and three times more accurate than the previous bold estimation. The most accurate experimental results are available for the $1s-2s$ transition in hydrogen and deuterium~\cite{mpq_h_new_1,mpq_d_new}. Experimental efforts to measure the $1s-2s$ transition in the helium ion are underway \cite{he_ion1,he_ion2}. Since the weight of the individual contributions to the theoretical uncertainty budget varies substantially (see Table~\ref{t:sum:h:he}), combining the hydrogen and helium-ion experimental results would be beneficial not only for hydrogen and helium-ion spectroscopy but also for various applications including precision tests of bound-state QED, determination of the Rydberg constant, and constraints on new light neutral particles. A complete and detailed derivation, covering the technical side of the computations of our new results presented in this letter, is under preparation and will be published elsewhere. The authors are grateful to A. Czarnecki, M.I. Eides, K. Eikema, V.I. Korobov, E.Yu. Kor\-zinin, K.
Pachucki, Th. Udem, and V.A. Yerokhin for valuable stimulating discussions. The work was supported in part by DFG (Grant No. KA 4645/1-1). {\bf A note, added after the paper has been completed.\/} After our paper was completed, we learned about \cite{yero_pac}, which covers a broad range of issues related to the Lamb shift in hydrogen and some other atoms. Concerning the two-loop and three-loop $\alpha^8m$ terms discussed here, the consideration in \cite{yero_pac} is somewhat different from \cite{codata2014}. In particular, their fit for the two-loop contributions includes $B_{72}$, recently obtained in \cite{our_b72}. The diagrams with vacuum-polarization loops in the free-loop approximation and the diagrams with closed electron loops beyond the free-loop approximation are considered there separately from the pure self-energy (cf. \cite{yero_num_1}), the same way as we consider them here. Reference \cite{yero_pac} describes the fit for $G_{60}^{\rm pure\;SE}(Z)$ in only a few details. It has a non-physical shape, i.e., compared with the known shape of the true two-loop function (\ref{G:twoloop}), many logarithmic terms are omitted, and the systematic error is not estimated. The accuracy is worse than the accuracy of the estimation (\ref{eq:b60:j}) for $B_{60}^{\rm pure\;SE}$, which means that the fit was effectively performed for the difference $G_{60}^{\rm pure\;SE}\!-\!B_{60}^{\rm pure\;SE}$, rather than using $B_{60}^{\rm pure\;SE}$ as a free parameter and the result in (\ref{eq:b60:j}) as a data point for $G_{60}^{\rm pure\;SE}(Z=0)$ (as is done in our paper). The uncertainty of the diagrams with closed loops beyond the free-loop approximation is based in \cite{yero_pac} on a partial result for $B_{61}^{\rm LbL}$ from \cite{LbL:CS}, while the complete result for $B_{61}^{\rm LbL}$ found here is more than twice as large.
As for the three-loop contributions, a minor improvement in \cite{yero_pac} was due to the use of the results on $C_{63}$ and $C_{62}$ from \cite{our_b72}, which are not essential unless the accuracy of the constraint on $C_{50}$ is improved first.
\section{INTRODUCTION} \label{sec:intro} Mass loss in evolved stellar populations affects the chemical evolution of the interstellar medium (ISM), and mass loss from individual stars governs post-main-sequence evolution. The amount and duration of the mass loss that occurs in giant stars remain among the most uncertain parameters in stellar evolution theory, and the effect of these processes on the inferences derived from stellar population models can be significant. Given the wide use of these models (e.g., in inferring stellar masses of high-redshift galaxies), understanding the mass-loss process is vital for a range of problems in astrophysics. Although dust constitutes a small fraction of the total mass lost, it is frequently used as a marker, as it is optically thin and its thermal emission is readily detectable in a variety of environments. Globular clusters (GCs), believed to have formed during the assemblage of the Galaxy, are coeval samples of stars at common, well-determined distances with nearly uniform initial compositions. GCs enable study of the chemical enrichment of the interstellar medium arising from mass ejection during the post-main-sequence evolution of stars. Red giant stars, especially those ascending the asymptotic giant branch, are expected to develop winds that inject processed material into the intra-cluster medium (ICM) during post-main-sequence evolution. These winds contain gas and solid-phase materials, the latter in the form of dust grains that condense from the metals. However, detection of thermal emission arising from intra-cluster medium dust has been elusive, suggesting that the ICM in GCs is 100 to 1000 times less massive than expected from current stellar evolution theory and observations of mass-losing stars in clusters and in the solar neighborhood. The circumstellar environments of stars in the late stages of evolution, when most mass loss occurs, are most effectively detected and studied in the infrared (IR).
IR surveys conducted by {\it IRAS}, {\it 2MASS,} and {\it ISO} revealed populations of dust-enshrouded asymptotic giant branch (AGB) and red supergiant (RSG) stars in the Galactic bulge \citep{Jacco03}, the Large Magellanic Cloud (LMC) \citep{Zijlstra96, jacco97, trams99}, and in galactic Globular Clusters \citep{Ramdani01,origlia02}. GCs are expected to contain dust from episodes of mass loss in red giant branch (RGB) and AGB stars. The IR excess of their circumstellar dust emission is expected to peak between 20~\micron{} and 30~\micron{}, and thus photometry at wavelengths longer than 20~\micron{} is necessary to estimate accurate dust masses lost by such stars. The amount of dust present in the ICM will vary depending on the cluster escape velocity, the time since the last crossing of the Galactic disk, where the ICM can be stripped away by the ISM, and the number of mass-losing stars. In general, the dust mass in the ICM of GCs is expected to be $\simeq 10^{-2}$ to $10^{-3}$ M$_{\odot}$ for most galactic clusters. Globular clusters have reasonably homogeneous (in age and metallicity), well-understood stellar populations, so observations of ICM dust are reasonably straightforward to interpret to yield mass loss rates and duty cycles. Previous attempts to detect the ICM in GCs suggest that the ICM density is well below that expected from predictions of the mass loss input from RGB and AGB stars, even considering the low metallicity of GCs. The lowest 3-$\sigma$ upper limits to the ICM mass for 70~K dust are $\sim 6\times10^{-5}$ M$_\odot$ \citep{hopwood99}. Detecting thermal emission from the elusive ICM in GCs is observationally challenging. \citet{origlia02} reported \textit{ISO} observations of the IR thermal emission from the winds of individual RGB stars in six massive GCs (47 Tuc, NGC 362, Omega Cen, NGC 6388, M15, and M54), showing that, in those systems, stellar winds from these stars are enriching the ICM.
Though thermal emission from the ICM material might be expected to be detectable, many attempts to do so with IR and millimeter observatories have produced only a single, tentative (3.5-$\sigma$) detection of ICM dust: thermal emission in the core of the metal-poor GC M15 \citep{evans03}. Overall this result suggests that the ICM dust in GCs is significantly less massive than expected from current stellar evolution theory and observations of mass-losing stars in GCs and the solar neighborhood. Proposed causes of the paucity of ICM emission include ram-pressure stripping of ICM gas during Galactic plane passage, blowout by nova explosions, fast winds from the stars themselves, radiative ejection by the sheer luminosity of cluster stars, and continuous ram pressure from hot gas in the galactic halo. However, the dominant state of the ICM is unclear. Sensitive searches for neutral H in the ICM at radio wavelengths have yielded upper limits of $\leq$ 0.1 M$_\odot$ \citep{Birk83}, with a possible detection of $\approx$ 200 M$_\odot$ in NGC 2808 \citep{Faulk91} and a 5$\sigma$ detection of 0.3 M$_\odot$ in M15 \citep{Jacco06}. Perhaps much of the ICM is ionized, as suggested by the high electron densities measured from pulsar timing in 47 Tuc \citep{Paolo01}. However, H$\alpha$ searches have as yet been unsuccessful. ICM dust grains in radiative thermal equilibrium should attain temperatures of 50~K to 80~K because of the high energy density of starlight within a GC \citep{Forte02} and thus be detectable as an IR excess (above the photospheric emission) in GCs at mid- to far-IR wavelengths.
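A quick check with Wien's displacement law (standard textbook physics, not taken from the original text) shows where grains at these temperatures emit most strongly:

```python
# Blackbody peak wavelengths for 50-80 K grains via Wien's displacement law.
b_wien = 2897.8  # Wien displacement constant, micron * K

peaks = {T: b_wien / T for T in (50, 70, 80)}
for T, lam in peaks.items():
    print(T, "K ->", round(lam, 1), "micron")
# 80 K grains peak near 36 micron and 50 K grains near 58 micron, i.e.
# squarely in the mid- to far-IR bands used for such searches
```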
Here we present observations of the galactic GC M15 with the \textit{Spitzer Space Telescope} \citep{Werner04}, whose instrumental sensitivity enables detection of dust masses as low as $4\times10^{-9}$ M$_\odot$ (assuming a population of grains radiating in thermal equilibrium, with integration times down to the background confusion limit) and therefore affords an unequalled opportunity to search for and set stringent limits on the ``missing'' ICM. \subsection{M15} \label{sec:m15} M15 (NGC 7078), with [Fe/H] $= -2.4$ \citep{sneden97}, is one of the most metal-poor GCs. It is a well-studied cluster, as it is home to the first planetary nebula (PN) discovered in a GC (K648, also designated as Ps-1, \citet{pease28,howard97,alves00}) and to the first GC Low-Mass X-ray Binary source (X2127+119, \citet{auriere84,charles86}). At least eight millisecond pulsars are also associated with the cluster \citep{Kulkarni96}. M15 is generally believed to be a core-collapse GC, with a small, dense core containing approximately 4000~M$_\odot$ \citep{phinney96}. Properties of M15, reproduced from \citet{hopwood99}, are listed in Table~\ref{tab:m15_params}. Updated values as listed by \citet{evans03} include the escape velocity $V_{esc}$ from \citet{webbink85}, and the time $\tau_c$ since the last plane crossing from \citet{oden97}. The reddening and the distance are updated from \citet{schlegel98} and \citet{mcnamara04}, respectively. Galactic coordinates for M15 are $l = 65.01^\circ$ and $b = -27.31^\circ$, placing the cluster $\sim$ 4.5~kpc south of the galactic plane. The total dust mass expected in a GC can be estimated using the following equation: \begin{equation} M_{dust} = \frac{\tau_c}{\tau_{HB}} \ N_{HB} \ \delta M \ \frac{10^{[Fe/H]}}{100}, \end{equation} \noindent where $\tau_{HB}$ is the Horizontal Branch (HB) lifetime, $N_{HB}$ is the number of HB stars, [Fe/H] is the cluster metallicity, and $\delta M$ is the mass lost from each star at the tip of the RGB. 
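For illustration, equation (1) can be evaluated with a short numerical sketch. All parameter values below except the metallicity are assumed, representative population II numbers, not the adopted inputs of the cited works:

```python
# Illustrative evaluation of equation (1). Except for [Fe/H], the
# parameter values are assumed, representative numbers, not the
# paper's adopted inputs.
tau_c = 4.0e7     # yr, assumed time since the last Galactic plane crossing
tau_HB = 1.0e8    # yr, assumed horizontal-branch lifetime
N_HB = 500        # assumed number of HB stars
delta_M = 0.2     # M_sun, assumed mass shed per star at the RGB tip
Fe_H = -2.4       # cluster metallicity [Fe/H]

# mass injected since the last plane crossing, converted to dust with the
# metallicity-scaled solar gas-to-dust ratio (the factor of 100)
M_dust = (tau_c / tau_HB) * N_HB * delta_M * 10**Fe_H / 100.0
print(f"expected ICM dust mass ~ {M_dust:.1e} M_sun")  # of order 1e-3
```

With these assumed inputs the estimate is of order $10^{-3}$~M$_\odot$, the same order as the published estimates.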
The factor of 100 is the solar gas-to-dust ratio, which is scaled for the metallicity of M15 by adding the [Fe/H] factor. Using values typical of population II stars and this relationship, the expected dust mass in M15 has been estimated to be 3.7 $\times$ 10$^{-3}$ M$_{\odot}$ by \citet{evans03} and 2.0 $\times$ 10$^{-3}$ M$_{\odot}$ by \citet{hopwood99}. M15 is also home to the PN K648. K648 was the first globular cluster PN discovered, and subsequently it has been extensively studied at UV, optical, and IR wavelengths to determine the chemical composition of the nebular ejecta and the parameters of the central star \citep{barker83, adams84, howard97, bianchi01, hld03, garnett93}. The study of post-AGB stellar evolution in old, metal-poor, low-mass stars in GCs or the galactic halo can be greatly enhanced if the by-products of stellar nucleosynthesis can be measured. Enriched material produced in the RGB and AGB stages of stellar evolution is dispersed into the outer layers of the star, and subsequent mass loss processes lead to the formation of a PN. Few PNe have been identified in the galactic halo population, and only four of these (K648, BB-1, DdDm~1, and H4-1) are associated with globular clusters \citep{jacoby97}. Below, we present findings derived from a 5-15~\micron{} IR spectrum of K648. \section{OBSERVATIONS} \label{sec:obsec} Imaging observations of the GC M15 were obtained on 2004 October 29 UT with the Multiband Imaging Photometer for \textit{Spitzer} (MIPS) \citep{Rieke04} camera through the 24~\micron{} and 70~\micron{} filters and with the Infrared Array Camera (IRAC) \citep{Fazio04} at 3.6, 4.5, 5.8, and 8~\micron{}, conducted as part of the Gehrz Guaranteed Time Observing Program (GGTOP, PID 124). Raw data were processed with the \textit{Spitzer} Science Center (SSC) pipeline version S11.4.0. To avoid saturation in IRAC, High Dynamic Range (HDR) mode was implemented, using 0.4~s, 10.4~s, and 98.6~s exposures. 
For each IRAC channel, 55 frames at seven dither positions were obtained with each HDR exposure time. With the MIPS camera, 596 dithered frames were taken at 24~\micron{} along with 256 dithered frames at 70~\micron{}. \textit{Spitzer} IRS spectra of the PN and other red sources detected in the IRAC images have been obtained as part of a follow-on program using GGTOP time, and these data are discussed below. Table~\ref{tab:obsum} summarizes the observations discussed in this paper. \subsection{IRAC} The IRAC Basic Calibrated Data (BCD) images were post-processed to correct for various instrumental artifacts and were mosaicked using the 2005 May 09 version of the SSC Legacy MOPEX software \citep{mopex}. The MOPEX cosmetic correction was used to eliminate mux-bleed and column pull-down as described by the IRAC Data Handbook, v3.0.\footnote{http://ssc.spitzer.caltech.edu/irac/dh/iracdatahandbook3.0.pdf} The background matching correction was used to minimize pixel differences in overlapping areas of the mosaics. The MOPEX mosaicker also eliminated cosmic rays and other outliers in the data. The final IRAC mosaics are not subsampled; thus, each final tiled image has a scale of 1.22\arcsec \ per pixel, covering an area $\sim 50$ square arcminutes around the core (RA[J2000] = 21$^{\mbox{\small{h}}}$:29$^{\mbox{\small{m}}}$:58\arcsec.38; Dec[J2000] = +12$^{\circ}$:10\arcmin :00\arcsec.6) of M15 plus an off-field region of the same size 7\arcmin{} to the north or south. A three-color image combining 3.6~\micron{}, 4.5~\micron{}, and 8~\micron{} is presented in Fig.~\ref{fig:3color}a. The diffuse ICM is not detected in the IRAC mosaics, even at 8~\micron{}. However, the planetary nebula (K648) and several other dusty stellar objects become quite prominent at 8~\micron. 
Point source photometry was conducted using several stellar extraction routines including GlimpsePhot\footnote{http://www.astro.wisc.edu/sirtf/}, DAOphot \citep{Stets87}, and the astronomical point source extraction (APEX) tool contained in the SSC MOPEX package. Severe stellar crowding towards the cluster core in the IRAC bands made reliable point-response-function (PRF) photometry challenging. The best photometric results (95\% agreement with 2MASS K-band fluxes) were obtained by performing PRF fitting on the BCD images with APEX. Array-location-dependent photometric correction weights were applied before PRF fitting to minimize systematic error in the point source extraction flux. The images were then corrected with the background matching routine, and outliers were eliminated by the mosaicker. PRFs were created for 3.6~\micron{} and 4.5~\micron{} data using the PRF estimate routine provided in APEX. At least twenty stars were chosen to make each PRF in areas with the least crowding. The PRFs provided by the \textit{Spitzer} Science Center (SSC)\footnote{http://ssc.spitzer.caltech.edu/irac/psf.html} that were made in-flight in January 2005 for 5.8~\micron{} and 8~\micron{} are sufficiently over-sampled, so there was no need to create new PRFs for these channels. Probable point sources with fluxes at least 4-$\sigma$ above the background were then identified and selected for extraction on coadded images that had been corrected for array distortion. Final flux extractions were performed at these world coordinate positions on the corrected BCD images using PRF subtraction. After applying the appropriate photometric color correction (as discussed in the IRAC Data Handbook, v3.0) and comparing APEX 3.6~\micron{} fluxes with 2MASS K-band fluxes \citep{2MASS}, we find that the average [K - 3.6~\micron{}] color is that of a K0 giant star. 
If we then assume that the majority of the stars we detect with {\it Spitzer} are K0 giant stars, then our fluxes are consistent with 2MASS K-band fluxes to within approximately 5\%. To further check our photometric results, we compared the fluxes from thirty of the stars we detect to K-band fluxes from \citet{cohen05}. The median [K-3.6~\micron{}] color for these sources is 0.09, which is also consistent with K0 giant stars (See Fig.~\ref{fig:cohen}). Color-magnitude diagrams (CMDs) derived from the IRAC point source photometry are presented in Fig.~\ref{fig:irac}. An RGB is clearly evident with the tip occurring near F$_{3.6}$ = 14 mJy. Stars with 3.6~\micron{} flux $>$ 11.2~mJy are possibly saturated in the long exposure frames, so only medium exposure (10.4 second) BCDs were used for the PRF subtractions and flux extraction. Flux densities for stars with colors redward of the RGB are listed in Table \ref{tab:irsources} (See \S\ref{sec:stars}). \subsection{MIPS} The MIPS Data Analysis Tool (DAT, \citet{Gordon05}) version 2.96 was used to do the basic processing and final mosaicking of the individual MIPS images. In addition, extra processing steps on each image were carried out before mosaicking using programs written specifically to improve the removal of MIPS detector instrumental signatures. At 24~\micron{} the extra steps included readout offset correction, scan mirror dependent flat fields, a scan mirror independent flat field, array averaged background subtraction, and exclusion of the bias boost images. At 70~\micron{} the extra steps were column average subtraction and pixel time filtering both with the exclusion of the regions around the bright sources. The pixel sizes of the final mosaics are 1.245\arcsec \ and 4.925\arcsec \ for 24~\micron{} and 70~\micron{}, respectively. The 24~\micron{} mosaic covers a 90 square arcminute area and the 70~\micron{} mosaic covers a 5.0 arcminute $\times$ 10.0 arcminute area, each centered on the core of M15. 
The MIPS mosaics (Fig.~\ref{fig:3color}b,\ c) show a possible ICM detection at 24~\micron{} and 70~\micron. The 24~\micron{} image shows two high surface brightness patches of material, both offset from the core by $\simeq 17$\arcsec \ towards the west (IR1b and IR2). The 70~\micron{} image shows only one high surface brightness patch that is likely an unresolved image of both 24~\micron{} regions. The integrated fluxes of the ICM detections at both wavelengths were determined using basic aperture photometry in IDL v6.0. A 130.5\arcsec \ square aperture centered at RA[J2000] = 21$^{\mbox{\small{h}}}$:29$^{\mbox{\small{m}}}$:56\arcsec.50; Dec[J2000] = +12$^\circ$:09\arcmin:53\arcsec.32 yields fluxes of 159.4$\pm$0.1~mJy at 24~\micron{} and 691.2$\pm$61.0~mJy at 70~\micron{}. The 70~\micron{} flux agrees well with that found by \citet{evans03} with \textit{ISO} observations using the same aperture size and position (Table~\ref{tab:obs_sf}). Also visible in the MIPS mosaics are the planetary nebula at 24~\micron{} and several dust enshrouded stars, some of which are also visible in the 8~\micron{} image, that may or may not be associated with the cluster. Point source photometry was performed on the 24~\micron{} mosaic using StarFinder \citep{diolaiti00}, which is well suited for the stable and well sampled MIPS 24~\micron{} PSF. A STinyTim \citep{krist02}\footnote{http://ssc.spitzer.caltech.edu/archanaly/contributed/stinytim.tar.gz} model PSF with a temperature of 100~K and smoothed to account for pixel sampling was used for the stellar extractions. It has been shown that the smoothed STinyTim PSFs are excellent matches to observed MIPS 24~\micron{} PSFs \citep{engelbracht06}. CMDs comparing 24~\micron{} to 3.6~\micron{} and 8~\micron{} are presented in Fig.~\ref{fig:mips}, and the uncertainties in the IRAC and MIPS photometry are summarized in Fig.~\ref{fig:err}. 
\subsection{IRS} Associated with our imaging programs, observations of the PN K648 \citep{pease28,howard97,alves00} in M15 were obtained with the \textit{Spitzer} Infrared Spectrograph (IRS) on 2005 November 17.77~UT using the short wavelength (5-15 $\mu$m), low resolution module (SL) in staring mode. All observations utilized the IRS blue peak up array at the target position of the PN, RA[J2000] = 21$^{\mbox{\small{h}}}$:29$^{\mbox{\small{m}}}$:59\arcsec.41; Dec[J2000] = $+12^{\circ}$:10\arcmin :25\arcsec.70, and the entire H$\alpha$ nebulosity of K648 (cf. Fig.~2 of \citet{alves00}) was contained within the spectrograph slit. The slit dimensions are 57\arcsec~$\times$ 3.6\arcsec, and the slit was oriented 18.92 degrees west of north. The SL spectroscopic astronomical observing templates consisted of 5 cycles of 60 second ramps. IRS BCDs were processed with version 13.0.1 of the IRS pipeline. A description of the IRS instrument and its operation is available in \citet{houck04}. Details of the calibration and raw data processing are specified in the IRS Pipeline Description Document, v1.0.\footnote{http://ssc.spitzer.caltech.edu/irs/dh/PDD.pdf} Post-pipeline processing was conducted to remove instrumental artifacts, perform background subtractions and to combine extracted spectral segments. Fatally bad pixels were interpolated over in individual BCDs using bad pixel masks provided by the SSC. Multiple data collection events were obtained at two different positions on the slit using \textit{Spitzer's} nod functionality. The two-dimensional BCDs were differenced to remove the background flux contribution and then the data were extracted with the \textit{Spitzer} IRS Custom Extractor (SPICE) (v1.1-beta15)\footnote{http://ssc.spitzer.caltech.edu/postbcd/doc/spice.gui\_manual.html} using the default point source extraction widths. 
The extracted, background-corrected data were combined using a weighted linear mean into a single output data file and clipped at the 3$\sigma$ level. At the time of reduction, the errors generated by the SSC pipeline were not reliable enough for sound interpretation, so they were instead estimated from the standard deviation of the flux at each wavelength bin. Where there were fewer than three points in a wavelength bin, the error is the quadrature sum of the errors in the files. The resultant spectrum is presented in Fig.~\ref{fig:pn} and derived line fluxes are summarized in Table~\ref{tab:k648lf}. The spectral lines were fitted using a least-squares Gaussian routine that fits the line center, line amplitude, continuum amplitude, and the slope of the continuum. The full width at half maximum was fixed at the resolution limit of the low resolution module. Integrating the flux over the 8~\micron{} IRAC bandpass yields a flux that agrees with the IRAC flux within the uncertainty limits. The strongest line in the mid-IR spectrum is the [\ion{Ne}{2}]$\lambda = 12.81$~\micron{} line, followed by the hydrogen recombination lines \ion{H}{1}~$6-5$~(Pf$\alpha$)$\lambda = 7.46$~\micron{} and \ion{H}{1}~$7-6$~(Hu$\alpha$)$\lambda = 12.37$~\micron. Emission from S$^{3+}$ is evident in the spectrum, although the fit to the line flux of [\ion{S}{4}]$\lambda = 10.51$~\micron{} is of marginal signal-to-noise (S/N $\approx 2.3$), while no [\ion{Ar}{3}]$\lambda = 8.99$~\micron \ emission is seen. Our detections of the mid-IR neon and sulfur lines are the first reported in the literature for K648. Abundance estimates derived from these forbidden lines are discussed in \S\ref{sec:pnk}. \section{DISCUSSION} \label{sec:disc} Our \textit{Spitzer} images of M15 for the first time clearly show both the stellar dust producers and the ICM dust (Fig.~\ref{fig:3color}), allowing a direct comparison to be made between the dust injection and dust survival rates. 
The brightest source of 70~\micron{} emission, IR1a, is blended at 70~\micron{} but visible as separate objects, IR1b and IR2, in the MIPS 24~\micron{} map. These sources are completely invisible on the IRAC maps, even at 8~\micron{}. A three-color image of 8, 24, and 70~\micron{} in which the 8 and 24~\micron{} images are degraded to match the 70~\micron{} resolution is presented in Fig.~\ref{fig:convolved}. This figure illustrates that IR1a is not unresolved stellar emission, but rather one or more starless dust clouds that are likely to be of an intra-cluster nature. The next brightest objects at 70~\micron{} are a pair of sources, IR3a and IR3b, that are situated at the fringes of the cluster and may not be physically associated with the cluster. IR3b was previously detected by 2MASS, but there are no known previous detections of IR3a. The probability of detecting non-member red sources in the field is sufficiently small to suggest that these sources are associated with the cluster. The only other 70~\micron{} source, IR4, is also located on the fringes of the cluster. Radial velocity measurements from \citet{pila00} confirm its membership. Table~\ref{tab:irsources} lists fluxes for IR3a, IR3b, IR4, and other possibly dusty sources, described in \S\ref{sec:stars}. All three of these sources have mid-IR colors that are consistent with post-AGB stars (Fig.~\ref{fig:mips}; \citealt{Groenewegen06}). In addition to these sources, the planetary nebula K648 (Ps~1) is also detected in all IRAC bands and at 24~\micron{}. The strong 24~\micron{} detection is likely due not only to dust, but also to line emission from [\ion{O}{4}] at 25.88~\micron{} and/or [\ion{Fe}{2}] at 25.98~\micron{}. \subsection{Mass-Losing Stars} \label{sec:stars} Figure~\ref{fig:massloss} shows the locations of the mass-losing AGB stars in M15. The coordinates and IRAC fluxes of these sources are listed in Table~\ref{tab:irsources}. 
These stars were identified by their locations on the IRAC CMDs (Fig.~\ref{fig:irac}), redward of the RGB. We find 24 mass-losing/dust-enshrouded stars and consider this a lower limit (due to potential source confusion) on the total number in the cluster. These stars also fall just redward of the RGB in the MIPS CMDs, which suggests that they could be post-AGB stars. Their IRAC colors, however, indicate that it is more likely that they are AGB stars approaching the end of the AGB phase of their evolution \citep{Groenewegen06}. The stars blueward of the RGB can be explained by absorption in the fundamental bands of CO in the 4.5~\micron{} band and SiO in the 8~\micron{} band. The red, mass-losing stars in M15 exhibit an uneven spatial distribution about the core of the cluster as projected on the sky (Fig.~\ref{fig:massloss}). These stars are offset from the core in the same sense as IR1a, IR1b, and IR2, and their distribution is less cusped than the visual light. Since M15 is 13.2 Gyr old \citep{mcnamara04}, most stars currently on the AGB have a zero-age MS mass of $\simeq 0.8$~M$_\odot$. These stars will soon end their lives as white dwarfs of $\simeq 0.5$~M$_\odot$, having lost approximately 0.3~M$_\odot$ during their post-main sequence evolution. The loose spatial distribution of this population could therefore be due to mass segregation, in which lower-mass stars are displaced to the outer regions of the cluster due to their high velocities, leaving a preferential concentration of high-mass stars near the center of the potential well of the cluster \citep{Spitzer87}. The MIPS CMDs (Fig.~\ref{fig:mips}) identify a population of approximately 23 post-AGB stars \citep{Groenewegen06}. The colors of IR3a, IR3b, and IR4 are similar to the colors of this population. These stars are bright at 24~\micron{}, but they are not detected above the background at 70~\micron{} (Table~\ref{tab:irsources}). 
They are distributed around the cluster center at an average radius of approximately 3.3\arcmin, and their positions are biased towards the southern side of the cluster, near IR3a, IR3b, and IR4. Mass segregation is most likely responsible for their locations at the fringes of the cluster. Although M15 is one of the most massive galactic globular clusters, it is surprising to find so many dusty objects in it, as the cluster metallicity of [Fe/H] $= -2.4$ places it amongst the most metal-poor galactic globular clusters. Dust forms from metal condensates, and it is difficult to understand how dust grains can form at such low metal abundances. It is likely that the metals condensing to form dust are produced in the stars themselves and brought to the surface near the end of their evolution. The fact that dust production does not seem to be inhibited at metallicities \ltsimeq 1\% solar implies that stellar mass loss must already have contributed dust very early on in the evolution of the Universe. Dust observed at high redshift is usually believed to originate in supernova explosions that result from core collapse in massive stars, but our observations suggest that at least some of the dust formed within the first few hundred million years may have been produced by stars of only a few solar masses (e.g., MS lifetimes from models by \citet{vassiliadis93}). Various investigators \citep{daulton02, amari01} suggest that dust grains can form more easily at low metallicity in carbon stars, as these stars produce carbon themselves. With less initial oxygen abundance to start with, it is easier for these stars to achieve C/O~\textgreater~1 in their carbon shells. With excess carbon available (after equal amounts of C and O are locked up in CO), carbon-chain molecules can form, from which dust grain condensation can proceed. 
However, although there is evidence for high molecular abundances in stellar atmospheres at low metallicity, dust production in these stars may be less efficient due to a lack of SiC seeds \citep{sloan-aph}. Furthermore, \citet{jacco-aph} suggests that the low optical depth in carbon stars in the Magellanic Clouds argues against large dust-to-gas ratios at low metallicity. For low-mass oxygen-rich stars, one would not expect this evolution to lead to effective dust production, as dredge-up only increases the C/O ratio and does not facilitate the formation of oxygen-rich dust grains. In any case, for oxygen-rich stars it is well established that nucleation sites must be available to condense dust grains onto \citep{jeong99}. These seeds are likely to be TiO or similar compounds, which include secondary elements that are not produced by the star itself. Other s-process seeds, such as Zr, may be dredged up into the atmospheres of these stars, but the limiting factor for dust production with such seeds is the oxygen abundance, as more oxygen will be locked into CO after dredge-up. Unfortunately, our observations do not allow us to draw any conclusions as to the abundances of secondary elements in the mass-losing stars in M15. The suspicion is that stars do adjust their structure until they finally can shed their mantles, as demonstrated by K648 in M15. From the analysis of spectra of GC giant stars, a picture is emerging in which metal-poor stars do become very cool while nonetheless exhibiting early-type spectra, because the low metal abundances give rise to weak absorption. They may not form much dust, but they may still be able to form enough of it to drive a wind. The mid-IR spectrum of 47~Tucanae~V1 suggests typical silicate dust grains and a typical mass loss rate of 10$^{-6}$~M$_\odot$~yr$^{-1}$ \citep{Jacco06}. 
\subsection{IR Spectrum of K648} \label{sec:pnk} Although UV and optical line ratios derived for K648 from previous investigations provide constraints on relative abundance ratios, several important $\alpha$-capture elements, such as S, Ar, and Ne, have ground configurations that produce only IR fine-structure lines. If lines from these ions are not observed and introduced into abundance models, the total abundance of these elements becomes uncertain \citep{hld03}. Measuring emission line flux from [\ion{Ne}{2}]$\lambda$12.81~\micron, [\ion{S}{4}]$\lambda$10.51~\micron, and [\ion{Ar}{3}]$\lambda$8.99~\micron \ is often observationally challenging from the ground. However, the sensitivity of the \textit{Spitzer} IRS affords an opportunity to set stringent limits on the emission line flux of these ions, resulting in better-constrained estimates of derived abundances. In addition, new radiative transition rates and collision strengths are now available as a result of the IRON project \citep{iron_1} to improve the accuracy of derived abundance ratios. Below, we discuss the analysis of our \textit{Spitzer} measurements (Table \ref{tab:k648lf}) of the [\ion{S}{4}] and [\ion{Ne}{2}] lines in K648 and present a reanalysis of the S/O and Ne/O ratios with contemporary atomic parameters using a simple model. Undertaking a full photoionization analysis of K648 (e.g., \citealt{howard97}) using the \textit{Spitzer} IR line fluxes is beyond the scope of this paper. \subsubsection{H$\beta$} \label{sec:hbeta} Accurate extinction correction of UV and optical lines with respect to H$\beta$ is required for abundance determinations. Interstellar extinction estimates to K648 range from $0.2 \leq A_{V} \leq 0.6$ \citep{garnett93}. However, at mid-IR wavelengths extinction is minimal ($\ltsimeq 0.02$~mag), and the Pf$\alpha$ and Hu$\alpha$ line detections (Fig.~\ref{fig:pn}) enable us to estimate the emitted H$\beta$ flux directly. 
Assuming Case B \citep{osterbrock99}, and adopting $T_{e} = 12,500$~K and a density of $\approx 10^{3}~\rm{cm}^{-3}$ \citep{garnett93}, we derived $F(\rm{H}\beta) = (1.91 \pm 0.30) \times 10^{-12}$~ergs~cm$^{-2}$~s$^{-1}$ and $F(\rm{H}\beta) = (1.13 \pm 0.36) \times 10^{-12}$~ergs~cm$^{-2}$~s$^{-1}$ from the Pf$\alpha$ and Hu$\alpha$ lines, respectively, using the intrinsic hydrogen emissivity ratios tabulated in \citet{humstor87}. These two estimates are in reasonable agreement (within the formal error) with each other, considering there is an absolute photometric uncertainty of $\approx 5\%$ \citep{houck04} between the two spectral orders ($5.2 - 8.7$~\micron{} and $7.4 - 14.0$~\micron{}). Our average $F(\rm{H}\beta)$ of $(1.52 \pm 0.23) \times 10^{-12}$~ergs~cm$^{-2}$~s$^{-1}$ is in good agreement with previous observational estimates, especially those obtained with large apertures \citep{garnett93}, and we will adopt this value in our abundance analysis. \subsubsection{Neon} \label{sec:neona} The ratio of the [\ion{Ne}{2}]$\lambda = 12.81$~\micron{} line flux to our derived average value of H$\beta$ was used to estimate the Ne$^{+}$/H$^{+}$ abundance. We have adopted a collision strength $\Upsilon$(T) = 0.283 (appropriate for T$_{e} = 10^{4}$~K, \citet{sartul94}) and an $A_{if}$ value of $8.59 \times 10^{-3}$~s$^{-1}$ from the NIST database and assumed an electron density of $N_{e} = 1.7 \times 10^{3}$~cm$^{-3}$ \citep{garnett93}. Rate coefficients, $q_{fi}$ \citep{iron_1}, and the level populations were computed assuming Ne$^{+}$, a $2p^{5}$ ion, is a two-level atom. 
Following \citet{rank78}, we define the relative abundance of Ne$^{+}$/H$^{+}$ as \begin{equation} \frac{Ne^{+}}{H^{+}} = \frac{(4\pi\ j_{H\beta}/N_{e}N_{p})N_{e}} {h\nu_{fi}A_{if}f_{f}} \times \frac{I([\mbox{\ion{Ne}{2}}])}{I(H\beta)}, \end{equation} \noindent where the H$\beta$ volume emissivity, $4\pi j_{H\beta}/N_eN_p$, is $1.0301 \times 10^{-25}$~erg~cm$^{3}$~s$^{-1}$ interpolated for the assumed N$_{e}$ \citep{humstor87}, $f_{f} = 9.86 \times 10^{-4}$ is the population ratio of the upper to lower state, and I([\ion{Ne}{2}]), the observed neon line flux, is $(1.74 \pm 0.08) \times 10^{-13}$~erg~cm$^{-2}$~s$^{-1}$ \ (Table~\ref{tab:k648lf}). This yields a ratio of Ne$^{+}$/H$^{+} = (1.53 \pm 0.21) \times 10^{-5}$. The relative abundance of Ne$^{2+}$/H$^{+}$ was estimated from optical measurements of the [\ion{Ne}{3}]$\lambda 3869$~\AA \ and [\ion{Ne}{3}]$\lambda 3967$~\AA \ lines \citep{adams84}, de-reddened assuming $c=0.12$ and a \citet{seaton79} extinction law. Using the [\ion{Ne}{3}] to H$\beta$ ratios (see Table~\ref{tab:k648abund}), the relative populations were computed using a multi-level atom program incorporating the best $A_{ij}$(s$^{-1}$) values and collision strengths available from the NIST database and the literature \citep{mohesduf04}, which is similar to code originally described by \citet{shawduf95}. Summing over all ions, we find a total Ne/H$ = (2.39 \pm 0.27) \times 10^{-5}$ and a Ne/O $= 0.48 \pm 0.14$, adopting an O/H $= (5.0 \pm 1.3) \times 10^{-5}$ \citep{pena93}. \subsubsection{Sulfur} \label{sec:sulfa} The relative sulfur abundance was computed in a similar manner to the neon abundance described in \S\ref{sec:neona}, although all relative populations were determined using the \citet{mohesduf04} code. 
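As a numerical cross-check of the neon estimate, equation (2) can be evaluated directly with the constants quoted above; this minimal sketch reproduces the derived Ne$^{+}$/H$^{+}$ ratio:

```python
# Numerical check of equation (2) using the values quoted in the text.
h = 6.62607e-27         # Planck constant, erg s
c = 2.99792458e10       # speed of light, cm/s

emiss = 1.0301e-25      # 4*pi*j_Hbeta / (N_e N_p), erg cm^3 s^-1
N_e = 1.7e3             # adopted electron density, cm^-3
A_if = 8.59e-3          # [Ne II] transition probability, s^-1
f_f = 9.86e-4           # upper-to-lower population ratio
nu = c / 12.81e-4       # frequency of the 12.81 micron line, Hz

I_NeII = 1.74e-13       # observed [Ne II] flux, erg cm^-2 s^-1
I_Hbeta = 1.52e-12      # derived average H-beta flux, erg cm^-2 s^-1

Ne_plus = (emiss * N_e) / (h * nu * A_if * f_f) * (I_NeII / I_Hbeta)
print(f"Ne+/H+ = {Ne_plus:.2e}")  # ~1.53e-05, matching the quoted ratio
```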
The \textit{Spitzer} observation of [\ion{S}{4}] was used to determine the S$^{3+}$/H$^{+}$ population, while the optical fluxes reported by \citet{barker83} corresponding to H$\beta$ (assuming $j(H\alpha)/j(H\beta) = 2.81$; \citet{osterbrock99}) were used to estimate the relative S$^{2+}$/H$^{+}$ and S$^{+}$/H$^{+}$ abundances (Table~\ref{tab:k648abund}). We find an upper limit to the total S/H of $(4.28 \pm 1.28) \times 10^{-8}$, which is $\approx 2.5$ times lower than that inferred by \citet{garnett93}. Adopting an O/H $= (5.0 \pm 1.3) \times 10^{-5}$ \citep{pena93} yields a S/O $= (8.56 \pm 3.39) \times 10^{-4}$. \subsubsection{Abundance Comments} \label{sec:nesc} Our new estimates of [S/O] $\le -2.64$ and [Ne/O] $= +0.54$ confirm that K648 is under-enriched in S as compared to O \citep{garnett93}, while Ne/O is enhanced with respect to solar. The Ne enhancement is seen in other halo population PNe, such as BB-1 \citep{hld03}. \citet{garnett93} argue that contamination of He-burning products by $\alpha$-captures at high temperature could account for the enhanced Ne. \citet{bianchi01}, based on analysis of \textit{Hubble Space Telescope} (HST) \textit{Faint Object Spectrograph} (FOS) spectra of the central star, suggest that the nebular shell was ejected by a low-mass He-burning progenitor that has subsequently undergone a late thermal pulse, perhaps similar to the evolution of objects akin to FG Sge \citep{gehrz05}. Our derived neon abundance, based on the fine-structure line, suggests that dredge-up from the stellar core may be an important mechanism for polluting the expelled nebular material of slowly evolving, young PNe. \subsection{Dust in the ICM} \label{sec:dust} Assuming that the diffuse emission detected in the \textit{Spitzer} images arises from ICM material, we can compute a dust mass using the observed SEDs. 
The approximate temperature of the ICM dust was derived by least-squares fitting a greybody model to the SED, using fluxes measured in a 130.5\arcsec{} $\times$ 130.5\arcsec{} aperture centered at the same right ascension and declination coordinates in the images at all wave bands. Our choice of aperture size is equivalent to that used by \citet{evans03} to facilitate direct comparison. The large wavelength range of the \textit{Spitzer} SED enables us to distinguish between the contribution of a stellar blackbody (peaking near 0.64~\micron{} and dominated by K0 stars), and that of thermally radiating dust that generates an IR excess at wavelengths greater than 24~\micron{}. Use of a two-component model, incorporating a stellar blackbody with a temperature of $4699 \pm 58$~K and a dust blackbody with $T_{d} = 70 \pm 2$~K, gives a rough fit to the data (Fig.~\ref{fig:sed}; Table~\ref{tab:obs_sf}). The fit produces a large reduced $\chi^2$ value of 5.26, suggesting that the integrated flux within our aperture sums the emission from stars of many disparate spectral types that are not well represented by a simple blackbody of a single emissivity and temperature. Another source of uncertainty in the fit is the effect of crowding in the 2MASS data. This could lead to oversubtraction of the background and overestimation of the flux densities \citep{jacco05}. The mass of the ICM was determined using the methodology described in \citet{evans03}. We assume that the dust is optically thin, which yields the following expression: \begin{equation} \frac{M_d}{M_\odot} = 4.79\times 10^{-17}\ f_\nu (mJy)\ \frac{D^{2}_{kpc}}{\kappa_\nu B(\nu,T_d)}, \end{equation} \noindent where $D_{kpc}$ is the distance to M15 in kiloparsecs (Table~\ref{tab:m15_params}), $\kappa_\nu$ is the dust absorption coefficient in cm$^2$ g$^{-1}$, $B(\nu,T_d)$ is the Planck function in cgs units, and $T_d$ is the dust temperature. 
$f_\nu$ at 70~\micron{} is 691.2 $\pm$ 61.1 mJy (Table~\ref{tab:obs_sf}), and $\kappa_\nu$ is taken from \citet{ossen94} to be 56 $\pm$ 11 cm$^2$ g$^{-1}$ at 70~\micron{}, assuming a standard MRN dust distribution \citep{mrn} and an ISM-type composition consisting of graphite and silicate grains. We derive a total dust mass of $(9 \pm 2) \times 10^{-4}$~M$_\odot$, which agrees to within a factor of two with the value cited by \citet{evans03} of $(4.8 \pm 1.6) \times 10^{-4}$~M$_\odot$, and is approximately 2-4 times smaller than the dust mass predicted by equation (1). The discrepancy between the {\it Spitzer} and {\it ISO} calculated dust masses may be largely due to the different choice of $\kappa_\nu$, as the ICM flux densities at 70~\micron{} agree within the stated errors (Table~\ref{tab:obs_sf}). $\kappa_\nu$ is the most uncertain parameter in the dust mass estimate, as its value depends largely on composition and density assumptions. Therefore, we note that $\kappa_\nu$ could be up to an order of magnitude larger than the value we have invoked here. The diffuse emission from ICM dust in M15 is located approximately 17\arcsec{} to the west of the cluster core. The paucity of diffuse dust toward the cluster center, where the gravitational potential well is deepest, is puzzling. One possible explanation for this asymmetry is a collection of millisecond pulsars (PSRs) near the core of M15 \citep{sun02}. Seven PSRs are located within 17\arcsec{} of the core (Fig.~\ref{fig:radio}), and the radiation environments associated with these objects could lead to destruction of dust grains by sputtering or other ablation processes and may also inhibit dust production in stellar winds. The PSR nearest to the ICM dust distribution observed in the MIPS image, PSR2129+1210F, is located on the northeast outer edge of the 70~\micron{} emission \citep{taylor93}. 
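As a cross-check, equation (3) can be evaluated numerically. The 10.3~kpc distance below is an assumed value for M15 (the adopted distance is listed in Table~\ref{tab:m15_params}); with it, this sketch reproduces the quoted dust mass:

```python
import math

h = 6.62607e-27         # Planck constant, erg s
c = 2.99792458e10       # speed of light, cm/s
k_B = 1.38065e-16       # Boltzmann constant, erg/K

def planck(nu, T):
    """Planck function B(nu, T) in cgs units."""
    return 2.0 * h * nu**3 / c**2 / math.expm1(h * nu / (k_B * T))

f_nu = 691.2            # mJy, 70 micron ICM flux quoted in the text
D_kpc = 10.3            # kpc, assumed distance to M15
kappa = 56.0            # cm^2/g, dust absorption coefficient at 70 micron
T_d = 70.0              # K, fitted dust temperature
nu = c / 70e-4          # Hz, frequency at 70 micron

M_d = 4.79e-17 * f_nu * D_kpc**2 / (kappa * planck(nu, T_d))
print(f"ICM dust mass ~ {M_d:.1e} M_sun")  # ~9.7e-04, i.e. (9 +/- 2)e-4
```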
If, on the other hand, each mass-losing star has contributed 0.15~M$_\odot$ to the ICM on average (assuming that each star will lose 0.3~M$_\odot$ over its entire lifetime), then we see that an ICM dust mass of $1 \times 10^{-3}$~M$_\odot$ corresponds to mass lost from $\sim$10$^2$ stars. This suggests that the ICM is short-lived, as this many stars will have passed through the AGB superwind phase, defined as the phase in which the mass loss rate exceeds the nuclear burning mass consumption rate (\citet{jacco99b} suggests $\dot{M} \sim 10^{-4} - 10^{-5}$~M$_\odot$~yr$^{-1}$ for low-mass stars), in only $\approx$ 10$^{6}$~yr, which is much shorter than the cluster's relaxation timescale. The ICM dust therefore cannot be expected to have relaxed and assumed the global shape of the gravitational potential well. If this is the case, the offset of the dust cloud from the center of the cluster would not come as a surprise. Sources of the ICM dust include the post-main-sequence mass-losing stars identified in Fig.~\ref{fig:massloss}. If we assume that the dust-to-gas ratio scales in proportion to metallicity during the superwind phase, as indicated by \citet{marshall04}, then at the metallicity of M15, a dust mass loss of 10$^{-10}$ M$_{\odot}$~yr$^{-1}$ is expected \citep{becker00}. With this mass loss rate, we again find that the dust has been accumulating for approximately 1$\times$10$^{6}$ years, significantly shorter than the time between subsequent passages through the galactic plane \citep{evans03}. This short time scale suggests that dust does not survive long in the ICM. Processes that could be responsible for removing dust from the ICM include ram pressure by the galactic halo gas, radiation-driven outflow or photo-destruction.
\section{CONCLUSIONS} \label{sec:concl} Analysis of our \textit{Spitzer} image data on the core of the globular cluster M15 shows strong evidence for the presence of intracluster medium (ICM) dust in the cluster core, with a mass of $(9 \pm 2) \times 10^{-4}$~M$_\odot$ and with an equilibrium temperature of $\approx 70$~K. This is the first secure, high-signal-to-noise detection of ICM dust in a globular cluster. Also present surrounding the core are populations of dusty AGB and post-AGB stars, along with the planetary nebula K648. Using IRS spectral data, we have observed both the [\ion{S}{4}] and [\ion{Ne}{2}] fine structure lines in K648 and have derived abundance estimates. The unique capabilities of {\it Spitzer} have enabled us to identify both the interstellar dust and the dust producers in M15. This is surprising at such low metallicity ([Fe/H] $= -2.4$), and may have implications for dust production in the early universe. The mass of the ICM dust in M15 suggests that it has been accumulating for $\sim 10^6$ years, which is a factor of ten shorter than the time since the last galactic plane crossing. The dust mass is also approximately 4 times smaller than the mass predicted by \citet{evans03}. Both of these results imply that such dust does not survive long compared to its production rate, and is thus part of a stochastic process. \acknowledgments We acknowledge helpful discussions with G. Schwarz and B. Moore regarding approaches to abundance modeling in PNe, and BDM for providing source code for multi-level atom calculations. M.~L.~B., C.~E.~W., L.~A.~H., E.~P., and R.~D.~G. are supported in part by NASA through \textit{Spitzer} contracts 1256406, 1215746, 1268006, and 1276760 issued by JPL/Caltech to the University of Minnesota, and by National Science Foundation grant AST02-05814.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. \newpage
\section{Qualitative Analysis} We visualize the distribution of source and target data in the feature space (output of the contrastive head) with the t-SNE \cite{van2008visualizing} plots in Figure \ref{fig:tsne}. In particular, we focus on the Ar,Pr,Rw $\rightarrow$ Cl case of the Office-Home dataset: the red dots represent the source domain, the blue dots are the known samples of the target domain, and the green dots the unknown ones. We take three snapshots of the data on the hyperspherical embedding: at the beginning, when the backbone network is inherited from SupClr \cite{NEURIPS2020_supclr} pre-trained on ImageNet; immediately before the first \emph{break-point} (i.e., before the application of self-training); and at the end of the training process. The intermediate plot shows that source balancing and style transfer already favor a good alignment of most of the known (blue) target classes with the respective source known clusters (red). The last plot indicates that self-training further improves the alignment, while the unknown samples (green) remain in the regions among the clusters. Zooming in on a known sample (the bike) and on an unknown sample (the speaker), we observe how their positions change during training. The first moves from an isolated region, where its top five neighbors show high class confusion, towards the correct bike class. The second starts from a neighborhood populated by several samples of the classes webcam and fan and finally appears in a different region shared mostly by other instances of the class speaker. \begin{figure*}[t!] \centering \includegraphics[width=0.99\textwidth]{tsne_silvia.pdf} \caption{Qualitative analysis on the Ar,Pr,Rw $\rightarrow$ Cl case of the Office-Home dataset. The red dots represent the source domain, the blue dots are the known samples of the target domain, and the green dots are the unknown ones.
HyMOS\xspace 20k: source balancing and style transfer already favor a good alignment of most of the known target classes with the respective source known cluster. HyMOS\xspace 40k: self-training further moves the target known samples towards the respective source clusters, while the unknown samples remain in the regions among the clusters. The zooms show how the neighborhoods of a known (bike) and an unknown (speaker) target sample change during training. } \label{fig:tsne} \end{figure*} \section{Further experiments} \noindent\textbf{Complete results with additional metrics} In Table \ref{tab:sota} we present the same results as in the main paper, along with additional metrics: the average class accuracy over known classes $OS^*$, the accuracy on the unknown class $UNK$, and the average accuracy over all classes $OS$, defined as $OS=\frac{|\mathcal{C}_s |}{|\mathcal{C}_s | + 1} \times {OS^*}+ \frac{1}{|\mathcal{C}_s | + 1} \times {UNK}$. \noindent\textbf{Robustness to temperature variation} The temperature $\tau$ in the contrastive loss (main paper Eq. (1)) is kept fixed to the default value $0.07$, as suggested in \cite{tack2020csi}. We verified experimentally that the results are stable even when tuning $\tau$, and always remain higher than that of ROS (65.3) (see Figure \ref{fig:ablationtau}). \begin{figure}[t!]
\centering \hspace{-4mm} \resizebox{0.27\textwidth}{!}{ \includegraphics[width=0.25\textwidth]{Temperature.pdf} } \caption{Sensitivity analysis for the temperature value $\tau$ on Office-Home.} \label{fig:ablationtau}\vspace{3mm} \end{figure} \begin{table*}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{cc cccc cccc cccc cccc| } \hline \multicolumn{18}{c|}{\textbf{Office31}} \\ \hline & & \multicolumn{4}{|c|}{D,A $\rightarrow$ W } & \multicolumn{4}{c|}{W,A $\rightarrow$ D } & \multicolumn{4}{c|}{W,D$\rightarrow$ A } & \multicolumn{4}{c|}{Avg.}\\ & & \multicolumn{1}{|c}{OS} & OS* & UNK & \multicolumn{1}{c|}{\textbf{HOS}} & OS & OS* & UNK & \multicolumn{1}{c|}{\textbf{HOS}} & OS & OS* & UNK & \multicolumn{1}{c|}{\textbf{HOS}} & OS & OS* & UNK & \multicolumn{1}{c|}{\textbf{HOS}}\\ \hline \multicolumn{1}{c}{\multirow{4}{*}{Source Combine }} & \multicolumn{1}{c|}{Inheritable \cite{kundu2020towards}} & 69.0 & 68.1 & 87.6 & \multicolumn{1}{c|}{76.6} & 74.7 & 74.1 & 85.6 & \multicolumn{1}{c|}{79.5} & 63.7 & 62.9 & 78.9 & \multicolumn{1}{c|}{70.0} & 69.1 & 68.4 & 84.0 & 75.4 \\ & \multicolumn{1}{c|}{ROS \cite{bucci2020effectiveness}} & 82.2 & 82.3 & 81.5 &\multicolumn{1}{c|}{81.8} & 95.3 & 96.5 & 68.7 & \multicolumn{1}{c|}{80.1} & 53.8 & 52.2 & 84.9 & \multicolumn{1}{c|}{64.7} & 77.1 & 77.0 & 78.4 & 75.5 \\ & \multicolumn{1}{c|}{CMU \cite{fu2020learning}} & 96.1 & 98.7 & 44.6 & \multicolumn{1}{c|}{61.4} & 96.2 & 98.7 & 47.3 & \multicolumn{1}{c|}{64.0} & 73.1 & 74.5 & 45.4 & \multicolumn{1}{c|}{56.4} & 88.5 & 90.6 & 45.8 & 60.6 \\ & \multicolumn{1}{c|}{DANCE \cite{saito2020dance}} & 95.9 & 99.5 & 23.9 & \multicolumn{1}{c|}{38.5} & 97.3 & 100.0 & 42.6 & \multicolumn{1}{c|}{59.7} & 78.0 & 79.6 & 45.6 & \multicolumn{1}{c|}{58.0} & 90.4 & 93.0 & 37.3 & 52.0 \\ & \multicolumn{1}{c|}{PGL \cite{pgl-luo20b-icml20}} & 94.1 & 97.4 & 27.8 & \multicolumn{1}{c|}{43.3} & 92.2 & 95.6 & 23.5 & \multicolumn{1}{c|}{37.7} & 77.1 & 79.8 & 22.9 & \multicolumn{1}{c|}{35.6} & 87.8 & 90.9 
& 24.7 & 38.9\\ \hline \multicolumn{1}{c}{\multirow{2}{*}{Multi-Source }} & \multicolumn{1}{c|}{MOSDANET \cite{rakshit2020multi}} & 97.7 & 99.4 & 43.5 & \multicolumn{1}{c|}{60.5} & 97.0 & 99.0 & 55.9 & \multicolumn{1}{c|}{71.5} & 80.9 & 81.5 & 67.6 & \multicolumn{1}{c|}{\textbf{73.9}} & 91.9 & 93.3 & 55.7 & 68.6\\ & \multicolumn{1}{c|}{\textbf{HyMOS\xspace}} & 96.1 & 96.6 & 84.6 & \multicolumn{1}{c|}{\textbf{90.2}} & 96.7 & 97.3 & 83.6 & \multicolumn{1}{c|}{\textbf{89.9}} & 49.6 & 48.0 & 83.1 & \multicolumn{1}{c|}{60.8} & 80.8 & 80.6 & 83.8 & \textbf{80.3}\\ \hline \end{tabular} \begin{tabular}{cccc cccc cccc } \hline \multicolumn{12}{|c}{\textbf{DomainNet}} \\ \hline \multicolumn{4}{|c|}{I,P $\rightarrow$ S } & \multicolumn{4}{c|}{I,P $\rightarrow$ C } & \multicolumn{4}{c}{Avg.}\\ \multicolumn{1}{|c}{OS} & OS* & UNK & \multicolumn{1}{c|}{\textbf{HOS}} & OS & OS* & UNK & \multicolumn{1}{c|}{\textbf{HOS}} & OS & OS* & UNK & \multicolumn{1}{c}{\textbf{HOS}}\\ \hline \multicolumn{1}{|c}{24.9} & 24.5 & 60.3 & \multicolumn{1}{c|}{34.8} & 33.5 & 33.1 & 65.6 & \multicolumn{1}{c|}{44.0} & 29.2 & 28.8 & 62.9 & 39.4 \\ \multicolumn{1}{|c}{31.7} & 31.3 & 77.5 & \multicolumn{1}{c|}{44.5} & 41.0 & 40.7 & 73.6 & \multicolumn{1}{c|}{52.4} & 36.4 & 36.0 & 75.5 & 48.5 \\ \multicolumn{1}{|c}{48.0} & 48.3 & 26.3 & \multicolumn{1}{c|}{38.1} & 49.6 & 49.8 & 27.6 & \multicolumn{1}{c|}{35.5} & 48.8 & 49.1 & 27.0 & 36.8\\ \multicolumn{1}{|c}{45.6} & 45.8 & 22.3 & \multicolumn{1}{c|}{30.0} & 54.4 & 54.7 & 28.7 & \multicolumn{1}{c|}{37.6} & 50.0 & 50.3 & 25.5 & 33.8 \\ \multicolumn{1}{|c}{54.9} & 55.3 & 11.1 & \multicolumn{1}{c|}{18.5} & 59.6 & 60.1 & 11.6 & \multicolumn{1}{c|}{19.4} & 57.3 & 57.7 & 11.4 & 19.0 \\ \hline \multicolumn{1}{|c}{30.2} & 29.9 & 60.2 & \multicolumn{1}{c|}{40.0} & 31.8 & 31.6 & 51.8 & \multicolumn{1}{c|}{39.3} & 31.0 & 30.8 & 56.0 & 39.6\\ \multicolumn{1}{|c}{43.6} & 43.2 & 86.0 & \multicolumn{1}{c|}{\textbf{57.5}} & 47.8 & 47.4 & 85.5 & 
\multicolumn{1}{c|}{\textbf{61.0}} & 45.7 & 45.3 & 85.8 & \textbf{59.3}\\ \hline \end{tabular} } \resizebox{\textwidth}{!}{ \begin{tabular}{cc cccc cccc cccc cccc cccc} \hline \multicolumn{22}{c}{\textbf{Office-Home}} \\ \hline & & \multicolumn{4}{|c|}{Ar,Pr,Cl $\rightarrow$ Rw } & \multicolumn{4}{c|}{Ar,Pr,Rw $\rightarrow$ Cl } & \multicolumn{4}{c|}{Cl,Pr,Rw $\rightarrow$ Ar } & \multicolumn{4}{c|}{Cl,Ar,Rw $\rightarrow$ Pr } & \multicolumn{4}{c}{Avg.}\\ & & \multicolumn{1}{|c}{OS} & OS* & UNK & \multicolumn{1}{c|}{\textbf{HOS}} & OS & OS* & UNK & \multicolumn{1}{c|}{\textbf{HOS}} & OS & OS* & UNK & \multicolumn{1}{c|}{\textbf{HOS}} & OS & OS* & UNK & \multicolumn{1}{c|}{\textbf{HOS}} & OS & OS* & UNK & \multicolumn{1}{c}{\textbf{HOS}} \\ \hline \multicolumn{1}{c}{\multirow{4}{*}{Source Combine }} & \multicolumn{1}{c|}{Inheritable \cite{kundu2020towards}} & 58.6 & 58.4 & 68.9 & \multicolumn{1}{c|}{63.2} & 44.3 & 43.7 & 66.5 & \multicolumn{1}{c|}{52.6} & 36.4 & 35.5 & 77.6 & \multicolumn{1}{c|}{48.7} & 58.6 & 58.5 & 63.3 & \multicolumn{1}{c|}{60.7} & 49.5 & 49.1 & 69.1 & 56.3\\ & \multicolumn{1}{c|}{ROS \cite{bucci2020effectiveness}} & 69.9 & 69.8 & 76.9 & \multicolumn{1}{c|}{\textbf{73.0}} & 57.1 & 57.1 & 57.6 & \multicolumn{1}{c|}{57.3} & 57.5 & 57.2 & 66.7 & \multicolumn{1}{c|}{61.6} & 70.3 & 70.3 & 68.0 & \multicolumn{1}{c|}{69.1} & 63.7 & 63.6 & 67.3 & 65.3\\ & \multicolumn{1}{c|}{CMU \cite{fu2020learning}} & 62.9 & 62.5 & 81.5 & \multicolumn{1}{c|}{70.8} & 35.8 & 34.6 & 89.9 & \multicolumn{1}{c|}{50.0} & 44.6 & 43.7 & 87.0 & \multicolumn{1}{c|}{58.1} & 60.6 & 60.1 & 81.7 & \multicolumn{1}{c|}{69.3} & 51.0 & 50.2 & 85.0 & 62.1 \\ & \multicolumn{1}{c|}{DANCE \cite{saito2020dance}} & 83.9 & 85.6 & 4.5 & \multicolumn{1}{c|}{12.4} & 66.8 & 68.0 & 9.2 & \multicolumn{1}{c|}{16.1} & 72.7 & 74.1 & 10.7 & \multicolumn{1}{c|}{18.6} & 85.1 & 86.7 & 13.4 & \multicolumn{1}{c|}{22.9} & 77.1 & 78.6 & 9.4 & 17.5 \\ & \multicolumn{1}{c|}{PGL \cite{pgl-luo20b-icml20}} & 83.4 & 
84.6 & 26.2 & \multicolumn{1}{c|}{40.0} & 62.0 & 63.0 & 21.0 & \multicolumn{1}{c|}{31.5} & 69.5 & 70.6 & 20.5 & \multicolumn{1}{c|}{31.8} & 82.6 & 83.8 & 28.2 & \multicolumn{1}{c|}{42.2} & 74.4 & 75.5 & 24.0 & 36.4 \\ \hline \multicolumn{1}{c}{\multirow{2}{*}{Multi-Source }} & \multicolumn{1}{c|}{MOSDANET \cite{rakshit2020multi}} & 78.4 & 79.4 & 55.0 & \multicolumn{1}{c|}{65.0} & 67.5 & 68.1 & 40.9 & \multicolumn{1}{c|}{51.1} & 61.0 & 61.3 & 48.7 & \multicolumn{1}{c|}{54.3} & 81.1 & 82.2 & 55.0 & \multicolumn{1}{c|}{65.9} & 72.0 & 72.8 & 49.9 & 59.1\\ & \multicolumn{1}{c|}{\textbf{HyMOS\xspace}} & 69.5 & 69.4 & 72.7 & \multicolumn{1}{c|}{71.0} & 52.5 & 51.7 & 86.0 & \multicolumn{1}{c|}{\textbf{64.6}} & 50.1 & 49.4 & 84.1 & \multicolumn{1}{c|}{\textbf{62.2}} & 71.5 & 71.5 & 70.6 & \multicolumn{1}{c|}{\textbf{71.1}} & 60.9 & 60.5 & 78.4 & \textbf{67.2} \\ \hline \end{tabular} } \vspace{3mm}\caption{Accuracy (\%) averaged over three runs for each method on the Office31, DomainNet and Office-Home datasets.} \label{tab:sota} \end{table*} \section{Implementation Details} We implemented HyMOS with an architecture composed of the ResNet-50 \cite{resnet} backbone that corresponds to the \emph{encoder} and two fully connected layers of dimension 2048 and 128 which define the \emph{contrastive head}. The overall network is trained by minimizing the contrastive loss (see the main paper, Eq. (1)), setting $\tau=0.07$ as in \cite{tack2020csi}. Our distance-based classifier lives in the hyperspherical space produced by the model, whose dimension is not constrained by the number of classes. As a consequence, the architecture remains exactly the same for all our experiments. We initialize the backbone network with the ImageNet pre-trained SupClr model \cite{NEURIPS2020_supclr} and train HyMOS\xspace for 40k iterations with a balanced data mini-batch which contains one sample for each class of every source domain. 
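The balanced mini-batch just described (one sample per class for every source domain) can be sketched as a simple sampler. The dataset interface below is a hypothetical stand-in, not the authors' code.

```python
import random

def balanced_batch(data_by_domain_class, rng=random):
    """Draw one sample id per (source domain, class) pair.

    data_by_domain_class: dict mapping (domain, cls) -> list of sample ids.
    Returns a batch of size n_domains * n_classes.
    """
    return [rng.choice(ids) for ids in data_by_domain_class.values()]

# Toy example: 2 source domains, 3 classes, a few sample ids each.
data = {(d, c): [f"{d}_{c}_{i}" for i in range(5)]
        for d in ("art", "product") for c in range(3)}
batch = balanced_batch(data, random.Random(0))
print(len(batch))  # 6 = 2 domains x 3 classes
```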
The learning rate grows from 0 to 0.05 (at iteration 2500) with a linear warm-up schedule, and then decreases back to 0 at the end of training (iteration 40k) through a cosine annealing schedule. We use the LARS optimizer \cite{LARS} with momentum 0.9 and weight decay $10^{-6}$. For the first 20k iterations we train only on source data, using target data exclusively for the style transfer based data augmentation for the supervised contrastive learning objective. We then perform an eval step, which we call a self-training \emph{break-point}, in order to start including confident known target samples in the learning objective. We perform \emph{break-point} eval steps every 5k iterations until the end of training. \input{evalalgorithm} For style transfer data augmentation we use the standard VGG19-based AdaIN model with default hyperparameters \cite{huang2017adain}, trained with content data from the available source domains and target samples as style data. As for the instance transformations, we apply the same data augmentations originally proposed for SimCLR \cite{simclr2020}, extending them with style transfer. Specifically, we use random resized crop with scale in $[0.08, 1]$ and random horizontal flip. Style transfer is applied with probability $p=0.5$ on the source images, while the remaining non-stylized images are transformed via color jittering with probability $p=0.8$ and grayscale with probability $p=0.2$. The final evaluation procedure of HyMOS\xspace is summarized in Algorithm \ref{alg:eval}.
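The learning-rate schedule described above (linear warm-up to 0.05 at iteration 2500, followed by cosine annealing back to 0 at iteration 40k) can be written in a few lines; this is a sketch of the stated schedule, not the training code itself.

```python
import math

PEAK_LR, WARMUP, TOTAL = 0.05, 2500, 40000

def learning_rate(step):
    """Linear warm-up to PEAK_LR, then cosine annealing down to zero."""
    if step < WARMUP:
        return PEAK_LR * step / WARMUP
    progress = (step - WARMUP) / (TOTAL - WARMUP)  # 0 at warm-up end, 1 at TOTAL
    return 0.5 * PEAK_LR * (1 + math.cos(math.pi * progress))

print(learning_rate(0), learning_rate(WARMUP), learning_rate(TOTAL))
# 0 at the start, peak 0.05 at iteration 2500, back to ~0 at iteration 40k
```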
\section{Introduction} \input{intro_final} \section{Related works} \input{related} \section{Method} \label{sec:method} \input{method} \section{Experiments} \input{experiments_final} \section{Conclusions} \input{conclusions} \section*{Appendix} \input{appendix} {\small \bibliographystyle{ieee_fullname} \section{Ablation Analysis} \begin{table} \resizebox{1.01\linewidth}{!}{ \begin{tabular}{c c c c c c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Method}} &\multicolumn{5}{c|}{\textbf{Office-Home}} \\ \cline{2-6} \multicolumn{1}{|c|}{}& \multicolumn{1}{c|}{$\rightarrow$ Rw } & \multicolumn{1}{c|}{ $\rightarrow$ Cl } & \multicolumn{1}{c|}{ $\rightarrow$ Ar } & \multicolumn{1}{c|}{ $\rightarrow$ Pr } & \multicolumn{1}{c|}{\textbf{Avg.}}\\ \hline \multicolumn{1}{|c|}{\textbf{HyMOS\xspace}} & \multicolumn{1}{c|}{71.0} & \multicolumn{1}{c|}{64.6} & \multicolumn{1}{c|}{62.2} & \multicolumn{1}{c|}{71.1} & \textbf{67.2} \\ \multicolumn{1}{|c|}{w/o Source Balance} & \multicolumn{1}{c|}{69.2} & \multicolumn{1}{c|}{58.4} & \multicolumn{1}{c|}{60.6} & \multicolumn{1}{c|}{70.2} & 64.6\\ \multicolumn{1}{|c|}{Style Tr. 
Target Known (Oracle)} & \multicolumn{1}{c|}{70.7} & \multicolumn{1}{c|}{63.7} & \multicolumn{1}{c|}{62.5} & \multicolumn{1}{c|}{71.2} & 67.0 \\ \multicolumn{1}{|c|}{w/o Style Transfer} & \multicolumn{1}{c|}{69.5} & \multicolumn{1}{c|}{56.4} & \multicolumn{1}{c|}{60.0} & \multicolumn{1}{c|}{68.3} & 63.6 \\ \multicolumn{1}{|c|}{w/o Self-Training} & \multicolumn{1}{c|}{72.2} & \multicolumn{1}{c|}{55.0} & \multicolumn{1}{c|}{58.6} & \multicolumn{1}{c|}{71.5} & 64.3\\ \hline\hline \multicolumn{1}{|c|}{Improved Cross-Entropy} & \multicolumn{1}{c|}{61.5} & \multicolumn{1}{c|}{61.2} & \multicolumn{1}{c|}{58.1} & \multicolumn{1}{c|}{57.1} & 59.5 \\ \hline\hline \multicolumn{1}{|c|}{ROS \cite{bucci2020effectiveness}} & \multicolumn{1}{c|}{73.0} & \multicolumn{1}{c|}{57.3} & \multicolumn{1}{c|}{61.6} & \multicolumn{1}{c|}{69.1} & 65.3\\ \multicolumn{1}{|c|}{+ Source Balance} & \multicolumn{1}{c|}{75.2} & \multicolumn{1}{c|}{55.5} & \multicolumn{1}{c|}{62.6} & \multicolumn{1}{c|}{66.9} & 65.0\\ \multicolumn{1}{|c|}{+ Style Transfer } & \multicolumn{1}{c|}{62.6} & \multicolumn{1}{c|}{46.3} & \multicolumn{1}{c|}{52.0} & \multicolumn{1}{c|}{60.1} & 55.2\\ \multicolumn{1}{|c|}{+ Self-Training} & \multicolumn{1}{c|}{69.6} & \multicolumn{1}{c|}{59.1} & \multicolumn{1}{c|}{61.5} & \multicolumn{1}{c|}{60.5} & 62.7\\ \multicolumn{1}{|c|}{+ S. 
Balance, Style Tr., Self-Train.} & \multicolumn{1}{c|}{62.0} & \multicolumn{1}{c|}{40.4} & \multicolumn{1}{c|}{52.2} & \multicolumn{1}{c|}{62.4} & 54.3 \\ \hline \end{tabular} } \caption{Ablation Study, HOS results.}\vspace{3mm} \label{tab:ablation} \end{table} \begin{table*}[h] \resizebox{\textwidth}{!}{ \begin{tabular}[t]{|c c c c c c c c|} \hline \multicolumn{8}{|c|}{\textbf{Multi-Source Closed-Set}} \\ \hline Method & \multicolumn{1}{|c|}{$\rightarrow$ clp } & \multicolumn{1}{|c|}{$\rightarrow$ inf } & \multicolumn{1}{|c|}{$\rightarrow$ pnt } & \multicolumn{1}{|c|}{$\rightarrow$ qdr } & \multicolumn{1}{|c|}{$\rightarrow$ rel } & \multicolumn{1}{|c|}{$\rightarrow$ skt } & \multicolumn{1}{c|}{\textbf{Avg.}} \\ \hline \multicolumn{1}{|c|}{Source Only \cite{li2021dynamic}} & \multicolumn{1}{c|}{52.1} & \multicolumn{1}{c|}{23.4} & \multicolumn{1}{c|}{47.7} & \multicolumn{1}{c|}{13.0} & \multicolumn{1}{c|}{60.7} & \multicolumn{1}{c|}{46.5} & 40.6 \\ \multicolumn{1}{|c|}{LtC-MSDA \cite{wang2020learning}} & \multicolumn{1}{c|}{63.1} & \multicolumn{1}{c|}{28.7} & \multicolumn{1}{c|}{56.1} & \multicolumn{1}{c|}{16.3} & \multicolumn{1}{c|}{66.1} & \multicolumn{1}{c|}{53.8} & 47.4 \\ \multicolumn{1}{|c|}{DRT \cite{li2021dynamic}} & \multicolumn{1}{c|}{71.0} & \multicolumn{1}{c|}{31.6} & \multicolumn{1}{c|}{\textbf{61.0}} & \multicolumn{1}{c|}{12.3} & \multicolumn{1}{c|}{71.4} & \multicolumn{1}{c|}{60.7} & 51.3 \\ \multicolumn{1}{|c|}{\textbf{HyMOS\xspace}} & \multicolumn{1}{c|}{\textbf{71.5}} & \multicolumn{1}{c|}{\textbf{41.8}} & \multicolumn{1}{c|}{60.8} & \multicolumn{1}{c|}{\textbf{34.5}} & \multicolumn{1}{c|}{\textbf{74.2}} & \multicolumn{1}{c|}{\textbf{66.6}} & \textbf{58.2} \\ \hline \end{tabular} \begin{tabular}[t]{|c c c c|} \hline \multicolumn{4}{|c|}{\textbf{Multi-Source Universal}} \\ \hline Method & \multicolumn{1}{|c|}{$\rightarrow$ S } & \multicolumn{1}{c|}{$\rightarrow$ C } & \multicolumn{1}{c|}{\textbf{Avg.}}\\ \hline \multicolumn{1}{|c|}{CMU 
\cite{fu2020learning}} & \multicolumn{1}{c|}{38.9} & \multicolumn{1}{c|}{31.2} & 35.1 \\ \multicolumn{1}{|c|}{DANCE \cite{saito2020dance}} & \multicolumn{1}{c|}{44.5} & \multicolumn{1}{c|}{49.9} & 47.2 \\ \multicolumn{1}{|c|}{ROS \cite{bucci2020effectiveness}} & \multicolumn{1}{c|}{39.7} & \multicolumn{1}{c|}{46.0} & 42.9 \\ \multicolumn{1}{|c|}{\textbf{HyMOS\xspace}} & \multicolumn{1}{c|}{\textbf{54.6}} & \multicolumn{1}{c|}{\textbf{57.1}} & \textbf{55.9}\\ \hline \end{tabular} } \vspace{-2mm}\caption{Multi-Source Closed-Set (Accuracy) and Universal Domain Adaptation (HOS) performance on DomainNet.} \label{tab:Closed-Set}\vspace{-3mm} \end{table*} We designed HyMOS\xspace to be straightforward while keeping in mind all the challenges of multi-source Open-Set domain adaptation. In the following we focus on each of them, providing a detailed ablation that sheds light on the inner workings of our method. The results are in Table \ref{tab:ablation}. \noindent\textbf{Source-Source Alignment} Reducing the domain shift among the available sources improves model generalization. This aspect is widely discussed in the multi-source Closed-Set domain adaptation literature \cite{ECCV20_curriculumManager,ECCV20_learningToCombine}. A dedicated source alignment component is also included in the only existing multi-source Open-Set method, MOSDANET. HyMOS\xspace achieves cross-source adaptation by combining the supervised contrastive learning loss with a carefully designed batch sampling strategy: each training mini-batch contains one sample for each class and for each domain. The supervised contrastive loss provides a strong class-wise alignment by pulling together samples of the same class and pushing away samples of different classes, regardless of the domain. HyMOS\xspace shows a gain in performance of 2.6\% over its version without this balancing (row \emph{w/o Source Balance}).
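The class-wise pull/push effect of the loss can be illustrated with a minimal sketch of the standard supervised contrastive (SupCon) objective on toy 2-D unit vectors. This follows the published formulation, not the authors' implementation, and the toy features are purely illustrative.

```python
import math

def supcon_loss(features, labels, tau=0.07):
    """Supervised contrastive loss on L2-normalized features.

    Positives for anchor i are all other samples with the same label,
    regardless of domain; everything else acts as a negative.
    """
    n, loss = len(features), 0.0
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        denom = sum(math.exp(dot(features[i], features[a]) / tau)
                    for a in range(n) if a != i)
        loss += -sum(math.log(math.exp(dot(features[i], features[p]) / tau) / denom)
                     for p in positives) / len(positives)
    return loss / n

# Tight same-class clusters give a much lower loss than scattered classes.
clustered = [(1.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.0, 1.0)]
scattered = [(1.0, 0.0), (0.0, 1.0), (1.0, 0.0), (0.0, 1.0)]
labels = [0, 0, 1, 1]
assert supcon_loss(clustered, labels) < supcon_loss(scattered, labels)
```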
\noindent\textbf{Source-Target Adaptation} In HyMOS\xspace, both the style transfer augmentation and the auto-regulated self-training procedure contribute to aligning source and target without incurring the risk of \emph{negative transfer}. By adding target style transfer as one of the source augmentations, we push the model to focus on domain-agnostic visual characteristics without involving semantic content from the target. To evaluate the effect of this addition we present two ablation cases. We compare our method with an Oracle version where the target style is extracted only from \emph{known} categories (line \emph{Style Tr. Target Known (Oracle)}), and we conclude that HyMOS\xspace is not harmed by using the whole target for this adaptation step. Moreover, deactivating style transfer (row \emph{w/o Style Transfer}) causes a performance drop of 3.6\%, which shows its important role in HyMOS\xspace. Finally, a strong feature-level class-wise source-target alignment is obtained thanks to the self-training procedure, which selects confident target known samples (closest to the source class prototypes) and includes them in the learning objective. The gain of HyMOS\xspace with respect to its version without this strategy is 2.9\% (row \emph{w/o Self-Training}). \noindent\textbf{Comparison with an Improved Cross-Entropy Baseline} Source balance, style transfer, and self-training appear to be simple strategies that could be combined with any supervised learning model to improve its effectiveness in the multi-source Open-Set scenario. Still, we maintain that leveraging supervised contrastive learning and its related hyperspherical embedding is crucial for the task at hand. To support our claim we substitute the contrastive loss of HyMOS\xspace with the standard cross-entropy loss. The row \emph{Improved cross-entropy} reports the obtained results, showing that this baseline approach is significantly worse than HyMOS\xspace.
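The confident-sample selection behind the self-training step can be sketched as follows: class prototypes are renormalized mean source embeddings on the hypersphere, and a target sample is accepted with a pseudo-label when its cosine similarity to the nearest prototype exceeds a threshold. The threshold value here is illustrative; the actual self-paced criterion is not reproduced.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def prototypes(source_feats, source_labels):
    """Mean source embedding per class, renormalized onto the hypersphere."""
    sums = {}
    for f, y in zip(source_feats, source_labels):
        acc = sums.setdefault(y, [0.0] * len(f))
        for k, x in enumerate(f):
            acc[k] += x
    return {y: normalize(acc) for y, acc in sums.items()}

def select_confident(target_feats, protos, threshold=0.9):
    """Return (index, pseudo-label) for target samples close to a prototype."""
    selected = []
    for i, f in enumerate(target_feats):
        label, sim = max(((y, sum(a * b for a, b in zip(f, p)))
                          for y, p in protos.items()), key=lambda t: t[1])
        if sim >= threshold:
            selected.append((i, label))
    return selected

protos = prototypes([(1.0, 0.0), (0.9, 0.1), (0.0, 1.0)], [0, 0, 1])
# Target sample 0 sits near class 0; sample 1 is far from every prototype.
picked = select_confident([normalize((0.95, 0.05)), normalize((0.6, 0.6))], protos)
print(picked)  # only the first sample is selected, with pseudo-label 0
```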
\noindent\textbf{Comparison with an improved version of ROS~\cite{bucci2020effectiveness}} We also enriched our best competitor ROS with source balancing, style transfer, and self-training. In the bottom part of Table \ref{tab:ablation}, the \emph{+ Source Balance} row indicates that organizing the training data batches so that they contain a balanced set of categories and source domains does not provide an improvement over the standard version of ROS. The source-to-source alignment visible for HyMOS\xspace does not appear here: indeed, the cross-entropy loss does not induce the same inherent clustering and adaptation effect that can be obtained via contrastive learning. The row \emph{+ Style Transfer} shows that ROS performs poorly with this augmentation. By checking the predictions we observe a slight advantage in the recognition accuracy of the \emph{known} classes, but a significant drop in the \emph{unknown} accuracy, which causes a decrease in the overall result. We also followed \cite{rakshit2020multi} to extend ROS with self-training. The corresponding row \emph{+ Self-Training} again shows a drop in performance: this procedure tends to propagate recognition errors due to cross-entropy overconfidence. Indeed, self-training may induce a dangerous model drift, but recent literature has shown that its effectiveness and safe nature hold when the sample selection is performed with a self-pacing strategy based on the distribution of the unlabeled samples \cite{cascantebonilla2020curriculum}, exactly as in HyMOS\xspace. Finally, when applying all the strategies at once, the results are similar to those obtained with style transfer alone. This last technique clearly dragged the whole method towards low performance.
\section{Extension to Closed-Set and Universal} HyMOS\xspace can be easily extended to the simpler multi-source Closed-Set domain adaptation setting (perfect overlap between source and target classes) and to the more challenging multi-source Universal domain adaptation case (both the sources and the target have their own private categories). We consider the DomainNet dataset and run an evaluation on those two scenarios, following \cite{li2021dynamic} for Closed-Set and \cite{fu2020learning} for Universal. In the latter, the sources and the target share the first 150 classes in alphabetical order, the next 50 categories are source private classes, and the rest are target private classes. For Closed-Set we use as reference LtC-MSDA \cite{wang2020learning} and DRT \cite{li2021dynamic}, which rely, respectively, on a graph connecting domain prototypes and on a dynamic transfer that updates the model parameters on a per-sample basis. Table \ref{tab:Closed-Set} collects the results and shows that HyMOS\xspace achieves promising performance compared with several state-of-the-art methods in the two scenarios.
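As a quick sanity check on the metrics used throughout, the sketch below recomputes OS (the weighted average defined in Table \ref{tab:sota}'s caption section) and HOS (the harmonic mean of OS* and UNK, as in ROS) from the ROS Office-Home averages, assuming the usual 25 known classes of the Office-Home open-set split.

```python
def os_score(os_star, unk, num_known):
    """Average over all classes: weighted mix of known-class accuracy and UNK."""
    return (num_known * os_star + unk) / (num_known + 1)

def hos(os_star, unk):
    """Harmonic mean of known-class accuracy and unknown accuracy."""
    return 2 * os_star * unk / (os_star + unk)

# ROS averages on Office-Home: OS* = 63.6, UNK = 67.3. Using 25 known
# classes (an assumption: the standard Office-Home open-set split).
print(round(os_score(63.6, 67.3, 25), 1))  # 63.7, matching the OS column
print(round(hos(63.6, 67.3), 1))           # 65.4, close to the reported 65.3
```

The small HOS discrepancy (65.4 vs 65.3) is expected, since the tables average HOS per task rather than recomputing it from averaged OS* and UNK.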
\section{INTRODUCTION} Atomic systems are prime candidates for long-lived storage and on-demand retrieval of optical quantum states, due to the long coherence times of their optically accessible spin-states~\cite{Lvovsky2009b, Heshami2016c}. Several spin-based memory approaches have been proposed and experimentally studied in a wide range of atomic media~\cite{Heshami2016c}, including warm~\cite{Phillips2001b} and cold atomic gases~\cite{Liu2001a}, rare-earth-ion doped solids~\cite{Turukhin2002a}, and single atoms in optical cavities~\cite{Boozer2007, Wilk2007}, relying on various storage protocols such as electromagnetically induced transparency (EIT)~\cite{Fleischhauer2000b}, off-resonant Raman~\cite{Nunn2007, Gorshkov2007}, and photon-echo~\cite{Moiseev2001, Afzelius2009a} techniques. To date, no combination of a platform and a protocol has been agreed upon as the ideal practical memory that concurrently features long lifetime~\cite{Dudin2013, Heinze2013}, high efficiency~\cite{Hedges2010, Hosseini2011c, Hsiao2018a}, fast operation~\cite{Reim2010b, Guo2018}, and reliability~\cite{Gundogan2015, Ding2015a, Vernaz-Gris2018, Wang2019b}, although these features have been demonstrated either individually or in pairs. Here, we introduce the Autler-Townes splitting (ATS) quantum memory protocol~\cite{Saglamyurek2018a} on a Bose-Einstein condensate (BEC) platform towards overcoming this obstacle. Bose-Einstein condensates of alkali atoms were among the first-proposed light-storage platforms~\cite{VestergaardHau1999a, Dutton2004}, since a BEC's ultralow temperature inhibits thermal diffusion and thereby offers long-term storage~\cite{Zhang2009}. In addition, a BEC's large atomic density allows for strong light-matter coupling without optical cavities or particularly large atom numbers, leading to high-efficiency and high-speed quantum memory. Despite these intrinsic advantages, there have been only a few experimental studies exploring BECs for quantum memory.
Early experiments focused on the long-lived storage and coherent manipulation of optical information in the classical domain~\cite{VestergaardHau1999a, Ginsberg2007, Zhang2009}, while more recent demonstrations tested the quantum nature of these processes~\cite{Lettner2011, Riedl2012}. All of these experiments used the EIT memory protocol~\cite{Fleischhauer2000b, Phillips2001b,Turukhin2002a, Lvovsky2009b, Dudin2013, Heinze2013, Heshami2016c, Hsiao2018a, Vernaz-Gris2018, Wang2019b}, which is favorable for efficient storage of long light pulses but not well suited to the short-pulse/large-bandwidth storage regime~\cite{Gorshkov2007, Rastogi2019} due to this protocol's adiabatic nature. Moreover, the large optical densities and control-field powers required for a broadband EIT memory increase the impact of photonic noise processes, making reliable operation in the quantum regime difficult~\cite{Lauk2013, Geng2014a, Saglamyurek2019c}. The ATS protocol~\cite{Saglamyurek2018a} overcomes these protocol-related limitations. In contrast to the EIT scheme, the non-adiabatic (fast) character of this method allows for optimal storage of short light pulses with substantially reduced technical demand and complexity~\cite{Rastogi2019}, and exceptional robustness to many noise processes~\cite{Saglamyurek2019c}. In this article, we present a proof-of-concept experimental implementation of the ATS protocol with a BEC to explore the unique advantages of this protocol-platform combination for a high-performance quantum memory. We demonstrate efficient and ultra-low-noise storage of single-photon-level light pulses that are one to two orders of magnitude shorter than those reported in EIT-based BEC memories. We also show that ATS-based storage in a BEC platform significantly outperforms its implementations in laser-cooled atoms.
\begin{figure} \begin{center} \includegraphics [width=0.47\textwidth]{BECsetup_v6.pdf} \caption{\textbf{Demonstration of ATS memory in BEC.}~(\textbf{A})~Schematic of the experimental setup. AOM: Acousto-optic modulator; NDF: Neutral density filter; FC: Fiber coupler; TDC: Time-to-digital converter; ODT: Optical dipole trap. (\textbf{B}) Preparation of the BEC, represented by velocity distributions for thermal, mixed, and condensed clouds (after $20$ ms of free expansion) at different stages of ODT evaporation, with temperatures $T$ and BEC fractions $\mathcal{F}_{\rm BEC}$. (\textbf{C}) $\Lambda$-system on the D2 transition of $^{87}$Rb. (\textbf{D}) Storage of 20 ns-long probe pulses at the single-photon level with an input mean photon number of $\overline{n}_{\rm in}= 1$. The measured memory efficiency is $30\%$ under the conditions $T=340$~nK and $\mathcal{F}_{\rm BEC} = 15\%$. } \label{fig:setup} \end{center} \end{figure} In our experiments, a BEC of $^{87}$Rb atoms is prepared using standard laser- and evaporative-cooling techniques~\cite{Lin2009}, with the resulting ultracold atoms held in an optical dipole trap (ODT) (Fig.~\ref{fig:setup}(\textbf{A}),(\textbf{B}) and Methods for details). By reducing the depth of the ODT, evaporative cooling drives the temperature of the atoms below the critical temperature $T_{\rm c}\approx0.5~\mu$K at which Bose-Einstein condensation begins (Fig.~\ref{fig:setup}($\textbf{B}$)). The fraction of condensed atoms ($\mathcal{F}_{\rm BEC}$) increases with further cooling, resulting in nearly pure BECs at $\mathcal{F}_{\rm BEC}\approx0.8$ and $T=280$ nK, with an atom number of $N\approx10^5$ and a characteristic spatial extent (Thomas-Fermi diameter) of $R_{\rm TF}\approx10~\mu\rm m$. Using the trap depth as a control, we study memory operation above and below the transition temperature.
The ATS protocol is implemented using a $\Lambda$-type three-level configuration within the ``D2'' transition of Rb atoms by addressing an excited level ($\ket{F^\prime = 2}\equiv\ket{e}$) and two ground hyperfine levels ($\ket{F= 1}\equiv\ket{g}$ and $\ket{F= 2}\equiv\ket{s}$) (Fig.~\ref{fig:setup}({\bf C})). In the storage (writing) stage, optical coherence from a weak ``probe'' pulse (resonant with $\ket{g}\rightarrow\ket{e}$) is transferred into collective excitations between the ground levels (spin-wave mode) via a strong ``control'' field (coupled to the $\ket{s}\rightarrow\ket{e}$ transition) with a pulse area of $2\pi$. By reapplying the control pulse after an adjustable storage time (read-out stage), the coherence from the spin-wave is mapped back to the optical mode, resulting in reemission as an output probe, as demonstrated in~Fig.~\ref{fig:setup}({\bf D}). In our demonstrations, we use single-photon-level probe pulses with $\tau_{\rm p}=20$~ns duration (at full width at half maximum of their Gaussian temporal profile), which is shorter than the natural lifetime of the ground-to-excited-level coherence [$\tau_{\rm eg}=1/(2\pi\gamma_{\rm eg})=54$~ns] of the Rb D2 line. Limited by our setup's focusing ability, the probe beam diameter (at $1/e^2$) is reduced to $R_{\rm p}\approx25~\mu\rm{m}$, but is still larger than the diameter of the atomic cloud $R_{\rm a}\approx10~\mu\rm{m}$ for the lowest temperatures. To alleviate this size mismatch, we release the atomic cloud from the trap and allow 3.5~ms of free expansion before the storage-and-recall process, resulting in a cloud diameter comparable to that of the probe. The storage-and-recall process is then achieved using the write and read-out control fields with the same temporal profile as the probe pulses, but in a spatial mode oriented at $\theta=110^{\circ}$ from the probe beam, as depicted in Fig.~\ref{fig:setup}({\bf A}). 
The retrieved probe signal is detected via time-resolved photon-counting measurements using a single-photon detector (SPD) and a time-to-digital converter (TDC), allowing the evaluation of memory performance by recording detection-vs.-time histograms, as detailed in Methods. \begin{figure*} \begin{center} \includegraphics[width = 170mm]{Exp_results_FINAL3.pdf} \caption{\textbf{Experimental results for ATS-BEC memory.}~ \textbf{(A) Efficiency}. Measured efficiency is $\eta_{\rm m} = (p_{\rm s} - p_{\rm n}) / p_{\rm in}$, where $p_{\rm s}$, $p_{\rm n}$, and $p_{\rm in}$ are the detection probabilities for the recalled probe, noise, and input probe, respectively, with $\overline{n}_{\rm in}= 1$ and $p_{\rm n} \ll p_{\rm s}$. \textbf{(B) Atom number and density}, estimated for 3.5~ms free-expansion time using $T,~\mathcal{F}_{\rm BEC}$ and ODT parameters, as detailed in Supplementary Information. The density refers to the peak value of the cross-sectional density profile, obtained by line integration of the volume atomic density along the probe propagation direction. \textbf{(C) Low noise operation}. Measurement histograms in red (with probe) and black (without probe) show $p_{\rm s}$ and $p_{\rm n}$ respectively, after 200~ns storage in a nearly pure BEC for $\overline{n}_{\rm in}= 0.2$. The inset shows the time interval in which the retrieved photons are detected. \textbf{(D) Signal-to-noise ratio vs input photon number}. ${\rm SNR}=(p_{\rm s}-p_{\rm n})/p_{\rm n}$ is determined for each mean photon number by measuring $p_{\rm s}$ for $\overline{n}_{\rm in} \neq 0$ and $p_{\rm n}$ for $\overline{n}_{\rm in} = 0$, during a 50~ns time window centered around the recall time. \textbf{(E) Memory lifetime}. 
Variation of efficiency (normalized to its own maximum) with storage time for temperatures corresponding to thermal, mixed and nearly-pure BEC clouds, giving $1/e$ decay times of $\tau_{\rm m}=4.5~\mu$s, $7.8~\mu$s and $15.8~\mu$s, respectively. \textbf{(F) Preservation of phase}. Retrieved probe intensity vs storage time with magnetic field off (blue diamonds) and on (black squares). The dashed blue and solid red curves are fits to functions involving memory decoherence (Eq.~\ref{eq:decoherence1} in Methods) and a product of the decoherence with a sinusoid, respectively. The visibility $V=(I_{\rm max}-I_{\rm min})/(I_{\rm max}+I_{\rm min})$, where $I_{\rm max}$ and $I_{\rm min}$ are the maximum and minimum intensities for zero storage time, yields $V=80 \pm 3~\%$ and $V=62 \pm 7 ~\%$ (inset, same axes) for $\overline{n}_{\rm in}\gg1$ and $\overline{n}_{\rm in}=1$. } \label{fig:single-photon} \end{center} \end{figure*} We characterize the performance of the ATS-BEC memory at various temperatures, ranging from above the condensation temperature (with $\mathcal{F}_{\rm BEC}=0$) to well below the transition to BEC (where $\mathcal{F}_{\rm BEC}\rightarrow1$). First, memory efficiency $\eta_{\rm m}$ is measured at temperatures from $1.5~\mu$K to 280~nK (corresponding to thermal and nearly pure-BEC clouds, respectively) for an average input-probe-photon number of $\overline{n}_{\rm in}= 1$ and storage time of $\tau_{\rm s}=200~\rm{ns}$ (Fig.~\ref{fig:single-photon}). We observe that efficiency increases as the temperature is reduced (Fig.~\ref{fig:single-photon}({\bf A})), due to a significant increase in peak atomic density associated with BEC (Fig.~\ref{fig:single-photon}({\bf B})). When the BEC fraction is $\mathcal{F}_{\rm BEC}\approx15\%$ at $T\approx340~\rm nK$, the efficiency reaches its maximum $\eta_{\rm m}=(30.2\pm1.5)\%$. 
However, further evaporation leads to a reduction in efficiency, with $\eta_{\rm m}=(13.0\pm0.9)\%$ for the nearly pure BEC at $T\approx280~\rm nK$, despite an additional increase in the peak density. We attribute the loss of efficiency to the limited ability to focus the probe beam onto a sufficiently small area at the center of the BEC, where the atomic density is largest. In such limits of $R_{\rm p} \gg R_{\rm a}$ or $R_{\rm p} \sim R_{\rm a}$, as in our demonstrations, the efficiency does not entirely follow the variation in peak atomic density (unfilled circles, Fig.~\ref{fig:single-photon}({\bf B})). Instead, it is either partly or fully governed by the atom number, which inevitably decreases during the evaporative cooling (unfilled squares, Fig.~\ref{fig:single-photon}({\bf B})). To verify this conjecture, we repeat the efficiency measurements with a larger beam size $R_{\rm p}\approx65~\mu\rm{m}$ (grey squares, Fig.~\ref{fig:single-photon}({\bf A})) and find that memory efficiency decreases monotonically with temperature over the entire range and does, indeed, track the variation in atom number. This also shows that the size mismatch between the probe and BEC is the primary limitation to reaching high efficiencies in our setup: the free expansion alleviates this mismatch at the expense of an overall reduction in the atomic density and hence efficiency. Next, we investigate the variation of memory lifetime $\tau_{\rm m}$ with respect to temperature $T$ in both the thermal and BEC regimes. We measure the efficiency of ATS memory as a function of storage time (from $\tau_{\rm s}=2$ to $10~\mu$s) at three different temperatures between $T=280$~nK and $6200$~nK and determine the lifetime (defined as the storage time for which efficiency decreases to $1/e$ of its original value) for each $T$. 
As the temperature is lowered towards the BEC regime, memory lifetime increases significantly and reaches a maximum of $\tau_{\rm m}=15~\mu$s at $T=280$~nK for the nearly pure BEC, as shown in Fig.~\ref{fig:single-photon}({\bf E}). We attribute this observation to the combined effect of two spin-decoherence mechanisms: thermally induced atomic diffusion (given by cloud temperature), and magnetic dephasing (due to uncancelled ambient magnetic fields). Memory lifetime is mainly limited by thermal decoherence at relatively high temperatures ($T>3~\mu$K) in the thermal regime, while magnetic decoherence dominates in the BEC regime, where thermal motion is suppressed. Since thermal decoherence also increases with probe-control separation angle ($\theta$), inhibiting thermal diffusion in the BEC regime provides the flexibility to use large angles ($\theta>2^\circ$, as in our demonstrations), where low-noise operation can be realized. \begin{figure*} \begin{center} \includegraphics[width = 150mm]{Performance_finalv2.pdf} \caption{\textbf{Predicting ATS-BEC memory performance.} All calculations use experimentally measured atom numbers, $T$, $\mathcal{F}_{\rm BEC}$, and ODT frequencies. We assume that atoms remain trapped (no free expansion) during storage and that the ATS memory is implemented in the backward recall scheme using the D1 transition of $^{87}$Rb. \textbf{(A) Memory efficiency vs. temperature}, estimated for short storage times ($\tau_s\ll \tau_m$), memory bandwidth of $B=170$~MHz ($\tau_p=2.6$~ns), and probe-beam diameters $R_p=1-25~\mu\rm m$. The dashed black line refers to peak optical density. \textbf{(B) Bandwidth and optical depth vs. temperature}, estimated for an optimal ATS memory, where $d/2F=d\Gamma_{\rm eg}/4\pi B\approx4$. The horizontal dashed line indicates bandwidths and optical depths yielding efficiencies above $80\%$. 
\textbf{(C) Memory lifetime vs.\ temperature}, for a small probe-control separation angle ($\theta$), based on the combination of decoherence effects due to thermal motion, recoil momentum and inelastic two-body collisions, as detailed in Methods. The inset considers only the thermal memory lifetime $\tau_{\rm th}$ (Eq.~\ref{eq:dec} in Methods) for a wide range of temperatures and separation angles associated with different photonic noise levels. \textbf{(D) FWM noise strength vs.\ bandwidth} calculated for optimal ATS and EIT memories, as detailed in Methods. The noise strength is normalized with respect to its minimum value corresponding to the smallest bandwidth of $10\Gamma_{\rm eg}/ 2\pi$. The vertical dashed line indicates bandwidths yielding efficiencies above $85\%$.} \label{fig:performance} \end{center} \end{figure*} To demonstrate simultaneous low-noise and broadband operation, we implement ATS memory in the nearly-pure BEC for several mean input-photon-numbers less than unity (Fig.~\ref{fig:single-photon}({\bf C}) shows measurements for $\overline{n}_{\rm in}\approx0.2$). We determine the signal-to-noise ratio ${\rm SNR}=(p_{\rm s}-p_{\rm n})/p_{\rm n}$ as a function of $\overline{n}_{\rm in}$ after $\tau_{\rm s}=200$ ns, where $p_{\rm s}$ and $p_{\rm n}$ are independently measured detection probabilities for the retrieved probe and noise. We measure a background-noise probability of $p_{\rm n}=(6.6\pm1.5)\times10^{-5}$ (inset of Fig.~\ref{fig:single-photon}({\bf C})), yielding an $\rm SNR\gtrsim100$ in much of the low mean-photon-number regime ($\overline{n}_{\rm in}<1$), as illustrated in Fig.~\ref{fig:single-photon}({\bf D}). The SNR can be as high as $42\pm9$, even for mean photon-numbers as small as $\overline{n}_{\rm in}=0.22$, which is typical in quantum photonics applications. This SNR would yield an error probability of $\mathcal{E}=1/\rm{SNR}=0.023\pm0.005$, if quantum states were encoded into these photons. 
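These figures of merit follow directly from the photon-counting probabilities. As a minimal numerical sketch (the retrieved-probe probability $p_{\rm s}$ below is a hypothetical value chosen for illustration; only the background level $p_{\rm n}$ is the measured number quoted above):

```python
# SNR and error probability from photon-counting detection probabilities.
p_n = 6.6e-5   # measured background-noise probability (value from the text)
p_s = 2.8e-3   # retrieved-probe detection probability (hypothetical value)

snr = (p_s - p_n) / p_n   # SNR = (p_s - p_n) / p_n
error_prob = 1 / snr      # error probability if qubits were encoded
print(f"SNR = {snr:.0f}, error probability = {error_prob:.3f}")
```

For this hypothetical $p_{\rm s}$, the sketch returns an SNR of order $40$ and an error probability of a few percent, comparable to the measured values.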
Our additional characterisations show that the observed residual noise comes from scattered light leaking from both ODT and control beams, which can be almost entirely eliminated with simple technical upgrades. This also implies that there is no measurable noise contribution from any physical process linked to the memory operation, such as the four-wave mixing (FWM) noise, demonstrating the reliability of ATS-BEC memory for short pulses at the single-photon level. Finally, we examine the phase-preserving character of photon storage process in our implementation (at near $T_{\rm c}$) by controlling the phase evolution of the stored spin-wave. We apply a weak DC magnetic field to the ensemble such that $\ket{g}$ and $\ket{s}$ levels (forming spin-wave coherence) are split into the Zeeman sublevels with energy/frequency differences proportional to the strength of the field~\cite{Jenkins2006}. In the writing stage, we then map optical coherence onto different classes of spin-waves among these Zeeman levels with the proper selection of the magnetic field orientation, and polarizations of probe and control-field (see Supplementary Information for details and also Refs~\cite{Matsukevich2006, Wang2011, Farrera2018}). Since each class of spin-wave evolves with a different frequency, they acquire relative phase differences and thus interfere with one another, either constructively or destructively, resulting in the intensity of the recalled probe being modulated with storage time, as shown in Fig.~\ref{fig:single-photon}(\textbf{F}). We achieve an interference visibility of $V=62\%$ for small input photon-number $\overline{n}_{\rm in}=1$ (inset), while reaching up to $V=80\%$ for large mean photon numbers due to a better magnetic-field stability with single-shot measurements. These results demonstrate the phase-preserving nature of our memory, which is a key requirement for quantum information storage. 
Looking beyond these proof-of-principle demonstrations, we find that the ATS-BEC approach is suitable for a high-performance quantum memory, featuring the co-existence of highly efficient and long-lived storage together with broadband and low-noise operation. In particular, the relaxed optical-depth demand of the ATS protocol in conjunction with the ultra-large optical densities of the BEC platform yields near-unity efficiencies at GHz storage bandwidths. This performance can be achieved in our system by sampling the dense region of the BEC with a probe-beam diameter that is significantly smaller than the Thomas-Fermi diameter ($R_{\rm p}\ll R_{\rm a} \approx R_{\rm TF}$). Fig.~\ref{fig:performance}(\textbf{A}) shows the predicted memory efficiency with respect to temperature for pulses as short as $\tau_{\rm p}=2.6$~ns and smaller probe-beam diameters than used in our demonstrations (see Methods for details). An effective optical depth of $d\approx200$ is possible for $R_{\rm p}=1~\mu\rm m$ at $T=280$~nK ($\mathcal{F}_{\rm BEC} \approx 0.8$), allowing a near-optimal memory efficiency of $\eta_{\rm m}\ge90\%$. We also predict the acceptance bandwidth and optical depth for an optimal ATS memory with respect to temperature, as shown in Fig.~\ref{fig:performance}(\textbf{B}), confirming the feasibility of bandwidths approaching $200$ MHz with efficiencies $\eta_{\rm m}\ge90\%$. The same performance via EIT or off-resonant Raman memory would require an optical depth of about $d=1000-1500$~\cite{Saglamyurek2018a, Rastogi2019}, which is hard to achieve even with typical BEC systems. An ATS-BEC memory can also reach long lifetimes, from milliseconds to a second, by reducing the impact of the three major spin-decoherence mechanisms: magnetic dephasing, thermal diffusion, and internal/external dynamics of the BEC~\cite{Dutton2004}. 
First, magnetic dephasing can be eliminated using well-mastered techniques, including a high degree of isolation from static and time-dependent magnetic-field noise, spin-echo dephasing/rephasing schemes, and precise control over magnetically insensitive Zeeman states~\cite{Zhao2008, Dudin2013}. Second, thermal diffusion, which is an increasing function of both $T$ and $\theta$, is already significantly reduced due to the ultra-low temperatures in our evaporatively cooled system. Beyond that, as the thermal velocity is virtually zero in the nearly-pure BEC, a BEC memory features much longer lifetimes than what is achievable with a purely thermal cloud, even at ultracold temperatures and small $\theta=0.1^\circ$, as shown in Fig.~\ref{fig:performance}(\textbf{C}). Third, BEC-specific decoherence mechanisms become effective only at long time scales (beyond milliseconds), in part because of the coherent matter-wave nature of the BEC. Among these mechanisms, spatial decoherence (arising from atomic motion in the trap) can be coherently compensated using matter-wave interferometry techniques. However, recoil motion (an increasing function of $\theta$) and inelastic two-body collisions (proportional to atomic density) set an ultimate limit to memory lifetime. Considering the combined effect of these two mechanisms together with thermal diffusion, we predict that memory lifetime in our Rb system can reach one hundred milliseconds (Fig.~\ref{fig:performance}(\textbf{C})). Low-noise operation is another essential requirement that is difficult to satisfy simultaneously with long lifetimes and large bandwidths. In particular, noise from control-field leakage and FWM processes can be minimized using a large-$\theta$ configuration, but this conflicts with the small-$\theta$ requirement for reduced thermal decoherence. 
In comparison to laser-cooled systems, the small thermal-diffusion rates at ultracold temperatures ($T \lesssim 1~\mu$K) allow one to overcome the drawbacks of a large $\theta$ and thus provide a workable range of $\theta\approx 4-7^\circ$, where low-noise operation is possible while retaining a millisecond lifetime (inset of Fig.~\ref{fig:performance}(\textbf{C})). However, lifetimes of order one hundred milliseconds still require a nearly-pure-BEC ensemble (at $T\rightarrow 0$) with a small $\theta$, which prevents decoherence due to recoil motion. In this scenario ($\tau_{\rm m} \gg 10$ ms and $\theta < 0.2^\circ$), low-noise operation favours small optical depths and control powers, contrasting with the high demand on these resources for a broadband memory. In particular, FWM noise is a significant issue, due to its exponential and quadratic dependencies on optical depth and control power, respectively (as detailed in Methods)~\cite{Lauk2013, Geng2014a, Saglamyurek2019c}. Compared to adiabatic protocols such as EIT and Raman, the ATS protocol substantially reduces FWM noise in the broadband regime due to its much lower requirements for these resources. Fig.~\ref{fig:performance}(\textbf{D}) compares the estimated relative strength of FWM noise vs.\ bandwidth between the optimal implementations of the EIT and ATS protocols in our Rb system. This prediction shows that the probability of FWM noise with an ATS memory is four to five orders of magnitude lower than that of an EIT memory, for storing few-nanosecond-long pulses at near-unity efficiencies. In conclusion, we have experimentally demonstrated the non-adiabatic storage of single-photon-level light in a rubidium-87 BEC using the ATS protocol with a pulse duration that is one to two orders of magnitude shorter than those reported in previous BEC memories. 
Our proof-of-principle experiments and predictive analysis highlight the inherent advantages of the ATS protocol-BEC platform combination for a high-performance quantum memory, simultaneously featuring high-efficiency, long-lived, high-speed and low-noise operation. In view of the recent technical progress with portable, miniaturized, and even space-based BEC experiments~\cite{Elliott2018}, we anticipate that this approach offers a feasible solution for large-scale ground- and satellite-based quantum networks~\cite{Gundogan2020}. \section*{METHODS} \subsection*{Experimental setup} The experimental setup (Fig.~\ref{fig:setup}({\bf A})) consists of ATS-memory components (including optical pulse generation and detection systems) and BEC-generation components. As part of the memory components, probe and control fields are derived from two independent continuous-wave lasers and then temporally shaped into short pulses using acousto-optic modulators (AOMs). After attenuating the probe beam to the single-photon level with neutral density filters (NDF) and setting the peak power of the control beam to $8$~mW, both beams are coupled into single-mode optical fibers (FC), and coupled back into free space on a separate bench where the BEC apparatus is located. Following polarization control with quarter-wave plates, the probe and control beams are focused to waists of $25~\mu$m (or $65~\mu$m) and $150~\mu$m, respectively, at the intersection of two crossed ODT beams (derived from a 1064 nm laser), using a telescope and a single lens (not displayed in the figure). After coupling into an optical fiber, the output probe pulses are detected using a single-photon detector, and their arrival times are recorded on a time-to-digital converter (TDC), triggered by a function generator. The ultracold atoms are prepared using standard laser cooling and trapping techniques, similar to Ref.~\cite{Lin2009}. 
The sequence of these techniques begins with the preparation of cold atoms in a magneto-optical trap (MOT) followed by further sub-Doppler laser cooling to temperatures down to $T\approx50~\mu$K. Next, the atoms are transferred to a quadrupole magnetic trap for RF-induced evaporative cooling that leads to $T<10~\mu$K. Finally, these atoms are transferred into the ODT, which has a controllable trap depth for further evaporative cooling, and with sufficient cooling, they reach the conditions of Bose-Einstein condensation, as detailed in Supplementary Information. \subsection*{Measurements} In our demonstrations, the memory performance is assessed with time-resolved photon-counting measurements for the detection of both the input-probe photons and stored-and-recalled probe photons (Fig.~\ref{fig:setup}({\bf A})). Each measurement period is performed during a 1-ms detection window that follows 15~s of BEC preparation and 3.5~ms of free expansion. In each period, a storage-and-recall event (defined by a pre-set storage time) is repeated from $N_{\rm r}=1000$ down to $100$ times as the storage time $t_{\rm s}$ increases from 200~ns to 10~$\mu$s. By repeating these measurement periods $N_{\rm cyc}=10$ to $300$ times, we acquire detection-vs.-time histograms for a total number of storage-and-recall events between $N_{\rm A}=N_{\rm r} N_{\rm cyc}=10^4$ and $3\times10^5$, depending on the average photon number of the input probe ($\overline{n}_{\rm in}=0$ to $3$). \subsection*{Memory efficiency and bandwidth} Here, we detail the current experimental limitations and quantitative predictions for realizing both large memory efficiency and large acceptance bandwidth by exploiting both the favorable efficiency scaling of the ATS protocol and the large optical depth of a BEC. First, we look into the optical depth and bandwidth dependence of ATS memory~\cite{Saglamyurek2018a}. 
In the forward-recall configuration (i.e., the input and recalled photons propagate in the same direction, as in our experiments), the efficiency is given by \begin{align} \eta_{\rm f}\approx{(d/2F)}^2 e^{-d/2F} e^{-1/F}, \label{eq:effForward} \end{align} where $d$ is the peak optical depth, and $F=2\pi B/\Gamma_{\rm eg}$ is the ATS factor that depends on the bandwidth of the probe pulse ($B=0.44/\tau_{\rm p}$ for a Gaussian profile) and the natural linewidth of the optical transition ($\Gamma_{\rm eg}=2\gamma_{\rm eg}$). Equivalently, $F$ relates the probe-pulse duration $\tau_{\rm p}$ to the coherence lifetime of the optical transition $1/\gamma_{\rm eg}$, on the condition that the pulse area of the control fields is set to $2\pi$. According to this expression, the theoretical maximum efficiency is only $\eta_{\rm f}\approx 40\%$ for the probe bandwidth used in our demonstrations ($B=3.7\Gamma_{\rm eg}/2\pi$), which is, in part, due to the fundamental limitation imposed by the decay time of the ground-to-excited-level coherence (optical decoherence), accounted for by the term $e^{-1/F}$ in Eq.~\ref{eq:effForward}. Although our experimental maximum efficiency ($\eta_{\rm m}=30\%$) is already close to this theoretical maximum, and can even reach it with more optical depth, efficiencies above $40\%$ are only possible with larger bandwidths (shorter pulses), which reduce the impact of optical decoherence ($e^{-1/F}\rightarrow 1$). Further, even with pulses much broader than the transition linewidth ($B\gg\Gamma_{\rm eg}/2\pi$, or equivalently $\tau_{\rm p}\ll1/\gamma_{\rm eg}$, such that $e^{-1/F}\approx 1$), the theoretical efficiency cannot exceed $\eta_{\rm f}\approx54\%$ due to the unavoidable re-absorption of the retrieved probe pulses in the storage medium. This limitation can be circumvented by implementing ``backward recall'' using counter-propagating write and read-out control pulses~\cite{Gorshkov2007, Saglamyurek2018a}. 
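The $\approx40\%$ forward-recall ceiling quoted above can be checked with a short numerical sketch (illustrative only, not part of the experimental analysis), maximizing the forward-recall expression over the optical depth at the experimental ATS factor $F=3.7$:

```python
import math

def eta_forward(d, F):
    """Forward-recall ATS efficiency: (d/2F)^2 * exp(-d/2F) * exp(-1/F)."""
    x = d / (2 * F)
    return x**2 * math.exp(-x) * math.exp(-1 / F)

# x^2 * e^-x is maximized at x = d/2F = 2, i.e. d = 4F.  At the experimental
# bandwidth B = 3.7*Gamma_eg/(2*pi), i.e. F = 3.7, this gives the ~40% ceiling:
F_exp = 3.7
print(f"{eta_forward(4 * F_exp, F_exp):.2f}")   # -> 0.41
```

The backward-recall configuration introduced above removes the re-absorption penalty reflected in this ceiling.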
In this case, the efficiency of ATS memory is \begin{align} \eta_{\rm b}\approx(1-e^{-d/2F})^{2}e^{-1/F}.\label{eq:effBack} \end{align} This expression dictates that near-unity memory efficiencies $\eta\geq90\%$ necessitate both sufficiently large bandwidths $B\ge14\Gamma_{\rm eg}/2\pi$ and optical depths $d\ge90$ such that $d/2F\ge3$, which also highlights the inherent broadband character of the ATS protocol. Given that large optical depths are readily available with the BEC platform, we can experimentally achieve such large efficiencies in the backward scheme simply by meeting the following technical requirements: (i) sampling the dense region of the BEC with a probe-beam diameter that is smaller than the Thomas-Fermi diameter ($R_{\rm p}\ll R_{\rm TF}$), and (ii) reducing the probe pulse duration, which also requires using the D1 line in Rb due to the larger hyperfine splitting in the excited-state manifold ($\approx 0.8$ GHz). Second, to verify this conjecture, we predict the optical depth of our ultracold system based on its experimental characterisations (see Supplementary Information). The effective value of the optical depth depends on the spatial intensity profile of the probe as well as the spatial density profile of the atomic cloud, which is determined by $T$ and ODT parameters. This value can be numerically extracted from Beer's absorption law \begin{align} d = -\ln\left[\frac{\iint I_{\rm out}(x,y)\: dx dy}{\iint I_{\rm in}(x,y)\: dx dy} \right], \label{eq:effd} \end{align} where $I_{\rm in}(x,y)$ and $I_{\rm out}(x,y)$ are the transverse intensity profiles of the input and transmitted probe, propagating along the $z$-direction. 
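Eq.~\ref{eq:effd} is straightforward to evaluate numerically. The sketch below assumes a radially symmetric Gaussian probe and a Gaussian column optical depth with hypothetical $1/e^2$ radii and peak depth, rather than the actual density profiles of the Supplementary Information; it illustrates how the effective depth approaches the peak value only when the probe is much narrower than the cloud:

```python
import math

def effective_optical_depth(w_p, w_a, d0, n=20000):
    """Effective optical depth (Eq. effd) for a Gaussian probe of 1/e^2
    radius w_p crossing a cloud whose column optical depth is Gaussian with
    1/e^2 radius w_a and peak value d0 (radially symmetric case)."""
    r_max = 6 * max(w_p, w_a)
    dr = r_max / n
    num = den = 0.0
    for i in range(n):
        r = (i + 0.5) * dr               # midpoint rule in radius
        weight = math.exp(-2 * r**2 / w_p**2) * r   # probe intensity * r dr
        num += weight * math.exp(-d0 * math.exp(-2 * r**2 / w_a**2))
        den += weight
    return -math.log(num / den)

# A probe much narrower than the cloud samples the peak optical depth:
print(effective_optical_depth(w_p=0.5e-6, w_a=5e-6, d0=3.0))   # close to d0 = 3
# A probe wider than the cloud sees a much smaller effective depth:
print(effective_optical_depth(w_p=12.5e-6, w_a=5e-6, d0=3.0))
```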
Assuming that the input probe beam is characterized by a Gaussian profile, $I_{\rm in}(x,y)$ and $I_{\rm out}(x,y)$ are then given by \begin{align} & I_{\rm in}(x,y) = I_{0}\exp\left(-\frac{2x^{2}}{{R_{px}}^{2}}\right)\exp\left(-\frac{2y^{2}}{{R_{py}}^{2}}\right) \label{eqn13} \\ & I_{\rm out}(x,y) = I_{\rm in}(x,y)\exp\left[-\frac{3\lambda^{2}}{2\pi}\alpha^{2}\int_{0}^{L} \rho(x,y,z)\: dz\right], \label{eqn14} \end{align} where \{$R_{\rm px}$, $R_{\rm py}$\} are the beam diameters along the \{$x,y$\} axes, $I_{0}$ is the peak intensity of the input probe, $\lambda$ is the resonant wavelength, $\alpha$ is the strength of the atomic transition, and $\rho(x,y,z)$ is the density distribution of the atomic cloud, involving thermal and BEC components. In this way, we predict $d$ for a given $R_{\rm p}$ as well as $T$ and ODT-trap frequencies, both of which determine the density distributions of BECs and thermal clouds, as further detailed in Supplementary Information. \subsection*{Memory lifetime} In this section we detail the limitations of memory lifetime in our demonstrations, and show our predictive calculations for long-lived storage. In our experiments, storage times are limited to the microsecond timescale due to the decoherence of collective spin excitations. We find that this decoherence is mainly governed by the combined effect of thermal diffusion and magnetic dephasing; the impact of other decoherence mechanisms (including inelastic collisions~\cite{Dutton2004} and recoil motion~\cite{Dutton2004, Ginsberg2007, Riedl2012}) is predicted to be considerable only at much longer timescales (on the order of milliseconds). The detriment of thermal diffusion is two-fold: the loss of atoms due to their dispersive motion out of the interaction cross-section, and the loss of spatial coherence (initially set by the probe and control wave vectors during writing). 
While the former is expected to be observable at millisecond and longer time scales, the latter can be observed at much shorter times due to the non-zero probe-control separation angle. In such a configuration, a spatially periodic phase pattern is imprinted on the stored spin-wave with a spatial period of $\lambda_{\rm sw}=2\pi/|\boldsymbol{\kappa}_{\rm sw}|$, where $\boldsymbol{\kappa}_{\rm sw} =\mathbf{k}_{\rm p}-\mathbf{k}_{\rm c}$ is imposed by conservation of momentum (the phase-matching condition), involving the wavevectors of the probe ($\mathbf{k}_{\rm p}$) and control ($\mathbf{k}_{\rm c}$) beams~\cite{Zhao2008}. Since this phase grating can be partially or completely erased as a result of atomic diffusion, the thermal memory lifetime ($\tau_{\rm th}$) depends on $\theta$ (determining the spatial period) and $T$ (determining the diffusion rate), expressed by \begin{align} \tau_{\rm th}= \frac {\lambda_{\rm sw}}{2\pi v_{\rm th}}\approx\frac{\lambda}{4\pi\sin{(\theta/2)}}\sqrt{\frac{m}{k_{\rm B}T}},\label{eq:dec} \end{align} where $v_{\rm th}=\sqrt{{k_{\rm B}T}/{m}}$ is the mean thermal speed, $m$ is the mass of an atom, and $k_{\rm B}$ is the Boltzmann constant. Given that the wavelengths ($\lambda$) of the probe and control fields differ only slightly, $|\mathbf{k}_{\rm p}|\approx|\mathbf{k}_{\rm c}|$ and $\lambda_{\rm sw}$ is nearly equal to $\lambda/[2\sin(\theta/2)]$. In our experimental configuration with the large probe-control separation angle ($\theta=110^{\circ}$), the impact of thermal diffusion becomes significant at relatively high temperatures ($T>3~\mu$K) and hence dominates over magnetic dephasing in this regime. For example, we expect the thermal decay time constant to be $\tau_{\rm th}=3.2~\mu$s (Eq.~\ref{eq:dec}) at $T=6.2~\mu$K, which is close to the experimentally measured memory lifetime at the same temperature ($4.5~\mu$s). 
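Eq.~\ref{eq:dec} reproduces this estimate directly (to within rounding); a minimal numerical sketch, assuming $\lambda=780$~nm for the Rb D2 line and the $^{87}$Rb atomic mass:

```python
import math

M_RB87 = 86.909 * 1.6605e-27   # 87Rb atomic mass (kg)
K_B = 1.380649e-23             # Boltzmann constant (J/K)

def thermal_lifetime(T, theta_deg, wavelength=780e-9):
    """Thermal memory lifetime of Eq. (dec):
    lambda / (4*pi*sin(theta/2)) * sqrt(m / (k_B * T))."""
    theta = math.radians(theta_deg)
    return (wavelength / (4 * math.pi * math.sin(theta / 2))
            * math.sqrt(M_RB87 / (K_B * T)))

# T = 6.2 uK at theta = 110 deg, as quoted in the text:
print(f"{thermal_lifetime(6.2e-6, 110) * 1e6:.1f} us")   # -> 3.1 us
```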
However, in the BEC regime where $T<0.5~\mu$K, observed memory lifetimes are significantly shorter than those predicted from thermal decoherence, indicating that magnetic dephasing is the dominant decoherence mechanism at ultralow temperatures. Based on these characteristics, we describe the observed storage-time ($t_{\rm s}$) dependence of memory efficiency for a given cloud temperature as \begin{align} \eta(t_{\rm s})=&\eta(t_{\rm s0}) e^{-{(t_{\rm s}-t_{\rm s0})}/{\tau_{\rm mag}}} \nonumber \\ &\times\left[\mathcal{F}_{\rm BEC}+\mathcal{F}_{\rm th}e^{-{(t_{\rm s}-t_{\rm s0})^2}/{\tau_{\rm th}^2}}\right], \label{eq:decoherence1} \end{align} where $\mathcal{F}_{\rm BEC}$ and $\mathcal{F}_{\rm th}=1-\mathcal{F}_{\rm BEC}$ are the fractions of BEC and thermal atoms in the cloud, respectively, and $\eta(t_{\rm s0})$ is the memory efficiency measured for the shortest storage time, $t_{\rm s0}$. In this simplified model, the Gaussian term with the characteristic decay time of $\tau_{\rm th}$ describes diffusion-induced decoherence (Eq.~\ref{eq:dec}) only for thermal atoms, as the BEC atoms are considered to be free from thermal motion. The common exponential factor with the decay-time constant of $\tau_{\rm mag}$ corresponds to decoherence due to magnetic dephasing of spin excitations across the whole ensemble. We note that in typical cold-atom experiments with mm-scale ensembles, magnetic dephasing is generally characterized by a Gaussian decay function (with a time constant exhibiting an inverse linear dependence on the length of the cloud), instead of the exponential decay term in Eq.~\ref{eq:decoherence1}. We attribute this difference to the fact that the spatial extent of our atomic ensembles is much smaller and hence more sensitive to variations of the ambient magnetic field on micrometer length scales. 
Moreover, as the atomic cloud falls from the ODT during the 1-ms storage-recall cycle in our single-photon-level measurements, it experiences additional position-dependent magnetic-field variations across the free-fall distance, which is comparable to the size of the cloud. Using this decoherence model, we fit our results from the measurements of efficiency vs.\ storage time to Eq.~\ref{eq:decoherence1} (Fig.~\ref{fig:single-photon}({\bf E})), which shows a reasonable agreement with the data taken at different cooling temperatures. In the fitting procedure, for a given $T$, we fix the parameters $\mathcal{F}_{\rm BEC}$, $\mathcal{F}_{\rm th}$~(extracted from independent sets of temperature measurements) and $\tau_{\rm th}$ (calculated from Eq.~\ref{eq:dec} for $\theta=110^{\circ}$) such that $\tau_{\rm mag}$ is the single free parameter evaluated from the experimental data. We find that $\tau_{\rm mag}$ tends to be larger at lower cloud temperatures because the spatial extent of the atomic cloud becomes smaller and thus less susceptible to variations of the ambient magnetic field. We also note that the extracted value of $\tau_{\rm mag}$ exhibits a variation on long timescales (over several days), which we attribute to changes in the ambient magnetic-field conditions. For instance, we find that $\tau_{\rm mag}$ varies between $5$ and $7~\mu$s for a cloud at $T=340$~nK, depending on the day-to-day optimization of the bias magnetic field. Figure~\ref{fig:single-photon}({\bf E}) represents one of the best data sets, obtained after a careful bias-field optimization, yielding magnetic dephasing constants of $\tau_{\rm mag} =$ ($7.0 \pm 2.5$, $7.0 \pm 1.0$, $16.5 \pm 2.8$) $\mu$s with $1/e$ memory lifetimes of $\tau_{\rm m} =$ ($4.5 \pm 2.5$, $7.8 \pm 1.0$, $15.8 \pm 2.8$) $\mu$s for $T = (6200$, $340$, $280$) nK, respectively. 
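The model of Eq.~\ref{eq:decoherence1} (with $t_{\rm s0}=0$) is easily implemented numerically. The sketch below uses parameter values representative of the nearly pure BEC data set ($\mathcal{F}_{\rm BEC}=0.8$, the fitted $\tau_{\rm mag}=16.5~\mu$s, and $\tau_{\rm th}=14.6~\mu$s from Eq.~\ref{eq:dec} at $T=280$~nK and $\theta=110^{\circ}$) and locates the $1/e$ lifetime by bisection:

```python
import math

def efficiency(t, f_bec, tau_mag, tau_th):
    """Normalized efficiency eta(t)/eta(0) of Eq. (decoherence1) with t_s0 = 0."""
    return math.exp(-t / tau_mag) * (f_bec + (1 - f_bec) * math.exp(-(t / tau_th)**2))

def lifetime(f_bec, tau_mag, tau_th):
    """1/e memory lifetime, located by bisection (efficiency is monotonic in t)."""
    lo, hi = 0.0, 10 * tau_mag
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if efficiency(mid, f_bec, tau_mag, tau_th) > 1 / math.e:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Nearly pure BEC: F_BEC = 0.8, fitted tau_mag = 16.5 us,
# tau_th = 14.6 us from Eq. (dec) at T = 280 nK and theta = 110 deg.
tau_m = lifetime(0.8, 16.5e-6, 14.6e-6)
print(f"{tau_m * 1e6:.1f} us")   # -> ~14.3 us
```

As expected in the BEC regime, the resulting lifetime is dominated by the magnetic-dephasing constant; for $\mathcal{F}_{\rm BEC}\rightarrow1$ it reduces to $\tau_{\rm mag}$ exactly.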
In addition, we occasionally observe that the memory efficiency drifts by about $10-30~\%$ of the typical $\eta_{\rm m}$ (in part due to instability of the atom number of the prepared clouds on hours-long timescales), which appears to be more pronounced in measurements at short storage times ($t_{\rm s}<2~\mu$s) carried out for a BEC cloud. Finally, we show the details of our predictions for the ultimately achievable memory lifetimes ($\tau_{\rm m}$) in our current system. These predictions are based on our experimentally realised clouds (characterised by $T$, $\mathcal{F}_{\rm BEC}$ and $\mathcal{F}_{\rm th}$) at a probe-control separation angle $\theta$, under the assumption that decoherence effects such as magnetic dephasing and BEC-spatial decoherence can be eliminated by technical means. In this case, we consider the impact of three major decoherence mechanisms: (i) thermal motion, (ii) recoil motion, and (iii) inelastic two-body collisions. The effects of thermal- and recoil-motion induced decoherence are separately described by Gaussian decays of memory efficiency with characteristic times $\tau_{\rm th}$ (Eq.~\ref{eq:dec}) and $\tau_{\rm rec}=R_{\rm p} \lambda m / [2h \sin(\theta/2)]$~\cite{Lettner2011, Riedl2012}, respectively. While thermal-motion induced decoherence is not applicable to the BEC part of the atomic cloud, the effect of recoil motion is ignored for the thermal part (as it is significantly dominated by thermal decoherence) in our regime of interest. Furthermore, decoherence due to inelastic two-body collisions in the BEC is characterised by an exponential decay of memory efficiency with a decay time of $\tau_{\rm col}=m/[4h I_{\rm m} (a_{\rm sc}) \rho_{\rm B}]$, where $I_{\rm m} (a_{\rm sc})$ is the imaginary part of the scattering length for the two-component Rb BEC (in $\ket{\rm g}$ and $\ket{\rm s}$), and $\rho_{\rm B}$ is the peak density of the BEC~\cite{Dutton2004,Zhang2009}.
As the atomic density in a thermal cloud is substantially smaller than in a BEC, collisional decoherence is considered to be negligible for the thermal portion of the cloud. Under these conditions, the combined effect of these decoherence mechanisms on the memory efficiency $\eta$ is described by \begin{align} \eta(t_{\rm s})=\eta(0) \Bigg[&\mathcal{F}_{\rm BEC}\left(e^{-t_{\rm s}/ \tau_{\rm col}}\right) \left(e^{-{t_{\rm s}}^2/{\tau_{\rm rec}}^2}\right)\nonumber\\ &+(1-\mathcal{F}_{\rm BEC})\left(e^{-{t_{\rm s}}^2/{\tau_{\rm th}}^2}\right)\Bigg], \label{eq:decoher} \end{align} where $t_{\rm s}$ is the storage time, and $\eta(0)$ is the memory efficiency for $t_{\rm s0}=0$. The memory lifetime shown in Fig.~\ref{fig:performance}({\bf C}) is defined as the characteristic time at which the memory efficiency drops to $1/e$ of $\eta(0)$. \subsection*{Four-wave mixing noise} FWM noise is one of the main limitations towards developing reliable broadband spin-wave memories. The probability of FWM noise depends on two general factors: (i) the geometric relationship between the wavevectors of the probe and control fields, and (ii) memory resources, including optical depth and control-field power, along with system-specific parameters. The geometric factor is characterised by the angular separation $\theta$ between the probe and control fields. FWM noise is detrimental to the memory for $\theta \le \theta_{\rm FWM}$, the threshold angle that satisfies the general phase-matching conditions, as detailed in the Supplementary Information. This threshold angle is $\theta_{\rm FWM}=2\arcsin(\sqrt{{\lambda}/[{8\pi L}]})$ for an effective medium length $L$ along the propagation direction of the probe field. By choosing $\theta\geq\theta_{\rm FWM}$, FWM noise can be mostly eliminated, at the expense of larger thermal-diffusion-induced decoherence.
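The phase-matching threshold can be evaluated directly. In the sketch below the wavelength and medium length are illustrative values (a 780-nm probe and a 100-$\mu$m medium), not the measured parameters of the experiment.

```python
import math

def theta_fwm(wavelength, medium_length):
    """Threshold separation angle (in rad) from the phase-matching estimate
    theta_FWM = 2 * arcsin( sqrt( lambda / (8 * pi * L) ) )."""
    return 2.0 * math.asin(math.sqrt(wavelength / (8.0 * math.pi * medium_length)))

# illustrative values only: 780-nm probe, 100-um effective medium length
angle = theta_fwm(780e-9, 100e-6)
```

Since the argument of the square root scales as $1/L$, longer media phase-match FWM only at smaller angles, so a given separation angle $\theta$ is easier to keep above threshold in an optically long medium.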
The resource dependence of FWM is characterised by a ``noise strength'' parameter, which is proportional to the probability of FWM noise corrupting the memory, as detailed in Refs.~\cite{Lauk2013, Geng2014a}. The FWM noise strength is determined by the optical depth $d$, the peak Rabi frequency of the control field $\Omega_{\rm c}$, and system-specific parameters (e.g.,~$\gamma_{\rm eg}$ in a $\Lambda$-type three-level system) as follows: \begin{equation} S_{\rm FWM}\propto \Omega_{\rm c}^4\left[\sinh\left(\frac{\zeta d \gamma_{\rm eg}}{\Delta_{\rm gs}}\right)\right]^2, \label{eq:FWM} \end{equation} where $h\Delta_{\rm gs}$ is the energy difference between the ground levels of the $\Lambda$-system, and $\zeta={\Omega_{\rm c}}/{\Omega_{\rm c}^{\prime}}$ is the ratio of the Rabi frequency of the $\ket{\rm s}\rightarrow\ket{\rm e}$ transition to that of the $\ket{\rm g}\rightarrow\ket{\rm e}$ transition. Since the optical depth $d$ and Rabi frequency $\Omega_{\rm c}$ required for implementing an optimal broadband memory are proportional to the memory bandwidth ($B>\Gamma_{\rm eg}/2\pi$), with proportionality constants specific to the memory protocol, the FWM noise strength strongly depends on the bandwidth and on the employed protocol. The ATS protocol is advantageous for eliminating FWM noise due to its favorable resource scaling in the broadband operation regime, as compared to adiabatic memories such as the EIT and off-resonant Raman protocols. In Fig.~\ref{fig:performance}({\bf D}), we compare the FWM noise strength ($S_{\rm FWM}$) of the ATS and EIT protocols for the implementation of an optimal broadband memory in our $^{87}$Rb system (featuring $\Delta_{\rm gs}/2\pi=$6.83 GHz and $\zeta\approx1.33$).
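A minimal numerical sketch of Eq.~\ref{eq:FWM} (up to its overall prefactor) illustrates the protocol dependence of the noise strength. The optical-depth and Rabi-frequency scalings below are the optimal-memory values used in the comparison of the text ($d_{\rm ATS}=8b$, $\Omega_{\rm ATS}=1.5b$; $d_{\rm EIT}=50b$, $\Omega_{\rm EIT}=4b$ with $b=2\pi B/\Gamma_{\rm eg}$), while the linewidth value ($\Gamma_{\rm eg}/2\pi\approx 6$ MHz for the Rb D$_2$ line) is a representative assumption.

```python
import math

def fwm_noise_strength(d, omega_c, gamma_eg, delta_gs, zeta):
    """Eq. (FWM), up to an overall prefactor: S ~ Omega^4 * sinh(z*d*g/D)^2."""
    return omega_c ** 4 * math.sinh(zeta * d * gamma_eg / delta_gs) ** 2

gamma_eg = 2 * math.pi * 6.07e6     # assumed Rb D2 linewidth (rad/s)
delta_gs = 2 * math.pi * 6.83e9     # 87Rb ground-state splitting (rad/s)
zeta = 1.33
b = 20.0                            # bandwidth in units of Gamma_eg / (2*pi)

# optimal-memory scalings; Omega is quoted in units of 2*pi*B (ratio only)
s_ats = fwm_noise_strength(8 * b, 1.5 * b, gamma_eg, delta_gs, zeta)
s_eit = fwm_noise_strength(50 * b, 4.0 * b, gamma_eg, delta_gs, zeta)
```

Because the noise strength only enters as a ratio between protocols here, the units of $\Omega_{\rm c}$ cancel as long as they are consistent between the two calls; the ratio $s_{\rm ats}/s_{\rm eit}$ is several orders of magnitude below unity for this bandwidth.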
In this comparison, we calculate $S_{\rm FWM}$ using Eq.~\ref{eq:FWM} for a bandwidth range of $10(\Gamma_{\rm eg}/2\pi)<B<40(\Gamma_{\rm eg}/2\pi)$, requiring optical depths of $d_{\rm ATS}=8\times(2\pi B/\Gamma_{\rm eg})$ and $d_{\rm EIT}=50\times(2\pi B/\Gamma_{\rm eg})$ as well as peak Rabi frequencies of $\Omega_{\rm ATS}=1.5\times (2\pi B)$ and $\Omega_{\rm EIT}=4\times (2\pi B)$ for optimal ATS and EIT memories, based on their non-adiabatic and adiabatic operation conditions, respectively (see Ref.~\cite{Rastogi2019} for details). These results show that in this bandwidth range, corresponding to probe pulse durations between $1.9$ and $7.3$~ns in our Rb system, the probability of FWM noise associated with an optimal ATS memory is 4--5 orders of magnitude smaller than that associated with an optimal EIT memory. Consequently, the ATS protocol offers a favorable option for the realization of long-lived broadband quantum memories featuring both high-speed and faithful operation. \subsection*{Acknowledgments} We thank Dr. Khabat Heshami for useful discussions, and appreciate generous technical support from Paul Davis and Greg Popowich. We gratefully acknowledge funding from the Natural Sciences and Engineering Research Council of Canada (NSERC RGPIN-2014-06618), the Canada Foundation for Innovation (CFI), the Canada Research Chairs Program (CRC), the Alberta Major Innovation Fund Quantum Technologies project, Alberta Innovates, and the University of Alberta.
\section{Introduction} Since the work of Lotka and Volterra, ecologists have attempted to mathematize the interactions between populations to build predictive models of population dynamics. This is a complex problem -- ecological communities are often composed of a large number of species~\cite{May1988}, the equations describing their interactions have been debated for decades~\cite{Arditi2012}, and the estimation of parameters and initial conditions is often unfeasible from an empirical standpoint. To circumvent this problem, Robert May~\cite{May1972} introduced the idea of modeling complex ecological communities using random matrices. Consider the case in which the dynamics of the populations can be described by a system of ordinary differential equations: \begin{equation} \frac {dx_i(t)} {dt} = f_i(\vect{x}(t)), \label{eq:generic} \end{equation} \noindent where $\vect{x}(t)$ is a vector containing the population abundances at time $t$, and the function $f_i$ relates the abundances of all populations to the growth of population $i$. In general, $f_i$ is a nonlinear function with several parameters. Suppose that the system admits a feasible equilibrium point, i.e., a vector $\vect{x}^\ast$ such that $f_i(\vect{x}^\ast) = 0$ and $x_i^\ast > 0$ for all $i$. If we start the system at this point, it will remain there indefinitely. We can therefore ask whether the system will return to the equilibrium, or rather move away from it, following a perturbation. This type of stability analysis can be carried out by building the Jacobian matrix $J_{ij} = \partial f_i(\vect{x}(t)) / \partial x_j$ and evaluating it at the equilibrium point, yielding the so-called community matrix $\mat{M} = \left . \mat{J} \right \rvert_{\vect{x}^\ast}$.
If all the eigenvalues of $\mat{M}$ have negative real part, then the equilibrium is locally asymptotically stable, and the system will return to it after sufficiently small perturbations; if any eigenvalue has a positive real part, the system will move away from the equilibrium when perturbed. Clearly, to build $\mat{M}$ one would need to know precisely the functions $f_i$, as well as their parameters, and to solve for the equilibrium (or equilibria) $\vect{x}^\ast$. May took a radically different approach and analyzed the case in which $\mat{M}$ is a random matrix with independent, identically distributed off-diagonal elements and constant diagonal elements~\cite{May1972}. For this parameterization, he was able to show that the community matrices describing sufficiently large and complex ecological communities are always unstable. The random-matrix approach was recently extended and refined to include different types of interaction between the populations~\cite{Allesina2012,ReviewRMT}, as well as to study the effect of more complex network structures, such as the hierarchical organization of food webs~\cite{Allesina2015a} and the modular pattern often displayed by biological networks~\cite{Grilli2016}. By modeling the matrix $\mat{M}$ directly as a random matrix, one does not require a precise characterization of the functions $f_i$ and the equilibrium $\vect{x}^\ast$. While mathematically convenient, this approach does not explicitly take into account the abundance of the populations---a type of data that is empirically much more accessible than interaction coefficients or the elements of the community matrix. The distribution of species abundances (SAD) has been shown to have remarkably similar features across different species-rich communities~\cite{Bell2001a}, with a skewed shape and a few highly abundant species.
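May's stability criterion can be checked numerically with a minimal sketch: for i.i.d. off-diagonal entries with standard deviation $\sigma$ and constant self-regulation $-d$, the circular law places the bulk of eigenvalues in a disk of radius $\sigma\sqrt{S}$ centered at $-d$, so stability requires $\sigma\sqrt{S} < d$. The values of $S$, $\sigma$ and $d$ below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_community_matrix(S, sigma, d):
    """May's parameterization: i.i.d. off-diagonal entries with standard
    deviation sigma, constant self-regulation -d on the diagonal."""
    M = rng.normal(0.0, sigma, size=(S, S))
    np.fill_diagonal(M, -d)
    return M

def is_stable(M):
    """Local asymptotic stability: all eigenvalues have negative real part."""
    return np.max(np.linalg.eigvals(M).real) < 0.0

S = 500
stable_case = is_stable(random_community_matrix(S, 0.5 / np.sqrt(S), 1.0))    # radius 0.5 < 1
unstable_case = is_stable(random_community_matrix(S, 2.0 / np.sqrt(S), 1.0))  # radius 2.0 > 1
```

With $\sigma\sqrt{S}=0.5$ the rightmost eigenvalue sits near $-0.5$ and the matrix is stable; with $\sigma\sqrt{S}=2$ the bulk crosses the imaginary axis and stability is lost, in line with May's complexity-instability result.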
The log-series distribution~\cite{Fisher1943a}, the discrete lognormal~\cite{Preston1948} and the negative binomial~\cite{Volkov2007} have all been proposed to describe empirical SADs, and have been shown to emerge from either neutral~\cite{Caswell1976,Hubbell2001a,Volkov2003,Azaele2016} or niche mechanisms~\cite{MacArthur1957a,Vandermeer1966}. The role of species abundances in structuring the community matrix $\mat{M}$ can be easily seen by considering one of the simplest models of population dynamics, the Generalized Lotka-Volterra (GLV) model: \begin{equation} \frac {dx_i(t)} {dt} = x_i(t) \left(r_i + \sum _{j} A_{ij} x_j(t) \right) \ , \label{eq:LV} \end{equation} \noindent where $r_i$ is the intrinsic growth rate of species $i$, and $A_{ij}$ is the per-capita effect of species $j$ on the growth of species $i$. If a feasible equilibrium (i.e., one where all species have positive abundance) exists, then it can be found by solving the system of equations \begin{equation} 0 = r_i + \sum _{j} A_{ij} x^\ast_j \ , \end{equation} \noindent yielding the community matrix $M_{ij} = A_{ij} x_i^\ast$, which can be written in matrix form as \begin{equation} \mat{M} = \mat{X} \mat{A} \ , \end{equation} \noindent where $\mat{X}$ is a diagonal matrix with $X_{ii} = x_i^\ast$ and zeros elsewhere. Even if the elements of $\mat{A}$ were independent, identically distributed samples from a distribution, the elements of $\mat{M}$ would not be---the matrix of abundances $\mat{X}$ couples all the coefficients in the same row, such that the distribution of the elements in each row would in principle be different. One of the main goals of this work is to extend the random matrix approach by considering a random matrix of abundances $\mat{X}$ and a random matrix of interactions $\mat{A}$, and determining the stability of $\mat{M}$ under these conditions.
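The identity $M_{ij} = A_{ij} x_i^\ast$ can be verified numerically by differentiating the GLV right-hand side at an equilibrium; since the GLV right-hand side is quadratic in $\vect{x}$, a central difference recovers the Jacobian essentially exactly. The system size and parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
S = 5
A = rng.normal(0.0, 0.2, size=(S, S))
np.fill_diagonal(A, -1.0)
x_star = rng.uniform(0.5, 1.5, size=S)   # a chosen (feasible) equilibrium
r = -A @ x_star                          # growth rates making x_star a fixed point

def glv_rhs(x):
    """Right-hand side of the GLV equations (Eq. LV)."""
    return x * (r + A @ x)

# central-difference Jacobian at the equilibrium
eps = 1e-6
J = np.empty((S, S))
for j in range(S):
    dx = np.zeros(S)
    dx[j] = eps
    J[:, j] = (glv_rhs(x_star + dx) - glv_rhs(x_star - dx)) / (2 * eps)

M = np.diag(x_star) @ A   # community matrix M = X A predicted by the text
```

The numerical Jacobian coincides with $\mat{X}\mat{A}$ because the diagonal contribution $r_i + \sum_j A_{ij}x_j^\ast$ vanishes at the fixed point, leaving only the $x_i^\ast A_{ij}$ term.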
In this way, we address the effect of species abundances on stability, thereby lifting one of the main criticisms of the random matrix approach~\cite{ReviewRMT,amnatjames,Jacquet2016}. As stated above, when analyzing coexistence we need population abundances to be positive (\emph{feasible}). Stability cannot, at least in principle, be disentangled from the constraint imposed by feasibility on interactions~\cite{Roberts1974}. Diversity and interaction properties have important consequences for the range of parameters corresponding to feasible solutions~\cite{Rohr2014,Stone2016,Grilli2017}. While interest in feasibility has grown considerably in recent years, the relationship between feasibility and stability is still unclear. In fact, most studies of feasibility assume strong conditions on the interaction matrix (e.g., D-stability, diagonal stability) that guarantee the stability of any feasible solution~\cite{Rohr2014,Grilli2017}. It is still unclear when these assumptions are justified and how likely it is for large random interaction matrices to meet these conditions. In the second part of this work, we focus on the relationship between feasibility and stability. In particular, we study the relationship between the stability of $\mat{A}$ and that of $\mat{M}$ for the GLV model. Our results show that, given a stable random matrix $\mat{A}$, the probability that an arbitrary feasible equilibrium is unstable decreases exponentially with diversity. This result strongly suggests that, provided the interaction matrix $\mat{A}$ is stable, feasible solutions are almost surely stable. We therefore provide a more robust justification both for May's original paper---by showing that population abundances do not qualitatively affect stability---and for the more recent work on feasibility that assumes stability---by predicting that this assumption is almost surely met for large random systems.
\section{Constructing the community matrix with arbitrary population abundance} We consider a system of $S$ interacting populations whose dynamics are described by the GLV model in equation~\ref{eq:LV}, assume that a feasible equilibrium $\vect{x}^\ast$ exists, and define $\mat{X}$ as the diagonal matrix with diagonal entries $X_{ii} = x^\ast_i$. The feasible fixed point $\vect{x}^\ast$ is locally asymptotically stable if and only if all the eigenvalues of the community matrix $\mat{M} = \mat{X} \mat{A}$, with components $M_{ij} = x^\ast_i A_{ij}$, have negative real part. Here, we model $\mat{A}$ as a random matrix and $\vect{x}^\ast$ as a random vector with positive components, with the goal of studying the spectrum (distribution of the eigenvalues) of the community matrix $\mat{M}$. In the GLV model, specifying a feasible fixed point $\vect{x}^\ast$ is the same as specifying a vector of intrinsic growth rates $\vect{r}$ inside the feasibility domain~\cite{Rohr2014,Grilli2017}. More specifically, we assume that the diagonal entries of the diagonal matrix $\mat{X}$ are drawn from an arbitrary distribution with positive support, mean $\mu_X$, and variance $\sigma_X^2$. The diagonal entries of $\mat{A}$ are drawn from an arbitrary distribution with support on the negative axis, mean $\mu_d$, and variance $\sigma_d^2$. Finally, each off-diagonal pair $(A_{ij},A_{ji})$ in $\mat{A}$ is drawn independently from a bivariate distribution with identical marginal means $\mu$, variances $\sigma^2$, and correlation $\rho$. Unless otherwise specified, we focus on the case $\sigma_d = 0$, while we discuss the effects of variability in self-regulation in the Supplementary Information. In the case $\sigma_d = 0$ and in the limit of large $S$, the spectrum of $\mat{A}$ is known and is independent of the choice of the bivariate distribution (provided that mild conditions on the finiteness of the moments are satisfied~\cite{nguyen2012elliptic}).
In particular, $\mat{A}$ has one eigenvalue equal to $\mu_d + (S-1) \mu$~\cite{ORourke2014}, while the others (the \emph{bulk} of eigenvalues) are uniformly distributed in an ellipse in the complex plane centered at $\mu_d-\mu$, with horizontal semi-axis $\sqrt{S} \sigma (1+\rho)$ and vertical semi-axis $\sqrt{S} \sigma (1-\rho)$~\cite{nguyen2012elliptic,Allesina2012,ORourke2014}. Figure~\ref{fig:populationaffect} shows an example of the spectrum of $\mat{A}$, along with an example of the eigenvalues of the community matrix $\mat{M} = \mat{X} \mat{A}$, where the diagonal entries of $\mat{X}$ are independent random variables drawn from a uniform distribution. It is evident that the bulk of eigenvalues of $\mat{M}$ does not follow the elliptic law. \begin{figure} \centering \includegraphics[width = 0.6\textwidth]{Fig1.pdf} \caption{The top row shows the vector of abundances $\vect{x}^\ast$, the interaction matrix $\mat{A}$ and the community matrix $\mat{M} = \mat{X} \mat{A}$ (where $\mat{X}$ is a diagonal matrix with diagonal entries $\vect{x}^\ast$), with colors from red (negative) to green (positive). The bottom row shows the eigenvalues of $\mat{A}$ and $\mat{M}$, for $S = 500$. The diagonal entries of $\mat{X}$ are sampled from a uniform distribution on $[0,1]$, and the matrix $\mat{A}$ is built by sampling each pair $(A_{ij}, A_{ji})$ independently from a bivariate normal distribution with identical marginals defined by $\mu = 0$, $\sigma = 1 / \sqrt{S}$, and correlation $ \rho = -0.5$. The diagonal elements of $\mat{A}$ are fixed at $-1$.
The main goal of this work is to characterize the spectrum of $\mat{M}$ given the properties of $\mat{A}$ and $\mat{X}$.} \label{fig:populationaffect} \end{figure} \section{Disentangling the effect of the mean interaction strength} \label{sec:disentangle} When the mean $\mu$ of the off-diagonal elements of the interaction matrix $\mat{A}$ is not zero, the spectra of $\mat{A}$ and $\mat{M}$ are characterized by the presence of an outlier. The value of this eigenvalue for the matrix $\mat{A}$ is known for the case $\sigma_d = 0$, in the limit of large $S$~\cite{ORourke2014}. It can be obtained by decomposing the matrix $\mat{A}$ as a sum of three matrices \begin{equation} \mat{A} = (\mu_d-\mu) \mat{I} + \mu \mat{1} + \mat{B} \ , \end{equation} \noindent where $\mat{I}$ is the identity matrix, $\mat{1}$ is a matrix of ones, and $\mat{B}$ is a random matrix with mean zero that follows the elliptic law. It has been proved~\cite{ORourke2014} that the spectrum of $\mat{A}$ is characterized by a bulk of eigenvalues, determined by the spectrum of $(\mu_d-\mu) \mat{I} + \mat{B}$, and by the presence of an outlier, whose value is (approximately) given by the largest eigenvalue of $(\mu_d-\mu) \mat{I} + \mu \mat{1}$, which is $\mu_d + (S-1) \mu$. Figure~\ref{fig:observation} shows that, if $\mu \neq 0$, the spectrum of $\mat{M}$ is also characterized by the presence of a bulk and of an outlying eigenvalue. By decomposing the matrix $\mat{M}$ as \begin{equation} \mat{M} = \mat{X} \left( (\mu_d-\mu) \mat{I} + \mu \mat{1} + \mat{B} \right) \ , \end{equation} \noindent we show in the Supplementary Information\ that the bulk of the spectrum of $\mat{M}$ is determined by the eigenvalues of the matrix $\mat{J} = \mat{X} \left( (\mu_d-\mu) \mat{I} + \mat{B} \right)$, while the outlier is given by the largest eigenvalue of $\mat{Q} = \mat{X} \left( (\mu_d-\mu) \mat{I} + \mu \mat{1} \right)$.
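A quick numerical check of this decomposition (with illustrative parameter values) confirms that the rightmost eigenvalue of $\mat{M}$ is well approximated by the largest eigenvalue of the mean part $\mat{Q}$:

```python
import numpy as np

rng = np.random.default_rng(2)
S = 400
mu, sigma, mu_d = 5.0 / S, 1.0 / np.sqrt(S), -1.0

# interaction matrix with non-zero off-diagonal mean (pairs sampled i.i.d.)
A = rng.normal(mu, sigma, size=(S, S))
np.fill_diagonal(A, mu_d)

x = rng.uniform(0.0, 1.0, size=S)
X = np.diag(x)

M = X @ A
Q = X @ ((mu_d - mu) * np.eye(S) + mu * np.ones((S, S)))  # "mean part" of M

outlier_M = np.max(np.linalg.eigvals(M).real)
outlier_Q = np.max(np.linalg.eigvals(Q).real)
```

Here the outlier sits well to the right of the bulk, and removing the random part $\mat{B}$ (i.e., replacing $\mat{M}$ by $\mat{Q}$) leaves it essentially unchanged, as the decomposition predicts.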
Figure~\ref{fig:observation} shows an example of this decomposition, where it is evident that the bulks of eigenvalues of $\mat{M}$ and $\mat{J}$ are the same, and that the outliers of $\mat{M}$ and $\mat{Q}$ match. \begin{figure} \centering \includegraphics[width = 0.8\textwidth]{Fig2.pdf} \caption{The top row shows the three matrices $\mat{M}$, $\mat{Q}$ and $\mat{J}$. The community matrix $\mat{M} = \mat{X} \mat{A}$ is obtained from the interaction matrix $\mat{A}$ that, without loss of generality, can be written as $\mat{A} = (\mu_d - \mu) \mat{I} + \mu \mat{1} + \mat{B}$, where $\mat{1}$ is a matrix of ones and $\mat{B}$ is a random matrix with diagonal elements fixed at zero and off-diagonal coefficients with mean zero and variance $\sigma^2$. We define $\mat{Q} = \mat{X} ( (\mu_d - \mu)\mat{I} + \mu \mat{1}) $ and $\mat{J} = \mat{X} ((\mu_d - \mu)\mat{I} + \mat{B})$. Equivalently, $\mat{Q}$ is the matrix obtained with the same parameters as $\mat{M}$ except with $\sigma = 0$, and $\mat{J}$ is obtained by subtracting the mean $\mu$ from every entry of $\mat{A}$. Remarkably, the eigenvalues of $\mat{M}$, $\mat{J}$ and $\mat{Q}$ are simply related: the bulk of eigenvalues of $\mat{J}$ and that of $\mat{M}$ are the same, while the outlier of $\mat{M}$ is the same as that of $\mat{Q}$. This decomposition allows us to obtain an analytical prediction for the outlier, and in the Supplementary Information\ we find the spectrum of $\mat{Q}$ analytically. In the figure, we set $S = 500$. The diagonal entries of $\mat{X}$ are sampled from a uniform distribution on $[0,1]$. The off-diagonal pairs of $\mat{A}$ are sampled from a bivariate normal distribution with identical marginals $\mu = 5 / S$, $\sigma = 5 / \sqrt{S}$ and correlation $\rho = -0.5$.
} \label{fig:observation} \end{figure} The trace of $\mat{M}$ is given by \begin{equation} \tr\left(\mat{M}\right) = \lambda_{\textrm{out}} + (S-1) \langle \lambda \rangle_{\textrm{bulk}} \ , \end{equation} \noindent where $\lambda_{\textrm{out}}$ is the value of the outlier and $\langle \lambda \rangle_{\textrm{bulk}}$ is the average eigenvalue in the bulk. Since the bulks of the eigenvalues of $\mat{M}$ and $\mat{J}$ are the same, we have that \begin{equation} \langle \lambda \rangle_{\textrm{bulk}} = \frac{1}{S} \tr\left(\mat{J}\right) = \mu_X \left(\mu_d - \mu \right) \ . \end{equation} Using the fact that \begin{equation} \tr\left(\mat{M}\right) = S \mu_d \mu_X \ , \end{equation} \noindent we see that the outlier is equal to \begin{equation} \lambda_{\textrm{out}} = \mu_X \left( \mu_d + (S-1) \mu \right) \ . \label{eq:outlier} \end{equation} Figure~\ref{fig:outeigen} shows that this analytical prediction closely matches the outlier of the spectrum of $\mat{M}$. \begin{figure} \centering \includegraphics[width = 0.75\textwidth]{Fig3.pdf} \caption{The three panels show that our analytical prediction (equation~\ref{eq:outlier}) correctly matches the outlier of the spectrum, for different values of $\rho$ and different population abundance distributions. The matrix $\mat{A}$ is built by independently sampling the coefficients from a bivariate normal distribution with identical marginals defined by $\mu$, $\sigma$, and $\rho$. Here, we set $S = 1000$ and $\sigma = 1 / \sqrt{S}$, and vary $\mu$ between $-10$ and $10$ to test our prediction. We draw $\mat{X}$ from three different distributions with positive support: uniform (on [0,1]), log-normal (with log-mean $0.5$ and log-standard deviation $0.5$) and half-normal (shifted rightwards to have support $(1, \infty)$, and with parameter $ \theta = 1$). We can observe deviations from our prediction when $\mu$ is small, especially when $\mat{X}$ is drawn from a log-normal distribution.
This is because the eigenvalue corresponding to equation~\ref{eq:outlier} is then contained in the bulk.} \label{fig:outeigen} \end{figure} \section{Analytical solution in the case $\rho = 0$} In section~\ref{sec:disentangle} we showed that the spectrum of $\mat{M}$ is characterized by a bulk of eigenvalues and an outlier, which is determined by the mean $\mu$ of the interaction matrix. In the following, we focus on the bulk of eigenvalues, so we assume $\mu = 0$. Using the cavity method~\cite{Rogers2008,Rogers2009,Grilli2016}, we derive in the Supplementary Information\ a system of equations for the spectral density of the matrix $\mat{M}$. These equations cannot be solved explicitly in the most general case, but they take a particularly simple form when the correlation $\rho = 0$. In this case, it is possible to write an implicit equation for the support of the spectrum, which takes the form \begin{equation} \int {\ud} x \ {\ud} s \ P_{XD}(x,s) \ \frac{ S x^2 \sigma^2} {| \lambda - s x |^2 } = 1 \ , \label{eq:uncpred} \end{equation} \noindent where $P_{XD}(x,s)$ is the joint distribution of the population abundances $x$ (with mean $\mu_X$ and variance $\sigma_X^2$) and the self-regulation terms $s$ (i.e., the diagonal elements of the interaction matrix, with mean $\mu_d$ and variance $\sigma_d^2$). The complex solutions $\lambda$ of this equation define the boundary of the support of the spectrum in the complex plane. In the Supplementary Information\ we explicitly solve the case of constant self-regulation terms (i.e., $\sigma_d = 0$) and population abundances drawn from a uniform distribution. When the self-regulation terms are constant, equation~\ref{eq:uncpred} reduces to \begin{equation} \int {\ud}x \ P_{X}(x) \ \frac{S x^2 \sigma^2} {| \lambda - \mu_d x |^2 } = 1 \ , \label{eq:uncpred_diag} \end{equation} \noindent where $P_{X}(x)$ is the species abundance distribution.
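The boundary condition of equation~\ref{eq:uncpred_diag} can be solved numerically for the rightmost real point of the support and compared with a sampled spectrum. The sketch below uses the uniform abundance distribution and illustrative parameters; the integral is discretized with a midpoint rule and the boundary is located by bisection.

```python
import numpy as np

S, mu_d = 500, -2.0
sigma = 1.0 / np.sqrt(S)
a, b = 0.25, 1.75                        # uniform abundance distribution on [a, b]

# midpoint rule for the integral in Eq. (uncpred_diag); P_X(x) dx = 1/n per node
n = 200000
xs = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
w = 1.0 / n

def support_lhs(lam):
    """Left-hand side of Eq. (uncpred_diag) for real lambda."""
    return np.sum(w * S * xs**2 * sigma**2 / np.abs(lam - mu_d * xs) ** 2)

# rightmost real point of the support: largest lambda with lhs = 1; the lhs
# diverges as lambda -> mu_d * a and is below 1 at lambda = 0 (stable case)
lo, hi = mu_d * a + 1e-4, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if support_lhs(mid) > 1.0 else (lo, mid)
edge = 0.5 * (lo + hi)

# compare with the rightmost eigenvalue of a sampled community matrix
rng = np.random.default_rng(5)
A = rng.normal(0.0, sigma, size=(S, S))
np.fill_diagonal(A, mu_d)
x_star = rng.uniform(a, b, size=S)
lam_max = np.max(np.linalg.eigvals(np.diag(x_star) @ A).real)
```

For these parameters the predicted edge lies on the negative real axis, and the rightmost eigenvalue of a single sampled matrix falls close to it, consistent with Fig.~\ref{fig:uncorrelatedpredictions}.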
Figure~\ref{fig:uncorrelatedpredictions} compares the analytical prediction with the bulk of eigenvalues of $\mat{M}$ for different distributions of $\mat{X}$, showing that the solutions of equation~\ref{eq:uncpred_diag} closely match the support of the spectrum of $\mat{M}$. \begin{figure} \centering \includegraphics[width = 0.75\textwidth]{Fig4.pdf} \caption{The top row shows that the analytical prediction for the support of the eigenvalue distribution obtained in equation~\ref{eq:uncpred_diag} (solid blue line) correctly predicts the support of the spectrum of $\mat{M} = \mat{X} \mat{A}$. In all three plots, $\mat{A}$ is built using a bivariate normal distribution with identical marginals $\mu = 0$, $\sigma = 1 / \sqrt{S}$ and correlation $ \rho = 0$. The diagonal entries of $\mat{A}$ are fixed at $-2$. We considered three different abundance distributions: uniform ($\mat{X}$ is sampled from a uniform distribution on [0.25, 1.75]), lognormal ($\mat{X}$ is sampled from a log-normal distribution with log-mean $0.5$ and log-standard deviation $0.5$) and half-normal ($\mat{X}$ is sampled from a half-normal distribution, shifted rightwards to have support $(1, \infty)$, and with parameter $ \theta = 1$). The bottom row shows the value of the rightmost eigenvalue of $\mat{M}$ against the analytical prediction for the leading eigenvalue, for matrices with the same abundance distributions used above, but varying their variances $\sigma_X^2$. Different colors correspond to different values of $\sigma$. Each point is an average over $20$ simulations.} \label{fig:uncorrelatedpredictions} \end{figure} Equation~\ref{eq:uncpred_diag} also predicts that if $\mat{A}$ is stable, then $\mat{M}$ is stable. In fact, equation~\ref{eq:uncpred_diag} implies that the matrix $\mat{A}$ is stable if and only if $\mu_d + \sqrt{S \sigma^2} < 0$.
If this condition is met, it is simple to observe that \begin{equation} \frac{S x^2 \sigma^2} {| \lambda - \mu_d x |^2 } < 1 \end{equation} \noindent for any complex $\lambda$ with positive real part and any positive real $x$. When this inequality is used in equation~\ref{eq:uncpred_diag}, one obtains that the points on the boundary of the support, and therefore all the eigenvalues, always have negative real part. \section{The stability of large community matrices does not depend on population abundance} \label{sec:dstab} In the previous section, we derived the spectrum in the case $\rho = 0$, finding that if the interaction matrix $\mat{A}$ is stable, then $\mat{M}$ is stable. The goal of this section is to study in more depth the relationship between the stability of $\mat{A}$ and that of $\mat{M}$. More specifically, given a stable random matrix $\mat{A}$, we ask what is the probability of finding a positive diagonal matrix $\mat{X}$ such that $\mat{M} = \mat{X} \mat{A}$ is unstable. A matrix $\mat{A}$ is \emph{D-stable} if, for any positive diagonal matrix $\mat{X}$, $\mat{X} \mat{A}$ is stable~\cite{Kaszkurewicz2000}. An explicit condition for D-stability that does not require checking all possible choices of $\mat{X}$ is not known in dimension larger than four~\cite{Redheffer1985}. Therefore, it is not known, in general, for which values of $\mu$, $\sigma$, $\rho$ and $\mu_d$ random matrices are expected to be D-stable. A stronger condition for stability is \emph{diagonal stability}. A matrix $\mat{A}$ is diagonally stable if there exists a positive diagonal matrix $\mat{X}$ such that $\mat{X}\mat{A}+\mat{A}^t\mat{X}$ is stable. Interestingly, diagonal stability implies D-stability~\cite{Kaszkurewicz2000}. As for D-stability, a simple necessary and sufficient test for diagonal stability is not known.
On the other hand, it is simple to observe that the stability of $(\mat{A}+\mat{A}^t)/2$ is a sufficient condition for diagonal stability (corresponding to choosing a constant diagonal matrix $\mat{X}$), and therefore also implies D-stability. All the eigenvalues of $(\mat{A}+\mat{A}^t)/2$ are real and, if $\mat{A}$ is a random matrix with independently distributed entries and bounded higher moments, the bulk of eigenvalues of $(\mat{A}+\mat{A}^t)/2$ follows Wigner's semicircle distribution~\cite{Wigner1958,Tang2014} \begin{equation} \varrho_{\frac{\mat{A}+\mat{A}^t}{2}}(\lambda) = \frac{\sqrt{ 2 S \sigma^2 (1+\rho) - \left( \lambda - (\mu_d - \mu) \right)^2 }}{ \pi S \sigma^2 (1+\rho)} \ , \end{equation} \noindent with one outlying eigenvalue equal to $\mu_d + (S-1) \mu$. For positive mean $\mu$, if $\mu > (1+\rho) \sigma / \sqrt{S}$, the rightmost eigenvalue is the outlier. In this case, the rightmost eigenvalues of $\mat{A}$ and of $(\mat{A}+\mat{A}^t)/2$ are the same. Therefore, for non-negative $\mu$, stable random matrices are almost surely diagonally stable. Since diagonal stability implies D-stability, if $\mat{A}$ is stable, then $\mat{M}=\mat{X}\mat{A}$ is stable. This argument is in agreement with our formula for the outlier of $\mat{M}$ in the case of non-vanishing mean $\mu$, obtained in equation~\ref{eq:outlier}. For positive mean $\mu$, the rightmost eigenvalue of $\mat{M}$ is equal to $\mu_X \lambda_{\mat{A}}$, where $\lambda_{\mat{A}}$ is the rightmost eigenvalue of $\mat{A}$ and $\mu_X$ is positive by definition. The sign of the rightmost eigenvalue of $\mat{M}$ is therefore the same as that of the rightmost eigenvalue of $\mat{A}$. Since a negative $\mu$ only produces an equal shift in the rightmost eigenvalues of $\mat{A}$, $(\mat{A}+\mat{A}^t)/2$ and $\mat{M}$, we can restrict our analysis to the case $\mu = 0$.
For vanishing mean, the rightmost eigenvalue of $(\mat{A}+\mat{A}^t)/2$ is equal to~\cite{Tang2014} \begin{equation} \lambda_{\frac{\mat{A}+\mat{A}^t}{2}} = \mu_d + \sqrt{2 S \sigma^2 (1+\rho) } \ , \label{eq:lowerdiag} \end{equation} \noindent which should be compared with the rightmost eigenvalue of $\mat{A}$ \begin{equation} \lambda_{\mat{A}} = \mu_d + \sqrt{S \sigma^2} (1+\rho) \ . \label{eq:stab} \end{equation} As shown in~\cite{Tang2014,Grilli2017}, $\lambda_{\frac{\mat{A}+\mat{A}^t}{2}} \geq \lambda_{\mat{A}}$, with equality in the case $\rho = 1$. Equation~\ref{eq:lowerdiag} provides a sufficient condition for diagonal stability: if \begin{equation} \mu_d + \sqrt{2 S \sigma^2 (1+\rho) } < 0 \ , \label{eq:diagstabsuff} \end{equation} \noindent then $\mat{A}$ is diagonally stable and, for any choice of positive diagonal matrix $\mat{X}$, $\mat{M}=\mat{X}\mat{A}$ is stable. The non-trivial regime therefore corresponds to the values of the parameters for which $\mu_d + \sqrt{2 S \sigma^2 (1+\rho) } > 0$ and $\mu_d + \sqrt{S \sigma^2} (1+\rho) < 0$~\cite{Grilli2017}. Since an explicit condition for D-stability does not exist, we computed the probability that, given a stable random matrix $\mat{A}$, a positive diagonal matrix $\mat{X}$ would make $\mat{M} = \mat{X} \mat{A}$ unstable. Note that, since any matrix has a non-null probability of being generated when the entries are sampled from a bivariate distribution with infinite support, this probability is always non-zero. The relevant question in this context is therefore how this probability depends on the number of species $S$. Figure~\ref{fig:unweightedprobabilities} shows that the probability of finding an $\mat{X}$ with a destabilizing effect decreases exponentially with the number of species $S$, with a rate that depends on the rightmost eigenvalue $\lambda_{\mat{A}}$ and on the correlation $\rho$. This implies that, for large values of $S$, $\mat{M}$ is almost surely stable if $\mat{A}$ is stable.
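The sufficient condition of equation~\ref{eq:diagstabsuff} can be illustrated numerically: when $(\mat{A}+\mat{A}^t)/2$ is stable, $\mat{A}$ is diagonally stable and every sampled positive diagonal matrix $\mat{X}$ must leave $\mat{M} = \mat{X}\mat{A}$ stable. The parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
S, rho = 200, 0.0                      # rho = 0: independent off-diagonal pairs
sigma = 1.0 / np.sqrt(S)
# choose mu_d so that mu_d + sqrt(2 S sigma^2 (1 + rho)) < 0 (Eq. diagstabsuff)
mu_d = -1.2 * np.sqrt(2 * S * sigma**2 * (1 + rho))

A = rng.normal(0.0, sigma, size=(S, S))
np.fill_diagonal(A, mu_d)

# (A + A^t)/2 stable  =>  A diagonally stable  =>  A D-stable
sym_stable = np.max(np.linalg.eigvalsh((A + A.T) / 2)) < 0

n_unstable = 0
for _ in range(50):                    # many random positive diagonal X
    x = rng.uniform(0.0, 1.0, size=S)
    if np.max(np.linalg.eigvals(np.diag(x) @ A).real) >= 0:
        n_unstable += 1
```

In the non-trivial regime between equations~\ref{eq:diagstabsuff} and~\ref{eq:stab}, this guarantee no longer applies, which is precisely where the exponentially small destabilization probability of Figure~\ref{fig:unweightedprobabilities} is measured.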
\begin{figure} \includegraphics[width = 0.75\textwidth]{Fig5.pdf} \caption{ We computed the probability that a matrix $\mat{M} = \mat{X} \mat{A}$ is unstable (i.e., that the leading eigenvalue has positive real part), given that $\mat{A}$ is a stable random matrix with rightmost eigenvalue equal to $\lambda_{max} = -d$. This probability decreases exponentially with $S$ for different values of $\rho$ ($-0.5$ in the left panel, $0$ in the center and $0.5$ in the right) and $\lambda_{max}$ (different colors). For a given number of species $S$, we construct the random matrix $\mat{A}$ sampling its entries from a bivariate normal with identical marginals $\mu = 0$, $\sigma = 1 / \sqrt{S}$ and given $\rho$. The diagonal elements of $\mat{A}$ are all equal and their value is determined in order to have dominant eigenvalue equal to $\lambda_{max}$. The diagonal entries of $\mat{X}$ were sampled from a uniform distribution on $[0,1]$. For each value of the parameters $\rho$, $\lambda_{max}$ and $S$, we constructed $XXX$ matrices $\mat{A}$ and $\mat{X}$ and computed the fraction of matrices $\mat{M} = \mat{X}\mat{A}$ with positive dominant eigenvalue. } \label{fig:unweightedprobabilities} \end{figure} \section{Fixed points are almost surely stable in large random Lotka-Volterra equations.} If we consider the Lotka-Volterra equations (equation~\ref{eq:LV}), and we set the values of the intrinsic growth rates $\vect{r}$, the fixed point has components \begin{equation} x_i^\ast = \sum_j A^{-1}_{ij} r_j \ . \label{eq:fixedpoint} \end{equation} Let us also assume that all these components are positive (i.e., $\vect{r}$ is inside the feasibility domain). In section~\ref{sec:dstab} we showed that the matrix obtained by multiplying a stable random matrix $\mat{A}$ and a random positive diagonal matrix $\mat{X}$ is more and more likely to be stable as $S$ increases. 
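A minimal version of the Monte Carlo experiment of the previous section can be sketched as follows (our own illustration with $\rho=0$ and arbitrary parameter values, not the code used for the figures). It conditions on a stable $\mat{A}$, counts how often a uniform positive diagonal $\mat{X}$ destabilizes $\mat{M}=\mat{X}\mat{A}$, and also exercises the sufficient condition of equation~\ref{eq:diagstabsuff}: a negative definite $\mat{A}$ can never be destabilized.

```python
import numpy as np

rng = np.random.default_rng(1)

def frac_destabilized(S, mu_d, trials=200):
    """Among trials where A (diagonal mu_d, off-diagonal N(0, 1/S), rho = 0)
    is stable, return the fraction where M = X A is unstable for a random
    positive diagonal X with uniform(0, 1) entries."""
    sigma = 1.0 / np.sqrt(S)
    n_stable, n_destab = 0, 0
    for _ in range(trials):
        A = sigma * rng.standard_normal((S, S))
        np.fill_diagonal(A, mu_d)
        if np.max(np.linalg.eigvals(A).real) >= 0.0:
            continue  # condition on a stable interaction matrix
        n_stable += 1
        x = rng.uniform(0.0, 1.0, S)
        if np.max(np.linalg.eigvals(x[:, None] * A).real) > 0.0:
            n_destab += 1
    return n_destab / max(n_stable, 1)

# Deep in the negative definite regime (mu_d + sqrt(2 S sigma^2) < 0),
# diagonal stability guarantees that no positive X is destabilizing:
p_nd = frac_destabilized(S=20, mu_d=-2.5)
# Close to the stability boundary the destabilization probability can be nonzero:
p_marginal = frac_destabilized(S=10, mu_d=-1.05)
print(p_nd, p_marginal)
```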
It is evident (from equation~\ref{eq:fixedpoint}) that the components of $\vect{x}^\ast$ are not independent of the entries of the matrix $\mat{A}$. The presence of this correlation implies that, at least in principle, choosing a random vector $\vect{r}$ inside the feasibility domain to define $\mat{X}$ could produce different results from sampling independent entries from a specified species abundance distribution. In this section we repeat the simulations detailed in section~\ref{sec:dstab}, but instead of considering a random fixed point $\vect{x}^\ast$, we find the $\vect{x}^\ast$ determined by a random intrinsic growth rate vector $\vect{r}$ sampled uniformly from the feasibility domain. The most intuitive method for this simulation would consist of taking a random matrix $\mat{A}$, choosing a value $\vect{r}$ at random on the unit sphere, checking if it corresponds to a feasible fixed point using equation~\ref{eq:fixedpoint}, and finally computing the eigenvalues of $\mat{M} = \mat{X}\mat{A}$. However, as the number of species $S$ increases this method becomes practically infeasible. In fact, the fraction of intrinsic growth rate vectors $\vect{r}$ corresponding to a feasible solution decreases exponentially with $S$~\cite{Grilli2017}. If this intuitive method were employed, most of the simulation time would be spent trying to find vectors $\vect{r}$ inside the feasibility domain. On the other hand, since the relation between $\vect{r}$ and $\vect{x}^\ast$ (via equation~\ref{eq:fixedpoint}) is bijective, we can easily construct all the vectors $\vect{r}$ inside the feasibility domain by considering all the possible feasible solutions $\vect{x}^\ast$. In section~\ref{sec:dstab} we specified a distribution on the $\vect{x}^\ast$. This distribution translates to a non-trivial distribution on the $\vect{r}$ (that can be obtained from equation~\ref{eq:fixedpoint}).
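The exponential collapse of the feasible fraction is visible already in a small rejection-sampling experiment (our own sketch; the near-diagonal interaction matrix and sample sizes are illustrative choices, not those used in this work):

```python
import numpy as np

rng = np.random.default_rng(2)

def feasible_fraction(S, samples=20000):
    """Fraction of growth-rate vectors r, uniform on the unit sphere, whose
    fixed point x*_i = sum_j (A^-1)_ij r_j has all components positive."""
    sigma = 0.2 / np.sqrt(S)  # weak off-diagonal interactions (illustrative)
    A = sigma * rng.standard_normal((S, S)) - np.eye(S)
    Ainv = np.linalg.inv(A)
    r = rng.standard_normal((samples, S))
    r /= np.linalg.norm(r, axis=1, keepdims=True)
    x = r @ Ainv.T  # rows are the fixed points for each sampled r
    return float(np.mean(np.all(x > 0.0, axis=1)))

fracs = {S: feasible_fraction(S) for S in (2, 6, 10)}
print(fracs)  # roughly 2**-S for a near-diagonal A: rejection sampling is hopeless at large S
```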
In this section, we instead assume a distribution on the $\vect{r}$ and derive a corresponding distribution for the $\vect{x}^\ast$. For instance, if we assume that the vectors $\vect{r}$ are uniformly distributed on the unit sphere, the distribution of the $\vect{x}^\ast$ reads~\cite{Grilli2017} \begin{equation} P(\vect{x}^\ast | \mat{A} ) \propto | \det \mat{A} | \frac{ \delta \left( \|\vect{x}^\ast\|^2 - 1 \right) }{ \| \mat{A} \vect{x}^\ast \|^S } \ . \label{eq:probx} \end{equation} Sampling vectors $\vect{x}^\ast$ according to this distribution is equivalent to sampling vectors $\vect{r}$ uniformly from the feasibility domain. It is important to observe that when $\vect{x}^\ast$ is drawn according to this distribution, its entries are not independent and their densities depend on $\mat{A}$. Figure~\ref{fig:weightedprobabilitiess} shows the stability of $\mat{M} = \mat{X} \mat{A}$ when the diagonal entries of $\mat{X}$ are sampled from the probability distribution defined in equation~\ref{eq:probx}. Despite the presence of a correlation between the entries of $\mat{X}$ and $\mat{A}$, the result obtained in section~\ref{sec:dstab} is confirmed: the probability of observing a stable $\mat{A}$ but an unstable $\mat{M}$ decreases exponentially with $S$. If the interaction matrix $\mat{A}$ is stable, in the limit of large $S$, the set of intrinsic growth rates corresponding to feasible unstable solutions has measure zero. \begin{figure} \includegraphics[width = 0.75\textwidth] {Fig6.pdf} \caption{ These panels show the same quantity as Fig.~\ref{fig:unweightedprobabilities}. Instead of sampling $\mat{X}$ from a uniform distribution, we used the distribution of eq.~\ref{eq:probx}, which guarantees an unbiased sampling of the intrinsic growth rates of a Lotka-Volterra system.
This sampling method is in fact equivalent to sampling a random interaction matrix $\mat{A}$ and an intrinsic growth rate vector $\vect{r}$ inside the feasibility domain and checking the stability of the corresponding feasible fixed point. The exponential decay with increasing $S$ strongly suggests that the set of feasible unstable fixed points has measure zero for large, randomly interacting Lotka-Volterra systems. } \label{fig:weightedprobabilitiess} \end{figure} \section{Discussion} We explored the effect of population abundances on the stability of randomly interacting ecosystems. We derived an expression for the spectral density of a community matrix that explicitly includes the species abundance distribution. While the effect on the eigenvalues is highly heterogeneous and strongly depends on the specific choice of the abundance distribution, a remarkably simple message emerges for large randomly interacting ecosystems: the community matrix is stable if and only if the interaction matrix is stable. In other words, the abundances of species seem not to affect the sign of the eigenvalues. We further explored this intriguing result by explicitly estimating the probability of choosing a species abundance distribution leading to instability. While for finite systems this probability is always positive, it decreases exponentially with the number of species, confirming what was found by studying the spectrum of the community matrix analytically. Our results strongly suggest that large random matrices are D-stable \emph{almost surely}: the set of destabilizing positive diagonal matrices has measure zero. This fact has important consequences for Lotka-Volterra systems of equations, implying that feasible unstable fixed points are very unlikely. This result allows one to disentangle the problem of feasibility (how often are fixed points feasible?)
from the problem of stability (how often are fixed points stable?), justifying a posteriori the assumption made in many studies on feasibility~\cite{Rohr2014,Grilli2017} and expanding the validity of their results. The generalized Lotka-Volterra equations display a rich dynamical behavior, leading to limit cycles when two or more species are considered and chaos with three or more species~\cite{Smale1976,Takeuchi1996}. Both limit cycles and chaos require the existence of an unstable fixed point in the interior of the feasibility domain~\cite{Hofbauer1998}. Since the chance of observing a feasible unstable fixed point decays rapidly when the number of species increases, our results suggest that chaos and limit cycles are extremely rare in large random Lotka-Volterra systems. A stronger notion than D-stability is diagonal stability. While, for Lotka-Volterra systems, the former implies local asymptotic stability of any feasible solution, the latter implies global stability. We showed that large stable random matrices are almost surely D-stable. Under which conditions they are also diagonally stable is an important open problem. A sufficient condition for diagonal stability is negative definiteness~\cite{Grilli2017}. In the context of random matrices, negative definiteness is equivalent to the condition expressed in equation~\ref{eq:diagstabsuff}. The condition for negative definiteness should be compared to the condition for stability (see equation~\ref{eq:stab}). For large random matrices, two extreme scenarios are possible: negative definiteness is almost surely a necessary condition for diagonal stability, or stable random matrices are almost surely diagonally stable. It is also possible that the condition for diagonal stability is less trivial, corresponding to values of parameters between the conditions imposed by equations~\ref{eq:diagstabsuff} and \ref{eq:stab}.
It is even possible that a sharp condition for diagonal stability does not exist for random matrices and that, in the limit of large $S$, stable but not negative definite random matrices have a non-vanishing probability of being (or not being) diagonally stable. Our results shed light on one of the most controversial aspects of the classic result of May~\cite{May1972} and its extensions. Many authors~\cite{Roberts1974,Pimm1979,King1983,ReviewRMT,amnatjames,Jacquet2016} have argued that the unrealistic assumption of constant population abundances was a key choice in May's paper, suggesting that more realistic abundance distributions would have produced drastically different results. We showed that the conditions obtained in the original paper and in its extension~\cite{May1972,Allesina2012} are in fact valid for any species abundance distribution. In other words, the stability of fixed points (i.e., the stability of the community matrix) is determined only by the stability of the interaction matrix. \begin{acknowledgments} We thank A. Maritan, S. Tang and G. Barab\'as for comments and discussions. T.G. and S.A. were supported by NSF grant DEB-1148867. J.G. was supported by the Human Frontier Science Program. \end{acknowledgments} \bibliographystyle{nature}
\section{Introduction} Translationally cold samples of molecules offer interesting perspectives for high-resolution spectroscopy. The long measurement times that are possible with such samples and the reduced Doppler widths are ideally suited for precision measurements of transition frequencies in molecules, and such measurements are beginning to be relevant in the context of tests of the standard model of particle physics and some of its extensions \cite{steimle2014}. Precision measurements in few-electron, light molecules such as H${_2}^+$, H$_2$ and He${_2}^+$ are used as tests of \emph{ab initio} quantum-chemical calculations which aim at an exact solution of the Schr{\"o}dinger equation and a rigorous determination of relativistic and quantum-electrodynamics (QED) corrections \cite{korobov2006,korobov2008,piszczatowski2009,pachucki2010,korobov2014}. In these molecules, the velocity of the electrons is relatively low, and nonrelativistic quantum-electrodynamics turns out to be particularly successful. In this approach, the energy is expressed as a series expansion in powers of the fine-structure constant $\alpha$, which is a measure of the classical electron speed \begin{equation} E\left ( \alpha \right ) = \mathcal{E}^{(0)}+\alpha^2\mathcal{E}^{(2)}+\alpha^3\mathcal{E}^{(3)}+\alpha^4\mathcal{E}^{(4)}+\mathcal{O}\left (\alpha^5 \right ). \end{equation} The terms in different powers of $\alpha$ are associated with different contributions to the overall energy. $\mathcal{E}^{(0)}$ contains the Born-Oppenheimer energy including adiabatic and nonadiabatic interactions, while $\alpha^2\mathcal{E}^{(2)}$ and $\alpha^3\mathcal{E}^{(3)}$ represent the relativistic and leading-order QED corrections. Higher powers of $\alpha$ are associated with higher-order QED corrections. 
Recent calculations for the molecular hydrogen ion include relativistic and QED corrections up to terms proportional to $\alpha^6$ and report an accuracy of 2\,kHz for the first vibrational intervals of H${_2}^+$ \cite{korobov2008} and HD$^+$ \cite{korobov2014}. The most accurate calculations of the energies in H$_2$, HD, and D$_2$ include full corrections up to terms proportional to $\alpha^3$ as well as the dominant one-loop contribution of the $\alpha^4$ term \cite{piszczatowski2009,pachucki2010}. The reported uncertainties are less than 30\,MHz and the calculated and experimental results agree within this uncertainty \cite{liu2009,sprecher2011}. The best calculations of the rovibrational levels of He${_2}^+$ \cite{tung2012-1} have an accuracy of about 120\,MHz, sufficient to reproduce the energy-level structure measured in earlier experiments \cite{yu1987,yu1989-1}, although they do not include relativistic and radiative corrections. Few-electron diatomic molecules present experimental challenges for high-resolution studies of their spectra, and experimental data on their energy-level structures are scarce. Indeed, the symmetric isotopomers do not have a permanent electric dipole moment, which implies that these species do not have a pure rotational nor a rovibrational spectrum. Spectroscopic data on the molecular helium cation are limited to rotational and vibrational transitions in the asymmetric $^3$He$^4$He$^+$ isotopomer reported by Yu \emph{et al.} \cite{yu1987,yu1989-1} and microwave transitions between highly excited vibrational levels of the electronic ground state and the lowest vibrational levels of the first electronically excited states in He${_2}^+$ by Carrington \emph{et al.} \cite{carrington1995}. 
The only experimental data available on the low-lying rovibrational levels of $^4$He${_2}^+$ have been obtained by photoelectron spectroscopy \cite{raunhardt2008} and from the Rydberg spectrum of He$_2$ using Rydberg-series extrapolation techniques \cite{ginter1980,ginter1984,raunhardt2008,sprecher2014,jansen2015}. The work presented in this article is devoted to measurements of the energy-level structure of He${_2}^+$ by high-resolution spectroscopy of Rydberg states of He$_2$ and extrapolation of the Rydberg series \cite{raunhardt2008,sprecher2014,jansen2015}. The strength of our approach to study He${_2}^+$ relies on the facts that (1) its energy-level structure is obtained by extrapolation of allowed electronic transitions of He$_2$ in the ultraviolet (UV) range of the electromagnetic spectrum, (2) the initial state of He$_2$ we use, the metastable $a\,^3\Sigma_u^+$ state (called He$_2^*$ hereafter), can easily be generated in supersonic beams, (3) He$_2^*$ has a magnetic moment of two Bohr magnetons, which makes it possible to decelerate He$_2^*$ beams to low velocities in the laboratory reference frame using the technique of multistage Zeeman deceleration \cite{vanhaecke2007,motsch2014}, and (4) the electronic spectrum of He$_2^*$ is well known and information on low-lying Rydberg states and fine-structure intervals facilitates the interpretation of spectra of high Rydberg states. We believe that, in the long term, these advantages will enable us to reach a higher precision and accuracy than currently possible in H$_2$ \cite{liu2009,sprecher2011,sprecher2013}. The spectrum of He$_2$ has been investigated extensively since its first detection in 1913, independently by Curtis \cite{curtis1913} and Goldstein \cite{goldstein1913}.
Most information about the Rydberg states of He$_2$ has been obtained with classical emission grating spectroscopy in the extensive measurements of Ginter and coworkers \cite{ginter1965,ginter1965-1,ginter1965-2,ginter1966,ginter1968,ginter1970-1,brown1971,ginter1983,ginter1984}. In addition, low-lying Rydberg states have been investigated using infrared emission \cite{hepner1956} and absorption \cite{gloersen1965} spectroscopy, laser-induced fluorescence spectroscopy \cite{miller1979}, Fourier-transform emission spectroscopy \cite{rogers1988,herzberg1986,focsa1998,hosaki2004}, laser absorption spectroscopy \cite{solka1987,kawakita1985,lorents1989,hazell1995}, optical heterodyne concentration-modulation spectroscopy \cite{li2010}, and infrared emission spectroscopy from proton-irradiated cryogenic helium gas \cite{brooks1988,tokaryk1995}. Highly accurate measurements of the fine structure in the lowest rotational states of He$_2^*$ ($\nu''=0$) have been performed by Lichten \emph{et al.} using molecular-beam radio-frequency (r.f.) spectroscopy \cite{lichten1974,vierima1975,lichten1978}. Bjerre and coworkers \cite{lorents1989,kristensen1990,hazell1995} employed laser-r.f. double-resonance spectroscopy to extend the measurements of the fine-structure intervals to higher rotational and vibrational states. Focsa \emph{et al.} \cite{focsa1998} performed a global fit on infrared and r.f. data to obtain a consistent set of molecular constants for the six lowest excited electronic states of He$_2$. The structure of this article is as follows: After an overview of current knowledge on He$_2^*$, on the triplet Rydberg states of He$_2$, and on the ground state of He${_2}^+$ in section~\ref{sec_He2}, we summarize our experimental approach in section~\ref{sec_exp}. The experimental results are presented in section~\ref{sec_results} and a brief summary is provided in the conclusions section. 
\section{Energy levels of He$_2$ and He${_2}^+$: general considerations} \label{sec_He2} The van-der-Waals interaction between two helium atoms in their $^1$S$_0$ ground state is extremely weak and gives rise to a very shallow potential-energy well for the ground state of He$_2$, with a depth on the order of $10^{-3}$\,cm$^{-1}$ \cite{cencek2012} and a single bound rovibrational state with a mean internuclear distance of almost 5 nm~\cite{grisenti2000}. In contrast, He${_2}^+$ in its $X^{+}\,^2\Sigma_u^+$ ground state is covalently bound, with a well depth of almost 2.5\,eV \cite{tung2012-1}. The strongly bound nature of the He${_2}^+$ ground state implies the existence of singlet and triplet Rydberg series of He$_2$. Many Rydberg states are known for He$_2$ that all belong to series converging on the $X^{+}\,^2\Sigma_u^+$ electronic ground state of He${_2}^+$~\cite{ginter1970,huber1979,ginter1984}. With the exception of the single bound level of the electronic ground state, all bound states of He$_2$ are Rydberg states, so that He$_2$ can be regarded as a Rydberg molecule~\cite{herzberg1987}. \subsection{Metastable helium molecules He$_2^*$} The lowest Rydberg state of He$_2$, the $a\,^3\Sigma_u^+$ state, is metastable, with a calculated radiative lifetime of 18\,s~\cite{chabalowski1989}, because radiative decay to the ground electronic state is spin forbidden. Its long lifetime and the ease with which it can be produced in electric discharges make He$_2^*$ an ideal initial state to study the electronic spectrum and the photoionization of He$_2$, as demonstrated in the numerous studies cited in the introduction. The triplet nature of He$_2^*$ gives rise to a magnetic moment of two Bohr magnetons and thus to an electron-Zeeman effect that can be exploited to slow down supersonic beams of He$_2^*$ by multistage Zeeman deceleration \cite{motsch2014,jansen2015} (see also Sections~\ref{sec_exp} and~\ref{sec_results}).
The generalized Pauli principle requires the total wavefunction to be symmetric under exchange of the two bosonic $^4$He$^{2+}$ ($I=0$) nuclei, so that only rotational states for which the quantum number $N$ ($N$ is the quantum number associated with the total angular momentum excluding spin) is odd are allowed in states of $\Sigma_u^+$ symmetry, such as the $a\,^3\Sigma_u^+$ state of $^4$He$_2$ and the $X^+\,^2\Sigma_u^+$ state of He${_2}^+$. The rotational and fine structure in the vibrational ground state of He$_2^*$ can be described by an effective Hamiltonian~\cite{lefebvre-Brion2004,brown2003} appropriate to Hund's case (b) molecules in electronic states of $\Sigma$ symmetry \begin{equation} H = B_0 \vec{N}^2 - D_0 \vec{N}^4 + H_0 \vec{N}^6 + \tfrac{2}{3}\lambda_0 \left ( 3S_z^2-\vec{S}^2 \right ) + \gamma_0 \vec{S}\!\cdot\! \vec{N}, \label{eq:Hfs} \end{equation} where $B_0$ is the rotational constant, $D_0$ and $H_0$ are the quartic and sextic centrifugal distortion constants, $\lambda_0$ is the spin-spin interaction constant, $\gamma_0$ is the spin-rotation interaction constant, and $\vec{N}$ and $\vec{S}$ are the total angular momentum excluding spin and the total electron spin, respectively. The spin-spin and spin-rotation interactions split each rotational state $N$ into three fine-structure components with total angular momentum quantum number $J=N,N\pm 1$. Levels of the same $J$ value but $N$ values differing by 2 mix under the influence of the spin-spin interaction. Matrix elements of Eq.~\eqref{eq:Hfs} can be found in Ref.~\cite{brown2003}. The fine-structure splittings of the three lowest rotational states in He$_2$ ($a\,^3\Sigma_u^+$ $\nu=0$) are shown on the left-hand side of Fig.~\ref{fig:ZeemanShift}. \begin{figure}[bt] \centering \includegraphics[width=0.6\columnwidth]{./Fig01_ZeemanShift.pdf} \caption{Fine structure (left panel) and Zeeman effect (right panel) of the $N''=1,3$, and 5 states of He$_2^*$. 
The low-field-seeking magnetic sublevels of the $J''=N''+1$ manifold are shown in red and states of the $J''=N''$ and $J''=N''-1$ manifolds are shown in grey. \label{fig:ZeemanShift}} \end{figure} To treat the effects of an external magnetic field, one needs two additional terms in the Hamiltonian \begin{equation} H_\text{Z} = -\frac{g_\text{e}\mu_\text{B}}{\hbar}\vec{S}\!\cdot\!\vec{B}-\frac{g_\text{R}\mu_\text{B}}{\hbar}\vec{N}\!\cdot\!\vec{B}, \label{eq:Hz} \end{equation} where $g_\text{e}\approx -2.00232$ and $g_\text{R}$ denote the electron and rotational $g$-factors, respectively. The second term on the right-hand side of Eq.~\eqref{eq:Hz} represents the coupling of the overall rotation of the molecule to the external magnetic field. This coupling is four orders of magnitude weaker than the coupling of the electron spin to the field and plays a negligible role for the deceleration experiments described below. On the right-hand side of Fig.~\ref{fig:ZeemanShift}, we show the eigenvalues of the combined Hamiltonian as a function of the magnetic-field strength for the three lowest rotational states of He$_2^*$. The Zeeman effect in higher rotational states is almost identical to that for $N=5$ because the fine structure at zero field only changes slowly with $N$ at high $N$ values and the rotational spacings are much larger than the Zeeman shifts, even for magnetic-field strengths of several tesla \cite{motsch2014}. \subsection{He${_2}^+$ and He$_2$ Rydberg states} In the $X^+\,^2\Sigma_u^+$ electronic ground state, the rotational structure of He${_2}^+$ can also be described by Eq. (\ref{eq:Hfs}). However, only the spin-rotation interaction [last term on the right-hand side of Eq. (\ref{eq:Hfs})] contributes to the fine structure, because there is only one unpaired electron. The spin-rotation interaction in the $X^+\,^2\Sigma_u^+$ state of He${_2}^+$ splits each rotational state $N^+$ into two fine-structure states with $J^+=N^+\pm\tfrac{1}{2}$.
In the following, we denote quantum numbers and molecular constants of the $a\,^3\Sigma_u^+$ state of He$_2$ and the $X^+\,^2\Sigma_u^+$ state of He${_2}^+$ with double primed symbols and a ``$+$'' superscript, respectively, to avoid confusion. The splitting between the two $J^+$ states of a given $N^+$ value is given by $\gamma_0^+(N^+ +\tfrac{1}{2})$, where $\gamma_0^+$ is the spin-rotation coupling constant of the vibrational ground state of He${_2}^+$ ($X^+\,^2\Sigma_u^+$ $\nu^+=0$). Although $\gamma_0^+$ has not been measured for He${_2}^+$ yet, one can estimate it to be $\approx -3$\,MHz from the known spin-rotation coupling constant in $^3\text{He}^4\text{He}^+$ \cite{yu1989-1} and the fact that $\gamma_0^+$ scales as $\mu^{-1}$ \cite{brown1977}, where $\mu$ is the reduced mass of the molecule. To resolve the two fine-structure levels of the rotational states of He${_2}^+$ with $N^+$ values greater than 9 and 51, an experimental resolution of 25 and 150\,MHz, respectively, would be required. The energy-level structure of low-lying Rydberg states of He$_2$ is adequately described by Hund's angular-momentum coupling case (b) and thorough analyses of many of these low-lying states have been reported (see references cited in the introduction). Rydberg states of high principal quantum number are more conveniently described by Hund's angular-momentum coupling case (d). The Rydberg series of He$_2$ that can be accessed from He$_2^*$ by single-photon excitation are $np\sigma$ $^3\Sigma_g^+$ and $np\pi$ $^3\Pi_g^{\pm}$ in Hund's case (b) notation and $npN^+_N$ in Hund's case (d) notation. The two angular-momentum coupling schemes are related by a unitary angular-momentum frame transformation with which rotational-electronic interactions are treated in the realm of multichannel quantum-defect theory \cite{jungen2011}. 
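The resolution requirements quoted above follow directly from the splitting formula. A short Python check (note that $|\gamma_0^+|\approx 3$\,MHz is the order-of-magnitude estimate scaled from $^3$He$^4$He$^+$, not a measured value):

```python
# Spin-rotation splitting of rotational level N+ of He2+ (X+ 2Sigma_u+, v+ = 0):
# delta_nu = |gamma0+| * (N+ + 1/2), with |gamma0+| ~ 3 MHz (estimate, see text)
gamma0 = 3.0  # MHz, assumed magnitude of the spin-rotation constant

def splitting_mhz(n_plus):
    return gamma0 * (n_plus + 0.5)

print(splitting_mhz(9), splitting_mhz(51))  # 28.5 MHz and 154.5 MHz
```

Resolving the doublets for all $N^+$ values above 9 (above 51) therefore calls for a resolution of roughly 25\,MHz (150\,MHz), as stated in the text.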
The high-$n$ states we use to extrapolate the series and determine the rotational energy-level structure of He${_2}^+$ are best described in Hund's angular-momentum coupling case (d), as illustrated schematically in Fig.~\ref{fig:energy_levels}. Three $np$ series converge on each rotational level $N^+$ of He${_2}^+$, with $N$ values of $N^+$ and $N^+\pm 1$ ($\vec{N}= \vec{N^+}+\vec{\ell}$, with $\ell =1$ for $p$ series). The series with $N=N^+$ have pure Hund's case (b) $np\pi$ $^3\Pi_g^{-}$ character and those with $N=N^+\pm 1$ and $N>1$ have mixed $np\sigma$ $^3\Sigma_g^+$ and $np\pi$ $^3\Pi_g^{+}$ character. Because $N$, and not $N^+$, is the good quantum number, levels of the same $N$ value that converge on different rotational states of He${_2}^+$ interact, giving rise to spectral perturbations below and to rotational autoionization above the $N^+=N-1$ thresholds. These interactions, indicated by horizontal arrows in Fig.~\ref{fig:energy_levels}, couple series differing in $N^+$ by 2 and need to be accounted for in the extrapolation of the Rydberg series. The best way to do so is by using multichannel quantum-defect theory as implemented by Jungen~\cite{jungen2011}. In our determination of the lowest rotational interval of He${_2}^+$ we extrapolated the series using the quantum defect parameters of the $n$p triplet states of He$_2$ reported by Sprecher \emph{et al.}~\cite{sprecher2014}. 
If the positions of the rotational levels of the cation are not known with sufficient precision, as is the case for the series converging to the $N^+=11$ and 13 ionic levels discussed in Section~\ref{sec_results}, the series limits can be extrapolated in first approximation using Rydberg's formula \begin{equation} hc\tilde{\nu}_{n\ell}=E_{\text{I}}\left (\text{He}_2^* \right ) + E_\text{rv}\left (\text{He}{_2}^+ \right )-\frac{hc\mathcal{R}_{\text{He}_2}}{\left ( n-\delta_\ell\right )^2}, \end{equation} where $\tilde{\nu}_{n\ell}$ represents the spectral position of the Rydberg states of principal and orbital angular-momentum quantum numbers $n$ and $\ell$, respectively, and quantum defect $\delta_\ell$. The quantities $E_{\text{I}}(\text{He}_2^*)$, $E_\text{rv}(\text{He}{_2}^+)$ and $\mathcal{R}_{\text{He}_2}$ represent the adiabatic ionization energy of He$_2^*$, the rovibrational energy of the He${_2}^+$ ion core, and the mass-corrected Rydberg constant for He$_2$, respectively. Rydberg's formula adequately describes the noninteracting $npN^+_{N=N^+}$ series and also gives good extrapolation results for the other two series if states of very high $n$ values are used in the extrapolation. The spin-spin coupling of the triplet Rydberg states of He$_2$ scales with $n^{-3}$ and becomes negligible at high $n$ values. The spin-rotation interaction is primarily that of the ion core, so that the fine structure converges to the spin-rotation splitting of He${_2}^+$ at high values of $n$ (see Fig.~6 of Ref.~\cite{haase2015}). \begin{figure}[bt] \centering \includegraphics[width=0.7\columnwidth]{./Fig02_EnergyLevels.pdf} \caption{Energy-level diagram showing the rotational levels of He$_2^*$ and the triplet $n$p Rydberg states of He$_2$ that converge to the three lowest rotational levels of He${_2}^+$. The positions of pure rotational levels of the metastable state are marked by dashed horizontal lines. The spin-rotation fine structure is exaggerated for clarity. 
Solid and dashed arrows indicate optically allowed transitions and channel interactions, respectively. Rapidly autoionizing levels are drawn in blue. \label{fig:energy_levels}} \end{figure} The rotational selection rules for single-photon excitation from He$_2^*$ to $n$p$\sigma\,^3\Sigma_g^+$ and $n$p$\pi\,^3\Pi_g^\pm$ Rydberg states are given by $N-N''=0,\pm 1$. Combined with the $\Delta\ell=\pm 1$ selection rule for transitions between Rydberg states, the overall selection rule for single-photon excitation from the $a\,^3\Sigma_u^+$ state to Rydberg levels converging to the $X^+\,^2\Sigma_u^+$ state is given by $\Delta N=N^+-N''=0,\pm 2$. Transitions from He$_2^*$ to $n$p Rydberg levels can therefore be unambiguously labeled as $N''_{J''}\rightarrow n\text{p}N^+_N$. Transitions to Rydberg states with $\Delta N=0,-2$, and $+2$ are referred to as Q($N''$)-type, O($N''$)-type, and S($N''$)-type transitions, respectively. The Q-type transitions are far stronger than the O- and S-type transitions because the He${_2}^+$ ion core is left with an electron hole of mainly s character after excitation of He$_2$ to $n$p Rydberg states \cite{willitsch2005}. However, O-type transitions can gain intensity through rotational channel interactions between levels converging on different rotational states of the ion discussed above and indicated by the horizontal dashed arrows in Fig.~\ref{fig:energy_levels}. The observation of these O-type transitions in He$_2$ is essential for the determination of the relative positions of energy levels in both the metastable state and ion ground state. \section{Experimental setup and procedure} \label{sec_exp} A schematic view of the laser systems and the experimental setup is shown in Fig.~\ref{fig:setup}. 
In the experiments, we use both a pulsed UV laser system with a near Fourier-transform-limited bandwidth of 150\,MHz [depicted in Fig.~\ref{fig:setup}a)] and a continuous-wave (cw) single-mode UV laser with a bandwidth of 1.5\,MHz [depicted in Fig.~\ref{fig:setup}c)]. To generate the pulsed UV radiation, the cw output of a ring dye laser with a wavelength around 580\,nm is pulse amplified in dye cells pumped with the second harmonic of a neodymium-doped yttrium-aluminium-garnet (Nd:YAG) laser and frequency doubled in a beta-barium-borate (BBO) crystal. The pulse-amplified fundamental radiation is frequency calibrated to an accuracy of 20\,MHz (1$\sigma$) with a wave meter, and a fraction of the cw output of the ring dye laser is used to record the laser-induced-fluorescence spectrum of I$_2$, as described in Ref.~\cite{jansen2015}. The difference between the frequencies of the cw and pulse-amplified outputs of the ring-dye laser is used to quantify the effects of the frequency chirp arising in the pulse-amplification process. The experiments involving cw UV radiation made use of the cw output of the ring laser, which was frequency doubled in an external cavity. In the experiments involving cw radiation, only the relative frequency was measured with an etalon, with an accuracy of 5\,MHz. A supersonic beam of metastable helium molecules is produced in an electric discharge through an expansion of pure helium gas \cite{raunhardt2008} in a source chamber. The body of the valve can be cooled to temperatures of 77 and 10\,K, resulting in supersonic beams with velocities of approximately 1000 and 500\,m/s, respectively \cite{motsch2014}. The molecular beam is collimated with a skimmer before entering a second, differentially-pumped vacuum chamber that contains a 55-coil multistage Zeeman decelerator \cite{vanhaecke2007,motsch2014,wiederkehr2011}. The Zeeman decelerator exploits the Zeeman effect to manipulate the longitudinal velocity of the metastable helium molecules.
When a He$_2^*$ molecule approaches an inhomogeneous magnetic field, it experiences a force that depends on its effective dipole moment. In a magnetic field, the $J''=2$ fine-structure component of the $N''=1$ rotational ground state of He$_2^*$ is split into five magnetic sublevels that are labeled with their value of $M_{J''}$, the quantum number associated with the projection of the total angular momentum vector $J''$ on the magnetic-field axis (see Fig.~\ref{fig:ZeemanShift}). The energy of two of the five magnetic sublevels increases as the magnetic-field strength increases. Molecules in these two states experience a force toward regions of low magnetic-field strength and are therefore referred to as ``low-field seekers''. Analogously, molecules in a state that displays a decrease in energy with increasing magnetic-field strength experience a force toward regions of high magnetic-field strength and are referred to as ``high-field seekers''. When a low-field seeker approaches a magnetic field that is created by applying a current to a solenoid, part of its kinetic energy is converted into Zeeman energy and the molecule slows down. However, as soon as the low-field seeker crosses the region of maximum magnetic field, corresponding to the center of the solenoid, it is accelerated again. In order to prevent this reacceleration, the magnetic field is switched off abruptly. By repeating this process many times and choosing the switch-off time of the current in the solenoids so as to maintain a phase-stable deceleration \cite{wiederkehr2010}, the molecules can be decelerated to any desired velocity. Because the magnetic field has to be switched off before the molecule leaves the coil, the maximum velocity that can be manipulated by a single coil is determined by the ratio of the length of the coil to the switch-off time of the magnetic field.
For our experimental parameters (7.2\,mm, 8\,$\mu{s}$, 250\,A, and maximal on-axis field strength of 2\,T) this results in a maximum velocity around 700\,m/s. To obtain an initial velocity below this value, the valve body has to be cooled to 10\,K. The decelerator is segmented into three modules: two modules containing 12 coils and one module containing 31 coils. These modules are separated by pumping towers to guarantee a low background pressure in the decelerator. In addition, the modular design of the decelerator offers the flexibility to match the number of coils to the magnetic-moment-to-mass ratio of the species of interest. In order to maintain the magnetic quantization axis of the molecules as they traverse the region between the deceleration modules, the towers are equipped with solenoids as well, as explained by Wiederkehr \emph{et al.} \cite{wiederkehr2011}. \begin{figure*}[bt] \centering \includegraphics[width=1\columnwidth]{./Fig03_Setup.pdf} \caption{Schematic representation (not to scale) of the experimental setup. a) pulsed-amplified ring-dye laser, b) vacuum setup showing the discharge source, the Zeeman decelerator, and the magnetically shielded photoexcitation region, and c) cw doubling of the output of a ring dye laser using a cavity. Nd:YAG, neodymium-doped yttrium aluminum garnet; WM, wave meter; SMF, single mode fiber; BBO, beta barium borate crystal; MCP, microchannel plate detector. \label{fig:setup}} \end{figure*} After about 1\,m of flight, the molecules enter a third vacuum chamber that is used for photoexcitation and detection. Approximately 60\,mm beyond the last coil of the decelerator, the molecular beam is intersected at right angles with the UV laser beam that is used to drive transitions to $n$p Rydberg states. The excitation region is surrounded by a cylindrically symmetric stack of electrodes for the application of ionization and extraction electric fields. 
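The single-coil velocity limit quoted above can be checked with a one-line, order-of-magnitude estimate using only the coil length and switch-off time given in the text (the quoted $\sim$700\,m/s is somewhat lower because the field must vanish while the molecule is still well inside the coil):

```python
# Order-of-magnitude check of the single-coil velocity limit:
# the field must be switched off before the molecule leaves the coil,
# so v_max ~ coil length / switch-off time (parameters from the text).
coil_length = 7.2e-3       # m
switch_off_time = 8e-6     # s
v_max = coil_length / switch_off_time
print(f"v_max ~ {v_max:.0f} m/s")  # 900 m/s, same order as the quoted ~700 m/s
```

This also explains why the 500\,m/s beam from the 10\,K valve is required: the 1000\,m/s beam from the 77\,K valve exceeds what a single coil can address.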
A weak dc electric field is applied to the stack during photoexcitation in order to reduce stray electric fields to below 1\,mV/cm. The stray field is determined by recording spectra in the presence of different dc electric fields and fitting the observed Stark shifts to a quadratic polynomial, as illustrated in Fig. \ref{fig:strayfieldcomp}. In order to suppress stray magnetic fields, two concentric mu-metal tubes are used to shield the excitation region. For molecules decelerated to 120\,m/s, the current in the last coil is switched off 0.5\,ms before the molecules reach the excitation volume. The Rydberg states are ionized for detection by the application of a pulsed electric field which is also used to extract the ions toward a microchannel-plate (MCP) detector. A small electric field is applied to the stack shortly after photoexcitation but before the ionization pulse. This discrimination pulse separates prompt ions, produced by direct ionization or rapid autoionization, from ions produced by pulsed field ionization, based on their different arrival times on the MCP detector. The discrimination pulse also induces the field ionization of Rydberg states with $n\gtrsim200$, so that these states contribute to the prompt-ion signal. \begin{figure}[bt] \centering \includegraphics[width=.4\columnwidth]{./Fig04_StrayFieldCompensation.pdf} \caption{Spectra of the $1_1\rightarrow 123\text{p}1_2$ transition (left-hand side) and $1_0\rightarrow 124\text{p}1_1$ transition (right-hand side) of He$_2^*$ recorded in the presence of different dc electric fields. The vertical shift of the baseline of each spectrum corresponds to the applied field in units of mV/cm. The diamonds show the transition frequencies and the dashed lines represent fits using quadratic polynomials. In this way, the stray field could be reduced to below 1\,mV/cm. 
\label{fig:strayfieldcomp}} \end{figure} \section{Results} \label{sec_results} The procedure for obtaining the intervals between successive rotational states in He${_2}^+$ relies on the determination of the relative convergence limits of Rydberg series excited by Q-type and O-type transitions. This procedure has been used to extract the interval $\tilde{\nu}_{31}^+$ between the $N^+=1$ and $N^+=3$ rotational levels of He${_2}^+$ in Ref. \cite{jansen2015} and is briefly repeated here. $\tilde{\nu}_{31}^+$ corresponds to the difference $\tilde{\nu}_{33}-\tilde{\nu}_{13}$ between the convergence limits $\tilde{\nu}_{33}$ of the $3\rightarrow n\text{p}3_3$ series and $\tilde{\nu}_{13}$ of the $3\rightarrow n\text{p}1_2$ series (see Fig.~\ref{fig:energy_levels}). This interval can also be determined from the convergence limits $\tilde{\nu}_{11}$ of the $1\rightarrow n\text{p}1_{0-2}$ series and $\tilde{\nu}_{33}$ of the $3\rightarrow n\text{p}3_3$ series in combination with the interval $\tilde{\nu}_{31}''$ between the $N''=1$ and $N''=3$ rotational levels of He$_2^*$ and is given by $\tilde{\nu}_{31}^+=\tilde{\nu}_{31}''+\tilde{\nu}_{33}-\tilde{\nu}_{11}$. In order to derive the $\tilde{\nu}_{31}''$ interval, differences between transition wave numbers from the $N''=1$ and $N''=3$ rotational levels to any member of the $n$p$1_2$ series can be taken. This procedure is illustrated in Fig.~\ref{fig:combination_differences}, which shows the $1\rightarrow n\text{p}1_{1,2}$ and $3\rightarrow n\text{p}1_{2}$ Rydberg series around $n=128$ \cite{jansen2015}. To account for the triplet structure of the initial states, three Gaussian profiles have been fitted to the observed line shapes. The energy splitting between the line centers is fixed to the known fine-structure intervals \cite{lichten1974} and the relative intensities are assumed to correspond to the degeneracy factors $2J''+1$.
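Numerically, the two routes to $\tilde{\nu}_{31}^+$ can be cross-checked directly with the convergence limits and the metastable-state interval quoted in this section (all values in cm$^{-1}$):

```python
# Convergence limits and intervals quoted in the text (cm^-1)
nu_11 = 34301.20585    # limit of the 1 -> np1_{0-2} series
nu_13 = 34225.39234    # limit of the 3 -> np1_2 series
nu_33 = 34296.33037    # limit of the 3 -> np3_3 series
nu_31_meta = 75.8137   # N'' = 1 to 3 interval of He2*

direct = nu_33 - nu_13                 # from the two series out of N'' = 3
indirect = nu_31_meta + nu_33 - nu_11  # via the metastable-state interval
print(f"{direct:.5f}  {indirect:.5f}")  # both close to 70.9380 cm^-1
```

Both routes reproduce the lowest rotational interval of He${_2}^+$ to within the quoted uncertainty.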
The interval between the $N''=1$ and 3 rotational levels of He$_2^*$ was determined to be 75.8137(4)\,cm$^{-1}$ \cite{jansen2015}, in agreement with the value of 75.8129(3)\,cm$^{-1}$ derived from the rotational Hamiltonian and molecular constants reported by Focsa \emph{et al.}~\cite{focsa1998}. \begin{figure}[htb] \centering \includegraphics[width=0.7\columnwidth]{./Fig05_CombinationDifferences.pdf} \caption{Determination of the $\tilde{\nu}_{31}''$ interval between the $N''=1$ and 3 rotational levels of He$_2^*$ from combination differences of transitions converging on the same $n\text{p}1_2$ Rydberg level. Vertical dashed lines indicate the centers of gravity of the transitions. \label{fig:combination_differences}} \end{figure} The convergence limits of the Rydberg series were derived by extrapolating transitions to Rydberg states of principal quantum numbers in the range $n=95-115$. This choice represents a compromise between states that are high enough in $n$ for the uncertainties in the quantum defects to have a negligible effect on the extrapolated energies and low enough not to be strongly affected by the Stark effect. The ionization thresholds $\tilde{\nu}_{11}, \tilde{\nu}_{13}$ and $\tilde{\nu}_{33}$ were determined to be 34301.20585(10), 34225.39234(10), and 34296.33037(7)\,cm$^{-1}$, respectively, with a systematic uncertainty of $1.4\times 10^{-3}$\,cm$^{-1}$ \cite{jansen2015}. The lowest rotational interval in He${_2}^+$ was thus determined to be 70.9380(6)\,cm$^{-1}$ with a total uncertainty of $6\times 10^{-4}$\,cm$^{-1}$ or 18\,MHz, which is less than the estimated uncertainty ($\approx 4\times 10^{-3}$\,cm$^{-1}$) of the most recent and precise theoretical value \cite{tung2012-1}. Higher rotational intervals of He${_2}^+$ can be determined by recording Rydberg series that converge on higher rotational states of the ion.
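The extrapolation of a series limit can be sketched in a few lines. The Rydberg constant and quantum defect below are assumed placeholder values (not the values used in the actual MQDT analysis), and the line positions are synthetic; the point is only that, with the quantum defect fixed, each high-$n$ line yields an estimate of the limit:

```python
import numpy as np

R = 109736.3               # cm^-1, approximate mass-corrected Rydberg constant (assumed)
delta = 0.07               # assumed quantum defect of the series (illustrative)
nu_inf_true = 34301.20585  # series limit used to generate the synthetic lines

n = np.arange(95, 116)                       # the n range used in the text
rng = np.random.default_rng(0)
nu_n = nu_inf_true - R / (n - delta) ** 2    # Rydberg formula for the term values
nu_n = nu_n + rng.normal(0.0, 5e-5, n.size)  # synthetic measurement noise, cm^-1

# Each measured line gives an estimate of the series limit; average them
nu_inf_est = np.mean(nu_n + R / (n - delta) ** 2)
print(f"{nu_inf_est:.5f} cm^-1")
```

Averaging over the $\sim$20 series members suppresses the statistical uncertainty of the extrapolated limit well below that of a single line, consistent with the quoted threshold uncertainties of about $10^{-4}$\,cm$^{-1}$.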
As an example, spectra of the $13\rightarrow n\text{p}11_{12}$ Rydberg series in the range $n\approx 60-67$ and the $13\rightarrow n\text{p}13_{13,14}$ Rydberg series in the range $n\approx 95-115$ are shown in Fig.~\ref{fig:Q13_O13}(a) and (b), respectively. The spectra also contain spectral features that correspond to transitions belonging to the $1\rightarrow n\text{p}1_{1,2}$ through $11\rightarrow n\text{p}11_{11,12}$ series. The energy interval between the $N^+=11$ and 13 rotational states in He${_2}^+$ can be estimated by taking the difference between the series limits extrapolated using the Rydberg formula. Setting the uncertainty equal to the experimental linewidth, we determine this interval to be 346.988(6)\,cm$^{-1}$, as compared to the theoretical value of 346.977(4)\,cm$^{-1}$ \cite{tung2012-1}. Extrapolation of the $13\rightarrow n\text{p}11_{12}$ and $13\rightarrow n\text{p}13_{13,14}$ Rydberg series using MQDT should result in a more accurate determination of the $N^+=11$-to-13 rotational interval in He${_2}^+$. However, an accurate determination by MQDT requires precise knowledge of the higher rotational levels of He${_2}^+$, which is not available yet but can be obtained in a global analysis of all measured O-type and Q-type transitions to Rydberg states. Such an analysis is currently being performed, including series converging to rotational levels of He${_2}^+$ as high as $N^+=21$ \cite{semeria2016}. \begin{figure}[bt] \centering \includegraphics[width=0.7\columnwidth]{./Fig06_Q13andO13.pdf} \caption{Rydberg spectra of the (a) $13\rightarrow n\text{p}11_{12}$ series of He$_2$ in the range $n\approx 60-67$ and (b) $13\rightarrow n\text{p}13_{13,14}$ series in the range $n\approx 95-115$. The spectra also contain contributions from transitions belonging to the $1\rightarrow n\text{p}1_{1,2}$ through $11\rightarrow n\text{p}11_{11,12}$ Rydberg series.
\label{fig:Q13_O13}} \end{figure} An important asset of the experiment is the use of a multistage Zeeman decelerator to reduce the velocity of the beam of metastable helium molecules and, consequently, systematic Doppler \textit{shifts} resulting from the velocity components parallel to the laser beam. In Fig.~\ref{fig:dec_undec}, traces (a) and (b) show part of the Rydberg spectrum of He$_2^*$ in the vicinity of the $1\rightarrow 94\text{p}1_{1,2}$ transition that was obtained using an undecelerated beam with a velocity of 1000\,m/s and a decelerated beam with a velocity of 120\,m/s, respectively. Many lines that are present in the spectrum recorded with the undecelerated beam are not observed in the spectrum obtained with the decelerated beam. This reduction in spectral congestion is a consequence of the spin-rotational state selectivity of the deceleration process. As expected from the Zeeman map of Fig.~\ref{fig:ZeemanShift}, the $J''=N''$ component is completely rejected from the beam, and transitions originating from this level are absent after deceleration. In addition, the $J''=2$ component carries approximately half the intensity of the $J''=0$ component for the Q(1)-type transitions and in the case of the Q(3)-type transitions, the $J''=4$ component is hardly visible in the spectrum and only the $J''=2$ component is observed. From the Zeeman map of He$_2^*$ shown in Fig.~\ref{fig:ZeemanShift}, one would expect that the relative intensities of the $J''=0$ and 2, and $J''=2$ and 4 fine-structure states would reflect the respective number of low-field seeking states, that is, one would expect ratios of 1:2 and 5:2, respectively, in contrast to the ratios of 2:1 and $>$20:1 observed experimentally. The apparent loss of molecules in the $J''=N''+1$ fine-structure component during the deceleration process can be understood in terms of a redistribution over all $M_{J''}$ states in regions of near-zero magnetic-field strength in the pumping towers. 
Although the pumping towers are equipped with coils that are pulsed as the molecules pass, the generated magnetic field is not strong enough in this case to maintain the magnetic quantization axis over the whole tower region, resulting in nonadiabatic losses. Assuming a complete redistribution over all $M_{J''}$ states every time the molecules traverse a tower, one expects the population of the $J''=2$ and 4 components to be suppressed by factors of $(5/2)^\tau$ and $(9/2)^\tau$, respectively, where $\tau$ is the number of towers the molecules traverse. The spectra in Fig.~\ref{fig:dec_undec} have been normalized with respect to the $J''=0$ fine-structure component of the transition to the $94\text{p}1_2$ Rydberg state, and vertical bars indicate the expected intensities of the fine-structure components with respect to the $J''=N''-1$ state, assuming a complete redistribution over $M_{J''}$ states for the decelerated sample with $\tau=2$. Trace (c) in Fig.~\ref{fig:dec_undec} displays the same spectral region but was obtained using a molecular beam that was decelerated from 500 to 335\,m/s using a single 31-stage deceleration module without any tower ($\tau=0$). The relative intensities of the fine-structure components in this spectrum exactly match the relative numbers of low-field-seeking states. The observed changes of intensities resulting from $M_{J^{\prime\prime}}$ redistribution processes confirm the validity of the assumption of a statistical redistribution among the near-degenerate magnetic sublevels at each tower. These $M_{J^{\prime\prime}}$ redistribution processes actually turn out to be a powerful tool to assign transitions to specific initial $N^{\prime\prime}, J^{\prime\prime}$ levels.
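The suppression argument can be made quantitative in a few lines. Assuming the population fully randomizes over the $2J''+1$ magnetic sublevels at each tower, only the low-field-seeking fraction survives each passage:

```python
def lfs_survival(n_lfs, n_sublevels, n_towers):
    """Surviving low-field-seeking fraction after complete M_J''
    redistribution at each of n_towers pumping towers."""
    return (n_lfs / n_sublevels) ** n_towers

# J'' = 2 of N'' = 1: 5 sublevels, of which 2 are low-field seeking.
# Without redistribution the J''=0 : J''=2 ratio would be 1 : 2; with
# tau = 2 towers the J''=2 intensity is suppressed by (5/2)^tau.
tau = 2
rel_J2 = 2 * lfs_survival(2, 5, tau)
print(f"J''=2 relative intensity: {rel_J2:.2f}")  # 0.32, i.e. a ratio of ~3:1
```

The predicted $\sim$3:1 ratio is of the same order as the observed 2:1, and for $J''=4$ (2 low-field seekers out of 9 sublevels) the suppression is strong enough to make the line nearly vanish, as observed.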
\begin{figure}[bt] \centering \includegraphics[width=0.7\columnwidth]{./Fig07_FineStructureIntensities.pdf} \caption{Comparison between relative intensities of rotational fine structure components for spectra obtained from nondecelerated samples of He$_2^*$ [1000\,m/s, trace (a)], a spectrum obtained from a sample decelerated from 500 to 120\,m/s using three deceleration modules and two pumping towers [trace (b)], and a spectrum obtained from a sample decelerated from 500 to 335\,m/s using a single 31-stage deceleration module without any tower [trace (c)]. Vertical bars indicate expected relative line intensities based on the Zeeman energy of the different $M_{J''}$ states and assuming a complete redistribution among near-degenerate $M_{J^{\prime\prime}}$ levels. Note that for trace (c) the decelerated and undecelerated molecules are not completely separated spatially, giving rise to small features from the $J''=N''$ fine-structure components. \label{fig:dec_undec}} \end{figure} Figure~\ref{fig:dec_undec} also indicates that no residual Doppler \textit{shifts} persist within the statistical uncertainty of the measurements. In principle, a reduction of the beam velocity results in a reduction of the Doppler \textit{width} of spectral lines and should therefore allow for a more precise determination of observed spectral positions. However, the pulse-amplification process results in a spectral resolution that is limited by the pulse width of the pumping laser. Assuming a Fourier-transform-limited Gaussian pulse with a pulse duration of 4\,ns, the frequency-doubled output of the laser has a bandwidth of about 150\,MHz. Because this bandwidth is larger than the Doppler width, no reduction of the line widths was observed for decelerated molecules. In order to observe an effect of the beam velocity on the Doppler width of the measured transitions, the bandwidth of the laser system has to be reduced below the Doppler linewidth. 
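The expected gain from deceleration follows from the linear velocity dependence of the residual first-order Doppler width, $\Delta\nu \approx v\,\theta/\lambda$, for a beam of velocity $v$ crossed at right angles by laser light of wavelength $\lambda$ with an effective divergence (or misalignment) angle $\theta$. The angle below is a hypothetical value chosen for illustration, not a measured beam parameter:

```python
# Residual Doppler width for a collimated beam crossed at right angles
# by the laser: width ~ v * theta / lambda.
wavelength = 290e-9   # m, approximate UV wavelength (assumption)
theta = 14.5e-3       # rad, assumed effective divergence angle (illustrative)

for v in (1000, 500):  # beam velocities for 77 K and 10 K valve temperatures
    width_MHz = v * theta / wavelength / 1e6
    print(f"v = {v:4d} m/s -> Doppler width ~ {width_MHz:.0f} MHz")
```

With these assumed numbers the width scales from about 50 to 25\,MHz when the beam slows from 1000 to 500\,m/s; both values lie well below the 150\,MHz pulsed-laser bandwidth, which is why a cw laser is needed to profit from the deceleration.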
Because of the inherent limitations of a pulsed-laser system, this can best be achieved by replacing the pulsed laser with a cw laser, as shown in Fig.~\ref{fig:setup}c). As an example, the $1_2\rightarrow 51\text{p}1_2$ transition, obtained using the cw laser system presented in Fig.~\ref{fig:setup}c), is shown in Fig.~\ref{fig:cw_pulsed}; for these measurements, no currents were pulsed through the deceleration coils. The red triangles and blue diamonds represent measurements with the valve body kept at a temperature of 77 and 10\,K, respectively. The red and blue curves are Gaussian fits to the data and have a full width at half maximum (FWHM) of 50 and 25\,MHz, respectively. The decrease of the linewidth by a factor of two reflects the reduction in velocity from 1000 to 500\,m/s for the lower temperature. It is expected that decelerating the molecules to velocities around 100\,m/s will result in a further reduction of the observed linewidths, and work in this direction is currently in progress. \begin{figure}[bt] \centering \includegraphics[width=0.6\columnwidth]{./Fig08_CWvsPulsed.pdf} \caption{Comparison between measurements of the $1_2\rightarrow51\text{p}1_2$ transition using the cw ring dye laser setup of Fig.~\ref{fig:setup}c) with valve temperatures of 77 (solid triangles) and 10\,K (solid diamonds), respectively. The solid red and blue lines represent Gaussian fits to the data points with FWHMs of 50 and 25\,MHz, respectively. \label{fig:cw_pulsed}} \end{figure} \section{Conclusions} High-resolution spectroscopy of high Rydberg states of He$_2$ with a pulsed UV laser of near-Fourier-transform-limited bandwidth (150\,MHz) was used to determine energy intervals between rotational states of He${_2}^+$: the $N^+=1$ to $N^+=3$ interval with a precision of 18\,MHz and the $N^+=11$ to $N^+=13$ interval with a precision of 150\,MHz.
Both intervals are smaller than predicted by the most recent \emph{ab initio} calculations \cite{tung2012-1}, which do not include QED corrections to the level energies. Multistage Zeeman deceleration was used to generate slow beams of translationally cold metastable He$_2$ molecules (He$_2^*$) and contributed to a reduction of the uncertainties in the experimental transition frequencies by reducing Doppler shifts. A complete redistribution among near-degenerate magnetic sublevels of the spin-rotational levels of metastable He$_2$ in regions of near-zero magnetic fields located in the pumping sections of the decelerator was shown to affect the relative populations of the $N^{\prime\prime},J^{\prime\prime}$ spin-rotational states of He$_2^*$, and, by doing so, facilitated the spectral assignments. Replacing the pulsed UV laser by a cw UV laser enabled us to reduce the line widths of the observed transitions from 150 MHz to 50 and 25 MHz in experiments carried out with He$_2^*$ beams having average velocities of 1000 and 500 m/s, respectively. The combination of multistage Zeeman deceleration of He$_2^*$ with cw-laser excitation has the potential to improve the precision and accuracy of the present results by more than an order of magnitude, as recently demonstrated using an ultracold sample of cesium atoms in Ref.~\cite{deiglmayr2015}. \section*{Acknowledgment} We thank Hansj\"urg Schmutz and Josef Agner for their expert technical assistance. This work is supported financially by the Swiss National Science Foundation under Project No. 200020-159848 and the NCCR QSIT. P. J. acknowledges ETH Zurich for support through an ETH fellowship. \section*{References}
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}} \IEEEPARstart{C}{lustering} analysis is one of the important topics in machine learning \cite{jordan2015machine} and has been widely applied in many fields, including data mining \cite{doring2006data}, pattern recognition \cite{bezdek2013pattern}, and image processing \cite{Rezaee2000multiresolution}. Clustering, an unsupervised learning approach, aims to divide a data set into multiple clusters by a similarity measure, such that the data points in the same cluster are similar. In general, clustering methods are divided into hard and soft clustering schemes \cite{Amit2017A,jain2010data}; the representative clustering algorithms are C-Means \cite{Lloyd1982Least} and Fuzzy C-Means (FCM) \cite{bezdek1984fcm}. The hard clustering scheme, in which a sample belongs to exactly one cluster, assigns the membership grades between the samples and the clusters as 0 or 1. The hard clustering scheme is very simple and efficient. Inevitably, however, it discards all distance information except that of the closest center in the update of the cluster centers, which makes the algorithm more likely to fall into a bad local minimum. The soft clustering scheme, in which a sample does not exclusively belong to a single cluster, allows the membership grades to vary between 0 and 1. The soft clustering scheme has better clustering quality because of its flexibility and robustness \cite{zbian}. The C-Means algorithm (Lloyd's algorithm) \cite{Lloyd1982Least} is the most representative method in the hard clustering scheme. However, computing all sample-center distances in C-Means is expensive. Thus, many improved methods have been proposed.
The algorithms of \cite{elkan2003using, ding2015yinyang} speed up C-Means by applying triangle inequalities, which effectively avoid unnecessary distance calculations and achieve higher efficiency. Another trick to deal with this challenge is the region division of clusters; related research can be found in \cite{Lingras2004Interval, ZHANG2019three, Multi2022, Effective2022}. Recently, ball C-Means \cite{ball2020kmeans} was proposed to improve the efficiency of C-Means by reducing the number of sample-center distance computations. Notably, the concept of neighbor clusters and the partition of each cluster are designed so that multiple novel schemes attain the same performance in less time. As one of the most typical soft clustering methods, Fuzzy C-Means (FCM) \cite{bezdek1984fcm} divides $n$ samples into $c$ clusters by a membership grade matrix $\mathbf{U}$, in which $u_{ij}$ represents the grade to which the $j$th sample belongs to the $i$th cluster. FCM is successful in finding and describing overlapped clusters that are ubiquitous in complex real-world data (see \cite{doring2006data, Rezaee2000multiresolution, Coletta2012} and the references therein). However, computing all sample-center distances also leads to a high computational cost. Meanwhile, all samples are involved in the update of all centers through the memberships, which leads to the low efficiency of FCM in the clustering process (see \cite{xu2019robust, Zhou2020A}). In theory, the convergence rate theorem for FCM \cite{hathaway1988recent} shows that FCM converges linearly to a local minimum. Meanwhile, based on the analysis of C-Means \cite{kieffer1982exponential, du1999centroidal, kanungo2000analysis}, it can be found that when C-Means is close to a local minimum, its convergence rate drops from an exponential rate to a linear rate.
Since both C-Means and FCM are alternating optimization (AO) algorithms, the convergence rate of FCM likewise drops when FCM is close to a local minimum; moreover, the convergence rate of FCM is slower than that of C-Means in the clustering process. Many researchers have managed to tackle this issue based on new updates of the centers. Mitra \emph{et al.} \cite{Mitra2006Rough} designed a Rough-Fuzzy C-Means (RFCM) clustering algorithm, which absorbs the advantages of fuzzy sets and rough sets and enhances the robustness and efficiency of fuzzy clustering. Roy and Maji \cite{Roy2020Medical} proposed a spatially constrained Rough-Fuzzy C-Means (sRFCM), which wisely combines the advantages of rough-fuzzy clustering and local neighborhood information. Furthermore, each cluster is divided by sRFCM into a possibilistic core region and a probabilistic boundary region, which improves the performance of the algorithm. Shadowed sets in the characterization of rough-fuzzy clustering (SRFCM) \cite{zhou2011Shadowed} were introduced to improve the clustering quality and efficiency by automatically optimizing, based on the concept of shadowed sets, the threshold parameters that affect the lower bound and boundary region of each cluster. Similar research can be seen in \cite{Shadowed2022}, \cite{Particle2022}, \cite{Criterion2022}, \cite{Hybrid2022}, \cite{M3W2022}, \cite{Zhou2018Rough} and the references therein. Unfortunately, unreasonable partition thresholds result in undesired clustering results, so the partition parameters need to be optimized in every iteration; inevitably, the computational cost of this parameter selection is very high for the region partition. To avoid it, much research improves the performance of FCM by constraining the update of the memberships so that the centers can be updated toward their target positions more efficiently \cite{Zhao2021, Nie2022, Scalable2022, Novel2022Gu, Semisupervised2022Wang}.
Recently, the membership scaling Fuzzy C-Means (MSFCM) clustering algorithm \cite{Zhou2020A} was presented to accelerate the convergence of FCM while maintaining high clustering quality: the in-cluster and out-of-cluster samples are identified by a triangle inequality, and the membership grades are then scaled to boost the effect of the in-cluster samples and weaken the effect of the out-of-cluster samples in the clustering process. Although the above-mentioned FCM variants usually improve the efficiency and effectiveness of the algorithms, they ignore the low efficiency in the mid-to-late stage of the clustering process. There are three reasons: 1) the convergence rate of an alternating optimization (AO) algorithm drops when the algorithm is in the mid-to-late stage \cite{du1999centroidal}; 2) the FCM variants still need to do a full inverse-distance weighting \cite{xu2019robust}; 3) all samples are still involved in the update of all centers \cite{Zhou2020A} (see the detailed analysis in Subsection \ref{subsec3-2}). In this study, we first delve into the relationship between the samples and the centers, and further investigate the characteristics of the clustering process by dividing it into an early stage and a mid-to-late stage. Stemming from those findings, we propose a new accelerated FCM clustering algorithm called \textbf{AMFCM} (\textbf{a}ffinity filtering and \textbf{m}embership scaling based \textbf{FCM}). In the proposed algorithm, a new affinity filtering technique is put forward to precisely identify the complete set of non-affinity centers of each sample (see \textbf{Definition} \ref{definition1} in Section \ref{sec3}), and a new membership scaling method is suggested to accelerate the whole convergence process of the algorithm. The main contributions of this paper are as follows: \begin{enumerate} \item We design a new affinity filtering scheme, which is composed of $c$ triangle inequalities, to discover all sample-center affinities.
Compared with the previous triangle-inequality-based methods in \cite{ding2015yinyang} and \cite{Zhou2020A}, the designed scheme can identify the complete non-affinity center set of each sample more precisely with very low computational complexity. Compared with the works in \cite{Lingras2004Interval, Mitra2006Rough, zhou2011Shadowed, Zhou2018Rough}, the new affinity filtering scheme is parameter-free. \item We propose a new membership scaling scheme to accelerate FCM convergence, especially in the mid-to-late stage. The new membership scaling scheme sets the membership grades to 0 in the update of the non-affinity centers, which eliminates the effect of a sample on the update of its non-affinity centers and reduces the computational burden of fuzzy clustering, while maintaining the original update of the remaining centers for each sample. \item By integrating those schemes with FCM clustering, we propose a new accelerated clustering algorithm called \textbf{AMFCM}, which is the first work that focuses on accelerating the whole convergence process of FCM-type clustering algorithms. \item Several experimental results on synthetic and real-world data sets illustrate that the proposed AMFCM outperforms the state-of-the-art algorithms in efficiency. For example, AMFCM reduces the number of iterations of FCM by 80\% on average. \end{enumerate} The paper is organized as follows. Section \ref{sec2} presents some preliminaries including notations, C-Means, FCM, and the related clustering algorithms. The research motivation is described in Section \ref{sec3} and a new algorithm is presented in Section \ref{sec4}. The experimental results with discussion are reported in Section \ref{sec5} and Section \ref{sec6} concludes the paper. \section{preliminaries}\label{sec2} In this section, some related clustering algorithms are briefly relisted for the convenience of the following discussion.
\subsection{Notations} Let a data set be $\mathbf{X}=\{\mathbf{x}_1,\mathbf{x}_2, \cdots, \mathbf{x}_n\}$ with $\mathbf{x}_{j}\in\mathbb{R}^p$, and the cluster centers be $\mathbf{V}=[\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_c]$, where $\mathbf{v}_{i}\in\mathbb{R}^{p}$ is the centroid of the cluster $\mathcal{C}_{i}$ for $i=1,2,...,c$. $t$ is the number of iterations. The distances between $\mathbf{x}_{j}$ and the cluster centers $\mathbf{V}$ are $d_{ij}=\|\mathbf{x}_{j}-\mathbf{v}_{i}\| (i=1,\cdots,c)$ and they are rearranged in ascending order as $D_{j}^{(1)}\leq D_{j}^{(2)}\leq\cdots\leq D_{j}^{(c)}$. Displacement of the center $\mathbf{v}_{i}$ after one update is denoted by $\delta_{i}^{(t)}=d(\mathbf{v}_{i}^{(t+1)},\mathbf{v}_{i}^{(t)})$. The membership grade matrix is denoted by $\mathbf{U}=[u_{ij}]\in\mathbb{R}^{c\times n}$, where $u_{ij}$ represents the grade of $j$th sample belonging to $i$th cluster. \subsection{C-Means}\label{subsec2-1} C-Means clustering \cite{Lloyd1982Least}, as the most representative algorithm in the hard clustering, aims to find the $c$ partitions of $\mathbf{X}$ by minimizing the within-cluster sum of the distance from each sample to its nearest center. The underlying objective function is expressed as follows: \begin{equation}\label{eq_1} \begin{aligned} \min_{\mathbf{U},\mathbf{V}} J_{\textbf{Hard}}(\mathbf{U},\mathbf{V})=&\sum_{i=1}^c\sum_{j=1}^n u_{ij}\|\mathbf{x}_{j}-\mathbf{v}_{i}\|^2,\\ s.t. \quad &\sum_{i=1}^c u_{ij}=1, u_{ij}= 0 \ \textrm{or} \ 1, \end{aligned} \end{equation} To solve problem \eqref{eq_1}, which is NP-hard, C-Means \cite{Lloyd1982Least} consists of two steps: the assignment step assigns each sample to its closest cluster and the update step renews each of the $c$ cluster centers with the centroid of the samples assigned to that cluster. The algorithm repeats those two steps until convergence. 
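As a minimal illustration (a sketch, not code from the cited references), the two-step Lloyd iteration for problem \eqref{eq_1} can be written in NumPy as follows; the initialization is naive and there is no empty-cluster handling:

```python
import numpy as np

def c_means(X, c, n_iter=100):
    """Minimal Lloyd iteration: assignment step + update step.
    Deterministic spread-out initialization; illustrative only."""
    V = X[:: max(len(X) // c, 1)][:c].copy()   # initial centers
    for _ in range(n_iter):
        # assignment step: each sample goes to its nearest center
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)  # n x c
        labels = d.argmin(axis=1)
        # update step: each center moves to the centroid of its cluster
        V_new = np.array([X[labels == i].mean(axis=0) for i in range(c)])
        if np.allclose(V_new, V):              # converged
            break
        V = V_new
    return V, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(10, 0.3, (20, 2))])
V, labels = c_means(X, 2)
print(np.sort(V[:, 0]).round(1))  # centers near 0 and 10
```

On this toy data the assignment step recovers the two blobs in the first pass, so the iteration converges almost immediately; with overlapping clusters many more iterations are needed, which is the cost that the later sections address.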
\subsection{Fuzzy C-Means}\label{subsec2-2} FCM clustering \cite{bezdek1984fcm}, which is a soft clustering method, allows a sample to have membership grades in all clusters instead of exclusively belonging to one single cluster. FCM partitions $\mathbf{X}$ into $c$ clusters by the cluster centers. The objective function is expressed as follows: \begin{equation}\label{eq_fcm} \begin{aligned} \min_{\mathbf{U},\mathbf{V}} J_{\textbf{Fuzzy}}(\mathbf{U},\mathbf{V})=&\sum_{i=1}^c\sum_{j=1}^n u_{ij}^m\|\mathbf{x}_{j}-\mathbf{v}_{i}\|^2,\\ s.t. \quad &\sum_{i=1}^c u_{ij}=1, u_{ij}\geq 0, \end{aligned} \end{equation} with the fuzziness weighting exponent $m>1$. To optimize problem \eqref{eq_fcm}, FCM usually initializes $\mathbf{U}^{(0)}$, which is a randomly initialized partition matrix, and updates $\mathbf{V}$ and $\mathbf{U}$ iteratively by \begin{align} \mathbf{v}^{(t+1)}_{i}&=\frac{\sum\limits_{j=1}^{n}\left(u^{(t)}_{ij}\right)^{m}\mathbf{x}_{j}} {\sum\limits_{j=1}^{n}\left(u^{(t)}_{ij}\right)^{m}},\label{eq_2}\\ u^{(t+1)}_{ij}&=\left[\sum_{k=1}^c\left(\frac{\|\mathbf{x}_{j}-\mathbf{v}^{(t+1)}_{i}\|}{\|\mathbf{x}_{j} -\mathbf{v}^{(t+1)}_{k}\|}\right)^{\frac{2}{m-1}}\right]^{-1},\label{eq_3} \end{align} until convergence. \subsection{Membership Scaling Fuzzy C-Means}\label{subsec2-3} The membership scaling Fuzzy C-Means (MSFCM) clustering algorithm \cite{Zhou2020A} accelerates the clustering convergence and maintains high clustering quality by using a triangle inequality and membership scaling. Specifically, the triangle inequality \cite{elkan2003using,ding2015yinyang}, which is a tool for mining the sample-center affinities, is as follows: \begin{lemma}\label{lem1} A sample $\mathbf{x}_{j}$ cannot change its nearest cluster after one update, if \begin{equation}\label{eq_4} D_{j}^{(2)}-\max\limits_{1\le i\le c}\delta_{i}\ge D_{j}^{(1)}+\delta_{I_{j}^{*}}, \end{equation} where $I_{j}^{*}=\arg{\min\limits_{1\leq i\leq c}\{d_{ij}\}}$.
\end{lemma} The samples whose closeness relationships do not change after one update are filtered out by the triangle inequality \eqref{eq_4}, and $Q$ denotes the index set of the filtered samples. The membership grades of the filtered samples are scaled to accelerate the convergence of FCM. Therefore, the new update scheme for $\mathbf{U}^{(t+1)}$ is as follows: \begin{align} u^{(t+1)}_{i,j}&=\left\{ \begin{array}{ll} M_j^{(t)}, & j\in Q^{(t)}, i={I^{*}_j} ^{(t)}, \\ \beta_j^{(t)} u^{(t)}_{i,j}, & j\in Q^{(t)}, i\neq{{I^{*}_j}^{(t)}}, \\ u^{(t)}_{i,j}, & j\notin Q^{(t)}, 1\le i\le c, \\ \end{array} \right.\label{eq_new u} \end{align} where $M_j^{(t)}=\left[1+(c-1)\left({D_{j}^{(1)}}/{D_{j}^{(c)}}\right)^{\frac{2}{m-1}}\right]^{-1}$, $\beta_j^{(t)}=\tfrac{1-M_{j}^{(t)}}{1-u^{(t)}_{I_{j}^{*},j}}.$ The update of $\mathbf{V}$ in MSFCM is still given by Eq. \eqref{eq_2}. By scaling the memberships, MSFCM \cite{Zhou2020A} reduces the participation of the filtered samples in the update of their non-affinity centers and increases their participation in the update of the remaining centers. Therefore, MSFCM has good properties, such as fewer iterations, lower running time, and higher clustering quality. \section{Motivation}\label{sec3} In this section, the relationship between the samples and the centers is first described by the following new \textbf{Definitions} \ref{definition1} and \ref{definition2}. \begin{definition} A cluster center $\mathbf{v}_{i}$ is the \textbf{non-affinity center} of a sample $\mathbf{x}_{j}$, if $\mathbf{v}_{i}$ cannot be the nearest center of $\mathbf{x}_{j}$ in the next iteration. Let $\mathcal{P}_{j}$ be the set of the non-affinity centers of $\mathbf{x}_{j}, j=1,2,...,n$. \label{definition1} \end{definition} \begin{definition} A sample $\mathbf{x}_{j}$ is the \textbf{non-affinity sample} of a cluster center $\mathbf{v}_{i}$, if $\mathbf{x}_{j}$ cannot belong to $\mathbf{v}_{i}$ in the next iteration.
Let $\mathcal{C}_{i-}$ be the set of the non-affinity samples of $\mathbf{v}_{i}$, and $\overline{\mathcal{C}_{i-}}$ be the set of the remaining samples, $i=1,2,...,c$. \label{definition2} \end{definition} Note that if a sample $\mathbf{x}_{j}$ is the non-affinity sample of $\mathbf{v}_{i}$, then $\mathbf{v}_{i}$ is the non-affinity center of $\mathbf{x}_{j}$ ($i \in \mathcal{P}_{j}$ if $j \in \mathcal{C}_{i-}$), and vice versa. The current nearest center of $\mathbf{x}_{j}$ cannot be the non-affinity center of $\mathbf{x}_{j}$ after one iteration, so $|\mathcal{P}_{j}|\leq c-1$. Similarly, $\mathbf{x}_{j}$ cannot be the non-affinity sample of its current nearest center. To improve the convergence speed and clustering quality, the hierarchy of information granules \cite{Bargiela2008, YaoGranular} suggests reducing the contributions of the samples in the update of their non-affinity centers and increasing their contributions in the update of the remaining centers. However, two problems need to be solved. One is how to obtain the set of the non-affinity centers of each sample efficiently. The other is how to formulate the modification benchmarks for the contributions of the samples in the update of the centers. These two problems motivate the proposal of two new schemes. \subsection{Searching Non-Affinity Centers by A New Affinity Filtering}\label{subsec3-1} In MSFCM, the affinity filtering scheme \eqref{eq_4} can produce the non-affinity centers of the samples at low computational cost. For any sample $\mathbf{x}_{j}$, \eqref{eq_4} can identify the complete set of non-affinity centers of $\mathbf{x}_{j}$ if $|\mathcal{P}_{j}|=0$ or $c-1$. However, the set of the non-affinity centers identified by \eqref{eq_4} is incomplete when $|\mathcal{P}_{j}|\neq 0, c-1$. To illustrate this situation, a geometric interpretation is shown in Fig. \ref{fig4}.
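To make the filter concrete, the test \eqref{eq_4} can be sketched in NumPy as follows (an illustrative sketch; `d` holds the current sample–center distances and `delta` the center displacements):

```python
import numpy as np

def msfcm_filter(d, delta):
    """Boolean mask of samples whose nearest center provably cannot
    change after one update (triangle-inequality test of Lemma 1).

    d     : (n, c) array, distances from samples to current centers
    delta : (c,)   array, displacement of each center after one update
    """
    order = np.argsort(d, axis=1)
    nearest = order[:, 0]                    # index I_j^*
    rows = np.arange(d.shape[0])
    D1 = d[rows, nearest]                    # D_j^(1)
    D2 = d[rows, order[:, 1]]                # D_j^(2)
    # Test: D_j^(2) - max_i delta_i >= D_j^(1) + delta_{I_j^*}
    return D2 - delta.max() >= D1 + delta[nearest]
```

Note that the single lower bound `D2 - delta.max()` is what makes this test coarse, which is exactly the limitation analyzed next.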
\begin{figure}[htp] \centering \subfloat[]{\label{fig4a}\includegraphics[width=0.24\textwidth]{figs/fig-pro1}}~~ \subfloat[]{\label{fig4b}\includegraphics[width=0.24\textwidth]{figs/fig-pro3}}\\ \subfloat[]{\label{fig4c}\includegraphics[width=0.24\textwidth]{figs/fig-pro2}}~~ \subfloat[]{\label{fig4d}\includegraphics[width=0.24\textwidth]{figs/fig-pro4}}\\ \caption{Geometric explanation of identifying the non-affinity centers of $\mathbf{x}_{j}$ by the affinity filtering scheme \eqref{eq_4}. For $\mathbf{x}_{j}$, centers $\mathbf{v}_{1}$, $\mathbf{v}_{2}$, $\mathbf{v}_{3}$ are its nearest, second-nearest, and third-nearest centers, respectively. The brown star points are the possible positions of $\mathbf{v}_{i}$ in the next iteration. The radii of the black and gray dotted circles are $\delta_{i}$ and $\max_{1 \leq i \leq c} \delta_{i}$, respectively. In case (\ref{fig4a}), $\mathbf{v}_{2}$ and $\mathbf{v}_{3}$ are identified as the complete set of non-affinity centers of $\mathbf{x}_{j}$ by \eqref{eq_4}, where $|\mathcal{P}_{j}|=2$. In cases (\ref{fig4b}) and (\ref{fig4c}), the set of the non-affinity centers of $\mathbf{x}_{j}$ is considered empty by \eqref{eq_4}, although $\mathbf{v}_{2}$ is a non-affinity center of $\mathbf{x}_{j}$ in case (\ref{fig4b}) and $\mathbf{v}_{3}$ is a non-affinity center of $\mathbf{x}_{j}$ in case (\ref{fig4c}); the complete set cannot be accurately identified by \eqref{eq_4} when $|\mathcal{P}_{j}|\neq 0, 2$. In case (\ref{fig4d}), the set of the non-affinity centers of $\mathbf{x}_{j}$ identified by \eqref{eq_4} is complete because $|\mathcal{P}_{j}|=0$.}\label{fig4} \end{figure} In Fig. \ref{fig4}, centers $\mathbf{v}_{1}$, $\mathbf{v}_{2}$, $\mathbf{v}_{3}$ are the nearest, second-nearest, and third-nearest centers of $\mathbf{x}_{j}$, respectively. Here, $c=3$.
The radius of the red dotted arc is the upper bound $d_{1,j}+\delta_{1}$ on the distance from $\mathbf{x}_{j}$ to $\mathbf{v}_{1}$, the radius of the green dotted arc is the lower bound $d_{2,j}-\max_{1\le i\le 3}\delta_{i}$ on the distance to $\mathbf{v}_{2}$, and the radius of the blue dotted arc is the lower bound $d_{3,j}-\max_{1\le i\le 3}\delta_{i}$ on the distance to $\mathbf{v}_{3}$. Fig. \ref{fig4a} shows the situation where $|\mathcal{P}_{j}|=2$ for $\mathbf{x}_{j}$. In this case, \eqref{eq_4} ensures that $\mathbf{v}_{2}$ and $\mathbf{v}_{3}$ form the complete set of non-affinity centers of $\mathbf{x}_{j}$. In Fig. \ref{fig4b} and \ref{fig4c}, the set of the non-affinity centers of $\mathbf{x}_{j}$ is considered empty by \eqref{eq_4}. Actually, $\mathbf{v}_{2}$ is a non-affinity center of $\mathbf{x}_{j}$ in Fig. \ref{fig4b}, and $\mathbf{v}_{3}$ is a non-affinity center of $\mathbf{x}_{j}$ in Fig. \ref{fig4c}, so the complete set of the non-affinity centers of $\mathbf{x}_{j}$ cannot be accurately identified by \eqref{eq_4} when $|\mathcal{P}_{j}|\neq 0, 2$. In Fig. \ref{fig4d}, the set of the non-affinity centers of $\mathbf{x}_{j}$ identified by \eqref{eq_4} is complete because $|\mathcal{P}_{j}|=0$. For $c \geq 3$, the affinity filtering scheme \eqref{eq_4} cannot identify the complete set of non-affinity centers of a sample $\mathbf{x}_{j}$ when $|\mathcal{P}_{j}|\neq 0, c-1$, because \eqref{eq_4} only employs the lower bound of the second closest center of $\mathbf{x}_{j}$, $D_{j}^{(2)}-\max_{1\le i\le c}\delta_{i}$, to screen all samples-centers affinities. More specifically, even the affinity between $\mathbf{x}_{j}$ and its second closest center is determined inaccurately by this single lower bound: with the same lower bound of $\mathbf{v}_{2}$, the affinity between $\mathbf{x}_{j}$ and $\mathbf{v}_{2}$ differs according to \textbf{Definition} \ref{definition1} in Fig. \ref{fig4b} and Fig. \ref{fig4c}.
Therefore, the precise identification of all samples-centers affinities should involve the $c$ new lower bounds formed by replacing $\max_{1\le i\le c}\delta_{i}$ with $\delta_{i}$ for $i=1,2,...,c$. In this paper, a new affinity filtering scheme composed of $c$ new triangle inequalities is proposed; the lower bound of the $i$th triangle inequality is $d_{ij}-\delta_{i}$ for $i=1,2,...,c$, as shown in \textbf{Lemma} \ref{lem2}. The new affinity filtering scheme searches the complete set of the non-affinity centers of each sample $\mathbf{x}_{j}$ more precisely in any situation, where $0 \leq|\mathcal{P}_{j}|\leq c-1$. To the best of our knowledge, no previous work captures the complete set of the non-affinity centers of each sample in this way. \subsection{Refining the Convergence of FCM}\label{subsec3-2} In this part, six real-world data sets are clustered by FCM with random initializations. The details of the data sets are given in Section \ref{sec5}. The iterative fuzzy objective value is used to analyze the convergence of FCM. The curves of the convergence of FCM over the iterations $t$ are shown in Fig. \ref{fig1}, where the y-axis is the ratio of the objective value to its initial value. \begin{figure}[htp] \centering \includegraphics[width=0.49\textwidth]{figs/Example1-new-1008}~~~~~~ \caption{Plots of $\frac{J_{\textbf{Fuzzy}}(\mathbf{U}^{(t)}, \mathbf{V}^{(t)})}{J_{\textbf{Fuzzy}}(\mathbf{U}^{(0)}, \mathbf{V}^{(0)})}$ over the iterations $t$ on six real-world data sets with FCM. The initialization is selected randomly for each data set. The plots clearly show that the clustering process of FCM can be divided into stages [A] and [B], where [A] represents the early stage and [B] represents the mid-to-late stage.}\label{fig1} \end{figure} In Fig.
\ref{fig1}, it can be observed that the curves of the fuzzy objectives can be divided into stages [A] and [B], where [A] represents the early stage and [B] represents the mid-to-late stage. At the beginning of the iterations, with random initializations, the assignments of the samples are provisional and uncertain. Therefore, the membership grades are adjusted substantially toward good clustering results, which enlarges the displacement of the cluster centers and reduces the objective value quickly, as shown in stage [A]. However, by Eq. \eqref{eq_2}, the samples in other clusters continuously interfere with the update of each center. Once FCM enters stage [B], its convergence efficiency drops rapidly \cite{du1999centroidal}, as can be seen from the long, flat tails of the objective curves in Fig. \ref{fig1}. From this observation, it can be inferred that in stage [B] the cluster centers are near their target positions and move in small steps, so the assignment of most samples does not change, except for the boundary samples between clusters. To examine this further, the update of $\mathbf{v}_{i}$ in FCM is rewritten as $\mathbf{v}_{i}=(\sum_{j}u_{ij}^{m})^{-1}(\sum_{j\in \mathcal{C}_{i}}u_{ij}^{m}\mathbf{x}_{j}+\sum_{j\notin\mathcal{C}_{i}}u_{ij}^{m}\mathbf{x}_{j})$. Clearly, $(\sum_{j}u_{ij}^{m})^{-1}\sum_{j\in \mathcal{C}_{i}}u_{ij}^{m}\mathbf{x}_{j}$ promotes $\mathbf{v}_{i}$ to approach its final position, whereas $(\sum_{j}u_{ij}^{m})^{-1} \sum_{j\notin\mathcal{C}_{i}}u_{ij}^{m}\mathbf{x}_{j}$ prevents $\mathbf{v}_{i}$ from approaching its final position, which causes the cluster centers to keep fluctuating slightly around their final targets in stage [B]. In this case, although FCM has not converged, the assignment of most samples has been determined. In this situation, the fuzzy membership grades are redundant for assessing the uncertainty of the samples whose assignment no longer changes.
Those membership grades are not only expensive to calculate, but, more seriously, their update does not improve the clustering quality. To illustrate this problem, a geometric explanation of stage [B] of the convergence process of FCM is presented in Fig. \ref{fig2}. \begin{figure}[h] \centering \includegraphics[width=0.35\textwidth]{figs/fig-FCM-BP1} \caption{Geometric explanation of stage [B] of the convergence process of FCM. The thin red and blue dotted lines are the bisectors between the clusters in these two iterations, respectively. After one iteration in stage [B], the centers still change slightly around their final targets. Note that this fluctuation of the centers, which hardly affects the convergence result, occurs many times in stage [B]. In this case, the assignment of most samples remains unchanged, and the assignment of some boundary samples, near the thin red and blue dotted lines, is vulnerable to the slight changes of the centers.}\label{fig2} \end{figure} In Fig. \ref{fig2}, the assignment of most samples remains unchanged in stage [B]. However, the centers still change slightly around their final targets after each iteration because of the contributions of the samples in other clusters in the update of the centers. This fluctuation, which hardly affects the convergence result, occurs many times, and only the assignment of some boundary samples, near the thin red and blue dotted lines, is vulnerable to the slight changes of the centers. This repeated fluctuation is the main reason for the slow convergence of stage [B].
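The stage [A]/[B] behaviour described above can be reproduced by iterating the two FCM updates and logging the center displacement $\|\mathbf{V}^{(t+1)}-\mathbf{V}^{(t)}\|$ per iteration (a minimal sketch with illustrative data, seed, and an optional explicit initialization `V0`):

```python
import numpy as np

def fcm(X, c, m=2.0, eps=1e-6, max_iter=300, V0=None, seed=0):
    """Standard FCM; also returns the per-iteration displacement ||V^(t+1) - V^(t)||."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), size=c, replace=False)].copy() if V0 is None else V0.astype(float).copy()
    moves = []
    for _ in range(max_iter):
        # Distances to the current centers (small constant avoids division by zero).
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12   # (n, c)
        # Membership update: u_ij proportional to d_ij^(-2/(m-1)), normalized over i.
        w = d ** (-2.0 / (m - 1.0))
        U = w / w.sum(axis=1, keepdims=True)
        # Center update: weighted mean of the samples with weights u_ij^m.
        Um = U ** m
        V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]
        moves.append(float(np.linalg.norm(V_new - V)))
        V = V_new
        if moves[-1] < eps:
            break
    return V, moves
```

On data like D1, the sequence `moves` typically shows a short steep phase (stage [A]) followed by a long, slowly decaying tail (stage [B]).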
Regardless of the defects of the alternating optimization (AO) framework, the reasons for the slow convergence of FCM can be summarized in the following two points: \begin{itemize} \item The contributions of the samples in other clusters in the update of the cluster centers continuously delay all stages of the convergence process of FCM; \item The fuzzy membership grades are redundant for assessing the uncertainty of the samples with the unchanging assignment in stage [B]. \end{itemize} As analyzed above, early stopping in stage [B] can improve the efficiency of the algorithm. An ideal way to do this would be to pick an appropriate convergence threshold on $\|\mathbf{V}^{(t+1)}-\mathbf{V}^{(t)}\|$. In fact, selecting such a threshold for early stopping in stage [B] is an extremely difficult task in unsupervised learning. To illustrate this problem, $\|\mathbf{V}^{(t+1)}-\mathbf{V}^{(t)}\|$ over the iterations $t$ on six real-world data sets with FCM is shown in Fig. \ref{fig10}, where the green point marks the beginning of stage [B] of FCM. It is found that, for any data set, the appropriate convergence threshold for early stopping in stage [B] is difficult to determine without prior information. Therefore, this paper adopts a new feasible approach to effectively accelerate the convergence process of FCM. \begin{figure}[htp] \centering \includegraphics[width=0.49\textwidth]{figs/fig-new-V} \caption{Plots of $\|\mathbf{V}^{(t+1)}-\mathbf{V}^{(t)}\|$ over the iterations $t$ on six real-world data sets with FCM. The plots show that the appropriate convergence threshold for early stopping in stage [B] is difficult to determine without prior information.}\label{fig10} \end{figure} This paper aims to design an FCM-type clustering algorithm that achieves a good trade-off between clustering efficiency and quality.
According to \textbf{Definitions} \ref{definition1} and \ref{definition2}, the update of $\mathbf{v}_{i}$ can be equivalently written as $\mathbf{v}_{i}=(\sum_{j}u_{ij}^{m})^{-1}(\sum_{j\in \mathcal{C}_{i-}}u_{ij}^{m}\mathbf{x}_{j}+\sum_{j\in\overline{\mathcal{C}_{i-}}}u_{ij}^{m}\mathbf{x}_{j})$. Obviously, throughout the convergence stages of FCM, the contribution of the samples in $\mathcal{C}_{i-}$, $(\sum_{j}u_{ij}^{m})^{-1}(\sum_{j\in \mathcal{C}_{i-}}u_{ij}^{m}\mathbf{x}_{j})$, should be eliminated from the update of $\mathbf{v}_{i}$ for the efficiency and quality of the algorithm. This elimination is particularly significant for efficiency in stage [B], where $\sum_{j\in \mathcal{C}_{i-}}u_{ij}^{m}\mathbf{x}_{j}$ has a large proportion because the assignment of most samples in other clusters does not change. In contrast, the contributions of the samples in $\overline{\mathcal{C}_{i-}}$, $(\sum_{j}u_{ij}^{m})^{-1}(\sum_{j\in \overline{\mathcal{C}_{i-}}}u_{ij}^{m}\mathbf{x}_{j})$, should be preserved in the update of $\mathbf{v}_{i}$, which promotes convergence and ensures clustering quality. Accordingly, fuzzy membership grades remain meaningful for the samples in $\overline{\mathcal{C}_{i-}}$ in the update of $\mathbf{v}_{i}$. The details are presented in Section \ref{sec4}. \section{Accelerated FCM Based on New Affinity Filtering and Membership Scaling}\label{sec4} In this section, FCM based on new affinity filtering and membership scaling (AMFCM) is proposed to accelerate the whole convergence process of FCM and to improve clustering performance. In the proposed AMFCM, a new affinity filtering method is designed to obtain the complete set of the non-affinity information more precisely through a new set of triangle inequalities. Then, a new membership scaling scheme improves the efficiency of FCM convergence in stages [A] and [B].
The operation of the new membership scaling update is two-fold: eliminating the contributions of the samples in the update of their non-affinity centers and enhancing the contributions of the samples in the update of the remaining centers. \subsection{New Affinity Filtering Scheme}\label{subsec4_1} In the modified C-Means algorithms \cite{elkan2003using,ding2015yinyang}, the triangle inequality that appears in the affinity filtering scheme \eqref{eq_4} is applied to identify all samples-centers affinities. However, this triangle inequality is insufficient for FCM according to the analysis in Section \ref{subsec3-1}. Therefore, a new lemma, better suited to FCM, is presented as follows: \begin{lemma}\label{lem2} A cluster center $\mathbf{v}_{i}$ is the non-affinity center of a sample $\mathbf{x}_{j}$ after one update, if \begin{equation}\label{eq6} d_{ij}^{(t)}-\delta_{i}^{(t)}\geq {D_{j}^{(1)}}^{(t)}+\delta_{I_{j}^{*}}^{(t)}, \quad i\in \{1,2,...,c\}, \end{equation} where $I_{j}^{*}=\arg{\min\limits_{1\leq i\leq c}\{d_{ij}^{(t)}\}}$. \end{lemma} \begin{proof} By the triangle inequality, for $i\in \{1,2,...,c\}$ we have \begin{align*} d_{ij}^{(t)}-\delta_{i}^{(t)} = \|\mathbf{x}_{j}-\mathbf{v}_{i}^{(t)}\|-\|\mathbf{v}_{i}^{(t+1)}-\mathbf{v}_{i}^{(t)}\| \leq \|\mathbf{x}_{j}-\mathbf{v}_{i}^{(t+1)}\| = d_{ij}^{(t+1)}. \end{align*} Similarly, \begin{align*} {D_{j}^{(1)}}^{(t)}+\delta_{I_{j}^{*}}^{(t)}=\|\mathbf{x}_{j}-\mathbf{v}_{I_{j}^{*}}^{(t)}\|+\|\mathbf{v}_{I_{j}^{*}}^{(t)}-\mathbf{v}_{I_{j}^{*}}^{(t+1)}\| \geq d_{I_{j}^{*}j}^{(t+1)}. \end{align*} If $d_{ij}^{(t)}-\delta_{i}^{(t)}\geq {D_{j}^{(1)}}^{(t)}+\delta_{I_{j}^{*}}^{(t)}$ holds, then $d_{ij}^{(t+1)} \geq d_{I_{j}^{*}j}^{(t+1)}$. Therefore, $\mathbf{v}_{i}$ cannot be the nearest center of $\mathbf{x}_{j}$ after one update, and by \textbf{Definition} \ref{definition1}, $\mathbf{v}_{i}$ is the non-affinity center of $\mathbf{x}_{j}$.
\end{proof} Compared with \textbf{Lemma} \ref{lem1}, which filters the complete set of the non-affinity centers only when $|\mathcal{P}_{j}|=0$ or $c-1$ for each sample $\mathbf{x}_{j}$, \textbf{Lemma} \ref{lem2} provides a more efficient and precise affinity filtering scheme covering any situation, where $0 \leq|\mathcal{P}_{j}|\leq c-1$. According to \textbf{Lemma} \ref{lem2}, the new affinity filtering scheme, which consists of $c$ triangle inequalities of the form \eqref{eq6}, searches the complete set of the non-affinity centers of each sample $\mathbf{x}_{j}$ more precisely by employing the lower bounds of all centers. Furthermore, the new affinity filtering scheme is parameter-free: no extra thresholds are needed to determine the non-affinity centers of the samples. A geometric interpretation of the new affinity filtering scheme \eqref{eq6} is given in \textbf{Appendix} \ref{sec7-1}. The computational cost of the new affinity filtering condition is very low, because the additional calculations concern only the displacements of the centers. Next, a new membership scaling scheme is proposed in Subsection \ref{subsec4-2} to accelerate the whole convergence process of FCM. \subsection{New Membership Scaling Scheme}\label{subsec4-2} As mentioned in Subsection \ref{subsec3-2}, the update of $\mathbf{v}_{i}$ can be written as $(\sum_{j}u_{ij}^{m})^{-1}(\sum_{j\in \mathcal{C}_{i-}}u_{ij}^{m}\mathbf{x}_{j}+\sum_{j\in\overline{\mathcal{C}_{i-}}}u_{ij}^{m}\mathbf{x}_{j})$.
Based on the new affinity filtering scheme \eqref{eq6}, which autonomously identifies the complete set of the non-affinity centers of each sample $\mathbf{x}_{j}$, $\mathcal{P}_{j}$, per iteration at little computational cost and without extra thresholds, it follows that $\mathbf{x}_{j} \in \mathcal{C}_{i-}$ if $i \in \mathcal{P}_{j}$; otherwise, $\mathbf{x}_{j} \in \overline{\mathcal{C}_{i-}}$, $i=1,2,...,c$. In this paper, the contributions of the samples in the update of their non-affinity centers are eliminated and the contributions of the samples in the update of the remaining centers are increased through the membership grades, for the clustering efficiency and quality of the algorithm. Therefore, a new membership scaling technique is designed to integrate all samples-centers affinities. In the new membership scaling scheme, the membership grades are set to 0 to eliminate the contributions of the samples in the update of their non-affinity centers, which reduces the computational burden of the fuzzy clustering, while the fuzzy membership grades are still applied for the remaining centers. The new membership scaling scheme for $\tilde{\mathbf{U}}^{(t)}$ is \begin{align} \tilde{u}^{(t)}_{ij}&=\left\{ \begin{array}{ll} \left[\sum_{k\notin\mathcal{P}_{j}^{(t)}}\left(\frac{d_{ij}^{(t)}}{d_{kj}^{(t)}}\right)^{\frac{2}{m-1}}\right]^{-1}, & i\notin \mathcal{P}_{j}^{(t)}, \\[1cm] \qquad \qquad \qquad 0 , & i \in \mathcal{P}_{j}^{(t)}. \\ \end{array} \right.\label{eq_modified u} \end{align} Note that the update of $\tilde{u}_{ij}$ is equivalent to the normalization of the membership grades, $\tilde{u}^{(t)}_{ij}=u^{(t)}_{ij}/\sum_{k\notin\mathcal{P}_{j}^{(t)}} u^{(t)}_{kj}$, for $i\notin \mathcal{P}_{j}$. The update of $\mathbf{V}$ follows FCM (Eq. \eqref{eq_2}). This scheme is simple and efficient: the required distances between the samples and the current centers, $d_{ij}^{(t)}$, have already been calculated.
Therefore, the additional calculations concern only the displacements of the centers in the determination of $\mathcal{P}_{j}$ based on \textbf{Lemma} \ref{lem2}. The complexity analysis is given in Subsection \ref{subsec4-3}. \subsection{The Proposed Algorithm}\label{subsec4-3} Accelerated FCM based on new affinity filtering and membership scaling (\textbf{AMFCM}), which integrates the two new schemes into the traditional iteration, is herein proposed. In this algorithm, after a traditional FCM iteration, the current $\mathbf{U}$ is adjusted by the new affinity filtering and membership scaling schemes. The proposed algorithm is presented in Algorithm \ref{alg1}. \begin{algorithm}[htp!] \caption{\textbf{AMFCM}} \label{alg1} \begin{algorithmic}[1] \REQUIRE Data set $\mathbf{X}=\{\mathbf{x}_1,\mathbf{x}_2, \cdots, \mathbf{x}_n\}$, cluster number $c$, fuzzy exponent $m$, and convergence threshold $\varepsilon$; \ENSURE Cluster center $\mathbf{V}$. \STATE Initialize cluster centers $\mathbf{V}^{(0)}$ and set $t:=0$; \STATE Compute $d_{ij}^{(t)}=\|\mathbf{x}_{j}-\mathbf{v}^{(t)}_{i}\|$, $i=1,...,c, j=1,...,n$;\label{step3} \STATE Compute $\mathbf{U}^{(t)}$ with $u^{(t)}_{ij}=\left[\sum_{k=1}^c\left(d^{(t)}_{ij}/d^{(t)}_{kj}\right)^{\frac{2}{m-1}}\right]^{-1}$;\label{step5} \STATE Compute $\bar{\mathbf{V}}^{(t+1)}$ with $\bar{\mathbf{v}}_{i}^{(t+1)}=\frac{\sum_{j=1}^{n}\left(u^{(t)}_{ij}\right)^{m}\mathbf{x}_{j}}{\sum_{j=1}^{n}\left(u^{(t)}_{ij}\right)^{m}}$;\label{alg_lin0} \STATE Compute $\delta_{i}^{(t)}=\|\bar{\mathbf{v}}_{i}^{(t+1)}-\mathbf{v}_{i}^{(t)}\|$ for $i=1,2,...,c$; \label{alg_lin1} \FOR{$j=1$ to $n$}\label{alg_lin2} \STATE $\mathcal{P}_{j}^{(t)}=\{1 \leq i \leq c \mid d^{(t)}_{ij}-\delta^{(t)}_{i} \geq {D_{j}^{(1)}}^{(t)}+\delta^{(t)}_{I_{j}^{*}}\}$; \STATE Compute $\tilde{u}_{ij}^{(t)}$ according to Eq.
\eqref{eq_modified u};\label{alg_lin4}\\ \ENDFOR\label{alg_lin3} \STATE Compute $\mathbf{V}^{(t+1)}$ with $\mathbf{v}_i^{(t+1)}=\frac{\sum_{j=1}^{n}\left(\tilde{u}^{(t)}_{ij}\right)^{m}\mathbf{x}_{j}} {\sum_{j=1}^{n}\left(\tilde{u}^{(t)}_{ij}\right)^{m}}$; \IF {$\|{\mathbf{V}^{(t+1)}-\mathbf{V}^{(t)}}\|\geq\varepsilon$} \STATE Set $t:=t+1$;\\ \STATE Goto Step \ref{step3}; \ELSE \RETURN $\mathbf{V}=\mathbf{V}^{(t+1)}$; \ENDIF \end{algorithmic} \end{algorithm} For brevity, the flowchart of AMFCM is as follows: \begin{equation*} \mathbf{V}^{(t)}\xrightarrow[\eqref{eq6},~\eqref{eq_modified u}]{\xrightarrow{\eqref{eq_3}}\mathbf{U}^{(t)} \xrightarrow{\eqref{eq_2}}\bar{\mathbf{V}}^{(t+1)}, \delta_{i}^{(t)}=\|\bar{\mathbf{v}}_{i}^{(t+1)}-\mathbf{v}_{i}^{(t)}\|}\tilde{\mathbf{U}}^{(t)}\xrightarrow{\eqref{eq_2}} \mathbf{V}^{(t+1)}. \end{equation*} Some comments on AMFCM follow. \begin{itemize} \item As shown above, the difference between AMFCM and FCM is the calculation of $\tilde{\mathbf{U}}^{(t)}$, which is the novelty of AMFCM. This inserted step improves both the efficiency and the quality of the clustering. \item The extra cost of AMFCM over FCM lies in Steps \ref{alg_lin0}-\ref{alg_lin3}. The cost of Step \ref{alg_lin0} is $\mathcal{O}(ncp)$, the cost of computing $\delta_{i}$ $(1\le i\le c)$ in Step \ref{alg_lin1} is only $\mathcal{O}(cp)$, and the new affinity filtering technique \eqref{eq6} needs $\mathcal{O}(n(c-1))$, where $d_{ij}^{(t)}$ in \eqref{eq6} and \eqref{eq_modified u} has already been calculated in Step \ref{step3}. Therefore, the cost of AMFCM is $\mathcal{O}(3ncp)$ per iteration (the cost of FCM is $\mathcal{O}(2ncp)$ per iteration). \item The time complexity of FCM is $\mathcal{O}(nc^{2}pt_{\textbf{FCM}})$ \cite{kolen2002reducing, bhat2022reducing}, where $t_{\textbf{FCM}}$ is the number of iterations of FCM.
For the time complexity of AMFCM, the update of the centers $\mathbf{V}_{\textbf{AMFCM}}$ requires $\mathcal{O}((\sum_{j=1}^{n} |\mathcal{P}_{j}|) p)$ per iteration, where $0\le |\mathcal{P}_{j}| \le c-1$ for $j=1,2,...,n$. With the update of the membership grade matrix $\tilde{\mathbf{U}}$, the time complexity of AMFCM is $\mathcal{O}(nc^{2}pt_{\textbf{AMFCM}})$, where $t_{\textbf{AMFCM}}$ is the number of iterations of AMFCM. Based on \textbf{Theorem} \ref{theorem3}, $t_{\textbf{AMFCM}} < t_{\textbf{FCM}}$. As a result, AMFCM saves running time. \item Every membership grade of AMFCM satisfies ${\widetilde{u}}_{ij}\in\left[0,1\right]$ for $i=1,2,...,c$ and $j=1,2,...,n$, by Eq. \eqref{eq_modified u}. AMFCM is thus a combination of hard and soft clustering: it sets the membership grades to 0 to eliminate the contributions of the samples in the update of their non-affinity centers, while the fuzzy-type update is performed for the remaining centers. In particular, AMFCM is a parameter-free and adaptive clustering algorithm, which autonomously determines all samples-centers affinities and the corresponding update method. \end{itemize} \section{Theoretical Analysis} \label{sec6} In this section, the convergence properties of AMFCM are provided. First, the flowcharts of FCM and AMFCM in the $t$th iteration are as follows: \begin{equation*} \begin{aligned} \textbf{FCM}:~~& \mathbf{V}^{(t)}\xrightarrow{\eqref{eq_3}}\mathbf{U}_{\textbf{FCM}}^{(t)}\xrightarrow{\eqref{eq_2}} \mathbf{V}_{\textbf{FCM}}^{(t+1)}.
\\ \textbf{AMFCM}:~~& \mathbf{V}^{(t)}\xrightarrow[\delta_{i}^{(t)}=\| {\mathbf{v}_{\textbf{FCM}}}_{i}^{(t+1)}-\mathbf{v}_{i}^{(t)}\|, \eqref{eq6},~\eqref{eq_modified u}]{\xrightarrow{\eqref{eq_3}}\mathbf{U}_{\textbf{FCM}}^{(t)}\xrightarrow{\eqref{eq_2}}\mathbf{V}_{\textbf{FCM}}^{(t+1)}}\tilde{\mathbf{U}}^{(t)}\xrightarrow{\eqref{eq_2}} \mathbf{V}_{\textbf{AMFCM}}^{(t+1)}.\\ \end{aligned} \end{equation*} \begin{theorem} The number of iterations of AMFCM is smaller than that of FCM in the clustering process. \label{theorem3} \end{theorem} \begin{proof} The proof is given in \textbf{Appendix} \ref{sec7-4}. \end{proof} \begin{theorem} AMFCM does not converge prematurely in the middle stage of the clustering process. \label{theorem1} \end{theorem} \begin{proof} The proof is given in \textbf{Appendix} \ref{sec7-2}. \end{proof} \begin{theorem} For the samples $\mathbf{x}_{j}$ with $|\mathcal{P}_{j}^{(t)}|=1$, the corresponding hard objective value of AMFCM is smaller than that of FCM in the $t$th iteration. \label{theorem2} \end{theorem} \begin{proof} The proof is given in \textbf{Appendix} \ref{sec7-3}. \end{proof} In the next section, several experiments are performed to illustrate the efficiency of the proposed algorithm. \section{Experimental Results} \label{sec5} To verify the effectiveness and efficiency of the proposed algorithm, experimental studies are carried out on synthetic and real-world data sets.
AMFCM is compared with another seven clustering algorithms, including: \begin{enumerate} \item Fuzzy C-Means (FCM) \cite{bezdek1984fcm}, \item Rough Fuzzy C-Means (RFCM) \cite{Mitra2006Rough}, \item Shadowed Set-based Rough C-Means (SRCM), Shadowed Set-based Rough Fuzzy C-Means $\textrm{\uppercase\expandafter{\romannumeral1}}$ (SRFCM $\textrm{\uppercase\expandafter{\romannumeral1}}$), and Shadowed Set-based Rough Fuzzy C-Means $\textrm{\uppercase\expandafter{\romannumeral2}}$ (SRFCM $\textrm{\uppercase\expandafter{\romannumeral2}}$) \cite{zhou2011Shadowed}, \item Rough-Fuzzy Clustering based on Two-stage Three-way Approximations (ARFCM) \cite{Zhou2018Rough}, \item Membership Scaling Fuzzy C-Means (MSFCM) \cite{Zhou2020A}. \end{enumerate} These algorithms are chosen because they use different techniques to reduce the contributions of the samples in the update of their non-affinity centers and increase the contributions of the samples in the update of the remaining centers for good clustering quality and fast convergence. All experiments are run on a computer with an Intel Core i7-6700 processor and a maximum memory of 8GB for all processes; the computer runs Windows 7 with MATLAB R2017a. The experimental setup and the evaluation metrics used for clustering performance are first described. The fuzziness weighting exponent $m=2$ and the termination parameter $\varepsilon=10^{-6}$ for all algorithms. In addition, the weight exponent of the core region $w_{l}=0.95$ and the weight exponent of the boundary region $w_{b}=1-w_{l}$ for RFCM \cite{Mitra2006Rough}, SRCM, SRFCM $\textrm{\uppercase\expandafter{\romannumeral1}}$ and SRFCM $\textrm{\uppercase\expandafter{\romannumeral2}}$ \cite{zhou2011Shadowed}, ARFCM \cite{Zhou2018Rough}. 
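As a reference for these experiments, one iteration of AMFCM (the FCM update, the new affinity filtering of \textbf{Lemma} \ref{lem2}, and the membership scaling \eqref{eq_modified u}) can be sketched as follows. This is a simplified NumPy sketch, not the original MATLAB implementation; `X` is the data matrix and `V` the current centers:

```python
import numpy as np

def amfcm_iteration(X, V, m=2.0):
    """One AMFCM step: FCM update, Lemma-2 affinity filtering, membership scaling."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12   # (n, c)
    # Standard FCM membership and the provisional centers (V bar).
    w = d ** (-2.0 / (m - 1.0))
    U = w / w.sum(axis=1, keepdims=True)
    Um = U ** m
    V_bar = (Um.T @ X) / Um.sum(axis=0)[:, None]
    delta = np.linalg.norm(V_bar - V, axis=1)                           # center displacements
    # Lemma 2: v_i is a non-affinity center of x_j if d_ij - delta_i >= D_j^(1) + delta_{I_j^*}.
    nearest = d.argmin(axis=1)
    D1 = d[np.arange(n), nearest]
    non_aff = d - delta[None, :] >= (D1 + delta[nearest])[:, None]
    non_aff[np.arange(n), nearest] = False      # the nearest center is never non-affinity
    # Membership scaling: zero the non-affinity entries and renormalize the rest.
    U_t = np.where(non_aff, 0.0, U)
    U_t /= U_t.sum(axis=1, keepdims=True)
    # Center update with the scaled memberships.
    Um_t = U_t ** m
    return (Um_t.T @ X) / Um_t.sum(axis=0)[:, None]
```

Iterating this step until $\|\mathbf{V}^{(t+1)}-\mathbf{V}^{(t)}\|<\varepsilon$ reproduces the loop of Algorithm \ref{alg1}.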
\subsection{Evaluation Metrics} To evaluate the performance of the proposed clustering algorithm, three external metrics are used: the overall F-measure for the entire data set ($\textbf{F}^{*}$), Normalized Mutual Information (\textbf{NMI}), and Adjusted Rand Index (\textbf{ARI}) \cite{parker2013accelerating, mei2016large,Hubert1985Comparingpartitions}. All three measure the agreement between the ground truth and the clustering results produced by an algorithm. Metrics that do not require data labels, called internal metrics, are also used for performance evaluation. Three internal validity metrics are selected: \textbf{PC} \cite{James1973Cluster}, \textbf{DBI} \cite{Davies1979Cluster}, and \textbf{XB} \cite{xie1991validity}. \begin{align} \textbf{PC}&=\frac{1}{n}\sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^{2},\\ \textbf{DBI}&=\frac{1}{c}\sum\limits_{k=1}^{c}\max_{i \neq k} {\frac{\frac{1}{|C_{i}|}\sum\limits_{\mathbf{x}_{j} \in C_{i}} d_{ij}^{2}+\frac{1}{|C_{k}|}\sum\limits_{\mathbf{x}_{j}\in C_{k}} d_{kj}^{2}}{\|\mathbf{v}_{i}-\mathbf{v}_{k}\|^{2}}},\\ \textbf{XB}&=\frac{\sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^{m}d_{ij}^{2}}{n\min_{i\neq k}\|\mathbf{v}_{k}-\mathbf{v}_{i}\|^{2}}. \end{align} Note that \textbf{Time} and \textbf{Iteration} are the remaining two evaluation metrics, which express the efficiency of the algorithms. \subsection{Experiments on Synthetic Data Sets} In the first set of experiments, to test the efficiency of AMFCM over the whole convergence process, two synthetic data sets in $\mathbb{R}^{2}$ are constructed to observe the convergence paths. The first synthetic data set contains three clusters generated by two-dimensional Gaussian distributions with mean vectors $\mu_{i}$ and covariance matrices $\Sigma_{i}$, $i=1,2,3$.
The number of data points in each cluster is 200, and the corresponding parameters $\mu_{i}$ and $\Sigma_{i}$ are $\mu_{1}=[10, 10]$, $\Sigma_{1}=\scriptsize{\setlength{\arraycolsep}{1.5pt}\begin{bmatrix} 0.3&0\\0&0.3\end{bmatrix}}$, $\mu_{2}=[13, 10]$, $\Sigma_{2}=\scriptsize{\setlength{\arraycolsep}{1.5pt}\begin{bmatrix} 0.8&0\\0&0.8\end{bmatrix}}$, and $\mu_{3}=[11, 4]$, $\Sigma_{3}=\scriptsize{\setlength{\arraycolsep}{1.5pt}\begin{bmatrix} 1.2&0\\0&1.2\end{bmatrix}}$, respectively. To further reflect the effect of the contributions of samples in other clusters on the convergence of the algorithms, a second synthetic data set is designed by adding some samples to the first. These two synthetic data sets are called D1 and D2, respectively. For an intuitive illustration, the convergence trajectories of FCM, MSFCM, and AMFCM on D1 and D2 under the same initializations are visualized in Fig. \ref{fig6}. \textbf{Time} and \textbf{Iteration} are selected to characterize the performance of the algorithms. Here, $t_{\textrm{A}}$ and $t_{\textrm{B}}$ are defined as the numbers of iterations of an algorithm in stages [A] and [B], respectively. \begin{figure*}[t] \centering \subfloat[On D1]{\includegraphics[width=0.25\textwidth]{figs/fig-all-D1}\label{fig6a}} \subfloat[FCM on D2]{\includegraphics[width=0.25\textwidth]{figs/fig-FCM-D2-new}\label{fig6b}} \subfloat[MSFCM on D2]{\includegraphics[width=0.25\textwidth]{figs/fig-MSFCM-D2-new}\label{fig6c}} \subfloat[AMFCM on D2]{\includegraphics[width=0.25\textwidth]{figs/fig-AMFCM-D2-new}\label{fig6d}} \caption{The convergence trajectories of FCM, MSFCM and AMFCM with the same initializations on data sets D1 and D2. The convergence trajectories of the three algorithms on D1 are put together in Fig. \ref{fig6a}. The convergence trajectories of the three algorithms on D2 are shown in Fig. 
\ref{fig6b}, \ref{fig6c} and \ref{fig6d}, respectively. On D1, FCM, MSFCM and AMFCM converge in 12 ($t_{\textrm{A}}$=2; $t_{\textrm{B}}$=10), 10 ($t_{\textrm{A}}$=2; $t_{\textrm{B}}$=8), and 8 ($t_{\textrm{A}}$=2; $t_{\textrm{B}}$=6) iterations, taking 0.1294, 0.0781, and 0.054 seconds, respectively. On D2, FCM, MSFCM and AMFCM converge in 30 ($t_{\textrm{A}}$=8; $t_{\textrm{B}}$=22), 18 ($t_{\textrm{A}}$=6; $t_{\textrm{B}}$=12), and 10 ($t_{\textrm{A}}$=3; $t_{\textrm{B}}$=7) iterations, taking 0.2830, 0.1094, and 0.066 seconds, respectively.}\label{fig6} \end{figure*} First of all, the convergence trajectories of FCM, MSFCM, and AMFCM on D1 are similar, since the total contributions of the samples in the update of their non-affinity centers are too small to observe. Therefore, the convergence trajectories of the three algorithms on D1 are put together, as shown in Fig. \ref{fig6a}. The experimental results on D1 show that AMFCM performs best, as shown in Fig. \ref{fig6}. From the convergence trajectories on D1, it is observed that although $t_{\textrm{A}}$=2 for FCM, MSFCM and AMFCM alike, $t_{\textrm{B}}$ of AMFCM is the smallest, with only 6 iterations. On D2, the contributions of the newly added samples in the update of their non-affinity centers mislead, through the membership grades, the direction in which the centers move for FCM and MSFCM. Therefore, stages [A] and [B] of FCM and MSFCM are prolonged, with $t_{\textrm{A}}$=8 and $t_{\textrm{B}}$=22 for FCM, and $t_{\textrm{A}}$=6 and $t_{\textrm{B}}$=12 for MSFCM. In contrast, AMFCM completely eliminates the misleading total contributions of the newly added samples in the update of their non-affinity centers, and thereby achieves better performance; from its convergence trajectory on D2, $t_{\textrm{A}}$=3 and $t_{\textrm{B}}$=7. Secondly, it is observed that MSFCM is more efficient than FCM, but its performance is limited. 
As analyzed in Section \ref{subsec3-1}, the previous affinity filtering \eqref{eq_4} of MSFCM fails to obtain the complete set of the non-affinity centers of each sample on D1 and D2, where $|\mathcal{P}|=1$ for some samples in this stage. Therefore, MSFCM does not always maintain high efficiency, because the membership scaling \eqref{eq_new u} built on this incomplete set becomes invalid. AMFCM makes up for this shortcoming: it accelerates the whole convergence process of FCM under the same initializations, and both stages [A] and [B] are accelerated. In particular, AMFCM saves 67\% of the iterations of FCM on D2. Consistent with the earlier complexity analysis, the running time of AMFCM decreases accordingly. Thus, it can be concluded that the new affinity filtering scheme \eqref{eq6} is implemented with high efficiency, and the new membership scaling scheme \eqref{eq_modified u} is outstanding in terms of both efficiency and clustering quality. \subsection{Experiments on Real-World Data Sets} In this subsection, experiments are conducted to verify the clustering efficiency and performance of AMFCM on real-world data sets. \subsubsection{Acceleration and Performance of AMFCM} To verify the acceleration of AMFCM in stages [A] and [B] on real-world data sets, the experiments in Fig. \ref{fig1} are repeated with AMFCM under the same settings, and the corresponding hard objective of AMFCM is also displayed to illustrate its clustering characteristics, as shown in Fig. \ref{fig7}. \begin{figure}[htp!] \centering \label{fig3a}\includegraphics[width=0.49\textwidth]{figs/Example2-new}~~~~~~ \caption{Plots of $\frac{J_{\textbf{Fuzzy}}(\mathbf{U}^{(t)}, \mathbf{V}^{(t)})}{J_{\textbf{Fuzzy}}(\mathbf{U}^{(0)}, \mathbf{V}^{(0)})}$ and $\frac{J_{\textbf{Hard}}(\mathbf{U}^{(t)}, \mathbf{V}^{(t)})}{J_{\textbf{Hard}}(\mathbf{U}^{(0)}, \mathbf{V}^{(0)})}$ for iteration $t$ on six real-world data sets with AMFCM. The initialization is selected randomly for each data set. 
The plots clearly show that the clustering process of AMFCM can be divided into stages [A] and [B], where [A] represents the early stage and [B] the mid-to-late stage.}\label{fig7} \end{figure} Similar to FCM, the fuzzy objective and the corresponding hard objective of AMFCM can also be divided into stages [A] and [B]. However, compared with Fig. \ref{fig1}, the number of iterations of AMFCM is much lower than that of FCM, saving at least 76$\%$ of the total iterations on these six real-world data sets. Meanwhile, stages [A] and [B] of the convergence process of AMFCM terminate earlier than those of FCM, which is attributed to the new affinity filtering and membership scaling schemes, as shown in Fig. \ref{fig7}. Recently, Nie \emph{et al.} \cite{Coordinate2021Nie} pointed out that a bad local minimum prevents the objective value from becoming small enough, which limits the performance of the algorithms. Accordingly, the fuzzy and corresponding hard objective values of FCM and AMFCM on these six real-world data sets are compared in TABLE \ref{table2}. \begin{table}[h] \centering \caption{Comparison of the fuzzy and hard objective values for FCM and AMFCM. The values are averaged over 10 trials with random initializations. 
The best results are shown in boldface.} \label{table2} \begin{tabular}{r|rr|rr} \toprule \multicolumn{1}{r|}{\multirow{2}{*}{Data sets}} &\multicolumn{2}{c|}{\multirow{1}{*}{Fuzzy Objective Value}} &\multicolumn{2}{c}{\multirow{1}{*}{Hard Objective Value}} \\ \vspace{1.5mm} &\multicolumn{1}{c}{\multirow{2}{*}{FCM}} &\multicolumn{1}{c|}{\multirow{2}{*}{AMFCM}} &\multicolumn{1}{c}{\multirow{2}{*}{FCM}} &\multicolumn{1}{c}{\multirow{2}{*}{AMFCM}} \\ \midrule Arcene& \textbf{4.297$\texttt{E}$+04} &4.439$\texttt{E}$+04 & 6.635$\texttt{E}$+04 &\textbf{6.261$\texttt{E}$+04}\\ DrivFace&\textbf{6.079$\texttt{E}$+04} &6.362$\texttt{E}$+04 &1.315$\texttt{E}$+05 &\textbf{1.101$\texttt{E}$+05}\\ Led7&4.302$\texttt{E}$+02 &\textbf{3.709$\texttt{E}$+02}&3.468$\texttt{E}$+03 &\textbf{1.468$\texttt{E}$+03}\\ Satimage&\textbf{5.988$\texttt{E}$+02} &6.474$\texttt{E}$+02&1.764$\texttt{E}$+03 &\textbf{1.389$\texttt{E}$+03}\\ Shuttle&1.383$\texttt{E}$+02 & \textbf{1.326$\texttt{E}$+02}&2.925$\texttt{E}$+02& \textbf{2.588$\texttt{E}$+02} \\ Sensorless&\textbf{1.869$\texttt{E}$+03} &1.919$\texttt{E}$+03&6.243$\texttt{E}$+03 &\textbf{4.558$\texttt{E}$+03}\\ \bottomrule \end{tabular} \end{table} In TABLE \ref{table2}, the fuzzy objective value cannot reach a very small value because AMFCM modifies some of the quantities derived from ordinary optimization theory; the fuzzy objective is sacrificed for the efficiency of the algorithm. However, AMFCM increases the membership of each sample to its nearest center through \eqref{eq_modified u}, so that the corresponding hard objective of AMFCM is continuously optimized. Therefore, the corresponding hard objective value of AMFCM is better than that of FCM. Furthermore, AMFCM not only greatly improves the efficiency of FCM, but also maintains better clustering performance. To display the performance of AMFCM more comprehensively, AMFCM is compared with the seven chosen clustering algorithms on the above eight evaluation metrics. 
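For concreteness, the two quantities compared in TABLE \ref{table2} can be written down directly. The sketch below assumes NumPy arrays and takes the hard objective to be the squared error under the maximum-membership assignment; this reading of $J_{\textbf{Hard}}$ is an illustrative assumption, not the paper's formal definition.

```python
import numpy as np

def objectives(X, U, V, m=2.0):
    """Fuzzy objective sum_{i,j} u_ij^m d_ij^2, and the squared error under
    the maximum-membership hard assignment (one plausible J_Hard)."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)  # (n, c) squared distances
    j_fuzzy = float(((U ** m) * d2).sum())
    hard = U.argmax(axis=1)                              # each sample's best center
    j_hard = float(d2[np.arange(len(X)), hard].sum())
    return j_fuzzy, j_hard
```

When $\mathbf{U}$ is crisp (0/1) the two objectives coincide; for genuinely fuzzy memberships the hard objective can keep improving even while the fuzzy objective is sacrificed, which is the behavior reported for AMFCM above.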
Moreover, all real-world data sets, selected from the UCI Machine Learning Repository\footnote{\url{https://archive.ics.uci.edu/ml/index.php}}, are clustered by the chosen clustering algorithms. The detailed information on the data sets is given in each table, where $n$ is the training-set size, $p$ is the dimensionality of the samples, and $c$ is the given number of clusters. The values are averaged over 10 trials with random initializations, the standard deviations are given after the means (linked with $\pm$), and the best results are shown in boldface. In addition, the corresponding \textbf{Iteration} and \textbf{Time} of the different algorithms on ten real-world data sets are shown in Fig. \ref{fig9}. \begin{figure*}[htp!] \centering \includegraphics[width=0.45\textwidth]{figs/NEW1108-1}~~~~~~~~~~ \includegraphics[width=0.45\textwidth]{figs/NEW1108-2} \caption{Plot of the corresponding \textbf{Iteration} and \textbf{Time} of the different algorithms on ten real-world data sets.}\label{fig9} \end{figure*} \begin{table*}[htp!] \caption{Experimental results on ten real-world data sets for different algorithms. The values are averaged over 10 trials with random initializations. 
The standard deviations are given after the means (linked with $\pm$), and the best results are shown in boldface.} \label{table3} \centering \resizebox{\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{l|r|rrrrrrrr} \toprule \multicolumn{1}{l|}{\multirow{1}{*}{Data sets}} &\multicolumn{1}{c|}{\multirow{1}{*}{Metrics \tnote{1}}} &\multicolumn{1}{c}{\multirow{1}{*}{\textbf{FCM}}} &\multicolumn{1}{c}{\multirow{1}{*}{\textbf{RFCM}}} &\multicolumn{1}{c}{\multirow{1}{*}{\textbf{SRCM}}} &\multicolumn{1}{c}{\multirow{1}{*}{\textbf{SRFCM $\textrm{\uppercase\expandafter{\romannumeral1}}$}}} &\multicolumn{1}{c}{\multirow{1}{*}{\textbf{SRFCM $\textrm{\uppercase\expandafter{\romannumeral2}}$}}} &\multicolumn{1}{c}{\multirow{1}{*}{\textbf{ARFCM}}} &\multicolumn{1}{c}{\multirow{1}{*}{\textbf{MSFCM}}} &\multicolumn{1}{c}{\multirow{1}{*}{\textbf{AMFCM}}} \\ \midrule &PC $\uparrow$&0.655$\pm$0.001&0.671$\pm$0.001&0.663$\pm$0.023&0.698$\pm$0.001&0.652$\pm$0.001&\textbf{0.710}$\pm$\textbf{0.001}&0.655$\pm$0.001&0.658$\pm$0.001\\ \textbf{Arcene}&DBI $\downarrow$&1.015$\pm$0.001&1.031$\pm$0.001&1.042$\pm$0.167&0.857$\pm$0.001&1.124$\pm$0.002&1.000$\pm$0.001&1.015$\pm$0.001&\textbf{0.831}$\pm$\textbf{0.001}\\ $n$=200&XB $\downarrow$&0.379$\pm$0.001&0.357$\pm$0.001&0.386$\pm$0.001&0.337$\pm$0.001&0.405$\pm$0.001&0.335$\pm$0.001&0.379$\pm$0.001&\textbf{0.316}$\pm$\textbf{0.001}\\ $p$=10000&$F^{*}$ $\uparrow$&0.583$\pm$0.001&0.641$\pm$0.001&0.633$\pm$0.004&0.586$\pm$0.001&0.643$\pm$0.001&0.649$\pm$0.001&0.583$\pm$0.001&\textbf{0.654}$\pm$\textbf{0.001}\\ $c$=2&ARI $\uparrow$&0.027$\pm$0.001&0.089$\pm$0.001&0.074$\pm$0.003&0.030$\pm$0.001&0.090$\pm$0.001&\textbf{0.091}$\pm$\textbf{0.001}&0.027$\pm$0.001&\textbf{0.091}$\pm$\textbf{0.001}\\ &NMI $\uparrow$&0.018$\pm$0.001&0.091$\pm$0.001&0.066$\pm$0.001&0.020$\pm$0.001&0.086$\pm$0.001&0.085$\pm$0.001&0.018$\pm$0.001&\textbf{0.087}$\pm$\textbf{0.001}\\ \midrule &PC 
$\uparrow$&0.420$\pm$0.001&0.489$\pm$0.001&0.479$\pm$0.010&0.487$\pm$0.006&0.477$\pm$0.009&0.445$\pm$0.001&0.461$\pm$0.001&\textbf{0.490}$\pm$\textbf{0.001}\\ \textbf{DrivFace}&DBI $\downarrow$&15.7$\pm$0.1&2.641$\pm$0.011&3.638$\pm$1.375&2.258$\pm$.0461&4.964$\pm$1.809&2.666$\pm$0.810&2.677$\pm$0.001&\textbf{1.644}$\pm$\textbf{0.001}\\ $n$=606&XB $\downarrow$&5.354$\pm$0.001&0.897$\pm$0.004&1.685$\pm$0.425&1.187$\pm$0.267&1.732$\pm$0.601&0.936$\pm$0.072&0.902$\pm$0.001&\textbf{0.567}$\pm$\textbf{0.001}\\ $p$=6400&$F^{*}$ $\uparrow$&0.558$\pm$0.001&0.576$\pm$0.005&0.541$\pm$0.001&0.576$\pm$0.004&0.571$\pm$0.003&0.566$\pm$0.001&0.587$\pm$0.001&\textbf{0.597}$\pm$\textbf{0.001}\\ $c$=3&ARI $\uparrow$&0.016$\pm$0.001&0.021$\pm$0.001&0.008$\pm$0.001&0.006$\pm$0.001&0.020$\pm$0.003&0.019$\pm$0.001&0.019$\pm$0.001&\textbf{0.024}$\pm$\textbf{0.001}\\ &NMI $\uparrow$&0.053$\pm$0.001&0.054$\pm$0.001&0.030$\pm$0.002&0.035$\pm$0.007&0.045$\pm$0.002&0.054$\pm$0.001&0.053$\pm$0.001&\textbf{0.057}$\pm$\textbf{0.001}\\ \midrule &PC $\uparrow$&\textbf{0.007}$\pm$\textbf{0.001}&\textbf{0.007}$\pm$\textbf{0.001}&0.006$\pm$0.003&\textbf{0.007}$\pm$\textbf{0.001}&0.007$\pm$0.002&0.005$\pm$0.001&\textbf{0.007}$\pm$\textbf{0.001}&\textbf{0.007}$\pm$\textbf{0.001}\\ \textbf{Feret}&DBI $\downarrow$&2.729$\pm$0.106&2.630$\pm$0.186&2.832$\pm$0.389&2.311$\pm$0.834&7.267$\pm$3.920&1.963$\pm$0.345&1.729$\pm$0.259&\textbf{0.927}$\pm$0.122\\ $n$=1400&XB $\downarrow$&0.127$\pm$0.001&0.292$\pm$0.001&0.156$\pm$0.041&0.062$\pm$0.010&5.984$\pm$4.184&0.056$\pm$0.001&0.088$\pm$0.001&\textbf{0.053}$\pm$0.009\\ $p$=1600&$F^{*}$ $\uparrow$&0.133$\pm$0.005&0.119$\pm$0.005&0.144$\pm$0.026&0.146$\pm$0.012&0.143$\pm$0.022&0.150$\pm$0.025&0.119$\pm$0.030&\textbf{0.153}$\pm$\textbf{0.002}\\ $c$=200&ARI $\uparrow$&0.016$\pm$0.034&0.016$\pm$0.020&0.022$\pm$0.009&0.024$\pm$0.011&0.019$\pm$0.005&0.021$\pm$0.006&0.021$\pm$0.003&\textbf{0.025}$\pm$\textbf{0.003}\\ &NMI 
$\uparrow$&0.415$\pm$0.009&0.382$\pm$0.015&0.415$\pm$0.028&0.419$\pm$0.021&0.402$\pm$0.019&0.425$\pm$0.017&0.419$\pm$0.058&\textbf{0.478}$\pm$0.034\\ \midrule &PC $\uparrow$&0.055$\pm$0.001&0.056$\pm$0.006&0.091$\pm$0.009&0.097$\pm$0.009&0.102$\pm$0.009&0.055$\pm$0.001&0.058$\pm$0.005&\textbf{0.108}$\pm$0.018\\ \textbf{COIL20}&DBI $\downarrow$&1.776$\pm$0.225&2.961$\pm$0.154&42.3$\pm$21.2&42.3$\pm$1.7&37.9$\pm$15.6&1.276$\pm$0.074&1.787$\pm$0.001&\textbf{0.847}$\pm$0.012\\ $n$=1440&XB $\downarrow$&1.077$\pm$0.001&1.099$\pm$0.014&9.7$\pm$3.7&10.1$\pm$3.0&14.2$\pm$6.2&0.297$\pm$0.001&0.970$\pm$0.003&\textbf{0.127}$\pm$0.005\\ $p$=1024&$F^{*}$ $\uparrow$&0.242$\pm$0.028&0.228$\pm$0.001&0.282$\pm$0.041&0.401$\pm$0.032&0.397$\pm$0.010&0.271$\pm$0.001&0.260$\pm$0.015&\textbf{0.443}$\pm$0.060\\ $c$=20&ARI $\uparrow$&0.110$\pm$0.036&0.133$\pm$0.001&0.248$\pm$0.058&0.237$\pm$0.026&0.254$\pm$0.017&0.279$\pm$0.002&0.110$\pm$0.023&\textbf{0.286}$\pm$0.068\\ &NMI $\uparrow$&0.299$\pm$0.042&0.372$\pm$0.001&0.339$\pm$0.045&0.385$\pm$0.020&0.317$\pm$0.004&0.301$\pm$0.003&0.374$\pm$0.017&\textbf{0.574}$\pm$0.059\\ \midrule &PC $\uparrow$&0.229$\pm$0.027&0.521$\pm$0.021&0.281$\pm$0.043&0.421$\pm$0.064&0.371$\pm$0.070&\textbf{0.593}$\pm$0.026&0.351$\pm$0.221&0.509$\pm$0.051\\ \textbf{Led7}&DBI $\downarrow$&0.986$\pm$0.131&1.145$\pm$0.100&2.852$\pm$0.575&1.454$\pm$0.229&1.807$\pm$0.229&1.160$\pm$0.206&0.983$\pm$0.117&\textbf{0.857}$\pm$0.105\\ $n$=3200&XB $\downarrow$&0.178$\pm$0.011&0.132$\pm$0.008&0.191$\pm$0.015&0.165$\pm$0.022&0.223$\pm$0.042&0.144$\pm$0.027&0.164$\pm$0.032&\textbf{0.122}$\pm$\textbf{0.002}\\ $p$=7&$F^{*}$ $\uparrow$&0.424$\pm$0.001&0.614$\pm$0.049&0.591$\pm$0.055&0.618$\pm$0.111&0.586$\pm$0.018&0.694$\pm$0.025&0.487$\pm$0.165&\textbf{0.731}$\pm$0.007\\ $c$=10&ARI $\uparrow$&0.232$\pm$0.001&0.415$\pm$0.043&0.376$\pm$0.063&0.405$\pm$0.101&0.356$\pm$0.034&0.438$\pm$0.021&0.436$\pm$0.152&\textbf{0.497}$\pm$0.007\\ &NMI 
$\uparrow$&0.366$\pm$0.001&0.498$\pm$0.038&0.465$\pm$0.052&0.494$\pm$0.077&0.466$\pm$0.033&0.510$\pm$0.016&0.473$\pm$0.104&\textbf{0.563}$\pm$0.019\\ \midrule &PC $\uparrow$&0.390$\pm$0.001&0.432$\pm$0.029&0.315$\pm$0.045&0.351$\pm$0.010&0.341$\pm$0.038&0.408$\pm$0.019&0.448$\pm$0.028&\textbf{0.476}$\pm$\textbf{0.001}\\ \textbf{Satimage}&DBI $\downarrow$&4.880$\pm$0.001&2.464$\pm$0.862&39.4$\pm$12.4&8.906$\pm$3.432&28.8$\pm$10.9&3.681$\pm$1.310&2.261$\pm$1.048&\textbf{0.908}$\pm$\textbf{0.001}\\ $n$=6435&XB $\downarrow$&3.478$\pm$0.001&1.112$\pm$0.023&25.2$\pm$3.1&1.612$\pm$0.419&8.032$\pm$2.187&1.180$\pm$0.186&0.493$\pm$0.036&\textbf{0.455}$\pm$\textbf{0.001}\\ $p$=36&$F^{*}$ $\uparrow$&0.553$\pm$0.001&0.593$\pm$0.029&0.631$\pm$0.049&0.638$\pm$0.062&0.612$\pm$0.080&0.635$\pm$0.009&0.638$\pm$0.013&\textbf{0.659}$\pm$\textbf{0.001}\\ $c$=6&ARI $\uparrow$&0.292$\pm$0.001&0.317$\pm$0.027&0.348$\pm$0.051&0.358$\pm$0.071&0.337$\pm$0.018&0.389$\pm$0.005&0.406$\pm$0.019&\textbf{0.443}$\pm$\textbf{0.001}\\ &NMI $\uparrow$&0.450$\pm$0.001&0.461$\pm$0.023&0.457$\pm$0.029&0.457$\pm$0.034&0.446$\pm$0.062&0.471$\pm$0.007&0.493$\pm$0.006&\textbf{0.515}$\pm$\textbf{0.001}\\ \midrule &PC $\uparrow$&0.656$\pm$0.001&0.720$\pm$0.001&0.733$\pm$0.001&0.717$\pm$0.001&0.702$\pm$0.001&0.679$\pm$0.036&0.656$\pm$0.001&\textbf{0.729}$\pm$\textbf{0.001}\\ \textbf{Magic}&DBI $\downarrow$&1.886$\pm$0.001&1.549$\pm$0.001&1.542$\pm$0.001&1.539$\pm$0.001&1.539$\pm$0.001&1.490$\pm$0.005&1.886$\pm$0.001&\textbf{1.050}$\pm$\textbf{0.001}\\ $n$=19020&XB $\downarrow$&0.545$\pm$0.001&0.390$\pm$0.001&0.373$\pm$0.001&0.381$\pm$0.001&0.372$\pm$0.001&0.438$\pm$0.010&0.545$\pm$0.001&\textbf{0.266}$\pm$\textbf{0.001}\\ $p$=10&$F^{*}$ $\uparrow$&0.582$\pm$0.001&0.627$\pm$0.001&0.622$\pm$0.001&0.633$\pm$0.001&0.612$\pm$0.001&0.591$\pm$0.003&0.582$\pm$0.001&\textbf{0.641}$\pm$\textbf{0.001}\\ $c$=2&ARI 
$\uparrow$&0.007$\pm$0.001&0.013$\pm$0.001&0.018$\pm$0.001&0.014$\pm$0.001&0.018$\pm$0.001&0.008$\pm$0.001&0.007$\pm$0.001&\textbf{0.020}$\pm$\textbf{0.001}\\ &NMI $\uparrow$&0.020$\pm$0.001&0.043$\pm$0.001&0.051$\pm$0.001&0.046$\pm$0.001&0.053$\pm$0.001&0.053$\pm$0.001&0.020$\pm$0.001&\textbf{0.057}$\pm$\textbf{0.001}\\ \midrule &PC $\uparrow$&0.362$\pm$0.001&0.447$\pm$0.035&0.353$\pm$0.006&0.398$\pm$0.017&0.349$\pm$0.032&0.330$\pm$0.032&0.409$\pm$0.006&\textbf{0.513}$\pm$0.049\\ \textbf{Shuttle}&DBI $\downarrow$&290.5$\pm$0.8&157.8$\pm$27.4&307.7$\pm$124.3&230.4$\pm$30.2&244.6$\pm$27.1&287.9$\pm$63.9&153.6$\pm$19.4&\textbf{52.2}$\pm$33.7\\ $n$=58000&XB $\downarrow$&98.7$\pm$1.6&9.7$\pm$0.4&13.5$\pm$1.3&13.0$\pm$3.9&12.2$\pm$0.4&65.1$\pm$8.41&15.4$\pm$0.1&\textbf{7.8}$\pm$3.6\\ $p$=9&$F^{*}$ $\uparrow$&0.504$\pm$0.001&0.593$\pm$0.010&0.546$\pm$0.061&0.578$\pm$0.062&0.571$\pm$0.010&0.460$\pm$0.056&0.512$\pm$0.043&\textbf{0.667}$\pm$0.071\\ $c$=7&ARI $\uparrow$&0.114$\pm$0.001&0.157$\pm$0.026&0.117$\pm$0.054&0.157$\pm$0.073&0.149$\pm$0.007&0.085$\pm$0.022&0.153$\pm$0.052&\textbf{0.190}$\pm$0.062\\ &NMI $\uparrow$&0.218$\pm$0.001&0.257$\pm$0.027&0.201$\pm$0.066&0.226$\pm$0.023&0.206$\pm$0.030&0.212$\pm$0.036&0.218$\pm$0.060&\textbf{0.260}$\pm$0.016\\ \midrule &PC $\uparrow$&0.263$\pm$0.016&0.251$\pm$0.023&0.202$\pm$0.026&0.191$\pm$0.024&0.112$\pm$0.004&0.241$\pm$0.016&0.295$\pm$0.023&\textbf{0.309}$\pm$\textbf{0.004}\\ \textbf{Sensorless}&DBI $\downarrow$&3.206$\pm$0.091&5.304$\pm$0.951&156.7$\pm$32.4&31.1$\pm$28.6&21.6$\pm$9.4&5.968$\pm$3.370&1.869$\pm$0.507&\textbf{0.814}$\pm$\textbf{0.054}\\ $n$=58509&XB $\downarrow$&1.494$\pm$0.067&3.051$\pm$0.667&35.5$\pm$14.7&25.2$\pm$14.5&15.9$\pm$8.6&3.906$\pm$1.513&1.195$\pm$0.635&\textbf{0.326}$\pm$\textbf{0.052}\\ $p$=48&$F^{*}$ $\uparrow$&0.307$\pm$0.014&0.282$\pm$0.021&0.281$\pm$0.017&0.271$\pm$0.020&0.288$\pm$0.023&0.275$\pm$0.028&0.311$\pm$0.023&\textbf{0.325}$\pm$\textbf{0.005}\\ $c$=11&ARI 
$\uparrow$&0.142$\pm$0.007&0.122$\pm$0.014&0.106$\pm$0.013&0.009$\pm$0.001&0.103$\pm$0.018&0.100$\pm$0.018&0.143$\pm$0.023&\textbf{0.147}$\pm$0.001\\ &NMI $\uparrow$&0.306$\pm$0.006&0.278$\pm$0.031&0.265$\pm$0.012&0.243$\pm$0.025&0.276$\pm$0.039&0.238$\pm$0.027&0.325$\pm$0.020&\textbf{0.339}$\pm$0.006\\ \midrule &PC $\uparrow$&0.420$\pm$0.001&0.521$\pm$0.001&0.522$\pm$0.033&0.512$\pm$0.053&0.514$\pm$0.028&\textbf{0.544}$\pm$\textbf{0.001}&0.447$\pm$0.001&0.503$\pm$0.001\\ \textbf{Seismic}&DBI $\downarrow$&10.9$\pm$0.1&2.834$\pm$0.001&3.055$\pm$1.179&2.569$\pm$0.490&3.186$\pm$1.057&\textbf{2.008}$\pm$\textbf{0.001}&7.034$\pm$0.001&3.364$\pm$0.005\\ $n$=78823&XB $\downarrow$&2.220$\pm$0.001&0.566$\pm$0.001&0.724$\pm$0.356&0.474$\pm$0.027&0.734$\pm$0.336&\textbf{0.428}$\pm$\textbf{0.001}&1.412$\pm$0.001&0.680$\pm$0.001\\ $p$=30&$F^{*}$ $\uparrow$&0.448$\pm$0.001&0.460$\pm$0.001&0.493$\pm$0.006&0.492$\pm$0.011&\textbf{0.503}$\pm$0.006&0.491$\pm$0.001&0.451$\pm$0.001&0.471$\pm$0.001\\ $c$=3&ARI $\uparrow$&0.038$\pm$0.001&0.040$\pm$0.001&\textbf{0.069}$\pm$0.001&0.046$\pm$0.004&0.055$\pm$0.022&0.063$\pm$0.001&0.038$\pm$0.001&0.045$\pm$0.001\\ &NMI $\uparrow$&0.043$\pm$0.001&0.045$\pm$0.001&0.066$\pm$0.003&0.065$\pm$0.005&\textbf{0.081}$\pm$0.024&0.058$\pm$0.001&0.043$\pm$0.001&0.049$\pm$0.001\\ \bottomrule \end{tabular} \begin{tablenotes} \item[1] A superscript '$\uparrow$' indicates that larger values of the metric correspond to better clustering performance; a superscript '$\downarrow$' indicates that smaller values correspond to better clustering performance. \end{tablenotes} \end{threeparttable}} \end{table*} From the experimental results in TABLE \ref{table3} and Fig. \ref{fig9}, the following conclusions are obtained. 
\begin{itemize} \item Comparing the first and last columns for each data set, AMFCM performs better than FCM on all data sets in terms of the above six evaluation metrics in TABLE \ref{table3}. Moreover, it is worth mentioning that the efficiency of AMFCM is improved in all stages, reducing the number of iterations of FCM by 80$\%$ on average without significant computational cost, as shown in Fig. \ref{fig9}. Therefore, AMFCM also achieves significant savings in running time: as shown in Fig. \ref{fig9}, the total \textbf{Iteration} and \textbf{Time} of AMFCM are much lower than those of the other algorithms. \item According to \cite{Zhou2020A}, the per-iteration costs of AMFCM and MSFCM are both $\mathcal{O}(3ncp)$. From the experimental results of MSFCM in the penultimate column, the acceleration of MSFCM fails on two data sets because the affinity filtering \eqref{eq_4} fails to obtain the complete non-affinity information, as analyzed in Section \ref{subsec3-1}, and because of the low efficiency of the membership scaling \eqref{eq_new u}, which is explained in Subsection \ref{subsubsec5-3-3}. On the remaining eight data sets, although MSFCM is effective, the acceleration of AMFCM across all stages is better than that of MSFCM. Thus, AMFCM is a successful generalization of MSFCM. \item The remaining five algorithms sometimes achieve better clustering quality than AMFCM, owing to well-chosen parameters for the division of each cluster. In such cases, however, the clustering efficiency of those algorithms is reduced, as shown in Fig. \ref{fig9}. In contrast, searching for the complete set of the non-affinity centers is a parameter-free and autonomous process in AMFCM. In summary, AMFCM is a good trade-off between efficiency and quality. 
\end{itemize} According to the summary above, AMFCM can greatly improve clustering efficiency and quality on real-world data sets, based on the new affinity filtering \eqref{eq6} and membership scaling \eqref{eq_modified u} schemes. The acquisition and elimination of the redundant contributions of the samples in the update of the centers are the key to the success of AMFCM. \subsubsection{Statistical Comparisons by Friedman Test}\label{subsubsec5-3-2} To compare the multiple algorithms systematically, the Friedman test \cite{Demiar2006Statistical} is applied to compare the clustering efficiency (\textbf{Time} and \textbf{Iteration}) and quality ($\textbf{F}^{*}$ and \textbf{ARI}) of the eight algorithms on the selected ten data sets. In detail, the Friedman test at significance level $\alpha=0.05$ rejects the null hypothesis of equal performance, which leads to the use of post-hoc tests to find out which algorithms actually differ. The Nemenyi test is then used, under which the performance of two algorithms is significantly different if their average ranks over all data sets differ by at least one critical difference. The critical difference is defined as $\text{CD}=q_{\alpha} \sqrt{\frac{K(K+1)}{6N}}$, where the critical values $q_{\alpha}$ are based on the Studentized range statistic divided by $\sqrt{2}$, $K$ is the number of comparison algorithms, and $N$ is the number of data sets. Here, $\textbf{F}^{*}$ and \textbf{ARI} are selected to evaluate the clustering quality; the remaining metrics yield similar results. The critical difference (CD) diagrams in Fig. \ref{fig11} are presented to analyze the significance between AMFCM and the comparison algorithms on the ten data sets with $\textbf{F}^{*}$, \textbf{ARI}, \textbf{Iteration} and \textbf{Time}, where the average rank of each algorithm is marked on the line and the axis. The axis is oriented so that the lowest (best) ranks are to the right. 
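The critical difference can be reproduced numerically from this formula. In the short sketch below, the Nemenyi critical value $q_{0.05}\approx 3.031$ for eight groups is an external table value (see \cite{Demiar2006Statistical}), not a quantity derived in this paper.

```python
import math

# Nemenyi critical difference: CD = q_alpha * sqrt(K(K+1) / (6N))
K = 8            # number of comparison algorithms
N = 10           # number of data sets
q_005 = 3.031    # Studentized range statistic / sqrt(2), K = 8, alpha = 0.05 (table value)

cd = q_005 * math.sqrt(K * (K + 1) / (6 * N))
# cd is about 3.32, matching the CD = 3.3203 shown above the axis in the diagrams
```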
Groups of algorithms that are not significantly different according to the Nemenyi test are connected with a red line. The critical difference (CD = 3.3203 at the 0.05 significance level) is also shown above the axis in each subfigure. \begin{figure}[htp] \centering \subfloat[$\textbf{F}^{*}$]{\label{fig11a}\includegraphics[width=0.5\textwidth]{figs/fig-nonpara-F}}\\ \subfloat[$\textbf{ARI}$]{\label{fig11b}\includegraphics[width=0.5\textwidth]{figs/fig-nonpara-ARI}}\\ \subfloat[$\textbf{Iteration}$]{\label{fig11c}\includegraphics[width=0.5\textwidth]{figs/fig-nonpara-iter}}\\ \subfloat[$\textbf{Time}$]{\label{fig11d}\includegraphics[width=0.5\textwidth]{figs/fig-nonpara-time}} \caption{CD diagrams of the eight comparison algorithms on the ten data sets with $\textbf{F}^{*}$, $\textbf{ARI}$, \textbf{Iteration}, and \textbf{Time}. It is clear that AMFCM statistically achieves a good trade-off between clustering quality and efficiency.}\label{fig11} \end{figure} According to the CD diagrams, first, AMFCM achieves statistically superior clustering efficiency and quality to FCM on the ten data sets. Second, from Fig. \ref{fig11a} and \ref{fig11b}, AMFCM presents statistically comparable clustering quality with ARFCM on the ten data sets; however, AMFCM statistically outperforms ARFCM in clustering efficiency, as shown in Fig. \ref{fig11c} and \ref{fig11d}. Finally, none of the algorithms presents statistically comparable performance with AMFCM in both efficiency and quality. Therefore, AMFCM statistically achieves a good trade-off between clustering quality and efficiency. \subsubsection{Efficiency of the New Affinity Filtering Scheme}\label{subsubsec5-3-3} \begin{figure*}[htp!] \centering \includegraphics[width=0.96\textwidth]{figs/fig-filter2} \caption{Plots of $\frac{\hat{n}_{t}}{n}$ in relation to iteration on ten data sets. 
The log-scale of the x-axis clearly shows the differences in sample filtering efficiency in stage [A] of the convergence process of the algorithms.}\label{fig8} \end{figure*} This set of experiments, carried out on the same data sets, tests the filtering rate of the new affinity filtering \eqref{eq6}, which is the key factor determining the acceleration of AMFCM. The previous affinity filtering \eqref{eq_4} and the new affinity filtering \eqref{eq6} are compared under the same settings. Let $\hat{n}_{t}=|\{1\leq j \leq n \mid |\mathcal{P}^{(t)}_{j}|\neq 0\}|$ denote the number of samples that satisfy the affinity filtering at iteration $t$. For the ten data sets, the curves of $\frac{\hat{n}_{t}}{n}$ are plotted against iteration $t$ in Fig. \ref{fig8}, where the blue and red lines represent MSFCM and AMFCM, respectively; the log-scale of the x-axis clearly shows the differences in sample filtering efficiency in stage [A] of the convergence process. First, the filtering efficiency of the new affinity filtering \eqref{eq6} is higher than that of the previous affinity filtering \eqref{eq_4} in stage [A]. Moreover, the new affinity filtering \eqref{eq6} reaches its highest filtering rate within 10 iterations, except on Magic and Seismic. Thus, the new affinity filtering \eqref{eq6} overcomes the inherent shortcoming of the previous affinity filtering \eqref{eq_4}, which easily becomes invalid in stage [A], as shown in Fig. \ref{fig8}. Second, on the data sets Magic and Seismic, the efficiency of the previous affinity filtering \eqref{eq_4} is higher than that of AMFCM in stage [B] of the convergence process. However, the previous membership scaling \eqref{eq_new u} is not very effective in accelerating the algorithm in stage [B], because $\beta_j^{(t)}$ is very close to 1 in that stage. 
The redundant contributions of the samples in the update of the centers are thus not reduced in MSFCM, resulting in a decrease in its clustering efficiency. AMFCM always maintains a high level of efficiency on all data sets, which can be observed from the number of iterations on the x-axis. From the above analysis, the new affinity filtering \eqref{eq6} and membership scaling \eqref{eq_modified u} schemes are complementary to each other; therefore, AMFCM is very efficient in both stages [A] and [B] of the convergence process. \section{Conclusion}\label{sec7} In this paper, an FCM algorithm based on new affinity filtering and membership scaling (AMFCM) is proposed to accelerate all convergence stages of traditional FCM clustering. In the proposed algorithm, a new affinity filtering is designed to obtain the complete set of non-affinity centers for each sample via a new set of triangle inequalities, which is more compatible with fuzzy clustering. A new membership scaling is then suggested to eliminate the contributions of the samples in the update of their non-affinity centers, by setting the corresponding membership grades to 0, and to promote the contributions of the samples in the update of the remaining centers through the fuzzy membership grades, which improves both the performance and the efficiency of the algorithm. Extensive experiments on synthetic and real-world data sets have verified its effectiveness and efficiency. Therefore, AMFCM is an FCM-type algorithm well balanced between clustering efficiency and quality. For future work, AMFCM could be explored to enhance the performance of FCM-type clustering algorithms on high-dimensional data sets. Another interesting possibility is to generalize the concept to nonlinear fuzzy clustering based on information granules. \bibliographystyle{IEEEtran}
\section{Introduction} \noindent The subject of this paper is states and orthogonal polynomials in non-commuting variables. The definition is straightforward. The usual orthogonal polynomials are obtained by starting with a measure $\mu$ on $\mf{R}^d$, thinking of $\mf{R}[x_1, x_2, \ldots, x_d]$ as a vector space with the (pre-)inner product \[ \ip{P}{Q} = \int_{\mf{R}^d} P(\mb{x}) Q(\mb{x}) \,d\mu(\mb{x}), \] and applying the Gram-Schmidt procedure to the monomials $\set{x_{u(1)} x_{u(2)} \ldots x_{u(n)}}$. In the non-commutative case, one starts directly with a positive linear functional (state) $\phi$ on the algebra of non-commutative polynomials $\mf{R} \langle x_1, x_2, \ldots, x_d \rangle$, and orthogonalizes the monomials in non-commuting variables with respect to the inner product \[ \ip{P}{Q} = \state{P^\ast(\mb{x}) Q(\mb{x})}. \] \medskip\noindent Among the general ``non-commutative measures'' and polynomials orthogonal with respect to them, there is a specific class of what is appropriate to call \emph{free Meixner states}. The classical Meixner class \cite{Meixner} consists of familiar distributions---normal, Poisson, gamma, negative binomial, Meix\-ner, and binomial---which, somewhat less familiarly, share a number of common properties: their orthogonal polynomials have exponential-form generating functions, they satisfy a quadratic regression property \cite{Laha-Lukacs}, they generate quadratic natural exponential families \cite{Morris}, they are quadratic harnesses \cite{Wes-commutative}, they are induced by representations of $\mk{su}(1,1)$ \cite{Koelink-Convolutions}, and they have explicit linearization coefficient formulas \cite{KimZeng}. The multivariate Meixner distributions have also been investigated, frequently in the guise of quadratic exponential families \cite{Casalis-Simple-quadratic,Pommeret-Test}, though a complete classification is still lacking. Even the infinite-dimensional case was considered \cite{Sniady-SWN,Lytvynov-Meixner}. 
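As a concrete one-variable illustration of the Gram-Schmidt orthogonalization described at the beginning of this section, only the moments of the measure are needed. The sketch below (plain Python; the semicircle moments are hard-coded as an example) recovers the monic Chebyshev polynomials of the second kind, the orthogonal polynomials of the semicircle law.

```python
from fractions import Fraction

# Moments m_0, ..., m_6 of the standard semicircle law
# (Catalan numbers at even orders, zero at odd orders).
MOMENTS = [Fraction(m) for m in (1, 0, 1, 0, 2, 0, 5)]

def inner(p, q):
    """<p, q> = sum_{i,j} p_i q_j m_{i+j}; polynomials as coefficient lists."""
    return sum(a * b * MOMENTS[i + j]
               for i, a in enumerate(p) for j, b in enumerate(q))

def minus_scaled(p, q, c):
    """p - c*q, aligning coefficient lists of different lengths."""
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a - c * b for a, b in zip(p, q)]

def gram_schmidt(degree):
    """Monic orthogonal polynomials P_0, ..., P_degree from the monomials."""
    basis = []
    for n in range(degree + 1):
        p = [Fraction(0)] * n + [Fraction(1)]        # the monomial x^n
        for q in basis:
            p = minus_scaled(p, q, inner(p, q) / inner(q, q))
        basis.append(p)
    return basis

# gram_schmidt(3) yields 1, x, x^2 - 1, x^3 - 2x.
```

In the multivariate non-commuting setting of this paper, the same recipe applies with monomials $x_{\vec{u}}$ in place of $x^n$ and the state $\phi$ supplying the mixed moments.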
\medskip\noindent In \cite{AnsMeixner}, I introduced the free Meixner polynomials, which are a family of orthogonal polynomials in one variable. The term ``free'' refers to their relation to free probability; see \cite{VDN,Nica-Speicher-book} for an introduction. As a matter of fact, these polynomials have been found independently both before and after my work, for example in \cite{Sze22,CTConstant,Freeman,SaiConstant,Kubo-IDAQP}. They share a number of the Meixner properties listed above, as long as they are properly translated into the ``free'' context; see my original paper and also \cite{Boz-Bryc}. Some of the corresponding distributions also appear in random matrix theory, as the limiting distributions in the Gaussian, Wishart, and Jacobi ensembles. \medskip\noindent In \cite{AnsMulti-Sheffer} I started the investigation of multivariate free Meixner distributions, which are states on the algebra of non-commutative polynomials. I continue their study in Section~\ref{Section:Meixner}. The main new tool is to represent these states as joint distributions of certain operators on a Fock space, following the more general construction in~\cite{AnsMonic}. I use this machinery, in combination with combinatorial methods, to find explicit formulas for the free cumulants of these states. This provides an explanation for the one-variable results in Section 3.1 of \cite{AnsMeixner} and Proposition 2.2 of \cite{Boz-Bryc}, and is the first main result of the paper. The operator representation of the state also allows me to handle states that are not necessarily faithful, thus answering a question of the referee of \cite{AnsMulti-Sheffer}, where only faithful free Meixner states were considered. \medskip\noindent Having an explicit representation for the cumulants, and being able to handle non-faithful states, allows me to describe a number of examples, which is done in Section~\ref{Section:Examples}.
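\medskip\noindent To keep a concrete instance in mind: in the mean-zero, variance-one normalization used throughout this paper, the one-variable free Meixner polynomials of \cite{AnsMeixner} satisfy the three-term recursion \[ x P_0 = P_1, \qquad x P_1 = P_2 + b P_1 + P_0, \qquad x P_n = P_{n+1} + b P_n + (1 + c) P_{n-1} \quad (n \geq 2), \] with parameters $b \in \mf{R}$ and $c \geq -1$; this is the $d = 1$ case of the recursion in part (d) of Theorem~\ref{Thm:Meixner} below.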
Among the usual multivariate Meixner distributions, two are familiar, namely the multivariate normal and the multinomial distributions. It is well known that the free analog of the multivariate normal distribution is the distribution of a free semicircular system, see Section~\ref{Subsec:Semicircular}. The second question treated in this paper is: what is the ``free'' multinomial distribution? I show that the basic multinomial distribution \emph{itself} also belongs to the free Meixner class. In particular, this allows me to calculate the distribution of a free sum of $d$-tuples of orthogonal projections. \medskip\noindent Among states on non-commutative algebras, \emph{traces} form an important class. The final result in this paper provides a way to construct a large family of non-trivial, tracial free Meixner states. These turn out to be analogs of simple quadratic exponential families. \section{Preliminaries} \noindent Variables in this paper will typically come in $d$-tuples, which will be denoted using the bold font: $\mb{x} = (x_1, x_2, \ldots, x_d)$, and the same for $\mb{z}, \mb{S}$, etc. \subsection{Polynomials} Let $\mf{R}\langle \mb{x} \rangle = \mf{R}\langle x_1, x_2, \ldots, x_d \rangle$ be all the polynomials with real coefficients in $d$ non-commuting variables. \emph{Multi-indices} are elements $\vec{u} \in \set{1, \ldots, d}^k$ for $k \geq 0$; for $\abs{\vec{u}} = 0$ denote $\vec{u}$ by $\emptyset$. Monomials in non-commuting variables $(x_1, \ldots, x_d)$ are indexed by such multi-indices: \[ x_{\vec{u}} = x_{u(1)} \ldots x_{u(k)}. \] Note that our use of the term ``multi-index'' is different from the usual one, which is more suited for indexing monomials in commuting variables. \medskip\noindent For two multi-indices $\vec{u}, \vec{v}$, denote by $(\vec{u}, \vec{v})$ their concatenation. For $\vec{u}$ with $\abs{\vec{u}} = k$, denote \[ (\vec{u})^{op} = (u(k), \ldots, u(2), u(1)). 
\] Define an involution on $\mf{R}\langle \mb{x} \rangle$ via the $\mf{R}$-linear extension of \[ (x_{\vec{u}})^\ast = x_{(\vec{u})^{op}}. \] \medskip\noindent A \emph{monic polynomial family} in $\mb{x}$ is a family $\set{P_{\vec{u}}(\mb{x})}$ indexed by all multi-indices \[ \bigcup_{k=1}^\infty \set{\vec{u} \in \set{1, \ldots, d}^k} \] (with $P_{\emptyset} = 1$ being understood) such that \[ P_{\vec{u}}(\mb{x}) = x_{\vec{u}} + \textsl{lower-order terms}. \] Note that $P_{\vec{u}}^\ast \neq P_{(\vec{u})^{op}}$ in general. \begin{Defn} \label{Defn:State} A \emph{state} on $\mf{R} \langle \mb{x} \rangle$ is a functional \[ \phi: \mf{R} \langle x_1, x_2, \ldots, x_d \rangle \rightarrow \mf{R} \] that is linear, compatible with the $\ast$-operation, that is, for any $P$, \[ \state{P} = \state{P^\ast}, \] unital, that is, $\state{1} = 1$, and positive, that is, for any $P$, \[ \state{P^\ast P} \geq 0. \] A state is \emph{faithful} if in the preceding equation, the equality holds only for $P = 0$. Unless noted otherwise, the states in this paper are \emph{not} assumed to be faithful. \medskip\noindent The numbers $\state{x_{\vec{u}}}$ are called the \emph{moments} of $\phi$. \medskip\noindent A state $\phi$ induces the pre-inner product \[ \ip{P}{Q}_\phi = \state{P^\ast Q} = \ip{Q}{P}_\phi \] and the seminorm \[ \norm{P}_\phi = \sqrt{\state{P^\ast P}}. \] Throughout the paper, we will typically drop $\phi$ from the notation, and denote the inner product and norm it induces simply by $\ip{\cdot}{\cdot}$, $\norm{\cdot}$. \medskip\noindent We may think of $\phi$ as a ``joint distribution'' of ``random variables'' $(x_1, x_2, \ldots, x_d)$. In the remainder of the paper, as we did in \cite{AnsMulti-Sheffer}, we will assume that under the state $\phi$, the variables have zero mean and identity covariance, \[ \state{x_i} = 0, \qquad \state{x_i x_j} = \delta_{ij}. \] The last assumption is made primarily so that equation~\eqref{PDE} has a clean form.
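\medskip\noindent For example, under these normalizations the monomials of degree at most one require no correction: $\ip{1}{x_i} = \state{x_i} = 0$ and $\ip{x_i}{x_j} = \state{x_i x_j} = \delta_{ij}$, so any orthogonalization procedure only modifies monomials of degree two and higher.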
In Section~\ref{Subsec:Covariance} we briefly describe how to modify the results if that assumption is dropped. \end{Defn} \subsection{Monic orthogonal polynomial states} \label{Subsec:MOPS} \begin{Defn} A state has a \emph{monic orthogonal polynomial system}, or MOPS, if for any multi-index $\vec{u}$, there is a monic polynomial $P_{\vec{u}}$ with leading term $x_{\vec{u}}$, such that these polynomials are orthogonal with respect to $\phi$, that is, \[ \ip{P_{\vec{u}}}{P_{\vec{v}}} = 0 \] for $\vec{u} \neq \vec{v}$. \end{Defn} \noindent Note that the same abbreviation is used in~\cite{Dumitriu-MOPS} to denote a class of multivariate orthogonal polynomial systems, which is different from ours. \medskip\noindent States that have MOPS were characterized in \cite{AnsMonic}. We briefly summarize the results of that paper that we will use in the next section. \subsubsection{Fock space construction I} \label{Subsubsec:General-Fock} Let $\mc{H} = \mf{C}^d$, with the canonical orthonormal basis $e_1, e_2, \ldots, e_d$. Define the (algebraic) full Fock space of $\mc{H}$ to be \[ \Falg(\mc{H}) = \bigoplus_{k=0}^\infty \mc{H}^{\otimes k}. \] Equivalently, $\Falg(\mc{H})$ is the vector space of non-commutative polynomials in $e_1, e_2, \ldots, e_d$. Following convention, we will denote the generating vector in $\mc{H}^{\otimes 0} = \mf{C}$ by $\Omega$ instead of $1$. \medskip\noindent For $i = 1, 2, \ldots, d$, define $a_i^+$ and $a_i^-$ to be the usual (left) free creation and annihilation operators, \begin{align*} a_i^+ & \left(e_{u(1)} \otimes e_{u(2)} \otimes \ldots \otimes e_{u(k)} \right) = e_i \otimes e_{u(1)} \otimes e_{u(2)} \otimes \ldots \otimes e_{u(k)}, \\ a_i^- & (e_j) = \ip{e_i}{e_j} \Omega = \delta_{i j} \Omega, \\ a_i^- & \left(e_{u(1)} \otimes e_{u(2)} \otimes \ldots \otimes e_{u(k)} \right) = \ip{e_i}{e_{u(1)}} e_{u(2)} \otimes \ldots \otimes e_{u(k)}.
\end{align*} \medskip\noindent For each $k \geq 2$ let $\mc{C}^{(k)}$ be an operator \[ \mc{C}^{(k)}: \mc{H}^{\otimes k} \rightarrow \mc{H}^{\otimes k}. \] We think of each $\mc{C}^{(k)}$ as a $d^k \times d^k$ matrix. Assume that for each $k$, $\mc{C}^{(k)}$ is diagonal and $\mc{C}^{(k)} \geq 0$. It is convenient to also take $\mc{C}^{(1)} = I$; this corresponds to the identity covariance. Similarly, for each $i = 1, 2, \ldots, d$ and each $k \geq 1$, let $\mc{T}_i^{(k)}$ be an operator \[ \mc{T}_i^{(k)}: \mc{H}^{\otimes k} \rightarrow \mc{H}^{\otimes k}. \] Assume that $\mc{T}_i^{(k)}$ and $\mc{C}^{(j)}$ satisfy a commutation relation (see \cite{AnsMonic}). We will denote by $\mc{T}_i$ and $\mc{C}$ the operators acting as $\mc{T}_i^{(k)}$ and $\mc{C}^{(k)}$ on each component. Finally, let $\tilde{a}_i^- = a_i^- \mc{C}$ and \[ \mc{X}_i = a_i^+ + \mc{T}_i + \tilde{a}_i^-. \] With the appropriate choice of the inner product $\ip{\cdot}{\cdot}_{\mc{C}}$ on the completion $\mc{F}_{\mc{C}}(\mc{H})$ of the quotient of $\Falg(\mc{H})$, all the operators $a_i^+, \mc{T}_i, \tilde{a}_i^-$ factor through to $\mc{F}_{\mc{C}}(\mc{H})$, and each $\mc{X}_i$ is a symmetric operator on it. \begin{Thm}(Part of Theorem 2 of \cite{AnsMonic}) \label{Thm:Monic-states} Let $\phi$ be a state on $\mf{R} \langle \mb{x} \rangle$. The following are equivalent: \begin{enumerate} \item The state $\phi$ has a monic orthogonal polynomial system. 
\item There is a family of polynomials $\set{P_{\vec{u}}}$ such that $\state{P_{\vec{u}}} = 0$ for all $\vec{u} \neq \emptyset$ and they satisfy a recursion relation \begin{align*} x_i & = P_i + B_{i, \emptyset, \emptyset}, \\ x_i P_u & = P_{(i, u)} + \sum_{w=1}^d B_{i, w, u} P_{w} + \delta_{i, u} C_u, \\ x_i P_{\vec{u}} & = P_{(i, \vec{u})} + \sum_{\abs{\vec{w}} = \abs{\vec{u}}} B_{i, \vec{w}, \vec{u}} P_{\vec{w}} + \delta_{i, u(1)} C_{\vec{u}} P_{(u(2), u(3), \ldots, u(k))}, \end{align*} with $C_{\vec{u}} \geq 0$ and, denoting $\vec{s}_j = (s(j), \ldots, s(k))$, \[ B_{i, \vec{s}, \vec{u}} \prod_{j=1}^k C_{\vec{s}_j} = B_{i, \vec{u}, \vec{s}} \prod_{j=1}^k C_{\vec{u}_j}. \] \item For some choice of the matrices $\mc{C}^{(k)}$ and $\mc{T}_i^{(k)}$ as in Section~\ref{Subsubsec:General-Fock}, the state $\phi$ has a Fock space representation $\phi_{\mc{C}, \set{\mc{T}_i}}$ as \begin{equation*} \state{P(x_1, x_2, \ldots, x_d)} = \ip{\Omega}{P(\mc{X}_1, \mc{X}_2, \ldots, \mc{X}_d) \Omega}. \end{equation*} \end{enumerate} \end{Thm} \medskip\noindent We will also need the following relation between the operators in part (c) and coefficients in part (b) of the theorem: \begin{equation} \label{Expansion-T} \mc{T}_i(e_{u(1)} \otimes \ldots \otimes e_{u(k)}) = \sum_{\abs{\vec{w}} = k} B_{i, \vec{w}, \vec{u}} e_{w(1)} \otimes \ldots \otimes e_{w(k)} \end{equation} and \begin{equation} \label{Expansion-C} \mc{C}(e_{u(1)} \otimes \ldots \otimes e_{u(k)}) = C_{\vec{u}} e_{u(1)} \otimes \ldots \otimes e_{u(k)}. \end{equation} \subsection{Fock space construction II} \label{Subsec:Fock2} The following construction is a particular case of the construction in Section~\ref{Subsubsec:General-Fock}, but this time we provide full details. As before, let $\mc{H} = \mf{C}^d$, with the canonical basis $e_1, e_2, \ldots, e_d$, denote its (algebraic) full Fock space by $\Falg(\mc{H})$, and the generator of the zeroth component by $\Omega$. 
Let $C$ be an operator on $\mc{H} \otimes \mc{H}$, which we identify with its $d^2 \times d^2$ matrix in the standard basis. Assume that $C$ is diagonal and \begin{equation} \label{C-positive} (I \otimes I) + C \geq 0, \end{equation} where $I$ will always denote the identity operator on $\mc{H}$. On $\Falg(\mc{H})$, define a new inner product using the non-negative kernel \[ K_C = \bigl(I^{\otimes (k-2)} \otimes (I^{\otimes 2} + C)\bigr) \ldots \bigl(I \otimes (I^{\otimes 2} + C) \otimes I^{\otimes (k-3)}\bigr) \bigl((I^{\otimes 2} + C) \otimes I^{\otimes (k-2)}\bigr) \] on each $\mc{H}^{\otimes k}$, and denote the completion of $\Falg(\mc{H})$ with respect to this inner product by $\mc{F}_C(\mc{H})$. If the inner product is degenerate, first factor out the subspace of vectors of length zero, and then complete. \medskip\noindent For $i = 1, 2, \ldots, d$, let $a_i^+$ and $a_i^-$ be the usual (left) free creation and annihilation operators as defined in Section~\ref{Subsubsec:General-Fock}. Let $T_1, \ldots, T_d$ be operators on $\mc{H}$, which we identify with their $d \times d$ matrices. Assume that each $T_i$ is symmetric and \[ (T_i \otimes I) C = C (T_i \otimes I). \] With a slight abuse of notation, we will denote \[ T_i = T_i \otimes I^{\otimes (k-1)} \text{ on } \mc{H}^{\otimes k} \] and \[ \tilde{a}_i = a_i^- (C \otimes I^{\otimes (k-2)}) \text{ on } \mc{H}^{\otimes k}. \] Note that \begin{equation} \label{Zero} a_i^- \Omega = T_i \Omega = \tilde{a}_i \Omega = 0 \text{ and } \tilde{a}_i = 0 \text{ on } \mc{H}. \end{equation} It follows from the general construction in Section~\ref{Subsubsec:General-Fock} that all the operators \[ X_i = a_i^+ + a_i^- + T_i + \tilde{a}_i \] factor through to $\mc{F}_C(\mc{H})$.
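\medskip\noindent To see what this inner product looks like, write $C(e_i \otimes e_j) = C_{ij} \, e_i \otimes e_j$ for the diagonal entries of $C$. A direct computation with the kernel $K_C$ on the first few tensor powers gives \[ \norm{e_i \otimes e_j}_C^2 = 1 + C_{ij}, \qquad \norm{e_i \otimes e_j \otimes e_k}_C^2 = (1 + C_{ij})(1 + C_{jk}), \] while distinct basis tensors remain orthogonal, since $C$ is diagonal; condition~\eqref{C-positive} is exactly the requirement that these squared norms be non-negative.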
\begin{Defn} \label{Defn:Fock-state} The Fock state $\phi = \phi_{C, \set{T_i}}$ on $\mf{R} \langle \mb{x} \rangle$ determined by such $C$ and $T_i$ is the state \begin{equation*} \state{P(x_1, x_2, \ldots, x_d)} = \ip{\Omega}{P(X_1, X_2, \ldots, X_d) \Omega} = \ip{\Omega}{P(X_1, X_2, \ldots, X_d) \Omega}_C. \end{equation*} \end{Defn} \subsection{Non-crossing partitions} A \emph{partition} $\pi$ of a set $V \subset \mf{Z}$ is a collection of disjoint subsets of $V$ (classes of $\pi$), $\pi = (B_1, B_2, \ldots, B_k)$, whose union equals $V$. Most of the time we will be interested in partitions of $\set{1, 2, \ldots, n}$. Partitions form a partially ordered set (in fact a lattice) under the operation of refinement, so that the largest partition is $\hat{1} = \bigl( \set{1, 2, \ldots, n} \bigr)$ and the smallest partition is $\hat{0} = \bigl( \set{1}, \set{2}, \ldots, \set{n} \bigr)$. We will use $i \stackrel{\pi}{\sim} j$ to denote that $i, j$ lie in the same class of $\pi$. \medskip\noindent Let $\NC(V)$ denote the collection of non-crossing partitions of $V$, which are partitions $\pi$ such that \[ i \stackrel{\pi}{\sim} i', j \stackrel{\pi}{\sim} j', i \stackrel{\pi}{\not \sim} j, i < j < i' \Rightarrow i < j' < i'. \] Equivalently, a partition is non-crossing if and only if one of its classes is an interval and the restriction of the partition to the complement of this class is non-crossing. Non-crossing partitions themselves form a lattice under refinement (although not a sub-lattice of the lattice of all partitions, since the two joins need not coincide). For each $n$, let $\NC(n)$ denote the lattice of non-crossing partitions of the set $\set{1, 2, \ldots, n}$. We will also denote by $\NC_0(V)$ all non-crossing partitions with no singletons (one-element classes), and by $\NC'(V)$ all the non-crossing partitions $\pi$ such that \[ \min V \stackrel{\pi}{\sim} \max V. \] Equivalently, partitions in $\NC'(V)$ have a single outer class---the one that contains both $\min V$ and $\max V$---in the terminology of \cite{BLS96}.
(A class $B \in \pi$ is outer if there do \emph{not} exist $i, i' \not \in B$, $j \in B$ with $i \stackrel{\pi}{\sim} i'$ and $i < j < i'$.) See \cite{Nica-Speicher-book} or \cite{Stanley-volume-1} for more details on the relevant combinatorics. \subsection{Free cumulants} The free cumulant functional $R$ corresponding to a state $\phi$ is the linear functional on $\mf{R} \langle \mb{x} \rangle$ defined recursively by $\Cum{1} = 0$ and for $\abs{\vec{u}} = n$, \begin{equation} \label{Cumulants-definition} \Cum{x_{\vec{u}}} = \state{x_{\vec{u}}} - \sum_{\substack{\pi \in \NC(n), \\ \pi \neq \hat{1}}} \prod_{B \in \pi} \Cum{\prod_{i \in B} x_{u(i)}}, \end{equation} which expresses $\Cum{x_{\vec{u}}}$ in terms of the joint moments and sums of products of lower-order free cumulants. From these, we can form the free cumulant generating function of $\phi$ via \begin{equation} \label{Non-crossing} R(z_1, z_2, \ldots, z_d) = \sum_{n=1}^\infty \sum_{\abs{\vec{u}} = n} \Cum{x_{\vec{u}}} z_{\vec{u}}, \end{equation} where $\mb{z} = (z_1, \ldots, z_d)$ are non-commuting indeterminates. One can also define $R$ using an implicit functional relation involving the moment generating function of $\phi$, see Corollary~16.16 of \cite{Nica-Speicher-book}. \subsection{Words and partitions} \label{Subsec:Facts} In this section, we collect a number of facts that will be useful in the proof of the next two theorems. Note that in many places, operators are considered as acting on $\Falg(\mc{H})$, with a degenerate inner product, rather than on $\mc{F}_C(\mc{H})$. \begin{Lemma} Let $\vec{u}$ be a multi-index indexed by a set $V \subset \mf{Z}$, and $W = \prod_{i \in V} W(i)$ be a word with $W(i)$ equal to $a^+_{u(i)}, T_{u(i)}, a_{u(i)}^-$, or $\tilde{a}_{u(i)}$. 
If \[ \ip{\Omega}{\prod_{i \in V} W(i) \Omega} \neq 0, \] then \begin{equation} \label{Catalan-walk} \begin{split} W(\min V) = a_{u(\min V)}^-, \quad W(\max V) = a_{u(\max V)}^+, & \\ \forall i \in V, \abs{\set{j \in V | j \geq i, W(j) = a_{u(j)}^- \text{ or } W(j) = \tilde{a}_{u(j)}}} & \leq \abs{\set{j \in V | j \geq i, W(j) = a_{u(j)}^+}}, \\ \abs{\set{j \in V | W(j) = a_{u(j)}^- \text{ or } W(j) = \tilde{a}_{u(j)}}} & = \abs{\set{j \in V | W(j) = a_{u(j)}^+}}, \end{split} \end{equation} and \begin{multline} \label{Level-two} \abs{\set{j \in V | j \geq i, W(j) = a_{u(j)}^- \text{ or } W(j) = \tilde{a}_{u(j)}}} = \abs{\set{j \in V | j \geq i, W(j) = a_{u(j)}^+}} \\ \Rightarrow W(i) = a_{u(i)}^-. \end{multline} \end{Lemma} \begin{proof} This follows from the fact that if $\eta \in \mc{H}^{\otimes k}$, then $a_i^+(\eta) \in \mc{H}^{\otimes (k+1)}$, $T_i(\eta) \in \mc{H}^{\otimes k}$, and $a_i^-(\eta), \tilde{a}_i(\eta) \in \mc{H}^{\otimes (k-1)}$, and equation~\eqref{Zero}. \end{proof} \noindent In combinatorics, equation~\eqref{Catalan-walk} is related to the notion of a Motzkin path. More generally, our operator representations are closely related to a common way of representing moments as sums over lattice paths \cite{Flajolet,Viennot-Short}, but in the multivariate case we find the operator formulation more useful. \begin{Notation} Let \[ \mc{W}_n(\vec{u}) = \bigl\{W = W(1) W(2) \ldots W(n) \text{ satisfying conditions } \eqref{Catalan-walk} \text{ and } \eqref{Level-two} \text{ for } V = \set{1, \ldots, n}\bigr\}, \] and for a general subset $V \subset \mf{Z}$, define $\mc{W}_V(\vec{u})$ similarly. 
\end{Notation} \begin{Lemma} \label{Lemma:Bijection} For any multi-index $\vec{u}$, partition $\pi \in \NC_0(n)$, $\pi = (V_1, V_2, \ldots, V_k)$ and partitions $\sigma_j \in \NC_0'(V_j)$, $j = 1, 2, \ldots, k$, define a word $W = \beta_{\vec{u}}(\pi; \sigma_1, \ldots, \sigma_k)$ by \begin{equation} \label{Partition-word} W(i) = \begin{cases} a_{u(i)}^+, & i \in B \in \sigma_j, i = \max B, \\ a_{u(i)}^-, & i \in V_j, i = \min V_j, \\ \tilde{a}_{u(i)}, & i \in B \in \sigma_j, i = \min B, i \neq \min V_j, \\ T_{u(i)}, & \text{otherwise}. \end{cases} \end{equation} Then $W \in \mc{W}_n(\vec{u})$, and for each $V \in \pi$, $W$ restricted to $V$ is in $\mc{W}_V(\vec{u}:V)$, where $(\vec{u}:V)$ is the sub-multi-index of $\vec{u}$ indexed by the elements of $V$. Moreover, for each $\vec{u}$, $\beta_{\vec{u}}$ is a bijection. \end{Lemma} \begin{proof} Let $W = \beta_{\vec{u}}(\pi; \sigma_1, \ldots, \sigma_k)$. Condition~\eqref{Catalan-walk} for the whole set $\set{1, 2, \ldots, n}$ (respectively, for $V_j$) follows from the definition of $\beta$ and the fact that $\pi, \sigma_1, \ldots, \sigma_k$ (respectively, $\sigma_j$) are non-crossing. Condition~\eqref{Level-two} follows from the definition, since the value of $W$ at the minimum of each outer class of the partition $\pi$ (respectively, $\sigma_j$) is $a^-$. \medskip\noindent Conversely, let $W \in \mc{W}_n(\vec{u})$. Let $\Lambda \subset \set{1, 2, \ldots, n}$, \[ \Lambda = \set{j | W(j) \neq T_{u(j)}}. \] It follows from Proposition~2.13 and Exercise~8.23 of \cite{Nica-Speicher-book} that, as long as $W$ restricted to $\Lambda$ satisfies condition~\eqref{Catalan-walk}, there is a unique non-crossing pair partition $\pi' \in \NC(\Lambda)$ such that for any $B \in \pi'$, \begin{align*} i = \min B & \Leftrightarrow W(i) = a_{u(i)}^- \text{ or } \tilde{a}_{u(i)}, \\ i' = \max B & \Leftrightarrow W(i') = a_{u(i')}^+. \end{align*} Note that $W(1) = a_{u(1)}^-$, so $(1, i') \in \pi'$ for some $i' > 1$.
Moreover, $W(i'+1) \ldots W(n) \Omega \in \mc{H}^{\otimes 0}$, so by condition~\eqref{Level-two}, $W(i'+1) = a_{u(i'+1)}^-$. Thus $(i'+1, j') \in \pi'$ for some $j'$, etc., ending with $(s,n) \in \pi'$. It follows that for any $j \in \set{1, \ldots, n}$, there exist $i \stackrel{\pi'}{\sim} i'$ such that $i \leq j \leq i'$ and $W(i) = a_{u(i)}^-$. For each $j$, choose the largest $i$ such that $i \stackrel{\pi'}{\sim} i'$, $i \leq j \leq i'$, and $W(i) = a_{u(i)}^-$, and require that $i \stackrel{\pi}{\sim} j \stackrel{\pi}{\sim} i'$. Similarly, for each class $V_s \in \pi$ and each $j \in V_s$, choose the largest $i \in V_s$ such that $i \stackrel{\pi'}{\sim} i'$ and $i \leq j \leq i'$, and require that $i \stackrel{\sigma_s}{\sim} j \stackrel{\sigma_s}{\sim} i'$. Pictorially, we draw the integers $1, 2, \ldots, n$ on a line, draw the pair classes of $\pi'$ as arcs connecting each $i$ with the corresponding $i'$ above the line, and then connect each of the other elements to the arc immediately above it. \end{proof} \begin{Lemma} \label{Lemma:Factor} For any $W \in \mc{W}_n(\vec{u})$, let $(\pi; \sigma_1, \ldots, \sigma_k) = \beta_{\vec{u}}^{-1}(W)$ with $\pi = (V_1, V_2, \ldots, V_k)$. Then \[ \ip{\Omega}{W(1) W(2) \ldots W(n) \Omega} = \prod_{i=1}^k \ip{\Omega}{\prod_{j \in V_i} W(j) \Omega}. \] \end{Lemma} \begin{proof} Since $\pi$ is a non-crossing partition, it has a class $V$ that is an interval, \[ V = [i, i'] = \set{j | i \leq j \leq i'}. \] Since $\pi$ restricted to $\set{1, \ldots, n} \backslash V$ is still a non-crossing partition, it suffices to show that \[ \ip{\Omega}{W(1) W(2) \ldots W(n) \Omega} = \ip{\Omega}{\prod_{j=1}^{i-1} W(j) \prod_{j=i'+1}^n W(j) \Omega} \ip{\Omega}{\prod_{j=i}^{i'} W(j) \Omega}. \] Denote \[ \eta = W(i'+1) \ldots W(n) \Omega \in \mc{H}^{\otimes m}. \] We now show that for any $i < j \leq i'$, \[ W(j) \ldots W(n) \Omega = \zeta_j \otimes \eta \] for \[ \zeta_j = W(j) \ldots W(i') \Omega. \] The proof is by induction.
\[ W(i') \eta = a_{u(i')}^+ \eta = e_{u(i')} \otimes \eta = (W(i') \Omega) \otimes \eta. \] If $W(j) = a_{u(j)}^+$, $\zeta_j = e_{u(j)} \otimes \zeta_{j+1}$. If $W(j) = T_{u(j)}$, then $\zeta_j = T_{u(j)} \zeta_{j+1}$. $W(j)$ cannot equal $a_{u(j)}^-$. Finally, it follows from condition~\eqref{Level-two} applied to $V = [i, i']$ that for all $j$, $i < j \leq i'$, \[ W(j+1) \ldots W(n) \Omega \in \mc{H}^{\otimes s} \] with $s > m$. Thus $W(j)$ may equal $\tilde{a}_{u(j)}$ only if $s \geq m + 2$, otherwise \[ W(j) W(j+1) \ldots W(n) \Omega \in \mc{H}^{\otimes m}. \] But if $s \geq m + 2$, $\zeta_j = a_{u(j)}^- C \zeta_{j+1}$. \medskip\noindent It follows that also \[ W(i) \ldots W(n) \Omega = (W(i) \ldots W(i') \Omega) \otimes \eta = \ip{\Omega}{W(i) \ldots W(i') \Omega} \eta. \] Thus \[ \begin{split} \ip{\Omega}{W(1) W(2) \ldots W(n) \Omega} & = \ip{\Omega}{W(1) \ldots W(i-1) \ip{\Omega}{W(i) \ldots W(i') \Omega} \eta} \\ & = \ip{\Omega}{W(1) \ldots W(i-1) \eta} \ip{\Omega}{W(i) \ldots W(i') \Omega} \\ & = \ip{\Omega}{\prod_{j=1}^{i-1} W(j) \prod_{j=i'+1}^n W(j) \Omega} \ip{\Omega}{\prod_{j=i}^{i'} W(j) \Omega}. \qedhere \end{split} \] \end{proof} \begin{Notation} \label{Notation:Covered-bijection} For $V \subset \mf{Z}$, denote \[ \begin{split} \mc{W}_V'(\vec{u}) = \{W \in \mc{W}_V(\vec{u}) |& W(\min V) = a_{u(\min V)}^-, W(\max V) = a_{u(\max V)}^+, \\ &\quad \text{ and none of the other $W(i)$ are equal to } a_{u(i)}^-\}. \end{split} \] The partition $\pi$ corresponding to any such $W$ has only one class, $\pi = (V) \in \NC(V)$, and \[ \beta_{\vec{u}}^{-1}(\mc{W}_V'(\vec{u})) = \set{((V), \sigma) | \sigma \in \NC_0'(V)} \cong \NC_0'(V). \] Denote \[ \Theta(\sigma; V, \vec{u}) = \ip{\Omega}{\beta_{\vec{u}}((V), \sigma) \Omega}. 
\] \end{Notation} \begin{Lemma} If $W \in \mc{W}_n(\vec{u})$ and $\beta_{\vec{u}}^{-1}(W) = (\pi, \sigma_1, \ldots, \sigma_k)$, $\pi = (V_1, V_2, \ldots, V_k)$, then \[ \ip{\Omega}{W(1) \ldots W(n) \Omega} = \prod_{j=1}^k \ip{\Omega}{\prod_{i \in V_j} W(i) \Omega} = \prod_{j=1}^k \Theta(\sigma_j; V_j, (\vec{u}:V_j)), \] where $(\vec{u}:V_j)$ is the sub-multi-index of $\vec{u}$ indexed by the elements of $V_j$. \end{Lemma} \begin{proof} This follows from Lemma~\ref{Lemma:Factor} using Notation~\ref{Notation:Covered-bijection}. \end{proof} \begin{Lemma} \label{Lemma:Factor2} Suppose that $W \in \mc{W}_n'(\vec{u})$ such that $W(1) = a_{u(1)}^- = a_{j}^-$ and $W(2) = \tilde{a}_{u(2)} = \tilde{a}_{i}$. Then $\beta_{\vec{u}}^{-1}(W) = ((\set{1, \ldots, n}), \sigma)$. It follows from condition~\eqref{Partition-word} that for $2 \in B \in \sigma$, we have $2 = \min B$. Let $k = \max B$. Then $W(k) = a_{u(k)}^+$ and \begin{multline*} \ip{\Omega}{W(1) W(2) \ldots W(k) \ldots W(n) \Omega} = \ip{\Omega}{a_{j}^- \tilde{a}_{i} W(3) \ldots a_{u(k)}^+ W(k+1) \ldots W(n-1) a_{u(n)}^+ \Omega} \\ = C_{ij} \ip{\Omega}{a_j^- W(k+1) \ldots W(n-1) a_{u(n)}^+ \Omega} \ip{\Omega}{a_i^- W(3) \ldots a_{u(k)}^+ \Omega}. \end{multline*} Moreover, the map \[ \begin{split} \{W \in \mc{W}_n'(\vec{u}) | & W(1) = a_{u(1)}^- = a_{j}^-, W(2) = \tilde{a}_{u(2)} = \tilde{a}_{i}\} \\ & \rightarrow \bigcup_{k=3}^{n-1} \mc{W}_{\set{1, k+1, \ldots, n}}'\bigl((\vec{u}:\set{1, k+1, \ldots, n})\bigr) \times \mc{W}_{\set{2, \ldots, k}}'\bigl((\vec{u}:\set{2, \ldots, k})\bigr) \\ & \cong \bigcup_{k=3}^{n-1} \NC_0'(\set{1, k+1, \ldots, n}) \times \NC_0'(\set{2, \ldots, k}) \end{split} \] is a bijection. \end{Lemma} \begin{proof} By the same method as in Lemma~\ref{Lemma:Factor}, we deduce that \[ W(3) \ldots a_{u(k)}^+ W(k+1) \ldots W(n-1) a_{u(n)}^+ \Omega = \bigl(W(3) \ldots a_{u(k)}^+ \Omega \bigr) \otimes \bigl(W(k+1) \ldots W(n-1) a_{u(n)}^+ \Omega \bigr). 
\] The inner product of this vector with $C(e_i \otimes e_j)$ is the desired expression. \end{proof} \begin{Lemma} \label{Lemma:Last} Suppose that $C_{ij} = C(e_i \otimes e_j) = c$ for all $i, j$. Let $\sigma \in \NC_0'(n)$, \[ \sigma = \Bigl( \set{b_{1,1}, \ldots, b_{1, j(1)}}, \ldots, \set{b_{k,1}, \ldots, b_{k, j(k)}} \Bigr), \] where each class is ordered and $b_{1,1} = 1$. Then \[ \Theta(\sigma; \set{1, \ldots, n}, \vec{u}) = c^{k-1} \prod_{i=1}^k \ip{e_{u(b_{i,1})}}{T_{u(b_{i,2})} \ldots T_{u(b_{i, j(i) - 1})} e_{u(b_{i, j(i)})}}. \] \end{Lemma} \begin{proof} This follows from the definition of $\beta$, noting that $W(b_{1,1}) = W(1) = a_{u(1)}^-$, $W(b_{i,1}) = \tilde{a}_{u(b_{i,1})} = c a_{u(b_{i,1})}^-$ for $i \neq 1$, $W(b_{i, j(i)}) = a_{u(b_{i, j(i)})}^+$, and the rest of the terms are $T_{u(b_{i,l})}$. \end{proof} \section{Main theorems} \label{Section:Meixner} \begin{Thm} \label{Thm:Cumulants} For each $i$, let \[ S_i = a_i^+ + T_i + \tilde{a}_i = X_i - a_i^- \] be an operator on $\Falg(\mc{H})$. Then the free cumulants of the Fock state $\phi_{C, \set{T_i}}$ from Definition~\ref{Defn:Fock-state} are given by the formula $\Cum{x_i} = 0$, \[ \Cum{x_i P(\mb{x}) x_j} = \ip{e_i}{P(\mb{S}) e_j} = \ip{e_i}{P(\mb{S}) e_j}_C. \] \end{Thm} \begin{proof} Since $S_{u(i)} = a_{u(i)}^+ + T_{u(i)} + \tilde{a}_{u(i)}$, and using Notation~\ref{Notation:Covered-bijection}, for $\abs{\vec{u}} = n$, \[ \begin{split} \ip{e_{u(1)}}{S_{u(2)} \ldots S_{u(n-1)} e_{u(n)}} & = \ip{\Omega}{a_{u(1)}^- S_{u(2)} \ldots S_{u(n-1)} a_{u(n)}^+ \Omega} = \sum_{W \in \mc{W}_n'(\vec{u})} \ip{\Omega}{W \Omega} \\ & = \sum_{\sigma \in \NC_0'(n)} \ip{\Omega}{\beta_{\vec{u}}\bigl((\set{1, \ldots, n}), \sigma\bigr) \Omega} \\ & = \sum_{\sigma \in \NC_0'(n)} \Theta(\sigma; \set{1, \ldots, n}, \vec{u}).
\end{split} \] Similarly, since $X_{u(i)} = a_{u(i)}^+ + T_{u(i)} + a_{u(i)}^- + \tilde{a}_{u(i)}$, using Lemma~\ref{Lemma:Bijection} and the preceding equation, \[ \begin{split} \ip{\Omega}{X_{u(1)} X_{u(2)} \ldots X_{u(n)} \Omega} & = \sum_{W \in \mc{W}_n(\vec{u})} \ip{\Omega}{W(1) W(2) \ldots W(n) \Omega} \\ & = \sum_{k=1}^n \sum_{\substack{\pi \in \NC_0(n) \\ \pi = (V_1, V_2, \ldots, V_k)}} \sum_{\substack{\sigma_j \in \NC_0'(V_j) \\ j = 1, \ldots, k}} \prod_{i=1}^k \Theta(\sigma_i; V_i, (\vec{u}:V_i)) \\ & = \sum_{k=1}^n \sum_{\substack{\pi \in \NC_0(n) \\ \pi = (V_1, V_2, \ldots, V_k)}} \prod_{i=1}^k \left( \sum_{\sigma_i \in \NC_0'(V_i)} \Theta(\sigma_i; V_i, (\vec{u}:V_i)) \right) \\ & = \sum_{k=1}^n \sum_{\substack{\pi \in \NC_0(n) \\ \pi = (V_1, V_2, \ldots, V_k)}} \prod_{i=1}^k \ip{e_{(\vec{u}:V_i)(1)}}{S_{(\vec{u}:V_i)(2)} \ldots S_{(\vec{u}:V_i)(\abs{V_i}-1)} e_{(\vec{u}:V_i)(\abs{V_i})}}. \end{split} \] Thus \[ \state{x_{\vec{u}}} = \sum_{\pi \in \NC_0(n)} \prod_{V \in \pi} \ip{e_{(\vec{u}:V)(1)}}{S_{(\vec{u}:V)(2)} \ldots S_{(\vec{u}:V)(\abs{V}-1)} e_{(\vec{u}:V)(\abs{V})}}. \] Since $\Cum{x_i} = \state{x_i} = 0$, the conclusion of the theorem now follows from the defining relation for the free cumulants, namely \begin{equation*} \state{x_{\vec{u}}} = \sum_{\pi \in \NC(n)} \prod_{B \in \pi} \Cum{\prod_{i \in B} x_{u(i)}}. \qedhere \end{equation*} \end{proof} \begin{Cor} \label{Cor:Self-adjoint} Each $X_i$ is symmetric and bounded, hence self-adjoint. \end{Cor} \begin{proof} The symmetry is proved exactly as in Proposition~1 of \cite{AnsMonic}, or can be deduced from it. To prove boundedness, choose $m$ such that $\norm{C}, \norm{T_i} < m$. Since $\abs{\NC(n)} < 4^n$, and $\abs{\Theta(\sigma; V, \vec{u})} < m^{\abs{V}}$, it follows that $\abs{\Cum{x_{\vec{u}}}} < (4m)^{\abs{\vec{u}}}$ and \[ \abs{\phi_{C, \set{T_i}} \left[ X_{u(1)} X_{u(2)} \ldots X_{u(n)} \right]} < (16 m)^n. \] Thus for each $i$, $\norm{X_i} < 16 m$.
\end{proof} \begin{Notation} Let $\mb{z} = (z_1, \ldots, z_d)$ be non-commuting indeterminates, which commute with $\mb{x}$. For a non-commutative power series $G$ in $\mb{z}$ and $i = 1, \ldots, d$, define the left non-commutative partial derivative $D_i G$ by a linear extension of $D_i(1) = 0$, \[ D_i z_{\vec{u}} = \delta_{i u(1)} z_{u(2)} \ldots z_{u(n)}. \] Denote by $\mb{D} G = (D_1 G, \ldots, D_d G)$ the left non-commutative gradient. \medskip\noindent For a non-commutative power series $G$, denote by $G^{-1}$ its inverse with respect to multiplication. For a $d$-tuple of non-commutative power series $\mb{G} = (G_1, \ldots, G_d)$, denote by $\mb{G}^{\langle -1 \rangle}$ its inverse with respect to composition (which is also a $d$-tuple). \end{Notation} \begin{Thm} \label{Thm:Meixner} Let $\phi$ be a state on $\mf{R} \langle \mb{x} \rangle$ with a monic orthogonal polynomial system (MOPS), zero means and identity covariance. The following are equivalent. \begin{enumerate} \item There exists a non-commutative power series \[ F(\mb{z}) = 1 + (\textsl{terms of degree } \geq 2) \] and a $d$-tuple of non-commutative power series $\mb{U}$, \[ U_i(\mb{z}) = z_i + \textsl{higher-order terms}, \] such that the polynomials defined via their generating function \[ \sum_{\abs{\vec{u}} \geq 0} P_{\vec{u}}(\mb{x}) z_{\vec{u}} = F(\mb{z}) \Bigl(1 - \mb{x} \cdot \mb{U}(\mb{z})\Bigr)^{-1} \] are a MOPS for $\phi$. \item The polynomials with the generating function \begin{equation} \label{Generating} \sum_{\abs{\vec{u}} \geq 0} P_{\vec{u}}(\mb{x}) z_{\vec{u}} = \Bigl( 1 - \mb{x} \cdot (\mb{D} R)^{\langle -1 \rangle} (\mb{z}) + R\bigl((\mb{D} R)^{\langle -1 \rangle} (\mb{z})\bigr) \Bigr)^{-1} \end{equation} are a MOPS for $\phi$, where $R$ is the free cumulant generating function~\eqref{Non-crossing} of $\phi$. 
\item The free cumulant generating function of $\phi$ satisfies, for each $i, j$, a (non-commutative) second-order partial differential equation \begin{equation} \label{PDE} D_i D_j R(\mb{z}) = \delta_{ij} + \sum_{k=1}^d B_{ij}^k D_k R(\mb{z}) + C_{ij} D_i R(\mb{z}) D_j R(\mb{z}), \end{equation} where $C_{ij} \geq -1$, $B_{ij}^{k} = B_{ik}^{j}$, and for each $j,k$, either $B_{ij}^{k} = 0$ for all $i$, or $C_{ju} = C_{ku}$ for all $u$. \item There is a family of polynomials $\set{P_{\vec{u}}}$ such that $\state{P_{\vec{u}}} = 0$ for all $\vec{u} \neq \emptyset$ and they satisfy a recursion relation \begin{align*} x_i & = P_i, \\ x_i P_{j} &= P_{(i,j)} + \sum_{k=1}^d B_{ij}^{k} P_{k} + \delta_{ij}, \\ x_i P_{(j, \vec{u})} &= P_{(i, j, \vec{u})} + \sum_{k=1}^d B_{ij}^{k} P_{(k, \vec{u})} + \delta_{ij} (1 + C_{i, u(1)}) P_{\vec{u}}, \end{align*} where $C_{ij}, B_{ij}^{k}$ satisfy the same conditions as in part (c). \item There exist symmetric matrices $T_i$ and a diagonal non-negative matrix $C$ with $(T_i \otimes I) C = C (T_i \otimes I)$ such that $\phi$ has a representation $\phi_{C, \set{T_i}}$ as a Fock state of Definition~\ref{Defn:Fock-state}. \end{enumerate} We call such states \emph{free Meixner states}. \end{Thm} \begin{proof} The equivalence (a)~$\Leftrightarrow$~(b) follows from Lemma~4 of \cite{AnsMulti-Sheffer} and Theorem 3.21 of \cite{AnsAppell}, neither of which relied on the assumption that $\phi$ is faithful. The equivalence (d)~$\Leftrightarrow$~(e) follows from the equivalence between the more general Fock space construction and the more general recursion relation in Theorem~\ref{Thm:Monic-states}. \medskip\noindent (e) $\Rightarrow$ (c). By Theorem~\ref{Thm:Cumulants}, \[ R(\mb{z}) = \sum_{j, l = 1}^d \biggl( \ip{e_j}{e_l} z_j z_l + \sum_{\abs{\vec{u}} \geq 1} \ip{e_j}{S_{\vec{u}} e_l} z_j z_{\vec{u}} z_l \biggr). 
\] Therefore \[ D_j R(\mb{z}) = \sum_{l = 1}^d \biggl( \ip{e_j}{e_l} z_l + \sum_{\abs{\vec{u}} \geq 1} \ip{e_j}{S_{\vec{u}} e_l} z_{\vec{u}} z_l \biggr) \] and \[ \begin{split} D_i D_j R(\mb{z}) & = \ip{e_j}{e_i} + \sum_{l = 1}^d \biggl( \ip{e_j}{S_i e_l} z_l + \sum_{\abs{\vec{u}} \geq 1} \ip{e_j}{S_i S_{\vec{u}} e_l} z_{\vec{u}} z_l \biggr) \\ & = \ip{e_j}{e_i} + \sum_{l = 1}^d \biggl( \ip{e_j}{T_i e_l} z_l + \sum_{\abs{\vec{u}} \geq 1} \ip{e_j}{(T_i + \tilde{a}_i) S_{\vec{u}} e_l} z_{\vec{u}} z_l \biggr) \\ & = \ip{e_j}{e_i} + \sum_{l = 1}^d \biggl( \ip{e_j}{T_i e_l} z_l + \sum_{\abs{\vec{u}} \geq 1} \ip{e_j}{T_i S_{\vec{u}} e_l} z_{\vec{u}} z_l \biggr) + \sum_{l = 1}^d \sum_{\abs{\vec{u}} \geq 1} \ip{e_j}{\tilde{a}_i S_{\vec{u}} e_l} z_{\vec{u}} z_l \\ & = \ip{e_j}{e_i} + \sum_{l = 1}^d \biggl( \sum_{k=1}^d \ip{e_j}{T_i e_k} \ip{e_k}{e_l} z_l + \sum_{\abs{\vec{u}} \geq 1} \sum_{k=1}^d \ip{e_j}{T_i e_k} \ip{e_k}{S_{\vec{u}} e_l} z_{\vec{u}} z_l \biggr) \\ &\quad + \sum_{l = 1}^d \sum_{\abs{\vec{u}} \geq 1} \ip{e_j}{\tilde{a}_i S_{\vec{u}} e_l} z_{\vec{u}} z_l \end{split} \] where in the last step we have used the fact that $\set{e_k}$ form an orthonormal basis. 
Using Lemma~\ref{Lemma:Factor2}, for $n \geq 4$ and $\vec{u}$ a multi-index on $\set{3, \ldots, n-1}$ \[ \begin{split} \ip{e_j}{\tilde{a}_i S_{\vec{u}} e_l} & = \ip{\Omega}{a_j^- \tilde{a}_i S_{\vec{u}} e_l} \\ & = C_{ij} \sum_{k=3}^{n-1} \sum_{\begin{subarray}{l} W_1 \in \mc{W}_{\set{1, k+1, \ldots, n}}'\bigl(j, (\vec{u}:\set{k+1, \ldots, n-1}), l\bigr) \\ W_2 \in \mc{W}_{\set{2, \ldots, k}}'\bigl(i, (\vec{u}:\set{3, \ldots, k})\bigr) \end{subarray}} \ip{\Omega}{W_1 \Omega} \ip{\Omega}{W_2 \Omega} \\ & = C_{ij} \sum_{k=3}^{n-1} \sum_{W_1 \in \mc{W}_{\set{1, k+1, \ldots, n}}'\bigl((j, \vec{w}, l)\bigr)} \sum_{W_2 \in \mc{W}_{\set{2, \ldots, k}}'\bigl((i, \vec{v})\bigr)} \ip{e_j}{W_1 e_l} \ip{e_i}{W_2 e_s}, \end{split} \] where $\vec{v} = (\vec{u}:\set{3, \ldots, k-1})$, $s = u(k)$, and $\vec{w} = (\vec{u}:\set{k+1, \ldots, n})$. Thus \[ \begin{split} D_i D_j R(\mb{z}) & = \ip{e_j}{e_i} + \sum_{l = 1}^d \left( \sum_{k=1}^d \ip{e_j}{T_i e_k} \ip{e_k}{e_l} z_l + \sum_{\abs{\vec{u}} \geq 1} \sum_{k=1}^d \ip{e_j}{T_i e_k} \ip{e_k}{S_{\vec{u}} e_l} z_{\vec{u}} z_l \right) \\ &\quad + C_{ij} \sum_{l = 1}^d \sum_{(\vec{v}, s, \vec{w})} \sum_{W_1 \in \mc{W}_{\set{1, k+1, \ldots, n}}'\bigl((j, \vec{w}, l)\bigr)} \sum_{W_2 \in \mc{W}_{\set{2, \ldots, k}}'\bigl((i, \vec{v})\bigr)} \ip{e_j}{W_1 e_l} \ip{e_i}{W_2 e_s} z_{\vec{v}} z_s z_{\vec{w}} z_l \\ & = \ip{e_j}{e_i} + \sum_{k=1}^d \ip{e_j}{T_i e_k} D_k R(\mb{z}) + \sum_{l = 1}^d \sum_{\substack{\vec{u} = (\vec{v}, s, \vec{w}) \\ \abs{\vec{v}}, \abs{\vec{w}} \geq 0}} C_{ij} \ip{e_i}{S_{\vec{v}} e_s} \ip{e_j}{S_{\vec{w}} e_l} z_{\vec{v}} z_s z_{\vec{w}} z_l \\ & = \ip{e_j}{e_i} + \sum_{k=1}^d \ip{e_j}{T_i e_k} D_k R(\mb{z}) + C_{ij} D_i R(\mb{z}) D_j R(\mb{z}). \end{split} \] The conditions on the coefficients in part (c) are equivalent to the conditions on the matrices in part (e). \medskip\noindent (c) $\Rightarrow$ (e). 
Since the states are assumed to have zero means, the corresponding free cumulant generating functions have no linear terms. In that case, a free cumulant generating function $R$, and so the corresponding state $\phi$, are completely determined by equations~\eqref{PDE}. Moreover, for any choice of $\set{C_{ij}, B_{ij}^k}$ subject to the conditions of part (c), if \[ T_i (e_j) = \sum_{k=1}^d B_{ij}^k e_k \] and \[ C(e_i \otimes e_j) = C_{ij} \ e_i \otimes e_j, \] then those equations are satisfied by $R_{\phi_{C, \set{T_i}}}$. So the states whose free cumulant generating functions satisfy the equations in part (c) are exactly the states in part (e). \medskip\noindent (b) $\Rightarrow $ (c). $\phi$ has a MOPS, so by Theorem~\ref{Thm:Monic-states}, $\phi = \phi_{\mc{C}, \set{\mc{T}_i}}$ for some $\set{\mc{C}^{(k)}, \mc{T}^{(k)}_i}$. Thus, $\phi$ is the joint distribution of the operators $(\mc{X}_1, \ldots, \mc{X}_d)$ on the Hilbert space $\mc{F}_{\mc{C}}(\mc{H})$, with \[ \mc{X}_i = a_i^+ + \mc{T}_i + a_i^- \mc{C}. \] Note that since $\phi$ has means zero and identity covariance, $\mc{T}_i^{(0)} = 0$ and $\mc{C}^{(1)} = I$. 
Using notation from Section~\ref{Subsubsec:General-Fock}, and denoting \[ (DR)_{\vec{u}}(\mb{z}) = D_{u(1)} R (\mb{z}) \ldots D_{u(\abs{\vec{u}})} R (\mb{z}) \] and \[ e_{\vec{u}} = e_{u(1)} \otimes \ldots \otimes e_{u(\abs{\vec{u}})}, \] we see that \[ \begin{split} & \Bigl(1 - \mb{X} \cdot \mb{z} + R(\mb{z}) \Bigr) \Bigl( \Omega + \sum_{\vec{u}} (DR)_{\vec{u}}(\mb{z}) e_{\vec{u}} \Bigr) \\ &\quad = \Omega - \sum_{i=1}^d z_i e_i + R(\mb{z}) \Omega + \sum_{\vec{u}} (DR)_{\vec{u}}(\mb{z}) e_{\vec{u}} + R(\mb{z}) \sum_{\vec{u}} (DR)_{\vec{u}}(\mb{z}) e_{\vec{u}} \\ &\qquad - \sum_{i=1}^d \sum_{\vec{u}} z_i (DR)_{\vec{u}}(\mb{z}) e_{(i,\vec{u})} - \sum_{i=1}^d \Bigl(z_i D_i R(\mb{z}) \Omega + \sum_{\vec{u}} z_i (D_i R)(\mb{z}) (DR)_{\vec{u}}(\mb{z}) e_{\vec{u}} \Bigr) \\ &\qquad - \sum_{i=1}^d \sum_{\vec{u}} z_i (DR)_{\vec{u}}(\mb{z}) \mc{T}_i (e_{\vec{u}}) - \sum_{i=1}^d \sum_{\vec{u}} z_i (DR)_{\vec{u}}(\mb{z}) a_i^- (\mc{C} - I) e_{\vec{u}}. \end{split} \] Since for any function $G$ with zero constant term, \begin{equation} \label{Integral} \sum_{i=1}^d z_i D_i G(\mb{z}) = G(\mb{z}), \end{equation} the preceding expression equals \[ \begin{split} & = \Omega - \sum_{i=1}^d z_i e_i + \sum_{\vec{u}} (DR)_{\vec{u}}(\mb{z}) e_{\vec{u}} - \sum_{i=1}^d \sum_{\vec{u}} z_i (DR)_{\vec{u}}(\mb{z}) e_{(i,\vec{u})} \\ &\quad - \sum_{i=1}^d \sum_{\vec{u}} z_i (DR)_{\vec{u}}(\mb{z}) \mc{T}_i (e_{\vec{u}}) - \sum_{i=1}^d \sum_{\vec{u}} z_i (DR)_{\vec{u}}(\mb{z}) a_i^- (\mc{C} - I) e_{\vec{u}}. 
\end{split} \] Using the expansions \eqref{Expansion-T} and \eqref{Expansion-C} from Theorem~\ref{Thm:Monic-states}, we now continue the equation as \[ \begin{split} & = \Omega - \sum_{i=1}^d z_i e_i + \sum_{\vec{u}} (DR)_{\vec{u}}(\mb{z}) e_{\vec{u}} - \sum_{i=1}^d \sum_{\vec{u}} z_i (DR)_{\vec{u}}(\mb{z}) e_{(i,\vec{u})} \\ &\quad - \sum_{i,j,k=1}^d z_i \Bigl(B_{i, k, j} D_k R(\mb{z}) e_j + \sum_{\vec{u}, \vec{w}} B_{i, (k, \vec{u}), (j, \vec{w})} D_k R(\mb{z}) (DR)_{\vec{u}}(\mb{z}) e_{(j,\vec{w})} \Bigr) \\ &\quad - \sum_{i,j=1}^d z_i \Bigl( (C_{(i,j)} - 1) D_i R(\mb{z}) D_j R(\mb{z}) e_j + \sum_{\vec{u}} (C_{(i,j, \vec{u})} - 1) D_i R(\mb{z}) D_j R(\mb{z}) (DR)_{\vec{u}}(\mb{z}) e_{(j, \vec{u})} \Bigr) \end{split} \] which can be re-organized as \[ \begin{split} & = \Omega + \sum_{j=1}^d \Bigl[D_j R(\mb{z}) - \sum_{i=1}^d z_i \Bigl(\delta_{ij} + \sum_{k=1}^d B_{i, k, j} D_k R(\mb{z}) + (C_{(i,j)} - 1) D_i R(\mb{z}) D_j R(\mb{z}) \Bigr) \Bigr] e_j \\ &\quad + \sum_{j=1}^d \sum_{\vec{u}} \Bigl[D_j R(\mb{z}) (DR)_{\vec{u}}(\mb{z}) - \sum_{i=1}^d z_i \Bigl(\delta_{ij} (DR)_{\vec{u}}(\mb{z}) \\ &\qquad + \sum_{k=1}^d \sum_{\vec{w}} B_{i, (k, \vec{u}), (j, \vec{w})} D_k R(\mb{z}) (DR)_{\vec{w}}(\mb{z}) + (C_{(i,j, \vec{u})} - 1) D_i R(\mb{z}) D_j R(\mb{z}) (DR)_{\vec{u}}(\mb{z})\Bigr)\Bigr] e_{(j, \vec{u})}. \end{split} \] Using equation~\eqref{Integral} again, this equals \begin{equation} \label{Intermediate} \begin{split} & = \Omega + \sum_{i,j=1}^d \Bigl[D_i D_j R(\mb{z}) - \Bigl(\delta_{ij} + \sum_{k=1}^d B_{i, k, j} D_k R(\mb{z}) + (C_{(i,j)} - 1) D_i R(\mb{z}) D_j R(\mb{z}) \Bigr) \Bigr] z_i e_j \\ &\quad + \sum_{i,j=1}^d \sum_{\vec{u}} \Bigl[D_i D_j R(\mb{z}) (DR)_{\vec{u}}(\mb{z}) - \Bigl(\delta_{ij} (DR)_{\vec{u}}(\mb{z}) \\ &\qquad + \sum_{k=1}^d \sum_{\vec{w}} B_{i, (k, \vec{u}), (j, \vec{w})} D_k R(\mb{z}) (DR)_{\vec{w}}(\mb{z}) + (C_{(i,j, \vec{u})} - 1) D_i R(\mb{z}) D_j R(\mb{z}) (DR)_{\vec{u}}(\mb{z})\Bigr)\Bigr] z_i e_{(j, \vec{u})}. 
\end{split} \end{equation} If the polynomials $\set{P_{\vec{u}}}$ with the generating function~\eqref{Generating} from part (b) are orthogonal, then \[ \sum_{\abs{\vec{u}} \geq 0} P_{\vec{u}}(\mb{x})(DR)_{\vec{u}}(\mb{z}) = \Bigl( 1 - \mb{x} \cdot \mb{z} + R(\mb{z}) \Bigr)^{-1}, \] and \begin{equation} \label{Polynomials-vectors} P_{\vec{u}} (\mb{X}) \Omega = e_{\vec{u}}, \end{equation} so that \begin{equation} \label{Generating-Omega} \Bigl(1 - \mb{X} \cdot \mb{z} + R(\mb{z}) \Bigr) \Bigl( \Omega + \sum_{\vec{u}} (DR)_{\vec{u}}(\mb{z}) e_{\vec{u}} \Bigr) = \Omega. \end{equation} Equating to zero the coefficient of $z_i e_j$ in equation~\eqref{Intermediate}, we get exactly equation~\eqref{PDE} from part (c), with $B_{ij}^k = B_{i,k,j}$ and $C_{ij} = C_{(i,j)} - 1$. The conditions on the coefficients follow from the general conditions in Theorem~\ref{Thm:Monic-states}. \medskip\noindent (e) $\Rightarrow$ (b). If $\phi = \phi_{C, \set{T_i}}$, it follows that in equation~\eqref{Intermediate}, $B_{i, (k, \vec{u}), (j, \vec{w})} = B_{ij}^k \delta_{\vec{u}, \vec{w}} $ and $C_{(i,j, \vec{u})} = 1 + C_{ij}$. Then that expression equals \[ \begin{split} = \Omega + \sum_{i,j=1}^d & \Bigl[D_i D_j R(\mb{z}) - \Bigl(\delta_{ij} + \sum_{k=1}^d B_{ij}^k D_k R(\mb{z}) + C_{ij} D_i R(\mb{z}) D_j R(\mb{z}) \Bigr) \Bigr] \\ & \times z_i \Bigl[e_j + \sum_{\vec{u}} (DR)_{\vec{u}}(\mb{z}) e_{(j, \vec{u})} \Bigr] = \Omega \end{split} \] since part (e) $\Rightarrow$ (c). So equation~\eqref{Generating-Omega} holds. As a result, for polynomials with the generating function~\eqref{Generating}, \[ \Bigl(1 + \sum_{\abs{\vec{u}} \geq 0} P_{\vec{u}}(\mb{X}) z_{\vec{u}} \Bigr) \Omega = \Omega + \sum_{\abs{\vec{u}} \geq 0} z_{\vec{u}} e_{\vec{u}}. \] Thus equation~\eqref{Polynomials-vectors} holds, and so the polynomials are orthogonal. 
\end{proof} \subsection{Nontrivial covariance and other extensions} \label{Subsec:Covariance} In this section we consider a number of constructions and examples that involve free Meixner states with non-trivial covariances. We still assume that they have zero means (for simplicity); if desired, the means $p_1, \ldots, p_d$ can easily be incorporated into the operator model by considering the operators $(X_1 + p_1, \ldots, X_d + p_d)$ instead, and the corresponding combinatorics will involve all non-crossing partitions $\NC(n)$ rather than the non-crossing partitions without singletons $\NC_0(n)$. \medskip\noindent On the other hand, Theorem~\ref{Thm:Monic-states} requires that for any state with MOPS, the covariance matrices have to be diagonal. But now we allow \begin{equation} \label{Covariance} \psi \left[ x_i^2 \right] = t_i. \end{equation} Note that degenerate variances $\psi \left[ x_i^2 \right] = 0$ are still not permitted. \subsubsection{Dilations} Let $\phi$ be a free Meixner state, and fix positive numbers $(t_1, t_2, \ldots, t_d)$. Let $\psi$ be the state defined by the $\mf{R}$-linear extension of \[ \psi \left[ P(x_1, \ldots, x_d) \right] = \state{P(t_1 x_1, \ldots, t_d x_d)}. \] Note that equation~\eqref{Covariance} holds. It is easy to see that if $\set{P_{\vec{u}}}$ is a MOPS for $\phi$, then \[ Q_{\vec{u}}(x_1, \ldots, x_d) = t_{\vec{u}} P_{\vec{u}}(x_1/t_1, \ldots, x_d/t_d) \] is a MOPS for $\psi$. \medskip\noindent We now briefly state how the results of Theorem~\ref{Thm:Meixner} get modified for $\psi$. The generating function for the MOPS still has the same ``resolvent'' form, and any state with MOPS and such a generating function arises as a dilation of a free Meixner state. The free cumulant functionals are related by \[ R_\psi \left[ P(x_1, \ldots, x_d) \right] = R_\phi \left[P(t_1 x_1, \ldots, t_d x_d) \right], \] which shows how to modify the differential equation satisfied by the free cumulant generating function. 
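The dilation property of the MOPS can be checked numerically in one dimension. The sketch below is illustrative only and not part of the argument (the discrete test measure and the Gram--Schmidt routine are ad hoc choices of ours): it verifies that if $\{P_n\}$ is the monic orthogonal polynomial system for a measure $\mu$, then $Q_n(x) = t^n P_n(x/t)$ is the one for the pushforward of $\mu$ under $x \mapsto t x$.

```python
# Illustrative sketch (not from the paper): one-dimensional dilation of a MOPS.
import numpy as np

def monic_ops(atoms, weights, deg):
    """Monic orthogonal polynomials up to degree `deg` for a discrete measure,
    computed by Gram-Schmidt on the monomials."""
    polys = [np.poly1d([1.0])]
    for n in range(1, deg + 1):
        p = np.poly1d([1.0] + [0.0] * n)   # the monic monomial x^n
        for q in polys:                    # subtract projections onto lower degrees
            num = np.sum(weights * p(atoms) * q(atoms))
            den = np.sum(weights * q(atoms) ** 2)
            p = p - (num / den) * q
        polys.append(p)
    return polys

atoms = np.array([-1.0, 0.0, 0.5, 2.0])    # an arbitrary 4-atom test measure
weights = np.array([0.2, 0.3, 0.4, 0.1])
t = 3.0
P = monic_ops(atoms, weights, 3)           # MOPS for mu
Q = monic_ops(t * atoms, weights, 3)       # MOPS for the dilated measure

for n in range(4):                         # Q_n(x) = t^n P_n(x/t) on the support
    assert np.allclose(Q[n](t * atoms), t**n * P[n](atoms)), n
```

Comparing the two polynomials on the four atoms suffices here, since polynomials of degree at most three that agree at four points coincide.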
Similarly, the MOPS satisfy the recursion relation \[ x_i Q_{(j, \vec{u})} = Q_{(i, j, \vec{u})} + \sum_{k=1}^d t_i B_{ij}^{k} Q_{(k, \vec{u})} + \delta_{ij} t_i^2 (1 + C_{i, u(1)}) Q_{\vec{u}}, \] where $\set{B_{ij}^k, C_{ij}}$ were the corresponding coefficients for $\phi$. Finally, suppose that $\phi = \phi_{C, \set{T_i}}$, represented as the joint distribution of $(X_1, X_2, \ldots, X_d)$. In the Hilbert space $\mc{H} = \mf{C}^d$ with an orthonormal basis $\set{e_i}$, let $f_i = t_i e_i$. On the Fock space $\mc{F}_C(\mc{H})$, let \[ a^+_{e_i} := a^+_i, \quad a_{e_i} := a_i^-, \quad T_{e_i} := T_i, \] and extend these definitions $\mf{C}$-linearly to $a^+_f$, $a_f$, $T_f$ for any $f \in \mc{H}$. Let \begin{equation} \label{Dilated-operator} X_{f_i} = a^+_{f_i} + T_{f_i} + a_{f_i}^- + a_{f_i}^- C = t_i X_i. \end{equation} Then $\psi$ is the joint distribution of $(X_{f_1}, X_{f_2}, \ldots, X_{f_d})$, \[ \psi \left[ P(\mb{x}) \right] = \ip{\Omega}{P(\mb{X_f}) \Omega}. \] \subsubsection{Free convolution semigroups} Let $\phi$ be a free Meixner state. For $t > 0$, define a linear functional $\phi^{\boxplus t}$ via its free cumulant functional using relation~\eqref{Cumulants-definition}: \[ R_{\phi^{\boxplus t}} \left[P(\mb{x}) \right] = t R_\phi[P(\mb{x})]. \] Note that $\phi^{\boxplus t} \left[ x_i^2 \right] = t$. The notation reflects the fact that \[ \phi^{\boxplus s} \boxplus \phi^{\boxplus t} = \phi^{\boxplus (s+t)}, \] where $\boxplus$ is the operation of (additive) free convolution; we will not use this property in the paper. Using the methods of Theorem~\ref{Thm:Cumulants}, it is easy to see that $\phi^{\boxplus t}$ is a state (and so positive) if and only if \[ t + \min_{i,j} C_{ij} \geq 0, \] in other words if $t (I \otimes I) + C \geq 0$. In particular, by assumption~\eqref{C-positive}, $\phi^{\boxplus t}$ is always a state for $t \geq 1$; this is typical behavior for free convolution, as indicated by Corollary 14.13 in \cite{Nica-Speicher-book}. 
$\phi^{\boxplus t}$ is a state for all $t > 0$ if and only if $C \geq 0$; in this case we say that $\phi$ is \emph{freely infinitely divisible}. \medskip\noindent Again, $\phi^{\boxplus t}$ has a MOPS, and the generating function for the MOPS still has the same ``resolvent'' form. The free cumulant generating function satisfies \[ D_i D_j R_{\phi^{\boxplus t}} = \delta_{ij} t + \sum_{k=1}^d B_{ij}^k D_k R_{\phi^{\boxplus t}} + (C_{ij}/t) D_i R_{\phi^{\boxplus t}} \ D_j R_{\phi^{\boxplus t}} \] and the MOPS satisfy the recursion \[ x_i P_{(j, \vec{u})} = P_{(i, j, \vec{u})} + \sum_{k=1}^d B_{ij}^{k} P_{(k, \vec{u})} + \delta_{ij} (t + C_{i, u(1)}) P_{\vec{u}}, \] where $\set{B_{ij}^k, C_{ij}}$ were the corresponding coefficients for $\phi$. Finally, suppose that $\phi = \phi_{C, \set{T_i}}$. On the algebraic Fock space $\Falg(\mc{H})$, define an inner product using the kernel \[ \begin{split} K_C^{(t)} & = \bigl(I^{\otimes (k-2)} \otimes (t I^{\otimes 2} + C)\bigr) \ldots \bigl(I \otimes (t I^{\otimes 2} + C) \otimes I^{\otimes (k-3)}\bigr) \bigl((t I^{\otimes 2} + C) \otimes I^{\otimes (k-2)}\bigr) t \\ & = t^k K_{C/t} \end{split} \] on $\mc{H}^{\otimes k}$, and denote the completion of $\Falg(\mc{H})$ with respect to this inner product by $\mc{F}_C^{(t)}(\mc{H})$. Let \begin{equation} \label{Xs} X^{(t)}_i = a^+_{i} + T_{i} + t a_{i}^- + \tilde{a}_i = a^+_{i} + T_{i} + t a_i^- (I + C/t). \end{equation} Then $\phi^{\boxplus t}$ is the joint distribution of $\left(X^{(t)}_1, \ldots, X^{(t)}_d \right)$. \medskip\noindent As constructed above, $\set{X^{(t)}_i}$ are represented on different Hilbert spaces for different $t$. We can combine this construction with an idea from Section 7.2 of~\cite{Sniady-SWN} to represent a whole family of functionals $\set{\phi^{\boxplus t} | 0 < t < 1}$ on a single space. 
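In one dimension, the defining relation $R_{\phi^{\boxplus t}} = t R_\phi$ multiplies every free cumulant by $t$, so a non-crossing partition with $k$ blocks contributes $t^k$ times its original weight to a moment. The following sketch is illustrative only (the brute-force enumeration of non-crossing partitions is an ad hoc choice of ours); it checks the one-dimensional moment-free cumulant formula $m_n = \sum_{\pi \in NC(n)} \prod_{V \in \pi} \kappa_{|V|}$ on the semicircular and free Poisson cumulant sequences.

```python
# Illustrative sketch (not from the paper): one-dimensional free moment-cumulant
# formula, m_n = sum over non-crossing partitions pi of prod over blocks V of kappa_{|V|}.
from math import prod
from itertools import combinations

def set_partitions(n):
    """All set partitions of {0, ..., n-1}, built by inserting one point at a time."""
    if n == 0:
        return [[]]
    out = []
    for p in set_partitions(n - 1):
        for i in range(len(p)):
            out.append([B + [n - 1] if j == i else B for j, B in enumerate(p)])
        out.append(p + [[n - 1]])
    return out

def is_noncrossing(p):
    """A partition crosses iff some a < b < c < d have a,c and b,d in different blocks."""
    blocks = {x: i for i, B in enumerate(p) for x in B}
    return not any(blocks[a] == blocks[c] != blocks[b] == blocks[d]
                   for a, b, c, d in combinations(sorted(blocks), 4))

def moment(n, kappa, t=1.0):
    """m_n when every free cumulant kappa_k is scaled by t, as for phi^(boxplus t)."""
    return sum(prod(t * kappa.get(len(B), 0.0) for B in p)
               for p in set_partitions(n) if is_noncrossing(p))

semicircular = {2: 1.0}                  # only the second free cumulant is nonzero
assert [moment(n, semicircular) for n in (2, 4, 6)] == [1.0, 2.0, 5.0]  # Catalan
assert moment(4, semicircular, t=3.0) == 2.0 * 3.0**2   # each pairing gains t^2
free_poisson = {k: 1.0 for k in range(1, 7)}            # all free cumulants equal 1
assert moment(4, free_poisson) == 14.0                  # |NC(4)| = 14
```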
\medskip\noindent A subset $S \subset \set{1, 2, \ldots, n-1}$ can be identified with an \emph{interval partition} $\pi(S) \in \Int(n)$: if $S = \set{i(1), i(2), \ldots, i(k)}$, then \[ \pi = \bigl( \set{1, \ldots, i(1)}, \set{i(1) + 1, \ldots, i(2)}, \ldots, \set{i(k) + 1, \ldots, n} \bigr). \] Consider the vector space $H = \mc{H} \otimes L^\infty([0,1], dx)$ as a subspace of the Hilbert space $\mc{H} \otimes L^2([0,1], dx)$, with the inner product \[ \ip{\eta \otimes f}{\zeta \otimes g} = \ip{\eta}{\zeta} \int_0^1 f(x) g(x) \,dx. \] On its algebraic Fock space $\Falg(H)$, define the inner product \begin{multline*} \ip{(\eta_1 \otimes f_1) \otimes \ldots \otimes (\eta_l \otimes f_l)}{(\zeta_1 \otimes g_1) \otimes \ldots \otimes (\zeta_n \otimes g_n)}_C \\ = \delta_{ln} \sum_{\substack{S \subset \set{1, \ldots, n-1} \\ \pi(S) = (V_1, V_2, \ldots, V_k)}} \ip{\eta_1 \otimes \ldots \otimes \eta_n}{C^{S^c} \left(\zeta_1 \otimes \ldots \otimes \zeta_n \right)} \prod_{j=1}^{k} \left( \int_{\mf{R}} \left[ \prod_{i \in V_j} f_i(x) g_i(x) \right] \,dx \right), \end{multline*} where $S^c$ is the complement $\set{1, \ldots, n-1} \backslash S$, and \[ C^{S^c} = \prod_{i \in S^c} I^{\otimes (i-1)} \otimes C \otimes I^{\otimes (n-i-1)}. \] Complete with respect to this inner product, to get the Hilbert space $\mc{F}_C (H)$. 
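The identification of subsets of $\set{1, \ldots, n-1}$ with interval partitions used in this inner product is easy to make concrete. A small sketch (illustrative; the helper name is ours):

```python
# Illustrative sketch: the bijection between subsets S of {1,...,n-1} and
# interval partitions pi(S) of {1,...,n} used in the inner product above.
from itertools import combinations

def interval_partition(S, n):
    """Cut {1,...,n} after each element of S, producing consecutive blocks."""
    cuts = [0] + sorted(S) + [n]
    return [list(range(a + 1, b + 1)) for a, b in zip(cuts, cuts[1:])]

assert interval_partition({2, 3}, 5) == [[1, 2], [3], [4, 5]]

# The map is a bijection, so there are 2^(n-1) interval partitions of {1,...,n}:
n = 5
images = {tuple(map(tuple, interval_partition(S, n)))
          for k in range(n) for S in combinations(range(1, n), k)}
assert len(images) == 2 ** (n - 1)
```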
On this space, define operators \begin{align*} {a_i^+}^{(t)} \bigl((\eta_1 \otimes f_1) \otimes \ldots \otimes (\eta_n \otimes f_n)\bigr) & = (e_i \otimes \chf{[0,t)}) \otimes (\eta_1 \otimes f_1) \otimes \ldots \otimes (\eta_n \otimes f_n) \\ {a_i^-}^{(t)} \bigl((\eta_1 \otimes f_1) \otimes \ldots \otimes (\eta_n \otimes f_n)\bigr) & = \ip{e_i}{\eta_1} \Bigl( \int_0^t f_1(x) \,dx \Bigr) (\eta_2 \otimes f_2) \otimes \ldots \otimes (\eta_n \otimes f_n), \\ T_i^{(t)} \bigl((\eta_1 \otimes f_1) \otimes \ldots \otimes (\eta_n \otimes f_n)\bigr) & = (T_i \eta_1 \otimes f_1 \chf{[0,t)}) \otimes (\eta_2 \otimes f_2) \otimes \ldots \otimes (\eta_n \otimes f_n), \\ \tilde{a}_i^{(t)} \bigl((\eta_1 \otimes f_1) \otimes \ldots \otimes (\eta_n \otimes f_n)\bigr) & = \bigl((a_i^- C(\eta_1 \otimes \eta_2)) \otimes (f_1 \chf{[0,t)} f_2)\bigr) \otimes (\eta_3 \otimes f_3) \otimes \ldots \otimes (\eta_n \otimes f_n), \end{align*} where $\chf{[0,t)}$ is the indicator function of the interval $[0,t)$, and let \[ X_i^{(t)} = {a_i^+}^{(t)} + T_i^{(t)} + {a_i^-}^{(t)} + \tilde{a}_i^{(t)}. \] By combining Corollary~\ref{Cor:Self-adjoint} with (a slight modification of) Theorem~6 from \cite{Sniady-SWN}, it follows that each $X_i^{(t)}$ is self-adjoint on $\mc{F}_C(H)$. Note that if all $f_i = g_i = \chf{[0,t)}$, then \[ \begin{split} & \ip{(\eta_1 \otimes f_1) \otimes \ldots \otimes (\eta_n \otimes f_n)}{(\zeta_1 \otimes g_1) \otimes \ldots \otimes (\zeta_n \otimes g_n)}_C \\ &\qquad = \sum_{\substack{S \subset \set{1, \ldots, n-1} \\ \pi(S) = (V_1, V_2, \ldots, V_k)}} \ip{\eta_1 \otimes \ldots \otimes \eta_n}{C^{S^c} \left(\zeta_1 \otimes \ldots \otimes \zeta_n \right)} t^k \\ &\qquad = t^n \ip{\eta_1 \otimes \ldots \otimes \eta_n}{\zeta_1 \otimes \ldots \otimes \zeta_n}_{C/t}. 
\end{split} \] Moreover, each $X_i^{(t)}$ restricted to \[ \mc{F}_C(\mc{H} \otimes \Span{\chf{[0,t)}}) \cong \mc{F}_C^{(t)}(\mc{H}) \] is given by the equation~\eqref{Xs}, and so $\phi^{\boxplus t}$ is the joint distribution of $\left(X^{(t)}_1, \ldots, X^{(t)}_d \right)$. \subsubsection{Rotations} \label{Subsubsec:Rotations} Let $O = (O_{ij})$ be an orthogonal $d \times d$ matrix. Let \[ O^T \mb{x} = \left( \sum_{i=1}^d O_{i1} x_i, \ldots, \sum_{i=1}^d O_{id} x_i \right) \] and \begin{equation} \label{Change-of-variable} \phi^O \left[ P(\mb{x}) \right] = \state{P(O^T \mb{x})}. \end{equation} We call $\phi^O$ a rotation of $\phi$. $\phi^O$ is the joint distribution of $(X_{f_1}, \ldots, X_{f_d})$ from~\eqref{Dilated-operator}, where we take \[ f_j = O (e_j) = \sum_{i=1}^d O_{ij} e_i. \] $\phi^O$ need not have a MOPS, since the matrix $C$ need not be diagonal in the basis $\set{f_1, \ldots, f_d}$. In fact, it follows from Lemma~9 of \cite{AnsMulti-Sheffer} that $\phi^O$ has a MOPS for \emph{all} $O$ if and only if $C_{ij} = c$ for all $i, j$, and that in this case $\phi^O$ is also a free Meixner state. It is easy to see that more generally, if $S \subset \set{1, \ldots, d}$ and $C_{ij} = c$ for all $i, j \in S$, then $\phi^O$ is a free Meixner state whenever $O (e_k) = e_k$ for all $k \not \in S$. \subsubsection{Linear transformations} Finally, one can consider a general invertible change of variables \[ A^T \mb{x} = \left( \sum_{i=1}^d A_{i1} x_i, \ldots, \sum_{i=1}^d A_{id} x_i \right) \] and the corresponding state $\phi^A$ defined as in equation~\eqref{Change-of-variable}. $\phi^A$ is the joint distribution of operators from~\eqref{Dilated-operator}, where we take $f_j = A(e_j) = \sum_{i=1}^d A_{ij} e_i$. As an alternative to our definition, one can call free Meixner states all states obtained by a linear transformation of a free Meixner state with MOPS (compare with \cite{Pommeret-Test}). 
\section{Examples} \label{Section:Examples} \subsection{Free products} In preparation for the examples in this section, for the reader's convenience we explain a key notion from free probability. Again, see \cite{VDN,Nica-Speicher-book} for more details. \medskip\noindent Let $\phi_1, \ldots, \phi_d$ be one-dimensional states on $\mf{R}[x_1], \ldots, \mf{R}[x_d]$, respectively. There is a canonical way to define their \emph{free product state} $\phi$ on $\mf{R} \langle x_1, \ldots, x_d \rangle$. Combinatorially, a natural way to define $\phi$ is via its MOPS. Let $\set{P_n^{(i)}}$ be the MOPS for $\phi_i$. For a multi-index $\vec{u}$, decompose \[ x_{\vec{u}} = x_{v(1)}^{i(1)} x_{v(2)}^{i(2)} \ldots x_{v(k)}^{i(k)}, \] where consecutive indices satisfy $v(j) \neq v(j+1)$, although non-consecutive indices may coincide. Then the MOPS $\set{P_{\vec{u}}}$ for $\phi$ are defined by \[ P_{\vec{u}}(\mb{x}) = \prod_{j=1}^k P_{i(j)}^{(v(j))}(x_{v(j)}). \] For example, \[ P_{1,1,2,1,2}(\mb{x}) = P_2^{(1)}(x_1) P_1^{(2)}(x_2) P_1^{(1)}(x_1) P_1^{(2)}(x_2). \] Note that if one considers polynomials in commuting variables and assumes that \emph{all} $v(j)$ above are different, one gets the usual (Cartesian) product of measures. Also, $\phi$ is a free product state if and only if the elements $x_1, x_2, \ldots, x_d$ are freely independent with respect to $\phi$, in the sense of Voiculescu. This can be taken as the definition of free independence; note that for random variables independent in the usual probabilistic sense, their joint distribution is a product measure. Finally, the crucial property of free cumulant generating functions is their relation to free products: a state $\phi$ is a free product state of $\phi_1, \ldots, \phi_d$ if and only if the free cumulant generating function of $\phi$ decomposes as \[ R_\phi(\mb{z}) = \sum_{i=1}^d R_{\phi_i}(z_i). \] This is often stated as the ``mixed free cumulants are zero'' condition. 
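The decomposition of a multi-index into maximal runs of a single variable, which underlies the free product MOPS just defined, can be sketched as follows (an illustration only; the helper name is ours):

```python
# Illustrative sketch: decompose a multi-index u into maximal runs, giving the
# pairs (v(j), i(j)) with consecutive v(j) distinct, as in the free product MOPS.
from itertools import groupby

def run_decomposition(u):
    """Return the list of (variable v(j), power i(j)) for the multi-index u."""
    return [(v, len(list(g))) for v, g in groupby(u)]

# the example P_{1,1,2,1,2} = P_2^(1)(x_1) P_1^(2)(x_2) P_1^(1)(x_1) P_1^(2)(x_2):
assert run_decomposition((1, 1, 2, 1, 2)) == [(1, 2), (2, 1), (1, 1), (2, 1)]
```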
It is related to the familiar property that the Fourier transform of the joint distribution of independent random variables is the product of their individual Fourier transforms. \medskip\noindent It is easy to see that free product free Meixner states are exactly the free products of one-dimensional free Meixner states; see Remark~6 of \cite{AnsMulti-Sheffer}. These one-dimensional distributions are known explicitly; see that remark, Theorem 4 of \cite{AnsMeixner}, and Section 2.2 of \cite{Boz-Bryc}. With variance $t$, they are: the semicircular (free Gaussian) distributions $\frac{1}{2 \pi t} \sqrt{4 t - x^2} \,dx$, the Marchenko-Pastur (free Poisson) distributions $\frac{1}{2 \pi t} \frac{\sqrt{4t - (x-b)^2}}{1 + (b/t) x} \,dx + \text{ possibly one atom}$, and more generally \[ \frac{1}{2 \pi t} \frac{\sqrt{4 (t + c) - (x - b)^2}}{1 + (b/t) x + (c/t^2) x^2} \,dx + \text{ zero, one, or two atoms}, \] depending on the particular values of $b,c,t$. \subsection{Semicircular systems} \label{Subsec:Semicircular} Let $C = 0$ and all $T_i = 0$. Then \[ S_i = a_i^+ \] and \[ \Cum{x_i x_{\vec{u}} x_j} = \ip{e_i}{S_{\vec{u}} e_j} = 0 \] for $\abs{\vec{u}} \geq 1$. Thus in distribution, all $S_i \sim 0$. Only second-order free cumulants of $(X_1, X_2, \ldots, X_d)$ are non-zero, and $\phi$ is the distribution of a freely independent semicircular system, the free analog of the standard $d$-dimensional Gaussian distribution. \subsection{Free Poisson states} Let $C = 0$ and $T_i$ arbitrary. Then \[ S_i = a_i^+ + T_i \] and \[ \Cum{x_i x_{\vec{u}} x_j} = \ip{e_i}{S_{\vec{u}} e_j} = \ip{e_i}{T_{\vec{u}} e_j}. \] Thus in distribution, $(S_1, S_2, \ldots, S_d) \sim (T_1, T_2, \ldots, T_d)$. It is appropriate to say that in this case, the joint distribution $\phi$ of $(X_1, X_2, \ldots, X_d)$ is $d$-dimensional free Poisson. 
In \cite{AnsMulti-Sheffer} we showed that if $\phi$ is tracial, then $\phi$ is a rotation of a free product of one-dimensional free Poisson distributions. Whether or not $\phi$ is tracial, the vector $\Omega$ is cyclic and separating for the von Neumann algebra $W^\ast(X_1, X_2, \ldots, X_d)$. \subsection{Free product states} \label{Subsec:Free-products} For two vectors $f, g$, denote by $E_{f,g}$ the corresponding rank one operator, \[ E_{f,g}(h) = f \ip{g}{h}. \] For an orthonormal basis $\set{f_i}$, $E_{f_i, f_j}$ are the corresponding matrix units. For the standard basis $\set{e_i}$, we will denote these simply by $E_{ij}$. In particular, $E_{ii}$ is the orthogonal projection onto $e_i$. \medskip\noindent Let $C(e_i \otimes e_j) = c_i \delta_{ij} (e_i \otimes e_j)$, and let $T_i = b_i E_{ii}$. Then $S_i$ acts entirely on the subspace $\mc{F}_{c_i}(\Span{e_i})$, on which it equals \[ S_i = a_i^+ + b_i + c_i a_i^-. \] $a_i^+ + c_i a_i^-$ has the centered semicircular distribution with variance $c_i$ (note that on $\mc{F}_{C}(\mc{H})$, this operator is not symmetric, so its \emph{star}-distribution is different from the semicircular one). Therefore $S_i$ has the semicircular distribution with mean $b_i$ and variance $c_i$. Also, it follows that \[ \Cum{x_i x_{\vec{u}} x_j} = \ip{e_i}{S_{\vec{u}} e_j} = 0 \] unless \[ i = u(1) = \ldots = u(n) = j. \] In other words, all the mixed free cumulants of $(X_1, X_2, \ldots, X_d)$ are zero. This says precisely that their joint distribution $\phi$ is a free product of the distributions of each of $X_1, X_2, \ldots, X_d$. Each of these, in turn, is a one-dimensional free Meixner distribution, whose free cumulants are, up to a shift of index, the moments of the semicircular distribution with mean $b_i$ and variance $c_i$. \subsection{Exponentiated semicircular systems} Let $C(e_i \otimes e_j) = c_i (e_i \otimes e_j)$ and $T_i = b_i I$. 
Then \[ S_i = a_i^+ + c_i a_i^- + b_i I, \] again the distribution of $S_i$ is the semicircular distribution with mean $b_i$ and variance $c_i$, but now the operators $S_i$ themselves are freely independent with respect to the state $\phi$, so that their joint distribution is a free product. The joint distribution of $(X_1, X_2, \ldots, X_d)$ is \emph{not} a free product (typically, not even tracial); it was described in the last section of \cite{AnsMulti-Sheffer}. \medskip\noindent Note that the preceding two examples make sense for $-1 \leq c_i < 0$, except that one loses the interpretation of $S_i$ as having a semicircular distribution with variance $c_i$, and the resulting states are not freely infinitely divisible. \begin{Remark} The constructions in the previous two examples coincide in the one-dimensional case. That case, and in particular the corresponding free cumulants, were also considered in \cite{AnsMeixner} and described completely in \cite{Boz-Bryc}. Moreover, many one-dimensional free Meixner distributions arise as limits in the central and Poisson limit theorems for the $t$-transformed free convolution, in the sense of \cite{Boz-Wys}; that paper also contains a Fock space construction which coincides with the one-dimensional version of the one in Section~\ref{Subsec:Fock2}. \end{Remark} \subsection{Free multinomial states} \label{Subsec:Free-multinomial} It is well known that the Bernoulli distribution \[ (1-p) \delta_0 + p \delta_1 \] is a Meixner distribution. It was noted in \cite{AnsMeixner} that it is also a (one-dimensional) free Meixner distribution. Moreover, the binomial distributions, which are convolution powers \[ \bigl((1-p) \delta_0 + p \delta_1\bigr)^{\ast n} = \sum_{k=0}^n \binom{n}{k} (1-p)^{n-k} p^k \delta_k \] of the Bernoulli distribution, are all Meixner, and the free binomial distributions, which are free convolution powers $\bigl((1-p) \delta_0 + p \delta_1\bigr)^{\boxplus n}$ of the Bernoulli distribution are free Meixner. 
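For the Bernoulli distribution itself, the orthogonality claims are elementary to verify numerically. The sketch below is an illustration only (not part of the argument): $P_0 = 1$ and $P_1 = x - p$ are the monic orthogonal polynomials, and since $L^2\bigl((1-p)\delta_0 + p\delta_1\bigr)$ is two-dimensional, the candidate $P_2$ produced by the three-term recursion vanishes on the support $\{0, 1\}$, i.e. $P_2 = 0$ in $L^2$.

```python
# Illustrative check (not from the paper): monic orthogonal polynomials of the
# Bernoulli measure (1-p) delta_0 + p delta_1.
p = 0.3
mean = lambda f: (1 - p) * f(0) + p * f(1)        # integration against the measure

P1 = lambda x: x - p
assert abs(mean(P1)) < 1e-12                       # P_1 is orthogonal to P_0 = 1
assert abs(mean(lambda x: P1(x) ** 2) - p * (1 - p)) < 1e-12

# x P_1 = P_2 + (1-p) P_1 + p(1-p) P_0, and P_2 vanishes on the support {0, 1}:
P2 = lambda x: x * P1(x) - (1 - p) * P1(x) - p * (1 - p)
assert abs(P2(0)) < 1e-12 and abs(P2(1)) < 1e-12
```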
In fact, it was noted in \cite{Boz-Bryc} that $\bigl((1-p) \delta_0 + p \delta_1\bigr)^{\boxplus t}$ are free Meixner for all real $t \geq 1$. \medskip\noindent It is also well-known that the multinomial distributions are Meixner \cite{Pommeret-Test}. In particular, the basic multinomial distribution \begin{equation} \label{Multinomial} p_1 \delta_{e_1} + p_2 \delta_{e_2} + \ldots + p_d \delta_{e_d} \end{equation} on $\mf{R}^d$ has this property. We now show that it, and hence the free convolution semigroup it generates, also induces free Meixner states. In this example, it is natural to consider the state with non-trivial means and a non-diagonal covariance matrix; an actual free Meixner state can be obtained from it by an affine transformation as in Section~\ref{Subsec:Covariance}. \medskip\noindent In the Fock space construction of Section~\ref{Subsec:Fock2}, take $\dim \mc{H} = d-1$ rather than $d$, and \[ C(e_i \otimes e_j) = - e_i \otimes e_j \] for all $i, j$, so that $C_{ij} = -1$. In this case the induced inner product on the Fock space $\Falg(\mc{H})$ is degenerate, and the vector space factors through to simply $\mc{F}_C(\mc{H}) = \mf{C} \oplus \mc{H}$. \emph{In this example only}, choose (linearly dependent) vectors $\set{e_i | i = 1, 2, \ldots, d}$ in $\mc{H}$ that are not orthonormal, but instead satisfy \begin{align*} \ip{e_i}{e_i} & = p_i (1 - p_i), \\ \ip{e_i}{e_j} & = - p_i p_j, \qquad i \neq j, \end{align*} where \[ p_i > 0, \qquad i = 1, 2, \ldots, d, \qquad p_1 + p_2 + \ldots + p_{d} = 1. \] Since these numbers are the covariances of the centered version of the basic multinomial distribution \eqref{Multinomial}, the corresponding matrix is positive semi-definite and so the $\set{e_i}$ can be chosen in this fashion. \medskip\noindent Let \begin{align*} T_i(e_i) & = (1 - 2 p_i) e_i, \\ T_i(e_j) & = - p_i e_j - p_j e_i, \end{align*} and define \[ X_i = a_i^+ + T_i + a_i^- + a_i^- C \] as usual, except that $a_i^+ = 0$ on $\mc{H}$. 
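For $d = 2$ this construction can be written out as explicit $2 \times 2$ matrices. The sketch below is an illustration only and not part of the argument: the concrete realization $e_1 = s v$, $e_2 = -s v$, with $v$ a unit vector spanning the one-dimensional $\mc{H}$ and $s = \sqrt{p_1 p_2}$, is our choice, and one checks that the shifted operators $Y_i = X_i + p_i$ are mutually orthogonal projections summing to the identity, with moments given by the multinomial measure.

```python
# Illustrative sketch for d = 2 (not part of the paper's argument): matrices in
# the orthonormal basis (Omega, v) of C + H, with e_1 = s*v, e_2 = -s*v.
import numpy as np

p1, p2 = 0.3, 0.7
s = np.sqrt(p1 * p2)
Y1 = np.array([[p1, s], [s, p2]])       # Y_1 = X_1 + p_1 in the basis (Omega, v)
Y2 = np.array([[p2, -s], [-s, p1]])     # Y_2 = X_2 + p_2
Omega = np.array([1.0, 0.0])

for Y in (Y1, Y2):                      # symmetric idempotents
    assert np.allclose(Y @ Y, Y) and np.allclose(Y, Y.T)
assert np.allclose(Y1 @ Y2, 0)          # mutually orthogonal projections
assert np.allclose(Y1 + Y2, np.eye(2))  # summing to the identity
# moments <Omega, Y_{u(1)} ... Y_{u(n)} Omega> reproduce p1 delta_{e1} + p2 delta_{e2}:
assert np.isclose(Omega @ Y1 @ Y1 @ Y1 @ Omega, p1)
assert np.isclose(Omega @ Y1 @ Y2 @ Omega, 0.0)
```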
In other words, for $Y_i = X_i + p_i$, \begin{equation} \label{Multinomial-operators} \begin{split} Y_i \Omega & = e_i + p_i \Omega, \\ Y_i e_i & = (1 - p_i) [e_i + p_i \Omega], \\ Y_i e_j & = - p_j [e_i + p_i \Omega]. \end{split} \end{equation} \begin{Prop} $Y_i$ is an orthogonal projection of $\mf{C} \oplus \mc{H}$ onto $\Span{e_i + p_i \Omega}$. These projections are orthogonal among themselves and their sum is the identity operator. Their joint distribution with respect to the state $\phi_{C, \set{T_i}}$ from Definition~\ref{Defn:Fock-state} is the basic multinomial distribution \eqref{Multinomial}. In particular, $\state{Y_i} = p_i$. The free cumulant generating function of $\phi$ satisfies the differential equation \[ \begin{split} D_i D_j R & = (\delta_{ij} p_i - p_i p_j) + (\delta_{ij} - p_j) D_i R - p_i D_j R - D_i R \ D_j R \\ & = \delta_{ij} (D_i R + p_i) - (D_i R + p_i) (D_j R + p_j). \end{split} \] \end{Prop} \begin{proof} $Y_i$ is self-adjoint, its image is $\Span{e_i + p_i \Omega}$, and \[ Y_i(e_i + p_i \Omega) = (1 - p_i) [e_i + p_i \Omega] + p_i [e_i + p_i \Omega] = e_i + p_i \Omega, \] so it is an orthogonal projection onto its image. For $i \neq j$, \[ \ip{e_i + p_i \Omega}{e_j + p_j \Omega} = - p_i p_j + p_i p_j = 0, \] so these subspaces, and therefore projections onto them, are orthogonal. $\mf{C} \oplus \mc{H}$ has dimension $d$, therefore the sum $\sum_{i=1}^{d} Y_i$ is the identity. It also follows that \[ \state{Y_{\vec{u}}} = \begin{cases} p_i, & i = u(1) = u(2) = \ldots, \\ 0, & \text{ otherwise}. \end{cases} \] Therefore, the joint distribution of $(Y_1, Y_2, \ldots, Y_{d})$ with respect to $\phi$ is the basic multinomial distribution. The last part follows from the operator representation. \end{proof} \begin{Defn} \label{Defn:Free-multinomial} Free multinomial states are the states $\set{\phi^{\boxplus t} | t \geq 1}$, where $\phi$ is the basic multinomial distribution~\eqref{Multinomial}. 
Note that $\phi^{\boxplus n}$ is the joint distribution of the sum of $n$ $d$-tuples of orthogonal projections. \end{Defn} \begin{Remark} Since the Bernoulli distribution is both classical and free Meixner, one may conjecture that it is in some sense also $q$-Meixner (for $0 \leq q \leq 1$, with $q=1$ corresponding to the classical case and $q=0$ corresponding to the free case). The meaning of this term is not well-defined, but see for example Section 4.3 of \cite{AnsAppell}. Indeed, the recursion relation for its orthogonal polynomials is of the $q$-Meixner form \begin{align*} x P_0 & = P_1 + p, \\ x P_1 & = P_2 + (1-p) P_1 + p (1-p) P_0, \\ x P_n & = P_{n+1} + (1-p) [n]_q P_n + [n]_q p (1-p)(1 - [n-1]_q) P_{n-1} \end{align*} independently of $q$, as long as the degree of the polynomial is $n \leq 1$, which suffices since \[ L^2\bigl((1-p)\delta_0 + p \delta_1\bigr) \] is $2$-dimensional. One may also hope that its $q$-cumulant generating function would then satisfy the equation \[ D_q^2 R^{(q)} = D_q R^{(q)} - (D_q R^{(q)})^2, \] where $D_q$ is the $q$-derivative \[ D_q(f)(z) = \frac{f(z) - f(qz)}{(1-q) z}, \] and \[ R^{(q)} = \sum_{n=1}^\infty \frac{1}{[n]_q!} \alpha_n z^n. \] The corresponding recursion for its $q$-cumulants is \[ \alpha_{n+2} = \alpha_{n+1} - \sum_{i=0}^n \left[ \begin{matrix} n \\ i\end{matrix} \right]_q \alpha_{i+1} \alpha_{n-i+1} \] (compare with Remark 5.4 of \cite{Boz-Bryc}), with the initial condition $\alpha_1 = p$. Using Maple, it is easy to calculate the first $5$ $q$-cumulants. Unfortunately, the fifth $q$-cumulant of the Bernoulli distribution calculated in this fashion differs from its fifth $q$-cumulant in the sense of Section 6 of \cite{AnsQCum}. \end{Remark} \subsection{Tracial examples} If $\phi$ is a state on a non-commutative algebra $\mc{A}$, one says that $\phi$ is \emph{tracial}, or \emph{a trace}, if for any $x, y \in \mc{A}$, \[ \state{x y} = \state{y x}. 
\] Tracial states play a crucial role, for example, in the theory of von Neumann algebras. \begin{Lemma} Let $\phi = \phi_{C, \set{T_i}}$ be a free Meixner state, represented as the joint distribution of operators $(X_1, \ldots, X_d)$. Suppose that $\phi$ is tracial. Then for all $i, j$, \begin{equation} \label{Trace1} T_i e_j = T_j e_i \end{equation} and \begin{equation} \label{Commutator} T_i T_j - T_j T_i = C_{ji} E_{ij} - C_{ij} E_{ji}. \end{equation} \end{Lemma} \begin{proof} Since $\phi$ is tracial, for all $i, j, k$, \[ \state{X_i X_j X_k} = \ip{e_i}{T_j e_k} = \ip{e_j}{T_k e_i} = \ip{e_i}{T_k e_j}, \] so for all $j, k$, \[ T_j e_k = T_k e_j. \] Similarly, for all $i, j, k, l$, \[ \begin{split} \state{X_i X_j X_k X_l} & = \ip{e_i}{e_j} \ip{e_k}{e_l} + \ip{e_i}{e_l} \ip{e_j}{e_k} (1 + C_{kl}) + \ip{e_i}{T_j T_k e_l} \\ & = \ip{e_j}{e_k} \ip{e_l}{e_i} + \ip{e_j}{e_i} \ip{e_k}{e_l} (1 + C_{li}) + \ip{e_j}{T_k T_l e_i}, \end{split} \] so \[ \ip{e_i}{e_l} \ip{e_j}{e_k} C_{kl} + \ip{e_i}{T_j T_k e_l} = \ip{e_j}{e_i} \ip{e_k}{e_l} C_{li} + \ip{e_i}{T_l T_k e_j}. \] Using equation~\eqref{Trace1} and the orthonormality of $\set{e_i}$, \[ \ip{e_i}{e_l} \ip{e_j}{e_k} C_{jl} + \ip{e_i}{T_j T_l e_k} = \ip{e_j}{e_i} \ip{e_k}{e_l} C_{lj} + \ip{e_i}{T_l T_j e_k}, \] so \[ \ip{e_j}{e_k} C_{jl} e_l + T_j T_l e_k = \ip{e_k}{e_l} C_{lj} e_j + T_l T_j e_k \] and \[ T_j T_l - T_l T_j = C_{lj} E_{jl} - C_{jl} E_{lj}. \qedhere \] \end{proof} \begin{Ex} A general theorem of Voiculescu (Proposition~2.5.3 in \cite{VDN}) implies that the free product states from Example~\ref{Subsec:Free-products} are tracial. It is also easy to see that any rotation of a tracial state is tracial. \end{Ex} \begin{Lemma} \label{Lemma:Tracial-semigroup} If $\phi$ is tracial, then $\phi^{\boxplus t}$ is tracial for all $t$ for which it is defined. 
\end{Lemma} \begin{proof} It is easy to see directly from the defining equation~\eqref{Cumulants-definition} that a state $\phi$ is tracial if and only if its free cumulant generating functional $R_\phi$ is tracial. The combinatorial reason is that if a partition $\pi$ is non-crossing when points $\set{1, 2, \ldots, n}$ are placed on a line, it is also non-crossing when they are placed on a circle. The lemma follows from this fact and the defining relation for $\phi^{\boxplus t}$ \[ R_{\phi^{\boxplus t}} \left[ x_{\vec{u}} \right] = t R_{\phi} \left[ x_{\vec{u}} \right]. \qedhere \] \end{proof} \begin{Prop} All free multinomial states are tracial. \end{Prop} \begin{proof} Since the operators $\set{Y_i}$ defined in equation~\eqref{Multinomial-operators} commute, the basic multinomial distribution is tracial; in fact, it factors through to a state on commutative polynomials $\mf{R}[x_1, \ldots, x_d]$ corresponding to the basic multinomial measure~\eqref{Multinomial}. It follows from Lemma~\ref{Lemma:Tracial-semigroup} that all the other free multinomial states are tracial as well. \end{proof} \noindent We conclude the paper with three further results on when Meixner states are traces. The first two show that under one set of general assumptions, the only tracial Meixner states are the trivial ones, namely the rotations of free product states. They generalize Proposition~11 of \cite{AnsMulti-Sheffer}. The last one provides a way to construct a large class of tracial examples that do not come from free products. It generalizes the multinomial example above. \begin{Prop} \label{Prop:Free-products} Let $\phi = \phi_{C, \set{T_i}}$ be a tracial free Meixner state with $C$ diagonal as a $d \times d$ matrix, $C_{ij} = \delta_{ij} c_i$. Then $\phi$ is a rotation of a free product state. \end{Prop} \begin{proof} If $C_{ij} = \delta_{ij} c_i$, then equation~\eqref{Commutator} states that all $T_i, T_j$ commute. 
Combining this with equation~\eqref{Trace1}, we see that moreover, for some orthonormal basis $\set{f_1, f_2, \ldots, f_d}$, \begin{equation} \label{Diagonalized} T_j = \sum_{i=1}^d \alpha_i \ip{f_i}{e_j} E_{f_i, f_i}. \end{equation} From \[ C (T_j \otimes I) (e_k \otimes e_l) = (T_j \otimes I) C (e_k \otimes e_l) \] it follows that \[ \sum_{i=1}^d \alpha_i \ip{f_i}{e_j} \ip{f_i}{e_k} \ip{f_i}{e_l} c_l (e_l \otimes e_l) = \delta_{kl} \sum_{i,m=1}^d \alpha_i \ip{f_i}{e_j} \ip{f_i}{e_k} \ip{f_i}{e_m} c_l (e_m \otimes e_l). \] Thus for all $k \neq l$, \[ \sum_{i=1}^d \alpha_i \ip{f_i}{e_j} \ip{f_i}{e_k} \ip{f_i}{e_l} c_l = \ip{e_k}{\left( \sum_{i=1}^d \alpha_i \ip{f_i}{e_j} E_{f_i, f_i} \right) e_l} c_l = \ip{e_k}{T_j e_l} c_l = 0. \] It follows that whenever $c_l \neq 0$, $T_j e_l \in \Span{e_l}$, and one can take $f_l = e_l$. So if $S = \set{l | c_l \neq 0}$, then \[ \Span{e_l | l \in S} = \Span{f_l | l \in S} \] is an invariant subspace for all $T_j$. We can choose an orthogonal transformation $O$ so that $O (e_i) = f_i$, in other words \[ O (e_i) = \begin{cases} e_i & \text{ for } i \in S, \text{ that is } c_i \neq 0, \\ f_i & \text{ for } i \not \in S, \text{ that is } c_i = 0. \end{cases} \] Following the comments at the end of Section~\ref{Subsubsec:Rotations}, the state $\phi^O$, which is the joint distribution of $(X_{f_1}, \ldots, X_{f_d})$, is still a tracial free Meixner state. From equation~\eqref{Diagonalized}, \[ T_{f_j} = \alpha_j E_{f_j, f_j}. \] Finally, $C(\eta \otimes \zeta) = 0$ whenever one of $\eta, \zeta \in \Span{e_l | l \not \in S} = \Span{f_l | l \not \in S}$, so \[ C(f_i \otimes f_j) = \begin{cases} C(e_i \otimes e_j) = c_i \delta_{ij} & \text{ if } i, j \in S, \\ 0 & \text{ if one of } i, j \not \in S. \end{cases} \] Thus $C, \set{T_i}$ have the form in Example~\ref{Subsec:Free-products}, and so $\phi^O$ is a free product state. 
\end{proof} \begin{Cor} Let $\phi = \phi_{C, \set{T_i}}$ be a tracial free Meixner state and $T_i = 0$ for all $i$. Then $\phi$ is a free product state. \end{Cor} \begin{proof} If $T_i = 0$ for all $i$, then it follows from equation~\eqref{Commutator} that $C_{ij} = \delta_{ij} c_i$, and the preceding proposition applies. In this case, a rotation is unnecessary. \end{proof} \noindent Meixner states correspond to quadratic natural exponential families. The following states are free versions of \emph{simple} quadratic natural exponential families in the terminology of \cite{Casalis-Simple-quadratic}, where all such (classical) families were classified. \begin{Prop} Let $C$ be a constant matrix, $C_{ij} = c$ for all $i,j$. Then the necessary conditions \eqref{Trace1} and~\eqref{Commutator} for $\phi$ to be a tracial free Meixner state, namely that $\set{T_i}$ are symmetric matrices, $T_i e_j = T_j e_i$ and \[ (T_i T_j - T_j T_i) = c \bigl( E_{ij} - E_{ji} \bigr), \] are also sufficient. \end{Prop} \begin{proof} If $c=0$, the result follows from Proposition~\ref{Prop:Free-products}. So we will assume that $c \neq 0$. \medskip\noindent For each $n$, let \[ A(n) = \set{\pi \in \NC_0'(n) | 1 \stackrel{\pi}{\sim} 2}. \] For any partition $\sigma \in \NC_0'(n) \backslash A(n)$, define the partition $l(\sigma) \in A(n)$ as follows: if $1 \in B \in \sigma$ and $2 \in C \in \sigma$, let $l(\sigma)$ be the partition with the same classes as $\sigma$ except that $B \cup C$ is a class of $l(\sigma)$. Conversely, any such $\sigma$ can be obtained by starting with $\pi \in A(n)$ with the (unique) outer class $B$, choosing $i \in B$, $2 < i < n$ (if it exists), and taking $\sigma$ to have the same classes as $\pi$ except that \[ B \cap \left( \set{1} \cup \set{i + 1, \ldots, n} \right) \] and \[ B \cap \set{2, \ldots, i} \] are classes of $\sigma$. 
\medskip\noindent For $\pi \in \NC_0(n)$, define the partition $\rho(\pi) \in \NC_0(\set{2, 3, \ldots, n, \bar{1}})$ (where $\bar{1}$ is identified with $n+1$) by \begin{align*} & i \stackrel{\pi}{\sim} j \Leftrightarrow i \stackrel{\rho(\pi)}{\sim} j \text{ for } i, j \neq 1, \\ & 1 \stackrel{\pi}{\sim} j \Leftrightarrow \bar{1} \stackrel{\rho(\pi)}{\sim} j. \end{align*} Clearly \[ \rho(\NC_0'(n)) = \set{\pi \in \NC_0(\set{2, \ldots, n, \bar{1}}) | n \stackrel{\pi}{\sim} \bar{1}}, \] and \[ \rho(A(n)) = \set{\pi \in \NC_0(\set{2, \ldots, n, \bar{1}}) | 2 \stackrel{\pi}{\sim} n \stackrel{\pi}{\sim} \bar{1}}. \] In particular, $\rho(A(n)) \subset \NC_0'(\set{2, \ldots, n, \bar{1}})$. For any partition $\sigma \in \NC_0'(\set{2, \ldots, n, \bar{1}}) \backslash \rho(A(n))$, define the partition $r(\sigma) \in \rho(A(n))$ as follows: if $\bar{1} \in B \in \sigma$ and $n \in C \in \sigma$, let $r(\sigma)$ be the partition with the same classes as $\sigma$ except that $B \cup C$ is a class of $r(\sigma)$. Conversely, any such $\sigma$ can be obtained by starting with $\pi \in \rho(A(n))$ with the (unique) outer class $B$, choosing $i \in B$, $2 < i < n$ (if it exists), and taking $\sigma$ to have the same classes as $\pi$ except that \[ B \cap \left( \set{2, \ldots, i} \cup \set{\bar{1}} \right) \] and \[ B \cap \set{i+1, \ldots, n} \] are classes of $\sigma$. \medskip\noindent We will use the usual commutator notation $[T,T'] = T T' - T' T$. 
Using equation~\eqref{Trace1}, \[ \begin{split} & \ip{e_{u(1)}}{T_{u(2)} T_{u(3)} T_{u(4)} \ldots T_{u(n-2)} T_{u(n-1)} e_{u(n)}} \\ &\quad = \ip{e_{u(2)}}{T_{u(1)} T_{u(3)} T_{u(4)} \ldots T_{u(n-2)} T_{u(n-1)} e_{u(n)}} \\ &\quad = \ip{e_{u(2)}}{\bigl[T_{u(1)}, T_{u(3)}\bigr] T_{u(4)} \ldots T_{u(n-1)} e_{u(n)}} + \ldots \\ &\qquad + \ip{e_{u(2)}}{T_{u(3)} T_{u(4)} \ldots T_{u(n-2)} \bigl[T_{u(1)}, T_{u(n-1)}\bigr] e_{u(n)}} \\ &\qquad + \ip{e_{u(2)}}{T_{u(3)} T_{u(4)} \ldots T_{u(n-1)} T_{u(n)} e_{u(1)}} \end{split} \] Now using equation~\eqref{Commutator}, $[T_i, T_j] = c (E_{ij} - E_{ji})$, \[ \begin{split} & \ip{e_{u(1)}}{T_{u(2)} T_{u(3)} T_{u(4)} \ldots T_{u(n-2)} T_{u(n-1)} e_{u(n)}} \\ &\quad = c \ip{e_{u(2)}}{e_{u(1)}} \ip{e_{u(3)}}{T_{u(4)} \ldots T_{u(n-1)} e_{u(n)}} \\ &\qquad + c \ip{e_{u(2)}}{T_{u(3)} e_{u(1)}} \ip{e_{u(4)}}{T_{u(5)} \ldots T_{u(n-1)} e_{u(n)}} \\ &\qquad + c \ip{e_{u(2)}}{T_{u(3)} \ldots T_{u(n-2)} e_{u(1)}} \ip{e_{u(n-1)}}{e_{u(n)}} + \ldots \\ &\qquad + \ip{e_{u(2)}}{T_{u(3)} \ldots T_{u(n)} e_{u(1)}} \\ &\qquad - c \ip{e_{u(2)}}{e_{u(3)}} \ip{e_{u(1)}}{T_{u(4)} \ldots T_{u(n-1)} e_{u(n)}} \\ &\qquad - c \ip{e_{u(2)}}{T_{u(3)} e_{u(4)}} \ip{e_{u(1)}}{T_{u(5)} \ldots T_{u(n-1)} e_{u(n)}} - \ldots \\ &\qquad - c \ip{e_{u(2)}}{T_{u(3)} \ldots T_{u(n-2)} e_{u(n-1)}} \ip{e_{u(1)}}{e_{u(n)}} \end{split} \] The left-hand-side of the preceding equation is equal to $\Theta(\mb{\hat{1}_n}; \set{1, \ldots, n}, \vec{u})$, where $\mb{\hat{1}_n} \in \NC_0'(n)$ is the partition with a single class. 
Similarly, using Lemma~\ref{Lemma:Last}, the preceding equation itself states that \[ \begin{split} \Theta(\mb{\hat{1}_n}; \set{1, \ldots, n}, \vec{u}) & = \Theta(\rho(\mb{\hat{1}_n}); \set{2, \ldots, n, \bar{1}}, \rho(\vec{u})) \\ &\quad + \sum_{i=2}^{n-2} \Theta(\bigl(\set{2, \ldots, i, \bar{1}}, \set{i+1, \ldots, n} \bigr); \set{2, \ldots, n, \bar{1}}, \rho(\vec{u})) \\ &\quad - \sum_{i=3}^{n-1} \Theta(\bigl(\set{2, \ldots, i}, \set{1, i+1, \ldots, n}\bigr); \set{1, \ldots, n}, \vec{u}), \end{split} \] where for a multi-index $\vec{u} = (u(1), \ldots, u(n))$, we denote by $\rho(\vec{u}) = (u(2), \ldots, u(n), u(1))$ the multi-index on $\set{2, \ldots, n, \bar{1}}$. Using the descriptions of $l(\sigma)$, $r(\sigma)$ at the beginning of the proof, this in turn equals \begin{equation} \label{Single-class} \begin{split} & = \Theta(\rho(\mb{\hat{1}_n}); \set{2, \ldots, n, \bar{1}}, \rho(\vec{u})) \\ &\quad + \sum_{\tau: r(\tau) = \rho(\mb{\hat{1}_n})} \Theta(\tau; \set{2, \ldots, n, \bar{1}}, \rho(\vec{u})) \\ &\quad - \sum_{\sigma: l(\sigma) = \mb{\hat{1}_n}} \Theta(\sigma; \set{1, \ldots, n}, \vec{u}). 
\end{split} \end{equation} Using Lemma~\ref{Lemma:Last} again and equation~\eqref{Single-class} applied to the class of $\pi$ containing $1$, we conclude that for $\pi \in A(n)$, \[ \begin{split} c^{-(\abs{\pi} - 1)} \Theta(\pi; \set{1, \ldots, n}, \vec{u}) & = c^{-(\abs{\pi} - 1)} \Theta(\rho(\pi); \set{2, \ldots, n, \bar{1}}, \rho(\vec{u})) \\ &\quad + c \sum_{\tau: r(\tau) = \rho(\pi)} c^{-(\abs{\tau} - 1)} \Theta(\tau; \set{2, \ldots, n, \bar{1}}, \rho(\vec{u})) \\ &\quad - c \sum_{\sigma: l(\sigma) = \pi} c^{-(\abs{\sigma} - 1)} \Theta(\sigma; \set{1, \ldots, n}, \vec{u}), \end{split} \] or, since $\abs{\rho(\pi)} = \abs{\pi}$ and $\abs{\tau} = \abs{\sigma} = \abs{\pi} + 1$, \[ \begin{split} \Theta(\pi; \set{1, \ldots, n}, \vec{u}) & = \Theta(\rho(\pi); \set{2, \ldots, n, \bar{1}}, \rho(\vec{u})) \\ & + \sum_{\tau: r(\tau) = \rho(\pi)} \Theta(\tau; \set{2, \ldots, n, \bar{1}}, \rho(\vec{u})) \\ & - \sum_{\sigma: l(\sigma) = \pi} \Theta(\sigma; \set{1, \ldots, n}, \vec{u}). \end{split} \] Therefore \[ \begin{split} & \ip{e_{u(1)}}{S_{u(2)} \ldots S_{u(n-1)} e_{u(n)}} \\ &\quad = \sum_{\pi \in \NC_0'(n)} \Theta(\pi; \set{1, \ldots, n}, \vec{u}) \\ &\quad = \sum_{\pi \in A(n)} \Bigl( \Theta(\pi; \set{1, \ldots, n}, \vec{u}) + \sum_{\sigma: l(\sigma) = \pi} \Theta(\sigma; \set{1, \ldots, n}, \vec{u}) \Bigr) \\ &\quad = \sum_{\pi \in A(n)} \Bigl( \Theta(\rho(\pi); \set{2, \ldots, n, \bar{1}}, \rho(\vec{u})) + \sum_{\tau: r(\tau) = \rho(\pi)} \Theta(\tau; \set{2, \ldots, n, \bar{1}}, \rho(\vec{u})) \Bigr) \\ &\quad = \sum_{\tau \in \rho(\NC_0'(\set{2, \ldots, n, \bar{1}}))} \Theta(\tau; \set{2, \ldots, n, \bar{1}}, \rho(\vec{u})) \\ &\quad = \ip{e_{u(2)}}{S_{u(3)} \ldots S_{u(n)} e_{u(1)}}. \end{split} \] Using Theorem~\ref{Thm:Cumulants}, we conclude that the free cumulant functional $R_\phi$, and so $\phi$ itself, is tracial. 
\end{proof} \begin{Ex} For $d=2$, one can take \[ T_1 = \begin{pmatrix} c+1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad T_2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \] For $d=3$, one can take \[ T_1 = \begin{pmatrix} 0 & c & 0 \\ c & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad T_2 = \begin{pmatrix} c & 0 & 0 \\ 0 & c+1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad T_3 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}. \] For $d=4$, one can take \[ T_1 = \begin{pmatrix} c & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \; T_2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & c & 0 \\ 0 & c & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \; T_3 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & c & 0 & 0 \\ 0 & 0 & c+1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \; T_4 = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{pmatrix}. \] \end{Ex} \begin{Remark} If $C_{ij} = c$ for all $i, j$, the corresponding Fock space is an interacting Fock space in the sense of \cite{AccBozGaussianization}. If $C_{ij} = c$ and in addition all $T_i = 0$, the von Neumann algebras $W^\ast(X_1, \ldots, X_d)$ were described in \cite{Ricard-t-Gaussian}. \end{Remark} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Proof of the first part of Theorem \ref{TheThm}}\label{SectProof1} We let $(M,g)$ be a smooth compact Riemannian manifold of dimension $n \ge 3$, $p \ge 1$ an integer, and $A: M \to M_p^s({\mathbb R})$, $A = (A_{ij})$, be smooth and such that $A(x)$ is positive as a bilinear form for all $x \in M$. We know from Hebey and Vaugon \cite{HebVau1,HebVau2} that there exists $B > 0$ such that for any $u \in H_1^2(M)$, \begin{equation}\label{ProofEqt1} \left(\int_M\vert u\vert^{2^\star}dv_g\right)^{2/2^\star} \le K_n^2\int_M\vert\nabla u\vert^2dv_g + B\int_Mu^2dv_g\hskip.1cm , \end{equation} where $H_1^2(M)$ is the Sobolev space of functions in $L^2(M)$ with one derivative in $L^2$. Since $2/2^\star \le 1$, $(a + b)^{2/2^\star} \le a^{2/2^\star} + b^{2/2^\star}$ for $a, b \ge 0$, and it follows from (\ref{ProofEqt1}) that for any ${\mathcal U} \in H_{1,p}^2(M)$, \begin{equation}\label{ProofEqt2} \left(\int_M\vert{\mathcal U}\vert^{2^\star}dv_g\right)^{2/2^\star} \le K_n^2 \int_M\vert\nabla{\mathcal U}\vert^2dv_g + B\int_M\vert{\mathcal U}\vert^2dv_g \hskip.1cm , \end{equation} where $H_{1,p}^2(M)$ is the space of $p$-maps ${\mathcal U}: M \to {\mathbb R}^p$, ${\mathcal U} = (u_1,\dots,u_p)$, which are such that the $u_i$'s are all in $H_1^2(M)$. Since we assumed that $A(x)$ is positive for all $x$ as a bilinear form, there exists $t > 0$ such that $\delta_{ij} \le tA_{ij}(x)$ for all $x$, in the sense of bilinear forms. Letting $\Lambda = Bt$, we get that for any ${\mathcal U} \in H_{1,p}^2(M)$, \begin{equation}\label{ProofEqt3} \left(\int_M\vert{\mathcal U}\vert^{2^\star}dv_g\right)^{2/2^\star} \le K_n^2 \int_M\vert\nabla{\mathcal U}\vert^2dv_g + \int_MA_\Lambda({\mathcal U},{\mathcal U})dv_g \hskip.1cm , \end{equation} where $A_\Lambda = \Lambda A$, and $A$ is regarded as a bilinear form. 
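For completeness, the passage from the scalar inequality (\ref{ProofEqt1}) to its vector-valued version (\ref{ProofEqt2}) can be sketched as follows (one possible argument among others). Writing $\vert{\mathcal U}\vert^2 = \sum_{i=1}^pu_i^2$, Minkowski's inequality in $L^{2^\star/2}(M)$ gives that \[ \left(\int_M\vert{\mathcal U}\vert^{2^\star}dv_g\right)^{2/2^\star} = \Bigl\Vert\sum_{i=1}^pu_i^2\Bigr\Vert_{L^{2^\star/2}} \le \sum_{i=1}^p\bigl\Vert u_i^2\bigr\Vert_{L^{2^\star/2}} = \sum_{i=1}^p\left(\int_M\vert u_i\vert^{2^\star}dv_g\right)^{2/2^\star}\hskip.1cm , \] and it then suffices to apply (\ref{ProofEqt1}) to each $u_i$ and to sum over $i$, noting that $\sum_{i=1}^p\int_M\vert\nabla u_i\vert^2dv_g = \int_M\vert\nabla{\mathcal U}\vert^2dv_g$ and $\sum_{i=1}^p\int_Mu_i^2dv_g = \int_M\vert{\mathcal U}\vert^2dv_g$.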
Conversely, taking ${\mathcal U}$ in (\ref{GenSobIneqIntro}) such that the components $u_j$ of ${\mathcal U}$ are all zero if $j \not= i$, and $u_i = u$ is arbitrary, it clearly follows from an inequality like (\ref{GenSobIneqIntro}) that for any $i$, and any $u \in H_1^2(M)$, \begin{equation}\label{ProofEqt4} \left(\int_M\vert u\vert^{2^\star}dv_g\right)^{2/2^\star} \le K\int_M\vert\nabla u\vert^2dv_g + \Lambda\int_MA_{ii}u^2dv_g\hskip.1cm . \end{equation} In particular, see for instance Hebey \cite{HebCIMSBook}, we get from (\ref{ProofEqt4}) that we necessarily have that $K \ge K_n^2$. This inequality, together with (\ref{ProofEqt3}), gives that $K_s = K_n^2$. By (\ref{ProofEqt3}) we also have that (\ref{SharpIneq}) is true, and by the definition of $\Lambda_0(g)$, we can take $\Lambda = \Lambda_0(g)$ in (\ref{SharpIneq}). In particular, (\ref{SharpSatIneq}) is true. Like when passing from (\ref{GenSobIneqIntro}) to (\ref{ProofEqt4}), it follows from the sharp and saturated (\ref{SharpSatIneq}) that for any $i$, and any $u \in H_1^2(M)$, \begin{equation}\label{ProofEqt5} \left(\int_M\vert u\vert^{2^\star}dv_g\right)^{2/2^\star} \le K_n^2\int_M\vert\nabla u\vert^2dv_g + \Lambda_0(g)\int_MA_{ii}u^2dv_g\hskip.1cm . \end{equation} Taking $u = 1$ in (\ref{ProofEqt5}), we get that $\Lambda_0(g)\int_MA_{ii}dv_g \ge V_g^{2/2^\star}$ for all $i$, where $V_g$ is the volume of $M$ with respect to $g$. Since $A(x)$ is positive for all $x$, the $A_{ii}$'s are positive functions, and $\Lambda_0(g)$ has to be positive. By the developments in Aubin \cite{Aub}, we also get with (\ref{ProofEqt5}) that when $n \ge 4$, (\ref{2ndCstIneq}) has to be true for all $x \in M$, and all $i$. 
More precisely, given $\delta > 0$ small, $\varepsilon > 0$ small, and $x_0 \in M$, we let $u_{x_0}^\varepsilon$ be the function defined by $$u_{x_0}^\varepsilon = \left(\varepsilon + r^2\right)^{1-n/2} - \left(\varepsilon + \delta^2\right)^{1-n/2}$$ if $r \le \delta$, and $u_{x_0}^\varepsilon = 0$ if not, where $r = d_g(x_0,\cdot)$. Then, see Aubin \cite{Aub}, $$\frac{\int_M\vert\nabla u_{x_0}^\varepsilon\vert^2dv_g + \frac{\Lambda_0(g)}{K_n^2}\int_MA_{ii}(u_{x_0}^\varepsilon)^2dv_g} {\left(\int_M\vert u_{x_0}^\varepsilon\vert^{2^\star}dv_g\right)^{2/2^\star}} < \frac{1}{K_n^2}$$ if $n \ge 4$, $\varepsilon > 0$ is sufficiently small, and (\ref{2ndCstIneq}) is not satisfied at $x_0$. This proves (\ref{2ndCstIneq}). Now, in order to end the proof of Theorem \ref{TheThm}, it remains to prove the assertions in the theorem concerning extremal maps. This is the subject of Sections \ref{SectProof2} and \ref{SectProof4}. \section{Proof of the second part of Theorem \ref{TheThm}}\label{SectProof2} As in Section \ref{SectProof1}, we let $(M,g)$ be a smooth compact Riemannian manifold of dimension $n \ge 3$, $p \ge 1$ an integer, and $A: M \to M_p^s({\mathbb R})$, $A = (A_{ij})$, be smooth and such that $A(x)$ is positive as a bilinear form for all $x \in M$. We know from Section \ref{SectProof1} that $K_s = K_n^2$, and that (\ref{SharpIneq}) and (\ref{2ndCstIneq}) are true. It remains to prove that if the inequality in (\ref{2ndCstIneq}) is strict for all $i$ and all $x$, then the sharp and saturated inequality (\ref{SharpSatIneq}) possesses extremal maps, and the $L^{2^\star}$-normalized set of such extremal maps is precompact in the $C^{2,\theta}_p$-topology, where $0 < \theta < 1$. It also remains to prove that extremal maps, when they exist, are in general of indeterminate sign, but that they can be chosen weakly positive if $-A$ is cooperative, and strongly positive if $A$ is also globally irreducible. 
We claim that the existence of extremal maps when the inequality in (\ref{2ndCstIneq}) is strict, and compactness of extremal maps, follow from Lemma \ref{TheLem} below. In the sequel, $\Delta_g = -div_g\nabla$ is the Laplace-Beltrami operator with respect to $g$. \begin{lem}\label{TheLem} Let $(M,g)$ be a smooth compact Riemannian manifold of dimension $n \ge 4$, $p \ge 1$ an integer, and $A_0: M \to M_p^s({\mathbb R})$, $A_0 = (A^0_{ij})$, be smooth, such that $A_0(x)$ is positive as a bilinear form for all $x \in M$, and such that for any $i$ and any $x$, $A^0_{ii}(x) > \frac{n-2}{4(n-1)}S_g(x)$, where $S_g$ is the scalar curvature of $g$. Let $\left(A(\alpha)\right)_\alpha$, $\alpha\in{\mathbb N}$, be a sequence of smooth maps $A(\alpha): M \to M_p^s({\mathbb R})$ such that $A^\alpha_{ij} \to A^0_{ij}$ in $C^{0,\theta}(M)$ as $\alpha \to +\infty$, for all $i, j$, where the $A^\alpha_{ij}$'s are the components of $A(\alpha)$, and $0 < \theta < 1$. Let also $({\mathcal U}_\alpha)_\alpha$ be a sequence of $C^{2,\theta}$-solutions of the $p$-systems \begin{equation}\label{GenericEqtLem} \Delta_gu_\alpha^i + \sum_{j=1}^pA^\alpha_{ij}(x)u_\alpha^j = \lambda_\alpha\vert u_\alpha^i\vert^{2^\star-2}u_\alpha^i \end{equation} for all $i$ and all $\alpha$, such that $\int_M\vert{\mathcal U}_\alpha\vert^{2^\star}dv_g = 1$ and $0 < \lambda_\alpha \le K_n^{-2}$ for all $\alpha$, where the $u_\alpha^i$'s are the components of ${\mathcal U}_\alpha$. Then, up to a subsequence, ${\mathcal U}_\alpha \to {\mathcal U}^0$ in $C^{2,\theta}_p(M)$ as $\alpha \to +\infty$, where ${\mathcal U}^0$ is a nontrivial $p$-map in $C^{2,\theta}_p(M)$. \end{lem} The proof of Lemma \ref{TheLem} is postponed to Section \ref{SectProof3}. We prove here, in Section \ref{SectProof2}, that when the inequality in (\ref{2ndCstIneq}) is strict for all $i$, the existence of extremal maps, and compactness of extremal maps, follow from the lemma. 
The $A(\alpha)$'s in our context are either like $A(\alpha) = \Lambda_\alpha K_n^{-2} A$, where the $\Lambda_\alpha$'s are real numbers converging to $\Lambda_0(g)$, or like $A(\alpha) = \Lambda_0(g) K_n^{-2}A$ for all $\alpha$, where $\Lambda_0(g)$ and $A$ are as in Theorem \ref{TheThm}. Extensions of Lemma \ref{TheLem} to higher energies, in the case of conformally flat manifolds, are in Hebey \cite{Heb}. The manifold in Lemma \ref{TheLem} need not be conformally flat. Possible references on elliptic systems, not necessarily like (\ref{GenericEqtLem}), are Amster, De N\'apoli, and Mariani \cite{AmsNapMar}, De Figueiredo \cite{DeF}, De Figueiredo and Ding \cite{DeFDin}, De Figueiredo and Felmer \cite{DeFFel}, Hulshof, Mitidieri and Vandervorst \cite{HulMitVan}, Mitidieri and Sweers \cite{MitSwe}, and Sweers \cite{Swe}. \medskip We assume that Lemma \ref{TheLem} is true. Given $\Lambda > 0$, and ${\mathcal U} \in H_{1,p}^2(M)$, we define the energies $E_g^\Lambda({\mathcal U})$ and $\Phi_g({\mathcal U})$ by \begin{equation}\label{DefEner} E_g^\Lambda({\mathcal U}) = \int_M\vert\nabla{\mathcal U}\vert^2dv_g + \frac{\Lambda}{K_n^2}\int_MA({\mathcal U},{\mathcal U})dv_g \end{equation} and $\Phi_g({\mathcal U}) = \int_M\vert{\mathcal U}\vert^{2^\star}dv_g$. By definition of $\Lambda_0(g)$, \begin{equation}\label{Sec2Eqt1} \inf_{{\mathcal U} \in {\mathcal H}} E_g^\Lambda({\mathcal U}) < \frac{1}{K_n^2} \end{equation} when $\Lambda < \Lambda_0(g)$, where ${\mathcal H}$ is the set consisting of the ${\mathcal U} \in H_{1,p}^2(M)$ which are such that $\Phi_g({\mathcal U}) = 1$. Let $(\Lambda_\alpha)_\alpha$ be a sequence of positive real numbers such that $\Lambda_\alpha < \Lambda_0(g)$ for all $\alpha$, and $\Lambda_\alpha \to \Lambda_0(g)$ as $\alpha \to +\infty$. Let also $\lambda_\alpha$ be the infimum in (\ref{Sec2Eqt1}) when we let $\Lambda = \Lambda_\alpha$. Since $A > 0$ as a bilinear form, $\lambda_\alpha$ is positive for all $\alpha$. 
By the strict inequality in (\ref{Sec2Eqt1}), see Hebey \cite{Heb}, for any $\alpha$, there exists ${\mathcal U}_\alpha = (u_\alpha^1,\dots,u_\alpha^p)$ a minimizer for $\lambda_\alpha$. In particular, the ${\mathcal U}_\alpha$'s are solutions of the $p$-systems \begin{equation}\label{Sec2Eqt2} \Delta_gu_\alpha^i + \frac{\Lambda_\alpha}{K_n^2}\sum_{j=1}^pA_{ij}(x)u_\alpha^j = \lambda_\alpha\vert u_\alpha^i\vert^{2^\star-2}u_\alpha^i \end{equation} for all $i$, and such that $\Phi_g({\mathcal U}_\alpha) = 1$ and ${\mathcal U}_\alpha \in C^{2,\theta}_p(M)$ for all $\alpha$, where $0 < \theta < 1$. Up to a subsequence, we may assume that $\lambda_\alpha \to \lambda_0$ as $\alpha \to +\infty$. If the inequality in (\ref{2ndCstIneq}) is strict for all $i$, we can apply Lemma \ref{TheLem} with $A(\alpha) = \Lambda_\alpha K_n^{-2} A$, and $A_0 = \Lambda_0(g)K_n^{-2}A$. By Lemma \ref{TheLem} we then get that, up to a subsequence, the ${\mathcal U}_\alpha$'s converge in $C^{2,\theta}_p(M)$ to some ${\mathcal U}^0$. Then $\Phi_g({\mathcal U}^0) = 1$, and, by (\ref{Sec2Eqt2}), \begin{equation}\label{Sec2Eqt3} \Delta_gu^0_i + \frac{1}{K_n^2}\sum_{j=1}^pA^0_{ij}(x)u^0_j = \lambda_0 \vert u^0_i\vert^{2^\star-2}u^0_i \end{equation} for all $i$, where the $A^0_{ij}$'s are the components of the matrix $A_0(g) = \Lambda_0(g)A$. Since we have that $\lambda_\alpha < K_n^{-2}$ for all $\alpha$, we can write that $\lambda_0 \le K_n^{-2}$. On the other hand, multiplying (\ref{Sec2Eqt3}) by $u^0_i$, integrating over $M$, and summing over $i$, we get that $$E_g^{\Lambda_0(g)}({\mathcal U}^0) = \lambda_0 \hskip.1cm ,$$ where $E^\Lambda_g$ is given by (\ref{DefEner}). By the definition of $\Lambda_0(g)$, it follows that $\lambda_0 \ge K_n^{-2}$. In particular, $\lambda_0 = K_n^{-2}$, and ${\mathcal U}^0$ is a nontrivial extremal map for (\ref{SharpSatIneq}). 
This proves the above claim that if the inequality in (\ref{2ndCstIneq}) is strict for all $i$, then the existence of extremal maps follows from Lemma \ref{TheLem}. \medskip Concerning compactness, let ${\mathcal H}_0$ be the $L^{2^\star}$-normalized set of extremal maps for (\ref{SharpSatIneq}). Then ${\mathcal H}_0$ consists of the ${\mathcal U}^0 \in H_{1,p}^2(M)$ such that $\Phi_g({\mathcal U}^0) = 1$, and $$E_g^{\Lambda_0(g)}({\mathcal U}^0) = \inf_{\left\{\Phi_g({\mathcal U}) = 1\right\}} E_g^{\Lambda_0(g)}({\mathcal U}) = \frac{1}{K_n^2} \hskip.1cm ,$$ where $E^\Lambda_g$ is given by (\ref{DefEner}). In particular, the extremal maps ${\mathcal U}^0$ in ${\mathcal H}_0$ are solutions of the $p$-system \begin{equation}\label{Sec2Eqt4} \Delta_gu^0_i + \frac{1}{K_n^2}\sum_{j=1}^pA^0_{ij}(x)u^0_j = K_n^{-2} \vert u^0_i\vert^{2^\star-2}u^0_i \end{equation} for all $i$, and such that $\Phi_g({\mathcal U}^0) = 1$, where the $A^0_{ij}$'s are the components of the matrix $A_0(g) = \Lambda_0(g)A$, and the $u^0_i$'s are the components of ${\mathcal U}^0$. Such ${\mathcal U}^0$'s, see Hebey \cite{Heb}, are in $C^{2,\theta}_p(M)$, where $0 < \theta < 1$. If the inequality in (\ref{2ndCstIneq}) is strict for all $i$, we can apply Lemma \ref{TheLem} with $A(\alpha) = A_0 = \Lambda_0(g)K_n^{-2}A$. We get that any sequence in ${\mathcal H}_0$ possesses a converging subsequence in $C^{2,\theta}_p(M)$. In particular, ${\mathcal H}_0$ is precompact in the $C^{2,\theta}_p$-topology. This proves the above claim that if the inequality in (\ref{2ndCstIneq}) is strict for all $i$, then the compactness of the set of extremal maps in Theorem \ref{TheThm} follows from Lemma \ref{TheLem}. \medskip Now, in order to end this section, we discuss the assertions in Theorem \ref{TheThm} concerning the sign of extremal maps. 
Extremal maps for (\ref{SharpSatIneq}) are solutions of systems like \begin{equation}\label{Sec2Eqt5} \Delta_gu_i + \sum_{j=1}^pA^0_{ij}(x)u_j = \Lambda \vert u_i\vert^{2^\star-2}u_i \end{equation} for all $i$, where $A_0 = (A^0_{ij})$ is like $A_0 = tA$ for some $t > 0$, and $\Lambda = K_n^{-2}$ is positive. General remarks on weak solutions of (\ref{Sec2Eqt5}) are as follows. First we can note (see, for instance, Hebey \cite{Heb}) that weak solutions of such systems are in $C^{2,\theta}_p(M)$, $0 < \theta < 1$. Then, when $p \ge 2$, and no specific assumption is made on $A_0$, we can note that there are no maximum principles for such systems. For instance, see again Hebey \cite{Heb}, we can construct examples of $p$-systems like (\ref{Sec2Eqt5}), $p \ge 2$, such that the system possesses solutions with the property that the factors of the solutions are nonnegative, nonzero, but with zeros in $M$. Such a phenomenon does not occur when $p = 1$ since, when $p = 1$, the maximum principle can be applied and nonnegative solutions are either identically zero or everywhere positive. On the other hand, we recover the maximum principle for (\ref{Sec2Eqt5}) if we assume that $-A_0$ is cooperative. Indeed, when $-A_0$ is cooperative, nonnegative solutions of (\ref{Sec2Eqt5}) are such that $$\Delta_gu_i + A^0_{ii}u_i \ge \Lambda u_i^{2^\star-1}$$ for all $i$, and the classical maximum principle for functions can be applied so that either $u_i > 0$ everywhere in $M$, or $u_i \equiv 0$. In particular, in this case, nonnegative solutions of (\ref{Sec2Eqt5}) are weakly positive. Still when $-A_0$ is cooperative, if ${\mathcal U}$ is a weakly positive solution of the system, with zero factors, then $A_0$ can be factorized in blocks with respect to the zero and nonzero components of ${\mathcal U}$. 
More precisely, if we write ${\mathcal U} = (u_1,\dots,u_k,0,\dots,0)$ with $k < p$, and $u_i > 0$ for all $i$, then \begin{equation}\label{ExSecMatrix} A_0 = \left( \begin{matrix} S & 0\\ 0 & T \end{matrix} \right)\hskip.1cm , \end{equation} where $S: M \to M_k^s({\mathbb R})$, $T: M \to M_{p-k}^s({\mathbb R})$, and the $0$'s are null matrices of respective orders $k\times(p-k)$ and $(p-k)\times k$. This easily follows from the equations $\sum_{j=1}^kA^0_{ij}u_j = 0$ for all $i \ge k+1$, so that we necessarily have that $A^0_{ij} = 0$ for all $i \ge k+1$ and $j \le k$. In this case, the $p$-system (\ref{Sec2Eqt5}) splits into two independent systems -- a $k$-system where $A_0$ is replaced by $S$, and a $(p-k)$-system where $A_0$ is replaced by $T$. In particular, if $-A_0$ is cooperative and $A_0$ is globally irreducible, so that (\ref{ExSecMatrix}) cannot be true, then any weakly positive solution of the system is also strongly positive. \medskip Coming back to minimizers, and to Theorem \ref{TheThm}, the first assertion concerning the sign of extremal maps in Theorem \ref{TheThm} is that extremal maps might be of indeterminate sign when no specific assumption is made on $A$. Of course this has to be understood when $p \ge 2$ since, when $p = 1$, the maximum principle for functions can be applied. When $p = 1$, extremal functions are either positive or negative. We assume in what follows that $p = 2$, and let $A$, $A^\prime$ be the matrices \begin{equation}\label{ExPosMinSecMatrix} A = \left( \begin{matrix} \alpha & \beta\\ \beta & \gamma \end{matrix} \right)\hskip.2cm\hbox{and}\hskip.2cm A^\prime = \left( \begin{matrix} \alpha & -\beta\\ -\beta & \gamma \end{matrix} \right)\hskip.1cm , \end{equation} where $\alpha, \beta, \gamma$ are smooth functions in $M$, and $A(x)$ is positive for all $x$ as a bilinear form. For ${\mathcal U} = (u,v)$ in $H_{1,2}^2(M)$ we let ${\mathcal U}^\prime$ be given by ${\mathcal U}^\prime = (u,-v)$. 
We also let $\beta$ be nonnegative and nontrivial, namely $\beta \ge 0$ and $\beta \not\equiv 0$. Noting that $A({\mathcal U},{\mathcal U}) = A^\prime({\mathcal U}^\prime,{\mathcal U}^\prime)$, we easily get that if ${\mathcal U}_0 = (u_0,v_0)$ is an extremal map for the sharp and saturated inequality (\ref{SharpSatIneq}), then ${\mathcal U}_0^\prime$ is an extremal map for the modified problem we get by replacing $A$ by $A^\prime$, where $A$, $A^\prime$ are as in (\ref{ExPosMinSecMatrix}). Since ${\mathcal U}_0$ is an extremal map for (\ref{SharpSatIneq}), it is also a minimizer for $F = E/\Phi_g^{2/2^\star}$, where $\Phi_g({\mathcal U}) = \int_M\vert{\mathcal U}\vert^{2^\star}dv_g$, $E({\mathcal U}) = E_g^\Lambda({\mathcal U})$ is as in (\ref{DefEner}), and $\Lambda = \Lambda_0(g)$. In particular, $F({\mathcal U}_0) \le F({\mathcal U}_0^\prime)$, and it follows that \begin{equation}\label{SignEqt} \int_M\beta u_0v_0dv_g \le 0\hskip.1cm . \end{equation} Since $\beta \ge 0$, $-A^\prime$ is cooperative, and we can also write that $A^\prime(\hat{\mathcal U}_0,\hat{\mathcal U}_0) \le A^\prime({\mathcal U}_0^\prime,{\mathcal U}_0^\prime)$, where $\hat{\mathcal U}_0$ is given by $\hat{\mathcal U}_0 = (\vert u_0\vert,\vert v_0\vert)$. In particular, $\hat{\mathcal U}_0$ is also an extremal map for the modified problem we get by replacing $A$ by $A^\prime$. Since $\beta\not\equiv 0$, $A^\prime$ is globally irreducible, and it follows from the above discussion that $\vert u_0\vert$ and $\vert v_0\vert$ are positive functions. Then, by (\ref{SignEqt}), ${\mathcal U}_0$ is like ${\mathcal U}_0 = (u_0,-v_0)$ or ${\mathcal U}_0 = (-u_0,v_0)$ where $u_0$ and $v_0$ are positive functions. In particular, neither ${\mathcal U}_0$ nor $-{\mathcal U}_0$ is nonnegative. Clearly, this type of discussion extends to integers $p \ge 2$. 
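For the sake of clarity, the computation behind (\ref{SignEqt}) above is a direct expansion. Since $\Phi_g({\mathcal U}_0^\prime) = \Phi_g({\mathcal U}_0)$ and $\vert\nabla{\mathcal U}_0^\prime\vert = \vert\nabla{\mathcal U}_0\vert$, the minimality of ${\mathcal U}_0$ for $F$ gives that \[ 0 \le E({\mathcal U}_0^\prime) - E({\mathcal U}_0) = \frac{\Lambda_0(g)}{K_n^2}\int_M\Bigl(A({\mathcal U}_0^\prime,{\mathcal U}_0^\prime) - A({\mathcal U}_0,{\mathcal U}_0)\Bigr)dv_g = -\frac{4\Lambda_0(g)}{K_n^2}\int_M\beta u_0v_0dv_g\hskip.1cm , \] since $A({\mathcal U}_0,{\mathcal U}_0) = \alpha u_0^2 + 2\beta u_0v_0 + \gamma v_0^2$ while $A({\mathcal U}_0^\prime,{\mathcal U}_0^\prime) = \alpha u_0^2 - 2\beta u_0v_0 + \gamma v_0^2$, and (\ref{SignEqt}) follows.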
For instance, when $p = 3$, choosing $A$ such that $A_{12}, A_{23} \ge 0$ and $A_{13} \le 0$, we easily construct minimizers of the form ${\mathcal U}_0 = (u_0,-v_0,w_0)$ or ${\mathcal U}_0 = (-u_0,v_0,-w_0)$, where $u_0$, $v_0$, $w_0$ are positive functions. This proves the above claim that, when no specific assumption is made on $A$, extremal maps for (\ref{SharpSatIneq}) might be of indeterminate sign. On the contrary, if we assume that $-A$ is cooperative, then $A(\hat{\mathcal U},\hat{\mathcal U}) \le A({\mathcal U},{\mathcal U})$ for all ${\mathcal U} = (u_1,\dots,u_p)$, where $\hat{\mathcal U} = (\vert u_1\vert,\dots,\vert u_p\vert)$. In particular, if ${\mathcal U}_0$ is an extremal map for (\ref{SharpSatIneq}), then $\hat{\mathcal U}_0$ is also an extremal map for (\ref{SharpSatIneq}). By the above discussion for systems like (\ref{Sec2Eqt5}), $\hat{\mathcal U}_0$ has to be weakly positive since $-A$ is cooperative. It is even strongly positive if $A$ is also globally irreducible. In particular, extremal maps for (\ref{SharpSatIneq}) can be chosen weakly positive when $-A$ is cooperative, and even strongly positive when $A$ is also globally irreducible. This proves the assertions in Theorem \ref{TheThm} concerning the sign of extremal maps. Up to Lemma \ref{TheLem}, Theorem \ref{TheThm} is proved. \medskip When $-A$ is cooperative, and $A$ is globally irreducible, we can prove the stronger result that any extremal map ${\mathcal U}$ for (\ref{SharpSatIneq}) has to be such that either ${\mathcal U}$ or $-{\mathcal U}$ is strongly positive. In order to see this we first note that, according to the above proof, when $-A$ is cooperative, and $A$ is globally irreducible, the components of an extremal map for (\ref{SharpSatIneq}) are either positive or negative functions. By contradiction, up to permuting the indices, we write that ${\mathcal U} = (u_1,\dots,u_k,-u_{k+1},\dots,-u_p)$ is an extremal map for (\ref{SharpSatIneq}), where the $u_i$'s are positive functions.
We let ${\mathcal U}^\prime$ be given by ${\mathcal U}^\prime = (u_1,\dots,u_p)$. Writing that $E^\Lambda_g({\mathcal U}) \le E^\Lambda_g({\mathcal U}^\prime)$, where $E^\Lambda_g$ is as in (\ref{DefEner}) and $\Lambda = \Lambda_0(g)$, and noting that $A({\mathcal U}^\prime,{\mathcal U}^\prime) - A({\mathcal U},{\mathcal U}) = 4\sum_{i\in{\mathcal H}_k, j\in{\mathcal H}_{k+1}}A_{ij}u_iu_j$, we get that $$\int_M\sum_{i\in{\mathcal H}_k, j \in {\mathcal H}_{k+1}} A_{ij}u_iu_jdv_g \ge 0\hskip.1cm ,$$ where ${\mathcal H}_k = \left\{1,\dots,k\right\}$, and ${\mathcal H}_{k+1} = \left\{k+1,\dots,p\right\}$. The contradiction follows since $-A$ is cooperative and the $u_i$'s are positive functions, so that each term $A_{ij}u_iu_j$ in the integrand is nonpositive, and thus $A_{ij} \equiv 0$ for all $i \in {\mathcal H}_k$ and $j \in {\mathcal H}_{k+1}$, contradicting the global irreducibility of $A$. This proves that when $-A$ is cooperative, and $A$ is globally irreducible, extremal maps ${\mathcal U}$ for (\ref{SharpSatIneq}) are such that either ${\mathcal U}$ or $-{\mathcal U}$ is strongly positive. \section{Applications of Theorem \ref{TheThm}}\label{SectProof3} We discuss the two corollaries, or applications, of Theorem \ref{TheThm} we briefly mentioned in the introduction. The first application, stating that the sharp and saturated inequality (\ref{SharpSatIneq}) possesses extremal maps when $(M,g)$ has nonpositive scalar curvature and $n \ge 4$, is easy to get. Indeed, since $A$ in Theorem \ref{TheThm} is such that $A(x)$ is positive in the sense of bilinear forms for all $x$, we clearly have that $A_{ii}(x) > 0$ for all $x$ and all $i$. In particular, (\ref{2ndCstIneq}) is always true when $(M,g)$ has nonpositive scalar curvature. \medskip A less obvious result is the second application stating that if $n \ge 4$, $A$ does not depend on $x$, and $(M,g)$ has constant scalar curvature, then (\ref{SharpSatIneq}) possesses extremal maps. When $(M,g)$ is not conformally diffeomorphic to the unit sphere, the result easily follows from the developments in Aubin \cite{Aub} and Schoen \cite{Sch}. The energy estimates in Aubin \cite{Aub} and Schoen \cite{Sch} give that, in this case, the inequality in (\ref{2ndCstIneq}) has to be strict.
Then we can apply Theorem \ref{TheThm}. When $(M,g)$ is the unit sphere, or conformally diffeomorphic to the unit sphere, the only problem is when equality holds in (\ref{2ndCstIneq}) for at least one $i$. For such an $i$, we claim that we necessarily have that $A_{ij} = 0$ for all $j \not=i$. Assuming for the moment that the claim is true, we easily get that there exist extremal maps for (\ref{SharpSatIneq}). The sharp and saturated scalar Sobolev inequality on the unit sphere $(S^n,g_0)$ reads as $$\left(\int_{S^n}\vert u\vert^{2^\star}dv_{g_0}\right)^{2/2^\star} \le K_n^2\int_{S^n}\vert\nabla u\vert^2dv_{g_0} + \omega_n^{-2/n} \int_{S^n}u^2dv_{g_0}\hskip.1cm ,$$ where $\omega_n$ is the volume of the unit sphere. In particular, see for instance Hebey \cite{HebCIMSBook} for a reference in book form, there is a whole family of extremal functions for the inequality, including constant functions. Let $u_0$ be one of these functions. We choose $u_0$ such that $u_0$ is positive and $\Vert u_0\Vert_{2^\star} = 1$. When $(M,g)$ is the unit sphere, equality holds in (\ref{2ndCstIneq}) for one $i$, and $A_{ij} = 0$ for all $j \not=i$, the $p$-map ${\mathcal U} = (u_1,\dots,u_p)$, where $u_i = u_0$ and $u_j = 0$ for $j \not= i$, is clearly an extremal map for (\ref{SharpSatIneq}). In particular, (\ref{SharpSatIneq}) possesses an extremal map. It remains to prove the above claim that when $(M,g)$ is the unit sphere, and equality holds in (\ref{2ndCstIneq}) for one $i$, we necessarily have that $A_{ij} = 0$ for all $j\not= i$. In order to prove this, we proceed by contradiction. We assume that $(M,g)$ is the unit sphere, that equality holds in (\ref{2ndCstIneq}) for one $i$, and that there exists $j \not= i$ such that $A_{ij} \not= 0$.
We let ${\mathcal U}_\varepsilon = (u_\varepsilon^1,\dots,u_\varepsilon^p)$ be given by $u_\varepsilon^i = u_0$, where $u_0$ is as above, $u_\varepsilon^j = -\varepsilon A_{ij}$, and $u_\varepsilon^k = 0$ if $k \not= i, j$, where $\varepsilon > 0$ is small. Then, with the notations in Theorem \ref{TheThm}, \begin{eqnarray*} &&K_n^2\int_{S^n}\vert\nabla{\mathcal U}_\varepsilon\vert^2dv_{g_0} + \Lambda_0(g_0)\int_{S^n}A({\mathcal U}_\varepsilon,{\mathcal U}_\varepsilon)dv_{g_0}\\ &&= K_n^2\int_{S^n}\vert\nabla u_0\vert^2dv_{g_0} + \omega_n^{-2/n} \int_{S^n}u_0^2dv_{g_0}\\ &&\hskip.4cm - 2\Lambda_0(g_0)A_{ij}^2\varepsilon\int_{S^n}u_0dv_{g_0} + O\left(\varepsilon^2\right)\\ &&= 1 - 2\Lambda_0(g_0)A_{ij}^2\varepsilon\int_{S^n}u_0dv_{g_0} + O\left(\varepsilon^2\right) \end{eqnarray*} and since $\int_{S^n}\vert{\mathcal U}_\varepsilon\vert^{2^\star}dv_{g_0} \ge 1$, we get a contradiction with (\ref{SharpSatIneq}) by choosing $\varepsilon > 0$ sufficiently small. This proves the above claim that when $(M,g)$ is the unit sphere, and equality holds in (\ref{2ndCstIneq}) for one $i$, we cannot have that there exists $j \not= i$ such that $A_{ij} \not= 0$. This also ends the proof of the second application of Theorem \ref{TheThm} stating that if $n \ge 4$, $A$ does not depend on $x$, and $(M,g)$ has constant scalar curvature, then (\ref{SharpSatIneq}) possesses extremal maps. \section{Proof of Lemma \ref{TheLem}}\label{SectProof4} We prove Lemma \ref{TheLem} in this Section. We let $(M,g)$ be a smooth compact Riemannian manifold of dimension $n \ge 3$, $p \ge 1$ an integer, and $A_0: M \to M_p^s({\mathbb R})$, $A_0 = (A^0_{ij})$, be smooth and such that $A_0(x)$ is positive as a bilinear form for all $x \in M$.
We let $\left(A(\alpha)\right)_\alpha$, $\alpha\in{\mathbb N}$, be a sequence of smooth maps $A(\alpha): M \to M_p^s({\mathbb R})$ such that $A^\alpha_{ij} \to A^0_{ij}$ in $C^{0,\theta}(M)$ as $\alpha \to +\infty$, for all $i, j$, where the $A^\alpha_{ij}$'s are the components of $A(\alpha)$, and $0 < \theta < 1$. We let also $({\mathcal U}_\alpha)_\alpha$ be a sequence of $C^{2,\theta}$-solutions of the $p$-systems \begin{equation}\label{GenericEqtLemSec4} \Delta_gu_\alpha^i + \sum_{j=1}^pA^\alpha_{ij}(x)u_\alpha^j = \lambda_\alpha\vert u_\alpha^i\vert^{2^\star-2}u_\alpha^i \end{equation} for all $i$ and all $\alpha$, such that $\int_M\vert{\mathcal U}_\alpha\vert^{2^\star}dv_g = 1$ and $\lambda_\alpha \le K_n^{-2}$ for all $\alpha$, where the $u_\alpha^i$'s are the components of ${\mathcal U}_\alpha$. Since $A_0(x)$ is positive as a bilinear form for all $x$, and since $A^\alpha_{ij} \to A^0_{ij}$ in $C^{0,\theta}(M)$ as $\alpha \to +\infty$ for all $i$, $j$, there exists $K > 0$ such that $A^\alpha_{ij}(x) \ge K\delta_{ij}$ in the sense of bilinear forms, for all $x$ and all $\alpha$ sufficiently large. Multiplying (\ref{GenericEqtLemSec4}) by $u_\alpha^i$, integrating over $M$, and summing over $i$, we then get with the Sobolev inequality that there exists $\lambda > 0$ such that $\lambda_\alpha \ge \lambda$ for all $\alpha$ sufficiently large. Now we define $\tilde{\mathcal U}_\alpha$ by $\tilde{\mathcal U}_\alpha = (\tilde u_\alpha^1,\dots,\tilde u_\alpha^p)$, where \begin{equation}\label{Sec4ProofEqt1} \tilde u_\alpha^i = \lambda_\alpha^{\frac{n-2}{4}}u_\alpha^i \end{equation} for all $\alpha$ and all $i$. Then, for any $\alpha$, $\tilde{\mathcal U}_\alpha$ is a solution of the $p$-system \begin{equation}\label{Sec4ProofEqt2} \Delta_g\tilde u_\alpha^i + \sum_{j=1}^pA^\alpha_{ij}(x)\tilde u_\alpha^j = \vert\tilde u_\alpha^i\vert^{2^\star-2}\tilde u_\alpha^i \end{equation} for all $i$. 
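For the reader's convenience, let us detail how the lower bound $\lambda_\alpha \ge \lambda$ mentioned above can be obtained. Multiplying (\ref{GenericEqtLemSec4}) by $u_\alpha^i$, integrating over $M$, and summing over $i$, we get that $$\int_M\vert\nabla{\mathcal U}_\alpha\vert^2dv_g + \int_MA(\alpha)({\mathcal U}_\alpha,{\mathcal U}_\alpha)dv_g = \lambda_\alpha\sum_{i=1}^p\int_M\vert u_\alpha^i\vert^{2^\star}dv_g \le \lambda_\alpha\hskip.1cm ,$$ since $\sum_{i=1}^p\vert u_\alpha^i\vert^{2^\star} \le \vert{\mathcal U}_\alpha\vert^{2^\star}$ and $\int_M\vert{\mathcal U}_\alpha\vert^{2^\star}dv_g = 1$. Since $A^\alpha_{ij}(x) \ge K\delta_{ij}$ in the sense of bilinear forms for $\alpha$ large, the left hand side is bounded from below by $\min(1,K)\Vert{\mathcal U}_\alpha\Vert_{H_{1,p}^2}^2$, and the Sobolev inequality then provides a uniform positive lower bound $\lambda$ for the $\lambda_\alpha$'s.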
Moreover, since $\lambda_\alpha \le K_n^{-2}$, we also have that \begin{equation}\label{Sec4ProofEqt3} \int_M\vert\tilde{\mathcal U}_\alpha\vert^{2^\star}dv_g \le K_n^{-n} \end{equation} for all $\alpha$. Lemma \ref{TheLem} states that, up to a subsequence, ${\mathcal U}_\alpha \to {\mathcal U}^0$ in $C^{2,\theta}_p(M)$ as $\alpha \to +\infty$, where ${\mathcal U}^0$ is a nontrivial $p$-map in $C^{2,\theta}_p(M)$. By standard elliptic theory, and (\ref{GenericEqtLemSec4}), it suffices to prove that, up to a subsequence, the ${\mathcal U}_\alpha$'s are uniformly bounded in $L^\infty(M)$. Since $\lambda_\alpha \in [\lambda,K_n^{-2}]$ for $\alpha$ large, the ${\mathcal U}_\alpha$'s are uniformly bounded in $L^\infty(M)$ if and only if the $\tilde{\mathcal U}_\alpha$'s are uniformly bounded in $L^\infty(M)$. In particular, Lemma \ref{TheLem} reduces to proving that, up to a subsequence, there exists $C > 0$ such that \begin{equation}\label{MainResToProve} \vert\tilde{\mathcal U}_\alpha\vert \le C \end{equation} in $M$, for all $\alpha$, where $\vert\tilde{\mathcal U}_\alpha\vert = \sum_i\vert\tilde u_\alpha^i\vert$. We prove (\ref{MainResToProve}) in what follows. Up to a subsequence, by compactness of the embedding $H_1^2 \subset L^2$, we may assume that $\tilde{\mathcal U}_\alpha \to \tilde{\mathcal U}^0$ in $L^2$ for some $\tilde{\mathcal U}^0 \in H_{1,p}^2(M)$. In other words, we can assume that there are functions $\tilde u^0_i$ in $H_1^2(M)$ such that \begin{equation}\label{Sec4ProofEqt4} \tilde u_\alpha^i \to \tilde u^0_i\hskip.2cm\hbox{in}\hskip.1cm L^2(M) \end{equation} for all $i$, as $\alpha \to +\infty$. We may also assume that $\tilde u_\alpha^i \rightharpoonup \tilde u^0_i$ weakly in $H_1^2(M)$, that $\tilde u_\alpha^i \to \tilde u^0_i$ a.e. in $M$, and that $\vert\tilde u_\alpha^i\vert^{2^\star-2}\tilde u_\alpha^i \rightharpoonup \vert\tilde u^0_i\vert^{2^\star-2}\tilde u^0_i$ weakly in $L^{2^\star/(2^\star-1)}(M)$ for all $i$.
In particular, $\tilde{\mathcal U}^0$ is a solution of the limit equation \begin{equation}\label{Sec4ProofEqt5} \Delta_g\tilde u^0_i + \sum_{j=1}^pA^0_{ij}(x)\tilde u^0_j = \vert\tilde u^0_i\vert^{2^\star-2}\tilde u^0_i \end{equation} for all $i$. Then, see, for instance, Hebey \cite{Heb}, we can prove that $\tilde{\mathcal U}^0$ is in $C^{2,\theta}_p(M)$. For $(x_\alpha)_\alpha$ a converging sequence of points in $M$, and $(\mu_\alpha)_\alpha$ a sequence of positive real numbers converging to zero, we define a {\it $1$-bubble} as a sequence $(B_\alpha)_\alpha$ of functions in $M$ given by \begin{equation}\label{Def1BubbleSec4} B_\alpha(x) = \left(\frac{\mu_\alpha}{\mu_\alpha^2 + \frac{d_g(x_\alpha,x)^2}{n(n-2)}}\right)^{\frac{n-2}{2}}\hskip.1cm . \end{equation} The $x_\alpha$'s are referred to as the {\it centers} and the $\mu_\alpha$'s as the {\it weights} of the $1$-bubble $(B_\alpha)_\alpha$. We define a {\it $p$-bubble} as a sequence $({\mathcal B}_\alpha)_\alpha$ of $p$-maps such that, if we write that ${\mathcal B}_\alpha = (B_\alpha^1,\dots,B_\alpha^p)$, then $(B_\alpha^i)_\alpha$ is a $1$-bubble for exactly one $i$, and for $j \not= i$, $(B_\alpha^j)_\alpha$ is the trivial zero sequence. In other words, a $p$-bubble is a sequence of $p$-maps such that one of the components of the sequence is a $1$-bubble, and the other components are trivial zero sequences. One remark with respect to the definition (\ref{Def1BubbleSec4}) is that if $u: {\mathbb R}^n \to {\mathbb R}$ is given by \begin{equation}\label{PosSolCritEuclEqt} u(x) = \left(1 + \frac{\vert x\vert^2}{n(n-2)}\right)^{-\frac{n-2}{2}}\hskip.1cm , \end{equation} then $u$ is a positive solution of the critical Euclidean equation $\Delta u = u^{2^\star-1}$, where $\Delta = -\sum\partial^2/\partial x_i^2$. More precisely, $u$ is the only positive solution of the equation in ${\mathbb R}^n$ which is such that $u(0) = 1$ and $u$ is maximum at $0$.
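For the reader's convenience, the fact that $u$ in (\ref{PosSolCritEuclEqt}) solves $\Delta u = u^{2^\star-1}$ can be checked directly. Writing $u(x) = \left(1+br^2\right)^{-\frac{n-2}{2}}$, where $b = \frac{1}{n(n-2)}$ and $r = \vert x\vert$, the radial expression of the Laplacian gives $$\Delta u = -\left(u^{\prime\prime} + \frac{n-1}{r}u^\prime\right) = n(n-2)b\left(1+br^2\right)^{-\frac{n+2}{2}} = u^{\frac{n+2}{n-2}} = u^{2^\star-1}\hskip.1cm ,$$ the terms in $r^2\left(1+br^2\right)^{-\frac{n+2}{2}}$ cancelling precisely because the exponent of $u$ is $\frac{n-2}{2}$, while $n(n-2)b = 1$.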
All the other positive solutions of the equation $\Delta u = u^{2^\star-1}$ in ${\mathbb R}^n$, see Caffarelli, Gidas and Spruck \cite{CafGidSpr} and Obata \cite{Oba}, are then given by $$\tilde u(x) = \lambda^{(n-2)/2}u\left(\lambda(x-a)\right)\hskip.1cm ,$$ where $\lambda > 0$, and $a \in {\mathbb R}^n$. We prove (\ref{MainResToProve}), and thus Lemma \ref{TheLem}, in several steps. The first step in the proof is as follows. \begin{Step}\label{Step1ProofLemSec4} Let $\tilde{\mathcal U}_\alpha$ and $\tilde{\mathcal U}^0$ be given by (\ref{Sec4ProofEqt1}) and (\ref{Sec4ProofEqt4}). If $\tilde{\mathcal U}^0 \not\equiv 0$, then (\ref{MainResToProve}) is true. If, on the contrary, $\tilde{\mathcal U}^0 \equiv 0$, then there exists a $p$-bubble $({\mathcal B}_\alpha)_\alpha$ such that, up to a subsequence, \begin{equation}\label{Step1StatemSec4Eqt1} \tilde{\mathcal U}_\alpha = {\mathcal B}_\alpha + {\mathcal R}_\alpha \end{equation} for all $\alpha$, where ${\mathcal R}_\alpha \to 0$ strongly in $H_{1,p}^2(M)$ as $\alpha \to +\infty$. There also exists $C > 0$ such that \begin{equation}\label{Step1StatemSec4Eqt2} d_g(x_\alpha,x)^{\frac{n-2}{2}}\sum_{i=1}^p\vert\tilde u^i_\alpha(x)\vert \le C \end{equation} for all $\alpha$ and all $x \in M$, where the $x_\alpha$'s are the centers of the $1$-bubble from which the $p$-bubble $({\mathcal B}_\alpha)_\alpha$ is defined. In particular, the $\vert\tilde{\mathcal U}_\alpha\vert$'s are uniformly bounded in any compact subset of $M\backslash\{x_0\}$, and $\tilde u^i_\alpha \to 0$ in $C^0_{loc}(M\backslash\{x_0\})$ for all $i$ as $\alpha \to +\infty$, where $x_0$ is the limit of the $x_\alpha$'s. 
\end{Step} \begin{proof}[Proof of Step \ref{Step1ProofLemSec4}] By the $H_1^2$-theory for blow-up, see Hebey \cite{Heb}, there are generalized $p$-bubbles $(\hat{\mathcal B}_{j,\alpha})_\alpha$, $j = 1,\dots,k$, such that, up to a subsequence, \begin{equation}\label{ProofStep1Sec4Eqt1} \tilde{\mathcal U}_\alpha = \tilde{\mathcal U}^0 + \sum_{j=1}^k\hat{\mathcal B}_{j,\alpha} + {\mathcal R}_\alpha \end{equation} and such that \begin{equation}\label{ProofStep1Sec4Eqt2} \frac{1}{n}\int_M\vert\tilde{\mathcal U}_\alpha\vert^{2^\star}dv_g = \frac{1}{n}\int_M\vert\tilde{\mathcal U}^0\vert^{2^\star}dv_g + \sum_{j=1}^kE_f(\hat{\mathcal B}_{j,\alpha}) + o(1) \end{equation} for all $\alpha$, where ${\mathcal R}_\alpha \to 0$ strongly in $H_{1,p}^2(M)$ as $\alpha \to +\infty$, $E_f(\hat{\mathcal B}_{j,\alpha})$ is the energy of the generalized $p$-bubble $(\hat {\mathcal B}_{j,\alpha})_\alpha$, and $o(1) \to 0$ as $\alpha \to +\infty$. Generalized $p$-bubbles are rescalings of solutions of the critical equation $\Delta u = \vert u\vert^{2^\star-2}u$, and the energy of the generalized $p$-bubble is the energy of $u$. In particular, the energy $E_f(\hat{\mathcal B}_{j,\alpha})$ does not depend on $\alpha$. It is always greater than or equal to $K_n^{-n}/n$, and if equality holds, then, up to lower order terms, the generalized $p$-bubble has to be a $p$-bubble. Namely, we always have that $E_f(\hat{\mathcal B}_{j,\alpha}) \ge K_n^{-n}/n$, and if equality holds, then $$\hat{\mathcal B}_{j,\alpha} = {\mathcal B}_{j,\alpha} + {\mathcal R}_\alpha\hskip.1cm ,$$ where $({\mathcal B}_{j,\alpha})_\alpha$ is a $p$-bubble, as defined above, and ${\mathcal R}_\alpha \to 0$ strongly in $H_{1,p}^2(M)$ as $\alpha \to +\infty$. By (\ref{Sec4ProofEqt3}), it follows from (\ref{ProofStep1Sec4Eqt2}) that if $\tilde{\mathcal U}^0 \not\equiv 0$, then $k = 0$, and that if $\tilde{\mathcal U}^0 \equiv 0$, then $k = 0$ or $k = 1$.
When $\tilde{\mathcal U}^0 \equiv 0$, and $k = 0$, we get with (\ref{ProofStep1Sec4Eqt1}) that $\tilde{\mathcal U}_\alpha\to 0$ strongly in $H_{1,p}^2(M)$ as $\alpha \to +\infty$. This is impossible since, by construction of the $\tilde{\mathcal U}_\alpha$'s, we also have that there is a uniform positive lower bound for the left hand side in (\ref{ProofStep1Sec4Eqt2}). In particular, $k = 1$ when $\tilde{\mathcal U}^0 \equiv 0$. When $\tilde{\mathcal U}^0 \equiv 0$, and $k = 1$, we also get from (\ref{Sec4ProofEqt3}), (\ref{ProofStep1Sec4Eqt2}), and the above discussion, that the generalized $p$-bubble in (\ref{ProofStep1Sec4Eqt1}) has to be a $p$-bubble, and that $\lambda_\alpha \to K_n^{-2}$ as $\alpha \to +\infty$. Summarizing, we get with the $H_1^2$-theory for blow-up that if $\tilde{\mathcal U}^0 \not\equiv 0$, then, up to a subsequence, \begin{equation}\label{ProofStep1Sec4Eqt3} \tilde{\mathcal U}_\alpha= \tilde{\mathcal U}^0 + {\mathcal R}_\alpha \end{equation} for all $\alpha$, and that if $\tilde{\mathcal U}^0 \equiv 0$, then there exists a $p$-bubble $({\mathcal B}_\alpha)_\alpha$ such that, up to a subsequence, \begin{equation}\label{ProofStep1Sec4Eqt4} \tilde{\mathcal U}_\alpha = {\mathcal B}_\alpha + {\mathcal R}_\alpha \end{equation} for all $\alpha$, where, in (\ref{ProofStep1Sec4Eqt3}) and (\ref{ProofStep1Sec4Eqt4}), ${\mathcal R}_\alpha \to 0$ strongly in $H_{1,p}^2(M)$ as $\alpha \to +\infty$. We let the $x_\alpha$'s and $\mu_\alpha$'s be the centers and weights of the $1$-bubble from which the $p$-bubble $({\mathcal B}_\alpha)_\alpha$ in (\ref{ProofStep1Sec4Eqt4}) is defined. We claim that \begin{equation}\label{ProofStep1Sec4Eqt5} \hbox{(\ref{MainResToProve}) is true if}\hskip.1cm \tilde{\mathcal U}^0 \not\equiv 0 \hskip.1cm ,\hskip.1cm\hbox{while (\ref{Step1StatemSec4Eqt2}) is true if}\hskip.1cm \tilde{\mathcal U}^0 \equiv 0 \hskip.1cm . 
\end{equation} In order to prove (\ref{ProofStep1Sec4Eqt5}), we let $\Phi_\alpha$ be the function given by $\Phi_\alpha(x) = 1$ if $\tilde{\mathcal U}^0 \not\equiv 0$, and $\Phi_\alpha(x) = d_g(x_\alpha,x)$ if $\tilde{\mathcal U}^0 \equiv 0$. We let also $\Psi_\alpha$ be the function given by \begin{equation}\label{PsiDefProofThPointEst} \Psi_\alpha(x) = \Phi_\alpha(x)^{\frac{n-2}{2}} \sum_{i=1}^p\vert\tilde u_\alpha^i(x)\vert \hskip.1cm . \end{equation} Then (\ref{ProofStep1Sec4Eqt5}) is equivalent to the statement that the $\Psi_\alpha$'s are uniformly bounded in $L^\infty(M)$. Now we proceed by contradiction. We let the $y_\alpha$'s be points in $M$ such that the $\Psi_\alpha$'s are maximum at $y_\alpha$ and $\Psi_\alpha(y_\alpha) \to +\infty$ as $\alpha \to +\infty$. Up to a subsequence, we may assume that $\vert\tilde u_\alpha^{i_0}\vert(y_\alpha) \ge \vert\tilde u_\alpha^i\vert(y_\alpha)$ for some $i_0=1,\dots,p$, and all $i$. We set $\tilde\mu_\alpha = \vert\tilde u_\alpha^{i_0}\vert(y_\alpha)^{-2/(n-2)}$. Then $\tilde\mu_\alpha \to 0$ as $\alpha \to +\infty$, and by (\ref{PsiDefProofThPointEst}) we also have that \begin{equation}\label{PsiDefProofThPointEstEqt1} \frac{d_g(x_\alpha,y_\alpha)}{\tilde\mu_\alpha} \to +\infty \end{equation} if $\tilde{\mathcal U}^0 \equiv 0$, as $\alpha \to +\infty$. Let $\delta > 0$ be less than the injectivity radius of $(M,g)$. For $i = 1,\dots,p$, we define the function $\tilde v_\alpha^i$ in $B_0(\delta\tilde\mu_\alpha^{-1})$ by \begin{equation}\label{PsiDefProofThPointEstEqt2} \tilde v_\alpha^i(x) = \tilde\mu_\alpha^{\frac{n-2}{2}}\tilde u_\alpha^i\left(\exp_{y_\alpha}(\tilde\mu_\alpha x)\right) \hskip.1cm , \end{equation} where $B_0(\delta\tilde\mu_\alpha^{-1})$ is the Euclidean ball of radius $\delta\tilde\mu_\alpha^{-1}$ centered at $0$, and $\exp_{y_\alpha}$ is the exponential map at $y_\alpha$.
Given $R > 0$ and $x \in B_0(R)$, the Euclidean ball of radius $R$ centered at $0$, we can write with (\ref{PsiDefProofThPointEst}) and (\ref{PsiDefProofThPointEstEqt2}) that \begin{equation}\label{PsiDefProofThPointEstEqt3} \vert\tilde v_\alpha^i\vert(x) \le \frac{\tilde\mu_\alpha^{\frac{n-2}{2}}\Psi_\alpha\left(\exp_{y_\alpha}(\tilde\mu_\alpha x)\right)} {\Phi_\alpha\left(\exp_{y_\alpha}(\tilde\mu_\alpha x)\right)^{\frac{n-2}{2}}} \end{equation} for all $i$, when $\alpha$ is sufficiently large. For any $x \in B_0(R)$, when $\tilde{\mathcal U}^0 \equiv 0$, \begin{eqnarray*} d_g\left(x_\alpha,\exp_{y_\alpha}(\tilde \mu_\alpha x)\right) & \ge & d_g\left(x_\alpha,y_\alpha\right) - R\tilde\mu_\alpha\\ & \ge & \left(1 - \frac{R\tilde\mu_\alpha}{\Phi_\alpha(y_\alpha)}\right)\Phi_\alpha(y_\alpha) \end{eqnarray*} when $\alpha$ is sufficiently large so that, by (\ref{PsiDefProofThPointEstEqt1}), the right hand side of the last equation is positive. Coming back to (\ref{PsiDefProofThPointEstEqt3}), thanks to the definition of the $y_\alpha$'s, we then get that for any $i$, and any $x \in B_0(R)$, \begin{equation}\label{PsiDefProofThPointEstEqt4} \begin{split} \vert\tilde v_\alpha^i\vert(x) & \le \frac{\tilde\mu_\alpha^{\frac{n-2}{2}}\Psi_\alpha(y_\alpha)} {\Phi_\alpha\left(\exp_{y_\alpha}(\tilde\mu_\alpha x)\right)^{\frac{n-2}{2}}}\\ & \le p \left(1 - \frac{R\tilde\mu_\alpha}{\Phi_\alpha(y_\alpha)}\right)^{-\frac{n-2}{2}} \end{split} \end{equation} when $\alpha$ is sufficiently large. In particular, by (\ref{PsiDefProofThPointEstEqt1}) and (\ref{PsiDefProofThPointEstEqt4}), up to passing to a subsequence, the $\tilde v_\alpha^i$'s are uniformly bounded in any compact subset of ${\mathbb R}^n$ for all $i$. Let $\tilde{\mathcal V}_\alpha = (\tilde v_\alpha^1,\dots,\tilde v_\alpha^p)$. 
The $\tilde{\mathcal V}_\alpha$'s are solutions of the system \begin{equation}\label{PsiDefProofThPointEstEqt5} \Delta_{g_\alpha}\tilde v_\alpha^i + \sum_{j=1}^p\tilde\mu_\alpha^2\tilde A_{ij}^\alpha\tilde v_\alpha^j = \vert\tilde v_\alpha^i\vert^{2^\star-2}\tilde v_\alpha^i \hskip.1cm , \end{equation} for all $i$, where \begin{eqnarray*} &&\tilde A_{ij}^\alpha(x) = A_{ij}^\alpha\left(\exp_{y_\alpha}(\tilde\mu_\alpha x)\right) \hskip.1cm ,\hskip.1cm\hbox{and}\\ &&g_\alpha(x) = \left(\exp_{y_\alpha}^\star g\right)(\tilde\mu_\alpha x)\hskip.1cm . \end{eqnarray*} Let $\xi$ be the Euclidean metric. Clearly, for any compact subset $K$ of ${\mathbb R}^n$, $g_\alpha \to \xi$ in $C^2(K)$ as $\alpha \to +\infty$. Then, by standard elliptic theory, and (\ref{PsiDefProofThPointEstEqt5}), we get that the $\tilde v_\alpha^i$'s are uniformly bounded in $C^{2,\theta}_{loc}({\mathbb R}^n)$ for all $i$, where $0 < \theta < 1$. In particular, up to a subsequence, we can assume that $\tilde v_\alpha^i \to \tilde v_i$ in $C^2_{loc}({\mathbb R}^n)$ as $\alpha \to +\infty$ for all $i$, where the $\tilde v_i$'s are functions in $C^2({\mathbb R}^n)$. The $\tilde v_i$'s are bounded in ${\mathbb R}^n$ by (\ref{PsiDefProofThPointEstEqt4}), and such that $\vert\tilde v_{i_0}\vert(0) = 1$ by construction. Without loss of generality, we may also assume that the $\tilde v_i$'s are in ${\mathcal D}_1^2({\mathbb R}^n)$ and in $L^{2^\star}({\mathbb R}^n)$ for all $i$, where ${\mathcal D}_1^2({\mathbb R}^n)$ is the Beppo-Levi space defined as the completion of $C^\infty_0({\mathbb R}^n)$, the space of smooth functions with compact support in ${\mathbb R}^n$, with respect to the norm $\Vert u\Vert = \Vert\nabla u\Vert_2$. We let $\tilde{\mathcal V} = (\tilde v_1,\dots,\tilde v_p)$. According to the above, $\tilde {\mathcal V} \not\equiv 0$. 
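In passing, note that, since $g_\alpha \to \xi$ in $C^2_{loc}$, $\tilde\mu_\alpha \to 0$, and the $\tilde A^\alpha_{ij}$'s are uniformly bounded, passing to the limit in (\ref{PsiDefProofThPointEstEqt5}) gives that $\tilde{\mathcal V}$ solves the critical Euclidean system $$\Delta\tilde v_i = \vert\tilde v_i\vert^{2^\star-2}\tilde v_i\hskip.2cm\hbox{in}\hskip.1cm {\mathbb R}^n$$ for all $i$, where $\Delta$ is the Euclidean Laplacian.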
By construction, for any $R > 0$, $$\int_{B_{y_\alpha}(R\tilde \mu_\alpha)}\vert\tilde{\mathcal U}_\alpha\vert^{2^\star}dv_g = \int_{B_0(R)}\vert\tilde{\mathcal V}_\alpha\vert^{2^\star}dv_{g_\alpha}\hskip.1cm .$$ It follows that for any $R > 0$, \begin{equation}\label{PsiDefProofThPointEstEqt6} \int_{B_{y_\alpha}(R\tilde\mu_\alpha)}\vert\tilde{\mathcal U}_\alpha\vert^{2^\star}dv_g = \int_{{\mathbb R}^n}\vert\tilde{\mathcal V}\vert^{2^\star}dx + \varepsilon_R(\alpha)\hskip.1cm , \end{equation} where $\varepsilon_R(\alpha)$ is such that $\lim_R\lim_\alpha\varepsilon_R(\alpha) = 0$, and the limits are as $\alpha\to+\infty$ and $R\to+\infty$. When $\tilde{\mathcal U}^0 \equiv 0$, see for instance Hebey \cite{Heb}, we also get with (\ref{PsiDefProofThPointEstEqt1}) that \begin{equation}\label{PsiDefProofThPointEstEqt7} \lim_{\alpha\to+\infty}\int_{B_{y_\alpha}(R\tilde\mu_\alpha)} \vert{\mathcal B}_\alpha\vert^{2^\star}dv_g = 0 \end{equation} for all $R > 0$, where $({\mathcal B}_\alpha)_\alpha$ is the $p$-bubble in (\ref{ProofStep1Sec4Eqt4}). By (\ref{ProofStep1Sec4Eqt3}) and (\ref{ProofStep1Sec4Eqt4}), \begin{equation}\label{PsiDefProofThPointEstEqt8} \begin{split} \int_{B_{y_\alpha}(R\tilde\mu_\alpha)}\vert\tilde{\mathcal U}_\alpha\vert^{2^\star}dv_g &\le C\int_{B_{y_\alpha}(R\tilde\mu_\alpha)}\vert\tilde{\mathcal U}^0\vert^{2^\star}dv_g + o(1)\\ &= o(1) \end{split} \end{equation} for all $\alpha$ and $R > 0$ if $\tilde{\mathcal U}^0 \not\equiv 0$, while \begin{equation}\label{PsiDefProofThPointEstEqt9} \int_{B_{y_\alpha}(R\tilde\mu_\alpha)}\vert\tilde{\mathcal U}_\alpha\vert^{2^\star}dv_g \le C\int_{B_{y_\alpha}(R\tilde\mu_\alpha)}\vert{\mathcal B}_\alpha\vert^{2^\star}dv_g + o(1) \end{equation} for all $\alpha$ and $R > 0$ if $\tilde{\mathcal U}^0 \equiv 0$, where $C > 0$ is independent of $\alpha$ and $R$, and $o(1) \to 0$ as $\alpha \to +\infty$.
Combining (\ref{PsiDefProofThPointEstEqt6})--(\ref{PsiDefProofThPointEstEqt9}), letting $\alpha \to +\infty$, and then $R\to+\infty$, we get that $$\int_{{\mathbb R}^n}\vert\tilde{\mathcal V}\vert^{2^\star}dx = 0\hskip.1cm ,$$ and this is in contradiction with the fact that $\tilde {\mathcal V} \not\equiv 0$. In particular, the $\Psi_\alpha$'s are uniformly bounded in $L^\infty(M)$. This proves (\ref{ProofStep1Sec4Eqt5}). When $\tilde{\mathcal U}^0 \equiv 0$, (\ref{ProofStep1Sec4Eqt5}) gives that (\ref{Step1StatemSec4Eqt2}) is true, and if $x_0$ is the limit of the $x_\alpha$'s, (\ref{Step1StatemSec4Eqt2}) gives that the $\vert\tilde{\mathcal U}_\alpha\vert$'s are uniformly bounded in any compact subset of $M\backslash\{x_0\}$. By standard elliptic theory, by the system (\ref{Sec4ProofEqt2}) satisfied by the $\tilde{\mathcal U}_\alpha$'s, and since $\tilde{\mathcal U}_\alpha \to 0$ in $L^2$ when $\tilde{\mathcal U}^0 \equiv 0$, we get that $\vert\tilde{\mathcal U}_\alpha\vert \to 0$ in $C^0_{loc}(M\backslash\{x_0\})$ as $\alpha \to +\infty$. This ends the proof of Step \ref{Step1ProofLemSec4}. \end{proof} According to Step \ref{Step1ProofLemSec4}, in order to prove (\ref{MainResToProve}), it suffices to prove that the $p$-map $\tilde{\mathcal U}^0$ given by (\ref{Sec4ProofEqt4}) is not identically zero. We proceed here by contradiction and assume that $\tilde{\mathcal U}^0 \equiv 0$. The next step in the proof of (\ref{MainResToProve}) consists in proving that the $\tilde{\mathcal U}_\alpha$'s satisfy perturbed De Giorgi-Nash-Moser type estimates. Step \ref{Step2ProofLemSec4} in the proof of (\ref{MainResToProve}) is as follows. \begin{Step}\label{Step2ProofLemSec4} Let $\tilde{\mathcal U}_\alpha$ and $\tilde{\mathcal U}^0$ be given by (\ref{Sec4ProofEqt1}) and (\ref{Sec4ProofEqt4}). Assume $\tilde{\mathcal U}^0 \equiv 0$.
For any $\delta > 0$, there exists $C > 0$ such that, up to a subsequence, \begin{equation}\label{Step2StatemSec4Eqt1} \max_{M\backslash B_\delta}\vert\tilde{\mathcal U}_\alpha\vert \le C \int_M\left(1 + \vert\tilde{\mathcal U}_\alpha\vert^{2^\star-2}\right)\vert\tilde{\mathcal U}_\alpha\vert dv_g \end{equation} for all $\alpha$, where $B_\delta = B_{x_0}(\delta)$ is the ball centered at $x_0$ of radius $\delta$, $\vert\tilde{\mathcal U}_\alpha\vert = \sum_{i=1}^p\vert\tilde u_\alpha^i\vert$, $\vert\tilde{\mathcal U}_\alpha\vert^{2^\star-2} = \sum_{i=1}^p\vert\tilde u_\alpha^i\vert^{2^\star-2}$, and $x_0$ is the limit of the centers of the $1$-bubble from which the $p$-bubble $({\mathcal B}_\alpha)_\alpha$ in (\ref{Step1StatemSec4Eqt1}) is defined. \end{Step} \begin{proof}[Proof of Step \ref{Step2ProofLemSec4}] Let $B = B_x(r)$ be such that $B_x(2r) \subset M\backslash\{x_0\}$. By (\ref{Sec4ProofEqt2}), and Step \ref{Step1ProofLemSec4}, $\vert\Delta_g\tilde u_\alpha^i\vert \le C\vert\tilde{\mathcal U}_\alpha\vert$ in $B$, for all $i$ and all $\alpha$, where $C > 0$ is independent of $\alpha$ and $i$. Then we also have that \begin{equation}\label{Step2Sec4Eqt1} \left\vert\Delta_g\tilde u_\alpha^i + \tilde u_\alpha^i\right\vert \le C\vert\tilde{\mathcal U}_\alpha\vert \end{equation} in $B$, for all $i$ and all $\alpha$, where $C > 0$ is independent of $\alpha$ and $i$. We define the $\hat u_\alpha^i$'s by \begin{equation}\label{Step2Sec4Eqt2} \Delta_g\hat u_\alpha^i + \hat u_\alpha^i = \left\vert\Delta_g\tilde u_\alpha^i + \tilde u_\alpha^i\right\vert \end{equation} in $M$, for all $\alpha$ and all $i$. Since $$\Delta_g\left(\hat u_\alpha^i \pm \tilde u_\alpha^i\right) + \left(\hat u_\alpha^i \pm \tilde u_\alpha^i\right) \ge 0\hskip.1cm ,$$ we get from the maximum principle that $\hat u_\alpha^i \ge \vert\tilde u_\alpha^i\vert$ in $M$, for all $\alpha$ and all $i$. In particular, the $\hat u_\alpha^i$'s are nonnegative.
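Concerning the use of the maximum principle here, note that if $w \in C^2(M)$ satisfies $\Delta_gw + w \ge 0$ in $M$, then, at a point $x_m$ where $w$ attains its minimum, $$\Delta_gw(x_m) \le 0\hskip.2cm\hbox{and thus}\hskip.2cm w(x_m) \ge -\Delta_gw(x_m) \ge 0\hskip.1cm ,$$ so that $w \ge 0$ in $M$. Applying this with $w = \hat u_\alpha^i \pm \tilde u_\alpha^i$ gives the inequality $\hat u_\alpha^i \ge \vert\tilde u_\alpha^i\vert$.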
By (\ref{Step2Sec4Eqt1}) and (\ref{Step2Sec4Eqt2}) we also have that \begin{equation}\label{Step2Sec4Eqt3} \Delta_g\vert\hat{\mathcal U}_\alpha\vert \le C \vert\hat{\mathcal U}_\alpha\vert \end{equation} in $B$, for all $\alpha$, where $\hat{\mathcal U}_\alpha$ is the $p$-map with components the $\hat u_\alpha^i$'s, $\vert\hat{\mathcal U}_\alpha\vert = \sum_{i=1}^p\hat u_\alpha^i$ since the $\hat u_\alpha^i$'s are nonnegative, and $C > 0$ is independent of $\alpha$. It easily follows from (\ref{Step2Sec4Eqt1}), (\ref{Step2Sec4Eqt2}), and Step \ref{Step1ProofLemSec4} that the $\vert\hat{\mathcal U}_\alpha\vert$'s are uniformly bounded in $L^\infty(B)$. Then, by (\ref{Step2Sec4Eqt3}), we can apply the De Giorgi-Nash-Moser iterative scheme for functions to the $\vert\hat{\mathcal U}_\alpha\vert$'s. In particular, we can write that \begin{equation}\label{Step2Sec4Eqt4} \max_{B_x(r/4)}\vert\hat{\mathcal U}_\alpha\vert \le C \int_{B_x(r/2)}\vert\hat{\mathcal U}_\alpha\vert dv_g \hskip.1cm , \end{equation} where $C > 0$ is independent of $\alpha$. With the notations in the statement of Step \ref{Step2ProofLemSec4}, since $B$ is basically any ball in $M\backslash\{x_0\}$, it easily follows from (\ref{Step2Sec4Eqt4}) that for any $\delta > 0$, \begin{equation}\label{Step2Sec4Eqt5} \max_{M\backslash B_\delta}\vert\hat{\mathcal U}_\alpha\vert \le C \int_M\vert\hat{\mathcal U}_\alpha\vert dv_g \hskip.1cm , \end{equation} where $C > 0$ is independent of $\alpha$. By (\ref{Sec4ProofEqt2}), \begin{equation}\label{Step2Sec4Eqt6} \left\vert\Delta_g\tilde u_\alpha^i + \tilde u_\alpha^i\right\vert \le C\left(1 + \vert\tilde{\mathcal U}_\alpha\vert^{2^\star-2}\right)\vert\tilde{\mathcal U}_\alpha\vert \end{equation} in $M$, for all $i$ and $\alpha$, where $C > 0$ is independent of $\alpha$ and $i$.
Integrating (\ref{Step2Sec4Eqt2}) over $M$, since $\int_M(\Delta_g\hat u_\alpha^i)dv_g = 0$ for all $i$ and all $\alpha$, we get with (\ref{Step2Sec4Eqt6}) that \begin{equation}\label{Step2Sec4Eqt7} \int_M\vert\hat{\mathcal U}_\alpha\vert dv_g \le C \int_M\left(1 + \vert\tilde{\mathcal U}_\alpha\vert^{2^\star-2}\right)\vert\tilde{\mathcal U}_\alpha\vert dv_g \end{equation} for all $\alpha$, where $C > 0$ is independent of $\alpha$. As already mentioned, $\vert\tilde{\mathcal U}_\alpha\vert \le \vert\hat{\mathcal U}_\alpha\vert$ in $M$. In particular, we get with (\ref{Step2Sec4Eqt5}) and (\ref{Step2Sec4Eqt7}) that (\ref{Step2StatemSec4Eqt1}) is true. Step \ref{Step2ProofLemSec4} is proved. \end{proof} Step \ref{Step3ProofLemSec4} in the proof of (\ref{MainResToProve}) is concerned with the $L^1/L^{2^\star-1}$-controlled balance property of the $\tilde{\mathcal U}_\alpha$'s. Step \ref{Step3ProofLemSec4} is as follows. \begin{Step}\label{Step3ProofLemSec4} Let $\tilde{\mathcal U}_\alpha$ be given by (\ref{Sec4ProofEqt1}). There exists $C > 0$ such that, up to a subsequence, \begin{equation}\label{Step3StatemSec4Eqt1} \int_M\vert\tilde{\mathcal U}_\alpha\vert dv_g \le C \int_M\vert\tilde{\mathcal U}_\alpha\vert^{2^\star-1}dv_g \end{equation} for all $\alpha$, where $\vert\tilde{\mathcal U}_\alpha\vert = \sum_{i=1}^p\vert\tilde u_\alpha^i\vert$, and $\vert\tilde{\mathcal U}_\alpha\vert^{2^\star-1} = \sum_{i=1}^p\vert\tilde u_\alpha^i\vert^{2^\star-1}$. \end{Step} \begin{proof}[Proof of Step \ref{Step3ProofLemSec4}] Let $f_\alpha^i = \hbox{sign}(\tilde u_\alpha^i)$ be the function given by \begin{equation}\label{ProofStep3Sec4Eqt1} f_\alpha^i = \chi_{\left\{\tilde u_\alpha^i > 0\right\}} - \chi_{\left\{\tilde u_\alpha^i < 0\right\}}\hskip.1cm , \end{equation} where $\chi_A$ is the characteristic function of $A$. Then $f_\alpha^i\tilde u_\alpha^i = \vert\tilde u_\alpha^i\vert$ for all $\alpha$ and all $i$.
We also have that $\vert f_\alpha^i\vert \le 1$ for all $\alpha$ and all $i$. As already mentioned, up to passing to a subsequence, we can assume that there exists $K > 0$ such that $A^\alpha_{ij}(x) \ge K\delta_{ij}$ in the sense of bilinear forms for all $x$. In particular, if we let $\Delta_g^p$ be the Laplacian acting on $p$-maps, the operators $\Delta_g^p + A(\alpha)$ are (uniformly) coercive in the sense that there exists $C > 0$ such that for any ${\mathcal U} \in H_{1,p}^2(M)$, and any $\alpha$, \begin{equation}\label{ProofStep3Sec4Eqt2} I_{A(\alpha)}({\mathcal U}) \ge C \Vert{\mathcal U}\Vert_{H_{1,p}^2}^2\hskip.1cm , \end{equation} where \begin{equation}\label{ProofStep3Sec4Eqt3} I_{A(\alpha)}({\mathcal U}) = \int_M\vert\nabla{\mathcal U}\vert^2dv_g + \int_MA(\alpha)({\mathcal U},{\mathcal U})dv_g\hskip.1cm . \end{equation} By (\ref{ProofStep3Sec4Eqt2}), and standard minimization techniques, there is a solution ${\mathcal U}_\alpha^\prime$ to the problem of minimizing $I_{A(\alpha)}({\mathcal U})$ under the constraint $\int_M(f_\alpha,{\mathcal U})dv_g = 1$, where $I_{A(\alpha)}({\mathcal U})$ is as in (\ref{ProofStep3Sec4Eqt3}), $(f_\alpha,{\mathcal U}) = \sum_{i=1}^pf_\alpha^iu_i$, and the $u_i$'s are the components of ${\mathcal U}$. If $\lambda_\alpha$ is the minimum of the $I_{A(\alpha)}({\mathcal U})$'s, where ${\mathcal U} \in H_{1,p}^2(M)$ satisfies the constraint $\int_M(f_\alpha,{\mathcal U})dv_g = 1$, it easily follows from (\ref{ProofStep3Sec4Eqt2}) that $\lambda_\alpha > 0$. We let $\hat{\mathcal U}_\alpha = \lambda_\alpha^{-1}{\mathcal U}_\alpha^\prime$. 
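To see why this normalization is the natural one, note that, as a brief verification, the Euler-Lagrange equation of the above constrained minimization problem reads as $$\Delta_gu_\alpha^{\prime i} + \sum_{j=1}^pA^\alpha_{ij}u_\alpha^{\prime j} = \mu_\alpha f_\alpha^i$$ for all $i$, where $\mu_\alpha$ is a Lagrange multiplier and the $u_\alpha^{\prime i}$'s are the components of ${\mathcal U}_\alpha^\prime$. Multiplying this equation by $u_\alpha^{\prime i}$, integrating over $M$, summing over $i$, and using the constraint $\int_M(f_\alpha,{\mathcal U}_\alpha^\prime)dv_g = 1$, we get that $\mu_\alpha = I_{A(\alpha)}({\mathcal U}_\alpha^\prime) = \lambda_\alpha$. Dividing the Euler-Lagrange equation by $\lambda_\alpha$ then shows that $\hat{\mathcal U}_\alpha$ solves the same system with right hand side $f_\alpha^i$. 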
Then $\hat{\mathcal U}_\alpha$ is a solution of the system \begin{equation}\label{ProofStep3Sec4Eqt4} \Delta_g\hat u_\alpha^i + \sum_{j=1}^pA^\alpha_{ij}\hat u_\alpha^j = f_\alpha^i \end{equation} for all $i$ and all $\alpha$, where the $\hat u_\alpha^i$'s are the components of $\hat{\mathcal U}_\alpha$, and the $f_\alpha^i$'s are as in (\ref{ProofStep3Sec4Eqt1}). Multiplying (\ref{ProofStep3Sec4Eqt4}) by $\hat u_\alpha^i$, integrating over $M$, and summing over $i$, we get with (\ref{ProofStep3Sec4Eqt2}) that the square of the $H_{1,p}^2$-norm of the $\hat{\mathcal U}_\alpha$'s is uniformly controlled by the $L^1$-norm of the $\vert\hat{\mathcal U}_\alpha\vert$'s. In particular, the $\hat u_\alpha^i$'s are uniformly bounded in $L^2$. By standard elliptic theory, the $\hat u_\alpha^i$'s are in the Sobolev spaces $H_2^q$ for all $q$. As an easy consequence, the $\hat u_\alpha^i$'s are continuous. By the above discussion, and standard elliptic theory, we then get that there exists $C > 0$ such that $\vert\hat u_\alpha^i\vert \le C$ in $M$, for all $\alpha$ and all $i$. By (\ref{Sec4ProofEqt2}) and (\ref{ProofStep3Sec4Eqt4}) we can now write that \begin{equation}\label{ProofStep3Sec4Eqt5} \begin{split} \sum_{i=1}^p\int_M\vert\tilde u_\alpha^i\vert dv_g &= \sum_{i=1}^p\int_M\tilde u_\alpha^if_\alpha^idv_g\\ &= \sum_{i=1}^p\int_M\left(\Delta_g\hat u_\alpha^i + \sum_{j=1}^pA^\alpha_{ij}\hat u_\alpha^j\right)\tilde u_\alpha^idv_g\\ &= \sum_{i=1}^p\int_M\left(\Delta_g\tilde u_\alpha^i + \sum_{j=1}^pA^\alpha_{ij}\tilde u_\alpha^j\right)\hat u_\alpha^idv_g\\ &= \sum_{i=1}^p\int_M\vert\tilde u_\alpha^i\vert^{2^\star-2}\tilde u_\alpha^i\hat u_\alpha^idv_g\\ \end{split} \end{equation} for all $\alpha$. 
Since there exists $C > 0$ such that $\vert\hat u_\alpha^i\vert \le C$ in $M$ for all $\alpha$ and all $i$, it follows from (\ref{ProofStep3Sec4Eqt5}) that $$\int_M\vert\tilde{\mathcal U}_\alpha\vert dv_g \le C \int_M\vert\tilde{\mathcal U}_\alpha\vert^{2^\star-1}dv_g$$ for all $\alpha$, where $C > 0$ does not depend on $\alpha$. This proves Step \ref{Step3ProofLemSec4}. \end{proof} Step \ref{Step4ProofLemSec4} in the proof of (\ref{MainResToProve}) is concerned with $L^2$-concentration. We assume here that $n \ge 4$. When $n = 3$, bubbles do not concentrate in the $L^2$-norm, and $L^2$-concentration turns out to be false in this dimension. Dimension $4$ is thus the smallest dimension in which this notion of $L^2$-concentration can hold. Step \ref{Step4ProofLemSec4} is as follows. \begin{Step}\label{Step4ProofLemSec4} Let $\tilde{\mathcal U}_\alpha$ and $\tilde{\mathcal U}^0$ be given by (\ref{Sec4ProofEqt1}) and (\ref{Sec4ProofEqt4}). Assume $\tilde{\mathcal U}^0 \equiv 0$ and $n \ge 4$. Up to a subsequence, \begin{equation}\label{Step4StatemSec4Eqt1} \lim_{\alpha \to +\infty} \frac{\int_{B_\delta}\vert\tilde{\mathcal U}_\alpha\vert^2dv_g} {\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g} = 1 \end{equation} for all $\delta > 0$, where $B_\delta = B_{x_0}(\delta)$ is the ball centered at $x_0$ of radius $\delta$, $x_0$ is the limit of the centers of the $1$-bubble from which the $p$-bubble $({\mathcal B}_\alpha)_\alpha$ in (\ref{Step1StatemSec4Eqt1}) is defined, and $\vert\tilde{\mathcal U}_\alpha\vert^2 = \sum_{i=1}^p\vert\tilde u_\alpha^i\vert^2$. \end{Step} \begin{proof}[Proof of Step \ref{Step4ProofLemSec4}] Clearly, Step \ref{Step4ProofLemSec4} is equivalent to proving that for any $\delta > 0$, $R_\delta(\alpha) \to 0$ as $\alpha \to +\infty$, where $R_\delta(\alpha)$ is the ratio given by \begin{equation}\label{RatioL2Conc} R_\delta(\alpha) = \frac{\int_{M\backslash B_\delta}\vert\tilde{\mathcal U}_\alpha\vert^2dv_g} {\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g}\hskip.1cm . 
\end{equation} We fix $\delta > 0$. By Steps \ref{Step2ProofLemSec4} and \ref{Step3ProofLemSec4}, we can write that for any $\alpha$, \begin{eqnarray*} \int_{M\backslash B_\delta}\vert\tilde{\mathcal U}_\alpha\vert^2dv_g & \le & \left(\max_{M\backslash B_\delta}\vert\tilde{\mathcal U}_\alpha\vert\right) \int_{M\backslash B_\delta}\vert\tilde{\mathcal U}_\alpha\vert dv_g\\ & \le & C \sqrt{\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g} \int_M\vert\tilde{\mathcal U}_\alpha\vert^{2^\star-1}dv_g\hskip.1cm , \end{eqnarray*} where $C > 0$ is independent of $\alpha$. In particular, \begin{equation}\label{RatioL2ConcProof} R_\delta(\alpha) \le C \frac{\int_M\vert\tilde{\mathcal U}_\alpha\vert^{2^\star-1}dv_g} {\sqrt{\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g}} \end{equation} for all $\alpha$, where $C > 0$ is independent of $\alpha$, and $R_\delta(\alpha)$ is given by (\ref{RatioL2Conc}). If we assume now that $n \ge 6$, then $2^\star-1 \le 2$, and we can write with H\"older's inequality that $$\int_M\vert\tilde u_\alpha^i\vert^{2^\star-1}dv_g \le V_g^{\frac{3-2^\star}{2}} \left(\int_M\vert\tilde u_\alpha^i\vert^2dv_g\right)^{\frac{2^\star-1}{2}}$$ for all $i$, where $V_g$ is the volume of $M$ with respect to $g$. In particular, there exists $C > 0$ such that \begin{equation}\label{HighDimProofStep4} \int_M\vert\tilde{\mathcal U}_\alpha\vert^{2^\star-1}dv_g \le C \left(\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g\right)^{\frac{2^\star-1}{2}}\hskip.1cm , \end{equation} for all $\alpha$. Since $\tilde{\mathcal U}^0 \equiv 0$, it follows from (\ref{Sec4ProofEqt4}) that $\tilde{\mathcal U}_\alpha \to 0$ in $L^2$ as $\alpha \to +\infty$. Since $2^\star > 2$, we then get with (\ref{RatioL2ConcProof}) and (\ref{HighDimProofStep4}) that $R_\delta(\alpha) \to 0$ as $\alpha \to +\infty$. This proves (\ref{Step4StatemSec4Eqt1}) when $n \ge 6$. 
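As a quick check on the exponents in the case $n \ge 6$, note that $$2^\star - 1 \le 2 \iff \frac{2n}{n-2} \le 3 \iff n \ge 6\hskip.1cm ,$$ while $\frac{2^\star-1}{2} > \frac{1}{2}$ in all dimensions since $2^\star > 2$. It is precisely this strict inequality which forces the right hand side of (\ref{RatioL2ConcProof}) to go to zero when $\tilde{\mathcal U}_\alpha \to 0$ in $L^2$. 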
If we assume now that $n = 5$, then $2 \le 2^\star-1 \le 2^\star$, and we can write with H\"older's inequality that $$\left(\int_M\vert\tilde u_\alpha^i\vert^{2^\star-1}dv_g\right)^{\frac{1}{2^\star-1}} \le \left(\int_M\vert\tilde u_\alpha^i\vert^2dv_g\right)^{\frac{\theta}{2}} \left(\int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g\right)^{\frac{1-\theta}{2^\star}}\hskip.1cm ,$$ where $\theta = \frac{3}{2(2^\star-1)}$. Since $n = 5$, we have $2^\star = \frac{10}{3}$, so that $\theta = \frac{9}{14}$ and $\theta(2^\star-1) = \frac{3}{2}$, which accounts for the exponent $\frac{3}{4}$ below. By (\ref{Sec4ProofEqt3}) we then get that $$\int_M\vert\tilde{\mathcal U}_\alpha\vert^{2^\star-1}dv_g \le C \left(\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g\right)^{\frac{3}{4}}\hskip.1cm ,$$ where $C > 0$ does not depend on $\alpha$. Since $\frac{3}{4} > \frac{1}{2}$ and $\tilde{\mathcal U}_\alpha \to 0$ in $L^2$, we get with (\ref{RatioL2ConcProof}) that $R_\delta(\alpha) \to 0$ as $\alpha \to +\infty$. This proves (\ref{Step4StatemSec4Eqt1}) when $n = 5$. Now it remains to prove (\ref{Step4StatemSec4Eqt1}) when $n = 4$. The argument when $n = 4$ is slightly more delicate. We start by writing that \begin{equation}\label{L2ConcProofDim4Eqt1} \begin{split} \frac{\int_M\vert\tilde{\mathcal U}_\alpha\vert^{2^\star-1}dv_g}{\sqrt{\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g}} &= \sum_{i=1}^p \frac{\int_M\vert\tilde u_\alpha^i\vert^{2^\star-1}dv_g}{\sqrt{\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g}}\\ &\le \sum_{i=1}^p\frac{\int_M\vert\tilde u_\alpha^i\vert^{2^\star-1}dv_g}{\sqrt{\int_M\vert\tilde u_\alpha^i\vert^2dv_g}}\hskip.1cm . \end{split} \end{equation} We let the $x_\alpha$'s and $\mu_\alpha$'s be the centers and weights of the $1$-bubble $(B_\alpha)_\alpha$ from which the $p$-bubble $({\mathcal B}_\alpha)_\alpha$ in (\ref{Step1StatemSec4Eqt1}) is defined. We let $i_0 \in \{1,\dots,p\}$ be such that ${\mathcal B}_\alpha^{i_0} = B_\alpha$ for all $\alpha$. For $R > 0$, we also let $\Omega_{i_0,\alpha}(R)$ be given by \begin{equation}\label{L2ConcProofDim4EqtSet1} \Omega_{i_0,\alpha}(R) = B_{x_\alpha}(R\mu_\alpha)\hskip.1cm . 
\end{equation} Since $n = 4$, we have that $2^\star = 4$. If $i \not= i_0$, we can write, thanks to H\"older's inequalities, that $$\int_M\vert\tilde u_\alpha^i\vert^{2^\star-1}dv_g \le \sqrt{\int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g} \sqrt{\int_M\vert\tilde u_\alpha^i\vert^2dv_g}\hskip.1cm .$$ By (\ref{Step1StatemSec4Eqt1}), when $i \not= i_0$, $\tilde u_\alpha^i \to 0$ in $H_1^2(M)$ as $\alpha \to +\infty$. It follows that for any $i \not= i_0$, \begin{equation}\label{Sec4ProofL2ConcConclEqt1} \frac{\int_M\vert\tilde u_\alpha^i\vert^{2^\star-1}dv_g}{\sqrt{\int_M\vert\tilde u_\alpha^i\vert^2dv_g}} = o(1)\hskip.1cm , \end{equation} where $o(1) \to 0$ as $\alpha \to +\infty$. On the other hand, when $i = i_0$, we get with H\"older's inequalities that \begin{eqnarray*} \int_M\vert\tilde u_\alpha^{i_0}\vert^{2^\star-1}dv_g &\le& \int_{\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^{2^\star-1}dv_g\\ &&\hskip.4cm + \sqrt{\int_{M\backslash\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^{2^\star}dv_g} \sqrt{\int_M\vert\tilde u_\alpha^{i_0}\vert^2dv_g}\hskip.1cm , \end{eqnarray*} and we can write that \begin{equation}\label{L2ConcProofDim4Eqt3} \frac{\int_M\vert\tilde u_\alpha^{i_0}\vert^{2^\star-1}dv_g}{\sqrt{\int_M\vert\tilde u_\alpha^{i_0}\vert^2dv_g}} \le \sqrt{\int_{M\backslash\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^{2^\star}dv_g} + \frac{\int_{\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^{2^\star-1}dv_g} {\sqrt{\int_M\vert\tilde u_\alpha^{i_0}\vert^2dv_g}}\hskip.1cm , \end{equation} where $\Omega_{i_0,\alpha}(R)$ is as in (\ref{L2ConcProofDim4EqtSet1}). 
For $\varphi \in C^\infty_0({\mathbb R}^n)$, where $C^\infty_0({\mathbb R}^n)$ is the set of smooth functions with compact support in ${\mathbb R}^n$, we let $\varphi_\alpha^{i_0}$ be the function defined by the equation \begin{equation}\label{L2ConcProofDim4Eqt4} \varphi_\alpha^{i_0}(x) = (\mu_\alpha)^{-\frac{n-2}{2}} \varphi\left((\mu_\alpha)^{-1}\exp_{x_\alpha}^{-1}(x)\right) \hskip.1cm . \end{equation} Straightforward computations give that for any $R > 0$,\par \medskip (i) $\displaystyle\int_{M\backslash \Omega_{i_0,\alpha}(R)} (B_\alpha)^{2^\star}dv_g = \varepsilon_R(\alpha)$,\par \medskip (ii) $\displaystyle\int_{\Omega_{i_0,\alpha}(R)} (B_\alpha)^{2^\star-1}\varphi_\alpha^{i_0}dv_g = \int_{B_0(R)}u^{2^\star-1}\varphi dx + o(1)$,\par \medskip (iii) $\displaystyle\int_{\Omega_{i_0,\alpha}(R)} (B_\alpha)^2(\varphi_\alpha^{i_0})^{2^\star-2}dv_g = \int_{B_0(R)}u^2\varphi^{2^\star-2}dx + o(1)$\par \medskip\noindent where $i_0$ is such that ${\mathcal B}_\alpha^{i_0} = B_\alpha$ for all $\alpha$, $({\mathcal B}_\alpha)_\alpha$ is the $p$-bubble in (\ref{Step1StatemSec4Eqt1}), $u$ is given by (\ref{PosSolCritEuclEqt}), $\Omega_{i_0,\alpha}(R)$ is given by (\ref{L2ConcProofDim4EqtSet1}), $o(1) \to 0$ as $\alpha \to +\infty$, and the $\varepsilon_R(\alpha)$'s are such that \begin{equation}\label{EqtRestL2ConcDim4} \lim_{R \to +\infty}\limsup_{\alpha\to +\infty}\varepsilon_R(\alpha) = 0 \hskip.1cm . \end{equation} By (i) and (\ref{Step1StatemSec4Eqt1}) we can write that \begin{equation}\label{L2ConcProofDim4Eqt5} \int_{M\backslash\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^{2^\star}dv_g = \varepsilon_R(\alpha)\hskip.1cm , \end{equation} where $\Omega_{i_0,\alpha}(R)$ is as in (\ref{L2ConcProofDim4EqtSet1}), and the $\varepsilon_R(\alpha)$'s are such that (\ref{EqtRestL2ConcDim4}) holds. From now on, we let $\varphi$ in (\ref{L2ConcProofDim4Eqt4}) be such that $\varphi = 1$ in $B_0(R)$, $R > 0$. 
Then, $$\int_{\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^{2^\star-1}dv_g = \mu_\alpha^{\frac{n-2}{2}} \int_{\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^{2^\star-1}\varphi_\alpha^{i_0}dv_g$$ and, by (\ref{Step1StatemSec4Eqt1}) and (ii), we can write that \begin{eqnarray*} \int_{\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^{2^\star-1}\varphi_\alpha^{i_0}dv_g & \le & C\int_{\Omega_{i_0,\alpha}(R)}B_\alpha^{2^\star-1}\varphi_\alpha^{i_0}dv_g + o(1)\\ & \le & C \int_{B_0(R)}u^{2^\star-1}dx + o(1)\hskip.1cm , \end{eqnarray*} where $o(1) \to 0$ as $\alpha \to +\infty$, and $C > 0$ does not depend on $\alpha$ and $R$. In particular, we have that \begin{equation}\label{L2ConcProofDim4Eqt6} \int_{\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^{2^\star-1}dv_g \le \left(C \int_{B_0(R)}u^{2^\star-1}dx + o(1)\right) \mu_\alpha^{\frac{n-2}{2}}\hskip.1cm , \end{equation} where $o(1) \to 0$ as $\alpha \to +\infty$, $u$ is as in (\ref{PosSolCritEuclEqt}), and $C > 0$ does not depend on $\alpha$ and $R$. Independently, we also have that \begin{eqnarray*} \int_M\vert\tilde u_\alpha^{i_0}\vert^2dv_g & \ge & \int_{\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^2dv_g\\ & \ge & \mu_\alpha^{n-2} \int_{\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^2(\varphi_\alpha^{i_0})^{2^\star-2}dv_g \end{eqnarray*} Here, $2^\star-2 = 2$. 
As is easily checked, we can write with (\ref{Step1StatemSec4Eqt1}) that $$\int_{\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^2(\varphi_\alpha^{i_0})^{2^\star-2}dv_g = \int_{\Omega_{i_0,\alpha}(R)}B_\alpha^2 (\varphi_\alpha^{i_0})^{2^\star-2}dv_g + o(1)$$ and thanks to (iii) we get that $$\int_{\Omega_{i_0,\alpha}(R)}\vert\tilde u_\alpha^{i_0}\vert^2(\varphi_\alpha^{i_0})^{2^\star-2}dv_g \ge \int_{B_0(R)}u^2dx + o(1)\hskip.1cm .$$ In particular, \begin{equation}\label{L2ConcProofDim4Eqt7} \int_M\vert\tilde u_\alpha^{i_0}\vert^2dv_g \ge \mu_\alpha^{n-2} \left(\int_{B_0(R)}u^2dx + o(1)\right)\hskip.1cm , \end{equation} where $o(1) \to 0$ as $\alpha \to +\infty$, and $u$ is as in (\ref{PosSolCritEuclEqt}). By (\ref{RatioL2ConcProof}), (\ref{L2ConcProofDim4Eqt1}), and (\ref{Sec4ProofL2ConcConclEqt1}), we can write that $$R_\delta(\alpha) \le C \frac{\int_M\vert\tilde u_\alpha^{i_0}\vert^{2^\star-1}dv_g}{\sqrt{\int_M\vert\tilde u_\alpha^{i_0}\vert^2dv_g}} + o(1)$$ for all $\alpha$, where $R_\delta(\alpha) $ is given by (\ref{RatioL2Conc}), and $C > 0$ is independent of $\alpha$. Then, by (\ref{L2ConcProofDim4Eqt3}), and (\ref{L2ConcProofDim4Eqt5})--(\ref{L2ConcProofDim4Eqt7}), we get that for any $R > 0$, \begin{equation}\label{L2ConcProofDim4Eqt8} \limsup_{\alpha\to+\infty}R_\delta(\alpha) \le \varepsilon_R + C \frac{\int_{B_0(R)}u^{2^\star-1}dx}{\sqrt{\int_{B_0(R)}u^2dx}}\hskip.1cm , \end{equation} where $\varepsilon_R \to 0$ as $R \to +\infty$, and $C > 0$ does not depend on $R$. It is easily seen that \begin{eqnarray*} \lim_{R\to+\infty}\int_{B_0(R)}u^{2^\star-1}dx & = & \int_{{\mathbb R}^n}u^{2^\star-1}dx\\ & < & +\infty \end{eqnarray*} On the other hand, when $n = 4$, $$\lim_{R\to+\infty}\int_{B_0(R)}u^2dx = +\infty\hskip.1cm .$$ Coming back to (\ref{L2ConcProofDim4Eqt8}), it follows that for any $\delta > 0$, $R_\delta(\alpha) \to 0$ as $\alpha \to +\infty$. In particular, (\ref{Step4StatemSec4Eqt1}) is true when $n = 4$. 
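For the reader's convenience, the last two limits can be checked directly from the standard decay $u(x) \approx (1 + \vert x\vert)^{2-n}$ of the solution $u$ of (\ref{PosSolCritEuclEqt}). When $n = 4$ we have $2^\star - 1 = 3$, and, in polar coordinates, $$\int_{B_0(R)}u^2dx \ge c\int_1^Rr^{-4}r^3dr = c\ln R\hskip.1cm ,\hskip.4cm \int_{{\mathbb R}^4}u^3dx \le C\int_0^{+\infty}(1+r)^{-6}r^3dr < +\infty\hskip.1cm ,$$ where $c, C > 0$, so that $\int_{B_0(R)}u^2dx \to +\infty$ as $R \to +\infty$ while $\int_{{\mathbb R}^4}u^{2^\star-1}dx$ is finite. Note that for $n \ge 5$ the integral $\int_{{\mathbb R}^n}u^2dx$ is finite since $2(n-2) > n$, which is why dimension $4$ is the delicate case here. 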
This ends the proof of Step \ref{Step4ProofLemSec4}. \end{proof} Step \ref{Step5ProofLemSec4} in the proof of (\ref{MainResToProve}) is concerned with proving that the off-diagonal terms $\int_M\tilde u_\alpha^i\tilde u_\alpha^jdv_g$, $i \not= j$, are small when compared to the diagonal terms $\int_M(\tilde u_\alpha^i)^2dv_g$. Step \ref{Step5ProofLemSec4} is as follows. \begin{Step}\label{Step5ProofLemSec4} Let $\tilde{\mathcal U}_\alpha$ and $\tilde{\mathcal U}^0$ be given by (\ref{Sec4ProofEqt1}) and (\ref{Sec4ProofEqt4}). Assume $\tilde{\mathcal U}^0 \equiv 0$. Up to a subsequence, for any $i, j = 1,\dots,p$ such that $i\not= j$, \begin{equation}\label{Step5StatemSec4Eqt1} \int_{B_{x_0}(\delta)}\vert\tilde u_\alpha^i\tilde u_\alpha^j\vert dv_g \le \varepsilon_\delta \int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g \end{equation} for all $\delta > 0$ and all $\alpha$, where $x_0$ is the limit of the centers of the $1$-bubble from which the $p$-bubble $({\mathcal B}_\alpha)_\alpha$ in (\ref{Step1StatemSec4Eqt1}) is defined, $\vert\tilde{\mathcal U}_\alpha\vert^2 = \sum_{i=1}^p\vert\tilde u_\alpha^i\vert^2$, and $\varepsilon_\delta > 0$ is independent of $\alpha$ and such that $\varepsilon_\delta \to 0$ as $\delta \to 0$. \end{Step} \begin{proof}[Proof of Step \ref{Step5ProofLemSec4}] As in the proof of Step \ref{Step4ProofLemSec4}, we let $i_0 \in \{1,\dots,p\}$ be such that ${\mathcal B}_\alpha^{i_0} = B_\alpha$ for all $\alpha$, where $(B_\alpha)_\alpha$ is the $1$-bubble from which the $p$-bubble $({\mathcal B}_\alpha)_\alpha$ in (\ref{Step1StatemSec4Eqt1}) is defined. Then, by (\ref{Step1StatemSec4Eqt1}), \begin{equation}\label{ProofStep5Sec4Eqt1} \int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g = o(1) \end{equation} for all $\alpha$ and all $i \not= i_0$, where $o(1) \to 0$ as $\alpha \to +\infty$. Let $i\not= i_0$. We multiply the $i$th equation in (\ref{Sec4ProofEqt2}) by $\tilde u_\alpha^i$, and integrate over $M$. 
Then we can write that \begin{equation}\label{ProofStep5Sec4Eqt2} \int_M\left(\vert\nabla\tilde u_\alpha^i\vert^2 + A^\alpha_{ii}(\tilde u_\alpha^i)^2\right)dv_g \le \int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g + C \sum_{j\not= i}\int_M\vert\tilde u_\alpha^i\vert \vert\tilde u_\alpha^j\vert dv_g \end{equation} for all $\alpha$, where $C > 0$ is independent of $\alpha$ and $i$. As already mentioned in the introduction of this section, up to passing to a subsequence, we can assume that there exists $K > 0$ such that $A^\alpha_{ij} \ge K\delta_{ij}$ in the sense of bilinear forms, for all $\alpha$. Then $A^\alpha_{ii} \ge K$ in $M$, for all $\alpha$ and all $i$, and by the Sobolev embedding theorem, we get that there exists $C > 0$ such that \begin{equation}\label{ProofStep5Sec4Eqt3} \int_M\left(\vert\nabla\tilde u_\alpha^i\vert^2 + A^\alpha_{ii}(\tilde u_\alpha^i)^2\right)dv_g \ge C \left(\int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g\right)^{2/2^\star} \end{equation} for all $\alpha$. Combining (\ref{ProofStep5Sec4Eqt2}) and (\ref{ProofStep5Sec4Eqt3}), we get that there exist $C, C^\prime > 0$ such that \begin{equation}\label{ProofStep5Sec4Eqt4} C \left(\int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g\right)^{2/2^\star} \le \int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g + C^\prime \sum_{j\not= i}\int_M\vert\tilde u_\alpha^i\vert \vert\tilde u_\alpha^j\vert dv_g \end{equation} for all $\alpha$, and all $i\not= i_0$. By H\"older's inequality, \begin{equation}\label{ProofStep5Sec4Eqt5} \begin{split} \int_M\vert\tilde u_\alpha^i\vert \vert\tilde u_\alpha^j\vert dv_g &\le \sqrt{\int_M\vert\tilde u_\alpha^i\vert^2dv_g} \sqrt{\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g}\\ &\le C\left(\int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g\right)^{1/2^\star} \sqrt{\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g} \end{split} \end{equation} for all $\alpha$, where $C > 0$ is independent of $\alpha$. 
Combining (\ref{ProofStep5Sec4Eqt4}) and (\ref{ProofStep5Sec4Eqt5}), it follows that \begin{equation}\label{ProofStep5Sec4Eqt6} \begin{split} C \left(\int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g\right)^{2/2^\star} & \le \int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g\\ &\hskip.4cm + C^\prime \left(\int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g\right)^{1/2^\star} \sqrt{\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g} \end{split} \end{equation} for all $\alpha$, and all $i\not= i_0$, where $C, C^\prime > 0$ are independent of $\alpha$ and $i$. By (\ref{ProofStep5Sec4Eqt1}) we then get that there exists $C > 0$ such that \begin{equation}\label{ProofStep5Sec4Eqt7} \left(\int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g\right)^{1/2^\star} \le C \sqrt{\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g} \end{equation} for all $\alpha$, and all $i\not= i_0$. Now, given $\delta > 0$, and $i\not= j$ arbitrary, we write that \begin{equation}\label{ProofStep5Sec4Eqt8} \int_{B_{x_0}(\delta)}\vert\tilde u_\alpha^i\vert \vert\tilde u_\alpha^j\vert dv_g \le \sqrt{\int_{B_{x_0}(\delta)}(\tilde u_\alpha^i)^2dv_g} \sqrt{\int_{B_{x_0}(\delta)}(\tilde u_\alpha^j)^2dv_g} \end{equation} for all $\alpha$, where $x_0$ is the limit of the centers of the $1$-bubble from which the $p$-bubble $({\mathcal B}_\alpha)_\alpha$ in (\ref{Step1StatemSec4Eqt1}) is defined. Since $i\not= j$, either $i\not= i_0$ or $j\not= i_0$. Suppose $j \not= i_0$. On the one hand we can write that \begin{equation}\label{ProofStep5Sec4Eqt9} \int_{B_{x_0}(\delta)}(\tilde u_\alpha^i)^2dv_g\ \le \int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g \end{equation} for all $\alpha$ and $\delta > 0$. 
On the other hand, by H\"older's inequality, we can write that \begin{eqnarray*} \int_{B_{x_0}(\delta)}(\tilde u_\alpha^j)^2dv_g & \le & \vert B_{x_0}(\delta)\vert^{\frac{2^\star-2}{2^\star}} \left(\int_{B_{x_0}(\delta)}\vert\tilde u_\alpha^j\vert^{2^\star}dv_g\right)^{2/2^\star}\\ & \le & \vert B_{x_0}(\delta)\vert^{\frac{2^\star-2}{2^\star}} \left(\int_M\vert\tilde u_\alpha^j\vert^{2^\star}dv_g\right)^{2/2^\star} \end{eqnarray*} for all $\alpha$, where $\vert B_{x_0}(\delta)\vert$ is the volume of $B_{x_0}(\delta)$ with respect to $g$. By (\ref{ProofStep5Sec4Eqt7}), since $j \not= i_0$, we then get that \begin{equation}\label{ProofStep5Sec4Eqt10} \int_{B_{x_0}(\delta)}(\tilde u_\alpha^j)^2dv_g \le C \vert B_{x_0}(\delta)\vert^{\frac{2^\star-2}{2^\star}} \int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g \end{equation} for all $\alpha$ and $\delta > 0$, where $C > 0$ is independent of $\alpha$ and $\delta$. Plugging (\ref{ProofStep5Sec4Eqt9}) and (\ref{ProofStep5Sec4Eqt10}) into (\ref{ProofStep5Sec4Eqt8}), since $\vert B_{x_0}(\delta)\vert \to 0$ as $\delta \to 0$, we get that (\ref{Step5StatemSec4Eqt1}) is true. This ends the proof of Step \ref{Step5ProofLemSec4}. \end{proof} By Steps \ref{Step1ProofLemSec4} to \ref{Step5ProofLemSec4}, we are now in position to prove (\ref{MainResToProve}), and hence to prove Lemma \ref{TheLem}. We use in the process that for any $\varepsilon > 0$, there exists $\delta_\varepsilon > 0$ such that for any smooth function $u$ with compact support in $B_{x_0}(\delta_\varepsilon)$, \begin{equation}\label{BOLoc} \left(\int_M\vert u\vert^{2^\star}dv_g\right)^{2/2^\star} \le K_n^2\int_M\vert\nabla u\vert^2dv_g + B_\varepsilon\int_Mu^2dv_g \end{equation} where $B_\varepsilon = \frac{n-2}{4(n-1)}K_n^2\left(S_g(x_0) + \varepsilon\right)$, $K_n$ is given by (\ref{SharpCstEucl}), and $S_g$ is the scalar curvature of $g$. 
Inequality (\ref{BOLoc}) is a straightforward consequence of the local isoperimetric inequality proved in Druet \cite{Dru}. Step \ref{Step6ProofLemSec4} is as follows. \begin{Step}\label{Step6ProofLemSec4} Let $\tilde{\mathcal U}_\alpha$ and $\tilde{\mathcal U}^0$ be given by (\ref{Sec4ProofEqt1}) and (\ref{Sec4ProofEqt4}). Assume that for any $i$ and any $x \in M$, \begin{equation}\label{IneqSecProof4Eqt1} A^0_{ii}(x) > \frac{n-2}{4(n-1)}S_g(x)\hskip.1cm , \end{equation} and that $n \ge 4$. Then $\tilde{\mathcal U}^0 \not\equiv 0$. In particular, (\ref{MainResToProve}) and Lemma \ref{TheLem} are true. \end{Step} \begin{proof}[Proof of Step \ref{Step6ProofLemSec4}] We proceed by contradiction and assume that $\tilde{\mathcal U}^0 \equiv 0$. We let $x_0$ be the limit of the centers of the $1$-bubble from which the $p$-bubble $({\mathcal B}_\alpha)_\alpha$ in (\ref{Step1StatemSec4Eqt1}) is defined. We fix $\varepsilon > 0$, and let $\eta$ be a smooth cutoff function such that $\eta = 1$ in $B_{x_0}(\delta_\varepsilon/4)$, $\eta = 0$ in $M\backslash B_{x_0}(\delta_\varepsilon/2)$, and $0 \le \eta \le 1$. We plug the $\eta\tilde u_\alpha^i$'s into (\ref{BOLoc}), $i = 1,\dots,p$, and then sum over $i$. 
Noting that $$\int_M\vert\nabla(\eta\tilde u_\alpha^i)\vert^2dv_g = \int_M\eta^2\tilde u_\alpha^i(\Delta_g\tilde u_\alpha^i)dv_g + \int_M\vert\nabla\eta\vert^2(\tilde u_\alpha^i)^2dv_g\hskip.1cm ,$$ and that $\vert\nabla\eta\vert = 0$ around $x_0$, we get with (\ref{Sec4ProofEqt2}) and $L^2$-concentration in Step \ref{Step4ProofLemSec4} that \begin{equation}\label{Step6Sec4ProofEqt1} \begin{split} &\sum_{i=1}^p\left(\left(\int_M\vert\eta\tilde u_\alpha^i\vert^{2^\star}dv_g\right)^{2/2^\star} - K_n^2\int_M\eta^2\vert\tilde u_\alpha^i\vert^{2^\star}dv_g\right)\\ &\le -K_n^2\sum_{i,j=1}^p \int_M\eta^2A_{ij}^\alpha\tilde u_\alpha^i\tilde u_\alpha^jdv_g + \left(B_\varepsilon + o(1)\right)\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g \end{split} \end{equation} for all $\alpha$, where $o(1) \to 0$ as $\alpha \to +\infty$, and $B_\varepsilon$ is as in (\ref{BOLoc}). By H\"older's inequality, and (\ref{Sec4ProofEqt3}), \begin{equation}\label{Step6Sec4ProofEqt2} \begin{split} \int_M\eta^2\vert\tilde u_\alpha^i\vert^{2^\star}dv_g &\le \left(\int_M\vert\eta\tilde u_\alpha^i\vert^{2^\star}dv_g\right)^{2/2^\star} \left(\int_M\vert\tilde u_\alpha^i\vert^{2^\star}dv_g\right)^{(2^\star-2)/2^\star}\\ &\le K_n^{-2}\left(\int_M\vert\eta\tilde u_\alpha^i\vert^{2^\star}dv_g\right)^{2/2^\star} \end{split} \end{equation} for all $\alpha$ and $i$. By (\ref{Step6Sec4ProofEqt2}), the left hand side in (\ref{Step6Sec4ProofEqt1}) is nonnegative. Since we also have that $A^\alpha_{ij} \to A^0_{ij}$ in $C^{0,\theta}(M)$, we can write with (\ref{Step6Sec4ProofEqt1}) that \begin{equation}\label{Step6Sec4ProofEqt3} K_n^2\sum_{i,j=1}^p \int_M\eta^2A_{ij}^0\tilde u_\alpha^i\tilde u_\alpha^jdv_g \le \left(B_\varepsilon + o(1)\right)\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g \end{equation} for all $\alpha$, where $o(1) \to 0$ as $\alpha \to +\infty$, and $B_\varepsilon$ is as in (\ref{BOLoc}). 
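For completeness, the gradient identity invoked at the beginning of the last computation follows from integration by parts. Expanding $\vert\nabla(\eta\tilde u_\alpha^i)\vert^2$ and using $\int_M\eta^2\tilde u_\alpha^i(\Delta_g\tilde u_\alpha^i)dv_g = \int_M(\nabla(\eta^2\tilde u_\alpha^i),\nabla\tilde u_\alpha^i)dv_g$, we get that \begin{eqnarray*} \int_M\vert\nabla(\eta\tilde u_\alpha^i)\vert^2dv_g & = & \int_M\eta^2\vert\nabla\tilde u_\alpha^i\vert^2dv_g + 2\int_M\eta\tilde u_\alpha^i(\nabla\eta,\nabla\tilde u_\alpha^i)dv_g + \int_M\vert\nabla\eta\vert^2(\tilde u_\alpha^i)^2dv_g\\ & = & \int_M\eta^2\tilde u_\alpha^i(\Delta_g\tilde u_\alpha^i)dv_g + \int_M\vert\nabla\eta\vert^2(\tilde u_\alpha^i)^2dv_g\hskip.1cm . \end{eqnarray*} 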
By $L^2$-concentration in Step \ref{Step4ProofLemSec4}, and the control of the off diagonal terms in Step \ref{Step5ProofLemSec4}, it follows from (\ref{Step6Sec4ProofEqt3}) that for any $\varepsilon > 0$, and any $\delta > 0$, \begin{equation}\label{Step6Sec4ProofEqt4} \begin{split} &\sum_{i=1}^p \int_M\left(A_{ii}^0(x_0) - \frac{n-2}{4(n-1)}S_g(x_0)\right)(\tilde u_\alpha^i)^2dv_g\\ &\le C \sum_{i\not= j}\int_{B_{x_0}(\delta)}\vert\tilde u_\alpha^i\vert \vert\tilde u_\alpha^j\vert dv_g + C\left(\varepsilon + o(1)\right)\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g\\ &\hskip.4cm + \sum_{i=1}^p\left(\sup_{x \in B_{x_0}(\delta)}\left\vert A^0_{ii}(x) - A^0_{ii}(x_0)\right\vert\right) \int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g\\ &\le C\left(\varepsilon + \varepsilon_\delta + o(1)\right)\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g \end{split} \end{equation} for all $\alpha$, where $o(1) \to 0$ as $\alpha \to +\infty$, $\varepsilon_\delta \to 0$ as $\delta \to 0$, and $C > 0$ does not depend on $\alpha$, $\varepsilon$, and $\delta$. By (\ref{IneqSecProof4Eqt1}) there exists $\varepsilon_0 > 0$ such that \begin{equation}\label{Step6Sec4ProofEqt5} A^0_{ii}(x_0) \ge \frac{n-2}{4(n-1)}S_g(x_0) + \varepsilon_0 \end{equation} for all $i$. Then the contradiction easily follows from (\ref{Step6Sec4ProofEqt4}) by choosing $\varepsilon > 0$ and $\delta > 0$ sufficiently small such that $C(\varepsilon + \varepsilon_\delta) \le \varepsilon_0/2$, where $C > 0$ is the constant in (\ref{Step6Sec4ProofEqt4}), and $\varepsilon_0$ is as in (\ref{Step6Sec4ProofEqt5}). This proves that for $\tilde{\mathcal U}_\alpha$ and $\tilde{\mathcal U}^0$ as in (\ref{Sec4ProofEqt1}) and (\ref{Sec4ProofEqt4}), we necessarily have that $\tilde{\mathcal U}^0 \not\equiv 0$ when we assume that $n \ge 4$ and that (\ref{IneqSecProof4Eqt1}) holds. Then, by Step \ref{Step1ProofLemSec4}, we get that (\ref{MainResToProve}) is also true. 
By standard elliptic theory, as already mentioned, this implies in turn that Lemma \ref{TheLem} is true. \end{proof} A possible extension of Lemma \ref{TheLem} is to replace the condition in the Lemma that $A^0_{ii}(x) > \frac{n-2}{4(n-1)}S_g(x)$ for all $i$ and all $x$, by the condition that for any $i$, either $A^0_{ii}(x) > \frac{n-2}{4(n-1)}S_g(x)$ for all $x$, or $A^0_{ii}(x) < \frac{n-2}{4(n-1)}S_g(x)$ for all $x$, and hence that for any $i$, and any $x$, \begin{equation}\label{Sec4FinRemEqt1} A^0_{ii}(x) \not= \frac{n-2}{4(n-1)}S_g(x)\hskip.1cm . \end{equation} If we assume that the convergence of the $A^\alpha_{ij}$'s to the $A^0_{ij}$'s is in $C^1(M)$, and that the manifold is conformally flat, we can prove, with the estimates we obtained in Steps \ref{Step1ProofLemSec4} to \ref{Step5ProofLemSec4}, that Lemma \ref{TheLem} remains true if we only assume (\ref{Sec4FinRemEqt1}). The proof, based on the Pohozaev identity instead of (\ref{BOLoc}), is as follows. We let $\tilde{\mathcal U}_\alpha$ and $\tilde{\mathcal U}^0$ be given by (\ref{Sec4ProofEqt1}) and (\ref{Sec4ProofEqt4}). We assume by contradiction that $\tilde{\mathcal U}^0 \equiv 0$, and let $x_0$ be the limit of the centers of the $1$-bubble from which the $p$-bubble $({\mathcal B}_\alpha)_\alpha$ in (\ref{Step1StatemSec4Eqt1}) is defined. Since $g$ is conformally flat, there exist $\delta_0 > 0$ and a metric $\hat g$ conformal to $g$ such that $\hat g$ is flat in $B_{x_0}(4\delta_0)$. Let $\hat g = \varphi^{4/(n-2)}g$, where $\varphi$ is smooth and positive, and $\hat u_\alpha^i = \tilde u_\alpha^i\varphi^{-1}$ for all $\alpha$ and $i$. 
By conformal invariance of the conformal Laplacian, and by (\ref{Sec4ProofEqt2}), \begin{equation}\label{Sec4FinRemEqt2} \Delta\hat u_\alpha^i + \sum_{j=1}^p\hat A^\alpha_{ij}\hat u_\alpha^j = (\hat u_\alpha^i)^{2^\star-1} \end{equation} in $B_{x_0}(4\delta_0)$ for all $i$ and all $\alpha$, where $\Delta = \Delta_{\hat g}$ is the Euclidean Laplacian, and $\varphi^{2^\star-2} \hat A^\alpha_{ij} = A^\alpha_{ij} - \frac{n-2}{4(n-1)}S_g\delta_{ij}$. The Pohozaev identity in the Euclidean space reads as \begin{equation}\label{PohozaevIdent} \begin{split} &\int_\Omega(x^k\partial_ku)\Delta u dx + \frac{n-2}{2} \int_\Omega u(\Delta u)dx\\ &= - \int_{\partial\Omega}(x^k\partial_ku)\partial_\nu ud\sigma + \frac{1}{2} \int_{\partial\Omega}(x,\nu)\vert\nabla u\vert^2d\sigma \\ &\hskip.4cm - \frac{n-2}{2}\int_{\partial\Omega}u\partial_\nu ud\sigma\hskip.1cm , \end{split} \end{equation} where $\nu$ is the outward unit normal to $\partial\Omega$, $d\sigma$ is the Euclidean volume element on $\partial\Omega$, and there is a sum over $k$ from $1$ to $n$. For $\delta > 0$ small, we let $\eta$ be a smooth cutoff function such that $\eta = 1$ in $B_{x_0}(\delta)$, $\eta = 0$ in $M\backslash B_{x_0}(2\delta)$, and $0 \le \eta \le 1$. We plug the $\eta\hat u_\alpha^i$'s into the Pohozaev identity (\ref{PohozaevIdent}) and sum over $i$. In the process, we regard the $\hat u_\alpha^i$'s, $\varphi$, $\eta$, and the $\hat A^\alpha_{ij}$'s as defined in the Euclidean space. 
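For the reader's convenience, the computation behind (\ref{Sec4FinRemEqt2}) is as follows. Writing $L_g = \Delta_g + \frac{n-2}{4(n-1)}S_g$ for the conformal Laplacian of $g$, conformal invariance gives that $L_{\hat g}(\varphi^{-1}u) = \varphi^{1-2^\star}L_gu$ for all $u$, while $L_{\hat g} = \Delta$ in $B_{x_0}(4\delta_0)$ since $\hat g$ is flat there. By (\ref{Sec4ProofEqt2}) we then get that \begin{eqnarray*} \Delta\hat u_\alpha^i & = & \varphi^{1-2^\star}\left(\Delta_g\tilde u_\alpha^i + \frac{n-2}{4(n-1)}S_g\tilde u_\alpha^i\right)\\ & = & \varphi^{1-2^\star}\vert\tilde u_\alpha^i\vert^{2^\star-2}\tilde u_\alpha^i - \sum_{j=1}^p\varphi^{2-2^\star}\left(A^\alpha_{ij} - \frac{n-2}{4(n-1)}S_g\delta_{ij}\right)\hat u_\alpha^j \end{eqnarray*} in $B_{x_0}(4\delta_0)$, and since $\varphi^{1-2^\star}\vert\tilde u_\alpha^i\vert^{2^\star-2}\tilde u_\alpha^i = \vert\hat u_\alpha^i\vert^{2^\star-2}\hat u_\alpha^i$, this is precisely (\ref{Sec4FinRemEqt2}) with $\varphi^{2^\star-2}\hat A^\alpha_{ij} = A^\alpha_{ij} - \frac{n-2}{4(n-1)}S_g\delta_{ij}$. 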
Thanks to (\ref{Sec4FinRemEqt2}), to the $C^1$-convergence of the $A^\alpha_{ij}$'s to the $A^0_{ij}$'s, and to $L^2$-concentration, coming back to the manifold, we get after lengthy (but simple) computations that \begin{equation}\label{Sec4FinRemEqt3} \begin{split} &\sum_{i,j=1}^p \int_{B_{x_0}(\delta)}\left(A_{ij}^0(x_0) - \frac{n-2}{4(n-1)}S_g(x_0)\delta_{ij}\right)\tilde u_\alpha^i\tilde u_\alpha^jdv_g\\ &= \varepsilon_\delta O\left(\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g\right) + o\left(\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g\right) \end{split} \end{equation} for all $\alpha$, where $\varepsilon_\delta > 0$ is independent of $\alpha$ and such that $\varepsilon_\delta \to 0$ as $\delta \to 0$, and where the first term in the right hand side of (\ref{Sec4FinRemEqt3}) depends on $\delta$ only through $\varepsilon_\delta$. Let $i_0 \in \{1,\dots,p\}$ be such that ${\mathcal B}_\alpha^{i_0} = B_\alpha$ for all $\alpha$, where $(B_\alpha)_\alpha$ is the $1$-bubble from which the $p$-bubble $({\mathcal B}_\alpha)_\alpha$ in (\ref{Step1StatemSec4Eqt1}) is defined. The argument we developed in the proof of Step \ref{Step5ProofLemSec4} gives that for $i \not= i_0$, \begin{equation}\label{Sec4FinRemEqt4} \int_{B_{x_0}(\delta)}(\tilde u_\alpha^i)^2dv_g \le \varepsilon_\delta \int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g \end{equation} for all $\alpha$, where $\varepsilon_\delta > 0$ is independent of $\alpha$ and such that $\varepsilon_\delta \to 0$ as $\delta \to 0$. 
Combining the off-diagonal estimates (\ref{Step5StatemSec4Eqt1}) of Step \ref{Step5ProofLemSec4}, $L^2$-concentration, (\ref{Sec4FinRemEqt3}), and (\ref{Sec4FinRemEqt4}), it follows that \begin{equation}\label{Sec4FinRemEqt5} \begin{split} &\left(A_{i_0i_0}^0(x_0) - \frac{n-2}{4(n-1)}S_g(x_0)\right)\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g\\ &= \varepsilon_\delta O\left(\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g\right) + o\left(\int_M\vert\tilde{\mathcal U}_\alpha\vert^2dv_g\right) \end{split} \end{equation} for all $\alpha$, where $\varepsilon_\delta > 0$ is independent of $\alpha$ and such that $\varepsilon_\delta \to 0$ as $\delta \to 0$, and where the first term in the right hand side of (\ref{Sec4FinRemEqt5}) depends on $\delta$ only through $\varepsilon_\delta$. Choosing $\delta > 0$ sufficiently small, and letting $\alpha \to +\infty$, we get a contradiction by combining (\ref{Sec4FinRemEqt1}) with $i = i_0$ and (\ref{Sec4FinRemEqt5}). This proves that if we assume that the convergence of the $A^\alpha_{ij}$'s to the $A^0_{ij}$'s is in $C^1(M)$, and that the manifold is conformally flat, then Lemma \ref{TheLem} remains true if we replace the condition $A^0_{ii}(x) > \frac{n-2}{4(n-1)}S_g(x)$ for all $i$ and all $x$, by the condition that $A^0_{ii}(x) \not= \frac{n-2}{4(n-1)}S_g(x)$ for all $i$ and all $x$. \medskip\noindent{\bf Acknowledgements:} The author is indebted to Olivier Druet and Fr\'ed\'eric Robert for their valuable comments on the manuscript.